
Robust camera pose estimation by viewpoint classification using deep learning

Yoshikatsu Nakajima1, Hideo Saito1

Computational Visual Media, 2017, No. 2
Camera pose estimation with respect to target scenes is an important technology for superimposing virtual information in augmented reality (AR). However, it is difficult to estimate the camera pose for all possible view angles because feature descriptors such as SIFT are not completely invariant from every perspective. We propose a novel method of robust camera pose estimation using multiple feature descriptor databases generated for each partitioned viewpoint, in which the feature descriptor of each keypoint is almost invariant. Our method estimates the viewpoint class for each input image using deep learning based on a set of training images prepared for each viewpoint class. We give two ways to prepare these images for deep learning and generating databases. In the first method, images are generated using a projection matrix to ensure robust learning in a range of environments with changing backgrounds. The second method uses real images to learn a given environment around a planar pattern. Our evaluation results confirm that our approach increases the number of correct matches and the accuracy of camera pose estimation compared to the conventional method.

Keywords: pose estimation; augmented reality (AR); deep learning; convolutional neural network

    1 Introduction

Since the augmented reality (AR) toolkit [1] introduced the superimposition of virtual information onto planar patterns in images by real-time estimation of camera pose, markerless camera-tracking technologies have become mainstream [2, 3]. Markerless tracking needs to find points of correspondence between the input image and the planar pattern for any camera pose.

Lowe's SIFT [4] is one of the most famous algorithms in computer vision for detecting keypoints and describing local features in images. SIFT detects keypoints using differences of Gaussians to approximate a Laplacian of Gaussian filter and describes them using a 128-dimensional feature vector. Then, keypoint correspondences are obtained using Euclidean distances between feature vectors. Although SIFT is robust in the face of scaling and rotation [5], when the input image is distorted due to projection distortion of the planar pattern, we cannot find keypoint correspondences. Randomised trees (RT) [6] alleviate this problem by training a variety of descriptors for each keypoint using affine transformations, and generating a tree structure [7] based on the resulting brightness values, for real-time recognition of keypoint identity. Viewpoint generative learning (VGL), developed by Yoshida et al. [8], extends this idea to train various descriptors for every keypoint by generating images as if they were taken from various viewpoints using a projection transformation, and generating a database of keypoints and features from the images.

However, methods based on training feature descriptors of keypoints, such as RT and VGL, trade robustness of the various descriptors against computation time when searching for matched keypoints. For example, VGL compresses the database of training descriptors using k-means clustering [9] for fast search, but this sometimes results in wrong keypoint matching, especially when the camera angle is shallow. Because feature descriptors of keypoints change significantly at a shallow angle, weak compression of the database is required to allow such shallow camera angles, but this increases the computation for keypoint search.

In this paper, we propose a novel method for camera pose estimation based on two-stage keypoint matching to solve this trade-off problem. The first stage is viewpoint classification using a convolutional neural network (CNN) [10–12], so that the feature descriptors of every keypoint are similar within the classified viewpoint. The second stage is camera pose estimation based on accurate keypoint matching, which is achieved within the classified viewpoint using a nearest neighbor (NN) search for the descriptor. To achieve this two-stage camera pose estimation, in pre-processing, our method generates the uncompressed descriptor databases of a planar pattern for each partitioned viewpoint, including shallow angles, and trains a CNN to classify the viewpoint of the input image.

A CNN can perform stable classification against variations of a property for the same class by learning from a large amount of data with variations for each class. For instance, object recognition that is stable under viewpoint changes can be performed by learning from many images taken from various viewpoints for each object class [13]. This stable performance against viewpoint change is not achieved just by the structure of the CNN, but through the capability of a CNN to learn from variable data for each class. For example, Agrawal et al. [14] applied a CNN to estimate egomotion by constructing a network model with two inputs comprising two images whose viewpoints slightly differ. In this paper, we apply a CNN to viewpoint classification for a single object.

Additional reasons for using a CNN for viewpoint classification are as follows. Firstly, a CNN is robust to occlusion. This is very important, as it widens the range of applications. Secondly, the computation time is unchanged as the number of viewpoint classes increases, enabling us to easily analyze the trade-off relationship between accuracy and database size.

We introduce two methods for generating a database and preparing the images for deep learning, under the assumption that these methods will be used in different ways. The first is robust across a range of environments and is used, for example, to initialize camera pose estimation. The second method learns the entire environment around the planar pattern and is used in a learned environment.

The NN search in the second stage is not time-consuming, because little variety is necessary in the descriptors within the viewpoint classified in the first stage. The camera pose of the input image is computed based on correspondences between matched keypoints.

    2 Method

Figure 1 shows the flow of our proposed method, which consists of three parts. The first part generates databases of features for every viewpoint class, which are partitioned by viewing angle from the entire viewing angle range (−90° < θ < 90°, −180° < φ < 180°) with respect to a target planar pattern, as shown in Fig. 2. The second part trains the CNN to classify the viewpoint of the input image. The last part estimates the camera pose of the input image. We now explain each part in detail.

In particular, during database generation (Section 2.1) and CNN training (Section 2.2), we use two methods to prepare images for database generation and deep learning by the CNN, with the assumption that these methods are to be used in different ways.

Fig. 1 Flow of the proposed method. Top: database generation; middle: deep learning by the CNN; bottom: camera pose estimation.

Fig. 2 Generating databases: viewpoint class, virtual camera, and angle definitions.

The first method uses only one image of the planar pattern and generates many images by use of projection matrices (P matrices). This reduces the learning cost because it only uses a single image and P matrices. Moreover, this enables the CNN to be robust to changes in the background of the input image, because we can vary the backgrounds of the images generated for deep learning. However, viewpoint class estimation will not be extremely accurate, because the CNN only uses the appearance of the planar pattern in the input image, so this first method is not suitable for movies. On the other hand, it can manage shallow angles better than the conventional method, so it is useful for the initialization of the camera pose, etc. From now on, we call this method learning based on generated viewpoints.

The second method uses real images, obtained by fixing the planar pattern within the environment and taking pictures with a camera. The CNN can learn not only the appearance of the planar pattern but also the environment around it, including the background, the lighting, and so on. Therefore, the viewpoint class of the input image can be estimated with almost perfect precision, so this method is suitable for movies. However, the CNN can only be used in the environment in which the planar pattern was fixed when the images for deep learning were taken. In contrast to the first method, we call this method learning based on example viewpoints.

    2.1 Database generation

In this part, we generate one feature database per viewpoint class. Each database is generated from one image because features sampled from a certain viewpoint are almost identical within the viewpoint class, so one image is enough. As mentioned in the introduction, we use two methods for preparing images for database generation. Firstly, we will explain the method using one image and a P matrix, which is robust in various environments. Secondly, we will explain the method using real images taken by a camera, which is more robust in the particular environment in which the pre-processing is performed. This flow is shown in the upper part of Fig. 1.

    2.1.1 Learning based on generated viewpoints

Firstly, we partition the entire range of viewing angles of the camera's viewpoint with respect to a target pattern. We call each partitioned viewpoint a viewpoint class (see Fig. 2). Secondly, we compute the projection matrices that transform the frontal image to images which appear to have been taken from the center of each viewpoint class, using Eq. (1). From now on, we will denote the number of viewpoint classes by N, the viewpoint classes by Vi (i = 1, ..., N), and the projection matrix for each viewpoint class Vi by Pi. In Eq. (1), let the intrinsic parameters of the virtual camera, the rotation matrix for viewpoint class Vi, and the translation vector be A, Ri, and t, respectively. The matrix Ri is given by Eq. (2), using θ, φ, and ψ defined as in Fig. 2.
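
A standard form of these equations, given the definitions above, is the pinhole projection below; note that the Euler-angle order in Eq. (2) is one common convention and is our assumption here, since the axis definitions of Fig. 2 are not reproduced:

    Pi = A [ Ri | t ]    (1)
    Ri = Rz(ψ) Rx(θ) Rz(φ)    (2)

where Rz(·) and Rx(·) denote rotations about the z- and x-axes, respectively.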

Using the projection matrix Pi, we obtain an image Ii for each viewpoint class Vi. Next, we detect keypoints and describe their local features for each image Ii, using the appropriate algorithm. We denote the number of detected keypoints by Mi, each keypoint by pij, and each feature by dij (j = 1, ..., Mi). Then we compute a homography matrix Hi that transforms the image Ii, which represents the viewpoint class Vi, to the frontal image. We also generate the database in which the described features dij and their coordinates p̃ij in the frontal image are stored. The coordinates p̃ij are found by transforming the coordinates of each detected keypoint pij to the frontal image using the relation p̃ij ≃ Hi pij (in homogeneous coordinates). By performing this process on all images that represent each viewpoint class, we obtain one uncompressed descriptor database per viewpoint class.
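
To make this step concrete, here is a minimal sketch using OpenCV; the function name and `G_i` (the frontal-to-view warp that Pi induces on the pattern plane) are our own illustrative choices, not the paper's:

```python
import cv2
import numpy as np

def build_viewpoint_database(frontal, G_i):
    """Generate the view image I_i with the plane-induced homography G_i,
    then store each SIFT descriptor with its frontal-image coordinates."""
    h, w = frontal.shape[:2]
    I_i = cv2.warpPerspective(frontal, G_i, (w, h))

    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(I_i, None)

    # H_i maps coordinates in I_i back to the frontal image.
    H_i = np.linalg.inv(G_i)
    pts = np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)
    frontal_pts = cv2.perspectiveTransform(pts, H_i).reshape(-1, 2)

    # Uncompressed database: every descriptor paired with frontal coordinates.
    return descriptors, frontal_pts
```

Because the pattern is planar, the restriction of Pi to the pattern plane is itself a 3x3 homography, so Hi is simply the inverse of the warp used to generate Ii.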

    2.1.2 Learning based on example viewpoints

For this method, we first use the camera to take images of the planar pattern, which is fixed in the environment, from multiple viewpoints. Next, for each image Ii, we compute a homography matrix Hi that transforms the image Ii to the frontal image. In this computation, we use four points whose coordinates in the frontal image are easily determined, such as corners. Equation (3) can be used to compute the homography matrix Hi:

    (x̃, ỹ, 1)T ≃ Hi (x, y, 1)T    (3)

Here, we denote the coordinates in the frontal image of the planar pattern by (x̃, ỹ, 1)T and the coordinates in the taken image by (x, y, 1)T. Then, we detect keypoints and describe their local features in the image Ii using the appropriate algorithm. We denote the number of detected keypoints by Mi, each keypoint by pij, and each feature by dij (j = 1, ..., Mi). The keypoint pij can be projected into p̃ij, which represents its coordinates in the frontal image, using p̃ij ≃ Hi pij. Finally, we generate the database for each image; multiple sets of p̃ij and dij are stored. By judging whether the coordinates of p̃ij are on the planar pattern or not, we can eliminate features belonging to the environment when we store features belonging to the planar pattern in the database.
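
As a brief illustration, OpenCV's `cv2.getPerspectiveTransform` computes Hi directly from the four corner correspondences; the coordinate values below are placeholders, not from the paper:

```python
import cv2
import numpy as np

# Four corner correspondences: corner positions in the taken image I_i and
# their known positions in the frontal image (values are placeholders).
corners_taken   = np.float32([[412, 105], [1180, 168], [1105, 842], [366, 760]])
corners_frontal = np.float32([[0, 0], [640, 0], [640, 480], [0, 480]])

# H_i maps taken-image coordinates to frontal-image coordinates (Eq. (3)).
H_i = cv2.getPerspectiveTransform(corners_taken, corners_frontal)
```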

    2.2 Deep learning by the CNN

We train a CNN for the purpose of classifying the viewpoint of the input image. A CNN is a deep neural network mainly used for object recognition. We apply a CNN to viewpoint classification of a single planar pattern. In this step, we only use images, and do not use features, for deep learning, because we employ a CNN that only receives images as input. As with database generation, we will explain the two methods of preparing images for deep learning. However, the deep learning explained below should use the same method as that used for the database generation step. This process is illustrated in the middle row of Fig. 1.

    2.2.1 Learning based on generated viewpoints

Firstly, we generate multiple images for each viewpoint class Vi using Eq. (1). Then we randomly change the background of every image and the position and scale of the planar pattern. By using these images for deep learning, the weight of the background part is reduced and the CNN can classify the viewpoint robustly. Here, we employ a softmax function as the activation function of the output layer and make its number of units coincide with the number of viewpoint classes; this is the CNN design recommended for classification problems. Finally, we perform deep learning by teaching the CNN the correct viewpoint class for each generated image using the techniques of back-propagation [15], pre-training [16], and drop-out [17]. In general, preparing images for training is a problem for deep learning, but our method uses images synthesized from a single planar pattern, enabling us to reduce the learning cost.
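
The paper trains a network-in-network model in Chainer (see Section 3.1); as an illustrative stand-in, the sketch below uses PyTorch and a deliberately small network to show the recommended design: an output layer with one unit per viewpoint class, softmax (fused into the cross-entropy loss), drop-out, and back-propagation. The layer sizes are our assumptions, and pre-training is omitted:

```python
import torch
import torch.nn as nn

N_CLASSES = 36  # one output unit per viewpoint class (36 in Section 3.1.1)

# Small stand-in classifier; the paper uses Network-in-Network (NIN),
# whose exact configuration is not reproduced here.
model = nn.Sequential(
    nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Dropout(0.5),                 # drop-out, as in the paper
    nn.Linear(64, N_CLASSES),        # softmax is fused into the loss below
)

loss_fn = nn.CrossEntropyLoss()      # cross-entropy over softmax outputs
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

def train_step(images, labels):
    """One back-propagation step on a batch of generated images whose
    labels are the viewpoint classes they were rendered from."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```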

    2.2.2 Learning based on example viewpoints

For each image Ii, we take multiple images of the planar pattern for deep learning from the same viewpoint as that used for image Ii. We then vary the scale and the rotation to help ensure that the CNN is robust. Deep learning is performed as in Section 2.2.1, i.e., we employ a softmax function in the output layer, we make its number of units coincide with the number of viewpoint classes, and we teach the CNN the correct viewpoint class for every image.

    2.3 Camera pose estimation

In this section, we explain the details of camera pose estimation given the input image. This process is shown at the bottom of Fig. 1. We detect keypoints and describe their local features in the image using the same algorithm as that used to generate the databases. Next, we input the image to the CNN, which has been tuned by deep learning. Because the activation function of the output layer is a softmax function, the output percentages inform us which viewpoint class the image belongs to (see Fig. 3). We select the viewpoint class with the highest percentage and compare keypoints in the database for that viewpoint class with keypoints in the input image in terms of the Euclidean distance of their feature descriptors. Then we search for the nearest keypoint and the next nearest, as suggested by Mikolajczyk et al. [18], so that we can use their ratio to reduce mismatches between keypoints. Only when the Euclidean distance to the nearest keypoint is sufficiently smaller than the Euclidean distance to the second one is there a match. Thus, DA and DB are matched only when Eq. (4) is satisfied:

    ‖DA − DB‖ < t ‖DA − DC‖    (4)

Fig. 3 Viewpoint class estimation using the CNN.

Here, DA, DB, and DC represent the feature descriptor of the input image, the feature descriptor of the nearest keypoint in the database, and the feature descriptor of the second nearest, respectively. If we set the threshold t large, the number of matches increases, as does the number of mismatches; conversely, if we set the threshold t small, the number of matches is reduced, as is the number of mismatches.
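
A minimal sketch of this ratio test with OpenCV's brute-force matcher follows; `des_query` and `des_db` are assumed SIFT descriptor arrays, and the threshold t = 0.7 is our assumption, as its value is not reported here:

```python
import cv2

# des_query: SIFT descriptors from the input image;
# des_db: descriptors from the database of the selected viewpoint class.
matcher = cv2.BFMatcher(cv2.NORM_L2)
t = 0.7  # threshold in Eq. (4); value assumed

good = []
for m, n in matcher.knnMatch(des_query, des_db, k=2):
    # Keep the match only if the nearest neighbour is sufficiently
    # closer than the second nearest (Eq. (4)).
    if m.distance < t * n.distance:
        good.append(m)
```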

By matching keypoints between the database and the input image, we can obtain corresponding points in the input image and the frontal image, as feature descriptors and their coordinates in the frontal image are stored in each database. After mismatches are reduced by RANSAC [19], we estimate the camera pose of the input image by computing the homography that transforms the frontal image to the input image using the coordinates of those corresponding points.
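
With OpenCV, this step might look as follows; `frontal_pts` and `image_pts` are the matched coordinates from the previous step, and the 5.0-pixel RANSAC re-projection threshold is our assumption:

```python
import cv2
import numpy as np

# frontal_pts[k] and image_pts[k] are matched coordinates in the frontal
# image (from the database) and in the input image, respectively.
src = np.float32(frontal_pts).reshape(-1, 1, 2)
dst = np.float32(image_pts).reshape(-1, 1, 2)

# RANSAC rejects remaining mismatches while fitting the homography that
# maps the frontal image into the input image.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
```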

    3 Experimental evaluation

In this section, we demonstrate the validity of our method through experiments. In Section 2, we introduced two methods of preparing images for database generation and deep learning. Because those methods have different uses, we evaluate them with different datasets. We use VGL [8] as a basis for comparison. Conventional methods of camera pose estimation with a CNN are typified by PoseNet, as described by Kendall et al.; however, such a method does not use SIFT-like point-based features, while VGL does use point-based matching. Furthermore, VGL is more robust than other conventional methods that use point-based matching, like ASIFT [20] and random ferns [21]. Thus, we compare our method to VGL.

    3.1 Experimental setup

The evaluation environment was as follows. CPU: Intel Core i7-4770K 3.5 GHz; GPU: GeForce GTX 760; RAM: 16 GB. The definition of viewpoint class and the datasets are different for the two methods, and will be explained separately. The deep learning framework used in this evaluation experiment was Chainer [22].

    3.1.1 Learning based on generated viewpoints

For this method, we defined the viewpoint classes Vi by splitting the viewpoints for observing the planar pattern as shown in Table 1. As features change more at a shallow angle, we subdivided the viewpoint more finely as the angle θ increased. Thus, the number of viewpoint classes was 4 + 8 + 12 + 12 = 36 in this experiment, as illustrated by the sketch below.
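
For illustration, the sketch below maps an angle pair (θ, φ) to a class index under this 4 + 8 + 12 + 12 partition. The θ band boundaries are our assumptions, since Table 1 itself is not reproduced here:

```python
import numpy as np

# Band boundaries in theta are assumptions (Table 1 is not reproduced).
# Each band is split evenly in phi, giving 4 + 8 + 12 + 12 = 36 classes.
THETA_EDGES = [0, 30, 50, 70, 90]   # degrees (assumed)
PHI_BINS    = [4, 8, 12, 12]        # finer splits at shallower angles

def viewpoint_class(theta, phi):
    """Map a camera direction (theta, phi) in degrees to a class index."""
    band = int(np.searchsorted(THETA_EDGES, theta, side="right")) - 1
    band = min(max(band, 0), len(PHI_BINS) - 1)
    bins = PHI_BINS[band]
    phi_bin = int((phi + 180) % 360 // (360 / bins))
    return sum(PHI_BINS[:band]) + phi_bin
```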

As for ψ, because we use rotation-invariant features such as SIFT, we obtain many keypoint matches between the input image and the database for every value of the camera pose angle ψ of the input image.

Next, we generated images Ii to represent each viewpoint class Vi using Eq. (1), with the angles θ and φ at the center of each viewpoint class Vi. We used SIFT to detect keypoints pij and describe their local features dij. We used network-in-network (NIN) [23] to constitute the CNN. NIN is useful for reducing classification time by reducing the number of parameters while maintaining high accuracy. To tune the parameters of the CNN by deep learning, we generated about three thousand images for each viewpoint class Vi. The background images were prepared by capturing each frame from a movie taken indoors. Furthermore, we randomly changed the radius of the sphere (see Fig. 2) and the angle ψ when we generated the images for deep learning. Doing so allows estimation of the viewpoint class with the trained CNN even if the camera distance and the camera orientation of the input image change.
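
A sketch of this image synthesis follows; the warp `G_i`, the jitter ranges, and the compositing approach are our illustrative choices (both images are assumed to be color):

```python
import random
import cv2
import numpy as np

def synthesize_training_image(frontal, background, G_i):
    """Composite the warped planar pattern over a random background with a
    random position and scale, to produce one CNN training image."""
    h, w = background.shape[:2]
    scale = random.uniform(0.5, 1.0)   # random camera distance (assumed range)
    tx = random.randint(0, w // 4)     # random position (assumed range)
    ty = random.randint(0, h // 4)

    # Combine the viewpoint warp with the random scale/translation.
    S = np.float32([[scale, 0, tx], [0, scale, ty], [0, 0, 1]])
    M = S @ G_i

    warped = cv2.warpPerspective(frontal, M, (w, h))
    mask = cv2.warpPerspective(np.ones(frontal.shape[:2], np.uint8), M, (w, h))

    out = background.copy()
    out[mask > 0] = warped[mask > 0]   # paste the pattern onto the background
    return out
```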

    Table 1 Viewpoint class definition

Fig. 4 Viewpoint class and camera pose estimation results using our method and VGL [8], on evaluation images. Left: estimated viewpoint class; center: our method; right: VGL [8].

Fig. 5 Number of keypoint matches.

We prepared 71 images of the planar pattern, including ones taken from a shallow angle and ones in which the planar pattern was occluded. Using those images, we compared the accuracy of camera pose estimation, the number of correct matches, and the processing time with the corresponding values for VGL. For VGL, we generated a database using the same images as in our method and set the number of clusters to five and the number of stable keypoints to 2000.

    3.1.2 Learning based on example viewpoints

In this method, we took 22 images Ii (N = 22) of a planar pattern from multiple viewpoints after fixing the planar pattern onto a desk. These images define the viewpoint classes Vi (see Fig. 2), so there were 22 viewpoint classes in this experiment. Using those images Ii, we generated 22 feature databases containing the frontal-image coordinates p̃ij of all detected keypoints and their local features dij. We employed SIFT as the keypoint detector and feature descriptor, and used the coordinates of four corners to compute the Hi used to transform coordinates pij to coordinates p̃ij. We again employed NIN as the network model for the CNN. Next, we generated about 600 images for each viewpoint class Vi by clipping every frame of movies that we took from around the viewpoint of each of the 22 images Ii. By teaching the correct viewpoint class for every prepared image to the CNN using deep learning, the CNN became able to estimate the viewpoint class for each input image. Again in this method, we randomly changed the camera distance and the angle ψ when we prepared the images for deep learning, to make the CNN robust to changes in scale and rotation.

For the evaluation experiment, we prepared a movie of the fixed planar pattern, including frames taken from a shallow angle, in the same environment as the one used for database generation and image preparation for deep learning.

In this experiment, we evaluated the estimated camera pose via the re-projection error of the corners of the planar pattern. Denoting the coordinates of the corners observed in the test image by Pk, and the coordinates of the corners re-projected using the estimated homography H by Qk, the re-projection error E is given by the following equation:

    E = (1/4) Σk=1..4 ‖Pk − Qk‖    (5)

E represents the average Euclidean distance over the four corners between the ground-truth coordinates and the estimated coordinates; a smaller E indicates a more accurate camera pose estimate.
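
Computed with OpenCV, Eq. (5) might look as follows (function and variable names are ours):

```python
import cv2
import numpy as np

def reprojection_error(H, corners_frontal, corners_ground_truth):
    """Average Euclidean distance (Eq. (5)) between hand-labelled corners P_k
    and corners Q_k re-projected by the estimated homography H."""
    pts = np.float32(corners_frontal).reshape(-1, 1, 2)
    Q = cv2.perspectiveTransform(pts, H).reshape(-1, 2)
    P = np.float32(corners_ground_truth)
    return float(np.mean(np.linalg.norm(P - Q, axis=1)))
```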

To compare VGL with this method, we generated a database with the same 22 images used for our method and set the number of clusters to five and the number of stable keypoints to 2000.

    3.2 Results

    We now describe the results of the experimental evaluation of each method.

    3.2.1 Learning based on generated viewpoints

Figure 4 shows the results of viewpoint class estimation by the CNN, and camera pose estimation using our method and VGL. The left image indicates the viewpoint class estimated by the CNN for the input image, the center image shows the result of camera pose estimation by our method, and the right image shows the result from VGL. We visualize camera pose estimation by re-projecting the coordinates of the four corners of the frontal image using the computed homography and connecting them with red lines. Images without red lines lacked sufficient matches to compute the homography. Figure 5 gives the number of keypoint matches used to compute the homography for each of the 71 images.

As Fig. 4 shows, our method estimated camera pose more robustly for shallow angles than the conventional method. Figure 5 shows that the number of matches was higher for our method than for the conventional method, for almost all images. Because our method matches keypoints between the input image and a database that was generated using an image similar to the input image, matching was more accurate with our method. Although the planar pattern is occluded in some images in Fig. 4, our method estimated the viewpoint class and camera pose accurately. Because deep CNNs give robust results in the presence of occlusion [13], and the uncompressed descriptor databases of the planar pattern are generated for each viewpoint class, our method was robust to occlusion.

Regarding the accuracy of viewpoint classification, 7 of the 71 images were incorrectly classified. However, 4 of these 7 images were classified into an adjacent viewpoint class, so keypoint matching still worked well enough, and the camera pose was estimated reasonably precisely. Features in an adjacent viewpoint class are similar to features in the correct one, since the database for the adjacent viewpoint class is generated from an image taken from a viewpoint next to the correct viewpoint. This allows the second step of accurate localization to still have a chance to correct the errors, making the algorithm robust. In contrast, 3 of the 7 images were classified into a completely different viewpoint class, so camera pose estimation failed. Overall, viewpoint class estimation was about 90% accurate, because the CNN only uses the appearance of the planar pattern in the input image. Therefore, this method is not suitable for movies. On the other hand, it copes with shallow angles (see Fig. 4), so it is useful for initialization of the camera pose and similar tasks. Furthermore, when using our method in applications, we can easily combine it with a conventional tracking method. By doing so, we can estimate the camera pose continuously while coping with shallow angles.

Next, we consider processing time. Table 2 shows the average processing time of our method and VGL over all images, for each stage of camera pose estimation and in total.

The overhead for viewpoint class estimation in our method is small. Detecting keypoints and describing feature descriptors using SIFT account for most of the processing time. We could easily apply our method to a binary descriptor algorithm like AKAZE [24], because our method generates uncompressed descriptor databases. Thus, we could reduce the processing time spent on detecting keypoints and describing features.

Table 2 Average time spent on each processing stage (unit: ms)

    3.2.2 Learning based on example viewpoints

Figure 6 shows some results of camera pose estimation using our method and VGL, for a movie that we prepared for this evaluation. The camera pose was estimated using the method described in Section 3.2.1. Figure 7 shows the re-projection error computed with Eq. (5) for each frame of the movie. The ground-truth coordinates of the corners were detected manually. Figure 8 shows the number of keypoint matches between the input image and the database that were used for computation of the homography.

As shown in Fig. 6, this method also estimated camera pose for shallow angles more robustly than the conventional method. In Fig. 8, the number of matches fluctuates because the database used for keypoint matching was changed by the CNN every few frames. As shown by Figs. 6–8, the accuracy of camera pose estimation using VGL decreased for shallow angles, where the features change drastically, because VGL compresses features using k-means for fast computation. On the other hand, our method estimates the camera pose more robustly, because the database that contains all features sampled from images similar to the input image is appropriately selected by the CNN.

Fig. 6 Results of camera pose estimation using our method and VGL [8] on some evaluation frames. Left: our method; right: VGL [8].

Fig. 7 Re-projection errors.

Fig. 8 Number of matches.

With this method, viewpoint class estimation accuracy is almost 100% (see Fig. 7), because the CNN can learn not only the appearance of the planar pattern but also the environment around it: the background, the lighting, and so on. However, the CNN can only be used in the same environment as the one in which the planar pattern was fixed and deep learning was performed.

We next discuss processing time. Figure 9 shows the processing frame rate. Again, the overhead for viewpoint class estimation in the proposed method is sufficiently small.

    3.2.3 Number of viewpoint classes

The number of viewpoint classes affects the results of camera pose estimation and the size of the databases. Therefore, we generated 100 test images with homographies and evaluated how the number of viewpoint classes affected the results for the method of learning based on generated viewpoints. Table 3 shows the re-projection error calculated by Eq. (5) and the database size when changing the number of viewpoint classes.

The re-projection error decreases with an increasing number of viewpoint classes, since the input image and the matching image in the database become closer by splitting the viewpoint more finely. However, the size of the database also increases, as the number of generated databases increases. Thus, accuracy and size must be traded off according to the particular application.

    4 Conclusions

Fig. 9 Frame rate.

    Table 3 Re-projection error and database size with respect to the number of viewpoint classes

We have proposed a method for robust camera pose estimation using uncompressed descriptor databases generated for each viewpoint class. Our method classifies the viewpoint of each input image using a CNN trained by deep learning, so that keypoints of the input image can be matched almost perfectly with the database. We gave two ways of generating these databases and preparing the images for deep learning. These methods have different applications. The first is robust in a changing environment, while the second allows the CNN to learn the entire environment around the planar pattern. We have experimentally confirmed that the number of keypoint matches was higher, and the accuracy of camera pose estimation better, than with a conventional method.

The application of our method to three-dimensional objects is future work.

References

[1] Kato, H.; Billinghurst, M. Marker tracking and HMD calibration for a video-based augmented reality conferencing system. In: Proceedings of the 2nd IEEE and ACM International Workshop on Augmented Reality, 85–94, 1999.

[2] Lee, T.; Hollerer, T. Hybrid feature tracking and user interaction for markerless augmented reality. In: Proceedings of IEEE Virtual Reality Conference, 145–152, 2008.

[3] Maidi, M.; Preda, M.; Le, V. H. Markerless tracking for mobile augmented reality. In: Proceedings of IEEE International Conference on Signal and Image Processing Applications, 301–306, 2011.

[4] Lowe, D. G. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision Vol. 60, No. 2, 91–110, 2004.

[5] Mikolajczyk, K.; Schmid, C. A performance evaluation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 27, No. 10, 1615–1630, 2005.

[6] Lepetit, V.; Fua, P. Keypoint recognition using randomized trees. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 28, No. 9, 1465–1479, 2006.

[7] Breiman, L. Random forests. Machine Learning Vol. 45, No. 1, 5–32, 2001.

[8] Yoshida, T.; Saito, H.; Shimizu, M.; Taguchi, A. Stable keypoint recognition using viewpoint generative learning. In: Proceedings of the International Conference on Computer Vision Theory and Applications, Vol. 2, 310–315, 2013.

[9] Hartigan, J. A.; Wong, M. A. Algorithm AS 136: A k-means clustering algorithm. Journal of the Royal Statistical Society, Series C (Applied Statistics) Vol. 28, No. 1, 100–108, 1979.

[10] Fukushima, K.; Miyake, S. Neocognitron: A new algorithm for pattern recognition tolerant of deformations and shifts in position. Pattern Recognition Vol. 15, No. 6, 455–469, 1982.

[11] Hubel, D. H.; Wiesel, T. N. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology Vol. 160, No. 1, 106–154, 1962.

[12] LeCun, Y.; Boser, B.; Denker, J. S.; Henderson, D.; Howard, R. E.; Hubbard, W.; Jackel, L. D. Backpropagation applied to handwritten zip code recognition. Neural Computation Vol. 1, No. 4, 541–551, 1989.

[13] Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; Berg, A. C.; Fei-Fei, L. ImageNet large scale visual recognition challenge. International Journal of Computer Vision Vol. 115, No. 3, 211–252, 2015.

[14] Agrawal, P.; Carreira, J.; Malik, J. Learning to see by moving. In: Proceedings of IEEE International Conference on Computer Vision, 37–45, 2015.

[15] Rumelhart, D. E.; Hinton, G. E.; Williams, R. J. Learning representations by back-propagating errors. Nature Vol. 323, 533–536, 1986.

[16] Hinton, G. E.; Srivastava, N.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.

[17] Krizhevsky, A.; Sutskever, I.; Hinton, G. E. ImageNet classification with deep convolutional neural networks. In: Proceedings of Advances in Neural Information Processing Systems, 1097–1105, 2012.

[18] Mikolajczyk, K.; Tuytelaars, T.; Schmid, C.; Zisserman, A.; Matas, J.; Schaffalitzky, F.; Kadir, T.; Van Gool, L. A comparison of affine region detectors. International Journal of Computer Vision Vol. 65, No. 1, 43–72, 2005.

[19] Fischler, M. A.; Bolles, R. C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM Vol. 24, No. 6, 381–395, 1981.

[20] Yu, G.; Morel, J.-M. ASIFT: An algorithm for fully affine invariant comparison. Image Processing On Line Vol. 1, 1–28, 2011.

[21] Ozuysal, M.; Calonder, M.; Lepetit, V.; Fua, P. Fast keypoint recognition using random ferns. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 32, No. 3, 448–461, 2009.

[22] Tokui, S.; Oono, K.; Hido, S.; Clayton, J. Chainer: A next-generation open source framework for deep learning. In: Proceedings of the Workshop on Machine Learning Systems (LearningSys) at the 29th Annual Conference on Neural Information Processing Systems, 2015.

[23] Lin, M.; Chen, Q.; Yan, S. Network in network. arXiv preprint arXiv:1312.4400, 2013.

[24] Alcantarilla, P. F.; Nuevo, J.; Bartoli, A. Fast explicit diffusion for accelerated features in nonlinear scale spaces. In: Proceedings of the British Machine Vision Conference, 13.1–13.11, 2013.

Hideo Saito received his Ph.D. degree in electrical engineering from Keio University, Japan, in 1992. Since then, he has been on the Faculty of Science and Technology, Keio University. From 1997 to 1999, he joined the Virtualized Reality Project in the Robotics Institute, Carnegie Mellon University, as a visiting researcher. Since 2006, he has been a full professor in the Department of Information and Computer Science, Keio University. His recent activities for academic conferences include being Program Chair of ACCV 2014, a General Chair of ISMAR 2015, and a Program Chair of ISMAR 2016. His research interests include computer vision and pattern recognition, and their applications to augmented reality, virtual reality, and human robotics interaction.

Open Access The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.

Yoshikatsu Nakajima received his B.E. degree in information and computer science from Keio University, Japan, in 2016. Since 2016, he has been a master student in the Department of Science and Technology at Keio University, Japan. His research interests include augmented reality, SLAM, object recognition, and computer vision.

1 Department of Science and Technology, Keio University, Japan. E-mail: Y. Nakajima, nakajima@hvrl.ics.keio.ac.jp; H. Saito, saito@hvrl.ics.keio.ac.jp.

Manuscript received: 2016-07-25; accepted: 2016-11-13.
