
Feature mapping space and sample determination for person re-identification

High Technology Letters, 2022, No. 3

    HOU Wei(侯 巍), HU Zhentao, LIU Xianxing, SHI Changsen

    (School of Artificial Intelligence, Henan University, Zhengzhou 450046, P.R.China)

Abstract Person re-identification (Re-ID) is integral to intelligent monitoring systems. However, variations in viewing angle and illumination easily cause visual ambiguities, which affect the accuracy of person re-identification. An approach for person re-identification based on feature mapping space and sample determination is proposed. First, a weight fusion model, combining the mean and maximum value of the horizontal occurrence of local features, is introduced into the mapping space to optimize local features. Then, a Gaussian distribution model with hierarchical mean and covariance of pixel features is introduced to enhance feature expression. Finally, considering the influence of sample size on metric learning performance, an appropriate metric learning method is selected by the sample determination method to further improve the performance of person re-identification. Experimental results on the VIPeR, PRID450S and CUHK01 datasets demonstrate that the proposed method outperforms the traditional methods.

Key words: person re-identification (Re-ID), mapping space, feature optimization, sample determination

    0 Introduction

The purpose of person re-identification (Re-ID) is to match the same person across different camera views[1]. Person Re-ID is a key component of video surveillance, which is of great significance in security monitoring, person search and criminal investigation. Although great progress has been made in person Re-ID, many problems remain to be solved due to the existence of visual ambiguities.

The visual ambiguities brought by changes in viewpoint and illumination manifest in person images as large changes in the scale and background of the same person, which can significantly degrade the performance of a person Re-ID system. To overcome this limitation, studies have tried to exploit local information and information discrimination[2-3]. Properly utilizing the information in person images and discriminating it better can effectively improve the performance of person Re-ID. The related studies in person Re-ID can be generally classified into two types: feature extraction and metric learning.

Some researchers construct features of person images based on color, texture and other appearance attributes[4-5]. The basic idea is that the person image is divided into multiple overlapping or non-overlapping local image blocks, and color or texture features are then extracted from each block separately, thus adding spatial region information into the person image features. When calculating the similarity of two person images, the features within the corresponding image blocks are compared separately, and the comparison results of all image blocks are fused as the final recognition result. Nevertheless, the features constructed in this way are weak, which weakens the feature representation for person Re-ID.

On the other hand, many works use a given set of training samples to obtain a metric matrix that effectively reflects the similarity between data samples, increasing the distance between dissimilar samples while reducing the distance between similar samples[6]. However, these methods do not consider the effect of sample size on metric learning performance, making the person Re-ID results less reliable.

Color features are robust to pose and viewpoint changes, but are susceptible to illumination and occlusion. It is difficult to effectively distinguish large-scale person images using only color features, because different persons often dress similarly. Clothing often contains texture information, and texture features involve comparisons of neighboring pixels and are robust to illumination, so making full use of both color and texture features is very effective for person Re-ID. However, traditional methods apply single color and texture features to the person Re-ID task, which is insufficient to handle the differences between different person images. In addition, the completeness and richness of feature representations also affect the results of similarity metrics, and traditional methods do not fully utilize the richness of samples when computing such metrics, resulting in lower overall performance.

To address the above problems, this paper proposes a person Re-ID method based on feature mapping space and sample determination metric learning. The method combines an improved weighted local maximal occurrence (wLOMO) feature, which modifies the original LOMO[7] feature, with the Gaussian of Gaussian (GOG)[8] feature, and uses a sample determination method to select a suitable metric learning method for ranking the similarity of person images. The proposed method is evaluated on three typical datasets and compared with other methods. The main contributions are summarized as follows.

(1) A fused feature mapping space is proposed to enhance person image features. The mean information along the horizontal direction of a person image is introduced into the LOMO feature, and the weighted mean and maximum are fused to obtain the proposed wLOMO feature. To enhance the feature expression of each person image, the wLOMO feature is combined with the GOG feature. On this basis, in order to simplify the feature extraction model, the feature transformation processes of wLOMO and GOG are integrated into one feature mapping space.

(2) A sample determination method is proposed to accommodate different sample sizes. For a given dataset, the sample determination method selects the appropriate metric learning method to accomplish the similarity ranking of person images according to the sample size. In addition, the selected sample size is dynamically tuned according to the matching rate of the different metric learning outputs.

(3) Extensive experiments on three publicly available datasets are designed to evaluate the performance of the proposed method and the comparison methods, and to demonstrate the effectiveness and applicability of the proposed method for person Re-ID.

    1 Related work

The research on person Re-ID can be divided into two groups: feature extraction and metric learning. Person Re-ID based on feature extraction is usually built from basic color, texture and other appearance attributes. Ref.[2] proposed the symmetry driven accumulation of local features (SDALF) based on the symmetric and asymmetric characteristics of the person body structure, which fused three kinds of color features in the person image to complete the discrimination of person images. Ref.[4] proposed an ensemble of localized features (ELF) method, which adopted the AdaBoost algorithm to select an appropriate feature combination from a group of color and texture features and improved the experimental accuracy. Refs[5,9,10] introduced biologically inspired features (BIF) in person images. By calculating the characteristics of BIF on adjacent scales, a feature called BiCov was proposed. On this basis, Gabor filters and covariance features were introduced to deal with the problems caused by illumination change and background variation in person images. Ref.[11] proposed a feature transformation method based on zero-padding augmentation, which could align the features distributed across disjoint person images to improve the performance of the matching model. Ref.[12] constructed the feature fusion network (FFN) by combining manually extracted features and deep learning features, realizing the fusion of deep learning features and hand-crafted features by constantly adjusting the parameters of the deep neural network. Ref.[13] proposed a deep convolution model, which highlights discriminative parts by assigning different weights to the features of each body part to realize the person Re-ID task. Deep-learning-based person Re-ID methods, however, require a large number of labeled samples to train a complex model, and the training process is very time-consuming.

Person Re-ID methods based on metric learning minimize the distance between similar persons by learning an appropriate similarity. Ref.[3] introduced the concept of large margin into the Mahalanobis distance and proposed a metric learning method called large margin nearest neighbor (LMNN). LMNN assumed that the sample features of the same class were adjacent, so that there was a large gap between the feature samples of different classes; thus, when calculating the distance, the features of samples of the same class were pulled together and those of different classes were pushed apart. Ref.[6] proposed a local Fisher discriminant analysis (LFDA) method, which introduced a matrix based on subspace learning, allocated different scale factors to same-class and different-class pairs, and used the local invariance principle to calculate the distance. Ref.[14] proposed a Mahalanobis distance metric called keep it simple and straightforward metric (KISSME) by calculating the difference between the intra-class and inter-class covariance matrices of sample features. The method did not need to calculate the metric matrix through a complex iterative algorithm, so it was more efficient. Ref.[15] used a new multi-scale metric learning method based on stripe descriptors for person Re-ID. With this method, the internal structure of different person images can be effectively extracted, improving the recognition rate. However, due to the non-linearity of person images across fields of view, the linear transformation produced by general metric learning methods is usually of limited effectiveness. Therefore, kernel-based metric learning methods were introduced to solve the nonlinear problem in person Re-ID[16-17]. However, the above methods adopt a single strategy to deal with changes in sample size, without considering the impact on the accuracy of the method itself.

    2 Problem description

The general process of person re-identification is to extract features first and then rank candidates by metric learning. The performance of a method depends strongly on the expressive ability of the features and the metric learning, and the existence of visual ambiguities will inevitably affect this ability. To solve this problem, a new method is proposed to improve the matching rate of person re-identification.

The framework of the proposed method is divided into three parts, as shown in Fig.1. The first part is the extraction of basic color, texture and spatial features, the second part is the mapping process of the basic features, and the third part is the metric learning method based on sample determination.

    Fig.1 The person re-identification framework

    3 Methodology

    Based on the wLOMO in subsection 3.1 and the proposed sample determination in subsection 3.2, the proposed method flowchart is shown in Fig.2.

    3.1 Feature mapping space

When designing the feature mapping space, two state-of-the-art feature transformation processes are merged into one feature mapping space by cascading, which simplifies the feature extraction.

    3.1.1 LOMO

When extracting LOMO features, a 10 × 10 sliding subwindow is used to represent the local area of a person image, and an 8 × 8 × 8-bin joint hue, saturation, value (HSV) color histogram together with two-scale scale invariant local ternary pattern (SILTP) texture histograms F_SILTP are extracted from each subwindow. Then the maximum occurrence of the pixel features over all subwindows at the same horizontal position is calculated as

where ρ(·) denotes the pixel feature occurrence in all subwindows.

    3.1.2 The proposed wLOMO

    Fig.2 Flowchart of the proposed method

Considering that maximizing the pixel features leads to the loss of some person features, and that the clothes worn by a person are often composed of a small number of colors in each part, mean information can enhance the feature expression of person images when the background changes little. Therefore, the mean information of the pixel feature distribution is introduced into the feature expression and fused with the maximum occurrence through a weight, giving the wLOMO feature in Eq.(5).
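To make the fusion concrete, the following is a minimal sketch, not the authors' exact Eq.(5): local histograms are pooled along each horizontal row of subwindows by both maximum and mean, and the two are combined with a weight a, the same weight analyzed in Section 4.2.1; the linear form a*mean + (1-a)*max is an assumption for illustration.

import numpy as np

def wlomo_fuse(local_hists, a=0.15):
    """Fuse the maximum and mean horizontal occurrence of local features.

    local_hists: array of shape (rows, cols, dim); local_hists[r, c] is the
                 joint HSV/SILTP histogram of the subwindow at row r, column c.
    a:           weight of the mean term (Section 4.2.1 reports a in 0.1 - 0.2
                 as best on VIPeR).
    """
    f_max = local_hists.max(axis=1)    # maximum occurrence per row (LOMO)
    f_mean = local_hists.mean(axis=1)  # mean occurrence per row
    return (a * f_mean + (1.0 - a) * f_max).ravel()

# toy usage: 24 rows of 11 subwindows, 512-bin histograms
feature = wlomo_fuse(np.random.rand(24, 11, 512), a=0.15)
print(feature.shape)                   # (12288,)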

    3.1.3 GOG

Considering that color features are more sensitive to illumination changes in cross-view person images, and considering the impact of spatial information loss on person Re-ID, this paper further extracts GOG features from the same person image to enhance the feature expression. Firstly, the pixel-level feature f is extracted as

f = [y, F_Mθ, F_RGB, F_HSV, F_Lab, F_RG]^T    (6)

where F_RGB, F_HSV, F_Lab and F_RG are the color features, F_Mθ is the texture feature, and y is the spatial feature. The color features are the channel values of the person image; Mθ consists of the pixel intensity gradient values along the four standard directions of the two-dimensional coordinate system; y is the vertical position of the pixel in the image. After that, block-level features are extracted. Each person image is divided into G partially overlapping horizontal regions, and each region is divided into k × k local blocks. The pixel features in each local block s are represented by a Gaussian distribution to form a Gaussian block z_i

where μ_s is the mean vector and Σ_s is the covariance matrix of block s.

Then, the Gaussian block z_i is mapped to a symmetric positive definite matrix to complete the block-level feature extraction. Finally, the region-level features are extracted: the Gaussian blocks of a region are modeled as a Gaussian region by another Gaussian distribution, and the Gaussian region is again embedded into a symmetric positive definite matrix. These vectors are finally aggregated to form the GOG feature F_GOG of a person image.

where z_G is the G-th horizontal region feature of a person image.
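A simplified sketch of this block-to-region hierarchy follows, assuming the mean and covariance are simply flattened into vectors; the symmetric-positive-definite embedding and normalization steps of the original GOG are omitted here.

import numpy as np

def gaussian_summary(samples):
    """Summarize a set of feature vectors by their mean and covariance."""
    mu = samples.mean(axis=0)
    sigma = np.cov(samples, rowvar=False)
    return mu, sigma

def gog_region(block_features):
    """block_features: list of (n_pixels, d) arrays, one per k x k block of a
    horizontal region. Each block becomes a Gaussian (mu, sigma); the region
    is then summarized by a second Gaussian over the block descriptors."""
    block_descs = []
    for pix in block_features:
        mu, sigma = gaussian_summary(pix)
        # flatten the block Gaussian into a vector (the paper instead embeds
        # it into a symmetric positive definite matrix before this step)
        block_descs.append(np.concatenate([mu, sigma.ravel()]))
    mu_r, sigma_r = gaussian_summary(np.stack(block_descs))
    return np.concatenate([mu_r, sigma_r.ravel()])

# toy usage: one region with 6 blocks of 25 pixels, 8-dimensional pixel features
region = [np.random.rand(25, 8) for _ in range(6)]
print(gog_region(region).shape)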

    3.1.4 Feature mapping space

The proposed wLOMO describes only the maximum and mean occurrence of pixel features, whereas GOG can additionally provide covariance information.

    To comprehensively consider the maximum occurrence, mean occurrence and covariance information of pixel features, Eq.(5) and Eq.(8) are combined. It means that wLOMO feature and GOG feature are aligned according to the person’s identity, and their feature mapping process is simplified to one feature mapping space by cascading.

where F is the output feature of the mapping space.
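Under the cascade description above, the mapping-space output can be read as an identity-aligned concatenation of the two descriptors; the following is a minimal sketch of that assumption, with arbitrary dimensions.

import numpy as np

def mapping_space(f_wlomo, f_gog):
    """Cascade the wLOMO and GOG descriptors of the same person image
    into one feature vector F output by the mapping space."""
    return np.concatenate([f_wlomo, f_gog])

# toy usage: identity-aligned descriptors of one image
F = mapping_space(np.random.rand(12288), np.random.rand(5256))
print(F.shape)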

    3.2 Sample determination

Cross-view quadratic discriminant analysis (XQDA)[7] and kernel cross-view quadratic discriminant analysis (k-XQDA)[18] are state-of-the-art methods whose computation depends on the feature dimension and on the sample size, respectively. Based on these two methods, a sample determination method is proposed to combine their advantages.

    3.2.1 XQDA

Before summarizing the XQDA method, a brief introduction is given to the distance measurement for person Re-ID. A dataset X contains C person classes c_i (1 ≤ i ≤ C) ∈ R^n. The classical Mahalanobis distance metric learns the distance d(x_i, z_j) between person x_i = [x_i1, x_i2, …, x_in] in camera a and person z_j = [z_j1, z_j2, …, z_jm] in camera b.
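For reference, a small sketch of the cross-view distance that XQDA learns, following the published formulation of Ref.[7]: samples are projected into a learned subspace W and compared with a metric built from the inverses of the intra-personal and extra-personal difference covariances; the learning of W and of the covariances is omitted here.

import numpy as np

def xqda_distance(x, z, W, sigma_intra, sigma_extra):
    """Cross-view distance d(x, z) in the learned subspace.

    W:           d x r projection learned by XQDA
    sigma_intra: r x r covariance of intra-personal (same identity) differences
    sigma_extra: r x r covariance of extra-personal (different identity) differences
    """
    M = np.linalg.inv(sigma_intra) - np.linalg.inv(sigma_extra)
    diff = W.T @ (x - z)
    return float(diff @ M @ diff)

# toy usage with random data (a real run would learn W and the covariances
# from the training set)
d, r = 100, 10
W = np.random.rand(d, r)
S_i, S_e = np.eye(r) * 0.5, np.eye(r) * 2.0
print(xqda_distance(np.random.rand(d), np.random.rand(d), W, S_i, S_e))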

    3.2.2 k-XQDA

The XQDA metric learning method is trained directly in the original linear feature space, where the similarity and difference among samples are not well expressed. k-XQDA uses a kernel function to map the original samples into an easily distinguishable nonlinear space, and then distinguishes the differences between samples in that nonlinear space. The derivation of the k-XQDA method mainly involves the distance metric function d(x_i, z_j) of XQDA and the kernelization of the cost function J(w_k).

In the kernel space, two kinds of expansion coefficients α and β, corresponding to the persons in cameras a and b respectively, are used. The mapping matrix w_k can then be expressed in terms of the training samples and these expansion coefficients.
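A rough sketch of this kernelization idea follows; the RBF kernel and the coefficient shapes are illustrative placeholders, not the learned k-XQDA solution. The projection of a sample is written purely in terms of kernel evaluations against the training samples of cameras a and b, weighted by the expansion coefficients α and β.

import numpy as np

def rbf_kernel(A, B, gamma=1e-3):
    """Pairwise RBF kernel matrix between the rows of A and the rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_projection(x, X_a, X_b, alpha, beta, gamma=1e-3):
    """Project a sample x using expansion coefficients over the training
    samples of camera a (alpha) and camera b (beta); only kernel values are
    ever touched, never explicit nonlinear feature maps."""
    k_a = rbf_kernel(x[None, :], X_a, gamma)   # (1, n_a)
    k_b = rbf_kernel(x[None, :], X_b, gamma)   # (1, n_b)
    return k_a @ alpha + k_b @ beta            # (1, r)

# toy usage
X_a, X_b = np.random.rand(50, 64), np.random.rand(50, 64)
alpha, beta = np.random.rand(50, 5), np.random.rand(50, 5)
print(kernel_projection(np.random.rand(64), X_a, X_b, alpha, beta).shape)  # (1, 5)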

    3.2.3 Sample determination

All the intrinsic matrix dimensions of the k-XQDA method depend on the sample size, which greatly reduces the amount of calculation compared with XQDA, whose matrices depend on the feature dimension.

On the basis of subsections 3.2.1 and 3.2.2, and considering the different focuses of the two metric learning methods, a sample determination method is proposed to integrate their advantages and better match the actual person re-identification task: when the size of the training set S satisfies Eq.(18), the corresponding metric learning method is used, which yields a better effect on the corresponding dataset.

where S is the sample size to be determined and s is the current sample size.
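Because Eq.(18) itself is not reproduced above, the sketch below only illustrates the decision logic described in the text: use k-XQDA when the training-set size reaches a dataset-dependent switching point (the values 532 and 436 quoted below come from the analysis in Section 4.2.2), otherwise use XQDA, and nudge the switching point according to which method actually achieved the higher Rank-1; the step size is a placeholder.

def choose_metric(train_size, threshold):
    """Sample determination: k-XQDA for larger training sets, XQDA otherwise."""
    return "k-XQDA" if train_size >= threshold else "XQDA"

def update_threshold(threshold, rank1_xqda, rank1_kxqda, step=10):
    """Dynamically tune the switching point from the observed matching rates."""
    if rank1_kxqda > rank1_xqda:
        return threshold - step   # k-XQDA already pays off earlier
    return threshold + step       # keep favouring XQDA a while longer

# e.g. switching points suggested by the experiments: ~532 (VIPeR), ~436 (PRID450S)
print(choose_metric(316, 532))   # 'XQDA'
print(choose_metric(486, 436))   # 'k-XQDA'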

    4 Experiments

    To evaluate the performance of the method fairly,all the comparison methods run in the same environment. The hardware environment is Intel Core i7-9700F CPU@3.00 GHz, 8 GB RAM. The operating system is Windows 10 64 bit, and the software environment is Matlab 2019b.

    4.1 Datasets and evaluation protocol

The effectiveness of the proposed method is demonstrated on three publicly available datasets: VIPeR[19], PRID450S[20] and CUHK01[21]. The VIPeR dataset contains 632 persons with different identities. Each person has two images captured from two disjoint camera views, including variations in background and illumination. The PRID450S dataset contains 450 persons with different identities. Each person has two images captured by two non-overlapping cameras with a simple background. The CUHK01 dataset consists of 971 persons with a total of 3884 shots captured by two non-overlapping cameras, with an average of two images per person per camera, and the person poses vary greatly.

To evaluate the results of the features under different metric learning methods, the cumulative match characteristic (CMC) curve is used as the evaluation protocol.
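For completeness, a small sketch of how a CMC curve can be computed from a probe-gallery distance matrix, in a single-shot setting where the true match of probe i is assumed to sit at gallery index i:

import numpy as np

def cmc(dist, max_rank=20):
    """dist[i, j]: distance between probe i and gallery j."""
    n = dist.shape[0]
    ranks = np.zeros(max_rank)
    for i in range(n):
        order = np.argsort(dist[i])            # gallery sorted by distance
        hit = int(np.where(order == i)[0][0])  # position of the true match
        if hit < max_rank:
            ranks[hit:] += 1                   # counted at this rank and beyond
    return ranks / n                           # cumulative matching rate

# toy usage
curve = cmc(np.random.rand(316, 316))
print(curve[[0, 4, 9, 19]])   # Rank-1, Rank-5, Rank-10, Rank-20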

    4.2 Comparison with state-of-the-art

All images are normalized to the same size of 128 × 48 pixels. The VIPeR, PRID450S and CUHK01 datasets are each randomly divided into two equal parts, one half for training and the other for testing. The numbers of images in the training sets of the three datasets are 632, 450 and 972 respectively. To eliminate the performance difference caused by randomly dividing the training and testing sets, the process is repeated 10 times, and the average cumulative matching accuracies at rank 1, 5, 10 and 20 over the 10 runs are reported. In addition, the corresponding CMC curves are shown.

    4.2.1 Evaluation of the mapping space

To analyze the effectiveness of the proposed mapping space, the output features of the mapping space are sent to the XQDA metric learning method to verify the performance of the method. Since the method is iterative, different weights are looped over on each dataset and the one with the highest performance is retained. The Rank-1 values corresponding to the various weights also show that the best weight is not constant and changes between datasets. This paper selects three different datasets and compares the results with state-of-the-art approaches.
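This weight selection amounts to a simple grid search; a minimal sketch follows, where the candidate grid and the evaluate() callback are illustrative placeholders.

import numpy as np

def select_weight(evaluate, candidates=np.arange(0.0, 1.01, 0.1)):
    """Loop over candidate weights and keep the one with the highest Rank-1.

    evaluate: callable mapping a weight a to a Rank-1 matching rate, e.g. by
              rebuilding the wLOMO features and re-running XQDA.
    """
    scores = {a: evaluate(a) for a in candidates}
    best = max(scores, key=scores.get)
    return best, scores[best]

# toy usage with a fake evaluator peaking around a = 0.15
best_a, best_rank1 = select_weight(lambda a: 0.5 - (a - 0.15) ** 2)
print(round(best_a, 1), round(best_rank1, 4))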

VIPeR dataset: to analyze the influence of the weight a on the performance of wLOMO, the Rank-1 results under different weights on the VIPeR dataset are shown in Fig.3. It can be seen that the introduction of mean information has a certain impact on performance. When a is in the range 0.1 - 0.2, the performance of the method is optimal; as a increases further, the performance declines.

The compared methods and their matching rates on VIPeR are shown in Table 1 and Fig.4. As reported in Table 1, the Rank-1 rates of LOMO, LSSCDL, DNS and GOG are better, all exceeding 40%. The proposed approach achieves 50.63% at Rank-1, which is 2.37% better than GOG.

    Fig.3 Rank-1 matching rates

    Table 1 Comparison of Rank results with other methods on VIPeR dataset

    Fig.4 CMC curves

PRID450S dataset: Fig.5 shows the performance comparison of wLOMO under different weight values. When the weight value is 0.3 - 0.4, the method performance is optimal.

The comparison methods and their matching rates on the PRID450S dataset are shown in Table 2 and Fig.6. Different from the person images in the VIPeR and CUHK01 datasets, the background of the person images in PRID450S is relatively simple, so the background interferes little with any of the methods and the final matching results are generally better. For the proposed method with mean information, the matching rate at Rank-1 is 71.42%, outperforming the second-best method, GOG, by 3.6%.

    Fig.5 Rank-1 matching rates

    Table 2 Comparison of Rank results with other methods on PRID450S dataset

    Fig.6 CMC curves

CUHK01 dataset: the performance of wLOMO keeps declining as a increases, because the person background is more complex than in the first two datasets (Fig.7), and the introduction of mean information leads to performance degradation. Thus, the combination with GOG can strengthen the feature expression and weaken the error caused by the mean information.

    Fig.7 Rank-1 matching rates

The compared methods and their matching rates on the CUHK01 dataset are shown in Table 3 and Fig.8. Each person in the CUHK01 dataset has four images: the first two are front/back views, the last two are side views, and the overall difference within each pair is small. Therefore, in the experiment, one image is randomly selected from the front/back-view images of each person and one from the side-view images. The training sets contain 486 pairs of person images, and the test sets contain 485 pairs. As listed in Table 3, the performance of the proposed method is better than the other methods, outperforming the second-best method by 5.65%.

    Table 3 Comparison of Rank results with other methods on CUHK01 dataset

    Fig.8 CMC curves

    4.2.2 Evaluation of the sample determination

The proposed method achieves state-of-the-art performance when the output features of the mapping space are fed into XQDA, as in the experiments above. Then, in order to verify the effectiveness of the proposed sample determination method, the output features of the mapping space are sent to XQDA and k-XQDA respectively to compare the performance of the two. The experimental results are shown in Table 4, Table 5 and Table 6, in which the size of samples denotes the number of training samples.

VIPeR dataset: in Table 4, as the size of the training set is gradually increased, the Rank-1 of both metric learning methods also increases on the VIPeR dataset. According to Rank-1, the matching rate of XQDA remains greater than that of k-XQDA even as the training set grows. However, the increases of XQDA are 6.87% and 15.3%, while the increases of k-XQDA are 7.97% and 16.93%; that is, k-XQDA improves faster than XQDA. Thus, when the training set grows to a certain size, k-XQDA can show better accuracy than XQDA.

    Table 4 Ranks matching rates versus different size of samples on VIPeR dataset

    Table 5 Ranks matching rates versus different size of samples on PRID450S dataset

    Table 6 Ranks matching rates versus different size of samples on CUHK01 dataset

PRID450S dataset: when the size of the training set increases from 225 to 300 and 436, the Rank-1 of XQDA is better than that of k-XQDA, as reported in Table 5. In terms of the extent of the Rank-1 increase, XQDA increases by 6.38% and 16.32%, while k-XQDA increases by 8.06% and 20.94%. According to the experimental results on PRID450S, when the training set grows to a certain size, the Rank-1 of k-XQDA can exceed that of XQDA.

CUHK01 dataset: the output features of the mapping space are fed to XQDA and k-XQDA respectively on the CUHK01 dataset. When the training set contains 486 samples, the Rank-1 of k-XQDA exceeds that of XQDA by 1.8%, as reported in Table 6.

In summary, on the VIPeR dataset the performance of k-XQDA surpasses that of XQDA when the training set size reaches about 532 (Table 4), so k-XQDA obtains better results there; when the training set is smaller than 532, XQDA performs better. On the PRID450S dataset, when the training set size is larger than 436, k-XQDA performs better than XQDA and yields better results; when the training set is smaller than 436, XQDA performs better (Table 5). According to the results in Table 6, when person Re-ID is conducted on the CUHK01 dataset with a training set size of about 486, k-XQDA obtains good results.

    5 Conclusion

Based on multi-feature extraction, an effective feature mapping space and a sample determination method are proposed to solve the problem of visual ambiguities in person re-identification. The feature mapping space simplifies the complex feature extraction process: it takes the basic features of person images as input and outputs the mapped features. The mapped features are then discriminated by the selected metric learning method to complete the similarity ranking. Compared with existing related methods, the proposed method improves the matching rate effectively. In the future, the determination method for metric learning will be studied further and the performance of the algorithm optimized.
