
DTHN: Dual-Transformer Head End-to-End Person Search Network

Computers, Materials & Continua, 2023, Issue 10

Cheng Feng, Dezhi Han and Chongqing Chen

School of Information Engineering, Shanghai Maritime University, Shanghai, 201306, China

ABSTRACT Person search consists of two sub-tasks, namely person detection and person re-identification (re-ID). Existing approaches are primarily based on Faster R-CNN and Convolutional Neural Networks (CNNs) (e.g., ResNet). While these structures may detect high-quality bounding boxes, they tend to degrade re-ID performance. To address this issue, this paper proposes a Dual-Transformer Head Network (DTHN) for end-to-end person search, which contains two independent Transformer heads: a box head for detecting bounding boxes and extracting efficient bounding box features, and a re-ID head for capturing high-quality re-ID features for the re-ID task. Specifically, after an image goes through the ResNet backbone to extract features, the Region Proposal Network (RPN) proposes possible bounding boxes. The box head then extracts more efficient features within these bounding boxes for detection. Following this, the re-ID head computes the occluded attention of the features in these bounding boxes and distinguishes them from other persons or the background. Extensive experiments on two widely used benchmark datasets, CUHK-SYSU and PRW, achieve state-of-the-art performance: 94.9 mAP and 95.3 top-1 scores on CUHK-SYSU, and 51.6 mAP and 87.6 top-1 scores on PRW, demonstrating the advantages of this paper's approach. An efficiency comparison also shows that our method is highly efficient in both time and space.

KEYWORDS Transformer; occluded attention; end-to-end person search; person detection; person re-ID; Dual-Transformer Head

    1 Introduction

Person search aims to localize a specific target person in a gallery set; it thus contains two sub-tasks, person detection and person re-ID. Depending on how these two sub-tasks are handled, existing work can be divided into two-step and end-to-end methods. Two-step methods [1–6] treat them separately by conducting re-ID [7–10] on cropped person patches found by a standalone person box detector. They trade time and resource consumption for better performance, as shown in Fig. 1a.

By comparison, end-to-end methods [11–17] tackle detection and re-ID simultaneously in a multi-task framework, as seen in Fig. 1b. These approaches commonly utilize a person detector (e.g., Faster R-CNN [18], RetinaNet [19], or FCOS [20]) for detection and then feed the features into re-ID branches. To address the issue caused by the parallel structure of Faster R-CNN, Li et al. [12] proposed SeqNet, which performs detection and re-ID sequentially to extract high-quality features and achieve superior re-ID performance. Yu [17] introduced COAT to solve the imbalance between detection and re-ID by learning pose/scale-invariant features in a coarse-to-fine manner, achieving improved performance. However, end-to-end methods still suffer from several challenges:

■ Handling occlusions caused by background objects or partial appearance poses a significant challenge. Detecting and correctly re-identifying persons becomes harder when they are obscured by objects or positioned at the edges of the captured image. While current models may perform well in person search, they are prone to failure in complex occlusion situations.

■ Significant scale and pose variations complicate re-ID. Since current models mainly utilize CNNs to extract re-ID features, they tend to suffer from scale and pose variations due to inconsistent perceptual fields, which degrades re-ID performance.

■ Efficient re-ID feature extraction remains a thorny problem. Existing methods perform either re-ID first or detection first, but the question of how to efficiently extract re-ID features for better performance remains unsolved.

Figure 1: Classification and comparison of the two person search network paradigms

For such cases, we propose a Dual-Transformer Head End-to-End Person Search Network (DTHN) to address the above limitations. First, inspired by SeqNet, an additional Faster R-CNN head is used as an enhanced RPN to provide high-quality bounding boxes. Then a Transformer-based box head is utilized to efficiently extract box features and perform high-accuracy detection. Next, a Transformer-based re-ID head is employed to efficiently obtain the re-ID representation from the bounding boxes. Moreover, we randomly mix up partial tokens of instances in a mini-batch to learn cross-attention. Compared to previous works that have difficulty balancing detection and re-ID, DTHN achieves high detection accuracy without degrading re-ID performance.

    The main contributions of this paper are as follows:

■ We propose a Dual-Transformer Head End-to-End Person Search Network, refining the box and re-ID feature extraction that limited previous end-to-end frameworks. Performance is improved by designing a Dual-Transformer Head structure containing two independent Transformer heads that handle high-quality bounding box feature extraction and high-quality re-ID feature extraction, respectively.

■ We improve end-to-end person search efficiency by using a Dual-Transformer Head instead of a traditional CNN, reducing the number of parameters while maintaining comparable accuracy. By employing the occluded attention mechanism, the network can learn person features under occlusion, which substantially improves re-ID performance for small-scale persons and occlusion situations.

■ We validate the effectiveness of our approach by achieving state-of-the-art performance on two widely used datasets, CUHK-SYSU and PRW: 94.9 mAP and 95.3 top-1 scores on CUHK-SYSU, and 51.6 mAP and 87.6 top-1 scores on PRW.

The remainder of this paper is organized as follows: Section 2 presents the research related to this work in recent years; Section 3 reviews the relevant preparatory knowledge and presents the proposed DTHN design in detail; Section 4 presents the experimental setup and verifies the effectiveness of the proposed method through experiments; Section 5 summarizes this work and provides an outlook on future work.

    2 Related Work

    2.1 Person Search

Person search has received increasing attention since the release of CUHK-SYSU and PRW, two large-scale datasets. Their release marked a shift in researchers' approach to person search, which began to be viewed as a holistic task rather than two separate ones. Early solutions were two-step methods, using a person detector or manually constructed person boxes and then building a person re-ID model to search for targets in the gallery. Their high performance comes with high time and resource consumption: two-step methods tend to consume more computational resources and time to perform at the same level as end-to-end methods. End-to-end person search has attracted extensive interest due to the integrity of solving the two sub-tasks together. Li et al. [12] shared the stem representations of person detection and re-ID, solving the two sub-tasks sequentially. Yan [14] proposed the first anchor-free person search method to address the misalignment problem at different levels. Furthermore, Yu [17] presented a three-cascade framework for progressively balancing person detection and re-ID.

    2.2 Vision Transformer

The Transformer [21] was initially designed to solve problems in natural language processing. Since the release of the Vision Transformer (ViT) [22], it has become popular in computer vision (CV) [23–26]. This pure Transformer backbone achieves state-of-the-art performance on many CV problems and has been shown to extract multi-scale features that traditional CNNs struggle with. The re-ID process relies heavily on fine-grained features, making the Transformer a promising technology in this field. Several efforts have been made to explore the application of ViT to person re-ID. Li et al. [27] proposed the part-aware Transformer to perform occluded person re-ID through diverse part discovery. Yu [17] performed person search with multi-scale convolutional Transformers, learning discriminative re-ID features and distinguishing people from the background in a cascade pipeline. Our paper proposes a Dual-Transformer Head for an end-to-end person search network to efficiently extract high-quality bounding box features and re-ID features.

    2.3 Attention Mechanism

The attention mechanism plays a crucial role in the operation and function of the Transformer. Since the proposal of ViT, numerous variants have tried to bring different properties to the Transformer by changing the attention mechanism. Among them, in object detection, combining artificial token transformations has become a mainstream approach to detecting occluded targets. Based on this, Yu [17] proposed an occluded attention module in which both positive and negative samples in the same mini-batch are randomly partially swapped to simulate the background occlusion a person may encounter, achieving good performance. This is also the main attention mechanism used in this paper.

To give the reader further insight into the work in this paper, Table 1 provides a summary of the related work and the work in this paper.

Table 1: A summary of related person search works and our work

    3 Methods

As previously mentioned, existing end-to-end person search works still struggle with the conflict between person detection and person re-ID. Prior studies have indicated that, despite a potential decrease in detection precision, re-ID precision can be maintained or even improved through serialization. Moreover, high detection precision yields accurate bounding box features, which benefit re-ID. Thus, we propose the Dual-Transformer Head Person Search Network (DTHN) to obtain both high-quality detection and refined re-ID accuracy.

    3.1 End-to-End Person Search Network

As shown in Fig. 2, our network is based on the Faster R-CNN object detector with a Region Proposal Network (RPN). We start by pre-processing the image to be searched, which is resized to 800 × 1500 as the standard input. We then use the ResNet-50 [28] backbone to extract a 1024-dim backbone feature of size 1024 × 58 × 76 and feed it into the RPN to obtain region proposals. During training, RoI-Align is performed on the proposals generated by the RPN to obtain region-of-interest features for bounding box search, whereas RoI-Align is performed on ground-truth bounding boxes during the re-ID phase. Note that instead of using ResNet-50 stage 5 (res5) as our box head, we utilize a Transformer to extract high-quality box features and achieve high detection accuracy, and we use the predictor head of Faster R-CNN to obtain high-confidence detection boxes. The RoI-Align operation pools an $h \times w$ region as our region of interest, which we use as the stem feature $F \in \mathbb{R}^{h \times w \times c}$, where $h$ is the height, $w$ the width, and $c$ the number of channels. We set the intersection-over-union (IoU) threshold to 0.5 in the training phase to distinguish positive and negative samples, and to 0.8 in the testing phase to obtain high-confidence bounding boxes. A Transformer re-ID head is then utilized to extract discriminative features from $F$. In each Transformer head, we learn the feature supervised by two losses, $L_{reg1}$ and $L_{reg2}$, which share the identical form

$$L_{reg} = \frac{1}{N_p} \sum_{i=1}^{N_p} L_{loc}(r_i, \Delta_i),$$

where $N_p$ denotes the number of positive samples, $r_i$ the computed regression of the $i$-th positive sample, $\Delta_i$ the corresponding ground-truth regression, and $L_{loc}$ the Smooth-L1 loss.
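For illustration, the following is a minimal PyTorch sketch of this regression loss under the definitions above; the tensor names and layout are illustrative assumptions, not the released implementation.

```python
import torch.nn.functional as F

def box_regression_loss(deltas_pred, deltas_gt, labels):
    """Smooth-L1 regression loss averaged over positive samples only,
    a minimal sketch of L_reg as defined above.

    deltas_pred: (N, 4) predicted box regressions r_i
    deltas_gt:   (N, 4) ground-truth regressions Delta_i
    labels:      (N,)   1 for positive (person) samples, 0 for background
    """
    pos = labels == 1                       # mask of positive samples
    n_pos = pos.sum().clamp(min=1)          # N_p, guarded against division by zero
    # L_loc (Smooth-L1) summed over positives, normalized by N_p
    loss = F.smooth_l1_loss(deltas_pred[pos], deltas_gt[pos], reduction="sum")
    return loss / n_pos
```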

Figure 2: Structural framework of the DTHN; the dotted line indicates operations that occur only in the testing phase

In addition, we also calculate classification losses $L_{cls1}$ and $L_{cls2}$ after the two Transformer heads, taking the cross-entropy form

$$L_{cls} = -\frac{1}{N} \sum_{i=1}^{N} \left[ c_i \log p_i + (1 - c_i) \log(1 - p_i) \right],$$

where $N$ denotes the number of samples, $p_i$ the predicted classification probability of the $i$-th sample, and $c_i$ the ground-truth label.

Note that $L_{cls2}$ and the re-ID loss $L_{reid}$ are two different losses calculated by the Norm-Aware Embedding (NAE) $L_{nae}(\cdot)$, where $f$ denotes the extracted 256-dim features.

    3.2 Occluded Attention

The attention mechanism plays a crucial role in the Transformer. In our application, where we aim to extract high-quality bounding box and re-ID features, we must address the issue of occlusion. To this end, we use occluded attention in the DTH to prompt the model to learn occlusion features and handle them in real applications, as shown in Fig. 3. First, we build the token bank $B = \{x_1, x_2, \ldots, x_p\}$, where $p$ denotes the number of box proposals and $x_i$ denotes the tokens of the $i$-th proposal in one mini-batch. We then exchange part of the tokens with tokens of another instance from the token bank according to the index, using the Token-Mix-Up (TMU) function

$$\mathrm{TMU}(x_i, x_j) = \begin{cases} x_j, & R < T \\ x_i, & \text{otherwise}, \end{cases}$$

where $x_i$ and $x_j$ denote the tokens to be handled, $R$ denotes a random value generated by the system, and $T$ denotes the exchange threshold.
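A minimal PyTorch sketch of the TMU rule follows; the threshold value and the batch layout are illustrative assumptions rather than the paper's exact settings.

```python
import torch

def token_mix_up(tokens, threshold=0.3):
    """Token-Mix-Up (TMU), a minimal sketch of the exchange rule above.

    tokens: (p, L, c_hat) token bank for the p box proposals in a mini-batch,
            each proposal carrying L tokens of dimension c_hat.
    """
    p, L, _ = tokens.shape
    perm = torch.randperm(p)                  # pair each x_i with a random x_j
    R = torch.rand(p, L, device=tokens.device)
    swap = (R < threshold).unsqueeze(-1)      # exchange a token only where R < T
    return torch.where(swap, tokens[perm], tokens)
```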

Figure 3: The occluded attention mechanism in DTHN

After random swapping, we transform the tokenized features into three matrices through three fully connected (FC) layers: a query matrix $Q$, a key matrix $K$, and a value matrix $V$, and then compute the multi-head self-attention (MSA) as

$$\mathrm{Attn}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{\hat{c}/m}}\right) V,$$

where $\hat{c}$ denotes the channel scale of the token, equal to $c/n$, $n$ is the number of slices during tokenization, and $m$ denotes the number of heads in the MSA.
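The following is a minimal PyTorch sketch of this MSA computation, assuming the tokens have already been mixed up; module and variable names are illustrative.

```python
import torch.nn as nn

class OccludedMSA(nn.Module):
    """Multi-head self-attention over the mixed-up tokens, a minimal sketch of
    the formula above: Attn(Q, K, V) = softmax(QK^T / sqrt(c_hat / m)) V."""

    def __init__(self, c_hat, m):
        super().__init__()
        assert c_hat % m == 0
        self.m, self.d = m, c_hat // m        # m heads, each of width c_hat / m
        self.q = nn.Linear(c_hat, c_hat)      # the three FC layers producing
        self.k = nn.Linear(c_hat, c_hat)      # Q, K and V
        self.v = nn.Linear(c_hat, c_hat)
        self.proj = nn.Linear(c_hat, c_hat)

    def forward(self, x):                     # x: (B, L, c_hat) tokens
        B, L, _ = x.shape

        def heads(t):                         # (B, L, c_hat) -> (B, m, L, d)
            return t.view(B, L, self.m, self.d).transpose(1, 2)

        Q, K, V = heads(self.q(x)), heads(self.k(x)), heads(self.v(x))
        attn = (Q @ K.transpose(-2, -1)) / self.d ** 0.5   # scaled dot product
        out = attn.softmax(dim=-1) @ V                     # (B, m, L, d)
        out = out.transpose(1, 2).reshape(B, L, self.m * self.d)
        return self.proj(out)                 # merge heads back to c_hat
```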

After the MSA, we apply a Feed-Forward Network (FFN) to output features for regression, classification, and re-ID.

    3.3 Dual-Transformer Head

The Dual-Transformer Head (DTH) consists of two individual Transformer heads designed for detection and re-ID, respectively. Although they work in different parts of the network, the detection and re-ID heads share the same mechanism. The Transformer box head takes box proposals as input and generates processed features as output. In contrast, the Transformer re-ID head takes ground truth as input during the training phase but proposals during the testing phase. Therefore, we hypothesize that the quality of detection can positively impact re-ID performance. To provide a visual representation, the structure of the DTH is shown in Fig. 4.

Figure 4: The structure of the DTH and how it works

First, the pooled stem feature $F \in \mathbb{R}^{h \times w \times c}$ is fed into the Transformer box head to obtain the proposal feature, which is fed into Faster R-CNN to calculate the proposal regression and proposal classification. After that, $F$ is re-fed into the Transformer re-ID head to obtain the box feature, which is fed into the bounding box regressor and the Norm-Aware Embedding to calculate the box regression and box classification. The loss function of NAE used to calculate the box classification $L_{cls2}$ is shown below:

$$L_{cls2} = -\left[\, y \log \sigma(r) + (1 - y) \log\big(1 - \sigma(r)\big) \right],$$

where $y \in \{0, 1\}$ denotes whether the box is a person or background, the norm $r \in [0, \infty)$ is obtained from the embedding through a batch normalization layer, and $\sigma$ denotes the sigmoid activation function. The OIM loss is calculated using the features processed by NAE. OIM considers only the labeled and unlabeled identities, leaving the other proposals untouched. OIM has two auxiliary structures: a Look-Up Table (LUT) $\{v_1, \ldots, v_L\}$ storing the feature vectors of all tagged identities, and a Circular Queue (CQ) $\{u_1, \ldots, u_Q\}$ storing untagged identities detected in recent mini-batches. Based on these two structures, the probability of $x$ being recognized as the identity with class-id $i$ (and analogously as the $i$-th unlabeled identity) is computed by a softmax over both memory banks:

$$p_i = \frac{\exp(v_i^{\top} x / \tau)}{\sum_{j=1}^{L} \exp(v_j^{\top} x / \tau) + \sum_{k=1}^{Q} \exp(u_k^{\top} x / \tau)}.$$

The OIM loss, used as our re-ID loss, is calculated as

$$L_{reid} = L_{oim} = -\mathbb{E}_x\left[\log p_t\right],$$

where $v_i$ denotes the $i$-th column of the LUT, $u_i$ the $i$-th column of the CQ, $\tau$ a temperature that softens the probability distribution, $\mathbb{E}_x$ the expectation, and $p_t$ the probability of $x$ being judged as its target identity $t$.
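To make the NAE classification concrete, here is a minimal PyTorch sketch under our reading of the $L_{cls2}$ formula above; the module and tensor names are illustrative.

```python
import torch.nn as nn
import torch.nn.functional as F

class NormAwareClassifier(nn.Module):
    """Norm-Aware Embedding classification, a minimal sketch of L_cls2 above:
    the norm of the 256-dim embedding f is batch-normalized and squashed by a
    sigmoid into a person-vs-background probability."""

    def __init__(self):
        super().__init__()
        self.bn = nn.BatchNorm1d(1)           # the batch norm inside sigma(.)

    def forward(self, f, y):
        # f: (N, 256) embeddings; y: (N,) with 1 = person, 0 = background
        r = f.norm(dim=1, keepdim=True)       # norm r in [0, inf)
        logit = self.bn(r).squeeze(1)         # batch-normalized norm
        # L_cls2 = -[y log sigma(r) + (1 - y) log(1 - sigma(r))]
        return F.binary_cross_entropy_with_logits(logit, y.float())
```

Likewise, a minimal sketch of the OIM loss with the LUT and CQ memory banks; the temperature value and the omission of the momentum updates of the two banks are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def oim_loss(x, targets, lut, cq, tau=1.0 / 30):
    """OIM loss, a minimal sketch of the equations above.

    x:       (N, 256) L2-normalized re-ID features
    targets: (N,) class ids in [0, L) for labeled persons, -1 for unlabeled
    lut:     (L, 256) Look-Up Table of labeled identity features
    cq:      (Q, 256) Circular Queue of recently seen unlabeled features
    """
    # one softmax over the L + Q similarity bins of both memory banks
    logits = torch.cat([x @ lut.t(), x @ cq.t()], dim=1) / tau
    labeled = targets >= 0                    # only labeled samples carry a target t
    # L_oim = -E_x[log p_t]
    return F.cross_entropy(logits[labeled], targets[labeled])
```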

We take the Transformer re-ID head as an example to demonstrate the process. After the feature has been pooled into $F \in \mathbb{R}^{h \times w \times c}$, $F$ goes through tokenization. We split $F$ into $n$ slices channel-wise, obtaining $F_i \in \mathbb{R}^{h \times w \times (c/n)}$. We utilize a series of convolutional layers to generate tokens from each $F_i$, obtaining $\hat{F}_i \in \mathbb{R}^{\hat{h} \times \hat{w} \times \hat{c}}$, and flatten $\hat{F}_i$ into tokens $x \in \mathbb{R}^{\hat{h}\hat{w} \times \hat{c}}$. After TMU, the tokens go through the MSA and FFN described above, which transform each token to enhance its representational ability. The enhanced features are projected back to the size they entered with, and we concatenate the features of the $n$ scales of Transformers to the original size $h \times w \times c$. There is a residual connection around each Transformer. After the global average pooling (GAP) layer, the features output by the Transformer are pooled and delivered to different loss functions according to the type of Transformer head. The internal structure of the Transformer head is shown in Fig. 5.
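Putting the pieces together, the following skeleton sketches one Transformer head under the assumptions above; a standard PyTorch encoder layer stands in for the TMU + MSA + FFN stack described earlier, and the default sizes are illustrative.

```python
import torch
import torch.nn as nn

class TransformerHead(nn.Module):
    """Skeleton of one Transformer head, a minimal sketch of the pipeline above:
    channel-wise slicing -> convolutional tokenization -> flattening -> token
    enhancement -> re-projection -> concatenation -> residual -> GAP."""

    def __init__(self, c=1024, n=4):
        super().__init__()
        self.n = n
        self.tokenize = nn.ModuleList(
            nn.Conv2d(c // n, c // n, kernel_size=3, padding=1) for _ in range(n))
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=c // n, nhead=8, batch_first=True)
            for _ in range(n))                # stand-in for TMU + MSA + FFN
        self.gap = nn.AdaptiveAvgPool2d(1)

    def forward(self, F_):                    # F_: (B, c, h, w) pooled stem feature
        B, c, h, w = F_.shape
        outs = []
        for i, (conv, blk) in enumerate(zip(self.tokenize, self.blocks)):
            s = F_[:, i * c // self.n:(i + 1) * c // self.n]  # i-th channel slice
            t = conv(s).flatten(2).transpose(1, 2)            # (B, h*w, c/n) tokens
            t = blk(t)                                        # enhance each token
            outs.append(t.transpose(1, 2).view(B, c // self.n, h, w))
        out = torch.cat(outs, dim=1) + F_     # concat the n scales + residual
        return self.gap(out).flatten(1)       # (B, c) feature for the loss heads
```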

    4 Experiment

All training is conducted in PyTorch on one NVIDIA A40 GPU, while testing is conducted on one NVIDIA RTX 3070 Ti GPU. The original image is resized to 900 × 1500 as the input and passes through ResNet-50 up to stage 4. The source code and implementation details can be found at https://github.com/FitzCoulson/DTHN/tree/master.

Figure 5: The internal structure of the Transformer head

    4.1 Datasets and Metrics

We conduct our experiments on two widely used datasets. The CUHK-SYSU dataset [13] contains images from 18184 scenes with 8432 identities and 96143 bounding boxes. The default test gallery contains 2900 query identities in 6978 images, with a default gallery size of 100. The PRW dataset [6] collects 11816 video frames from 6 cameras, divided into a training set with 5704 frames and 482 identities and a testing set with 2057 query persons in 6112 frames.

We evaluate our model following the standard evaluation metrics. Following the Cumulative Matching Characteristic (CMC) protocol, a detected box is considered correct only when its IoU with the ground truth exceeds 0.5. We use recall and Average Precision (AP) as the performance metrics for person detection, while person re-ID uses mean Average Precision (mAP) and top-1 scores. For all metrics, higher is better. AP and mAP are computed as

$$AP = \sum_{n} (R_n - R_{n-1}) P_n, \qquad mAP = \frac{1}{C} \sum_{c=1}^{C} AP_c,$$

where $R_n$ and $P_n$ respectively denote the recall and precision at the $n$-th confidence threshold, and $C$ denotes the number of classes. The top-1 score denotes the rate at which the top-ranked match is the correct identity.
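As a concrete reading of these metrics, here is a minimal NumPy sketch of the AP computation; the recall/precision arrays are assumed to come from sweeping the confidence threshold in decreasing order.

```python
import numpy as np

def average_precision(recalls, precisions):
    """AP as the weighted sum above, AP = sum_n (R_n - R_{n-1}) * P_n,
    over per-threshold recall/precision arrays."""
    r = np.concatenate([[0.0], recalls])      # prepend R_0 = 0
    return float(np.sum((r[1:] - r[:-1]) * precisions))

# mAP then averages AP over the C classes (here, query identities), e.g.:
# m_ap = np.mean([average_precision(r_c, p_c) for r_c, p_c in per_class_curves])
```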

    4.2 Implementation Detail

We take ResNet-50 pre-trained on ImageNet as the backbone. The batch size is set to 5 during training and 1 during testing. The size of $F$ is set to 14 × 14 × 1024. The number of heads $m$ in the MSA is set to 8. The loss weight $\lambda_1$ is set to 10, and the others are set to 1. We use the SGD optimizer with a momentum of 0.9 to train for 20 epochs. The initial learning rate warms up to 0.003 during the first epoch and decays by a factor of 10 after the 16th epoch. The CQ size of OIM is set to 5000 for CUHK-SYSU and 500 for PRW. The IoU threshold is set to 0.4 in the testing phase.
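The optimization recipe above can be sketched as follows; the stand-in model and the iterations-per-epoch value are assumed placeholders, not the released configuration.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                  # stand-in for DTHN
iters_per_epoch = 1000                    # assumed placeholder

optimizer = torch.optim.SGD(model.parameters(), lr=0.003, momentum=0.9)
# linear warm-up to 0.003 across the first epoch (stepped once per iteration)
warmup = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=0.1, total_iters=iters_per_epoch)
# decay the learning rate by a factor of 10 after the 16th epoch (stepped per epoch)
decay = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[16], gamma=0.1)
```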

    4.3 Ablation Study

We conducted several experiments on the PRW dataset to analyze our proposed method. As shown in Table 2, we test several combinations of different box heads and re-ID heads and evaluate their performance.

We set the default box head and re-ID head to ResNet-50 (stage 5) and conduct one experiment, followed by two experiments that set either the box head or the re-ID head to the corresponding Transformer head, and finally one experiment with both heads set to Transformer heads. As Table 2 shows, when using ResNet-50 (stage 5) as both the box head and the re-ID head, detection and re-ID are both at a moderate level. When we change only the box head to a Transformer, the detection accuracy does not improve and the re-ID accuracy is slightly reduced, so the Transformer is not effective as the box head alone. When we keep the box head as ResNet-50 (stage 5) and replace the re-ID head with a Transformer, the re-ID accuracy increases significantly, which shows that the Transformer can maximize the information extracted from the feature for re-ID. Finally, we replace both the box head and the re-ID head with Transformers; while the detection accuracy is slightly reduced, the re-ID accuracy is significantly improved with the support of the DTH. Although the Transformer box head reduces detection accuracy slightly, it efficiently extracts valid information and, together with the Transformer re-ID head, improves overall re-ID performance. The Transformer re-ID head clearly enhances re-ID in various occlusion scenarios and significantly increases overall re-ID performance.

    Therefore,we believe that our design of the DTHN can fully extract both the box features and the unique features of the person for efficient re-ID.

    4.4 Comparison with State-of-the-Art Models

We compare our DTHN with state-of-the-art methods on CUHK-SYSU and PRW, including both two-step and end-to-end methods. The results are shown in Table 3.

Table 3: Comparison with SOTA models

Context Bipartite Graph Matching (CBGM) is an algorithm used in the test phase to integrate context information into the matching process. It compares the two most similar targets and uses the Kuhn-Munkres (K-M) algorithm to find the optimal matching with the largest weight.
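For intuition, here is a minimal sketch of the maximum-weight bipartite matching that CBGM relies on, using SciPy's Hungarian solver; the exact integration of context persons in the full pipeline is more involved.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def bipartite_match(sim):
    """Maximum-weight bipartite matching via the K-M (Hungarian) algorithm:
    pairs query-context persons with gallery boxes so that the total
    similarity weight is maximized.

    sim: (n_query, n_gallery) pairwise similarity matrix.
    """
    rows, cols = linear_sum_assignment(-sim)  # negate: the solver minimizes cost
    return list(zip(rows.tolist(), cols.tolist())), float(sim[rows, cols].sum())

# e.g. bipartite_match(np.array([[0.9, 0.2], [0.3, 0.8]]))
# -> ([(0, 0), (1, 1)], 1.7): each query matched to its best-compatible box
```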

    The results of using CBGM are shown in Table 4.

Table 4: Comparison with SOTA models using CBGM

    The graphical representations of each dataset’s results are shown in Figs.6 and 7.The horizontal axis is mAP and the vertical axis is top-1.

Figure 6: Comparison with SOTA end-to-end models on CUHK-SYSU

    4.4.1 Result on CUHK-SYSU

As shown in Table 3, we achieve the same 93.9 mAP as, and a comparable 94.3 top-1 score to, the state-of-the-art two-step method TCTS. Compared with recent end-to-end works, our mAP outperforms AlignPS, SeqNet, and AGWF, and our top-1 score outperforms AlignPS and AGWF. Additionally, with the post-processing operation CBGM, our mAP and top-1 scores improve to 94.9 and 95.3, achieving the best mAP among all methods with a highly competitive top-1 score.

    4.4.2 Result on PRW

The PRW dataset is well known to be more challenging. We achieve 50.7 mAP and 85.1 top-1 scores. Our mAP outperforms all two-step methods. Among end-to-end methods, our mAP and top-1 score outperform AlignPS and SeqNet, while a 2.5 gap with AGWF and COAT remains. Due to its structural advantage, COAT retains state-of-the-art status on the PRW dataset, but the DTHN proposed in this paper still achieves respectable results with fewer parameters and less computation. By applying CBGM as a post-processing operation, we obtain a slight gain of 0.9 mAP and a significant gain of 2.5 in top-1 score, further improving the performance of our method and reducing the gap with COAT. This shows that our proposed DTHN is effective on the challenging PRW dataset.

Figure 7: Comparison with SOTA end-to-end models on PRW

    4.4.3 Efficiency Comparison

We compare our efficiency with two end-to-end networks, SeqNet and COAT. All experiments are conducted on an RTX 3070 Ti GPU with the PRW dataset. As shown in Table 5, the comparison includes the number of parameters, multiply-accumulate operations (MACs), and running speed in frames per second (FPS).

Table 5: Efficiency comparison

Compared with SeqNet and COAT, we significantly reduce the number of parameters while maintaining equivalent MACs and comparable accuracy. In terms of FPS, SeqNet is the fastest at 9.43 because it does not need to compute attention, and we have a slight speed advantage over COAT, which also computes attention. In summary, our model runs efficiently while maintaining good performance.

    4.5 Visualization Analysis

To show the recognition accuracy of DTHN in different scenes, several scenes are selected as demonstrations, as shown in Fig. 8. A green bounding box indicates a detection result with similarity higher than 0.5.

Person search is difficult for several reasons, such as camera distance, occlusion, resolution, complex backgrounds, and lighting conditions. DTHN can extract the features of the target well thanks to the DTH structure. The visualization demonstrates the model's ability to make sound judgments in a variety of difficult situations, proving its effectiveness.

The network takes the query picture as the target and searches for the person in the gallery. In case (1), the target is a girl dancing on a dance floor. Despite the dim lighting and dance movements that may make the target difficult to recognize, the model is still able to find the target among the many dancers in the scene. In case (2), the target is a young man whose lower body is occluded by a suitcase. Despite the missing information about the lower half, the model can still locate the target in a crowded scene based on the available information, even with the target's back toward the camera. In case (3), the target is a male with his back to the camera. In the absence of frontal information, the model does a good job of identifying the target based on other cues such as clothing. In a similar back-view scene where the target has removed clothing, the model is still able to recognize the target correctly.

    5 Conclusion and Outlook

Having noted the challenges of occlusion and efficiency in end-to-end person search, we propose DTHN to address them. We use two Transformer heads to handle box detection and re-ID separately, performing high-quality bounding box feature extraction and high-quality re-ID feature extraction. DTHN outperforms existing methods on the CUHK-SYSU dataset and achieves competitive results on the PRW dataset, which demonstrates the method's superior structural design and effectiveness.

Our method is slightly slower than traditional CNN methods because the scaled dot-product attention used in the Transformer consumes more computational resources. However, thanks to the compact size of the Transformer, we cut down the number of parameters compared to traditional CNNs, which gives us hope for deployment on terminal devices. Despite the good results, we believe there is still room for improvement in our approach, whether through better and more convenient attention computation methods or through adaptive attention mechanisms. Eventually, we may be able to create a pure Transformer model, using different attention heads in a single Transformer to accomplish different tasks. This is the main focus of our future work. We believe that the deployment of person search on terminal devices is just around the corner.

Acknowledgement: We thank our laboratory colleagues for their support of this paper.

Funding Statement: This research is supported by the Natural Science Foundation of Shanghai under Grant 21ZR1426500, and the National Natural Science Foundation of China under Grant 61873160.

Author Contributions: The authors confirm their contribution to the paper as follows: study conception and design: Cheng Feng; data collection: Cheng Feng; analysis and interpretation of results: Cheng Feng; draft manuscript preparation: Cheng Feng, Dezhi Han, Chongqing Chen. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The data that support the findings of this study are available from the corresponding author, Cheng Feng, upon reasonable request.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
