
    DAAPS:A Deformable-Attention-Based Anchor-Free Person Search Model

    Computers Materials & Continua, 2023, Issue 11

    Xiaoqi Xin,Dezhi Han and Mingming Cui

    School of Information Engineering,Shanghai Maritime University,Shanghai,201306,China

    ABSTRACT Person Search is a task involving pedestrian detection and person re-identification, aiming to retrieve person images matching a given objective attribute from a large-scale image library. Person Search models need to understand and capture the detailed features and context information of smaller objects in the image accurately and comprehensively. The current popular Person Search models, whether end-to-end or two-step, are based on anchor boxes. However, due to the limitations of anchors themselves, such models inevitably suffer from disadvantages such as the imbalance of positive and negative samples and redundant computation, which affect their performance. To address the problem of fine-grained understanding of target pedestrians that are small or appear in complex scenes, this paper proposes a Deformable-Attention-based Anchor-free Person Search model (DAAPS). Fully Convolutional One-Stage (FCOS), a classic Anchor-free detector, is chosen as the model's infrastructure. The DAAPS model is the first to combine an Anchor-free Person Search model with the Deformable Attention Mechanism, which guides the model to adaptively adjust its receptive field. The Deformable Attention Mechanism helps the model focus on critical information and effectively mitigates the accuracy loss caused by the absence of anchor boxes. Experiments prove the adaptability of the attention mechanism to the Anchor-free model. Besides, with an improved ResNeXt+ network frame, the DAAPS model adopts the Triplet-based Online Instance Matching (TOIM) Loss function to achieve a more precise end-to-end Person Search task. Simulation experiments demonstrate that the proposed model has higher accuracy and better robustness than most Person Search models, reaching 95.0% mean Average Precision (mAP) and 95.6% Top-1 on the CUHK-SYSU dataset, and 48.6% mAP and 84.7% Top-1 on the Person Re-identification in the Wild (PRW) dataset, respectively.

    KEYWORDS Person Search; anchor-free; attention mechanism; person detection; pedestrian re-identification

    1 Introduction

    Person Search aims to locate and detect all pedestrian targets in a given image or video, providing their positions and related information. A Person Search model usually includes two tasks: person detection and person re-identification (re-ID) [1]. The primary purpose of pedestrian detection is to automatically detect and locate pedestrians in images or videos, while person re-ID refers to the task of matching different images of the same pedestrian to their corresponding identity embeddings through deep learning. The Person Search model is more complex and has higher practical value due to its involvement in detecting, recognizing, and inferring relationships among multiple individuals [2].

    In practical applications, the difficulty of improving the accuracy of the Person Search task centers on the fine-grained understanding of the image. Distinguishing similar targets requires detailed analysis and comparison of their fine-grained features while minimizing interference from complex background factors such as changes in the appearance of pedestrian targets and crowds. Therefore, algorithms with strong robustness, high accuracy, and high efficiency need to be designed for Person Search tasks to cope with these challenges.

    Currently, mainstream deep learning-based Person Search models typically utilize neural networks to learn image features and then perform detection through object classification and position regression. Such models can be further divided by their characteristics into one-step, two-step, and one-step two-stage Person Search models [3-8], as shown in Fig. 1.

    Figure 1:Classification and comparison of three person search models

    (a) The one-step Person Search model [9], also known as the end-to-end Person Search model, directly outputs the position and size information of pedestrian targets from the input image. This model type is usually faster and can achieve real-time detection. However, its detection accuracy is relatively lower due to the lack of an explicit candidate region generation process. (b) The two-step Person Search model first generates candidate boxes, then performs classification and position regression to obtain the final detection results. This model type usually has higher detection accuracy but needs to process many candidate regions, resulting in high computational complexity and relatively slow speed. (c) The one-step two-stage model employs a Region of Interest Align (ROI-Align) layer to aggregate features from the detected bounding boxes, allowing detection and re-ID to share these features. It adopts a two-stage detector, such as Faster Region-based Convolutional Neural Networks (R-CNN) [10].

    Anchors [10] are commonly used prior information in object detection tasks: a few fixed sizes and aspect ratios of anchor boxes are set to predict the position and size of targets. Anchor boxes can improve the accuracy of Anchor-based models, but they also suffer from over-parameterization, which requires manual tuning. In contrast, Anchor-free models [3,4] do not require predefined anchors but directly detect, extract, and recognize the human regions in the image. An Anchor-free model does not rely on prior boxes and can directly regress the position and size of the target, thus improving computational efficiency.

    Anchor-free models have received much research attention due to their simple and fast construction. Some Anchor-free models introduce an Aligned Feature Aggregation (AFA) module built on the FCOS object detection framework. The FCOS architecture adopts a basic Residual Network (ResNet) backbone and a Feature Pyramid Network (FPN) to fuse multi-scale features, then deploys a decoupled detection head to detect targets at each scale separately. The network structure is shown in Fig. 2. AFA reshapes some modules of the FPN by utilizing deformable convolutional kernels for feature fusion to generate more robust re-identification embeddings, overcoming the problem that Anchor-free models cannot learn and detect aligned features for specific regions.

    Figure 2:FCOS network architecture

    The Attention Mechanism is widely applied in Computer Vision tasks to improve the utilization of crucial information by dynamically assigning importance, or weight, to different input parts in deep learning models. Common attention mechanisms include Spatial Attention, Channel Attention, and Self-Attention. The Deformable Attention [11] mechanism, as an extension of Self-Attention, can learn feature deformations and adaptively adjust feature sampling positions to better handle target deformations and pose changes.

    Besides, the design of loss functions is a crucial aspect of improving Person Search models. Common loss functions include the Triplet Loss, the Online Instance Matching (OIM) [1] Loss, etc. The TOIM Loss function deployed in the DAAPS model combines these two loss functions to accurately match instances across frames or videos online [12].

    This paper proposes the Deformable-Attention-based Anchor-free Person Search (DAAPS) model inspired by the above research methods. Our main contributions are as follows:

    This paper proposes a novel Person Search model based on an Anchor-free detector that incorporates the Deformable Attention mechanism for the first time. The proposed model first extracts image features by combining the Deformable Attention mechanism and convolutional layers, aligns the features with the AFA module, uses the FCOS detection head for target detection, and then feeds the detections into the person re-identification module, which combines the character features with labels to obtain the search results.

    The improved Anchor-free feature extraction network, ResNeXt+, adds network branches and enhances the model's scalability. The group convolution structure of ResNeXt+ can better extract multi-scale features, making the model more adaptable to complex Person Search tasks. Furthermore, the TOIM Loss function, a more suitable choice, is adopted to better adapt to target variations, thus improving the model's detection accuracy.

    To demonstrate that these optimizations help the model understand images at a finer granularity, the paper conducts extensive experiments, in which mAP and Top-1 reach 95.0% and 95.6% on the CUHK-SYSU dataset, and 48.6% and 84.7% on the PRW dataset, respectively. The experimental results show that the DAAPS model outperforms the current best Anchor-free models, fully demonstrating its rationality and effectiveness. In addition, the study conducts ablation experiments on various parts of the model, proving that the proposed modifications and optimizations are more suitable for the Anchor-free model and thus illustrating the robustness and superiority of the present model.

    The remainder of this paper is structured as follows. Section 2 reviews related work on Anchor-free Person Search models and attention mechanisms. Section 3 depicts the implementation of the DAAPS model in detail. Section 4 analyzes and compares the experimental results, demonstrating the effectiveness of the proposed model. Finally, the paper is summarized and future research is discussed in Section 5. Table 1 lists common abbreviations used in the paper for reference.

    Table 1: Table of common abbreviations

    2 Related Work

    This section reviews existing Person Search models, split by their use of anchors and by their use of attention mechanisms, respectively, to highlight the proposed model.

    In this paper, databases such as Institute of Electrical and Electronics Engineers (IEEE) Xplore, Web of Science, the Engineering Index, ScienceDirect, Springer, and arXiv were searched through Google Scholar for Person Search models from the last five years. Various combinations of terms were used as search queries, e.g., "Person Search model", "Anchor-free", "pedestrian re-ID", and "Attention Mechanism". After screening, 54 suitable papers from conferences, journals, and online publications were used as references for this paper. Deep learning models are gradually becoming one of the main targets of cyber attacks, including adversarial attacks, model spoofing attacks, and backdoor attacks [13-15]. How to reduce the impact of attacks and enhance robustness is also one of the focuses of model design.

    2.1 Person Search Models Split by Anchor

    Person Search, object detection, and person recognition models have developed dramatically with the in-depth study of deep learning. Faster R-CNN is a classic two-step, Anchor-based target detection model that can also be used for Person Search [10,16,17]. Chen et al. [18] combined Faster R-CNN and Mask R-CNN to search for people using two parallel streams: one stream for object detection and feature extraction, and the other for generating semantic segmentation masks of pedestrians to further improve search accuracy. He et al. [19,20] implemented a Siamese architecture instead of a single stream for an end-to-end training strategy, with the detection module optimized on the basis of Faster R-CNN. However, when a human object is occluded or deformed, anchors cannot accurately capture the shape and position of the object, which affects the detection performance of Anchor-based models.

    Anchor-free detection is widely used in image detection [3,4,21-25], but its application to the Person Search model is recent. Law et al. [26] proposed the earliest Anchor-free target detection model, CornerNet, which does not rely on anchor boxes but converts target detection into the task of detecting object corners. Subsequently, many classic Anchor-free detection models were proposed [3,4,9,21]. Yan et al. [4] proposed the AlignPS model based on FCOS, introducing Anchor-free detection into the Person Search task for the first time. In the AlignPS model, the AFA module addresses the issues of scale, region, and task misalignment caused by the Anchor-free design.

    Nevertheless, as Anchor-free models usually rely on the prediction of key points or center points, the precise positioning of targets is limited to some extent, and other methods are needed to improve the model's accuracy. There is still room for optimization in the accuracy of person detection and recognition and in the model architecture.

    2.2 Person Search Models Based on Attention Mechanism

    The Attention Mechanism can help improve the accuracy of detection and matching in the Person Search task [21,24,27-34]. Chen et al. [21] introduced a channel attention module into an Anchor-free model to express different forms of occlusion, and made full use of a spatial attention module to highlight the target area of occluded objects. Zhong et al. [33] enhanced feature extraction by incorporating a position-channel dual attention mechanism, which aims to improve the accuracy of feature representation by selectively attending to important spatial and channel-wise information. Zheng et al. [34] introduced a novel hierarchical Gumbel attention network, which utilizes the Gumbel top-k re-parameterization algorithm. Designed for text-based person search, this network selects semantically relevant image regions and words/phrases from images and texts, enabling precise matching by aligning and calculating similarities between the selected regions. Ji et al. [35] developed a Description Strengthened Fusion-Attention Network (DSFA-Net), which employs an end-to-end fusion-attention structure. DSFA-Net consists of a fusion subnetwork and an attention subnetwork, leveraging three attention mechanisms. This architecture addresses the challenges in Person Search by enhancing the fusion of multimodal features and applying attention mechanisms to capture relevant information.

    However, according to the experiments in this paper, Deformable Attention brings higher detection accuracy and is more suitable for the Anchor-free mechanism than channel or spatial attention. Cao et al. [36] and Chen et al. [37] proposed adding the Deformable Attention Mechanism to the Transformer [38] for the Person Search model. Although the Transformer works well for tasks such as long text or images, it has high computing and memory requirements because the self-attention mechanism computes associations between all locations. Especially for longer input sequences, model training and inference become more time-consuming and resource-intensive. These are the reasons why the proposed model pairs the Deformable Attention mechanism with an Anchor-free FCOS structure. Previous research has been limited to improving model performance by changing the detector or optimizing the re-identification algorithm; it focuses only on the mechanism being added, without considering the effects of other attention mechanisms or validating model performance under them. This paper is the first to combine a deformable attention mechanism with an Anchor-free Person Search model while comparing it against other attention mechanisms, filling the gap in understanding how attention mechanisms affect the performance of pedestrian detection models. In addition, most previous studies have considered only the anchor boxes and the attention mechanism themselves, without considering how to combine them and what structures are needed for the two to reinforce each other, which is one of the considerable differences between this and prior studies.

    3 Method

    In this section, the network architecture of DAAPS, the improved ResNeXt+ structure, the implementation of the Deformable Attention Mechanism, and the calculation of the loss function are introduced in detail.

    3.1 Network Architecture

    The infrastructure of the proposed DAAPS model is designed based on FCOS. As shown in Fig. 3, for an input image I ∈ R^{3×H×W}, the DAAPS model can simultaneously locate multiple target pedestrians in the image and learn re-ID embeddings. Specifically, the model first extracts image features and obtains three levels of features according to the feature pyramid. A Deformable Attention Mechanism then processes them to better handle objects of different scales, orientations, and shapes. Feature maps {P3, P4, P5} are obtained by down-sampling and weighting with strides of 8, 16, and 32. Subsequently, an AFA module is utilized to fuse features of different scales into a single embedding vector. The AFA module has multiple branches, each performing a weighted fusion of features at different scales and producing a fused, flattened feature vector. Then, an FCOS detection head is employed for object detection. It comprises two branches, namely the classification branch and the regression branch, each including four 3×3 deformable convolutional layers. The classification branch classifies each pixel's position, determines whether it is a queried object, and predicts the object's category, while the regression branch regresses each pixel's position and predicts the object's position and size.
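As a minimal sketch (a hypothetical helper, not the authors' code), the spatial shapes of the pyramid levels {P3, P4, P5} produced by strides of 8, 16, and 32 can be computed for a given input size, here illustrated with the paper's 1500×900 test resolution:

```python
def fpn_level_shapes(height, width, strides=(8, 16, 32)):
    """Return the (H, W) spatial shape of each FPN level {P3, P4, P5}.

    Each level down-samples the input by its stride; ceil division is
    used so partial windows at the border still produce a cell.
    """
    return [(-(-height // s), -(-width // s)) for s in strides]

# Example: the test-time input size used in the paper (1500 x 900).
shapes = fpn_level_shapes(900, 1500)
```

This makes explicit why three levels suffice for multi-scale pedestrians: a stride-8 map keeps fine detail for small figures, while the stride-32 map covers large ones cheaply.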

    Figure 3:Network architecture of DAAPS

    3.2 ResNeXt+Optimization

    The proposed DAAPS model is based on the classic Anchor-free FCOS model, which incorporates a single-stage object detection method and multi-scale feature fusion techniques. Unlike its predecessor, DAAPS introduces group convolution layers with more channels on top of the ResNet backbone to obtain ResNeXt [39] and deeper feature extraction. Subsequently, a pruning algorithm removes unimportant connections, reducing network complexity and improving the model's speed and accuracy. This improved network structure is referred to as the ResNeXt+ architecture in this paper.

    For a given D-dimensional input x = [x_1, x_2, ..., x_D] with corresponding filter weights w = [w_1, w_2, ..., w_D], a linearly activated neuron without bias can be expressed as:

    $$\sum_{i=1}^{D} w_i x_i$$

    That is, the data is split into individual features with low-dimensional embeddings. Each low-dimensional embedding undergoes a linear transformation and is then aggregated using unit addition. This split-transform-merge structure can be generalized by replacing the linear transformation with a more general function so that every branch uses the same topology. The aggregated transformation is as follows:

    $$\mathcal{F}(x) = \sum_{i=1}^{C} T_i(x)$$

    Here, C is the size of the set of transformations to be aggregated, namely the cardinality, and T_i(x) is an arbitrary transformation, such as a series of convolution operations. ResNeXt+ is based on group convolutions, a strategy between regular convolutional kernels and depth-separable convolutions; by controlling the cardinality, a balance between the two strategies is achieved. Combined with the robust residual network, the complete ResNeXt+ structure is obtained. That is, a shortcut is added to the simplified Inception architecture, which is expressed as:

    $$y = x + \sum_{i=1}^{C} T_i(x)$$
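The split-transform-merge aggregation with a shortcut can be sketched in a few lines (a toy scalar example, not the network code; the branch functions are arbitrary stand-ins for the T_i):

```python
def aggregated_transform(x, transforms):
    """Split-transform-merge with a shortcut: y = x + sum_i T_i(x).

    `transforms` is a list of C callables (the cardinality); each plays
    the role of one low-dimensional embed-transform branch.
    """
    return x + sum(t(x) for t in transforms)

# Toy example with cardinality C = 3, using scalar branches.
branches = [lambda v: 0.5 * v, lambda v: -0.25 * v, lambda v: 2.0 * v]
y = aggregated_transform(4.0, branches)  # 4 + (2 - 1 + 8) = 13
```

The key property visible here is that every branch applies the same kind of transformation, so increasing cardinality C adds capacity without introducing new hyperparameters per branch.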

    ResNeXt+ adopts a Visual-Geometry-Group-like block stacking method, and the stacking follows two principles: (a) if blocks produce spatial maps of the same size, they share the same hyperparameters; (b) whenever the spatial map is down-sampled by a factor of 2, the width of the convolutional kernels is multiplied by 2. The structure is shown in Fig. 4.

    Figure 4:ResNeXt+structure

    That is, the input channel count is reduced from 256 to 128 by a 1×1 convolutional layer, processed by group convolution with 3×3 kernels in 32 groups, and then up-dimensioned by another 1×1 convolutional layer. The output is added to the input to obtain the final output. ResNeXt+ employs group convolution to increase the width of the model. Compared with a traditional convolutional layer, group convolution splits the input features into several small groups, performs convolution on each group separately, and finally concatenates the results of all groups.
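The parameter saving from grouping can be checked with simple arithmetic (a sketch; the channel numbers match the bottleneck described above):

```python
def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a k x k convolution with grouped channels.

    Each output channel only connects to c_in/groups input channels,
    so the weight count shrinks by a factor of `groups`.
    """
    assert c_in % groups == 0 and c_out % groups == 0
    return (c_in // groups) * c_out * k * k

# The grouped 3x3 stage of the ResNeXt+ bottleneck: 128 channels, 32 groups.
dense = conv_params(128, 128, 3)               # ordinary 3x3 convolution
grouped = conv_params(128, 128, 3, groups=32)  # 32-group 3x3 convolution
```

With 32 groups the 3×3 stage needs 32× fewer weights than a dense convolution of the same width, which is what allows ResNeXt+ to widen the model without a matching cost increase.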

    Pruning is then implemented through a filter pruning algorithm [40] to optimize the network. Specifically, the L1 norm is first used as the filter metric to determine which filters are more critical, and the L1 norms are normalized to obtain the relative importance of each filter. A global pruning threshold is determined by setting the percentage of filter importance to retain across the entire network, and the filters are pruned according to this threshold. After pruning, the remaining filters are reattached to the network. Finally, fine-tuning is performed with a lower learning rate to recover performance, and the fine-tuned network is initialized from the weights before pruning. This approach significantly reduces the number of parameters and the computational complexity of ResNeXt+ without losing much performance, resulting in a more efficient network.
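The L1-norm ranking step can be sketched as follows (a simplified stand-in for the cited algorithm: filters are plain weight lists and the global threshold is expressed as a keep ratio):

```python
def prune_filters(filters, keep_ratio):
    """Rank filters by normalized L1 norm and keep the top fraction.

    `filters` is a list of filter weight lists; returns the sorted
    indices of the filters that survive pruning.
    """
    norms = [sum(abs(w) for w in f) for f in filters]
    total = sum(norms) or 1.0
    importance = [n / total for n in norms]  # normalized L1 importance
    order = sorted(range(len(filters)),
                   key=lambda i: importance[i], reverse=True)
    n_keep = max(1, int(len(filters) * keep_ratio))
    return sorted(order[:n_keep])

# Three toy filters; keeping 2/3 drops the smallest-norm filter (index 0).
kept = prune_filters([[0.1, -0.1], [1.0, 2.0], [0.5, 0.5]], keep_ratio=2 / 3)
```

In the real pipeline the kept filters would then be fine-tuned at a reduced learning rate, as the paragraph above describes.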

    ResNeXt+ has advantages over ResNet in terms of better performance and higher accuracy. Because ResNeXt+ can simultaneously increase the depth and width of the model, it can better adapt to complex visual tasks. In addition, a more complex residual block is employed in the ResNeXt+ structure, further improving the model's nonlinear expressive ability. Therefore, applying ResNeXt+ can enlarge the receptive field of FCOS and boost the effectiveness of the network. At the same time, ResNeXt+ has better generalization performance, making FCOS training and inference faster and more resource-efficient.

    3.3 Deformable Attention Mechanism

    For a given input feature map x ∈ R^{C×H×W}, C is the number of channels, and H and W are the height and width, respectively. First, for each deformable convolution kernel m, the method computes the sampling offset Δp_mqk and the attention weight A_mqk for each sampled key k based on a linear projection of the query vector z_q at position p_q:

    $$\Delta p_{mqk} = W_{mk}^{(p)} z_q, \qquad A_{mqk} = \mathrm{softmax}_k\left(W_{mk}^{(A)} z_q\right)$$

    where A_mqk is a scalar attention weight with a value range of [0,1], and W_{mk}^{(p)} and W_{mk}^{(A)} are the weights exploited to compute the offsets and the attention distribution in the deformable convolutional kernels, both of which are learnable parameters. Subsequently, the method multiplies the feature vector x_mqk located at p_q + Δp_mqk in x by A_mqk to obtain a weighted feature vector y_mqk:

    $$y_{mqk} = A_{mqk} \cdot x\left(p_q + \Delta p_{mqk}\right)$$

    Finally, the y_mqk of all deformable convolution kernels are summed to obtain the final output feature vector:

    $$y_q = \sum_{m=1}^{M} \sum_{k=1}^{K} y_{mqk}$$

    To sum up, the calculation of the Deformable Attention module includes three steps: computing the sampling offsets and attention distributions, convolving the input feature map, and performing the weighted addition to obtain the final output. The advantage of the Deformable Attention Mechanism lies in its ability to accurately capture long-range dependencies and the geometric structure of the target, and to exchange information between multi-scale feature maps, thus improving the accuracy of object detection and recognition.
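The sample-and-weight step can be illustrated on a 1-D feature sequence (a heavily simplified sketch: a single query and head, with nearest-neighbor rounding in place of the bilinear interpolation real implementations use):

```python
def deformable_attention_1d(features, q_pos, offsets, weights):
    """One query, one head: sample K offset positions and weight them.

    Implements y_q = sum_k A_qk * x[p_q + dp_qk] with indices clamped
    to the valid range of the feature sequence.
    """
    out = 0.0
    for dp, a in zip(offsets, weights):
        idx = min(max(round(q_pos + dp), 0), len(features) - 1)
        out += a * features[idx]
    return out

feats = [1.0, 2.0, 4.0, 8.0]
# Query at position 1 attends to positions 0, 1, and 3 via learned offsets.
y = deformable_attention_1d(feats, q_pos=1, offsets=[-1.0, 0.0, 2.0],
                            weights=[0.2, 0.3, 0.5])
```

Because only K sampled keys are visited per query instead of every position, the cost stays linear in the number of queries, which is the efficiency advantage over full self-attention discussed in Section 2.2.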

    3.4 Optimization Program

    The TOIM Loss function can be expressed as:

    $$L_{TOIM} = L_{OIM} + L_{tri}$$

    where L_OIM is the OIM [1] Loss function. OIM aims to match the predicted instances in the image with the real instances. It first generates matching scores between the predicted and real instances and then applies the matching scores to calculate the loss, which can be expressed as:

    $$L_{OIM} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{k=1}^{K} \mathbb{I}(y_i = k)\, \log \frac{\exp\left(w_k^{\top} f_i / \tau\right)}{\sum_{j=1}^{K} \exp\left(w_j^{\top} f_i / \tau\right)}$$

    Here, N is the batch size, K is the total number of classes, y_i represents the class label of sample i, f_i represents the feature vector of sample i, w_k represents the weight vector of class k, I(·) represents the indicator function, and τ is the temperature parameter. OIM is designed to maximize the score of each predicted instance for its underlying actual class and minimize the scores of the other classes.
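A toy re-implementation of this softmax cross-entropy over identity weights (a sketch only: a plain list of weight vectors stands in for OIM's lookup table, and the unlabeled-identity queue is omitted):

```python
import math

def oim_loss(features, labels, lut, tau=0.1):
    """Average negative log-likelihood of each feature against a lookup
    table (LUT) of per-identity weight vectors, with temperature tau."""
    loss = 0.0
    for f, y in zip(features, labels):
        # Similarity score of the feature against every identity vector.
        scores = [sum(wi * fi for wi, fi in zip(w, f)) / tau for w in lut]
        log_z = math.log(sum(math.exp(s) for s in scores))
        loss += -(scores[y] - log_z)  # cross-entropy for the true identity
    return loss / len(features)
```

For a feature aligned with its own identity vector and τ = 1, the loss reduces to log(1 + e^{-1}), matching the closed form of the two-class softmax.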

    L_tri stands for the Triplet Loss function, mainly deployed to address the association problem among multiple images of the same identity in pedestrian re-identification. L_tri works by encouraging the embeddings of positive instances to be closer to the query target while pushing the embeddings of negative instances away. It is computed as follows: for each training sample, two additional images are selected, one belonging to the same identity as the sample and the other to a different identity.

    Given a training set containing m triplets (A, P, N), where A represents the targeted (anchor) sample, P represents the positive sample, and N represents the negative sample that does not belong to the same class as the anchor, the Triplet Loss function can be represented as follows:

    $$L_{tri} = \sum_{i=1}^{m} \max\left(\left\| f\left(A^{(i)}\right) - f\left(P^{(i)}\right) \right\|_2^2 - \left\| f\left(A^{(i)}\right) - f\left(N^{(i)}\right) \right\|_2^2 + \alpha,\; 0\right)$$

    where f(x) represents the embedding that maps the input sample x to the feature space; A^(i), P^(i), and N^(i) represent the anchor, positive, and negative samples of the i-th triplet, respectively; ‖·‖_2 is the L2 norm; and α is the margin, which indicates the minimum distance by which positive and negative samples should differ. By minimizing the Triplet Loss function, a model can be trained to map different images of the same person to nearby points in the feature space.
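A single triplet term can be computed directly from the formula above (a sketch operating on already-embedded vectors rather than images):

```python
def triplet_loss(anchor, positive, negative, margin=0.3):
    """max(||f_a - f_p||^2 - ||f_a - f_n||^2 + margin, 0) for one triplet.

    Inputs are embedding vectors; squared Euclidean distance is used,
    and the margin value here is an illustrative choice.
    """
    d_ap = sum((a - p) ** 2 for a, p in zip(anchor, positive))
    d_an = sum((a - n) ** 2 for a, n in zip(anchor, negative))
    return max(d_ap - d_an + margin, 0.0)
```

When the negative is already farther from the anchor than the positive by more than the margin, the term is zero: such "easy" triplets contribute no gradient, which is why re-ID training typically mines hard triplets.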

    4 Experiment

    This section describes the experimental process and results from eight aspects: the datasets used in the experiments, the model's implementation details, the evaluation indicators, the attention ablation experiments, the comparison of experimental results, the loss function effect, the ResNeXt+ effect, and the visualization results.

    4.1 Dataset

    The CUHK-SYSU [1] dataset is a large-scale pedestrian detection dataset of 1.14 gigabytes (GB) shared by the author Prof. Shuang Li. The images come from two sources: authentic street snapshots and movies or TV shows. 12,490 images and 6,057 people to be detected were collected from hundreds of street scenes, and 5,694 images and 2,375 people to be detected were selected from movies or TV shows. Unlike re-ID datasets that manually crop images of the queried person, CUHK-SYSU is more closely related to real-world scenarios. The data is divided into training and testing sets: the training set consists of 11,206 images and 5,532 people to be queried, and the testing set covers 6,978 images and 2,900 people to be queried. The images and people in the training and testing sets are distinct.

    The PRW [41] dataset is an extension of the Market1501 dataset, typically employed for end-to-end pedestrian detection and person re-identification in raw video frames and for evaluating Person Search and pedestrian re-ID in the wild. The PRW dataset includes 11,816 video frames captured by six synchronized cameras, with corresponding mat-file annotations. Each mat file records the position of each bounding box within the frame along with its ID, and the dataset also contains 2,057 query boxes. The PRW dataset, available as an open-source repository on GitHub, is 2.67 GB. It encompasses a training set of 5,704 images with 18,048 corresponding annotations and a test set of 6,112 images with 250,062 annotations.

    4.2 Implementation Details

    The DAAPS model is implemented using PyTorch and the MMDetection toolkit. ResNeXt101+ serves as the backbone of the model. An FPN with 3×3 deformable convolutions is applied as the neck, with the Deformable Attention Mechanism enabled by default. DAAPS is trained with the Stochastic Gradient Descent (SGD) optimizer, with an initial learning rate of 0.0015, momentum of 0.9, and weight decay of 0.0005. Training and testing are conducted on an NVIDIA V100-SXM2-32 GB GPU. By default, the model is trained for 24 epochs. Linear learning rate warm-up is used during training, with 1141 warm-up iterations and a ratio of the warm-up learning rate to the initial learning rate of 1/200. The learning rate is adjusted at the 16th and 22nd epochs, and the remaining epochs are trained with the adjusted learning rate. During training, the longer side of each image is resized to a random length between 667 and 2000, while test images are resized to 1500×900.
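The learning-rate schedule described above can be sketched as a pure function of the training step (an illustrative reconstruction: the 10× decay factor at epochs 16 and 22 is an assumption, since the paper does not state the factor):

```python
def learning_rate(epoch, it, iters_per_epoch, base_lr=0.0015,
                  warmup_iters=1141, warmup_ratio=1 / 200,
                  decay_epochs=(16, 22), gamma=0.1):
    """Linear warm-up from base_lr * warmup_ratio over `warmup_iters`
    iterations, then a step decay by `gamma` at each epoch in
    `decay_epochs` (gamma is an assumed, typical value)."""
    step = epoch * iters_per_epoch + it
    if step < warmup_iters:
        alpha = step / warmup_iters  # 0 at start, 1 at end of warm-up
        return base_lr * (warmup_ratio * (1 - alpha) + alpha)
    drops = sum(1 for e in decay_epochs if epoch >= e)
    return base_lr * (gamma ** drops)
```

At step 0 the rate is 0.0015/200 = 7.5e-6, ramping linearly to 0.0015 by iteration 1141, then stepping down at epochs 16 and 22.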

    4.3 Evaluation Index

    The performance of the Person Search model is evaluated through mAP and Top-1 accuracy. mAP is one of the most commonly utilized evaluation metrics in object detection. First, for each category, all detection results are sorted by confidence from high to low. Then, based on the matching relationship between the actual and predicted categories of the detection results, the Precision-Recall curve is calculated at different confidence thresholds, and the area under the curve is obtained for each category as its AP. Finally, the AP values of all categories are averaged to obtain the mAP of the model. The calculation formula is as follows:

    $$mAP = \frac{1}{C} \sum_{c=1}^{C} \frac{1}{R_c} \sum_{i=1}^{R_c} \frac{TP\left(rank(i)\right)}{rank(i)}$$

    where C is the number of categories, R_c is the number of positive examples of class c, rank(i) is the rank of the i-th positive detection in the sorted results, and TP(j) represents the number of positive cases correctly detected among the first j detection results.
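The per-class AP and its average can be computed from a confidence-ranked list of hit/miss flags (a sketch of the formula above; real detection evaluation additionally involves IoU matching, which is omitted here):

```python
def average_precision(ranked_hits):
    """AP for one class from a confidence-ranked list of 1 (true positive)
    / 0 (false positive) flags: the mean over positives of TP(j)/j."""
    tp, precisions = 0, []
    for j, hit in enumerate(ranked_hits, start=1):
        if hit:
            tp += 1
            precisions.append(tp / j)  # precision at this positive's rank
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(per_class_hits):
    """mAP = mean of the per-class AP values."""
    aps = [average_precision(h) for h in per_class_hits]
    return sum(aps) / len(aps)
```

For example, a ranking hit/miss/hit gives AP = (1/1 + 2/3)/2 = 5/6, showing how a false positive ranked above a true positive lowers the score.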

    Top-1 refers to the top-1 accuracy of prediction results in classification tasks, namely the proportion of cases in which the model's highest-probability prediction is correct. ΔmAP stands for the change, or delta, in mean Average Precision: a positive ΔmAP indicates a performance improvement, while a negative ΔmAP suggests a decrease in performance. Similarly, ΔTop-1 represents the change in the Top-1 accuracy metric and helps quantify the improvement or degradation in classification performance. These four indexes are selected as the evaluation indicators during the experiments.

    4.4 Ablation Studies of Attention Mechanism

    To demonstrate the need for the Deformable Attention Mechanism, ablation experiments are carried out on the CUHK-SYSU dataset, using an Anchor-free model without attention mechanisms as the Base. The impact of attention mechanisms on the algorithm is investigated, and the effectiveness of the Deformable Attention Mechanism in DAAPS is verified. The experimental results are shown in Table 2.

    Table 2: Effects of different attention mechanisms on the CUHK-SYSU dataset

    Adding the Convolutional Block Attention Module (CBAM), a combination of the Spatial and Channel Attention Mechanisms, to the Base results in slight increases in the mAP and Top-1 indexes of 0.6% and 0.9%, respectively. Both mAP and Top-1 also perform well when the combination is replaced with the Attention Mechanism from the Dual Attention Network (DANet). Nevertheless, none of these improvements compares to DAAPS with the Deformable Attention Mechanism, which boosts the mAP and Top-1 indexes by 1.9% and 2.2% on the CUHK-SYSU dataset. The Deformable Attention Mechanism is thus more suitable for Anchor-free models.

    To demonstrate the effectiveness and irreplaceability of the Deformable Attention Mechanism in the DAAPS model, this paper adds the CBAM Attention Mechanism to the model to work together with the Deformable Attention Mechanism. Theoretically, superimposing attention mechanisms can help the model learn the importance of different parts of the input and adjust its attention accordingly to better capture relevant features. However, experimental data on the CUHK-SYSU dataset show that including CBAM results in a 0.4% decrease in both mAP and Top-1. Although the model's performance is only slightly reduced, this fully proves the robustness of the proposed model, as shown in Table 3.

    Table 3: Prove the robustness of DAAPS on the CUHK-SYSU dataset

    4.5 Comparison to State-of-the-Art

    The results compared with state-of-the-art models are shown in Table 4; our model outperforms most existing one-step and two-step Person Search models. The best result of the DAAPS model improves on the previous best model, AlignPS+, by 0.5% in mAP and 2.1% in Top-1 on the CUHK-SYSU dataset. This advantage also holds on the PRW dataset, where the mAP is 1.7% higher than that of the previous best task-consistent two-stage framework (TCTS) model. Moreover, our model is based on an Anchor-free network architecture, which runs faster than other models. Additionally, more efficient optimization algorithms and hyperparameter tuning techniques enable the proposed model to achieve better performance within 24 training epochs.

    Table 4: Comparison of experimental results on the CUHK-SYSU and PRW datasets

    Due to the limited training data in the PRW dataset and the fact that its images are taken from six different camera viewpoints, the performance of all models on this dataset is constrained. Our model achieves the best mAP among all models. Although DAAPS's Top-1 accuracy on the PRW dataset is 0.2% lower than that of ABOS, the current best-performing one-step model, its mAP is 2.1% higher. This indicates an improvement in the model's overall performance in terms of precision and recall, which are critical factors in tasks such as object detection and person search, so the trade-off is reasonable and leads to a more comprehensive performance profile.

    It can be seen that TCTS [43] achieves the highest Top-1 accuracy on PRW, but it is a two-step model that requires a dedicated re-ID model to process the detection results. As an Anchor-free model, DAAPS adaptively learns the feature representation of the target without predefined fixed anchor points, so it is unaffected by the selection and distribution of anchors. Moreover, it requires no additional computation to interpolate feature representations between anchor points, making it more robust and efficient.
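The anchor-free detection head at the heart of this design can be sketched concretely. The following is an assumed, FCOS-style simplification (DAAPS builds on FCOS, but the exact head is not reproduced here): each feature-map location inside a ground-truth box directly regresses its distances to the four box sides, with no anchor priors, and a centerness score down-weights locations far from the box centre.

```python
import math

def fcos_targets(px, py, box):
    """box = (x1, y1, x2, y2); returns (l, t, r, b) side distances for a
    point inside the box, or None if the point is a negative sample."""
    x1, y1, x2, y2 = box
    l, t, r, b = px - x1, py - y1, x2 - px, y2 - py
    if min(l, t, r, b) <= 0:
        return None
    return l, t, r, b

def centerness(l, t, r, b):
    """FCOS centerness: 1.0 at the box centre, decaying towards the edges."""
    return math.sqrt((min(l, r) / max(l, r)) * (min(t, b) / max(t, b)))

# A location at (50, 60) inside the pedestrian box (20, 30, 80, 120):
l, t, r, b = fcos_targets(50, 60, (20, 30, 80, 120))  # (30, 30, 30, 60)
c = centerness(l, t, r, b)                            # sqrt(1.0 * 0.5)
```

Because the regression target exists for every in-box location, no anchor matching or anchor-grid interpolation is needed, which is the efficiency argument made above.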

    4.6 Ablation Studies of Loss Function

    To justify the choice of the TOIM Loss function, the proposed model's performance is further evaluated with several different loss functions. As shown in Table 5, the composite TOIM Loss function yields better DAAPS performance than the Logarithmic Loss function or the Triplet Loss function alone. Compared to applying the Triplet Loss function with a Look-Up Table (LUT), TOIM increases mAP and Top-1 by 1.7% and 1.8% on the CUHK-SYSU dataset, and by 0.3% and 0.4% on the PRW dataset.
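The composite nature of TOIM can be made concrete with a hedged, pure-Python sketch: an OIM-style cross-entropy over similarities between a query feature and the identity prototypes stored in a lookup table, plus a standard triplet term. This is a simplified illustration, not the paper's exact formulation; the temperature and margin values are assumptions.

```python
import math

def cosine(a, b):
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def oim_loss(feature, lut, target, temperature=0.1):
    """Cross-entropy over similarities between a feature and LUT prototypes."""
    logits = [cosine(feature, proto) / temperature for proto in lut]
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[target]

def triplet_loss(anchor, positive, negative, margin=0.3):
    d = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return max(0.0, d(anchor, positive) - d(anchor, negative) + margin)

def toim_loss(feature, lut, target, positive, negative):
    """Composite loss: LUT-based classification plus metric-learning term."""
    return oim_loss(feature, lut, target) + triplet_loss(feature, positive, negative)

lut = [[1.0, 0.0], [0.0, 1.0]]  # one prototype per labelled identity
loss = toim_loss([0.9, 0.1], lut, target=0,
                 positive=[1.0, 0.0], negative=[0.0, 1.0])
```

Intuitively, the OIM term pulls each embedding toward its identity prototype, while the triplet term additionally enforces a margin between matched and mismatched pairs, which is why the composite loss outperforms either component alone in Table 5.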

    Table 5: Comparison of DAAPS under different loss functions on the CUHK-SYSU and PRW datasets

    4.7 Ablation Studies of ResNeXt+

    The study compares pedestrian detection models based on the ResNet, ResNeXt, and ResNeXt+ network structures with all other factors held constant; the experimental results are shown in Table 6. The accuracy and overall performance of the ResNeXt+-based model are higher than the others. Although the metric improvements are small, the comparison with the ResNet-based model should not be read naively: the optimized ResNeXt+ overcomes the adverse effects that plain ResNeXt has on the model and instead gives better results than ResNet.
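The efficiency motivation behind the ResNeXt family can be shown with a simple, illustrative parameter count (an assumption-laden sketch that ignores biases and batch normalization): ResNeXt replaces a plain 3x3 convolution with a grouped one, where the group count is the "cardinality".

```python
def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a k x k convolution with the given number of groups."""
    assert c_in % groups == 0 and c_out % groups == 0
    return (c_in // groups) * (c_out // groups) * k * k * groups

plain = conv_params(128, 128, 3)               # standard 3x3 convolution
grouped = conv_params(128, 128, 3, groups=32)  # ResNeXt-style, cardinality 32
# The grouped convolution uses 1/32 of the weights: 4608 vs. 147456.
```

The saved parameters can be reinvested in width or depth, which is the trade the ResNeXt+ variant tunes to avoid the adverse effects observed with plain ResNeXt.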

    Table 6: Comparison of DAAPS based on different networks on the CUHK-SYSU dataset

    4.8 Visualization Results

    The task of Person Search is challenging because various factors affect people's posture, clothing, facial expression, and other characteristics in real-world scenes, making it difficult to detect and recognize individuals, especially in low-light environments and occluded areas.

    Due to various factors (such as camera distance and resolution), the size of a pedestrian in a natural scene can vary greatly. Some pedestrians appear at relatively large distances, so they occupy only a small region of the image and are known as "small targets". For pedestrian detection algorithms, obtaining high recall and accuracy on small targets is essential. DAAPS addresses these issues by utilizing the Deformable Attention Mechanism to guide the model to focus adaptively on the salient features of the target person. This allows the model to identify small targets better and to perform well in complex and occluded environments. The effectiveness of the DAAPS model is further demonstrated through visualization, with results shown in Fig. 5.

    This paper chooses images with heavy occlusion, complex backgrounds, and dim light, which pose a significant challenge to Person Search models. The query person is given on the left, the DAAPS model is asked to find other images containing the same target in the large database, and the detection results are shown in blue boxes. The results show that the DAAPS model successfully recognizes and accurately locates the queried target, proving the proposed model's effectiveness.

    Figure 5:Visualization results

    5 Conclusion

    To reduce the impact of complex scenes and varying levels of occlusion on the model's accuracy in Person Search, this paper proposes the DAAPS model, which combines the Deformable Attention Mechanism with an Anchor-free architecture for the first time. In addition, the detection backbone ResNeXt+ of DAAPS, with enhanced scalability, extracts multi-scale features for improved adaptability in complex Person Search tasks. Moreover, applying the more effective TOIM Loss function in the re-ID module improves the discriminative ability of the embedding vectors. Simulation experiments demonstrate that the model's generalization ability and robustness are enhanced and that it performs well in practical settings, with mAP and Top-1 of 95.0% and 95.6% on the CUHK-SYSU dataset and 48.6% and 84.7% on the PRW dataset, respectively. The DAAPS model outperforms current Anchor-free models, showcasing its rationality and effectiveness. Extensive ablation experiments test the essential modules of the model; the results demonstrate the adaptability of the Deformable Attention Mechanism and the remaining components to the Anchor-free model, offering a substantial accuracy gain for the detector and providing ideas for later scholars studying Anchor-free Person Search models. Due to hardware limitations, the model proposed in this paper does not achieve its optimal performance. In the future, more refined algorithms and better hardware devices will be employed to enhance the real-time efficiency of the Person Search model.

    Acknowledgement:Not applicable.

    Funding Statement:We would like to express our sincere gratitude to the Natural Science Foundation of Shanghai under Grant 21ZR1426500,and the Top-Notch Innovative Talent Training Program for Graduate Students of Shanghai Maritime University under Grant 2021YBR008,for their generous support and funding through the project funding program.This funding has played a pivotal role in the successful completion of our research.We are deeply appreciative of their invaluable contribution to our research efforts.

    Author Contributions:Study conception and design:X.Xin,D.Han;data collection:X.Xin;analysis and interpretation of results:X.Xin,D.Han,M.Cui;draft manuscript preparation:X.Xin,D.Han,M.Cui.All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials: Data used in this study are available from the corresponding author (xinxiaoqi@stu.shmtu.edu.cn) on request.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
