
    Learning discriminative representation with global and fine‐grained features for cross‐view gait recognition


Jing Xiao | Huan Yang | Kun Xie | Jia Zhu | Ji Zhang

1 School of Computer Science, South China Normal University, Guangzhou, Guangdong, China

2 Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, Zhejiang, China

3 School of Sciences, University of Southern Queensland, Toowoomba, Qld, Australia

Abstract In this study, we examine the cross-view gait recognition problem. Many existing methods establish a global feature representation based on the whole human body shape. However, they ignore some important details of different parts of the human body. In the latest literature, positioning partial regions to learn fine-grained features has been verified to be effective in human identification. But these methods only consider coarse fine-grained features and ignore the relationship between neighboring regions. Taking the above insights together, we propose a novel model called GaitGP, which learns both important details through fine-grained features and the relationship between neighboring regions through global features. Our GaitGP model mainly consists of the following two aspects. First, we propose a Channel-Attention Feature Extractor (CAFE) to extract the global features, which aggregates channel-level attention to enhance the spatial information in a novel convolutional component. Second, we present the Global and Partial Feature Combiner (GPFC) to learn different fine-grained features, and combine them with the global features extracted by the CAFE to obtain the relevant information between neighboring regions. Experimental results on the CASIA gait recognition dataset B (CASIA-B), the OU-ISIR gait database, multi-view large population dataset (OU-MVLP), and the OU-ISIR gait database (OULP) show that our method is superior to the state-of-the-art cross-view gait recognition methods.

    1|INTRODUCTION

Gait recognition is a promising video-based biometric identification technology applied to identify individuals by their walking patterns. Compared to other biometric technologies, such as face, fingerprint and iris recognition, gait recognition has the advantages of being non-contact, working at long distance, and requiring no explicit cooperation from subjects. Therefore, gait recognition has a potentially wide range of applications in video surveillance. As its accuracy increases, gait recognition technology will become another effective tool for crime prevention, forensic identification and social security. In order to improve recognition accuracy, we need to overcome various external factors, including walking speed, bag-carrying, coat-wearing and camera viewpoint, that cause dramatic changes in gait appearance. As shown in Figure 1, the appearance of a walking gait changes observably across different viewing directions, which may make inter-class similarity greater than intra-class similarity and brings challenges to gait recognition.

FIGURE 1 From left to right are silhouettes of all views in the CASIA gait recognition dataset B (CASIA-B) gait dataset, which possess evidently different shapes and moving patterns during walking

There are several attempts in the literature to solve the cross-view gait recognition problem. A common strategy is to extract global features by treating the whole human image as a unit. It is worth mentioning that many methods [1-4] use attention mechanisms to improve model performance, and our method is no exception. However, due to the diversity of walking conditions in the cross-view situation, some important details are often ignored in the global features. Another learning strategy considers that different parts of the human body possess evidently different shapes and moving patterns during walking [5-10]. These methods aim to learn fine-grained features from specific regions. Unfortunately, they only consider coarse fine-grained features and ignore the relationship between neighboring regions. To solve the problems in the above two strategies, we propose a novel model called GaitGP, which learns both important details through fine-grained features and the relationship between neighboring regions through global feature representation.

Our novel model GaitGP consists of the following two components. The first component is a Channel-Attention Feature Extractor (CAFE), which is a novel application of convolution and can extract global features with a channel attention mechanism. The other is the Global and Partial Feature Combiner (GPFC), which learns fine-grained features in specific regions of images. Moreover, the GPFC combines the global features extracted by the CAFE with the fine-grained features to obtain relevant information between neighboring regions.

In the CAFE, we jointly learn attention selection and feature representation to extract global features with a channel attention mechanism. From the experimental data, we find that channel attention does enhance the performance of the model compared to the original global features extracted from the image. Therefore, an effective channel attention method, called Channel-level Spatial Pooling (CSP), is introduced to select the channel attention information and optimize the global features. Additionally, in order to improve the compatibility between channel attention selection and global features, our novel convolution layer adopts a partitionable stacking design, which will be discussed specifically in Section 3.2.

In the GPFC, we divide the global feature map extracted from the CAFE into several sub-branches. To combine the global feature and the fine-grained features, the first sub-branch contains only one whole partition to preserve the global information. In the remaining sub-branches, we divide the global feature maps into different numbers of stripes as part regions to learn local feature representations independently [5]. More details will be discussed in Section 3.3.

More simply, we summarise our contributions as follows:

· We propose a novel model called GaitGP, which learns both important details through fine-grained features and the relationship between neighboring regions through global feature representation.

    · We propose a CAFE for the optimization of global feature representation.

· We propose a GPFC for combining the global and fine-grained features.

· To verify gait recognition accuracy, we combine the above aspects and conduct extensive ablation experiments on the widely used gait datasets: the CASIA gait recognition dataset B (CASIA-B) [11], the OU-ISIR gait database, multi-view large population dataset (OU-MVLP) [12] and the OU-ISIR gait database (OULP) [13]. Compared to several state-of-the-art methods, GaitGP shows superiority.

    2|RELATED WORK

    2.1|Cross‐view gait recognition

To adapt to the cross-view situation in gait recognition, one of the most typical approaches treats the whole human body shape as a unit for feature extraction; such methods can be divided into two categories: model-based [14-17] and appearance-based [18-25]. The model-based methods try to reconstruct 3D human body and motion models to identify individuals. Wolf et al. [14] modelled the dynamic characteristics of the gait sequence to express an overall understanding of the gait sequence. The gait silhouettes under different views are mapped onto a common template by the 3D model, but such models are difficult to train because of the complexity of the network architecture.

Many appearance-based methods perform gait recognition with a more lightweight (easy to train) network architecture. Inspired by the great achievements in face recognition and action recognition, some researchers leverage generative methods to reconstruct the gait template in all views. The generative adversarial network (GAN) [26] is used to generate invariant side-view gait images to adapt to appearance changes caused by different clothing. Yu et al. [22] proposed a unified cross-view gait recognition model based on a generative framework to learn view-invariant features. A multi-loss strategy is used in GaitGAN [27] to optimize the network to increase the inter-class distance and reduce the intra-class distance. All these methods compress the gait silhouettes from different views into a uniform template for gait recognition. However, it is believed that these methods retain unnecessary sequential constraints for periodic gait [3] and ignore some important details of different parts of the human body.

For learning more detailed information to enhance feature representation, many advanced methods in the Re-Identification (Re-ID) task [5-9,28-30] have proved that locating important body parts in images to represent local identity information is an effective way to improve the accuracy and robustness of recognition. One of the most commonly used strategies is to split the feature map into strips and merge them into column vectors. Wang et al. [31] designed a Multiple Granularity Network with multiple branches, which uniformly partitions the images into several stripes and varies the number of parts in different local branches to obtain local feature representations with multiple granularities. Fu et al. [8] proposed a simple and effective horizontal pyramid matching method to fully exploit various partial information of a given person. In the task of gait recognition, many of the latest articles have applied fine-grained feature strategies. Chao et al. [3] used Horizontal Pyramid Mapping to map the set-level feature into a more discriminative space for robust feature representation. Zhang et al. [32] employed the idea of part-based unified segmentation to extract local gait features. However, these methods only consider coarse fine-grained attentional features and ignore the relationship between neighboring regions.

To learn both the complement of important details through fine-grained features and the relationship between neighboring regions, the proposed model GaitGP combines attention information to learn a global feature representation and aggregates it with the fine-grained features to make the feature representation more robust.

    2.2|Deep learning on attention

One common learning strategy is long short-term memory (LSTM)-based [32,33]. These methods employ LSTMs to compute temporal attention scores that pay more attention to discriminative frames and thus improve overall performance. However, these methods are considered to retain unnecessary sequence constraints on the periodic gait. Additionally, some new approaches in the Re-ID task combine local attention-based representations of the image to improve performance [1,30,34-38]. Li et al. [34] proposed a Spatial Transformer Network (STN) with spatial constraints [36] to locate deformable pedestrian features. Zhao et al. [39] built a hard attention model with the STN to search for components, given pre-defined spatial constraints. Li et al. [1] presented a Harmonious Attention convolutional neural network (CNN) for joint learning of different levels of visual attention along with simultaneous optimization of feature representation.

Inspired by the successful application of visual attention, some methods operate directly on random sequences to obtain attention information, thus avoiding unnecessary sequence constraints on the periodic gait. GaitSet [3] presented Set Pooling, applying an attention mechanism [1,40,41] to improve its performance. GaitPart [4] applied a channel-wise attention mechanism [6,42,43] to re-weight micro-motion features, aiming to overcome the limitation of the global feature. The above methods show that attention information is beneficial to improving the performance of gait recognition. Therefore, in our method, we propose the CSP to learn channel-wise attention to enhance the global feature.

    3|PROPOSED METHOD

In this section, we first summarize the overall network architecture of the GaitGP model. This is followed by a detailed description of the two components of the model, that is, the CAFE and the GPFC.

    3.1|Overall framework

The overall framework of the proposed method is shown in Figure 2. Given a dataset of $n$ people with identities $y_i, i \in 1, 2, \ldots, n$, we assume that the sequence of each identity is $X_i$; the $s$ silhouettes from each $X_i$ are expressed as $X_i = \{x_{i,j}\}_{j=1}^{s}$. We first use the CAFE to jointly perform attention selection and feature representation. Then, the global features are extracted through a channel attention mechanism, which is formulated as follows:

$$\chi_\tau = \varphi(X_i)$$

where $\chi_\tau$ denotes the output feature map of the CAFE and $\varphi$ denotes the function of attention selection, which is implemented by the CSP in the CAFE. The details of the CSP will be introduced in Section 3.2.

Then, the GPFC divides $\chi_\tau$ into $t$ sub-branches. Each sub-branch is horizontally split into $p = 2^{\gamma}, \gamma = 1, 2, \ldots$ partitions. Finally, the GPFC combines all the fine-grained features and the global feature $\chi_\tau$ to learn the relationship between the neighboring regions. The GPFC is formulated as follows:

$$\nu_\delta = \delta(\chi_\tau)$$

where $\nu_\delta$ denotes the column vector down-sampled by $\delta$, and $\delta$ denotes a Multi-Granularity Mapping (MGM) module. More details will be introduced in Section 3.3.

Finally, we choose the separate triplet loss [3,4] to train the proposed model.
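To make the data flow concrete, the following minimal PyTorch-style skeleton sketches the two-stage pipeline under the notation above; the `cafe` and `gpfc` sub-modules are placeholders for the components detailed in Sections 3.2 and 3.3, and all shapes are our assumptions rather than the authors' exact implementation.

```python
import torch.nn as nn

class GaitGP(nn.Module):
    """Skeleton of the two-stage GaitGP pipeline (shapes are assumptions)."""

    def __init__(self, cafe: nn.Module, gpfc: nn.Module):
        super().__init__()
        self.cafe = cafe  # Channel-Attention Feature Extractor (Section 3.2)
        self.gpfc = gpfc  # Global and Partial Feature Combiner (Section 3.3)

    def forward(self, x):
        # x: silhouette sequences X_i, shape (n, s, 1, h, w)
        chi = self.cafe(x)   # chi_tau: global feature map with channel attention
        nu = self.gpfc(chi)  # nu_delta: strip-level column vectors from the MGMs
        return nu            # supervised with the separate triplet loss
```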

3.2|Channel-Attention Feature Extractor

The CAFE learns global features with channel-level attention to enhance representation. There are two components in the CAFE: the CSP, which aims to learn the attention information, and the Partitionable Convolution layer (PConv), used to extract the global feature and integrate the attention information from the CSP. Next, the CSP is described in detail first, followed by the exact structure of the PConv.

    3.2.1|Channel-level Spatial Pooling

To enhance the expressiveness of global features, the CSP learns a channel-wise attention map to refine them. As shown in Figure 2, each block of the Partial Branch contains a CSP. Assume that $f_b \in \mathbb{R}^{c \times s \times h \times w}$ is the input feature map of the CSP, where $b$ denotes the block in the CAFE, $c$ is the number of channels, $s$ is the length of the gait sequence and $(h, w)$ is the size of each feature map.

FIGURE 2 The framework of GaitGP. The Channel-Attention Feature Extractor consists of the CSP and Blocks. CSP represents the Channel-level Spatial Pooling, and the Blocks are composed of two convolutional units (PConvs). In the Global Branch, the PConv is mainly utilized to extract global features, while in the Partial Branch, the PConv is used to collect channel-level attention information. The Global and Partial Feature Combiner is used to gather the global and fine-grained features. MGM represents the Multi-Granularity Mapping. Note that the MGMs are independent, each of which has a different scale. The dimension of the final feature is 256. FC, Fully Convolution

Since the length of the input gait sequence may vary, many previous works [3,4,10] successfully utilize pooling to aggregate the gait information of elements in a sequence. Therefore, as shown in Figure 3, we first use Spatial Pooling to aggregate the information of gait elements to represent the gait motion pattern. A natural choice for the Spatial Pooling is to apply the statistical max function [3] on the channel dimension. We pre-divide $f_b$ into $\tau$ channel-level partitions to aggregate the information of gait elements, which is formulated as follows:

$$f_{score} = Concat\big(SP(f_b^1), SP(f_b^2), \ldots, SP(f_b^{\tau})\big)$$

where $Concat$ represents the concatenation on the dimension of the channel.

Finally, $f_{score}$ is merged into the feature map $f_{max}$ collected by the statistical functions, formulated as follows:

$$f_{weight} = f_{score} \oplus f_{max}$$

where $f_{weight} \in \mathbb{R}^{c \times h \times w}$ is the final output of the CSP and $\oplus$ is a channel-wise fusion operation; $f_{weight}$ contains the frame-level global information.
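A minimal PyTorch sketch of the CSP follows, assuming the statistical function is a max over the sequence axis, that a small conv + ReLU per channel partition plays the role of the ConvNet in Figure 3, and that the fusion ⊕ is element-wise addition; the authors' exact layer choices may differ.

```python
import torch
import torch.nn as nn

class CSP(nn.Module):
    """Channel-level Spatial Pooling (sketch; layer choices are assumptions)."""

    def __init__(self, channels: int, tau: int = 4):
        super().__init__()
        assert channels % tau == 0
        self.tau = tau
        part = channels // tau
        # One small ConvNet per channel-level partition to produce attention scores.
        self.convs = nn.ModuleList(
            nn.Sequential(nn.Conv2d(part, part, 3, padding=1), nn.ReLU())
            for _ in range(tau)
        )

    def forward(self, f):
        # f: input feature map f_b of shape (c, s, h, w)
        f_max = f.max(dim=1).values           # SP: statistical max over the s frames
        parts = f_max.chunk(self.tau, dim=0)  # tau channel-level partitions
        f_score = torch.cat(                  # Concat on the channel dimension
            [conv(p.unsqueeze(0)).squeeze(0) for conv, p in zip(self.convs, parts)],
            dim=0,
        )
        return f_max + f_score                # f_weight: channel-wise fusion, (c, h, w)
```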

    3.2.2|Partitionable convolution layer

The PConv is a basic unit of the blocks in the CAFE. To improve the compatibility between attention information and the global feature, the PConv is designed to be partitionable. As shown in Figure 2, we design the CAFE as a multi-branched structure. In the Global Branch, the PConv is mainly utilised to extract global features, while in the Partial Branch, the PConv is used to collect channel-level attention information. The global features extracted in different blocks of the Global Branch are added to the Partial Branch. In order to adapt to the various-level fusion of global features and channel-level attention information, we pre-define that each block has $\tau$ channel-wise regions. In the initialization block (Block1), the PConv ($\tau = 1$) is equivalent to a regular convolutional layer. In the remaining blocks (Block2 and Block3), the PConvs are divided: the input global features are split into $\tau$ channel regions for independent convolution operations, which are then spliced together as the final output.

Supposing the output of the Global Branch is $S_{global} \in \mathbb{R}^{c \times h \times w}$ and the output of the Partial Branch is $S_{part} \in \mathbb{R}^{c \times h \times w}$, we connect the two feature maps, represented as follows:

$$S_{part} = PConv_p(f_b) \oplus PConv_g(f_b)$$

where $PConv_p$ and $PConv_g$ represent the convolutional layers in the Partial Branch and in the Global Branch, respectively, and $\oplus$ denotes the concatenate operation.

$$S_{cafe} = Concat(S_{global}, S_{part}) \in \mathbb{R}^{2c \times h \times w}$$

where $S_{cafe}$ is the final feature map of the CAFE and $Concat$ represents the concatenate function. Note that $2c$ means that the dimension of the channel becomes twice after the operation $Concat$.

Different layers have different receptive fields, and each block contains two PConv layers, as shown in Figure 4(b). The exact structure and parameters of each PConv are shown in Table 1. As shown in Figure 4(a), taking the PConv in Block3 as an example, the input feature map is horizontally divided into $\tau = 4$ partitions, which are operated on independently. The obtained channel-level feature vectors are then spliced vertically as the final output.
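The partitioned convolution can be sketched as follows; this is a hedged reading of the description above, and the kernel size and padding are illustrative (see Table 1 for the paper's settings).

```python
import torch
import torch.nn as nn

class PConv(nn.Module):
    """Partitionable Convolution (sketch). With tau = 1 it reduces to a
    regular conv layer; with tau > 1 each channel region is convolved
    independently and the outputs are spliced back together."""

    def __init__(self, in_ch: int, out_ch: int, tau: int = 1, k: int = 3):
        super().__init__()
        assert in_ch % tau == 0 and out_ch % tau == 0
        self.tau = tau
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch // tau, out_ch // tau, k, padding=k // 2)
            for _ in range(tau)
        )

    def forward(self, x):
        # x: (n, c, h, w); split into tau channel regions
        parts = x.chunk(self.tau, dim=1)
        out = [conv(p) for conv, p in zip(self.convs, parts)]
        return torch.cat(out, dim=1)  # splice the partitions back together
```

Functionally this coincides with a grouped convolution (`nn.Conv2d(..., groups=tau)`); the explicit per-partition form simply mirrors the description above.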

3.3|Global and Partial Feature Combiner

In the literature, splitting the feature map into strips is commonly used in the person Re-ID task [5,8,31]. Horizontal Pyramid Pooling (HPP) [8] proposes to learn different fine-grained features at four scales, and thus can help the deep network focus on features of different sizes to gather both partial and global information. We improve the HPP to obtain relevant information between neighboring regions. The most obvious modification is that we divide the subsequent part into five independent sub-branches after the CAFE process. Each sub-branch has a similar architecture with a different scale.

Specifically, the GPFC has $\rho$ scales. On scale $\rho$, the feature map $S_{cafe}$ extracted by the CAFE is split into five independent sub-branches, expressed as $Part_1, Part_2, \ldots, Part_5$. Each sub-branch uses an MGM module with a different scale, as shown in Figure 2. The MGM splits each $Part_t$ into $\rho = 2^{m-1}$ strips on the height dimension, where $m \in M = \{1, 2, 3, 4, 5\}$. The upper sub-branch $Part_1$ contains only one whole partition (preserving the global feature), which is used to supplement the relevant information between neighboring regions for the other sub-branches. For the remaining four sub-branches, $S_{cafe}$ is split at different scales $\rho$, that is, horizontally divided into $\rho$ stripes, to learn different fine-grained features independently.

Moreover, the structure of the MGM module is shown in Figure 5. On scale $\rho$, the Separate Max Pooling (SMP) is applied to downsample $S_{cafe}$ into 3-D strip features of equal size. Then the Separate Conv1dNet (SC) module, which consists of a 1-D convolutional layer with a kernel size of $\rho$, is leveraged to reduce the dimension, producing $v_t$. The specific parameters of each MGM component are shown in Table 2. The MGM is formulated as follows:

$$MGM_t = Concat\big(SC(SMP(Part_t))\big)$$

FIGURE 3 The structure of the CSP. Take τ = 4 as an example. The SP module applies an improved statistical max function to gather the most discriminative features. The ConvNet is a convolutional layer with a rectified linear unit (ReLU) activation function, which obtains the channel-level attention scores. CSP, Channel-level Spatial Pooling

FIGURE 4 (a) The illustration of the PConv in Block3; the dimension of the input feature map is expressed as c × h × w. (b) Block3 is a deep-layer block and consists of two PConvs. PConv, Partitionable Convolution

TABLE 1 The exact structure of the CAFE and the specific parameters of the PConv. In_D, Out_D and Kernel represent the input dimension, output dimension and kernel size of the PConv, respectively. In particular, τ indicates the pre-defined partition in the PConv. Feature denotes the output feature maps of each block

where $MGM_t$ is the $\rho$-granularity extracted feature, $Part_t$ is horizontally divided into $\rho$ scales, $v_t$ is the aggregated output vector and $Concat$ represents the concatenate operation. The SMP is implemented by 1-D Max Pooling with a kernel size of $\rho$, which is formulated as follows:

$$SMP(Part_t) = MaxPool1d_{\rho}(Part_t)$$

FIGURE 5 The structure of the Multi-Granularity Mapping (MGM). Taking scale ρ = 16 as an example, the Separate Max Pooling (SMP) is applied to downsample and the Separate Conv1dNet (SC) is leveraged to reduce the dimension

Finally, we apply the Separate Fully Convolution (FC) layer to obtain the final features of GaitGP, described as $f_c$ and formulated as follows:

$$f_c = FC(MGM_t)$$

In the testing phase, to obtain discriminating ability, we splice all the features, each reduced to 256 dimensions, into the final feature map, combining the global and fine-grained information to improve the comprehensiveness of the learnt features.
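One way to realise a single-branch MGM in PyTorch is sketched below; the strip-wise max stands in for the SMP and a 1 × 1 Conv1d stands in for the SC dimension reduction, both simplifying assumptions relative to the kernel-size-ρ layers described above.

```python
import torch
import torch.nn as nn

class MGM(nn.Module):
    """Multi-Granularity Mapping for one sub-branch (sketch; the strip-wise
    max and the 1x1 Conv1d are simplifying assumptions)."""

    def __init__(self, in_dim: int, out_dim: int = 256, rho: int = 16):
        super().__init__()
        self.rho = rho
        self.sc = nn.Conv1d(in_dim, out_dim, kernel_size=1)  # SC: reduce dimension

    def forward(self, s):
        # s: feature map S_cafe of shape (c, h, w); h must be divisible by rho
        c, h, w = s.shape
        strips = s.view(c, self.rho, h // self.rho, w)
        smp = strips.amax(dim=(2, 3))   # SMP: one value per channel and strip
        v = self.sc(smp.unsqueeze(0))   # SC: (1, out_dim, rho) column vectors v_t
        return v.squeeze(0)             # (out_dim, rho)
```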

    3.4|Implementation details

    3.4.1|Loss function

As shown in Figure 2, we add a Separate Triplet Loss function to supervise learning, which applies the Separate Batch All (BA+) triplet loss [44] to train the network and uses the corresponding column feature vectors between the different adversarial samples to calculate the loss. The triplet loss is defined as follows:

$$L_{trip} = \big[D(N_\gamma, N_p) - D(N_\gamma, N_n) + m\big]_+$$

where $N_\gamma$ is a random (anchor) sample, $N_p$ is a positive sample with the same identity as $N_\gamma$, $N_n$ is a negative sample with a different identity from $N_\gamma$, $D(\cdot, \cdot)$ is the Euclidean distance between feature vectors, and $m$ is the margin of the triplet loss. The operation $[\cdot]_+$ is equal to $\max(\cdot, 0)$.
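A hedged sketch of the Batch-All triplet loss over one column of feature vectors; the averaging over non-zero triplets follows the BA+ formulation of [44], and the Euclidean metric matches the testing protocol in Section 3.4.3.

```python
import torch

def batch_all_triplet_loss(feats, labels, margin=0.2):
    """Batch-All (BA+) triplet loss (sketch).

    feats: (N, D) column feature vectors; labels: (N,) identity ids.
    Averages the hinge over all valid (anchor, positive, negative)
    triplets that yield a non-zero loss."""
    dist = torch.cdist(feats, feats)                     # pairwise Euclidean, (N, N)
    same = labels[:, None] == labels[None, :]            # identity mask
    # loss[a, p, n] = D(a, p) - D(a, n) + m for every triplet combination
    loss = dist[:, :, None] - dist[:, None, :] + margin  # (N, N, N)
    valid = same[:, :, None] & ~same[:, None, :]         # p shares id, n does not
    valid &= ~torch.eye(len(feats), dtype=torch.bool)[:, :, None]  # exclude p == a
    loss = loss[valid].clamp(min=0)
    nonzero = (loss > 0).sum().clamp(min=1)
    return loss.sum() / nonzero                          # mean over active triplets
```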

    3.4.2|Training

The input of the network is a series of silhouettes. We randomly select frames from the entire gait sequence, which can be regarded as a temporal data augmentation method. We sample a batch of size $n \times s$ from the training set, where $n$ represents the number of people with different ids and $s$ represents the number of different sequences used per person with the same id in the batch. The sampling strategies in [3,4] are applied, and the Separate Batch All (BA+) triplet loss [44] is used to calculate the loss.
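A plausible sampler for the (n, s) batch described above; the dataset layout (identity id mapped to a list of frame sequences) and the random frame selection are assumptions about one reasonable implementation, not the authors' exact code.

```python
import random

def sample_batch(dataset, n=8, s=16, frames=30):
    """Draw an (n, s) batch: n identities, s sequences each, `frames`
    randomly chosen silhouettes per sequence (a sketch; `dataset` is
    assumed to map identity id -> list of silhouette sequences)."""
    ids = random.sample(list(dataset), n)                  # n distinct identities
    batch = []
    for pid in ids:
        for seq in random.choices(dataset[pid], k=s):      # s sequences per id
            take = min(frames, len(seq))
            idx = sorted(random.sample(range(len(seq)), take))
            batch.append((pid, [seq[i] for i in idx]))     # ordered random frames
    return batch                                           # n * s training samples
```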

TABLE 2 Comparison of the settings for the MGM in the five sub-branches. “Sub” refers to the name of the sub-branch. “P” refers to the number of partitions on the feature maps. “Map Size” refers to the size of the output feature maps from each branch. “Dim” refers to the dimensionality and number of features of the output representations. “Feature” gives the symbols for the output feature representation

    3.4.3|Testing

Each gait sequence is tested using the spatio-temporal features extracted from it. The average Euclidean distance between the feature column vectors of the probe and the gallery is used as the matching metric.
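The gallery-probe matching can be sketched as a nearest-neighbour search; flattening the strip vectors into one row per sequence is our simplification of the average column-wise Euclidean distance described above.

```python
import torch

def rank1_accuracy(gallery_feats, gallery_ids, probe_feats, probe_ids):
    """Rank-1 identification by Euclidean distance (sketch).

    gallery_feats: (G, D), probe_feats: (P, D) flattened feature vectors;
    gallery_ids: (G,), probe_ids: (P,) integer identity labels."""
    dist = torch.cdist(probe_feats, gallery_feats)  # (P, G) pairwise distances
    nearest = dist.argmin(dim=1)                    # closest gallery sample
    pred = gallery_ids[nearest]                     # predicted identities
    return (pred == probe_ids).float().mean().item()
```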

    4|EXPERIMENTS

In this section, we first describe the databases, CASIA-B, OU-MVLP and OULP, used to evaluate our model GaitGP, followed by a comparison of the performance of GaitGP with the state-of-the-art methods, and ending with an ablation study on CASIA-B to verify the effectiveness of each component of GaitGP.

    4.1|Datasets and training details

    4.1.1|CASIA-B

CASIA-B [11] is a widely used gait dataset containing 124 subjects, each of which includes 11 views. Each subject has 10 sequences under three gait conditions. The normal condition (NM) includes six sequences: the first four (NM#01-04) form the gallery and the remaining two (NM#05-06) are used as probes. In addition to the normal-condition sequences, there are two further conditions: wearing a coat (CL#01-02) and carrying a bag (BG#01-02). The dataset enables researchers to study cross-view and cross-wearing issues simultaneously; in other words, each subject contains 11 × (6 + 2 + 2) = 110 sequences. There are various experimental schemes [45] based on CASIA-B to verify the feasibility and effectiveness of a proposed method. For fairness, this study strictly follows the popular protocol [6]. Besides, there are three training settings configured according to the training scale [3], that is, small-scale training (ST), medium-scale training (MT) and large-scale training (LT). Under these settings, the 124 subjects are divided into two groups: 24, 62 and 74 subjects, respectively, are put into the training set, and the remaining subjects are reserved for testing. During testing, the first four NM sequences (NM#01-04) are regarded as the gallery, and the rest are divided into three probe subsets by walking condition: the NM subset (NM#05-06), the BG subset (BG#01-02) and the CL subset (CL#01-02).

    4.1.2|OU-MVLP

OU-MVLP [12] is the newly released public gait database with the largest view variation, consisting of 10,307 subjects, each containing 14 views (0°-90° and 180°-270°, at 15° intervals). We use the first 5153 subjects for training and the remaining 5154 for testing. There are two sequences per subject in the dataset. In the testing stage, sequence #01 is classified as the gallery set, and the other sequence, #00, is classified as the probe set. According to [12], four typical viewing angles (0°, 30°, 60°, 90°) are evaluated. In addition to these four typical views, we conduct experiments with all the views [3,4,32,46]. The dataset can provide us with stable comparison results.

    4.1.3|OULP

OULP [13] is a large dataset with only four view angles (55°, 65°, 75°, 85°). There are 4007 subjects (2135 males and 1872 females) with ages ranging from 1 to 94 years, and each subject contains two sequences, one in the gallery and the other as a probe sample. Compared with CASIA-B, OULP has smaller view differences and no variation in walking conditions. However, the large number of subjects enables us to compare different gait recognition approaches with statistical significance. Our experimental setting is the same as in [20], since not all samples of each subject cover all four view angles. A total of 3714 subjects (according to the file of the first view angle) are used in the subsequent experiments. We use 1857 subjects as the training set and the rest as the test set. Note that the original silhouettes have already been cropped and aligned; we directly use the given silhouettes to construct the gait templates.

    4.1.4|Training details

During the experiments, the length $s$ of the input gait sequences is set to 30, the same as [3,4]. We use the method mentioned in [12] to crop and align all input sequences and adjust their size to 64 × 64. The Adam optimizer [47] is adopted to perform gradient optimization, and the learning rate is set to 1e-4. In addition, the momentum is set to 0.9 and the margin of the Separate Triplet Loss is set to 0.2, the same as [44]. On CASIA-B, we set the batch size to (8, 16), and the number of training iterations is 90K. On OU-MVLP, because it contains far more sequences than CASIA-B, we set the numbers of partitions of the PConv layers in Block2 and Block3 to 2, 2, 4 and 4, the batch size to (32, 8), the number of iterations to 250K, and the learning rate to 1e-5.

TABLE 3 CASIA-B average rank-1 accuracy under all view angles and different conditions, excluding the identical-view cases, in three experimental settings with different training sample sizes (ST, MT, LT)

4.2|Comparison with the state-of-the-art methods

    4.2.1|CASIA-B

As shown in Table 3, we compare our method with the latest gait recognition methods, mainly including CNN-LB [20], GaitNet [46], GaitSet [3], MGAN [25] and ACL [33]. To make a systematic and comprehensive comparison with the advanced methods, all conditions (NM, BG, CL) are included, and further experiments and comparative analyses are carried out with different training sample sizes (ST, MT, LT). The proposed method achieves the best recognition accuracy at almost all angles.

(1) As shown in Table 3, CNN-LB [20] is a GEI-based method while the others are all based on silhouette sequences, and the latter all perform better than the former. This shows that video-based methods have great potential in extracting more fine-grained and discriminative information from images.

(2) We compare with GaitNet [46] and MGAN [25], which have the same structural purpose but different architectural compositions. In GaitNet [46], an Auto-Encoder is introduced to obtain more discriminative features, and a multi-layer LSTM is applied for spatio-temporal modelling. MGAN uses a generative adversarial network to map different clothing conditions to the same template from the front and side perspectives. In our model, we introduce the CSP to extract local attention features through channel-level division as the spatio-temporal attention feature of the subject.

(3) Compared with GaitSet [3], our structure uses a partitionable convolution unit called the PConv, which obtains channel-level features fused with spatial attention. The MGM of GaitGP also has a structure similar to that of GaitSet, but the MGM pays more attention to fine-grained local segmentation, using independent operations to enlarge more representative features and reduce the similarity between different subjects. The experiments reveal the advantages of the PConv and the MGM: GaitGP obtains better results under various walking conditions on CASIA-B.

    4.2.2|OU-MVLP

To prove the effectiveness of our method, we conduct two large-scale experiments on OU-MVLP. (1) We use the same evaluation setting as [12], where 5153 subjects are used for training and 5154 for testing. The silhouettes of four typical views (0°, 30°, 60°, 90°) are evaluated for cross-view recognition, as shown in Table 4. (2) We list the results with the gallery including all 14 views; the results are averaged over the gallery views (excluding the identical-view cases). We set the dimension of the global feature and the local feature to 512 and reduce the dimensionality through the MGM to 256, as shown in Table 5.

    4.2.3|OULP

To prove the broad applicability of our method, we also perform experiments on OULP. The results are shown in Table 6. We compare our method with CNN-LB [20], GEINet [46] and MGAN [25]. These methods calculate cross-view accuracy, that is, the average accuracy of each view angle excluding the identical view. Our GaitGP performs better than these methods.

TABLE 4 OU-MVLP results excluding the identical-view cases under four typical views (0°, 30°, 60°, 90°)

TABLE 5 OU-MVLP results excluding the identical-view cases under all views

    4.3|Ablation study

To further verify the effectiveness of each component of our proposed network GaitGP, we consider the two components of the CAFE, the PConv and the CSP, as well as the MGM module in the GPFC pipeline. We perform an ablation study of these components on the CASIA-B dataset. The experimental results and analysis are as follows.

TABLE 6 OULP cross-view average accuracies (%) for all pairs of the four view angles

    4.3.1|Effectiveness of PConv

As introduced in Table 1, we present the parameter settings of the PConv. To evaluate robustness, we design four groups of experiments. The blocks in the CAFE are composed of two PConvs. In Exp. 1-1, we set τ = 1 in Block1 to retain its original state, and the remaining two blocks are also parameterized with τ = 1. The difference in Exp. 1-2 is that Block1 remains unchanged, but the parameter τ in Block2 and Block3 is set to 2 and 4, respectively. Exp. 1-3 is based on Exp. 1-2, but the latter two blocks are both set to 4. Similarly, the parameter τ of Block2 and Block3 in Exp. 1-4 is set to 4 and 8. All the results of these controlled experiments are shown in Table 7.

Comparing Exp. 1-1 and Exp. 1-2, on the one hand, we find that the blocks in the original state (τ = 1) are not as effective, which shows the advantage of partitionable extraction. On the other hand, the features of Exp. 1-4 are too dispersed, leading to poor performance in the superficial layers, probably because too much subdivision destroys the silhouette information at the edges of adjacent regions and increases the proportion of noise covariates. Finally, by comparing Exp. 1-2, Exp. 1-3 and Exp. 1-4, we observe that the average rank-1 accuracy first rises and then falls on the NM and BG subsets, while it continues to rise on the CL subset. We believe the reason for this phenomenon is that the different receptive fields of the top neurons can adapt to different walking conditions.

    4.3.2|Effectiveness of CSP

The traditional spatial feature mapping [3,4] uses Max(·) or Avg(·) to aggregate spatial information, but using them alone cannot realise the mapping adaptively. In this paper, we introduce the CSP to achieve spatial feature mapping. Figure 3 shows its internal structure and describes the components used inside. Inspired by the idea in [1] of slicing the feature map at the channel level, we design a new statistical function SP and use a ConvNet to weight the local features to enhance attention. To verify the effectiveness of the CSP, we design comparative experiments implementing methods with different spatial feature mapping strategies on the CASIA-B dataset. Note that the channel-level slice parameters follow the parameter τ in the previous ablation experiment.

The results are shown in Table 8. Exp. 2-1 uses the traditional statistical function SP under the NM and BG conditions. In Exp. 2-2, we set the parameter τ of SP1, SP2 and SP3 to slice into 1, 2 and 4 partitions in the different blocks, which gives better performance. In Exp. 2-3, the addition of the ConvNet layer enhances the attention and makes the aggregation of spatial information more effective, reaching accuracy rates of 96.2% and 90.3%. Besides, in the CL setting, when the parameter τ of SP1, SP2 and SP3 is set to 1, 4 and 8, as shown in Exp. 2-4, the highest accuracy rate is 69.2%. This may indicate that fine-grained feature extraction is better suited to silhouettes with appearance variation.

TABLE 7 The ablation experiments performed on CASIA-B using the LT setup. Results are the average rank-1 accuracies over all 11 views, excluding the identical-view cases. Comparison of different parameter settings of the PConv

TABLE 8 The ablation experiments performed on CASIA-B using the LT setup. Results are rank-1 accuracies over all 11 views, excluding the identical-view cases. Comparison of the CSP with different settings for different blocks

    4.3.3|Effectiveness of the MGM

We duplicate five branches of the intermediate feature maps obtained by the backbone network, named $Part_1, Part_2, Part_3, Part_4$ and $Part_5$; the corresponding configurations are shown in Table 2. From the experimental results, we find that setting the horizontal stripes as $\rho = 2^{m-1}, m \in \{1, 2, 3, 4, 5\}$, the same as in [3], performs well, which shows that different fine-grained segmentations can better capture details that are easily ignored for recognition.

In our experiments, we explore the influence of the multi-branch architecture from two aspects. As shown in Table 9, on the one hand, the structure with only one partition branch, $Part_1$ (considered as the global representation), is compared with the structure integrating only the four independent multi-granularity branches. The integrated strategy achieves better performance than any single participating network, which shows that, compared with the global network, the collaborative learning of branches yields more discriminative feature representations. On the other hand, we combine the two structures and compare with the above two experiments. The effect of combining the global features and local features is higher than using either of them alone. We believe that the mutual influence between the four independent multi-granularity branches supplements the blind spots in their learning process.

TABLE 9 The ablation experiments performed on CASIA-B using the LT setup. Results are the average rank-1 accuracies over all 11 views, excluding the identical-view cases. Accuracy (%) of using different branches in the MGM

TABLE 10 The efficiency comparison of different methods on CASIA-B

    4.3.4|Efficiency of GaitGP

As discussed in [48], the efficiency of pair-wise similarity learning methods [49] is limited. In contrast, since each sample only needs to be computed once [3], our network takes 1.36 min to complete the test on four NVIDIA 1080Ti GPUs. Table 10 lists the efficiency comparison on CASIA-B.

    5|CONCLUSION

This paper proposes a new network architecture and designs the PConv to extract the global and partial features, combining the advantages of both. We also propose the CSP for spatial attention learning and feature expression to improve the performance of gait recognition tasks. In addition, through the multi-granularity horizontal segmentation pipeline, the MGM, different multi-granularity branches are integrated to obtain the final gait representation. Experimental results on three public datasets verify the effectiveness and efficiency of our method.

    ACKNOWLEDGEMENTS

This work was partially supported by the Natural Science Foundation of Guangdong Province No. 2018A030313318 and the Key-Area Research and Development Program of Guangdong Province No. 2019B111101001.

    ORCID

Jing Xiao https://orcid.org/0000-0002-5242-7909
