
    Two stream skeleton behavior recognition algorithm based on Motif-GCN①

    High Technology Letters, 2023, No.4

    WU Jin(吴进), WANG Lei, FENG Haoran, CHONG Gege

    (School of Electronic Engineering,Xi′an University of Posts and Telecommunications,Xi′an 710121, P.R.China)

    Abstract

    Key words: skeleton behavior recognition, Motif-GCN, two stream network

    0 Introduction

    Human behavior recognition is of great research significance in computer vision. In essence, it enables computers to recognize human behavior intelligently. Because of this, it is widely used in video surveillance[1], intelligent transportation[2], human-computer interaction, virtual reality simulation[3], intelligent security[4], smart home, and other fields.

    Compared with methods based on RGB (red green blue) images or videos, behavior recognition based on human skeleton data is less susceptible to external factors and more robust, so it has been widely studied. Traditional deep-learning-based skeleton behavior recognition methods convert human skeleton data into vector sequences or two-dimensional grids and then feed them into a convolutional neural network (CNN) or recurrent neural network (RNN) for prediction. However, these methods neglect the graph structure inherent in skeleton data and cannot fully represent the dependences between related joints. The graph convolutional network (GCN), which has emerged in recent years, handles data with irregular topology well. In addition, equipment for capturing human skeleton coordinates, such as OpenPose, optical cameras, Microsoft Kinect and Intel RealSense, has continued to mature in recent years. As a result, GCN-based skeleton behavior recognition has been widely studied. In 2018, Yan et al.[5] first applied GCN to skeleton behavior recognition: they treated human joints as graph nodes, and the natural joint connections together with the links between the same joints in consecutive frames as edges, to construct the spatio-temporal graph convolutional network (ST-GCN). However, ST-GCN has limitations in how its graph is constructed. (1) The graph in ST-GCN only represents the physical structure of the human body, so some actions cannot be identified well. For example, putting on shoes is an important human activity, but ST-GCN has difficulty capturing the relationship between the hands and feet. (2) ST-GCN only uses the first-order information of the skeleton, namely the joint information, while the second-order information, which also carries important action information, is ignored. (3) The structure of a GCN is hierarchical, and different layers contain semantic information at multiple levels, yet the graph topology in ST-GCN is fixed across all layers, which lacks the flexibility to model this multi-level semantic information. Refs[6-14] and other works have made improvements addressing these problems. Aiming at problems (1) and (2), this paper proposes to use Motif-GCN to extract the spatial features of the skeleton graph: the first motif mainly considers the relationships between naturally connected adjacent joints, and the second motif considers the relationships between joints that are not physically connected.

    The temporal features are extracted by a temporal convolutional network (TCN). The length and direction of the vector between two joints are regarded as the length and direction of the bone, and this bone information is fed into the GCN to predict the action label in the same way as the first-order joint information. Finally, the results of the joint stream and the bone stream are combined to obtain the final result.
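    The joint-to-bone conversion and the late fusion of the two streams described above can be sketched as follows. This is a minimal illustration, not the paper's code; the function names and the parent-joint list convention are assumptions.

```python
import numpy as np

def joints_to_bones(joints, parents):
    """Convert joint coordinates to bone vectors (second-order information).

    joints:  array of shape (T, V, C) -- frames, joints, coordinates.
    parents: parents[v] is the index of joint v's parent (the root points
             to itself, giving a zero bone there).
    The bone at joint v is the vector from its parent to itself, so its
    length and direction are the bone's length and direction.
    """
    bones = np.zeros_like(joints)
    for v in range(joints.shape[1]):
        bones[:, v] = joints[:, v] - joints[:, parents[v]]
    return bones

def fuse_scores(joint_scores, bone_scores):
    """Late fusion of the two streams: sum the per-class scores."""
    return joint_scores + bone_scores
```

    Each stream is trained on its own input (joints or bones), and the two per-class score vectors are summed before taking the arg-max.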

    1 Related work

    1.1 Skeleton-based behavior recognition

    Traditional skeleton-based behavior recognition mostly depends on hand-crafted features[15]. However, features designed from researchers' prior knowledge or extracted directly from the data only partly reflect the characteristics of human actions; they cannot fully represent the overall state and are susceptible to outside influences. With the rapid development of deep learning, behavior recognition based on deep learning has become a research hotspot. Common deep learning techniques include the 3D convolutional neural network (3D-CNN) model[16], the RNN model[17], the two-stream CNN model[18] and hybrid network models[19]. CNN-based methods convert skeleton data into pseudo-images using hand-designed transformation rules, representing temporal dynamics and skeleton joints as rows and columns, while RNN-based methods convert skeleton data into coordinate vector sequences. The 3D-CNN model stacks frames into a cube and uses 3D convolution kernels for feature extraction. The idea of the two-stream convolutional network is to feed the RGB information and the optical-flow information of video frames into separate CNNs, make a prediction with each, and fuse the two predictions into the final result. Hybrid network models include combinations of CNN and RNN, and of CNN and long short-term memory (LSTM): spatial features are extracted by the former and temporal features by the latter. However, the 3D-CNN requires a large amount of data and many parameters and is not well suited to extracting long-term features; the two-stream method only extracts temporal features between adjacent frames, so the extracted temporal features are not comprehensive; and hybrid network models are difficult to combine, with too many parameters and high resource consumption, which makes them hard to deploy in practice. Since skeleton data has the structure of a graph, it cannot be converted into vector sequences or two-dimensional grids without losing information. Compared with CNN and RNN, the GCN that has emerged in recent years handles this data structure well. In 2018, Yan et al.[5] applied GCN to behavior recognition based on human skeleton data, and a variety of improved methods have emerged since then. Based on Ref.[20], this paper adopts Motif-GCN to extract spatial features and TCN to extract temporal features. At the same time, bone information is added to construct a two-stream structure, further strengthening the relationships between the joints of the body.

    1.2 Graph convolutional network

    Ref.[21] combined deep learning with graph data for the first time and proposed the graph neural network (GNN), which allowed deep learning to be used effectively in graph-data scenarios. Following the success of CNN, the concept of convolution was generalized from grid data to graph data, and thus the GCN was born.

    Graph convolution methods can usually be divided into spectral-domain methods[21] and spatial-domain methods[22]. Spectral-domain GCN follows the convolution theorem: the node signal defined in the spatial domain is transformed into the graph spectral domain by the graph Fourier transform, multiplied there, and then transformed back to the spatial domain. Spatial-domain methods process the graph signal directly. In a CNN convolution, each pixel is treated as a node, and the node's new features are obtained as a weighted average of the surrounding neighbor nodes. Graph data, however, has an irregular structure, so features cannot be extracted by sliding a convolution kernel as in a CNN. In a GCN, the neighbors of each node are usually sampled and then divided into subsets to realize weight sharing, and feature extraction is carried out on that basis.
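    The spatial-domain aggregation described above (each node's new feature is a degree-normalized average of its neighbors, followed by a shared linear projection) can be written in a few lines. This is a generic illustration of the idea, not the paper's implementation; the function name and shapes are assumptions.

```python
import numpy as np

def spatial_graph_conv(X, A, W):
    """One spatial-domain graph convolution step.

    X: (V, C_in) node features, A: (V, V) adjacency with self-loops,
    W: (C_in, C_out) shared weights.  Each node's new feature is the
    degree-normalized average of its neighbors' features, then a linear
    projection -- the graph analogue of a CNN's weighted sum over a
    pixel's neighborhood.
    """
    deg = A.sum(axis=1, keepdims=True)       # node degrees
    A_norm = A / np.maximum(deg, 1)          # row-normalized adjacency
    return A_norm @ X @ W
```

    Weight sharing comes from W being the same for every node; subset partitioning (as in ST-GCN) simply uses one such term per subset.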

    2 Introduction to algorithm Principle

    In traditional skeleton-based behavior recognition, the graph is constructed with human joints as nodes and bones as edges, and only the adjacent nodes of each joint are generally considered when a GCN extracts spatial features. In this paper, Motif-GCN is used to extract the spatial relationships between joints. The relationships between directly adjacent joints are encoded by the first motif, and the relationships between disconnected joints are encoded by the second motif, so as to strengthen both the physically connected and the non-physically connected joint relationships and to capture higher-order information. In addition, bone information is introduced to construct a two-stream structure of bones and joints.

    2.1 Motif-GCN

    In a CNN, the convolution kernel has translation invariance; graph-structured data does not have this property because of its irregular structure. Therefore, the biggest difference between GCN and CNN lies in the definition of the sampling function and the weight function. Most previous methods sample the first-order or second-order neighbor nodes of each node, as shown in Fig.1, and divide them into the root node, centripetal nodes and centrifugal nodes according to their distance from the center of gravity relative to the distance from the root node to the center of gravity.

    Fig.1 shows the traditional neighbor-node sampling rules, where 0 represents the root node, 1 a centripetal node, 2 a centrifugal node, and the cross the center of gravity.

    Fig.1 Traditional neighbor node sampling rules

    In the traditional graph convolution operation, the convolution at a node can be expressed as Eq.(1)[5]:

        f_out(v_ti) = Σ_{v_tj ∈ B(v_ti)} (1 / Z_ti(v_tj)) f_in(P(v_ti, v_tj)) · W(l_ti(v_tj))    (1)

    where v_ti is the central node, v_tj is a neighbor node of v_ti, P is the sampling function and W is the weight function. Z_ti(v_tj), given in Eq.(2), is the size of the subset to which v_tj belongs:

        Z_ti(v_tj) = |{v_tk | l_ti(v_tk) = l_ti(v_tj)}|    (2)

    where l_ti(v_tj) is the subset label of v_tj when v_ti is the central node; Z is used to balance the contributions of the different subsets.

    Although joints move in groups when people perform movements, a single joint may appear in multiple parts of the body. For this reason, a learnable mask M is added to each layer of the spatio-temporal graph convolution[5]. Based on the learned importance weight of each spatial edge among the natural connections of the human body, the mask scales the contribution of a node's features to its neighbors. The graph convolution operation is therefore finally represented as

        f_out = Σ_{k=1}^{K_v} W_k f_in (A_k ⊙ M_k)    (3)

    In Eq.(3), f_in is the input, A_k is the adjacency matrix, M_k is the learnable mask, K_v is the kernel size of the spatial dimension, and ⊙ denotes the element-wise product.
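    The masked graph convolution of Eq.(3) can be sketched directly: each spatial subset k contributes (A_k ⊙ M_k) f_in W_k, and the contributions are summed. This is an illustrative sketch under assumed shapes, not the paper's code.

```python
import numpy as np

def masked_graph_conv(f_in, A_list, M_list, W_list):
    """Graph convolution with a learnable edge mask, in the spirit of Eq.(3).

    f_in:   (V, C_in) input joint features.
    A_list: K_v adjacency matrices, one per spatial subset.
    M_list: K_v learnable masks of the same shape as the adjacencies;
            A_k * M_k rescales each edge's contribution.
    W_list: K_v weight matrices of shape (C_in, C_out).
    """
    out = 0
    for A_k, M_k, W_k in zip(A_list, M_list, W_list):
        out = out + (A_k * M_k) @ f_in @ W_k   # * is the element-wise product
    return out
```

    With M_k initialized to all ones, the layer starts as a plain graph convolution and learns to re-weight individual edges during training.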

    In 2019, Wen et al.[20] proposed Motif-GCN. A motif refers to a connection pattern between different node types, and a double-motif structure is constructed in Motif-GCN. The first motif encodes the relationships between joints directly connected in the human skeleton; the second motif encodes the relationships between joints without such a connection. During human movement, unconnected joints often carry important action information. For example, in the movement of the right hand touching the left foot, although there is no connection between the hand and the foot, the relationship between them is useful for identifying the action. Adding this motif therefore enables the capture of higher-order information.

    Motif-GCN thus solves the problem that traditional graph convolution for skeleton modeling only considers each joint's physically connected neighbors and joints of the same type, and so cannot capture higher-order information. Different from Ref.[20], in this paper Motif-GCN is chosen to extract spatial features while TCN continues to be used to extract temporal features. In addition, bone information is added to form a two-stream structure with the joint information. The two streams are fed into the nine-layer model composed of Motif-GCN and TCN respectively, a Softmax classifier produces each stream's classification result, and the two results are then fused to obtain the final classification. The fused results are significantly better than those of previous classical models.

    Two motifs are used to model the physical and non-physical connections of the human skeleton. In the first motif, only nodes with direct adjacency are considered. The neighborhood of each joint contains three semantic roles: the joint itself, the parent node of the joint, and the child nodes of the joint, as shown in Fig.2.

    Fig.2 Graph structure in the first Motif

    As shown in Fig.2, each circle represents a joint, each arrowed line represents a natural physical connection of the human body, and the arrow points from the parent node to the child node.

    In the second motif, the joints naturally connected by the human body are not considered; instead, the joints without physical connectivity are considered. A weighted adjacency matrix between the disconnected joints is defined by allocating larger weights to joint pairs with shorter distances. In this matrix, the relationship between node i and node j is expressed as α_{i,j} = max(e) − e(i,j), where e is a matrix of the average Euclidean distances between pairs of nodes over the sequence. The relationships between the neck joint and the joints not connected to it are shown in Fig.3. The Euclidean distance between node 1 (x_1, x_2, …, x_n) and node 2 (y_1, y_2, …, y_n) in n-dimensional space is calculated by Eq.(4):

        d = sqrt(Σ_{k=1}^{n} (x_k − y_k)^2)    (4)

    As shown in Fig.3, the dotted lines connect pairs of joints that have no physical connection in the second motif.

    Fig.3 Graph structure in the second Motif
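    The second motif's weighted adjacency, α_{i,j} = max(e) − e(i,j) with e the sequence-averaged Euclidean distance matrix, can be computed as below. A minimal sketch under assumed array shapes; the function name is illustrative.

```python
import numpy as np

def second_motif_adjacency(joints):
    """Weighted adjacency for the second motif.

    joints: (T, V, C) skeleton sequence.  e[i, j] is the Euclidean
    distance between joints i and j averaged over the sequence, and
    alpha[i, j] = max(e) - e[i, j], so nearby (but unconnected) joint
    pairs get a large weight and distant pairs a small one.
    """
    diff = joints[:, :, None, :] - joints[:, None, :, :]   # (T, V, V, C)
    e = np.sqrt((diff ** 2).sum(-1)).mean(axis=0)          # (V, V) mean distances
    return e.max() - e
```

    In practice the entries for physically connected pairs would be masked out, since those edges belong to the first motif.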

    Finally, Motif-GCN can be expressed as

        Y_t = Σ_M Σ_{k=1}^{K_M} D̃_M^{-1} A_{M,k} X_t W_{M,k}    (5)

    In Eq.(5), X_t ∈ R^{N×D} is the input, with N nodes and D coordinates in frame t. K_M is the number of semantic dependencies in motif M: because the neighborhood of each joint in the first motif has three semantic roles, K_M1 = 3, while K_M2 = 1 in the second motif. A_{M,k} is the adjacency matrix corresponding to semantic role k of motif M, D̃_M is the corresponding degree matrix, W_{M,k} is the weight matrix corresponding to node type k in each motif, and Y_t is the output.
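    A Motif-GCN layer in the spirit of Eq.(5) sums degree-normalized contributions over both motifs and their semantic roles. The sketch below is an assumption-laden illustration (dense numpy arrays, a simple reciprocal-degree normalization), not the paper's implementation.

```python
import numpy as np

def motif_gcn_layer(X_t, motifs):
    """One Motif-GCN layer following the structure of Eq.(5).

    X_t:    (N, D) joint features of frame t.
    motifs: list of motifs; each motif is a list of K_M pairs
            (A_k, W_k) -- the adjacency for semantic role k and its
            weight matrix.  The first motif has K_M = 3 roles (self,
            parent, child); the second has K_M = 1 (the weighted
            adjacency between unconnected joints).
    """
    out = 0
    for motif in motifs:
        for A_k, W_k in motif:
            deg = np.maximum(A_k.sum(axis=1, keepdims=True), 1e-6)
            out = out + (A_k / deg) @ X_t @ W_k   # D^{-1} A X W, summed
    return out
```

    Stacking such layers (interleaved with temporal convolutions) yields the nine-layer backbone used in this paper.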

    2.2 Overall network structure

    The process of behavior recognition using Motif-GCN is shown in Fig.4. After the skeleton sequence is input, spatial features are extracted by Motif-GCN and temporal features by TCN, and the final classification results are obtained through the Softmax classifier.

    Fig.4 The process of behavior recognition by Motif-GCN

    Fig.5 shows the overall flow chart of the two-stream Motif-GCN algorithm. First, the skeleton sequence is input to obtain joint information and bone information, and the two are fed into the Motif-GCN and TCN structures respectively. Each Motif-GCN and TCN is followed by a batch normalization (BN) layer and a ReLU layer, and a Dropout layer is added between the Motif-GCN and the TCN. There are nine such layers, and the numbers below each layer denote its input channels, output channels and stride. The results of the two streams are then sent to the Softmax classifier to obtain their respective classifications, and finally the two are fused to obtain the final classification result.

    Fig.5 Overall flow chart of two stream Motif-GCN algorithm
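    One block of the nine-layer backbone (graph convolution, then BN, ReLU and Dropout, then a temporal convolution over frames) can be sketched as follows. This is a structural illustration only: the BN stand-in normalizes over the whole sequence rather than a batch, and all names and shapes are assumptions.

```python
import numpy as np

def st_block(X, A, W_g, W_t, p_drop=0.5, rng=None):
    """One Motif-GCN + TCN block (minimal sketch).

    X:   (T, V, C_in) input sequence.
    A:   (V, V) normalized spatial adjacency.
    W_g: (C_in, C_out) spatial (graph) weights.
    W_t: (K_t, C_out, C_out) temporal kernel over K_t frames.
    Order: graph conv -> BN -> ReLU -> Dropout -> temporal conv,
    mirroring the BN/ReLU/Dropout placement described in the text.
    """
    # spatial graph convolution, frame by frame
    Y = np.einsum('uv,tvc,cd->tud', A, X, W_g)
    # batch-norm stand-in: normalize each channel over time and joints
    Y = (Y - Y.mean((0, 1))) / (Y.std((0, 1)) + 1e-5)
    Y = np.maximum(Y, 0)                                    # ReLU
    if rng is not None:                                     # Dropout (train only)
        Y = Y * (rng.random(Y.shape) > p_drop) / (1 - p_drop)
    # temporal convolution along the frame axis, padded to keep length T
    K_t = W_t.shape[0]
    pad = K_t // 2
    Yp = np.pad(Y, ((pad, pad), (0, 0), (0, 0)))
    return sum(np.einsum('tvc,cd->tvd', Yp[k:k + len(Y)], W_t[k])
               for k in range(K_t))
```

    In the real network a stride greater than 1 in some blocks halves the temporal length, as indicated by the numbers under each layer in Fig.5.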

    3 Experimental results and analysis

    3.1 Dataset introduction

    The NTU-RGB+D[23] dataset consists of 56 880 action samples, each including RGB video, 3D skeleton data, a depth map sequence and infrared video. There are 60 categories, divided into 3 groups: 40 daily activities, such as drinking, eating and reading; 9 health-related movements, such as sneezing, staggering and falling; and 11 mutual actions, such as answering a phone, kicking and hugging. The dataset was captured simultaneously by three Microsoft Kinect v2 cameras, with a resolution of 1920×1080 for the RGB video and 512×424 for both the depth maps and the infrared video; the 3D skeleton data contains the 3D positions of 25 major body joints per frame. NTU-RGB+D adopts two partitioning criteria for the training and test sets: Cross-Subject (X-Sub) and Cross-View (X-View).

    Cross-Subject (X-Sub): data are collected from a total of 40 subjects aged between 10 and 35. The 40 subjects are divided into a training group and a testing group of 20 subjects each; the training set and testing set contain 40 320 and 16 560 samples respectively.

    Cross-View (X-View): three cameras at the same height capture the same action from horizontal views of −45° and +45°. Each subject performs each action twice, once facing the left camera and once facing the right camera. In this way two front views, one left side view, one right side view, a left 45° view and a right 45° view are captured.

    Kinetics: this dataset contains about 300 000 video clips retrieved from YouTube, covering 400 human action classes, from daily activities and sports scenes to complex interactive actions. Each clip lasts about 10 s. 240 000 videos are used for training and 20 000 for validation; the comparison models are trained on the training set and their accuracy is reported on the validation set.

    3.2 Experimental platform

    As shown in Table 1, the experiments are carried out on a Linux system, on the CUDA platform with the PyTorch deep learning framework. The PyTorch version is 1.1.0 and the GPU is a GTX 1080 Ti. The memory is Kingston HyperX Savage DDR4 and the hard drive is a 1 TB Seagate.

    3.3 Analysis of experimental results

    The experiments were mainly conducted on the large-scale NTU-RGB+D and Kinetics datasets. The accuracy and loss curves on the NTU-RGB+D validation set are shown in Figs.6-13.

    Table 1 Experimental platform

    Fig.6 X-sub joint accuracy

    Fig.7 X-sub joint loss

    Fig.8 X-sub bone accuracy

    Fig.9 X-sub bone loss

    Fig.10 X-view joint accuracy

    As shown in Figs.6-13, on the X-Sub benchmark the accuracy and loss of the joint stream and the bone stream stabilize after epoch 30: the final accuracy of the joint stream is about 0.873 with a loss of about 0.489, and the accuracy of the bone stream is about 0.869 with a loss of about 0.508. Similarly, on the X-View benchmark the accuracy and loss of both streams stabilize after epoch 30: the final accuracy of the joint stream is about 0.942 with a loss of about 0.197, and the accuracy of the bone stream is about 0.938 with a loss of about 0.203.

    Fig.11 X-view joint loss

    Fig.12 X-view bone accuracy

    Fig.13 X-view bone loss

    The recognition accuracy on the validation sets of the NTU-RGB+D and Kinetics datasets is shown in Table 2 and Table 3. The comparison between the fused results and those of other models is shown in Table 4.

    Table 2 Experimental results under the NTU dataset

    Table 3 Experimental results under the Kinetics dataset

    Table 4 Comparison of validation accuracy between the proposed method and other methods on NTU-RGB+D and Kinetics dataset

    For the NTU-RGB+D dataset, each sample contains at most 2 people; if there are fewer than 2 people in a sample, zeros are used for padding. Each sample has at most 300 frames; if a sample has fewer than 300 frames, it is repeated until it reaches 300. The experiment is carried out on the PyTorch platform with the stochastic gradient descent (SGD) algorithm; the batch size is 16, the weight decay is 0.001, the learning rate is 0.1, and training ends at the 50th epoch. The results of the proposed method on the two NTU-RGB+D benchmarks are shown in Table 2. The Top-1 accuracy of the joint and bone results on X-Sub is 87.3% and 86.9% respectively; on X-View it is 93.7% and 93.8% respectively; and the final Top-1 accuracy of the fused results on X-Sub and X-View is 89.5% and 95.4% respectively.

    For the Kinetics dataset, the experimental setup is the same as in Ref.[6]. The SGD algorithm is used on the PyTorch platform, with a batch size of 8, a weight decay of 0.001 and a learning rate of 0.1; training ends at the 65th epoch. The results of the proposed method on Kinetics are shown in Table 3. The Top-1 accuracy of the joint and bone results is 34.3% and 34.4% respectively, and the final Top-1 accuracy of the fused results is 36.7%.
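    The SGD update with L2 weight decay used in these experiments reduces to a single formula per step. The sketch below illustrates that update with the paper's hyperparameters (lr = 0.1, weight decay = 0.001); the function name is illustrative, and in practice the framework's built-in optimizer would be used.

```python
import numpy as np

def sgd_step(w, grad, lr=0.1, weight_decay=0.001):
    """One SGD update with L2 weight decay:
    w <- w - lr * (grad + weight_decay * w),
    matching the training settings reported in the text
    (batch size 16 for NTU-RGB+D, 8 for Kinetics)."""
    return w - lr * (grad + weight_decay * w)
```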

    As shown in Table 4, compared with 2S-AGCN, the proposed method improves the results by 1.0% and 0.3% on the X-Sub and X-View benchmarks of NTU-RGB+D respectively, and by 0.6% and 0.2% respectively on the Kinetics dataset, which demonstrates the effectiveness of the method. Compared with the graph convolutional network with neural architecture search (GCN-NAS), however, the accuracy on X-Sub is higher but the accuracy on X-View and Kinetics still falls short.

    4 Conclusion

    Aiming at the problem that traditional graph-convolution-based behavior recognition only considers physical connections or joints of the same type when building the model, this paper uses Motif-GCN to extract the spatial information of human skeleton points. The first motif encodes the naturally connected edges of the human body; the second motif encodes the relationships between joints without connectivity in the human skeleton. Joint and bone information are combined to construct a two-stream structure, and experiments are carried out on the large-scale NTU-RGB+D dataset. The final accuracies on the two benchmarks X-Sub and X-View are 89.5% and 95.4% respectively, 1.0% and 0.3% higher than those of the 2S-AGCN model. By adding the relationships between non-physically connected joints and building a two-stream structure that carries more action information, the proposed method strengthens the connections between physically connected and non-physically connected joints in the human skeleton and captures higher-order information. The improvement over the 2S-AGCN model proves the effectiveness of the method, but compared with some more recent methods, such as GCN-NAS, the results still need further improvement.
