
    Micro-Expression Recognition Based on Spatio-Temporal Feature Extraction of Key Regions

    Computers, Materials & Continua, 2023, No. 10

    Wenqiu Zhu, Yongsheng Li, Qiang Liu* and Zhigao Zeng

    1College of Computer Science, Hunan University of Technology, Zhuzhou, 412007, China

    2Intelligent Information Perception and Processing Technology, Hunan Province Key Laboratory, Zhuzhou, 412007, China

    ABSTRACT Aiming at the problems of short duration, low intensity, and difficult detection of micro-expressions (MEs), the global and local features of ME video frames are extracted by combining spatial and temporal feature extraction. Based on the traditional convolutional neural network (CNN) and long short-term memory (LSTM), a recognition method combining a global identification attention network (GIA), a block identification attention network (BIA) and a bi-directional long short-term memory (Bi-LSTM) is proposed. In the BIA, each ME video frame is cropped, and training is carried out with 24 identification blocks (IBs), with 10 IBs, and without IBs. To alleviate overfitting during training, we first extract the basic features of the preprocessed sequence through a transfer learning layer, and then extract the global and local spatial features of the output data through the GIA layer and the BIA layer, respectively. In the BIA layer, the input data are cropped into local feature vectors with attention weights to extract the local features of the ME frames; in the GIA layer, the global features of the ME frames are extracted. Finally, after fusing the global and local feature vectors, the ME time-series information is extracted by Bi-LSTM. The experimental results show that using IBs significantly improves the model's ability to extract subtle facial features, and the model works best when 10 IBs are used.

    KEYWORDS Micro-expression recognition; attention mechanism; long short-term memory network; transfer learning; identification block

    1 Introduction

    Compared with traditional expressions, MEs are expressions of short duration and small movement. As a spontaneous expression, an ME is produced when people try to cover up their genuine internal emotions. It is an expression that can neither be forged nor suppressed [1]. In 1966, Haggard et al. [2] discovered a facial expression that is fast and not easily detected by the human eye and first proposed the concept of MEs. At first, this small and transient facial change did not attract the attention of other researchers. It was not until 1969, when Ekman et al. [3] studied a video of a depressed patient, that they found that patients with smiling expressions would show extremely brief painful expressions. The patient tried to hide his anxiety with a more positive expression, such as a smile. Unlike macro-expressions, MEs only last for 1/25 to 1/5 of a second. Therefore, recognition by the human eye alone does not meet the need for accurate identification [4,5], and it is essential to use modern artificial intelligence methods.

    Research on micro-expression recognition (MER) has undergone a shift from traditional image feature extraction methods to deep learning feature extraction methods. Pfister et al. [6,7] extended feature extraction from the XY plane to the three orthogonal planes XY, XT and YT by using the local binary patterns from three orthogonal planes (LBP-TOP) algorithm. LBP-TOP thus extended static feature extraction to dynamic feature extraction that changes with time, but this recognition method is not ideal for MEs with small intensity changes. Xia et al. [8] found that facial details with minor changes can quickly disappear in deep models for MER. They demonstrated that lower-resolution input data and a shallower model structure can help alleviate this loss of detail, and further proposed a recurrent convolutional network (RCN) to reduce the model size and input data. However, compared with CNNs with attention mechanisms, this design does not perform well in deep models. Xie et al. [9] proposed an MER method based on action units (AUs). Based on the correlation between facial muscles and AUs, this method improves the recognition rate of MEs to a certain extent. Li et al. [10] proposed a model structure based on 3DCNN, an MER method combining an attention mechanism and feature fusion. This model extracts optical flow features and facial features through a deep CNN and adds transfer learning to alleviate overfitting. Gan et al. [11] proposed the OFF-ApexNet framework, which feeds the optical flow features extracted between the onset frame, apex frame and offset frame into a CNN for recognition. However, an ME changes continuously, and relying only on the onset, apex and offset frames may ignore details between video frames. Huang et al. [12] proposed an MER method that uses the optical flow features of apex frames and integrates the SHCFNet framework. SHCFNet combines the extraction of spatial and temporal features, but it ignores the processing of local detail features of MEs. Zhan et al. [13] proposed an MER method based on an evolutionary algorithm, named the GP (genetic programming) algorithm. The GP algorithm can select representative frames from ME video sequences and guide individuals to evolve toward higher recognition ability. This method can efficiently extract time-varying sequence features in MER, but it only performs feature extraction globally and does not consider that the importance of different parts of the face varies in MER. Tang et al. [14] proposed a model based on the optical flow method and the Pseudo 3D Residual Network (P3D ResNet). This method uses optical flow to extract the feature information of the ME optical flow sequence, then extracts the spatial and temporal information of the ME sequence through the P3D ResNet model, and finally performs classification. However, P3D ResNet operates mainly on the entire face area and does not take into account minor local detail changes in MEs. Niu et al. [15] proposed the CBAM-DPN algorithm based on a convolutional block attention module and a dual-path network. The method fuses channel attention and spatial attention, thus enabling feature extraction of local details of MEs. Simultaneously, the DPN structure can suppress useless features and enhance the expressive ability of model features. However, this method only relies on apex frames, ignoring the sequence correlation between ME video frames.

    To address the low intensity, short duration and difficult detection of MEs, we propose an MER method based on key facial regions. This method extracts spatial and temporal information from ME frames. The design of the local IBs in our experiments overcomes the shortcoming of the SHCFNet [12] framework, which only utilizes global feature extraction. Compared with the OFF-ApexNet [11] framework, our method utilizes all video frames from onset to apex, which can capture more detailed facial change information. After spatial feature extraction, we add a Bi-LSTM stage, which further extracts the sequence features of the video frames compared with the CBAM-DPN [15,16] algorithm, thereby improving recognition accuracy. In addition, to further extract the facial details of MEs, we crop the ME video frames into IBs and perform ablation experiments without IBs, with 24 IBs and with 10 IBs. Finally, the different IB schemes are compared according to the experimental results.

    2 Related Work

    2.1 Facial Action Coding System (FACS)

    There are 42 muscles in the human face, and rich expression changes are the result of the joint action of a variety of muscles. Facial muscles that can be consciously controlled are called "voluntary muscles", while facial muscles that cannot be consciously controlled are called "involuntary muscles". In 1976, Ekman et al. [3] proposed the facial action coding system (FACS) based on facial anatomy. FACS divides the human face into 44 AUs, and different AUs represent different local facial actions. For example, AU1 represents the inner brow raiser, while AU5 represents the upper lid raiser [17-19]. An ME is usually the result of the joint action of one or more AUs. For example, the ME representing happiness results from the joint action of AU6 and AU12, where AU6 represents the raising of the cheeks and AU12 represents the upward pull of the mouth corners. FACS is an essential basis for MER, and it is also an action record of facial key-point features such as the eyebrows, cheeks and corners of the mouth [20-22]. In our experiments, the face is divided into several ME IBs according to the AUs.

    2.2 Neural Network with Attention Mechanism

    To address the short duration and low action intensity in MER, we add an attention mechanism to a CNN [23]. This design lets the CNN model not only extract features of the whole face but also focus on changes in local details, enabling it to extract more subtle facial features in MER. A CNN can extract abstract features of MEs [24]. A CNN with a local attention network is used to extract the motion information of critical local units during ME changes, while a CNN with a global attention network extracts global change information. In our experiments, we combine the CNN with the local attention mechanism and the CNN with the global attention mechanism, expecting the improved model to attend to both global and local information.

    2.3 Bi-Directional Long Short-Term Memory Network(Bi-LSTM)

    Traditional CNNs and fully connected (FC) layers share a common limitation: they cannot "memorize" relevant information across time steps when dealing with continuous sequences [25]. Compared with traditional neural networks, a recurrent neural network (RNN) adds a hidden layer that can save state information. This hidden layer carries historical information about the sequence and updates itself with the input sequence. However, the most significant disadvantage of traditional RNNs is that, as the training scale and number of layers increase, they easily suffer from long-term dependency problems [26,27]; that is, gradients tend to vanish or explode when learning long sequences. To solve these problems, Hochreiter et al. proposed LSTM in the 1990s. Each LSTM unit block includes an input gate, a forget gate and an output gate [28]. The input gate determines how much of the input at the current time step is saved to the current cell state; the forget gate determines how much of the previous cell state is kept in the current cell state; the output gate controls how much of the current cell state is used for the output. Bi-LSTM adds a backward layer to the LSTM, so that the model can use not only historical sequence information but also future information [29]. Simultaneously, Bi-LSTM can extract the feature and sequence information of MEs better than LSTM.
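
    For reference, the gating described above is commonly written as follows. The paper does not reproduce the equations, so this is the standard LSTM formulation in our own notation, not the authors' exact one:

```latex
\begin{aligned}
f_t &= \sigma\!\left(W_f x_t + U_f h_{t-1} + b_f\right) && \text{forget gate} \\
i_t &= \sigma\!\left(W_i x_t + U_i h_{t-1} + b_i\right) && \text{input gate} \\
o_t &= \sigma\!\left(W_o x_t + U_o h_{t-1} + b_o\right) && \text{output gate} \\
\tilde{c}_t &= \tanh\!\left(W_c x_t + U_c h_{t-1} + b_c\right) && \text{candidate state} \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{cell-state update} \\
h_t &= o_t \odot \tanh(c_t) && \text{hidden state (output)}
\end{aligned}
```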

    3 Proposed Method

    3.1 Method Overview

    We propose a neural network structure that combines a CNN with attention mechanisms and a Bi-LSTM. To accurately capture small-scale facial movements, we add global and local attention mechanisms [30] to the traditional CNN framework. The improved framework can extract different feature information from multiple facial regions while also processing global information. The improved model architecture is shown in Fig. 1. First, the network uses transfer learning: the pre-processed feature vectors are passed through a VGG16 model with pre-trained weights to extract basic facial features [31]. Then, the facial features extracted from each frame are passed through the GIA and BIA to extract global and local information. Afterward, we fuse the extracted global and local information and extract the sequence-related information through Bi-LSTM. Finally, classification is carried out through a three-layer FC block.
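
    As a rough illustration of this pipeline, the PyTorch-style sketch below wires together a stubbed convolutional backbone, a global attention branch, a block attention branch, a Bi-LSTM and a three-layer classifier. The layer sizes, the stub backbone, the shared block branch, and the way blocks are fused by averaging are our assumptions for readability, not the authors' exact implementation (which uses pre-trained VGG16 features as the transfer learning layer):

```python
import torch
import torch.nn as nn

class AttentionBranch(nn.Module):
    """FC layer plus an attention net producing a weighted feature vector (sketch)."""
    def __init__(self, in_dim, feat_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, feat_dim)   # feature learning
        self.att = nn.Linear(in_dim, 1)         # attention net, outputs a weighted scalar
    def forward(self, x):                       # x: (N, in_dim)
        w = torch.sigmoid(self.att(x))          # attention weight in (0, 1)
        return w * torch.relu(self.fc(x))       # weighted feature

class MERNet(nn.Module):
    """GIA + BIA + Bi-LSTM sketch; dimensions are illustrative assumptions."""
    def __init__(self, feat_dim=256, num_classes=3):
        super().__init__()
        # The paper uses pre-trained VGG16 here (transfer learning layer);
        # a tiny conv net stands in so the sketch runs standalone.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())          # -> (N, 32)
        self.gia = AttentionBranch(32, feat_dim)            # global branch
        self.bia = AttentionBranch(32, feat_dim)            # block branch (shared weights assumed)
        self.bilstm = nn.LSTM(2 * feat_dim, 128, batch_first=True, bidirectional=True)
        self.head = nn.Sequential(                          # three FC layers for classification
            nn.Linear(256, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, num_classes))

    def forward(self, frames, blocks):
        # frames: (B, T, 3, 224, 224); blocks: (B, T, K, 3, 48, 48)
        B, T = frames.shape[:2]
        K = blocks.shape[2]
        g = self.gia(self.backbone(frames.flatten(0, 1)))   # (B*T, feat)
        b = self.bia(self.backbone(blocks.flatten(0, 2)))   # (B*T*K, feat)
        b = b.view(B * T, K, -1).mean(dim=1)                # fuse the K identification blocks
        seq = torch.cat([g, b], dim=-1).view(B, T, -1)      # fuse global and local features
        out, _ = self.bilstm(seq)                           # temporal (sequence) features
        return self.head(out[:, -1])                        # classify from the last time step

# usage: logits = MERNet()(torch.rand(2, 10, 3, 224, 224), torch.rand(2, 10, 10, 3, 48, 48))
```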

    Figure 1: The model combining GIA, BIA and Bi-LSTM. It includes a transfer learning layer, GIA and BIA layers, a Bi-LSTM layer and an FC layer

    To extract the global and local features of the face, we introduce the BIA and GIA frameworks. As shown in Fig. 1, BIA is the upper part of the dashed box in the figure, and GIA is the lower part.

    3.2 BIA Mechanism

    The range of facial variation of an ME is small, which makes it challenging to recognize effectively. This experiment adopts a recognition method that adds attention blocks in the critical regions of the face: representative regions and their corresponding attention weights are added to the facial features to be recognized. In the experiments, we perform ablation studies without cropping, with cropping into 24 ME blocks, and with cropping into 10 ME blocks.

    3.2.1 The Neural Network with Attention Mechanism

    BIA is shown in the upper part of the dashed box in Fig. 1. After cropping in the BIA, the local IBs are obtained; each IB vector then goes through an FC layer and an attention network whose output is a weighted scalar. Finally, each IB yields a weighted feature vector, which is output.

    In the attention network (the upper half of the dashed box in Fig. 1), it is assumed that $c_i$ represents the input feature vector of the $i$-th IB. As in Eq. (1), $\phi(\cdot)$ is the operation in the attention network, and $p_i$ is the attention-weighted scalar of the $i$-th IB. As in Eq. (2), $\tau(\cdot)$ represents the feature learning of the input feature vector, and $v_i$ represents the unweighted feature extracted from the $i$-th IB. As in Eq. (3), $\alpha_i$ is the feature of the $i$-th IB with attention weight. Finally, the weighted feature vectors of all IBs are obtained after this calculation.
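
    The original equations are not reproduced here; based on the definitions above, a plausible form of Eqs. (1) to (3) is the following, where the exact layers behind $\phi(\cdot)$ and $\tau(\cdot)$ are assumptions:

```latex
p_i = \phi(c_i) \quad (1) \qquad
v_i = \tau(c_i) \quad (2) \qquad
\alpha_i = p_i \, v_i \quad (3)
```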

    3.2.2 Generation Method of 24 IBs

    To accurately recognize local facial details, we generate 24 detailed IBs based on facial key points. Face key points can be determined with the Dlib [32] method or the face_recognition [33] method. The Dlib method yields 68 facial key points (see Fig. 2a), and the face_recognition method yields 72 facial key points (see Fig. 2b). In our experiments, we found that the face_recognition method provides more accurate key-point information than the Dlib method. Therefore, we use the face_recognition method for precise positioning when determining the ME IBs. The 24 IBs are generated as follows:

    Figure 2: Comparison of ME key points and IPs. (a) 68 facial key points (b) 72 facial key points (c) 24 IPs (d) 10 IPs. We select 24 and 10 IPs, respectively, from the 72 facial key points for the experiments

    (1) Determine the identification points (IPs): We first extract 72 facial key points using the face_recognition method (see Fig. 2b). Then, based on the 72 facial key points, we convert them into 24 IPs. The locations of the IPs cover the cheeks, mouth, nose, eyes and eyebrows. The conversion process is as follows. Firstly, 16 IPs covering the mouth, nose, eyes and eyebrows are selected directly from the 72 facial key points. The selected key-point numbers (see Fig. 2b) are: 19, 22, 23, 26, 39, 37, 44, 46, 28, 30, 49, 51, 53, 55, 59 and 57; the serial numbers of the generated IPs (see Fig. 2c) are 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 and 16. Secondly, for the eyes, eyebrows and cheeks, we generate IPs from the midpoint coordinates of key-point pairs. For the left eye, left eyebrow and left cheek, we take the midpoints of the point pairs (20, 38), (41, 42) and (18, 59) of the 72 facial key points (see Fig. 2b) as IPs; for the right eye, right eyebrow and right cheek, we take the midpoints of the point pairs (25, 45), (47, 48) and (27, 57). The serial numbers of the generated IPs (see Fig. 2c) are 17, 19, 18, 20, 21 and 22. Finally, for the left and right corners of the mouth, we select key points 49 and 55 from the 72 facial key points (see Fig. 2b) and, based on their coordinates, take points offset from the two mouth corners as the basis for the IP coordinates. The generated IPs at the left and right mouth corners (see Fig. 2c) are numbered 23 and 24. Eqs. (4) and (5) give the calculation of the IPs at the left and right mouth corners, where $(x_{49}, y_{49})$ are the abscissa and ordinate of the 49th of the 72 facial key points, $(x_{55}, y_{55})$ are those of the 55th point, $(x'_{23}, y'_{23})$ is the coordinate of the 23rd of the 24 IPs, and $(x'_{24}, y'_{24})$ is the coordinate of the 24th IP.

    (2) Generate IBs: Finally, we obtain 24 IPs (see Fig. 2c). The 24 selected IPs generate 24 IBs of size 48×48 pixels, each centered on its IP (a sketch of this procedure follows below). To improve the robustness of the model, we perform feature extraction on the IBs after passing them through the transfer learning layer.
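
    The following sketch illustrates, under our own assumptions about the landmark array layout (0-indexed (x, y) pairs from face_recognition) and with a hypothetical fixed offset for the mouth-corner IPs (the paper's Eqs. (4) and (5) are not reproduced here), how the midpoint IPs and the 48×48 IBs centered on them could be generated:

```python
import numpy as np

def midpoint(pts, a, b):
    """Midpoint of two 1-indexed landmarks (pts is a (72, 2) array of (x, y))."""
    return (pts[a - 1] + pts[b - 1]) / 2.0

def crop_block(frame, center, size=48):
    """Crop a size x size identification block centered on an identification point."""
    h, w = frame.shape[:2]
    x, y = int(center[0]), int(center[1])
    half = size // 2
    x0 = np.clip(x - half, 0, w - size)
    y0 = np.clip(y - half, 0, h - size)
    return frame[y0:y0 + size, x0:x0 + size]

def generate_24_ibs(frame, pts, corner_offset=10):
    """Sketch of the 24-IB generation: 16 direct key points, 6 midpoints,
    and 2 points offset from the mouth corners (the offset value is a placeholder)."""
    direct = [19, 22, 23, 26, 39, 37, 44, 46, 28, 30, 49, 51, 53, 55, 59, 57]
    ips = [pts[i - 1] for i in direct]                    # IPs 1-16
    for a, b in [(20, 38), (41, 42), (18, 59), (25, 45), (47, 48), (27, 57)]:
        ips.append(midpoint(pts, a, b))                   # IPs 17-22 (eyes, brows, cheeks)
    ips.append(pts[48] - np.array([corner_offset, 0]))    # IP 23: left mouth corner
    ips.append(pts[54] + np.array([corner_offset, 0]))    # IP 24: right mouth corner
    return [crop_block(frame, p) for p in ips]

# usage: blocks = generate_24_ibs(np.zeros((224, 224, 3)), np.random.rand(72, 2) * 224)
```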

    3.2.3 Generation Method of 10 IBs

    The 24 IBs cover the face relatively completely, but in the experiments we found that covering the face too finely may cause the BIA to learn redundant features. In subsequent experiments, we therefore obtained 10 IBs based on FACS. The 10 IBs cover the eyebrows, eyes, nose, mouth and chin of the human face relatively completely. The detailed steps for obtaining the 10 IBs are as follows:

    (1) Determine the IPs: We obtain 72 facial key points through face_recognition and then convert them into 10 IPs. The conversion process is as follows. We first determine the side length of the IB area: we take half of the horizontal distance between points 49 and 55 (see Fig. 2b) as the side length. For the eyebrow part, we take the midpoint of the 20th and 25th points (see Fig. 2b) of the 72 facial key points as the coordinate of the 8th IP (see Fig. 2d). The 9th and 10th IPs are generated from the existing 8th IP, as given in Eqs. (6) and (7), where $(x'_9, y'_9)$ and $(x'_{10}, y'_{10})$ are the coordinates of the 9th and 10th of the 10 IPs, $(x'_8, y'_8)$ are the abscissa and ordinate of the 8th IP, and $width$ is the side length of the square IB area under the 10 IPs.

    For the eyes, we take the coordinates of the 37th and 46th points (see Fig. 2b) of the 72 facial key points as the basis for generating the 6th and 7th IPs (see Fig. 2d). The generation of IPs 6 and 7 is given in Eqs. (8) and (9), where $(x'_6, y'_6)$ and $(x'_7, y'_7)$ are the coordinates of the 6th and 7th of the 10 IPs, $(x_{37}, y_{37})$ are the abscissa and ordinate of the 37th of the 72 facial key points, $(x_{46}, y_{46})$ are those of the 46th point, and $width$ is the side length of the square IB area under the 10 IPs.

    For the nose, we take the coordinates of the 32nd and 36th points (see Fig. 2b) of the 72 facial key points as the basis for generating the 4th and 5th IPs (see Fig. 2d). The generation of IPs 4 and 5 is given in Eqs. (10) and (11), where $(x'_4, y'_4)$ and $(x'_5, y'_5)$ are the coordinates of the 4th and 5th of the 10 IPs, $(x_{32}, y_{32})$ are the abscissa and ordinate of the 32nd of the 72 facial key points, $(x_{36}, y_{36})$ are those of the 36th point, and $width$ is the side length of the square IB area under the 10 IPs.

    For the lips, we directly take the 49th and 55th points (see Fig. 2b) of the 72 facial key points as the coordinates of the 1st and 2nd IPs (see Fig. 2d). Finally, for the chin, we take the 9th point (see Fig. 2b) of the 72 facial key points as the basis for generating the 3rd IP (see Fig. 2d). The generation of the 3rd IP is given in Eq. (12), where $(x'_3, y'_3)$ is the coordinate of the 3rd of the 10 IPs, $(x_9, y_9)$ are the abscissa and ordinate of the 9th of the 72 facial key points, and $width$ is the side length of the square IB area under the 10 IPs.

    (2) Generate IBs: Finally, we obtain 10 IPs (see Fig. 2d). We again take half of the horizontal distance between points 49 and 55 (see Fig. 2b) as the side length of each IB. The 10 selected IPs generate 10 IBs centered on the IPs in the experiment (see the sketch below). To improve the robustness of the model, we perform feature extraction on the IBs after passing them through the transfer learning layer.
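
    Since Eqs. (6) to (12) are not reproduced here, the sketch below shows one plausible reading of them: the block side length is half the horizontal mouth-corner distance, and the eyebrow, eye, nose and chin IPs are generated by shifting their reference points by that width or by half of it. The exact offsets are our assumptions, not the authors' formulas:

```python
import numpy as np

def generate_10_ips(pts):
    """One plausible reading of the 10-IP generation (offsets are assumptions).
    pts is a (72, 2) array of (x, y) landmarks from face_recognition, 1-indexed below."""
    p = lambda i: pts[i - 1]
    width = abs(p(55)[0] - p(49)[0]) / 2.0            # side length of the square IB
    ips = {}
    ips[1], ips[2] = p(49), p(55)                     # mouth corners (IPs 1-2)
    ips[3] = p(9) - np.array([0.0, width / 2.0])      # chin, shifted up by width/2 (assumed)
    ips[4] = p(32) - np.array([width / 2.0, 0.0])     # left of nose, shifted left (assumed)
    ips[5] = p(36) + np.array([width / 2.0, 0.0])     # right of nose, shifted right (assumed)
    ips[6] = p(37) - np.array([width / 2.0, 0.0])     # left eye region (assumed offset)
    ips[7] = p(46) + np.array([width / 2.0, 0.0])     # right eye region (assumed offset)
    ips[8] = (p(20) + p(25)) / 2.0                    # mid-brow (midpoint of points 20 and 25)
    ips[9] = ips[8] - np.array([width, 0.0])          # left brow, one width to the left (assumed)
    ips[10] = ips[8] + np.array([width, 0.0])         # right brow, one width to the right (assumed)
    return ips, width

# usage: ips, width = generate_10_ips(np.random.rand(72, 2) * 224)
```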

    3.3 GIA Mechanism

    The BIA can learn subtle changes in facial features, but we need to extract not only local facial features but also global ones. Therefore, integrating global features into recognition is expected to improve the recognition of MEs.

    The detailed structure of the GIA is shown in the lower half of the dashed box in Fig. 1. The input feature vector of the GIA has size 512×28×28. In the GIA, we first pass the input feature vector through the conv4_2 to conv5_2 layers of the VGG16 network to obtain a feature vector of size 512×14×14; this feature vector is then passed through an FC layer and an attention network whose output is a weighted scalar, and finally a weighted global feature vector is output.

    3.4 Bi-LSTM Mechanism

    The GIA and BIA can extract the local and global information of a single ME frame. However, ME video frames change dynamically over continuous time, so we also need to extract the temporal sequence information of MEs. LSTM is a structure designed to overcome the long-term dependency problem of traditional RNNs. Bi-LSTM adds a reverse layer to LSTM, so the network can utilize not only historical information but also future information [34,35].

    Bi-LSTM is shown in Fig. 3. Bi-LSTM replaces each node of the bidirectional RNN with an LSTM unit. We define the input feature sequence of the Bi-LSTM network as $X=(x_1,\dots,x_T)$, the hidden-layer sequence in the forward pass as $\overrightarrow{h}=(\overrightarrow{h_1},\dots,\overrightarrow{h_T})$, the hidden-layer sequence in the backward pass as $\overleftarrow{h}=(\overleftarrow{h_1},\dots,\overleftarrow{h_T})$, and the output sequence of the Bi-LSTM model as $y=(y_1,\dots,y_T)$. We obtain the following formulas:

    In the above formulas, $S(x)$ is the activation function, $W$ denotes the weights of the Bi-LSTM, and $b$ is the bias. Each unit is computed using the LSTM cell shown in Fig. 4.
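
    The formulas themselves are not reproduced above; in the standard Bi-LSTM formulation they take the following form (our notation, consistent with the definitions above):

```latex
\begin{aligned}
\overrightarrow{h_t} &= S\big(W_{x\overrightarrow{h}}\,x_t + W_{\overrightarrow{h}\overrightarrow{h}}\,\overrightarrow{h_{t-1}} + b_{\overrightarrow{h}}\big)\\
\overleftarrow{h_t}  &= S\big(W_{x\overleftarrow{h}}\,x_t + W_{\overleftarrow{h}\overleftarrow{h}}\,\overleftarrow{h_{t+1}} + b_{\overleftarrow{h}}\big)\\
y_t &= W_{\overrightarrow{h}y}\,\overrightarrow{h_t} + W_{\overleftarrow{h}y}\,\overleftarrow{h_t} + b_y
\end{aligned}
```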

    Figure 3:Bidirectional RNN model diagram

    The input of the Bi-LSTM layer is the feature vector produced by the BIA and GIA. The Bi-LSTM layer adopts a single-layer bidirectional LSTM with a hidden layer of 128 nodes. To increase the robustness of the network and reduce complex co-adaptation between neurons, we add a dropout layer between the Bi-LSTM layer and the FC layer to randomly mask neurons with a certain probability.

    Figure 4:LSTM cell

    4 Experiments and Results

    We selected four datasets for the experiments. We pre-process each dataset and then use accuracy, unweighted F1-score and unweighted average recall as evaluation criteria. Finally, we conducted experiments without IBs, with 24 IBs, and with 10 IBs, and compared the results with different algorithms.

    4.1 Selection of Datasets

    Four datasets, CASME II, SAMM, SMIC and MEGC, were selected for the experiments. In the experiments, we divided expressions into three categories: negative, positive and surprise.

    4.1.1 CASME II Dataset

    The CASME II [36] dataset was established by the team of Fu Xiaolan at the Institute of Psychology, Chinese Academy of Sciences. It was recorded with a 200 fps high-speed camera at a frame size of 640×480 pixels. There are 255 samples in the dataset, the average age of the participants is 22 years, and the total number of subjects is 24. The dataset includes emotion labels for each sample and video sequence annotations of the onset frame, apex frame and offset frame [37-39]. The labels include repression, disgust, happiness, surprise, fear, sadness and others. In the experiments, we re-partitioned the CASME II dataset; the division results are shown in Table 1.

    Table 1: Dataset division on CASME II

    4.1.2 SAMM Dataset

    The SAMM [40] dataset has 149 video clips captured from 32 participants from 13 countries. The participants include 17 White British (53.1% of the participants), 3 Chinese, 2 Arabs and 2 Malays, in addition to one Spanish, one Pakistani, one Arab, one African Caribbean, one British African, one African, one Nepalese and one Indian participant. The average age of the participants was 33.24 years, with a balanced number of male and female participants. There were significant differences in the race and age of the participants, and the imbalance of the label classes was also evident. The SAMM dataset was recorded with a 200 fps high-frame-rate camera at a resolution of 960×650 per frame [41-43]. The dataset is annotated with the positions of the onset, offset and apex frames of the MEs, as well as emotion labels and action unit information. The labels include disgust, contempt, anger, sadness, fear, happiness, surprise and others. In the experiments, we re-partitioned the SAMM dataset; the division results are shown in Table 2.

    Table 2: Dataset division on SAMM

    4.1.3 SMIC Dataset

    The SMIC dataset consists of 16 participants and 164 ME clips. The volunteers include 8 Asians and 8 Caucasians. The SMIC dataset was recorded with a 100 fps camera at a resolution of 640×480 per frame [44,45]. The SMIC dataset includes three categories, negative, positive and surprise, and we do not re-partition it in the experiments. The SMIC dataset classification is shown in Table 3.

    Table 3: Dataset division on SMIC

    4.1.4 MEGC Composite Dataset

    The MEGC composite dataset has 68 volunteers, including 24 from the CASME II dataset, 28 from the SAMM dataset and 16 from the SMIC dataset. The classification of the composite dataset is shown in Table 4.

    Table 4: Dataset division on MEGC composite dataset

    4.2 Data Pre-Processing

    Apex frames are annotated in the CASME II and SAMM datasets. However, in the experiments we found that some apex-frame annotations are inaccurate or even mislabeled. In addition, there is no apex frame information in the SMIC dataset. Therefore, it is necessary to re-label the apex frames [46]. In the experiments, we obtain the apex frame position by calculating the absolute pixel difference of the gray values between the current frame and the onset and offset frames. To reduce the interference of image noise, we simultaneously calculate the absolute pixel difference between the adjacent frame and the current frame, and divide the two values. Finally, a difference value between each frame and the onset and offset frames is obtained, and the frame with the largest difference value is selected as the apex frame.

    As in Eqs. (16) and (17), $x_i$ and $x_j$ represent the $i$-th and $j$-th frames in an ME video sequence, and $f(x_i, x_j)$ represents the difference between the $i$-th and $j$-th frames. Adding 1 to the numerator and denominator ensures that the formula remains well defined for particular values. In Eq. (17), $x_i$ represents the current $i$-th frame, $x_{on}$ the onset frame, $x_{off}$ the offset frame, and $dif_i$ the difference value between the $i$-th frame and the onset and offset frames. As shown in Fig. 5, the position with the largest difference value, marked by the red vertical line, is the position of the apex frame.
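
    Since Eqs. (16) and (17) are not reproduced here, the sketch below implements one plausible reading of the procedure described above: a gray-value difference against the onset and offset frames, normalized by the difference to the adjacent frame to damp noise, with +1 terms to keep the ratio well defined. The exact combination of terms is our assumption:

```python
import numpy as np

def frame_diff(a, b):
    """Sum of absolute gray-value differences between two frames (H x W arrays)."""
    return np.abs(a.astype(np.float64) - b.astype(np.float64)).sum()

def find_apex(frames):
    """Pick the apex frame of a gray-scale ME sequence (list of H x W frames).
    Each frame is compared with the onset and offset frames, normalized by the
    difference to its adjacent frame (our reading of Eqs. (16)-(17))."""
    onset, offset = frames[0], frames[-1]
    scores = []
    for i, cur in enumerate(frames):
        adjacent = frames[i - 1] if i > 0 else frames[1]
        num = frame_diff(cur, onset) + frame_diff(cur, offset) + 1.0
        den = frame_diff(cur, adjacent) + 1.0
        scores.append(num / den)
    return int(np.argmax(scores))   # index of the frame with the largest difference value

# usage: apex_idx = find_apex([np.random.rand(64, 64) for _ in range(30)])
```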

    After determining the apex frame, we use the temporal interpolation model (TIM) [47] to process the video frames from the onset frame to the apex frame into a fixed input sequence of 10 frames. We apply Local Weighted Mean Transformation (LWMT) [48] to the 10-frame sequence. The faces are aligned and cropped according to the eye positions in the first frame of each video, and the video frames are normalized to 224×224 pixels by bilinear interpolation [49]. When determining the 24 facial IBs, we first use face_recognition to obtain 72 facial key points; after analyzing them, we select 24 facial motion IPs and generate 24 IBs from them. When determining the 10 facial IBs, we likewise obtain 72 facial key points with face_recognition, select 10 representative IPs, and generate 10 IBs from them. Finally, we feed the pre-processed video frames and the corresponding IBs of each frame into the model for training.
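
    As a minimal stand-in for this preprocessing (TIM and LWMT themselves are more involved), the sketch below resamples an onset-to-apex clip to 10 frames by uniform index sampling and resizes each frame to 224×224 with bilinear interpolation; OpenCV is assumed to be available:

```python
import cv2
import numpy as np

def resample_and_resize(frames, num_frames=10, size=224):
    """Resample a clip (list of H x W x 3 frames) to a fixed length and resize it.
    Uniform index sampling is used here as a simple stand-in for TIM."""
    idx = np.linspace(0, len(frames) - 1, num_frames).round().astype(int)
    out = [cv2.resize(frames[i], (size, size), interpolation=cv2.INTER_LINEAR) for i in idx]
    return np.stack(out)   # (num_frames, size, size, 3)

# usage: clip = resample_and_resize([np.zeros((480, 640, 3), np.uint8)] * 37)
```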

    Figure 5: The change process of the difference value for different frames in an ME video. The position with the largest difference value, marked by the red vertical line, is the position of the apex frame

    4.3 Experimental Evaluation Criteria

    Due to the small sample size of ME datasets, to ensure the reliability of the experiments we adopt Leave-One-Subject-Out (LOSO) cross-validation [50]. That is, the dataset is split by subject: in each round, all videos of one subject are used for testing and the remaining subjects are used for training, until every subject has been used for testing. Finally, all test results are combined and used as the final experimental result.
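
    A LOSO split like the one described above can be expressed, for example, with scikit-learn's LeaveOneGroupOut, where the group labels are the subject IDs; the array names and sizes below are placeholders:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# placeholder data: one row per video clip, with its label and subject ID
X = np.random.rand(20, 8)                  # clip-level features
y = np.random.randint(0, 3, size=20)       # three emotion classes
subjects = np.random.randint(0, 5, size=20)

loso = LeaveOneGroupOut()
for train_idx, test_idx in loso.split(X, y, groups=subjects):
    # train on all subjects except one, test on the held-out subject,
    # then merge the per-fold predictions into the final result
    pass
```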

    We adopt the evaluation metrics $UF1$ (unweighted F1-score), $UAR$ (unweighted average recall) and $Acc$ (accuracy) [46-51]. The calculation of $UF1$ is shown in Eq. (18), where $TP_i$, $FP_i$ and $FN_i$ are the numbers of true positives, false positives and false negatives in the $i$-th category, respectively, and $C$ is the number of categories. The calculation of $UAR$ is shown in Eq. (19), where $TP_i$ is the number of correct predictions in the $i$-th category and $N_i$ is the number of samples in the $i$-th category. $Acc$ is shown in Eq. (20), where $TP$ is the number of true positives over all categories and $FP$ is the number of false positives over all categories.
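
    The equations are not reproduced above; based on the standard definitions of these metrics, Eqs. (18) to (20) take the following form:

```latex
UF1 = \frac{1}{C}\sum_{i=1}^{C}\frac{2\,TP_i}{2\,TP_i + FP_i + FN_i} \quad (18) \qquad
UAR = \frac{1}{C}\sum_{i=1}^{C}\frac{TP_i}{N_i} \quad (19) \qquad
Acc = \frac{TP}{TP + FP} \quad (20)
```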

    4.4 Experimental Results

    Training uses the Adam optimizer; the learning rate is 0.0001; the number of epochs is set to 100; the training batch_size is set to 16. Because the ME datasets are small, the model is prone to overfitting. To improve the robustness and generalization ability of the model, we apply L2 regularization to the model parameters, adding λ times the L2 parameter norm to the loss function. Repeated experiments show that the model works best when λ is set to 0.00001. In addition, we apply random rotation with angles from -8 to 8 degrees and random cropping for data augmentation in our experiments.
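
    A minimal sketch of this training configuration in PyTorch might look as follows; using the optimizer's weight_decay as the λ-weighted L2 term and torchvision transforms for the augmentation are our simplifications, and the model and crop padding are placeholders:

```python
import torch
from torchvision import transforms

model = torch.nn.Linear(512, 3)                    # stand-in for the full MER model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4,
                             weight_decay=1e-5)    # lambda = 0.00001 applied as an L2 penalty
criterion = torch.nn.CrossEntropyLoss()

augment = transforms.Compose([
    transforms.RandomRotation(degrees=8),          # random rotation in [-8, 8] degrees
    transforms.RandomCrop(224, padding=8),         # random cropping (padding value assumed)
])

# training loop skeleton: 100 epochs, batch_size = 16 (data loading omitted)
for epoch in range(100):
    pass
```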

    4.4.1 Experimental Results on CASME II Dataset

    The experimental results are shown in Table 5. On the CASME II dataset, the average LOSO accuracy without IBs is 0.7364, UF1 is 0.6899, and UAR is 0.7122; with 24 IBs, the average LOSO accuracy is 0.8175, UF1 is 0.7779 and UAR is 0.7842; with 10 IBs, the average LOSO accuracy is 0.8513, UF1 is 0.8256 and UAR is 0.8570. From Table 5, we can see that on the CASME II dataset, the model with 24 IBs improves accuracy by 0.0811, UF1 by 0.0880 and UAR by 0.0720 compared with the model without IBs. The accuracy, UF1 and UAR of the 10-IB model improve further relative to 24 IBs: accuracy increases by 0.0338, UF1 by 0.0477 and UAR by 0.0728.

    Table 5: Training results with different IBs

    The confusion matrices without IBs, with 24 IBs and with 10 IBs are shown in Figs. 6a-6c. The confusion matrices of the three methods show a common pattern on the CASME II dataset: the predictions are concentrated around "negative" and "surprise", where the accuracy is relatively high. This is mainly caused by the unbalanced distribution of the dataset. Because it is difficult to elicit "positive" MEs during the collection of the CASME II dataset, the number of samples labeled "negative" and "surprise" is much larger than that of "positive". This imbalance in the dataset distribution affects the training accuracy.

    4.4.2 Experimental Results on SAMM Dataset

    The experimental results are shown in Table 5. On the SAMM dataset, the average LOSO accuracy without IBs is 0.7235, UF1 is 0.5624, and UAR is 0.5907; with 24 IBs, the average LOSO accuracy is 0.7580, UF1 is 0.6066 and UAR is 0.6258; with 10 IBs, the average LOSO accuracy is 0.7642, UF1 is 0.6850 and UAR is 0.7207. From Table 5, we can see that on the SAMM dataset, the 24-IB model improves accuracy by 0.0345, UF1 by 0.0442 and UAR by 0.0351 compared with the model without IBs. The accuracy, UF1 and UAR of the 10-IB model improve further relative to 24 IBs: accuracy increases by 0.0062, UF1 by 0.0784 and UAR by 0.0949.

    The confusion matrices without IBs, with 24 IBs and with 10 IBs are shown in Figs. 6d-6f. From the confusion matrices, we can see that the number of "surprise" samples in the SAMM dataset is very small, which is one reason why the UF1 and UAR scores in Table 5 are far lower than the accuracy. By comparing the confusion matrices of the experiments without IBs, with 24 IBs and with 10 IBs, we find that adding IBs improves the recognition performance of the model and reduces the number of misclassifications.


    Figure 6: Confusion matrix results on the CASME II, SAMM, SMIC and MEGC datasets. We experimented with 24 IBs, with 10 IBs, and without IBs on each dataset. The results show that using IBs effectively increases the robustness and recognition performance of the model, and 10 IBs work best

    4.4.3 Experimental Results on SMIC Dataset

    The experimental results are shown in Table 5. On the SMIC dataset, the average LOSO accuracy without IBs is 0.6025, UF1 is 0.5931, and UAR is 0.5995; with 24 IBs, the average LOSO accuracy is 0.6602, UF1 is 0.6430 and UAR is 0.6423; with 10 IBs, the average LOSO accuracy is 0.6858, UF1 is 0.6749 and UAR is 0.6735. From Table 5, we can see that on the SMIC dataset, the 24-IB model improves accuracy by 0.0577, UF1 by 0.0499 and UAR by 0.0428 compared with the model without IBs. The accuracy, UF1 and UAR of the 10-IB model improve further relative to 24 IBs: accuracy increases by 0.0256, UF1 by 0.0319 and UAR by 0.0312.

    The confusion matrices without IBs, with 24 IBs and with 10 IBs are shown in Figs. 6g-6i. The accuracy on the SMIC dataset is lower than on the CASME II and SAMM datasets, mainly because of the lower frame rate and resolution of SMIC. In addition, the recording environment of the SMIC dataset is relatively dark, and there is more noise interference than in the CASME II and SAMM datasets. On the SMIC dataset, by comparing the confusion matrices of the experiments without IBs, with 24 IBs and with 10 IBs, we find that adding IBs increases recognition accuracy, especially for "positive" expressions. This is because the IBs with an attention mechanism increase the ability to extract facial detail features of MEs.

    4.4.4 Experimental Results on MEGC Composite Dataset

    The experimental results are shown in Table 5. On the MEGC composite dataset, the average LOSO accuracy without IBs is 0.6674, UF1 is 0.6126, and UAR is 0.6070; with 24 IBs, the average LOSO accuracy is 0.7197, UF1 is 0.6627 and UAR is 0.6421; with 10 IBs, the average LOSO accuracy is 0.7658, UF1 is 0.7364 and UAR is 0.7337. From Table 5, we can see that on the MEGC composite dataset, the 24-IB model improves accuracy by 0.0523, UF1 by 0.0501 and UAR by 0.0351 compared with the model without IBs. The accuracy and scores of the 10-IB model improve further relative to 24 IBs: accuracy increases by 0.0461, UF1 by 0.0737 and UAR by 0.0916.

    The confusion matrices without IBs, with 24 IBs and with 10 IBs are shown in Figs. 6j-6l. The MEGC composite dataset places high demands on model robustness because it fuses three datasets with considerable differences. Compared with the model without IBs, the confusion matrices with IBs show higher prediction accuracy for negative expressions. We also found that the prediction accuracy for negative expressions was highest when using 24 blocks. This is because negative expressions mainly involve eyebrow and eye movements, and the 24 IBs place more points at the eyebrows and eyes, so more facial details are extracted there. However, paying too much attention to local details makes the overall robustness of the model worse, which is also why the overall accuracy of 10 IBs is higher than that of 24 IBs.

    4.5 Data Analysis

    The comparison of the recognition performance of different algorithms is shown in Table 6. The improved model performs best when the number of IBs is 10. The data in Table 6 show that the 10-IB model improves on previous recognition algorithms. Compared with the P3D ResNet model, UF1 and UAR increase by 0.0067 and 0.0463 on the CASME II dataset; on the SAMM dataset, UF1 improves by 0.0447 and UAR by 0.0939; on the SMIC dataset, UF1 improves by 0.0219 and UAR by 0.0236; on the MEGC composite dataset, UF1 improves by 0.0011 and UAR by 0.0094. Compared with the GP model, UAR increases by 0.0174 on the CASME II dataset; on the SAMM dataset, UF1 increases by 0.0847 and UAR by 0.1253; on the SMIC dataset, UF1 increases by 0.0012 and UAR by 0.0075; on the MEGC composite dataset, accuracy improves by 0.0022, UF1 by 0.0160 and UAR by 0.0274. Compared with the CBAM-DPN model, UF1 improves by 0.0772 and UAR by 0.1054 on the CASME II dataset; on the SMIC dataset, UF1 improves by 0.0433 and UAR by 0.0174; on the MEGC composite dataset, UF1 improves by 0.0161 and UAR by 0.0044.

    Table 6: Comparison of the recognition performance of different algorithms

    This is because the GP model is based on an evolutionary algorithm, which is effective at extracting features of ME sequences that change over time; however, it only extracts global features and does not consider that different parts of the face carry different weights in MER. The CBAM-DPN model adds channel and spatial attention for extracting local details of MEs, but it relies only on the onset and apex frames for identification and ignores the valuable ME information in the other consecutive frames. P3D ResNet can use optical flow to extract sequence information and considers the spatial and temporal information in consecutive frames, but it does not take into account the variability of different facial parts.

    5 Conclusion

    Aiming at the short duration and small movement range of MEs, we propose a recognition method combining the GIA and BIA frameworks. In the BIA framework, the ME frames are cropped into blocks, and we perform ablation experiments without cropping, with 24 blocks and with 10 blocks. Considering that ME datasets are small and prone to overfitting, we first extract basic features from the pre-processed ME video frames through VGG16; the global and local features are then extracted by the GIA and BIA; the sequence information of each frame is extracted by Bi-LSTM; and finally classification is performed by three FC layers. Experiments show that the combination of attention networks with IBs and Bi-LSTM can effectively extract useful spatial and sequence information from video frames with small action amplitude, and it achieves high accuracy in the experiments. The model performs best with 10 IBs. However, the small sample size of ME datasets, together with the generally short duration and low intensity of MEs, remains the main reason for the limited recognition rate, which is particularly evident in the confusion matrices. In addition, although our method uses TIM to produce a fixed input sequence, the low efficiency of the model still needs to be addressed, because multiple video frames are used for feature extraction.

    In future research, the quality and quantity of ME datasets need to be further improved to address the small sample size. For the low intensity of MEs, the next step is to make fuller use of each sequence by applying TIM both from the onset frame to the apex frame and from the apex frame to the offset frame. In addition, the range of the IBs can be adjusted according to future experiments; the selected IBs should be as representative and as robust to interference as possible.

    Acknowledgement: Firstly, I would like to thank Mr. Zhu Wenqiu for his guidance and suggestions on the research direction of this paper. I am also very grateful to the reviewers for their useful opinions and suggestions, which have improved the article.

    Funding Statement: This work is partially supported by the National Natural Science Foundation of Hunan Province, China (Grant Nos. 2021JJ50058, 2022JJ50051), the Open Platform Innovation Foundation of the Hunan Provincial Education Department (Grant No. 20K046), and the Scientific Research Fund of the Hunan Provincial Education Department, China (Grant Nos. 21A0350, 21C0439, 19A133).

    Author Contributions: Conceptualization, Z.W.Q and L.Y.S; methodology, Z.W.Q; validation, Z.W.Q, L.Y.S, Z.Z.G and L.Q; formal analysis, Z.W.Q and Z.Z.G; investigation, L.Y.S; resources, Z.W.Q; data curation, L.Q; writing, original draft preparation, Z.W.Q and L.Y.S; writing, review and editing, Z.W.Q and L.Y.S; visualization, Z.Z.G; supervision, L.Q; project administration, Z.W.Q; funding acquisition, Z.W.Q and Z.Z.G. All authors have read and agreed to the published version of the manuscript.

    Availability of Data and Materials: The data used to support the findings of this study are included within the article.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
