
    HybridHR-Net: Action Recognition in Video Sequences Using Optimal Deep Learning Fusion Assisted Framework

    Computers, Materials & Continua, September 2023

    Muhammad Naeem Akbar, Seemab Khan, Muhammad Umar Farooq, Majed Alhaisoni, Usman Tariq and Muhammad Usman Akram

    1Department of Computer Engineering, National University of Sciences and Technology (NUST), Islamabad, 46000, Pakistan

    2Department of Robotics, SMME NUST, Islamabad, 45600, Pakistan

    3Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, 11671, Saudi Arabia

    4Management Information System Department, College of Business Administration, Prince Sattam bin Abdulaziz University, Al-Kharj, 16278, Saudi Arabia

    ABSTRACT The combination of spatiotemporal videos and essential features can improve the performance of human action recognition (HAR); however, any individual type of feature usually degrades performance when actions are similar and backgrounds are complex. Deep convolutional neural networks have improved performance in recent years for several computer vision applications because they capture spatial information. This article proposes a new framework for human action recognition in video surveillance, dubbed HybridHR-Net. Deep transfer learning is used to fine-tune the pre-trained EfficientNet-b0 deep learning model on a few selected datasets. Bayesian optimization is employed to tune the hyperparameters of the fine-tuned deep model. Instead of fully connected layer features, we considered the average pooling layer features and applied two feature selection techniques: an improved artificial bee colony algorithm and an entropy-based approach. Using a serial technique, the selected features are combined into a single vector, and the result is categorized by machine learning classifiers. Five publicly accessible datasets have been utilized for the experiments, obtaining notable accuracies of 97%, 98.7%, 100%, 99.7%, and 96.8%, respectively. Additionally, the proposed framework is compared with contemporary methods to demonstrate the increase in accuracy.

    KEYWORDS Action recognition; entropy; deep learning; transfer learning; artificial bee colony; feature fusion

    1 Introduction

    Over the last decade, machine learning (ML) has emerged as one of the most rapidly growing fields in advanced computer science. Several studies in activity recognition have been conducted using machine learning and computer vision [1]. However, they encountered various types of, and similarities between, multiple human actions, making it more difficult to identify an action accurately. Several techniques for action recognition have been introduced in the past. These techniques belong to traditional ML methods such as Convolutional Neural Networks (CNN) and sparse coding (SC). A few advanced ML techniques, including Long Short-Term Memory (LSTM), Deep Convolutional Neural Networks (DCNN), and recurrent neural networks (RNN), have also been employed for action recognition with improved accuracy [2].

    These advanced techniques comprise complex architectures that require a lot of memory and are limited by the computational resources available to HAR applications. Real-world applications of HAR include Human-Computer Interaction (HCI) and intelligent video surveillance. Mobile Edge Computing (MEC) also contributes substantially to technology integration in the field of medicine; automation of remote health care supervision is one of its advantages, and the technique is likewise applicable to action recognition. Services where HAR might apply include content-centric summarization [3], sports video analysis and evaluation, and remote health monitoring for intelligent surveillance. Silhouette-based features can support robust detection of actions in a real-time environment [4]. Action recognition from video streams has advanced from analyzing the present action to forecasting the coming one, which applies strongly to surveillance, driverless cars, and entertainment [5].

    EfficientNet models [6] are state-of-the-art deep CNN (DCNN) models built on a simple yet highly effective compound scaling function. The function can scale a baseline CNN to a target resource budget while maintaining model efficiency. EfficientNet is scalable in terms of layer depth, width, and input resolution, which enables it to outperform other DCNNs such as AlexNet, GoogleNet, and MobileNet. It has become an important, basic component of new computer vision research, especially in deep learning. In the proposed technique, EfficientNet [7] is used to extract the best features from multiple datasets, and these feature vectors are further processed. Transfer learning involves transferring information from the source domain (ImageNet) to the target domain [8]; here, information is transferred to obtain the best features from the datasets. The fully connected layers are modified to account for the number of classes in each dataset. This technique helps create a high-performance method that uses pre-trained models [9].

    Major Challenges and Contributions: Intangible ML and Data Mining (DM) techniques have been applied to solve numerous real applications. Feature fusion is a technique in which feature vectors extracted from the training images are fused based on some pre-determined standard [10]; the fused vector retains the best, highest-contribution features. In supervised learning, the dataset is divided into two sets, training and testing, according to a ratio set by the researcher. Training images are used to make the model learn, and the proposed model is then validated on testing images against pre-defined evaluation parameters [11]. Current deep learning systems mainly focus on hybridizing the latest and traditional deep learning methods. Most hybrid techniques manage to improve accuracy, but they pay little attention to reducing time complexity. Computational time is a significant factor, especially in action recognition, because the system needs to identify the correct action in minimum time [12]. Other factors that need to be addressed for better results include redundant and irrelevant or unimportant features.

    In this work, we propose a deep learning and entropy-controlled optimization-based framework for action recognition. The following are our main contributions:

    • The EfficientNet-B0 deep learning model is fine-tuned and trained on the selected action recognition datasets using deep transfer learning. The deep model's training has been done with static hyperparameters.

    • An entropy-controlled Artificial Bee Colony optimization algorithm is proposed for best-feature selection.

    • Fusion is performed using a mean-deviation-based serial threshold function.

    2 Literature Review

    Recently, HAR has grown in importance as a research field. Researchers have adapted several supervised and unsupervised learning methods for HAR applications [13]. It is essential to consider all available clues to analyze human behavior and predict the appropriate action. Human actions can also be identified using a blend of traditional techniques and advanced deep learning methods. Traditional methods for action recognition may not produce the best results when used in isolation; a hybrid of conventional and advanced techniques has performed better in several recent studies.

    Masmoudi et al. [2] presented an unsupervised CNN that overcomes memory and computational issues to a great extent. PCANet-TOP is an unsupervised convolutional PCANet architecture that can learn spatiotemporal features from Three Orthogonal Planes (TOP). Whitening PCA was used to reduce the dimensions of the learned features, and a Support Vector Machine (SVM) classified the actions. The presented technique was assessed on the Weizmann, KTH (Royal Institute of Technology), UCF Sports, and YouTube Actions datasets, achieving accuracies of 90%, 87.33%, 92.67%, and 81.40%, respectively. The results showed that the PCANet-TOP model provides distinguishing and balanced features using TOP and attains comparatively better results than existing techniques. Ramya et al. [14] presented an algorithm based on distance transform and entropy features extracted from human silhouettes. The first step was to obtain the silhouettes using a correlation coefficient-based frame difference method. The next step was to extract features using entropy and the distance transform, which provided the model with contour and deviation information. In the final step, the extracted features were given to neural networks to classify human actions. The model was assessed on the Weizmann, KTH, and UCF50 datasets, achieving accuracies of 92.5%, 91.4%, and 80%, respectively. The researchers observed that there is still room for improvement and that results could be improved by adjusting the training/testing ratio in the future. The local variation features and fused shape features accounted for the algorithm's better performance.

    Khan et al. [9] worked on a deep learning algorithm for HAR based on a kurtosis-based weighted k-nearest neighbor (KNN). The architecture included four steps: feature extraction and mapping, kurtosis-based feature selection, serial-based feature fusion, and action identification. For feature extraction, two CNN models were used: DenseNet201 and InceptionV3. Classification was carried out on four different datasets, KTH, IXMAS, WVU, and Hollywood, with obtained accuracies of 99.3%, 97.4%, 99.8%, and 99.9%, respectively. It was found that including fewer features in the final classification helped improve the algorithm's performance. Khan et al. [9] also presented a Gated Recurrent Neural Network with amplified computational competency. For action classification, the researchers used sequential data. A Gaussian mixture model (GMM) and Kalman filters were used to extract features, and a novel approach based on hybrid deep learning methods was used for recognition. The GRUs aid in modeling the current sequential dependencies of the problem. Furthermore, a graph regression neural network (GRNN) can model problems with temporal relationships and time gaps between events. The method was tested using the KTH, UCF101, and UCF Sports datasets.

    Basak et al. [15] presented multiple modalities for recognizing actions, including red-green-blue (RGB), depth, point cloud, and infrared. The choice of technique depends on the nature of the scenario and the application for which it is being developed. A survey of the performance of various HAR techniques was presented. The study surveyed fusion techniques, including the fusion of RGB, depth, and skeleton modalities; among the existing fusion techniques, the fusion of audio/visual modalities produced the best results in predicting actions. Aside from fusion, co-learning techniques were thoroughly investigated: a way of transferring learning by extracting knowledge from auxiliary modalities and applying it to learning another modality. Visual modalities such as RGB and depth are included in these co-learning techniques. Fu et al. [16] presented an algorithm to detect sports actions using deep learning methods, specifically a clustering-extraction algorithm. Athletic movements were first detected with deep learning techniques and then fused with sports-centered movements. A CNN was applied to a sample set in which non-athletic and negative images were provided to the network. The set was gradually enhanced with gathered false-positive predictions, and the obtained results were then optimized using a clustering algorithm. The idea was to capture athletes' training posture by analyzing the movements of their specific sport. The application was designed to assist sports trainers in giving professional training to athletes effectively and efficiently.

    Liang et al. [17] developed a hybrid of a CNN and Long Short-Term Memory (LSTM), named CNN+LSTM, and carried out extensive testing to determine the efficacy of the hybrid method. The paper also compared various deep-learning techniques. First, the results demonstrated that the efficiency of the learning algorithms differed only marginally, which did not affect the overall result. Second, it claimed that spatial-temporal interest points (STIP) could perform even better under the given conditions because they can extract interest points in video frames containing various human actions. Yue et al. [18] performed survey research on multiple robust and operative architectures for HAR and future action prediction. The study compared state-of-the-art methods for the recognition and prediction of actions. Recent models, efficient algorithms, challenges, popular datasets, evaluation criteria, and future guidelines were also presented with documented proof. After detailed study and analysis, it was concluded that better datasets provide the foundation for better action prediction.

    3 Methodology

    In this section, the detailed methodology of the proposed architecture is presented. The complete architecture consists of several steps, including feature extraction via transfer learning, two feature selection techniques (Artificial Bee Colony and entropy-controlled selection), and serial-based feature fusion. The proposed HAR architecture is illustrated in Fig. 1.

    3.1 Datasets

    In this work, five publicly accessible datasets have been utilized for the experimental approach: IXMAS [19], KTH [20], UT Interaction [20], UCF Sports [20], and Weizmann [20]. All of these datasets are well known and have been used by several researchers in the last few years. IXMAS and Weizmann have ten action classes each, whereas the KTH and UT Interaction datasets have six action classes each. The UCF Sports action dataset contains 13 action classes.

    Figure 1: Visual illustration of the proposed framework for action recognition

    3.2 Convolutional Neural Network(CNN)

    In recent times, CNNs have become immensely popular for image classification problems. Various studies have analyzed the efficiency of CNNs at capturing spatial patterns that allow valuable features to be extracted [21]. Recent trends in deep learning include spectral resolution, spatial grain, etc. CNNs can be applied to various problems, with classification, identification, and segmentation at the top. The networks are useful for working on spatial patterns and exploiting high-spatial-resolution data. A variety of techniques for visualizing the features learned by CNNs help with interpretation and allow learning from these models to improve productivity. The CNN is one of the novel techniques in machine learning that allows efficient and quick predictions for a given image, and it requires fewer parameters to learn than previously designed neural networks. A standard CNN has several layers, including an activation layer such as ReLU (Rectified Linear Unit), a pooling layer (max, average, or min), a fully connected (FC) layer, and other hidden layers. A variety of CNNs exist, including AlexNet, GoogleNet, Inception, ResNet, and DenseNet. The general structure of a multi-layer CNN is illustrated in Fig. 2, which shows the complete design from the input stream to the final classification through the FC layer. Convolution layers convolve the initial input and extract the required features, which are passed to multiple layers for further processing. After passing through the different hidden layers, the network makes the final prediction.

    Figure 2: Detailed structure of a multi-layered convolutional neural network

    3.3 EfficientNet-B0

    EfficientNet is one of the best CNNs of recent times [22]. It is a family of prediction models from Google AI that can scale up according to the number of parameters in the network. The model scales up efficiently in terms of layer depth, width, and the resolution of the input image/video frame, or a mix of these parameters. To balance the dimensions of width, depth, and resolution, compound scaling is performed: these dimensions are scaled by a fixed ratio. The mathematical representation of compound scaling is given below:

    depth: d = α^φ,  width: w = β^φ,  resolution: r = γ^φ,

    subject to α · β² · γ² ≈ 2 with α ≥ 1, β ≥ 1, γ ≥ 1, where φ is the compound coefficient that controls how many additional resources are available for scaling.

    The network also allows the creation of features rather than just feature extraction; these features can later be passed to a classifier for prediction. The model outperformed the state-of-the-art networks of recent times, including ResNet, DenseNet, and AlexNet. In this research, the model is applied to five publicly available datasets, and the results are then compared on pre-defined criteria. Fig. 3 shows the complete network structure of an EfficientNet model.
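As an illustration, compound scaling can be sketched in a few lines of Python. The coefficients α = 1.2, β = 1.1, γ = 1.15 follow the published EfficientNet defaults; the helper name `compound_scale` and the baseline dimensions in the example are our own illustrative choices.

```python
# Sketch of EfficientNet-style compound scaling: depth, width, and input
# resolution grow as alpha**phi, beta**phi, gamma**phi for a compound
# coefficient phi, subject to alpha * beta**2 * gamma**2 ~= 2.
def compound_scale(base_depth, base_width, base_resolution, phi,
                   alpha=1.2, beta=1.1, gamma=1.15):
    """Return (depth, width, resolution) scaled by the compound coefficient phi."""
    depth = round(base_depth * alpha ** phi)
    width = round(base_width * beta ** phi)
    resolution = round(base_resolution * gamma ** phi)
    return depth, width, resolution

# phi = 0 reproduces the baseline; larger phi scales all three dimensions together.
print(compound_scale(18, 64, 224, phi=0))  # -> (18, 64, 224)
print(compound_scale(18, 64, 224, phi=2))  # -> (26, 77, 296)
```

Because all three dimensions are tied to a single coefficient φ, the network family can be grown to a resource budget without retuning depth, width, and resolution independently.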

    3.4 Transfer Learning

    Figure 3: Detailed architecture of the EfficientNet-b0 deep learning model

    3.5 Feature Selection

    After feature extraction, the next step is to discard the features that do not contribute much to performance. The highest-contributing features are selected using two algorithms, ABC and entropy-based selection, which are discussed in detail in this section. From the 1280 features extracted via EfficientNet-b0, the top 600 are selected into two separate feature vectors.

    Figure 4: Illustration of transferring knowledge for action recognition

    Artificial Bee Colony (ABC): Mirroring real bee colonies, ABC divides the bees into three groups: i) employed bees, ii) observer or onlooker bees, and iii) scout bees [25]. The job of the employed bees is to look for food sources and convey this information to the onlooker bees. Given that information, the onlookers explore the neighborhood of a food source to find a new one. An employed bee whose food source has not improved within a set number of iterations becomes a scout, whose new task is to search for a new food source. ABC proceeds in four fundamental steps:

    The first step is initialization, where the algorithm produces random food sources, each defined as a vector in the search space:

    x_i = (x_{i,1}, x_{i,2}, x_{i,3}, ..., x_{i,ρ}),  x_{ij} = x_j^min + rand(0, 1) · (x_j^max − x_j^min),

    where i = {1, 2, 3, ..., R}, R is the number of food sources (equal to the number of employed bees and to the number of onlookers), j = {1, 2, 3, ..., ρ}, ρ is the dimension of the search space, x_{ij} is the jth dimension of x_i, rand(0, 1) is a random variable uniformly distributed over the search space, and x_j^min and x_j^max are the minimum and maximum boundary values of dimension j.

    The second step is the employed-bee phase: every employed bee is assigned a food source, which the bee later modifies by searching for a better one. In this way, knowledge is transferred from the whole neighborhood except the current location x_k. The new food source v_i is located using Eq. (3):

    v_{ij} = x_{ij} + ϕ_{ij} · (x_{ij} − x_{kj}),

    where x_i is the current food source location and ϕ_{ij} is a uniformly distributed value within the range [−1, 1]. After the candidate position v_i is found, its fitness value is assessed and compared with that of the current position x_i. If v_i is better than x_i, it replaces x_i, the algorithm enters its next iteration, and the trial counter for this source is reset to 0. Otherwise, x_i enters the next iteration with the same food source value, and the counter is incremented by 1.

    The third step focuses on the onlooker bees. Each employed bee passes the gathered information about its food source to the onlookers. Depending on the fitness value of each food source, every onlooker bee selects a position using the roulette-wheel scheme: the better a source's fitness value, the higher its probability of selection. The probability is computed by Eq. (4):

    δ_i = fit_i / Σ_{j=1}^{R} fit_j,

    where fit_i is the fitness value of food source x_i. After computing the probability of each location, a random number rand(0, 1) is generated to govern the choice of food source: if δ_i > rand(0, 1), x_i is selected and exploited by an onlooker in this step.

    The last step caters to the scout bees. Each food source's trial counter is initialized to 0 and counts the number of unsuccessful improvement attempts. If the counter exceeds a fixed limit, the food source is discarded and a new one is generated by Eq. (2).

    When features are selected using ABC, each food source corresponds to a feature subset, and the fitness value determines the quality of that subset. Each source is represented as a binary string: 1 indicates that a feature is selected, whereas 0 indicates that it is not.
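The four phases above can be sketched as a simplified binary-mask ABC search. This is an illustrative sketch, not the paper's exact implementation: the neighborhood move is a single bit flip, and the toy fitness function in the test usage is hypothetical.

```python
import random

def abc_feature_select(fitness, n_features, n_food=10, limit=5, iters=30, seed=0):
    """Simplified Artificial Bee Colony search over binary feature masks.

    fitness : callable mapping a 0/1 mask (list of ints) to a score (higher is better).
    Returns the best mask found.
    """
    rng = random.Random(seed)
    new_mask = lambda: [rng.randint(0, 1) for _ in range(n_features)]
    foods = [new_mask() for _ in range(n_food)]       # food sources = feature subsets
    trials = [0] * n_food                             # unsuccessful-attempt counters
    scores = [fitness(f) for f in foods]

    def neighbour(mask):
        # Simplified neighborhood move: flip one randomly chosen dimension.
        cand = mask[:]
        j = rng.randrange(n_features)
        cand[j] = 1 - cand[j]
        return cand

    for _ in range(iters):
        # Employed-bee phase: each source tries one neighbouring solution.
        for i in range(n_food):
            cand = neighbour(foods[i])
            s = fitness(cand)
            if s > scores[i]:
                foods[i], scores[i], trials[i] = cand, s, 0
            else:
                trials[i] += 1
        # Onlooker phase: roulette-wheel selection proportional to fitness
        # (assumes scores are mostly non-negative).
        total = sum(scores) or 1.0
        for _ in range(n_food):
            r, acc = rng.random() * total, 0.0
            for i in range(n_food):
                acc += scores[i]
                if acc >= r:
                    break
            cand = neighbour(foods[i])
            s = fitness(cand)
            if s > scores[i]:
                foods[i], scores[i], trials[i] = cand, s, 0
        # Scout phase: abandon sources that stagnated past the limit.
        for i in range(n_food):
            if trials[i] > limit:
                foods[i], trials[i] = new_mask(), 0
                scores[i] = fitness(foods[i])

    best = max(range(n_food), key=scores.__getitem__)
    return foods[best]
```

In the real framework, the fitness of a mask would be the classification accuracy obtained with the corresponding subset of the 1280 deep features; any monotone score works with the search loop above.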

    Entropy-Based Selection: Entropy measures the uncertainty of a random variable λ over a limited set of values. Let λ be a random variable with a limited set of n values {λ1, λ2, λ3, ..., λn}, and let P be its probability distribution. If a value λ_i occurs with probability P(λ_i) such that P(λ_i) ≥ 0 for i = 1, 2, 3, ..., n and Σ_i P(λ_i) = 1, then the amount of information associated with a known occurrence of λ_i can be defined as:

    I(λ_i) = −log2 P(λ_i).

    This shows that the information generated by selecting a symbol λ_i is −log2 P(λ_i) bits for a discrete source. On average, if the symbol λ_i is selected n × P(λ_i) times in n selections, the average information gathered from n source outputs is:

    −n Σ_{i=1}^{n} P(λ_i) log2 P(λ_i).

    Mathematically, entropy is a function of the distribution of the random variable λ, depending only on the probabilities. Hence, the entropy E(λ) is the mean information value and is determined by the following equation:

    E(λ) = −Σ_{i=1}^{n} P(λ_i) log2 P(λ_i).
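The entropy E(λ) discussed above can be checked with a few lines of Python; the helper name `entropy` is our own:

```python
import math

def entropy(probs):
    """Shannon entropy E = -sum(P(i) * log2 P(i)), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform 4-symbol source carries 2 bits per symbol; a certain
# outcome (probability 1) carries no information.
print(entropy([0.25, 0.25, 0.25, 0.25]))  # -> 2.0
print(entropy([1.0]) == 0)                # -> True
```

In entropy-based feature selection, a score of this form is computed per feature and the features are then ranked in descending order of their contribution.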

    3.6 Feature Fusion

    The two selected feature vectors are combined serially into a single vector:

    Fusion(i) = (f_ABC, f_Entropy)_{M×J},

    where Fusion(i) is the resultant of the two feature vectors fused with dimension M × J. The value of J is modified in accordance with the variation in the training images.

    4 Results and Discussion

    This section presents the experiments performed and an analysis of the results achieved after extensive experimentation, along with the performance measures and evaluation criteria. A total of five datasets were chosen for this work; information on the datasets is given in Section 3.1. The results for each dataset are tabulated, and a complete analysis is provided along with the confusion matrix. 50% of the total images in each dataset were used for training and the remaining 50% for model validation, with K-fold cross-validation where K equals 10. The evaluation criteria are the achieved accuracy and the computational time in seconds (s). The entire experiment was conducted in MATLAB R2021b on a personal desktop computer with 16 GB of RAM and an 8 GB graphics card.
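The evaluation protocol (a 50:50 split evaluated with 10-fold cross-validation) can be illustrated with a small index-generating helper; `kfold_indices` is our own illustrative name, not code from the paper:

```python
def kfold_indices(n_samples, k=10):
    """Yield (train_idx, val_idx) index pairs for K-fold cross-validation."""
    idx = list(range(n_samples))
    fold = n_samples // k
    for i in range(k):
        start = i * fold
        stop = start + fold if i < k - 1 else n_samples  # last fold takes the remainder
        yield idx[:start] + idx[stop:], idx[start:stop]

folds = list(kfold_indices(100, k=10))
print(len(folds), len(folds[0][1]))  # -> 10 10
```

Each sample appears in exactly one validation fold, so the reported accuracy is averaged over ten disjoint validation sets.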

    4.1 KTH Dataset Results

    Extensive experimentation was performed on different standardized datasets. This dataset has six classes, and the entire dataset was split 50:50 for training and testing. Table 1 presents the results: the highest accuracy of 98.4% was obtained by Cubic SVM (CSVM), with a computational time of 157.6 s. In the second step, ABC optimization was used to select the best features; for this experiment, CSVM obtained the highest accuracy of 98.6%, and the recorded computational time was 83.412 s. Then, the entropy-controlled weighted KNN-based selection technique was employed to select the best features in descending order; this experiment obtained the best accuracy of 98.6% on CSVM, with a computational time of 73.534 s. In the last step, both selected feature sets were fused via the SbE feature fusion technique. As a result, CSVM obtained the best accuracy of 98.7%, an improvement over the previous experiments; however, the testing time increased.

    The accuracy of CSVM can be checked through the confusion matrix illustrated in Fig. 5.

    Table 1: Achieved results on the KTH dataset. *Linear discriminant analysis (LDA)

    4.2 Weizmann Dataset Results

    The Weizmann dataset results are presented in this section numerically and as a confusion matrix. In the first experiment, features were extracted from the original EfficientNet model and classified. CSVM obtained the best accuracy of 96.5%, with a computational time of 45.678 s. The ABC optimizer was applied in the second experiment to select the best features, which were classified using several classifiers; the best accuracy was 96.4%, and the computational time was reduced to 26.65 s from the previous 45.678 s. In the third experiment, entropy-based features were selected, obtaining the best accuracy of 96.7% with a computational time of 23.758 s. In the last experiment, SbE-based fusion was performed and obtained the best accuracy of 96.8%, an improvement over the previous experiments (see Table 2). Overall, CSVM performed best on this dataset. The fusion step extends the computational time, but accuracy also improves. In addition, the CSVM confusion matrix, which can be used to confirm the reported accuracy, is shown in Fig. 6.

    Figure 5: Confusion matrix for feature fusion with the cubic SVM classifier on the KTH dataset

    Table 2: Achieved results on the Weizmann dataset

    Figure 6: Confusion matrix for feature selection with the cubic SVM classifier on the Weizmann dataset

    4.3 UCF Sports Dataset

    The results of the UCF Sports dataset are described in this section; Table 3 presents the results for all four experiments. In the first experiment, EfficientNet-based deep features were extracted and classified. More than one classifier obtained the best accuracy of 100%, with the LDA classifier achieving the minimum computational time of 43.237 s. In the second step, ABC-based optimization was performed to select the best features, which were passed to the classifiers; the best accuracy remained 100%, while the time was reduced to 17.403 s. In the third experiment, entropy-based best features were selected, and CSVM and Fine KNN (FKNN) obtained the best accuracy of 100%. In the last step, fusion was performed, and 100% accuracy was obtained, consistent with the other experiments but computationally slower. Moreover, Fig. 7 shows the LDA classifier's confusion matrix, which can be utilized to verify the classification accuracy.

    Table 3: Achieved results on the UCF Sports dataset

    4.4 IXMAS Dataset

    Results from the IXMAS dataset are displayed in this section numerically and as a confusion matrix. In the first experiment, features were extracted from the original EfficientNet model and classified. Fine KNN obtained the best accuracy of 96.7%, with a computational time of 189.79 s. The ABC optimizer was applied in the second experiment to select the best features, which were classified using several classifiers; the best accuracy was again 96.7%, and the computational time was reduced to 97.538 s from the previous 189.79 s. In the third experiment, entropy-based features were selected, improving the best accuracy to 97%, with a computational time of 88.911 s. In the last experiment, SbE-based fusion was performed and obtained a best accuracy of 96.9% (see Table 4). This experiment consumed more time than the first three, but the accuracy was stable. In addition, the CSVM confusion matrix is shown in Fig. 8 and can be used to check the reported accuracy.

    Figure 7: Confusion matrix for feature selection with the linear discriminant classifier on the UCF Sports dataset

    Table 4: Achieved results on the IXMAS dataset

    4.5 UT Interaction Dataset

    This section contains the findings from the UT Interaction dataset; Table 5 presents the results for all four experiments. In the first experiment, EfficientNet-based deep features were extracted and classified. The best accuracy for this experiment was 99.7% with the Fine KNN classifier, with a computational time of 16.643 s. In the second experiment, Fine KNN obtained the best accuracy of 96.7% with a computational time of 7.343 s; the computational time was reduced, but accuracy also dropped. In the third experiment, CSVM obtained the best accuracy of 99.6% with a computational time of 11.113 s, performing better than the first two experiments. In the last experiment, fusion was performed and obtained the best accuracy of 99.7% with a computational time of 15.382 s. Overall, CSVM performed well on this dataset. Fig. 9 shows the confusion matrix, which can be utilized to verify the accuracy of Fine KNN after the fusion process.

    Table 5: Classification accuracy on the UT Interaction dataset

    Figure 9: Confusion matrix of the fine KNN classifier on the UT Interaction dataset

    Finally, a thorough comparison with current methods is made in Table 6. Several methods are listed in this table, and it is noted that each used several classifiers. Only the relevant datasets are used to compare the proposed accuracy. The accuracy values listed in this table show that the proposed HAR framework demonstrates increased accuracy.

    Table 6:Comparison of the proposed method’s accuracy with the existing techniques

    5 Conclusion

    Action recognition has been gaining popularity in recent years due to its vast range of real-life applications. In this work, we proposed a framework based on deep learning and the fusion of optimized features for accurate action recognition. The proposed framework consists of several serial steps. In the first step, the pre-trained EfficientNet deep model was fine-tuned and trained on the selected action datasets using deep transfer learning. Then, features were extracted from the average pooling layer and the results were computed. Based on the computed results, we observed several redundant features; therefore, we applied two feature selection techniques and selected the best features. The selected features were then classified, and improved accuracies were obtained for all selected datasets. The computational time was also significantly reduced, which is this framework's main strength. Finally, the fusion of selected features was performed to enhance accuracy, but this step also increases the computational time, which is a drawback of the approach. In the future, we will address this problem and propose a more optimized fusion approach.

    Acknowledgement: Not applicable.

    Funding Statement: The authors received no specific funding for this study.

    Author Contributions: Software: M.N and S.K; Methodology: M.N, S.K, and M.U.F; Validation: M.A and M.N; Supervision: M.U.F and U.A; Writing and Review: U.T, M.N, and S.K; Project Administration: U.A and U.T; Conceptualization: M.A and U.T; Verification: U.A and M.U.F; Funding: M.N, S.K, U.A, and M.U.F.

    Availability of Data and Materials: The datasets used in this work are publicly available for research purposes.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

亚洲第一欧美日韩一区二区三区| 看片在线看免费视频| 精品国产超薄肉色丝袜足j| 日韩欧美一区二区三区在线观看| 亚洲七黄色美女视频| 麻豆成人午夜福利视频| 久久国产精品人妻蜜桃| 757午夜福利合集在线观看| 五月伊人婷婷丁香| 在线观看美女被高潮喷水网站 | 国产在线精品亚洲第一网站| 黑人巨大精品欧美一区二区mp4| 亚洲乱码一区二区免费版| 偷拍熟女少妇极品色| 欧美色欧美亚洲另类二区| 免费观看精品视频网站| 变态另类成人亚洲欧美熟女| 巨乳人妻的诱惑在线观看| 日韩国内少妇激情av| 最好的美女福利视频网| 国产一区二区三区在线臀色熟女| 可以在线观看的亚洲视频| 亚洲自偷自拍图片 自拍| 午夜福利18| 舔av片在线| 欧美色视频一区免费| 夜夜躁狠狠躁天天躁| 国产午夜福利久久久久久| 一进一出好大好爽视频| 国模一区二区三区四区视频 | 丁香欧美五月| av在线天堂中文字幕| 非洲黑人性xxxx精品又粗又长| 99精品在免费线老司机午夜| 两个人的视频大全免费| 亚洲熟妇中文字幕五十中出| 久久久色成人| 欧美最黄视频在线播放免费| 亚洲专区中文字幕在线| 亚洲av日韩精品久久久久久密| 欧美一级a爱片免费观看看| 亚洲中文字幕一区二区三区有码在线看 | 在线观看美女被高潮喷水网站 | 麻豆一二三区av精品| www.自偷自拍.com| 国产一区二区三区在线臀色熟女| 最近在线观看免费完整版| 成人欧美大片| 久久精品夜夜夜夜夜久久蜜豆| 我要搜黄色片| 欧美色欧美亚洲另类二区| 亚洲av成人一区二区三| 免费看a级黄色片| 最近视频中文字幕2019在线8| 三级毛片av免费| 黑人操中国人逼视频| 99国产极品粉嫩在线观看| 久久中文字幕人妻熟女| 久久久久国产一级毛片高清牌| 又黄又爽又免费观看的视频| 最新中文字幕久久久久 | 久久精品国产99精品国产亚洲性色| 亚洲国产日韩欧美精品在线观看 | 国产三级中文精品| 欧美成人性av电影在线观看| 午夜久久久久精精品| 1000部很黄的大片| 国产精品精品国产色婷婷| 欧美成人免费av一区二区三区| 国产精品一区二区精品视频观看| 草草在线视频免费看| 国产欧美日韩一区二区精品| 欧美日本视频| 亚洲成人中文字幕在线播放| 午夜福利在线观看吧| 久久久久久久久免费视频了| 悠悠久久av| 亚洲性夜色夜夜综合| 一卡2卡三卡四卡精品乱码亚洲| 免费在线观看日本一区| 变态另类成人亚洲欧美熟女| 国产精品久久久av美女十八| 欧美黑人欧美精品刺激| 亚洲人与动物交配视频| 国产免费男女视频| 久久亚洲精品不卡| 欧美日韩一级在线毛片| 窝窝影院91人妻| 国产精品久久久av美女十八| 久久天躁狠狠躁夜夜2o2o| 国产高清三级在线| 亚洲精品在线观看二区| 国产高清有码在线观看视频| 亚洲avbb在线观看| 日本黄大片高清| 国产精品女同一区二区软件 | 国产成人一区二区三区免费视频网站| 欧美绝顶高潮抽搐喷水| 男人和女人高潮做爰伦理| 精品午夜福利视频在线观看一区| av福利片在线观看| 久久久国产精品麻豆| 99热这里只有精品一区 | 国产精品av视频在线免费观看| 久久久国产成人免费| 9191精品国产免费久久| 丁香欧美五月| 毛片女人毛片| 99久久精品一区二区三区| 巨乳人妻的诱惑在线观看| 在线观看午夜福利视频| 国产探花在线观看一区二区| 国产成人系列免费观看| 成年人黄色毛片网站| 人人妻人人看人人澡| 亚洲国产精品999在线| 这个男人来自地球电影免费观看| 中文字幕久久专区| 亚洲自拍偷在线| a在线观看视频网站| 在线国产一区二区在线| 亚洲片人在线观看| 成人av在线播放网站| 日韩国内少妇激情av| 99热只有精品国产| 欧美中文日本在线观看视频| 精品国产亚洲在线| 舔av片在线| svipshipincom国产片| 午夜视频精品福利| а√天堂www在线а√下载| 久久久精品大字幕| 精品久久久久久,| 青草久久国产| 两性午夜刺激爽爽歪歪视频在线观看| 免费电影在线观看免费观看| 中文字幕精品亚洲无线码一区| 国产黄片美女视频| 国产欧美日韩精品亚洲av| 狂野欧美激情性xxxx| 人妻夜夜爽99麻豆av| 精品免费久久久久久久清纯| 亚洲av成人精品一区久久| 精品久久久久久,| 国产精华一区二区三区| 成人精品一区二区免费| 精品国产三级普通话版| 女警被强在线播放| 亚洲欧美日韩东京热| 色吧在线观看| x7x7x7水蜜桃| 国产精品久久久久久久电影 | 成在线人永久免费视频| 国产 一区 欧美 日韩| 国产成人系列免费观看| 午夜精品久久久久久毛片777| 国产伦精品一区二区三区四那| 久久天堂一区二区三区四区| 
中文字幕最新亚洲高清| 午夜视频精品福利| 久久亚洲真实| 91在线观看av| 丰满人妻一区二区三区视频av | 亚洲欧美一区二区三区黑人| 怎么达到女性高潮| 欧美黄色片欧美黄色片| 国产精品 欧美亚洲| 亚洲欧美日韩无卡精品| 国产欧美日韩一区二区精品| 日韩精品中文字幕看吧| 亚洲中文av在线| 高清在线国产一区| 午夜福利18| 别揉我奶头~嗯~啊~动态视频| 一级黄色大片毛片| 搡老妇女老女人老熟妇| 99久久精品一区二区三区| 在线永久观看黄色视频| 性欧美人与动物交配| 悠悠久久av| 亚洲五月婷婷丁香| 国语自产精品视频在线第100页| 一个人免费在线观看的高清视频| 中文字幕最新亚洲高清| 亚洲国产看品久久| 精品欧美国产一区二区三| 小说图片视频综合网站| 欧美成人性av电影在线观看| 日日夜夜操网爽| 免费电影在线观看免费观看| 亚洲avbb在线观看| 美女免费视频网站| 美女高潮的动态| 久久久久久九九精品二区国产| 国产精品 国内视频| 熟妇人妻久久中文字幕3abv| 国产一区二区激情短视频| 亚洲人成网站高清观看| 久久精品国产亚洲av香蕉五月| 亚洲欧美精品综合久久99| 99国产极品粉嫩在线观看| 成人av在线播放网站| 全区人妻精品视频| 欧美黑人巨大hd| 757午夜福利合集在线观看| 成在线人永久免费视频| 天堂av国产一区二区熟女人妻| 97超视频在线观看视频| 久久草成人影院| 欧美日韩精品网址| 国产精品爽爽va在线观看网站| 亚洲第一电影网av| 久久久久久人人人人人| a在线观看视频网站| 国产精品一区二区三区四区免费观看 | 天堂av国产一区二区熟女人妻| 每晚都被弄得嗷嗷叫到高潮| 国产精品一区二区三区四区久久| 久久久久久久精品吃奶| 99久国产av精品| 国产野战对白在线观看| 18禁黄网站禁片午夜丰满| 日本黄大片高清| 免费观看的影片在线观看| 亚洲专区国产一区二区| 亚洲av熟女| 国产精品乱码一区二三区的特点| 亚洲狠狠婷婷综合久久图片| 国产激情欧美一区二区| 亚洲熟妇中文字幕五十中出| 亚洲国产精品久久男人天堂| 成人三级黄色视频| 好男人在线观看高清免费视频| 亚洲 欧美 日韩 在线 免费| 亚洲国产精品成人综合色| 国产熟女xx| 午夜福利免费观看在线| 啦啦啦观看免费观看视频高清| 在线免费观看的www视频| 亚洲欧洲精品一区二区精品久久久| 久久亚洲精品不卡| 欧美日韩亚洲国产一区二区在线观看| 国产成人系列免费观看| 97超级碰碰碰精品色视频在线观看| 999久久久国产精品视频| 一本一本综合久久| 在线十欧美十亚洲十日本专区| 日韩欧美免费精品| 91九色精品人成在线观看| 精品久久久久久久毛片微露脸| 国产伦精品一区二区三区四那| 久久香蕉国产精品| 国产1区2区3区精品| 女生性感内裤真人,穿戴方法视频| 天堂网av新在线| 天堂动漫精品| 日韩大尺度精品在线看网址| АⅤ资源中文在线天堂| 国产av麻豆久久久久久久| 长腿黑丝高跟| 可以在线观看毛片的网站| 国产黄a三级三级三级人| 精品国内亚洲2022精品成人| 天堂√8在线中文| 国产av麻豆久久久久久久| 视频区欧美日本亚洲| 99久久99久久久精品蜜桃| 偷拍熟女少妇极品色| 欧美乱色亚洲激情| 性色av乱码一区二区三区2| 在线十欧美十亚洲十日本专区| 51午夜福利影视在线观看| 欧美日韩瑟瑟在线播放| 最近最新中文字幕大全免费视频| 日韩精品中文字幕看吧| 欧美激情在线99| 欧美日韩瑟瑟在线播放| 99国产精品一区二区蜜桃av| 曰老女人黄片| 丝袜人妻中文字幕| 精品国产超薄肉色丝袜足j| 1024手机看黄色片| 国内少妇人妻偷人精品xxx网站 | 中文亚洲av片在线观看爽| 俺也久久电影网| 老汉色∧v一级毛片| 在线观看免费午夜福利视频| 欧美又色又爽又黄视频| 国产成人欧美在线观看| 中文字幕人成人乱码亚洲影| 欧美中文综合在线视频| 午夜免费观看网址| 一级毛片女人18水好多| 琪琪午夜伦伦电影理论片6080| 国产亚洲精品综合一区在线观看| 免费观看人在逋| 免费无遮挡裸体视频| 禁无遮挡网站|