
    Improved Shark Smell Optimization Algorithm for Human Action Recognition

Computers, Materials & Continua, 2023, Issue 9

Inzamam Mashood Nasir, Mudassar Raza, Jamal Hussain Shah, Muhammad Attique Khan, Yun-Cheol Nam and Yunyoung Nam

1 Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, 47040, Pakistan

2 Department of Computer Science, HITEC University, Taxila, Pakistan

3 Department of Architecture, Joongbu University, Goyang, 10279, South Korea

4 Department of ICT Convergence, Soonchunhyang University, Asan, 31538, Korea

ABSTRACT Human Action Recognition (HAR) in uncontrolled environments targets the recognition of different actions from a video. An effective HAR model can be employed for applications like human-computer interaction, health care, person tracking, and video surveillance. Machine Learning (ML) approaches, specifically Convolutional Neural Network (CNN) models, have been widely used and have achieved impressive results through feature fusion. The accuracy and effectiveness of these models continue to be the biggest challenge in this field. In this article, a novel feature optimization algorithm, called improved Shark Smell Optimization (iSSO), is proposed to reduce the redundancy of extracted features. The proposed technique is inspired by the behavior of white sharks and how they find the best prey in the whole search space. The proposed iSSO algorithm divides the Feature Vector (FV) into subparts, where a search is conducted to find optimal local features from each subpart of the FV. Once local optimal features are selected, a global search is conducted to further optimize these features. The proposed iSSO algorithm is employed on nine (9) selected CNN models, chosen based on their top-1 and top-5 accuracies in the ImageNet competition. To evaluate the model, two publicly available datasets, UCF-Sports and Hollywood2, are selected.

KEYWORDS Action recognition; improved shark smell optimization; convolutional neural networks; machine learning

    1 Introduction

Human Action Recognition (HAR) includes the action recognition of a person through imaging data, which has various applications. Recognition approaches can be divided into three categories: multi-model, overlapping categories, and video sequences [1]. The data used for recognition is the major difference between the image and video categories. Data in the form of images and videos are acquired through cameras in controlled and uncontrolled environments. With the advancement of technology in past decades, various smart devices have been developed to collect image and video data for HAR, health monitoring, and disease prevention [2]. Substantial research has been carried out on HAR through images or videos over the last three decades [3,4]. The human visual system gathers visual information about an object, such as its movement, its shape, and their variations. This information is used to investigate the biophysical processes of HAR. Computer vision systems have achieved very good accuracy while catering to different challenges such as occlusion, background clutter, scale and rotation invariance, and environmental changes [5].

Depending on the action complexity, HAR can be divided into primitive, single-person, interaction, and group action recognition [6]. The basic movement of a single human body part is considered a primitive action; a set of primitive actions by one person constitutes a single-person action; an interaction involves humans and objects; and collective actions performed by a group of people are group actions. Computer vision-based HAR systems are divided into hand-crafted feature-based methods and deep learning-based methods. A combined framework of hand-crafted and deep features has also been employed by many researchers [7].

Data plays an important role in efficient HAR systems. HAR data is categorized into color channels, depth, and skeleton information. Texture information can be extracted from color channels, i.e., RGB, which is close to the visual appearance, but illumination variations can affect the visual data [8]. Depth map information is invariant to lighting changes, which is helpful for foreground object extraction. 3D information can also be captured through a depth map, but noise factors should be considered while capturing it. Skeleton information can be gathered through color channels and depth maps, but it can be affected by environmental factors [9]. HAR systems use features at different levels; for example, the whole data was used as the input for HAR in [10]. Apart from features, motion is an important factor that can be incorporated into the feature computation step. It includes optical flow for capturing low-level feature information across multiple video frames. Some researchers included motion information in the classification step with Conditional Random Fields, Hidden Markov Models, Long Short-Term Memory (LSTM), Recurrent Neural Networks (RNN), and 3D Convolutional Neural Networks (CNN) [11–15]. These HAR systems achieve good recognition accuracy using the most appropriate feature set.

A CNN-based convolutional 3D (C3D) network was proposed in [16]. The major difference between the standard 3D CNN and the proposed one was that the latter utilized the whole video as input instead of a few frames or segmented frames, which makes it robust for large databases. The architecture of the C3D network comprises several layer groups: eight convolutional layers, five max-pooling layers, two fully connected layers, and a final softmax loss layer. The UCF-101 dataset was utilized to evaluate the best combination of the proposed network architecture; the best performance was achieved using a 3×3×3 convolutional filter without updating the other parameters. Researchers came up with RNNs [17] to overcome the limitation of CNN models in deriving information over long time lapses. RNNs have proved robust at extracting time-dimension features but have one drawback: gradient disappearance. This problem is addressed by the Long Short-Term Memory network (LSTM) [18], which utilizes processors to gauge the integrity and relevance of information. Normally, input gates, output gates, and forget gates are utilized in the processor. The information flow is controlled by the gates, so that unnecessary information, which would require large memory chunks, is discarded while relevant information is retained for long-term tasks.
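As a concrete illustration of the gating idea described above, the following is a minimal sketch of an LSTM classifying sequences of per-frame features. PyTorch is assumed here purely for illustration (the cited works do not specify an implementation), and the shapes and class count are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical shapes: 8 clips, 30 frames each, 512 features per frame.
frames = torch.randn(8, 30, 512)

# The LSTM's input, forget, and output gates decide which information
# flows into, stays in, and leaves the cell state at each time step.
lstm = nn.LSTM(input_size=512, hidden_size=256, batch_first=True)
head = nn.Linear(256, 10)          # assumed: 10 action classes

out, (h, c) = lstm(frames)         # h: final hidden state per clip
logits = head(h[-1])               # classify each clip from its last state
print(logits.shape)                # torch.Size([8, 10])
```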

A ConvNet architecture for the spatiotemporal fusion of video fragments was evaluated on the UCF-101 dataset, achieving an accuracy of 93.5%, and on HMDB-51, achieving an accuracy of 69.2% [19]. Another architecture, the Factorized Spatio-Temporal Convolutional Network (FSTCN), was proposed to handle 3D signals effectively and efficiently. It was tested on two publicly available datasets, achieving 88.1% accuracy on UCF-101 and 59.0% on HMDB-51 [20]. In another method, LSTM models were trained with a differential gating scheme, which focuses on the varying gain caused by slow movements between successive frames, based on the Derivative of States (DoS); the combination is called differential RNN (dRNN). The method was implemented on the KTH and MSRAction3D datasets, achieving accuracies of 93.96% and 92.03%, respectively [21].

This article presents an improved form of the Shark Smell Optimization (SSO) algorithm, which reduces redundant features. The proposed algorithm utilizes properties of both SSO and White Shark Optimization (WSO) to solve the redundancy issue. The proposed iSSO divides the population into sub-spaces to find locally and globally optimal features; in the end, the extracted local features are used to optimize the global features. Features are extracted using 9 pre-trained CNN models, selected based on their top-1 and top-5 accuracies in the ImageNet competition. The model is tested on two publicly available datasets, UCF-Sports (D1) and Hollywood2 (D2), and it obtains better results than state-of-the-art (SOTA) methods.

    2 Proposed Methodology

In an uncontrolled environment, with various viewpoints, illuminations, and changing backgrounds, traditional hand-crafted features have proved insufficient [22]. In the age of big data and the evolution of ML methods, Deep Learning (DL) has achieved remarkable results [23–25]. These results have motivated researchers around the globe to apply DL methods to domains involving video data. The ImageNet classification challenge drastically changed the dimensions of DL methods when CNNs made a huge breakthrough. The main difference between CNN methods and local feature-based methods is that a CNN iteratively and automatically extracts deep features through its interconnected layers.

    2.1 Transfer Learning of Pre-Trained CNN Models

Artificial Intelligence (AI) and Machine Learning (ML) have a sub-domain, called Transfer Learning (TL), which transfers the learned knowledge of one problem (the base problem) to another problem (the target problem). TL improves the learning of a model through the data provided for the target problem. A model trained to classify Wikipedia text can be utilized to classify the text of simple documents after TL. Similarly, a model trained to classify cars can also classify birds; the nature of the problem is the same, namely classifying objects. TL provides scalability to a trained model, which enables it to recognize different types of objects. Since 2012, when the first widely adopted CNN model, AlexNet [22], was proposed, a lot of CNN architectures have been proposed. The base for all these models was a competition built around the ImageNet dataset [26], which has 1000 classes. The efficiency of every CNN model proposed to date is still measured by how it performs on the ImageNet dataset. In this research, nine of the most used CNN models are selected; through TL, features of input images from the selected datasets are extracted with them. Table 1 lists all selected CNN models along with their depth, size, input size, number of parameters, and their top-1 and top-5 accuracies on the ImageNet dataset.
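To make the TL workflow concrete, the following is a minimal sketch of transfer learning on a pre-trained ImageNet model. PyTorch/torchvision are assumed purely for illustration (the experiments in this paper were run in MATLAB), and the 10-class head matches D1's class count.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load ImageNet weights (the "base problem" knowledge)...
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# ...and freeze the convolutional base, so the ImageNet knowledge is kept.
for p in model.parameters():
    p.requires_grad = False

# Replace the 1000-class ImageNet head with a head for the target problem.
model.fc = nn.Linear(model.fc.in_features, 10)  # e.g., 10 UCF-Sports classes

# Only the new head's parameters are handed to the optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
```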

Table 1: Different characteristics of selected pre-trained CNN models

The structure of each selected pre-trained model is different because of the nature and arrangement of its layers; the chosen feature extraction layer and the number of extracted features per image therefore vary from model to model. For Vg, the fc7 layer is selected, extracting 4096 features for a single image. 1280 and 4032 features are extracted from the global_average_pooling2d_1 and global_average_pooling2d_2 layers of the Mo and Na models, respectively. avg_pool is selected as the feature extraction layer for the Re, De, Xe, and In models, which extract 2048, 1920, 2048, and 1536 features, respectively. avg1 is selected as the feature extraction layer for Da, extracting 1024 features for a single image. When the Ef model is used as a feature extractor, it extracts 1280 features from the GlobAvgPool layer. All these extracted features are forwarded to iSSO for optimization.
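The sketch below shows one way such intermediate-layer features can be read out, using a forward hook on a torchvision ResNet's average-pooling layer as a stand-in for the avg_pool layer of the Re model; the framework choice and the dummy 224×224 input are assumptions for illustration.

```python
import torch
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V1").eval()

feats = {}
# Capture the avgpool output every time the model runs a forward pass.
model.avgpool.register_forward_hook(
    lambda mod, inp, out: feats.__setitem__("avg_pool", out.flatten(1)))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))   # one dummy 224x224 RGB frame

print(feats["avg_pool"].shape)           # torch.Size([1, 2048])
```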

    2.2 Improved Shark Smell Optimization(iSSO)

The meta-heuristic model used in this article is an improved form of Shark Smell Optimization (SSO) [33]. SSO was inspired by the behavior of sharks, which are considered among the most dangerous and strongest predators [34]. Sharks are creatures with a keen sense of smell and highly contrasted vision, owing to their sturdy eyesight and powerful muscles. They have more than 300 sharp, pointed, triangular teeth in their gigantic jaws. Sharks usually strike with a large and abrupt bite, so sudden that the prey cannot avoid it. They hunt by using their extreme senses of smell and hearing to pick up the traits of prey. The iSSO algorithm initially divides the whole search space into subparts. The algorithm then performs local and global searches to find the optimum prey in both the local and global search spaces. Once an optimum prey is located, the search continues to find all the optimal prey in the remaining subparts. The process described below is for a single subpart; the whole process is repeated for all subparts. Another factor is the quantity of selected optimal features, which is controlled by a selection-ratio parameter that determines the total number of selected features.

    2.2.1 Prey Tracking

Sharks wander the ocean freely, just like any other sea organism, and search for prey. In that search, sharks update their positions based on the traits of the prey. They apply all their tricks to locate, stalk, and track down the prey. All senses of sharks, along with their average distance ranges, are illustrated in Fig. 1. All these illustrated features help them exploit and search the whole space for hunting prey.

Figure 1: Senses of a shark along with their average distance ranges

    2.2.2 Prey Searching(Exploration and Exploitation)

Sharks have an exceptional sense of hearing: they can detect wavelengths along the full length of their body. Their whole body can detect any change in water pressure and reveal nearby movements of the targeted prey. A shark's attention is usually attracted by moving prey, which leaves a disturbance in the water pressure. Sharks even have body organs that can detect the tiny electromagnetic fields produced by the swimming of prey. Turbulence due to the prey's motion helps sharks sense the frequency of the waves and accurately predict the size and location of the prey. The velocity of the waves detected by sharks is described as:

$$\upsilon = \omega \cdot \omega_f$$

where $\upsilon$ denotes the velocity of the wavy motion, $\omega$ denotes the wavelength that defines the distance between shark and prey, and $\omega_f$ denotes the frequency of the waves during the wavy motion. This frequency is determined by the total number of cycles completed by the shark in a second. Sharks utilize this extraordinary sense to exploit the whole space and detect prey. Once a prey is in a nearby area, the shark's senses grow exponentially, and it travels towards the pinpointed position of the prey. The following equation is used to update the position of a shark with constant acceleration:

$$\rho = \rho_i + \upsilon_i \, \Delta T + \frac{1}{2} \, Acc \, \Delta T^2$$

here, the new position of the shark is denoted by $\rho$, the primitive position by $\rho_i$, and the initial velocity by $\upsilon_i$. The interval taken to travel between the current and initial positions is represented by $\Delta T$, and $Acc$ denotes the constant acceleration factor. Many prey disperse their scent when they leave their position; when a shark reaches that position, it finds no prey and thus starts to search randomly, exploring the nearby areas using its senses of smell, hearing, and sight. The first step of the algorithm is to generate a search space of all possible solutions. The search space of $m$ sharks in $n$ dimensions, with the positions of all sharks, is presented as:

$$P = \begin{bmatrix} p_1^1 & p_2^1 & \cdots & p_n^1 \\ p_1^2 & p_2^2 & \cdots & p_n^2 \\ \vdots & \vdots & \ddots & \vdots \\ p_1^m & p_2^m & \cdots & p_n^m \end{bmatrix}$$

here, $P$ is a 2D matrix containing the positions of all sharks in the search space, $n$ denotes the total number of decision variables, and $p_n^x$ represents the $x$th shark in the $n$th dimension. This population is generated by random initialization between upper and lower bounds as:

$$p_j^x = lb_j + rand(0,1) \cdot (ub_j - lb_j)$$

where $lb_j$ and $ub_j$ are the lower and upper bounds of the $j$th dimension.
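A minimal NumPy sketch of this initialization step follows; the population size and the unit bounds are assumptions, since the paper does not report either.

```python
import numpy as np

m, n = 20, 4096                     # assumed: 20 sharks, 4096-dim FV (Vg)
lb = np.zeros(n)                    # assumed lower bounds
ub = np.ones(n)                     # assumed upper bounds

# Each shark starts at a uniformly random point inside the bounds.
P = lb + np.random.rand(m, n) * (ub - lb)
print(P.shape)                      # (20, 4096)
```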

Now it is time for the shark to move toward the prey. When a shark detects the waves of moving prey, it locks onto its target and starts moving towards that prey, which is defined as:

here, $C$ represents the coefficient of acceleration; after extensive experiments, its value in this work is set to 2.145. The two accompanying coefficients are calculated as:

here, the maximum and current iterations are denoted by $S$ and $s$. Active motion of the sharks is achieved using subordinate and initial velocities, whose maximum and minimum values are set to 0.14 and 1.35, respectively, in this work.

Sharks spend most of their time searching for optimal prey, and to achieve this they constantly change their positions. Their position changes when they either smell the scent of prey or feel the movement of waves caused by prey. Sometimes a potential prey leaves its position, leaving some scent behind, either because it senses a shark approaching or in search of food. In this case, the shark starts to stray randomly in search of other prey, and its position is updated as per the following equation:

here, $s_{cd}$ is a factor that changes the direction of the moving shark, $\omega_{f_{max}}$ and $\omega_{f_{min}}$ denote the maximum and minimum frequencies during its motion, and $p$ and $q$ denote positive constants that maintain the exploitation and exploration behavior of the shark. For this work, the values of $\omega_{f_{max}}$ and $\omega_{f_{min}}$ are kept at 0.31 and 0.03 after in-depth analysis. Sharks also exhibit a behavior that tends to keep their position closer to the prey:

The $Sense$ is a parameter that denotes the key senses of a shark while moving towards the prey, and it is defined as:

here, $r$ is a positive constant used to manage the exploitation and exploration behavior of the sharks. During the evaluation of this study, the value of $r$ is kept at 0.002.

The behavior of the sharks is simulated mathematically by preserving the first two optimal solutions and updating the white shark's position with respect to these optimum solutions. The following equation is used to preserve the stated behavior:

This relation shows that the position of the shark is always updated with respect to the optimal position of the prey. The final location of the shark will be somewhere in the search space, near the optimum prey. The complete iSSO procedure is presented in Algorithm 1.

After extensive experiments, the number of subparts is set to 14 and the feature selection ratio to 0.65. The impact of these values is also presented in the results section.
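Since Algorithm 1 itself is not reproduced here, the following is a structural Python sketch of the iSSO feature selection loop: it keeps only the divide-into-subparts / local-search / global-refinement skeleton and abbreviates the shark position updates above to a generic random, fitness-guided move. The `fitness` callback is an assumption (e.g., validation accuracy of a classifier trained on the chosen feature columns); the subpart count 14 and ratio 0.65 follow the values reported above.

```python
import numpy as np

def isso_select(n_features, fitness, n_sub=14, ratio=0.65, iters=50):
    """Structural sketch of iSSO feature selection (not the exact updates).

    fitness(idx) -> score to maximize for the feature columns in idx.
    """
    parts = np.array_split(np.arange(n_features), n_sub)  # divide the FV
    keep = []
    for idx in parts:                      # local search inside one subpart
        best, best_fit = idx, fitness(idx)
        for _ in range(iters):
            cand = idx[np.random.rand(len(idx)) < ratio]  # one "shark" move
            f = fitness(cand) if len(cand) else -np.inf
            if f > best_fit:
                best, best_fit = cand, f
        keep.append(best)
    pool = np.concatenate(keep)            # local winners from all subparts
    best, best_fit = pool, fitness(pool)
    for _ in range(iters):                 # global refinement over the pool
        cand = pool[np.random.rand(len(pool)) < ratio]
        f = fitness(cand) if len(cand) else -np.inf
        if f > best_fit:
            best, best_fit = cand, f
    return best                            # indices of the selected features
```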

    3 Experimental Results

The proposed iSSO algorithm is evaluated by performing multiple experiments under different parameters, which verify the performance of the algorithm. This section provides an in-depth view of the performed experiments, along with an ablation analysis and a comparison with existing techniques.

    3.1 Experimental Setup and Datasets

The proposed iSSO algorithm is evaluated on two (2) benchmark datasets: the UCF-Sports dataset (D1) [35] and the Hollywood2 dataset (D2) [36]. D1 contains a total of 150 videos from 10 classes, representing human actions from different viewpoints and a range of scenes. D2 contains a total of 1,707 videos across 12 classes, extracted from 69 Hollywood movies.

The proposed iSSO model is trained, tested, and validated on an HP Z440 workstation with an NVIDIA Quadro K2000 GPU with 2 GB of GDDR5 memory. This card has 382 CUDA cores, a 128-bit memory interface, and 17 GB/s memory bandwidth. MATLAB 2021a was used for training, testing, and validation. All selected pre-trained models are transfer learned with an initial learning rate of 0.0001, decreased by an average of 5% every 7 epochs. The whole process runs for 160 epochs with an overall momentum of 0.45. The selected datasets are split using the standard 70-15-15 ratio for training, testing, and validation. During the testing of the proposed model, eight (8) classifiers were trained: Bagged Tree (BTree), Linear Discriminant Analysis (LDA), three kernels of k-Nearest Neighbor (kNN), i.e., Ensemble Subspace kNN (ES-kNN), Weighted kNN (W-kNN), and Fine kNN (F-kNN), and three kernels of Support Vector Machine (SVM), i.e., Cubic SVM (C-SVM), Quadratic SVM (Q-SVM), and Multi-class SVM (M-SVM). The performance of the proposed iSSO algorithm is evaluated using six metrics: Sensitivity (Sen), Correct Recognition Rate (CRR), Precision (Pre), Accuracy (Acc), Prediction Time (PT), and Training Time (TT). All experimental results presented in the next section were obtained by performing each experiment at least five times in the same environment and with the same factors.
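As an illustration of the split and three of the reported metrics, a minimal scikit-learn sketch is shown below; the framework, the dummy feature matrix, and the placeholder predictions are all assumptions (the original pipeline ran in MATLAB), and CRR, TT, and PT depend on the classifier and timing harness.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Dummy stand-ins for the extracted deep features and labels.
X = np.random.rand(1000, 2048)
y = np.random.randint(0, 10, size=1000)

# 70-15-15 split: hold out 30%, then halve it into test and validation.
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.30)
X_te, X_va, y_te, y_va = train_test_split(X_rest, y_rest, test_size=0.50)

y_pred = np.random.randint(0, 10, size=len(y_te))  # placeholder predictions
acc = accuracy_score(y_te, y_pred)                                     # Acc
pre = precision_score(y_te, y_pred, average="macro", zero_division=0)  # Pre
sen = recall_score(y_te, y_pred, average="macro", zero_division=0)     # Sen
```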

    3.2 Recognition Results

The efficiency of the proposed model is evaluated through multiple experiments. Initially, the impact of each selected pre-trained model is noted by feeding in the dataset and extracting features from the selected output layer. In the next experiment, the proposed iSSO algorithm is applied to the extracted deep features. Finally, the iSSO-enabled CNN model with the highest accuracy is forwarded to the other classifiers. It is noteworthy that all selected classifiers were used during this experiment, but F-kNN achieved the highest accuracy, so Table 2 contains the results of F-kNN. On D1, the Na model achieved the highest average Acc of 97.44%, varying by ±1.36% across the five runs. Similarly, Na obtained a 96.97% CRR. The F-kNN classifier took 206 min on average to train and 0.53 s to predict an input image. The lowest average Acc of 73.02% was obtained by the Vg model, whereas Ef took the highest TT of 347 min.

Table 2: Performance of iSSO on selected CNN models on D1

Once the best-performing model is identified in the first experiment, it is used to train all selected classifiers. As mentioned earlier, F-kNN performed best on D1 with Na as the base CNN model, achieving an average Sen of 97.37%, an average CRR of 96.97%, and a Pre of 97.28%. The second-best average Acc of 91.75% was achieved by ES-kNN. The worst-performing classifier was BTree, which could only achieve an 80.83% average Acc. LDA had the lowest average TT of 193 s and the lowest average PT of 0.39 s, but could only achieve 84.16% Acc.

The proposed model is also evaluated on D2, where the Da network achieved the maximum average Acc of 80.66%, varying by ±1.04% over five repetitions of the same experiment. The average CRR of this model is 79.68%. The best classifier for this model is M-SVM, which took 139 min on average to train and 0.48 s on average to predict an input image. The second-best average Acc of 78.27% is achieved by De, which also achieves a 78.66% CRR; for this model, M-SVM took 221 min to train and 0.54 s to predict. The lowest average accuracy of 60.02% on D2 is again obtained by Vg, for which the selected classifier took 297 min to train and 1.45 s to predict an input image. The performances of all selected CNN models with and without the iSSO algorithm are compared in Table 3.

Table 3: Performance of iSSO on selected CNN models on D2

After the selection of the best-performing CNN model, all selected classifiers are trained on the features extracted by that model. During this experiment, the selected evaluation metrics are used to record the performance of each classifier. M-SVM achieved the best average Sen of 79.22%, the best average CRR of 79.68%, the best Pre of 79.84%, and the best average Acc of 80.66%; it requires 280 min for training and 0.48 s to predict an input image. The second-best average Acc of 75.88% is obtained by W-kNN, which took 280 min to train and 0.36 s to predict. The lowest TT of 115 min is noted for BTree, but its average Acc is only 50.95%.

    3.3 Ablation Analysis of iSSO

This section discusses the importance of the parameter values used in the iSSO algorithm. All readings in this section are taken using the network that obtained the highest accuracy for each dataset, i.e., Na for D1 and Da for D2. Likewise, the classifier used for this analysis is taken from the best experiment for each dataset, i.e., F-kNN for D1 and M-SVM for D2. All experiments in this analysis are performed three times, and the average reading of the three runs is reported for each parameter value.

The first and most important factor of the iSSO algorithm is the number of subparts into which the whole search space, i.e., the feature vector, is divided. Table 4 presents the impact of different values of this parameter on accuracy and training time. It is noteworthy that a smaller number of subparts decreases TT but reduces the performance of the algorithm.

Table 4: Impact of different numbers of subparts

Another important parameter is the feature selection ratio, which determines the total number of features retained after the algorithm completes. Its impact on TT and Acc is shown in Table 5. It is visible that as more features are selected, both Acc and TT increase for both datasets until the ratio reaches 0.65.

Table 5: Impact of different values of the feature selection ratio

The coefficient of acceleration C determines how quickly a shark moves from its current position: the quicker the movement, the less exploration it performs. The acceleration must be neither too fast nor too slow, as a faster shark will skip important potential prey, while a slower shark will take too much time exploring. Another factor is the shark behavior constant r used during the exploitation and exploration process. The value of r determines the intervals at which each prey is searched for; a smaller value of r increases the searching time and ultimately increases the TT. Table 6 compares different values of C and r.
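A hypothetical sweep over C and r, mirroring the layout of Table 6, might look like the sketch below; `run_isso` is an assumed wrapper (not part of the paper) that trains the pipeline with the given constants and returns the average accuracy, and all grid points other than the paper's chosen 2.145 and 0.002 are placeholders.

```python
import time

def sweep(run_isso, Cs=(1.5, 2.145, 2.5), rs=(0.001, 0.002, 0.01)):
    # Try every (C, r) pair and report accuracy and wall-clock training time.
    for C in Cs:
        for r in rs:
            t0 = time.time()
            acc = run_isso(C=C, r=r)   # assumed: returns test accuracy in %
            print(f"C={C}, r={r}: Acc={acc:.2f}%, TT={time.time() - t0:.0f} s")
```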

Table 6: Impact of different values of C and r

The values of the maximum and minimum velocity and frequency constants do not majorly impact the overall performance of iSSO, specifically in terms of Acc and TT. At the selected values of these parameters, iSSO obtains its highest performance, and tweaking them changes the results only marginally, which can be ignored. The validation accuracy and validation loss of the proposed model on both datasets are shown in Fig. 2, where Figs. 2a and 2b show the validation accuracy and validation loss on D1, respectively, while Figs. 2c and 2d show the validation accuracy and validation loss on D2, respectively. It can be seen that 50% accuracy on both datasets is reached within the first 40 epochs, and the validation loss is also reduced to less than 50% in the same number of epochs, which shows the fast convergence of the proposed model.

Figure 2: Validation accuracy and validation loss on D1 and D2

    3.4 Comparison with Existing Techniques

A hybrid model was proposed in [37] by combining Speeded-Up Robust Features (SURF) and Histogram of Oriented Gradients (HOG) for HAR. This model was capable of extracting global and local features, as it obtained motion regions by adopting background subtraction. Motion edge features, effectively described by directionally controllable filters, were utilized in HOG to extract information on local edges. A Bag of Words (BoW) model was also obtained by performing k-means clustering. In the end, Support Vector Machines (SVM) were used to recognize the motion features. This model was tested on the SBU Kinect Interaction, UCF Sports, and KTH datasets and achieved accuracies of 98.5%, 97.6%, and 98.2%, respectively. The QWSA-HDLAR model was proposed in [38] for the recognition of human actions. This model utilized a TL-enabled CNN architecture, called NASNet, for feature extraction; the NASNet model also employs a hyper-parameter tuning process to optimally increase performance. In the end, a hybrid model containing CNN and RNN, called CNNBiRNN, was used to classify different human actions. This model was tested on D1 and KTH, achieving average recognition rates of 99.0% and 99.6%, respectively.

An attention mechanism based on bi-directional LSTM (BiLSTM) and dilated CNN (dCNN) was proposed in [39], which extracted effective features from HAR frames. Salient features were extracted using the dCNN and fed to the BiLSTM model for the learning process. The learning process helped the model capture long-term dependencies, which boosted the evaluation performance and extracted HAR-related cues and patterns. This model was evaluated on J-HMDB, D1, and UCF11 and achieved 80.2%, 99.1%, and 98.3% accuracies, respectively. A DCNN-based model was proposed in [40], which took globally contrasted frames as input. The ResNet-50 model was transfer learned, and features were extracted from a fully connected and a global average pooling layer. Both feature sets were fused using Canonical Correlation Analysis (CCA) and then fine-tuned using a Shannon entropy-based technique. The proposed model was tested on the KTH, UT-Interaction, YouTube, D1, and IXMAS datasets and achieved accuracies of 96.6%, 96.7%, 100%, 99.7%, and 89.6%, respectively. The authors in [41] proposed a HAR model using feature fusion and optimization techniques. Before feature engineering, a color transformation was applied to enhance the video frames. Optical flow extracted the moving regions after frame fusion, and these regions were forwarded to extract texture and shape features. Finally, weighted entropy was utilized to select related features, and M-SVM was used to classify the actions. This model was experimented on the UCF YouTube, D1, KTH, and Weizmann datasets, achieving 94.5%, 99.3%, 100%, and 94.5% accuracies, respectively. Table 7 compares the proposed model with existing techniques.

Table 7: Comparison with existing techniques on D1

HAR was carried out using three models in [44], covering extraction of compact features, re-sampling of the shot framerate, and detection of shot boundaries. The main objective of this research was to emphasize the extraction of relevant features. Tested on the Weizmann, UCF, KTH, and D2 datasets using the second model, it achieved 97.8%, 95.6%, 97.0%, and 73.6% accuracies, respectively. A lightweight deep learning model was proposed in [45], which recognizes human actions in surveillance streams using CNN models. An ultra-fast object tracker named Minimum Output Sum of Squared Error (MOSSE) locates the subject in a video, while the LiteFlowNet CNN model extracts pyramid convolutional features of successive frames. In the end, a Gated Recurrent Unit (GRU) was trained to perform HAR. Experiments were conducted on the YouTube, Hollywood2, UCF-50, UCF-101, and HMDB51 datasets, with overall average accuracies of 97.1%, 71.3%, 95.2%, 95.5%, and 72.3%, respectively.

Double-constrained BoW (DC-BOW) was presented in [46], which utilized spatial information of features at three different scales: the hidden scale, the presentation scale, and the descriptor scale. Length and Angle Constrained Linear Coding (LACLC) was obtained by constructing a loss function between local features and visual words. To optimize the features, the spatial differentiation between the extracted features of every cluster was considered, and LACLC with a hierarchical weighted approach was applied to extract the related features. The proposed model was tested on the UCF101, D2, UCF11, Olympic Sports, and KTH datasets and achieved accuracies of 88.9%, 67.13%, 96%, 92.3%, and 98.83%, respectively. A Spatiotemporally Attentive 3D Network (STA3D) was proposed in [42] for the propagation of important temporal descriptors and the refinement of spatial descriptors in 3D Fully Convolutional Networks (3D-FCN); an adaptive up-sampling module was also proposed for this purpose. This technique was evaluated on D1 and D2, where it achieved 90% and 71.3% accuracies, respectively. A DCNN-based model proposed in [43] has three modules: reasoning and memory, attention, and high-level representation. The first module concentrated on temporal and spatial reasoning so that temporal and spatial patterns could be efficiently discriminated; the second and third modules were mainly utilized for learning through captured spatial saliencies. This model was evaluated on D1 and D2, where it achieved 88.9% and 78.9% accuracies. Table 8 compares the performance of the proposed model with existing techniques.

Table 8: Comparison with existing techniques on D2

    4 Conclusion

In this article, an analysis of pre-trained CNN models is presented, where 9 models are selected based on their total parameters, size, and top-1 and top-5 accuracies. These selected pre-trained CNN models are trained on the selected datasets using TL. The output layer of each pre-trained model is fixed in advance; no experiments are performed on the choice of output layer. The extracted features of these CNN models are forwarded to the proposed iSSO, an improved algorithm derived from the traditional SSO. The iSSO algorithm divides the feature vector into subsets, and each subset is then used to find the locally and globally best features. The selection of local and global best features is inspired by the searching capabilities of the white shark, which uses its senses to find the optimal prey. Once the features are selected, results are obtained on the selected publicly available datasets. The limitation of this work is the training time, which is too high: the lowest training time is 194 min for D1 and 139 min for D2. One reason for this high TT is the dataset, which consists of videos; the main reason, however, is the architecture of these models, which contain many repeated blocks of layers that could be reduced. In the future, the architectures of the best-performing CNN models of this article will be analyzed to detect and reduce these repeated blocks, and the impact of the repeated blocks will also be analyzed.

Acknowledgement: Not applicable.

Funding Statement: This work was supported by the Collabo R&D between Industry, Academy, and Research Institute (S3250534) funded by the Ministry of SMEs and Startups (MSS, Korea), the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00218176), and the Soonchunhyang University Research Fund.

Author Contributions: The authors confirm their contribution to the paper as follows: study conception and design: I.M.N, M.A.K, and M.R; data collection: I.M.N, M.A.K, and M.R; draft manuscript preparation: I.M.N, M.A.K, M.R, and J.H.S; funding: Y-C.N and Y.N; validation: J.H.S, Y-C.N, and Y.N; software: I.M.N, M.A.K, Y.N, and Y-C.N; visualization: J.H.S, Y-C.N, and Y.N; supervision: M.A.K, M.R, and Y.N. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The datasets used in this work are publicly available for research purposes.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
