
    A Lightweight Road Scene Semantic Segmentation Algorithm

Computers, Materials & Continua, 2023, No. 11

Jiansheng Peng, Qing Yang and Yaru Hou

1 College of Automation, Guangxi University of Science and Technology, Liuzhou, 545000, China

2 Department of Artificial Intelligence and Manufacturing, Hechi University, Hechi, 547000, China

ABSTRACT In recent years, with the continuous deepening of smart city construction, the field of intelligent transportation has seen significant changes and improvements. Semantic segmentation of road scenes has important practical significance for automatic driving, transportation planning, and intelligent transportation systems. However, the current mainstream lightweight semantic segmentation models for road scene segmentation suffer from poor segmentation of small targets and insufficiently refined segmentation edges. Therefore, this article proposes a lightweight semantic segmentation model based on improvements to the LiteSeg model to address these issues. The model replaces the LiteSeg backbone with the lightweight MobileOne network to reduce the number of parameters and the computation of the network, and combines it with the Coordinate Attention (CA) mechanism to help the network capture long-distance dependencies. At the same time, by combining the dependencies of spatial and channel information, the Spatial and Channel Network (SCNet) attention mechanism is proposed to improve the feature extraction ability of the model. Finally, a multi-scale transposed attention encoding (MTAE) module is proposed to obtain features at different resolutions and perform feature fusion. The proposed model is verified on the Cityscapes dataset. The experimental results show that adding the SCNet and MTAE modules increases the mean Intersection over Union (mIoU) of the original LiteSeg model by 4.69%. On this basis, the backbone network is replaced with MobileOne and the CA mechanism is added; at the cost of only a minimal increase in model parameters and computation, the mIoU of the original LiteSeg model is increased by 2.46%. This article also compares the proposed model with several current lightweight semantic segmentation models, and the experiments show that the proposed model achieves the best overall performance, with particularly good results on small-object segmentation. Finally, the proposed model is evaluated for generalization on the KITTI dataset, and the experimental results show that the proposed algorithm generalizes to a certain degree.

KEYWORDS Semantic segmentation; lightweight; road scenes; multi-scale transposed attention encoding (MTAE)

    1 Introduction

In today's society, road scene segmentation has become a technology of great importance as urbanization and traffic demand continue to grow. Road scene segmentation aims to accurately separate and identify individual objects on the road from their surroundings in digital images or videos. This technology has a wide range of promising applications in areas such as autonomous driving, traffic monitoring, and intelligent transportation systems. The goal of road scene segmentation is to achieve a comprehensive understanding and perception of the traffic environment by accurately segmenting the vehicles, pedestrians, traffic signs, and other elements on the road. By effectively separating all objects on the road, road scene segmentation provides autonomous vehicles with the necessary environment awareness to ensure safe driving. At the same time, road scene segmentation can be used in traffic monitoring systems to monitor traffic flow in real time, detect violations, and optimize traffic signal control, thereby improving road safety and traffic efficiency.

Semantic segmentation is one of the important tasks in the field of computer vision, aiming to accurately classify each pixel in an image or video into different semantic classes. With the continuous development of deep learning, semantic segmentation models have also evolved. From the original fully convolutional network (FCN) [1] to the various improved methods that followed, each generation of networks has introduced new ideas and techniques in feature extraction, contextual information, and multi-scale features to continuously improve the accuracy and efficiency of semantic segmentation. Common semantic segmentation models are based on convolutional neural networks and achieve pixel-level classification. The main feature of these models is the adoption of an encoder-decoder structure combined with improved strategies such as skip connections and contextual coding, which improve the accuracy of image semantic segmentation. In the ILSVRC 2012 vision competition [2], the AlexNet network [3] achieved state-of-the-art (SOTA) results by improving model accuracy through techniques such as nonlinear activation functions, Dropout layers, data augmentation, and multi-GPU training. These ideas and techniques have since been widely used in deep learning models. In 2014, Long et al. improved the convolutional neural network and proposed FCN, the first successful application of convolutional neural networks to semantic segmentation tasks. The FCN network achieves pixel-level classification by removing the fully connected layers and mapping the feature maps output by the convolutional network to an output image of the same size as the input image. In 2015, Ronneberger et al. proposed the U-Net network [4], which uses an encoder-decoder structure. The encoder is used to extract features, while the decoder gradually restores the position and size of the feature-map pixels. This structure is able to fuse shallow image features with deep features to obtain highly accurate segmentation results. Chen et al. proposed the DeepLab network [5], which mainly uses dilated convolutions and multi-scale pyramid pooling to improve segmentation accuracy. Niu et al. proposed the hybrid multi-attention network HMANet [6], which employs a channel attention module and introduces a region shuffle attention module to reduce feature redundancy and improve the efficiency of the self-attention mechanism by representing it in a region-wise manner. Chen et al. proposed the DeepLabv3+ network [7], which employs a series of technical improvements, including atrous spatial pyramid pooling (ASPP), an encoder-decoder structure, and a hybrid loss function. Zhao et al. proposed the PSPNet network [8], which uses a pyramid pooling module to capture the global contextual information of the input image and combine it with local information to obtain better semantic segmentation results.

However, the complexity of road scenes and the limitations of current semantic segmentation models lead to a series of challenges in road scene segmentation. (1) Road scenes are complex and diverse, including different road types, changes in lighting conditions, the effects of weather, and vehicle occlusion and overlap. All these factors reduce the accuracy and stability of semantic segmentation models. (2) There are objects of different scales in road scenes, such as pedestrians, vehicles, and traffic signs. Current semantic segmentation models have difficulty handling scale variation and struggle to accurately segment objects of different scales, especially small-scale targets. (3) Autonomous driving and traffic monitoring systems place high demands on real-time segmentation of road scenes. However, considering both the number of parameters and the inference time, current semantic segmentation models cannot be deployed on embedded devices to achieve real-time segmentation. (4) In road scenes, the boundaries between some semantic categories may be unclear or blurred, and it is challenging for current semantic segmentation techniques to accurately capture and segment these blurred boundary regions. Therefore, how to effectively address these challenges and improve the accuracy and robustness of road scene segmentation has become a hot spot and focus of current research.

In this paper, we propose a new lightweight semantic segmentation model based on LiteSeg [9] to address these problems. Our main work is as follows:

1. The original backbone network is replaced by the MobileOne [10] network. MobileOne is an efficient neural network that can effectively reduce the number of parameters and the computation of the network. In addition, to preserve the feature extraction capability of the model, the lightweight and flexible CA attention mechanism [11] is introduced to obtain more effective feature information.

2. A multi-scale transposed attention encoding module is proposed. This module acquires vector features at different resolutions and fuses them. A Transformer encoder module is also incorporated, which operates between channels with the help of the covariance matrices of key and query values, combining the accuracy of global transformer networks with the scalability of convolutional structures.

3. The SCNet module is proposed. This module processes spatial attention and channel attention in parallel to obtain richer feature information.

    2 Related Work

As deep learning models continue to evolve, there is an increasing need to apply them to real-world problems, which requires more and more models to be deployed on mobile devices. In order to interact with the real environment in real time, semantic segmentation models need real-time processing capability while still meeting accuracy requirements. In recent years, state-of-the-art lightweight semantic segmentation models can be divided into three main types: encoder-decoder structures, two-branch structures, and multi-branch structures. A model with an encoder-decoder structure extracts the features of the input image through the encoder and then maps these features back to the original image size through the decoder to achieve pixel-level classification. This structure allows relatively fast inference while maintaining high accuracy. A model with a two-branch structure divides the network into two branches, one for extracting global contextual information and the other for capturing local details. This structure can effectively balance global and local information, improve segmentation accuracy, and enhance inference speed to a certain extent. A model with a multi-branch structure processes feature information at different scales separately by using multiple parallel branch networks. These branch networks extract features over different receptive field ranges and fuse them to obtain more comprehensive contextual information, thus improving the accuracy of semantic segmentation. The classification of lightweight semantic segmentation models is shown in Table 1.

    Table 1: Lightweight semantic segmentation model classification

1. Encoder-decoder architecture. In 2016, Paszke et al. proposed the ENet [12] model, the first semantic segmentation model that takes real-time performance into account, but its segmentation accuracy is low. In the same year, Romera et al. improved on ENet and proposed the ERFNet [13] model, which obtains more information by interleaving dilated convolutions with residual blocks. The RGPNet [14] model consists of an asymmetric encoder-decoder and an adapter, which helps to preserve and refine the distributed representations of multiple levels and facilitates the flow of gradients between different levels. The LiteSeg model proposed by Emara et al. explores a deeper version of the ASPP module and applies short and long residual connections and depthwise separable convolution to provide a faster and more efficient model. The MSCFNet [15] model uses decomposed convolution blocks and asymmetric residual blocks with dilated convolution to construct the encoder, and uses deconvolution in place of computationally expensive upsampling. The FPANet [16] model extracts high-level semantic information by aggregating spatial pyramids with feature pyramids and uses a bidirectional feature pyramid network to fuse feature information at different levels. In addition, the LETNet [17] model combines U-shaped Convolutional Neural Networks (CNN) with a Transformer, the ELANet [18] model designs an effective lightweight attention-guided network, and the ELUNet [19] model provides an efficient and lightweight U-shaped network architecture. The EACNet [20] model uses convolutional decomposition to enhance feature representation capability and robustness to rotated objects, employing depth-wise convolutional decomposition as the basic feature layer and pointwise convolution for fusion. The CFPNet [21] model combines the Inception module and dilated convolution to extract feature maps and contextual information of various sizes. MobileOne is an ultra-lightweight backbone network for mobile devices that achieves significant improvements in latency and accuracy through the introduction of linear branching.

2. Two-branch structure. In 2018, Yu et al. proposed BiSeNet [22], a bilateral segmentation network containing spatial and contextual paths, and introduced a feature fusion module and an attention refinement module to further improve accuracy at an acceptable cost. To handle communication between the parallel branches, the authors proposed BiSeNetV2 [23] by adding an effective fusion layer to the BiSeNet model, which enhances the connection between the two paths. Despite the significant progress of BiSeNetV2 in speed and accuracy, there are still some redundancies in the initial downsampling phase and the fusion layer, which limit the information exchange between the spatial and semantic branches. To address this issue, Faster BiSeNet [24] adopts a cleaner design that reduces redundant network architecture and enhances the relationship between the two branches. Aerial-BiSeNet [25] proposes a feature attention module and a channel-attention-based feature fusion module, effectively refining and combining features to improve the model's performance. Additionally, Poudel et al. proposed Fast-SCNN [26], which introduces a learning-to-downsample module into the existing two-branch fast segmentation approach to compute low-level features for multiple resolution branches simultaneously and combine high-resolution spatial details with low-resolution deep features. This method is suitable for efficient computation on low-memory embedded devices.

3. Multi-branch structure. In 2018, Zhao et al. proposed ICNet [27] with multi-scale inputs, using few convolutions at high resolution and a deeper network at low resolution, and finally fusing the features. In 2019, Li et al. proposed DFANet [28], which aggregates discriminative features through a series of sub-stages. DFANet is based on multi-scale feature propagation, which reduces the model parameters while maintaining a good receptive field and enhancing the learning ability of the model. In the same year, Liu et al. proposed FDDWNet [29], which uses factorized dilated depthwise separable convolutions to learn feature representations from receptive fields of different scales. The MSFNet [30] model designs a multi-scale feature network consisting of an enhanced diverse attention module and an upsampling-stage fusion module that uses high-level semantic information to complement low-level detail information and improve prediction. In 2021, Fan et al. proposed the Short-Term Dense Concatenate (STDC) network [31], which constructs its basic modules by reducing the dimensionality of the feature maps and using the aggregation of feature maps for image representation. NDNet [32] eliminates redundant information through pruning and is suitable for real-time segmentation tasks, with a narrow width and large depth. To further optimize the output resolution of the segmentation network, NDNet uses pointwise convolution to connect feature maps, facilitating the aggregation of information from two different levels. DFFNet [33] proposes a lightweight multi-scale semantic pyramid module, which improves the efficiency of context encoding through depth decomposition.

    3 Proposed Method

LiteSeg is one of the current excellent lightweight semantic segmentation models. It is designed around an encoder-decoder architecture, ASPP, dilated convolutions, and depthwise separable convolutions. By employing depthwise separable convolutions and ASPP, the model reduces the parameter count while improving segmentation accuracy. The use of dilated convolutions expands the receptive field, enabling better capture of object information at different scales. On the Cityscapes dataset, LiteSeg achieves an mIoU of 67.81%. It is a lightweight, efficient real-time semantic segmentation model. However, we noticed that LiteSeg does not consider the positional information of features or the semantic correlations of long-distance features, which makes it difficult to accurately segment object boundaries and small objects. Therefore, in this paper, we propose a lightweight semantic segmentation model with higher accuracy based on LiteSeg while maintaining the model size; its structure is shown in Fig. 1. Building upon LiteSeg, this paper uses the MobileOne backbone network and incorporates the CA attention mechanism and the SCNet attention module to extract feature information. Additionally, the multi-scale transposed attention encoding (MTAE) module is used to extract long-range global features.

    3.1 Feature Extraction Based on CA Attention Mechanism

The CA attention mechanism is used to enhance the receptive field of deep neural networks, primarily by weighting the feature maps of different channels in the network. Its structure is shown in Fig. 2. The CA attention module encodes channel relationships and long-range dependencies through precise location information, and the specific operations are divided into two parts: coordinate information embedding and coordinate attention generation.


Figure 1: Improved LiteSeg network structure

Figure 2: CA attention mechanism

Coordinate information embedding: global pooling is commonly used for the global encoding of spatial information in channel attention, but it makes it difficult to preserve location information because it squeezes global spatial information into a channel descriptor. To encourage the attention unit to capture long-range spatial interactions with precise location information, the global pooling is decomposed according to Eq. (1) into a pair of one-dimensional feature encoding operations.

Specifically, given the input, each channel is encoded along the horizontal or vertical coordinate using a pooling kernel of size (H, 1) or (1, W), respectively. This allows the attention module to capture long-range dependencies along one spatial direction while preserving precise position information along the other, which helps the network locate the targets of interest more accurately.
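Eq. (1) itself does not survive in this copy of the text. Under the coordinate-pooling description above (and borrowing the notation of the original CA formulation, which is an assumption here), the decomposition can be written as:

$$z_c^h(h) = \frac{1}{W}\sum_{0 \le i < W} x_c(h, i), \qquad z_c^w(w) = \frac{1}{H}\sum_{0 \le j < H} x_c(j, w)$$

where $x_c$ is the $c$-th channel of the input and $z_c^h$, $z_c^w$ are the direction-aware encodings along the height and width axes.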

Coordinate attention generation: after the transformations in the information embedding step, this part concatenates the two encoded feature maps and processes them with a convolutional transform function.

Eq. (2) represents the concatenation operation along the spatial dimension, encoding spatial information in both the horizontal and vertical directions. The result is then split along the spatial dimension into two separate tensors. Using two additional convolutional transforms, z^h and z^w are individually transformed into tensors with the same number of channels as the input, as shown in Eqs. (3) and (4).

Finally, the output of the CA module can be expressed as shown in Eq. (5).
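Eqs. (2)-(5) are likewise missing from this copy. A hedged reconstruction that matches the description above (concatenation, a shared convolutional transform, two per-direction transforms, and the final re-weighting) is:

$$f = \delta\big(F_1([z^h, z^w])\big)$$

$$g^h = \sigma\big(F_h(f^h)\big), \qquad g^w = \sigma\big(F_w(f^w)\big)$$

$$y_c(i, j) = x_c(i, j) \times g_c^h(i) \times g_c^w(j)$$

where $[\cdot,\cdot]$ denotes concatenation along the spatial dimension, $F_1$, $F_h$, $F_w$ are 1×1 convolutions, $\delta$ is a non-linear activation, $\sigma$ is the Sigmoid function, and $f^h$, $f^w$ are the two tensors obtained by splitting $f$.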

The global pooling layer in the network performs global average pooling on the input feature maps to obtain the global average of each channel. The channel weighting layer multiplies the original feature maps by the channel weights to obtain weighted feature maps. Finally, the feature reconstruction layer reconstructs the weighted feature maps into the final feature maps. The addition of the CA module helps MobileOne extract more feature information at a very small additional computational cost.
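As an illustration only (not the authors' released code), a minimal PyTorch sketch of a coordinate-attention block along the lines described above might look like the following; the hidden width, reduction ratio, and exact layer choices are assumptions.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Minimal coordinate-attention block (illustrative sketch)."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)  # assumed hidden width
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.size()
        # Directional pooling with (H,1) and (1,W) kernels.
        x_h = x.mean(dim=3, keepdim=True)                      # N x C x H x 1
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)  # N x C x W x 1
        y = torch.cat([x_h, x_w], dim=2)                       # N x C x (H+W) x 1
        y = self.act(self.bn1(self.conv1(y)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        y_w = y_w.permute(0, 1, 3, 2)
        # Per-direction attention maps (Sigmoid-gated), applied multiplicatively.
        a_h = torch.sigmoid(self.conv_h(y_h))                  # N x C x H x 1
        a_w = torch.sigmoid(self.conv_w(y_w))                  # N x C x 1 x W
        return x * a_h * a_w
```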

    3.2 SCNet Attention Module

Combined attention modules have been widely applied in the field of image processing. In [34], a spatial, temporal, and channel attention module was proposed to extract spatio-temporal features using three attention modules. Since using three attention modules in series would add extra computation, this paper aims to balance the accuracy and speed of the model and therefore designs the SCNet attention module. The SCNet attention module uses spatial and channel attention mechanisms, performs feature extraction on the input separately in parallel, and finally fuses the results using a one-dimensional convolution. The structure of the SCNet attention module is shown in Fig. 3.

Figure 3: SCNet attention module

The SCNet attention module consists of a spatial attention module (SAM) and a channel attention module (CAM). The SAM module is capable of assigning different levels of attention to each region. The equation for this module is shown in Eq. (6).

where f_in denotes the input features and δ(·) denotes the Sigmoid activation function. The spatial attention map M_s is multiplied with the input feature map to perform adaptive refinement in a residual manner.

The channel attention module is used to extract channel features from the regional feature map in the image frame. The formula for this module is shown in Eq. (7).

where δ(·) denotes the Sigmoid activation function, g_c denotes the multilayer perceptron (MLP), and M_c denotes the channel attention map. The channel attention map M_c is multiplied with the input feature map in a residual manner for adaptive refinement.

After the input features pass through the two attention modules separately, the concat function is used to concatenate the two feature maps in parallel, as shown in Eq. (8), where concat denotes the concatenation operation.
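Eqs. (6)-(8) are not reproduced here. With the definitions above, a plausible reconstruction (the exact operators inside the activations are assumptions) is:

$$M_s = \delta\big(f_s(f_{in})\big), \qquad F_s = f_{in} + f_{in} \otimes M_s$$

$$M_c = \delta\big(g_c(\mathrm{GAP}(f_{in}))\big), \qquad F_c = f_{in} + f_{in} \otimes M_c$$

$$f_{out} = \mathrm{Conv1D}\big(\mathrm{concat}(F_s, F_c)\big)$$

where $f_s(\cdot)$ is the spatial convolution of the SAM branch, $\mathrm{GAP}$ is global average pooling, and $\otimes$ denotes element-wise multiplication.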

The SCNet attention module applies the spatial and channel attention modules in parallel to extract features from the input separately. The output of each module is multiplied with the input features in a residual manner for adaptive refinement. The two parallel features are then concatenated together and finally fed into a 1D convolutional network for fusion.
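For illustration, a minimal PyTorch sketch of such a parallel spatial/channel attention block is given below; the 7×7 spatial kernel, the MLP reduction ratio, and the use of a pointwise convolution as a stand-in for the paper's 1D fusion are assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class SCNetAttention(nn.Module):
    """Parallel spatial + channel attention with a fusion convolution (illustrative sketch)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Spatial branch: conv over channel-pooled maps producing a 1-channel mask.
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)
        # Channel branch: global average pooling followed by a small MLP.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Fuse the two refined maps (stacked on the channel axis) back to `channels`.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.size()
        # Spatial attention map M_s, applied residually.
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.max(dim=1, keepdim=True).values], dim=1)
        m_s = torch.sigmoid(self.spatial(pooled))
        f_s = x + x * m_s
        # Channel attention map M_c, applied residually.
        m_c = torch.sigmoid(self.mlp(x.mean(dim=(2, 3)))).view(n, c, 1, 1)
        f_c = x + x * m_c
        # Concatenate the two branches and fuse.
        return self.fuse(torch.cat([f_s, f_c], dim=1))
```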

    3.3 Multiscale Transposed Attention Encoding Module

In this paper, different regions of the feature map are partitioned using the MTAE module, which encodes the feature map to extract multi-scale features. The encoder model is used to extract long-range global features. The encoder module is a component of the Transformer model; it relaxes the strict local-neighborhood constraint of convolution and can focus on connections between pixels that are physically far apart, while still being able to attend to local information. The feature encoding module can associate more features between regions, establish long-range feature dependencies, and extract richer features.

The traditional Transformer uses a self-attention mechanism that lets image patches interact to model image data. However, the complexity of the self-attention mechanism makes it difficult to handle high-resolution images. In this paper, the encoder module uses a transposed attention mechanism, which operates between feature channels with the help of the covariance matrices of key and query values. The transposed attention mechanism combines the accuracy of traditional global transformer networks with the scalability of convolutional structures, with linear complexity in sequence length, thus allowing efficient processing of high-resolution images.

The cross-covariance attention mechanism is an improvement on the self-attention mechanism and is capable of handling high-resolution images. In the self-attention mechanism, the input vectors form a matrix X. The query matrix Q, the key matrix K, and the value matrix V are obtained by multiplying X with three learnable transformation matrices W_q, W_k, and W_v, respectively.

For each query vector q_i, its dot-product scores with all key vectors k_j are calculated and fed into the softmax function to obtain a weight vector w_i, where each element represents the correlation between q_i and the different key vectors k_j, as shown in Eq. (9).
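Eq. (9) is missing from this copy; the standard scaled dot-product form it describes is:

$$w_i = \mathrm{softmax}\!\left(\frac{q_i K^{T}}{\sqrt{d_k}}\right), \qquad \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V$$

where $d_k$ is the key dimension; the $\sqrt{d_k}$ scaling is the conventional choice and may differ slightly from the paper's exact formulation.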

Due to the limitations of the self-attention mechanism, this paper adopts the cross-covariance attention mechanism to let image features interact. Cross-covariance attention is a transposed form of self-attention that builds the attention mechanism on the cross-covariance matrix. In the self-attention mechanism, the attention scores are first calculated using the query matrix Q, the key matrix K, and the value matrix V, and the values are then weighted and summed by the attention scores to obtain the output. In contrast, in cross-covariance attention, the cross-covariance matrix between features is calculated first, as shown in Eq. (10).

The formulation of cross-covariance attention is shown in Eq. (11), where both Q and K are generated by the encoding layer, t denotes a learnable temperature parameter, and T denotes the transpose operation. The XCA module is shown in Fig. 4, and each cross-covariance attention block is preceded by a LayerNorm layer, which normalizes the data.
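Eqs. (10) and (11) are also absent here. Following the cross-covariance (XCA) formulation the text refers to, a hedged reconstruction is:

$$\mathrm{Cov}(K, Q) = \hat{K}^{T}\hat{Q}$$

$$\mathrm{XCA}(Q, K, V) = V \cdot \mathrm{softmax}\!\left(\frac{\hat{K}^{T}\hat{Q}}{t}\right)$$

where $\hat{Q}$ and $\hat{K}$ are the $\ell_2$-normalized query and key matrices and $t$ is the learnable temperature. Because the attention map is a $d \times d$ channel-to-channel matrix rather than an $N \times N$ token-to-token matrix, the cost grows linearly with the number of pixels, which is what makes high-resolution inputs tractable.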

The multi-scale perception module based on the transposed attention mechanism is shown in Fig. 5. As the figure shows, the module inserts hierarchical residual connection structures within the residual units. The encoder structure is used as a filter in the module, and the different filter groups are connected in a hierarchical residual-like manner. After chunking, the input is divided into three subsets: x1, x2, and x3. Each subset has the same spatial size, but its channel count is 1/3 of the input features. The encoder first extracts features from one set of feature maps, and the output features of that set are then sent, together with the next set of input feature maps, to the next encoder filter. This process is repeated until all the input feature maps have been processed. Finally, all the feature maps are concatenated to obtain the fused information. Owing to this combination effect, many equivalent feature scales are obtained.

Figure 4: Cross-covariance attention

Figure 5: Multi-scale transposed attention encoding module

The module partitions the feature map into three subsets: x1, x2, and x3, and then performs the transposed attention operation in the encoder on each of them. The formula is shown in Eq. (12).

Each pass through an encoder is denoted as an operation A_i, with output y_i. Meanwhile, the output y_{i-1} of A_{i-1} is added to the feature subset x_i and then input to A_i to complete the feature extraction.
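Eq. (12) is not reproduced in this copy; from the description above it can be reconstructed as:

$$y_i = \begin{cases} A_i(x_i), & i = 1 \\ A_i(x_i + y_{i-1}), & 1 < i \le 3 \end{cases}$$

with the final output obtained by concatenating $y_1$, $y_2$, and $y_3$. That $y_{i-1}$ is merged with $x_i$ by addition is inferred from the wording above and should be treated as an assumption.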

    4 Experimental Results and Analysis

    4.1 Experimental Environment Configuration

    The software and hardware environments for the experiments in this paper are shown in Table 2.

    Table 2: Experimental environment configuration

Hyperparameter settings: optimizer, SGD (stochastic gradient descent); momentum, 0.937; weight decay, 0.0005; learning rate, 0.001; epochs, 150.
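As a concrete illustration of this configuration (not the authors' released training script), the corresponding PyTorch optimizer setup would look roughly as follows; `model` is a placeholder for the improved LiteSeg network.

```python
import torch
import torch.nn as nn

# Placeholder network; substitute the improved LiteSeg model described in Section 3.
model = nn.Conv2d(3, 19, kernel_size=1)

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.001,             # learning rate
    momentum=0.937,       # momentum
    weight_decay=0.0005,  # weight decay
)

num_epochs = 150          # training epochs as listed above
```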

    4.2 Experimental Data and Evaluation Index

In this paper, we use the Cityscapes dataset as the experimental data. Cityscapes is a large-scale computer vision dataset focused on providing training and performance evaluation for autonomous-driving environment perception models. It covers various street scenes, road scenes, and seasons, with a total of 5000 images: 2975 in the training set, 500 in the validation set, and 1525 in the test set.

Before training, the Cityscapes dataset was divided into 19 classes, namely: road, sidewalk, building, wall, fence, pole, traffic light, traffic sign, vegetation, terrain, sky, person, rider, car, truck, bus, train, motorcycle, and bicycle. During training, the images are scaled from 1024×2048 to 321×512 according to the server's computing power and time efficiency, and the number of images loaded in each batch is 8. The experiments use the pre-training weights of the official LiteSeg model to initialize the model parameters, and the other network parameters are kept constant during the training process.
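A hedged sketch of the corresponding input pipeline is shown below; the use of torchvision's Cityscapes wrapper, the local data path, the interpolation modes, and the absence of extra augmentation are assumptions of this sketch rather than details stated in the paper.

```python
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import Cityscapes

# Resize inputs from 1024x2048 down to 321x512, as described above.
image_transform = transforms.Compose([
    transforms.Resize((321, 512)),
    transforms.ToTensor(),
])
# Labels must be resized with nearest-neighbour interpolation to keep class ids intact.
target_transform = transforms.Resize(
    (321, 512), interpolation=transforms.InterpolationMode.NEAREST
)

train_set = Cityscapes(
    root="./data/cityscapes",   # assumed local path
    split="train",
    mode="fine",
    target_type="semantic",
    transform=image_transform,
    target_transform=target_transform,
)

train_loader = DataLoader(train_set, batch_size=8, shuffle=True)  # batch size 8
```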

The evaluation metric used in the experiments is mIoU, which is the average of the IoU over all categories. In semantic segmentation, the IoU of a category is the ratio between the intersection and the union of the set of pixels predicted as that category and the set of ground-truth pixels of that category. The IoU is calculated as shown in Eq. (13), and the mIoU as shown in Eq. (14).

where k is the number of categories and k + 1 includes the background category; i denotes the true class and j the predicted class; p_ij denotes pixels of class i predicted as class j, p_ii denotes pixels of class i predicted as class i, and p_ji denotes pixels of class j predicted as class i.
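Eqs. (13) and (14) are missing from this copy; with the notation defined above, the standard definitions they refer to are:

$$IoU_i = \frac{p_{ii}}{\sum_{j=0}^{k} p_{ij} + \sum_{j=0}^{k} p_{ji} - p_{ii}}$$

$$mIoU = \frac{1}{k+1}\sum_{i=0}^{k}\frac{p_{ii}}{\sum_{j=0}^{k} p_{ij} + \sum_{j=0}^{k} p_{ji} - p_{ii}}$$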

Params refers to the number of trainable parameters in the model, including all weights and bias terms, and is used in this paper to measure model complexity. Giga Floating Point Operations (GFLOPs) refers to the number of floating-point operations performed by the model in a single forward pass, measured in billions of operations, and is used in this paper to measure the computational complexity of the model.
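For reference, the parameter count of a PyTorch model can be obtained directly as shown below (GFLOPs additionally require a profiling tool such as thop or fvcore); the one-layer `model` here is only a stand-in.

```python
import torch.nn as nn

# Stand-in model; replace with the segmentation network being measured.
model = nn.Conv2d(3, 19, kernel_size=3, padding=1)

num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Params: {num_params / 1e6:.2f} M")
```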

    4.3 Analysis of Experimental Results

    4.3.1 Comparison of Different Algorithms

To validate the effectiveness of the proposed algorithm, we compared it with the LiteSeg, ICNet, ENet, ERFNet, BiSeNetV2, STDC1-Seg50, and SeaFormer [35] algorithms. The experimental results are shown in Table 3. Compared to the original LiteSeg, the proposed algorithm increases the mIoU value by 2.46%, the Params value by 0.19M, and the GFLOPs value by 0.46, which demonstrates that the improvements in this paper are effective. Compared to ICNet, the mIoU value increased by 11.57%, the Params value decreased by 4.86M, and the GFLOPs value decreased by 22.94. Compared to ENet, the mIoU value increased by 4.95%, the Params value increased by 1.40M, and the GFLOPs value decreased by 3.15. Compared to ERFNet, the mIoU value increased by 5.57% and the Params value decreased by 0.25M. Compared to BiSeNetV2, although the mIoU value decreased by 1.57%, our model is much smaller: the Params value decreased by 39.43M and the GFLOPs value decreased by 15.78. Compared to STDC1-Seg50, the mIoU value increased by 0.53%, the Params value increased by 6.58M, and the GFLOPs value increased by 4.55. Compared to SeaFormer, although the mIoU value decreased by 0.07%, our model is much smaller: the Params value decreased by 2.18M, while the GFLOPs value increased by 3.37. It can be seen that the proposed algorithm balances accuracy and model size and has the best overall performance among the eight algorithms.

Table 3: Comparison of metrics of mainstream algorithms

To further verify the validity of the proposed method, the metrics for the 19 Cityscapes categories were analyzed and compared; the experimental results are shown in Table 4. The IoU values for the sidewalk, building, wall, fence, pole, traffic light, terrain, car, train, motorcycle, and bicycle categories are higher than those of the other four models. The IoU value of the proposed algorithm in the road category is slightly lower than that of the ERFNet model. In the traffic sign and rider categories, the IoU values of the proposed algorithm are lower than those of the LiteSeg and ERFNet models. In the vegetation category, the IoU of the proposed algorithm is lower than that of the other four models. In the sky, bus, and truck categories, the IoU value of the proposed algorithm is lower than that of the LiteSeg model. In the person category, the IoU of the proposed algorithm is lower than that of the LiteSeg and ENet models. The proposed model optimizes the feature extraction results by using the attention mechanisms and the multi-scale structure, and can extract higher-quality feature information for low-resolution targets compared with the original model and the other models. However, targets such as vegetation, road, and traffic sign have higher resolution, so the improvement in their segmentation is relatively small.

    Table 4: IoU comparison of dataset categories

This article also analyzed and compared the class mIoU metric of five algorithms over 150 training epochs. The mIoU curves are shown in Fig. 6, from which it can be seen that the proposed algorithm and LiteSeg show a stable growth trend (smooth curve fluctuation) as the epochs increase and gradually stabilize when the epoch reaches 140. ICNet, ENet, and ERFNet show small fluctuations in mIoU as the epochs increase; ENet stabilizes when the epoch reaches 100, ERFNet stabilizes only when the epoch reaches 140, while the mIoU of ICNet is not only the lowest but also oscillates throughout the training process.

Figure 6: mIoU variation graph

    4.3.2 Ablation Experiments

To further verify the effectiveness of the added modules, they are compared with the original algorithm; the experimental results are shown in Table 5. It can be seen that adding SCNet and MTAE increases the mIoU of LiteSeg by 4.96%. However, the Params and GFLOPs of the algorithm also increase by 1.61M and 7.14, respectively. In this paper, considering model size and computation, we use MobileOne instead of the original backbone network and introduce the lightweight and flexible CA attention mechanism. As a result, we achieve a 2.46% increase in mIoU without significantly increasing Params and GFLOPs.

    Table 5: Ablation experiment

    4.3.3 Comparison of Visualization Results

To directly validate the effectiveness of the proposed model, we compare the segmentation results of our model with those of seven other algorithms on the Cityscapes dataset, as shown in Fig. 7. The yellow boxes indicate areas where the other models produce incomplete segmentation compared to our model.


Figure 7: Visualization results of multiple models on the Cityscapes validation set. From top to bottom: 1-Input RGB image; 2-Our model; 3-LiteSeg; 4-ICNet; 5-ENet; 6-ERFNet; 7-BiSeNetV2; 8-STDC1-Seg50; 9-SeaFormer

To verify the generalization ability of the proposed model, we conducted experiments on the KITTI dataset. We selected three models with segmentation accuracy similar to that of the proposed model, namely BiSeNetV2, STDC1-Seg50, and SeaFormer, and visually compared their generalization ability, as shown in Figs. 8-10. The yellow boxes mark the regions that the compared models fail to segment.

Figure 8: Visualization results on the KITTI dataset. From top to bottom: 1-Input RGB image; 2-Our model; 3-BiSeNetV2

Figure 9: Visualization results on the KITTI dataset. From top to bottom: 1-Input RGB image; 2-Our model; 3-STDC1-Seg50

Figure 10: Visualization results on the KITTI dataset. From top to bottom: 1-Input RGB image; 2-Our model; 3-SeaFormer

    5 Conclusion

To accomplish the segmentation of complex road scenes, this paper proposes a lightweight semantic segmentation algorithm based on LiteSeg. To reduce the size of the model, the MobileOne backbone network is used to extract features. A lightweight and efficient CA attention mechanism and the SCNet module are used to enhance the feature extraction capability of the network, enabling it to focus on discriminative regions in the image and efficiently distinguish differences between regions to achieve accurate segmentation. Furthermore, cross-dimensional feature fusion is achieved by adding the MTAE module, introducing the Transformer encoder module in each scale space, and establishing skip connections across the dimensional spaces to fuse features from different dimensions. The proposed algorithm is tested on the Cityscapes dataset, and the experimental results show that it improves the mIoU to 71.03% with only a slight increase in Params and GFLOPs compared to LiteSeg; the IoU of 12 of the 19 Cityscapes categories is higher than that of the LiteSeg algorithm, with only 7 categories having slightly lower IoU values. The generalization ability of the proposed model is also tested on the KITTI dataset, and the experimental results show that the proposed model generalizes to a certain degree. This demonstrates that the proposed algorithm meets the demand for accurate and fast segmentation of road images. However, the algorithm proposed in this paper also has some limitations. For example, for model compactness we abandon the downsampling process in the final stage, which makes the receptive field of the model insufficient to cover large target objects, resulting in limited improvement in segmentation accuracy for high-resolution objects. Due to the limited computing power of the device, the image is cropped during model training, resulting in the loss of spatial details, which leads to unsatisfactory segmentation of the boundary regions of the image. Future work will conduct in-depth experiments on the power consumption of the model while improving segmentation accuracy. We will use knowledge distillation to further reduce the computational cost of the model and conduct experiments on different datasets captured by smart vehicles.

Acknowledgement: The authors are highly thankful to the National Natural Science Foundation of China, to the Innovation Fund of Chinese Universities Industry-University-Research, to the Research Project for Young and Middle-Aged Teachers in Guangxi Universities, and to the Special Research Project of Hechi University. This research was financially supported by the Project of Outstanding Thousand Young Teachers' Training in Higher Education Institutions of Guangxi, Guangxi Colleges and Universities Key Laboratory of AI and Information Processing (Hechi University), Education Department of Guangxi Zhuang Autonomous Region.

Funding Statement: The authors are highly thankful to the National Natural Science Foundation of China (No. 62063006), the Natural Science Foundation of Guangxi Province (No. 2023GXNSFAA026025), to the Innovation Fund of Chinese Universities Industry-University-Research (ID: 2021RYC06005), to the Research Project for Young and Middle-Aged Teachers in Guangxi Universities (ID: 2020KY15013), and to the Special Research Project of Hechi University (ID: 2021GCC028). This research was financially supported by the Project of Outstanding Thousand Young Teachers' Training in Higher Education Institutions of Guangxi, Guangxi Colleges and Universities Key Laboratory of AI and Information Processing (Hechi University), Education Department of Guangxi Zhuang Autonomous Region.

Author Contributions: The authors confirm their contributions to the paper as follows: study conception and design: J. Peng; data collection: Y. Hou; analysis and interpretation of results: Q. Yang; draft manuscript preparation: Q. Yang. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials:The data presented in this study are available upon request from the corresponding author.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
