
    A Study on Enhancing Chip Detection Efficiency Using the Lightweight Van-YOLOv8 Network

2024-05-25 14:40:00
Computers Materials & Continua, 2024, Issue 4

Meng Huang, Honglei Wei and Xianyi Zhai

School of Mechanical Engineering and Automation, Dalian Polytechnic University, Dalian, 116034, China

ABSTRACT In pursuit of cost-effective manufacturing, enterprises are increasingly adopting the practice of utilizing recycled semiconductor chips. To ensure consistent chip orientation during packaging, a circular marker on the front side is employed for pin alignment following successful functional testing. However, recycled chips often exhibit substantial surface wear, and the identification of the relatively small marker proves challenging. Moreover, the complexity of generic target detection algorithms hampers seamless deployment. Addressing these issues, this paper introduces a lightweight YOLOv8s-based network tailored for detecting markings on recycled chips, termed Van-YOLOv8. Initially, to alleviate the influence of diminutive, low-resolution markings on the precision of deep learning models, we utilize an upscaling approach for enhanced resolution. This technique relies on the Super-Resolution Generative Adversarial Network with Extended Training (SRGANext) network, facilitating the reconstruction of high-fidelity images that align with input specifications. Subsequently, we replace the original YOLOv8s model's backbone feature extraction network with the lightweight Vanilla Network (VanillaNet), simplifying the branch structure to reduce network parameters. Finally, a Hybrid Attention Mechanism (HAM) is implemented to capture essential details from input images, improving feature representation while concurrently expediting model inference speed. Experimental results demonstrate that the Van-YOLOv8 network outperforms the original YOLOv8s on a recycled chip dataset in various aspects. Significantly, it demonstrates superiority in parameter count, computational intricacy, precision in identifying targets, and speed when compared to certain prevalent algorithms in the current landscape. The proposed approach proves promising for real-time detection of recycled chips in practical factory settings.

KEYWORDS Lightweight neural networks; attention mechanisms; image super-resolution enhancement; feature extraction; small object detection

    1 Introduction

The escalating volume of discarded electronic devices has led to the emergence of a significant reservoir of reusable semiconductor chips. Compared to their new counterparts, recycled chips come at a markedly lower price, prompting numerous enterprises to adopt chip regenerating equipment for cost reduction. Following successful functional testing, chips undergo tape-and-reel packaging for automated surface mounting on circuit boards. Ensuring precise chip orientation during this process is crucial, as any deviation can result in severe consequences, including the scrapping of circuit boards. To maintain consistent chip orientation, a small circular marker is typically placed on one end of the chip to denote pin arrangement. Detecting and identifying the position of this circular marker is essential for both testing and packaging. However, recycled chips often exhibit substantial surface wear, leading to blurred and challenging-to-detect markers. Moreover, the computational resources required by detection models are substantial, posing difficulties during algorithm deployment on computing-limited terminal devices. Therefore, a critical challenge in this field is how to minimize the complexity and computational demands of detection models while ensuring precision.

Recycled chip mark detection falls under the category of small object recognition in the field of machine vision. The common methods include conventional image processing and deep learning. Conventional image processing mainly involves image segmentation, texture analysis, image restoration, and feature matching. Feature matching is widely used due to its adaptability in dealing with grayscale changes, deformation, and occlusion. Cui et al. [1] employed diverse approaches for identifying distinct irregularities on the exterior of mobile phone casings, including the least squares method, an image differential algorithm, and an improved traditional template matching algorithm for detecting IR holes, ink spots, and LOGOs. Yao et al. [2] extracted the HU invariant moment features and template features of target contours for coarse matching to obtain candidate part contours, used an improved Harris corner detection method to obtain corners, and, based on gray-scale fast matching, improved the precision of part image matching. While these conventional methods have made progress in target feature matching, they perform well on clear chip surfaces but are less effective on severely worn marks on recycled chips.

With the progress of modern information technology and intelligent manufacturing techniques, deep learning-based systems for detecting small targets in industrial products have found widespread application [3]. Given that the hardware employed in industrial settings is often edge devices with limited memory and computational power, considerations extend beyond the requirements of the mechanical drive systems to encompass the computational demands of the models. Consequently, reducing the size of the model proves advantageous for seamless integration into computing devices [4]. The prevalent methodologies for object detection include multi-step strategies like the Region-Centric Convolutional Neural Network (R-CNN) [5], Fast R-CNN [6], and Mask R-CNN [7]; one-step methodologies such as the One-Shot Multibox Detector (SSD) [8] and You Only Look Once (YOLO) [9]; and Transformer-based object detection algorithms like the Detection Transformer (DETR) [10]. Two-stage algorithms first generate candidate boxes and then extract features from these boxes before regressing the objects, which slows down the detection speed. Transformer-based DETR performs poorly in small object detection and has extremely long training times, 10–20 times those of two-stage algorithms. In contrast to alternative categories, object detection algorithms employing a single-stage approach, grounded in regression, combine both localization and classification objectives, leading to increased detection speed and significant advantages in terms of real-time object detection. Moreover, the YOLO algorithm is highly scalable and easily extends to new detection tasks. Based on single-stage algorithms, for more accurate specific target detection, Li et al. [11] proposed an aviation engine part surface defect detection model, YOLO-KEB. This model incorporates the Efficient Channel Attention Network (ECA-Net) into YOLO's foundational feature extraction network to improve its capabilities in feature extraction. Additionally, it integrates the Bi-directional Feature Pyramid Network (BiFPN) module into the feature integration network for a comprehensive integration of multi-scale features, thereby amplifying the model's performance for object detection. Wang et al. [12] introduced a network for small defect detection, YOLOV4-SA, combining the Spatial Attention Module (SAM) with YOLOV4. SAM corrects feature values and highlights defect areas, thus effectively recognizing small defects. To address the deployment issue of object detection models on terminal devices with limited computational resources, Zhou [13] and colleagues introduced the YOLOv5s-GCE lightweight model designed to identify surface defects on strip steel. This model incorporates the Ghost module and integrates the Coordinate Attention (CA) strategy, effectively decreasing the model's dimensions and computational demand without compromising detection accuracy. Yang et al. [14] proposed the improved CBAM-MobilenetV2-YOLOv5 model, introducing both the Mobile Network Version 2 (MobilenetV2) module and the Convolutional Block Attention Module (CBAM) for a lighter strip steel surface defect detection model. Following this, Zhang [15] improved YOLOv5 by using the lighter Shuffle Network Version 2 (ShuffleNetv2) as the backbone network, reducing model complexity and gaining an advantage in detection speed. Zhou et al. [16] proposed the YOLOv8-EL object detection method, using the Generative Adversarial Network for Generative Art (GauGAN) to augment the dataset and rectify the imbalance of different defects in the dataset. The method incorporates the Context Aggregation Module (CAM) in the backbone and feature extraction networks to suppress background noise and builds a Multi-Attention Detection Head (MADH) to effectively improve detection accuracy.

The aforementioned studies have undertaken significant efforts in the lightweight processing of computationally intensive object detection models, offering valuable insights. Addressing challenges related to the identification of small and heavily worn markings on semiconductor chips, as well as the deployment complexities of generic detection algorithms on resource-limited devices, this paper presents a novel lightweight chip marker detection algorithm. Leveraging the characteristics of chip markings and building upon YOLOv8s as the baseline, our approach enhances the detection performance of the original YOLOv8s method while reducing the computational load, rendering the network more lightweight. The principal achievements of this study are delineated as follows:

(1) Effectively generating high-quality samples using the SRGANext sample generator to meet the input size requirements of the detection model. Simultaneously, significantly enhancing image resolution in this process contributes to providing a superior dataset for the detection model.

(2) Introducing the lightweight VanillaNet [17] as the backbone feature extraction network for YOLOv8s, successfully reducing the number of convolutional layers and computational resources. This adjustment results in a more lightweight detection model.

(3) Integrating the HAM [18] into the foundational structure of YOLOv8s to enhance the network's proficiency in capturing target feature information. This technology elevates the model's predictive capacity, enabling the real-time and accurate detection of chip markings.

Section 2 of this paper outlines the chip marker detection approach, encompassing data collection, image preprocessing, and the Van-YOLOv8 detection model. In Section 3, the experimental details are expounded, covering evaluation metrics, validating the effectiveness of data preprocessing, conducting ablation experiments, and comparing results with other experiments. Section 4 concludes the paper and provides future prospects.

    2 Experimental Setup and Methodology

    2.1 Data Collection

As depicted in Fig. 1, the experimental platform primarily consists of an industrial camera, a light source, a vibrating disk, a fiber optic sensor, and a feed box. Chips are transported along the track of the vibrating disk and, once detected by the fiber optic sensor, trigger the industrial camera to capture an image. The indicator on the front side of the chip is used to identify the position of the first pin, so accurate detection of this circular mark is essential. Therefore, the YOLOv8 network is used to detect the position of the chip mark. When the indicator is positioned in the lower-left corner, the chip continues to move forward and is placed into the tray by a robotic arm to proceed to the next inspection process. Otherwise, the chip is blown back to the vibrating disk through air holes. The detection continues until all the chips on the vibrating disk are checked. The company requires an image processing speed of 200 pcs/min, and to improve the speed of image capture and transmission, the image resolution is set to 304 × 168 pixels.

    Figure 1: Experimental platform for chip symbol detection

    2.2 Image Preprocessing

In practical applications, low-resolution images may hinder deep learning models from effectively capturing and identifying critical features, thereby impacting the ultimate detection performance. To surmount this challenge, the paper introduces SRGANext technology, significantly enhancing the quality and clarity of chip marker images. The specific process is illustrated in Fig. 2, where a 128 × 128-sized block is extracted from the image's bottom-left corner and the circular marker is magnified. If the template matches the marker in this region, it indicates correct positioning. The YOLOv8 model uses the Letterbox [19] function to process images to fit the model's input size. This function maintains the original aspect ratio of the image by adding padding on one or both necessary sides to adjust the size to meet the input dimensions. This method introduces additional non-informative areas, reducing the effective resolution. Therefore, to improve recognition accuracy, this paper first enlarges the image to 512 × 512 before inputting it into the YOLOv8 model. To enhance the clarity of the enlarged image, a super-resolution magnification method based on the SRGANext network is used. The images are then annotated to construct a training set for network training, followed by chip detection.
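To make the resizing order concrete, the following minimal Python sketch shows a YOLO-style letterbox applied after a super-resolution enlargement; the srgannext_generator call is a hypothetical stand-in for the SRGANext generator, and only the letterbox logic is spelled out.

```python
import cv2
import numpy as np

def letterbox(img: np.ndarray, new_size: int = 512, pad_value: int = 114) -> np.ndarray:
    """Resize while keeping the aspect ratio, padding the remainder (YOLO-style)."""
    h, w = img.shape[:2]
    scale = new_size / max(h, w)
    resized = cv2.resize(img, (int(round(w * scale)), int(round(h * scale))))
    canvas = np.full((new_size, new_size, 3), pad_value, dtype=np.uint8)
    top = (new_size - resized.shape[0]) // 2
    left = (new_size - resized.shape[1]) // 2
    canvas[top:top + resized.shape[0], left:left + resized.shape[1]] = resized
    return canvas

# Enlarging the 304 x 168 capture first means letterboxing to 512 x 512
# adds little non-informative padding:
# sr_img = srgannext_generator(raw_img)       # hypothetical 4x super-resolution step
# model_input = letterbox(sr_img, new_size=512)
```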

The SRGANext architecture represents an enhanced iteration of the Super-Resolution Generative Adversarial Network (SRGAN) [20], with structural improvements facilitated by the ConvNeXt [21] network. It comprises a generative network and a discriminative network. The framework takes in low-resolution images with three channels, and the generator network reconstructs high-resolution images from these. The discriminator serves as a tool to help the generator produce better quality images, guiding the generator only during the training phase. In the inference stage, the generator independently reconstructs high-resolution images. The specific structure is illustrated in Fig. 3.
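A minimal sketch of one adversarial training step in this training-only discriminator setup follows; the specific loss combination (an L1 content term plus a small BCE adversarial term) is an illustrative assumption, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def gan_train_step(gen, disc, opt_g, opt_d, lr_img, hr_img):
    # Discriminator update: real HR images vs. detached generator outputs.
    fake = gen(lr_img).detach()
    real_logit, fake_logit = disc(hr_img), disc(fake)
    d_loss = F.binary_cross_entropy_with_logits(real_logit, torch.ones_like(real_logit)) \
           + F.binary_cross_entropy_with_logits(fake_logit, torch.zeros_like(fake_logit))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: content (L1) loss plus a small adversarial term.
    fake = gen(lr_img)
    adv_logit = disc(fake)
    g_loss = F.l1_loss(fake, hr_img) \
           + 1e-3 * F.binary_cross_entropy_with_logits(adv_logit, torch.ones_like(adv_logit))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

At inference only gen is run, matching the description above.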

    Figure 2: Overall framework for data preprocessing

    Figure 3: SRGANext network architecture

The generative module in the SRGANext architecture is derived from the ConvNeXt network, incorporating internal adjustments in channel quantity, as illustrated in Fig. 3a. The generator network sequentially passes through a stem module, four Stage modules, and an upsample module. Each Stage module contains a certain number of SRGANext Block modules in the ratio 3:3:9:3. As shown in Fig. 3b, the SRGANext Block is a residual module that includes DW Conv and PW Conv for adjusting the channel count. Additionally, the LayerNorm component is employed for channel-wise normalization, thereby diminishing the model's intricacy and lessening computational demands.
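A minimal PyTorch sketch of such a residual block, assuming a ConvNeXt-style layout (the 7 × 7 depthwise kernel and 4× pointwise expansion are assumptions; the paper only specifies DW Conv, PW Conv, and LayerNorm):

```python
import torch
import torch.nn as nn

class SRGANextBlock(nn.Module):
    """ConvNeXt-style residual block: depthwise conv, channel-wise LayerNorm,
    then pointwise convs that expand and project the channel count."""
    def __init__(self, dim: int, expansion: int = 4):
        super().__init__()
        self.dw = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)  # DW Conv
        self.norm = nn.LayerNorm(dim)               # normalises over the channel axis
        self.pw1 = nn.Linear(dim, expansion * dim)  # PW Conv: 1x1 conv == Linear on channels
        self.act = nn.GELU()
        self.pw2 = nn.Linear(expansion * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = self.dw(x)
        x = x.permute(0, 2, 3, 1)   # NCHW -> NHWC so LayerNorm sees channels last
        x = self.pw2(self.act(self.pw1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)   # back to NCHW
        return residual + x
```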

The discriminative component within the SRGANext architecture draws inspiration from the SRGANext Block as its foundational module. It incorporates Depthwise Convolution modules that substantially decrease the parameter count and diminish the interdependence among channels, thereby expediting model training, as illustrated in Fig. 3c. Unlike the generator, the discriminator network undergoes downsampling, reducing the feature map size to half of its original. Ultimately, it undergoes global average pooling to reduce the feature map size to 1 × 1, bringing the reconstructed representation closer to the actual image. Fig. 4 shows the image reconstruction effect of the SRGANext network after 100 training cycles, demonstrating a notable improvement in the resolution of the reconstructed images.

    Figure 4: SRGANext network processing effect

In this study, inputting low-resolution chip marker images into the SRGANext network effectively increases pixel density, thereby enhancing image clarity and detail. This super-resolution enhancement technology not only helps compensate for the information loss caused by low resolution in deep learning models but also strengthens the accuracy of the model in chip marker detection tasks.

    2.3 Van-YOLOv8 Detection Model

The challenge of detecting small and heavily worn markings on recycled chips has given rise to issues of false positives and negatives, underscoring the pressing need for enhanced precision in detection. At the same time, prevalent target identification methodologies present issues such as heightened intricacy and considerable computational demands, rendering the deployment of algorithms on edge devices notably challenging. Therefore, this paper proposes a lightweight YOLOv8s network structure, namely Van-YOLOv8, as illustrated in Fig. 5, to address these issues.

The primary detection process involves three key steps: image preprocessing, model refinement, and model testing. In the first step, low-resolution images undergo super-resolution enlargement to reconstruct a high-quality dataset. After selecting specific images, the circular markings within them are annotated using the LabelImg annotation software. Notably, some heavily worn chips that, even after preprocessing, fail to exhibit a complete marking are manually identified, with those meeting the circular marking criteria deemed qualified. Transitioning to the subsequent stage, the categorized images, both after preprocessing and in their original state, are utilized for the training and validation procedures, and subsequently input into the Van-YOLOv8 model for training. The detection framework consists of a simplified VanillaNet, a Hybrid Attention Mechanism (HAM) element, and the YOLOv8s convolutional neural architecture component. The VanillaNet significantly reduces the model's volume, thus lowering computational resource demands. The backbone network, augmented with the HAM module at the bottom, enhances feature extraction capabilities. The input is processed through the Neck network and detection head, ultimately providing predicted bounding box coordinates and class labels for the targets. In the third step, the trained model undergoes evaluation using the test set, allowing for an analysis of its detection performance.

    Figure 5: Topology of the Van-YOLOv8 model

    2.3.1 Baseline-YOLOv8s Network

In this manuscript, we utilize the single-stage detection algorithm YOLOv8s as the reference model. As illustrated in Fig. 6, this framework comprises three fundamental elements: the Backbone architecture, the Neck module, and the Head component. In the Backbone architecture, in contrast to YOLOv5, YOLOv8s adopts a more lightweight C2f module in lieu of the C3 module. In the Neck network, YOLOv8s omits the 1 × 1 convolutional downsampling unit observed in YOLOv5 and substitutes the C3 with a C2f. In the Head network, YOLOv8s utilizes a decoupled head configuration, segregating the tasks of classification and regression, and shifts from Anchor-Based to Anchor-Free.

    2.3.2 Integrated VanillaNet Minimalist Network

Considering the limited computational resources typically found in endpoint devices within conventional enterprises, deploying complex chip marker detection models that demand significant computing power often becomes a constrained task. In response to this challenge, this paper addresses the issue by introducing a streamlined neural network module, VanillaNet, into the backbone feature extraction network of the baseline model YOLOv8s. In contrast to intricate residual and attention modules, VanillaNet comprises basic convolutional and pooling layers, eliminating complex connections and skip connections. Such a design streamlines the network structure, significantly reducing the model's volume and parameter count, consequently lowering the computational intricacy.

    Figure 6: YOLOv8s network structure diagram

The architecture of VanillaNet is depicted in Fig. 7 (using a 6-layer structure as an example), and it mainly consists of three parts: a backbone block, which converts the input image from 3 channels to multiple channels and performs downsampling; a main body that extracts useful information; and a densely connected layer for generating classification results. For the backbone block, a 4 × 4 × 3 × C convolutional layer with a stride of 4 is used to downsample the original 3-channel image into a feature map with C channels. In the primary segments (stage 1, stage 2, and stage 3), max-pooling layers with a stride of 2 are implemented to modify the dimensions of the feature map and double the number of channels from the preceding layer. In stage 4, an average pooling operation is employed without augmenting the channel quantity. Finally, the fully connected layer outputs the classification result.
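A skeleton of this plain conv-pool layout in PyTorch may help fix the structure; the base width c is an assumption, and a plain ReLU temporarily stands in for the series-informed activation introduced below.

```python
import torch
import torch.nn as nn

class VanillaNetBackbone(nn.Module):
    """Sketch of the branch-free conv-pool backbone described above (6-layer form)."""
    def __init__(self, c: int = 128, num_classes: int = 1000):
        super().__init__()
        self.stem = nn.Conv2d(3, c, kernel_size=4, stride=4)  # 4x4, stride-4 downsampling

        def stage(cin, cout):
            # 1x1 conv keeps per-layer cost low; stride-2 max pooling halves the map.
            return nn.Sequential(nn.Conv2d(cin, cout, 1), nn.BatchNorm2d(cout),
                                 nn.ReLU(), nn.MaxPool2d(2))

        self.stage1 = stage(c, 2 * c)        # stages 1-3 double the channel count
        self.stage2 = stage(2 * c, 4 * c)
        self.stage3 = stage(4 * c, 8 * c)
        self.stage4 = nn.Sequential(nn.Conv2d(8 * c, 8 * c, 1), nn.BatchNorm2d(8 * c),
                                    nn.ReLU(), nn.AvgPool2d(2))  # no channel increase
        self.head = nn.Linear(8 * c, num_classes)

    def forward(self, x):
        x = self.stage4(self.stage3(self.stage2(self.stage1(self.stem(x)))))
        x = x.mean(dim=(2, 3))               # global average pooling
        return self.head(x)
```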

    Figure 7: VanillaNet network structure

To preserve feature map information at each layer while minimizing computational costs, we opted for 1 × 1 convolutional kernels. Following each 1 × 1 convolutional layer, we applied the Series Informed Activation Function (SIAF), expressed mathematically as shown in Eq. (1). This choice aims to effectively activate the neural network's response, rendering the network more flexible and responsive during the information propagation process. To further simplify the training process, we introduced batch normalization after each convolutional layer.

$$A_s(x) = \sum_{i=1}^{n} a_i \, A(x + b_i) \quad (1)$$

Here, $n$ denotes the quantity of cascaded activation functions, and $a_i$, $b_i$ represent the scaling and offset of each activation, preventing mere accumulation.
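A direct reading of Eq. (1) can be sketched as a small PyTorch module; the base activation A is assumed to be ReLU here, and a_i, b_i are learnable per-term parameters.

```python
import torch
import torch.nn as nn

class SeriesInformedActivation(nn.Module):
    """Sketch of Eq. (1): a weighted sum of n shifted base activations."""
    def __init__(self, n: int = 3):
        super().__init__()
        self.a = nn.Parameter(torch.ones(n))    # scaling a_i of each stacked activation
        self.b = nn.Parameter(torch.zeros(n))   # input offset b_i of each activation

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # A_s(x) = sum_i a_i * A(x + b_i); the learned a_i, b_i keep the terms
        # from merely accumulating into one activation.
        return sum(self.a[i] * torch.relu(x + self.b[i]) for i in range(len(self.a)))
```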

    2.3.3 Introduction of Hybrid Attention Mechanisms

In order to concentrate on the regions of the image that contain circular marks and to enhance the network's feature extraction capabilities, a HAM module is utilized. HAM integrates both channel and spatial attention mechanisms to retrieve crucial relevant details from the channel and spatial properties of the initial images. In contrast to traditional attention mechanisms, this approach is more flexible and adaptive, offering a balance between adaptability and computational efficiency.

Part A: Channel Attention (CAM). Channel attention chiefly concentrates on modifying the weights of individual channels at each spatial location, distributing weights across the convolutional feature maps. Following the convolution of the original image, global average pooling is executed to derive a vector with dimensions [C, 1, 1]. The resulting tensor undergoes convolutional and activation processes to produce the channel weight vector s. As depicted in Fig. 8, the [C, H, W]-dimensional input feature X undergoes global average pooling (GAP) for dimension reduction and information condensation. Interactions among neighboring channels are delineated by considering each individual channel alongside its surrounding k channels. Efficient prediction of the channel attention is achieved through a Conv1D convolution employing a kernel of size k, where the kernel size is proportionate to the channel dimension C. The input feature map X is multiplied with the channel weight vector s to produce the output feature map Y. The equations are expressed as [22]:

$$s = \sigma\left(\mathrm{Conv1D}_k\left(\mathrm{GAP}(X)\right)\right), \qquad Y = s \odot X$$

$$k = \left|\frac{\log_2 C}{\gamma} + \frac{b}{\gamma}\right|_{odd}$$

Here, σ represents the Sigmoid activation operation. The dimension of the convolutional kernel k adjusts proportionally to the channel dimension, with the model parameters γ = 2 and b = 1. The symbol |·|_odd specifies that k must exclusively be a positive odd integer. The adopted channel attention strategy in this investigation utilizes a regional inter-channel communication approach, maintaining optimal effectiveness while concurrently diminishing model complexity.
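This ECA-style channel attention can be sketched as follows; the kernel-size rule mirrors the formula above, and the rest follows the GAP, Conv1D, Sigmoid, and reweighting pipeline described in the text.

```python
import math
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """ECA-style channel attention: GAP -> Conv1D over neighbouring channels
    -> sigmoid weights s, with kernel size k derived from channel count C."""
    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        k = int(abs(math.log2(channels) / gamma + b / gamma))
        k = k if k % 2 == 1 else k + 1            # k must be a positive odd integer
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = x.mean(dim=(2, 3))                    # GAP: [N, C, H, W] -> [N, C]
        s = self.conv(s.unsqueeze(1)).squeeze(1)  # 1D conv across channel neighbours
        s = torch.sigmoid(s)                      # channel weight vector s
        return x * s[:, :, None, None]            # Y = s ⊙ X
```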

    Figure 8: CAM diagram

Part B: Spatial Attention (SAM). Spatial attention is concerned with adjusting the importance of different spatial positions within each channel. The attention mechanism based on spatial attributes filters out less relevant portions of the background in an image and directs attention towards regions of significance. In Fig. 9, the feature maps are processed through Max Pooling (MaxPool) and Mean Pooling (AvgPool) to produce two arrays with dimensions [1, H, W], aggregating every channel at the same representation point. The pair of representation maps are later merged to generate a unified representation map with dimensions [2, H, W], and subsequently transformed to [1, H, W] through a convolutional stratum. The spatial influence is employed to modulate the original feature map with dimensions [C, H, W] for refinement. By backpropagating the effective receptive field to the initial image, the network can dynamically concentrate on crucial portions [23].

    Figure 9: SAM diagram

$$Y = \sigma\left(f^{\,n \times n}\left(\left[\mathrm{AvgPool}(G(X));\ \mathrm{MaxPool}(G(X))\right]\right)\right) \otimes X$$

Here, $f^{n \times n}$ denotes a convolutional operation employing a kernel dimension of n × n. X represents the input feature map, while Y indicates the resulting feature map. G(X) denotes the segmentation of the input feature map into a lattice of points.
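A compact sketch of this spatial attention follows, with the grid operation G(X) omitted for simplicity and an assumed 7 × 7 convolution kernel.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """SAM sketch: channel-wise max and mean pooling give two [1, H, W] maps,
    concatenated to [2, H, W], reduced to one map by an n x n conv, then used
    to reweight the input features."""
    def __init__(self, n: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=n, padding=n // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        max_map, _ = x.max(dim=1, keepdim=True)   # MaxPool over channels -> [N,1,H,W]
        avg_map = x.mean(dim=1, keepdim=True)     # AvgPool over channels -> [N,1,H,W]
        attn = torch.sigmoid(self.conv(torch.cat([max_map, avg_map], dim=1)))
        return x * attn                           # refine the original [C,H,W] features
```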

    3 Experimental Study

The Van-YOLOv8 network is deployed and trained on the TensorRT (version 8.4) framework, using FP16 optimization mode. The experiment employed an NVIDIA GeForce RTX 3060 GPU; an Intel Core i7-12700H CPU with a base frequency of 2.70 GHz and 16 GB of RAM; the operating system was Windows 11. The programming software used was PyCharm 2021.3.1, and the CUDA version was 11.8. The dimensions of the input image were configured to 512 × 512 pixels, with the number of iterations (Epochs) set to 300. The training batch size was set to 8, while the number of threads (Num workers) was set to 4. During the model training process, the weights trained on the COCO dataset were used as pre-trained weights. Additionally, 600 images generated by the SRGANext network were utilized as training samples for the detection model, with a subset of 150 original images employed for testing and validation purposes.

    3.1 Evaluation Metrics

In this experiment, we introduced evaluation criteria for gauging the efficacy of image reconstruction and object detection. To evaluate the fidelity of reconstructed images, this study incorporates two metrics: Structural SIMilarity (SSIM) and Peak Signal-to-Noise Ratio (PSNR).

$$\mathrm{SSIM}(x, y) = l(x, y) \cdot c(x, y) \cdot s(x, y)$$

$$\mathrm{PSNR} = 10 \log_{10}\left(\frac{MAX_I^2}{MSE}\right)$$

Here, l, c, and s denote the resemblance in brightness, contrast ratio, and composition, respectively. MSE stands for Mean Square Error, and MAX_I denotes the maximum possible pixel value of the image.
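For reference, PSNR follows directly from the MSE definition, as in this minimal NumPy sketch; SSIM is more involved and is typically taken from a library such as skimage.metrics.structural_similarity.

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """PSNR in dB from the MSE definition above; higher means closer to the reference."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```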

For measuring the performance of object detection, this paper introduces five evaluation metrics: detection precision (Precision), mean Average Precision (mAP), Frames Per Second (FPS), model parameter count (Params), and computational load (GFLOPs).

$$\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad \mathrm{mAP} = \frac{1}{k}\sum_{i=1}^{k} AP(i), \qquad \mathrm{FPS} = \frac{1}{ElapsedTime}$$

$$\mathrm{FLOPs} \approx C_0 \times C_i \times k_w \times k_h \times H \times W$$

Here, TP refers to correctly identified positive instances, FN indicates incorrectly missed positive instances, and FP signifies incorrectly identified negative instances. k symbolizes the quantity of categories, and AP(i) denotes the Average Precision value corresponding to the i-th category. ElapsedTime encompasses the total duration of image preprocessing, inference, and post-processing. Floating Point Operations (FLOPs) correspond to the quantity of floating-point operations conducted; for a convolutional layer they scale with the output feature map size H × W, where C_0 signifies the count of resultant channels, C_i denotes the quantity of initial channels, k_w denotes the convolution kernel width, and k_h indicates the convolution kernel height.
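These detection metrics reduce to simple arithmetic once TP/FP counts and per-class AP values are available, as the following sketch illustrates.

```python
def precision(tp: int, fp: int) -> float:
    """Precision = TP / (TP + FP)."""
    return tp / (tp + fp) if tp + fp else 0.0

def mean_average_precision(ap_per_class):
    """mAP = (1/k) * sum of AP(i) over the k categories."""
    return sum(ap_per_class) / len(ap_per_class)

def fps(elapsed_time_s: float, num_images: int = 1) -> float:
    """FPS over the total preprocessing + inference + post-processing time."""
    return num_images / elapsed_time_s
```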

    3.2 Effectiveness of Image Preprocessing

In this part of our study, we examine how well the SRGAN and SRGANext models reconstruct images across various datasets. We tested these models using several datasets: DIV2K, ImageNet100, COCO2017, Set5, and Set14. According to Table 1, the SRGANext framework uniformly surpasses the SRGAN framework in both SSIM and PSNR across all these datasets. These metrics are important for evaluating the quality of image reconstruction.

    Table 1: Comparison of SRGAN and SRGANext model performance

Moreover, this study delves into a comprehensive examination of the comparison between images preprocessed using the SRGANext network and the original images in the Van-YOLOv8 model for chip datasets. The preprocessed images exhibit sharper and more accurate features in chip marker detection tasks, contributing to an improved ability of the model to recognize subtle markings. Table 2 clearly illustrates the substantial improvements of preprocessed images over original images across various performance metrics, emphasizing the notable role of SRGANext preprocessing in enhancing chip marker detection effectiveness.

    Table 2: Influence of preprocessed and original images on detection performance

    3.3 Ablation Experiment

In this study, YOLOv8s is employed as the foundational model, and the VanillaNet element is integrated into the core feature extraction network to streamline the model's intricacy. Simultaneously, a HAM is incorporated to focus on local information and enhance feature extraction capabilities. To affirm the efficacy of the enhancements implemented on the foundational model, ablation studies were conducted from two perspectives:

1) Derived from the foundational model, each enhancement module was introduced one at a time to validate the influence of individual modules on model identification accuracy, number of parameters, detection speed, etc.

2) In the conclusive model (Van-YOLOv8), each enhancement module was systematically excluded one by one (except for the VanillaNet module) to evaluate the influence of specific improvement modules on the final model's performance. The outcomes of the experiments are illustrated in Table 3.

    Table 3: Ablation experimental study

The experimental findings indicate that by exclusively integrating the VanillaNet component within the feature extraction network of the backbone, the parameter count decreased from 23.9 M in the baseline to 11.1 M, a reduction of 12.8 M. VanillaNet uses a sequence of convolution-pooling structures to extract features, without direct connections between different blocks. The feature map is continuously downsampled through convolution and pooling layers to subsequent blocks, avoiding branched structures and thereby reducing a significant amount of computation. Hence, incorporating the simplified VanillaNet module into the backbone for feature extraction efficiently diminishes the model's intricacy and enhances inference speed. Furthermore, integrating the HAM component within the structure explicitly establishes the interrelation among image channels and spatial dimensions. It aggregates information from multiple convolution kernels in a nonlinear manner, focusing more quickly on local information. The HAM module, composed of SAM and CAM, only slightly increases the parameters and computational cost while enhancing the model's inference speed. The ultimate model proposed in this manuscript, incorporating both the VanillaNet and HAM modules, achieves a detection accuracy of 90.2% and an mAP of 91.9%, increases of 4.8% and 4.4%, respectively. Moreover, the final model maintains a lower number of parameters and computational load, only 54.8% and 42.0% of the baseline, with an FPS increase of 11.3 fps compared to the baseline. In summary, the lightweight chip mark detection model presented in this study efficiently simplifies the model's structure while preserving robust detection accuracy and real-time efficiency.

    3.4 Comparison with Current Advanced Algorithms

In our study, we compared the performance of various advanced object detection algorithms with our Van-YOLOv8 network. This encompassed the two-stage algorithm Faster R-CNN and various single-stage algorithms: SSD, YOLOv4-Tiny [24], YOLOv5s, YOLOv7s [25], and YOLOv8s. The results of these comparative tests are condensed in Table 4.

    Table 4: Experimental comparisons

According to the results provided in Table 4, the framework within this investigation demonstrated superior execution concerning detection accuracy, mAP, model parameter count, computational load, and FPS. Compared to the baseline YOLOv8s, it increased detection accuracy by 4.8% and mAP by 4.4%, diminished the quantity of model parameters by 10.8 M, and lowered the computational load from 50.7 GFLOPs to 21.3 GFLOPs, a decrease of 29.4 GFLOPs, while FPS increased by 11.3 fps. In comparison with YOLOv5s, another one-stage object detection algorithm, our approach showed even more significant advantages in chip mark detection, with notable improvements in detection accuracy and mAP. The model's parameter count was reduced by 14.8 M, to only 46.9% of YOLOv5s, and the computational load decreased by 43.8 GFLOPs, to only 32.7% of YOLOv5s, with an increase of 24.1 fps. Furthermore, Van-YOLOv8 reduces the parameter count by 13.1 M compared to YOLOv7s, 3.2 M compared to YOLOv4-Tiny, 17.0 M compared to SSD, and 44.7 M compared to Faster R-CNN. The frames per second (FPS) are enhanced by 30.7 fps over YOLOv7s, 7.2 fps over YOLOv4-Tiny, 33.5 fps over SSD, and 33.8 fps over Faster R-CNN. Additionally, Fig. 10 presents the detection accuracy and mAP curves from the Van-YOLOv8 model training. The curve showing the changes in detection accuracy over iterations indicates that the Van-YOLOv8 model quickly improves in target detection accuracy and achieves a result close to 1 when stabilized. To more intuitively display the differences between each model, examples of the detection process are illustrated in Fig. 11.

    Figure 10: Van-YOLOv8 model training results

Through detailed results and comparative analysis, the Van-YOLOv8 model significantly curtails computational costs while ensuring enhanced detection precision compared to other cutting-edge algorithms. This not only underscores the outstanding performance of Van-YOLOv8 in object detection tasks but also indicates its effective management of computational resources while enhancing accuracy. The successful manifestation of this balance highlights the unique design and excellent performance advantages of the Van-YOLOv8 model.

Figure 11: Detection results of different models. The boxes indicate the locations of detected marks, and the numbers signify the model's confidence level in the detected objects

    4 Conclusion

The Van-YOLOv8 model, leveraging SRGANext for processing the chip dataset, integrates VanillaNet and a hybrid attention mechanism, demonstrating outstanding performance in detection accuracy. Simultaneously, Van-YOLOv8 achieves a significant reduction in computational costs, striking a balance between efficiency and performance. This is particularly crucial for implementing target detection tasks in resource-constrained environments, offering a balanced solution that meets high accuracy requirements while effectively managing computational expenses.

While Van-YOLOv8 excels in object detection tasks, its design is tailored to the specific task of recycled chip detection, and adaptability differences may exist for other types of object detection tasks. Additionally, Van-YOLOv8's performance is sensitive to certain key hyperparameters, requiring careful tuning for optimal performance and thereby increasing the difficulty of model optimization. In future work, it is recommended to improve the framework's versatility to ensure broader applicability in diverse object detection scenarios. Moreover, exploring automated methods for hyperparameter optimization, such as reinforcement learning or optimization-algorithm-based auto-tuning tools, can help reduce the complexity of model tuning while enhancing performance stability and generalization.

Acknowledgement: We sincerely thank the Scientific Research Funding Project of Liaoning Provincial Department of Education in 2021, the Comprehensive Reform Project of Undergraduate Education and Teaching in Liaoning in 2021, and the Graduate Innovation Fund of Dalian University of Technology for providing the necessary technical support for this research.

Funding Statement: This work was supported by the Liaoning Provincial Department of Education 2021 Annual Scientific Research Funding Program (Grant Numbers LJKZ0535, LJKZ0526), the 2021 Annual Comprehensive Reform of Undergraduate Education Teaching (Grant Numbers JGLX2021020, JCLX2021008), and the Graduate Innovation Fund of Dalian Polytechnic University (Grant Number 2023CXYJ13).

Author Contributions: Study conception and design: Honglei Wei, Meng Huang, Xianyi Zhai; data collection: Meng Huang; analysis and interpretation of results: Honglei Wei, Meng Huang; draft manuscript preparation: Meng Huang. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials:The data cannot be made publicly available upon publication because they are owned by a third party and the terms of use prevent public distribution.The data that support the findings of this study are available upon reasonable request from the authors.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
