
    Precise Agriculture: Effective Deep Learning Strategies to Detect Pest Insects

    IEEE/CAA Journal of Automatica Sinica, February 2022

    Luca Butera, Alberto Ferrante, Mauro Jermini, Mauro Prevostini, and Cesare Alippi

    Abstract—Pest insect monitoring and control is crucial to ensure safe and profitable crop growth in all plantation types, as well as to guarantee food quality and limited use of pesticides. We aim at extending traditional trap-based monitoring by involving the general public in reporting the presence of insects by using smartphones. This includes the largely unexplored problem of detecting insects in images that are taken in non-controlled conditions. Furthermore, pest insects are, in many cases, extremely similar to other species that are harmless. Therefore, computer vision algorithms must not be fooled by these similar insects, so as not to raise unmotivated alarms. In this work, we study the capabilities of state-of-the-art (SoA) object detection models based on convolutional neural networks (CNN) for the task of detecting beetle-like pest insects on non-homogeneous images taken outdoors by different sources. Moreover, we focus on disambiguating a pest insect from similar harmless species. We consider not only the detection performance of the different models, but also the required computational resources. This study aims at providing a baseline model for this kind of task. Our results show the suitability of current SoA models for this application, highlighting how FasterRCNN with a MobileNetV3 backbone is a particularly good starting point in terms of accuracy and inference latency. This combination provided a mean average precision score of 92.66%, which can be considered qualitatively at least as good as the scores obtained by other authors who adopted more specific models.

    I.INTRODUCTION

    THIS work is in the field of precise agriculture and aims at exploring different deep learning (DL) models for detecting insects in images. More in detail, this paper discusses the use of DL-based object detection models for the recognition of pest insects outdoors. In particular, we focus on the comparison of known architectures in the context of a novel dataset. Considering the field of interest, we also evaluate the computational resource requirements of the considered models.

    Early pest insect identification is a crucial task to ensure healthy crop growth; it reduces the chance of yield loss and enables the adoption of precise treatments that provide an environmental and economic advantage [1]. Currently, this task is mainly carried out by experts, who have to monitor field traps to assess the presence of species which cause significant economic loss. This procedure is prone to delays in the identification and requires a great deal of human effort. In fact, human operators need to travel to the different locations where traps are installed to check the presence of the target insects and, in some cases, to manually count the number of captured insects. The current trend is to introduce machine learning techniques to help decision makers in the choice of suitable control strategies for pest insects. Even though this approach is going to automate some time-consuming tasks, it leaves monitoring of pest insects limited to traps. For some pest insects that spread outside crop fields, it can be beneficial to have additional information coming from other locations. This is the case of Popillia japonica, a dangerous phytophage registered as a quarantine organism at the European level [2]. Early detection of the presence of this insect in specific regions is extremely important, as it permits tracking its spread. In this exercise, the general public may be of great help, as they can provide valuable input by means of mobile phones. As shown in Fig. 1, our monitoring system for pest insects will integrate inputs from traps and from smartphones, with the purpose of creating a map of the presence of the pest insects. Machine learning techniques become necessary when reports come from a large pool of users that take images with different cameras, lighting conditions, framings, and backgrounds. Furthermore, pest insects are, in many cases, extremely similar to other species that are harmless (e.g., Popillia japonica is very similar to two harmless species, Cetonia aurata and Phyllopertha horticola). Therefore, computer vision algorithms must be able to distinguish between them, even in the presence of high similarities. This is especially important when the general public is involved in the process, to avoid unmotivated alarms and insect eradication actions that are useless and may damage the environment.

    Throughout the years, many Computer Vision approaches have been proposed to assess the presence of pest insects in crop fields. Early solutions (e.g., [3]) were based on image classification by means of handcrafted features. These approaches suffered from a lack of generality, as most features were specific to particular species and, moreover, required great domain knowledge and design effort.

    Fig.1.Sketch of a multimodal detection system.

    With the advent of DL and convolutional neural networks (CNN), the problem has started being tackled with data-driven approaches that leverage the abstraction power of deep architectures both to extract meaningful features and to perform the recognition task. The majority of these studies focused on Image Classification [4], [5], which assigns a class label to each image. CNNs solved this problem with the help of Transfer Learning from models pre-trained on generic datasets, such as ImageNet [6].

    Only in recent years, as highlighted in [7], have some studies tackled the task of both recognizing and localizing pest insects in images. Remarkably, PestNet [8] compared an ad-hoc network to state-of-the-art (SoA) ones for the task of detecting insects in images taken in highly controlled environments, such as in traps.

    However, not much work has been done on detecting insects in the open, despite the fact that exploiting the pictures any citizen could take with a smartphone can greatly improve the control of pests over the territory. In the context of a multimodal pest monitoring system like the one shown in Fig. 1, it is important to have a reliable detection algorithm that is accurate with highly diverse backgrounds and robust to the presence of harmless insects that may be mistaken for dangerous ones. In fact, in such a system, observations would come from a large number of locations and at much higher rates than in trap-only monitoring systems. In a large scale system, false alarms become a critical problem that requires a specific study.

    In this paper, we assess the capabilities of SoA Generic Object Detection models for the detection of very similar insects in non-controlled environments. We do so as we believe that current architectures are well suited for this task and there is no need for new ones; the main problem is understanding how they compare and which ones are the better choices.

    We built a dataset of images taken from the Internet, with high variability in their characteristics, such as background, perspective, and resolution. The dataset contains three classes: the first one contains images of a pest insect, Popillia japonica; the other two classes contain images of insects commonly mistaken for our target one. Special care was taken to minimize the presence of highly similar samples. This task poses many challenges that SoA methods may have trouble with, such as the presence, in the images, of small or partially occluded insects, bad lighting conditions, or insects disguised in the background. Most importantly, without any prior knowledge, the object detectors may not be able to learn the general features necessary to disambiguate two similar insects.

    We then used our dataset to study the performance of three SoA object detection models across four different backbones. The goals were to:

    1) Assess the adaptability of existing models to detection of Popillia japonica.

    2) Identify the best candidate for task-specific improvements and modifications.

    3) Study the computational cost/accuracy trade-off of different backbones.

    With respect to the existing literature, the novelty of our work can be summed up as follows:

    1) We adopt advanced filtering techniques to provide enhanced datasets. The impact on performance is evident both in terms of accuracy and reliability. Notably, we avoid near-duplicate samples that may compromise the correctness of the model evaluation procedure when similar pairs end up split between the training and test sets.

    2) We experimentally verify that designing object detectors targeted to beetle-like insects is neither required nor advisable, as general purpose models can be successfully used after proper tuning. Furthermore, we compare these general models both from the standpoint of performance and from that of computational resources by employing common metrics.

    3) For the first time, we focus on the detection of extremely similar insects. In our application only one insect is actually harmful and must not be confused with others closely resembling it.

    4) We consider images that are very different in their characteristics and, in general, not shot in controlled environments nor with the same equipment. This, together with the previous point, sets up a highly complex detection environment that has not been previously addressed in similar studies on insects.

    5) We propose a study on the impact of output thresholding on the false positive count, showing that it is possible to find a good trade-off between false alarms and the detection of the dangerous insect, also for our specific application scenario.

    6) We show the applicability of Guided Backpropagation-based visualization techniques that can help experts gather insights on the insect features considered by the models. This is of particular importance since it allows validation of the algorithm's decisions from a more qualitative standpoint. These visualizations highlight the key features of the image used for the prediction: a good detector is expected to rely on visually meaningful parts of the image.

    The results we obtained show that known SoA Object Detection models are suitable for beetle-like insect recognition in the open, even in the presence of similar species. Additionally, the backbone choice can affect performance, in particular for SSD models. Transfer Learning is a necessary step, but pre-training on a generic insect dataset provided no improvements. In general, FasterRCNN with the MobileNetV3 backbone provides the best trade-off between accuracy and inference speed.

    The paper is organized as follows: In Section II, we go over the state of the art for both Generic Object Detection and Pest Insect Detection. In Section III, we describe our dataset, how we created it, and how we ensured the absence of duplicates/near-duplicates. In Section IV, we briefly introduce the different models that we have compared. In Section V, we explain how we designed our experiments. In Section VI, we present and comment on our experimental results.

    II.STATE OF THE ART

    In this section, we describe the state of the art for our application. First we spend a few words on Generic Object Detection, then we talk specifically about the work that has been done on insect recognition.

    A.Generic Object Detection

    Generic Object Detection is the task of assigning, for each relevant object instance in an image, a Bounding Box and a Label. The state of the art on this topic grew considerably in recent years, as shown in [9], which represents an up-to-date and complete survey of Generic Object Detection.

    We can divide Generic Object Detection models based on CNNs into two broad categories: Two-stage and One-stage detectors. Two-stage detectors are generally slower but more accurate. They first compute regions of interest (RoI) for each image and then compute the actual predictions for each RoI. One-stage detectors, instead, are faster at the cost of some accuracy, as they compute the predictions directly. Both detector types are composed of two parts: a feature extraction section, called the backbone, and a detection head. The backbone is generally an Image Classification model stripped of the classification layer. Common examples are VGG [10] and ResNet [11].

    We chose to select one representative model for each category to compare the trade-offs of the two approaches. FasterRCNN [12], for Two-Stage Detectors, is a well known architecture that uses a region proposal network (RPN) to extract the RoIs. For One-Stage Detectors we picked SSD [13], as it has shown, in the available literature, the highest inference speed for its class, while maintaining a respectable accuracy. SSD uses a set of Default Boxes which span the whole image across different scales, instead of an RoI pooling layer. To the two aforementioned models we have added a third competitor: RetinaNet [14], which is a sort of mixed breed. It is a One-Stage Detector, but its speed and accuracy are similar to those of common Two-Stage Detectors. Its peculiar characteristic is that it uses neither an RoI pooling layer nor Default Boxes, but it exploits a Focal Loss function, which gives foreground samples higher importance than background samples.

    B.Pest Insects Detection

    For Pest Insect Detection the state of the art is quite narrow. The majority of the studies focus on Insect Classification, usually with an approach similar to the one introduced in [4], where known CNN-based classifiers are trained on multi-class insect datasets like IP102 [15]. This dataset is one of the few openly available of its kind and it comprises over 75 000 images of insects from 102 different species; additionally, about 19 000 images of this dataset are annotated with bounding boxes. IP102, though, presents a very long-tailed distribution, which is not ideal for learning.

    Another popular branch, commonly associated with Pest Insect Detection, is the recognition of Plant Diseases. For example, [16]–[18] nominally relate to the identification of pest insects, but they actually detect their presence indirectly, by considering damages on plants.

    For actual Pest Insect Detection, different studies, like [8], [19]–[23], use common DL architectures. In these examples, however, images are collected in controlled environments, by adopting standardized equipment and locations. In [21] images are taken in an open field on a wide, bright red trap. In [8], [19], [20], closed traps with a white background are used, while [22] and [23] consider images taken in grain containers. The only study closely related to ours, both in terms of approach and final goal, is [24]. In that study, images of 24 different species have been collected from the internet, and a network similar to FasterRCNN [12] with the VGG [10] backbone has been trained to perform detection. In comparison to the aforementioned studies our work has a major difference: our images are neither taken in controlled environments nor shot with the same equipment. Differently from the other works, and in particular from [24], in our work we consider insect species that exhibit high similarity. Unlike other works we have extensively studied the capabilities of SoA models in order to find the best starting point for finer model design. Moreover, we show a duplicate-safe data acquisition procedure, while similar studies have not given relevance to this aspect, which, in our opinion, is quite important.

    III.DATASET

    The images for our dataset have been gathered through the internet, by scraping the results from two popular Search Engines, Google and Bing, as well as from Flickr, a known photo sharing community. We have collected images for three classes of insects: Popillia japonica (PJ, Fig. 2(a)), Cetonia aurata (CA, Fig. 2(b)) and Phyllopertha horticola (PH, Fig. 2(c)). The first one is a rapidly spreading and dangerous pest insect; the other two, even though similar to P. japonica, are harmless. P. horticola, in particular, for its morphological characteristics, is easily mistaken for P. japonica [25]. In a real-world scenario, it is very important to detect those insects correctly to avoid false alarms. For each class of insects we have collected the first 4000 results from each website, corresponding to a total of 36 000 pictures. These samples were then subject to two phases of filtering, with the purpose of obtaining a high-quality dataset. The first phase involved some automatic methods, but also manual filtering; the second phase, instead, was fully automatic.

    Fig.2.The species selected for the dataset.

    A.Initial Filtering

    In the initial filtering phase, we applied three steps:

    1) Duplicate removal based on file size and on the hash of the first 1024 bytes.

    2) Unrelated removal based on a weak classifier that distinguishes images containing insects from images that do not.

    3) Manual inspection.

    After the filtering we had approximately 1200 images remaining for each class.
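    As an illustration of step 1, the following sketch keeps a single file per (file size, partial hash) fingerprint. It assumes a flat directory of JPEG files; the use of MD5 and the function names are our own choices, not taken from the paper.

```python
import hashlib
from pathlib import Path


def quick_fingerprint(path: Path) -> tuple:
    """Fingerprint a file by its size and the MD5 hash of its first 1024 bytes."""
    head = path.read_bytes()[:1024]
    return path.stat().st_size, hashlib.md5(head).hexdigest()


def remove_exact_duplicates(image_dir: str) -> list:
    """Keep only the first file seen for each (size, partial-hash) fingerprint."""
    seen, kept = set(), []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        fingerprint = quick_fingerprint(path)
        if fingerprint not in seen:
            seen.add(fingerprint)
            kept.append(path)
    return kept
```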

    B.Removal of Near Duplicates

    A problem that persisted through the first filtering phase was the presence of near duplicates: two or more non-identical but extremely similar images. These images, of which an example is shown in Fig. 3, are particularly harmful in assessing the performance of models. This problem is highlighted and tackled in [26] for the CIFAR Dataset [27]. We have decided to use a similar approach, with a ResNet50 [11] Image Classification Model pre-trained on ImageNet [6], instead of a model specifically trained on our dataset, as we found the results to be sufficiently good for our application. Once the classification layer is removed, the model, given an RGB image as input, outputs a 2048-dimensional vector, which can be interpreted as an embedding of the input image. These embeddings are known to be useful, in image retrieval applications, to assess image similarity by means of common distance metrics, as stated by [28]. We have used the L2 distance and an empirically chosen distance threshold of 90, below which we have considered two images to be near duplicates. In the end, we built our sanitized dataset incrementally, by iteratively adding all the images that were distant enough from the ones already present. We chose our threshold conservatively; thus, this procedure may have ruled out some “good” samples. Nonetheless, we preferred it to having near duplicates slip through. This step removed another 5% to 7% of the samples, depending on the class. Finally, we split the dataset into training and test sets, with the usual 80%–20% proportions. The actual number of samples for each class is shown in Table I. Splits are balanced with respect to the set of insect classes that appear in the image: this approach produces subsets with reasonably similar distributions.

    Fig.3.An example of near duplicates.
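    A minimal sketch of this near-duplicate filter is given below, assuming PyTorch and torchvision are available (the older pretrained=True flag is shown; newer torchvision versions use a weights argument instead). The function names and the preprocessing pipeline are illustrative; only the ImageNet-pretrained ResNet50 embedding, the L2 distance, and the threshold of 90 come from the text.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# ResNet50 pre-trained on ImageNet; replacing the classification layer with the
# identity makes the forward pass return the 2048-dimensional embedding.
backbone = models.resnet50(pretrained=True)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])


@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return backbone(image).squeeze(0)  # shape: (2048,)


def filter_near_duplicates(paths, threshold=90.0):
    """Greedily keep images whose L2 distance to every kept embedding exceeds the threshold."""
    kept_paths, kept_embeddings = [], []
    for path in paths:
        embedding = embed(path)
        if all(torch.dist(embedding, kept).item() > threshold for kept in kept_embeddings):
            kept_paths.append(path)
            kept_embeddings.append(embedding)
    return kept_paths
```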

    TABLE I: DETECTION DATASET SAMPLE COUNTS (NUMBERS REFER TO TOTAL OBJECTS)

    IV.MODELS

    In this section,we provide a brief explanation of the main characteristics of the considered architectures.Since all detection models work on high-level features extracted by the backbone,we first focus on describing the actual detection networks.Backbones are described later in this section.

    A.Detection Models

    A Detection Model tries to assign a Bounding Box and a Class Label to each relevant object in an input image. The former is generally represented by a four-value vector; the latter is a scalar that encodes the actual class name. The Object Detectors we consider are based on CNNs [29], which are implemented as computational graphs comprising layers of 2D convolutions, nonlinear activations, and max-pooling operations. The weights of the convolution filters act as parameters of the network. These models are generally composed of two conceptual sub-networks: the backbone and the detector. The backbone is generally derived from SoA CNN Image Classifiers and outputs a set of high level features extracted from the input image. The detector takes these features as input and, through a series of convolution and linear operations, outputs the desired predictions. Each CNN is a parametric function, whose parameters can be optimized iteratively by means of common gradient descent algorithms, such as Stochastic Gradient Descent and Adam. In general the loss function compares the predicted output with the expected one and yields lower values when predictions are close to ground truths. Since these networks work with images, they must be robust to changes in the input that are not semantically meaningful. This is partially achieved during training with augmentation techniques that enhance data variability and improve the shift, scale, and rotation invariance of the learned model. Scale invariance, usually, is also boosted by specific design choices that allow features to be implicitly computed at different scales.

    We have considered the following models:FasterRCNN,SSD,and RetinaNet.We have chosen these models as they represent the state of the art when it comes to general performance and reliability in real world scenarios.

    1) FasterRCNN: FasterRCNN [12] is a well known Two-Stage detector. In the first stage, the region proposal network (RPN), shown in Fig. 4, extracts proposals from high level feature maps with respect to a set of predefined Anchor Boxes. These proposals take the form of:

    i) an objectness score: The probability that an anchor in a specific location contains an object.

    ii) a box regression: A 4-value vector that represents the displacement of the anchor that best matches the object position.

    Fig.4.FasterRCNN’s RPN architecture from [12].

    Features are then extracted from the best proposals by means of an RoI Pooling layer. These proposal-specific features are then passed to a regression head and a classification head; the former computes the final displacements to best fit the proposed box onto the object; the latter assigns the predicted class label. This model uses a loss function with two terms: one accounting for the classification error and one for the box regression error. The former takes the shape of a log loss over a binary classification problem; the latter is the smooth L1 loss for the box displacement regression.

    2) SSD: SSD [13] belongs to the family of One-Stage detectors. To improve inference speed, even though sacrificing accuracy, features are directly computed on top of the backbone, without the aid of RoIs. Each layer extracts features at a different scale and uses them to calculate predictions; this helps achieve scale invariance. Predictions have the same form as FasterRCNN's. Box Regression values are computed with respect to a set of Default Boxes, which are similar to the Anchor Boxes used in FasterRCNN. This model uses a loss similar to that of FasterRCNN, with a softmax log loss accounting for the classification error and a smooth L1 loss taking care of the box regression error.

    3) RetinaNet: RetinaNet's [14] architecture is straightforward: features coming from the backbone are fed into a regression head and a classification head, which predict the bounding box regressions with respect to a set of anchors and the class label.

    The uniqueness of this network is not in the architecture, but in the loss function that it uses, called Focal Loss. This loss assigns higher importance to hard-to-classify samples and, in turn, reduces the problem of the gradient being overwhelmed by the vast majority of easily classifiable samples belonging to the background class.
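    For reference, a binary focal loss in the RetinaNet style can be sketched as follows (the α = 0.25 and γ = 2 defaults are the commonly used ones; the exact configuration used in our experiments is not restated here):

```python
import torch
import torch.nn.functional as F


def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: down-weights easy, well-classified samples.

    logits and targets share the same shape; targets hold 0/1 labels.
    """
    prob = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = prob * targets + (1.0 - prob) * (1.0 - targets)  # probability of the true class
    alpha_t = alpha * targets + (1.0 - alpha) * (1.0 - targets)
    return (alpha_t * (1.0 - p_t) ** gamma * ce).mean()
```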

    B.Backbones

    The backbone is the section of a detection model that performs feature extraction. Usually, a backbone is composed of an Image Classification network stripped of its classification head. Here we briefly introduce the backbones considered in our study, which are VGG, ResNet, DenseNet, and MobileNet. As previously stated for the detectors, these models have been chosen for their general reliability in real world scenarios.

    1) VGG: VGG [10] is one of the first very deep Image Classification networks. Its design has been rather surpassed by fully convolutional approaches, but it still represents a relevant baseline. Its architecture borrows from AlexNet [30], with Convolutional Layers followed by Max Pooling and activation functions. The final fully connected layers are stripped, since the model needs only to perform feature extraction.

    2) ResNet: ResNet [11] is considered to be the current gold standard both as an image classifier and as a backbone. ResNet first introduced the concept of Residual Blocks, shown in Fig. 5. This is a classic convolutional block with a skip connection that allows the gradient to flow backwards freely and minimizes the problem of the Vanishing Gradient. Let x be the block input and F(x) the output of the convolutional block; the residual block with skip connection then computes y = x + F(x), which corresponds to the computational flow in Fig. 5.
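    A minimal PyTorch sketch of such a block (the two 3×3 convolutions are illustrative and do not reproduce ResNet's exact configuration):

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Residual block computing y = x + F(x), where F is two 3x3 convolutions."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(x + out)  # skip connection: the input is added back to F(x)
```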

    3) DenseNet: DenseNet [31] is considered a good Image Classifier, but it is not popular as a backbone. DenseNet takes the idea of skip connections a step further, with the Dense Block: a series of feature extraction blocks, each one connected to the following ones, allowing both backward gradient flow and forward information flow. This is particularly suitable when learning from small datasets. In comparison to ResNet, features from different depth levels are explicitly combined inside the Dense Block.

    Fig.5.ResNet’s residual block from [11].

    4) MobileNet: MobileNet is an effort to optimize CNN-based Image Classifiers to make them run on mobile devices while minimizing the loss of accuracy. To achieve this, a number of strategies have been employed to reduce the parameter count. The convolution operations have been decomposed into Depthwise Separable Convolutions, where a convolution against a C×N×N filter is split into a convolution against a 1×N×N filter (applied per channel) followed by a convolution against a C×1×1 one. On top of that, MobileNet uses an Inverted Residual Block that has the same purpose as the one shown in Fig. 5, but is more efficient from the standpoint of computational resources.
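    A possible PyTorch sketch of a depthwise separable convolution (illustrative only; MobileNet additionally interleaves batch normalization and nonlinear activations):

```python
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """A per-channel NxN convolution (groups=in_channels) followed by a 1x1 pointwise
    convolution that mixes information across channels."""

    def __init__(self, in_channels: int, out_channels: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size,
                                   padding=kernel_size // 2, groups=in_channels, bias=False)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```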

    5) Feature Pyramid Network: A feature pyramid network (FPN) [32] is a building block designed to enhance the output of a feature extraction backbone with features that span the input across multiple scales. Features are computed across different scales by means of convolution operations, then they are merged, as shown in Fig. 6, in order to obtain a set of rich, multi-scale feature maps.

    Fig.6.FPN architecture from [32].

    The FPN is very important for Object Detection networks, as multi-scale features improve performance when recognizing objects of different sizes in the same image.

    V.EXPERIMENT DESIGN

    In this section, we describe the experiments used to compare the different models and backbones discussed in Section IV. Additionally, we describe some experiments performed to assess the impact of pre-training on model performance.

    A.Performance Evaluation of Models and Backbones

    Each detection model has been combined with each of the four selected backbones to assess how this choice impacts accuracy and inference time. Backbones were all pre-trained on ImageNet and the experiments have been carried out on an Nvidia Titan V GPU.

    Note that FasterRCNN and RetinaNet use FPNs, while SSD constructs multi-scale features directly on top of the backbone's output.
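    As an example of such a combination, a FasterRCNN detector with a MobileNetV3-Large FPN backbone can be assembled with torchvision roughly as follows. This is a sketch: argument names such as pretrained_backbone depend on the torchvision version, and the paper does not state that this exact API was used.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 4  # three insect classes plus the implicit background class

# Detector with an ImageNet pre-trained MobileNetV3-Large FPN backbone.
model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(
    pretrained=False, pretrained_backbone=True)

# Replace the box classification head so it predicts our classes instead of COCO's.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
```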

    Each model was trained on the same data; parameters were optimized with stochastic gradient descent (SGD) with Momentum, using Cosine Annealing with Linear Warmup as the learning rate scheduling policy. This policy, shown in Fig. 7, stabilizes training in the initial epochs and allows for fine optimization in the last ones.

    Fig.7.Example of Cosine Annealing with Linear Warmup learning rate scheduling.
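    A minimal sketch of this schedule expressed as a learning-rate multiplier (the warmup and total step counts below are placeholders, not the values of Table II):

```python
import math

import torch


def warmup_cosine(step: int, warmup_steps: int, total_steps: int) -> float:
    """Learning-rate multiplier: linear ramp-up, then cosine decay towards zero."""
    if step < warmup_steps:
        return (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))


model = torch.nn.Linear(10, 2)  # stand-in for a detection model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda step: warmup_cosine(step, warmup_steps=500, total_steps=10000))
# scheduler.step() is then called once per iteration, after optimizer.step().
```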

    Table II shows the different hyperparameters considered for each Backbone/Detector pair. The number of epochs has been selected so that each model was trained to convergence. SSD was trained with a larger batch size, as its smaller memory footprint allowed it.

    We employ McNemar's test, as suggested in [33], to assess the statistical difference between pairs of models. This test is applied to a 2×2 contingency table of the form shown in Table III, where, given a binary classification test, a and d count the samples on which the two models agree, while b and c count those on which they do not. In the Object Detection case, each model can predict a bounding box for a particular ground truth or not, hence:

    ● a is the number of ground truths predicted by both models.

    ● d is the number of ground truths missed by both models.

    ● b is the number of ground truths predicted by the first model but missed by the second one.

    ● c is the number of ground truths missed by the first model but predicted by the second one.

    McNemar's test statistic makes use of the formula shown in (1), computed from the two disagreement cells of the contingency table:

    χ² = (|b − c| − 1)² / (b + c).    (1)

    The null hypothesis is H0: pb = pc; the alternative is H1: pb ≠ pc, where p denotes the theoretical probability of occurrence in the corresponding cell. In case of rejection of the null hypothesis, it is possible to affirm that the two models are significantly different.

    Applying McNemar's test to an Object Detection algorithm is not straightforward, as each input image may contain multiple objects and each of these objects may or may not be recognized correctly. In our case we considered an object recognized if the model predicted a box with the following characteristics: intersection over union (IoU) with the object's ground truth box greater than 0.5; confidence score greater than 0.5; the correct label for the object. Each prediction can match only one object. The IoU of two boxes A and B, as defined in (2), is the ratio between the area of the intersection of the two boxes and the area of their union:

    IoU(A, B) = area(A ∩ B) / area(A ∪ B).    (2)

    Thus, its value is 1 when the two boxes perfectly overlap, but shrinks to 0 as the two boxes either move away from each other or one becomes contained by the other.
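    Both quantities can be computed with a few lines of code; the sketch below assumes boxes in (x1, y1, x2, y2) format and uses the continuity-corrected form of the statistic:

```python
def mcnemar_statistic(b: int, c: int) -> float:
    """McNemar's chi-squared statistic (with continuity correction) from the two
    disagreement cells of the contingency table in Table III."""
    return (abs(b - c) - 1) ** 2 / (b + c) if (b + c) > 0 else 0.0


def iou(box_a, box_b) -> float:
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    intersection = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return intersection / (area_a + area_b - intersection)
```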

    TABLE II: MODELS AND HYPERPARAMETERS USED FOR TRAINING

    TABLE III: EXAMPLE OF CONTINGENCY TABLE

    B.Pre-Training Impact

    A secondary aspect we wanted to test is the impact of pre-training on performance in the following two scenarios:

    1) Starting from a model completely pre-trained on the common objects in context (COCO) [34] dataset.

    2) Using a backbone pre-trained on a generic insect dataset instead of ImageNet.

    We restricted this experiment to ResNet50-FPN FasterRCNN, as pre-trained COCO weights were already available. COCO is a widely known dataset, annotated for different computer vision tasks, one of which is Object Detection. To train the insect-specific backbone, we have used a custom dataset generated by combining the images from our 3-class dataset with a filtered version of IP102 [15], from which we removed insects with too low a sample count, duplicates, and misplaced images. To this we have added some background images that contained only plants or flowers without any insect. All the steps to ensure dataset fairness, described in Section III, have been applied to this data as well. To account for the high imbalance of this dataset, we have used a Weighted Cross Entropy loss function, assigning higher importance to less frequent classes. For these tests we have used the same hyperparameters as for ResNet101-FPN FasterRCNN, which are listed in Table II.
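    One common way to obtain such weights is inverse class frequency, sketched below; the class counts are hypothetical and the paper does not report the exact weighting scheme used.

```python
import torch
import torch.nn as nn

# Hypothetical per-class sample counts for the combined insect dataset.
class_counts = torch.tensor([5300.0, 240.0, 1800.0, 90.0])

# Inverse-frequency weights, normalized so that their mean is 1.
weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=weights)  # rarer classes contribute more to the loss
```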

    VI.RESULTS

    In this section, we present the results of the experiments described in Section V. Furthermore, we compare our results to the ones obtained in other studies available in the scientific literature.

    A.Comparison Across Models

    Table IV shows the performance of each tested model-backbone combination. The capability of the models to correctly predict the bounding boxes and to assign the labels has been evaluated by means of mean average precision (mAP). In Object Detection, the calculation of this parameter, though, may introduce some ambiguity. Therefore, we have used the AP at IoU = 0.50 as specified in the COCO challenge [34]. Another relevant aspect that we have evaluated is the computational requirements, as we target smart agriculture, where low-power IoT devices are deployed in the field and edge computation may be preferable to avoid frequent, energy-expensive data transfers over the network. To evaluate the computational requirements, we have measured the inference speed in Frames per Second, both on GPU (an Nvidia GeForce RTX 2080) and on CPU (an Intel Xeon Silver 4116). These measurements do not provide a direct evaluation of the performance on IoT nodes, but rather a relative ordering of the model-backbone combinations in terms of computational requirements. These tests have been performed by averaging the inference time over 100 iterations without batching; the inference operation includes any preprocessing needed by the model, usually normalization and resizing of the input images. The input images were randomly generated, with a size of 1280×720 pixels.
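    A sketch of this measurement, assuming a torchvision-style detector that accepts a list of 3×H×W image tensors and performs its own preprocessing internally:

```python
import time

import torch


@torch.no_grad()
def measure_fps(model, device: str = "cpu", iterations: int = 100) -> float:
    """Average throughput on single randomly generated 1280x720 images, no batching."""
    model.eval().to(device)
    images = [torch.rand(3, 720, 1280, device=device)]  # one image per forward pass
    model(images)  # warm-up call, excluded from the timing
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iterations):
        model(images)
    if device == "cuda":
        torch.cuda.synchronize()
    return iterations / (time.perf_counter() - start)
```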

    As shown in Table IV, FasterRCNN and RetinaNet perform better than SSD, with small changes in mAP among different backbones. SSD struggles with less powerful backbones like MobileNet and VGG16, but it obtains higher scores when paired with more demanding backbones. There is no clear winner in terms of raw mAP, as the scores of the best performing model-backbone combinations are very similar. However, these architectures do not support interpretability of results, as it is not possible to open the black box and draw cause-effect indications. This is particularly true when we keep the detector unchanged and only switch the backbone. For these reasons, the optimal architecture must be found empirically through a trial and error approach.

    TABLE IV: PERFORMANCE OF THE COMPARED MODELS IN TERMS OF MEAN AVERAGE PRECISION ON THE TEST SET AND FRAMES PER SECOND AT INFERENCE TIME. THE FALSE POSITIVE COUNT FOR CLASS POPILLIA JAPONICA IS ALSO SHOWN

    Regarding computation speed, SSD is the fastest on CPU; this does not come as a surprise, as this model is designed to be fast and is especially suitable for applications where the available hardware has no GPU acceleration. On GPU, SSD is still fast, but, surprisingly, FasterRCNN is computationally lighter, with a noticeably higher FPS than all the others. This follows from the MobileNetV3 model being specifically optimized to reduce execution time; this is proportionally more relevant for FasterRCNN than for other detectors. Whether this depends on how the computation is allocated on the GPU or on some specific backbone-detector interaction is difficult to say and should be the object of further investigation, outside the scope of this work.

    Table IV also shows the number of false positives produced on the test set for the class P. japonica. FasterRCNN is the model showing the fewest false positives; in particular, when this model is associated with the MobileNetV3 backbone, the number of false positives is the lowest among all models. If we consider these numbers with respect to the overall number of non-PJ insects present in our dataset, which is 530, we see that the best configuration has a false positive ratio of 2.26%. Note that this is not a formally precise false positive ratio, as the concept of negative sample is ill defined for the problem of Object Detection.

    Fig. 8 shows, for each model, how applying a threshold to the confidence score of the predictions affects the false positive count and the mAP. Every prediction below the threshold is automatically ignored. For the best performers, mAP is not greatly reduced, even at higher threshold values, while using small thresholds can still have a great impact on the number of false positives for models that produce many of them. This type of plot is really helpful in reducing false alarms while preserving detection performance.

    Fig.8.False Positives for PJ class and mAP variation for different prediction confidence thresholds.
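    Applying such a threshold is a simple post-processing step; a sketch assuming torchvision-style prediction dictionaries with 'boxes', 'labels', and 'scores' entries:

```python
def apply_confidence_threshold(prediction: dict, threshold: float = 0.5) -> dict:
    """Drop every detection whose confidence score falls below the threshold."""
    keep = prediction["scores"] >= threshold
    return {key: value[keep] for key, value in prediction.items()}
```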

    Fig. 9 shows the trend of the training loss for our three models; we can notice how it smoothly converges in all cases but FasterRCNN with the MobileNetV3 backbone. The loss increase, however, seems to only marginally affect the mAP score during validation, as shown in Fig. 10(a). This tells us that a thoroughly calibrated training procedure may lead the loss of this model, which already is one of the best, to a plateau as well, possibly with better performance.

    Fig.9.Train loss for the 3 detection architectures with different backbones.

    Fig.10.Validation mAP for MobileNetV3 and DenseNet169.

    Fig. 10 shows the validation mAP score for our models with the DenseNet169 and MobileNetV3 backbones. As shown in Fig. 10(b), DenseNet169 provides a smooth increase in mAP for all architectures, with minimal variance among training epochs and detectors, whilst the MobileNetV3 plot is noisier and the results among the detectors are very different. The equivalent plot for ResNet101 is closer to DenseNet169, while the one for VGG16 is closer to MobileNetV3; this suggests that deeper and more demanding backbones have a more consistent behavior independently of the specific detection model, while less demanding ones have trouble with light detectors like SSD.

    Fig. 11 shows the McNemar's Test p-values for each model pair. Considering the conventional significance level of α = 0.05, we can see that the majority of the models are significantly different from one another, meaning that, even with similar performance, they have different weaknesses and strengths. This makes the choice of the best model a non-trivial decision. Furthermore, for each detector aside from SSD, the mAP score remains similar across the different backbones, suggesting that the backbone choice is not that critical.

    Given the performance results together with McNemar's test outcome, we can confidently say that a FasterRCNN detector with a MobileNetV3 backbone is the best choice for our case study, as it yields top mAP coupled with the highest speed on GPU. Moreover, its false positive count is the lowest among competitors, a result that is particularly relevant in this application.

    When computational power is a strong constraint and no hardware acceleration is available, SSD with the MobileNetV3 backbone should be preferred instead, as it is reasonably fast without a significant drop in mAP. We comment that, even though an SSD with the VGG16 backbone is almost 3 times faster on CPU, the loss in detection accuracy does not nicely trade off with the improvement in latency. Yet, this combination can be used, accepting the reduced detection performance, in systems that are highly constrained in computational resources and/or energy.

    B.Effects of Pre-Training

    Table V shows the performance of FasterRCNN with the ResNet50 backbone in the three cases described in Section V-B. For this evaluation, we only consider Mean Average Precision, since the architecture is constant; thus, the FPS scores of the different solutions are constant too.

    The highest mAP is reached when the model is fully pre-trained on COCO; pre-training the backbone on the insect dataset presented in Section V-B results, instead, in the lowest score. This is consistent with the loss trends of Fig. 12, with coco being the lowest curve and insects the highest. Results suggest that using an ad-hoc dataset to pre-train the backbone harms the overall Transfer Learning process instead of improving it. Whether this is due to the relatively small size of the dataset used or inherent in the usage of less diverse data needs to be investigated further.

    Fig.11.McNemar’s Test on compared models.Each cell contains the p-value for the specific pair.Significance level is α=0.05.

    TABLE V: PERFORMANCE OF FASTERRCNN-RESNET50, WITH DIFFERENT PRE-TRAINING DATASETS, EXPRESSED IN TERMS OF MEAN AVERAGE PRECISION ON THE TEST SET

    As shown in Fig. 13, no statistical difference between the model pre-trained on COCO and the one pre-trained on ImageNet is present, suggesting that pre-training the detection head has no significant benefits, at least in this case. The model pre-trained on generic insect images, however, is significantly different from the others. This further supports the thesis that pre-training on specific data significantly affects the learning, possibly for the worse. Hence, sticking to common Transfer Learning procedures seems to be the safer choice.

    C.Visualization of Importance

    In applications where the solution is implemented by means of very deep models, it is also important to assess the prediction quality in a human-readable form. Guided Gradient-weighted Class Activation Mapping (Guided GradCAM) [35] is an approach that combines the results from Guided Backpropagation [36] and GradCAM [35] to form a visually understandable image of the areas of major importance for the prediction.

    Guided Backpropagation relies on the intuition that the gradient of the relevant output (e.g., the class score) w.r.t. the input image is a good indicator of which areas of the image are more relevant for the prediction. It is called guided because only positive gradients, corresponding to positive activations, are back-propagated. Instead, in GradCAM, the activation of the deepest convolutional layer of a CNN, properly weighted by the gradient of the relevant output w.r.t. said layer's parameters, is interpreted as a heatmap of the relevant regions of the input. Guided GradCAM is the product of these two pieces of information.
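    One possible way to produce such maps is through the Captum library; the sketch below applies Guided GradCAM to a ResNet50 classifier (the classification backbone rather than the full detector, whose wiring the paper does not detail) and uses a random tensor as a placeholder input. The pretrained flag is version-dependent in torchvision.

```python
import torch
from torchvision import models
from captum.attr import GuidedGradCam

model = models.resnet50(pretrained=True).eval()
# Attribute with respect to the deepest convolutional stage of the network.
guided_gradcam = GuidedGradCam(model, layer=model.layer4)

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input image
attribution = guided_gradcam.attribute(image, target=0)  # importance map for class index 0
```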

    In the case of insect recognition, these visualizations are particularly useful if an expert wants to evaluate the model based on the relevance of the highlighted insect features. In Fig. 14 we can see how the regions of highest importance are located on the head of the Popillia japonica and around its white hairs, a distinctive feature of this species.

    D.Comparison with Other Studies

    Our results are qualitatively similar to those of other works in the scientific literature, such as [8] and [24]. The former reached 75.46% average mAP with a peak of 90.48% on the best predicted class; the latter topped out at 89.22% mAP. We have reached a maximum mAP of 93.3%.

    However,we point out that,even though the two mentioned studies consider some of the models that we have also included in our work,this comparison can only be qualitative for the following reasons:

    1) The considered datasets are different; in particular for [8], whose data come from traps. To be more precise, we believe that our dataset is inherently more challenging. For this reason, we can consider our results to be at least as good as the ones reported by the cited works.

    2) The two mentioned studies have not disclosed their approach to mAP calculation. Therefore, the reported numbers may not have the same exact meaning as in our paper.

    3) In [24],the adopted backbone and the procedure used to train SSD and FasterRCNN are not specified.

    4) In [8], when ResNet101 and FasterRCNN are considered, the reported results are significantly worse than ours, with a top mAP of 71.62%, as opposed to our 92.14%. This strengthens the belief that the adopted procedures are inherently different.

    VII.CONCLUSIONS AND FUTURE WORK

    Fig.12.Train loss (left) and Validation mAP (right) for FasterRCNN ResNet50 with different pre-training.

    Fig.13.McNemar’s Test on FasterRCNN ResNet50 models with different pre-training datasets.The first name is the dataset used to pre-train the whole model,the second is the one used for the backbone.Significance level is α=0.05.

    In this paper, we have evaluated different combinations of models and backbones for detecting a pest insect in images that are not obtained in controlled environments. Our results demonstrate that, at least for insects similar to Popillia japonica, this task can be performed with high accuracy, even by using general-purpose models. Not only has detection performance been estimated, but also inference speed, which indicates which of the tested models is the least computationally demanding. The best detection performance has been reached by the combination RetinaNet-ResNet101 (mAP = 93.3%), but, on average, FasterRCNN was the best performer. The model with the best throughput on GPU is FasterRCNN paired with a MobileNetV3 backbone (FPS = 60.92). The best throughput on CPU was obtained by the combination SSD-VGG16 (FPS = 4.27). Given the statistical similarity between some of the models, we think that the critical part is the choice of the overall detection architecture rather than the specific backbone. SSD is an exception to this, as the backbone choice proved to play a significant role in the final results.

    Fig.14.Example of visualizing image importance through Guided GradCAM for a FasterRCNN model with ResNet50 backbone.

    Additionally, our experiments show that pre-training on ImageNet is a suitable Transfer Learning setup for insect recognition tasks, and pre-training on small task-related datasets seemingly has no benefits. Overall, we consider FasterRCNN with the MobileNetV3 backbone a strong baseline for insect detection, given both the good performance and the high inference speed on CPU and GPU; moreover, this model produced the lowest number of false positives for the pest insect class, which is of particular importance for this type of application.

    We conclude that widely adopted generic object detection architectures are well suited for the recognition of beetle-like insects and, realistically, for insects in general. The real advance in general insect recognition would probably come from the construction of bigger datasets, with a large number of species and images, rather than from the search for particular architectures, which may, in the end, be too task-specific.

    Future work should investigate the development of optimized models that take the found optimum as a baseline and make task-specific improvements to the architecture. A few examples are: addressing hardware and embedded system resource constraints to port the solution to mobile devices, leveraging known characteristics of the target pest insect (inductive bias), and working on methods aimed at improving the detection of small insects as well as dealing with bad lighting conditions and harsh environments. In the spirit of deep learning, we would also envisage the generation of a huge image dataset, possibly containing different species. This might be achieved by considering collaborative methods where citizens contribute by taking pictures and delivering them to a cloud validation process before enriching the database.
