
    Deep Learning-Based Model for Detection of Brinjal Weed in the Era of Precision Agriculture

Computers, Materials & Continua, 2023, Issue 10

Jigna Patel, Anand Ruparelia, Sudeep Tanwar*, Fayez Alqahtani, Amr Tolba, Ravi Sharma, Maria Simona Raboaca and Bogdan Constantin Neagu

1Department of Computer Science and Engineering, Institute of Technology, Nirma University, Ahmedabad, 382481, India

2Software Engineering Department, College of Computer and Information Sciences, King Saud University, Riyadh, 12372, Saudi Arabia

3Computer Science Department, Community College, King Saud University, Riyadh, 11437, Saudi Arabia

4Centre for Inter-Disciplinary Research and Innovation, University of Petroleum and Energy Studies, Dehradun, 248001, India

5Doctoral School, University Politehnica of Bucharest, Bucharest, 060042, Romania

6National Research and Development Institute for Cryogenic and Isotopic Technologies-ICSI Rm. Valcea, Ramnicu Valcea, 240050, Romania

7Power Engineering Department, Gheorghe Asachi Technical University of Iasi, Iasi, 700050, Romania

ABSTRACT The overgrowth of weeds alongside the primary crop in the fields reduces crop production. Conventional solutions like hand weeding are labor-intensive, costly, and time-consuming, so farmers have turned to herbicides. Herbicide application is effective but causes environmental and health concerns. Hence, Precision Agriculture (PA) suggests variable spraying of herbicides so that herbicide chemicals do not affect the primary plants. Motivated by this gap, we propose a Deep Learning (DL)-based model for detecting eggplant (brinjal) weed in this paper. The key objective of this study is to detect plant and non-plant (weed) parts in crop images. With the help of object detection, the precise location of weeds in images can be determined. The dataset was collected manually from a private farm in Gandhinagar, Gujarat, India. The proposed model applies a combined approach of classification and object detection. A Convolutional Neural Network (CNN) model is used to classify weed and non-weed images; DL models are then applied for object detection. We compared the DL models based on accuracy, memory usage, and Intersection over Union (IoU). ResNet-18, YOLOv3, CenterNet, and Faster RCNN are used in the proposed work. CenterNet outperforms all other models in terms of accuracy, i.e., 88%. Compared to the other models, YOLOv3 is the least memory-intensive, utilizing 4.78 GB to evaluate the data.

KEYWORDS Precision Agriculture; Deep Learning; brinjal weed detection; ResNet-18; YOLOv3; CenterNet; Faster RCNN

    1 Introduction

According to research statistics from the Food and Agriculture Organization Corporate Statistical (FAOSTAT) Database [1], India is the second-largest producer of brinjal in the world after China. In 2022, India produced over 12.98 million metric tons of brinjal across the nation [2,3], which signifies the importance and popularity of the crop. It is a vegetable crop grown all over India except in higher-altitude areas [4]. In many developing countries like India (as per the World Bank), as shown in Fig. 1a, the GDP contribution of agriculture, forestry, and fishing is decreasing due to the high and fast growth rates of the services and industry sectors [5,6]. Although India's agriculture sector is crucial to the country's economy, it also faces challenges such as unpredictable climate, poor supply chains, and low productivity [7–9]. India today has almost 315 million rural residents using smartphones and the Internet. Technology has brought improvements to irrigation systems, crop management, and harvesting equipment, as well as to predicting the ideal weather for sowing and harvesting. As shown in Fig. 1b, India ranks second in the production of brinjal [10,11] across the globe. As shown in Fig. 1c, brinjal is India's third most widely grown vegetable.

Figure 1: Statistics on the need for weed detection

Given the importance of the brinjal crop in the economic context, it is necessary to use techniques that maximize its productivity and yield quality. To enhance the productivity of the vegetable crop, factors such as adequate water and nutrition, along with control over weeds, should be considered [12,13]. Therefore, it is crucial to reduce the losses imposed by weeds. According to estimates, Fig. 2a illustrates how crops typically lose 20–80 percent of their productivity to weeds, diseases, and pests. Herbicide application is a standard solution to avoid weeds in vegetable crops, including brinjal [14,15]. Statistics shown in Fig. 2b reflect that agricultural tech startups have raised over 800 million USD in the last six years across the globe [10].

Figure 2: (a) Losses due to weeds (b) AI techniques in smart agriculture

Crops treated with chemical applications face a number of health, environmental, and biological issues [16]. Moreover, the high cost of herbicides and their adverse effects on human health are additional concerns [17]. Excessive usage of herbicides can also make weeds resistant to the chemicals. A few studies have shown that a common herbicide (glyphosate) has toxic elements that are harmful to human beings [18–20]. This must be considered part of the technological revolution and treated as a priority issue. Therefore, Artificial Intelligence (AI) can contribute by efficiently delivering solutions that cater to this purpose. Continuous research is underway to control weeds using biological methods, such as deploying natural microbes or insects that rely on and feed upon weeds [21], reducing the negative impacts of herbicide chemicals. Growing technology in PA follows the practice of site-specific weed management (SSWM) [22–24]. Developments in computer vision and Deep Learning (DL) technologies simplify the object detection task, and these technologies have been extensively studied for identifying weeds [25,26]. Conventional techniques such as image preprocessing, feature extraction, classification, and segmentation have been explored for weed detection [27,28]. They work well when images are captured under perfect conditions and at specific plant growth stages [23,29]. Hence, the potential of DL has been utilized in PA, especially for weed detection [30,31]. Compared to the conventional techniques discussed above, DL can learn hidden feature expressions and hierarchical insights from the images, which helps avoid the tedious process of extracting and optimizing handcrafted features [32]. Furthermore, semantic segmentation is one of the most effective approaches for alleviating the effect of occlusion and overlap, since pixel-wise segmentation can be achieved [33]. The main objective of this paper is to classify images and detect weed objects in brinjal crop images; the results can further be used to estimate weed densities for successfully achieving SSWM-based herbicide application, using segmentation models such as Convolutional Neural Networks (CNN), U-Net, LinkNet, and SegNet.

    1.1 Research Contributions

    Following are the research contributions of the proposed work:

• We generated a real-time dataset by collecting images of brinjal crop variants from a farm field of brinjal vegetable crops near the village of Kudasan, Gandhinagar, Gujarat, India.

• We utilized a DL-based model for weed identification in brinjal crop fields. The proposed model, including object classification and object detection, is implemented using CNN and DL models (ResNet18, YOLOv3, CenterNet, Faster RCNN).

• The performance of the proposed model is evaluated on Intersection over Union (IoU) and memory usage, along with various standard metrics such as F1-score, precision, and recall.

    1.2 Novelty

The classification and object detection of weeds has been the subject of substantial agricultural research. Despite brinjal being one of the most widely grown vegetable crops in the world, weed detection has largely been researched only for crops such as potatoes and soybeans. The leaf images of weeds like Acalypha Indica and Amaranthus Viridis are similar to those of brinjal leaves, making it difficult to train an accurate model. According to prior studies, deep pre-trained models followed by classification can give effective identification outcomes.

    1.3 Organization of the Paper

The rest of the paper is organized as follows. First, Section 2 covers the literature review with a relative comparison of the DL-based models used to detect brinjal weed. Next, Section 3 addresses the proposed model, and Section 4 presents the results analysis and discussion of the proposed model. Finally, the paper is concluded in Section 5.

    2 Literature Review

This section describes related work on weed detection using various methodologies (as shown in Fig. 3). We considered research articles published in the past ten years on a variety of crop and vegetable datasets, applications, and methodologies for weed detection, studied to satisfy the following objectives:

1. Herbicide-use optimization: saves farmers' money on herbicides and aids organic farming.

2. Effective site-specific weed management: spraying herbicides only on the parts of plants affected by weeds instead of spraying the entire field.

3. Reduced manual intervention and laborious processes: hand weeding is tedious and time-consuming.

4. Cost-effectiveness in farm labor expenses: labor shortages have increased the cost of hand weeding.

5. Utilizing the potential of PA: technologies such as DL have been helpful in these tasks.

6. Improved output metrics through the proposed model architecture: different researchers have applied various methods to help accomplish the task.

Figure 3: Task flow for the literature survey carried out in the proposed work

Fig. 3 shows the task flow for the literature survey in the proposed work, which helped identify the reference research most aligned with the proposed study. For example, Wang et al. [34] developed a real-time embedded device for weed detection using sensors, a control module, and a global positioning system with classification algorithms. The system was tested in two wheat fields [35], where the classification part depended heavily on the sensors and could not perform well when the sensor positions were changed. Later, Torres-Sospedra et al. [36] applied a two-stage procedure on smoothed ensembles of neural networks for weed detection in orange groves. In the first stage, the main features of an image, such as trees, trunks, soil, and sky, are determined; in the second stage, weeds are detected in those areas determined to be soil in the first stage. The color-detection algorithms used do not generalize to diverse environments, and the dataset was very small (10 training and 130 test images). Hameed et al. [37] extended the research toward detecting weed, wheat, and barren land in a wheat crop field using background subtraction and image classification. The dataset was self-developed using a drone at a resolution of 4000 × 3000 pixels in JPEG format. The classification part is carried out using computer vision and image processing techniques. The results achieved (99%) are good, but the approach is only suitable for distinguishing weeds, barren land, and wheat. Weed detection more broadly involves techniques based on crop and weed shape, size, and color.

Dos Santos Ferreira et al. [38] demonstrated detection performance in soybean crops, classifying grasses and broadleaf weeds using CNN and Support Vector Machines. In contrast, Bakhshipour et al. [4] applied weed detection in sugar beet crops with four types of weed using Principal Component Analysis (PCA) and Artificial Neural Networks (ANN). Furthermore, Bakhshipour et al. [39] identified weeds in sugar beets using shape features with ANN and Support Vector Machine (SVM), with comparative analytics for shape features alongside other features on DL models. Asad et al. [8] applied semantic segmentation on a canola field for weed detection using the U-Net and SegNet segmentation models. Ashraf et al. [40] classified rice crop images based on their weed density classes, such as no, low, and high weed density, using SVM and Random Forest. Duncan et al. [41] used object-wise semantic segmentation of images into crop, soil, and weed; an encoder-decoder network was applied to achieve a mean IoU of 9.6. Raja et al. [33] achieved higher weed detection accuracy in Unmanned Aerial System (UAS) imagery from a field robot and an Unmanned Aerial Vehicle (UAV) using the deep learning architectures U-Net, SegNet, Fully Convolutional Network (FCN), and DeepLabV3+. Subeesh et al. [42] addressed bell pepper crop and weed identification with the deep CNN models AlexNet, GoogLeNet, Inception V3, and Xception, with Inception V3 reaching an accuracy of 97.7%. Other researchers have worked on related techniques; for example, Perez-Ortiz et al. [43] worked on a semi-supervised system for mapping weeds in sunflower crops. However, that framework focuses heavily on row-plant images taken from a distance, whereas our dataset is at the plant level, zoomed to plant leaves and weeds. Then, Partel et al. [44] developed a low-cost technology for precision weed management, which aligns with our objective of effective herbicide application; still, after classifying the plants and weeds using DL algorithms, the research lacks a comparison with the traditional broadcast sprayers usually employed to treat the entire field to control pests. Various researchers also use graph-based DL methods for weed detection. For example, Hu et al. [45] developed a graph convolutional network to identify multiple types of weeds from RGB (Red, Green, and Blue) images taken in complex environments with multiple overlapping weed and plant species under highly variable lighting conditions. Classification (here, recognition) is done using ResNet-50 and DenseNet-202 backbones with five-fold cross-validation, and the ResNet-50 backbone achieved state-of-the-art performance as Graph Weeds Net. The limitation of this research is that it covers many different types of weeds and does not explicitly target any particular vegetation, crop, or plant. Nevertheless, the main contribution of the paper is the use of graph convolutional networks for better classification. Along with the latest DL technologies used on image datasets, some researchers show the use of various image descriptors for feature extraction.

Extending this line of research, Bosilj et al. [46] emphasized pixel-based crop vs. weed classification methodologies, particularly in challenging circumstances with overlapping plants and illumination variation. The study compares content-driven, morphology-based descriptors and multiscale profiles against state-of-the-art feature descriptors with a fixed neighborhood, such as histograms of gradients and local binary patterns previously used in PA. The study used two datasets: the Sugar Beets 2016 dataset (280 images) and the Carrots 2017 dataset. Classification is done using a Random Forest classifier. The need for more extensive attribute-profile (AP) descriptors based on numerous hierarchies is among the drawbacks. Nevertheless, the approach can potentially improve pixel-based classification and make it more accurate. Furthermore, combining region-based classification and morphological segmentation with hierarchical image representation could bring further improvement by adding the benefit of reusing the hierarchical image representation.

Segmentation is the most computationally intensive phase, serving both segmentation itself and feature extraction. Bell pepper weed identification with four DCNN models (Inception V3, AlexNet, GoogLeNet, and Xception) was studied in a specific agricultural application by Verma et al. [42], where the Inception V3 model was the better fit for higher accuracy. For future development, weeders and site-specific weed management modules could be combined. The dataset was manually collected from ICAR-Central Institute of Agricultural Engineering, Bhopal, India. Raja et al. [33] created a more powerful architecture using data from the Sugar Cane Orthomosaic and Crop/Weed Field Image datasets. To improve weed recognition accuracy in UAS imagery (field robot and Unmanned Aerial Vehicle (UAV)), a model was created using the deep learning architectures U-Net, FCN, SegNet, and DeepLabV3+, with DeepLabV3+ achieving 84.3% accuracy. The model is ineffective for low-quality, low-resolution imagery. Table 1 compares the DL-based techniques used to detect brinjal weed with respect to parameters such as objectives, methodology, results, and limitations.

Table 1: A relative comparison of the deep learning-based techniques used to detect brinjal weed

    3 The Proposed Model

In this section, we propose the paradigm, provide a complete flow diagram displaying the links between modules, and describe the comprehensive method. As per the architecture shown in Fig. 4, the model comprises five modules: dataset collection, data preprocessing, object detection, deep learning models, and evaluation metrics.

Figure 4: The proposed model

Crop images captured in a real-time environment need preprocessing to smooth the images. This includes segmenting soil from the background, image augmentation, grayscale transformation, and image resizing. Attributes such as shape, size, and color are considered for the normalization and scaling of the images. Image preprocessing is responsible for furnishing the images of the weed dataset in the required form [47]. Dataset augmentation is used to increase the variety of the dataset; for this, images are blurred, horizontally flipped, and rotated manually. Ground-truth generation is an important step for any segmentation task: we give the DL model both the label and the image, annotated into weed and non-weed parts. Since the dataset is collected manually, ground truth must be generated by mapping and interpreting the plant and non-plant regions of each image. The final preprocessing phase is to resize each RGB image to 250 × 250 pixels for inclusion in the weed dataset. Preprocessed images are passed to the CNN model to extract features, as shown in Fig. 5. Feature extraction covers the shapes and sizes of leaves and weeds: the CNN learns the primary shapes in its first layer and adjusts itself to learn other shapes in the deeper layers, followed by DL models for enhanced accuracy. In the object detection method, the image is identified pixel-by-pixel. The YOLOv3, CenterNet, Faster RCNN, and ResNet18 models were used for training and testing on the features extracted by the CNN. The post-processing and evaluation metrics module follows the trend of the best metrics used for segmentation, Intersection over Union (IoU) [48]. Each of the modules is explained in detail in the following subsections.

    3.1 Dataset Description

Dataset acquisition includes capturing single images of weeds, leaves, and soil. The data used in the experiment was manually collected from a farm field of brinjal crops near the village of Kudasan, Gandhinagar, Gujarat, India. A few iterations of data capturing were performed to ensure high-quality images of brinjal with weeds at different stages of crop growth. The images are in RGB (Red, Green, Blue) format. Ideal light, weather, weed, and plant growth conditions were matched for dataset collection, and a consistent height from the ground was maintained for each image. In summary, 218 images were collected (with augmentation) from the field, and 1109 images were taken from dataset providers [49–51]. The dataset of 1327 images is bifurcated into training, validation, and test sets as shown in Table 2.
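As an illustration, the 70/15/15 train/validation/test split described in Section 3.6 could be reproduced along the following lines; this is a minimal sketch, and the file names and random seed are placeholders rather than the authors' exact procedure.

    from sklearn.model_selection import train_test_split

    # Placeholder file list standing in for the 1327 collected images.
    image_files = [f"brinjal_{i:04d}.jpg" for i in range(1327)]

    # 70% for training, then the remaining 30% split evenly into
    # validation and test (15% each of the full dataset).
    train_files, rest = train_test_split(image_files, train_size=0.70, random_state=42)
    val_files, test_files = train_test_split(rest, test_size=0.50, random_state=42)

    print(len(train_files), len(val_files), len(test_files))  # roughly 929 / 199 / 199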

Table 2: Dataset description

Figure 5: Flow diagram of the proposed model

The image-capturing device was a 48 MP MI (Xiaomi) camera. A captured image of the brinjal crop is shown in Fig. 6a, and the result after augmentation is shown in Fig. 6b. The brinjal crop dataset was made available as open source [52]. After the images are fed to the architecture, the output identifies the different objects present in the image, such as soil, weed, crop leaves, crop flowers, and brinjal, as depicted in Fig. 7.

Figure 6: Captured images of the brinjal crop before and after augmentation using the proposed model

Figure 7: Object detection in weed classification using the proposed model

    3.2 Ground Truth Generation

The input layer of the DL model needs ground-truth creation, since training requires labeled images. As the dataset is collected manually, ground truth must be generated by mapping and annotating the plant and non-plant regions of each image, as shown in Fig. 8. We are required to divide the images into sections carefully, which leads to a binary segmentation problem in DL terms. For instance, as shown in the collection of images below, there is a standard semantic segmentation process in which input, ground truth, and prediction are the main parts. Ground-truth generation enables pixel-wise classification for the segmentation model. The GNU Image Manipulation Program (GIMP), an image editor, was used for ground-truth generation; it includes various options for image manipulation and a toolkit for constructing the ground truth efficiently and precisely. To create the ground truth, two colors are applied, each representing a different part of the image: plant and non-plant. These portions are drawn with the GIMP cursor tool and then colored accordingly, as shown in Fig. 8.
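As a minimal sketch of how such two-color annotations can be turned into pixel-wise labels, the snippet below assumes plant regions were painted pure green; the color convention and file path are illustrative assumptions, not the exact scheme used with GIMP in this work.

    import numpy as np
    from PIL import Image

    def mask_to_binary(mask_path, plant_rgb=(0, 255, 0)):
        """Convert a two-color ground-truth image into a 0/1 label mask.

        Pixels matching plant_rgb become 1 (plant); all other pixels
        become 0 (non-plant). The green value is an assumed convention.
        """
        mask = np.array(Image.open(mask_path).convert("RGB"))
        return np.all(mask == np.array(plant_rgb), axis=-1).astype(np.uint8)

    labels = mask_to_binary("ground_truth/img_0001.png")  # hypothetical path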

3.3 Data Pre-Processing and Image Augmentation

The dataset consists of manually collected images with varied sizes and shapes that must be reshaped to a common size [53]. For this research project, we cropped and resized the dataset images to 256 × 256 while keeping the color channels intact [54]. The images are then normalized by subtracting each channel's mean from the original values and dividing by the channel's standard deviation [55] for the learning phase, which contributes to a better segmentation model.

The data augmentation phase is a common process in DL-based projects to ensure sufficient data for the model's learning phase [56]. The dataset's diversity in terms of the shape, size, location, and orientation of weeds helps the model perform well. Many methods add synthetic samples for text data augmentation by taking nearest-neighbor points; for image projects, we can augment the data using basic methods such as flipping the image vertically and horizontally and adding some noise [47]. We augmented the data with random horizontal and vertical flips. This augmentation step helps extend the training dataset, and by including augmented samples created with the methods described above, the model's learning experience is improved. It also increases the accuracy of the model's key performance indicators (KPIs) when test or real data is fed into the trained model [25], and it enables the model to learn previously missing features and strengthen connections in the network [16].
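A minimal sketch of this resize, normalize, and random-flip preprocessing, using torchvision; the mean and standard deviation shown are the common ImageNet statistics used as placeholders, whereas the paper computes these from the brinjal dataset's own channels.

    import torchvision.transforms as T

    # Placeholder per-channel statistics (ImageNet values); the paper
    # derives these from the dataset itself.
    MEAN, STD = [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]

    train_transform = T.Compose([
        T.Resize((256, 256)),             # bring all images to a common size
        T.RandomHorizontalFlip(p=0.5),    # augmentation: horizontal flip
        T.RandomVerticalFlip(p=0.5),      # augmentation: vertical flip
        T.ToTensor(),                     # HWC uint8 -> CHW float in [0, 1]
        T.Normalize(mean=MEAN, std=STD),  # subtract mean, divide by std
    ])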

    3.4 Object Classification

The CNN parameters employed in this model are described as follows. The first two-dimensional convolutional layer produces 148 × 148 × 16 feature maps with 448 parameters. With pooling, these maps are down-sampled to 74 × 74 × 16, so a particular section of the whole image is focused on. Then another convolutional layer of the same kernel size deepens the representation and extracts the features more precisely; its 4640 parameters deepen the model so that more features are extracted, enhancing the precision and efficiency of the model. Likewise, further layers are implemented, each followed by pooling; the final convolutional layer has 18496 parameters. The resulting feature maps are flattened to convert the multi-dimensional map into a single-dimensional array. The experiment is carried out with a fully connected stage, where two dense layers are appended to the neural network. A total of 9,494,561 parameters were used in the proposed model.
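The quoted layer sizes and parameter counts (448, 4640, 18496, and 9,494,561 in total) are consistent with the following Keras-style stack; this is a reconstruction from those counts rather than the authors' published code, and the 150 × 150 × 3 input size is inferred from the 148 × 148 × 16 first-layer output.

    from tensorflow import keras
    from tensorflow.keras import layers

    # Reconstruction of the classifier from the parameter counts quoted
    # in the text; not the authors' exact code.
    model = keras.Sequential([
        layers.Input(shape=(150, 150, 3)),        # inferred input size
        layers.Conv2D(16, 3, activation="relu"),  # 448 params -> 148x148x16
        layers.MaxPooling2D(),                    # down-sample to 74x74x16
        layers.Conv2D(32, 3, activation="relu"),  # 4640 params
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),  # 18496 params
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(512, activation="relu"),     # bulk of the parameters
        layers.Dense(1, activation="sigmoid"),    # weed vs. non-weed
    ])
    model.summary()  # reports 9,494,561 total parameters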

    3.5 Object Detection

Different object detection models were tested based on the literature survey. YOLOv3, CenterNet, Faster RCNN, and ResNet18 were used with different backbone architectures suited to this project's dataset. We had RGB images and a binary segmentation task with only two classes. As the dataset size is moderate, the models had to be chosen accordingly. Researchers have used U-Net for canola and paddy field image segmentation tasks and achieved good results; U-Net is a type of CNN that delivers strong results in pixel-to-pixel classification for biomedical images [57,58]. ResNet18 is the backbone used in one of the research papers we draw on, where several operations are used, including convolution, max-pooling, up-sampling, and transposed convolution. The convolution operation takes two inputs: the input image and the set of filters, or feature extractors [59]. The convolution and max-pooling operations effectively reduce the image size. ResNet18 was selected because of its good performance in previous research and its relatively low computation cost [60].

Based on the literature review, several detection models were tested. YOLOv3 is a CNN used to identify objects in real time [56]. Originally, Darknet was a 53-layer network trained on ImageNet [61], but here a variation of Darknet is used. The primary components of Darknet are skip connections and 3 × 3 and 1 × 1 filters. This Darknet variant stacks 53 more layers, giving 106 layers in the underlying convolutional architecture. YOLOv3 calculates the classification loss for each label using binary cross-entropy, and logistic regression is used to forecast object confidence and class predictions. It has three hyperparameters: the class threshold, the non-max suppression threshold, and the input height and width. The model has three output layers and finds objects by applying 1 × 1 detection kernels to feature maps of three different sizes at three distinct points in the network. The first output layer contains the bounding-box coordinates, classes, and confidences in a tensor of shape (1, 13, 13, 255), where 255 corresponds to three anchor boxes per cell, each encoding four box coordinates, one objectness score, and the per-class scores (3 × 85 in the 80-class COCO configuration). The second layer works at a different scale and divides the image into a 26 × 26 grid. The third layer divides the image into a 52 × 52 grid and predicts the bounding-box coordinates for each grid cell [62,63].

CenterNet is an anchorless object detection architecture, a technique that dramatically accelerates inference [57]. CenterNet recognizes each object as a triplet rather than a pair of key points. It uses two specially designed modules: center pooling and cascade corner pooling. These enrich the information collected at the top-left and bottom-right corners and provide more recognizable information in the center areas. Center pooling is used to predict center key points: it takes the maximum response of a feature map's central key point in the vertical and horizontal dimensions. Cascade corner pooling lets corners extract features from center areas: it first looks along a boundary for the maximum boundary value, then looks inside the box at the location of that maximum for a maximum internal value, and finally sums the two maxima. The output produced by CenterNet is a keypoint heatmap Ŷ ∈ [0, 1]^(W/R × H/R × C), where R is the output stride and C is the number of keypoint types; by default, the output stride is R = 4. A prediction of 1 corresponds to a detected keypoint, while 0 is background [22]. Faster RCNN [64] is another object detection architecture that can be used. It comprises three components: convolution layers, a Region Proposal Network, and class and bounding-box prediction. The filters in the convolution layers are trained to extract the right image features. The Region Proposal Network (RPN), a small neural network, slides over the final feature map created by the convolution layers and helps determine whether objects are present, as well as their bounding boxes. With the regions proposed by the RPN as input, a fully connected neural network then forecasts the object class (classification) and bounding boxes (regression).
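As a hedged illustration of this two-stage structure, torchvision ships a Faster R-CNN with a ResNet-50 FPN backbone; the two-class head (background plus weed) below is our assumption about how the binary setting of this paper would map onto it, not necessarily the authors' configuration.

    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    # Pre-trained Faster R-CNN, used only to illustrate the RPN + box-head
    # structure described above; the backbone may differ from the paper's.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

    # Swap in a classification head for 2 classes: background + weed.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

    model.eval()
    with torch.no_grad():
        prediction = model([torch.rand(3, 256, 256)])  # boxes, labels, scores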

    3.6 Evaluation Metrics

The evaluation metrics are chosen according to the proposed model and the objective of this work. The dataset is divided into training, validation, and test sets comprising 70%, 15%, and 15% of the total, respectively. This guards against the model's performance being limited to the training dataset; overfitting is present when the validation loss stops decreasing, or increases, while the training loss keeps going down. Here we perform semantic segmentation of weeds and plants, so the efficiency parameters differ from those of a plain classification model. The evaluation measures for semantic segmentation include IoU, mean IoU, accuracy, recall, precision, F1-score, and memory usage [65]. True positives (TP) for our dataset are images containing weed that are accurately detected. False positives (FP) are images without weed that are identified as "weed." False negatives (FN) are images containing weed that are detected as "no weed." The efficiency parameters are defined in Eqs. (1)–(3) as follows:

1. Accuracy: the proportion of correctly predicted observations to the total number of observations. It is most meaningful when the dataset is balanced.

2. Precision: the proportion of correctly predicted positive observations to all predicted positive observations.

3. Recall: the proportion of correctly predicted positive observations to all observations that are actually positive.

4. F1-score: a weighted average of recall and precision.
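For completeness, the standard definitions behind these measures, using the TP/FP/FN notation above plus true negatives (TN), are as follows (a reconstruction, since the referenced equations are not reproduced in this extract):

    Accuracy  = (TP + TN) / (TP + TN + FP + FN)
    Precision = TP / (TP + FP)
    Recall    = TP / (TP + FN)
    F1-score  = 2 · Precision · Recall / (Precision + Recall)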

Memory use was assessed using the command-line utility htop. Memory use should be identical when the same model is run on two distinct PCs. The RAM was kept under observation before and during the model's evaluation on the training and target platforms. Following the trend of the best metrics used for segmentation, IoU is used. This metric is industry-accepted and used by researchers to measure the performance of segmentation models; it denotes the percentage of overlap between the ground-truth bounding box and the predicted bounding box.

1. An IoU value approaching 1 indicates strong overlap between the ground-truth and predicted bounding boxes.

2. An IoU value approaching 0 indicates no overlap between the ground-truth and predicted bounding boxes.
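Concretely, for axis-aligned bounding boxes the metric reduces to a few lines; the sketch below uses the usual (x1, y1, x2, y2) corner convention and is illustrative rather than the authors' implementation.

    def box_iou(a, b):
        """IoU of two boxes given as (x1, y1, x2, y2) corner tuples."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])  # intersection top-left
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])  # intersection bottom-right
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    print(box_iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143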

    4 Results and Discussion

This section describes the results and discussion, where the proposed model is evaluated with different performance metrics to analyze its performance. It comprises the experimental setup, evaluation metrics, and discussion.

    4.1 Experimental Setup

The simulation parameters, evaluation criteria, and a discussion of the results are described in this section. The experiments on the brinjal images were performed on a GPU-enabled system with 24 GB of RAM and an NVIDIA GTX 1050Ti graphics card, used to smoothly implement the algorithms and train our dataset according to the proposed model. Simulation parameters such as batch size, learning rate, and epochs, shown in Table 3, were used in the experiment.

Table 3: Simulation parameters

1. A low learning rate was chosen due to the unfavorable effects of significant modifications to the deeper convolution filters. Training with 0.000001 or 0.001 caused the learning process to collapse, since the losses spiked to extremely high values and never recovered.

2. The batch size was chosen as the largest value supported by the NVIDIA GPU. This justification stems from the knowledge that bigger batch sizes offer a more precise gradient calculation for the weight update.

3. The number of training epochs for the custom models ranged from 10 to 25, and once overfitting was detected, the previous epoch's weights were applied (see the training sketch after this list). We found 15 epochs to be the ideal number for all the models in this experiment.
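A minimal sketch of how these choices translate into a training call, assuming the Keras classifier sketched in Section 3.4; the 1e-4 learning rate and batch size of 16 are illustrative placeholders (Table 3 is not reproduced in this extract), while the 15 epochs and rollback-on-overfitting follow the text.

    import numpy as np
    from tensorflow import keras

    # Tiny random stand-in data so the snippet runs; real training uses
    # the preprocessed brinjal images. `model` is the classifier from the
    # Section 3.4 sketch.
    x = np.random.rand(32, 150, 150, 3).astype("float32")
    y = np.random.randint(0, 2, size=(32, 1))

    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=1e-4),  # placeholder LR
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    # Roll back to the best epoch once validation loss starts rising.
    early_stop = keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=2, restore_best_weights=True
    )
    model.fit(x, y, validation_split=0.2, epochs=15,
              batch_size=16, callbacks=[early_stop])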

    4.2 Discussion

We evaluated the models on 1327 samples and obtained different IoU results for the four models used. The test mean IoU scores are shown in Table 4.

Table 4: IoU comparison of DL models

IoU compares the ground truth of the input image with the model's predicted mask generated from that image; hence, IoU gives the exact amount of overlap relative to the union. Fig. 9 summarizes the models based on non-weed IoU, weed IoU, and mean IoU (MIoU) measures for the test dataset. The IoU threshold is set to 0.70 for this experiment.

A Conditional Random Field (CRF) is implemented as a post-processing technique to improve the efficiency parameters by considering neighboring predicted values. We trained four different models with distinct backbone architectures. These segmentation models gave fairly good results when compared with each other, and they have simple architectures, fast training times, and straightforward implementations. The loss curves for all the models are shown in Fig. 9. We trained all the models for 15 epochs, based on the training dataset size, to avoid overfitting due to excessive training.

Epochs vs. loss graphs can help determine whether a model is overfitting or underfitting. For the YOLOv3 model, we might need to increase the epochs, because the training and validation losses swing, and some difference between them remains even after 15 epochs. For ResNet18, the curve flattens out and the loss drops steadily; after ten epochs, the difference between training and validation loss is nearly zero. The CenterNet graph shows that the starting training loss is quite low and has not decreased steadily, so this model should be applied to differently distributed data to make sure it does not run into overfitting problems. The model is learning gradually, but a few more epochs could be run to confirm that it does not change much more and to check whether the difference between training and validation loss becomes null. In the Faster RCNN graph, the model begins training with a very high loss, but over time the training and validation losses decline and settle into a steady state.

Figure 9: Training and validation loss analysis of the proposed AI models

We compared the models based on three different parameters: accuracy, memory usage, and IoU. We conducted experiments using the ResNet-18, YOLOv3, CenterNet, and Faster RCNN models. The training performance of each model is validated using a validation set; the ResNet18 model had the lowest accuracy of 81% and a high training loss, as displayed in Table 5. During training, it appears to have been affected by the dataset's imbalance, which is probably brought on by the lack of dropout regularization. CenterNet outperformed all the models and achieved the highest accuracy at 88%. The reason behind the good accuracy is its effective scaling technique using a compound coefficient together with anchorless object detection. The idea behind the model is to focus on how much of the object a prediction overlaps with; box predictions can be prioritized for significance based on their center only.

Table 5: A relative comparison of the DL models

In terms of memory consumption, the ResNet-18 model requires somewhat more memory than the other models in the experiment. YOLOv3 is the lightest of all the models, requiring only 4.78 GB of memory to process the data. The main reason behind this vast difference is the depth and width of the networks: YOLOv3 works with only 2M parameters, while ResNet-18 takes 11M. Regarding non-weed IoU, weed IoU, and mean IoU, CenterNet and Faster RCNN achieve the same mean IoU of 0.86. As the predicted results show, the models are trained to capture the partition boundary; these two architectures find the overlap between the weed-segmented area and the (manually generated) ground truth.

    5 Conclusion

Weed detection is an essential aspect of farming and, apart from other factors, affects crop yield and quality. This paper presents comparative analytics of DL models for finding weeds in the crop fields of the popular vegetable crop brinjal. Weed identification and detection require segregating soil, crop, and weed in an image. Data collection, preprocessing, data augmentation, classification, object detection, and postprocessing using CNN and DL models helped us achieve good results. The approach utilizes the strengths of CNN in classification and DL models in weed detection. With the help of the proposed model, we could classify the brinjal crop plant and detect weeds in the images given to the model, with a best accuracy score of 88%. CenterNet is the most suitable architecture for brinjal weed detection among all the models, with a mean IoU of 86% and an accuracy of 88%. The memory requirement of ResNet-18 is considerably higher than that of YOLOv3. The results are promising and point toward achieving objectives such as reducing herbicide usage and saving the time spent on laborious hand weeding. These results also pave the way for the use of PA for site-specific weed management in vegetable crops. As future work, this weed detection for brinjal crops can be extended into an end-to-end pipeline. The pipeline would include a weed classification module that takes plant images as input and classifies them into plant or non-plant (weed) types. It would then forward only the weed images to the object detection module to obtain the exact prediction mask, where the image is segmented into weed and plant. This combination of classification and object detection would make a complete package for real-time applications.

In the future, we will test the performance of the proposed model on further parameters, such as latency and prediction time, and integrate newer DL-based models, such as CoAtNet-7 and Model Soups (Basic-L).

Acknowledgement: The authors would like to acknowledge all members of Sudeep Tanwar's Research Group (ST Lab) for their support in revising this manuscript. Further, the authors would like to acknowledge the editorial board members of CMC and the reviewers for providing technical comments to improve the overall scientific depth of the manuscript.

Funding Statement: This work was funded by the Researchers Supporting Project Number (RSP2023R509), King Saud University, Riyadh, Saudi Arabia.

Author Contributions: The authors confirm contribution to the paper as follows: study conception and design: Jigna Patel, Anand Ruparelia, Sudeep Tanwar; data collection: Jigna Patel, Anand Ruparelia, Ravi Sharma; analysis and interpretation of results: Sudeep Tanwar, Maria Simona Raboaca, Bogdan Constantin Neagu, Fayez Alqahtani, Amr Tolba; draft manuscript preparation: Jigna Patel, Maria Simona Raboaca, Bogdan Constantin Neagu. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: Not applicable.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
