
    U-Net Inspired Deep Neural Network-Based Smoke Plume Detection in Satellite Images

    2024-05-25 14:40:42
    Computers, Materials & Continua, 2024, Issue 4

    Ananthakrishnan Balasundaram, Ayesha Shaik, Japmann Kaur Banga and Aman Kumar Singh

    1Centre for Cyber Physical Systems, Vellore Institute of Technology (VIT), Chennai, Tamil Nadu, 600127, India

    2School of Computer Science and Engineering, Vellore Institute of Technology (VIT), Chennai, Tamil Nadu, 600127, India

    ABSTRACT Industrial activities, through the human-induced release of Green House Gas (GHG) emissions, have been identified as the primary cause of global warming. Accurate and quantitative monitoring of these emissions is essential for a comprehensive understanding of their impact on the Earth's climate and for effectively enforcing emission regulations at a large scale. This work examines the feasibility of detecting and quantifying industrial smoke plumes using freely accessible geo-satellite imagery. Existing systems suffer from limitations in accuracy, robustness, and efficiency, and these factors hinder their effectiveness in supporting a timely response to industrial fires. In this work, grayscale images are used instead of traditional color images for smoke plume detection. The dataset was trained through a ResNet-50 model for classification and a U-Net model for segmentation. The dataset consists of images gathered by the European Space Agency's Sentinel-2 satellite constellation from a selection of industrial sites. The acquired images predominantly capture scenes of industrial locations, some of which exhibit active smoke plume emissions. The performance of the above-mentioned techniques and models is reported in terms of accuracy and the Intersection-over-Union (IoU) metric. The models are first trained on the basic RGB images: classification using the ResNet-50 model results in an accuracy of 94.4%, and segmentation using the U-Net model achieves an IoU of 0.5 and an accuracy of 94%, which leads to the detection of the exact patches where the smoke plume has occurred. Training the classification model on grayscale images yields a good increase in accuracy to 96.4%.

    KEYWORDS Smoke plume; ResNet-50; U-Net; geo-satellite images; early warning; global monitoring

    1 Introduction

    The degree of Green House Gas (GHG) emissions resulting from industrial activities and their contribution to global warming is a noteworthy and concerning environmental issue. Extensive research has been conducted to evaluate the scope of these emissions and their hazardous influence on the Earth's climate system. A keynote address by the Intergovernmental Panel on Climate Change (IPCC) Chair Hoesung Lee at the opening of the First Technical Dialogue of the Global Stocktake, conducted on June 10, 2022, provided reports showing that climate change poses a grave danger to our planet. According to the findings of the climate change mitigation report, progress in curbing global warming to 1.5°C falls short of expectations. Over the past twenty years, average annual greenhouse gas (GHG) emissions have reached unprecedented levels in human history, and they are gradually becoming increasingly difficult to control [1]. The current situation demands immediate and effective action to protect the planet and the well-being of its inhabitants.

    This work addresses several limitations of existing methods for detecting smoke plumes using satellite images. Firstly, current tools lack the performance and accuracy needed to reliably identify smoke plumes in industrial areas. They sometimes misclassify clouds and other objects as smoke plumes. This causes false alarms when deploying emergency teams for rescue, which is why existing methods are less reliable. Small forest fires, or those hidden by a dense canopy, are difficult to find with satellite imagery [2].

    Secondly, existing technologies primarily detect smoke plumes from colored images. However, color images make it harder for a model to produce correct predictions with good accuracy, because the many colors present make it difficult to distinguish smoke plumes from other objects, especially under varying weather conditions. Hence, smoke plume detection accuracy is affected. Furthermore, current technologies might not offer real-time or near-real-time detection and tracking capabilities.

    The algorithms and techniques for smoke plume detection and identification have evolved over the years. Several methods exist, including deep learning models such as Convolutional Neural Networks (CNNs) and classical computer vision algorithms. CNN-based approaches often use techniques such as optical flow analysis or frame differencing to detect smoke plumes and accurately delineate smoke patches. Smoke plumes are recognized as the target class in a huge collection of labeled photos on which the network is trained [3].

    The proposed system leverages ResNet-50 to achieve precise smoke region detection and incorporates a powerful processor for fast real-time image processing [4]. The motivation for this work is the urgent need to understand the effect of these smoke plumes in industrial areas. Wildfires are becoming increasingly destructive and devastating. Wildfires are frequently discovered after they are out of control due to their rapid spread, and as a result, they have billion-scale consequences in a very short period [5]. In this study, the objective is to develop a model for accurately identifying smoke plumes in industrial areas and to examine their contribution to the increase in global warming. It is well known that industrial activities have an impact on global warming. This study aims to contribute to a better understanding of how large this impact is by developing an accurate, high-performance model to detect and quantify the amount of smoke using freely available geo-satellite data.

    The three main objectives of this work are as follows. The first objective is to investigate the possibility of detecting and quantifying the amount of smoke plumes present in a single image using freely available geo-satellite multi-band images; classification is performed to separate images containing smoke plumes from images that do not. The second objective is to evaluate the effectiveness of grayscale images over colored images for the detection of smoke plumes. In this study, both types of images are used to train the model, and the grayscale imagery clearly shows a notable increase in accuracy and performance. The third objective is to accurately detect and quantify smoke plumes and their area, contributing to an understanding of the effect and impact of industrial activities on the increase of global warming. This last objective reflects the broader perspective of this research work. Additionally, this work aims to support the enforcement of fire regulations by providing an improved working model for monitoring wildfires and identifying their sources so that they can be stopped as soon as possible.

    The key highlights of this work include the detection of smoke plumes in industrial areas using freely available geo-satellite multi-band images, the use of grayscale images over colored images to improve accuracy and performance, a contribution to understanding the impact of industrial fire on global warming, and improved tools for professionals to monitor smoke plumes through clouds so that fires can be identified at an early stage and stopped before causing much damage. The importance of the research's findings resides in their potential to improve the measurement and monitoring of industrial smoke plumes. The study provides a low-cost and widely applicable method to identify and measure smoke plumes, assisting early warning systems and emergency response planning. This method makes use of Sentinel-2 satellite data. Grayscale image analysis sheds light on a detection technique that could be more precise. The results ultimately assist the implementation of emission restrictions and attempts to lessen the negative consequences of climate change, addressing the critical need for improved understanding and control of industrial emissions.

    2 Literature Survey

    The work proposed in [6] uses an algorithm called the Scattering-Based Smoke Detection Algorithm (SSDA) to overcome these obstacles. It primarily relies on the blue and green bands of the Visible Infrared Imaging Radiometer Suite (VIIRS). In [7], an automated detection model uses a deep learning approach to detect smoke plumes from shortwave reflectance data of the Geostationary Operational Environmental Satellite R series. The study in [8] explores the feasibility of detecting industrial smoke plumes on a global scale using satellite images, applying ResNet-50 and U-Net models. An affordable solution using images from NASA's Aqua and Terra satellites is presented in [9], together with an overview of the latest innovations and advancements in neural network-based techniques for object detection. The work in [10] implemented Gradient-weighted Class Activation Mapping (Grad-CAM) to verify whether the detected regions corresponded to the actual smoke areas in the image. The evaluated algorithms included ResNet and EfficientNet models.

    A two-stage smoke detection (TSSD) algorithm based on a lightweight detection network is implemented in [11] to monitor real-time factory smoke. The work discussed in [12] involves a combination of deep learning and dynamic background modeling to mitigate false alarms. It employs a Single Shot MultiBox Detector (SSD) deep learning network for initial smoke detection and the ViBe dynamic background modeling technique to identify dynamic regions within the video. The study [13] presented an innovative technique for smoke characterization employing wavelets and support vector machines while raising minimal false alarms. In [14], a masking technique in the HSV color space is used to identify smoke-colored pixels, and temporal frame differencing is applied. The optical flow of smoke is determined using texture information obtained from a Gabor filter bank with preferred orientations. The work proposed in [15] introduces a novel neural architecture called W-Net to address the highly ill-posed nature of smoke, where multiple stacked convolutional encoder-decoder structures define the model.

    The work [16] uses deep learning techniques to detect fire and smoke. It aims to develop a model that can keep learning and adapting to new information without forgetting past information. The work discussed in [17] surveys the methods and algorithms for detecting smoke and fire in the air that depend on visual data. In [18], a deep learning model based on a self-attention network is suggested. The work proposed in [19] can distinguish smoke plumes from aerial photographs; a CNN is used to extract information and classify each image into two categories, containing smoke or not containing smoke. The work discussed in [20] uses an end-to-end structured network to detect fire and smoke areas. Deep learning algorithms are used to develop the end-to-end network that extracts important fire- and smoke-related information from the input images.

    The work discussed in [21] has two main components: a dynamic feature model and smoke object segmentation. The model can spot the smoke present in the images and can segment the smoke area with high accuracy. The model in [22] uses deep learning methods for accurate detection of smoke among clouds and other misleading elements. It aims to improve forest fire surveillance using a learning-based system. The work proposed in [23] can detect fire and smoke in visual colored images using image processing and machine learning algorithms. The computer vision approach includes several intermediate and crucial steps such as feature extraction, classification, and image preprocessing. In [24], the DeepSmoke model detects smoke regions in the image dataset; to extract image features, the study uses convolutional neural networks (CNNs) for smoke detection. The work discussed in [25] suggests a system for early forest fire detection using the capabilities of two hardware components, a DJI M300 drone and an H20T camera.

    3 Proposed System

    The steps involved in detecting smoke plumes are depicted in Fig.1.The first step is to obtain the images via satellite.The images are then preprocessed,which includes grayscale conversion and histogram equalization.The preprocessed images are then fed into a ResNet-50 model for image classification.The classified images are then fed into a U-Net model for image segmentation.The U-Net model produces a segmentation mask that shows where the smoke plumes are in the image.

    Figure 1: Flow of the proposed method

    Four important blocks make up the block diagram shown in Fig.2, which depicts the various phases of smoke plume detection and segmentation. Every block is important to the overall process, and each is briefly described below.

    3.1 Input

    The first block of the block diagram is the input block, in which colored images that may or may not contain smoke plumes are taken as input. The ensuing blocks use this input block as the main source of image data for training, preprocessing, and evaluation. In this work, a dataset of geo-satellite colored images is used; some images contain smoke plumes and the rest do not. Therefore, classification needs to be performed with good accuracy to distinguish images containing smoke plumes from images without them.

    Figure 2: Block diagram of the proposed system

    3.2 Data Preprocessing

    In this block, some crucial steps are performed on the loaded data before it is fed into the classification and segmentation models. The steps are as follows.

    3.2.1 Data Normalization

    In this step, the data needs to be normalized to obtain good model accuracy. The major normalization step for image data is scaling the pixel values into the range 0 to 1 for consistency, as shown in Eq. (1). Data normalization can also be performed by standardizing the images using the mean and standard deviation.
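
    A minimal sketch of this normalization step, assuming the images are loaded as 8-bit NumPy arrays, could look as follows:

```python
import numpy as np

def normalize_image(img: np.ndarray) -> np.ndarray:
    """Scale 8-bit pixel values into the [0, 1] range (Eq. (1)-style scaling)."""
    return img.astype(np.float32) / 255.0

def standardize_image(img: np.ndarray) -> np.ndarray:
    """Alternative: zero-mean, unit-variance standardization of the image."""
    img = img.astype(np.float32)
    return (img - img.mean()) / (img.std() + 1e-8)
```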

    3.2.2 Data Transformation

    The image dataset contains two classes: images with smoke plumes and images without smoke plumes. This is a binary classification problem, but the class distribution is not balanced. The data is transformed using up-sampling to balance the class distribution. The second transformation is the conversion of RGB images to grayscale images, which might help increase the model's performance and accuracy.
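
    The exact up-sampling strategy is not detailed in the text; one simple possibility, assuming the smoke class is the minority and the data is held in NumPy arrays, is to repeat minority-class samples at random until the classes are balanced:

```python
import numpy as np

def upsample_minority(images: np.ndarray, labels: np.ndarray, seed: int = 0):
    """Randomly repeat minority-class samples until both classes have equal counts.

    Assumes label 1 = smoke (minority) and label 0 = no smoke (majority)."""
    rng = np.random.default_rng(seed)
    pos_idx = np.where(labels == 1)[0]
    neg_idx = np.where(labels == 0)[0]
    extra = rng.choice(pos_idx, size=len(neg_idx) - len(pos_idx), replace=True)
    keep = np.concatenate([neg_idx, pos_idx, extra])
    rng.shuffle(keep)
    return images[keep], labels[keep]
```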

    3.2.3 Data Visualization

    To improve the contrast and visibility of key details in the grayscale images, histogram equalization is applied. By redistributing the pixel intensities, histogram equalization improves image quality and makes it easier to distinguish important elements.
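
    With OpenCV, this step could be sketched as follows for an 8-bit grayscale image, optionally plotting the resulting intensity histogram:

```python
import cv2
import matplotlib.pyplot as plt
import numpy as np

def equalize_and_plot(gray: np.ndarray) -> np.ndarray:
    """Equalize an 8-bit grayscale image and plot its intensity histogram."""
    eq = cv2.equalizeHist(gray.astype(np.uint8))
    plt.hist(eq.ravel(), bins=256, range=(0, 255))
    plt.xlabel("Pixel intensity")
    plt.ylabel("Frequency")
    plt.show()
    return eq
```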

    3.3 Classification Block

    This is a binary classification problem; in this work, the pre-trained ResNet-50 model is used to distinguish images containing smoke plumes from those that do not. After training the model on the training data, evaluation is carried out on the test data to assess the model on unseen images. This block returns the classified images, from which accuracy and other key metrics can be calculated. Fig.3 shows the implemented architecture of the ResNet-50 model. A general formula for the ResNet-50 model is shown in Eq. (2).

    Here FC_K denotes the Kth fully connected layer, BN_M denotes the Mth bottleneck layer, and Conv_N denotes the Nth convolutional layer.
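
    Since the paper loads the pre-trained ResNet-50 through PyTorch (Section 4.1.3), a minimal sketch of adapting it to this two-class problem might look as follows; the weights version, learning rate, and training-loop details are placeholder assumptions, not values stated in the paper:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-50 pre-trained on ImageNet (torchvision >= 0.13 API) and
# replace its final fully connected layer with a two-class head
# (smoke / no smoke).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # placeholder learning rate

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of (N, 3, H, W) images.

    For grayscale inputs, the single channel can be repeated three times to
    match the expected input shape (an assumption; the paper does not specify)."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```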

    Figure 3: Architecture of ResNet-50 model

    3.4 Segmentation Block

    After classifying the images, a pre-trained U-Net model is used to segment the smoke area of an image. The training and evaluation of the U-Net model for smoke segmentation are the main topics of this block. The dataset contains images with smoke plumes and the corresponding segmentation labels, which are fed to the U-Net model to create a segmentation boundary.

    After training, the U-Net model is evaluated using test image data that the model has not seen before. To evaluate the precision and performance of the model, measures such as Intersection-over-Union (IoU) and Jaccard accuracy are used.
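
    The specific U-Net implementation used in the work is not stated; purely as an illustration, a pre-trained-encoder U-Net from the third-party segmentation_models_pytorch package could be trained as sketched below (the package choice, encoder, and hyper-parameters are all assumptions):

```python
import torch
import torch.nn as nn
import segmentation_models_pytorch as smp  # third-party package; one possible U-Net source

# A U-Net with a pre-trained encoder producing a one-channel smoke mask.
unet = smp.Unet(encoder_name="resnet34", encoder_weights="imagenet",
                in_channels=3, classes=1)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(unet.parameters(), lr=1e-4)  # placeholder learning rate

def seg_step(images: torch.Tensor, masks: torch.Tensor) -> float:
    """One training step: images of shape (N, 3, H, W), binary masks (N, 1, H, W)."""
    optimizer.zero_grad()
    loss = criterion(unet(images), masks.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```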

    The proposed system takes RGB images gathered by ESA's Sentinel-2 satellite constellation from a selection of industrial sites. The collected data is loaded onto the system for further manipulation. As shown in Fig.4, the collected RGB images require a standardization procedure to bring them all to a uniform and usable format; therefore, data normalization is performed on the entire dataset.

    Figure 4: Workflow for processing RGB images

    Fig.5 depicts how RGB images are collected from satellite imagery and then normalized for further processing. To improve accuracy and enhance model performance, this paper takes a novel approach of converting RGB images to grayscale, which led to better results and overcame the challenges posed by the limitations of RGB images.

    Figure 5: Workflow for processing grayscale images

    Eq. (3) shows the manipulation required for converting each RGB image to a grayscale image, where the pixels of the image are stored in the form of an array. The channel values of each pixel are averaged to convert it to grayscale. Grayscale images reduce the number of false interpretations and complexities. In addition, histogram equalization is performed to visualize the image dataset. Fig.6 shows a sample representation of an image containing a smoke plume.
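
    Based on the description of Eq. (3) (averaging the channel values of each pixel), the conversion can be sketched as:

```python
import numpy as np

def rgb_to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) RGB array to grayscale by averaging the channels,
    following the channel-averaging described for Eq. (3)."""
    return rgb.astype(np.float32).mean(axis=2)
```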

    Figure 6: Smoke density visualized via histogram equalization

    The peak in Fig.6 depicts the intensity of the smoke plume, and the width of the peak demonstrates how far the smoke plume spreads across a specific area in the given image. Fig.7 depicts the major and final steps that complete the entire model and architecture. Once the data is processed, it undergoes two modeling processes: classification via ResNet-50 and segmentation via U-Net.

    Figure 7: Sequential flow depicting classification and segmentation

    3.5 Classification

    Initially, the preprocessed RGB images are trained via a customized ResNet-50 architecture. The purpose of this step is to classify the images into two categories, distinguishing whether or not they contain any smoke plume. The model is thoroughly trained and saved for evaluation purposes. This algorithm was able to achieve a decent accuracy of 94.3% on the training set and an accuracy of 94% against the test data set.

    Next, applying this algorithm to grayscale images led to a significant increase in accuracy. During model training, an accuracy of 96.4% was obtained, and during evaluation the model obtained an accuracy of 96.6%. This approach not only achieved higher accuracy but also required fewer computations while producing more precise and valid results.

    3.6 Segmentation

    This process involves the segmentation of both RGB and grayscale images individually. Once the dataset was loaded, the initial focus was to identify patches of smoke plume against manually computed segmentation labels. This step was a foundation for observing and ideating the further stages. Moving forward, the image dataset was trained through a custom U-Net model along with the segmentation labels to achieve more precise results and better efficiency. The overall accuracy attained was 94.0%, where the model can identify smoke plume patches, and the IoU (Intersection-over-Union) metric justifies the intensity by comparing the percentage of overlap between the smoke detected, i.e., the predicted mask of the trained image, and the ground truth mask of the original image. Eq. (4) shows how IoU can be mathematically represented.

    IoU is calculated as the ratio of the Area of Overlap (the overlapped area between the predicted region and the ground truth region) to the Area of Union (the total area covered by the predicted region and the ground truth region), where |A ∩ B| denotes the cardinality (number of elements) of the intersection of sets A and B and |A ∪ B| denotes the cardinality of the union of sets A and B.
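
    A direct NumPy implementation of Eq. (4) for binary masks could look like this:

```python
import numpy as np

def iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Intersection-over-Union of two binary masks (Eq. (4))."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        # Both masks empty: return 1 by convention (not specified in the paper).
        return 1.0
    return np.logical_and(pred, gt).sum() / union
```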

    4 Experimental Results and Discussion

    The U-Net and ResNet-50 models were created and run on the Google Colab platform, which offered GPU acceleration for effective training on large datasets. Several Python modules were used to improve the modeling process and optimize performance. Notably, TensorFlow and Keras were important in the model's development, compilation, and training. NumPy was used to manage arrays effectively, and OpenCV made reading and processing images easier. The results obtained from the experimental analysis are presented and analyzed, and a comprehensive discussion of the findings is provided.

    4.1 Experimental Setup

    The hardware and software requirements for the work are as follows.

    4.1.1 Software Requirements

    Any operating system (Windows/Linux/MacOS), the Python programming language for the deep learning algorithms, and the necessary deep learning frameworks such as PyTorch and TensorFlow. If an NVIDIA GPU is used, the CUDA and cuDNN libraries are required to leverage GPU acceleration for model training.

    4.1.2 Hardware Requirements

    As a tester or developer, a high-performance computing system is needed for running the deep learning algorithms, along with a high-speed internet connection for model training and evaluation. As a user, a computer with an operating system and an internet connection is enough to use this system.

    4.1.3 Tools Used

    For data loading, data transformation, model training, and evaluation, Google Colab is used in this work. PyTorch is a widely used deep learning framework; in this work, PyTorch is used to load pre-trained models such as ResNet-50 for image classification and U-Net for image segmentation. In addition, scikit-learn, NumPy, and Matplotlib provide the basic functionality needed in this project.

    4.1.4 Dataset and Its Description

    The dataset used in this work consists of imaging data captured by ESA's Sentinel-2 satellite constellation, which focuses on observing the Earth. The selection of industrial sites included in the dataset was based on emission data sourced from the European Pollutant Release and Transfer Register. The images largely showcase industrial locations, with a particular focus on those that exhibit active smoke plumes.

    Each image provided in the dataset is in the GeoTIFF file format and consists of 13 bands, along with the respective geo-referencing information. Each image represents a square area with a ground edge length of 1.2 km. The bands are derived from Sentinel-2 Level-2A products, with the exception of band 10, which originates from the corresponding Level-1C product. A noteworthy point is that band 10 has not been utilized in this work. A total of 21,350 images can be found in this repository. After diligent manual annotation, the image sample was partitioned into distinct subsets, resulting in 3,750 positively classified images portraying the presence of industrial smoke plumes and 17,600 negatively classified images showing the absence of any smoke plumes. Moreover, the repository contains carefully crafted JSON files with manual segmentation labels that precisely identify the boundaries and details of the smoke plumes detected within 1,437 images.
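
    As a rough illustration of how such a 13-band GeoTIFF could be read, the sketch below uses the rasterio package and assumes the conventional Sentinel-2 ordering in which bands B4/B3/B2 form the RGB composite; the actual band layout of the dataset files should be verified against its documentation:

```python
import numpy as np
import rasterio  # common library for reading GeoTIFF files

def load_rgb(path: str) -> np.ndarray:
    """Read a 13-band Sentinel-2 GeoTIFF and return an (H, W, 3) RGB array."""
    with rasterio.open(path) as src:
        bands = src.read()  # shape: (13, H, W)
    # Assumed ordering B1..B13: indices 3, 2, 1 correspond to B4, B3, B2 (R, G, B).
    rgb = np.stack([bands[3], bands[2], bands[1]], axis=-1).astype(np.float32)
    # Scale reflectance values into [0, 1] for display/training (a simplification).
    return np.clip(rgb / (rgb.max() + 1e-8), 0.0, 1.0)
```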

    4.2 Elaboration on the Findings

    Due to the variations in smoke patterns, clouds, lighting conditions, rain, and background clutter, smoke detection and segmentation are difficult problems to solve. To distinguish smoke from clouds or other visually similar entities, current solutions widely use conventional rule-based or heuristic techniques. Deep learning models such as ResNet-50 for classification and U-Net for segmentation have shown promising results in the detection and segmentation of smoke in GeoTIFF images.

    In Fig.8, the output images are shown after feeding the color images to the ResNet-50 model. Each output image is a collection of three images stacked on top of each other. There are four possible types of outcomes from the classification model. In Fig.8a, the image is classified as a true positive by the model: the image contains smoke plumes, and the model predicts the same. In Fig.8b, the image is classified as a false positive because the image does not contain smoke but the model predicts the opposite. Fig.8c does not contain any smoke plumes and the model predicts the same, which is why it is classified as a true negative. Fig.8d shows an image that does contain some quantity of smoke plumes, but the model predicts the opposite, so it is classified as a false negative. The accuracy of the model can be calculated by the formula given in Eq. (5).
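
    In terms of these four outcomes, the accuracy referred to in Eq. (5) takes the standard form Accuracy = (TP + TN) / (TP + TN + FP + FN), i.e., the fraction of correctly classified images among all images.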

    Figure 8: RGB images after classification via ResNet-50 (a) True positive; (b) False positive; (c) True negative; (d) False negative

    The segmentation model U-Net gives its output in the form of the images shown in Fig.9. It has three outcomes, discussed earlier in this study. Each output image contains two parts. The first part is the original input image that the model receives and trains on. The second part shows the segmentation boundaries of smoke, if present in the original image, and also reports the IoU value. IoU (Intersection-over-Union) depicts the intensity and the amount of smoke present in the input image. The IoU value varies from 0 (no smoke) to 1 (complete overlap with the smoke in the image). In Fig.9a, the image contains smoke and is classified as a true positive, with an IoU value of 0.72. This shows that approximately 72% of the pixels in the predicted segmentation mask align with the corresponding pixels in the ground truth mask, which is a considerable overlap, and therefore it can be concluded that the segmentation result is relatively valid.

    Figure 9: RGB images after segmentation via U-Net (a) True positive with 0.72 IoU value; (b) False positive with 0 IoU; (c) False negative with 0.02 IoU value

    Fig.10 shows the output images after feeding the grayscale images to the ResNet-50 model. Fig.10a shows an image that contains smoke, and the model predicts the same; thus it is classified as a true positive. Fig.10b shows an image that contains smoke, but the model does not predict it, which is why it is classified as a false negative. Fig.10c shows an image that contains clouds, not smoke, and the model also predicts that the image does not contain smoke, which is why it is classified as a true negative.

    Figure 10: Grayscale images after classification via ResNet-50 (a) The image contains smoke and the model predicted the same; (b) The image contains smoke, but the model predicts it wrongly; (c) The image contains clouds, not smoke, and the model correctly predicts that it does not contain smoke

    One of the advantages of ResNet-50 is that, compared to conventional rule-based or heuristic techniques, deep learning models and algorithms increase the accuracy and performance in detecting smoke plumes in the air. Compared to colored images, grayscale images increase the accuracy, precision, and effectiveness of the model in detecting smoke images [26]. Several techniques can be applied to segment the boundaries of smoke in smoke-containing images. In this project, the U-Net model is used to segment the boundaries of smoke, and it also calculates the IoU value for each image.

    This IoU value shows how much smoke is present in the image. It can help fire safety forces determine the level of fire by simply analyzing the IoU value. This method of using deep learning models seeks to enhance the precision, accuracy, effectiveness, and reliability of smoke detection and segmentation systems. This work has considerable potential in many areas, including emergency response management, environmental monitoring, and fire safety.

    Fig.11 shows the training accuracy vs. validation accuracy on colored GeoTIFF images against the number of epochs for (a) the ResNet-50 model and (b) the U-Net model. Fig.11a represents the ResNet-50 model's performance while classifying color images: the blue line represents the training accuracy, which is approximately 94.3%, and the orange line represents the validation accuracy of approximately 93%. Fig.11b represents the U-Net model's performance while executing segmentation on color images: the blue line represents the training accuracy, which is approximately 94.0%, and the orange line represents the validation accuracy of 93.4%. In these figures, the blue line shows how well the model performs on the training data over time, while the orange line represents the validation accuracy, which indicates how well the model performs on a separate validation dataset that it was not exposed to during training.

    Fig.12 shows the training loss vs. validation loss on colored GeoTIFF images against the number of epochs for (a) the ResNet-50 model and (b) the U-Net model. Fig.12a represents the ResNet-50 model's performance while classifying color images: the blue line represents the training loss, which is approximately 0.043, and the orange line represents the validation loss of 0.05. Fig.12b represents the U-Net model's performance while executing segmentation on color images: the blue line represents the training loss, which is approximately 0.043, and the orange line the validation loss of 0.05. The blue line corresponds to the training loss, which evaluates the model's fit to the training data throughout the training process, while the orange line represents the validation loss, which measures how well the model fits a distinct validation dataset that was not used for training.

    Figure 11: Accuracy graphs of (a) ResNet-50 model; (b) U-Net model; trained on the colored GeoTIFF image dataset

    Figure 12: Loss graphs of (a) ResNet-50 model; (b) U-Net model; trained on the colored GeoTIFF image dataset

    Fig.13 shows the training accuracy vs. validation accuracy on grayscale GeoTIFF images against the number of epochs for (a) the ResNet-50 model and (b) the U-Net model. Fig.13a represents the ResNet-50 model's performance while classifying grayscale images: the blue line represents the training accuracy, which is approximately 96.4%, and the orange line represents the validation accuracy of approximately 96%. Fig.13b represents the U-Net model's performance while executing segmentation on grayscale images: the blue line represents the training accuracy, which is approximately 94.0%, and the orange line represents the validation accuracy. By comparing the blue (training accuracy) and orange (validation accuracy) lines, insights can be gained into how well the model is learning and generalizing. Since the two lines closely follow each other and have similar values, it suggests that the model is generalizing well.

    Figure 13: Accuracy graphs of (a) ResNet-50 model; (b) U-Net model; trained on the grayscale GeoTIFF image dataset

    Fig.14 shows the training loss vs. validation loss on grayscale GeoTIFF images against the number of epochs for (a) the ResNet-50 model and (b) the U-Net model. Fig.14a represents the ResNet-50 model's performance while classifying grayscale images: the blue line represents the training loss, which is approximately 0.097, and the orange line represents the validation loss of 0.077. Fig.14b represents the U-Net model's performance while executing segmentation on grayscale images: the blue line represents the training loss, which is approximately 0.047, and the orange line the validation loss of 0.1.

    Figure 14: Loss graphs of (a) ResNet-50 model; (b) U-Net model; trained on the grayscale GeoTIFF image dataset

    4.3 Comparison of Various Approaches

    This paper examined two major approaches, i.e., classification and segmentation, for identifying smoke plumes from industrial units in order to monitor greenhouse gas emissions and their effect on the Earth's climate. Further, each approach was applied to two categories of input: RGB images and grayscale images. Satellite remote sensing data offers a practical means to routinely detect and monitor these plumes across extensive regions [27].

    Initially, RGB images were trained through a custom layered architecture of the Residual Network (ResNet-50) to classify images into two types, images containing smoke plumes and images not containing any smoke plumes, and through U-Net for segmentation and marking of the boundaries. It was observed that with RGB images the accuracy attained was approximately 94.3%, but there were certain limitations while processing RGB images, such as color variations and complex dimensions. RGB images capture color information, which can vary depending on the lighting or atmospheric conditions. Smoke plumes can exhibit different colors depending on the combustion process or environmental conditions. Moreover, smoke disperses, which can lead to areas with translucent smoke patches that blend with the patterns and colors of the surrounding environment [28–32]. Because of these variations, it becomes quite difficult to set a threshold for smoke detection. In the work carried out in [33], an adaptive weighted direction algorithm has been proposed for fire and smoke detection with reduced loss and false alarms.

    To overcome this challenge, an approach using grayscale and binary images was taken into consideration. Grayscale images reduce the dimensionality, which makes the model more efficient, as there are fewer parameters to be learned. Besides, grayscale images can appropriately capture the varying levels of brightness caused by smoke, making it easier for the CNN to learn relevant features associated with smoke detection. Training the ResNet-50 model on grayscale images not only proved to be computationally efficient but also resulted in a higher accuracy of 96.4%.

    During segmentation, the images were initially compared against the manually created segmentation labels to obtain a foundational idea of the patches of smoke plumes and the contours being formed. Following this, the U-Net model is fed with both the images and the manually computed segmentation labels to achieve an automated function for obtaining precise contours and patches for each image containing smoke plumes. The automated process is more efficient and results in more precise segmentation boundaries.

    Figs.15a and 15b clearly show that using grayscale images over RGB-colored images for classification training leads to a noticeable difference in accuracy. In each epoch, the accuracy for grayscale images is much higher than the accuracy for RGB images. Using grayscale images for training not only increases the accuracy but also improves model performance, as the test accuracy for grayscale images is also higher.

    Although a decent rise in accuracy was observed when grayscale images were trained instead of RGB images, there is not much difference between the segmentation of RGB and grayscale images. Fig.16 shows the comparison graphs: (a) segmentation training accuracy of grayscale vs. RGB and (b) segmentation validation accuracy of grayscale vs. RGB. In these figures, the blue line represents the grayscale images and the orange line represents the RGB images. It can be observed that in both figures the lines travel almost together, and the results obtained are nearly the same in both cases.

    Figure 15: Comparison graphs (a) Classification train accuracy: grayscale vs. RGB; (b) Classification validation accuracy: grayscale vs. RGB

    Figure 16: Comparison graphs (a) Segmentation train accuracy: grayscale vs. RGB; (b) Segmentation validation accuracy: grayscale vs. RGB

    Fig.17 depicts the comparison between the IoU values of segmentation performed on RGB images and on grayscale images over the last 5 epochs, where Fig.17a shows the comparison of the training IoU values and Fig.17b shows the comparison of the validation IoU values. The blue line depicts the Intersection-over-Union values of the RGB images, whereas the Intersection-over-Union values of the grayscale images are represented by the orange line. The graphs can be used to observe the performance of the U-Net model on the two different image datasets. Fig.17a indicates that the IoU values remain stable throughout for RGB images, whereas for grayscale images the values seem to increase initially but end up converging with the RGB result. Similarly, in Fig.17b, the validation IoU is initially higher for grayscale images, but the results of both image datasets are nearly the same. Overall, a quick comparison can be performed between the model performance on the two different image datasets, which can also assist in analyzing the model performance on unseen data while detecting patterns over time.

    Figure 17: IoU comparison graphs (a) Segmentation train IoU: RGB vs. grayscale; (b) Segmentation validation IoU: RGB vs. grayscale

    The ROC curve shown in Fig.18 illustrates the trade-off between the true positive rate (TPR) and the false positive rate (FPR) at different thresholds. This shows how well the model can differentiate between smoke-containing and non-smoke-containing images. The diagonal baseline corresponds to a random classifier, and the depicted points on the curves reflect various threshold values. By comparing the RGB and grayscale curves, the model's performance on GeoTIFF images can be evaluated, enabling an analysis of its efficiency in identifying smoke plumes in both RGB and grayscale images.
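
    A sketch of how such ROC curves can be generated with scikit-learn, assuming the per-image smoke probabilities from each classifier are available as arrays (the variable names below are placeholders):

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import roc_curve, auc

def plot_roc(y_true: np.ndarray, y_score: np.ndarray, label: str) -> None:
    """Plot one ROC curve (TPR vs. FPR) for predicted smoke probabilities."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    plt.plot(fpr, tpr, label=f"{label} (AUC = {auc(fpr, tpr):.2f})")

# Example usage with hypothetical score arrays for the RGB and grayscale models:
# plot_roc(y_test, scores_rgb, "RGB")
# plot_roc(y_test, scores_gray, "Grayscale")
# plt.plot([0, 1], [0, 1], linestyle="--", label="Random classifier")
# plt.xlabel("False positive rate"); plt.ylabel("True positive rate")
# plt.legend(); plt.show()
```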

    Figure 18: ROC curve for the CNN model (ResNet-50) applied on RGB and grayscale images

    4.4 Comparison Table

    In this work, the metric used to compare the results of RGB image training and grayscale image training is accuracy. Since the data in this study is balanced, with each class having the same significance, accuracy is a reliable metric to use. Accuracy is calculated as shown in Eq. (5).

    Table 1 is a compilation of the two types of methods used for classification along with their respective accuracies.It can be observed that converting images to grayscale resulted in a noteworthy increase in the accuracy of the ResNet-50 Model.

    Table 1: ResNet-50 accuracy and loss results

    Table 2 is a compilation of the two types of methods used for segmentation along with their respective accuracies. It can be observed that converting images to grayscale did not produce any significant difference in the accuracy of the U-Net model. It was observed that during classification, grayscale images performed extremely well, with a higher accuracy of 96.4%. Additionally, training was faster and more efficient due to the reduced noise and dimensionality of grayscale images. This work addressed the challenges encountered during the detection and segmentation of smoke. The process is further complicated by varying smoke patterns, the presence of clouds, lighting conditions, and background clutter.

    Table 2: U-Net accuracy and loss results

    5 Conclusion

    The work utilized a three-step process: classifying RGB images, converting the RGB images to grayscale and re-training them through the classification model, and segmenting the images. This work employed the concept of Intersection-over-Union (IoU) as a measure of smoke intensity in an image. By analyzing the IoU value, the level of smoke can be conveniently assessed. By taking advantage of a pre-trained ResNet-50 model on a large dataset of GeoTIFF images, this study was able to distinguish smoke plumes successfully and employed the U-Net model to identify the patches and perform smoke boundary segmentation. This method holds promise for various fields such as managing emergency responses, monitoring the environment, and ensuring fire safety. It provides improved precision, accuracy, and reliability in systems for detecting and segmenting smoke. The deep learning models can accurately distinguish smoke from non-smoke with an accuracy of 96.4%. High performance and accuracy, as well as robustness of the model under various environmental circumstances, are also among the top results of the project. Regarding future work, there are several realistic directions for research and development. By exploring the potential of transfer learning and fine-tuning techniques, the performance of the model can be enhanced on limited labeled data. The integration of multimodal data sources, such as thermal imaging or air quality measurements, can enhance the accuracy and reliability of smoke detection. Furthermore, the extension of this work to real-time monitoring is also realistic. Overall, this work has established a framework for precise smoke segmentation and detection utilizing deep learning methods. To further improve the effectiveness, accuracy, and usefulness of smoke detection and analysis systems, future studies can concentrate on the integration of multi-modal data, transfer learning, real-time applications, and sophisticated deep learning models.

    Acknowledgement: The authors wish to express their thanks to VIT management for their extensive support during this work.

    Funding Statement: The authors received no specific funding for this work.

    Author Contributions: The authors confirm their contribution to the paper as follows: study conception and design: Ananthakrishnan Balasundaram; data collection: Ananthakrishnan Balasundaram, Ayesha Shaik, Japmann Kaur Banga and Aman Kumar Singh; analysis and interpretation of results: Ananthakrishnan Balasundaram, Ayesha Shaik, Japmann Kaur Banga and Aman Kumar Singh; draft manuscript preparation: Ananthakrishnan Balasundaram, Ayesha Shaik, Japmann Kaur Banga and Aman Kumar Singh. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials: Industrial Smoke Plume Data Set, 2020, [online] https://zenodo.org/records/4250706 (accessed on 01 December 2023).

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present work.
