
    Single-image night haze removal based on color channel transfer and estimation of spatial variation in atmospheric light

2023-07-31
    Defence Technology, 2023, Issue 7

Shu-yun Liu, Quan Hao, Yu-tong Zhang, Feng Gao, Hai-ping Song, Yu-tong Jiang, Ying-sheng Wang, Xiao-ying Cui, Kun Gao *

a Key Laboratory of Photoelectronic Imaging Technology and System, Ministry of Education, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China

    b China North Vehicle Research Institute, Beijing, China

Keywords: Dehazing of images captured at night; Chromaticity fusion correction; Color channel transfer; Spatial change-based atmospheric light estimation; DehazeNet

ABSTRACT The visible-light imaging systems used in military equipment are often subjected to severe weather, such as fog, haze, and smoke, under the complex lighting conditions of night, which significantly degrades the acquired images. Currently available image defogging methods are mostly suitable for environments with natural daytime light, and the clarity of images captured under the complex, spatially varying lighting of foggy nights is not satisfactory. This study proposes an algorithm to remove night fog from single images based on an analysis of the statistical characteristics of night-fog scenes. Color channel transfer is designed to compensate for the highly attenuated channel of foggy images acquired at night. The distribution of transmittance is estimated by the deep convolutional network DehazeNet, and the spatial variation of atmospheric light is estimated point by point according to the maximum reflection prior to recover the clear image. Experimental results show that, compared with conventional methods, the proposed method can compensate for the highly attenuated channel of nighttime foggy images, remove the glow of multi-colored, non-uniform ambient light sources, and improve the adaptability and visual quality of night defogging.

1. Introduction

Visible-light imaging systems on airborne, vehicle-mounted, and shipborne photoelectric equipment used in nocturnal military operations may encounter fog, haze, smoke, and other poor weather under complex environmental lighting conditions that seriously degrade the acquired images. With the development of computer vision applications, digital defogging technology now uses image processing methods to obtain clear images without changing the hardware of the original imaging system [1]. Single-frame digital defogging methods based on the atmospheric scattering model [2], such as the dark channel prior (DCP) [3] method proposed by He, have achieved good results on daytime scenes. However, the only source of natural light in the daytime is sunlight. In contrast, the lighting characteristics of nocturnal images vary with space, and artificial light sources of various colors may be present that provide poor illumination and allow less information to be extracted from the resulting images [7]. Moreover, highly attenuated channels are present. For these reasons, algorithms designed to remove fog from daytime images are not suitable for those captured at night.

Current defogging methods for nocturnal images are often designed around mathematical models. For example, Zhang et al. [8] proposed a method to defog nocturnal images based on an imaging model in which point-by-point variation terms replace constant atmospheric light, accounting for the spatially varying lighting of images acquired at night. This method overcomes the limitation of the atmospheric scattering model but does not consider glow, and thus cannot reproduce the glow characteristics of nighttime images. Zhang et al. [9] used this model to propose illumination compensation and color correction for nighttime foggy images prior to fog removal, to obtain an output image with balanced illumination and corrected color. The method estimates ambient light point by point and can handle the spatially varying lighting of nighttime images. However, the output usually contains halo artifacts and color distortion around the glow source. Li et al. [10] added the atmospheric point spread function to represent the glow characteristics of nighttime images and also considered spatially varying ambient lighting. Narasimhan et al. [11] expressed glow as the sum, over the corresponding glow regions, of the product of the shape of the glow source and the direction of its illumination.

The above methods can deal with glow and spatially varying ambient lighting in nighttime images, but the parameters of the objective function must be evaluated from several factors, such as scene blur and depth and the type of light source. It is difficult to obtain the values of all these parameters from a single image, and some researchers have thus manually set them to constant values, which often distorts the output.

Liao et al. [12] proposed expressing the fog-free image as the difference between a foggy image and a fog density image, without calculating the transmittance and atmospheric light, to reduce the distortion caused by parameter estimation. However, the fog density map changes readily with the scene and is unstable. Santra et al. [13] proposed a relaxed atmospheric light model that allows for spatial variation of the ambient lighting. By eliminating the influence of atmospheric light on pixels and defogging point by point, this method can handle the spatially varying lighting and glow of nocturnal images, but it is not sufficiently accurate for images with complex structures. Pei et al. [14] used an improved DCP method for image defogging by taking blurred nighttime images as source images and blurred daytime images as reference images in the lαβ color space. The global color transfer used in this method cannot accurately preserve the color characteristics of the image, usually introduces color distortion into the output, and performs color conversion between images globally, without considering local changes in the scene characteristics of the source image. This leads to the loss of important edge details.

Ancuti et al. [15] proposed the concept of color channel transfer to remove fog from daytime images. It uses a reference image derived from the source image to transmit information from an important color channel to an attenuated one, compensating for the loss of information. However, this method must be combined with other defogging methods, and its performance largely depends on their defogging ability.

To sum up, the physical models underlying current algorithms for removing fog from nighttime images vary greatly in their effectiveness, which limits the adaptability of the algorithms. This paper proposes a robust algorithm for removing night fog from a single image. The main contributions of this study are as follows:

First, the proposed method compensates for the highly attenuated channel of nighttime foggy images through color channel transfer, and translates the problem of defogging nighttime images into one that a daytime defogging network can solve to estimate transmittance. This circumvents the lack of datasets of nighttime foggy images and reduces the color distortion of the defogging results.

Second, the deep convolutional network DehazeNet is used to estimate the distribution of transmittance, and a spatial variation-based atmospheric light model is established by combining this distribution with the maximum reflection prior and the relaxed atmospheric light theory. Atmospheric light is estimated pixel by pixel, yielding a clear image in which the glow caused by non-uniform ambient lighting is suppressed, for an improved visual effect.

We build on a model of foggy-scene imaging and a statistical analysis of foggy images acquired at night, and use a comparison experiment on digitally simulated images, a comparison experiment on empirically acquired scenes, and ablation experiments on a public dataset to verify the effectiveness of the proposed algorithm.

The remainder of this paper is organized as follows: Section 2 introduces related work, including the model used to represent degraded images of foggy scenes and a statistical analysis of such images. In Section 3, we illustrate the proposed method to remove night haze from single images. Section 4 details the experiments on the proposed method, and the conclusions of this study are provided in Section 5.

2. Related work

2.1. Model of degraded images of foggy scenes

The atmosphere contains suspended particles: gases, aerosols, small water droplets, ice crystals, large raindrops, and hail. These particles scatter incident sunlight, and the scattering effect changes under different weather conditions, that is, with different concentrations of suspended particles, degrading image quality, as shown in Fig. 1.

Fig. 1. Model of foggy images.

The light intensity received by imaging equipment in foggy environments includes incident light attenuated by scattering and scattered atmospheric light [2].

We use I(x) = E(d, λ), J(x) = E0(λ), A∞ = E∞(λ), and t(x) = e^(−β(λ)d) to get a general model of foggy images

    I(x) = J(x)t(x) + A∞(1 − t(x))

    where I(x) is the image received by the imaging equipment in fog, J(x) is the fog-free image, A∞ is the atmospheric light at infinity, and t(x) is the transmittance.

The model illustrates the causes of image degradation in foggy environments. Light reflected by objects is attenuated by medium scattering during propagation, and atmospheric light is scattered by the medium into the propagation path such that it participates in imaging. Given I(x), if t(x) and A∞ can be calculated, the fog-free image J(x) can be obtained from the model. This is an ill-posed problem, and certain prior knowledge is needed to obtain J(x).
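As an illustration, the foggy image model I(x) = J(x)t(x) + A∞(1 − t(x)) and its inversion can be sketched in a few lines of NumPy. The function names and the clamped minimum transmittance are our own illustrative choices, not part of the original formulation:

```python
import numpy as np

def apply_haze(J, t, A_inf):
    """Synthesize a foggy image: I(x) = J(x) t(x) + A_inf (1 - t(x))."""
    return J * t + A_inf * (1.0 - t)

def recover_scene(I, t, A_inf, t_min=0.1):
    """Invert the model: J(x) = (I(x) - A_inf (1 - t(x))) / t(x)."""
    t = np.maximum(t, t_min)  # guard against division by near-zero transmittance
    return (I - A_inf * (1.0 - t)) / t
```

With known t(x) and A∞, the inversion recovers J(x) exactly; the ill-posedness arises because in practice both must be estimated from I(x) alone.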

2.2. Statistical analysis of images of scenes involving night fog

2.2.1. Statistical analysis of illuminance and ambient lighting

Atmospheric light in daytime foggy images is evenly distributed, whereas lighting at night comes mainly from artificial sources. This ambient lighting is uneven and may feature a variety of colors, as shown in Fig. 2 [8].

Fig. 2. Characteristics of ambient lighting in foggy images acquired at night.

It is clear that the overall brightness of a foggy image acquired at night is low, and that it features uneven ambient lighting of different colors: bluish, yellowish, and reddish light in brighter areas. Most models used to remove fog from daytime images assume that sunlight is the only source of natural light, and thus that atmospheric light is evenly distributed. However, a model for removing fog from nighttime images must consider uneven brightness and ambient lighting of different colors.

To explain more intuitively how fog particles reduce image contrast in the daytime, and the extent to which the contrast of foggy images declines with increasing haze concentration, we used the gray histograms of clear-foggy image pairs from the O-HAZE dataset [22] as an example. The contrast value is marked in the upper-right corner (gray values normalized to the interval [0, 1]). We also used the gray histograms of clear-foggy image pairs with different haze concentrations from the D-HAZY dataset [21] as an example.

Image contrast is defined as follows:

    C = ∑_δ δ(i, j)² P_δ(i, j)

    where δ(i, j) = |i − j| is the gray difference between adjacent pixels, and P_δ(i, j) is the probability that the gray difference between adjacent pixels equals δ(i, j). The larger the value of C, the higher the contrast and the more varied the levels from black to white. In this paper, the gray difference over four adjacent pixels was calculated.
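As a minimal sketch of this definition, the sum ∑_δ δ² P_δ over four-adjacent pixel pairs equals the mean squared gray difference between horizontal and vertical neighbors. The function name is our own:

```python
import numpy as np

def four_neighbor_contrast(gray):
    """Contrast C = sum over delta of delta^2 * P(delta), computed as the
    mean squared gray difference over 4-adjacent pixel pairs.
    `gray` is a 2-D array normalized to [0, 1]."""
    dh = np.abs(np.diff(gray, axis=1)).ravel()  # horizontal neighbor differences
    dv = np.abs(np.diff(gray, axis=0)).ravel()  # vertical neighbor differences
    d = np.concatenate([dh, dv])
    return float(np.mean(d ** 2))
```

A checkerboard of 0s and 1s attains the maximum contrast of 1, while a constant image has contrast 0, matching the intuition that higher C means more varied gray levels.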

To study the illuminance characteristics of nighttime foggy images and illustrate them with examples, we selected 20 foggy images taken during the day and 20 taken at night from Yahoo's Flickr image dataset for illuminance statistics (values of the V channel in the HSV (hue, saturation, value) color space), as shown in Fig. 3.

Fig. 3. Illuminance statistics of foggy images acquired during the day and at night, from Flickr.

Fig. 4. Distribution of the standard deviation of foggy images acquired during the day and at night.

Owing to the low illumination at night, the illumination distribution of nighttime foggy images is significantly lower than that of daytime foggy images, and such images contain less information. Therefore, the low illumination of nighttime foggy images must be considered in image defogging.

2.2.2. Statistical analysis of loss of detail in foggy images acquired at night

Due to the presence of fog particles, the light reflected by objects is scattered and attenuated, and atmospheric light enters the imaging path. The final image is affected by attenuated light and scattered atmospheric light, which leads to the loss of image detail. To quantitatively explain this loss in daytime foggy images, we collected statistics on the distribution of the standard deviation of image blocks over 1500 clear-foggy image pairs from the O-HAZE and D-HAZY datasets.

The standard deviation of an image block is defined as follows:

    σ = √( (1/n) ∑ᵢ (xᵢ − x̄)² )

    where σ is the standard deviation of the image block, n is the number of pixels in the block, xᵢ is the gray value of pixel i, and x̄ is the mean gray value of the block.
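The block statistic above can be computed vectorized; the block size of 8 pixels here is an illustrative choice, as the paper does not state the block size used:

```python
import numpy as np

def block_std(gray, block=8):
    """Population standard deviation (the 1/n form above) of
    non-overlapping block x block patches of a 2-D gray image."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block       # crop to a whole number of blocks
    patches = gray[:h, :w].reshape(h // block, block, w // block, block)
    return patches.std(axis=(1, 3))           # one sigma per block
```

Images with more detail produce larger per-block standard deviations, which is exactly the statistic compared between daytime and nighttime foggy images below.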

To study the loss of detail in nighttime foggy images and illustrate it with examples, the standard deviations of image blocks of 20 foggy images taken in the daytime and 20 taken at night were selected from Flickr, as shown in Fig. 4.

The standard deviation of daytime foggy images is distributed in the middle range, whereas that of nighttime foggy images lies in a very low range. Compared with daytime foggy images, nighttime foggy images suffer a greater loss of detail and contain less information. Therefore, fog removal for nighttime images places higher demands on recovering the details of the input foggy images.

2.2.3. Statistical analysis of the high attenuation channel in night foggy scenes

To study the per-channel attenuation of nighttime foggy images, image pairs from the O-HAZE dataset were used to gather statistics on the three-channel histogram distributions of daytime foggy and clear images, as shown in Fig. 5. More than 130 images from the dataset of foggy nighttime images collected by Li [23] were also used, as shown in Fig. 6.

Fig. 5. Analysis of attenuation of the three channels in daytime foggy images: (a), (b), (c) Three groups of clear-foggy image pairs and their corresponding three-channel histograms. Top: clear image; Bottom: foggy image; Left: image; Right: corresponding histogram.

Fig. 6. Analysis of three-channel attenuation in nighttime foggy images: (a), (b), (c), (d), (e) Five nighttime foggy images and their corresponding three-channel histograms. Left: nighttime foggy image; Right: three-channel gray images and their corresponding histogram distributions.

The histograms of daytime clear images are distributed over a wide range while those of foggy images are narrow, and the situation is the same for all three channels. That is, in the same scene, image contrast decreases with fog in the daytime, and the three channels degrade to the same degree. However, nighttime foggy images often have a highly attenuated channel, such as the blue channel in Figs. 6(a)-6(e) and the red channel in Fig. 6(b). This is related to artificial lighting and haze scattering at night.

We gathered statistics on the highly attenuated channels of the nighttime foggy image dataset; their histogram is shown in Fig. 7. The gray values of these channels are distributed close to zero with high probability.

Fig. 7. Color channel distribution of foggy images acquired at night (Li [23]).

To sum up, the characteristics of foggy images at night are as follows:

(1) They have poor illumination (as shown in Fig. 3).

(2) They lose a significant amount of detail (as shown in Fig. 4). The standard deviation of image blocks of daytime foggy images is distributed over a larger range, that is, daytime images contain more detail.

(3) They feature uneven ambient lighting of different colors (as shown in Fig. 2): although nighttime foggy images are dim on the whole, they contain bright areas biased toward blue, yellow, and red.

(4) They often contain a highly attenuated color channel (see Figs. 6 and 7). In the daytime, by contrast, the histogram of a clear image spans a wide range, that of a foggy image is narrower, and the three channels attenuate equally (see Fig. 5).

Fig. 8 shows the results of several classic daytime defogging methods, including DCP [5], defogging based on boundary constraints and context regularization [3], non-local color prior-based dehazing (NLD) [4], and extreme reflectance channel prior-based defogging (ERC) [28]. On nighttime foggy images, these methods tend to produce serious color deviation. In particular, areas close to artificial lighting exhibit partial supersaturation, and the overall effect is not ideal. To defog nighttime images, it is necessary to compensate for the highly attenuated color channel according to the characteristics of the images, handle the coexisting uneven and multi-colored ambient lighting, and ensure that the algorithm has good scene migration capability.

Fig. 8. Failure of daytime defogging methods applied to images captured at night: (a) Foggy image acquired at night; (b) Result of DCP; (c) Result of defogging based on boundary constraints and context regularization; (d) Result of NLD; (e) Result of ERC.

3. Methods

The architecture of the proposed DehazeNet-based night fog removal method is shown in Fig. 9. The color channels of the input nighttime foggy image Is(x) are transferred to produce an image I(x) that approaches natural lighting conditions. I(x) is then fed into DehazeNet to calculate the transmittance distribution t(x); meanwhile, I(x) is also used to calculate the atmospheric light A(x) at each pixel using the maximum reflection prior, accounting for the uneven brightness and varied colors of the ambient lighting. The final clear, fog-free image is synthesized by inverting the imaging model point by point: J(x) = (I(x) − A(x)(1 − t(x))) / t(x).
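This per-pixel synthesis step can be sketched as follows; the clipping to [0, 1] and the minimum transmittance are our own numerical safeguards, not part of the stated model:

```python
import numpy as np

def recover_night_scene(I, t, A, t_min=0.1):
    """Per-pixel inversion with spatially varying atmospheric light A(x):
    J(x) = (I(x) - A(x) (1 - t(x))) / max(t(x), t_min)."""
    t = np.maximum(t, t_min)                      # avoid amplifying noise where t ~ 0
    return np.clip((I - A * (1.0 - t)) / t, 0.0, 1.0)
```

Unlike the daytime case, A is here a full per-pixel (and per-channel) array rather than a single constant.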

Fig. 9. Framework of the DehazeNet-based implementation with color channel transfer and estimated spatial variation in atmospheric light.

3.1. High attenuation channel compensation based on color channel transfer

According to the analysis in Section 2.2, nighttime foggy images have lower illuminance, more serious loss of detail, greater spatial variation and more multi-colored ambient lighting, and more highly attenuated channels than daytime foggy images.

To solve the problem of high attenuation channels in nighttime foggy images, a color channel transfer (CCT)-based method is used for preprocessing.

Assuming that the image is quantized to 8 bits, color channel transfer takes place in the CIE L*a*b* color space, where L represents the brightness of a pixel, with values in [0, 100] representing colors from pure black to pure white. The value of a ranges from green to red and that of b from blue to yellow, both over [−128, 127]. Fig. 10 shows a schematic diagram of the CIE L*a*b* color space. Its axes have little correlation among them, so different operations can be applied to different color channels without incurring cross-channel artifacts.

Fig. 10. Schematic diagram of the CIE L*a*b* color space.

Color channel transfer is carried out in three steps. First, the mean value of the original image is subtracted. Second, the image is rescaled by the ratio of the standard deviation of the reference image to that of the original image. Third, the mean value of the reference image is added to the result. Color channel transfer can thus be expressed in the CIE L*a*b* color space, for each channel c ∈ {L, a, b}, as

    I′_c(x) = (σ_R,c / σ_I,c)(I_c(x) − μ_I,c) + μ_R,c

    where μ and σ denote the channel mean and standard deviation of the initial image I and the reference image R.

The advantage of color channel transfer in the CIE L*a*b* color space is that the loss of color can be compensated for automatically, without needing to estimate the direction of color loss. This is because in the CIE L*a*b* color space, red-green and blue-yellow chromaticity information is combined on single axes, while in RGB space the three color channels are independent. Adjusting the mean values of a and b according to an appropriate reference image introduces a color shift along each of the two axes. That is, attenuation in the R or G channel can be compensated for by adjusting the red-green shift, and attenuation in the B channel can be compensated for by adjusting the blue-yellow shift.
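The three-step transfer (subtract source mean, rescale by the standard-deviation ratio, add reference mean) is a Reinhard-style statistics transfer. A minimal per-channel sketch, assuming the arrays are already in CIE L*a*b* (the conversion itself is omitted here):

```python
import numpy as np

def channel_transfer(src, ref):
    """Per-channel statistics transfer:
    out_c = (std_ref_c / std_src_c) * (src_c - mean_src_c) + mean_ref_c.
    `src` and `ref` are H x W x C float arrays (assumed CIE L*a*b*);
    each channel is handled independently, as the decorrelated axes allow."""
    out = np.empty(src.shape, dtype=float)
    for c in range(src.shape[2]):
        s, r = src[..., c], ref[..., c]
        scale = r.std() / max(s.std(), 1e-8)   # guard against flat channels
        out[..., c] = scale * (s - s.mean()) + r.mean()
    return out
```

After the transfer, each output channel has the mean and standard deviation of the reference image, which is exactly how the attenuated channel inherits the reference statistics.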

To compensate for the highly attenuated channels of foggy images at night and eliminate color casts, an effective reference image R(x) needs to be established

where G(x) is a uniform grayscale image whose chromaticity is fixed at zero in the a and b channels of the CIE L*a*b* color space; moving the mean values of color channels a and b to zero yields a tonal correction without changing the mean value of channel L, which affects the brightness of the image. D(x) is the detail layer of the initial image, S(x) is its significance coefficient, and I(x) is the initial image.

Iωhc is the Gaussian-blurred (5×5 kernel) version of the initial image, Iμ is the mean vector of the initial image, Iωhc(x) is the vector at position x of the Gaussian-blurred image, and ‖·‖ is the norm.

Each step and effect of color channel transfer is shown in Fig. 11. The uniform gray image G(x) preliminarily corrects the color of the nocturnal image, the detail layer retains the details of the original image, and the significance coefficient accounts for color change in significant areas. The reference image is obtained by combining these factors. Finally, the initial image is shifted in each color channel toward the reference image in the CIE L*a*b* color space. The result of the color channel shift renders the luminance of the image closer to natural lighting conditions.

Fig. 11. Color channel transfer: (a) Initial image; (b) Uniform grayscale image G(x); (c) Detail layer D(x); (d) Significance coefficient S(x); (e) Reference image R(x); (f) Result of color channel transfer.

Fig. 12 shows the image and histogram distributions before and after color channel transfer. It is clear that the highly attenuated R channel is compensated for in this example.

Fig. 12. Results of compensating for the highly attenuated channel: (a) Foggy image acquired at night; (b) RGB channels and histogram distribution of (a); (c) Result of color channel transfer; (d) RGB channels and histogram distribution of (c).

3.2. Estimating spatial variation in atmospheric light

In the traditional model, ambient illumination is assumed to be spatially consistent, so the atmospheric light A∞ in a daytime image is the same at every pixel in all three channels, and fog is removed accordingly. However, nocturnal images usually contain artificial light sources of multiple colors, such as street lamps, neon lights, and car lights, and the lighting is uneven. It is therefore important to estimate the atmospheric light of nighttime foggy images in a way that accounts for the characteristics of ambient lighting at night. We combine the relaxed atmospheric light model [13] and the maximum reflection prior [27], and use the fast maximum reflection prior to estimate the atmospheric light of nighttime foggy images.

According to the relaxed atmospheric light model, the model of foggy images at night can be defined as

    I(x) = J(x)t(x) + A(x)(1 − t(x))

    Here, A∞ has changed into A(x), which reflects the spatial change in lighting in images acquired at night; that is, the atmospheric light changes with the positions of the pixels.

To further reflect the multi-colored lighting characteristics of images acquired at night, Ref. [27] rewrote the above per color channel c as

    I_c(x) = J_c(x)t(x) + A_c(x)(1 − t(x)), c ∈ {r, g, b}

    We decompose the atmospheric light term into the product of an intensity and a color distribution

    A_c(x) = L(x)η_c(x)

    where L(x) is the intensity of the ambient illumination and η_c(x) its color distribution.

Zhang et al. [27] examined blocks of clear images acquired during the day and found that each color channel had very high intensity at some pixels; that is, the maximum intensity of each color channel in a local patch is high. For a patch Ω(x), this can be expressed as

    max_{y∈Ω(x)} J_c(y) ≈ 1, c ∈ {r, g, b}

The incident light intensity in clear daytime images is uniformly distributed in space and can be assumed to have the fixed value 1. Pixels with the highest local intensity in a particular color channel thus mainly correspond to objects or surfaces with high reflectivity in that channel. Therefore, Eq. (14) is equivalent to

    max_{y∈Ω(x)} R_c(y) ≈ 1, c ∈ {r, g, b}

    where R_c is the scene reflectance.

Areas of objects and surfaces with the highest reflectivity mainly include white (gray) or specular areas, such as the sky, roads, windows, and water, and surfaces of different colors, such as light sources, flowers, billboards, and people. Thus, for most fog-free daytime image blocks, the maximum intensity of each color channel is approximately one, that is, max_{y∈Ω(x)} R_c(y) ≈ 1. These observations are called the maximum reflectance prior. To demonstrate the validity of the prior, Zhang et al. [27] calculated the intensity histogram of the maximum reflectance over 50,000 images. To intuitively explain the calculation of the maximum reflectance distribution, Fig. 13 uses an image block as an example. The V channel is normalized in HSV space, and the maximum reflectance is then calculated in each RGB channel of the block. A maximum reflectivity of one in each channel does not require a single white pixel: a significant number of pixels in each color channel attain the maximum reflectivity. These pixels usually belong to objects that are white, gray, or of varied colors, such as clothes, flowers, forests, and road surfaces.
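The per-block computation illustrated in Fig. 13 can be sketched as follows. This is our own simplified reading of the procedure: intensity is normalized by the HSV value (the channel-wise maximum at each pixel) before taking the channel-wise maximum over the block:

```python
import numpy as np

def max_reflectance(block):
    """Per-channel maximum reflectance of an RGB block: divide each pixel
    by its HSV value (max over its channels) to normalize intensity, then
    take the channel-wise maximum over the block."""
    v = block.max(axis=2, keepdims=True)          # HSV V channel per pixel
    refl = block / np.maximum(v, 1e-8)            # reflectance estimate per pixel
    return refl.reshape(-1, 3).max(axis=0)        # max reflectance per channel
```

Note that a channel can reach a maximum reflectance of 1 from a pixel of any hue that is saturated in that channel; no white pixel is required, as the text above emphasizes.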

Fig. 13. Schematic diagram of the maximum reflectivity of an image block: (a) Initial image block; (b) Normalized image block of channel V; (c) Pixels of channel R with the maximum reflectivity; (d) Pixels of channel G with the maximum reflectivity; (e) Pixels of channel B with the maximum reflectivity.

3.3. Estimating the spatial variation in transmittance based on DehazeNet

Classical defogging methods require certain prior assumptions (such as the dark channel prior [5] and the color prior [4]). Extracting these features is equivalent to convolving the image and applying a non-linear mapping. Deep learning-based defogging using convolutional neural networks has attracted considerable attention in recent years. However, owing to a lack of datasets, few deep learning methods are suitable for removing fog from nighttime images. Considering that a nighttime foggy image after color channel transfer is relatively similar to a daytime one, we can estimate the transmittance under night fog with the lightweight DehazeNet network.

DehazeNet is a trainable end-to-end system based on a convolutional neural network, proposed by Cai et al. [6]. Its structure is shown in Fig. 14 and consists of cascaded convolutional layers, a pooling layer, and non-linear activation functions. Each layer is designed according to established assumptions/priors in image defogging. The transmittance of each point is obtained through feature extraction, multi-scale mapping, local extremum calculation, and non-linear regression.

Fig. 14. Structure of the DehazeNet network.

Step 1. Feature extraction

Inspired by the idea of taking the extreme value of a color channel in the classical color prior [4], the first layer of DehazeNet is composed of maxout units. The maxout function [25] is a feed-forward non-linear activation function. A maxout unit takes the pixel-wise maximum over k affine feature maps to generate a new feature map. The output response is as follows:

    F1_i(x) = max_{j∈[1,k]} f1_{i,j}(x)

The second layer of DehazeNet uses multi-scale convolution, with kernel sizes of 3×3, 5×5, and 7×7. The same number of convolution kernels is used at each of the three scales, and the output response is as follows:
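The maxout unit of the first layer reduces to a channel-wise maximum once the k affine maps have been computed; a minimal sketch, assuming the maps are stacked along the leading axis:

```python
import numpy as np

def maxout(features):
    """Maxout unit: pixel-wise maximum over k affine feature maps,
    where `features` has shape (k, H, W), as in DehazeNet's first layer."""
    return features.max(axis=0)
```

Taking the maximum over feature maps is what lets this layer imitate channel-extremum priors such as the dark channel or color prior.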

Step 2. Local extremum

According to the classical CNN architecture [26], local sensitivity can be overcome by taking the neighborhood maximum at each pixel. Because the local extremum is consistent with the assumption that the transmittance is constant within a local area, this value for the third layer of DehazeNet is calculated as follows:

    F3_i(x) = max_{y∈Ω(x)} F2_i(y)

In the above formula, Ω(x) is the f3×f3 neighborhood centered on pixel x. The number of output dimensions of the third layer is n3 = n2, and the local extremum operation preserves the resolution of the feature map.
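Unlike strided max pooling, this local extremum is a dense, stride-1 maximum filter. A straightforward sketch (edge padding is our own boundary choice, not specified in the text):

```python
import numpy as np

def local_max(feature, f=3):
    """Neighborhood maximum at every pixel (stride 1, edge padding),
    so the feature map keeps its resolution, as in DehazeNet's third layer."""
    r = f // 2
    padded = np.pad(feature, r, mode="edge")   # replicate borders
    h, w = feature.shape
    out = np.empty_like(feature)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + f, x:x + f].max()
    return out
```

Because every output pixel looks at an f×f window, the result has the same shape as the input, which is what distinguishes this step from resolution-reducing pooling.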

Step 3. Non-linear regression

For removing fog from nighttime images, the output value of the last layer should lie in a small range with upper and lower bounds. This is a regression problem, so DehazeNet uses the bilateral rectified linear unit (BReLU) activation, and the output of the fourth layer is defined as:

    F4(x) = min(tmax, max(tmin, W4 ∗ F3 + B4))

In the above equation, W4 = {W4} contains a convolution kernel of size n3×f4×f4 and B4 = {B4} contains a bias. tmax and tmin are the boundary values of the BReLU. Because the transmittance lies in the interval [0, 1], we choose tmin = 0 and tmax = 1.
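The BReLU itself is simply a two-sided clamp applied to the linear response; a minimal sketch:

```python
import numpy as np

def brelu(x, t_min=0.0, t_max=1.0):
    """Bilateral rectified linear unit: clamp activations to [t_min, t_max],
    keeping the predicted transmittance in its valid range."""
    return np.minimum(np.maximum(x, t_min), t_max)
```

With t_min = 0 and t_max = 1 the output is guaranteed to be a valid transmittance, while remaining linear (and hence trainable by gradient descent) in between the bounds.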

Step 4. Network training

It is difficult to obtain clear-foggy image pairs at night. We therefore use synthetic data based on the foggy image model, and perform color channel transfer to compensate for the highly attenuated channel of nighttime images. The essential difference between foggy images acquired during the day and at night lies in the global uniformity of atmospheric light. DehazeNet maps foggy images to transmittance; the network itself does not estimate atmospheric light. It can thus be trained directly on synthetic daytime foggy-clear image pairs. The mapping is learned by minimizing the loss between the transmittance output by the network and the true transmittance. We use the mean-squared error as the objective function and stochastic gradient descent to optimize the loss.

4. Experimental results and analysis

4.1. Experimental setup

To demonstrate the effectiveness of the proposed method, we compared it with typical methods for removing fog from nighttime images, including the method in Ref. [8], the method based on the polychromatic light model [10], the method based on the maximum reflection prior model [27], the method based on FFA-Net (feature fusion attention network for single-image dehazing) [29], and the method based on PSD (principled synthetic-to-real dehazing guided by physical priors) [30]. Objective and subjective performance evaluations were performed on the same test images. The selected dataset was that of Li [23], containing more than 130 nighttime foggy images collected from the Internet; typical examples are shown in Fig. 15. The experiments were divided into four parts: eliminating the influence of ambient lighting, defogging synthetic foggy images, defogging real nighttime foggy images, and an ablation study of the color channel transfer and spatially varying atmospheric light modules. The peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) [16], and color difference index (CIEDE2000) [24] were used for the objective assessment of the results. The Natural Image Quality Evaluator (NIQE) [17], Patch-based Contrast Quality Index (PCQI) [18], Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) [19], and Perception-based Image Quality Evaluator (PIQE) [20] were used to quantitatively evaluate the fog removal performance of each method.

    Fig.15.Estimating the influence of eliminating ambient lighting on the foggy image:(a) Initial map of night fog;(b)Image defogging method based on a new imaging model;(c)Defogging method based on multi-color luminescence;(d)The proposed method(the upper part shows the ambient lighting estimated by the corresponding method and the lower part shows the result of eliminating ambient lighting).

    4.2.Experiment on eliminating the influence of ambient lighting

    As night scenes usually feature artificial light sources of multiple colors, the characteristics of ambient lighting must be considered when removing fog from images acquired at night. To illustrate how the proposed color channel transfer, combined with the module that estimates the spatial variation in atmospheric light, eliminates the influence of environmental lighting, we first present the defogging results for this step and compare them with the nighttime image defogging method based on a new imaging model [8] and the nighttime image defogging method based on the multi-color luminescence model [10]. The results are shown in Fig.15.

    Fig.15 shows that the method proposed in Ref.[8] introduced color artifacts, whereas the defogging method based on the multi-color luminescence model [10] tended to lighten parts of the image, such as the region occupied by the sky. The results of the proposed method were more natural.

    4.3.Experiment on removing fog from synthetic images acquired at night

    To quantitatively verify the effectiveness of the proposed method,we conducted experiments on synthesized foggy images acquired at night according to the model in Section 2.1.The results of comparisons with the defogging methods proposed in Refs.[8,10,29,30] are shown in Fig.16 and Fig.17,respectively.

    Fig.16.Comparison of methods on synthetic nighttime foggy images,Experiment 1: (a) Reference image;(b) Composite foggy image at night;(c) Defogging method proposed in Ref.[8];(d) Image defogging method based on the multi-color luminescence model in Ref.[10];(e) Image defogging method based on FFA-Net in Ref.[29];(f) Image defogging method based on PSD in Ref.[30];(g) The proposed method.

    Fig.17.Comparison of methods on synthetic foggy images acquired at night,Experiment 2: (a) Reference image,(b) Composite foggy image;(c) Defogging method proposed in Ref.[8];(d) Image defogging method based on the multi-color luminescence model in Ref.[10];(e) Image defogging method based on FFA-Net in Ref.[29];(f) Image defogging method based on PSD in Ref.[30];(g) The proposed method.

    Table 1 shows a quantitative comparison of the methods in terms of the PSNR,SSIM,CIEDE2000,NIQE,PCQI,BRISQUE,and PIQE.The proposed DehazeNet method yielded results similar to the reference image in terms of color and illumination.

    Table 1 Assessment of quality of defogging results on synthetic foggy images acquired at night.

    4.4.Experiment on removing foggy images acquired at night

    To verify the performance of the proposed method, foggy images acquired at night (numbered I-VII) were used, as shown in Fig.18. The image contents inside the red boxes of test images I-III in Fig.18 are enlarged in Fig.19.

    Fig.18.Comparison of methods on foggy images acquired at night: (a) Foggy image acquired at night;(b) Results of the method proposed in Ref.[8];(c) Results of the polychromatic luminescence model in Ref.[10];(d) Results of FFA-Net in Ref.[29];(e) Results of PSD in Ref.[30];(f) Results of the proposed method.

    Fig.19.The enlarged image details inside the red boxes in Fig.18 (suffixes with ‘L'means the left red box,‘R'means the right red box):(a)Foggy images acquired at night;(b) Results of the method proposed in Ref.[8];(c) Results of the polychromatic luminescence model in Ref.[10];(d) Results of FFA-Net in Ref.[29];(e) Results of PSD in Ref.[30];(f) Results of the proposed method.

    The method proposed in Ref.[8] could not deal with color distortion, and produced exaggerated intensity and color in some areas. The method based on the multi-color luminescence model [10] tended to over-magnify the color around edges, resulting in color edge artifacts, especially in the region occupied by the sky and the area around the light source. The methods based on FFA-Net in Ref.[29] and PSD in Ref.[30] exhibited serious color deviations. The proposed DehazeNet could correct color distortion, enhance visibility, and obtain more natural defogged images. The magnified comparison in Fig.19 shows that, compared with the methods proposed in Refs.[8,10,29,30], the proposed method generated more natural results. Whereas those methods suffered from overexposure and color deviation, the proposed method handled the glow and incurred only a slight color deviation at the light source. It thus yielded a better color balance and clearer visibility in the area around the street lamp, as shown in Fig.19.

    Fig.20.Results of ablation experiment: (a) Foggy image at night;(b) Experiment A;(c) Experiment B;(d) Experiment C;(e) The proposed method.

    To quantitatively compare the effectiveness of several typical image defogging methods with the proposed DehazeNet, the NIQE was used to evaluate their defogging results. Table 2 shows the average NIQE values over 28 foggy nighttime images collected from Flickr. The results show that the proposed method was superior to the other methods in removing fog from images acquired at night.

    Table 2 NIQE results of the methods on foggy images acquired at night.

    4.5.Ablation experiment

    For this experiment, the system was implemented in MATLAB 2020a on a laptop with a quad-core Intel i5 CPU at 2.30 GHz, 6 GB of RAM, and 64-bit Windows 7.

    The proposed method is composed of three parts: the color channel transfer module, the DehazeNet fog-removal network, and the module that estimates the spatial variation in atmospheric light. The color channel transfer module compensates for the high attenuation channel of foggy images acquired at night, while the atmospheric light module accounts for uneven, multi-color environmental lighting by considering its characteristics. We conducted an ablation experiment to verify the necessity of these two modules.
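The spatially varying atmospheric light module can be illustrated with a short numpy/scipy sketch. Following the spirit of the maximum reflection prior, the ambient light at each pixel is approximated as the per-channel maximum over a local patch around that pixel. This is a simplified illustration only; the patch size and function name are our assumptions, not values from the paper.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def local_atmospheric_light(image, patch=15):
    """Point-by-point ambient light estimate: for each color channel,
    take the maximum intensity inside a patch x patch window centered
    at every pixel (a simplified maximum-reflection-prior estimate)."""
    return np.stack(
        [maximum_filter(image[..., c], size=patch)
         for c in range(image.shape[-1])],
        axis=-1,
    )
```

Unlike a single global atmospheric light value, the resulting map follows local light sources, which is what allows glow around street lamps to be compensated.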

    The experiment consisted of three parts:

    (1) Without color channel transfer preprocessing, DehazeNet was used directly to estimate transmittance, and the spatial variation in atmospheric light was then estimated by the proposed module. The result was recorded as experiment A.

    (2) After color channel transfer preprocessing, DehazeNet was used to estimate transmittance, and global uniform estimation of atmospheric light [5] was used to obtain the defogged result. This was recorded as experiment B.

    (3) Without color channel transfer preprocessing, DehazeNet was directly used to estimate transmittance, and global uniform estimation of atmospheric light [5] was applied. The result was recorded as experiment C.
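The three ablation variants can be expressed as toggles in a small harness. The sketch below is illustrative only: every stand-in function is a deliberately trivial placeholder (our own assumption) for the real module it names, and it serves solely to show how experiments A, B, and C differ in configuration from the full method.

```python
import numpy as np

# Trivial stand-ins (hypothetical) for the real modules:
def color_channel_transfer(img):
    # Shift each channel mean toward the least-attenuated channel's mean.
    means = img.mean(axis=(0, 1))
    return img + (means.max() - means)

def estimate_transmission(img):
    return np.full(img.shape[:2], 0.7)  # constant stand-in for DehazeNet

def estimate_spatial_airlight(img):
    # Per-pixel (spatially varying) ambient light.
    return np.repeat(img.max(axis=-1, keepdims=True), 3, axis=-1)

def estimate_global_airlight(img):
    # Single global atmospheric light value, as in Ref.[5].
    return np.full_like(img, img.max())

def recover(img, t, A):
    t3 = np.clip(t, 0.1, 1.0)[..., None]
    return (img - A) / t3 + A  # invert I = J*t + A*(1 - t)

def dehaze(img, use_cct=True, use_spatial_airlight=True):
    """Ablation harness: (False, True) = experiment A,
    (True, False) = B, (False, False) = C, (True, True) = full method."""
    x = color_channel_transfer(img) if use_cct else img
    t = estimate_transmission(x)
    if use_spatial_airlight:
        A = estimate_spatial_airlight(x)
    else:
        A = estimate_global_airlight(x)
    return recover(x, t, A)
```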

    Fig.20 shows the results of the ablation experiment.

    A comparison of Fig.20(b) with Fig.20(e), and of Fig.20(c) with Fig.20(d), shows that the color channel transfer module could compensate for the high attenuation channel and helped restore a more natural, color-balanced image. A comparison of Fig.20(c) with Fig.20(e), and of Fig.20(b) with Fig.20(d), shows that the module estimating the spatial variation in atmospheric light could remove glow, compensate for uneven and multi-colored environmental lighting, and correct the color distribution, which helped remove fog and improve visibility. A comparison of Fig.20(d) and Fig.20(e) shows that the DehazeNet defogging network without the two modules did not yield satisfactory results. Therefore, the color channel transfer module and the module estimating the spatial variation in atmospheric light are both indispensable to DehazeNet.

    To quantitatively evaluate the necessity and function of the color channel transfer module and the module estimating the spatial variation in atmospheric light, NIQE was used to evaluate the ablation results. Table 3 shows that the algorithm using both modules delivered the best performance in removing fog from images acquired at night. The absence of either module degraded performance, and the module estimating the spatial variation in atmospheric light had the more significant impact on the results.

    Table 3 Results of NIQE for an ablation experiment on foggy images acquired at night.

    5.Conclusions

    This study exploited the statistical characteristics of foggy images acquired at night, especially the differences between foggy images acquired during the day and at night, to propose DehazeNet, a single-frame fog-removal method based on color channel transfer and the estimation of spatial variation in atmospheric light. The aim was to defog images acquired at night under complex ambient lighting. Color channel transfer was designed to compensate for the high attenuation channel of foggy nighttime images, a deep convolutional network was used to estimate the distribution of transmittance, and atmospheric light was estimated point by point according to the maximum reflection prior. Comparative experiments with other defogging methods showed that the proposed method better handles the high attenuation channel and can remove glow due to multi-color, non-uniform environmental lighting. An ablation experiment verified the necessity of the color channel transfer module and the module estimating the spatial variation in atmospheric light for the proposed method.

    Declaration of competing interest

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

    Acknowledgments

    This work was supported by a grant from the Qian Xuesen Laboratory of Space Technology,China Academy of Space Technology (Grant No.GZZKFJJ2020004),the National Natural Science Foundation of China(Grant Nos.61875013 and 61827814),and the Natural Science Foundation of Beijing Municipality (Grant No.Z190018).
