
RF-Net: Unsupervised Low-Light Image Enhancement Based on Retinex and Exposure Fusion

Computers, Materials & Continua, 2023, Issue 10

Tian Ma, Chenhui Fu, Jiayi Yang, Jiehui Zhang and Chuyang Shang

College of Computer Science and Technology, Xi'an University of Science and Technology, Xi'an 710054, China

ABSTRACT Low-light image enhancement methods have limitations in addressing issues such as color distortion, lack of vibrancy, and uneven light distribution, and they often require paired training data. To address these issues, we propose a two-stage unsupervised low-light image enhancement algorithm called Retinex and Exposure Fusion Network (RF-Net), which can overcome the over-enhancement of high-dynamic-range regions and the under-enhancement of low-dynamic-range regions seen in existing enhancement algorithms. By training with unpaired low-light and regular-light images, the algorithm can better manage the challenges posed by complex real-world environments. In the first stage, we design a multi-scale feature extraction module based on Retinex theory, capable of extracting details and structural information at different scales to generate high-quality illumination and reflection images. In the second stage, an exposure image generator is designed around the camera response mechanism function to acquire exposure images containing more dark-region features, and the generated images are fused with the original input images to complete the low-light image enhancement. Experiments show the effectiveness and rationality of each module designed in this paper. The method reconstructs contrast details and color distribution, outperforms current state-of-the-art methods in both qualitative and quantitative metrics, and shows excellent performance in the real world.

KEYWORDS Low-light image enhancement; multi-scale feature extraction module; exposure generator; exposure fusion

    1 Introduction

With the rapid development of artificial intelligence, low-light image enhancement technology has been widely applied for pre-processing in advanced visual tasks. However, low-light images often suffer from detail degradation and color distortion due to the shooting environment and technical limitations. Balancing the enhancement effect while maintaining image realism is a challenging problem in low-light image enhancement. These problems can significantly affect the performance of advanced downstream vision tasks. Therefore, improving visual quality and recovering image details have become important research topics.

In the process of image enhancement, it is important to strike a balance between preserving image details and maintaining overall image quality. This requires preserving the original details in well-exposed areas while appropriately brightening the underexposed areas to achieve a high-quality image. In addition, attention must be paid to balancing the brightness and contrast of the image during enhancement. If only the brightness is increased globally, the texture details in the image may be lost. Therefore, both brightness and contrast changes must be considered when enhancing an image to ensure its quality. Traditional methods [1–3] often required a large amount of manual parameter adjustment to improve image quality; however, these methods have significant limitations, as their effectiveness largely rests on assumptions regarding the threshold range. Owing to advancements in deep learning, supervised training of deep models on low-light/normal-light image pairs has become the main approach in algorithm research. The accuracy of supervised learning methods depends on paired training datasets. However, it is technically difficult to obtain paired datasets of the same scene. In addition, such algorithms generalize poorly and cannot be effectively applied to real-scene images. In recent years, unsupervised image enhancement algorithms have emerged that eliminate reliance on paired datasets and achieve good enhancement results. For example, Deep Light Enhancement without Paired Supervision (EnlightenGAN) [4] uses unpaired datasets to train and implement low-light image enhancement. Zero-Reference Deep Curve Estimation (Zero-DCE) [5] achieves enhancement using scene images with different illumination intensities. Although these methods eliminate the dependence of deep learning techniques on paired datasets, the quality of enhancement remains a challenge. EnlightenGAN [4] may produce artifacts, an overall uneven picture, and color-recovery errors when enhancing dark areas. Images enhanced using Zero-DCE [5] may exhibit whitish tones and less vibrant colors. Nevertheless, these methods exhibit stronger generalization ability than supervised methods and reduce the requirements for dataset collection.

To address these issues, we propose an unsupervised enhancement network called RF-Net, which combines Retinex with exposure fusion. The network comprises two stages: image decomposition and exposure fusion. In the first stage, to fully consider contextual and global information, we employ the powerful image-generation capabilities of a generative adversarial network and design a multi-scale feature-extraction module to produce high-quality illumination and reflection images. Specifically, our network uses the multi-scale feature extraction module to perceptively capture features at different scales, preserve more detailed information, and avoid information loss between layers by using residual connections to transmit information from the current layer to the next. Most existing Retinex-based image enhancement methods obtain illumination and reflection component information matrices and generate enhanced images through calculations, which not only involves high computational complexity but also results in artifacts when processing shadowed parts of dark areas. After obtaining the illumination and reflection images, an exposure image generator with correction coefficients is designed using the camera response function in the second stage to generate the exposure image and fuse it with the original low-light image to complete low-light image enhancement. The results obtained using the proposed method are shown in Fig. 1.

In summary, the main contributions of this paper are as follows:

1. We devised a multi-scale feature-extraction module to produce high-quality illumination and reflection images. We incorporated a Coordinate Attention (CA) module that includes position-encoding information into the Markov discriminator. This module builds on channel attention and pays closer attention to the location information of the generated image, allowing for more accurate discrimination of texture details and improving the quality of the generated images.

2. We improved the original Retinex formulation and designed an exposure image generator module with correction coefficients by referring to the camera response mechanism function. This module can generate images with different exposure levels while fusing illumination and reflection images.

3. We proposed a novel unsupervised image enhancement method called RF-Net, which exhibits excellent performance on several test datasets and generalizes to real-world low-light conditions.

Figure 1: Representative enhancement results of RF-Net, which improves over-enhancement in the high dynamic range and under-enhancement in the low dynamic range

    2 Related Work

In this section, we review research on low-light image enhancement using traditional and deep learning methods.

    2.1 Traditional Methods

Histogram equalization is a classical image-enhancement method that enhances the contrast of an image by adjusting its brightness distribution. However, histogram equalization tends to cause image noise and over-enhancement problems. Some methods further improve the enhancement effect by setting a threshold to divide the image into blocks [6], dividing the clipping points into chunks for processing [7], or combining equalization with adaptive gamma correction [8] to obtain a more reasonable S-shaped mapping function. However, these methods still lead to over-enhancement and amplification artifacts: contrast is enhanced to an extent, but details are lost. Retinex theory [9] posits that an image is composed of two parts, reflection and illumination, and enhances image quality by separating these two parts. However, this approach is ineffective for images that are too dark or too bright. Accordingly, researchers have proposed various improvement schemes [1–3]. These presuppose that spatial illumination changes slowly, but the processing is prone to halation and inaccurate color recovery. To reduce the computational cost of Retinex theory, researchers proposed Low-Light Image Enhancement via Illumination Map Estimation (LIME) [10], which obtains the local illumination distribution of the image by analyzing local information and applies it to the reflection component to obtain the enhanced image. Compared with the Retinex method, LIME reduces the occurrence of halo artifacts during processing. However, LIME has a limited ability to distinguish between the foreground and background of an image, which can result in over-enhancement of the foreground and noise in the background.
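To make the contrast between global equalization and the clip-limited, block-based variants concrete, the following is a minimal OpenCV sketch in Python (the file name and parameter values are our own illustrative choices, not settings from the cited works):

    import cv2

    # Global histogram equalization: strong contrast gain, but noise-prone.
    gray = cv2.imread("low_light.jpg", cv2.IMREAD_GRAYSCALE)
    global_eq = cv2.equalizeHist(gray)

    # Clip-limited adaptive equalization: the image is processed in tiles and
    # each tile's histogram is clipped, which limits over-enhancement.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    local_eq = clahe.apply(gray)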

    2.2 Deep Learning Methods

Researchers have widely employed deep learning for image enhancement over the past decade, achieving promising results. In the following sections, we review the current state of research on fully supervised, semi-supervised, and unsupervised approaches.

    2.2.1 Fully Supervised Methods

The use of paired datasets to train network models has been widely adopted because of the one-to-one correspondence between the training data. RetinexNet [11] was the first to combine Retinex theory with convolutional neural networks for the low-illumination image enhancement problem, designing a decomposition module and an enhancement module and thereby demonstrating for the first time the feasibility of applying Retinex in deep learning. Kindling the Darkness (KinD) [12] designed global and local enhancement modules, where the global module extracts global luminance based on Retinex decomposition and the local module enhances image texture details. The global and local modules interact through an adaptive mechanism: the difference between the image generated by the global enhancement branch and the original image is used to calculate the weight of each pixel, and these weights are then passed to the local enhancement branch to achieve contrast enhancement. KinD++ [13] builds on KinD and improves the training speed and accuracy of the model by designing group learning and back-propagation mechanisms. Global Illumination-Aware and Detail-Preserving Network (GLADNet) [14] generates a global illumination prior by designing a global illumination estimation module, which is then combined with the original input image to produce an enhanced image. Low-Light Image Enhancement with Normalizing Flow (LLFlow) [15] uses adaptive weights to control the effects of the flow and global constraints; it also uses a deep learning model to learn the flow and the image to obtain an enhanced image. Self-Calibrated Illumination (SCI) [16] reduces computational costs by designing an adaptive correction illumination module that ensures the results of each training phase converge to the final one. Reference [17] designs a generative adversarial network containing dual attention units that can effectively inhibit the artifacts and color reproduction bias generated during enhancement. Transformer Photo Enhancement (TPE) [18] uses a pure transformer architecture to implement image enhancement based on multi-stage curve adjustment. The Retinex-based deep unfolding network (URetinex-Net) [19] decomposes the input image by designing a continuous optimization model with mutual feedback; to optimize the decomposition results, an implicit prior regularization model is used, and a data initialization module, a specific illumination intensity module, and a denoising detail-retention module are designed. Illumination Adaptive Transformer (IAT) [20] implements low-light enhancement by designing a lightweight transformer model that uses attention-query techniques to represent and adjust the parameters associated with the image signal processor (ISP).

    2.2.2 Semi-Supervised Methods

These methods can learn better feature representations using both paired and unpaired data. First, researchers train on paired datasets to obtain prior knowledge, and then use the trained model as pre-training weights for unpaired training. Following this paradigm, Deep Recursive Band Network (DRBN) [21] introduces a recursive network architecture that uses the information of the highlight and shadow regions of the image, constructs a low-rank matrix and a sparse matrix to represent the brightness and structural information of the image, inputs the two matrices into two branches of the network for feature extraction, and finally merges them to obtain an enhanced image. DRBN [22] utilizes a "band representation" technique to enhance low-light images. This method decomposes a low-light image into multiple bands and trains a neural network with a small amount of labeled data to learn how to enhance each band. Thus, this method retains the detail and texture information of a low-light image, thereby enhancing its quality.

    2.2.3 Unsupervised Methods

Because obtaining paired datasets can be difficult, unsupervised methods have become a main approach for image-enhancement tasks. This approach improves generality and applicability to many real-world scenarios. Exposure Correction Network (ExCNet) [23] is the first unsupervised enhancement method; it uses the learning ability of neural networks to estimate the most suitable "S" curve for a low-light image and uses this curve directly to enhance the image. The low-light image enhancement network (LEGAN) [24] enhances images through a carefully designed light perception module and a loss function that solves the overexposure problem. EnlightenGAN [4] was the first to complete unsupervised image enhancement using unpaired datasets. This design overcomes the previous reliance on paired datasets by establishing unpaired mappings between low-light and non-matching images and employing global-local discriminators and feature-retention losses to constrain the feature distance between the enhanced and original images. Zero-DCE [5] uses a neural network to fit a brightness mapping curve and then generates an enhanced image based on the curve; unsupervised low-light image enhancement is achieved by designing a multi-stage high-order curve with pixel-level dynamic range adjustment. Based on this, a lightweight version, Zero-DCE++ [25], was developed. Generative adversarial network and Retinex (RetinexGAN) [26] uses a Retinex-based generative adversarial network to design a decomposition network with a simple two-layer convolution and achieves low-light image enhancement through image fusion. Restoration of Underexposed Images via Robust Retinex Decomposition (RRDNet) [27] achieves enhancement by designing a three-branch decomposition network and iterative loss functions to decompose the image into reflection, illumination, and noise components. Retinex-inspired Unrolling with Architecture Search (RUAS) [28] uses neural architecture search to find an effective and lightweight set of networks for low-light enhancement. Retinex Deep Image Prior (RetinexDIP) [29] proposes a Retinex-based generation strategy that reduces the coupling between the two components of the decomposition, making it easier to adjust the estimated illumination to perform enhancement.

    3 Proposed Method

In this section, the first subsection introduces the overall structure of the RF-Net network and provides a hierarchical description of the first- and second-stage exposure-generation fusion networks. The second subsection describes the designed exposure image generator, and the third describes the loss functions.

    3.1 Network Architecture Design

The proposed RF-Net is a two-stage network with the overall structure shown in Fig. 2. In the first stage, two coupled generator frameworks are used: the R network generates reflection images, whereas the L network generates illumination images. First, by cascading the maximum, minimum, and mean values of each channel of the original low-light image as inputs to the network, a multi-scale feature extraction module was designed to maintain the global consistency of illumination and contextual information. After basic feature extraction, the resulting features are concatenated and mapped to high-dimensional information before being reduced in dimension through convolution, thereby improving image quality while learning complex features. For the discriminator, we used a VGG-based network structure in which the original Markov discriminator maps the input to an N×N matrix, such that each point in the matrix corresponds to the evaluation value of a region. To enhance the decomposition effect and discrimination accuracy of the network, we incorporated a CA attention mechanism with positional information [30]. This further enhances the network's ability to perceive and understand images and spatial information, learn useful features, and suppress irrelevant ones, thereby improving the discriminator's ability to accurately judge texture details.
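As an illustration of this first-stage design, the following is a minimal PyTorch sketch of an inception-style multi-scale block with a residual connection, fed with the cascaded max/min/mean inputs described above; the kernel sizes, channel widths, and module names are our own assumptions rather than the paper's exact configuration:

    import torch
    import torch.nn as nn

    class MultiScaleBlock(nn.Module):
        """Inception-style multi-scale feature extraction with a residual
        connection (a sketch; branch kernel sizes are assumptions)."""
        def __init__(self, channels):
            super().__init__()
            # Parallel branches perceive features at different scales.
            self.b3 = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(True))
            self.b5 = nn.Sequential(nn.Conv2d(channels, channels, 5, padding=2), nn.ReLU(True))
            self.b7 = nn.Sequential(nn.Conv2d(channels, channels, 7, padding=3), nn.ReLU(True))
            # A 1x1 convolution reduces the concatenated high-dimensional features.
            self.reduce = nn.Conv2d(3 * channels, channels, 1)

        def forward(self, x):
            multi = torch.cat([self.b3(x), self.b5(x), self.b7(x)], dim=1)
            # The residual connection passes the current layer's information
            # to the next layer, avoiding information loss between layers.
            return x + self.reduce(multi)

    # The network input cascades the low-light image with its per-channel
    # maximum, minimum, and mean maps (six channels in total).
    img = torch.rand(1, 3, 300, 300)
    mx, _ = img.max(dim=1, keepdim=True)
    mn, _ = img.min(dim=1, keepdim=True)
    x = torch.cat([img, mx, mn, img.mean(dim=1, keepdim=True)], dim=1)
    features = MultiScaleBlock(6)(x)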

Figure 2: Overview of RF-Net. First, the low-light image and the corresponding maximum, minimum, and average grayscale maps are input. The R and L networks have the same structure and are used to acquire the reflection and illumination components, respectively. Then the exposure image is acquired by the exposure generator. Finally, the low-light image is fused with the exposure image

The second stage also consists of two coupled networks that use the original input image and the output of the first-stage network as inputs. At this stage, the output information from the first stage is processed by the exposure image generator module to create the initial exposure image; the details of this module are discussed in Section 3.2. The exposure image and the original input image are then separately fed into the two branch networks [31]. Each branch consists of a feature extraction module (FE), a super-resolution module (SR), and a feature fusion module (CF). The FE module consists of two convolution layers; SR uses the Convolutional Networks for Biomedical Image Segmentation (U-Net) structure to learn more advanced features; and these first two modules together extract advanced features from the input low-dynamic-range images. SR was also employed on the original input image before fusion to ensure the accurate extraction of high-level image features. The final module is the image fusion module, which combines the super-resolution outputs of the two coupled networks and generates the output by weighting the super-resolution of the original image and the outputs of the two coupled blocks.
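To make the second-stage data flow concrete, here is a minimal sketch under our own assumptions: the FE depth matches the two convolution layers described above, while the SR and CF bodies are simplified stand-ins for the U-Net and weighted-fusion modules:

    import torch
    import torch.nn as nn

    class Branch(nn.Module):
        """One coupled branch: FE (two conv layers) followed by SR."""
        def __init__(self, ch=16):
            super().__init__()
            self.fe = nn.Sequential(                      # feature extraction (FE)
                nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(True),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(True))
            self.sr = nn.Sequential(                      # stand-in for the U-Net SR module
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(True),
                nn.Conv2d(ch, 3, 3, padding=1))

        def forward(self, x):
            return self.sr(self.fe(x))

    class SecondStage(nn.Module):
        """Fuses the low-light branch and the exposure branch (CF)."""
        def __init__(self):
            super().__init__()
            self.low, self.exp = Branch(), Branch()
            self.cf = nn.Conv2d(9, 3, 1)                  # learned weighting of both branches + input

        def forward(self, low_light, exposure):
            a, b = self.low(low_light), self.exp(exposure)
            return self.cf(torch.cat([a, b, low_light], dim=1))

    out = SecondStage()(torch.rand(1, 3, 300, 300), torch.rand(1, 3, 300, 300))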

    3.2 Exposure Generation Module Design

We employed a design that combines Retinex theory with the camera response mechanism to create an exposure image generator. In the original Retinex theory, the input image S is represented as the element-wise product of the illumination and reflectance components, which is expressed as Eq. (1):

S = R × L    (1)

The input image is denoted as S, the reflected image as R, and the illuminated image as L, and R × L denotes pixel-wise multiplication. However, numerous experiments have shown that the results obtained from the original Retinex equation are over-enhanced and lose detail owing to noise and uneven illumination. Therefore, we improved the original formula by first inverting the source illumination image using Eq. (2) to better utilize the content in the relatively overexposed regions.

The improved Retinex formula is represented by Eq. (3).

where L and R denote the illuminated and reflected images, respectively, S denotes the original input low-light image, and S_ou denotes the output result of the improved Retinex formula.

To maintain a balance between brightness and contrast, we redesigned the improved Retinex formula using a camera response mechanism and proposed an exposure image generator. Here, we refer to the camera response function described in [32], where the model parameters of the camera response mechanism are determined by the camera parameters α, β, and k. Parameter k is a correction factor that can be adjusted to obtain images with different exposure levels. As k increases, a brighter exposed image is acquired and the details in the low-light areas become more significant; however, when k is too large, more detailed information is lost because the exposure level is too high. Therefore, we limited the value of k to the range 2–6. The equation for generating the initial exposure image by combining the improved Retinex and camera response functions is expressed as Eq. (4).

where α and β are fixed parameters suitable for most cameras, with α set to −0.3293 and β set to 1.1258. S_eo represents the output exposure image, and different values of k directly affect the resulting output S_eo.
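The α and β values above match the widely used beta-gamma camera response model, so a plausible closed form for the exposure generator, under our assumption that Eq. (4) follows that cited model, is g(P, k) = e^(β(1−k^α)) · P^(k^α) applied to the improved-Retinex output S_ou. A minimal Python sketch:

    import numpy as np

    ALPHA, BETA = -0.3293, 1.1258  # fixed camera parameters from the paper

    def exposure_generator(img, k):
        """Beta-gamma camera response model g(P, k) on an image in [0, 1];
        k in [2, 6] per the paper (larger k -> brighter exposure). Using
        this exact closed form for Eq. (4) is our assumption."""
        gamma = k ** ALPHA
        return np.clip(np.exp(BETA * (1.0 - gamma)) * np.power(img, gamma), 0.0, 1.0)

    # Generate several exposure levels from the improved-Retinex output S_ou.
    s_ou = np.random.rand(400, 640, 3)  # placeholder for the Eq. (3) output
    exposures = [exposure_generator(s_ou, k) for k in (2, 4, 6)]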

    3.3 Loss Function

Adversarial loss: As shown in [11,12], when a low-light image is decomposed into illumination and reflection components, the illumination component is approximately the same as that decomposed from a normally exposed image, differing only in brightness, and the reflection component is the same as the reflection component decomposed from the normally exposed image, which can be turned into a high-quality reflection component by noise reduction. This means that, under Retinex decomposition, the distribution of normally exposed images is very similar to that of the original images. Therefore, the original function [33] was used as an adversarial loss function to train the generator. In practical applications, the generated fake samples and the real input samples are encoded as zero and one, respectively. Discriminators for the illumination and reflection maps were trained using squared error as the objective function. Our adversarial losses are defined by Eqs. (5)–(8).

where g_L and g_R denote the generated illumination and reflection images, respectively, the reference illumination is taken as the average grayscale value of each channel, y represents the ground-truth reflection map, and D_L and D_R represent the discriminators for the illumination and reflection maps, respectively.
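Given the zero/one encoding and the squared-error objective just described, Eqs. (5)–(8) plausibly take the least-squares GAN form below (a hedged reconstruction, shown in LaTeX for the illumination branch; the reflection branch with D_R and g_R is analogous):

    % Assumed least-squares form of the discriminator and generator losses:
    \mathcal{L}_{D_L} = \mathbb{E}_{y}\big[(D_L(y) - 1)^2\big]
                      + \mathbb{E}\big[(D_L(g_L) - 0)^2\big]
    \mathcal{L}_{G_L} = \mathbb{E}\big[(D_L(g_L) - 1)^2\big]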

    However, these aircraft had the familiar P-51 black paint with white stripes on the wings and were equipped with the wing tanks for extra range. Suddenly, they dropped their tanks just off to our right, and we looked around for German fighters in the area. We found them, when the whole formation of P-51s turned out to be Luftwaffe ME-109s that turned in to us with their cannons10 blazing! We narrowly missed being rammed11 by two of them that just barely passed over us.

Perceptual loss: Using non-matching images for unsupervised image enhancement implies that the pixels in the training images do not correspond one-to-one; the same pixel may have different semantics in different images. Therefore, we need a loss function that addresses non-corresponding pixel positions, and the perceptual loss function serves this purpose. This function is typically defined on the activation layers of a pre-trained network. By computing the distance between activation-layer features, it can effectively quantify the fundamental attributes of an image as well as the differences between its detailed features and high-level semantic information. This lays the foundation for generating high-quality images. Unlike common perceptual losses, this study adopts the concept of perceptual loss from [31]. This implementation not only maintains luminance consistency between the original and reference images but also recovers details better. The perceptual loss function used in this paper is given by Eq. (9).

where φ_i denotes the features extracted by the pre-trained VGG19 network, and C_i, H_i, and W_i denote the number of channels and the height and width of the feature map in layer i. y denotes the unpaired real image information learned by the discriminator, and g denotes the image generated by the generator.

The total loss function of the network is shown in Eq. (10).
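From the definitions above, Eq. (9) and Eq. (10) plausibly take the standard forms below (our reconstruction in LaTeX; the choice of the L1 norm and the weighting λ are assumptions):

    % Assumed form of Eq. (9): VGG19 feature-space distance
    \mathcal{L}_{per} = \sum_{i} \frac{1}{C_i H_i W_i}
        \left\lVert \phi_i(y) - \phi_i(g) \right\rVert_1
    % Assumed form of Eq. (10): weighted sum of the loss terms
    \mathcal{L}_{total} = \mathcal{L}_{G_L} + \mathcal{L}_{G_R} + \lambda \mathcal{L}_{per}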

    4 Experiment

    4.1 Experimental Details

We trained the RF-Net model on 914 randomly selected images from the unpaired dataset provided in [4] and tested it on various datasets, including NPE [34], DICM [35], LIME [10], MEF [36], and VV. These datasets contain various low-light and unevenly exposed images from both indoor and outdoor settings. To better demonstrate the performance of the enhancement algorithm, we selected images with significant exposure differences from each dataset as the test set, which made the test more challenging. The deep learning framework used was PyTorch, and the hardware was a Tesla A100.

To ensure that training could fully utilize the computing and storage resources of the machine and obtain better results, the training images were of size 640×400, and we randomly cropped the training data into patches of size 300×300. The batch size was set to one. To increase data diversity, data augmentation was performed, including random flipping, rotation, and cropping. This allowed the network to adapt better to various image scenes. The Adam optimizer was used to optimize the network, and the learning rate was set to 1e-4. Our network achieved good enhancement results within 50 training epochs.
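A minimal sketch of this training configuration (the transform pipeline and the placeholder module are illustrative assumptions, not the paper's exact code):

    import torch
    import torch.nn as nn
    from torchvision import transforms

    # Augmentation: random cropping, flipping, and rotation, as described.
    augment = transforms.Compose([
        transforms.RandomCrop(300),          # 300x300 patches from the 640x400 inputs
        transforms.RandomHorizontalFlip(),
        transforms.RandomVerticalFlip(),
        transforms.ToTensor(),
    ])

    # Placeholder standing in for the RF-Net generator (hypothetical).
    model = nn.Conv2d(3, 3, 3, padding=1)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # batch size 1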

    4.2 Performance Evaluation

To demonstrate the advantages of our proposed RF-Net, we compared our method with 10 other advanced methods: RetinexNet [11], EnlightenGAN [4], KinD [12], KinD++ [13], RUAS [28], Zero-DCE [5], LLFlow [15], SCI [16], IAT [20], and GLADNet [14]. Among these, RetinexNet, KinD, KinD++, RUAS, LLFlow, SCI, IAT, and GLADNet are supervised enhancement methods, whereas [4] and [5] are unsupervised. To ensure fairness, tests were conducted using the network parameters recommended in each study. Because we were unable to train the supervised models on unpaired datasets, we evaluated them using the pre-trained models released with the original papers. For unsupervised methods, if the original study used unpaired datasets for training, we used the dataset provided by [4]; if the method used images with different exposures for training, we used the dataset provided by [5]. Finally, the optimal model was selected for testing. These comparisons enabled us to evaluate the performance of the RF-Net method and demonstrate its competitiveness in image enhancement.

    4.2.1 Qualitative Comparison

Figure 3: Qualitative comparison of RF-Net with other advanced algorithms. See the patch area for more detailed information


Figure 4: Qualitative comparison of RF-Net with other advanced algorithms. See the patch area for more detailed information

Figure 5: Qualitative comparison of RF-Net with other advanced algorithms. See the patch area for more detailed information

    4.2.2 Quantitative Comparison

Subjective evaluation alone may not be sufficient for determining the degree of detail retention during image enhancement. To further demonstrate the feasibility of the proposed method, we conducted quantitative comparisons. As we used unsupervised methods for model training, we could not evaluate the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) of the enhanced images against ground truth, as supervised methods do. Therefore, we used no-reference image quality assessment metrics to compare the RF-Net method with its competitors. The metrics evaluated were the Natural Image Quality Evaluator (NIQE) [37] and the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) [38]. NIQE is a natural-image-based evaluation metric that compares algorithm processing results with a model calculated from natural scenes. BRISQUE is a no-reference quality score calculated from natural scene images with similar distortions. The metrics used to evaluate the performance of RF-Net compared with the other algorithms are listed in Table 1. Consistent with the qualitative evaluation, the following issues were observed: RetinexNet [11] resulted in inaccurate color restoration with color bias; KinD [12] did not significantly enhance dark areas; KinD++ [13] introduced artifacts while enhancing dark areas; RUAS [28] over-enhanced the image, causing loss of information; LLFlow [15] produced incorrect color restoration; SCI [16] and IAT [20] over-enhanced the image and provided insignificant enhancement in dark areas; and Zero-DCE [5] had lower metrics than the other unsupervised methods, possibly due to the whitish images produced by its enhancement, as noted in the qualitative analysis. As shown in the graphs, the enhancement results of RF-Net validated this. In summary, because both NIQE and BRISQUE are based on local image statistics, the quantitative evaluation better illustrates the small differences between algorithms that remain after image enhancement.

Table 1: The NIQE (↓) and BRISQUE (↓) scores are shown, with lower scores indicating better image quality and richer information content. The averages of the test image metrics are taken for each of the five datasets, and the five averages are then averaged again. The best result is shown in red and the second-ranked result in blue

4.2.3 Human Subjective Evaluation

We also conducted a human subjective visual evaluation to further quantify the subjective quality of RF-Net compared with the other methods. We randomly selected 20 original low-light images from the test sets (NPE [34], DICM [35], LIME [10], MEF [36], and VV) and applied four state-of-the-art methods (EnlightenGAN [4], Zero-DCE [5], SCI [16], and IAT [20]) to each image separately. We invited twelve reviewers to independently score the results of the five algorithms, including RF-Net. The reviewers primarily considered the following aspects: 1. Whether the results contained artifacts in over- or under-enhanced areas; 2. Whether the color restoration in the results was accurate (e.g., whether the colors were distorted); and 3. Whether noise in dark areas was amplified, and whether there was an obvious loss of texture details. As can be observed from the statistics in Fig. 6, RF-Net achieved a higher subjective evaluation score on the reviewed images.

Figure 6: Overview of the human subjective evaluation. In each graph, the x-axis indicates the observed quality rank of the five algorithms (1 for the best and 5 for the worst) and the y-axis indicates the number of images receiving each rank for each algorithm. RF-Net shows the best performance

    4.3 Ablation Study

To demonstrate the effectiveness of the modules used, the following ablation studies were conducted: removing CA, separating inception [39] from residual connectivity [40], and directly fusing the illumination and reflection components.

In Ablation Study 1, we experimented with the multi-scale module, residual connection, and CA attention separately, as shown in Fig. 7. From the visual results, zooming in on the person above the house in the first image reveals that using only inception and the Markov discriminator leads to a blurred result, whereas using both inception and residual connections with the Markov discriminator produces more vivid colors and preserves more detailed information owing to the residual connections. Furthermore, using inception, residual connections, and the improved Markov discriminator leads to a more realistic restoration of texture details, because the improved discriminator distinguishes real from fake more clearly. The reddish hue in our image is due to the reflection image generated by the first stage of the network, which does not include the illumination image or the fusion module.

Figure 7: Ablation study for each module. (a) Input, (b) w/o Multiscale, (c) w/o Residual, (d) w/o CA

In Ablation Study 2, we compared our method with the direct fusion of the illumination and reflection components, as shown in Fig. 8. The color of the sky in the second column of the first row was inaccurately restored, and the face of the person in the second column of the second row was overexposed. The proposed method in the third column performs significantly better than the direct fusion method.

In Ablation Study 3, we tested the designed exposure generator module by adjusting the value of k to obtain images with different exposure levels and performing exposure fusion. As shown in Fig. 9, for different input images we acquired images with varying exposure levels for fusion, with k set differently in different scenarios. For example, in images without exposed areas, k is usually set between 4 and 6, whereas in images with exposed areas, k is typically set between 2 and 4; a sketch of this rule follows below. This approach achieved the best enhancement when the exposure images were fused with the original low-light images. Finally, we conducted a quantitative evaluation of the three sets of ablation studies. The results in Table 2 show that combining inception with the residual block produced superior performance. In addition, the exposure fusion in the second stage of RF-Net exhibited clear advantages; a comparison of the second and fourth experimental rows demonstrates the effectiveness of RF-Net.
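One hypothetical way to express this manual k-selection rule in code (this helper is our illustration, not part of RF-Net, and the saturation threshold is an arbitrary choice):

    import numpy as np

    def choose_k(img, thresh=0.9):
        """Hypothetical helper: pick the correction factor k following the
        manual rule above -- k in [2, 4] when the scene already contains
        exposed (near-saturated) areas, k in [4, 6] otherwise."""
        bright_fraction = (img.max(axis=-1) > thresh).mean()
        return 3.0 if bright_fraction > 0.01 else 5.0

    k = choose_k(np.random.rand(400, 640, 3))  # image assumed in [0, 1]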

    Table 2:Quantitative review of ablation studies for each module

Figure 8: Ablation study comparing direct fusion of the illumination and reflection components with RF-Net

Figure 9: Image decomposition and exposure-generation fusion ablation study

In Ablation Study 4, we used the low-light image and its corresponding Red, Green, Blue (RGB) channel maximum, minimum, and average values as inputs to our network. The final network input consists of six channels, with the first three channels used to generate the reflection map and the last three used to generate the illumination map. In the generated illumination images, the method used in this paper retains more details in the final illumination map than the input of only the original low-illumination image. The qualitative comparison in Fig. 10 shows that, in the first column, there are artifacts near the shoulders of the person in the illumination map generated by the network with only the low-illumination input; in the second column, there is a noticeable loss of detail in the wall of the house on the right; in the third column, there are artifacts near the candle flame, and the grayscale boundary near the cup in the upper left corner is unclear; and in the fourth column, there is excessive detail in the cave, while detail in the trees outside the cave is lacking. In Fig. 11, the grayscale histogram results show that our method generates an illumination map with smoother contrast and luminance changes, making the light and dark parts more distinguishable and details more prominent.

Figure 10: Qualitative comparison of illumination images. (a) Input, (b) Low-light images as input, (c) Ours

    5 Conclusion

In this study, we combined Retinex theory with exposure fusion for the first time to achieve unpaired low-light image enhancement. In the first stage, we designed a multi-scale generator by combining a residual network with inception. We also added a CA attention mechanism with position information to the discriminator network to obtain high-quality illumination and reflection components. By improving the original Retinex and camera response mechanism functions, we designed an exposure image generator with correction coefficients to solve the problems of illumination and reflection image fusion and exposure image generation. On this basis, we realized low-light image enhancement with second-stage exposure fusion and demonstrated the superiority of our method by comparing it with state-of-the-art methods. However, the algorithm has some limitations. On the one hand, it requires manual adjustment of the correction parameters of the exposure image generator based on the scene's exposure level. On the other hand, compared with other lightweight networks, RF-Net processes 640×400 images at a rate of only 5 frames per second. In the future, our research will focus on making the network structure lightweight and on developing an adaptive low-light image enhancement method based on negative feedback control to solve the manual tuning problem of existing methods. We also aim to apply this model to enhance specific scenes, which will not only improve the generalization of the algorithm but also enhance the accuracy of other vision tasks.

Acknowledgement: The authors gratefully acknowledge the helpful comments and suggestions of the reviewers, which have improved the presentation.

Funding Statement: This work was supported by the National Key Research and Development Program Topics (Grant No. 2021YFB4000905), the National Natural Science Foundation of China (Grant Nos. 62101432 and 62102309), and in part by the Shaanxi Natural Science Fundamental Research Program Project (No. 2022JM-508).

Author Contributions: Study conception and design: Tian Ma, Jiayi Yang; data collection: Chenhui Fu; analysis and interpretation of results: Chenhui Fu, Jiehui Zhang, Chuyang Shang; draft manuscript preparation: Chenhui Fu. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The data used in this paper are available from the corresponding author upon request.

Conflicts of Interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
