
    Self-Supervised Monocular Depth Estimation via Discrete Strategy and Uncertainty

    2022-07-18 06:17:30
    IEEE/CAA Journal of Automatica Sinica, July 2022

    Zhenyu Li, Junjun Jiang, and Xianming Liu

    Dear Editor,

    This letter is concerned with self-supervised monocular depth estimation. To estimate uncertainty simultaneously, we propose a simple yet effective strategy to learn the uncertainty for self-supervised monocular depth estimation with a discrete strategy that explicitly associates the prediction and the uncertainty to train the networks. Furthermore, we propose an uncertainty-guided feature fusion module to fully utilize the uncertainty information. Code will be available at https://github.com/zhyever/Monocular-Depth-Estimation-Toolbox.

    Self-supervised monocular depth estimation methods have become a promising alternative, trading off training cost against inference performance. However, the compound losses that couple depth and pose lead to a dilemma for uncertainty estimation, which is crucial for safety-critical systems. To solve this issue, we propose a simple yet effective strategy to learn the uncertainty for self-supervised monocular depth estimation using discrete bins that explicitly associate the prediction and the uncertainty to train the networks. This strategy is pluggable, requiring no changes to the self-supervised training losses, and improves model performance. Secondly, to further exploit the uncertainty information, we propose an uncertainty-guided feature fusion module to refine the depth estimation. The uncertainty maps serve as an attention source to guide the fusion of decoder features and skip-connection features. Experimental results on the KITTI and Make3D datasets show that the proposed methods achieve satisfying results compared to the baseline methods.

    Estimating depth plays an important role in the perception of the 3D real world, and it is often pivotal to many other tasks such as autonomous driving, planning, and assistive navigation [1]–[3]. Self-supervised methods trained on monocular videos have emerged as an alternative for depth estimation [4]–[6], since ground-truth RGB-D data is costly to collect. These methods treat depth estimation as a novel-view-synthesis problem, training a network to predict target images from other viewpoints. In general, the framework consists of a depth network that predicts image depth and a pose network that predicts the camera ego-motion between successive image pairs, and it aims to minimize the photometric reprojection loss during training. Moreover, smooth regularization [5], [6] and masking strategies [4], [5], [7] are commonly included in the self-supervised loss for sharper estimation results.

    However, the complex self-supervised training losses that couple depth and pose lead to a dilemma for uncertainty estimation [8], which is extremely vital in safety-critical systems, allowing an agent to identify unknowns in an environment and reach optimal decisions [6]. The popular log-likelihood maximization strategy proposed in [9] causes sub-optimal modeling and fails to work beneficially in the self-supervised setting [8]. This strategy needs to re-weight all the loss terms during training to obtain reasonable uncertainty predictions, creating the difficult task of re-balancing the delicately designed loss terms of self-supervised depth estimation. In contrast, we aim for a pluggable uncertainty prediction strategy that leaves the weights of the loss terms untouched.

    In this paper, instead of pre-training a teacher network to decouple the depth and the pose in the losses [8], which doubles the training time and the parameters, we aim to learn the uncertainty with a single model in an end-to-end fashion, without any additional modifications to the self-supervised loss terms. To this end, we apply the discrete strategy [10]. Following [9], we train the network to infer the mean and variance of a Gaussian distribution, which can be treated as the prediction and the uncertainty, respectively. After that, we divide the continuous interval into discrete bins and calculate the probability of each bin based on the mean and the variance. A weighted sum of the normalized probabilities then serves to calculate the expected prediction. Such a strategy explicitly associates the prediction and the uncertainty before the losses are calculated. After self-supervised training with only a simple additional L1 uncertainty loss, our method masters the capability to predict the uncertainty. It is more pluggable for self-supervised methods and improves model performance in addition. Furthermore, our method also guarantees a Gaussian probability distribution over the discrete bins, which yields more reasonable and sharper uncertainty results compared to the standard-deviation method proposed in [6].

    Moreover, to make full use of the uncertainty information, we propose an uncertainty-guided feature fusion module on top of the U-net multi-scale prediction backbone [5] to refine the depth estimation. It helps the model pay closer attention to high-uncertainty regions and refine the depth estimation more effectively. Extensive experiments on the KITTI dataset [11] and the Make3D dataset [12] demonstrate the effectiveness and generalization of our proposed methods.

    Our contributions are three-fold: 1) We propose a strategy to learn uncertainty for self-supervised monocular depth estimation utilizing discrete bins. 2) We design an uncertainty-guided feature fusion module in the decoder to make full use of the uncertainty. 3) Extensive experiments on the KITTI and Make3D datasets demonstrate the effectiveness of our proposed methods.

    Methods: In this section, we present the main contributions of this paper: 1) a pluggable strategy to learn the depth uncertainty without additional modification to the self-supervised loss terms; and 2) an uncertainty-guided feature fusion module. We use Monodepth2 [5] as our baseline. The framework with our improvements is shown in Fig. 1.

    Depth and uncertainty: Following [9], we simultaneously estimate the mean and the variance of the Gaussian distribution, which respectively represent the mean estimated depth and the measurement uncertainty. It is formulated by

    (D, U) = f_D(I)   (1)

    where I is the input RGB image, D is the mean estimated depth map, U is the uncertainty map, and f_D represents the depth estimation network.

    In general, two key points appear in the design of loss terms to make the uncertainty reasonable: 1) adding an L1 loss to force the model to predict depth more confidently; and 2) re-weighting the loss terms according to the uncertainty, so that pixels predicted with lower uncertainty are punished more severely. However, complex loss terms make 2) much tougher [9]. To this end, we combine the prediction and the uncertainty before computing the losses.

    To be specific, we divide the depth range into discrete bins, compute the approximate probability of each bin, and normalize the probabilities:

    p_k = f_cd(D, U, d_{k+1}) − f_cd(D, U, d_k),   q_k = p_k / (p_1 + ... + p_N)   (2)

    Fig. 1. Overview of our proposed methods. In part (a), the framework is based on Monodepth2 [5], which contains a U-net-based depth network and a pose network. We simply extend the depth network in Monodepth2 to estimate depth and uncertainty at the same time. (b) shows more details of the modified multi-scale decoder. Successive uncertainty-guided feature fusion modules refine the depth estimation. Our strategy is performed at each output level to achieve multi-scale predictions. In (c), we illustrate details of the uncertainty-guided feature fusion module. It makes full use of the uncertainty information, containing two convolutions to extract useful information and an identity feature mapping to facilitate gradient back-propagation and preserve semantic cues.

    Fig. 2. Visualization example. Given the input RGB (a), (b) and (c) show the depth and the uncertainty prediction, respectively. (d) shows the depth probability distributions for the three selected points in the picture. The blue and orange points have sharp peaks, indicating low uncertainty. The red point has a flatter probability distribution, which means high uncertainty.

    where i and j denote the location of a pixel in the image. To keep the notation concise without causing misunderstanding, we omit the subscripts i and j in (2) and after. f_cd(D, U, ·) is the cumulative distribution function of the normal distribution whose mean and variance are D and U, respectively. d_k is the k-th split point of the range, N is the number of bins, and q_k is the probability after normalization.

    Finally, we calculate the expected depth as follows:

    E = Σ_{k=1}^{N} q_k d(k)   (3)

    where d(k) represents the depth of the k-th bin and E is the expected depth. We use the expected depth to train our models, like other discrete-bin-based methods [6].

    Notably, the expected depth is not equal to the predicted mean depth, thanks to the discrete strategy. Therefore, we combine the mean and the variance explicitly before the loss calculation. From a mathematical point of view, a smaller variance leads to a relatively higher lower bound on the self-supervised losses. Models are thus forced to predict more precise depth at pixels with smaller variance, thereby predicting reasonable uncertainty. Such a strategy avoids complicating the self-supervised losses.
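    The discrete strategy above can be sketched in a few lines for a single pixel. This is a minimal, illustrative version; the uniform bin layout, depth range, and bin count here are assumptions of the example, not the paper's exact configuration:

```python
import math

def gaussian_cdf(x, mean, std):
    """CDF of a normal distribution, evaluated via the error function."""
    return 0.5 * (1.0 + math.erf((x - mean) / (std * math.sqrt(2.0))))

def discrete_depth(mean, std, d_min=0.1, d_max=100.0, n_bins=64):
    """Per-pixel discrete strategy: bin probabilities from the predicted
    Gaussian (mean, std), normalization as in (2), and the expected
    depth as in (3)."""
    # Split points d_0 ... d_N of the depth range (uniform for simplicity).
    splits = [d_min + k * (d_max - d_min) / n_bins for k in range(n_bins + 1)]
    # Probability mass of each bin from the CDF difference.
    p = [gaussian_cdf(splits[k + 1], mean, std) - gaussian_cdf(splits[k], mean, std)
         for k in range(n_bins)]
    total = sum(p)
    q = [pk / total for pk in p]  # normalized probabilities q_k
    # Represent each bin by its center depth d(k).
    centers = [(splits[k] + splits[k + 1]) / 2 for k in range(n_bins)]
    expected = sum(qk * dk for qk, dk in zip(q, centers))  # E = sum q_k d(k)
    return q, expected
```

Because the probabilities come from a Gaussian CDF, a smaller predicted variance concentrates the mass in fewer bins, which is exactly the sharp-peak behavior shown in Fig. 2.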

    In the training stage, we apply the minimum per-pixel photometric reprojection error l_p, the auto-masking strategy, and the edge-aware smoothness loss l_s proposed in our baseline to train our model. Limited by space, we refer readers to [5] for all the details.

    Additionally, we also want the model to provide more confident results with less uncertainty, so we add an uncertainty loss l_u following [9]:

    l_u = Σ_{s∈S} h_s ||U_s||_1   (5)

    where h_s denotes the hyperparameter factor for the multi-scale outputs S = {1, 1/2, 1/4, 1/8}. The scale factors h_s are set to 1, 1/2, 1/4, and 1/8 to force the model to decrease uncertainty (i.e., increase the punishment on uncertainty) during the depth refinement process. The total loss can be written as

    L = l_p + λ1 l_s + λ2 l_u   (6)

    where λ1 and λ2 are the hyperparameters weighing the importance of the smoothness loss and the proposed uncertainty loss. Both the pose model and the depth model are trained jointly using this loss. The hyperparameter λ1 follows the setting in the original paper [5].
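    As a sketch, the multi-scale uncertainty loss and the total objective could be written as follows; the mean-absolute reduction over each uncertainty map is an assumption of this example:

```python
def uncertainty_loss(uncertainty_maps, scale_factors=(1.0, 0.5, 0.25, 0.125)):
    """l_u: L1 penalty on the predicted uncertainty map at each output
    scale s, weighted by h_s so finer scales are punished harder."""
    loss = 0.0
    for h_s, u_map in zip(scale_factors, uncertainty_maps):
        # Mean absolute uncertainty over all pixels of this scale.
        n_pix = len(u_map) * len(u_map[0])
        loss += h_s * sum(abs(u) for row in u_map for u in row) / n_pix
    return loss

def total_loss(l_p, l_s, l_u, lam1=1e-3, lam2=1e-2):
    """Total objective: photometric reprojection + smoothness + uncertainty."""
    return l_p + lam1 * l_s + lam2 * l_u
```

Since l_u only adds an independent term, the photometric and smoothness weights stay untouched, which is the pluggability the letter argues for.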

    Examples of the probabilities over the bins are shown in Fig. 2. We can see a sharper peak at a low-uncertainty point (the blue and orange points), which means the model is more confident in the estimation. A higher-uncertainty point has a flatter probability distribution (the red point), indicating the model is uncertain about the prediction.

    Fusion module: Uncertainty maps indicate how confident the depth estimation is, which can help the depth refinement process focus on areas with high uncertainty [13].

    Therefore, we propose the uncertainty-guided feature fusion module to refine the depth estimation. The proposed module contains three main components, as shown in Fig. 1(c): two 3×3 convolution layers and an identity feature mapping. Specifically, the first concatenation and convolution layer extracts low-uncertainty information and filters high-uncertainty features, making the model pay closer attention to high-uncertainty areas. The output is then concatenated with the skip-connected feature and the uncertainty map, and fed into the second convolution layer. This allows effective feature selection between feature maps. Finally, the identity mapping facilitates gradient back-propagation and preserves high-level semantic cues [14].

    The fusion module, which utilizes the predicted uncertainty U to fuse the upsampled features F_u and the skip-connected features F_s, can be formulated as

    F_o = Conv(Concat(Conv(Concat(F_u, U)), F_s, U)) + F_u   (7)
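    To make the dataflow of Fig. 1(c) concrete, here is a toy single-pixel sketch in pure Python, where each 3×3 convolution is reduced to a per-pixel linear layer with ReLU. The weight shapes and the residual connection back to F_u are illustrative assumptions of this example, not the paper's exact layer configuration:

```python
def linear_relu(x, w, b):
    """Per-pixel fully connected layer with ReLU (stands in for a conv)."""
    return [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + bi)
            for row, bi in zip(w, b)]

def fuse_pixel(f_up, f_skip, u, w1, b1, w2, b2):
    """Uncertainty-guided fusion for one pixel:
    1) conv on concat(F_u, U) to filter features by uncertainty,
    2) conv on concat(result, F_s, U) for feature selection,
    3) identity mapping of F_u added back."""
    h = linear_relu(f_up + [u], w1, b1)            # first conv: concat(F_u, U)
    out = linear_relu(h + f_skip + [u], w2, b2)    # second conv: concat(h, F_s, U)
    return [o + f for o, f in zip(out, f_up)]      # identity / residual connection
```

The residual path means the module can only refine the upsampled features, never destroy them, which matches the stated goal of preserving semantic cues and easing gradient flow.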

    In the multi-scale depth decoder, the uncertainty-guided feature fusion module is applied repeatedly during the gradual feature fusion procedure to refine the depth estimation. It helps the model pay closer attention to higher-uncertainty areas and refine the depth estimation more effectively.

    Experiments:

    Datasets: We conduct a series of experiments on the KITTI dataset [11] and the Make3D dataset [12] to prove the effectiveness of the proposed methods. KITTI contains 39 810 monocular triplets for training and 4424 for validation. After training our models on the KITTI dataset, we evaluate them on the Make3D dataset without further fine-tuning.

    Implementation details: We jointly train the pose and depth networks with the Adam optimizer (β1 = 0.9, β2 = 0.999) for 25 epochs. The initial learning rate is set to 1e−4, with a multi-step decay that drops it to 1e−5 after 15 epochs and to 1e−6 after 20 epochs. Following [6], we include a context module. Through extensive experiments, the weights in (6) are empirically set to λ1 = 1e−3 and λ2 = 1e−2, which gives the best results.

    Evaluation metrics: For the quantitative evaluation, the typical metrics of [15] are employed in our experiments: absolute relative error (Abs Rel), squared relative error (Sq Rel), RMSE, RMSE log, and the threshold accuracies δ < 1.25^k.
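    A minimal reference implementation of these standard metrics, operating on flattened lists of valid predicted and ground-truth depths, might look like this (median scaling, depth capping, and validity masking are omitted for brevity):

```python
import math

def depth_metrics(pred, gt):
    """Standard monocular depth metrics [15]: Abs Rel, Sq Rel, RMSE,
    RMSE log, and threshold accuracy delta < 1.25."""
    n = len(gt)
    abs_rel = sum(abs(p - g) / g for p, g in zip(pred, gt)) / n
    sq_rel = sum((p - g) ** 2 / g for p, g in zip(pred, gt)) / n
    rmse = math.sqrt(sum((p - g) ** 2 for p, g in zip(pred, gt)) / n)
    rmse_log = math.sqrt(sum((math.log(p) - math.log(g)) ** 2
                             for p, g in zip(pred, gt)) / n)
    # Fraction of pixels whose ratio to ground truth is within 1.25.
    a1 = sum(1 for p, g in zip(pred, gt) if max(p / g, g / p) < 1.25) / n
    return {"abs_rel": abs_rel, "sq_rel": sq_rel, "rmse": rmse,
            "rmse_log": rmse_log, "a1": a1}
```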

    Performance comparison: We first evaluate our models on the KITTI dataset. The quantitative results compared with other methods are shown in Table 1. With the proposed methods, our models further decrease the evaluation error and achieve higher accuracy. We also provide some qualitative results in Fig. 3. Our models provide sharper and more accurate results on object boundaries such as signboards, lamp posts, and background buildings. Furthermore, the uncertainty maps also provide useful information. As shown in the first sub-figure of Fig. 3, the depth estimation of the closer round signboard lacks clear and accurate boundaries; on the uncertainty map, such inaccurate areas show higher uncertainty. We then compare the uncertainty maps with other methods. As shown in Fig. 4, the uncertainty maps we provide are more reasonable, without artifacts from near to far, and contain more detailed information than the results in [6].

    Ablation study: Table 1 also shows the quantitative results of the ablation study. For low-resolution images (640×192), based on Monodepth2 (Baseline), we observe better performance in almost all the evaluation measures with the discrete strategy (+DS). The uncertainty-guided feature fusion module (+UGF) also provides a satisfying improvement. The ablation study for high-resolution images (1024×320) likewise shows the effectiveness of our proposed methods.

    Table 1. Quantitative Results. Comparison of Existing Methods to Ours on KITTI 2015 [11] Using the Eigen Split [15]. The Best and Second Best Results are Presented in Bold and Underline for Each Category. The Upper/Lower Part is the Low/High Resolution Result (640×192/1024×320). DS: Discrete Strategy. UGF: Uncertainty-Guided Fusion Module. DDVO: Differentiable Direct Visual Odometry

    Fig. 3. Qualitative results on the KITTI Eigen split. Our model produces sharper depth maps than the baseline Monodepth2 (MO2), which is reflected in the superior quantitative results in Table 1. At the same time, uncertainty maps are provided for high-level applications.

    We also provide some qualitative ablation results in Fig. 4. Comparing the depth estimation results, the model with the uncertainty-guided feature fusion module provides sharper and more accurate results. Furthermore, there is a more prominent deep blue (lower uncertainty) area in the uncertainty results of the model with the module, which indicates that it can further reduce the uncertainty of the depth estimations.

    Generalization test: To further evaluate the generalization of our proposed methods, we test our model without fine-tuning on the Make3D dataset. The quantitative comparison results are tabulated in Table 2, showing that our proposed method outperforms the baseline by a significant margin. Qualitative results can be seen in Fig. 5. Our method produces sharper and more accurate depth maps along with reasonable uncertainty estimations.

    Result analysis and future work: As seen in the qualitative results, the most uncertain areas are located at object edges, which may be caused by the smoothness loss blurring object edges given the lack of prior object information and occlusion. Therefore, designing a more effective smoothness regularization term, introducing object edge information, and adopting more effective masking strategies would help the training procedure and reduce uncertainty. Additionally, smooth areas with little texture (heavy shadow and sky) show the lowest uncertainty. This indicates that the photometric loss may not be informative enough to train the model in such areas. While our model can precisely estimate the depth in these areas, it is essential to develop a more effective loss to supervise them better.

    While we have achieved more reasonable uncertainty maps, when we concatenate the uncertainty maps along the time axis, we find fluctuations in different areas of the image. These hurt the algorithm's robustness, especially for systems that require temporally smooth predictions. In the future, we will try to incorporate filtering methods or explore more temporal constraints to make the predictions smoother and more stable, which is meaningful further work.

    Fig. 4. Comparison examples. Ours (w/o) represents our method without the UGF. MO2 is the baseline. Discrete disparity volume (DDV) shows the uncertainty results from [6].

    Table 2. Quantitative Results on the Make3D Dataset

    Fig. 5. Qualitative results on the Make3D dataset. Our methods show better effectiveness on depth estimation and can also provide uncertainty maps.

    Conclusion: This paper proposes a simple yet effective strategy to learn the uncertainty for self-supervised monocular depth estimation, with a discrete strategy that explicitly associates the prediction and the uncertainty to train the networks. Furthermore, we propose the uncertainty-guided feature fusion module to fully utilize the uncertainty information. It helps the model pay closer attention to high-uncertainty regions and refine the depth estimation more effectively. Extensive experimental results on the KITTI and Make3D datasets indicate that the proposed algorithm achieves satisfying results compared to the baseline methods.

    Acknowledgments: This work was supported in part by the National Natural Science Foundation of China (61971165), in part by the Fundamental Research Funds for the Central Universities (FRFCU 5710050119), and in part by the Natural Science Foundation of Heilongjiang Province (YQ2020F004).
