
Perpendicular-Cutdepth: Perpendicular Direction Depth Cutting Data Augmentation Method

Computers, Materials & Continua, 2024, Issue 4

Le Zou, Linsong Hu, Yifan Wang, Zhize Wu and Xiaofeng Wang*

1 Anhui Provincial Engineering Laboratory of Big Data Technology Application for Urban Infrastructure, School of Artificial Intelligence and Big Data, Hefei University, Hefei, 230601, China

2 Institute of Applied Optimization, School of Artificial Intelligence and Big Data, Hefei University, Hefei, 230601, China

ABSTRACT Depth estimation is an important task in computer vision. Collecting data at scale for monocular depth estimation is challenging, as this task requires simultaneously capturing RGB images and depth information. Therefore, data augmentation is crucial for this task. Existing data augmentation methods often employ pixel-wise transformations, which may inadvertently disrupt edge features. In this paper, we propose a data augmentation method for monocular depth estimation, which we refer to as the Perpendicular-Cutdepth method. This method involves cutting real-world depth maps along perpendicular directions and pasting them onto input images, thereby diversifying the data without compromising edge features. To validate the effectiveness of the algorithm, we compared it against current mainstream data augmentation algorithms on an existing convolutional neural network (CNN). Additionally, to verify the algorithm's applicability to Transformer networks, we designed a Transformer-based encoder-decoder network structure to assess the generalization of our proposed algorithm. Experimental results demonstrate that, in the field of monocular depth estimation, our proposed Perpendicular-Cutdepth outperforms traditional data augmentation methods. On the indoor NYU dataset, our method increases accuracy from 0.900 to 0.907 and reduces the error rate from 0.357 to 0.351. On the outdoor KITTI dataset, our method improves accuracy from 0.9638 to 0.9642 and decreases the error rate from 0.060 to 0.0598.

KEYWORDS Perpendicular; depth estimation; data augmentation

    1 Introduction

Computer vision, as a pivotal branch of modern technology, spans a diverse array of applications [1–5]. Nevertheless, with the escalating complexity of tasks, training deep learning models presents numerous challenges, most prominently limited data, overfitting, and the need for generalization across diverse scenarios. In this context, data augmentation emerges as a pivotal strategy to counter these challenges. By introducing diversity, data augmentation exposes the model to a broader spectrum of scenes and variations during training, thereby augmenting its ability to generalize to unseen data. By learning representations adaptable to various scenarios, the model becomes more adept at accommodating novel, real-world inputs. Simultaneously, data augmentation helps mitigate the risk of overfitting: because the model experiences a more diverse set of inputs during training, its reliance on specific data distributions is reduced. This enhances the model's resilience when confronted with unknown data, ensuring robust performance. Moreover, data augmentation can introduce various transformations such as rotation, scaling, flipping, and optical transformations, imparting greater robustness to the neural network model. This robustness means the model is more resistant to subtle changes and noise in the input, contributing to more reliable execution of tasks in the real world.

These data augmentation methods have been widely applied in research on advanced tasks such as anomaly detection [1], personalized diagnosis [3–7], and simulation enlargement and transfer learning combined with fault-sample augmentation [8]. In the field of anomaly detection and diagnosis, obtaining real fault samples can be challenging or limited in availability. Data augmentation can generate more diversified fault samples through transformations, rotations, scaling, and other methods, aiding the model in better learning and understanding different types of faults. For personalized diagnosis, differences exist between individuals, necessitating more samples to better adapt to personalized requirements. Data augmentation can generate additional samples with personalized scenarios, assisting the model in better adapting to individual differences and improving the accuracy of personalized diagnosis.

However, there has been relatively little research in the domain of low-level tasks, particularly those involving pixel-wise transformations, such as monocular depth estimation. Effective data augmentation methods for these lower-level tasks have not received sufficient attention and in-depth investigation. The challenges of data augmentation [9–13] in pixel-wise tasks are more intricate, given the need to maintain precise pixel-level label information.

Monocular depth estimation is a critical research focus in the field of computer vision, primarily aiming to predict the depth of objects in a scene from a single image. It finds extensive applications in areas such as 3D reconstruction, virtual reality, and autonomous driving. The input for monocular depth estimation typically consists of a set of images along with their corresponding depth maps. Depth maps are commonly acquired through depth cameras and laser scanners. However, obtaining accurate depth information can be challenging in certain scenarios, such as underwater environments or objects with transparent and glass-like properties. In such cases, data augmentation becomes an indispensable step in monocular depth estimation tasks. Currently, widely used data augmentation methods include random rotation [14], random cropping [15], and optical transformations [16] (color and brightness variations), among others. Random rotation rotates the image by a certain angle to simulate different capture perspectives. Random cropping selects a region of the image as input, mimicking different viewpoints. Optical transformations alter the brightness and contrast of the input image, enhancing data diversity. These augmentation techniques contribute to the robustness and generalization ability of monocular depth estimation models, enabling them to handle diverse and challenging real-world scenarios effectively.
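For concreteness, the following is a minimal sketch of these standard augmentations using torchvision transforms; the parameter values are illustrative, not the ones used in this paper, and in depth estimation the geometric transforms must be applied jointly to the image and its depth map so the labels stay aligned.

```python
import torchvision.transforms as T

# Illustrative photometric/geometric augmentation pipeline; in practice the
# rotation and crop must be applied with the same parameters to the depth map.
train_transform = T.Compose([
    T.RandomRotation(degrees=5),                  # simulate small capture-angle changes
    T.RandomCrop((448, 576)),                     # mimic different viewpoints
    T.ColorJitter(brightness=0.2, contrast=0.2),  # optical (brightness/contrast) variation
    T.ToTensor(),
])
```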

Although these methods improve the generalization ability of neural networks, they mainly alter the global environment rather than the geometric structure within the scene. Many studies have attempted to modify the geometric structure within the scene to encourage the network to learn more complex scenes and thereby improve model accuracy [14–16]. Ishii et al. [15] observed similarities in edge positions between depth and RGB images, especially in low-level features. They introduced the Cutdepth algorithm for monocular depth estimation networks, aiming to normalize the images using the provided depth information and reduce the gap between RGB images and depth maps in the latent space. This not only increases visual diversity but also restricts excessive geometric changes within the scene, causing the network to focus more on high-frequency regions. Dijk et al. [17] investigated how neural networks perceive depth from single images, finding that they rely primarily on the vertical position of objects in the image. In response, Kim et al. [16] argued that the vertical viewpoint in a single image is more important than the horizontal viewpoint, and proposed a variant of Cutdepth called Vertical-Cutdepth. This algorithm performs Cutdepth cuts in the vertical direction of the input images, encouraging the network to capture vertical long-range correlations. However, Vertical-Cutdepth overlooks the fact that both horizontal and vertical correlations matter for depth information: in human vision, the positional structure of an object is determined by the intersection of the horizontal and vertical directions. A vertical correlation alone can determine the height of an object, but not its width.

To encourage the network to focus on the correlation between the horizontal and vertical directions, we propose Perpendicular-Cutdepth. This method aims to simultaneously reduce the horizontal and vertical distances between RGB images and their corresponding depth maps in the latent space, enhancing the network's ability to learn from both horizontal and vertical directions within the scene. Perpendicular-Cutdepth randomly crops horizontal and vertical regions from real depth maps and pastes them over the corresponding areas of the RGB images. In doing so, it effectively promotes the learning of both horizontal and vertical correlations in the scene. We conducted extensive quantitative and qualitative experiments on publicly available datasets, including the indoor dataset NYU [18] and the outdoor dataset KITTI [19], to validate the effectiveness of our proposed Perpendicular-Cutdepth. We provide a detailed introduction to our method in Section 3.

    The contributions of our work are as follows:

• We compared the impact of data augmentation methods with different geometric structures on network performance.

• We propose a new data augmentation method to improve model performance.

• Compared to previous data augmentation methods, our proposed method improves depth estimation performance in both indoor and outdoor scenes.

2 Related Work

    2.1 Monocular Depth Estimation

Depth estimation, as a critical problem in the field of computer vision, has demonstrated vast potential in various applications. With the decreasing cost and widespread availability of monocular cameras, researchers have increasingly turned their attention to monocular depth estimation methods due to their simplicity and practicality. Traditional geometry-based methods rely on texture, corners, and edge information in images to compute depth. These approaches often require additional sensors or strict scene assumptions, limiting their applicability in complex environments. In recent years, with the advancement of deep learning, deep learning-based methods have made significant strides in this field. In monocular depth estimation, neural network-based approaches have proven capable of producing satisfactory depth estimates in many scenarios [20–23]. Common architectures for depth estimation networks include convolutional neural networks (CNNs) [24–27] and Transformers [28–31]. For instance, Lee et al. [20] introduced the concept of mask3D to predict local normals for obtaining depth information, encouraging the network to learn structural information within the scene. Li et al. [21] convert 360° images into perspective patches with low distortion, obtain patch-wise predictions with a CNN, and finally merge them into the final prediction, addressing the difficulty CNN structures have with spherical distortions. Wang et al. [23] proposed Probabilistic and Geometric Depth (PGD), which estimates depth by utilizing probabilistic depth uncertainty and geometric relationships between instances. Patil et al. [27] designed a network with two heads: the first head outputs pixel-level plane coefficients, while the second outputs a dense offset vector field that identifies the positions of seed pixels. The vector field then uses the sparse plane estimates at the seed pixels to predict the depth at each position, and the result is fused with the first head's initial prediction through learned confidence adaptation. Bhat et al. [28] employed a CNN as an encoder, introduced the adaptive regression unit AdaBins, and used a Transformer module to capture global information. Kim et al. used SegFormer [29] as a feature extractor and proposed a selective local and global fusion network to enhance feature fusion. Bhat et al. [31] proposed a new architecture, LocalBins, for depth estimation from a single image, based on the popular encoder-decoder design. First, the network predicts the depth distribution of the local neighborhood of each pixel rather than a global depth distribution; second, the distribution is predicted not only at the end of the decoder but with the involvement of all decoder layers. Agarwal et al. [32] extended AdaBins with Transbins to incorporate global information, yielding more detailed depth maps. Jun et al. [33] introduced a novel monocular depth estimation algorithm that decomposes depth maps into normalized depth maps and scale features; this method can utilize datasets without depth labels to improve monocular depth estimation performance.

    2.2 Data Augmentation

When neural networks reach a performance bottleneck, data augmentation is an effective way to improve their performance without introducing additional computational burden. In the field of computer vision, several data augmentation techniques have been developed. As mentioned in the introduction, common data augmentation methods, such as rotation, cropping, and optical transformations, primarily alter the overall scene environment, which has inherent limitations in boosting network performance. To address this issue, some studies have attempted to modify the geometric structure of input images to further enhance network generalization [9,11–14]. Fig. 1 shows some data augmentation methods, where Figs. 1a and 1b are RGB images and the corresponding depth maps. As shown in Fig. 1c, Devries et al. [12] introduced a regularization method called CutOut to prevent CNN overfitting. During network training, CutOut randomly selects a region of the input image and sets the pixel values within that region to 0 or adds random noise. Zhong et al. [13] introduced a lightweight data augmentation method called random erasing, as shown in Fig. 1d. This method randomly selects a rectangular region and erases the pixel values within that region using random values. Yoo et al. [9] proposed a data augmentation method called CutBlur, which is specifically designed for image restoration tasks. It involves cutting out low-resolution regions and pasting them onto corresponding high-resolution regions. This approach teaches the model not only how to reconstruct, but also where to reconstruct. Ghiasi et al. [14] presented a simple yet efficient copy-paste data augmentation method that improves the accuracy of instance segmentation. The authors argued that this technique encourages the network to use information from the entire image rather than relying on specific small regions. Yun et al. [11] improved on CutOut and proposed CutMix, a method that fills the cut-out portion with parts of another image. This approach retains the advantages of CutOut, allowing the model to learn features from different parts of the target, including less discriminative areas. Additionally, it is more efficient than CutOut, enabling the model to learn features from two targets simultaneously. The specific procedure for CutMix is illustrated in Fig. 1e.

    Figure 1: Examples of data augmentation
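As an illustration of the region-dropping family described above, here is a minimal CutOut-style sketch, assuming a CHW float tensor; the patch size and zero fill are illustrative choices (reference [12] also allows a random-noise fill).

```python
import torch

def cutout(img: torch.Tensor, size: int = 50) -> torch.Tensor:
    """Zero out a random square patch of a (C, H, W) image tensor."""
    _, h, w = img.shape
    cy = torch.randint(h, (1,)).item()          # random patch center (row)
    cx = torch.randint(w, (1,)).item()          # random patch center (col)
    y0, y1 = max(0, cy - size // 2), min(h, cy + size // 2)
    x0, x1 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = img.clone()
    out[:, y0:y1, x0:x1] = 0.0                  # zero-fill the selected region
    return out
```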

In the field of monocular depth estimation, Ishii et al. [15] introduced the Cutdepth method to address geometric variations in scenes. This method replaces a portion of the RGB image with real depth map information, thereby enhancing visual diversity while suppressing irrelevant geometric features in the image. Building on this concept, Kim et al. [16] proposed a variant of Cutdepth called Vertical-Cutdepth, which aims to strengthen the network's ability to capture depth cues by preserving the vertical information in the image. These two methods are depicted in Figs. 1f and 1g. As mentioned in the introduction, although Vertical-Cutdepth motivates the network to learn vertical cues, it fails to establish the correlation between the horizontal and vertical directions. To alleviate this problem, we propose Perpendicular-Cutdepth, as shown in Fig. 1h. We present the specific algorithm in Section 3.2.

    3 Method

    3.1 Motivation

Our main motivation comes from how neural networks develop a deep understanding of scenes. Real-world scenes contain abundant texture information within a single plane, such as patterns and wall paintings, which is independent of depth. The boundary information in the scene, however, is crucial for understanding depth. The network treats areas where color changes occur as object boundaries, but such areas also include texture. Our aim is therefore to randomly reduce texture information while retaining useful boundary information during the learning process, exploiting the fact that an RGB image and its corresponding depth map have similar edge information. Our idea is to replace some regions of the real-world image with the corresponding depth regions, so the main question is how to perform the replacement. In previous work, Cutdepth [15] randomly cropped a rectangular area of the depth image and pasted it at the corresponding position of the RGB image. However, horizontal and vertical information should not be treated as equally important. Dijk et al. [17] found that, in the process of depth cognition, neural networks ignore the size of known objects and instead rely on their vertical position in the image; that is, the network only needs to know the location of an object's ground contact point to infer approximate depth. Kim et al. [16] proposed an improved method, Vertical-Cutdepth, which encourages the network to focus on the vertical geometric information in the scene. Although this method does improve accuracy, we believe that focusing only on vertical information is far from enough: using Vertical-Cutdepth alone can easily lead to incomplete object planes. The planar integrity of objects and the correlation between the horizontal and vertical directions are crucial [34,35]. Therefore, we propose perpendicular cutting, which guides the network to focus on both horizontal and vertical information in the scene, further deepening the network's understanding of the planes of objects in the scene.

    3.2 Algorithm

Our method is applied during data preprocessing, but not to the entire dataset: to enhance the generalization ability of the network, we randomly select scenes for augmentation. Specifically, for a selected RGB image and its corresponding depth map, we randomly select a coordinate (l, u) in the image. Next, we randomly select a cross-shaped region within the image. This enables the network to simultaneously consider correlations in both horizontal and vertical directions while preserving the vertical geometric structure of the image. For a given set of RGB images and their corresponding depth maps, the specific procedure is shown in Algorithm 1, and Fig. 2 illustrates data augmentation using the Perpendicular-Cutdepth method.

    Figure 2: Data augmentation using Perpendicular-Cutdepth

In Algorithm 1, alpha and beta represent random numbers ranging from 0 to 1, and p denotes a specified hyperparameter. Essentially, we randomly select a subset of the training data for augmentation. For the selected subset, we ensure that a region at least one pixel wide is cropped both horizontally and vertically. The start and end points of this region are chosen randomly to increase the generalization ability of the network.
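The following is a minimal NumPy sketch of the behavior described above, not Algorithm 1 verbatim; in particular, the way the strip extents are derived from alpha and beta is our assumption for illustration.

```python
import numpy as np

def perpendicular_cutdepth(rgb: np.ndarray, depth: np.ndarray,
                           p: float = 0.5) -> np.ndarray:
    """rgb: (H, W, 3); depth: (H, W), assumed scaled to the image value range.

    With probability p, cut a cross-shaped region (one full-width horizontal
    strip and one full-height vertical strip, each >= 1 pixel wide) from the
    depth map and paste it onto the RGB image.
    """
    if np.random.rand() >= p:
        return rgb
    h, w = depth.shape
    l, u = np.random.randint(w), np.random.randint(h)   # random anchor (l, u)
    alpha, beta = np.random.rand(), np.random.rand()    # random numbers in [0, 1]
    strip_w = max(1, int(alpha * (w - l)))              # vertical strip width  (>= 1 px)
    strip_h = max(1, int(beta * (h - u)))               # horizontal strip height (>= 1 px)
    out = rgb.copy()
    depth3 = np.repeat(depth[..., None], 3, axis=2)     # broadcast depth to 3 channels
    out[u:u + strip_h, :, :] = depth3[u:u + strip_h, :, :]  # horizontal strip
    out[:, l:l + strip_w, :] = depth3[:, l:l + strip_w, :]  # vertical strip
    return out
```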

    3.3 Network Architecture

To validate the effectiveness of the algorithm, we constructed a simple network architecture, TransUnet. A Transformer serves as the encoder of the network, and the decoder stacks several upsampling layers with layer-wise concatenation. As shown in Fig. 3, before feeding images into the network, we apply the various data augmentation methods so that the final prediction results can be compared.

    Figure 3: TransUnet network architecture
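As a rough illustration of this encoder-decoder layout, a minimal PyTorch sketch of the upsample-and-concatenate decoder is given below. The four-stage channel widths (512, 320, 128, 64) are an assumption modeled on typical Transformer backbones such as SegFormer, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    """Upsample the coarse feature, concatenate the encoder skip, then fuse."""
    def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)
        return self.conv(torch.cat([x, skip], dim=1))  # layer-wise concatenation

class Decoder(nn.Module):
    """Decode multi-scale Transformer features into a one-channel depth map."""
    def __init__(self, chs=(512, 320, 128, 64)):       # deepest-to-shallowest channels
        super().__init__()
        self.blocks = nn.ModuleList(
            UpBlock(chs[i], chs[i + 1], chs[i + 1]) for i in range(len(chs) - 1)
        )
        self.head = nn.Conv2d(chs[-1], 1, 3, padding=1)

    def forward(self, feats):                          # feats: deepest first
        x = feats[0]
        for block, skip in zip(self.blocks, feats[1:]):
            x = block(x, skip)
        return self.head(x)
```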

    3.4 Accuracy Measures for Depth Estimation

We use RMSE, REL, Log10, and δa as metrics to evaluate depth estimation. We denote the predicted and ground-truth depths at pixel i by d_i and g_i, respectively, and n represents the total number of valid pixels. A short computation sketch follows the definitions below.

RMSE: Root mean square error. Lower is better.

REL: Mean absolute relative error. Lower is better.

Log10: Mean log10 error. Lower is better.

δa: Accuracy under threshold, the fraction of pixels with max(d_i/g_i, g_i/d_i) < 1.25^a. We use a ∈ {1, 2, 3}. Higher is better.
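The following is a minimal NumPy sketch of these four metrics under their standard definitions; d and g are flattened arrays of valid predicted and ground-truth depths.

```python
import numpy as np

def depth_metrics(d: np.ndarray, g: np.ndarray):
    """Compute RMSE, REL, Log10, and threshold accuracies for valid pixels."""
    rmse = np.sqrt(np.mean((d - g) ** 2))                 # root mean square error
    rel = np.mean(np.abs(d - g) / g)                      # mean absolute relative error
    log10 = np.mean(np.abs(np.log10(d) - np.log10(g)))    # mean log10 error
    ratio = np.maximum(d / g, g / d)
    deltas = [np.mean(ratio < 1.25 ** a) for a in (1, 2, 3)]  # delta_1..delta_3
    return rmse, rel, log10, deltas
```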

4 Experiments

    4.1 Experimental Setting

We employed a Transformer [29] and DenseNet161 [36] as the backbones for our experiments, both pretrained on ImageNet. During training, we used the PyTorch framework and selected Adam as the optimizer. The learning rate was decayed with a polynomial decay strategy, starting at 1e-4 and gradually decreasing to 1e-5. We set β1 and β2 to 0.9 and 0.999, respectively. The experiments were conducted over 20 epochs with a batch size of 12. All experiments were carried out on an NVIDIA RTX 3090 GPU.
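For reference, a minimal sketch of such a polynomial decay schedule is shown below; the exponent 0.9 is a common choice but is our assumption, since the paper does not state it.

```python
def poly_lr(step: int, total_steps: int, base_lr: float = 1e-4,
            end_lr: float = 1e-5, power: float = 0.9) -> float:
    """Polynomial decay from base_lr to end_lr over total_steps."""
    frac = min(step / total_steps, 1.0)
    return (base_lr - end_lr) * (1.0 - frac) ** power + end_lr
```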

Our experiments were conducted on public datasets, namely the NYU dataset [18] and the KITTI dataset [19]. The NYU dataset comprises color images of 464 indoor scenes, each accompanied by corresponding depth maps. The valid depth range for this dataset is 0.5 to 10 m. To train on the NYU dataset, we used 20k samples as the training set and 654 image-depth pairs for testing. During training, we randomly cropped images to a size of 576×448 pixels. The KITTI dataset contains images and corresponding depth maps captured with LiDAR sensors. It includes 61 outdoor scenes with distances ranging from 50 to 80 m. Similarly, we used a training set of 20k samples and randomly cropped the data to a size of 375×1241 pixels. For evaluation, we employed the official 697 images provided by the KITTI dataset for depth assessment.

4.2 Comparison with the State-of-the-Art

Table 1 shows our comparison results with different state-of-the-art models. From the table, we can see that our proposed algorithm brings only limited improvement for the latest model, AdaBins [28]; this is because the performance of that model has reached a bottleneck, and a data augmentation algorithm alone is not enough to improve the network's performance further. However, for networks with weaker generalization ability, such as BTS [20] and the Transformer network we constructed, performance can be significantly improved with our algorithm, which verifies the effectiveness of our algorithm for networks with weak generalization ability.

Table 1: Results of different networks on the NYU dataset

    4.3 Comparative Experiments

To validate the impact of different data augmentation methods on network performance, we followed the protocol of Cutdepth [15] and experimented on the NYU indoor dataset [18] using BTS [20] as the backbone, pretrained on ImageNet with DenseNet161 [36] as the feature extractor. Table 2 displays the experimental results for CutOut [12], RE [13], CutMix [11], Cutdepth [15], and our proposed Perpendicular-Cutdepth under the same backbone. From the experimental results, we observe that all these methods show varying degrees of improvement in the final depth evaluation metrics compared to the baseline. Our proposed Perpendicular-Cutdepth method has an REL (mean absolute relative error) only 0.001 away from the best Cutdepth result, while our RMSE (root mean square error) and accuracy measure δ1 improve by 2.2% and 0.2%, respectively, over the best Cutdepth results. We believe this trade-off of one metric for improvements in others is worthwhile, and it directly validates the effectiveness and superiority of our proposed method. In addition, we also observed that network performance did not increase with an increase in the hyperparameter p, indicating that our method has relatively low dependence on hyperparameters.

    4.4 The Impact of Geometric Structures on Network Performance in Data Augmentation

Fig. 4 shows different cut shapes for depth maps, where Figs. 4a and 4b are the RGB image and the corresponding depth map, and Fig. 4c shows the Cutdepth method, which randomly cuts a rectangular part of the depth map and pastes it at the corresponding position of the RGB image. Kim et al. [16] discovered that replacing vertical regions of the RGB image with the corresponding depth-map regions effectively improves the performance of network models, and introduced a variant of Cutdepth [15] called Vertical-Cutdepth (V-Cutdepth), as shown in Fig. 4d. However, we doubted whether the vertical region is the most appropriate cutting shape. To explore the impact of the geometric shape of the cut depth-map region on the network, we propose two cutting-shape variants: the Horizontal-Cutdepth (H-Cutdepth) and Perpendicular-Cutdepth (P-Cutdepth) methods. H-Cutdepth, as shown in Fig. 4e, selects a whole horizontal region for depth replacement. P-Cutdepth (Fig. 4f), on the other hand, simultaneously selects both horizontal and vertical regions for depth replacement in the corresponding areas of the real image. In addition, we adopt the network architecture designed in Fig. 3 as the main structure of our network, where the backbone utilizes a Transformer instead of a CNN and the decoding part is the decoder in Fig. 3. The experimental results on the NYU indoor dataset, shown in Table 3, demonstrate that our proposed P-Cutdepth method outperforms existing depth estimation data augmentation methods. The corresponding analysis makes it evident that merely changing the geometric structure can indeed improve network performance. Under the same hyperparameters, P-Cutdepth shows better results in terms of both accuracy and error rate. Our proposed method reduces the RMSE from 0.357 to 0.351, a decrease of 1.6%, while V-Cutdepth decreases it by 1.1% and H-Cutdepth and Cutdepth both decrease it by 0.84%. The accuracy δ1 increases from 0.900 to 0.907, an improvement of 0.7%, while the other three methods improve it by 0.3%. This demonstrates the importance of geometric structure in data augmentation and highlights the superiority of our proposed method. Fig. 5 provides a comparison of depth map predictions between our method and Cutdepth.

Table 3: Experimental results of the network on the NYU dataset with different geometric structures of Cutdepth, with the best performance emphasized in bold. Here, 'p' refers to the hyperparameter, and the prefixes V, H, and P denote the vertical, horizontal, and perpendicular variants, respectively

Figure 4: Cutdepth and some of its variants. We use the prefixes V-Cutdepth, H-Cutdepth, and P-Cutdepth to denote the vertical, horizontal, and perpendicular methods, respectively

Figure 5: Results visualized on the NYU dataset, from left to right: RGB, depth, result of Cutdepth, and ours

To further compare our proposed method with Cutdepth, we conducted comparative experiments on the KITTI outdoor dataset. The experimental results are shown in Table 4. We found that using the Cutdepth data augmentation method in outdoor environments did not lead to any further improvement in network performance; in fact, all metrics showed a decrease. In contrast, our algorithm showed improvements in both the accuracy and the error-rate evaluation metrics. Therefore, compared to the Cutdepth algorithm, our method can improve network performance not only in indoor environments but also in outdoor environments, demonstrating the stronger generalization capability of our algorithm. Fig. 6 provides a comparison of depth map predictions between our method and Cutdepth.

Table 4: Experimental results of the network using Cutdepth and Perpendicular-Cutdepth on the KITTI dataset, with the best performance emphasized in bold. Here, 'p' refers to the hyperparameter

    4.5 Ablation Experiments

To illustrate the effectiveness of our proposed method more intuitively, we conducted ablation experiments on the NYU dataset using both the CNN and the Transformer network structure designed in Fig. 3. From Table 5, it can be observed that our proposed method yields improvement regardless of the value of the hyperparameter p. Additionally, the experimental results do not increase with an increase in p, indicating the stability of our method. Furthermore, we also conducted ablation experiments on KITTI, and the results in Table 6 demonstrate that our method enhances the network to a certain extent in both indoor and outdoor environments.

    Table 5: Ablation experiments on the NYU dataset

    Table 6: Ablation experiments on the KITTI dataset

Figure 6: Results visualized on the KITTI dataset, from left to right: RGB, depth, result of Cutdepth, and ours

    5 Conclusion

In this paper, we have introduced a novel data augmentation method for depth estimation. In contrast to traditional methods, our approach replaces horizontal and vertical regions of RGB images with the corresponding depth regions. This enhances the network's ability to extract features in both horizontal and vertical directions. Through extensive experiments, we have not only confirmed that altering geometric structure can improve model performance, but also demonstrated the superiority of our proposed Perpendicular-Cutdepth over traditional data augmentation methods. In future work, we will validate the effectiveness of the proposed method in other domains.

Acknowledgement: The authors would like to thank the editors and reviewers for their valuable work, as well as the supervisor and family for their valuable support during the research process.

Funding Statement: This work was supported by the Grant of Program for Scientific Research Innovation Team in Colleges and Universities of Anhui Province (2022AH010095); the Grant of Scientific Research and Talent Development Foundation of Hefei University (No. 21-22RC15); the Key Research Plan of Anhui Province (No. 2022k07020011); the Grant of Anhui Provincial Natural Science Foundation (No. 2308085MF213); the Open Fund of Information Materials and Intelligent Sensing Laboratory of Anhui Province (IMIS202205); as well as the AI General Computing Platform of Hefei University.

Author Contributions: The authors confirm contribution to the paper as follows: Le Zou: Methodology, Investigation, Funding. Linsong Hu: Investigation, Writing Review and Editing, Writing-Original Draft and Methodology. Yifan Wang: Resources, Validation. Zhize Wu and Xiaofeng Wang: Writing Review and Editing, Funding. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The data that support the findings of this study are openly available at: https://drive.google.com/file/d/1AysroWpfISmm-yRFGBgFTrLy6FjQwvwP/view?usp=sharing and https://www.cvlibs.net/datasets/kitti/.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
