
    Deep Imitation Learning for Autonomous Vehicles Based on Convolutional Neural Networks

IEEE/CAA Journal of Automatica Sinica, 2020, Issue 1 (2020-02-29)

Parham M. Kebria, Abbas Khosravi, Syed Moshfeq Salaken, and Saeid Nahavandi, Senior Member, IEEE

Abstract — Providing autonomous systems with an effective quantity and quality of information from a desired task is challenging. In particular, autonomous vehicles must have a reliable vision of their workspace to robustly accomplish driving functions. In machine vision, deep learning techniques, and specifically convolutional neural networks, have proven to be the state-of-the-art technology in the field. As these networks typically involve millions of parameters and elements, designing an optimal architecture for deep learning structures is a difficult task which is globally under investigation by researchers. This study experimentally evaluates the impact of three major architectural properties of convolutional networks, namely the number of layers, the number of filters, and the filter size, on their performance. Several models with different properties are developed, equally trained, and then applied to an autonomous car in a realistic simulation environment. A new ensemble approach is also proposed to calculate and update weights for the models according to their mean squared error (MSE) values. Performance results are reported and compared across the design properties for further investigation. Surprisingly, the number of filters by itself does not largely affect performance; rather, a proper allocation of filters with different kernel sizes through the layers yields a considerable improvement. The findings of this study provide researchers with a clear clue and direction for designing optimal network architectures for deep learning purposes.

    I. INTRODUCTION

INCREASING demand for autonomous systems requires a practical decision maker that guarantees their performance. In particular, autonomous vehicles require reliable vision to be aware of the working environment. Moreover, the cost and time spent on collecting and preparing useful data, and then on training, are critical concerns in today's real-world applications. Sometimes the results are not applicable for practical purposes and require more data and training to improve performance. Deep neural networks, and specifically convolutional neural networks (CNNs), have demonstrated an impressive capability in processing large-scale image and video data [1]-[12]. A recent solution to deal with those expenses is imitation learning [13], [14]. Also known as learning from demonstration, it introduces a framework in which a learning machine learns how to perform a specific task by only observing (being fed training data directly from) human behaviour. This phase is the so-called demonstration [15]. Once the machine has learned, it can execute the same task by mimicking what it has observed from a human operator [16].

Compared to other learning approaches, imitation learning is easier and more efficient in terms of computation and the amount of expert knowledge required for the training process [16], [17]. Furthermore, it requires neither explicit programming nor heavy mathematical analysis. In reinforcement learning techniques, for example, the design of a reward-punishment function is necessary, which is sometimes a challenging task [16]. On the other side, thanks to advancements in measurement technologies, a great deal of sensory information is now available, including visual, thermal, geometrical, and many other characteristic features of working environments. Hence, computers are able to process the collected data and generate the desired decision commands for a given task.

Imitation strategies are widely used in autonomous systems. Simple to implement and adaptable to many other learning techniques, imitation methods are gaining a great deal of attention in intelligent applications. Learning from demonstration considerably relaxes the assumptions and restrictions in applications where real-time perception of a desired task is a must. In reinforcement learning methods, for instance, finding an applicable and generalisable reward function can become practically difficult in fast-response and time-critical applications such as autonomous vehicles [18]-[21]. On the other hand, an imitation approach has its own challenges as well: the learning algorithm must be carefully designed to generate robust and effective models. To cope with this concern, researchers have usually combined imitation methods with other intelligent algorithms, including deep learning [22]-[24], reinforcement learning [25], deep reinforcement learning [26], parallel reinforcement learning [27], supervised methods [28], and many others [29]-[31].

    Fig. 1. Various applications of imitation learning.

The flexibility of imitation learning approaches also makes them easy to develop for a variety of applications. As shown in Fig. 1, these applications include robotics [22], [32], autonomous systems [28], [33], unmanned vehicles [34], [35], automotive control [36], and computational games [37]. The common concept in every application of imitation learning is the demonstration of a desired task performed by a human operator (expert knowledge). This can be a human driver, pilot, chef, etc. The required information may be collected by cameras, sensors, and any suitable data acquisition tools. For example, in the case of autonomous vehicles, either cameras or distance measurement sensors (e.g., infra-red) may be used to provide the vehicle with the road profile and environmental characteristics for training purposes.

Vision-based learning is one of the principal methods to teach a machine how to perform a given task [22], [28], [38], [39]. This is because when a machine is empowered with a vision tool, it can, to some extent, understand its surrounding environment, such as distances, obstacles, and shapes. Therefore, it is easier for the machine to learn how to avoid obstacles, using LIDAR [40], for example. On the other side, deep CNN algorithms are capable of processing visual data [41]-[47] very efficiently. Researchers have developed deep CNN structures to assess the influence of CNN structural parameters on the performance of the resulting models. Several deep CNN configurations have been examined over a large-scale image dataset to evaluate the impact of CNN depth on output accuracy [5], [38]. However, finding the optimal depth of the network is still an active research question. In addition, the effect of the number and size of the convolutional filters (channels and kernels) in each layer on output performance and layer transferability is yet to be fully investigated [48]. This has motivated us to develop several CNN models with architectures of different complexity: depth, number of filters, and filter size. By comparing the performance of each model against well-known criteria, such as mean squared error (MSE), this research gives an informative insight into CNN performance characteristics for imitation learning in autonomous driving scenarios.

    The main contributions of this article are:

1) A comprehensive evaluation and comparison of three major architectural parameters in the design of a CNN, namely the number of layers, the number of filters, and the kernel size, and their impact on the network’s overall performance. This comparison gives researchers an overview of the most effective way to design their deep networks for the best possible performance.

2) A new MSE-based ensemble methodology for regression problems that improves performance by weighting each model according to its average performance over the previous observation samples.

3) As a popular ensemble approach, the bagging method is also implemented to comparatively illustrate the superiority of the proposed ensemble approach.

4) A demonstrative comparison between the developed models provides information about the impact of the design parameters on overall performance, which leads to optimal structures for better performance.

To achieve this goal, the demonstration phase of the learning process is executed by a human driving a car in a simulation workspace. Three cameras on the vehicle continuously capture images of the front, right, and left views from the car while the human operator is driving. All images are then fed into a deep CNN for processing and model generation. The CNN outputs the desired steering angle as a decision command. The expectation is to generate a model from only the images taken by the cameras, and the generated model is expected to drive the car autonomously. To evaluate the performance of the resulting models, this study considers the impact of the quantity and size of the convolutional layers of each model. To this end, several models are developed with different structures in terms of the number of convolutional layers, the number of convolutional filters (channels) in each layer, and the filter size. It should be mentioned that all models are trained on the same training dataset and then tested on identical input data. Furthermore, a new ensemble approach is proposed that calculates and updates a weight for each model based on its MSE value. In summary, this study develops a deep imitation learning methodology for an autonomous vehicle and provides a comprehensive evaluation of the structural properties of deep CNN models and their impact on autonomous driving performance.

    Fig. 2. The scheme of a typical convolutional neural network (CNN).

The rest of the paper is organised as follows. Section II describes the model development, the proposed ensemble approach, and the training methodology. Experimental simulations for data collection and demonstration, together with a discussion of the results, are presented in Section III. Conclusions and future directions follow in Sections IV and V, respectively.

    II. PROPOSED METHODOLOGY

This section presents the methodology for CNN-based model development and the process of learning from demonstration. First, we briefly describe the CNN architectures developed in this paper for image processing purposes. The procedure of learning a policy by observing a human operator is then explained.

    A. Convolutional Neural Network

As a kind of deep artificial neural network, CNNs have found a broad range of applications in computer vision and natural language processing [3]-[7]. The CNN is one of the strongest tools in image classification, and therefore in machine learning applications. In the case of autonomous vehicles, CNN models are also utilised to provide a visual sense and perception of the environment. CNN configurations benefit from sparse connections and weight sharing. Depth, in the context of CNNs, refers to the number of convolutional layers; the more layers, the deeper the CNN. Each layer has its own features and properties, including the number and size of its convolutional filters (kernels). A deep CNN enforces local connections among neighbouring neurons, which captures spatial correlations through the network. Fig. 2 shows the schematic of a typical CNN.
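The local connectivity and weight sharing described above can be illustrated with a minimal, library-free sketch: a single kernel's weights are reused at every spatial location of the input, and each output value depends only on a small local patch.

```python
def conv2d(image, kernel):
    """Valid-mode 2-D convolution (cross-correlation, as implemented in
    CNN libraries). The same kernel weights are shared across all
    positions, and each output pixel depends only on a local patch."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A 2x2 kernel slid over a 3x3 input yields a 2x2 feature map.
feature_map = conv2d([[1, 2, 3],
                      [4, 5, 6],
                      [7, 8, 9]],
                     [[1, 0],
                      [0, 1]])  # -> [[6, 8], [12, 14]]
```

Because the kernel has far fewer weights than a fully-connected mapping between the same inputs and outputs, this sharing is what keeps CNN parameter counts manageable even for large images.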

In this study, we develop and apply several CNN models with different architectures to evaluate their efficiency and effectiveness on the performance of an autonomous vehicle. We start from the shallowest and simplest architecture, with 3 convolutional layers and 4 filters of size 3×3, and gradually increase the depth and complexity up to 18 layers, 32 filters, and a filter size of 7×7. The selected numbers are chosen according to previous studies [5]. We also develop several models in which the filter size changes after each max-pooling, i.e., 7×7 in the first layers, 5×5 after the first max-pooling, and 3×3 after the next and final max-pooling. Comparing the outcomes of these models with those of the models with fixed filter size illustrates the effect of filter size on overall performance. Notably, the number of filters is only indicated for the first convolutional layer in each model; the number of convolutional filters is doubled after every max-pooling. In total, 96 CNN models are developed with different configurations. We assign a 3-part ID to every model to show its structural properties. For instance, the model ID "lConv f×s" means that the corresponding model has l convolutional layers, f filters in the first layer, and an s×s filter size. Additionally, the models with varying filter size are identified as "lConv f×S_s", with filter size S×S in the first layers, shrinking down to s×s in the last layers. Moreover, all models end with an identical fully-connected dense head consisting of 100×50×10×1 nodes. Table I presents the structural properties of the CNN models developed and applied in this study. Hereafter, we use only the model IDs to refer to a particular model.
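The filter-allocation scheme just described (filters doubling after each max-pooling, kernels optionally shrinking from 7×7 down to 3×3) can be sketched as a small helper that enumerates the per-layer configuration behind a model ID. The three-equal-block grouping is our assumption, based on the two max-poolings mentioned above; the paper's Table I is the authoritative listing.

```python
def layer_spec(n_layers, first_filters, first_kernel, shrink=False):
    """Per-layer (filters, kernel) list for an 'lConv f x s' model.
    Layers are split into three equal blocks separated by max-poolings;
    filters double after each pooling, and with shrink=True the kernel
    steps down 7 -> 5 -> 3 as in the 'lConv f x S_s' variants."""
    spec, filters, k = [], first_filters, first_kernel
    per_block = n_layers // 3
    for block in range(3):
        spec.extend([(filters, k)] * per_block)
        if block < 2:               # a max-pooling follows this block
            filters *= 2
            if shrink:
                k = max(3, k - 2)
    return spec

# The deepest shrinking variant, '18Conv 16x7_3':
spec = layer_spec(18, 16, 7, shrink=True)
# six 16-filter 7x7 layers, six 32-filter 5x5, six 64-filter 3x3
```

Enumerating the 96 configurations is then a matter of looping over the layer counts, first-layer filter counts, and kernel settings used in the study.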

    B. Proposed Ensemble Approach

For further investigation and improvement of the performance of the CNN models, here we propose and apply an ensemble approach. Ensemble formalisation for regression problems has been extensively studied in [49]; therefore, we only briefly present the concept here. Considering $\hat{y}_i$ as the output of the $i$th model ($l$Conv $f{\times}s$, or $l$Conv $f{\times}S\_s$), and $y^{*}$ as the desired output (steering angle), the MSE of each model at sampling time $t_k > 0$ is calculated as:

$$\mathrm{MSE}_i(t_k)=\frac{1}{k}\sum_{j=1}^{k}\left(\hat{y}_i(t_j)-y^{*}(t_j)\right)^{2}\tag{1}$$

and the ensemble output for $N$ models is generally considered as:

$$\hat{y}(t_k)=\sum_{i=1}^{N}w_i(t_k)\,\hat{y}_i(t_k),\qquad \sum_{i=1}^{N}w_i(t_k)=1,\quad w_i(t_k)\ge 0\tag{2}$$

in which $w_i$ is the weight of the $i$th model with output $\hat{y}_i$, and $N$ is the total number of models considered in the ensemble. In our case, the weights are calculated from the MSE of the corresponding model. Hence, we introduce the weights as:

$$w_i(t_k)=\frac{\sum_{j=1,\,j\neq i}^{N}\mathrm{MSE}_j(t_{k-1})}{\sum_{l=1}^{N}\sum_{j=1,\,j\neq l}^{N}\mathrm{MSE}_j(t_{k-1})}\tag{3}$$

    Fig. 3. Framework of the proposed MSE-based ensemble approach.

where the numerator of the fraction in (3) is the summation of all models' MSE values excluding the model $i$ for which the weight is being calculated. Moreover, the denominator in (3) reduces to a multiple of the summation of the MSEs of the $N$ models considered in the ensemble:

$$\sum_{l=1}^{N}\sum_{j=1,\,j\neq l}^{N}\mathrm{MSE}_j(t_{k-1})=(N-1)\sum_{j=1}^{N}\mathrm{MSE}_j(t_{k-1}).$$

Therefore, (3) can be simplified as:

$$w_i(t_k)=\frac{\sum_{j=1,\,j\neq i}^{N}\mathrm{MSE}_j(t_{k-1})}{(N-1)\sum_{j=1}^{N}\mathrm{MSE}_j(t_{k-1})}\tag{4}$$

The multiplier $(N-1)$ in the denominator of (4) ensures that the conditions in (2) are satisfied. To show this, adding all the weights together results in:

$$\sum_{i=1}^{N}w_i(t_k)=\frac{\sum_{i=1}^{N}\sum_{j=1,\,j\neq i}^{N}\mathrm{MSE}_j(t_{k-1})}{(N-1)\sum_{j=1}^{N}\mathrm{MSE}_j(t_{k-1})}=\frac{(N-1)\sum_{j=1}^{N}\mathrm{MSE}_j(t_{k-1})}{(N-1)\sum_{j=1}^{N}\mathrm{MSE}_j(t_{k-1})}=1.$$

Additionally, at every sampling time $t_k$ the weights are derived from the MSE at the previous sampling time $t_{k-1}$. In this way, the weights are updated according to the recent MSE values of the corresponding models and, notably, models that recently had lower MSE values contribute more to the ensemble output. The workflow of the proposed MSE-based ensemble approach is illustrated in Fig. 3. Section III will discuss the performance of all models, including the proposed ensemble approaches. As one of the most widely applied ensemble techniques for regression problems [50]-[53], the bagging ensemble is also applied and compared to the proposed ensemble strategy in this paper.
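The weight update in (4) is straightforward to implement. The following sketch (our illustration, not the authors' code) computes the weights from the previous sample's MSE values and forms the weighted steering command; the example MSE values are hypothetical.

```python
def mse_weights(mse_prev):
    """Eq. (4): w_i = (sum of all MSEs except MSE_i) / ((N-1) * total).
    Lower recent MSE yields a larger weight; the weights sum to 1."""
    n, total = len(mse_prev), sum(mse_prev)
    return [(total - m) / ((n - 1) * total) for m in mse_prev]

def ensemble_output(predictions, mse_prev):
    """Weighted steering angle from N model predictions (Eq. (2))."""
    weights = mse_weights(mse_prev)
    return sum(w * p for w, p in zip(weights, predictions))

# Three hypothetical models with MSEs 0.01, 0.04, 0.05 at t_{k-1}:
w = mse_weights([0.01, 0.04, 0.05])   # approximately [0.45, 0.30, 0.25]
```

Note that the most accurate model (MSE 0.01) receives the largest weight, and the weights sum to one, as required by the conditions in (2).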

    C. Learning From Demonstration

For the demonstration, we have collected photos of three different views from a human driver’s perspective. Once the human driver is satisfied with their performance, the captured photos are recorded and then used for the imitation learning. It is worth mentioning that we feed the CNN models with exactly the same raw photos captured by the cameras during the simulation experiments. In this case, 14 412 images of 320×160 pixels have been captured. Moreover, 80% of the photos have been used for training and 20% for testing. The CNN models have to predict the best steering angle. Hence, every trained model is expected to output an acceptable steering angle, while the speed and acceleration of the vehicle are prescribed for simplicity. More details of the training and simulations are presented in the next section.
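The 80/20 split mentioned above can be reproduced with a simple shuffled partition. This is an illustrative sketch, since the paper does not specify its splitting tooling; the seed is an arbitrary choice for reproducibility.

```python
import random

def train_test_split(items, test_frac=0.2, seed=42):
    """Shuffle a list of samples and split it into train/test parts."""
    items = list(items)
    random.Random(seed).shuffle(items)
    cut = round(len(items) * (1 - test_frac))
    return items[:cut], items[cut:]

# Splitting the 14 412 captured frames 80/20 (indices stand in for images):
train, test = train_test_split(range(14412))
# len(train) == 11530, len(test) == 2882
```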

    III. EXPERIMENTAL RESULTS AND DISCUSSIONS

In this section, we describe the experimental setup and simulation considered for training an autonomous vehicle from demonstration. As previously mentioned, we utilise a simulation environment to collect the training data for the demonstration phase in a cost-effective way (Fig. 4). For this purpose, we have executed our experiments in a self-driving-car simulator developed by Udacity [54]. The main advantage of this simulation environment is its capability of generating datasets suitable for offline supervised learning algorithms. To this aim, we have performed the following steps:

    Fig. 4. Random sample photos taken by the cameras on the car.

1) The required training data for imitation learning are collected while a human drives the car. Specifically, three cameras are mounted on the car in the simulation environment. Each camera takes photos from its designated angle: left, right, and front views, respectively. Fig. 4 illustrates a random set of the photos taken by the cameras. These cameras constantly capture images along with the recording time samples.

2) After data collection, the next step is to utilise the data for training the CNN models. The CNN parameters are updated by gradient descent algorithms. The models learn the control policy by imitation, similar to the original idea of “pixel to action” [55].

3) All images (Fig. 4) are of size 320×160 pixels. They are stored and labelled by time instance and view: left, front, or right. However, during the learning process they are fed into the models in no particular order. In total, 14 412 images have been captured as the dataset for the training procedure. Further details of the training properties are provided in Table II.

    TABLE II TRAINING PARAMETERS AND PROPERTIES

4) To prevent over-fitting due to an imbalanced dataset, we force the generator to augment images in real time while the models are being trained. This provides the models with an equal chance of seeing different steering angles. We then reshape the training data into appropriate batches and train the models at the same time.

5) In this step, the CNN models are developed and trained. To better compare the effects of each model’s architectural properties on final performance, the models are developed with different structures (Table I). In other words, the number of layers (depth), the number of filters (channels), and the filter size (kernels) increase following a similar pattern. For validation, the models are tested on 20% of the data. Once all models have been trained and tested, they are evaluated on an autonomous vehicle for experimental performance examination and efficiency analysis.
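Step 4 above, real-time augmentation for balanced steering angles, can be sketched as a batch generator. The horizontal-mirror trick, which flips the image and negates the steering angle, is a common balancing technique and an assumption on our part, since the paper does not name its specific augmentations.

```python
import random

def balanced_batches(samples, batch_size=32, seed=0):
    """Endless generator of shuffled batches. Each (image, angle) sample
    is mirrored with probability 0.5, so left and right turns are seen
    equally often while the models train."""
    rng = random.Random(seed)
    samples = list(samples)
    while True:
        rng.shuffle(samples)
        for i in range(0, len(samples) - batch_size + 1, batch_size):
            batch = []
            for image, angle in samples[i:i + batch_size]:
                if rng.random() < 0.5:
                    # mirror the nested-list "image" and negate the angle
                    image, angle = [row[::-1] for row in image], -angle
                batch.append((image, angle))
            yield batch
```

Each epoch re-shuffles the data, so the frames reach the models in no particular order, matching step 3 above.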

It should be mentioned that the images were captured over three complete laps driven by the human driver. Although it is possible to collect more data by driving more laps, the aim of this study is to provide the learning agent with the minimum amount of data to decrease the effort and demonstration time. It is also worth examining the developed learning methodologies under minimal data. Furthermore, collecting a great deal of data from different drivers with various driving styles, and in a variety of driving conditions (weather, road profile, etc.), might be helpful for training models for general applications.

In the following, we present and discuss the resulting performance of the developed CNN models trained as described above. For evaluating and comparing model performance and efficiency, we examine the output of every model based on well-known criteria, including the loss function value, the number of trained parameters (weights), and the training time (computational cost) of each model. Table III summarises the resulting numbers for the benchmarks of each model. Figs. 5-11 demonstrate the results for all 96 developed CNN models and the ensembles in this study. Notably, the considered loss function is the mean squared error.

Fig. 5. Loss values (MSE) of the CNN models. This figure shows the filter size on the x axis and the number of layers on the y axis. Different colours indicate the number of filters. Moving along the y axis towards the deepest models, MSE decreases regardless of the number and size of the filters. Along the x axis, enlarging the filters results in lower MSE. On the other hand, the minimum error in every category chiefly occurs among the models with 16 filters in the first layers (green bars).

As depicted in Fig. 5, moving from the back end of the figure towards the front end, the loss value of deeper models, specifically those with 12 layers or more, drops below 0.04. The largest reduction occurs from the 9Conv models to the 12Conv ones. Although deeper models result in lower loss, the rate of reduction is not as large as that between 9 and 12 layers. Considering the quantity of filters, models with 16 and 32 filters in their first convolutional layers show better performance. Surprisingly, in most configurations, models with 16 filters slightly outperform the more complicated models with 32 filters. This effect is more visible in Fig. 9, where box plots of the loss values are illustrated with respect to different properties: (a) the number of layers, (b) the number of filters, and (c) the filter size. Also, Figs. 9(d)-(f) show that the number of parameters significantly increases with the quantity of layers and filters, while the size of the filters has almost no impact on the number of parameters.

Fig. 6. Loss values of the CNN models and their trend line with respect to the number of parameters. In (a), colours indicate the number of layers. Deep models (light blue, dark blue, and purple) are mostly placed below the estimation line. Considering the horizontal axis, the quantity of layers does not increase the number of parameters, as different colours are spread along that axis. In (b), models are coloured by their number of filters. The presence of all colours both above and below the regressor line shows that increasing the number of filters does not inevitably result in error reduction, whilst a monotonic growth in the number of parameters with the number of filters is visible. In (c), we can observe almost all colours in every region of the plot, which means that the size of the filters has no momentous impact on the performance.

Fig. 7. Overall comparison of MSE values among the CNN models. It is apparent that the deeper models have lower error. Moving from the right corner to the left corner of the graph, we see a big change in colours. In other words, deep models with bigger filters outweigh same-depth models with smaller filters.

Furthermore, Fig. 10 presents the growth in parameter count through the CNN models. Noticeably, model depth is sorted in the opposite direction to that of Fig. 5. As a result, the number of parameters increases as the models deepen. However, the growth in parameter count is more significant with respect to the number of filters (Figs. 9 and 10). On the other side, the loss value decreases as the number of parameters increases. Fig. 6 shows the overall relation between the number of parameters and loss values, in three different presentations with respect to the configuration properties of the CNN models. Consequently, training time drastically increases with the number of parameters (Fig. 8). As Figs. 6 and 8 show, the deepest models with 18 convolutional layers have the maximum parameter count and training time, but the minimum loss value. Considering the computational cost and time, Fig. 8 compares the training time of the CNN models in terms of (a) the number of layers, (b) the number of filters, and (c) the filter size, respectively. As shown in Fig. 8(b), models with the largest number of filters (32) lead to a great number of parameters and, subsequently, a long training time, whilst Figs. 8(a) and (c) illustrate that even shallower “9Conv” models and smaller 5×5 and 3×3 filters take considerable time to learn.

From Fig. 9, larger filters result in lower errors. However, models with a mix of filter sizes perform better, which shows that bigger filters only in the first layers can be more effective than enlarging filters throughout the network. Although the size of the filters does not affect the total number of parameters (Fig. 9(f)), to some extent it reduces the computational cost, as models with the combined filter size 7×7_3×3 have fewer parameters in total than those with the same number of layers and filters but filters sized 7×7. Fig. 7 also shows the overall loss value of the models using scaled colours, for better visualisation. The decreasing trend of the loss value among the models is quite apparent in it.

    IV. CONCLUSIONS

Fig. 8. Training time of the CNN models and their trend line with respect to the number of parameters. Plot (a) groups the models in different colours based on the number of layers. Apparently, deeper models, even with fewer parameters because of smaller filters, took longer to train. Similar to Fig. 6(b), in (b) we again see that models with more filters have a substantially greater number of parameters and, therefore, longer training time. As the scatter in plot (c) shows, the size of the filters affected neither the number of parameters nor the training time.

Having a reliable and robust vision is a mandatory requirement for autonomous vehicles, and convolutional neural networks are among the most successful deep neural networks for image processing applications. On the other hand, applying imitation learning methods reduces the amount of training data while speeding up the training process. However, CNN models generally require a great deal of training data, and finding an optimal configuration for the design parameters of a CNN is another challenge. In this paper, we consider an autonomous vehicle in a simulation environment and capture images of three different views from a car driven by a human, as the demonstration phase of imitation learning. The recorded data are then used for training CNN models. In this work, a total of 96 CNN architectures have been developed with different numbers of layers, numbers of filters, and filter sizes, to investigate the impact of these three properties on the performance of CNN models.

The results show that increasing the depth of the models decreases the error (loss value). The minimum MSE achieved by the shallowest models (3 layers) is just below 0.045, while the maximum MSE of the deepest models (18 layers) is nearly 0.04. The biggest improvement is observed for models with between 9 and 12 layers, where the MSE drops from about 0.045 to 0.036. Additionally, utilising more filters in a network also reduces the error, although the best results are achieved by models with 16 filters in their first layers. In summary, increasing the number of filters does not necessarily result in better performance, but deeper models with larger filters in their first layers perform outstandingly. As the results show, for this case study, a model with at least 12 layers and 16 filters, ranging from 7×7 in the first layers to 3×3 in the last layers, has the best performance amongst the examined CNN architectures. However, finding the optimal architectural properties for a CNN is still under investigation. In this paper, we have analysed different CNN structures to experimentally find a search range for the best architectural parameters.

Finally, we have proposed an ensemble approach which assigns weights to the models based on their recent MSE values. The proposed ensemble performed better than bagging, a well-known ensemble technique for regression problems. Applying an ensemble over different CNN models is recommended, as there might be various choices for the design properties.

    V. FUTURE WORK

Fig. 9. (a) shows box plots of the errors for every layer category. Although the error notably decreases as the number of layers increases, a significant drop is observed between the 9-layer and 12-layer models. As shown in (b), the models with 16 filters obtain the best results. As illustrated in (c), having larger filters in the first layers, rather than throughout the network, results in better performance. Considering two points on the regressor line in (e) with values 16 and 32 on the horizontal axis, and comparing the corresponding numbers on the vertical axis, the slope of the line is almost 1. As a result, the number of parameters grows at approximately the same ratio as the number of filters. However, as (f) displays, enlarging the size of the filters hardly affects the parameter count. Moreover, considering the number of trainable parameters, the fitted regressor line in plot (e) has the steepest slope compared to (d) and (f).

Fig. 10. Impact of design parameters on trainable parameters. As this figure shows, growth in the number of parameters is mostly caused by the quantity of filters in a CNN model. Models with more filters have a significantly larger number of parameters. Increasing the number of layers also leads to a rise in the number of parameters; however, the size of the filters does not have a comparably distinguishable impact.

Although imitation CNN models have promising outcomes, the optimal design of a CNN for achieving the best performance remains in high demand in deep learning applications. Specifically, taking advantage of imitation learning methodologies requires the deep learning approaches to be as efficient as possible to make the whole process practically effective and robust. To achieve this goal, the optimal selection of network and training parameters is an open question that researchers are actively investigating worldwide. The future of this research is to study more intelligent and efficient techniques, including genetic and evolutionary algorithms [56], [57], to obtain the optimum set of parameters.

Fig. 11. Comparison between the models and the ensembles. The bagging approach generally performs better than the individual models owing to its lower medians. However, the proposed ensemble method not only outperforms all models, it also achieves a better result than the bagging technique. This is because the weights are updated at every sample time based on the MSE values from the previous samples.

    APPENDIX

In this appendix, Table I presents the structural and architectural properties of the CNN models developed in this study. Also, Table III presents the numerical results of the models’ performance in terms of the number of trainable parameters, training time, and loss value (MSE).

TABLE I STRUCTURAL PROPERTIES OF THE CNN MODELS. COLOURS RED, GREEN, AND YELLOW INDICATE THE FILTER SIZES 3×3, 5×5, AND 7×7, RESPECTIVELY. THE NUMBER IN EVERY COLOURED CELL PRESENTS THE COUNT OF FILTERS OF THE CORRESPONDING SIZE IN EACH LAYER

TABLE I (continued)

TABLE I (continued)

    TABLE III NUMERICAL PERFORMANCE RESULTS OF THE TRAINED CNN MODELS
