
    Ship Local Path Planning Based on Improved Q-Learning

    2022-06-18 07:40:16
    Journal of Ship Mechanics (船舶力學), 2022, No. 6

    (1. Key Laboratory of High Performance Ship Technology (Wuhan University of Technology), Ministry of Education, Wuhan 430000, China; 2. School of Transportation, Wuhan University of Technology, Wuhan 430000, China)

    Abstract: Local path planning is an important part of intelligent ship sailing in an unknown environment. In this paper, based on the reinforcement learning method of Q-Learning, an improved Q-Learning algorithm is proposed to solve the problems existing in local path planning, such as slow convergence speed, high computational complexity and easily falling into local optima. In the proposed method, the Q-table is initialized with the artificial potential field, so that it has prior knowledge of the environment. In addition, considering the heading of the ship, the two-dimensional position information is extended to three dimensions by adding the angle information. Then, the traditional reward function is modified by introducing the forward information and the obstacle information obtained by the sensor, and by adding the influence of the environment. Therefore, the proposed method is able to obtain the optimal path while reducing the ship energy consumption to a certain extent. The real-time capability and effectiveness of the algorithm are verified by simulation and comparison experiments.

    Key words: Q-Learning; state set; reward function

    0 Introduction

    Path planning plays an important role in the navigation of autonomous vehicles such as the unmanned surface vehicle (USV). Path planning methods include global path planning and local path planning. In global path planning, a safe path is planned for the USV in a static environment, where the obstacles are assumed to be known. Local path planning deals with the problems of identifying the dynamic condition of the environment and avoiding obstacles in real time. For local path planning, some algorithms have been proposed in the literature, such as the artificial potential field[1], genetic algorithm[2], fuzzy logic[3], neural network[4-6] and so on. These algorithms can plan a safe path in a partially known or partially unknown environment, but their adaptability to fully unknown or rapidly changing environments is limited.

    At present, reinforcement learning is a very active research area, and there are many studies on reinforcement learning for path planning[7-9]. Common reinforcement learning algorithms include Q-Learning, SARSA, TD-Learning and adaptive dynamic programming. Sadhu[10] proposed a hybrid algorithm combining the Firefly algorithm and the Q-Learning algorithm. In order to speed up the convergence, the flower pollination algorithm was utilized to improve the initialization of Q-Learning[11]. Cui[12] optimized the cost function through Q-Learning to obtain the optimal conflict avoidance action sequence of a UAV in a motion-threat environment, and considered maneuvers in order to create path plans that comply with UAV movement limitations. Ni[13] proposed a joint action selection strategy based on tabu search and simulated annealing, and created a dynamic learning rate for a Q-Learning-based path planning method so that it can effectively adapt to various environments. Ship maneuverability has also been integrated into the Q-Learning algorithm as prior knowledge, which shortens the model training time[14].

    In this paper, we propose an improved Q-Learning algorithm for a ship sailing in an unknown environment. Firstly, the algorithm augments the state variable with the ship's heading information, which increases the path smoothness, and enlarges the action set to increase the path diversity. Then, we introduce the potential field attraction to initialize the Q table and speed up the convergence of the algorithm. After that, we modify the reward function by adding the angle of the environmental force and the forward guidance to the target point, which reduces the number of exploration steps and the calculation time of the algorithm. The search performance of the traditional algorithm and that of the improved algorithm in different environments are compared in the simulation experiments.

    1 Classical Q-Learning algorithm

    Q-Learning is a typical reinforcement learning algorithm developed by Watkins[15], and it is a kind of reinforcement learning that requires no environment model. Q-Learning applies the concept of reward and penalty in exploring an unstructured environment, and the interaction framework of Q-Learning is shown in Fig.1. The agent in Fig.1 represents the unmanned vehicle, which in this paper is the USV. The state is the position of the USV in the local environment, and the action is the movement that takes the USV from the current state to the next state. The reward is a positive value that increases the Q value for a correct action, while the penalty is a negative value that decreases the Q value for a wrong action.

    In general, the idea of Q-Learning is not to estimate an environment model, but to directly optimize a Q function that can be iterated. The convergence of the Q values to their optimal values does not depend on the policy being followed. The Q values of the Q-Learning algorithm are updated using the expression:
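
    The update expression (Eq.(1)) is not reproduced in the extracted text; the standard tabular Q-Learning update, consistent with the variable definitions below, is

    Q(s,a) \leftarrow Q(s,a) + \alpha \left[ r + \gamma \max_{a' \in A} Q(s',a') - Q(s,a) \right]    (1)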

    Fig.1 Interaction between agent and environment

    where s indicates the current state of the ship, a indicates the action performed in state s, r indicates the reinforcement signal received after a is executed in s, γ indicates the discount factor (0<γ<1), and α indicates the learning coefficient (0<α<1).

    The action to be performed in the next state is selected by the following ε-greedy expression:
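
    The original selection rule (Eq.(2)) is not reproduced in the extracted text; a standard ε-greedy form, consistent with ε being the degree of greed (the direction of the comparison with the random value c is an assumption), is

    a = \begin{cases} \arg\max_{a' \in A} Q(s',a'), & c \le \varepsilon \\ \text{a random action in } A, & c > \varepsilon \end{cases}    (2)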

    where ε indicates the degree of greed and is a constant between 0 and 1, c is a random value, and A represents the set of actions.

    2 Improved Q-Learning algorithm

    2.1 Improved state set

    The state in the traditional Q-Learning algorithm is represented by the grid position, and the corresponding four actions of each state are up, down, left and right[16]. However, ship path planning not only needs the position information, but also needs to consider the heading information. Therefore, the following improvements are made in this paper: (1) the angle information introduced on the basis of the position information is discretized into eight directions; in other words, the state is extended from two degrees of freedom (DOF) to three DOF, as shown in Fig.2; (2) four additional actions are introduced, i.e., left front, right front, left back and right back, as shown in Fig.3; with the enlarged action set, the planned path is more diverse than with the traditional method. Besides, some actions, such as backward actions, are punished, because the ship cannot perform a large bow turn and is unlikely to retreat.

    Fig.2 Improved status

    Fig.3 Improved action
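
    As a concrete illustration of the improved state and action sets of Fig.2 and Fig.3, the following minimal Python sketch builds the three-DOF state space and the eight-action set (the grid size and array layout are illustrative assumptions, not taken from the paper):

    import numpy as np

    # Headings discretized into eight directions (0°, 45°, ..., 315°), as in Fig.2.
    HEADINGS = [i * 45 for i in range(8)]

    # Improved action set (Fig.3): the four traditional moves plus four diagonal moves.
    ACTIONS = [
        "front", "back", "left", "right",
        "left_front", "right_front", "left_back", "right_back",
    ]

    GRID_W, GRID_H = 10, 10   # assumed small-map size; the paper also uses a 20x20 map

    # Three-DOF state (x, y, heading index) -> Q values of the eight actions.
    q_table = np.zeros((GRID_W, GRID_H, len(HEADINGS), len(ACTIONS)))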

    The ship motion coordinate system is shown in Fig.4, in which θv represents the ship heading, θgoal represents the angle between the line from the ship's current position to the target point and the x-axis, and θenv represents the direction of the environmental force.

    In order to quantitatively analyze the degree of path smoothness planned by the algorithm, a path angle function is introduced as follows:
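
    The exact expression of the path angle function is not reproduced in the extracted text; one plausible form, consistent with the definitions that follow, is

    \zeta = \sum_{i=2}^{n} \left| \theta_i - \theta_{i-1} \right|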

    where θi − θi−1 indicates the included angle between point i and its previous point, and n represents the number of path points. Obviously, the smaller the value of ζ, the smoother the path curve, which is more suitable for ship navigation.

    Fig.4 Ship motion coordinate system

    2.2 Prior information

    The artificial potential field method is widely used in path planning. Its basic idea is to construct an artificial potential field in the surrounding environment of the ship, which includes a gravitational (attractive) field and a repulsive field. The target node exerts an attractive force on the ship, and each obstacle has a repulsive force field within a certain range around it, so that the resultant force in the working environment pushes the ship toward the target. The traditional Q-Learning algorithm has no prior information, which means that the Q values are set to the same value or random values in the initialization process, so the convergence speed of the algorithm is slow. In order to solve this problem, the attraction information of the potential field is introduced when the Q table is initialized. That is, we add prior knowledge of the environment to speed up the convergence of the algorithm.

    The gravitational function is defined as
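
    The original expression is not reproduced in the extracted text. A common form of the attractive (gravitational) function in artificial potential field methods is U_att(p) = (1/2) k ||p - p_goal||^2, with k a positive gain coefficient. The sketch below uses this assumed form to initialize the Q table with environmental prior knowledge (the proportionality between the initial Q value and the attraction is also an assumption):

    import numpy as np

    def attractive_potential(x, y, goal, k=1.0):
        """Common APF attractive potential: 0.5 * k * squared distance to the goal
        (assumed form; the paper's exact expression is not reproduced here)."""
        return 0.5 * k * ((x - goal[0]) ** 2 + (y - goal[1]) ** 2)

    def init_q_table(grid_w, grid_h, n_headings, n_actions, goal, k=1.0):
        """Initialize Q with prior knowledge: states closer to the goal (lower attractive
        potential) start with higher Q values, instead of all zeros or random values."""
        q = np.zeros((grid_w, grid_h, n_headings, n_actions))
        for x in range(grid_w):
            for y in range(grid_h):
                q[x, y, :, :] = -attractive_potential(x, y, goal, k)
        return q

    q_table = init_q_table(10, 10, 8, 8, goal=(9, 9))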

    2.3 Improved reward function

    The reward function in Q-Learning maps the perceived state to the reinforcement signal to evaluate the advantages and disadvantages of the actions, so its definition determines the quality of the algorithm. The traditional Q-Learning algorithm has a simple reward function that does not contain any heuristic information, which leads to a slow convergence speed and high complexity of the algorithm. In this paper, the reward function is divided into three stages: reaching the target point, encountering obstacles, and the intermediate stage shown in Eq.(6).
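
    Eq.(6) itself is not reproduced in the extracted text; a plausible piecewise structure, consistent with the quantities defined immediately below (how f3 combines with the weighted evaluation factors is an assumption), is

    r = \begin{cases} f_1, & p = \text{Goal} \\ f_2, & \| p - d_{obs} \| \le d_o \\ f_3 \left( m_1 d_1 + m_2 d_2 + m_3 d_3 \right), & \text{otherwise} \end{cases}    (6)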

    where f1, f2 and f3 are predefined constants, p is the current point, Goal is the target point, dobs is the obstacle position, do is the obstacle influence range, m1, m2 and m3 are positive weight coefficients whose sum is 1, and d1, d2 and d3 are the evaluation factors defined as follows:

    where d2 is the angle evaluation factor, and k1 and k2 are the weight coefficients. It can be seen from Fig.4 that when the ship takes collision avoidance measures while being influenced by the environmental force, the direction of the environmental force is highly relevant to the direction of the ship: following the direction of the resultant environmental force is beneficial for saving energy, whereas if the angle between the direction of the environmental force and the heading of the ship is between 120° and 240°, sailing is relatively energy-consuming. It is also necessary to consider the angle between the heading and the direction to the target point, and the smaller this angle, the better. When the direction of navigation coincides with that of the environmental force, energy is conserved; however, the ship should not deviate excessively from the direction of the target point, so the punishment becomes larger as the deviation angle of the ship increases.

    where d3 is the obstacle evaluation factor, b is the total number of obstacles within the sensor detection range, and dh(b) is the distance from the current point to obstacle b. The formula shows that the closer the ship is to an obstacle, the greater the penalty.

    Since the heading of the ship is limited, we do not want the ship to retreat. Therefore, when the ship moves from one state to another but the action is not front, left front or right front, the reward value is set to a negative value to avoid a large change in the angle of the planned path.
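
    A minimal Python sketch of this three-stage reward, including the penalty for non-forward actions, is given below; the concrete formulas for d1, d2 and d3 and the default parameter values are assumptions based on the descriptions above, not the paper's exact expressions:

    import math

    def angle_diff(a, b):
        """Smallest absolute difference between two angles in degrees."""
        d = abs(a - b) % 360
        return min(d, 360 - d)

    def reward(p, goal, obstacles, heading, theta_goal, theta_env, action,
               f1=10.0, f2=-10.0, f3=-0.02, m=(0.7, 0.2, 0.1),
               k1=0.5, k2=0.5, d_o=1.0, back_penalty=-1.0):
        """Three-stage reward: reaching the goal, entering an obstacle's influence
        range, or an intermediate step scored by three evaluation factors."""
        dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
        if p == goal:                                           # stage 1: target reached
            return f1
        if any(dist(p, ob) <= d_o for ob in obstacles):         # stage 2: obstacle influence range
            return f2
        # d1: forward evaluation factor -- distance to the goal (larger when farther away).
        d1 = dist(p, goal)
        # d2: angle evaluation factor -- weighted deviation from the goal direction and
        # from the environmental force direction (k1, k2 are the weight coefficients).
        d2 = k1 * angle_diff(heading, theta_goal) + k2 * angle_diff(heading, theta_env)
        # d3: obstacle evaluation factor -- grows as detected obstacles get closer.
        d3 = sum(1.0 / max(dist(p, ob), 1e-6) for ob in obstacles)
        r = f3 * (m[0] * d1 + m[1] * d2 + m[2] * d3)            # f3 < 0: larger factors, larger penalty
        # Extra punishment for actions other than front / left front / right front,
        # since large bow turns and retreats are undesirable for the ship.
        if action not in ("front", "left_front", "right_front"):
            r += back_penalty
        return r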

    2.4 Modification of greedy selection

    The balance between the exploration and exploitation of the algorithm depends on the greedy setting. In the traditional algorithm the greed is set to a constant, which makes the performance depend on the quality of the chosen value. If the value is set too large, the algorithm falls into a local optimum from which it cannot escape; if it is set too small, the algorithm keeps searching for other paths even after the shortest path has been found, resulting in a slow convergence speed and a long calculation time. In this paper, we propose an adaptive greed: the algorithm focuses on path exploration in the early stage to improve path diversity, and with the increase of the number of iterations it focuses on exploiting the results of the previous exploration. The formula is as follows:
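
    Eq.(10) is not reproduced in the extracted text; one plausible monotone schedule, consistent with the description below and with the parameter settings of Section 3 (an initial value ε0 = 0.5 and εmax = 0.95), is

    \varepsilon(h) = \min \left( \varepsilon_{\max}, \; \varepsilon_0 + \lambda h \right)    (10)

    Both the linear form and the per-iteration increment λ are assumptions.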

    where h is the current number of iterations and εmax is the maximum greedy value. As can be seen from Eq.(10), with the increase of the number of iterations, the greedy value increases until the maximum value is reached. Therefore, the probability of randomly selecting an action is gradually reduced, and the convergence of the algorithm is accelerated.

    2.5 Algorithm pipeline

    The improved algorithm pipeline is shown in Fig.5 and is described as follows:

    Step 1: Establish a Q table formed by states and actions, and initialize the Q value table, the start state S and the iteration counter h. The state S is three-DOF information composed of the grid position and the angle; determine whether S is the target point.

    Step 2: At the beginning of each iteration, the greed degree ε is set according to Eq.(10) for the action selection policy. If the random value c is greater than ε, an action is selected at random; if c is less than or equal to ε, the action with the maximum Q value is selected according to Eq.(2).

    Step 3: Obtain the reward value of the action according to Eq.(6); the reward value takes into account the direction of the environmental force, whether the ship advances toward the target point, and the distance from the obstacles.

    Step 4: Update the Q value table according to Eq.(1). If the next state is the target, the loop ends; otherwise, return to Step 2. If the results of 15 successive iterations are consistent, the output is considered to have converged in advance; otherwise, if the number of iterations h is greater than 1000, the result is output directly.
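
    A compact sketch of the training loop described by Steps 1-4 is given below; the environment object env and its methods reset(), at_goal() and step() are illustrative placeholders, the ε schedule follows the assumed linear form of Eq.(10), and the convergence check uses the 15-consecutive-results rule stated above:

    import random
    import numpy as np

    def train(env, q_table, alpha=0.4, gamma=0.95,
              eps0=0.5, eps_max=0.95, eps_step=0.001, max_iters=1000):
        """Improved Q-Learning loop: adaptive greed, reward from Eq.(6), update from Eq.(1)."""
        recent = []                                       # path lengths of recent iterations
        for h in range(1, max_iters + 1):
            eps = min(eps_max, eps0 + eps_step * h)       # adaptive greed (assumed linear schedule)
            state, steps = env.reset(), 0                 # state = (x, y, heading index)
            while not env.at_goal(state):
                if random.random() <= eps:                # exploit: action with the maximum Q value
                    action = int(np.argmax(q_table[state]))
                else:                                     # explore: random action
                    action = random.randrange(q_table[state].shape[-1])
                next_state, r = env.step(state, action)   # next state and Eq.(6)-style reward
                # Eq.(1): tabular Q-Learning update
                q_table[state][action] += alpha * (
                    r + gamma * np.max(q_table[next_state]) - q_table[state][action])
                state, steps = next_state, steps + 1
            recent.append(steps)
            # Early convergence: the same result for 15 successive iterations.
            if len(recent) >= 15 and len(set(recent[-15:])) == 1:
                break
        return q_table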

    Fig.5 Flow chart of the improved Q-Learning algorithm

    3 Simulation results and analysis

    In order to verify the validity of the algorithm, the improved Q-Learning algorithm and the traditional Q-Learning algorithm are compared on a simulation platform of Python 3.6 with an Intel(R) Core(TM) i3 CPU at 3.9 GHz and 8 GB of memory. A grid map is established and the sensor measurement range is set to four grids, as shown in Fig.6. The simulation is divided into two common scenarios: in one there are many obstacles near the starting point of the ship, as when entering a port, and in the other there are obstacles near the target point, as when the ship leaves a port. One map is a small grid of 10×10, and the other is a large grid of 20×20. The related parameters of the algorithm are as follows: the maximum number of iterations h=1000, the learning rate α=0.4, the discount factor γ=0.95, the greedy policy ε=0.5, εmax=0.95, the parameters m1, m2 and m3 are 0.7, 0.2 and 0.1 respectively, and f1, f2, f3 are +10, -10 and -0.02. If the same value is output for 15 consecutive rounds, the algorithm is regarded as having converged in advance; otherwise the algorithm ends when the number of iterations reaches the maximum.

    Fig.6 Simulation environment

    Fig.7 and Fig.8 show the path lengths of the two algorithms in the small map and the large map respectively, where Fig.7(a) and Fig.8(a) are the traditional algorithm and Fig.7(b) and Fig.8(b) are the improved algorithm. As can be seen from Fig.7, the maximum number of exploration steps of the traditional algorithm is 1600 steps, while that of the improved algorithm is only 15 steps, a reduction of 99.06%. When the map becomes larger, the number of exploration steps increases. In the early stage, the number of exploration steps of the traditional algorithm is large, with most values above 500 steps and a maximum of 3000 steps. Compared with the traditional algorithm, the number of exploration steps of the improved algorithm is very small, with a maximum of 32 steps, only 1.07% of that of the traditional algorithm. So we can conclude that the improved algorithm has better exploration performance than the traditional algorithm.

    Fig.7 Length of path versus the iterations in small map

    Fig.8 Length of path versus the iterations in large map

    Fig.9 and Fig.10 are the path graphs of the two algorithms in the small map and the large map, where Fig.9(a) and Fig.10(a) are the traditional algorithm, and Fig.9(b) and Fig.10(b) are the improved algorithm. From Fig.9, we can see that the improved algorithm enlarges the action set and improves the diversity of paths. In addition, the improved algorithm converts the two-DOF state variable into three DOF by adding the heading information, and considers the limitation of ship motion. As can be seen from Fig.10, the improved algorithm reduces the rotation angle and improves the smoothness of the path.

    Fig.9 Planned path in small map

    In Fig.9 there are more obstacles near the end point, whereas in Fig.10 there are more obstacles near the starting point, which correspond to the cases of the ship entering and leaving port. We can see that the improved algorithm plans a better route from the starting point to the target node.

    Fig.10 Planned path in large map

    Tab.1 compares the performance of the above algorithms in path planning, including important factors such as running time, total number of running steps, path length and path angle. The data in the table are averaged over 10 runs. In the simple environment, compared with the traditional algorithm, the total number of steps of the improved algorithm is reduced by 91.19%, and the final path is shortened by 38.89%; meanwhile, the smoothness of the planned path is improved by 79.17%, and the running time is shortened by 83.22%. In the complex environment, the number of steps of the improved algorithm is reduced by 98.59%, the path length is shortened by 42.11%, the path smoothness is improved by 75%, and the running time is reduced by 95.98%. The simulation results show that the improved algorithm is superior to the traditional algorithm, and when the map is enlarged, the running time of the improved algorithm remains less than that of the traditional algorithm, indicating its effectiveness and real-time performance.

    Tab.1 Performance comparison of different algorithms in different environments

    4 Concluding remarks

    This paper presented an improved Q-Learning algorithm for local path planning in an unknown environment. In the proposed algorithm, the Q value table was initialized with the potential field attraction to reduce the number of exploration steps. The two-DOF state variable was extended to three DOF with the heading information, which reduces the curvature of the route. The path was diversified by enlarging the action set, while the angle limit was considered to meet the requirements of ship driving. The reward function was modified by introducing the distance guidance and the direction of the environmental force, with the convergence speed of the algorithm accelerated and the path length shortened as a result. The comparison of the traditional Q-Learning algorithm with the improved algorithm and the simulation results show that the improved algorithm is effective and feasible.
