
    Data-Based Optimal Tracking of Autonomous Nonlinear Switching Systems

IEEE/CAA Journal of Automatica Sinica, 2021, Issue 1

    Xiaofeng Li, Lu Dong, Member, IEEE, and Changyin Sun, Senior Member, IEEE

Abstract—In this paper, a data-based scheme is proposed to solve the optimal tracking problem of autonomous nonlinear switching systems. The system state is forced to track the reference signal by minimizing the performance function. First, the problem is transformed into solving the corresponding Bellman optimality equation in terms of the Q-function (also known as the action-value function). Then, an iterative algorithm based on adaptive dynamic programming (ADP) is developed to find the optimal solution, relying entirely on sampled data. A linear-in-parameter (LIP) neural network is taken as the value function approximator. Considering the presence of an approximation error at each iteration step, the generated sequence of approximated value functions is proved to be bounded around the exact optimal solution under some verifiable assumptions. Moreover, the effect of terminating the learning process after a finite number of iterations is investigated, and a sufficient condition for asymptotic stability of the tracking error is derived. Finally, the effectiveness of the algorithm is demonstrated with three simulation examples.

    I. INTRODUCTION

THE optimal scheduling of nonlinear switching systems has attracted considerable attention in recent decades. A switching system is a hybrid dynamic system that consists of continuous-time subsystems and discrete-time events. At each time step, only one subsystem is active, so the main issue is to find the optimal policy that determines "when" to switch the mode and "which" mode should be activated [1], [2]. Many complex real-world applications can be described as switching systems, ranging from bioengineering to electronic circuits [3]-[7].

Generally, the existing methods for the optimal switching problem can be classified into two categories. Methods in the first category find the switching sequence in a "planning" manner. In [8]-[11], nonlinear programming based algorithms are designed to determine the switching instants by using the gradient of the performance function. Note that the sequence of active modes is required to be fixed a priori. In [12], the authors propose a two-stage decision algorithm that allows a free mode sequence by separating the decisions on the active mode and the switching time. On the other hand, discretization-based methods solve the problem by discretizing the state and input spaces with a finite number of options [13]-[16]. However, these planning based algorithms achieve good performance only for specific initial states. Once the given initial conditions are changed, a new planning schedule must be made from scratch.

Optimal control is an important topic in modern control theory, which aims to find a stabilizing controller that minimizes the performance function [17]. In recent years, researchers have developed many optimal schemes for practical real-world applications, such as trajectory planning and closed-loop optimal control of cable robots [18]-[20]. Based on the reinforcement learning mechanism, the adaptive dynamic programming (ADP) algorithm was first developed to solve the optimal control problem of discrete-time systems with continuous state space [21], [22]. In parallel, a continuous-time framework was proposed by the group of Frank L. Lewis to extend the application of ADP to continuous-time nonlinear systems [23]-[25]. Two main iterative methods, value iteration (VI) [26] and policy iteration (PI) [27], are employed to solve the Hamilton-Jacobi-Bellman (HJB) equation. The actor-critic (AC) structure is often employed to implement the ADP algorithm with two neural networks (NNs) [28]. The critic network takes the system states as input and outputs the estimated value function, while the actor network approximates the mapping between states and control input [29].

According to the required knowledge of the system dynamics, the family of ADP algorithms can be divided into three main classes: model-based methods, model-free methods, and data-based methods. Model-based ADP algorithms require exact knowledge of the plant dynamics [26], [27], [30]. A monotone non-decreasing or non-increasing sequence of value functions is generated by the VI or PI based algorithm, which converges to the optimal solution. For model-free algorithms, the system model is first identified, e.g., by using a neural network (NN) or a fuzzy system. Then, the iterations are carried out based on the approximated model [31], [32]. It is worth noting that the presence of identification error may lead to sub-optimality of the learned policy. In contrast to the above two approaches, data-based ADP methods rely entirely on input and output data [33]-[37]. The objective is to solve the Q-function based Bellman optimality equation so that the optimal controller can be obtained without knowing the system dynamics. Recently, the combination of the ADP method with an event-trigger mechanism has been investigated, which substantially reduces the number of control updates without degrading the performance [38]-[41]. Considering the uncertainty of the system dynamics, robust ADP algorithms have been proposed to find the optimal controller for practical applications [42], [43]. In addition, many practical applications have been solved successfully by using the ADP method [44]-[46].

As a powerful method for solving the HJB equation, ADP has been applied to solve the optimal control of switching systems in recent years. In [30], the optimal switching problem of autonomous subsystems is solved by using an ADP based method in a backward fashion. In addition, the minimum dwell time constraint between different modes is considered in [47]. The feedback solution is obtained by learning the optimal value function with respect to the augmented state, which includes the system state, the currently active subsystem, and the time elapsed in the given mode. In order to reduce the switching frequency, a switching cost is incorporated into the performance function [48]. In [49], the optimal tracking problem with an infinite-horizon performance function is investigated by learning the mapping between the optimal value function and the switching instants. For continuous-time autonomous switching systems, a PI based learning scheme is proposed that accounts for the effect of the approximation error on the closed-loop behaviour [50]. Moreover, the problem of controlled switching nonlinear systems is addressed by co-designing the control signal and the switching instants. In [51], the authors develop a VI based algorithm for solving the switching problem. Since a fixed-horizon performance function is considered, the optimal hybrid policy is obtained backward-in-time. In [52], the optimal control and triggering of a networked control system is first transformed into an augmented switching system. Then, an ADP based algorithm is proposed to solve the problems with zero-order hold (ZOH), generalized ZOH, finite-horizon, and infinite-horizon performance functions. These aforementioned methods provide a closed-form solution that works for a vast domain of initial states. However, it is worth noting that accurate system dynamics are required to implement the existing algorithms, which are difficult to obtain for complex nonlinear systems. In addition, the effect of the approximation error incurred by employing an NN as the value function approximator is often ignored in the previous literature.

In this paper, a data-based algorithm is first proposed to address the optimal switching problem of autonomous subsystems. Instead of requiring a system model, only input and output data are needed to learn the switching policy. Furthermore, two realistic issues are considered in this paper. On the one hand, the effect of the approximation errors between the outputs of an NN and the true target values is investigated. On the other hand, a sufficient condition is derived to guarantee the stability of the tracking error with a finite number of iterations. In addition, a critic-only structure is utilized for implementing the algorithm. The main contributions of this paper are listed as follows. First, the problem is transformed into solving the Q-function based Bellman optimality equation, which enables us to derive a data-based algorithm. Second, considering the approximation errors, an approximated Q-learning based algorithm is first proposed for learning the optimal switching policy. Finally, the theoretical analysis of the continuity of the Q-functions, the boundedness of the generated value function sequence, and the stability of the system is presented. Since [50]-[52] are all model-based methods, the completely "model-free" character of the proposed algorithm demonstrates its potential for complex nonlinear systems.

The rest of this paper is organized as follows. Section II presents the problem formulation. In Section III, the exact Q-learning algorithm is proposed. Then, the approximated method is derived considering the approximation error and a finite number of iterations. In addition, a linear-in-parameter (LIP) NN is utilized for implementing the algorithm, whose weights are updated by using the least-mean-square (LMS) method. In Section IV, the theoretical analysis is given. Afterwards, three simulation examples are given in Section V. The simulation results demonstrate the potential of the proposed method. Finally, conclusions are drawn in Section VI.

    II. PROBLEM FORMULATION

Hence, the tracking problem is transformed into finding the optimal Q-function. In the next section, an iterative Q-learning based algorithm is developed. In addition, the effects of the presence of the approximation error as well as the termination condition of the iterations are considered.
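For reference, the Q-function form of the Bellman optimality equation targeted by the algorithm can be sketched in standard notation, where $U(x_k,s_k,v_k)$ denotes the stage cost (the symbol $U$ is assumed here), $f_{v_k}$ the dynamics of the active subsystem, $F$ the reference command generator, and $\Xi$ the set of modes:

$$Q^*(x_k, s_k, v_k) = U(x_k, s_k, v_k) + \min_{v \in \Xi} Q^*\big(f_{v_k}(x_k),\, F(s_k),\, v\big),$$

and the optimal switching policy then follows as $v_k^* = \arg\min_{v \in \Xi} Q^*(x_k, s_k, v)$.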

    III. PROPOSED ALGORITHM AND ITS IMPLEMENTATION

    A. Exact Q-Learning Algorithm

It is worth noting that the convergence, optimality, and stability properties of the exact Q-learning algorithm are established under several ideal assumptions. On the one hand, the exact reconstruction of the target value function (15) is difficult when using value function approximators, except for some simple linear systems. On the other hand, theoretically, an infinite number of iterations is required to obtain the optimal Q-function. In the following subsection, these two realistic issues are considered and the approximated Q-learning algorithm is developed.

    B. Approximated Q-Learning Algorithm

The approximated Q-learning method is obtained by extending the exact Q-learning algorithm. First, the algorithm starts from a zero initial Q-function, i.e., $\hat{Q}^{(0)} = 0$. Afterwards, considering the approximation error, the algorithm iterates between the greedy mode-selection step and the Q-function update step.
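As a sketch of these two steps, consistent with Steps 6 and 7 of Algorithm 1 and written in assumed notation ($U$ is the stage cost and $\varepsilon^{(j)}$ the per-iteration approximation error), the iteration can be expressed as

$$v_{k+1}^{(j)} = \arg\min_{v \in \Xi} \hat{Q}^{(j)}(x_{k+1}, s_{k+1}, v),$$
$$\hat{Q}^{(j+1)}(x_k, s_k, v_k) = U(x_k, s_k, v_k) + \hat{Q}^{(j)}\big(x_{k+1}, s_{k+1}, v_{k+1}^{(j)}\big) + \varepsilon^{(j)}(x_k, s_k, v_k).$$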

    C. Implementation

Fig. 1. The structure of the critic network. The LIP NN consists of a basis function layer and an output layer. The basis functions are polynomials of combinations of the system states and reference signals, the number of nodes is determined by trial and error, and the output layer has M nodes.

The output of the critic network is expressed as a weighted sum of the basis functions, so that the weights can be updated at each iteration.
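A minimal sketch of such a linear-in-parameter critic is given below. It is illustrative only: the polynomial basis `poly_basis`, the class name `LIPCritic`, and the choice of degree are assumptions, not the exact basis used in the paper (which is chosen by trial and error).

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_basis(x, s, degree=2):
    """Polynomial basis phi(x, s) built from the augmented vector [x; s].
    The degree and the monomial set are illustrative choices."""
    z = np.concatenate([np.atleast_1d(x), np.atleast_1d(s)])
    feats = []
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(z.size), d):
            feats.append(np.prod(z[list(idx)]))
    return np.array(feats)

class LIPCritic:
    """Linear-in-parameter critic with one output channel (weight vector) per mode."""
    def __init__(self, n_modes, n_features):
        self.W = np.zeros((n_modes, n_features))  # \hat{W}_{c,v}, initialized to zero

    def q_value(self, x, s, v):
        # \hat{Q}(x, s, v) = \hat{W}_{c,v}^T phi(x, s)
        return self.W[v] @ poly_basis(x, s)
```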

Fig. 2. Simplified diagram of the proposed algorithm. This figure shows the weight update process of an arbitrary output channel. The target network shares the same structure and weights as the critic network and computes the minimum value of the Q-function at the next time step. Note that at each iteration step, the weights of all output nodes should be updated.

Another critical problem is the selection of an appropriate termination criterion for the training process. Let the iteration be stopped at the j-th step if the following convergence tolerance is satisfied

where ζ(x,s) is a positive definite function. Once the Q-function $\hat{Q}^{(j)}(x,s,v)$ is obtained, it can be applied to control system (1) by comparing the values of the different modes and selecting the optimal one. The main procedure for implementing the proposed algorithm is given in Algorithm 1. The theoretical analysis of the effect caused by the termination condition is given in the following section.
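The online mode selection described above amounts to a one-step comparison of the learned Q-values; a sketch using the hypothetical `LIPCritic` from the earlier snippet is shown below.

```python
def select_mode(critic, x, s, modes):
    """Apply the trained Q-function: activate the mode with the smallest Q-value."""
    return min(modes, key=lambda v: critic.q_value(x, s, v))
```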

Algorithm 1 Outline of the Implementation of the Proposed Algorithm
Step 1: Initialize the hyper-parameters, including the number of sampled data L and the termination threshold ζ of the training process.
Step 2: Initialize the weight vectors of the critic NN, i.e., $\hat{W}^{(0)}_{c,v} = 0$, $\forall v \in \Xi$.
Step 3: Randomly select a set of sampled data $\{x^{[l]}_k \in \Omega_x,\ s^{[l]}_k \in \Omega_s,\ v^{[l]}_k \in \Xi\}_{l=1}^{L}$, where L is a large positive integer.
Step 4: Obtain $\{x^{[l]}_{k+1}, s^{[l]}_{k+1}\}_{l=1}^{L}$ according to $x^{[l]}_{k+1} = f_{v^{[l]}_k}(x^{[l]}_k)$ and $s^{[l]}_{k+1} = F(s^{[l]}_k)$, respectively.
Step 5: Let j = 0 and start the training process.
Step 6: The active mode at the next time step is selected according to $v^{(j),[l]}_{k+1} = \arg\min_{v \in \Xi} (\hat{W}^{(j)}_{c,v})^T \phi(x^{[l]}_{k+1}, s^{[l]}_{k+1})$.
Step 7: The target values $\hat{Q}^{(j+1)}_{tar}(x_k, s_k, v_k)$ for the critic network are computed according to (22). Then, the weights of the LIP NN are updated by using the LMS method.
Step 8: If $\|\hat{W}^{(j+1)}_{c,v} - \hat{W}^{(j)}_{c,v}\| \le \zeta$, $\forall v \in \Xi$, is satisfied, then proceed to Step 9; otherwise, let j = j + 1 and execute Step 6.
Step 9: Let $\hat{W}^{*}_{c,v} = \hat{W}^{(j)}_{c,v}$, $\forall v \in \Xi$, and stop the iteration process.
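The following Python sketch ties Steps 3-9 together under several assumptions: the sampled transitions are supplied as a list of tuples, the stage cost is provided by a hypothetical `stage_cost(x, s, v)` function, and a batch least-squares fit stands in for the paper's LMS update; `poly_basis`, `LIPCritic`, and the numpy import are reused from the earlier snippet.

```python
def train(critic, data, stage_cost, modes, zeta=1e-4, max_iters=500):
    """Approximate Q-learning iteration (Steps 5-9 of Algorithm 1), as a sketch.
    `data` is a list of offline samples (x_k, s_k, v_k, x_{k+1}, s_{k+1})
    generated beforehand (Steps 3-4)."""
    Phi = np.stack([poly_basis(x, s) for x, s, v, xp, sp in data])
    for _ in range(max_iters):
        W_old = critic.W.copy()
        # Steps 6-7: greedy mode at the next step, then targets
        # Q_tar = U(x_k, s_k, v_k) + min_v Q^{(j)}(x_{k+1}, s_{k+1}, v)
        targets = np.array([
            stage_cost(x, s, v) + min(critic.q_value(xp, sp, vp) for vp in modes)
            for x, s, v, xp, sp in data
        ])
        # Update each output channel on the samples where that mode was active
        # (least squares here replaces the LMS update described in the paper)
        for v in modes:
            rows = [i for i, sample in enumerate(data) if sample[2] == v]
            if rows:
                critic.W[v], *_ = np.linalg.lstsq(Phi[rows], targets[rows], rcond=None)
        # Step 8: convergence tolerance on the weight change
        if np.max(np.abs(critic.W - W_old)) <= zeta:
            break
    return critic
```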

Remark 3: Note that the training process in Algorithm 1 is based entirely on the input and output data of the subsystems. Once the weights of the critic network have converged, the control signal can be derived based only on the current system state and reference signal. In order to achieve competitive performance, the method requires more training data than the model-based and model-free algorithms. However, collecting input and output data is often easier than identifying the model.

    IV. THEORETICAL ANALYSIS

In this section, the effects of the presence of the approximation error and of the termination condition on the convergence and stability properties are analyzed. Before proceeding to the proofs of the theorems, an approximated value function based ADP method is first briefly reviewed [47].

    A. Review of Approximated Value Iteration Algorithm

    B. Continuity Analysis

    C. Convergence Analysis

Next, we prove that, given an upper bound on the approximation error at each iteration, the generated value function sequence remains bounded around the exact optimal solution.

    D. Stability Analysis

    V. SIMULATION RESULTS

In this section, the simulation results of two numerical examples are first presented to illustrate the effectiveness of the proposed method. In addition, a simulation example of an anti-lock brake system (ABS) is included. The simulations are run on a laptop computer with an Intel Core i7 3.2 GHz processor and 16 GB of memory, running macOS 10.13.6 and MATLAB 2018a (single threaded).


Example 1: First, the regulation problem of a simple scalar system with two subsystems is addressed. Specifically, the regulation problem can be regarded as a special case of the tracking problem with a zero reference signal. The system dynamics is described as follows [30]:

    Fig. 3. Evolution of the Critic NN weight elements.

    Fig. 4. State trajectory and switching mode sequence under the proposed method with x0 = 1.5.

After the training process is completed, the system is controlled by the converged policy with the initial state x0 = 1.5. The results are presented in Fig. 4. It is shown that the system switches to the first mode when the state becomes smaller than 1, which corresponds to (41). Moreover, letting the system start from different initial states, e.g., x0 = 1 and x0 = -2, the results are given in Figs. 5 and 6, respectively. It is demonstrated that our method works well for different initial states.

    Fig. 5. State trajectory and switching mode sequence under the proposed method with x0 = 1.

    Fig. 6. State trajectory and switching mode sequence under the proposed method with x0 =-2.

    Example 2: A two-tank device with three different modes is considered. There are three positions of the valve which determine the fluid flow into the upper tank: fully open, half open, and fully closed. The objective is to force the fluid level of the lower tank to track the reference signal. Let the fluid heights in the set-up be denoted by x=[x1,x2]T, where x1and x2denote the fluid levels in the upper and lower tank,respectively. The dynamics of three subsystems are given as follows [49]:

    In addition, the dynamics of the reference command generator is described by

    Fig. 7. Evolution of the critic NN weight elements.

Once the critic network is trained, the policy can be found by simply comparing three scalar values. Selecting the initial states as x0 = [1,1]^T and s0 = 1, the evolution of the states under the obtained switching policy is shown in Fig. 8. It is shown that the fluid height in the lower tank tracks the reference signal well. Furthermore, the results are compared with those of a model-based value iteration algorithm [49]. The trajectories during the interval [200, 300] are highlighted. It is shown that our algorithm achieves the same, if not better, performance without knowing the exact system dynamics. In addition, the values of the performance function (3) obtained by the proposed Q-learning algorithm and the value iteration method are 70.7241 and 72.7583, respectively, which verifies the conclusion.

Fig. 8. State trajectories and switching mode sequence of the Q-learning based and model-based methods with x0 = [1,1]^T and s0 = 1.

In order to test the tracking ability of the proposed algorithm for different time-varying reference signals, the fluid level of the lower tank is forced to track the reference trajectories generated by $\dot{s} = -s^2(t)$, $\dot{s} = -s^3(t)$, and $\dot{s} = -s^4(t)$, respectively. Both the structure of the NNs and the parameters are kept the same as those in the previous paragraph. The state trajectories with the different reference command generators are presented in Fig. 9. The simulation results verify the effectiveness of our algorithm for time-varying reference trajectories.
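Since the learning scheme works with a discrete-time reference command generator $s_{k+1} = F(s_k)$, the continuous-time generators above have to be discretized; the sketch below uses a forward-Euler step, where the sampling period `dt` and the Euler scheme itself are assumptions for illustration.

```python
def make_reference_generator(power, dt=0.1):
    """Forward-Euler discretization of ds/dt = -s^power into s_{k+1} = F(s_k).
    The scheme and the step size dt are illustrative assumptions."""
    def F(s):
        return s + dt * (-(s ** power))
    return F

# e.g., the three generators used in the tracking test
F2, F3, F4 = (make_reference_generator(p) for p in (2, 3, 4))
```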

    Fig. 9. State trajectories with different reference command generators.

The policy obtained after the iteration process is utilized to control the plant with the initial state x0 = [0.8,0.2]^T. Starting from the same state, the open-loop controller is derived according to the algorithm proposed in [12]. The trajectories of the states under these two controllers are presented in Fig. 10. It is clear that the Q-learning controller achieves a more accurate tracking performance. Using the same Q-learning controller and the nonlinear programming based controller, the simulation results with a different initial state, x0 = [0,0]^T, are presented in Fig. 11. This figure illustrates the capability of the proposed method for different initial states.

Example 3: The anti-lock brake system (ABS) is considered to illustrate the potential of the proposed algorithm for real-world applications. In order to eliminate the effect of the large ranges of the state variables, the non-dimensionalised ABS model is described as follows [56]:

    Fig. 10. State trajectories of Q-learning based and nonlinear programming based method with x0=[0.8,0.2]T and the reference signal s(t)=0.5.

    Fig. 11. State trajectories of Q-learning based and nonlinear programming based method with x0=[0,0]T and the reference signal s(t)=0.5.

    Fig. 12. Evolution of the critic NN weight elements.

    Fig. 13. State trajectories and switching mode sequence of Q-learning based method with x0=[0,0.7,0,0]T.

Furthermore, the robustness of the controller is tested with two kinds of uncertainties. First, a random noise signal with a magnitude in the range [-0.1F_f(·), 0.1F_f(·)] is added to the longitudinal force F_f in the ABS model (44). The simulation result is given in Fig. 14. The stopping distance and stopping time are 275.3 m and 6.76 s, respectively, and the modes switch 169 times among the three subsystems. Compared with the noise-free case, the uncertainty leads to an increase of about 0.81% in stopping distance, about 0.75% in stopping time, and 9 additional mode switchings. Specifically, it can be seen in Fig. 14 that at the beginning of the braking process mode 2 is activated to decrease the pressure. This unreasonable decision may be caused by the random noise and leads to the degradation of the performance.
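A minimal sketch of the force perturbation used in this robustness test is shown below; the uniform distribution is an assumption, since only the magnitude range is specified.

```python
import numpy as np

_rng = np.random.default_rng()

def perturb_longitudinal_force(F_f):
    """Add a random disturbance with magnitude in [-0.1*F_f, 0.1*F_f]
    to the (already evaluated) longitudinal force F_f."""
    return F_f + 0.1 * F_f * _rng.uniform(-1.0, 1.0)
```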

    Fig. 14. State trajectories and switching mode sequence of Q-learning based method considering the uncertainty on the longitudinal force.

In addition, the uncertainty of the vehicle mass is considered. During the training process, the input and output data are generated based on (44) with M = 500 kg. Once the policy is trained, it is applied to control the vehicle with M = 600 kg. The simulation result is presented in Fig. 15. The stopping distance and stopping time are 323.9 m and 7.96 s, respectively, and the modes switch 125 times among the three subsystems. It is shown that the performance is degraded compared with the case without uncertainty. However, the controller is still successful in braking the vehicle with an admissible stopping distance, which demonstrates the robustness of the proposed method.

    Fig. 15. State trajectories and switching mode sequence of Q-learning based method considering the uncertainty on the vehicle mass.

    VI. CONCLUSIONS

In this paper, an approximated Q-learning algorithm is developed to find the optimal scheduling policy for autonomous switching systems, with rigorous theoretical analysis. The learning process is based entirely on the input and output data of the system and the reference command generator. The simulation results demonstrate the competitive performance of the proposed algorithm and its potential for complex nonlinear systems. Our future work is to investigate the optimal co-design of control and scheduling policies for controlled switching systems and Markov jump systems. In addition, the effect of employing deep NNs as the value function approximator should be considered. It is also an interesting topic to deal with external disturbances.
