
    Data-Based Optimal Tracking of Autonomous Nonlinear Switching Systems

IEEE/CAA Journal of Automatica Sinica, 2021, Issue 1

    Xiaofeng Li, Lu Dong, Member, IEEE, and Changyin Sun, Senior Member, IEEE

Abstract—In this paper, a data-based scheme is proposed to solve the optimal tracking problem of autonomous nonlinear switching systems. The system state is forced to track the reference signal by minimizing the performance function. First, the problem is transformed into solving the corresponding Bellman optimality equation in terms of the Q-function (also known as the action-value function). Then, an iterative algorithm based on adaptive dynamic programming (ADP) is developed to find the optimal solution entirely from sampled data. A linear-in-parameter (LIP) neural network is taken as the value function approximator. Considering the presence of approximation error at each iteration step, the generated sequence of approximate value functions is proved to remain bounded around the exact optimal solution under some verifiable assumptions. Moreover, the effect of terminating the learning process after a finite number of iterations is investigated. A sufficient condition for asymptotic stability of the tracking error is derived. Finally, the effectiveness of the algorithm is demonstrated with three simulation examples.

    I. INTRODUCTION

THE optimal scheduling of nonlinear switching systems has attracted considerable attention in recent decades. A switching system is a hybrid dynamic system that consists of a family of subsystems with continuous dynamics together with discrete switching events. At each time step only one subsystem is active, so the main issue is to find the optimal policy that determines “when” to switch the mode and “which” mode should be activated [1], [2]. Many complex real-world applications can be described as switching systems, ranging from bioengineering to electronic circuits [3]-[7].

Generally, existing methods for the optimal switching problem can be classified into two categories. Methods in the first category find the switching sequence in a “planning” manner. In [8]-[11], nonlinear-programming-based algorithms are designed to determine the switching instants by using the gradient of the performance function. Note that the sequence of active modes is required to be fixed a priori. In [12], the authors propose a two-stage decision algorithm that allows a free mode sequence by separating the decisions on the active mode and the switching time. On the other hand, discretization-based methods solve the problem by discretizing the state and input spaces into a finite number of options [13]-[16]. However, these planning-based algorithms achieve good performance only for specific initial states. Once the given initial conditions change, a new planning schedule must be made from scratch.

Optimal control is an important topic in modern control theory, which aims to find a stabilizing controller that minimizes the performance function [17]. In recent years, researchers have developed many optimal schemes for practical real-world applications, such as trajectory planning and closed-loop optimal control of cable robots [18]-[20]. Based on the reinforcement learning mechanism, the adaptive dynamic programming (ADP) algorithm was first developed to solve the optimal control problem of discrete-time systems with continuous state space [21], [22]. In parallel, a continuous-time framework was proposed by the group of Frank L. Lewis to extend the application of ADP to continuous-time nonlinear systems [23]-[25]. Two main iterative methods, value iteration (VI) [26] and policy iteration (PI) [27], are employed to solve the Hamilton-Jacobi-Bellman (HJB) equation. The actor-critic (AC) structure is often employed to implement the ADP algorithm with two neural networks (NNs) [28]. The critic network takes system states as input and outputs the estimated value function, while the actor network approximates the mapping between states and control input [29].

According to the requirement on system dynamics, the family of ADP algorithms can be divided into three main categories: model-based methods, model-free methods, and data-based methods. Model-based ADP algorithms require exact knowledge of the plant dynamics [26], [27], [30]. A monotone non-decreasing or non-increasing sequence of value functions is generated by the VI or PI based algorithm, and it converges to the optimal solution. In model-free algorithms, the system model is first identified, e.g., by using a neural network (NN) or a fuzzy system, and the iterations are then performed on the approximated model [31], [32]. It is worth noting that the presence of identification error may lead to sub-optimality of the learned policy. In contrast to the above two approaches, data-based ADP methods rely entirely on input and output data [33]-[37]. The objective is to solve the Q-function based optimal Bellman equation so that the optimal controller can be obtained without knowing the system dynamics. Recently, the combination of the ADP method with the event-trigger mechanism has been investigated, which substantially reduces the number of control updates without degrading performance [38]-[41]. Considering uncertainty in the system dynamics, robust ADP algorithms have been proposed to find the optimal controller for practical applications [42], [43]. In addition, many practical applications have been solved successfully by using the ADP method [44]-[46].

As a powerful method for solving the HJB equation, ADP has been applied to the optimal control of switching systems in recent years. In [30], the optimal switching problem of autonomous subsystems is solved by an ADP based method in a backward fashion. In addition, the minimum dwell time constraint between different modes is considered in [47]. The feedback solution is obtained by learning the optimal value function with respect to augmented states that include the system state, the currently active subsystem, and the elapsed time in a given mode. In order to reduce the switching frequency, a switching cost is incorporated into the performance function [48]. In [49], the optimal tracking problem with an infinite-horizon performance function is investigated by learning the mapping between the optimal value function and the switching instants. For continuous-time autonomous switching systems, a PI based learning scheme is proposed with consideration of the effect of approximation error on the closed-loop behavior [50]. Moreover, the problem of controlled switching nonlinear systems is addressed by co-designing the control signal and the switching instants. In [51], the authors develop a VI based algorithm for solving the switching problem. Since a fixed-horizon performance function is considered, the optimal hybrid policy is obtained backward-in-time. In [52], the optimal control and triggering of a networked control system is first transformed into an augmented switching system, and an ADP based algorithm is then proposed to solve the problems with zero-order hold (ZOH), generalized ZOH, finite-horizon, and infinite-horizon performance functions. These methods provide closed-form solutions that work over a large domain of initial states. However, it is worth noting that accurate system dynamics are required to implement the existing algorithms, which are difficult to obtain for complex nonlinear systems. In addition, the effect of the approximation error incurred by employing an NN as the value function approximator is often ignored in the previous literature.

In this paper, a data-based algorithm is first proposed to address the optimal switching problem of autonomous subsystems. Instead of requiring the system model, only input and output data are needed to learn the switching policy. Furthermore, two realistic issues are considered. On the one hand, the effect of the approximation errors between the outputs of an NN and the true target values is investigated. On the other hand, a sufficient condition is derived to guarantee the stability of the tracking error when only a finite number of iterations is performed. In addition, a critic-only structure is utilized for implementing the algorithm. The main contributions of this paper are listed as follows. First, the problem is transformed into solving the Q-function based Bellman optimality equation, which enables us to derive a data-based algorithm. Second, considering the approximation errors, an approximate Q-learning based algorithm is proposed for learning the optimal switching policy. Finally, theoretical analysis of the continuity of the Q-functions, the boundedness of the generated value function sequence, and the stability of the system is presented. Since [50]-[52] are all model-based methods, the completely “model-free” character of the proposed algorithm demonstrates its potential for complex nonlinear systems.

The rest of this paper is organized as follows. Section II presents the problem formulation. In Section III, the exact Q-learning algorithm is proposed; then, the approximate method is derived considering the approximation error and a finite number of iterations. In addition, a linear-in-parameter (LIP) NN is utilized for implementing the algorithm, whose weights are updated by the least-mean-square (LMS) method. In Section IV, the theoretical analysis is given. Afterwards, three simulation examples are given in Section V, and the simulation results demonstrate the potential of the proposed method. Finally, conclusions are drawn in Section VI.

    II. PROBLEM FORMULATION

Hence, the tracking problem is transformed into finding the optimal Q-function. In the next section, an iterative Q-learning based algorithm is developed. In addition, the effects of the presence of approximation error as well as the termination condition of the iterations are considered.
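Since the detailed formulation is not reproduced in this extract, the key relation used in the remainder of the paper is summarized here only as a hedged sketch. Assuming the notation that appears later in Algorithm 1, namely subsystem dynamics x_{k+1} = f_{v_k}(x_k) with mode set Ξ, reference generator s_{k+1} = F(s_k), and a stage cost U(x_k, s_k, v_k) built from the tracking error (the exact form of the performance function (3) is not shown here), the optimal Q-function satisfies a Bellman optimality equation of the form

```latex
Q^{*}(x_k, s_k, v_k) = U(x_k, s_k, v_k)
    + \min_{v \in \Xi} Q^{*}\bigl(f_{v_k}(x_k),\, F(s_k),\, v\bigr),
\qquad
v_k^{*} = \arg\min_{v \in \Xi} Q^{*}(x_k, s_k, v),
```

where the greedy minimization on the right recovers the optimal switching policy once Q* is known.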

    III. PROPOSED ALGORITHM AND ITS IMPLEMENTATION

    A. Exact Q-Learning Algorithm

It is worth noting that the convergence, optimality, and stability properties of the exact Q-learning algorithm are established under several ideal assumptions. On the one hand, exact reconstruction of the target value function (15) is difficult when using value function approximators, except for some simple linear systems. On the other hand, an infinite number of iterations is theoretically required to obtain the optimal Q-function. In the following subsection, these two realistic issues are considered and the approximate Q-learning algorithm is developed.

    B. Approximated Q-Learning Algorithm

The approximate Q-learning method is obtained by extending the exact Q-learning algorithm. The algorithm starts from a zero initial Q-function, i.e., Q̂^(0) = 0. Afterwards, taking the approximation error into account, the algorithm iterates between a greedy mode-selection step and a Q-function update step.
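The iteration equations themselves are not reproduced in this extract. A sketch consistent with the surrounding description, namely the exact Q-learning recursion plus a per-iteration approximation error term ε^(j), reusing the assumed notation from Section II, is

```latex
\begin{aligned}
v^{(j)}(x_{k+1}, s_{k+1}) &= \arg\min_{v \in \Xi} \hat{Q}^{(j)}(x_{k+1}, s_{k+1}, v), \\
\hat{Q}^{(j+1)}(x_k, s_k, v_k) &= U(x_k, s_k, v_k)
    + \hat{Q}^{(j)}\bigl(x_{k+1}, s_{k+1}, v^{(j)}(x_{k+1}, s_{k+1})\bigr)
    + \varepsilon^{(j)}(x_k, s_k, v_k),
\end{aligned}
```

where x_{k+1} = f_{v_k}(x_k), s_{k+1} = F(s_k), and ε^(j) absorbs the mismatch between the NN output and the exact target at iteration j.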

    C. Implementation

Fig. 1. The structure of the critic network. The LIP NN consists of a basis function layer and an output layer. The basis functions are polynomials of combinations of the system states and reference signals; the number of basis nodes is determined by trial and error, and the output layer has M nodes.
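As a concrete illustration of such a critic (a sketch, not the implementation used in the paper), the following Python snippet builds a polynomial basis over the stacked state and reference vector and keeps one linear weight vector per mode; the polynomial degree and the helper names are placeholder choices.

```python
import numpy as np
from itertools import combinations_with_replacement


def polynomial_basis(x, s, degree=2):
    """Polynomial basis phi(x, s): all monomials of the stacked vector
    z = [x; s] up to the given degree (the constant term is omitted)."""
    z = np.concatenate([np.atleast_1d(x), np.atleast_1d(s)])
    feats = []
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(len(z)), d):
            feats.append(np.prod(z[list(idx)]))
    return np.asarray(feats)


class LIPCritic:
    """Linear-in-parameter critic: Q_hat(x, s, v) = W_v^T phi(x, s),
    with one weight vector (output node) per mode."""

    def __init__(self, n_features, n_modes, degree=2):
        self.degree = degree
        self.W = np.zeros((n_modes, n_features))  # zero initial Q-function

    def q_value(self, x, s, v):
        return float(self.W[v] @ polynomial_basis(x, s, self.degree))

    def greedy_mode(self, x, s):
        phi = polynomial_basis(x, s, self.degree)
        return int(np.argmin(self.W @ phi))
```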

The output of the critic network can thus be expressed as a linear combination of the basis functions, so the weight vector of each output channel can be updated at each iteration.

Fig. 2. Simple diagram of the proposed algorithm. This figure shows the weight update process of an arbitrary output channel. The target network shares the same structure and weights as the critic network and computes the minimum Q-function value at the next time step. Note that at each iteration step, the weights of all output nodes should be updated.
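A minimal sketch of the per-iteration update depicted in Fig. 2, reusing the hypothetical `LIPCritic`/`polynomial_basis` helpers above: for each sampled transition, the target is the stage cost plus the minimum next-step Q-value evaluated with a frozen copy of the weights (the target network), and the weights of the corresponding output channel are then refitted by batch least squares, which plays the role of running the LMS update to convergence. The stage-cost function `utility` is a placeholder for the paper's utility term.

```python
import numpy as np


def update_channel(critic, target_W, samples, v, utility, degree=2):
    """One critic-update iteration for output channel v (Steps 6-7 of Algorithm 1).

    samples:  list of transitions (x_k, s_k, x_next, s_next) generated with mode v.
    target_W: frozen weight matrix used to evaluate min_v' Q_hat(x_next, s_next, v').
    """
    Phi, targets = [], []
    for x_k, s_k, x_next, s_next in samples:
        phi_next = polynomial_basis(x_next, s_next, degree)
        q_next = float(np.min(target_W @ phi_next))    # greedy next-step value
        targets.append(utility(x_k, s_k, v) + q_next)  # Bellman target for this sample
        Phi.append(polynomial_basis(x_k, s_k, degree))
    Phi, targets = np.asarray(Phi), np.asarray(targets)
    # Batch least-squares fit of W_v (the limit of iterating the LMS rule on this batch).
    critic.W[v], *_ = np.linalg.lstsq(Phi, targets, rcond=None)
```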

Another critical problem is to select an appropriate termination criterion for the training process. Let the iteration be stopped at the j-th step if the following convergence tolerance is satisfied
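The inequality itself is not reproduced in this extract; a plausible form, assuming the tolerance is imposed pointwise on successive Q-function iterates at the sampled state and reference pairs (an assumption, not the paper's exact expression), is

```latex
\bigl|\hat{Q}^{(j)}(x, s, v) - \hat{Q}^{(j-1)}(x, s, v)\bigr| \le \zeta(x, s),
\qquad \forall v \in \Xi,
```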

where ζ(x,s) is a positive definite function. Once the Q-function Q̂^(j)(x,s,v) is obtained, it can be applied to control system (1) by comparing the values of the different modes and selecting the optimal one. The main procedure for implementing the proposed algorithm is given in Algorithm 1. The theoretical analysis of the effect of the termination condition is given in the following section.

Algorithm 1 Outline of the Implementation of the Proposed Algorithm
Step 1: Initialize the hyper-parameters, including the number of sampled data L and the termination tolerance ζ of the training process.
Step 2: Initialize the weight vectors of the critic NN, i.e., Ŵ^(0)_{c,v} = 0, ∀v ∈ Ξ.
Step 3: Randomly select a set of sample data {x[l]_k ∈ Ω_x, s[l]_k ∈ Ω_s, v[l]_k ∈ Ξ}, l = 1, ..., L, where L is a large positive integer.
Step 4: Obtain {x[l]_{k+1}, s[l]_{k+1}}, l = 1, ..., L, according to x[l]_{k+1} = f_{v[l]_k}(x[l]_k) and s[l]_{k+1} = F(s[l]_k), respectively.
Step 5: Let j = 0 and start the training process.
Step 6: Select the active mode at the next time step according to v^(j),[l]_{k+1} = argmin_{v∈Ξ} (Ŵ^(j)_{c,v})^T φ(x[l]_{k+1}, s[l]_{k+1}).
Step 7: Compute the target values Q̂^(j+1)_{tar}(x_k, s_k, v_k) for the critic network according to (22); then update the weights of the LIP NN by using the LMS method.
Step 8: If ‖Ŵ^(j+1)_{c,v} − Ŵ^(j)_{c,v}‖ ≤ ζ, ∀v ∈ Ξ, is satisfied, proceed to Step 9; otherwise, let j = j + 1 and return to Step 6.
Step 9: Let W*_{c,v} = Ŵ^(j)_{c,v}, ∀v ∈ Ξ, and stop the iteration process.
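Putting the pieces together, the following sketch mirrors the steps of Algorithm 1 using the hypothetical helpers defined above; the data generation, the tolerance ζ, and the iteration cap are illustrative choices, not the paper's settings.

```python
import numpy as np


def train_q_learning(modes, subsystems, ref_step, utility, sample_states,
                     degree=2, zeta=1e-4, max_iters=200):
    """Hypothetical end-to-end sketch mirroring Algorithm 1.

    modes:         list of mode indices (the set Xi).
    subsystems:    dict mapping each mode v to its dynamics f_v(x).
    ref_step:      reference generator s_{k+1} = F(s_k).
    utility:       stage cost U(x, s, v).
    sample_states: list of sampled (x, s) pairs (Step 3).
    """
    x0, s0 = sample_states[0]
    n_features = len(polynomial_basis(x0, s0, degree))
    critic = LIPCritic(n_features, len(modes), degree)    # Steps 1-2: zero weights
    # Step 4: one transition per sampled (x, s) pair and per mode.
    data = {v: [(x, s, subsystems[v](x), ref_step(s)) for x, s in sample_states]
            for v in modes}
    for j in range(max_iters):                            # Step 5: j = 0, 1, ...
        W_old = critic.W.copy()                           # frozen target network
        for v in modes:                                   # Steps 6-7: targets + fit
            update_channel(critic, W_old, data[v], v, utility, degree)
        # Step 8: stop once every channel's weight change is within the tolerance.
        if max(np.linalg.norm(critic.W[v] - W_old[v]) for v in modes) <= zeta:
            break
    return critic                                         # Step 9: converged weights
```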

Remark 3: Note that the training process in Algorithm 1 relies entirely on the input and output data of the subsystems. Once the weights of the critic network have converged, the control signal can be derived based only on the current system state and reference signal. In order to achieve competitive performance, the method requires more training data than model-based and model-free algorithms. However, collecting input and output data is often easier than identifying the model.

    IV. THEORETICAL ANALYSIS

In this section, the effects of the presence of approximation error and of the termination condition on the convergence and stability properties are analyzed. Before proceeding to the proofs of the theorems, an approximated value function based ADP method is first briefly reviewed [47].

    A. Review of Approximated Value Iteration Algorithm

    B. Continuity Analysis

    C. Convergence Analysis

Next, we prove that, given an upper bound on the approximation error at each iteration, the generated value function sequence remains bounded around the exact optimal solution.

    D. Stability Analysis

    V. SIMULATION RESULTS

In this section, the simulation results of two numerical examples are first presented to illustrate the effectiveness of the proposed method. In addition, a simulation example of an anti-lock braking system (ABS) is included. The simulations are run on a laptop computer with an Intel Core i7 3.2 GHz processor and 16 GB of memory, running macOS 10.13.6 and MATLAB 2018a (single-threaded).


Example 1: First, the regulation problem of a simple scalar system with two subsystems is addressed. The regulation problem can be regarded as a special case of the tracking problem with a zero reference signal. The system dynamics are described as follows [30]:

Fig. 3. Evolution of the critic NN weight elements.

    Fig. 4. State trajectory and switching mode sequence under the proposed method with x0 = 1.5.

After the training process is completed, the system is controlled by the converged policy with the initial state x0 = 1.5. The results are presented in Fig. 4. It is shown that the system switches to the first mode when the state becomes smaller than 1, which corresponds to (41). Moreover, letting the system start from different initial states, e.g., x0 = 1 and x0 = -2, the results are given in Figs. 5 and 6, respectively. They demonstrate that our method works well for different initial states.

    Fig. 5. State trajectory and switching mode sequence under the proposed method with x0 = 1.

    Fig. 6. State trajectory and switching mode sequence under the proposed method with x0 =-2.

Example 2: A two-tank device with three different modes is considered. There are three positions of the valve that determine the fluid flow into the upper tank: fully open, half open, and fully closed. The objective is to force the fluid level of the lower tank to track the reference signal. Let the fluid heights be denoted by x = [x1, x2]^T, where x1 and x2 denote the fluid levels in the upper and lower tanks, respectively. The dynamics of the three subsystems are given as follows [49]:

    In addition, the dynamics of the reference command generator is described by
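Neither the subsystem dynamics from [49] nor the reference generator equation is reproduced in this extract. Purely as an illustrative stand-in (not the model used in the paper), a generic Torricelli-style two-tank with the three valve openings, together with the Euler discretization of one of the reference generators mentioned later in this example (ṡ = -s^2), could look as follows:

```python
import numpy as np

# Illustrative stand-in only: a generic Torricelli-style two-tank model with the
# three valve positions described above (fully open, half open, fully closed).
# All coefficients are placeholders, not the parameters of [49].
VALVE_OPENING = {0: 1.0, 1: 0.5, 2: 0.0}   # mode -> fraction of maximum inflow
C1, C2, Q_MAX, DT = 0.5, 0.5, 1.0, 0.1     # outflow coefficients, max inflow, step size


def tank_step(x, v):
    """One Euler step of the switched two-tank dynamics; x = [x1, x2], mode v."""
    x1, x2 = x
    dx1 = VALVE_OPENING[v] * Q_MAX - C1 * np.sqrt(max(x1, 0.0))
    dx2 = C1 * np.sqrt(max(x1, 0.0)) - C2 * np.sqrt(max(x2, 0.0))
    return np.array([x1 + DT * dx1, x2 + DT * dx2])


def ref_step(s):
    """Euler discretization of one reference generator mentioned later in this
    example (s_dot = -s^2); the generator used for Figs. 7 and 8 is not
    reproduced in this extract."""
    return s + DT * (-s ** 2)
```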

    Fig. 7. Evolution of the critic NN weight elements.

Once the critic network is trained, the policy can be found by simply comparing three scalar values. Selecting the initial states as x0 = [1, 1]^T and s0 = 1, the evolution of the states under the obtained switching policy is shown in Fig. 8. It is shown that the fluid height in the lower tank tracks the reference signal well. Furthermore, the results are compared with those of a model-based value iteration algorithm [49]. The trajectories during the interval [200, 300] are highlighted. It is shown that our algorithm achieves the same, if not better, performance without knowing the exact system dynamics. In addition, the values of the performance function (3) obtained by the proposed Q-learning algorithm and the value iteration method are 70.7241 and 72.7583, respectively, which verifies this conclusion.

Fig. 8. State trajectories and switching mode sequence of the Q-learning based and model based methods with x0 = [1,1]^T and s0 = 1.

In order to test the tracking ability of the proposed algorithm for different time-varying reference signals, the fluid level of the lower tank is forced to track the reference trajectories generated by ṡ = -s^2(t), ṡ = -s^3(t), and ṡ = -s^4(t), respectively. Both the structure of the NNs and the parameters are kept the same as in the previous paragraph. The state trajectories with the different reference command generators are presented in Fig. 9. The simulation results verify the effectiveness of our algorithm for time-varying reference trajectories.

    Fig. 9. State trajectories with different reference command generators.

The policy obtained after the iteration process is utilized to control the plant with the initial state x0 = [0, 0]^T. Starting from the same state, an open-loop controller is derived according to the algorithm proposed in [12]. The trajectories of the states under these two controllers are presented in Fig. 10. It is clear that the Q-learning controller achieves more accurate tracking performance. Using the same Q-learning controller and the nonlinear programming based controller, the simulation results for a different initial state are presented in Fig. 11. This figure illustrates the capability of the proposed method for different initial states.

Example 3: The anti-lock braking system (ABS) is considered to illustrate the potential of the proposed algorithm for real-world applications. In order to eliminate the effect of the large ranges of the state variables, the non-dimensionalized ABS model is described as follows [56]:

    Fig. 10. State trajectories of Q-learning based and nonlinear programming based method with x0=[0.8,0.2]T and the reference signal s(t)=0.5.

    Fig. 11. State trajectories of Q-learning based and nonlinear programming based method with x0=[0,0]T and the reference signal s(t)=0.5.

    Fig. 12. Evolution of the critic NN weight elements.

    Fig. 13. State trajectories and switching mode sequence of Q-learning based method with x0=[0,0.7,0,0]T.

Furthermore, the robustness of the controller is tested under two kinds of uncertainty. First, a random noise signal with magnitude in the range [-0.1Ff(·), 0.1Ff(·)] is added to the longitudinal force Ff in the ABS model (44). The simulation result is given in Fig. 14. The stopping distance and stopping time are 275.3 m and 6.76 s, respectively, and the number of switches between the three subsystems is 169. Compared with the noise-free case, the uncertainty leads to an increase of about 0.81% in stopping distance, 0.75% in stopping time, and 9 additional mode switches. Specifically, it can be seen in Fig. 14 that at the beginning of the braking process mode 2 is activated to decrease the pressure. This unreasonable decision may be caused by the random noise and leads to the degradation of performance.

    Fig. 14. State trajectories and switching mode sequence of Q-learning based method considering the uncertainty on the longitudinal force.

In addition, uncertainty in the vehicle mass is considered. During the training process, the input and output data are generated based on (44) with M = 500 kg. Once the policy is trained, it is applied to control the vehicle with M = 600 kg. The simulation result is presented in Fig. 15. The stopping distance and stopping time are 323.9 m and 7.96 s, respectively, and the number of switches between the three subsystems is 125. The performance is degraded compared with the case without uncertainty. However, the controller still brakes the vehicle within an admissible stopping distance, which demonstrates the robustness of the learned policy.

    Fig. 15. State trajectories and switching mode sequence of Q-learning based method considering the uncertainty on the vehicle mass.

    VI. CONCLUSIONS

    In this paper, an approximated Q-learning algorithm is developed to find the optimal scheduling policy for autonomous switching systems with rigorous theoretical analysis. The learning process is totally based on the input and output data of the system and the reference command generator. The simulation results demonstrate the competitive performance of the proposed algorithm and its potential for complex nonlinear systems. Our future work is to investigate the optimal co-design of control and scheduling policies for controlled switching systems and Markov jump systems. In addition, the effect of employing deep NNs as value function approximator should be considered. It is also an interesting topic to deal with external disturbances.
