
    Adaptive Linear Quadratic Regulator for Continuous-Time Systems With Uncertain Dynamics

Sumit Kumar Jha and Shubhendu Bhasin

IEEE/CAA Journal of Automatica Sinica, 2020, Issue 3

Abstract—In this paper, an adaptive linear quadratic regulator (LQR) is proposed for continuous-time systems with uncertain dynamics. The dynamic state-feedback controller uses input-output data along the system trajectory to continuously adapt and converge to the optimal controller. The result differs from previous results in that the adaptive optimal controller is designed without knowledge of the system dynamics or an initial stabilizing policy. Further, the controller is updated continuously using input-output data, as opposed to the commonly used switched/intermittent updates, which can potentially lead to stability issues. An online state derivative estimator facilitates the design of a model-free controller. Gradient-based update laws are developed for online estimation of the optimal gain. Uniform exponential stability of the closed-loop system is established using a Lyapunov-based analysis, and a simulation example is provided to validate the theoretical contribution.

I. INTRODUCTION

THE development of the infinite-horizon linear quadratic regulator (LQR) [1] has been one of the most important contributions to linear optimal control theory. The optimal control law for the LQR problem is expressed in state-feedback form, where the optimal gain is obtained from the solution of a nonlinear matrix equation, the algebraic Riccati equation (ARE). The solution of the ARE requires exact knowledge of the system matrices and is typically found offline, a major impediment to online real-time control.

Recent research has focused on solving the optimal control problem using iterative, data-driven algorithms which can be implemented online and require minimal knowledge of the system dynamics [2]–[15]. In [2], Kleinman proposed a computationally efficient procedure for solving the ARE by iterating on the solution of a linear Lyapunov equation, with proven convergence to the optimal policy from any initial stabilizing gain. The Newton-Kleinman algorithm [2], although offline and model-based, paved the way for a class of reinforcement learning (RL)/approximate dynamic programming (ADP)-based algorithms which utilize data along the system trajectory to learn the optimal policy [4], [7], [10], [16]–[18]. Strong connections between RL/ADP and optimal control have been established [19]–[23], and several RL algorithms, including policy iteration (PI), value iteration (VI) and Q-learning, have been adapted for optimal control problems [4], [7]–[9], [13], [22], [24]. Initial research on adaptive optimal control was mostly concentrated in the discrete-time domain due to the recursive nature of RL/ADP algorithms. An important contribution in [4] is the development of a model-free PI algorithm using Q-functions for discrete-time adaptive linear quadratic control. The iterative RL/ADP algorithms have since been applied to various discrete-time optimal control problems [25]–[27].
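For reference, the Newton-Kleinman iteration of [2] can be sketched in a few lines. The following Python snippet is only a minimal illustration of that model-based, offline baseline (not the algorithm proposed in this paper): it assumes A and B are known, K0 is stabilizing, and SciPy's Lyapunov solver is available.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def kleinman_newton(A, B, Q, R, K0, iters=20):
    """Newton-Kleinman iteration: starting from a stabilizing gain K0,
    alternately solve a linear Lyapunov equation for the cost matrix P
    and improve the gain; P converges to the ARE solution P*."""
    K = K0
    for _ in range(iters):
        Ac = A - B @ K  # closed-loop matrix, assumed Hurwitz at every step
        # Solve Ac^T P + P Ac = -(Q + K^T R K) for P.
        P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
        K = np.linalg.solve(R, B.T @ P)  # policy improvement: K = R^{-1} B^T P
    return P, K
```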

Extension to continuous-time systems entails challenges in controller development and convergence/stability proofs. One of the first adaptive optimal controllers for continuous-time systems is proposed in [17], where a model-based algorithm is designed using a continuous-time version of the temporal difference (TD) error. Model-free RL algorithms for continuous-time systems are proposed in [22], which require measurement of the state derivatives. In Chapter 7 of [3], an indirect adaptive optimal linear quadratic (ALQ) controller is proposed, where the unknown system parameters are identified using an online adaptive update law, and the ARE is solved at every time instant using the current parameter estimates. However, the algorithm may become computationally prohibitive for higher dimensional systems, owing to the need to solve the ARE at every time instant. More recently, partially model-free PI algorithms are developed in [7], [24] for linear systems with unknown internal dynamics. In [9], [10], the idea in [7] is extended to adaptive optimal control of linear systems with completely unknown dynamics. In another significant contribution [6], connections between Q-learning and Pontryagin's minimum principle are established, based on which an off-policy control algorithm is proposed.

A common feature of RL algorithms adapted for continuous-time systems is the requirement of an initial stabilizing policy [7], [9], [10], [18], [24], and a batch least-squares estimation algorithm leading to intermittent updates of the control policy [7], [9]. Finding an initial stabilizing policy for systems with unknown dynamics may not always be possible. Further, the intermittent control policy updates in [7], [9], [18] render the control law discontinuous, potentially leading to challenges in proving stability. Moreover, many adaptive optimal control algorithms require the implementation of delayed-window integrals to construct the regressor/design update laws [5], [7], [9], [14], and an "intelligent" data storage mechanism (a procedure for populating an independent set of data) [5], [7], [9], [10] to satisfy an underlying full-rank condition. The computation of delayed-window integrals of functions of the states requires past data storage for the time interval [t − T, t], ∀t > 0, where t and T are the current time instant and the window length, respectively, which demands significant memory, especially for large scale systems.

Recent works in [8], [11], [13] have cast the continuous-time RL problem in an adaptive control framework with continuous policy updates, without the need for an initial stabilizing policy. However, for continuous-time RL, it is not straightforward to develop a fixed-point equation for the parameter update which is independent of knowledge of the system dynamics and state derivatives. A synchronous PI algorithm for known system dynamics is developed in [8], which is extended to a partially model-free method using a novel actor-critic-identifier architecture [11]. For input-constrained systems with completely unknown dynamics, a PI and neural network (NN) based adaptive control algorithm is proposed in [13]. However, the work in [13] utilizes past stored data along with the current data for the identifier design, while guaranteeing bounded convergence of the critic weight estimation error for bounded NN reconstruction error.

The contribution of this paper is the design of a continuous-time adaptive LQR with a time-varying state-feedback gain, which is shown to exponentially converge to the optimal gain. The novelty of the proposed result lies in the computationally and memory efficient algorithm used to solve the optimal control problem for uncertain dynamics, without requiring an initial stabilizing control policy, unlike previous results which either use an initial stabilizing control policy and a switched policy update [5], [7], [9], [10], past data storage [5], [7], [9], [10], [28], [29], or memory-intensive delayed-window integrals [5], [7], [9], [14]. The result in this paper is facilitated by the development of a fixed-point equation which is independent of the system matrices, and by the design of a state derivative estimator. A gradient-based update law is devised for online adaptation of the state-feedback gain, and convergence to the optimal gain is shown, provided a uniform persistence of excitation (u-PE) condition [30], [31] on the state-dependent regressor is satisfied. The u-PE condition, although restrictive in its verification and implementation, establishes the theoretical requirements for convergence of the adaptive linear quadratic controller proposed in the paper. A Lyapunov analysis is used to prove uniform exponential stability of the overall system.

This paper is organized as follows. Section II discusses the primary concepts of linear optimal control, the problem formulation, and subsequently the general methodology. The proposed model-free adaptive optimal control design, along with the state derivative estimator, is described in Section III. Convergence and exponential stability of the proposed result are shown in Section IV. Finally, an illustrative example is given in Section V.

Notations: Throughout this paper, R is used to denote the set of real numbers. The operator ∥·∥ designates the Euclidean norm for vectors and the induced norm for matrices. The symbol ⊗ denotes the Kronecker product operator, and vec(Z) ∈ R^{qr} denotes the vectorization of the argument matrix Z ∈ R^{q×r}, obtained by stacking the columns of the argument matrix on top of one another. The operators λmin(·) and λmax(·) denote the minimum and maximum eigenvalues of the argument matrix, respectively. The symbol B_d denotes the open ball B_d = {z ∈ R^{n(n+m)} : ∥z∥ < d}. The following properties of the vec operator are used:

1) vec(DEF) = (F^T ⊗ D) vec(E), where the matrix multiplication (DEF) is defined.

2) vec(D + E + F) = vec(D) + vec(E) + vec(F), where the matrix summation (D + E + F) is defined.

The identity a^T D b = (b ⊗ a)^T vec(D), where a, b are vectors, D is a matrix and the multiplication (a^T D b) is defined, has also been used.
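These identities are easy to verify numerically. The following Python snippet (with arbitrary illustrative dimensions) checks property 1) and the a^T D b identity, using NumPy's column-major vectorization:

```python
import numpy as np

rng = np.random.default_rng(0)
D, E, F = rng.random((3, 4)), rng.random((4, 5)), rng.random((5, 2))
a, b, M = rng.random(3), rng.random(4), rng.random((3, 4))

vec = lambda Z: Z.flatten(order="F")  # stack columns on top of one another

# Property 1): vec(DEF) = (F^T kron D) vec(E)
assert np.allclose(vec(D @ E @ F), np.kron(F.T, D) @ vec(E))

# Identity: a^T M b = (b kron a)^T vec(M)
assert np.allclose(a @ M @ b, np.kron(b, a) @ vec(M))
```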

II. PRELIMINARIES AND PROBLEM FORMULATION

Consider a continuous-time deterministic LTI system given as

ẋ(t) = Ax(t) + Bu(t)    (1)

where x(t) ∈ R^n denotes the state and u(t) ∈ R^m denotes the control input. A ∈ R^{n×n} and B ∈ R^{n×m} are constant unknown matrices, and (A, B) is assumed to be controllable.

The infinite-horizon quadratic value function can be defined as the total cost starting from the state x(t) and following a fixed control policy u(·) from time t onwards

V(x(t)) = ∫_t^∞ (x(τ)^T Q x(τ) + u(τ)^T R u(τ)) dτ    (2)

where Q ∈ R^{n×n} is symmetric positive semi-definite with (Q, A) being observable, and R ∈ R^{m×m} is a positive definite matrix.

When A and B are accurately known, the standard LQR problem is to find the optimal policy by minimizing the value function (2) with respect to the policy u, which yields

u*(t) = −K* x(t)    (3)

where K* = R^{-1} B^T P* ∈ R^{m×n} is the optimal control gain matrix and P* ∈ R^{n×n} is the constant positive definite matrix solution of the ARE [32]

A^T P* + P* A + Q − P* B R^{-1} B^T P* = 0.    (4)

Remark 1: Solving the ARE for P* clearly requires knowledge of the system matrices A and B; in the case where information about A and B is unavailable, it is challenging to determine P* and K* online.
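When A and B are known, the offline baseline of Remark 1 reduces to a one-line ARE solve. A minimal Python sketch, using a hypothetical second-order plant purely for illustration:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical plant, for illustration only; the paper's setting assumes
# A and B are unknown, which is exactly what makes this route unavailable.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

P_star = solve_continuous_are(A, B, Q, R)   # solves the ARE (4)
K_star = np.linalg.solve(R, B.T @ P_star)   # K* = R^{-1} B^T P* as in (3)
```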

    The following assumptions are required to facilitate the subsequent design.

Assumption 1: The optimal Riccati matrix P* is upper bounded as ∥P*∥ ≤ α1, where α1 is a known positive scalar constant.

Assumption 2: The optimal gain matrix K* is upper bounded as ∥K*∥ ≤ α2, where α2 is a known positive scalar constant.

For the linear system in (1), the optimal value function can be written as the quadratic function [33]

V*(x) = x^T P* x.    (5)

To facilitate the development of the model-free LQR, differentiate (5) with respect to time and use the system dynamics (1) to obtain

V̇*(x) = x^T (A^T P* + P* A) x + 2 u^T B^T P* x.    (6)

Using (4), (6) reduces to

V̇*(x) = x^T (K*^T R K* − Q) x + 2 u^T R K* x.    (7)

The LHS of (7) can be written as ẋ^T P* x + x^T P* ẋ by considering (5), which is then substituted in (7) to obtain

ẋ^T P* x + x^T P* ẋ = x^T (K*^T R K* − Q) x + 2 u^T R K* x.    (8)

The expression in (8) acts as the fixed-point equation used to define D ∈ R as the difference between the LHS and RHS of (8)

D = ẋ^T P* x + x^T P* ẋ − x^T (K*^T R K* − Q) x − 2 u^T R K* x.    (9)

Remark 2: The motivation behind the formulation of (9) is to represent the fixed-point equation in a model-free way, without using memory-intensive delayed-window integrals, and subsequently to design a parameter estimation algorithm to learn P* and K* without knowledge of the system matrices A and B. Note that every term in (9) can be evaluated from the measured state and input together with the state derivative; A and B do not appear explicitly.

III. OPTIMAL CONTROL DESIGN FOR COMPLETELY UNKNOWN LTI SYSTEMS

In (9), P* and K* are unknown parameter matrices, and the objective is to estimate these parameters using gradient-based update laws.

The gradient-based update laws are developed to minimize the squared error Ξ ∈ R defined as Ξ = E²/2, where the Bellman-type error E is the value of D in (9) evaluated with the current parameter estimates and the state derivative estimate. The update laws for the parameters to be estimated are given by

where ν ∈ R+ and νk ∈ R+ are adaptation gains. Substituting the gradients of Ξ with respect to the estimates P̂ and K̂, the normalized update laws are given as

The continuous policy update is given as

u(t) = −K̂(t) x(t)    (14)

where K̂(t) ∈ R^{m×n} denotes the online estimate of the optimal gain K*.
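To make the structure of such a normalized gradient update concrete, the following Python sketch implements one Euler step of a generic law of the assumed form vec(K̂)˙ = −νk φk E / (1 + ηk φk^T φk); the regressor φk and the error E are taken as given, and the exact expressions of (11)-(13) are not reproduced here.

```python
import numpy as np

def normalized_gradient_step(K_hat_vec, phi_k, E, dt, nu_k=55.0, eta_k=5.0):
    """One Euler step of a normalized gradient update law that descends
    Xi = E^2/2.  The normalization 1 + eta_k*||phi_k||^2 keeps the
    effective adaptation rate bounded for large regressors (cf. Remark 7)."""
    norm = 1.0 + eta_k * float(phi_k @ phi_k)
    K_hat_vec_dot = -nu_k * phi_k * E / norm  # gradient of Xi w.r.t. vec(K_hat) is phi_k*E
    return K_hat_vec + dt * K_hat_vec_dot
```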

The design of the state derivative estimator, mentioned in (11) and (12), is facilitated by expressing the system dynamics (1) in the linear-in-the-parameters (LIP) form

ẋ = Y(x, u) θ    (15)

where Y(x, u) ∈ R^{n×n(n+m)} is the regressor matrix and θ ∈ R^{n(n+m)} is the unknown vector defined as

Y(x, u) = [x^T u^T] ⊗ I_n,  θ = vec([A B]).    (16)

Assumption 3: The system parameter vector θ in (16) is upper bounded as ∥θ∥ ≤ a1, where a1 is a known positive constant.

The state derivative estimator is designed as

x̂˙ = Y(x, u) θ̂ + L x̃    (17)

where x̃ = x − x̂ is the state estimation error and L ∈ R^{n×n} is a positive definite estimator gain, so that the estimation error dynamics are

x̃˙ = Y(x, u) θ̃ − L x̃    (18)

with θ̃ = θ − θ̂. The system parameter update law is designed as

θ̂˙ = Γ Y(x, u)^T x̃    (19)

where Γ ∈ R^{n(n+m)×n(n+m)} is the constant positive definite gain matrix.
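A compact Python sketch of this identifier, under the assumed regressor Y(x, u) = [x^T u^T] ⊗ I_n and without the projection modification, is given below; L, Gamma, and the Euler discretization are illustrative choices.

```python
import numpy as np

def Y(x, u):
    # Regressor of the LIP form (15): Y(x,u) = [x^T u^T] kron I_n, so that
    # Y(x,u) @ vec([A B]) = A x + B u (column-major vectorization).
    n = x.size
    return np.kron(np.concatenate([x, u])[None, :], np.eye(n))

def identifier_step(x, u, x_hat, theta_hat, L, Gamma, dt):
    """One Euler step of the state derivative estimator (17) and the
    system parameter update law (19)."""
    x_tilde = x - x_hat                       # state estimation error
    Yk = Y(x, u)
    x_hat_dot = Yk @ theta_hat + L @ x_tilde  # (17): doubles as the state
                                              # derivative estimate
    theta_hat_dot = Gamma @ Yk.T @ x_tilde    # (19): gradient-type update
    return x_hat + dt * x_hat_dot, theta_hat + dt * theta_hat_dot
```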

Lemma 1: The update laws in (17) and (19) ensure that the state estimation and system parameter estimation error dynamics are Lyapunov stable ∀t ≥ 0.

Proof: Consider the positive definite Lyapunov function candidate

V = (1/2) x̃^T x̃ + (1/2) θ̃^T Γ^{-1} θ̃.    (20)

Taking the time derivative of (20) and substituting the value of x̃˙ from (18), the following expression is obtained

V̇ = x̃^T (Y(x, u) θ̃ − L x̃) + θ̃^T Γ^{-1} θ̃˙    (21)

where θ̃˙ = −θ̂˙, since θ is constant. Substituting (19) in (21) cancels the cross terms and yields V̇ = −x̃^T L x̃ ≤ 0. Since V̇ ≤ 0, V is bounded, which implies that x̃ and θ̃ are bounded ∀t ≥ 0. ∎

Remark 3: Assumptions 1 and 2 are standard assumptions required for projection-based adaptive algorithms, frequently used in the robust adaptive control literature ([3], Chapter 11 of [36], Chapter 3 of [37], [38]). In fact, in the context of adaptive optimal control, analogous to Assumptions 1 and 2, many existing results [8], [11], [13], [14], [29] assume a known upper bound on the unknown parameters associated with the value function, an essential requirement for proving stability of the closed-loop system. Although the true system parameters (A and B) are unknown, a range of operating values (a compact set containing the true values of the elements of A and B) may be known in many cases from domain knowledge of the plant. By sampling uniformly over the known compact set and solving the ARE offline for those samples, a set of Riccati matrices can be obtained; hence, the upper bounds (α1 and α2) assumed in Assumptions 1 and 2 can be conservatively estimated from this set. Moreover, the proposed algorithm serves as an effective approach for the case where it is hard to obtain an initial stabilizing policy for uncertain systems.
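The sampling procedure of Remark 3 is straightforward to script. A hedged Python sketch, assuming a hypothetical entrywise uncertainty set around nominal A and B:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def estimate_bounds(A_nom, B_nom, Q, R, spread=0.2, samples=500, seed=0):
    """Conservatively estimate alpha1 >= ||P*|| and alpha2 >= ||K*|| by
    sampling plants from a compact set (here: entrywise +/- spread around
    nominal values) and solving the ARE offline for each sample."""
    rng = np.random.default_rng(seed)
    alpha1 = alpha2 = 0.0
    for _ in range(samples):
        A = A_nom + spread * rng.uniform(-1.0, 1.0, A_nom.shape)
        B = B_nom + spread * rng.uniform(-1.0, 1.0, B_nom.shape)
        try:
            P = solve_continuous_are(A, B, Q, R)
        except Exception:
            continue  # skip samples for which the ARE solve fails
        K = np.linalg.solve(R, B.T @ P)
        alpha1 = max(alpha1, np.linalg.norm(P, 2))
        alpha2 = max(alpha2, np.linalg.norm(K, 2))
    return alpha1, alpha2
```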

IV. CONVERGENCE AND STABILITY

A. Development of Controller Parameter Estimation Error Dynamics

The controller parameter estimation error dynamics for K̃ can be obtained using (11) and (13) as

where

Using the vec operator in (22), the following expression is obtained

and

Using (15) and (23), the system dynamics in terms of the error state z(t) can be expressed as

where F ∈ R^{n(1+m)} is a vector-valued function containing the right-hand sides of (15) and (23).

Assumption 4: The pair (φk, F) is u-PE, i.e., PE uniformly in the initial conditions (z0, t0): for each d > 0, there exist ε, δ > 0 such that, ∀(z0, t0) ∈ B_d × [0, ∞), all corresponding solutions satisfy

∫_t^{t+δ} φk(z(τ), τ) φk^T(z(τ), τ) dτ ≥ ε I    (26)

∀t ≥ t0 [30].

Remark 4: Since the regressor φk(z, t) in (23) is state dependent, the u-PE condition in (26), which is uniform in the initial conditions, is used instead of the classical PE condition, where the regressor is only a function of time and not of the states, e.g., where the objective is identification (Section 2.5 of [39]).

Remark 5: In adaptive control, convergence of the system and control parameter error vectors depends on the excitation of the system regressors. This excitation property, typically known as persistence of excitation (PE), is necessary to achieve perfect identification and adaptation. The PE condition, although restrictive in its verification and implementation, is typically imposed by using a reference input with as many spectral lines as the number of unknown parameters [40]. The u-PE condition mentioned in Assumption 4 may be satisfied by adding a probing exploratory signal to the control input [4], [8], [11], [13], [41], as sketched below. This signal can be removed once the parameter estimate K̂ converges to the optimal control policy, after which exact regulation of the system states is achieved. Exact regulation of the system states in the presence of a persistently exciting signal can also be achieved by following the method given in [42], in which the PE property is generated over a finite time interval by an asymptotically decaying "rich" feedback law.
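For instance, a simple probing signal is a small sum of sinusoids whose frequencies have pairwise irrational ratios; the amplitude and frequencies below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def exploration_signal(t, amplitude=0.1):
    """Probing signal e(t) added to the control input to help satisfy the
    u-PE condition; square-root frequencies give pairwise irrational ratios."""
    freqs = np.sqrt(np.array([2.0, 3.0, 5.0, 7.0, 11.0]))
    return amplitude * float(np.sum(np.sin(freqs * t)))
```

In the simulation of Section V, a signal of this kind is added to the control input and removed at t = 4 s, after which the states converge to the origin.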

The expression in (23) can be represented as a perturbed system

For each d > 0, the dynamics of the nominal system

can be shown to be uniformly exponentially stable ∀(z0, t0) ∈ B_d × [0, ∞) by using Assumption 4, (25) and Lemma 5 of [31].

Since F is continuously differentiable and its Jacobian is bounded for the nominal system (28), it can be shown, by referring to the converse Lyapunov Theorem 4.14 in [43] and the definitions and results in [31], [44], that there exists a Lyapunov function Vc(z, t) which satisfies the following inequalities

d1∥z∥² ≤ Vc(z, t) ≤ d2∥z∥²,  V̇c(z, t) ≤ −d3∥z∥²,  ∥∂Vc/∂z∥ ≤ d4∥z∥    (29)

for some positive constants d1, d2, d3, d4 ∈ R.

B. Lyapunov Stability Analysis

Theorem 1: If Assumption 4 holds, the adaptive optimal controller (14), along with the parameter update laws (12) and (13) and the state derivative estimators (17) and (19), guarantees that the system states and the controller parameter estimation errors z(t) are uniformly exponentially stable ∀t ≥ 0, provided z(0) ∈ Ω,¹ where the set Ω is defined as

¹The initial condition region Ω can be increased by appropriately choosing the user-defined matrices Q, R, and by tuning the design parameters ν, νk and ηk.

Proof: A positive definite, continuously differentiable Lyapunov function candidate VL: B_d × [0, ∞) → R is defined for each d > 0 as

VL(z, t) = V*(x) + Vc(z, t)    (30)

where V*(x) is the optimal value function defined in (5), which is positive definite and continuously differentiable, and Vc is defined in (29). Taking the time derivative of VL along the trajectories of (1) and (27), the following expression is obtained

Using (6), (29) and the Rayleigh-Ritz theorem, V̇L can be upper bounded as

where

where the known function ρ2(∥z∥): R → R, defined as ρ2(∥z∥) = 2l2∥x∥²/d3, is positive, globally invertible and non-decreasing, and ν̄ = 1/νk ∈ R. By using (24), (34) can be further expressed as

Using (5), (24) and (29), the Lyapunov function candidate VL can be bounded as

σ1∥z∥² ≤ VL(z, t) ≤ σ2∥z∥²    (36)

where σ1 and σ2 are positive constants.

Using (36), (35) can be expressed as

The expression in (37) can be further upper bounded by

where the set Ω is defined as

If z(0) ∈ Ω, then, by examining the solution of (38),

it can be concluded that the system states and the parameter estimation errors uniformly exponentially converge to the origin. ∎

Remark 6: The positive constants d1, d2, d4 in (29) do not appear in the design of the control law (14) or the parameter update law (13) and are utilized only for the stability analysis. As a result, knowing the exact values of these constants is not required in general. However, the quantity d3, which appears in Theorem 1, can be determined by following the procedure given in [43] (for details, see the proof of Theorem 4.14 in [43]).

Remark 7: Traditionally, the parameter update laws in adaptive control have user-defined design parameters termed adaptation gains (in this paper, ν and νk, defined in (12) and (13), respectively). Typically, these gains determine the convergence rate of the estimation of the unknown parameters; hence, a careful selection of the gains governs the performance of the designed estimators. However, a large value of the adaptation gain may result in an unstable adaptive system, which can be overcome by introducing "normalization" in the update laws [45]. The normalized estimator in the update law (13) involves the constant tunable gain ηk, which can be chosen so as to maintain system stability in the presence of a high adaptation gain νk.

Remark 8: The estimates of the system matrices A and B, given by (19), are not guaranteed to converge to the true parameters, since Lemma 1 only proves that the parameter estimation error θ̃ is bounded. Therefore, solving the ARE in (4) using the estimates of A and B may not yield the optimal parameters P* and K*. Moreover, solving for P* directly from the ARE, which is nonlinear in P*, can be challenging, especially for large scale systems. However, the proposed method utilizes the estimates of A and B in the estimator design of the controller parameters P* and K*. The adaptive update laws for P̂ and K̂ in (12) and (13) include the identifier designed in (17), which uses θ̂ (the estimates of A and B). The proposed design is architecturally analogous to [11], [13], [29], where a system identifier is utilized in the controller parameter estimation. Also, note that although the system parameter estimates Â and B̂ are only guaranteed to be bounded, the controller parameter estimates P̂ and K̂ are exponentially convergent to the optimal parameters, as proved in Theorem 1.

C. Comparison With Existing Literature

One of the main contributions of the result is that an initial stabilizing policy is not required, unlike the iterative algorithms in [5], [7], [9], [10], where an initial stabilizing policy is assumed to ensure that the subsequent policies remain stabilizing. In the proposed approach, an adaptive control framework is considered in which the control policies are continuously updated until convergence to the optimal policy. The design of the controller, the parameter update laws and the state derivative estimator ensures exponential stability of the closed-loop system, which is proved using a rigorous Lyapunov-based stability analysis, irrespective of the initial control policy (stabilizing or destabilizing) chosen.

Moreover, other significant contributions of this paper with respect to the existing literature are highlighted as follows.

The algorithms proposed in [5], [7], [9], [10] require the computation of delayed-window integrals to construct the regressor, and/or an "intelligent" data storage mechanism to satisfy an underlying full-rank condition. Computation of delayed-window integrals requires past data storage for the time interval [t − T, t], ∀t > 0, where t and T are the current time instant and the window length, respectively, which demands significant memory, especially for large scale systems. Unlike [5], [7], [9], [10], the proposed work obviates the requirement of memory-intensive delayed-window integrals and "intelligent" data storage, a definite advantage in the case of large scale systems implemented on embedded hardware.

Although the result in [14] designs an actor-critic architecture based adaptive optimal controller for uncertain LTI systems, it uses a memory-intensive delayed-window integral based Bellman error (see the error expression for "e" defined below (17) in [14]) to tune the critic weight estimates Ŵc. Unlike [14], the proposed algorithm uses an online state derivative estimator to obviate the need for past data storage in the control parameter estimation, by strategically formulating the Bellman error "E" in (11) to be independent of delayed-window integrals. Further, an exponential stability result is obtained using the proposed algorithm, as compared to the asymptotic result achieved in [14].

Recent results in [28], [29] relax the PE condition by concurrently applying past stored data along with the current parameter estimates; however, unlike [28], [29], the proposed result is established for completely uncertain systems without requiring past data storage. Moreover, a stronger exponential regulation result is obtained using the proposed controller, while obviating the need for past data storage, as compared to [28], [29].

The proposed result also differs from the ALQ algorithm [3] in that it avoids the computational burden of solving the ARE (with the estimates of A and B) at every iteration, and thereby also avoids the restrictive condition on the stabilizability of the estimates of A and B at every iteration.

V. SIMULATION

To verify the effectiveness of the proposed result, the problem of controlling the angular position of the shaft of a DC motor is considered [12]. The plant is modeled as a third-order continuous-time LTI system and its system matrices are given as

The objective is to find the optimal control policy for the infinite-horizon value function (2), where the state and input penalties are taken as Q = I3 and R = 1, respectively. Solving the ARE (4) for the given system dynamics, the optimal control gain is obtained as K* = [1.0 0.8549 0.4791]. The gains for the parameter update laws (12) and (13) are chosen as ν = 35, νk = 55 and ηk = 5. The gain matrix of the state derivative estimator is selected as L = I3. An exploration signal, comprising a sum of sinusoids with irrational frequencies, is added to the control input in (14), which subsequently leads to the convergence of the control gain to its optimal values, as shown in Fig. 1.
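Since the DC motor matrices are not reproduced in the extracted text, the following self-contained Python sketch instead runs the state derivative estimator of Section III on a placeholder third-order plant driven by an exploration-rich input; the full adaptive LQR loop would additionally update K̂ via (12)-(13) and close the loop with (14). All numerical values here are illustrative assumptions.

```python
import numpy as np

# Placeholder stable third-order plant (not the paper's DC motor model).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -3.0, -3.0]])
B = np.array([[0.0], [0.0], [1.0]])
n, m = 3, 1
dt, T = 1e-3, 8.0

Y = lambda x, u: np.kron(np.concatenate([x, u])[None, :], np.eye(n))

x = np.array([-0.2, 0.2, -0.2])        # x(0) as in the comparison study
x_hat = np.zeros(n)
theta_hat = np.zeros(n * (n + m))      # estimate of vec([A B])
L, Gamma = np.eye(n), 10.0 * np.eye(n * (n + m))
freqs = np.sqrt(np.array([2.0, 3.0, 5.0, 7.0, 11.0]))

for k in range(int(T / dt)):
    t = k * dt
    # Exciting input (a stand-in for u = -K_hat x + exploration signal).
    u = np.array([0.1 * np.sin(freqs * t).sum()])
    x_tilde = x - x_hat
    Yk = Y(x, u)
    # Identifier (17), (19), Euler-integrated.
    x_hat = x_hat + dt * (Yk @ theta_hat + L @ x_tilde)
    theta_hat = theta_hat + dt * (Gamma @ Yk.T @ x_tilde)
    x = x + dt * (A @ x + B @ u)       # plant (1)

print("final state estimation error:", np.linalg.norm(x - x_hat))
```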

Fig. 1. The evolution of the parameter estimate K̂(t) for the proposed method.

The proposed method is compared with the recently published work in [14]. The Q-learning algorithm proposed in [14] solves the adaptive optimal control problem for completely uncertain linear time-invariant (LTI) systems. The norms of the control gain estimation error K̃ (used in the proposed work) and the actor weight estimation error (as discussed in [14] and analogous to K̃) are depicted in Fig. 2.

Fig. 2. Comparison of the parameter estimation error norms between [14] and the proposed method.

The initial conditions are chosen as K̂(0) = [0 0 0] and x(0) = [−0.2 0.2 −0.2]^T, and the gains for the update laws of the approach in [14] are chosen as αa = 6 and αc = 50. To ensure sufficient excitation, an exploration noise is added to the control input up to t = 4 s in both cases.

From Fig. 3, it can be observed that, for similar control inputs, the convergence rates of the two methods (as shown in Fig. 2) are comparable. However, as opposed to the memory-intensive delayed-window integration used for the calculation of the regressor in [14], the proposed result does not use past stored data and hence is more memory efficient. Further, an exponential stability result is obtained using the proposed controller, as compared to the asymptotic result obtained in [14]. As seen from Figs. 4 and 5, the state trajectories for both methods initially exhibit bounded perturbations around the origin due to the presence of the exploration signal. However, once this signal is removed after t = 4 s, the trajectories converge to the origin.

Fig. 3. Comparison of the control inputs between [14] and the proposed method.

Fig. 4. System state trajectories for the proposed method.

Fig. 5. System state trajectories for [14].

VI. CONCLUSION

An adaptive LQR is developed for continuous-time LTI systems with uncertain dynamics. Unlike previous results on adaptive optimal control which use RL/ADP methods, the proposed adaptive controller is memory and computationally efficient and does not require an initial stabilizing policy. The result hinges on a u-PE condition on the regressor vector, which is shown to be critical for proving convergence to the optimal controller. A Lyapunov analysis is used to prove uniform exponential stability of the tracking error and parameter estimation error dynamics, and simulation results validate the efficacy of the proposed algorithm. Future work will focus on relaxing the restrictive u-PE condition without compromising the merits of the proposed result.

APPENDIX: EVALUATION OF BOUNDS

This section presents bounds on terms encountered at different stages of the proof of Theorem 1. These bounds, comprising norms of the elements of the vector z(t) defined in (24), are developed by using (13), (15), (18), (19), Lemma 1, and standard vec operator and Kronecker product properties.

The following inequality results from the use of the projection operator in (12) [35].

The expression in (39) is upper bounded by using Assumptions 1 and 2, Lemma 1, (40) and the following supporting bounds

where hi ∈ R for i = 1, 2, ..., 11 are positive constants, and in (41b) the corresponding equality expression is used, as

where the known function ρ1(∥z∥): R → R is positive, globally invertible and non-decreasing, and z ∈ R^{n(n+m)} is defined in (24).
