
    Discounted Iterative Adaptive Critic Designs With Novel Stability Analysis for Tracking Control

2022-07-18 06:17:08 Mingming Ha, Ding Wang, and Derong Liu
IEEE/CAA Journal of Automatica Sinica, July 2022

Mingming Ha, Ding Wang, and Derong Liu

Abstract—The core task of tracking control is to make the controlled plant track a desired trajectory. The traditional performance index used in previous studies cannot completely eliminate the tracking error as the number of time steps increases. In this paper, a new cost function is introduced to develop the value-iteration-based adaptive critic framework to solve the tracking control problem. Unlike the regulator problem, the iterative value function of the tracking control problem cannot be regarded as a Lyapunov function. A novel stability analysis method is developed to guarantee that the tracking error converges to zero. The discounted iterative scheme under the new cost function for the special case of linear systems is elaborated. Finally, the tracking performance of the present scheme is demonstrated by numerical results and compared with those of the traditional approaches.

    I. INTRODUCTION

RECENTLY, adaptive critic methods, known as approximate or adaptive dynamic programming (ADP) [1]–[8], have enjoyed rather remarkable successes in a wide range of fields, such as energy scheduling [9], [10], orbital rendezvous [11], [12], urban wastewater treatment [13], attitude-tracking control for hypersonic vehicles [14], and so forth. Adaptive critic designs have close connections to both adaptive control and optimal control [15], [16]. For nonlinear systems, it is difficult to obtain the analytical solution of the Hamilton-Jacobi-Bellman (HJB) equation. Iterative adaptive critic techniques, mainly including value iteration (VI) [17]–[20] and policy iteration (PI) [21], [22], have been extensively studied and successfully applied to iteratively approximate the numerical solution of the HJB equation [23]–[26]. In [27], relaxed dynamic programming was introduced to overcome the “curse of dimensionality” problem by relaxing the demand for optimality. The upper and lower bounds of the iterative value function were first determined and the convergence of VI was revealed. To ensure stability of the undiscounted VI, Heydari [28] developed a stabilizing VI algorithm initialized by a stabilizing policy. With this operation, the stability of the closed-loop system using the iterative control policy can be guaranteed. In [29], the convergence and monotonicity of the discounted value function were investigated. The discounted iterative scheme was implemented by the neural-network-based globalized dual heuristic programming. Afterwards, Ha et al. [30] discussed the effect of the discount factor on the stability of the iterative control policy. Several stability criteria with respect to the discount factor were established. In [31], Wang et al. developed an event-based adaptive critic scheme and presented an appropriate triggering condition to ensure the stability of the controlled plant.

Optimal tracking control is a significant topic in the control community, which mainly aims at designing a controller to make the controlled plant track a reference trajectory. The literature on this problem is extensive [32]–[37] and reflects considerable current activity. In [38], Wang et al. developed a finite-horizon optimal tracking control strategy with convergence analysis for affine discrete-time systems by employing the iterative heuristic dynamic programming approach. For the linear quadratic output tracking control problem, Kiumarsi et al. [39] presented a novel Bellman equation, which allows policy evaluation using only the input, output, and reference trajectory data. Liu et al. [40] considered the robust optimal tracking control problem and introduced the adaptive critic design scheme into the controller to overcome the unknown uncertainty caused by multi-input multi-output discrete-time systems. In [41], Luo et al. designed a model-free optimal tracking controller for nonaffine systems by using a critic-only Q-learning algorithm, although the proposed method needs to be given an initial admissible control policy. In [42], a novel cost function was proposed to eliminate the tracking error. The convergence and monotonicity of the new value function sequence were investigated. On the other hand, some methods to solve the tracking problem for affine continuous-time systems can be found in [43]–[46]. For affine nonlinear partially-unknown constraint-input systems, the integral reinforcement learning technique was studied in [43] to learn the solution to the optimal tracking control problem, which does not require identification of the unknown system dynamics.

In general, the majority of adaptive critic tracking control methods need to solve for the feedforward control input of the reference trajectory. Then, the tracking control problem can be transformed into a regulator problem. However, for some nonlinear systems, the feedforward control input corresponding to the reference trajectory might be nonexistent or not unique, which makes these methods unavailable. To avoid solving for the feedforward control input, some tracking control approaches establish a performance index function of the tracking error and the control input. Then, the adaptive critic design is employed to minimize the performance index. With this operation, the tracking error cannot be eliminated, because the minimization of the control input does not always lead to the minimization of the tracking error. Moreover, as mentioned in [30], the introduction of the discount factor will affect the stability of the optimal control policy. If an inappropriate discount factor is selected, the stability of the closed-loop system cannot be guaranteed. Besides, unlike the regulator problem, the iterative value function of tracking control is not a Lyapunov function. Until now, few studies have focused on this problem. In this paper, inspired by [42], the new performance index is adopted to avoid solving for the feedforward control and to eliminate the tracking error. The stability conditions with respect to the discount factor are discussed, which can guarantee that the tracking error converges to zero as the number of time steps increases.

    The main contributions of this article are summarized as follows.

    1) Based on the new performance index function, a novel stability analysis method for the tracking control problem is established. It is guaranteed that the tracking error can be eliminated completely.

2) The effect of the approximation errors introduced by the value function approximator on the stability of the controlled system is discussed.

3) For linear systems, a new VI-based adaptive critic scheme that iterates between the kernel matrix and the state feedback gain is developed.

    The remainder of this paper is organized as follows. In Section II, the necessary background and motivation are provided. The VI-based adaptive critic scheme and the properties of the iterative value function are presented. In Section III, the novel stability analysis for tracking control is developed. In Section IV, the discounted iterative formulation under the new performance index for the special case of linear systems is discussed. Section V compares the tracking performance of the new and traditional tracking control approaches by the numerical results. In Section VI, conclusions of this paper and further research topics are summarized.

Notations: Throughout this paper, N and N+ are the sets of all nonnegative and positive integers, respectively, i.e., N = {0, 1, 2, ...} and N+ = {1, 2, ...}. R denotes the set of all real numbers and R+ is the set of nonnegative real numbers. Rn is the Euclidean space of all n-dimensional real vectors. In and 0m×n represent the n×n identity matrix and the m×n zero matrix, respectively. C ≤ 0 means that the matrix C is negative semi-definite.

    II. PROBLEM FORMULATION AND VI-BASED ADAPTIVE CRITIC SCHEME

Consider the following affine nonlinear system:

with the state Xk ∈ Rn and input uk ∈ Rm, where n, m ∈ N+ and k ∈ N. F: Rn → Rn and G: Rn×Rm → Rn are the drift and control input dynamics, respectively. The tracking error is defined as

where Dk is the reference trajectory at stage k. Suppose that Dk is bounded and satisfies

where M(·) is the command generator dynamics. The objective of the tracking control problem is to design a controller to track the desired trajectory. Let uk = {uk, uk+1, ...}, k ∈ N, be an infinite-length sequence of control inputs. Assume that there exists a control sequence u0 such that Ek → 0 as k → ∞.

In general, the previous works [34], [38] assume that there exists a feedforward control input ηk satisfying Dk+1 = F(Dk) + G(Dk)ηk to achieve perfect tracking. However, for some nonlinear systems, the feedforward control input might be nonexistent. To avoid computing the feedforward control input ηk, the performance index [33], [34] is generally designed as

where γ ∈ (0,1] is the discount factor and U(·,·,·) is the utility function. The terms Q: Rn → R+ and R: Rm → R+ in the utility function are positive definite continuous functions. With this operation, both the tracking error and the control input in the performance index (4) are minimized. To the best of our knowledge, the minimization of the control input does not always result in the minimization of the tracking error unless the reference trajectory is assumed to satisfy Dk → 0 as k → ∞. Such an assumption greatly reduces the application scope of the approach. Therefore, for the majority of desired trajectories, the tracking error cannot be eliminated [42] by adopting the performance index (4). According to [42], under the control sequence u0, a new discounted cost function for the initial tracking error E0 and reference point D0 is introduced as
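The discounted accumulation in a performance index of this kind can be sketched numerically. The quadratic utility below is only a stand-in (the displayed equation did not survive extraction), and the trajectory data are hypothetical:

```python
import numpy as np

def discounted_cost(errors, controls, gamma, Q, R):
    """Evaluate a truncated discounted index
    J = sum_k gamma^k * (E_k^T Q E_k + u_k^T R u_k)
    along a finite stretch of a trajectory."""
    J = 0.0
    for k, (e, u) in enumerate(zip(errors, controls)):
        J += gamma**k * (e @ Q @ e + u @ R @ u)
    return J

# Sanity check: with unit utility at every step the index reduces to a
# geometric series, sum_{k=0}^{N-1} gamma^k = (1 - gamma^N) / (1 - gamma).
errors = [np.array([1.0])] * 50
controls = [np.array([0.0])] * 50
J = discounted_cost(errors, controls, 0.98, np.array([[1.0]]), np.array([[1.0]]))
```

The geometric-series check makes the role of γ concrete: for γ < 1, distant tracking errors are weighted down, which is exactly why the discount factor interacts with stability later in the paper.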

The adopted cost function (5) not only avoids computing the feedforward control input, but also eliminates the tracking error. The objective of this paper is to find a feedback control policy π(E, D) which both makes the dynamical system (1) track the reference trajectory and minimizes the cost function (5). According to (5), the state value function can be obtained as

and its optimal value is V*(Ek, Dk).

According to Bellman’s principle of optimality, the optimal value function for the tracking control problem satisfies

where Ek+1 = F(Ek + Dk) + G(Ek + Dk)π(Ek, Dk) − M(Dk). The corresponding optimal control policy is computed by

    Therefore, the Hamiltonian function for tracking control can be obtained as

The optimal control policy π* satisfies the first-order necessary condition for optimality, i.e., ∂H/∂π = 0 [42]. The gradient of (9) with respect to π is given as

    In general, the positive definite function Q is chosen as the following quadratic form:

where Q ∈ Rn×n is a positive definite matrix. Then, the expression of the optimal control policy can be obtained by solving (10) [42].

Since it is difficult or impossible to directly solve the Bellman equation (7), iterative adaptive critic methods are widely adopted to obtain its numerical solution. Here, the VI-based adaptive critic scheme for the tracking control problem is employed to approximate the optimal value function V*(Ek, Dk) formulated in (7). The VI-based adaptive critic algorithm starts from a positive semi-definite continuous value function V(0)(Ek, Dk). Using the initial value function V(0)(Ek, Dk), the initial control policy is computed by

where Ek+1 = F(Ek + Dk) + G(Ek + Dk)π(Ek, Dk) − M(Dk). For the iteration index ℓ ∈ N+, the VI-based adaptive critic algorithm is implemented between the value function update

    and the policy improvement

In the iterative learning process, two sequences, namely the iterative value function sequence {V(ℓ)} and the corresponding control policy sequence {π(ℓ)}, are obtained. The convergence and monotonicity of the undiscounted value function sequence have been investigated in [42]. Inspired by [42], the corresponding convergence and monotonicity properties of the discounted value function can be obtained.
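As a rough numerical illustration of the VI recursion (13) and (14), the following sketch runs grid-based value iteration for a hypothetical scalar tracking problem with zero initial value function; the plant, reference, and weights are invented for illustration and are not the paper's examples:

```python
import numpy as np

# Hypothetical scalar plant x+ = 0.8 x + u tracking a constant
# reference d, so the tracking error obeys e+ = 0.8 (e + d) + u - d.
a, d, gamma = 0.8, 0.5, 0.95
e_grid = np.linspace(-1.0, 1.0, 201)
u_grid = np.linspace(-2.0, 2.0, 401)

def vi_step(V):
    """One value-iteration sweep: V'(e) = min_u { e^2 + gamma * V(e+) },
    with V(e+) read off the grid by linear interpolation."""
    V_next = np.empty_like(V)
    for i, e in enumerate(e_grid):
        e_plus = a * (e + d) + u_grid - d      # successor errors for every u
        cost = e**2 + gamma * np.interp(e_plus, e_grid, V)
        V_next[i] = cost.min()
    return V_next

V0 = np.zeros_like(e_grid)   # value function initialized as zero
V1 = vi_step(V0)
V2 = vi_step(V1)
# With a zero start, the iterates are pointwise monotonically nondecreasing.
assert np.all(V1 >= V0) and np.all(V2 >= V1 - 1e-12)
```

The final assertion checks numerically the monotone nondecreasing behavior that the discounted value function sequence exhibits when started from zero.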

Lemma 1 [42]: Let the value function and control policy sequences be tuned by (13) and (14), respectively. For any Ek and Dk, the value function starts from V(0)(·,·) = 0.

1) The value function sequence {V(ℓ)(Ek, Dk)} is monotonically nondecreasing, i.e., V(ℓ)(Ek, Dk) ≤ V(ℓ+1)(Ek, Dk), ℓ ∈ N.

2) Suppose that there exists a constant κ ∈ (0, ∞) such that 0 ≤ γV*(Ek+1, Dk+1) ≤ κU(Ek, Dk, uk), where Ek+1 = F(Ek + Dk) + G(Ek + Dk)uk − M(Dk). Then, the iterative value function approaches the optimal value function in the following manner:

It can be guaranteed that the discounted value function and the corresponding control policy sequences approximate the optimal value function and the optimal control policy as the number of iterations increases, i.e., limℓ→∞ V(ℓ)(Ek, Dk) = V*(Ek, Dk) and limℓ→∞ π(ℓ)(Ek, Dk) = π*(Ek, Dk). Note that the introduction of the discount factor will affect the stability of the optimal and iterative control policies. If the discount factor is chosen too small, the optimal control policy might be unstable. For the tracking control problem, this means the policy π*(Ek, Dk) cannot make the controlled plant track the desired trajectory, and it is then meaningless to design various iterative methods to approximate the optimal control policy. On the other hand, for the regulation problem, the iterative value function is a Lyapunov function to judge the stability of the closed-loop system [18]. However, for the tracking control problem, the iterative value function cannot be regarded as a Lyapunov function because it does not depend only on the tracking error E. Therefore, it is necessary to develop a novel stability analysis approach for tracking control problems.

    III. NOVEL STABILITY ANALYSIS OF VI-BASED ADAPTIVE CRITIC DESIGNS

    In this section, the stability of the tracking error system is discussed. It is guaranteed that the tracking error under the iterative control policy converges to zero as the number of time steps increases.

Theorem 1: Suppose that there exists a control sequence u0 for the system (1) and the desired trajectory (3) such that Ek → 0 as k → ∞. If the discount factor satisfies

where c ∈ (0,1) is a constant, then the tracking error under the optimal control π*(Ek, Dk) converges to zero as k → ∞.

Proof: According to (7) and (8), the Bellman equation can be rewritten as

    which is equivalent to

Applying (19) to the tracking errors E0, E1, ..., EN and the corresponding reference points D0, D1, ..., DN, one has

    Combining the inequalities in (20), we have

For the discounted iterative adaptive critic tracking control, the condition (16) is important. Otherwise, the stability of the optimal control policy cannot be guaranteed. Theorem 1 reveals the effect of the discount factor on the convergence of the tracking error. However, the optimal value function is unknown in advance. In what follows, a practical stability condition is provided to guarantee that the tracking error converges to zero under the iterative control policy.

Theorem 2: Let the value function with V(0)(·,·) = 0 and the control policy be updated by (13) and (14), respectively. If the iterative value function satisfies

which implies, for j = 1, 2, ..., N,

    Combining (23) and (25), the following relationship can be obtained:

According to 2) in Lemma 1, V(ℓ+1)(Ek, Dk) − V(ℓ)(Ek, Dk) → 0 as ℓ → ∞. Therefore, the condition (22) in Theorem 2 can be satisfied in the iteration process. There must exist an iterative control policy in the control policy sequence {π(ℓ)} which makes Ek → 0 as k → ∞.

In general, for nonlinear systems, the value function update (13) cannot be solved exactly. Various fitting methods, such as neural networks, polynomial fitting, and so forth, can be used to approximate the iterative value function of the nonlinear systems, and many numerical methods can be applied to solve (14). Note that the inputs of the function approximator are the tracking error vector E and the desired trajectory D. In particular, for high-dimensional nonlinear systems, the artificial neural network is applicable to approximate the iterative value function. Compared with the polynomial fitting method, the artificial neural network avoids manually designing each basis function. The introduction of the function approximator inevitably leads to an approximation error.
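As one concrete instance of the polynomial fitting mentioned above, the sketch below fits a quadratic polynomial model in the stacked (E, D) input by least squares. The target function and samples are synthetic stand-ins, not the paper's value function:

```python
import numpy as np
from itertools import combinations_with_replacement

def quad_features(z):
    """All monomials of degree <= 2 in the entries of z:
    a constant, the linear terms, and the quadratic cross terms."""
    feats = [1.0]
    feats += list(z)
    feats += [z[i] * z[j]
              for i, j in combinations_with_replacement(range(len(z)), 2)]
    return np.array(feats)

rng = np.random.default_rng(0)
samples = rng.uniform(-1, 1, size=(300, 4))   # stacked (E, D) samples
# Synthetic "iterative value function" targets: a known quadratic.
targets = np.array([z @ z + 0.5 * z[0] * z[3] for z in samples])

Phi = np.stack([quad_features(z) for z in samples])  # 300 x 15 design matrix
W, *_ = np.linalg.lstsq(Phi, targets, rcond=None)    # least-squares weights
```

Because the target lies in the span of the quadratic basis, the fit is exact up to rounding; a neural network would replace `quad_features` with learned hidden-layer features, at the cost of a nonconvex training problem.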

Define the approximation error at the ℓth iteration as ε(ℓ)(Ek, Dk). According to the value function update equation (13), the approximate value function is obtained as

where Ek+1 = F(Ek + Dk) + G(Ek + Dk)μ(Ek, Dk) − M(Dk) and the corresponding control policy μ(Ek, Dk) is computed by

Note that the approximation error ε(ℓ−1)(Ek, Dk) is not the error between the approximate value function at the ℓth iteration and the exact value function V(ℓ)(Ek, Dk). Next, considering the approximation error of the function approximator, we further discuss the stability of the closed-loop system using the control policy derived from the approximate value function.

Theorem 3: Let the iterative value function with V(0)(·,·) = 0 be approximated by a smooth function approximator. The approximate value function and the corresponding control policy are updated by (28) and (29), respectively. If the approximate value function with the approximation error ε(ℓ)(Ek, Dk) ≤ αU(Ek, Dk, μ(ℓ)(Ek, Dk)) is finite and satisfies (30), where α ∈ (0,1) and c ∈ (0, 1−α) are constants, then the tracking error under the control policy μ(ℓ)(Ek, Dk) satisfies Ek → 0 as k → ∞.

Proof: For convenience, μ(ℓ)(Ek, Dk) is written in abbreviated form in the sequel. According to (28) and the condition (30), it leads to

Evaluating (32) at the time steps k = 0, 1, 2, ..., N, it results in

    Combining the inequalities in (33), we obtain

    IV. DISCOUNTED TRACKING CONTROL FOR THE SPECIAL CASE OF LINEAR SYSTEMS

In this section, the VI-based adaptive critic scheme for linear systems and its stability properties are investigated. Consider the following discrete-time linear system given by

where A ∈ Rn×n and B ∈ Rn×m are system matrices. Here, we assume that the reference trajectory satisfies Dk+1 = ΓDk, where Γ ∈ Rn×n is a constant matrix. This form is adopted because it is convenient for analysis. According to the new cost function (5), for the linear system (35), a quadratic performance index with a positive definite weight matrix Q is formulated as follows:

Combining the dynamical system (35) and the desired trajectory Dk, we can obtain an augmented system as

where the new weight matrix satisfies

As mentioned in [15], [16], [39], the value function can be regarded as a quadratic form in the augmented state for some kernel matrix. Then, the Bellman equation of linear quadratic tracking is obtained by

    The Hamiltonian function of linear quadratic tracking control is defined as

Considering a linear state feedback policy together with the equation (40), it results in

    Therefore, the linear quadratic tracking problem can be solved by using the following equation:

Considering the Hamiltonian function (41), a necessary condition for optimality is the stationarity condition ∂H/∂u = 0 [15], [16]. The optimal control policy is computed by

    and

Theorem 4: Let the kernel matrix and the state feedback gain be iteratively updated by (45) and (46), respectively. If the iterative kernel matrix and state feedback gain satisfy

    which implies

According to Theorem 2, we can obtain that the utility under the iterative control policy π(ℓ) tends to zero as k → ∞, which shows that the tracking error under π(ℓ) approaches zero as k → ∞. ■

For linear systems, if the system matrices A and B are known, it is not necessary to use a function approximator to estimate the iterative value function. According to the iterative algorithm (45) and (46), there is no approximation error derived from the approximate value function in the iteration procedure.
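The iteration between the kernel matrix and the feedback gain can be sketched as follows. Since the exact expressions (45) and (46) are displayed equations not reproduced here, this is a generic discounted value iteration for the augmented system under an error-only quadratic index; a pseudo-inverse is used because this index carries no control weight. The scalar plant and constant reference are hypothetical:

```python
import numpy as np

# Hypothetical scalar plant x+ = 0.9 x + u tracking a constant
# reference d+ = d (Gamma = 1). Augmented state Xt = [x; d].
A, B, Gamma = np.array([[0.9]]), np.array([[1.0]]), np.array([[1.0]])
At = np.block([[A, np.zeros((1, 1))], [np.zeros((1, 1)), Gamma]])
Bt = np.vstack([B, np.zeros((1, 1))])
Q = np.array([[1.0]])
# E = x - d, so E^T Q E = Xt^T Qt Xt with the augmented weight below.
Qt = np.block([[Q, -Q], [-Q, Q]])
gamma = 0.98

P = np.zeros((2, 2))                       # kernel matrix, initialized to zero
for _ in range(400):
    # Gain update: minimizer of gamma * (At Xt + Bt u)^T P (At Xt + Bt u);
    # pinv because the error-only index has no control weight R.
    K = np.linalg.pinv(gamma * Bt.T @ P @ Bt) @ (gamma * Bt.T @ P @ At)
    # Kernel update: one value-iteration sweep under the greedy gain.
    Ac = At - Bt @ K
    P = Qt + gamma * Ac.T @ P @ Ac

# Closed-loop simulation: the tracking error x - d is driven to zero.
Xt_state = np.array([[1.0], [0.5]])
for _ in range(50):
    Xt_state = (At - Bt @ K) @ Xt_state
```

For this scalar example the recursion converges in two sweeps and the resulting control u = −0.9x + d places the state exactly on the reference after one step, so the closed-loop tracking error vanishes.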

    V. SIMULATION STUDIES

In this section, two numerical simulations with physical backgrounds are conducted to verify the effectiveness of the discounted adaptive critic designs. Compared with the cost function (4) used in traditional studies, the adopted performance index can eliminate the tracking error.

    A. Example 1

As shown in Fig. 1, the spring-mass-damper system is used to validate the present results and to compare the performance of the present and the traditional adaptive critic tracking control approaches. Let M, s, and d be the mass of the object, the stiffness constant of the spring, and the damping coefficient, respectively. The system dynamics is given as

where x denotes the position, v stands for the velocity, and f is the force applied to the object. Let the system state vector be X = [x, v]T ∈ R2 and the control input be u = f ∈ R. The continuous-time system dynamics (50) is discretized using the Euler method with sampling interval Δt = 0.01 s. Then, the discrete-time state space equation is obtained as

    Fig. 1. Diagrammatic sketch of the spring-mass-damper system.

In this example, the practical parameters are selected as M = 1 kg, s = 5 N/m, and d = 0.5 Ns/m. The reference trajectory is defined as
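With these parameters, the Euler discretization described above can be reproduced as follows; the continuous-time matrices follow the standard spring-mass-damper form, which is assumed here since the displayed equation (50) did not survive extraction:

```python
import numpy as np

M, s, d, dt = 1.0, 5.0, 0.5, 0.01   # mass, stiffness, damping, sampling interval

# Continuous dynamics (standard spring-mass-damper form, assumed here):
#   x_dot = v,   v_dot = (-s*x - d*v + f) / M
Ac = np.array([[0.0, 1.0], [-s / M, -d / M]])
Bc = np.array([[0.0], [1.0 / M]])

# Euler discretization: X_{k+1} = (I + dt*Ac) X_k + dt*Bc u_k
A = np.eye(2) + dt * Ac
B = dt * Bc
```

This yields A = [[1, 0.01], [−0.05, 0.995]] and B = [[0], [0.01]], the discrete-time pair then fed to the iterative scheme.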

    Combining the original system (51) and the reference trajectory (52), the augmented system is formulated as

The iterative kernel matrix, with the initial kernel matrix set to 04×4, and the state feedback gain are updated by (45) and (46), respectively, where Q = I2 and the discount factor is chosen as γ = 0.98. On the other hand, considering the following traditional cost function:

    the corresponding VI-based adaptive critic control algorithm for system (53) is implemented between

    and

where R ∈ Rm×m is a positive definite matrix. As defined in (54), the objective of the cost function is to minimize both the tracking error and the control input. The role of the cost function (54) is to balance the minimization of the tracking error against that of the control input through the selection of the matrices Q and R. To compare the tracking performance under different cost functions, we run both the new VI-based adaptive critic algorithm and the traditional approach for 400 iterations. Three traditional cost functions with different weight matrices Qi and Ri, i = 1, 2, 3, are selected to implement the algorithms (55) and (56), where Q1,2,3 = I2 and R1,2,3 = 1, 0.1, 0.01. After 400 iteration steps, the obtained optimal kernel matrices and state feedback gains are given as follows:

Let the initial system state and reference point be X0 = [0.1, 0.14]T and D0 = [−0.3, 0.3]T. Then, the obtained state feedback gains are applied to generate the control inputs of the controlled plant (53). The system state and tracking error trajectories under different weight matrices are shown in Figs. 2 and 3, respectively. It can be observed that a smaller R leads to a smaller tracking error. The weight matrices Q and R reflect the relative importance of minimizing the tracking error and the control input. The tracking performance of the traditional cost function with the smallest R is similar to that of the new tracking control approach. From (56), the matrix R cannot be a zero matrix; otherwise, the matrix inverse required in (56) might not exist. The corresponding control input curves are plotted in Fig. 4.

    Fig. 2. The reference trajectory and system state curves under different cost functions (Example 1).

    Fig. 3. The tracking error curves under different cost functions (Example 1).

    Fig. 4. The control input curves under different cost functions (Example 1).

    B. Example 2

Consider the single-link robot arm given in [47]. Let M, g, L, J, and fr be the mass of the payload, the acceleration of gravity, the length of the arm, the moment of inertia, and the viscous friction coefficient, respectively. The system dynamics is formulated as

where α and u denote the angle position of the robot arm and the control input, respectively. Let the system state vector be X = [α, α̇]T ∈ R2. Similarly to Example 1, the single-link robot arm dynamics is discretized using the Euler method with sampling interval Δt = 0.05 s. Then, the discrete-time state space equation of (61) is obtained as

In this example, the practical parameters are set as M = 1 kg, g = 9.8 m/s2, L = 1 m, J = 5 kg·m2, and fr = 2. The desired trajectory is defined as
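With these parameters, one Euler step of the arm dynamics can be sketched as follows. The right-hand side J·α̈ = −MgL·sin(α) − fr·α̇ + u is the standard single-link model and is assumed here, since the displayed equation (61) did not survive extraction:

```python
import numpy as np

M, g, L, J, fr, dt = 1.0, 9.8, 1.0, 5.0, 2.0, 0.05

def step(X, u):
    """One Euler step of J*alpha_dd = -M*g*L*sin(alpha) - fr*alpha_d + u,
    with state X = [alpha, alpha_dot]."""
    alpha, alpha_dot = X
    alpha_dd = (-M * g * L * np.sin(alpha) - fr * alpha_dot + u) / J
    return np.array([alpha + dt * alpha_dot, alpha_dot + dt * alpha_dd])
```

Unlike Example 1, the sine term makes the map nonlinear in the state, which is why this example requires the function approximator rather than the kernel-matrix iteration.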

The cost function (5) is set as the quadratic form, where Q and γ are selected as Q = I2 and γ = 0.97, respectively. In this example, since Ek and Dk are the independent variables of the value function, the function approximator of the iterative value function is selected as the following form:

where W(ℓ) ∈ R26 is the parameter vector. In the iteration process, 300 random samples in the region Ω = {(E ∈ R2, D ∈ R2): −1 ≤ E1 ≤ 1, −1 ≤ E2 ≤ 1, −1 ≤ D1 ≤ 1, −1 ≤ D2 ≤ 1} are chosen to learn the iterative value function V(ℓ)(E, D) for 200 iteration steps. The value function is initialized as zero. In the iteration process, considering the first-order necessary condition for optimality, the iterative control policy can be computed by the following equation:

Note that the unknown control input μ(ℓ)(Ek, Dk) appears on both sides of (65). Therefore, at each iteration step, μ(ℓ)(Ek, Dk) is obtained iteratively by using the successive approximation approach. After the iterative learning process, the parameter vector is obtained as follows:
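The successive approximation used to resolve such an implicit policy equation is a plain fixed-point iteration. The sketch below applies it to a made-up scalar contraction rather than the paper's equation (65):

```python
import numpy as np

def successive_approximation(phi, u0, tol=1e-10, max_iter=200):
    """Iterate u <- phi(u) until the update stalls; converges when
    phi is a contraction around the fixed point."""
    u = u0
    for _ in range(max_iter):
        u_next = phi(u)
        if abs(u_next - u) < tol:
            return u_next
        u = u_next
    return u

# Made-up implicit control equation with u on both sides: u = -0.5*tanh(u) + 0.3.
# |d/du (-0.5*tanh(u))| <= 0.5 < 1, so the iteration contracts.
phi = lambda u: -0.5 * np.tanh(u) + 0.3
u_star = successive_approximation(phi, 0.0)
```

At each VI step, the same loop would be run with phi standing for the right-hand side of (65) evaluated at the current value-function parameters.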

Next, we compare the tracking performance of the new and the traditional methods. The traditional cost function is also selected as the quadratic form. Three traditional cost functions with Q1,2,3 = I2 and R1,2,3 = 0.1, 0.01, 0.001 are selected. The initial state and initial reference point are set as X0 = [−0.32, 0.12]T and D0 = [0.12, −0.23]T, respectively. The parameter vectors derived from the present and the traditional adaptive critic methods are employed to generate the near-optimal control policies. The controlled plant state trajectories using these near-optimal control policies are shown in Fig. 5. The corresponding tracking error and control input curves are plotted in Figs. 6 and 7, respectively. From Figs. 6 and 7, it is observed that the traditional approach minimizes both the tracking error and the control input. However, for tracking control it is not necessary to minimize the control input at the expense of tracking performance.

    VI. CONCLUSIONS

In this paper, for the tracking control problem, the stability of the discounted VI-based adaptive critic method with a new performance index is investigated. Based on the new performance index, the iterative formulation for the special case of linear systems is given. Some stability conditions are provided to guarantee that the tracking error approaches zero as the number of time steps increases. Moreover, the effect of the approximation errors of the value function is discussed. Two numerical simulations are performed to compare the tracking performance of the iterative adaptive critic designs under different performance index functions.

    Fig. 5. The reference trajectory and system state curves under different cost functions (Example 2).

    Fig. 6. The tracking error curves under different cost functions (Example 2).

    Fig. 7. The control input curves under different cost functions (Example 2).

It is also interesting to further extend the present tracking control method to nonaffine systems, data-based tracking control, output tracking control, various practical applications, and so forth. Future work will also advance the developed method toward online adaptive critic designs for practical complex systems with noise.
