
    Distributed Subgradient Algorithm for Multi-Agent Optimization With Dynamic Stepsize

IEEE/CAA Journal of Automatica Sinica, 2021, Issue 8

Xiaoxing Ren, Dewei Li, Yugeng Xi, and Haibin Shao

Abstract—In this paper, we consider distributed convex optimization problems on multi-agent networks. We develop and analyze the distributed gradient method, which allows each agent to compute its dynamic stepsize by utilizing the time-varying estimate of the local function value at the global optimal solution. Our approach can be applied to both synchronous and asynchronous communication protocols. Specifically, we propose the distributed subgradient with uncoordinated dynamic stepsizes (DS-UD) algorithm for the synchronous protocol and the AsynDGD algorithm for the asynchronous protocol. Theoretical analysis shows that the proposed algorithms guarantee that all agents reach a consensus on the solution to the multi-agent optimization problem. Moreover, the proposed approach with dynamic stepsizes eliminates the requirement of diminishing stepsizes in existing works. Numerical examples of distributed estimation in sensor networks are provided to illustrate the effectiveness of the proposed approach.

    I. INTRODUCTION

DISTRIBUTED optimization in multi-agent systems has received extensive attention due to its ubiquity in scenarios such as power systems [1], [2], smart grids [3], [4], compressed sensing problems [5], [6], learning-based control [7], and machine learning [8], [9]. In distributed optimization problems, a whole task can be accomplished cooperatively by a group of agents via simple local information exchange and computation.

There exist various studies of distributed optimization methods based on multi-agent networks. Among them, the most widely studied are distributed gradient methods. In this line of research, Nedic and Ozdaglar [10] develop a general framework for the multi-agent optimization problem over a network; they propose a distributed subgradient method and analyze its convergence properties. They further consider the case where the agents' states are constrained to convex sets and propose the projected consensus algorithm in [11]. The authors of [12] develop and analyze the dual averaging subgradient method, which carries out a projection operation after averaging and descending. In [13], two fast distributed gradient algorithms based on the centralized Nesterov gradient algorithm are proposed. Novel distributed methods that achieve linear rates for strongly convex and smooth problems have been proposed in [14]–[18]; the common idea in these methods to correct the error caused by a fixed stepsize is to construct a correction term using historical information. To deal with the case where communications among agents are asynchronous, some extensions have been proposed. Nedic [19] proposes an asynchronous broadcast-based algorithm, while the authors in [20] develop a gossip-based random projection (GRP) algorithm; both works study the convergence of the algorithms for a diminishing stepsize and the error bounds for a fixed stepsize. Lei et al. [21] consider the distributed constrained optimization problem in random graphs and develop a distributed primal-dual algorithm that uses the same diminishing stepsize for both the consensus part and the subgradient part.

The selection of the stepsize is critical in the design of gradient methods. Typically, the literature considers two types, namely, diminishing stepsizes and fixed stepsizes. The existing distributed gradient methods with diminishing stepsizes asymptotically converge to the optimal solution. The diminishing stepsizes should follow a decaying rule: they must be positive, vanishing, non-summable, and square summable (as displayed below); see, e.g., [10], [11], [13], [19]–[22]. In [23], a wider selection of stepsizes is explored: the square summability requirement commonly adopted in the literature is removed, which opens the possibility of a better convergence rate. These methods are widely applicable to nonsmooth convex functions, but convergence is rather slow due to the diminishing stepsizes. With a fixed stepsize, it is shown in [24] that the algorithm converges faster, but only to a point in a neighborhood of an optimal solution. The recently developed distributed algorithms with fixed stepsizes [14]–[18] achieve linear convergence to the optimal solution. However, they require the local objective functions to be strongly convex and smooth. Besides, the fixed stepsize should be less than a certain critical value determined by the weight matrix of the network and the Lipschitz continuity and strong convexity parameters of the objective functions. Thus, these algorithms place restrictive conditions on the fixed stepsize and require knowledge of global information, which limits their applicability.
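Concretely, the decaying rule requires the stepsize sequence $\{\alpha_k\}$ to satisfy

$$\alpha_k > 0, \qquad \lim_{k\to\infty} \alpha_k = 0, \qquad \sum_{k=0}^{\infty} \alpha_k = \infty, \qquad \sum_{k=0}^{\infty} \alpha_k^2 < \infty,$$

for which $\alpha_k = 1/(k+1)$ is the canonical choice.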

By comparison with previous work, the contribution of this paper is a novel dynamic stepsize selection approach for the distributed gradient algorithm. We develop the associated distributed gradient algorithms for synchronous and asynchronous gossip-like communication protocols. An interesting feature of the dynamic stepsize is that, unlike the existing distributed algorithms whose diminishing or fixed stepsizes are determined before the algorithm is run, the proposed distributed subgradient with uncoordinated dynamic stepsizes (DS-UD) and AsynDGD algorithms use dynamic stepsizes that rely on time-varying estimates of the optimal function values generated at each iteration of the algorithm. The advantages of the dynamic stepsize proposed in this paper lie in two aspects. On the one hand, the dynamic stepsize only requires that the local convex objective functions have locally bounded subgradients in the synchronous scenario and locally Lipschitz continuous bounded gradients in the asynchronous scenario. Besides, the dynamic stepsize needs no knowledge of global information on the network or the objective functions. Thus, the proposed algorithms are more widely applicable than the distributed algorithms with fixed stepsizes. On the other hand, the dynamic stepsize can overcome the inefficient computations caused by a diminishing stepsize and achieve faster convergence. The dynamic stepsize is a generalization of the Polyak stepsize [25], which is commonly used in centralized optimization and is shown to converge faster than the diminishing stepsize even with an estimated optimal function value [26]. Note that the proposed algorithms utilize two gradients at each iteration: one is used to construct the stepsize, and the other determines the direction, which means that the iteration complexity of the algorithm is doubled. However, numerical examples in which the plots are given in terms of the number of gradient calculations illustrate the effectiveness of the proposed algorithms.

The remainder of this paper is organized as follows. In Section II, we describe the problem formulation. The distributed subgradient algorithm with uncoordinated dynamic stepsizes and its convergence analysis are provided in Section III. Section IV discusses the extension of the proposed algorithm to the asynchronous communication protocol. In Section V, we apply our algorithms to distributed estimation problems to illustrate their effectiveness. Finally, we make concluding remarks in Section VI.

II. PROBLEM FORMULATION

Consider a network consisting of $N$ agents; the goal of the agents is to cooperatively solve the following problem defined on the network.
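The display is reconstructed here in the standard consensus-optimization form of [10], [11], consistent with the constraint set $\Omega$ used in Algorithms 1 and 2:

$$\min_{x \in \Omega} \; f(x) = \sum_{i=1}^{N} f_i(x),$$

where each convex objective $f_i : \mathbb{R}^d \to \mathbb{R}$ is known only to agent $i$ and $\Omega \subseteq \mathbb{R}^d$ is a closed convex set.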

III. DISTRIBUTED SUBGRADIENT ALGORITHM WITH DYNAMIC STEPSIZES

In this section, we derive the distributed subgradient algorithm with dynamic stepsizes for the synchronous communication protocol and present its convergence analysis.

    A. Algorithm Development

Algorithm 1 summarizes the above steps; this distributed subgradient method with uncoordinated dynamic stepsizes is abbreviated as DS-UD.

Remark 1: Since the convergence speed of the algorithm varies when solving different specific optimization problems, different maximum iteration numbers can be set for different problems to ensure that the optimality error decreases rather slowly at the maximum iteration. In practical applications, we can set the maximum iteration number according to the connectivity of the multi-agent network and the scale of the optimization problem.

Algorithm 1 DS-UD
1: Initial: Given initial variables $x_i^0 = y_i^0 \in \Omega$, $\forall i \in V$, the weight matrix $W \in \mathbb{R}^{N \times N}$ under Assumptions 2 and 3, and the maximum iteration number. Set $k = 0$.
2: Obtain the estimate: For $i \in V$, each agent $i$ computes (2) and (3) to get the estimate $f_i^{\mathrm{est}}(k)$.
3: Dynamic stepsize: Each agent $i$ obtains its stepsize $\alpha_{i,k}$ based on the estimate $f_i^{\mathrm{est}}(k)$ according to (4) and (5).
4: Local variable updates: Each agent $i$ updates according to (6). Set $k = k + 1$.
5: Repeat steps 2 to 4 until the predefined maximum iteration number is reached.
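Since (2)–(6) are not reproduced above, the following Python sketch is only one plausible reading of the DS-UD loop: the mixing step, the estimate update (a record-value rule with a fixed offset `delta`, in the spirit of the estimated Polyak stepsize [26]), and the ball constraint are all assumptions made for illustration, not the paper's exact updates.

```python
import numpy as np

def project(x, radius=5.0):
    # Euclidean projection onto the ball {x : ||x|| <= radius},
    # a simple instance of the constraint set Omega.
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def ds_ud(fs, grads, W, x0, iters=200, delta=1e-3):
    # fs[i] / grads[i]: local objective and (sub)gradient of agent i.
    N, d = x0.shape
    x = x0.copy()
    f_rec = np.array([fs[i](x0[i]) for i in range(N)])  # best value seen so far
    for _ in range(iters):
        y = W @ x                                 # consensus (mixing) step
        for i in range(N):
            g = grads[i](y[i])
            fi = fs[i](y[i])
            f_rec[i] = min(f_rec[i], fi)
            f_est = f_rec[i] - delta              # assumed estimate update
            # Polyak-type dynamic stepsize built from the local estimate
            alpha = (fi - f_est) / max(np.linalg.norm(g) ** 2, 1e-12)
            x[i] = project(y[i] - alpha * g)      # projected subgradient step
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, d = 4, 2
    C = rng.normal(size=(N, d))
    # local objectives f_i(x) = ||x - c_i||^2; global minimizer is mean(c_i)
    fs = [lambda x, c=c: float(np.sum((x - c) ** 2)) for c in C]
    grads = [lambda x, c=c: 2.0 * (x - c) for c in C]
    W = np.full((N, N), 1.0 / N)                  # doubly stochastic mixing
    x = ds_ud(fs, grads, W, rng.normal(size=(N, d)))
    print(x.mean(axis=0), C.mean(axis=0))         # agents should end up close
```

The record-value rule keeps $f_i(y_i^k) - f_i^{\mathrm{est}}(k) \geq \delta$, so the stepsize stays positive without requiring any global information.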

    B. Analysis of the Algorithm

Substituting (9) into (8), for all $x \in \Omega$ and $k \geq 0$,

    IV. ASYNCHRONOUS COMMUNICATION

In this section, we extend the DS-UD algorithm to the asynchronous communication protocol, which allows a group of agents to update in each iteration while the others do not. Also, we establish the convergence analysis of the proposed asynchronous algorithm.

    A. Algorithm Development

In practical multi-agent systems, there exist uncertainties in communication networks, such as packet drops and link failures. We consider the gossip-like asynchronous communication protocol from [28]. Specifically, each agent is assumed to have a local clock that ticks at a Poisson rate of 1, independently of the other agents' clocks. This setting is equivalent to having a single virtual clock whose ticking times form a Poisson process of rate $N$ and which ticks whenever any local clock ticks. Let $Z_k$ be the absolute time of the $k$-th tick of this virtual clock.
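By the superposition property of Poisson processes, the inter-tick times of this virtual clock are i.i.d. exponential random variables:

$$Z_{k+1} - Z_k \sim \operatorname{Exp}(N), \qquad \mathbb{E}[Z_{k+1} - Z_k] = \frac{1}{N}.$$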

    The idle agents do not update, i.e.,

This asynchronous distributed gradient method with dynamic stepsizes is abbreviated as AsynDGD; Algorithm 2 summarizes the above steps. We would like to remark that the maximum iteration number in Algorithm 2 is set in the same way as in Algorithm 1.

Algorithm 2 AsynDGD
1: Initial: Given initial variables $x_i^0 = y_i^0 \in \Omega$, $\forall i \in V$, and the maximum iteration number. Set $k = 0$.
2: Asynchronous updates: For $i \in V$, if $i \in J_k$, go to Step 3; if $i \notin J_k$, go to Step 6.
3: Optimal value estimation: Agent $i$ computes (20) and (21) to get the estimate $f_i^{\mathrm{est}}(k)$.
4: Dynamic stepsize: Agent $i$ calculates its stepsize $\alpha_{i,k}$ based on the estimate $f_i^{\mathrm{est}}(k)$ according to (22) and (23).
5: Local variable updates: Agent $i$ updates according to (24). Set $k = k + 1$ and go to Step 7.
6: Idle agent $i$ does not update and maintains its variables in the new iteration as in (25) and (26). Set $k = k + 1$.
7: Repeat Steps 2 to 6 until the predefined maximum iteration number is reached.
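A minimal sketch of this gossip-like schedule follows; the per-agent work (Steps 3–5, i.e., (20)–(24)) is abstracted into a callback since those displays are not reproduced here, and the one-idle-agent-per-iteration choice mirrors the setup of Example 3 below.

```python
import numpy as np

def asyn_schedule(step, n_agents=10, iters=5, n_idle=1, seed=0):
    # At each iteration a random subset of agents is idle; the active
    # set J_k executes the local update, while idle agents carry their
    # variables over unchanged (Step 6 of Algorithm 2).
    rng = np.random.default_rng(seed)
    for k in range(iters):
        idle = set(rng.choice(n_agents, size=n_idle, replace=False).tolist())
        J_k = [i for i in range(n_agents) if i not in idle]
        for i in J_k:
            step(i, k)  # agent i runs Steps 3-5 (estimate, stepsize, update)

if __name__ == "__main__":
    asyn_schedule(lambda i, k: print(f"iteration {k}: agent {i} updates"))
```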

    B. Analysis of the Algorithm

To define the history of the algorithm, we denote the $\sigma$-algebra generated by the entire history of our algorithm until time $k$ by $\mathcal{F}_k$, i.e., for $k \geq 0$,

The convergence rates of the distributed gradient algorithms [11], [20] to the optimal solution are sublinear for convex functions due to the use of diminishing stepsizes. The convergence rates of DS-UD and AsynDGD are also sublinear; however, we will discuss in detail why the proposed algorithms can achieve faster convergence than the algorithms with diminishing stepsizes.

Recall that the dynamic stepsize is defined by the Polyak-type rule below (a reconstruction: the exact definitions are (4), (5) in the synchronous case and (22), (23) in the asynchronous case, with the estimate $f_i^{\mathrm{est}}(k)$ replacing the unknown optimal value in the Polyak stepsize [25]).
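$$\alpha_{i,k} = \frac{f_i(y_i^k) - f_i^{\mathrm{est}}(k)}{\left\| g_i^k \right\|^2},$$

where $g_i^k$ denotes a subgradient of $f_i$ at $y_i^k$.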

    V. NUMERICAL EXAMPLES

In this section, we provide numerical examples of the convergence performance of the proposed algorithms and compare them with existing distributed gradient algorithms. The results are consistent with our theoretical convergence analysis and illustrate the improved algorithmic performance.

Example 1: First, we study the performance of DS-UD. We consider an undirected cycle consisting of 4 agents. The convex objective functions are as follows.

Fig. 1. The estimates of 4 agents and the residual of the DS-UD algorithm. (a) The estimates for the first component. (b) The estimates for the second component. (c) The residual.
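A plausible reconstruction of the local objectives in the distributed estimation problem referenced below as (46), consistent with the measurement matrices $M_i$ and the noise $\omega_i$ introduced next, is the regularized least-squares form

$$f_i(\theta) = \left\| z_i - M_i \theta \right\|^2 + \gamma_i \left\| \theta \right\|^2, \qquad z_i = M_i \theta^{\star} + \omega_i,$$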

where $\gamma_i$ is the regularization parameter.

Consider a randomly generated undirected connected network consisting of 100 sensors; the average degree of the network is 49. We set $s = 10$, $d = 10$, and $\gamma_i = 0.05$. The symmetric measurement matrix $M_i \in \mathbb{R}^{10 \times 10}$ has eigenvalues

    Fig. 2. The normalized relative errors of three algorithms. (a) The normalized relative errors of DGD, D-NG and DS-UD algorithms versus the number of iterations. (b) The normalized relative errors of DGD, D-NG and DS-UD algorithms versus the number of gradient calculations.

Note that the proposed DS-UD algorithm utilizes two gradients at each iteration: one of them is used to construct the stepsize, and the other one determines the update direction. This means that the iteration complexity (the number of gradient calculations per iteration) of the DS-UD algorithm is twice that of the DGD, D-NG, and DDA algorithms. Therefore, to have a fair comparison with the DGD, D-NG, and DDA algorithms, the plots in Figs. 2(a) and 3(a) are in terms of the number of iterations, and the plots in Figs. 2(b) and 3(b) are in terms of the number of gradient calculations.

    Fig. 3. The normalized relative errors of three algorithms. (a) The normalized relative errors of DGD, DDA and DS-UD algorithms versus the number of iterations. (b) The normalized relative errors of DGD, DDA and DS-UD algorithms versus the number of gradient calculations.

Moreover, DS-UD requires fewer iterations and gradient calculations than the DGD, D-NG, and DDA algorithms to solve the optimization problem to a high level of accuracy. It can be seen that DS-UD brings a satisfactory convergence result for the distributed optimization problem and outperforms the DGD, D-NG, and DDA algorithms.

    Besides, we provide the trajectory of dynamic stepsizes in DS-UD and compare it to the diminishing stepsize in DGD.

    Fig. 4. The stepsizes of DGD and DS-UD algorithms.

Fig. 5. The distance $S_R$ between the current stepsizes and the optimal stepsizes of the DS-UD and DGD algorithms.

Example 3: Now, we examine the effectiveness of AsynDGD. We compare it with the GRP algorithm in [20] and the primal-dual algorithm in [21].

Consider an undirected fully connected network consisting of 10 sensors. The sensors attempt to measure a parameter $\theta^{\star}$ by solving the distributed estimation problem (46). We set $s = 1$, $d = 2$, and $\gamma_i = 0.2$. $M_i \in \mathbb{R}^{1 \times 2}$ has entries randomly generated in $(0,1)$, and the noise $\omega_i \in \mathbb{R}$ follows an i.i.d. Gaussian sequence $\mathcal{N}(0, 0.1)$, $i = 1, \ldots, 10$. The constraint set is $\Omega = \{\theta \in \mathbb{R}^2 : \|\theta\| \leq 5\}$.

In the asynchronous scenario, for a fair comparison, the three algorithms are assumed to use the same gossip-like protocol as in this work. Specifically, at each iteration, one of the 10 sensors is randomly selected to be idle; it does not update, and its associated edges are not activated.

Fig. 6(a) depicts the averaged normalized relative error (over the Monte Carlo runs) of the three algorithms versus the total number of iterations. Fig. 6(b) depicts the averaged normalized relative error (over the Monte Carlo runs) of the three algorithms versus the total number of gradient calculations of the 10 sensors. Fig. 6 shows that GRP and the primal-dual algorithm converge faster than AsynDGD at the beginning, but fall behind AsynDGD after this brief initial fast phase. Besides, AsynDGD requires fewer iterations and gradient calculations than GRP and the primal-dual algorithm to solve the optimization problem to a high level of accuracy. The reason for the observed result is the same as that in Example 2 and is thus omitted. It is seen that AsynDGD achieves improved convergence performance for the distributed optimization problem.

    VI. CONCLUSIONS

In this paper, distributed gradient algorithms with dynamic stepsizes are proposed for constrained distributed convex optimization problems. First, we develop distributed optimization algorithms for both synchronous and asynchronous communication protocols, in which each agent calculates its dynamic stepsize based on the time-varying estimate of its local function value at the global optimal solution. Second, we present the convergence analysis for the proposed algorithms. Besides, we compare them with existing algorithms through numerical examples of distributed estimation problems to illustrate their effectiveness.

Fig. 6. The averaged normalized relative errors of three asynchronous algorithms. (a) The averaged normalized relative error of the three asynchronous algorithms versus the number of iterations. (b) The averaged normalized relative error of the three asynchronous algorithms versus the number of gradient calculations.
