
    Distributed Subgradient Algorithm for Multi-Agent Optimization With Dynamic Stepsize

IEEE/CAA Journal of Automatica Sinica, 2021, Issue 8

Xiaoxing Ren, Dewei Li, Yugeng Xi, and Haibin Shao

Abstract—In this paper, we consider distributed convex optimization problems on multi-agent networks. We develop and analyze a distributed gradient method that allows each agent to compute its dynamic stepsize by utilizing a time-varying estimate of its local function value at the global optimal solution. Our approach can be applied to both synchronous and asynchronous communication protocols. Specifically, we propose the distributed subgradient with uncoordinated dynamic stepsizes (DS-UD) algorithm for the synchronous protocol and the AsynDGD algorithm for the asynchronous protocol. Theoretical analysis shows that the proposed algorithms guarantee that all agents reach a consensus on the solution to the multi-agent optimization problem. Moreover, the proposed approach with dynamic stepsizes eliminates the requirement of diminishing stepsizes in existing works. Numerical examples of distributed estimation in sensor networks are provided to illustrate the effectiveness of the proposed approach.

    I. INTRODUCTION

DISTRIBUTED optimization in multi-agent systems has received extensive attention due to its ubiquity in scenarios such as power systems [1], [2], smart grids [3], [4], compressed sensing problems [5], [6], learning-based control [7], and machine learning [8], [9]. In distributed optimization problems, a whole task can be accomplished cooperatively by a group of agents via simple local information exchange and computation.

There exist various studies of distributed optimization methods on multi-agent networks, among which the most widely studied are distributed gradient methods. In this line of research, Nedic and Ozdaglar [10] develop a general framework for the multi-agent optimization problem over a network; they propose a distributed subgradient method and analyze its convergence properties. They further consider the case where the agents' states are constrained to convex sets and propose the projected consensus algorithm in [11]. The authors of [12] develop and analyze the dual averaging subgradient method, which carries out a projection operation after averaging and descending. In [13], two fast distributed gradient algorithms based on the centralized Nesterov gradient algorithm are proposed. Novel distributed methods that achieve linear rates for strongly convex and smooth problems have been proposed in [14]–[18]; the common idea in these methods for correcting the error caused by a fixed stepsize is to construct a correction term using historical information. To deal with the case where communications among agents are asynchronous, some extensions have been proposed. Nedic [19] proposes an asynchronous broadcast-based algorithm, while the authors in [20] develop a gossip-based random projection (GRP) algorithm; both works study the convergence of the algorithms for a diminishing stepsize and the error bounds for a fixed stepsize. Lei et al. [21] consider the distributed constrained optimization problem in random graphs and develop a distributed primal-dual algorithm that uses the same diminishing stepsize for both the consensus part and the subgradient part.

The selection of the stepsize is critical in the design of gradient methods. Typically, the literature considers two types of stepsizes, namely, diminishing stepsizes and fixed stepsizes. The existing distributed gradient methods with diminishing stepsizes asymptotically converge to the optimal solution. The diminishing stepsizes should follow a decaying rule: they are positive, vanishing, and non-summable but square summable, see e.g., [10], [11], [13], [19]–[22]; a minimal example is given below. In [23], a wider selection of stepsizes is explored: the square-summability requirement commonly adopted in the literature is removed, which opens the possibility of a better convergence rate. These methods are widely applicable to nonsmooth convex functions, but convergence is rather slow due to the diminishing stepsizes. With a fixed stepsize, it is shown in [24] that the algorithm converges faster, but only to a point in the neighborhood of an optimal solution. The recently developed distributed algorithms with fixed stepsizes [14]–[18] achieve linear convergence to the optimal solution. However, they require that the local objective functions be strongly convex and smooth. Besides, the fixed stepsize must be less than a critical value determined by the weight matrix of the network and the Lipschitz continuity and strong convexity parameters of the objective functions. Thus, these algorithms impose restrictive conditions on the fixed stepsize and require knowledge of global information, which limits their applicability.
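Concretely, the decaying rule referenced above is usually stated as

$$\alpha_k > 0, \qquad \lim_{k\to\infty}\alpha_k = 0, \qquad \sum_{k=0}^{\infty}\alpha_k = \infty, \qquad \sum_{k=0}^{\infty}\alpha_k^2 < \infty,$$

which is satisfied, for instance, by $\alpha_k = 1/(k+1)$: the harmonic series diverges while the series of squares converges.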

Compared with previous work, the contribution of this paper is a novel dynamic stepsize selection approach for the distributed gradient algorithm. We develop the associated distributed gradient algorithms for synchronous and asynchronous gossip-like communication protocols. An interesting feature of the dynamic stepsize is that, unlike the existing distributed algorithms whose diminishing or fixed stepsizes are determined before the algorithm is run, the proposed distributed subgradient with uncoordinated dynamic stepsizes (DS-UD) and AsynDGD algorithms use dynamic stepsizes that rely on time-varying estimates of the optimal function values generated at each iteration of the algorithm. The advantages of the proposed dynamic stepsize lie in two aspects. On the one hand, it only requires that the local convex objective functions have locally bounded subgradients in the synchronous scenario and locally Lipschitz continuous, bounded gradients in the asynchronous scenario. Besides, the dynamic stepsize needs no knowledge of global information about the network or the objective functions. Thus, the proposed algorithms are more widely applicable than the distributed algorithms with fixed stepsizes. On the other hand, the dynamic stepsize can overcome the inefficient computations caused by diminishing stepsizes and achieve faster convergence. The dynamic stepsize is a generalization of the Polyak stepsize [25] (recalled below), which is commonly used in centralized optimization and is shown to converge faster than the diminishing stepsize even with an estimated optimal function value [26]. Note that the proposed algorithms utilize two gradients at each iteration: one is used to construct the stepsize and the other gives the direction, which means that the iteration complexity of the algorithm is doubled. However, numerical examples in which the plots are in terms of the number of gradient calculations illustrate the effectiveness of the proposed algorithms.
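For reference, the classical Polyak stepsize [25] for a centralized subgradient step $x_{k+1} = x_k - \alpha_k g_k$, with $g_k \in \partial f(x_k)$, uses the optimal value $f^*$:

$$\alpha_k = \frac{f(x_k) - f^*}{\|g_k\|^2}.$$

When $f^*$ is unknown, [26] replaces it with a running estimate; the dynamic stepsizes developed in this paper follow the same principle in a distributed setting.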

The remainder of this paper is organized as follows. Section II describes the problem formulation. The distributed subgradient algorithm with uncoordinated dynamic stepsizes and its convergence analysis are provided in Section III. Section IV discusses the extension of the proposed algorithm to the asynchronous communication protocol. In Section V, we apply our algorithms to distributed estimation problems to illustrate their effectiveness. Finally, we make concluding remarks in Section VI.

II. PROBLEM FORMULATION

Consider a network consisting of N agents; the goal of the agents is to cooperatively solve the following problem defined on the network.
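In the notation used throughout the paper, and assuming the standard constrained consensus form (the constraint set $\Omega$ below is the one appearing in Algorithms 1 and 2):

$$\min_{x \in \Omega} \; f(x) = \sum_{i=1}^{N} f_i(x),$$

where each $f_i$ is a convex local objective known only to agent $i$ and $\Omega$ is a closed convex constraint set.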

III. DISTRIBUTED SUBGRADIENT ALGORITHM WITH DYNAMIC STEPSIZES

In this section, we derive the distributed subgradient algorithm with dynamic stepsizes for the synchronous communication protocol and present its convergence analysis.

    A. Algorithm Development

Algorithm 1 summarizes the above steps; this distributed subgradient method with uncoordinated dynamic stepsizes is abbreviated as DS-UD.

Remark 1: Since the convergence speed of the algorithm varies when solving different optimization problems, different maximum iteration numbers can be set for different problems to ensure that the optimality error is decreasing only slowly by the time the maximum iteration is reached. In practical applications, the maximum iteration number can be set according to the connectivity of the multi-agent network and the scale of the optimization problem.

Algorithm 1 DS-UD
1: Initialize: Given initial variables x_i^0 = y_i^0 ∈ Ω, ∀i ∈ V, the weight matrix W ∈ R^{N×N} under Assumptions 2 and 3, and the maximum iteration number. Set k = 0.
2: Obtain the estimate: Each agent i ∈ V computes (2) and (3) to get the estimate f_i^est(k).
3: Dynamic stepsize: Each agent i obtains its stepsize α_{i,k} based on the estimate f_i^est(k) according to (4) and (5).
4: Local variable update: Each agent i updates according to (6). Set k = k + 1.
5: Repeat Steps 2 to 4 until the predefined maximum iteration number is reached.
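To make the structure concrete, the following minimal Python sketch implements a DS-UD-style iteration under explicit assumptions: since (2)–(6) are not reproduced above, the estimate rule (a running best value pushed down by a small margin delta), the Polyak-type stepsize, and the ball projection are illustrative stand-ins rather than the paper's exact formulas.

```python
import numpy as np

def project_ball(x, radius=5.0):
    # Euclidean projection onto an assumed Omega = {x : ||x|| <= radius}
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def ds_ud(fs, grads, W, x0, iters=500, delta=1e-3):
    # fs/grads: per-agent objectives and (sub)gradient oracles;
    # W: doubly stochastic weight matrix; x0: (N, d) array of initial points.
    N, d = x0.shape
    x = x0.copy()
    f_best = np.array([fs[i](x[i]) for i in range(N)])  # best local value seen so far
    for _ in range(iters):
        y = W @ x                                  # consensus (weighted averaging) step
        for i in range(N):
            f_best[i] = min(f_best[i], fs[i](y[i]))
            f_est = f_best[i] - delta              # estimated optimal value (assumption)
            g = grads[i](y[i])
            alpha = (fs[i](y[i]) - f_est) / max(g @ g, 1e-12)  # Polyak-type dynamic stepsize
            x[i] = project_ball(y[i] - alpha * g)  # projected subgradient step
    return x

# Toy usage: 4 agents on a cycle, f_i(x) = ||x - c_i||^2; the returned rows
# should be close to a common point (approximate consensus).
rng = np.random.default_rng(0)
c = rng.normal(size=(4, 2))
fs = [lambda x, ci=ci: float((x - ci) @ (x - ci)) for ci in c]
grads = [lambda x, ci=ci: 2.0 * (x - ci) for ci in c]
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
print(ds_ud(fs, grads, W, np.zeros((4, 2))))
```

What distinguishes the actual DS-UD from this simplification is the estimate update (2) and (3), which lets each agent track its local function value at the global optimum rather than its own local minimum.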

    B. Analysis of the Algorithm

Substituting (9) into (8), for all x ∈ Ω and k ≥ 0,

    IV. ASYNCHRONOUS COMMUNICATION

In this section, we extend the DS-UD algorithm to an asynchronous communication protocol, which allows a group of agents to update in each iteration while the others do not. We also establish the convergence analysis of the proposed asynchronous algorithm.

    A. Algorithm Development

In practical multi-agent systems, there exist uncertainties in communication networks, such as packet drops and link failures. We consider the gossip-like asynchronous communication protocol from [28]. Specifically, each agent is assumed to have a local clock that ticks at a Poisson rate of 1, independently of the other agents' clocks. This setting is equivalent to having a single virtual clock whose ticking times form a Poisson process of rate N and which ticks whenever any local clock ticks. Let Z_k be the absolute time of the k-th tick of the virtual clock.
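A short simulation sketch of this equivalence (the variable names are ours): the superposition of N independent rate-1 Poisson clocks is a single rate-N Poisson process whose k-th tick belongs to a uniformly random agent.

```python
import numpy as np

rng = np.random.default_rng(1)
N, ticks = 10, 5
# Inter-tick times of a rate-N Poisson process are exponential with mean 1/N.
Z = np.cumsum(rng.exponential(scale=1.0 / N, size=ticks))  # virtual-clock tick times Z_k
who = rng.integers(0, N, size=ticks)                       # agent whose local clock ticked
for k in range(ticks):
    print(f"k = {k}: Z_k = {Z[k]:.3f}, agent {who[k]} wakes up")
```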

    The idle agents do not update, i.e.,

This asynchronous distributed gradient method with dynamic stepsizes is abbreviated as AsynDGD; Algorithm 2 summarizes the above steps. We remark that the maximum iteration number in Algorithm 2 is set in the same way as in Algorithm 1.

Algorithm 2 AsynDGD
1: Initialize: Given initial variables x_i^0 = y_i^0 ∈ Ω, ∀i ∈ V, and the maximum iteration number. Set k = 0.
2: Asynchronous updates: For i ∈ V, if i ∈ J_k, go to Step 3; if i ∉ J_k, go to Step 6.
3: Optimal value estimation: Agent i computes (20) and (21) to get the estimate f_i^est(k).
4: Dynamic stepsize: Agent i calculates its stepsize α_{i,k} based on the estimate f_i^est(k) according to (22) and (23).
5: Local variable update: Agent i updates according to (24). Set k = k + 1, go to Step 7.
6: Idle agent i does not update and maintains its variables in the new iteration as in (25) and (26). Set k = k + 1.
7: Repeat Steps 2 to 6 until the predefined maximum iteration number is reached.
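Under the same caveats as the synchronous sketch, a minimal asynchronous step might look as follows; the one-idle-agent pattern follows the protocol described in Example 3, while the renormalization of weights over active agents and the projection radius of 5 (the constraint set of Example 3) are our assumptions.

```python
import numpy as np

def asyn_step(x, fs, grads, W, f_best, delta=1e-3, rng=None):
    # One AsynDGD-style iteration (structural sketch, not the paper's exact
    # (20)-(26)): a random agent is idle and holds its variables; each active
    # agent averages over active neighbors, then takes a projected
    # Polyak-type subgradient step with its estimated optimal value.
    rng = rng or np.random.default_rng()
    N = x.shape[0]
    idle = rng.integers(0, N)                 # Example 3 protocol: one idle agent
    x_new = x.copy()
    for i in range(N):
        if i == idle:
            continue                          # idle agent keeps x_i and f_best[i]
        w = W[i].copy()
        w[idle] = 0.0
        w = w / w.sum()                       # renormalize over active agents (assumption)
        y = w @ x
        f_best[i] = min(f_best[i], fs[i](y))
        g = grads[i](y)
        alpha = (fs[i](y) - (f_best[i] - delta)) / max(g @ g, 1e-12)
        z = y - alpha * g
        n = np.linalg.norm(z)
        x_new[i] = z if n <= 5.0 else z * (5.0 / n)  # projection onto ||θ|| <= 5
    return x_new
```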

    B. Analysis of the Algorithm

To define the history of the algorithm, we denote by F_k the σ-algebra generated by the entire history of the algorithm up to time k, i.e., for k ≥ 0,

The convergence rates of the distributed gradient algorithms [11], [20] to the optimal solution are sublinear for convex functions due to the use of diminishing stepsizes. The convergence rates of DS-UD and AsynDGD are also sublinear; however, we will discuss in detail why the proposed algorithms can achieve faster convergence than the algorithms with diminishing stepsizes.

Recall that the dynamic stepsize is defined by (4) and (5) (respectively, (22) and (23) in the asynchronous case).
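A plausible Polyak-type form consistent with the surrounding description (our notation, not necessarily the exact expression in (4) and (5)) is

$$\alpha_{i,k} = \frac{f_i(y_i^k) - f_i^{\text{est}}(k)}{\|g_i(y_i^k)\|^2},$$

where $y_i^k$ is agent $i$'s post-averaging iterate, $g_i(y_i^k)$ is a subgradient of $f_i$ at $y_i^k$, and $f_i^{\text{est}}(k)$ is the current estimate of the local function value at the global optimum.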

    V. NUMERICAL EXAMPLES

In this section, we provide numerical examples of the convergence performance of the proposed algorithms and compare them with existing distributed gradient algorithms. The results are consistent with our theoretical convergence analysis and illustrate the improved algorithmic performance.

Example 1: First, we study the performance of DS-UD. We consider an undirected cycle consisting of 4 agents. The convex objective functions are as follows.

Fig. 1. The estimates of 4 agents and the residual of the DS-UD algorithm. (a) The estimates for the first component. (b) The estimates for the second component. (c) The residual.

where γ_i is the regularization parameter.

Consider a randomly generated undirected connected network consisting of 100 sensors; the average degree of the network is 49. We set s = 10, d = 10, and γ_i = 0.05. The symmetric measurement matrix M_i ∈ R^{10×10} has eigenvalues
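For concreteness, a hypothetical instance of this setup can be generated as follows. The objective form f_i(θ) = ‖M_iθ − y_i‖² + γ_i‖θ‖², the noise level, and the generic random symmetric M_i (the eigenvalue specification above is not reproduced) are our assumptions, not the paper's exact data.

```python
import numpy as np

rng = np.random.default_rng(42)
N, d, gamma = 100, 10, 0.05
theta_star = rng.normal(size=d)                     # unknown parameter to estimate
A = rng.normal(size=(N, d, d))
M = (A + np.transpose(A, (0, 2, 1))) / 2.0          # symmetric measurement matrices
y = np.einsum("nij,j->ni", M, theta_star) + 0.1 * rng.normal(size=(N, d))

def f(i, theta):
    # Assumed regularized local estimation objective of sensor i.
    r = M[i] @ theta - y[i]
    return float(r @ r + gamma * (theta @ theta))

def grad_f(i, theta):
    return 2.0 * M[i].T @ (M[i] @ theta - y[i]) + 2.0 * gamma * theta
```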

    Fig. 2. The normalized relative errors of three algorithms. (a) The normalized relative errors of DGD, D-NG and DS-UD algorithms versus the number of iterations. (b) The normalized relative errors of DGD, D-NG and DS-UD algorithms versus the number of gradient calculations.

Note that the proposed DS-UD algorithm utilizes two gradients at each iteration: one is used to construct the stepsize, and the other gives the update direction. This means that the iteration complexity (the number of gradient calculations per iteration) of the DS-UD algorithm is twice that of the DGD, D-NG, and DDA algorithms. Therefore, for a fair comparison with the DGD, D-NG, and DDA algorithms, the plots in Figs. 2(a) and 3(a) are in terms of the number of iterations, and the plots in Figs. 2(b) and 3(b) are in terms of the number of gradient calculations.

    Fig. 3. The normalized relative errors of three algorithms. (a) The normalized relative errors of DGD, DDA and DS-UD algorithms versus the number of iterations. (b) The normalized relative errors of DGD, DDA and DS-UD algorithms versus the number of gradient calculations.

Moreover, DS-UD requires fewer iterations and gradient calculations than the DGD, D-NG, and DDA algorithms to solve the optimization problem to a high level of accuracy. It can be seen that DS-UD delivers satisfactory convergence for the distributed optimization problem and outperforms the DGD, D-NG, and DDA algorithms.

Besides, we plot the trajectory of the dynamic stepsizes in DS-UD and compare it with the diminishing stepsize in DGD.

    Fig. 4. The stepsizes of DGD and DS-UD algorithms.

Fig. 5. The distance between the current stepsizes and the optimal stepsizes of the DS-UD and DGD algorithms.

Example 3: Now, we examine the effectiveness of AsynDGD. We compare it with the GRP algorithm in [20] and the primal-dual algorithm in [21].

Consider an undirected fully connected network consisting of 10 sensors. The sensors attempt to measure a parameter θ* by solving the distributed estimation problem (46). We set s = 1, d = 2, and γ_i = 0.2. M_i ∈ R^{1×2} has entries randomly generated in (0, 1), and the noise ω_i ∈ R follows an i.i.d. Gaussian sequence N(0, 0.1), i = 1, ..., 10. The constraint set is Ω = {θ ∈ R²: ‖θ‖ ≤ 5}.

In the asynchronous scenario, for a fair comparison, the three algorithms are assumed to use the same gossip-like protocol as in this work. Specifically, at each iteration, one of the 10 sensors is randomly selected to be idle; it does not update, and its associated edges are not activated.

Fig. 6(a) depicts the averaged normalized relative error (over the Monte Carlo runs) of the three algorithms versus the total number of iterations, and Fig. 6(b) depicts it versus the total number of gradient calculations of the 10 sensors. Fig. 6 shows that GRP and the primal-dual algorithm converge faster than AsynDGD at the beginning, but fall behind AsynDGD after this short fast phase. Besides, AsynDGD requires fewer iterations and gradient calculations than GRP and the primal-dual algorithm to solve the optimization problem to a high level of accuracy. The reason for this behavior is the same as in Example 2 and is thus omitted. It is seen that AsynDGD achieves improved convergence performance for the distributed optimization problem.

    VI. CONCLUSIONS

In this paper, distributed gradient algorithms with dynamic stepsizes are proposed for constrained distributed convex optimization problems. First, we develop distributed optimization algorithms for both synchronous and asynchronous communication protocols, in which each agent calculates its dynamic stepsize based on a time-varying estimate of its local function value at the global optimal solution. Second, we present the convergence analysis of the proposed algorithms. Finally, we compare them with existing algorithms through numerical examples of distributed estimation problems to illustrate their effectiveness.

Fig. 6. The averaged normalized relative errors of three asynchronous algorithms. (a) The averaged normalized relative error of the three asynchronous algorithms versus the number of iterations. (b) The averaged normalized relative error of the three asynchronous algorithms versus the number of gradient calculations.
