
A PENALTY FUNCTION METHOD FOR THE PRINCIPAL-AGENT PROBLEM WITH AN INFINITE NUMBER OF INCENTIVE-COMPATIBILITY CONSTRAINTS UNDER MORAL HAZARD


    Jia LIU(劉佳)

School of Economics and Management, Wuhan University, Wuhan 430072, China

    E-mail:liujia.06@163.com

    Xianjia WANG(王先甲)

School of Economics and Management, Wuhan University, Wuhan 430072, China; Institute of Systems Engineering, Wuhan University, Wuhan 430072, China

    E-mail:wangxj@whu.edu.cn

Abstract In this paper, we propose an iterative algorithm to find the optimal incentive mechanism for the principal-agent problem under moral hazard where the number of agent action profiles is infinite, and where there are an infinite number of results that can be observed by the principal. This principal-agent problem has an infinite number of incentive-compatibility constraints, and we transform it into an optimization problem with an infinite number of constraints, called a semi-infinite programming problem. We then propose an exterior penalty function method to find the optimal solution to this semi-infinite programming problem and illustrate the convergence of this algorithm. By analyzing the optimal solution obtained by the proposed penalty function method, we can obtain the optimal incentive mechanism for the principal-agent problem with an infinite number of incentive-compatibility constraints under moral hazard.

Key words principal-agent problem; mechanism design; moral hazard; semi-infinite programming problem; penalty function method

    1 Introduction

In the principal-agent problem under moral hazard, the agent has private information about his personal behaviors, while the principal can only observe the results caused by the agent's actions. In this case, the principal needs to design an incentive mechanism for all agents based on the observed results to maximize his expected profit. Since the agent has private information and acts with the goal of maximizing his own expected profit, the principal's purpose in designing the incentive mechanism is to maximize his own expected profit while motivating the agents to act in a certain way.

The optimal incentive mechanism satisfies two kinds of constraints: the incentive-compatibility constraint and the individual rationality constraint. The incentive-compatibility constraint states that the expected profit obtained by the agent when he chooses the action desired by the principal is not less than that obtained when he chooses any other action; it is what allows the principal to design the incentive mechanism around the agents' actions so as to maximize his own profit. The individual rationality constraint states that the agent's profit under the optimal incentive mechanism is not less than the maximum profit obtained when the optimal incentive mechanism is not accepted; it is the basis for the agent's participation in the incentive mechanism. All feasible actions taken by all individuals in making decisions must satisfy these two constraints.

In the incentive mechanism design problem under moral hazard, the principal designs different payment schemes for the agent according to the observable results. Generally, it is assumed that the number of agent action profiles is finite and that there are a limited number of results that can be observed by the principal. This assumption makes the number of incentive-compatibility constraints finite when the principal seeks to maximize his expected profit. Given the form of the optimal incentive mechanism, the problem of finding the optimal incentive mechanism can be transformed into that of solving an optimization problem with finitely many constraints. In this case, the Kuhn-Tucker theorem or the Lagrange method can be used to solve the optimization problem, and the optimal incentive mechanism can be obtained from the optimal solution to this optimization problem. The vast majority of researchers study incentive mechanism design problems based on this idea. However, in practical economic and political problems, agents may have an infinite number of action profiles and there may be an infinite number of results that can be observed by a principal, which makes the number of incentive-compatibility constraints infinite. In this case, the incentive mechanism design problem with an infinite number of incentive-compatibility constraints can be transformed into an optimization problem with an infinite number of constraints; this is called a semi-infinite programming problem.

The main aim of this paper is to design an iterative algorithm to find the optimal incentive mechanism for the principal-agent problem with an infinite number of constraints under moral hazard. As the numbers of variables and constraints grow, the traditional method becomes too complex to yield an analytic solution. Since the 1970s, there has been much development in solving general mechanism design problems, and a feasible analytical framework for obtaining analytical solutions has been established. However, there are many inconveniences when there are too many variables and too many constraints, such as the discussion of the complex optimality conditions used in analytical methods. Inspired by the algorithms of optimization theory, we hope to design an iterative algorithm which can solve some complex mechanism design problems. This is the main motivation of this paper.

The iterative algorithm designed in this paper is an exterior penalty function method, called the M-penalty function method, where M is the amplification factor for the penalty function. This algorithm transforms the solving of a semi-infinite programming problem into the solving of a series of unconstrained optimization problems. By establishing a set of penalty functions, the optimal solutions of the unconstrained programming problems can be made to approximate the optimal solution of the original semi-infinite programming problem. So far, few researchers have used optimization algorithms to find the optimal incentive mechanism of the principal-agent problem with an infinite number of constraints.

The traditional analytical method and the iterative algorithm designed in this paper have their own advantages and disadvantages. For the principal-agent problem with a simple setting, it is more convenient to use the traditional analytical method to find the optimal incentive mechanism. When the parameters become more numerous and the constraints become more complex, the traditional method is more limited and it is difficult to get the optimal solution. In such cases, the iterative algorithm proposed in this paper is more conducive to solving principal-agent problems.

Since there are no additional assumptions on individual utility functions, such as satisfying convexity or concavity, the proposed iterative algorithm can solve mechanism design problems with complex functions that cannot be solved by traditional methods. Also, because this paper makes no additional assumptions about the form of the principal's contract, the iterative algorithm in this paper can be used for nonlinear contracts as well as for traditional linear contracts.

In order to construct an iterative algorithm to find the optimal incentive mechanism for the principal-agent problem under moral hazard, this paper transforms this principal-agent problem into a semi-infinite programming problem. Then, by designing a penalty function method for solving semi-infinite programming problems, we obtain an iterative algorithm to find the optimal incentive mechanism for this principal-agent problem.

The structure of this paper is as follows: Section 2 reviews the related literature. In Section 3, the principal-agent problem with an infinite number of agent action profiles and an infinite number of observable results for the principal under moral hazard is established. In Section 4, the principal-agent problem is transformed into a standard semi-infinite programming problem (SIP), and we propose a kind of exterior penalty function method, called the M-penalty function method, to solve this semi-infinite programming problem. In Section 5, we study the convergence of the M-penalty function method. In Section 6, a numerical example is used to illustrate the method of finding the optimal incentive mechanism of a two-person principal-agent problem under moral hazard where the agent's action profile set is infinite and the principal can observe an infinite number of results. Finally, Section 7 offers the conclusions of this paper.

    2 Related Literature

    2.1 Mechanism Design Theory

The research and development of mechanism design theory falls into two categories: the mechanism design problem under adverse selection and the mechanism design problem under moral hazard.

The main feature of the adverse selection model is ex ante asymmetric information. Nature selects the type of agent, and the agent knows his own type while the principal does not. In the signaling model, in order to reveal his type, the agent chooses a signal and the principal signs a contract with the agent after observing the signal. In the screening model, the principal provides multiple contracts for the agent to choose from, and the agent selects a suitable contract according to his or her own type and acts according to the contract (see, e.g., [1–6]).

The main feature of the moral hazard model is ex post asymmetric information; the information is symmetric before signing the contract. In the moral hazard model with hidden action, after signing the contract, the principal can only observe the results caused by the agent's choice of actions and the state of the world. At this time, the principal should design incentive contracts to induce the agent, who acts to maximize his own profit, to choose the most beneficial action for the principal. In the moral hazard model with hidden information, nature chooses the state of the world; the agent observes nature's selection and acts, while the principal observes the agent's actions but cannot observe nature's selection. At this point, the principal should design an incentive contract to encourage the agent to choose the best action for the principal under a given state of the world (see, e.g., [7–10]).

There are several papers in the literature on the incentive mechanism design problem in which agents have continuous action spaces that are closely related to the present work. The main methods for dealing with such a mechanism design problem are based on analytic methods for solving optimization problems. The basic treatments include: (1) discretizing the agent's action space so that the number of incentive-compatibility constraints is finite, in which case the Kuhn-Tucker theorem can be used to find the optimal incentive mechanism; (2) replacing the infinite number of incentive-compatibility constraints with relatively simple local incentive-compatibility constraints. The classic single-agent moral hazard problem already has "infinitely many actions" and thus an "infinite number of incentive-compatibility constraints" (see, e.g., [11, 12]). For example, Laffont and Martimort studied the incentive mechanism design problem in which the agents have a continuous level of effort ([12]). Since the optimal solution does not necessarily exist ([13]), they used this method to deal with the infinite number of incentive-compatibility constraints, and found sufficient conditions for the optimal solution to the original principal-agent problem.

In this paper, we study the incentive mechanism design problem under moral hazard with hidden action. We use an iterative algorithm to find the optimal incentive mechanism for the principal-agent problem under moral hazard where the number of agent action profiles is infinite, and where there are an infinite number of results that can be observed by the principal. The form of the optimal payment mechanism is obtained by an iterative method, which avoids the complex optimality discussions that emerge when using analytical methods.

2.2 Semi-infinite Programming Problem

In this paper, it is found that if the number of agent action profiles is infinite and there are an infinite number of results that can be observed by the principal, the optimal incentive mechanism for this incentive mechanism design problem is the optimal solution to an optimization problem with an infinite number of constraints; this is called a semi-infinite programming problem.

The semi-infinite programming problem mainly concerns constrained programming problems with an infinite number of constraints. It is applied in many fields ([14]), such as the data envelopment analysis of an infinite number of decision-making units ([15]), robot trajectory planning ([16]), and economic, financial and engineering models ([17]).

There are many algorithms for solving semi-infinite programming problems, such as the discretization method ([18, 19]), the homotopy interior point method ([20]), the Newton method ([21]), the semismooth Newton method ([22]), the trust region method ([23]), the augmented Lagrangian method ([24]), the exact penalty method ([25]), the quadratic programming-type method ([26]), and the branch-and-bound method ([27]). These methods are basically designed along the same lines as methods for finding the optimal solution to a nonlinear programming problem. In this paper, we also design an algorithm for solving semi-infinite programming problems based on this idea.

In this paper, a type of exterior penalty function method, called the M-penalty function method, is proposed to solve the semi-infinite programming problem. The penalty function method is concise for solving semi-infinite programming problems, and the exterior penalty function method is even more concise than the interior penalty function method. The reason the interior penalty function method is not adopted here is mainly the selection of the initial point: the interior penalty function method requires an initial point that satisfies all constraints, which is more difficult to select than an initial point for the exterior penalty function method. By discussing the optimal solution obtained by the M-penalty function method, we can get the optimal incentive mechanism for the incentive mechanism design problem with an infinite number of incentive-compatibility constraints under moral hazard.

    3 Model

In a multi-player principal-agent game, a principal (a player without private information) wants the agents (players with private information) to choose an appropriate action profile according to the goal of maximizing his expected profit. The principal cannot directly observe the action profile taken by the agents, but can only observe a certain result which is determined by the agents' action profile and some exogenous random factors. The principal's problem is how to design a reward or punishment contract for the agents, based on the observed results, to encourage them to take actions that maximize his expected profit.

In the principal-agent problem under moral hazard, the order of the game is as follows: the principal proposes a payment contract, then the agent decides whether or not to participate; if he does, he exerts a certain effort and receives a benefit under this contract. If he does not participate, the payoffs are all zero. Each individual has the goal of maximizing his or her own return. Therefore, the level of effort taken by the agent and the payment contract proposed by the principal are both obtained by solving a maximization problem.

The principal's goal is to design an incentive contract t(s) = (t_1(s), ···, t_n(s)), which is a payment vector function proposed by the principal to the agents based on the observation of results, denoted by s. Generally, it is assumed that the observed result s is a function of the agents' action profile a and an exogenously given random variable ξ ∈ Ξ. For example, the observable result s may be related to the agents' actions a and the exogenous factor ξ in the following separable form: s(a, ξ) = k_1(a) + k_2(ξ) (see, e.g., [12]). The set of all possible observed results is denoted by S. The probability distribution function and probability density function of the exogenous random variable ξ are P(ξ) and p(ξ), respectively. After the agents take an action profile a ∈ A, they need to pay a certain cost vector c(a), and the principal gets π(s(a, ξ)) based on the observed result s(a, ξ). Suppose that the utility function π(s(a, ξ)) is a strictly increasing concave function with respect to a. This means that the better the action profile the agents take, the more the principal obtains, with decreasing marginal revenue.

Assume that both the principal and the agents are risk neutral. The expected profit of the principal is

The expected profit of agent i ∈ N is
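Because the expected profits above are expectations over the exogenous random variable ξ, they can be estimated numerically. The following is a minimal Monte Carlo sketch under purely illustrative assumptions (a linear payment t(s) = α + βs, an additive result s(a, ξ) = a + ξ with ξ ~ N(0, 1), principal revenue π(s) = s, and agent cost c(a) = a²/2; none of these functional forms is taken from the paper):

```python
import random

def expected_profits(alpha, beta, a, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the principal's and agent's expected profits.

    Illustrative assumptions: s(a, xi) = a + xi with xi ~ N(0, 1),
    principal revenue pi(s) = s, linear payment t(s) = alpha + beta * s,
    and agent cost c(a) = a**2 / 2.
    """
    rng = random.Random(seed)
    principal, agent = 0.0, 0.0
    for _ in range(n_samples):
        xi = rng.gauss(0.0, 1.0)
        s = a + xi                  # observed result
        t = alpha + beta * s        # payment to the agent
        principal += s - t          # pi(s) - t(s)
        agent += t - a**2 / 2       # t(s) - c(a)
    return principal / n_samples, agent / n_samples
```

Under these assumptions, E[ξ] = 0 gives closed-form benchmarks (principal: a − α − βa; agent: α + βa − a²/2) against which the estimate can be checked.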

The first kind of constraint for designing the optimal incentive mechanism is the incentive-compatibility constraint. The principal cannot observe the agents' action profile a ∈ A, and under any incentive mechanism the agent always chooses the action that maximizes his expected profit. So if ā is the agents' action profile that maximizes their expected payoff, and a is any other action profile that the agents can take, then the agents will only choose ā if they can get more expected profit when they take ā than when they take a. Therefore, the incentive-compatibility constraint, denoted by IC, is as follows:

For the principal, the aim is to design a payment t(s) for all observed results s ∈ S so as to maximize his expected profit. Thus, the optimal payment contract is the solution to the following optimization problem:

In this optimization problem, since the action profile set A is an infinite point set, the number of constraints (IC) on the optimization problem (P) is infinite. In this paper, we assume that the principal knows the form of the payment function when designing the payment vector function, but that the payment vector function contains some unknown parameters. For example, the principal may design a linear contract, i.e., t(s) = α + βs. In that case, finding the optimal contract function t(s) is converted into finding its optimal parameters. In addition, ā is determined by the agent by maximizing his utility, and it also depends on the agent's payment function.

    4 The Optimal Incentive Mechanism

4.1 Semi-infinite Programming Model

We transform the incentive mechanism design problem (P) into the following optimization problem (P'):

There are an infinite number of constraints in the optimization problem (P), so we cannot use the Kuhn-Tucker theorem to solve this optimization problem.

It is assumed that the form of the incentive contract t(s) is known, but contains some unknown parameters. We denote all the variables that need to be decided by x = (x_1, ···, x_n). As can be seen from the optimization problem (P), its general form is

where A is an infinite-point measurable set, and N is a finite-point set. This optimization problem is called a semi-infinite programming problem (SIP).
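The display for the general form is not reproduced in this excerpt. A reconstruction consistent with the surrounding text (an objective in x, infinitely many constraints indexed by a ∈ A, and finitely many indexed by i ∈ N) is presumably the standard SIP form:

```latex
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & f(x) \\
\text{s.t.} \quad & g(x, a) \le 0, \quad \forall\, a \in A, \\
& h_i(x) \le 0, \quad i \in N.
\end{aligned}
```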

In the semi-infinite programming problem (SIP), the variable x ∈ R^n represents the parameters in the payment vector function t, and there may be more than one unknown parameter. Therefore, after we use an algorithm for the semi-infinite programming problem to get the optimal solution to the optimization problem (P'), we can find the optimal incentive mechanism for the incentive mechanism design problem (P). Thus, we first need to design an algorithm for solving the semi-infinite programming problem (SIP).

    4.2 M-penalty Function Method

Here, a kind of exterior penalty function method, called the M-penalty function method, is proposed to solve the semi-infinite programming problem (SIP). The idea behind this algorithm is to transform the constrained programming problem into an unconstrained programming problem, and then obtain the optimal solution to the constrained programming problem by solving this unconstrained programming problem. When designing the unconstrained programming problem, if a point is in the feasible domain of the original optimization problem, the objective function value is not changed; if the point is not in the feasible domain, a penalty term is added to penalize the original objective function. This is an exterior penalty function method.

First, a constraint violation function for the semi-infinite programming problem (SIP) is constructed and denoted by G(x) as follows:

Obviously, G(x) = 0 if and only if the constraints of the semi-infinite programming problem (SIP) are satisfied, and G(x) > 0 if and only if the point x ∈ R^n does not satisfy the constraints of the semi-infinite programming problem (SIP).
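As an illustration (not the paper's exact definition, whose display is not reproduced in this excerpt), a constraint violation function of this kind can be computed by taking the largest violation over all constraints, with the supremum over the infinite index set A approximated on a finite grid:

```python
def constraint_violation(x, g_list, h_list, action_grid):
    """Sketch of a constraint violation function G(x) for an SIP.

    Assumes (illustratively) constraints of the form
        g_j(x, a) <= 0  for all a in A   (infinitely many per g_j),
        h_i(x)    <= 0                   (finitely many).
    The supremum over the infinite set A is approximated on a grid.
    G(x) = 0 iff every constraint holds; G(x) > 0 otherwise.
    """
    violation = 0.0
    for g in g_list:
        for a in action_grid:           # grid approximation of sup over A
            violation = max(violation, g(x, a))
    for h in h_list:
        violation = max(violation, h(x))
    return max(0.0, violation)
```

For instance, with the single constraint g(x, a) = a − x on A = [0, 1], a feasible point (x ≥ 1) returns 0, while an infeasible one returns its worst violation.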

We define the penalty function as

where σ > 0 is the penalty factor.

We denote by x(σ) a solution to the following optimization problem:

The relationship between the optimal solution to the optimization problem (PG) and the optimal solution to the original semi-infinite programming problem (SIP) is illustrated in the following lemma:

    Lemma 4.1

If x(σ) is the optimal solution to the optimization problem (PG) and G(x(σ)) = 0, that is, if x(σ) satisfies the constraints of the semi-infinite programming problem (SIP), then x(σ) is the optimal solution to the semi-infinite programming problem (SIP).

    Proof

Since x(σ) is the optimal solution to the optimization problem (PG), for any x ∈ R^n,

Since x(σ) satisfies the constraints of the semi-infinite programming problem (SIP), so that G(x(σ)) = 0, for any point x that satisfies the constraints of the semi-infinite programming problem (SIP) we have

Then x(σ) is also the optimal solution to the semi-infinite programming problem (SIP). Therefore, the conclusion is established.
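Written out with the usual exterior penalty function F(x, σ) = f(x) + σG(x) (an assumption here, since the paper's displays are not reproduced in this excerpt), the argument of Lemma 4.1 runs:

```latex
\begin{aligned}
&\text{optimality of } x(\sigma) \text{ for (PG):} &&
f(x(\sigma)) + \sigma G(x(\sigma)) \le f(x) + \sigma G(x)
\quad \forall\, x \in \mathbb{R}^n,\\
&\text{feasibility of } x(\sigma) \text{ and of } x: &&
G(x(\sigma)) = 0, \qquad G(x) = 0,\\
&\text{hence:} && f(x(\sigma)) \le f(x) \quad \text{for every feasible } x.
\end{aligned}
```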

Lemma 4.1 shows that finding the optimal solution to the semi-infinite programming problem (SIP) can be transformed into solving the optimization problem (PG). Based on this idea, we design the M-penalty function method to solve the semi-infinite programming problem (SIP) as follows:

    M-penalty function method

(1) Step 1: Given an initial point x_0, take σ_1 > 0, M > 1, ε > 0. Set k := 1.

(2) Step 2: Using x_{k-1} as the initial point, solve the optimization problem (PG) with σ = σ_k to obtain x(σ_k). Let x_k = x(σ_k).

(3) Step 3: If

then stop, and the optimal solution is x_k. Otherwise, let σ_{k+1} := Mσ_k, k := k+1, and go to Step 2.

Here ε is called the error factor and M is called the amplification factor.
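The three steps above can be sketched end-to-end on a toy SIP: minimize f(x) = x² subject to a − x ≤ 0 for all a ∈ [0, 1], so that feasibility means x ≥ 1 and the optimum is x = 1. The sketch uses a crude grid-refinement search for the unconstrained subproblem and the stopping rule σ_k · G(x_k) ≤ ε, which is one common choice; the paper's exact stopping criterion is not reproduced in this excerpt:

```python
def m_penalty_method(f, G, x0, sigma=1.0, M=10.0, eps=1e-6, max_iter=50):
    """Exterior (M-)penalty method sketch for a one-dimensional SIP.

    Repeatedly minimizes F(x, sigma) = f(x) + sigma * G(x) without
    constraints, amplifying sigma by M until the penalty term is small.
    The inner minimization here is a simple grid-refinement search.
    """
    def minimize_unconstrained(F, center, width=4.0, rounds=30, pts=41):
        best = center
        for _ in range(rounds):         # shrink the search window around the best point
            grid = [best - width + 2 * width * i / (pts - 1) for i in range(pts)]
            best = min(grid, key=F)
            width /= 2.0
        return best

    x = x0
    for _ in range(max_iter):
        x = minimize_unconstrained(lambda y: f(y) + sigma * G(y), x)
        if sigma * G(x) <= eps:         # stop once the violation is negligible
            return x
        sigma *= M                      # amplify the penalty factor
    return x

# Toy SIP: min x^2  s.t.  a - x <= 0 for all a in [0, 1]  =>  x* = 1
f = lambda x: x * x
G = lambda x: max(0.0, 1.0 - x)         # sup over a in [0,1] of max(0, a - x)
x_star = m_penalty_method(f, G, x0=0.0)
```

With σ = 1 the unconstrained minimizer sits at the infeasible point x = 0.5; amplifying σ pushes the iterates toward the feasible optimum x = 1, illustrating how the exterior penalty approaches the constrained solution from outside the feasible domain.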

    5 Convergence Analysis

Since M > 1 and σ_k > 0, it can be seen from equation (5.5) that G(x_{k+1}) ≤ G(x_k) holds.

According to equation (5.3) and equation (5.1), we have

Then, based on equation (5.6), we know that f(x_k) ≥ f(x_{k+1}) holds. Therefore, the conclusion is established.

    As can be seen from Lemma 5.1,the value of the objective function corresponding to the sequence of points generated by the M-penalty function method decreases gradually.

    Lemma 5.2

Let x(σ) be the optimal solution to the optimization problem (PG), and let δ = G(x(σ)). Then x(σ) is also the optimal solution to the constrained optimization problem

    Proof

For any x that satisfies equation (5.8), we have

According to equation (5.9), for any x that satisfies equation (5.8), we have

Therefore, x(σ) is also the optimal solution to the constrained optimization problem (5.7)–(5.8). The conclusion is established.

Through Lemma 5.1 and Lemma 5.2, we obtain the convergence theorem of the M-penalty function method.

    Theorem 5.3

Assume that the error factor ε in the M-penalty function method satisfies the condition that

Then the algorithm must terminate within a finite number of iterations.

As can be seen from equation (5.11), there is x̄ such that

As shown by equation (5.2), we have

Let σ → +∞. Then

This conclusion contradicts equation (5.14). Therefore, Theorem 5.3 is true, and the conclusion is established.

According to Theorem 5.3, if the semi-infinite programming problem (SIP) has a feasible solution, then for any given ε > 0, the M-penalty function method terminates at the optimal solution to the optimization problem (5.7)–(5.8) with δ ≤ ε.

If the algorithm does not terminate within a finite number of iterations, we have the following conclusions:

    Theorem 5.4

If the M-penalty function method does not terminate within a finite number of iterations, then for any error factor ε, we have

    and

Therefore, equation (5.18) is established.

and the constraint condition (5.20) is satisfied. Therefore, we have

It can be seen from Lemma 5.1 that f(x_k) monotonically approaches its limit. Therefore, it can be seen from equation (5.22) that, for any sufficiently large k,

It can be seen from equation (5.23) and equation (5.24) that

    From the above two theorems,the following corollary can be directly obtained:

    Then,we have

    6 Numerical Example

The expected profit of the agent is

Assuming that the agent's disagreement profit u is 0, the optimal incentive mechanism for this principal-agent problem is the solution to the following semi-infinite programming problem:

Next, we use the M-penalty function method from Section 4 to solve the semi-infinite programming problem (6.3)–(6.6).

    The constraint violation function is

    The optimization problem(PG)is

Since this is a standard linear contract, we can use the traditional method (e.g., that in [11, 12]) to verify this result.

First, we modify equation (6.1), and we get

    7 Conclusions

This paper mainly designs an iterative algorithm to find the optimal incentive mechanism for the principal-agent problem under moral hazard where the number of agent action profiles is infinite, and where there are an infinite number of results that can be observed by the principal. The main characteristic of this principal-agent problem is that the number of constraints is infinite. Given the form of the payment vector function designed by the principal, the principal-agent problem can be transformed into a semi-infinite programming problem. An exterior penalty function method (called the M-penalty function method) for solving semi-infinite programming problems is proposed, and its convergence is illustrated. By using the M-penalty function method designed in this paper, we can obtain the optimal incentive mechanism for the principal-agent problem with an infinite number of incentive-compatibility constraints under moral hazard.

In this paper, the infinite number of constraints is mainly caused by the infinite number of agent action profiles. In addition, if the set of agents is assumed to be a continuum, the number of individual rationality constraints will be infinite, which will also make the incentive mechanism design problem a semi-infinite programming problem. In that case, the M-penalty function method designed in this paper for solving semi-infinite programming problems is also applicable for solving the incentive mechanism design problem.

It should be noted that in this paper, we assume that the form of the principal's payment function to the agent is given, but that it contains some uncertain parameters. The optimal solution to the semi-infinite programming problem gives the optimal parameters of the payoff function for the incentive mechanism design problem. In a more general case, the form of the payment function may not be known. The incentive mechanism design problem is then a variational problem with an infinite number of constraints. Compared with the algorithm for solving semi-infinite programming problems, the method for solving this problem is more complex.

    Acknowledgements

We would like to express our gratitude to all those who helped us during the writing and revising of this paper. In particular, we are very grateful to the two anonymous reviewers for their comments, which were of great significance.

一本一本久久a久久精品综合妖精| 亚洲一区高清亚洲精品| 亚洲三区欧美一区| 黄色视频,在线免费观看| 欧美最黄视频在线播放免费 | 成人av一区二区三区在线看| 亚洲一区二区三区欧美精品| 热99国产精品久久久久久7| 黄色视频,在线免费观看| 少妇猛男粗大的猛烈进出视频| 日韩欧美三级三区| 亚洲第一欧美日韩一区二区三区| 18禁观看日本| 麻豆av在线久日| 亚洲va日本ⅴa欧美va伊人久久| 女性生殖器流出的白浆| 亚洲情色 制服丝袜| 两个人免费观看高清视频| 成年女人毛片免费观看观看9 | 成人免费观看视频高清| 日日摸夜夜添夜夜添小说| 99re6热这里在线精品视频| 国产成人啪精品午夜网站| 国产亚洲精品第一综合不卡| bbb黄色大片| 亚洲精品在线美女| 一级,二级,三级黄色视频| av不卡在线播放| 久久久久久久午夜电影 | 国产日韩一区二区三区精品不卡| 亚洲国产毛片av蜜桃av| 99热网站在线观看| 91字幕亚洲| 国产av又大| 亚洲精品成人av观看孕妇| 久久天堂一区二区三区四区| 一级作爱视频免费观看| 午夜两性在线视频| 99在线人妻在线中文字幕 | 男人舔女人的私密视频| 一区在线观看完整版| 欧美精品亚洲一区二区| 精品福利永久在线观看| 成年人午夜在线观看视频| cao死你这个sao货| 国产精品98久久久久久宅男小说| 一级,二级,三级黄色视频| 麻豆乱淫一区二区| 国产av又大| av线在线观看网站| 久久草成人影院| 亚洲精品久久成人aⅴ小说| 中文字幕最新亚洲高清| 宅男免费午夜| 久久青草综合色| 国产精品.久久久| 欧美乱码精品一区二区三区| 午夜亚洲福利在线播放| 手机成人av网站| 久久久久国内视频| 婷婷成人精品国产| 精品国产一区二区久久| 乱人伦中国视频| 亚洲一区高清亚洲精品| 久久精品熟女亚洲av麻豆精品| 黑人猛操日本美女一级片| 91麻豆精品激情在线观看国产 | 一本综合久久免费| 高清黄色对白视频在线免费看| 欧美成人午夜精品| 在线观看午夜福利视频| 欧美亚洲日本最大视频资源| 中文字幕人妻熟女乱码| 男女午夜视频在线观看| 成年女人毛片免费观看观看9 | 一级片免费观看大全| 成人特级黄色片久久久久久久| 亚洲一卡2卡3卡4卡5卡精品中文| 精品电影一区二区在线| 最新在线观看一区二区三区| 国产淫语在线视频| 国产高清国产精品国产三级| 人人妻人人爽人人添夜夜欢视频| 免费在线观看日本一区| 国产男女超爽视频在线观看| 日韩免费高清中文字幕av| 国产伦人伦偷精品视频| 亚洲一区中文字幕在线| 精品免费久久久久久久清纯 | 黄片大片在线免费观看| 久久久久久久国产电影| 性少妇av在线| 久久久久久人人人人人| 国产成人欧美在线观看 | 亚洲欧洲精品一区二区精品久久久| 国产精品二区激情视频| 亚洲av片天天在线观看| 亚洲七黄色美女视频| 麻豆成人av在线观看| 亚洲人成电影观看| 18禁观看日本| 可以免费在线观看a视频的电影网站| 他把我摸到了高潮在线观看| 久久青草综合色| 亚洲九九香蕉| 久热爱精品视频在线9| 欧美日韩一级在线毛片| 日韩成人在线观看一区二区三区| 男女高潮啪啪啪动态图| 美女高潮到喷水免费观看| 欧美日韩国产mv在线观看视频| 欧美在线一区亚洲| 久久香蕉精品热| 精品一区二区三区四区五区乱码| 精品久久久久久,| 搡老乐熟女国产| 欧美日韩中文字幕国产精品一区二区三区 | 18禁观看日本| 欧美一级毛片孕妇| 男女床上黄色一级片免费看| 一区二区三区国产精品乱码| 欧美在线一区亚洲| 黄色 视频免费看| 国产单亲对白刺激| 亚洲,欧美精品.| 亚洲欧美一区二区三区久久| 亚洲国产欧美一区二区综合| 大香蕉久久网| 国产亚洲精品一区二区www | 在线观看免费视频日本深夜| av福利片在线|