
    Nested Alternating Direction Method of Multipliers to Low-Rank and Sparse-Column Matrices Recovery


    (1. College of Mathematics and Statistics, Henan University, Kaifeng 475000, China; 2. College of Mathematics and Statistics, Henan University of Science and Technology, Luoyang 471023, China; 3. College of Software, Henan University, Kaifeng 475000, China)

    Abstract: The task of dividing corrupted data into their respective subspaces can be well illustrated, both theoretically and numerically, by recovering the low-rank and sparse-column components of a given matrix. Generally, it can be characterized as a convex minimization problem involving the nuclear norm and the $\ell_{2,1}$-norm. However, solving the resulting problem is challenging due to the non-smoothness of the objective function. One of the earliest solvers is a 3-block alternating direction method of multipliers (ADMM) which updates each variable in a Gauss-Seidel manner. In this paper, we present three variants of ADMM for the 3-block separable minimization problem. More precisely, whenever one variable is derived, the resulting problem can be regarded as a convex minimization with 2 blocks, and can be solved immediately using the standard ADMM. If the inner iteration loops only once, the iterative scheme reduces to the ADMM with updates in a Gauss-Seidel manner. If the solution from the inner iteration is assumed to be exact, the convergence can be deduced easily from the literature. Performance comparisons with a couple of recently designed solvers illustrate that the proposed methods are effective and competitive.

    Keywords: Convex optimization; Variational inequality problem; Alternating direction method of multipliers; Low-rank representation; Subspace recovery

    §1. Introduction

    Given a set of corrupted data samples drawn from a union of linear subspaces, the goal of the subspace recovery problem is to segment all samples into their respective subspaces and simultaneously correct the possible errors. The problem has recently attracted much attention because of its wide applications in the fields of pattern analysis, signal processing, data mining, etc.

    Mathematically, the problem can be modeled as the convex minimization problem

    $$\min_{Z,E}\ \|Z\|_*+\lambda\|E\|_{2,1}\qquad \mathrm{s.t.}\quad X=AZ+E, \qquad (1.1)$$

    where $\lambda>0$ is a positive weighting parameter; $A\in\mathbb{R}^{m\times n}$ is a dictionary which is assumed to have full column rank; $\|\cdot\|_*$ is the nuclear norm (trace norm or Ky Fan norm [12]) defined by the sum of all singular values; $\|\cdot\|_{2,1}$ is the $\ell_{2,1}$-mixed norm defined by the sum of the $\ell_2$-norms of the columns of a matrix. The nuclear norm is the best convex approximation of the rank function over the unit ball of matrices under the spectral norm [12]. The $\ell_{2,1}$-norm promotes the column-sparse component $E$, reflecting that some data samples are corrupted while the others remain clean. Once a minimizer $(Z^*,E^*)$ of problem (1.1) is obtained, the original data $X$ can be reconstructed by setting $X-E^*$ (or $AZ^*$).

    Additionally, the minimizer $Z^*$ is called the lowest-rank representation of the data $X$ with respect to the dictionary $A$.

    Problem (1.1) is convex since it is separately convex in each of its terms. However, the non-smoothness of the nuclear norm and the $\ell_{2,1}$-norm makes it a challenging task to minimize. On the one hand, the problem can be easily recast as a semi-definite programming problem and solved by solvers such as [15] and [13]. On the other hand, it falls into the framework of the alternating direction method of multipliers (ADMM), which is widely used in a variety of practical fields, such as image processing [2,6,11], compressive sensing [17,19], matrix completion [4], matrix decomposition [9,14,16], nuclear norm minimization [18], and others. The earliest approach [9] reformulated problem (1.1) by adding an auxiliary variable, and minimized the corresponding augmented Lagrangian function with respect to each variable in a Gauss-Seidel manner. Another approach [16] solved (1.1) by using a linearization technique. More precisely, with one variable fixed, it linearized the subproblem to ensure that a closed-form solution is easily derived.

    In this paper, unlike all the aforementioned algorithms, we propose three variants of ADMM for problem (1.1). In the first variant, we transfer (1.1) into an equivalent formulation by adding a new variable $J$. Firstly, by fixing two variables, we minimize the corresponding augmented Lagrangian function to produce the temporary value of one variable. Secondly, fixing that variable at its latest value, we treat the resulting subproblem as a new convex optimization problem with fixed Lagrangian multipliers. Thus, it falls into the framework of the classic ADMM again. It is experimentally shown that the number of inner loops greatly influences the overall performance of the algorithm. Meanwhile, the method reduces to the standard 3-block ADMM when the inner loop runs only once. Moreover, we design two other alternative versions of ADMM from different observations. The convergence of each proposed algorithm is analyzed under the assumption that the subproblem is solved exactly. Numerical experiments indicate that the proposed algorithms are promising and competitive with the recent solvers SLAL and LRR.

    The rest of this paper is organized as follows. In section 2, some notations and preliminaries used later are provided, a couple of recent algorithms are quickly reviewed, and the motivation and iterative framework of the new algorithms are presented. In section 3, the convergence of the first version of the algorithm is established. In section 4, another variant of ADMM from a different observation, together with its convergence, is presented. In section 5, numerical results showing the efficiency of the proposed algorithms are reported; performance comparisons with other solvers are also included. Finally, in section 6, the paper is concluded with some remarks.

    §2. Algorithms

    2.1. Notations and preliminaries

    In this subsection, we summarize the notations used in this paper. Matrices are denoted by uppercase letters and vectors by lowercase letters. Given a matrix $X$, its $i$-th row and $j$-th column are denoted by $[X]_{i,:}$ and $[X]_{:,j}$, respectively, and $x_{i,j}$ is its $(i,j)$-th component. The $\ell_{2,1}$-norm, the nuclear norm, and the Frobenius norm of a matrix are defined respectively by

    $$\|X\|_{2,1}=\sum_{j}\big\|[X]_{:,j}\big\|_2,\qquad \|X\|_*=\sum_{i}\sigma_i(X),\qquad \|X\|_F=\Big(\sum_{i,j}x_{i,j}^2\Big)^{1/2},$$

    where $\sigma_i(X)$ denotes the $i$-th singular value of $X$.

    For any two matrices $X,Y\in\mathbb{R}^{n\times t}$, we define the standard trace inner product $\langle X,Y\rangle=\mathrm{trace}(X^{\top}Y)$; then $\|X\|_F^2=\langle X,X\rangle$. For a symmetric and positive definite matrix $M\in\mathbb{R}^{n\times n}$, we define $\|X\|_M^2=\langle X,MX\rangle$. The symbol $\top$ denotes the transpose of a vector or a matrix.

    Now, we list two important results which are very useful in constructing our algorithms.

    Theorem 2.1. [1,10] Given $Y\in\mathbb{R}^{m\times n}$ of rank $r$, let

    $$Y=U\Sigma V^{\top},\qquad \Sigma=\mathrm{diag}\big(\{\sigma_i\}_{1\le i\le r}\big),$$

    be the singular value decomposition (SVD) of $Y$. For each $\mu>0$, we let

    $$\mathcal{D}_{\mu}(Y)=U\,\mathrm{diag}\big(\{\sigma_i-\mu\}_+\big)V^{\top},$$

    where $\{\cdot\}_+=\max(0,\cdot)$. It is shown that $\mathcal{D}_{\mu}(Y)$ obeys

    $$\mathcal{D}_{\mu}(Y)=\mathop{\arg\min}_{X}\ \mu\|X\|_*+\frac{1}{2}\|X-Y\|_F^2.$$
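    As a concrete illustration, the following minimal NumPy sketch implements the thresholding operator of Theorem 2.1; the function name svt is ours, not the paper's.

```python
import numpy as np

def svt(Y, mu):
    """Singular value thresholding D_mu(Y): soft-thresholds the singular
    values of Y at level mu, which by Theorem 2.1 solves
        min_X  mu*||X||_* + 0.5*||X - Y||_F^2."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s = np.maximum(s - mu, 0.0)   # {sigma_i - mu}_+
    return (U * s) @ Vt           # U diag({sigma_i - mu}_+) V^T
```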

    Theorem 2.2. [5] Let $Y\in\mathbb{R}^{m\times n}$ be a given matrix, and let $\mathcal{S}_{\mu}(Y)$ be the optimal solution of

    $$\min_{X}\ \mu\|X\|_{2,1}+\frac{1}{2}\|X-Y\|_F^2;$$

    then the $i$-th column of $\mathcal{S}_{\mu}(Y)$ is

    $$[\mathcal{S}_{\mu}(Y)]_{:,i}=\begin{cases}\dfrac{\|[Y]_{:,i}\|_2-\mu}{\|[Y]_{:,i}\|_2}\,[Y]_{:,i}, & \text{if } \|[Y]_{:,i}\|_2>\mu,\\ 0, & \text{otherwise.}\end{cases}$$
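    Likewise, a short sketch of the column-wise shrinkage operator of Theorem 2.2 (the name l21_shrink and the small guard against division by zero are our assumptions):

```python
import numpy as np

def l21_shrink(Y, mu):
    """Column-wise shrinkage S_mu(Y): by Theorem 2.2 the i-th column of the
    minimizer of  mu*||X||_{2,1} + 0.5*||X - Y||_F^2  is
    max(0, 1 - mu/||y_i||_2) * y_i."""
    norms = np.linalg.norm(Y, axis=0)                         # ||[Y]_{:,i}||_2
    scale = np.maximum(1.0 - mu / np.maximum(norms, 1e-12), 0.0)
    return Y * scale                                          # scales each column
```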

    2.2. Existing algorithms

    This subsection is devoted to reviewing a couple of existing algorithms. The corresponding augmented Lagrangian function of (1.1) is

    $$\mathcal{L}_1(E,Z,\Lambda)=\|Z\|_*+\lambda\|E\|_{2,1}+\langle\Lambda,\,X-AZ-E\rangle+\frac{\mu}{2}\|X-AZ-E\|_F^2, \qquad (2.2)$$

    where $\Lambda\in\mathbb{R}^{m\times n}$ is the Lagrangian multiplier and $\mu>0$ is a penalty parameter. For fixed $(E^k,Z^k,\Lambda^k)$, the next triplet $(E^{k+1},Z^{k+1},\Lambda^{k+1})$ can be generated via

    $$E^{k+1}=\mathop{\arg\min}_{E}\ \mathcal{L}_1(E,Z^k,\Lambda^k), \qquad (2.3a)$$
    $$Z^{k+1}=\mathop{\arg\min}_{Z}\ \mathcal{L}_1(E^{k+1},Z,\Lambda^k), \qquad (2.3b)$$
    $$\Lambda^{k+1}=\Lambda^k+\mu\,(X-AZ^{k+1}-E^{k+1}). \qquad (2.3c)$$

    For subproblem (2.3a), it can be easily deduced from Theorem 2.2 that

    $$E^{k+1}=\mathcal{S}_{\lambda/\mu}\big(X-AZ^k+\Lambda^k/\mu\big). \qquad (2.4)$$

    On the other hand, fixing the latest $E^{k+1}$, the subproblem (2.3b) with respect to $Z$ can be characterized as

    $$Z^{k+1}=\mathop{\arg\min}_{Z}\ \|Z\|_*+\frac{\mu}{2}\big\|X-AZ-E^{k+1}+\Lambda^k/\mu\big\|_F^2. \qquad (2.5)$$

    For most dictionary matrices $A$, the closed-form solution of (2.5) is not easily derived. SLAL [16] linearizes the quadratic function and adds a proximal point term, which ensures that the solution can be obtained explicitly.
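    To make the linearization idea concrete, here is a sketch of one such step under the usual step-size condition $\eta\ge\|A\|_2^2$; the function name and calling convention are our assumptions, and svt is the operator sketched after Theorem 2.1.

```python
import numpy as np

def z_update_linearized(Z, E, Lam, X, A, mu, eta):
    """One linearized step for subproblem (2.5): the quadratic term
    (mu/2)||X - A Z - E + Lam/mu||_F^2 is linearized at the current Z and
    a proximal term (mu*eta/2)||Z - Z^k||_F^2 is added, so the minimizer
    is a single singular value thresholding (eta >= ||A||_2^2 assumed)."""
    R = X - A @ Z - E + Lam / mu     # residual at the current point
    G = Z + (A.T @ R) / eta          # gradient step on the smooth part
    return svt(G, 1.0 / (mu * eta))  # svt from the sketch above
```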

    In a different way, another solver, LRR [9], adds a new variable $J\in\mathbb{R}^{n\times n}$ to model (1.1) and converts it to the following equivalent form:

    $$\min_{Z,E,J}\ \|J\|_*+\lambda\|E\|_{2,1}\qquad \mathrm{s.t.}\quad X=AZ+E,\ \ Z=J. \qquad (2.6)$$

    The augmented Lagrangian function of (2.6) is

    $$\mathcal{L}_2(E,Z,J,\Lambda,\Gamma)=\|J\|_*+\lambda\|E\|_{2,1}+\langle\Lambda,\,X-AZ-E\rangle+\langle\Gamma,\,Z-J\rangle+\frac{\mu}{2}\Big(\|X-AZ-E\|_F^2+\|Z-J\|_F^2\Big), \qquad (2.7)$$

    where $\Lambda\in\mathbb{R}^{m\times n}$ and $\Gamma\in\mathbb{R}^{n\times n}$ are the Lagrangian multipliers. LRR minimizes $\mathcal{L}_2(E,Z,J,\Lambda,\Gamma)$ firstly with respect to $E$, then with respect to $Z$, and then with respect to $J$, fixing the other variables at their latest values. More precisely, given $(E^k,Z^k,J^k)$, the new iterate $(E^{k+1},Z^{k+1},J^{k+1})$ is generated by

    $$E^{k+1}=\mathop{\arg\min}_{E}\mathcal{L}_2(E,Z^k,J^k,\Lambda^k,\Gamma^k),\quad Z^{k+1}=\mathop{\arg\min}_{Z}\mathcal{L}_2(E^{k+1},Z,J^k,\Lambda^k,\Gamma^k),\quad J^{k+1}=\mathop{\arg\min}_{J}\mathcal{L}_2(E^{k+1},Z^{k+1},J,\Lambda^k,\Gamma^k), \qquad (2.8)$$

    followed by the multiplier updates $\Lambda^{k+1}=\Lambda^k+\mu(X-AZ^{k+1}-E^{k+1})$ and $\Gamma^{k+1}=\Gamma^k+\mu(Z^{k+1}-J^{k+1})$.

    The attractive feature of the above iterative scheme is that each variable admits a closed-form solution.
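    A minimal sketch of one sweep of the scheme (2.8), assembled from the closed-form solutions above; the helper names l21_shrink and svt refer to the sketches after Theorems 2.2 and 2.1, and the variable names are ours.

```python
import numpy as np

def lrr_step(E, Z, J, Lam, Gam, X, A, lam, mu):
    """One Gauss-Seidel sweep of the LRR scheme (2.8)."""
    n = Z.shape[0]
    # E-step: column-wise shrinkage (Theorem 2.2)
    E = l21_shrink(X - A @ Z + Lam / mu, lam / mu)
    # Z-step: the quadratic subproblem yields the normal equations
    # (A^T A + I) Z = A^T (X - E + Lam/mu) + J - Gam/mu
    Z = np.linalg.solve(A.T @ A + np.eye(n),
                        A.T @ (X - E + Lam / mu) + J - Gam / mu)
    # J-step: singular value thresholding (Theorem 2.1)
    J = svt(Z + Gam / mu, 1.0 / mu)
    # multiplier updates
    Lam = Lam + mu * (X - A @ Z - E)
    Gam = Gam + mu * (Z - J)
    return E, Z, J, Lam, Gam
```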

    2.3. Nested minimizing algorithm

    In this subsection, we turn our attention to constructing the new version of ADMM, named the nested minimizing algorithm here. Given $(E^k,Z^k,J^k,\Lambda^k,\Gamma^k)$, the next $E^{k+1}$ is derived by

    $$E^{k+1}=\mathop{\arg\min}_{E}\ \mathcal{L}_2(E,Z^k,J^k,\Lambda^k,\Gamma^k)=\mathcal{S}_{\lambda/\mu}\big(X-AZ^k+\Lambda^k/\mu\big). \qquad (2.9)$$

    If $Z$ and $J$ are grouped as one variable, for fixed $E^{k+1}$, it is easy to deduce that

    $$(Z^{k+1},J^{k+1})=\mathop{\arg\min}_{Z,J}\ \mathcal{L}_2(E^{k+1},Z,J,\Lambda^k,\Gamma^k). \qquad (2.11)$$

    Hence, $(Z^{k+1},J^{k+1})$ can also be considered as the solution of a constrained minimization problem (2.12), treated by the standard Lagrangian function method but with the multipliers fixed at $\Lambda^k$ and $\Gamma^k$.

    Fortunately, the favorable structure of both the objective function and the constraint makes the resulting problem fall into the framework of the classic ADMM again.

    For given $(E^{k+1},Z^k,J^k)$, the inner iteration starts from the pair $(Z^k,J^k)$; for the current fixed pair, the next pair can be attained by the following alternating scheme:

    Firstly, the subproblem (2.13a) is equivalent to

    Clearly, (2.14) is a quadratic programming problem with respect to $Z$ and can be further expressed as

    Secondly, the solution of subproblem (2.13b) with respect to $J$ can be described as

    In summary, the algorithm, named the Nested Minimization Method (NMM_v1), can be described as follows.

    Algorithm 2.1 (NMM_v1).
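    The following sketch shows one outer iteration of NMM_v1 under our reading of (2.9)-(2.13): $E$ is updated first, the pair $(Z,J)$ is then refined by a short inner alternating loop, and the multipliers are updated only at the outer level (cf. Remark 4.1). The parameter inner_steps and the helper names (from the sketches after Theorems 2.1 and 2.2) are assumptions.

```python
import numpy as np

def nmm_v1_step(E, Z, J, Lam, Gam, X, A, lam, mu, inner_steps=4):
    """One outer iteration of NMM_v1 (a sketch under stated assumptions)."""
    n = Z.shape[0]
    # outer E-update, cf. (2.9)
    E = l21_shrink(X - A @ Z + Lam / mu, lam / mu)
    # inner loop on the grouped pair (Z, J) with Lam, Gam held fixed
    AtA_I = A.T @ A + np.eye(n)
    for _ in range(inner_steps):
        # Z-step: (A^T A + I) Z = A^T (X - E + Lam/mu) + J - Gam/mu
        Z = np.linalg.solve(AtA_I, A.T @ (X - E + Lam / mu) + J - Gam / mu)
        # J-step: singular value thresholding
        J = svt(Z + Gam / mu, 1.0 / mu)
    # multipliers are updated only at the outer level
    Lam = Lam + mu * (X - A @ Z - E)
    Gam = Gam + mu * (Z - J)
    return E, Z, J, Lam, Gam
```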

    Remark 2.1. If the inner iteration runs only once without achieving convergence, then the method reduces to the iterative form (2.8), where each variable is updated in a Gauss-Seidel manner. Owing to the fact that an exact solution is not achieved when only one step of the inner iteration is taken, the 3-block ADMM may fail to converge globally (see [3]).

    Remark 2.2. The optimality conditions of (2.6) (or (1.1)) can be characterized by finding a solution $(E^*,Z^*,J^*)\in\mathbb{R}^{m\times n}\times\mathbb{R}^{n\times n}\times\mathbb{R}^{n\times n}$ and Lagrangian multipliers $\Lambda^*$ and $\Gamma^*$ satisfying the Karush-Kuhn-Tucker system (2.18).

    At each iteration, the triple $(E^{k+1},Z^{k+1},J^{k+1})$ generated by NMM_v1 satisfies the relations (2.19a)-(2.19e).

    Comparing the optimality conditions (2.18a)-(2.18e) with (2.19a)-(2.19e), it is clearly observed that the whole iteration process can be terminated once $\Lambda^{k+1}-\Lambda^k$, $\Gamma^{k+1}-\Gamma^k$, and $Z^{k+1}-Z^k$ are all sufficiently small. In other words, for a positive constant $\epsilon>0$, the stopping criterion should be

    $$\max\Big\{\|\Lambda^{k+1}-\Lambda^k\|_{\infty},\ \|\Gamma^{k+1}-\Gamma^k\|_{\infty},\ \|Z^{k+1}-Z^k\|_{\infty}\Big\}\le\epsilon. \qquad (2.20)$$
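    In code, the test (2.20) can be sketched as follows; measuring the changes by the maximum absolute component, as in the experiments of §5, is our assumption.

```python
import numpy as np

def met_criterion(Lam1, Lam0, Gam1, Gam0, Z1, Z0, eps):
    """Stopping test (2.20): all three successive changes must be small,
    measured here by the maximum absolute component."""
    diffs = [Lam1 - Lam0, Gam1 - Gam0, Z1 - Z0]
    return max(np.max(np.abs(D)) for D in diffs) <= eps
```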

    From optimization theory, it is clear that the variables can be reordered by minimizing first with respect to $J$, then with respect to $Z$, and then with respect to $E$, fixing the other variables at their latest values. More precisely, given $(J^k,Z^k,E^k,\Lambda^k,\Gamma^k)$, the next iterate $(J^{k+1},Z^{k+1},E^{k+1},\Lambda^{k+1},\Gamma^{k+1})$ can be generated via the following scheme, named the Nested Minimization Method, version two (NMM_v2).

    Algorithm 2.2 (NMM_v2).

    §3. Convergence analysis

    This section is dedicated to establishing the global convergence of algorithm NMM_v1. The convergence of the second version, NMM_v2, can be analyzed in a similar way; hence, we omit it here. Throughout this paper, we make the following assumptions.

    Assumption 3.1. There exists a quintuple $(E^*,Z^*,J^*,\Lambda^*,\Gamma^*)\in\mathbb{R}^{m\times n}\times\mathbb{R}^{n\times n}\times\mathbb{R}^{n\times n}\times\mathbb{R}^{m\times n}\times\mathbb{R}^{n\times n}$ satisfying the Karush-Kuhn-Tucker system (2.18). Besides the above assumption, we also make the following assumption on Algorithm NMM_v1.

    Assumption 3.2. The pair $(Z^{k+1},J^{k+1})$ is the exact solution of the resulting convex minimization problem (2.12).

    3.1. General description

    For convenience, we set

    where $I$ is an identity matrix and $0$ is the zero matrix whose elements are all zero. Using these symbols, problem (2.6) is thus transformed into

    Let $\Omega=\mathbb{R}^{m\times n}\times\mathbb{R}^{n\times n}\times\mathbb{R}^{n\times n}\times\mathbb{R}^{(m+n)\times n}$. As a result, solving (3.2) is equivalent to finding $W^*\in\Omega$ satisfying the following variational inequality problem

    Using the notations in (3.1), the augmented Lagrangian function in (2.7) can be rewritten as

    Moreover, it is not difficult to deduce that the subproblem (2.9) on $E$ is equivalent to

    It is also easy to see that the subproblem (2.11) on the variables $Z$ and $J$ is identical to

    Finally, the compact form of (2.16) and (2.17) is

    3.2. Further reformulations

    The subproblem (3.5) can be reformulated as a variational inequality. That is, find $E^{k+1}$ such that

    Similarly, the problem (3.6) is equivalent to finding $Z^{k+1}$ and $J^{k+1}$ such that

    By (3.7), it holds that

    Using the above equality, (3.8) can be rewritten as

    In a similar way, (3.9) is reformulated as

    For the sake of simplicity, we denote

    Combining (3.10) and (3.11) yields

    Furthermore, combining this with (3.7), it holds that

    Recalling the definition of $W$ and letting

    then the inequality (3.13) is equivalent to

    3.3. Convergence theorem

    Let

    To establish the desired convergence theorem, we firstly list some useful lemmas.

    Lemma 3.1. Suppose that Assumptions 3.1 and 3.2 hold. Let $\{W^{k+1}\}=\{(E^{k+1},Z^{k+1},J^{k+1},\Lambda^{k+1},\Gamma^{k+1})\}$ be generated by Algorithm 2.1. Then we have

    Proof. Setting $W=W^*$ in (3.14), we obtain

    By the monotonicity of the operator $\Phi$, it is easy to see that

    The first inequality is due to the monotonicity of $\Phi$, and the second one comes from (3.4) by recalling the definitions of $W$, $F$, and $\Phi$. Hence, the claim of this lemma is derived.

    By using the above lemma, it is easy to attain the following result.

    Lemma 3.2. Suppose that Assumptions 3.1 and 3.2 hold. Let $\{W^{k+1}\}=\{(E^{k+1},Z^{k+1},J^{k+1},\Lambda^{k+1},\Gamma^{k+1})\}$ be generated by Algorithm 2.1. Then we have

    and

    Proof. By using the iterative scheme, we have

    Since (3.11) holds for any $k$, we can get

    Setting $Z=Z^{k+1}$ in (3.11) and $Z=Z^k$ in (3.15), respectively, and adding both sides of the resulting inequalities, we have

    which shows the statement of this lemma.

    It is not difficult to deduce that both lemmas indicate the following fact.

    Lemma 3.3. Suppose that Assumptions 3.1 and 3.2 hold. Let the sequence $\{(E^{k+1},Z^{k+1},J^{k+1},\Lambda^{k+1},\Gamma^{k+1})\}$ be generated by Algorithm 2.1. Then we have

    For any matrices $X,Y$ and a symmetric positive definite matrix $M$, define

    Theorem 3.1. Suppose that Assumptions 3.1 and 3.2 hold. Let the sequence $\{(E^{k+1},Z^{k+1},J^{k+1},\Lambda^{k+1},\Gamma^{k+1})\}$ be generated by Algorithm 2.1. Then we have

    Proof. We have

    The proof is completed.

    The theorem shows that the sequence $\{W^k\}$ is bounded, and

    which is essential for the convergence of the proposed method. Recalling the definitions above, it also holds that

    To end this section, we state the desired convergence result of our proposed algorithm.

    Theorem 3.2. Suppose that Assumptions 3.1 and 3.2 hold. Let $\{(E^k,Z^k,J^k,\Lambda^k,\Gamma^k)\}$ be the sequence generated by Algorithm 2.1 from any initial point. Then the sequence converges to $(E^*,Z^*,J^*,\Lambda^*,\Gamma^*)$, i.e., a solution of the equivalent model (3.2).

    Proof. It follows from (3.16) and (3.17) that there exists an index set $\{k_j\}$ such that $Z^{k_j}\to Z^*$. Additionally, since

    and $Z^k-Z^{k-1}\to 0$ and $\Lambda^k-\Lambda^{k-1}\to 0$, it implies that

    It follows from (2.16) that

    or equivalently,

    Taking the limit on both sides yields

    Similarly, it follows from (2.17) that

    Taking the limit on both sides, it holds that

    Moreover, by (2.19a), we get

    Taking the limit along $k_j$ on both sides of the above inequality and noting (3.18), we get

    Similarly, from (2.19b) and (2.19c), we obtain, respectively,

    and

    Noting that (3.18)-(3.22) are exactly the optimality conditions (2.18a)-(2.18e), we conclude that $(E^*,Z^*,J^*)$ is a solution of problem (3.2).

    §4. Another alternative scheme

    This section is devoted to developing another version of the nested minimizing algorithm for solving problem (1.1) from a different observation. Reconsidering the original model and its augmented Lagrangian function (2.2), it is clear that $E^{k+1}$ and $Z^{k+1}$ are derived by (2.4) and (2.5), respectively. Setting $H^{k+1}=X-E^{k+1}+\Lambda^k/\mu$, then (2.5) is reformulated as

    $$Z^{k+1}=\mathop{\arg\min}_{Z}\ \|Z\|_*+\frac{\mu}{2}\big\|H^{k+1}-AZ\big\|_F^2, \qquad (4.1)$$

    which indicates that $Z^{k+1}$ is the minimizer of the following optimization problem with the auxiliary variable $J$:

    $$\min_{Z,J}\ \|J\|_*+\frac{\mu}{2}\big\|H^{k+1}-AZ\big\|_F^2\qquad \mathrm{s.t.}\quad Z=J. \qquad (4.2)$$

    Since the objective function and the constraint are both separable, the problem falls into the framework of the classic ADMM again. The augmented Lagrangian function of (4.2) is

    $$\mathcal{L}_3(Z,J,\Gamma)=\|J\|_*+\frac{\mu}{2}\big\|H^{k+1}-AZ\big\|_F^2+\langle\Gamma,\,Z-J\rangle+\frac{\mu}{2}\|Z-J\|_F^2,$$

    where $\Gamma\in\mathbb{R}^{n\times n}$ is a Lagrangian multiplier. Starting from the current pair and the given $H^{k+1}$, the ADMM generates the next pair by

    Simple computation yields that

    and

    In short, the algorithm, named the Nested Minimizing Method, version three (NMM_v3), can be stated as follows.

    Algorithm 4.1 (NMM_v3).
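    The inner ADMM for subproblem (4.2) can be sketched as follows; reusing $\mu$ as the inner penalty and the function name are our assumptions, and svt is the operator from the sketch after Theorem 2.1.

```python
import numpy as np

def nmm_v3_z_update(Z, J, Gam, H, A, mu, inner_steps=4):
    """Inner ADMM for subproblem (4.2):
        min_{Z,J} ||J||_* + (mu/2)||H - A Z||_F^2   s.t.  Z = J,
    with the inner multiplier Gam updated inside the loop (Remark 4.1)."""
    n = Z.shape[0]
    M = mu * (A.T @ A + np.eye(n))
    for _ in range(inner_steps):
        # Z-step: mu (A^T A + I) Z = mu A^T H + mu J - Gam
        Z = np.linalg.solve(M, mu * (A.T @ H) + mu * J - Gam)
        # J-step: singular value thresholding at level 1/mu
        J = svt(Z + Gam / mu, 1.0 / mu)
        # inner multiplier update
        Gam = Gam + mu * (Z - J)
    return Z, J, Gam
```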

    Remark 4.1. Compared with Algorithm 2.1, it can be clearly seen that the significant difference between the two algorithms is the updating of the multiplier $\Gamma^k$ related to the constraint $J-Z=0$. In Algorithm 2.1 this multiplier is updated in the outer iteration process, because the auxiliary variable $J$ was added to the original model (1.1) to obtain (2.6); in NMM_v3 it is updated in the inner loop, since model (4.2) is used only as a subproblem for deriving the next $Z^{k+1}$.

    Remark 4.2. Similar to Remark 2.2, the optimality conditions of (1.1) can be characterized by finding a solution $(E^*,Z^*)\in\mathbb{R}^{m\times n}\times\mathbb{R}^{n\times n}$ and the corresponding Lagrangian multiplier $\Lambda^*$ such that

    At each iteration, the triple $(E^{k+1},Z^{k+1},\Lambda^{k+1})$ generated by NMM_v3 satisfies

    or, equivalently

    which indicates, for a sufficiently small $\epsilon>0$, that the algorithm should be stopped when

    As in the previous section, we make the following assumption to ensure that the algorithm converges globally.

    Assumption 4.1. The pair $(Z^{k+1},J^{k+1})$ is the exact solution of the resulting convex minimization problem (4.2).

    We can clearly see that the inner iterations (on the variables $Z$ and $J$) and the outer iterations (on $E$ and $Z$) are both classic ADMM. Hence, the convergence of this type of method is available in the literature. Let

    It can be proved similarly that the sequence $\{Y^k\}$ generated by Algorithm 4.1 is contractive.

    Theorem 4.1. Suppose that Assumptions 3.1 and 4.1 hold. Let the sequence $\{(E^{k+1},Z^{k+1},\Lambda^{k+1})\}$ be generated by Algorithm 4.1. Then we have

    To end this section, we state the desired convergence theorem without proof.

    Theorem 4.2. Suppose that Assumptions 3.1 and 4.1 hold. Let $\{(E^k,Z^k,\Lambda^k)\}$ be the sequence generated by Algorithm 4.1 from any initial point. Then every limit point of $\{(E^k,Z^k)\}$ is an optimal solution of problem (1.1).

    §5. Numerical experiments

    In this section, we present two classes of numerical experiments. In the first class, we test the algorithms with different numbers of inner loops to verify their efficiency and stability. In the second class, we test against a couple of recent solvers, SLAL and LRR, to show that the proposed algorithms are very competitive. All experiments are performed with the Windows 7 operating system and Matlab 7.8 (2009a) running on a Lenovo laptop with an Intel Dual-Core CPU at 2.5 GHz and 4 GB of memory.

    5.1. Test on NMM_v1, NMM_v2 and NMM_v3

    In the first class of experiments, we test the proposed algorithms with different numbers of inner steps on synthetic data. The data are created similarly to those in [9,16]. The data sets are constructed from five independent subspaces $\{\mathcal{S}_i\}_{i=1}^{5}$ whose bases $\{U_i\}_{i=1}^{5}$ are generated by $U_{i+1}=TU_i$, $1\le i\le 4$, where $T$ denotes a random rotation and $U_1$ is a random orthogonal matrix of dimension $100\times 4$. Hence, each subspace has rank 4 and the data have an ambient dimension of 100. From each subspace, 40 data vectors are sampled by $X_i=U_iQ_i$, with $Q_i$ being a $4\times 40$ independent and identically distributed $N(0,1)$ matrix. In summary, the whole data matrix $X=[X_1,\dots,X_5]$ has rank $r=20$. In this test, a fraction ($Fr=20\%$) of the data vectors are grossly corrupted by large noise while the others are kept noiseless. If the $i$-th column vector is chosen to be corrupted, its components are generated by adding Gaussian noise with zero mean and a large standard deviation.
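    For reproducibility, here is a sketch of this data-generation procedure; the choice of random rotation and the corruption noise level are assumptions where the text leaves them unspecified.

```python
import numpy as np
from scipy.linalg import orth
from scipy.stats import special_ortho_group

rng = np.random.default_rng(0)
U = [orth(rng.standard_normal((100, 4)))]          # random orthogonal basis U_1
T = special_ortho_group.rvs(100, random_state=0)   # random rotation T (assumed)
for _ in range(4):                                 # U_{i+1} = T U_i, i = 1..4
    U.append(T @ U[-1])
# 40 samples per subspace: X_i = U_i Q_i with Q_i a 4 x 40 i.i.d. N(0,1) matrix
X = np.hstack([Ui @ rng.standard_normal((4, 40)) for Ui in U])
# grossly corrupt a 20% fraction of the columns (noise level is an assumption)
idx = rng.choice(X.shape[1], size=int(0.2 * X.shape[1]), replace=False)
X[:, idx] += rng.normal(0.0, 0.5, size=(100, idx.size))
```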

    As usual, the dictionary $A$ is chosen as $X$ in this test, i.e., $A=X$. Given the noisy data $X$, our goal is to derive the block-diagonal affinity matrix $Z^*$ and recover the low-rank matrix by setting $\hat X=AZ^*$, or equivalently $\hat X=X-E^*$. To attain better performance, the value of the penalty parameter $\mu$ is taken as a nondecreasing sequence $10^{-6}\le\mu_i\le 10^{10}$ with the relationship $\mu_{i+1}=\rho\mu_i$ and $\rho=1.1$. Moreover, the weighting parameter is chosen as $\lambda=0.1$, which consistently achieved good solutions in preparatory experiments. All tested algorithms start from the zero matrix and terminate when the changes between two consecutive iterations are sufficiently small, i.e.,

    where $\|\cdot\|_{\infty}$ denotes the maximum absolute value of the components of a matrix, and $\epsilon$ is a tolerance whose value is fixed in all the following tests. To specifically illustrate the performance of each algorithm, we present two comparison results in terms of the number of iterations and the running time as the number of inner steps varies from 1 to 10 in Figure 1.
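    A sketch of the termination test and the penalty schedule just described; the default tolerance value is an assumption.

```python
import numpy as np

def stop_check(Z1, Z0, tol=1e-8):
    """Termination test used above: the maximum absolute componentwise
    change between two consecutive iterates falls below the tolerance
    (the default tolerance value here is an assumption)."""
    return np.max(np.abs(Z1 - Z0)) <= tol

def next_mu(mu, rho=1.1, mu_max=1e10):
    """Nondecreasing penalty schedule mu_{i+1} = rho * mu_i, capped at 1e10."""
    return min(rho * mu, mu_max)
```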

    Fig. 1 Comparison of the performance of NMM_v1, NMM_v2, and NMM_v3 in terms of the number of iterations (left) and the CPU time required (right) as the number of inner steps varies (x-axis).

    As can be seen from Figure 1, the number of iterations required by algorithms NMM_v2 and NMM_v3 decreases dramatically at the beginning and only slightly once the permitted number of inner iterations exceeds 5. It can also be observed that NMM_v3 needs fewer iterations but more computing time than NMM_v2. The reason is that each new inner iterate requires a full singular value decomposition (SVD), which is the main computational burden of the inner iterative process. Another surprising observation is that the number of iterations required by NMM_v1 remains invariant regardless of the number of inner iterations. This is because the new $Z^{k+1}$ and $J^{k+1}$ are obtained exactly in a single step owing to the special constraint $Z-J=0$.

    5.2. Compare with LRR and SLAL

    To further verify the efficiency of the algorithms NMM_v2 and NMM_v3, we test them against the solvers LRR and SLAL for performance comparison with different percentages of grossly corrupted data. The Matlab package of LRR is available at http://sites.google.com/site/guangcanliu/. In running LRR and SLAL, we set all the parameters to their default values except for $\lambda=0.1$, which is the best choice for the data settings according to extensive preparatory experiments. The noisy data are created in the same way as in the previous experiment. In this test, the initial points, the stopping criterion, and all the parameter values are the same as in the previous test. Meanwhile, the quality of the restoration $\hat X$ is measured by means of the recovery error $\|\hat X-X_0\|_F/\|X_0\|_F$, where $X_0$ is the original noiseless data matrix. Moreover, for algorithms NMM_v2 and NMM_v3, we fix the number of inner iterations at 4 to balance the number of iterations and the computing time. The numerical results, including the number of iterations (Iter), the CPU time required (Time), and the recovery error (Error), are listed in Table 1.

    Table 1 Comparison results of NMM_v2 and NMM_v3 with LRR and SLAL.

    It can be seen from Table 1 that, for all the tested cases, each algorithm obtains comparable recovery errors. It is further observed that, compared with NMM_v3, NMM_v2 requires more iterations but the least CPU time. Moreover, both NMM_v2 and NMM_v3 require fewer iterations than LRR, which indicates that more inner loops may decrease the total number of outer iterations. This important observation experimentally verifies that the proposed approaches can accelerate the convergence of LRR. We now turn our attention to the state-of-the-art solver SLAL. We clearly see that SLAL is the fastest among the tested solvers. However, when the number of corrupted samples is relatively small (less than 60 percent), SLAL needs more iterations. From these limited performance comparisons, we conclude that our proposed algorithms perform quite well and are competitive with the well-known codes LRR and SLAL.

    §6. Concluding remarks

    In this paper, we have proposed, analyzed, and tested three variants of ADMM for solving the non-smooth convex minimization problem involving the nuclear norm and the $\ell_{2,1}$-norm. The problem mainly appears in the fields of pattern analysis, signal processing, and data mining, and is used to find and exploit the low-dimensional structure of given high-dimensional noisy data. The earliest solver, LRR, reformulated the problem into an equivalent model by adding a new variable and a new constraint, and derived the value of each variable alternately. Using problem (1.1) as an example, this paper showed that once one variable is obtained, the other two variables can be grouped together and then minimized alternately by the standard ADMM. For the variants NMM_v2 and NMM_v3, we numerically illustrated that, as the number of inner steps grows, both algorithms converge faster and faster in terms of outer iterations.

    There is no doubt that when the inner process runs only once without achieving convergence, all the proposed methods reduce to LRR; this is the main theoretical contribution of this paper. Unfortunately, the number of iterations generated by NMM_v1 remains unchanged whatever the number of inner steps; we believe this is because the exact solutions of $Z$ and $J$ are already produced even when the inner loop runs only once. Moreover, we have performed comparisons with the solvers LRR and SLAL from the recent literature. The results show that both NMM_v2 and NMM_v3 require fewer iterations to obtain reconstructions of similar quality. To conclude, we hope that our methods and their further modifications can find wider applications in relevant areas such as pattern analysis, signal processing, and data mining.

    Acknowledgements

    We are grateful to the reviewers for their valuable suggestions and comments.
