
    Reducing parameter space for neural network training


    Tong Qin, Ling Zhou, Dongbin Xiu*

    Department of Mathematics, The Ohio State University, Columbus, OH 43210, USA

Keywords: Rectified linear unit network; Universal approximator; Reduced space

ABSTRACT: For neural networks (NNs) with rectified linear unit (ReLU) or binary activation functions, we show that their training can be accomplished in a reduced parameter space. Specifically, the weights in each neuron can be trained on the unit sphere, as opposed to the entire space, and the threshold can be trained in a bounded interval, as opposed to the real line. We show that the NNs in the reduced parameter space are mathematically equivalent to the standard NNs with parameters in the whole space. The reduced parameter space facilitates the optimization procedure for the network training, as the search space becomes (much) smaller. We demonstrate the improved training performance using numerical examples.

    Interest in neural networks (NNs) has significantly increased in recent years because of the successes of deep networks in many practical applications.

Complex and deep neural networks are known to be capable of learning very complex phenomena that are beyond the capabilities of many other traditional machine learning techniques. The amount of literature is too large to survey here; we cite just a few review-type publications [1–7].

In an NN, each neuron produces an output of the following form,

σ(w · x_in + b),    (1)

where the vector x_in represents the signal from all incoming connecting neurons, w contains the weights for the inputs, σ is the activation function, and b is its threshold. In a complex (and deep) network with a large number of neurons, the total number of free parameters w and b can be exceedingly large. Their training thus poses a tremendous numerical challenge, as the objective function (loss function) to be optimized becomes highly non-convex, with a highly complicated landscape [8]. Any numerical optimization procedure can be trapped in a local minimum and produce unsatisfactory training results.

This paper is not concerned with the numerical algorithm aspect of NN training. Instead, its purpose is to show that the training of NNs can be conducted in a reduced parameter space, thus providing any numerical optimization algorithm a smaller space in which to search for the parameters. This reduction applies to activation functions with the following scaling property: for any α ≥ 0, σ(α·y) = γ(α)σ(y), where γ depends only on α. The binary activation function [9], one of the first activation functions used, satisfies this property with γ ≡ 1. The rectified linear unit (ReLU) [10, 11], one of the most widely used activation functions nowadays, satisfies this property with γ = α. For NNs with this type of activation function, we show that they can be equivalently trained in a reduced parameter space. More specifically, let the length of the weights w be d. Instead of training w in R^d, they can be equivalently trained as unit vectors with ‖w‖ = 1, i.e., w ∈ S^{d−1}, the unit sphere in R^d. Moreover, if one is interested in approximating a function defined in a compact domain D, the threshold can also be trained in a bounded interval b ∈ [−X_B, X_B], as opposed to the entire real line b ∈ R. It is well known that the standard NN with a single hidden layer is a universal approximator, cf. Refs. [12–14]. Here we prove that our new formulation in the reduced parameter space is also a universal approximator, in the sense that its span is dense in C(R^d). We then further extend the parameter space constraints to NNs with multiple hidden layers. The major advantage of the constraints on the weights and thresholds is that they significantly reduce the search space for these parameters during training. Consequently, this eliminates a potentially large number of undesirable local minima that may cause the training optimization algorithm to terminate prematurely, which is one of the reasons for unsatisfactory training results. We then present examples of function approximation to verify this numerically. Using the same network structure, optimization solver, and identical random initialization, our numerical tests show that the training results in the new reduced parameter space are notably better than those from the standard network. More importantly, training in the reduced parameter space is much more robust against initialization.

This paper is organized as follows. We first derive the constraints on the parameters using a network with a single hidden layer, while enforcing the equivalence of the network. Then we prove that the constrained NN formulation remains a universal approximator. After that, we present the constraints for NNs with multiple hidden layers. Finally, we present numerical experiments to demonstrate the improvement of the network training using the reduced parameter space. We emphasize that this paper is not concerned with any particular training algorithm. Therefore, in our numerical tests we used the most standard optimization algorithm from MATLAB®. The additional constraints might add complexity to the optimization problem. Efficient optimization algorithms specifically designed for solving these constrained optimization problems will be investigated in a separate paper.

Let us first consider the standard NN with a single hidden layer, in the context of approximating an unknown response function f : R^d → R. The NN approximation using activation function σ takes the following form,

N(x) = Σ_{j=1}^{N} c_j σ(w_j · x + b_j),    (2)

where w_j ∈ R^d is the weight vector, b_j ∈ R the threshold, c_j ∈ R, and N is the width of the network.
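As a concrete illustration of the single-hidden-layer form Eq. (2), the following minimal Python sketch (ours, not part of the paper; function and variable names are our own) evaluates such a network with the ReLU activation at a batch of points.

```python
import numpy as np

def relu(y):
    # ReLU activation, Eq. (3): sigma(y) = max(0, y)
    return np.maximum(0.0, y)

def nn_single_layer(x, W, b, c):
    """Evaluate N(x) = sum_j c_j * sigma(w_j . x + b_j), Eq. (2).

    x : (n_samples, d) input points
    W : (N, d) weight vectors w_j (rows)
    b : (N,)  thresholds b_j
    c : (N,)  output coefficients c_j
    """
    z = x @ W.T + b          # (n_samples, N): w_j . x + b_j
    return relu(z) @ c       # (n_samples,):   sum_j c_j * sigma(...)

# toy usage: d = 2, width N = 5
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=(4, 2))
W = rng.normal(size=(5, 2))
b = rng.normal(size=5)
c = rng.normal(size=5)
print(nn_single_layer(x, W, b, c))
```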

We restrict our discussion to the following two activation functions. One is the ReLU,

σ(y) = max(0, y).    (3)

The other one is the binary activation function, also known as the Heaviside/step function,

σ(y) = 1 for y > 0,   σ(y) = 0 for y < 0,    (4)

with σ(0) = 1/2.

We remark that these two activation functions satisfy the following scaling property: for any y ∈ R and α ≥ 0, there exists a constant γ ≥ 0 such that

σ(α y) = γ(α) σ(y),    (5)

where γ depends only on α but not on y. The ReLU satisfies this property with γ(α) = α, which is known as scale invariance. The binary activation function satisfies the scaling property with γ(α) ≡ 1.
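The scaling property Eq. (5) is easy to check numerically. The short Python sketch below (ours, not from the paper) verifies it for both activation functions on a few sampled values of α and y.

```python
import numpy as np

relu = lambda y: np.maximum(0.0, y)
binary = lambda y: np.where(y > 0, 1.0, np.where(y < 0, 0.0, 0.5))  # sigma(0) = 1/2

y = np.linspace(-3.0, 3.0, 13)
for alpha in [0.5, 2.0, 7.3]:
    # ReLU: sigma(alpha * y) = alpha * sigma(y), i.e., gamma(alpha) = alpha
    assert np.allclose(relu(alpha * y), alpha * relu(y))
    # Binary: sigma(alpha * y) = sigma(y) for alpha > 0, i.e., gamma(alpha) = 1
    assert np.allclose(binary(alpha * y), binary(y))
print("scaling property Eq. (5) verified for the sampled values")
```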

    We also list the following properties, which are important for the method we present in this paper.

• For the binary activation function Eq. (4), for any x ∈ R,

σ(x) + σ(−x) = 1.    (6)

• For the ReLU activation function Eq. (3), for any x ∈ R and any α,

σ(x + α) − σ(x) + σ(−x) − σ(−x − α) = α.    (7)

We first show that the training of Eq. (2) can be equivalently conducted with the constraint ‖w_j‖ = 1, i.e., unit weight vectors. This is a straightforward result of the scaling property Eq. (5) of the activation function. It effectively reduces the search space for the weights from w_j ∈ R^d to w_j ∈ S^{d−1}, the unit sphere in R^d.

Proposition 1. Any neural network construction Eq. (2) using the ReLU Eq. (3) or the binary Eq. (4) activation function has an equivalent form

N(x) = Σ_{j=1}^{N} ĉ_j σ(ŵ_j · x + b̂_j),   ‖ŵ_j‖ = 1,   j = 1, 2, ..., N.    (8)

Proof. Let us first assume ‖w_j‖ ≠ 0 for all j = 1, 2, ..., N. We then have

σ(w_j · x + b_j) = σ(‖w_j‖ (ŵ_j · x + b̂_j)) = γ(‖w_j‖) σ(ŵ_j · x + b̂_j),   ŵ_j = w_j / ‖w_j‖,   b̂_j = b_j / ‖w_j‖,

where γ is the factor in the scaling property Eq. (5) satisfied by both the ReLU and the binary activation functions. Upon defining

ĉ_j = γ(‖w_j‖) c_j,

we have the following equivalent form as in Eq. (8),

N(x) = Σ_{j=1}^{N} ĉ_j σ(ŵ_j · x + b̂_j),   ‖ŵ_j‖ = 1.

Next, let us consider the case ‖w_j‖ = 0 for some j ∈ {1, 2, ..., N}. The contribution of this term to the construction Eq. (2) is the constant c_j σ(b_j), which can again be written in terms of neurons with unit weight vectors: via the relation Eq. (6) for the binary activation function, and via the relation Eq. (7) for the ReLU.

    The proof immediately gives us another equivalent form, by combining all the constant terms from Eq. (2) into a single constant first and then explicitly including it in the expression.

Corollary 2. Any neural network construction Eq. (2) using the ReLU Eq. (3) or the binary Eq. (4) activation function has an equivalent form

N(x) = Σ_{j} ĉ_j σ(ŵ_j · x + b̂_j) + c,   ‖ŵ_j‖ = 1,    (9)

where c ∈ R is a constant and the sum has at most N terms.
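The normalization used in the proof, ŵ_j = w_j/‖w_j‖, b̂_j = b_j/‖w_j‖ and ĉ_j = γ(‖w_j‖) c_j, is easy to verify numerically. The sketch below is ours (not the paper's code); it assumes the ReLU case, for which γ(α) = α, and checks that the original network Eq. (2) and the unit-weight form Eq. (8) agree on random test points.

```python
import numpy as np

relu = lambda y: np.maximum(0.0, y)

def nn(x, W, b, c):
    # N(x) = sum_j c_j * relu(w_j . x + b_j)
    return relu(x @ W.T + b) @ c

rng = np.random.default_rng(1)
d, width, n = 3, 8, 5
x = rng.normal(size=(n, d))
W = rng.normal(size=(width, d))
b = rng.normal(size=width)
c = rng.normal(size=width)

# Proposition 1 normalization (ReLU: gamma(alpha) = alpha)
norms = np.linalg.norm(W, axis=1)          # assume no zero-weight neurons here
W_hat = W / norms[:, None]                 # unit weight vectors
b_hat = b / norms                          # rescaled thresholds
c_hat = c * norms                          # rescaled coefficients

assert np.allclose(nn(x, W, b, c), nn(x, W_hat, b_hat, c_hat))
print("Eq. (2) and the unit-weight form Eq. (8) agree on the test points")
```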

We now present constraints on the thresholds in Eq. (2). This is applicable when the target function f is defined on a compact domain, i.e., f : D → R, with D ⊂ R^d bounded and closed. This is often the case in practice. We demonstrate that any NN Eq. (2) can be trained equivalently in a bounded interval for each of the thresholds. This (significantly) reduces the search space for the thresholds.

Proposition 3. With the ReLU Eq. (3) or the binary Eq. (4) activation function, let Eq. (2) be an approximator to a function f : D → R, where D ⊂ R^d is a bounded domain. Let

X_B = max_{x ∈ D} ‖x‖.    (10)

Then, for any x ∈ D, Eq. (2) has an equivalent form

Ñ(x) = Σ_{j} ĉ_j σ(ŵ_j · x + b̂_j),   ‖ŵ_j‖ = 1,   b̂_j ∈ [−X_B, X_B].    (11)

Proof. Proposition 1 establishes that Eq. (2) has an equivalent form Eq. (8) with unit weight vectors ‖ŵ_j‖ = 1. For any x ∈ D, we then have

|ŵ_j · x| ≤ ‖ŵ_j‖ ‖x‖ ≤ X_B,

where the bound Eq. (10) is used.

Let us first consider the case b̂_j < −X_B. Then ŵ_j · x + b̂_j < 0 for all x ∈ D, and the corresponding term in Eq. (8) vanishes identically for both the ReLU and the binary activation functions; such terms can therefore be dropped. The terms with b̂_j ∈ [−X_B, X_B] already satisfy the constraint in Eq. (11).

Next, let us consider the case b̂_j > X_B; then ŵ_j · x + b̂_j > 0 for all x ∈ D. Let J = {j_1, j_2, ..., j_L}, L ≥ 1, be the set of terms in Eq. (8) that satisfy this condition. We then have ŵ_{j_ℓ} · x + b̂_{j_ℓ} > 0 for all x ∈ D and ℓ = 1, 2, ..., L. We now show that the net contribution of these terms in Eq. (8) is included in the equivalent form Eq. (11).

(1) For the binary activation function Eq. (4), the contribution of these terms to the approximation Eq. (8) is

Σ_{ℓ=1}^{L} ĉ_{j_ℓ} σ(ŵ_{j_ℓ} · x + b̂_{j_ℓ}) = Σ_{ℓ=1}^{L} ĉ_{j_ℓ},

a constant. Again, using the relation Eq. (6), any constant can be expressed by a combination of binary activation terms with unit weight vectors and thresholds b̂ = 0. Such terms are already included in Eq. (11).

(2) For the ReLU activation function Eq. (3), the contribution of these terms to Eq. (8) is

Σ_{ℓ=1}^{L} ĉ_{j_ℓ} σ(ŵ_{j_ℓ} · x + b̂_{j_ℓ}) = Σ_{ℓ=1}^{L} ĉ_{j_ℓ} (ŵ_{j_ℓ} · x + b̂_{j_ℓ}) = v · x + C = σ(v · x) − σ(−v · x) + C,   v = Σ_{ℓ=1}^{L} ĉ_{j_ℓ} ŵ_{j_ℓ},   C = Σ_{ℓ=1}^{L} ĉ_{j_ℓ} b̂_{j_ℓ},

where the last equality follows from the simple property of the ReLU function, σ(y) − σ(−y) = y. Using Proposition 1, the first two terms then have an equivalent form using unit weight vectors and zero thresholds, which is included in Eq. (11). For the constant C, we again invoke the relation Eq. (7) and represent it by ReLU terms with an arbitrary unit weight vector and thresholds 0 and ±α, where 0 < α ≤ X_B; such terms are again included in Eq. (11).

We remark that the equivalent form in the reduced parameter space Eq. (11) has a different number of "active" neurons than the original unrestricted case Eq. (2). The equivalence between the standard NN expression Eq. (2) and the constrained expression Eq. (11) indicates that the NN training can be conducted in a reduced parameter space. For the weights w_j in each neuron, the training can be conducted on S^{d−1}, the unit sphere in R^d, as opposed to the entire space R^d. For the thresholds, the training can be conducted in the bounded interval [−X_B, X_B], as opposed to the entire real line R. The reduction of the parameter space can eliminate many potential local minima and therefore enhance the performance of numerical optimization during the training.

This advantage can be illustrated by the following one-dimensional example. Consider an NN with one neuron, N(x) = c σ(wx + b), with the ReLU activation function. Suppose we are given one data point (1, 1); we would then like to fix the parameters c ∈ R, w ∈ R, and b ∈ R by minimizing the following mean squared loss, i.e.,

J(c, w, b) = (c σ(w + b) − 1)².

Global minimizers for this objective function lie in the set

{(c, w, b) : c σ(w + b) = 1}.

By the definition of the ReLU function, in the region

Z = {(c, w, b) : c ∈ R, w + b < 0},

we have J(c, w, b) ≡ 1, which means every point in Z is a local minimum.

However, if we consider the equivalent minimization problem in the reduced parameter space, i.e.,

min_{c ∈ R, w ∈ {−1, 1}, b ∈ [−1, 1]} J(c, w, b),

the local minima set is now reduced to

Ẑ = {(c, w, b) : c ∈ R, w = −1, b ∈ [−1, 1), w + b < 0},

which is much smaller than Z.
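The flat region Z can be seen directly by evaluating the loss on a grid. The short sketch below (ours, purely for illustration) counts, over a coarse grid of (c, w, b) values, how many grid points are non-global local minima with J ≡ 1 in the unconstrained setting versus how many remain once w is restricted to {−1, +1} and b to [−1, 1].

```python
import numpy as np

relu = lambda y: np.maximum(0.0, y)
# loss for the single data point (1, 1): J(c, w, b) = (c * relu(w + b) - 1)^2
J = lambda c, w, b: (c * relu(w * 1.0 + b) - 1.0) ** 2

grid = np.linspace(-2.0, 2.0, 41)

# Unconstrained: (c, w, b) in R^3 -- grid points in the flat region w + b < 0, where J = 1
flat_unconstrained = sum(1 for c in grid for w in grid for b in grid
                         if w + b < 0 and J(c, w, b) == 1.0)

# Reduced space: w in {-1, +1} (the unit "sphere" in R), b in [-1, 1]
b_grid = grid[np.abs(grid) <= 1.0]
flat_reduced = sum(1 for c in grid for w in (-1.0, 1.0) for b in b_grid
                   if w + b < 0 and J(c, w, b) == 1.0)

print(f"flat (J = 1) grid points, unconstrained: {flat_unconstrained}")
print(f"flat (J = 1) grid points, reduced space: {flat_reduced}")
```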

To establish the universal approximation property, we aim to show that the constrained formulations Eqs. (8) and (11) can approximate any continuous function. To this end, we define the following set of functions on R^d,

N(σ; Λ, Θ) = { Σ_{j=1}^{n} c_j σ(w_j · x + b_j) : n ≥ 1, c_j ∈ R, w_j ∈ Λ, b_j ∈ Θ },

where Λ ⊆ R^d is the weight set and Θ ⊆ R the threshold set. We also denote by N_D(σ; Λ, Θ) the same set of functions when confined to a compact domain D ⊂ R^d.

Following this definition, the standard NN expression and our two constrained expressions correspond to the following spaces,

N(σ; R^d, R),   N(σ; S^{d−1}, R),   N_D(σ; S^{d−1}, Θ),   Θ = [−X_B, X_B],

where S^{d−1} is the unit sphere in R^d, since ‖ŵ‖ = 1.

The universal approximation property for the standard unconstrained NN expression Eq. (2) has been well studied, cf. Refs. [13–17], and Ref. [12] for a survey. Here we cite the following result for N(σ; R^d, R).

Theorem 4 (Ref. [17], Theorem 1). Let σ be a function in L^∞_loc(R), of which the set of discontinuities has Lebesgue measure zero. Then the set N(σ; R^d, R) is dense in C(R^d), in the topology of uniform convergence on compact sets, if and only if σ is not an algebraic polynomial almost everywhere.

    We now examine the universal approximation property for the first constrained formulation Eq. (8).

Theorem 5. Let σ be the binary function Eq. (4) or the ReLU function Eq. (3). Then we have

N(σ; S^{d−1}, R) = N(σ; R^d, R),    (17)

and the set N(σ; S^{d−1}, R) is dense in C(R^d), in the topology of uniform convergence on compact sets.

Proof. Obviously, we have N(σ; S^{d−1}, R) ⊆ N(σ; R^d, R). By Proposition 1, any element N(x) ∈ N(σ; R^d, R) can be reformulated as an element of N(σ; S^{d−1}, R). Therefore, we have N(σ; R^d, R) ⊆ N(σ; S^{d−1}, R). This concludes the first statement, Eq. (17). Given the equivalence Eq. (17), the denseness result immediately follows from Theorem 4, as both the ReLU and the binary activation functions are not polynomials and are continuous everywhere except on a set of zero Lebesgue measure.

We now examine the second constrained NN expression Eq. (11).

    Fig. 1. Numerical results for Eq. (26) with one sequence of random initialization.

Theorem 6. Let σ be the binary Eq. (4) or the ReLU Eq. (3) activation function. Let x ∈ D ⊂ R^d, where D is closed and bounded, with X_B defined in Eq. (10). Define Θ = [−X_B, X_B]. Then

N_D(σ; S^{d−1}, Θ) = N_D(σ; R^d, R).    (18)

Furthermore, N_D(σ; S^{d−1}, Θ) is dense in C(D) in the topology of uniform convergence.

Proof. Obviously, we have N_D(σ; S^{d−1}, Θ) ⊆ N_D(σ; R^d, R). On the other hand, Proposition 3 establishes that for any element N(x) ∈ N_D(σ; R^d, R), there exists an equivalent formulation Ñ(x) ∈ N_D(σ; S^{d−1}, Θ) for any x ∈ D. This implies N_D(σ; R^d, R) ⊆ N_D(σ; S^{d−1}, Θ). We then have Eq. (18).

For the denseness of N_D(σ; S^{d−1}, Θ) in C(D), let us consider any function f ∈ C(D). By the Tietze extension theorem (cf. Ref. [18]), there exists an extension F ∈ C(R^d) with F(x) = f(x) for any x ∈ D. Then, the denseness result for the standard unconstrained NN expression (Theorem 4) implies that, for the compact set E = {x ∈ R^d : ‖x‖ ≤ X_B} and any given ε > 0, there exists N(x) ∈ N(σ; R^d, R) such that

max_{x ∈ E} |F(x) − N(x)| < ε.

By Proposition 3, there exists an equivalent constrained NN expression Ñ(x) ∈ N_D(σ; S^{d−1}, Θ) such that Ñ(x) = N(x) for any x ∈ D. Since D ⊆ E and F = f on D, we then immediately have, for any f ∈ C(D) and any given ε > 0, an Ñ(x) ∈ N_D(σ; S^{d−1}, Θ) such that

max_{x ∈ D} |f(x) − Ñ(x)| < ε.

    Fig. 2. Numerical results for Eq. (26) with a second sequence of random initialization.

    The proof is now complete.

We now generalize the previous result to feedforward NNs with multiple hidden layers. Let us again consider the approximation of a multivariate function f : D → R, where D ⊂ R^d is a compact subset of R^d, with X_B defined in Eq. (10).

Consider a feedforward NN with M layers, M ≥ 3, where m = 1 is the input layer and m = M the output layer. Let J_m, m = 1, 2, ..., M, be the number of neurons in each layer. Obviously, we have J_1 = d and J_M = 1 in our case. Let y^(m) ∈ R^{J_m} be the output of the neurons in the m-th layer. Then, by following the notation from Ref. [5], we can write

y^(m) = σ((W^(m−1))ᵀ y^(m−1) + b^(m)),   m = 2, ..., M,    (19)

where W^(m−1) ∈ R^{J_{m−1} × J_m} is the weight matrix and b^(m) is the threshold vector. In component form, the output of the j-th neuron in the m-th layer is

y_j^(m) = σ(w_j^(m−1) · y^(m−1) + b_j^(m)),   j = 1, ..., J_m,    (20)

where w_j^(m−1) denotes the j-th column of W^(m−1).

The derivation of the constraints on the weight vectors w_j^(m) can be generalized directly from the single-layer case, and we have the following weight constraints,

‖w_j^(m)‖ = 1,   for every column w_j^(m) of W^(m),   m = 1, ..., M − 1.    (21)
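To make the multi-layer weight constraint Eq. (21) concrete, the sketch below (ours; the storage convention, layer names, and the linear output layer are our assumptions for illustration) evaluates a feedforward ReLU network in which every weight vector feeding a neuron is rescaled to unit Euclidean length. For convenience the code stores each neuron's incoming weight vector as a row, i.e., the transpose of the column convention used above.

```python
import numpy as np

relu = lambda y: np.maximum(0.0, y)

def normalize_rows(W, eps=1e-12):
    # Enforce the weight constraint Eq. (21): each neuron's incoming
    # weight vector is rescaled to unit Euclidean norm.
    return W / np.maximum(np.linalg.norm(W, axis=1, keepdims=True), eps)

def constrained_forward(x, weights, thresholds):
    """Feedforward pass with unit-norm weight rows.

    x          : (n, d) inputs
    weights    : list of matrices, weights[m] has shape (J_{m+2}, J_{m+1})
    thresholds : list of vectors,  thresholds[m] has shape (J_{m+2},)
    """
    y = x
    for m, (W, b) in enumerate(zip(weights, thresholds)):
        z = y @ normalize_rows(W).T + b
        # hidden layers use the ReLU; we keep the last layer linear here
        y = z if m == len(weights) - 1 else relu(z)
    return y

# toy usage: structure {2, 20, 10, 1}
rng = np.random.default_rng(2)
sizes = [2, 20, 10, 1]
weights = [rng.normal(size=(sizes[m + 1], sizes[m])) for m in range(len(sizes) - 1)]
thresholds = [rng.normal(size=sizes[m + 1]) for m in range(len(sizes) - 1)]
print(constrained_forward(rng.uniform(0, 1, size=(3, 2)), weights, thresholds).shape)
```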

The constraints on the thresholds b^(m) depend on the bounds of the output from the previous layer, y^(m−1).

For the ReLU activation function Eq. (3), we derive from Eq. (20) that, for m = 2, 3, ..., M, the output of each layer can be bounded in terms of the output of the previous layer and its thresholds. If the domain D is bounded, with X_B defined in Eq. (10), then the constraints on the thresholds can be recursively derived. Starting from ‖y^(1)‖ = ‖x‖ ≤ X_B and b_j^(2) ∈ [−X_B, X_B], we then obtain the corresponding threshold constraints Eq. (23) for the subsequent layers.
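To illustrate how such a recursion can work for the ReLU case, the sketch below is our own illustration (it is not the paper's Eq. (22) or Eq. (23)). It propagates a bound on ‖y^(m)‖ through the layers under the unit-weight constraint Eq. (21), using the elementary estimates |ŵ · y| ≤ ‖y‖ and ‖y^(m)‖ ≤ √J_m · max_j |y_j^(m)|; the exact constants in Eq. (23) may be tighter.

```python
import numpy as np

def relu_threshold_bounds(layer_sizes, X_B):
    """Recursively bound the layer outputs and hence the threshold
    intervals [-B_m, B_m] for a ReLU network with unit-norm weight vectors.

    layer_sizes : [J_1, J_2, ..., J_M]
    X_B         : bound on ||x|| over the domain D, Eq. (10)
    """
    bounds = []
    y_norm = X_B                      # ||y^(1)|| = ||x|| <= X_B
    for J_next in layer_sizes[1:]:
        B = y_norm                    # thresholds of the next layer: b in [-B, B]
        bounds.append(B)
        # each neuron output: 0 <= relu(w . y + b) <= ||y|| + B, with ||w|| = 1
        y_norm = np.sqrt(J_next) * (y_norm + B)
    return bounds

# toy usage: structure {1, 20, 10, 1} on D = [0, 1], so X_B = 1
print(relu_threshold_bounds([1, 20, 10, 1], X_B=1.0))
```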

For the binary activation function Eq. (4), we derive from Eq. (20) that, for m = 2, 3, ..., M − 1, each component of y^(m) lies in [0, 1], so that ‖y^(m)‖ ≤ √J_m. Then, the bounds for the thresholds are

b_j^(m+1) ∈ [−√J_m, √J_m].

    We present numerical examples to demonstrate the properties of the constrained NN training. We focus exclusively on the ReLU activation function Eq. (3) due to its overwhelming popularity in practice.

Given a set of training samples {(x_i, f(x_i))}, i = 1, ..., n, the weights and thresholds are trained by minimizing the following mean squared error (MSE),

(1/n) Σ_{i=1}^{n} (N(x_i) − f(x_i))².

We conduct the training using the standard unconstrained NN formulation Eq. (2) and our new constrained formulation Eq. (11), and compare the training results. In all tests, both formulations use exactly the same randomized initial conditions for the weights and thresholds. Since our new constrained formulation is irrespective of the numerical optimization algorithm, we use one of the most accessible optimization routines from MATLAB®: the function fminunc for unconstrained optimization and the function fmincon for constrained optimization. For the optimization options, we use the sequential quadratic programming (sqp) algorithm for both the constrained and unconstrained optimization, with the same optimality tolerance of 10^{−6} and maximum iteration number of 1500. It is natural to explore the specific form of the constraints in Eq. (11) to design more effective constrained optimization algorithms. This is, however, out of the scope of the current paper.
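The MATLAB setup used in the paper is not reproduced here. As an illustration only, the following Python sketch (ours) sets up an analogous constrained problem with scipy.optimize.minimize: the MSE of a single-hidden-layer ReLU network is minimized subject to ‖w_j‖ = 1 (equality constraints) and b_j ∈ [−X_B, X_B] (bound constraints). The test function, solver settings, and all names are our own assumptions, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
d, width = 2, 10
X_B = np.sqrt(2.0)                       # max ||x|| over D = [0, 1]^2

# training data for a smooth test function (our choice, not Eq. (26) or Eq. (29))
x_train = rng.uniform(0.0, 1.0, size=(200, d))
y_train = np.cos(2.0 * np.pi * x_train[:, 0]) * x_train[:, 1]

def unpack(p):
    W = p[:width * d].reshape(width, d)
    b = p[width * d:width * (d + 1)]
    c = p[width * (d + 1):]
    return W, b, c

def mse(p):
    W, b, c = unpack(p)
    pred = np.maximum(0.0, x_train @ W.T + b) @ c
    return np.mean((pred - y_train) ** 2)

# ||w_j|| = 1 for each neuron (weight constraint in Eq. (11))
constraints = [{"type": "eq", "fun": (lambda p, j=j: np.linalg.norm(unpack(p)[0][j]) - 1.0)}
               for j in range(width)]
# thresholds restricted to [-X_B, X_B]; weights and coefficients left unbounded
bounds = [(None, None)] * (width * d) + [(-X_B, X_B)] * width + [(None, None)] * width

p0 = rng.normal(size=width * (d + 2))
result = minimize(mse, p0, method="SLSQP", bounds=bounds, constraints=constraints,
                  options={"maxiter": 500, "ftol": 1e-9})
print("final training MSE:", result.fun)
```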

    Fig. 3. Numerical results for Eq. (26) with a third sequence of random initialization

    After training the networks, we evaluate the network approximation errors using another set of samples—a validation sample set, which consists of randomly generated points that are independent of the training sample set. Even though our discussion applies to functions in arbitrary dimension d ≥1, we present only the numerical results in d =1 and d =2 because they can be easily visualized.

We first examine the approximation results using NNs with a single hidden layer, with and without constraints.

    We first consider a one-dimensional smooth function

The constrained formulation Eq. (11) becomes

N(x) = Σ_{j=1}^{N} c_j σ(w_j x + b_j),   w_j ∈ {−1, +1},   b_j ∈ [−1, 1],    (27)

since for d = 1 the unit sphere reduces to the two points ±1 and X_B = 1 on D = [0, 1]. Due to the simple form of the weights and the domain D = [0, 1], the proof of Proposition 3 also gives us the following tighter bounds for the thresholds for this specific problem,

b_j ∈ [−1, 0] if w_j = 1,   and   b_j ∈ [0, 1] if w_j = −1.    (28)

Fig. 4. Numerical results for Eq. (29). (a) Unconstrained formulation N(x) in Eq. (2). (b) Constrained formulation N(x) in Eq. (11).

We approximate Eq. (26) with NNs with one hidden layer, which consists of 20 neurons. The size of the training data set is 200. Numerical tests were performed for different choices of random initializations. It is known that NN training performance depends on the initialization. In Figs. 1–3, we show the numerical results for three sets of different random initializations. In each set, the unconstrained NN Eq. (2), the constrained NN Eq. (27), and the specialized constrained NN with Eq. (28) use the same random sequence for initialization. We observe that the standard NN formulation without constraints Eq. (2) produces training results critically dependent on the initialization. This is widely acknowledged in the literature. On the other hand, our new constrained NN formulations are more robust and produce better results that are less sensitive to the initialization. The tighter constraint Eq. (28) performs better than the general constraint Eq. (27), which is not surprising. However, the tighter constraint is a special case for this particular problem and is not available in the general case.

We next consider two-dimensional functions. In particular, we show results for Franke's function [19]

Fig. 5. Numerical results for the 1D function Eq. (26) with feedforward NNs of structure {1, 20, 10, 1}. From top to bottom: training results using three different random sequences for initialization. Left column: results by the unconstrained NN formulation Eq. (19). Right column: results by the NN formulation with constraints Eqs. (21) and (23).

with (x, y) ∈ [0, 1]². Again, we compare training results for both the standard NN without constraints Eq. (2) and our new constrained NN Eq. (8), using the same random sequence for initialization. The NNs have one hidden layer with 40 neurons. The size of the training set is 500 and that of the validation set is 1000. The numerical results are shown in Fig. 4. In the left column, the contour lines of the training results are shown, as well as those of the exact function. Here all contour lines are at the same values, from 0 to 1 with an increment of 0.1. We observe that the constrained NN formulation produces a visually better result than the standard unconstrained formulation. In the right column, we plot the function values along the cut y = 0.2x. Again, the improvement of the constrained NN is visible.
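For reference, a commonly used form of Franke's function on [0, 1]² is sketched below in Python (our code; we assume Eq. (29) refers to this standard definition from Ref. [19], which should be checked against the original).

```python
import numpy as np

def franke(x, y):
    """Standard Franke test function on [0, 1]^2 (commonly cited form)."""
    term1 = 0.75 * np.exp(-((9 * x - 2) ** 2) / 4.0 - ((9 * y - 2) ** 2) / 4.0)
    term2 = 0.75 * np.exp(-((9 * x + 1) ** 2) / 49.0 - (9 * y + 1) / 10.0)
    term3 = 0.5 * np.exp(-((9 * x - 7) ** 2) / 4.0 - ((9 * y - 3) ** 2) / 4.0)
    term4 = -0.2 * np.exp(-((9 * x - 4) ** 2) - ((9 * y - 7) ** 2))
    return term1 + term2 + term3 + term4

# evaluate along the cut y = 0.2 x used in the figures
x = np.linspace(0.0, 1.0, 5)
print(franke(x, 0.2 * x))
```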

We now consider feedforward NNs with multiple hidden layers. We present results for both the standard NN without constraints Eq. (19) and the constrained ReLU NNs with the constraints Eqs. (21) and (23). We use the standard notation {J_1, J_2, ..., J_M} to denote the network structure, where J_m is the number of neurons in each layer. The hidden layers are J_2, J_3, ..., J_{M−1}. Again, we focus on 1D and 2D functions for ease of visualization, i.e., J_1 = 1, 2.

Fig. 6. Numerical results for the 1D function Eq. (26) with feedforward NNs of structure {1, 10, 10, 10, 10, 1}. From top to bottom: training results using three different random sequences for initialization. Left column: results by the unconstrained NN formulation Eq. (19). Right column: results by the NN formulation with constraints Eqs. (21) and (23).

We first consider the one-dimensional function Eq. (26). In Fig. 5, we show the numerical results by NNs of structure {1, 20, 10, 1}, using three different sequences of random initializations, with and without constraints. We observe that the standard NN formulation without constraints Eq. (19) produces widely different results. This is because of the potentially large number of local minima in the cost function and is not entirely surprising. On the other hand, using exactly the same initialization, the NN formulation with constraints Eqs. (21) and (23) produces notably better results and, more importantly, is much less sensitive to the initialization. In Fig. 6, we show the results for NNs with the {1, 10, 10, 10, 10, 1} structure. We observe similar performance: the constrained NN produces better results and is less sensitive to initialization.

We now consider the two-dimensional Franke's function Eq. (29). In Fig. 7, the results by NNs with the {2, 20, 10, 1} structure are shown. In Fig. 8, the results by NNs with the {2, 10, 10, 10, 10, 1} structure are shown. Both the contour lines (with exactly the same contour values: from 0 to 1 with increment 0.1) and the function values along the cut y = 0.2x are plotted, for both the unconstrained NN Eq. (19) and the constrained NN with the constraints Eqs. (21) and (23). Once again, the two cases use the same random sequence for initialization. The results again show the notable improvement of the training results by the constrained formulation.

In this paper we presented a set of constraints on multi-layer feedforward NNs with ReLU and binary activation functions. The weights in each neuron are constrained to the unit sphere, as opposed to the entire space. This effectively reduces the number of weight parameters by one per neuron. The threshold in each neuron is constrained to a bounded interval, as opposed to the entire real line. We prove that the constrained NN formulation is equivalent to the standard unconstrained NN formulation. The constraints on the parameters, even though they may increase the complexity of the optimization problem, reduce the search space for network training and can potentially improve the training results. Our numerical examples for both single and multiple hidden layers verify this finding.

Fig. 7. Numerical results for the 2D function Eq. (29) with NNs of the structure {2, 20, 10, 1}. Top row: results by the unconstrained NN formulation Eq. (19). Bottom row: results by the constrained NN with Eqs. (21) and (23). Left column: contour plots. Right column: function cut along y = 0.2x. Dashed lines are the exact function.

Fig. 8. Numerical results for the 2D function Eq. (29) with NNs of the structure {2, 10, 10, 10, 10, 1}. Top row: results by the unconstrained NN formulation Eq. (19). Bottom row: results by the constrained NN with Eqs. (21) and (23). Left column: contour plots. Right column: function cut along y = 0.2x. Dashed lines are the exact function.
