
    Efficient and High-quality Recommendations via Momentum-incorporated Parallel Stochastic Gradient Descent-Based Learning

IEEE/CAA Journal of Automatica Sinica, 2021, Issue 2

    Xin Luo, Senior Member, IEEE, Wen Qin, Ani Dong, Khaled Sedraoui, and MengChu Zhou, Fellow, IEEE

Abstract—A recommender system (RS) relying on latent factor analysis usually adopts stochastic gradient descent (SGD) as its learning algorithm. However, owing to its serial mechanism, an SGD algorithm suffers from low efficiency and scalability when handling large-scale industrial problems. To address this issue, this study proposes a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm, whose main idea is two-fold: a) implementing parallelization via a novel data-splitting strategy, and b) accelerating the convergence rate by integrating momentum effects into its training process. With it, an MPSGD-based latent factor (MLF) model is achieved, which is capable of performing efficient and high-quality recommendations. Experimental results on four high-dimensional and sparse matrices generated by industrial RSs indicate that, owing to the MPSGD algorithm, an MLF model outperforms the existing state-of-the-art ones in both computational efficiency and scalability.

    I. INTRODUCTION

Big data-related industrial applications like recommender systems (RSs) [1]–[5] have a major influence on our daily life. An RS commonly relies on a high-dimensional and sparse (HiDS) matrix that quantifies incomplete relationships among its users and items [6]–[11]. Despite its extreme sparsity and high dimensionality, an HiDS matrix contains rich knowledge regarding various patterns [6]–[11] that are vital for accurate recommendations. A latent factor (LF) model has proven to be highly efficient in extracting such knowledge from an HiDS matrix [6]–[11].

    In general, an LF model works as follows:

    1) Mapping the involved users and items into the same LF space;

    2) Training desired LF according to the known data of a target HiDS matrix only; and

    3) Estimating the target matrix’s unknown data based on the updated LF for generating high-quality recommendations.

Note that the achieved LFs can precisely represent each user's and item's characteristics hidden in an HiDS matrix's observed data [6]–[8]. Hence, an LF model is highly efficient in predicting unobserved user-item preferences in an RS. Moreover, it achieves a fine balance among computational efficiency, storage cost, and representative learning ability on an HiDS matrix [10]–[16]. Therefore, it is also widely adopted in other HiDS data-related areas like network representation [17], Web-service QoS analysis [3], [4], [18], user track analysis [19], and bio-network analysis [12].
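To make this concrete, the following minimal sketch (our illustration, not the authors' code; the toy sizes and the helper name predict are assumptions) shows how a trained pair of LF matrices P and Q yields an estimate for any user-item pair, including unobserved ones:

    import numpy as np

    # Toy setting: 4 users, 5 items, rank-2 LF space (d = 2).
    # In a real RS, P and Q are trained on the known ratings only.
    rng = np.random.default_rng(0)
    P = rng.standard_normal((4, 2)) * 0.1   # user LF matrix, |M| x d
    Q = rng.standard_normal((5, 2)) * 0.1   # item LF matrix, |N| x d

    def predict(P, Q, m, n):
        """Estimate r_{m,n} as the inner product of user m's and item n's LFs."""
        return float(P[m] @ Q[n])

    print(predict(P, Q, m=0, n=3))   # estimate for an unobserved pair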

Owing to its efficiency in addressing HiDS data [1]–[12], an LF model attracts much attention from researchers. A pyramid of sophisticated LF models has been proposed, including a biased regularized incremental simultaneous model [20], singular value decomposition plus-plus model [21], probabilistic model [13], non-negative LF model [6], [22]–[27], and graph-regularized L_p-smooth non-negative matrix factorization model [28]. When constructing an LF model, a stochastic gradient descent (SGD) algorithm is often adopted as the learning algorithm, owing to its great efficiency in building a learning model via serial but fast-converging training [14], [20], [21]. Nevertheless, as an RS grows, its corresponding HiDS matrix explodes in size. For instance, Taobao contains billions of users and items. Although the data density of the corresponding HiDS matrix can be extremely low due to its extremely high dimension, it still has a huge amount of known data. When factorizing it [21]–[28], a standard SGD algorithm suffers from the following defects:

    1) It serially traverses its known data in each training iteration, which can result in considerable time cost when a target HiDS matrix is large; and

    2) It can take many iterations to make an LF model converge to a steady solution.

    Based on the above analyses, we see that the key to a highly scalable SGD-based LF model is also two-fold: 1) reducing time cost per iteration by replacing its serial data traversing procedure with a parallel one, i.e., implementing a parallel SGD algorithm, and 2) reducing iterations to make a model converge, i.e., accelerating its convergence rate.

Considering a parallel mechanism, it should be noted that an SGD algorithm is iterative, i.e., it takes multiple iterations to train an LF model. In each iteration, it accomplishes the following tasks (a minimal serial sketch is given after the list):

1) Traversing the observed data of a target HiDS matrix, picking up user-item ratings one-by-one;

    2) Computing the stochastic gradient of the instant loss on the active rating with its connected user/item LF;

    3) Updating these user/item LF by moving them along the opposite direction of the achieved stochastic gradient with a pre-defined step size; and

    4) Repeating steps 1)–3) until completing traversing a target HiDS matrix’s known data.
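The following sketch mirrors steps 1)–4) with a plain, serial SGD pass (our simplified illustration; the instant loss is unregularized and all names are assumptions):

    import numpy as np

    def sgd_epoch(known_ratings, P, Q, eta=0.01):
        """One serial SGD pass over the known ratings of an HiDS matrix.
        known_ratings: iterable of (m, n, r) triples."""
        for m, n, r in known_ratings:        # 1) traverse known data one-by-one
            err = r - P[m] @ Q[n]            # 2) error drives the stochastic gradient
            P[m], Q[n] = (P[m] + eta * err * Q[n],   # 3) move against the gradient
                          Q[n] + eta * err * P[m])
        return P, Q                          # 4) repeat per iteration until convergence

    rng = np.random.default_rng(1)
    P = rng.standard_normal((3, 2)) * 0.1
    Q = rng.standard_normal((4, 2)) * 0.1
    known = [(0, 1, 4.0), (1, 2, 3.5), (2, 0, 5.0)]
    for _ in range(10):
        sgd_epoch(known, P, Q)

As the sketch shows, updating P[m] and Q[n] in place is what couples the LFs within an iteration and across ratings, which is exactly the dependence that parallel variants must break.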

From the above analyses, we clearly see that an SGD algorithm makes the desired LFs depend on each other during a training iteration, and the learning task of each iteration also depends on those of the previously completed ones. To parallelize such a "single-pass" algorithm, researchers [29], [30] have proposed to decompose the learning task of each iteration such that the dependence of parameter updates can be eliminated with care.

A Hogwild! algorithm [29] splits the known data of an HiDS matrix into multiple subsets, and then dispatches them to multiple SGD-based training threads. Note that all training threads operate on the same group of LFs. Thus, Hogwild! actually ignores the risk that a single LF can be updated by multiple training threads simultaneously, leading to partial loss of the update information. However, as proven in [29], such information loss barely affects its convergence.

On the other hand, a distributed stochastic gradient descent (DSGD) algorithm [30] splits a target HiDS matrix into J segmentations, where each one consists of J data blocks with J being a positive integer. It makes the user and item LFs connected with different blocks in the same segmentation not affect each other's update in a single iteration. Thus, when performing matrix factorization [31]–[39], a DSGD algorithm's parallelization is implemented in the following way: learning tasks on the J segmentations are taken serially, where the learning task on the jth segmentation is split into J subtasks that can be done in parallel.
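A compact sketch of this data-splitting idea follows (our illustration under the assumption of equal-width row/column ranges; it is not the authors' implementation): blocks in the same segmentation touch disjoint row and column ranges, so they can be processed by J workers at once.

    def split_blocks(ratings, num_users, num_items, J):
        """Assign each known rating (m, n, r) to block (bi, bj); blocks with
        distinct bi and distinct bj share no users or items."""
        blocks = {(bi, bj): [] for bi in range(J) for bj in range(J)}
        for m, n, r in ratings:
            blocks[(m * J // num_users, n * J // num_items)].append((m, n, r))
        return blocks

    def segmentations(J):
        """Segmentation j = {(i, (i + j) % J)}: one block per row range and per
        column range, so its J blocks are mutually independent."""
        return [[(i, (i + j) % J) for i in range(J)] for j in range(J)]

    # With J = 3, segmentation 0 is the diagonal {Λ11, Λ22, Λ33} of Fig. 1.
    print(segmentations(3)[0])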

An alternative stochastic gradient descent (ASGD) algorithm [40] decouples the update dependence among different LF categories to implement its parallelization. For instance, to build an LF-based model for an RS, it splits the training task of each iteration into two sub-tasks, where one updates the user LFs while the other updates the item LFs with SGD. As discussed in [40], the coupling dependences among different LF categories are eliminated with such design, thereby making both subtasks dividable without any information loss.

    The parallel SGD algorithms mentioned above can implement a parallelized training process as well as maintain model performance. However, they cannot accelerate an LF model’s convergence rate, i.e., they consume as many training iterations as a standard SGD algorithm does despite their parallelization mechanisms. In other words, they all ignore the second factor of building a highly-scalable SGD-based LF model, i.e., accelerating its convergence rate.

From this point of view, this work aims at implementing a parallel SGD algorithm with a faster convergence rate than existing ones. To do so, we incorporate a momentum method into a DSGD algorithm to achieve a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm. Note that a momentum method is initially designed for batch gradient descent algorithms [34], [35]. Nonetheless, as discussed in [33], [34], it can be adapted to SGD by alternating the learning direction of each single LF according to its stochastic gradients achieved in consecutive learning updates. The reason why we choose a DSGD algorithm as the base algorithm is that its parallelization is implemented based on data splitting instead of reformulating SGD-based learning rules. Thus, it is expected to be as compatible with a momentum method as a standard SGD algorithm appears to be [33]. The main contributions of this study include:

    1) An MPSGD algorithm that achieves faster convergence than existing parallel SGD algorithms when building an LF model for an RS;

    2) Algorithm design and analysis for an MPSGD-based LF model; and

    3) Empirical studies on four HiDS matrices from industrial applications.

Section II gives the preliminaries. Section III presents the proposed methods. Section IV provides the experimental results. Finally, Section V concludes this paper.

    II. PRELIMINARIES

    An LF model takes an HiDS matrix as its fundamental input, as defined in [3], [16].

Definition 1: Given two entity sets M and N, matrix R^{|M|×|N|} has each entry r_{m,n} describe the connection between m ∈ M and n ∈ N. Let Λ and Γ respectively denote its known and unknown data sets; R is HiDS if |Λ| ≪ |Γ|.

    Note that the operator |·| computes the cardinality of an enclosed set. Thus, we define an LF model as in [3], [16].

Definition 2: Given R and Λ, an LF model builds a rank-d approximation R̂ = PQ^T to R, with P^{|M|×d} and Q^{|N|×d} being LF matrices and d ≪ min{|M|, |N|}.

To obtain P and Q, an objective function distinguishing R and R̂ is desired. Note that to achieve the highest efficiency, it should be defined on Λ only. With the Euclidean distance [16], it is formulated as

ε(P, Q) = Σ_{r_{m,n}∈Λ} ((r_{m,n} − Σ_{k=1}^{d} p_{m,k}q_{n,k})² + λ_P‖p_m‖² + λ_Q‖q_n‖²)    (2)

where λ_P and λ_Q are regularization coefficients, and p_m and q_n denote the mth row of P and the nth row of Q, respectively. An SGD algorithm solves (2) by moving each LF against the stochastic gradient of the instant loss ε_{m,n} on a single known entry r_{m,n}, i.e.,

p_{m,k}^{(t)} ← p_{m,k}^{(t−1)} − η ∂ε_{m,n}^{(t−1)}/∂p_{m,k}^{(t−1)},   q_{n,k}^{(t)} ← q_{n,k}^{(t−1)} − η ∂ε_{m,n}^{(t−1)}/∂q_{n,k}^{(t−1)}    (3)

Note that in (3) t denotes the tth update point and η denotes the learning rate. Following the Robbins-Siegmund theorem [36], (3) ensures a solution to the bilinear problem (2) with proper η.
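For concreteness, the regularized instant loss on one known entry and its gradients can be written as in the sketch below (our illustration; it follows the common convention of attaching the regularization terms of (2) to each single instance, and the default λ values are taken from the experimental settings):

    import numpy as np

    def instant_loss_and_grads(P, Q, m, n, r, lam_p=0.005, lam_q=0.005):
        """Instant loss on r_{m,n} and its gradients w.r.t. p_m and q_n,
        which drive the SGD update (3)."""
        err = r - float(P[m] @ Q[n])
        loss = err * err + lam_p * float(P[m] @ P[m]) + lam_q * float(Q[n] @ Q[n])
        grad_p = -2.0 * err * Q[n] + 2.0 * lam_p * P[m]
        grad_q = -2.0 * err * P[m] + 2.0 * lam_q * Q[n]
        return loss, grad_p, grad_q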

    III. PROPOSED METHODS

    A. DSGD Algorithm

As mentioned before, a DSGD algorithm's parallelization relies on data segmentation. For instance, as depicted in Fig. 1, it splits the rating matrix into three segmentations, i.e., S1–S3. Each segmentation consists of three data blocks, e.g., Λ11, Λ22 and Λ33 belong to S1 as in Fig. 1. As proven in [30], in each iteration the LF updates inside a block do not affect those of other blocks from the same segmentation, because these blocks have no rows or columns in common, as shown in Fig. 1. Considering S1 in Fig. 1, we set three independent training threads, where the first traverses Λ11, the second Λ22 and the third Λ33. Thus, these training threads can run simultaneously.

    Fig.1. Splitting a rating matrix to achieve segmentations and blocks.

    B. Data Rearrangement Strategy

However, note that different data segmentations do have rows and columns in common, as depicted in Fig. 1. Therefore, each training iteration is actually divided into J tasks, where J is the segmentation count. These J tasks should be done sequentially, where each task can be further divided into J subtasks that can be done in parallel, as depicted in Fig. 2. Note that all J subtasks in a segmentation are executed synchronously, which leads to bucket effects, i.e., the time cost of addressing each segmentation is decided by that of its largest subtask. From this perspective, when the data distribution is imbalanced, as in an HiDS matrix, a DSGD algorithm can only speed up each training iteration in a limited way. For example, the unevenly distributed data of the MovieLens 20M (ML20M) matrix is depicted in Fig. 3(a), where Λ11, Λ22, Λ33 and Λ44 are independent blocks in the first data segmentation while most of its data lie in Λ11. Thus, for threads n1–n4 handling Λ11–Λ44, their time cost is decided by the cost of n1.

    Fig.2. Handling segmentations and blocks in an HiDS matrix with DSGD.

To address this issue, data in an HiDS matrix should be rearranged to balance their distribution, making a DSGD algorithm achieve satisfactory speedup [38]. As shown in Fig. 3(b), such a process is implemented by exchanging rows and columns in each segmentation at random [38].
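A minimal version of such a rearrangement is sketched below (our illustration, assuming one uniformly random permutation of user indices and one of item indices applied before block splitting):

    import numpy as np

    def rearrange(ratings, num_users, num_items, seed=0):
        """Randomly permute user and item indices so that dense rows and columns
        spread over the J x J blocks, balancing the per-block workloads."""
        rng = np.random.default_rng(seed)
        row_perm = rng.permutation(num_users)   # new index for each user
        col_perm = rng.permutation(num_items)   # new index for each item
        shuffled = [(int(row_perm[m]), int(col_perm[n]), r) for m, n, r in ratings]
        return shuffled, row_perm, col_perm

The permutations are kept so that predictions made on the rearranged indices can be mapped back to the original user and item identifiers.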

C. MPSGD Algorithm

A momentum method is very efficient in accelerating the convergence rate of an SGD-based learning model [31], [33], [34]. It determines the learning update in the current iteration by building a linear combination of the current gradient and the learning update of the last iteration. With such a design, oscillations during the learning process decrease, making the resultant model converge faster. According to [33], with a momentum-incorporated SGD algorithm, the decision parameter θ of objective J(θ) is learnt as

V_θ^{(t)} = γ V_θ^{(t−1)} + η ∇_θ J(θ^{(t−1)}; o^{(t)}),   θ^{(t)} ← θ^{(t−1)} − V_θ^{(t)}    (4)

    Fig.3. Illustration of an MLF model.

In (4), V_θ^{(0)} denotes the initial value of the velocity, V_θ^{(t)} denotes the velocity at the tth update, γ denotes the balancing constant that tunes the effects of the current gradient and the previous update velocity, and o^{(t)} denotes the tth training instance.

To build an SGD-based LF model, the velocity vector is updated at each single training instance. We adopt a velocity parameter v^{(P)}_{m,k} for p_{m,k} to record its update velocity, and thus generate V^{(P)}_{|M|×d} for P. According to (4), we update p_{m,k} for the single loss ε_{m,n} on training instance r_{m,n} as

v^{(P),(t)}_{m,k} = γ v^{(P),(t−1)}_{m,k} + η ∂ε^{(t−1)}_{m,n}/∂p^{(t−1)}_{m,k},   p^{(t)}_{m,k} ← p^{(t−1)}_{m,k} − v^{(P),(t)}_{m,k}    (5)

The velocity constant γ in (5) adjusts the momentum effects. Similarly, we adopt a velocity parameter v^{(Q)}_{n,k} for q_{n,k} to record its update velocity, and thus V^{(Q)}_{|N|×d} is adopted for Q. The momentum-incorporated update rules for q_{n,k} are given as

v^{(Q),(t)}_{n,k} = γ v^{(Q),(t−1)}_{n,k} + η ∂ε^{(t−1)}_{m,n}/∂q^{(t−1)}_{n,k},   q^{(t)}_{n,k} ← q^{(t−1)}_{n,k} − v^{(Q),(t)}_{n,k}    (6)

As depicted in Figs. 3(c)–(d), with the momentum-incorporated learning rules presented in (5) and (6), LF matrices P and Q can be trained with much fewer oscillations. Moreover, by integrating the principle of DSGD into the algorithm, we achieve an MPSGD algorithm that parallelizes the learning process of an LF model at a high convergence rate. After dividing Λ into J data segmentations with J × J data blocks, we obtain

ε(P, Q) = Σ_{j=1}^{J} Σ_{i=1}^{J} Σ_{r_{m,n}∈Λ_{ij}} ε_{m,n}    (7)

so that the learning task of each iteration decomposes into J sequential segmentation tasks, each consisting of J block-wise subtasks that apply (5) and (6) in parallel.
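The per-rating momentum step of (5) and (6) can be sketched as follows (our illustration; the function name momentum_update is an assumption, and the regularization terms of (2) are omitted for brevity):

    import numpy as np

    def momentum_update(P, Q, VP, VQ, m, n, r, eta=0.01, gamma=0.9):
        """Momentum-incorporated SGD step on one rating r_{m,n}: velocities VP, VQ
        blend the current stochastic gradient with the previous update."""
        err = float(P[m] @ Q[n]) - r               # gradient of the instant loss
        VP[m] = gamma * VP[m] + eta * err * Q[n]   # velocity for user LFs, cf. (5)
        VQ[n] = gamma * VQ[n] + eta * err * P[m]   # velocity for item LFs, cf. (6)
        P[m] -= VP[m]                              # move along the blended direction
        Q[n] -= VQ[n]

    rng = np.random.default_rng(2)
    P, Q = rng.standard_normal((3, 2)) * 0.1, rng.standard_normal((4, 2)) * 0.1
    VP, VQ = np.zeros_like(P), np.zeros_like(Q)
    momentum_update(P, Q, VP, VQ, m=0, n=1, r=4.0)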

D. MLF Model

With an MPSGD algorithm, we design Algorithm 1 for an MPSGD-based LF (MLF) model. Note that Algorithm MLF further depends on the procedure Update shown in Algorithm 2. To implement its efficient parallelization, we first rearrange Λ according to the strategy mentioned in Section III-B to balance Λ, as in line 5 of Algorithm 1. Afterwards, the rearranged Λ is divided into J data segmentations with J × J data blocks, as in line 6 of Algorithm 1. Considering the ith data segmentation, its jth data block is assigned to the jth training thread, as shown in lines 8–10 of Algorithm 1. Then all J training threads can be started simultaneously to execute procedure Update, which addresses the parameter updates related to its assigned data block with MPSGD as discussed in Section III-C.
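A simplified view of this dispatch logic follows (our sketch, not Algorithm 1 itself; it assumes a blocks mapping such as the one built by split_blocks above and a per-block routine train_block): segmentations are handled one after another, while the J blocks inside a segmentation are given to J concurrent threads.

    from concurrent.futures import ThreadPoolExecutor

    def train_iteration(blocks, J, train_block):
        """One training iteration: J sequential segmentation tasks, each made of
        J independent block subtasks executed in parallel."""
        with ThreadPoolExecutor(max_workers=J) as pool:
            for j in range(J):                                 # segmentations: serial
                seg = [(i, (i + j) % J) for i in range(J)]     # independent blocks
                futures = [pool.submit(train_block, blocks[b]) for b in seg]
                for f in futures:                              # wait for all: bucket effect
                    f.result()

Note that this only illustrates the control flow; in CPython the global interpreter lock limits true parallelism for pure-Python arithmetic, so an efficient implementation would rely on native threads or processes.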

In Algorithm MLF, we introduce V^{(P)} and V^{(Q)} as auxiliary arrays for improving its computational efficiency. With J data segmentations and J training threads, each training thread actually takes 1/J of the whole data analysis task when the data distribution is balanced, as shown in Fig. 3(b). Thus, the time cost on a single training thread in each iteration is

Θ(|Λ| × d / J)    (8)

Therefore, its time cost in t iterations is

Θ(|Λ| × d × t / J)    (9)

    * Note that all J training threads are started in parallel. Hence, the actual cost of this operation is decided by the thread consuming the most time.

Algorithm 2 Procedure Update
1  for each r_{m,n} in Λ_{ij} do                                              // Cost: ×|Λ|/J²
2      r̂_{m,n} = Σ_{k=1}^{d} p_{m,k} q_{n,k}                                  // Cost: Θ(d)
3      for k = 1 to d do                                                      // Cost: ×d
4          v^{(P),(t)}_{m,k} = γ v^{(P),(t−1)}_{m,k} + η ∂ε^{(t−1)}_{m,n}/∂p^{(t−1)}_{m,k}   // Cost: Θ(1)
5          v^{(Q),(t)}_{n,k} = γ v^{(Q),(t−1)}_{n,k} + η ∂ε^{(t−1)}_{m,n}/∂q^{(t−1)}_{n,k}   // Cost: Θ(1)
6          p^{(t)}_{m,k} ← p^{(t−1)}_{m,k} − v^{(P),(t)}_{m,k}                 // Cost: Θ(1)
7          q^{(t)}_{n,k} ← q^{(t−1)}_{n,k} − v^{(Q),(t)}_{n,k}                 // Cost: Θ(1)
8      end for
9  end for

Note that J^{-1}, d and t in (8) and (9) are all positive constants, which results in a linear relationship between the computational cost of an MLF model and the number of known entries in the target HiDS matrix. Moreover, owing to its parallel and fast-converging mechanism, J^{-1} and t can be reduced significantly, thereby greatly reducing its time cost. Next we validate its performance on several HiDS matrices generated by industrial applications.

    IV. EXPERIMENTAL RESULTS AND ANALYSIS

    A. General Settings

1) Evaluation Protocol: When analyzing an HiDS matrix from real applications [1]–[5], [7]–[10], [16], [19], a major motivation is to predict its missing data for achieving a complete relationship among all involved entities. Hence, this paper selects missing data estimation of an HiDS matrix as the evaluation protocol. More specifically, given Λ, such a task makes a tested model predict the data in Γ. The outcome is validated on a validation set Ψ disjoint with Λ. For validating the prediction accuracy of a model, the root mean squared error (RMSE) and mean absolute error (MAE) are chosen as the metrics [9]–[11], [16], [37]–[39]:

RMSE = sqrt( Σ_{r_{m,n}∈Ψ} (r_{m,n} − r̂_{m,n})² / |Ψ| ),   MAE = Σ_{r_{m,n}∈Ψ} |r_{m,n} − r̂_{m,n}| / |Ψ|

where r̂_{m,n} denotes a tested model's prediction for r_{m,n}.
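The two metrics can be computed as sketched below (our illustration; the predictions are taken as inner products of the learned LFs):

    import math

    def rmse_mae(validation, P, Q):
        """Compute RMSE and MAE on the validation set Ψ of (m, n, r) triples."""
        se = ae = 0.0
        for m, n, r in validation:
            err = r - float(P[m] @ Q[n])   # error on one held-out rating
            se += err * err
            ae += abs(err)
        count = len(validation)
        return math.sqrt(se / count), ae / count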

2) Datasets: Four HiDS matrices are adopted in the experiments, whose details are given below:

a) D1: Douban matrix. It is extracted from China's largest online music, book and movie database Douban [32]. It has 16 830 839 ratings in the range of [1, 5] by 129 490 users on 58 541 items. Its data density is 0.22% only.

b) D2: Dating Agency matrix. It is collected by the online dating site LibimSeTi, with 17 359 346 observed entries in the range of [1, 10]. It has 135 359 users and 168 791 profiles [11], [12]. Its data density is 0.076% only.

c) D3: MovieLens 20M matrix. It is collected by the MovieLens site maintained by the GroupLens research team [37]. It has 20 000 263 known entries in [0.5, 5] among 26 744 movies and 138 493 users. Its density is 0.54% only.

d) D4: NetFlix matrix. It is collected by the Netflix business website. It contains 100 480 507 known entries in the range of [1, 5] by 2 649 429 users on 17 770 movies [11], [12]. Its density is 0.21% only.

    e) All matrices are high-dimensional, extremely sparse and collected by industrial applications. Meanwhile, their data distributions are all highly imbalanced. Hence, results on them are highly representative.

The known data set of each matrix is randomly divided into five equal-sized, disjoint subsets to comply with the five-fold cross-validation settings, i.e., each time we choose four subsets as the training set Λ to train a model, and the remaining one as the testing set Ψ to validate its predictions. This process is repeated five times to achieve the final results. The training process of a tested model terminates if i) the number of consumed iterations reaches the preset threshold, i.e., 1000, or ii) the error difference between two sequential iterations is smaller than the preset threshold, i.e., 10^{-5}.
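The splitting and termination logic can be sketched as follows (our illustration; the iteration cap of 1000 and the error-difference threshold of 10^{-5} come from the text, everything else is an assumed skeleton):

    import numpy as np

    def five_fold_splits(ratings, seed=0):
        """Divide the known ratings into five disjoint folds; each fold serves once
        as the testing set Ψ while the other four form the training set Λ."""
        rng = np.random.default_rng(seed)
        folds = np.array_split(rng.permutation(len(ratings)), 5)
        for k in range(5):
            test = [ratings[i] for i in folds[k]]
            train = [ratings[i] for j in range(5) if j != k for i in folds[j]]
            yield train, test

    def should_stop(errors, max_iters=1000, tol=1e-5):
        """Stop when the iteration cap is reached or the error difference between
        two sequential iterations falls below the threshold."""
        return (len(errors) >= max_iters or
                (len(errors) >= 2 and abs(errors[-1] - errors[-2]) < tol))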

    B. Comparison Results

    The following models are included in our experiments:

M1: A DSGD-based LF model proposed in [30]. Note that M1's parallelization is described in detail in Section III-A. However, it differs from an MLF model in two aspects: a) it does not adopt the data rearrangement as illustrated in Fig. 3(b); and b) its learning algorithm is a standard SGD algorithm.

M2: An LF model adopting a modified DSGD scheme, where the data distribution of the target HiDS matrix is rearranged for improving its speedup with multiple training threads. However, its learning algorithm is a standard SGD algorithm.

    M3: An MLF model proposed in this work.

    With such design, we expect to see the accumulative effects of the acceleration strategies adopted by M3, i.e., the data rearrangement in Fig.3(b) and momentum effect in Fig.3(c).

    To enable fair comparisons, we adopt the following settings:

1) For all models we adopt the same regularization coefficient, i.e., λ_P = λ_Q = 0.005, according to [12], [16]. Considering the learning rate η and the balancing constant γ, we tune them on one fold of each experiment to achieve the best performance of each model, and then adopt the same values on the remaining four folds to achieve the most objective results. Their values on each dataset are summarized in Table I.

    2) We adopt eight training threads for each model in all experiments following [29].

TABLE I PARAMETERS OF M1–M3 ON D1–D4

3) For M1–M3, on each dataset the same random arrays are adopted to initialize P and Q. Such a strategy can work compatibly with the five-fold cross-validation settings to eliminate the biased results brought by the initial hypothesis of an LF model, as discussed in [3].

    4) The LF space dimension d is set at 20 uniformly in all experiments. We adopt this value to enable good balance between the representative learning ability and computational cost of an LF model, as in [3], [16], [29].

Training curves of M1–M3 on D1–D4 with respect to training iteration count and time cost are respectively given in Figs. 4 and 5. Comparison results are recorded in Tables II and III. From them, we present our findings next.

a) Owing to an MPSGD algorithm, an MLF model converges much faster than DSGD-based LF models do. For instance, as recorded in Table II, M1 and M2 respectively take 461 and 463 iterations on average to achieve the lowest RMSE on D1. In comparison, M3 takes 112 iterations on average to converge on D1, which is less than one fourth of that by M1 and M2. Meanwhile, M3 takes 110 iterations on average to converge in MAE, which is also much fewer than the 441 iterations by M1 and 448 iterations by M2. Similar results can also be observed on the other testing cases, as shown in Fig. 4 and Tables II and III.

Meanwhile, we observe an interesting phenomenon that M1 and M2 converge at the same rate. Their training curves almost overlap on all testing cases according to Fig. 4. Note that M2 adopts the data shuffling strategy mentioned in Section III-B as in [38] to make the known data of an HiDS matrix distribute uniformly, while M1 does not. This phenomenon indicates that the data shuffling strategy barely affects the convergence rate or representative learning ability of an LF model.

b) With an MPSGD algorithm, an MLF model's time cost is significantly lower than those of its peers. For instance, as shown in Table II, M3 takes 89 s on average to converge in RMSE on D3. In comparison, M1 takes 1208 s, which is over 13 times M3's time, and M2 takes 308 s, which is still over three times M3's average time. The situation is the same with MAE as the metric, as recorded in Table III.

c) The prediction accuracy of an MLF model is comparable with or slightly higher than those of its peers. As recorded in Tables II and III, on all testing cases M3's prediction error is as low as or even slightly lower than those of M1 and M2. Hence, an MPSGD algorithm slightly improves an MLF model's prediction accuracy for the missing data of an HiDS matrix in addition to greatly improving its computational efficiency.

d) The stability of M1–M3 is close. According to Tables II and III, the standard deviations of M1–M3 in MAE and RMSE are very close on all testing cases. Considering their time cost, since M1 and M2 generally consume much more time than M3 does, their standard deviations in total time cost are generally larger than that of M3. However, it is also data-dependent. On D4, M1–M3 have very similar standard deviations in total time. Hence, we reasonably conclude that the two acceleration strategies, i.e., data rearrangement and momentum incorporation, do not affect an MLF model's performance stability.

    C. Speedup Comparison

A parallel model's speedup measures its efficiency gain with the deployed core count, i.e.,

Speedup = T_1 / T_J

where T_1 and T_J denote the training time of a parallel model deployed on one and J training threads, respectively. A high speedup of a parallel model indicates its high scalability and feasibility for large-scale industrial applications.
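Measuring speedup then amounts to timing the same training job with different thread counts, as in the sketch below (our illustration; train_model stands for any training routine accepting a thread count):

    import time

    def speedup_curve(train_model, thread_counts=(1, 2, 4, 8)):
        """Return {J: T_1 / T_J}, with T_J the wall-clock training time on J threads.
        The first entry of thread_counts is taken as the single-thread baseline."""
        times = {}
        for j in thread_counts:
            start = time.perf_counter()
            train_model(num_threads=j)       # assumed training entry point
            times[j] = time.perf_counter() - start
        baseline = times[thread_counts[0]]
        return {j: baseline / t for j, t in times.items()}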

Fig. 4. Training curves of M1–M3 in iteration count. All panels share the legend of panel (a).

Fig. 5. Training curves of M1–M3 in time cost. All panels share the legend of panel (a).

TABLE II PERFORMANCE COMPARISON AMONG M1–M3 ON D1–D4 WITH RMSE AS AN ACCURACY METRIC

TABLE III PERFORMANCE COMPARISON AMONG M1–M3 ON D1–D4 WITH MAE AS AN ACCURACY METRIC

Fig. 6. Parallel performance comparison among M1–M3 as core count increases. Both panels share the legend in panel (a).

The speedup of M1–M3 on D4 as J increases from two to eight is depicted in Fig. 6. Note that similar situations are found on D1–D3. From it, we clearly see that M3, i.e., the proposed MLF model, outperforms its peers in achieving higher speedup. As J increases, M3 always consumes less time than its peers do, and its speedup is always higher than those of its peers. For instance, from Fig. 6(b) we see that M3's speedup at J = 8 is 6.88, which is much higher than 4.61 by M1 and 4.44 by M2. Therefore, its scalability is higher than those of its peers, making it more feasible for real applications.

    D. Summary

    Based on the above results, we conclude that:

a) Owing to an MPSGD algorithm, an MLF model has significantly higher computational efficiency than its peers do; and

    b) An MLF model’s speedup is also significantly higher than that of its peers. Thus, it has higher scalability for large scale industrial applications than its peers do.

    V. CONCLUSIONS

This paper presents an MLF model able to perform LF analysis of an HiDS matrix with high computational efficiency and scalability. Its principle is two-fold: a) reducing its time cost per iteration through balanced data segmentation, and b) reducing its converging iteration count by incorporating momentum effects into its learning process. Empirical studies show that, compared with state-of-the-art parallel LF models, it has obviously higher efficiency and scalability in handling an HiDS matrix.

Although an MLF model performs LF analysis on a static HiDS matrix with high efficiency, its performance on dynamic data [12] remains unknown. As discussed in [41], a GPU-based acceleration scheme is highly efficient when manipulating full matrices in the context of recommender systems and other applications [42]–[50]. Nonetheless, more efforts are required to adapt its fundamental matrix operations to be compatible with an HiDS matrix as concerned in this paper. We plan to address these issues in the future.
