
    Predictive Model of Live Shopping Interest Degree Based on Eye Movement Characteristics and Deep Factorization Machine


    SHI Xiujin(石秀金), LI Hao(李 昊), SHI Hang(史 航), WANG Shaoyu(王紹宇), SUN Guohao (孫國豪)

    School of Computer Science and Technology, Donghua University, Shanghai 201620, China

Abstract: During live broadcasts, eye movement characteristics can reflect people's attention to the product. However, existing research on interest degree predictive models does not consider eye movement characteristics. To capture users' interest in a product more effectively, we consider key eye movement indicators. We first collect eye movement characteristics with the self-developed data processing algorithm fast discriminative model prediction for tracking (FDIMP), and then add data dimensions to the original data set through information filling. We apply the deep factorization machine (DeepFM) architecture to learn combinations of low-level and high-level features simultaneously. To learn important features effectively and emphasize the relatively important ones, a multi-head attention mechanism is applied in the interest model. Experimental results on the public data set Criteo show that, compared with the original DeepFM algorithm, the area under curve (AUC) value improves by up to 9.32%.

Key words: eye movement; interest degree prediction; deep factorization machine (DeepFM); multi-head attention mechanism

    Introduction

Online live shopping has become a popular way for people to obtain information. Capturing users' interest during a live broadcast can not only improve the merchant's live broadcast strategy and increase users' satisfaction with watching the broadcast, but also help designers develop more humanized interaction methods that enhance the user experience. Therefore, obtaining users' interest while they watch live shopping broadcasts is of great practical significance.

The eye movement characteristic refers to the data features of a subject's eyes while watching the live broadcast. Traditionally, eye tracking is an analysis tool used in disciplines such as medicine, psychology, and marketing[1-3]. In visual evaluation, combining eye tracking with data processing methods can capture fine-grained information about individual cognition, and has achieved satisfactory results in a variety of scene detection tasks.

Current assessments of interest in live streaming are mostly based on “black box” research, which relies on viewers' self-reports to reflect their degree of interest in the live stream. However, the interest obtained this way not only involves the viewer's subjective factors, but is also affected by objective factors such as environment and mood, which makes it difficult to truly reflect the viewer's interest in live shopping. With the development of neural networks, click-through rate (CTR) estimation is increasingly used in interest degree predictive models. However, CTR ignores much objective information, such as the level of detail of products in live shopping, and important factors like dynamic parameters. Figure 1 gives a simple example of the attributes of each dimension entity in the live broadcast process. Many data dimensions can be extracted from a live video, such as eye movement data, traditional interest model dimensions, and other dimensions, where the eye movement dimensions are extracted by a video processing algorithm. A live shopping interest model needs to consider the various factors shown in Fig. 1.

    Fig. 1 Example diagram of each entity attribute

    In this paper, we take the eye movement factor into account in the proposed model. At the same time, the base model also has room for improvement. We make innovations from these two aspects.

    1 Related Work

    1.1 Application of eye tracking technology

In recent years, eye tracking technology has been used more and more widely in visual evaluation research. Baazeem et al.[4] used eye movement data for machine learning to detect developmental dyslexia, applying random forest to select the most important eye movement features as input to a support vector machine classifier; this hybrid approach can reliably identify fluent readers. Bitkina et al.[5] used eye movement indicators to classify and predict driving perception workload, studied the ability of eye movement indicators to predict driving load, and concluded that some factors were correlated with gaze indicators. Rello et al.[6] studied the extent to which eye tracking improved the readability of Arabic texts, and used different regression algorithms to build several readability prediction models.

Eye tracking technology is used in many fields to complete recommendation or classification tasks. In recommendation tasks, the improvement in the area under curve (AUC) index is mostly between 2% and 10%, and specific conclusions or models have been drawn for the respective research issues. However, most of these models are based on machine learning methods and use small samples, from tens to hundreds, which introduces accidental factors into the experiments; the learning ability of these models can be further improved.

    1.2 Interest prediction model

Existing interest degree predictive models fall into two categories: CTR predictive models based on machine learning and those based on deep learning. Models based on machine learning are further divided into single-model and combined-model prediction. Among single models, logistic regression and decision trees are the most common. Among model combinations, gradient boosting decision tree (GBDT) + logistic regression (LR) and field-weighted factorization machines (FwFM)[7-10] are the most common. However, machine-learning-based interest degree predictive models rely heavily on manual feature processing, and a lot of manual feature engineering is required before the model can be applied. Interest degree prediction models based on deep learning have shown good results by exploring high-level combinations of features; among them, wide&deep, fast growing cascade neural network (FGCNN), etc.[11-13] are the most common.

In research related to interest prediction in live broadcast, eye movement data has not been used as a data dimension in the model.

2 Eye Tracking Data Acquisition Algorithm

Obtaining eye movement data is an automated process, but the eye tracker's supporting software does not calculate the parameters describing a subject's attention to a single area. This paper therefore proposes the fast discriminative model prediction for tracking (FDIMP) algorithm for the live video processing task; it improves the tracking model's ability to discriminate targets from backgrounds, reduces the number of iterations, and provides an automated function that outputs the required data from the video. After the dimension-filling operation, the obtained dimensions are the CTR dimension and the eye movement dimension. The resulting data set contains the data supplemented by the subjects and the characteristics of the large data set, and is used by the subsequent interest degree prediction model.

    2.1 Obtaining live video

All subjects in this study have normal or corrected vision and no eye problems such as color blindness. The subjects include frequent and infrequent online shoppers and cover different occupations. We use the Noldus eye tracking glasses (ETG) eye tracker to collect users' eye movement data. The viewing distance is set to 60 cm, and the device is calibrated before the experiment; if there is obvious head movement, or eye movement drift is detected on the researcher's tracking screen, the calibration is repeated[14]. The algorithm can generate related data such as the live video with the corresponding viewpoint trajectory. Besides the video, the eye tracker can also produce a variety of visualizations, such as a heat map directly related to the number of gaze points and a path diagram indicating the direction of gaze transitions. These images supplement the eye movement data and intuitively show the viewer's characteristics while watching the video. In the experiment, each subject wears the eye tracker while watching a one-minute live shopping video. During the process, the data processing model captures the user's relative gaze time and pupil concentration for different plate areas. At the end of the experiment, we record the subject's degree of satisfaction with the items introduced in the video.

    2.2 Live shopping video processing model

The FDIMP processing model converts the subject's eye movement video into the user's relative gaze time and pupil concentration for different plate areas. As an end-to-end tracking architecture, it can make full use of target and background appearance information to predict the target model[15]. The process is shown in Fig. 2. We train on random samples from the video sequence: three frames before a given frame form the training set, three frames after it form the test set, and the features of the extracted target area are pooled to obtain the initialized feature image. Model initialization turns the features in the target area into a three-dimensional (4×4×n) feature filter. The initialized filter is then optimized together with the background information of the target area, and the optimized filter is obtained iteratively. A sketch of the frame sampling follows the figure.

    Fig. 2 Target tracking process
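As a concrete illustration of this sampling scheme, the following Python sketch picks a random reference frame and takes the three frames before it for training and the three after it for testing; the frame count and helper name are our own, not from the paper.

```python
import random

def sample_frames(num_frames, k=3):
    """Pick a random reference frame t, take the k frames before it as
    the training set and the k frames after it as the test set."""
    t = random.randrange(k, num_frames - k)   # keep both windows in range
    train_ids = list(range(t - k, t))
    test_ids = list(range(t + 1, t + k + 1))
    return train_ids, test_ids

# e.g., a one-minute live shopping clip at 25 frames per second
train_ids, test_ids = sample_frames(num_frames=1500)
```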

In the loss function setting, s represents the predicted target score map for a training image, and r represents the residual function between the predicted scores and the target position. The common form is

r(s, q) = s - y_q,

(1)

where y_q is the desired target score at each location, commonly set to a Gaussian function centered at the target position q. It is worth noting that this simple form is directly optimized with the mean-square error (MSE). Since there are many negative examples, all labeled 0, the model must be sufficiently complex; otherwise, optimizing on the negative examples biases the model toward learning negative examples instead of distinguishing negative from positive examples. To solve this problem, we add weights to the loss and, following the hinge loss[16] of the support vector machine (SVM), filter out the large number of negative examples in the score map. For the positive sample area, the MSE loss is used, so the final residual function is

r(s, c) = v_c·[m_c s + (1 - m_c)max(0, s) - y_c],

(2)

where the subscript c represents the degree of dependence on the center point; v_c is the weight; m_c ∈ [0, 1] is the mask, with m_c ≈ 0 in the background area and m_c ≈ 1 in the area corresponding to the object. In this way, the hinge loss is used in the background area and the MSE loss in the object area. In this design, the regression factors y_c can be learned.
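To make the behavior of the mask concrete, here is a minimal PyTorch sketch of the residual in Eq. (2); the score-map size, Gaussian width, mask threshold, and uniform weights are illustrative assumptions, not values from the paper.

```python
import torch

def residual(s, y_c, v_c, m_c):
    """Weighted residual of Eq. (2): hinge-style clipping of negative
    scores in the background (m_c ~ 0), plain MSE residual on the
    target region (m_c ~ 1)."""
    return v_c * (m_c * s + (1 - m_c) * torch.clamp(s, min=0) - y_c)

# Toy 19x19 score map with a Gaussian label centered on the target.
H = W = 19
ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                        torch.arange(W, dtype=torch.float32), indexing="ij")
y_c = torch.exp(-((ys - 9) ** 2 + (xs - 9) ** 2) / (2 * 2.0 ** 2))
m_c = (y_c > 0.05).float()   # rough foreground mask
v_c = torch.ones(H, W)       # uniform spatial weights for the sketch
s = torch.randn(H, W)        # dummy predicted score map
loss = residual(s, y_c, v_c, m_c).pow(2).mean()  # squared residual -> loss
```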

Obtaining eye movement parameters in different regions requires frequently switching to unlearned tracking objects. Compared with offline pre-training and similarity measurement models, FDIMP adopts an online learning and iterative update strategy, so it can be used in live shopping broadcasts and performs better when tracking unclear objects.

    2.3 Data used in the interest model

When the packaged data processing algorithm tracks live broadcast items, a tracking frame must be established for the user's viewpoint and for each target area. When a target area covers the user's viewpoint, they are judged to coincide; that is, the user's viewpoint is attending to that area during the corresponding time. For the demonstrated sales items, as well as the live broadcast anchor, background, comment area, and event coupon area, the data are collected as described above.
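A minimal sketch of the coincidence test described above, assuming axis-aligned region boxes; the function names and box format are our own.

```python
def gaze_in_region(gx, gy, box):
    """True when the gaze point (gx, gy) lies inside a tracked region
    box given as (x, y, w, h); a hit means the viewpoint and the
    region coincide for that frame."""
    x, y, w, h = box
    return x <= gx <= x + w and y <= gy <= y + h

def relative_gaze_time(gaze_points, box):
    """Fraction of frames whose gaze point falls inside the region,
    i.e., the relative gaze time for that plate area."""
    hits = sum(gaze_in_region(gx, gy, box) for gx, gy in gaze_points)
    return hits / len(gaze_points)
```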

In addition to eye movement data, it is also necessary to collect users' explicit feedback data, such as age, gender, and other customized information. Together these give the user's basic information and eye movement information (average blink time, number of blinks, attention time rate of sold items, attention time rate of the anchor area, attention time rate of the discount area, and the number of attention points), which, after data filling, are used for the subsequent model training; an illustrative record layout is sketched below.
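For illustration, a single record combining both kinds of information might look like the following; all field names and values are hypothetical, not the paper's actual schema.

```python
# Hypothetical record combining explicit feedback with the eye movement
# dimension produced by the tracking pipeline.
sample = {
    # explicit user feedback / basic information
    "user_age": 28,
    "user_gender": "F",
    # eye movement indicators
    "avg_blink_time_ms": 210.0,
    "blink_count": 14,
    "attn_rate_sold_items": 0.42,      # relative gaze time on sold items
    "attn_rate_anchor_area": 0.31,     # relative gaze time on the anchor
    "attn_rate_discount_area": 0.12,   # relative gaze time on discounts
    "fixation_count": 187,             # number of attention points
}
```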

    3 Interest Recommendation Model Based on Eye Movement Characteristic Data

We select the DeepFM algorithm as the base algorithm in this paper; it improves on wide&deep and needs neither a pre-trained factorization machine (FM) to obtain hidden vectors nor manual feature engineering. It learns low-order and high-order combined features at the same time, and its FM module and deep module share the feature embedding part, which enables faster and more accurate training. It is therefore well suited to complex scenes such as interest prediction. Based on the DeepFM architecture, our model embeds and encodes the eye movement data after introducing a collaborative information graph, and adds a self-attention mechanism to the deep neural network (DNN) to improve the model's ability to learn key information.

    3.1 Embedded coding layer design

Since the original input features in interest degree prediction have various data types[17], and some dimensions are even incomplete, it is necessary to normalize the mapping of different types of feature components and reduce the dimensionality of the input feature vector. The input features are first mapped to one-hot vectors, and the extremely sparse one-hot encoded input layer is then cascaded with the embedding layer. Like field-aware factorization machines (FFM), DeepFM[18] groups features with the same characteristics into a field, and its formula is

x = f(S, M),

    (3)

where x is the corresponding vector after embedding coding, S is the one-hot coded sparse eigenvector, and M is the parameter matrix whose elements are the weight parameters of the connecting lines in Fig. 3. These parameters are learned iteratively during the training of the CTR prediction model.

As shown in Fig. 3, the embedding layer maps the one-hot sparse vectors of different fields to low-dimensional vectors, which compresses the original data information and greatly reduces the input dimension; a sketch follows the figure.

    Fig. 3 Original input sparse feature vector to dense vector to embedding mapping
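A minimal PyTorch sketch of Eq. (3), assuming one embedding matrix per field and illustrative vocabulary sizes; only the embedding dimension of 8 is taken from the experimental setup.

```python
import torch
import torch.nn as nn

# Eq. (3) in miniature: each field's one-hot index is mapped to a dense
# vector by a learned matrix M (one nn.Embedding per field).
field_sizes = [1000, 500, 20]   # made-up vocabulary size per field
embed_dim = 8                   # embedding scale used in the experiments

embeddings = nn.ModuleList(nn.Embedding(n, embed_dim) for n in field_sizes)

def embed(sparse_idx):
    """sparse_idx: (batch, num_fields) LongTensor holding, per field,
    the position of the 1 in the one-hot vector S."""
    return torch.cat([emb(sparse_idx[:, i])
                      for i, emb in enumerate(embeddings)], dim=1)

idx = torch.tensor([[3, 42, 7], [999, 0, 19]])
x = embed(idx)   # dense vector x of shape (2, 24)
```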

As shown in Fig. 4, to account for the particularity of the newly added data dimensions, user behavior and item knowledge are coded into a unified relationship diagram through the collaboration information graph. To build the information graph, we first define a user-item bipartite graph {(e_u, y_ui, e_i) | e_u ∈ U, e_i ∈ I}, where e_u is a user entity, e_i is an item entity, y_ui represents the link between user u and item i, and U and I are the user set and item set, respectively. When there is an interaction between the two entities, y_ui is set to 1. The collaboration information graph incorporates the new data dimensions, where each user behavior can be represented as a triple (e_u, R, e_i); R = 1 indicates that there is an additional interaction between e_u and e_i. In this way, the user information graph and the newly added dimensions are integrated into a unified graph, as sketched after the figure.

    Fig. 4 Collaboration information graph structure
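A minimal sketch of the triple representation, with made-up entity identifiers:

```python
# Collaboration information graph as triples (e_u, R, e_i); R = 1 marks
# an observed interaction between the user and item entities.
interactions = {("u1", "i3"), ("u1", "i7"), ("u2", "i3")}

def triple(e_u, e_i, interacted):
    return (e_u, 1 if interacted else 0, e_i)

triples = [triple(u, i, True) for u, i in interactions]
# Newly added dimension entities (e.g., eye movement indicators) attach
# to the same graph with their own relation, unifying both sources.
```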

As shown in Fig. 5, the multi-modal information encoder takes the newly added dimension entities and the original information entities as input, and uses an entity encoder and an attention layer to learn a new representation for each entity. The new entity representation retains the entity's own information while aggregating information from neighboring entities. We use the new entity representations as the embeddings in the interest prediction model.

    Fig. 5 Multi-modal information encoder

    3.2 Factorization machine

In CTR prediction, the input features are extremely sparse and correlated, so the factorization machine model aims to fully consider both the first-order features and the second-order combination features when predicting the user's CTR[19]. The regression prediction model of the factorization machine is

y_FM = w_0 + Σ_{i=1}^{n} w_i x_i + Σ_{i=1}^{n-1} Σ_{j=i+1}^{n} w_ij x_i x_j,

(4)

where y_FM is the predicted output, w_0 is the global bias, n is the dimension of the input feature vector, x_i is the feature of the ith dimension, w_i is the weight parameter of the first-order features, and w_ij is the weight parameter of the second-order combination features; the second term accumulates the weighted first-order features w_i x_i. The second-order combination features require many parameters to be learned, n(n-1)/2 in total, and because of the sparseness of data in practical applications, this model is difficult to train. Therefore, we factorize the matrix of second-order weights [w_ij] as V^T V, where the matrix V is

V = [v_1, v_2, …, v_i, …, v_n],

    (5)

where v_i is the k-dimensional hidden vector associated with x_i, so that w_ij = v_i^T v_j.
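A minimal PyTorch sketch of Eqs. (4) and (5), using the standard O(kn) identity for the factorized pairwise term; shapes and test values are illustrative.

```python
import torch

def fm_predict(x, w0, w, V):
    """FM of Eq. (4) with w_ij factorized as V^T V (Eq. (5)).

    x  : (batch, n) input features
    w0 : global bias (scalar tensor)
    w  : (n,) first-order weights
    V  : (k, n) hidden vectors; column v_i belongs to feature x_i
    """
    linear = w0 + x @ w
    # Pairwise term via the usual identity:
    # sum_{i<j} <v_i, v_j> x_i x_j
    #   = 0.5 * sum_f [ (sum_i V_fi x_i)^2 - sum_i V_fi^2 x_i^2 ]
    vx = x @ V.t()                                        # (batch, k)
    pair = 0.5 * (vx.pow(2) - x.pow(2) @ V.t().pow(2)).sum(dim=1)
    return linear + pair

n, k = 6, 3
y = fm_predict(torch.randn(2, n), torch.tensor(0.1),
               torch.randn(n), torch.randn(k, n))
```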

We encode different types of input data (images, texts, labels, etc.) into high-order hidden vectors, and then combine the multi-dimensional data through the multi-modal graph attention module (multi-modal knowledge-graph attention layer).

    3.3 DNN architecture

The DeepFM prediction model introduces a DNN[20] that cascades the embedded and encoded feature vector through fully connected layers to build a regression or classification model. The output of each neuron is a nonlinear mapping of the linearly weighted outputs of the neurons in the previous layer. That is, for the neurons in layer l + 1, the corresponding output value is

a^(l+1) = φ(W^(l) a^(l) + b^(l)),

    (6)

where W^(l), a^(l), and b^(l) represent the weight matrix of layer l, the output of the neurons in layer l, and the bias vector connecting layer l and layer l + 1, respectively. For the nonlinear mapping function φ, the Sigmoid and ReLU functions below are commonly used; the corresponding expressions are

φ(d) = 1/[1 + exp(-d)],

(7)

φ(d) = max(0, d),

(8)

where d represents the input from the previous layer.
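A minimal PyTorch sketch of the stack described by Eqs. (6)-(8); the layer widths are illustrative.

```python
import torch
import torch.nn as nn

# A plain fully connected stack implementing Eq. (6) layer by layer:
# ReLU (Eq. (8)) in the hidden layers, Sigmoid (Eq. (7)) at the output.
dnn = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1), nn.Sigmoid(),
)

a0 = torch.randn(4, 64)  # a batch of embedded feature vectors
p = dnn(a0)              # predicted probabilities in (0, 1)
```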

    3.4 Self-attention mechanism

The self-attention mechanism was first proposed in the field of image processing[21] and later used in various fields[22-26]. Its purpose is to focus on certain feature information during model training. The conventional attention mechanism uses the state of the last hidden layer of the neural network, or the hidden state output at a given moment, to align with the hidden state of the current input. Self-attention instead weights the current input directly; it is a special case of attention that uses the sequence itself as the key and value vectors, and its output vector can be aggregated from the previous hidden outputs of the network.

A single attention network is not enough to capture multiple aspects of interest. A multi-head attention network allows the model to focus on information from different locations and different representation spaces, and can simulate user preferences from multiple views of interest. Therefore, we adopt a multi-head attention module after the hidden layer, as shown in Fig. 6; the processed data dimensions are fed into the input layer, and a minimal sketch follows the figure.

    Fig. 6 DeepFM model of multi-head attention mechanism
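A minimal PyTorch sketch of the multi-head self-attention step over the embedded fields, reusing the embedding scale of 8 from the experiments; the head count and field count are illustrative.

```python
import torch
import torch.nn as nn

# Self-attention with multiple heads over the field embeddings: the same
# sequence supplies query, key, and value, and each head attends to a
# different representation subspace.
embed_dim, num_heads, num_fields, batch = 8, 2, 39, 4
mha = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

fields = torch.randn(batch, num_fields, embed_dim)   # embedded input fields
attended, attn_weights = mha(fields, fields, fields) # q = k = v
dnn_input = attended.flatten(1)  # (batch, num_fields * embed_dim) to the DNN
```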

    4 Experiment and Analysis

    4.1 Experiment preparation

The eye movement data set is collected manually from related industries and obtained from cooperating units, with a total of 673 records. The data set includes videos of the subjects' gaze areas ranging from 30 s to 3 min, marked pictures of each area of interest, personal information, operation history, and other related information. The eye tracking data is filled in and added to the public data set. To verify the performance of the proposed prediction model, the public data set Criteo is selected for evaluation, and its data are filled with the eye movement data. The data set contains more than 450 million user click events and 7-dimensional eye movement parameters. The data types include two major categories, numeric and hash values; the click events have 13 numeric and 26 hash dimensions, and the proportions of positive and negative samples are 22.9120% and 77.0875%, respectively. The data set is divided into training and test data sets at a ratio of 8:2.

    4.2 Interest model performance evaluation index

The interest model is evaluated with two indices: the binary cross-entropy loss function Logloss and the AUC. Logloss is defined as

Logloss = -(1/N) Σ_{i=1}^{N} [y_i log ŷ_i + (1 - y_i) log(1 - ŷ_i)],

(9)

where N is the number of samples, y_i is the true label, and ŷ_i is the predicted probability.

AUC is defined as the area enclosed under the receiver operating characteristic (ROC) curve:

A = ∫_0^1 r_tp(f_pr) df_pr,

(10)

where A represents the AUC, f_pr is the false positive rate, and r_tp is the true positive rate. Different classification thresholds give the true positive rate at different false positive rates, which traces out the ROC curve.
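Both indices can be computed directly with scikit-learn; the labels and predictions below are placeholders standing in for the test set.

```python
import numpy as np
from sklearn.metrics import log_loss, roc_auc_score

y_true = np.array([1, 0, 0, 1, 0, 1])            # ground-truth clicks
y_pred = np.array([0.9, 0.2, 0.4, 0.7, 0.1, 0.6])  # model outputs

print("Logloss:", log_loss(y_true, y_pred))   # binary cross-entropy, Eq. (9)
print("AUC:", roc_auc_score(y_true, y_pred))  # area under ROC, Eq. (10)
```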

    4.3 Experimental results

4.3.1 Experimental setup

In order to reduce the impact of sample order on the final trained model, we first shuffle the samples and divide the labeled data set D into two parts: the training data set D_train and the test data set D_test. In the experiments, the batch size is set to 512, the learning rate to 0.001, the scale of the embedded coding layer to 8, and the maximum supported dimension of the one-hot coding mapping to 450. The effects of the random inactivation probability p and of the number of fully connected layers of the adaptive residual DNN are studied respectively, the optimal parameters are selected, and the improvements designed in this paper are then compared with the DeepFM model; the settings are summarized in the sketch below.
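A minimal sketch of the stated settings and the shuffled 8:2 split; the dictionary keys and helper function are our own naming.

```python
import numpy as np

# Settings stated in the experimental setup above.
config = {
    "batch_size": 512,
    "learning_rate": 0.001,
    "embedding_dim": 8,
    "max_one_hot_dim": 450,
}

def shuffle_split(data, train_frac=0.8, seed=0):
    """Shuffle to remove ordering effects, then split D into
    D_train / D_test at the 8:2 ratio."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    cut = int(train_frac * len(data))
    return data[idx[:cut]], data[idx[cut:]]

D = np.arange(1000)                 # stand-in for the labeled data set
D_train, D_test = shuffle_split(D)  # 800 / 200 samples
```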

4.3.2 Hyperparameter impact research

Figure 7 shows the influence of the random inactivation probability p and the number of fully connected layers of the adaptive residual DNN on the AUC. As the probability of random inactivation increases, the AUC on the test set gradually improves, but once the probability exceeds 0.6, the AUC begins to decrease. This is because when too many neurons are inactive, the number of effective neurons is insufficient to learn and represent the feature information of the interest model. Figure 7 also shows that, as the number of DNN fully connected layers increases, the AUC of the test set gradually increases, reaching 0.8566. The experimental results show that the random inactivation probability and the number of DNN fully connected layers have an important impact on the generalization performance of the model.

    Fig. 7 AUC value of the test set under different random inactivation probabilities and different fully connected layers

    4.4 Model performance evaluation

According to the experimental results in Table 1, after adding the eye movement data set, the AUC values of the logistic regression (LR), xDeepFM, and DeepFM models all improved, by 2.40%, 4.46%, and 5.91%, respectively; DeepFM improved the most. Combining data dimensions yields better results, and the proposed model shows the best performance. Compared with the basic DeepFM model, the AUC of the improved DeepFM on the Criteo data set increases by 5.91%, and the Logloss is reduced by 0.29%.

    Table 1 Performance results of different models and improvements on data set Criteo

With the eye movement data dimension added, the AUC of the proposed model on the Criteo data set is 5.91% higher than that of the basic DeepFM algorithm, the AUC of xDeepFM is 4.46% higher than without the eye movement dimension, and the AUC of LR is only 2.40% higher. That is, after adding the eye movement data dimension, the xDeepFM and DeepFM models improve more, while the LR model improves less.

In the experiments of Table 2, the number of DNN fully connected layers is 4. Table 2 shows the performance of the current mainstream interest degree predictive models after adding the eye movement data dimension and the self-attention mechanism. Improvement 1 (adding the eye movement data dimension) and improvement 2 (adding the eye movement data dimension and the self-attention mechanism) outperform DeepFM by 8.25% and 9.32%, respectively, which shows that the eye movement data dimension is an important factor of user interest, and that the self-attention mechanism also improves the accuracy of the interest model to a certain extent.

    Table 2 AUC value of the test set under different improvements

    5 Conclusions

This paper proposes a prediction model of interest in live shopping based on eye movement features and DeepFM. In this model, we develop eye movement indicators, process eye movement videos with the data processing algorithm, and add data dimensions to the original data set through information filling. We apply the DeepFM architecture in the proposed model, and, to effectively learn important features from different heads and emphasize the relatively important ones, introduce the multi-head attention mechanism into the interest model. Experiments on the public data set Criteo show that, compared with the DeepFM algorithm, the proposed model achieves lower Logloss and better AUC after adding the data dimension and introducing the multi-head attention mechanism.
