
    Deep Knowledge Tracing Embedding Neural Network for Individualized Learning

2020-02-01 09:04:52

HUANG Yongfeng, SHI Jie (施杰)

    College of Computer Science and Technology, Donghua University, Shanghai 201620, China

Abstract: Knowledge tracing is a key component of online individualized learning: it assesses users' mastery of skills and predicts the probability that a user can solve a specific problem. Existing knowledge tracing models share the drawback that the assessments are not directly used in the predictions. To make full use of the assessments during prediction, a novel model, named deep knowledge tracing embedding neural network (DKTENN), is proposed in this work. DKTENN is a synthesis of deep knowledge tracing (DKT) and knowledge graph embedding (KGE). DKT utilizes a long short-term memory (LSTM) network to assess users and track their mastery of skills according to the users' interaction sequences with skill-level tags, and KGE is applied to predict the probability of success on the basis of both the embedded problems and DKT's assessments. In the experiments, DKTENN outperforms performance factors analysis and the other knowledge tracing models based on deep learning.

    Key words: knowledge tracing; knowledge graph embedding (KGE); deep neural network; user assessment; personalized prediction

    Introduction

Nowadays, programming contests are thriving: they involve an increasing number of skills and have become dramatically more difficult. However, available online learning platforms lack the ability to present suitable exercises for practice.

Knowledge tracing is a key component of individualized learning, owing to its ability to assess each user's mastery of skills and to predict the probability that a user can correctly solve a specific problem. Introducing knowledge tracing into the learning of programming-contest skills can benefit both teachers and students.

Nevertheless, there are some challenges. The traditional knowledge tracing models, such as Bayesian knowledge tracing (BKT)[1] (and its variants with additional individualized parameters[2-3], problem difficulty parameters[4] and the forget parameter[5]), learning factors analysis (LFA)[6] and performance factors analysis (PFA)[7], make the following two assumptions about the skills. First, the skills are properly partitioned into an ideal, hierarchical structure, i.e., once all the prerequisites are mastered, a skill can be mastered through repeated practice. Second, the skills are independent of each other, i.e., if p_A is the probability that a user correctly solves a problem requiring skill A, and p_B is the probability that the user can solve another problem requiring a different skill B, it is assumed that the probability the user correctly solves a problem requiring both skills A and B is p_A × p_B, implying that the interconnections between skills are ignored. However, in individualized learning, a reasonable partition should make it easier for users to master the skills instead of purely enhancing the models' performance. In addition, according to the second assumption, if p_A = p_B = 1, then p_A × p_B = 1. This is not in line with the cognitive process, since a problem requiring two skills is usually far more difficult than problems each requiring only one skill.

Recently, knowledge tracing models based on deep learning have been attracting tremendous research interest. Deep knowledge tracing (DKT)[8-11], which utilizes a recurrent neural network (RNN) or long short-term memory (LSTM)[12], is robust[5] and has the power to infer a user's mastery of one skill from another[13]. Along with its variants[14-15], like deep item response theory (Deep-IRT)[14], the dynamic key-value memory network (DKVMN)[16] uses a memory network[17] (where richer information on the users' knowledge states can be stored). Some researchers[18-19] used the attention mechanism[20] to prevent the models from discarding information that is important in the future. Specifically, self-attentive knowledge tracing (SAKT)[21] uses the transformer[22] to enable a faster training process and better prediction performance. These deep learning-based models are superior to the traditional models in that they place no extra requirements on the partition of skills. However, they are used either to assess the mastery of skills, or to predict the probability that a problem can be solved. Even though some models[18] can assess and predict at the same time, the assessments are not convincing enough, because they are not used in the predictions and are only evaluated according to the experts' experience. Other researchers[23-24] integrated the traditional methods into deep learning models, making it possible to assess and predict at the same time. Yet, their proposed models still placed constraints on the partition of skills.

In this paper, a novel knowledge tracing model, which makes use of both DKT and knowledge graph embedding (KGE)[25-29], is proposed. DKT has the capacity to assess users and places no extra requirements on the partition of skills. KGE can be used to infer whether two given entities (such as a user and a problem) have a certain relation (such as the user being capable of solving the problem). A combination of the two has the power to assess and predict simultaneously. Three datasets are used to evaluate the proposed model against the state-of-the-art knowledge tracing models.

    1 Problem Formalization

Individualized learning involves user assessment and personalized prediction. In an online learning platform, suppose there are K skills, Q problems, and N users. A user's submissions (or attempts) s = {(e_1, a_1), (e_2, a_2), …, (e_t, a_t), …, (e_T, a_T)} form a sequence, where T is the length of the sequence, e_t is the problem identifier and a_t is the result of the user's t-th attempt (1 ≤ t ≤ T). If the user correctly solves problem e_t, a_t = 1; otherwise a_t = 0.

Definition 1 (User assessment): given a user's submissions s, after the T-th attempt, assess the user's mastery of all the skills y_T = (y_{T,1}, y_{T,2}, …, y_{T,j}, …, y_{T,K}) ∈ R^K, where y_{T,j} is the mastery of the j-th skill (0 ≤ y_{T,j} ≤ 1, 1 ≤ j ≤ K). y_{T,j} = 0 means the user knows nothing about the j-th skill; y_{T,j} = 1 means the user has already mastered the j-th skill.

Definition 2 (Personalized prediction): given a user's submissions s, after the T-th attempt, predict the probability p_{T+1} (0 ≤ p_{T+1} ≤ 1) that the user correctly solves problem e_{T+1} in the (T+1)-th attempt.

    2 Deep Knowledge Tracing Embedding Neural Network (DKTENN)

    Fig. 1 Projection from the entity space to the relation space

    2.1 Model architecture

As is shown in Fig. 2, DKTENN contains four components, i.e., user embedding, problem embedding, projection and normalization (proj. & norm.), and predictor.

User embedding: a user's submissions s are first encoded into {x_1, x_2, …, x_t, …, x_T}. x_t = (q_t, r_t) ∈ R^{2K} is a vector containing the required skills of problem e_t and the result a_t of the user's t-th attempt, where q_t, r_t ∈ R^K (1 ≤ t ≤ T). If problem e_t requires the j-th skill (1 ≤ j ≤ K), the j-th entry of q_t is one and the j-th entry of r_t is a_t; otherwise both j-th entries are zero. DKT's input is {x_1, x_2, …, x_t, …, x_T}, and its output is {y_1, y_2, …, y_t, …, y_T}, where y_t = (y_{t,1}, y_{t,2}, …, y_{t,j}, …, y_{t,K}) ∈ R^K, and y_{t,j} indicates the user's mastery of the j-th skill after the t-th attempt (1 ≤ j ≤ K, 1 ≤ t ≤ T).
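The skill-level encoding described above can be sketched as follows (a minimal illustration; the skill map and submission sequence are hypothetical, not from the paper's datasets):

```python
import numpy as np

def encode_submissions(submissions, skills_of, K):
    """Encode a submission sequence into vectors x_t = (q_t, r_t) in R^{2K}.

    submissions: list of (problem_id, result) pairs, result in {0, 1}
    skills_of:   maps a problem_id to the set of required skill indices
    K:           total number of skills
    """
    xs = []
    for e_t, a_t in submissions:
        q = np.zeros(K)  # required skills of problem e_t
        r = np.zeros(K)  # result a_t placed at the required-skill entries
        for j in skills_of[e_t]:
            q[j] = 1.0
            r[j] = a_t
        xs.append(np.concatenate([q, r]))  # x_t in R^{2K}
    return np.stack(xs)

# Hypothetical example: 3 skills, two attempts
skills_of = {"p1": {0, 2}, "p2": {1}}
X = encode_submissions([("p1", 1), ("p2", 0)], skills_of, K=3)
print(X.shape)  # (2, 6)
```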

    Fig. 2 Network architecture of DKTENN

    In this paper, LSTM is used to implement DKT. The update equations for LSTM are:

i_t = σ(W_ix x_t + W_ih h_{t-1} + b_i),    (1)

f_t = σ(W_fx x_t + W_fh h_{t-1} + b_f),    (2)

g_t = tanh(W_gx x_t + W_gh h_{t-1} + b_g),    (3)

o_t = σ(W_ox x_t + W_oh h_{t-1} + b_o),    (4)

c_t = f_t * c_{t-1} + i_t * g_t,    (5)

h_t = o_t * tanh(c_t),    (6)

where W_ix, W_fx, W_gx, W_ox ∈ R^{l×2K}; W_ih, W_fh, W_gh, W_oh ∈ R^{l×l}; b_i, b_f, b_g, b_o ∈ R^l; l is the size of the hidden states; * is the element-wise multiplication for matrices; and σ(·) is the sigmoid function:

σ(x) = 1/(1 + e^{-x}).    (7)

h_t ∈ R^l and c_t ∈ R^l are the hidden states and the cell states. Initially, h_0 = c_0 = 0 = (0, 0, …, 0) ∈ R^l.

    The assessments are obtained by applying a fully-connected layer to the hidden states:

y_t = σ(W_y · dropout(h_t) + b_y),    (8)

where W_y ∈ R^{K×l}, b_y ∈ R^K, and dropout(·) is used during model training to prevent overfitting[30].
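The LSTM updates (1)-(6) and the assessment layer (8) can be sketched in NumPy (a minimal, untrained illustration; the weights are random stand-ins for trained parameters, the dimensions are hypothetical, and dropout is omitted):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """One LSTM step, Eqs. (1)-(6)."""
    i = sigmoid(W["ix"] @ x + W["ih"] @ h + b["i"])  # input gate, Eq. (1)
    f = sigmoid(W["fx"] @ x + W["fh"] @ h + b["f"])  # forget gate, Eq. (2)
    g = np.tanh(W["gx"] @ x + W["gh"] @ h + b["g"])  # candidate, Eq. (3)
    o = sigmoid(W["ox"] @ x + W["oh"] @ h + b["o"])  # output gate, Eq. (4)
    c = f * c + i * g                                # cell state, Eq. (5)
    h = o * np.tanh(c)                               # hidden state, Eq. (6)
    return h, c

K, l = 3, 8                       # number of skills, hidden size (hypothetical)
rng = np.random.default_rng(0)
W = {k: rng.normal(0, 0.1, (l, 2 * K)) for k in ("ix", "fx", "gx", "ox")}
W.update({k: rng.normal(0, 0.1, (l, l)) for k in ("ih", "fh", "gh", "oh")})
b = {k: np.zeros(l) for k in ("i", "f", "g", "o")}
Wy, by = rng.normal(0, 0.1, (K, l)), np.zeros(K)

h = c = np.zeros(l)               # h_0 = c_0 = 0
for x in rng.integers(0, 2, (5, 2 * K)).astype(float):  # dummy input sequence
    h, c = lstm_step(x, h, c, W, b)
    y = sigmoid(Wy @ h + by)      # assessment y_t, Eq. (8), dropout omitted

print(y.shape)  # (3,): one mastery estimate per skill, each in (0, 1)
```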

    DKTENN uses DKT’s outputyTas the user embedding, which is in the user entity space and summarizes the user’s capabilities in skill-level. In this paper, the DKT in user embedding differs from standard DKT (DKT-S) in the input and output. The input of DKT is a sequence of skill-level tags (xtis encoded usinget’s required skills) and the output is the mastery of each skill, while the input of DKT-S is a sequence of problem-level tags (xtis encoded directly usingetinstead of its required skills) and the output is the probabilities that a user can correctly solve each problem.

The projected vector of y_T is y_T M_1, where M_1 ∈ R^{K×d} is the projection matrix, and d is the dimension of the projected vector. The user vector u ∈ R^d is the L2-normalized projected vector:

u = y_T M_1/(‖y_T M_1‖_2 + ε),    (9)

where ε is added to avoid a zero denominator, and ‖·‖_2 is the L2-norm of a vector:

‖y‖_2 = (|y_1|² + |y_2|² + … + |y_d|²)^{1/2},    (10)

where y = (y_1, y_2, …, y_d) ∈ R^d, and |y_i| is the absolute value of y_i.

Similarly, the problem vector v ∈ R^d is obtained from the problem embedding of e_{T+1}, denoted g_{T+1} ∈ R^K:

v = g_{T+1} M_2/(‖g_{T+1} M_2‖_2 + ε),    (11)

where M_2 ∈ R^{K×d} is the projection matrix.

During prediction, the projection matrices M_1 and M_2 are fixed and independent of the submissions.

Predictor: the prediction is made based on the user vector u and the problem vector v. The concatenated vector (u, v) ∈ R^{2d} is used as the input of a feed-forward neural network (FNN):

z = dropout(σ((u, v)W_1 + b_1))W_2 + b_2,    (12)

where W_1 ∈ R^{2d×h}, b_1 ∈ R^h, W_2 ∈ R^{h×2}, b_2 ∈ R^2, and z = (z_0, z_1) ∈ R^2.

A final softmax layer is applied to z to get the final prediction:

p_i = e^{z_i}/(e^{z_0} + e^{z_1}), i = 0, 1,    (13)

where p_0 is the probability that the user cannot solve the problem, while p_1 is the probability that the user can correctly solve the problem, and p_0 + p_1 = 1.
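The projection, normalization and predictor steps, Eqs. (9)-(13), can be sketched as follows (a minimal illustration with hypothetical dimensions; the weights are random stand-ins for trained parameters, and dropout is omitted):

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    # Eqs. (9) and (11): divide by the L2-norm; eps avoids a zero denominator
    return x / (np.linalg.norm(x) + eps)

def predict(y_T, g, M1, M2, W1, b1, W2, b2):
    """Project the user/problem embeddings, then score with the FNN + softmax."""
    u = l2_normalize(y_T @ M1)                   # user vector, Eq. (9)
    v = l2_normalize(g @ M2)                     # problem vector, Eq. (11)
    uv = np.concatenate([u, v])                  # (u, v) in R^{2d}
    hidden = 1 / (1 + np.exp(-(uv @ W1 + b1)))   # sigmoid hidden layer
    z = hidden @ W2 + b2                         # Eq. (12)
    p = np.exp(z - z.max())
    return p / p.sum()                           # softmax, Eq. (13): (p0, p1)

K, d, h = 3, 4, 8                                # hypothetical sizes
rng = np.random.default_rng(1)
p = predict(rng.random(K), rng.random(K),
            rng.normal(size=(K, d)), rng.normal(size=(K, d)),
            rng.normal(size=(2 * d, h)), np.zeros(h),
            rng.normal(size=(h, 2)), np.zeros(2))
print(p)  # (p0, p1), and p0 + p1 = 1
```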

    2.2 Model training

There are two parts in training. The first part is user embedding. In order to assess the users' mastery of skills, following Piech et al.[8], the loss function is defined by:

ℓ(p, a) = -a log p - (1 - a)log(1 - p),    (14)

L = Σ_{t=1}^{T-1} ℓ(y_t · q_{t+1}, a_{t+1}),    (15)

Considering that DKT suffers from the reconstruction problem and the wavy transition problem, three regularization terms have been proposed by Yeung and Yeung[31]:

r = (1/T) Σ_{t=1}^{T} ℓ(y_t · q_t, a_t),    (16)

w_1 = Σ_{t=1}^{T-1} ‖y_{t+1} - y_t‖_1/(K(T - 1)),    (17)

w_2 = Σ_{t=1}^{T-1} ‖y_{t+1} - y_t‖_2²/(K(T - 1)),    (18)

where ‖·‖_1 is the L1-norm of a vector:

‖y‖_1 = |y_1| + |y_2| + … + |y_K|,    (19)

where y = (y_1, y_2, …, y_K) ∈ R^K.

The regularized loss function of DKT is:

L′ = L + λ_R r + λ_1 w_1 + λ_2 w_2,    (20)

where λ_R, λ_1 and λ_2 are the regularization parameters.
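Under the reconstructions above, the regularized DKT loss can be evaluated numerically as follows (a minimal sketch; the assessments y, skill one-hots q, results a and λ values are hypothetical toy inputs, not trained outputs):

```python
import numpy as np

def bce(p, a, eps=1e-12):
    # Eq. (14): binary cross-entropy between prediction p and result a
    return -(a * np.log(p + eps) + (1 - a) * np.log(1 - p + eps))

def dkt_loss(y, q, a, lam_r=0.1, lam_1=0.003, lam_2=3.0):
    """Regularized DKT loss, Eqs. (15)-(20).
    y: (T, K) assessments, q: (T, K) skill one-hots, a: (T,) results."""
    T, K = y.shape
    pred = (y[:-1] * q[1:]).sum(axis=1)           # y_t . q_{t+1}
    L = bce(pred, a[1:]).sum()                    # Eq. (15)
    r = bce((y * q).sum(axis=1), a).mean()        # Eq. (16), reconstruction
    diff = y[1:] - y[:-1]
    w1 = np.abs(diff).sum() / (K * (T - 1))       # Eq. (17), waviness (L1)
    w2 = (diff ** 2).sum() / (K * (T - 1))        # Eq. (18), waviness (L2)
    return L + lam_r * r + lam_1 * w1 + lam_2 * w2  # Eq. (20)

rng = np.random.default_rng(2)
y = rng.uniform(0.05, 0.95, (5, 3))               # toy assessments
q = np.eye(3)[rng.integers(0, 3, 5)]              # one required skill per step
a = rng.integers(0, 2, 5).astype(float)           # toy results
print(dkt_loss(y, q, a) > 0)  # True: the cross-entropy terms are positive
```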

The second part comprises problem embedding, proj. & norm., and predictor. The output of the predictor indicates the probability, so this part can be trained by minimizing the following binary cross-entropy loss:

L_pred = -(a_{T+1} log p_1 + (1 - a_{T+1}) log p_0).    (21)

Adam[32] is used to minimize the loss functions. Gradient clipping is applied to deal with exploding gradients[33].

    3 Experiments

    Experiments are conducted to show that: (1) DKT’s assessments are reasonable; (2) DKTENN outperforms the state-of-the-art knowledge tracing models; (3) KGE is necessary.

    3.1 Datasets

The datasets used in this paper include users, problems and their required skills. The skills are partitioned according to the teaching experience of domain experts. The data come from three publicly accessible online judges.

Codeforces (CF): CF regularly holds online contests, and all the problems on CF come from these contests. The problems (with required skills labeled by the experts), the 500 top-rated users and their submissions are prepared as the CF dataset.

Hangzhou Dianzi University online judge (HDU) & Peking University online judge (POJ): the problems on HDU and POJ carry no information on the required skills, so solutions from the Chinese software developer network (CSDN) are collected and used to label the problems. The users who have solved the most problems and their submissions are prepared as the HDU and POJ datasets.

    The details of the datasets are shown in Table 1, where the numbers of users, problems, skills and submissions are given. For each dataset, 20% of the users are randomly chosen as the test set, and the remaining users are left as the training set.

    Table 1 Dataset overview

    3.2 Evaluation methodology

Evaluation metrics: the area under the ROC curve (AUC) and the root mean square error (RMSE) are used to measure the performance of the models. AUC ranges from 0.5 to 1.0, and RMSE is greater than or equal to 0. A model with a larger AUC and a smaller RMSE is better.
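As a quick illustration, both metrics can be computed directly from predictions (the predictions and labels below are hypothetical; AUC is computed here via the rank-sum formulation, which is equivalent to the area under the ROC curve):

```python
import numpy as np

def auc(y_true, y_score):
    """AUC via the Mann-Whitney rank-sum statistic: the fraction of
    (positive, negative) pairs ranked correctly, ties counting 1/2."""
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def rmse(y_true, y_score):
    return np.sqrt(np.mean((y_true - y_score) ** 2))

# Hypothetical predictions p_{T+1} against observed results a_{T+1}
a = np.array([1, 0, 1, 1, 0])
p = np.array([0.9, 0.2, 0.7, 0.6, 0.4])
print(auc(a, p))   # 1.0: every positive outranks every negative
print(rmse(a, p))
```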

    Fig. 3 Architecture of DKTML

    3.3 Results and discussion

The results for user assessment are shown in Table 2. DKT outperforms BKT and BKT-F, achieving an average gain in AUC of 35.6% and 17.3%, respectively. On the one hand, BKT can only model the acquisition of a single skill, while DKT takes all the skills into account and is capable of adjusting a user's mastery of one skill based on another closely related skill according to the results of the attempts. On the other hand, as a probabilistic model relying on only four (or five) parameters, BKT (or BKT-F) has difficulty modeling the relatively complicated learning process in programming contests. Benefiting from LSTM, DKT has more parameters and a stronger learning ability. Nevertheless, the input of DKT does not contain information such as the difficulties of the problems, i.e., if two users have solved problems requiring similar skills, their assessments are also similar, though the problem difficulties may vary greatly. Thus, DKT's assessments are only rough estimates of how well a user has mastered a skill.

The results for personalized prediction are shown in Table 3. DKTENN outperforms the state-of-the-art knowledge tracing models on all three datasets, achieving an average gain of 0.9% in AUC and an average decrease of 0.6% in RMSE, which demonstrates the effectiveness of the proposed model.

To predict whether a user can solve a problem, not only the required skills but also other information, such as the difficulties, should be considered. Both DKTML and DKTENN are based on DKT's assessments, but the difference is that DKTML uses the required skills in a straightforward manner, while DKTENN uses a method similar to KGE to make full use of information such as the users' mastery of skills and the problems' difficulties besides the required skills. Compared with DKTFNN (the best-performing DKTML variant), DKTENN achieves an average gain of 2.5% in AUC, which shows that KGE is an essential component.

    Since the prediction of DKTENN is based on the assessments of DKT, better performance of DKTENN shows that the assessments of DKT are reasonable.

    Table 2 Experimental results of user assessment

    Figure 4 is drawn by projecting the trained problem embeddings into the 2-dimensional space using t-SNE[36]. The correspondence between the 50 problems and their required skills can be found in the Appendix. To some extent, Fig. 4 reveals that DKTENN is able to cluster similar problems. For example, problems 9 and 32 are clustered possibly because they share the skill “data structures”; problems 17 and 49 are both “interactive” problems. So, it is believed that the embeddings can help to discover the connections between problems. Due to the complexity of the problems in programming contest, further research on the similarity between problems is still needed.

    Table 3 Experimental results of personalized prediction

    Fig. 4 Visualizing problem embeddings using t-SNE

    4 Conclusions

A new knowledge tracing model, DKTENN, which makes predictions directly based on the assessments of DKT, has been proposed in this work. The problems, the users and their submissions from CF, HDU and POJ are used as datasets. Owing to the combination of DKT and KGE, DKTENN outperforms the existing models in the experiments.

    At present, the problem or skill difficulties are not incorporated into the assessments of DKT. In the future, to further improve the assessments and the prediction performance of the model, better embedding methods will be explored to encode the features of problems and skills.

Table Ⅰ Selected CF problems and their required skills

    (Table Ⅰ continued)
