
    Word Sense Disambiguation Model with a Cache-Like Memory Module


    LIN Qian(林 倩), LIU Xin(劉 鑫), XIN Chunlei(辛春蕾), ZHANG Haiying(張海英), ZENG Hualin(曾華琳), ZHANG Tonghui(張同輝), SU Jinsong(蘇勁松)

    School of Informatics, Xiamen University, Xiamen 361005, China

    Abstract: Word sense disambiguation (WSD), identifying the specific sense of the target word given its context, is a fundamental task in natural language processing. Recently, researchers have shown promising results using long short-term memory (LSTM), which is able to better capture sequential and syntactic features of text. However, this method neglects the dependencies among instances, such as their contextual semantic similarities. To solve this problem, we propose a novel WSD model that introduces a cache-like memory module to capture the semantic dependencies among instances. Extensive evaluations on standard datasets demonstrate the superiority of the proposed model over various baselines.

    Key words: word sense disambiguation (WSD); memory module; semantic dependencies

    Introduction

    Word sense disambiguation (WSD) aims to accurately identify the specific meaning of an ambiguous word according to its particular context. As a fundamental task in natural language processing (NLP), it benefits many other NLP tasks, such as neural machine translation (NMT), question answering (QA) and sentiment analysis. Therefore, how to construct a high-quality WSD model has attracted much attention in academia and industry.

    To achieve this goal, previous studies usually resorted to artificial features containing linguistic and other information. Generally, these models can be grouped into four categories: unsupervised[1-3], supervised[4-5], semi-supervised[6-8] and knowledge-based[9-10] approaches. Recently, with the rapid development of deep learning, the studies of WSD have evolved from conventional feature-engineering-based models into neural network architectures. In this line, the common practice is to use word embeddings. For example, word embeddings were leveraged as WSD features in different ways[11]. In addition, recurrent neural networks (RNN), which effectively exploit word order, have been proven to be effective. Some researchers[12-13] mainly focused on long short-term memory (LSTM) based WSD models, which can capture the sequential and syntactic patterns of the given sentence and thus achieve competitive performance in this task. Despite their success, previous studies conducted WSD in isolation, neglecting the semantic dependencies among instances: target words with similar contexts should have the same sense, an assumption that has been adopted in many NLP tasks, such as entity linking[14-15]. As shown in Fig. 1, for the target word dykes, the same word sense appears in similar contexts.

    Instance 1 Assuming that magnetizations in the South Mountains granodiorite, Telegraph Pass granite and felsic dykes were acquired before and during ductile extensional deformation, we interpret these data as demonstrating that the South Mountains footwall has not been significantly tilted after mylonitic deformation. Sense_id: dyke%1:06:00::
    Instance 2 Similarly, assuming that magnetizations in the microdiorite dykes were acquired during initial stages of brittle deformation, we interpret these data as demonstrating that the South Mountains footwall has not been significantly tilted after the brittle stages of deformation. Sense_id: dyke%1:06:00::

    In this paper, we propose a novel WSD model with a cache-like memory module. It is a significant extension of the conventional LSTM-based WSD model[13]. The introduced cache-like memory module memorizes the sense disambiguation results of other instances of the same target word, and thus provides helpful information for the current disambiguation. We design this module based on the observation that instances with similar contexts tend to have the same sense disambiguation result. Besides, since the memory can be traced back to training examples, it may help explain the decisions the model makes and thus improve its interpretability, and the memorized values can help to improve the model accuracy, as verified in studies of other NLP tasks[16-17]. It is worth mentioning that the introduced cache-like memory is composed of key-value pairs: the keys denote the semantic representations of instances, and the values are the corresponding sense disambiguation results. We compute the dot-product similarities between the current hidden state and the stored keys in memory. Then, according to these similarities, we summarize the memorized sense disambiguation results as the weighted sum of the values. This summarized vector is incorporated into the conventional decoder to refine the sense disambiguation result of the current instance. In this way, our proposed model is able to fully exploit the semantic similarities among instances to refine the conventional WSD model. To investigate the effectiveness of our proposed WSD model, we carry out multiple groups of experiments on benchmark datasets. Experimental results and in-depth analysis show that our model outperforms previous WSD models.

    The related work mainly includes WSD and memory neural networks. WSD has been one of the hot research topics in the NLP community. The previous studies on WSD can be roughly classified into the following aspects: unsupervised WSD, supervised WSD, semi-supervised WSD, and knowledge-based WSD. Unsupervised WSD is based on the assumption that similar word senses appear in similar contexts. Therefore, studies on unsupervised WSD mainly focus on how to automatically learn the sense tags of target words from unlabeled data. The typical approaches treat sense disambiguation as a clustering problem which aims to group together examples with similar contexts[1-3, 18-19].

    Different from unsupervised WSD, supervised WSD mainly uses manually sense-annotated data to train a classifier. Zhong and Ng[4] used a linear kernel support vector machine as the classifier. Shen et al.[5] also trained a multiclass classifier to distinguish sense categories. Experimental results on many datasets demonstrate that these approaches can achieve satisfying performance in this task.

    However, it is costly to obtain sense-annotated corpora, which makes it hard to extend supervised WSD to new domains. To deal with this problem, many researchers paid attention to semi-supervised WSD, which can simultaneously exploit both labeled and unlabeled datasets[6-8, 20-25].

    Unlike the above-mentioned approaches, knowledge-based WSD mainly resorts to leveraging external knowledge resources, such as knowledge bases, semantic networks and dictionaries, to identify the senses of target words[9, 19, 26-33]. However, knowledge-based WSD cannot be widely used because external knowledge resources are rare for many languages and domains.

    Recently, with the rapid development of deep learning, neural network based WSD has attracted increasing attention and become the dominant approach to this task[9, 11, 34-36]. Compared with traditional methods, neural network based models can automatically learn features that are beneficial to WSD. In particular, some researchers use LSTM networks to capture the relationship between the context and the word meaning by modeling the sequence of words surrounding the target word[12, 13, 37]. However, all the above work conducts WSD in isolation.

    Recently, owing to the role of memory in storing previous results and capturing useful history, memory neural networks have been widely used in many NLP tasks, such as language modeling[38-40], QA[41] and NMT[16, 17, 42]. To the best of our knowledge, our work is the first attempt to introduce a memory module into WSD; it directly utilizes memorized information from similar examples, and thus makes better use of the semantic dependencies between instances.

    The remainder of this paper is organized as follows. Section 1 describes our proposed model, including details on the model architecture and the objective function. Experimental results are presented and analyzed in section 2, followed by conclusions in section 3.

    1 Proposed Model

    In this section, we describe our proposed WSD model in detail. Our model is a significant extension of the conventional LSTM-based WSD model[13]. However, it is worth noting that the introduced cache-like memory is also applicable to other neural network based WSD models.

    Figure 2 illustrates the model architecture, which is composed of a conventional LSTM-based WSD model and a cache-like memory module.

    1.1 LSTM-based WSD model

    Fig. 2 Architecture of our WSD model

    Given a target word x_i and its contextual hidden state h_i, we introduce a softmax layer to predict the probability distribution over its candidate senses. Formally, we produce the probability distribution of candidate senses as:

    (1)

    (2)
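
    The following is a minimal sketch, not the authors' exact implementation, of the BiLSTM-based WSD classifier behind Eqs. (1) and (2): the hidden state of the target word produced by a bidirectional LSTM is fed to a softmax layer over that word's candidate senses. All names and dimensions (WsdBiLSTM, vocab_size, n_senses) are illustrative.

        import torch
        import torch.nn as nn

        class WsdBiLSTM(nn.Module):
            def __init__(self, vocab_size, emb_dim, hidden_dim, n_senses):
                super().__init__()
                self.embedding = nn.Embedding(vocab_size, emb_dim)
                self.bilstm = nn.LSTM(emb_dim, hidden_dim,
                                      batch_first=True, bidirectional=True)
                # softmax layer over the candidate senses of the target word
                self.out = nn.Linear(2 * hidden_dim, n_senses)

            def forward(self, token_ids, target_pos):
                emb = self.embedding(token_ids)              # (batch, seq_len, emb_dim)
                states, _ = self.bilstm(emb)                 # (batch, seq_len, 2*hidden_dim)
                # contextual hidden state h_i of the target word
                h_i = states[torch.arange(states.size(0)), target_pos]
                return torch.softmax(self.out(h_i), dim=-1), h_i

        # toy usage: a batch of two sentences with target words at positions 3 and 7
        model = WsdBiLSTM(vocab_size=10000, emb_dim=100, hidden_dim=100, n_senses=5)
        probs, h = model(torch.randint(0, 10000, (2, 12)), torch.tensor([3, 7]))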

    1.2 Cache-like memory module

    In order to better identify the sense of a target word, we explicitly model the semantic dependencies among instances to refine the neural WSD model. To this end, we introduce a cache-like memory module which memorizes the sense disambiguation results of other instances as an array of key-value pairs. Our basic intuition is that the more similar the context of the current instance is to that of another instance in memory, the closer their sense disambiguation results should be.

    To exploit the cache-like memory information, we summarize the memorized sense disambiguation results as a memory vector m_i. Formally, m_i is defined as the sum over the values v_t weighted by the normalized similarities {s_t}:

    m_i = Σ_t s_t v_t    (3)
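
    As a concrete illustration of Eq. (3), the sketch below computes m_i as the sum of the stored values weighted by normalized dot-product similarities between the current hidden state and the stored keys. Treating the normalization as a softmax is our assumption, and all names are illustrative.

        import numpy as np

        def memory_vector(h_i, keys, values):
            """keys: (T, d) stored hidden states; values: (T, n_senses) stored sense results."""
            scores = keys @ h_i               # dot-product similarity between h_i and each key
            s = np.exp(scores - scores.max())
            s /= s.sum()                      # normalized similarities s_t
            return s @ values                 # m_i = sum_t s_t * v_t

        rng = np.random.default_rng(0)
        m_i = memory_vector(rng.normal(size=128),
                            rng.normal(size=(50, 128)),
                            rng.random((50, 5)))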

    Then we incorporate the memory vector m_i into the final output as

    (4)

    (5)

    where σ is the sigmoid function, the dynamic weight λ is used to control the effect of the cache-like memory module, and W(3), W(4), and W(5) are learnable parameter matrices. The basic idea behind our strategy is that the same target word in different instances requires different amounts of context to be disambiguated. For one considered instance, if our model is able to retrieve another instance with a similar context from the cache-like memory, it is reasonable for our model to exploit the disambiguation result of that instance, and vice versa.
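
    Since Eqs. (4) and (5) are not reproduced here, the sketch below shows only one common form such a gate can take: a sigmoid gate λ computed from h_i and m_i interpolates between the original sense distribution and the memory summary. The exact roles of W(3), W(4) and W(5) in the paper may differ (an additional projection is omitted here), so everything in this snippet is an assumption for illustration.

        import numpy as np

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def gated_output(h_i, m_i, p_lstm, w3, w4):
            """h_i: hidden state (d,); m_i: memory vector over senses (n,);
            p_lstm: sense distribution from Eqs. (1)-(2)."""
            lam = sigmoid(w3 @ h_i + w4 @ m_i)       # scalar gate; w3, w4 stand in for W(3), W(4)
            return lam * m_i + (1.0 - lam) * p_lstm  # refined sense distribution

        rng = np.random.default_rng(1)
        d, n = 128, 5
        m = rng.random(n); m /= m.sum()
        p = gated_output(rng.normal(size=d), m, np.full(n, 1.0 / n),
                         rng.normal(size=d), rng.normal(size=n))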

    1.3 Training objective function

    Given a training corpus D, we train the model according to the following cross-entropy loss with parameters θ:

    (6)

    where S(x_i) is the sense set of the target word x_i, and t_j(x_i) is the jth element of the sense distribution t(x_i) for x_i. We will describe training details in section 2.1.2.
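
    As a worked illustration of the objective in Eq. (6), the sketch below accumulates the cross-entropy between the gold sense distribution t(x_i), restricted to the sense set S(x_i), and the predicted distribution. Treating t(x_i) as one-hot in the example and the exact summation form are assumptions; names are ours.

        import numpy as np

        def wsd_cross_entropy(pred_dists, gold_dists):
            """Each entry is a distribution over S(x_i) for one training instance x_i."""
            loss = 0.0
            for p, t in zip(pred_dists, gold_dists):
                loss -= float(t @ np.log(p + 1e-12))   # - sum_j t_j(x_i) * log p_j(x_i)
            return loss

        # one instance with three candidate senses, the gold sense being the first
        print(wsd_cross_entropy([np.array([0.7, 0.2, 0.1])],
                                [np.array([1.0, 0.0, 0.0])]))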

    2 Experiments

    2.1 Setup

    2.1.1 Datasets

    To evaluate our proposed model, we carry out WSD experiments on the lexical sample tasks of SensEval2[43] and SensEval3[44].

    Table 1 provides the details of the experimental datasets, including the training set and the testing set. Our baseline is a BiLSTM-based WSD model proposed in Ref. [13].

    Table 1 Details of experimental data sets

    We train the proposed WSD model in two steps: pre-training and fine-tuning. In the pre-training step, following Ref. [13], we train a WSD model based on BiLSTM, with the hyper-parameter settings presented in Table 2. Please note that we also train the baseline model under the same hyper-parameters to ensure a fair comparison.

    Table 2 Hyper-parameter settings at the pre-training stage

    2.1.2 Training Details

    Since our training datasets are not large, we employ dropout to prevent the model from over-fitting. Specifically, we set the dropout rates of both the embeddings and the hidden states to 0.5. Besides, we add Gaussian noise ~N(0, 0.2σ_i) to the word embeddings of input sentences, where σ_i is the standard deviation of the ith dimension of the word embedding matrix. In addition, we randomly discard input words with rate 0.1 to further alleviate over-fitting, and we use the GloVe vectors to initialize the word embeddings. For the out-of-vocabulary (OOV) words not appearing in the GloVe vocabulary, we directly initialize their embeddings according to the uniform distribution ~U(-0.1, 0.1).
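
    A minimal sketch of the regularization described above, assuming the Gaussian noise uses 0.2σ_i as its standard deviation and that OOV words are drawn from U(-0.1, 0.1); the function and variable names are ours, not the authors'.

        import numpy as np

        rng = np.random.default_rng(0)

        def init_embedding(word, glove_vectors, dim=100):
            # GloVe vector if available, otherwise uniform initialization for OOV words
            return glove_vectors.get(word, rng.uniform(-0.1, 0.1, size=dim))

        def noisy_inputs(emb_matrix, sentence_embs, word_drop=0.1):
            sigma = emb_matrix.std(axis=0)                       # per-dimension std sigma_i
            noise = rng.normal(0.0, 0.2 * sigma, size=sentence_embs.shape)
            keep = rng.random(len(sentence_embs)) >= word_drop   # randomly discard input words
            return (sentence_embs + noise)[keep]

        emb_matrix = rng.normal(size=(10000, 100))
        sent = emb_matrix[:12]                                   # embeddings of a 12-word sentence
        print(noisy_inputs(emb_matrix, sent).shape)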

    We apply the stochastic gradient descent (SGD) algorithm to optimize model training. In order to balance the performance and the training speed of the model, at the early stage we use a large learning rate so that the model can descend quickly along the gradient direction, and at the later stage a smaller learning rate is adopted so that the parameters change slowly and approach the optimum. The learning rate is decayed by a factor of 0.96 every 75 steps.
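
    The schedule below sketches this decay rule; the initial learning rate of 0.1 is an illustrative value, not one reported in the paper.

        def learning_rate(step, initial_lr=0.1, decay=0.96, decay_every=75):
            # larger rate early on, decayed by a factor of 0.96 every 75 steps
            return initial_lr * decay ** (step // decay_every)

        for step in (0, 75, 150, 750):
            print(step, learning_rate(step))   # 0.1, 0.096, 0.09216, ...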

    At the fine-tuning stage, we add the cache-like memory module into our WSD model. Note that before fine-tuning, we store the hidden states and sense disambiguation results of all training instances as key-value pairs in the cache-like memory; these key-value pairs are kept fixed during fine-tuning. The hyper-parameters of the cache-like memory module are shown in Table 3. To avoid slow training caused by a too small learning rate, we bound the learning rate with a threshold. In addition, we clip the gradients to deal with the exploding gradient problem.
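
    A minimal sketch of how the memory could be populated before fine-tuning: for every training instance, the pre-trained model's hidden state is stored as the key and its sense disambiguation result as the value, grouped by target word and kept fixed afterwards. Grouping by target word and the model interface are assumptions for illustration.

        import numpy as np
        from collections import defaultdict

        def build_memory(wsd_model, training_instances):
            """wsd_model(tokens, target_pos) -> (sense_distribution, hidden_state) as arrays."""
            memory = defaultdict(lambda: {"keys": [], "values": []})
            for tokens, target_pos, target_word in training_instances:
                probs, h = wsd_model(tokens, target_pos)
                memory[target_word]["keys"].append(np.asarray(h))        # semantic representation
                memory[target_word]["values"].append(np.asarray(probs))  # disambiguation result
            return {w: {"keys": np.stack(m["keys"]), "values": np.stack(m["values"])}
                    for w, m in memory.items()}

        rng = np.random.default_rng(2)
        dummy_model = lambda tokens, pos: (rng.random(5), rng.normal(size=128))
        mem = build_memory(dummy_model, [(["a", "felsic", "dyke"], 2, "dyke")])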

    Table 3 Hyper-parameter settings of the cache-like memory module

    2.1.3 Baselines

    We refer to our model as MEM-BiLSTM and compare it with the following baselines.

    (1) 100JHU(R)[45]. It exploits a rich set of features for WSD.

    (2) IMS+adapted CW[34]. It uses a feedforward neural network to incorporate word embeddings into WSD model.

    (3) BiLSTM[13]. It is a commonly-used WSD model, which is based on bi-directional LSTM.

    2.2 Experimental results

    2.2.1 Performance

    The results of different models measured in terms of F1 score are given in Table 4. Compared with the previous models, our reimplemented baseline achieves better or similar performance on the two datasets, respectively. This result demonstrates that our reimplemented baseline is competitive. Furthermore, when the baseline is equipped with our cache-like memory module, our WSD model achieves the best scores on SensEval2 and SensEval3 with varying degrees of improvement. Specifically, on the two datasets, our WSD model outperforms the reimplemented BiLSTM baseline by 0.4 and 0.3 in F1 score, respectively, which strongly indicates that adding the memory module helps the WSD model.

    Table 4 Results for SensEval 2 and SensEval 3 on the English lexical sample task

    2.2.2 Generality

    To verify the generality of our proposed model, we also train the models on different portions of the training corpora: 10%, 25%, 50%, 75% and 100%, and report the results in Table 5. We observe that as the amount of training data increases, the performance gap between the baseline and our model becomes larger. The underlying reason is that with more training data, our model is able to exploit more similar instances to refine WSD.

    Table 5 Results for SensEval 2 and SensEval 3 on the English lexical sample task

    2.3 Case study

    To analyze why our model outperforms the baseline, we compare the WSD results of different models. Figures 3-5 show three examples. We observe that, in comparison with BiLSTM, our proposed model is able to make correct predictions with the help of semantically related instances from the memory. Moreover, we provide the three most similar instances for the target words "argument", "activate" and "hearth" in the last three rows of Figs. 3-5, respectively.

    Instance: ... When it fell to Dukes to introduce the second stage of the Bill empowering the referendum, he was forced to address himself specifically to the bishops' arguments in their letter (text, Irish Times, 15 May 1986).
    Reference: argument%1:10:02::
    BiLSTM: argument%1:10:03::
    MEM-BiLSTM: argument%1:10:02::
    The most similar instance 1: ... This has some affinity with the Marxist position. In a published argument between Scholes and Hirsch, the former made the following statement, on the assumption that the conservative Hirsch would disagree with it ...
    The most similar instance 2: ... he accepted China's offer of a seat on the Basic Law Drafting Committee, helping to write Hong Kong SAR China's post-1997 mini-constitution, and was embroiled in more unsuccessful arguments for direct elections, opposed by mainland communists and Hong Kong conservatives ...
    The most similar instance 3: ... Although the banks will begin to present their arguments today, Mr Scrivener said: this court is not concerned with private rights ...

    Fig. 3 The first example

    Fig. 4 The second example

    3 Conclusions

    In this paper, we proposed a novel WSD model with a cache-like memory module. As an improvement over the conventional LSTM-based WSD model, our model incorporates a cache-like memory module composed of key-value pairs, where the keys denote the semantic representations of instances and the values are the corresponding sense disambiguation results. We first compute the dot-product similarities between the current hidden state and the stored keys in memory. Then, we summarize the memory values as a memory vector according to these similarities, and the induced memory vector is exploited to refine the WSD result of the current instance. Extensive experiments validate the effectiveness of our proposed model.

    In the future, we plan to design more effective architectures to better exploit the semantic dependencies between instances for WSD. Besides, how to introduce graph neural networks into WSD is also one of our focuses in future research.
