
    Tibetan Sentiment Classification Method Based on Semi-Supervised Recursive Autoencoders

Computers, Materials & Continua, 2019, No. 8

Xiaodong Yan, Wei Song, Xiaobing Zhao and Anti Wang

Abstract: We apply the semi-supervised recursive autoencoder (RAE) model to the sentiment classification task of Tibetan short text and obtain a good classification effect. The input of the semi-supervised RAE model is word vectors. We crawled a large amount of Tibetan text from the Internet, obtained Tibetan word vectors using Word2vec, and verified their validity through simple experiments. The value of the parameter α and the word vector dimension are important to the model's performance. The experimental results indicate that the model works best when α is 0.3 and the word vector dimension is 60. Our experiments also show the effectiveness of the semi-supervised RAE model for the Tibetan sentiment classification task and suggest the validity of the Tibetan word vectors we trained.

Keywords: Recursive autoencoders (RAE), sentiment classification, word vector.

    1 Introduction

With the rapid development of Web 2.0, users participate in creating website content. Consequently, a large number of valuable user comments on people, events, products and so on are generated on the Internet. By analyzing this information, potential users can mine people's views and opinions to support business decisions, political decisions, and so on. It is hard to process such massive amounts of data manually. How to help users quickly analyze and process these web texts automatically and extract useful emotional information by computer has become the focus of many researchers. Text sentiment analysis is the process of analyzing, processing, summarizing and reasoning about words, sentences and texts with emotional color. At present, research on sentiment classification of Chinese and English texts is relatively mature. However, for Tibetan information processing, which started late, the study of Tibetan sentiment tendencies lags behind. With the increasing amount of network information such as Tibetan web pages and Tibetan digital libraries, more and more Tibetan compatriots express their views and opinions in Tibetan on the Internet. The sentiment analysis of Tibetan texts has therefore become an urgent research issue. On the basis of analyzing the sentiment tendency of sentences, it is convenient to analyze the sentiment orientation of a text, and even to obtain the overall tendencies of massive information. Therefore, sentence-level sentiment classification has important research value and is also the research focus of this paper.

    2 Related work

Sentiment classification is one of the hot issues in natural language processing, and there has been much research on text sentiment classification at home and abroad. In general, methods can be divided into machine learning-based methods and sentiment dictionary-based methods. The basic idea of the machine learning approach is to estimate the dependence between the input and output of a system from known training samples, so that the most accurate prediction can be made for unknown outputs. In 2002, Pang et al. [Pang, Lee and Vaithyanathan (2002)] used common machine learning techniques to make propensity judgments and compared the effects of support vector machines (SVM), naive Bayes (NB), and maximum entropy, showing that SVM gives the best classification effect. Other work studied the classification of news texts, using the naive Bayes method and the maximum entropy method to divide news texts into positive and negative emotions, with word frequency and binary values as feature weights, and finally achieved a good classification effect, with the highest classification accuracy exceeding 90%. Methods based on a sentiment dictionary or knowledge system use an existing semantic dictionary to judge the semantic tendency of the sentiment words in a sentence and then, according to the syntactic structure and other information, indirectly obtain the semantic tendency of the sentence. Riloff et al. [Riloff and Shepherd (1997)] proposed a corpus-based approach to construct a sentiment dictionary for sentiment classification. Later, Riloff et al. [Riloff, Wiebe and Phillips (2005)] used the Bootstrapping algorithm, which took pronouns, verbs, adjectives and adverbs in the text as features and treated sentences differently according to their position in the paragraph, to realize objective and subjective classification of corpus data. Zhu et al. [Zhu, Min, Zhou et al. (2006)] manually constructed sets of positive and negative seed sentiment words and then used HowNet to calculate the semantic similarity between candidate words and the seed sentiment words to determine their emotional polarity.

In terms of sentiment classification of Tibetan texts, research at home and abroad is not yet mature, and relevant literature is very limited. One study used a Tibetan three-level segmentation system to segment Tibetan texts and perform part-of-speech tagging, used a hand-built Tibetan sentiment analysis vocabulary together with existing feature selection methods to extract emotional features, and used a similarity classification algorithm to classify the sentiment of Tibetan texts. Another work carried out sentiment analysis of Tibetan Weibo based on a combination of statistical and dictionary-based methods; its accuracy was significantly higher than that of TF-IDF-based Tibetan microblog sentiment analysis.

Based on the above related work, this paper applies the semi-supervised recursive autoencoder (RAE) model to the sentiment classification task of Tibetan short text. Through extensive training of the word vectors and selection of their dimension, we obtain a good classification result.

    3 Sentiment classification method based on semi-supervised RAE

    3.1 Tibetan word vector training

The input of the semi-supervised RAE Tibetan sentiment classification model is a sequence of word vectors of Tibetan text. There are two methods for initializing the word vectors. In the first method, we simply initialize the vector of each word x ∈ R^n to a value (sample) drawn from the Gaussian distribution x ~ N(0, δ²), and then collect the word vectors into a matrix L ∈ R^(n×|V|), where |V| is the size of the vocabulary. This initialization works well in an unsupervised neural network that can optimize these word vectors by capturing valid information in the training data. The second method of obtaining word vectors is through an unsupervised neural language model [Bengio, Ducharme, Vincent et al. (2003); Collobert and Weston (2008)]. When training word vectors with a neural language model, the grammatical and semantic information in the training corpus is captured by computing word co-occurrence statistics; this information is then transformed into a vector space, so that after the word vectors are obtained, the semantic similarity of two words in the training corpus can be predicted.
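As a concrete illustration of the first initialization method, the sketch below draws each word vector from N(0, δ²) and stores it as a column of the matrix L. The standard deviation, vocabulary size and helper name are assumptions chosen for illustration, not values taken from the paper.

```python
import numpy as np

n = 60              # word vector dimension (the value found best in Section 4.4)
vocab_size = 10000  # |V|; hypothetical vocabulary size
delta = 0.01        # standard deviation of the Gaussian; assumed value, not from the paper

# L in R^{n x |V|}: column j holds the vector of the j-th vocabulary word,
# each entry sampled from N(0, delta^2).
rng = np.random.default_rng(seed=0)
L = rng.normal(loc=0.0, scale=delta, size=(n, vocab_size))

def word_vector(word_index: int) -> np.ndarray:
    """Look up the n-dimensional vector of the word at the given vocabulary index."""
    return L[:, word_index]
```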

This paper uses the Word2vec tool to train a neural language model of Tibetan. Word2vec is a tool that Google developed and open-sourced in 2013 to represent words as real-valued vectors. Its main idea is to compute, by training on a large amount of text, context-related statistics between each word and the words that co-occur with it, and then to use these statistics to represent each word appearing in the text as a K-dimensional vector, where K is not very large. After obtaining the vectorized representation of words, we can compute the semantic similarity of texts by operating on the word vectors. Word vectors trained with Word2vec can be used for a lot of research in natural language processing, such as text clustering, text categorization, sentiment analysis, and so on. If we regard words as features, Word2vec expresses the features of the text in a K-dimensional vector space, and this representation with semantic information is a deeper feature representation. The corpus used to train the Tibetan Word2vec model includes the Tibetan version of Wikipedia, primary and secondary school textbooks, news, and Sina Weibo, totaling 253 MB of Tibetan text.
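A minimal sketch of this training step is shown below, assuming the gensim implementation of Word2vec as a stand-in for the original tool; the corpus file name, segmentation format and most hyperparameters are illustrative assumptions rather than the paper's actual settings.

```python
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

# One pre-segmented Tibetan sentence per line, tokens separated by spaces
# (hypothetical file built from the 253 MB crawled corpus).
corpus = LineSentence("tibetan_corpus_segmented.txt")

model = Word2Vec(
    sentences=corpus,
    vector_size=60,   # K; dimension found best in Section 4.4 (gensim >= 4.0; older versions call it `size`)
    window=5,         # context window size (assumed)
    min_count=5,      # ignore very rare tokens (assumed)
    workers=4,
)
model.save("tibetan_word2vec.model")

query = "<tibetan_token>"                          # placeholder: replace with a segmented Tibetan word in the vocabulary
vector = model.wv[query]                           # its 60-dimensional vector
neighbors = model.wv.most_similar(query, topn=10)  # ranked (word, cosine similarity) pairs, as in Figs. 1 and 2
```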

Word vector evaluation methods can be divided mainly into two types. The first is to apply the word vectors to an existing system and compare the system's results before and after they are added; the second is to evaluate the word vectors directly from a linguistic perspective, for example through word similarity and semantic offset. To test the effect and quality of the trained Tibetan word vectors, we first test the semantic similarity of some words; for example, for the input word (Qinghai Provincial People's Congress Standing Committee), we find which words in the training corpus are similar to it (see Fig. 1). The number on the right side of the figure measures the degree of similarity between each word and the input target word; its value lies in [0, 1], and the larger the value, the higher the similarity.

Figure 1: Tibetan word vector test (1)

We also search the training corpus for the words most similar to ???? (happiness) (see Fig. 2).

From the above test results, it can be seen that the similar candidate words computed with the Tibetan word vectors trained in this paper have a large, or at least a certain, degree of semantic similarity with the original query words. Therefore, we believe that the Tibetan word vectors trained in this paper have good quality. In this paper, the Tibetan word vectors trained by the Word2vec tool are used as the input of the semi-supervised RAE model. For the small number of words not covered by the Word2vec vocabulary, we use the first method above to initialize their vectors.

Figure 2: Tibetan word vector test (2)

    3.2 Semi-supervised RAE model for Tibetan sentiment classification

The Tibetan sentiment classification process based on the semi-supervised RAE model is shown in Fig. 3.

Figure 3: Semi-supervised RAE sentiment classification model

Given the Tibetan word vectors, the unsupervised RAE method can already obtain a distributed feature vector representation of a text sentence without a given text structure tree. In order to apply it to Tibetan sentiment classification, it needs to be extended to a semi-supervised RAE. The basic idea is to add a classifier on top of the RAE and supervise it with labeled sample data. To do this, we add a simple softmax layer to the root node of the tree structure representing the sentence for classification, as defined by formula (1), where d ∈ R^K is a K-dimensional multinomial distribution and K is the number of emotion labels. This paper focuses on the negative and positive categories, that is, K = 2.

The output of the softmax layer represents a conditional probability distribution, namely the probability that the current text belongs to each category, so that the category of the text can be predicted. The calculation of the cross-entropy error is shown in formula (2), where t is the target label distribution.
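In the semi-supervised RAE formulation of Socher et al. (2011), which this model follows, the softmax layer of formula (1) and the cross-entropy error of formula (2) can be written as below; the notation is reproduced from that formulation rather than from this paper's typeset equations.

```latex
% Eq. (1): label distribution predicted at a tree node with vector p
d(p;\theta) = \operatorname{softmax}\!\left(W^{\mathrm{label}}\, p\right), \qquad d \in \mathbb{R}^{K}

% Eq. (2): cross-entropy error against the target label distribution t
E_{cE}(p, t;\theta) = -\sum_{k=1}^{K} t_k \log d_k(p;\theta)
```

Here W^label ∈ R^(K×n) is the classifier weight matrix and K = 2 for the positive/negative setting of this paper.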

After adding the softmax layer, the training process of the semi-supervised RAE model needs to consider not only the reconstruction error of the parent nodes in the text sentence structure tree, but also the cross-entropy error of the softmax layer, so as to learn both the semantic and the sentiment classification information in the text. Fig. 4 shows the root-node RAE unit of the sentence structure tree.

Figure 4: Root node RAE unit

Therefore, the optimization objective based on the semi-supervised RAE method over the tagged training data set can be expressed as Eq. (3), where (x, t) represents a sample in the training corpus, that is, a (text sentence, label) pair, and the summed term denotes the error of a single sample.

The error of a text sentence is the sum of the reconstruction error and the cross-entropy error over all non-terminal nodes in the sentence tree structure, so it can be expressed as shown in formula (4), where s denotes a non-terminal node of the sentence tree structure.

For the root node, the reconstruction error and the cross-entropy error need to be considered at the same time, so the error term in formula (4) can be written as shown in formula (5), where α is a parameter that balances the weights of the reconstruction error and the cross-entropy error.
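Under the same Socher et al. (2011) formulation, Eqs. (3)-(5) can be reconstructed as follows, consistent with the description above: a regularized objective over the N labeled samples, a per-sentence error summed over the non-terminal nodes s of the sentence tree, and an α-weighted combination of the reconstruction and cross-entropy errors.

```latex
% Eq. (3): objective over the labeled training set (with L2 regularization)
J = \frac{1}{N} \sum_{(x,t)} E(x, t;\theta) + \frac{\lambda}{2}\,\lVert \theta \rVert^{2}

% Eq. (4): error of one sentence, summed over the non-terminal nodes s of its tree
E(x, t;\theta) = \sum_{s \in T(\mathrm{RAE}_{\theta}(x))} E\!\left([c_1; c_2]_s,\, p_s,\, t;\, \theta\right)

% Eq. (5): per-node error, weighting reconstruction against cross-entropy by alpha
E\!\left([c_1; c_2]_s,\, p_s,\, t;\, \theta\right)
  = \alpha\, E_{rec}\!\left([c_1; c_2]_s;\theta\right)
  + (1-\alpha)\, E_{cE}(p_s, t;\theta)
```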

When the value of α is adjusted, the change propagates back and affects the parameters of the RAE model as well as the vector representation of the text. In the subsequent experiments, we adjust this parameter to study its impact on the classification results.

    4 Experiment and result analysis

    4.1 Experimental data set

We crawled Tibetan microblogs and Tibetan comments from Sina Weibo with a web crawler and saved the crawled text corpus in txt format. After pre-processing, we manually tagged and verified these Tibetan texts to obtain a Tibetan emotional corpus. The labeling rules are as follows: positive text entries are marked with the label '+1', negative text entries with the label '-1', neutral text entries with the label '0', and useless text entries that were not removed during preprocessing with the label '2'. The final labeling result is shown in Tab. 1.

Table 1: Marked corpus statistical result

For a better comparison of experimental results, the data set of this experiment consists of all 3717 negative samples and 4000 samples randomly selected from the positive samples. From this data set, 400 positive and 400 negative samples were randomly selected as the test set, and the remaining samples were used as the training set.
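A minimal sketch of this sampling and splitting step is given below; the file names, one-entry-per-line format and random seed are assumptions for illustration, not details from the paper.

```python
import random

random.seed(0)

def load_entries(path):
    """Read one pre-processed text entry per line from a UTF-8 file (assumed corpus format)."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

negative = load_entries("tibetan_negative.txt")                       # all 3717 negative entries
positive = random.sample(load_entries("tibetan_positive.txt"), 4000)  # 4000 randomly chosen positive entries

def split(samples, test_size=400):
    samples = list(samples)
    random.shuffle(samples)
    return samples[test_size:], samples[:test_size]                   # (training part, test part)

neg_train, neg_test = split(negative)
pos_train, pos_test = split(positive)
train_set = neg_train + pos_train   # 3317 negative + 3600 positive training samples
test_set = neg_test + pos_test      # 400 negative + 400 positive test samples
```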

    4.2 Parameter setting

The semi-supervised RAE algorithm used in this paper is an open-source project on GitHub by Sanjeev Satheesh of Stanford University; the source code is a Java re-implementation based on Richard Socher's documentation and MATLAB source code. For the hardware platform, the server runs Ubuntu Linux 14.04 with 128 GB of memory and a 16-core processor.

We use the default values of the parameters in the softmax layer classifier, and we run two sets of comparison experiments to find the optimal value of the hyperparameter α, which balances the cross-entropy error and the reconstruction error, and of the dimension of the Word2vec word vectors. The model needs multiple iterations to reach its final optimized result, and the number of iterations required for convergence differs across parameter settings. After observing the number of iterations over multiple runs, we found that it never exceeded 1000, so we set the number of iterations to 1000 to ensure the model converges for every parameter setting.
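The two comparison experiments can be organized roughly as sketched below. The parameter grids are illustrative values chosen to cover the ranges discussed in Sections 4.3 and 4.4, and train_and_evaluate is a placeholder for a call into the external RAE implementation, not an interface it provides.

```python
MAX_ITERATIONS = 1000   # upper bound on training iterations, as determined above

# Illustrative grids covering the ranges discussed in Sections 4.3 and 4.4.
alpha_grid = [0.1, 0.2, 0.3, 0.4, 0.5]
dimension_grid = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]

def train_and_evaluate(alpha: float, dimension: int, max_iter: int = MAX_ITERATIONS) -> float:
    """Placeholder: train the semi-supervised RAE with these settings and return test accuracy.

    Stand-in for the external Java RAE implementation; returns NaN until wired up.
    """
    return float("nan")

# Experiment 1 (Section 4.3): vary alpha with the word vector dimension fixed at 50.
alpha_results = {a: train_and_evaluate(alpha=a, dimension=50) for a in alpha_grid}

# Experiment 2 (Section 4.4): vary the dimension with alpha fixed at its best value, 0.3.
dimension_results = {d: train_and_evaluate(alpha=0.3, dimension=d) for d in dimension_grid}
```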

    4.3 The value of α selection experiment and result analysis

The value of the parameter α determines how much attention the semi-supervised RAE model pays to the cross-entropy error versus the reconstruction error during training. When the value of α is larger, the model pays more attention to the reconstruction error, and the sentence distribution vector obtained by training contains more syntactic information and less emotional information; when the value of α is smaller, the opposite is true. Therefore, this paper studies the influence of the parameter α on the training process and finds the optimal parameter value by setting a series of different values of α across training runs. In addition, we fix the dimension of the word vector at a moderate value of 50 to avoid its impact on the results. The details of the experimental results are shown in Tab. 2.

Table 2: Experimental results with different α values

To observe and analyze more intuitively the influence of the value of α on the experimental results, Fig. 5 shows the trends of the negative polarity, positive polarity and overall classification accuracy under different values of α.

Figure 5: Trend of classification effect under different values of α

It can be found from Tab. 2 and Fig. 5 that when the value of α changes from 0.1 to 0.3, the classification effect of the model improves as α increases, and the whole model reaches its best when α is 0.3. When the value of α changes from 0.3 to 0.4, the negative polarity classification improves with the increase of α, but the positive polarity classification degrades sharply. When the value of α is greater than 0.4, both the negative and the positive polarity classification degrade greatly as α increases, indicating that the model over-represents the syntactic information of the text and cannot capture the emotional information of the corpus.

In the experiments of this paper, the whole model achieves its best effect when α is 0.3, whereas the optimal value of α is 0.2 in the comparative experiment of Pu et al. [Pu, Hou, Liu et al. (2017)]. We can therefore see that the optimal value of α in the semi-supervised RAE method is not the same for different text corpora of the same language; however, the best value is always small. Accordingly, more attention should be paid to the cross-entropy error of the softmax layer when training the model.

    4.4 Word vector dimension selection experiment and result analysis

When using the Word2vec tool to train Tibetan word vectors, the dimension (length) of the word vector can be set. The size of the vector dimension has a significant impact on both the accuracy of the semi-supervised RAE model and the training efficiency. If the word vector is too short, it cannot effectively contain the semantic information of a word; if it is too long, not only does the training data become sparse, but the subsequent model training also becomes inefficient, wasting time and computing resources. To find the best word vector dimension, we set up a group of experiments in which the word vector dimension differs in each training run while the parameter α is fixed at its optimal value of 0.3. The results are shown in Tab. 3.

Table 3: Word vector dimension comparison test result

Dimension  Polarity   Precision (%)  Recall (%)  F1 (%)   Accuracy (%)
50         Negative   83.91          79.50       81.64    82.13
           Positive   80.52          84.75       82.58
60         Negative   84.68          78.75       81.61    82.25
           Positive   80.14          85.75       82.85
70         Negative   83.96          78.50       81.13    81.75
           Positive   79.81          85.00       82.32
80         Negative   84.30          76.50       80.21    81.13
           Positive   78.49          85.75       81.96
90         Negative   84.53          76.50       80.31    81.25
           Positive   78.54          86.00       82.10
100        Negative   84.34          76.75       80.37    81.25
           Positive   78.67          85.75       82.06

To observe and analyze more intuitively the influence of the word vector dimension on the classification effect of the model, Fig. 6 shows the trends of the negative polarity, positive polarity and overall classification accuracy under different vector dimensions.

Figure 6: Trend of classification effect under different feature dimensions

From Tab. 3 and Fig. 6, we can see that when the word vector dimension is 10, the classification effect of the model is particularly poor; the reason may be that the vectors are too short to represent the text information well. When the word vector dimension increases from 10 to 20, the classification effect of the model improves greatly. From 20 to 60, the classification effect still improves as the dimension increases, but the growth is slow and its amplitude becomes smaller, and the overall effect of the model is best when the vector dimension is 60. When the vector dimension is larger than 60, the overall classification effect decreases and fluctuates slightly, which indicates that further increasing the word vector dimension not only fails to express the text information better, but also introduces noise into the model and degrades the classification effect.

In the work of Pu et al. [Pu, Hou, Liu et al. (2017)], the optimal word vector dimension is 110, and when the corpus volume is 10,000, the overall classification accuracy of the model reaches 86.2%, which is higher than the best classification result of this paper. In theory, because the word vectors used in this paper are obtained through training, the final classification effect should be higher than that of randomly initialized word vectors. It is very likely that the Tibetan sentiment corpus collected in this paper covers a wider range of domains, and the number of samples in some domains is insufficient, so that the model cannot learn the emotional characteristics of those domains well. Therefore, this comparison is not strictly scientific or rigorous. We hope that with the continuous development of informatization in Tibetan and other minority languages, relevant research institutions will launch evaluation platforms, so that evaluation and comparison can be carried out in the same corpus environment; this would better promote the progress and development of minority languages in the field of sentiment analysis.

    4.5 Comparison and analysis of experimental results

In this paper, SVM experiments based on manually extracted features, SVM Tibetan sentiment classification experiments based on algorithmically extracted features, SVM Tibetan sentiment classification experiments based on multi-feature fusion, and Tibetan sentiment classification based on the semi-supervised RAE model are conducted on the same dataset. The comparison of the results obtained by the four sets of experiments, each with its optimal parameters, is shown in Fig. 7:

Figure 7: Comparison of experimental results

It can be seen from Fig. 7 that the classification effect of the Tibetan sentiment classification model based on the semi-supervised RAE is better than that of the other models in this paper, with an overall classification accuracy of 82.25%. The reason may be that SVM is a statistical machine learning method that can only learn probability statistics over words, while the semi-supervised RAE model used in this paper produces a distributed vector representation of the text sentences. This vector contains not only the statistical distribution information of the feature words in the text, but also the contextual structure information of the sentences, which allows the model to understand the text better and thus achieve a better sentiment classification result.

    5 Conclusion

This paper applies the semi-supervised RAE model to the sentiment classification task of Tibetan short texts and achieves a good classification effect. The method is compared with SVM experiments based on manually extracted features, SVM Tibetan sentiment classification experiments based on algorithmically extracted features, and SVM Tibetan sentiment classification experiments based on multi-feature fusion. The results show that the proposed method is superior to the other three classification methods.

Acknowledgment: The work in this paper is supported by the National Natural Science Foundation of China project "Research on special video recognition based on deep learning and Markov logic network" (61503424).
