
    Fast Chinese syntactic parsing method based on conditional random fields


    HAN Lei(韓磊), LUO Sen-lin(羅森林), CHEN Qian-rou(陳倩柔), PAN Li-min(潘麗敏)

    (School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China)



A fast method for phrase structure grammar analysis is proposed based on conditional random fields (CRF). The method trains several CRF classifiers to recognize the phrase nodes at different levels, and connects the recognized phrase nodes bottom-up to construct the syntactic tree. On the basis of the Beijing forest studio Chinese tagged corpus, two experiments are designed to select the training parameters and verify the validity of the method. The results show that the method takes 78.98 ms to train on and 4.63 ms to parse a Chinese sentence of 17.9 words on average. The method is a new way to parse phrase structure grammar for Chinese, and has good generalization ability and fast speed.

    phrase structure grammar; syntactic tree; syntactic parsing; conditional random field

Chinese syntactic parsing is a significant component of natural language processing. With the large-scale development of the Chinese Treebank[1], the corpus-based techniques that have been successful for English have been applied extensively to Chinese. The two main approaches are traditional statistical parsing and deterministic parsing.

Traditional statistical approaches build models to obtain the unique tree with the highest PCFG probability (the Viterbi tree)[2-9]. These algorithms require calculating the PCFG probability of every possible tree, so the Viterbi tree can be obtained only at the expense of testing time. To reduce the testing time, deterministic parsing has emerged as an attractive alternative to probabilistic parsing, offering accuracy just below the state of the art in syntactic analysis of English while running in linear time. Yamada and Matsumoto[10] proposed a method for analyzing word-word dependencies in a deterministic bottom-up manner using support vector machines (SVM), and achieved over 90% word-word dependency accuracy. Sagae and Lavie[11] used a basic bottom-up shift-reduce algorithm but employed a classifier to determine parser actions instead of a grammar, and achieved R/P of 87.54%/87.61%. Cheng et al.[12] presented a deterministic dependency structure analyzer for Chinese which implements two algorithms (Yamada's and Nivre's) and two sorts of classifiers (SVM and maximum entropy models). Encouraging results have also been shown in applying deterministic models to Chinese parsing. Wang et al.[13] presented a classifier-based deterministic parser for Chinese constituency parsing. Their parser computes parse trees bottom-up in one pass, uses classifiers to make shift-reduce decisions, and achieves R/P of 88.3%/88.1%.

In this paper, a faster deterministic parsing method based on conditional random fields (CRF) is proposed to reduce the runtime of parsing. The proposed method uses CRF++[14] as the classifier training and testing software to label the phrase nodes at each syntactic level for each segment. The labeled nodes are then combined to construct the phrase structure syntactic tree.

    1 Method

Based on the part-of-speech (POS) tagging results, the method labels the phrase and sentence nodes (both defined as nodes) of each segment to construct a syntactic tree. The principle of the method is shown in Fig. 1.

    Fig. 1 Principle of the method

    1.1 Data processing

Data processing contains two parts, i.e. data training and data testing. The aim of data training is to convert the POS and syntax tagging results into the one-word-per-line format: its inputs are the POS and syntax tagging results, and its outputs are the nodes in the format of one word per line. Take the sentence “姚明說范甘迪是核心(Yao Ming said that Van Gundy is the core)。” as an example. The data training inputs are the POS and syntax tagging results, which are “0^姚明(Yao Ming)/nr 1^說(said)/v 2^范甘迪(Van Gundy)/nr 3^是(is)/v 4^核心(the core)/n 5^。/w” and “[dj [np 0^姚明/nr] [vp 1^說/v [dj [np 2^范甘迪/nr] [vp 3^是/v 4^核心/n]]] 5^。/w]”. The data training outputs are shown in Tab. 1, and the syntactic tree is shown in Fig. 2.

    Tab. 1 Format of one word per line for“姚明說范甘迪是核心”

    Fig. 2 Syntactic tree of “姚明說范甘迪是核心”

To explain the process of data training, the concept of syntactic level (hereinafter referred to as level) is introduced. The node closest to the segment is defined as being in the 1st level (1L) of the segment, and the next closest node as being in the 2nd level (2L). The further a node is from the segment, the higher its level. The root node of the syntactic tree is in the last level of the segment. It follows that different segments may have different numbers of levels. Take the segment “核心(the core)” in Fig. 2 as an example: the phrase node vp is in the 1st level, the dj that is not the root node is in the 2nd level, the vp further from the segment is in the 3rd level (3L), and the dj root node is in the 4th level (4L). In addition, the maximum level number among the segments in a sentence equals the level number of the sentence.
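The level extraction just described can be sketched in Python (an illustrative reconstruction, not the authors' code): the bracketed syntax tagging result is parsed into a nested tree, and each segment is then mapped to its node labels from the 1st level up to the root.

```python
import re

def parse_tree(s):
    """Parse a bracketed tree such as '[dj [np 0^X/nr] 5^。/w]' into
    nested (label, children) tuples; leaves stay as token strings."""
    tokens = re.findall(r'\[|\]|[^\s\[\]]+', s)
    pos = 0
    def node():
        nonlocal pos
        pos += 1                      # consume '['
        label = tokens[pos]; pos += 1
        children = []
        while tokens[pos] != ']':
            if tokens[pos] == '[':
                children.append(node())
            else:
                children.append(tokens[pos]); pos += 1
        pos += 1                      # consume ']'
        return (label, children)
    return node()

def segment_levels(tree):
    """Map each segment to its node labels, 1st level (closest) first."""
    levels = {}
    def walk(node, ancestors):
        label, children = node
        for child in children:
            if isinstance(child, tuple):
                walk(child, [label] + ancestors)
            else:
                levels[child] = [label] + ancestors
    walk(tree, [])
    return levels
```

For the example sentence, `segment_levels` assigns the segment 4^核心/n the labels [vp, dj, vp, dj], matching the 1L-4L description above.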

In Tab. 1, the first column shows the word segmentation result of the example sentence, the second column shows the POS tag of each segment, the third column contains the segment order (SO), i.e. the position of the segment in the sentence, and the remaining columns contain all the nodes in the syntactic tree, including phrase nodes and sentence nodes.

The data testing changes the POS tagging results into the format required for the subsequent recognition. Its inputs are the POS tagging results and its outputs are the POS in the same format as the first three columns of Tab. 1.

    1.2 Model training and testing

Model training trains the recognition models used to label each level. Its inputs are the tagged syntactic trees and the feature templates; its outputs are the recognition models for each level. The model for labeling the nodes in the 1st level is defined as the 1st model, the model for labeling the nodes in the 2nd level as the 2nd model, and so on. In the data set used for testing, the maximum level number of a sentence is 14, so at least 14 models need to be trained. Take the sentence in Tab. 1 as an example: it needs 4 models to label its levels. To train the 2nd model, the inputs are the first 5 columns in Tab. 1 and the output is the model for labeling the nodes in the 2nd level. Each model has its own feature template and training parameters.
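The column slicing described above can be sketched as follows (illustrative; `table` is a hypothetical list of Tab. 1-style rows, padded so every row has all node columns): the k-th model is trained on the first 3 + k columns, the last of which serves as its label column.

```python
def training_rows(table, level):
    """Rows for training the level-th model: segment, POS and segment
    order as base columns, the node columns of levels 1..level-1 as
    features, and the level-th node column (the last one) as the label."""
    return [row[:3 + level] for row in table]
```

For example, training the 2nd model on the sentence of Tab. 1 uses exactly the first 5 columns of each row.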

Model testing labels the nodes of each segment from the 1st level to the highest level. Its inputs are the POS tagging results and its outputs are the syntactic tree in the format of one word per line. Take the sentence in Tab. 1 as an example. The inputs are the first three columns and the 1st, 2nd, 3rd and 4th models. Firstly, the method uses the first three columns and the 1st model to label the nodes (np, vp, np, vp, vp, dj) in the 1st level. Secondly, it uses the first three columns, the labeled nodes in the 1st level and the 2nd model to label the nodes in the 2nd level. Thirdly, it uses the first three columns, the labeled nodes in the 1st and 2nd levels and the 3rd model to label the nodes in the 3rd level. Finally, based on all the labeled nodes and the 4th model, the output of the model testing is obtained as shown in Tab. 1.
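The level-by-level decoding can be sketched with stub classifiers standing in for the trained CRF models (CRF++ itself is a command-line tool; the callables here are placeholders for illustration only):

```python
def cascade_label(segments, models):
    """Apply the 1st..n-th models in order; the k-th model sees each
    segment's base columns plus all labels predicted for levels 1..k-1."""
    columns = []          # columns[k][i]: label of segment i at level k+1
    for model in models:
        rows = [tuple(seg) + tuple(col[i] for col in columns)
                for i, seg in enumerate(segments)]
        columns.append(model(rows))
    return columns
```

Each model thus receives strictly richer input rows than the previous one, mirroring how the 2nd model reads the first three columns plus the 1st-level labels.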

    1.3 Constructing syntactic tree

Constructing the syntactic tree is the inverse of data processing. Take the sentence in Tab. 1 as an example. Firstly, the nodes from each segment up to the node in the highest level are connected as a single chain. Secondly, the identical dj root nodes are combined. Thirdly, the identical child nodes of dj are combined. The same combination is then applied to the children of every node down to the POS level. The process is shown in Fig. 3.
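The top-down combination can be sketched as follows (an illustrative reading of the steps above, under the assumption that adjacent segments sharing the same label at the same depth belong to one node): each segment's chain is read from the root downward, and runs of equal adjacent labels are merged into a single node.

```python
def build_tree(paths):
    """paths[i] = (labels from root down to the 1st level, leaf token),
    one entry per segment, in sentence order."""
    def build(items, depth):
        children, i = [], 0
        while i < len(items):
            labels, leaf = items[i]
            if depth == len(labels):       # chain exhausted: attach the leaf
                children.append(leaf); i += 1
            else:
                j = i                      # extend run of equal labels
                while (j < len(items) and len(items[j][0]) > depth
                       and items[j][0][depth] == labels[depth]):
                    j += 1
                children.append((labels[depth], build(items[i:j], depth + 1)))
                i = j
        return children
    return (paths[0][0][0], build(paths, 1))
```

Feeding it the six chains of the example sentence reproduces the tree of Fig. 2.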

    Fig. 3 Process of constructing syntactic tree

    2 Experiment

Two experiments are designed to choose the CRF++ training parameters and to verify the efficiency, stability and precision of the method: one is parameter selecting and the other is method verifying. To choose the training parameters of the 14 models, parameter selecting applies the grid method to find the best c and f. The value of c ranges from 0.2 to 6 with a step length of 0.2, and the value of f ranges from 1 to 5 with a step length of 1. Method verifying uses the parameters with the highest node-labeling precision at each level to train the models, and the resulting syntactic trees are verified in both closed and open testing using 10-fold cross-validation. The training and testing times are recorded at the same time.
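The grid above covers 30 × 5 = 150 (c, f) pairs per model. A minimal sketch, with a hypothetical `evaluate(c, f)` that trains and tests one model and returns its labeling precision (in practice this would shell out to CRF++):

```python
def grid_search(evaluate):
    """Return (precision, c, f) maximizing labeling precision over the
    grid c = 0.2, 0.4, ..., 6.0 and f = 1, ..., 5."""
    best = None
    for step in range(1, 31):
        c = round(0.2 * step, 1)
        for f in range(1, 6):
            p = evaluate(c, f)
            if best is None or p > best[0]:
                best = (p, c, f)
    return best
```

Integer steps are used for c to avoid accumulating floating-point error over the 0.2 increments.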

    2.1 Data set and hardware

The data set is 10 000 Chinese tagged syntactic sentences in the Beijing forest studio Chinese tagged corpus (BFS-CTC)[15-16]. The number of words in a sentence is 17.9 on average; the maximum number of words in a sentence is 53, the minimum is 3, and the maximum syntactic level number in the BFS-CTC is 14.

The main hardware most relevant to the computing speed is as follows.

    ① CPU: Intel?Xeon?X5650 @ 2.67 GHz. (2 CPUs)

② Memory: 8 GB.

③ System type: Windows Server 2008 R2 Enterprise SP1, 64 bit.

2.2 Evaluating

The labeling and parsing results are evaluated with the precision (P), recall (R) and F values, calculated by using the PARSEVAL measures[17].
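The P, R and F values reported below follow the PARSEVAL bracket-matching scheme[17]. A sketch of the computation over (label, children) trees (illustrative, not the paper's scorer): a constituent is correct when its label and word span both match the gold tree.

```python
from collections import Counter

def brackets(tree):
    """Collect labeled constituent spans (label, start, end) from a
    (label, children) tree; leaves are width-1 token strings."""
    spans = []
    def walk(node, start):
        label, children = node
        pos = start
        for child in children:
            pos = walk(child, pos) if isinstance(child, tuple) else pos + 1
        spans.append((label, start, pos))
        return pos
    walk(tree, 0)
    return spans

def parseval(gold, test):
    """Bracket precision, recall and F-measure of test against gold."""
    g, t = Counter(brackets(gold)), Counter(brackets(test))
    correct = sum((g & t).values())        # multiset intersection
    p = correct / sum(t.values())
    r = correct / sum(g.values())
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```

Counters rather than sets are used so that duplicate identical brackets (same label, same span) are not double-credited.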

    2.3 Results

    2.3.1 Parameters selecting

Take the 1st model training as an example to show the process of parameter selection. A total of 18 features are used for the 1st model training, including the POS, the POS of the former segment, etc., as shown in Tab. 2. Based on these features, partial results are shown in Fig. 4; the best result is 90.47% for node labeling in the 1st level when c equals 0.4 and f equals 1. The parameter choosing process for the other 13 models is the same.

    2.3.2 Method verifying

Method verifying contains a closed test and an open test. In the closed test, the P, R and F values of the 10 experimental groups are shown in Fig. 5; the average P, R and F values are 76.32%, 81.25% and 0.787 1 respectively. In the open test, the results are shown in Fig. 6; the average P, R and F values are 73.27%, 78.25% and 0.756 8.

Tab. 2 Features used in training the first-level model

Fig. 4 Partial results of parameter selection

    Fig.5 P, R and F in closed test

    Fig. 6 P, R and F in open test

Under the same conditions, recent results of the PCFGs[9] are shown in Tab. 4. The time in the table includes rule collection and testing for one experimental group. In Tab. 4, the P, R and F values are calculated using the PARSEVAL measures[17]. The time for testing a sentence with PCFG, L-PCFG, S-PCFG and SL-PCFG is 1 847.1 ms, 1 425.9 ms, 205.8 ms and 82 ms on the closed set, and 1 850.2 ms, 1 412.2 ms, 178.3 ms and 68.1 ms on the open set. In comparison with previous work, the runtime of the proposed method is very competitive: it is 22 times faster than the PCFGs and gives almost equal results on all three measures. In particular, although SL-PCFG is the fastest in Tab. 4 and its time almost equals the total time of the proposed method in the closed test, the proposed method gives significantly better results in the open test, as shown in Tab. 4.

Tab. 3 Time of training and testing on the closed and open sets (ms)

    Tab.4 Results of PCFGs

Some other related previous works are shown in Tab. 5. These models used the Penn Chinese Treebank. The average runtime of Levy & Manning's model, Bikel's model and the stacked classifier model to analyze a sentence in the Penn Chinese Treebank is 117.7 ms, 776.6 ms and 64.1 ms respectively. The runtime is calculated by dividing the time reported in Ref. [13] by the number of sentences used in the experiment. Because of the differences in data sets, these results cannot be compared directly with the proposed method, but they still provide a useful reference. They show that previous parsers were not fast enough until machine learning algorithms were applied. The fastest model in Tab. 5 costs 64.1 ms to analyze a sentence while the proposed method costs 4.63 ms, although the three measures of the proposed method are much lower.

    Tab. 5 Previous related works

In a word, the proposed method is fast in testing a sentence and stable in both the closed and open tests. In addition, the PCFGs and the previous works in Tab. 5 may fail to produce a syntactic structure, while the proposed method has no such failure condition. However, the P of the proposed method is not high, possibly because the nodes and the number of syntactic levels predicted by the classifiers cannot all be correct.

    3 Conclusion

A fast method based on conditional random fields is proposed for syntactic parsing. To construct a syntactic tree, the method first recognizes the nodes bottom-up, from the node nearest each segment to the root node; it then combines the identical predicted nodes top-down, from the root node back to the segments. Based on the BFS-CTC, two experiments are designed to illustrate the process of choosing training parameters and to verify the validity of the method. In addition, the results of the method are compared with those of the PCFGs. The results show that the method is faster in both training and testing. The method is a new way to perform syntactic analysis; owing to the generalization ability of machine learning, it can be used in applications that do not need high precision but require fast speed.

[1] Xue N, Xia F, Chiou F, et al. The Penn Chinese TreeBank: phrase structure annotation of a large corpus [J]. Natural Language Engineering, 2005, 11(2): 207-238.

    [2] Bikel D M, Chiang D. Two statistical parsing models applied to the Chinese Treebank[C]∥Proceedings of the second workshop on Chinese language processing: held in conjunction with the 38th Annual Meeting of the Association for Computational Linguistics-Volume 12. Stroudsburg, PA, USA: Association for Computational Linguistics, 2000: 1-6.

[3] Bikel D M. On the parameter space of generative lexicalized statistical parsing models [D]. Philadelphia: University of Pennsylvania, 2004.

    [4] Chiang D, Bikel D M. Recovering latent information in treebanks[C]∥Proceedings of the 19th international conference on Computational linguistics-Volume 1, Stroudsbury, PA, USA: Association for Computational Linguistics, 2002: 1-7.

    [5] Levy R, Manning C. Is it harder to parse Chinese, or the Chinese Treebank?[C]∥Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1. Stroudsburg, PA, USA: Association for Computational Linguistics, 2003: 439-446.

[6] Xiong D, Li S, Liu Q, et al. Parsing the Penn Chinese Treebank with semantic knowledge[M]∥Natural Language Processing-IJCNLP 2005. Berlin, Heidelberg: Springer, 2005: 70-81.

    [7] Jiang Zhengping. Statistical Chinese parsing [D]. Singapore: National University of Singapore, 2004.

    [8] Mi H T, Xiong D Y, Liu Q. Research on strategies for integrating Chinese lexical analysis and parsing[J]. Journal of Chinese Information Processing, 2008, 22(2): 10-17. (in Chinese)

    [9] Chen Gong, Luo Senlin, Chen Kaijiang, et al. Method for layered Chinese parsing based on subsidiary context and lexical information [J]. Journal of Chinese Information Processing, 2012, 26(01): 9-15. (in Chinese)

    [10] Yamada H, Matsumoto Y. Statistical dependency analysis with support vector machines [J]. Machine Learning, 1999, 34(1-3): 151-175.

    [11] Sagae K, Lavie A. A classifier-based parser with linear run-time complexity [C]∥Parsing 05 Proceedings of the Ninth International Workshop on Parsing Technology. Stroudsburg, PA, USA: Association for Computational Linguistics, 2005: 125-132.

    [12] Cheng Y, Asahara M, Matsumoto Y. Machine learning-based dependency analyzer for Chinese [J]. Journal of Chinese Language and Computing, 2005, 15(1): 13-24.

    [13] Wang M, Sagae K, Mitamura T. A fast, accurate deterministic parser for Chinese [C]∥Proceeding ACL-44 Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2006: 425-432.

[14] Lafferty J, McCallum A, Pereira F. Conditional random fields: probabilistic models for segmenting and labeling sequence data [C]∥Proceedings of the Eighteenth International Conference on Machine Learning. San Francisco, CA, USA: Morgan Kaufmann, 2001.

    [15] Luo Senlin, Liu Yingying, Feng Yang, et al. Method of building BFS-CTC: a Chinese tagged corpus of sentential semantic structure [J]. Transactions of Beijing Institute of Technology, 2012, 32(03): 311-315. (in Chinese)

    [16] Liu Yingying, Luo Senlin, Feng Yang, et al. BFS-CTC: a Chinese corpus of sentential semantic structure [J]. Journal of Chinese Information Processing, 2013, (27): 72-80. (in Chinese)

    [17] Charniak E. Statistical parsing with a context-free grammar and word statistics [C]∥the Fourteenth National Conference on Artificial Intelligence and Ninth Conference on Innovative Applications of Artificial Intelligence, Providence, Rhode Island, 1997.

    (Edited by Cai Jianying)

    10.15918/j.jbit1004-0579.201524.0414

TP 391.1 Document code: A Article ID: 1004-0579(2015)04-0519-07

Received 2014-05-03

    Supported by the Science and Technology Innovation Plan of Beijing Institute of Technology (2013)

    E-mail: panlimin_bit@126.com
