
    Density peaks clustering based integrate framework for multi-document summarization


    Original article


    Baoyan Wanga,Jian Zhangb,e,*,Yi Liuc,d,Yuexian Zoua

    aADSPLAB,School of ECE,Peking University,Shenzhen,518055,China

    bShenzhen Raisound Technologies,Co.,Ltd,China

    cPKU Shenzhen Institute,China

    dPKU-HKUST Shenzhen-Hong Kong Institute,China

    eSchool of Computer Science and Network Security Dongguan University of Technology,China

ARTICLE INFO

    Article history:

    Received 14 October 2016

    Accepted 25 December 2016

    Available online 20 February 2017

Keywords:
Multi-document summarization
Integrated score framework
Density peaks clustering
Sentence ranking

We present a novel unsupervised integrated-score framework that generates generic extractive multi-document summaries by ranking sentences with a dynamic programming (DP) strategy. Whereas cluster-based methods proposed by other researchers tend to ignore the informativeness of words when generating summaries, our framework comprehensively considers the relevance, diversity, informativeness, and length constraint of sentences. We apply Density Peaks Clustering (DPC) to obtain relevance and diversity scores of sentences simultaneously. Our framework produces the best performance on DUC2004, with a ROUGE-1 score of 0.396, a ROUGE-2 score of 0.094, and a ROUGE-SU4 score of 0.143, outperforming a series of popular baselines such as DUC Best, FGB [7], and BSTM [10].

© 2017 Production and hosting by Elsevier B.V. on behalf of Chongqing University of Technology. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

1. Introduction

With the explosive growth of information on the Internet, consumers are flooded with all kinds of electronic documents, e.g. news, emails, tweets, and blogs. Now more than ever, there is an urgent demand for multi-document summarization (MDS), which aims at generating a concise and informative version of a large collection of documents and thereby helps consumers grasp the comprehensive information of the original documents quickly. Most existing studies are extractive methods, which focus on extracting salient sentences directly from the given materials without any modification and simply combining them to form a summary for the multi-document set. In this article, we study generic extractive summarization of multiple documents. Nowadays, an effective summarization method properly considers four important issues [1,2]:

· Relevance: a good summary should relate as closely as possible to the primary themes of the given documents.

· Diversity: a good summary should contain as little redundancy as possible.

· Informativeness: the sentences of a good summary should convey as much information as possible.

· Length constraint: the summary should be extracted under the limitation of the length.

Extractive summarization methods fall into two categories: supervised methods that rely on provided document-summary pairs, and unsupervised ones based upon properties derived from document clusters. Supervised methods treat multi-document summarization as a classification/regression problem [3]. They require a huge amount of annotated data, which is costly and time-consuming to obtain. Unsupervised approaches, by contrast, are very enticing and tend to score sentences based on semantic groupings extracted from the original documents. Researchers often select linguistic and statistical features to estimate the importance of the original sentences and then rank them.

Inspired by the success of cluster-based methods, especially the density peaks clustering (DPC) algorithm in bioinformatics, bibliometrics, and pattern recognition [4], in this article we propose a novel method that ranks sentences with DPC and extracts those with higher relevance, more informativeness, and better diversity under a length limitation. First, thanks to DPC, it is not necessary to specify the number of clusters in advance or to run a post-processing step to remove redundancy. Second, we put forward an integrated score framework to rank sentences and employ a dynamic programming solution to select salient sentences.

This article is organized as follows: Section 2 describes related work and our motivation in detail. Section 3 presents our proposed multi-document summarization framework and the summary generation process based on dynamic programming. Sections 4 and 5 give the evaluation of the algorithm on the benchmark dataset DUC2004 for the multi-document summarization task. We then conclude at the end of this article and give some directions for future research.

2. Related work

Various extractive multi-document summarization methods have been proposed. For supervised methods, different models have been trained for the task, such as the hidden Markov model, conditional random fields, and REGSUM [5]. Sparse coding [2] was introduced into document summarization owing to its usefulness in image processing. These supervised methods rely on algorithms that require a large amount of labeled data as a precondition. The annotated data is chiefly available for documents that closely match the trained summarization model. Therefore, the trained model may fail to generate a satisfactory summary when the documents do not parallel the training data. Furthermore, when consumers change the aim of summarization or the characteristics of the documents, the training data must be reconstructed and the model retrained.

There are also numerous unsupervised extraction-based summarization methods in the literature. Most of them calculate saliency scores for the sentences of the original documents, rank the sentences by those scores, and use the top-scoring sentences to generate the final summary. Since clustering is the most essential unsupervised partitioning method, it is natural to apply clustering algorithms to multi-document summarization. Cluster-based methods tend to group sentences and then rank them by their saliency scores. Many methods combine other algorithms with clustering to rank sentences. Wan et al. [6] clustered sentences first, applied the HITS algorithm with clusters as hubs and sentences as authorities, and then ranked and selected salient sentences by the resulting authority scores. Wang et al. [7] cast cluster-based summarization as minimizing the Kullback-Leibler divergence between the original documents and the model-reconstructed terms. Cai et al. [8] ranked and clustered sentences simultaneously so that the two tasks enhanced each other. Other typical methods include graph-based ranking, LSA-based, NMF-based, submodular-function-based, and LDA-based methods. Wang et al. [9] used symmetric non-negative matrix factorization (SNMF) to softly cluster the sentences of the documents into groups and selected salient sentences from each cluster to generate the summary. Wang et al. [10] used a generative model and provided an efficient way to model the Bayesian probability of selecting salient sentences given themes. Wang et al. [11] combined summarization results from different single-summarization systems. Besides, some papers considered reducing redundancy in the summary, e.g. MMR [12]. To eliminate redundancy among sentences, some systems select the most important sentences first, calculate the similarity between the previously selected sentences and the next candidate, and add the candidate to the summary only if it contains sufficient new information.

We follow the idea of cluster-based methods in this article. Different from previous work, we propose an integrated weighted score framework that orders sentences by their saliency scores and removes redundancy from the summary. We also use a dynamic programming solution for optimal salient sentence selection.

3. Proposed method

In this section, we discuss the outline of our proposed method, illustrated in Fig. 1. We show a novel way of handling the multi-document summarization task using the DPC algorithm. All documents are first represented by a set of sentences as the raw input of the framework. After the corpus is preprocessed, DPC is employed to obtain relevance and diversity scores of the sentences simultaneously. Meanwhile, the number of effective words is used to obtain informativeness scores. Moreover, a length constraint ensures that the extracted sentences have a proper length. In the end, we use an integrated scoring framework to rank sentences and generate the summary with a dynamic programming algorithm. The DPC-based summarization method mainly includes the following steps:

3.1. Pre-processing

Before applying our method to the text data, a preprocessing module is indispensable. Given a corpus of English documents C_corpus = {d_1, d_2, …, d_i, …, d_cor}, where d_i denotes the i-th document in C_corpus and the documents cover the same or similar topics, we split the corpus into individual sentences S = {s_1, s_2, …, s_i, …, s_sen}, where s_i denotes the i-th sentence in C_corpus. We then use a predefined stop-word list to remove all stop words and Porter's stemming algorithm to stem the remaining words.
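The pre-processing pipeline can be sketched as follows. This is a minimal illustration, not the authors' code: the stop-word set is a tiny stand-in for a full list, and `crude_stem` only roughly approximates Porter's algorithm.

```python
import re

# Tiny illustrative stop-word set (a real pipeline would use a full list).
STOP_WORDS = {"the", "a", "an", "of", "and", "is", "are", "in", "to", "for"}

def split_sentences(document: str) -> list[str]:
    """Naive sentence splitter on terminal punctuation."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]

def crude_stem(word: str) -> str:
    """Very rough stand-in for Porter stemming: strip common suffixes."""
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(documents: list[str]) -> list[list[str]]:
    """Split documents into sentences, remove stop words, stem the rest."""
    sentences = [s for d in documents for s in split_sentences(d)]
    return [
        [crude_stem(w) for w in re.findall(r"[a-z]+", s.lower()) if w not in STOP_WORDS]
        for s in sentences
    ]
```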

3.2. Sentence estimation factors

3.2.1. Relevance score

Fig. 1. The outline of our proposed framework.

In this section, we introduce a relevance score to measure the extent to which a sentence is relevant to the remaining sentences in the documents. One of the underlying assumptions of DPC is that cluster centers are characterized by a higher density than their neighbors. Inspired by this assumption, we assume that a sentence is more relevant and more representative when it has a higher density, i.e. when it has more similar sentences. Since the input of the DPC algorithm is a similarity matrix over sentences, the sentences are first represented in a bag-of-words vector space model, and the cosine similarity formula is then applied to compute the similarity between sentences. Terms are weighted with a binary scheme, in which the term weight W_ij is set to 1 if term t_j appears at least once in the sentence, because terms are repeated far less often within a sentence than within a document. Thus we define the relevance score SC_rele(i) for each sentence s_i as follows:
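The equation itself did not survive extraction; from the surrounding definitions and the density quantity of DPC [4], a plausible reconstruction (an assumption, not the verbatim original) counts the sentences whose similarity to s_i exceeds the threshold ω:

```latex
SC_{rele}(i) = \frac{1}{K} \sum_{j=1,\; j \neq i}^{K} \chi\left( Sim_{ij} - \omega \right),
\qquad
\chi(x) = \begin{cases} 1, & x > 0 \\ 0, & x \le 0 \end{cases}
```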

where Sim_ij represents the cosine similarity between the i-th and j-th sentences, K denotes the total number of sentences in the documents, and T denotes the total number of terms in the documents. ω represents the predefined density threshold. SC_rele(i) should be normalized in order to fit the comprehensive scoring model.

The density threshold ω is determined following the study [4], so as to exclude the sentences that hold low similarity values with all the others.
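As a concrete sketch of this step (the normalization by the maximum raw density is our assumption, and the value of `omega` and the token sets are illustrative), binary bag-of-words cosine similarity and a density-style relevance score can be computed as:

```python
import math

def binary_cosine(tokens_a: set[str], tokens_b: set[str]) -> float:
    """Cosine similarity under the binary term-weighting scheme (W_ij in {0, 1})."""
    if not tokens_a or not tokens_b:
        return 0.0
    return len(tokens_a & tokens_b) / math.sqrt(len(tokens_a) * len(tokens_b))

def relevance_scores(sentences: list[set[str]], omega: float = 0.2) -> list[float]:
    """Density-style relevance: count of other sentences with similarity > omega,
    scaled to [0, 1] by the maximum raw count (the scaling is our assumption)."""
    k = len(sentences)
    raw = [
        sum(1 for j in range(k) if j != i and binary_cosine(sentences[i], sentences[j]) > omega)
        for i in range(k)
    ]
    top = max(raw) or 1  # avoid division by zero when all counts are 0
    return [r / top for r in raw]
```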

3.2.2. Diversity score

In this section, a diversity score is presented, on the grounds that a good summary should not include analogous sentences. A document set usually contains one core topic and some subtopics. In addition to the most evident topic, it is also necessary to capture the sub-topics so as to better understand the whole corpus. In other words, the sentences of the summary should overlap as little as possible, so as to eliminate redundancy. Maximal Marginal Relevance (MMR), one of the typical redundancy-reducing methods, uses a greedy approach to sentence selection by combining criteria of query relevance and novelty of information. Another hypothesis of DPC is that cluster centers are characterized by a relatively large distance from points with higher densities, which ensures that similar sentences receive very different scores. Therefore, by comparison with all other sentences of the corpus, sentences with higher scores can be extracted, which also guarantees diversity globally. The diversity score SC_div(i) is defined by the following function.
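The defining equation did not survive extraction; following the distance-to-higher-density quantity of DPC [4], with similarity converted to a distance as 1 - Sim_ij, a plausible reconstruction (an assumption) is:

```latex
SC_{div}(i) = \min_{j:\; SC_{rele}(j) > SC_{rele}(i)} \left( 1 - Sim_{ij} \right)
```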

Note that the diversity score of the sentence with the highest density is conventionally assigned the value 1.

3.2.3. Informativeness score

The relevance and diversity scores measure relationships between sentences. In this section, informative content words are employed to calculate the internal informativeness of sentences. Informative content words are non-stop words whose parts of speech are nouns, verbs, or adjectives.

It is also necessary to normalize the informativeness score as follows:
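Both the informativeness score and its normalization were lost in extraction. If n_i denotes the number of informative content words in sentence s_i, a plausible reconstruction (an assumption) is a max-normalized count:

```latex
SC_{info}(i) = \frac{n_i}{\max_{j} n_j}
```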

3.2.4. Length constraint

The longer a sentence is, the more informativeness it carries, which causes longish sentences to be preferentially extracted. However, the total number of words in the summary is usually limited: the longer the selected sentences are, the fewer of them can be chosen. Therefore, it is requisite to impose a length constraint. The sentence lengths l_i range over a large scope, so we introduce a smoothing method to handle the problem. Taking the logarithm is a widely used smoothing approach. Thus the length constraint is defined as follows in (7).
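Equation (7) itself did not survive extraction; given the stated logarithmic smoothing over the sentence length l_i, a plausible reconstruction (an assumption) is:

```latex
SC_{len}(i) = \log\left( l_i \right)
```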

As with the previous scores, it needs to be normalized before use.

3.3. Integrated score framework

The ultimate goal of our method is to select sentences with higher relevance, more informativeness, and better diversity under the limitation of length. We define a function that comprehensively considers the above purposes as follows:

In order to calculate concisely and conveniently, the scoring framework is then rewritten as:
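Both forms of the scoring function were lost in extraction. A plausible reconstruction of the simplified form, as a weighted combination of the normalized scores with the length term entering as a penalty (this exact form, and the sign of the length term, are assumptions), is:

```latex
Score(s_i) = \alpha \, SC_{rele}(i) + \beta \, SC_{div}(i) + \gamma \, SC_{info}(i) - SC_{len}(i)
```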

Note that in order to determine how to tune the parameters α, β, and γ of the integrated score framework, we carried out a set of experiments on the development dataset. The values of α, β, and γ were tuned by varying each from 0 to 1.5, and we chose the values with which the method performed best.

3.4. Summary generation process

The summary generation is regarded as a 0-1 knapsack problem:
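The formulation itself was lost in extraction; the standard 0-1 knapsack statement implied by the text, with L the summary length budget and x_i the selection indicator for sentence s_i (our notation), would be:

```latex
\max_{x} \sum_{i=1}^{K} x_i \, Score(s_i)
\quad \text{s.t.} \quad \sum_{i=1}^{K} x_i \, l_i \le L,
\qquad x_i \in \{0, 1\}
```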

The 0-1 knapsack problem is NP-hard. To alleviate this, we use a dynamic programming solution to select sentences until the expected summary length is satisfied, as shown below.

where S[i][l] stands for the highest score of a summary that contains only sentences from the set {s_1, s_2, …, s_i} under the limit of exactly length l.
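The recurrence itself was lost in extraction; under the definition of S[i][l] above, the standard 0-1 knapsack recurrence S[i][l] = max(S[i-1][l], S[i-1][l - l_i] + Score(i)) is the natural fit. A sketch under that assumption (the scores and lengths in the usage below are illustrative):

```python
def select_sentences(scores: list[float], lengths: list[int], budget: int) -> list[int]:
    """0-1 knapsack by dynamic programming: pick the subset of sentence indices
    maximizing total score with total length <= budget.
    S[i][l] = best score achievable using the first i sentences within length l."""
    n = len(scores)
    S = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for l in range(budget + 1):
            S[i][l] = S[i - 1][l]  # skip sentence i
            if lengths[i - 1] <= l:  # or take it, if it fits
                S[i][l] = max(S[i][l], S[i - 1][l - lengths[i - 1]] + scores[i - 1])
    # Backtrack to recover the chosen sentence indices.
    chosen, l = [], budget
    for i in range(n, 0, -1):
        if S[i][l] != S[i - 1][l]:
            chosen.append(i - 1)
            l -= lengths[i - 1]
    return sorted(chosen)
```

For example, with scores [3.0, 4.0, 5.0], lengths [10, 20, 15], and a budget of 30 words, the best subset is sentences 0 and 2 (total score 8.0, total length 25).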

4. Experimental setup

4.1. Datasets and evaluation metrics

We evaluate our approach on the open benchmark datasets DUC2004 and DUC2007 from the Document Understanding Conference (DUC) summarization tasks. Table 1 gives a brief description of the datasets. For each document set, four human-generated summaries, in which every sentence is either selected in its entirety or not at all, are provided as the ground truth for evaluation.

DUC2007 is used as our development set to investigate how α, β, and γ relate to the integrated score framework. The ROUGE version 1.5.5 toolkit [13], widely used in automatic document summarization research, is applied to evaluate the performance of our summarization method in the experiments. Among the evaluation methods implemented in ROUGE, ROUGE-1 focuses on the occurrence of the same words in the generated and reference summaries, while ROUGE-2 and ROUGE-SU4 concern themselves more with the readability of the generated summary. We report the mean recall over all topics for these three metrics.
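To make the ROUGE-1 recall measure concrete, here is a simplified unigram-recall sketch. It is our illustration, not the official toolkit: ROUGE-1.5.5 additionally handles stemming, stop-word options, and multiple references.

```python
from collections import Counter

def rouge1_recall(candidate: str, reference: str) -> float:
    """Unigram recall: fraction of reference tokens matched by the candidate,
    with per-token counts clipped so repeats are not over-credited."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(count, cand[token]) for token, count in ref.items())
    return overlap / sum(ref.values())
```

For example, rouge1_recall("the cat sat", "the cat sat down") is 0.75, since three of the four reference unigrams are matched.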

4.2. Baselines

We compare our proposed method against the following generic summarization baselines, which are widely applied in research or were recently released in the literature.

1: DUC best: the best participating system in DUC2004;

2: Cluster-based methods: KM [10], FGB [7], ClusterHITS [6], NMF [14], RTC [8];

3: Other state-of-the-art MDS methods: Centroid [15], LexPageRank [16], BSTM [10], WCS [11].

5. Experimental results

We evaluate our method on the DUC2004 data with α = 0.77, β = 0.63, γ = 0.92, which gave the best performance in our experiments on the development data DUC2007. The results are listed in Table 2. Fig. 2 visually compares our method with the baselines to better demonstrate the results. In the figure, we subtract the KM score from the scores of the remaining methods and then add 0.01, so that the distinctions among the methods can be observed more clearly. We report the ROUGE-1, ROUGE-2, and ROUGE-SU recall measures in Table 2.

    Table 1 Description of the dataset.

Table 2 Overall performance comparison on the DUC2004 dataset using the ROUGE evaluation tool. Remark: "-" indicates that the corresponding method has not authoritatively released its results.

From Table 2 and Fig. 2, we can make the following observations. Our result is close to the human-annotated result, and our method clearly outperforms the DUC04 best team's work. Our method significantly outperforms most rivals on the ROUGE-1 and ROUGE-SU metrics. In comparison with WCS, our result is slightly worse, which may be due to the aggregation strategy used by WCS: WCS aggregates various summarization systems to produce better summary results. Compared with other cluster-based methods, ours considers the informativeness of sentences and does not need the number of clusters to be set. When any one of the four scores is removed from the integrated score framework, the effectiveness of the method drops; in other words, all four scores have a promoting effect on the summarization task. In a word, our proposed method handles the MDS task effectively.

6. Conclusion

Fig. 2. Comparison of the methods in terms of ROUGE-1, ROUGE-2, and ROUGE-SU recall measures.

In this paper, we proposed a novel unsupervised method to handle the task of multi-document summarization. For ranking sentences, we proposed an integrated score framework: informative content words are used to obtain informativeness, while DPC is employed to measure the relevance and diversity of sentences at the same time. We combined these scores with a length constraint and finally selected sentences by dynamic programming. Extensive experiments on standard datasets show that our method is quite effective for multi-document summarization.

In the future, we will introduce external resources such as WordNet and Wikipedia to calculate sentence semantic similarity, which can address the problems of synonyms and polysemous words. We will then apply our proposed method to topic-focused and update summarization, toward which summarization tasks have turned.

    Acknowledgments

This work is partially supported by NSFC (No. 61271309, No. 61300197) and Shenzhen Science & Research projects (No. CXZZ20140509093608290).

References

[1] T. Ma, X. Wan, Multi-document summarization using minimum distortion, in: 2010 IEEE International Conference on Data Mining, IEEE, 2010, pp. 354-363.

[2] H. Liu, H. Yu, Z.-H. Deng, Multi-document summarization based on two-level sparse representation model, in: AAAI, 2015, pp. 196-202.

[3] Z. Cao, F. Wei, L. Dong, S. Li, M. Zhou, Ranking with recursive neural networks and its application to multi-document summarization, in: AAAI, 2015, pp. 2153-2159.

[4] A. Rodriguez, A. Laio, Clustering by fast search and find of density peaks, Science 344 (6191) (2014) 1492-1496.

[5] K. Hong, A. Nenkova, Improving the estimation of word importance for news multi-document summarization, in: EACL, 2014, pp. 712-721.

[6] X. Wan, J. Yang, Multi-document summarization using cluster-based link analysis, in: Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, ACM, 2008, pp. 299-306.

[7] D. Wang, S. Zhu, T. Li, Y. Chi, Y. Gong, Integrating document clustering and multi-document summarization, ACM Trans. Knowl. Discov. Data (TKDD) 5 (3) (2011) 14.

[8] X. Cai, W. Li, Ranking through clustering: an integrated approach to multi-document summarization, IEEE Trans. Audio, Speech, Lang. Process. 21 (7) (2013) 1424-1433.

[9] D. Wang, T. Li, S. Zhu, C. Ding, Multi-document summarization via sentence-level semantic analysis and symmetric matrix factorization, in: Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, ACM, 2008, pp. 307-314.

[10] D. Wang, S. Zhu, T. Li, Y. Gong, Multi-document summarization using sentence-based topic models, in: Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, ACL, 2009, pp. 297-300.

[11] D. Wang, T. Li, Weighted consensus multi-document summarization, Inf. Process. Manag. 48 (3) (2012) 513-523.

[12] J. Goldstein, V. Mittal, J. Carbonell, M. Kantrowitz, Multi-document summarization by sentence extraction, in: Proceedings of the 2000 NAACL-ANLP Workshop on Automatic Summarization - Volume 4, ACL, 2000, pp. 40-48.

[13] P. Over, J. Yen, Introduction to DUC-2001: an intrinsic evaluation of generic news text summarization systems, in: Proceedings of DUC 2004 Document Understanding Workshop, Boston, 2004.

[14] D. Wang, T. Li, C. Ding, Weighted feature subset non-negative matrix factorization and its applications to document understanding, in: 2010 IEEE International Conference on Data Mining, IEEE, 2010, pp. 541-550.

[15] D.R. Radev, H. Jing, M. Styś, D. Tam, Centroid-based summarization of multiple documents, Inf. Process. Manag. 40 (6) (2004) 919-938.

[16] Q. Mei, J. Guo, D. Radev, DivRank: the interplay of prestige and diversity in information networks, in: Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2010, pp. 1009-1018.

*Corresponding author. Shenzhen Raisound Technologies, Co., Ltd, China.

E-mail address: 13925876721@163.com (J. Zhang).

    Peer review under responsibility of Chongqing University of Technology.

    http://dx.doi.org/10.1016/j.trit.2016.12.005

2468-2322/© 2017 Production and hosting by Elsevier B.V. on behalf of Chongqing University of Technology. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

