
    Robust Re-Weighted Multi-View Feature Selection

    2019-08-13 05:55:08
    Computers, Materials & Continua, 2019, No. 8

    Yiming Xue, Nan Wang, Yan Niu, Ping Zhong, Shaozhang Niu and Yuntao Song

    Abstract: In practical applications, many objects are described by multi-view features, because multiple views can provide a more informative representation than a single view. When dealing with multi-view data, high dimensionality is often an obstacle, as it brings expensive time consumption and an increased chance of over-fitting. How to identify the relevant views and features is therefore an important issue. Matrix-based multi-view feature selection, which can integrate multiple views to select a relevant feature subset, has aroused wide concern in recent years. Existing supervised multi-view feature selection methods usually concatenate all views into long vectors to design the models. However, this concatenation has no physical meaning and implies that different views play similar roles for a specific task. In this paper, we propose a robust re-weighted multi-view feature selection method that constructs the penalty term on the low-dimensional subspace of each view through the least-absolute criterion. The proposed model can fully exploit the complementary property of multiple views and the specificity of each view. It can not only induce robustness to mitigate the impact of outliers, but also learn the corresponding weights adaptively for different views without any preset parameter. During optimization, the proposed model can be split into several small-scale sub-problems. An iterative algorithm based on iteratively re-weighted least squares is proposed to solve these sub-problems efficiently. Furthermore, the convergence of the iterative algorithm is theoretically analyzed. Extensive comparative experiments with several state-of-the-art feature selection methods verify the effectiveness of the proposed method.

    Keywords: Supervised feature selection, multi-view, robustness, re-weighted.

    1 Introduction

    In many applications, we need to deal with a large amount of data of high dimensionality. Handling high-dimensional data (such as image and video data) brings many challenges, including added computational complexity and an increased chance of over-fitting. How to effectively reduce the dimensionality has therefore become an important issue. As an effective way of selecting representative features, feature selection has attracted much attention. Feature selection methods [Fang, Cai, Sun et al. (2018)] can be grouped into filter methods, wrapper methods, and embedded methods. Filter methods select features according to the general characteristics of the data without taking the learning model into consideration. Wrapper methods select features by taking the performance of some model as the criterion. Embedded methods incorporate feature selection and the classification process into a single optimization problem, which can achieve reasonable computational cost and good classification performance. Thus, embedded methods occupy the dominant position in machine learning, and the least absolute shrinkage and selection operator (Lasso) [Tibshirani (2011)] is one of the most important representatives.

    Recently, unlike previous vector-based feature selection methods (such as Lasso) that are only used for binary classification, many matrix-based structured sparsity-inducing feature selection (SSFS) methods have been proposed to solve multi-class classification [Gui, Sun, Ji et al. (2016)]. Obozinski et al. [Obozinski, Taskar and Jordan (2006)] first introduced the l2,1-norm regularization term, an extension of the l1-norm in Lasso, for multi-task feature selection. The l2,1-norm regularizer can obtain a joint sparsity matrix, because minimizing the l2,1-norm makes the rows of the transformation matrix corresponding to the nonessential features become zero or close to zero. Thanks to its good performance, many SSFS methods based on the l2,1-norm regularizer have been proposed [Yang, Ma, Hauptman et al. (2013); Chen, Zhou and Ye (2011); Wang, Nie, Huang et al. (2011); Jebara (2011)]. In addition, Nie et al. [Nie, Huang, Cai et al. (2011)] utilized the l2,1-norm penalty to construct a robust SSFS method called RFS to deal with bioinformatics tasks. Unlike the frequently-used least-squares penalty, the residual in RFS is not squared, and thus outliers have less influence. With the aid of the l2,1-norm penalty, several robust SSFS methods have been proposed [Zhu, Zuo, Zhang et al. (2015); Du, Ma, Li et al. (2017)].
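    Since the l2,1-norm and its row-sparsity effect recur throughout the paper, a minimal numpy illustration may help (the matrix here is a toy example, not from the paper):

```python
import numpy as np

def l21_norm(W):
    """l2,1-norm: sum of the l2-norms of the rows of W.

    Minimizing this term drives entire rows of the transformation
    matrix toward zero, so the features (rows) that survive are the
    jointly relevant ones across all classes.
    """
    return np.sum(np.linalg.norm(W, axis=1))

# Toy transformation matrix: the zero row corresponds to a pruned feature.
W = np.array([[3.0, 4.0],   # ||row||_2 = 5
              [0.0, 0.0],   # ||row||_2 = 0  (irrelevant feature)
              [0.0, 2.0]])  # ||row||_2 = 2
print(l21_norm(W))  # 7.0
```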

    As is known, the description of an object from multiple views is more informative than that from a single view, and a large amount of multi-view data has been collected. In order to describe this kind of data in a better way, many features are extracted by different feature extractors. How to integrate these views to select a more relevant feature subset is important for the subsequent classification model. Xiao et al. [Xiao, Sun, He et al. (2013)] first proposed a two-view feature selection method. However, many objects are described from more than two views. Wang et al. [Wang, Nie, Huang et al. (2012); Wang, Nie and Huang (2013); Wang, Nie, Huang et al. (2013)] proposed improved multi-view feature selection methods to handle the general case. In Wang et al. [Wang, Nie and Huang (2012)], they established a multi-view feature selection framework by adopting a G1-norm regularizer and an l2,1-norm regularizer to induce sparsity over both views and features. In Wang et al. [Wang, Nie and Huang (2013); Wang, Nie, Huang et al. (2013)], SSMVFS and SMML were proposed using the same framework to induce structured sparsity. Specifically, SSMVFS employed the discriminative K-means loss for clustering, and SMML employed the hinge loss for classification. Zhang et al. [Zhang, Tian, Yang et al. (2014)] proposed a multi-view feature selection method based on a G2,1-norm regularizer by incorporating the view-wise structure information. Gui et al. [Gui, Rao, Sun et al. (2014)] proposed a joint feature extraction and feature selection method by considering both the complementary property and the consistency of different views.

    Multi-view feature selection methods have achieved good performance. Concatenating multiple views into new vectors is a common way to establish multi-view feature selection schemes. However, the concatenated vectors have no physical meaning, and the concatenation implies that different views have similar effects on a specific class. In addition, the concatenated vectors are always high dimensional, which increases the chance of over-fitting. Noticing these limitations, some multi-view clustering methods [Xu, Wang and Lai (2016); Xu, Han, Nie et al. (2017)] have been proposed to learn the corresponding distribution of each view.

    Inspired by the above work, we propose a robust re-weighted multi-view feature selection method (RRMVFS) without concatenation. For each view, we make the predictive values close to the real labels, and construct the penalty term by using the least-absolute criterion, which can not only induce robustness but also learn the corresponding view weights adaptively without any preset parameter. Based on the proposed penalty, the scheme is established by adding G1-norm and l2,1-norm regularization terms for structured sparsity. During optimization, the proposed model can be decomposed into several small-scale subproblems, and an iterative algorithm based on Iteratively Re-weighted Least Squares (IRLS) [Daubechies, Devore, Fornasier et al. (2008)] is proposed to solve the new model. Furthermore, a theoretical analysis of convergence is also presented.

    In a nutshell, the proposed multi-view feature selection method has the following advantages:

    · It can fully consider the complementary property of multiple views as well as the specificity of each view, since it assigns all views of each sample to the same class while separately imposing a penalty on each view.

    · It can reduce the influence of outliers effectively, because the least-absolute residuals of each view are combined as the penalty.

    · It can learn the view weights adaptively in a re-weighted way, where the weights are updated according to the current weight and bias matrices without any preset parameter.

    · It can be solved efficiently for two reasons: the objective function can be decomposed into several small-scale optimization subproblems, and IRLS can solve the least-absolute residual problem within finitely many iterations.

    · Extensive comparison experiments with several state-of-the-art feature selection methods show the effectiveness of the proposed method.
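    The robustness claim above can be illustrated with a toy computation (synthetic residuals, not data from the paper): a single large outlier dominates a squared penalty far more than an unsquared (least-absolute) one.

```python
import numpy as np

rng = np.random.default_rng(0)
# 99 well-behaved per-sample residuals plus one gross outlier.
r = np.append(rng.normal(0.0, 0.1, 99), 50.0)

squared = np.sum(r ** 2)      # least-squares penalty
absolute = np.sum(np.abs(r))  # least-absolute penalty

# Share of the total penalty contributed by the single outlier:
print(50.0 ** 2 / squared)  # close to 1: the outlier dominates
print(50.0 / absolute)      # noticeably smaller share
```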

    The paper is organized as follows. In Section 2, we present our feature selection model and algorithm in detail, and the convergence of the proposed algorithm is analysed. After presenting the extensive experiments in Section 3, we draw the conclusions in Section 4.

    Now we give the notation used in this paper. Given N samples with V views belonging to P classes, the data matrix of the vth view is denoted as X(v) = [x1(v), ···, xN(v)] ∈ Rdv×N, where xn(v) is the vth view of the nth sample and dv is the feature number of the vth view, v = 1, ···, V. The data matrix of all views can be represented by X = [X(1)T, ···, X(V)T]T ∈ Rd×N, where d = Σv=1V dv.
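    The notation can be mirrored in a small numpy sketch (all sizes hypothetical): each view contributes a dv × N matrix, and the concatenated representation stacks the per-view matrices row-wise.

```python
import numpy as np

rng = np.random.default_rng(1)
N, view_dims = 6, [4, 3, 5]  # hypothetical sample count and per-view dims

# X^(v) in R^{d_v x N}: one data matrix per view.
X = [rng.normal(size=(dv, N)) for dv in view_dims]

# Stacking all views row-wise gives the d x N matrix with d = sum(d_v).
X_all = np.vstack(X)
print(X_all.shape)  # (12, 6)
```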

    2 Robust re-weighted multi-view feature selection method

    2.1 Model formulation

    In order to select the relevant views and feature subset from the original ones, we first use the label information of the multi-view data to build the penalty term through the loss minimization principle. We calculate the penalty term by the least-absolute criterion, which can induce robustness. Instead of concatenating all views into long vectors, the penalty term is established as the sum of the residuals calculated on the latent subspace of each view:

    Σv=1V ‖W(v)TX(v) + b(v)1NT − Y‖F,        (1)

    where b(v) ∈ RP is the bias vector of the vth view, and 1N ∈ RN is the vector of all ones. The residuals are not squared, and thus outliers have less effect than with squared residuals. Since different views have different effects for a specific class, we use the G1-norm regularizer to enforce sparsity between views. Meanwhile, we use the l2,1-norm regularizer, which can make the transformation matrix sparse between rows, to select the representative features. The formulation of the proposed multi-view feature selection method can be described as follows:

    min(W,B) Σv=1V ( ‖W(v)TX(v) + b(v)1NT − Y‖F + γ1 Σp=1P ‖wp(v)‖2 + γ2 ‖W(v)‖2,1 ).        (2)

    Formulation (2) assigns all views of each sample to the same class, and imposes the penalty separately on each view. It simultaneously considers the complementary property of different views and the specificity of each view.
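    As a concrete numerical check, the objective of the proposed model, as reconstructed from the update rules in Algorithm 1 (a per-view unsquared Frobenius residual plus G1-norm and l2,1-norm regularizers), can be sketched in numpy; the function name and data layout here are illustrative, not from the paper.

```python
import numpy as np

def rrmvfs_objective(X, W, b, Y, gamma1, gamma2):
    """Objective of model (2), reconstructed from Algorithm 1.

    X, W, b are per-view lists: X[v] is d_v x N, W[v] is d_v x P,
    b[v] is a length-P bias vector; Y is the P x N label matrix.
    """
    N = Y.shape[1]
    ones = np.ones(N)
    J = 0.0
    for Xv, Wv, bv in zip(X, W, b):
        R = Wv.T @ Xv + np.outer(bv, ones) - Y
        J += np.linalg.norm(R, "fro")                     # robust (unsquared) residual
        J += gamma1 * np.sum(np.linalg.norm(Wv, axis=0))  # G1-norm: column groups
        J += gamma2 * np.sum(np.linalg.norm(Wv, axis=1))  # l2,1-norm: row sparsity
    return J

# With zero weights and biases the objective reduces to ||Y||_F:
Y = np.array([[3.0, 4.0]])
print(rrmvfs_objective([np.zeros((2, 2))], [np.zeros((2, 1))], [np.zeros(1)], Y, 0.5, 0.5))  # 5.0
```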

    2.2 Optimization algorithm

    Next, we give an iterative optimization algorithm to solve problem (2). First, we rewrite the objective function of (2) as

    J(W, B) = Σv=1V Jv(W(v), b(v)),  where Jv(W(v), b(v)) = ‖W(v)TX(v) + b(v)1NT − Y‖F + γ1 Σp=1P ‖wp(v)‖2 + γ2 ‖W(v)‖2,1.        (3)

    Since Jv(W(v), b(v)) is only related to the vth view and is nonnegative, problem (2) can be decomposed into V subproblems:

    minW(v),b(v) Jv(W(v), b(v)),  v = 1, ···, V.        (4)

    Note that problem (4) cannot be easily solved by sophisticated optimization algorithms, since its objective function is nonsmooth. We utilize the IRLS method [Daubechies, Devore, Fornasier et al. (2008)] to solve it, and rewrite it as follows:

    1. Fix αv, update W(v), b(v). For problem (5), we can iteratively solve the following problem with αv fixed:

    Taking the derivatives of (10) with respect to wp(v) and bp(v) and setting them to zero, we obtain

    So from (11), we have

    where Yp = (yp1, ···, ypN) ∈ RN. Substituting (13) into (12), we have

    2. Fix W(v), b(v), update αv. The non-negative weight αv of each view can be updated as

    αv = 1 / (2‖W(v)TX(v) + b(v)1NT − Y‖F).

    The proposed multi-view feature selection algorithm (RRMVFS) is summarized in Algorithm 1.

    2.3 Convergence analysis

    Theorem 1. The value of the objective function of (2) monotonically decreases at each iteration of Algorithm 1, and the algorithm converges.

    Proof: From Eq. (3), in order to prove J(Wt+1, Bt+1) ≤ J(Wt, Bt), we only need to prove that Jv(W(v), b(v)) monotonically decreases in each iteration.

    Algorithm 1: Robust Re-weighted Multi-view Feature Selection (RRMVFS)
    Input: data matrix of each view X(v), v = 1, 2, ···, V; label matrix Y = [y1, y2, ···, yN].
    Output: transformation matrices W(v), v = 1, 2, ···, V.
    Initialization:
    1. Set t = 0, threshold ε = 10−5, largest iteration number T = 20; set all elements of (W(v))0 ∈ Rdv×P (v = 1, ···, V) to 1, compute (bp(v))0 = (Yp1N − ((wp(v))0)TX(v)1N)/N (1 ≤ p ≤ P, 1 ≤ v ≤ V), and (αv)0 = 1/(2‖((W(v))0)TX(v) + (b(v))01NT − Y‖F) (1 ≤ v ≤ V).
    While not converged do
    2. Compute the diagonal matrices (D̄(v))t (1 ≤ v ≤ V) with the ith diagonal element 1/(2‖(Wi:(v))t‖2), and the diagonal matrices (D̃p(v))t = (1/(2‖(wp(v))t‖2)) Idv (1 ≤ p ≤ P, 1 ≤ v ≤ V), with Idv being the dv×dv identity matrix.
    3. For each wp(v) and bp(v) (1 ≤ p ≤ P, 1 ≤ v ≤ V), compute (wp(v))t+1 = ((αv)tX(v)HX(v)T + γ1(D̃p(v))t + γ2(D̄(v))t)−1(αv)tX(v)HYpT, and (bp(v))t+1 = (Yp1N − ((wp(v))t+1)TX(v)1N)/N.
    4. Calculate (αv)t+1 = 1/(2‖((W(v))t+1)TX(v) + (b(v))t+11NT − Y‖F) (1 ≤ v ≤ V).
    5. Check the convergence condition J(Wt, Bt) − J(Wt+1, Bt+1) < ε (J(·,·) is the objective function of (2)) or t > T.
    6. t = t + 1.
    End While
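    A compact numpy sketch of Algorithm 1 follows, under the assumptions stated in the comments: in particular, H is taken to be the centering matrix I − (1/N)1N1NT, which is what reproduces the bias update in Step 3, and small guards against division by zero are added. This is an illustrative reconstruction, not the authors' code.

```python
import numpy as np

def rrmvfs(X, Y, gamma1=1.0, gamma2=1.0, T=20, eps=1e-5):
    """Sketch of Algorithm 1 (RRMVFS). X: list of d_v x N arrays
    (one per view); Y: P x N label matrix. Returns per-view W and b."""
    P, N = Y.shape
    ones = np.ones(N)
    H = np.eye(N) - np.outer(ones, ones) / N  # assumed centering matrix
    tiny = 1e-12                              # guard for IRLS weights

    # Initialization (Step 1): W all ones, b and alpha from the closed forms.
    W = [np.ones((Xv.shape[0], P)) for Xv in X]
    b = [(Y @ ones - Wv.T @ Xv @ ones) / N for Xv, Wv in zip(X, W)]
    alpha = [1.0 / (2 * np.linalg.norm(Wv.T @ Xv + np.outer(bv, ones) - Y, "fro") + tiny)
             for Xv, Wv, bv in zip(X, W, b)]

    def objective():
        J = 0.0
        for Xv, Wv, bv in zip(X, W, b):
            J += np.linalg.norm(Wv.T @ Xv + np.outer(bv, ones) - Y, "fro")
            J += gamma1 * np.sum(np.linalg.norm(Wv, axis=0))
            J += gamma2 * np.sum(np.linalg.norm(Wv, axis=1))
        return J

    J_prev = objective()
    for t in range(T):
        for v, Xv in enumerate(X):
            # Step 2: IRLS diagonal matrices for the two regularizers.
            D_bar = np.diag(1.0 / (2 * np.linalg.norm(W[v], axis=1) + tiny))
            M = alpha[v] * Xv @ H @ Xv.T + gamma2 * D_bar
            # Step 3: closed-form updates for each column of W and for b.
            for p in range(P):
                D_tilde = np.eye(Xv.shape[0]) / (2 * np.linalg.norm(W[v][:, p]) + tiny)
                W[v][:, p] = np.linalg.solve(M + gamma1 * D_tilde,
                                             alpha[v] * Xv @ H @ Y[p])
            b[v] = (Y @ ones - W[v].T @ Xv @ ones) / N
            # Step 4: re-weight the view.
            alpha[v] = 1.0 / (2 * np.linalg.norm(
                W[v].T @ Xv + np.outer(b[v], ones) - Y, "fro") + tiny)
        # Step 5: convergence check.
        J = objective()
        if J_prev - J < eps:
            break
        J_prev = J
    return W, b

# Illustrative toy run (sizes are arbitrary):
rng = np.random.default_rng(0)
X = [rng.normal(size=(3, 8)), rng.normal(size=(4, 8))]
Y = np.zeros((2, 8)); Y[0, :4] = 1.0; Y[1, 4:] = 1.0
W, b = rrmvfs(X, Y, gamma1=0.1, gamma2=0.1)
print([Wv.shape for Wv in W])  # [(3, 2), (4, 2)]
```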

    By Step 3 in Algorithm 1, we have

    From (17)-(20), we obtain

    So we get J(Wt+1, Bt+1) ≤ J(Wt, Bt), that is, Algorithm 1 decreases the objective value in each iteration. Since J(W, B) ≥ 0, Algorithm 1 converges.

    3 Experiments

    In this section, we evaluate the effectiveness of the proposed method by comparing it with several related feature selection methods.

    3.1 Experimental setup

    We compare the performance of our method RRMVFS with several related methods: Single, CAT, RFS [Nie, Huang, Cai et al. (2011)], SSMVFS [Wang, Nie and Huang (2013)], SMML [Wang, Nie, Huang et al. (2013)], DSML-FS [Gui, Rao, Sun et al. (2014)], and WMCFS [Xu, Wang and Lai (2016)]. Single refers to using single-view features to find the best classification performance. CAT refers to using the concatenated vectors for classification without feature selection. RFS is an efficient robust feature selection method, but is not designed for multi-view feature selection. The other feature selection methods are designed for multi-view feature selection. The parameters γ1 and γ2 in the feature selection methods are tuned from the set {10i | i = −5, −4, ···, 5}. The exponential parameter ρ in the WMCFS method is set to 5 according to Xu et al. [Xu, Wang and Lai (2016)].

    The publicly available data sets, including the image data set NUS-WIDE-OBJECT (NUS), the handwritten numerals data set mfeat, and the Internet pages data set Ads, are employed in the experiments. For the NUS data set, we choose all 12 animal classes, including 8182 images. For Ads, there exist some incomplete data. We first discard the incomplete data and then randomly choose some non-AD samples so that the number of non-AD data is the same as that of AD data. The total number of samples employed in Ads is 918. For the mfeat data set, all of the data are employed. In each data set, the samples are randomly and evenly divided into two parts. One part is used for training and the other for testing. In the training set, we randomly choose 6 (9, or 12) samples from each class to learn the transformation matrices of the compared methods. In the test set, we employ 20% of the data for validation, and the parameters that achieve the best performance on the validation set are employed for testing. We arrange the features in descending order based on the values of ‖Wi:‖2, i = 1, 2, ···, d, and select a certain number of top-ranked features. The numbers of selected features are {10%, ···, 90%} of the total number of features, respectively. The selected features are then taken as the inputs of the subsequent 1-nearest-neighbour (1NN) classifier. We conduct two kinds of experiments. First, we conduct experiments on all views to evaluate these methods. Then, we conduct experiments on the subsets formed by two views and four views to evaluate the views. For all experiments, ten independent random choices of training samples are employed, and the averaged accuracies (AC) and F1 scores (macroF1) are reported.
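    The ranking-and-selection step described above (sort features by ‖Wi:‖2 and keep a given percentage) can be sketched as follows; the helper name is illustrative:

```python
import numpy as np

def select_top_features(W_all, percent):
    """Rank features by the l2-norm of the rows of the stacked
    transformation matrix and keep the top `percent` fraction,
    mirroring the experimental protocol."""
    scores = np.linalg.norm(W_all, axis=1)   # ||W_i:||_2 per feature
    order = np.argsort(scores)[::-1]         # descending by score
    k = max(1, int(round(percent * len(scores))))
    return np.sort(order[:k])                # indices of kept features

# Toy stacked matrix: rows 1 and 2 have the largest norms.
W_all = np.array([[0.0, 0.1], [2.0, 0.0], [0.5, 0.5], [0.0, 0.0]])
print(select_top_features(W_all, 0.5))  # [1 2]
```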

    3.2 Evaluation of feature selection

    Figs. 1, 2, and 3 show the performance of the compared methods w.r.t. different percentages of selected features on the NUS, mfeat, and Ads data sets, respectively. From these figures, we can see that the performance of these methods under AC is consistent with that under macroF1 scores. CAT performs better than Single in all cases, which means that it is essential to combine different views to select features. The feature selection methods, including ours, achieve comparable or even better performance than CAT. Specifically, from Fig. 1, we can see that three feature selection methods, RFS, SSMVFS, and DSML-FS, show comparable performance on the NUS data set. SSMVFS and DSML-FS are feature selection methods designed for multi-view learning, while RFS is a robust feature selection method that is not specially designed for multi-view learning. This means that it is necessary to build a robust method, since data may be corrupted with noise. As the number of selected features increases, the ACs and macroF1 scores of the multi-view feature selection methods WMCFS, SMML, and the proposed RRMVFS are greatly improved. Moreover, RRMVFS achieves the best results in most cases. This phenomenon might be attributed to the robust penalty, which may help RRMVFS select more representative features.

    Figure 1: ACs and macroF1 scores of the compared methods vs. the percentage of selected features on the NUS data set

    Figure 2: ACs and macroF1 scores of the compared methods vs. the percentage of selected features on the mfeat data set

    Figure 3: ACs and macroF1 scores of the compared methods vs. the percentage of selected features on the Ads data set

    From Fig. 2, we can see that RFS, SSMVFS, and DSML-FS show comparable performance on the mfeat data set. As the number of selected features increases, the ACs and macroF1 scores of SMML and WMCFS increase except at a few percentages, especially for WMCFS. The proposed RRMVFS can achieve the best performance even when only a small number of features are selected (a selection percentage of 10%).

    From Fig. 3, we can see that four feature selection methods, RFS, SSMVFS, SMML, and DSML-FS, obtain comparable performance on the Ads data set. They achieve their best performance at 10%, and when the percentage of selected features changes from 20% to 90%, their ACs and macroF1 scores are substantially unchanged and comparable to those of CAT. Just like on the other two data sets, the proposed RRMVFS still achieves the best performance on the Ads data set.

    3.3 Evaluation of views

    In order to evaluate the effect of views, we test the ACs and macroF1 scores of the compared methods in terms of views on the NUS, mfeat, and Ads data sets, respectively. The experiments are conducted on the subsets that contain 12 samples of each class. For each data set, we conduct two kinds of experiments. First, we randomly choose two views for the experiments. Second, we randomly choose two more views from the remaining views, and combine them with the previous two views to form the subsets for the experiments.

    Table 1: ACs and macroF1 scores with standard deviations of the compared methods in terms of views on the NUS data set

    The experimental results on the three data sets are shown in Tabs. 1-3, respectively. It can be seen that, generally speaking, the performance of all compared methods gets better as the number of views increases. On the subsets consisting of two views, our method RRMVFS does not show the best performance, but on the subsets formed by four views and on the whole sets, our method significantly outperforms the others. This phenomenon might be attributed to the learning of view weights, which may help RRMVFS select more relevant views.

    Table 2: ACs and macroF1 scores with standard deviations of the compared methods in terms of views on the mfeat data set

    Table 3: ACs and macroF1 scores with standard deviations of the compared methods in terms of views on the Ads data set

    4 Conclusion

    In this paper, we have proposed a robust re-weighted multi-view feature selection method that assigns all views of each sample to the same class while imposing the penalty on the latent subspace of each view through the least-absolute criterion. The method takes both the complementary property of different views and the specificity of each view into consideration, and it induces robustness. The proposed model can be solved efficiently by decomposing it into several small-scale optimization subproblems, and the convergence of the proposed iterative algorithm is proved. Comparison experiments with several state-of-the-art feature selection methods verify the effectiveness of the proposed method. Many real-world applications, such as text categorization, are multi-label problems. Future work is to extend the proposed method to multi-label multi-view feature selection.

    Acknowledgement: This work was supported by the National Natural Science Foundation of China (Grant Nos. 61872368, U1536121).
