
    Improved Logistic Regression Algorithm Based on Kernel Density Estimation for Multi-Classification with Non-Equilibrium Samples

Yang Yu, Zeyu Xiong, Yueshan Xiong and Weizi Li

Computers, Materials & Continua, 2019, Issue 10

Abstract: Logistic regression is often used to solve linear binary classification problems such as machine vision, speech recognition, and handwriting recognition. However, it usually fails on certain nonlinear multi-classification problems, such as problems with non-equilibrium samples. Many scholars have proposed methods such as neural networks, least squares support vector machines, and the AdaBoost meta-algorithm, all of which essentially belong to the machine learning category. In this work, based on probability theory and statistical principles, we propose an improved logistic regression algorithm based on kernel density estimation for solving nonlinear multi-classification problems. We have compared our approach with other methods on non-equilibrium samples; the results show that our approach guarantees sample integrity and achieves superior classification.

Keywords: Logistic regression, multi-classification, kernel function, density estimation, non-equilibrium.

    1 Introduction

Machine learning has become one of the most popular fields in recent years. It has two main tasks: 1) classification, whose goal is to assign instances to the appropriate categories, and 2) regression, whose goal is to study the relationships between samples. The most basic classification problem is binary classification, which can be solved using algorithms such as Naive Bayes (NB), support vector machines (SVM), decision trees, logistic regression, KNN, and neural networks. More generally, multi-classification problems such as identifying handwritten digits 0~9 and labeling document topics have gained much attention recently. To give a few examples, Liu et al. [Liu, Liang and Xue (2008)] proposed a multi-classification algorithm based on fuzzy support vector machines, which provides better classification accuracy and generalization ability than traditional One-vs.-Rest methods. Tang et al. [Tang, Wang and Chen (2005)] proposed a new multi-classification algorithm based on support vector machines and a binary tree structure to solve the problem of non-separable regions.

Existing regression algorithms have limitations when applied to multi-classification. The logistic regression algorithm can only solve dichotomous, linearly separable problems. Support vector machines typically handle only small training sets and are similarly difficult to extend to multiple classes. Naive Bayes rests on the assumption that features are conditionally independent; once a dataset violates this assumption, its classification accuracy suffers greatly.

To address the problems above, namely the difficulty of handling large-scale samples, the inapplicability to multi-classification, and the uncertainty of constraint conditions, Chen et al. [Chen, Chen, Mao et al. (2013)] proposed the Density-based Logistic Regression (DLR) model, which performs well in practical applications. Our model builds on kernel density-based logistic regression, and we construct a new kernel function for multi-classification problems. This has three advantages: 1) it improves classification quality; 2) it extends the DLR model to multi-classification problems; 3) it generalizes well on nonlinear and unbalanced data. We describe the theoretical rationale of our new model and evaluate its classification quality in practical applications.

The rest of the paper is organized as follows. In Section 2, we review background knowledge, including logistic regression for binary classification, multi-classification, SoftMax, and the DLR model. In Section 3, we introduce several solutions for multi-classification problems with imbalanced samples. In Section 4, we explain our approach in detail. In Section 5, we compare our approach with other methods and analyze their performance. Finally, we conclude in Section 6.

    2 Logistic regression and related knowledge

    2.1 Logistic regression

Logistic regression builds on linear regression by applying a sigmoid function, a logarithmic probability (logit) function. Logistic regression is represented as follows:

$$y = \frac{1}{1 + e^{-z}}, \quad z = w^T x \tag{1}$$

In the sigmoid model, y values are distributed within the range (0, 1). When the independent variable z is near 0, the curve changes very steeply, while the y value is relatively stable elsewhere. Therefore, binary classification tasks can be handled well by taking 0 as the decision boundary. However, it is sometimes difficult to make this representation approximate the expected model, so a constant term b is added to the function:

$$z = w^T x + b \tag{2}$$

By substituting Eq. (2) into Eq. (1), we have

$$y = \frac{1}{1 + e^{-(w^T x + b)}} \tag{3}$$

Based on these formulae, assume a given dataset D = {(x_i, y_i)}, i = 1, ..., N, where x_i ∈ R^D, D is the dimension of the samples, and y_i ∈ {0, 1}. Logistic regression is then described as follows:

$$y = \frac{1}{1 + e^{-(w^T \phi(x) + b)}} \tag{4}$$

where w stands for the feature weights, the parameters to be learned, and φ is the characteristic (feature) transformation function.

In the LR model, φ is usually defined to be equal to x. The key step is to learn the unknown parameters w and b. If y in Eq. (3) is regarded as the posterior probability estimate p(y=1|x), Eq. (4) can be rewritten as:

$$p(y = 1 \mid x) = \frac{1}{1 + e^{-(w^T \phi(x) + b)}} \tag{5}$$

Then w can be obtained by maximum likelihood estimation. With the definition b_i = p(y_i = 1 | x_i) and y_i = 0 or 1, the posterior probability of a single sample is

$$p(y_i \mid x_i) = b_i^{y_i} (1 - b_i)^{1 - y_i}$$

Then the maximum likelihood function is represented as follows:

$$L(w) = \prod_{i=1}^{N} b_i^{y_i} (1 - b_i)^{1 - y_i}$$

For convenience of calculation, the negative log of the likelihood is used as the objective function to be optimized:

$$Loss = -\sum_{i=1}^{N} \left[ y_i \ln b_i + (1 - y_i) \ln (1 - b_i) \right]$$

Since maximizing the likelihood is equivalent to minimizing the negative log-likelihood, the final step is to minimize this loss function.
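As a concrete illustration, the following minimal NumPy sketch (ours, not from the paper) minimizes the negative log-likelihood above by batch gradient descent; the learning rate and epoch count are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    # Eq. (1): maps real-valued scores to (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=1000):
    """Minimize the negative log-likelihood by batch gradient descent."""
    N, D = X.shape
    w, b = np.zeros(D), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)       # b_i = p(y_i = 1 | x_i), Eq. (5) with phi(x) = x
        grad_w = X.T @ (p - y) / N   # gradient of the averaged loss w.r.t. w
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```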

    2.2 Density-based logistic regression

In the DLR model, φ is a function that maps x into the feature space:

$$\phi_d(x) = \ln \frac{p(y = 1 \mid x_d)}{p(y = 1)}, \quad d = 1, \cdots, D$$

where D is the dimension of the input data, ln p(y=1|x_d) measures the contribution of x_d to the probability of y=1, and ln p(y=1) measures the degree of imbalance of the dataset. p(y=1) is the proportion of data in the training set whose label is y=1. The Nadaraya-Watson estimator is usually used to estimate p(y=k|x_d), where k = 0, 1:

$$p(y = k \mid x_d) = \frac{\sum_{i \in D_k} K(x_d, x_{id})}{\sum_{i=1}^{N} K(x_d, x_{id})}$$

where D_k ⊂ D is the subset of data in class k, and K(·,·) is a Gaussian kernel function defined as follows:

$$K(x, y) = \exp\left( -\frac{(x - y)^2}{2 h_d^2} \right)$$

where h_d is the bandwidth of the kernel density function. h_d is usually set by Silverman's rule of thumb [Silverman and Green (1986)]:

$$h_d = 1.06 \, \sigma \, N^{-1/5} \tag{14}$$

where N is the total number of samples and σ is the standard deviation of x_d.
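To make these estimators concrete, here is a short sketch (ours; function and variable names are assumptions) that estimates p(y=k|x_d) for one feature dimension using the Gaussian kernel and the Silverman bandwidth of Eq. (14).

```python
import numpy as np

def silverman_bandwidth(xd):
    """Eq. (14): h_d = 1.06 * sigma * N^(-1/5)."""
    return 1.06 * np.std(xd) * len(xd) ** (-1 / 5)

def gaussian_kernel(u, v, h):
    """Gaussian kernel with bandwidth h."""
    return np.exp(-((u - v) ** 2) / (2.0 * h ** 2))

def nadaraya_watson(x, xd, y, k):
    """Nadaraya-Watson estimate of p(y = k | x) from one training column xd."""
    h = silverman_bandwidth(xd)
    w = gaussian_kernel(x, xd, h)
    return w[y == k].sum() / w.sum()

# toy column: feature values and binary labels
xd = np.array([0.1, 0.3, 0.35, 0.8, 0.9])
y = np.array([0, 0, 0, 1, 1])
print(nadaraya_watson(0.85, xd, y, k=1))  # close to 1 near the class-1 samples
```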

Next, we train w with a learning algorithm until it converges. Given b_i = p(y_i = 1 | x_i), the loss function based on the likelihood is, as before,

$$Loss = -\sum_{i=1}^{N} \left[ y_i \ln b_i + (1 - y_i) \ln (1 - b_i) \right]$$

2.3 Extension of logistic regression to multiple classification

Since logistic classification is a binary classification model, it must be extended for multiple classification. Common extensions are multiple binary classification models or the SoftMax model.

    2.3.1 N-logistic model

The N-logistic model generally adopts a One-vs.-Rest or One-vs.-One strategy. To classify a sample, we run all the binary classifiers, then vote and select the category with the highest score. To prevent ties, we also add each classifier's class probability to the vote. The predictive accuracy of the two strategies is usually very similar, so unless the data characteristics impose a specific need, either can be chosen.

    2.3.2 SoftMax model

SoftMax regression is a generalization of logistic regression to multiple classes. Its basic form is described as follows:

$$p(y = k \mid x) = \frac{e^{w_k^T x}}{\sum_{j=1}^{C} e^{w_j^T x}}, \quad k = 1, \cdots, C$$

At test time, a sample x belongs to category c if, for every other category c* (c* ≠ c), it satisfies p(y = c | x) > p(y = c* | x).
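A minimal prediction sketch (ours) of this decision rule, with W holding one weight row per class:

```python
import numpy as np

def softmax(scores):
    # subtract the max for numerical stability before exponentiating
    e = np.exp(scores - scores.max())
    return e / e.sum()

def predict(x, W):
    """Return the class c maximizing p(y = c | x) under the SoftMax model."""
    return int(np.argmax(softmax(W @ x)))

W = np.array([[1.0, -0.5], [0.2, 0.8], [-1.0, 0.1]])  # 3 classes, 2 features
print(predict(np.array([0.3, 0.9]), W))
```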

On the question of choosing between the N-logistic model and the SoftMax model, many scholars have conducted in-depth investigations. The accepted criterion is whether the categories are mutually exclusive. If the categories to be classified are mutually exclusive, the SoftMax classifier is the better choice; if the categories are not mutually exclusive and may intersect, the N-logistic classifier is best suited. We verify this conclusion on the corresponding datasets in Section 5.

3 Analysis of classification results with unbalanced sample proportions

Practical classification tasks often involve data with unbalanced sample proportions. For example, suppose the ratio of positive to negative samples in a dataset is 10:1, with 100 positive samples and 10 negative samples. A classifier trained on such data is very likely to assign all test data to the positive class; obviously, such a classifier is useless.

Traditional logistic regression usually fails on this kind of data. In recent years, research on unbalanced classification has been very active [Ye, Wen and Lv (2009)]. In this section we introduce several common approaches to the problem of imbalanced-sample classification.

    3.1 Obtain more samples

For unbalanced classification, the first solution is to obtain more samples, expanding the minority class to balance the sample proportion. In most cases, however, sampling requires specific conditions, so it is generally difficult to obtain more samples under the same conditions.

    3.2 Sampling methods

General sampling methods mainly modify the number of samples in the unbalanced classes. The research of Estabrooks et al. [Estabrooks, Jo and Japkowicz (2004)] shows that such sampling methods are effective for unbalanced classification problems.

    3.2.1 Under-sampling method

The under-sampling method, also called down-sampling [Gao, Ding and Han (2008)], eliminates some samples from the majority class so that the class sizes tend toward balance. The commonly used variant is random under-sampling: based on N_min, the number of minority-class samples, we randomly remove N samples from the majority class such that N_max − N = N_min, balancing the classes. A minimal sketch follows.
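The following sketch (ours, binary case; names and the fixed seed are assumptions) implements random under-sampling as just described.

```python
import numpy as np

def random_undersample(X, y, majority_label, seed=0):
    """Randomly drop majority-class rows until N_max - N = N_min."""
    rng = np.random.default_rng(seed)
    maj = np.flatnonzero(y == majority_label)
    mino = np.flatnonzero(y != majority_label)
    keep = rng.choice(maj, size=len(mino), replace=False)  # keep N_min majority rows
    idx = np.sort(np.concatenate([keep, mino]))
    return X[idx], y[idx]
```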

    3.2.2 Over-sampling method

The over-sampling method, also called up-sampling, increases the number of minority-class samples. One can duplicate minority samples (random over-sampling) or fit new data that follow some law of the original samples until the classes are balanced. A commonly used method is the Synthetic Minority Over-sampling Technique (SMOTE) [Chawla, Bowyer, Hall et al. (2002)], which analyzes the distribution of the minority samples in feature space and proposes new samples. Compared with random over-sampling, the data added by SMOTE are completely new yet follow the regular pattern of the original samples. The main idea of SMOTE is shown in Fig. 1.

For each sample x in the minority class, the Euclidean distances to the other minority sample points are computed to obtain its k nearest neighbors. A suitable sampling rate N is set according to the class proportions. For each minority sample x, several samples are selected at random from its k neighbors; for each such neighbor x_n, a new sample is constructed from the original sample according to

$$x_{new} = x + rand(0, 1) \times (x_n - x)$$
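A compact sketch of this interpolation step (ours; k, the seed, and the sampling loop are simplifying assumptions relative to the full SMOTE procedure):

```python
import numpy as np

def smote(X_min, n_new, k=5, seed=0):
    """Generate n_new synthetic minority samples by SMOTE-style interpolation."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        x = X_min[i]
        # k nearest minority neighbors of x by Euclidean distance (skip x itself)
        d = np.linalg.norm(X_min - x, axis=1)
        nn = np.argsort(d)[1:k + 1]
        xn = X_min[rng.choice(nn)]
        out.append(x + rng.random() * (xn - x))  # x_new = x + rand(0,1) * (x_n - x)
    return np.array(out)
```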

    3.3 Modify evaluation index

For unbalanced classification, evaluating classifiers by accuracy alone may be biased. For example, suppose the ratio of positive to negative samples in a dataset is 9:1 and a classifier labels every sample positive. Although its accuracy reaches 90%, the classifier is useless.

Figure 1: The main idea of the SMOTE method

Table 1: The confusion matrix for binary classification

Therefore, accuracy can be a biased indicator. Davis et al. [Davis and Goadrich (2006)] advocated the evaluation indices Precision and Recall; the relevant quantities are listed in Tab. 1.

Precision refers to the proportion of predicted-positive samples that are actually positive, and Recall refers to the proportion of actual positive samples that are correctly predicted:

$$Precision = \frac{TP}{TP + FP}, \quad Recall = \frac{TP}{TP + FN}$$
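A toy computation (ours) of these indices for the 9:1 example above, where a degenerate classifier predicts every sample positive:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP), Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Positive class: TP=90, FP=10, FN=0 -> precision 0.90, recall 1.00
print(precision_recall(90, 10, 0))
# Negative class: TP=0, FP=0, FN=10 -> recall 0.00, exposing the useless classifier
print(precision_recall(0, 0, 10))
```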

    3.4 Use penalty items to modify the weights

If it is difficult to sample directly, sample weights can be modified instead: increase the weights of the minority-class samples and reduce the weights of the majority-class samples. Because the minority-class samples carry higher weights, they lead to better classification results. A commonly used method adds a penalty item for the majority-class samples each time the sample weights are trained. In general, we use regularization, adding a penalty term to the objective function, which also reduces the chance of overfitting [Goodfellow, Bengio and Courville (2017)]. The regularized objective function is shown below:

$$\tilde{J}(w; X, y) = J(w; X, y) + \alpha \, \Omega(w)$$

where α is a parameter weighting the contribution of the penalty term Ω relative to the objective function J. The penalty can be adjusted by controlling α: if α = 0, there is no penalty; the larger the α, the heavier the penalty.
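As a small illustration (ours, with an L2 norm standing in for the generic penalty Ω, which the text leaves unspecified):

```python
import numpy as np

def regularized_loss(data_loss, w, alpha):
    """J~(w) = J(w) + alpha * Omega(w), here with Omega(w) = 0.5 * ||w||^2."""
    return data_loss + alpha * 0.5 * np.sum(w ** 2)

w = np.array([2.0, -1.0, 0.5])
print(regularized_loss(1.25, w, alpha=0.0))  # alpha = 0: no penalty
print(regularized_loss(1.25, w, alpha=0.1))  # larger alpha: heavier penalty
```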

After choosing an appropriate penalty, we train against the regularized objective. In this way both the data error and the parameter scale can be reduced, and computational efficiency improved. In practice, however, selecting the optimal penalty term is a complicated problem that requires extensive testing.

    3.5 Kernel-based methods

For a general classification problem, we may assume the sample data can be classified directly by a linear model; in other words, there exists a hyperplane that separates the samples correctly. In practice, however, such a hyperplane usually does not exist, i.e., the data are not linearly separable. For such problems we can consider preprocessing the data: following the principle of support vector machines, data in a low-dimensional space are transformed into a high-dimensional space through a nonlinear transformation so that they become linearly separable [Zhou (2016)]. With this method, the relationship between data samples can be written as dot products. For example, the linear regression function can be rewritten as follows:

$$f(x) = b + \sum_{i} \alpha_i \, x^T x^{(i)}$$

where x^(i) is a training sample and α is the coefficient vector. Replacing the dot product with a kernel function k(x, x^(i)) = φ(x) · φ(x^(i)), we get

$$f(x) = b + \sum_{i} \alpha_i \, k(x, x^{(i)})$$

This function is nonlinear with respect to x, but linear with respect to φ(x).
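A minimal sketch (ours) of this kernelized form, using a Gaussian (RBF) kernel as k; the coefficients alpha and b would come from training, which we omit here:

```python
import numpy as np

def kernel_predict(x, X_train, alpha, b, h=1.0):
    """f(x) = b + sum_i alpha_i * k(x, x^(i)), Gaussian kernel with bandwidth h."""
    k = np.exp(-np.sum((X_train - x) ** 2, axis=1) / (2.0 * h ** 2))
    return b + alpha @ k

# nonlinear in x, but linear in the kernel features k(x, x^(i))
X_train = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
alpha, b = np.array([0.5, -0.2, 0.7]), 0.1
print(kernel_predict(np.array([1.0, 0.5]), X_train, alpha, b))
```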

Kernel functions handle nonlinear unbalanced classification well: they let convex optimization techniques address nonlinear problems in a linear manner, with guaranteed convergence and improved classification accuracy, and they simplify parameter determination. In addition, evaluating the kernel function k is often far more efficient than explicitly constructing the transformation φ(x) [Goodfellow, Bengio and Courville (2017)].

An SVM maps sample data into a high-dimensional feature space through a kernel function. Following the SVM maximum-margin principle, the optimal classification hyperplane can be constructed in this high-dimensional feature space. If the classification margin can be widened, especially between the minority-class samples and the optimal hyperplane, the generalization performance of the classifier and the accuracy on small classes can be effectively improved. This enables the correct classification of unbalanced data [Liu, Huang, Zhu et al. (2009)].

4 Improved method of kernel density estimation model for multi-classification

We extend the DLR model to the multi-classification problem and design an improved multi-classification algorithm. Assuming there are C classes, for k = 1, 2, ..., C the DLR model is defined as follows:

$$p(y = k \mid x) = \frac{e^{w_k^T \phi_k(x)}}{\sum_{j=1}^{C} e^{w_j^T \phi_j(x)}} \tag{25}$$

where w_k = (w_k1, w_k2, ..., w_kD) is the feature weighting parameter of class k, and φ_k = (φ_k1, φ_k2, ..., φ_kD) is the characteristic transformation function of class k.

According to the Nadaraya-Watson estimator, the probability formula of class k is obtained as follows:

$$p(y = k \mid x_d) = \frac{\sum_{i \in D_k} K(x_d, x_{id})}{\sum_{i=1}^{N} K(x_d, x_{id})}, \quad k = 1, \cdots, C$$

Finally, we need to minimize the loss function

$$Loss = -\sum_{i=1}^{N} \sum_{k=1}^{C} 1_{y_i = k} \ln p(y = k \mid x_i)$$

where 1_{y_i = k} is 1 if and only if y_i = k, and 0 otherwise.

We now evaluate the gradient of the loss function with respect to w_k:

$$\frac{\partial Loss}{\partial w_k} = \sum_{i=1}^{N} \left( p(y = k \mid x_i) - 1_{y_i = k} \right) \phi_k(x_i)$$

We adjust the weights w_k along the direction of gradient descent until w_k converges, at which point the model is trained. During testing, the same kernel function transformation is applied to the test data; the transformed φ(x) and the trained w_k are substituted into Eq. (25). We then compare the probabilities of the different classes and choose the class with the largest probability as the result. At this point, we have completed the generalization of logistic regression to multi-classification based on the kernel density function.

To show the difference between kernel-density-estimation logistic regression and classical logistic regression, we compare the corresponding algorithms below.

In the DLR algorithm, the input x is given a feature transformation to obtain φ before the probability in Eq. (25) is calculated; φ is then substituted for x as the input to the probability formula. At the same time, the probability formula is changed from the sigmoid function to the SoftMax function.

In our experiments, we found that the differences of φ among different labels obtained with the plain DLR algorithm are small: the final classification result has large error, minority-class samples cannot be discriminated at all, and training does not reduce the loss. Therefore, we improve the construction of the kernel bandwidth and the data preprocessing with the following scheme.

Figure 2: The process of searching for the optimal coefficient

First, we train the parameters of the kernel function by modifying the coefficient in Eq. (14); we conducted 16 groups of experiments, as shown in Fig. 2. In the earlier experiments, because the bandwidth was too large, the characteristics of the input data X itself were difficult to distinguish. Properly reducing the bandwidth limits the complexity of the model and thereby improves its generalization. Through comparison experiments, we found that changing the coefficient 1.06 in Eq. (14) to 0.02 significantly improves the accuracy of the model. Accordingly, following Fig. 2, we reduce the bandwidth of the kernel function in Eq. (14).

In this way, the spread of h_d is improved. However, the value of φ may then become too large and overflow in subsequent calculations. Feature scaling is a crucial step in data preprocessing: for most machine learning and optimization algorithms, scaling feature values into the same interval improves performance. To accelerate the convergence of the loss function, we normalize φ using the min-max method.

The training process of the improved model is given as Algorithm 3; a sketch of our reading of it follows.
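Since the Algorithm 3 listing itself is not reproduced here, the following Python sketch is only our reading of the steps described above: per-class Nadaraya-Watson features with the reduced 0.02 bandwidth coefficient, min-max normalization of φ, then SoftMax training by gradient descent. Shapes, rates, and epsilon values are assumptions.

```python
import numpy as np

def phi_features(X, y, C, coef=0.02):
    """Per-class DLR features via Nadaraya-Watson with the reduced bandwidth."""
    N, D = X.shape
    h = coef * X.std(axis=0) * N ** (-1 / 5)      # modified Eq. (14)
    phi = np.zeros((C, N, D))
    for d in range(D):
        K = np.exp(-(X[:, None, d] - X[None, :, d]) ** 2 / (2 * h[d] ** 2))
        for k in range(C):
            pk = K[:, y == k].sum(axis=1) / K.sum(axis=1)   # p(y = k | x_d)
            phi[k, :, d] = np.log(pk / np.mean(y == k) + 1e-12)
    # min-max normalization of phi to avoid overflow downstream
    lo, hi = phi.min(), phi.max()
    return (phi - lo) / (hi - lo + 1e-12)

def train_dlr(X, y, C, lr=0.5, epochs=200):
    """Gradient descent on the multi-class loss with per-class features phi_k."""
    phi = phi_features(X, y, C)                   # shape (C, N, D)
    N, D = X.shape
    W = np.zeros((C, D))
    Y = np.eye(C)[y]                              # one-hot labels, 1_{y_i = k}
    for _ in range(epochs):
        scores = np.einsum('kd,knd->nk', W, phi)  # w_k . phi_k(x_i)
        P = np.exp(scores - scores.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)         # p(y = k | x_i), Eq. (25)
        grad = np.einsum('nk,knd->kd', P - Y, phi) / N
        W -= lr * grad
    return W
```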

In the next section, we conduct comparative tests and analyze the relationship between test results and training results when Algorithm 3 is used.

5 Application of the improved algorithm: datasets and verification analysis

In particular, we have implemented the following methods for testing.

1) N-logistic model with One-vs.-Rest, abbreviated NLR.

2) N-logistic model with One-vs.-Rest, combined with the over-sampling method, abbreviated NLR_Sample.

3) N-logistic model with One-vs.-Rest, combined with the SMOTE method, abbreviated NLR_Smote.

4) SoftMax model.

5) SoftMax model combined with Algorithm 3, abbreviated DLR++.

We choose three datasets for testing. The first is the fitted dataset Numb constructed by us: each element contains 10 floating-point values ranging from 0 to 5, and the data are divided into three categories, GroupA, GroupB and GroupC. The second dataset is Iris from UCI, with four floating-point features: sepal length, sepal width, petal length and petal width; the target value is the iris species: virginica, versicolor, or setosa. The third dataset is Wine from UCI, which uses various physicochemical parameters to predict wine quality. There are 11 features: volatile acidity, non-volatile (fixed) acidity, citric acid, residual sugar, chlorides, total sulfur dioxide, free sulfur dioxide, sulfates, density, pH and alcohol. There are three quality classes: 1, 2 and 3.

Table 2: Accuracy (%) of different methods on the three datasets

Table 3: Time (s) of different methods on the three datasets

Table 4: Number of training iterations until loss convergence on the three datasets

To keep the data versatile and the classification results persuasive, we use k-fold cross-validation and split each dataset into training and testing sets at a ratio of 7:3. The test results are given below.
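For reference, a baseline check in this spirit (ours, using scikit-learn's splitter and plain logistic regression on the UCI Iris data; this stands in for the NLR baseline and is not the paper's exact protocol):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
# 7:3 train/test split as in the paper
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))  # held-out accuracy on the 30% test split
```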

From Tab. 2 to Tab. 4, we can see that the DLR++ algorithm gives the best prediction accuracy. Of the three datasets, Numb is linear, while Iris and Wine are nonlinear. The results show that both the N-logistic and SoftMax models can solve the multi-classification problem well. Both the over-sampling and SMOTE sampling methods improve classification on the sample-imbalance problem, raising accuracy by 1.34% and 3.92%, respectively. The improved kernel-density-based DLR++ model is the best among all these methods and has a clear advantage on nonlinear multi-classification problems. The tables also show that the improved DLR++ model converges faster than the original logistic model, using only 1/20 of the training iterations, while the accuracy increases by 7.04%, at the cost of a higher operation time.

From Tab. 5 and Tab. 6, we can see that the improved DLR++ model performs better on large-scale datasets with many categories: it reaches an accuracy of 93.0%, whereas LR reaches 47.0% on the 10-class problem.

Table 5: Performance of DLR++ on datasets of different scales

Table 6: Performance of DLR++ with different numbers of categories

    6 Conclusion

In this paper, we propose an improved logistic regression model based on kernel density estimation that can be applied to nonlinear multi-classification problems, and we compare it against several common logistic regression algorithms. From the experimental results, we found that the sampling methods [Gao, Ding and Han (2008); Chawla, Bowyer, Hall et al. (2002)] can improve classification accuracy, but the resulting training samples differ substantially from the original samples, destroying characteristics inherent in the original data. In contrast, our improved model guarantees the integrity of the samples, has clear advantages in classification accuracy, and generalizes well with an acceptable training speed. There is still room for optimization in training, especially in the matrix-operation stage; in the future we will reduce matrix sizes and use block computation, which is expected to reduce training time and improve efficiency. Combined with applications to document retrieval [Xiong and Wang (2018); Xiong, Shen, Wang et al. (2018)], we also plan to examine whether the improved method is effective for document classification, which is of particular interest to us.

Acknowledgement: The authors would like to thank all anonymous reviewers for their suggestions and feedback. This work was supported by the National Natural Science Foundation of China (Grant No. 61379103).
