
    Unsupervised natural image patch learning

    Computational Visual Media, 2019

    Dov Danon, Hadar Averbuch-Elor, Ohad Fried, and Daniel Cohen-Or

    Abstract A metric for natural image patches is an important tool for analyzing images. An efficient means of learning one is to train a deep network to map an image patch to a vector space, in which the Euclidean distance reflects patch similarity. Previous attempts learned such an embedding in a supervised manner, requiring the availability of many annotated images. In this paper, we present an unsupervised embedding of natural image patches, avoiding the need for annotated images. The key idea is that the similarity of two patches can be learned from the prevalence of their spatial proximity in natural images. Clearly, relying on this simple principle, many spatially nearby pairs are outliers. However, as we show, these outliers do not harm the convergence of the metric learning. We show that our unsupervised embedding approach is more effective than a supervised one, or one that uses deep patch representations. Moreover, we show that it naturally lends itself to an efficient self-supervised domain adaptation technique onto a target domain that contains a common foreground object.

    Keywords unsupervised learning; metric learning

    1 Introduction

    Humans can easily understand what they see in different regions of an image, or tell whether two regions are similar or not. However, despite recent progress, such forms of image understanding remain extremely challenging. One way to address image understanding takes inspiration from the ability of human observers to understand image contents even when viewing them through a small observation window. Image understanding can then be formalized as the ability to encode the contents of small image patches into representation vectors. To keep such encodings generic, they are not predetermined by certain classes; instead, the aim is to project image patches into an embedding space where Euclidean distances correlate with general similarity among image patches. As natural patches form a low-dimensional manifold in the space of patches [1, 2], such an embedding of image patches enables various image understanding and segmentation tasks. For example, semantic segmentation is reduced to a simple clustering technique based on l2 distances.

    The key insight of our work is that such an embedding of image patches can be trained by a neural network in an unsupervised manner. Using semantic annotations allows direct sampling of positive and negative pairs of patches that can be embedded using a triplet loss [3]. However, data labeling is laborious and expensive. Therefore, only a tiny fraction of the images available online can be utilized by supervised techniques, necessarily limiting what can be learned. An unsupervised embedding can also be based on deep patch representations that are learned indirectly by the network, e.g., Ref. [4]. However, as we show, explicitly training the network for an embedding achieves significantly higher performance.

    In this work, we introduce an unsupervised patch embedding method, which analyzes natural image patches to define a mapping from a patch to a vector, such that the Euclidean distance between two vectors reflects their perceptual similarity. We observe that the similarity of two patches in natural images is correlated with their spatial distance. In other words, patches of coherent or semantically similar segments tend to be spatially close, forming a surprisingly simple but strong correlation between patch similarity and spatial distance. Clearly, not all neighboring patches are similar (see Fig. 2). However, as we shall show, these dissimilar close patches are rare enough and uncorrelated, resulting in insignificant noise in the learning system, which does not prohibit the learning.

    Our embedding yields deep images, as each patch is mapped to a 128D vector by a deep network. See the visualization of the deep images in the second and fourth rows of Fig. 1, obtained by projecting the 128D vectors onto their three principal directions, producing pseudo-RGB images where similar colors correspond to similar embedded points. Using our embedding technique, we further present a domain specialization method. Given a new domain that contains a common foreground object, we use self-supervision to refine the initial embedding results for the specific domain, yielding a more accurate embedding.

    We use a convolutional neural network (CNN) to learn a 128D embedding space. We train the network on 2.5 million natural patches with a triplet-loss objective function. Section 3 explains our embedding framework in detail. Section 4 describes our domain adaptation technique for a target domain that contains a common foreground object. In Section 5, we show that the patch embedding space learned using our method is more effective than embedding spaces that were learned with supervision, or those based on handcrafted features or deep patch representations. We further show that by fine-tuning the network to a specific domain using self-supervision, we can further increase performance.

    Fig. 1 Given a natural input image, our technique learns a high-dimensional embedding space, where Euclidean distances between embedded image patches reflect their similarity (visualized in pseudo-RGB colors).

    2 Related work

    Our work is closely related to dimensionality reduction and embedding techniques, image patch representation, transfer learning, and neural-network-based optimization. In the following, we highlight directly relevant research.

    Image patches can be treated as a collection of partial objects with different textures. Julesz [5] introduced textons as a way to represent texture via second-order statistics of small patches. Various filter banks can be used for texture representation [6], e.g., Gabor filters [7]. Hierarchical filter responses have also been used with great success for texture synthesis [8, 9]. All these filters are fixed and not learned from data. In contrast, we learn the embedding by analyzing the distribution of all natural patches, thus avoiding the bias of handcrafted features.

    The idea of representing a patch by its pixel values (without attempting dimensionality reduction) has had success in various applications [10]; see Barnes and Zhang [11] for a survey. In Section 5, we compare our method against a raw pixel descriptor.

    PatchNet [14] introduces a compact and hierarchical representation of image regions. It uses raw L*a*b* pixel values to represent patches. PatchTable [15] proposes an efficient approximate nearest neighbor (ANN) implementation. ANN is an orthogonal and complementary task to patch representation.

    Fig. 2 Learning patch similarity from spatial distances. Our premise is that two patches sampled from the same swatch (colored red) are more likely to be similar to each other than to a patch sampled from a distant one (colored blue).

    Recently, deep networks have been used for image region representation and segmentation. Cimpoi et al. [16] use the last convolution layer of a convolutional neural network (CNN) as an image region descriptor. It is not suitable for patch representation, as it produces a 65k-dimensional vector per patch. Fully convolutional networks (FCNs) [17] have proven potent for, e.g., image segmentation. We compare to FCNs in Section 5.

    Our work is based on Patch2Vec [3], which also uses deep networks to train a meaningful patch representation. However, in contrast to our method, Patch2Vec is a supervised method that requires an annotated segmentation dataset for training.

    The ideas of using spatial proximity in image space and temporal proximity in videos have been utilized in the past. For self-supervised learning, Isola et al. [18] utilize space and time co-occurrences to learn patch, frame, and photo affinities. Wang and Gupta [19] track objects in videos to generate data, also in a self-supervised manner. Wang et al. [33] introduce image extrapolation using graph matching, and exploit similarity in the spatial domain. Closer to our method, Doersch et al. [4] train a network (UVRL) to predict the spatial relationships between pairs of patches, and use the patch representation to group similar visual concepts. Pathak et al. [20] train a network to predict missing content based on its spatial surroundings. These methods learn the patch representation while training the network for a different task, and the embedding is only provided implicitly. In our work, the network is directly trained for patch embedding. We compare our method against UVRL in Section 5.

    Given a labeled set in a source domain and an unlabeled set of samples in a target domain, domain adaptation aims to generalize the classifier learned on the source domain to the target domain [21, 22]. It has become common practice to pre-train a classifier on a large labeled image database, such as ImageNet [23], and transfer the parameters to a target domain [24, 25]. See Patel et al. [26] for a survey of recent visual techniques. In our work, we refine our embeddings from the natural image source domain to a target domain that contains a common object. Unlike recent unsupervised domain adaptation techniques [27, 28], in our case neither domain contains labeled data.

    3 Patch space embedding

    In this work, we take advantage of the fact that there is strong coherence in the appearance of semantic segments in natural images. It is thus expected that nearby patches have similar appearance. The correlation between spatial proximity and appearance similarity is learned and encoded in a patch space, where the Euclidean distance between two patches reflects their appearance similarity.

    The patch embedding space is learned by training a neural network using a triplet loss:

    L(p_c, p_n, p_f) = max(||f(p_c) − f(p_n)||² − ||f(p_c) − f(p_f)||² + m, 0)

    where p_c, p_n, and p_f are three patches of size s×s selected from a collection of natural images, such that p_c is the current patch, p_n is a nearby patch, and p_f is a distant patch; f is the embedding network and m is a margin value (set empirically to 0.2). We use s = 16 for all our results.
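
    For concreteness, the following is a minimal sketch of this objective, assuming a PyTorch embedding network f that maps batches of 16×16 patches to 128D vectors (the function name triplet_loss and the batch interface are illustrative, not the authors' code):

    ```python
    import torch
    import torch.nn.functional as F

    def triplet_loss(f, p_c, p_n, p_f, margin=0.2):
        """Hinge triplet loss: pull the nearby patch p_n toward the current
        patch p_c, and push the distant patch p_f at least `margin` further away."""
        e_c, e_n, e_f = f(p_c), f(p_n), f(p_f)        # each: (batch, 128)
        d_near = (e_c - e_n).pow(2).sum(dim=1)        # squared L2 distances
        d_far = (e_c - e_f).pow(2).sum(dim=1)
        return F.relu(d_near - d_far + margin).mean()
    ```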

    To train our network, we utilize a large number of natural images (5000 images from the MIT-Adobe FiveK Dataset, in our implementation), and for each image we sample six disjoint regions, referred to as swatches. Each swatch is a 3×3 grid of patches. When sampling, we enforce a minimal distance of 3s between swatches. In total, we sample 6 swatches per image, which is the maximal number guaranteed to fit in all our images. A triplet is formed by randomly picking two patches from one swatch, and one from another swatch. The assumption is that the two patches taken from the same swatch are close enough, while the third is distant. In our implementation, the distant patch is always taken from the same image. The above scheme for sampling triplets is illustrated in Fig. 2, where only two swatches are shown, one in red and one in blue. A triplet is formed by sampling two positive patches from the red swatch, and one negative patch from the blue one.
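
    This sampling scheme can be sketched as follows, assuming NumPy and images given as H×W×3 arrays; the rejection-sampling placement and the helper names are simplifications of ours, not the authors' code:

    ```python
    import numpy as np

    S = 16             # patch side length s
    GRID = 3           # each swatch is a 3x3 grid of patches
    SWATCH = GRID * S  # swatch side length in pixels

    def sample_swatch_origins(h, w, n_swatches=6, min_gap=3 * S, rng=np.random):
        """Pick top-left corners of n_swatches swatches whose bounding boxes are
        at least min_gap pixels apart (rejection sampling; assumes the image is
        large enough for the loop to terminate)."""
        origins = []
        while len(origins) < n_swatches:
            y, x = rng.randint(0, h - SWATCH), rng.randint(0, w - SWATCH)
            if all(max(abs(y - y0), abs(x - x0)) >= SWATCH + min_gap
                   for y0, x0 in origins):
                origins.append((y, x))
        return origins

    def patch_at(img, y, x):
        return img[y:y + S, x:x + S]

    def sample_triplet(img, origins, rng=np.random):
        """p_c and p_n come from one swatch, p_f from a different one."""
        i, j = rng.choice(len(origins), size=2, replace=False)
        (yc, xc), (yf, xf) = origins[i], origins[j]
        a, b = rng.choice(GRID * GRID, size=2, replace=False)  # two cells, same swatch
        c = rng.randint(GRID * GRID)                           # any cell, far swatch
        p_c = patch_at(img, yc + (a // GRID) * S, xc + (a % GRID) * S)
        p_n = patch_at(img, yc + (b // GRID) * S, xc + (b % GRID) * S)
        p_f = patch_at(img, yf + (c // GRID) * S, xf + (c % GRID) * S)
        return p_c, p_n, p_f
    ```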

    Furthermore, we adopt the principle described in Ref. [3] of selecting "hard" examples, i.e., in each epoch we use triplets that were previously not handled well by the network. This is expressed by the following equation:

    N = { p_f : ||f(p_c) − f(p_f)||² < ||f(p_c) − f(p_n)||² + m }

    Thus, the set N contains distant patches that the network embedded within the margin m. The network f(p) is trained to create an embedding space that satisfies the training triplets. Once trained, f(p) can embed any given patch by feed-forwarding it through the network, yielding its 128D feature vector.
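
    A minimal sketch of this hard-example selection, under the same assumptions as above (PyTorch, a batched embedding network f):

    ```python
    import torch

    @torch.no_grad()
    def hard_triplet_mask(f, p_c, p_n, p_f, margin=0.2):
        """True for triplets the network still gets wrong, i.e., where the
        distant patch is embedded within the margin of the nearby one (the set N)."""
        e_c, e_n, e_f = f(p_c), f(p_n), f(p_f)
        d_near = (e_c - e_n).pow(2).sum(dim=1)
        d_far = (e_c - e_f).pow(2).sum(dim=1)
        return d_far < d_near + margin
    ```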

    To cope with outliers, we incorporate strong regularization into the network: the embedding is constrained to lie on the unit hypersphere, which prevents overfitting. The unit hypersphere provides structure to an embedding space that is otherwise unbounded.
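
    In practice, this constraint amounts to L2-normalizing the network output; a one-line sketch, again assuming PyTorch, with net as the unnormalized trunk:

    ```python
    import torch.nn.functional as F

    def embed(net, patches):
        # Project the 128D codes onto the unit hypersphere.
        return F.normalize(net(patches), p=2, dim=1)
    ```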

    The architecture of our network is similar to the one used in Ref. [3], but with the changes required to support 16×16 patches (the network is illustrated in Fig. 5). Note that inception layers are implemented as detailed in Szegedy et al. [29].

    Fig. 3 Network loss convergence. The graph shows the losses on the training (yellow) and test (blue) data. The loss function is not completely stable due to the presence of outlier swatches. Nonetheless, learning converges for both sets (starting from a loss of around 0.22, down to 0.07), demonstrating the network's resilience to outliers.

    We train the network for 1600 epochs on an NVIDIA GTX 1080. Training takes approximately 24 hours. Network convergence is shown in Fig. 3. Losses during training and testing (yellow and blue, respectively) are similar, implying that our basic assumption holds and generalizes well. Furthermore, although learning converges, convergence is not completely stable. This may be attributed to the presence of outliers in the swatches, i.e., two patches from the same swatch but not from the same segment, or two patches from different swatches but from the same segment.

    We conducted several experiments that tried to distill the input to the network (the patches) using hand-crafted features. Specifically, we discarded a distant patch if its color histogram was too close to that of the current patch. Perhaps surprisingly, this filtering reduced accuracy by 8%. We hypothesize that this is because the network generalizes better on its own than with a hand-crafted filter as a pre-processing step.

    4 Domain specialization

    In Section 3, we described an unsupervised technique to encode any natural image patch as a 128D vector. Given a new domain that contains a common foreground object, we can improve the embedding by fine-tuning the network, or simply training it on patches taken from the new domain. However, we can do better by using the initial embedding obtained by the previously described method to generate a preliminary segmentation. We can then use these rough segments to "supervise" the refined embedding.

    To generate the rough segments, the images are first transformed using the patch embedding, so that each pixel is mapped to a 128D vector. Next, we apply multi-region graph-cut image segmentation with 4 regions [30] (see the third row of Fig. 4). We experimented with 3–7 regions, and empirically found 4 to perform best.
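
    As an illustration of this step, the sketch below maps per-pixel embeddings to region labels. Note that it substitutes k-means clustering (scikit-learn) for the paper's multi-region graph cut [30], omitting the pairwise smoothness term, so it is a simplified stand-in rather than the authors' method:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def rough_segments(embedded, n_regions=4, seed=0):
        """embedded: (H, W, 128) array of per-pixel embeddings -> (H, W) labels."""
        h, w, d = embedded.shape
        labels = KMeans(n_clusters=n_regions, random_state=seed).fit_predict(
            embedded.reshape(-1, d))
        return labels.reshape(h, w)
    ```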

    These segments are then used as supervision for fine-tuning the network, where the triplets are defined based on these segments: p_c and p_n are taken from the same foreground segment, and p_f is a patch taken from any other segment in the image.
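
    A sketch of this segment-guided triplet sampling, assuming the label map from the previous step; border handling and the choice of foreground label are deliberately simplified here:

    ```python
    import numpy as np

    def sample_refinement_triplet(img, segments, fg_label, s=16, rng=np.random):
        """p_c, p_n: patches anchored at pixels of the foreground segment;
        p_f: a patch anchored at a pixel of any other segment.
        Patches extending past segment or image borders are ignored for brevity."""
        fy, fx = np.nonzero(segments == fg_label)
        oy, ox = np.nonzero(segments != fg_label)
        i, j = rng.choice(len(fy), size=2, replace=False)
        k = rng.randint(len(oy))
        crop = lambda y, x: img[y:y + s, x:x + s]
        return crop(fy[i], fx[i]), crop(fy[j], fx[j]), crop(oy[k], ox[k])
    ```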

    In our experiments, we executed the fine-tuning process for just 400 epochs. This process improves our embedding space and makes it much more coherent (see Table 2 and Fig. 8).

    Fig. 4 Refining the embedding using self-supervision. Given a new domain that contains a common foreground object (the input images at the top), we refine our initial embedding (second row) by automatically generating semantic guiding segments (in unique colors, third row) for the training images. This yields a more coherent embedding of the common object (bottom row).

    5 Results

    We performed quantitative and qualitative evaluations to analyze the performance of our embedding technique. The quantitative evaluation was conducted on ground-truth images from the Berkeley Segmentation Dataset (BSDS500) [31], which contains natural images spanning a wide range of objects, as well as images from the object-specific internet datasets of Rubinstein et al. [32]. These object-specific datasets further enabled a quantitative evaluation of our domain specialization technique.

    To quantitatively demonstrate our improved performance over previous work, we adopt the measure used by Fried et al. [3]. We start by sampling "same segment" and "different segment" pairs of patches and calculate their distances in the embedding space. Next, for a given distance threshold, we predict that all pairs below the threshold are from the same segment, and evaluate the prediction (over all threshold values) by calculating the area under the receiver operating characteristic (ROC) curve.
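
    This protocol reduces to a standard ROC-AUC computation over pair distances; a sketch using scikit-learn, negating distances so that smaller distances score "same segment" higher:

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def pair_auc(dists, same):
        """dists: (N,) embedding-space L2 distances for sampled patch pairs;
        same: (N,) bools, True for "same segment" pairs."""
        return roc_auc_score(np.asarray(same, dtype=int), -np.asarray(dists))
    ```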

    Table 1 contains the full comparison. Note that Ref. [3] is supervised, requiring an annotated segmentation dataset. The comparison to raw RGB pixels provides a more intuitive baseline. On the other hand, the accuracy of a human annotator (bottom of Table 1) demonstrates the ambiguity of the problem and a level of accuracy that can be considered ideal.

    To qualitatively visualize the quality of our embeddings, as previously detailed, we project the 128D vectors onto their three principal directions, producing pseudo-RGB images in which similar colors correspond to similar embedded points. In Fig. 6, we visualize our embeddings and compare the results to the supervised technique of Fried et al. [3] on their training data. Our results are more coherent than the ones obtained with supervision, even though our method did not train on these images. In the Electronic Supplementary Material (ESM), we provide a comparison for the full BSDS500 dataset; these results further demonstrate the quality of our embeddings.
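
    The pseudo-RGB visualization can be sketched as a per-image PCA of the pixel embeddings (scikit-learn); rescaling each channel to [0, 1] is our assumption, as the exact normalization is not specified:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def embedding_to_rgb(embedded):
        """embedded: (H, W, 128) per-pixel embeddings -> (H, W, 3) pseudo-RGB image."""
        h, w, d = embedded.shape
        rgb = PCA(n_components=3).fit_transform(embedded.reshape(-1, d))
        rgb -= rgb.min(axis=0)            # shift each channel to start at 0
        rgb /= rgb.max(axis=0) + 1e-8     # rescale each channel to [0, 1]
        return rgb.reshape(h, w, 3)
    ```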

    In Fig. 7, we compare to the results of Doersch et al. [4], whose patch representation can also be obtained without supervision. For comparison purposes, we use both their pre-trained weights and weights retrained on BSDS500. We use their fc6 layer, which performed best in our tests. Unlike ours, their method does not produce similar embeddings (visualized as similar colors in the figure) for pixels of the same region.

    Table 1 Patch embedding evaluation. We compare our method to alternative patch representations. We report AUC scores using the l2 distance between patch representations as a means to predict whether a pair of patches comes from the same segment or not.

    Fig. 5 Our network architecture.

    Fig. 6 Supervised vs. unsupervised embedding. Top: input images, from the training data of Ref. [3]. Middle: results of Patch2Vec [3]. Bottom: results of our unsupervised technique. Note that although our method did not train on these images, the textures are significantly less apparent in our embeddings. This suggests that segments with similar texture are embedded to closer locations in the embedding space.

    To evaluate our domain specialization technique, we fine-tune our network in two ways. First, we retrain the weights by simply training on patches taken from the object-specific datasets of Rubinstein et al. [32]. The second option is described in Section 4, where we use self-supervision to refine the results in the new domain.

    In Table 2, we report the AUC scores in both settings. Since the ground truth for these datasets contains only a foreground-background segmentation (and not a segment for each semantic object), the AUC measure required a slight adjustment: as the background may contain many unrelated segments, we sample "same segment" pairs only from the foreground. As validated in Table 2, our method successfully learns and adjusts to the new domain. Moreover, our self-supervision scheme further boosts performance.

    Table 2 Domain specialization evaluation: AUC scores on the object-specific datasets provided by Ref. [32] before fine-tuning the network (baseline), after fine-tuning the network on patches from the dataset (fine-tuned), and after fine-tuning the network using our self-supervision technique (fine-tuned + self-supervision).

    In Fig. 8, we qualitatively demonstrate the improvement on samples from the HORSE dataset, half of which belong to the training set and half to the test set. Since the two sets are visually indistinguishable, they are mixed together in the figure. As the figure illustrates, the colors, and thereby the embeddings, of the horses' parts are more compatible and in general more homogeneous. For more results, see the ESM.

    6 Summary, limitations, and future work

    We have presented an unsupervised patch embedding technique, where the network learns to map natural image patches to 128D codes such that the l2 metric reflects their similarity. We showed that the triplet loss we use to train the network explicitly for embedding outperforms other embeddings that are inferred from deep representations learned for other tasks or designed specifically to learn similarities between patches. Generally speaking, learning to embed with a network has its limitations, as it is applied at the patch level. Feeding patches forward through a network is a computationally intensive task, and analyzing an image as a series of patches is time consuming. Parallel analysis of a multitude of patches, possibly overlapping ones, could significantly accelerate the process.

    Fig. 7 Comparison between our embedding and one inferred from deep representations, UVRL [4], using their pre-trained weights (second row) and weights retrained on BSDS500 (third row). As demonstrated above, our technique maps pixels from similar regions to closer values.

    Fig. 8 Refining the embeddings to the HORSE domain. We show the embeddings before and after the domain specialization stage. As shown, and as quantified in Table 2, the embeddings of the objects (e.g., the horses) are more coherent after refinement.

    To improve performance and transfer the learning to a new domain, we utilize the embedding obtained by a trained network as self-supervision. The embedded image is segmented by a naive method to yield a rough segmentation. As demonstrated, these segments, although imperfect, can successfully supervise the refinement of the network for the given new domain. However, we believe this can be further improved by using more advanced segmentation methods. In the future, we wish to consider conservative segmentation, where the segments need not cover the entire image, excluding regions with low confidence.

    Furthermore, in the future, we would like to utilize our embedding technique to advance segmentation and foreground extraction methods. In particular, we hope to analyze large sets of embedded images, aiming to co-segment the common foreground of a weakly supervised set. We believe that the common foreground object can provide self-supervision to further improve the embedding performance.

    Electronic Supplementary Material Supplementary material is available in the online version of this article at https://doi.org/10.1007/s41095-019-0147-y.

    Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

    The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

    To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

    Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
