
ShadowGAN: Shadow synthesis for virtual objects with conditional adversarial networks

Computational Visual Media, 2019, Issue 1

Shuyang Zhang, Runze Liang, and Miao Wang

Abstract We introduce ShadowGAN, a generative adversarial network (GAN) for synthesizing shadows for virtual objects inserted in images. Given a target image containing several existing objects with shadows, and an input source object with a specified insertion position, the network generates a realistic shadow for the source object. The shadow is synthesized by a generator; using the proposed local adversarial and global adversarial discriminators, the synthetic shadow's appearance is locally realistic in shape, and globally consistent with other objects' shadows in terms of shadow direction and area. To overcome the lack of training data, we produced training samples based on public 3D models and rendering technology. Experimental results from a user study show that the synthetic shadowed results look natural and authentic.

Keywords shadow synthesis; deep learning; generative adversarial networks; image synthesis

    1 Introduction

Inserting virtual objects into scenes has a wide range of applications in visual media, from movies, advertisements, and entertainment to virtual reality. Consistency of shadows between the original scene and the inserted object contributes greatly to the naturalness of the results. If no prior scene knowledge is provided, making the scene look realistic requires much labor and expertise in a tedious photo or video editing process. Even an experienced editor spends much effort to produce convincing results using commercial editing software such as Adobe Photoshop. The difficulties in this process stem from the lack of accurate estimates of illumination and scene geometry.

In this paper, we address the shadow synthesis problem for virtual objects inserted in an image. Shadow synthesis can be implemented using rendering techniques, which require much information: illumination, scene models, rendering frameworks, etc. Other methods [1–4] synthesize shadows with approximately estimated illumination and reconstructed scene geometry. Such computations either require user interaction or precise tools, and are time-consuming.

We propose to solve this problem using a novel deep learning-based framework, without explicit knowledge of scene geometry and illumination. We use a convolutional neural network to directly predict the shadow map for a virtually inserted object, given only the target scene image and the specified insertion position in the image domain. Specifically, we use a generative adversarial network (GAN) framework, where the generator G tries to produce outputs that cannot be distinguished from "real" results, while the local discriminator D_L and global discriminator D_G try to detect the generator's "fakes" from local and global perspectives, respectively. During training, the generator and discriminators compete until convergence. As a result, a real-valued, single-channel shadow map is predicted, from which the edited result with a synthetic shadow can be generated by a simple pixel-wise multiplication with the original image. ShadowGAN places few constraints on its input, and its computational efficiency is high, as only a single feed-forward pass through the network is needed.

Fig. 1 Input and output of ShadowGAN. (a) Given an input target scene with original objects and a virtually inserted object (here, a toy car), as well as the object mask (top right of (a)), ShadowGAN predicts a shadow map (b), which can be used to synthesize the shadowed result (c) with a simple pixel-wise product operation. The ground truth result is shown in (d).

Our method works for an image of a static scene. We assume scene surfaces are made of Lambertian materials, and we do not model specular reflection or inter-reflections between surfaces in the scene. Despite these assumptions, we can produce plausible results. To summarize, the contributions of our work are:

• A convolutional neural network, ShadowGAN, which can synthesize shadows for virtually inserted objects in target images.

• A local–global conditional adversarial scheme for both shape and direction supervision in shadow synthesis.

• A practical dataset for shadow synthesis network training, produced using rendering techniques and public 3D models.

    2 Related work

In this section, we discuss related prior work, mainly on shadow synthesis, shadow detection and removal, and image-to-image translation using generative adversarial networks.

    2.1 Shadow synthesis

In image editing, knowledge of illumination and scene geometry is essential to achieving realistic shadow synthesis results. Previous methods have been proposed to recover such information from input images or videos. Intrinsic image decomposition algorithms aim to separate a single image I into a pixel-wise product of an albedo or reflectance layer R and a shading layer S [5–8]. The reflectance layer reveals how the material reflects incident light, and the shading layer accounts for illumination effects due to geometry, shadows, and inter-reflections. However, approaches based on pixel-wise illumination and reflectance maps are not effective enough to support complex editing operations such as object insertion. For visually plausible results, shadows must be carefully computed, which requires an analysis of scene geometry and lighting configuration in 3D space. The problem of estimating illumination from images, or inverse lighting, has also been investigated: Refs. [9, 10] recover the illumination distribution in a scene from object shadows of known shapes. Khan et al. [11] proposed editing object materials in a static image. Liu et al. [4] estimated illumination and scene geometry from video for various video applications. Ge et al. [12] proposed an object-aware image editing approach to obtain consistency in structure, color, and texture in a unified way.

Rendering virtual objects into real scenes has long been investigated; a survey is provided by Kronander et al. [13]. Various ways have been explored to solve the problems of illumination and geometry recovery. Debevec [14] proposed estimating scene radiance and global illumination using a mirrored ball to capture a high-dynamic-range lighting environment, to support object insertion. Karsch et al. [1] developed an image composition system to render synthetic objects into legacy photographs; the scene structure and area lights are provided by user interaction or a data-driven approach [2]. Briefly, previous methods for shadow synthesis either require user interaction and scene knowledge, or recover explicit representations of scene geometry and illumination. Our method, in contrast, is novel in synthesizing shadows using a convolutional neural network without any requirements on the scene or the inserted object model.

2.2 Shadow detection and removal

The opposite problem to shadow synthesis, i.e., shadow detection and removal, has been studied in the computer vision community [15–20]. Its goals are to separate the target image into lit and shadowed areas, and thence to remove the shadows. In early work, color [15, 20], edge [18], or segmentation [19] cues were used to build high-level features for shadow description. Ma et al. [21] introduced appearance harmonization, which makes the appearance of a deshadowed region compatible with the rest of the image. Recently, convolutional neural networks for shadow removal have been proposed [16, 17]. In Ref. [17], the input image is decomposed into a shadow-free image and a shadow matte; the shadow matte is predicted using a convolutional neural network. Two stacked conditional GANs successively detect the shadow region and remove the shadow matte.

In the shadow removal problem, the objects casting shadows are commonly absent, while in the shadow synthesis problem, a virtually inserted object is present.

    2.3 Image-to-image translation using generative adversarial networks

Goodfellow et al. [22] first introduced the concept of the generative adversarial network (GAN), consisting of two sub-networks: a generator (G) and a discriminator (D). G's task is to generate outputs that resemble the ground truth, while D tries to distinguish between fake and real inputs, i.e., between generated output and the ground truth. G and D work against each other, and the ideal outcome is for G to produce outputs that D cannot discriminate. Since its introduction, the GAN method has been widely applied to image-to-image translation problems, such as face image synthesis [23–25], image super-resolution [26], and image completion [27, 28]. Variations of the GAN architecture have also been developed, including conditional GAN [29, 30], CycleGAN [31], StarGAN [24], etc.

Isola et al. [30] proposed a GAN that translates an image into another domain, such as from a sketch to a photo, from architectural maps to photos, or from black-and-white to color photos. Their approach used a U-Net structure inside the generator, enabling earlier convolutions to be concatenated with later deconvolutional layers to pass down information about the input. In an image completion task [27], the contents of an arbitrary image region, conditioned on its surroundings, are generated by a convolutional neural network. Later, Iizuka et al. [28] proposed an image completion network with global and local discriminators; the addition of a local discriminator helps scrutinize the details of the completed image. Portenier et al. [32] developed the Faceshop system, which supports interactive face editing with user-provided sketch and color information as input conditions for the GAN architecture. Wei et al. [33] proposed learning adaptive receptive fields instead of manually selecting dilated convolutional kernels.

Our proposed ShadowGAN is an adaptation of the GAN, which uses a local discriminator to guarantee shape correctness and a global discriminator to ensure that the shadow's direction and area are compatible with other objects' shadows.

Fig. 2 Training a conditional generative adversarial network to synthesize shadow maps. The local discriminator D_L learns to classify between fake and real cropped tuples. The global discriminator D_G learns to classify between fake and real tuples from a global view. The generator G learns to fool the discriminators.

3 Method

    3.1 Training data

    3.1.1 Approach

Our proposed ShadowGAN is trained on synthetic data, where static scene images are rendered using 3D models indexed by ShapeNet [34]. Given an input target scene image I_t including original objects with shadows and a virtually inserted object without a shadow, whose position in the scene is specified by a mask m_s, our goal is to predict a shadow map S, with which the output image I_o with a synthetic shadow can be obtained by a simple pixel-wise product operation I_o = I_t ⊙ S. With the scene image I_t and source object mask m_s as inputs, the shadow map S is predicted using a generative network (see Fig. 3), where a reconstruction loss and two adversarial losses are used to guarantee that the synthesis produces realistic output.
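
The compositing step is a per-pixel multiply. A minimal NumPy sketch follows; the function name and the assumption of float images in [0, 1] are ours, not the paper's:

```python
import numpy as np

def composite_shadow(target: np.ndarray, shadow_map: np.ndarray) -> np.ndarray:
    """Apply a predicted shadow map to a scene image.

    target:     H x W x 3 float image in [0, 1] (the scene I_t containing
                the inserted object but no shadow for it).
    shadow_map: H x W single-channel map S in [0, 1]; 1 = fully lit,
                values < 1 darken the pixel.
    Returns the shadowed composite I_o = I_t * S (pixel-wise).
    """
    return target * shadow_map[..., np.newaxis]
```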

As a supervised deep learning-based image synthesis method, ShadowGAN requires paired input and ground truth images as training data, where the input scene image I_t contains N objects (N ≤ 3 in our work) with shadows and one virtually inserted source object without a shadow; its mask m_s indicating the insertion region is also provided. The ground truth shadow map S has the same size as I_t. Each position p of S holds a real number, such that the output synthetic image color I_o(p) can be obtained by multiplying the scene image color I_t(p) by the coefficient S(p), under the assumption that ambient light is present in the scene.

Such data are impossible to collect effectively in real life. Firstly, scenes in which a few objects cast shadows while one object casts none do not occur in reality; moreover, if the virtually inserted object is copied and pasted from another photo, the ground truth shadow map S cannot be generated efficiently and realistically. Secondly, training requires a wide variety of illumination, scene, and camera configurations, which is both tedious and challenging to achieve with real-life photo capture.

Instead of using real-life photos, we use rendering technology to generate the training data. We render each target scene image I_t with N objects placed on the ground with shadows and one object with its shadow turned off. The shadow map S is generated by rendering a second image I′_t of the same scene with all shadows turned on, then dividing it pixel-wise by I_t: S = I′_t / I_t.
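
This construction can be sketched in a few lines of NumPy. The luminance averaging used here to obtain a single-channel map, and the clipping against path-tracing noise, are our assumptions; the paper only states the division:

```python
import numpy as np

def make_shadow_map(shadows_on: np.ndarray, shadow_off: np.ndarray,
                    eps: float = 1e-6) -> np.ndarray:
    """Ground-truth shadow map from two renders of the same scene.

    shadows_on:  render I'_t with every shadow enabled.
    shadow_off:  render I_t with the inserted object's shadow disabled.
    Pixels untouched by the new shadow divide to ~1; newly shadowed
    pixels divide to values < 1.
    """
    gray_on = shadows_on.mean(axis=-1)    # channel reduction: our assumption
    gray_off = shadow_off.mean(axis=-1)
    return np.clip(gray_on / (gray_off + eps), 0.0, 1.0)
```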

    3.1.2 Scenes

We use a subset of commonly seen 3D model categories, such as can, printer, and bed, from a publicly available dataset, ShapeNet [34]. The object categories used for rendering are listed in Table 1. In total, 9265 objects were selected for rendering scenes. To render realistic ground planes, we downloaded textures from the Internet using keyword searches (e.g., woollen, stone, tablecloth); a total of 110 textures were randomly chosen for rendering the plane.

Table 1 ShapeNet 3D model categories used to render the target scene

In each target scene image, up to four objects were randomly selected from the model collection, one of them being the virtually inserted object, and the rest being the original objects in the scene.

We assume each of the x, y, z coordinates to be in the range [−1, 1]; the ground plane is set to P = {(x, y, z) | x ∈ [−1, 1], y ∈ [−1, 1], z = 0}. The four randomly selected objects are placed at locations (0.6, 0.6, 0), (−0.6, 0.6, 0), (−0.6, −0.6, 0), and (0.6, −0.6, 0), each randomly rotated about the z-axis.
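
A minimal sketch of this placement scheme is given below. The anchor coordinates are from the paper; the helper name and the degree-based rotation parameterization are illustrative assumptions:

```python
import random

# Fixed anchor positions on the ground plane z = 0 (from the paper);
# each selected object receives a random rotation about the z-axis.
ANCHORS = [(0.6, 0.6, 0.0), (-0.6, 0.6, 0.0),
           (-0.6, -0.6, 0.0), (0.6, -0.6, 0.0)]

def sample_scene_layout(model_ids, num_objects=4):
    """Pick up to `num_objects` models and assign each an anchor position
    plus a random z-rotation. One returned entry is later treated as the
    virtually inserted object."""
    chosen = random.sample(model_ids, num_objects)
    layout = []
    for model_id, anchor in zip(chosen, ANCHORS):
        theta = random.uniform(0.0, 360.0)  # rotation about z, in degrees
        layout.append({"model": model_id, "position": anchor,
                       "z_rotation_deg": theta})
    return layout
```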

    3.1.3 Camera

The camera position P_c = (x_c, y_c, z_c) was randomly chosen in 3D space within the range:

    3.1.4 Illumination

All scenes were illuminated by a single white point light of fixed intensity. The distance between the light and the center of the floor was restricted to a limited range: the light position P_l = (x_l, y_l, z_l) was randomly chosen in the following range:

    3.1.5 Rendering

We used path tracing [35] to render the scenes, with 128 samples per pixel. To find the mask of the inserted object, we rendered it again with its material set to pure black, and then extracted its mask from the rendered image.
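
A plausible reconstruction of this mask-extraction step follows; the threshold value is our assumption (the paper does not state one), chosen to absorb path-tracing noise:

```python
import numpy as np

def extract_object_mask(black_render: np.ndarray,
                        threshold: float = 0.02) -> np.ndarray:
    """Recover the inserted object's mask m_s from an auxiliary render in
    which the object's material is set to pure black.

    black_render: H x W x 3 float image in [0, 1].
    Pixels darker than `threshold` in every channel are taken to belong
    to the object.
    """
    return (black_render.max(axis=-1) < threshold).astype(np.float32)
```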

    3.1.6 Training data

As a result, 12,400 training samples were generated, each comprising a scene image I_t, source object mask m_s, and ground truth shadow map S, rendered at resolution 256×256.
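
For training, such triplets can be exposed through a standard dataset wrapper. The following PyTorch sketch assumes an illustrative on-disk layout (the scene/, mask/, and shadow/ directory names are hypothetical, not the authors'):

```python
import os
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class ShadowTripletDataset(Dataset):
    """Loads (scene I_t, mask m_s, shadow map S) triplets, all 256x256."""
    def __init__(self, root):
        self.root = root
        self.ids = sorted(os.listdir(os.path.join(root, "scene")))

    def __len__(self):
        return len(self.ids)

    def _load(self, sub, name, gray=False):
        img = Image.open(os.path.join(self.root, sub, name))
        img = img.convert("L" if gray else "RGB")
        arr = np.asarray(img, dtype=np.float32) / 255.0
        t = torch.from_numpy(arr)
        return t.unsqueeze(0) if gray else t.permute(2, 0, 1)

    def __getitem__(self, i):
        name = self.ids[i]
        scene = self._load("scene", name)            # 3 x 256 x 256
        mask = self._load("mask", name, gray=True)   # 1 x 256 x 256
        smap = self._load("shadow", name, gray=True) # 1 x 256 x 256
        x = torch.cat([scene, mask], dim=0)          # 4-channel generator input
        return x, smap
```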

    3.2 Formulation

    3.2.1 Approach

Our goal is to train a generator G that learns a mapping function from domain X to domain Y, where X = {x_i}_{i=1}^N are input scenes with the inserted object mask, x_i = (I_t^i, m_s^i), and Y = {y_i}_{i=1}^N are the corresponding shadow maps, y_i = S_i. The key requirement for learning is that the generated shadow map G(x) should reconstruct the shadow map while being indistinguishable from the ground truth shadow map distribution y ∼ p_data(y). We introduce a local discriminator D_L and a global discriminator D_G, which are trained to detect the generated shadow maps as "fakes" in terms of local shape and of global direction and area, respectively. Our objective thus contains a reconstruction loss L_L1, a local adversarial loss L_LGAN, and a global adversarial loss L_GGAN.

    3.2.2 Reconstruction loss

Reconstruction loss is commonly used in supervised image-to-image translation problems [28, 30, 36] to constrain the generated result to be similar to the ground truth in an L1 or L2 sense. Here we use an L1-norm reconstruction loss to measure the error between the predicted shadow map G(x) and the ground truth shadow map y:

$\mathcal{L}_{\mathrm{L1}}(G)=\mathbb{E}_{x,y}\left[\|y-G(x)\|_1\right]$

    3.2.3 Local adversarial loss

The local discriminator D_L tries to distinguish the generated fake results G(x) from real samples y from local considerations, so it only looks at the region around the source object. Intuitively, the generated shadow G(x) for the source object should be as similar as possible to the ground truth sample y within a local region. We crop a square region centered at the source object, with side half the original image size, i.e., 128×128 pixels, and pass only the cropped region of the predicted shadow map C(G(x)) or ground truth shadow map C(y), together with the cropped conditional input scene image and source object mask C(x), to the local discriminator. Here, C(·) is the cropping operator. The local adversarial loss is defined to be

$\mathcal{L}_{\mathrm{LGAN}}(G,D_L)=\mathbb{E}_{x,y}\left[\log D_L(C(x),C(y))\right]+\mathbb{E}_{x}\left[\log\bigl(1-D_L(C(x),C(G(x)))\bigr)\right]$

G tries to minimize this objective against the local adversary D_L, which tries to maximize it. D_L takes the cropped version of either conditioned real samples (x, y) or generated fake samples (x, G(x)) as input, and determines whether the samples are real or fake.
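
A straightforward realization of the crop operator C(·) centers a 128×128 window on the inserted object. The centroid choice and boundary clamping below are our assumptions; the paper only specifies the crop size and centering on the source object:

```python
import torch

def crop_around_object(t: torch.Tensor, mask: torch.Tensor,
                       size: int = 128) -> torch.Tensor:
    """C(.): crop a size x size window centered on the inserted object.

    t:    B x C x 256 x 256 tensor to crop (shadow map and/or condition).
    mask: B x 1 x 256 x 256 binary object mask m_s; its centroid gives the
          crop center, clamped so the window stays inside the image.
    """
    B, _, H, W = t.shape
    half = size // 2
    crops = []
    for b in range(B):
        ys, xs = torch.nonzero(mask[b, 0], as_tuple=True)
        cy = int(ys.float().mean()) if len(ys) else H // 2
        cx = int(xs.float().mean()) if len(xs) else W // 2
        cy = min(max(cy, half), H - half)
        cx = min(max(cx, half), W - half)
        crops.append(t[b:b+1, :, cy-half:cy+half, cx-half:cx+half])
    return torch.cat(crops, dim=0)
```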

    3.2.4 Global adversarial loss

The global discriminator D_G tries to distinguish the generated fake results G(x) from real samples y using a global view of the whole shadow map. In particular, the generated shadow G(x) for the source object should be compatible with the other objects' shadows in the original scene in terms of direction and area.

The global adversarial loss is defined as

$\mathcal{L}_{\mathrm{GGAN}}(G,D_G)=\mathbb{E}_{x,y}\left[\log D_G(x,y)\right]+\mathbb{E}_{x}\left[\log\bigl(1-D_G(x,G(x))\bigr)\right]$

where G tries to minimize this objective against the global adversary D_G, which tries to maximize it. D_G takes either conditioned real samples (x, y) or conditioned generated fake samples (x, G(x)) as inputs.

    3.2.5 Full objective

    The overall objective is the weighted sum of the loss terms:

$\mathcal{L}(G,D_L,D_G)=\mathcal{L}_{\mathrm{GGAN}}(G,D_G)+\mathcal{L}_{\mathrm{LGAN}}(G,D_L)+\lambda\,\mathcal{L}_{\mathrm{L1}}(G)$

where λ = 200 controls the relative importance of the objective terms. The goal is to determine

$G^{*}=\arg\min_{G}\max_{D_L,D_G}\mathcal{L}(G,D_L,D_G)$
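
From the generator's side, this objective can be evaluated as in the hedged PyTorch sketch below; the function names are illustrative, and `crop` stands for the C(·) operator already bound to the batch's object masks (e.g., via functools.partial):

```python
import torch
import torch.nn.functional as F

def generator_loss(G_out, y, x, D_local, D_global, crop, lam=200.0):
    """Weighted sum L_GGAN + L_LGAN + lam * L_L1, evaluated for G's update."""
    ones = lambda t: torch.ones_like(t)
    # Global adversarial term: fool D_G on the full conditioned sample.
    p_g = D_global(torch.cat([x, G_out], dim=1))
    l_ggan = F.binary_cross_entropy(p_g, ones(p_g))
    # Local adversarial term: fool D_L on the cropped region.
    p_l = D_local(crop(torch.cat([x, G_out], dim=1)))
    l_lgan = F.binary_cross_entropy(p_l, ones(p_l))
    # L1 reconstruction against the ground-truth shadow map.
    l_rec = F.l1_loss(G_out, y)
    return l_ggan + l_lgan + lam * l_rec
```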

3.3 Implementation

    3.3.1 Conditional shadow map generator

Fig. 3 Conditional shadow map generator.

Figure 3 visualizes the conditional shadow map generator. The generator takes an input of size 256×256 with 4 channels: 3 RGB channels from the target scene and 1 for the source object mask m_s. The output is a single-channel shadow map of size 256×256. We adopt the encoder–decoder architecture proposed by Isola et al. [30], where skip connections (U-Net) concatenate the corresponding layers in the encoder and decoder. The generator downsamples the input using strided convolutions, followed by intermediate layers of dilated convolutions [37], before upsampling using transposed convolutions. We use the ReLU activation function after each layer except for the output layer, which uses a tanh activation function. In total, the proposed network has 15 convolutional layers with up to 256 feature channels.
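
The following PyTorch sketch illustrates this layer pattern at reduced depth. The channel widths and layer count here are illustrative, not the paper's (Table 2 gives the actual architecture):

```python
import torch
import torch.nn as nn

class ShadowGenerator(nn.Module):
    """Simplified sketch: strided-conv downsampling, dilated convolutions
    in the middle, transposed-conv upsampling, with U-Net skip connections.
    The tanh output lies in [-1, 1]; targets would be scaled accordingly."""
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(4, 64, 4, stride=2, padding=1),
                                  nn.ReLU(inplace=True))    # 256 -> 128
        self.enc2 = nn.Sequential(nn.Conv2d(64, 128, 4, stride=2, padding=1),
                                  nn.ReLU(inplace=True))    # 128 -> 64
        self.mid = nn.Sequential(                           # dilated convs
            nn.Conv2d(128, 256, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(256, 128, 3, padding=4, dilation=4), nn.ReLU(inplace=True))
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(256, 64, 4, stride=2, padding=1),
                                  nn.ReLU(inplace=True))    # 64 -> 128
        self.out = nn.Sequential(nn.ConvTranspose2d(128, 1, 4, stride=2, padding=1),
                                 nn.Tanh())                 # 128 -> 256

    def forward(self, x):                        # x: B x 4 x 256 x 256
        e1 = self.enc1(x)                        # B x 64 x 128 x 128
        e2 = self.enc2(e1)                       # B x 128 x 64 x 64
        m = self.mid(e2)                         # B x 128 x 64 x 64
        d2 = self.dec2(torch.cat([m, e2], 1))    # skip connection
        return self.out(torch.cat([d2, e1], 1))  # B x 1 x 256 x 256
```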

    3.3.2 Discriminator networks

Following Iizuka et al. [28] and Portenier et al. [32], we use local and global discriminators as adversaries for generator training (see Fig. 4). The input to the global discriminator is a 256×256×5 tensor: a fake shadow map sample S_f or a real shadow map sample S_r, the conditional input target scene image I_t, and the inserted object mask m_s. The local discriminator takes the same kind of input tensor but works on a cropped region of size 128×128 centered on the inserted object position.

Both discriminators are fully convolutional networks, with the spatial tensor dimensions gradually downsampled to 1×1. Feature channels increase up to 512 and then decrease to 1. The discriminator outputs predict whether the inputs are more like real samples or fake ones. We use leaky ReLU activation functions with slope 0.2 everywhere in the discriminators, except for the last layer, which uses a sigmoid activation function. Full network architectural details are provided in Tables 2 and 3.
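
A hedged sketch of the global discriminator's fully-convolutional design follows; the channel progression is illustrative (Table 3 gives the real one), and the local discriminator would be analogous on the 128×128 crop with one fewer stride-2 layer:

```python
import torch.nn as nn

class GlobalDiscriminator(nn.Module):
    """Downsamples the 256 x 256 x 5 conditioned input (scene + mask +
    shadow map) to a single 1 x 1 real/fake probability."""
    def __init__(self, in_ch=5):
        super().__init__()
        layers, ch = [], in_ch
        for out_ch in (64, 128, 256, 512, 512, 512):    # 256 -> 4 spatially
            layers += [nn.Conv2d(ch, out_ch, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
            ch = out_ch
        layers += [nn.Conv2d(ch, 1, 4), nn.Sigmoid()]   # 4 -> 1, probability
        self.net = nn.Sequential(*layers)

    def forward(self, x):              # x: B x 5 x 256 x 256
        return self.net(x).view(-1)    # B probabilities of being real
```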

3.4 Optimization and parameters

To optimize the proposed ShadowGAN, we follow Ref. [30], alternately performing gradient descent steps for D and G. We apply the Adam solver [38] with learning rate 0.0002 and momentum parameters β1 = 0.5, β2 = 0.999. Training for 100 epochs takes about 5 hours on a Titan 1080 Ti graphics card.
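
Putting the pieces together, the alternating optimization can be sketched as below. G, D_local, D_global, loader, crop, and generator_loss refer to the illustrative sketches above, and the local-discriminator terms in the D step are omitted for brevity:

```python
import torch
import torch.nn.functional as F

def bce(pred, value):
    # Binary cross-entropy against a constant real(1)/fake(0) target.
    return F.binary_cross_entropy(pred, torch.full_like(pred, value))

# Optimizer settings follow the paper: Adam, lr = 2e-4, betas = (0.5, 0.999).
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(list(D_local.parameters()) + list(D_global.parameters()),
                         lr=2e-4, betas=(0.5, 0.999))

for epoch in range(100):
    for x, y in loader:          # x: scene + mask (4ch), y: shadow map (1ch)
        fake = G(x)

        # Discriminator step: real tuples -> 1, fake tuples -> 0.
        opt_d.zero_grad()
        d_loss = (bce(D_global(torch.cat([x, y], 1)), 1.0) +
                  bce(D_global(torch.cat([x, fake.detach()], 1)), 0.0))
        d_loss.backward()
        opt_d.step()

        # Generator step: fool both discriminators and match ground truth.
        opt_g.zero_grad()
        g_loss = generator_loss(fake, y, x, D_local, D_global, crop)
        g_loss.backward()
        opt_g.step()
```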

Fig. 4 Discriminator architecture, comprising a global (top) and a local (bottom) network.

Table 2 Generator architecture. After each convolutional layer, except the last, there is a rectified linear unit (ReLU) layer. The output layer consists of a convolutional layer with a tanh function instead of a ReLU layer. "Outputs" gives the number of output channels of the layer.

Table 3 Discriminator architectures. All conv. layers are followed by leaky ReLU activation (slope 0.2). The output layer consists of a convolutional layer with sigmoid activation; it predicts the probability that an input shadow map comes from real samples rather than the generator network. (a) Local discriminator

    4 Results

    4.1 Initial tests

We have tested ShadowGAN on rendered synthetic scenes from the test set. The test set was rendered using the same strategy as the training set, with randomly selected models, object positions and orientations, illumination, and camera configurations. Shadow synthesis took about 0.3 s for a 256×256 input image on a Titan 1080 Ti graphics card. A gallery of synthetic shadowed results is shown in Fig. 8. Figure 5 shows synthetic results for the same scene and illumination viewed from randomly selected viewpoints: even when observed from different viewpoints, the synthetic shadows are visually realistic. As a further test, Fig. 6 shows results for the same scene and illumination with slightly different camera poses caused by camera rotation; the synthetic shadows remain temporally consistent during the camera movement.

Fig. 5 Two examples of shadow synthesis for the same scene and illumination, with different viewing angles. In each example, top row: input scenes; bottom row: corresponding synthetic results.

Fig. 6 Shadow synthesis for the same scene and illumination, with a slightly rotated camera. Top row: input scenes; bottom row: corresponding synthetic results.

ShadowGAN supports inserting virtual objects in sequence. Figure 7 shows an example of step-by-step object insertions with shadows synthesized using our method.

As ShadowGAN is the first deep learning-based shadow synthesis network, we next present an ablation study to demonstrate the benefits of the components of our system, followed by a user study to verify whether fake results from ShadowGAN are indistinguishable from real ones.

    4.2 Ablation study

In order to evaluate the effectiveness of the components of the proposed method, we re-evaluated ShadowGAN with alternative loss functions: only the reconstruction loss (denoted L1), the reconstruction loss plus the local adversarial loss (L1+Local), and the reconstruction loss plus the global adversarial loss (L1+Global). Representative visual results are shown in Fig. 9. The results indicate that with some losses turned off, the L1, L1+Local, and L1+Global variants do not generalize well to the test samples and fail to predict visually plausible shadows with correct shape, area, and direction.

We also evaluated an input variation in which the source object position was not explicitly provided by a mask m_s to either the generator or the discriminators. Figure 10 provides a visual comparison under these input variations. The results indicate that the source object mask m_s is essential for ShadowGAN to obtain good results.

    4.3 User study

To further assess whether the synthetic shadows for virtually inserted objects are visually natural and authentic, we conducted a user study in which subjects observed our synthetic results and judged whether the shadows looked real. We also showed real scenes to the subjects and asked them to determine whether the images were real.

Fig. 7 Inserting virtual objects in sequence.

Fig. 8 Gallery of synthetic results. Each example, left to right: (a) input target scene with a virtual source object, (b) input source object mask, (c) predicted shadow map using ShadowGAN, (d) synthetic shadowed result, and (e) ground truth shadowed result.

We collected 20 pairs of real and fake shadowed results from the test scenes; each pair shows the same scene. We invited 20 subjects without visual or perceptual impairments to observe and rate the images. Each subject observed a randomly selected image from each scene pair, either the synthetic result or the real shadowed image, and assessed whether the shadows in the image were real. We collected all votes from the subjects and summarize the results of the user study in Table 4. Overall, 50.48% of our synthetic shadows were assessed to be real. Even shadows in the real images were sometimes considered fake; only 57.14% were judged real. This indicates that the visual quality of synthetic results from ShadowGAN is close to that of rendered scenes.

Fig. 9 Ablation study for loss functions. Different losses lead to different qualities of results. Each column shows results trained under a different loss.

Fig. 10 Ablation study for the source mask. Each row shows a scene with our shadow synthesis result and the result without the source mask m_s.

    Table 4 User study summary

    5 Limitations and conclusions

ShadowGAN has limitations. Firstly, as discussed in Section 3.1, our training and test sets were produced by rendering public 3D models rather than from real-life photos. As collecting real-life photos with some objects' shadows turned off is a challenging task, we regard collecting and testing on real-life photos as future work. Secondly, at test time, a scene with only one virtually inserted object is fed into the network; synthesizing shadows for multiple objects simultaneously is not supported by ShadowGAN. However, as shown in the experimental results, users may iteratively perform insertion operations, one object at a time. As pioneering work using a GAN to synthesize shadows for virtual objects, we only tested our model on 256×256 images (as did Ref. [30]).

In summary, we have presented a generative adversarial network, ShadowGAN, which can synthesize shadows for virtual objects in images. Shadows are predicted by a generator which, during training, competes against a local discriminator and a global discriminator. To our knowledge, this is the first shadow synthesis solution using a deep learning-based framework. It benefits from being free of input constraints and is computationally efficient. For network training, we have produced a large set of rendered scenes using public 3D models from commonly seen object categories. We believe both the training data and ShadowGAN will benefit the computer graphics and virtual reality communities.

    Acknowledgements

The authors would like to thank all the reviewers. This work was supported by the National Natural Science Foundation of China (Project Nos. 61561146393 and 61521002), the China Postdoctoral Science Foundation (Project No. 2016M601032), and a Research Grant of Beijing Higher Institution Engineering Research Center.
