
    Estimating reflectance and shape of objects from a single cartoon-shaded image

    Hideki Todo1, Yasushi Yamaguchi2

    Computational Visual Media, 2017, Issue 1

    Although many photorealistic relighting methods provide a way to change the illumination of objects in a digital photograph, it is currently difficult to relight digital illustrations having a cartoon shading style. The main difference between photorealistic and cartoon shading styles is that cartoon shading is characterized by soft color quantization and nonlinear color variations, which cause noticeable reconstruction errors under a physical reflectance assumption such as Lambertian reflection. To handle this non-photorealistic shading property, we focus on shading analysis of the most fundamental cartoon shading technique. Based on the color map shading representation, we propose a simple method to interpret the input shading as that of a smooth shape with a nonlinear reflectance property. We have conducted simple ground-truth evaluations to compare our results to those obtained by other approaches.

    non-photorealistic rendering; cartoon shading; relighting; quantization

    1 Introduction

    Despite recent progress in 3D computer graphics techniques, traditional cartoon shading styles remain popular for 2D digital art. Artists can use a variety of commercial software (e.g., Photoshop, Painter) to design their own expressive shading styles. Although the design principles used roughly follow a physical illumination model, editing is restricted to 2D drawing operations. We are interested in exploring new interactions which allow relighting of a painted shading style given a single input image.

    Reconstructing surface shape and reflectance from a single image is known as the shape-from-shading problem [1]. Based on this fundamental problem setting, most relighting approaches assume that shading follows a Lambertian model [2–4]. Although these approaches work well for photorealistic images, they often fail to interpret the cartoon shading styles of digital illustrations.

    The main difference between photorealistic and cartoon shading styles is that cartoon shading is characterized by nonlinear color variation with soft quantization. The designed shading is typically more quantized than the inherent surface shape and its illumination. This assumption is common in many 3D stylized rendering techniques, which use a color map representation [5–7] to simply convert smooth 3D illumination into an artistic shading style. As shown in Fig. 1, this simple mechanism can produce a variety of shading styles with different quantization effects. However, such stylization makes it more difficult for shading analysis to reconstruct a surface shape and reflectance from the resulting shading.

    Fig. 1 Stylized shading styles obtained by color map representation.

    In this paper, we propose a simple shading-analysis method to recover a reasonable shading representation from the input quantized shading. As a first step, we focus on the most fundamental cartoon shading technique [6]. Our primary assumption is that the main nonlinear factor in the final shading can be encoded by a color map function. With this in mind, we aim to reconstruct a smooth surface field and a nonlinear reflectance property from the input shading. Using these estimated data, our method provides a way to change the illumination of the input image while preserving its quantized shading style. To evaluate our approach, we conducted a simple pilot study using a prepared set of 3D models and color maps with a variety of stylization inputs. The proposed method was quantitatively compared to related approaches, which provided several key insights regarding the relighting of stylized shading.

    2 Related work

    Color mapping is a common approach used to generate stylized appearances in comics or illustrations. In stylized rendering of a 3D scene, the color map representation is used to convert smooth 3D illumination into quantized, nonlinear shading effects [5–7]. Similar conversion techniques are used in 2D image abstraction methods for photorealistic images or videos [8–11]. As a starting point, our work follows the basic assumption that a stylized shading appearance is based on a smooth surface shape.

    Previous shape reconstruction methods for painted illustrations also attempt to recover a smooth surface shape from the limited information provided by feature lines. Lumo [12] generates an approximate normal field by interpolating normals on region boundaries and interior contours. Sýkora et al. [13] extended this approach with a simple set of user annotations to recover a full 3D shape for global illumination rendering. CrossShade [14] enables the user to design cross-section curves for better control of the constructed normal field. The CrossShade technique was extended by Iarussi et al. [15] to construct generalized bend fields from rough sketches in bitmap form. However, these approaches focus only on shape modeling from boundary constraints. The recently proposed inverse toon shading framework [16] also follows the strategy of modeling normal fields by designing isophote curves. In that work, the interpolation scheme requires manual editing to design two sets of isophotes under different illumination conditions for robust interpolation, and reliable isophote values are also assumed. In contrast, our objective is to use a single cartoon-shaded image to obtain a shading representation that contains both a shape and a nonlinear color map reflectance.

    An entire illumination constraint is considered in the well-known shape-from-shading (SFS) problem [1] for photorealistic images. Since the problem is severely ill-posed, accurate surface reconstruction requires skilled user interaction [3, 4, 17]: the user must specify shape constraints to reduce the solution space of the SFS problem. To reduce the user's burden, another class of approaches uses rough approximations from luminance gradients [2, 18] that can be tolerated by human perception. However, such approaches assume a photorealistic reflectance model, which often results in large reconstruction errors for the nonlinear shading in digital illustrations.

    Motivated by these considerations, we attempt to leverage the limited cartoon shading information to model a smooth surface shape and a nonlinear reflectance that together reproduce the original shading appearance.

    3 Problem definition

    3.1 Shading model assumptions

    As proposed in the basic cartoon shading technique [6], we assume that a color map representation is used to reproduce the artist's nonlinear shading effects. Figure 2 illustrates the basic cartoon shading process. In this model, the shading color c ∈ R3 is computed as follows:

    c = M(I)                                        (1)

    Fig. 2 Cartoon shading process.

    where I ∈ R is the luminance value of the illumination, and M: R → R3 is a 1D color map function which converts the luminance value to the final shading color. For a diffuse shading material, we set I = L·N, where L is the light vector and N is the surface normal vector. We are interested in manipulating L to L′ to produce a new lighting result, i.e., c′ = M(L′·N).
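
    To make this model concrete, here is a minimal Python/NumPy sketch of the color-map shading, assuming M is discretized as a lookup table of RGB rows (a hypothetical array color_map of shape (K, 3)); the function name shade() is ours, not from the paper.

    import numpy as np

    def shade(normals, light, color_map):
        """Cartoon shading c = M(L . N).
        normals: (H, W, 3) unit normals; light: (3,) unit light vector;
        color_map: (K, 3) RGB lookup table indexed by luminance in [0, 1]."""
        I = np.clip(normals @ light, 0.0, 1.0)          # diffuse term I = L . N
        idx = np.minimum((I * (len(color_map) - 1)).round().astype(int),
                         len(color_map) - 1)
        return color_map[idx]                           # (H, W, 3) shading colors

    Under this reading, a relighting interaction amounts to calling shade() again with a new light vector while keeping color_map and the normals fixed.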

    However, the inverse problem is ill-posed if only the shading color c is available. The primary idea of this paper is to limit the solution space for the other factors while preserving the final shading appearance. The basic assumptions considered in this paper are as follows.

    • Smooth shape and illumination. We assume that the surface shape N and the illumination I are smooth and follow a linear relationship. The only nonlinear factor is the color map function M, which is used to produce the stylized shading appearance.

    • Monotonic color map function. For the color map function M, we assume a monotonic relation between the image luminance Ic (obtained from c) and the surface illumination I. This assumption is important to simplify our problem definition as a variation of a photorealistic relighting problem.

    • Diffuse lighting for illumination. We treat all shading effects as due to diffuse lighting. We do not explicitly model specular reflections or shadows in our shading analysis experiments.

    4 Methods

    Figure 3 illustrates the main steps of the proposed shading analysis and relighting approach. Here we state the primary objective of each step and summarize it briefly.

    • Initial normal estimation. First, an initial normal field N0 is required as input for the reflectance estimation and normal refinement steps. Since the reflectance property is not yet available, we simply approximate a smooth, rounded normal field from the silhouette.

    • Reflectance estimation. Given the initial normal field N0, we estimate a key light direction L and a color map function M which best fit c = M(L·N0). This decomposition roughly matches the original shading c for the given N0.

    • Normal refinement. Since the estimated decomposition does not exactly satisfy c = M(L·N0), we refine the surface normals N0 to N to reproduce the original shading c.

    Fig. 3 Method overview. (a) Initial normal estimation to approximate a smooth, rounded normal field. (b) Reflectance estimation to obtain a light direction and a color map. (c) Normal refinement to modify the initial normals by fitting the shading appearance. (d) Relighting to provide lighting interactions based on the shading analysis data.

    • Relighting. Based on the above analysis results, the proposed method can relight the given input illustration. We change the light vector L to L′ to obtain the final shading color c′ = M(L′·N).

    In the following sections, each step of the proposed shading analysis and relighting approach is described in detail.

    4.1 Initial normal estimation

    For the target region Ω, we obtain a rounded normal field N0 from the silhouette inflation constraints [12, 13]:

    where N∂Ω = (N∂Ω,x, N∂Ω,y, 0) is the normal constraint on the silhouette ∂Ω. These normals are propagated to the interior of Ω using a diffusion method [19]. As shown in Fig. 4, we obtain a smooth initial normal field N0 in the form of a rounded shape.
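
    A rough Python sketch of this construction, under our reading of the silhouette-inflation constraint, is given below: boundary normals (N∂Ω,x, N∂Ω,y, 0) are taken from the smoothed mask gradient and diffused into the interior before setting the z component and renormalizing. This is an illustrative approximation, not the authors' exact solver, and all names are hypothetical.

    import numpy as np
    from scipy import ndimage

    def initial_normals(mask, iters=2000):
        """mask: (H, W) boolean region Omega. Returns an (H, W, 3) unit normal field."""
        gy, gx = np.gradient(ndimage.gaussian_filter(mask.astype(float), 2.0))
        boundary = mask & ~ndimage.binary_erosion(mask)
        interior = mask & ~boundary
        norm = np.hypot(gx, gy) + 1e-8
        n = np.zeros(mask.shape + (3,))
        n[boundary, 0] = -(gx / norm)[boundary]      # outward x component on the silhouette
        n[boundary, 1] = -(gy / norm)[boundary]      # outward y component on the silhouette
        for _ in range(iters):                       # Jacobi-style diffusion of (nx, ny)
            avg = sum(np.roll(n, s, axis=a)
                      for a, s in [(0, 1), (0, -1), (1, 1), (1, -1)]) / 4.0
            n[interior] = avg[interior]
        n[..., 2][mask] = np.sqrt(np.clip(1.0 - n[mask, 0]**2 - n[mask, 1]**2, 0.0, 1.0))
        return n / (np.linalg.norm(n, axis=2, keepdims=True) + 1e-8)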

    4.2 Reflectance estimation

    Once the initial normal field N0 has been obtained, our system estimates the reflectance factors based on the cartoon shading representation c = M(L·N).

    The reflectance estimation process takes the original color c and the initial normals N0 as input, and estimates the light direction L and the color map function M. We assume that the scene is illuminated by a single key light direction (i.e., L is the same for the entire image). The color map function M is estimated for each target object.

    In the early stages of our experiments, we observed that the key light estimation step was significantly affected by the input material style and shape. This simple experiment is summarized in the Appendix. Since L is a key factor in the subsequent estimation steps, we assume that a reliable light direction is provided by the user. In our evaluation, we used a predefined ground-truth light direction Lt to observe the errors caused by the other estimation steps.

    Fig. 4 Initial normal field obtained by silhouette inflation.

    Color map estimation. Given the smooth illumination result I0 = L·N0, we estimate a color map function M to fit c = M(I0).

    As shown in Fig. 5, isophote pixels of I0 do not all share the same color in c. Therefore, a straightforward least-squares minimization over all pixels produces a blurred color map M.

    To avoid this invalid correspondence between I0 and c, we enforce monotonicity by sorting the target pixels in dark-to-bright order, as shown in Fig. 6. From the sorted pixels, we obtain a valid correspondence between the luminance range [Ii, Ii+1] and each shading color ci in the same luminance order. As a result, the color map function M is recovered as a lookup table that returns ci for [Ii, Ii+1]. We also construct the corresponding inverse map M^-1, an additional lookup table that retrieves the luminance range [Ii, Ii+1] from a shading color ci.
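
    The following is an illustrative NumPy sketch of this sorting-based estimation, with the color map discretized into luminance bins; estimate_color_map() and inverse_range() are our hypothetical helper names, and the binning is a simplification of the ordering scheme described above.

    import numpy as np

    def estimate_color_map(I0, colors, bins=256):
        """I0: (P,) smooth illumination at the region pixels; colors: (P, 3) observed RGB.
        Returns the lookup table M (one color per luminance bin) and the bin edges."""
        order = np.argsort(I0)                        # dark-to-bright pixel order
        I_sorted, c_sorted = I0[order], colors[order]
        edges = np.linspace(I_sorted[0], I_sorted[-1], bins + 1)
        bin_of = np.clip(np.searchsorted(edges, I_sorted, side="right") - 1, 0, bins - 1)
        M = np.zeros((bins, 3))
        for k in range(bins):                         # average observed color per bin
            members = c_sorted[bin_of == k]
            M[k] = members.mean(axis=0) if len(members) else (M[k - 1] if k else 0.0)
        return M, edges

    def inverse_range(M, edges, color, tol=1e-3):
        """Inverse map M^-1: luminance range [Ii, Ii+1] covered by a quantized color ci
        (assumes the color actually appears in the table)."""
        hits = np.where(np.all(np.abs(M - np.asarray(color)) < tol, axis=1))[0]
        return edges[hits[0]], edges[hits[-1] + 1]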

    4.3 Normal refinement

    As shown in the right image of Fig. 6, the shading result M(L·N0) does not match c perfectly. Here we consider refining the normals N0 to reproduce the original color c by minimizing the following objective function:

    Fig. 5 Invalid correspondence between the initial illumination I0 and the input shading c.

    Fig. 6 Color map estimation. Given the set of illumination values L·N0 and the original colors c, a color map function M is estimated by matching the ranges of the luminance orders.

    To address this issue, we provide the following objective function, complementary to Eq. (3):

    Figure 7 illustrates the illumination constraints for the normal refinement process. From the color map estimation process described in Section 4.2, the luminance range [Ii, Ii+1] is known for each shading color ci. Therefore, the illumination is restricted by the following conditions:

    where Ci := {p ∈ Ω | c(p) = ci} is the quantized color area and the illumination L·N(p) is constrained to lie in [Ii, Ii+1].

    We solve the problem by minimizing the following energy:

    Fig. 7 Illumination constraints for normal refinement. The initial illumination result is modified by luminance range constraints derived from M^-1.

    The normals N are updated iteratively from the estimated initial normals N0 using Gauss–Seidel iterations. Here we chose λ = 1.5 to obtain the refinement results. Compared to the initial normals N0, the refined normals N fit the original color c better.
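
    A hedged sketch of this refinement loop is given below: each pixel's illumination L·N is pushed into the luminance range [Ii, Ii+1] obtained from M^-1, while a neighborhood average acts as the smoothness term. It uses a Jacobi-style update rather than the Gauss–Seidel sweep described above, the exact energy of Eq. (5) is not reproduced, and lam merely plays the role of the reported λ = 1.5.

    import numpy as np

    def refine_normals(N, light, lo, hi, mask, iters=200, lam=1.5):
        """N: (H, W, 3) initial normals; light: (3,) unit light vector;
        lo, hi: (H, W) per-pixel luminance bounds from M^-1; mask: (H, W) region."""
        N = N.copy()
        for _ in range(iters):
            avg = sum(np.roll(N, s, axis=a)                     # smoothness: 4-neighbor average
                      for a, s in [(0, 1), (0, -1), (1, 1), (1, -1)]) / 4.0
            I = N @ light                                       # current illumination L . N
            target = np.clip(I, lo, hi)                         # illumination range constraint
            step = avg + lam * (target - I)[..., None] * light  # push L . N toward its range
            step /= np.linalg.norm(step, axis=2, keepdims=True) + 1e-8
            N[mask] = step[mask]
        return N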

    4.4 Relighting

    Based on the cartoon shading representation c = M(L·N), our system enables lighting interactions with the input illustration. We obtain a relighting result c′ by changing the light vector L to L′ as follows:

    c′ = M(L′·N)

    where the estimated factors M and N are preserved during the relighting process.
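
    In terms of the earlier hypothetical shade() sketch, this is just a re-evaluation of the same lookup with the estimated data; N_refined and color_map below stand for the refined normals N and the estimated map M.

    L_new = np.array([0.5, 0.3, 0.8])              # user-chosen light direction L'
    L_new /= np.linalg.norm(L_new)
    relit = shade(N_refined, L_new, color_map)     # c' = M(L' . N)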

    5 Evaluation of shading analysis

    To evaluate our shading analysis approach, we conducted a simple pilot study via a ground-truth comparison. We compare our estimated results with those of several existing approaches and with the ground-truth inputs.

    5.1 Experimental design

    To generate a variety of stylized appearances, we first prepared shape and color map datasets (see Fig. 8).

    Shape dataset. We prepared 20 ground-truth 3D models of varying shape complexity and recognizability. This dataset includes 7 simple primitive shapes and 13 other shapes from 3D shape repositories. Each ground-truth model is rendered from a specific viewpoint to generate a 512×512 normal field.

    Fig. 8 20 ground-truth 3D shapes and 24 color maps in our datasets.

    Color map dataset. To better reflect real situations, we extracted color maps from existing digital illustrations. We selected a small portion of a material area with a stroke; the selected pixels were then simply sorted in luminance order to obtain a color map. We tried to extract more than 100 material areas from different digital illustration sources. From the extracted color maps, we selected 24 distinctive color maps with different quantization effects.

    Given the ground-truth normal field Nt and color map Mt, a final input image was obtained as ct = Mt(Lt·Nt). Note that we also provide the ground-truth light direction Lt in our evaluation process.
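
    Expressed with the earlier shade() sketch, generating one evaluation input is a single call; N_t, L_t, and M_t are hypothetical names for the ground-truth normal field, light direction, and color map table.

    c_t = shade(N_t, L_t, M_t)     # input image ct = Mt(Lt . Nt)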

    5.2 Comparison of reflectance models

    We first compared the visual difference between our target cartoon shading model and a common photorealistic Lambertian model, as shown in Fig. 9. To obtain an ambient color ka and a diffuse reflectance color kd for the Lambertian shading representation c = ka + kd·I, we minimized ||M(I) − (ka + kd·I)|| for the input color map function M. The color difference suggests that cartoon shading includes nonlinear components which cannot be described by a simple Lambertian model. We will discuss how this nonlinear reflectance property affects the estimation results.
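
    This Lambertian fit can be sketched as a per-channel linear least-squares problem over the sampled luminance range; fit_lambertian() below is our illustrative name, not code from the paper.

    import numpy as np

    def fit_lambertian(color_map):
        """color_map: (K, 3) lookup table M sampled on I in [0, 1].
        Returns (ka, kd), each of shape (3,), minimizing ||M(I) - (ka + kd * I)||."""
        I = np.linspace(0.0, 1.0, len(color_map))
        A = np.stack([np.ones_like(I), I], axis=1)      # design matrix columns: [1, I]
        coeffs, *_ = np.linalg.lstsq(A, color_map, rcond=None)
        return coeffs[0], coeffs[1]                     # ambient and diffuse colors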

    Fig. 9 Comparison of reflectance models. Top: color map materials selected from our dataset. Middle: Lambertian materials fitted to the corresponding color maps. Bottom: color difference between the color map materials and the Lambertian materials. The materials are listed according to the color difference.

    5.3 Shading analysis

    Figure 10 compares our estimation results with those from Lumo [12] and from the Lambertian assumption [4]. To simulate Lumo, we used the silhouette inflation constraints of the initial normal estimation in Eq. (2). For the Lambertian assumption, we used the illumination constraint in Eq. (5) with a small value λ = 1.0 to fit the input image luminance Ic. In all examples, we used our color map estimation method (Section 4.2) to reproduce the original shading appearance.

    As shown in Fig. 10, Lumo cannot reproduce the details of the illumination due to the lack of interior shading constraints. The Lambertian assumption recovers the original shading appearance well; however, the estimated normal field is overfitted to the quantized illumination. Although our method distributes certain shading errors near the boundaries of the color areas, it produces a relatively smooth normal field and illumination, both of which are similar to the ground truth.

    Figure 11 summarizes the shading analysis results for different material settings. Although our method cannot recover exactly the same shape from different quantization styles, the estimated normal field is smoother than the input shading.

    We also compute the mean squared error (MSE) to compare the estimated results quantitatively (see Figs. 12–15). In each comparison, we used the same shape and changed the materials when computing the shape estimation errors.
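
    For reference, one common form of such a shape error is a per-pixel MSE between the estimated and ground-truth normals over the object region; the variable names are ours, and the paper's exact error definition may differ.

    mse = np.mean(np.sum((N_est[mask] - N_gt[mask]) ** 2, axis=1))   # mean squared normal error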

    Fig. 10 Comparison of shading analysis results with Lumo [12] and the Lambertian assumption [4]. The proposed method reproduces the original shading appearance similarly to the Lambertian assumption, with a smooth normal field as in Lumo.

    Fig. 11 Shading analysis results for different color map materials.

    Fig. 12 Errors of the estimated shape depending on the input material (simple shape, Three Box).

    Fig. 13 Errors of the estimated shape depending on the input material (medium complexity shape, Fertility).

    Note that our method tends to produce smaller errors for simple rounded shapes, but the errors become larger than those of the Lambertian assumption for more complex shapes. For a complex shape like the Pulley shown in Fig. 15, even the Lambertian assumption results in large errors. Since the initial normal estimation errors become large in such cases, our method fails to recover a valid shape when only minimizing the appearance error. We provide further discussion of the initial normal estimation errors in Section 7.

    Fig. 14 Errors of the estimated shape depending on the input material (medium complexity shape, Venus).

    Fig. 15 Errors of the estimated shape depending on the input material (complex shape, Pulley).

    Though the estimated shape may not be accurate, our method successfully reduces the influence of the material differences in all comparisons. Thanks to the proposed shading analysis based on the cartoon shading model assumption, our method regularizes the estimated reflectance properties across the various quantization settings.

    5.4 Relighting

    Fig. 16 Comparison of our relighting results with those from Lumo [12] and the Lambertian assumption of Ref. [4]. The shading analysis row shows the estimated shading results for the input ground-truth light direction and shading. These analysis data are used to produce the subsequent relighting results. Our method can produce dynamic illumination changes in response to the input light directions, as Lumo does, whereas such changes are less noticeable under the Lambertian assumption. The details of the shapes are also preserved by our method.

    Figure 16 and the supplemental videos in the Electronic Supplementary Material (ESM) compare our relighting results with those from Lumo [12] and those obtained using the Lambertian assumption of Ref. [4]. In all examples, we first estimate the shading representation in the shading analysis step and then use the analysis data to produce the relighting results.

    As discussed in the previous evaluation of the shading analysis, both the proposed method and the Lambertian assumption preserve the original shading appearance in the shading analysis step. However, the Lambertian assumption tends to be strongly affected by the initial input illumination, so dynamic illumination changes in response to the input light directions are less noticeable in its relighting results. In contrast, the proposed method and Lumo can produce dynamic illumination changes that are similar to the ground-truth relighting results. The proposed method cannot fully recover the details of the ground-truth shape; however, our shading decomposition provides both dynamic illumination changes and the details of the target shape.

    6 Real illustration examples

    We have tested our shading analysis approach on different shading styles using three real illustrations. Figure 17 shows the relighting results for one of them; the others are included in the supplemental videos in the ESM. The material regions are relatively simple, but each material region is painted with different quantization effects.

    To apply our shading analysis and relighting methods, we first manually segmented the material regions of the target illustration. We also provide a key light direction L for the target illustration, which is needed for our reflectance estimation step.

    Fig. 17 Relighting sequence using the proposed method. Non-diffuse parts are limited to static transitions with a simple residual representation.

    Fig. 18 Reflectance and shape estimation results for a real illustration. Non-diffuse parts are encoded as residual shading.

    Figure 18 illustrates the elements of the reflectance and shape estimation results for this illustration. Compared to the ideal cartoon shading in our evaluations, a material region in the real examples may include non-diffuse parts. As suggested by a photorealistic illumination estimation method [20], we encode such specular and shadow effects as residual differences Δc = c − M(L·N) from our assumed shading representation c = M(L·N). Finally, we obtain relighting results as c′ = M(L′·N) + Δc by changing the light direction to L′.
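
    In terms of the earlier sketches, this residual handling amounts to two lines; original_image, N_refined, L_key, L_new, and color_map are the hypothetical names used before.

    delta_c = original_image - shade(N_refined, L_key, color_map)   # residual: specular/shadow parts
    relit = shade(N_refined, L_new, color_map) + delta_c            # c' = M(L' . N) + delta_c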

    As shown in Fig. 17 and the supplemental videos in the ESM, the residual representation can recover the appearance of the original shading. We also note that our initial experiment produced plausible shading transitions for diffuse lighting, while the specular and shadow effects remain relatively static.

    7 Discussion and future work

    In this paper, we have demonstrated a new shading analysis framework for cartoon-shaded objects. The visual appearance of the relighting results is improved by the proposed shading analysis. We incorporate the color map shading representation in our shading analysis approach, which enables shading decomposition into a smooth normal field and a nonlinear color map reflectance. We have introduced a new way to provide lighting interaction with digital illustrations; however, several issues remain.

    Firstly, our method requires a reliable light direction provided by the user. Since the light estimation method in the Appendix is significantly affected by the input shading, more user-friendly and robust light estimation approaches for cartoon shading are needed. We consider that a perceptually motivated approach [21] might be suitable.

    Secondly, the method minimizes the appearance error, because a shading image is the only input. This results in an under-constrained problem when estimating both shape and reflectance. In practice, our method achieves almost the same appearance as the input. As shown in Fig. 19, however, the proposed method cannot recover the input shape even if the material has Lambertian reflectance with full illumination constraints. Although the recovered shape satisfies appearance similarity under the color map estimated in advance, we need a better solution space to obtain a plausible shape. Since the desired shape typically differs between users, we plan to integrate user constraints [3, 4, 14] for normal refinement. More robust iterated refinement cycles of shape and reflectance estimation are also desirable.

    Fig. 19 Shape analysis results for Lambertian reflectance. Blob (top): small errors in shape and shading. Pulley (middle): large errors in shape. Lucy (bottom): large errors in shading.

    Another limitation is that our initial normal field approximation assumes the shape to be convex. This causes noticeable errors for complex shapes such as the Pulley, as shown in Fig. 19. We therefore plan to incorporate interior contours as concave constraints, as suggested by Lumo [12]. Even though this requires a robust edge detection process to define suitable normal constraints for various illustration styles, it is a promising direction for future work that may yield a more pleasing initial normal field.

    Although large collections of 2D digital illustrations are available online, we cannot directly apply our method to them since it requires manual segmentation. A crucial area of future research is to automate albedo estimation, as suggested by intrinsic image methods [22, 23]. While our initial experiments with manual segmentation produced plausible shading transitions via the diffuse shading assumption, our method cannot fully encode additional specular and shadow effects. Therefore, incorporating such specular and shadow models is important future work for more practical situations. Such shading effects are often designed using non-photorealistic principles; nevertheless, we hope that our approach will provide a promising direction for new 2.5D image representations of digital illustrations.

    Appendix Light estimation

    In the early stages of our experiments, we tried to estimate the key light direction L from the input shading c and the estimated initial normals N0.

    As suggested by Ref. [4], we approximate the problem using a Lambertian reflectance model Ic = kd(L·N0), where the diffuse term L·N0 is simply scaled by the diffuse constant kd. For the input illumination Ic, we compute the luminance value from the original color c as the L component in the Lab color space. We estimate the light vector L by minimizing the following energy:

    where L′ is given by L′ = kd·L. We finally obtain the unit light vector by normalizing L′. The diffuse reflectance constant kd can optionally be computed as the norm of L′.
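
    A minimal NumPy sketch of this estimation, under the stated Lambertian approximation Ic = kd(L·N0), solves for the scaled vector L′ = kd·L by linear least squares over the region pixels and then separates the direction and the constant; estimate_light() is our hypothetical name.

    import numpy as np

    def estimate_light(N0, Ic, mask):
        """N0: (H, W, 3) initial normals; Ic: (H, W) image luminance; mask: (H, W) region."""
        A = N0[mask]                                   # one row per pixel: dot with L' gives kd * (L . N0)
        b = Ic[mask]
        L_scaled, *_ = np.linalg.lstsq(A, b, rcond=None)
        kd = np.linalg.norm(L_scaled)
        return L_scaled / (kd + 1e-8), kd              # unit light direction and diffuse constant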

    Figure 20 summarizes our light estimation experiment. In this experiment, we use a single ground-truth light direction Lt (top left) to generate the input cartoon-shaded image ct and then estimate a key light direction L by solving Eq. (10).

    It can be observed that the estimated results are consistent for near-Lambertian materials (the left 3 maps) but inconsistent for more stylized materials (the right 3 maps). Another important factor is shape complexity. The estimated light direction is relatively consistent for rounded, smooth shapes. However, the light estimation error becomes quite large when the input model contains many crease edges, especially around the silhouette.

    Fig. 20 Light estimation error. Top left: input ground-truth light direction Lt. Top row: input color map materials shaded using Lt. The left 3 maps have small average errors; the right 3 maps have large average errors. Left column: input 3D models. The top 3 models have small average errors; the bottom 3 models have large average errors.

    These results suggest that additional constraints are required to improve light estimation. In this paper, we simply provide a ground-truth light direction for the evaluation, or a user-given reliable light direction when relighting the real illustration examples.

    Acknowledgements

    We would like to thank the anonymous reviewers for their constructive comments. We are also grateful to Tatsuya Yatagawa, Hiromu Ozaki, Tomohiro Tachi, and Takashi Kanai for their valuable discussions and suggestions. Additional thanks go to the AIM@SHAPE Shape Repository and Keenan's 3D Model Repository for the 3D models, and to Makoto Nakajima and www.piapro.net for the 2D illustrations used in this work. This work was supported in part by the Japan Science and Technology Agency CREST project and the Japan Society for the Promotion of Science KAKENHI Grant No. JP15H05924.

    Electronic Supplementary Material Supplementary material is available in the online version of this article at http://dx.doi.org/10.1007/s41095-016-0066-0.

    [1] Horn, B. K. P.; Brooks, M. J. Shape from Shading. Cambridge, MA, USA: MIT Press, 1989.

    [2] Khan, E. A.; Reinhard, E.; Fleming, R. W.; Bülthoff, H. H. Image-based material editing. ACM Transactions on Graphics Vol. 25, No. 3, 654–663, 2006.

    [3] Okabe, M.; Zeng, G.; Matsushita, Y.; Igarashi, T.; Quan, L.; Shum, H.-Y. Single-view relighting with normal map painting. In: Proceedings of Pacific Graphics, 27–34, 2006.

    [4] Wu, T.-P.; Sun, J.; Tang, C.-K.; Shum, H.-Y. Interactive normal reconstruction from a single image. ACM Transactions on Graphics Vol. 27, No. 5, Article No. 119, 2008.

    [5] Barla, P.; Thollot, J.; Markosian, L. X-toon: An extended toon shader. In: Proceedings of the 4th International Symposium on Non-Photorealistic Animation and Rendering, 127–132, 2006.

    [6] Lake, A.; Marshall, C.; Harris, M.; Blackstein, M. Stylized rendering techniques for scalable real-time 3D animation. In: Proceedings of the 1st International Symposium on Non-Photorealistic Animation and Rendering, 13–20, 2000.

    [7] Mitchell, J.; Francke, M.; Eng, D. Illustrative rendering in Team Fortress 2. In: Proceedings of the 5th International Symposium on Non-Photorealistic Animation and Rendering, 71–76, 2007.

    [8] DeCarlo, D.; Santella, A. Stylization and abstraction of photographs. ACM Transactions on Graphics Vol. 21, No. 3, 769–776, 2002.

    [9] Kang, H.; Lee, S.; Chui, C. K. Flow-based image abstraction. IEEE Transactions on Visualization and Computer Graphics Vol. 15, No. 1, 62–76, 2009.

    [10] Kyprianidis, J. E.; Döllner, J. Image abstraction by structure adaptive filtering. In: Proceedings of EG UK Theory and Practice of Computer Graphics, 51–58, 2008.

    [11] Winnemöller, H.; Olsen, S. C.; Gooch, B. Real-time video abstraction. ACM Transactions on Graphics Vol. 25, No. 3, 1221–1226, 2006.

    [12] Johnston, S. F. Lumo: Illumination for cel animation. In: Proceedings of the 2nd International Symposium on Non-Photorealistic Animation and Rendering, 45–52, 2002.

    [13] Sýkora, D.; Kavan, L.; Čadík, M.; Jamriška, O.; Jacobson, A.; Whited, B.; Simmons, M.; Sorkine-Hornung, O. Ink-and-ray: Bas-relief meshes for adding global illumination effects to hand-drawn characters. ACM Transactions on Graphics Vol. 33, No. 2, Article No. 16, 2014.

    [14] Shao, C.; Bousseau, A.; Sheffer, A.; Singh, K. CrossShade: Shading concept sketches using cross-section curves. ACM Transactions on Graphics Vol. 31, No. 4, Article No. 45, 2012.

    [15] Iarussi, E.; Bommes, D.; Bousseau, A. BendFields: Regularized curvature fields from rough concept sketches. ACM Transactions on Graphics Vol. 34, No. 3, Article No. 24, 2015.

    [16] Xu, Q.; Gingold, Y.; Singh, K. Inverse toon shading: Interactive normal field modeling with isophotes. In: Proceedings of the Workshop on Sketch-Based Interfaces and Modeling, 15–25, 2015.

    [17] Wu, T.-P.; Tang, C.-K.; Brown, M. S.; Shum, H.-Y. ShapePalettes: Interactive normal transfer via sketching. ACM Transactions on Graphics Vol. 26, No. 3, Article No. 44, 2007.

    [18] Lopez-Moreno, J.; Jimenez, J.; Hadap, S.; Reinhard, E.; Anjyo, K.; Gutierrez, D. Stylized depiction of images based on depth perception. In: Proceedings of the 8th International Symposium on Non-Photorealistic Animation and Rendering, 109–118, 2010.

    [19] Orzan, A.; Bousseau, A.; Barla, P.; Winnemöller, H.; Thollot, J.; Salesin, D. Diffusion curves: A vector representation for smooth-shaded images. Communications of the ACM Vol. 56, No. 7, 101–108, 2013.

    [20] Kholgade, N.; Simon, T.; Efros, A.; Sheikh, Y. 3D object manipulation in a single photograph using stock 3D models. ACM Transactions on Graphics Vol. 33, No. 4, Article No. 127, 2014.

    [21] Lopez-Moreno, J.; Garces, E.; Hadap, S.; Reinhard, E.; Gutierrez, D. Multiple light source estimation in a single image. Computer Graphics Forum Vol. 32, No. 8, 170–182, 2013.

    [22] Grosse, R.; Johnson, M. K.; Adelson, E. H.; Freeman, W. T. Ground truth dataset and baseline evaluations for intrinsic image algorithms. In: Proceedings of the IEEE 12th International Conference on Computer Vision, 2335–2342, 2009.

    [23] Rother, C.; Kiefel, M.; Zhang, L.; Schölkopf, B.; Gehler, P. V. Recovering intrinsic images with a global sparsity prior on reflectance. In: Proceedings of Advances in Neural Information Processing Systems 24, 765–773, 2011.

    Hideki Todo is an assistant professor in the School of Media Science at Tokyo University of Technology. He received his Ph.D. degree in information science and technology from the University of Tokyo in 2013. His research interests lie in the field of computer graphics in general, particularly non-photorealistic rendering.

    Yasushi Yamaguchi, Dr. Eng., is a professor in the Graduate School of Arts and Sciences at the University of Tokyo. His research interests lie in image processing, computer graphics, and visual illusion, including visual cryptography, computer-aided geometric design, volume visualization, and painterly rendering. He has served as president of the Japan Society for Graphic Science and as vice president of the International Society for Geometry and Graphics.

    Open Access The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

    Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.

    1 Tokyo University of Technology, Tokyo 192-0982, Japan. E-mail: toudouhk@stf.teu.ac.jp.

    2 The University of Tokyo, Tokyo 153-8902, Japan. E-mail: yama@graco.c.u-tokyo.ac.jp.

    Received: 2016-08-30; accepted: 2016-11-10
