
    User-guided line abstraction using coherence and structure analysis

Computational Visual Media, 2017, Issue 2


Hui-Chi Tsai1,2, Ya-Hsuan Lee1, Ruen-Rone Lee2, and Hung-Kuo Chu1

Line drawing is a style of image abstraction in which the perceptual content of the image is conveyed using distinct straight or curved lines. However, extracting semantically salient lines is not trivial and is mastered only by skilled artists. While many parametric filters have successfully extracted accurate and coherent lines, their results are sensitive to parameter choice and easily lead to either an excessive or an insufficient number of lines. In this work, we present an interactive system to generate concise line abstractions of arbitrary images via a few user-specified strokes. Specifically, the user simply provides a few intuitive strokes on the input image through a sketching interface, including roughly tracing along edges and scribbling on regions of interest. The system then automatically extracts lines that are long, coherent, and share similar textural structures from a corresponding highly detailed line drawing. We have tested our system on a wide variety of images. Our experimental results show that our system outperforms state-of-the-art techniques in terms of quality and efficiency.

line abstraction; interactive drawing; coherence strokes; structure strokes; stroke matching

    1 Introduction

Line drawing is a style of image abstraction in which a distinct and concise set of line strokes is used to depict the shapes of objects in a scene. Such concise image abstraction is a fundamental element in many artistic stylizations, where artists delicately draw long coherent lines along semantically salient features in an image to give a first impression of their artworks. Beyond artistic drawings, good line abstraction also provides valuable priors for advanced image processing and scene understanding tasks that demand precise edge detection.

However, generating semantically meaningful line abstractions is not a trivial task; it is currently approached in two very different ways. One way, mostly appreciated by artists, is to utilize various commercial painting tools (e.g., Paint, Photoshop) to precisely trace the salient features by hand. Although this offers the artist full control over the final result, the process is tedious, time-consuming, and potentially error-prone due to fatigue. In the other approach, a large body of work is dedicated to automatic line abstraction in various contexts, ranging from gradient-based edge detection [1] to artistic abstraction [2]. While such automation largely eliminates the manual effort required and achieves pixel-level accuracy, the results are highly sensitive to parameter settings, leading to either excessive or insufficient detail. Overall, we still lack an efficient and effective technique to extract concise yet semantically meaningful line abstractions from images.

In this work, we present a novel line abstraction algorithm to extract prominent line strokes from a highly detailed line drawing under the supervision of a few user-specified strokes. The key insight is to leverage both the cognitive ability of humans and the computational power of machines to accomplish the line abstraction task. To minimize fatigue, the user simply scribbles roughly along long image features (e.g., contours of objects) or on a region with similar texture patterns via a sketching interface. The system then automatically performs the accuracy-demanding and computationally expensive task of extracting a concise yet semantically meaningful line abstraction from a highly detailed one. Specifically, the system first classifies the user strokes into coherence and structure strokes, based on which the long coherent lines and the line segments that share similar texture patterns in the input image are extracted, respectively. Figure 1 shows a typical example generated by our system using only a few hand-drawn strokes. We have tested our system on a variety of images across different users. Experimental results show that, given the same input user strokes, our system generates superior or comparable line abstractions in comparison to previous state-of-the-art methods, and provides a significant performance boost over hand drawing when a target complexity of abstraction is requested.

    In summary,our main contributions include:

• An easy-to-use sketching system that facilitates the creation of concise, semantic line abstractions using very simple and intuitive user strokes.

• A novel line matching algorithm for extracting long coherent lines and line segments with similar image-domain structure using coherence and structure strokes, respectively, which are automatically derived from the user strokes.

    2 Related work

Parametric image filtering. Parametric image filters such as the Canny edge detector [1] and the difference-of-Gaussians filter [2–4] are widely used in image abstraction for generating line drawing images. However, the quality of the output may vary significantly when adjusting the associated control parameters, leading to either excessive or insufficient detail. Another well-known contour detector, global probability of boundary (gPb) [5, 6], combines both local and global image features and requires a single threshold parameter to control the number of detected edges. Nevertheless, it remains difficult to find a universally applicable setting that produces satisfactory results for different input images. Rather than struggling to optimize parameters, our work aims to utilize these well-defined filters to generate over-detailed line drawing images, from which a concise set of semantic lines is then extracted via the user-specified strokes.

Sketch-based refinement. Limpaecher et al. [7] introduced a method to correct user input strokes using a consensus model collected from a crowdsourced drawing database. Su et al. [8] presented the EZ-Sketching system, which snaps user strokes to nearby edges using a novel three-level optimization. These systems also resemble those that snap the cursor or strokes to specific image features, such as image snapping [9] and lazy snapping [10]. Other interactive sketching systems, such as ShadowDraw [11], the drawing assistant [12], and iCanDraw [13], aim to provide a tutor-like drawing system for novice users. In contrast to previous works that intend to correct or guide user strokes, our system aims to use user strokes as guidance to effectively extract prominent lines that match the user's intentions from a detailed line drawing image.

Fig. 1 Given an input image (a) along with a few scribbles by the user (b), our system automatically extracts a concise line abstraction with coherence and structure lines depicting the edges of the petals and the shapes of the pistils (c, d). Note that our system can adaptively produce highly detailed line drawings using different image filters (see insets).

Stylized line drawing. RealBrush [14] used scanned images of real natural media to synthesize the texture appearance of user strokes. The portrait sketching system by Berger et al. [15] is capable of synthesizing a sketch portrait from a photograph that mimics a particular artistic style. Both systems are data-driven and achieve impressive results by analyzing the relationship between input strokes and the collected line abstraction database. Our system can contribute to this line of work by serving as an efficient tool for generating line abstractions in various styles.

Our work is closely related to that of Yang et al. [16], who also tried to extract semantic gradient edges based on input user strokes. Their system first clusters edge points into edgelets and constructs a graph that encodes the spatial relations between the edges near the user strokes. An energy minimization framework is then used to select the semantic edges that conform to the shapes of the user strokes. However, their line matching algorithm may produce artifacts such as disconnected edges even if the input strokes are coherent. Moreover, lacking support for structure analysis in the texture domain, their system requires users to provide strokes at different scales in order to extract the corresponding gradient edges.

    3 Overview

An overview of our system is provided in Fig. 2. Given an input image, our system starts with the user providing rough scribbles on the regions of interest (see Fig. 2(b)) to guide line abstraction. In addition to the user strokes, our system also takes as input a detailed line drawing of the input image (see Fig. 2(c)), which provides a reference dataset of line segments used in the subsequent matching algorithm. Such a detailed line drawing can be obtained using any suitable well-known image filter, such as the Canny edge detector [1], fDoG [2], gPb [5, 6], etc. Our system then runs in two stages.

Stroke classification. The user has to provide only two kinds of simple and intuitive strokes: (i) roughly tracing long image features (e.g., outlines, edges), and (ii) scribbling on regions using zigzag or circular strokes. We refer to the former type of strokes as coherence strokes. These are simple lines that are nearly straight and are usually used to depict the main shape of objects (see Fig. 2(d)). The other stroke type, structure strokes, are mainly used to indicate regions of interest that contain repeated texture patterns, which are otherwise tedious to trace by hand (see Fig. 2(e)). Since these two types of strokes represent different intentions of the user, our system employs a gesture recognition technique [17] to classify the user strokes.
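The paper delegates this step to the $1 gesture recognizer [17]. As an illustration only, the sketch below uses a much simpler straightness heuristic (ratio of chord length to arc length) to separate near-straight coherence strokes from zigzag or circular structure strokes; the threshold value is an assumption, not a parameter from the paper.

```python
import math

def classify_stroke(points, straightness_threshold=0.6):
    """Heuristic stroke classification (illustrative only, not the $1 recognizer).

    points: list of (x, y) samples along the stroke. A stroke whose endpoint
    (chord) distance is a large fraction of its arc length is nearly straight
    and treated as a coherence stroke; zigzag or circular scribbles fold back
    on themselves and are treated as structure strokes. The 0.6 threshold is
    an assumed value.
    """
    arc_length = sum(math.dist(points[i], points[i + 1])
                     for i in range(len(points) - 1))
    if arc_length == 0:
        return "coherence"
    chord = math.dist(points[0], points[-1])
    return "coherence" if chord / arc_length > straightness_threshold else "structure"
```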

Line matching. We formulate line abstraction as a line matching problem, the aim being to extract lines from the reference dataset that match the user-specified coherence and structure strokes (if any). Specifically, for each coherence stroke, the system computes the best matching coherence lines (see Fig. 2(f)) using metrics favoring candidate lines that are smooth and in agreement with the user stroke in terms of orientation and overall length (see Section 4.1). For each structure stroke, the system first analyzes a representative feature descriptor based on texture patches of the input image by sampling along the stroke using a local window. The corresponding structure lines are then those lines of the reference dataset that are close to the user stroke in feature space (see Fig. 2(g) and Section 4.3). The final result is obtained by combining these two types of extracted lines.

Fig. 2 Overview. Given an input image (a), the system lets the user provide a few simple, intuitive strokes (b) and generates a reference dataset of line segments from a detailed line drawing image (c). Next, the system classifies the user strokes into coherence strokes (d) and structure strokes (e). A novel line matching algorithm is then employed to match the line segments of (c) to the input coherence and structure strokes. The best matching coherence lines (f) and structure lines (g) are combined to form the final line abstraction (h).

    4 Algorithm

Preprocessing. The system starts by preprocessing the input detailed line drawing image to obtain a dataset of atomic line segments for the subsequent matching algorithm. This is done by splitting long continuous lines into small line segments according to both length and curvature constraints. Assuming a line comprises a set of t consecutive pixels {p1, ···, pt}, we measure the curvature at pi using the angle θ between the two vectors pointing from pi to its neighboring pixels pi−1 and pi+1. If θ is less than 135° or the length of the line exceeds a threshold of 20 pixels, we subdivide the line into two line segments. The splitting process is iterated until no more line segments violate the length and curvature requirements.
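The splitting rule can be implemented directly; the sketch below (assuming a line is represented as a list of pixel coordinates) recursively subdivides a pixel chain using the 135° and 20-pixel thresholds from the text. The choice of cut point (sharpest interior pixel, or the midpoint for over-long chains) is an assumption.

```python
import math

def turning_angle(prev_pt, pt, next_pt):
    """Angle theta (degrees) at pt between the vectors pt->prev_pt and pt->next_pt."""
    ax, ay = prev_pt[0] - pt[0], prev_pt[1] - pt[1]
    bx, by = next_pt[0] - pt[0], next_pt[1] - pt[1]
    na, nb = math.hypot(ax, ay), math.hypot(bx, by)
    if na == 0 or nb == 0:
        return 180.0
    cos_t = max(-1.0, min(1.0, (ax * bx + ay * by) / (na * nb)))
    return math.degrees(math.acos(cos_t))

def split_line(pixels, max_len=20, min_angle=135.0):
    """Recursively split a pixel chain into atomic line segments.

    A chain is split at its sharpest interior pixel when that pixel's turning
    angle is below min_angle, or at its midpoint when the chain exceeds
    max_len pixels; thresholds follow the paper (135 degrees, 20 pixels).
    """
    if len(pixels) <= 2:
        return [pixels]
    angles = [turning_angle(pixels[i - 1], pixels[i], pixels[i + 1])
              for i in range(1, len(pixels) - 1)]
    sharpest = min(range(len(angles)), key=lambda i: angles[i]) + 1
    if angles[sharpest - 1] < min_angle:
        cut = sharpest
    elif len(pixels) > max_len:
        cut = len(pixels) // 2
    else:
        return [pixels]
    return split_line(pixels[:cut + 1]) + split_line(pixels[cut:])
```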

We define a region of interest (ROI) for each user stroke to speed up the process, by constraining candidate matching line segments to those intersecting the ROI. The ROI is defined as the region swept by a disk aligned with and moving along the user stroke. We use an empirical setting of 15 pixels as the default disk radius to generate all results presented in this paper.
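A minimal sketch of the ROI test under the same assumptions (stroke as a polyline, segment as a list of pixels): a segment is kept as a candidate if any of its pixels lies within the 15-pixel disk radius of the stroke.

```python
import math

def point_to_segment_dist(p, a, b):
    """Distance from point p to the finite segment a-b."""
    ax, ay, bx, by, px, py = *a, *b, *p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.dist(p, a)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.dist(p, (ax + t * dx, ay + t * dy))

def in_roi(line_pixels, stroke_points, radius=15.0):
    """True if any pixel of the line segment falls inside the swept-disk ROI."""
    return any(
        point_to_segment_dist(p, stroke_points[i], stroke_points[i + 1]) <= radius
        for p in line_pixels
        for i in range(len(stroke_points) - 1)
    )
```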

    4.1 Coherence line matching

Coherence strokes correspond to the user's intention to trace along the contours of an object to depict its overall shape. Therefore, the goal in this step is to extract line segments that form a long coherent line matching each user stroke in terms of length and orientation. The details of the algorithm are given below.

Graph construction. For each input coherence stroke, the system first constructs a directed graph G = (V, E), with vertex set V = {v1, ···, vm} containing all candidate line segments covered by the stroke's ROI; we add a directed edge for every pair of distinct vertices. The edges can be further divided into two types according to context. An edge is labeled as a real edge if its two vertices (i.e., line segments) are originally connected in the source line drawing image; otherwise it is labeled as a virtual edge. We defer the discussion of how to determine the direction of each edge until later. An example of such a digraph can be seen in Fig. 3.
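A sketch of this construction under the assumptions above; originally_connected is a hypothetical predicate that reports whether two segments were adjacent in the source line before splitting. Edge directions are resolved afterwards from the stroke's drawing direction, so both orderings are recorded here.

```python
from itertools import permutations

def build_coherence_graph(candidates, originally_connected):
    """Build the graph for one coherence stroke (illustrative sketch).

    candidates: candidate line segments inside the stroke ROI (the vertices).
    originally_connected(a, b): hypothetical predicate, True when the two
    segments were adjacent in the source line drawing before splitting.
    Returns the vertex list and a dict mapping ordered index pairs (i, j) to
    'real' or 'virtual'; direction is assigned later from the stroke.
    """
    vertices = list(candidates)
    edges = {}
    for i, j in permutations(range(len(vertices)), 2):
        edges[(i, j)] = ("real" if originally_connected(vertices[i], vertices[j])
                         else "virtual")
    return vertices, edges
```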

Vertex-wise energy term. Assume the coherence stroke is also split into a set of stroke segments, denoted S = {s1, ···, sn}. For each vertex vi, we search the set S and assign vi to the best matching stroke segment according to an alignment function, formulated as

The distance cost, Cdistance, calculates the average distance between vi and sj, and is defined as

The orientation cost, Cangle, measures how well vi is aligned with sj, and is defined as

where θ represents the acute angle between vi and sj, and α is a weight which is empirically set to 2 in our experiments.
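The exact alignment formula is not reproduced above, so the sketch below assumes a plausible form: the average pixel-to-stroke-segment distance plus α times an orientation term normalized by 90°, with α = 2 as stated in the text. It reuses point_to_segment_dist from the ROI sketch; the normalization and the additive combination are assumptions.

```python
import math

def acute_angle_deg(dir_a, dir_b):
    """Acute angle (degrees) between two direction vectors."""
    dot = abs(dir_a[0] * dir_b[0] + dir_a[1] * dir_b[1])
    norm = math.hypot(*dir_a) * math.hypot(*dir_b)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def alignment_cost(segment_pixels, segment_dir, stroke_seg, stroke_dir, alpha=2.0):
    """Assumed alignment cost: C_distance + alpha * C_angle.

    segment_pixels / segment_dir: pixels and direction of candidate segment v_i.
    stroke_seg / stroke_dir: endpoints (a, b) and direction of stroke segment s_j.
    """
    a, b = stroke_seg
    c_distance = (sum(point_to_segment_dist(p, a, b) for p in segment_pixels)
                  / len(segment_pixels))
    c_angle = acute_angle_deg(segment_dir, stroke_dir) / 90.0  # normalized to [0, 1]
    return c_distance + alpha * c_angle
```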

Edge-wise energy term. Since our purpose is to extract a path in the graph that is coherent with the input stroke, the edge direction should naturally follow the drawing direction of the input stroke. The edge direction is determined by computing the angle between a pair of line segments (vi, vj) and their best matching stroke segments (si, sj). As shown in Fig. 4(b), we calculate the angle θ between the vector connecting the midpoints of the two line segments and the vector connecting the midpoints of their matching stroke segments, where ms and mv denote the midpoints of si and vi, respectively.

The edge direction is set from vi to vj if θ is less than 90°; otherwise it is set to the opposite direction.

Fig. 3 Graph construction. (a) A coherence stroke (green) drawn by the user; direction shown by the white arrow. (b) (Right) corresponding digraph based on the detailed line drawing (left). Each vertex represents a line segment. Real edges: black arrows. Virtual edges: orange arrows. Directions of edges are consistent with the direction of the input stroke.

Fig. 4 Continuity measure for a pair of line segments vi and vj. (a) Two line segments with strong continuity in terms of Cline. (b) Two line segments with weak continuity in terms of Cuser. Edge direction is decided by the angle between the line segments and the matching stroke segments.

The associated edge weight W(vi, vj) is determined based on the continuity between each pair of line segments (vi, vj), defined by

The continuity cost, Cline, takes into account the geometric features of the two line segments, and is defined as

where nA and nB are the unit tangent vectors at points A and B, which lie on line segments vi and vj, respectively; d is the distance vector from point A to point B, and nd is the normalized unit vector along d. Figure 4(a) illustrates a case with small Cline. For real edges, Cline is set to 1. Note that Eq. (5) is a slight modification of the discontinuity term introduced in the stroke clustering stage of Ref. [18].

However, in some cases, two geometrically connected line segments may actually come from two semantically different objects. Take Fig. 3 for instance: although the line on the left hand side of the window is long and continuous, it is actually made up of edges from different objects (i.e., the shoulder and the lantern). To handle such cases, we introduce another continuity cost, Cuser, which uses the indication from the coherence stroke to determine whether two line segments have strong or weak continuity. This cost function is defined as

where θ is the angle between the two line segments and their matching stroke segments. When θ is large, the user intends weak continuity between the two line segments even though they show strong continuity in terms of Cline. Figure 4 illustrates a case with small Cline and large Cuser.
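Since Eqs. (4)–(6) are not reproduced here, the following sketch only mirrors the description: a geometric term that is small when both tangents line up with the direction joining the two endpoints (in the spirit of, but not identical to, Ref. [18]), a user-intent term that grows with θ, and an assumed additive combination for the edge weight. Real edges keep the fixed Cline value of 1 stated in the text.

```python
import math

def unit(v):
    n = math.hypot(*v)
    return (v[0] / n, v[1] / n) if n else (0.0, 0.0)

def c_line(point_a, tangent_a, point_b, tangent_b):
    """Assumed geometric continuity cost between endpoints A (on v_i) and B (on v_j).

    Near 0 when both unit tangents n_A, n_B align with n_d, the unit vector
    along the displacement d from A to B (the situation of Fig. 4(a)).
    """
    n_d = unit((point_b[0] - point_a[0], point_b[1] - point_a[1]))
    n_a, n_b = unit(tangent_a), unit(tangent_b)
    align_a = abs(n_a[0] * n_d[0] + n_a[1] * n_d[1])
    align_b = abs(n_b[0] * n_d[0] + n_b[1] * n_d[1])
    return 1.0 - 0.5 * (align_a + align_b)

def c_user(theta_deg):
    """Assumed user-intent cost: grows with the angle between the line segments
    and their matching stroke segments, normalized by 90 degrees."""
    return min(theta_deg, 90.0) / 90.0

def edge_weight(point_a, tangent_a, point_b, tangent_b, theta_deg, is_real_edge):
    """Assumed combination W(v_i, v_j) = C_line + C_user; real edges use C_line = 1."""
    cl = 1.0 if is_real_edge else c_line(point_a, tangent_a, point_b, tangent_b)
    return cl + c_user(theta_deg)
```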

Optimization. Given the directed graph, we apply the Floyd–Warshall algorithm [19] to compute all-pairs shortest paths, finding the most coherent path for each pair of vertices. In order to extract the most prominent paths, we define an energy function, E(p), to measure the quality of each path p as follows:

The alignment energy term, Ealign, simply averages the alignment cost along the path p and is defined as

where Np is the number of vertices on path p. The length energy term, Elength, computes the proportion of matched user stroke segments for each path and favors extracted lines whose length is as close as possible to that of the coherence stroke. Elength is defined as

where Nmatched is the number of matched stroke segments and Nuser is the number of stroke segments. For the coherence energy term, Ecoherence, we simply average the edge weights along the path p:

where Np − 1 is the number of edges on path p. The three energy terms are combined using weighting parameters a, b, and c. The path with minimal total energy is selected as the most prominent line that matches the input coherence stroke. Note that we use the empirical values a = b = c = 1 as defaults to generate all results shown in this paper. Figure 5 shows some results of coherence stroke matching.
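A sketch of the path selection step: Floyd–Warshall over the weighted digraph, followed by scoring each reconstructed path with a weighted sum of the three terms using the default a = b = c = 1. The per-term formulas (averaging, and 1 − Nmatched/Nuser for the length term) are reconstructed from the descriptions above and should be read as assumptions.

```python
import itertools

def floyd_warshall(n, weight):
    """All-pairs shortest paths; weight maps directed index pairs (i, j) to W(vi, vj)."""
    INF = float("inf")
    dist = [[0.0 if i == j else weight.get((i, j), INF) for j in range(n)]
            for i in range(n)]
    nxt = [[j if (i, j) in weight else None for j in range(n)] for i in range(n)]
    for k, i, j in itertools.product(range(n), repeat=3):
        if dist[i][k] + dist[k][j] < dist[i][j]:
            dist[i][j] = dist[i][k] + dist[k][j]
            nxt[i][j] = nxt[i][k]
    return dist, nxt

def reconstruct(nxt, i, j):
    """Vertex sequence of the shortest path i -> j (None if unreachable)."""
    if nxt[i][j] is None:
        return None
    path = [i]
    while i != j:
        i = nxt[i][j]
        path.append(i)
    return path

def path_energy(path, align_cost, edge_w, matched_seg, n_user_segments,
                a=1.0, b=1.0, c=1.0):
    """Assumed E(p) = a*E_align + b*E_length + c*E_coherence with a = b = c = 1.

    align_cost[v]: alignment cost of vertex v; edge_w[(u, v)]: edge weight;
    matched_seg[v]: index of the stroke segment matched to v.
    """
    e_align = sum(align_cost[v] for v in path) / len(path)
    e_length = 1.0 - len({matched_seg[v] for v in path}) / n_user_segments
    e_coherence = (sum(edge_w[(u, v)] for u, v in zip(path, path[1:]))
                   / (len(path) - 1)) if len(path) > 1 else 0.0
    return a * e_align + b * e_length + c * e_coherence
```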

    4.2 Temporospatially neighboring strokes

    To distinguish coherence strokes that are close to each other,a co-analysis of multiple strokes is performed to match them to different nearby image lines with respect to the underlying image edges,as proposed in Ref.[8].In this paper,we implement a similar function to distinguish temporospatially neighboring coherence strokes.

Fig. 5 Coherence line matching results. (a) Input images and user-specified coherence strokes. (b) Detailed line drawings using fDoG [2]. (c) Lines extracted by coherence line matching.

The temporal neighboring relationship is determined by the drawing order. We take the most recent coherence stroke as the temporal neighbor of the new coherence stroke. For its spatial neighbors, we consider its parallel neighbor and its contiguous neighbor. The parallel neighbor is defined to be the neighboring stroke that is closest in distance and nearly parallel to the current stroke. Parallel neighbors arise when the user wants to extract lines that are close to each other but finds it difficult to precisely align them by hand sketching. In order to avoid extracting the same lines when parallel neighbor strokes are given, we use an energy function to balance the results for such neighbor strokes:

where the two arguments are the candidate paths derived from the current stroke pc and the neighbor stroke pn, respectively, and E(p) is the energy function defined in Eq. (7). Econflict is an energy term designed to prevent the same lines from being extracted for parallel neighbor strokes. It is given by

where Nconflict is the number of duplicated line segments.
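The combined formula for parallel neighbors is not reproduced above; as a hedged sketch, the pair of candidate paths can be chosen to minimize the two path energies plus a conflict term that penalizes the fraction of duplicated line segments. Both the normalization of Econflict and the plain additive combination are assumptions.

```python
def e_conflict(path_c, path_n):
    """Assumed conflict term: fraction of line segments shared by the two paths."""
    shared = len(set(path_c) & set(path_n))
    return shared / min(len(path_c), len(path_n))

def best_parallel_pair(candidates_c, candidates_n, energy):
    """Pick one candidate path per stroke minimizing E(p_c) + E(p_n) + E_conflict
    (assumed combination); energy(p) is the path energy of Eq. (7)."""
    return min(
        ((pc, pn) for pc in candidates_c for pn in candidates_n),
        key=lambda pair: energy(pair[0]) + energy(pair[1]) + e_conflict(*pair),
    )
```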

The contiguous neighbor is defined to be a neighboring stroke that should be connected with the current stroke. Contiguous neighbors arise when the user wants to draw a long stroke but, for some reason, uses two separate strokes to express this intent. In order to extract aligned, long, and coherent image lines, we use an energy function similar to that in Eq. (7) to balance the results of contiguous neighbor strokes.

Fig. 6 Temporospatially neighboring strokes. (a) User input with parallel neighboring strokes on the left and contiguous neighboring strokes on the right. (b) Extracted lines without applying Eparallel and Econtiguous. (c) Extracted lines using Eparallel and Econtiguous.

    4.3 Structure line matching

Matching structure strokes requires us to collect evidence along the structure stroke. A structure cost is then used to evaluate the structure similarity of candidate line segments to the collected evidence.

Evidence collection. For a structure stroke, we need to extract line segments that have similar properties within the drawing region. Structure strokes do not need to align with the image lines; they are used for region identification. Sufficient evidence is collected as a basis to infer all other image line segments that match similar structures within the search range indicated by the user input stroke. Firstly, the intersections of the user input stroke with the line segments from the line image are gathered. Secondly, for each intersection, we obtain two 3×3 patches along the tangent line on both sides of the intersected image line segment at the intersection point. Lastly, the means of these two patches are calculated. All such pairs of means are used as the evidence for testing the structure similarity of line segments in the search range. Note that the search range is the same as for coherence strokes, with a radius of 15 pixels. Figure 7 shows an example of evidence collection.
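A sketch of evidence collection under some stated assumptions: the image is an H×W×3 array (e.g., in L*a*b*), the stroke is given as a set of rasterized pixel coordinates, each candidate segment is a list of pixels, and the two 3×3 patches are placed a couple of pixels to either side of the segment along its normal (the exact placement offset is an assumed detail).

```python
import numpy as np

def patch_mean(image, cx, cy, half=1):
    """Mean color of the 3x3 patch centred at (cx, cy); image is HxWx3."""
    h, w = image.shape[:2]
    cx, cy = min(max(cx, 0), w - 1), min(max(cy, 0), h - 1)
    return image[max(cy - half, 0):cy + half + 1,
                 max(cx - half, 0):cx + half + 1].reshape(-1, 3).mean(axis=0)

def collect_evidence(image, stroke_pixels, segments, offset=2):
    """Gather evidence pairs (mean colors on both sides of each intersected segment).

    stroke_pixels: set of (x, y) stroke coordinates; segments: lists of pixels.
    """
    evidence = []
    for seg in segments:
        for (x, y) in seg:
            if (x, y) in stroke_pixels:                  # stroke/segment intersection
                (x0, y0), (x1, y1) = seg[0], seg[-1]     # coarse tangent estimate
                tx, ty = x1 - x0, y1 - y0
                norm = float(np.hypot(tx, ty)) or 1.0
                nx, ny = -ty / norm, tx / norm           # unit normal to the segment
                side_a = patch_mean(image, int(round(x + offset * nx)),
                                    int(round(y + offset * ny)))
                side_b = patch_mean(image, int(round(x - offset * nx)),
                                    int(round(y - offset * ny)))
                evidence.append((side_a, side_b))
    return evidence
```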

Structure cost. After collecting all available evidence, the image lines that intersect the search range are considered as candidate line segments for extraction, depending on their structure similarity. For a candidate line segment vi having N points, its color difference Dcolor(vi, R) to the evidence set R is defined as

where evi is the collection of pairs of means, calculated in the same way as the evidence, for every point of vi. The distance operator represents the CIE94 color difference [20] measured in L*a*b* color space.

Fig. 7 Evidence collection. (a) Blue scribble: input structure stroke; black lines: candidate line segments. (b) Close-up; red dots: intersections of user strokes and candidate line segments. (c) 10 pairs of means, out of 67 in total, illustrating examples of evidence.

For each mean pair m, we find the most similar evidence pair r, i.e., the one with the smallest color difference, from the evidence set R. The average of these smallest differences over all points of vi is regarded as the color difference of the entire candidate line segment. The structure cost function Cstructure(i) is then calculated for each candidate line segment vi:

where Dcolor is the color difference of the candidate line segment to the collected evidence set, and lk is the image line to which line segment vi belonged before line splitting. W(lk) is a weighting function depending on the length of the image line lk to which the candidate line segment vi belongs, and is defined by

where lmin and lmax are respectively the shortest and longest lengths of the candidate lines before being split. This formulation causes matching to favor longer image lines in order to provide better line coherence.

To extract appropriate line segments, the candidate line segments are sorted by cost, and those with lower cost are preferred for extraction. The default proportion of extracted candidate line segments is 70%, and the user can also adjust this proportion. Figure 8 shows some structure line matching results with different proportions of line segments. Here, we enrich the rendering of these line drawings with colors sampled from the original images to help clarify the differences between the cases.
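A sketch tying the pieces together, with plain Euclidean distance in L*a*b* standing in for the CIE94 formula [20], an assumed form for the length weight W(lk) (lower weight for longer parent lines, so that longer image lines receive lower cost), and the default 70% selection from the text. All of these stand-ins are assumptions rather than the paper's exact definitions.

```python
import numpy as np

def color_difference(candidate_pairs, evidence):
    """Average, over the segment's points, of the smallest difference to the evidence.

    candidate_pairs: per-point pairs of 3x3 patch means for segment v_i;
    evidence: pairs gathered along the structure stroke. Euclidean Lab distance
    stands in for CIE94 [20].
    """
    best_per_point = [
        min(np.linalg.norm(m_a - r_a) + np.linalg.norm(m_b - r_b)
            for r_a, r_b in evidence)
        for m_a, m_b in candidate_pairs
    ]
    return float(np.mean(best_per_point))

def length_weight(parent_len, l_min, l_max):
    """Assumed W(l_k): maps parent-line length to [0.5, 1]; longer lines -> lower weight."""
    if l_max == l_min:
        return 1.0
    return 1.0 - 0.5 * (parent_len - l_min) / (l_max - l_min)

def structure_cost(candidate_pairs, evidence, parent_len, l_min, l_max):
    """Assumed combination: color difference scaled by the length weight."""
    return color_difference(candidate_pairs, evidence) * length_weight(parent_len, l_min, l_max)

def select_segments(candidates, costs, proportion=0.7):
    """Keep the default 70% of candidate segments with the lowest structure cost."""
    order = sorted(range(len(candidates)), key=lambda i: costs[i])
    return [candidates[i] for i in order[:int(round(proportion * len(candidates)))]]
```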

Fig. 8 Structure line matching with different proportions of line segments. (a) Input image and structure strokes. (b) Detailed line drawings using fDoG [2]. (c)–(e) Line segments with costs less than one, two, or three standard deviations, respectively.

    5 Results and evaluation

    We have tested our system on a wide variety of images across different users and generated 14 line abstractions with only a small number of user strokes.A few examples can be found in Fig.9 and we refer the reader to the Electronic Supplementary Material for a full gallery.

    5.1 Evaluation

In this section, we give the results of several experiments to evaluate the performance of our system against naive and state-of-the-art methods. In particular, our system is compared with two state-of-the-art methods by Yang et al. [16] and Su et al. [8] (EZ-Sketching), which share the same goal as our system of generating long coherent lines from user strokes. We also implemented two naive approaches for a baseline comparison: (i) extracting lines that are near to the user strokes, within a distance threshold of 15 pixels (NN); and (ii) using all the lines that intersect user strokes (NI).

Performance of coherence line matching. We evaluate the performance of our coherence line matching algorithm against the above four alternatives in terms of visual quality and edge detection accuracy with respect to the ground truth. To do so, we used the same benchmark as Yang et al. [16] and took gPb [6] edge maps as the input reference line images for our system. For a fair comparison, we imitated 10 results shown in Ref. [16] by carefully tracing their user strokes using our coherence strokes. These coherence strokes were also used as input to EZ-Sketching [8] to generate outputs for comparison. Figure 10 shows a side-by-side comparison of the results. Our results are visually comparable to those of EZ-Sketching [8], and superior to those of Ref. [16] and both naive approaches in terms of smoothness and conciseness. We further used precision P, recall R, and the F-measure (the weighted harmonic mean of P and R) to evaluate edge detection accuracy. Table 1 shows that our algorithm achieves performance comparable to Ref. [16] and clearly outperforms EZ-Sketching [8] in terms of F-measure. Note that Yang et al.'s method achieves better recall than ours because the ground truths often contain lines that are not expected by the user. For example, in the second row of Fig. 10(d), the noisy branches around the man's contour come from the shapes of the lanterns in the background, which are also included in the ground truth (see Fig. 10(b)). On the other hand, although EZ-Sketching snaps user strokes to nearby edges, it tends to retain the style of the user strokes instead of emphasizing the precision of the refined strokes; therefore, its precision and recall are relatively low.
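For reference, the F-measure reported in Table 1 is the weighted harmonic mean of precision and recall. The sketch below computes a strict pixel-wise version over binary edge maps; the benchmark's tolerance-based matching between predicted and ground-truth edge pixels is deliberately omitted.

```python
import numpy as np

def precision_recall_f(pred_edges, gt_edges, beta=1.0):
    """Strict pixel-wise precision P, recall R, and F-measure for binary edge maps.

    pred_edges, gt_edges: boolean arrays of identical shape. beta weights recall
    relative to precision (beta = 1 gives the standard harmonic mean).
    """
    tp = np.logical_and(pred_edges, gt_edges).sum()
    p = tp / pred_edges.sum() if pred_edges.sum() else 0.0
    r = tp / gt_edges.sum() if gt_edges.sum() else 0.0
    f = (1 + beta ** 2) * p * r / (beta ** 2 * p + r) if (p + r) else 0.0
    return p, r, f
```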

Fig. 9 Four results generated using our system. (a) Input image. (b) Detailed line drawings by fDoG [2]. (c) User strokes. (d) Final line abstractions.

Fig. 10 Comparison with four other methods. (a) Input image and user strokes. (b) Ground truths corresponding to detailed line drawings by gPb [6]. (c)–(g) Lines extracted by (c) our system, (d) Yang et al. [16], (e) EZ-Sketching [8], (f) naive near-neighbor search (NN), and (g) naive line–stroke intersection test (NI).

Performance of structure line matching. Since neither Yang et al.'s system nor EZ-Sketching is designed to handle scribbles, we evaluated the performance of the structure line matching algorithm only in comparison with the naive methods (NN and NI). A side-by-side comparison can be found in Fig. 11. Note that we enrich the rendering of the line drawings with colors sampled from the original images to better show how our algorithm effectively captures lines with similar features, while the naive approaches tend to generate results with excessive (NN) or insufficient (NI) detail.

    Table 1 Edge detection accuracy

User study. We evaluated the overall quality of line abstractions by conducting a user study. Specifically, we prepared two sets of images, each of which contains 10 example images with pre-drawn user strokes. One set was used to evaluate coherence line matching while the other was used for structure line matching. For both sets, we generated three results for each example using our system and the two naive methods (NN and NI). The result generated by EZ-Sketching [8] was also included for the set used to evaluate coherence line matching. During each trial, the subject was shown the original image with user strokes and the line abstractions produced by the different methods. The subject was then asked to grade each result with a score of 1–5 (the higher, the better) according to its degree of completeness, cleanness, and agreement with expectation compared to the input strokes. The average scores over 11 subjects are given in Fig. 12.

Fig. 11 Comparison with naive methods. (a) Input image and user strokes. (b) Detailed color line drawings by fDoG [2]. (c)–(e) Lines extracted by (c) our system with costs less than one standard deviation, (d) naive near-neighbor search (NN), and (e) naive line–stroke intersection test (NI).

Fig. 12 Average scores of different methods in the user study (the higher, the better).

For coherence line matching, there was a statistically significant difference between groups as determined by one-way ANOVA (F(3,36) = 21.646, p < 0.001). An LSD post-hoc test revealed that the scores for NN (1.26 ± 0.2, p < 0.001) and NI (2.04 ± 0.26, p < 0.001) were statistically significantly lower than that for our method (4.05 ± 0.28). There was no statistically significant difference between our method and EZ-Sketching [8] (p = 0.152). According to the participant feedback, some of them cared more about the smoothness and completeness of the coherence lines than about their precision. Since EZ-Sketching [8] refines the user strokes to snap them to nearby edges, while our method extracts lines from images which are originally composed of many incoherent line segments, EZ-Sketching [8] tended to receive higher scores from some participants.

For structure line matching, there was a statistically significant difference between groups as determined by one-way ANOVA (F(2,27) = 56.429, p < 0.001). An LSD post-hoc test revealed that the scores for NN (2.48 ± 0.62, p < 0.001) and NI (2.68 ± 0.53, p < 0.001) were statistically significantly lower than that for our method (4.53 ± 0.14). There was no statistically significant difference between NN and NI (p = 0.355).

System usability. We conducted a small user study with 3 subjects to test the usability of our system against EZ-Sketching [8]. During each trial, the subject was asked to generate a line drawing with a level of detail comparable to a given reference image, using both our system and EZ-Sketching [8], and we recorded how long the subjects took to finish the line drawings. The timing statistics can be found in Table 2, and examples are shown in Fig. 13. The results indicate that users take more time when using EZ-Sketching [8] to generate a line drawing with a target level of detail.

Speed. Once the user draws a stroke, our system can extract the corresponding line segments at an interactive rate. For all the images we tested, our system took on average less than one second to perform coherence line matching or structure line matching. The time complexity of both line matching algorithms is proportional to the number of candidate line segments involved in the computation.

    Table 2 Time taken to generate line drawings

    5.2 Limitations

The quality of the extracted lines is currently limited by the input detailed line drawings. First, our system cannot extract lines that are not present in the dataset. For instance, the duckling shown in Fig. 14(a) presents a jagged outline, as a result of which most image filtering algorithms fail to generate long coherent lines (see Fig. 14(b)). In such cases, our system cannot extract long coherent lines using coherence strokes (see Fig. 14(c)). On the other hand, the quality of the extracted structure lines depends on the degree of color diversity in the input image. Since structure line matching depends on the color differences of line segments, the system may fail to extract meaningful structure lines if the reference image lacks color diversity within the stroke's ROI (see Figs. 14(d)–14(f)).

Fig. 14 Limitations. (a, d) Input images with coherence strokes and structure strokes, respectively. (b, e) Detailed line drawings of (a, d), respectively. (c, f) Line abstractions produced by our system. Note that (c) fails to extract coherence lines due to the noisy line segments along the duck's boundary in (b). Due to the small color difference between the sepals and the flower stem in (d), our system extracts lines from both the sepals and the stem even though the user is only interested in the sepal region.

    6 Conclusions

In this work, we have presented a novel interactive system for generating a concise, semantic line abstraction guided by a few user strokes. The user strokes are classified into coherence strokes and structure strokes to facilitate extracting effective line drawings from arbitrary images. For a coherence stroke, we build a graph and apply an energy function to extract lines that are coherent and aligned with the user stroke. For a structure stroke, we calculate the color difference between the candidate lines and the evidence, allowing lines with similar structures to be extracted. Our system is efficient and can respond in real time. Its effectiveness has been verified by comparison with other line extraction approaches; the results show that our approach is superior to other systems in terms of quality and efficiency.

    Acknowledgements

We are grateful to the anonymous reviewers for their comments and suggestions. The work was supported in part by the "Ministry of Science and Technology of Taiwan" (Nos. 103-2221-E-007-065-MY3 and 105-2221-E-007-104-MY2).

    Electronic Supplementary Material Supplementary material is available in the online version of this article at http://dx.doi.org/10.1007/s41095-016-0076-y.

References

[1] Canny, J. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. PAMI-8, No. 6, 679–698, 1986.

[2] Kyprianidis, J. E.; Döllner, J. Image abstraction by structure adaptive filtering. In: Proceedings of the EG UK Theory and Practice of Computer Graphics, 51–58, 2008.

[3] Winnemöller, H.; Olsen, S. C.; Gooch, B. Real-time video abstraction. ACM Transactions on Graphics Vol. 25, No. 3, 1221–1226, 2006.

[4] Winnemöller, H.; Kyprianidis, J. E.; Olsen, S. C. XDoG: An extended difference-of-Gaussians compendium including advanced image stylization. Computers & Graphics Vol. 36, No. 6, 740–753, 2012.

[5] Arbelaez, P.; Maire, M.; Fowlkes, C.; Malik, J. Contour detection and hierarchical image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 33, No. 5, 898–916, 2011.

[6] Maire, M.; Arbelaez, P.; Fowlkes, C.; Malik, J. Using contours to detect and localize junctions in natural images. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1–8, 2008.

[7] Limpaecher, A.; Feltman, N.; Treuille, A.; Cohen, M. Real-time drawing assistance through crowdsourcing. ACM Transactions on Graphics Vol. 32, No. 4, Article No. 54, 2013.

[8] Su, Q.; Li, W. H. A.; Wang, J.; Fu, H. EZ-sketching: Three-level optimization for error-tolerant image tracing. ACM Transactions on Graphics Vol. 33, No. 4, Article No. 54, 2014.

[9] Gleicher, M. Image snapping. In: Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, 183–190, 1995.

[10] Li, Y.; Sun, J.; Tang, C.-K.; Shum, H.-Y. Lazy snapping. ACM Transactions on Graphics Vol. 23, No. 3, 303–308, 2004.

[11] Lee, Y. J.; Zitnick, C. L.; Cohen, M. F. ShadowDraw: Real-time user guidance for freehand drawing. ACM Transactions on Graphics Vol. 30, No. 4, Article No. 27, 2011.

[12] Iarussi, E.; Bousseau, A.; Tsandilas, T. The drawing assistant: Automated drawing guidance and feedback from photographs. In: Proceedings of the ACM Symposium on User Interface Software and Technology, 2013.

[13] Dixon, D.; Prasad, M.; Hammond, T. iCanDraw: Using sketch recognition and corrective feedback to assist a user in drawing human faces. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 897–906, 2010.

[14] Lu, J.; Barnes, C.; DiVerdi, S.; Finkelstein, A. RealBrush: Painting with examples of physical media. ACM Transactions on Graphics Vol. 32, No. 4, Article No. 117, 2013.

[15] Berger, I.; Shamir, A.; Mahler, M.; Carter, E.; Hodgins, J. Style and abstraction in portrait sketching. ACM Transactions on Graphics Vol. 32, No. 4, Article No. 55, 2013.

[16] Yang, S.; Wang, J.; Shapiro, L. Supervised semantic gradient extraction using linear-time optimization. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2826–2833, 2013.

[17] Wobbrock, J. O.; Wilson, A. D.; Li, Y. Gestures without libraries, toolkits or training: A $1 recognizer for user interface prototypes. In: Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology, 159–168, 2007.

[18] Orbay, G.; Kara, L. B. Beautification of design sketches using trainable stroke clustering and curve fitting. IEEE Transactions on Visualization and Computer Graphics Vol. 17, No. 5, 694–708, 2011.

[19] Floyd, R. W. Algorithm 97: Shortest path. Communications of the ACM Vol. 5, No. 6, 345, 1962.

[20] McDonald, R.; Smith, K. J. CIE94—A new colour-difference formula. Journal of the Society of Dyers and Colourists Vol. 111, No. 12, 376–379, 1995.

    Ya-Hsuan Lee received her B.S. degree in computer science from“National Tsing Hua University”, Taiwan,China,in 2016.She is currently working at MediaTek as an engineer.Her research interests include computer graphics and computer vision.

    Ruen-Rone Lee received his Ph.D. degree in computer science from“National Tsing Hua University”, Taiwan,China,in 1994.From 1994 to 2010,he worked in several IC design companies for graphics hardware and software development.Later,from 2010 to 2015,he was an associate researcher with the Department of Computer Science,“National Tsing Hua University”.He is currently a deputy director in the Information and Communications Research Laboratories,Industrial Technology Research Institute, Taiwan,China.His research interests include computer graphics,non-photorealistic rendering,and graphics hardware architecture design.He is a member of the IEEE Computer Society and the ACM SIGGRAPH.

    Hung-Kuo Chu received his Ph.D. degree from the Department of Computer Science and Information Engineering,“National Cheng Kung University”,Taiwan,China,in 2010. He is currently an associate professor at the Department of Computer Science,“National Tsing Hua University”.His research interests focus on shape understanding,smart manipulation,perception-based rendering,recreational graphics,and human computer interaction.

    Open Access The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License(http:// creativecommons.org/licenses/by/4.0/),which permits unrestricted use,distribution,and reproduction in any medium,provided you give appropriate credit to the original author(s)and the source,provide a link to the Creative Commons license,and indicate if changes were made.

    Other papers from this open access journalare available free of charge from http://www.springer.com/journal/41095. To submit a manuscript,please go to https://www. editorialmanager.com/cvmj.

Hui-Chi Tsai received her bachelor and master degrees in computer science from "National Tsing Hua University", Taiwan, China. She is currently a software engineer in the Information and Communications Research Laboratories, Industrial Technology Research Institute, Taiwan, China. Her research interests include computer graphics and computer vision.

1 Department of Computer Science, "National Tsing Hua University", No. 101, Section 2, Kuang-Fu Road, Hsinchu, Taiwan 30013, China. E-mail: H.-C. Tsai, beck394@itri.org.tw; Y.-H. Lee, louiselee602@gmail.com; H.-K. Chu, hkchu@cs.nthu.edu.tw

2 Information and Communications Research Laboratories, Industrial Technology Research Institute, No. 195, Section 4, Chung Hsing Road, Chutung, Hsinchu, Taiwan 31040, China. E-mail: rrlee@itri.org.tw

    Manuscript received:2016-09-09;accepted:2016-12-21
