
    Clothing Parsing Based on Multi-Scale Fusion and Improved Self-Attention Mechanism


    CHEN Nuo(陳 諾), WANG Shaoyu(王紹宇), LU Ran (陸 然), LI Wenxuan(李文萱), QIN Zhidong(覃志東), SHI Xiujin(石秀金)

    College of Computer Science and Technology, Donghua University, Shanghai 201620, China

    Abstract:Due to the lack of long-range association and spatial location information, fine details and accurate boundaries of complex clothing images cannot always be obtained by using the existing deep learning-based methods. This paper presents a convolutional structure with multi-scale fusion to optimize the step of clothing feature extraction and a self-attention module to capture long-range association information. The structure enables the self-attention mechanism to directly participate in the process of information exchange through the down-scaling projection operation of the multi-scale framework. In addition, the improved self-attention module introduces the extraction of 2-dimensional relative position information to make up for its lack of ability to extract spatial position features from clothing images. The experimental results based on the colorful fashion parsing dataset (CFPD) show that the proposed network structure achieves 53.68% mean intersection over union (mIoU) and has better performance on the clothing parsing task.

    Key words:clothing parsing; convolutional neural network; multi-scale fusion; self-attention mechanism; vision Transformer

    0 Introduction

In recent years, with the continuous development of the garment industry and the steady improvement of e-commerce platforms, people have gradually started to pursue personalized clothing matching. Virtual fitting technology enables consumers to accurately obtain clothing information and achieve clothing matching. At the same time, designers can learn about consumers' shopping preferences through trend forecasting to grasp current fashion trends. For these purposes, parsing techniques that can extract various types of fashion items from complex images become a prerequisite. However, clothing parsing is limited by various factors such as different clothing styles, models and environments. To solve this problem, researchers have proposed solutions from the perspectives of pattern recognition[1-2] and deep learning[3-6].

Recently, researchers have tried to introduce a model named Transformer to vision tasks. Dosovitskiy et al.[7] proposed the vision Transformer (ViT), which divides an image into multiple patches and feeds the sequence of linear embeddings of these patches, together with position embeddings, into a Transformer. This achieves surprising performance on large-scale datasets. On this basis, researchers explored integrating the Transformer into traditional encoder-decoder frameworks to adapt it to image parsing tasks. Zheng et al.[8] proposed the segmentation Transformer (SETR), which consists of a pure Transformer and a simple decoder, confirming the feasibility of ViT for image parsing tasks. Inspired by frameworks such as SETR, Xie et al.[9] proposed SegFormer, which does not require positional encoding and avoids the use of complex decoders. Liu et al.[10] improved the patch form of ViT and proposed the Swin Transformer, which achieves better results through shifted windows and relative positional encoding.

Despite the great promise of applying Transformers to image parsing, several challenges remain when applying the Transformer architecture to parse clothing images. First, due to time-complexity limitations, current Transformer architectures either use the flattened patches of a segmented image as the input sequence for self-attention or feed the low-resolution feature map of a convolutional backbone into the Transformer encoder. However, for complex clothing images, feature maps at different scales can affect the final parsing results. Second, the pure Transformer performs poorly on small-scale datasets because it lacks the inductive biases suited to visual tasks, while the clothing parsing task, with its rich variety and the high resolution of practical applications, lacks the large-scale data required by a pure Transformer.

In this paper, we propose a network named MFSANet based on multi-scale fusion and improved self-attention. MFSANet combines the respective advantages of convolution and self-attention for clothing parsing. We follow the framework design of the high-resolution network (HRNet)[11] to extract and exchange long-range association information and position information in each stage. MFSANet achieves good results on a general clothing parsing dataset and shows promise for good generalization in downstream tasks.

    1 Related Work

    1.1 Multi-scale fusion

Convolutional neural networks applied to image parsing essentially abstract the target image layer by layer to extract features at each level. Since Zeiler et al.[12] proposed the visualization of convolutional neural networks to formally describe the differences between deep and shallow feature maps in terms of geometric and semantic information, more and more researchers have focused on multi-scale fusion networks. Lin et al.[13] proposed the feature pyramid network (FPN) with a pyramidal multi-scale structure. It became a template for subsequent pyramidal network designs by down-sampling features to the bottom layer and then recovering and fusing them layer by layer to obtain high-resolution features. Chen et al.[14] proposed DeepLab with atrous convolution, which obtains multi-scale information by adjusting the receptive field of the filter. HRNet, with a parallel convolution structure, maintains and aggregates features at each resolution, thus enhancing the extraction of multi-scale features. Inspired by the strong aggregation capability of the HRNet framework, we incorporate the long-range association information of the Transformer to achieve clearer parsing.

    1.2 Transformer

As the first work to transfer the Transformer to image tasks with few changes and achieve state-of-the-art results on large-scale image datasets, ViT splits the image into multiple patches, attaches positional embeddings, and feeds the resulting sequence into the Transformer encoder. Based on that, Wang et al.[15] constructed the pyramid vision Transformer (PVT) with a pyramid structure, demonstrating the feasibility of applying the Transformer to multi-scale fusion structures. Chu et al.[16] proposed Twins, which enhances the Transformer's representation of hierarchical features without an absolute position embedding, optimizing Transformer performance on dense prediction tasks.

    2 Methods

    2.1 Overall architecture

Figure 1 highlights the overall structure of MFSANet. We combine the characteristics of convolution and the Transformer to extract local features and long-range information, and use the multi-scale fusion structure to achieve information exchange. Meanwhile, the inductive bias of the convolutional part of the hybrid network can effectively alleviate the Transformer's need for large-scale training samples. We include the Transformer module on the high-resolution feature maps that have participated in more rounds of multi-scale fusion, so as to make fuller use of information from multiple scales and incorporate long-range association information.

    2.2 Multi-scale representation

Unlike the linear processing of hybrid networks such as SETR and SegFormer, our network performs multi-scale parallel feature extraction on the input image. These multi-scale features cover different local features of the image at different scales, which allows the rich semantic information in clothing images to be fully extracted and improves parsing performance. After the parallel feature extraction is completed, a fusion operation exchanges multi-scale information so that the feature map at each scale is generated with the help of both high- and low-resolution semantic information and thus contains richer information. Specifically, for stage k ∈ {2, 3, 4}, represented as the blue region in Fig. 1, given a multi-scale input feature map, the output of the stage X′ is expressed as

X′ = Fusion(BasicBlock(X_1), BasicBlock(X_2), …, BasicBlock(X_k)),  (1)

where X_i is the input feature map of the i-th scale, Fusion denotes the fusion operation on the feature maps, and BasicBlock indicates the corresponding residual or Transformer operation (in Fig. 1, MHSA denotes multi-head self-attention and MLP denotes multi-layer perceptron). Each stage receives all the scale feature maps output by the previous stage and, after processing and fusion, generates the input feature maps of the next stage. As shown in Fig. 1, there are two basic blocks inside the stages, i.e., the Transformer basic block and the residual basic block. The former is responsible for extracting long-range association information from the high-resolution feature maps. The latter is responsible for extracting local feature information from the low- and medium-resolution feature maps and completing the information exchange in the multi-scale fusion. In particular, the high-resolution feature map of stage 1 also passes through the residual basic block to complete the local feature extraction for the high-resolution branch.
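To make the stage computation in Eq. (1) concrete, the following PyTorch-style sketch shows one way a stage could apply a per-scale basic block and then fuse all scales into each output branch. The module name MultiScaleStage, the 1×1 alignment convolutions and the bilinear resampling are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleStage(nn.Module):
    """Illustrative stage: per-scale basic blocks followed by all-to-all fusion.

    blocks[i] is the residual (or Transformer) basic block for scale i;
    channels[i] is the channel count of scale i. Names are hypothetical.
    """
    def __init__(self, channels, blocks):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)
        # 1x1 convolutions to align channel counts before fusing scales
        self.align = nn.ModuleList([
            nn.ModuleList([nn.Conv2d(channels[j], channels[i], kernel_size=1)
                           for j in range(len(channels))])
            for i in range(len(channels))
        ])

    def forward(self, xs):
        # 1) per-scale feature extraction (BasicBlock in Eq. (1))
        feats = [blk(x) for blk, x in zip(self.blocks, xs)]
        # 2) fusion: every output scale sums resampled features from all scales
        outs = []
        for i, ref in enumerate(feats):
            fused = ref
            for j, f in enumerate(feats):
                if j == i:
                    continue
                g = self.align[i][j](f)
                g = F.interpolate(g, size=ref.shape[-2:], mode="bilinear",
                                  align_corners=False)
                fused = fused + g
            outs.append(fused)
        return outs
```

Each output branch therefore mixes semantic information from every resolution, matching the Fusion(·) step described above.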

    2.3 Improved self-attention mechanism

The standard Transformer is built on the Multi-Head Attention (MHA) module[17], which uses multiple heads to compute self-attention synchronously and finally concatenates and projects the results, enabling the network to jointly attend to information from different representation subspaces at different locations. For simplicity, the subsequent descriptions are developed based on a single attention head. The standard Transformer takes the feature map X ∈ R^(C×H×W) as input, where C is the number of channels, and H and W are the height and width of the feature map. After projection and flattening, X yields the self-attention inputs Q, K, V ∈ R^(n×d), where Q, K and V are the Query, Key and Value sequences, respectively; n = H×W; and d is the dimension of each head. The key to self-attention is the scaled dot-product, computed as

Attention(Q, K, V) = softmax(QK^T/√d)V.
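As a minimal reference for the scaled dot-product above, a single-head implementation could look like the following sketch; PyTorch tensors of shape (n, d) are assumed.

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    """Single-head attention: softmax(QK^T / sqrt(d)) V.

    q, k, v: tensors of shape (n, d), where n = H * W after flattening.
    """
    d = q.shape[-1]
    logits = q @ k.transpose(-2, -1) / math.sqrt(d)  # (n, n) correlation matrix
    weights = torch.softmax(logits, dim=-1)          # row-wise attention weights
    return weights @ v                               # (n, d) attended values
```

The (n, n) logit matrix built here is exactly the term whose quadratic cost the next paragraph addresses.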

The self-attention mechanism builds the correlation matrix between sequences by a matrix dot product, thus obtaining long-range association information. However, this matrix dot product also brings O(n²d) complexity, and for high-resolution images such as clothing images n is much larger than in typical vision tasks, so self-attention at high resolution is limited by the input size. Quadratic complexity is unacceptable for current clothing images, which often contain millions of pixels.

For clothing images, most local regions within the image have highly similar features, which makes much of the global inter-pixel association computation redundant. Meanwhile, theory shows that the contextual association matrix P in the self-attention mechanism is of low rank for long sequences[18].

Based on this, we design a self-attention mechanism for high-resolution images, as shown in Fig. 2, and introduce a dimensionality-reduction projection operation to generate the equivalent sequences K_r = Projection(K) ∈ R^(n_r×d) and V_r = Projection(V) ∈ R^(n_r×d), where n_r = H_r×W_r ≪ n, and H_r and W_r are the reduced height and width of the input after projection, respectively. The modified formula is

Attention(Q, K_r, V_r) = softmax(QK_r^T/√d)V_r.

    Fig.2 Improved self-attention mechanism architecture

By the dimensionality-reduction projection operation, we reduce the complexity of the matrix computation to O(nn_rd) without a large impact on the Transformer's effect, thus allowing it to adapt to the high-resolution input of clothing images. After the input width and height are reduced to a fixed value or proportionally smaller values, the complexity is low enough to allow self-attention operations at high resolution.
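A common way to realise such a down-scaling projection is to pool or stride the key/value feature map to a fixed H_r × W_r grid before attention, so that the logit matrix becomes n × n_r instead of n × n. The sketch below uses adaptive average pooling as the projection; the pooling choice, the class name ReducedAttention and the single-head form are assumptions for illustration rather than the exact operator used in MFSANet.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReducedAttention(nn.Module):
    """Single-head self-attention with spatially reduced keys/values.

    Keys and values are pooled to a fixed (h_r, w_r) grid, so the logit
    matrix is (H*W) x (h_r*w_r) instead of (H*W) x (H*W). Adaptive average
    pooling stands in for whatever projection the network actually uses.
    """
    def __init__(self, dim, reduced_size=(16, 16)):
        super().__init__()
        self.q = nn.Conv2d(dim, dim, 1)
        self.k = nn.Conv2d(dim, dim, 1)
        self.v = nn.Conv2d(dim, dim, 1)
        self.reduced_size = reduced_size

    def forward(self, x):                                 # x: (B, C, H, W)
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)          # (B, n, C), n = H*W
        kv = F.adaptive_avg_pool2d(x, self.reduced_size)  # (B, C, h_r, w_r)
        k = self.k(kv).flatten(2).transpose(1, 2)         # (B, n_r, C)
        v = self.v(kv).flatten(2).transpose(1, 2)         # (B, n_r, C)
        logits = q @ k.transpose(1, 2) / math.sqrt(c)     # (B, n, n_r)
        out = torch.softmax(logits, dim=-1) @ v           # (B, n, C)
        return out.transpose(1, 2).reshape(b, c, h, w)
```

For the 144 × 96 input discussed in Section 3, this reduces the logit matrix from 13,824 × 13,824 to 13,824 × 256 when (H_r, W_r) = (16, 16).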

In the field of vision, there is controversy about the roles of relative and absolute position encoding for the Transformer, which led us to explore the application of relative position encoding to clothing parsing. For clothing images, their highly structured characteristics make their location features rich in detailed information. However, the standard Transformer is permutation-equivariant and thus cannot extract location information. Therefore, we refer to two-dimensional relative position encoding[19] to introduce a complement to the relative position information. The attention logit using the relative position from pixel i = (i_x, i_y) to pixel j = (j_x, j_y) is formulated as

l_(i,j) = q_i^T (k_j + r^W_(j_x - i_x) + r^H_(j_y - i_y)) / √d,  (2)

where q_i and k_j are the related query vector and key vector, and r^W_(j_x - i_x) and r^H_(j_y - i_y) are learned embeddings for the relative width and relative height, respectively. Similarly, the calculation of the relative position information is also subject to the dimensionality-reduction projection operation. As shown in Fig. 2, the corresponding learned embeddings R_H and R_W are added to the self-attention mechanism after the projection and aggregation operations. Therefore, the fully improved self-attention mechanism is expressed as

Attention(Q, K_r, V_r) = softmax((QK_r^T + S^r_H + S^r_W) / √d) V_r,  (3)

where S^r_W, S^r_H ∈ R^(HW×H_rW_r) are the matrices of relative position logits, containing the position information of relative width and relative height, respectively.
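One way the matrices S^r_W and S^r_H could be built under the projection is sketched below: each reduced key position is mapped back to an approximate full-resolution coordinate, the relative offsets to every query pixel are looked up in learned width and height embedding tables, and the resulting logits are gathered into (H·W) × (H_r·W_r) matrices. The coordinate-mapping scheme and all names here are hypothetical illustrations, not the paper's exact aggregation.

```python
import torch
import torch.nn as nn

class RelPosLogits2D(nn.Module):
    """Hypothetical 2-D relative-position logits between a full-resolution
    query grid (H x W) and a reduced key grid (H_r x W_r).
    """
    def __init__(self, dim, h, w, h_r, w_r):
        super().__init__()
        # learned embeddings for every possible width / height offset
        self.r_w = nn.Parameter(torch.randn(2 * w - 1, dim) * 0.02)
        self.r_h = nn.Parameter(torch.randn(2 * h - 1, dim) * 0.02)

        qy, qx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        # map each reduced key index back to an approximate full-res coordinate
        ky = torch.linspace(0, h - 1, h_r).round().long()
        kx = torch.linspace(0, w - 1, w_r).round().long()
        ky, kx = torch.meshgrid(ky, kx, indexing="ij")
        # offset index tables of shape (H*W, H_r*W_r), shifted to be non-negative
        idx_x = kx.reshape(1, -1) - qx.reshape(-1, 1) + (w - 1)
        idx_y = ky.reshape(1, -1) - qy.reshape(-1, 1) + (h - 1)
        self.register_buffer("idx_x", idx_x)
        self.register_buffer("idx_y", idx_y)

    def forward(self, q):                        # q: (B, H*W, dim)
        b = q.shape[0]
        all_w = q @ self.r_w.t()                 # (B, n, 2W-1): logit per width offset
        all_h = q @ self.r_h.t()                 # (B, n, 2H-1): logit per height offset
        s_w = torch.gather(all_w, 2, self.idx_x.unsqueeze(0).expand(b, -1, -1))
        s_h = torch.gather(all_h, 2, self.idx_y.unsqueeze(0).expand(b, -1, -1))
        return s_w + s_h                         # (B, n, n_r)
```

The returned (B, n, n_r) matrix would then be added to QK_r^T before the softmax, as in Eq. (3).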

    3 Experiments

    In this section, we experimentally validate the feasibility of the proposed network on the colorful fashion parsing dataset (CFPD)[20].

    3.1 Experiment preparation

A graphics processing unit (GPU) was used in the experiments to speed up the training of the network, and the AdamW optimization algorithm was used as the optimizer to accelerate convergence. We trained the network for 100 epochs, using an exponential learning rate scheduler with a warm-up strategy. In the training phase, a batch size of 4, an initial learning rate of 0.001, and a weight decay of 0.0001 were used.
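The stated setup maps onto a PyTorch configuration like the sketch below. The warm-up length, decay factor, placeholder model and data loader are assumptions made only to keep the snippet self-contained; the paper does not specify them.

```python
import torch
import torch.nn as nn

num_classes = 23                                   # hypothetical; set to the CFPD label count
model = nn.Conv2d(3, num_classes, kernel_size=1)   # placeholder standing in for MFSANet
train_loader = []                                  # placeholder for the CFPD DataLoader (batch size 4)
criterion = nn.CrossEntropyLoss()

# AdamW with the reported initial learning rate and weight decay
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)

warmup_epochs = 5      # assumption: the warm-up length is not stated in the paper
gamma = 0.95           # assumption: the exponential decay factor is not stated

def lr_lambda(epoch):
    # linear warm-up followed by exponential decay of the learning rate
    if epoch < warmup_epochs:
        return (epoch + 1) / warmup_epochs
    return gamma ** (epoch - warmup_epochs)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for epoch in range(100):                           # 100 training epochs
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```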

    3.2 Experiment results

To verify the validity of the individual components and the overall framework proposed in this paper, we set up baseline and ablation experiments on the CFPD. The experimental results are shown in Table 1, where pixel accuracy (PA) and mean intersection over union (mIoU) are used as evaluation metrics. Compared with the baseline HRNet, the PA and mIoU of MFSANet without relative position encoding (w/o RPE) increase by 0.30% and 1.40%, while the PA and mIoU of the full MFSANet increase by 0.44% and 2.08%, respectively.

    Table 1 Results of baseline and ablation experiments on the CFPD

To explore the effect of the MFSANet in improving the accuracy of clothing parsing, the parsing results are visualized in Fig. 3. For the first example, the MFSANet can divide the blouse region above the shoulder and the background region between pants and blouses, confirming its powerful ability to extract details. For the second example, the MFSANet accurately segments the sunglasses between hair and face, as well as a clear demarcation between the shorts and leg skin. For the third example, the MFSANet successfully identifies the hand skin at the boundary of the sweater on the model's left hand, demonstrating its ability to delineate inter-class boundaries by extracting association information. The MFSANet provides more consistent segmentation results for the boundaries and details of image parsing, demonstrating its effectiveness and robustness.

    Fig.3 Visual results on the CFPD

In Table 2, we compare the ability of the down-scaling projection operation to reduce complexity for different target scales. For an input feature map with H = 144 pixels and W = 96 pixels, we set H_r and W_r to the values shown in parentheses in Table 2. As can be seen in the second column, smaller scales correspond to fewer parameters, which demonstrates the effect of the operation on reducing complexity. Because relative position encoding requires building and learning position-relation maps, and this requirement is amplified by repeated use throughout the structure, the method needs the spatial complexity to be kept small. The down-scaling projection operation can adjust the memory occupation of the method to a reasonable size, which confirms its necessity. Therefore, the more reasonable parameters (16, 16) are used as our standard parameters.

    Table 2 Comparison of different scales in downscaling projection operation

    4 Conclusions

    In this paper, a clothing parsing network based on multi-scale fusion and an improved self-attention mechanism is proposed. The network integrates the ability of self-attention to extract long-range association information with the overall architecture of multi-scale fusion through appropriate dimensionality reduction projection operations and incorporates two-dimensional relative position encoding to apply the rich position information in clothing images. The network proposed in this paper can effectively utilize the information from various aspects and accomplish the task of clothing parsing more accurately, thus providing help for practical garment field applications.
