
    Generative Adversarial Network with Separate Learning Rule for Image Generation


    YIN Feng(印 峰), CHEN Xinyu(陳新雨), QIU Jie(邱 杰), KANG Yongliang(康永亮)

    1 College of Automation and Electronic Information, Xiangtan University, Xiangtan 411105, China
    2 National Engineering Laboratory of Robot Vision Perception and Control Technology, Changsha 410012, China

    Abstract: Boundary equilibrium generative adversarial networks (BEGANs) are an improved version of generative adversarial networks (GANs). In this paper, an improved BEGAN with a skip-connection technique in the generator and the discriminator is proposed. Moreover, an alternative time-scale update rule is adopted to balance the learning rates of the generator and the discriminator. Finally, the performance of the proposed method is quantitatively evaluated by Fréchet inception distance (FID) and inception score (IS). The test results show that the performance of the proposed method is better than that of the original BEGAN.

    Key words: generative adversarial network (GAN); boundary equilibrium generative adversarial network (BEGAN); Fréchet inception distance (FID); inception score (IS)

    Introduction

    As one kind of generative model, generative adversarial networks (GANs) [1] excel at generating realistic images [2-4], creating videos [5-7] and producing text [8-9]. A GAN is a deep neural network composed of two subnetworks. One subnetwork is used as a generator to synthesize data from random noise. The other is used as a discriminator to separate the synthesized data (also known as fake data) from real data. The competition between the generator and the discriminator drives both to improve until the counterfeits are indistinguishable from real data. As a powerful subclass of generative models, GANs have achieved great success in many fields such as semi-supervised learning [10], semantic segmentation [11] and small object detection [12]. However, GANs do not perform well on some practical issues. In general, it is very hard to train GANs effectively without additional auxiliaries, because they are highly sensitive to the design of the network structure and the hyper-parameters. There have been many attempts to solve these issues, mainly from three perspectives.

    The first attempt is the improvement of the objective functions. Nowozin et al. [13] noticed that not just the Jensen-Shannon (JS) divergence but any f-divergence can be placed in the GAN architecture. Least-square GAN (LSGAN) [14] replaces the sigmoid cross-entropy loss in the standard GAN with a least-square loss, which improves the quality of the generated pictures and stabilizes training by directly moving generator samples close to the real data distribution. Wasserstein GAN (WGAN) [15] introduces the Earth-Mover (EM) distance, which has superior smoothing characteristics compared with the Kullback-Leibler (KL) divergence and the JS divergence. The EM distance used as a cost function not only solves the problem of unstable training, but also provides a reliable training process indicator. WGAN-gradient penalty (WGAN-GP) is an improved version of WGAN in which an approximate 1-Lipschitz constraint on the discriminator is achieved with a gradient penalty. WGAN-GP achieves very good results in further tests.

    The second attempt is the modification of the network structure. Well-designed generators and discriminators are particularly important for GANs. The most commonly used structure in the image processing field is the convolutional neural network (CNN) [2]. The core idea of the approach in Ref. [2] is adopting and modifying demonstrated changes to CNN architectures. Experiments show that deep convolutional generative adversarial networks (DCGANs) provide a high quality generation process and are especially good at semi-supervised classification tasks. Self-attention generative adversarial networks (SAGANs) [4] use the self-attention paradigm to capture long-range spatial relationships in images to better synthesize new images. With good hardware such as tensor processing units (TPUs) and a huge number of parameters, large scale GAN training for high fidelity natural image synthesis (denoted by BigGAN) [16] increases the batch size and the number of channels to produce realistic sharp pictures.

    The third attempt is the use of additional networks or supervision conditions. In addition to different cost functions and network frameworks, additional networks or supervision conditions are often adopted to further improve the performance of GANs. It is interesting to note that the architecture of the generator in GANs does not differ significantly from that of other approaches such as variational auto-encoders (VAEs) [17]. VAEs, GANs and their variants are three kinds of generation models based on deep learning. It is normal practice to combine GANs with auto-encoder networks, as in VAE-GANs [18] and energy-based GANs (EBGANs) [19]. Compared with using the VAE alone, the VAE-GAN combining VAEs and GANs can produce clearer pictures. In the VAE-GAN, a discriminator is used to judge whether the input image comes from real data or generated samples. In contrast, the discriminator used in the EBGAN is adopted to identify the re-configurability of the input image. That is, it remembers what the real data distribution looks like and then gives a high score as long as an arbitrary input x is close to a real sample. In other improved GAN methods, supervision conditions are added. In conditional generative adversarial nets (CGANs), an additional condition variable is introduced into both the generator and the discriminator, and the involved information can be used to guide the data generation process [20]. The information maximizing GAN (Info-GAN) contains a hidden variable c, also known as the latent code. By associating the latent code and the generated data with additional constraints, c is made to contain interpretable information about the data, which helps Info-GAN [21] find an interpretable expression. Warde-Farley and Bengio [22] proposed a denoising feature matching (DFM) technique to guide the generator toward probable configurations of abstract discriminator features, using a denoising auto-encoder to greatly improve the GAN image model.

    In this paper, we propose an improved BEGAN with a skip-connection technique in the generator and the discriminator. The skip-connection technique allows feature information to be transmitted directly across layers; its greatest advantage is that it reduces the information loss during transmission, and the additional feature information improves the quality of the generated images. Moreover, an alternative time-scale update rule is adopted to balance the learning rates of the generator and the discriminator. As a result, more realistic pictures can be generated by the proposed method. Finally, we evaluate the performance of the proposed method and compare it with BEGANs [23], improving generative adversarial networks with DFM [22], adversarially learned inference (ALI) [24], improved techniques for training GANs (Improved GANs) [25] and generalization and equilibrium in generative adversarial nets (denoted by MIX+WGAN) [26].

    1 Review of Generative Adversarial Networks

    A GAN involves a generator network and a discriminator network whose purposes are to map random noise to samples and to discriminate real from generated samples, respectively [16]. Let G_θ denote the generator with parameters θ and D_φ denote the discriminator with parameters φ. Formally, the GAN objective, in its original form, involves finding a Nash equilibrium of the following two-player min-max problem, at which neither player can improve its cost unilaterally. Both players aim to minimize their own cost functions. The cost function for the discriminator is defined as

    $$J^{(D)}(\varphi,\theta)=-\mathbb{E}_{x\sim p(x)}[\log D_\varphi(x)]-\mathbb{E}_{z\sim p(z)}[\log(1-D_\varphi(G_\theta(z)))],\qquad(1)$$

    where the distributions of the real data x and the random noise z are p(x) and p(z), respectively. In the minimax GAN, the discriminator (shown in Eq. (1)) attempts to classify generated images (fake images) from real images and outputs a probability. Simultaneously, the generator attempts to fool the discriminator and learns to generate samples that have a low probability of being judged fake. The cost function for the generator in the minimax GAN is defined as

    $$J^{(G)}(\varphi,\theta)=\mathbb{E}_{z\sim p(z)}[\log(1-D_\varphi(G_\theta(z)))].\qquad(2)$$

    To improve the gradient, Goodfellow et al. [1] also proposed a non-saturating GAN with an alternative cost function, where the generator instead aims to maximize the probability of generated samples being real [27]. The cost function for the generator in the non-saturating GAN is defined as

    $$J^{(G)}(\varphi,\theta)=-\mathbb{E}_{z\sim p(z)}[\log D_\varphi(G_\theta(z))].\qquad(3)$$

    The original GAN adopts the JS divergence to measure the distance between the distribution of the real data and that of the generated data. It is noted that the generation model and the discriminant model can be any neural networks without limitation. During GAN training, the goals of the discriminator and the generator are exactly opposite: the former maximizes the discriminative accuracy to better distinguish real data from generated data, whereas the latter minimizes the discriminant accuracy of the discriminator. Generally, without auxiliary stabilization techniques, the training procedure of GANs is notoriously brittle.
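    For concreteness, the three cost functions in Eqs. (1)-(3) can be written directly from the discriminator's probability outputs. The following minimal NumPy sketch is illustrative only (it is not from the original paper); d_real and d_fake stand for arrays of discriminator probabilities on a batch of real and generated samples.

    ```python
    import numpy as np

    def discriminator_cost(d_real, d_fake):
        # Eq. (1): push real samples toward 1 and fake samples toward 0
        return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

    def generator_cost_minimax(d_fake):
        # Eq. (2): the generator minimizes log(1 - D(G(z)))
        return np.mean(np.log(1.0 - d_fake))

    def generator_cost_nonsaturating(d_fake):
        # Eq. (3): maximize log D(G(z)) instead, for stronger early gradients
        return -np.mean(np.log(d_fake))
    ```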

    2 Proposed Methods and Architectures

    Under a near-optimal discriminator, minimizing the loss of the generator is equivalent to minimizing the JS divergence between the real data distribution p(x) and the generated data distribution. In practice, the two distributions rarely have substantial overlap, which eventually drives the gradient of the generator close to 0; in other words, the gradient vanishes. This problem can be alleviated to some extent by the alternative non-saturating cost function, although its minimization is equivalent to minimizing an unreasonable distance measurement. Two problems then remain to be solved: one is gradient instability, and the other is mode collapse.

    WGANs [15], BEGANs [23] and SAGANs [4] are excellent methods proposed to solve the above problems. WGANs use the EM distance, also called the Wasserstein distance, as a measure of the discrepancy between two distributions. BEGANs adopt the distance between loss distributions instead of sample distributions. The SAGAN adds a self-attention mechanism and uses spectral normalization and the two time-scale update rule (TTUR) optimization technique to stabilize GAN training. Next, we develop an improved BEGAN. The proposed BEGAN-based network is shown in Fig. 1. The generator and the discriminator both adopt an encoder-decoder framework. The architecture of the discriminator D is a deep convolutional neural network. N_x is short for the dimensions of x: N_x = H × W × C, where H, W and C are the height, the width and the number of color channels, respectively; for RGB images, C = 3. The generator G outputs images with dimensions H × W × C and uses the same architecture as the decoder of the discriminator. The generator network illustrated in the upper section of Fig. 1 contains nine convolutional layers and three up-sampling convolutional layers.

    Fig. 1 Architectures of the generator networks and the discriminator networks with convolutional kernel size and output channels for each convolutional layer (SL denotes the skip layer; Conv w=(k, k) denotes a convolutional layer with k×k kernel; in d=(a, b), a and b denote input and output filters, respectively; n denotes the number of filters/channels)

    It is noted that the proposed method also uses the auto-encoder as the discriminator and aims to match the auto-encoder loss distributions using a loss derived from the Wasserstein distance. The definitions of the Wasserstein distance and its lower bound are stated as follows.

    Let L(ν) = |ν - D(ν)|^η, where η ∈ {1, 2} is the target norm. Let Γ(u_1, u_2) be the set of all couplings of u_1 and u_2, where u_1 and u_2 are two distributions of auto-encoder losses. Let m_1 ∈ R and m_2 ∈ R be their respective means. The Wasserstein distance is defined as

    $$W_1(u_1,u_2)=\inf_{\gamma\in\Gamma(u_1,u_2)}\mathbb{E}_{(x_1,x_2)\sim\gamma}[|x_1-x_2|].\qquad(4)$$

    By Jensen's inequality, a lower bound of W_1(u_1, u_2) can be derived as

    $$\inf\mathbb{E}[|x_1-x_2|]\ \ge\ \inf|\mathbb{E}[x_1-x_2]|\ =\ |m_1-m_2|.\qquad(5)$$

    Let u_1 be the distribution of the real data losses and u_2 be the distribution of the loss L(G(z)). Equation (5) shows that W_1(u_1, u_2) is bounded below by |m_1 - m_2|.

    In order to maximize the distance between the real data and the generated data auto-encoder losses, there are only two solutions for maximizing |m_1 - m_2|: either W_1(u_1, u_2) ≥ m_1 - m_2 with m_1 → ∞ and m_2 → 0, or W_1(u_1, u_2) ≥ m_2 - m_1 with m_1 → 0 and m_2 → ∞. Since minimizing m_1 naturally leads to auto-encoding the real images, we choose the latter solution. Similar to the BEGAN, the objective of the network training is to meet

    $$\begin{cases}L_D=L(x)-k_t\,L(G(z_D)) & \text{for }\theta_D\\ L_G=L(G(z_G)) & \text{for }\theta_G\\ k_{t+1}=k_t+\lambda_k\,(\gamma L(x)-L(G(z_G))) & \text{at each training step }t,\end{cases}\qquad(6)$$

    where L(x) is the auto-encoder loss of the real data, L(x) = |x - D(x)|; L(G(z_D)) is the auto-encoder loss of the generated data, L(G(z_D)) = |G(z_D) - D(G(z_D))|, with z_D and z_G denoting the noise samples used for the discriminator and generator updates, respectively; the variable k_t ∈ [0, 1] controls how much emphasis is placed on the generator loss during gradient descent, and k_0 = 0 in this work; λ_k is the learning rate of k_t; the hyper-parameter γ = E[L(G(z))]/E[L(x)] ∈ [0, 1] balances the two goals, namely auto-encoding real images and discriminating real images from generated images. At the same time, γ is also an indicator of image diversity, where a lower value means lower image diversity. A global measure of convergence is formulated as the sum of two terms:

    $$M_{\text{global}}=L(x)+|\gamma L(x)-L(G(z_G))|,\qquad(7)$$

    where a lower M_global means a better training process. Figure 2 shows the detailed process of image generation. To generate more realistic images, we use Algorithm 1, which converges to a good estimator of p_data given enough capacity and training time; Adam stands for adaptive moment estimation.
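    Algorithm 1 is not reproduced here, but a single training step of the objective in Eq. (6) can be sketched as follows. This is a minimal PyTorch sketch under our reading of Eqs. (6) and (7), with η = 1; the modules G and D and the two Adam optimizers are assumed to be defined elsewhere, and the default γ and λ_k values shown are placeholders, not the authors' reference implementation.

    ```python
    import torch

    def train_step(G, D, opt_G, opt_D, x_real, z_D, z_G, k_t,
                   gamma=0.5, lambda_k=0.001):
        ae = lambda v: (v - D(v)).abs().mean()   # L(v) = |v - D(v)|, eta = 1
        # Discriminator update: L_D = L(x) - k_t * L(G(z_D)), Eq. (6)
        loss_real = ae(x_real)
        loss_D = loss_real - k_t * ae(G(z_D).detach())
        opt_D.zero_grad(); loss_D.backward(); opt_D.step()
        # Generator update: L_G = L(G(z_G)), Eq. (6)
        loss_G = ae(G(z_G))
        opt_G.zero_grad(); loss_G.backward(); opt_G.step()
        # Proportional control of k_t, clamped to [0, 1]
        balance = (gamma * loss_real - loss_G).item()
        k_t = min(max(k_t + lambda_k * balance, 0.0), 1.0)
        # Global convergence measure M_global, Eq. (7)
        m_global = loss_real.item() + abs(balance)
        return k_t, m_global
    ```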

    Fig. 2 Detailed process of image generation

    When applying a GAN, it is required that the discriminating ability of the discriminator be better than the generating ability of the current generator. To achieve this, the usual practice is to update the parameters of the discriminator more times than those of the generator during training. It is noted that the discriminator often learns faster than the generator in practice. To balance the learning speeds, the TTUR [28] is adopted during the training process. Specifically, in the TTUR the discriminator and the generator are trained with the same update rate but different learning rates, so only the learning rates need to be adjusted. Here, the learning rates are set to 0.004 and 0.001 for the discriminator and the generator, respectively, because a relatively high learning rate accelerates the learning of the discriminator, while a small learning rate is necessary for the generator to successfully fool the discriminator.
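    In code, the TTUR reduces to instantiating two optimizers with different learning rates. A minimal sketch is given below; the placeholder linear modules simply stand in for the Fig. 1 networks, and the optimizer defaults beyond the learning rates are assumptions, not values from the text.

    ```python
    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(64, 128))   # placeholder generator module
    D = nn.Sequential(nn.Linear(128, 128))  # placeholder discriminator module

    # TTUR: identical update frequency, different learning rates
    # (0.004 for the discriminator, 0.001 for the generator, as in the text)
    opt_D = torch.optim.Adam(D.parameters(), lr=0.004)
    opt_G = torch.optim.Adam(G.parameters(), lr=0.001)
    ```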

    Furthermore, a strategy of adding skip connections [29] between different layers is applied to strengthen feature propagation and encourage feature reuse. Feature information can be transmitted directly across layers with the help of the additional skip connections, so the integrity of the features is preserved to the greatest extent. The skip-connection structure adopted in our model additionally connects the input of each convolution block to its output, as in the sketch below. These skip connections are only added into the generator and the decoder. It is noted that another skip-connection structure, similar to a dense block, is mentioned in the BEGAN. By comparison, our structure is more suitable for processing big datasets because of its simple connections. The data flow diagram of the generator and the discriminator in our model is shown in Fig. 3.
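    A minimal PyTorch sketch of such a convolution block follows; the two-layer 3×3 ELU structure matches the description in Section 3.1, while the channel count is a placeholder.

    ```python
    import torch
    import torch.nn as nn

    class SkipConvBlock(nn.Module):
        """Convolution block whose input is additionally connected to its output."""
        def __init__(self, channels):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ELU(),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ELU(),
            )

        def forward(self, x):
            # Skip connection: the block input bypasses the convolutions
            return self.body(x) + x
    ```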

    Fig. 3 Data flow diagram of the generator and the discriminator (L(·) denotes auto-encoder loss)

    3 Experiments

    In this section, a series of experiments were conducted to demonstrate the performance of the proposed method.

    3.1 Parameter settings

    The architecture of the model is shown in Fig. 1. Both the discriminator and the generator use 3×3 convolutions with the exponential linear unit (ELU) activation function on their outputs. Several of these convolution layers constitute a convolution block; specifically, two layers per block are used in this paper. The generator and the discriminator both include down-sampling and up-sampling phases. Down-sampling is implemented as sub-sampling with stride 2, and up-sampling is done by nearest-neighbour interpolation. Learning uses the Adam optimization algorithm with an initial learning rate of 0.0001. Note that the learning rate is decayed by a factor of 2 when convergence stalls. The batch size, one of the important parameters, is set to 16. The size of the input images is 64×64. Note that the model is also suitable for resolutions varying from 32 to 256 by adjusting the number of convolution layers while keeping the final down-sampled image at a size of 8×8.
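    One way to realize the decay-on-stall rule is a small helper that halves the learning rate when the convergence measure M_global has stopped improving. This is a hedged sketch: the patience window and the stall test are assumptions, since the paper does not specify how stalling is detected.

    ```python
    def decay_on_stall(optimizer, history, patience=20):
        """Halve the learning rate if the convergence measure M_global has not
        improved over the last `patience` recorded values."""
        if len(history) > patience and min(history[-patience:]) >= min(history[:-patience]):
            for group in optimizer.param_groups:
                group["lr"] *= 0.5
    ```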

    All training processes are conducted on an NVIDIA GeForce GTX 1080 Ti GPU using 162 770 face images randomly sampled from the large-scale celeb faces attributes (CelebA) dataset, and 60 000 images of 10 categories from CIFAR-10. The training images are different from the testing images.

    3.2 Computational experiments

    In this example, the dataset used is CelebA, which has a large variety of facial poses. It is noted that we resize the images to 64×64 to highlight the areas where the faces are located. We prefer to do this because humans are better at identifying flaws appearing on faces. First, we discuss the effect of the hyper-parameter γ ∈ [0, 1] and perform several groups of comparison tests. The value of γ is related to the quality and the diversity of the generated images. As shown in Fig. 4, we can observe skin colour, expression, moustaches, gender, hair colour, hair style and age in the generated images.

    Fig. 4 Comparison of samples randomly generated under different γ : (a) γ=0.3; (b) γ=0.7

    In order to observe the influence of γ conveniently, we vary its value across the range [0, 1] in the tests. Some typical results concerning image diversity are displayed in Fig. 4. Overall, the generated images appear to be well behaved. When the parameter is at a lower level, such as γ = 0.3, the generated images look overly uniform: the facial contours gradually become similar and the generated face samples are less diverse, while the noise is greatly reduced. From Fig. 4(a), it can be seen that the little noise that remains is concentrated around positions such as the hair and the forehead. At a higher level, more detailed features can be created successfully, like the beards, blue eyes and bangs highlighted in Fig. 4(b). Note that these features are usually hard for other methods to create.

    Furthermore, we quantitatively evaluate the performance of the proposed method. In this paper, a widely used quantitative measurement, the Fréchet inception distance (FID), is adopted for evaluation. The FID [28] provides a principled and comprehensive metric: it compares the distributions of the generated images and the real images in the feature space of an inception network. A lower FID means a closer distance between the synthetic and real data distributions. Figure 5 shows a series of 64×64 randomly generated samples produced by the proposed method. From a visual point of view, the generated images are very impressive; even teeth can be generated clearly.

    Fig. 5 Randomly generated samples (γ=0.5)
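    As a reference for how the metric is computed, the FID between the Gaussian statistics (mean μ and covariance Σ) of the inception features of real and generated images can be sketched as follows. This is a generic NumPy/SciPy sketch of the standard formula, not the evaluation code used in the paper.

    ```python
    import numpy as np
    from scipy import linalg

    def frechet_inception_distance(mu_r, sigma_r, mu_g, sigma_g):
        """FID between two Gaussians fitted to the inception features of the
        real (mu_r, sigma_r) and generated (mu_g, sigma_g) images."""
        covmean = linalg.sqrtm(sigma_r @ sigma_g)
        if np.iscomplexobj(covmean):      # discard tiny imaginary parts
            covmean = covmean.real
        diff = mu_r - mu_g
        return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))
    ```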

    It can be seen from Fig. 6 that the FID of our model decreases sharply at the beginning of the iterations and then decreases gradually with mild oscillation. By comparison, the FID value of the BEGAN fluctuates greatly and suddenly increases dramatically in the late stages of the iterations. Figure 7 shows the convergence curves of L_D; the results show that our method is slightly better. Moreover, the numerical results show that the FID values obtained with the BEGAN and our model are 84.19 and 24.57, respectively; that is, our method reduces the FID by a factor of about 3.4.

    Fig. 6 Convergence curves of the FID for the BEGAN and our model on the CelebA dataset

    Fig. 7 Convergence curves of L_D for the BEGAN and our model on the CelebA dataset

    Another dataset used in the tests is the fashion MNIST dataset, which consists of a training set of 60 000 samples and a test set of 10 000 samples. Unlike the previous CelebA dataset, each sample in the fashion MNIST dataset is a grayscale image associated with a label from 10 classes. The parameters are set as follows: the size of the input images is 32×32, the batch size is 64 and the iteration number is 100 000.

    Figure 8 shows some results of generated random samples based on the proposed method. As can be seen from Fig. 8, a variety of shoe styles can be successfully generated.

    Fig. 8 Random samples generated on the fashion MNIST dataset (the picture contains a variety of shoe styles)

    Furthermore, we compare the FID and the inception score (IS) of the BEGAN and our model. IS is another widely used quantitative measurement for evaluating the performance of the compared methods [25]. It uses an inception network pre-trained on ImageNet to compute the KL divergence between the conditional class distribution and the marginal class distribution. A higher IS indicates better image quality and diversity.
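    For reference, given the (N, K) matrix of class probabilities produced by the pre-trained inception network for N generated images, the score can be sketched as below. This generic NumPy sketch computes a single evaluation, whereas the comparisons in this paper average several evaluations.

    ```python
    import numpy as np

    def inception_score(probs, eps=1e-12):
        """probs: (N, K) array of class probabilities p(y|x) for N generated images.
        Returns exp of the mean KL divergence between p(y|x) and the marginal p(y)."""
        marginal = probs.mean(axis=0, keepdims=True)             # p(y)
        kl = probs * (np.log(probs + eps) - np.log(marginal + eps))
        return float(np.exp(kl.sum(axis=1).mean()))
    ```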

    As can be seen from Fig. 9(a), the FID of our model is significantly smaller than that of the BEGAN; according to the analysis above, the quality of the images generated by our model is therefore higher. Figure 9(b) shows the IS results. It can be seen that a higher IS is obtained by our model. Moreover, the IS still shows an upward trend at 100 000 iterations.


    Fig. 9 Comparison of (a) FID and (b) IS of the BEGAN and our model on the fashion MNIST dataset

    4 Verification

    In this section, we further compare the performance of the proposed method with common classical methods, including BEGANs, ALIs, DFMs, Improved GANs and MIX+WGAN. In these comparison experiments, we retrain the models on a single NVIDIA GeForce GTX 1080 Ti GPU with the CIFAR-10 dataset for 100 000 iterations with a batch size of 64. The other hyper-parameters are left at their default values in the training file. All models are built on TensorFlow.

    We calculate the IS of the compared methods as an average of 10 evaluations over 50 000 samples. The final numerical results are shown in Table 1. The test results show that our score is better than those of all methods except the DFM. This seems to confirm experimentally that the DFM is an effective and direct method of matching data distributions to improve performance. Using an additional network to train denoising features and combining it with our model will be a possible avenue for future work.

    Note that the IS only quantifies the diversity of the generated samples. To further compare the distributions of the target samples, we also evaluate the models by calculating the FID. In this example, the FIDs are calculated with 50 000 training images and 10 000 generated samples. The experimental results show that the FIDs obtained with the DFM, the BEGAN and our model are 30.02, 77.27 and 57.96, respectively. All in all, our model is slightly inferior to the DFM but better than the BEGAN.

    Table 1 Numerical results of IS

    Figure 10 shows some intermediate results when CIFAR-10 is used to further test our method. As the number of training steps increases, the generated images change from fuzzy to sharp, and the generated image distribution gradually approaches the real image distribution. It is noted that each picture shown tiles 64 individually generated images into one.

    Fig. 10 Random samples generated with different training steps on CIFAR-10: (a) 20 000; (b) 40 000; (c) 60 000; (d) 80 000; (e) 100 000

    5 Conclusions

    An improved BEGAN with an additional skip-connection technique is proposed in this paper. An alternative time-scale update rule is adopted to balance the learning rates of the generator and the discriminator. The results of qualitative visual assessments show that high quality images can be created by the improved BEGAN when 0.5 < γ < 1. Furthermore, the performance of the proposed method is quantitatively evaluated by FID and IS. The FIDs for the proposed method and the BEGAN on the CelebA dataset are 24.57 and 84.19, respectively; that is, our method reduces the FID by a factor of about 3.4. The test results for the CIFAR-10 dataset show that the FID of our method is 57.96, which is also lower than the 77.27 of the BEGAN. In addition, the ISs for the proposed method and the BEGAN are 6.32 and 5.62, respectively, so our method is again slightly better than the BEGAN. It should also be pointed out that the performance of the proposed method is better than that of all other compared methods except the DFM. This result is predictable because the DFM directly aims at matching the data distribution. In short, the experimental results confirm that the use of such imbalanced learning rate updates and the skip-connection technique can improve the performance of image generation methods. In future work, we will try to add a low rank constraint to generate high quality images with lower rank.
