
    Ultra-Lightweight Face Animation Method for Ultra-Low Bitrate Video Conferencing

2023-05-08
    ZTE Communications, 2023, Issue 1

LU Jianguo, ZHENG Qingfang

(1. State Key Laboratory of Mobile Network and Mobile Multimedia Technology, Shenzhen 518055, China; 2. ZTE Corporation, Shenzhen 518057, China)

Abstract: Video conferencing systems face a dilemma between smooth streaming and decent visual quality because traditional video compression algorithms fail to produce bitstreams low enough for bandwidth-constrained networks. This paper proposes an ultra-lightweight face-animation-based method that enables a better video conferencing experience. The proposed method compresses high-quality upper-body videos at ultra-low bitrates and runs efficiently on mobile devices without high-end graphics processing units (GPUs). Moreover, a visual quality evaluation algorithm is used to avoid image degradation caused by extreme face poses and/or expressions, and a full-resolution image composition algorithm reduces unnaturalness, which together guarantee the user experience. Experiments show that the proposed method is efficient and can generate high-quality videos at ultra-low bitrates.

Keywords: talking heads; face animation; video conferencing; generative adversarial network

    1 Introduction

During the COVID-19 pandemic, video conferencing systems have become indispensable tools for individuals to keep in touch with friends and for enterprises and organizations to connect with customers. Inside these systems, video compression technologies play a critical role in the efficient representation and transportation of video data. Great progress has been made in recent years in representing high-fidelity videos at low bitrates; e.g., High Efficiency Video Coding (HEVC)[1] was designed with the goal of allowing video content to reach a data compression ratio of up to 1 000:1. However, video conferencing systems still face the dilemma between smooth streaming and decent visual quality, because current video compression technologies fail to produce bitstreams low enough for bandwidth-constrained networks with large numbers of concurrent users.

Recently, some novel talking-head video compression methods[2–5] based on face animation have been proposed, which can significantly cut down the bandwidth usage of video conferences. These face animation methods usually consist of two parts: an encoder and a decoder. The encoder is a motion extractor that derives a compact motion feature representation from the driving video frame, and the decoder is an image generator that synthesizes photorealistic images according to the motion features. Due to its extreme compactness, the extracted face feature can be used to reduce the bandwidth of video conferences and hence improve user experience in bandwidth-constrained networks. However, most talking-head video compression methods are too complicated to run in real time without the support of high-end graphics processing units (GPUs), let alone on mobile devices. For example, the model size of the First Order Motion Model (FOMM)[6] is 355 MB and its computational complexity is 121 G multiply-accumulate operations (MACs). Aiming at practical applications, we propose an ultra-lightweight motion extractor to obtain effective motion representations from the driving video, along with an animation generator to synthesize high-quality face videos accordingly.

We find that the face animation method may sometimes fail, usually because of extreme head poses and/or facial expressions. To tackle this problem, we propose an efficient visual quality evaluation method that rejects synthesized images that are visually unacceptable. We also notice that displaying only the face without context regions looks unnatural and weird to users. To cope with this, we composite full-resolution images by stitching face regions with other body parts and backgrounds. These two mechanisms effectively prevent user experience degradation during a conference.

    Our main contributions are as follows:

• An ultra-lightweight motion extraction algorithm is proposed to derive effective facial motion features from driving videos, which is efficient enough to run on mobile devices without high-end GPUs.

    • An efficient visual quality evaluation algorithm is proposed to select visually acceptable generated images, along with an image composition algorithm to generate full-resolution videos, which ensures a consistent and natural user experience during conferences.

    • A practical video conferencing system is built that integrates the best parts of face-animation-based methods and traditional video-compression-based methods, significantly reducing uplink bandwidth usage and ensuring decent user experience even when network bandwidth is constrained.

    2 Related Work

Due to space limitations, we only review previous works on face animation and deep video compression that are most related to ours.

    2.1 Face Animation

Face animation is an image-to-image translation task that transfers the talking-head motion of a person in one image to persons in other images. The former image is called the driving image, while the latter is called the source image. Face animation has become a popular topic since the generative adversarial network (GAN)[7] was proposed by GOODFELLOW et al. Most recently published face animation methods can synthesize photo-realistic images with the help of GANs.

Some works[8–12] solve the face animation task with prior knowledge of the 3D Morphable Model (3DMM)[13]. However, traditional 3D-based works[8–10] failed to render details of talking heads, such as hair, teeth and accessories. Ref. [11] allowed fine-scale manipulation of any facial input image into a new expression while preserving its identity with the help of a conditional GAN. To improve the realism of the rendering, Ref. [12] designed a novel space-time GAN to predict photorealistic video frames directly from the modified 3DMM.

Contrary to 3D-based models, 2D-based models synthesize talking heads directly without any prior knowledge of 3DMM. They can be classified into warping-based models and warping-free models.

Warping-free models[14–19] directly synthesize images without any warping. Few-shot vid2vid[16] learned to transform landmark positions into realistic-looking personalized photographs with the help of meta-learning. Ref. [19] decomposed a person's appearance into a pose-dependent coarse image and a pose-independent texture image. LI-Net[20] decoupled the face landmark image into pose and expression features and reenacted those attributes separately to generate identity-preserving faces with accurate expressions and poses.

Warping-based methods[21–25] predict dense motion fields to warp the feature maps extracted from the source images and then inpaint the warped feature maps to generate photorealistic images. X2Face[22] used an encoder-decoder architecture to learn a latent embedding encoding pose and expression and to recover dense motion fields from it. Many works attempted to predict the dense motion field from sparse object keypoints; the key to those methods is how to represent motions with sparse object keypoints. Monkey-Net[23] was proposed to learn pure keypoints to describe motions in an unsupervised manner. Although it cannot describe subtle motions, Monkey-Net provided a strong baseline for further improvements. FOMM[6] represented sparse motion with keypoints along with local affine transformations. Motion representations for articulated animation (MRAA)[24] defined motion with regions, using motion estimation based on principal component analysis (PCA) rather than keypoints, to describe locations, shapes and poses. The thin-plate spline (TPS) motion model[25] estimated thin-plate spline motion to produce more flexible optical flow. Ref. [5] extended the baseline to 3D optical flows to produce 3D deformations. The above-mentioned methods extract compact motion representations, which show great potential in lowering the bitrate of video conferencing.

    2.2 Deep Learning-Based Video Compression

For decades, researchers have made great efforts to transmit higher-quality videos at lower bitrates. Recently, several approaches based on deep learning have been explored.

For general-purpose video compression, some works[26–27] attempted to reduce bandwidth by balancing the cost of transferring the region of interest (ROI) against that of the background. Compared with traditional codecs, such methods can achieve better visual quality at the same bitrate. Other works[28–29] focused on enhancing the visual quality of low-bitrate videos by image super-resolution and image enhancement.

For the compression of talking-head videos, great progress has been achieved. In Ref. [30], the encoder detected and transmitted keypoints representing the body pose and face mesh information, and the receiver displayed the motion in the form of puppets. However, this method failed to produce photorealistic images. Inspired by the promising results achieved by face animation models, many works demonstrated the effectiveness of video compression based on face animation. VSBNet[3] reconstructed original frames from face landmarks at a low bitrate of around 1 kB/s. Ref. [5] proposed a neural talking-head video synthesis model and set up a video conferencing system that achieves the same visual quality as the commercial H.264 standard with only one-tenth of the bandwidth. Ref. [2] introduced an adaptive intra-refresh scheme to address the problem that reconstruction quality may rapidly degrade due to the loss of temporal correlation as frames get farther away from the initial one. Ref. [4] evaluated the advantages and disadvantages of several deep generative adversarial approaches and designed a mobile-compatible architecture that can run at 19 f/s on an iPhone 8. However, those methods can hardly run in real time without the support of high-end GPUs. What's more, they could only generate near-frontal faces, looking unnatural and weird when faces were not near-frontal. In this paper, we specifically focus on improving the efficiency and visual quality of video compression based on face animation.

3 Proposed Ultra-Lightweight Face Animation Method

    3.1 Overview

The overall pipeline of our video conferencing system is shown in Fig. 1. Each user provides an avatar image to the system and uses its animation during a conference to ensure privacy and an elegant presence. When the system starts running, videos of users are captured and the face region in each video frame is cropped out by a face detection algorithm. Face images are then encoded by the keypoint detector and represented as the keypoints described in Section 3.2. Before the encoded data are sent out, the visual quality of the face image that will be reconstructed by the decoder from these keypoints is evaluated to prevent unnatural results. We highlight that the visual quality evaluation method in Section 3.3 requires no actual reconstruction of the face image but operates directly on the encoded data, for the sake of efficiency.

Upon receiving the encoded keypoint data from the sender, the conference server calls the image generator to synthesize the face image animated from the keypoints, as described in Section 3.2. The decoded face image replaces the face region in the avatar image using our method in Section 3.4 to create a full-resolution video frame, which is then encoded by H.264 or HEVC and sent to the receiver. The receiver simply decodes the video stream and displays it on the screen, which can usually take advantage of the hardware accelerator in the device's chip.
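To make the data flow concrete, the sketch below shows what a per-frame payload from the sender could look like under the representation described above: 10 keypoints, each carrying a 2D position and a 2×2 Jacobian, stored as FP16 (the format the paper settles on in Section 4.4). The class and field names are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FramePayload:
    keypoints: np.ndarray   # shape (10, 2), float16 positions
    jacobians: np.ndarray   # shape (10, 2, 2), float16 local affine parts

    def to_bytes(self) -> bytes:
        # Serialize positions followed by Jacobians for transmission.
        return self.keypoints.tobytes() + self.jacobians.tobytes()

payload = FramePayload(
    keypoints=np.zeros((10, 2), np.float16),
    jacobians=np.zeros((10, 2, 2), np.float16),
)
print(len(payload.to_bytes()))  # 120 bytes/frame -> 28.8 kbit/s at 30 f/s
```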

▲Figure 1. The proposed video conference system consists of three parts: the sender on mobile devices, the video generator on servers, and the receiver on mobile devices. In the encoder part, the motion encoder extracts keypoints from the driving images. The feature-based image quality evaluation filters out unnatural images. The decoder synthesizes images from the keypoints and reconstructs full-resolution images, which are encoded by H.264 or H.265 and sent to the receiver. The receiver decodes the video stream and shows it on the phone screen

With the prevalence of mobile phones, the demand for running video conferencing on mobile devices is growing, and in most commercial video conference systems, mobile devices account for a significant portion of all terminals. For better compatibility with existing commercial video conference systems, our system and algorithms are intentionally designed to make the sender/receiver modules deployable on mobile devices and to keep their computational burdens to a minimum, thus reducing power consumption and extending the working time of mobile devices.

    3.2 Model Distillation

Given a source image S of the target person, a driving video can be denoted as {D_1, D_2, D_3, …, D_N}, where D_i is the i-th frame in the sequence and N is the total number of frames in the video. The output images can be denoted as {O_1, O_2, O_3, …, O_N}, where O_i is the i-th frame of the output sequence. The output O_i shares the same identity as S and the same face motions as D_i. We adopt a face animation model similar to FOMM, which consists of a keypoint detector K (encoder) and a generator G (decoder). First, face landmarks are estimated from S and D_i separately by K, whose locations serve as the sparse motion information. Second, dense motion fields and occlusion maps are predicted by G. Finally, G warps the feature map extracted from S with the dense motion fields, and the warped feature map is masked by the occlusion maps to generate the output image O_i. Following FOMM, we extract 10 keypoints and their corresponding Jacobian matrices from the face image.
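As a minimal illustration of this interface, the sketch below builds a toy keypoint detector with the output shapes described above (10 keypoints, each with a 2D position and a 2×2 Jacobian). The module is a stand-in for exposition only, not the distilled MobileNetV2 encoder or FOMM's hourglass network.

```python
import torch
import torch.nn as nn

class ToyKeypointDetector(nn.Module):
    """Stand-in for the encoder K: image -> 10 keypoints + Jacobians."""
    def __init__(self, num_kp=10):
        super().__init__()
        self.num_kp = num_kp
        # A single toy layer producing 6 values per keypoint (2 pos + 4 Jacobian).
        self.backbone = nn.Conv2d(3, num_kp * 6, kernel_size=3, padding=1)

    def forward(self, img):                              # img: (B, 3, H, W)
        feat = self.backbone(img).mean(dim=(2, 3))       # global pool -> (B, 6*num_kp)
        feat = feat.reshape(-1, self.num_kp, 6)
        value = torch.tanh(feat[..., :2])                # (B, 10, 2) positions in [-1, 1]
        jacobian = feat[..., 2:].reshape(-1, self.num_kp, 2, 2)  # (B, 10, 2, 2)
        return {"value": value, "jacobian": jacobian}

K = ToyKeypointDetector()
S = torch.randn(1, 3, 256, 256)      # source image
D_i = torch.randn(1, 3, 256, 256)    # i-th driving frame
kp_src, kp_drv = K(S), K(D_i)
# A generator G (not sketched here) would predict dense motion and occlusion
# maps from (kp_src, kp_drv), warp features of S, and output the frame O_i.
```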

We design our model to be lightweight while still generating images with excellent visual quality. For the decoder, we adopt the same architecture as the generator in FOMM but cut the number of channels in half. We denote the simplified generator as G_sim. For the encoder, we replace the hourglass network in FOMM, which brings a high computational cost, with a greatly simplified version of MobileNetV2[31]. However, it is very difficult to train the proposed model from scratch, since the training process often fails to converge. We come up with the following training strategy to solve the problem.

1) Step 1: model distillation. We use the original encoder K_fomm in FOMM as the teacher model and our proposed encoder K_pro as the student model. The loss function consists of a distillation loss L_dis and an equivariance loss L_eq, which can be written as Eq. (1).

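The equation itself did not survive extraction. From the definitions in the surrounding text, a plausible reconstruction of Eq. (1), with K_pro the student, K_fomm the teacher, I the training sample and T a thin-plate spline deformation, is (the exact norms and weighting are an assumption):

$$L = L_{dis} + L_{eq}, \qquad L_{dis} = \big\| K_{pro}(I) - K_{fomm}(I) \big\|_1, \qquad L_{eq} = \big\| K_{pro}\big(T(I)\big) - T\big(K_{pro}(I)\big) \big\|_1 \tag{1}$$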
where I is the training sample and T is a thin-plate spline deformation. The distillation loss ensures that the student encoder extracts the same motion representation as the teacher encoder, and the equivariance loss ensures the consistency of the motion representation when random geometric transformations are applied to the images.

2) Step 2: iterative model pruning and distillation. Since the encoder has to extract a motion representation from every video frame, it should be as lightweight as possible to reduce computational costs. In our attempt to further simplify the encoder, we find that most of the complexity comes from the last several convolutional layers. Therefore, we drop the last convolutional layer in the encoder model and retrain it following Step 1. This step can be repeated several times until we obtain K_best, which strikes a balance between model complexity and accuracy (a combined code sketch of Steps 1 and 2 is given after Step 4).

3) Step 3: generator fine-tuning. Due to the simplification made to the generator, we train the simplified generator G_sim along with the keypoint detector K_fomm of the original FOMM to obtain a good initialization of G_sim.

4) Step 4: overall fine-tuning. Once K_best and G_sim are determined, we fine-tune them together in an unsupervised manner. Finally, K_best and G_sim act as the encoder and the decoder in our system, respectively.
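The following is a compact sketch of Steps 1 and 2 under stated assumptions: `tps_warp` stands in for the random thin-plate spline augmentation, `drop_last_conv`, `train_fn` and `eval_fn` are assumed helper callables, and the L1 norms follow the reconstruction of Eq. (1) above. None of this is the authors' released code.

```python
import torch

def distillation_loss(student, teacher, image, tps_warp):
    """One training term of Step 1: L = L_dis + L_eq (see Eq. (1))."""
    with torch.no_grad():
        kp_teacher = teacher(image)["value"]             # frozen K_fomm output
    kp_student = student(image)["value"]                 # K_pro output
    loss_dis = (kp_student - kp_teacher).abs().mean()    # match the teacher

    warped, apply_T = tps_warp(image)                    # random TPS deformation T
    kp_warped = student(warped)["value"]
    loss_eq = (kp_warped - apply_T(kp_student)).abs().mean()  # equivariance
    return loss_dis + loss_eq

def prune_and_distill(student, teacher, drop_last_conv, train_fn, eval_fn, tol=0.05):
    """Step 2: drop the last conv layer and re-distill, repeating until the
    accuracy drop exceeds a tolerance; the survivor plays the role of K_best."""
    best, best_err = student, eval_fn(student)
    while True:
        candidate = drop_last_conv(best)     # remove one convolutional layer
        train_fn(candidate, teacher)         # re-run Step 1 on the candidate
        err = eval_fn(candidate)
        if err > best_err * (1.0 + tol):     # complexity/accuracy trade-off
            return best
        best, best_err = candidate, err
```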

    3.3 Visual Quality Evaluation

Although video conferencing based on face animation can achieve a very high video compression rate, the visual quality of a reconstructed image may sometimes degrade in the following two cases (Fig. 2). First, due to current algorithmic limitations, most face animation models may generate inaccurate expressions and visual artifacts on faces with large poses and/or extreme expressions. Second, as the frame distance increases, the temporal correlation weakens, and hence the quality of the generated video deteriorates. This phenomenon becomes particularly obvious when faces are occluded. Degraded images bring an inconsistent experience to users. To alleviate the problem, Ref. [2] introduced an adaptive intra-refresh scheme using multiple source frames: before sending the features to the decoder, the sender first reconstructs the image and evaluates it to avoid degraded images. However, this scheme not only incurs large computational costs, which makes it impossible to run on mobile devices, but also leads to significant time delay at the receiving end. What's more, frequent scene switching requires the system to frequently send source frames, making it lose its advantage of reducing video bandwidth.

We propose an adaptive degraded-frame filtering method built on an efficient image quality evaluation algorithm that operates directly on the extracted features. We find that when a large head pose and/or an extreme facial expression occurs, most regions in the generated image are inpainted by the generator, which degrades the image quality. The difference between the driving image and the source image can be measured by analyzing the dense motion field, which in our setting is predicted from the sparse motion field. Therefore, instead of using the decoder to synthesize the generated image, we evaluate image quality based on the relative motion. The loss L_2 in the algorithm can be formulated as follows.

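The formula itself is likewise missing from the extracted text. From the variable definitions below, a plausible reconstruction (the choice of norms is an assumption) is:

$$L_2 = \alpha \sum_{i=1}^{10} \big\| v_{1i} - v_{2i} \big\|_2 + \beta \sum_{i=1}^{10} \big\| J_{1i} - J_{2i} \big\|_F$$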
where v_1i is the value of the i-th keypoint in the first frame, v_2i is the value of the i-th keypoint in the second frame, J_1i and J_2i are the Jacobians of the i-th keypoint in the first and second frames respectively, and the hyperparameters α and β control the weight of each part. In our experiments, we set them to 2 and 1, respectively.
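A direct sketch of this distance, with α = 2 and β = 1 as stated in the text; the function name and the use of the Frobenius norm for the Jacobian term are assumptions consistent with the reconstruction above.

```python
import numpy as np

def motion_distance(kp1, J1, kp2, J2, alpha=2.0, beta=1.0):
    # kp1, kp2: (10, 2) keypoint values; J1, J2: (10, 2, 2) Jacobians.
    pos_term = np.linalg.norm(kp1 - kp2, axis=-1).sum()      # keypoint-value part
    jac_term = np.linalg.norm(J1 - J2, axis=(-2, -1)).sum()  # Jacobian part (Frobenius)
    return alpha * pos_term + beta * jac_term
```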

▲Figure 2. Examples of face animation failure. The first row shows a result caused by a large pose; the face area becomes blurred and there are artifacts on the woman's hair. The second row shows a degraded image caused by weak temporal correlation; the reconstructed image looks terrible and weird

In the proposed scheme, the balance between image quality and robustness is controlled by a threshold τ. Although the identity of the person in the driving images and the source image is the same, the two images may look different. For better visual quality, we adopt the relative motion transfer method described in Ref. [6]. We first find a driving image whose pose is similar to the source image, which is called the initial image D_I. Then, we extract keypoints from the source image S and the initial image D_I, denoted as K_s and K_I respectively. The source keypoints are sent to the receiver. For every frame D_t, we estimate its keypoints K_t and compare the relative motion between K_t and K_s with that between K_I and K_s. If the former is smaller, we take this frame as the new initial image. Finally, we compare the relative motion between K_t and K_I with the threshold τ. If the relative motion is smaller than τ, it is suitable for robust image generation and is sent to the server; otherwise, a default motion is sent to avoid freezing in video streams. The default keypoints can be motions of natural expressions, such as blinking and smiling. In this way, degraded frames are replaced by frames of natural expressions. Compared with the method proposed in Ref. [2], our method greatly reduces the computation cost at the sender and the delay at the receiver.
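The per-frame decision just described can be sketched as follows, reusing the hypothetical `motion_distance` from above; `state` tracks the keypoints of the source and the current initial image, and `default_kp` is the pre-agreed natural-expression motion. This is a sketch of the logic, not the authors' code.

```python
def select_payload(kp_t, J_t, state, tau, motion_distance, default_kp):
    kp_s, J_s = state["source"]            # keypoints/Jacobians of source image S
    kp_i, J_i = state["initial"]           # keypoints/Jacobians of initial image D_I
    # If D_t is closer to S than the current initial image, adopt it as D_I.
    if motion_distance(kp_t, J_t, kp_s, J_s) < motion_distance(kp_i, J_i, kp_s, J_s):
        state["initial"] = (kp_t, J_t)
        kp_i, J_i = kp_t, J_t
    # Small relative motion w.r.t. D_I means generation is safe to run.
    if motion_distance(kp_t, J_t, kp_i, J_i) < tau:
        return (kp_t, J_t)                 # send the real motion to the server
    return default_kp                      # fall back to a natural expression
```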

    3.4 Full-Resolution Image Composition

The face animation described above cannot be directly used in video conferences for two reasons. First, face animation cannot synthesize face images at full video resolution (at least 1 280×720) because the computational complexity grows rapidly with image size. Second, displaying only the facial region on the screen without other body parts, such as the neck and shoulders, looks unnatural and weird. To make our face animation method applicable, instead of generating full-resolution images, we generate a facial region no larger than 384×384 and stitch it with the other body parts and background regions in the source frame to form a full-resolution image. The problem is that a sharp blocky artifact appears between the head region and the body region, because the head region moves while the body region may remain stationary. We find that the keypoints spread over the talking-head area and each keypoint is responsible for the local transformation of its neighborhood. To reduce the artifact, we fix the keypoints related to the shoulder part. As a result, the dense motion field predicted by the generator stays stationary near the shoulder region and transitions smoothly from the head region to the shoulder region, which makes the composite image look more natural. Example images are shown in Fig. 4 for comparison.
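A minimal sketch of the two ingredients of this step, under the assumptions that the face crop's placement in the avatar frame is known and that `shoulder_idx` indexes the keypoints to freeze; both helpers are illustrative rather than the authors' implementation.

```python
import numpy as np

def fix_shoulder_keypoints(kp_driving, kp_source, shoulder_idx):
    # Copy shoulder keypoints from the source so the dense motion field
    # predicted from them stays stationary near the shoulder region.
    kp = kp_driving.copy()
    kp[shoulder_idx] = kp_source[shoulder_idx]
    return kp

def composite_full_frame(avatar, face_crop, top_left):
    # Paste the (<= 384x384) animated face crop back into the full-resolution
    # avatar frame, e.g., a (720, 1280, 3) image.
    y0, x0 = top_left
    h, w = face_crop.shape[:2]
    out = avatar.copy()
    out[y0:y0 + h, x0:x0 + w] = face_crop
    return out
```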

    4 Experiments

    4.1 Implementation Details

1) Datasets. We train and evaluate our face animation model on the VoxCeleb dataset and an in-house dataset. VoxCeleb[32] is a dataset of interview videos of different celebrities. We crop the videos according to the bounding boxes of faces and resize them to 256×256 for a fair comparison with the original FOMM, and to 384×384 for the generation of high-resolution images. The in-house dataset consists of 4 124 videos of Chinese people collected from the Internet and is used to reduce bias towards Western faces. We fine-tune our model on the in-house dataset to better adapt it to Chinese users.

2) Evaluation metrics. We evaluate the models using the L1 error, average keypoint distance (AKD) and average Euclidean distance (AED). The L1 error is the mean absolute difference between pixel values in the reconstructed images and the ground-truth images, which measures reconstruction accuracy. AKD and AED stand for semantic consistency. AKD is the average distance between the face landmarks extracted from the ground-truth images and from the reconstructed images by the face landmark detector[33], which measures the pose difference between the two images. AED measures identity preservation and is the L2 distance between the corresponding features extracted by a pre-trained re-identification network[34]. A code sketch of these metrics is given after item 3).

3) Hardware. In our video conference system, we implement a conferencing app on a ZTE A30 Ultra mobile phone with a Snapdragon 888 system on a chip (SoC), and the conferencing server software runs on a computer with an Nvidia Tesla V100 GPU.
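The metric sketch referenced above follows, where `landmarks` and `reid_feat` stand in for the external landmark detector[33] and re-identification network[34], which are not reproduced here.

```python
import numpy as np

def l1_error(pred, gt):
    # Mean absolute pixel difference (reconstruction accuracy).
    return np.abs(pred.astype(np.float64) - gt.astype(np.float64)).mean()

def akd(pred, gt, landmarks):
    # Average distance between face landmarks of prediction and ground truth.
    return np.linalg.norm(landmarks(pred) - landmarks(gt), axis=-1).mean()

def aed(pred, gt, reid_feat):
    # L2 distance between identity features (identity preservation).
    return float(np.linalg.norm(reid_feat(pred) - reid_feat(gt)))
```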

▲Figure 3. Qualitative comparisons with state-of-the-art methods. The first three rows are images from the VoxCeleb dataset and the following four rows are images from our in-house dataset. Our method produces competitive results

    4.2 Comparisons with FOMM

    1) Efficiency of the proposed face animation algorithm

First, we compare our encoder, i.e., the face motion extractor, with that of the original FOMM. We convert the encoder to a mobile neural network (MNN)[35] model and calculate the model size. As listed in Table 1, our encoder model is only 600 kB in size with a theoretical computational complexity of 14.62 M MACs, both of which are about 1% of FOMM's. Our encoder processes each frame in 3.5 ms on the Snapdragon 888, which is 16.3 times faster than FOMM.

Second, we compare our decoder, i.e., the generator that synthesizes a 384×384-resolution face image, with that of FOMM. We convert the generator to a TensorRT[36] model and calculate the model size. As listed in Table 1, our decoder model is 81.77 MB in size with a theoretical computational complexity of 31.42 G MACs, which are 26.0% and 27.3% of FOMM's, respectively. Our decoder runs in 5 ms on the Tesla V100, which is 4 times faster than FOMM.

    2) Effectiveness of the proposed face animation algorithm

We compare the visual quality of face images generated by our method with that of other face animation methods. For quantitative comparison, we evaluate our model against existing studies on the VoxCeleb dataset on an image generation task. For a fair comparison, we generate images at a resolution of 256×256. The first frame of each test video is set as the source image, while the subsequent frames are set as the driving images. Evaluation metrics are computed for every frame, and our result is the mean value over all frames. The results are summarized in Table 2, which clearly shows that the proposed method outperforms X2Face and Monkey-Net. Compared with FOMM, our method generates competitive results, even though our model is much lighter. For a qualitative comparison, we list some example images in Fig. 3.

▼Table 1. Efficiency comparison between our face animation method and FOMM

    ▼Table 2. Visual quality comparison among different face animation methods on the VoxCeleb dataset

    4.3 Results of Full-Resolution Image Generation

The avatar images provided by a user are usually not face-only but include other upper-body parts. When head regions in the avatar images are cropped and animated by our method, they should be stitched back into the original images to form new images with predefined resolutions, e.g., 1 280×720. Special treatment should be given to where the head region and body region connect, because these regions move non-rigidly and disproportionately. As shown in the top two rows of Fig. 4, simply replacing the head region in an avatar image with a new animated head region results in visual discontinuities. As a comparison, the bottom two rows show results of the proposed method described in Section 3.4. Our method successfully eliminates discontinuities and makes the whole images visually natural.

    4.4 Ultra-Low Bitrate Video Conference

As described in Section 3.1, our video conference system consists of server software running on the cloud and application software, with the sender and receiver modules, running on mobile phones. The most important difference between our sender module and those of other video conference systems is that we encode captured videos into compact keypoint motion information rather than traditional H.264 or HEVC streams, which greatly cuts down uplink bandwidth usage. For example, when encoded in H.264, 720p conference videos typically have bitrates between 1 Mbit/s and 2 Mbit/s. By comparison, each video frame is encoded by our sender module as the information of 10 keypoints, each of which includes a position (2 floating-point numbers) and a Jacobian matrix (4 floating-point numbers). We empirically determine that the half-precision floating-point format (FP16) is sufficient for data representation, and thus reach a bitrate of 6×16×10×30 = 28.8 kbit/s, which is less than 3% of the H.264 encoding. We note that the keypoint information can be compressed by an entropy encoder for further bandwidth savings.
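The quoted figure can be checked in two lines (values taken directly from the text above):

```python
bits_per_frame = (2 + 4) * 16 * 10   # 6 FP16 values per keypoint x 10 keypoints
print(bits_per_frame * 30 / 1000)    # at 30 f/s -> 28.8 kbit/s
```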

▲Figure 4. Results of full-resolution image generation. The first row shows images generated by simply replacing the head region in the source image with the new animated head region. The third row shows results of our method in Section 3.4. In the second and fourth rows, the connections between head regions and body regions are zoomed in for clearer comparison

In our real-world user studies, reducing the uplink bitrate greatly improves the conference user experience. For one thing, since wireless bandwidth is not evenly allocated between uplink and downlink data transportation, a smaller uplink bitrate results in less congestion and faster upward transmission. For another, more aggressive schemes can be applied when forward error correction (FEC) is used to tackle data loss in transmission, leading to less data retransmission, lower remote interaction latency and more real-time engagement.

The server software in our system runs on a cloud server with Nvidia GPUs because the image generator in face animation is much more computationally expensive than the keypoint extractor, as demonstrated in Section 4.1. Although our simplified image generator can be deployed on some flagship mobile phones with powerful GPUs, we choose server-side deployment to make our application software lightweight enough to run on most mobile phones and to consume less power and extend working time, which is also critical to user experience.

    5 Conclusions

In this paper, we propose a face-animation-based method to greatly reduce bandwidth usage in video conferences, compressing face video frames by using only 60 FP16 values to represent the face motion. We design an ultra-lightweight face motion extraction algorithm that runs on mobile devices, as well as an efficient visual quality evaluation algorithm and a full-resolution image composition algorithm to ensure a consistent and natural user experience. We also build a practical system to enable user communication using animated avatars. Experimental results demonstrate the efficiency and effectiveness of our methods and their superiority over previous studies. However, one limitation of our current work is that it is only applicable to upper-body videos; a full-body animation method is our next step to cover more real-world scenarios. Another improvement will be saving downlink bandwidth by reconstructing videos on mobile devices, which requires further research on GAN acceleration to meet real-time constraints on mobile devices.
