
    Rigid Medical Image Registration Using Learning-Based Interest Points and Features

    Computers, Materials & Continua, 2019, No. 8

    Maoyang Zou, Jinrong Hu, Huan Zhang, Xi Wu, Jia He, Zhijie Xu and Yong Zhong

    Abstract: For image-guided radiation therapy, radiosurgery, minimally invasive surgery, endoscopy and interventional radiology, one of the important techniques is medical image registration. In our study, we propose a learning-based approach named "FIP-CNNF" for rigid registration of medical images. Firstly, pixel-level interest points are computed by a fully convolutional network (FCN) with self-supervision. Secondly, feature detection, description and matching are trained by a convolutional neural network (CNN). Thirdly, random sample consensus (RANSAC) is used to filter outliers, and the transformation parameters supported by the most inliers are found by iteratively fitting transforms. In addition, we propose "TrFIP-CNNF", which uses transfer learning and fine-tuning to boost the performance of FIP-CNNF. The experiments are conducted on a nasopharyngeal carcinoma dataset collected from West China Hospital. For CT-CT and MR-MR image registration, TrFIP-CNNF performs slightly better than the scale-invariant feature transform (SIFT) and FIP-CNNF. For CT-MR image registration, the precision, recall and target registration error (TRE) of TrFIP-CNNF are much better than those of SIFT and FIP-CNNF, in some cases several times better than those of SIFT. TrFIP-CNNF achieves promising results especially in multimodal medical image registration, which demonstrates that a feasible approach can be built to improve image registration by using FCN interest points and CNN features.

    Keywords: Medical image registration, CNN feature, interest point, deep learning.

    1 Introduction

    The purpose of image registration is to establish correspondence between two or more images and to bring them into the same coordinate system through transformation. For image-guided radiation therapy, radiosurgery, minimally invasive surgery, endoscopy and interventional radiology, image registration is one of the key techniques.

    For image registration, intensity-based and feature-based registration are the two recognized approaches. Intensity-based registration directly establishes a similarity measure based on intensity information and registers the images using the transformation that maximizes this similarity. Classic algorithms of this approach include cross-correlation, mutual information and the sequential similarity detection algorithm. In general, it can be used for both rigid and non-rigid registration. Its registration precision is correspondingly high, but it is slow due to high computational complexity, and it is also troubled by monotone texture. Feature-based registration registers images using representative features of the image. Classical feature-based registration most commonly uses SIFT features [Lowe (2004)] with a RANSAC filter, and secondly the speeded-up robust features (SURF) [Bay, Tuytelaars and Gool (2006)] with a RANSAC filter. These approaches yield the coordinates of matching pairs, from which the image transformation parameters can be calculated. Compared with intensity-based registration, its computational cost is relatively low because it does not consider all image regions, and it has stronger anti-interference ability and higher robustness to noise and deformation, but its registration precision is generally lower. Overall, feature-based image registration is currently a hot research topic because of its good cost-performance trade-off.
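    To make the intensity-based side concrete, the mutual information mentioned above can be estimated from the joint intensity histogram of the two images. The sketch below is a minimal version; the function name and bin count are our own illustrative choices, not values from the paper:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # estimate MI from the joint intensity histogram of two images;
    # a classic intensity-based similarity measure for registration
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # skip empty cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

    An intensity-based registration loop would then search over candidate transformations for the one that maximizes this value; perfectly aligned identical images score highest, while unrelated intensity patterns score near zero.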

    In recent years, deep neural networks, which simulate the human brain, have achieved great success in image recognition [He, Zhang, Ren et al. (2015)], speech recognition [Hinton, Deng, Yu et al. (2012)], natural language processing [Abdel-Hamid, Mohamed, Jiang et al. (2014)], computer vision and so on [Meng, Rice, Wang et al. (2018)], and have become one of the hot research topics. In the computer vision tasks of classification [Krizhevsky, Sutskever and Hinton (2012)], segmentation [Long, Shelhamer and Darrell (2015)] and object detection [Ren, He, Girshick et al. (2015)], deep neural networks, especially the convolutional neural network (CNN), perform well.

    For medical image registration, feature-based approaches have been developed with deep neural networks. Since Chen et al. [Chen, Wu and Liao (2016)] first registered spinal ultrasound and CT images using a CNN, researchers have achieved results with deep learning approaches in the registration of chest CT images [Sokooti, Vos, Berendsen et al. (2017)], brain CT and MR images [Cheng, Zhang and Zheng (2018); Simonovsky, Gutierrez-Becker, Mateus et al. (2016); Wu, Kim and Wang (2013); Cao, Yang and Zhang (2017)], 2D X-ray and 3D CT images [Miao, Wang and Liao (2016)], and so on. But overall, there is only a small body of research on medical image registration using learning-based approaches. Shan et al. [Shan, Guo, Yan et al. (2018)] stated: "for learning-based approaches: (1) informative feature representations are difficult to obtain directly from learning and optimizing morphing or similarity function; (2) unlike image classification and segmentation, registration labels are difficult to collect. These two reasons limit the development of learning-based registration algorithms."

    In this study, we propose a learning-based approach named "FIP-CNNF" to register medical images with a deep learning network. Firstly, an FCN is used to detect interest points in CT and MR images of nasopharyngeal carcinoma collected from patients in West China Hospital (this dataset is named "NPC"). Secondly, the MatchNet network is used for feature detection, description and matching. Thirdly, RANSAC is used to filter outliers, and then the CT-CT, MR-MR and CT-MR images are registered by iteratively fitting transforms to the data. In addition, transfer learning is adopted on top of FIP-CNNF (named "TrFIP-CNNF"). Specifically, the MatchNet network is pre-trained with the UBC dataset to initialize the network parameters, and then trained with the NPC dataset. The experiments show that the registration results of TrFIP-CNNF are better than those of FIP-CNNF.

    The contributions of this work are:

    ● Two key steps of the classic feature-based registration algorithm are improved by learning-based approaches. A multi-scale, multi-homography approach boosts pixel-level interest point detection with self-supervision, and a MatchNet network using transfer learning contributes to feature detection, description and matching.

    ● For CT-MR registration, the precision, recall and TRE of TrFIP-CNNF are much better than those of SIFT. The experimental results demonstrate that a feasible approach is built to improve multimodal medical image registration.

    The rest of the paper is organized as follows: Section 2 reviews related work. Section 3 introduces the methodology. Section 4 describes the transfer learning. Section 5 presents the experimental setup and results. Section 6 concludes the paper.

    2 Related work

    The feature-based image registration approach focuses on the features of the image, so the key is how to extract features with good invariance. SIFT is currently the most popular algorithm for feature detection and matching. The interest points found by SIFT in different spaces are very prominent, such as corner points, edge points, etc. SIFT features are invariant to rotation, illumination, affine transformation and scale.

    SURF is the most famous variant of SIFT. Bay et al. [Bay, Tuytelaars and Gool (2006)] proposed: "SURF approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster."

    A performance comparison of SIFT and SURF is given in Juan et al. [Juan and Gwun (2009)]: "SIFT is slow and not good at illumination changes, while it is invariant to rotation, scale changes and affine transformations. SURF is fast and has good performance as the same as SIFT, but it is not stable to rotation and illumination changes." There are many other variants of the SIFT algorithm; for example, Chen et al. [Chen and Shang (2016)] propose "an improved sift algorithm on characteristic statistical distributions and consistency constraint."

    Although SIFT is widely used, it also has some shortcomings. For example, SIFT requires that the image have enough texture when it constructs the 128-dimensional vectors for interest points; otherwise the constructed 128-dimensional vectors are not distinctive enough, which easily causes mismatches.

    CNNs can also be used for feature extraction, feature description and matching. Given image patches, a CNN usually employs the FC or pooled intermediate CNN features. Fischer et al. [Fischer and Dosovitskiy (2014)] "compares features from various layers of convolutional neural nets to standard SIFT descriptors" and find that, "Surprisingly, convolutional neural networks clearly outperform SIFT on descriptor matching". Other approaches using CNN features include [Reddy and Babu (2015); Xie, Hong and Zhang (2015); Yang, Dan and Yang (2018)].

    Here we specifically discuss the Siamese network [Bromley, Guyon, LeCun et al. (1994)], which was first introduced in 1994 for signature verification. On the basis of the Siamese network, combined with spatial pyramid pooling (SPP) [He, Zhang, Ren et al. (2015)] (a network structure that generates a fixed-length representation regardless of image size/scale), Zagoruyko et al. [Zagoruyko and Komodakis (2015)] proposed a 2-channel + central-surround two-stream + SPP network structure to improve the precision of image registration. Han et al. [Han, Leung, Jia et al. (2015)] proposed "MatchNet", an improved Siamese network. Using fewer descriptors, MatchNet obtained better results for patch-based matching than SIFT and the Siamese network.

    3 Methodology

    This section focuses on the methodology of FIP-CNNF, which has three modules: (1) interest point detection; (2) feature detection, description and matching; and (3) transformation model estimation. These are described in detail below.

    3.1 Interest point detection

    Inspired by Detone et al. [Detone, Malisiewicz and Rabinovich (2017)], we detect interest points in two steps. The first step is to build a simple geometric shapes dataset with no ambiguity in the interest point locations, consisting of rendered triangles, quadrilaterals, lines, cubes, checkerboards and stars with ground-truth corner locations. The FCN named "Base Detector" is then trained with this dataset. The second step finds interest points using homographic adaptation, and the process is shown in Fig. 1 [Detone, Malisiewicz and Rabinovich (2017)].

    Figure 1: Homographic adaptation [Detone, Malisiewicz and Rabinovich (2017)]

    To find more potential interest point locations on a diverse set of image textures and patterns, homographic adaptation applies random homographies to warp copies of the input image, which helps the Base Detector see the scene from many different viewpoints and scales. After the Base Detector processes each transformed image separately, the results are combined to obtain the interest points of the image. The interest points for our experimental medical images are shown in Fig. 2 (red interest points are obtained by SIFT and green interest points by homographic adaptation).

    Figure 2: Interest points of CT and MR images

    The ingenious design of this approach is that it detects interest points with self-supervision, and it boosts the repeatability of interest point detection.
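    The warp-detect-unwarp-aggregate loop of homographic adaptation can be sketched with coordinates alone. In the toy sketch below the per-warp "detections" are simulated rather than produced by a real Base Detector, and the homography perturbation scale and vote threshold are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_homography(scale=0.05):
    # identity plus a small random perturbation, standing in for the
    # random homographies sampled during homographic adaptation
    H = np.eye(3) + scale * rng.standard_normal((3, 3))
    H[2, 2] = 1.0
    return H

def apply_h(H, pts):
    # apply a 3x3 homography to an (N, 2) array of points
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    out = ph @ H.T
    return out[:, :2] / out[:, 2:3]

# toy corner locations the Base Detector should fire on
true_pts = np.array([[10.0, 12.0], [40.0, 8.0], [25.0, 30.0]])

heat = np.zeros((64, 64))
n_warps = 20
for _ in range(n_warps):
    H = random_homography()
    det_warped = apply_h(H, true_pts)                 # detections in the warped copy
    det_back = apply_h(np.linalg.inv(H), det_warped)  # unwarp to the original frame
    for x, y in np.round(det_back).astype(int):
        if 0 <= x < 64 and 0 <= y < 64:
            heat[y, x] += 1                           # vote

# combine the per-warp results: keep locations detected in most views
ys, xs = np.where(heat >= 0.5 * n_warps)
```

    Locations that survive the vote across many random viewpoints are exactly the repeatable interest points the paper is after.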

    3.2 Feature detection, descriptor and matching

    A Siamese network can learn a similarity metric and match samples of unknown classes with this metric. For images whose interest points have been detected, feature detection, description and matching can be carried out with a Siamese network. In our experiment, the deep learning network is MatchNet, an improved Siamese network. The network structure is shown in Fig. 3 and the network parameters in Tab. 1.

    Figure 3: Network structure

    Table 1: Network parameters

    The first layer of the network is the preprocessing layer. "For each pixel in the input grayscale patch we normalize its intensity value x (in [0,255]) to (x-128)/160" [Han, Leung, Jia et al. (2015)]. The following convolution layers use rectified linear units (ReLU) as the non-linearity, and the last layer uses softmax as the activation function. The loss function of MatchNet is the cross-entropy error:

    $$E = -\frac{1}{n}\sum_{i=1}^{n}\left[\, y_i \log \hat{y}_i + (1-y_i)\log(1-\hat{y}_i) \,\right]$$

    Here the training dataset has n patch pairs; y_i is the 0 or 1 label for the input pair x_i, where 0 indicates a mismatch and 1 a match. ŷ_i and 1-ŷ_i are the softmax activations computed on the values of v_0(x_i) and v_1(x_i), the two nodes in FC3:

    $$\hat{y}_i = \frac{e^{v_1(x_i)}}{e^{v_0(x_i)} + e^{v_1(x_i)}}$$
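    The two-way softmax plus cross-entropy described above fits in a few lines; this is a sketch with our own function name, operating on raw FC3 outputs and averaging over the batch:

```python
import numpy as np

def matchnet_loss(v0, v1, y):
    # v0, v1: FC3 outputs for the non-match and match nodes, shape (n,)
    # y: 0/1 labels for the n patch pairs
    y_hat = np.exp(v1) / (np.exp(v0) + np.exp(v1))  # two-way softmax
    # cross-entropy error averaged over the patch pairs
    return float(-np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat)))
```

    When the two FC3 nodes are equal the network is maximally unsure (ŷ = 0.5) and the loss is log 2 per pair; confident correct predictions drive the loss toward zero.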

    Formally, let S1 be a set of interest point descriptors in the fixed image and S2 a set of interest point descriptors in the moving image. For an interest point x in the fixed image, y_i is a candidate corresponding point in the moving image, and m is a measure of the similarity between the two points. The output of the MatchNet network is a value between 0 and 1, where 1 indicates a full match. To prevent matching when interest points are only locally similar, which often occurs in medical images, we require the match between x and y_i to be particularly distinctive. In particular, given the maximum similarity m(x, y_1) and the second largest m(x, y_2), the matching score is defined as

    $$h(x, S_2) = \frac{m(x, y_2)}{m(x, y_1)}$$

    If h(x, S2) is small, x is much closer to y1 than to any other member of S2. Thus, we say that x matches y1 if h(x, S2) is below a threshold η. In addition, if h(x, S2) is above the threshold η, we consider that the interest point x of the fixed image has no correspondence in the moving image.

    We also need to consider what the threshold should be. When the threshold η is too low, few real correspondences are recognized. After evaluating the effect on precision and recall under η of 0.6, 0.8 and 1.0 respectively, we set η = 0.8 in our experiment.
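    The distinctiveness test can be sketched as follows. Here the similarity m is faked from descriptor distance purely so the sketch is runnable; in the paper it is the MatchNet output:

```python
import numpy as np

def match_interest_point(x_desc, s2_descs, eta=0.8):
    # stand-in similarity: 1.0 for identical descriptors, decaying
    # toward 0 as descriptor distance grows (illustrative only)
    m = 1.0 / (1.0 + np.linalg.norm(s2_descs - x_desc, axis=1))
    order = np.argsort(m)[::-1]
    best, second = order[0], order[1]
    h = m[second] / m[best]          # second-best over best similarity
    # accept only a distinctive best match; otherwise declare no match
    return int(best) if h < eta else None
```

    A locally ambiguous point, whose two best candidates score almost equally, yields h near 1 and is rejected rather than risk a mismatch.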

    3.3 Transformation model estimation

    Outliers among the matched interest points are rejected by the RANSAC algorithm, and the transformation parameters supported by the most inliers are found by iteratively fitting transforms. The fixed image is then transformed into the same coordinate system as the moving image. The coordinates after image transformation are not necessarily integers, but this is solved with interpolation.
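    A minimal 2-D version of this step, assuming rigid (rotation + translation) motion; the least-squares fit is the standard Kabsch solution, and the iteration count and inlier tolerance below are our own illustrative choices:

```python
import numpy as np

def fit_rigid(src, dst):
    # least-squares rotation + translation (Kabsch) from matched 2-D points
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:          # reject reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    return R, cd - R @ cs

def ransac_rigid(src, dst, n_iter=200, tol=1.0, seed=0):
    # iteratively fit transforms to minimal samples, keep the model
    # supported by the most inliers, then refit on those inliers
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=2, replace=False)  # minimal sample
        R, t = fit_rigid(src[idx], dst[idx])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best.sum():
            best = inliers
    return fit_rigid(src[best], dst[best])
```

    Because a 2-D rigid transform is determined by two point pairs, each iteration is cheap, and mismatched pairs simply fail to gather inlier support.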

    4 Transfer learning

    Greenspan et al. [Greenspan, Ginneken and Summers (2016)] have pointed out: "the lack of publicly available ground-truth data, and the difficulty in collecting such data per medical task, both cost-wise as well as time-wise, is a prohibitively limiting factor in the medical domain." Transfer learning and fine-tuning are used to solve the problem of insufficient training samples. MatchNet is pre-trained with the UBC dataset, which consists of corresponding patches sampled from 3D reconstructions of the Statue of Liberty (New York), Notre Dame (Paris) and Half Dome (Yosemite). The weights of the trained MatchNet are then used as the initialization of a new, identical MatchNet, and finally the NPC dataset is used to fine-tune the learnable parameters of the pre-trained network. According to Zou et al. [Zou and Zhong (2018)]: "If half of last layers undergoes fine-tuning, compared with entire network involves in fine-tuning, the almost same accuracy can be achieved, but the convergence is more rapid", so half of the last layers undergo fine-tuning in our experiment.
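    The freezing scheme is framework-independent bookkeeping: the first half of the layers keep their pre-trained weights, and only the last half are updated. The toy sketch below uses random matrices and placeholder "gradients" of our own invention, just to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# six linear "layers" standing in for a pre-trained MatchNet
layers = [rng.standard_normal((4, 4)) for _ in range(6)]
pretrained = [w.copy() for w in layers]

n_frozen = len(layers) // 2            # freeze the first half
for step in range(10):                 # fine-tuning steps
    for i, w in enumerate(layers):
        if i < n_frozen:
            continue                   # frozen layer: keep pre-trained weights
        grad = rng.standard_normal(w.shape)  # placeholder for a real gradient
        w -= 0.01 * grad               # update only the last half
```

    In a real framework the same effect is achieved by disabling gradient computation for the frozen layers before fine-tuning on the NPC patches.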

    5 Experiment

    5.1 NPC dataset and data preprocessing

    This study has been conducted using CT and MR images of 99 nasopharyngeal carcinoma patients (age range: 21-76 years; mean age ± standard deviation: 50.3 ± 11.2 years) who underwent chemoradiotherapy or radiotherapy in West China Hospital; the radiology department of West China Hospital approved the use of this dataset and the publication of the experimental results. There are 99 CT images and 99 MR images in the NPC dataset, all coded in DICOM format. The CT images are obtained by a Siemens SOMATOM Definition AS+ system, with a voxel size ranging from 0.88 mm*0.88 mm*3.0 mm to 0.97 mm*0.97 mm*3.0 mm. The MR images are obtained by a Philips Achieva 3T scanner. In this study, T1-weighted images are used, which have a high in-slice resolution of 0.61 mm*0.61 mm and a slice spacing of 0.8 mm.

    The images are preprocessed as follows:

    ● Unifying the axis direction of MRI and CT data.

    ● Removing the invalid background area from CT and MR images.

    ● Resampling the images to a voxel size of 1 mm*1 mm*1 mm.

    ● Because the imaging ranges of MRI and CT are not consistent, we keep only the range from eyebrow to chin when slicing the images.

    ● We randomly select 15 pairs of MR and CT slices for each patient and register them as ground truth using the Elastix toolbox.
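    The isotropic resampling step above can be sketched with SciPy's `zoom` (assuming SciPy is available; the helper name is ours):

```python
import numpy as np
from scipy.ndimage import zoom

def resample_isotropic(volume, voxel_size_mm, target_mm=1.0):
    # zoom factor per axis = current voxel size / target voxel size,
    # so e.g. a 3.0 mm slice spacing is upsampled 3x along that axis
    factors = [v / target_mm for v in voxel_size_mm]
    return zoom(volume, factors, order=1)   # linear interpolation
```

    After this step CT and MR volumes share a common 1 mm grid, so pixel distances in the slices correspond directly to millimetres.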

    We augment the dataset by rotating and scaling.

    ● Rotation: rotating the slice by an angle from -15° to 15° in steps of 5°.

    ● Scale: scaling the slice by a factor in [0.8, 1.2] in steps of 0.1.
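    Enumerating the grid above gives 7 rotations × 5 scales = 35 augmented variants per slice; a sketch:

```python
import numpy as np

rotations_deg = np.arange(-15, 16, 5)            # -15, -10, ..., 15 degrees
scales = np.round(np.arange(0.8, 1.21, 0.1), 1)  # 0.8, 0.9, ..., 1.2
augmentations = [(float(s), int(r)) for s in scales for r in rotations_deg]
```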

    We use the approach introduced in Section 3.1 to detect the interest points; then, centered on each interest point, image patches of size 64*64 are extracted. If a patch pair is generated from the same slice or two corresponding slices and the absolute distance between the corresponding interest points is less than 50 mm, the pair receives a positive label; otherwise, it receives a negative label.
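    A sketch of the patch extraction and labeling rule (helper names are ours; coordinates are assumed to be in mm on the 1 mm isotropic grid):

```python
import numpy as np

def extract_patch(img, point, size=64):
    # crop a size x size patch centered on the interest point
    x, y = int(round(point[0])), int(round(point[1]))
    h = size // 2
    return img[y - h:y + h, x - h:x + h]

def label_pair(p1, p2, corresponding_slices, dist_mm=50.0):
    # positive only for same/corresponding slices with nearby points
    d = np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float))
    return 1 if corresponding_slices and d < dist_mm else 0
```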

    5.2 Experimental setup

    The CT and MR images of 60 patients are used for training and validation, and those of the remaining 39 for testing. More than 2 million patch pairs are produced in the way described in Section 5.1. From the training and validation data, 500,000 patch pairs are randomly selected for training and 200,000 for validation. From the testing data, 300,000 patch pairs are selected for testing. The ratio between positive and negative samples is 1:1, and the proportion of MR-MR, CT-CT and CT-MR pairs is 1:1:2.

    5.3 Results of experiment

    The ground-truth displacement at each voxel of the test pairs is obtained by the Elastix toolbox, so we can independently verify each matched interest point and calculate the precision of the features extracted by SIFT, FIP-CNNF and TrFIP-CNNF respectively. A true positive is a matched interest point in the fixed image for which a true correspondence exists, and a false positive is an interest point assigned an incorrect match.

    For CT-CT image registration, the precision and recall of SIFT, FIP-CNNF and TrFIP-CNNF are shown in Fig. 4 and Fig. 5.

    Figure 4: CT-CT Precision

    Figure 5: CT-CT Recall

    The X-coordinate (Scale, Rotation) represents the degrees of scale and rotation respectively. The experimental results show that TrFIP-CNNF outperforms SIFT and FIP-CNNF. For SIFT and FIP-CNNF, the mean precision differs little, while the recall of FIP-CNNF is better than that of SIFT.

    For MR-MR image registration, the precision and recall of SIFT, FIP-CNNF and TrFIP-CNNF are shown in Fig. 6 and Fig. 7.

    Figure 6: MR-MR Precision

    Figure 7: MR-MR Recall

    The experimental results show that both TrFIP-CNNF and SIFT perform well. In most cases, the precision and recall of TrFIP-CNNF are relatively higher when the rotation is greater than 5°, whereas the precision and recall of SIFT are relatively higher when the rotation is less than 5°. Overall, the precision and recall of FIP-CNNF are the lowest.

    For CT-MR image registration, the precision and recall of SIFT, FIP-CNNF and TrFIP-CNNF are shown in Fig. 8 and Fig. 9.

    Figure 8: CT-MR Precision

    Figure 9: CT-MR Recall

    For multimodal image registration, the deep learning approach has obvious advantages: FIP-CNNF and TrFIP-CNNF outperform SIFT in every task.

    To further verify the results in Fig. 8 and Fig. 9, the target registration error (TRE) is calculated to measure registration accuracy. TRE is defined as the root mean square of the distance errors over all interest point pairs for one sample. The TREs of multimodal image registration are shown in Tab. 2, where the first row (Scale, Rotation) gives the degrees of scale and rotation.

    Table 2: TRE of CT-MR registration
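    Given ground-truth correspondences, TRE as defined above is a one-liner (the function name is ours):

```python
import numpy as np

def target_registration_error(warped_pts, true_pts):
    # root mean square of the distance errors over all interest point pairs
    d = np.linalg.norm(np.asarray(warped_pts) - np.asarray(true_pts), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))
```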

    Fig. 10 provides a visual comparison of the registration of a random pair of CT-MR slices by SIFT, FIP-CNNF and TrFIP-CNNF.

    Figure 10: Color-overlap registration results of SIFT, FIP-CNNF and TrFIP-CNNF

    6 Conclusion

    In our study, the CT and MR images of nasopharyngeal carcinoma are registered by a deep learning network. In particular, interest points are detected by an FCN, and feature detection, description and matching are trained by a CNN. Experimental results show that this builds a feasible approach to improve medical image registration. Especially for CT-MR image registration, FIP-CNNF outperforms SIFT in every task due to the superiority of the high-level features learned by the CNN. TrFIP-CNNF outperforms FIP-CNNF due to the knowledge transferred from rich natural images, which indicates that transfer learning is feasible for medical images and that fine-tuning has a positive impact.

    Acknowledgement: We thank Xiaodong Yang for assistance with the experiment. This work is supported by the National Natural Science Foundation of China (Grant No. 61806029), the Science and Technology Department of Sichuan Province (Grant No. 2017JY0011), and the Education Department of Sichuan Province (Grant No. 17QNJJ0004).
