
    Chronic atrophic gastritis detection with a convolutional neural network considering stomach regions

    2020-08-18 10:01:20
    World Journal of Gastroenterology 2020, Issue 25

    Misaki Kanai, Ren Togo, Takahiro Ogawa, Miki Haseyama

    Abstract

    Key words: Gastric cancer risk; Chronic atrophic gastritis; Helicobacter pylori; Gastric X-ray images; Deep learning; Convolutional neural network; Computer-aided diagnosis

    INTRODUCTION

    Gastric cancer is the third leading cause of death among all types of malignancies, behind lung cancer and colorectal cancer[1]. One of the major risk factors of gastric cancer is Helicobacter pylori (H. pylori) infection[2-4]. Chronic atrophic gastritis (CAG) induced by H. pylori leads to atrophic mucosa[5], which increases the risk of gastric cancer[6]. Moreover, it has been revealed that H. pylori eradication therapy is effective for reducing the risk of gastric cancer[7-9].

    According to the International Agency for Research on Cancer, new cases of gastric cancer in Eastern Asia account for more than half of those in the world. To reduce gastric cancer mortality, population-based screening for gastric cancer has been conducted in Japan through endoscopic and X-ray examinations[10]. Although the detection rate of early gastric cancer by endoscopic examination is higher than that by X-ray examination[11], mass screening using endoscopic examination has some problems, e.g., a limit to the number of patients who can be examined[12]. Therefore, in Japan, endoscopic examination is often performed for cases in which something unusual is found by X-ray examination[13]. However, the interpretation of gastric X-ray images (GXIs) requires sufficient experience and knowledge, and there is a shortage of doctors who are skilled in diagnosis[10]. The development of computer-aided diagnosis (CAD) systems is needed to help doctors who do not have sufficient experience and knowledge.

    To realize CAD systems, researchers have been exploring methods for CAG detection from GXIs[14-17]. Early works attempted to describe the visual features of CAG with mathematical models[14,15]. For more accurate detection, in our previous papers[16,17], we introduced convolutional neural networks (CNNs)[18], since it has been reported that CNNs outperform methods based on hand-crafted features in various tasks[19-21]. Because GXIs have high resolutions, we adopted a CNN trained on patches obtained by dividing the original images, i.e., a patch-based CNN, to preserve their detailed textures. In our previous investigation[17], we focused on the patches outside the stomach, since the textures of these patches do not depend on the image-level ground truth (GT), i.e., CAG or non-CAG. In clinical settings, GXIs generally have only the image-level GT. Therefore, we introduced manual annotation of stomach regions for all GXIs used in training and assigned the patch-level class labels based on the image-level GT and the stomach regions. Although the previously reported method achieved high detection performance (sensitivity: 0.986, specificity: 0.945), a problem remains. In general, CNNs require a large number of labeled images for training to determine the millions of parameters that capture the semantic contents of images. However, manually annotating stomach regions for a large number of GXIs is time- and labor-consuming. In other words, the previous method can practically utilize only a small number of GXIs even when numerous GXIs are available for training.

    In this paper, we propose a novel CAG detection method that requires manual annotation of stomach regions for only a small number of GXIs. The main contribution of this paper is the effective use of stomach regions that are manually annotated for only part of the GXIs used in training. We assume that distinguishing the inside and outside patches of the stomach is much easier for patch-based CNNs than distinguishing whether the patches are extracted from CAG images or non-CAG images. Therefore, we newly introduce automatic estimation of stomach regions for non-annotated GXIs. In this way, we can reduce the workload of manual annotation and train a patch-based CNN that considers stomach regions with all GXIs, even when stomach regions are manually annotated for only some of the GXIs used in training.

    MATERIALS AND METHODS

    The proposed method, which requires manual annotation of stomach regions for only a small number of GXIs to detect CAG, is presented in this section. This study was reviewed and approved by the institutional review board. Patients were not required to give informed consent to this study since the analysis used anonymous data that were obtained after each patient agreed to inspections by written consent. In this study, Kanai M from the Graduate School of Information Science and Technology, Hokkaido University, Togo R from the Education and Research Center for Mathematical and Data Science, Hokkaido University, and Ogawa T and Haseyama M from the Faculty of Information Science and Technology, Hokkaido University, took charge of the statistical analysis since they have advanced knowledge of statistical analysis.

    Study subjects

    The GXIs used in this study were obtained from 815 subjects. Each subject underwent a gastric X-ray examination and an endoscopic examination at The University of Tokyo Hospital in 2010, and GXIs that had the same diagnostic results in both examinations were used in this study. Exclusion criteria were a usage history of gastric acid suppressants, a history of H. pylori eradication therapy, and insufficient image data. The ground truth for this study was the diagnostic results of the X-ray and endoscopic examinations. In the X-ray evaluation, subjects were classified into four categories, i.e., “normal”, “mild”, “moderate” and “severe”, based on atrophic levels[22]. It should be noted that the stomach with non-CAG has straight and fine fold distributions and fine mucosal surfaces, whereas the stomach with CAG has non-straight, snaked folds and coarse mucosal surfaces. X-ray examination can visualize these atrophic characteristics with barium contrast medium. In this paper, we show that a CAG detection model can learn these differences from a small number of training images. We regarded subjects whose diagnosis results were “normal” as non-CAG subjects and the other subjects as CAG subjects. In the endoscopic examination, by contrast, subjects were classified into seven categories, i.e., no atrophic change (C0), three closed types of atrophic gastritis (C1, C2, C3) and three open types of atrophic gastritis (O1, O2, O3), based on the Kimura-Takemoto seven-grade classification[23]. Since C1 is defined as the atrophic borderline, we excluded subjects whose diagnosis results were C1 from the dataset. We regarded subjects whose diagnosis results were C0 as non-CAG subjects and the other subjects as CAG subjects. As a result, we regarded 240 subjects as CAG subjects and 575 subjects as non-CAG subjects.

    The GXIs used in this study were 2048 × 2048 pixels in size with 8-bit grayscale. Fluoroscopy was performed with a digital radiography system, and exposure was controlled by an automatic exposure control mechanism. To realize patch-level learning of our model, GXIs taken in the double-contrast frontal view of the stomach in the supine position were used in this study.

    Preparation of dataset for training

    For more accurate detection, we focus on the patches outside the stomach since the textures of these patches do not depend on the image-level GT at all. Although stomach regions can be determined easily without highly specialized knowledge, manually annotating stomach regions for a large number of GXIs is not practical. To overcome this problem, we split the GXIs for training into the following two groups.

    (1) Manual annotation group (MAG): This group consists of GXIs for which we manually annotate the stomach regions. Ideally, the number of GXIs in this group is small since annotation of a large number of GXIs is time- and labor-consuming; (2) Automatic annotation group (AAG): This group consists of GXIs for which we automatically estimate the stomach regions with a CNN. By estimating the stomach regions automatically, a large number of GXIs can be used for training without the large burden of manual annotation.

    An overview of the preparation of the dataset is shown in Figure 1. The preparation of the dataset for training consists of two steps. In the first step, we annotate the stomach regions for GXIs in the MAG manually and select patches for training from GXIs in the MAG. In the second step, we estimate the stomach regions for GXIs in the AAG automatically and select patches for training from GXIs in the AAG.

    First step - Patch selection for training from the MAG: Each GXI in the MAG is divided into patches. To separate the inside and outside patches of the stomach, we categorize the patches with the following three kinds of patch-level class labels (P, N, U): P: inside patches of the stomach in CAG images, i.e., positive patches; N: inside patches of the stomach in non-CAG images, i.e., negative patches; U: outside patches of the stomach in both CAG and non-CAG images, i.e., unrelated patches.

    Note that if more than 80% of a patch's area lies inside the stomach, the patch is annotated as P or N. Furthermore, if less than 1% of a patch's area lies inside the stomach, the patch is annotated as U. Otherwise, the patch is discarded from the training dataset. We denote the sets of patches with the patch-level class labels (P, N, U) as P_MAG, N_MAG, and U_MAG, respectively. By introducing the class label U, we can train a patch-based CNN that can distinguish the inside and outside patches of the stomach.
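
    As a minimal sketch (not the authors' code), this labelling rule can be written as follows, assuming a binary stomach mask aligned with each patch; the helper name and array interface are illustrative.

        import numpy as np

        def label_patch(stomach_mask_patch: np.ndarray, image_is_cag: bool):
            """Assign a patch-level class label from the fraction of the patch inside the stomach.

            stomach_mask_patch: binary array (1 = inside the manually annotated stomach region).
            Returns 'P', 'N', 'U', or None when the patch is discarded from the training dataset.
            """
            inside_ratio = float(stomach_mask_patch.mean())
            if inside_ratio > 0.8:                   # more than 80% of the patch inside the stomach
                return 'P' if image_is_cag else 'N'  # label follows the image-level GT
            if inside_ratio < 0.01:                  # less than 1% of the patch inside the stomach
                return 'U'                           # unrelated (outside) patch
            return None                              # ambiguous boundary patch: excluded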

    Second step - Patch selection for training from the AAG: For estimating the stomach regions for GXIs in the AAG, we introduce a fine-tuning technique that is effective when only a small number of images are available for training[24]. First, we prepare a CNN whose weights are transferred from a CNN pre-trained for image classification with a large number of labeled natural images. The number of nodes in the last fully connected layer of the CNN is altered to the number of patch-level class labels (P, N, U). Due to this alteration, we initialize the weights of the last fully connected layer with random values sampled from a uniform distribution. Next, the CNN is fine-tuned with the patches obtained in the first step to calculate the probabilities p_c (c ∈ {P, N, U}, Σ_c p_c = 1) of belonging to the patch-level class label c. By transferring the weights from the pre-trained CNN, accurate prediction of the patch-level class labels is realized even when the number of GXIs in the MAG is small. Second, we estimate the stomach regions for GXIs in the AAG. We divide each GXI in the AAG into patches. By inputting the patches into the fine-tuned CNN, we calculate the probabilities p_c. We can regard p_P + p_N and p_U as the probability of being inside the stomach and the probability of being outside the stomach, respectively. Therefore, we handle the patches that satisfy p_P + p_N ≥ α (0.5 < α ≤ 1) and the patches that satisfy p_U ≥ α as the inside and outside patches of the stomach, respectively. Finally, we assign the three kinds of patch-level class labels (P, N, U) to the patches based on the estimated results and the image-level GT. We denote the sets of selected patches with patch-level class labels (P, N, U) as P_AAG, N_AAG and U_AAG, respectively. By estimating the stomach regions for the AAG, we can add P_AAG, N_AAG, and U_AAG to the training dataset even when the stomach regions are manually annotated only for the MAG.
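
    A minimal sketch of this probability-based patch selection for the AAG (an assumed interface, not the authors' code) is given below; the fine-tuned CNN is assumed to have already produced (p_P, p_N, p_U) for every patch.

        import numpy as np

        def select_aag_patch_labels(patch_probs: np.ndarray, image_is_cag: bool, alpha: float = 0.9):
            """patch_probs: shape (num_patches, 3), columns (p_P, p_N, p_U) summing to 1 per patch.
            Returns one label per patch: 'P'/'N'/'U', or None when the patch is not kept."""
            labels = []
            for p_p, p_n, p_u in patch_probs:
                if p_p + p_n >= alpha:                           # confidently inside the stomach
                    labels.append('P' if image_is_cag else 'N')  # label follows the image-level GT
                elif p_u >= alpha:                               # confidently outside the stomach
                    labels.append('U')
                else:
                    labels.append(None)                          # uncertain: not added to the dataset
            return labels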

    Chronic atrophic gastritis detection

    With all selected patches (P = P_MAG + P_AAG, N = N_MAG + N_AAG, U = U_MAG + U_AAG), we retrain the fine-tuned CNN to predict the patch-level class labels. For a GXI X_test whose stomach regions and GT are unknown, we estimate an image-level class label y_test ∈ {1, 0} that indicates CAG or non-CAG. First, we divide the target image X_test into patches. We denote the patch-level class label predicted by the retrained CNN as c_pred ∈ {P, N, U}. In order to eliminate the influence of patches outside the stomach, we select the patches that satisfy c_pred = P or c_pred = N for estimating the image-level class label. We calculate the ratio R with the selected patches as follows: R = M_P/(M_P + M_N), where M_P and M_N are the numbers of patches that satisfy c_pred = P and c_pred = N, respectively. Finally, the image-level estimation result y_test for the target image X_test is obtained as follows: y_test = 1 if R < β; otherwise, y_test = 0, where β is a predefined threshold. By selecting the patches that satisfy c_pred = P or c_pred = N, the image-level class label can be estimated without the negative effect of regions outside the stomach.
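
    A minimal sketch of this image-level decision rule is shown below, assuming the retrained CNN has already produced one predicted label per patch of the test image; the fallback for an image with no P or N patches is an assumption not discussed in the text.

        def classify_image(pred_labels, beta: float) -> int:
            """pred_labels: iterable of 'P'/'N'/'U' predictions for the patches of one test GXI.
            Returns 1 (CAG) or 0 (non-CAG) from R = M_P/(M_P + M_N) and the threshold β."""
            m_p = sum(1 for c in pred_labels if c == 'P')
            m_n = sum(1 for c in pred_labels if c == 'N')
            if m_p + m_n == 0:
                return 0                     # assumed fallback when no stomach patch is predicted
            r = m_p / (m_p + m_n)
            return 1 if r < beta else 0      # decision rule as stated above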

    Figure 1 Overview of preparation of the dataset. CNN: Convolutional neural network; MAG: Manual annotation group; AAG: Automatic annotation group.

    Evaluation of chronic atrophic gastritis detection results

    A total of 815 GXIs, including 200 images (100 CAG and 100 non-CAG images) for training and 615 images (140 CAG and 475 non-CAG images) for evaluation, were used in this experiment. We set the number of GXIs in the MAG, N_MAG, to {10, 20, …, 50}. Note that the numbers of CAG and non-CAG images in the MAG were both set to N_MAG/2. We randomly sampled the GXIs for the MAG from those for training, and the rest were used for the AAG. For an accurate evaluation, random sampling and calculation of the CAG detection performance were repeated five times at each N_MAG. It took approximately 24 h to perform each trial. Note that all networks were computed on a single NVIDIA GeForce RTX 2080 Ti GPU. In this study, we extracted patches of 299 × 299 pixels in size from the GXIs at intervals of 50 pixels. The following parameters were used for training of the CNN model: Batch size = 32, learning rate = 0.0001, momentum = 0.9, and number of epochs = 50. The threshold α for estimating stomach regions was set to 0.9.
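
    For illustration, a minimal sketch (not the authors' code) of the patch extraction described above, with 299 × 299 patches taken from a 2048 × 2048 grayscale GXI at 50-pixel intervals:

        import numpy as np

        def extract_patches(image: np.ndarray, patch_size: int = 299, stride: int = 50):
            """Yield (top, left, patch) tuples over a 2-D grayscale gastric X-ray image."""
            h, w = image.shape
            for top in range(0, h - patch_size + 1, stride):
                for left in range(0, w - patch_size + 1, stride):
                    yield top, left, image[top:top + patch_size, left:left + patch_size]

    With a 2048 × 2048 image, this sweep yields 35 × 35 = 1225 patches per GXI.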

    The TensorFlow framework[25] was utilized for training the CNNs. We utilized the Inception-v3[26] model with weights trained on ImageNet[27] for fine-tuning. To confirm the effectiveness of utilizing not only the MAG but also the AAG for training, we compared the proposed method utilizing only the MAG with the proposed method utilizing both the MAG and the AAG. Hereinafter, (MAG only) denotes the proposed method utilizing only the MAG, and (MAG + AAG) denotes the proposed method utilizing both the MAG and the AAG.
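
    A minimal sketch of this fine-tuning setup written with the tf.keras API follows; the exact training code is not given in the paper, so the choice of SGD with momentum and the handling of grayscale patches (which would need to be replicated to three channels before being fed to the network) are assumptions.

        import tensorflow as tf

        def build_patch_cnn(num_classes: int = 3) -> tf.keras.Model:
            """Inception-v3 pre-trained on ImageNet, with the last layer replaced by a 3-way softmax (P, N, U)."""
            base = tf.keras.applications.InceptionV3(
                weights="imagenet", include_top=False, input_shape=(299, 299, 3))
            x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
            outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)  # new, randomly initialized head
            model = tf.keras.Model(inputs=base.input, outputs=outputs)
            model.compile(
                optimizer=tf.keras.optimizers.SGD(learning_rate=1e-4, momentum=0.9),  # lr/momentum from the text
                loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])
            return model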

    The performance was measured by the following harmonic mean (HM) of sensitivity and specificity: HM = (2 × Sensitivity × Specificity)/(Sensitivity + Specificity).

    Note that HM was obtained at the threshold β providing the highest HM.
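
    The evaluation metric can be computed as in the following minimal sketch, which assumes per-image ratios R and ground-truth labels are available and sweeps candidate thresholds β:

        import numpy as np

        def best_harmonic_mean(ratios: np.ndarray, labels: np.ndarray, thresholds) -> float:
            """labels: 1 for CAG, 0 for non-CAG. Returns the highest HM over the candidate thresholds β."""
            best = 0.0
            for beta in thresholds:
                pred = (ratios < beta).astype(int)          # y = 1 if R < β, as stated above
                tp = int(np.sum((pred == 1) & (labels == 1)))
                fn = int(np.sum((pred == 0) & (labels == 1)))
                tn = int(np.sum((pred == 0) & (labels == 0)))
                fp = int(np.sum((pred == 1) & (labels == 0)))
                sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
                specificity = tn / (tn + fp) if (tn + fp) else 0.0
                if sensitivity + specificity > 0:
                    best = max(best, 2 * sensitivity * specificity / (sensitivity + specificity))
            return best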

    RESULTS

    Our experimental results are shown in this section. Examples of GXIs for evaluation are shown in Figure 2. First, we evaluate the performance of the fine-tuned CNN used to select patches from the AAG. Note that the fine-tuned CNN does not have to distinguish with high accuracy whether the patches are extracted from CAG images or non-CAG images, since we utilize the fine-tuned CNN only for estimating the stomach regions. Figure 3 shows the visualization results obtained by applying the fine-tuned CNN to the CAG image shown in Figure 2A.

    Specifically, the estimated patches of the inside and outside of the stomach and the visualization of p_P and p_N (i.e., the calculated probabilities of belonging to the patch-level class labels P and N) are shown in Figure 3. It is notable that the inside and outside regions of the stomach partially overlapped since the GXIs were divided into overlapping patches in this experiment. As shown in Figure 3, regions whose probabilities p_P and p_N are high tend to increase and decrease, respectively, as N_MAG (i.e., the number of GXIs in the MAG) increases. In contrast, the stomach regions were estimated with high accuracy, and the estimated stomach regions do not depend on the change of N_MAG. Therefore, it is worth utilizing the fine-tuned CNN for estimating the stomach regions.

    Next, we evaluate the performance of CAG detection. To confirm the effectiveness of considering the stomach regions, we evaluated the detection performance of a baseline method that did not consider the stomach regions, by setting the patch-level class labels to be the same as the image-level GT. When 200 GXIs were used for training, the HM of the baseline method was 0.945. We then show the detection performance of the methods that considered the stomach regions. The detection performance of (MAG only) and that of (MAG + AAG) are shown in Figure 4. In Figure 4, HMs are shown as means ± SD of five trials. As shown in Figure 4, the mean HM of (MAG + AAG) is higher than that of (MAG only) at each N_MAG. The standard deviation of the HM of (MAG + AAG) is also smaller than that of (MAG only) at each N_MAG. Therefore, the effectiveness of utilizing not only the MAG but also the AAG for training is confirmed when the stomach regions are manually annotated for the same number of GXIs. In addition, to confirm the effect of reducing the workload of manual annotation on detection performance, we evaluated the detection performance of a method that manually annotated the stomach regions for all GXIs used in training (i.e., N_MAG = 200). As a result, the HM of this method was 0.965. The negative effect of reducing the workload of manual annotation is small, since the mean HM of (MAG + AAG) approaches the HM of this method even when N_MAG is small.

    DISCUSSION

    This study demonstrated that highly accurate detection of CAG was feasible even when we manually annotated the stomach regions for only a small number of GXIs. Figure 4 (MAG only) indicates that a larger number of GXIs with annotation of the stomach regions is required to realize highly accurate detection. In contrast, highly accurate estimation of the stomach regions is feasible even when the number of manually annotated images is limited. Therefore, highly accurate detection of CAG and reduction of the labor required for manually annotating the stomach regions are simultaneously realized by the proposed method when a large number of GXIs with the image-level GT are available for training.

    The proposed method can be applied to other tasks in the field of medical image analysis since regions outside a target organ in medical images adversely affect the performance of the tasks. With only a simple annotation of regions of the target organ for a small number of medical images, the proposed method will enable accurate analysis that excludes the effect of regions outside the target organ.

    In general, endoscopic examination is superior to X-ray examination for the evaluation of CAG in imaging inspections[28]. Endoscopic examination has been recommended for gastric cancer mass screening programs in East Asian countries in recent years. For example, South Korea started an endoscopy-based gastric cancer screening program in 2002, and the proportion of individuals who underwent endoscopic examination greatly increased from 31.15% in 2002 to 72.55% in 2011[28]. Japan also added endoscopic examination to X-ray examination in its gastric cancer mass screening program in 2016. However, the problem remains that the number of individuals who can be examined in a day is limited. Hence, X-ray examination still plays an important role in gastric cancer mass screening.

    Figure 2 Examples of gastric X-ray images for evaluation.

    Figure 3 Visualization of the results estimated by the fine-tuned convolutional neural network used to select patches from the automatic annotation group at each N_MAG, for the chronic atrophic gastritis image shown in Figure 2A. The inside and outside regions of the stomach overlapped since the gastric X-ray images were divided into overlapping patches in this experiment.

    To realize effective gastric cancer mass screening, it is crucial to narrow down the individuals who need endoscopic examination by evaluating the condition of the stomach. CAD systems that can provide additional information to doctors will therefore be helpful. In particular, the approach presented in this paper realized the construction of machine learning-based CAG detection with a small number of training images. This suggests that the CAG detection method can be trained with data from a small-scale or medium-scale hospital that does not hold a large number of medical images for training.

    This study has a few limitations. First, GXIs taken from only a single angle were analyzed in this study. In general X-ray examinations, GXIs are taken from multiple angles for each patient to examine the inside of the stomach thoroughly. Therefore, the detection performance will be improved by applying the proposed method to GXIs taken from multiple angles. Furthermore, the GXIs analyzed in this study were obtained in a single medical facility. To verify versatility, the proposed method should be applied to GXIs obtained in various medical facilities.

    Figure 4 Harmonic mean of the detection results obtained by changing N_MAG. Results are shown as means ± SD of five trials. AAG: Automatic annotation group; MAG: Manual annotation group; HM: Harmonic mean.

    In this paper, a method for CAG detection from GXIs is presented. In the proposed method, we manually annotate the stomach regions for some of the GXIs used in training and automatically estimate the stomach regions for the rest of the GXIs. By using GXIs with the stomach regions for training, the proposed method realizes accurate CAG detection that automatically excludes the effect of regions outside the stomach. Experimental results showed the effectiveness of the proposed method.

    ARTICLE HIGHLIGHTS

    Research perspectives

    Our CAG detection method can be trained with data from a small-scale or medium-scale hospital without medical data sharing, which carries the risk of leakage of personal information.

    ACKNOWLEDGEMENTS

    Experimental data were provided by the University of Tokyo Hospital in Japan. We express our thanks to Katsuhiro Mabe of the Junpukai Health Maintenance Center, and Nobutake Yamamichi of The University of Tokyo.
