

    •Biostatistics in psychiatry (25)•

    Kappa coefficient: a popular measure of rater agreement

    Wan TANG1*, Jun HU2, Hui ZHANG3, Pan WU4, Hua HE1,5

    interrater agreement; kappa coefficient; weighted kappa; correlation

    1. Introduction

    For most physical illnesses, such as high blood pressure and tuberculosis, definitive diagnoses can be made using medical devices such as a sphygmomanometer for blood pressure or an X-ray for tuberculosis. However, there are no error-free gold-standard physical indicators of mental disorders, so the diagnosis and severity of mental disorders typically depend on the use of instruments (questionnaires) that attempt to measure latent, multi-faceted constructs. For example, psychiatric diagnoses are often based on criteria specified in the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV),[1] published by the American Psychiatric Association. But different clinicians may have different opinions about the presence or absence of the specific symptoms required to determine the presence of a diagnosis, so there is typically no perfect agreement between evaluators. In this situation, statistical methods are needed to address variability in clinicians' ratings.

    Cohen's kappa is a widely used index for assessing agreement between raters.[2] Although similar in appearance, agreement is a fundamentally different concept from correlation. To illustrate, consider an instrument with six items and suppose that two raters' ratings of the six items on a single subject are (3,5), (4,6), (5,7), (6,8), (7,9) and (8,10). Although the scores of the two raters are quite different, the Pearson correlation coefficient for the two scores is 1, indicating perfect correlation. The paradox occurs because there is a bias in the scoring that results in a consistent difference of 2 points between the two raters' scores for all 6 items in the instrument. Thus, although perfectly correlated (precision), there is quite poor agreement between the two raters. The kappa index, the most popular measure of rater agreement, resolves this problem by assessing both the bias and the precision of the raters' ratings.
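    To make the contrast concrete, the correlation and the exact-agreement rate for these six pairs of ratings can be checked directly; the short Python/NumPy sketch below (an illustration only, not part of the analyses in the article) reproduces the paradox.

    import numpy as np

    # Ratings of the six items by the two raters, as listed above
    rater1 = np.array([3, 4, 5, 6, 7, 8])
    rater2 = np.array([5, 6, 7, 8, 9, 10])

    pearson_r = np.corrcoef(rater1, rater2)[0, 1]   # 1.0: perfect correlation
    exact_agreement = np.mean(rater1 == rater2)     # 0.0: the raters never give the same score

    print(pearson_r, exact_agreement)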

    In addition to its applications in psychiatric diagnosis, the concept of agreement is also widely applied to assess the utility of diagnostic and screening tests. Diagnostic tests provide information about a patient's condition that clinicians often use when making decisions about the management of patients. Early detection of disease or of important changes in the clinical status of patients often leads to less suffering and quicker recovery, but false negative and false positive screening results can result in delayed treatment or in inappropriate treatment. Thus, when a new diagnostic or screening test is developed, it is critical to assess its accuracy by comparing test results with those from a gold or reference standard. When assessing such tests, it is incorrect to measure the correlation of the test results with the gold standard; the correct procedure is to assess the agreement of the test results with the gold standard.

    2. Problems

    Consider an instrument with a binary outcome, with '1' representing the presence of depression and '0' representing the absence of depression. Suppose two independent raters apply the instrument to a random sample of n subjects. Let x_i and y_i denote the ratings of the i-th subject by the first and second rater, respectively, for i = 1, 2, ..., n. We are interested in the degree of agreement between the two raters. Since the ratings are on the same two-level scale for both raters, the data can be summarized in a 2×2 contingency table.

    To illustrate, Table 1 shows the results of a study assessing the prevalence of depression among 200 patients treated in a primary care setting using two methods to determine the presence of depression:[3] one based on information provided by the individual (i.e., the proband) and the other based on information provided by another informant (e.g., the subject's family member or close friend) about the proband. Intuitively, we may think that the proportion of cases in which the two ratings are the same (in this example, 34.5% [(19+50)/200]) would be a reasonable measure of agreement. But the problem with this proportion is that it is almost always positive, even when the ratings by the two methods are completely random and independent of each other. So the proportion of overall agreement does not indicate whether or not two raters or two methods of rating are in agreement.

    Table 1. Diagnosis of depression among 200 primary care patients based on information provided by the proband and by other informants about the proband

    For example, suppose that two raters with no training or experience related to depression randomly decide whether or not each of the 200 patients has depression. Assume that one rater makes a positive diagnosis (i.e., considers depression present) 80% of the time and the other gives a positive diagnosis 90% of the time. Based on the assumption that their diagnoses are made independently of each other, Table 2 represents the joint distribution of their ratings. The proportion of cases in which the two raters give the same diagnosis is 74% (i.e., 0.72+0.02), suggesting that the two raters are doing a good job of diagnosing the presence of depression. But this level of agreement occurs purely by chance; it does not reflect the actual degree of agreement between the two raters. This hypothetical example shows that the proportion of cases in which two raters give the same ratings on an instrument is inflated by chance agreement. This chance agreement must be removed in order to provide a valid measure of agreement. Cohen's kappa coefficient is used to assess the level of agreement beyond chance agreement.
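    The inflation can be verified with a few lines of arithmetic; the short Python sketch below (an illustration only, not the authors' code) recreates the chance agreement in this hypothetical example.

    # Marginal probabilities of a positive diagnosis for the two hypothetical raters
    p_pos_rater1 = 0.80
    p_pos_rater2 = 0.90

    # Under independence, joint probabilities are products of the marginal probabilities
    both_positive = p_pos_rater1 * p_pos_rater2              # 0.72
    both_negative = (1 - p_pos_rater1) * (1 - p_pos_rater2)  # 0.02

    chance_agreement = both_positive + both_negative
    print(chance_agreement)   # 0.74 -- "agreement" arising purely by chance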

    Table 2. Hypothetical example of proportional distribution of diagnoses by two raters that make diagnoses independently from each other

    3. Kappa for 2×2 tables

    Consider a hypothetical example of two raters giving ratings for n subjects on a binary scale, with '1' representing a positive result (e.g., the presence of a diagnosis) and '0' representing a negative result (e.g., the absence of a diagnosis). The results can be reported in a 2×2 contingency table as shown in Table 3. By convention, the results of the first rater are shown in the rows (x values) and the results of the second rater are shown in the columns (y values). Thus, nij in the table denotes the number of subjects who receive the rating i from the first rater and the rating j from the second rater. Let Pr(A) denote the probability of event A; then pij = Pr(x=i, y=j) represents the proportion of all cases that receive the rating i from the first rater and the rating j from the second rater, pi+ = Pr(x=i) represents the marginal distribution of the first rater's ratings, and p+j = Pr(y=j) represents the marginal distribution of the second rater's ratings.

    Table 3. A typical 2×2 contingency table to assess agreement of two raters

    If the two raters give their ratings independently according to their marginal distributions, the probability that a subject is rated 0 (negative) by chance by both raters is the product of the marginal probabilities p0+ and p+0. Likewise, the probability of a subject being rated 1 (positive) by chance by both raters is the product of the marginal probabilities p1+ and p+1. The sum of these two probabilities (p1+ × p+1 + p0+ × p+0) is the agreement by chance, that is, the source of inflation discussed earlier. After subtracting this source of inflation from the total proportion of cases in which the two raters give identical ratings (p11 + p00), we arrive at the agreement corrected for chance, (p11 + p00) − (p1+ × p+1 + p0+ × p+0). In 1960 Cohen[2] recommended normalizing this chance-adjusted agreement as the kappa coefficient (κ):
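    In the notation of Table 3, equation (1) is

    \[
    \kappa \;=\; \frac{(p_{11}+p_{00})-(p_{1+}p_{+1}+p_{0+}p_{+0})}{1-(p_{1+}p_{+1}+p_{0+}p_{+0})} \qquad (1)
    \]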

    This normalization process produces kappa coefficients that vary between -1 and 1, depending on the degree of agreement or disagreement beyond chance. If the two raters completely agree with each other, then p11 + p00 = 1 and κ = 1; conversely, if the kappa coefficient is 1, then the two raters agree completely. On the other hand, if the raters rate the subjects in a completely random fashion, then the agreement is completely due to chance, so p11 = p1+ × p+1 and p00 = p0+ × p+0, which makes (p11 + p00) − (p1+ × p+1 + p0+ × p+0) = 0, and the kappa coefficient is also 0. In general, when rater agreement exceeds chance agreement the kappa coefficient is positive, and when raters disagree more than they agree the kappa coefficient is negative. The magnitude of kappa indicates the degree of agreement or disagreement.

    The kappa coefficient can be estimated by substituting sample proportions for the probabilities shown in equation (1). When the number of ratings given by each rater (i.e., the sample size) is large, the estimated kappa coefficient approximately follows a normal distribution. This asymptotic distribution can be derived using the delta method based on the asymptotic distributions of the various sample proportions.[4] Based on the asymptotic distribution, confidence intervals and hypothesis tests can be computed. For a sample with 100 or more ratings, this approximation is generally good. However, it may not work well for small sample sizes, in which case exact methods may be applied to provide more accurate inference.[4]
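    The estimation step is easy to program; the Python/NumPy sketch below (an illustration, not the authors' software) computes kappa from a square table of counts. The standard error shown uses the simple large-sample approximation sqrt(po(1 - po)/(n(1 - pe)^2)) rather than the full delta-method variance cited above, so it is only a rough guide.

    import numpy as np

    def kappa_from_table(counts):
        """Cohen's kappa from a square table of counts, where counts[i][j] is the
        number of subjects rated i by the first rater and j by the second rater."""
        t = np.asarray(counts, dtype=float)
        n = t.sum()
        p = t / n                                        # cell proportions p_ij
        p_o = np.trace(p)                                # observed agreement, sum of p_ii
        p_e = np.sum(p.sum(axis=1) * p.sum(axis=0))      # chance agreement, sum of p_i+ x p_+i
        kappa = (p_o - p_e) / (1 - p_e)
        se = np.sqrt(p_o * (1 - p_o) / (n * (1 - p_e) ** 2))  # rough asymptotic standard error
        return kappa, se

    # Counts implied by the hypothetical Table 2 (proportions x 200): chance-only agreement
    print(kappa_from_table([[4, 36], [16, 144]]))        # kappa = 0.0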

    Example 1. Assessing the agreement between the diagnosis of depression based on information provided by the proband and the diagnosis based on information provided by other informants (Table 1), the kappa coefficient is computed as follows:

    The asymptotic standard error of kappa is estimated as 0.063. This gives a 95% confidence interval for κ of (0.2026, 0.4497). The positive kappa indicates some degree of agreement between diagnoses of depression based on information provided by the proband and diagnoses based on information provided by other informants. However, the level of agreement, though statistically significant, is relatively weak.

    In most applications, there is usually more interest in the magnitude of kappa than in its statistical significance. When the sample is relatively large (as in this example), a low kappa representing relatively weak agreement can nevertheless be statistically significant (that is, significantly greater than 0). The degree of beyond-chance agreement has been classified in different ways by different authors, who somewhat arbitrarily assigned categories to specific cutoff levels of kappa. For example, Landis and Koch[5] proposed that a kappa in the range of 0.21-0.40 be considered 'fair' agreement, kappa=0.41-0.60 be considered 'moderate' agreement, kappa=0.61-0.80 be considered 'substantial' agreement, and kappa=0.81-1.00 be considered 'almost perfect' agreement.

    4. Kappa for categorical variables with multiple levels

    The kappa coefficient for a binary rating scale can be generalized to cases in which there are more than two levels in the rating scale. Suppose there are k nominal categories in the rating scale. For simplicity and without loss of generality, denote the rating levels by 1, 2, ..., k. The ratings from the two raters can be summarized in a k×k contingency table, as shown in Table 4. In the table, nij, pij, pi+, and p+j have the same interpretations as in the 2×2 contingency table (above), but the range of the scale is extended to i, j = 1, ..., k. As in the binary example, we first compute the agreement by chance (the sum of the products of the k pairs of marginal probabilities, ∑ pi+ × p+i for i = 1, ..., k), and subtract this chance agreement from the total observed agreement (the sum of the diagonal probabilities, ∑ pii for i = 1, ..., k) before estimating the normalized agreement beyond chance:
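    In this notation, equation (2) is

    \[
    \kappa \;=\; \frac{\sum_{i=1}^{k} p_{ii}-\sum_{i=1}^{k} p_{i+}p_{+i}}{1-\sum_{i=1}^{k} p_{i+}p_{+i}} \qquad (2)
    \]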

    Table 4. Model k×k contingency table to assess agreement about k categories by two different raters

    As in the case of binary scales, the kappa coefficient varies between -1 and 1, depending on the extent of agreement or disagreement. If the two raters completely agree with each other (∑ pii = 1 for i = 1, ..., k), then the kappa coefficient is equal to 1. If the raters rate the subjects at random, then the total agreement is equal to the chance agreement (∑ pii = ∑ pi+ × p+i for i = 1, ..., k), so the kappa coefficient is 0. In general, the kappa coefficient is positive if there is agreement beyond chance and negative if there is disagreement, with the magnitude of kappa indicating the degree of such agreement or disagreement between the raters. The kappa index in equation (2) is estimated by replacing the probabilities with their corresponding sample proportions. As in the case of binary scales, we can use asymptotic theory and exact methods to construct confidence intervals and make inferences.

    5. Kappa for ordinal or ranked variables

    The definition of the kappa coefficient in equation (2) treats the rating categories as unordered (nominal) categories. If, however, the rating categories are ordered or ranked (for example, a Likert scale with categories such as 'strongly disagree', 'disagree', 'neutral', 'agree', and 'strongly agree'), then a weighted kappa coefficient is computed that takes into consideration the different levels of disagreement between categories. For example, if one rater 'strongly disagrees' and another 'strongly agrees', this must be considered a greater level of disagreement than when one rater 'agrees' and another 'strongly agrees'.

    The first step in computing a weighted kappa is to assign weights representing the different levels of agreement to each cell of the k×k contingency table. The weights in the diagonal cells are all 1 (i.e., wii = 1 for all i), and the weights in the off-diagonal cells range from 0 to less than 1 (i.e., 0 ≤ wij < 1 for all i ≠ j). These weights are then incorporated into equation (2) to generate a weighted kappa that accounts for varying degrees of agreement or disagreement between the ranked categories:
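    With these weights, equation (3) is

    \[
    \kappa_w \;=\; \frac{\sum_{i=1}^{k}\sum_{j=1}^{k} w_{ij}p_{ij}-\sum_{i=1}^{k}\sum_{j=1}^{k} w_{ij}p_{i+}p_{+j}}{1-\sum_{i=1}^{k}\sum_{j=1}^{k} w_{ij}p_{i+}p_{+j}} \qquad (3)
    \]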

    The weighted kappa is computed by replacing the probabilities with their respective sample proportions for pij, pi+, and p+j. If wij = 0 for all i ≠ j, the weighted kappa coefficient Kw reduces to the standard kappa in equation (2). Note that for binary rating scales there is no distinct weighted version of kappa, since κ remains the same regardless of the weights used. Again, we can use asymptotic theory and exact methods to construct confidence intervals and make inferences.

    In theory, any weights satisfying the two defining conditions (i.e., weights in diagonal cells equal to 1 and weights in off-diagonal cells between 0 and 1) may be used. In practice, however, additional constraints are often imposed to make the weights more interpretable and meaningful. For example, since the degree of disagreement (agreement) is often a function of the difference between the ith and jth rating categories, weights are typically set to reflect adjacency between rating categories, such as wij = f(i − j), where f is a function that decreases with |i − j| and satisfies three conditions: (a) 0 ≤ f(x) ≤ 1; (b) f(x) = f(−x); and (c) f(0) = 1. Under these conditions, larger weights (i.e., closer to 1) are assigned to pairs of categories that are closer to each other and smaller weights (i.e., closer to 0) are assigned to pairs of categories that are more distant from each other.

    Two such weighting systems based on column scores are commonly employed. Suppose the column scores are ordered, say C1 ≤ C2 ≤ ... ≤ Ck, for example the values 0, 1, ..., k−1. Then the Cicchetti-Allison weight and the Fleiss-Cohen weight in each cell of the k×k contingency table are computed as follows:
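    In their usual forms (those implemented, for example, in SAS PROC FREQ), the Cicchetti-Allison weights are linear and the Fleiss-Cohen weights are quadratic in the score differences:

    \[
    w_{ij} \;=\; 1-\frac{|C_i-C_j|}{C_k-C_1} \quad \text{(Cicchetti-Allison)}, \qquad
    w_{ij} \;=\; 1-\frac{(C_i-C_j)^2}{(C_k-C_1)^2} \quad \text{(Fleiss-Cohen)}
    \]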

    Example 2. If depression is categorized into three ranked levels as shown in Table 5, the agreement of the classification based on information provided by the probands with the classification based on information provided by other informants can be estimated using the unweighted kappa coefficient as follows:

    Applying the Cicchetti-Allison weights (shown in Table 5) to the unweighted formula generates a weighted kappa:

    Applying the Fleiss-Cohen weights (shown in Table 5) involves replacing the 0.5 weight in the above equation with 0.75 and results in a Kw of 0.4482. Thus, the weighted kappa coefficients have larger absolute values than the unweighted kappa coefficient. The overall result indicates only fair to moderate agreement between the two methods of classifying the level of depression. As seen in Table 5, the low agreement is partly due to the fact that a large number of subjects classified as having minor depression based on information from the proband were not identified as such using information from other informants.
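    Both weighting schemes are straightforward to implement; the Python/NumPy sketch below (an illustration rather than the authors' code) builds Cicchetti-Allison or Fleiss-Cohen weights from category scores 0, 1, ..., k-1 and applies equation (3) to a k×k table of counts.

    import numpy as np

    def weighted_kappa(counts, scheme="cicchetti-allison"):
        p = np.asarray(counts, dtype=float)
        p = p / p.sum()                                    # cell proportions p_ij
        k = p.shape[0]
        i, j = np.indices((k, k))                          # row and column scores 0..k-1
        if scheme == "cicchetti-allison":
            w = 1 - np.abs(i - j) / (k - 1)                # linear weights
        else:
            w = 1 - (i - j) ** 2 / (k - 1) ** 2            # Fleiss-Cohen (quadratic) weights
        chance = np.outer(p.sum(axis=1), p.sum(axis=0))    # p_i+ x p_+j under independence
        p_o = np.sum(w * p)                                # weighted observed agreement
        p_e = np.sum(w * chance)                           # weighted chance agreement
        return (p_o - p_e) / (1 - p_e)

    For a 2×2 table both schemes assign a weight of 0 to the single off-diagonal distance, so the function returns the ordinary (unweighted) kappa, consistent with the remark in section 5 that there is no distinct weighted kappa for binary scales.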

    6. Statistical Software

    Several statistical software packages, including SAS, SPSS, and Stata, can compute kappa coefficients. However, agreement data conceptually result in square tables with entries in all cells, so most software packages will not compute kappa if the agreement table is non-square, which can occur if one or both raters do not use all of the rating categories when rating subjects because of biases or small samples.

    Table 5. Three ranked levels of depression categorized based on information from the probands themselves or on information from other informants about the probands

    In some special circumstances the software packages will compute incorrect kappa coefficients if a square agreement table is generated despite the failure of both raters to use all rating categories. For example, suppose a scale for rater agreement has three categories, A, B, and C. If one rater only uses categories B and C, and the other only uses categories A and B, this could result in a square agreement table such as that shown in Table 6. This is a square table, but the rating categories represented by the rows are completely different from those represented by the columns. Clearly, kappa values generated using this table would not provide the desired assessment of rater agreement. To deal with this problem, the analyst must add zero counts for the rating categories not endorsed by each rater to create a square table with the correct rating categories, as shown in Table 7.
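    One way to build the zero-filled square table programmatically is sketched below in Python/pandas (the ratings and variable names are hypothetical and purely illustrative).

    import pandas as pd

    categories = ["A", "B", "C"]                  # the full rating scale
    rater1 = ["B", "C", "B", "C", "B"]            # hypothetical ratings; rater 1 never uses A
    rater2 = ["A", "B", "A", "B", "B"]            # hypothetical ratings; rater 2 never uses C

    table = pd.crosstab(pd.Series(rater1, name="rater1"),
                        pd.Series(rater2, name="rater2"))
    square = table.reindex(index=categories, columns=categories, fill_value=0)
    print(square)    # 3x3 table with zero counts for the categories each rater skipped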


    Table 6. Hypothetical example of incorrect agreement table that can occur when two raters on a three-level scale each only use 2 of the 3 levels

    Table 7. Adjustment of the agreement table (by adding zero cells) needed when two raters on a three-level scale each only use 2 of the 3 levels

    6.1 SAS

    In SAS, one may use PROC FREQ and specify the corresponding two-way table with the AGREE option. Here is sample code for Example 2 using PROC FREQ:

    PROC FREQ DATA = (the data set for the depression diagnosis study);
      TABLE (variable on result using proband) * (variable on result using other informants) / AGREE;
    RUN;

    PROC FREQ uses Cicchetti-Allison weights by default. One can specify (WT=FC) with the AGREE option to request weighted kappa coefficients based on Fleiss-Cohen weights. It is important to check the order of the levels and the weights used in computing the weighted kappa. SAS calculates weights for the weighted kappa based on unformatted values; if the variable of interest is not coded this way, one can either recode the variable or use a FORMAT statement and specify the "ORDER = FORMATTED" option. Also note that data for contingency tables are often recorded as aggregated data. For example, 10 subjects with the rating 'A' from the first rater and the rating 'B' from the second rater may be combined into one observation with a frequency variable of value 10. In such cases a WEIGHT statement, "WEIGHT (the frequency variable);", may be used to specify the frequency variable.

    6.2 SPSS

    In SPSS, kappa coefficients can only be computed when there are two levels in the rating scale, so it is not possible to compute weighted kappa coefficients. For a two-level rating scale such as that described in Example 1, one may use the following syntax to compute the kappa coefficient:

    CROSSTABS
      /TABLES=(variable on result using proband) BY (variable on result using other informants)
      /STATISTICS=KAPPA.

    An alternative, easier approach is to select the appropriate options from the SPSS menus:

    1. Click on Analyze, then Descriptive Statistics, then Crosstabs.

    2. Choose the variables for the row and column variables in the pop-up window for the crosstab.

    3. Click on Statistics and select the kappa checkbox.

    4. Click Continue or OK to generate the output for the kappa coefficient.

    7. Discussion

    In this paper we introduced the use of Cohen's kappa coefficient to assess between-rater agreement, which has the desirable property of correcting for chance agreement. We focused on cross-sectional studies with two raters, but extensions to longitudinal studies with missing values and to studies that use more than two raters are also available.[6] Cohen's kappa generally works well, but in some specific situations it may not accurately reflect the true level of agreement between raters.[7] For example, when both raters report a very high prevalence of the condition of interest (as in the hypothetical example shown in Table 2), some of the overlap in their diagnoses may reflect their common knowledge about the disease in the population being rated. This should be considered 'true' agreement, but it is attributed to chance agreement (i.e., kappa=0). Despite such limitations, the kappa coefficient is an informative measure of agreement that is widely used in clinical research and is appropriate in most circumstances.

    Cohen's kappa can only be applied to categorical ratings. When ratings are on a continuous scale, Lin's concordance correlation coefficient[8] is an appropriate measure of agreement between two raters, and the intraclass correlation coefficient[9] is an appropriate measure of agreement between multiple raters.

    Conflict of interest

    The authors declare no conflict of interest.

    Funding

    None.

    References

    1. Spitzer RL, Gibbon M, Williams JBW. Structured Clinical Interview for Axis I DSM-IV Disorders. Biometrics Research Department, New York State Psychiatric Institute; 1994

    2. Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas. 1960; 20(1): 37-46

    3. Duberstein PR, Ma Y, Chapman BP, Conwell Y, McGriff J, Coyne JC, et al. Detection of depression in older adults by family and friends: distinguishing mood disorder signals from the noise of personality and everyday life. Int Psychogeriatr. 2011; 23(4): 634-643. doi: http://dx.doi.org/10.1017/S1041610210001808

    4. Tang W, He H, Tu XM. Applied Categorical and Count Data Analysis. Chapman & Hall/CRC; 2012

    5. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977; 33: 159-174. doi: http://dx.doi.org/10.2307/2529310

    6. Ma Y, Tang W, Feng C, Tu XM. Inference for kappas for longitudinal study data: applications to sexual health research. Biometrics. 2008; 64: 781-789. doi: http://dx.doi.org/10.1111/j.1541-0420.2007.00934.x

    7. Feinstein AR, Cicchetti DV. High agreement but low kappa: I. The problems of two paradoxes. J Clin Epidemiol. 1990; 43(6): 543-549. doi: http://dx.doi.org/10.1016/0895-4356(90)90158-L

    8. Lin L. A concordance correlation coefficient to evaluate reproducibility. Biometrics. 1989; 45(1): 255-268. doi: http://dx.doi.org/10.2307/2532051

    9. Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychol Bull. 1979; 86(2): 420-428

    (received: 2015-01-28; accepted: 2015-02-04)

    Dr. Tang is a Research Associate Professor of Biostatistics in the Department of Biostatistics at the University of Rochester. His research interests are in semi-parametric modeling of longitudinal data with missing values, smoothing methods, and categorical and count data analysis and applications of statistical methods to psychosocial research. Dr. Tang received his PhD in Mathematics from the Department of Mathematics at the University of Rochester in 2004.


    Summary: In mental health and psychosocial studies it is often necessary to report on the between-rater agreement of measures used in the study. This paper discusses the concept of agreement, highlighting its fundamental difference from correlation. Several examples demonstrate how to compute the kappa coefficient - a popular statistic for measuring agreement - both by hand and by using statistical software packages such as SAS and SPSS. Real study data are used to illustrate how to use and interpret this coefficient in clinical research and practice. The article concludes with a discussion of the limitations of the coefficient.

    [Shanghai Arch Psychiatry. 2015; 27(1): 62-67. doi: 10.11919/j.issn.1002-0829.215010]

    1Department of Biostatistics and Computational Biology, University of Rochester, Rochester, NY, United States

    2College of Basic Science and Information Engineering, Yunnan Agricultural University, Kunming, Yunnan Province, China

    3Department of Biostatistics, St. Jude Children’s Research Hospital, Memphis, TN, United States

    4Value Institute, Christiana Care Health System, Newark, DE, United States

    5Center of Excellence for Suicide Prevention, Canandaigua VA Medical Center, Canandaigua, NY, United States

    *correspondence: wan_tang@urmc.rochester.edu


    The full Chinese text of this article is available for free viewing and download at www.shanghaiarchivesofpsychiatry.org/cn as of March 25, 2015.
