
    Variable selection strategies and its importance in clinical prediction modelling

Family Medicine and Community Health, 2020, Issue 1

Mohammad Ziaul Islam Chowdhury, Tanvir C Turin

ABSTRACT Clinical prediction models are used frequently in clinical practice to identify patients who are at risk of developing an adverse outcome so that preventive measures can be initiated. A prediction model can be developed in a number of ways; however, an appropriate variable selection strategy needs to be followed in all cases. Our purpose is to introduce readers to the concept of variable selection in prediction modelling, including the importance of variable selection and variable reduction strategies. We will discuss the various variable selection techniques that can be applied during prediction model building (backward elimination, forward selection, stepwise selection and all possible subset selection), and the stopping rules/selection criteria in variable selection (p values, Akaike information criterion, Bayesian information criterion and Mallows’ Cp statistic). This paper focuses on the importance of including appropriate variables, following the proper steps, and adopting the proper methods when selecting variables for prediction models.

INTRODUCTION

Prediction models play a vital role in establishing the relation between the variables used in a particular model and the outcome, and help forecast the future course of a proposed outcome. A prediction model can provide information on the variables that determine the outcome, their strength of association with the outcome, and can predict the future of an outcome using their specific values. Prediction models have countless applications in diverse areas, including clinical settings, where a prediction model can help with detecting or screening high-risk subjects for asymptomatic diseases (to help prevent developing diseases with early interventions), predicting a future disease (to help facilitate patient-doctor communication based on more objective information), assisting in medical decision-making (to help both doctors and patients make an informed choice regarding treatment) and assisting healthcare services with planning and quality management.

Different methodologies can be applied to build a prediction model, and these techniques can be classified broadly into two categories: mathematical/statistical modelling and computer-based modelling. Regardless of the modelling technique used, one needs to apply appropriate variable selection methods during the model building stage. Selecting appropriate variables for inclusion in a model is often considered the most important and difficult part of model building. In this paper, we will discuss what is meant by variable selection, why variable selection is important, and the different methods for variable selection along with their advantages and disadvantages. We have also used examples of prediction models to demonstrate how these variable selection methods are applied in model building. The concept of variable selection is heavily statistical, and general readers may not be familiar with many of the concepts discussed in this paper. However, we have attempted to present a non-technical discussion of the concept in plain language that should be accessible to readers with a basic level of statistical understanding. This paper will be helpful for those who wish to be better informed about variable selection in prediction modelling, have more meaningful conversations with biostatisticians/data analysts about their project, or select an appropriate method for variable selection in model building using the information provided in this paper. Our intention is to provide readers with a basic understanding of this extremely important topic to assist them when developing a prediction model.

BASIC PRINCIPLES OF VARIABLE SELECTION IN CLINICAL PREDICTION MODELLING

The concept of variable selection

Variable selection means choosing among many variables which ones to include in a particular model, that is, selecting appropriate variables from a complete list of variables by removing those that are irrelevant or redundant.1 The purpose of such selection is to determine a set of variables that will provide the best fit for the model so that accurate predictions can be made. Variable selection is one of the most difficult aspects of model building. It is often advised that variable selection should be focused more on clinical knowledge and previous literature than on statistical selection methods alone.2 Data often contain many additional variables that are not ultimately used in model development.3 Selection of appropriate variables should be undertaken carefully to avoid including noise variables in the final model.

Importance of variable selection

Due to rapid digitalisation, big data (a term frequently used to describe a collection of data that is extremely large in size, is complex and continues to grow exponentially with time) have emerged in healthcare and become a critical source of the data that have helped conceptualise precision public health and precision medicine approaches. At its simplest level, precision health involves applying appropriate statistical modelling based on available clinical and biological data to predict patient outcomes more accurately. Big data sets contain thousands of variables, which makes them difficult to handle and manage efficiently using traditional approaches. Consequently, variable selection has become the focus of much research in different areas, including health. Variable selection offers many benefits, such as improving the performance of models in terms of prediction, delivering variables more quickly and cost-effectively by reducing training and utilisation time, facilitating data visualisation and offering an overall better understanding of the underlying process that generated the data.4

There are many reasons why variables should be selected, including practicality issues. It is not practical to use a large set of variables in a model. Information involving a large number of variables may not be available for all patients or may be costly to collect. Some variables also may have a negligible effect on the outcome and can therefore be excluded. Having fewer variables in the model means less computational time and complexity.5 According to the principle of parsimony, simple models with fewer variables are preferred over complex models with many variables. Many variables in the model make the model more dependent on the observed data.6 Simple models are easier to interpret, generalise and use in practice.7 However, one needs to ensure that important variables are not excluded from the simple model.

There is no set rule as to the number of variables to include in a prediction model, as it often depends on several factors. The ‘one in ten rule’, a rule that stipulates how many variables/parameters can be estimated from a data set, is quite popular in traditional clinical prediction modelling strategies (eg, logistic regression and survival models). According to this rule, one variable can be considered in a model for every 10 events.8 9 To illustrate, if information for 500 patients is available in a data set and 40 patients die (events) during the study/follow-up period, the ‘one in ten rule’ implies that four variables can be considered reliably in a model predicting mortality to give a good fit. Other rules also exist, such as the ‘one in twenty rule’,10 ‘one in fifty rule’11 or ‘five to nine events per variable rule’,12 depending on the research question(s). Peduzzi et al9 13 suggested 10-15 events per variable for logistic and survival models to produce reasonably stable estimates. While there are many different rules, these rules are only approximations, and there are situations where fewer or more observations than suggested are needed.14 If more variables are included in a prediction model than the sample data can support, the issue of overfitting (achieving overly optimistic results that do not really exist in the population, and hence failing to replicate in another sample) may arise, and prediction outside the training data (the data used to develop the model) will not be useful. Having too many variables (with respect to the number of observations in the data set) in a model will produce relations between the variables and the outcome that exist only in that particular data set but not in the true population, and the power (the probability of detecting an effect when the effect is truly there) to detect the true relationships will be reduced.14 Including too many variables in a model may deliver results that appear important but may not be so in the true population context.14 There are examples where prediction models developed using too many candidate variables in a small data set perform poorly when applied to an external data set.15 16
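The arithmetic behind these events-per-variable rules is simple; the short Python sketch below, which is purely illustrative, applies them to the worked example above (500 patients, 40 events):

```python
# Events-per-variable (EPV) rules applied to the worked example above:
# a data set with 500 patients, of whom 40 die (events) during follow-up.
events = 40

print(events // 10)  # 'one in ten' rule: 4 variables, as in the text
print(events // 20)  # stricter 'one in twenty' rule: 2 variables
print(events // 5)   # lenient end of 'five to nine events per variable': 8
```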

    Existing theory and literature, as well as experience and clinical knowledge, provide a general idea as to which candidate variables should be considered for inclusion in a prediction model. Nevertheless, the actual variables used in the final prediction model should be determined by analysing the data. Determining the set of variables for the final model is called variable selection. Variable selection serves two purposes. First, it helps determine all of the variables that are related to the outcome, which makes the model complete and accurate. Second, it helps select a model with few variables by eliminating irrelevant variables that decrease the precision and increase the complexity of the model. Ultimately, variable selection provides a balance between simplicity and fit. Figure 1 describes the steps to follow in variable selection during model building.

Variable reduction strategies

One way to restrict the list of potential variables is to choose the candidate variables first, particularly if the sample is small. Candidate variables for a specific topic are those that have demonstrated previous prognostic performance with the outcome.17 Candidate variables for a specific topic can be selected based on subject matter knowledge before a study begins. This can be achieved by reviewing the existing literature on the topic and consulting with experts in the area.7 In addition, systematic reviews and meta-analyses can be performed to identify candidate variables. With respect to systematic reviews, counting the number of times a variable was found important/significant in the different studies has been shown to be helpful in identifying candidate variables.7

    Figure 1 Variable selection steps. AIC, Akaike information criterion; BIC, Bayesian information criterion.

Grouping/combining similar, related variables based on subject knowledge and statistical technique can also help restrict the number of variables. If variables are strongly correlated, combining them into a single variable has been considered prudent.7 For example, systolic blood pressure and diastolic blood pressure are strongly correlated. In choosing between the two, mean blood pressure may be a better option than selecting either one of them individually.7 However, it has also been argued that variables that are highly correlated should be excluded a priori, as they provide little independent information.17 18 Removing a correlated variable should not affect the performance of the model, as it measures the same underlying information as the variable to which it correlates.5 Ultimately, both combining correlated variables and excluding them beforehand help restrict the number of variables.

How variables are distributed can also provide an indication of which ones to restrict. Variables that have a large number of missing values can be excluded, because imputing a large number of missing values will seem suspicious to many readers due to the lack of reliable estimation, a problem that may recur in applications of the model.7 17 Often, 5-20 candidate variables are sufficient to build an adequate prediction model.7 Nevertheless, care must be taken in restricting variables, as one drawback is that certain variables and their effects may be excluded from the prediction model.
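As a rough illustration of these two reduction strategies, the sketch below screens a hypothetical pandas DataFrame of candidate variables by missingness and flags strongly correlated pairs for combining or a priori exclusion. The function name and the 20% missingness and 0.8 correlation cut-offs are illustrative assumptions, not values taken from the text.

```python
import pandas as pd

def reduce_candidates(df: pd.DataFrame, max_missing=0.20, corr_cutoff=0.80):
    """Screen candidate variables before formal selection (illustrative cut-offs)."""
    # Drop variables whose fraction of missing values exceeds the threshold
    kept = df.loc[:, df.isna().mean() <= max_missing]
    # Flag strongly correlated pairs, e.g. systolic/diastolic pressure,
    # as candidates for combining (mean blood pressure) or a priori exclusion
    corr = kept.corr().abs()
    cols = corr.columns
    pairs = [(a, b) for i, a in enumerate(cols)
             for b in cols[i + 1:] if corr.loc[a, b] > corr_cutoff]
    return kept, pairs
```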

Variable selection methods

Once the number of potential candidate variables has been identified from the list of all available variables in the data set, a further selection of variables is made for inclusion in the final model. There are different ways of selecting variables for a final model; however, there is no consensus on which method is the best.17 There are recommendations that all candidate variables should be included in the model, an approach called the full model approach.17 A model developed using the full model approach has advantages: the problem of selection bias is absent, and the SEs and p values of the variables are correct.17 However, due to practical reasons and the difficulties involved in defining a full model, it often is not possible to take the full model approach.17

It has also been suggested that variable selection should start with the univariate analysis of each variable.6 Variables that show significance (p<0.25) in the univariate analysis, as well as those that are clinically important, should be included for multivariate analysis.6 Nevertheless, univariate analysis ignores the fact that individual variables that are weakly associated with the outcome can contribute significantly when they are combined.6 This issue can be solved partially by setting a higher significance level to allow more variables to show significance in the univariate analysis.6 In general, when there are many candidate variables available and there is confusion or uncertainty regarding which variables to consider in the final model development, formal variable selection methods should be followed. Outlined below are four major variable selection methods: backward elimination, forward selection, stepwise selection and all possible subset selection, along with a discussion of their pros and cons.
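A minimal sketch of this univariate screening step, assuming a hypothetical pandas DataFrame X of candidate variables and a binary outcome y, and using the liberal p<0.25 threshold mentioned above with statsmodels logistic regression (one possible implementation, not the only one):

```python
import statsmodels.api as sm

def univariate_screen(X, y, alpha=0.25):
    """Keep variables that show significance (p < alpha) in
    single-variable logistic models, per the liberal screen above."""
    selected = []
    for col in X.columns:
        fit = sm.Logit(y, sm.add_constant(X[[col]])).fit(disp=0)
        if fit.pvalues[col] < alpha:
            selected.append(col)
    # Clinically important variables should still be added back by judgment
    return selected
```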

    Backward elimination

Backward elimination is the simplest of all variable selection methods. This method starts with a full model that considers all of the variables to be included in the model. Variables are then deleted from the full model one by one until all remaining variables are considered to have some significant contribution to the outcome.1 The variable with the smallest test statistic (a measure of the variable’s contribution to the model) below the cut-off value, or with the highest p value above the cut-off value, that is, the least significant variable, is deleted first. Then the model is refitted without the deleted variable, and the test statistics or p values are recomputed. Again, the variable with the smallest test statistic, or with the highest p value greater than the cut-off value, is deleted from the refitted model. This process is repeated until every remaining variable is significant at the cut-off value. The cut-off value associated with the p value is sometimes referred to as ‘p-to-remove’ and does not have to be set at 0.05.
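The procedure can be sketched as follows, assuming the same hypothetical X and y as above and statsmodels logistic regression; the default ‘p-to-remove’ of 0.05 is the conventional choice, though, as the text notes, it need not be set there:

```python
import statsmodels.api as sm

def backward_elimination(X, y, p_remove=0.05):
    """Start from the full model; repeatedly drop the least significant
    variable until every remaining p value is below p_remove."""
    cols = list(X.columns)
    while cols:
        fit = sm.Logit(y, sm.add_constant(X[cols])).fit(disp=0)
        pvals = fit.pvalues.drop('const')  # ignore the intercept
        worst = pvals.idxmax()             # least significant variable
        if pvals[worst] < p_remove:
            break                          # every variable now contributes
        cols.remove(worst)                 # delete it and refit
    return cols
```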

Kshirsagar et al19 developed a hypertension prediction model for middle-aged and older adults using data from two community-based cohorts in the USA. The purpose of the study was to develop a simple prediction model/score with easy and routinely available variables. The model was developed using 7610 participants and eight variables (age, level of systolic and diastolic blood pressure, smoking, family history of hypertension, diabetes mellitus, female sex, high body mass index (BMI), lack of exercise). Candidate variables were selected based on the scientific literature and numeric evidence. One of the data sets did not have information on a specific variable (family history of hypertension) used in the final model. Values for this variable were imputed; however, this approach is not ideal and often not recommended,7 as imputing a large number of missing values can raise questions as to the acceptability and accuracy of the outcome. The study applied a backward elimination variable selection technique to select variables for the final model with a conventional p value threshold of 0.05. The study found that some important variables did not contribute independently to the outcome following multivariate adjustment. Setting a higher threshold for the p value and giving priority to clinical reasoning in selecting variables, along with statistical significance, perhaps would have allowed more important variables to be entered into the model.

While a set of variables can have significant predictive ability, a particular subset of them may not. Unfortunately, both forward selection and stepwise selection lack the capacity to identify less predictive individual variables that may not enter the model to demonstrate their joint behaviour. However, backward elimination has the advantage of assessing the joint predictive ability of variables, as the process starts with all variables included in the model. Backward elimination also removes the least important variables early on and leaves only the most important variables in the model. One disadvantage of the backward elimination method is that once a variable is eliminated from the model, it is not re-entered. However, a dropped variable may become significant later in the final model.

    Forward selection

The forward selection method of variable selection is the reverse of the backward elimination method. The method starts with no variables in the model, then adds variables one by one until no variable excluded from the model can add any significant contribution to the outcome.1 At each step, each variable excluded from the model is tested for inclusion. When an excluded variable is added to the model, its test statistic or p value is calculated. The variable with the largest test statistic greater than the cut-off value, or the lowest p value less than the cut-off value, is selected and added to the model. In other words, the most significant variable is added first. The model is then refitted with this variable, and test statistics or p values are recomputed for all remaining variables. Again, the variable with the largest test statistic greater than the cut-off value, or the lowest p value less than the cut-off value, is chosen from among the remaining variables and added to the model. This process continues until no remaining variable is significant at the cut-off level when added to the model. In forward selection, once a variable is added to the model, it remains there.1
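A matching sketch of forward selection under the same assumptions (hypothetical DataFrame X, binary outcome y, illustrative ‘p-to-enter’ threshold):

```python
import statsmodels.api as sm

def forward_selection(X, y, p_enter=0.05):
    """Start empty; at each step add the most significant excluded
    variable, stopping when none passes p_enter."""
    selected, remaining = [], list(X.columns)
    while remaining:
        # p value of each excluded variable when added to the current model
        pvals = {c: sm.Logit(y, sm.add_constant(X[selected + [c]]))
                     .fit(disp=0).pvalues[c] for c in remaining}
        best = min(pvals, key=pvals.get)   # lowest p value among candidates
        if pvals[best] >= p_enter:
            break                          # no excluded variable qualifies
        selected.append(best)              # once added, it remains (forward rule)
        remaining.remove(best)
    return selected
```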

Dang et al20 developed a predictive model (BariWound) for incisional surgical site infections (SSI) within 30 days of bariatric surgery. The objective was to construct a clinically useful prediction model to stratify individuals into different risk groups (eg, very high, high, medium and low). A clinically rich database, the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program, was used to develop the prediction model. An initial univariate screen was performed to identify baseline variables that were significantly associated (p<0.05) with the outcome of 30-day SSI. Variables were then checked further for clinical relevance to the outcome. Finally, a forward selection procedure (p<0.01) was applied among the variables selected in the univariate screen to build the prediction model. A total of nine variables (procedure type, chronic steroid or immunosuppressant use, gastro-oesophageal reflux disease, obstructive sleep apnoea, sex, type 2 diabetes, hypertension, operative time and BMI) identified through forward selection were included in the final model. As mentioned earlier, a p value threshold of 0.05 in univariate screening and of 0.01 in forward selection is a concern, as it creates the chance of missing some important variables in the model.

One advantage of forward selection is that it starts with smaller models. The procedure is also less susceptible to collinearity (very high intercorrelations or interassociations among independent variables). Like backward elimination, forward selection has drawbacks: inclusion of a new variable may make an existing variable in the model non-significant, yet the existing variable cannot be deleted from the model. A balance between backward elimination and forward selection is therefore required, which can be achieved in stepwise selection.

    Stepwise selection

Stepwise selection is a widely used variable selection technique, particularly in medical applications. This method is a combination of the forward and backward selection procedures that allows moving in both directions, adding and removing variables at different steps. The process can start with either a backward elimination or a forward selection approach. For example, if stepwise selection starts with forward selection, variables are added to the model one at a time based on statistical significance. At each step, after a variable is added, the procedure checks all the variables already added to the model and deletes any variable that is no longer significant in the model. The process continues until every variable in the model is significant and every excluded variable is insignificant. Due to this similarity, the approach is sometimes considered a modified forward selection. However, it differs from forward selection in that variables entered into the model do not necessarily remain in the model. If stepwise selection instead starts with backward elimination, variables are deleted from the full model based on statistical significance and then added back if they later appear significant. The process is a rotation of choosing the least significant variable to drop from the model and then reconsidering all dropped variables to re-enter the model. Stepwise selection requires two separate significance levels (cut-offs) for adding and deleting variables from the model. The significance level for adding variables should be less than the significance level for deleting variables so that the procedure does not get into an infinite loop. Within stepwise selection, backward elimination is often given preference, as in backward elimination the full model is considered and the effect of all candidate variables is assessed.7
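Combining the two procedures gives a sketch like the following, under the same assumptions as before; note that p_enter is set below p_remove, as the text requires, so the loop cannot cycle indefinitely (the specific thresholds are illustrative):

```python
import statsmodels.api as sm

def stepwise_selection(X, y, p_enter=0.05, p_remove=0.10):
    """Alternate a forward add step with a backward drop step until
    neither changes the model (p_enter < p_remove avoids infinite loops)."""
    selected = []
    while True:
        changed = False
        # Forward step: add the most significant excluded variable
        excluded = [c for c in X.columns if c not in selected]
        if excluded:
            pvals = {c: sm.Logit(y, sm.add_constant(X[selected + [c]]))
                         .fit(disp=0).pvalues[c] for c in excluded}
            best = min(pvals, key=pvals.get)
            if pvals[best] < p_enter:
                selected.append(best)
                changed = True
        # Backward step: drop a variable that became non-significant
        if selected:
            fit = sm.Logit(y, sm.add_constant(X[selected])).fit(disp=0)
            pvals = fit.pvalues.drop('const')
            worst = pvals.idxmax()
            if pvals[worst] >= p_remove:
                selected.remove(worst)
                changed = True
        if not changed:
            return selected
```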

Chien et al21 developed a new prediction model for hypertension risk in the Chinese population. A prospective cohort of 2506 ethnic Chinese community individuals in Taiwan was used to develop the model. Two different models, a clinical model with five variables and a biochemical model with eight variables, were developed. The objective was to identify high-risk Chinese community individuals with hypertension risk using the newly developed model. The variables for the model were selected using the stepwise selection method, the most common method for variable selection, which permits using both forward and backward procedures iteratively in model building. Generally, to apply a stepwise selection procedure, a set of candidate variables needs to be identified first. However, information about the candidate variables and the number of variables considered in stepwise selection was absent from this study. Although it was indicated that the selected variables were statistically associated with the risk of hypertension, without a discussion of the potential candidate variables, how variables were selected and how many were included in the model, the reader is left uninformed about the variable selection process, which raises concern about the reliability of the finally selected variables. Moreover, setting a higher significance level is strongly recommended in stepwise selection to allow more variables to be included in the model. A significance level of only 0.05 was used in this study, and that cut-off value can sometimes miss important variables in the model. This likely happened in this study, as an important variable, ‘gender’, was forcefully entered into the biochemical model even though it did not appear significant at the 0.05 level. Alternatively, the study could have used the Akaike information criterion (AIC) or Bayesian information criterion (BIC) (discussed later), which often provide the most parsimonious model.

The stepwise selection method is perhaps the most widely used method of variable selection. One reason is that it is easy to apply in statistical software.7 This method allows researchers to examine models with different combinations of variables that otherwise may be overlooked.6 The method is also comparatively objective, as the same variables are generally selected from the same data set even when different persons conduct the analysis. This helps reproduce the results and validate the model.7 There are also disadvantages to using the stepwise selection method. Variable selection can be unstable if a different sample is used; however, a large effective sample size (50 events per variable) can help overcome this issue.6 The p values obtained by this method are also in doubt, as so many multiple tests occur during the selection process. If there are too many candidate variables, the method can fail to provide the best model, as some irrelevant variables may be entered into the model.16 The regression coefficients obtained by this method are also biased. The method can also prevent researchers from thinking about the problem.1 There is also criticism that stepwise and other automated variable selection processes can generate biologically implausible models.6 Collinearity is often considered a serious issue in stepwise variable selection. Variables that best describe a particular data set are chosen by the stepwise procedure due to their high-magnitude coefficients for that data set, not necessarily for the underlying population. If there are two highly correlated variables that contribute equally to the outcome, there is a good chance that both of the correlated variables will be left out of the model in stepwise selection if they are individually less significant than other, non-correlated variables. Conversely, if one of the two correlated variables contributes substantially better to the outcome for a particular data set and thus appears in the model, the estimate of its coefficient can be much higher in magnitude than its true population value. Additionally, potentially valuable information from its correlated variable can be lost, and the results become less generalisable.

    All possible subset selection

In all possible subset selection, every possible combination of variables is checked to determine the best subset of variables for the prediction model. With this procedure, all one-variable, two-variable, three-variable models, and so on, are built to determine which one is the best according to some specific criteria. If there are K variables, then 2^K possible models can be built.

Holden et al22 developed a model to identify the variables (the combination of perceptions) that best predict bar-coded medication administration (BCMA) acceptance (intention to use, satisfaction) using cross-sectional survey data among registered nurses in the Midwest United States. An all possible subset selection procedure was used to identify the combinations of variables that model BCMA acceptance most efficiently. Two different models were constructed. In model 1, the outcome of acceptance was nurses’ behavioural intention to use BCMA, while in model 2, the outcome of acceptance was nurses’ satisfaction with BCMA. A set of nine theory-based candidate variables (seven perception and two demographic) were assessed for inclusion in the models. To determine the optimal set of variables for the models, investigators assessed every combination of the models generated by an all possible subset selection procedure using five different measures. After comparing the various models according to these five measures, the best model was selected. Application of an all possible subset selection procedure was feasible here due to the small number of candidate variables.

The ability to identify a combination of variables, which is not available in other selection procedures, is an advantage of this method.7 Among the disadvantages, computing can be an issue in an all possible subset selection procedure, as the number of possible subsets can be huge and many models can be produced, particularly when the number of variables is large. In addition, an all possible subset selection procedure can produce models that are too small23 or overfitted due to examining many models with multiple testing.7 Further, a selection criterion needs to be specified in advance.
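A sketch of the exhaustive search, under the same hypothetical X and y and scored here by AIC for concreteness (the Holden study compared models on five different measures); the 2^K model count makes this feasible only for a small number of candidates:

```python
from itertools import combinations
import statsmodels.api as sm

def all_subset_selection(X, y):
    """Fit every non-empty subset of the K candidate variables
    (2^K - 1 models) and keep the one with the lowest AIC."""
    best_aic, best_subset = float('inf'), None
    for k in range(1, len(X.columns) + 1):
        for subset in combinations(X.columns, k):
            fit = sm.Logit(y, sm.add_constant(X[list(subset)])).fit(disp=0)
            if fit.aic < best_aic:
                best_aic, best_subset = fit.aic, subset
    return best_subset, best_aic
```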

    Stopping rule/selection criteria in variable selection

In all stepwise selection methods, including all possible subset selection, a stopping rule or selection criterion for inclusion or exclusion of variables needs to be set. Generally, a standard significance level for hypothesis testing is used.7 However, other criteria are also frequently used as stopping rules, such as the AIC, BIC or Mallows’ Cp statistic. We discuss these major selection criteria below.

    P values

If the stopping rule is based on p values, the traditional choice for the significance level is 0.05 or 0.10. However, the optimum value of the significance level for deciding which variables to include in the model has been suggested to be 1, which exceeds the traditional choices.18 This suggestion assumes the absence of a few strong variables or completely irrelevant variables in the data.18 In reality, some strong and some irrelevant variables always exist for the outcome. In such a situation, a significance level of 0.50 is proposed, which allows some variables to exit during the selection process.18 There is also a strong recommendation for using a p value in the range of 0.15-0.20,6 although using a higher significance level has the disadvantage that some unimportant variables may be included in the model.6 However, we believe a higher significance level for variable selection should be considered so that important variables relevant to the outcome are not missed and less significant variables that may have practical and clinical reasoning behind them are not deleted.

    Akaike information criterion

AIC is a tool for model selection that compares different models. Including different variables in the model provides different models, and AIC attempts to select the model by balancing underfitting (too few variables in the model) and overfitting (too many variables in the model).24 Including too few variables often fails to capture the true relation, and too many variables create a generalisability problem.25 A trade-off is therefore required between simplicity and adequacy of model fitting, and AIC can help achieve this.26 A model cannot precisely represent the true relation that exists in the data, as there is some information loss in estimating the true relation through modelling. AIC tries to estimate that relative information loss compared with other candidate models. The quality of a model is believed to be better with smaller information loss, and it is important to select the model that best minimises that loss. Candidate models for the specific data are ranked from best to worst according to the value of AIC.24 Among the available models for the specific data, the model with the minimum AIC is best.26

AIC only provides information about the quality of a model relative to other models and does not provide information on the absolute quality of the model. With a sample size that is small relative to the number of parameters/variables, AIC often provides models with too many variables. However, this issue can be solved with a modified version of AIC called AICc, which introduces an extra penalty term for the number of variables/parameters. For a large sample size, this penalty term approaches zero and AICc subsequently converges to AIC, which is why it is suggested that AICc be used in practice.24
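The underlying formulas are simple; for a model with maximised log-likelihood ln(L), k estimated parameters and n observations, a small sketch of both quantities:

```python
def aic(loglik, k):
    """AIC = 2k - 2 ln(L): a penalty of 2 per estimated parameter."""
    return 2 * k - 2 * loglik

def aicc(loglik, k, n):
    """Small-sample correction; the extra penalty vanishes as n grows,
    so AICc converges to AIC for large samples (hence the advice above)."""
    return aic(loglik, k) + 2 * k * (k + 1) / (n - k - 1)
```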

    Bayesian information criterion

BIC is another variable selection criterion that is similar to AIC, but with a different penalty for the number of variables (parameters) included in the model. Like AIC, BIC also balances simplicity and goodness of model fit. In practice, for a given data set, BIC is calculated for each of the candidate models, and the model corresponding to the minimum BIC value is chosen. BIC often chooses models that are more parsimonious than AIC, as BIC penalises bigger models more due to the larger penalty term inherent in its formula.27
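In formula terms, using the same notation as the AIC sketch above, BIC’s penalty grows with the sample size and exceeds AIC’s factor of 2 once n is 8 or more, which is what drives its preference for smaller models:

```python
import math

def bic(loglik, k, n):
    """BIC = k ln(n) - 2 ln(L); ln(n) > 2 for n >= 8, so BIC penalises
    each extra parameter more heavily than AIC in all but tiny samples."""
    return k * math.log(n) - 2 * loglik
```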

Although there are similarities between AIC and BIC, and both criteria balance simplicity and model fit, differences exist between them. The underlying theory behind AIC is that the data stem from a very complex model, there are many candidate models to fit the data, and none of the candidate models (including the best model) is the exact functional form of the true model.25 In addition, the number of variables (parameters) in the best model may not include all variables (parameters) in the true model.25 In other words, a best model is only an approximation of the true model, and a true model that perfectly represents reality does not exist.24 Conversely, the underlying theory behind BIC is that the data are derived from a simple model and there exists a candidate model that represents the true model.25 Depending on the situation, however, each criterion has an advantage over the other. Many studies have compared AIC and BIC and recommended which one to use. If the objective is to select the best model that will provide maximum predictive accuracy, then AIC is superior (because there is no true model, and the best model is selected to maximise predictive accuracy and represent an approximate true relation). However, if the goal is to select a correct model that is consistent, then BIC is superior (because BIC consistently selects the correct model from among the candidate models that best represent the true model).25 For large data sets, the performance of both criteria improves, but with different objectives.25

    Mallows’ Cp statistic

Mallows’ Cp statistic is another criterion used in variable selection. The purpose of the statistic is to select the best model using a subset of variables from all available variables. This criterion is most widely used in the all possible subset selection method. The different models derived in all subset selection are compared based on Mallows’ Cp statistic, and the model with the lowest Cp value close to the number of variables plus the constant (ie, the number of parameters, p) is often chosen. A small Mallows’ Cp value near the number of parameters indicates that the model is relatively more precise than other models (small variance and less bias).28
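In formula form, for a candidate model with p parameters (including the intercept), residual sum of squares SSE_p, and the residual variance estimate s^2 from the full model, a minimal sketch:

```python
def mallows_cp(sse_p, s2_full, n, p):
    """Cp = SSE_p / s^2 - n + 2p. Models whose Cp is small and close
    to p are relatively precise (small variance, little bias)."""
    return sse_p / s2_full - n + 2 * p
```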

CONCLUSION

It is extremely important to include appropriate variables in prediction modelling, as a model’s performance largely depends on which variables are ultimately included in the model. Failure to include the proper variables in the model yields inaccurate results, and the model will fail to capture the true relation that exists in the data between the outcome and the selected variables. There are numerous occasions when prediction models are developed without following the proper steps or adopting the proper methods of variable selection. Researchers need to be more aware of and cautious about these very important aspects of prediction modelling.

Twitter Tanvir C Turin @drturin

Contributors TCT and MZIC developed the study idea. MZIC prepared the manuscript with critical intellectual inputs from TCT. The manuscript has been finalised by MZIC and TCT.

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests None declared.

Patient consent for publication Not required.

Provenance and peer review Not commissioned; externally peer reviewed.

Data availability statement There are no data in this work.

Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made are indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
