
    Recognizing Breast Cancer Using Edge-Weighted Texture Features of Histopathology Images

Computers, Materials & Continua, 2023, Issue 10

Arslan Akram, Javed Rashid, Fahima Hajjej, Sobia Yaqoob, Muhammad Hamid, Asma Arshad and Nadeem Sarwar

1Department of Computer Science and Information Technology, Superior University, Lahore, 54000, Pakistan

2MLC Lab, Maharban House, House #209, Zafar Colony, Okara, 56300, Pakistan

3Information Technology Services, University of Okara, Okara, 56300, Pakistan

4Department of CS&SE, International Islamic University, Islamabad, 44000, Pakistan

5Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, 11671, Saudi Arabia

6Department of Computer Science, University of Okara, Okara, 56300, Pakistan

7Department of Statistics and Computer Science, University of Veterinary and Animal Sciences, Lahore, Punjab, 54000, Pakistan

8School of Biochemistry and Biotechnology, University of the Punjab, Lahore, 54000, Pakistan

9Department of Computer Science, Bahria University, Lahore Campus, Lahore, 54600, Pakistan

ABSTRACT Around one in eight women will be diagnosed with breast cancer at some point in their lives. Improved patient outcomes require both early detection and an accurate diagnosis. Histological images are routinely used to diagnose breast cancer. Methods proposed in recent research classify breast cancer only at specific magnification levels; no study has used a combined dataset spanning multiple magnification levels. This investigation presents a strategy for detecting breast cancer that applies the wavelet transform to the texture of histopathology images. The proposed method converts histopathological images from the Red-Green-Blue (RGB) color space to the luminance-chrominance (YCbCr) color space, extracts texture information with a wavelet transform, and classifies the images with Extreme Gradient Boosting (XGBOOST). Because the dataset contains imbalanced samples, SMOTE has been used for resampling. The suggested method is evaluated using 10-fold cross-validation and achieves an accuracy of 99.27% on the BreakHis 1.0 40X dataset, 98.95% on the BreakHis 1.0 100X dataset, 98.92% on the BreakHis 1.0 200X dataset, 98.78% on the BreakHis 1.0 400X dataset, and 98.80% on the combined dataset. The findings imply that combining wavelet transformation with textural signals to detect breast cancer in histopathology images can improve detection rates and patient outcomes.

KEYWORDS Benign and malignant; color conversion; wavelet domain; texture features; XGBOOST

    1 Introduction

Cancer incidence continues to rise, making it the leading cause of death worldwide. Breast cancer is the second most common cancer in women, which makes it an important global health concern. The World Health Organization estimated approximately 2.3 million new cases of breast cancer worldwide in 2020, and breast cancer ranks sixth among causes of female fatalities. Breast cancer mortality rates are not uniform: rates are higher in wealthy countries than in less developed ones because of differences in nutrition, exercise, and reproduction rates. With an estimated 284,200 new cases and 44,130 deaths in 2021, breast cancer is a leading cause of death among American women [1]. Breast cancer mortality rates in developed nations have been falling over the past few decades as diagnosis and treatment have improved. Despite this, breast cancer remains a major health concern worldwide, particularly in underdeveloped countries with scarce diagnostic and therapeutic options. Because early identification increases survival rates, mammography and other imaging screening should begin at age 40 for women of average risk; women with a family history or other risk factors may need more frequent testing. Early breast cancer staging is essential to increase the chances of successful therapy and rapid recovery. Accurate diagnosis and staging permit prompt intervention with surgery, radiation therapy, chemotherapy, or any combination thereof, and are therefore central to improved treatment outcomes.

Technology has made cancer detection more sensitive and accurate. X-rays, Magnetic Resonance Imaging (MRI), Computed Tomography (CT), ultrasound, biopsy, and laboratory testing are used to identify cancer. A biopsy involves evaluating a small tissue sample from the suspected location under a microscope for cancer cells. Blood and tumor-marker testing can also detect cancer cells or cancer-related chemicals. Conventional cancer detection technologies have drawbacks: imaging may miss tiny tumors, and biopsy and laboratory testing may give false positive or negative results. Liquid biopsy can detect cancer cells or DNA fragments in the blood; this non-invasive approach may diagnose cancer earlier and assess therapy response. Common machine learning methods, including Support Vector Machines (SVM), Random Forests, and K-Nearest Neighbors (KNN), have all been used for breast cancer classification [2,3]. These algorithms' statistical and mathematical foundations allow useful insights to be extracted from seemingly unconnected information. However, traditional machine learning approaches typically require feature engineering, the process of selecting and extracting pertinent qualities from the data, which can be lengthy for extensive data such as histopathology images. Machine learning and deep learning make it easy to automate many domains, with applications ranging from image forgery recognition and smart city infrastructure to medical care and agricultural water distribution [4–7].

Classifying breast cancer histopathology images using deep learning models and other machine learning approaches is an active study area. Deep learning models such as Convolutional Neural Networks (CNNs) have recently been used to classify breast cancer [8–10]. For instance, the effectiveness of SVM, KNN, and CNN models was evaluated on breast cancer histopathological images, and the CNN model had the greatest accuracy (95.29%) of all the machine learning methods. In another case, different deep learning models were compared for classifying breast cancer using histopathology images, and the best model achieved an accuracy of 97.3% [11–15]. Combining histopathology images with a cutting-edge deep learning model called the Deep Attention Ensemble Network (DAEN) [14] further demonstrates the advantage of deep learning models over conventional machine learning algorithms for breast cancer classification. Several papers have investigated using machine learning on histology images to better categorize breast cancer. Classifying breast cancer histopathology images using a combination of a support vector machine and a random forest resulted in an accuracy of 84.23 percent. Several machine learning models, including SVMs, KNNs, random forests, and CNNs, were also tested and compared for their ability to classify breast cancer histology; CNN achieved the highest accuracy (96.8%) [15].

Classifying cancerous images using machine learning has been the subject of numerous studies. Compared to more conventional inspection techniques built on image processing and classification algorithms, however, these methods still require refinement. First, the data made public through competitions and other sources exhibit a significant class imbalance. Furthermore, although most research has analyzed histopathology images either at a single magnification or separately at several magnification levels, no study has analyzed a combined dataset consisting of all magnification levels. Second, current breast cancer classification methods perform poorly even with the best classification algorithms because they rely on statistical and textural elements of an image to make their classifications.

This study combined wavelet transformation with Extreme Gradient Boosting (XGBOOST) [16] to develop a technique for distinguishing between benign and malignant cancers. It offers a scale-invariant strategy for labeling images as benign or malignant, regardless of their size, shape, or resolution. The suggested method classifies cancer as benign or malignant using the BreakHis 1.0 [17] dataset, which comprises four magnification levels. Important stages include preprocessing, during which images of varying types, sizes, and dimensions are input and converted into YCbCr channels, followed by feature extraction and concatenation. The final step provides the features to XGBOOST for classification, yielding a model for use by specialists.

    Some crucial findings from the study are as follows:

1. Even though the benign and malignant images are unevenly represented, the Synthetic Minority Oversampling Technique (SMOTE) has been utilized to balance the BreakHis 1.0 dataset so that more useful insights can be gleaned from it.

2. Texture features are extracted using the wavelet transformation, and the images are classified as benign or malignant using XGBOOST.

3. A method is said to be scale-invariant if it maintains its effectiveness regardless of the size of the image. The scale invariance of the proposed method is therefore evaluated using images of varied sizes, shapes, and types.

The remainder of this article is organized as follows: Section 2 details pertinent studies on breast cancer detection techniques. Section 3 provides a high-level overview of the steps in the proposed methodology, including preprocessing, feature extraction, and classification, and also discusses the experimental datasets. Section 4 presents experimental results and a discussion of the proposed design; results from computations using the proposed architecture are tabulated and illustrated. Section 5 concludes with a discussion of the results and recommendations for future study.

    2 Literature Review

Breast cancer is a serious global public health issue that profoundly impacts patient outcomes and healthcare systems. Early identification and accurate diagnosis are crucial for greater survival rates and lower medical costs. Recently, machine learning algorithms have shown enormous promise in identifying and classifying breast cancer using histopathology images. This literature review covers the most up-to-date findings on the capabilities and limitations of machine learning algorithms for breast cancer classification.

Wetstein et al. [18] created a deep learning method for breast cancer grading and tested it on whole-slide histopathology images. The algorithm outperformed human pathologists at identifying low and intermediate tumor stages, achieving an accuracy rate of 80% and a Cohen's Kappa of 0.59. The work highlighted the potential of deep learning-based models for automating breast cancer grading on whole-slide images, which is important since accurate and consistent grading improves patient outcomes. To determine the most common and productive training-testing ratios for histological image recognition, Wakili et al. [19] analyzed deep learning-based models; a training-to-testing ratio of 80/20 was shown to yield the highest accuracy. The authors also created DenTnet, a new method built on transfer learning and DenseNet, to address the limitations of prior methods. DenTnet achieved up to 99.28% accuracy on the BreakHis dataset, outperforming leading deep learning algorithms in computing performance and generalizability while using fewer computational resources and maintaining the feature distribution. However, DenTnet was tested only on whole-slide images, not across different resolutions.

Kadhim et al. [20] used the Histogram of Oriented Gradients (HOG) feature extractor to quantify invasive ductal carcinoma histopathology images. Area Under the Curve (AUC), F1 score, specificity, accuracy, sensitivity, and precision were used to evaluate the algorithms' performance. With more than 100 images, the algorithms struggled to keep up with the data; deep learning could help overcome this limitation. By reducing the scope for human error, machine learning (ML) can potentially improve breast cancer detection and survival rates. Zhang et al. [21] developed BDR-CNN-GCN to better detect breast cancer in mammograms. Combining a graph convolutional network (GCN) and a CNN with batch normalization (BN), dropout (DO), and rank-based stochastic pooling (RSP) improves performance. Evaluated ten times on the mini-MIAS breast dataset, the model achieved a sensitivity of 96.202 percent, a specificity of 96.002 percent, and an accuracy of 96.101 percent. Compared to 15 state-of-the-art breast cancer detection approaches and five neural network models, BDR-CNN-GCN achieves better results regarding data augmentation and identifying malignant breast masses.

Alqudah et al. [22] developed a sliding window method for extracting Local Binary Pattern (LBP) features. Overall, the proposed method achieves high accuracy, sensitivity, and specificity, with a 91.12% rate of correct predictions, an 85.22% rate of correct positive predictions, and a 94.01% rate of correct negative predictions; these outcomes excel compared with other studies in the literature. More information can be extracted using the suggested method, and other machine learning strategies can be compared against it. The technique can potentially enhance breast cancer diagnosis and histological tissue localization. Clement et al.'s support vector machine classifier and four DCNN versions classified breast cancer histology images into eight categories [23]. A deep convolutional neural network (DCNN) analyzed images at many resolutions to produce a highly predictive multi-scale pooling image feature representation (MPIFR), which an SVM then used to classify the images. Since it offers a fresh approach to reliably identifying various breast cancer subtypes, the proposed MPIFR technology may greatly enhance patient outcomes and breast cancer screening. On the BreakHis histopathological breast cancer image dataset, the authors report a precision of 98.45 percent, a sensitivity of 97.48 percent, and an accuracy of 97.77 percent.

The MPIFR method can improve the precision of breast cancer diagnosis and patients' health. Seo et al. [24] created a deep convolutional neural network (DCNN) that performs exceptionally well in classifying breast cancer. On the BreakHis histopathology BC image dataset, the ensemble model achieved higher accuracy (97.77%), sensitivity (97.48%), and precision (98.45%) than the prior state-of-the-art and an entire set of DCNN baseline models. To separate cells with and without nuclei, Saturi et al. [25] introduced a superpixel-clustering strategy based on optimization. The proposed method outperformed prior studies, yielding an 8%–9% increase in classification accuracy for identifying breast cancer. The improved segmentation results stem from the method's advantages, which include searching for a global optimum and using parallel computing.

In [26], Hao et al. suggested a deep semantic and Grey Level Co-occurrence Matrix (GLCM) based technique for image recognition in breast cancer histopathology. The suggested method outperforms the baseline models in Magnification Specific (MSB) and Magnification Independent (MIB) classification, with recognition accuracies of 96.75%, 95.21%, 96.57%, and 93.15% at magnifications of 40X, 100X, 200X, and 400X at the image level, respectively, and 96.33%, 95.26%, 96.09%, and 92.99% at the patient level. For MIB classification, accuracy was 95.56 percent at the patient level and 95.54% at the image level. The suggested method's accuracy is comparable to current best practices in recognition. Rehman et al. [27] proposed a neural network-based framework using reduced feature vectors and machine learning to distinguish between mitotic and non-mitotic cells. The suggested method could accurately capture cell texture, allowing the creation of efficiently reduced feature vectors to identify malignant cells, and used ensemble learning with weighted attributes to improve model performance. The proposed method for recognizing mitotic cells outperforms state-of-the-art methods on the MITOS-12, AMIDA-13, MITOS-14, and TUPAC16 datasets. Different feature extraction methods (Hu moments, Haralick textures, and color histograms) created by Joseph et al. allowed successful multi-classification of breast cancer cases on the BreakHis dataset. The recommended multi-classification strategy for breast cancer, supported by histological images, outperformed the majority of other investigations: histopathological images at 40X, 100X, 200X, and 400X magnifications were classified with accuracies of 97.87%, 97.60%, 96.10%, and 96.84%, respectively [28].

Increasing patient survival rates and decreasing healthcare costs require early identification and accurate breast cancer diagnosis. Machine learning algorithms have shown potential in detecting and classifying breast cancer using histopathology images. Recent studies have investigated many approaches to grading breast cancer, including superpixel clustering algorithms, sliding window feature extraction methods, and deep learning-based models. These studies have shown the proposed methods to be superior to alternative procedures in accuracy, sensitivity, and specificity, all contributing to improved breast cancer detection. These procedures have the potential to enhance patient outcomes while decreasing healthcare costs. Among the limitations and challenges that must still be surmounted are the interpretability of machine learning models and the requirement for additional labeled data.

    3 Material and Methods

The whole-slide classification machine learning pipeline has great potential for use in the detection and treatment of breast cancer. We analyze high-resolution images from databases such as BreakHis to classify slides as cancerous or benign. The images are converted to YCbCr for optimal texture feature extraction. After this initial image processing, texture features are retrieved from wavelet coefficients. The extracted features are then given to a binary classifier; any algorithm distinguishing between cancerous and noncancerous slides can serve as the classifier. Before classification, the dataset must be resampled: SMOTE oversampling is used to rectify the imbalanced data, and XGBOOST handles classification in this investigation. The pipeline then reports the classification results, including metrics such as accuracy, precision, recall, and F1 score; these indicators can be used to assess the pipeline's efficiency and adjust the various stages accordingly. The pipeline consists of four phases: preprocessing, feature extraction, classification, and result reporting (Fig. 1).
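The sketch below illustrates, under stated assumptions, how the first two phases chain together; the helper names to_ycbcr_channels and wavelet_texture_features are hypothetical and correspond to the sketches given in Sections 3.2 and 3.3.

```python
# Illustrative pipeline skeleton (not the authors' exact implementation):
# phase 1 converts each slide to YCbCr, phase 2 extracts texture features.
# to_ycbcr_channels and wavelet_texture_features are the hypothetical helpers
# sketched in Sections 3.2 and 3.3.
import numpy as np

def build_feature_matrix(image_paths):
    """Return one row of concatenated per-channel texture features per slide."""
    rows = []
    for path in image_paths:
        y, cb, cr = to_ycbcr_channels(path)          # phase 1: preprocessing
        rows.append(np.concatenate(
            [wavelet_texture_features(ch) for ch in (y, cb, cr)]
        ))                                           # phase 2: features
    return np.vstack(rows)

# Phases 3 and 4 (SMOTE resampling, XGBOOST classification, and reporting)
# are sketched in Sections 3.1, 3.5, and 3.6.
```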

Figure 1: Workflow of the proposed breast cancer classification method

    3.1 Datasets

This section describes the data collection and preprocessing methods used to train and assess the models employed in the machine learning pipeline. Table 1 summarizes the features of BreakHis 1.0. The BreakHis 1.0 database contains images of breast cancer tissue samples, separated into two categories: benign and malignant. The magnifications used to capture these images range from 40X to 400X. The total number of images is 3,995, with 1,995 showing malignant growths and 2,000 showing noncancerous ones. Each image is a Portable Network Graphics (PNG) file of 700 × 460 × 3 pixels. The BreakHis dataset's wide range of magnification levels makes it well suited for training recognition models that generalize across scale.

    Table 1:Details of datasets used for experiments

The breast histopathology images come from the BreakHis 1.0 dataset, which includes 9,109 microscopic images of both healthy and malignant breast tissue. These images were captured at four magnifications (40X, 100X, 200X, and 400X) with two distinct staining procedures (hematoxylin-eosin and picrosirius red). Studies have used the BreakHis 1.0 dataset to train and evaluate algorithms for breast cancer diagnosis and prognosis; deep learning models for automatically classifying breast histopathology images have thus greatly advanced CAD systems [29]. Fig. 2 displays a few examples of the experimental database's image content.

Figure 2: A breast cancer slide at four different magnifications: (a) 40X, (b) 100X, (c) 200X, and (d) 400X

The data needed to be rebalanced, and several approaches were considered. Under-sampling would decrease the number of normal slides to equal the number of cancer slides, but this would diminish the already limited data from the majority class and might eliminate beneficial features. Oversampling the minority class with a method such as the synthetic minority oversampling technique (SMOTE) [30] balances the output classes and gives the model access to more useful information, although it is more computationally expensive than class weighting, a simpler technique that gives more weight to the under-represented class when computing the loss function. Because class weighting does not manipulate the training data, it cannot meaningfully extend the size of the training set, which is currently limited in BreakHis 1.0; SMOTE was therefore used, as in the sketch below. Fig. 3 shows the results of resampling using SMOTE.
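A minimal sketch of this resampling step, assuming the imbalanced-learn package; X and y stand in for the extracted feature matrix and benign/malignant labels.

```python
# Oversample the minority class with SMOTE so both classes have equal counts.
from collections import Counter
from imblearn.over_sampling import SMOTE

def balance(X, y, seed: int = 42):
    """Return a SMOTE-balanced copy of the feature matrix and labels."""
    X_res, y_res = SMOTE(random_state=seed).fit_resample(X, y)
    print("before:", Counter(y), "-> after:", Counter(y_res))
    return X_res, y_res
```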

Figure 3: Bar chart showing the output class distribution between the benign and malignant classes within the training data. (a) Before balancing (b) After balancing

All datasets used in this inquiry were partitioned into K-fold cross-validation splits with their corresponding ratios. When using XGBOOST, the training images are used to build a model, while the testing images are used to evaluate the trained model.

    3.2 Preprocessing

Digital image processing yields subtly different outcomes when applied to images in various color modes. Converting an image from Red, Green, Blue (RGB) to luminance-chrominance (YCbCr) offers many benefits. YCbCr is a color space that separates luminance (brightness) from chrominance (color) for image and video compression, transmission, and processing. Converting an image from RGB to YCbCr reduces color redundancy, which improves image compression: the luminance channel carries most of the visual information, so reducing chrominance resolution shrinks file size without noticeably affecting image quality. YCbCr also handles discrepancies between human and device color perception. Electronic sensors treat red, green, and blue equally, but human vision is most sensitive to green; YCbCr accommodates these variances by segregating luminance and chrominance information. The RGB image is therefore converted to YCbCr using the OpenCV library in Python, and the three YCbCr components are separated.
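A minimal sketch of the conversion, assuming OpenCV (which loads images in BGR channel order and names the color space YCrCb); the file name is a hypothetical example.

```python
# Convert a histopathology image to YCbCr and split it into channels.
import cv2

def to_ycbcr_channels(path: str):
    """Read an image and return its Y, Cb, and Cr channels separately."""
    bgr = cv2.imread(path)                          # OpenCV reads images as BGR
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)  # note OpenCV's YCrCb order
    y, cr, cb = cv2.split(ycrcb)
    return y, cb, cr

# Example: y, cb, cr = to_ycbcr_channels("slide_40x_001.png")
```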

    3.3 Feature Extraction

Signal processing, data compression, and image analysis are just a few of the many applications of the wavelet transform, a mathematical technique. It takes a signal and breaks it down into a family of wavelets, each of which is a scaled and translated version of the mother wavelet. The wavelet transform can be applied to signals in either continuous or discrete time. Discrete wavelet transforms (DWT) are frequently used for feature extraction and compression in image processing. The DWT breaks an image down into coefficients representing various degrees of detail and approximation; these coefficients are obtained by convolving the image with a collection of filters known as the wavelet filters. The DWT can be expressed mathematically as follows:

$$W_{j,k} = \sum_{n} x_n \, \psi_{j,k,n}, \qquad V_{j+1,k} = \sum_{n} x_n \, \phi_{j,k,n}$$

where $\psi_{j,k,n}$ and $\phi_{j,k,n}$ are the wavelet and scaling functions, and $x_n$ is the original signal. At level $j$ and index $k$, the wavelet and scaling coefficients are denoted by $W_{j,k}$ and $V_{j+1,k}$.
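A minimal sketch of a single-level 2-D DWT on one channel, assuming the PyWavelets package; the choice of the 'haar' mother wavelet and the summary statistics are illustrative assumptions.

```python
# Decompose a channel into approximation/detail sub-bands and summarize them.
import numpy as np
import pywt

def wavelet_texture_features(channel: np.ndarray) -> np.ndarray:
    """Single-level 2-D DWT; return the mean and std of each sub-band."""
    cA, (cH, cV, cD) = pywt.dwt2(channel.astype(float), "haar")
    feats = []
    for band in (cA, cH, cV, cD):   # approximation + 3 detail orientations
        feats.extend([band.mean(), band.std()])
    return np.array(feats)
```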

Texture features and the grey-level co-occurrence matrix (GLCM) are widely used in image analysis software. The GLCM records how often pairs of pixel intensities co-occur, and useful statistics are computed from it [31]. An image's pixel intensities are tallied by counting how many pixels take each value. The mean of an image is calculated by:

$$\mu = \frac{1}{N} \sum_{i=1}^{N} x_i$$

where $x_i$ is the intensity of the $i$-th pixel and $N$ is the total number of pixels.

The standard deviation measures inhomogeneity because it reflects the spread of the observed intensity distribution [32]. Larger standard deviations indicate sharper boundaries and images with a wider range of intensity levels. It is determined by:

$$\sigma = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (x_i - \mu)^2}$$

A metric known as skewness [32] quantifies the presence or absence of symmetry. Skewness, denoted by $S_k(X)$, is defined as follows for the probability distribution of $X$:

$$S_k(X) = \frac{1}{N} \sum_{i=1}^{N} \left( \frac{x_i - \mu}{\sigma} \right)^{3}$$

The term kurtosis [33] characterizes the peakedness of the probability distribution of a random variable. The kurtosis of a random variable $x$, denoted $\mathrm{Kurt}(x)$, is defined as:

$$\mathrm{Kurt}(x) = \frac{1}{N} \sum_{i=1}^{N} \left( \frac{x_i - \mu}{\sigma} \right)^{4}$$

A metric known as energy is applied to the study of visual similarity; it quantifies the uniformity of repeated pixel pairs in the image. Energy follows Haralick's definition of the feature over the GLCM and is also known as the angular second moment:

$$E = \sum_{i} \sum_{j} p(i,j)^{2}$$

where $p(i,j)$ is the normalized GLCM entry for the intensity pair $(i,j)$.

Contrast, the variation of a pixel's intensity with respect to its neighbors, is a measurement used to assess local variation in an image:

$$C = \sum_{i} \sum_{j} (i-j)^{2} \, p(i,j)$$
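A minimal sketch of these first-order and GLCM statistics, assuming scikit-image; the GLCM distance and angle settings are illustrative assumptions.

```python
# Compute the six texture statistics above for one image channel.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_statistics(channel: np.ndarray) -> np.ndarray:
    """Return [mean, std, skewness, kurtosis, energy, contrast]."""
    img = channel.astype(np.uint8)
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    energy = graycoprops(glcm, "energy")[0, 0]        # sqrt of the ASM
    contrast = graycoprops(glcm, "contrast")[0, 0]    # sum of (i-j)^2 * p(i,j)
    x = img.astype(float).ravel()
    mu, sigma = x.mean(), x.std()
    skew = ((x - mu) ** 3).mean() / sigma ** 3
    kurt = ((x - mu) ** 4).mean() / sigma ** 4
    return np.array([mu, sigma, skew, kurt, energy, contrast])
```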

    3.4 Classification

We rely on earlier studies to guide our classification method because feature extraction is more important to our work than building a superior classifier. Our research confirmed the widespread use of nonlinear XGBOOST for image classification and its attainment of high-quality detection outcomes; for this reason, XGBOOST is our top pick, with the DART booster. The classification process comprises several steps, including training and testing; Fig. 1 illustrates a functional breakdown of the system's workflow. During training, the classifier draws heavily on the texture features of the image databases. After wavelet-based feature extraction, we train a classification model with XGBOOST; features were extracted from every image in the experimental datasets for training. The 10-fold cross-validation technique is employed for this purpose, splitting the data into k segments for analysis. The proposed model excels on every experimental dataset.

    3.5 Experimental Setup

Texture attributes were evaluated in Python, and XGBOOST classified the histopathology images. Several machine learning methods and extraction parameters were evaluated to enhance accuracy. XGBOOST classified the images, and Python 3.11 handled preprocessing and feature extraction. OpenCV and NumPy are popular image-reading and preprocessing libraries, used across robotics, autonomous cars, and computer vision. PyFeat extracts image features based on texture, shape, and color; these attributes help machine learning systems classify and recognize items. XGBOOST and Scikit-learn offer decision trees, random forests, and support vector machines. SMOTE is used to correct the class imbalance: it generates artificial minority-class samples to balance the dataset and improve classification accuracy. These Python packages process, extract, classify, and visualize images, while Matplotlib and Seaborn ease analysis and visualization. The DART booster's default settings use all training samples with a learning rate of 0.1, a maximum tree depth of 6, a subsample ratio of 1, a regularization term of 1, a gamma value of 0.0 (no minimum loss reduction required for splitting), a minimum child weight of 1, and no dropout. K-fold cross-validation evaluates the classification models; XGBOOST's cross-validated k-fold results were calculated using each fold's testing set. All tests ran in a Jupyter Notebook on a seventh-generation Dell i7 CPU with 16 GB of RAM and 1 TB of storage.
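A minimal sketch of this configuration, assuming the xgboost and scikit-learn packages; the synthetic data is a stand-in for the real wavelet/GLCM feature matrix, and the hyperparameters mirror the DART settings listed above.

```python
# 10-fold cross-validation of an XGBOOST classifier with the DART booster.
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from xgboost import XGBClassifier

# Stand-in for the extracted texture features and benign/malignant labels.
X, y = make_classification(n_samples=500, n_features=24, random_state=0)

clf = XGBClassifier(
    booster="dart",        # DART booster, as stated in the setup
    learning_rate=0.1,
    max_depth=6,
    subsample=1.0,         # use all training samples
    gamma=0.0,             # no minimum loss reduction required for splitting
    min_child_weight=1,
    reg_lambda=1.0,        # regularization term of 1
    rate_drop=0.0,         # no dropout
)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")
```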

    3.6 Evaluation Measures

Many distinct measures, such as testing accuracy, precision, recall, F1 score, and AUC, are used to evaluate the classification process. For the proposed method, the assessment parameter utilized most often is accuracy. In this study, the proposed approach is quantitatively evaluated using the following parameters:

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \times 100$$

where Accuracy is the number of correct predictions divided by the total number of predictions, multiplied by 100 to give a percentage. The percentage of correctly identified positive samples, the true positive rate (recall), is determined using:

$$\mathrm{Recall} = \frac{TP}{TP + FN}$$

In this model, true positive (TP) is the number of diseases correctly recognized, false positive (FP) is the number of conditions misclassified as disease, and false negative (FN) is the number of diseases that should have been discovered but were not. Precision is the fraction of predicted positives that are correct, and the F1 score is a popular measure combining precision and recall:

$$\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad F1 = 2 \times \frac{\mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$

Cross-validation (CV) is a resampling methodology used to assess machine learning models on a constricted dataset while safeguarding the prediction models against overfitting. K-fold CV is a technique in which the given dataset is split into K segments, or folds, where each fold serves as the testing set at some point. Consider 10-fold cross-validation (K = 10): the dataset is separated into ten folds, with the first fold testing the model in the first iteration while the remaining folds train it. In the second iteration, the second fold serves as the testing set and the rest function as the training set. This cyclic process repeats until each of the ten folds has been used as the testing set.
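A minimal sketch of computing these measures for one fold, assuming scikit-learn; the label and prediction vectors are hypothetical stand-ins.

```python
# Evaluation measures for one cross-validation fold.
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score, roc_auc_score)

y_true = [0, 0, 0, 1, 1, 1]              # hypothetical benign(0)/malignant(1) labels
y_pred = [0, 1, 0, 1, 1, 1]              # hypothetical fold predictions
y_prob = [0.1, 0.6, 0.2, 0.8, 0.9, 0.7]  # hypothetical malignant probabilities

print("Accuracy :", 100 * accuracy_score(y_true, y_pred), "%")
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
print("AUC      :", roc_auc_score(y_true, y_prob))
print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))
```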

    4 Results and Analysis

The results of a large-scale experiment testing the proposed method for categorizing breast cancer are presented here. We trained and scored the models using the evaluation measures described in Section 3.6, compiling data from a wide range of performance assessment tools. The experiments focused on the following areas:


1. The effectiveness of the proposed framework is measured with XGBOOST for two-class classification on each magnification dataset individually available in BreakHis 1.0.

2. XGBOOST is used to evaluate the efficacy of the suggested framework for two-class classification on the combined dataset; cross-validation with different assessment metrics rates the proposed framework on the combined benign and malignant dataset.

3. Analysis of how the proposed method compares with state-of-the-art approaches.

4.1 Evaluation of Proposed Method on 40X, 100X, 200X, and 400X Images from BreakHis 1.0

Table 2 summarizes ten rounds of cross-validation testing of the breast cancer classification model on the 40X magnified dataset, in which wavelet transformation and textural features of histopathological images distinguish benign from malignant instances. The table shows each fold's benign and malignant classification metrics, along with the AUC statistic and the number of images used in each iteration. All folds have good accuracy ratings of 96.35–99.27 percent, and the model classifies benign and malignant cases with good precision, recall, and F1 score values. Wavelet transformation and textural aspects of histopathology images may thus improve breast cancer classification accuracy and patient outcomes.

Table 2: 10-fold cross-validation results on 40X magnified images of BreakHis 1.0

Cross-validation is frequently used when evaluating machine learning models: the dataset is partitioned into k folds, and the model is trained k times, with each fold serving once as the validation set and otherwise as part of the training set. Cross-validation lets the model be tested on unseen data. Fold-wise confusion matrices display model performance for each cross-validation fold, showing for each category the proportion of correct classifications, incorrect classifications, and false negatives. Overfitting, class imbalance, and patterns in model performance can all be identified with this information. Based on the fold-wise confusion matrices in Fig. 4, the model achieves high performance for both the benign and malignant classes. Performance may vary by fold owing to differences in the number of images per class. Blue boxes in the confusion matrix show correctly classified samples.

Table 3 displays the outcomes of 10-fold cross-validation on the BreakHis 1.0 dataset using the proposed approach at 100X magnification. The table separately lists the accuracy, precision, recall, and F1 score for each fold for benign and cancerous images, along with area under the curve (AUC) values for each fold, quantifying the model's ability to differentiate between benign and cancerous images. The outcomes show that the automated approach is effective and accurate in spotting breast cancer: the high accuracy ratings (95.83–98.95%) demonstrate that the system can successfully categorize various images, the excellent precision, recall, and F1 scores show how well it distinguishes between benign and cancerous images, and the AUC values demonstrate that it can separate normal from cancerous images. These findings provide promising evidence for the potential utility of the automated approach in detecting invasive breast cancer.

Table 3: 10-fold cross-validation results on 100X magnified images of BreakHis 1.0

Figure 4: Confusion matrices of testing results on 40X magnified images of BreakHis 1.0

The confusion matrices shown in Fig. 5 support a fold-wise evaluation of the classification model, which was trained and validated on multiple folds of the data. The confusion matrices show how well the model does on each fold; the model's accuracy and AUC (area under the curve) on a fold constitute its overall performance there. True positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) are tallied for each fold and displayed in the confusion matrix, from which metrics such as precision, recall, and F1 score can be computed to shed light on the model's efficacy. The model performs well with few false positives and negatives, and the accuracy and AUC values are sufficient for most folds, though the model's advantages and disadvantages warrant further investigation.

Figure 5: Confusion matrices of testing results on 100X magnified images of BreakHis 1.0

Table 4 displays the outcomes of a 10-fold cross-validation study conducted on images from the BreakHis 1.0 dataset magnified by a factor of 200. Each row represents one fold of the cross-validation; values for accuracy, precision, recall, F1 score, and area under the curve (AUC) are displayed in separate columns for benign and malignant images. The fold-wise accuracy is quite high, between 97.12% and 98.92%. Precision values lie between 0.96 and 0.99 for both benign and cancerous images, recall values between 0.97 and 0.99, F1 scores between 0.97 and 0.99, and AUCs between 0.97 and 0.99. The results show that the model is highly accurate and performs well when identifying benign and malignant breast histopathology images.

Table 4: 10-fold cross-validation results on 200X magnified images of BreakHis 1.0

Ten iterations of cross-validation were run on the 200X magnified images of the BreakHis 1.0 dataset, and the findings are displayed in the confusion matrices in Fig. 6, with each fold's accuracy, AUC, and confusion matrix shown independently. Entries on the diagonal of a confusion matrix reflect correctly diagnosed cases (benign and malignant), whereas off-diagonal entries represent misclassified cases. The model has an adequate area under the curve (AUC). There is some variation in the number of misclassified samples between the folds; some folds show a few false positives (e.g., three benign cases misdiagnosed as malignant) and false negatives (e.g., five malignant cases misdiagnosed as benign). Large areas under the curve indicate successful data classification; however, if there is a big discrepancy between the number of benign and malignant cases in the dataset, class imbalance may be troublesome even if the AUC remains unchanged.

Figure 6: Confusion matrices of testing results on 200X magnified images of BreakHis 1.0

This research used XGBOOST to correctly label benign and malignant breast cancer images in a dataset comprising both types. Table 5 displays the outcomes of a 10-fold cross-validation test conducted on the 400X magnified images of the BreakHis 1.0 dataset, showing each fold's accuracy, precision, recall, F1 score, and AUC. For most folds, the proposed method achieved good accuracy (between 94.31% and 98.78%); high precision and recall values show that the method accurately separates benign from malignant samples, and the high AUC scores, ranging from 0.94 to 0.99, further prove its success. Table 5 shows that the proposed approach is a potentially useful strategy for classifying breast cancer images, which could be implemented in clinical settings for early detection and diagnosis.

Table 5: 10-fold cross-validation results on 400X magnified images of BreakHis 1.0

Fold-wise confusion matrices for the classification model are displayed in Fig. 7, with accuracy and AUC (area under the curve) values presented for each fold, representing the model's performance on a different portion of the data. Each confusion matrix is a 2×2 table: correctly classified samples appear on the diagonal (top left and bottom right), while misclassified samples, the false positives and false negatives, appear off the diagonal (top right and bottom left). The data suggest that the model performs well, with accuracy scores between 0.94 and 0.99 and AUC scores between 0.95 and 0.99 throughout the ten folds. Results may differ based on the dataset used, so additional investigation into the model's efficacy may be necessary.

Figure 7: Confusion matrices of testing results on 400X magnified images of BreakHis 1.0

Tables 2 and 3 and Fig. 4 show that the proposed method can successfully identify breast cancer in histological images. Wavelet transformation and textural features of histopathology images were used in the suggested study to distinguish between benign and malignant breast cancer. High accuracy, precision, recall, and F1 score results in cross-validation tests show that the models can correctly label a sizable fraction of images, and the AUC values demonstrate that the models can distinguish between normal and cancerous visuals. These results provide preliminary support for the automated invasive breast cancer detection technique, implying that it may improve patient outcomes.

    4.2 Performance Evaluation of Proposed Method on Combined Dataset

Table 6 summarizes the results of applying the XGBOOST algorithm to the combined BreakHis 1.0 dataset for breast cancer classification. Ten-fold cross-validation shows that the model is quite accurate, with a mean accuracy of 97.84%. Recall, F1 score, and precision were used to evaluate how well the model distinguished between benign and malignant tumors: the F1 score, precision, and recall all stayed in the 0.96 to 0.99 range for the benign class, while for the malignant class they were all between 0.97 and 0.99. These results show that the model can distinguish between benign and malignant tumors in breast cancer images. The area under the curve (AUC) was also used to evaluate discrimination between benign and malignant tumors; with an AUC between 0.97 and 0.99, the model has excellent discriminatory power. The results indicate that the proposed method is a practical strategy for breast cancer categorization based on histological images.

Table 6: 10-fold cross-validation results on combined images of BreakHis 1.0

Fig. 8 displays the 10-fold cross-validation results for the XGBOOST breast cancer model's classification accuracy. The dataset is split into several folds to generate independent training and validation sets, and the AUC and accuracy are logged after each cycle. The confusion matrix provides the percentages of correct and incorrect results for each fold. The model has a respectable accuracy of 0.94 to 0.99 across the ten folds.

Furthermore, the AUC values are rather satisfactory, between 0.95 and 0.99. These findings indicate that the model may be able to distinguish between benign and aggressive breast tumors. The confusion matrices demonstrate that the model correctly classifies instances as benign or malignant; false positives and false negatives occur, although only rarely. When a model wrongly detects a benign instance as malignant, this is known as a false positive (FP), and when a model incorrectly identifies a malignant instance as benign, this is known as a false negative (FN). Clinical situations are inherently high-risk, making it imperative to account for this type of error. The proposed method appears applicable to classifying breast cancer, although more research on larger datasets is required to verify its clinical feasibility.

4.3 Comparative Analysis with State-of-the-Art Methods

Section 2 covers the many methods used to diagnose breast cancer, several of which use machine learning and deep learning. Different models can be compared on the same data to see how well they perform, so our research compared our approach with others that produce comparable results. We compare the suggested method's accuracy to that of state-of-the-art methods. Table 7 compares the accuracy of various techniques for detecting breast cancer at varying magnification levels, including Sliding Window + SVM [13], ResNet50 + KWELM [28], Xception + SVM [29], and DenseNet201 + GLCM + SVM [17]. All measurements, including accuracy and area under the curve, suggest that the proposed strategy is superior. The proposed method obtains an accuracy of 99.27% at 40X magnification, 98.95% at 100X, 98.92% at 200X, and 98.78% at 400X. Among the baselines, Xception + SVM consistently beats the other methods regardless of zoom level, while ResNet50 + KWELM performs moderately well from 40X to 100X but much worse from 100X to 400X. The proposed method's higher performance shows its potential as a robust instrument for detecting breast cancer.

Table 7: Comparative analysis with state-of-the-art methods

    5 Conclusion

Recognizing malignant images is a vital study topic in the medical field. This research employs wavelet transformation and texture features in the diagnosis of breast cancer. Our method separates an image into its YCbCr channels before extracting blocks of color and texture data. The proposed method is resilient against transformations (rotation, scaling, and distortion) applied to the tumor region, and we trained and tested it on a larger collection of images to increase its efficacy. Classification was performed using the XGBOOST classifier, and feature extraction parameters were optimized for maximum accuracy. The suggested method reached a maximum accuracy of 99.27% on the 40X dataset, 98.95% on the 100X dataset, 98.92% on the 200X dataset, 98.78% on the 400X dataset, and 98.80% on the combined dataset. Our findings show that the wavelet transformation can be used successfully for cancer image recognition. There are, however, some restrictions to overcome: our dataset does not fully reflect real-world conditions because of the biases introduced by SMOTE, and our approach might struggle with more advanced forms of image variation, such as sophisticated geometric transformations or higher-level semantic changes. In conclusion, our research has aided in advancing wavelet-based methods for recognizing cancer images in medical imagery. To make our method more accurate and stable, we intend to continue investigating this topic by increasing the size of our dataset and investigating additional classification models, with the goal of a system that can accurately and efficiently categorize multi-class cancer images in real-world settings.

Acknowledgement: None.

Funding Statement: This work was funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R236), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Author Contributions: The authors confirm contribution to the paper as follows: study conception and design: A. Akram, M. Hamid, J. Rashid; data collection: A. Akram, F. Hajjej, N. Sarwar; analysis and interpretation of results: A. Arshad, J. Rashid, M. Hamid; draft manuscript preparation: A. Akram, J. Rashid, F. Hajjej, N. Sarwar, M. Hamid. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: Data will be provided on request. It is also publicly available.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
