
    Enhancing Skin Cancer Diagnosis with Deep Learning:A Hybrid CNN-RNN Approach

    2024-05-25 14:42:54
    Computers Materials & Continua, 2024, Issue 4

    Syeda Shamaila Zareen, Guangmin Sun, Mahwish Kundi, Syed Furqan Qadri and Salman Qadri

    1Faculty of Information Technology,Beijing University of Technology,Beijing,100124,China

    2Computer Science International Engineering College, Maynooth University, Kildare, W23 F2H6, Ireland

    3Research Center for Data Hub and Security,Zhejiang Lab,Hangzhou,311121,China

    4Computer Science Department,MNS University of Agriculture,Multan,59220,Pakistan

    ABSTRACT Skin cancer diagnosis is difficult due to variability in lesion presentation. Conventional methods struggle to extract features manually and to capture the spatial and temporal variations of lesions. This study introduces a deep learning-based Convolutional and Recurrent Neural Network (CNN-RNN) model with a ResNet-50 architecture used as the feature extractor to enhance skin cancer classification. Leveraging synergistic spatial feature extraction and temporal sequence learning, the model demonstrates robust performance on a dataset of 9000 skin lesion photos spanning nine cancer types. Using a pre-trained ResNet-50 for spatial feature extraction and Long Short-Term Memory (LSTM) for temporal dependencies, the model achieves a high average recognition accuracy, surpassing previous methods. A comprehensive evaluation, including accuracy, precision, recall, and F1-score, underscores the model's competence in categorizing skin cancer types. This research contributes a sophisticated model and valuable guidance for deep learning-based diagnostics; by overcoming spatial and temporal complexities, it offers a refined solution for dermatological diagnostic research.

    KEYWORDS Skin cancer classification; deep learning; Convolutional Neural Network (CNN); RNN; ResNet-50

    1 Introduction

    Accurate and fast diagnosis is crucial for the proper treatment of skin cancer, a widespread and potentially life-threatening condition. In recent years, deep learning methods, specifically Convolutional Neural Networks (CNNs), have demonstrated encouraging outcomes in many image classification tasks, including the diagnosis of skin cancer [1]. Nevertheless, the current landscape exposes enduring obstacles that require more advanced methods. Existing approaches to skin cancer classification face obstacles that hinder their effectiveness. Conventional techniques frequently struggle with the natural variability of skin lesion images, leading to less-than-ideal diagnostic precision. Furthermore, the dependence on manual feature extraction in traditional systems is labor-intensive and may fail to capture the subtle patterns that are crucial for precise diagnosis [2]. Given these circumstances, there is a clear need for sophisticated solutions that go beyond the constraints of current methods. Deep learning has significant potential to transform skin cancer diagnostics; its ability to automatically extract features and learn representations is highly valuable. CNNs have shown proficiency in identifying complex spatial patterns in images, which is essential for differentiating between harmless and cancerous lesions [3]. However, even within deep learning, difficulties remain, especially in dealing with the temporal characteristics of skin cancer development. The literature review conducted for the proposed model provides a comprehensive analysis of previous research on the classification of skin cancer. Some of the research gap findings are shown in Table 1 below.

    Table 1: Literature review of existing approaches

    The focus of the study is a large corpus of scholarly literature on the diagnosis of skin cancer and the application of image processing algorithms based on deep learning techniques. To maximize the efficacy of treatment for skin cancer, a prevalent malignant disease, it is essential to diagnose the condition promptly and accurately. To overcome the constraints of current techniques and the intricate difficulties encountered in deep learning, we propose a pioneering hybrid methodology. The architecture of our model combines a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN), notably utilizing a Long Short-Term Memory (LSTM) layer. The result is a hybrid CNN-RNN model with a ResNet-50 backbone. This fusion is intended to combine the spatial feature extraction abilities of CNNs with the temporal sequence learning capabilities of RNNs. The CNN module is exceptionally proficient at extracting spatial characteristics from skin lesion images, effectively capturing intricate patterns that may defy conventional techniques. The RNN component, equipped with its LSTM layer, effectively handles the temporal intricacies naturally present in the course of skin cancer. By considering both spatial and temporal dimensions, our model offers a comprehensive answer to the complex problem of classifying skin cancer.

    1.1 Existing Challenges in Skin Cancer Diagnosis

    1. Skin cancer can manifest in various forms, and lesions can exhibit considerable variability in appearance [9]. Distinguishing between benign and malignant lesions, as well as identifying the specific type of skin cancer, can be challenging even for experienced dermatologists [10].

    2. The success of skin cancer treatment is often closely tied to early detection. However, small or subtle lesions may go unnoticed in routine examinations, leading to delayed diagnoses and potentially more advanced stages of the disease [11].

    3. Different dermatologists may interpret skin lesions differently, leading to inconsistencies in diagnosis [12]. This variability can be a significant obstacle to achieving consistently accurate and reliable diagnoses.

    1.2 Research Gap in Skin Cancer Diagnosis

    1. Early diagnosis of skin cancer is paramount for successful treatment. Timely identification allows for less invasive interventions, potentially reducing the need for extensive surgeries or aggressive treatments [13].

    2. Accurate diagnoses contribute to better patient outcomes by facilitating appropriate and targeted treatment plans. This, in turn, enhances the chances of complete recovery and reduces the risk of complications [14].

    3. Prompt diagnoses can lead to more efficient healthcare resource utilization. Avoiding unnecessary procedures for benign lesions and initiating appropriate treatments early can help reduce overall healthcare costs [15].

    1.3 Contribution of Deep Learning in Skin Cancer Diagnosis

    1. Deep learning models, especially Convolutional Neural Networks (CNNs), excel at pattern recognition in images. They can learn intricate features and textures in skin lesions that may be imperceptible to the human eye, aiding in the early detection of subtle abnormalities [16].

    2. Deep learning models provide a consistent and standardized approach to diagnosis. Once trained, they apply the same criteria to every image, mitigating inter-observer variability and ensuring more reliable results [17].

    3. Automated deep learning models can process large datasets quickly, enabling a more efficient diagnostic process. This is particularly valuable in settings where there might be a shortage of dermatologists or where rapid screenings are necessary [18].

    The challenges in skin cancer diagnosis necessitate advanced technologies for accurate and timely identification. Deep learning models, with their capacity for sophisticated image analysis and pattern recognition, significantly contribute to overcoming these challenges. Their role in standardizing diagnoses, improving efficiency, and aiding in early detection reinforces the importance of integrating such technologies into the broader framework of dermatological diagnostics [19]. Accurate diagnosis of skin cancer poses a formidable challenge due to the intricate variability in lesion presentation. This study seeks to overcome the limitations of current skin cancer diagnostic approaches. We address the dual challenges of spatial and temporal intricacies by introducing a novel hybrid Convolutional and Recurrent Neural Network (CNN-RNN) model with a ResNet-50 backbone. The aim is to enhance the accuracy and comprehensiveness of skin cancer classification by leveraging the strengths of both spatial feature extraction and temporal sequence learning. This study introduces a novel approach with the following main contributions:

    1. We propose a pioneering model that integrates Convolutional Neural Networks (CNNs) with Recurrent Neural Networks (RNNs), specifically utilizing a Long Short-Term Memory (LSTM) layer. This hybrid architecture with a ResNet-50 backbone aims to synergize spatial feature extraction and temporal sequence learning.

    2. The CNN component excels at capturing spatial features from skin lesion images, addressing limitations of traditional methods. Simultaneously, the RNN component, with its LSTM layer, navigates the temporal complexities of skin cancer progression, providing a comprehensive solution.

    3. Rigorous experiments are conducted on a large dataset comprising 9000 images across nine classes of skin cancer. This diverse dataset allows for robust training and evaluation, ensuring the model's capacity to generalize across various pathological conditions.

    4. Beyond conventional accuracy metrics, we employ precision, recall, and F1-score to comprehensively assess the model's performance. The inclusion of a confusion matrix offers a detailed breakdown of its proficiency in classifying different skin cancer types.

    5. The proposed model achieves an impressive average recognition accuracy of 99.06%, surpassing traditional methods. This demonstrates the model's capability to make accurate and nuanced distinctions among diverse skin cancer types. Our contributions pave the way for more effective automated skin cancer diagnosis, offering a refined understanding of spatial and temporal patterns in skin lesion images through a hybrid CNN-RNN model.

    This research introduces a novel hybrid Convolutional and Recurrent Neural Network (CNN-RNN) algorithm that is highly effective in classifying skin cancer in the specified data collection. The primary objective of this project is to develop a novel hybrid deep learning algorithm for accurately predicting skin cancer illness. Section 1, the Introduction, provides a detailed overview of the publications that explicitly focus on classifying different forms of skin cancer, and also covers the literature review of skin cancer, the research gap, and the proposed research contributions. Section 2 outlines the recommended methodology, including the data definition, preprocessing, augmentation, segmentation, and feature extraction steps. Section 3 provides a comprehensive examination of the classification model. Section 4 combines system design and analysis, and the paper is concluded in Section 5.

    2 Methodology

    The proposed methodology for the classification of dermoscopy skin cancer images is described in Fig. 1. We utilized a hybrid Convolutional and Recurrent Neural Network (CNN-RNN) model pretrained on approximately 2239 images from the ISIC dataset, which contains dermoscopy images. The model classifies skin lesion images with performance better than or comparable to expert dermatologists across nine classes. The overall model workflow is shown below.

    Figure 1: Proposed hybrid model for skin cancer detection and classification

    2.1 Data Definition

    Computerized algorithms are typically used for the automated assessment of medical data, and the accuracy of these algorithms must be validated using clinical-grade medical imaging [20]. The required test images are obtained from the ISIC dataset in the proposed study. The collection has 2239 photos of skin lesions. Following the collection process, each image and the corresponding ground truth are scaled to a fixed pixel resolution. The test images used in this study, together with their corresponding ground truth, are shown in Fig. 2.

    Figure 2: Images with type definition from the ISIC dataset

    2.2 Image Pre-Processing

    The skin cancer classification model employs a range of preprocessing techniques to enhance the quality and suitability of the input images before they are used in the network. Constructing the model entails producing images that specifically target and emphasize particular regions of interest (ROI). Creating ROIs entails identifying and isolating specific regions within skin lesions that contain relevant information, with the aim of facilitating classification [21]. This step enhances the model's ability to capture unique features of the skin lesions. The model being evaluated employed a dataset of ISIC images obtained from an online resource. A series of image processing operations was executed to attain uniformity and create a consistent approach for the image data [22]. These operations include manipulating the image resolution depth and converting the pixel range into positive integers, each represented by a single byte. In this context, in the backend, the variable 'x' symbolizes the pixel value, 'y' represents the gray-level value, and 'max' and 'min' denote the upper and lower boundaries of the pixel range. This conversion procedure enhanced the precision of the images, enabling more efficient analysis in subsequent stages. To attain accurate identification and evaluation of cancerous regions, a cohort of skilled and accredited dermatologists specializing in skin cancer diagnosis scrutinized the provided images. The investigators performed a thorough analysis of the epidermis to identify areas impacted by pathological conditions. The parameters related to the diseased areas underwent regular assessment and quantification across all datasets [23]. Fig. 3, illustrated below, presents the transformation of the image color into grayscale. To focus on the primary site of malignancy, distinct regions of interest (ROI) were established for each variant of skin cancer [24].

    Figure 3: Converted the image color to grey
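The gray-level conversion described above can be sketched as follows. This is a minimal illustration assuming min-max scaling of each pixel value x into a single-byte gray level y, with 'max' and 'min' taken per image as in the text; the helper function itself is hypothetical, not the paper's implementation.

```python
import numpy as np

def to_gray_byte(image: np.ndarray) -> np.ndarray:
    """Convert an RGB image to gray and rescale pixels into 0..255 bytes."""
    # Luminance-weighted grayscale conversion (ITU-R BT.601 weights).
    gray = image[..., 0] * 0.299 + image[..., 1] * 0.587 + image[..., 2] * 0.114
    x_min, x_max = gray.min(), gray.max()
    # Map each pixel value x in [min, max] to a gray level y in [0, 255].
    y = (gray - x_min) / (x_max - x_min) * 255.0
    return y.astype(np.uint8)  # one byte per pixel, as described in the text
```

The result is a single-channel image whose full dynamic range occupies exactly one byte per pixel, matching the "positive integer represented by a single byte" description.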

    Two methodologies were employed to address the variations observed within the malignant regions. The first step in the process involved applying the Laplacian filter to reduce image noise and enhance the prominent regions of infection. This filtering technique has demonstrated efficacy in enhancing the detectability of salient features associated with skin cancer [25].
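A minimal sketch of the Laplacian enhancement step is shown below, using a plain 3x3 Laplacian kernel and a hand-rolled convolution rather than any particular library; the paper does not state which implementation was used, so the kernel choice and edge handling here are assumptions.

```python
import numpy as np

# Standard 3x3 Laplacian kernel: responds strongly at edges and spots.
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

def laplacian_enhance(gray: np.ndarray) -> np.ndarray:
    """Sharpen a grayscale image by subtracting its Laplacian response."""
    padded = np.pad(gray.astype(float), 1, mode="edge")
    response = np.zeros_like(gray, dtype=float)
    for di in range(3):
        for dj in range(3):
            response += LAPLACIAN[di, dj] * padded[di:di + gray.shape[0],
                                                   dj:dj + gray.shape[1]]
    # Subtracting the Laplacian response emphasizes edges and salient regions.
    return gray - response
```

On flat regions the Laplacian response is zero, so only edges and small structures are amplified, which is why this filter highlights lesion boundaries.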

    The core objective of the proposed model was to improve the precision and reliability of the skin cancer classification procedure by incorporating different preprocessing methods and techniques. Fig. 4 depicts the process of generating regions of interest (ROI), as presented below. The utilization of ROIs has enabled focused analysis and extraction of relevant features specific to different types of skin cancer.

    Figure 4: Process of ROI generation

    A comprehensive dataset consisting of 2239 images was compiled, representing nine distinct categories of skin cancer. The preprocessed dataset is divided into three sets: training, validation, and testing. A portion of the dataset was allocated specifically for training the model, while another separate portion was reserved for hyperparameter tuning and validation. A concluding portion was allocated to evaluate the efficacy of the model. Data allocation ratios of 80:20 are frequently used for training and testing. According to reference [26], in these ratios, 80% or 70% of the data is set aside for training, while the remaining 20% or 30% is set aside for testing. The factual data pertaining to the International Skin Imaging Collaboration (ISIC) dataset is presented in Table 2.

    Table 2: Actual available data on ISIC
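The 80:20 partition described above can be sketched as follows; the function name, the shuffled index split, and the fixed seed are illustrative assumptions rather than the paper's exact procedure.

```python
import random

def split_dataset(items, train_ratio=0.8, seed=42):
    """Shuffle and split a list of samples into train and test subsets."""
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    shuffled = items[:]         # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```

A further split of the training portion would yield the validation subset used for hyperparameter tuning.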

    This partition ensures that a significant portion of the data is dedicated to training the model, thereby facilitating its ability to learn patterns and make predictions. In our study, the proposed hybrid Convolutional and Recurrent Neural Network (CNN-RNN) model with a ResNet-50 backbone was carefully configured with specific parameters to optimize its performance in skin cancer classification. For the Convolutional Neural Network (CNN) component, we utilized a learning rate of 0.001 and a batch size of 32, and employed transfer learning with pre-trained weights from ResNet-50. This allowed the model to leverage features learned from a large dataset, promoting better generalization to our skin cancer classification task. The Recurrent Neural Network (RNN) component, designed to capture temporal dependencies, was configured with a Long Short-Term Memory (LSTM) layer. We used a sequence length of 10, chosen empirically to capture relevant sequential information in the dataset. Additionally, we employed a dropout rate of 0.2 in the LSTM layer to prevent overfitting and enhance the model's generalization capabilities.

    Throughout training, the model underwent 50 epochs, a parameter chosen after monitoring the convergence of the training and validation curves. The choice of these parameters was guided by a combination of empirical experimentation and hyperparameter tuning to achieve optimal performance on our skin cancer dataset. These parameter details provide transparency about the setup of the model and offer insights into the considerations made during the experimental design. Adjusting these parameters and understanding their impact on the model's performance is crucial for the reproducibility and adaptability of the proposed approach in different contexts or datasets.
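Under the hyperparameters listed above (ResNet-50 backbone, LSTM over sequences of length 10 with dropout 0.2, Adam with learning rate 0.001), the hybrid architecture could be sketched in Keras roughly as follows. Values not stated in the paper, such as the LSTM width of 128 and the 224x224 input resolution, are assumptions; `weights=None` is used so the sketch builds without downloading ImageNet weights, whereas the paper uses the pre-trained weights.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_hybrid_cnn_rnn(seq_len=10, num_classes=9, weights=None):
    """Hybrid CNN-RNN: ResNet-50 features per image, LSTM over the sequence."""
    # Spatial feature extractor (the paper uses ImageNet-pretrained weights).
    backbone = tf.keras.applications.ResNet50(
        include_top=False, weights=weights, pooling="avg",
        input_shape=(224, 224, 3))
    inputs = layers.Input(shape=(seq_len, 224, 224, 3))
    # Apply the CNN to every image in the length-10 sequence.
    features = layers.TimeDistributed(backbone)(inputs)
    # Temporal modeling with an LSTM; width 128 is an assumption.
    x = layers.LSTM(128, dropout=0.2)(features)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_hybrid_cnn_rnn()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy", metrics=["accuracy"])
```

Wrapping the backbone in `TimeDistributed` reuses the same CNN weights for every step of the sequence, so the LSTM sees one 2048-dimensional feature vector per image.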

    By employing a hybrid methodology, we have effectively expanded the dataset to a comprehensive collection of 9000 images. This augmentation process was carried out with the objective of achieving a balanced distribution, whereby each unique category of skin cancer is represented by 1000 images. The sequential model's parameters are presented in Table 3. The incorporation of a balanced dataset helps the machine-learning model gain insights from a more comprehensive and fair representation of each class [27]. This approach helps to alleviate biases towards the dominant classes and improves the accuracy and generalization abilities of the sequential model. Principal Component Analysis (PCA) is a widely employed method in data analysis that aims to decrease the number of variables in a dataset while retaining the most pertinent information [28]. In order to minimize the squared error during the reconstruction of the original data, a projection is sought by transforming the data into a lower-dimensional space [29].

    Table 3: Parameters of sequential model

    The first step in the process entails data standardization, which involves subtracting the mean of the entire dataset from each individual sample and subsequently dividing by the standard deviation. This procedure guarantees that every instance exhibits a uniform variance [30]. The final stage of the procedure is not strictly indispensable; nevertheless, it offers benefits in terms of decreasing the workload on the central processing unit (CPU), as evidenced by Eq. (1):

    z = (x - mu) / sigma    (1)

    where mu denotes the mean of the dataset and sigma its standard deviation.

    To compute the covariance matrix, consider a dataset consisting of n samples, denoted as {x1, x2, ..., xn}. The covariance matrix can be obtained using Eq. (2), as shown below:

    Cov(X) = (1/n) * sum over i = 1..n of (xi - x̄)(xi - x̄)^T    (2)

    where Cov(X) denotes the covariance matrix, xi represents each sample in the dataset, and x̄ denotes the mean of the dataset, as given in Eq. (3):

    x̄ = (1/n) * sum over i = 1..n of xi    (3)

    An alternative approach involves multiplying the standardized matrix Z by its transpose, as shown in Eq. (4):

    Cov(X) = (1/n) Z^T Z    (4)

    Here Z is the standardized matrix, Z^T is the transpose of Z, and Cov(X) is the covariance matrix. The principal components are determined from the eigenvectors of the covariance matrix Cov(X), which are organized in descending order of importance according to their corresponding eigenvalues. Eigenvectors that correspond to larger eigenvalues are indicative of greater significance. In this way, the majority of the valuable information within the complete dataset is condensed into a single vector.
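The standardize-then-eigendecompose pipeline of Eqs. (1)-(4) can be sketched in NumPy as follows; this is a minimal illustration, not the paper's exact implementation, and the function name is hypothetical.

```python
import numpy as np

def pca(data: np.ndarray, k: int) -> np.ndarray:
    """Project n x d data onto its top-k principal components."""
    # Eq. (3): mean of the dataset; Eq. (1): standardization per feature.
    mean = data.mean(axis=0)
    std = data.std(axis=0)
    z = (data - mean) / std
    # Eqs. (2)/(4): covariance from the standardized matrix.
    cov = z.T @ z / len(data)
    # Eigenvectors sorted by descending eigenvalue = principal components.
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    components = eigvecs[:, order[:k]]
    return z @ components  # reduced representation of the data
```

Note that `np.linalg.eigh` returns eigenvalues in ascending order, hence the explicit descending sort before selecting the top-k components.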

    2.3 Data Augmentation

    The model under consideration employs a range of data augmentation techniques to enhance the overall quality of the training dataset. The employed techniques include random rotations, horizontal and vertical flips, random zooming, and slight translations. Through these transformations, the model is able to simulate a wide range of viewpoints and perspectives of skin lesions [31]. This ultimately improves the model's ability to handle variations in image orientation and positioning, making it more resilient. Data augmentation plays a multifaceted role in the training process [32]. Firstly, it addresses the issue of inadequate training data, as augmented samples provide a wider and more diverse selection for training the model. This methodology assists in reducing overfitting and improves the model's ability to extrapolate accurately to unfamiliar skin lesion images [33]. Furthermore, incorporating data augmentation in the training process serves as a means of regularizing the model: by introducing noise and variations, it enhances the model's ability to generalize and improves its performance. Regularization is a widely utilized technique in machine learning that addresses the potential issue of overfitting. By integrating regularization techniques, the model is discouraged from excessively memorizing specific patterns or features that are exclusively present in the training data. Instead, it is encouraged to acquire a broader understanding of skin cancer by focusing on features that are more generalizable and representative [34]. Furthermore, data augmentation methods can effectively tackle the difficulties presented by class imbalance within the dataset. To address the underrepresentation of certain classes, additional samples can be generated. This approach promotes a fairer distribution and reduces the potential bias of the model towards the dominant class. The incorporation of data augmentation is therefore of significant importance in the training phase of the proposed skin cancer classification model.

    By incorporating a diverse range of transformations and modifications into the training dataset, it becomes possible to enhance the model's capacity to generalize, thereby improving its performance on previously unseen data. Moreover, this methodology can efficiently address any discrepancies in the distribution of samples among the various classes, as emphasized in the literature [35]. The process of data augmentation is depicted in Fig. 5. Data augmentation techniques aim to enhance the accuracy and robustness of the model in the identification and classification of various forms of skin cancer. The parameters and their corresponding values for data augmentation are presented in Table 4. The augmented data is visualized in Fig. 6 below.

    Figure 5: Data augmentation process

    Figure 6: Visualization of augmented data

    Table 4: Outlines the various augmentation parameters and their corresponding values
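The augmentation operations listed above (random rotations, horizontal and vertical flips, slight translations) and the oversampling to 1000 images per class can be sketched as below; this NumPy version of the transforms is an illustrative stand-in for a framework's augmentation generator, and the shift range of two pixels is an assumption.

```python
import random
import numpy as np

def augment(image: np.ndarray, rng: random.Random) -> np.ndarray:
    """Apply a random rotation, flips, and a small translation to one image."""
    out = np.rot90(image, k=rng.randint(0, 3))        # random 90-degree rotation
    if rng.random() < 0.5:
        out = np.fliplr(out)                          # horizontal flip
    if rng.random() < 0.5:
        out = np.flipud(out)                          # vertical flip
    dx, dy = rng.randint(-2, 2), rng.randint(-2, 2)   # slight translation
    return np.roll(out, shift=(dy, dx), axis=(0, 1))

def balance_classes(images_by_class, target, rng=None):
    """Oversample each class with augmented copies until it reaches `target`."""
    rng = rng or random.Random(0)
    balanced = {}
    for label, images in images_by_class.items():
        pool = list(images)
        while len(pool) < target:
            pool.append(augment(rng.choice(images), rng))
        balanced[label] = pool[:target]
    return balanced
```

Running `balance_classes` with `target=1000` over the nine original classes would yield the balanced 9000-image collection described in the text.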

    2.4 Segmentation

    After uploading the captured images of skin lesions, we applied a segmentation technique to eliminate the skin, hair, and other unwanted elements. To achieve this, we utilized the range-oriented by pixel resolution (RO-PR) algorithm on the skin images. The RO-PR technique delineates the clustering region of the pixels in the skin lesion image [36]. The Total Value of Pixel (TVP) evaluates the specified threshold value by taking into account a distinct cluster of skin images [37]. The thresholding level serves as the pixel value for the skin lesion (PVL) image, or as the starting value for the cluster. The resolution of all adjacent pixels is compared, and the cumulative resolution of the image is analyzed to form the cluster. The assessment concludes when the dataset value of TVP is equivalent to that of PVL; the clustering of pixel regions (PR) is then considered the same, and the total cluster data is augmented by evaluating its ROIs. Each subsequent neighboring cluster is determined in the same way.
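The clustering step reads like a threshold-driven region-growing pass, and a minimal sketch under that interpretation is given below. The RO-PR algorithm itself is not fully specified in the text, so the seed choice, the 4-connectivity, and the tolerance parameter here are all assumptions.

```python
from collections import deque
import numpy as np

def grow_region(gray: np.ndarray, seed, tol=10):
    """Grow a cluster of pixels around `seed` whose values stay within `tol`
    of the seed value (the thresholding level in the text)."""
    h, w = gray.shape
    level = float(gray[seed])            # starting value for the cluster
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        i, j = queue.popleft()
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < h and 0 <= nj < w and not mask[ni, nj]:
                # Compare each adjacent pixel against the threshold level.
                if abs(float(gray[ni, nj]) - level) <= tol:
                    mask[ni, nj] = True
                    queue.append((ni, nj))
    return mask
```

Repeating the pass from new seeds outside previously grown masks would produce the "subsequent neighboring clusters" mentioned above.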

    2.5 Software Configuration

    The study employed the deep learning framework TensorFlow version 1.4 in conjunction with version 2.0.8 of the Keras API, implemented in Python, on a workstation equipped with an Intel i5 processor, 16 GB of RAM, and 4 GB of VRAM on NVIDIA GPUs. The GPUs were equipped with CUDA (Compute Unified Device Architecture) and cuDNN (the CUDA Deep Neural Network library), which were installed and configured to enable GPU acceleration. The starting settings are as follows: the batch size is 10, the learning rate is set to 0.001, and the number of epochs is 50. The initial activation function is the Rectified Linear Unit (ReLU), while the final activation function is the sigmoid. The optimizer used for training is either Adam or Stochastic Gradient Descent (SGD). Average pooling is applied, and the performance of the model is evaluated using accuracy metrics.

    2.6 Feature Extraction

    Following the segmentation of the skin lesion, ROIs were obtained, and the image properties of these skin cancer moles were extracted in order to analyze their texture. There are various manual, automated, and semi-automated approaches to generating ROIs. Automated schemes rely significantly on image enhancement, while semi-automated and manual approaches rely on expert judgment. To compile the skin cancer dataset, ten non-overlapping ROIs measuring 512 × 512 were applied to the images of each category, yielding a total of 1000 (100 × 10) ROI images per category. The ROI samples are illustrated in Fig. 4. We obtained a dataset containing 9000 (1000 × 9) ROI images for the nine different types of skin cancer. The acquisition of features is a critical step in classifying datasets using machine learning, as it provides the essential information required for texture analysis. Twenty-eight binary features, five histogram features, seven RST features, ten texture features, and seven spectral features were selected for texture analysis in the study. A grand total of 57 features were thus extracted from each generated ROI, giving 513,000 features in the feature vector space (FVS) (9000 ROIs × 57 features).

    3 Classification Model

    The skin cancer classification model being proposed employs a hybrid architecture that integrates Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) with a ResNet-50 backbone. The mathematical representation of this hybrid approach is given in the equations below. Let X represent the input image dataset, where X is defined as the collection of individual images {x1, x2, ..., xn}. Here, n denotes the total count of images present in the dataset. The architectural diagram of the Convolutional Neural Network (CNN) model is depicted in Fig. 7.

    Figure 7: Architecture diagram of the convolutional neural network(CNN)model

    The proposed Convolutional Neural Network (CNN) model architecture for skin cancer classification consists of three main layers: two convolutional-pooling layers followed by a fully-connected classification layer. Each layer performs specific operations to extract meaningful features and classify the input images [38]. The first layer in the architecture is a convolutional-pooling layer. It utilizes an 8×8 fixed convolutional kernel to convolve over the input images. This convolutional layer aims to capture local spatial patterns and extract relevant features. The resulting feature maps are then subjected to a 2×2 pooling operation, which reduces the spatial dimensions while preserving important information [39]. The second layer is also a convolutional-pooling layer. It uses the same 8×8 convolutional kernels as the previous layer but with a higher number of neurons, specifically 64. This allows for more complex feature extraction and representation. As in the first layer, a 2×2 pooling operation follows the convolution step to further downsample the feature maps. Finally, the last layer of the architecture is a fully-connected classification layer. It consists of 256 neurons, which are connected to the final two neurons responsible for mitosis/non-mitosis classification [40]. This layer combines the extracted features from the previous layers and performs the classification task, distinguishing between the two target classes [41].
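The three-layer CNN module described above could be sketched in Keras as follows; the input resolution and the filter count of the first convolutional layer are not stated in the text, so the 64×64 single-channel input and the 32 filters used here are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_module(input_shape=(64, 64, 1)):
    """Two 8x8 conv + 2x2 pool layers, then a 256-unit classification head."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        # First convolutional-pooling layer: 8x8 kernels capture local patterns.
        layers.Conv2D(32, kernel_size=8, padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=2),
        # Second layer: same 8x8 kernels but 64 neurons for richer features.
        layers.Conv2D(64, kernel_size=8, padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=2),
        layers.Flatten(),
        # Fully-connected layer feeding the two-way mitosis/non-mitosis output.
        layers.Dense(256, activation="relu"),
        layers.Dense(2, activation="softmax"),
    ])
```

Each 2×2 pooling halves the spatial dimensions, so the 256-unit dense layer receives a flattened 16×16×64 feature map under the assumed input size.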

    3.1 Convolutional Neural Network(CNN)Feature Extraction

    In the Convolutional Neural Network (CNN) component, f_CNN denotes the CNN model that extracts features from the input images using the ResNet-50 architecture, and I denotes the input images. This can be expressed as Eqs. (5) and (5.1) shown below:

    F = f_CNN(I)    (5)

    F_i = f_CNN(x_i),  i = 1, 2, ..., n    (5.1)

    3.2 Recurrent Neural Network(RNN)Temporal Modeling

    In the Recurrent Neural Network (RNN) component, f_RNN denotes the RNN that captures temporal dependencies, and F denotes the features extracted by the CNN component. This can be formulated as Eqs. (6) and (6.1) below:

    Z_RNN = f_RNN(F)    (6)

    Z_i = f_RNN(F_1, F_2, ..., F_i),  i = 1, 2, ..., n    (6.1)

    3.3 Classification Layer

    The output Z_RNN of the Recurrent Neural Network (RNN) component, capturing temporal dependencies, is passed through a classification layer to obtain the final predicted probabilities for each class. Let ŷ = {ŷ1, ŷ2, ..., ŷn} be the predicted probabilities for the n images. The classification layer, denoted by the symbol g, encompasses various activation functions such as the softmax function or other suitable alternatives. This can be represented as Eq. (7):

    ŷ = g(Z_RNN)    (7)

    The model was trained using the International Skin Imaging Collaboration (ISIC) nine-class image dataset of skin cancer. The dataset includes images from the classes Actinic Keratosis (AK), Basal Cell Carcinoma (BC), Dermatofibroma (DF), Melanoma (MM), Nevus (NN), Pigmented Benign Keratosis (PK), Seborrheic Keratosis (SK), Squamous Cell Carcinoma (SC), and Vascular Lesion (VL) [43]. The model was trained to acquire discriminative features and temporal patterns unique to each class. The integration of CNNs and RNNs in a hybrid architecture enables the model to exploit the spatial characteristics acquired by the CNN and the temporal relationships captured by the RNN. Through the integration of these two components, the model proficiently extracts features and effectively models the sequential information inherent in skin cancer images [44]. By training on the ISIC nine-class dataset, the model acquires the ability to accurately classify images of skin cancer, taking into account the distinct characteristics and variations present in each class. This allows the model to generate accurate predictions regarding the presence of various forms of skin cancer from the input images. In summary, the proposed model integrates a hybrid Convolutional and Recurrent Neural Network (CNN-RNN) architecture with a ResNet-50 backbone to effectively extract features and model temporal dependencies in the ISIC nine-class image dataset, enabling accurate classification and identification of various types of skin cancer. Fig. 8 below shows the CNN-RNN classification performance graph for the skin cancer types.

    Figure 8: Convolutional and recurrent neural network (CNN-RNN) classification performance graph for the skin cancer types
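The spatial-to-sequential hand-off described above can be illustrated with a simple shape trace, in which the ResNet-50 feature grid is flattened into a sequence of timesteps for the LSTM. The input size (224×224) and LSTM width (256 units) are illustrative assumptions, not the paper's exact configuration:

```python
# Shape trace of the hybrid CNN-RNN pipeline: ResNet-50 backbone ->
# flatten spatial grid into a sequence -> LSTM -> 9-way classifier.
# All concrete sizes here are assumptions for illustration.

def resnet50_feature_shape(h, w):
    """ResNet-50 downsamples by a factor of 32 and emits 2048 channels."""
    return (h // 32, w // 32, 2048)

def to_sequence(feat_shape):
    """Flatten the H x W feature grid into H*W timesteps of C-dim vectors."""
    fh, fw, c = feat_shape
    return (fh * fw, c)

def lstm_output_shape(seq_shape, units=256):
    """An LSTM that returns only its final hidden state."""
    return (units,)

def classifier_shape(hidden_shape, num_classes=9):
    """Dense + softmax over the nine ISIC classes."""
    return (num_classes,)

feat = resnet50_feature_shape(224, 224)   # (7, 7, 2048)
seq = to_sequence(feat)                   # (49, 2048)
hidden = lstm_output_shape(seq)           # (256,)
logits = classifier_shape(hidden)         # (9,)
print(feat, seq, hidden, logits)
```

The trace makes the architectural division of labor concrete: the CNN compresses each image into a grid of spatial descriptors, and the LSTM consumes that grid as an ordered sequence.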

    3.4 Performance Assessment

    The confusion matrix allows us to observe true positives, false positives, and false negatives directly. To investigate the model's performance further, we carried out a comprehensive analysis using precision, recall, and F1-score metrics. These measures quantify the model's capacity to detect positive cases accurately, minimize false positives, and maintain a balance between precision and recall [45]. The model consistently achieved high precision, recall, and F1-score values across all classes, indicating its reliability and applicability to various forms of skin cancer. The equations below derive and express these additional metrics.

    where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively.
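The standard definitions of these metrics can be computed directly from the four confusion-matrix counts; the counts used below are hypothetical, chosen only to exercise the formulas:

```python
# Standard classification metrics from confusion-matrix counts.

def precision(tp, fp):
    """Fraction of predicted positives that are truly positive."""
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp, fn):
    """Fraction of actual positives that were detected."""
    return tp / (tp + fn) if tp + fn else 0.0

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r) if p + r else 0.0

def accuracy(tp, tn, fp, fn):
    """Fraction of all samples classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical counts for one class: 90 TP, 880 TN, 10 FP, 20 FN.
print(precision(90, 10))        # 0.9
print(accuracy(90, 880, 10, 20))  # 0.97
```

In the multi-class setting used here, these quantities are computed per class from the confusion matrix and then averaged.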

    3.5 Training

    The time complexity of training the model depends on several factors: the number of training samples (N), the number of epochs (E), the batch size (B), and the complexity of the model architecture. Accounting for the forward and backward passes over each training sample, the overall training time complexity can be approximated as O(N × E × (H × W × C × F × K^2)) [46]. Fig. 9 below shows the data distribution for training and testing.

    Figure 9: Data distribution for training and testing

    Upon commencing the training process, the validation accuracy reaches 90% after five epochs. Once the iterations are complete, the efficiency and error of each generated model are calculated; the overall efficiency and final error are obtained by averaging over the 10 trained models. The cross-validation procedure is depicted in Fig. 10.

    Figure 10: Validation procedure
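The averaging over 10 trained models described above follows the usual k-fold pattern: each fold is held out once for validation while the rest trains a model, and the fold scores are averaged. A minimal sketch, with `train_and_score` as a stand-in for the real CNN-RNN training loop:

```python
import random
import statistics

def kfold_indices(n, k=10, seed=0):
    """Shuffle n sample indices and split them into k near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def train_and_score(train_idx, val_idx):
    # Placeholder: a real implementation would fit the CNN-RNN on
    # train_idx and report validation accuracy on val_idx.
    return 0.90

def cross_validate(n_samples, k=10):
    """Train k models, each holding out one fold; average their accuracy."""
    folds = kfold_indices(n_samples, k)
    scores = []
    for val_fold in folds:
        train_idx = [j for f in folds if f is not val_fold for j in f]
        scores.append(train_and_score(train_idx, val_fold))
    return statistics.mean(scores)

print(cross_validate(9000))  # averages the 10 per-fold scores
```

With the 9000-image dataset and k = 10, each fold holds 900 images, so every sample is used for validation exactly once.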

    3.6 Evaluation

    The time complexity of evaluating the model on the test dataset is similar to that of feature extraction, as it involves passing the test images through the model's layers, where H, W, C, F, and K denote the height, width, number of channels, number of filters, and convolutional kernel size, respectively [47]. The evaluation time complexity can be approximated by the upper bound O(H × W × C × F × K^2) per test sample. Fig. 11 below shows the accuracy, area under the curve, sensitivity, specificity, precision, and F1-score comparison for the model.

    Figure 11: Accuracy, area under the curve, sensitivity, specificity, precision and F1-score

    3.7 Prediction

    The time complexity of making predictions on new, unseen images with the trained model is similar to that of evaluation, as it involves a forward pass through the model's layers [48]. It can be approximated as O(H × W × C × F × K^2) per prediction, where H is the height, W the width, C the number of channels, F the number of filters, and K^2 the square of the convolutional kernel size. Note that this analysis is an approximation that assumes sequential execution of the operations; actual runtime also depends on hardware acceleration, parallel processing, and the optimization techniques used during implementation. Fig. 12 below shows the recognition accuracy and loss curves of the proposed model.
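The per-layer cost formula above is just a product of the five factors; the concrete numbers below (a 224×224 RGB input into 64 filters of size 7×7, roughly ResNet-50's stem) are illustrative assumptions:

```python
def conv_layer_ops(h, w, c, f, k):
    """Multiply-accumulate count for one convolutional layer,
    following the O(H * W * C * F * K^2) approximation in the text."""
    return h * w * c * f * k ** 2

# Illustrative: 224x224 spatial extent, 3 input channels,
# 64 output filters, 7x7 kernel.
ops = conv_layer_ops(224, 224, 3, 64, 7)
print(ops)  # 472055808, i.e. ~0.47 billion MACs for this layer alone
```

Summing this quantity over all layers gives the sequential-execution estimate; strides and pooling shrink H and W at deeper layers, which the single-layer formula does not capture.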

    Figure 12: Recognition accuracy and loss curves of proposed model

    4 Design and Analysis

    Because there are more benign than malignant pictures, the collection is imbalanced. We therefore used augmentation to oversample the malignant class: when the model is given these images, they are rotated, resized, flipped horizontally, and varied across a range of magnification and shear. Augmenting the images in this way helps prevent overfitting. The model was trained with several Convolutional Neural Network topologies. Although VGGNet V2 and GoogleNet V2 were used to train the model, the results fell short of expectations, so a custom layered model was adopted instead. Dropout is added between layers to curb overfitting, and as the training image size shrinks, more neurons or filters are added to the 2D convolutional layers to achieve the desired result. This lowers the loss, and as the loss decreases, model accuracy rises, making it easier to tell whether a skin lesion is benign or malignant. After the desired results were obtained and the model saved, FastAPI endpoints deliver the result to the web and mobile applications: once the user uploads a picture of the lesion, the program predicts whether the user will be diagnosed with skin cancer or not. Fig. 13 below shows the confusion matrix, and Table 5 shows the performance comparison of the proposed methods on the ISIC dataset.

    Figure 13: Confusion matrix of proposed model

    Table 5: Performance comparison of proposed methods on ISIC dataset
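The oversampling transforms described above (rotation, flips, and so on) can be sketched with plain Python nested lists standing in for image arrays; this mirrors the augmentation pipeline only schematically, and the real pipeline also applies resizing, zoom, and shear:

```python
# Minimal geometric augmentations on a list-of-lists "image".

def hflip(img):
    """Horizontal flip: reverse each row."""
    return [row[::-1] for row in img]

def vflip(img):
    """Vertical flip: reverse the row order."""
    return img[::-1]

def rot90(img):
    """Rotate 90 degrees counter-clockwise (transpose, then reverse rows)."""
    return [list(row) for row in zip(*img)][::-1]

def augment(img):
    """Yield the variants used to oversample the minority (malignant) class."""
    return [img, hflip(img), vflip(img), rot90(img)]

sample = [[1, 2],
          [3, 4]]
for variant in augment(sample):
    print(variant)
```

Each malignant image thus contributes several distinct training views, which both rebalances the classes and discourages the network from memorizing pixel layouts.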

    4.1 Limitation of Proposed Methodology

    Firstly, the effectiveness of the model is contingent upon the quality and representativeness of the dataset used for training. If the dataset has inherent biases or lacks diversity in terms of skin types, ethnicities, or geographic regions, the model's generalizability may be compromised. Future research should focus on curating more diverse and inclusive datasets to enhance the model's applicability across a broader population [54]. Secondly, while the model demonstrates high accuracy, challenges of interpretability persist. Deep learning models, especially those with intricate architectures like the Convolutional and Recurrent Neural Network (CNN-RNN), can be perceived as "black boxes" whose decision-making process is difficult to understand. Enhancing the interpretability of the model's predictions is an avenue for future research, ensuring that healthcare professionals can trust and understand the model's outputs in clinical practice. Moreover, the proposed model's performance may be influenced by variations in image quality and resolution, factors commonly encountered in real-world clinical settings. Ensuring robustness to such variations is critical for the model's practical utility, and further research could explore techniques to enhance the model's resilience to variations in input data quality. While our Convolutional and Recurrent Neural Network (CNN-RNN) model shows promise in skin cancer classification, it is essential to recognize these limitations. Addressing these challenges in future research will not only refine the proposed model but also contribute to the broader goal of developing robust and reliable deep learning models for clinical applications in dermatology.

    5 Conclusions

    The employed model used image processing techniques to enhance the performance of our existing models. As previously stated, random image cropping and rotations were first employed to augment the dataset with additional information; the images can also be zoomed in or out and flipped horizontally or vertically. During retraining of the final layers, we applied transfer learning by leveraging pre-trained weights from the ImageNet dataset and then fine-tuned the model, retraining the entire network with a reduced learning rate. Knowledge from this model was also transferred to the other models; without transfer learning, the outcomes were consistently inferior, particularly in the deeper layers. Furthermore, we modified the color scheme by switching from the RGB model to the HSV model and also experimented with grayscale, but these adjustments failed to attain the intended outcomes; results deteriorated, particularly for grayscale images. The final results are reported as the weighted average accuracy of the classifications.

    The model was trained on RGB images, employing data augmentation, transfer learning, fine-tuning, and the SGD optimizer. During training, appropriate class weights were applied to give every class equitable treatment, given the substantial imbalance in the data. To address image-quality issues, each individual image was incorporated sequentially. During the validation phase, the algorithm is executed multiple times and a diagnosis is determined from the median of its results: each validation image was inserted four times, each iteration with a different flipping operation, and the median was then used to assign the image to one of four distinct categories. This approach was applied to models randomly sampled from the training epochs of the ResNet50 model over the course of the last 50 iterations. The findings indicated an average enhancement of 1.04% in comparison to the initial image. Given its emphasis on quality and its primary role as an initial diagnostic indicator, the application is primarily focused on ensuring high standards and serving as an initial diagnostic clue. As Table 6 shows, the model has opted for a two-class mapping approach using the ResNet50 model to distinguish between Malignant cases (mel, bcc, or akiec) and Benign cases (nv, pbk.d, bkl, vasc, cc, or df).

    Table 6: Results comparison between different models
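The validation-time flipping scheme described above is a form of test-time augmentation: the model is run on several flipped copies of each image and the per-class median of the predicted probabilities is taken. A minimal sketch, where `predict` stands in for the trained model (the fixed outputs below are hypothetical):

```python
import statistics

def tta_predict(image, predict, transforms):
    """Apply each transform to the image, run the model on every copy,
    and take the per-class median of the probability vectors."""
    preds = [predict(t(image)) for t in transforms]
    return [statistics.median(col) for col in zip(*preds)]

# Stand-in model: returns a pre-set 2-class probability vector per call.
fake_outputs = iter([[0.70, 0.30], [0.80, 0.20], [0.60, 0.40], [0.75, 0.25]])
predict = lambda img: next(fake_outputs)
identity = lambda x: x  # real transforms would be the four flips

result = tta_predict([[0]], predict, [identity] * 4)
print(result)  # per-class medians of the four runs
```

Using the median rather than the mean makes the aggregated diagnosis robust to a single outlier prediction among the flipped copies.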

    5.1 Suggestions for Future Research

    1. Investigate the incorporation of multi-modal data, such as dermoscopy and patient history information, to enhance the model's diagnostic capabilities. Combining imaging data with other patient-specific features could provide a more holistic understanding of skin conditions.

    2. Explore techniques for enhancing model interpretability, particularly in the context of medical diagnostics [55]. Integrate explainable AI methods to provide insights into the decision-making process of the model, fostering trust and understanding among healthcare practitioners.

    3. Assess the feasibility of transfer learning by training the model on a broader dataset encompassing various dermatological conditions. This could potentially enhance the model's ability to differentiate between different skin disorders and expand its utility in clinical practice.

    4. Investigate strategies for real-time deployment of the model in clinical settings. Develop lightweight architectures or model compression techniques to ensure efficient execution on edge devices, facilitating point-of-care diagnostics.

    Acknowledgement: We wish to extend our heartfelt thanks and genuine appreciation to our supervisor, Guangmin Sun (Beijing University of Technology); his support, insightful comments, valuable remarks, and active involvement throughout the writing of this manuscript have been truly invaluable. We are also most grateful to the anonymous reviewers and editors for their careful work, which substantially improved this paper, and we thank our teachers for providing the research conditions.

    Funding Statement:This research was conducted without external funding support.

    Author Contributions:S.G developed the method;S.S.Z,S.F.Q and M.K actively engaged in conducting the experiments and analyzing the data;S.S.Z resulted in the composition of the manuscript;S.Q made substantial contributions to the analysis and discussion of the algorithm and experimental findings.All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials:The JPG File(.jpg)data used to support the findings of this study have been deposited in the ISIC Archive repository(Base URL:https://www.isic-archive.com/).

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
