
    Fake News Classification: Past, Current, and Future

    Computers, Materials & Continua, 2023, Issue 11

    Muhammad Usman Ghani Khan, Abid Mehmood, Mourad Elhadef and Shehzad Ashraf Chaudhry3,★

    1 Department of Computer Science, University of Engineering and Technology, Lahore, 54890, Pakistan

    2 Department of Computer Science & Information Technology, Abu Dhabi University, Abu Dhabi, 59911, United Arab Emirates

    3 Department of Software Engineering, Faculty of Engineering and Architecture, Nisantasi University, Istanbul, Turkey

    ABSTRACT The proliferation of misleading content such as fake news and fake reviews on news blogs, online publications, and e-commerce apps has been aided by the availability of the web, cell phones, and social media. Individuals can quickly fabricate comments and news on social media. The most difficult challenge is determining whether news is real or fake. Accordingly, finding automated techniques to recognize fake news online is imperative. With an emphasis on false news, this study presents the evolution of artificial intelligence techniques for detecting spurious social media content. This study shows past, current, and possible future methods for fake news classification. Two different publicly available datasets containing political news are utilized for performing experiments. Sixteen supervised learning algorithms are used, and their results show that conventional Machine Learning (ML) algorithms that were used in the past perform better on shorter text classification. In contrast, the currently used Recurrent Neural Network (RNN) and transformer-based algorithms perform better on longer text. Additionally, a brief comparison of all these techniques is provided, and it is concluded that transformers have the potential to revolutionize Natural Language Processing (NLP) methods in the near future.

    KEYWORDS Supervised learning algorithms; fake news classification; online disinformation; transformers; recurrent neural network (RNN)

    1 Introduction

    Recent internet advancements have had a considerable impact on social communication and interaction. Social media platforms are used more and more frequently to obtain information, and people express themselves through a variety of social media sites. Speedy access to information, low cost, and quick information transmission are just a few of social media's many advantages. These advantages have led many people to choose social media over more conventional news sources, such as television or newspapers, as their preferred method of news consumption; social media is therefore quickly replacing traditional news sources. However, social media's nature can be exploited to accomplish different goals [1]. One of the reasons that social networks are favored for news access is that they allow for easy commenting and sharing of material with other social media users. The large volume of internet news data necessitates the development of automated analysis technologies.

    Moreover, during the recent coronavirus lockdowns, the spread of fake news on social networking sites increased sharply, causing harm worldwide. Fig. 1 shows some of the fake news stories circulated on social media during the lockdown (see, e.g., https://timesofindia.indiatimes.com/times-fact-check/news/fake-alert-no-russia-does-not-have-lions-roaming-the-streets-to-keep-people-indoors/articleshow/74768135.cms): emissions from Chinese crematoriums could be visible from space; 500 lions were released into the streets of Russia to keep people indoors; in London, doctors are being mugged; the condition can be cured with snake oil or vitamins; inhaling a hairdryer's heated air, or gargling with warmed garlic water, cures the infection.

    Figure 1: Examples of fake news spread on social media

    False information harms people, society, corporations, and governments. The spread of fake news, particularly low-quality news, negatively affects personal and societal beliefs. Spammers or malicious users may distribute false and misleading information that can be very harmful. As a result, identifying fake news has become an essential area of study. Manually identifying and removing fake news or fraudulent reviews from social media takes considerable effort, money, and time. According to certain prior studies, people perform worse than automated systems when it comes to distinguishing real news from fake news [2].

    ML technologies have focused on automatically distinguishing between fake and authentic news for the last few years. Following the 2016 presidential election in the United States, several important social media platforms, including Twitter, Facebook, and Google, focused on developing ML- and NLP-based methods to identify and prevent fake news. The extraordinary progress of supervised ML models cleared the path for developing expert systems to detect fake news in English, Portuguese, and other languages [2]. Different ML models can produce different results on the same classification problem, which is a serious issue [3]. Their performance can be affected by corpus features such as the size of the corpus and the distribution of instances into classes [3]. The performance of K-Nearest Neighbor (KNN), for example, is determined by the value of k. Similarly, when handling optimization issues, the Support Vector Machine (SVM) can experience numerical instability [4].

    Various ML algorithms have been utilized in the past to classify fake news. These algorithms are compared against state-of-the-art techniques such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), which are currently in use. Transformer models are also included in the experiments, as they are expected to be employed in future fake news classification tasks. This approach enables the evaluation of past techniques, an understanding of current trends in fake news classification, and a glimpse into potential future developments in the field. A two-phase detection algorithm is proposed in this study to detect fake and bogus news on social networking sites. The proposed model is a hybrid of ML algorithms and NLP techniques. In the first phase, text mining methods are applied to the internet news dataset; text analysis tools and procedures extract structured information from raw news data. In the second phase, supervised learning algorithms (BayesNet, Logistic Model Tree (LMT), Stochastic Gradient Descent (SGD), decision stump, linear SVM, kernel SVM, Logistic Regression, Decision Tree, and Gaussian Discriminant Analysis) are applied to the two publicly available Random and Buzzfeed Political News datasets [5]. Individual studies employing only a few of these algorithms have been published in the literature.

    Furthermore, previous studies are primarily implemented on a single dataset. In contrast to previous papers, this work treats the detection of fake and fraudulent news as a classification problem and applies a wide range of supervised learning algorithms to two publicly available datasets comprising both titles and bodies. The contributions of this research paper are:

    • We compared the performance of sixteen supervised learning algorithms.

    • We provide a pipeline for the utilization of transformers on two different datasets.

    • We analyzed and presented the past, current, and future trends of NLP techniques.

    The following is a breakdown of the paper's structure. The related work is briefly described in Section 2. Details of some of the ML and Deep Learning (DL) algorithms are described in Section 3. Section 4 details the methodology and how text preprocessing techniques are applied before utilizing artificial intelligence methods. Section 5 covers the datasets and the experimental evaluations produced by sixteen supervised artificial intelligence algorithms on two different datasets, along with the results and discussion. In Section 6, conclusions and future research directions are examined.

    2 Related Works

    In recent years, detecting rumors and fake news, evaluating web content trustworthiness, and developing fact-checking systems have all been hot subjects. Data preprocessing can be utilized for the estimation and retrieval of various text forms. This includes preprocessing the text using NLP, for example, stemming or lemmatization, normalization, and tokenization, followed by Term Frequency-Inverse Document Frequency (TF-IDF) [6] for turning words into vectors. Honnibal et al. [7] utilized spaCy for converting words into vectors. Similarly, Mikolov et al. [8] and Pennington et al. [9] used word2vec and GloVe for word embeddings.

    Even though the fake news identification problem is relatively new, it has drawn much attention. Different researchers have proposed different methodologies to identify fake news in many data types. Reference [10] divided the difficulty of detecting fake news into three categories, i.e., serious fabrication, humorous fake news, and large-scale deception. In [11], Conroy et al. utilized a hybrid technique and proposed a novel detector for fake news. Their proposed methodology [11] incorporates different linguistic cueing and network analysis techniques. In addition, they used the vector space model to verify news [12]. In the methodology of [13], TF-IDF and SVM were used to categorize news into different groups. In [14], humorous cues were employed to detect false or deceptive news; the authors proposed an SVM-based model and used 360 news articles to evaluate it. To verify stories, reference [15] mined different perspectives on social media and then tested their model against real data sets. Reference [16] employed ML classifiers such as Decision Tree, K-Nearest Neighbor, Naive Bayes, SVM, and Logistic Regression to classify fake news from online media sources. An ensemble classification model is suggested in [17] for identifying fake news, which outperformed the state of the art in terms of accuracy. The recommended approach identified vital characteristics from the datasets. The retrieved characteristics were then categorized using an ensemble of three well-known ML models: Decision Tree, Random Forest, and Extra Tree Classifier.

    Two classification models were presented in [18] to address the problem of identifying fake news: one a Boolean crowd-sourcing approach, the other a Logistic Regression model. Aside from this, the preprocessing methods for the false news detection problem and the creation of assessment measures for datasets have been thoroughly documented [19]. Reference [20] employed ML classification techniques and n-gram analysis for classifying spam and fake news; the authors assessed their methods on publicly accessible datasets throughout. Gradient boosting, SGD, Random Forests, SVM, and bounded Decision Trees were used as classification methods [21]. Reference [22] developed CSI, an algorithm that combines different characteristics for classifying fake news. Three attributes, capture, score, and integrate, were merged in their strategy for a more accurate prediction. Reference [23] introduced a tri-relationship false news detection model that considers news stance, publisher bias, and user interactions. They evaluated their approach using public datasets for detecting fake news. To classify fake news, the authors of [24] suggested a novel hybrid DL model that integrates CNN and RNN. The algorithm was evaluated on two false news datasets, and detection performance was notably superior to previous non-hybrid baseline techniques.

    Reference [25] developed a novel hybrid algorithm based on attention-based LSTM networks for the fake news identification challenge; the method's performance was evaluated on benchmark false news detection datasets. In early 2017, reference [26] investigated the current state of fake news, provided a remedy for fake news, and described two opposing approaches. Janze et al. [27] developed a detection technique for spotting fake news and evaluated their models on Facebook news during the 2016 presidential election in the United States. Reference [28] developed another automated algorithm, providing a categorization model based on semantic, syntactic, and lexical information. Reference [29] offered an automated technique for detecting false news in popular Twitter discussions, tested on three existing datasets. Reference [30] researched the statistical features of misinformation, fraud, and unverified assertions in online social networks.

    Reference [31] developed a competitive model to mitigate the impact of misleading information, focusing mainly on the interaction between original erroneous and updated information. Reference [32] developed a new algorithm for detecting fake news that considers consumer trust. Reference [33] solved the problem by using crowd signals; the authors presented a novel detection method that uses Bayesian inference and learns the accuracy of users' flagging over time. Reference [34] suggested a content-based false news detection approach, developing a semi-supervised method for detecting fake news. Reference [35] looked at the different types of social networks and advocated using them to identify and counteract false news on social media. Reference [36] created a model that can identify the truthfulness of Arabic news or claims using a Deep Neural Network (DNN) approach, specifically Convolutional Neural Networks (CNN). The aim was to tackle the fact-checking problem: determining whether a news text claim is authentic or fake. The model achieved an accuracy of 91%, surpassing the performance of previous methods on the same Arabic dataset. Reference [37] discussed the use of Deep Learning (DL) models to detect fake news written in Slovak, using a dataset collected from various local online news sources associated with the COVID-19 epidemic. The DL models were trained and evaluated using this dataset; a bidirectional LSTM network combined with one-dimensional convolutional layers achieved an average macro F1-score of 94% on an independent test set.

    For accurately identifying misleading information using text sentiment analysis, [38] presented "emoratio", a sentiment scoring algorithm that employs the psychological and linguistic capabilities of the Linguistic Inquiry and Word Count (LIWC) tool. Reference [39] proposed a thorough comparative examination of different DL algorithms, including ensemble methods, CNN, LSTM, and attention mechanisms, for fake news identification. A CNN ensembled with bidirectional LSTM using the attention mechanism was found to have the highest accuracy, at 88.75%. Another tricky aspect of false news classification is the circulation of intentionally generated fake photographs and altered images on social media platforms. Reference [40] examined a dataset of 36,302 image replies using both traditional and deep image forgery techniques to distinguish fraudulent pictures produced by image-to-image translation based on the Generative Adversarial Network (GAN) model, a DNN for identifying fake news in its early stages. Work on veracity classification using temporal and attack features [41,42] and on style analysis of hyperpartisan news [43] is also worth noting as pioneering research on credibility analysis in social networks.

    "SpotFake" [44], a multimodal supervised framework based on the Bidirectional Encoder Representations from Transformers (BERT) and VGG19 models, classifies genuine and fictitious articles by utilizing the capacities of encoders and decoders. Moreover, reference [45] used adversarial training to classify news articles. The purpose of [46] was to develop a model for identifying fake news using a content-based classification approach focusing on news titles. The model combined BERT with an LSTM layer. The proposed model was compared to other base classification models, and a standard BERT model was also trained on the same dataset to measure the impact of the LSTM layer; results indicated that the proposed model slightly improved accuracy on the datasets compared to the standard BERT model. References [47,48] utilized linear discriminant analysis and KNN for the detection of fake news, even in a vehicular network. A summary of the related work is shown in Table 1.

    Several datasets have been used for fake news detection research. Datasets used in the past include the LIAR, FNC-1, and FakeNewsNet datasets; the GossipCop, PolitiFact, and Fake News Challenge (FNC) datasets are widely used in the current era. It is important to note that many datasets are created to serve a specific research problem; they might therefore not generalize to other scenarios and might differ in size, type of data, quality, and time coverage. These factors must be considered when selecting the dataset for a specific task.

    This work uses transformers, RNNs, and conventional ML algorithms to classify fake news and provides an in-depth comparison of all these models. Results show that ML algorithms perform better than complex DL-based models on shorter text, while for longer text, transformers outperform the other algorithms.

    3 Machine Learning and Deep Learning

    This section briefly describes the algorithms used in this study's experiments. It is further divided into ML and DL methods.

    3.1 Supervised ML Algorithms

    3.1.1 Linear SVM

    SVM, one of the most well-known supervised learning methods, is used to tackle classification and regression problems. "Linearly separable data" refers to data that can be split into two groups by a single straight line. Linear SVM is used to classify such data, and the classifier employed is referred to as a linear SVM classifier.

    3.1.2 Kernel SVM

    When the collection of samples cannot be divided linearly, SVM can be extended to address nonlinear classification challenges. The data are mapped onto a high-dimensional feature space by applying kernel functions, where linear separation becomes possible.
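The contrast between the linear and kernel variants can be illustrated with a small sketch; scikit-learn is used here purely for illustration, as the paper does not specify its implementation. Concentric circles are a classic dataset that no straight line can separate, while an RBF kernel handles it easily:

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric circles: not linearly separable in the original 2-D space.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear_clf = SVC(kernel="linear").fit(X, y)
rbf_clf = SVC(kernel="rbf").fit(X, y)  # kernel maps points into a higher-dimensional space

print("linear:", linear_clf.score(X, y))  # struggles on non-linear data
print("rbf:   ", rbf_clf.score(X, y))     # near-perfect separation
```

The RBF classifier scores close to 1.0 on this data, while the linear classifier stays near chance level.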

    3.1.3 Logistic Regression

    In contrast to Linear Regression, Logistic Regression is used as a classification technique. Logistic Regression predicts the outcome by utilizing the values of different independent variables. It is undoubtedly one of the most utilized ML techniques. Rather than giving a continuous value, it provides the result as a binary outcome, i.e., valid or invalid, fake or real, yes or no, etc. Its probabilistic output ranges between 0 and 1.
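The probabilistic output described above comes from the logistic (sigmoid) function, which squashes a weighted sum of the independent variables into the range (0, 1). A minimal sketch:

```python
import numpy as np

def sigmoid(z):
    # Maps any real-valued score z = w·x + b to a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))         # 0.5 — the decision boundary
print(sigmoid(4.0) > 0.5)   # True: a large positive score maps to the positive class
```

A threshold of 0.5 on this probability yields the binary fake/real decision.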

    3.1.4 Naive Bayes

    Naive Bayes is a supervised ML algorithm based on Bayes' theorem for classification tasks. This classifier posits that features in a class are independent of each other. This type of classifier is relatively easy to construct and is especially good for massive datasets. Due to its simplicity, Naive Bayes can compete with far more advanced classification systems.

    3.1.5 Decision Tree

    The Decision Tree is a supervised, rule-based algorithm. It is a non-parametric method used for classification and regression. In a Decision Tree, every node applies a rule and passes its output to another node, where further rule-based testing is applied.

    3.1.6 Random Forest

    Random Forest is a supervised algorithm that combines Decision Trees trained on different samples and produces its result by averaging the outputs of the individual trees. It is a flexible algorithm that can produce good classification results even without tuning.

    3.1.7 Gaussian Discriminant Analysis-Linear

    ML algorithms that directly predict the class from the training set are known as discriminant algorithms. One of the discriminative algorithms applied in our study is Gaussian Discriminant Analysis, which fits a Gaussian distribution to each class of data separately to capture the distribution of each class. The probability is high if the predicted value lies at the center of the contour of one of the classes in the training dataset. Linear Discriminant Analysis is a special case of Quadratic Discriminant Analysis in which the classes share a covariance matrix, yielding linear decision boundaries.

    3.1.8 Gaussian Discriminant Analysis-Quadratic

    The quadratic variant of Gaussian Discriminant Analysis likewise fits a Gaussian distribution to each class separately, but each class is given its own covariance matrix, which produces quadratic decision boundaries between the classes.

    3.1.9 KNN

    KNN is one of the most well-known and widely utilized supervised learning methods. It works by computing the distance between a new data point and the training points, selecting the K nearest ones, and allocating the new point to the class most common among them. Euclidean distance is one of the distance functions used in KNN.

    3.1.10 Weighted KNN

    Weighted KNN is a modified version of KNN. In contrast to traditional KNN, it assigns higher weight to points that are near and lower weight to those that are far away. Its performance varies with the hyperparameter K; weighted KNN may be overly sensitive to outliers if the value of K is too small.
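Both KNN variants can be sketched with scikit-learn (used here only as an illustration; the toy 1-D data below is hypothetical). The `weights` parameter switches between a plain majority vote and distance weighting:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy 1-D data: class 0 clustered near 0, class 1 clustered near 1.
X = np.array([[0.0], [0.1], [0.2], [1.0], [1.1], [1.2]])
y = np.array([0, 0, 0, 1, 1, 1])

uniform = KNeighborsClassifier(n_neighbors=3, weights="uniform").fit(X, y)
weighted = KNeighborsClassifier(n_neighbors=3, weights="distance").fit(X, y)

print(uniform.predict([[0.15]]))   # majority vote among the 3 nearest points
print(weighted.predict([[0.15]]))  # closer neighbours count more in the vote
```

On this clean example both variants agree; distance weighting matters most near class boundaries or with noisy neighbours.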

    3.2 RNN-Based Algorithms

    3.2.1 Gated Recurrent Units(GRU)

    GRU comprises two gates, i.e., the update gate and the reset gate. The update gate combines the roles of an input gate and a forget gate, deciding which data will be discarded and which will be stored. The reset gate, on the other hand, regulates how much past information must be discarded, erasing previous information and thereby helping to keep gradients stable.

    3.2.2 Long Short-Term Memory(LSTM)

    Each LSTM network has three gates that govern data flow and cells that store data. The cell state transmits data from the beginning to the end of the time steps. In LSTM, the forget gate determines whether data must be pushed forward or omitted. The input gate decides which data is relevant and carries it onto the cell state, causing the state to be updated; this amounts to storing and changing the stored values. Once information has been transferred via the input gate, the output gate is triggered: it produces the hidden states, and the current condition of the cells is carried forward to the next step.
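The gate mechanics above can be written out as a single NumPy time step; this is a didactic sketch of the standard LSTM equations, not the exact implementation used in the experiments, and the weights here are random placeholders:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    # W stacks the weight matrices of all four gates; b stacks their biases.
    z = W @ np.concatenate([h_prev, x]) + b
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)  # forget, input, output gates
    c = f * c_prev + i * np.tanh(g)               # forget old state, add new candidate
    h = o * np.tanh(c)                            # output gate produces the hidden state
    return h, c

H, D = 4, 3                                    # hidden size, input size
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * H, H + D))            # random placeholder weights
b = np.zeros(4 * H)
h, c = lstm_step(rng.normal(size=D), np.zeros(H), np.zeros(H), W, b)
print(h.shape)  # (4,)
```

Note that the hidden state is bounded in (-1, 1) because it passes through tanh and a sigmoid-gated product.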

    3.3 Transformers

    3.3.1 Bidirectional Encoder Representations from Transformers(BERT)

    BERT is an excellent addition to the transformer family, especially for dealing with longer text. It is a bidirectional encoder-based transformer proposed by Google. BERT currently comes in two versions, BERT-base and BERT-large. For its input, BERT takes sequences of up to 512 tokens at once. BERT's input combines three embedding types: position embeddings, segment embeddings, and token embeddings.
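The way the three embedding types combine is a simple element-wise sum; the following NumPy sketch uses tiny random lookup tables and hypothetical token IDs purely to show the shape of the computation (real BERT uses a 30k-word vocabulary and learned tables):

```python
import numpy as np

vocab_size, max_len, n_segments, d = 100, 512, 2, 8  # toy sizes; real BERT uses d=768+
rng = np.random.default_rng(0)
token_emb = rng.normal(size=(vocab_size, d))      # one row per vocabulary token
position_emb = rng.normal(size=(max_len, d))      # one row per position (up to 512)
segment_emb = rng.normal(size=(n_segments, d))    # sentence A vs. sentence B

token_ids = np.array([2, 17, 45, 3])  # hypothetical IDs, e.g. [CLS] w1 w2 [SEP]
segments = np.array([0, 0, 0, 0])     # single-sentence input: all segment A

# BERT's input representation is the element-wise sum of the three embeddings.
x = token_emb[token_ids] + position_emb[np.arange(len(token_ids))] + segment_emb[segments]
print(x.shape)  # (4, 8): one d-dimensional vector per input token
```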

    3.3.2 ALBERT

    ALBERT (A Lite BERT) is a lightweight variant of BERT with faster training and fewer parameters compared to BERT. Because ALBERT uses absolute position embeddings, it is best to pad the right side of the inputs rather than the left. The computation cost remains similar to BERT's because it has the same number of hidden layers.

    3.3.3 DeBERTa

    Decoding-enhanced BERT with disentangled attention (DeBERTa) is a Transformer-based neural language model that trains on enormous amounts of raw text using self-supervised learning. DeBERTa is built to accumulate universal language representations that can be applied to numerous downstream NLP tasks. It surpasses the previous state-of-the-art BERT by utilizing three unique strategies:

    • A disentangled attention mechanism.

    • An enhanced mask decoder.

    • A virtual adversarial training method for fine-tuning.

    3.3.4 RoBERTa

    The architecture of RoBERTa is similar to that of BERT, but it employs a different pre-training strategy and a byte-level BPE tokenizer. It extends BERT by changing crucial hyperparameters, for example, removing the next-sentence prediction pre-training objective and training with considerably larger mini-batches and learning rates.

    4 Methodology

    This section details our methodology for fake news classification; each step is discussed in sequence. First, duplicated words and unwanted characters, such as numbers, stopwords, dates, times, etc., are removed from the dataset. Then, feature extraction is performed on the fake news dataset to reduce the feature space; each word's frequency is calculated for the construction of a document-term matrix. In the final step, sixteen supervised algorithms are applied to the two political news datasets. Fig. 2 shows the whole methodology, and Table 2 shows the specifications of the datasets utilized in it.

    Table 2: Dataset statistics

    4.1 Preprocessing for ML Algorithms

    • Tokenization

    As the name suggests, tokenization divides the text into smaller chunks, or tokens. Punctuation marks are also removed from the corpus. Moreover, a number filter is used to remove words that contain numeric values, followed by a case-converter filter that converts all text from upper to lower case. Lastly, a filter is used to remove dates and times from the textual data.

    Figure 2: The overall process flow of the methodology

    • Stopword and line removal

    Stopwords, which are usually short, join phrases and finish sentences. They are regional language terms that do not convey knowledge. Pronouns, prepositions, and conjunctions are all examples of stopwords. The number of stopwords in the English language is between 400 and 500 [49]. Examples include that, does, a, an, where, too, above, I, until, but, again, what, all, and when.

    • Stemming

    Stemming is a technique to identify the fundamental forms of words with similar semantics but diverse word forms. This process converts grammatical variants of a word, such as its verb, adjective, noun, and adverb forms, into the root form. The words "collects", "collections", and "collecting", for example, all reduce to the stem "collect".
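The preprocessing steps above (tokenization, number and date filtering, case conversion, stopword removal, and stemming) can be sketched in a few lines of Python. The stopword list and the suffix-stripping rule below are simplified stand-ins for the full resources (e.g., a 400-500 word stopword list and a proper stemmer such as Porter's) that a real pipeline would use:

```python
import re

# A tiny illustrative subset of a real stopword list.
STOPWORDS = {"that", "does", "a", "an", "too", "above", "i", "until",
             "but", "again", "what", "all", "and", "when", "the", "is", "on"}

def preprocess(text):
    text = text.lower()                        # case conversion
    tokens = re.findall(r"[a-z]+", text)       # tokenize; punctuation, numbers, dates are dropped
    tokens = [t for t in tokens if t not in STOPWORDS]
    # Naive suffix stripping as a stand-in for a real stemmer.
    tokens = [re.sub(r"(ions|ing|s)$", "", t) for t in tokens]
    return tokens

print(preprocess("All 500 collections collecting on 2020-03-15!"))
# → ['collect', 'collect']
```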

    The specifics of the preprocessing steps are displayed in Table 3.

    Table 3: Steps for preprocessing data

    4.2 Feature Engineering

    Managing high-dimensional information is the most challenging part of text mining. To increase performance, irrelevant and inconsequential attributes should be removed from the model. Data preprocessing here includes extracting features from high-dimensional unstructured data. In this work, a feature selection method retains stemmed terms whose frequency in the datasets exceeds a threshold. Following this technique, each record is transformed into a vector of term weights by weighing the terms in its dataset. The Vector Space Model (VSM) is the most direct basic representation: VSM assigns each word a value that indicates the word's weight in the text. Term frequency is one approach for calculating these weights; Inverse Document Frequency (IDF) and Term Frequency-Inverse Document Frequency (TF-IDF) are the two most well-known methods. In this paper, the TF-IDF approach is applied. TF-IDF weighs a phrase in a document based on the number of times it appears in that document, while also accounting for the term's significance across the whole document collection, known as the corpus.

    4.3 Evaluation Measures

    The performance of our model is evaluated using precision, accuracy, F1-score, and recall, represented in Eqs. (1)-(4), respectively.

    Here, TPN stands for True Positive News: news that is real and predicted by the model to be real. TNN stands for True Negative News: fake news predicted to be fake by the model. FPN stands for False Positive News: fake news that the model incorrectly predicted to be real. FNN stands for False Negative News: real news predicted to be fake by the model.
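In terms of these counts, the four measures take their standard forms:

```latex
\text{Precision} = \frac{TPN}{TPN + FPN} \tag{1}
```

```latex
\text{Accuracy} = \frac{TPN + TNN}{TPN + TNN + FPN + FNN} \tag{2}
```

```latex
\text{F1-score} = 2 \cdot \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \tag{3}
```

```latex
\text{Recall} = \frac{TPN}{TPN + FNN} \tag{4}
```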

    5 Results and Discussion

    In this section, dataset and training details are provided. Moreover, the comparison results of the RNN-, transformer-, and ML-based algorithms are also discussed.

    5.1 Experimental Settings

    In this work, two publicly available datasets from the political domain [5] are used. As discussed above, the sixteen (RNN, transformer, and conventional ML) algorithms are applied to the title and body text of the datasets. Before applying the algorithms, each dataset is split 70%/30% into training and testing sets, respectively. For the conventional ML algorithms, TF-IDF is used to form the word-weight matrix for feature extraction, while the RNN-based algorithms use GloVe vectors.
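The split-then-vectorize flow for the conventional ML branch can be sketched as follows; the texts and labels are invented placeholders, and scikit-learn with a Logistic Regression classifier stands in for any of the sixteen algorithms:

```python
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Hypothetical stand-ins for the news texts and their real/fake labels.
texts = ["senate passes budget bill", "lions released in city streets",
         "court confirms ruling today", "miracle cure found in garlic water"] * 10
labels = [1, 0, 1, 0] * 10   # 1 = real, 0 = fake

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.30, random_state=42)   # 70%/30% split

vec = TfidfVectorizer()                               # fit TF-IDF on training text only
clf = LogisticRegression().fit(vec.fit_transform(X_train), y_train)
preds = clf.predict(vec.transform(X_test))
acc = accuracy_score(y_test, preds)
print(acc)
```

Fitting the vectorizer on the training split only, then transforming the test split, avoids leaking test-set vocabulary statistics into training.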

    5.1.1 Dataset

    The datasets [5] described in Table 2 are used for the tests. They comprise two news collections, "Buzzfeed News Data" and "Random News Data". 48 examples of fake news and 53 instances of real news are included in "Buzzfeed News Data". The "Random News Data" collection, on the other hand, contains 75 instances each of satire, real news, and fake news. Both real and fake news data are used in this study. Both datasets include the headline and the story's content, which are utilized separately to classify the dataset. A few examples from these datasets are shown in Table 4.

    Table 4: Instances from the Buzzfeed and Random political news datasets

    Table 5: Results on title (Buzzfeed political news dataset)

    Table 6: Results on body (Buzzfeed political news dataset)

    Table 8: Results on title (Random political news dataset)

    5.1.2 Hyperparameters

    For the RNN-based methods, i.e., LSTM and GRU, a GloVe embedding matrix of 300 dimensions and 60 epochs with a batch size of 16 are used. The number of hidden units, i.e., the number of neurons in the hidden layer, is set to 256; this is chosen based on the task's complexity and the dataset's size. A dropout rate of 0.3 is used during training, chosen to strike a balance between preventing overfitting and maintaining the model's ability to capture the relevant information in the data. The optimization algorithm used for training is Stochastic Gradient Descent (SGD), a widely used optimizer for neural networks. To further prevent overfitting, an early stopping strategy is implemented. Moreover, the learning rate, which determines the optimizer's step size in finding the model's optimal parameters, is set to 0.0001.

    For both datasets, the conventional ML experiments are run 10 times because random data selection causes considerable variation in the outcomes. After running each traditional ML algorithm 10 times, the mean value of each evaluation measure, i.e., accuracy, precision, recall, and F1-score, is taken.

    These hyperparameters were chosen through a combination of literature review and experimental tuning, which showed that they provided good performance for the task. Finally, in addition to the RNNs, the transformer models are trained using BERT embeddings with a dropout rate of 0.2, applied to the embeddings during fine-tuning to prevent overfitting.

    5.2 Dataset 1:Buzzfeed Political News Dataset

    The supervised ML, RNN, and transformer-based algorithms described above are applied to the Buzzfeed Political News dataset to determine whether a news item is real or fake. Features are extracted from the dataset using TF-IDF. Tables 5 and 6 compare the effectiveness of the various supervised ML algorithms on the title and body of the Buzzfeed Political News dataset, respectively. They show that, in terms of precision, kernel SVM and quadratic Gaussian Discriminant Analysis (GDA) perform worst on the title and body, respectively, while linear GDA and Random Forest perform best on the title and body text. Regarding recall and F1-score, linear GDA, Logistic Regression, and Random Forest perform well on the title and body text, but kernel SVM and BERT perform best on the title, while kernel SVM and RoBERTa perform best on the body text. Regarding accuracy, kernel SVM performs worst on both title and body, while BERT and RoBERTa perform best on the title and body text, respectively. Figs. 3 and 4 give a graphical illustration of the algorithms’ performance in terms of the accuracy, precision, recall, and F-measure metrics, while Fig. 5 compares the loss on the title and body of the Buzzfeed Political News dataset.
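    The TF-IDF feature extraction can be sketched in a few lines. This is a minimal term-frequency × inverse-document-frequency computation, not the exact weighting scheme of any particular library, and the toy headlines are made up:

```python
import math
from collections import Counter

def tfidf(documents):
    """Return one {term: tf-idf weight} dict per tokenized document."""
    n_docs = len(documents)
    # Document frequency: in how many documents each term appears.
    df = Counter(term for doc in documents for term in set(doc))
    vectors = []
    for doc in documents:
        tf = Counter(doc)
        vectors.append({
            term: (count / len(doc)) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return vectors

docs = [
    "president signs new bill".split(),
    "president denies fake report".split(),
    "aliens endorse president".split(),
]
vectors = tfidf(docs)
```

    In this toy corpus, "president" occurs in every document, so its idf (and hence its weight) is zero, while a discriminative term such as "fake" receives a positive weight; classifiers then operate on these weighted term vectors.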

    Figure 3:Comparison of RNN,transformers,and ML-based algorithms on the title text of Buzzfeed political news dataset

    Figure 4:Comparison of RNN,transformers,and ML-based algorithms on the body text of Buzzfeed political news dataset

    Figure 5:Comparison of loss on title and body of Buzzfeed political news dataset

    5.3 Dataset 2:Random Political News Dataset

    This section provides the results of the applied artificial intelligence algorithms, with respect to their evaluation measures, on the Random Political News dataset. Tables 7 and 8 show the outcomes of the various supervised ML algorithms on the title and body of the dataset, respectively. Figs. 6 and 7 visually compare the outputs of the sixteen supervised learning algorithms, and Fig. 8 compares the loss on the title and body of the Random Political News dataset.

    Tables 7 and 8 show that, on the Random Political News dataset, kernel SVM performs worst in terms of precision on both title and body text, while Decision Tree and linear SVM perform best of all. For recall, Decision Tree performs worst on both title and body, whereas kernel SVM performs best.

    Figure 6:Comparison of RNN,transformers,and ML-based algorithms on the title text of the random political news dataset

    Figure 7: Comparison of RNN,transformers,and ML-based algorithms on the body text of the random political news dataset

    Figure 8:Comparison of loss on title and body of random political news dataset

    For F1-score and accuracy, kernel SVM remains the lowest performer on both title and body text, while DeBERTa performs best on both.

    The above results and analysis show that, on the title text of both datasets, conventional ML algorithms outperform RNN and transformer-based algorithms in terms of both computation and evaluation measures. For longer text, i.e., the body of both datasets, transformers outperform the remaining applied algorithms.

    Beyond this, Table 9 compares the different algorithms used for fake news detection in recent studies.

    Table 9: Comparison of the different algorithms used in recent studies for fake news detection

    6 Conclusion

    This paper compares supervised learning models for detecting fake news on social media based on NLP techniques, covering supervised RNN, transformer, and conventional ML algorithms. The accuracy, recall, precision, and F1-measure values of the supervised algorithms are examined, and two datasets are used to determine their average performance. The obtained results make clear that ML algorithms perform better on short text classification: when the text is only one or two lines, it is better to use a conventional ML algorithm, which is also computationally efficient. In contrast, on longer text, transformers outperform the other algorithms.

    In the future, this work could be improved with advances in transformers, existing hybridization techniques, and intelligent optimization algorithms. In addition, we will look at multi-modal data (images, videos, audio) to detect fake news. Experiments will be undertaken on a multi-modal dataset to better understand the aspects of fake news identification and how to better employ ML algorithms.

    Acknowledgement:ADU authors acknowledge financial support from Abu Dhabi University’s Office of Research and Grant programs.

    Funding Statement: Abu Dhabi University’s Office of Sponsored Programs in the United Arab Emirates (Grant Number: 19300752) funded this endeavor.

    Author Contributions:All the authors contributed equally.

    Availability of Data and Materials:https://github.com/rpitrust/fakenewsdata1.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
