
    Long Text Classification Algorithm Using a Hybrid Model of Bidirectional Encoder Representation from Transformers-Hierarchical Attention Networks-Dilated Convolutions Network

2021-10-22 08:24:36

    ZHAO Yuanyuan(趙媛媛), GAO Shining(高世寧), LIU Yang(劉 洋) , GONG Xiaohui(宮曉蕙) *

1 College of Information Science and Technology, Donghua University, Shanghai 201620, China
2 Engineering Research Center of Digitized Textile & Apparel Technology, Ministry of Education, Donghua University, Shanghai 201620, China

Abstract: Text accounts for most of the information resources on the Internet, which places increasingly high requirements on the accuracy of text classification. Therefore, in this manuscript, we first design a hybrid model of bidirectional encoder representation from transformers-hierarchical attention networks-dilated convolutions network (BERT_HAN_DCN), which is based on the BERT pre-trained model with its superior ability to extract features. The advantages of the HAN model and the DCN model are taken into account, which helps to gain abundant semantic information while fusing contextual semantic features and hierarchical characteristics. Secondly, since the traditional softmax algorithm increases the learning difficulty for samples of the same class and makes similar features more difficult to distinguish, AM-softmax is introduced to replace the traditional softmax. Finally, the fused model is validated on two datasets and shows superior accuracy and F1-score; the experimental analysis shows that it outperforms general single models such as HAN and DCN built on the BERT pre-trained model. Besides, the improved AM-softmax network model is superior to the general softmax network model.

    Key words: long text classification; dilated convolution; BERT; fusing context semantic features; hierarchical characteristics; BERT_HAN_DCN; AM-softmax

    Introduction

Text classification is aimed at simplifying messy text data and summarizing information from unstructured data[1]. It is a basic task in natural language processing (NLP) and can be applied to sentiment classification, web retrieval, and spam filtering systems[2]. Establishing specific classification rules is a necessary process for automatic text categorization, which mainly includes text feature extraction and word vector representation.

For text feature extraction, experts have proposed a variety of methods, which can be summarized into the following: expert systems, machine learning, and deep neural networks, which are also the three main stages of NLP development. Expert systems rely on experts with relevant field expertise and experience to summarize rules and extract features for classification, which makes it difficult to deal with the flexible and changeable characteristics of natural language, and long-term dependence on manual feature extraction requires huge manpower. Machine learning algorithms[3-4] are shallow feature extractors; this kind of feature engineering is based on manual extraction and is not able to automatically learn features from training sets.

However, most of the above-mentioned feature extraction methods suffer from high dimensionality and data sparseness, resulting in poor performance[5]. With the rise and popularity of deep learning, neural networks have achieved excellent results in the field of image processing[6-7], and related scholars began to utilize deep learning[8-13] for NLP, where it has served as the feature extraction unit and gained extraordinary accomplishments. The most representative neural network is the convolutional neural network (CNN)[8], which is strong in feature learning; it improves the feature extraction ability by modifying hyperparameters or increasing the number of convolution layers, but at the same time it faces the problems of a large amount of calculation and parameter adjustment. The dilated convolution network (DCN) is a variant of the CNN. DCN is able to extract more global features with less parameter-adjusting work[14], but it often loses key information and contextual structural semantic information when obtaining global information. The attention mechanism can calculate the key information in characters and sentences[9]. The traditional attention mechanism usually operates on characters, which is inadequate for acquiring semantic information, so Yang et al.[15] proposed the hierarchical attention network (HAN). HAN is composed of a two-level attention mechanism on characters and sentences, which can effectively identify features, structural information, and key semantics. However, it loses some global feature extraction ability and may produce partial semantic loss.

In the aspect of vector representation, unsupervised training is essential, and pre-trained CNNs[16-17] are widely used to fine-tune downstream tasks[18-19], gaining significantly enlarged ability in feature extraction, transfer learning, and dynamically fetching context semantics. Traditional models, such as fastText[20] and GloVe[21], intend to obtain the semantic information of each word while discarding the semantic relevance with preceding texts, and are prone to the problems of dimension explosion and data sparseness[22]. Bidirectional encoder representation from transformers (BERT) is a pre-trained word vector model constructed with n-layer transformer models with strong coding ability, and it is able to calculate the semantic weight of each word with respect to the others in the sentence. Therefore, the pre-trained language model BERT is used for transfer learning to fine-tune downstream tasks.

With the explosive growth of the number of texts, a single classifier is not able to accomplish the tasks with high accuracy and precision, and many studies with mixed models have proven more effective than single models in dealing with text classification problems[23-26]. A feature-fused HAN-DCN model is presented in this manuscript: the BERT model trains word vectors to initially understand the text semantics, the HAN network obtains the structural dependency between word vectors, and the DCN extracts global and edge semantics in parallel. The features obtained from the two channels are spliced, which is more efficient in improving the weight of the key information at the word and sentence levels and in extracting global semantic features as much as possible to improve the accuracy. Since softmax aims to maximize the probability of categorization by optimizing the variances between different classes and is unable to minimize the differences within the same category, AM-softmax[27] is used to deepen the feature learning and improve the accuracy and efficiency of news text classification. The feasibility of the BERT_HAN_DCN model based on AM-softmax is verified through a series of experiments, and it shows certain advances in improving generalization ability and model convergence speed.

    1 Model Architecture

The entire architecture of this manuscript is indicated in Fig. 1. Firstly, the data is processed by BERT to get a rudimentary understanding of the text so that we can obtain a dynamic semantic representation. After receiving the vector of each individual word in the long sentence, the digital vector is sent to a parallel network composed of a three-layer DCN, which can acquire a larger receptive field with a smaller amount of calculation, and a HAN hybrid model, which extracts more abundant semantic information and contextual feature information. In related image processing work, a mesh effect appears in dilated convolution, resulting in the loss of characteristic information[28-29]. Therefore, in this network design, a three-layer dilated convolution is adopted to overcome the influence of the mesh effect, and the dilation rates of the layers are set to 1, 3, and 5, respectively. The feature representation of the text is then formed by combining the feature information of these two parts. In the end, the softmax function is used to normalize the output and classify according to the probability size. The mixed model architecture is shown in Fig. 1.
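As an illustration of the DCN branch described above, the following is a minimal Keras sketch of a three-layer dilated convolution stack using the dilation rates 1, 3 and 5 from the text; the filter count, kernel size and the (256, 768) input shape are illustrative assumptions rather than values reported in the paper.

```python
from tensorflow.keras import layers

def dilated_conv_branch(x, filters=128, kernel_size=3):
    # Three stacked 1-D dilated convolutions with rates 1, 3 and 5,
    # the configuration described above for suppressing the mesh effect.
    for rate in (1, 3, 5):
        x = layers.Conv1D(filters, kernel_size, padding="same",
                          dilation_rate=rate, activation="relu")(x)
    return x

# `bert_output` stands in for the (batch, 256, 768) tensor produced by BERT.
bert_output = layers.Input(shape=(256, 768))
dcn_features = dilated_conv_branch(bert_output)
```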

    1.1 Input of representation layer

BERT can obtain dynamic and nearly comprehensive semantic information of the text. The BERT model uses a transformer with a bidirectional structure to fuse left and right characters to obtain contextual semantics, completes the two tasks of masked language model (MLM) and next sentence prediction (NSP) at the same time, and conducts joint training to obtain the vector representation of words and sentences. BERT's embedding layer consists of token embedding (vector representation of words), segment embedding (vector representation of the two sentences in a sentence pair and their similarity), and position embedding (learning the order properties of the sentence) to convert Chinese characters into input vectors W1, W2, ..., Wn, and the model can dynamically generate the contextual semantic representation of words through the bidirectional transformer structure[30] to perform the two tasks mentioned above (MLM and NSP), as shown in Fig. 2. The final transformer output, a hidden layer vector with semantic information, is obtained from the self-attention layer, the residual connection and the normalization layer, and the output is the superposition of the character-level vectors. The output layer vectors processed by BERT are E1, E2, ..., En, which are obtained by multi-layer transformers. In this experiment, the BERT_BASE_CHINESE model is used, which is composed of a 12-layer multi-head attention transformer.
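The paper only specifies that BERT_BASE_CHINESE supplies the input vectors. As one possible way to obtain the character-level vectors E1, ..., En, a sketch with the Hugging Face transformers package (a library choice assumed here, not stated by the authors) might look like this:

```python
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
bert = TFBertModel.from_pretrained("bert-base-chinese")

# Encode one document, padded/truncated to the 256-character limit used later.
inputs = tokenizer("今日股市震荡上行", return_tensors="tf",
                   padding="max_length", truncation=True, max_length=256)
outputs = bert(inputs)

# One 768-dimensional contextual vector per character: E1, E2, ..., En.
token_vectors = outputs.last_hidden_state   # shape (1, 256, 768)
```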

    Fig. 1 Overall framework of BERT_HAN_DCN

    Fig. 2 BERT model structure

    1.2 HAN layer

As illustrated in Fig. 3, the HAN model is composed of a Chinese character-level and a sentence-level attention network, and this hierarchical structure is in line with people's habitual way of understanding articles. In essence, each attention layer is built on two layers of BiGRU, which has the advantage of learning text features sequentially, as shown in the dotted box in Fig. 3. Considering the hierarchical structure of the network, it is necessary to set a fixed length for each sentence when dividing sentence attention. Thus, in this manuscript, the maximum length of an article is set to 256 characters, and each article is divided into sentences of 16 characters. HAN is composed of four parts: word encoder, word attention, sentence encoder and sentence attention, whose calculation processes are explained in detail below.
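Before the two-level attention can be applied, the flat 256-character sequence has to be regrouped into sentences of 16 characters each. A minimal reshape sketch follows; the 768-dimensional vectors and batch size 64 are taken from settings mentioned elsewhere in the paper, the rest is assumed for illustration.

```python
import tensorflow as tf

# (batch, characters, dim) -> (batch, sentences, characters per sentence, dim)
doc_vectors = tf.random.normal((64, 256, 768))   # stand-in for BERT output
hier_input = tf.reshape(doc_vectors, (64, 16, 16, 768))
```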

    Fig. 3 HAN model structure

    (1) Word encode

    In this part, the embedding layer vector is word-encoded. The vectors are initialized and then used as the input of the two-layer BiGRU. The specific conversion method is shown as

$x_{in} = E_e E_n, \quad n \in [1, t],$    (1)

$\overrightarrow{h}_{1n} = \overrightarrow{\mathrm{GRU}}(x_{in}), \quad n \in [1, t],$    (2)

$\overleftarrow{h}_{1n} = \overleftarrow{\mathrm{GRU}}(x_{in}), \quad n \in [1, t].$    (3)
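A sketch of the word encoder as two stacked bidirectional GRU layers over the characters of one sentence; the 64-unit hidden size follows the hyperparameters in Section 2.3, the remaining shapes are assumed for illustration.

```python
from tensorflow.keras import layers

# Word encoder: 16 characters per sentence, each a 768-dimensional BERT vector.
word_input = layers.Input(shape=(16, 768))
h = layers.Bidirectional(layers.GRU(64, return_sequences=True))(word_input)
h1 = layers.Bidirectional(layers.GRU(64, return_sequences=True))(h)  # h_{11}, ..., h_{1t}
```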

    (2) Word attention

The spliced vectors h11, h12, ..., h1n of the forward hidden state and the reverse hidden state are used as the overall representation of the words. In this part, we calculate the attention weight of each word in the sentence. The calculation method is

$u_{1n} = \tanh(w_s h_{1n} + b), \quad n \in [1, t],$    (4)

$\alpha_{1n} = \dfrac{\exp(u_{1n}^{\top} u_w)}{\sum_{n}\exp(u_{1n}^{\top} u_w)},$    (5)

$s_i = \sum_{n}\alpha_{1n} h_{1n}, \quad n \in [1, t].$    (6)
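One possible Keras layer implementing this word-level attention step (tanh projection, softmax weighting and weighted sum); the class name and projection size are assumptions made for this sketch.

```python
import tensorflow as tf
from tensorflow.keras import layers

class WordAttention(layers.Layer):
    """Additive attention over character positions (word-attention step)."""
    def __init__(self, units=128, **kwargs):
        super().__init__(**kwargs)
        self.proj = layers.Dense(units, activation="tanh")  # u_n = tanh(w_s h_n + b)
        self.score = layers.Dense(1, use_bias=False)        # u_n^T u_w

    def call(self, h):
        u = self.proj(h)                               # (batch, t, units)
        alpha = tf.nn.softmax(self.score(u), axis=1)   # attention weights over positions
        return tf.reduce_sum(alpha * h, axis=1)        # weighted sum of hidden states

# Usage on a random stand-in for the word-encoder output:
sentence_vector = WordAttention()(tf.random.normal((64, 16, 128)))
```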

    (3) Sentence attention

    (7)

    (8)

The purpose of adding the attention mechanism in this step is to discover the significant, meaningful sentences in the document. We can get the final output of the HAN network as illustrated in Eq. (9), in which the vector ω1 is the significant local characteristic of the mixed neural network, defined as

$\omega_1 = \sum_{n}\alpha_{2n} h_{2n}, \quad n \in [1, t],$    (9)

where ω1 is the document vector as well as the final feature extracted by HAN, which sums up all the sentence information in the long text.

    1.3 Dilated convolutional networks layer

As shown in Fig. 1, the DCN layer is composed of three dilated convolutional blocks with the same structure, and the input of each dilated convolutional layer is the output of the previous layer. Changing the dilation rate of each layer allows the receptive field of the convolutional layers to quickly cover all input data. As the dilation rate of each layer increases, the amount of obtained feature information increases exponentially.

DCN and HAN are parallel network structures, taking the output of the embedding layer initialized by BERT as the input, and the input of each word in the sentence is Ei ∈ R^(B×N×D), where B is the batch size which is set to 64, N is the number of words, and D is the word vector dimension of the BERT output. The feature extraction of the input text sentence by dilated convolution is completed by setting the filter size. The convolution calculation is shown as

$c_i = f(\omega \cdot E_{i:i+k+(k-1)(r-1)} + b),$    (10)

where f is a non-linear function, ω is the randomly initialized weight matrix of the convolution kernel, k is the size of the convolution kernel, r is the dilation rate of the dilated convolution, E_{i:i+k+(k-1)(r-1)} is the sentence vector composed of words i to i+k+(k-1)(r-1), and b is the bias term.
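For intuition, the span of input characters covered by a single dilated filter grows with the dilation rate. A quick check with an assumed kernel size of k = 3 and the rates used in this paper:

```python
# Span of one dilated filter: k + (k - 1)(r - 1) input positions (see Eq. (10)).
k = 3                   # assumed kernel size, not stated in the text
for r in (1, 3, 5):     # dilation rates of the three DCN layers
    print(f"rate {r}: one filter covers {k + (k - 1) * (r - 1)} characters")
# rate 1 covers 3 characters, rate 3 covers 7, rate 5 covers 11
```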

Therefore, after the feature extraction of the dilated convolutional layer, the final vector obtained is C. The concrete vector representation of C is shown as

$C = [c_1, c_2, \ldots, c_{i+k+(k-1)(r-1)}].$    (11)

The output of HAN is a serialized continuous vector, so the dimensions need to be kept consistent. The vectors obtained from the three-layer dilated convolution network are concatenated and converted into a feature matrix w2, as shown in

$w_2 = [C_1, C_2, \ldots, C_i], \quad i \in [1, n],$    (12)

where Ci is the feature matrix output by the dilated convolutional neural network.

    1.4 Classification layer

The classification layer is composed of the following four parts: a feature fusion layer, a fully connected layer, a dropout layer and a softmax layer. It consists of a simple softmax classifier (on top of HAN and DCN) to calculate conditional probability distributions over the predefined classification tags. Using Keras's add function at the model fusion layer, we can get the merge layer vector ω, shown as

$\omega = w_1 \oplus w_2,$    (13)

where w1 and w2 represent the output feature vectors of HAN and DCN respectively, and ⊕ represents a splicing operation. After the merge layer operation, the obtained feature vectors are combined. The feature vector is then extracted again, and each input unit of the fully connected layer represents the value of one feature vector. In order to avoid overfitting of the model, we use the dropout mechanism. The final feature representations obtained from the dropout layer are classified by the softmax classification algorithm. The classification algorithm calculates the probability of ω belonging to category z, and the concrete calculation formula is shown as

$p(z \mid \omega) = \dfrac{\exp(w_z^{\top}\omega + b_z)}{\sum_{z'}\exp(w_{z'}^{\top}\omega + b_{z'})}.$    (14)
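Putting Section 1.4 together, a minimal sketch of the classification head: the HAN and DCN feature vectors are merged (the text names Keras's add function while Eq. (13) describes splicing, so either merge layer could be substituted), followed by a fully connected layer, dropout and softmax. The feature sizes and the 8 output classes are assumptions based on the experimental setup.

```python
from tensorflow.keras import layers, Model

w1 = layers.Input(shape=(128,), name="han_features")   # output of the HAN branch
w2 = layers.Input(shape=(128,), name="dcn_features")   # output of the DCN branch

merged = layers.Concatenate()([w1, w2])                # or layers.Add()([w1, w2])
x = layers.Dense(128, activation="relu")(merged)       # fully connected layer
x = layers.Dropout(0.6)(x)                             # 0.6 (SogouCS) / 0.8 (THCNews)
probs = layers.Dense(8, activation="softmax")(x)       # softmax output, Eq. (14)

classifier = Model([w1, w2], probs)
```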

    2 Experiments and Results

In this section, to verify the effectiveness of the BERT_HAN_DCN model, we use two real-world experimental datasets, which are extracted as portions of the SogouCS and THCNews datasets. We explicate the details of the experiment, evaluate the performance of the hybrid model, and analyze the experimental results.

    2.1 Experimental datasets

    The datasets used in this experiment are Chinese text classification datasets launched by the NLP Laboratory of Tsinghua University and Sogou labs. The detailed data amount of the train group, the validation group and the test group are shown in Table 1.

    Table 1 Details of the text classification datasets

    2.2 Multi-classification evaluation index

During the training process of the text classifier, it is indispensable to select appropriate criteria to evaluate the ability of the classifier. The confusion matrix is shown in Table 2, and there are four commonly used criteria in the field of NLP: precision (P), accuracy (A), recall (R), and F1-score (F1).

    Table 2 Confusion matrix

(1) Accuracy (A)


$A = \dfrac{T_P + T_N}{T_P + T_N + F_P + F_N},$    (15)

where A measures the ability of the classifier over the whole dataset; the higher the value of A, the better the classification ability of the model.

(2) F1-score (F1)

$F_1 = \dfrac{2PR}{P + R},$    (16)

where P is shown in

$P = \dfrac{T_P}{T_P + F_P},$    (17)

and R is shown in

$R = \dfrac{T_P}{T_P + F_N}.$    (18)

F1 is a comprehensive index, the harmonic mean of precision and recall. It can be seen that F1 combines the results of P and R; the closer F1 gets to 1, the more effective the model method is.
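A small sketch computing the four criteria directly from confusion-matrix counts; the binary case is shown for brevity (the paper averages the metrics over eight classes), and the counts below are made-up placeholders.

```python
def classification_metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)           # Eq. (15)
    precision = tp / (tp + fp)                           # Eq. (17)
    recall = tp / (tp + fn)                              # Eq. (18)
    f1 = 2 * precision * recall / (precision + recall)   # Eq. (16)
    return accuracy, precision, recall, f1

print(classification_metrics(tp=90, fp=10, fn=5, tn=95))
```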

    2.3 Main initialization hyperparameters

In order to train a better classification model, we should set appropriate hyperparameters. The hidden vector dimensions of the BiGRU and DCN models are set to 64 and 128 respectively, the batch size is 64, the dropout rate of BiGRU is set to 0.1, the maximum input length is set to 256, and the learning rate is 0.000 05. The other main initialization hyperparameter settings of this experiment are shown in Table 3.

    Table 3 Main initialization hyperparameters

The Adam optimizer[31] is used to update the network weights, and the cross-entropy cost function is used to calculate the loss. In addition, early stopping is used to prevent overfitting; after multiple training runs, a patience of 3 is found to be the most suitable for all experimental models. Complicated neural networks trained on small datasets often result in overfitting[32-33]. Because of the relatively small datasets in this experiment, a certain dropout rate is adopted to prevent overfitting of the model. Consequently, five groups of experiments were designed to explore the influence of the dropout rate on the model effect and to find the optimal value for this fusion model; every 0.1 change of the dropout rate has an impact on the accuracy. Finally, we found the most appropriate dropout rate to be 0.6 for the SogouCS dataset and 0.8 for the THCNews dataset.
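A compile-and-fit sketch reflecting these settings (Adam with a 5e-5 learning rate, cross-entropy loss, batch size 64, dropout, and early stopping with a patience of 3); the tiny stand-in model and random data are only there to make the snippet runnable and are not part of the paper's pipeline.

```python
import numpy as np
import tensorflow as tf

# Stand-in classifier using the dropout rate chosen for SogouCS.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(768,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.6),
    tf.keras.layers.Dense(8, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

x = np.random.rand(256, 768).astype("float32")
y = np.random.randint(0, 8, size=256)
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                              restore_best_weights=True)
model.fit(x, y, validation_split=0.2, epochs=10, batch_size=64,
          callbacks=[early_stop], verbose=0)
```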

    2.4 Analysis of experimental results

This manuscript focuses on two kinds of news datasets. A total of three groups of comparative experiments are designed: using the standard BERT connected directly to the fully connected layer, and using BERT as the embedding representation layer with HAN and DCN as the feature extraction layer respectively. The accuracy and loss on the two datasets during the training process are plotted separately, as shown in Fig. 4. The models are trained for 10 epochs on the two datasets. During training, it can be seen from Fig. 4 that the BERT_HAN model plays a significant role in improving accuracy. However, the model tends to be unstable in the first few iterations. The reason for this phenomenon is the complex structure of the HAN network: in the early stage of learning, the error is relatively large, and some important features may be lost when focusing on local important features. With the constant updating of parameters and BERT_HAN's strong learning ability, the accuracy and stability of prediction constantly improve. The BERT_DCN model is more stable than the BERT model on both datasets. In addition, its accuracy on the SogouCS and THCNews datasets improved by 2.89% and 2.03% respectively compared with the BERT model.

From the experimental results, on SogouCS and THCNews the BERT_HAN_DCN model achieved accuracy values of 91.42% and 95.66% respectively, with loss rates of 39.95% and 17.83% respectively. The accuracy of the BERT_HAN_DCN model on the verification set is higher than that of the other models, and it is more stable during the training process. Compared with the other groups of models, it shows the best effect with considerable improvement, which indicates that the designed model fusion is feasible and can extract deep characteristics of long text and improve the effect of the news text classification model.

Fig. 4 Training performance comparison between the presented model and other basic models: (a)-(b) training curves of verification accuracy and loss on SogouCS; (c)-(d) training curves of verification accuracy and loss on THCNews

2.4.2 Impact of AM-softmax

AM-softmax has achieved remarkable results in the field of face recognition. Unlike softmax, AM-softmax reduces the probability of the correct label and increases the loss, which is more helpful for the aggregation of samples of the same class. The specific AM-softmax is shown as

$L_{\mathrm{AMS}} = -\dfrac{1}{n}\sum_{j=1}^{n}\log\dfrac{\mathrm{e}^{s(\cos\theta_{y_j}-m)}}{\mathrm{e}^{s(\cos\theta_{y_j}-m)}+\sum_{i\neq y_j}\mathrm{e}^{s\cos\theta_i}},$    (19)

where cos θ_{y_j} measures how close x_j is to its category y_j region, and m is the margin that keeps the categories at least m apart. The value of m here is set to 0.35, which needs to consider whether there is a clear boundary between the distributions of the data in the real scene. The cosine value lies in [0, 1], which is too small to effectively distinguish differences, so it is scaled by a factor s to enlarge the differences in the distribution; s here is set to 30. With the increase of the number of training epochs, the accuracy on the validation sets of the different models changes as shown in Fig. 5.
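As an illustration of how the margin m = 0.35 and the scale s = 30 enter the loss, the following is a sketch of one common additive-margin softmax formulation in TensorFlow; the exact implementation used by the authors is not given, and the function name and placeholder data are assumptions.

```python
import tensorflow as tf

def am_softmax_loss(labels, features, class_weights, m=0.35, s=30.0):
    """Additive-margin softmax: cosine logits with margin m subtracted on the
    true class, scaled by s before cross-entropy."""
    x = tf.math.l2_normalize(features, axis=1)        # (batch, dim)
    w = tf.math.l2_normalize(class_weights, axis=0)   # (dim, classes)
    cos_theta = tf.matmul(x, w)                       # cosine similarity to each class
    one_hot = tf.one_hot(labels, depth=class_weights.shape[1])
    logits = s * (cos_theta - m * one_hot)            # margin applied only to the true class
    return tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=one_hot, logits=logits))

# Usage with random placeholders: 8 classes, 128-dimensional features.
feats = tf.random.normal((64, 128))
w = tf.Variable(tf.random.normal((128, 8)))
y = tf.random.uniform((64,), maxval=8, dtype=tf.int32)
print(am_softmax_loss(y, feats, w).numpy())
```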

From the accuracy on the verification sets, it can be concluded that after 5-6 training epochs on the two datasets, the mixed BERT_HAN_DCN model based on AM-softmax tends to be stable and finally achieves higher accuracy.

Fig. 5 Training performance comparison between different models: (a) training accuracy curves on SogouCS for the AM-softmax models and the original models; (b) training accuracy curves on THCNews for the AM-softmax models and the original models

The trained models were verified on the validation set after 10 epochs. The precision, recall, F1 value and accuracy of the 8 categories of the two datasets are obtained respectively, as shown in Tables 4-5.

As shown in Tables 4-5, the models using AM-softmax as the loss function show a slight improvement in both accuracy and F1 value compared with the original models. Although the improvement is small, it proves that changing the way the loss is calculated is also a way to improve the feature extraction ability of the training model. Finally, for the SogouCS dataset, we find that the final F1-score and accuracy of the hybrid model are increased by 0.56% (from 91.42% to 91.98%) and 0.55% (from 91.42% to 91.97%), respectively. For the THCNews dataset, the F1-score and accuracy of the hybrid model are increased by 0.34% (from 95.69% to 99.06%) and 0.3% (from 95.66% to 95.69%), respectively.

    Table 4 Model comparison result on SogouCS dataset

    Table 5 Model comparison result on THCNews dataset

2.4.3 Time complexity comparison experiment

Under the same parameter settings, the running times of 8 different models were compared to verify the effectiveness of the algorithm. The results of the time complexity comparison experiment are shown in Table 6.

    Table 6 Time complexity comparison

From Tables 4-6, the experimental results show that the mixed model has higher time complexity, but its accuracy and F1 are much better. For the SogouCS and THCNews datasets, the average calculation time per epoch of the hybrid model is 230 s and 244 s longer than that of BERT, respectively, but the accuracy is improved. It is obvious that the addition of the hierarchical attention mechanism increases the computing complexity but effectively improves the accuracy of the model. The calculation time of all AM-softmax-based models is less than that of the original models: for the SogouCS dataset, the average calculation time per epoch of the hybrid model is reduced by 6 s, and for the THCNews dataset it is reduced by 11 s. This proves that changing the loss calculation improves the convergence speed of the model to a certain extent and slightly reduces the complexity of the model.

    3 Conclusions

This manuscript adopts the BERT_HAN_DCN composite network model and applies it to the task of Chinese long text classification. Compared with the single BERT model, BERT_HAN and BERT_DCN, the accuracy and F1 value of the proposed model are the highest. The results show that the fusion of HAN and DCN is effective and can learn deep features and contextual information in long text.

In addition, by improving the loss function, the accuracy and F1 of both the single models and the mixed model are improved and the training time is relatively reduced, which proves that the mixed model can be better applied to Chinese text classification tasks. This also shows that during model training, attention should be paid not only to the ability of feature extraction and word vector transformation, but also to the impact of the loss function on model accuracy.

    However, a more complex hybrid model requires more network parameters, which requires more computing power and longer training time. In the following research, we intend to further optimize and improve the details of the algorithm and we will improve this work by building a larger dataset.
