
Investigation of Deep Neural Network Acoustic Modelling Approaches for Low Resource Accented Mandarin Speech Recognition

Journal of Integration Technology, 2015, Issue 6

XIE Xurong1,3, SUI Xiang1,3, LIU Xunying1,2, WANG Lan1,3

1(Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China)
2(Cambridge University Engineering Department, Cambridge CB2 1TN, U.K.)

3(The Chinese University of Hong Kong, Hong Kong 999077, China)



    1 Introduction

An important part of the Mandarin speech recognition task is to appropriately handle the influence of a rich set of diverse regional accents. There are at least seven major regional accents in China[1,2]. The variabilities they impose on accented Mandarin speech are complex and widespread, and the resulting acoustic mismatch can lead to severe performance degradation for automatic speech recognition (ASR) systems. To handle this problem, ASR systems can be trained on large amounts of accent specific speech data[3]. However, collecting and annotating accented data is expensive and time-consuming, so the amount of available accent specific speech data is often quite limited.

An alternative approach is to exploit the accent independent features in standard Mandarin speech data, which is often available in large quantities, to improve robustness and generalization. Along this line, two categories of techniques can be used. The first category aims to directly adapt systems trained on standard Mandarin speech data[4-8]. The second category uses standard Mandarin speech to augment the limited in-domain accent specific data in a multi-style training framework[9]. For example, an accent dependent phonetic decision tree tying technique was proposed[10-12], which allows the resulting acoustic models to explicitly learn both the accent independent and the accent specific characteristics of speech.

Recently, deep neural networks (DNNs) have become increasingly popular for acoustic modelling, due to their inherent robustness to the highly complex factors of variability found in natural speech[13-15]. These include external factors such as environmental noise[16-18] and language dependent linguistic features[19-21]. In order to incorporate DNNs, or multi-layer perceptrons (MLPs) in general, into hidden Markov model (HMM) based acoustic models, two approaches can be used. The first uses a hybrid architecture in which the HMM state emission probabilities are estimated by DNNs[22]. The second uses an MLP or DNN trained to produce phoneme posterior probabilities as a feature extractor[17]. The resulting probabilistic features[23] or bottleneck features[24] are concatenated with standard front-ends and used to train Gaussian mixture model (GMM) HMMs in a tandem fashion. As GMM-HMMs remain the back-end classifier, the tandem approach requires minimal change to downstream techniques, such as adaptation and discriminative training, while retaining the useful information carried by the bottleneck features.

Using limited amounts of accented data alone cannot provide adequate generalization for the resulting acoustic models, including DNNs. Therefore, a key problem in accented Mandarin speech recognition with low resources, as considered in this paper, is how to improve coverage and generalization by exploiting the commonalities and specialties between standard and accented speech data during training. Using conventional multi-style DNN training on a mix of standard and accented Mandarin speech data, accent independent features found in both can be implicitly learned[25,26].

Inspired by recent work on multi-lingual low resource speech recognition[19-21,27], this paper investigates and compares the explicit as well as the implicit use of accent information in state-of-the-art deep neural network based acoustic modelling techniques, including conventional tied state GMM-HMMs, DNN tandem systems and multi-level adaptive network (MLAN)[27,28] tandem HMMs. These approaches are evaluated on a low resource accented Mandarin speech recognition task consisting of accented speech collected from four regions: Guangzhou, Chongqing, Shanghai and Xiamen. The improved multi-accent GMM-HMM and MLAN tandem systems, which explicitly leverage the accent information during model training, significantly outperformed the baseline GMM-HMM and DNN tandem HMM systems by 0.8%-1.5% absolute (6%-9% relative) in character error rate after minimum phone error (MPE) based discriminative training and adaptation.

The rest of this paper is organized as follows. Standard acoustic modelling approaches for accented speech are reviewed in section 2, including multi-accent decision tree state tying for GMM-HMM systems and multi-accent DNN tandem systems. MLAN tandem systems with improved pre-training for accent modelling are presented in section 3. Experimental results are presented in section 4. Section 5 draws conclusions and discusses future work.

    2 Acoustic modelling for accented speech

    2.1 Multi-style accent modelling

Multi-style training[9] is used in this paper for accent modelling. This approach trains on speech data collected in a wide range of styles and domains, and exploits the implicit modelling ability of the mixture models used in GMM-HMMs and, more recently, of deep neural networks[16,20,21] to obtain good generalization to unseen conditions. In the accented speech modelling experiments of this paper, a large amount of standard Mandarin speech data is used to augment the limited accented data during training, providing useful accent independent features.

    2.2 Multi-accent decision tree state tying

As the phonetic and phonological realization of Mandarin speech differs significantly between regional accents, inappropriate tying of context dependent HMM states associated with different accents can lead to poor coverage and discrimination for GMM-HMM based acoustic models. To handle this problem, decision tree clustering[10,11] with multi-accent branches is employed in this paper. In order to effectively exploit the commonalities and specificities found in standard and accented Mandarin data, accent dependent (AD) questions are used together with conventional linguistic questions during the clustering process. A sample of the accented branches is shown in the red part of Fig. 1.

Fig. 1 A part of a multi-accent decision tree. Blue: conventional branches; red: accented branches

As in standard maximum likelihood (ML) based phonetic decision tree tying[13], the question giving the highest log-likelihood improvement is chosen when splitting a tree node. The algorithm iterates until no further split yields a log-likelihood increase above a given threshold; the multi-accent information is thus explicitly used during state tying. As expected, the use of accent dependent questions dramatically increases the number of context-dependent phone units to consider during training and decoding. As not all of them are allowed by the lexicon, following the approach proposed in Liu et al.[29], only the valid subset under the lexical constraint is considered in this paper.
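To make the splitting criterion concrete, the following toy sketch (not the paper's implementation; one-dimensional Gaussians and invented accent tags are used purely for illustration) greedily picks the question whose yes/no split gives the largest log-likelihood gain:

```python
import math

def node_loglik(values):
    """Log-likelihood of the data under a single 1-D ML-fitted Gaussian,
    up to the constant -N/2*log(2*pi*e); higher is better."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return -0.5 * n * math.log(var + 1e-8)

def best_question(samples, questions):
    """Pick the question whose yes/no split maximally increases log-likelihood.
    `samples` is a list of (tag, value); each question is the set of tags
    answering 'yes' (a phonetic class, or an accent such as {'Guangzhou'})."""
    base = node_loglik([v for _, v in samples])
    best, best_gain = None, 0.0
    for name, tags in questions.items():
        yes = [v for t, v in samples if t in tags]
        no = [v for t, v in samples if t not in tags]
        if not yes or not no:
            continue
        gain = node_loglik(yes) + node_loglik(no) - base
        if gain > best_gain:
            best, best_gain = name, gain
    return best, best_gain

# toy data: Guangzhou-tagged states shifted away from the rest
samples = [("Guangzhou", 2.0 + 0.1 * i) for i in range(20)] + \
          [("Chongqing", 0.1 * i) for i in range(20)] + \
          [("standard", 0.1 * i) for i in range(20)]
questions = {
    "is-Guangzhou?": {"Guangzhou"},              # accent dependent question
    "is-southern?": {"Guangzhou", "Xiamen"},     # broader accent class
    "is-accented?": {"Guangzhou", "Chongqing"},  # another grouping
}
q, gain = best_question(samples, questions)
print(q, round(gain, 2))
```

In a real system the same gain computation runs over state occupancy statistics of many context-dependent states rather than raw scalar values, and splitting stops once the best gain falls below the threshold mentioned above.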

    2.3 Multi-accent DNN tandem systems

In this paper, DNNs are trained to extract bottleneck features for both the DNN tandem and the MLAN tandem systems. They are trained to model frame posterior probabilities of context-dependent phone HMM state targets. The input to the DNNs is a context window of 11 successive frames of acoustic features for each time instance. The input to each neuron of each hidden layer is a linearly weighted sum of the outputs of the previous layer, which is then fed into a sigmoid activation function. At the output layer a softmax activation is used to compute the posterior probability of each output target. The networks were first pre-trained using layer-by-layer restricted Boltzmann machine (RBM) pre-training[14,15], then globally fine-tuned with back-propagation to minimize the frame-level cross-entropy. Moreover, the last hidden layer is given a significantly smaller number of neurons[24]. This narrow layer introduces a constriction in dimensionality while retaining the information useful for classification. Low dimensional bottleneck features can subsequently be extracted by taking the neuron values of this layer before the activation. The bottleneck features are then appended to the standard acoustic features and used to train the back-end GMM-HMMs in tandem systems.
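As a concrete illustration of this extraction pipeline, the sketch below stacks an 11-frame context window, runs it through sigmoid hidden layers with random (untrained) weights, and reads the bottleneck layer before its activation. The hidden-layer width is a toy value; the 42-dimensional input and 26-dimensional bottleneck match the setup described in section 4.2:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def stack_context(frames, left=5, right=5):
    """Stack each frame with 5 left and 5 right neighbours (11-frame window),
    padding the edges by repeating the first/last frame."""
    T, d = frames.shape
    padded = np.vstack([np.repeat(frames[:1], left, axis=0),
                        frames,
                        np.repeat(frames[-1:], right, axis=0)])
    return np.hstack([padded[t:t + T] for t in range(left + right + 1)])

def extract_bottleneck(frames, weights, biases, bottleneck_index):
    """Feed stacked frames through sigmoid hidden layers and return the
    pre-activation values of the narrow bottleneck layer."""
    h = stack_context(frames)
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = h @ W + b
        if i == bottleneck_index:
            return z          # bottleneck features: taken before the sigmoid
        h = sigmoid(z)
    raise ValueError("bottleneck_index beyond last layer")

# toy network: 42-dim input features, two 64-unit layers, 26-dim bottleneck
dims = [42 * 11, 64, 64, 26]
weights = [rng.normal(0, 0.1, (a, b)) for a, b in zip(dims[:-1], dims[1:])]
biases = [np.zeros(b) for b in dims[1:]]

feats = rng.normal(size=(100, 42))          # 100 frames of 42-dim features
bn = extract_bottleneck(feats, weights, biases, bottleneck_index=2)
tandem = np.hstack([feats, bn])             # append to the standard front-end
print(bn.shape, tandem.shape)
```

The `tandem` matrix corresponds to the augmented front-end on which the back-end GMM-HMMs are trained.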

    3 Multi-accent MLAN tandem systems

3.1 Multi-level adaptive network tandem systems

A multi-level adaptive network (MLAN) was first proposed for cross domain adaptation[27,28], where large amounts of out-of-domain telephone and meeting room speech were used to improve the performance of an ASR system trained on a limited amount of in-domain multi-genre archive broadcast data. The MLAN approach exploits the useful domain independent characteristics of the out-of-domain data to improve in-domain modelling performance, while reducing the mismatch across domains. In this paper, the MLAN approach is further exploited to improve the performance of accented Mandarin speech recognition systems.

An MLAN system consists of two component subnetworks. The first-level network is trained on acoustic features of large amounts of accent independent standard Mandarin speech data. The acoustic features of the target accented speech data are then fed forward through the first-level network, and the resulting bottleneck features are concatenated with the associated standard acoustic features and used as input to train the second-level network. After both component networks are trained, the entire training set, including both standard and accented Mandarin speech data, is fed forward through the two subnetworks in turn. The resulting bottleneck features are then concatenated with the standard front-ends and used to train the back-end GMM-HMMs.
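The two-level feed-forward pipeline can be sketched as follows (random untrained weights and toy layer sizes; in a real system each network would be trained on the data described above):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_dnn(dims):
    """Random weights/biases standing in for a trained network."""
    return [(rng.normal(0, 0.1, (a, b)), np.zeros(b))
            for a, b in zip(dims[:-1], dims[1:])]

def bottleneck(dnn, x):
    """Feed forward through sigmoid layers; the last layer plays the
    bottleneck and is returned pre-activation."""
    for W, b in dnn[:-1]:
        x = sigmoid(x @ W + b)
    W, b = dnn[-1]
    return x @ W + b

d_in, d_bn = 42, 26
net1 = make_dnn([d_in, 64, d_bn])          # level 1: standard Mandarin data
net2 = make_dnn([d_in + d_bn, 64, d_bn])   # level 2: accented data, taking
                                           # acoustics + level-1 bottlenecks
feats = rng.normal(size=(50, d_in))
bn1 = bottleneck(net1, feats)
bn2 = bottleneck(net2, np.hstack([feats, bn1]))
tandem = np.hstack([feats, bn2])           # front-end for the back-end GMM-HMM
print(tandem.shape)
```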

    3.2 Improved MLAN tandem systems for accent modelling

The MLAN framework can be viewed as stacked DNNs consisting of multiple levels of networks[20,21]. The second level network of stacked DNNs uses the information from the first level network only through its input features, while the weights and biases of the second level network are randomly initialized before pre-training and training. One important issue associated with conventional MLAN systems is the robust estimation of the second level DNN parameters. When using limited amounts of in-domain, accent specific speech data to adapt the second level DNN, as considered in this work, directly updating its weight parameters presents a significant data sparsity problem and can lead to poor generalization[21,25,26]. To address this issue, an improved form of pre-training initialization is used in this paper for the second level DNN.

First, all the hidden layer parameters of the second level accent adaptive DNN, together with the input layer parameters associated with the standard acoustic features (shown as the red and orange parts in Fig. 2), are initialized using those of the first level DNN trained on sufficient amounts of accent independent speech data. Second, the remaining input layer weights and biases, which connect the bottleneck features generated by the first level DNN, are initialized using RBM pre-training (shown in green in Fig. 2).

Fig. 2 Improved MLAN training for tandem systems. Left: first level DNN; right: second level DNN

When training the second level DNN, the parameters between the bottleneck layer and the output layer are updated first (shown in blue in Fig. 2), while the rest of the second level network is kept fixed. The entire second level network is then globally fine-tuned using back-propagation. Similar to the multilingual DNN adaptation approach investigated in Grezl et al.[21], the proposed method adapts the second level network parameters based on those of a well trained first level network.
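A minimal sketch of this initialization scheme, assuming a single hidden layer per network and using random values as stand-ins both for the trained first level parameters and for RBM pre-training:

```python
import numpy as np

rng = np.random.default_rng(2)
d_in, d_bn, d_h, d_out = 42, 26, 64, 120   # toy layer sizes

# first level DNN (assumed already trained on accent independent data)
W1_in = rng.normal(0, 0.1, (d_in, d_h)); b1_in = np.zeros(d_h)
W1_hid = rng.normal(0, 0.1, (d_h, d_out)); b1_hid = np.zeros(d_out)

# second level DNN input: [acoustic features ; level-1 bottleneck features]
W2_in = np.empty((d_in + d_bn, d_h))
W2_in[:d_in] = W1_in                       # rows for the acoustic features are
                                           # copied from the first level network
W2_in[d_in:] = rng.normal(0, 0.1, (d_bn, d_h))  # rows for the new bottleneck
                                           # inputs are freshly initialized
                                           # (stand-in for RBM pre-training)
b2_in = b1_in.copy()
W2_hid, b2_hid = W1_hid.copy(), b1_hid.copy()   # remaining layers also copied

# staged update: only the bottleneck-to-output parameters are trainable first,
# with the copied layers frozen; afterwards the whole network is fine-tuned
stage1_trainable = {"W2_hid", "b2_hid"}
stage2_trainable = {"W2_in", "b2_in", "W2_hid", "b2_hid"}
print(W2_in.shape)
```

The key point the sketch captures is that only the connections to the newly added bottleneck inputs lack a counterpart in the first level network and therefore need fresh initialization.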

    4 Experiments and results

    4.1 Data description

In this section the performance of various accented Mandarin speech recognition approaches is evaluated. 43 hours of standard Mandarin speech[30] and 22.6 hours of accented Mandarin speech covering the Guangzhou, Chongqing, Shanghai and Xiamen regional accents[31], released in the CASIA and RASC863 databases respectively, were used in training. Four test sets, one for each of these four accents, were also used. More detailed information on these data sets is presented in Table 1.

    4.2 Experiment setup

Baseline context-dependent phonetic decision tree clustered[13,32] triphone GMM-HMM systems with 16 Gaussians per state were trained on 42 dimensional acoustic features consisting of heteroscedastic linear discriminant analysis (HLDA) projected perceptual linear predictive (PLP) features and pitch parameters. These were used as the input features, and to produce accent independent state level alignments, for training DNNs with 2048 neurons in each non-bottleneck hidden layer using the Kaldi toolkit[33]; the bottleneck layer had 26 neurons. All DNNs were trained with an initial learning rate of 0.008 and the commonly used newbob annealing schedule. Mean normalization and principal component analysis (PCA) de-correlation were applied to the resulting bottleneck features before they were appended to the above acoustic features.
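The mean normalization and PCA de-correlation step can be sketched as below (synthetic correlated features stand in for real bottleneck outputs; the transform is estimated on training data and then applied to test data):

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_norm_pca(bn_train, bn_test):
    """Mean-normalize bottleneck features and de-correlate them with a PCA
    transform estimated on the training set only."""
    mean = bn_train.mean(axis=0)
    centered = bn_train - mean
    cov = centered.T @ centered / len(centered)
    _, eigvecs = np.linalg.eigh(cov)
    P = eigvecs[:, ::-1]                 # components, largest variance first
    return centered @ P, (bn_test - mean) @ P

# synthetic 26-dim features with strong cross-dimension correlation
bn_train = rng.normal(size=(500, 26)) @ rng.normal(size=(26, 26))
bn_test = rng.normal(size=(100, 26)) @ rng.normal(size=(26, 26))
train_pca, test_pca = mean_norm_pca(bn_train, bn_test)

cov = np.cov(train_pca.T)
off_diag = cov - np.diag(np.diag(cov))
print(np.abs(off_diag).max() < 1e-8)     # de-correlated on the training set
```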

    4.3 Performance of multi-accent GMM-HMM systems

The performance of multi-accent GMM-HMM systems was first evaluated on Guangzhou accented speech data, as shown in Table 2. In this table, the “Model AD” column denotes whether accent dependent questions were used in decision tree state tying. The table shows that the multi-accent HMM model (System (2)), trained by adding all four types of accented speech to the standard Mandarin data, outperformed folding in the Guangzhou accent data only (System (1)). In addition, the explicit use of accent information during decision tree clustering (System (3)) obtained a further character error rate (CER) reduction of 2.7% absolute, from 17.77% down to 15.07%.
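Absolute and relative CER reductions are used throughout the result sections; a quick check on the Table 2 numbers quoted above:

```python
def cer_reduction(baseline_cer, system_cer):
    """Absolute (percentage points) and relative (%) CER reduction."""
    absolute = baseline_cer - system_cer
    relative = 100.0 * absolute / baseline_cer
    return absolute, relative

# System (2) -> System (3) in Table 2: 17.77% down to 15.07%
abs_red, rel_red = cer_reduction(17.77, 15.07)
print(round(abs_red, 2), round(rel_red, 1))   # 2.7 points absolute
```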

    4.4 Performance of multi-accent DNN tandem systems

A second set of experiments, comparable to those shown in Table 2, was then conducted to evaluate the performance of four tandem systems on the Guangzhou accent test set. In addition to the standard Mandarin speech data, either the Guangzhou accent data alone or all four accent types were used in DNN training. All DNNs here had 4 hidden layers including the bottleneck layer. The results are shown in Table 3. The multi-accent trained DNN tandem system (System (4) in Table 3), which both used accent dependent questions in decision tree based HMM state clustering and included all four accent types in DNN training, gave the lowest character error rate of 13.16%.

    Table 1 Standard and accented Mandarin speech data sets

    Table 2 Performance of baseline GMM-HMM systems trained on standard Mandarin speech plus Guangzhou accent data only, or all four accents of Table 1, and evaluated on Guangzhou accent test set

    4.5 Performance of multi-accent MLAN tandem systems

The performance of various MLAN tandem systems on Guangzhou accent speech data is shown in Table 4. In addition to the standard Mandarin speech data, all four accent types were used in both baseline HMM and first level DNN training. The first level DNN had 4 hidden layers. The first four MLAN tandem systems used a conventional random initialization of the second level DNN, with 2 or 4 hidden layers, prior to pre-training and full network update on the target accent data. As discussed in sections 1 and 3.2, when using limited amounts of accent specific speech data to estimate the second level DNN, a direct update of its weight parameters can lead to non-robust estimation and poor generalization. This is shown in the first four lines of Table 4. Increasing the number of hidden layers of the second level DNN from 2 to 4 led to further performance degradation. Compared with the best DNN tandem system, shown in the bottom line of Table 3, a performance degradation of 0.92% absolute was observed.

    In contrast, when the improved pre-training based MLAN tandem system discussed in section 3.2 was used, as shown in the last two rows in Table 4, consistent improvements were obtained using both the accent independent and dependent MLAN tandem configurations over the comparable DNN tandem systems shown in the last two rows of Table 3.

    Table 3 Performance of DNN tandem systems on Guangzhou accent test set

Table 4 Performance of MLAN tandem systems on Guangzhou accent test set

    4.6 Performance evaluation on multiple accent test sets

A full set of experiments was finally conducted to evaluate the performance of the various multi-accent systems on the four accent test sets: Guangzhou, Chongqing, Shanghai and Xiamen. The results are presented in Table 5 and Table 6 for the multi-accent GMM-HMM, DNN tandem and improved MLAN tandem systems. “+MPE” denotes MPE discriminative training[34] performed on the maximum likelihood trained “ML” model, and “+MLLR” denotes a subsequent maximum likelihood linear regression (MLLR) adaptation[35] on the “+MPE” model. Moreover, System (0), denoted “HMM(out)”, used only out of domain data, namely standard Mandarin data, to train the GMM-HMMs, while “HMM(ma)” denotes multi-accent GMM-HMM systems trained on all accented data as well as the standard Mandarin data. Both the DNN tandem and the improved MLAN tandem systems used the “HMM(ma) ML” models as their baselines. All DNNs here had 6 hidden layers including the bottleneck layer. “DNN AD” denotes a DNN trained with accent dependent state alignments, while all DNNs used in the MLAN tandem systems were trained with accent independent state alignments.

A general trend found in Tables 5 and 6 is that the explicit use of accent information in training leads to consistent improvements for the GMM-HMM, DNN tandem and MLAN tandem systems. For example, by explicitly using accent information during model training, an absolute CER reduction of 1.5% (9% relative) was obtained on the GMM-HMM systems (System (2) compared to System (1) in Table 5). Although the improved MLAN tandem systems gained less from MPE training than the DNN tandem systems, they improved more markedly when MLLR adaptation was applied, indicating that the improved MLAN framework remains complementary to MLLR adaptation. The best performance was obtained using the improved MLAN tandem system with accent dependent modelling (System (4) in Table 6). Using this system, an average CER reduction of 0.8% absolute (6% relative) was obtained over the baseline DNN tandem system trained without explicitly using any accent information (System (3) in Table 5).

    Table 5 Performance of baseline multi-accent GMM-HMM and DNN tandem systems evaluated on all four accent test sets

Comparing these results to previous work also evaluated on the RASC863 database, Zhang et al.[36,37] used augmented HMMs and dynamic Gaussian mixture selection (DGMS), rather than the multi-style accent modelling HMMs and multi-accent decision tree state tying used in this paper. Their syllable error rates (SER) for Guangzhou (Yue), Chongqing (Chuan) and Shanghai (Wu) accented Mandarin ASR remained above 40%, and their best relative SER reduction against an HMM trained on standard Mandarin (Putonghua) was about 20%. Although SER is not directly comparable to CER, it can still serve as a reference. For these three accents, the comparable HMM system in this paper (System (2) in Table 5) obtained ML CERs of about 18%, a relative reduction of more than 40% against System (0) in Table 5. This might be because the information in standard Mandarin data cannot complement the low resource accented Mandarin data in the augmented HMM and DGMS approaches.

    5 Conclusions

In this paper, implicit and explicit accent modelling approaches were investigated for low resource accented Mandarin speech recognition. The improved multi-accent GMM-HMM and MLAN tandem systems significantly outperformed the baseline GMM-HMM and DNN tandem HMM systems by 0.8%-1.5% absolute (6%-9% relative) in character error rate after MPE training and adaptation. The experimental results suggest the proposed techniques may be useful for accented speech recognition. Future work will focus on modelling a larger and more diverse set of accents.

    Table 6 Performance of improved MLAN tandem systems evaluated on all four accent test sets

[1] Liu Y, Fung P. Partial change accent models for accented Mandarin speech recognition [C] // IEEE Workshop on Automatic Speech Recognition and Understanding, 2003: 111-116.

    [2] Li J,Zheng TF,Byrne W, et al. A dialectal Chinese speech recognition framework [J].Journal of Computer Science and Technology, 2006,21(1):106-115.

[3] Fisher V, Gao YQ, Janke E. Speaker-independent upfront dialect adaptation in a large vocabulary continuous speech recognizer [C] // The 5th International Conference on Spoken Language Processing, 1998: 787-790.

    [4] Oh YR,Kim HK.MLLR/MAP adaptation using pronunciation variation for non-native speech recognition [C] //IEEE Workshop on Automatic Speech Recognition &Understanding,2009:216-221.

    [5] Wang ZR,Schultz T,Waibel A.Comparison of acoustic model adaptation techniques on non-native speech [C] // 2003 IEEE International Conference on Acoustics,Speech,and Signal Processing,2003:540-543.

    [6] Tomokiyo LM,Waibel A.Adaptation methods for non-native speech [J]. Multilingual Speech and Language Processing,2003:6.

[7] Liu M, Xu B, Huang T, et al. Mandarin accent adaptation based on context-independent/context-dependent pronunciation modeling [C] // 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2000: 1025-1028.

    [8] Zheng Y,Sproat R,Gu L,et al. Accent detection and speech recognition for Shanghai-accented Mandarin [C] // Interspeech,2005: 217-220.

    [9] Lippmann RP,Martin E,Paul DB. Multi-style training for robust isolated-word speech recognition[C] // IEEE International Conference on Acoustics,Speech,and Signal Processing,1987:705-708.

[10] Young SJ, Odell JJ, Woodland PC. Tree-based state tying for high accuracy acoustic modeling [C] // Proceedings of the Workshop on Human Language Technology, 1994: 307-312.

    [11] Reichl W,Chou W.A unified approach of incorporating general features in decision tree based acoustic modeling [C] // 1999 IEEE International Conference on Acoustics,Speech,and Signal Processing,1999: 573-576.

    [12] Sim KC,Li H. Robust phone set mapping using decision tree clustering for cross-lingual phone recognition [C] // 2008 IEEE International Conference on Acoustics,Speech,and Signal Processing, 2008: 4309-4312.

    [13] Seide F,Li G,Yu D. Conversational speech transcription using context-dependent deep neural networks [C] // Interspeech,2011: 437-440.

[14] Dahl GE, Yu D, Deng L, et al. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition [J]. IEEE Transactions on Audio, Speech, and Language Processing, 2012, 20(1): 30-42.

[15] Hinton G, Deng L, Yu D, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups [J]. IEEE Signal Processing Magazine, 2012, 29(6): 82-97.

    [16] Seltzer ML,Yu D,Wang Y.An investigation of deep neural networks for noise robust speech recognition[C] // 2013 IEEE International Conference on Acoustics,Speech and Signal Processing,2013:7398-7402.

    [17] Yu D,Seltzer ML.Improved bottleneck features using pretrained deep neural networks [C] // The 12th Annual Conference of the International Speech Communication Association,2011:237-240.

    [18] Xie X,Su R,Liu X,et al.Deep neural network bottleneck features for generalized variable parameter HMMs [C] // The 15th Annual Conference of the International Speech Communication Association, 2014: 2739-2743.

    [19] Thomas S,Seltzer ML,Church K,et al. Deep neural network features and semi-supervised training for low resource speech recognition [C] // 2013 IEEE International Conference on Acoustics,Speech and Signal Processing, 2013:6704-6708.

[20] Knill KM, Gales MJF, Rath SP, et al. Investigation of multilingual deep neural networks for spoken term detection [C] // 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, 2013: 138-143.

    [21] Grezl F,Karafiat M,Vesely K.Adaptation of multilingual stacked bottle-neck neural network structure for new language [C] // 2014 IEEE International Conference on Acoustics,Speech and Signal Processing, 2014: 7654-7658.

    [22] Bourlard HA, Morgan N.Connectionist Speech Recognition: A Hybrid Approach [M].USA:Academic Publishers, 1993.

    [23] Hermansky H,Ellis DW,Sharma S.Tandem connectionist feature extraction for conventional HMM systems [C]// 2000 IEEE International Conference on Acoustics, Speech and Signal Processing,2000: 1635-1638.

    [24] Grezl F,Karafiat M,Kontar S, et al. Probabilistic and bottle-neck features for LVCSR of meetings[C]// 2007 IEEE International Conference on Acoustics,Speech and Signal Processing,2007:757-760.

    [25] Wang H,Wang L,Liu X. Multi-level adaptive network for accented Mandarin speech recognition[C]// The 4th IEEE International Conference on Information Science and Technology,2014: 602-605.

    [26] Sui X,Wang H,Wang L.A general framework for multi-accent Mandarin speech recognition using adaptive neural networks [C] // The 9th IEEE International Symposium on Chinese Spoken Language Processing, 2014: 118-122.

    [27] Bell P,Swietojanski P,Renals S.Multi-level adaptive networks in tandem and hybrid ASR systems [C] // 2013 IEEE International Conference on Acoustics, Speech and Signal Processing,2013:6975-6979.

    [28] Bell PJ,Gales MJF,Lanchantin P, et al.Transcription of multi-genre media archives using out-ofdomain data [C] // 2012 IEEE on Spoken Language Technology Workshop,2012:324-329.

    [29] Liu X,Gales MJF,Hieronymus JL,et al.Investigation of acoustic units for LVCSR systems[C] // 2011 IEEE International Conference on Acoustics, Speech and Signal Processing,2011:4872-4875.

    [30] 中國科學(xué)院自動化研究所.CASIA 北方口音語音庫 [OL]. [2015-07-28]. http://www.chineseldc. org/doc/CLDC-SPC-2004-015/intro.htm.

[31] Institute of Linguistics, Chinese Academy of Social Sciences. Putonghua speech corpus of the four major dialect regions [OL]. [2015-08-02]. http://www.chineseldc.org/doc/CLDC-SPC-2004-004/intro.htm.

    [32] Young SJ,Evermann G,Gales MJF, et al. The HTK Book (Revised for HTK version 3.4.1)[M]. Cambridge University, 2009.

    [33] Ghoshal A, Povey D.The Kaldi speech recognition toolkit [EB/OL]. 2013-02-03 [2015-08-02].http://kaldi.sourceforge.net.

    [34] Povey D, Woodland PC. Minimum phone error and I-smoothing for improved discriminative training[C] // 2002 IEEE International Conference on Acoustics,Speech, and Signal Processing,2002:105-108.

    [35] Gales MJ,Woodland PC.Mean and variance adaptation within the MLLR framework [J]. Computer Speech & Language, 1996, 10(4): 249-264.

    [36] Zhang C,Liu Y,Xia Y,et al. Reliable accent specific unit generation with dynamic Gaussian mixture selection for multi-accent speech recognition [C] // 2011 IEEE International Conference on Multimedia and Expo, 2011: 1-6.

    [37] Zhang C,Liu Y,Xia Y,et al. Discriminative dynamic Gaussian mixture selection with enhanced robustness and performance for multi-accent speech recognition [C] // 2012 IEEE International Conference on Acoustics,Speech and Signal Processing,2012:4749-4752.

    Investigation of Deep Neural Network Acoustic Modelling Approaches for Low Resource Accented Mandarin Speech Recognition

XIE Xurong1,3, SUI Xiang1,3, LIU Xunying1,2, WANG Lan1,3

    1( Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China )
    2( Cambridge University Engineering Department, Cambridge CB2 1TN, U.K. )
    3( The Chinese University of Hong Kong, Hong Kong 999077, China )

The Mandarin Chinese language is known to be strongly influenced by a rich set of regional accents, while Mandarin speech with each accent is of quite low resource. Hence, an important task in Mandarin speech recognition is to appropriately model the acoustic variabilities imposed by accents. In this paper, an investigation of implicit and explicit use of accent information in a range of deep neural network (DNN) based acoustic modelling techniques was conducted. Approaches to multi-accent modelling, including multi-style training, multi-accent decision tree state tying, DNN tandem and multi-level adaptive network (MLAN) tandem hidden Markov model (HMM) modelling, were combined and compared. On a low resource accented Mandarin speech recognition task consisting of four regional accents, an improved MLAN tandem HMM system explicitly leveraging the accent information was proposed, and significantly outperformed the baseline accent independent DNN tandem systems by 0.8%-1.5% absolute (6%-9% relative) in character error rate after sequence level discriminative training and adaptation.

    speech recognition; decision tree; deep neural network; accent; adaptation

CLC number: TP 391.4    Document code: A

    Received: 2015-08-12 Revised: 2015-09-14

Foundation: National Natural Science Foundation of China (NSFC 61135003); Shenzhen Fundamental Research Program (JCYJ20130401170306806, JC201005280621A)

Author: Xie Xurong, Research Assistant. His research interests are speech recognition and machine learning; Sui Xiang, Master. Her research interest is speech recognition; Liu Xunying, Senior Research Associate. His research interests include large vocabulary continuous speech recognition, language modelling, noise robust speech recognition, and speech and language processing; Wang Lan (corresponding author), Professor. Her research interests are large vocabulary continuous speech recognition, speech visualization and speech centric human-machine interaction. E-mail: lan.wang@siat.ac.cn.

欧美不卡视频在线免费观看| 最近在线观看免费完整版| av在线观看视频网站免费| 亚洲男人的天堂狠狠| 国产精品一区二区三区四区久久| 欧美日韩国产亚洲二区| 99riav亚洲国产免费| 91午夜精品亚洲一区二区三区 | 日本精品一区二区三区蜜桃| 中国美白少妇内射xxxbb| 天天躁日日操中文字幕| 国产高潮美女av| 成人二区视频| 亚洲av美国av| 国产精品亚洲美女久久久| 97热精品久久久久久| 日本免费一区二区三区高清不卡| 亚洲av不卡在线观看| 亚洲自偷自拍三级| 亚洲,欧美,日韩| www.www免费av| 国产三级中文精品| 亚洲国产精品久久男人天堂| 国产一区二区激情短视频| 成人高潮视频无遮挡免费网站| 99久久精品国产国产毛片| 国产男靠女视频免费网站| 久久久久久久久久久丰满 | 观看免费一级毛片| 黄色视频,在线免费观看| 国产精品1区2区在线观看.| 国产精品一区www在线观看 | 人妻丰满熟妇av一区二区三区| 99九九线精品视频在线观看视频| 在线观看免费视频日本深夜| 在线看三级毛片| 久久久成人免费电影| 少妇的逼水好多| 免费搜索国产男女视频| 国产乱人视频| 精品99又大又爽又粗少妇毛片 | 人妻制服诱惑在线中文字幕| 1024手机看黄色片| 精华霜和精华液先用哪个| 久久久精品大字幕| 天美传媒精品一区二区| 身体一侧抽搐| 女的被弄到高潮叫床怎么办 | 国产探花极品一区二区| 一本一本综合久久| 亚洲成人中文字幕在线播放| 免费在线观看日本一区| 69人妻影院| 永久网站在线| 成年女人毛片免费观看观看9| 亚州av有码| 老司机深夜福利视频在线观看| 亚洲四区av| 日本色播在线视频| 联通29元200g的流量卡| 精品免费久久久久久久清纯| 1000部很黄的大片| 亚洲人成网站在线播| 九色成人免费人妻av| 99久久九九国产精品国产免费| 午夜爱爱视频在线播放| 久久精品国产自在天天线| 99精品在免费线老司机午夜| 99国产精品一区二区蜜桃av| 精品人妻偷拍中文字幕| 久久久久久久久久久丰满 | 黄色欧美视频在线观看| 校园人妻丝袜中文字幕| 99久久久亚洲精品蜜臀av| 校园春色视频在线观看| 丰满乱子伦码专区| 国语自产精品视频在线第100页| 热99re8久久精品国产| 国产精品久久久久久精品电影| 国产亚洲精品av在线| 国产精华一区二区三区| 成人三级黄色视频| 中文字幕人妻熟人妻熟丝袜美| 18禁在线播放成人免费| 不卡一级毛片| 中文字幕免费在线视频6| 久久久久久久午夜电影| 国产老妇女一区| 亚洲一区高清亚洲精品| 亚洲熟妇中文字幕五十中出| 99久久九九国产精品国产免费| 男插女下体视频免费在线播放| 美女大奶头视频| 国产欧美日韩精品一区二区| 精品久久久久久久末码| 国产精品福利在线免费观看| 日韩大尺度精品在线看网址| 69av精品久久久久久| 此物有八面人人有两片| av在线老鸭窝| 日本a在线网址| 国产乱人视频| 国产综合懂色| 欧美性感艳星| av.在线天堂| 干丝袜人妻中文字幕| 哪里可以看免费的av片| .国产精品久久| 欧美3d第一页| 色哟哟·www| 国产视频一区二区在线看| 久久久久性生活片| 日本精品一区二区三区蜜桃| 欧美不卡视频在线免费观看| 韩国av在线不卡| 国产高清视频在线观看网站| 国产白丝娇喘喷水9色精品| 日本撒尿小便嘘嘘汇集6| 国产精品一区二区三区四区久久| 又黄又爽又免费观看的视频| 韩国av在线不卡| 香蕉av资源在线| 中国美白少妇内射xxxbb| 久久久久免费精品人妻一区二区| 男女视频在线观看网站免费| 2021天堂中文幕一二区在线观| 亚洲自偷自拍三级| 欧美激情久久久久久爽电影| 免费在线观看成人毛片| 亚洲熟妇中文字幕五十中出| 亚洲经典国产精华液单| 女人被狂操c到高潮| 男女边吃奶边做爰视频| 黄色日韩在线| 免费看av在线观看网站| 九色成人免费人妻av| 国产视频一区二区在线看| 精品日产1卡2卡|