
Investigation of Deep Neural Network Acoustic Modelling Approaches for Low Resource Accented Mandarin Speech Recognition

    Journal of Integration Technology (集成技術(shù)), 2015, No. 6 (published 2015-11-25)

    XIE Xurong1,3 SUI Xiang1,3 LIU Xunying1,2 WANG Lan1,3

    1(Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China)
    2(Cambridge University Engineering Department, Cambridge CB2 1TN, U.K.)

    3(The Chinese University of Hong Kong, Hong Kong 999077, China)

    Mandarin Chinese is known to be strongly influenced by numerous regional accents, yet speech data for Mandarin with different accents are quite scarce. An important goal in Mandarin speech recognition is therefore to appropriately model the acoustic variation introduced by accents. This paper presents an investigation of a range of deep neural network based acoustic modelling techniques that use accent information either implicitly or explicitly. Multi-accent modelling approaches, including multi-style training, multi-accent decision tree state tying, deep neural network tandem and multi-level adaptive network tandem hidden Markov model modelling, are combined and compared. An improved multi-level adaptive network tandem hidden Markov model system that explicitly exploits accent information is proposed and applied to a low resource accented Mandarin speech recognition task covering four regional accents. After sequence level discriminative training and adaptation, this system significantly outperformed the baseline accent independent deep neural network tandem system, with character error rate reductions of 0.8% to 1.5% absolute (6% to 9% relative).

    Keywords: speech recognition; decision tree; deep neural network; accent; adaptation

    1 Introduction

An important part of the Mandarin speech recognition task is to appropriately handle the influence of a rich set of diverse accents. There are at least seven major regional accents in China[1,2]. The variabilities they impose on accented Mandarin speech are complex and widespread, and the resulting mismatch can severely degrade automatic speech recognition (ASR) performance. To handle this problem, ASR systems can be trained on large amounts of accent specific speech data[3]. However, collecting and annotating accented data is expensive and time-consuming, so the amount of available accent specific speech data is often quite limited.

    An alternative approach is to exploit the accent independent features among standard Mandarin speech data, which are often available in large amounts, to improve robustness and generalization. Along this line, two categories of techniques can be used. The first category of techniques aims to directly adapt systems trained on standard Mandarin speech data[4-8]. The second category uses standard Mandarin speech to augment the limited in-domain accent specific data in a multi-style training framework[9]. For example, an accent dependent phonetic decision tree tying technique was proposed[10-12]. It allows the resulting acoustic models to explicitly learn both the accent independent and the accent specific characteristics in speech.

Recently, deep neural networks (DNNs) have become increasingly popular for acoustic modelling, due to their inherent robustness to the highly complex factors of variability found in natural speech[13-15], including external factors such as environment noise[16-18] and language dependent linguistic features[19-21]. In order to incorporate DNNs, or multi-layer perceptrons (MLPs) in general, into hidden Markov model (HMM) based acoustic models, two approaches can be used. The first uses a hybrid architecture that estimates the HMM state emission probabilities with DNNs[22]. The second uses an MLP or DNN trained to produce phoneme posterior probabilities as a feature extractor[17]. The resulting probabilistic features[23] or bottleneck features[24] are concatenated with standard front-ends and used to train Gaussian mixture model (GMM) HMMs in a tandem fashion. As GMM-HMMs remain the back-end classifier, the tandem approach requires minimal change to downstream techniques such as adaptation and discriminative training, while the bottleneck features retain the useful information.

Using limited amounts of accented data alone is insufficient to obtain good generalization for the resulting acoustic models, including DNNs. Therefore, a key problem in accented Mandarin speech recognition with low resources, as considered in this paper, is how to improve coverage and generalization by exploiting the commonalities and specialties between standard and accented speech data during training. Using conventional multi-style DNN training on a mix of standard and accented Mandarin speech data, accent independent features found in both can be implicitly learned[25,26].

Inspired by recent work on multi-lingual low resource speech recognition[19-21,27], this paper investigates and compares explicit and implicit uses of accent information in state-of-the-art deep neural network (DNN) based acoustic modelling techniques, including conventional tied state GMM-HMMs, DNN tandem systems and multi-level adaptive network (MLAN)[27,28] tandem HMMs. These approaches are evaluated on a low resource accented Mandarin speech recognition task consisting of accented speech collected from four regions: Guangzhou, Chongqing, Shanghai and Xiamen. The improved multi-accent GMM-HMM and MLAN tandem systems, which explicitly leverage accent information during model training, significantly outperformed the baseline GMM-HMM and DNN tandem HMM systems by 0.8%-1.5% absolute (6%-9% relative) in character error rate after minimum phone error (MPE) based discriminative training and adaptation.

The rest of this paper is organized as follows. Standard acoustic accent modelling approaches are reviewed in section 2, including multi-accent decision tree state tying for GMM-HMM systems and multi-accent DNN tandem systems. MLAN tandem systems with improved pre-training for accent modelling are presented in section 3. Experimental results are presented in section 4. Section 5 draws conclusions and discusses future work.

    2 Acoustic modelling for accented speech

    2.1 Multi-style accent modelling

Multi-style training[9] is used in this paper for accent modelling. This approach pools speech data collected in a wide range of styles and domains, then exploits the implicit modelling ability of the mixture models used in GMM-HMMs and, more recently, of deep neural networks[16,20,21] to obtain good generalization to unseen conditions. In the accented speech modelling experiments of this paper, a large amount of standard Mandarin speech data is used to augment the limited accented data during training and provide useful accent independent features.

    2.2 Multi-accent decision tree state tying

As the phonetic and phonological realization of Mandarin speech differs significantly between regional accents, inappropriate tying of context dependent HMM states associated with different accents can lead to poor coverage and discrimination for GMM-HMM based acoustic models. To handle this problem, decision tree clustering[10,11] with multi-accent branches is used in this paper. In order to effectively exploit the commonalities and specificities found in standard and accented Mandarin data, accent dependent (AD) questions are used together with conventional linguistic questions during the clustering process. A sample of the accented branches is shown in the red part of Fig. 1.

Fig. 1 A part of a multi-accent decision tree. Blue: conventional branches; red: accented branches

In common with standard maximum likelihood (ML) based phonetic decision tree tying[13], the question giving the highest log-likelihood improvement is chosen when splitting a tree node. The algorithm iterates until no further split yields a log-likelihood increase above a certain threshold, so the multi-accent information is used explicitly during state tying. As expected, the use of accent dependent questions dramatically increases the number of context-dependent phone units to consider during training and decoding. As not all of them are allowed by the lexicon, following the approach proposed in Liu's report[29], only the valid subset under the lexical constraint is considered in this paper.
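The greedy likelihood-based splitting described above can be sketched as follows. This is a minimal illustration rather than a toolkit implementation: each candidate question, accent dependent or conventional, is scored by the diagonal-Gaussian log-likelihood gain of the split it induces, and the question names and data in the usage below are hypothetical.

```python
import numpy as np

def node_loglik(frames):
    """Log-likelihood of a set of frames under a single diagonal-covariance
    Gaussian estimated from those same frames (the usual ML state-tying score)."""
    n, d = frames.shape
    var = frames.var(axis=0) + 1e-8          # per-dimension ML variance
    return -0.5 * n * (d * np.log(2 * np.pi) + np.log(var).sum() + d)

def best_question(frames, tags, questions):
    """Pick the question whose yes/no split of the node gives the largest
    log-likelihood gain.  `questions` maps a question name to the set of
    tags answering "yes"; accent dependent questions sit alongside the
    conventional phonetic-context ones and compete on equal terms."""
    base = node_loglik(frames)
    best_name, best_gain = None, 0.0
    for name, yes_tags in questions.items():
        mask = np.isin(tags, sorted(yes_tags))
        if mask.all() or not mask.any():
            continue                          # question does not split this node
        gain = node_loglik(frames[mask]) + node_loglik(frames[~mask]) - base
        if gain > best_gain:
            best_name, best_gain = name, gain
    return best_name, best_gain
```

On synthetic data where a regional accent shifts a state's acoustic mean, the accent question would win the split; splitting stops once no gain exceeds the chosen threshold.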

    2.3 Multi-accent DNN tandem systems

In this paper, DNNs are trained to extract bottleneck features for both DNN tandem and MLAN tandem systems. They model the frame posterior probabilities of context-dependent phone HMM state targets. The input to the DNNs is a context window of 11 successive frames of features at each time instant. The input to each neuron of each hidden layer is a linearly weighted sum of the outputs of the previous layer, which is then fed into a sigmoid activation function. At the output layer a softmax activation computes the posterior probability of each output target. The networks are first pre-trained layer by layer using restricted Boltzmann machine (RBM) pre-training[14,15], then globally fine-tuned with back-propagation to minimize the frame-level cross-entropy. Moreover, the last hidden layer is set to have a significantly smaller number of neurons[24]; this narrow layer introduces a constriction in dimensionality while retaining the information useful for classification. Low dimensional bottleneck features can then be extracted by taking the values of these neurons before the activation. The bottleneck features are appended to the standard acoustic features and used to train the back-end GMM-HMMs in tandem systems.
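The bottleneck extraction pipeline just described (frame splicing into an 11-frame window, sigmoid hidden layers, and pre-activation bottleneck values) can be sketched roughly as below, with random stand-in weights in place of a trained network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def splice(frames, context=5):
    """Stack each frame with its +/-`context` neighbours (an 11-frame
    window for context=5), repeating edge frames as padding."""
    padded = np.pad(frames, ((context, context), (0, 0)), mode="edge")
    return np.hstack([padded[i:i + len(frames)] for i in range(2 * context + 1)])

def extract_bottleneck(spliced, layers, bottleneck_idx):
    """Feed spliced features through sigmoid hidden layers and return the
    *pre-activation* values of the narrow bottleneck layer.  `layers` is a
    list of (W, b) pairs up to and including the bottleneck layer."""
    h = spliced
    for i, (W, b) in enumerate(layers):
        z = h @ W + b
        if i == bottleneck_idx:
            return z                     # taken before the sigmoid
        h = sigmoid(z)
    raise ValueError("bottleneck_idx beyond network depth")
```

With 42 dimensional front-ends this gives a 462 dimensional spliced input; a 26 neuron bottleneck appended to the original features yields a 68 dimensional tandem front-end, matching the dimensions reported in section 4.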

    3 Multi-accent MLAN tandem systems

3.1 Multi-level adaptive network tandem systems

    A multi-level adaptive network (MLAN) was first proposed for cross domain adaptation[27,28], where large amounts of out-of-domain telephone and meeting room speech were used to improve the performance of an ASR system trained on a limited amount of in-domain multi-genre archive broadcast data. The MLAN approach exploits the useful domain independent characteristics in the out-of-domain data to improve in-domain modelling performance, while reducing the mismatch across domains. In this paper, the MLAN approach is further exploited to improve the performance of accented Mandarin speech recognition systems.

An MLAN system consists of two component subnetworks. The first-level network is trained on acoustic features of large amounts of accent independent standard Mandarin speech data. The acoustic features of the target accented speech data are then fed forward through the first-level network, and the resulting bottleneck features are concatenated with the associated standard acoustic features and used as input to train the second-level network. After both component networks are trained, the entire training set, including both standard and accented Mandarin speech data, is fed forward through the two subnetworks in turn. The resulting bottleneck features are concatenated with the standard front-ends and used to train the back-end GMM-HMMs.

    3.2 Improved MLAN tandem systems for accent modelling

The MLAN framework can be considered as stacked DNNs consisting of multiple levels of networks[20,21]. The second level network of stacked DNNs uses the information of the first level network only in its input features, while the weights and biases of the second level network are randomly initialized before pre-training and training. One important issue with conventional MLAN systems is the robust estimation of the second level DNN parameters. When using limited amounts of in-domain, accent specific speech data to adapt the second level DNN, as considered in this work, directly updating its weight parameters presents a significant data sparsity problem and can lead to poor generalization[21,25,26]. To address this issue, an improved form of pre-training initialization is used in this paper for the second level DNN.

First, all hidden layer parameters of the second level accent adaptive DNN, together with the input layer parameters associated with the standard acoustic features (shown as the red and orange parts of Fig. 2), are initialized from the first level DNN, which was trained on sufficient amounts of accent independent speech data. Second, the remaining input layer weights and biases, which connect the bottleneck features generated by the first level DNN, are initialized using RBM pre-training (shown as green in Fig. 2).

Fig. 2 Improved MLAN training for tandem systems. Left: first level DNN; right: second level DNN

When training the second level DNN, the parameters between the bottleneck layer and the output layer are updated first (shown as blue in Fig. 2), while the rest of the second level network is kept fixed. The entire second level network is then globally fine-tuned using back-propagation. Similar to the multilingual DNN adaptation approach investigated in Grezl's report[21], the proposed method adapts the second level network parameters based on those of a well trained first level network.
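The parameter initialization described above can be sketched for the input layer alone. The layer sizes are illustrative, and small random values stand in for the RBM pre-training of the new block:

```python
import numpy as np

n_ac, n_bn, n_hid = 42, 26, 64               # illustrative layer sizes
rng = np.random.default_rng(3)

# Input layer of the first level DNN, trained on ample standard Mandarin data.
W1_in = rng.normal(scale=0.1, size=(n_ac, n_hid))
b1_in = np.zeros(n_hid)

# The second level input layer sees [acoustic ; bottleneck] features: the
# rows for the acoustic dimensions are copied from the first level network,
# and only the rows for the new bottleneck inputs are freshly initialised
# (RBM pre-training in the paper; small random values stand in here).
W2_in = np.vstack([W1_in,
                   rng.normal(scale=0.01, size=(n_bn, n_hid))])
b2_in = b1_in.copy()
```

Training would then update only the bottleneck-to-output parameters first, with the copied parameters frozen, before the global fine-tuning pass.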

    4 Experiments and results

    4.1 Data description

In this section the performance of various accented Mandarin speech recognition approaches is evaluated. The training data comprised 43 hours of standard Mandarin speech[30] and 22.6 hours of accented Mandarin speech covering the Guangzhou, Chongqing, Shanghai and Xiamen regional accents[31], released in the CASIA and RASC863 databases respectively. Four test sets, one for each of these four accents, were also used. More detailed information on these data sets is presented in Table 1.

    4.2 Experiment setup

Baseline context-dependent phonetic decision tree clustered[13,32] triphone GMM-HMM systems with 16 Gaussians per state were trained on 42 dimensional acoustic features consisting of heteroskedastic linear discriminant analysis (HLDA) perceptual linear predictive (PLP) features and pitch parameters. These features were used as the DNN inputs, and the baseline systems produced the accent independent state level alignments used to train DNNs, with 2048 neurons in each non-bottleneck hidden layer, using the Kaldi toolkit[33]; the bottleneck layer had 26 neurons. All DNNs were trained with an initial learning rate of 0.008 and the commonly used newbob annealing schedule. Mean normalization and principal component analysis (PCA) de-correlation were applied to the resulting bottleneck features before they were appended to the above acoustic features.
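The bottleneck post-processing (mean normalization followed by PCA de-correlation) can be sketched as a plain full-rank PCA; this is an assumed reading of the setup, not the toolkit's exact implementation:

```python
import numpy as np

def mean_norm_pca(bn_feats):
    """Mean-normalise bottleneck features, then de-correlate them with a
    full-rank PCA rotation (dimensionality is preserved; only the basis
    changes), before appending them to the acoustic features."""
    centred = bn_feats - bn_feats.mean(axis=0)
    cov = np.cov(centred, rowvar=False)
    _, vecs = np.linalg.eigh(cov)
    return centred @ vecs[:, ::-1]           # decreasing-variance order

rng = np.random.default_rng(4)
raw = rng.normal(size=(500, 26)) @ rng.normal(size=(26, 26))   # correlated dims
out = mean_norm_pca(raw)
```

After the rotation the sample covariance of the 26 dimensional bottleneck features is diagonal, which suits the diagonal-covariance Gaussians of the back-end GMM-HMMs.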

    4.3 Performance of multi-accent GMM-HMM systems

The performance of multi-accent GMM-HMM systems was first evaluated on Guangzhou accented speech data, as shown in Table 2. In this table, the "Model AD" column denotes whether accent dependent questions were used in decision tree state tying. The multi-accent HMM model (System (2)), trained by adding all four types of accented speech to the standard Mandarin data, outperformed folding in the Guangzhou accent data only (System (1)). In addition, the explicit use of accent information during decision tree clustering (System (3)) obtained a further character error rate (CER) reduction of 2.7% absolute, from 17.77% down to 15.07%.

    4.4 Performance of multi-accent DNN tandem systems

A second set of experiments, comparable to those shown in Table 2, was then conducted to evaluate the performance of four tandem systems on the Guangzhou accent test set. In addition to the standard Mandarin speech data, either the Guangzhou accent data or all four accent types were used in DNN training. All DNNs here had 4 hidden layers, including the bottleneck layer. The results are shown in Table 3. The multi-accent trained DNN tandem system (System (4) in Table 3), which both used accent dependent questions in decision tree based HMM state clustering and included all four accent types in DNN training, gave the lowest character error rate of 13.16%.

    Table 1 Standard and accented Mandarin speech data sets

    Table 2 Performance of baseline GMM-HMM systems trained on standard Mandarin speech plus Guangzhou accent data only, or all four accents of Table 1, and evaluated on Guangzhou accent test set

    4.5 Performance of multi-accent MLAN tandem systems

The performance of various MLAN tandem systems on Guangzhou accent speech data is shown in Table 4. In addition to the standard Mandarin speech data, all four accent types were used in both baseline HMM and first level DNN training. The first level DNN had 4 hidden layers. The first four MLAN tandem systems used a conventional random initialization of the second level DNN, with 2 or 4 hidden layers, prior to pre-training and a full network update on the target accent data. As discussed in sections 1 and 3.2, when using limited amounts of accent specific speech data to estimate the second level DNN, directly updating its weight parameters can lead to unrobust estimation and poor generalization. This is shown in the first four lines of Table 4: increasing the number of hidden layers of the second level DNN from 2 to 4 led to further performance degradation, and compared with the best DNN tandem system, shown in the bottom line of Table 3, a degradation of 0.92% absolute was observed.

In contrast, when the improved pre-training based MLAN tandem system discussed in section 3.2 was used, as shown in the last two rows of Table 4, consistent improvements were obtained with both the accent independent and accent dependent MLAN tandem configurations over the comparable DNN tandem systems shown in the last two rows of Table 3.

    Table 3 Performance of DNN tandem systems on Guangzhou accent test set

Table 4 Performance of MLAN tandem systems on Guangzhou accent test set

    4.6 Performance evaluation on multiple accent test sets

A full set of experiments was finally conducted to evaluate the performance of the various multi-accent systems on four accent test sets: Guangzhou, Chongqing, Shanghai and Xiamen. The results are presented in Table 5 and Table 6 for the multi-accent GMM-HMM, DNN tandem and improved MLAN tandem systems. "+MPE" denotes MPE discriminative training[34] performed on the maximum likelihood trained "ML" model, and "+MLLR" denotes a subsequent maximum likelihood linear regression (MLLR) adaptation[35] of the "+MPE" model. System (0), denoted "HMM(out)", used only out-of-domain data, namely standard Mandarin data, to train the GMM-HMMs, while "HMM(ma)" denotes multi-accent GMM-HMM systems trained on all accented data as well as standard Mandarin data. Both the DNN tandem and improved MLAN tandem systems used the "HMM(ma) ML" models as their baselines. All DNNs here had 6 hidden layers, including the bottleneck layer. "DNN AD" denotes a DNN trained with accent dependent state alignments, while all DNNs used in the MLAN tandem systems were trained with accent independent state alignments.

A general trend in Tables 5 and 6 is that the explicit use of accent information in training leads to consistent improvements for the GMM-HMM, DNN tandem and MLAN tandem systems. For example, by explicitly using accent information during model training, an absolute CER reduction of 1.5% (9% relative) was obtained for the GMM-HMM systems (System (2) compared to System (1) in Table 5). Although the improved MLAN tandem systems gained less from MPE training than the DNN tandem systems, they improved more markedly when MLLR adaptation was applied, indicating that the improved MLAN framework is complementary to MLLR adaptation. The best performance was obtained using the improved MLAN tandem system with accent dependent modelling (System (4) in Table 6). This system gave an average CER reduction of 0.8% absolute (6% relative) over the baseline DNN tandem system trained without explicitly using any accent information (System (3) in Table 5).

    Table 5 Performance of baseline multi-accent GMM-HMM and DNN tandem systems evaluated on all four accent test sets

Comparing these results to previous work also evaluated on the RASC863 database, Zhang et al.[36,37] used augmented HMMs and dynamic Gaussian mixture selection (DGMS) instead of the multi-style accent modelling HMMs and multi-accent decision tree state tying used in this paper. Their syllable error rates (SER) for Guangzhou (Yue), Chongqing (Chuan) and Shanghai (Wu) accented Mandarin ASR stayed above 40%, and their best relative SER reduction against HMMs trained on standard Mandarin (Putonghua) was about 20%. Although SER is not directly comparable to CER, it can still serve as a reference. For these three accents, the comparable HMM system in this paper (System (2) in Table 5) obtained an ML CER of about 18%, a relative reduction of more than 40% against System (0) in Table 5. This might be because information from standard Mandarin cannot complement the low resource accented Mandarin in the augmented HMM and DGMS approaches.

    5 Conclusions

In this paper, implicit and explicit accent modelling approaches were investigated for low resource accented Mandarin speech recognition. The improved multi-accent GMM-HMM and MLAN tandem systems significantly outperformed the baseline GMM-HMM and DNN tandem HMM systems by 0.8%-1.5% absolute (6%-9% relative) in character error rate after MPE training and adaptation. The experimental results suggest the proposed techniques may be useful for accented speech recognition in general. Future work will focus on modelling a larger and more diverse set of accents.

    Table 6 Performance of improved MLAN tandem systems evaluated on all four accent test sets

[1] Liu Y, Fung P. Partial change accent models for accented Mandarin speech recognition [C] // IEEE Workshop on Automatic Speech Recognition and Understanding, 2003: 111-116.

    [2] Li J, Zheng TF, Byrne W, et al. A dialectal Chinese speech recognition framework [J]. Journal of Computer Science and Technology, 2006, 21(1): 106-115.

    [3] Fisher V, Gao YQ, Janke E. Speaker-independent upfront dialect adaptation in a large vocabulary continuous speech recognizer [C] // The 5th International Conference on Spoken Language Processing, 1998: 787-790.

    [4] Oh YR, Kim HK. MLLR/MAP adaptation using pronunciation variation for non-native speech recognition [C] // IEEE Workshop on Automatic Speech Recognition & Understanding, 2009: 216-221.

    [5] Wang ZR, Schultz T, Waibel A. Comparison of acoustic model adaptation techniques on non-native speech [C] // 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003: 540-543.

    [6] Tomokiyo LM, Waibel A. Adaptation methods for non-native speech [J]. Multilingual Speech and Language Processing, 2003: 6.

    [7] Liu M, Xu B, Huang T, et al. Mandarin accent adaptation based on context-independent/context-dependent pronunciation modeling [C] // 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2000: 1025-1028.

    [8] Zheng Y, Sproat R, Gu L, et al. Accent detection and speech recognition for Shanghai-accented Mandarin [C] // Interspeech, 2005: 217-220.

    [9] Lippmann RP, Martin E, Paul DB. Multi-style training for robust isolated-word speech recognition [C] // IEEE International Conference on Acoustics, Speech, and Signal Processing, 1987: 705-708.

[10] Young SJ, Odell JJ, Woodland PC. Tree-based state tying for high accuracy acoustic modeling [C] // Proceedings of the Workshop on Human Language Technology, 1994: 307-312.

    [11] Reichl W, Chou W. A unified approach of incorporating general features in decision tree based acoustic modeling [C] // 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing, 1999: 573-576.

    [12] Sim KC, Li H. Robust phone set mapping using decision tree clustering for cross-lingual phone recognition [C] // 2008 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2008: 4309-4312.

    [13] Seide F, Li G, Yu D. Conversational speech transcription using context-dependent deep neural networks [C] // Interspeech, 2011: 437-440.

    [14] Dahl GE, Yu D, Deng L, et al. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition [J]. IEEE Transactions on Audio, Speech, and Language Processing, 2012, 20(1): 30-42.

    [15] Hinton G, Deng L, Yu D, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups [J]. IEEE Signal Processing Magazine, 2012, 29(6): 82-97.

    [16] Seltzer ML, Yu D, Wang Y. An investigation of deep neural networks for noise robust speech recognition [C] // 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, 2013: 7398-7402.

    [17] Yu D, Seltzer ML. Improved bottleneck features using pretrained deep neural networks [C] // The 12th Annual Conference of the International Speech Communication Association, 2011: 237-240.

    [18] Xie X, Su R, Liu X, et al. Deep neural network bottleneck features for generalized variable parameter HMMs [C] // The 15th Annual Conference of the International Speech Communication Association, 2014: 2739-2743.

[19] Thomas S, Seltzer ML, Church K, et al. Deep neural network features and semi-supervised training for low resource speech recognition [C] // 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, 2013: 6704-6708.

    [20] Knill KM, Gales MJF, Rath SP, et al. Investigation of multilingual deep neural networks for spoken term detection [C] // 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, 2013: 138-143.

    [21] Grezl F, Karafiat M, Vesely K. Adaptation of multilingual stacked bottle-neck neural network structure for new language [C] // 2014 IEEE International Conference on Acoustics, Speech and Signal Processing, 2014: 7654-7658.

    [22] Bourlard HA, Morgan N. Connectionist Speech Recognition: A Hybrid Approach [M]. USA: Academic Publishers, 1993.

    [23] Hermansky H, Ellis DW, Sharma S. Tandem connectionist feature extraction for conventional HMM systems [C] // 2000 IEEE International Conference on Acoustics, Speech and Signal Processing, 2000: 1635-1638.

    [24] Grezl F, Karafiat M, Kontar S, et al. Probabilistic and bottle-neck features for LVCSR of meetings [C] // 2007 IEEE International Conference on Acoustics, Speech and Signal Processing, 2007: 757-760.

    [25] Wang H, Wang L, Liu X. Multi-level adaptive network for accented Mandarin speech recognition [C] // The 4th IEEE International Conference on Information Science and Technology, 2014: 602-605.

    [26] Sui X, Wang H, Wang L. A general framework for multi-accent Mandarin speech recognition using adaptive neural networks [C] // The 9th IEEE International Symposium on Chinese Spoken Language Processing, 2014: 118-122.

    [27] Bell P, Swietojanski P, Renals S. Multi-level adaptive networks in tandem and hybrid ASR systems [C] // 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, 2013: 6975-6979.

    [28] Bell PJ, Gales MJF, Lanchantin P, et al. Transcription of multi-genre media archives using out-of-domain data [C] // 2012 IEEE Spoken Language Technology Workshop, 2012: 324-329.

    [29] Liu X, Gales MJF, Hieronymus JL, et al. Investigation of acoustic units for LVCSR systems [C] // 2011 IEEE International Conference on Acoustics, Speech and Signal Processing, 2011: 4872-4875.

[30] Institute of Automation, Chinese Academy of Sciences. CASIA northern accent speech corpus [OL]. [2015-07-28]. http://www.chineseldc.org/doc/CLDC-SPC-2004-015/intro.htm.

    [31] Institute of Linguistics, Chinese Academy of Social Sciences. Speech corpus of Mandarin with four major regional accents [OL]. [2015-08-02]. http://www.chineseldc.org/doc/CLDC-SPC-2004-004/intro.htm.

    [32] Young SJ, Evermann G, Gales MJF, et al. The HTK Book (Revised for HTK version 3.4.1) [M]. Cambridge University, 2009.

    [33] Ghoshal A, Povey D. The Kaldi speech recognition toolkit [EB/OL]. 2013-02-03 [2015-08-02]. http://kaldi.sourceforge.net.

    [34] Povey D, Woodland PC. Minimum phone error and I-smoothing for improved discriminative training [C] // 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2002: 105-108.

    [35] Gales MJ, Woodland PC. Mean and variance adaptation within the MLLR framework [J]. Computer Speech & Language, 1996, 10(4): 249-264.

    [36] Zhang C, Liu Y, Xia Y, et al. Reliable accent specific unit generation with dynamic Gaussian mixture selection for multi-accent speech recognition [C] // 2011 IEEE International Conference on Multimedia and Expo, 2011: 1-6.

    [37] Zhang C, Liu Y, Xia Y, et al. Discriminative dynamic Gaussian mixture selection with enhanced robustness and performance for multi-accent speech recognition [C] // 2012 IEEE International Conference on Acoustics, Speech and Signal Processing, 2012: 4749-4752.

    Investigation of Deep Neural Network Acoustic Modelling Approaches for Low Resource Accented Mandarin Speech Recognition

XIE Xurong1,3 SUI Xiang1,3 LIU Xunying1,2 WANG Lan1,3

    1( Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China )
    2( Cambridge University Engineering Department, Cambridge CB2 1TN, U.K. )
    3( The Chinese University of Hong Kong, Hong Kong 999077, China )

The Mandarin Chinese language is known to be strongly influenced by a rich set of regional accents, while Mandarin speech with each accent is of quite low resource. Hence, an important task in Mandarin speech recognition is to appropriately model the acoustic variabilities imposed by accents. In this paper, an investigation of implicit and explicit use of accent information in a range of deep neural network (DNN) based acoustic modelling techniques was conducted. Multi-accent modelling approaches, including multi-style training, multi-accent decision tree state tying, DNN tandem and multi-level adaptive network (MLAN) tandem hidden Markov model (HMM) modelling, were combined and compared. On a low resource accented Mandarin speech recognition task consisting of four regional accents, an improved MLAN tandem HMM system explicitly leveraging the accent information was proposed, and it significantly outperformed the baseline accent independent DNN tandem systems by 0.8%-1.5% absolute (6%-9% relative) in character error rate after sequence level discriminative training and adaptation.

    speech recognition; decision tree; deep neural network; accent; adaptation

CLC number: TP 391.4    Document code: A

    Received: 2015-08-12 Revised: 2015-09-14

Foundation: National Natural Science Foundation of China (NSFC 61135003); Shenzhen Fundamental Research Program (JCYJ20130401170306806, JC201005280621A)

Author: Xie Xurong, Research Assistant. His research interests are speech recognition and machine learning. Sui Xiang, Master. Her research interest is speech recognition. Liu Xunying, Senior Research Associate. His research interests include large vocabulary continuous speech recognition, language modelling, noise robust speech recognition, and speech and language processing. Wang Lan (corresponding author), Professor. Her research interests are large vocabulary continuous speech recognition, speech visualization and speech centric human-machine interaction. E-mail: lan.wang@siat.ac.cn.

亚洲av成人精品一二三区| 欧美日韩av久久| 精品人妻在线不人妻| 大片电影免费在线观看免费| 久久精品夜色国产| 成年美女黄网站色视频大全免费| 多毛熟女@视频| 亚洲av免费高清在线观看| 亚洲精华国产精华液的使用体验| 免费高清在线观看视频在线观看| 欧美国产精品一级二级三级| 校园人妻丝袜中文字幕| 热re99久久精品国产66热6| 欧美激情国产日韩精品一区| 国产精品国产三级专区第一集| 夫妻午夜视频| 性色avwww在线观看| 亚洲欧美一区二区三区国产| 高清在线视频一区二区三区| 90打野战视频偷拍视频| 日本wwww免费看| 狂野欧美激情性bbbbbb| 亚洲欧美成人精品一区二区| 日产精品乱码卡一卡2卡三| 亚洲综合精品二区| 国产xxxxx性猛交| 91午夜精品亚洲一区二区三区| 男女边吃奶边做爰视频| 人妻人人澡人人爽人人| 乱人伦中国视频| 国产有黄有色有爽视频| 久久热在线av| 亚洲精品国产av蜜桃| 国产精品.久久久| 成人国语在线视频| 欧美国产精品va在线观看不卡| 在线观看免费视频网站a站| 一级,二级,三级黄色视频| 亚洲内射少妇av| 韩国高清视频一区二区三区| 国产成人一区二区在线| 婷婷色麻豆天堂久久| 久久人人爽av亚洲精品天堂| 18禁动态无遮挡网站| 免费黄频网站在线观看国产| 99视频精品全部免费 在线| 一区在线观看完整版| 免费大片18禁| 国产精品久久久久久精品电影小说| 国产日韩欧美在线精品| 伊人亚洲综合成人网| 国产国拍精品亚洲av在线观看| 曰老女人黄片| 国产欧美亚洲国产| 男女啪啪激烈高潮av片| 黄色怎么调成土黄色| 久久精品国产亚洲av天美| 久久久久久久久久人人人人人人| 国产精品国产三级国产av玫瑰| 日韩欧美精品免费久久| 久久国产精品男人的天堂亚洲 | 精品人妻熟女毛片av久久网站| 国产极品粉嫩免费观看在线| 国产激情久久老熟女| a级毛色黄片| 十八禁网站网址无遮挡| 国产极品粉嫩免费观看在线| 久久午夜福利片| av在线app专区| 亚洲欧美精品自产自拍| 制服人妻中文乱码| 国产精品麻豆人妻色哟哟久久| 少妇被粗大猛烈的视频| 亚洲高清免费不卡视频| 久久久国产欧美日韩av| 少妇人妻 视频| 五月天丁香电影| 日韩在线高清观看一区二区三区| 国产精品国产三级专区第一集| 日韩成人av中文字幕在线观看| 乱人伦中国视频| 99视频精品全部免费 在线| 成人国语在线视频| 黄色视频在线播放观看不卡| 80岁老熟妇乱子伦牲交| 亚洲精品乱码久久久久久按摩| 久久热在线av| 国产深夜福利视频在线观看| 香蕉丝袜av| 亚洲精品国产色婷婷电影| 久久人人97超碰香蕉20202| 久久久久久久久久人人人人人人| 18禁裸乳无遮挡动漫免费视频| av不卡在线播放| 成人午夜精彩视频在线观看| 欧美日韩视频精品一区| 久久久久国产精品人妻一区二区| 日本91视频免费播放| 日韩在线高清观看一区二区三区| 亚洲欧美成人综合另类久久久| 在现免费观看毛片| 亚洲久久久国产精品| 亚洲欧洲国产日韩| 久久久久精品性色| 日韩,欧美,国产一区二区三区| 国产成人免费观看mmmm| 欧美精品亚洲一区二区| 国产白丝娇喘喷水9色精品| 午夜av观看不卡| videossex国产| av视频免费观看在线观看| 2022亚洲国产成人精品| 2018国产大陆天天弄谢| 自线自在国产av| 十八禁网站网址无遮挡| 一本—道久久a久久精品蜜桃钙片| 9热在线视频观看99| videosex国产| 亚洲情色 制服丝袜| 18禁观看日本| 国产成人精品在线电影| 亚洲国产欧美在线一区| 国产成人aa在线观看| 午夜福利,免费看| 国产成人免费观看mmmm| 一级黄片播放器| 久久精品国产亚洲av涩爱| 国产欧美另类精品又又久久亚洲欧美| 亚洲精品成人av观看孕妇| 岛国毛片在线播放| 精品久久蜜臀av无| 欧美亚洲日本最大视频资源| 国产男女超爽视频在线观看| 2018国产大陆天天弄谢| 18禁动态无遮挡网站| 日本-黄色视频高清免费观看| 久久ye,这里只有精品| 日日啪夜夜爽| 国产精品偷伦视频观看了| 国产精品久久久久成人av| 精品少妇黑人巨大在线播放|