
    Source Recovery in Underdetermined Blind Source Separation Based on Artificial Neural Network

    China Communications, 2018, Issue 1

    Weihong Fu*, Bin Nong, Xinbiao Zhou, Jun Liu, Changle Li. School of Telecommunication Engineering, Xidian University, Xi'an, Shaanxi, 7007, China; Collaborative Innovation Center of Information Sensing and Understanding, Xi'an, Shaanxi, 7007, China; National Laboratory of Radar Signal Processing, Xidian University, Xi'an, Shaanxi, 7007, China

    I. INTRODUCTION

    Underdetermined blind source separation (UBSS) is the case of blind source separation (BSS) in which the number of observed signals is less than the number of source signals [1]. In recent years, underdetermined blind separation has been widely applied in speech signal processing, image processing, radar signal processing, communication systems [2-5], data mining, and biomedical science. Current research on UBSS has mainly focused on sparse component analysis (SCA) [6], which leads to the "two-step" approach [7]. The first step is to estimate the mixing matrix, and the second step is to recover the source signals. Note that a source signal may not be sparse in the time domain; in this case we assume that a linear, sparsifying transformation (e.g., Fourier transform, wavelet transform) can be found, so SCA can also be applied in the transformed domain. In the two-step approach, source signal recovery has attracted little attention, while many researchers have investigated methods that identify the mixing matrix, such as clustering algorithms [8,9] or potential-function-based algorithms [10,11]. In this paper, we assume that the mixing matrix has been estimated successfully by the aforementioned algorithms, and we focus on source signal recovery.

    UBSS shares the same model as compressed sensing (CS) on the condition that the source signals are sparse and the mixing matrix is known [12]. In fact, before the concept of compressed sensing was put forward, one common method of solving sparse BSS was to minimize the ℓ1-norm. Y. Li et al. [13] analyzed the equivalence of the ℓ0-norm solution and the ℓ1-norm solution within a probabilistic framework, and presented conditions for recoverability of source signals in [14]. However, Georgiev et al. [15] showed that recovery in UBSS using ℓ1-norm minimization does not perform well even if the mixing matrix is perfectly known. After the compressed sensing technique [16] came out, many sparse recovery algorithms were proposed, and some CS-based methods have been applied to UBSS [17,18].
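The ℓ1-minimization route mentioned above can be sketched as a linear program (basis pursuit). The sketch below is a generic illustration using `scipy.optimize.linprog`, not the algorithm proposed in this paper; the problem sizes are arbitrary.

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, x):
    """Basis pursuit: min ||s||_1 subject to A s = x.
    Standard LP reformulation with s = u - v, u >= 0, v >= 0."""
    M, N = A.shape
    c = np.ones(2 * N)                 # objective: sum(u) + sum(v) = ||s||_1
    A_eq = np.hstack([A, -A])          # A (u - v) = x
    res = linprog(c, A_eq=A_eq, b_eq=x, bounds=[(0, None)] * (2 * N))
    return res.x[:N] - res.x[N:]

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 10))       # 6 observations, 10 sources: underdetermined
s_true = np.zeros(10)
s_true[[1, 7]] = [1.0, -2.0]           # 2-sparse source vector
s_hat = l1_recover(A, A @ s_true)
print(np.linalg.norm(A @ s_hat - A @ s_true))   # constraint residual, near zero
```

As the papers cited above observe, such ℓ1 recovery can fail in UBSS even with a perfectly known mixing matrix, which motivates the ℓ0-based methods discussed next.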

    In 2009, Mohimani et al. [19] proposed a sparse recovery method based on the smoothed ℓ0-norm (SL0), which can be applied to UBSS and is two to three times faster than ℓ1-based sparse reconstruction at the same or higher precision. Since then, many scholars have studied sparse signal reconstruction algorithms based on SL0 [20-23]. SL0 and its improved variants have the advantages of low computational cost and good robustness, but they are sensitive to how well the ℓ0-norm is approximated. To approximate the ℓ0-norm better, Vidya L. et al. [24] proposed a sparse signal reconstruction algorithm based on the minimum variance of a radial basis function network (RASR). The algorithm first establishes a two-stage cascade network: the first stage performs the optimization of the radial basis function, and the second stage computes the minimum variance and feeds the result back to the first stage to accelerate convergence. For the RASR algorithm, the computational complexity is not reduced appreciably, since two optimization models are built; moreover, an improper step size may affect the convergence rate because the algorithm uses gradient descent. Chun-hui Zhao et al. [25] introduced the artificial neural network (ANN) into the compressed sensing reconstruction model and obtained the compressed sensing reconstruction algorithm based on artificial neural networks (CSANN), which enhances the fault tolerance of the algorithm. However, the result easily falls into a local extremum, since the penalty function of the CSANN algorithm does not approximate the ℓ0-norm well. In addition, the CSANN algorithm typically needs a large number of iterations to terminate. In UBSS, the compressed sensing reconstruction algorithms mentioned above cannot meet the requirements of recovery precision and computational complexity simultaneously.

    To solve this problem, we propose an algorithm for source recovery based on an artificial neural network. The algorithm improves recovery precision by taking the Gaussian function as a penalty function to approximate the ℓ0-norm. A smoothing parameter is used to control the convergence speed of the network. Additionally, we derive the optimal learning factor to improve recovery accuracy, and a gradually descending sequence of the smoothing parameter is used to accelerate the convergence of the ANN. Numerical experiments show that the proposed algorithm can recover the source signals with high precision and low computational complexity.

    The paper is organized as follows. In Section 2, the model of UBSS based on the ANN is introduced. In Section 3, the proposed algorithm for source recovery in UBSS is presented. The performance of the proposed algorithm is numerically evaluated by simulation results in Section 4. Finally, conclusions are made in Section 5.

    II. THE MODEL OF UNDERDETERMINED BLIND SOURCE SEPARATION BASED ON ANN

    2.1 Problem description

    In a blind source separation system, the received signal can be presented as

    x(t) = As(t)  (1)

    where x(t) is the M-dimensional observed signal vector, A is the M×N mixing matrix (M < N), and s(t) is the N-dimensional source signal vector.

    For brevity, equation (1) is rewritten as

    x = As  (2)

    The UBSS problem above can be viewed as a CS problem by regarding the source signals s, mixing matrix A, and observed signal x in UBSS as, respectively, the sparse signal, sensing matrix, and measurement signal in CS. Thus, the sparse reconstruction algorithms of CS can be readily applied to signal recovery in the UBSS problem.
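Viewed this way, each time sample of the mixture can be handed to an off-the-shelf sparse solver. The sketch below uses a simple orthogonal-matching-pursuit loop as a stand-in solver (not the paper's method); the sparsity level `k` is assumed known for the demo.

```python
import numpy as np

def omp(A, x, k):
    """Orthogonal matching pursuit: greedily build a k-sparse s with A s ~ x."""
    support, residual = [], x.astype(float)
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # column best matching residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], x, rcond=None)
        residual = x - A[:, support] @ coef
    s = np.zeros(A.shape[1])
    s[support] = coef
    return s

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 12))      # 5 sensors, 12 sources (the CS sensing matrix)
s_true = np.zeros(12)
s_true[[2, 7]] = [3.0, -1.5]
x = A @ s_true                        # one time sample of the observed mixture
s_hat = omp(A, x, k=2)
print(np.count_nonzero(s_hat))
```

Running this column-by-column over all time samples recovers the whole source matrix, which is exactly the per-sample framing the paper exploits.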

    2.2 The model of artificial neural network for UBSS

    Its unique knowledge structure and information-processing principles make the artificial neural network one of the main technologies of intelligent information processing, and it has attracted increasing interest from scientific and technological workers [26]. ANNs have many advantages for signal processing, including self-adaptation and fault-tolerance capability.

    Fig. 1. The model of single-layer perceptron.

    Since a single-layer perceptron is sufficient to describe the UBSS model, whereas for a multi-layer perceptron it is not easy to find the optimal learning factor, we introduce the single-layer perceptron artificial neural network model into UBSS in the following. As shown in figure 1, N inputs correspond to one output. The source signal vector s in Eq. (2) is the weight vector of the perceptron; the j-th row vector of the mixing matrix A is the input of the perceptron model, whose i-th element is the i-th input; and the j-th element of x is the threshold value. The output error decision rule of the perceptron is

    The learning procedure, i.e., the convergence process of the neural network, is to minimize E by adjusting the weight vector of the perceptron. In order to make the weight vector of the perceptron converge to the actual source signal vector, the constraint of source signal sparsity should be involved in the output error decision. Generally, both the ℓ0-norm and the ℓ1-norm can measure the sparsity of a source signal. To some extent, the sparse solution acquired by minimizing the ℓ1-norm is equivalent to that obtained by minimizing the ℓ0-norm if the mixing matrix A obeys a uniform uncertainty principle [27]. But literature [15] suggests that the conditions ([28], Theorem 7) which guarantee the equivalence of ℓ0-norm and ℓ1-norm minimization are usually not satisfied for UBSS. Hence, the ℓ0-norm is used as a penalty function to further adjust the weight coefficients, and Eq. (3) can be rewritten as

    where γ > 0 is used to trade off the penalty function against the estimation error. For ease of analysis, we assume γ = 1. Since minimizing the ℓ0-norm of the source vector s is an NP-hard problem, literature [25] uses Eq. (5) to approximate the ℓ0-norm.

    where β > 0; the greater the value of β, the better the approximation of the ℓ0-norm. CSANN algorithms generally use the empirical value β = 10.

    In Eq. (5), however, the absolute-value operation makes the function poorly smooth. In order to approximate the ℓ0-norm more closely, the Gaussian function is introduced as the penalty term, and Eq. (5) can be rewritten as

    where σ > 0; the smaller σ is, the better the approximation of the ℓ0-norm. Figure 2 illustrates the results of calculating the ℓ0-norm using Eq. (5) and Eq. (6), respectively. Both Eq. (5) and Eq. (6) reflect the characteristics of the ℓ0-norm, but the Gaussian-function approximation is the better of the two. To compare the two penalty functions further, we generate a 12×20 signal matrix (12 sources, 20 samples) with sparsity (defined in Section 4) 0.75 and approximate its ℓ0-norm with both functions. As shown in figure 3, the horizontal coordinate is the discrete sampling time (t = 1, 2, …, 20), and the vertical coordinate is the value of the ℓ0-norm calculated by the different functions; it demonstrates that the value obtained by Eq. (6) is closer to the theoretical value. In addition, the average error of Eq. (6) is only 0.0501, while the average error of Eq. (5) is 1.8107. Thus, it is better to use the Gaussian function to approximate the ℓ0-norm.
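The two approximations can be compared numerically. Since the display equations (5) and (6) did not survive extraction, the sketch below assumes the standard exponential penalty Σ(1 − exp(−β|sᵢ|)) with β = 10 and the Gaussian penalty N − Σ exp(−sᵢ²/2σ²); both formulas are assumptions about the paper's notation.

```python
import numpy as np

s = np.array([1.5, -2.0, 0.8] + [0.0] * 9)   # 12 samples, 3 of them active
N = s.size
l0 = np.count_nonzero(s)                      # true l0-"norm" = 3

beta = 10.0
approx_exp = np.sum(1.0 - np.exp(-beta * np.abs(s)))       # Eq.(5)-style penalty

sigma = 0.01
approx_gauss = N - np.sum(np.exp(-s**2 / (2 * sigma**2)))  # Eq.(6)-style penalty

print(l0, approx_exp, approx_gauss)
```

With σ this small, the Gaussian term is essentially 1 at every zero sample and 0 at every active sample, so the penalty counts the three active entries almost exactly, consistent with the paper's finding that the Gaussian form is the tighter approximation.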

    If the value ofσis small enough, by substituting Eq. (6) into Eq. (4) we obtain that

    Fig. 2. Comparison of approximation of the ℓ0-norm by different functions.

    Fig. 3. Comparison between the two functions for approximating the ℓ0-norm and their theoretical values.

    The procedure of source recovery in UBSS based on the ANN adjusts the weight vector of the perceptron according to the output error decision E. Moreover, gradient descent is used to increase the learning speed of the neural network and improve recovery accuracy. We then calculate the optimal step size, which we call the learning factor here. When the convergence condition is satisfied, the obtained weight vector of the perceptron is the estimated source signal vector.

    III. ALGORITHM FOR SOURCE RECOVERY BASED ON ANN

    Eq. (8) is used as the convergence criterion for the CSANN algorithm, that is

    where ε > 0. If ε is small enough, the algorithm approaches the ideal state of convergence, but more iterations are needed. According to [24], in practice the CSANN algorithm runs until the maximum number of iterations, and the complexity is intolerable. To improve recovery precision, the maximum number of iterations of the CSANN algorithm must be set large, which consumes much time. The trade-off between fewer iterations and higher recovery accuracy is the main difficulty. To resolve it, a gradually descending sequence of the smoothing parameter σ in Eq. (7) is used to ensure both the convergence of the proposed algorithm and the accuracy of source recovery. Explanations of the descending sequence can be found in [19]. We can prove that

    where s0 is the sparsest solution of the UBSS problem (i.e., Eq. (1)) and s̃ is the optimal solution of Eq. (7). The proof is given in Appendix A.

    However, the function E in Eq. (7) is highly non-smooth for small values of σ and contains many local maxima, which makes maximization difficult. On the contrary, E is smoother and contains fewer local maxima when σ is large, which makes maximization easier; but then the second term of Eq. (7) cannot approximate the ℓ0-norm well for large σ.

    Therefore the convergence condition is

    where σmin should be as small as possible, but not too small.
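The descending sequence can be implemented as a simple geometric schedule; the starting value σ₀ = 1 and the update σ ← δσ are assumptions consistent with the scale factor δ defined in the algorithm steps of Section III.

```python
def sigma_schedule(sigma0, delta, sigma_min):
    """Geometric descent of the smoothing parameter: run while sigma > sigma_min."""
    sigmas = []
    sigma = sigma0
    while sigma > sigma_min:
        sigmas.append(sigma)
        sigma *= delta
    return sigmas

seq = sigma_schedule(sigma0=1.0, delta=0.6, sigma_min=1e-3)
print(len(seq))   # 14 values: 0.6**0 ... 0.6**13
```

Early, large σ values give a smooth objective that is easy to optimize; the later, small values tighten the ℓ0 approximation, which is the graduated-smoothing idea of [19].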

    For Eq.(7), the gradient descent method is used to adjust the weight coefficients of the perceptron. Calculating the gradient vector of Eq. (7), we obtain that

    Therefore, the update formula for the weight coefficients of the perceptron is

    Substituting Eq. (11a) into Eq. (12) yields Eq. (13), shown at the bottom of the next page.

    Comparing Eq. (13) with Eq. (11a), Eq. (14) can be obtained:

    Then, Eq. (14) is rewritten as

    So far, the optimal learning factor can be obtained from Eq.(15)

    According to the above analysis, the source recovery process for UBSS based on the artificial neural network contains only one optimization problem, which improves the accuracy of source recovery and dramatically reduces the computational cost.

    In summary, the steps of the UBSSANN algorithm are as follows:

    Step 1: Initialize the source signal vector, the parameters of the Gaussian function, the scale factor δ (0 < δ < 1, used to implement the descending sequence of σ), the threshold value σmin ≤ 10^-2, and the iteration number k = 0;

    Step 2: Calculate the updated weight vector using Eq. (11a) and Eq. (16);

    Step 3: Update the smoothing parameter of the Gaussian function: σ ← δσ;

    Step 4: Update the iteration number: k ← k + 1;

    Step 5: If σk > σmin, go to Step 2; otherwise, output the current weight vector of the perceptron as the recovered source signal.
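Because the display equations (11a) and (16) were lost in extraction, the loop above can only be sketched. The sketch below follows the SL0-style template from [19] that the paper builds on (gradient steps on the Gaussian-smoothed sparsity, projection back onto x = As, σ shrunk by δ each outer pass); it is not the exact perceptron update with the optimal learning factor.

```python
import numpy as np

def smoothed_l0_recover(A, x, sigma0=1.0, delta=0.6, sigma_min=1e-3,
                        inner_steps=10, mu=2.0):
    """SL0-style recovery: encourage sparsity via the Gaussian penalty
    while keeping the estimate feasible for x = A s."""
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                      # minimum-l2-norm initialisation
    sigma = sigma0
    while sigma > sigma_min:            # Steps 3-5: descending sigma sequence
        for _ in range(inner_steps):    # Step 2: weight-vector updates
            # Scaled gradient of the Gaussian penalty; as in SL0, the 1/sigma^2
            # factor is absorbed into the step size mu.
            grad = s * np.exp(-s**2 / (2 * sigma**2))
            s = s - mu * grad                  # push near-zero entries to zero
            s = s - A_pinv @ (A @ s - x)       # project back onto A s = x
        sigma *= delta
    return s

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 5))          # 3 antennas, 5 sources
s_true = np.zeros(5)
s_true[[0, 3]] = [2.0, -3.0]
x = A @ s_true
s_hat = smoothed_l0_recover(A, x)
print(np.linalg.norm(A @ s_hat - x))     # feasibility preserved by the projection
```

The projection step guarantees the estimate always satisfies the mixing model exactly, so the descending σ sequence only has to trade smoothness against sparsity, mirroring the Step 1 to Step 5 loop above.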

    The computational complexity of the SL0, CSANN, and RASR algorithms has been analyzed in [20]. Analysis and experimental results show that the computation time of the RASR algorithm is only half that of SL0, that the number of iterations of the RASR algorithm is significantly smaller than that of the CSANN algorithm, and that the convergence time of the CSANN algorithm increases exponentially with the number of non-zero elements in the source signal. For ease of comparison, the comparison mode of [20] is used: the complexity index is the number of multiplications performed by the gradient descent method to update the source signal.

    As shown in Table 1, the computational complexity of the UBSSANN algorithm is lower than that of the SL0, CSANN, and RASR algorithms for both low and high degrees of sparsity. The structure and characteristics of the SL0, CSANN, RASR, and UBSSANN algorithms are presented in Table 2. RASR contains two optimization models while UBSSANN contains one, which implies that UBSSANN is easier to optimize.

    IV. SIMULATION RESULTS AND NUMERICAL ANALYSIS

    In this section, the simulation results of the proposed UBSSANN algorithm are compared with those of the other algorithms (SL0, CSANN, and RASR). To evaluate the recovery accuracy of the different algorithms, the correlation coefficient [21] is defined as:

    where S and Ŝ denote the source signal and the estimated signal respectively, s_n(t) is the element in the n-th row and t-th column of S, and ŝ_n(t) denotes the element in the n-th row and t-th column of Ŝ. The larger the correlation coefficient, the more accurate the algorithm. The value of ρ(Ŝ, S) ranges from 0 to 1. The sparsity p is defined as follows: each source sample is inactive with probability p and active with probability 1 − p. Thus p controls the degree of sparsity of the source signal, and the source signal becomes sparser as p increases.
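Since the defining formula for ρ did not survive extraction, the helper below implements one plausible reading of it (mean normalised row correlation) together with the Bernoulli-style sparsity model described above; the exact formula in [21] may differ.

```python
import numpy as np

def correlation_coefficient(S_hat, S):
    """Average normalised correlation between estimated and true source rows
    (one plausible form of the rho used in the paper)."""
    num = np.abs(np.sum(S_hat * S, axis=1))
    den = np.linalg.norm(S_hat, axis=1) * np.linalg.norm(S, axis=1)
    return float(np.mean(num / den))

def sparse_sources(N, T, p, rng):
    """Bernoulli-Gaussian sources: each sample is inactive with probability p."""
    active = rng.random((N, T)) >= p            # active with probability 1 - p
    return active * rng.standard_normal((N, T))

rng = np.random.default_rng(3)
S = sparse_sources(4, 1000, p=0.8, rng=rng)
print(correlation_coefficient(S, S))            # identical signals give rho = 1
```

A perfect recovery yields ρ = 1, and roughly a fraction 1 − p of the generated samples are non-zero, matching the sparsity definition above.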

    Table I. Computational complexity of the four algorithms.

    Table II. Comparison of algorithm structure feature.

    First, to analyze the effect of the parameters on algorithm performance, the parameters are tested for different SNRs (signal-to-noise ratios) and sparsities of the source signals. Second, according to the results of the first experiment, we choose appropriate parameter values and compare the proposed algorithm with the conventional algorithms in a second experiment on random signals. Finally, radar signals are used in the third experiment to demonstrate the applicability of the proposed UBSSANN algorithm in a real scenario.

    4.1 Simulation for effect of parameters on performance

    In order to verify the performance of the proposed algorithm, the effect of the parameters on algorithm performance is studied in this experiment. Two essential parameters, the scale factor δ and the convergence threshold σmin, are discussed. With different randomly generated sources and a mixing matrix of dimension 8×15, the simulations are repeated 100 times.

    In figure 5, the number of iterations is depicted as a function of the scale factor for different SNRs and sparsities of the source signal. From figure 5(a) or figure 5(b), we can roughly conclude that the number of iterations increases rapidly when the scale factor exceeds 0.6. Hence, in the next experiment, the scale factor δ is set to 0.6 to accelerate convergence.

    4.2 Simulation results for random source signals

    The source signals, following a Gaussian distribution, are sparse signals with sparsity p ∈ [0.5, 0.9], received by M antennas; the source signal becomes sparser as p increases. The mixing matrix is of dimension M×N and is randomly generated from the normal distribution. Simulations are repeated 1000 times. For the SL0, RASR, and UBSSANN algorithms, σmin is set to 0.001. The convergence criterion of the CSANN algorithm is Eq. (8), and the maximum number of iterations is fixed at 500.

    Fig. 4. Effect of parameters on performance.

    Figure 6 demonstrates the correlation coefficient obtained by the different algorithms as a function of SNR for different system scales (i.e., the dimensions M and N of the mixing matrix, mentioned in Section 2) in the case of p = 0.8. As shown in figure 6, the correlation coefficient of the UBSSANN algorithm is clearly larger than that of all the other algorithms. For instance, the correlation coefficient obtained by UBSSANN is about 6%, 10%, and 15% higher than those of RASR, SL0, and CSANN, respectively, when the SNR is 20 dB and M = 3, N = 5. This improvement in correlation coefficient is essential for many real applications, such as radar signal processing. In the SNR range from 10 dB to 20 dB, the average correlation coefficient of the UBSSANN algorithm is 0.9034, which shows its robustness against noise.

    Fig. 5. The number of iterations vs. the scale factor.

    Figure 7 shows the correlation coefficients obtained by the SL0, CSANN, RASR, and UBSSANN algorithms as a function of the sparsity p for several mixing matrices of different dimensions when the SNR is 30 dB. When the sparsity p is greater than 0.6, the correlation coefficient of the UBSSANN algorithm is larger than those of the SL0, RASR, and CSANN algorithms. For example, in the case of p = 0.8, M = 6, N = 10, the correlation coefficient obtained by the UBSSANN algorithm is about 4%, 10%, and 22% larger than those of the RASR, SL0, and CSANN algorithms, respectively. Moreover, as presented in Table 1 and Table 2, the complexity of UBSSANN is significantly lower than that of the conventional algorithms. When the sparsity is less than 0.6, the correlation coefficient of the UBSSANN algorithm is slightly smaller than that of RASR but larger than those of SL0 and CSANN.

    Figure 8 contrasts the correlation coefficients obtained by UBSSANN and RASR as a function of the number of iterations in the case of SNR = 20 dB and p = 0.9 at a fixed time. Figure 8 illustrates that the UBSSANN algorithm has reached the convergence state by the 5th iteration, while RASR is close to convergence only after the 20th iteration, so the convergence rate of UBSSANN is relatively fast.

    In the following, we give the running time required by the different algorithms. As shown in figure 9, in the case of SNR = 20 dB, the running time of the UBSSANN algorithm is less than those of the other three algorithms. Over the sparsity range 0.5 to 0.9, the average running time of the UBSSANN algorithm is reduced by about 40%, 60%, and 29% compared with the SL0, CSANN, and RASR algorithms, respectively. This implies that the UBSSANN algorithm maintains high recovery accuracy while significantly reducing the computational complexity compared with the other three algorithms.

    Fig. 6. Correlation coefficient vs. SNR with p=0.8 for mixing matrices of different dimensions.

    Fig. 7. Correlation coefficient vs. sparsity p with SNR=10dB for mixing matrices of different dimensions.

    4.3 Simulation results for radar source signals

    In this experiment, 5 radar signals s1 to s5 are chosen as the source signals. s1 and s2 are general radar signals with the same pulse width 10 μs and pulse duration 50 μs, but with different carrier frequencies, 5 MHz and 5.5 MHz. s3 is a linearly frequency modulated (LFM) radar signal with carrier frequency 5 MHz, pulse width 10 μs, pulse duration 50 μs, and pulse bandwidth 10 MHz. s4 is also an LFM radar signal with the same parameters as s3, but its pulse bandwidth is 15 MHz. s5 is a sinusoidal phase-modulated radar signal with carrier frequency 5 MHz, pulse width 10 μs, pulse duration 50 μs, and a sine-wave modulation frequency of 200 kHz. The dimension of the mixing matrix is M = 3, N = 5 (i.e., 3 receiving antennas and 5 source signals). To assess the recovery quality, the signal-to-interference ratio (SIR) of the recovered source signal is used in addition to the correlation coefficient, where s and ŝ denote the real source signal and the recovered source signal, respectively.
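The radar sources and the SIR metric can be prototyped directly. The sampling rate and the exact SIR formula (source power over residual power, in dB) are assumptions here, since the paper's display equation for SIR was lost.

```python
import numpy as np

fs = 50e6                                  # assumed sampling rate: 50 MHz
t = np.arange(500) / fs                    # 500 samples covering one 10 us pulse

f0, B, tau = 5e6, 10e6, 10e-6              # carrier 5 MHz, bandwidth 10 MHz (like s3)
s3 = np.cos(2 * np.pi * (f0 * t + 0.5 * (B / tau) * t**2))   # LFM chirp pulse

def sir_db(s, s_hat):
    """SIR of a recovered signal, assumed as 10*log10(||s||^2 / ||s - s_hat||^2)."""
    return 10.0 * np.log10(np.sum(s**2) / np.sum((s - s_hat)**2))

# A well-recovered pulse corrupted by a small residual interference term:
s3_hat = s3 + 0.01 * np.random.default_rng(4).standard_normal(s3.size)
print(sir_db(s3, s3_hat))
```

A recovery that only differs from the source by a small residual scores tens of dB on this metric, which is the regime the UBSSANN results in figure 11 fall into; a poor recovery stays below 10 dB, as reported for the baseline algorithms.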

    Fig. 8. Correlation coefficient vs. iterations for the UBSSANN and RASR algorithms.

    Fig. 9. Computing time vs. degree of sparsity with SNR = 20 dB.

    In this experiment, we use the SL0, CSANN, RASR, and UBSSANN algorithms to recover the source signals. Figure 10 and figure 11 show the correlation coefficient and the SIR of the recovered signals obtained by the aforementioned algorithms, respectively. As shown in figure 10, the UBSSANN algorithm gives good results for SNR ranging from 10 dB to 30 dB, performing better than SL0, CSANN, and RASR in terms of correlation coefficient. For instance, the correlation coefficient acquired by UBSSANN is about 0.99, while those obtained by SL0, RASR, and CSANN are about 0.89, 0.87, and 0.76, respectively, at SNR = 30 dB. Besides the correlation coefficient, the SIR of the recovered signal is used to evaluate the algorithms' performance, as shown in figure 11. The SIR of the recovered signal obtained by UBSSANN is significantly greater than those acquired by SL0, CSANN, and RASR when the SNR of the mixed signal ranges from 10 dB to 30 dB. Moreover, the SIRs obtained by SL0, CSANN, and RASR all remain below 10 dB, while that obtained by UBSSANN increases roughly linearly with the SNR of the mixed signal.

    V. SUMMARY

    To address the high computational complexity of compressed sensing sparse reconstruction algorithms for source signal recovery in UBSS, the UBSSANN algorithm is proposed. Based on the sparse reconstruction model, a single-layer perceptron artificial neural network is introduced into the proposed algorithm, and the optimal learning factor is calculated, which improves the recovery precision. Additionally, a descending sequence of the smoothing parameter σ is used to control the convergence speed of the proposed algorithm so that the number of iterations can be significantly reduced. Compared with the existing algorithms (SL0, CSANN, and RASR), the UBSSANN algorithm achieves a good trade-off between recovery precision and computational complexity.

    Fig. 10. Correlation coefficient vs. SNR of mixed signal.

    Fig. 11. SIR of recovered signal vs. SNR of mixed signal.

    Appendix A

    UBSSANN’s original mathematical model is

    where N represents the number of source signals. Obviously, (A-1) is equivalent to

    Then

    From Eqs. (A-5) and (A-6), we obtain

    It is assumed that Ā is a sub-matrix composed of the column vectors a_i of the matrix A with i ∈ Iκ; then Ā contains at most M column vectors. Since these column vectors are mutually independent, Ā has a left pseudo-inverse. In addition, the sub-vector formed by the elements whose subscripts belong to the set Iκ is denoted by s̄, and the sub-vector formed by the elements whose subscripts do not belong to Iκ is denoted by s′.

    We obtain

    From Eq. (A-7), it follows that

    Similarly, combining this with Eq. (A-6), we get

    From Eqs. (A-10) and (A-11), it follows that

    The set of all sub-matrices Ā of the mixing matrix A is denoted Θ. Let β be as follows:

    Then

    Next we use Eq. (A-14) to prove that

    where s0 is the sparsest solution of the UBSS problem (i.e., Eq. (1)) and s̃ is the optimal solution of Eq. (7).

    It is assumed that the vector s̃ satisfies the constraint x = As̃ and is the optimal point of the current objective function. The set of subscripts corresponding to the elements of s̃ satisfying |s̃_i| > κ is denoted by Iκ; then

    Combined with Eq. (A-3), it can be obtained that

    Since s̃ is the optimal point of the current objective function, we have

    From Eqs. (A-17), (A-18) and (A-19), we get

    Then

    So

    Therefore, the number of elements of s̃ whose absolute value is bigger than κ is at most M − k, and the number of non-zero elements of s0 is at most k, so the number of elements of s̃ − s0 whose absolute value is bigger than κ is at most M − k + k = M.

    Then

    ACKNOWLEDGMENT

    This work was supported by the National Natural Science Foundation of China under Grants 61201134 and 61401334, and the Key Research and Development Program of Shaanxi (Contract No. 2017KW-004, 2017ZDXM-GY-022).

    REFERENCES

    [1] G. R. Naik, W. Wang, Blind Source Separation:Advances in Theory Algorithms and Applications (Signals and Communication Technology Series), Berlin, Germany: Springer, 2014.

    [2] Gao, L., Wang, X., Xu, Y. and Zhang, Q., Spectrum trading in cognitive radio networks: A contract-theoretic modeling approach. IEEE Journal on Selected Areas in Communications, 2011,29(4), pp.843-855.

    [3] Wang, X., Huang, W., Wang, S., Zhang, J. and Hu, C., Delay and capacity tradeoff analysis for motioncast. IEEE/ACM Transactions on Networking (TON), 2011, 19(5), pp. 1354-1367.

    [4] Gao, L., Xu, Y. and Wang, X., Map: Multiauctioneer progressive auction for dynamic spectrum access. IEEE Transactions on Mobile Computing, 2011,10(8), pp.1144-1161.

    [5] Wang X, Fu L, Hu C. Multicast performance with hierarchical cooperation[J]. IEEE/ACM Transactions on Networking (TON), 2012, 20(3): 917-930.

    [6] Y. Li, A. Cichocki, S. Amari, “Sparse component analysis for blind source separation with less sensors than sources”,Proc. Int. Conf. Independent Component Analysis (ICA), pp. 89-94, 2003.

    [8] J. Sun, et al., “Novel mixing matrix estimation approach in underdetermined blind source separation”,Neurocomputing, vol. 173, pp. 623-632, 2016.

    [9] V. G. Reju, S. N. Koh, I. Y. Soon, “An algorithm for mixing matrix estimation in instantaneous blind source separation”,Signal Process., vol. 89, no.3, pp. 1762-1773, Mar. 2009.

    [10] F. M. Naini, et al., “Estimating the mixing matrix in Sparse Component Analysis (SCA) based on partial k-dimensional subspace clustering”,Neurocomputing, vol. 71, pp. 2330-2343, 2008.

    [11] T. Dong, L. Yingke, and J. Yang, “An algorithm for underdetermined mixing matrix estimation”,Neurocomputing, vol. 104, pp. 26-34, 2013.

    [12] T. Xu, W. Wang, “A compressed sensing approach for underdetermined blind audio source separation with sparse representation”,Proc.IEEE Statist. Signal Process. 15th Workshop, pp.493-496, 2009.

    [13] Y. Q. Li, A. Cichocki, S. Amari, “Analysis of sparse representation and blind source separation”,Neural Comput., vol. 16, no. 6, pp. 1193-1234,2004.

    [14] Y. Li, et. al., “Underdetermined blind source separation based on sparse representation”,IEEE Trans. Signal Process., vol. 54, no. 2, pp. 423-437, Feb. 2006.

    [15] P. Georgiev, F. Theis, A. Cichocki, “Sparse component analysis and blind source separation of underdetermined mixtures”,IEEE Trans. Neural Networks, vol. 16, no. 5, pp. 992-996, Jul. 2005.

    [16] D. Donoho, “Compressed sensing”,IEEE Trans.Inform. Theory, vol. 52, no. 4, pp. 1289-1306,Apr. 2006.

    [17] T. Xu, W. Wang, “A block-based compressed sensing method for underdetermined blind speech separation incorporating binary mask”,Proc. Int. Conf. Acoust. Speech Signal Process.(ICASSP), pp. 2022-2025, 2010.

    [18] M. Kleinsteuber, H. Shen, “Blind source separation with compressively sensed linear mixtures”,IEEE Signal Process Lett., vol. 19, no. 2, pp. 107-110, Feb. 2012.

    [19] H. Mohimani, M. Babaie-Zadeh, and C. Jutten,“A Fast Approach for Overcomplete Sparse Decomposition Based on Smoothed L0 Norm”,IEEE Trans. Signal Process., vol. 57, no. 1, pp.289-301, Jan. 2009.

    [20] A. Eftekhari, M. Babaie-Zadeh, C. Jutten, H.Abrishami Moghad-dam, “Robust-SL0 for stable sparse representation in noisy settings”,Proc. Int. Conf. Acoust. Speech Signal Process.(ICASSP), pp. 3433-3436, 2009.

    [21] S. H. Ghalehjegh, M. Babaie-Zadeh, and C. Jutten, “Fast block-sparse decomposition based on SL0”, International Conference on Latent Variable Analysis and Signal Separation, pp. 426-433, 2010.

    [22] Changzheng Ma, Tat Soon Yeo, Zhoufeng Liu, “Target imaging based on ℓ1-ℓ0 norms homotopy sparse signal recovery and distributed MIMO antennas”, IEEE Transactions on Aerospace and Electronic Systems, vol. 51, no. 4, pp. 3399-3414, 2015.

    [23] V. Vivekanand, L. Vidya, “Compressed sensing recovery using polynomial approximated l0 minimization of signal and error”, 2014 International Conference on Signal Processing and Communications, pp. 1-6, 2014.

    [24] L. Vidya, V. Vivekanand, U. Shyamkumar, Deepak Mishra, “RBF-network based sparse signal recovery algorithm for compressed sensing reconstruction”,Neural Networks, vol. 63, pp. 66-78, 2015.

    [25] C. Zhao, and Y. Xu, “An improved compressed sensing reconstruction algorithm based on artificial neural network”,2011 International Conference on Electronics, Communications and Control (ICECC), pp. 1860-1863, 2011.

    [26] A. Cichocki, R. Unbehauen, Neural Networks for Optimization and Signal Processing, U.K.,Chichester: Wiley, 1993.

    [27] E. Candès, J. Romberg, T. Tao, “Stable Signal Recovery from Incomplete and Inaccurate Measurements”, Comm. Pure and Applied Math., vol. 59, no. 8, pp. 1207-1223, 2006.

    [28] D. Donoho, M. Elad, “Optimally sparse representation in general (nonorthogonal) dictionaries via ?1 minimization”,Proceedings of the National Academy of Sciences, pp. 2197-2202, 2003.

    [29] I. F. Gorodnitsky, B. D. Rao, “Sparse signal reconstruction from limited data using FOCUSS:A re-weighted norm minimization algorithm”,IEEE Trans. Signal Process., vol. 45, pp. 600-616,1997.

欧美变态另类bdsm刘玥| √禁漫天堂资源中文www| 在线观看一区二区三区激情| 夜夜夜夜夜久久久久| 十八禁高潮呻吟视频| 国产精品秋霞免费鲁丝片| 成人av一区二区三区在线看| 一区在线观看完整版| 国产成人欧美| 后天国语完整版免费观看| 成人特级黄色片久久久久久久 | 中文字幕高清在线视频| 69av精品久久久久久 | 桃花免费在线播放| 欧美午夜高清在线| 大片免费播放器 马上看| 国产成人精品久久二区二区91| 激情视频va一区二区三区| 国产一区二区在线观看av| av欧美777| 亚洲欧美日韩高清在线视频 | 一边摸一边做爽爽视频免费| 精品国产一区二区久久| 国产淫语在线视频| 人人妻人人添人人爽欧美一区卜| 日韩欧美免费精品| 国产亚洲一区二区精品| 桃花免费在线播放| 亚洲第一青青草原| 美国免费a级毛片| 免费看十八禁软件| 久久精品国产a三级三级三级| 国产成人啪精品午夜网站| 在线天堂中文资源库| 欧美国产精品va在线观看不卡| 国产成人影院久久av| 好男人电影高清在线观看| 美女扒开内裤让男人捅视频| 国产欧美亚洲国产| 亚洲午夜精品一区,二区,三区| 成人黄色视频免费在线看| 亚洲成人免费av在线播放| 国产精品一区二区在线不卡| 国产av精品麻豆| 精品少妇一区二区三区视频日本电影| 精品一区二区三区四区五区乱码| 国产精品久久久人人做人人爽| 一区二区三区国产精品乱码| 最近最新免费中文字幕在线| 国产深夜福利视频在线观看| 男女高潮啪啪啪动态图| 国产男女超爽视频在线观看| 国产视频一区二区在线看| 亚洲伊人久久精品综合| 一进一出好大好爽视频| 丰满少妇做爰视频| 黄片大片在线免费观看| 精品亚洲成国产av| 精品一区二区三区av网在线观看 | 中文字幕高清在线视频| 久久精品91无色码中文字幕| 人人妻,人人澡人人爽秒播| 丰满人妻熟妇乱又伦精品不卡| 欧美日韩中文字幕国产精品一区二区三区 | 日韩成人在线观看一区二区三区| 两个人看的免费小视频| 国产精品av久久久久免费| 成人特级黄色片久久久久久久 | 日韩免费av在线播放| 午夜久久久在线观看| 国产亚洲欧美在线一区二区| 亚洲色图av天堂| 99久久国产精品久久久| 欧美午夜高清在线| 一本久久精品| 欧美日韩视频精品一区| 国产不卡一卡二| 久久国产亚洲av麻豆专区| a级毛片在线看网站| 国产精品影院久久| 久久人妻av系列| 肉色欧美久久久久久久蜜桃| 国产成人欧美| 欧美中文综合在线视频| 久久久国产精品麻豆| 女同久久另类99精品国产91| 久久久久久亚洲精品国产蜜桃av| 精品乱码久久久久久99久播| 免费日韩欧美在线观看| 日本一区二区免费在线视频| 热99久久久久精品小说推荐| 一本综合久久免费| 麻豆国产av国片精品| 国产主播在线观看一区二区| 精品国产国语对白av| 国产黄频视频在线观看| 在线观看一区二区三区激情| 国产精品电影一区二区三区 | 国产高清国产精品国产三级| 免费观看人在逋| 每晚都被弄得嗷嗷叫到高潮| 国产精品 欧美亚洲| 国产精品一区二区在线观看99| 国产日韩欧美视频二区| 成年女人毛片免费观看观看9 | 久久人妻熟女aⅴ| 国产精品秋霞免费鲁丝片| 1024香蕉在线观看| 99国产精品一区二区三区| 亚洲人成电影免费在线| 亚洲视频免费观看视频| 国产成+人综合+亚洲专区| 免费观看a级毛片全部| 人妻 亚洲 视频| 成人永久免费在线观看视频 | 啦啦啦免费观看视频1| 真人做人爱边吃奶动态| 黄片大片在线免费观看| 精品国产一区二区三区四区第35| 精品高清国产在线一区| 国产精品偷伦视频观看了| 亚洲国产看品久久| 99久久99久久久精品蜜桃| 香蕉国产在线看| 国产高清激情床上av| 丰满少妇做爰视频| 久9热在线精品视频| 人妻 亚洲 视频| 无限看片的www在线观看| 亚洲人成电影免费在线| kizo精华| 免费日韩欧美在线观看| 日本wwww免费看| www.999成人在线观看| 国产精品98久久久久久宅男小说| 十分钟在线观看高清视频www| 亚洲熟妇熟女久久| 一级a爱视频在线免费观看| 亚洲成国产人片在线观看| 性高湖久久久久久久久免费观看| 天堂俺去俺来也www色官网|