
    A Self-Organizing RBF Neural Network Based on Distance Concentration Immune Algorithm

    2020-02-29 · IEEE/CAA Journal of Automatica Sinica, 2020, Issue 1

    Junfei Qiao, Fei Li, Cuili Yang, Wenjing Li, and Ke Gu

    Abstract—The radial basis function neural network (RBFNN) is an effective algorithm for nonlinear system identification, but properly adjusting the structure and parameters of an RBFNN is quite challenging. To solve this problem, a distance concentration immune algorithm (DCIA) is proposed in this paper to self-organize the structure and parameters of the RBFNN. First, the distance concentration algorithm, which increases the diversity of antibodies, is used to find the global optimal solution. Second, the information processing strength (IPS) algorithm is used to avoid the instability caused by randomly splitting or deleting hidden neurons. In addition, to improve the forecasting accuracy and reduce the computation time, the sample with the most frequent occurrence of the maximum error is used to regulate the parameters of the new neuron. A convergence proof of the self-organizing RBF neural network based on the distance concentration immune algorithm (DCIA-SORBFNN) is given to guarantee the feasibility of the algorithm. Finally, several nonlinear functions are used to validate its effectiveness. Experimental results show that the proposed DCIA-SORBFNN achieves better nonlinear approximation ability than the relevant state-of-the-art competitors.

    I. INTRODUCTION

    THE radial basis function neural network (RBFNN) has been extensively used to model and control nonlinear systems due to its universal approximation ability [1]-[3]: with enough hidden neurons, it can approximate any nonlinear function to any desired accuracy [1]. The achievable approximation accuracy is primarily determined by the network size and the parameters of the RBFNN.

    In order to adjust the network parameters, gradient-based methods have been proposed [4], [5]. Among them, the error back-propagation (BP) algorithm is popular and widely used [6]. However, the BP algorithm still has many shortcomings, such as time-consuming convergence and poor global-search capability [7]. Compared with the BP algorithm, the recursive least squares (RLS) algorithm has a better convergence rate and accuracy [8]. However, RLS involves more complicated mathematical operations and requires more computational resources; moreover, it does not solve the local-minimum problem [9]. To address this, a variable-length sliding window blockwise least squares (VLSWBLS) algorithm was proposed by Jiang and Zhang [10]; it outperforms RLS with forgetting factors. Peng et al. [11] introduced a continuous forward algorithm (CFA) to optimize the parameters of RBFNNs, which markedly reduces memory usage and computational complexity. Qiao and Han [12] proposed a forward-only computation (FOC) algorithm to adjust the parameters; unlike the traditional forward and backward computation, FOC simplifies the calculation and decreases computational complexity. However, how to automatically adjust the network size is seldom discussed in the literature mentioned above.

    In fact, a proper structure size can avoid network overfitting and achieve the desired performance. In recent years, many studies have focused on the structure design of the RBFNN. Huang et al. [13] proposed a sequential learning method known as the growing and pruning RBF (GAP-RBF) algorithm, and a more advanced model based on it (GGAP-RBF) was advocated in [14]. The results show that an RBFNN with a relatively compact structure requires less computational time. However, both algorithms require a complete set of samples for the training process, and it is generally impossible for designers to obtain a priori knowledge of the training samples before implementation [15]. To solve this problem, an information-oriented algorithm (IOA) [16] was proposed to self-organize the RBFNN structure. The IOA calculates the information processing strength (IPS) of hidden neurons; it is a computational technique that identifies hidden independent sources from multivariate data. However, most of these self-organizing RBF (SORBF) neural networks adopt learning algorithms based on gradient descent (GD), which may easily become trapped in a local optimum [17].

    To optimize the parameters and network size of an RBFNN simultaneously, evolutionary algorithms (EAs) have been studied to train the RBFNN [18], achieving good robustness and global optimization capability. For example, Feng [19] proposed an SORBF neural network based on the particle swarm optimization (PSO) algorithm. Alexandridis et al. [20] developed a novel algorithm that uses fuzzy means and PSO to train the RBFNN. The results show that these SORBF neural networks obtain higher prediction accuracy with a smaller network structure. Moreover, an adaptive-PSO-based self-organizing RBF neural network (APSO-SORBF) was proposed in [21] to determine the optimal parameters and network size of the RBFNN simultaneously for time series prediction problems. The simulation results illustrate that APSO-SORBF outperforms other PSO-based RBFNNs in terms of forecast accuracy and computational efficiency [21]. Compared with other EAs, the PSO algorithm has a faster convergence rate, but it easily becomes trapped in a local optimum, which affects the calculation accuracy [22]. To solve this problem, the immune algorithm (IA) was proposed [23]. The IA is a highly parallel, distributed, and adaptive system whose diversity-maintaining mechanism can preserve the diversity of solutions and overcome the "premature" problem of multi-peak functions. To escape a local optimal solution, increasing the diversity of the artificial immune algorithm is very important. In this paper, the distance concentration immune algorithm (DCIA) is proposed to increase population diversity. In comparison with the information entropy-based artificial immune algorithm (IEIA), this algorithm does not require setting any threshold. The results show that the proposed DCIA can significantly increase the global search capability. According to the above analysis, it is necessary and effective to adjust the structure and parameters of the neural network by DCIA instead of PSO [21]. However, the structure of APSO-SORBF [21] is adjusted by randomly increasing and decreasing the number of hidden neurons, so the network is unstable. Thus, how to adjust the structure steadily and present a theoretical analysis of algorithm convergence is quite challenging.

    To solve the problems mentioned above, the IPS [16] of hidden neurons is adopted to determine which hidden neurons need to be split or pruned when the networks of the antibodies are updated. However, in [16] the last input sample is used to adjust the parameters of the hidden neuron, which degrades the computational accuracy and lengthens the calculation time. Based on this analysis, the information-oriented error compensation algorithm (IOECA) is proposed in this paper. In this algorithm, the input sample with the most frequent occurrence of the maximum error is used to set the parameters of the new hidden neurons, which increases the accuracy. In addition, to ensure the stability of the algorithm, a convergence analysis of the DCIA-SORBF neural network is provided.

    The main contributions of this paper are summarized as follows:

    1) A DCIA algorithm is adopted to improve the diversity of antibodies. The distance concentration is used to measure the diversity of the artificial immune algorithm, which helps the DCIA escape local minima and finally find the global minimum. Consequently, it achieves higher accuracy than many traditional EAs.

    2) The immune algorithm is used to adjust the network and parameters following [16]. However, in [16] the hidden neurons are added or deleted randomly, which causes instability of the system. To solve this, the IPS, with its ability to identify hidden independent sources from multivariate data, is used to identify which hidden neurons should be deleted or added.

    3) In [16], five samples are calculated simultaneously and only the last one is used to update the parameters of the hidden neurons, so the system requires a large amount of calculation and its efficiency is not high enough. In this paper, all samples are used instead, and the input sample with the most frequent occurrence of the maximum error is used to set the parameters of the new hidden neurons. Therefore, the parameters and structure of the RBFNN can be optimized simultaneously by DCIA-SORBF. Compared with other algorithms, this method achieves greater accuracy with a compact structure, and stability can be ensured.

    4) The convergence analysis is provided. Because algorithm convergence is necessary and important for many practical engineering problems, a convergence analysis of the DCIA-SORBF neural network is presented and its effectiveness is verified via simulations. The results of multiple experiments testify to the feasibility and efficiency of the DCIA-SORBF algorithm.

    The rest of this article is organized as follows. In Section II, brief reviews of the RBF neural network and the immune algorithm are given. In Section III, the details of the DCIA algorithm and DCIA-SORBF are described. In Section IV, the convergence analysis of DCIA-SORBF is provided. In Section V, four experiments are conducted. Finally, the conclusions are presented.

    II. PROBLEM FORMULATION

    A. RBF Neural Network

    The RBF neural network is a typical feed-forward neural network [24], and it is generally composed of three layers: the input layer, hidden layer and output layer. The structure of the RBF is shown in Fig. 1.

    The structure of the RBF neural network is described as follows:

    1) The input layer. In this layer, an n-dimensional input vector x = (x1, x2, …, xn) is imported into the network, where n is the number of neurons in the layer.

    2) The hidden layer. In this layer, the input variables are mapped into a high-dimensional space by a nonlinear transformation. Many activation functions are available, such as the Gaussian function and the sigmoid function; here, the Gaussian function is used as the activation function. The output is defined as

    φK(t) = exp(−‖x(t) − μK(t)‖² / σK²(t))

    where φK(t) is the output of the Kth hidden neuron at time t, x(t) is the input sample matrix at time t, μK(t) is the center of the Kth hidden neuron at time t, and σK²(t) is the width of the Kth hidden neuron at time t. ‖x(t) − μK(t)‖ is the Euclidean distance between x(t) and μK(t). Note that the widths and centers of the activation functions are not fixed; they are randomly initialized and then optimized by DCIA.

    3) The output layer. In this layer, there is only one node, which gives the output of the neural network:

    y(t) = Σj ωj φj(t),  j = 1, …, K

    where ωj is the connection weight between the jth hidden neuron and the network output, and y is the network output. Here, ωj is randomly initialized.
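    As a concrete illustration, the forward pass described above can be sketched as follows; the function and parameter names are illustrative, not from the paper, and the Gaussian is written with the σK²(t) width convention used in the text.

    ```python
    import numpy as np

    def rbf_forward(x, centers, widths, weights):
        """Forward pass of a Gaussian RBF network with a single output node.

        x       : (n,) input vector
        centers : (K, n) centers mu_K of the hidden neurons
        widths  : (K,) widths sigma_K of the hidden neurons
        weights : (K,) output weights omega_j
        """
        # Hidden layer: Gaussian activation of the Euclidean distance ||x - mu_K||
        dist2 = np.sum((centers - x) ** 2, axis=1)
        phi = np.exp(-dist2 / widths ** 2)
        # Output layer: single linear node, y = sum_j omega_j * phi_j
        return float(weights @ phi)

    # Tiny example: one hidden neuron centered exactly on the input gives phi = 1,
    # so the output equals its weight.
    y = rbf_forward(np.array([0.5, 0.5]),
                    centers=np.array([[0.5, 0.5]]),
                    widths=np.array([1.0]),
                    weights=np.array([2.0]))
    ```

    In a DCIA-SORBF antibody, `centers`, `widths`, and `weights` are exactly the quantities encoded in the immune cell and optimized by the algorithm.
    
    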

    B. Immune Algorithm System

    The artificial immune system is primarily based on the information-processing mechanism of the biological immune system [25] and is used to solve complex problems. To describe the algorithm clearly, several common immunological terms used in artificial immune systems are defined as follows:

    Definition 1 (Antigen): An antigen refers to the constrained problem to be solved. It is defined as the objective function.

    Definition 2 (Antibody): An antibody refers to a candidate solution of the problem, i.e., a candidate solution evaluated by the objective function.

    Definition 3 (Affinity): Affinity is the adaptive measure of a candidate solution, i.e., the objective function value that corresponds to the candidate solution.

    Antibodies with high affinity in the immune system achieve a high rate of cloning. To maintain the diversity of antibodies, the cloning rate P(xi) can be expressed as follows [25]:

    where D(xi) is the affinity function of the antibody, C(xi) is the concentration of the antibody, Ma(t) is the parent generation group, and a = o + s, where a is the size of the parent generation group, o is the number of elites, and s is the population size. Ei(t) is the elite solution of the population, selected according to the affinity A(xi) sorted in ascending order. The immune cells Ri(t), which reflect the diversity of individuals, are selected from o + 1 to a according to P(xi) listed in descending order. The optimal individual is updated by

    where D(·) is the antibody affinity function, G(t) is the minimum affinity value of the population, and xg(t) is the global best solution at time t. Subsequently, the crossover and mutation operations proceed.

    In fact, antibody diversity is very important to the immune algorithm; it is closely related to the global search ability of the algorithm. For that reason, the R-bit comparison method, which reflects the degree of similarity between antibodies, was proposed. However, such an approach is time consuming and has low calculation accuracy. To avoid this problem, Chun proposed an information entropy-based artificial immune algorithm (IEIA) [26] that satisfies the diversity requirement. However, the constant factors of this algorithm, which influence the convergence performance, are determined by experience, and the calculations for different antibodies are similar. In view of these problems, Zheng et al. [27] proposed artificial immunity based on the Euclidean distance algorithm (EDAI). This algorithm calculates the Euclidean distance between two antibodies, and if it reaches a certain threshold, the antibodies are judged to be similar. However, setting the threshold is a tedious process. In light of the above problems, the distance concentration immune algorithm (DCIA) is proposed to increase diversity. This algorithm can effectively escape a local optimal solution without requiring any threshold. The results show that the calculation accuracy is improved effectively.

    III. DCIA-SORBF NEURAL NETWORK

    The performance of the RBFNN primarily relies on its structure and parameters. To optimize them simultaneously, the distance concentration immune algorithm (DCIA) is used. However, if the network structure of the immune cells is randomly increased and decreased [21], system instability will occur. To avoid this, the information-oriented error compensation algorithm (IOECA) is proposed, with the aim of obtaining a compact RBFNN structure. As a result, the accuracy of the algorithm is improved and its stability is guaranteed.

    A. DCIA Algorithm

    To reflect the diversity of the antibodies, DCIA [25] is used to calculate the concentration. The greater the distance between antibodies, the smaller the distance concentration. The DCIA algorithm, which is conducive to rapidly obtaining the optimal solution set, ensures the diversity of the antibodies so that the search does not become trapped in a local optimum. The expressions are

    where xi is the ith antibody and C(xi) is the distance concentration of antibody xi. d is the sum of the distances between antibodies in the population, di is the sum of the distances between the ith antibody and the other antibodies, and m is the size of the population. In addition, the affinity function D(xi) is another factor that determines the cloning probability. The formulas are

    where f(xi(t)) is the fitness of antibody xi(t). The detailed process is shown in Algorithm 1.
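    Since the closed-form concentration expression is given in the paper's equations rather than reproduced here, the sketch below implements one plausible instantiation, C(xi) = 1 − di/d, which matches the stated property that antibodies far from the rest of the population receive a smaller concentration. Both the function name and this closed form are assumptions.

    ```python
    import numpy as np

    def distance_concentration(pop):
        """Illustrative distance concentration (assumed form C(x_i) = 1 - d_i/d).

        pop : (m, dim) matrix, one antibody per row.
        d_i is the sum of distances from antibody i to all others, and d is the
        total distance in the population, as defined in the text.
        """
        diff = pop[:, None, :] - pop[None, :, :]
        dist = np.sqrt(np.sum(diff ** 2, axis=2))  # pairwise Euclidean distances
        d_i = dist.sum(axis=1)                     # distance of antibody i to the rest
        d = d_i.sum()                              # total distance in the population
        return 1.0 - d_i / d

    pop = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
    c = distance_concentration(pop)
    # The outlying antibody, being far from the others, gets the lowest concentration
    # and would therefore receive a higher cloning probability.
    ```
    
    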

    Remark 1: In the immune algorithm, the distance concentration, which reflects the diversity of antibodies, is calculated directly by the DCIA algorithm without setting a threshold, and antibodies with smaller affinity and lower concentration are stored in the elite archive.

    B. DCIA-SORBF Neural Network

    To adjust the network size and the parameters during the training process, the DCIA-SORBFNN is proposed in this section; the algorithm is summarized in Algorithm 2. From Fig. 2, we can see that an antibody is a complete RBF neural network, as shown in Fig. 2(a) (i.e., the RBF centers, the widths of the RBF neurons, and the output weights).

    Firstly, the population is randomly initialized, so different antibodies have different network sizes and parameters. The initialized variables are given by

    where A is the antibody population and Ai is the ith antibody, which has K hidden neurons. μi,K, σi,K, and ωi,K are the center, width, and output weight of the Kth hidden neuron in the ith antibody. To self-organize the network, K is a random integer. To improve the accuracy of the algorithm, the error criterion is selected as the fitness value of each antibody. The proposed expression is

    where ei(t) is the root-mean-square error (RMSE) of the ith antibody, T is the number of training samples, and yi(t) and yid(t) are the actual network output and the predicted output of the ith antibody at time t, respectively. To ensure the convergence of the algorithm, the network size of the ith antibody must be kept within a set range up to a maximum value, where n is the number of input variables.
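    The RMSE fitness of one antibody can be computed as below; the helper name is illustrative, and the 1/T normalization inside the square root is the standard RMSE convention assumed here.

    ```python
    import numpy as np

    def antibody_fitness(y_pred, y_true):
        """RMSE fitness e_i(t) of one antibody over T training samples."""
        y_pred = np.asarray(y_pred, dtype=float)
        y_true = np.asarray(y_true, dtype=float)
        T = y_true.size
        # Root-mean-square error: sqrt of the mean squared deviation
        return float(np.sqrt(np.sum((y_pred - y_true) ** 2) / T))

    e = antibody_fitness([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])
    ```

    Lower fitness means a better antibody, which is why the elite archive keeps the antibodies with the smallest affinity values.
    
    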

    As seen in Fig. 2, the 2nd immune antibody is the optimal antibody, obtained by (9a) and (9b) at time t. The optimal size of the RBF neural network is thus obtained, and the dimensions of the other antibodies need to be updated (Fig. 2(d)). The network size that satisfies the following condition is updated:

    where Kbest is the network size of the optimal antibody, and Ki is the network size of the ith antibody (i = 1, 2, …, s). Each adjustment step splits or deletes one hidden neuron. In contrast, in [21] a neuron is deleted or added randomly, so the network is sometimes unstable.

    To update the dimensions stably, the information-oriented error compensation algorithm (IOECA) is proposed in this paper. In IOECA, the information processing strengths (IPSs) [16] of the hidden neurons are calculated during learning, based on the independent information between the hidden neurons and their independent contribution to the output neurons, to identify which hidden neurons need to be updated. The IPSs are given as

    where Uij(t) and the corresponding output IPS are the input and output information processing strengths of the jth hidden neuron in the ith antibody, S is the number of samples, and Hij(t) is the independent component contribution. By calculating the independent information of the hidden neurons, the IPSs measure the contribution of each independent hidden neuron to the output neurons. Here, the information-oriented algorithm (IOA), an independent component analysis method, is used. The expressions [16] are shown in (12a)-(12f)

    where Ci(t) is the independent contribution matrix of the ith antibody and dij(t) is the independent contribution of the jth hidden neuron, with Ψi(t) = [φi(t − S + 1), …, φi(t − 1), φi(t)]ᵀ the output matrix of the hidden layer in the ith antibody and j = 1, …, K. γi(t) is the coefficient matrix, which is given as

    where σi(t), ηi(t), and εi(t) are the covariance matrix of Ψi(t), the whitening matrix of yi(t) = [yi(t − S + 1), …, yi(t − 1), yi(t)], and the whitening transformation matrix of yi(t), respectively. σi⁻¹(t)Ψi(t) is a decorrelation of Ψi(t), which represents the independence between the hidden neurons. σi(t), Ψi(t), and εi(t) are given as

    Ui(t) and Λi(t) are the eigenvector and eigenvalue matrices of yi(t), respectively. Moreover, ηi(t)εi(t) is used to reduce the correlation between the output layer and the hidden neurons. The structure is then adjusted according to the competitive capacities of the hidden neurons, which are obtained from the IPSs. The adjustment rules are as follows:

    Case 1 (Neuron Splitting Rule): If Uij(t) is larger, the input samples are closer to the center of the jth hidden neuron, and the jth hidden neuron is more active for these input samples. Meanwhile, based on (11a) and (11b), a larger output IPS means the hidden neuron is more sensitive to the output neurons. Therefore, the input and output IPSs describe the information processing ability of the hidden neurons. The splitting condition is described as

    where P is the sample matrix, Pk is the kth sample, S is the size of P, E(t) is the error matrix of the sample matrix P at time t, nE is the vector counting how often each sample produces the maximum error, and nek is the maximum of nE. The kth sample Pk is the one whose error is largest most often over the whole iteration process, k ∈ {1, 2, …, S}. Xk and Yk are the input and output matrices for all samples, respectively. They can be described as

    where xko and yko are the oth input and output values of sample Pk, respectively, and l and v are the input and output dimensions of the samples. Subsequently, the parameters of the new neuron are given as

    where cij(t), σij(t), and ωij(t) are the center, radius, and weight of the jth pre-split hidden neuron in the ith immune cell at time t, and cinew(t), σinew(t), and ωinew(t) are the center, radius, and connection weight of the newly added hidden neuron, with α ∈ [0.95, 1.05] and β ∈ [0, 0.1]. The sample with the most maximum-error occurrences is trained with compensation by (15), so the accuracy of the algorithm is improved.
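    A sketch of the error-compensation splitting step is given below. The occurrence counting follows the description of nE above; the new-neuron formulas stand in for (15), which is not reproduced in this excerpt, so the perturbation forms are assumptions, with α ∈ [0.95, 1.05] and β ∈ [0, 0.1] as stated in the text.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def worst_sample_index(error_history):
        """Index k of the sample whose error was the per-iteration maximum most
        often (the counts n_E; n_ek is their maximum).

        error_history : (iterations, S) matrix of per-sample errors.
        """
        n_E = np.zeros(error_history.shape[1], dtype=int)
        for errors in error_history:               # one row per iteration
            n_E[np.argmax(np.abs(errors))] += 1    # who had the max error?
        return int(np.argmax(n_E))

    def split_neuron(c_j, sigma_j, w_j, x_worst):
        """Assumed instantiation of the splitting rule: the new neuron is a
        perturbed copy of the pre-split neuron, pulled toward the worst sample
        x_worst so that its error is compensated."""
        alpha = rng.uniform(0.95, 1.05)
        beta = rng.uniform(0.0, 0.1)
        c_new = alpha * c_j + beta * (x_worst - c_j)   # assumed form
        sigma_new = alpha * sigma_j                    # assumed form
        w_new = beta * w_j                             # assumed form
        return c_new, sigma_new, w_new

    hist = np.array([[0.1, 0.9, 0.2],
                     [0.3, 0.8, 0.1],
                     [0.7, 0.2, 0.1]])
    k = worst_sample_index(hist)                       # sample 1 is worst here
    c_new, s_new, w_new = split_neuron(np.array([1.0, 1.0]), 0.5, 2.0,
                                       x_worst=np.array([2.0, 0.0]))
    ```
    
    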

    Case 2 (Neuron Deleting Rule): Based on the information processing ability of the hidden neurons, the jth hidden neuron is deleted if its IPSs satisfy the following conditions:

    The connection weights of the j′th hidden neuron are then updated according to

    The j′th hidden neuron is the one nearest the jth hidden neuron before the jth neuron is cut off. The updated weights are the connection weights between the j′th hidden neuron and the output layer before and after the jth neuron is cut off, respectively. The center and radius of the j′th hidden neuron remain unchanged after the jth hidden neuron is deleted.
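    The weight compensation for pruning can be sketched as follows. The paper's exact update is in the equation above and is not reproduced here; the form used below, which folds the pruned neuron's contribution into a neighbouring neuron so that the current output is preserved, is an assumption, and activation magnitude is used as a stand-in for the paper's nearest-center criterion.

    ```python
    import numpy as np

    def delete_neuron(weights, phi, j):
        """Hypothetical weight compensation when pruning hidden neuron j.

        weights : (K,) output weights, phi : (K,) current hidden outputs.
        Neuron j's output contribution w_j * phi_j is folded into a neighbour
        j' (assumed update: w_j' <- w_j' + w_j * phi_j / phi_j'), so the
        network output at the current sample is unchanged by the pruning.
        """
        others = [i for i in range(len(weights)) if i != j]
        j2 = others[int(np.argmax(phi[others]))]   # stand-in for the "nearest" neuron
        w = weights.copy()
        w[j2] += w[j] * phi[j] / phi[j2]           # compensate the lost contribution
        return np.delete(w, j), np.delete(phi, j)

    w = np.array([1.0, 2.0, 3.0])
    phi = np.array([0.5, 0.4, 0.1])
    # Output before pruning neuron 2: 1*0.5 + 2*0.4 + 3*0.1 = 1.6
    w_new, phi_new = delete_neuron(w, phi, j=2)
    ```
    
    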

    Case 3 (Neuron Retaining Rule): If the input and output IPSs of a hidden neuron are neither the maximum nor the minimum information strength (i.e., neither (13) nor (16) holds), the structure of the ith immune cell does not change. With this self-organizing mechanism, the structure of the immune cell is automatically organized to improve performance, and the prediction accuracy of the system is further improved by the error compensation method.

    Remark 2: In [21], the neuron to be split or deleted is chosen at random, which makes the network unstable. To solve this problem, the IOA is used to provide the splitting and deleting rules in the artificial immune algorithm.

    Remark 3: In IOECA-SORBF, error compensation is used. When the structure of the RBF network is split or pruned, the parameters of the new neuron need to be updated. In [16], however, the last sample of every five samples is used to update the newly added hidden neurons, so the accuracy cannot be ensured. To increase the prediction accuracy, the sample with the most frequent occurrence of the maximum error is used to regulate the parameters of the new neuron. Therefore, the output error is compensated appropriately, and the prediction accuracy of the RBF neural network is increased.

    IV. CONVERGENCE ANALYSIS

    For the proposed DCIA-SORBFNN, the convergence of the algorithm is an important issue and needs to be carefully investigated. In this section, the analysis of convergence is provided in detail to guarantee the successful application of the proposed DCIA-SORBFNN. Furthermore, one can obtain a better understanding of the DCIA-SORBFNN through this analysis.

    A. Convergence Analysis of DCIA Algorithm

    The crossover, mutation, and antibody-concentration regulation operations in each generation correspond to a state transition from one antibody population to another. The transition probability Pij from population i to population j depends only on the previous population state and is independent of the evolutionary generation. Therefore, the antibody state-transfer process can be regarded as a finite homogeneous Markov chain. The properties of the crossover, mutation, and selection operations are proved below.

    Definition 4: Let Fk be the best antibody at the kth step and F* be the antigen (optimum) of the problem to be solved. The DCIA algorithm is convergent if and only if lim k→∞ P{Fk = F*} = 1 [29].

    According to the definition in [29], the crossover operator may be regarded as a random total function whose domain and range are R. The state space is R = IB^(l·n) = {0, 1}^(l·n), where n is the population size and l is the number of genes; i.e., each state of R is mapped probabilistically to another state. Therefore, the state transition matrix C is stochastic.

    Because the mutation operator is applied independently to each gene/bit in the population, the probability that state i becomes state j after mutation can be aggregated as [29]

    P(i → j) = pm^Hij (1 − pm)^(l·n − Hij)

    where pm ∈ (0, 1) and i, j ∈ S. Hij denotes the Hamming distance between the binary representations of states i and j. With the length of each antibody set to l, the Hamming distance between states i and j is defined as

    Hij = Σk |gik − gjk|

    where gik and gjk are the kth genes of the immune cells xi and xj, respectively. The same holds for the other operators and their transition matrices; thus, M is positive. The selection matrix S is column-allowable because, with positive probability, selection does not alter the state generated by mutation. The state transition of one generation of antibodies is then completed, and P = SCM is positive [29].
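    The Hamming distance and the resulting mutation transition probability can be sketched as follows, for a population-level state encoded as a bit string; the helper names are illustrative.

    ```python
    def hamming(gi, gj):
        """Hamming distance H_ij between two equal-length binary states."""
        return sum(a != b for a, b in zip(gi, gj))

    def mutation_transition_prob(gi, gj, pm):
        """Probability that bitwise mutation with rate pm turns state i into
        state j: each of the H_ij differing bits must flip, and all remaining
        bits must stay unchanged, so the result is strictly positive for any
        pm in (0, 1)."""
        h = hamming(gi, gj)
        return pm ** h * (1 - pm) ** (len(gi) - h)

    p = mutation_transition_prob([0, 1, 1, 0], [0, 1, 0, 0], pm=0.1)
    ```

    Because every such entry is positive, the mutation transition matrix M is positive, which is precisely the property used in the argument above.
    
    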

    Definition 5: A square matrix A: n×n is said to be reducible if A can be brought into the form

    A = | C  0 |
        | R  T |

    (with square matrices C and T) by applying the same permutations to rows and columns.

    Theorem 1: Let P be a reducible stochastic matrix, where C: m×m is a primitive stochastic matrix and R, T ≠ 0. Then

    is a stable stochastic matrix with P∞ = 1′p∞, where p∞ = p0·P∞ is unique regardless of the initial distribution, and p∞ satisfies pi∞ > 0 for 1 ≤ i ≤ m and pi∞ = 0 for m < i ≤ n.

    Theorem 2: The immune algorithm based on distance concentration converges to the global optimum with probability 1.

    Proof: Denote the state transition matrix of the immune algorithm by U, which is a stochastic matrix. Its states are ordered as follows: the first state is the global optimal solution, the second state is the global suboptimal solution, …, and the nth state is the worst solution. Then, for any state i, uij = 0 for all j > i, and the upgraded matrix can be written as

    With P = SCM, the transition matrix for DCIA becomes

    C = [1] is a first-order stochastic matrix. The submatrices Pa1 with a ≥ 2 can be gathered in a rectangular matrix R. According to Definition 5, the state transition matrix is reducible [29]. From Theorem 1,

    Thus, Theorem 1 and Definition 4 prove that the DCIA converges to the global optimum [29].

    B. Convergence Analysis of DCIA-SORBF Algorithm

    According to the above description, a simple and efficient DCIA algorithm with a special fitness function is used to automatically construct the RBFNN. The goal of the DCIA algorithm is to construct an appropriate RBFNN. After the DCIA algorithm returns a set of immune cells, the parameters and size of the DCIA-SORBFNN corresponding to each immune cell can be obtained by using (8b).

    To provide a theoretical basis for applications, this section presents the convergence analysis of the DCIA-SORBF neural network based on the convergence analysis of the DCIA algorithm. The result is summarized in Theorem 3.

    Theorem 3: If the bound of the predefined maximum distance concentration satisfies Dmax(xi(t)) < 2|Ei(t)| / ((2 + n)Ki(t))^(1/2), then the DCIA-SORBFNN is convergent, and Ei(t) → 0 as t → ∞, i = 1, 2, …, s.

    Proof: Consider the following Lyapunov function:

    According to (9b), the system error is [30]

    Then the change in the Lyapunov function between two steps is

    In addition, the error change is denoted:

    Then, the differential formula of the RMSE is

    where ωi(t), μi(t), and σi(t) are the three adjusted parameters in the RBFNN.

    where Δωi(t), Δμi(t), and Δσi(t) are the parameter updating rules, and D(xi(t)) is the affinity function of the ith immune cell. According to the above analysis, the factorization of (30) is described as

    where

    The following conditions can be obtained [21] because the bound of the predefined maximum distance concentration is adjusted dynamically:

    where Ki(t) is the number of hidden neurons at time t for the ith immune cell,

    which leads to

    Thus, ei(t) is bounded for t ≥ t0. Moreover, by the Lyapunov-like lemma, it follows that

    Therefore, ei(t) → 0 as t → ∞, i = 1, 2, …, s.

    In addition, the structure self-organizing phase converges according to [31]; thus, the convergence of the proposed DCIA-SORBF neural network is proved.

    Remark 4: Based on the above discussion, in the adjustment phase of the parameters and network size, the convergence of the DCIA-SORBF neural network is maintained according to (25a)-(35). Thus, the convergence of the DCIA-SORBF neural network, which is necessary for successful applications, is guaranteed by the DCIA algorithm.

    TABLE I

    Algorithm          Testing RMSE (Mean)  Testing RMSE (Dev.)  No. of hidden neurons  Testing time (s)  Mean/Dev. rank
    DCIA-SORBF         0.0122               0.0035               8                      0.0032            1/2
    APSO-SORBF [21]    0.0133*              0.0056*              9*                     0.0039*           2/3
    AI-RBF [37]        0.0235               0.0092               10                     0.0062            4/5
    GAP-RBF [13]       0.0415*              0.0087*              19*                    0.0087*           8/4
    PSO-RBF [36]       0.0368*              0.0164*              13*                    0.0054*           7/6
    AI-PSO-RBF [34]    0.0295*              0.073*               11*                    0.0042*           6/7
    SAIW-PSO-RBF [35]  0.0197*              0.0026*              11*                    0.0046*           3/1

    V. DCIA-SORBF SIMULATION AND APPLICATION

    In this section, five problems are used to demonstrate the effectiveness of DCIA-SORBF: function approximation, the Mackey-Glass time series prediction, nonlinear system identification, the Lorenz time series prediction, and the effluent total phosphorus (TP) prediction. The first four focus on nonlinear system modeling or prediction, whereas TP prediction is an actual industrial problem in a wastewater treatment process (WWTP). In addition, the fitness is used to reflect the diversity of DCIA-SORBF, and six algorithms are used for performance comparison: APSO-SORBF [21], AI-RBF [37], GAP-RBF [13], PSO-RBF [36], AI-PSO-RBF [34], and the stability adaptive inertia weight PSO-based RBF (SAIW-PSO-RBF) [35]. All examples were programmed in MATLAB R2014a and run on a PC with a clock speed of 2.60 GHz and 4 GB of RAM under Microsoft Windows 8.0.

    A. Function Approximation

    In this example, the DCIA-SORBF neural network is used to approximate the following benchmark problem:

    This function has been used to examine many popular algorithms in [13], [32] and [33]. There are 300 training patterns generated randomly on the domain [0, 2] along the X direction. Similarly, the testing samples are randomly produced in the range [0, 2]; the testing set contains 200 samples. In Fig. 3, four indices are used to reflect the performance of the DCIA-SORBF neural network: the number of hidden neurons, the training RMSE, the approximation error, and the testing output. In addition, the proposed DCIA-SORBF algorithm is compared with six other algorithms, all using the same training and test partitions. The initial parameters of the DCIA-SORBF neural network are set as: the crossover probability pc = 0.4, the mutation probability pm = 0.35, the diversity evaluation parameter ps = 0.85, the elite file size Ea = 60, and the maximum number of neurons in an antibody max_num = 60.

To compare these algorithms, four performance parameters are shown in Table I: the mean value and standard deviation (Dev.) of the testing RMSE, the number of hidden neurons, and the testing time. The results show that the structure of the DCIA-SORBF neural network is the most compact one owing to its self-organizing capability. Moreover, the proposed DCIA-SORBF neural network requires the least testing time of all algorithms. Wilcoxon's rank is added to verify the effectiveness of the proposed DCIA-SORBF algorithm; the Mean/Dev. rank column ranks the results of mean and Dev., with the ranking by mean on the left and the ranking by Dev. on the right. We can see that DCIA-SORBF ranks first in mean and second in Dev.
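The Mean/Dev. rank column is simply two independent rankings, one by mean testing RMSE and one by its standard deviation. A minimal sketch of that bookkeeping, using the Table I values (algorithm names abbreviated):

```python
# (mean testing RMSE, Dev.) per algorithm, taken from Table I.
results = {
    "DCIA-SORBF":   (0.0122, 0.0035),
    "APSO-SORBF":   (0.0133, 0.0056),
    "AI-RBF":       (0.0235, 0.0092),
    "GAP-RBF":      (0.0415, 0.0087),
    "PSO-RBF":      (0.0368, 0.0164),
    "AI-PSO-RBF":   (0.0295, 0.0730),
    "SAIW-PSO-RBF": (0.0197, 0.0026),
}

def rank_by(index):
    """Rank algorithms (1 = best) by one column of the results table."""
    ordered = sorted(results, key=lambda alg: results[alg][index])
    return {alg: pos + 1 for pos, alg in enumerate(ordered)}

mean_rank = rank_by(0)   # ranking by mean testing RMSE
dev_rank = rank_by(1)    # ranking by Dev.
print(mean_rank["DCIA-SORBF"], dev_rank["DCIA-SORBF"])
```

This reproduces the 1/2 entry for DCIA-SORBF and, for instance, SAIW-PSO-RBF's first place in Dev.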

    B. Mackey-Glass Time Series Prediction

The Mackey-Glass time series prediction problem is one of the benchmark problems used to assess the performance of learning algorithms [21]. The time series is generated by the following equation:

Table II. Performance comparison for Mackey-Glass time series prediction.

Algorithm           Testing RMSE          No. of hidden   Testing    Mean/Dev.
                    Mean       Dev.       neurons         time (s)   rank
DCIA-SORBF          0.0116     0.0075     9               0.0035     1/1
APSO-SORBF [21]     0.0135*    0.0095*    11*             0.0039*    2/2
AI-RBF [37]         0.0151     0.0128     11              0.0042     3/3
GAP-RBF [13]        0.0321*    —          19*             —          7/—
PSO-RBF [36]        0.0208*    0.0249*    12*             0.0047*    6/6
AI-PSO-RBF [34]     0.0189*    0.0132*    11*             0.0043*    5/4
SAIW-PSO-RBF [35]   0.0166*    0.0145*    11*             0.0053*    4/5

where α = 0.1, b = 0.2, and τ = 17, and the initial condition x(0) = 1.2. The value x(t + Δt) is predicted from the previous values {x(t), x(t − Δt), …, x(t − (l − 1)Δt)}. In this paper, the prediction model is given by

In the simulation experiment, 1700 data points were selected from t = 1 to t = 1700. Among them, the first 1200 data points are used for training, and the last 500 data points are used as test data. The initial network size is set to 60, and pc = 0.45, pm = 0.55, ps = 0.9, Ea = 60 are selected as the best parameters. The experimental results are shown in Fig. 4. The proposed algorithm tracks the Mackey-Glass time series well, and the test error remains within the small range of [−0.05, 0.05]. The network structure is constantly adjusted during the iterations, and the performance is best when the network structure is 8. As can be seen from Table II, DCIA-SORBF has the smallest mean and Dev. values compared with the other algorithms. At the same time, it has the smallest testing time because it has the most compact network structure. In addition, APSO-SORBF [21] outperforms the remaining algorithms and ranks second, while AI-RBF [37], SAIW-PSO-RBF [35], AI-PSO-RBF [34] and PSO-RBF [36] rank third to sixth, respectively. GAP-RBF [13] has the worst test error. Therefore, DCIA-SORBF has the smallest test error and the best system stability among the compared algorithms.

    C. Nonlinear System Identification

    The nonlinear system is given by

There are two input values, y(t) and u(t), and the output y(t + 1). The nonlinear system is used in [17], [13], and [36] to demonstrate the performance of a neural network. The training inputs were obtained from two parts: half of them were sampled uniformly over the interval [−2, 2], and the others were generated by 1.05 × sin(t/45). Besides, 2400 and 1000 samples were selected for training and testing, respectively. The testing samples of input were set as

where u(t) is the input signal used to determine the identification results for the testing signal. To evaluate the performance of the DCIA-SORBF neural network, its results are compared with those of six other neural networks. Fig. 5 records the RMSE values, prediction results, prediction errors and the number of hidden neurons for DCIA-SORBF. The self-organizing adjustment of the number of hidden neurons is shown in Fig. 5(d). We can see that the DCIA-SORBF neural network performs well and the test error remains within the range [−0.05, 0.05]. Therefore, it can predict the nonlinear system function well.
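The construction of the training inputs described above can be sketched as below. The even 1200/1200 split between the uniform samples and the sinusoidal samples, and the time index range for t, are assumptions; the system equation and the exact test signal are not reproduced in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(1)

# Half of the 2400 training inputs drawn uniformly from [-2, 2] ...
u_uniform = rng.uniform(-2.0, 2.0, 1200)

# ... and the other half generated by 1.05 * sin(t / 45).
t = np.arange(1, 1201)
u_sine = 1.05 * np.sin(t / 45.0)

u_train = np.concatenate([u_uniform, u_sine])
print(u_train.shape)
```

Mixing a random excitation with a smooth periodic one is a common way to cover both the amplitude range and the dynamic behavior of the plant during identification.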

Table III. Performance comparison for nonlinear system identification.

Algorithm           Testing RMSE          No. of hidden   Testing    Mean/Dev.
                    Mean       Dev.       neurons         time (s)   rank
DCIA-SORBF          0.0724     0.0035     7               0.0039     1/1
APSO-SORBF [21]     0.0916     0.0116     11*             0.0047*    2/4
AI-RBF [37]         0.1049     0.052      11              0.0062     4/7
GAP-RBF [13]        0.2229*    0.0165*    15*             0.0068*    6/6
PSO-RBF [36]        0.2564*    0.0126*    12*             0.0049*    7/5
AI-PSO-RBF [34]     0.1536*    0.0109*    14*             0.0051*    5/2
SAIW-PSO-RBF [35]   0.0934*    0.0113*    13*             0.0056*    3/3

Table III exhibits the detailed results of the different algorithms. Four indices are selected to reflect the performances: the number of hidden neurons, the mean value and standard Dev. of the testing RMSE, and the testing time. In Table III, the mean value and standard Dev. of the testing RMSE are the smallest for DCIA-SORBF, its number of hidden neurons is the smallest, and its testing time is the least. APSO-SORBF is second only to DCIA-SORBF, while PSO-RBF performs poorest for nonlinear system identification. This example shows that the DCIA-SORBF neural network has better identification ability and a more compact structure.

    D. Lorenz Time Series Prediction

    The Lorenz time series system is a mathematical model for atmospheric convection that is also widely used as a benchmark in many applications [33]. As a 3-D and highly nonlinear system, the Lorenz system is governed by

where a1, a2, and a3 are the system parameters, a1 = 10, a2 = 28, and a3 = 8/3; x(t), y(t), and z(t) are the 3-D space vectors of the Lorenz system. In this example, the fourth-order Runge-Kutta approach with a step size of 0.01 is adopted to generate the Lorenz samples, and only the Y-dimension samples y(t) are used for the time series prediction. Of 3400 data samples generated from y(t), the first 2400 samples were taken as training data, and the last 1000 samples were used to check the proposed model; the ratio is close to 7:3. The test results in Fig. 6 show that the DCIA-SORBF neural network performs well and the test error remains within the range [−0.05, 0.05]. The network structure is constantly adjusted during the iterations, and the performance is best when the network structure is 9. Moreover, six algorithms, APSO-SORBF [21], AI-RBF [37], GAP-RBF [13], PSO-RBF [36], AI-PSO-RBF [34], and SAIW-PSO-RBF [35], are compared with DCIA-SORBF in Table IV. This comparison shows that the DCIA-SORBF neural network has the smallest mean error and standard Dev., and its testing RMSE is far better than those of the other algorithms except AI-RBF. In addition, AI-RBF, which is worse than DCIA-SORBF, is better than the other five algorithms, while GAP-RBF shows the worst performance. The results indicate that DCIA-SORBF has better identification ability for Lorenz time series prediction than the other algorithms.
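The sample-generation procedure above, fourth-order Runge-Kutta with step 0.01 on the Lorenz equations and a 2400/1000 split of the y-dimension, can be sketched as follows. The initial state (1, 1, 1) is an assumption, since it is not given in this excerpt.

```python
import numpy as np

a1, a2, a3, h = 10.0, 28.0, 8.0 / 3.0, 0.01   # system parameters, step size

def lorenz(s):
    """Right-hand side of the Lorenz equations for state s = (x, y, z)."""
    x, y, z = s
    return np.array([a1 * (y - x), x * (a2 - z) - y, x * y - a3 * z])

def rk4_step(s):
    """One fourth-order Runge-Kutta step of size h."""
    k1 = lorenz(s)
    k2 = lorenz(s + 0.5 * h * k1)
    k3 = lorenz(s + 0.5 * h * k2)
    k4 = lorenz(s + h * k3)
    return s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([1.0, 1.0, 1.0])   # assumed initial condition
ys = []
for _ in range(3400):
    state = rk4_step(state)
    ys.append(state[1])             # keep only the y-dimension
y = np.array(ys)

train, test = y[:2400], y[2400:]    # 2400 training / 1000 test samples
print(len(train), len(test))
```

With h = 0.01 the RK4 trajectory stays on the bounded Lorenz attractor, so the y samples remain in a fixed range suitable for network training.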

    E. Effluent TP Prediction in WWTP

Table IV. Performance comparison for Lorenz time series prediction.

Algorithm           Testing RMSE          No. of hidden   Testing    Mean/Dev.
                    Mean       Dev.       neurons         time (s)   rank
DCIA-SORBF          0.0958     0.026      9               0.0075     1/1
APSO-SORBF [21]     0.1726*    0.054*     5*              0.0069*    3/3
AI-RBF              0.1049     0.052      11              0.0062     2/2
GAP-RBF [13]        2.3294*    —          70*             —          7/—
PSO-RBF [36]        0.2673*    0.095*     6*              0.0076*    6/6
AI-PSO-RBF [34]     0.2017*    0.058*     6*              0.0076*    5/4
SAIW-PSO-RBF [35]   0.1981*    0.073*     5*              0.0072*    4/5

The effluent TP is an important parameter for evaluating the performance of a WWTP [30]. However, the values of the effluent TP are difficult to measure due to the biological characteristics of the activated sludge process, and measuring the effluent TP is often associated with expensive capital and maintenance costs [37]. Therefore, the proposed DCIA-SORBF neural network is used to predict the values of the effluent TP in this experiment.

Due to the influence of measurement accuracy, operation and measurement methods, abrupt changes in water quality and so on, the collected data contain a certain degree of error. Moreover, direct soft-sensor modeling of unprocessed data will inevitably lead to a poorly performing system and unreliable prediction results. Therefore, in order to ensure the reliability and accuracy of soft sensing, it is necessary to eliminate abnormal data. Existing TP prediction for wastewater treatment mostly applies noise reduction first. However, all the collected data here are real data in which some noise is hard to avoid, so we conduct the total phosphorus experiment with the real data. We obtained 367 sets of data from a small sewage treatment plant in Beijing from June to August 2015; 267 sets of data are used as training samples and 110 sets of data are used as test samples, a ratio of training samples to test samples close to 7:3. In this experiment, the proposed DCIA-SORBF neural network is used to predict the values of the effluent TP, and the easy-to-measure process variables, namely the temperature, oxidation reduction potential, influent TP, dissolved oxygen, pH and total soluble solids, are selected as the input variables of the DCIA-SORBF neural network.
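The data preparation described above can be sketched as below with placeholder values standing in for the plant data, which are not reproduced here. The min-max scaling is an assumed, common preprocessing step (the text itself only describes removing abnormal data), and the split simply follows the roughly 7:3 ratio.

```python
import numpy as np

rng = np.random.default_rng(2)
# Placeholder values for the six easy-to-measure inputs (temperature, ORP,
# influent TP, DO, pH, total soluble solids) and the effluent TP target.
X = rng.uniform(1.0, 9.0, size=(367, 6))
y_tp = rng.uniform(0.1, 1.0, size=367)

# Min-max scaling of each input variable to [0, 1] (assumed preprocessing).
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

split = int(0.7 * len(X_norm))        # roughly the 7:3 ratio described
X_train, X_test = X_norm[:split], X_norm[split:]
y_train, y_test = y_tp[:split], y_tp[split:]
print(X_train.shape, X_test.shape)
```

Scaling the six inputs to a common range keeps the Gaussian hidden units from being dominated by the variable with the largest raw magnitude.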

The experimental results are shown in Fig. 7. As can be seen from the graph, DCIA-SORBF predicts the TP value well with a small prediction error, which remains between −0.015 and 0.015. When noise data appear in the 70th to 90th samples, the algorithm still shows a certain degree of prediction distortion, but it has good robustness and the prediction error stays within an acceptable range. Therefore, the algorithm can still track and predict the total phosphorus. In addition, the comparison results are recorded in Table V, where DCIA-SORBF is compared with the six other algorithms. From the results, we can see that DCIA-SORBF has the smallest mean testing RMSE and a more compact structure for TP prediction, as well as the shortest prediction time. The above results show that DCIA-SORBF is more suitable and effective than the other SORBF neural networks for predicting the effluent TP values.

In order to avoid randomness in the experimental results and to judge the overall performance of the algorithms, the experimental results of the five test functions are counted and sorted in Table VI. Rank sum on mean and rank sum on Dev. are the sums of the mean and Dev. rankings over all test functions for each algorithm. As can be seen, for all test functions, the sums of the mean and Dev. rankings of DCIA-SORBF are the smallest. In addition, sum rank on all the problems is, for each algorithm, the sum of the mean and Dev. rankings over all test functions, and final rank on all the problems is the final ranking of each algorithm. The experimental results show that DCIA-SORBF has the smallest test error and the best stability, so it has the best prediction performance.
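The rank-sum bookkeeping described above can be sketched as below for two columns of Table VI, using the per-problem Mean/Dev. rank pairs in the order: function approximation, Mackey-Glass, nonlinear system identification, Lorenz, effluent TP prediction.

```python
# Per-problem (mean rank, Dev. rank) pairs, taken from Table VI.
ranks = {
    "DCIA-SORBF": [(1, 2), (1, 1), (1, 1), (1, 1), (1, 2)],
    "APSO-SORBF": [(2, 3), (2, 2), (2, 4), (3, 3), (2, 2)],
}

summary = {}
for alg, rs in ranks.items():
    mean_sum = sum(m for m, _ in rs)   # "rank sum on mean" row
    dev_sum = sum(d for _, d in rs)    # "rank sum on Dev." row
    summary[alg] = (mean_sum, dev_sum)
print(summary)
```

The computed sums reproduce the rank-sum rows of Table VI for these two algorithms (5 and 7 for DCIA-SORBF, 11 and 14 for APSO-SORBF).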

Table V. Performance comparison for effluent TP prediction in the WWTP.

Algorithm           Testing RMSE          No. of hidden   Testing    Mean/Dev.
                    Mean       Dev.       neurons         time (s)   rank
DCIA-SORBF          0.0102     0.0027     10              0.0097     1/1
APSO-SORBF [21]     0.0127*    0.0025*    12*             0.0120*    2/2
AI-RBF [37]         0.0201     0.0076     12              0.0580     5/5
GAP-RBF [13]        0.0356*    0.0085*    18*             0.0630*    6/6
PSO-RBF [36]        0.1602*    0.0915*    14*             0.0055*    7/7
AI-PSO-RBF [34]     0.0191*    0.0052*    12*             0.0290*    4/4
SAIW-PSO-RBF [35]   0.0159*    0.0049*    12*             0.0410*    3/3

    F. The Fitness

In order to verify the diversity of the DCIA algorithm, the Mackey-Glass time series prediction problem is used as the standard test function in this experiment. Meanwhile, the average fitness (test error here) and the best fitness value of all antibodies are taken as test indicators. To verify the effectiveness of combining the distance concentration method with the immune algorithm, this paper compares it with a self-organizing RBF neural network based on artificial immunity (IA-SORBF), with the number of iterations set to 100. The experimental results are shown in Fig. 8. We can see that IA-SORBF converges in fewer than 10 generations, while DCIA-SORBF needs at least 20 generations; therefore, IA-SORBF is more likely to fall into a local optimum. In addition, the average fitness of all antibodies given in Fig. 8(a) shows that the fluctuation range of DCIA-SORBF is larger than that of IA-SORBF, which indicates that the differences among antibodies in the DCIA-SORBF algorithm are greater and the diversity of antibodies is better. At the same time, as shown in Fig. 8(b), the optimal fitness value of the DCIA-SORBF algorithm is smaller. Therefore, the algorithm has a smaller test error and better diversity, so it can better approximate the global optimal solution.
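The two indicators tracked in Fig. 8, the average fitness of all antibodies and the best fitness per generation, can be sketched as below for a toy population. The random perturbation with greedy selection only stands in for the immune operators; it is not the DCIA update itself.

```python
import numpy as np

rng = np.random.default_rng(3)
population = rng.uniform(0.0, 1.0, size=50)   # toy antibody fitness values

avg_history, best_history = [], []
for gen in range(100):
    avg_history.append(float(population.mean()))   # Fig. 8(a)-style indicator
    best_history.append(float(population.min()))   # Fig. 8(b): lower error is better
    # Toy update: perturb each antibody and greedily keep the better fitness.
    candidate = np.abs(population + rng.normal(0.0, 0.05, size=50))
    population = np.minimum(population, candidate)

print(best_history[0], best_history[-1])
```

Because the update is elitist per antibody, the best-fitness curve is monotonically non-increasing, while the population mean can still fluctuate, which is the qualitative behavior the two panels of Fig. 8 are meant to separate.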

    VI. DISCUSSION

In order to verify the effectiveness of the proposed algorithm, several experiments were conducted. The experimental results show that the diversity of the proposed DCIA algorithm is significantly increased compared with that of IA without the distance concentration algorithm, so that the proposed DCIA algorithm can better jump out of local optima. Secondly, five test functions are used in this paper. From Figs. 3-7, it can be seen that DCIA-SORBF can predict the value of the objective function well with a small error. At the same time, the algorithm can adjust the network structure adaptively with the change of sample data, finally making the RBF network structure the most compact. In addition, the proposed DCIA-SORBF algorithm is compared with six algorithms: APSO-SORBF [21], AI-RBF [37], GAP-RBF [13], PSO-RBF [36], AI-PSO-RBF [34], and SAIW-PSO-RBF [35]. As can be seen from Tables I-V, except for the Lorenz time series system, the DCIA-SORBF algorithm has the smallest test error on the test functions; therefore, it has the best prediction accuracy. Meanwhile, for the RMS error value, the RMS error of the algorithm is the smallest except for the function approximation test function, so the system has high stability. Since the RBF network structure is self-organizing, we can see from the tables that DCIA-SORBF has the smallest network structure for function approximation, nonlinear system identification and effluent TP prediction in the WWTP, which effectively avoids redundancy of the network structure and thus yields the shortest computing time. However, for the more complex Lorenz time series system, a larger network structure is needed to improve the prediction accuracy, and the experimental results show that the DCIA-SORBF error is the smallest except for AI-RBF; compared with the other algorithms, it has the smallest root mean square error.
For the Mackey-Glass time series prediction function, the network structure of the algorithm is slightly larger than the others, but its error and root mean square error are the smallest. Finally, in order to evaluate the performance of the proposed DCIA-SORBF algorithm, Wilcoxon's rank is used to analyze the experimental results. From Table VI, over the five test functions, the algorithm has the smallest error and root mean square error compared with the other algorithms, which proves that the DCIA-SORBF algorithm has the highest prediction accuracy and better stability. At the same time, the statistics of the final rank on all the problems show that the algorithm has the best overall performance.

Table VI. Mean/Dev. ranks of each algorithm on all problems and the overall ranking.

Problems                              DCIA-SORBF  APSO-SORBF  AI-RBF  GAP-RBF  PSO-RBF  AI-PSO-RBF  SAIW-PSO-RBF
Function approximation                1/2         2/3         4/5     8/4      7/6      6/7         3/1
Mackey-Glass time series prediction   1/1         2/2         3/3     7/—      6/6      5/4         4/5
Nonlinear system identification       1/1         2/4         4/7     6/6      7/5      5/2         3/3
Lorenz time series system             1/1         3/3         2/2     7/—      6/6      5/4         4/5
Effluent TP prediction in WWTP        1/2         2/2         5/5     6/6      7/7      4/4         3/3
Rank sum on mean                      5           11          17      34       33       25          17
Rank sum on Dev.                      7           14          22      —        30       21          17
Sum rank on all the problems          11          25          40      —        63       46          34
Final rank on all the problems        1           2           4       —        6        5           3

    CONCLUSION

In this paper, a SORBF neural network is presented to model uncertain nonlinear systems, and the network size and parameters are simultaneously optimized by the proposed DCIA algorithm. In addition, to overcome the tendency of other algorithms to fall into local optima, the distance concentration algorithm, which can increase the diversity of immune cells, is adopted. However, ensuring the stability of the network while adjusting the structure of the RBFNN is quite challenging; for this purpose, the information-oriented algorithm (IOA) is applied to identify which antibodies need to be updated. To increase the network prediction accuracy and reduce the computational burden, the sample with the most frequent occurrence of maximum error is used to regulate the parameters of the new neuron. Additionally, the convergence of DCIA-SORBF is demonstrated theoretically for practical application. Finally, the experimental results demonstrate that the proposed DCIA-SORBF algorithm is more effective in solving nonlinear learning problems. Moreover, the good potential of the proposed techniques in real-world applications is demonstrated by our simulation results over several benchmark problems and an engineering modeling task.

In addition, the parameters are highly correlated with the prediction results. In future research work, we will choose the best parameters adaptively according to different situations. At the same time, how to obtain accurate prediction results in the presence of noisy or interfering data is another direction and focus of our next research.
