
    Detection of Oscillations in Process Control Loops From Visual Image Space Using Deep Convolutional Networks

    2024-04-15 09:37:18
    IEEE/CAA Journal of Automatica Sinica, April 2024 issue

    Tao Wang, Qiming Chen, Xun Lang, Lei Xie, Peng Li, and Hongye Su

    Abstract—Oscillation detection has been a hot research topic in industry due to the high incidence of oscillatory loops and their negative impact on plant profitability. Although numerous automatic detection techniques have been proposed, most of them can only address part of the practical difficulties. An oscillation is heuristically defined as a visually apparent periodic variation. However, manual visual inspection is labor-intensive and prone to missed detections. Convolutional neural networks (CNNs), inspired by animal visual systems, have emerged with powerful feature extraction capabilities. In this work, an exploration of typical CNN models for visual oscillation detection is performed. Specifically, we tested the MobileNet-V1, ShuffleNet-V2, EfficientNet-B0, and GhostNet models, and found that such a visual framework is well-suited for oscillation detection. The feasibility and validity of this framework are verified using extensive numerical and industrial cases. Compared with state-of-the-art oscillation detectors, the suggested framework is more straightforward and more robust to noise and mean-nonstationarity. In addition, the framework generalizes well and is capable of handling features that are not present in the training data, such as multiple oscillations and outliers.

    I.INTRODUCTION

    OSCILLATION, which is the primary manifestation of deteriorating control performance, remains one of the most frequent problems in the process industries [1]–[3]. If not handled in time, it may lead to a decline in product quality, waste of raw materials and energy, and accelerated aging of equipment, which may directly impact the profitability and safety of the plant [4]–[6]. The removal of oscillations means less variability in process variables, resulting in stable economic benefits for the production process [7]. The prerequisite for removing oscillations is detecting them. However, a typical process plant is quite complex and may contain hundreds to thousands of control loops, so manual monitoring is costly and prone to missed detections [8]. An automatic strategy is therefore preferred.

    Many automatic oscillation detectors have been reported over the past 30 years. The first algorithm for oscillation detection, based on integral absolute errors (IAEs) between successive zero-crossings, was proposed by Thornhill et al. [9], [10]. In addition to this time-domain approach, Yan et al. [11] proposed a detector that extracts repetitive patterns from the time series based on the hidden Markov model. Owing to their periodic nature, oscillations may be more apparent in the frequency spectrum. In light of this, frequency-domain methods have been developed in succession. In an earlier study, Zhang et al. [12] developed a method based on the discrete Fourier transform and the Rayleigh distribution to detect multiple oscillations in the presence of mean-nonstationarity. Recently, a novel framework for detecting oscillations using both time- and frequency-domain knowledge was proposed by Ullah et al. [13].

    In general, the above methods are intuitive and easy to implement in engineering practice. However, most of them are susceptible to noise and disturbances. To tackle this challenge, methods based on the auto-covariance function (ACF) have been proposed. Miao and Seborg [14] first used the ACF for oscillation detection, defining the ACF decay ratio as the indicator. Later, Thornhill et al. [15] devised a metric of ACF zero-crossing regularity, which is capable of detecting multiple oscillations by combining the ACF with a band-pass filter. Following this work, Naghoosi and Huang [16] developed an improved technique that detects multiple oscillations directly without additional filtering.

    Although ACF-based methods are robust to noise, their ability to handle multiple oscillations and mean-nonstationarity is quite limited. Wavelet-domain and decomposition-based methods can effectively handle such complex plant data. On the basis of the wavelet transform (WT), a straightforward method for oscillation monitoring was presented by Naghoosi and Huang [17]. Subsequently, Bounoua et al. [18] proposed an improved empirical WT method based on detrended fluctuation analysis to accurately extract the oscillating modes. We note that methods based on wavelet analysis are more accurate, however, at the expense of more parametric degrees of freedom.

    Most decomposition-based methods were inspired by related time-frequency analysis techniques from the field of signal processing, such as empirical mode decomposition [19], intrinsic time-scale decomposition [20], variational mode decomposition [21], and local mean decomposition (LMD) [22]. These methods are highly adaptive and can effectively cope with multiple oscillations and mean-nonstationarity in nonlinear processes. However, they assume that the investigated signal satisfies strict separation conditions in the time-frequency domain, and are therefore susceptible to mode mixing and low resolution in practical applications [4], [23].

    In addition to the methods reviewed above, some emerging techniques, such as linear predictive coding [24] and machine learning [25]–[28], have recently been introduced. Note that the ease of implementation of these methods remains to be further investigated.

    In summary, all the methods reviewed above can only address part of the practical difficulties, since most of them are rule-based [28]. Therefore, there is a need for a method that is not bound by specific rules when applied to control loops with various data features. Here, we revisit the definition of oscillation. A widely accepted definition was given by Horch [29], who described an oscillation as a periodic variation that is not completely hidden in noise, in other words, visible to the human eye [7], [30]. This definition suggests that loops containing oscillatory behavior are visually apparent. However, no work has yet explored oscillation detection at the visual level following this definition. Although manual visual inspection provides highly intuitive detection results, it is not recommended for engineering applications due to its labor-intensive nature.

    Convolutional neural networks (CNNs), a representative of deep learning, provide a way to detect oscillations from a visual perspective, since they have proven successful in computer vision tasks [31]–[33]. Given this, a framework based on CNN models is explored, featuring the following steps. First, artificially generated process data are preprocessed in two stages, namely imaging and normalization. Following that, the preprocessed data are used to train four typical CNN models: MobileNet-V1 [34], ShuffleNet-V2 [35], EfficientNet-B0 [36], and GhostNet [37]. Finally, these trained models are used to carry out oscillation detection. The main contributions and advantages of this work are summarized as follows.

    1) A framework for visual oscillation detection using typical CNNs is proposed and explored.

    2) Due to the capabilities of powerful feature extraction of CNNs, the proposed framework can effectively handle multiple and time-varying oscillations in the presence of noise and mean-nonstationarity.

    3) The CNNs used are all typical networks from the field of deep learning and are easy to implement.

    4) The framework can be updated to process new oscillation problems with additional training data.

    The rest of this paper is organized as follows. Section II presents the details of the generation of artificial data and the structure of CNNs. Then, the process of using CNNs to detect oscillations visually is elaborated in Section III. In Section IV, the effectiveness of the proposed framework is demonstrated by representative numerical experiments. Section V discusses the limitations of the framework and motivates future work by visualizing the detection results. Finally, conclusions are drawn in Section VI.

    II.PRELIMINARIES

    The effective execution of the proposed framework relies on two crucial prerequisites: the training data and the construction of the CNN models.In this section, their technical details will be introduced separately.

    A. Generation of Artificial Data

    The availability of big data and advancements in hardware are the main reasons for the success of CNNs [32]. However, confidentiality and strategic concerns make industrial data (especially data containing fault information) difficult to obtain in large quantities. Even when data are available, numerous issues remain, e.g., data cleaning and the time-consuming task of data labeling. An alternative and successful solution, artificial data, was proposed by Dambros et al. [28]. They pointed out that data generated for oscillation detection should obey the following rules: 1) The artificial data must be as similar as possible to industrial (oscillation) data; 2) The artificial data must include examples from processes with different dynamics, configurations, and characteristics.

    Accordingly, the generated data should have the following features: 1) Oscillatory and nonoscillatory examples of different lengths; 2) Noise and disturbances of different amplitudes; 3) Oscillatory time series with different numbers of oscillation periods; 4) Oscillatory time series with different waveforms, i.e., sine, triangle, and square waves; 5) Waveforms smoothed with different intensities, approximating oscillatory time series filtered by the process; 6) A portion of the oscillatory time series being time-varying with different intensities.

    Dambros et al. [28] defined a set of variables obeying various distributions for generating artificial data, which are listed in Table I. The value of each variable is a random number drawn from the probability density function of the corresponding distribution. More details can be found in [28].

    As shown in Table I, both the noise and the disturbance originate from Gaussian white noise, and their magnitudes are determined by Nvar and Damp, respectively. In addition, to mimic the industrial situation, both the disturbance and the oscillation need to be smoothed. More specifically, the disturbance is smoothed by the following transfer function:

    TABLE I THE VARIABLES USED TO GENERATE ARTIFICIAL DATA

    while the oscillatory component is smoothed by the following transfer function:

    The amplitudes of all oscillatory time series are set to 1 to ensure the validity of the variables Nvar and Damp. For a frequency-invariant oscillation, the frequency f0 is determined by the variables SL and Nper. In contrast, the frequency series f(t) of a time-varying oscillation is calculated from f0 and Fcf. The detailed procedure of artificial data generation is given in Algorithm 1.

    B. Convolutional Neural Networks

    A CNN is a type of network with a multilayer structure inspired by the animal visual system [38], [39]. It can fully exploit spatial or temporal correlation in data and is considered one of the best learning algorithms for understanding image content [32]. Numerous variants of CNN architectures have been proposed in the past; however, their primary components are very similar [40]. Typically, a CNN consists of four main components, namely convolutional layers, pooling layers, fully connected layers, and activation functions. Fig. 1 visualizes the general architecture for image classification.

    CNNs have remarkable attributes such as automatic feature extraction, hierarchical learning, and weight sharing, of which the critical step is convolution [32]. A convolutional layer is composed of a set of convolutional kernels that generate various feature maps. However, the feature map after convolution contains diverse features, which tends to cause overfitting. In view of this, the pooling layer is used to reduce the dimensionality of the feature map and prevent data redundancy. Furthermore, nonlinearity in CNNs is realized through the activation functions, which help the network represent complex features. Finally, high-level reasoning generally relies on fully connected layers, which are usually placed at the end of the network.

    Algorithm 1 Generation of Artificial Data
    Input: SL, Nvar, Damp, Oprob, Nper, Owf, SF, VFprob, Fcf
    Output: the generated oscillation series Ots
    1: Initialize Ots to an empty cell;
    2: t = [0, 1, ..., SL - 1];
    3: Generate two zero-mean Gaussian white noise series Dnoise and Ddisturbance of length SL and variance Nvar independently;
    4: Dadisturbance is obtained by smoothing Ddisturbance with (1);
    5: Dbdisturbance = (Damp / (max(Dadisturbance) - min(Dadisturbance))) · Dadisturbance;
    6: if Oprob = 1 then
    7:   f0 = 1/(SL/Nper);
    8:   if VFprob = 1 then
    9:     The time-varying frequency f(t) is obtained by the method of [28];
    10:    Od(t) = sin(2π f(t)·t) if Owf = 0; Od(t) = square(2π f(t)·t) if Owf = 1; Od(t) = sawtooth(2π f(t)·t) if Owf = 2;
    11:    OSFd is obtained by smoothing Od with (2);
    12:    OSFd = OSFd · (1/(max(OSFd) - min(OSFd)));
    13:    Ots = Dnoise + Dbdisturbance + OSFd;
    14:  else
    15:    Od(t) = sin(2π f0 t) if Owf = 0; Od(t) = square(2π f0 t) if Owf = 1; Od(t) = sawtooth(2π f0 t) if Owf = 2;
    16:    OSFd is obtained by smoothing Od with (2);
    17:    OSFd = OSFd · (1/(max(OSFd) - min(OSFd)));
    18:    Ots = Dnoise + Dbdisturbance + OSFd;
    19:  end if
    20: else
    21:   Ots = Dnoise + Dbdisturbance;
    22: end if
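    The constant-frequency branch of Algorithm 1 can be sketched in Python as follows. This is a minimal sketch, not the authors' code: the smoothing transfer functions (1) and (2) are approximated here by a simple moving average, the square and sawtooth waves are built from numpy primitives, and the time-varying branch (VFprob = 1) is omitted.

```python
import numpy as np

def moving_average(x, w):
    """Crude stand-in for the smoothing transfer functions (1) and (2)."""
    if w <= 1:
        return x
    return np.convolve(x, np.ones(w) / w, mode="same")

def generate_series(SL, Nvar, Damp, Oprob, Nper, Owf, SF, rng=None):
    """Constant-frequency variant of Algorithm 1 (VFprob = 0 assumed)."""
    if rng is None:
        rng = np.random.default_rng(0)
    t = np.arange(SL)
    D_noise = rng.normal(0.0, np.sqrt(Nvar), SL)                     # step 3
    D_dist = moving_average(rng.normal(0.0, np.sqrt(Nvar), SL), SF)  # step 4
    D_dist = Damp / (D_dist.max() - D_dist.min()) * D_dist           # step 5
    if Oprob != 1:
        return D_noise + D_dist                                      # step 21
    f0 = 1.0 / (SL / Nper)                                           # step 7
    if Owf == 0:
        Od = np.sin(2 * np.pi * f0 * t)
    elif Owf == 1:
        Od = np.sign(np.sin(2 * np.pi * f0 * t))                     # square wave
    else:
        Od = 2 * ((f0 * t) % 1.0) - 1.0                              # sawtooth wave
    Od = moving_average(Od, SF)                                      # step 16
    Od = Od / (Od.max() - Od.min())                                  # unit amplitude, step 17
    return D_noise + D_dist + Od                                     # step 18
```

    Calling `generate_series(1000, 0.01, 0.1, 1, 10, 0, 5)` yields a 1000-point sine oscillation with 10 periods plus noise and a smoothed disturbance.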

    III.METHODOLOGY

    In this section, we focus on how to implement the detection of single-loop oscillations from a visual perspective. The general framework of the proposed solution is shown in Fig. 2. As depicted, it consists of three parts, i.e., data preprocessing, construction of the typical CNN models, and finally, detection of oscillations.

    A. Data Preprocessing

    The data preprocessing is straightforward and consists of only two steps: process data imaging and pixel matrix normalization. For data imaging, the “plot” function of the MATLAB platform was used for simple imaging of the sequences. The following points need to be satisfied when imaging the data:

    1) Distinguishable features can still be observed when the number of data points or oscillation periods of the time series is large.

    2) The original structure of the data cannot be destroyed.

    3) The imaging resolution should match the performance of the experimental equipment, otherwise it may degrade the training speed of the model.

    Fig.1.A general CNN architecture for image classification.

    Fig.2.General framework of the visual methods.

    Given the above, an appropriate image resolution of 200 × 2400 (height × width) was selected experimentally.
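    As a rough illustration of the imaging step (the paper uses MATLAB's “plot”; the sketch below is an assumption, simply rasterizing a series onto a 200 × 2400 pixel grid to show how a time series becomes an image of the stated resolution):

```python
import numpy as np

def rasterize(series, height=200, width=2400):
    """Map a 1-D series onto a height x width binary image (255 = curve pixel)."""
    series = np.asarray(series, dtype=float)
    # horizontal position: spread the samples evenly over the image width
    cols = np.linspace(0, width - 1, series.size).astype(int)
    # vertical position: min-max scale; row 0 is the top of the image
    lo, hi = series.min(), series.max()
    rows = ((1.0 - (series - lo) / (hi - lo + 1e-12)) * (height - 1)).astype(int)
    img = np.zeros((height, width), dtype=np.uint8)
    img[rows, cols] = 255
    return img
```

    Unlike a plotting library, this rasterizer does no anti-aliasing or line interpolation, but it preserves the curve shape that the CNN consumes.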

    With respect to the normalization step, image pixels taking values between 0 and 255 are not suitable for CNN training, since CNNs are generally trained with small weight values. When the training data take large integer values, training slows down and overfitting becomes more likely. To tackle this issue, we normalized the pixel matrix using the following operation:

    where X(i) denotes the i-th input sample and X̂(i) represents the corresponding normalized result.
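    The normalization equation itself is not reproduced in this copy; a common choice consistent with the text (mapping pixel values from [0, 255] to small values) is division by 255, sketched below. The /255 constant is an assumption, not taken from the paper.

```python
import numpy as np

def normalize_pixels(X):
    """Scale an 8-bit pixel matrix from [0, 255] down to [0, 1] for CNN training."""
    return np.asarray(X, dtype=np.float32) / 255.0
```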

    B. Models for Oscillation Detection

    Although oscillation detection only requires distinguishing a few classes of time series, it demands high image resolution. Therefore, if the chosen CNNs are highly complex, the computational cost becomes the main limitation. In contrast, lightweight CNNs show lower complexity and faster convergence, and are thus more suitable for oscillation detection. In view of this, we explore the feasibility of four typical lightweight CNNs for oscillation detection, namely MobileNet-V1 [34], ShuffleNet-V2 [35], EfficientNet-B0 [36], and GhostNet [37]. We note that the selected CNNs are among the most popular neural networks available and their code is open-source. A brief description of these networks follows.

    MobileNet-V1: MobileNet-V1 was proposed by Howard et al. [34] to balance the accuracy and complexity of deep learning models. It is mainly constructed from depthwise separable convolution blocks, each consisting of a depthwise convolution and a 1 × 1 convolution. Compared with standard convolution, depthwise separable convolution requires less computation and fewer parameters.
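    The parameter saving of the depthwise separable block can be checked directly: a standard Dk × Dk convolution with M input and N output channels needs Dk·Dk·M·N weights, while the depthwise-plus-pointwise pair needs Dk·Dk·M + M·N (bias terms omitted; a counting sketch, not MobileNet's layer code):

```python
def standard_conv_params(dk, m, n):
    """Weights of a dk x dk standard convolution, M -> N channels (no bias)."""
    return dk * dk * m * n

def separable_conv_params(dk, m, n):
    """Depthwise dk x dk on M channels plus a 1 x 1 pointwise M -> N conv."""
    return dk * dk * m + m * n
```

    For a 3 × 3 layer with 128 input and 128 output channels this gives 147 456 versus 17 536 weights, roughly an 8× reduction.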

    ShuffleNet-V2: Ma et al. [35] presented a more efficient network, ShuffleNet-V2, whose main body consists of two units that use “channel split” and “channel shuffle” operators to reduce network fragmentation. Additionally, element-wise operations are removed in each unit, thereby speeding up the computation of the model.

    EfficientNet-B0: To balance the scale and performance of the network, Tan and Le [36] designed a baseline network (EfficientNet-B0) using neural architecture search and then scaled it with their proposed compound coefficient. This compound coefficient scales the depth and width of the network according to predefined principles. Considering the requirement of fast and efficient detection, EfficientNet-B0 is chosen as one of our experimental models.

    GhostNet: The feature extraction process of a CNN generates a large number of feature maps, many of which are similar. Therefore, Han et al. [37] argued that redundancy in feature maps may be essential to a successful CNN. To this end, the Ghost module was designed to generate more feature maps through simple linear operations.

    The overall architectures of MobileNet-V1, ShuffleNet-V2, EfficientNet-B0, and GhostNet are provided in the Supplementary material (available at https://github.com/2681704096/Supplementary-materials-for-the-paper.git) for more detailed information.

    C. Oscillation Detection

    After building the CNN models, the next step is to train them with artificial data for oscillation detection. Training a CNN involves two stages: forward propagation and backward propagation. Specifically, the main goal of the forward propagation stage is to represent the input image with the parameters (weights and biases) of each layer. The forward output is then used to calculate the loss with respect to the ground-truth labels. Based on this loss, the backward propagation stage calculates the gradient of each parameter using the chain rule. All parameters are then updated according to their gradients and used in the subsequent forward pass.

    Mathematically, given a training set {(X(i), y(i)) | i = 1, 2, ..., N}, where X(i) denotes the i-th input sample, N denotes the number of samples in a batch, and y(i) is the label of X(i) (using one-hot encoding), the output of the input sample after a series of linear and nonlinear operations (before the softmax operation) can be calculated as

    where F(·) denotes the series of linear and nonlinear operations, and K denotes the number of categories. The final step of forward propagation is the softmax classification operation, which converts the output of the neurons into a probability distribution over classes. Concretely, the prediction result transformed by softmax is expressed as

    The loss function plays the role of connecting forward propagation and backward propagation.Here, we selected the softmax categorical cross-entropy loss as the loss function, which is defined as

    where θ denotes all relevant parameters used to construct the model (e.g., weight vectors and bias terms).
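    The softmax and cross-entropy computations described above can be sketched in numpy as follows (a generic implementation of the standard formulas, not the authors' code; the max-subtraction is a common numerical-stability trick):

```python
import numpy as np

def softmax(z):
    """Row-wise softmax: convert N x K logits into class probabilities."""
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, y_onehot, eps=1e-12):
    """Mean categorical cross-entropy over a batch with one-hot labels."""
    return -np.mean(np.sum(y_onehot * np.log(probs + eps), axis=1))
```

    For a single sample with logits [2.0, 0.5, 0.1] and true class 0, the loss is simply -log of the first softmax probability.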

    The parameter update follows the gradient direction, which aims to reduce the value of the loss function. For this purpose, gradient descent optimization algorithms are commonly used to quickly find the descent direction. In this work, the stochastic gradient descent method with momentum is adopted as the optimizer because of its competitive generality [41]. It aggregates the velocity vectors in relevant directions so that the current gradient update depends on the historical batches. The gradient update is mathematically defined as

    where γ and ηt denote the momentum term (γ is usually set to 0.9) and the learning rate, respectively, and (X(t), y(t)) denotes a randomly picked sample. In practice, each parameter update is computed over a mini-batch rather than a single sample.
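    The momentum update can be illustrated with a toy example: the velocity v accumulates past gradients with factor γ, and the parameters move against it. This is a generic sketch of SGD with momentum (minimizing f(θ) = θ² here, not the paper's training loop):

```python
def sgd_momentum_step(theta, v, grad, lr=0.001, gamma=0.9):
    """One SGD-with-momentum update: v <- gamma*v + lr*grad, theta <- theta - v."""
    v = gamma * v + lr * grad
    return theta - v, v

# toy usage: minimize f(theta) = theta^2, whose gradient is 2*theta
theta, v = 5.0, 0.0
for _ in range(2000):
    theta, v = sgd_momentum_step(theta, v, grad=2 * theta, lr=0.01)
```

    After the loop, theta has converged close to the minimizer at 0; the momentum term lets the iterate keep moving through small gradients instead of stalling.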

    After sufficient iterations of the forward and backward propagation stages, the learning of the network can be stopped. Next, oscillation detection is achieved by feeding the preprocessed samples into the trained models. Concretely, a test sample Xtest is fed into the trained model, whose output Mo = [p1, p2, ..., pj, ..., pK] is obtained by combining (4) with (5), where Mo is a vector containing the predicted probabilities of each class. The class to which the sample belongs is then determined by the index IV of the maximum probability, as given by

    IV.EXPERIMENTS AND RESULTS ANALYSIS

    This section presents more information about the artificial and industrial datasets used for the experiments. Then, several metrics used to evaluate the detection performance of all investigated CNNs are introduced. Finally, we give specific details on the implementation of the experiments.

    A. Data Sets

    1) Artificial Data: The artificial dataset was proposed by Dambros et al. [28] in 2019 (see Section II-A for details of data generation), and its download address can be found in [42]. The dataset was generated by simulating the characteristics of industrial data and contains three classes: non-oscillation, regular oscillation, and irregular oscillation. The dataset contains 120 000 samples, with data lengths ranging from 200 to 24 064 points. However, this work selected only a random sample of 10 000 from the dataset because of its massive sample size and the limited performance of the experimental equipment. We also note that a small sample set better reflects the advantages of the proposed framework. Specifically, eighty percent of the selected samples were used as the training set, and the rest were used as the test set. Some examples from the dataset are shown in Fig. 3.

    2) Industrial Data: A benchmark dataset (ISDB) for oscillation detection and diagnosis was published by Jelali and Huang [43]. The data were collected from different industries such as commercial construction, chemical, pulp and paper mills, power plants, mining, and metal processing. The dataset contains controller output and process output measurements for 93 control loops, many of which are affected by noise, nonstationary trends, or other disturbances and anomalies. The oscillatory time series exhibit multiple, intermittent, and time-varying properties. Additionally, the dataset contains clearly differentiated sequences, ranging in length from 200 to 277 115 points and in amplitude from 0.0184 to 2303.4. Note that a portion of the data in this dataset is not explicitly labeled, but accurate labels are required for the subsequent experiments. Therefore, we labeled 64 of these closed-loop datasets with reference to the literature [44] to ensure the accuracy of the labels, while the remaining part was discarded due to poor recognizability. The corresponding labeling results are attached in the Supplementary material. In addition, some representative examples of the dataset are shown in Fig. 4.

    Fig.3.Some simple examples on the artificial dataset.

    Fig.4.Some simple examples on the industrial dataset.

    Furthermore, to ensure that detection was not influenced by the source of the time series, the magnitude of all time series was normalized to 1.

    B. Evaluation Metrics

    In the field of deep learning, quantitative metrics are commonly used to evaluate the performance of methods [45]. Inspired by this, this work evaluates the effectiveness of the CNNs using the following metrics:

    where P and R denote precision and recall, and F1 is a composite measure of P and R. As shown in Table II, TP, TN, FP, and FN denote the number of true positive, true negative, false positive, and false negative results, respectively, which are related to the confusion matrix. Since the three metrics above measure individual categories, overall measures are also desired in this work. To this end, four metrics, i.e., overall accuracy (OA), average precision (AP), average recall (AR), and average F1 score (AF1), were used, where OA indicates the number of correctly detected samples as a proportion of the total number of samples tested. AP, AR, and AF1 denote the averages of the P, R, and F1 indicators over all classes, respectively.
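    These metrics can be computed from a K × K confusion matrix as sketched below (a generic implementation; the orientation rows = actual class, columns = predicted class is an assumption, since Table II is not reproduced here):

```python
import numpy as np

def classification_metrics(cm):
    """Per-class P/R/F1 plus OA, AP, AR, AF1 from a K x K confusion matrix."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                                     # correct predictions per class
    precision = tp / np.maximum(cm.sum(axis=0), 1e-12)   # column sums = predicted counts
    recall = tp / np.maximum(cm.sum(axis=1), 1e-12)      # row sums = actual counts
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    oa = tp.sum() / cm.sum()                             # overall accuracy
    return precision, recall, f1, oa, precision.mean(), recall.mean(), f1.mean()
```

    For a 2-class matrix [[8, 2], [1, 9]], OA is 17/20 = 0.85, and class-0 recall is 8/10 = 0.8.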

    TABLE II CONFUSION MATRIX

    In addition, the detection rate (DR) and average detection time (ADT) were used to check the reliability of the methods. Specifically, DR is defined as the number of samples detected by the techniques as a percentage of the total number of samples tested. ADT is defined as the average detection time required for each tested sample (including the preprocessing time).

    C. Implementation Details

    The Keras library with the TensorFlow backend was used to build the CNN models, and Table III lists the corresponding number of learnable parameters (LPs) for each model. With respect to the experimental hyperparameters, an optimal set of hyperparameters enables the best performance of the models. However, the corresponding optimization search process often takes considerable time, which does not meet the requirements of practical applications. With this consideration, some empirical but widely used hyperparameters were adopted to train the models. Specifically, the models were trained with an initial learning rate of 0.001, which was scaled down by a factor of 10 when the loss value no longer decreased after 16 epochs. The number of epochs was set to 100, and the batch size was set to 4. Moreover, all experiments were repeated five times and the results averaged to reduce randomness. Our code was run on a computer with an Intel Pentium G4600 CPU at 3.60 GHz, 8 GB of RAM, and an NVIDIA GeForce GTX 1050 Ti 4 GB GPU.
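    The learning-rate schedule described (start at 0.001, divide by 10 when the loss has not improved for 16 epochs) matches the behavior of Keras's ReduceLROnPlateau callback; a framework-free sketch of that plateau logic (an illustration, not the authors' code):

```python
class PlateauLR:
    """Divide the learning rate by `factor` when the loss stalls for `patience` epochs."""
    def __init__(self, lr=0.001, factor=0.1, patience=16):
        self.lr, self.factor, self.patience = lr, factor, patience
        self.best = float("inf")  # best loss seen so far
        self.wait = 0             # epochs since the last improvement

    def step(self, loss):
        if loss < self.best:
            self.best, self.wait = loss, 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.lr *= self.factor
                self.wait = 0
        return self.lr
```

    Feeding it one improving epoch followed by 16 stalled epochs drops the rate from 0.001 to 0.0001, matching the schedule in the text.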

    TABLE III NUMBER OF LEARNABLE PARAMETERS

    Fig.5.Comparison of sharpness of different resolution images.

    D. Performance Evaluation on Artificial Data

    A series of numerical experiments was carried out on the artificial data. First, the resolution used for data imaging was determined experimentally. Second, the impact of different data lengths on the performance of the visual framework was explored. Third, the lowest signal-to-noise ratio (SNR) at which the visual framework maintains good performance was investigated. Finally, a preliminary evaluation of the detection performance of the four selected CNNs was performed.

    1) Ablation Experiments for Resolution: The inherent characteristics of the data are difficult to detect if the sharpness after imaging is insufficient. To observe the difference, we imaged representative noise data of appropriate length at different resolutions, as shown in Fig. 5, where the local imaging effect of the noise sequence is magnified. As depicted, the sharpness decreases gradually with decreasing image resolution. In particular, the images with resolutions of 32 × 384 and 64 × 768 retain few usable visual features. Moreover, we also analyzed the difference in detection performance of MobileNet-V1 on images of different resolutions, as shown in Fig. 6. It is observed that the detection performance improves with resolution (here, higher OA and lower loss mean better performance). However, as the resolution increases, the growth in detection performance gradually slows. In view of this, after balancing the accuracy and speed of detection, we set the imaging resolution to 200 × 2400.

    Fig.6.Performance differences of MobileNet-V1 with different resolution images.

    2) Ablation Experiments on Data Length: Data of different lengths imaged at the same resolution produce distinct features, which may affect the performance of the visual methods. Given this, we conducted experiments to evaluate the detection performance as a function of the data length. It should be noted that the visual detectability of an oscillation is coupled with its data length and frequency. Therefore, a suitable oscillation frequency needs to be selected before conducting the data length study. To this end, we conducted an extensive review of the relevant literature and found that most industrial oscillations repeat at a rate of less than 30 cycles per 1000 sampling points [2], [42], [43]. On this basis, we took the highest signal frequency (i.e., the smallest oscillation period: 1000/30 samples) to explore the performance limits of the visual framework. Specifically, when generating oscillation data of different lengths, the corresponding variables of noise variance (Nvar), disturbance amplitude (Damp), waveform (Owf), and smoothing factor (SF) were also set randomly, with distributions as listed in Table I. In this experiment, the minimum and maximum data lengths were set to 200 and 10 000, respectively, based on the observation that most industrial oscillations in the relevant industrial datasets are hundreds to thousands of samples long [28], [43].

    More specifically, we evaluated the performance of the visual framework at lengths of 200, 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, and 10 000, with OA as the quantitative metric. The experimental results are presented in Fig. 7, where each listed value is the average of 2000 independent repetitions of the experiment. As illustrated, the performance of all investigated visual methods remains consistently high for most data lengths. However, their performance tends to decrease once the data length exceeds 8000. We note that this shortcoming has little impact on the practical application of the visual framework for two reasons: a) Due to the slow nature of industrial processes (low sampling rates), only a few real-world cases have data lengths exceeding 7000; b) The oscillation frequency in this experiment was set to the highest frequency investigated.

    3) Ablation Experiments for SNR: The noise resistance of the proposed framework directly affects its effectiveness in detecting industrial oscillations in the presence of noise artifacts. Hence, this experiment explores the lowest SNR at which the visual framework maintains good performance. Similar to the data-length experiment, the data used here were randomly generated by the artificial data generation algorithm. The corresponding variables were signal length (SL), number of periods (Nper), waveform (Owf), and smoothing factor (SF), with distributions as listed in Table I. Specifically, we evaluated the performance of the visual framework at SNRs of 10, 8, 6, 4, 2, 0, -1, -2, -3, -4, and -5, with OA as the quantitative metric. The intervals of the SNRs were set this way because when SNR > 0, the performance of the four visual methods does not change significantly, so the interval was set to 2; in contrast, when SNR < 0, the performance changes become progressively significant, so the interval was set to 1.
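    For SNR experiments like this one, the noise power must be set relative to the signal power to hit a target SNR in decibels. A sketch of that scaling, assuming the standard definition SNR = 10·log10(Ps/Pn) (the paper does not state its exact SNR convention):

```python
import numpy as np

def add_noise_at_snr(signal, snr_db, rng=None):
    """Return signal plus white Gaussian noise scaled to the requested SNR in dB."""
    if rng is None:
        rng = np.random.default_rng(0)
    p_signal = np.mean(signal ** 2)                 # empirical signal power
    p_noise = p_signal / (10 ** (snr_db / 10))      # noise power for the target SNR
    noise = rng.normal(0.0, np.sqrt(p_noise), signal.shape)
    return signal + noise
```

    Measuring the realized SNR of the returned series recovers the requested value up to sampling error in the noise power estimate.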

    The experimental results are shown in Fig. 8, where each listed value is the average over 2000 independent repetitions of the experiment. From the figure we can make the following observations: a) In the high-SNR region (typically SNR ≥ 4), all four visual methods maintain a relatively high level of detection performance; b) In the low-SNR region (typically SNR ≤ 2), the performance of the studied visual methods decreases with decreasing SNR. In particular, below SNR = 0, their accuracy falls under 90%; c) For all four visual methods, a pronounced downward trend is observed after SNR = -2, implying that SNR = -2 may mark the onset of a sharp deterioration in detection performance. Finally, we note that the lowest SNR at which the visual framework maintains good detection performance depends on a manually set threshold. For example, if a detection accuracy of 90% or below is considered unsatisfactory, the lowest acceptable SNR would be 2.

    Fig. 8. The performance trend of the visual methods at different SNRs.

    Fig. 9. Performance of the models in each class measured by P, R, and F1 (the closer the result is to 100%, the better).

    4) Performance Reported on the Artificial Dataset: Here, the deep feedforward network (DFN) proposed by Dambros et al. [28] as an oscillation detector was used for comparison, to demonstrate the effectiveness of the visual framework with the four selected CNNs. The performance of the methods in each class was measured by three metrics, as shown in Fig. 9. As illustrated, the significant gap between the P and R values of DFN for the non-oscillation class reflects the fact that DFN is more sensitive to noise and disturbances: as disruptions increase, DFN misclassifies oscillatory time series as non-oscillatory ones. The F1 results in Fig. 9(c) consistently support this analysis. In contrast, the four selected CNN models show better detection performance for every class, especially for regular oscillations. These results verify that the visual methods extract features from the data more effectively and better suppress the interference caused by noise and disturbances.
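The per-class P, R, and F1 values follow directly from true/false positive counts. A self-contained sketch, using a hypothetical 3-class labelling (0 = non-oscillation, 1 = regular oscillation, 2 = irregular oscillation; the class indexing is illustrative, not the paper's):

```python
import numpy as np

def per_class_prf(y_true, y_pred, n_classes):
    """Return {class: (precision, recall, F1)} from integer label arrays."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    out = {}
    for c in range(n_classes):
        tp = int(np.sum((y_pred == c) & (y_true == c)))
        fp = int(np.sum((y_pred == c) & (y_true != c)))
        fn = int(np.sum((y_pred != c) & (y_true == c)))
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * p * r / (p + r) if p + r else 0.0
        out[c] = (p, r, f1)
    return out

# a toy example: one non-oscillation sample misread as oscillatory
scores = per_class_prf([0, 0, 0, 1, 1, 2], [0, 1, 0, 1, 1, 2], 3)
```

A P much larger than R for one class (as observed for DFN on the non-oscillation class) means that class is rarely over-predicted but often missed, i.e. its samples leak into other classes.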

    The overall measure evaluates the overall detection ability of the methods. The corresponding results of the four visual methods and DFN are reported in Table IV (mean and standard deviation taken over five experiments). In general, the detection performance of the visual methods is 6 to 7 percentage points higher than that of DFN, demonstrating the good generalization capability of the proposed visual framework.

    It should be noted that all performance evaluations presented above used the hold-out method. Therefore, to attenuate the interference introduced by the way the dataset is divided, a 5-fold cross-validation experiment was carried out to further demonstrate the generalization of the visual methods. The experiment was conducted on the artificial dataset, with the results listed in Table V. As shown, the performance differences among the visual methods are not significant, but all of them are significantly better than DFN. These cross-validation results are generally consistent with those obtained using the hold-out method.
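The 5-fold protocol itself can be sketched without any deep learning framework; here a toy nearest-centroid classifier stands in for the CNNs, and all names and data are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

def kfold_cv(X, y, k=5, seed=0):
    """Mean/std accuracy of a nearest-centroid classifier over k folds."""
    folds = np.array_split(np.random.default_rng(seed).permutation(len(y)), k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # fit: one centroid per class from the training split only
        centroids = {c: X[train][y[train] == c].mean(axis=0)
                     for c in np.unique(y[train])}
        classes = np.array(sorted(centroids))
        dists = np.stack([np.linalg.norm(X[test] - centroids[c], axis=1)
                          for c in classes], axis=1)
        accs.append(np.mean(classes[dists.argmin(axis=1)] == y[test]))
    return float(np.mean(accs)), float(np.std(accs))

# two well-separated classes -> near-perfect cross-validated accuracy
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(10, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
mean_acc, std_acc = kfold_cv(X, y)
```

Reporting the mean and standard deviation over the k folds, as Table V does, makes the evaluation less dependent on any single train/test split.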

    TABLE IV OVERALL MEASUREMENT REPORT OF THE METHODS ON THE ARTIFICIAL DATASET (HOLD-OUT METHOD)

    TABLE V OVERALL MEASUREMENT REPORT OF THE METHODS ON THE ARTIFICIAL DATASET (CROSS-VALIDATION METHOD)

    We highlight that artificial data alone cannot fully verify the validity of the proposed framework; therefore, several industrial cases are investigated below for further illustration.

    E. Comparison of the Methods on Industrial Data

    In addition to DFN, we also implemented several commonly used or state-of-the-art methods for oscillation detection in this comparative experiment. The overall performance metrics of the methods are listed in Table VI. As shown, both the ACF ratio and the ACF zero-crossings regularity methods fail to detect oscillations in some cases. This result is expected because these methods are based on specific rules; when no such rules hold in the data, they fail. The improved local mean decomposition (LMD), fast adaptive chirp mode decomposition (FACMD), and DFN successfully process all tested data; however, their AF1 values are relatively low, indicating that fewer samples are correctly detected. Across the evaluation results in Table VI, the explored visual methods consistently outperform the other methods in detection performance.
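For context, the rule behind the ACF zero-crossings method is the regularity of the intervals between zero crossings of the autocorrelation function: regular intervals imply a dominant period. A simplified sketch of that idea (the r > 1 decision threshold is the one commonly used in the literature, not taken from this paper, and implementation details are illustrative):

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation of x up to max_lag (lag 0 normalized to 1)."""
    x = np.asarray(x, float) - np.mean(x)
    denom = float(np.dot(x, x))
    return np.array([np.dot(x[: len(x) - k], x[k:]) / denom
                     for k in range(max_lag + 1)])

def zero_crossing_regularity(x, max_lag=400):
    """Regularity statistic r of ACF zero-crossing intervals;
    r > 1 is the usual indication of an oscillation."""
    a = acf(x, max_lag)
    crossings = np.where(np.diff(np.sign(a)) != 0)[0]
    if len(crossings) < 3:
        return 0.0                      # too few crossings to judge
    intervals = np.diff(crossings)      # ~ half-periods of the oscillation
    sigma = np.std(intervals)
    return float(np.mean(intervals) / (3.0 * sigma)) if sigma > 0 else np.inf

t = np.arange(2000)
r_osc = zero_crossing_regularity(np.sin(2 * np.pi * t / 40.0))
```

When noise or nonstationarity disturbs the crossing pattern, the interval variance grows and r drops below the threshold, which is exactly the rule-breaking failure mode discussed above.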

    TABLE VI OVERALL MEASUREMENT REPORT OF THE METHODS ON THE INDUSTRIAL DATASET

    Fig. 10. Confusion matrices for each method on the industrial dataset.

    It is noteworthy that all comparison methods except the visual ones exhibit a significant difference between their AP and AR values, which leads to lower AF1 values. To further analyze the cause of this observation, we introduced the confusion matrix. The confusion matrices for all surveyed methods are shown in Fig. 10, from which we observe that the ACF ratio, FACMD, and DFN tend to identify non-oscillatory data as oscillatory, whereas the ACF zero-crossings regularity and the improved LMD tend to identify oscillations as non-oscillatory. These factors contribute to the lower AP, AR, and AF1 values of these methods. In comparison, all visual methods demonstrate high accuracy and strong robustness in detecting oscillations in the presence of noise and mean-nonstationarity.
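The confusion matrix read off in Fig. 10 is simply a count table cm[i][j] of samples of true class i predicted as class j. A minimal sketch, with a hypothetical binary labelling (0 = non-oscillation, 1 = oscillation):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """cm[i, j] = number of samples of true class i predicted as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# a detector that flags noisy non-oscillatory data as oscillatory
# piles counts into the off-diagonal cell cm[0, 1]
cm = confusion_matrix([0, 0, 0, 1, 1], [0, 1, 1, 1, 1], 2)
```

Row-normalizing cm gives per-class recall and column-normalizing gives per-class precision, which is why a lopsided off-diagonal shows up as the AP/AR gap discussed above.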

    Detection speed determines whether a method is viable in practical applications. Therefore, the detection speed of each method was compared using ADT, with the results listed in Table VII. As depicted, the improved LMD and FACMD are significantly slower since they are decomposition-based techniques. Although the ACF zero-crossings regularity method has the fastest detection speed, its detection performance (DR) needs improvement. Furthermore, we highlight that the detection speed of the compared methods depends on the sequence length, with longer sequences leading to slower detection. In contrast, the visual methods are not influenced by the number of data points. As expected, the visual framework is therefore quite competitive in detection speed when many data points are involved.
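The length-independence noted above stems from rasterizing the series onto a fixed-resolution image before the CNN sees it, so the network's input size never grows with the number of samples. A minimal numpy sketch; the resolution and pixel mapping are illustrative assumptions, and the paper's actual plotting pipeline may differ:

```python
import numpy as np

def series_to_image(x, height=64, width=64):
    """Map a 1-D series of any length onto a fixed binary pixel grid,
    so downstream CNN cost is independent of the series length."""
    x = np.asarray(x, float)
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)    # amplitude -> [0, 1]
    cols = np.linspace(0, width - 1, num=len(x)).astype(int)
    rows = ((1.0 - x) * (height - 1)).astype(int)      # row 0 = top of plot
    img = np.zeros((height, width), dtype=np.uint8)
    img[rows, cols] = 1
    return img

# a 200-point and a 10 000-point series yield the same input size
short = series_to_image(np.sin(np.linspace(0, 20, 200)))
long = series_to_image(np.sin(np.linspace(0, 20, 10000)))
```

Once rasterized, inference time depends only on the image resolution and the network, which is why the visual methods scale well to long records.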

    TABLE VII COMPARISON OF THE DETECTION SPEED OF EACH METHOD

    V. DISCUSSIONS

    Oscillation detection plays a crucial role in monitoring process performance. However, existing methods generalize poorly and can handle only part of the practical challenges. In view of this, we have explored the feasibility of a visual framework for oscillation detection based on the heuristic definition of oscillation. A set of numerical experiments and industrial cases consistently demonstrated the effectiveness of the proposed framework.

    Here, we further discuss the validity of the visual framework for oscillation detection. Although oscillation is heuristically defined as a periodic abnormal fluctuation visible to the human eye, its detection cannot be regarded as a simple task, for the following reasons: 1) The nature of oscillation varies considerably in length, frequency, and waveform. 2) Data collected from process plants are corrupted by noise artifacts, outliers, unknown disturbances, and nonstationary trends. 3) Due to the nonstationary nature of industrial processes, oscillations often exhibit time-varying and intermittent characteristics. 4) Multiple oscillatory behaviors caused by multiple fault sources may coexist. All of these features seriously degrade the regularity of the oscillation, making its detection a challenging problem [7], [43]. Consequently, simple machine learning methods have limited representation capability, which makes it difficult for them to effectively capture the oscillatory features of complex industrial processes. This point is also supported by the experiments shown in Tables IV-VI. Typically, the detection accuracy of DFN, which uses a simple network structure, is significantly worse than that of the visual methods.

    In addition, we found that a few samples in the industrial dataset contain outliers and multiple oscillations, features that are absent from the artificial data. Nevertheless, these samples are correctly detected by the visual methods, showing that the proposed framework generalizes well and can discover the commonalities of oscillatory time series with different features. To demonstrate more intuitively the discriminative ability of the visual framework for oscillation detection, we embedded the high-dimensional features into a two-dimensional visualization via the uniform manifold approximation and projection (UMAP) algorithm [46]. The visualization results of the four selected CNN models on the artificial and industrial datasets are shown in Figs. 11 and 12, respectively. It is evident from Fig. 11 that the data features extracted by the CNNs are well represented and can effectively distinguish among the three types of data. A similar observation can be drawn from Fig. 12.

    However, Figs. 11 and 12 also reveal a problem: the CNN models produce a sparse distribution of features within each class, resulting in poor intra-class compactness. This may be an important cause of their fluctuating detection performance on the industrial dataset (as shown by the standard deviations in Table VI). We speculate that there are two possible causes. On the one hand, there may be an inherently large intra-class discrepancy in the training data; since reducing the intra-class distance is not the primary objective of the CNN models, the extracted features may lack intra-class compactness. On the other hand, the softmax classifier and loss function act directly on the sample features mapped into the metric space; despite their competitive generality, they do not explicitly encourage discriminative feature learning, which also results in poor intra-class compactness. Future work will focus on both of these aspects to better apply visual methods in process industries.
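One standard remedy for the compactness issue mentioned here is to augment the softmax loss with a center-loss penalty that pulls each feature toward its class center (Wen et al., 2016); this is a possible direction, not something the paper implements. A minimal numpy sketch of the penalty term alone, with λ and the feature shapes as illustrative assumptions:

```python
import numpy as np

def center_loss(features, labels, centers, lam=0.5):
    """0.5 * lam * mean squared distance of each feature to its class center.

    features: (n, d) feature vectors; labels: (n,) integer class ids;
    centers: (n_classes, d) one learnable center per class.
    """
    diffs = features - centers[labels]
    return 0.5 * lam * float(np.mean(np.sum(diffs ** 2, axis=1)))

feats = np.array([[1.0, 0.0], [0.0, 1.0]])
centers = np.array([[1.0, 0.0], [0.0, 0.0]])
labels = np.array([0, 1])
loss = center_loss(feats, labels, centers)
```

In actual training the centers are updated alongside the network weights and the penalty is added to the softmax loss; only the penalty itself is shown here.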

    VI. CONCLUSION

    In this work, we have explored the feasibility of a deep learning-based visual framework for oscillation detection, based on the widely accepted definition of oscillation. Four typical CNNs were applied within this framework to evaluate their detection performance. Representative numerical experiments and industrial cases consistently demonstrated that the proposed framework enables simple and effective oscillation detection in practical applications. However, we highlight that the present work is only a small step toward using visual methods to address the difficulties of oscillation monitoring. Many research challenges remain, such as balancing image resolution against computational cost when the number of data points is large. Furthermore, more advanced computer vision techniques, such as metric learning, incremental learning, and transfer learning, have not yet been introduced into this field. In summary, continued in-depth research on oscillation detection using the visual framework is encouraged.

    Fig. 11. Visualization results on the artificial data.

    Fig. 12. Visualization results on the industrial data.

    ACKNOWLEDGMENT

    We thank Dambros et al. for making the artificial dataset publicly available.
