
    Actor–Critic Reinforcement Learning and Application in Developing Computer-Vision-Based Interface Tracking

Engineering, 2021, Issue 9

    Oguzhan Dogru, Kirubakaran Velswamy, Biao Huang*

    Department of Chemical and Materials Engineering, University of Alberta, Edmonton, AB T6G 1H9, Canada

Keywords: Interface tracking; Object tracking; Occlusion; Reinforcement learning; Uniform manifold approximation and projection

ABSTRACT: This paper synchronizes control theory with computer vision by formalizing object tracking as a sequential decision-making process. A reinforcement learning (RL) agent successfully tracks an interface between two liquids, which is often a critical variable to track in many chemical, petrochemical, metallurgical, and oil industries. This method utilizes less than 100 images for creating an environment, from which the agent generates its own data without the need for expert knowledge. Unlike supervised learning (SL) methods that rely on a huge number of parameters, this approach requires far fewer parameters, which naturally reduces its maintenance cost. Besides its frugal nature, the agent is robust to environmental uncertainties such as occlusion, intensity changes, and excessive noise. From a closed-loop control context, an interface location-based deviation is chosen as the optimization goal during training. The methodology showcases RL for real-time object-tracking applications in the oil sands industry. Along with a presentation of the interface tracking problem, this paper provides a detailed review of one of the most effective RL methodologies: the actor–critic policy.

    1. Introduction

Oil sands ore contains bitumen, water, and minerals. Bitumen is a high-viscosity hydrocarbon mixture, which can be extracted by means of several chemical and physical processes. The product is further treated in upgrader units or refineries [1] to obtain more valuable byproducts (e.g., gasoline, jet fuel). Oil sands are mined from open pits and loaded into trucks to be moved into the crushers [2]. Following this, the mixture is treated with hot water for hydro-transportation to the extraction plant. Aeration and several chemicals are introduced to enhance this process. In the extraction plant, the mixture settles in a primary separation vessel (PSV). A water-based oil sands separation process is summarized in Fig. 1.

During the separation process inside the PSV, three layers are formed: froth, middlings, and tailings (Fig. 2). An interface (referred to as the froth–middlings interface (FMI) henceforth) is formed between the froth and middlings layers. Its level with reference to the PSV unit influences the quality of the extraction.

To control the FMI level, it is crucial to have reliable sensors. Traditionally, differential pressure (DP) cells, capacitance probes, or nucleonic density profilers are used to monitor the FMI level. However, these are either inaccurate or reported to be unreliable [3]. Sight glasses are used to manually monitor the interface for any process abnormalities. To utilize this observation in closed-loop control, Ref. [3] proposed using a camera as a sensor. This scheme utilizes an edge detection model with particle filtering on the images to obtain the FMI level; feedback control is then established using this model. More recently, Ref. [4] combined edge detection with dynamic frame differencing to detect the interface. This method directly uses the edge detection technique to detect the interface, along with a frame-comparison mechanism that estimates the quality of the measurement; it also detects faults. Ref. [5] used a mixture of Gaussian distributions to model the appearances of the froth, interface, and middlings, and predicted the interface using a spatiotemporal Markov random field. Despite addressing several challenges utilizing models based on the appearance or behavior of the interface, these techniques fail to address the sensitivities to uncertain environmental conditions, such as occlusion and excessive/non-Gaussian noise.

    Fig. 1. A simplified illustration of the water-based oil sands separation process. The PSV is located in the extraction unit.

    Fig. 2. A schematic of the PSV. During the separation process, three layers are formed.The camera is used to monitor the interface between the middlings and the froth layers in order to control the FMI level optimally.

Supervised learning (SL) methods try to build a map from input (i.e., image, x) to output (i.e., label, y) data by minimizing a cost (or loss) function. Usually, the cost function is convex, and the optimal parameters are calculated by applying a stochastic gradient descent algorithm [6,7] to the cost function. Unsupervised learning (UL) methods, on the other hand, are used to find the hidden features in unlabeled data (i.e., they use x only) [8]. The goal is usually to compress the data or to find similarities within the data. Nevertheless, UL techniques do not consider the impact of the input on the output, even if such a causal relationship exists. In computer vision, these methods are implemented using convolutional neural networks (CNNs). A CNN is a parametric function that applies a convolutional operation on the inputs. It can extract abstract features by processing not just a pixel, but also its neighboring pixels. It is used for classification, regression, dimensionality reduction, and so forth [9–12]. Even though they have been used for decades [13–16], CNNs have only lately gained significant popularity in different domains [17–20]. This is due to the developments that have occurred in hardware technology [21] and data availability [22]. Parallel to the developments in computer vision, recurrent neural networks (RNNs) are used for time-series prediction, where the previous output of the network is fed back into itself [23] in what can be considered a recursive matrix multiplication. However, the vanilla RNN [24] suffers from diminishing or exploding gradients, because it repeatedly feeds the previous information back into itself, leading to uneven back-propagated data sharing between hidden layers. Therefore, it tends to fail when the data sequence is arbitrarily long. To overcome this issue, more complex networks such as long short-term memory (LSTM) [25] and gated recurrent units [26] have been proposed. These networks facilitate data transfer between hidden layers to make the learning more efficient. More recently, a variant of LSTM called convolutional LSTM (ConvLSTM) [27] was reported to improve LSTM performance by replacing matrix multiplications with convolutional operations. Unlike a fully connected LSTM, ConvLSTM receives an image rather than one-dimensional data; it utilizes spatial connections that are present within the input data and enhances estimation. Networks with many layers are considered to be deep structures [28]. Various deep architectures have been proposed [29–33] to enhance the prediction accuracy even further. However, these structures suffer from over-parameterization (i.e., the number of training data points is less than the number of parameters). Several regularization techniques (e.g., dropout, L2) [17] and transfer learning (also called fine-tuning (FT)) methods [34,35] try to find a workaround to improve the network's performance. However, the transferred information (e.g., network parameters) may not be general enough for the target domain. This issue becomes significant, especially when the training data are insufficient or their statistics are significantly different from the data in the target domain. Moreover, efficient transfer learning for recurrent networks currently remains an opportunity for further research.

Reinforcement learning (RL) [36] combines the advantages of both SL and UL techniques and formalizes the learning process as a Markov decision process (MDP). Inspired by animal psychology [37] and optimal control [38–43], this learning scheme involves an intelligent agent (i.e., the controller). Unlike SL or UL methods, RL does not rely on an offline or batch dataset, but generates its own data by interacting with the environment. It evaluates the impacts of its actions by considering immediate consequences and predicts the value via roll-out. Hence, it is more suitable for real or continuous processes involving decision-making for complex systems. However, in sampled data-based schemes, the data distribution may be significantly different during training, which may cause high variance of estimations [36]. Actor–critic methods have been proposed [44–46] in order to combine the advantages of value estimation and the policy gradient. This approach segregates the agent into two parts: The actor decides which action to take, while the critic estimates the goodness of that action using an action-value [47] or state-value [48] function. These methods do not rely on any labels or system models. Therefore, exploration of the state or action space is an important factor that affects the agent's performance. In system identification [49–51], this is known as the identification problem. Various methods have been developed to address the exploration issue [36,48,52–58]. As a subfield of machine learning [59–61], RL is used in—but not limited to—process control [2,42,61–68], the game industry [69–77], robotics, and autonomous vehicles [78–81].

FMI tracking can be formulated as an object-tracking problem, which can be solved in one or two steps using detection-free or detection-based tracking approaches, respectively. Previous works [82–84] have used RL for object detection or localization, for which it can be combined with a tracking algorithm. In the case of such a combination, the tracking algorithm also needs to be reliable and fast for real-time implementation. Several object-tracking algorithms have been proposed, including multiple object-tracking algorithms using RL [85–90]. The proposed schemes combine pretrained object detection with RL-based tracking or a supervised tracking solution. These simulations were carried out under ideal conditions [91,92]. The performance of object-detection-based methods often depends on the detection accuracy. Even if the agent learns to track based on a well-defined reward signal, the researcher should ensure that the sensory information is (or the features of the sensory information are) accurate. Model-based algorithms often assume that the object of interest has a rigid or a non-rigid shape [4] and that the noise or the motion has a particular pattern [3]. These assumptions may not hold when unexpected events occur. Therefore, a model-free approach may provide a more general solution.

Since a CNN may extract abstract features, it is important to analyze it after training. Common analysis techniques utilize the information of the activation functions, kernels, intermediate layers, saliency maps, and so forth [30,93–95]. In an RL context, a popular approach has been to reduce the dimensions of the observed features using t-distributed stochastic neighbor embedding (t-SNE) [96] to visualize the agent in different states [72,97,98]. This helps to cluster the behavior with respect to the different situations encountered by the agent. Another dimensionality-reduction technique—namely, uniform manifold approximation and projection (UMAP) [99]—projects the high-dimensional input (which may not be meaningful in the Euclidean space) into Riemannian space. In this way, the dimensionality of nonlinear features can be reduced.

Fig. 3 illustrates a general control hierarchy in the process industry. In a continuous process, each level in the hierarchy interacts with each other at different sampling frequencies. Interaction starts at the instrumentation level, which affects the upper levels significantly. Recently, Ref. [2] proposed a solution for the execution level. However, addressing other levels remains challenging.

Here, we propose a novel interface tracking scheme based on RL, in which a model-free sequential decision-making agent is trained. This work:

• Provides a detailed review of actor–critic algorithms;

• Focuses on the instrumentation level to improve the overall performance of the hierarchy;

• Formulates interface tracking as a model-free sequential decision-making process;

• Combines CNN and LSTM to extract spatiotemporal features without any explicit models or unrealistic assumptions;

• Utilizes DP cell measurements in a reward function without any labels or human intervention;

• Trains the agent using temporal difference learning that allows the agent to learn continuously in a closed-loop control setting;

• Validates robustness amidst uncertainties in an open-loop setting;

• Analyzes the agent's beliefs in a reduced feature space.

This paper is organized as follows: Section 2 provides a review of actor–critic algorithms and preliminary information; interface detection is formulated in Section 3; Section 4 presents the training and test results in detail; and conclusions and future work are drawn in Sections 5 and 6, respectively.

    2. Review of actor–critic reinforcement learning

RL is a rigorous mathematical concept [36,39,42] in which an agent learns a behavior that maximizes an overall return in a dynamic environment. Similar to a human being, the agent learns how to make intelligent decisions by considering the future rewards. This implies contemplating temporal aspects of the observations, unlike simple classification or regression approaches. This ability allows RL to be used under uncertain conditions [40] with irregular sampling rates. Its versatile nature makes RL adaptive to different environmental conditions and allows it to be transferred from simulation environments to real processes [80].

    Fig. 3. A general control hierarchy in the process industry. RTO: real-time optimization; MPC: model predictive control; PID: proportional–integral–derivative controller.

    2.1. Markov decision processes

In an MDP, an agent observes a state x ∈ X, takes an action u ∈ U according to a policy π(u|x), and receives a reward r from its environment. The return, G_t, is the discounted sum of future rewards:

G_t = Σ_{k=0}^{∞} γ^k R_{t+k+1}   (1)

where γ ∈ [0, 1] is the discount factor. The state-value and action-value functions of a policy π are defined as the expected return:

v_π(x) = E_π[G_t | X_t = x]   (2)

q_π(x, u) = E_π[G_t | X_t = x, U_t = u]   (3)

where E is the expectation of a random variable. After the value functions are estimated for each state, the optimal value functions, v*(x) and q*(x, u), can be found using Eqs. (4) and (5):

v*(x) = max_π v_π(x)   (4)

q*(x, u) = max_π q_π(x, u)   (5)

    2.2. A review of actor–critic algorithms

Earlier approaches used value-based (critic-only) RL [71,101] to solve control problems. In these approaches, actions are derived directly from a value function, which has been reported to be divergent for large-scale problems [45,102]. Policy-based (actor-only) methods [103–105] tackle this problem and can learn stochastic behaviors by generating a policy directly from a parameterized function. This function is then directly optimized by using a performance metric. However, the variance of the estimation and the extended learning time make the policy gradient impractical. Similar to generative adversarial networks (GANs) [106], which utilize generative and discriminative networks, actor–critic algorithms self-supervise without any labels [44,45,107,108]. These techniques combine policy- and value-based methods via an actor and a critic, respectively. This assisted estimation reduces the variance significantly and helps in learning the optimal policy [36,55]. The actor and the critic can be represented as two neural networks, π(u|x, θ) (where θ represents the parameters of the actor network) and V(x|ω) (or Q(x, u|ω)), respectively.
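As an illustration of this split, the following is a minimal NumPy sketch of an actor π(u|x, θ) and a critic V(x|ω) acting on a flat feature vector; the linear parameterization, feature size, and action count are placeholders for illustration rather than the networks used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

class ActorCritic:
    """Toy actor-critic heads on top of a flat feature vector x."""
    def __init__(self, n_features, n_actions):
        # theta: actor parameters, omega: critic parameters
        self.theta = 0.01 * rng.standard_normal((n_features, n_actions))
        self.omega = 0.01 * rng.standard_normal(n_features)

    def policy(self, x):
        """pi(u|x, theta): probability of each discrete action."""
        return softmax(x @ self.theta)

    def value(self, x):
        """V(x|omega): scalar state-value estimate."""
        return float(x @ self.omega)

    def act(self, x):
        p = self.policy(x)
        return rng.choice(len(p), p=p)

agent = ActorCritic(n_features=32, n_actions=3)
x = rng.standard_normal(32)      # stand-in for extracted image features
u = agent.act(x)                 # the actor decides which action to take
v = agent.value(x)               # the critic scores the current state
print(u, round(v, 4))
```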

Although several model-based actor–critic schemes have been proposed [109,110], this paper focuses on the most commonly used model-free algorithms, as represented in Table 1. Some of these methods use entropy regularization, whereas the others take advantage of heuristic methods. A common example of these methods is the ε-greedy approach, in which the agent takes a random action with a probability ε ∈ [0, 1). Other exploration techniques include—but are not limited to—introducing additive noise to the action space, introducing noise to the parameter space, and utilizing an upper confidence bound. Interested readers can see Ref. [67] for more detail.
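For concreteness, a minimal sketch of the ε-greedy heuristic mentioned above (the action values and ε below are arbitrary illustrative numbers):

```python
import numpy as np

rng = np.random.default_rng(1)

def epsilon_greedy(q_values, epsilon=0.1):
    """With probability epsilon take a random action, otherwise the greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

print(epsilon_greedy(np.array([0.2, 0.5, 0.1]), epsilon=0.1))
```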

    The actor–critic algorithms are summarized as follows.

    2.2.1. Deep deterministic policy gradient

This algorithm has been proposed to generalize discrete, low-dimensional value-based approaches [71] to continuous action spaces. The deep deterministic policy gradient (DDPG) [47] utilizes an actor and a critic (Q) as well as a target critic (Q′) network, which is a copy of the critic network. After observing a state, real-valued actions are sampled from the actor network and are mixed with a random process (e.g., the Ornstein–Uhlenbeck process) [111] to encourage exploration. The agent stores state, action, and reward samples in an experience replay buffer to break the correlation between consecutive samples in order to improve learning. It minimizes the mean square error of the loss function, L, to optimize its critic, as shown in Eq. (7).

Table 1. Summary of the reviewed actor–critic algorithms.

Algorithm | Action space | Exploration
DDPG | Continuous | Noisy actions
A2C or A3C | Discrete/continuous | Entropy regularization
ACER | Discrete/continuous | Entropy regularization
PPO | Discrete/continuous | N/A
ACKTR | Discrete/continuous | N/A
SAC | Continuous | Entropy regularization
TD3 | Continuous | Noisy actions

The scheme utilizes a policy gradient to improve the actor network. Since the value function is learned for the target policy based on a different behavior policy, DDPG is an off-policy method.
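The following NumPy sketch illustrates the kind of bootstrapped target and mean-square critic loss that DDPG-style methods minimize; the linear critic, target critic, and randomly generated replay minibatch are illustrative placeholders, not the networks or data of this paper.

```python
import numpy as np

rng = np.random.default_rng(2)
gamma = 0.99

# Placeholder critic and target critic: linear in (state, action), for illustration only.
w, w_target = rng.standard_normal(5), rng.standard_normal(5)
def q(x, u, weights):
    return float(np.concatenate([x, [u]]) @ weights)

# A minibatch sampled from a replay buffer: (x, u, r, x_next, u_next_from_target_actor).
batch = [(rng.standard_normal(4), rng.uniform(-1, 1), rng.normal(),
          rng.standard_normal(4), rng.uniform(-1, 1)) for _ in range(8)]

targets = np.array([r + gamma * q(xn, un, w_target) for x, u, r, xn, un in batch])
preds = np.array([q(x, u, w) for x, u, r, xn, un in batch])
critic_loss = np.mean((targets - preds) ** 2)   # mean square error of the critic
print(round(critic_loss, 4))
```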

    2.2.2. Asynchronous advantage actor–critic

Instead of storing the experience in a replay buffer that requires memory, the asynchronous advantage actor–critic (A2C/A3C) scheme [48] involves local workers that interact with their environments and update a global network asynchronously, which inherently increases exploration. Instead of minimizing the error based on the Q function, this scheme minimizes the mean square error of the advantage function (A or δ) for the critic update, as shown in Eq. (8).

where initially dθ_G = dω_G = 0. A left arrow (←) represents the update operation; α_c and α_a are the learning rates for the critic and actor, respectively; ∇ is the derivative with respect to its subscript; and β is a fixed entropy term that is used to encourage exploration. Subscripts L and G stand for the local and global networks, respectively. Multiple workers (A3C) can be used in an offline manner, and the scheme can be reduced to a single worker (A2C) to be implemented online. Even though the workers are independent, they predict the value function based on the behavior policy of the global network, which makes A3C an on-policy method. This work utilizes an A3C algorithm to track the interface.
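A single-worker NumPy sketch of this kind of k-step gradient accumulation is given below; the linear actor/critic, the random rollout, and the omission of the entropy-gradient term are simplifications for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
gamma, alpha_a, alpha_c = 0.99, 1e-4, 1e-3
n_feat, n_act, k = 8, 3, 5

theta = 0.01 * rng.standard_normal((n_feat, n_act))   # actor parameters
omega = 0.01 * rng.standard_normal(n_feat)            # critic parameters (linear V)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# A k-step rollout collected by one worker (random placeholders for states/rewards).
xs = rng.standard_normal((k + 1, n_feat))
rewards = rng.normal(size=k)
actions = rng.integers(n_act, size=k)

R = float(xs[-1] @ omega)                  # bootstrap from V(x_{t+k})
d_theta = np.zeros_like(theta)
d_omega = np.zeros_like(omega)
for i in reversed(range(k)):
    R = rewards[i] + gamma * R
    adv = R - float(xs[i] @ omega)         # advantage / TD-style error
    p = softmax(xs[i] @ theta)
    onehot = np.eye(n_act)[actions[i]]
    d_theta += np.outer(xs[i], onehot - p) * adv   # policy-gradient term
    d_omega += adv * xs[i]                         # gradient of -0.5*adv^2 w.r.t. omega
    # The full A3C update would also add beta * grad(entropy of pi) to d_theta.

theta += alpha_a * d_theta                 # apply the accumulated gradients to the
omega += alpha_c * d_omega                 # (global) actor and critic parameters
```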

2.2.3. Actor–critic with experience replay

The actor–critic with experience replay (ACER) extends A3C to the off-policy setting by storing past transitions in a replay buffer and reusing them with truncated importance sampling. A bias-correction term and a trust-region-style constraint on the policy update keep the off-policy learning stable while improving sample efficiency.

2.2.4. Proximal policy optimization

Proximal policy optimization (PPO) simplifies trust region policy optimization (TRPO) by clipping the probability ratio between the new and old policies in its surrogate objective, which limits the size of each policy update without requiring second-order optimization.

    2.2.5. Actor–critic using Kronecker-factored trust region

Instead of a gradient descent [6] algorithm to optimize the actor and critic networks, the actor–critic using Kronecker-factored trust region (ACKTR) [116] utilizes second-order optimization, which provides more information. It overcomes the computational complexity by using Kronecker-factored approximation [117,118] to approximate the inverse of the Fisher information matrix (FIM), which otherwise scales exponentially with respect to the parameters of the approximation. Moreover, it keeps track of the Fisher statistics, which yields better curvature estimates.

    2.2.6. Soft actor–critic

Unlike methods that use the entropy of the policy as a loss regularizer [48,114,115,119], the soft actor–critic (SAC) method [55,120] augments the reward function with the entropy term (as shown in Eq. (13)) to encourage exploration. This approach has also been reported [120] to improve the robustness of the policy against model errors.

J(π_θ) = Σ_t E[r(x_t, u_t) + αH(π_θ(·|x_t))]   (13)

where θ represents the parameters of the policy, α is a user-defined (fixed or time-varying) weight to adjust the contribution of the entropy, and H = E[−log π(·)]. This scheme relies on both the Q and V functions to utilize the soft-policy iteration. Similar to DDPG and ACER, SAC stores the transitions in a replay buffer to address sample efficiency. Besides enhancing the exploration, entropy maximization compensates for stability loss, which is introduced by the off-policy approach.
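A small numerical illustration of an entropy-augmented reward of this form (the action distribution, reward, and α below are arbitrary):

```python
import numpy as np

alpha = 0.2                                # entropy weight (user-defined)
p = np.array([0.7, 0.2, 0.1])              # pi(.|x) for a discrete illustration
entropy = -np.sum(p * np.log(p))           # H = E[-log pi(.)]
r = 1.0                                    # environment reward
r_soft = r + alpha * entropy               # entropy-augmented reward
print(round(r_soft, 4))
```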

    2.2.7. Twin delayed deep deterministic policy gradient

The twin delayed deep deterministic policy gradient (TD3) [121] addresses error propagation (which is a non-trivial challenge in statistics and control) [122] due to function approximation and bootstrapping (i.e., instead of an exact value, using an estimated value in the update step). To achieve this, the scheme predicts two separate action-values and prefers the pessimistic value; hence, it avoids suboptimal policies. TD3 utilizes target networks, delays the update to the policy function, and uses an average target value estimate by sampling N transitions from a replay buffer to reduce variance during learning. The scheme introduces exploration by adding Gaussian noise to the sampled actions and performs policy updates using the deterministic policy gradient [104].
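The pessimistic (clipped double-Q) target described above can be sketched as follows; the linear target critics, the tanh target policy, and the noise settings are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
gamma, sigma, noise_clip = 0.99, 0.2, 0.5

# Placeholder target critics and target actor (linear/tanh, illustration only).
w1, w2 = rng.standard_normal(5), rng.standard_normal(5)
def q(x, u, w):
    return float(np.concatenate([x, [u]]) @ w)
def target_policy(x):
    return float(np.tanh(x.sum()))

x_next, r = rng.standard_normal(4), 0.5
# Target policy smoothing: add clipped Gaussian noise to the target action.
u_next = target_policy(x_next) + np.clip(sigma * rng.standard_normal(), -noise_clip, noise_clip)
# Take the pessimistic (minimum) of the two target critics when forming the target.
y = r + gamma * min(q(x_next, u_next, w1), q(x_next, u_next, w2))
print(round(y, 4))
```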

Although the abovementioned algorithms provide general solutions to control problems, they may remain inadequate for more complex or specific tasks. Many other algorithms have been proposed to address these shortcomings. For example, Ref. [123] extended the discrete actor–critic method proposed by Ref. [44] to continuous time and space problems via the Hamilton–Jacobi–Bellman (HJB) equation [39,124]. This proposed algorithm was then tested on an action-constrained pendulum and a cart-pole swing-up problem. Ref. [125] employed an actor–critic algorithm on a constrained MDP together with a detailed convergence analysis. Ref. [46] showcased four incremental actor–critic algorithms based on regular and natural gradient estimates. Ref. [126] introduced a natural actor–critic (NAC) and demonstrated its performance on the cart-pole problem as well as on a baseball swing task. Ref. [127] presented a continuous-time actor–critic via the converse HJB and tested its convergence in two nonlinear simulation environments. Ref. [128] proposed an online actor–critic algorithm for infinite-horizon, continuous-time problems with a rigorous convergence analysis, and linear and nonlinear simulation examples. Ref. [129] proposed an incremental, online, and off-policy actor–critic algorithm. The proposal analyzed the convergence qualitatively and supported it with empirical results. Moreover, the temporal difference (TD) methods were compared with gradient-TD methods that minimize the projected Bellman error [36]. Ref. [130] proposed an actor–critic identifier that could provably approximate the HJB equation without knowledge of the system dynamics. After the learning was complete, the scheme showed process stability. However, knowledge of the input gain matrix was required. Ref. [131] used a nominal controller as a supervisor to guide the actor and to yield safer control in a simulated cruise-control system. Ref. [132] proposed learning the solution of an HJB equation for a partially unknown input-constrained system without the persistent excitation conditions while preserving stability. By considering Lyapunov theory, Ref. [133] designed a fault-tolerant actor–critic algorithm and tested its stability on the Van der Pol system. Ref. [134] formulated an input-constrained nonlinear tracking problem by using the HJB equation and a quadratic cost function to define the value function. The scheme obtained an approximate value function with an actor–critic algorithm. Ref. [135] combined classification and time-series prediction techniques to solve an optimal control problem and showcased the proposed algorithm on a simulated continuous stirred-tank reactor (CSTR) and a simulated nonlinear oscillator. The mean actor–critic algorithm [136] was proposed to estimate the policy gradient by using a smooth Q function, which was averaged over the actions to reduce variance; the results were demonstrated on Atari games. Ref. [137] utilized an event-triggered actor–critic scheme to control a heating, ventilation, and air conditioning (HVAC) system. In addition to these, there are more recent studies on different actor–critic algorithms and their applications, as reported in Refs. [2,62,67,138–145].

Several methods have been proposed to improve value estimation in RL [146–148], which can be used in actor–critic algorithms. Moreover, different techniques [112,149] have been reported to improve the sample efficiency (i.e., to reduce the amount of data needed to learn the optimal policy). Unlike techniques that made use of experience replay [70] or supervised data [150], "parallel learning" makes use of multiple randomly initialized workers (local networks) that interact with different instances of the environment independently to reduce the variance in the policy during learning. These workers have the same infrastructure as a global network and, after collecting k samples, are used to update the parameters of the global network. This reduces the amount of memory used and improves exploration, because the workers have independent trajectories. Task distribution can be performed via multiple machines [151] or multiple central processing unit (CPU) threads of a single computer [48].

The optimal policy and the optimal critic are different in each process, and they are often unknown a priori. Monte Carlo-type methods calculate the empirical return (given in Eq. (1)) at the end of the process (or an episode), which may be lengthy and noisy. Similar to Pavlovian conditioning [152] in psychology, TD learning predicts the value of the current state. Unlike Monte Carlo methods, it makes the predictions for a small horizon, as low as one step. This converts the infinite horizon problem into a finite horizon prediction problem. Instead of calculating the expectation of returns (as in Eq. (2)), the critic network can be updated using a k-step-ahead estimate of the TD error, δ, as shown in Eq. (14). This is called policy evaluation.

δ_t = Σ_{i=0}^{k−1} γ^i r_{t+i+1} + γ^k V(x_{t+k}|ω) − V(x_t|ω)   (14)
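For example, a k-step TD error of this form can be computed as follows (the rewards and value estimates below are arbitrary illustrative numbers):

```python
import numpy as np

gamma, k = 0.99, 5
rewards = np.array([0.1, -0.2, 0.0, 0.3, 0.1])     # r_{t+1}, ..., r_{t+k}
v_t, v_tk = 1.4, 1.1                                # V(x_t), V(x_{t+k}) from the critic
delta = sum(gamma**i * rewards[i] for i in range(k)) + gamma**k * v_tk - v_t
print(round(delta, 4))                              # k-step TD error
```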

3. Formulating the interface tracking as a sequential decision-making process

    3.1. Interface tracking

A model is a mathematical means of describing the process dynamics that can occur either in a physical/chemical/biological system [153] or in a video [154]. The models derived for images often suffer from inaccuracies when there is an unexpected event (e.g., occlusion). To overcome this, either the information from the last valid observation is used in the next observation [4] or the images are reconstructed [154]. Although these solutions may substitute actual measurements for a short period of time, prolonged exposure can deteriorate closed-loop stability. As a consequence, if the FMI's level is too low, the bitumen from the froth layer drains into the tailings. This lowers the product quality and creates environmental footprints. In contrast, if its level is closer to the extraction point, the solid particles in the froth being extracted complicate downstream operations [3]. Since deviations in the FMI level affect the downstream processes, it is important to regulate the FMI at an optimum point.

RL can address inaccuracies during occlusion and excessive noise. This can be done by combining the DP cell measurement (or a measurement from any other reliable instrument) with the current FMI prediction by the agent to provide an accurate cost in the reward function, without external labels such as bounding boxes, during the training phase. Removing the dependence upon such labels minimizes human error. To achieve this, an agent can move a cropping box on the vertical axis over the PSV's sight glass and compare its center with the DP cell measurement. Based on this deviation, the agent can move the box to an optimal position, where the center of the box matches that of the FMI. This deviation-minimizing feedback mechanism is inspired by control theory, and it can enhance an image-based estimation using the measurement obtained from the real process.

Consider a grayscale image, I, sampled from a video stream as I ∈ ℝ^(H×W), with an arbitrary width, W, and height, H, which captures the entire PSV. Consider a rectangular cropping box, B ∈ ℝ^(N×M), that has an arbitrary width, M, and height, N, where {N : N = 2ẑ − 1, ẑ > 1 ∈ ℕ} and ẑ is the center of the rectangle. An example image and a cropping box are shown in Fig. 4(a). This rectangle crops I at ẑ into a size of N×M. For the sake of completeness, H > N and W = M. Consider an interface measurement obtained from a DP cell at time t as z. Note that the DP cell is used only in offline training of the RL agent and can be replaced by other interface measurement sensors; it is considered to be accurate in the offline laboratory environment.
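A minimal sketch of this cropping operation, assuming NumPy arrays for the grayscale frame (the image size and box height below are arbitrary examples):

```python
import numpy as np

def crop_state(image, z_hat, N):
    """Return the N x M window of `image` vertically centered at row z_hat (M = image width).

    Assumes N = 2*half + 1 is odd so that z_hat is the exact center row, as in the text.
    """
    H, W = image.shape
    half = N // 2
    top = int(np.clip(z_hat - half, 0, H - N))      # keep the box inside the image
    return image[top:top + N, :]

I = np.random.default_rng(5).integers(0, 256, size=(480, 200), dtype=np.uint8)  # grayscale frame
x = crop_state(I, z_hat=240, N=101)
print(x.shape)   # (101, 200): the pixels the agent observes
```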

Fig. 4. A frame (I) obtained using a camera. (a) Sizes of the image (H×W) and the cropping box (N×W); (b) sizes of the cropping boxes (N×M) and the initial cropping box positions; (c) an example occlusion with its ratio, ρ.

    The components of the MDP for this problem can then be defined as follows:

States: The pixels inside the rectangle, x ∈ B ⊂ X ⊂ I. These pixels may be thought of as N×M independent sensors.

Actions: The agent moves the cropping box along the vertical axis, so that its center evolves as

ẑ_t = ẑ_0 + Σ_{i=1}^{t} u_i

where ẑ_0 is an arbitrary initial point, and the summation term represents the actions taken up to the t-th instant (u_i = +1 for up, u_i = −1 for down).

Rewards: A scalar reward (Eq. (15)) based on the deviation between the center of the cropping box, ẑ_t, and the DP cell measurement, z.

Discount factor: γ = 0.99.

The goal of this agent is to generate a sequence of actions to overlay the cropping box, B, on the vertical axis of the PSV with the interface at its center. To achieve this, the agent needs to perform long-term planning and preserve the association between its actions and the information obtained from the DP cell measurement. A flowchart of the proposed scheme is shown in Fig. 5. In addition, Fig. 6 and Table 2 show the networks in detail. More details about the ConvLSTM layer can be found in Ref. [27].
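To make the loop concrete, the following is a minimal stand-in environment, assuming a negative absolute deviation in place of the exact reward in Eq. (15) and the up/down action convention described above; the frame sizes, the clipping of the box to the image, and the frame-advance logic are illustrative assumptions.

```python
import numpy as np

class InterfaceTrackingEnv:
    """Minimal stand-in: the agent moves a cropping box up/down and is rewarded
    for keeping its center on the DP cell reading. The reward is a placeholder
    (negative absolute deviation), not the exact Eq. (15)."""

    def __init__(self, frames, dp_levels, box_height=101, z0=240):
        self.frames = frames          # sequence of grayscale images
        self.dp = dp_levels           # DP cell measurement (pixel row) per frame
        self.N = box_height
        self.z = z0                   # current center of the cropping box
        self.t = 0

    def step(self, u):                # u = +1 (up), -1 (down)
        H = self.frames[self.t].shape[0]
        self.z = int(np.clip(self.z - u, self.N // 2, H - 1 - self.N // 2))
        obs = self.frames[self.t][self.z - self.N // 2: self.z + self.N // 2 + 1, :]
        reward = -abs(self.z - self.dp[self.t])          # deviation-based penalty
        self.t = min(self.t + 1, len(self.frames) - 1)   # advance to the next frame
        return obs, reward

rng = np.random.default_rng(8)
frames = [rng.integers(0, 256, size=(480, 200), dtype=np.uint8) for _ in range(3)]
env = InterfaceTrackingEnv(frames, dp_levels=[250, 252, 255])
obs, r = env.step(+1)
print(obs.shape, r)
```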

Unlike the previous works [4,5] that make predictions in the state space, this approach optimizes the value and the policy spaces by using Eqs. (9), (10), and (14), respectively. Moreover, the CNN and ConvLSTM layers are updated by using Eq. (17).

Fig. 5. Flow diagram for the proposed learning process. The update mechanism is shown in Eqs. (9) and (10) with the k-step policy evaluation, as shown in Eq. (14).

This can be extended to a variable z_t ∈ Z.

    3.2. Robustness to occlusion via training

CNNs interpret the spatial information by considering the connectivity of the pixels, which improves robustness up to a certain point. However, this does not guarantee robustness to occlusion, and the agent may fail even if a good policy is obtained under normal conditions. To overcome this issue, the agent may be trained using synthetically occluded images during the training phase. Another way is to recalibrate a policy (that was trained using occlusion-free images) with occluded images.

    Fig. 6. Detailed structures of the CNN, ConvLSTM, actor, and critic networks.

    4. Results and discussion

    4.1. Experimental setup

A lab-scale setup that mimics an industrial PSV is used for the proposed scheme. This setup allows for the movement of the interface to a desired level using pumps, as shown in Fig. 7. Two DP cells are used to measure the interface level based on the liquid density, as described in Ref. [5].

    Fig. 7. The experimental setup.

Table 2. Details of the networks used by the agent.

No. | Layer type | Output dimension | Filter size | Number of parameters
1 | Convolutional | 20×20×16 | 8×8 | 1 040
2 | Convolutional | 9×9×32 | 4×4 | 8 224
3 | ConvLSTM | 9×9×32 | 3×3 | 73 856
4 | Fully connected (actor) | 3 | — | 7 776
5 | Fully connected (critic) | 1 | — | 2 592
Total | | | — | 93 488

Images are obtained using a D-Link DCS-8525LH camera at 15 frames per second (FPS). From the 15 FPS footage, a representative image for each second is obtained. Hence, 80 images from 80 consecutive seconds are obtained with the necessary down-sampling. These images are processed to showcase the PSV portion, void of unwanted background. They are then converted into grayscale images. The DP cell measurements (for the same contiguous time period as the images), which are available in terms of water head (water-in), are converted to pixel positions, as given in Ref. [4]. After each action is taken, the video frame changes. Every action the agent takes generates a scalar reward (Eq. (15)), which is later utilized to calculate the TD error (Eq. (14)) that is used in training the agent's parameters (Eqs. (9) and (10)).
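A sketch of this preprocessing pipeline is given below, assuming NumPy arrays; the crop coordinates and the linear water-head-to-pixel mapping are placeholders standing in for the actual calibration described in Ref. [4].

```python
import numpy as np

def preprocess(frames_rgb, fps=15, psv_rows=(40, 520), psv_cols=(100, 300)):
    """Down-sample a 15 FPS clip to one frame per second, crop the PSV region,
    and convert to grayscale. The crop coordinates here are made-up placeholders."""
    one_per_second = frames_rgb[::fps]                    # one representative frame per second
    cropped = one_per_second[:, slice(*psv_rows), slice(*psv_cols), :]
    gray = cropped.mean(axis=-1).astype(np.uint8)         # simple RGB -> grayscale
    return gray

def dp_to_pixel(head_water_in, head_min, head_max, img_height):
    """Map a DP cell reading (water head) linearly onto an image row; the linear
    mapping is an assumption standing in for the conversion in Ref. [4]."""
    frac = (head_water_in - head_min) / (head_max - head_min)
    return int(round((1.0 - frac) * (img_height - 1)))    # higher level -> smaller row index

print(dp_to_pixel(0.55, head_min=0.2, head_max=0.9, img_height=480))
```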

    4.2. Implementation details

    4.2.1. Software and network details

Both the training and the testing phases were conducted using an Intel Core i7-7500U CPU at 2.90 GHz (two cores, four threads), 8 GB RAM at 2133 MHz, and 64-bit Windows using Tensorflow 1.15.0. Unlike deeper networks (e.g., those in Ref. [32] that consisted of tens of millions of parameters), this agent consisted of fewer parameters, as summarized in Table 2. This prevents over-parameterization and reduces the computational time significantly, with the disadvantage of an inability to extract higher level features [155].

After each action is taken, the cropping box is resized to 84×84 pixels. An Adam optimizer with a learning rate of 0.0001 is used to optimize the parameters of the agent (including the CNN, ConvLSTM, actor, and critic) in a sample-based manner. This momentum-based stochastic optimization method has been reported to be computationally efficient [156].

    4.2.2. Training without occlusion

An A3C algorithm was used during the experiments to reduce the training time, improve exploration, and achieve convergence to a suboptimal policy during learning [48]. All of the initial network parameters were sampled randomly from a Gaussian distribution with zero mean and unit variance. Offline training was performed after creating a continuous trajectory of the interface level by manually ordering 80 unique images, as shown in Fig. 8.

This trajectory was then repeatedly shown to the agent for 470 steps for 2650 episodes (i.e., an episode consisted of 470 steps). At any time, the agent observed only the pixels within the cropping box. The cropping box of each agent was initialized at four different positions, as shown in Fig. 4(b). The agent's goal was to minimize the deviation of the center of the cropping box with respect to the DP cell measurements, given a maximum velocity of 1 pixel per step. The agent was not exposed to occlusion during training and was capable of processing 20 FPS (i.e., computational execution time) for four workers.

    Fig. 8. Training results at the end of training (2650 episodes) and FT (3380 episodes). BFT: before fine-tuning; AFT: after fine-tuning.

    4.2.3. Fine-tuning with occlusion

The global network parameters were initialized using the parameters obtained at the end of the training without occlusion. The local networks initially shared the same parameters as the global network. All of the training hyperparameters (e.g., learning rate, interface trajectory) were kept unchanged. The images used in the previous training phase were overlayed with occlusion, whose ratio, ρ, was sampled from a Poisson distribution, as shown in Eq. (18). The distribution, Pois(x, λ), is given in Eq. (19):

Pois(x, λ) = λ^x exp(−λ)/x!   (19)

Eq. (18) bounds ρ between 0 and ρ_max = 80% at the beginning of an episode. The shape factor is arbitrarily defined as λ = 1. In each episode, occlusion occurs at the 200th step and lasts for the following 200 steps with a probability of 1. The intent behind FT is to make sure the agent is robust to the occlusion. The agent, with four workers, was trained for an arbitrary amount of 730 episodes until the episodic cumulative reward no longer improved.
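One plausible reading of this sampling step is sketched below; the scale factor mapping the Poisson draw to a ratio and the placement of the occlusion at the top of the crop are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

def sample_occlusion_ratio(lam=1.0, rho_max=0.8, scale=0.2):
    """Hypothetical reading of Eq. (18): draw from Pois(lam), scale it, and
    clip to [0, rho_max]. The `scale` factor is an assumption, not from the paper."""
    return float(np.clip(scale * rng.poisson(lam), 0.0, rho_max))

def occlude(image, rho, kappa=128):
    """Cover a `rho` fraction of the cropped image (here, from the top) with
    constant intensity kappa; the occlusion placement is an assumption."""
    out = image.copy()
    rows = int(rho * out.shape[0])
    out[:rows, :] = kappa
    return out

img = rng.integers(0, 256, size=(101, 200), dtype=np.uint8)
rho = sample_occlusion_ratio()
print(rho, occlude(img, rho).shape)
```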

    4.2.4. Interface tracking test

For a 1000-step episode, the agent was tested using a discontinuous trajectory that contained previously unseen images that were either noiseless or were laden with Gaussian noise, ν ∈ ℝ^(H×W) ~ N(0, 1), in three ways, as shown in Table 3. These images were also occluded using a synthetic occlusion, whose constant intensity was arbitrarily selected as the mean of the image (i.e., κ = 128), while the occlusion ratio, ρ, varied linearly from 20% to 80%.

    4.2.5. Feature analysis

To illustrate the effectiveness of the network, a previously unseen PSV image was manually cropped starting from the top of the PSV to the bottom. These manually cropped images were then passed one by one through the CNN prior to training, the CNN trained as in Section 4.2.2, and the CNN fine-tuned as discussed in Section 4.2.3 to extract the features. These spatial features, φs, were then collected in a buffer with the size 9×9×32×440, from which the reduced-dimension (2×440) features were obtained using UMAP [99]. These lower dimensional features are presented in Section 4.6.
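A sketch of this reduction step, assuming the umap-learn package and random stand-in features of the stated size:

```python
import numpy as np
import umap   # umap-learn package

# Stand-in for the spatial features of 440 manually cropped windows,
# collected as described above with shape (9, 9, 32, 440).
features = np.random.default_rng(7).standard_normal((9, 9, 32, 440)).astype(np.float32)
flat = features.reshape(-1, features.shape[-1]).T           # -> (440, 2592)

reducer = umap.UMAP(n_components=2, random_state=0)
embedding = reducer.fit_transform(flat)                      # -> (440, 2)
print(embedding.shape)
```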

    4.3. Training

The best policies were obtained at the end of training and FT, when there was no improvement in the cumulative reward for 500 consecutive episodes. Fig. 8 shows the trajectories using these policies. The position of the cropping box is initialized with its center at 60% of the PSV's maximum height. At the end of this phase, the agent tracked the interface with a negligible amount of offset. An example obtained from the 80th step is shown in Fig. 9(a). The green star represents where the agent thinks the interface is for the current frame.

Table 3. Construction of the noisy test images.

Identity of the noisy image | Noisy image | Condition
1 | I_t = I_t + ν ⊙ ζ | t < 300
2 | I_t = I_t ⊙ (1 + ν ⊙ ζ) | 300 ≤ t < 700
3 | I_t = I_t ⊙ (1 + ν ⊙ 2×ζ) | t ≥ 700

    4.4. FT re-calibration for occlusion

FT improved the agent's overall performance, even for the occlusion-free images, by reducing the level-wise mean average error (MAE) by 0.51%, as summarized in Table 4. This result indicates that the agent adapted to the new environmental conditions without forgetting the previous conditions. This was due to the improvements in the value estimation and the policy, which started from near-optimal points. Note that the minimum value for the MAE is limited by the initial position of the cropping box, as shown in Fig. 8.

    Fig. 10 shows the cumulative rewards from one of the workers during training and after fine-tuning (AFT), as shown in solid and dash-dot lines, respectively.

Note that the initial decrease during FT was caused by the occlusion, because the agent was not able to track the interface level when occlusion occurred. This new feature was learned successfully by the closed-loop reward mechanism within 400 episodes. Note that the final cumulative reward obtained at the end of FT is almost the same as that obtained at the end of training. This is because the cumulative reward represents only the tracking performance during training and depends on the initial position of the cropping box, as shown in Fig. 8. This value can be zero only if the center of the box and the DP cell measurement overlap completely at the beginning of the episode and the agent tracks the interface without any offsets during the episode. The necessity of the FT is more pronounced when the agent is exposed to unseen environmental conditions such as excessive noise and occlusion, as discussed in Section 4.5.

Fig. 9. (a) Training result at the 80th frame. (b) Test result AFT with 80% occlusion and excessive noise, at the 950th step. The white boxes represent the cropping box that the agent controls. The stars represent the center of the cropping box, and the circles are the exact interface level. The pentagon is the bottom of the occlusion, which looks like the FMI.

Table 4. Tracking error after training and AFT.

Stage | MAE (pixel) | MAE (level)
After training | 4.9852 | 1.1382
AFT | 4.9597 | 1.1324

    Fig. 10. Cumulative rewards. The graph shows that the agent can learn the occlusion and track the interface successfully.

    4.5. Test

    4.5.1. Before fine-tuning

The initial before fine-tuning (BFT) test was conducted at the end of the initial training (i.e., the 2650th episode, as shown in Fig. 10). Note that in the testing (online application) phase, DP cell information is not being used, and the RL agent works on its own. In fact, even if the DP cell is available, it will not be accurate in the field application environment. Fig. 11 shows that the agent was robust to up to 50% occlusion and additional noise prior to FT. This is a significant improvement over the existing schemes, all of which do not address occlusion. The reason for this improvement is that the neural networks extract more abstract features than edge and histogram information, in both the spatial and temporal domains [157]. This is due to the convolutional operations that smooth out disturbances and improve the agent's overall performance. On the other hand, any further increase in the occlusion ratio resulted in failure to track the interface. Since the occlusion is of lighter intensity, the policy naturally moved toward the bottom of the PSV (where pixels of higher intensity were abundant) to find the interface.

    4.5.2. After fine-tuning

    Fig. 11. Test results: Tracking, where ρ is the occlusion ratio (e.g., ρ=0.8 means that the image is occluded by 80%).

AFT, it was found that recalibrating the agent for occlusion improved its performance significantly, as seen from its ability to track the interface more accurately (Fig. 11). Additional noise caused its performance to degrade when the interface offset between the consecutive frames was around 5%. However, the agent was successful when this interface offset was reduced to 2.5%, as shown in Fig. 11. This is because the excessive noise corrupts the image significantly and the agent fails to locate the interface. An example frame obtained at the 950th frame is shown in Fig. 9(b). It should be noted that the noise is accompanied by 80% occlusion; this makes the tracking problem more challenging, since the amount of useful information extracted by the agent from the image is significantly reduced—that is, only 20% of the pixels can be used to locate the interface. This performance is due to the CNN and ConvLSTM combination. Fig. 12 shows the agent's beliefs (predicted by the critic) about the states (obtained from an unseen frame) using parameters obtained from a random network (solid), after training (dash-dot), and AFT (dot). According to Eq. (2), this figure defines the value of a state, assuming that the best trajectory toward the interface level would be generated by the policy.

Fig. 12 also shows that, prior to any training, the value predicted for any state is similar. However, during training, the agent regrets being in disadvantageous states, and the DP cell readings reinforce that moving the cropping box closer to the interface (i.e., a vertical solid line) yields a better value than being further away from it. At the end of FT, with more data, the agent further improves its parameters—and therefore its actions—to move the cropping box so that it becomes more accurate. This result shows that the agent tries to improve its actions based on a constantly changing belief (value). Note that the increase in AFT after a deviation value of 200 corresponds to the yellow pentagon in Fig. 9, which looks like the interface and causes an increase in the value function. However, the value obtained from that part is lower than that of the interface, meaning that the agent is more confident when it is close to the star, rather than to the pentagon.

    Fig. 12. Test results of value function versus deviation from the interface.

    4.6. Understanding the network: Feature analysis

    The training and test results focused on the progress of the learning and control abilities of the agent. These alone may not be sufficient to explain whether the agent’s decisions are meaningful given an observation in the form of an image.

Fig. 13 shows the reduced dimensionality as a two-dimensional graph by representing the values of the corresponding cropped images (obtained in Section 4.2.5) using the gradual intensities of a color. The curve (from left to right) corresponds to the cropped images from the top to the bottom of the PSV tank side glass, as explained in Section 4.2.5.

The colored pentagons in Figs. 13(a)–(c) correspond to three points in Fig. 13(d). According to the results, the features obtained from the network prior to training are similar to each other without any particular arrangement. However, as training proceeds, features with similar values get closer. Upon combining Fig. 13 with Fig. 12, it could be inferred that the CNN was able to extract the features in a meaningful way, despite using unlabeled data in a model-free context, due to the RL methodology. This was possible because the texture and pixel intensity pattern of each cropped image was successfully converted into the value and the policy functions by employing a CNN–ConvLSTM combination. Also, the reward signal obtained from the DP cell (which was used as a feedback mechanism) trained the agent's behavior.

    5. Conclusion

Fig. 13. Dimensionality reduction applied to the features of the states (x ∈ X) obtained from an unseen image. The features are obtained using the parameters obtained from (a) random, (b) trained, and (c) fine-tuned networks. The data points are then colored by their corresponding values. (d) Three regions that correspond to the top and the bottom of the tank and the FMI are highlighted on the unseen image. As the agent trains, the extracted features from similar regions are clustered closer in the Riemannian space.

This work provided a comprehensive review of actor–critic algorithms and proposed a novel RL scheme that targets the instrumentation level of the control hierarchy in order to improve the performance of the entire structure. To achieve this result, interface tracking was formulated as a sequential decision-making process that requires long-term planning. The agent was composed of a CNN and ConvLSTM combination that does not require any shape or motion models and is hence more robust to uncertainties in the process conditions. Inspired by the feedback mechanism used in control theory, the agent utilized readings from DP cells to improve its actions. This technique removes the dependencies on explicit labels that are required for SL schemes. The agent's performance during validation using untrained images under occlusion and noise showed that the interface can be tracked under up to 80% occlusion and excessive noise. An analysis of the high-dimensional features validated the agent's generalization of its beliefs around its observations.

    6. Future work

This work successfully demonstrated the tracking of a liquid interface by utilizing one of the most advanced RL techniques. The occlusion was handled by employing an agent composed of deep CNN structures, and the tolerance was improved by fine-tuning the policy, which showcased the adaptive nature of the proposed method. In addition to these, an agent that can reconstruct the occluded images may be an alternative method for future work.

    Acknowledgements

The authors thank Dr. Fadi Ibrahim for his help in the laboratory to initiate this research and Dr. Artin Afacan for the lab-scale PSV setup. The authors also acknowledge the Natural Sciences and Engineering Research Council of Canada (NSERC) and its Industrial Research Chair (IRC) Program for financial support.

    Compliance with ethics guidelines

    Oguzhan Dogru, Kirubakaran Velswamy, and Biao Huang declare that they have no conflict of interest or financial conflicts to disclose.

    Nomenclature

    Abbreviations

A2C: advantage actor–critic
A3C: asynchronous advantage actor–critic
ACER: actor–critic with experience replay
ACKTR: actor–critic using Kronecker-factored trust region
AFT: after fine-tuning
BFT: before fine-tuning
CNN: convolutional neural network
ConvLSTM: convolutional long short-term memory
CSTR: continuous stirred-tank reactor
DDPG: deep deterministic policy gradient
DP: differential pressure
FIM: Fisher information matrix
FMI: froth–middlings interface
FPS: frames per second
FT: fine-tuning
GAN: generative adversarial network
HJB: Hamilton–Jacobi–Bellman
HVAC: heating, ventilation, and air conditioning
LSTM: long short-term memory
MAE: mean average error
MDP: Markov decision process
NAC: natural actor–critic
PPO: proximal policy optimization
PSV: primary separation vessel
RL: reinforcement learning
RNN: recurrent neural network
SAC: soft actor–critic
SL: supervised learning
TD: temporal difference
TD3: twin delayed deep deterministic policy gradient
TRPO: trust region policy optimization
t-SNE: t-distributed stochastic neighbor embedding
UL: unsupervised learning
UMAP: uniform manifold approximation and projection

    Symbols

E[·]: expectation
φs(·): spatial features
φt(·): temporal features
δ: temporal difference error
σ0: distribution of initial states
ν: Gaussian noise with zero mean and unit variance
(·)*: optimum value of the variable, e.g., q*
ln(·): natural logarithm
R, G: empirical reward, return
q, r, v: expected action-value, reward, state-value
x, x′ ∈ X: states ∈ state space
u ∈ U: actions ∈ action space
π(·): policy of the agent, also known as the actor
δ(x_t|ω_L): temporal difference error
V(·): estimate of state-value, also known as the critic
Q(·): estimate of action-value, also known as the critic
Ω: occlusion

    Parameters

αa, αc: learning rates for the actor and critic (0.0001)
γ: discount factor (0.99)
κ: intensity of occlusion (128/256)
λ: shape parameter of a Poisson distribution (1)
ρ: occlusion ratio (%)
ζ: magnitude of noise (0.2)
