
    Grasp Planning and Visual Servoing for an Outdoors Aerial Dual Manipulator

    Engineering, 2020, Issue 1

    Pablo Ramon-Soria*, Begoña C. Arrue, Anibal Ollero

    Robotics, Vision, and Control Group, School of Engineering, University of Seville, Seville 41092, Spain

    Keywords:

    Aerial manipulation

    Grasp planning

    Visual servoing

    ABSTRACT This paper describes a system for grasping known objects with unmanned aerial vehicles (UAVs) provided with dual manipulators using an RGB-D camera. Aerial manipulation remains a very challenging task. This paper covers three principal aspects for this task: object detection and pose estimation, grasp planning, and in-flight grasp execution. First, an artificial neural network (ANN) is used to obtain clues regarding the object's position. Next, an alignment algorithm is used to obtain the object's six-dimensional (6D) pose, which is filtered with an extended Kalman filter. A three-dimensional (3D) model of the object is then used to estimate an arranged list of good grasps for the aerial manipulator. The results from the detection algorithm (that is, the object's pose) are used to update the trajectories of the arms toward the object. If the target poses are not reachable due to the UAV's oscillations, the algorithm switches to the next feasible grasp. This paper introduces the overall methodology, and provides the experimental results of both simulation and real experiments for each module, in addition to a video showing the results.

    1. Introduction

    The use of unmanned aerial vehicles (UAVs) for industrial applications is on the rise. UAVs are suitable for a wide range of applications that take place in inaccessible locations and are typically dangerous to human operators, such as power-line inspection [1,2], wind-turbine maintenance [3], the inspection of various structures in facilities [4], and photogrammetry [5]. However, these applications merely involve perceptive tasks. In recent years, it has been demonstrated that aerial robots can effectively perform manipulation tasks as well [6].

    Aerial manipulation remains a challenging task, as it has extra requirements; the platform is usually delicate and remains in an unstable equilibrium in the air. In order to perform manipulation tasks, it is necessary to provide the robot with various capabilities such as perception, planning, and, in general, smart capabilities. It is also necessary to equip the robot with the appropriate hardware to perform the desired tasks. The purpose of introducing robots in these applications is to provide a more efficient solution to problems, reduce costs (both time and money), and reduce the quality losses caused by faults of human operators resulting from the monotony of some of these tasks. Instead, the operator is given a more important role that cannot be performed by robots.

    Manipulation involves interaction with physical objects using manipulators, and may include grasping objects, moving them, or maintaining contact with a stiff object. In order to perform such tasks, the robot needs to be able to detect and locate objects, as well as analyze how to interact with them. It is also necessary to plan the actions and movements of the manipulators and to prevent them from colliding with the environment and with the platform itself.

    To perform any manipulation task, it is first necessary to find and track the object to be manipulated. Many computer vision algorithms can be used for this task; these can be classified depending on the application, computational resources, and sensors used.

    Region proposal algorithms for the monocular detection of objects are commonly used. These algorithms can be based on simple color algorithms [7,8], feature detection [9,10], or machine learning. Novel algorithms use artificial neural networks (ANNs) to detect pre-trained objects. ANNs have been exploited thanks to the field of deep learning and the availability of powerful graphics processing units (GPUs) that boost and parallelize ANNs. The latest designs (SSD [11], RCNN [12], F-RCNN [13], Mask-RCNN [14], and YOLO [15]) have overtaken most traditional object-detection algorithms. However, detecting instances of objects in color images is insufficient to perform manipulation tasks. It is also necessary to analyze the three-dimensional (3D) information of the object. With this purpose, recent research on deep neural networks (DNNs) has extended ANNs to compute the six-dimensional (6D) location of the objects as well [16].

    Depth sensors such as Kinect or Intel RealSense devices can be used to directly extract 3D information from the environment [17,18]. However, these sensors usually have strict limitations in outdoor environments, as they use infrared structured light, which is usually blinded outdoors. In contrast, passive stereo cameras make it possible to compute a disparity map that can be used to perform a pixel-wise triangulation of points, resulting in a point cloud. This kind of device is usually less accurate than a depth camera, but works well under almost any conditions. However, as it is based on the visible light spectrum, the performance of this device is poor under low-light conditions.

    Image features can be used to accurately obtain the position of objects in space [19-21]. These methods show good results and are robust to occlusions. However, these algorithms may present problems if objects are texture-less or if they show reflections. Other authors have used projective algorithms, which are based on geometrical models that are projected onto the image in order to find the object's position and orientation using monocular cameras [22].

    With regard to the manipulation itself, robot manipulators have been widely studied for a variety of applications. Mechanical and mathematical models of robot hands and their interaction with objects are a fundamental aspect of the analysis of robotic manipulation. The vast number of combinations of object and hand configurations makes this area of research challenging.

    Miller et al. [23] proposed the use of primitive shapes that approximate the object, thus facilitating the generation of grasps. However, this method requires a simple object, and cannot be used for more complex shapes. In Ref. [24], researchers proposed an algorithm that approximates the shape of the objects to a set of planar regions, which are used to generate the contact points for the grasps. More recent research has used machine learning methods to generate better grasps; in these methods, the system is trained using synthetic datasets [25-27] or reinforcement learning [28].

    Other approaches use the data from sensors to create a probabilistic model of the objects. In Refs. [29-31], researchers proposed an algorithm that uses Gaussian processes for the reconstruction of 3D objects, which is known as Gaussian process implicit surfaces.

    Once grasps are generated, they must be evaluated. The quality of grasps [32] can be analyzed according to their geometry, the strength of the fingers, and many other aspects. In the present work, we focus on the common aspects, which are force closure, largest minimum resisted wrench, and task-oriented quality metrics.

    Marturi et al. [33] proposed an offline/online planner to control an industrial robotic arm in order to cooperate with human operators. The algorithm uses a pre-learned grasp planner (offline) that generates a set of likely grasps. Later, the object's pose is detected online and the second planner generates a set of feasible grasps according to the current state of the robot and the pose of the object.

    However, aerial manipulation is more challenging than ground manipulation. For example, in aerial applications, both the robot and the target objects are moving. Moreover, the arms' dynamics influence the aerial platform's dynamics, compromising its stability. Several publications [34,35] have studied a UAV's stability and control coupled with its manipulators. In the present work, we assume that the movements are not aggressive and that the controller is fast enough to compensate for the semi-static external forces that the arms exert on the platform.

    Furthermore, in order to actually perform the grasping task, it is necessary to run a visual servoing algorithm to accurately move the manipulators. Visual servoing, or vision-based control, can be classified into two types depending on the data used. If the algorithm uses 3D positions, it is named position-based visual servoing (PBVS) [36,37]; if the algorithm uses just 2D information from images, it is named image-based visual servoing [37,38]. A third option involves a combination of information: the so-called "hybrid algorithms." This article proposes a PBVS method that detects the position of the target object. A control law is designed to reduce the position error of the end-effectors.

    Aerial platforms need to locate themselves within the work zone in order to control their position. A standard technique during research is to use motion-capture (MOCAP) systems to accurately locate UAVs. These systems locate the robots at a high frequency, which is very convenient when testing new technologies and algorithms. Outdoor missions also require a precise and accurate location for the robots. The global positioning system provides a wide-ranging and extensive means of locating robots outdoors. However, standard devices have unacceptable errors for manipulation tasks. To solve this issue, our research group [39] proposed the use of a total station, whose information is fused with the inertial measurement unit of the aerial robot in order to locate the robot anywhere outdoors.

    Aerial manipulators can be classified according to the configuration of the arm. This work focuses on the use of serial manipulators. Laiacker et al. [40] developed a helicopter equipped with an industrial robotic arm to perform manipulation tasks. The current increasing interest in aerial manipulation has led researchers to place not one but two manipulators on the same platform in order to allow it to perform dexterous manipulation with both hands. Korpela et al. [41] proposed the use of a pair of manipulators to turn valves in industrial environments. A dual manipulator configuration [42] has been found to be very versatile for the performance of manipulation tasks.

    This article proposes the design of a complete aerial manipulation system that includes the development of manipulators for the UAV, a visual perception system to detect the objects to be manipulated, and the visual servoing to guide the arms. This work was carried out under the framework of AEROARMS (SI-1439/2015), an EU-funded project. This project aims to develop a novel aerial platform with dexterous manipulation capabilities to perform inspection and maintenance tasks in industrial environments.

    In this work, a mock-up model of a crawler robot that was developed in the AEROARMS project is used as a use case for the detection algorithm and grasping experiments. The same methodology can be applied to other objects.

    The remainder of the article is organized as follows. Section 2 introduces the aerial manipulator system that has been developed to perform the manipulation task. Section 3 describes the vision algorithm used to detect, track, and locate the target object. Section 4 shows the grasping and re-planning algorithm, which takes into account the possible movement of both the robot and the target. Section 5 describes the global procedure and the designed state machine to execute the mission. Section 6 shows an experimental analysis of the system, through actual grasping experiments with the flying manipulator. Finally, Section 7 discusses the results and future steps for this research.

    2. Aerial manipulator

    Multirotors have been demonstrated to be practical solutions for inspection and maintenance tasks [6]. Unlike fixed-wing aircraft, multirotors can move freely in space and hover at a desired position to perform any tasks.

    In this project, a coplanar hexacopter was chosen to perform the manipulation. This configuration provides more stability and strength than smaller multirotors, meeting the payload requirements of the mission. Fig. 1 shows the platform used for the experiments. The frame is a hexacopter that was designed and built by DroneTools SL.†

    The rest of the hardware, including the dual manipulators and software, was designed and developed by the present authors at the University of Seville.

    The platform has a Pixhawk‡ autopilot that is responsible for the robot's flying control. At the bottom, an Intel NUC computer is installed to run all the systems, from the vision algorithm to the control of the manipulators. Finally, an Intel RealSense camera is built in to perceive the environment.

    In order to perform manipulation, the multirotor is equipped with a pair of arms. These arms have three degrees of freedom (3 DoF) plus integrated and exchangeable end-effectors, which give them four or six DoFs in total, depending on the application. Fig. 1 shows the models of the arms with grippers as the end-effectors. Using two manipulators instead of one is advantageous for various reasons. For one thing, two manipulators make it possible to carry out more complex manipulation tasks, as each arm can perform a different operation. In addition, having two end-effectors (e.g., two grippers) makes the grasps more stable, because the object can be held from two different sides. With just one gripper, the grasps are more limited, as gravity acting on the object's center of mass will be misaligned with the grasp in many situations. Thus, the only stable grasps are those located at the top of the object, if possible, which limits the manipulation to objects that have handles on top.

    The arms presented here have been released in the open-source project called hecatonquiros§, which aims for general-purpose, cheap, and easy-to-use robotic arms. These arms are designed to be lightweight, so relatively small UAVs can carry them. They are 3D printed, which reduces the overall cost of production, as the material is relatively cheap and does not require post-processing, allowing the arms to be assembled directly. The cost of a single arm is about USD 150 (including the smart serial servos). The project also provides an OpenRAVE-based wrapper for the kinematic solvers and has support for simulation with the robot operating system (ROS) before actual experiments are performed.

    Fig. 2(a) shows the kinematic reachability of the pair of arms, which is used to estimate the appropriate position of the robot to perform the manipulation tasks. Figs. 2(b) and (c) show the coordinate frames defined for the robot, which are used to transform the coordinates of the detected objects from the camera's coordinates to the arm's coordinate system, to move the end-effectors.

    As mentioned earlier, the UAV is equipped with an Intel RealSense D435 device. This device was chosen due to its capability to obtain reasonably accurate depth information outdoors. The depth information is exploited by the visual algorithm to obtain the 6D pose of the object for the manipulation process. This camera also provides excellent quality at short distances, as it is able to capture depth from as close as 0.2 m, which makes it ideal for hand-eye coordination in manipulation tasks in robotics.
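    As a rough illustration of how aligned color and depth frames can be obtained from such a device, the following sketch uses the pyrealsense2 SDK; the stream resolutions, frame rate, and single-shot capture are illustrative assumptions, not the configuration used on the actual platform.

    import numpy as np
    import pyrealsense2 as rs

    # Minimal sketch: capture one pair of aligned RGB and depth frames from a RealSense D435.
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)   # depth stream (assumed resolution)
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)  # color stream (assumed resolution)
    pipeline.start(config)
    align = rs.align(rs.stream.color)  # warp depth into the color frame so pixels correspond

    try:
        frames = align.process(pipeline.wait_for_frames())
        depth = np.asanyarray(frames.get_depth_frame().get_data())  # uint16 depth image
        color = np.asanyarray(frames.get_color_frame().get_data())  # H x W x 3 color image
    finally:
        pipeline.stop()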

    To conclude, Table 1 summarizes the specifications of the complete aerial system.

    † http://www.dronetools.es.

    ‡ https://pixhawk.org/.

    § https://github.com/bardo91/hecatonquiros/.

    3. Object detection and pose tracking

    Fig. 1. A photograph of the arms mounted on the multirotor.

    This section describes the algorithm used to detect and track the target object. Object detection is a challenging task. However, the use of deep learning techniques has revolutionized the way in which perception is conceived. The proposed methodology combines a convolutional neural network (CNN) with a random sample consensus algorithm to detect the object and compute its pose for the aerial manipulator. The algorithm is summarized in Fig. 3.

    The algorithm is based on two stages. First, an object-detection CNN is applied to produce object candidates. Then, an alignment algorithm is used to compute the exact location of the target object.

    CNNs are widely used nowadays and provide solutions to many difficult problems, as in the case of object detection. If the target object is known, only a labeled database of images is needed to train the network. However, these algorithms are heavy in mathematical operations, albeit highly parallelizable. Thus, GPUs are usually used because they are specifically designed to perform parallel operations efficiently. Some of the most widely used frameworks for this purpose are TensorFlow [43] and Caffe [44]. These frameworks provide an extensive "model zoo" with implementations of the newest networks, thus speeding up the testing and development process. However, it is essential to consider that the final algorithm will be carried out by the onboard computer in the drone, which may have some limitations such as compute unified device architecture (CUDA) compatibility. In Section 6, a comparison between different algorithms (F-RCNN, SSD, and YOLO) on three different devices (a laptop computer with a GTX1070, a Jetson TX1 by Nvidia, and an Intel NUC computer) is presented.

    The CNN is fed with RGB images provided by the camera. As a result, a set of candidate bounding boxes is obtained. The bounding box with the highest probability is employed to crop both the RGB and the depth images. These images are used to compute a local point cloud, combining the information from the pictures and the calibration matrices of the device. This colored point cloud is used in the next stage.
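    A minimal sketch of this cropping and back-projection step is shown below, assuming a standard pinhole camera model; the function name, the bounding-box format, and the intrinsics (fx, fy, cx, cy) are placeholders for the actual calibration of the device, not the authors' implementation.

    import numpy as np

    def cropped_point_cloud(depth, color, bbox, fx, fy, cx, cy, depth_scale=0.001):
        """Back-project the depth pixels inside a detection bounding box into a colored
        point cloud using the pinhole model. `bbox` is (u_min, v_min, u_max, v_max);
        the intrinsics (fx, fy, cx, cy) come from the camera calibration matrices."""
        u_min, v_min, u_max, v_max = bbox
        z = depth[v_min:v_max, u_min:u_max].astype(np.float32) * depth_scale  # depth in meters
        us, vs = np.meshgrid(np.arange(u_min, u_max), np.arange(v_min, v_max))
        valid = z > 0                                   # discard pixels without depth
        x = (us - cx) * z / fx
        y = (vs - cy) * z / fy
        points = np.stack([x[valid], y[valid], z[valid]], axis=-1)
        colors = color[v_min:v_max, u_min:u_max][valid]
        return points, colors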

    In short, the use of the object-detection algorithm reduces the search for the object during the alignment stage, decreasing the searchable area by 50%, and possibly by as much as 80% if the crawler represents a small fragment of the image.

    At this point, a fragment of point cloud that contains the target object is obtained. In order to locate the object, an alignment algorithm is used. This algorithm filters the object from the background. A point cloud model of the crawler is provided to perform an iterative closest point (ICP)-based algorithm [45,46], which outputs the pose of the target. The ICP method uses all the available information to enhance the alignment process. It includes the 3D location of points (Eq. (1)), the normal information obtained from both the model and the scene (Eq. (2)), and the color data (Eq. (3)). This information is used in the form of outlier rejections during the selection process of points before the iterative calculation of the affine transformation.

    Fig. 2. Manipulators' kinematic reachability and computer-aided design models for the online simulations and planning. (a) Kinematic reachability of the aerial manipulator; (b) 6-DoF arms model in OpenRAVE and coordinate frames; (c) 4-DoF arms model in OpenRAVE and coordinate frames.

    Table 1 Platform specifications.

    where D_dist is the geometrical distance between any two points x_1 and x_2 of the point cloud.

    where D_norm is the cosine of the angle between the normals A_1 and A_2; A_1 and A_2 are the normals of any two points x_1 and x_2 of a point cloud, respectively.

    where D_color is the absolute distance between the colors assigned to any two points x_1 and x_2 of a point cloud; R, G, and B are the values of the red, green, and blue channels, respectively.

    It is worth noting that ICP algorithms are sensitive to the initial conditions. For this reason, the pose computed at the previous instant (k − 1), where k denotes the instant of the current cloud, is used to provide a good initial estimate for the algorithm. This, together with the cropping and cleaning of the input cloud, maximizes the probability of the algorithm converging to the correct solution.
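    A rough sketch of the point-pair rejection described by Eqs. (1)-(3) is given below; it assumes the model cloud has already been pre-transformed with the previous pose estimate and that correspondences have been paired row-wise, and the threshold values are illustrative choices rather than the parameters used in the paper.

    import numpy as np

    def reject_pairs(scene_pts, scene_nrm, scene_rgb,
                     model_pts, model_nrm, model_rgb,
                     max_dist=0.02, min_cos=0.8, max_color=60.0):
        """Keep only correspondences whose geometric distance (Eq. (1)), normal
        agreement (Eq. (2)), and color difference (Eq. (3)) fall within thresholds.
        Row i of the scene arrays is paired with row i of the model arrays."""
        d_dist = np.linalg.norm(scene_pts - model_pts, axis=1)
        d_norm = np.sum(scene_nrm * model_nrm, axis=1) / (
            np.linalg.norm(scene_nrm, axis=1) * np.linalg.norm(model_nrm, axis=1) + 1e-9)
        d_color = np.sum(np.abs(scene_rgb.astype(float) - model_rgb.astype(float)), axis=1)
        keep = (d_dist < max_dist) & (d_norm > min_cos) & (d_color < max_color)
        return keep   # boolean mask of the pairs passed to the transformation estimation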

    Finally, the result of the algorithm is filtered by an extended Kalman filter (EKF) [47,48]. The filter helps to reduce the noise in the results and makes it possible to predict the relative speed of the object, which is used to improve the object pose estimation. A simple kinematic model was chosen to estimate the object's state.
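    The sketch below illustrates one possible form of such a filter: a constant-velocity kinematic model on the object position, in which case the EKF reduces to a linear Kalman filter. The state layout and noise covariances are assumptions for illustration only; the paper does not detail them, and the actual filter also handles orientation.

    import numpy as np

    class ConstantVelocityKF:
        """Constant-velocity filter on the object state [x, y, z, vx, vy, vz]."""
        def __init__(self, q=1e-3, r=1e-2):
            self.x = np.zeros(6)                         # state estimate
            self.P = np.eye(6)                           # state covariance
            self.Q = q * np.eye(6)                       # process noise (placeholder value)
            self.R = r * np.eye(3)                       # measurement noise (placeholder value)
            self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we measure position only

        def predict(self, dt):
            F = np.eye(6)
            F[:3, 3:] = dt * np.eye(3)                   # position integrates velocity
            self.x = F @ self.x
            self.P = F @ self.P @ F.T + self.Q

        def update(self, z):
            y = z - self.H @ self.x                      # innovation from the ICP pose
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(6) - K @ self.H) @ self.P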

    4. Grasp planning

    In order to make it possible to grasp the target object, a grasp planning algorithm was developed, composed of the following steps: First, a set of feasible grasps is generated, taking into account the target's shape; next, these grasps are arranged according to their properties; finally, the best available grasp is chosen according to the computed visual information at each instant. During the grasping procedure, if the pose of the object makes the current grasp unreachable, the planner chooses the closest best grasp.

    4.1. Grasp generation

    First, a set of possible contact points is generated on the surface of the target object. Our approach is based on the grasp generation algorithm described in Ref. [49]. This algorithm casts a set of rays from a bounding box toward the target object. Each ray collides once with the surface, and the surface normal is computed. An example is shown in Fig. 4.

    However, this methodology may fail to generate a feasible grasp for objects with complex shapes. Fig. 5 shows an example of the ray-tracing algorithm for a U-shaped object. It can be seen that if the aperture of the gripper is smaller than the size labeled as "a" in Fig. 5(b), then the object is ungraspable for the robot. However, if the algorithm takes into account all the possible internal nooks of the object's surface, then the robot can grasp the object if the aperture of the gripper is larger than the sizes labeled as "b" or "d" in Fig. 5(b).

    Fig. 3. Scheme of the visual algorithm for the object detection and pose estimation of the target object. EKF: extended Kalman filter.

    Fig. 4. Example of the ray-tracing algorithm for contact point calculation. (a) The initial set of rays used to compute the grasp points; (b) the resulting grasp points computed using the ray-tracing algorithm.

    Fig. 5. Example of the use of the grasping algorithm for a non-convex object. (a) An isometric view of the rays crossing the object; (b) the front view. Red lines represent the rays used to trace the possible grasp points.

    In general terms, let p be a single ray passing through an object. The number of collisions of p with the mesh is M_collisions = 2 × M_folds; thus, the number of combinations of opposite contact points that might generate a grasp is

    where M_grasps is the number of possible grasps; M_folds corresponds to the number of folds intersected by the ray; j is the internal loop variable of the sum operator.

    This generation of contact point candidates may be time intensive if the mesh of the object is too complicated. In order to speed up this step of the algorithm, the contact points generated are stored in a binary file so they can be reused in any mission if the object is the same. Once this ray tracing is performed, the untransformed results are stored in a database; if the same object or a different instance of the same object is grasped later, this database is recovered, thus avoiding all the ray-tracing computations.
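    A minimal sketch of this ray-casting and caching step is given below using the trimesh library; casting the rays along a single axis and storing the hits in an .npz file are illustrative simplifications of the scheme in Ref. [49], not the exact implementation used in the project.

    import numpy as np
    import trimesh

    def contact_points(mesh_path, grid_res=0.01, cache="contacts.npz"):
        """Cast a grid of parallel rays through the object's bounding box and record every
        surface hit with its normal; results are cached so later missions on the same
        object skip the ray tracing. Rays are cast along +x for simplicity (assumption)."""
        try:
            data = np.load(cache)
            return data["points"], data["normals"]      # reuse precomputed contact points
        except IOError:
            pass
        mesh = trimesh.load(mesh_path, force="mesh")
        lo, hi = mesh.bounds
        ys = np.arange(lo[1], hi[1], grid_res)
        zs = np.arange(lo[2], hi[2], grid_res)
        origins = np.array([[lo[0] - 1.0, y, z] for y in ys for z in zs])
        directions = np.tile([1.0, 0.0, 0.0], (len(origins), 1))
        hits, _, tri_idx = mesh.ray.intersects_location(origins, directions)
        normals = mesh.face_normals[tri_idx]            # surface normal at each collision
        np.savez(cache, points=hits, normals=normals)
        return hits, normals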

    The set of contact points is then used to generate candidate grasps. The grasps are sampled in an internal simulator according to the pose of the object relative to the aerial manipulator. The algorithm computes the reachability for each grasp, the possible collisions between the manipulator and the target object, and the quality of each grasp. First, each arm samples all the possible candidate solutions to generate a list of feasible grasps for each arm. Next, all the feasible grasps are tested in pairs to obtain a list of dual grasps. The quality of these grasps is analyzed in terms of force closure and largest minimum resisted wrench, as described in Ref. [32]. All these grasps are then stored to be used during the planning process.

    4.2. Planning, servoing, and grasping

    Once the list of grasps is arranged, the best grasp is chosen based on the current relative target object pose. This is the closest and most stable grasp for the dual manipulator. Before grasping, the arms are sent to a pre-catch position; this prevents the manipulators from performing large and dangerous movements, as it is crucial to prevent the manipulators from colliding with the flying components of the robot.

    Consider a multi-link robot manipulator L specified by the scalar variables Θ = {θ_1, ..., θ_i} describing the states of its joints. The end of each link of the body has a certain position S = {s_1, ..., s_i}, some of which are the so-called end-effectors. These positions can be described using the Denavit-Hartenberg (DH) formulation [50] as a chain of transformations between the joints of the manipulator, computed from a set of parameters DH_i = {α_i, a_i, d_i, φ_i}, where α_i is the angle between the z axes of the joints, a_i is the distance between the joints along the x axis of the first joint, d_i is the distance between two joints along the z axis of the first joint, and φ_i is the angle about the z axis of the first joint between the x axes of the joints. Each DH_i element is a matrix that transforms between link (i − 1) and link i; that is, it performs forward kinematics (FK).

    where S(Θ) represents the position of the end effector in terms of the joint space Θ.

    where δ is the partial derivative operator, θ_n is a specific joint of the joint space, s_m is the position of the m-th link, and m and n are the index iterators over the variables that define the complete Jacobian matrix J(Θ).
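    For illustration, a minimal forward-kinematics sketch built from DH parameters is given below; it assumes the standard DH convention and treats φ_i as a fixed offset to which the joint variable is added, which is an assumption about the notation rather than the authors' exact formulation.

    import numpy as np

    def dh_transform(alpha, a, d, phi):
        """Homogeneous transform between consecutive links for one set of
        Denavit-Hartenberg parameters (standard convention assumed)."""
        ca, sa = np.cos(alpha), np.sin(alpha)
        cp, sp = np.cos(phi), np.sin(phi)
        return np.array([
            [cp, -sp * ca,  sp * sa, a * cp],
            [sp,  cp * ca, -cp * sa, a * sp],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0]])

    def forward_kinematics(dh_params, joint_angles):
        """Chain the per-link transforms; the joint variable is added to each phi.
        Returns the pose of every link, the last one being the end-effector S(Theta)."""
        T = np.eye(4)
        poses = []
        for (alpha, a, d, phi), theta in zip(dh_params, joint_angles):
            T = T @ dh_transform(alpha, a, d, phi + theta)
            poses.append(T.copy())
        return poses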

    The final purpose is to guide the arms toward the target grasps using the visual information from the sensors. In this work, a Jacobian damped least-squares (DLS) gradient descent method [53,54] is applied to ensure the convergence of the arms toward the target position. This method has been shown to be more robust than inverse and pseudoinverse methods [54] near instabilities and singularities of the Jacobian matrices. Let X_K and Q_K be the target position and orientation for the end-effector, given by the grasp planning and updated by the vision algorithm. The aim is to update the manipulator's pose by updating its joints.

    The DLS proceeds as follows: Eqs. (7) and (8) are computed based on the current state of the robot and the target pose of the end-effector; next, Eq. (9) is used to compute a vector that contains an increment in the joint values of the robot that allows the end-effector to move toward the target pose.

    where X_target and Q_target are the target positions for the end-effectors; X_K and Q_K are the target position and orientation of the end-effector at instant K.

    where J_c, J_X, and J_Q are the corresponding position and orientation Jacobians at the current state.

    where Θ_K represents the current joint angles of the arm; λ is the damping coefficient used to reduce the issues related to the inversion of the matrix. λ must be large enough to ensure that the algorithm behaves adequately close to singularities, but not so large that it compromises the convergence rate. I is an identity matrix with the size of J_c. These terms are calculated based on the position of the target object, which is computed in the vision module.
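    The sketch below shows one DLS iteration in the common form ΔΘ = J^T (J J^T + λ^2 I)^(-1) e, which should correspond to Eq. (9); the function names, the generic callables for FK and the Jacobian, and the restriction to a position-only error are illustrative assumptions.

    import numpy as np

    def dls_step(J, error, lam=0.1):
        """One damped-least-squares update: the joint increment that moves the
        end-effector toward the target, with damping lam to stay well-conditioned
        near singularities of the Jacobian."""
        JJt = J @ J.T
        return J.T @ np.linalg.solve(JJt + (lam ** 2) * np.eye(JJt.shape[0]), error)

    def servo_to_target(fk, jacobian, theta, x_target, tol=1e-3, max_iters=200, lam=0.1):
        """Iterate FK + DLS until the end-effector position reaches x_target.
        `fk` and `jacobian` are user-supplied callables (e.g., built from the DH
        chain above); only the position error is handled in this sketch."""
        for _ in range(max_iters):
            err = x_target - fk(theta)
            if np.linalg.norm(err) < tol:
                break
            theta = theta + dls_step(jacobian(theta), err, lam)
        return theta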

    Furthermore, working with a pair of manipulators requires additional constraints. The principal constraint is the need to avoid collisions of the arms with themselves and with other objects. However, checking the collisions of a non-convex object is not trivial. The use of a convex hull is advantageous in some situations, as it simplifies the collision checks. However, this simplification can be too coarse in some applications. Modern approaches split the objects into a set of convex hulls [55,56]. This results in a more accurate solution while preserving the advantages and mathematical simplifications of working with convex hulls. In the present work, the implementation given by OpenRAVE [57], which integrates these fast methodologies, is used.

    Finally, it is possible that, due to the oscillations of the aerial robot or because the target object is mobile, the best grasp becomes unreachable. To overcome this issue, the algorithm continuously measures, in a separate thread, the quality of the current grasp against the database of grasps. If the grasp becomes unreachable or infeasible, the algorithm switches to the new best option.
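    A minimal sketch of such a monitoring loop is shown below; the callables (pose source, reachability check, grasp setter), the ordering of the grasp database, and the update rate are placeholders standing in for the planner's actual interfaces.

    import threading
    import time

    def grasp_monitor(get_object_pose, is_reachable, grasp_db, set_current_grasp,
                      stop_event, period=0.1):
        """Background loop that re-checks the selected grasp against the latest object
        pose and switches to the best reachable alternative in the ranked database."""
        current = grasp_db[0]                      # grasp_db is assumed sorted by quality
        while not stop_event.is_set():
            pose = get_object_pose()
            if not is_reachable(current, pose):
                for grasp in grasp_db:
                    if is_reachable(grasp, pose):
                        current = grasp
                        set_current_grasp(grasp)   # notify the servoing module
                        break
            time.sleep(period)

    # Typical use: threading.Thread(target=grasp_monitor, args=(...), daemon=True).start()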

    5. State machine and complete mission

    The work presented in this paper covers the process starting from the take-off of the UAV from the platform until its landing after grasping the target object. Such a complete mission requires a sophisticated system design. The system is composed of five modules, as shown in Figs. 6 and 7. Each module is responsible for a single task in the UAV. In order to simplify the development process, communication between modules is carried out by ROS [58]. Each module is initialized in an idle state and is triggered by a global start signal.

    Fig. 6(a) shows the global state machine, which is responsible for the global behavior of the mission. It is composed of three states: approaching, grasping, and homing. Approaching triggers the take_off signal and waits until the UAV has taken off. After that, the global state machine sends the approaching signal, which carries an initial waypoint close to the target object, to the UAV's controller. Once the UAV is in the target position, it holds until the target object is found (found signal). Then the state machine switches to the grasping state, in which the visual servoing comes into action to grasp the target. Once the object is grasped, the state machine switches to the homing state, which ends when the UAV lands.
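    For illustration, a minimal sketch of this mission-level state machine is given below; the state names follow Fig. 6(a), while the signal strings and the `send` callable are placeholders standing in for the ROS topics used in the real system.

    from enum import Enum, auto

    class GlobalState(Enum):
        APPROACHING = auto()
        GRASPING = auto()
        HOMING = auto()

    def global_step(state, signals, send):
        """One tick of the mission-level state machine. `signals` is the set of
        events received since the last tick; `send` publishes commands."""
        if state is GlobalState.APPROACHING:
            if "taken_off" not in signals:
                send("take_off")                  # request take-off and wait
            elif "found" in signals:
                return GlobalState.GRASPING       # vision module located the target
            else:
                send("approaching")               # waypoint close to the target object
        elif state is GlobalState.GRASPING:
            if "grasped" in signals:
                return GlobalState.HOMING         # object secured, return home
        elif state is GlobalState.HOMING:
            if "landed" in signals:
                send("stop")
        return state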

    Fig. 6(b) shows the state machine of the UAV's controller, which consists of six states. It starts with idle. Once the UAV takes off, it keeps hovering until the next signal. If the global state machine sends an approaching signal, the UAV control state machine switches to the waypoint state and moves the UAV to a target position. If the object is found, it then switches to the tracking state. The tracking module is responsible for checking if the target object is in a reasonable position to be grasped. If so, it sends the near signal. This state finishes when the object is either lost or grasped.

    Fig. 7(a) shows the vision state machine, which has the simplest structure. This state machine continuously runs the vision algorithm in order to detect the target object and compute its pose. When the object is found and its pose is computed, the state machine publishes that information together with the found signal. If the object is lost, then the lost signal is sent to cause the appropriate effect.

    Fig. 7(b) shows the state machine of the grasp planner module, which is responsible for computing the appropriate grasps of the object. It also runs the servoing algorithm to compute the desired joints of the arms according to the target grasp. It starts by estimating the arranged list of grasps. Later, if the object is close, within the range of the arms, the state machine switches to the servoing state, in which the visual servoing publishes the target joints for the arms. As mentioned in Section 4, the arms are not sent directly to the grasping pose. Instead, they are sent to a position close to it, called the "approaching pose." Once the arms are in the approaching position, the state machine toggles to the grasping state, in which the arms move quickly toward the object and grasp it.

    Finally, Fig. 7(c) shows the arms controller state machine. The manipulator controller is responsible for publishing information about the manipulators and providing the interface for moving them. It also limits the speed and trajectories of the arms to ensure that they are safe. An emergency stop (E_S) state has been placed to stop the arms externally for the sake of security. Emergency stop also causes the UAV to keep hovering to prevent it from crashing.

    Fig. 6. First two submodules of the system. (a) Global state machine; (b) UAV control state machine.

    6. Experiments

    This section begins by describing the experiments that were performed to evaluate and validate the system and algorithms. Next, a set of experiments is described and discussed to show the performance of the system in carrying out aerial manipulation tasks. As mentioned earlier, this work focuses on grasping a crawler robot that was developed during the AEROARMS project and is used for pipe inspection in factories. However, the algorithm can be used equally well for any object, as neither the vision algorithm nor the planning process has any particular restriction to this particular object. A summary video of the experiments is available online.††

    †† https://youtu.be/nXYlzqwM8kA.

    6.1. System evaluation

    As described in Section 3, an object-detection CNN is used in the first stage of the vision algorithm to detect the object. This provides a good initial solution for the alignment algorithm, which is more computationally expensive.

    Three CNNs were tested on three different devices to choose the best option that fits the system requirements for the manipulation task and the payload limitations of the aerial manipulator. The specific CNNs tested were F-RCNN, SSD300, and YOLO (tiny YOLO v2). These were tested on the following three devices: a laptop with a GTX1070, a Jetson TX1, and an Intel NUC with an Iris GPU. The first two devices have CUDA capabilities, as they have Nvidia GPUs. However, the third device cannot use common frameworks due to a lack of CUDA compatibility. Table 2 summarizes the average computational times of the different detection algorithms on the different devices. As the Intel NUC is not CUDA compatible, only YOLO was evaluated on it. To be specific, an OpenCL implementation of YOLO has been used.‡‡

    It can be clearly seen that the performance on the laptop exceeds that of the other devices. However, due to the strict payload limitations, only the other two devices can be considered when operating with UAVs. At first glance, YOLO runs a bit faster on the Jetson TX1 device. However, the central processing unit (CPU) capabilities of the Intel NUC computer strongly exceed those of the Jetson. As the whole system requires many processes from the rest of the algorithms, the Intel NUC platform was selected.

    The main benefit of the region proposal algorithm (the CNN) is that it drastically reduces the size of the cloud to be used in the alignment process (by between 50% and 80%). This is a double improvement: it decreases the computational time of this step, while also allowing the algorithm to converge to a good solution; as gradient descent algorithms are usually sensitive to outliers and initial conditions, this improves convergence to the minimum.

    Once the object is detected and its pose is estimated, the algorithm performs the grasp planning process described in Section 4. This process can be split into three stages. The first stage is the computation of contact points; this process can be precomputed, as it only depends on the resolution chosen for the algorithm and the model of the object. The second stage is the computation of feasible grasps for each arm separately. Finally, the grasps are taken in pairs to arrange them by their quality metrics. Table 3 summarizes the average computation times for each stage for various objects. The second column evaluates the time spent in the generation of the initial set of contact points. The grid resolution is the distance between the initial set of rays used during the ray-tracing evaluation, as described in Section 4.1. The third column shows the time spent evaluating the feasibility of the grasps generated in the previous step. The fourth column shows the time spent evaluating the quality of the grasps.

    In order to prevent the arms from colliding with any part of the aerial platform, a safe volume is left above the shoulders of the arms. This volume corresponds to the flying parts of the robot, which must never be touched. For safety reasons, this volume is taken into account in the planning process and during the servoing algorithm.

    Fig. 8 shows a sequence of the online simulations run in OpenRAVE for the grasp planning process. Afterwards, the set of feasible grasps is stored for later use during the re-planning stage.

    ‡‡ https://github.com/ganyc717/Darknet-On-OpenCL.

    Table 2 Computational time in seconds of the different algorithms on the tested devices.

    Table 3 Computational times of the grasp planning process for different numbers of mesh faces.

    Fig. 8. Different samples of the grasp planning process.

    6.2. Simulated and test-bench experiments

    In order to verify the performance of the arms and their control before the actual experiments, a set of simulations was carried out. The pose of the objects to be grasped was given with a certain noise. These objects were then animated to stress the arm servoing algorithm. Three objects with different shapes were used. However, any other object can be chosen at this stage, as the algorithm is capable of computing a grasp for any mesh.

    Fig. 9 shows a sequence of snapshots of the simulated experiments to demonstrate the reachability of the arms in different situations. In these simulation experiments, the algorithm proceeds as in the real experiments, but the pose of the object is provided by the simulator, with a certain level of noise. First, the algorithm computes a set of feasible grasps, as described in Section 4.1. Next, according to the position of the object, it chooses the best grasp and guides the arm toward the object, as described in Section 4.2. Once the arms are close to the grasping poses, the system sends a signal to close the gripper.

    The purpose of these experiments was to demonstrate that, regardless of the vision, the algorithm is able to handle different objects and shapes. It generates a set of feasible grasps and dynamically chooses the best option based on the estimation of the object pose.

    Before the real flights, the complete system—including the vision—was tested on a test-bench (Fig. 10), where the aerial manipulator was placed in a fixed structure and the target object was moved.

    6.3. Flying experiments

    This section describes the experiments that were carried out to demonstrate the performance of the system in real environments. For these experiments, a mock-up of a crawler robot was 3D printed. Two different setups were prepared. In the first setup, which was indoors, the UAV performed a completely autonomous operation, thanks to the position obtained by a MOCAP system. The second setup was intended to test the system outdoors. In both experiments, the 4-DoF gripper was chosen, as it is stronger than the 6-DoF version.

    Fig. 9. Samples of simulation tests with different objects at different times.

    Fig. 10. Testing the complete system at the indoor test-bench. (a) The camera view and the 2D detection of the crawler mock-up; (b) the 3D virtualization of the robot, including the estimated position of the crawler and the target grasps; (c) a third view of the experiment.

    The first key module is the vision system, which computes the pose of the object to be grasped for the grasp planning, feeds the arms module for the servoing, and provides a reference position for the UAV. Fig. 11 shows the results of the algorithm. Fig. 11(a) shows the result of the object detection in the RGB image using the CNN. Fig. 11(b) shows the fragment of the point cloud computed using the depth image and the estimation of the object's pose from the alignment algorithm, filtered by the Kalman filter. The pose estimation is highlighted with an overlaid red point cloud and a coordinate system, as shown in the figure.

    Fig. 12 shows the estimation of the object's pose from the vision system against the ground truth obtained with a MOCAP system. The estimation shows an average root mean square error (RMSE) smaller than 0.02 m. This accuracy is fundamental to ensure that the robot can perform the grasp tasks. Larger errors in the estimation of the localization would be problematic, causing the robot to miss the grasp and even collide with the environment.

    Fig. 13 shows an additional experiment for testing the visual detection module. In this experiment, the mock-up was moved along a pipe. The position of both the UAV and the crawler was measured by the vision system and a MOCAP system.

    The accuracy of the system was tested first indoors, in a controlled environment with a MOCAP system. The mission started with the UAV taking off; the UAV then executed the mission pipeline explained in Section 5. Fig. 14 provides a snapshot of the experiment.

    Fig. 15 shows the values of the joints during two visual servoing tests. The dashed line corresponds to the target joints and the solid line corresponds to the actual joint values. The figure shows how the system switches to a new grasp when the previous grasp becomes unreachable.

    Fig. 11. Object detection and pose estimation results. (a) A bounding box resulting from YOLO; (b) the fragment of cloud and the pose estimation shown as an overlaid red-dot model and a coordinate frame.

    Fig. 12. Estimated pose of the crawler versus the ground truth in the UAV’s coordinate system. (a) Estimated distance from the camera vs. ground truth in the X axis; (b) estimated distance from the camera vs. ground truth in the Y axis;(c) estimated distance from the camera vs. ground truth in the Z axis.

    Finally, Fig. 16 provides a photograph of the outdoor experimental test-bench. In this experiment, the UAV flew while tied to a structure with a safety rope. The photograph shows the point of view of the robot while grasping the mock-up of the crawler. This experiment demonstrated that the perception algorithm works outdoors using the RealSense depth camera, regardless of the sunlight conditions.

    7. Conclusions

    This paper presents a complete system for performing manipulation operations with an aerial platform. Integrating two manipulators, instead of just one, makes it possible to grasp larger and more complex objects. In addition, having two manipulators helps the UAV to cancel fluctuations due to external disturbances and even choose grasps that are more stable because they enclose the center of mass. Both qualitative and quantitative experimental data have been provided in order to demonstrate that the system is capable of performing grasping operations.

    The average RMSE of the vision system is lower than 0.02 m. The use of object-detection networks has been shown to be highly beneficial for the speed of the pose estimation. The main limitations come from the available onboard devices. Most state-of-the-art CNNs have been tested and designed to be used on powerful computers, which makes it challenging to integrate them into aerial platforms. However, after the analysis in Section 6, the chosen algorithm provided sufficiently good results for the mission. The next step will be to include the estimation of the object pose in the neural network itself.

    Fig. 13. Trajectory of the UAV and estimated position of the crawler measured by the vision system in global coordinates (solid lines) and ground truth (dashed lines). (a) 3D trajectory of the UAV and the crawler during a moving experiment; (b) estimated position vs. real position of the crawler using the visual estimation in each of the axes.

    Fig. 14. Indoor autonomous mission using the MOCAP system.

    It has been demonstrated in several environments that the system is able to operate and grasp the target object. In addition, these results have been supported with simulated objects in order to prove that the grasp planning algorithm is able to plan manipulation tasks with other shapes. The vision system has been tested under low-light conditions as well as outdoors under direct sunlight. The algorithm ran correctly under both illumination conditions.

    Acknowledgement

    This work was carried out in the framework of the AEROARMS (SI-1439/2015) EU-funded project and the national project ARMEXTENDED (DPI2017-89790-R).

    Fig. 15. Joint values during the experiments (in radians). At first, the arms follow the grasp target. After a certain point, the object rotates, so the system switches to another feasible grasp.

    Fig. 16. Outdoor autonomous test with a safety rope.

    Compliance with ethics guidelines

    Pablo Ramon-Soria, Begoña C. Arrue, and Anibal Ollero declare that they have no conflict of interest or financial conflicts to disclose.
