
    An improved SLAM based on RK-VIF: Vision and inertial information fusion via Runge-Kutta method

Defence Technology, 2023, Issue 3

Ji-shan Cui, Fang-rui Zhang, Dong-zhu Feng *, Cong Li, Fei Li, Qi-hen Tian

    a School of Aerospace Science and Technology, Xidian University, Xi'an, 710126, China

b China Academy of Space Technology, Xi'an Branch, Xi'an, 710100, China

c China Academy of Launch Vehicle Technology, Beijing, 100076, China

    d Aerospace Information Research Institute, Chinese Academy of Sciences, Jinan, 250100, China

Keywords: SLAM; Visual-inertial positioning; Sensor fusion; Unmanned system; Runge-Kutta

ABSTRACT Simultaneous Localization and Mapping (SLAM) is the foundation of autonomous navigation for unmanned systems. Existing SLAM solutions are mainly divided into visual SLAM (vSLAM), which relies on cameras, and lidar SLAM, which relies on lidar. Because pure visual SLAM suffers from shortcomings such as low positioning accuracy, this paper proposes a visual-inertial information fusion SLAM based on pre-integration improved by the Runge-Kutta method. First, the Inertial Measurement Unit (IMU) information between two adjacent keyframes is pre-integrated at the front-end to provide IMU constraints for visual-inertial information fusion. In particular, to improve the pre-integration accuracy, the Runge-Kutta algorithm is used instead of Euler integration to calculate the pre-integration value at the next moment. Then, the IMU pre-integration value is used as the initial value of the system state at the current frame time. The visual reprojection error and the IMU pre-integration error are combined to optimize state variables such as speed and pose, and to recover the three-dimensional coordinates of map points. Finally, a sliding window is used to optimize the map point coordinates and state variables. The experiments comprise a dataset experiment and a complex indoor-environment experiment. The results show that, compared with pure visual SLAM and existing visual-inertial fusion SLAM methods, our method achieves higher positioning accuracy.

    1. Introduction

Recently, autonomous navigation based on unmanned systems has gradually been adopted in various fields. SLAM is the foundation and core of autonomous navigation [1]. SLAM refers to an unmanned system building a map of an unknown environment and estimating its own trajectory while moving through it [2]. Vision sensors rely on the captured image sequence to estimate the pose and create an environmental map. Still, when the movement is too fast or environmental features are missing, their accuracy and robustness decrease sharply, sometimes even leading to positioning failure [3,4].

With the continuous advancement of hardware technology, low-cost IMUs have become ubiquitous [5,6]. IMUs and vision sensors are strongly complementary in performance. Therefore, the Visual-Inertial Navigation System (VINS) has gradually become a hot topic in the current SLAM field [7].

The main goal of VINS is to better couple IMU measurements and image information. According to the coupling scheme, VINS can be divided into loosely coupled and tightly coupled systems. Loose coupling uses the camera and the IMU separately to estimate the motion of the system and then combines the estimation results. In contrast, the tightly coupled method establishes a unified objective function over the camera and IMU state information and then performs the system motion estimation.

According to the data processing method, VINS can be divided into filtering-based and optimization-based methods. Filtering-based methods generally adopt the EKF framework, which is divided into two stages: prediction and update. The prediction stage uses IMU information to predict the system pose; in the update stage, the system state is corrected and optimized according to the image information. The optimization-based method considers the data of a past period and then iteratively optimizes the system state [8,9]. Because the filtering-based method only considers data at the current moment and only uses image information when updating the state, its accuracy is limited; therefore, current mainstream VINS methods choose the optimization-based approach.

Limited by processor performance in the early days, VINS research was mainly based on filtering. Mourikis et al. proposed a filter-based tightly coupled scheme, MSCKF, which was the first relatively mature scheme in the field of VINS [10]. MSCKF uses a feature point to constrain multiple camera motion states at different moments and uses a multi-state-constrained extended Kalman filter as the back-end. At the same time, it adopts a sliding window strategy to ensure the real-time performance of the system. However, since MSCKF only uses a monocular camera, the accuracy of its visual-inertial odometry (VIO) is insufficient compared with a stereo camera. The SLAM problem for autonomous underwater vehicles (AUVs) was addressed in Ref. [11]. Aiming at the nonlinear models and non-Gaussian noise in AUV motion, the authors propose an improved variance-reduction FastSLAM with simulated annealing to solve the problems of particle degradation, particle depletion, and particle loss in traditional FastSLAM. Bloesch et al. proposed a robust visual-inertial odometry (ROVIO), which also uses a filtering-based tightly coupled solution. The ROVIO front-end adopts the direct method to track the camera movement by minimizing the photometric error of image patches, and the extended Kalman filter at the back-end tracks the image patch features and three-dimensional landmark points at the same time [12]. However, ROVIO has no loop detection module; it is only an odometry, and a sizeable cumulative error appears over the long term.

In recent years, with the continuous improvement of processor performance, optimization-based VINS has gradually become the current hot spot. Leutenegger et al. proposed keyframe-based visual-inertial SLAM using nonlinear optimization (OKVIS) [13]. Its front-end uses multi-scale Harris corner detection and BRISK descriptors. OKVIS pre-integrates the IMU information between keyframes in the back-end and then estimates the system motion through vision and IMU information. However, the scheme has no loop detection or pose graph optimization, which inevitably causes error accumulation. Raul et al. proposed VIORB-SLAM [14,15], which fuses IMU information into ORB-SLAM and also adopts an optimization-based tightly coupled scheme. Its front-end uses ORB features, pre-integrates the IMU data, and then uses the g2o library to fuse the two sources of information to estimate the system motion. The accuracy of VIORB-SLAM is much better than that of monocular ORB-SLAM. Still, the code is not open-source, and it only supports monocular cameras, which require complex initialization before use [16].

Shaojie Shen et al. proposed the robust and versatile monocular visual-inertial state estimator (VINS-Mono), which again uses an optimization-based tightly coupled scheme. However, VINS-Mono only supports monocular cameras. Its front-end uses the KLT optical flow method to track Harris corners. It also uses the pre-integration idea to process IMU data and then fuses the two sensors' information through iterative optimization [17]. In addition, VINS-Mono can be initialized in an unknown environment and has a loop detection module [18,19], but its accuracy needs further improvement. Later, Shaojie Shen's team released VINS-Fusion based on VINS-Mono. VINS-Fusion retains the optimization-based tightly coupled fusion scheme and supports multiple sensor combinations (pure binocular, binocular + IMU, and binocular + IMU + GPS). Compared with VINS-Mono, it can be initialized statically, and the scale information does not entirely rely on the IMU, which makes it more robust. However, because of the mismatch problem of stereo vision, its accuracy is not even as good as VINS-Mono in some situations.

Although existing visual-inertial information fusion SLAM systems keep emerging one after another, visual and IMU information still cannot be integrated effectively. Resolving the frequency inconsistency between the camera and the IMU and fully integrating visual and IMU information remains an urgent problem for VINS. A suitable SLAM system should fully integrate visual and IMU information to significantly improve the positioning accuracy and robustness of the system in extreme situations such as a lack of environmental information or rapid carrier movement. Therefore, this paper proposes a visual-inertial information fusion SLAM based on pre-integration improved by the Runge-Kutta method, aiming to improve the accuracy and robustness of the system.

The sections of our paper are arranged as follows: the second section introduces the visual-inertial information fusion framework. The third section introduces the SLAM method of visual-inertial information fusion based on pre-integration improved with the fourth-order Runge-Kutta method. The fourth section introduces the experimental part of our paper, including dataset experiments and experiments in a complex indoor environment. Finally, the fifth section summarizes the paper and gives conclusions.

    2. RK-VIF: system overview

Pure visual SLAM can estimate the pose of an unmanned system and create a feature map in an unknown environment. Visual information has no data drift, and the positioning accuracy is higher in areas with rich image features; however, if image features are scarce or the movement is too fast, it is difficult for pure visual SLAM to work robustly, and positioning may even fail. The IMU does not depend on environmental characteristics, and its measurement frequency is higher than that of the camera. IMU relative displacement data are highly accurate over a short time, which can make up for the visual blurring caused by fast motion; however, relying on the IMU for too long causes errors to accumulate over time and produces extremely serious drift. Conversely, visual SLAM can constrain the long-term divergence of the IMU. Thus, the IMU and the camera are highly complementary in performance. Therefore, this paper fuses IMU information into visual SLAM and studies visual-inertial information fusion SLAM to improve the system's positioning accuracy. As shown in Fig. 1, the visual-inertial information fusion SLAM framework of this paper mainly includes four modules: measurement data processing, system initialization, joint state estimation with sliding window local optimization, and loop detection.

    3. Measurement process

    3.1. Feature extraction and matching

In visual SLAM based on the feature point method, feature point extraction and matching are required first. The purpose is to determine the projections of the same spatial point in different images according to the extracted features. Feature extraction generally includes two parts: feature point detection and description. First, the critical information in the image is detected, and then it is represented by feature descriptors. Then, by comparing the distances between feature descriptors, the correspondence between feature points in different images can be established. Usually, there are many mismatched points in the matched images, which affect the accuracy of the subsequent pose estimation. Therefore, it is necessary to use a mismatch elimination algorithm to remove the abnormal points.

Current visual SLAM methods usually use the Random Sample Consensus (RANSAC) algorithm to eliminate mismatched points. Although RANSAC can effectively eliminate mismatched points, the randomness of the algorithm causes the number of iterations to be unstable, which in turn leads to problems such as low elimination efficiency. Therefore, this paper introduces the Progressive Sample Consensus (PROSAC) algorithm [20] to eliminate mismatched points. PROSAC resolves the unstable iteration count and poor robustness of RANSAC, effectively improves the algorithm's efficiency, and helps guarantee the real-time performance of SLAM.

PROSAC is based on the assumption that "points with good quality are more likely to be inliers" [21]. Therefore, when selecting sample points, points with good quality are chosen first. Specifically, PROSAC first uses certain criteria to evaluate the quality of all sample points and then sorts them by the evaluation results; in each iteration, only high-quality sample points are selected to estimate the matching model. Through this sorting strategy, the randomness of sampling is avoided, the probability of obtaining the correct model is improved, and the number of iterations is reduced.

The steps of the PROSAC algorithm are shown in Fig. 2; a simplified code sketch follows.
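To make the sampling strategy concrete, here is a minimal Python sketch of the progressive-sampling idea (illustrative only, not the paper's implementation): matches are ranked by descriptor distance, and minimal samples are drawn from a pool of top-ranked matches that grows toward the full set. The `fit_model` and `score_inliers` callbacks are placeholders for, e.g., homography estimation and reprojection-threshold inlier counting; `m.distance` assumes OpenCV-style `DMatch` objects.

```python
import random

def prosac(matches, fit_model, score_inliers, sample_size=4,
           max_iters=500, inlier_thresh=3.0, min_inliers=15):
    """Progressive sampling: draw minimal samples from a growing pool
    of the best-quality matches instead of uniformly from all matches."""
    # Rank by quality: for descriptor matches, smaller distance is better.
    ordered = sorted(matches, key=lambda m: m.distance)

    best_model, best_inliers = None, []
    pool = sample_size                      # smallest possible pool
    for _ in range(max_iters):
        pool = min(len(ordered), pool + 1)  # progressively enlarge the pool
        sample = random.sample(ordered[:pool], sample_size)
        model = fit_model(sample)           # e.g. 4-point homography
        if model is None:
            continue
        inliers = score_inliers(model, ordered, inlier_thresh)
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
            if len(best_inliers) >= min_inliers:
                break                       # early exit on good consensus
    return best_model, best_inliers
```

Recent OpenCV releases also expose a PROSAC-style sampler directly through the `cv2.USAC_PROSAC` flag of `cv2.findHomography` and `cv2.findFundamentalMat`.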

    3.2. IMU pre-integration based on Runge-Kutta

In a VINS, the camera is an external sensor that collects environmental information and determines the system's pose during movement; the IMU is an internal sensor that provides the system's movement information and recovers the gravity direction for visual SLAM. The IMU measurement values are affected by Gaussian noise and zero bias. To simplify the model, it is assumed that each axis of the accelerometer and gyroscope is measured independently, and the influence of the earth's rotation is not considered. The measurement process can be modeled as follows [22,23]:
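In the standard form adopted in the pre-integration literature [22,23], the gyroscope and accelerometer readings are the true angular velocity and specific force corrupted by slowly varying biases and white Gaussian noise (a reconstruction consistent with the symbols used below):

$$
\tilde{\boldsymbol{\omega}}_{WB|B}(t) = \boldsymbol{\omega}_{WB|B}(t) + \mathbf{b}_g(t) + \boldsymbol{\eta}_g(t), \qquad
\tilde{\mathbf{a}}_B(t) = \mathbf{R}_{WB}^{T}(t)\left(\mathbf{a}_{B|W}(t) - \mathbf{g}_W\right) + \mathbf{b}_a(t) + \boldsymbol{\eta}_a(t)
\tag{1}
$$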

According to the rigid-body kinematics model, the motion state of the body frame in the world coordinate system satisfies the following differential equations:
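In the standard form, with $(\cdot)^{\wedge}$ denoting the mapping from a vector to its anti-symmetric matrix (the convention defined in Section 3.5):

$$
\dot{\mathbf{R}}_{WB} = \mathbf{R}_{WB}\,\boldsymbol{\omega}_{WB|B}^{\wedge}, \qquad
\dot{\mathbf{v}}_{B|W} = \mathbf{a}_{B|W}, \qquad
\dot{\mathbf{p}}_{B|W} = \mathbf{v}_{B|W}
\tag{2}
$$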

In the formulas, $\mathbf{a}_{B|W}$ represents the acceleration of the body frame expressed in the world coordinate system, and $\mathbf{v}_{B|W}$ represents the velocity of the body frame expressed in the world coordinate system.

    Fig.1. Visual-inertial information fusion framework.

Suppose the time difference between two IMU measurements is $\Delta t$. Assuming that the acceleration $\mathbf{a}_{B|W}$ and the angular velocity $\boldsymbol{\omega}_{WB|B}$ remain constant within $[t, t+\Delta t]$, substituting the IMU measurement model of formula (1) into formula (2):
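Under that zero-order hold, the discrete update takes the standard form (a reconstruction; the noise terms become the discrete $\boldsymbol{\eta}_{gd}$ and $\boldsymbol{\eta}_{ad}$ discussed next):

$$
\begin{aligned}
\mathbf{R}_{WB}(t+\Delta t) &= \mathbf{R}_{WB}(t)\,\mathrm{Exp}\!\left(\left(\tilde{\boldsymbol{\omega}}(t) - \mathbf{b}_g(t) - \boldsymbol{\eta}_{gd}(t)\right)\Delta t\right) \\
\mathbf{v}_{B|W}(t+\Delta t) &= \mathbf{v}_{B|W}(t) + \mathbf{g}_W\,\Delta t + \mathbf{R}_{WB}(t)\left(\tilde{\mathbf{a}}(t) - \mathbf{b}_a(t) - \boldsymbol{\eta}_{ad}(t)\right)\Delta t \\
\mathbf{p}_{B|W}(t+\Delta t) &= \mathbf{p}_{B|W}(t) + \mathbf{v}_{B|W}(t)\,\Delta t + \tfrac{1}{2}\mathbf{g}_W\,\Delta t^2 + \tfrac{1}{2}\mathbf{R}_{WB}(t)\left(\tilde{\mathbf{a}}(t) - \mathbf{b}_a(t) - \boldsymbol{\eta}_{ad}(t)\right)\Delta t^2
\end{aligned}
\tag{3}
$$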

In the formula, $\boldsymbol{\eta}_{gd}(t)$ and $\boldsymbol{\eta}_{ad}(t)$ are the discrete values of the random noises $\boldsymbol{\eta}_g(t)$ and $\boldsymbol{\eta}_a(t)$, respectively.

Traditional pre-integration uses Euler integration to discretize the IMU motion formula and then derives the IMU pre-integration formula. Our paper uses the Runge-Kutta method to discretize the IMU motion formula to improve the pre-integration accuracy. The basic idea of the Runge-Kutta method is to evaluate the derivative at multiple points in the integration interval, take a weighted average of these derivatives to obtain an average derivative, and then use the average derivative to estimate the state at the end of the interval. The higher the order of the Runge-Kutta algorithm, the smaller the truncation error and the higher the accuracy. In practical applications, the fourth-order Runge-Kutta algorithm already meets most accuracy requirements, so, considering the processor's performance, this paper uses the fourth-order Runge-Kutta algorithm.

Given a differential equation $\dot{y} = f(t, y)$, Euler integration uses the derivative $f(t_n, y_n)$ at the point $t_n$ as the derivative over the whole interval $[t_n, t_{n+1}]$ and updates from the state $y_n$ at $t_n$:

$$ y_{n+1} = y_n + f(t_n, y_n)\,\Delta t $$

In the formula, $\Delta t = t_{n+1} - t_n$. Euler integration uses only the derivative information at the point $t_n$, so its estimation error is larger.

The Runge-Kutta algorithm is an improvement on Euler integration. It selects four evaluation points in the interval $[t_n, t_{n+1}]$, iteratively calculates the derivatives $k_1 \sim k_4$ at these points, and then takes a weighted average of the derivatives to compute the next state $y_{n+1}$. The fourth-order Runge-Kutta update is

$$ y_{n+1} = y_n + \frac{\Delta t}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right) $$

where

$$ k_1 = f(t_n, y_n), \quad k_2 = f\!\left(t_n + \tfrac{\Delta t}{2},\, y_n + k_1\tfrac{\Delta t}{2}\right), \quad k_3 = f\!\left(t_n + \tfrac{\Delta t}{2},\, y_n + k_2\tfrac{\Delta t}{2}\right), \quad k_4 = f\!\left(t_n + \Delta t,\, y_n + k_3\Delta t\right) $$

In the formula, $k_1$ is the derivative at the start of the interval, $(t_n, y_n)$; $k_2$ is the derivative at the midpoint of the interval estimated by the Euler method using $k_1$ as the average derivative; $k_3$ is the derivative at the midpoint estimated using $k_2$ as the average derivative; and $k_4$ is the derivative at the end point $(t_n + \Delta t,\, y_n + k_3\Delta t)$, estimated using $k_3$ as the average derivative. From formula (7) it can be seen that the algorithm combines the derivative information of the four points, and the derivatives at the two midpoints receive higher weight.
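As a concrete illustration (not code from the paper), the sketch below integrates the test problem y' = y with both schemes; with Δt = 0.1 over [0, 1], the Euler error is about 1.2e-1 while the RK4 error is about 2.1e-6.

```python
import math

def euler_step(f, t, y, dt):
    # One Euler step: use the slope at the start of the interval only.
    return y + f(t, y) * dt

def rk4_step(f, t, y, dt):
    # One fourth-order Runge-Kutta step: weighted average of four slopes.
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + k1 * dt / 2)
    k3 = f(t + dt / 2, y + k2 * dt / 2)
    k4 = f(t + dt, y + k3 * dt)
    return y + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# Test problem y' = y, y(0) = 1; exact solution y(1) = e.
f = lambda t, y: y
for step in (euler_step, rk4_step):
    t, y, dt = 0.0, 1.0, 0.1
    while t < 1.0 - 1e-12:
        y = step(f, t, y, dt)
        t += dt
    print(step.__name__, abs(y - math.e))  # Euler ~1.2e-1, RK4 ~2.1e-6
```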

Our paper uses the Runge-Kutta method to discretize the IMU motion formula (3):

$\boldsymbol{\omega}_{WB\text{-}RK|B}$ and $\mathbf{a}_{B\text{-}RK}$ are the weighted averages of the system's angular velocity and acceleration obtained by the Runge-Kutta algorithm within $[k, k+1]$. Compared with recursively propagating the system state to moment $k+1$ using only the angular velocity $\boldsymbol{\omega}_{WB|B}^{k}$ and acceleration $\mathbf{a}_{B}^{k}$ at moment $k$, $\boldsymbol{\omega}_{WB\text{-}RK|B}$ and $\mathbf{a}_{B\text{-}RK}$ are more accurate.

As shown in Fig. 3, the IMU frequency is much higher than the camera frequency, and there is a large amount of IMU data between two adjacent keyframes [24,25]. Therefore, the IMU data cannot be fused directly with the camera data. The IMU data between the two frames must be integrated to align the frequencies of the two sensors.

    Fig. 2. The process of PROSAC algorithm.

    Fig. 3. Sensor frequency.

Assuming that the IMU information and the camera images are synchronized in time and only their frequencies differ, we accumulate the IMU information from moment i to moment j. The state variables such as the rotation, velocity, and translation of the system at moment i can then be updated directly to moment j. Combining formula (8), the numerical integration of the IMU information between adjacent keyframes i and j satisfies the following formula:
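In the standard on-manifold form, with the caveat that the paper substitutes the Runge-Kutta weighted averages $\boldsymbol{\omega}_{WB\text{-}RK|B}$ and $\mathbf{a}_{B\text{-}RK}$ of formula (8) for the raw rates $\tilde{\boldsymbol{\omega}}_k$ and $\tilde{\mathbf{a}}_k$, formula (9) reads:

$$
\begin{aligned}
\mathbf{R}_j^{WB} &= \mathbf{R}_i^{WB} \prod_{k=i}^{j-1} \mathrm{Exp}\!\left(\left(\tilde{\boldsymbol{\omega}}_k - \mathbf{b}_k^{g} - \boldsymbol{\eta}_k^{gd}\right)\Delta t\right) \\
\mathbf{v}_j &= \mathbf{v}_i + \mathbf{g}_W\,\Delta t_{ij} + \sum_{k=i}^{j-1} \mathbf{R}_k^{WB}\!\left(\tilde{\mathbf{a}}_k - \mathbf{b}_k^{a} - \boldsymbol{\eta}_k^{ad}\right)\Delta t \\
\mathbf{p}_j &= \mathbf{p}_i + \sum_{k=i}^{j-1}\left[\mathbf{v}_k\,\Delta t + \tfrac{1}{2}\mathbf{g}_W\,\Delta t^2 + \tfrac{1}{2}\mathbf{R}_k^{WB}\!\left(\tilde{\mathbf{a}}_k - \mathbf{b}_k^{a} - \boldsymbol{\eta}_k^{ad}\right)\Delta t^2\right]
\end{aligned}
\tag{9}
$$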

Formula (9) describes the state estimate obtained from the IMU information between moments i and j. However, if $\mathbf{R}_i^{WB}$ changes during the optimization, all the rotations $\mathbf{R}_k^{WB}$ $(k = i, \ldots, j-1)$ change as well, and all the integrals must be recalculated. To avoid this repeated integration, we use the pre-integration form to define a set of variables, formula (10), that do not depend on the initial pose and velocity.

In the formula, $\Delta\mathbf{R}_{i,k} = \mathbf{R}_i^{BW}\mathbf{R}_k^{WB}$ and $\Delta\mathbf{v}_{i,k} = \mathbf{R}_i^{BW}\left(\mathbf{v}_k^{B|W} - \mathbf{v}_i^{B|W} - \mathbf{g}_W\,\Delta t_{i,k}\right)$. The left side of formula (10) is the relative motion increment between adjacent keyframes i and j, and the right side is the IMU pre-integration measurement model. This definition makes the right side independent of the state and gravity at moment i; the repeated integration in the calculation process is avoided, and the computation is significantly reduced.
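To make the bookkeeping concrete, the following minimal numpy/scipy sketch (illustrative, not the paper's code) accumulates the pre-integrated deltas from a buffer of IMU samples; it uses the plain Euler discretization that the paper replaces with the Runge-Kutta-averaged rates:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def preintegrate(imu_samples, dt, bg, ba):
    """Accumulate pre-integrated IMU deltas between two keyframes.

    imu_samples -- iterable of (gyro, accel) measurements in the body frame
    dt          -- IMU sampling period
    bg, ba      -- current gyroscope / accelerometer bias estimates
    Returns (dR, dv, dp): increments independent of the initial pose,
    velocity, and gravity, as in the pre-integration definition.
    """
    dR = np.eye(3)    # pre-integrated rotation
    dv = np.zeros(3)  # pre-integrated velocity
    dp = np.zeros(3)  # pre-integrated position
    for gyro, accel in imu_samples:
        a = accel - ba                       # bias-corrected specific force
        dp += dv * dt + 0.5 * dR @ a * dt**2
        dv += dR @ a * dt
        # Exponential map of the bias-corrected angular increment;
        # the paper feeds RK4-averaged rates here instead of the raw ones.
        dR = dR @ R.from_rotvec((gyro - bg) * dt).as_matrix()
    return dR, dv, dp
```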

    3.3. System initialization

VINS initialization is a loosely coupled process combining visual estimation and IMU estimation [26]. First, visual initialization is performed, estimating the initial pose of the system from the matched images. Then the zero biases of the gyroscope and accelerometer are initialized from the estimation results, and the IMU velocity is calculated.

Assuming that the gyroscope bias remains constant between two consecutive keyframes, the gyroscope bias is found by minimizing the difference between the gyroscope pre-integrated measurement and the relative rotation estimated by vision:

$$
\mathbf{b}_g^{*} = \arg\min_{\mathbf{b}_g} \sum_{i=1}^{N-1} \left\| \mathrm{Log}\!\left( \Delta\mathbf{R}_{i,i+1}(\mathbf{b}_g)^{T}\, \mathbf{R}_i^{BW}\mathbf{R}_{i+1}^{WB} \right) \right\|^{2}
\tag{11}
$$

N is the number of keyframes required for initialization, $\Delta\mathbf{R}_{i,i+1}$ is the pre-integrated rotation measurement between adjacent keyframes, and $\mathbf{R}_i^{BW}\mathbf{R}_{i+1}^{WB}$ is the relative rotation estimated by vision. Formula (11) is a least-squares problem, and the Gauss-Newton method can find the gyroscope zero bias. After the gyroscope zero bias is calculated, the pre-integration must be recomputed on the IMU data to eliminate its influence on the pre-integrated measurements.

To obtain the zero bias of the accelerometer, we first estimate the gravitational acceleration from the poses produced by visual SLAM and then calculate the zero bias [27]. The relationship between the body coordinate system and the camera coordinate system, expressed in the world coordinate system, satisfies $\mathbf{p}_{B|W} = \mathbf{p}_{C|W} + \mathbf{R}_{WC}\,\mathbf{p}_{B|C}$.

$\mathbf{p}_{B|W}$ and $\mathbf{p}_{C|W}$ respectively represent the positions of the IMU and the camera in the world coordinate system, and $\mathbf{R}_{WC}$ is the rotation matrix of the camera coordinate system relative to the world coordinate system. Meanwhile, $\mathbf{p}_{B|C}$ represents the position of the IMU in the camera coordinate system.

Introducing the constraint that the magnitude of the gravitational acceleration is G ≈ 9.8 m/s², suppose a body coordinate system I that shares its origin with the world coordinate system. The rotation matrix between the coordinate system I and the world coordinate system is $\mathbf{R}_{WI}$. The direction of gravitational acceleration in the coordinate system I is $\hat{\mathbf{g}}_I = [0, 0, 1]^{T}$, and the direction of gravitational acceleration in the world coordinate system is $\hat{\mathbf{g}}_W = \mathbf{g}_W^{*}/\lVert\mathbf{g}_W^{*}\rVert$, so the angle between these two direction vectors can be used to calculate $\mathbf{R}_{WI}$.
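One standard construction (a sketch; the paper does not spell it out here) builds $\mathbf{R}_{WI}$ from the axis-angle between the two unit vectors:

$$
\mathbf{v} = \hat{\mathbf{g}}_I \times \hat{\mathbf{g}}_W, \qquad
\theta = \operatorname{atan2}\!\left(\lVert\mathbf{v}\rVert,\; \hat{\mathbf{g}}_I \cdot \hat{\mathbf{g}}_W\right), \qquad
\mathbf{R}_{WI} = \mathrm{Exp}\!\left(\theta\,\mathbf{v}/\lVert\mathbf{v}\rVert\right)
$$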

At this time, the gravitational acceleration can be expressed as $\mathbf{g}_W = \mathbf{R}_{WI}\,\hat{\mathbf{g}}_I\,G$.

$\mathbf{R}_{WI}$ will change after the accelerometer zero bias is introduced. We use a perturbation $\delta\boldsymbol{\theta}$ to describe the change and perform a first-order approximation: $\mathbf{g}_W = \mathbf{R}_{WI}\,\mathrm{Exp}(\delta\boldsymbol{\theta})\,\hat{\mathbf{g}}_I\,G \approx \mathbf{R}_{WI}\left(\mathbf{I} + \delta\boldsymbol{\theta}^{\wedge}\right)\hat{\mathbf{g}}_I\,G$.

Combining formula (9) and formula (12), considering the change of the accelerometer zero bias in $\Delta\mathbf{p}_{ij}$, and using the constraints between three consecutive keyframes to eliminate the velocity terms, we can obtain formula (15).

    3.4. Joint estimation and local optimization

The initial velocity, gravitational acceleration, and the zero biases of the gyroscope and accelerometer are solved during IMU initialization. Then the image and IMU information are used to estimate and optimize the system's pose. First, joint estimation calculates the system's pose at the current keyframe by minimizing the visual reprojection error and the IMU pre-integration error. Then the system's pose and the map points' locations are optimized within a sliding window.

We use Singular Value Decomposition (SVD) to solve formula (15), which yields the accelerometer zero bias $\mathbf{b}_a$ and the refined gravitational acceleration.

According to the calculated zero biases of the gyroscope and accelerometer, combined with formula (9), the system velocity at each keyframe moment can be further obtained: the velocity at the last keyframe follows from formula (16), and the velocities at the other keyframe moments follow from formula (17).

    3.4.1. State and error

The state variables of the VINS include the rotation matrix, the translation vector, and the velocity. The IMU bias follows a random walk process and also needs to be estimated. Define the state of the ith frame as $\Phi_i = \{\mathbf{R}_i, \mathbf{p}_i, \mathbf{v}_i, \mathbf{b}_i^{g}, \mathbf{b}_i^{a}\}$. In the subsequent local optimization, the map point locations inside the sliding window are jointly optimized as part of the state.

    As shown in Fig. 4, the VINS includes three constraints:

    Fig. 4. VINS state and constraints.

●The constraint between the keyframes and the map points, given by the positions of the image feature points;

    ●The constraint of IMU pre-integration measurement between adjacent keyframes;

    ●The constraint on the random walk model of the zero bias between adjacent keyframes.

    According to the above constraints, we define the visual reprojection error and IMU measurement error. The two types of errors are introduced below.

Visual error is defined by the reprojection error of the map points. Assuming that m map points are observed at the time of the ith frame, the visual error at that time is given by formula (19).

In the formula, ρ is the robust kernel function and $\Sigma_k$ is the covariance matrix of the reprojection error. Meanwhile, $\mathbf{x}_k^{uv} = (u_k^L, v_k^L, u_k^R)$ is the projected coordinate of the kth landmark point observed by the stereo camera, $\mathbf{X}_k^C = [X, Y, Z]^{T}$ is the landmark in the camera coordinate system, and π is the projection function:
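In the standard stereo form (a reconstruction consistent with the symbol definitions here), the visual error of formula (19) and the projection read:

$$
E_{proj}(i) = \sum_{k=1}^{m} \rho\!\left(\mathbf{e}_k^{T}\,\Sigma_k^{-1}\,\mathbf{e}_k\right), \quad \mathbf{e}_k = \mathbf{x}_k^{uv} - \pi\!\left(\mathbf{X}_k^{C}\right), \qquad
\pi\!\left(\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}\right) =
\begin{bmatrix}
f_x X/Z + c_u \\
f_y Y/Z + c_v \\
f_x (X - b_l)/Z + c_u
\end{bmatrix}
$$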

$f_x$, $f_y$, $c_u$, and $c_v$ are the camera intrinsic parameters, and $b_l$ is the baseline. The IMU error comprehensively considers two constraints: the IMU pre-integration measurement and the zero-bias random walk process. The IMU error between frames i and j is defined as:

In the formula, $\Sigma_{IMU}$ is the pre-integrated measurement covariance matrix and $\Sigma_b$ is the zero-bias random walk covariance matrix; $\mathbf{e}_{IMU}$ is the IMU pre-integrated measurement residual, and $\mathbf{e}_b$ is the IMU zero-bias random walk residual.
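For reference, these residuals are usually written in the standard on-manifold pre-integration form, with $\mathbf{e}_{IMU} = [\mathbf{e}_R;\, \mathbf{e}_v;\, \mathbf{e}_p]$:

$$
\begin{aligned}
\mathbf{e}_R &= \mathrm{Log}\!\left( \Delta\mathbf{R}_{ij}^{T}\, \mathbf{R}_i^{BW}\, \mathbf{R}_j^{WB} \right) \\
\mathbf{e}_v &= \mathbf{R}_i^{BW}\left( \mathbf{v}_j - \mathbf{v}_i - \mathbf{g}_W\,\Delta t_{ij} \right) - \Delta\mathbf{v}_{ij} \\
\mathbf{e}_p &= \mathbf{R}_i^{BW}\left( \mathbf{p}_j - \mathbf{p}_i - \mathbf{v}_i\,\Delta t_{ij} - \tfrac{1}{2}\mathbf{g}_W\,\Delta t_{ij}^{2} \right) - \Delta\mathbf{p}_{ij} \\
\mathbf{e}_b &= \mathbf{b}_j - \mathbf{b}_i
\end{aligned}
$$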

    3.4.2. Joint state estimation

Joint state estimation measures the system pose in real time. First, the IMU provides the system with a relatively credible initial estimate, and the map points observed in the previous keyframe are reprojected into the current keyframe. Then the system calculates the visual reprojection error for all projected points and their matched points in the two adjacent keyframes. Finally, it minimizes the sum of the visual reprojection error and the IMU pre-integration error. Formula (19) defines the visual error $E_{proj}(j)$, and formula (21) defines the IMU error $E_{IMU}(i,j)$; therefore, the objective function is defined as

$$ \Phi^{*} = \arg\min_{\Phi}\left\{ E_{proj}(j) + E_{IMU}(i,j) \right\} $$

The above problem is a least-squares problem. Therefore, the optimal state $\Phi^{*}$ can be solved by minimizing the sum of the visual reprojection error and the IMU error.

    3.4.3. Local sliding window optimization

The state information of the latest keyframe is estimated by the joint state estimation module, and the observed landmarks are added to the local map as map points. Next, we optimize the system poses and map point locations in the local map.

In a VINS, since the IMU provides constraints between two adjacent keyframes, the keyframes contained in the local map must be continuous before they can be used for optimization. At the same time, the computational complexity after fusion grows rapidly because of the high IMU frequency. To balance accuracy and computational efficiency, we adopt a sliding window strategy to optimize the local map. As shown in Fig. 5, the continuous keyframes of the most recent period constitute a sliding window. During optimization, only the keyframes in the sliding window and the map points observed by these keyframes are optimized. In our paper, the sliding window length in the local map is set to 10 to ensure reliability. This ensures that the IMU pre-integration can provide effective constraints during local optimization (see Fig. 6).
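As a toy illustration of the window bookkeeping (names such as `observed_points` are illustrative, not from the paper's code), the newest keyframe is appended, the oldest drops out automatically, and only the states inside the window enter the local optimization:

```python
from collections import deque

WINDOW_SIZE = 10  # the paper fixes the local window to 10 keyframes

class SlidingWindow:
    def __init__(self, size=WINDOW_SIZE):
        self.keyframes = deque(maxlen=size)  # oldest keyframe drops out

    def add(self, keyframe):
        self.keyframes.append(keyframe)

    def states_to_optimize(self):
        # Only the keyframe states inside the window, plus the map points
        # they observe, enter the local optimization problem.
        kfs = list(self.keyframes)
        points = {p for kf in kfs for p in kf.observed_points}
        return kfs, points
```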

    Fig. 5. Sliding window optimization range.

    Fig. 6. Loop detection process.

    Fig. 7. Globally optimized pose.

The state at this moment is defined as

$$ \Phi = \left\{ \mathbf{T}_k^{WB},\; \mathbf{X}_i^{W} \right\} $$

In the formula, k is the keyframe index within the sliding window, and $\mathbf{T}_k^{WB}$ represents the system pose in the world coordinate system at the kth keyframe. Meanwhile, $\mathbf{X}_i^{W}$ represents the ith landmark's coordinates in the world coordinate system.

The reprojection errors and the IMU pre-integration errors of all keyframes in the sliding window are combined to construct the objective function, formula (26).

Formula (26) is still a least-squares problem, and it is again solved by the Gauss-Newton method in g2o.

The local sliding window optimization comprehensively considers the state information of N consecutive keyframes and updates the system poses in the sliding window at each iteration. After local optimization is complete, the loop detection module performs loop detection and global pose graph optimization on the latest keyframes to eliminate accumulated errors and refine the system poses.

    3.5. Back-end optimization

Unmanned systems explore unknown environments, and there is always an error in estimating their pose. As the exploration area expands, earlier errors gradually accumulate into later moments. Therefore, correcting the pose in time is particularly important for the long-term robust operation of the unmanned system. A feasible idea to solve this problem is to examine the image information of the scene and determine whether the current position has been reached before. If the information is found to be similar, a loop is formed at the repeated position. Then the system's poses in the loop are re-optimized to eliminate accumulated errors and obtain a globally consistent pose graph.

Loop detection based on the scene's appearance is essentially an image similarity detection problem. However, if all historical frames in the map were matched against the current frame, the computation would be too large and would seriously affect real-time detection. In this paper, we use the Bag of Words (BoW) model to accomplish loop detection.

The words in the BoW model are average representations of similar features of a specific type, and the dictionary is the collection of all words. We reuse the ORB features extracted by the front-end. For each frame, the ORB features are compared against the dictionary elements to convert the frame into a bag-of-words vector. In loop detection, comparing the bag-of-words vectors of two frames determines whether they constitute a loop, which speeds up detection. The detection process includes three parts, sketched below: (1) building a dictionary; (2) describing frames with bag-of-words vectors; (3) comparing the bag-of-words vectors to determine the similarity of two frames.
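A minimal sketch of the three steps with a k-means vocabulary and normalized histograms (illustrative only: production systems such as DBoW2 use a hierarchical vocabulary tree, and ORB descriptors are binary, clustered under Hamming distance; here they are treated as float vectors to keep the sketch simple):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_dictionary(all_descriptors, num_words=500):
    # (1) Cluster training descriptors; each cluster center is a "word".
    return KMeans(n_clusters=num_words, n_init=10).fit(all_descriptors)

def bow_vector(frame_descriptors, dictionary):
    # (2) Quantize each descriptor to its nearest word, then histogram.
    words = dictionary.predict(frame_descriptors)
    vec = np.bincount(words, minlength=dictionary.n_clusters).astype(float)
    return vec / (np.linalg.norm(vec) + 1e-12)  # normalize for comparison

def similarity(vec_a, vec_b):
    # (3) Cosine similarity between bag-of-words vectors; a score near 1
    # between two frames far apart in time suggests a loop candidate.
    return float(vec_a @ vec_b)
```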

After the system successfully detects a loop, it can establish the constraint relationship between the two frames through feature matching. First, we fuse the map points matched by the loop and eliminate redundant map points. Then the relative pose between the two keyframes is calculated with the Perspective-n-Point (PnP) algorithm, and the current keyframe's pose is corrected. In addition, the other keyframes in the loop also carry different degrees of accumulated error and need to be optimized globally.

Considering that the number of map points is much larger than the number of keyframe poses, only the keyframe poses are optimized during global optimization to reduce computation. After the pose optimization is completed, the optimized poses are used to adjust the map points.

In pose graph optimization, the nodes represent the camera poses, and the edges represent the pose transformation relationships between adjacent nodes. Suppose the camera poses of two adjacent nodes in the loop are $\mathbf{T}_i$ and $\mathbf{T}_j$, the pose transformation matrix between them is $\mathbf{T}_{i,j}$, and the corresponding Lie algebra elements are $\boldsymbol{\xi}_i$, $\boldsymbol{\xi}_j$, and $\boldsymbol{\xi}_{i,j}$, respectively. The pose transformation relationship gives

$$ \mathbf{T}_{i,j} = \mathbf{T}_i^{-1}\,\mathbf{T}_j \tag{27} $$

Due to pose estimation error, the above formula does not hold exactly. Pose estimation for an unmanned system iteratively solves for an optimal transformation matrix $\mathbf{T} \in SE(3)$ that minimizes the system error. To simplify the iterative solution, Lie algebra is introduced, so that matrix derivatives can be transformed into multiplications by the vector corresponding to the anti-symmetric matrix. We stipulate that the symbol '∨' converts an anti-symmetric matrix into its corresponding vector, and the symbol '∧' converts a vector into its corresponding anti-symmetric matrix. Thus, the error term is constructed by moving $\mathbf{T}_{i,j}$ to the right side of formula (27):
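In the usual pose-graph form this reads:

$$ \mathbf{e}_{i,j} = \ln\!\left( \mathbf{T}_{i,j}^{-1}\, \mathbf{T}_i^{-1}\, \mathbf{T}_j \right)^{\vee} \tag{28} $$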

Formula (28) describes the error term between two adjacent pose nodes. Considering the error terms formed by all pose nodes in the pose graph, we build the least-squares problem

$$ \min_{\boldsymbol{\xi}} \frac{1}{2} \sum_{(i,j)\in\varepsilon} \mathbf{e}_{i,j}^{T}\,\Sigma_{i,j}\,\mathbf{e}_{i,j} $$

where ξ is the set of all nodes in the pose graph, ε represents the set of all edges, and $\Sigma_{i,j}$ is the information matrix, representing the weight of the error term $\mathbf{e}_{i,j}$.

Finally, g2o is used to solve the above least-squares problem; the poses of all keyframes in the loop are corrected, and the positions of all map points are re-adjusted. Fig. 7 shows the pose graph. Due to error accumulation, the unmanned system's motion trajectory does not close when a previous position is revisited. Pose optimization aligns the system poses at the same place and corrects the global motion trajectory.

    4. Experiment

We evaluate the proposed visual-inertial information fusion algorithm in two parts: a dataset experiment and a real-environment experiment. In the first experiment, we compare the proposed algorithm with other recent algorithms on a public dataset; the comparison demonstrates the positioning accuracy and robustness of our algorithm in detail. Then we test the algorithm in an indoor environment to evaluate its performance in real scenarios.

    4.1. Dataset experiment

The experimental platform configuration is shown in Table 1. The open-source EuRoC dataset provided by ETH Zurich is used to verify the SLAM algorithm. The EuRoC dataset was collected on a micro-drone carrying 20 Hz binocular images provided by an MT9V034 camera, synchronized 200 Hz IMU information provided by an ADIS16448 sensor, and ground-truth trajectories provided by a motion capture system. It is therefore well suited to verifying visual-inertial information fusion SLAM algorithms.

    Table 1 Experimental platform configuration.

The dataset includes 11 data sequences collected during flights of the micro-drone shown in Fig. 8(a) in an industrial plant and two rooms. According to the exposure changes, number of features, motion speed, and degree of motion blur of the camera data sequences, the EuRoC dataset divides the 11 sequences into three levels: easy, medium, and difficult. Among them, the maximum speed of 7 sequences exceeds 2 m/s, with the highest reaching 2.87 m/s, and the average speed of three sequences exceeds 0.9 m/s. Fig. 8(b) shows images collected by the micro-drone in the industrial plant. The environment in the plant is relatively complex and can fully simulate the actual settings of unmanned systems.

To verify the performance of the algorithm, we carried out extensive comparative experiments. To simplify the notation, we use RK-VIF (the SLAM method for visual-inertial information fusion via the Runge-Kutta method) to denote our algorithm and compare it with the current state-of-the-art visual-inertial information fusion systems OKVIS, ROVIO, and VINS-Mono. OKVIS is an advanced VIO algorithm developed by the Autonomous Systems Lab of ETH Zurich, which can be used with monocular or stereo cameras; VINS-Mono is a tightly coupled monocular visual-inertial system developed by Shaojie Shen's team at the Hong Kong University of Science and Technology; ROVIO, developed at ETH Zurich, is also a tightly coupled monocular visual-inertial system. This part of the experiment details the results on the MH_05_difficult and V1_02_medium sequences of the EuRoC dataset.

For the sequences MH_05_difficult and V1_02_medium, the trajectories of ROVIO, OKVIS, RK-VIF, and the ground truth are shown in Fig. 9.

Through multiple comparison experiments, we report the Root Mean Square Error (RMSE) of the algorithms on each sequence of the EuRoC dataset to evaluate their performance.

The RMSE over 10 sequences of the EuRoC dataset is shown in Table 2. Compared with the other algorithms, RK-VIF has the smallest RMSE on most sequences, indicating that it has the highest positioning accuracy and the best overall performance. However, OKVIS holds a slight advantage on the sequence V2_01_easy. After comparative analysis, we believe RK-VIF incurs a certain positioning error on V2_01_easy: the unmanned aerial vehicle (UAV) moving too fast and the resulting motion blur are the main causes of the increased RMSE. Moreover, OKVIS cannot run in real time on our test platform. Fig. 10 shows the running times of the algorithms on the EuRoC dataset. The blue column represents the ground truth, i.e., the actual capture time of each sequence. It can be seen that our algorithm runs faster than ROVIO but, compared with VINS-Mono, takes longer on certain sequences, because our algorithm spends more time matching feature points and using the Runge-Kutta method for pre-integration. OKVIS has the longest running time; a possible reason is that it spends a long time on image feature extraction and global pose optimization. In summary, the experiments verify our algorithm's performance in terms of accuracy and running speed; it supports the long-term, stable operation of the SLAM system.

Table 2 The RMSE on the EuRoC dataset (unit: m).

    Fig. 8. EuRoC dataset.

    Fig. 9. Dataset experiment.

    4.2. Real environment experiment

The indoor experimental environment is shown in Fig. 11(a). We use indoor GPS as the ground truth and compare RK-VIF with OKVIS, VINS-Mono, and ROVIO to verify the performance of the algorithms.

The experimental platform is shown in Fig. 11(b). A TX2 development board is mounted on a Mecanum-wheel mobile robot. The computing resources mainly include a CPU (HMP dual Denver 2 / 2 MB L2 + quad ARM A57 / 2 MB L2) and a GPU (NVIDIA Pascal, 256 CUDA cores), connected to a ZED binocular camera and an STM32 driver board equipped with an MPU-6050 IMU sensor.

In this experiment, we tested the positioning accuracy of ROVIO, OKVIS, VINS-Mono, and RK-VIF. The laptop and the TX2 were connected to the same local area network (LAN). We used the laptop to control the TX2 mobile robot and drove three laps along the same trajectory in the indoor experimental environment, as shown in Fig. 12. Indoor GPS provides the ground truth, and the total track length is 72.72 m. Fig. 13 and Fig. 14 respectively show the translation and rotation of the mobile robot at each moment and the corresponding errors.

    Fig.12. Running trajectory in indoor environment.

    Fig.10. Running time in EuRoC Dataset (unit: s).

    Fig.11. Test environment and equipment.

RK-VIF, ROVIO, OKVIS, and VINS-Mono are compared and analyzed in the indoor environment experiment. Compared against the ground truth generated by indoor GPS in the X and Y directions, the RK-VIF algorithm has the smallest trajectory error and the highest positioning accuracy, with errors between −0.2 m and +0.2 m. VINS-Mono is second, and OKVIS third. ROVIO performs worst, with a trajectory error of about ±0.3 m. In the Z direction, because the mobile robot always stays on the same horizontal plane, the position at each moment fluctuates around 0, which is consistent with the actual motion.

Considering that the indoor experimental site is flat, with no slope, we do not count the roll and pitch angles in the attitude error comparison; only the yaw angle is included in the attitude error analysis.

    Fig.13. The position and their corresponding errors of various algorithms.

As shown in Fig. 14, comparing the four algorithms RK-VIF, OKVIS, VINS-Mono, and ROVIO, the attitude error of RK-VIF is significantly smaller than that of the other three, lying between −4° and +4°. OKVIS and VINS-Mono have similar attitude errors, and ROVIO has the largest error, reaching ±6°.

Figs. 13 and 14 analyze the positioning errors of the four algorithms under the same experimental conditions and scene. To demonstrate the advantages of our algorithm more intuitively, and by analogy with the dataset experiment in Section 4.1, we conducted five runs for each algorithm and report the best RMSE in the indoor environment in Table 3. The table shows that RK-VIF has smaller positioning errors and better robustness in real scenes. Integrating the dataset and real-scene experiments, our algorithm balances positioning accuracy and running time, providing an excellent solution for unmanned systems to explore unknown environments robustly.

    Fig.14. The attitude and corresponding errors of various algorithms.

Table 3 The average RMSE in the indoor environment (unit: m).

    5. Conclusion

This paper proposes a visual-inertial information fusion SLAM based on pre-integration improved by the Runge-Kutta method, addressing the low positioning accuracy of pure visual SLAM under fast motion and in complex indoor environments. In the visual-inertial fusion process, the IMU pre-integration value is used as a constraint between two adjacent frames. To calculate the pre-integration value, the paper uses the Runge-Kutta method instead of Euler integration to discretize the IMU motion equation, avoiding the error caused by the first-order approximation. The experiments show that RK-VIF can effectively perform positioning in environments of different difficulty levels. Furthermore, compared with other state-of-the-art open-source algorithms, our method shows higher positioning accuracy. Nevertheless, extensive further research is still needed to improve the accuracy and robustness of the system.

    Declaration of competing interest

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

    Acknowledgements

This work was supported by the China Postdoctoral Science Foundation under Grant 2019M653870XB, the Natural Science Foundation of Shaanxi Province under Grants No. 2020GY-003 and 2021GY-036, the National Natural Science Foundation of China under Grant 62001340, and the Fundamental Research Funds for the Central Universities, China, under Grants XJS211306 and JC2007.
