
Rapid and robust initialization for monocular visual inertial navigation within multi-state Kalman filter

CHINESE JOURNAL OF AERONAUTICS, 2018, Issue 1

Wei FANG, Lianyu ZHENG

School of Mechanical Engineering and Automation, Beihang University, Beijing 100083, China

Navigation for an isolated target, providing the position, orientation, and velocity of the object in an unprepared environment, is a great challenge in the communities of Unmanned Aerial Vehicles (UAVs),1-3 mobile robots,4 augmented reality,5 etc. With an accurate state estimate, the target object can perceive its current pose and make decisions for the next step, and this technology is becoming ubiquitous in daily life.

Much work has been done on this task, and the visual-based approach is a typical one for pose estimation and surrounding perception. Liu et al.6 presented a high-precision pose estimation method based on binocular vision, in which the pose can be estimated in real time with high-speed cameras and laser scanning. As an outside-in capturing style, this method requires the location and orientation of the observing equipment to be arranged in advance to avoid occlusion, so it lacks generality and flexibility for the navigation of freely moving targets. Zhou and Tao7 proposed a vision-based navigation method using marked feature points arranged in a radial shape. Instead of the traditional PnP-based locating method, the camera location and the outlier points are calculated simultaneously, making the navigation more efficient. Nevertheless, this marker-based navigation method achieves considerable accuracy only at the expense of prior knowledge about the scene.

With the progress of lightweight and cheap micro-electro-mechanical systems, the Inertial Measurement Unit (IMU) can provide the position, orientation, and velocity passively. However, these consumer-grade IMUs are greatly affected by the time integration of systematic errors and must be corrected periodically. As a common complementary sensor, the Global Positioning System (GPS) is usually used for this correction,8 but it cannot work in GPS-denied environments such as indoor navigation.9 Thus, Zhu et al.10 proposed an indoor positioning method that fuses WiFi and inertial sensors, but WiFi signals are not available everywhere, which limits wider application of this method to some extent. As described in the last paragraph,6,7 although vision-only navigation methods have some limitations, the accuracy of their pose estimates is reliable. Moreover, consumer-grade camera and IMU modules are ubiquitous in daily life. For small-size and light-weight applications, such as UAVs and mobile devices, the monocular camera turns out to be a superior visual candidate sensor. Thus, integrated navigation combining a monocular camera and an inertial sensor is adopted in this paper.

In addition, visual inertial navigation can be categorized as filtering-based11 or optimization-based.12 The filtering-based approach requires fewer computational resources by marginalizing past states, but it may have slightly lower performance due to the early fixing of linearization points. The optimization-based approach, in contrast, achieves better performance at the expense of higher computational demands. The comparison between filtering-based and optimization-based approaches is detailed in Ref. 13. Considering real-time applications on future UAV-like resource-constrained systems, this paper focuses on the filtering-based fusion method for its high efficiency. According to the different combinations of visual and inertial information, filtering-based sensor fusion solutions can be further grouped into loosely-coupled14,15 and tightly-coupled.16,17 The loosely-coupled approach usually consists of a standalone vision-based pose estimation module, such as SVO18 or ORB-SLAM,19 and a separate IMU propagation module.15 Due to the lack of cross-relation information between the vision and the IMU, the loosely-coupled system is incapable of correcting drifts in the estimator. The tightly-coupled approach performs systematic fusion of the visual and IMU measurements, usually leading to better navigation results.12 What is more, in monocular navigation the arbitrary scale is a common problem, and the tightly-coupled method can implicitly incorporate the metric information from the IMU into the 3D scene estimation, removing the need to model the scale separately. Thus, the tightly-coupled method is adopted for our monocular visual inertial navigation.

The most common tightly-coupled visual inertial navigation augments the 3D feature positions in the filter state and concurrently estimates the pose and the 3D points.17 This method leads to increasing computational complexity as new features are observed. To address this problem, the Multi-State Constraint Kalman Filter (MSCKF),16 instead of estimating the positions of landmarks in the state vector, maintains constant computational demands with a sliding window of camera poses. Compared with traditional visual inertial navigation in an Extended Kalman Filter (EKF), the MSCKF method can balance efficiency and accuracy in larger-scale navigation. After the presentation of the MSCKF, a number of improvements were proposed,20,21 which increased the accuracy and the online calibration performance of the MSCKF. Gui and Gu22 proposed a keyframe-based direct approach within the MSCKF, which reduces the computational complexity to linear in the number of selected patches. However, due to the minimum-length constraint of the sliding window for a state update, the initialization problem in the MSCKF remains unsolved. Usually, the sensor module should be kept static for a while to achieve initialization, and shocks at the initial stage may damage the initialization performance of the MSCKF. For a highly nonlinear system fusing visual and IMU measurements, an accurate and robust initialization is required to bootstrap the estimator. Hesch et al.11 showed that the performance of a visual inertial navigation estimator relies heavily on the accuracy of the initial state, and that the initialization error accumulates in the subsequent navigation. In recent years, researchers have made some achievements on the initialization of visual inertial navigation. Yang and Shen23 addressed the initialization problem with an optimization-based method. They divided the estimation pipeline into different stages, and the experimental results illustrated the validity of the method. However, optimization-based initialization has high computational complexity, and the state vector derived from it is not consistent with the subsequent MSCKF.

To the best of our knowledge, this is the first paper that pays attention to initialization within the MSCKF framework for visual inertial navigation. In order to improve the initialization performance of the MSCKF, a trifocal tensor constraint based Initialization MSCKF (I-MSCKF) method is proposed. Instead of calculating the 3D map points in the scene, the constraints among three successive images are applied to the initial estimator. At the same time, the estimated poses are generated and accumulated to satisfy the minimum sliding window of the MSCKF. In addition, in order to improve the initialization accuracy, the sigma-point filter is applied to provide a superior approximation of the nonlinear system. Thus, with the proposed I-MSCKF method, the robustness and accuracy of initialization can be improved, resulting in a superior performance for monocular visual inertial navigation. The overall workflow of the proposed I-MSCKF method is depicted in Fig. 1. The combination of the trifocal tensor and the sigma-point filter is applied to the initialization, and a seamless transition to the subsequent state estimation is then provided to the MSCKF. Finally, the state estimate is output by the monocular visual inertial navigation.

The remainder of this paper is organized as follows. Section 2 introduces the IMU propagation model and the MSCKF method. In Section 3, the trifocal tensor and the sigma-point filter for initialization are depicted. In Section 4, the performance of the proposed I-MSCKF method is demonstrated by different experiments. Finally, the conclusions of the study are summarized in Section 5.

Fig.1 Overall flowchart of visual inertial navigation.

2. State estimation and MSCKF

2.1. IMU propagation model

The state propagation is carried out from IMU measurements, and a 16-dimensional vector is used to parameterize the IMU state in this paper:
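Following the standard MSCKF parameterization,16 which is consistent with the definitions below, the IMU state can be written as

$$\mathbf{x}_I = \begin{bmatrix} {}^I_G\bar{q}^{\mathrm T} & \mathbf{b}_\omega^{\mathrm T} & {}^G\mathbf{v}_I^{\mathrm T} & \mathbf{b}_a^{\mathrm T} & {}^G\mathbf{p}_I^{\mathrm T} \end{bmatrix}^{\mathrm T} \tag{1}$$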

where $^I_G\bar{q}$ is the quaternion representing the rotation from the global frame {G} to the IMU frame {I}, $^G\mathbf{v}_I$ and $^G\mathbf{p}_I$ are the velocity and position of the IMU in the global frame, respectively, and $\mathbf{b}_\omega$ and $\mathbf{b}_a$ denote the biases of the gyroscope and accelerometer, respectively.

Using the continuous-time dynamics of the state, Eq. (2) can be obtained:
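Consistent with the definitions that follow, the continuous-time kinematics take the standard form24

$$\begin{aligned}
{}^I_G\dot{\bar q} &= \tfrac{1}{2}\Omega(\boldsymbol\omega)\,{}^I_G\bar q, & \dot{\mathbf b}_\omega &= \mathbf n_{b\omega}\\
{}^G\dot{\mathbf v}_I &= {}^G\mathbf a, & \dot{\mathbf b}_a &= \mathbf n_{ba}\\
{}^G\dot{\mathbf p}_I &= {}^G\mathbf v_I
\end{aligned} \tag{2}$$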

where $^G\mathbf{a}$ is the acceleration in the global frame, $\mathbf{n}_{b\omega}$ and $\mathbf{n}_{ba}$ are the Gaussian random-walk processes of the biases $\mathbf{b}_\omega$ and $\mathbf{b}_a$, respectively, and $\Omega(\boldsymbol\omega)$ is the quaternion multiplication matrix of $\boldsymbol\omega$, which can be defined by
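In the quaternion convention of Ref. 24, this matrix is

$$\Omega(\boldsymbol\omega) = \begin{bmatrix} -\lfloor\boldsymbol\omega\times\rfloor & \boldsymbol\omega \\ -\boldsymbol\omega^{\mathrm T} & 0 \end{bmatrix}$$

where $\lfloor\boldsymbol\omega\times\rfloor$ denotes the skew-symmetric cross-product matrix of $\boldsymbol\omega$.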

Given the gyroscope (accelerometer) in an inertial sensor, its output contains a certain bias $\mathbf{b}_\omega$ ($\mathbf{b}_a$) and white Gaussian noise $\mathbf{n}_\omega$ ($\mathbf{n}_a$). With the measurements $\boldsymbol\omega_m$ and $\mathbf{a}_m$ from the gyroscope and accelerometer, the real angular velocity $\boldsymbol\omega$ and the real acceleration $\mathbf{a}$ can be obtained:
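One common form, neglecting the Earth's rotation (see Section 4.5) and with gravity handled in the velocity propagation, is

$$\boldsymbol\omega = \boldsymbol\omega_m - \mathbf b_\omega - \mathbf n_\omega,\qquad \mathbf a = \mathbf a_m - \mathbf b_a - \mathbf n_a \tag{4}$$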

With the linearization of Eq. (2), the continuous-time nominal state propagation model can be achieved:
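A consistent choice,24 assuming gravity $^G\mathbf g$ is compensated in the velocity propagation, propagates the estimates as

$$\dot{\hat{\bar q}} = \tfrac{1}{2}\Omega(\hat{\boldsymbol\omega})\hat{\bar q},\quad {}^G\dot{\hat{\mathbf v}}_I = \mathbf C^{\mathrm T}(\hat{\bar q})\hat{\mathbf a} + {}^G\mathbf g,\quad {}^G\dot{\hat{\mathbf p}}_I = {}^G\hat{\mathbf v}_I,\quad \dot{\hat{\mathbf b}}_\omega = \mathbf 0,\quad \dot{\hat{\mathbf b}}_a = \mathbf 0$$

with the attitude error expressed through the error quaternion $\delta\bar q = \bar q \otimes \hat{\bar q}^{-1} \approx \begin{bmatrix}\tfrac{1}{2}{}^G\delta\boldsymbol\theta_I^{\mathrm T} & 1\end{bmatrix}^{\mathrm T}$, where $\hat{\boldsymbol\omega} = \boldsymbol\omega_m - \hat{\mathbf b}_\omega$ and $\hat{\mathbf a} = \mathbf a_m - \hat{\mathbf b}_a$.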

where $^G\delta\boldsymbol\theta_I$ is the angle deviation of the error quaternion $\delta\bar{q}$, and then the 15-dimensional IMU error state is obtained:
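Mirroring the ordering of Eq. (1), this error state is

$$\tilde{\mathbf x}_I = \begin{bmatrix} {}^G\delta\boldsymbol\theta_I^{\mathrm T} & \tilde{\mathbf b}_\omega^{\mathrm T} & {}^G\tilde{\mathbf v}_I^{\mathrm T} & \tilde{\mathbf b}_a^{\mathrm T} & {}^G\tilde{\mathbf p}_I^{\mathrm T} \end{bmatrix}^{\mathrm T}$$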

2.2. State propagation

Different from the visual-aided inertial EKF-fusion method,17 the MSCKF method does not add features to the state vector. The update procedure is carried out when features move out of view or the number of appended poses reaches the maximum length threshold. In other words, the state vector of the MSCKF maintains a sliding window of camera poses. As shown in Fig. 2, n camera poses and m features are visible to each other, and thus the features $f_i$ (i = 1, 2, ..., m) can be used to impose constraints on all camera poses in the sliding window. In our monocular visual inertial navigation, the main purpose of the MSCKF is to estimate the orientation and position of the IMU-affixed frame {I} in a global frame {G}; the states of the camera can then be obtained through its rigid connection with the IMU. At time step k, the full state vector consists of the current IMU state estimate and n 7-dimensional past camera poses. Thus the vector has the following form:
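In the form used by the MSCKF,16 with each camera pose contributing a 4-dimensional quaternion and a 3-dimensional position,

$$\mathbf x_k = \begin{bmatrix} \mathbf x_{I_k}^{\mathrm T} & {}^{C_1}_G\bar q^{\mathrm T} & {}^G\mathbf p_{C_1}^{\mathrm T} & \cdots & {}^{C_n}_G\bar q^{\mathrm T} & {}^G\mathbf p_{C_n}^{\mathrm T} \end{bmatrix}^{\mathrm T}$$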

Therefore, the error state vector of the MSCKF can be defined as
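$$\tilde{\mathbf x}_k = \begin{bmatrix} \tilde{\mathbf x}_{I_k}^{\mathrm T} & \delta\boldsymbol\theta_{C_1}^{\mathrm T} & {}^G\tilde{\mathbf p}_{C_1}^{\mathrm T} & \cdots & \delta\boldsymbol\theta_{C_n}^{\mathrm T} & {}^G\tilde{\mathbf p}_{C_n}^{\mathrm T} \end{bmatrix}^{\mathrm T}$$

which has dimension 15 + 6n.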

    Fig.2 Schematic of MSCKF constraint framework.

In the filter prediction, the IMU measurements for state propagation are obtained in discrete form; the signals from the gyroscope and accelerometer are thus assumed to be sampled at a certain time interval. By combining Eqs. (2) and (4), the IMU error-state kinematics can be obtained:
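Under the standard derivation,24 the resulting error-state kinematics are, to first order,

$$\begin{aligned}
{}^G\delta\dot{\boldsymbol\theta}_I &= -\lfloor\hat{\boldsymbol\omega}\times\rfloor\,{}^G\delta\boldsymbol\theta_I - \tilde{\mathbf b}_\omega - \mathbf n_\omega\\
{}^G\dot{\tilde{\mathbf v}}_I &= -\mathbf C^{\mathrm T}(\hat{\bar q})\lfloor\hat{\mathbf a}\times\rfloor\,{}^G\delta\boldsymbol\theta_I - \mathbf C^{\mathrm T}(\hat{\bar q})\tilde{\mathbf b}_a - \mathbf C^{\mathrm T}(\hat{\bar q})\mathbf n_a\\
{}^G\dot{\tilde{\mathbf p}}_I &= {}^G\tilde{\mathbf v}_I,\qquad \dot{\tilde{\mathbf b}}_\omega = \mathbf n_{b\omega},\qquad \dot{\tilde{\mathbf b}}_a = \mathbf n_{ba}
\end{aligned}$$

where $\mathbf C(\hat{\bar q})$ is the rotation matrix corresponding to $\hat{\bar q}$.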

Thus, the linearized continuous-time model of the IMU error state24 is acquired:
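Collecting the above into matrix form gives

$$\dot{\tilde{\mathbf x}}_I = \mathbf F\,\tilde{\mathbf x}_I + \mathbf G\,\mathbf n_I \tag{10}$$

with $\mathbf n_I = [\mathbf n_\omega^{\mathrm T},\ \mathbf n_{b\omega}^{\mathrm T},\ \mathbf n_a^{\mathrm T},\ \mathbf n_{ba}^{\mathrm T}]^{\mathrm T}$ and

$$\mathbf F = \begin{bmatrix}
-\lfloor\hat{\boldsymbol\omega}\times\rfloor & -\mathbf I_3 & \mathbf 0 & \mathbf 0 & \mathbf 0\\
\mathbf 0 & \mathbf 0 & \mathbf 0 & \mathbf 0 & \mathbf 0\\
-\mathbf C^{\mathrm T}(\hat{\bar q})\lfloor\hat{\mathbf a}\times\rfloor & \mathbf 0 & \mathbf 0 & -\mathbf C^{\mathrm T}(\hat{\bar q}) & \mathbf 0\\
\mathbf 0 & \mathbf 0 & \mathbf 0 & \mathbf 0 & \mathbf 0\\
\mathbf 0 & \mathbf 0 & \mathbf I_3 & \mathbf 0 & \mathbf 0
\end{bmatrix}$$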

where $\mathbf{I}_3$ is the 3×3 identity matrix, and $\mathbf{G}$ is calculated as follows:
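In the same error-state ordering, the noise Jacobian used in the MSCKF16 is

$$\mathbf G = \begin{bmatrix}
-\mathbf I_3 & \mathbf 0 & \mathbf 0 & \mathbf 0\\
\mathbf 0 & \mathbf I_3 & \mathbf 0 & \mathbf 0\\
\mathbf 0 & \mathbf 0 & -\mathbf C^{\mathrm T}(\hat{\bar q}) & \mathbf 0\\
\mathbf 0 & \mathbf 0 & \mathbf 0 & \mathbf I_3\\
\mathbf 0 & \mathbf 0 & \mathbf 0 & \mathbf 0
\end{bmatrix}$$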

2.3. State augmentation
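When a new image arrives, the camera pose is computed from the current IMU estimate through the known camera-IMU extrinsics $({}^C_I\bar q,\ {}^I\mathbf p_C)$ and appended to the state vector; following the standard MSCKF augmentation,16 a consistent form is

$${}^C_G\hat{\bar q} = {}^C_I\bar q \otimes {}^I_G\hat{\bar q},\qquad {}^G\hat{\mathbf p}_C = {}^G\hat{\mathbf p}_I + \mathbf C^{\mathrm T}({}^I_G\hat{\bar q})\,{}^I\mathbf p_C \tag{13}$$

and the covariance matrix is augmented accordingly:

$$\mathbf P_{k|k} \leftarrow \begin{bmatrix}\mathbf I_{6n+15}\\ \mathbf J_k\end{bmatrix}\mathbf P_{k|k}\begin{bmatrix}\mathbf I_{6n+15}\\ \mathbf J_k\end{bmatrix}^{\mathrm T}$$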

where $\mathbf{I}_{6n+15}$ is a $(6n+15)\times(6n+15)$ identity matrix, and the Jacobian $\mathbf{J}_k$ is derived from Eq. (13).

2.4. Covariance propagation

Given the computational complexity, the Euler method is chosen for integration over the sampling time $\Delta t$. Thus, the discrete-time error-state transition matrix $\Phi(t_k+\Delta t, t_k)$ can be derived as Eq. (17) according to the method of Ref. 12:
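With a single Euler step this gives

$$\Phi(t_k+\Delta t, t_k) \approx \mathbf I_{15} + \mathbf F\,\Delta t \tag{17}$$

and, to first order, the IMU block of the covariance is propagated as

$$\mathbf P_{II,k+1|k} = \Phi\,\mathbf P_{II,k|k}\,\Phi^{\mathrm T} + \mathbf G\,\mathbf Q_I\,\mathbf G^{\mathrm T}\Delta t$$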

where $\mathbf{Q}_I$ is the covariance matrix of the IMU process noise $\mathbf{n}_I$ in Eq. (10).

2.5. State correction

Now that the position of any feature used in the update has been estimated, the measurement residual is obtained:
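For the jth feature observed from the ith camera pose, this residual is

$$\mathbf r_i^j = \mathbf z_i^j - \hat{\mathbf z}_i^j \tag{21}$$

where $\hat{\mathbf z}_i^j$ is the projection of the estimated feature position into camera i.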

Linearizing Eq. (21) about the estimates of the camera poses and feature positions, the residual can be approximated as:
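Stacking all observations of the feature gives $\mathbf r^j \approx \mathbf H_x^j\tilde{\mathbf x} + \mathbf H_f^j\,{}^G\tilde{\mathbf p}_{f_j} + \mathbf n^j$; projecting onto the left null space $\mathbf A$ of $\mathbf H_f^j$ removes the dependence on the feature error, yielding the form used in the update16:

$$\mathbf r_o = \mathbf A^{\mathrm T}\mathbf r^j = \mathbf H_o\tilde{\mathbf x} + \mathbf n_o \tag{25}$$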

In order to reduce the computational complexity of this EKF update, a QR decomposition of $\mathbf{H}_o$ is employed:
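$$\mathbf H_o = \begin{bmatrix}\mathbf Q_1 & \mathbf Q_2\end{bmatrix}\begin{bmatrix}\mathbf T_H\\ \mathbf 0\end{bmatrix}$$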

where $\mathbf{Q}_1$ and $\mathbf{Q}_2$ are unitary matrices and $\mathbf{T}_H$ is an upper-triangular matrix. Substituting this result into Eq. (25) and multiplying by $[\mathbf{Q}_1, \mathbf{Q}_2]^{\mathrm{T}}$ yield:
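$$\begin{bmatrix}\mathbf Q_1^{\mathrm T}\\ \mathbf Q_2^{\mathrm T}\end{bmatrix}\mathbf r_o = \begin{bmatrix}\mathbf T_H\\ \mathbf 0\end{bmatrix}\tilde{\mathbf x} + \begin{bmatrix}\mathbf Q_1^{\mathrm T}\mathbf n_o\\ \mathbf Q_2^{\mathrm T}\mathbf n_o\end{bmatrix}$$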

The residual $\mathbf{Q}_2^{\mathrm{T}}\mathbf{r}_o$ can be discarded as noise, and a new error term can be defined for the EKF update:
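$$\mathbf r_n = \mathbf Q_1^{\mathrm T}\mathbf r_o = \mathbf T_H\tilde{\mathbf x} + \mathbf n_n$$

with the corresponding Kalman gain

$$\mathbf K = \mathbf P\,\mathbf T_H^{\mathrm T}\left(\mathbf T_H\,\mathbf P\,\mathbf T_H^{\mathrm T} + \mathbf R_n\right)^{-1}$$

where $\mathbf R_n$ is the covariance of $\mathbf n_n$.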

At the same time, the correction and the updated estimate of the covariance matrix are formulated as
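$$\Delta\mathbf x = \mathbf K\,\mathbf r_n,\qquad \mathbf P_{k+1|k+1} = \left(\mathbf I - \mathbf K\mathbf T_H\right)\mathbf P_{k+1|k}\left(\mathbf I - \mathbf K\mathbf T_H\right)^{\mathrm T} + \mathbf K\,\mathbf R_n\,\mathbf K^{\mathrm T}$$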

3. Rapid and robust initialization method

As depicted in Section 2, due to a certain sliding-window constraint, the visual inertial fusion based on the MSCKF can maintain a tradeoff between accuracy and efficiency. However, at the initial stage of the MSCKF, a certain length of the sliding window is needed to bootstrap the subsequent state estimator. Usually, the navigated object should be kept static for a while for a successful initialization; otherwise, the initial error would deteriorate the subsequent navigation performance. To alleviate this limitation, a trifocal tensor and sigma-point filter based initialization method, I-MSCKF, is presented as follows.

3.1. Trifocal tensor based constraints

With the constraint of epipolar geometry, the trifocal tensor encapsulates the geometric relationships among three different viewpoints. The geometric properties of the trifocal tensor are introduced in Fig. 4. It can transfer a point correspondence in two views to the corresponding point in a third view. Compared with the epipolar constraint between two viewpoints, the essence of the trifocal tensor is the geometric constraint of a point-line-point correspondence in 3D space, enabling a stronger constraint on the pose estimation.

Thus, to improve the initial robustness of the MSCKF, a trifocal tensor based state vector can be defined as Eq. (32), which allows visual inertial navigation without estimating the 3D positions of feature points:
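Consistent with the 27-dimensional initial error state used in Section 3.3, a form comprising the current IMU state and the two most recent camera poses is

$$\mathbf x_{in} = \begin{bmatrix}\mathbf x_I^{\mathrm T} & {}^{C_1}_G\bar q^{\mathrm T} & {}^G\mathbf p_{C_1}^{\mathrm T} & {}^{C_2}_G\bar q^{\mathrm T} & {}^G\mathbf p_{C_2}^{\mathrm T}\end{bmatrix}^{\mathrm T} \tag{32}$$

with the corresponding error state

$$\tilde{\mathbf x}_{in} = \begin{bmatrix}\tilde{\mathbf x}_I^{\mathrm T} & \delta\boldsymbol\theta_{C_1}^{\mathrm T} & {}^G\tilde{\mathbf p}_{C_1}^{\mathrm T} & \delta\boldsymbol\theta_{C_2}^{\mathrm T} & {}^G\tilde{\mathbf p}_{C_2}^{\mathrm T}\end{bmatrix}^{\mathrm T} \in \mathbb{R}^{27} \tag{33}$$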

Compared with the MSCKF, the proposed I-MSCKF method can accomplish initialization within three images. At the same time, given the same state vector framework, the pose estimates at the initial stage can be accumulated to satisfy the sliding-window restriction of the MSCKF. Thus, the robustness of the initialization is improved when the system suffers from shocks at the initial stage. In addition, to achieve higher initial accuracy, the more accurate sigma-point filter is applied at the initial stage rather than the EKF; the detailed measurement update process at the initial stage is illustrated next.

    Fig.3 Epipolar geometry between two views.

    Fig.4 Correspondences between trifocal tensors.

3.2. Initial measurement update

Let the projection matrices of the camera for the three views be $\mathbf{P}_1 = [\mathbf{I}\,|\,\mathbf{0}]$, $\mathbf{P}_2 = [\mathbf{A}'\,|\,\mathbf{a}_4]$, and $\mathbf{P}_3 = [\mathbf{B}'\,|\,\mathbf{b}_4]$, where $\mathbf{A}'$ and $\mathbf{B}'$ are 3×3 matrices, and the vectors $\mathbf{a}_4$ and $\mathbf{b}_4$ are the 4th columns of the respective camera projection matrices. Therefore, according to the projection matrices and a line in 3D space, the notation of the trifocal tensor can be derived:
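This is the standard construction of Hartley and Zisserman26:

$$\mathbf T_i = \mathbf a_i\mathbf b_4^{\mathrm T} - \mathbf a_4\mathbf b_i^{\mathrm T},\qquad i = 1, 2, 3$$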

where the vectors $\mathbf{a}_i$, $\mathbf{b}_i$ (i = 1, 2, 3) are the ith columns of the respective camera projection matrices.

According to the principle of the trifocal tensor, the point $\tilde{m}_3$ in the third image is obtained:
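Using the point-transfer relation,26 with $\mathbf l_2$ any line through the matched point $\tilde m_2$ in the second image,

$$\tilde m_3 \simeq \left(\sum_{i=1}^{3}\tilde m_1(i)\,\mathbf T_i\right)^{\mathrm T}\mathbf l_2 \tag{36}$$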

Thus, the measurement model can be given by the combination of the epipolar and trifocal tensor constraints. Since the trifocal tensor involves three consecutive images, assuming that the ith feature correspondence across the three images is $\{m_1, m_2, m_3\}$, two measurement models among the three images are acquired:
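In normalized image coordinates, a consistent form of the epipolar constraints between views 1-2 and 2-3 is

$$h_1 = \tilde m_2^{\mathrm T}\lfloor\mathbf t_{12}\times\rfloor\mathbf R_{12}\,\tilde m_1,\qquad h_2 = \tilde m_3^{\mathrm T}\lfloor\mathbf t_{23}\times\rfloor\mathbf R_{23}\,\tilde m_2 \tag{37}$$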

where $(h_1, h_2)$ are measurement elements from the epipolar geometry; then, with the predicted pose $({}^G\mathbf{R}_C, {}^G\mathbf{p}_C)$ from the IMU propagation, the corresponding pairs $(\mathbf{R}_{12}, \mathbf{t}_{12})$ and $(\mathbf{R}_{23}, \mathbf{t}_{23})$ can be obtained:
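Composing the predicted camera poses gives the standard relative-pose relations

$$\mathbf R_{12} = {}^{C_2}_G\mathbf R\,{}^{C_1}_G\mathbf R^{\mathrm T},\qquad \mathbf t_{12} = {}^{C_2}_G\mathbf R\left({}^G\mathbf p_{C_1} - {}^G\mathbf p_{C_2}\right)$$

and analogously for $(\mathbf R_{23}, \mathbf t_{23})$ between the second and third views.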

According to Eq. (36) and the trifocal tensor constraints, another measurement element is acquired as
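A natural choice, consistent with Eq. (36), is the transfer error in the third image:

$$h_3 = m_3 - \pi(\tilde m_3) \tag{39}$$

where $\pi(\cdot)$ normalizes the homogeneous coordinates.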

Therefore, the total measurement $\mathbf{z}_{in}$ can be derived by stacking Eqs. (37) and (39):
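A consistent form, stacked over all tracked feature correspondences, is

$$\mathbf z_{in} = \begin{bmatrix} h_1 & h_2 & h_3^{\mathrm T}\end{bmatrix}^{\mathrm T} = h(\tilde{\mathbf x}_{in}) + \mathbf n_{in}$$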

where $\tilde{\mathbf{x}}_{in}$ is the initial state from Eq. (33). With these trifocal constraints at the initial stage of the MSCKF, features from the first captured image are extracted and stored. Given the state vector in Eq. (33), the IMU propagation model for the trifocal tensor is similar to that in Section 2.2. Then, when each subsequent image arrives, features are extracted and matched to the formerly stored points, and the errors between the predicted and measured locations are used to update the state estimate. In the MSCKF, there exists a minimum number of observations for a feature track to trigger an update, such as ten or more continuously observed images, to accomplish a reliable tracking precision. Thus, the navigation system is supposed to remain stable at this initialization stage; otherwise, the error in the initialization would destroy the subsequent navigation. With the help of the trifocal tensor constraint, the predicted pose can be corrected within only three continuous images instead of the ten or more images in the MSCKF. Given that the elapsed time is proportional to the number of images, the initialization time of the I-MSCKF can be reduced dramatically. The camera poses estimated with the trifocal tensor are then appended to satisfy the minimum number of observations in the MSCKF framework. Therefore, a rapid and robust initialization for visual inertial navigation can be obtained by the proposed I-MSCKF.

3.3. Sigma-point filter

Due to the nonlinearity of the visual inertial navigation, an imprecise initialization may irreparably deteriorate the subsequent state estimation. Therefore, the more accurate sigma-point filter is used in this paper to perform the measurement update at the initial stage. Using the following scheme, the sigma points can be generated from the state covariance matrix $\mathbf{P}_{k|k-1}$:
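Following the standard unscented construction,27

$$\begin{aligned}
\mathcal X_0 &= \hat{\mathbf x}_{k|k-1}\\
\mathcal X_i &= \hat{\mathbf x}_{k|k-1} + \left(\sqrt{(n+\lambda)\mathbf P_{k|k-1}}\right)_i, & i &= 1, \ldots, n\\
\mathcal X_{i+n} &= \hat{\mathbf x}_{k|k-1} - \left(\sqrt{(n+\lambda)\mathbf P_{k|k-1}}\right)_i, & i &= 1, \ldots, n
\end{aligned}$$

where $(\cdot)_i$ denotes the ith column of a matrix square root.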

    Fig.5 Typical images and estimated trajectories.

where $n = 27$ is the dimension of the initial state vector, and thus the initial state $\mathcal{X}_0$ can be denoted as $\mathbf{0}_{27\times 1}$. The weights $W_s^i$ and $W_c^i$ ($i = 0, 1, \ldots, 2n$) in the sigma-point filter can be acquired as
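$$W_s^0 = \frac{\lambda}{n+\lambda},\qquad W_c^0 = \frac{\lambda}{n+\lambda} + 1 - \alpha^2 + \beta,\qquad W_s^i = W_c^i = \frac{1}{2(n+\lambda)},\quad i = 1, \ldots, 2n$$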

with $\lambda = \alpha^2(n+\kappa) - n$, where $\alpha$ determines the spread of the sigma points and is usually set to a small positive value27 (0.2 in this paper), $\kappa$ is set to $3 - n$ for parameter estimation, and $\beta = 2$ is chosen for a Gaussian distribution. With the measurement noise covariance matrix $\mathbf{R}_{in}$, the measurement model can be obtained as
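Propagating the sigma points through the measurement model $h(\cdot)$ gives, in the standard unscented form,27

$$\mathcal Z_i = h(\mathcal X_i),\qquad \hat{\mathbf z} = \sum_{i=0}^{2n}W_s^i\,\mathcal Z_i$$

$$\mathbf P_{z_iz_i} = \sum_{i=0}^{2n}W_c^i\left(\mathcal Z_i - \hat{\mathbf z}\right)\left(\mathcal Z_i - \hat{\mathbf z}\right)^{\mathrm T} + \mathbf R_{in},\qquad \mathbf P_{xz_i} = \sum_{i=0}^{2n}W_c^i\left(\mathcal X_i - \hat{\mathbf x}\right)\left(\mathcal Z_i - \hat{\mathbf z}\right)^{\mathrm T}$$

$$\mathbf K_{in} = \mathbf P_{xz_i}\,\mathbf P_{z_iz_i}^{-1}$$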

where $\mathbf{P}_{xz_i}$ and $\mathbf{P}_{z_iz_i}$ are the state-measurement cross-covariance matrix and the predicted measurement covariance matrix, respectively, and $\mathbf{K}_{in}$ is the filter gain of the sigma-point filter. Then, the update of the error state and the covariance matrix can be carried out within the initial stage.
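To make this update concrete, the following is a minimal Python sketch of one sigma-point measurement update, assuming a generic measurement function h (standing in for the stacked epipolar/trifocal model above); the function name and interfaces are illustrative, not the authors' implementation.

```python
import numpy as np

def sigma_point_update(x, P, z_meas, h, R, alpha=0.2, beta=2.0):
    """One unscented (sigma-point) measurement update.

    x      : (n,) prior state mean (here the 27-dim initial error state)
    P      : (n, n) prior covariance
    z_meas : (m,) stacked measurement vector
    h      : callable mapping a state sample to a predicted measurement
    R      : (m, m) measurement noise covariance
    """
    n = x.size
    kappa = 3.0 - n                      # parameter-estimation choice used in the paper
    lam = alpha**2 * (n + kappa) - n

    # Sigma points: mean plus/minus scaled columns of a matrix square root of P.
    S = np.linalg.cholesky((n + lam) * P)
    X = np.column_stack([x, x[:, None] + S, x[:, None] - S])   # (n, 2n+1)

    # Weights for the mean (Ws) and covariance (Wc).
    Ws = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Ws.copy()
    Ws[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)

    # Propagate the sigma points through the measurement model.
    Z = np.column_stack([h(X[:, i]) for i in range(2 * n + 1)])
    z_hat = Z @ Ws                       # predicted measurement mean

    dZ = Z - z_hat[:, None]
    dX = X - (X @ Ws)[:, None]
    Pzz = (Wc * dZ) @ dZ.T + R           # predicted measurement covariance
    Pxz = (Wc * dX) @ dZ.T               # state-measurement cross covariance

    K = Pxz @ np.linalg.inv(Pzz)         # filter gain
    x_new = x + K @ (z_meas - z_hat)
    P_new = P - K @ Pzz @ K.T
    return x_new, P_new
```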

    Table 1 RMSEs of different methods in KITTI dataset.

At the same time, the output pose estimates are accumulated, and the initialization is accomplished when the number of accumulated poses reaches the sliding-window threshold of the MSCKF. Moreover, with the accurate sigma-point filter for pose estimation at the initial stage, the proposed I-MSCKF method can achieve better navigation performance.

4. Experiments

In this section, two kinds of experiments are carried out to evaluate the proposed I-MSCKF method. The first kind is derived from the public datasets KITTI28 and EuRoc,29 which were generated by an automobile and a UAV, respectively, with ground truths; they can be used for quantitative analysis of the proposed method. The other kind is carried out on our own monocular camera and IMU platform; due to the lack of ground truth, only qualitative analyses of the I-MSCKF method can be realized there. Moreover, to assess the performance of the different methods clearly, the initial parts of all navigation experiments are intentionally chosen to contain major vibration. Experimental results and analyses are presented in the following sections.

    Fig.6 3σ boundaries for pose estimation error with proposed I-MSCKF method.

4.1. Case 1: Navigation for KITTI dataset

The KITTI dataset is a famous algorithm evaluation platform from the Karlsruhe Institute of Technology and the Toyota Technological Institute at Chicago. It is generated from two grayscale and two color cameras, a laser scanner, and a high-precision GPS/IMU system (OXTS RT3003), and the measurement frequency of the 6-axis IMU is 100 Hz. What is more, the ground truth along the motion trajectory is also provided. For our monocular visual inertial navigation, only one grayscale camera and the IMU data are used.

The start point of the navigation is intentionally chosen near a turning part, and typical images of the trajectory are shown in Figs. 5(a) and (b). With the data from one grayscale camera and the IMU, the corresponding trajectories can be estimated by the MSCKF and the I-MSCKF (see Fig. 5(c)). The ground truth of the trajectory is marked with a black solid line, and the results of the MSCKF and the I-MSCKF are illustrated with a red dashed line and a blue dash-dot line, respectively. It is easy to find that the proposed I-MSCKF method achieves high accuracy in the navigation. The reason is that the initialization of the MSCKF suffered from drastic orientation changes near the turning part, making the initial pose estimate from the IMU integration inaccurate. Thus, the optimization of the sliding window cannot converge to an ideal result, and the estimated trajectory deviates incrementally from the ground truth.

In order to further analyze the performance of the proposed I-MSCKF method, the two different methods are compared with the ground truth. At time step k, the Root Mean Square Error (RMSE) can be defined as
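A standard form consistent with the definitions below is

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{k=1}^{N}\left\|\mathbf p_{est,k} - \mathbf p_{gt,k}\right\|^2} \tag{45}$$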

where N is the number of images, $\mathbf{p}_{est}$ is the estimated position (orientation) from the state estimation, and $\mathbf{p}_{gt}$ is the ground truth of the position (orientation) from the KITTI dataset. The RMSEs of these trajectories are shown in Table 1.

For the robustness analysis of the proposed I-MSCKF method, the 3σ boundaries of the pose estimates are shown as the dashed lines in Fig. 6, and the results obtained by the proposed method are depicted as the line within the 3σ boundary. Even with some turning parts along the trajectory, the translational and rotational errors still remain within the 3σ boundaries, illustrating the robustness of the proposed method.

4.2. Case 2: Navigation for EuRoc UAV dataset

To further evaluate the proposed I-MSCKF method, additional experiments are performed on the public EuRoc dataset. This dataset is provided by ETH Zurich for the assessment of UAV navigation algorithms. The experimental platform contains a stereo camera capturing images at 20 Hz on a UAV, as shown in Fig. 7(a). The synchronized IMU is an ADIS16448, which outputs measurements at 200 Hz. The 'V2_01_easy' dataset, with 6-DoF pose ground truth from a Vicon motion capture system, is applied. Compared with the KITTI dataset, the EuRoc UAV dataset suffers from greater vibration while flying along the trajectory. A typical image and the IMU measurements are shown in Figs. 7(b) and (c), respectively.

Similar to the experiment on the KITTI dataset, a trajectory segment with great shock is chosen as the start point. The estimated trajectories of the different methods are depicted in Fig. 8(a). According to the ground truth (marked as a black solid line), we can find that the proposed I-MSCKF method (marked as a blue dash-dot line) recovers a more accurate trajectory than the MSCKF method (marked as a red dashed line). In addition, these estimated trajectories are projected onto a 2D plane in Fig. 8(b), and the results also illustrate that the proposed I-MSCKF method is closer to the ground truth.

According to Eq. (45), the position RMSEs of the different methods are compared with the ground truth. The results are shown in Table 2.

    Fig.7 Typical dataset of EuRoc.

    Fig.8 Trajectory comparisons with EuRoc dataset.

    Table 2 RMSEs of different trajectories in EuRoc dataset.

The results in Table 2 also show that the proposed I-MSCKF method is able to acquire superior performance to the MSCKF method. The reason may be that the severe vibration at the start point of the navigation deteriorates the convergence of the MSCKF method, leading to an estimated trajectory that drifts severely as time goes on. However, the proposed I-MSCKF method can accomplish the initialization more efficiently and accurately, which results in a more precise trajectory estimation regardless of the vibration at the initial stage.

4.3. Case 3: Real experiment in indoor environment

In order to evaluate the performance of the proposed I-MSCKF in different circumstances, real navigation experiments are carried out in indoor and outdoor environments. The experimental platform is shown in Fig. 9, consisting of a consumer-grade monocular camera and an IMU with a rigid connection; the corresponding coordinate frames are also depicted. The IMU outputs the angular velocity and linear acceleration at a rate of 100 Hz, and the camera works at a frame rate of 30 Hz with a resolution of 640 pixel × 480 pixel.

In addition, a robust and convenient calibration method30 is used to evaluate the performance of the aforementioned consumer-grade IMU in this paper. It can calibrate the non-orthogonality error S, the scale factor K, and the bias vector b of the IMU. From a typical IMU measurement dataset, the parameters of the accelerometer $(S_a, K_a, b_a)$ and the gyroscope $(S_g, K_g, b_g)$ are obtained in Table 3, and they are used to compensate the original IMU measurements.

Different from the aforementioned public datasets, our experiments encounter more challenges and irregular motion in extensive environments. In the indoor scene, the experimental platform is carried by a pedestrian in random movement. Moreover, the lighting changes severely with different walking directions, which can put the visual inertial navigation in trouble.

Typical experimental scenes in a large-scale GPS-denied indoor aisle are shown in Fig. 10(a). The total length of the trajectory is about 400 m. The route is subjected to sunshine from different directions; therefore, the captured images may suffer from overexposure or underexposure at different positions. As shown in Fig. 10(a), the intensity of typical images changes dramatically from one to another. Coupled with the captured images, IMU measurements are obtained synchronously, and the corresponding angular velocity and linear acceleration from the IMU measurements are depicted in Fig. 10(b).

To illustrate the robustness of the proposed I-MSCKF method, the walking trajectory includes a 180° sharp turn. In addition, to expose the different performance of the two methods, vibration is intentionally applied at the initial stage, as the zoomed features in Fig. 11 show. From the captured images and IMU measurements, the trajectory estimated by the proposed I-MSCKF method is shown as the red solid line, and the result of the MSCKF is depicted with a blue dashed line. These two trajectories are illustrated on the Google map. Although there is no ground truth, the tendency of the large-scale aisle can also demonstrate the performance of the different methods.

Fig.9 Monocular vision inertial navigation experimental platform.

    Table 3 Parameters of accelerometer and gyroscope in IMU.

    Fig.10 Dataset obtained from indoor environment.

It is obvious that the proposed I-MSCKF method has high consistency with the reference path (as the tendency of the coordinate line in Fig. 11 shows) along the building. However, suffering from some shock at the initial stage, the initialization of the MSCKF method is not robust and accurate enough, which makes its estimated trajectory deviate far from the reference path, stretching out of the building. These results show that the proposed I-MSCKF method offers higher robustness and accuracy for monocular visual inertial navigation.

    Fig.11 Comparison of trajectories in indoor environment.

4.4. Case 4: Real experiment in outdoor environment

Another experiment is performed in an outdoor environment with a closed loop; the length of the total trajectory is about 800 m. Typical captured images are shown in Fig. 12(a); moving objects and illumination variation occur during the movement. The experimental platform for the outdoor scene is carried on a bike, which also results in fluctuations during the dataset capturing. Therefore, the IMU measurements in the outdoor environment change more intensely than those in the indoor environment, as shown in Fig. 12(b), which imposes more challenges on the visual inertial navigation.

With the captured images and IMU measurements, the trajectories are estimated by the MSCKF and the I-MSCKF method, respectively (shown in Fig. 13). For a closed-loop trajectory, the start point and the end point should ideally coincide; thus, the distance between the start and end points can be used to evaluate the performance of the different navigation methods. Similar to the other experiments, some shock is applied within the initial stage, as the enlarged view in Fig. 13 shows. It is easy to find that the proposed I-MSCKF method obtains a small closed-loop error (marked by the red solid line) of about 5.87 m. At the same time, for the MSCKF method, owing to its sensitivity to the initial condition, the closed-loop error reaches 29.12 m, which is about 5 times that of the proposed I-MSCKF method. Therefore, the experimental result in the outdoor environment also shows that the I-MSCKF method achieves superior performance to the MSCKF method.

4.5. Results and discussion

Monocular visual inertial navigation is nonlinear, and to properly fuse the monocular images and IMU measurements, a rapid and robust initialization is needed to bootstrap the integrated navigation system. In order to demonstrate the proposed I-MSCKF method, different qualitative and quantitative experiments are carried out. Thanks to the public datasets KITTI and EuRoc with accurate ground truths, comparative analyses are executed, and the experimental results show the superior performance of the proposed method. Moreover, in order to verify the adaptability and universality of the I-MSCKF, other real experiments with a consumer-grade sensor module are conducted both indoors and outdoors. The experimental results also indicate that the proposed I-MSCKF can bootstrap the monocular visual inertial navigation system more accurately and robustly than the MSCKF.

    Fig.12 Dataset obtained from outdoor environment.

    Fig.13 Comparison of trajectories in outdoor closed-loop scene.

Ideally, for a visual inertial navigation system, the Earth's rotation should be considered for an accurate integrated navigation performance. However, the aim of the proposed I-MSCKF in this paper is to provide a 6-DoF pose for daily applications, and thus a consumer-grade IMU is applied in our visual inertial navigation. The low-cost IMU has turn-on biases larger than the Earth's rotation rate (approximately 15°/h, or one rotation per day), meaning that the measured effects of the Earth's rotation are small compared with the gyroscope accuracy. Thus, the influence of the Earth's rotation is not included in the IMU kinematics in this paper.

In addition, in the integrated navigation combining the monocular camera and the IMU, the inertial propagation provides the process model, while the measurement process is carried out by minimizing the re-projection errors of visual features within a sliding window. Thus, the accuracy of the inertial navigation plays an important role in the subsequent measurement update strategies: if a more accurate inertial navigation is acquired, a more stable measurement update can be expected in the visual inertial navigation system. A quantitative comparison of different inertial navigation systems within the visual inertial navigation would therefore be worth further research. Due to the limitation of experimental equipment, this comparison is not covered in this paper. However, according to the experiments with different consumer-grade IMUs above, the results demonstrate that the proposed I-MSCKF can obtain reliable and improved navigation performance with different IMUs.

5. Conclusions

(1) The proposed I-MSCKF method for monocular visual inertial navigation is able to estimate the 6-DoF ego-motion of a UAV or other mobile devices. The experimental results demonstrate that the I-MSCKF method has greater robustness and accuracy than the MSCKF method.

(2) To alleviate the initial demands of the MSCKF method, a trifocal tensor based I-MSCKF method is proposed, which can reduce the initial requirements to only three successive images. Moreover, the initial pose estimation framework maintains consistency with the MSCKF. Thus, the robustness and adaptability of the monocular visual inertial navigation can be improved.

(3) Attention is also paid to the initial accuracy of the visual inertial navigation, and the sigma-point filter is used instead of the EKF at the initial stage, which can achieve superior pose estimation for the nonlinear system with third-order accuracy of the Taylor series expansion. Thus, the subsequent navigation accuracy can also be improved.

(4) Finally, the extensive experimental results demonstrate that, even under some challenging initial conditions, the proposed I-MSCKF outperforms the MSCKF.

    Acknowledgements

The authors gratefully acknowledge the support of the Beijing Key Laboratory of Digital Design & Manufacture, the Academic Excellence Foundation of Beihang University for Ph.D. Students, and the MIIT (Ministry of Industry and Information Technology) Key Laboratory of Smart Manufacturing for High-end Aerospace Products.

1. Goh ST, Abdelkhalik O, Zekavat SA. A weighted measurement fusion Kalman filter implementation for UAV navigation. Aerosp Sci Technol 2013;28(1):315-23.

2. Tang L, Yang S, Cheng N, Li Q. Toward autonomous navigation using an RGB-D camera for flight in unknown indoor environments. Proceedings of 2014 IEEE Chinese guidance, navigation and control conference; 2014 Aug 8-10; Yantai, China. Piscataway: IEEE Press; 2014. p. 2007-12.

3. Santoso F, Garratt MA, Anavatti SG. Visual-inertial navigation systems for aerial robotics: Sensor fusion and technology. IEEE T Autom Sci Eng 2017;14(1):260-75.

4. Lu Y, Song D. Visual navigation using heterogeneous landmarks and unsupervised geometric constraints. IEEE T Robot 2015;31(3):736-49.

5. Marchand E, Uchiyama H, Spindler F. Pose estimation for augmented reality: A hands-on survey. IEEE T Vis Comput Gr 2016;22(12):2633-51.

6. Liu W, Ma X, Li X, Chen L, Zhang Y, Li XD, et al. High-precision pose measurement method in wind tunnels based on laser-aided vision technology. Chin J Aeronaut 2015;28(4):1121-30.

7. Zhou H, Tao Z. A vision-based navigation approach with multiple radial shape marks for indoor aircraft locating. Chin J Aeronaut 2014;27(1):76-84.

8. Wendel J, Meister O, Schlaile C, Trommer GF. An integrated GPS/MEMS-IMU navigation system for an autonomous helicopter. Aerosp Sci Technol 2006;10(6):527-33.

9. Jeremy M, Max B, Sara S, Larry M, Matt M. Real-time pose estimation of a dynamic quadruped in GPS-denied environments for 24-hour operation. Int J Robot Res 2016;35(6):631-53.

10. Zhu N, Zhao H, Feng W, Wang Z. A novel particle filter approach for indoor positioning by fusing WiFi and inertial sensors. Chin J Aeronaut 2015;28(6):1725-34.

11. Hesch J, Kottas DG, Bowman SL, Roumeliotis SI. Consistency analysis and improvement of vision-aided inertial navigation. IEEE T Robot 2014;30(1):158-76.

12. Leutenegger S, Lynen S, Bosse M, Siegwart R, Furgale P. Keyframe-based visual-inertial odometry using nonlinear optimization. Int J Robot Res 2015;34(3):314-34.

13. Lange S, Sunderhauf N, Protzel P. Incremental smoothing vs. filtering for sensor fusion on an indoor UAV. Proceedings of 2013 IEEE international conference on robotics and automation; 2013 May 6-10; Karlsruhe, Germany. Piscataway: IEEE Press; 2013. p. 1773-8.

14. Wang C, Wang T, Liang J, Chen Y, Wu Y. Monocular vision and IMU based navigation for a small unmanned helicopter. Proceedings of 2012 7th IEEE conference on industrial electronics and applications; 2012 Jul 18-20; Singapore. Piscataway: IEEE Press; 2012. p. 1694-9.

15. Weiss S, Siegwart R. Real-time metric state estimation for modular vision-inertial systems. Proceedings of 2011 IEEE international conference on robotics and automation; 2011 May 9-13; Shanghai, China. Piscataway: IEEE Press; 2011. p. 4531-7.

16. Mourikis AI, Roumeliotis SI. A multi-state constraint Kalman filter for vision-aided inertial navigation. Proceedings of 2007 IEEE international conference on robotics and automation; 2007 Apr 10-14; Rome, Italy. Piscataway: IEEE Press; 2007. p. 3565-72.

17. Kelly J, Sukhatme GS. Visual-inertial sensor fusion: Localization, mapping and sensor-to-sensor self-calibration. Int J Robot Res 2011;30(1):56-79.

18. Forster C, Pizzoli M, Scaramuzza D. SVO: Fast semi-direct monocular visual odometry. Proceedings of 2014 IEEE international conference on robotics and automation; 2014 May 31-Jun 7; Hong Kong, China. Piscataway: IEEE Press; 2014. p. 15-22.

19. Mur-Artal R, Montiel JMM, Tardos JD. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE T Robot 2015;31(5):1147-63.

20. Li M, Mourikis AI. Improving the accuracy of EKF-based visual-inertial odometry. Proceedings of 2012 IEEE international conference on robotics and automation; 2012 May 14-18; Minnesota, USA. Piscataway: IEEE Press; 2012. p. 828-35.

21. Li M, Mourikis AI. 3-D motion estimation and online temporal calibration for camera-IMU systems. Proceedings of 2013 IEEE international conference on robotics and automation; 2013 May 6-10; Karlsruhe, Germany. Piscataway: IEEE Press; 2013. p. 5689-96.

22. Gui J, Gu D. A direct visual-inertial sensor fusion approach in multi-state constraint Kalman filter. Proceedings of the 34th Chinese control conference; 2015 Jul 28-30; Hangzhou, China. Piscataway: IEEE Press; 2015. p. 6105-10.

23. Yang Z, Shen S. Monocular visual-inertial state estimation with online initialization and camera-IMU extrinsic calibration. IEEE T Autom Sci Eng 2017;14(1):39-51.

24. Trawny N, Roumeliotis S. Indirect Kalman filter for 3D attitude estimation. Minneapolis: University of Minnesota; 2005. Report No.: 2005-002.

25. Civera J, Davison AJ, Montiel JMM. Inverse depth parametrization for monocular SLAM. IEEE T Robot 2008;24(5):932-45.

26. Hartley R, Zisserman A. Multiple view geometry in computer vision. 2nd ed. Cambridge: Cambridge University Press; 2008. p. 365-83.

27. Sarkka S. On unscented Kalman filtering for state estimation of continuous-time nonlinear systems. IEEE T Automat Contr 2007;52(9):1631-41.

28. Geiger A, Lenz P, Stiller C, Urtasun R. Vision meets robotics: The KITTI dataset. Int J Robot Res 2013;32(11):1231-7.

29. Burri M, Nikolic J, Gohl P, Schneider T, Rehder J, Omari S, et al. The EuRoC micro aerial vehicle datasets. Int J Robot Res 2016;35(10):1157-63.

30. Tedaldi D, Pretto A, Menegatti E. A robust and easy to implement method for IMU calibration without external equipment. Proceedings of 2014 IEEE international conference on robotics and automation; 2014 May 31-Jun 7; Hong Kong, China. Piscataway: IEEE Press; 2014. p. 3042-9.
