
      Simplified Method for Joint Calibration of 3D Ladar and Monocular Camera


      Jing Li, Liuzhi Yu and Junzheng Wang

      (Key Laboratory of Intelligent Control and Decision of Complex Systems, School of Automation, Beijing Institute of Technology, Beijing 100081, China)

Abstract: To address the problem of fusing monocular camera images with 3D data from a laser detection and ranging (LADAR) sensor, this paper proposes a novel simplified scheme based on the planar feature method, which meets the accuracy requirements of joint calibration with fewer checkerboard calibration plate (CP) positions than traditional methods. First, a mathematical model of the joint calibration is established to obtain the calibration parameters. Second, the selection of positions and orientations of the CP is introduced and its influence on the calibration is analyzed. Then, the calibration result is optimized with the nonlinear Levenberg-Marquardt (LM) optimization approach, and the distance residual method is utilized to estimate the accuracy. Finally, experimental results show that the minimum number of positions required to meet the joint calibration accuracy in the proposed method is 5, which is fewer than the 12 required by traditional methods.

      Key words: 3D LADAR; monocular camera; joint calibration; plane calibration method; distance residual

Multi-sensor data fusion technology in unmanned platforms plays an important role in reducing the impact of sudden environmental changes on system performance, thus improving the reliability, robustness and fault tolerance of the system. Multi-sensor joint calibration is the prerequisite of information fusion. Although a monocular camera can capture a wealth of physical information [1], it is particularly vulnerable to external disturbances such as weather and illumination changes, and it lacks three-dimensional information about objects. In contrast, 3D LADAR can obtain the polar coordinates of a target object quickly, but it cannot capture the color, texture and other appearance information of the target [2]. As a result, 3D LADAR and a monocular camera are complementary to each other. Since the data of the two sensors are collected in their respective coordinate systems, a uniform representation of the two kinds of data is the key to data fusion. This requires a coordinate conversion between the two coordinate systems, which amounts to obtaining the corresponding rotation matrix and translation vector.

In recent years, the information fusion of visual sensors and 3D LADAR has become a hot research topic [3]. At present, there are two main classes of methods for joint calibration: the feature point spatial registration method [4] and the plane calibration plate method [5].

In Ref.[4], the spatial registration of feature points relies on the spatial constraints of the corresponding points in the depth map of the radar image and the camera image to obtain the spatial transformation between the two coordinate systems. However, spatial feature points are difficult to extract accurately, their number is small, and the calibration results are easily influenced by mixed pixels. The plane calibration method was first proposed in 2004 by Zhang and Pless [6], who showed that the angle between the calibration target plane and the LADAR scan plane affects the accuracy of the joint calibration results. Xiang Zhiyu [7] also proposed a calibration plate (CP) method based on the correspondence between the distances from the sensor origins to the target CP, but the method requires the target to be placed at more than 18 locations and does not analyze the influence of the target quantity and position on calibration accuracy.

Taking these observations on related work into account, this paper studies the influence of the position, orientation and number of target CPs on the results and accuracy of the joint calibration. In particular, the minimum number of target positions required to meet the joint calibration accuracy in the proposed method is 5, and its reliability is verified by experiments, whereas traditional methods usually need more than 12 positions. At the same time, the result of the joint calibration is optimized with the nonlinear LM optimization approach, and the distance residual evaluation method is put forward to estimate the calibration accuracy. Therefore, this method simplifies the calibration process and improves the efficiency of the joint calibration while ensuring accuracy.

      1 Joint Calibration Algorithm Description

      1.1 Joint calibration model

The essence of the joint calibration of 3D LADAR and a monocular camera is to determine the rigid transformation between the two coordinate systems, namely the rotation matrix and the translation vector. The relationship among the coordinate systems involved in the joint calibration is shown in Fig.1.

      Fig.1 Relationship among the three coordinate systems of the monocular camera, 3D LADAR and calibration plate

In this case, without loss of generality, we assume that the world coordinate system $O_w X_w Y_w Z_w$ coincides with the target CP coordinate system. $O_c X_c Y_c Z_c$ and $O_l X_l Y_l Z_l$ denote the camera and LADAR coordinate systems, respectively. The distances from the two coordinate system origins to the target CP plane are $d_c$ and $d_l$, respectively. Denote by $P_c$ and $P_l$ the normal vectors of the target CP in the monocular camera coordinates and the 3D LADAR coordinates, respectively.

      The expression of transformation between the camera coordinate system and the 3D LADAR coordinate system is formulated as follows:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_l \\ Y_l \\ Z_l \end{bmatrix} + T \tag{1}$$

where $R$ and $T$ are, respectively, the rotation matrix and the translation vector that convert the LADAR coordinate system to the monocular camera coordinate system; $(X_c, Y_c, Z_c)$ is a point in the camera coordinate system $O_c X_c Y_c Z_c$, and $(X_l, Y_l, Z_l)$ is the corresponding point in the 3D LADAR coordinate system $O_l X_l Y_l Z_l$.
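To make the coordinate conversion concrete, the following sketch applies Eq.(1) to a LADAR point; the numeric values of $R$, $T$ and the sample point are placeholders for illustration, not calibration results from this paper.

```python
import numpy as np

# Hypothetical calibration result: identity rotation and a 20 cm vertical
# offset (the camera is mounted below the LADAR in the experiments).
R = np.eye(3)
T = np.array([0.0, -0.20, 0.0])

def ladar_to_camera(p_l):
    """Eq.(1): convert a LADAR-frame point (X_l, Y_l, Z_l) to camera coordinates."""
    return R @ p_l + T

p_l = np.array([1.5, 0.2, 0.1])   # an illustrative LADAR point, in meters
print(ladar_to_camera(p_l))       # -> [1.5, 0.0, 0.1]
```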

1.2 Calculation of the joint calibration parameters

Before conducting the joint calibration of the two sensors, we first calculate $P_c$ and $d_c$ for the camera, and $P_l$ and $d_l$ for the LADAR. The results are then applied to deduce the aforementioned $R$ and $T$ for the coordinate conversion.

On one hand, the parameters of the monocular camera are calibrated with Zhang's calibration method [8]. Then, the normal vector of the target CP in the camera coordinates and the distance from the point $O_c$ to the CP plane are calculated.

Suppose that $P_{cm}$ stands for the normal vector of the $m$-th target CP plane, and $d_{cm}$ for the distance from $O_c$ to that plane, where $m \in \{1,2,\dots,n\}$ and $n$ is the total number of target CP positions needed. According to the results of the monocular camera calibration, namely $R_{3,m}$ and $T_m$, we can obtain

$$P_{cm} = R_{3,m} \tag{2}$$

$$d_{cm} = P_{cm}^{\mathrm{T}} T_m \tag{3}$$

where $R_{3,m}$ denotes the third column of the orthogonal rotation matrix obtained in the camera calibration process using the $m$-th target CP picture, and $T_m$ is the corresponding translation vector.
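As a sketch of Eqs.(2)-(3), assuming $R_m$ and $T_m$ are the per-image extrinsics returned by a Zhang-style camera calibration (the function name is illustrative):

```python
import numpy as np

def plane_params_camera(R_m, T_m):
    """Normal vector and origin distance of the m-th CP plane in camera coordinates."""
    P_cm = R_m[:, 2]        # Eq.(2): third column of the per-image rotation matrix
    d_cm = P_cm @ T_m       # Eq.(3): d_cm = P_cm^T T_m
    return P_cm, d_cm
```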

On the other hand, for the $m$-th target CP position, the 3D point cloud data set $\{x, y, z\}_m$ collected by the LADAR is used to carry out the planar fitting, and the plane equation is expressed as

$$U_{3,l,1}x + U_{3,l,2}y + U_{3,l,3}z + D = 0 \tag{4}$$

where $U_{3,l}$ is the third column in the analytical solution of the singular value decomposition. Since $U_{3,l}$ denotes the normal vector of the fitted plane in the 3D LADAR coordinate system, we can obtain $P_{lm}$ as

$$P_{lm} = U_{3,l} \tag{5}$$

Assuming $\{x_c, y_c, z_c\}_m$ is the average coordinate of $\{x, y, z\}_m$, $d_{lm}$ can be obtained as

$$d_{lm} = \lVert -D \rVert = \lVert -U_{3,l,1}x_c - U_{3,l,2}y_c - U_{3,l,3}z_c \rVert \tag{6}$$
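The plane fit of Eqs.(4)-(6) can be sketched as follows, assuming `pts` is the N×3 LADAR point cloud on the $m$-th CP; the normal is taken from the SVD of the mean-centered points:

```python
import numpy as np

def fit_plane(pts):
    """SVD plane fit: returns the plane normal P_lm and distance d_lm of Eqs.(5)-(6)."""
    centroid = pts.mean(axis=0)              # the average coordinate used in Eq.(6)
    _, _, Vt = np.linalg.svd(pts - centroid)
    P_lm = Vt[2]                             # Eq.(5): singular vector of the smallest singular value
    d_lm = abs(P_lm @ centroid)              # Eq.(6): |D| = |n . centroid|
    return P_lm, d_lm
```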

Afterwards, $\{P_{cm}, d_{cm}, P_{lm}, d_{lm}\}$, $m \in \{1,2,\dots,n\}$, can be applied to deduce the aforementioned $R$ and $T$ for the coordinate conversion. Let

$$P_c = [P_{c1}\ \cdots\ P_{cn}],\quad P_l = [P_{l1}\ \cdots\ P_{ln}],\quad d_c = [d_{c1}\ \cdots\ d_{cn}]^{\mathrm{T}},\quad d_l = [d_{l1}\ \cdots\ d_{ln}]^{\mathrm{T}} \tag{7}$$

From the perspective of geometry, $d_l$ can also be calculated as

$$d_l = d_c - P_c^{\mathrm{T}} T \tag{8}$$

According to Eqs.(7) and (8), the optimal solution for $T$ can be obtained by minimizing an objective function [8], which is defined as follows

$$\min_{T} \sum_{m=1}^{n} \left( d_{cm} - P_{cm}^{\mathrm{T}} T - d_{lm} \right)^2 \tag{9}$$

Then the objective function for $T$ is equivalent to

$$\min_{T} \left\lVert P_c^{\mathrm{T}} T - (d_c - d_l) \right\rVert^2 \tag{10}$$

      which has an analytical solution of least squares given by

$$T = \left( P_c P_c^{\mathrm{T}} \right)^{-1} P_c (d_c - d_l) \tag{11}$$
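A minimal sketch of Eq.(11), assuming the stacked quantities of Eq.(7) ($P_c$ of size 3×n, $d_c$ and $d_l$ of length n):

```python
import numpy as np

def solve_translation(P_c, d_c, d_l):
    """Least-squares solution of P_c^T T = d_c - d_l, as in Eqs.(10)-(11)."""
    T, *_ = np.linalg.lstsq(P_c.T, d_c - d_l, rcond=None)
    return T
```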

Similarly, since the corresponding plane normal vectors satisfy $P_{cm} = R P_{lm}$, the rotation matrix is obtained by solving

$$\min_{R} \left\lVert P_c - R P_l \right\rVert_F^2 \tag{12}$$

such that $R$ is a rotation matrix with $R^{\mathrm{T}} R = I_3$ and $\det(R) = 1$. This is equivalent to

$$\max_{R}\ \mathrm{tr}\left( R P_l P_c^{\mathrm{T}} \right) \tag{13}$$

This problem is an instance of the well-studied orthogonal Procrustes problem (OPP) [10] and has a closed-form solution: with the singular value decomposition $P_l P_c^{\mathrm{T}} = U \Sigma V^{\mathrm{T}}$, the rotation is given by

$$R = V U^{\mathrm{T}} \tag{14}$$
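The OPP solution of Eqs.(12)-(14) can be sketched as below, with a determinant check added so that the result is a proper rotation (an assumption beyond the closed form quoted above):

```python
import numpy as np

def solve_rotation(P_c, P_l):
    """Align LADAR normals P_l (3 x n) with camera normals P_c (3 x n): Eq.(14)."""
    U, _, Vt = np.linalg.svd(P_l @ P_c.T)   # SVD of P_l P_c^T = U S V^T
    R = Vt.T @ U.T                          # Eq.(14): R = V U^T
    if np.linalg.det(R) < 0:                # enforce det(R) = +1 (proper rotation)
        Vt[2] *= -1
        R = Vt.T @ U.T
    return R
```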

      1.3 Target calibration plate position selection

With Zhang's plane calibration method [8], we assume that the target CP is the plane $z = 0$ in the world coordinate system, the matrix $K$ is the inner parameter matrix of the camera, $[X\ Y\ 1]^{\mathrm{T}}$ is the homogeneous coordinate of a point on the target CP, $[u\ v\ 1]^{\mathrm{T}}$ is the homogeneous pixel coordinate of the corresponding projected image point, and $[r_1\ r_2\ r_3]$ and $T$ are respectively the rotation matrix and translation vector converting the world coordinates to the camera coordinates. Then we can obtain

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K [r_1\ r_2\ r_3\ T] \begin{bmatrix} X \\ Y \\ 0 \\ 1 \end{bmatrix} \tag{15}$$

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K [r_1\ r_2\ T] \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} \tag{16}$$

Since the camera has six unknown internal parameters, the inner parameter matrix $K$ can be solved uniquely and linearly when the number of images captured by the camera is 3 or more.
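As an illustration of Eq.(16), the following sketch recovers the extrinsics from a plane-to-image homography given $K$; `H` is assumed known up to scale, and the sign ambiguity of the scale is ignored for brevity:

```python
import numpy as np

def decompose_homography(H, K):
    """Recover [r1 r2 r3] and T from H = lambda * K [r1 r2 T], cf. Eq.(16)."""
    A = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(A[:, 0])     # scale factor: r1 must have unit norm
    r1, r2, T = lam * A[:, 0], lam * A[:, 1], lam * A[:, 2]
    r3 = np.cross(r1, r2)                   # complete the orthonormal basis
    return np.column_stack([r1, r2, r3]), T
```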

During the joint calibration procedure, the monocular camera is therefore calibrated with at least three CP positions to solve the calibration parameters. In addition, according to the PNP (perspective-N-point) formulation, when $N = 3$ the number of constraints equals the number of unknowns. Consequently, the approximate placement of the control points relative to the camera can be arranged in advance to avoid false solutions of the P3P problem, as shown in Fig.2.

      Fig.2 Distribution of control points into isosceles triangles in P3P problem

where $O$ is the optical center of the camera, the lengths of the three lines connecting the control points $A$, $B$, $C$ to the optical center $O$ are $\alpha$, $\beta$, $\gamma$, and $|AB| = c$, $|AC| = b$, $|BC| = a$. The equations are described as

$$\begin{cases} \alpha^2 + \beta^2 - 2\alpha\beta\cos\angle AOB = c^2 \\ \alpha^2 + \gamma^2 - 2\alpha\gamma\cos\angle AOC = b^2 \\ \beta^2 + \gamma^2 - 2\beta\gamma\cos\angle BOC = a^2 \end{cases} \tag{17}$$

According to this analysis, when the three control points are placed in an isosceles triangle distribution, the real solution of the P3P problem can be determined uniquely.
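A quick numerical check of Eq.(17) for such a configuration (all values are made up for illustration): with $\alpha = \beta$ and equal angles at $O$, the recovered triangle is indeed isosceles.

```python
import numpy as np

alpha, beta, gamma = 2.0, 2.0, 2.0       # hypothetical distances |OA|, |OB|, |OC|
ang_AOB = ang_AOC = np.deg2rad(20.0)     # hypothetical viewing angles at O
ang_BOC = np.deg2rad(25.0)

# Law of cosines, one line per equation of Eq.(17)
c = np.sqrt(alpha**2 + beta**2  - 2*alpha*beta *np.cos(ang_AOB))  # |AB|
b = np.sqrt(alpha**2 + gamma**2 - 2*alpha*gamma*np.cos(ang_AOC))  # |AC|
a = np.sqrt(beta**2  + gamma**2 - 2*beta*gamma *np.cos(ang_BOC))  # |BC|
print(a, b, c)   # |AB| = |AC|, so triangle ABC is isosceles
```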

However, in the actual joint calibration process, when the target CP is placed at only three positions, especially in an isosceles triangle distribution, the procedure is difficult to operate and easily introduces random errors. It is well known that placing the target CP at 3 or 4 positions results in low calibration accuracy, since so few positions cannot cover the entire camera field of view. We therefore conclude that a valid solution for the rotation matrix $R$ and translation vector $T$ requires $N \ge 5$ CP positions. Accordingly, five calibration positions can be used, with the target CP placed evenly in front of the camera and the five planar normal vectors distributed symmetrically about the optical axis within the camera field of view; that is, the orientations of the target CP at these five locations differ from each other, as shown in Fig.3.

      Fig.3 Target CP with five locations in the front of camera

      1.4 Optimization of calibration results

To obtain better $R$ and $T$ for the joint calibration, the result obtained in Section 1.2 is optimized using the nonlinear LM (Levenberg-Marquardt) optimization method.

We select the distance from the point cloud data on the target CP to the corresponding CP plane in the camera coordinates as the objective function to be minimized, and take the $R$ and $T$ obtained in Section 1.2 as initial values. The iterative optimization process is then executed to minimize the objective function as follows

$$\min_{R,T} \sum_{m=1}^{n} \sum_{i=1}^{N_m} \left( P_{cm}^{\mathrm{T}} \left( R X_{l,i}^{m} + T \right) - d_{cm} \right)^2 \tag{18}$$

where $X_{l,i}^{m}$ denotes the $i$-th LADAR point on the $m$-th target CP and $N_m$ is the number of such points.

Finally, the optimal solution $R'$ and $T'$ can be obtained through the iterative process.
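A sketch of this refinement using SciPy's LM solver is given below; it assumes `planes` is a list of tuples $(P_{cm}, d_{cm}, \mathrm{pts}_m)$ with $\mathrm{pts}_m$ the N×3 LADAR points on the $m$-th CP, and parameterizes the rotation as a rotation vector (both choices are ours, not prescribed by the paper):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine(R0, T0, planes):
    """LM refinement of (R, T) by minimizing point-to-plane distances, cf. Eq.(18)."""
    x0 = np.hstack([Rotation.from_matrix(R0).as_rotvec(), T0])

    def residuals(x):
        R, T = Rotation.from_rotvec(x[:3]).as_matrix(), x[3:]
        # signed distance of each transformed LADAR point to its CP plane
        return np.hstack([pts @ R.T @ P_cm + P_cm @ T - d_cm
                          for P_cm, d_cm, pts in planes])

    sol = least_squares(residuals, x0, method='lm')
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```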

      1.5 Accuracy evaluation method of joint calibration

In this paper, a method is proposed to evaluate the joint calibration accuracy using the distance residual, which is defined by the correspondence principle of the distance pairs between the two sensor origins and the target CP. To be more specific, we first substitute the 3D LADAR data into the joint calibration model with the solved values of $R$ and $T$. Then the theoretical distance from the origin to the target CP in the camera coordinates is obtained by multiplying the converted coordinates by the transpose of the corresponding normal vector. Next, the theoretical distance is subtracted from the actual distance from the origin to the target CP in the camera coordinates. Finally, we derive the distance residual through the following formulation

$$\Delta d = \frac{1}{n} \sum_{m=1}^{n} \left| d_{cm} - P_{cm}^{\mathrm{T}} \left( R \bar{X}_{lm} + T \right) \right| \tag{19}$$

where $\bar{X}_{lm}$ is the centroid of the LADAR point cloud on the $m$-th target CP.
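A sketch of this evaluation, reusing the `planes` structure from the previous sketch and taking the centroid of each scan as the representative point (our assumption):

```python
import numpy as np

def distance_residual(R, T, planes):
    """Mean absolute gap between d_cm and the distance implied by the LADAR data."""
    errs = [abs(d_cm - P_cm @ (R @ pts.mean(axis=0) + T))
            for P_cm, d_cm, pts in planes]
    return float(np.mean(errs))
```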

      2 Experiments and Results Analysis

The joint calibration experimental system is shown in Fig.4. The monocular camera is fixed about 20 cm below the 3D LADAR. The checkerboard CP is placed in front of the experimental platform, and its location and orientation are changed several times in each experiment while the LADAR and camera collect distance-image data pairs in the target CP area. In our experiments, we use a checkerboard of 9×7 black and white cells, each with a size of 30 mm×30 mm. The target CP is placed about 1 m to 3 m in front of the experimental platform.

      Fig.4 Joint calibration experimental platform

The monocular camera is calibrated by Zhang's calibration method [8], from which the inner parameter matrix $K$ (unit: pixel) is obtained.

      2.1 Calibration results and accuracy analysis

In order to verify that accomplishing the joint calibration requires the CP to be placed at no fewer than 5 different orientations and locations, we carried out separate sets of experiments with the target placed at 3, 4, 5 and 18 locations with different or identical orientations. In addition, the re-projection method is utilized to display the joint calibration results.

      The data collected in the experiments with different locations and orientations is substituted into the mathematical model of joint calibration. Subsequently, the rotation matrix R and translation vector T are solved respectively, and then the method in Section 1.5 above is used to calculate the accuracy of each pair of R and T. The results are listed in Tab.1.

Tab.1 Comparison of the grouped experiments

From the above experimental results, it can be observed that the calibration errors with 3 or 4 different positions of the target CP are too large, while the error with 5 positions meets practical application requirements and approximates that with 18 positions. Hence, the target CP needs to be placed at no fewer than 5 different positions to accomplish the joint calibration.

According to the above results, the re-projection method is used to project the 3D LADAR point cloud of the target CP onto the corresponding image. The projected points are drawn in the image as red dots, and the re-projection results are shown in Fig.5.

Fig.5 Projection results in camera coordinates
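The re-projection itself can be sketched as follows, assuming a pinhole model with intrinsic matrix $K$ and ignoring lens distortion:

```python
import numpy as np

def project_points(pts_ladar, R, T, K):
    """Project N x 3 LADAR points into the image: Eq.(1) followed by the pinhole model."""
    pts_cam = pts_ladar @ R.T + T     # Eq.(1) applied row-wise
    uv = pts_cam @ K.T                # homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]     # divide by depth to get (u, v)
```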

      2.2 Optimization of joint calibration results

In this section, the optimization method of Section 1.4 is used to optimize the joint calibration results of Section 2.1, and the distance residual method is employed to evaluate the calibration accuracy. According to the experimental results in the previous section, the minimum number of target positions meeting the joint calibration accuracy is 5. Therefore, we only optimize the calibration results for the case where the target CP is placed at 5 different positions, and then compare the optimized result with the original one in terms of the calibration error defined in Eq.(19). The comparison shows that the original error is 0.046 m, while the corresponding optimized error is 0.0137 m.

      Besides, the re-projection results using the original and optimized calibration parameters are shown in Fig.6.

      Fig.6 Re-projection results using original and optimized calibration parameters

      2.3 Color point cloud by data fusion

In order to test the effect of the calibration algorithm presented in this paper, the color image data captured by the camera and the distance data scanned by the 3D LADAR are fused according to the joint calibration results, and the fused data are displayed in the form of a color point cloud. Fig.7a is a picture captured by the calibrated monocular camera, and Fig.7b is a color point cloud image of the scene reconstructed with the fused information. It can be seen in the final color point cloud image that the yellow, blue and white points are clearly distinguishable, which demonstrates that the color and distance information match well and that the joint calibration method is reliable and robust. (The red, green and dark blue lines in Fig.7 are the axes of the point cloud coordinate system.)

      Fig.7 Color point cloud by data fusion

      3 Conclusion

This paper presents a method for joint calibration based on the planar feature and the principle of corresponding distance pairs between the two sensor origins and the target CP. The target CP is freely placed at different locations in front of the camera's field of view, and the orientations of the target CP (plane normal vectors) together cover the whole field of view in the camera coordinates. Experimental results validate that accomplishing the joint calibration requires the target CP to be placed at a minimum of 5 different orientations and locations. In other words, the joint calibration of the two sensors can be realized by symmetrically placing the target CP at 5 different orientations in front of the joint calibration device. Overall, this method simplifies the calibration process, shortens the calibration time and improves the calibration efficiency.
