Kinect Calibration

Research.KinectCalibration History


May 26, 2014, at 01:45 AM by 208.66.25.130 -
Changed lines 5-6 from:
to:

NOTE: OpenNI and recent versions of Freenect generally do not need a custom calibration.

NOTE: versions 0.5 and higher of RGBDemo do not require manual annotations.

Changed line 122 from:

Also check this webpage for more information on automatic calibration: Ros Kinect

to:

Also check this webpage for more information on automatic calibration: Ros Kinect

August 07, 2011, at 04:06 PM by 87.217.160.55 -
Changed line 59 from:

P3D.z = depth(x,y)

to:

P3D.z = depth(x_d,y_d)

November 23, 2010, at 06:26 PM by 163.117.150.79 -
Changed lines 5-6 from:

(:title Kinect:)

to:

(:title Kinect Calibration:)

Changed lines 44-45 from:

Note that another transformation claimed to be more accurate was proposed there: [http://groups.google.com/group/openkinect/browse_thread/thread/31351846fd33c78/e98a94ac605b9f21?lnk=gst&q=stephane#e98a94ac605b9f21 Stephane Magnenat post] but I haven’t tested it yet.

to:

Note that another transformation claimed to be more accurate was proposed there: Stephane Magnenat post but I haven’t tested it yet.

Changed lines 30-37 from:

Stereo calibration

By also selecting the corners of the chessboard on the color images (these would be easy to extract automatically using chessboard recognition), standard stereo calibration can be performed.

Mapping depth pixels with color pixels

The first step is to undistort rgb and depth images using the estimated distortion coefficients. Then, using the depth camera intrinsics, each pixel (x_d,y_d) of the depth camera can be projected to metric 3D space using the following formula:

to:

Transformation of raw depth values into meters

Raw depth values are integers between 0 and 2047. They can be transformed into depth in meters using the parameters given on the Ros Kinect page.

Changed lines 34-36 from:

P3D.x = (x_d - cx_d) * depth(x_d,y_d) / fx_d
P3D.y = (y_d - cy_d) * depth(x_d,y_d) / fy_d
P3D.z = depth(x,y)

to:

float raw_depth_to_meters(int raw_depth)
{
  // Raw 11-bit depth values run from 0 to 2047; 2047 means no reading.
  if (raw_depth < 2047)
  {
    return 1.0 / (raw_depth * -0.0030711016 + 3.3309495161);
  }
  return 0;
}
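
As a usage sketch (the buffer layout is an assumption, not something from the original page), the conversion applies per pixel over a 640x480 frame:

[@
#include <stdint.h>

// Hypothetical helper: convert a full 640x480 raw depth frame to meters
// using raw_depth_to_meters() above. Buffers are caller-allocated.
void frame_to_meters(const uint16_t* raw, float* meters)
{
    for (int i = 0; i < 640 * 480; ++i)
        meters[i] = raw_depth_to_meters(raw[i]);
}
@]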

Changed lines 44-47 from:

with fx_d, fy_d, cx_d and cy_d the intrinsics of the depth camera.

We can then reproject each 3D point on the color image and get its color:

to:

Note that another transformation claimed to be more accurate was proposed there: [http://groups.google.com/group/openkinect/browse_thread/thread/31351846fd33c78/e98a94ac605b9f21?lnk=gst&q=stephane#e98a94ac605b9f21 Stephane Magnenat post] but I haven’t tested it yet.

Stereo calibration

By also selecting the corners of the chessboard on the color images (these would be easy to extract automatically using chessboard recognition), standard stereo calibration can be performed.

Mapping depth pixels with color pixels

The first step is to undistort rgb and depth images using the estimated distortion coefficients. Then, using the depth camera intrinsics, each pixel (x_d,y_d) of the depth camera can be projected to metric 3D space using the following formula:

Changed lines 55-57 from:

P3D' = R.P3D + T
P2D_rgb.x = (P3D'.x * fx_rgb / P3D'.z) + cx_rgb
P2D_rgb.y = (P3D'.y * fy_rgb / P3D'.z) + cy_rgb

to:

P3D.x = (x_d - cx_d) * depth(x_d,y_d) / fx_d
P3D.y = (y_d - cy_d) * depth(x_d,y_d) / fy_d
P3D.z = depth(x,y)

Changed lines 60-63 from:

with R and T the rotation and translation parameters estimated during the stereo calibration.

The parameters I could estimate for my Kinect are:

  • Color
to:

with fx_d, fy_d, cx_d and cy_d the intrinsics of the depth camera.

We can then reproject each 3D point on the color image and get its color:

Added lines 65-74:

P3D' = R.P3D + T
P2D_rgb.x = (P3D'.x * fx_rgb / P3D'.z) + cx_rgb
P2D_rgb.y = (P3D'.y * fy_rgb / P3D'.z) + cy_rgb
@]

with R and T the rotation and translation parameters estimated during the stereo calibration.

The parameters I could estimate for my Kinect are:

  • Color

[@

Changed line 103 from:

Also check this webpage for more information on automatic calibration: [http://www.ros.org/wiki/kinect_node|Ros Kinect]

to:

Also check this webpage for more information on automatic calibration: Ros Kinect

Changed lines 13-14 from:

Here is a preliminary semi-automatic way to calibrate the Kinect depth sensor and the rgb output to enable a mapping between them. You can see a result there:

to:

Here is a preliminary semi-automatic way to calibrate the Kinect depth sensor and the rgb output to enable a mapping between them. You can see some results there:

Added lines 102-103:

Also check this webpage for more information on automatic calibration: [http://www.ros.org/wiki/kinect_node|Ros Kinect]

Changed lines 97-98 from:

ROS calibration files

to:

ROS calibration files

Added lines 1-10:

(:linebreaks:)

(:title Kinect:)

(:htoc:)


Added lines 86-91:

ROS calibration files

Here are some calibration files in ROS (http://code.ros.org) format:

Attach:calibration_rgb.yaml
Attach:calibration_depth.yaml

Added line 4:
Changed lines 3-4 from:

Here is a preliminary semi-automatic way to calibrate the Kinect depth sensor and the rgb output to enable a mapping between them. You can see a result there: (:youtube BVxKvZKKpds:)

to:

Here is a preliminary semi-automatic way to calibrate the Kinect depth sensor and the rgb output to enable a mapping between them. You can see a result there: (:youtube BVxKvZKKpds:)

Changed lines 3-4 from:

Here is a preliminary semi-automatic way to calibrate the Kinect depth sensor and the rgb output to enable a mapping between them.

to:

Here is a preliminary semi-automatic way to calibrate the Kinect depth sensor and the rgb output to enable a mapping between them. You can see a result there: (:youtube BVxKvZKKpds:)

Changed lines 12-13 from:
to:
Changed lines 16-17 from:
to:
Changed lines 20-21 from:
to:
Changed lines 21-22 from:

First step is to undistort rgb and depth images using the estimated distortion coefficients. Then, using the depth camera intrinsics, each pixel (x_d,y_d) of the depth camera can be projected to metric 3D space using the following formula:

to:

The first step is to undistort rgb and depth images using the estimated distortion coefficients. Then, using the depth camera intrinsics, each pixel (x_d,y_d) of the depth camera can be projected to metric 3D space using the following formula:

Changed line 23 from:

@@

to:

[@

Changed lines 27-28 from:

@@

to:

@]

Changed line 33 from:

@@

to:

[@

Changed lines 37-38 from:

@@

to:

@]

Added line 43:

[@

Changed lines 53-54 from:
to:

@]

Added line 56:

[@

Changed lines 66-67 from:
to:

@]

Added line 69:

[@

Changed lines 79-80 from:

-1.0916736334336222e-02 ]

to:

-1.0916736334336222e-02 ] @]

Added lines 10-74:

Calibration of the color camera intrinsics

The color camera intrinsics can be calibrated using standard chessboard recognition.
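
As a reference, this step might look like the minimal OpenCV C++ sketch below; the board dimensions, square size, and image list are assumptions rather than values from this page:

[@
// Sketch: RGB intrinsic calibration from chessboard views (OpenCV C++).
// Board geometry and the image list are assumptions; adapt to your pattern.
#include <opencv2/opencv.hpp>
#include <vector>

void calibrate_rgb(const std::vector<cv::Mat>& images,
                   cv::Mat& K_rgb, cv::Mat& dist_rgb)
{
    cv::Size board(8, 6);   // inner corner count (assumed)
    float square = 0.03f;   // square size in meters (assumed)

    // 3D model of the board: one point per inner corner, z = 0.
    std::vector<cv::Point3f> corners3d;
    for (int y = 0; y < board.height; ++y)
        for (int x = 0; x < board.width; ++x)
            corners3d.push_back(cv::Point3f(x * square, y * square, 0.0f));

    std::vector<std::vector<cv::Point2f>> image_pts;
    std::vector<std::vector<cv::Point3f>> object_pts;
    for (const cv::Mat& img : images) {
        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(img, board, corners)) {
            image_pts.push_back(corners);
            object_pts.push_back(corners3d);
        }
    }

    // Estimates fx_rgb, fy_rgb, cx_rgb, cy_rgb and k1, k2, p1, p2, k3.
    std::vector<cv::Mat> rvecs, tvecs;
    cv::calibrateCamera(object_pts, image_pts, images[0].size(),
                        K_rgb, dist_rgb, rvecs, tvecs);
}
@]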

Calibration of the depth camera intrinsics

This is done by extracting the corners of the chessboard on the depth image and storing them. This could be done automatically, but right now I selected them by hand using a small utility program.

Stereo calibration

By also selecting the corners of the chessboard on the color images (these would be easy to extract automatically using chessboard recognition), standard stereo calibration can be performed, as sketched below.
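
With corner lists from both cameras in hand, this step can be sketched with OpenCV's stereoCalibrate; the variable names are mine, assuming the outputs of the two previous steps:

[@
// Sketch: stereo calibration between the depth and color cameras.
// depth_pts / rgb_pts are the stored chessboard corners from each
// camera; object_pts is the corresponding 3D board model.
#include <opencv2/opencv.hpp>
#include <vector>

void calibrate_stereo(
    const std::vector<std::vector<cv::Point3f>>& object_pts,
    const std::vector<std::vector<cv::Point2f>>& depth_pts,
    const std::vector<std::vector<cv::Point2f>>& rgb_pts,
    cv::Mat& K_depth, cv::Mat& dist_depth,
    cv::Mat& K_rgb, cv::Mat& dist_rgb,
    cv::Size image_size, cv::Mat& R, cv::Mat& T)
{
    cv::Mat E, F;  // essential/fundamental matrices, unused here
    // By default OpenCV keeps the supplied intrinsics fixed and only
    // estimates the rotation R and translation T between the cameras.
    cv::stereoCalibrate(object_pts, depth_pts, rgb_pts,
                        K_depth, dist_depth, K_rgb, dist_rgb,
                        image_size, R, T, E, F);
}
@]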

Mapping depth pixels with color pixels

First step is to undistort rgb and depth images using the estimated distortion coefficients. Then, using the depth camera intrinsics, each pixel (x_d,y_d) of the depth camera can be projected to metric 3D space using the following formula:

@@
P3D.x = (x_d - cx_d) * depth(x_d,y_d) / fx_d
P3D.y = (y_d - cy_d) * depth(x_d,y_d) / fy_d
P3D.z = depth(x,y)
@@

with fx_d, fy_d, cx_d and cy_d the intrinsics of the depth camera.
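
A direct transcription of this formula, as a plain C++ sketch (assuming the depth value is already converted to meters; the z line follows the later correction to depth(x_d,y_d) recorded at the top of this history):

[@
// Sketch: project a depth pixel (x_d, y_d) with depth in meters into
// the depth camera's metric 3D frame, using the depth intrinsics.
struct Point3 { float x, y, z; };

Point3 depth_to_3d(int x_d, int y_d, float depth_m,
                   float fx_d, float fy_d, float cx_d, float cy_d)
{
    Point3 p;
    p.x = (x_d - cx_d) * depth_m / fx_d;
    p.y = (y_d - cy_d) * depth_m / fy_d;
    p.z = depth_m;
    return p;
}
@]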

We can then reproject each 3D point on the color image and get its color:

@@
P3D' = R.P3D + T
P2D_rgb.x = (P3D'.x * fx_rgb / P3D'.z) + cx_rgb
P2D_rgb.y = (P3D'.y * fy_rgb / P3D'.z) + cy_rgb
@@

with R and T the rotation and translation parameters estimated during the stereo calibration.
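
The reprojection can be sketched the same way; treating R as a row-major 3x3 array is my assumption about the layout:

[@
// Sketch: transform a 3D point into the color camera frame
// (P3D' = R.P3D + T, R row-major) and project it with the color
// intrinsics. Point3 is the struct from the previous sketch.
void project_to_rgb(Point3 p, const float R[9], const float T[3],
                    float fx_rgb, float fy_rgb, float cx_rgb, float cy_rgb,
                    float* x_rgb, float* y_rgb)
{
    float px = R[0]*p.x + R[1]*p.y + R[2]*p.z + T[0];
    float py = R[3]*p.x + R[4]*p.y + R[5]*p.z + T[1];
    float pz = R[6]*p.x + R[7]*p.y + R[8]*p.z + T[2];
    *x_rgb = px * fx_rgb / pz + cx_rgb;
    *y_rgb = py * fy_rgb / pz + cy_rgb;
}
@]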

The parameters I could estimate for my Kinect are:

  • Color

fx_rgb 5.2921508098293293e+02
fy_rgb 5.2556393630057437e+02
cx_rgb 3.2894272028759258e+02
cy_rgb 2.6748068171871557e+02
k1_rgb 2.6451622333009589e-01
k2_rgb -8.3990749424620825e-01
p1_rgb -1.9922302173693159e-03
p2_rgb 1.4371995932897616e-03
k3_rgb 9.1192465078713847e-01

  • Depth

fx_d 5.9421434211923247e+02
fy_d 5.9104053696870778e+02
cx_d 3.3930780975300314e+02
cy_d 2.4273913761751615e+02
k1_d -2.6386489753128833e-01
k2_d 9.9966832163729757e-01
p1_d -7.6275862143610667e-04
p2_d 5.0350940090814270e-03
k3_d -1.3053628089976321e+00

Relative transform between the sensors (in meters):

R [ 9.9984628826577793e-01, 1.2635359098409581e-03, -1.7487233004436643e-02,
    -1.4779096108364480e-03, 9.9992385683542895e-01, -1.2251380107679535e-02,
    1.7470421412464927e-02, 1.2275341476520762e-02, 9.9977202419716948e-01 ]

T [ 1.9985242312092553e-02, -7.4423738761617583e-04, -1.0916736334336222e-02 ]
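
For convenience, the same numbers can be transcribed into arrays matching the sketches above; they are specific to my unit, and every Kinect needs its own calibration:

[@
// The estimated depth-to-color extrinsics above, transcribed row-major.
// Device-specific values: recalibrate for your own Kinect.
const float R_depth_to_rgb[9] = {
     9.9984628826577793e-01f,  1.2635359098409581e-03f, -1.7487233004436643e-02f,
    -1.4779096108364480e-03f,  9.9992385683542895e-01f, -1.2251380107679535e-02f,
     1.7470421412464927e-02f,  1.2275341476520762e-02f,  9.9977202419716948e-01f
};
const float T_depth_to_rgb[3] = {
     1.9985242312092553e-02f, -7.4423738761617583e-04f, -1.0916736334336222e-02f
};
@]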

Changed lines 8-9 from:
to:
Changed lines 8-9 from:
to:
Changed lines 8-9 from:
to:
Added lines 1-9:

Calibrating the depth and color cameras

Here is a preliminary semi-automatic way to calibrate the Kinect depth sensor and the rgb output to enable a mapping between them.

It is basically a standard stereo calibration technique (http://opencv.willowgarage.com/documentation/cpp/camera_calibration_and_3d_reconstruction.html); the main difficulty comes from the depth image, which cannot detect a pattern printed on a flat surface. Thus, the pattern has to be created using depth differences.

Here I used a rectangular piece of cardboard cut around a chessboard printed on an A3 sheet of paper.