beam_calibration

This module contains camera and distortion models for modeling real-world cameras.

Classes

CameraType (enum)

An enum used to keep track of camera types. It can take the following values: { RADTAN = 0, KANNALABRANDT, DOUBLESPHERE, LADYBUG }

CameraModel (Parent)

CameraModel is the parent class of all camera models in libbeam. It provides implementations of functionality common to all models, along with methods for creating models from .json or .conf files, or from manual input. The class also declares abstract methods, relating to projection and back projection, that each model must implement separately.

Ladybug (Subclass)

LadybugCamera implements our CameraModel API using the SDK from Flir found here: https://www.flir.ca/products/ladybug-sdk/

Radtan (Subclass)

The Radtan model uses the classic pinhole projection model together with a separate distortion function that models radial and tangential distortion. See: https://www.mathworks.com/help/vision/ug/camera-calibration.html

Parameters: [fx, fy, cx, cy, k1, k2, p1, p2]
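
To illustrate how these eight parameters enter the projection, here is a minimal standalone sketch of the standard pinhole-plus-radtan math (the textbook Brown-Conrady formulation, not libbeam's actual implementation):

#include <Eigen/Dense>

// Sketch of radtan projection: standard formulation, not libbeam's code.
Eigen::Vector2d ProjectRadtan(const Eigen::Vector3d& P, double fx, double fy,
                              double cx, double cy, double k1, double k2,
                              double p1, double p2) {
  // Pinhole step: normalize onto the z = 1 image plane.
  double x = P.x() / P.z();
  double y = P.y() / P.z();
  double r2 = x * x + y * y;
  // Radial distortion factor.
  double radial = 1.0 + k1 * r2 + k2 * r2 * r2;
  // Radial + tangential distortion applied to the normalized coordinates.
  double xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x);
  double yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y;
  // Intrinsics map the distorted coordinates to pixels.
  return {fx * xd + cx, fy * yd + cy};
}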

KannalaBrandt (Subclass)

KannalaBrandt implements the camera model proposed in [1]: https://ieeexplore.ieee.org/document/1642666

Parameters: [fx, fy, cx, cy, k1, k2, k3, k4]
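
As with Radtan above, a minimal sketch of how these parameters are conventionally used (the equidistant-style formulation of the paper, which also matches OpenCV's fisheye model; not libbeam's actual code):

#include <Eigen/Dense>
#include <cmath>

// Sketch of KannalaBrandt projection (OpenCV fisheye form), not libbeam's code.
Eigen::Vector2d ProjectKB(const Eigen::Vector3d& P, double fx, double fy,
                          double cx, double cy, double k1, double k2,
                          double k3, double k4) {
  double r = std::sqrt(P.x() * P.x() + P.y() * P.y());
  double theta = std::atan2(r, P.z());
  // Odd polynomial in the incidence angle theta:
  // theta_d = theta + k1*theta^3 + k2*theta^5 + k3*theta^7 + k4*theta^9.
  double t2 = theta * theta;
  double theta_d = theta * (1.0 + t2 * (k1 + t2 * (k2 + t2 * (k3 + t2 * k4))));
  // Guard against division by zero for points on the optical axis.
  double scale = (r > 1e-8) ? theta_d / r : 1.0;
  return {fx * scale * P.x() + cx, fy * scale * P.y() + cy};
}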

DoubleSphere (Subclass)

DoubleSphere implements the camera model proposed in [2]: https://arxiv.org/abs/1807.08957

Parameters: [fx, fy, cx, cy, epsilon, alpha]
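
A corresponding sketch for Double Sphere, following the projection equation in the paper (the parameter called epsilon here is the xi of the paper; illustrative only, not libbeam's code):

#include <Eigen/Dense>
#include <cmath>

// Sketch of Double Sphere projection per Usenko et al. [2], not libbeam's code.
Eigen::Vector2d ProjectDS(const Eigen::Vector3d& P, double fx, double fy,
                          double cx, double cy, double epsilon, double alpha) {
  double d1 = P.norm();
  // z shifted by the offset between the two sphere centers.
  double zs = epsilon * d1 + P.z();
  double d2 = std::sqrt(P.x() * P.x() + P.y() * P.y() + zs * zs);
  double denom = alpha * d2 + (1.0 - alpha) * zs;
  return {fx * P.x() / denom + cx, fy * P.y() / denom + cy};
}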

Info

The Radtan model is equivalent to OpenCV's base camera model, and KannalaBrandt is equivalent to OpenCV's fisheye model.
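
Because of this equivalence, a KannalaBrandt calibration can be cross-checked against OpenCV directly. A minimal sketch using (rounded) intrinsics from the configuration file below; the mapping of [k1, k2, k3, k4] onto OpenCV's fisheye distortion vector is the assumption being illustrated:

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

int main() {
  // Intrinsic matrix built from [fx, fy, cx, cy].
  cv::Matx33d K(783.44, 0, 996.34,
                0, 783.68, 815.48,
                0, 0, 1);
  // Assumption: [k1, k2, k3, k4] map onto OpenCV's fisheye D vector.
  cv::Vec4d D(0.00528, 0.00694, -0.00253, -0.00139);

  std::vector<cv::Point3d> object_points{{10.0, 10.0, 10.0}};
  std::vector<cv::Point2d> image_points;
  cv::Vec3d rvec(0, 0, 0), tvec(0, 0, 0);  // identity camera pose

  // Pixel coordinates here should agree with KannalaBrandt's ProjectPoint.
  cv::fisheye::projectPoints(object_points, image_points, rvec, tvec, K, D);
  return 0;
}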

Example Use

The following shows an example configuration file for a camera model, and code on how to use this module.

{
  "date": "2019-05-29",
  "method": "opencv",
  "camera_type": "KANNALABRANDT",
  "image_width": 2048,
  "image_height": 1536,
  "frame_id": "F1_link",
  "intrinsics": [
    783.44463219576687,
    783.68479107567089,
    996.34300258081578,
    815.47561902246832,
    0.0052822823133193853,
    0.0069435221311202099,
    -0.0025332897347625323,
    -0.0013896892385779631
  ]
}
#include "beam_calibration/CameraModel.h"
#include <boost/filesystem.hpp>

int main() {
  // Read in cofniguration file and create camera
  std::string radtan_location = __FILE__;
  radtan_location.erase(radtan_location.end() - 14, radtan_location.end());
  radtan_location += "tests/test_data/F2.json";
  std::shared_ptr<beam_calibration::CameraModel> radtan =
  beam_calibration::CameraModel::Create(radtan_location);

  // Projecting a point
  Eigen::Vector3d point(10, 10, 10)
  opt<Eigen::Vector2i> coords = radtan->ProjectPoint(point);
  if (!coords.has_value()) { continue; }
  uint16_t col = coords.value()(0, 0);
  uint16_t row = coords.value()(1, 0);
  cv::Vec3b color = image.at<cv::Vec3b>(row, col);

  return 0;
}
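
Continuing from the example above, the reverse direction (back projection) mentioned in the CameraModel description would look something like the following. The method name and signature here are assumptions for illustration; check CameraModel.h for the actual API:

// Hypothetical back projection call (name and signature assumed, see above):
Eigen::Vector2i pixel(100, 100);
opt<Eigen::Vector3d> ray = radtan->BackProject(pixel);
if (ray.has_value()) {
  // ray.value() would be the ray direction through that pixel
}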

TfTree Object Usage

Adding Transforms

When using a TfTree object with both static and dynamic transforms, add the static transforms first by calling the LoadJSON() function, then add the dynamic transforms by calling the AddTransform() function, as in the sketch below.
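
A minimal sketch of that ordering (the extrinsics path is a placeholder and the AddTransform argument list is an assumption; check the TfTree header for the exact signatures):

#include "beam_calibration/TfTree.h"
#include <geometry_msgs/TransformStamped.h>

int main() {
  beam_calibration::TfTree tf_tree;

  // 1. Add static transforms first, from a JSON extrinsics file
  //    (placeholder path).
  tf_tree.LoadJSON("extrinsics.json");

  // 2. Then add dynamic transforms. The argument list for AddTransform
  //    shown here is an assumption.
  geometry_msgs::TransformStamped tf_msg;
  tf_msg.header.stamp = ros::Time(100.0);
  tf_msg.header.frame_id = "world";
  tf_msg.child_frame_id = "F1_link";
  tf_msg.transform.rotation.w = 1.0;  // identity rotation
  tf_tree.AddTransform(tf_msg);
  return 0;
}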

Getting Transforms

Call GetTransformEigen() to get transforms in Eigen::Affine3d format:

// Getting a static Eigen::Affine3d transform
Eigen::Affine3d T = tf_tree.GetTransformEigen(to_frame, from_frame);

// Getting a dynamic Eigen::Affine3d transform
Eigen::Affine3d T = tf_tree.GetTransformEigen(to_frame, from_frame, lookup_time);

Call GetTransformROS() to get transforms in geometry_msgs::TransformStamped format:

// Getting a static geometry_msgs::TransformStamped transform
geometry_msgs::TransformStamped T_msg = tf_tree.GetTransformROS(to_frame, from_frame);

// Getting a dynamic geometry_msgs::TransformStamped transform
geometry_msgs::TransformStamped T_msg = tf_tree.GetTransformROS(to_frame, from_frame, lookup_time);

References

  1. J. Kannala and S. Brandt (2006). A Generic Camera Model and Calibration Method for Conventional, Wide-Angle, and Fish-Eye Lenses. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 8, pp. 1335-1340.

  2. V. Usenko, N. Demmel, and D. Cremers (2018). The Double Sphere Camera Model. 2018 International Conference on 3D Vision (3DV). doi:10.1109/3dv.2018.00069.

  3. B. Khomutenko, G. Garcia, and P. Martinet (2016). An Enhanced Unified Camera Model. IEEE Robotics and Automation Letters.
