Release Candidate March 2018 #55

Closed



@multiagent-mapping-sheep multiagent-mapping-sheep commented Mar 16, 2018

Release Candidate March 2018

This branch currently requires a special dependencies branch:

maplab_dependencies/#167

Please note that this is a release candidate and needs further testing. If you would like to give it a try, please do so, but read the release notes carefully. This version does not yet conform to the descriptions, tutorials, and reported performance in the wiki! We welcome any feedback you can provide on localization and map-merging performance, as well as on compatibility with your ARM devices.

How to update

There is a new submodule in maplab_dependencies; please run the following:

cd ~/maplab_ws/src/maplab_dependencies
git submodule init
git submodule update --recursive

TODOs

  • Adapt wiki and the uploaded calibration/map data
  • Test tutorials
  • Investigate and fix slightly deteriorated performance of map alignment and merging.
  • Finalize release and add release tag

Changes

  • Gridded Feature Detector

    Multithreaded and more evenly distributed feature detection. It is enabled by default, but can be disabled using this flag:

    --feature_tracking_gridded_detector_use_gridded=false


    Credits: @floriantschopp
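The idea behind a gridded detector can be sketched in a few lines: keypoints are bucketed into grid cells and only the strongest few per cell are kept, which spreads detections evenly over the image instead of letting them cluster in high-texture regions. This is an illustrative, self-contained sketch; the struct and function names are hypothetical and not the actual maplab implementation (which also parallelizes detection per cell).

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical keypoint type; real detectors also carry scale, angle, etc.
struct Keypoint {
  float x, y, response;
};

// Bucket keypoints into a grid_cols x grid_rows grid and keep only the
// max_per_cell strongest responses in each cell.
std::vector<Keypoint> selectGridded(
    const std::vector<Keypoint>& keypoints, int image_w, int image_h,
    int grid_cols, int grid_rows, std::size_t max_per_cell) {
  const float cell_w = static_cast<float>(image_w) / grid_cols;
  const float cell_h = static_cast<float>(image_h) / grid_rows;
  std::vector<std::vector<Keypoint>> cells(grid_cols * grid_rows);
  for (const Keypoint& kp : keypoints) {
    const int cx = std::min(static_cast<int>(kp.x / cell_w), grid_cols - 1);
    const int cy = std::min(static_cast<int>(kp.y / cell_h), grid_rows - 1);
    cells[cy * grid_cols + cx].push_back(kp);
  }
  std::vector<Keypoint> result;
  for (std::vector<Keypoint>& cell : cells) {
    // Sort each cell by descending response and truncate.
    std::sort(cell.begin(), cell.end(),
              [](const Keypoint& a, const Keypoint& b) {
                return a.response > b.response;
              });
    if (cell.size() > max_per_cell) cell.resize(max_per_cell);
    result.insert(result.end(), cell.begin(), cell.end());
  }
  return result;
}
```

With a per-cell budget, a cluster of many strong corners in one cell cannot crowd out weaker but well-distributed features elsewhere.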

  • Refactoring of the SensorManager

    The SensorManager now also owns all the sensors, including the NCamera.

    Credits: @mbuerki

  • Optional Sensor/Camera Resources

While the camera images used for visual odometry can be clearly associated with one vertex of the pose graph, some resources (images, point clouds, GPS measurements) cannot. These resources might originate from a sensor that is not triggered at the same time as the primary camera setup (e.g. RGB-D camera, color cameras, lidars, GPS). These sensors or cameras can now be stored in the SensorManager and their data attached to a specific mission and timestamp.

    template <typename SensorId, typename DataType>
    void VIMap::addOptionalSensorResource(
            const backend::ResourceType& type, const SensorId& camera_id,
            const int64_t timestamp_ns, const DataType& resource, VIMission* mission);
    
    template <typename SensorId, typename DataType>
    bool VIMap::getOptionalSensorResource(
             const VIMission& mission, const backend::ResourceType& type,
             const SensorId& sensor_id, const int64_t timestamp_ns,
             DataType* resource) const;
    
    template <typename SensorId, typename DataType>
    bool VIMap::getClosestOptionalSensorResource(
            const VIMission& mission, const backend::ResourceType& type,
            const SensorId& sensor_id, const int64_t timestamp_ns,
            const int64_t tolerance_ns, DataType* resource,
            int64_t* closest_timestamp_ns) const;

    Credits: @mfehr, @mbuerki
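The lookup semantics of getClosestOptionalSensorResource can be illustrated with a self-contained sketch (not maplab code): resources are indexed by timestamp, and a query succeeds only if the nearest stored timestamp lies within a tolerance window.

```cpp
#include <cstdint>
#include <cstdlib>
#include <iterator>
#include <map>

// Illustrative stand-in for the per-mission, per-sensor resource index.
// Returns true and fills *resource / *closest_ns if the closest stored
// timestamp is within tolerance_ns of query_ns.
template <typename DataType>
bool getClosestResource(
    const std::map<int64_t, DataType>& resources, int64_t query_ns,
    int64_t tolerance_ns, DataType* resource, int64_t* closest_ns) {
  if (resources.empty()) return false;
  // First entry with timestamp >= query; the entry before it may be closer.
  auto upper = resources.lower_bound(query_ns);
  auto best = resources.end();
  if (upper != resources.end()) best = upper;
  if (upper != resources.begin()) {
    auto prev = std::prev(upper);
    if (best == resources.end() ||
        std::llabs(prev->first - query_ns) <
            std::llabs(best->first - query_ns)) {
      best = prev;
    }
  }
  if (std::llabs(best->first - query_ns) > tolerance_ns) return false;
  *resource = best->second;
  *closest_ns = best->first;
  return true;
}
```

An ordered map makes both the exact-timestamp lookup (addOptionalSensorResource/getOptionalSensorResource) and the nearest-neighbor query logarithmic in the number of stored resources.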

  • Resource Importer

This tool allows you to attach optional resources such as images, depth maps or point clouds to an existing VIMap. The main application we had in mind is to attach the point clouds or depth maps of an RGB-D sensor (e.g. ZR300) to the VIMap, such that they can be used to create a dense 3D reconstruction with the tools described here. Instructions on how to use it can be found here.

    Credits: @mfehr

  • maplab now builds on ARM

We added experimental support for ARM devices such as the Nvidia Jetson TX2. Even though we currently do not have an ARM build server, we have tested this for ARMv7-A and ARMv8-A (list of ARM architectures). See these issues for more details: Build maplab on ARMv7-A (32bit) #7, Failure to compile in Nvidia TX2 (ARM) #12, Failure to compile in Nvidia TX2 (Aarch64) #18.

    Credits: @fabianbl, @Alabate

  • Function to artificially disturb VIMap

We introduced a new function to artificially disturb a VIMap. It is used mostly in unit tests, but is also very useful for developing and debugging bundle adjustment and loop closure algorithms.

    In file: algorithms/vi-map-helpers/include/vi-map-helpers/vi-map-manipulation.h

    void VIMapManipulation::artificiallyDisturbVertices()

    Credits: @eggerk
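A minimal sketch of what "artificially disturbing" a map can look like: perturb each vertex position with zero-mean Gaussian noise, then check whether optimization recovers the original geometry. The Vertex struct and free function below are hypothetical stand-ins; the real implementation operates on a VIMap inside vi-map-manipulation.h.

```cpp
#include <array>
#include <random>
#include <vector>

// Hypothetical minimal vertex: position in the global frame.
struct Vertex {
  std::array<double, 3> p_G;
};

// Add i.i.d. zero-mean Gaussian noise (standard deviation sigma_m, in meters)
// to every vertex position. A fixed seed keeps unit tests deterministic.
void artificiallyDisturbVertices(
    std::vector<Vertex>* vertices, double sigma_m, unsigned seed) {
  std::mt19937 rng(seed);
  std::normal_distribution<double> noise(0.0, sigma_m);
  for (Vertex& v : *vertices) {
    for (double& c : v.p_G) c += noise(rng);
  }
}
```

Disturbing a known-good map and measuring how well bundle adjustment or loop closure pulls it back is a simple, repeatable regression test.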

Changes from previous release candidate (January 2018)

  • ROVIOLI - Localization using Structure Constraints:

    Previous Version:

    The localization, represented by T_G_M (the transformation from mission/odometry to global frame), was estimated by ROVIO as part of its state based on updates sent by the feature-based localization. These updates contained a 6-DoF global pose estimate (T_G_I) with a constant, hard-coded covariance.

    Update:

This release introduces structure localization. ROVIO no longer receives localization updates as a 6-DoF global pose (T_G_I); instead it receives 2D-3D matches and updates the state using reprojection errors. These 2D-3D matches denote the correspondences between localization map landmarks and keypoints in the frame we are trying to localize. Assuming a constant, isotropic covariance of the localization landmark positions yields better performance than assuming a constant covariance of the 6-DoF pose constraint. Furthermore, we integrated a reset logic that resets the localization state when a large change is detected, i.e. when re-localizing after leaving the localization map for some time.

    Effects:

    • The localization performance increases significantly.
    • The mission/odometry frame (T_M_I) is now more stable and is no longer affected by jumps in the localization. This is crucial for robot control, which is usually performed on the locally consistent but more stable T_M_I rather than the globally consistent but potentially discontinuous T_G_I.
    • Localization receives a proper covariance estimate, which improves the behavior in the presence of dubious localizations or drift of the estimator.

    Credits: @schneith, @dymczykm
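The residual behind a structure constraint can be sketched in isolation: project a localization-map landmark (expressed in the camera frame) through a pinhole model and compare it with the observed keypoint. This is an illustrative sketch only; names and the simplified (distortion-free) pinhole model are assumptions, not ROVIO's actual update equations.

```cpp
#include <array>

// Hypothetical pinhole intrinsics: focal lengths and principal point in pixels.
struct Pinhole {
  double fx, fy, cx, cy;
};

// Reprojection error of one 2D-3D match: observed keypoint minus the
// projection of the landmark p_C (landmark position in the camera frame).
std::array<double, 2> reprojectionError(
    const Pinhole& cam, const std::array<double, 3>& p_C,
    const std::array<double, 2>& keypoint_px) {
  const double u = cam.fx * p_C[0] / p_C[2] + cam.cx;
  const double v = cam.fy * p_C[1] / p_C[2] + cam.cy;
  return {keypoint_px[0] - u, keypoint_px[1] - v};
}
```

Feeding many such per-match residuals into the filter, each weighted by an isotropic landmark-position covariance, is what replaces the single hard-coded 6-DoF pose update described above.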

  • ROVIOLI - Health checker:

    New Feature:

ROVIOLI now comes with a health checker that monitors the current state of the odometry and, if it detects a divergence, resets the state to what it assumes to be the last stable state. Currently, the following health criteria are implemented:

    • feature distance covariance
    • velocity

    Since an unexpected VIO reset can cause considerable damage, e.g. on a flying platform, we disabled this feature by default. However, we highly recommend that you give it a try and enable it with:

    --rovioli_enable_health_checking

    Effects:

    • If enabled, ROVIOLI recovers within seconds after ROVIO diverges, e.g. because the camera is occluded for too long or the sensor suffered a shock.

    Credits: @dymczykm
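The velocity criterion listed above can be sketched as a simple plausibility bound (a hypothetical illustration, not ROVIOLI's actual checker): if the estimated speed exceeds what the platform can physically achieve, or is not finite, the filter has most likely diverged and a reset is warranted.

```cpp
#include <cmath>

// Returns true if the estimated velocity (m/s, in any fixed frame) is within
// a physically plausible bound; false signals likely filter divergence.
bool velocityIsHealthy(double vx, double vy, double vz, double max_speed_mps) {
  const double speed = std::sqrt(vx * vx + vy * vy + vz * vz);
  return std::isfinite(speed) && speed <= max_speed_mps;
}
```

A real checker would additionally debounce over several frames before triggering a reset, since a single outlier estimate should not restart the odometry.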

  • maplab/ROVIOLI - Rotation invariant descriptors

    Update:

This feature was essentially already present in the initial release, but it was disabled by default, because rotation invariance led to slightly lower overall descriptor matching performance when the camera orientation is static. However, to make ROVIOLI applicable to the general use case, e.g. down-facing aerial cameras or end-effector cameras on robotic arms, we have now enabled this option by default. It can be switched on/off using the following flag:

    --rovioli_descriptor_rotation_invariance

    Effects:

    • Descriptors will not match across maps created with different settings; if you create a localization map, make sure you use the same setting when localizing from it. Since the option is now ON by default, previously created localization maps might not work without switching off rotation invariance in ROVIOLI.
    • The maplab console can be launched without considering this option, because the flag only takes effect when creating new descriptors.
    • The ROVIOLI flag overrides the value of the underlying flag, so do not set the latter directly:
      --feature_tracking_descriptor_rotation_invariance.


  • ROVIOLI - Unified maplab and rovio sigmas

    Previous version:

    Due to a difference in the convention of the IMU parameters, two separate IMU parameter files for ROVIO and maplab had to be passed to ROVIOLI.

    --imu_parameters_maplab=$IMU_PARAMETERS_MAPLAB
    --imu_parameters_rovio=$IMU_PARAMETERS_ROVIO

    Update:

We unified the parameters, and it now suffices to pass only the maplab IMU parameters to ROVIOLI.

    Effects:

    • ROVIOLI can now be launched with just these IMU parameters:
      --imu_parameters_maplab=$IMU_PARAMETERS_MAPLAB
    • We observed a better performance in terms of drift in the VIO when using the maplab values.
    • If the performance deteriorates for your application and you prefer to have separate parameters for ROVIO and maplab, you can still pass in ROVIO sigmas using the following NEW flag:
      --external_imu_parameters_rovio=$IMU_PARAMETERS_ROVIO
The ROVIO IMU parameters correspond to covariances, while the maplab parameters correspond to sigmas; internally, we therefore simply pass the squared maplab values to ROVIO. To obtain the new ROVIO IMU parameters, take the square root of your previous ROVIO parameters.
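The sigma/covariance relation above amounts to two one-line conversions, sketched here with hypothetical helper names: squaring turns a maplab sigma into a ROVIO covariance parameter, and taking the square root converts an old-style ROVIO parameter into the new sigma convention.

```cpp
#include <cmath>

// maplab sigma -> value handed to ROVIO internally (covariance = sigma^2).
double rovioParamFromMaplabSigma(double sigma) {
  return sigma * sigma;
}

// Old ROVIO covariance parameter -> new sigma-convention parameter.
double newRovioParamFromOld(double old_covariance) {
  return std::sqrt(old_covariance);
}
```

For example, a gyroscope noise sigma of 2e-3 corresponds to the old ROVIO covariance value 4e-6.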


  • ROVIOLI - Intel Realsense ZR300 ROS publisher and parameters

    New feature:

    Even though the ZR300 has been discontinued, it is still one of the most capable, affordable and publicly available visual-inertial sensors. We therefore added a template calibration for camera and IMU to facilitate the integration of ROVIOLI on this sensor. The files can be found in maplab/applications/rovioli/share.

Furthermore, to improve support for this sensor on Ubuntu 14.04 and 16.04, we provide a dedicated ROS sensor node here: maplab_realsense. We also reorganized the calibration files to correspond to the three sensors we provide calibration templates for: the ZR300, the Skybotix VI-sensor and the Google Tango Yellowstone tablet.

    Credits: @mfehr

  • ROVIOLI - Improve latency of pose estimation

    Previous version:

The ROVIO filter update is triggered by the RovioInterface object. We discovered that the update was triggered unnecessarily late, only after the next camera frame was fed to ROVIO. For instance, at a frame rate of 10 Hz this resulted in an additional 100 ms delay of the pose output. This behavior can lead to issues when directly controlling an agile robot (e.g. an MAV) using the pose output of ROVIOLI.

    Update:

The logic for updating the ROVIO filter has been changed so that updates are processed as soon as they leave the update queue. The filter no longer waits for the next image before processing the previous updates.

    Effects:

    ROVIOLI pose estimates are published with a smaller latency. This is particularly important for real-time systems and control purposes.

    Credits: @dymczykm

@ethzasl-jenkins

Test FAILed.
