- Stream Sonar Point Clouds
- Stream Laser 2D clouds (sensor_msgs/PointCloud)
- Stream Laser 3D Point clouds
- Publish Twist messages to the robot based on dynamic tf transforms
- Start all the nodes above simultaneously
- Navigate the environment using ROS' navigation stack
### Introduction
This repo contains a set of clients that, among other things, push velocity commands to the P3_DX robot from Adept MobileRobots and display the point clouds produced by the p3_dx's LIDAR and SONAR scans. The package supports indoor localization and dynamic SLAM via adaptive Monte Carlo Localization (AMCL) for mobile robots, as described by Sebastian Thrun, Wolfram Burgard, and Dieter Fox in their book Probabilistic Robotics (Intelligent Robotics and Autonomous Agents series).
To reduce implementation time, we developed the code in ROS. In addition to sending velocity commands and performing dynamic SLAM based on LIDAR data, the package subscribes to the RosAria package's sonar scans, laser scans, and projected 3D laser scans (point clouds), and provides a 400 x 400 pixel window to visualize these topics in real time. Example point clouds from the sonars and laser scanners are provided below:
Here is an example video of the robot navigating based on velocity commands that are sent to it after receiving the tf transforms broadcast by the rosaria package:
To compile the code, pull the files from the links indicated above.
Please install the Aria and Arnl packages by following the instructions at the Aria package and Arnl links. You will also want to install the ROS wrappers for the Aria and Arnl packages, namely rosarnl and rosaria.
In addition to the above dependencies, you will preferably want to compile the code with C++11. On Linux, ensure you have at least g++ 4.8 and pass the -std=c++11 flag in the CMakeLists.txt file (this is already done by default in the accompanying CMakeLists file).
When you are all set, you can clone this repo to your catkin workspace src
folder and build with
catkin_make --source src/p3_dx
When the compilation finishes, you can run each individual executable as follows:
rosrun sonar_clouds sonar_clouds
Remember to click on the PCL window and zoom out on the clouds for visibility.
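For reference, here is a stripped-down sketch of what a sonar cloud subscriber can look like; the node name and the /RosAria/sonar topic are assumptions for illustration and do not necessarily match the sonar_clouds implementation:

```cpp
#include <ros/ros.h>
#include <sensor_msgs/PointCloud.h>

// Sketch of a sonar point-cloud subscriber. The /RosAria/sonar topic name
// is an assumption; check `rostopic list` for the names on your setup.
void sonarCallback(const sensor_msgs::PointCloud::ConstPtr& cloud)
{
  // A real client would hand these points to a viewer; here we only report the count.
  ROS_INFO("received %lu sonar points in frame %s",
           (unsigned long)cloud->points.size(), cloud->header.frame_id.c_str());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "sonar_listener_sketch");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/RosAria/sonar", 1, sonarCallback);
  ros::spin();
  return 0;
}
```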
rosrun laser_scans laser_scans
rosrun scanner_clouds scanner_clouds
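Similarly, the laser_geometry package can project a sensor_msgs/LaserScan into a sensor_msgs/PointCloud. The following is only a sketch under assumed topic names, not necessarily how scanner_clouds is implemented:

```cpp
#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>
#include <sensor_msgs/PointCloud.h>
#include <laser_geometry/laser_geometry.h>

laser_geometry::LaserProjection projector;  // converts scans to point clouds
ros::Publisher cloud_pub;

void scanCallback(const sensor_msgs::LaserScan::ConstPtr& scan)
{
  sensor_msgs::PointCloud cloud;
  // Project the 2D ranges into a point cloud in the scan's own frame.
  projector.projectLaser(*scan, cloud);
  cloud_pub.publish(cloud);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "scan_projection_sketch");
  ros::NodeHandle nh;
  cloud_pub = nh.advertise<sensor_msgs::PointCloud>("projected_cloud", 1);
  // "scan" is a placeholder; remap it to the laser topic RosAria publishes.
  ros::Subscriber sub = nh.subscribe("scan", 1, scanCallback);
  ros::spin();
  return 0;
}
```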
After retrieving RosAria's latest published transforms, from the odometry frame -> base_link -> laser frame, we generate the transform from the origin to a new pose at time t_1 and move linearly along x according to the following relation:
vel_msg.linear.x = 0.5 * sqrt(pow(transform.getOrigin().x(), 2) +
pow(transform.getOrigin().y(), 2));
and orient the robot along z
according to:
vel_msg.angular.z = 4.0 * atan2(transform.getOrigin().y(),
transform.getOrigin().x());
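Putting the two relations together, a minimal sketch of such a tf listener could look like the following; the odom/base_link frame names and the /RosAria/cmd_vel topic are assumptions, and the actual tf_listener node may differ:

```cpp
#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <geometry_msgs/Twist.h>
#include <cmath>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "tf_velocity_sketch");
  ros::NodeHandle nh;
  // Topic name is an assumption; RosAria conventionally listens on /RosAria/cmd_vel.
  ros::Publisher vel_pub = nh.advertise<geometry_msgs::Twist>("/RosAria/cmd_vel", 1);
  tf::TransformListener listener;

  ros::Rate rate(10.0);
  while (nh.ok())
  {
    tf::StampedTransform transform;
    try
    {
      // Latest odom -> base_link transform broadcast by RosAria (frame names assumed).
      listener.lookupTransform("odom", "base_link", ros::Time(0), transform);
    }
    catch (tf::TransformException& ex)
    {
      ROS_WARN("%s", ex.what());
      rate.sleep();
      continue;
    }

    geometry_msgs::Twist vel_msg;
    // The two relations given above.
    vel_msg.linear.x = 0.5 * std::sqrt(std::pow(transform.getOrigin().x(), 2) +
                                       std::pow(transform.getOrigin().y(), 2));
    vel_msg.angular.z = 4.0 * std::atan2(transform.getOrigin().y(),
                                         transform.getOrigin().x());
    vel_pub.publish(vel_msg);
    rate.sleep();
  }
  return 0;
}
```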
To start the lookupTransform listener, run this in a separate terminal:
rosrun tf_listener tf_listener
To generate dynamic motions based on the relations above, pass -p
or -pirouette
to the command above.
### Running the Navigation Stack

roslaunch p3dx_2dnav p3_dx.launch
This uses the adaptive Monte Carlo localization algorithm as thoroughly discussed by Dieter Fox, Sebastian Thrun, and colleagues in their book Probabilistic Robotics. Fire up a separate terminal and run:
roslaunch p3dx_2dnav move_base.launch
A static map of the environment (generated with OpenSLAM's gmapping), which was used during development, is provided at p3dx_2dnav/map_data/map.pgm. Feel free to create your own map and feed it to the robot by following the ROS Map Server tutorial. Also provided along with the map are the global costmap, local costmap, base local planner, and costmap common parameters used to set up the ROS navigation stack when you bring up the robot.
This will navigate the environment with all the robot's sensors and dynamically update the map of the environment based on sensor information acquired in real time.
For questions or bug reports, please use the issues page.