cloudrobot-semantic-map

A robot can build an object-level semantic map of a room based on a hybrid cloud (a mission cloud plus a public cloud). The mission cloud handles work that is specific to the robot or that the robot is already familiar with. When a task is beyond the robot's capability, the mission cloud seeks help from the public cloud, which has broader knowledge drawn from the Internet. For example, the robot can build a semantic map of a room by recognizing objects in order to understand the scene. The mission cloud can recognize objects it has been trained on in advance; when an object is beyond its knowledge, it transfers the object image to the public cloud, such as CloudSight.

This work is an implementation of robot semantic map building on a hybrid cloud architecture: the object recognition engine on the mission cloud is based on Faster R-CNN, and the public cloud uses the open Internet object recognition service CloudSight. The semantic map of a room is built by fusing the object recognition results with a geometry map. The hybrid cloud object recognition component is also available on its own (https://github.com/liyiying/py-faster-rcnn) and can be used in many settings beyond semantic mapping.
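The fallback logic can be summarized as below. This is a minimal sketch, not the repository's actual API: recognize_local and recognize_cloudsight are hypothetical placeholders for the mission cloud's Faster R-CNN detector and the CloudSight service.

def recognize_local(image):
    # Mission cloud: a Faster R-CNN detector trained in advance on known
    # objects. Stubbed here; returns a (label, confidence) pair.
    return "chair", 0.95

def recognize_cloudsight(image):
    # Public cloud: an open Internet recognition service such as CloudSight.
    # Stubbed here; returns a label.
    return "unknown object"

def recognize(image, threshold=0.8):
    # Ask the mission cloud first; if its confidence is too low (the object
    # is beyond its knowledge), hand the image off to the public cloud.
    label, score = recognize_local(image)
    if score >= threshold:
        return label
    return recognize_cloudsight(image)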

Requirements: cloud server

The requirements are the same as for Faster R-CNN, because the cloud server must run Faster R-CNN.

Requirements: robot

This work was developed on the TurtleBot robot.

How to use:

On TurtleBot:

  • Put the turtlebot_follower package into your own ROS workspace (~/catkin_ws/src), then build the package. See Building a ROS Package for details.

  • Modify gmapping_demo.launch

sudo gedit /opt/ros/indigo/share/turtlebot_navigation/launch/gmapping_demo.launch

and remove the lines below:

<include file="$(find turtlebot_bringup)/launch/3dsensor.launch">
    <arg name="rgb_processing" value="false" />
    <arg name="depth_registration" value="false" />
    <arg name="depth_processing" value="false" />
    
    <!-- We must specify an absolute topic name because if not it will be prefixed by "$(arg camera)".
         Probably is a bug in the nodelet manager: https://github.com/ros/nodelet_core/issues/7 --> 
    <arg name="scan_topic" value="/scan" />
  </include>
  • Set the ROS workspace in ~/.bashrc
sudo gedit ~/.bashrc

and add this line:

source ~/catkin_ws/devel/setup.bash
  • Bring up TurtleBot
roslaunch turtlebot_bringup minimal.launch
  • Calculate TurtleBot's pose (a sketch of the quaternion-to-yaw conversion appears after these steps)
cd $semantic_map_robot
python quat_to_angle_xy.py
  • Calculate the distance from TurtleBot to object points
roslaunch turtlebot_follower follower.launch
  • Build SLAM map
roslaunch turtlebot_navigation gmapping_demo.launch

The map can be viewed on the TurtleBot or on the cloud server:

roslaunch turtlebot_rviz_launchers view_navigation.launch
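For reference, here is a rough sketch of the pose computation that quat_to_angle_xy.py performs, assuming it subscribes to /odom and converts the orientation quaternion to a yaw angle; the actual script in this repository may differ:

import rospy
from nav_msgs.msg import Odometry
from tf.transformations import euler_from_quaternion

def odom_callback(msg):
    # The pose position gives the robot's (x, y) on the map plane.
    x = msg.pose.pose.position.x
    y = msg.pose.pose.position.y
    # Convert the orientation quaternion to Euler angles; yaw is the heading.
    q = msg.pose.pose.orientation
    _, _, yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])
    rospy.loginfo("x=%.2f y=%.2f yaw=%.2f", x, y, yaw)

rospy.init_node("quat_to_angle_xy_sketch")
rospy.Subscriber("/odom", Odometry, odom_callback)
rospy.spin()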

On Cloud Server:

  • Save images from the robot's camera
cd $semantic_map_cloud
rosrun image_view image_saver image:=/camera/rgb/image_raw _save_all_image:=false _filename_format:=foo.jpg __name:=image_saver
  • Calculate the object's position on the map (see the sketch after these steps)
cd $semantic_map_cloud
python object_position4.py
  • Control TurtleBot from the keyboard to build the semantic map
ssh $TurtlebotName@<Turtlebot_IP>
roslaunch turtlebot_teleop keyboard_teleop.launch
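The projection that object_position4.py needs can be sketched as follows: given the robot's pose (x, y, yaw) and the measured distance to the object (from turtlebot_follower), the object's map coordinates follow from simple trigonometry. This is a simplified illustration, not necessarily the repository's exact computation; the bearing parameter is a hypothetical angle of the object relative to the robot's heading.

import math

def object_map_position(robot_x, robot_y, robot_yaw, distance, bearing=0.0):
    # Project the detected object into map coordinates from the robot's pose
    # and the measured distance; bearing is the object's angle relative to
    # the robot's heading (0.0 means straight ahead).
    theta = robot_yaw + bearing
    x_obj = robot_x + distance * math.cos(theta)
    y_obj = robot_y + distance * math.sin(theta)
    return x_obj, y_obj

# Example: robot at (1.0, 2.0) facing along +x, object 1.5 m straight ahead.
print(object_map_position(1.0, 2.0, 0.0, 1.5))   # -> (2.5, 2.0)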
