Increasing dimension of bounding box #77

Open

tadteo opened this issue Sep 22, 2020 · 3 comments

tadteo commented Sep 22, 2020

I am using octomap_server to explore a zone. The problem is that I am unable to modify the bounding box of the map. The points are passed in the map reference frame, since I wrote a package that transforms the point cloud from the robot frame to the map frame. For this reason, I think the map is limited by the sensor/max_range parameter, as it seems to take the map origin as the position of the sensor even though the sensor is moving with the drone.
My questions are:

  • Is there a way to modify the overall bounding box of the octomap from the launch file?
  • Is it correct to provide the point cloud in the map reference frame, or does octomap automatically convert the reference frame of the points? In that case, is the sensor/max_range parameter still correct, since in reality the sensor is not at that position?

Thank you in advance for the information

@GeneralAdmin

Have you found a solution? I have the same issue.


tadteo commented Feb 16, 2021

In the end, I increased the sensor/max_range parameter and that worked for me. But I think there is a parameter to change the external bounding box of the octomap.
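For anyone who finds this later, here is a minimal sketch of what that parameter does at insertion time, using the octomap C++ API directly rather than octomap_server (where the parameter is spelled sensor_model/max_range, if I remember correctly). The resolution, the example point and the sensor origin below are just illustrative values; the point is that max_range is measured from the sensor origin passed with each scan, not from the map origin.

#include <octomap/octomap.h>

int main() {
  octomap::OcTree tree(0.1);  // map resolution in meters (illustrative value)

  // One scan, already expressed in the map frame
  octomap::Pointcloud scan;
  scan.push_back(2.0f, 0.0f, 1.0f);  // example obstacle point

  // Position of the sensor for this scan, also expressed in the map frame
  octomap::point3d sensor_origin(1.5f, 0.0f, 1.0f);

  // Rays are cast from sensor_origin; max_range limits their length from that
  // origin, so endpoints farther away than max_range are not marked occupied
  double max_range = 5.0;
  tree.insertPointCloud(scan, sensor_origin, max_range);

  tree.writeBinary("example.bt");  // can be inspected with octovis
  return 0;
}

If I read octomap_server correctly, it derives the sensor origin from the tf between the cloud's frame_id and the map frame, so a cloud that is already stamped in the map frame ends up with the map origin as the "sensor" position, and everything beyond max_range from there gets truncated, which would explain why increasing the parameter helped.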

@GeneralAdmin

This is the same issue as in:

#69

I've also seen that the folks behind

https://github.com/PX4/PX4-Avoidance/blob/master/global_planner/src/nodes/global_planner_node.cpp

have done this to solve the issue:

// Go through obstacle points and store them
void GlobalPlannerNode::depthCameraCallback(const sensor_msgs::PointCloud2& msg) {
  try {
    // Transform msg from camera frame to world frame
    ros::Time now = ros::Time::now();
    listener_.waitForTransform(frame_id_, camera_frame_id_, now, ros::Duration(5.0));
    tf::StampedTransform transform;
    listener_.lookupTransform(frame_id_, camera_frame_id_, now, transform);
    sensor_msgs::PointCloud2 transformed_msg;
    pcl_ros::transformPointCloud(frame_id_, transform, msg, transformed_msg);
    pcl::PointCloud<pcl::PointXYZ> cloud;  // Easier to loop through pcl::PointCloud
    pcl::fromROSMsg(transformed_msg, cloud);

    // Store the obstacle points
    for (const auto& p : cloud) {
      if (!std::isnan(p.x)) {
        // TODO: Not all points end up here
        Cell occupied_cell(p.x, p.y, p.z);
        global_planner_.occupied_.insert(occupied_cell);
      }
    }
    pointcloud_pub_.publish(msg);
  } catch (tf::TransformException const& ex) {
    ROS_DEBUG("%s", ex.what());
    ROS_WARN("Transformation not available (%s to %s)", frame_id_.c_str(), camera_frame_id_.c_str());
  }
}

I'll try to do something like that in my code...
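
For reference, the same transform can also be written with the tf2 API instead of the old tf listener used in that snippet. This is only a sketch, and the node name, the "map" target frame, the topic names and the queue sizes are assumptions on my side, not anything taken from octomap_server or PX4:

#include <ros/ros.h>
#include <geometry_msgs/TransformStamped.h>
#include <sensor_msgs/PointCloud2.h>
#include <tf2/exceptions.h>
#include <tf2_ros/transform_listener.h>
#include <tf2_sensor_msgs/tf2_sensor_msgs.h>

// Subscribes to a camera cloud and republishes it expressed in the map frame
class CloudToMap {
 public:
  CloudToMap() : tf_listener_(tf_buffer_) {
    sub_ = nh_.subscribe("/camera/points", 1, &CloudToMap::cloudCallback, this);
    pub_ = nh_.advertise<sensor_msgs::PointCloud2>("/points_map", 1);
  }

 private:
  void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg) {
    geometry_msgs::TransformStamped tf_map_cam;
    try {
      // Look up the transform at the cloud's own stamp, not ros::Time::now()
      tf_map_cam = tf_buffer_.lookupTransform("map", msg->header.frame_id,
                                              msg->header.stamp, ros::Duration(0.2));
    } catch (const tf2::TransformException& ex) {
      ROS_WARN_THROTTLE(1.0, "Transform to map not available: %s", ex.what());
      return;
    }
    sensor_msgs::PointCloud2 cloud_map;
    tf2::doTransform(*msg, cloud_map, tf_map_cam);  // from tf2_sensor_msgs
    pub_.publish(cloud_map);
  }

  ros::NodeHandle nh_;
  tf2_ros::Buffer tf_buffer_;
  tf2_ros::TransformListener tf_listener_;
  ros::Subscriber sub_;
  ros::Publisher pub_;
};

int main(int argc, char** argv) {
  ros::init(argc, argv, "cloud_to_map");
  CloudToMap node;
  ros::spin();
  return 0;
}

One design difference from the snippet above: the lookup uses the cloud's header stamp instead of ros::Time::now(), which tends to be more robust when the TF tree lags slightly behind the sensor data.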
