Depth cameras ros bridge, documentation
SijmenHuizenga committed Jul 24, 2020
1 parent 00fa002 commit 573724a
Showing 7 changed files with 92 additions and 17 deletions.
36 changes: 36 additions & 0 deletions docs/camera.md
@@ -0,0 +1,36 @@
# Cameras

To add a camera to your vehicle, add the following json to the "Cameras" map in your `settings.json`:

```json
"Camera1": {
"CaptureSettings": [{
"ImageType": 0,
"Width": 785,
"Height": 785,
"FOV_Degrees": 90
}],
"X": 1.0,
"Y": 0.06,
"Z": -2.20,
"Pitch": 0.0,
"Roll": 0.0,
"Yaw": 180
}
```

`Camera1` is the name of the camera. This name will be used in the ROS topic names and in the coordinate frame.
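In this commit's bridge source, the camera name feeds both the topic name and the frame id; a minimal sketch of that mapping (the helper function is hypothetical):

```python
def camera_topic_and_frame(camera_name):
    """Topic name and frame id as constructed by fsds_ros_bridge_camera in this commit."""
    topic = "/fsds/camera/" + camera_name
    frame_id = "/fsds/camera/" + camera_name
    return topic, frame_id

print(camera_topic_and_frame("Camera1")[0])  # /fsds/camera/Camera1
```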

`X`, `Y` and `Z` are the position of the camera relative to the [vehicle pawn center](vehicle_model.md) of the car in the NED frame.

`Roll`, `Pitch` and `Yaw` are rotations in degrees.

`ImageType` describes the type of camera.
At this moment only RGB and depth cameras are supported.
For an RGB camera, set this value to 0; for a depth camera, set it to 2.
A depth camera (DepthPerspective) reports, for each pixel, the depth along the projection ray that hits that pixel.

RGB images published in ROS are encoded as `bgr8`; depth images published in ROS are encoded as `32FC1`.
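A `32FC1` image carries one 32-bit float per pixel in row-major order; a minimal NumPy sketch of recovering the 2-D depth array from such a flat buffer (the function name is illustrative):

```python
import numpy as np

def decode_depth(image_data_float, height, width):
    # One float per pixel, row-major: row r, column c sits at index r * width + c.
    return np.asarray(image_data_float, dtype=np.float32).reshape(height, width)

depth = decode_depth([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], 2, 3)
print(depth.shape)  # (2, 3)
```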

`FOV_Degrees` describes [how much the camera sees](https://en.wikipedia.org/wiki/Field_of_view).
The vertical FoV will be automatically calculated using the following formula: `vertical FoV = image height / image width * horizontal FoV`.
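A quick check of the formula (the example camera above is square, so both FoVs are equal; a wider aspect ratio narrows the vertical FoV):

```python
def vertical_fov(width_px, height_px, horizontal_fov_deg):
    # Vertical FoV derived from the aspect ratio, as described above.
    return height_px / width_px * horizontal_fov_deg

print(vertical_fov(785, 785, 90))    # 90.0
print(vertical_fov(1600, 900, 90))   # 50.625
```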
5 changes: 2 additions & 3 deletions docs/integration-handbook.md
@@ -80,7 +80,7 @@ To ensure the simulation will perform as expected, the sensor suite has some res
Here you can read the requirements and restrictions that apply to every sensor.

### Camera
**A this moment camera's are a bit broken. You can use the camera's but the framerate and topic names might change. See #43 and #85.**
**Be warned: the camera framerate cannot be guaranteed. See #43.**

Every vehicle can have a maximum of 2 camera sensors.
These camera(s) can be placed anywhere on the vehicle that would be allowed by FSG 2020 rules.
@@ -91,8 +91,6 @@ You can choose the resolution of the camera(s).
In total, the cameras can have at most 1232450 pixels.
Every dimension (width or height) must be at least 240px and no greater than 1600px.
The horizontal field of view (FoV) is configurable for each camera and must be at least 30 degrees and not be greater than 90 degrees.
The vertical FoV will be automatically calculated using the following formula: `vertical FoV = image height / image width * horizontal FoV`.

The cameras' auto exposure, motion blur and gamma settings will be equal for all teams.
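The resolution and FoV limits above can be collected into one check; a sketch with hypothetical names (note that two 785x785 cameras land exactly on the 1232450-pixel budget):

```python
MAX_TOTAL_PIXELS = 1232450
MIN_DIM, MAX_DIM = 240, 1600   # pixels, per width or height
MIN_FOV, MAX_FOV = 30, 90      # horizontal FoV in degrees

def cameras_allowed(cameras):
    """cameras: list of (width, height, horizontal_fov) tuples, one per camera."""
    if len(cameras) > 2:
        return False
    if sum(w * h for w, h, _ in cameras) > MAX_TOTAL_PIXELS:
        return False
    return all(MIN_DIM <= w <= MAX_DIM and MIN_DIM <= h <= MAX_DIM
               and MIN_FOV <= fov <= MAX_FOV
               for w, h, fov in cameras)

print(cameras_allowed([(785, 785, 90), (785, 785, 90)]))  # True
```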


@@ -112,6 +110,7 @@ For each lidar, during a single rotation, the number of collected points for 1 l
The number of collected points per lidar per laser per second cannot exceed 20480.

To ensure the simulation keeps running smoothly:

* Every lidar is limited to 10000 points per scan.
* The total number of points collected per second can be no larger than 100000.
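Combining the numbers above into one check for a single lidar (a sketch; the function and parameter names are illustrative):

```python
def lidar_allowed(lasers, points_per_scan, rotations_per_second):
    points_per_second = points_per_scan * rotations_per_second
    return (points_per_scan <= 10000                    # per-scan cap
            and points_per_second <= 100000             # per-second cap
            and points_per_second / lasers <= 20480)    # per-laser per-second cap

# A 16-laser lidar at 10 Hz collecting 10000 points per scan sits exactly on the limits.
print(lidar_allowed(16, 10000, 10))  # True
```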

2 changes: 1 addition & 1 deletion docs/lidar.md
@@ -22,7 +22,7 @@ This is an example lidar:

`Lidar1` is the name of the lidar. This value will be used in the ROS topic name and coordinate frame.

`X`, `Y` and `Z` are the position of the lidar relative the the center of the car in NED frame.
`X`, `Y` and `Z` are the position of the lidar relative to the [vehicle pawn center](vehicle_model.md) of the car in the NED frame.

`Roll`, `Pitch` and `Yaw` are rotations in degrees.

1 change: 1 addition & 0 deletions mkdocs.yml
@@ -16,6 +16,7 @@ nav:
- "Vehicle model": "vehicle_model.md"
- "IMU": "imu.md"
- "Lidar": "lidar.md"
- "Camera": "camera.md"
- "GPS": "gps.md"
- "GSS": "gss.md"
- "#tech madness":
3 changes: 2 additions & 1 deletion ros/src/fsds_ros_bridge/scripts/cameralauncher.py
@@ -41,8 +41,9 @@ def signal_handler(sig, frame):
required=True, output='screen',
args=args({
'camera_name': cameraname,
'depthcamera': camsettings["CaptureSettings"][0]["ImageType"] == 2,
'framerate': CAMERA_FRAMERATE,
'host_ip': AIRSIM_HOSTIP
'host_ip': AIRSIM_HOSTIP,
})))
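The new `depthcamera` argument is derived from the first `CaptureSettings` entry of each camera in `settings.json`. As a standalone sketch of the launcher's logic (function name hypothetical):

```python
def camera_node_args(cameraname, camsettings, framerate, host_ip):
    # ImageType 2 (DepthPerspective) switches the bridge node into depth mode.
    return {
        "camera_name": cameraname,
        "depthcamera": camsettings["CaptureSettings"][0]["ImageType"] == 2,
        "framerate": framerate,
        "host_ip": host_ip,
    }

args = camera_node_args("cam2", {"CaptureSettings": [{"ImageType": 2}]}, 30, "localhost")
print(args["depthcamera"])  # True
```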
# launch.launch(
# roslaunch.core.Node(
52 changes: 45 additions & 7 deletions ros/src/fsds_ros_bridge/src/fsds_ros_bridge_camera.cpp
@@ -8,6 +8,7 @@
#include "vehicles/car/api/CarRpcLibClient.hpp"
#include "statistics.h"
#include "rpc/rpc_error.h"
#include <cv_bridge/cv_bridge.h>



@@ -27,6 +28,7 @@ ros_bridge::Statistics fps_statistic;
std::string camera_name = "";
double framerate = 0.0;
std::string host_ip = "localhost";
bool depthcamera = false;

ros::Time make_ts(uint64_t unreal_ts)
{
@@ -41,21 +43,26 @@ std::string logprefix() {
return "[cambridge " + camera_name + "] ";
}

void doImageUpdate(const ros::TimerEvent&)
{
// We are using simGetImages instead of simGetImage because the latter does not return image dimention information.
std::vector<ImageRequest> req;
req.push_back(ImageRequest(camera_name, ImageType::Scene, false, false));
std::vector<ImageResponse> getImage(ImageRequest req) {
    // We are using simGetImages instead of simGetImage because the latter does not return image dimension information.
std::vector<ImageRequest> reqs;
reqs.push_back(req);

std::vector<ImageResponse> img_responses;
try {
img_responses = airsim_api->simGetImages(req, "FSCar");
img_responses = airsim_api->simGetImages(reqs, "FSCar");
} catch (rpc::rpc_error& e) {
std::cout << "error" << std::endl;
std::string msg = e.get_error().as<std::string>();
std::cout << "Exception raised by the API while getting image:" << std::endl
<< msg << std::endl;
}
return img_responses;
}

void doImageUpdate(const ros::TimerEvent&)
{
auto img_responses = getImage(ImageRequest(camera_name, ImageType::Scene, false, false));

// if a render request failed for whatever reason, this img will be empty.
if (img_responses.size() == 0 || img_responses[0].time_stamp == 0)
@@ -78,6 +85,36 @@ void doImageUpdate(const ros::TimerEvent&)
fps_statistic.addCount();
}

cv::Mat manual_decode_depth(const ImageResponse& img_response)
{
cv::Mat mat(img_response.height, img_response.width, CV_32FC1, cv::Scalar(0));
int img_width = img_response.width;

for (int row = 0; row < img_response.height; row++)
for (int col = 0; col < img_width; col++)
mat.at<float>(row, col) = img_response.image_data_float[row * img_width + col];
return mat;
}


void doDepthImageUpdate(const ros::TimerEvent&) {
auto img_responses = getImage(ImageRequest(camera_name, ImageType::DepthPerspective, true, false));

// if a render request failed for whatever reason, this img will be empty.
if (img_responses.size() == 0 || img_responses[0].time_stamp == 0)
return;

ImageResponse img_response = img_responses[0];

cv::Mat depth_img = manual_decode_depth(img_response);
sensor_msgs::ImagePtr img_msg = cv_bridge::CvImage(std_msgs::Header(), "32FC1", depth_img).toImageMsg();
img_msg->header.stamp = make_ts(img_response.time_stamp);
img_msg->header.frame_id = "/fsds/camera/"+camera_name;

image_pub.publish(img_msg);
fps_statistic.addCount();
}

void printFps(const ros::TimerEvent&)
{
std::cout << logprefix() << "Average FPS: " << (fps_statistic.getCount()/FPS_WINDOW) << std::endl;
@@ -93,6 +130,7 @@ int main(int argc, char ** argv)
nh.param<std::string>("camera_name", camera_name, "");
nh.param<double>("framerate", framerate, 0.0);
nh.param<std::string>("host_ip", host_ip, "localhost");
nh.param<bool>("depthcamera", depthcamera, false);

if(camera_name == "") {
std::cout << logprefix() << "camera_name unset." << std::endl;
@@ -123,7 +161,7 @@ int main(int argc, char ** argv)
image_pub = nh.advertise<sensor_msgs::Image>("/fsds/camera/" + camera_name, 1);

// start the loop
ros::Timer imageTimer = nh.createTimer(ros::Duration(1/framerate), doImageUpdate);
ros::Timer imageTimer = nh.createTimer(ros::Duration(1/framerate), depthcamera ? doDepthImageUpdate : doImageUpdate);
ros::Timer fpsTimer = nh.createTimer(ros::Duration(FPS_WINDOW), printFps);
ros::spin();
return 0;
10 changes: 5 additions & 5 deletions settings.json
@@ -69,23 +69,23 @@
],
"X": 1.0,
"Y": 0.06,
"Z": -2.20,
"Z": -1.20,
"Pitch": 0.0,
"Roll": 0.0,
"Yaw": 180
"Yaw": 0
},
"cam2": {
"CaptureSettings": [
{
"ImageType": 0,
"ImageType": 2,
"Width": 785,
"Height": 785,
"FOV_Degrees": 90
}
],
"X": 1.0,
"Y": 0.06,
"Z": -2.20,
"Y": 0.56,
"Z": -1.20,
"Pitch": 0.0,
"Roll": 0.0,
"Yaw": 0.0
