ROS1 documentation updates and enhancements (opendr-eu#316)
* Added prerequisites section for common prerequisites across nodes

* Overhauled the dataset nodes section and added RGB nodes section

* Rearranged the listed node links

* General rearrangement and input sections

* Additional modifications and pose estimation section

* Section renaming for consistency

* Some rearrangement in contents list to match the order

* Fall detection doc and moved dataset nodes to bottom

* Face det, reco, 2d object detection overhaul and todo notes

* Added a class id table on sem segmentation doc

* Panoptic and semantic segmentation overhaul

* Fix long lines as per suggestions

* Fix long lines as per suggestions

* Removed commented pose estimation usage suggestion

* Update video HAR docs for ROS node

* Updated the video human activity recognition section and some other minor fixes

* Fixed italics showing as block

* Removed redundant line separators after headers

* Removed redundant horizontal line from RGBD header

* Added notes for output visualization and updated pose estimation docs

* Added missing space in pose estimation docs

* Updates on formatting for all other applicable nodes' docs and minor fixes

* More detailed ros setup instructions

* Added skipping of workspace build step

* Updated RGBD hand gesture recognition ros node doc

* Updated speech command recognition ros node doc and some minor fixes

* Updated heart anomaly detection ros node doc

* Reordered audio section and added RGB + Audio section

* Added audiovisual emotion reco missing doc and reordered audio section

* Added link to csv file with classes-ids for activity recognition

* Added link to csv file with class-ids for activity recognition

* Minor improvements

* Several minor fixes and landmark-based facial expression recognition

* Skeleton-based human action recognition and minor fixes

* Moved fair mot in rgb input section

* Completed ROS1 docs

* Updates on default values for FairMOT ros node class ctor

* Fixed duplicate shortcut on deepsort ros node argparse

* Fixed missing shortcut on rgbd hand gesture reco ros node argparse

* Added "opendr_" to data gen package and "_node" to the node file name

* Renamed package to "opendr_perception" and added "_node" to scripts

* Applied fixes to yolov5

* Added "opendr_" to planning package

* Added "opendr_" to bridge package

* Added "opendr_" to simulation package

* Fixed old version of torch in pip_requirements.txt

* Renamed ros bridge package doc

* Updated based on new names and some minor modifications

* Fixed list numbers

* Merge clean-up

* Added a new notes item with a node diagram and explanation

* Minor formatting fixes in the diagram section

* Fixed step X dot consistency

* Fixed step X dot consistency

* Added detached & in instructions for running stuff

* Removed apt ros package installation instructions

* Added some additional required ros packages and made ROS version a variable

Co-authored-by: LukasHedegaard <lukasxhedegaard@gmail.com>
Co-authored-by: ad-daniel <44834743+ad-daniel@users.noreply.github.com>
3 people authored and Luca Marchionni committed Dec 11, 2022
1 parent 3f6b1fb commit b7377cb
Showing 6 changed files with 779 additions and 305 deletions.
2 changes: 1 addition & 1 deletion bin/install.sh
@@ -40,7 +40,7 @@ make install_compilation_dependencies
make install_runtime_dependencies

# Install additional ROS packages
sudo apt-get install ros-noetic-vision-msgs ros-noetic-audio-common-msgs
sudo apt-get install ros-$ROS_DISTRO-vision-msgs ros-$ROS_DISTRO-geometry-msgs ros-$ROS_DISTRO-sensor-msgs ros-$ROS_DISTRO-audio-common-msgs

# If working on GPU install GPU dependencies as needed
if [[ "${OPENDR_DEVICE}" == "gpu" ]]; then
1 change: 1 addition & 0 deletions docs/reference/activity-recognition.md
@@ -2,6 +2,7 @@

The *activity_recognition* module contains the *X3DLearner* and *CoX3DLearner* classes, which inherit from the abstract class *Learner*.

You can find the classes and the corresponding IDs regarding activity recognition [here](https://github.com/opendr-eu/opendr/blob/master/src/opendr/perception/activity_recognition/datasets/kinetics400_classes.csv).
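That file maps numeric model outputs to human-readable labels. A minimal sketch of building such a lookup with the standard `csv` module, assuming a two-column `id,name` layout (the inline excerpt below is illustrative, not the real file contents):

```python
import csv
import io

# Illustrative excerpt only; the real kinetics400_classes.csv is linked above
# and is assumed here to use an "id,name" column layout.
sample = """id,name
0,abseiling
1,air drumming
2,answering questions
"""

# Build an id -> class-name lookup table from the CSV contents.
id_to_class = {int(row["id"]): row["name"] for row in csv.DictReader(io.StringIO(sample))}

print(id_to_class[1])  # air drumming
```

For the real file, replace the inline string with the opened CSV file object.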

### Class X3DLearner
Bases: `engine.learners.Learner`
5 changes: 5 additions & 0 deletions docs/reference/semantic-segmentation.md
@@ -2,6 +2,11 @@

The *semantic segmentation* module contains the *BisenetLearner* class, which inherits from the abstract class *Learner*.

In the table below you can find the detectable classes and their corresponding IDs:

| Class | Bicyclist | Building | Car | Column Pole | Fence | Pedestrian | Road | Sidewalk | Sign Symbol | Sky | Tree | Unknown |
|--------|-----------|----------|-----|-------------|-------|------------|------|----------|-------------|-----|------|---------|
| **ID** | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
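For convenience, the same mapping can be expressed in code. This is a sketch, not part of the OpenDR API; `class_name` is a hypothetical helper:

```python
# Class names indexed by ID, taken from the table above.
CLASS_NAMES = [
    "Bicyclist", "Building", "Car", "Column Pole", "Fence", "Pedestrian",
    "Road", "Sidewalk", "Sign Symbol", "Sky", "Tree", "Unknown",
]

def class_name(class_id: int) -> str:
    """Return the human-readable name for a predicted class ID."""
    if not 0 <= class_id < len(CLASS_NAMES):
        raise ValueError(f"unknown class id: {class_id}")
    return CLASS_NAMES[class_id]

print(class_name(5))  # Pedestrian
```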

### Class BisenetLearner
Bases: `engine.learners.Learner`
127 changes: 78 additions & 49 deletions projects/opendr_ws/README.md
@@ -1,66 +1,95 @@
# opendr_ws

## Description
This ROS workspace contains ROS nodes and tools developed by OpenDR project. Currently, ROS nodes are compatible with ROS Noetic.
This workspace contains the `ros_bridge` package, which provides message definitions for ROS-compatible OpenDR data types,
This ROS workspace contains ROS nodes and tools developed by the OpenDR project.
Currently, ROS nodes are compatible with **ROS Melodic for Ubuntu 18.04** and **ROS Noetic for Ubuntu 20.04**.
The instructions that follow target ROS Noetic, but can easily be modified for ROS Melodic by swapping out the version name.
This workspace contains the `opendr_bridge` package, which provides message definitions for ROS-compatible OpenDR data types,
as well the `ROSBridge` class which provides an interface to convert OpenDR data types and targets into ROS-compatible
ones similar to CvBridge. You can find more information in the corresponding [documentation](../../docs/reference/rosbridge.md).
ones similar to CvBridge. You can find more information in the corresponding [documentation](../../docs/reference/opendr-ros-bridge.md).


## Setup
For running a minimal working example you can follow the instructions below:
## First time setup
For the initial setup you can follow the instructions below:

0. Source the necessary distribution tools:
0. Make sure ROS Noetic is installed: http://wiki.ros.org/noetic/Installation/Ubuntu (desktop-full install)

```source /opt/ros/noetic/setup.bash```
1. Open a new terminal window and source the necessary distribution tools:
```shell
source /opt/ros/noetic/setup.bash
```
_For convenience, you can add this line to your `.bashrc` so you don't have to source the tools each time you open a terminal window._
2. Navigate to your OpenDR home directory (`~/opendr`) and activate the OpenDR environment using:
```shell
source bin/activate.sh
```
You need to do this step every time before running an OpenDR node.
3. Navigate into the OpenDR ROS workspace:
```shell
cd projects/opendr_ws
```
4. (Optional) Most nodes with visual input are set up to run with a default USB camera. If you want to use it, install the corresponding package and its dependencies:
```shell
cd src
git clone https://github.com/ros-drivers/usb_cam
cd ..
rosdep install --from-paths src/ --ignore-src
```
5. Build the packages inside the workspace:
```shell
catkin_make
```
6. Source the workspace:
```shell
source devel/setup.bash
```
You are now ready to run an OpenDR ROS node in this terminal, but first the ROS master node needs to be running.
1. Make sure you are inside opendr_ws
2. If you are planning to use a usb camera for the demos, install the corresponding package and its dependencies:
7. Before continuing, you need to start the ROS master node by running:
```shell
roscore &
```
You can now run an OpenDR ROS node. More information below.
#### After first time setup
For running OpenDR nodes after you have completed the initial setup, you can skip steps 0 and 4 from the list above.
You can also skip building the workspace (step 5), provided it has already been built and no changes have been made to the code inside the workspace (e.g. modifying the source code of a node).

#### More information
After completing the setup you can read more information on the [opendr perception package README](src/opendr_perception/README.md), where you can find a concise list of prerequisites and helpful notes to view the output of the nodes or optimize their performance.

#### Node documentation
You can also take a look at the list of tools [below](#structure) and click on the links to navigate directly to documentation for specific nodes with instructions on how to run and modify them.

**For first time users we suggest reading the introductory sections (prerequisites and notes) first.**

```shell
cd src
git clone https://github.com/ros-drivers/usb_cam
cd ..
rosdep install --from-paths src/ --ignore-src
```
3. Install the following dependencies, required in order to use the OpenDR ROS tools:
```shell
sudo apt-get install ros-noetic-vision-msgs ros-noetic-geometry-msgs ros-noetic-sensor-msgs ros-noetic-audio-common-msgs
```
4. Build the packages inside workspace
```shell
catkin_make
```
5. Source the workspace and you are ready to go!
```shell
source devel/setup.bash
```
## Structure

Currently, apart from tools, opendr_ws contains the following ROS nodes (categorized according to the input they receive):

### [Perception](src/perception/README.md)
### [Perception](src/opendr_perception/README.md)
## RGB input
1. [Pose Estimation](src/perception/README.md#pose-estimation-ros-node)
2. [Fall Detection](src/perception/README.md#fall-detection-ros-node)
3. [Face Recognition](src/perception/README.md#face-recognition-ros-node)
4. [2D Object Detection](src/perception/README.md#2d-object-detection-ros-nodes)
5. [Face Detection](src/perception/README.md#face-detection-ros-node)
6. [Panoptic Segmentation](src/perception/README.md#panoptic-segmentation-ros-node)
7. [Semantic Segmentation](src/perception/README.md#semantic-segmentation-ros-node)
8. [Video Human Activity Recognition](src/perception/README.md#human-action-recognition-ros-node)
9. [Landmark-based Facial Expression Recognition](src/perception/README.md#landmark-based-facial-expression-recognition-ros-node)
10. [FairMOT Object Tracking 2D](src/perception/README.md#fairmot-object-tracking-2d-ros-node)
11. [Deep Sort Object Tracking 2D](src/perception/README.md#deep-sort-object-tracking-2d-ros-node)
12. [Skeleton-based Human Action Recognition](src/perception/README.md#skeleton-based-human-action-recognition-ros-node)
## Point cloud input
1. [Voxel Object Detection 3D](src/perception/README.md#voxel-object-detection-3d-ros-node)
2. [AB3DMOT Object Tracking 3D](src/perception/README.md#ab3dmot-object-tracking-3d-ros-node)
1. [Pose Estimation](src/opendr_perception/README.md#pose-estimation-ros-node)
2. [Fall Detection](src/opendr_perception/README.md#fall-detection-ros-node)
3. [Face Detection](src/opendr_perception/README.md#face-detection-ros-node)
4. [Face Recognition](src/opendr_perception/README.md#face-recognition-ros-node)
5. [2D Object Detection](src/opendr_perception/README.md#2d-object-detection-ros-nodes)
6. [2D Object Tracking](src/opendr_perception/README.md#2d-object-tracking-ros-nodes)
7. [Panoptic Segmentation](src/opendr_perception/README.md#panoptic-segmentation-ros-node)
8. [Semantic Segmentation](src/opendr_perception/README.md#semantic-segmentation-ros-node)
9. [Landmark-based Facial Expression Recognition](src/opendr_perception/README.md#landmark-based-facial-expression-recognition-ros-node)
10. [Skeleton-based Human Action Recognition](src/opendr_perception/README.md#skeleton-based-human-action-recognition-ros-node)
11. [Video Human Activity Recognition](src/opendr_perception/README.md#video-human-activity-recognition-ros-node)
## RGB + Infrared input
1. [End-to-End Multi-Modal Object Detection (GEM)](src/perception/README.md#gem-ros-node)
## RGBD input nodes
1. [RGBD Hand Gesture Recognition](src/perception/README.md#rgbd-hand-gesture-recognition-ros-node)
## Biosignal input
1. [Heart Anomaly Detection](src/perception/README.md#heart-anomaly-detection-ros-node)
1. [End-to-End Multi-Modal Object Detection (GEM)](src/opendr_perception/README.md#2d-object-detection-gem-ros-node)
## RGBD input
1. [RGBD Hand Gesture Recognition](src/opendr_perception/README.md#rgbd-hand-gesture-recognition-ros-node)
## RGB + Audio input
1. [Audiovisual Emotion Recognition](src/opendr_perception/README.md#audiovisual-emotion-recognition-ros-node)
## Audio input
1. [Speech Command Recognition](src/perception/README.md#speech-command-recognition-ros-node)
1. [Speech Command Recognition](src/opendr_perception/README.md#speech-command-recognition-ros-node)
## Point cloud input
1. [3D Object Detection Voxel](src/opendr_perception/README.md#3d-object-detection-voxel-ros-node)
2. [3D Object Tracking AB3DMOT](src/opendr_perception/README.md#3d-object-tracking-ab3dmot-ros-node)
## Biosignal input
1. [Heart Anomaly Detection](src/opendr_perception/README.md#heart-anomaly-detection-ros-node)
Binary file added projects/opendr_ws/images/opendr_node_diagram.png
