ROS1 and ROS2 nodes for CoSTGCN (#387)
* ROS2 workspace with example pose estimation node and initial ros2_bridge package

* Add ROS2 object detection 2D, face detection and semantic segmentation nodes (#273)

* Added to and from ros boxes bridge methods

* Added object detection 2d ssd node according to ros1 node

* Added object detection 2d centernet node according to ros1 node

* Added to and from bounding box list bridge methods

* Added object detection 2d detr node according to ros1 node

* Fixed some issues with type conversions in bridge

* Added object detection 2d yolov3 node according to ros1 node

* Added face detection retinaface node according to ros1 node

* Added retinaface ros2 node in setup.py

* Added semantic segmentation bisenet ROS2 node according to ROS1 node

* Added additional checks in learner download methods to stop redownloading

* Improved ROS2 packages names and bridge import

* Tester moved

* Changed bridge import and fixed some nms stuff causing errors

* Changed bridge import and made all queues 1 to avoid delays

* Minor pep8 fix

* Another minor pep8 fix

* Added license check skip for setup.py files

* Added license check skip for test.py files

* Added appropriate docstring on existing bridge methods

* Removed unused commented line

* Finalized ROS2 pose estimation node with argparse

* Minor formatting

* Finalized ROS2 bisenet semantic segmentation node with argparse

* Improved docstring

* Minor comment addition

* Finalized ROS2 face detection retinaface node with argparse

* Minor improvements

* Finalized ROS2 object detection 2d yolov3 node with argparse

* Finalized ROS2 object detection 2d centernet node with argparse

* Finalized ROS2 object detection 2d detr node with argparse

* Finalized ROS2 object detection 2d ssd node with argparse

* Fixed typo in package.xml description

Co-authored-by: Stefania Pedrazzi <stefaniapedrazzi@users.noreply.github.com>

Co-authored-by: Stefania Pedrazzi <stefaniapedrazzi@users.noreply.github.com>

* Removed tester node

* Add ROS2 face recognition and fall detection nodes (#279)

* Fixed wrong argparse default values

* Some reordering for internal consistency and to/from face/face_id

* Initial version of ROS2 face recognition node

* Added annotated image publishing to ROS2 face recognition node

* Fixed face recognition node class name

* Fixed face recognition node class name in main

* Added ROS2 fall detection node

* Detr node now properly uses torch

* Fixed missing condition in publishing face reco image

* Updated ROS2 semseg bisenet node according to new ROS1 node

* Added lambda expression in argparse to handle passing of none value

* Minor optimization

* Added new ros2 cmake package for OpenDR custom messages

* Pose estimation ROS2 node now uses custom pose messages

* Bridge to/from pose methods now use new opendr pose message

* Moved opendr messages package to correct subdirectory (src)

* Face det: Added lambda expression in argparse to handle passing of none value and minor optimization

* Face reco: Added lambda expression in argparse to handle passing of none value

* Fall det: Added lambda expression in argparse to handle passing of none value and minor fixes in callback

* Obj det 2d: Added lambda expression in argparse to handle passing of none value and minor optimization

* Sem segm bisenet: Added lambda expression in argparse to handle passing of none value and reintroduced try/except

* Fall det: message now gets published only when there's a fallen pose present and each pose has its own id.

* Ros2 detr (#296)

* use same drawing function as other 2d object detectors

* create object_detection_2d_detr ros2 node

* Update projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_detr_node.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_detr_node.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_detr_node.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_detr_node.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_detr_node.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_detection_2d_detr_node.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* format code

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Ros2 har (#323)

* Add ros2 video_activity_recognition_node

* Fix resizing inference issues

* Added missing bridge methods

* Added missing node entry point declaration in setup.py

* Fixed docstring and wrong default input topic

Co-authored-by: tsampazk <tsampaka@csd.auth.gr>

* Initial version of main ROS2 README.md

* Updated contents based on newer ROS1 version

* Added initial ROS2 main READMEs

* Changed usb cam run command

* Work in progress adding setup instructions

* Complete setup and build instructions plus fixed links

* Minor fixes in introduction and line separators

* Commented out cv bridge installation

* ROS2 RGBD hand gestures recognition (#341)

* Add RGBD hand gesture recognition node

* Update RGB hand gesture recognition node

* Remove redundant parameters

* Update projects/opendr_ws_2/src/opendr_perception/opendr_perception/rgbd_hand_gesture_recognition_node.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* ROS2 implementation for human_model_generation module (#291)

* changes for ros2 in opendr_simulation code

* Update mesh_util.py

* Update human_model_generation_service.py

* Update human_model_generation_client.py

* Update bridge.py

* Update human_model_generation_service.py

* Update human_model_generation_client.py

* Update bridge.py

* Update human_model_generation_client.py

* Update human_model_generation_service.py

* Update bridge.py

* Update setup.py

* Update human_model_generation_client.py

* Update human_model_generation_service.py

* Update test_flake8.py

* Update test_flake8.py

* Update test_license.py

* Update setup.py

* Update test_license.py

* Update projects/opendr_ws_2/src/opendr_ros2_bridge/opendr_ros2_bridge/bridge.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_interfaces/package.xml

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_simulation/opendr_simulation/human_model_generation_service.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_simulation/opendr_simulation/human_model_generation_client.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_ros2_bridge/opendr_ros2_bridge/bridge.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_simulation/opendr_simulation/human_model_generation_client.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_simulation/opendr_simulation/human_model_generation_service.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_interfaces/package.xml

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_interfaces/srv/Mesh.srv

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_simulation/opendr_simulation/human_model_generation_client.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_simulation/opendr_simulation/human_model_generation_client.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_ros2_bridge/opendr_ros2_bridge/bridge.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_simulation/opendr_simulation/human_model_generation_service.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_simulation/opendr_simulation/human_model_generation_service.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* changes to ros2

* changes to ros2

* changes to ROS2 nodes

* changes to ROS2 nodes

* ROS2 changes

* ROS2 changes

* ROS2 changes

* ROS2 changes

* ROS2 changes

* ROS2 changes

* ROS2 changes

* ROS2 changes

* ROS2 changes

* ROS2 changes

* ROS2 changes

* Update projects/opendr_ws_2/src/opendr_simulation/opendr_simulation/human_model_generation_service.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_simulation/opendr_simulation/human_model_generation_client.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update bridge.py

* Update bridge.py

* Update projects/opendr_ws_2/src/opendr_simulation/opendr_simulation/human_model_generation_client.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update dependencies.ini

* Update dependencies.ini

* Update mesh_util.py

* Update dependencies.ini

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Add ROS2 node for EfficientPS (#270)

* Update submodule

* Add ROS2 node

* Do not remove downloaded checkpoint file

* Conclude merge

* Fix PEP8 issue

* ROS2 adaptation w.r.t. the ROS implementation on the develop branch

* fix: EfficientPS Learner updated, ROS2 subscriber queue fixed, ros2 bridge import fixed

EfficientPS learner file is updated to the current changes in the develop branch. The queue parameter of the subscriber in the ROS2 script has been adapted correctly, and the implementation of the ROS2 bridge has been fixed.

* refactor: Resolved conflict in the setup.py of ROS2 implementation

* style: PEP8 fix

* fix: arguments, ROS2Bridge import and rclpy.init fixed

* style: PEP8 blank line fix

* fix: Fixed encoder and default image topic location

* fix: logging order and pep8 fixed

* style: pep8 fix

Co-authored-by: aselimc <canakcia@cs.uni-freiburg.de>
Co-authored-by: Ahmet Selim Çanakçı <73101853+aselimc@users.noreply.github.com>

* ROS2 for heart anomaly detection (#337)

* Implement ROS2 for heart anomaly detection

* Remove redundant blank line

* Organize import libraries

* Change from ROS to ROS2 in docstring

* Update heart_anomaly_detection_node.py

Fix wrongly initialized node name

* Update bridge.py

Update from single quotes to double quotes in docstring

* Change node name for consistency naming across ROS2 nodes

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Fixed wrong name in dependency

* ROS2 speech command recognition (#340)

* Implement ROS2 for speech command recognition

* Change default audio topic from /audio/audio to /audio

* Update bridge.py

Update from single quotes to double quotes in docstring

* Fix style

* Change node name for consistency across ROS2 nodes

* Update speech_command_recognition_node.py

Update blank lines in different positions

* PEP8 removed whitespace

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>
Co-authored-by: tsampazk <tsampaka@csd.auth.gr>

* ROS2 for audiovisual emotion recognition node (#342)

* Implement audiovisual emotion recognition node

* Update projects/opendr_ws_2/src/opendr_perception/opendr_perception/audiovisual_emotion_recognition_node.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_perception/opendr_perception/audiovisual_emotion_recognition_node.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Skeleton_based_action_recognition ROS2 node (#344)

* skeleton_har ros2 node added

* ros2 node fixed

* Add skeleton based action recognition in setup.py

* Update projects/opendr_ws_2/src/opendr_perception/opendr_perception/skeleton_based_action_recognition_node.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_perception/opendr_perception/skeleton_based_action_recognition_node.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_perception/opendr_perception/skeleton_based_action_recognition_node.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_perception/opendr_perception/skeleton_based_action_recognition_node.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Fixed extra newline at end of line

Co-authored-by: tsampazk <tsampaka@csd.auth.gr>
Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* landmark_based_facial_expression_recognition ROS2 node (#345)

* landmark_fer ros2 node added

* ros2 node fixed

* Add landmark based facial expression recognition in setup.py

* Update projects/opendr_ws_2/src/opendr_perception/opendr_perception/landmark_based_facial_expression_recognition_node.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_perception/opendr_perception/landmark_based_facial_expression_recognition_node.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_perception/opendr_perception/landmark_based_facial_expression_recognition_node.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

Co-authored-by: tsampazk <tsampaka@csd.auth.gr>
Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Reformatted file

* Ros2 synthetic facial generation (#288)

* opendr_ws_ros2 synthetic facial image generation

* Delete projects/opendr_ws_2/src/ros2_bridge directory

delete directory

* delete unnecessary directory

* Update README.md

change to correct how the module is executed

* Update package.xml

change to be correct

* Update setup.py

* Update package.xml

* Update package.xml

* prepared with test

* Update synthetic_facial_generation.py

* Update synthetic_facial_generation.py

* Update synthetic_facial_generation.py

* Update setup.py

* Update test_pep257.py

* Update test_flake8.py

* Update test_copyright.py

* Update mesh_util.py

* Update Dockerfile-cuda

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update .github/workflows/tests_suite_develop.yml

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update .github/workflows/tests_suite_develop.yml

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/package.xml

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update test_copyright.py

* Update test_flake8.py

* Update test_pep257.py

* Update test_license.py

* Update projects/opendr_ws_2/src/data_generation/setup.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/setup.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/data_generation/synthetic_facial_generation.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/data_generation/synthetic_facial_generation.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/data_generation/synthetic_facial_generation.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/data_generation/synthetic_facial_generation.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/data_generation/synthetic_facial_generation.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/data_generation/synthetic_facial_generation.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/data_generation/synthetic_facial_generation.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update package.xml

* Update synthetic_facial_generation.py

* Update .github/workflows/tests_suite_develop.yml

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update synthetic_facial_generation.py

* Update synthetic_facial_generation.py

* ros2

* ros2

* ros2

* Update README.md

* Update synthetic_facial_generation.py

* Update synthetic_facial_generation.py

* Update synthetic_facial_generation.py

* Update synthetic_facial_generation.py

* Update test_copyright.py

* Update test_flake8.py

* Update test_pep257.py

* Update test_license.py

* Update test_license.py

* Update Dockerfile-cuda

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/data_generation/synthetic_facial_generation.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/data_generation/synthetic_facial_generation.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/data_generation/synthetic_facial_generation.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/data_generation/synthetic_facial_generation.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/data_generation/synthetic_facial_generation.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/test/test_copyright.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/test/test_flake8.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/test/test_pep257.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update tests/test_license.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/data_generation/synthetic_facial_generation.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/data_generation/synthetic_facial_generation.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/data_generation/synthetic_facial_generation.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/data_generation/synthetic_facial_generation.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/data_generation/synthetic_facial_generation.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/data_generation/synthetic_facial_generation.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/data_generation/synthetic_facial_generation.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/data_generation/synthetic_facial_generation.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/data_generation/synthetic_facial_generation.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update synthetic_facial_generation.py

* Update synthetic_facial_generation.py

* Update synthetic_facial_generation.py

* new updates

* updates

* Update bridge.py

* Update README.md

* Update data.py

* Update projects/opendr_ws_2/src/opendr_ros2_bridge/opendr_ros2_bridge/bridge.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_ros2_bridge/opendr_ros2_bridge/bridge.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/data_generation/data_generation/synthetic_facial_generation.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Ros2 gem (#295)

* use same drawing function as other object_detection_2d nodes

* Add gem ros2 node

* Apply suggestions from code review

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Rename node class to ObjectDetectionGemNode

* add argparse consistent with node args

* make arguments consistent

* tested gem ros and ros2 node

* fix pep8 errors

* apply changes requested by reviewer

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Renamed data generation package

* Renamed opendr_ros2_messages to _interfaces

* Fixed imports for opendr_ros2_interface

* Minor fixes across all existing nodes - brought up to speed with latest ROS1 nodes

* Added correct test_license skipped directories

* Renamed bridge package to opendr_bridge

* Fixed bridge dependency

* Fixed bridge dependency and imports in perception and simulation packages

* Fixed bridge dependency for simulation package

* Renamed opendr_ros2_interface to opendr_interface

* Fixed colcon build deprecation warnings

* Fixed bridge import in init

* Nanodet and yolov5 new ros2 nodes

* Fix class names for yolo nodes

* Convert to proper nanodet ros2 node

* Minor comment fix on yolo nodes

* Fixed node name in log info of nanodet node

* Upgrade pip prior to install in docker

* Update Dockerfile

* Revert

* Revert dependency change

* Some future ros1 fixes for ros2 yolov5 node

* Added new siamrpn node in setup.py

* Added initialization service for siamrpn node

* Initial SiamRPN ROS2 node

* New bridge methods for tracking siamrpn

* SiamRPN ROS2 node with built-in detector

* Removed unused imports

* Added missing description on siamrpn docstring

* Fixed siamrpn node name

* Minor fix in ros1 main readme

* Updated ROS2 main readme, consistent with ROS1

* Minor fix in ros1 readme

* Added ROS2 node diagram

* Updated ROS2 node readme introductory sections and pose estimation

* Updated whole ROS2 node readme for ROS2 specific stuff

* Updated default usb cam ROS2 topic

* Ros2 nodes for 3D Detection and 2D/3D tracking (#319)

* Add ros2 point cloud dataset node

* Fix point cloud dataset node args

* Update default dataset path

* Add voxel 3d detection node

* Add output topic to args

* Add tracking 3d node

* Add tracking 2d fairmot node

* Add deep sort ros2 node

* Fix style errors

* Fix C++ style error

* Add device parsing

* Move ros2 gitignore to global

* Fix image dataset conditions

* Fix docstrings

* Fix pc dataset comments and conditions

* Fix voxel 3d arguments and conditions

* Fix ab3dmot conditions

* Fix unused device var

* Fix deep sort conditions

* Fix fairmot conditions

* Fix ab3dmot docstrings

* Fix style errors

* Apply suggestions from code review

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update projects/opendr_ws_2/src/opendr_perception/opendr_perception/object_tracking_2d_deep_sort_node.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Fix parameter names

* Fix deep sort inference

* Fix deep sort no detections fail

* Removed dependency on rclcpp for data generation package

* Fix bridge imports

* Fix interface import

* Minor fixes on voxel node based on ros1 voxel node

* Future fix for embedded devices

* Fixes from ros1

* Minor formatting

* Matched arguments with ros1

* Fixed node name

* Fixed docstring

* Some autoformatting

* Some fixes

* Various fixes to match ros1 node

* Fixes from ros1

* Autoformatting

* Some fixes from ros1

* Some fixes from ros1

Co-authored-by: Illia Oleksiienko <io@ece.u.dk>
Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* ROS2 node for e2e planner and environment upgrade to Webots R2022b (#358)

* ros1 planning package

* end to end planner node

* end to end planner ros2 node initiated

* updated gym environment

* planner updated with tests and environment

* ROS1 node for e2e planner

* Update readme and doc

* Delete end_to_end_planner.py

* Apply suggestions from code review

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Apply suggestions from code review

* changing the world to Webots R2022b initiated

* apply suggestions from code review, update doc

* Update projects/opendr_ws/src/planning/scripts/end_to_end_planner.py

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>

* Update e2e_planning_learner.py

* Update e2e_planning_learner.py

* Update UAV_depth_planning_env.py

* Update test_end_to_end_planning.py

* Update euler_quaternion_transformations.py

* Update obstacle_randomizer.py

* Update sys_utils.py

* Update dependencies.ini

* changes for test learner independent from Webots

* Webots world updated to R2022b

* ROS1 node fix for 2022b version

* ROS2 node for e2e planner

* ROS2 node for e2e planner

* cleanup

* end-to-end planning ros2 node webots world launcher

* disable ros-numpy package dependency

* license test

* cleanup

* Apply suggestions from code review

Co-authored-by: ad-daniel <44834743+ad-daniel@users.noreply.github.com>

* Apply suggestions from code review

* Apply suggestions from code review - fix for webots-ros2 apt installation.

* fix driver name.

* fix docs Webots version

* Update src/opendr/planning/end_to_end_planning/__init__.py

Co-authored-by: ad-daniel <44834743+ad-daniel@users.noreply.github.com>

Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>
Co-authored-by: ad-daniel <daniel.dias@epfl.ch>
Co-authored-by: ad-daniel <44834743+ad-daniel@users.noreply.github.com>

* ROS1 and ROS2 nodes added for CoSTGCN

* Minor fix to allow .pt weights file loading

* Various improvements and fixes to costgcn ros1 node

* Renamed node to add _node

* Added documentation entry for new ROS1 node

* Applied some fixes and renamed node

* Added ROS2 readme entry for new node

* Added setup.py entry for ROS2 node

* Minor fix in skeleton based ROS2 node

* Fixed license year in the new nodes

* _pose2numpy returns tensor

* graph_type checking updated

* Exclude adjacency matrix weights from load checks

* Fix _pose2numpy

* Add no_grad in infer

* Add Category descriptions

* Applied fix for _pose2numpy from ros1 node

* Added ros node to CMakeLists.txt

* Some link fixes in ros docs

---------

Co-authored-by: Kostas Tsampazis <tsampaka@csd.auth.gr>
Co-authored-by: Kostas Tsampazis <27914645+tsampazk@users.noreply.github.com>
Co-authored-by: ad-daniel <44834743+ad-daniel@users.noreply.github.com>
Co-authored-by: Stefania Pedrazzi <stefaniapedrazzi@users.noreply.github.com>
Co-authored-by: Jelle <43064291+jelledouwe@users.noreply.github.com>
Co-authored-by: Nikolaos Passalis <passalis@users.noreply.github.com>
Co-authored-by: Lukas Hedegaard <lukasxhedegaard@gmail.com>
Co-authored-by: Quoc Nguyen <53263073+minhquoc0712@users.noreply.github.com>
Co-authored-by: charsyme <63857415+charsyme@users.noreply.github.com>
Co-authored-by: Niclas <49001036+vniclas@users.noreply.github.com>
Co-authored-by: aselimc <canakcia@cs.uni-freiburg.de>
Co-authored-by: Ahmet Selim Çanakçı <73101853+aselimc@users.noreply.github.com>
Co-authored-by: ekakalet <63847549+ekakalet@users.noreply.github.com>
Co-authored-by: ad-daniel <daniel.dias@epfl.ch>
Co-authored-by: Illia Oleksiienko <io@ece.au.dk>
Co-authored-by: Illia Oleksiienko <io@ece.u.dk>
Co-authored-by: halil93ibrahim <halil@ece.au.dk>
Co-authored-by: LukasHedegaard <lh@eng.au.dk>
19 people authored Feb 14, 2023
1 parent fe34b42 commit c3b0395
Showing 11 changed files with 590 additions and 49 deletions.
2 changes: 1 addition & 1 deletion projects/opendr_ws/README.md
@@ -77,7 +77,7 @@ Currently, apart from tools, opendr_ws contains the following ROS nodes (categor
9. [Semantic Segmentation](src/opendr_perception/README.md#semantic-segmentation-ros-node)
10. [Image-based Facial Emotion Estimation](src/opendr_perception/README.md#image-based-facial-emotion-estimation-ros-node)
11. [Landmark-based Facial Expression Recognition](src/opendr_perception/README.md#landmark-based-facial-expression-recognition-ros-node)
12. [Skeleton-based Human Action Recognition](src/opendr_perception/README.md#skeleton-based-human-action-recognition-ros-node)
12. [Skeleton-based Human Action Recognition](src/opendr_perception/README.md#skeleton-based-human-action-recognition-ros-nodes)
13. [Video Human Activity Recognition](src/opendr_perception/README.md#video-human-activity-recognition-ros-node)
## RGB + Infrared input
1. [End-to-End Multi-Modal Object Detection (GEM)](src/opendr_perception/README.md#2d-object-detection-gem-ros-node)
1 change: 1 addition & 0 deletions projects/opendr_ws/src/opendr_perception/CMakeLists.txt
@@ -38,6 +38,7 @@ catkin_install_python(PROGRAMS
scripts/semantic_segmentation_bisenet_node.py
scripts/object_tracking_2d_siamrpn_node.py
scripts/facial_emotion_estimation_node.py
scripts/continual_skeleton_based_action_recognition_node.py
scripts/point_cloud_2_publisher_node.py
DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
54 changes: 36 additions & 18 deletions projects/opendr_ws/src/opendr_perception/README.md
@@ -486,40 +486,58 @@ whose documentation can be found [here](../../../../docs/reference/landmark-base
For viewing the output, refer to the [notes above.](#notes)
### Skeleton-based Human Action Recognition ROS Node
### Skeleton-based Human Action Recognition ROS Nodes
A ROS node for performing skeleton-based human action recognition using either ST-GCN or PST-GCN models pretrained on NTU-RGBD-60 dataset.
The human body poses of the image are first extracted by the lightweight OpenPose method which is implemented in the toolkit, and they are passed to the skeleton-based action recognition method to be categorized.
A ROS node for performing skeleton-based human action recognition is provided, using either ST-GCN or PST-GCN models pretrained on the NTU-RGBD-60 dataset.
Another ROS node for performing continual skeleton-based human action recognition is provided, using the CoSTGCN method.
The human body poses in the image are first extracted by the lightweight OpenPose method implemented in the toolkit, and they are then passed to the skeleton-based action recognition methods to be categorized.
You can find the skeleton-based human action recognition ROS node python script [here](./scripts/skeleton_based_action_recognition_node.py) to inspect the code and modify it as you wish to fit your needs.
The node makes use of the toolkit's skeleton-based human action recognition tool which can be found [here for ST-GCN](../../../../src/opendr/perception/skeleton_based_action_recognition/spatio_temporal_gcn_learner.py)
and [here for PST-GCN](../../../../src/opendr/perception/skeleton_based_action_recognition/progressive_spatio_temporal_gcn_learner.py)
whose documentation can be found [here](../../../../docs/reference/skeleton-based-action-recognition.md).
You can find the skeleton-based human action recognition ROS node python script [here](./scripts/skeleton_based_action_recognition_node.py)
and the continual skeleton-based human action recognition ROS node python script [here](./scripts/continual_skeleton_based_action_recognition_node.py) to inspect the code and modify it as you wish to fit your needs.
The former makes use of the toolkit's skeleton-based human action recognition tool which can be found [here for ST-GCN](../../../../src/opendr/perception/skeleton_based_action_recognition/spatio_temporal_gcn_learner.py)
and [here for PST-GCN](../../../../src/opendr/perception/skeleton_based_action_recognition/progressive_spatio_temporal_gcn_learner.py), while the latter makes use
of the toolkit's continual skeleton-based human action recognition tool which can be found [here](../../../../src/opendr/perception/skeleton_based_action_recognition/continual_stgcn_learner.py).
Their documentation can be found [here](../../../../docs/reference/skeleton-based-action-recognition.md).
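For reference, the same learner calls used by the new continual node (shown further below) can also be exercised outside ROS. The following is a minimal, illustrative sketch; the zero-filled input tensor is only a placeholder for real OpenPose skeleton data:
```python
import torch
from opendr.perception.skeleton_based_action_recognition import CoSTGCNLearner

# Mirror the constructor and download/load calls made by the new node below.
learner = CoSTGCNLearner(device="cpu", backbone="costgcn",
                         in_channels=2, num_point=18, graph_type="openpose")
weights = learner.download(method_name="stgcn/stgcn_ntu_cv_lw_openpose",
                           mode="pretrained",
                           file_name="stgcn_ntu_cv_lw_openpose.pt")
learner.load(weights)

# One frame of (x, y) coordinates for 18 joints of up to 2 people:
# batch x channels x frames x joints x persons (B x C x T x V x S).
skeleton_seq = torch.zeros((1, 2, 1, 18, 2))
category = learner.infer(skeleton_seq)[0]
print(category, float(category.confidence.max()))
```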
#### Instructions for basic usage:
1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
2. You are then ready to start the skeleton-based human action recognition node:
1. Skeleton-based action recognition node
```shell
rosrun opendr_perception skeleton_based_action_recognition_node.py
```
The following optional arguments are available for the skeleton-based action recognition node:
- `--model` MODEL: model to use, options are `stgcn` or `pstgcn` (default=`stgcn`)
- `-c or --output_category_topic OUTPUT_CATEGORY_TOPIC`: topic name for recognized action category, `None` to stop the node from publishing on this topic (default=`"/opendr/skeleton_recognized_action"`)
- `-d or --output_category_description_topic OUTPUT_CATEGORY_DESCRIPTION_TOPIC`: topic name for description of the recognized action category, `None` to stop the node from publishing on this topic (default=`/opendr/skeleton_recognized_action_description`)
```shell
rosrun opendr_perception skeleton_based_action_recognition_node.py
```
The following optional arguments are available:
2. Continual skeleton-based action recognition node
```shell
rosrun opendr_perception continual_skeleton_based_action_recognition_node.py
```
The following optional arguments are available for the continual skeleton-based action recognition node:
- `--model` MODEL: model to use, the only option is `costgcn` (default=`costgcn`)
- `-c or --output_category_topic OUTPUT_CATEGORY_TOPIC`: topic name for recognized action category, `None` to stop the node from publishing on this topic (default=`"/opendr/continual_skeleton_recognized_action"`)
- `-d or --output_category_description_topic OUTPUT_CATEGORY_DESCRIPTION_TOPIC`: topic name for description of the recognized action category, `None` to stop the node from publishing on this topic (default=`/opendr/continual_skeleton_recognized_action_description`)
The following optional arguments are available for all nodes:
- `-h or --help`: show a help message and exit
- `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`)
- `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output pose-annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_pose_annotated`)
- `-p or --pose_annotations_topic POSE_ANNOTATIONS_TOPIC`: topic name for pose annotations, `None` to stop the node from publishing on this topic (default=`/opendr/poses`)
- `-c or --output_category_topic OUTPUT_CATEGORY_TOPIC`: topic name for recognized action category, `None` to stop the node from publishing on this topic (default=`"/opendr/skeleton_recognized_action"`)
- `-d or --output_category_description_topic OUTPUT_CATEGORY_DESCRIPTION_TOPIC`: topic name for description of the recognized action category, `None` to stop the node from publishing on this topic (default=`/opendr/skeleton_recognized_action_description`)
- `--model`: model to use, options are `stgcn` or `pstgcn`, (default=`stgcn`)
- `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
3. Default output topics:
- Detection messages: `/opendr/skeleton_based_action_recognition`, `/opendr/skeleton_based_action_recognition_description`, `/opendr/poses`
- Output images: `/opendr/image_pose_annotated`

For viewing the output, refer to the [notes above.](#notes)
1. Skeleton-based action recognition node:
- Detection messages: `/opendr/skeleton_recognized_action`, `/opendr/skeleton_recognized_action_description`, `/opendr/poses`
- Output images: `/opendr/image_pose_annotated`
2. Continual skeleton-based action recognition node:
- Detection messages: `/opendr/continual_skeleton_recognized_action`, `/opendr/continual_skeleton_recognized_action_description`, `/opendr/poses`
- Output images: `/opendr/image_pose_annotated`
For viewing the output, refer to the [notes above.](#notes)
### Video Human Activity Recognition ROS Node
234 changes: 234 additions & 0 deletions projects/opendr_ws/src/opendr_perception/scripts/continual_skeleton_based_action_recognition_node.py
@@ -0,0 +1,234 @@
#!/usr/bin/env python
# Copyright 2020-2023 OpenDR European Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import argparse
import torch
import numpy as np

import rospy
from std_msgs.msg import String
from vision_msgs.msg import ObjectHypothesis
from sensor_msgs.msg import Image as ROS_Image
from opendr_bridge.msg import OpenDRPose2D
from opendr_bridge import ROSBridge

from opendr.engine.data import Image
from opendr.perception.pose_estimation import draw
from opendr.perception.pose_estimation import LightweightOpenPoseLearner
from opendr.perception.skeleton_based_action_recognition import CoSTGCNLearner


class CoSkeletonActionRecognitionNode:

def __init__(self, input_rgb_image_topic="/usb_cam/image_raw",
output_rgb_image_topic="/opendr/image_pose_annotated",
pose_annotations_topic="/opendr/poses",
output_category_topic="/opendr/continual_skeleton_recognized_action",
output_category_description_topic="/opendr/continual_skeleton_recognized_action_description",
device="cuda", model='costgcn'):
"""
Creates a ROS Node for continual skeleton-based action recognition.
:param input_rgb_image_topic: Topic from which we are reading the input image
:type input_rgb_image_topic: str
:param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, we are not
publishing annotated image)
:type output_rgb_image_topic: str
:param pose_annotations_topic: Topic to which we are publishing the pose annotations (if None, we are not publishing
annotated pose annotations)
:type pose_annotations_topic: str
:param output_category_topic: Topic to which we are publishing the recognized action category info
(if None, we are not publishing the info)
:type output_category_topic: str
:param output_category_description_topic: Topic to which we are publishing the description of the recognized
action (if None, we are not publishing the description)
:type output_category_description_topic: str
:param device: device on which we are running inference ('cpu' or 'cuda')
:type device: str
:param model: model to use for continual skeleton-based action recognition.
(Options: "costgcn")
:type model: str
"""

# Set up ROS topics and bridge
self.input_rgb_image_topic = input_rgb_image_topic
self.bridge = ROSBridge()

if output_rgb_image_topic is not None:
self.image_publisher = rospy.Publisher(output_rgb_image_topic, ROS_Image, queue_size=1)
else:
self.image_publisher = None

if pose_annotations_topic is not None:
self.pose_publisher = rospy.Publisher(pose_annotations_topic, OpenDRPose2D, queue_size=1)
else:
self.pose_publisher = None

if output_category_topic is not None:
self.hypothesis_publisher = rospy.Publisher(output_category_topic, ObjectHypothesis, queue_size=1)
else:
self.hypothesis_publisher = None

if output_category_description_topic is not None:
self.string_publisher = rospy.Publisher(output_category_description_topic, String, queue_size=1)
else:
self.string_publisher = None

# Initialize the pose estimation
self.pose_estimator = LightweightOpenPoseLearner(device=device, num_refinement_stages=2,
mobilenet_use_stride=False,
half_precision=False
)
self.pose_estimator.download(path=".", verbose=True)
self.pose_estimator.load("openpose_default")

# Initialize the skeleton_based action recognition
self.action_classifier = CoSTGCNLearner(device=device, backbone=model, in_channels=2, num_point=18,
graph_type='openpose')

model_saved_path = self.action_classifier.download(method_name='stgcn/stgcn_ntu_cv_lw_openpose',
mode="pretrained",
file_name='stgcn_ntu_cv_lw_openpose.pt')
self.action_classifier.load(model_saved_path)

def listen(self):
"""
Start the node and begin processing input data
"""
rospy.init_node('opendr_continual_skeleton_action_recognition_node', anonymous=True)
rospy.Subscriber(self.input_rgb_image_topic, ROS_Image, self.callback, queue_size=1, buff_size=10000000)
rospy.loginfo("Continual skeleton-based action recognition node started.")
rospy.spin()

def callback(self, data):
"""
Callback that processes the input data and publishes to the corresponding topics.
:param data: input message
:type data: sensor_msgs.msg.Image
"""

# Convert sensor_msgs.msg.Image into OpenDR Image
image = self.bridge.from_ros_image(data, encoding='bgr8')

# Run pose estimation
poses = self.pose_estimator.infer(image)
if len(poses) > 2:
# select two poses with the highest energy
poses = _select_2_poses(poses)

# Publish detections in ROS message
if self.pose_publisher is not None:
for pose in poses:
# Convert OpenDR pose to ROS pose message using bridge and publish it
self.pose_publisher.publish(self.bridge.to_ros_pose(pose))

if self.image_publisher is not None:
# Get an OpenCV image back
image = image.opencv()
# Annotate image with poses
for pose in poses:
draw(image, pose)
# Convert the annotated OpenDR image to ROS image message using bridge and publish it
self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))

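# Wrap the current frame's poses as a single-frame sequence for the continual action classifier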
num_frames = 1
poses_list = [poses]
skeleton_seq = _pose2numpy(num_frames, poses_list)

# Run action recognition
result = self.action_classifier.infer(skeleton_seq) # input_size: BxCxTxVxS
category = result[0]
category.confidence = float(category.confidence.max())

if self.hypothesis_publisher is not None:
self.hypothesis_publisher.publish(self.bridge.to_ros_category(category))

if self.string_publisher is not None:
self.string_publisher.publish(self.bridge.to_ros_category_description(category))


def _select_2_poses(poses):
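# Rank poses by "energy" (the spread of their x and y keypoint coordinates) and keep the two most active ones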
selected_poses = []
energy = []
for i in range(len(poses)):
s = poses[i].data[:, 0].std() + poses[i].data[:, 1].std()
energy.append(s)
energy = np.array(energy)
index = energy.argsort()[::-1][0:2]
for i in range(len(index)):
selected_poses.append(poses[index[i]])
return selected_poses


def _pose2numpy(num_current_frames, poses_list):
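# Pack OpenDR poses into a (B=1, C=2 coords, T=frames, V=18 joints, M=2 persons) array, returned as a torch tensor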
C = 2
V = 18
M = 2 # num_person_in
skeleton_seq = np.zeros((1, C, num_current_frames, V, M))
for t in range(num_current_frames):
for m in range(len(poses_list[t])):
skeleton_seq[0, 0:2, t, :, m] = np.transpose(poses_list[t][m].data)
return torch.tensor(skeleton_seq)


def main():
parser = argparse.ArgumentParser()
parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input image",
type=str, default="/usb_cam/image_raw")
parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated image",
type=lambda value: value if value.lower() != "none" else None,
default="/opendr/image_pose_annotated")
parser.add_argument("-p", "--pose_annotations_topic", help="Topic name for pose annotations",
type=lambda value: value if value.lower() != "none" else None,
default="/opendr/poses")
parser.add_argument("-c", "--output_category_topic", help="Topic name for recognized action category",
type=lambda value: value if value.lower() != "none" else None,
default="/opendr/continual_skeleton_recognized_action")
parser.add_argument("-d", "--output_category_description_topic", help="Topic name for description of the "
"recognized action category",
type=lambda value: value if value.lower() != "none" else None,
default="/opendr/continual_skeleton_recognized_action_description")
parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\"",
type=str, default="cuda", choices=["cuda", "cpu"])
parser.add_argument("--model", help="Model to use, either \"costgcn\"",
type=str, default="costgcn", choices=["costgcn"])

args = parser.parse_args()

try:
if args.device == "cuda" and torch.cuda.is_available():
device = "cuda"
elif args.device == "cuda":
print("GPU not found. Using CPU instead.")
device = "cpu"
else:
print("Using CPU.")
device = "cpu"
except:
print("Using CPU.")
device = "cpu"

continual_skeleton_action_recognition_node = \
CoSkeletonActionRecognitionNode(input_rgb_image_topic=args.input_rgb_image_topic,
output_rgb_image_topic=args.output_rgb_image_topic,
pose_annotations_topic=args.pose_annotations_topic,
output_category_topic=args.output_category_topic,
output_category_description_topic=args.output_category_description_topic,
device=device,
model=args.model)
continual_skeleton_action_recognition_node.listen()


if __name__ == '__main__':
main()
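To illustrate the skeleton tensor layout consumed by `infer` (batch x channels x frames x joints x persons), the `_pose2numpy` helper above can be exercised with dummy data; `DummyPose` below is only a hypothetical stand-in for OpenDR `Pose` objects, whose `.data` holds 18 (x, y) keypoints:
```python
import numpy as np

# Hypothetical stand-in for an OpenDR Pose: .data holds 18 (x, y) keypoints.
class DummyPose:
    def __init__(self):
        self.data = np.zeros((18, 2))

# One frame containing two detected people; _pose2numpy is the helper defined above.
poses_list = [[DummyPose(), DummyPose()]]
skeleton_seq = _pose2numpy(num_current_frames=1, poses_list=poses_list)
print(skeleton_seq.shape)  # torch.Size([1, 2, 1, 18, 2]) -> B x C x T x V x S
```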