ROS1 updates and fixes for several nodes (opendr-eu#281)
* Updated, cleaned up and added argparse to ROS1 pose estimation node

* Added additional information for pose estimation node documentation

* Improved wording in pose estimation ros docs

* Minor comment improvement

* Fix typo

* Updated, cleaned up and added argparse to ROS1 fall detection node

* Added box id type conversion to fix an error in object det2d centernet

* Updated, cleaned up and added argparse to ROS1 object detection 2d centernet node

* Updated, cleaned up and added argparse to ROS1 object detection 2d ssd node

* Updated, cleaned up and added argparse to ROS1 object detection 2d yolov3 node

* Minor typo fix

* Updated, cleaned up and added argparse to ROS1 object detection 2d DETR node

* Updated, cleaned up and added argparse to ROS1 face detection retinaface node

* Updated, cleaned up and added argparse to ROS1 semantic segmentation bisenet node

* Increased buffer size for fall detection and pose estimation to minimize delay

* Updated, cleaned up and added argparse to ROS1 face recognition node

* Fixed minor type conversion typo in bridge

* Sem seg bisenet now outputs a class ids heatmap and a separate visualization heatmap with improved colors and a legend

* Added custom opendr pose and keypoint messages to ros_bridge

* Modified opendr_bridge from/to ros pose methods to use the new messages and updated the docs

* ROS1 pose node now uses new custom pose ros message

* Enhanced pose __str__ method to return pose id and confidence

* Update human_model_generation_client.py

* Added missing space to pose out_string

Co-authored-by: Stefania Pedrazzi <stefaniapedrazzi@users.noreply.github.com>

* Fixed indentation on CMakeLists.txt

Co-authored-by: Stefania Pedrazzi <stefaniapedrazzi@users.noreply.github.com>

Co-authored-by: charsyme <63857415+charsyme@users.noreply.github.com>
Co-authored-by: Stefania Pedrazzi <stefaniapedrazzi@users.noreply.github.com>
3 people authored and Luca Marchionni committed Sep 1, 2022
1 parent e743efb commit 5749917
Showing 18 changed files with 970 additions and 602 deletions.
17 changes: 10 additions & 7 deletions docs/reference/rosbridge.md
@@ -59,25 +59,28 @@ ROSBridge.from_ros_pose(self,
                          ros_pose)
 ```
 
-Converts a ROS pose into an OpenDR pose.
+Converts an OpenDRPose2D message into an OpenDR Pose.
 
 Parameters:
 
-- **message**: *ros_bridge.msg.Pose*\
-  ROS pose to be converted into an OpenDR pose.
+- **ros_pose**: *ros_bridge.msg.OpenDRPose2D*\
+  ROS pose to be converted into an OpenDR Pose.
 
 #### `ROSBridge.to_ros_pose`
 
 ```python
 ROSBridge.to_ros_pose(self,
-                      ros_pose)
+                      pose)
 ```
-Converts an OpenDR pose into a ROS pose.
+Converts an OpenDR Pose into a OpenDRPose2D msg that can carry the same information, i.e. a list of keypoints,
+the pose detection confidence and the pose id.
+Each keypoint is represented as an OpenDRPose2DKeypoint with x, y pixel position on input image with (0, 0)
+being the top-left corner.
 
 Parameters:
 
-- **message**: *engine.target.Pose*\
-  OpenDR pose to be converted to ROS pose.
+- **pose**: *engine.target.Pose*\
+  OpenDR Pose to be converted to ROS OpenDRPose2D.
 
 
 #### `ROSBridge.to_ros_category`
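For context, a round trip through the updated bridge methods could look roughly like the sketch below. This is not part of the commit; it assumes a `Pose` can be built directly from a keypoint array and a confidence value (as in `engine.target`), and that its `id` attribute is settable.

```python
import numpy as np

from opendr.engine.target import Pose
from opendr_bridge import ROSBridge

bridge = ROSBridge()

# Hypothetical pose: 18 (x, y) pixel keypoints, a detection confidence and an id
keypoints = np.zeros((18, 2), dtype=int)
pose = Pose(keypoints, confidence=0.9)
pose.id = 0

ros_pose = bridge.to_ros_pose(pose)           # engine.target.Pose -> OpenDRPose2D message
opendr_pose = bridge.from_ros_pose(ros_pose)  # OpenDRPose2D message -> engine.target.Pose
print(opendr_pose)  # __str__ now also reports the pose id and confidence
```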
15 changes: 13 additions & 2 deletions projects/opendr_ws/src/perception/README.md
@@ -31,14 +31,25 @@
    rosrun usb_cam usb_cam_node
    ```
 
-2. You are then ready to start the pose detection node
+2. You are then ready to start the pose detection node (use `-h` to print out help for various arguments)
 
    ```shell
    rosrun perception pose_estimation.py
    ```
 
 3. You can examine the annotated image stream using `rqt_image_view` (select the topic `/opendr/image_pose_annotated`) or
-   `rostopic echo /opendr/poses`
+   `rostopic echo /opendr/poses`.
+
+Note that to use the pose messages properly, you need to create an appropriate subscriber that will convert the ROS pose messages back to OpenDR poses which you can access as described in the [documentation](https://github.com/opendr-eu/opendr/blob/master/docs/reference/engine-target.md#posekeypoints-confidence):
+```python
+...
+rospy.Subscriber("opendr/poses", Detection2DArray, self.callback)
+...
+def callback(self, data):
+    opendr_pose = self.bridge.from_ros_pose(data)
+    print(opendr_pose)
+    print(opendr_pose['r_eye'])
+```
 
 ## Fall Detection ROS Node
 Assuming that you have already [activated the OpenDR environment](../../../../docs/reference/installation.md), [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can
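The README snippet above elides the node setup; a minimal, self-contained subscriber along the same lines might look as follows. This is a sketch, not part of the commit: the node name and class name are illustrative, and the message type simply mirrors the snippet above (depending on the node, the published type may instead be the custom OpenDRPose2D message introduced by this PR).

```python
#!/usr/bin/env python
import rospy
from vision_msgs.msg import Detection2DArray  # message type used by the README snippet

from opendr_bridge import ROSBridge


class PoseListener:
    def __init__(self, topic="opendr/poses"):
        self.bridge = ROSBridge()
        rospy.Subscriber(topic, Detection2DArray, self.callback, queue_size=1)

    def callback(self, data):
        # Convert the ROS pose message back to an OpenDR pose
        opendr_pose = self.bridge.from_ros_pose(data)
        rospy.loginfo(str(opendr_pose))
        rospy.loginfo(str(opendr_pose['r_eye']))  # access a single keypoint by name


if __name__ == '__main__':
    rospy.init_node('opendr_pose_listener', anonymous=True)
    PoseListener()
    rospy.spin()
```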
151 changes: 83 additions & 68 deletions projects/opendr_ws/src/perception/scripts/face_detection_retinaface.py
@@ -13,115 +13,130 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-import rospy
+import argparse
 import mxnet as mx
 
+import rospy
 from vision_msgs.msg import Detection2DArray
 from sensor_msgs.msg import Image as ROS_Image
 from opendr_bridge import ROSBridge
+
+from opendr.engine.data import Image
 from opendr.perception.object_detection_2d import RetinaFaceLearner
 from opendr.perception.object_detection_2d import draw_bounding_boxes
-from opendr.engine.data import Image
 
 
 class FaceDetectionNode:
-    def __init__(self, input_image_topic="/usb_cam/image_raw", output_image_topic="/opendr/image_boxes_annotated",
-                 face_detections_topic="/opendr/faces", device="cuda", backbone="resnet"):
+
+    def __init__(self, input_rgb_image_topic="/usb_cam/image_raw",
+                 output_rgb_image_topic="/opendr/image_faces_annotated", detections_topic="/opendr/faces",
+                 device="cuda", backbone="resnet"):
         """
-        Creates a ROS Node for face detection
-        :param input_image_topic: Topic from which we are reading the input image
-        :type input_image_topic: str
-        :param output_image_topic: Topic to which we are publishing the annotated image (if None, we are not publishing
-        annotated image)
-        :type output_image_topic: str
-        :param face_detections_topic: Topic to which we are publishing the annotations (if None, we are not publishing
-        annotated pose annotations)
-        :type face_detections_topic: str
+        Creates a ROS Node for face detection with Retinaface.
+        :param input_rgb_image_topic: Topic from which we are reading the input image
+        :type input_rgb_image_topic: str
+        :param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
+        image is published)
+        :type output_rgb_image_topic: str
+        :param detections_topic: Topic to which we are publishing the annotations (if None, no face detection message
+        is published)
+        :type detections_topic: str
         :param device: device on which we are running inference ('cpu' or 'cuda')
         :type device: str
-        :param backbone: retinaface backbone, options are ('mnet' and 'resnet'), where 'mnet' detects masked faces as well
+        :param backbone: retinaface backbone, options are either 'mnet' or 'resnet',
+        where 'mnet' detects masked faces as well
         :type backbone: str
         """
+        self.input_rgb_image_topic = input_rgb_image_topic
 
-        # Initialize the face detector
-        self.face_detector = RetinaFaceLearner(backbone=backbone, device=device)
-        self.face_detector.download(path=".", verbose=True)
-        self.face_detector.load("retinaface_{}".format(backbone))
-        self.class_names = ["face", "masked_face"]
-
-        # Initialize OpenDR ROSBridge object
-        self.bridge = ROSBridge()
-
-        # setup communications
-        if output_image_topic is not None:
-            self.image_publisher = rospy.Publisher(output_image_topic, ROS_Image, queue_size=10)
+        if output_rgb_image_topic is not None:
+            self.image_publisher = rospy.Publisher(output_rgb_image_topic, ROS_Image, queue_size=1)
         else:
             self.image_publisher = None
 
-        if face_detections_topic is not None:
-            self.face_publisher = rospy.Publisher(face_detections_topic, Detection2DArray, queue_size=10)
+        if detections_topic is not None:
+            self.face_publisher = rospy.Publisher(detections_topic, Detection2DArray, queue_size=1)
        else:
             self.face_publisher = None
 
-        rospy.Subscriber(input_image_topic, ROS_Image, self.callback)
+        self.bridge = ROSBridge()
+
+        # Initialize the face detector
+        self.face_detector = RetinaFaceLearner(backbone=backbone, device=device)
+        self.face_detector.download(path=".", verbose=True)
+        self.face_detector.load("retinaface_{}".format(backbone))
+        self.class_names = ["face", "masked_face"]
+
+    def listen(self):
+        """
+        Start the node and begin processing input data.
+        """
+        rospy.init_node('face_detection_node', anonymous=True)
+        rospy.Subscriber(self.input_rgb_image_topic, ROS_Image, self.callback, queue_size=1, buff_size=10000000)
+        rospy.loginfo("Face detection RetinaFace node started.")
+        rospy.spin()
 
     def callback(self, data):
         """
-        Callback that process the input data and publishes to the corresponding topics
-        :param data: input message
+        Callback that processes the input data and publishes to the corresponding topics.
+        :param data: Input image message
         :type data: sensor_msgs.msg.Image
         """
-
         # Convert sensor_msgs.msg.Image into OpenDR Image
         image = self.bridge.from_ros_image(data, encoding='bgr8')
 
-        # Run pose estimation
+        # Run face detection
         boxes = self.face_detector.infer(image)
 
         # Get an OpenCV image back
         image = image.opencv()
 
-        # Convert detected boxes to ROS type and publish
-        ros_boxes = self.bridge.to_ros_boxes(boxes)
+        # Publish detections in ROS message
+        ros_boxes = self.bridge.to_ros_boxes(boxes)  # Convert to ROS boxes
         if self.face_publisher is not None:
             self.face_publisher.publish(ros_boxes)
-            rospy.loginfo("Published face boxes")
 
-        # Annotate image and publish result
-        # NOTE: converting back to OpenDR BoundingBoxList is unnecessary here,
-        # only used to test the corresponding bridge methods
-        odr_boxes = self.bridge.from_ros_boxes(ros_boxes)
-        image = draw_bounding_boxes(image, odr_boxes, class_names=self.class_names)
         if self.image_publisher is not None:
-            message = self.bridge.to_ros_image(Image(image), encoding='bgr8')
-            self.image_publisher.publish(message)
-            rospy.loginfo("Published annotated image")
+            # Annotate image with face detection boxes
+            image = draw_bounding_boxes(image, boxes, class_names=self.class_names)
+            # Convert the annotated OpenDR image to ROS2 image message using bridge and publish it
+            self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))
 
 
-if __name__ == '__main__':
-    # Automatically run on GPU/CPU
+def main():
+    parser = argparse.ArgumentParser()
+    parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
+                        type=str, default="/usb_cam/image_raw")
+    parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
+                        type=str, default="/opendr/image_faces_annotated")
+    parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
+                        type=str, default="/opendr/faces")
+    parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
+                        type=str, default="cuda", choices=["cuda", "cpu"])
+    parser.add_argument("--backbone",
+                        help="Retinaface backbone, options are either 'mnet' or 'resnet', where 'mnet' detects "
+                             "masked faces as well",
+                        type=str, default="resnet", choices=["resnet", "mnet"])
+    args = parser.parse_args()
+
     try:
-        if mx.context.num_gpus() > 0:
-            print("GPU found.")
-            device = 'cuda'
-        else:
+        if args.device == "cuda" and mx.context.num_gpus() > 0:
+            device = "cuda"
+        elif args.device == "cuda":
             print("GPU not found. Using CPU instead.")
-            device = 'cpu'
+            device = "cpu"
+        else:
+            print("Using CPU.")
+            device = "cpu"
     except:
-        device = 'cpu'
+        print("Using CPU.")
+        device = "cpu"
 
-    # initialize ROS node
-    rospy.init_node('opendr_face_detection', anonymous=True)
-    rospy.loginfo("Face detection node started!")
+    face_detection_node = FaceDetectionNode(device=device, backbone=args.backbone,
+                                            input_rgb_image_topic=args.input_rgb_image_topic,
+                                            output_rgb_image_topic=args.output_rgb_image_topic,
+                                            detections_topic=args.detections_topic)
+    face_detection_node.listen()
 
-    # get network backbone ("mnet" detects masked faces as well)
-    backbone = rospy.get_param("~backbone", "resnet")
-    input_image_topic = rospy.get_param("~input_image_topic", "/videofile/image_raw")
 
-    rospy.loginfo("Using backbone: {}".format(backbone))
-    assert backbone in ["resnet", "mnet"], "backbone should be one of ['resnet', 'mnet']"
-
-    # created node object
-    face_detection_node = FaceDetectionNode(device=device, backbone=backbone,
-                                            input_image_topic=input_image_topic)
-    # begin ROS communications
-    rospy.spin()
+if __name__ == '__main__':
+    main()
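With the argparse interface added above, the node can now be launched with explicit topic and device arguments instead of ROS parameters, for example (the values shown are simply the parser defaults; `-h` prints the full help):

```shell
rosrun perception face_detection_retinaface.py -i /usb_cam/image_raw -o /opendr/image_faces_annotated -d /opendr/faces --device cuda --backbone resnet
```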