ROS1 updates and fixes for several nodes #281

Merged
merged 29 commits on Sep 1, 2022
Changes from 26 commits
Commits
29 commits
ac150e2
Updated, cleaned up and added argparse to ROS1 pose estimation node
tsampazk Jul 15, 2022
ac3375f
Added additional information for pose estimation node documentation
tsampazk Jul 15, 2022
b3b9f73
Improved wording in pose estimation ros docs
tsampazk Jul 15, 2022
849b7cb
Minor comment improvement
tsampazk Jul 15, 2022
ac1bf3f
Fix typo
tsampazk Aug 17, 2022
8ec195a
Updated, cleaned up and added argparse to ROS1 fall detection node
tsampazk Aug 17, 2022
33dbc04
Merge branch 'develop' into ros1-update
tsampazk Aug 18, 2022
c7df07b
Added box id type conversion that caused error in object det2d centernet
tsampazk Aug 17, 2022
eb12647
Updated, cleaned up and added argparse to ROS1 object detection 2d ce…
tsampazk Aug 18, 2022
464d0ab
Updated, cleaned up and added argparse to ROS1 object detection 2d ss…
tsampazk Aug 18, 2022
60bbb58
Updated, cleaned up and added argparse to ROS1 object detection 2d yo…
tsampazk Aug 18, 2022
d6d5ded
Minor typo fix
tsampazk Aug 18, 2022
c4f060d
Updated, cleaned up and added argparse to ROS1 object detection 2d DE…
tsampazk Aug 18, 2022
8df2aca
Updated, cleaned up and added argparse to ROS1 face detection retinaf…
tsampazk Aug 19, 2022
ae933e6
Updated, cleaned up and added argparse to ROS1 semantic segmentation …
tsampazk Aug 19, 2022
45bb14d
Increased buffer size for fall detection and pose estimation to minim…
tsampazk Aug 19, 2022
187813f
Updated, cleaned up and added argparse to ROS1 face recognition node
tsampazk Aug 20, 2022
0558748
Merge branch 'develop' into ros1-update
tsampazk Aug 23, 2022
4c680cc
Fixed minor type conversion typo in bridge
tsampazk Aug 24, 2022
deafc83
Sem seg bisenet now outputs class ids heatmap and separate visualizat…
tsampazk Aug 25, 2022
8b20a9d
Added custom opendr pose and keypoint messages on ros_bridge
tsampazk Aug 26, 2022
eda9d27
Modified opendr_bridge from/to ros pose to use new messages and doc u…
tsampazk Aug 26, 2022
61cd459
ROS1 pose node now uses new custom pose ros message
tsampazk Aug 26, 2022
b1f41f2
Enhanced pose __str__ method to return pose id and confidence
tsampazk Aug 26, 2022
e5ec087
Merge branch 'develop' into ros1-update
tsampazk Aug 29, 2022
94ed008
Update human_model_generation_client.py
charsyme Aug 29, 2022
1b7d464
Merge branch 'develop' into ros1-update
tsampazk Sep 1, 2022
7d09058
Added missing space to pose out_string
tsampazk Sep 1, 2022
3d3e658
Fixed indentation on CMakeLists.txt
tsampazk Sep 1, 2022
17 changes: 10 additions & 7 deletions docs/reference/rosbridge.md
@@ -59,25 +59,28 @@ ROSBridge.from_ros_pose(self,
ros_pose)
```

Converts a ROS pose into an OpenDR pose.
Converts an OpenDRPose2D message into an OpenDR Pose.

Parameters:

- **message**: *ros_bridge.msg.Pose*\
ROS pose to be converted into an OpenDR pose.
- **ros_pose**: *ros_bridge.msg.OpenDRPose2D*\
ROS pose to be converted into an OpenDR Pose.

#### `ROSBridge.to_ros_pose`

```python
ROSBridge.to_ros_pose(self,
ros_pose)
pose)
```
Converts an OpenDR pose into a ROS pose.
Converts an OpenDR Pose into an OpenDRPose2D msg that carries the same information, i.e. a list of keypoints,
the pose detection confidence and the pose id.
Each keypoint is represented as an OpenDRPose2DKeypoint with x, y pixel coordinates on the input image, with (0, 0)
being the top-left corner.

Parameters:

- **message**: *engine.target.Pose*\
OpenDR pose to be converted to ROS pose.
- **pose**: *engine.target.Pose*\
OpenDR Pose to be converted to ROS OpenDRPose2D.
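
As a quick illustration of the updated method pair, below is a minimal round-trip sketch (not taken from this PR; it assumes an `engine.target.Pose` can be built directly from a keypoints array and a confidence value, and that its id is settable):

```python
import numpy as np

from opendr.engine.target import Pose
from opendr_bridge import ROSBridge

bridge = ROSBridge()

# Build a dummy OpenDR Pose: 18 keypoints, with (-1, -1) marking undetected joints
keypoints = np.full((18, 2), -1, dtype=int)
keypoints[0] = [120, 80]  # e.g. nose at pixel (120, 80)
pose = Pose(keypoints, 0.9)  # keypoints array and pose detection confidence
pose.id = 0  # to_ros_pose also carries the pose id

ros_pose = bridge.to_ros_pose(pose)  # engine.target.Pose -> OpenDRPose2D
opendr_pose = bridge.from_ros_pose(ros_pose)  # OpenDRPose2D -> engine.target.Pose
print(opendr_pose)  # prints keypoints, pose id and confidence
```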


#### `ROSBridge.to_ros_category`
15 changes: 13 additions & 2 deletions projects/opendr_ws/src/perception/README.md
@@ -31,14 +31,25 @@ Assuming that you have already [activated the OpenDR environment](../../../../do
rosrun usb_cam usb_cam_node
```

2. You are then ready to start the pose detection node
2. You are then ready to start the pose detection node (use `-h` to print out help for various arguments)

```shell
rosrun perception pose_estimation.py
```

3. You can examine the annotated image stream using `rqt_image_view` (select the topic `/opendr/image_pose_annotated`) or
`rostopic echo /opendr/poses`
`rostopic echo /opendr/poses`.

Note that to use the pose messages properly, you need to create an appropriate subscriber that converts the ROS pose messages back to OpenDR poses, which you can then access as described in the [documentation](https://github.com/opendr-eu/opendr/blob/master/docs/reference/engine-target.md#posekeypoints-confidence):
```python
...
rospy.Subscriber("opendr/poses", Detection2DArray, self.callback)
...
def callback(self, data):
    opendr_pose = self.bridge.from_ros_pose(data)
    print(opendr_pose)
    print(opendr_pose['r_eye'])
```
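
A slightly fuller sketch of such a subscriber node follows; the `OpenDRPose2D` import path and node name are assumptions based on the bridge documentation, not a verified listing:
```python
import rospy

from ros_bridge.msg import OpenDRPose2D  # custom pose message (assumed import path)
from opendr_bridge import ROSBridge


class PoseSubscriber:
    def __init__(self):
        self.bridge = ROSBridge()
        rospy.Subscriber("opendr/poses", OpenDRPose2D, self.callback, queue_size=1)

    def callback(self, data):
        # Convert the ROS pose message back into an OpenDR Pose
        opendr_pose = self.bridge.from_ros_pose(data)
        rospy.loginfo(str(opendr_pose))
        rospy.loginfo(str(opendr_pose['r_eye']))  # access a single keypoint by name


if __name__ == '__main__':
    rospy.init_node('opendr_pose_subscriber', anonymous=True)
    PoseSubscriber()
    rospy.spin()
```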

## Fall Detection ROS Node
Assuming that you have already [activated the OpenDR environment](../../../../docs/reference/installation.md), [built your workspace](../../README.md) and started roscore (i.e., just run `roscore`), then you can
@@ -13,115 +13,130 @@
# See the License for the specific language governing permissions and
# limitations under the License.

import rospy
import argparse
import mxnet as mx

import rospy
from vision_msgs.msg import Detection2DArray
from sensor_msgs.msg import Image as ROS_Image
from opendr_bridge import ROSBridge

from opendr.engine.data import Image
from opendr.perception.object_detection_2d import RetinaFaceLearner
from opendr.perception.object_detection_2d import draw_bounding_boxes
from opendr.engine.data import Image


class FaceDetectionNode:
def __init__(self, input_image_topic="/usb_cam/image_raw", output_image_topic="/opendr/image_boxes_annotated",
face_detections_topic="/opendr/faces", device="cuda", backbone="resnet"):

def __init__(self, input_rgb_image_topic="/usb_cam/image_raw",
output_rgb_image_topic="/opendr/image_faces_annotated", detections_topic="/opendr/faces",
device="cuda", backbone="resnet"):
"""
Creates a ROS Node for face detection
:param input_image_topic: Topic from which we are reading the input image
:type input_image_topic: str
:param output_image_topic: Topic to which we are publishing the annotated image (if None, we are not publishing
annotated image)
:type output_image_topic: str
:param face_detections_topic: Topic to which we are publishing the annotations (if None, we are not publishing
annotated pose annotations)
:type face_detections_topic: str
Creates a ROS Node for face detection with Retinaface.
:param input_rgb_image_topic: Topic from which we are reading the input image
:type input_rgb_image_topic: str
:param output_rgb_image_topic: Topic to which we are publishing the annotated image (if None, no annotated
image is published)
:type output_rgb_image_topic: str
:param detections_topic: Topic to which we are publishing the annotations (if None, no face detection message
is published)
:type detections_topic: str
:param device: device on which we are running inference ('cpu' or 'cuda')
:type device: str
:param backbone: retinaface backbone, options are ('mnet' and 'resnet'), where 'mnet' detects masked faces as well
:param backbone: retinaface backbone, options are either 'mnet' or 'resnet',
where 'mnet' detects masked faces as well
:type backbone: str
"""
self.input_rgb_image_topic = input_rgb_image_topic

# Initialize the face detector
self.face_detector = RetinaFaceLearner(backbone=backbone, device=device)
self.face_detector.download(path=".", verbose=True)
self.face_detector.load("retinaface_{}".format(backbone))
self.class_names = ["face", "masked_face"]

# Initialize OpenDR ROSBridge object
self.bridge = ROSBridge()

# setup communications
if output_image_topic is not None:
self.image_publisher = rospy.Publisher(output_image_topic, ROS_Image, queue_size=10)
if output_rgb_image_topic is not None:
self.image_publisher = rospy.Publisher(output_rgb_image_topic, ROS_Image, queue_size=1)
else:
self.image_publisher = None

if face_detections_topic is not None:
self.face_publisher = rospy.Publisher(face_detections_topic, Detection2DArray, queue_size=10)
if detections_topic is not None:
self.face_publisher = rospy.Publisher(detections_topic, Detection2DArray, queue_size=1)
else:
self.face_publisher = None

rospy.Subscriber(input_image_topic, ROS_Image, self.callback)
self.bridge = ROSBridge()

# Initialize the face detector
self.face_detector = RetinaFaceLearner(backbone=backbone, device=device)
self.face_detector.download(path=".", verbose=True)
self.face_detector.load("retinaface_{}".format(backbone))
self.class_names = ["face", "masked_face"]

def listen(self):
"""
Start the node and begin processing input data.
"""
rospy.init_node('face_detection_node', anonymous=True)
rospy.Subscriber(self.input_rgb_image_topic, ROS_Image, self.callback, queue_size=1, buff_size=10000000)
rospy.loginfo("Face detection RetinaFace node started.")
rospy.spin()

def callback(self, data):
"""
Callback that process the input data and publishes to the corresponding topics
:param data: input message
Callback that processes the input data and publishes to the corresponding topics.
:param data: Input image message
:type data: sensor_msgs.msg.Image
"""

# Convert sensor_msgs.msg.Image into OpenDR Image
image = self.bridge.from_ros_image(data, encoding='bgr8')

# Run pose estimation
# Run face detection
boxes = self.face_detector.infer(image)

# Get an OpenCV image back
image = image.opencv()

# Convert detected boxes to ROS type and publish
ros_boxes = self.bridge.to_ros_boxes(boxes)
# Publish detections in ROS message
ros_boxes = self.bridge.to_ros_boxes(boxes) # Convert to ROS boxes
if self.face_publisher is not None:
self.face_publisher.publish(ros_boxes)
rospy.loginfo("Published face boxes")

# Annotate image and publish result
# NOTE: converting back to OpenDR BoundingBoxList is unnecessary here,
# only used to test the corresponding bridge methods
odr_boxes = self.bridge.from_ros_boxes(ros_boxes)
image = draw_bounding_boxes(image, odr_boxes, class_names=self.class_names)
if self.image_publisher is not None:
message = self.bridge.to_ros_image(Image(image), encoding='bgr8')
self.image_publisher.publish(message)
rospy.loginfo("Published annotated image")
# Annotate image with face detection boxes
image = draw_bounding_boxes(image, boxes, class_names=self.class_names)
# Convert the annotated OpenDR image to ROS image message using bridge and publish it
self.image_publisher.publish(self.bridge.to_ros_image(Image(image), encoding='bgr8'))


def main():
parser = argparse.ArgumentParser()
parser.add_argument("-i", "--input_rgb_image_topic", help="Topic name for input rgb image",
type=str, default="/usb_cam/image_raw")
parser.add_argument("-o", "--output_rgb_image_topic", help="Topic name for output annotated rgb image",
type=str, default="/opendr/image_faces_annotated")
parser.add_argument("-d", "--detections_topic", help="Topic name for detection messages",
type=str, default="/opendr/faces")
parser.add_argument("--device", help="Device to use, either \"cpu\" or \"cuda\", defaults to \"cuda\"",
type=str, default="cuda", choices=["cuda", "cpu"])
parser.add_argument("--backbone",
help="Retinaface backbone, options are either 'mnet' or 'resnet', where 'mnet' detects "
"masked faces as well",
type=str, default="resnet", choices=["resnet", "mnet"])
args = parser.parse_args()


if __name__ == '__main__':
# Automatically run on GPU/CPU
try:
if mx.context.num_gpus() > 0:
print("GPU found.")
device = 'cuda'
else:
if args.device == "cuda" and mx.context.num_gpus() > 0:
device = "cuda"
elif args.device == "cuda":
print("GPU not found. Using CPU instead.")
device = 'cpu'
device = "cpu"
else:
print("Using CPU.")
device = "cpu"
except:
device = 'cpu'
print("Using CPU.")
device = "cpu"

# initialize ROS node
rospy.init_node('opendr_face_detection', anonymous=True)
rospy.loginfo("Face detection node started!")
face_detection_node = FaceDetectionNode(device=device, backbone=args.backbone,
input_rgb_image_topic=args.input_rgb_image_topic,
output_rgb_image_topic=args.output_rgb_image_topic,
detections_topic=args.detections_topic)
face_detection_node.listen()

# get network backbone ("mnet" detects masked faces as well)
backbone = rospy.get_param("~backbone", "resnet")
input_image_topic = rospy.get_param("~input_image_topic", "/videofile/image_raw")

rospy.loginfo("Using backbone: {}".format(backbone))
assert backbone in ["resnet", "mnet"], "backbone should be one of ['resnet', 'mnet']"

# created node object
face_detection_node = FaceDetectionNode(device=device, backbone=backbone,
input_image_topic=input_image_topic)
# begin ROS communications
rospy.spin()
if __name__ == '__main__':
main()
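
For reference, the learner calls wrapped by the node above can also be exercised outside ROS. A minimal offline sketch follows (the image path and the cv2-based loading are assumptions, not part of this PR):

```python
import cv2

from opendr.engine.data import Image
from opendr.perception.object_detection_2d import RetinaFaceLearner, draw_bounding_boxes

# Same initialization sequence as the node above
learner = RetinaFaceLearner(backbone="resnet", device="cpu")
learner.download(path=".", verbose=True)  # fetch the pretrained model
learner.load("retinaface_resnet")

# Wrap a BGR OpenCV image as an OpenDR Image, mirroring the node's callback
img = Image(cv2.imread("example.jpg"))
boxes = learner.infer(img)

# Draw the detected boxes and save the annotated image
annotated = draw_bounding_boxes(img.opencv(), boxes, class_names=["face", "masked_face"])
cv2.imwrite("example_annotated.jpg", annotated)
```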