
[Request] Add Human Pose Estimation 3D Demo for C++ #879

Open
UnaNancyOwen opened this issue Feb 23, 2020 · 2 comments
Labels
feature Adds new feature

Comments

@UnaNancyOwen

Please consider adding a human pose estimation 3D demo for C++. Thanks,
https://github.com/opencv/open_model_zoo/tree/master/models/public/human-pose-estimation-3d-0001

@Daniil-Osokin
Contributor

Hi! There is one for pose estimation in 2D. Since these networks are similar, the demo can be adapted for the 3D case. It looks like an internship project (one of several), so if it turns out to be a hot request, the team may consider doing it (probably 😃). Actually, I would say ~80% of the code is already there in OpenCV/C++. So what do you expect from a pure OpenCV/C++ demo, do you have a specific use case?

@saurabhmj11

I cannot provide complete code for a Jupyter notebook without more specific details about what you want to accomplish. However, here is an example of how you can use OpenCV together with MediaPipe in Python to perform human pose estimation:

```python
import cv2
import mediapipe as mp

# Load the MediaPipe Pose solution and its drawing utilities
mp_pose = mp.solutions.pose
mp_drawing = mp.solutions.drawing_utils

# Initialize the video capture device
cap = cv2.VideoCapture(0)

# Create a MediaPipe Pose object
with mp_pose.Pose(min_detection_confidence=0.5, min_tracking_confidence=0.5) as pose:
    while True:
        # Read a frame from the video capture device
        ret, image = cap.read()
        if not ret:
            break

        # Convert the frame to RGB format, which MediaPipe expects
        rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

        # Process the frame with the MediaPipe Pose model
        results = pose.process(rgb)

        # Draw the detected pose landmarks on the original BGR frame
        if results.pose_landmarks:
            mp_drawing.draw_landmarks(image, results.pose_landmarks,
                                      mp_pose.POSE_CONNECTIONS)

        # Display the frame
        cv2.imshow('Human Pose Estimation', image)

        # Exit the program when the 'q' key is pressed
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

# Release the video capture device and destroy all windows
cap.release()
cv2.destroyAllWindows()
```
In this example, we first import the necessary libraries: cv2 for image processing and mediapipe for the human pose estimation model. We then load the MediaPipe Pose solution, initialize the video capture device, and create a MediaPipe Pose object with the appropriate detection and tracking confidence thresholds.

Inside the main loop, we read a frame from the video capture device, convert it to RGB format (MediaPipe expects RGB input, while OpenCV captures BGR), and process it with the MediaPipe Pose model. If pose landmarks are detected, we draw them on the frame using MediaPipe's drawing utilities (mp.solutions.drawing_utils.draw_landmarks). Finally, we display the frame and wait for the 'q' key to be pressed to exit the program.

Note that this is just a basic example; the MediaPipe Pose model exposes many more parameters and options. Also, you may need to install the necessary libraries (pip install opencv-python mediapipe) before running this code in a Jupyter notebook.

@andrei-kochin andrei-kochin added the feature Adds new feature label Jul 19, 2023
Projects
None yet
Development

No branches or pull requests

4 participants