Caliscope and rotating calibration file #136

Closed
JustHere27 opened this issue Oct 6, 2024 · 15 comments
Labels
bug Something isn't working

Comments

@JustHere27

I tried to run an analysis with the Caliscope sample data.
I am unclear on what is causing the error.

Any assistance would be greatly appreciated.

It says that calib_cam1['matrix'][0][0] caused TypeError: 'int' object is not subscriptable.
However, in the calibration config.toml, [cam_1] has:
matrix = [[1086.1181825737008, 0.0, 946.8128791033895], [0.0, 1080.8281918966268, 585.7453083609911], [0.0, 0.0, 1.0]]

I tried running:

from Pose2Sim import Pose2Sim
Pose2Sim.poseEstimation() # I skipped the calibration as I am using the config.toml from the sample data
Pose2Sim.synchronization()
Pose2Sim.personAssociation()
Pose2Sim.triangulation()
Pose2Sim.filtering()
Pose2Sim.markerAugmentation()
Pose2Sim.kinematics()

It stopped in Pose2Sim.personAssociation() with this stack trace:

Single-person analysis selected.
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[1], line 4
      2 Pose2Sim.poseEstimation()
      3 Pose2Sim.synchronization()
----> 4 Pose2Sim.personAssociation()
      5 Pose2Sim.triangulation()
      6 Pose2Sim.filtering()

File ~\miniconda3\envs\Pose2Sim\lib\site-packages\Pose2Sim\Pose2Sim.py:329, in personAssociation(config)
    326 logging.info(f"Project directory: {project_dir}")
    327 logging.info("---------------------------------------------------------------------\n")
--> 329 associate_all(config_dict)
    331 end = time.time()
    332 elapsed = end-start

File ~\miniconda3\envs\Pose2Sim\lib\site-packages\Pose2Sim\personAssociation.py:752, in associate_all(config_dict)
    748     rewrite_json_files(json_tracked_files_f, json_files_f, proposals, n_cams)
    751 # recap message
--> 752 recap_tracking(config_dict, error_min_tot, cameras_off_tot)

File ~\miniconda3\envs\Pose2Sim\lib\site-packages\Pose2Sim\personAssociation.py:592, in recap_tracking(config_dict, error, nb_cams_excluded)
    590 calib = toml.load(calib_file)
    591 calib_cam1 = calib[list(calib.keys())[0]]
--> 592 fm = calib_cam1['matrix'][0][0]
    593 Dm = euclidean_distance(calib_cam1['translation'], [0,0,0])
    594 mean_error_mm = np.around(mean_error_px * Dm / fm * 1000, decimals=1)

TypeError: 'int' object is not subscriptable

This was my file structure:

├───calibration
│   └───config.toml
├───videos
│  ├───port_1.mp4
│  ├───port_2.mp4
│  └───port_3.mp4
└───config.toml

My calibration config.toml was:

camera_count = 3
creation_date = 2024-08-18T11:45:12.286017
fps_sync_stream_processing = 100
save_tracked_points_video = true

[cam_1]
port = 1
rotation_count = 1
error = 0.192
exposure = "null"
grid_count = 20
size = [1920, 1080]
matrix = [[1086.1181825737008, 0.0, 946.8128791033895], [0.0, 1080.8281918966268, 585.7453083609911], [0.0, 0.0, 1.0]]
distortions = [0.16059955112801685, -0.21983681609059072, 0.0068910423179863395, -0.002675572662716701, 0.09488383201063787]
translation = [-0.32591022601321334, -0.23196948252804894, 0.9592899782866384]
rotation = [-0.8515827954418844, 0.8125747163905757, -1.434407144834311]

[cam_2]
port = 2
rotation_count = 1
error = 0.114
exposure = "null"
grid_count = 20
size = [1920, 1080]
matrix = [[957.5353038903306, 0.0, 976.997422719857], [0.0, 959.475242608966, 510.7999646121249], [0.0, 0.0, 1.0]]
distortions = [0.17451566581152764, -0.23870973334618992, 0.002217111514839098, -0.0024784135466969274, 0.1096648548329311]
translation = [-0.27344524293065237, -0.19952578520447495, 0.8897813711170587]
rotation = [-0.21156576755761772, 1.3292487863082516, -0.7562677554595282]

[cam_3]
port = 3
rotation_count = 1
error = 0.185
exposure = "null"
grid_count = 20
size = [1920, 1080]
matrix = [[1070.5916524347545, 0.0, 946.2638870470158], [0.0, 1070.4486118194836, 527.2323439235578], [0.0, 0.0, 1.0]]
distortions = [0.1488283523397413, -0.2041209633792853, 0.0069736691104673385, -0.002452390742946745, 0.0926812465799158]
translation = [-0.298307991957966, 0.09666490358202146, 0.941573474837498]
rotation = [-0.8737451880596284, 0.8840259807065655, -1.3050925395756494]

[capture_volume]
stage = 2
origin_sync_index = 267

[charuco]
columns = 3
rows = 4
board_height = 11.0
board_width = 5.5
dictionary = "DICT_4X4_50"
units = "inch"
aruco_scale = 0.75
square_size_overide_cm = 5.4
inverted = true

I took the my root config.toml from Demo_SinglePerson.
All I changed was convert_from in [calibration.convert]:

###############################################################################
## PROJECT PARAMETERS                                                        ##
###############################################################################


# Configure your project parameters here. 
# 
# IMPORTANT:
# If a parameter is not found here, Pose2Sim will look for its value in the 
# Config.toml file of the level above. This way, you can set global 
# instructions for the Session and alter them for specific Participants or Trials.
#
# If you wish to overwrite a parameter for a specific trial or participant,  
# edit its Config.toml file by uncommenting its key (e.g., [project])
# and editing its value (e.g., frame_range = [10,300]). Or else, uncomment 
# [filtering.butterworth] and set cut_off_frequency = 10, etc.



[project]
multi_person = false # true for trials with multiple participants. If false, only the main person in scene is analyzed (and it runs much faster). 
participant_height = 1.72 # m # float if single person, list of float if multi-person (same order as the Static trials) # Only used for marker augmentation
participant_mass = 70.0 # kg # Only used for marker augmentation and scaling

frame_rate = 'auto' # fps # int or 'auto'. If 'auto', finds from video (or defaults to 60 fps if you work with images) 
frame_range = [] # For example [10,300], or [] for all frames. 
## If cameras are not synchronized, designates the frame range of the camera with the shortest recording time
## N.B.: If you want a time range instead, use frame_range = time_range * frame_rate
## For example if you want to analyze from 0.1 to 2 seconds with a 60 fps frame rate, 
## frame_range = [0.1, 2.0]*frame_rate = [6, 120]

exclude_from_batch = [] # List of trials to be excluded from batch analysis, ['<participant_dir/trial_dir>', 'etc'].
# e.g. ['S00_P00_Participant/S00_P00_T00_StaticTrial', 'S00_P00_Participant/S00_P00_T01_BalancingTrial']

[pose]
vid_img_extension = 'mp4' # any video or image extension
pose_model = 'HALPE_26'  #With RTMLib: HALPE_26 (body and feet, default), COCO_133 (body, feet, hands), COCO_17 (body)
                         # /!\ Only RTMPose is natively embedded in Pose2Sim. For all other pose estimation methods, you will have to run them yourself, and then refer to the documentation to convert the files if needed
                         #With MMPose: HALPE_26, COCO_133, COCO_17, CUSTOM. See CUSTOM example at the end of the file
                         #With openpose: BODY_25B, BODY_25, BODY_135, COCO, MPII
                         #With mediapipe: BLAZEPOSE
                         #With alphapose: HALPE_26, HALPE_68, HALPE_136, COCO_133
                         #With deeplabcut: CUSTOM. See example at the end of the file
mode = 'balanced' # 'lightweight', 'balanced', 'performance'
det_frequency = 1 # Run person detection only every N frames, and in between track previously detected bounding boxes (keypoint detection is still run on all frames). 
                  # Equal to or greater than 1, can be as high as you want in simple uncrowded cases. Much faster, but might be less accurate. 
display_detection = true
overwrite_pose = false # set to false if you don't want to recalculate pose estimation when it has already been done
save_video = 'to_video' # 'to_video' or 'to_images', 'none', or ['to_video', 'to_images']
output_format = 'openpose' # 'openpose', 'mmpose', 'deeplabcut', 'none' or a list of them # /!\ only 'openpose' is supported for now


[synchronization]
display_sync_plots = true # true or false (lowercase)
keypoints_to_consider = ['RWrist'] # 'all' if all points should be considered, for example if the participant did not perform any particular sharp movement. In this case, the capture needs to be 5-10 seconds long at least
                           # ['RWrist', 'RElbow'] list of keypoint names if you want to specify keypoints with a sharp vertical motion.
approx_time_maxspeed = 'auto' # 'auto' if you want to consider the whole capture (default, slower if long sequences)
                           # [10.0, 2.0, 8.0, 11.0] list of times (seconds) if you want to specify the approximate time of a clear vertical event for each camera
time_range_around_maxspeed = 2.0 # Search for best correlation in the range [approx_time_maxspeed - time_range_around_maxspeed, approx_time_maxspeed  + time_range_around_maxspeed]
likelihood_threshold = 0.4 # Keypoints whose likelihood is below likelihood_threshold are filtered out
filter_cutoff = 6 # time series are smoothed to get coherent time-lagged correlation
filter_order = 4


# Take heart, calibration is not that complicated once you get the hang of it!
[calibration]
calibration_type = 'convert' # 'convert' or 'calculate'

   [calibration.convert]
   convert_from = 'caliscope' # 'caliscope', 'qualisys', 'optitrack', 'vicon', 'opencap', 'easymocap', 'biocv', 'anipose', or 'freemocap'
      [calibration.convert.caliscope]  # No parameter needed
      [calibration.convert.qualisys]
      binning_factor = 1 # Usually 1, except when filming in 540p where it usually is 2
      [calibration.convert.optitrack]  # See readme for instructions
      [calibration.convert.vicon]      # No parameter needed
      [calibration.convert.opencap]    # No parameter needed
      [calibration.convert.easymocap]  # No parameter needed
      [calibration.convert.biocv]      # No parameter needed
      [calibration.convert.anipose]    # No parameter needed
      [calibration.convert.freemocap]  # No parameter needed
  

   [calibration.calculate] 
      # Camera properties, theoretically need to be calculated only once in a camera lifetime
      [calibration.calculate.intrinsics]
      overwrite_intrinsics = false # set to false if you don't want to recalculate intrinsic parameters
      show_detection_intrinsics = true # true or false (lowercase)
      intrinsics_extension = 'jpg' # any video or image extension
      extract_every_N_sec = 1 # if video, extract frames every N seconds (can be <1 )
      intrinsics_corners_nb = [4,7] 
      intrinsics_square_size = 60 # mm

      # Camera placements, need to be done before every session
      [calibration.calculate.extrinsics]
      calculate_extrinsics = true # true or false (lowercase) 
      extrinsics_method = 'scene' # 'board', 'scene', 'keypoints'
      # 'board' should be large enough to be detected when laid on the floor. Not recommended.
      # 'scene' involves manually clicking points of known coordinates in the scene. Usually more accurate if points are spread out.
      # 'keypoints' uses automatic pose estimation of a person freely walking and waving arms in the scene. Slightly less accurate, requires synchronized cameras.
      moving_cameras = false # Not implemented yet

         [calibration.calculate.extrinsics.board]
         show_reprojection_error = true # true or false (lowercase)
         extrinsics_extension = 'png' # any video or image extension
         extrinsics_corners_nb = [4,7] # [H,W] rather than [w,h]
         extrinsics_square_size = 60 # mm # [h,w] if square is actually a rectangle

         [calibration.calculate.extrinsics.scene]
         show_reprojection_error = true # true or false (lowercase)
         extrinsics_extension = 'png' # any video or image extension
         # list of 3D coordinates to be manually labelled on images. Can also be a 2 dimensional plane. 
         # in m -> unlike for intrinsics, NOT in mm!
         object_coords_3d =   [[-2.0,  0.3,  0.0], 
                              [-2.0 , 0.0,  0.0], 
                              [-2.0, 0.0,  0.05], 
                              [-2.0, -0.3 ,  0.0], 
                              [0.0,  0.3,  0.0], 
                              [0.0, 0.0,  0.0], 
                              [0.0, 0.0,  0.05], 
                              [0.0, -0.3,  0.0]]
        
         [calibration.calculate.extrinsics.keypoints]
         # Coming soon!


[personAssociation]
   likelihood_threshold_association = 0.3

   [personAssociation.single_person]
   reproj_error_threshold_association = 20 # px
   tracked_keypoint = 'Neck' # If the neck is not detected by the pose_model, check skeleton.py 
               # and choose a stable point for tracking the person of interest (e.g., 'right_shoulder' or 'RShoulder')
   
   [personAssociation.multi_person]
   reconstruction_error_threshold = 0.1 # 0.1 = 10 cm
   min_affinity = 0.2 # affinity below which a correspondence is ignored


[triangulation]
reproj_error_threshold_triangulation = 15 # px
likelihood_threshold_triangulation = 0.3
min_cameras_for_triangulation = 2
interpolation = 'linear' #linear, slinear, quadratic, cubic, or none
                        # 'none' if you don't want to interpolate missing points
interp_if_gap_smaller_than = 10 # do not interpolate bigger gaps
show_interp_indices = true # true or false (lowercase). For each keypoint, return the frames that need to be interpolated
fill_large_gaps_with = 'last_value' # 'last_value', 'nan', or 'zeros' 
handle_LR_swap = false # Better if few cameras (eg less than 4) with risk of limb swapping (eg camera facing sagittal plane), otherwise slightly less accurate and slower
undistort_points = false # Better if distorted image (parallel lines curvy on the edge or at least one param > 10^-2), but unnecessary (and slightly slower) if distortions are low
make_c3d = true # save triangulated data in c3d format in addition to trc


[filtering]
type = 'butterworth' # butterworth, kalman, gaussian, LOESS, median, butterworth_on_speed
display_figures = true # true or false (lowercase) 
make_c3d = true # also save triangulated data in c3d format

   [filtering.butterworth]
   order = 4 
   cut_off_frequency = 6 # Hz
   [filtering.kalman]
   # How much more do you trust triangulation results (measurements), than previous data (process assuming constant acceleration)?
   trust_ratio = 100 # = measurement_trust/process_trust ~= process_noise/measurement_noise
   smooth = true # should be true, unless you need real-time filtering
   [filtering.butterworth_on_speed]
   order = 4 
   cut_off_frequency = 10 # Hz
   [filtering.gaussian]
   sigma_kernel = 2 #px
   [filtering.LOESS]
   nb_values_used = 30 # = fraction of data used * nb frames
   [filtering.median]
   kernel_size = 9


[markerAugmentation] 
## Requires the following markers: ["Neck", "RShoulder", "LShoulder", "RHip", "LHip", "RKnee", "LKnee",
##        "RAnkle", "LAnkle", "RHeel", "LHeel", "RSmallToe", "LSmallToe",
##        "RBigToe", "LBigToe", "RElbow", "LElbow", "RWrist", "LWrist"]
make_c3d = true # save triangulated data in c3d format in addition to trc


[kinematics]
use_augmentation = true  # true or false (lowercase) # Set to true if you want to use the model with augmented markers
right_left_symmetry = true # true or false (lowercase) # Set to false only if you have good reasons to think the participant is not symmetrical (e.g. prosthetic limb)
remove_individual_scaling_setup = true # true or false (lowercase) # If true, the individual scaling setup files are removed to avoid cluttering
remove_individual_IK_setup = true # true or false (lowercase) # If true, the individual IK setup files are removed to avoid cluttering



# CUSTOM skeleton, if you trained your own model from DeepLabCut or MMPose for example. 
# Make sure the node ids correspond to the column numbers of the 2D pose file, starting from zero.
# 
# If you want to perform inverse kinematics, you will also need to create an OpenSim model
# and add to its markerset the location where you expect the triangulated keypoints to be detected.
# 
# In this example, CUSTOM reproduces the HALPE_26 skeleton (default skeletons are stored in skeletons.py).
# You can create as many custom skeletons as you want, just add them further down and rename them.
# 
# Check your model hierarchy with:  for pre, _, node in RenderTree(model): 
#                                      print(f'{pre}{node.name} id={node.id}')
[pose.CUSTOM]
name = "Hip"
id = "19"
  [[pose.CUSTOM.children]]
  name = "RHip"
  id = 12
     [[pose.CUSTOM.children.children]]
     name = "RKnee"
     id = 14
        [[pose.CUSTOM.children.children.children]]
        name = "RAnkle"
        id = 16
           [[pose.CUSTOM.children.children.children.children]]
           name = "RBigToe"
           id = 21
              [[pose.CUSTOM.children.children.children.children.children]]
              name = "RSmallToe"
              id = 23
           [[pose.CUSTOM.children.children.children.children]]
           name = "RHeel"
           id = 25
  [[pose.CUSTOM.children]]
  name = "LHip"
  id = 11
     [[pose.CUSTOM.children.children]]
     name = "LKnee"
     id = 13
        [[pose.CUSTOM.children.children.children]]
        name = "LAnkle"
        id = 15
           [[pose.CUSTOM.children.children.children.children]]
           name = "LBigToe"
           id = 20
              [[pose.CUSTOM.children.children.children.children.children]]
              name = "LSmallToe"
              id = 22
           [[pose.CUSTOM.children.children.children.children]]
           name = "LHeel"
           id = 24
  [[pose.CUSTOM.children]]
  name = "Neck"
  id = 18
     [[pose.CUSTOM.children.children]]
     name = "Head"
     id = 17
        [[pose.CUSTOM.children.children.children]]
        name = "Nose"
        id = 0
     [[pose.CUSTOM.children.children]]
     name = "RShoulder"
     id = 6
        [[pose.CUSTOM.children.children.children]]
        name = "RElbow"
        id = 8
           [[pose.CUSTOM.children.children.children.children]]
           name = "RWrist"
           id = 10
     [[pose.CUSTOM.children.children]]
     name = "LShoulder"
     id = 5
        [[pose.CUSTOM.children.children.children]]
        name = "LElbow"
        id = 7
           [[pose.CUSTOM.children.children.children.children]]
           name = "LWrist"
           id = 9
@davidpagnon
Collaborator

Hi,

From what I can see here, Caliscope and Pose2Sim don't provide the same pose estimation output.

  • Pose2Sim output is in the OpenPose JSON format (see the sketch below), and synchronization is done afterwards
  • Caliscope output is a custom CSV format, with synchronization done beforehand; synchronization is either done automatically with webcams, or requires creating a frame_time_history.csv file
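
For reference, OpenPose writes one JSON file per frame and per camera, with each person's keypoints flattened as (x, y, confidence) triplets. A minimal sketch of a single frame (keypoint values and the output filename are illustrative, not from the sample data):

import json

# One OpenPose-format detection file covers one frame from one camera
frame = {
    "version": 1.3,
    "people": [
        {
            "person_id": [-1],
            # Flattened as [x0, y0, conf0, x1, y1, conf1, ...], one triplet per keypoint
            "pose_keypoints_2d": [960.0, 540.0, 0.95,
                                  955.0, 500.0, 0.90],  # truncated for brevity
        }
    ],
}

# Hypothetical output path; Pose2Sim expects one such file per frame, per camera
with open("frame_000000.json", "w") as f:
    json.dump(frame, f)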

Pose2Sim supports Caliscope calibration, so you can use it for your calibration if you prefer; but other than that, I would encourage you to try both tools independently from the pose estimation stage on, and see which best fits your needs.

@mprib may be able to tell you more about it!

@davidpagnon added the help wanted label Oct 7, 2024
@davidpagnon
Collaborator

Actually, I just realized that although the calibration correctly loads in the Blender add-on (https://github.com/davidpagnon/Pose2Sim_Blender),
there is a little bug in the recap messages of personAssociation and triangulation. I'm fixing it now and will release a new version by the end of the day, which you will be able to install with pip install Pose2Sim -U.

NB: Since there is only one person in the scene, you can skip personAssociation.
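
For context, the recap code takes the first top-level key of the calibration TOML, and in a Caliscope file the scalar camera_count comes before the [cam_*] tables, so indexing into it fails. A minimal sketch of the failure, assuming the config.toml posted above:

import toml

calib = toml.load('calibration/config.toml')  # the Caliscope calibration above
first_key = list(calib.keys())[0]             # 'camera_count', not 'cam_1'
print(first_key, calib[first_key])            # -> camera_count 3
calib[first_key]['matrix'][0][0]              # TypeError: 'int' object is not subscriptable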

@davidpagnon
Collaborator

I just released the fix; would you mind telling me if it works on your end?

P.S.: I misread this message and thought you had skipped pose estimation. I was wrong, you did things correctly! (Although you can still skip personAssociation().)

Pose2Sim.poseEstimation() # I skipped the calibration as I am using the config.toml from the sample data

@davidpagnon added the bug label and removed the help wanted label Oct 7, 2024
@JustHere27
Author

Thanks for the quick and clear response.
I tried it again after running pip install Pose2Sim -U; however, I got another error, this time in triangulation.py:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[1], line 4
      2 Pose2Sim.poseEstimation()
      3 Pose2Sim.synchronization()
----> 4 Pose2Sim.triangulation()
      5 Pose2Sim.filtering()
      6 Pose2Sim.markerAugmentation()

File ~\miniconda3\envs\Pose2Sim\lib\site-packages\Pose2Sim\Pose2Sim.py:375, in triangulation(config)
    372 logging.info(f"Project directory: {project_dir}")
    373 logging.info("---------------------------------------------------------------------\n")
--> 375 triangulate_all(config_dict)
    377 end = time.time()
    378 elapsed = end-start

File ~\miniconda3\envs\Pose2Sim\lib\site-packages\Pose2Sim\triangulation.py:956, in triangulate_all(config_dict)
    936     c3d_paths = [convert_to_c3d(t) for t in trc_paths]
    938 # # Reorder TRC files
    939 # if multi_person and reorder_trc and len(trc_paths)>1:
    940 #     trc_id = retrieve_right_trc_order(trc_paths)
   (...)
    954
    955 # Recap message
--> 956 recap_triangulate(config_dict, error_tot, nb_cams_excluded_tot, keypoints_names, cam_excluded_count, interp_frames, non_interp_frames, trc_paths)

File ~\miniconda3\envs\Pose2Sim\lib\site-packages\Pose2Sim\triangulation.py:300, in recap_triangulate(config_dict, error, nb_cams_excluded, keypoints_names, cam_excluded_count, interp_frames, non_interp_frames, trc_path)
    298 calib_file = glob.glob(os.path.join(calib_dir, '*.toml'))[0] # lastly created calibration file
    299 calib = toml.load(calib_file)
--> 300 cam_names = np.array([calib[c].get('name') for c in list(calib.keys())])
    301 cam_names = cam_names[list(cam_excluded_count[0].keys())]
    302 error_threshold_triangulation = config_dict.get('triangulation').get('reproj_error_threshold_triangulation')

File ~\miniconda3\envs\Pose2Sim\lib\site-packages\Pose2Sim\triangulation.py:300, in <listcomp>(.0)
    298 calib_file = glob.glob(os.path.join(calib_dir, '*.toml'))[0] # lastly created calibration file
    299 calib = toml.load(calib_file)
--> 300 cam_names = np.array([calib[c].get('name') for c in list(calib.keys())])
    301 cam_names = cam_names[list(cam_excluded_count[0].keys())]
    302 error_threshold_triangulation = config_dict.get('triangulation').get('reproj_error_threshold_triangulation')

AttributeError: 'int' object has no attribute 'get'

I ran:

from Pose2Sim import Pose2Sim
Pose2Sim.poseEstimation()
Pose2Sim.synchronization()
Pose2Sim.triangulation()
Pose2Sim.filtering()
Pose2Sim.markerAugmentation()
Pose2Sim.kinematics()

@davidpagnon
Collaborator

Hi, sorry about that. I pushed a quick fix for something that jumped to mind, but without testing it... I'll test it properly tonight and keep you updated!
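
Both crashes come from iterating over every top-level key of the calibration TOML, including scalars such as camera_count. One way to guard against this (a sketch, not necessarily the fix that was released) is to keep only dict-valued, non-metadata sections, as the rotation script further down also does:

import toml
import numpy as np

calib = toml.load('calibration/config.toml')
# Keep only camera tables; skip top-level scalars and metadata-style sections
cam_keys = [k for k in calib
            if isinstance(calib[k], dict)
            and k not in ('metadata', 'capture_volume', 'charuco', 'checkerboard')]
cam_names = np.array([calib[k].get('name') for k in cam_keys])  # no AttributeError; .get returns None if absent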

@mprib

mprib commented Oct 7, 2024

Thank you for the mention bringing this to my attention, @davidpagnon. I had been unaware of the OpenPose format, having just defaulted to my preference for tabular tidy-style data. Obligatory xkcd.

I've opened an issue to jog my memory to revisit this as time permits: mprib/caliscope#650

@JustHere27
Author

@davidpagnon No worries, I am happy that you were willing to help.

@davidpagnon
Collaborator

davidpagnon commented Oct 7, 2024

Hi again,

I had another look at it and upgraded Pose2Sim; it should not throw any errors anymore.

Now, here are two more issues:

  • poseEstimation() does not work well with images when they are rotated. Until today, I either filmed videos upright, or used cameras that detected their own rotation (phone, action camera...), so I did not have this problem.

    I had a look, and there is a button in the Caliscope GUI to rotate videos. This is then specified in the calibration file as rotation_count. I have not implemented such a feature, but I guess it would be a good idea!

    In the meantime, you can rotate your videos and your calibration file with the script below.

  • Results are still far from perfect. Here are my current guesses:

    • I may have made a mistake in the rotation of the calibration file, or in handling the rotation_count parameter in CaliScope calibration
    • Or it could be due to distortion in the sample's cameras, which I currently do not handle. Undistorting points should be easy to add, but I am having trouble finding time right now.

    Either way, if you ever have a chance to test Pose2Sim from a Caliscope calibration on other data, I would be quite interested in knowing how it goes!


# ROTATE VIDEOS AND CALIBRATION FILES

# pip install moviepy
# Go to your videos folder and run this script

from pathlib import Path
import toml
import numpy as np
import cv2
from moviepy.editor import VideoFileClip
from Pose2Sim.common import rotate_cam


videos_files = ['port_1.mp4', 'port_2.mp4', 'port_3.mp4']
rotation = [-90, -90, -90]  # degrees to rotate each video by
calib_file = '../calibration/config.toml'


# Rotate videos (the originals are kept with a .old suffix)
for vid, rot in zip(videos_files, rotation):
    Path(vid).rename(f'{vid}.old')
    clip = VideoFileClip(f'{vid}.old')
    clip = clip.rotate(rot)
    clip.write_videofile(vid)
    clip.close()


# Rotate calibration file (the original is kept with a .toml.old suffix)
calib = toml.load(calib_file)
new_calib = calib.copy()
Path(calib_file).rename(Path(calib_file).with_suffix('.toml.old'))
# Keep only the camera sections, skipping top-level scalars and metadata tables
cal_keys = [c for c in calib.keys()
            if c not in ['metadata', 'capture_volume', 'charuco', 'checkerboard']
            and isinstance(calib[c], dict)]

for cam, rot in zip(cal_keys, rotation):
    # Swap image width and height for quarter-turn rotations
    new_calib[cam]['size'] = calib[cam]['size'][::-1] if rot in (90, -90) else calib[cam]['size']

    # Update the intrinsic matrix: swap the focal lengths and remap the optical center
    K = np.array(calib[cam]['matrix'])
    K[0,0], K[1,1] = (K[1,1], K[0,0]) if rot in (90, -90) else (K[0,0], K[1,1])
    K[0,2], K[1,2] = (calib[cam]['size'][0]-K[1,2], K[0,2]) if rot == -90 \
                        else (K[1,2], calib[cam]['size'][1]-K[0,2]) if rot == 90 \
                        else (calib[cam]['size'][1]-K[0,2], calib[cam]['size'][0]-K[1,2])
    new_calib[cam]['matrix'] = K.tolist()

    # Rotate the extrinsics about the z (optical) axis,
    # then convert the rotation matrix back to a Rodrigues vector
    r, t = rotate_cam(calib[cam]['rotation'], calib[cam]['translation'], ang_z=-np.radians(rot))
    r = np.squeeze(cv2.Rodrigues(r)[0])
    new_calib[cam]['rotation'], new_calib[cam]['translation'] = r.tolist(), t.tolist()

with open(calib_file, 'w') as f:
    toml.dump(new_calib, f)
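
As a quick sanity check (not part of the script above), you can verify that each rotated video's frame size now matches the size stored in the rotated calibration:

import cv2
import toml

calib = toml.load('../calibration/config.toml')
for vid, cam in zip(['port_1.mp4', 'port_2.mp4', 'port_3.mp4'], ['cam_1', 'cam_2', 'cam_3']):
    cap = cv2.VideoCapture(vid)
    w, h = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    cap.release()
    assert calib[cam]['size'] == [w, h], f'{cam}: calib size {calib[cam]["size"]} != video size {[w, h]}'
print('Calibration sizes match the rotated videos.')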

@davidpagnon
Collaborator

davidpagnon commented Oct 7, 2024

I figured out where the problem was!
After all, it was not about handling Caliscope files, nor about distortion (although I should probably tackle that one day or another). The issue was in the script for rotating the calibration file; more specifically, in the optical center K[0,2], K[1,2].

I edited my above answer with the correction.

EDIT: re-edited it now.

@JustHere27
Author

I ran the script you gave, and it correctly oriented the videos.
However, I still ran into an error:

---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
Cell In[3], line 4
      2 Pose2Sim.poseEstimation()
      3 Pose2Sim.synchronization()
----> 4 Pose2Sim.triangulation()
      5 Pose2Sim.filtering()
      6 Pose2Sim.markerAugmentation()

File ~\miniconda3\envs\Pose2Sim\lib\site-packages\Pose2Sim\Pose2Sim.py:375, in triangulation(config)
    372 logging.info(f"Project directory: {project_dir}")
    373 logging.info("---------------------------------------------------------------------\n")
--> 375 triangulate_all(config_dict)
    377 end = time.time()
    378 elapsed = end-start

File ~\miniconda3\envs\Pose2Sim\lib\site-packages\Pose2Sim\triangulation.py:900, in triangulate_all(config_dict)
    897 nb_persons_to_detect = len(Q_tot)
    899 if nb_persons_to_detect ==0:
--> 900     raise Exception('No persons have been triangulated. Please check your calibration and your synchronization, or the triangulation parameters in Config.toml.')
    902 # IDs of excluded cameras
    903 # id_excluded_cams_tot = [np.concatenate([id_excluded_cams_tot[f][k] for f in range(frames_nb)]) for k in range(keypoints_nb)]
    904 id_excluded_cams_tot = [np.hstack(np.hstack(np.array(id_excluded_cams_tot[n]))) for n in range(nb_persons_to_detect)]

Exception: No persons have been triangulated. Please check your calibration and your synchronization, or the triangulation parameters in Config.toml.

@davidpagnon
Collaborator

Hi, sorry again, there was a problem in the script when saving the file. I hope it works on your end now!

This is what I get:
[image]

@JustHere27
Author

It appears we messed up setting the origin point, but other than that it seems to work.

screen-capture-pose2sim-test.mp4

Thanks again for all the help.

@davidpagnon
Collaborator

That's interesting 😁 Out of curiosity, do you use the Rajagopal model rather than the Pose2Sim one? If so, what is the reason? (just in case there is an issue I'm not aware of)

You're welcome!

@JustHere27
Author

I am not sure I understand what you mean; however, if you are referring to pose_model, I left it at the default.

@davidpagnon
Collaborator

Okay, sorry, I thought the spine looked stiff, but I must have been mistaken.

@davidpagnon davidpagnon changed the title Error when running analysis with caliscope sample data Caliscope and rotating calibration file Oct 11, 2024