The software in this repository processes human-motion measurements (linear acceleration and angular velocity) collected by a group of wireless IMU sensors and reproduces the recorded motion as a 3-D simulation of a humanoid model in RViz on the ROS platform.
This software grew out of an idea to build a machine-learning model that can help diagnose Cerebral Palsy by measuring a subject's motion with wearable IMU sensors capable of capturing linear acceleration and angular velocity along three orthogonal axes. Because Cerebral Palsy is a disorder that impairs motor control, sensors attached to the subject's limbs must be able to capture the characteristics of the motion faithfully. In line with that intention, a simulation that reproduces the recorded motion of the human body in a virtual environment serves as a verification tool for the validity of the measurements. It can further be used to convert motion expressed as linear acceleration and angular velocity into characteristic patterns of motion that contribute to the diagnosis of Cerebral Palsy.
- Dead Reckoning
Estimating the position and orientation of a sensor in 3-D space from inertial measurements is known as dead reckoning, and that is essentially what this software does to reproduce human motion in the simulation environment: it must update each sensor's position and orientation estimate by processing the linear acceleration and angular velocity. One can find more information about dead reckoning in this wiki page.
- Robot Operating System (ROS)
ROS is a software framework that, at its core, offers a message-passing interface for inter-process communication and is commonly referred to as middleware. It is a natural platform choice for this software, which needs to process several data streams in parallel and consolidate the updates into a visual format. One can find more information about ROS here.
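As an illustration of the message-passing model only (not part of this package), a minimal rospy publisher might look like the sketch below; the node name, topic name, rate, and message contents are placeholders.

    #!/usr/bin/env python
    # Minimal rospy publisher illustrating ROS message passing between nodes.
    # Node name, topic name, and message contents here are hypothetical.
    import rospy
    from geometry_msgs.msg import Vector3

    def main():
        rospy.init_node('acc_publisher')
        pub = rospy.Publisher('/sensor_acc', Vector3, queue_size=10)
        rate = rospy.Rate(50)  # publish at 50 Hz
        while not rospy.is_shutdown():
            pub.publish(Vector3(x=0.0, y=0.0, z=-9.81))  # placeholder acceleration sample
            rate.sleep()

    if __name__ == '__main__':
        main()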
- Simultaneous Orthogonal Rotation Angle
Estimating the sensor's orientation at the next time step from the angular velocity measured at the current time step is a 3-D rotation problem; applying sequential Euler rotations about each axis introduces systematic error because finite rotations are non-commutative. The rotations about the three axes therefore have to be applied simultaneously. One can find more information about the Simultaneous Orthogonal Rotation Angle here.
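A minimal sketch of the idea, assuming NumPy, an angular velocity vector omega in rad/s, and a sample period dt in seconds: the three angular rates are combined into one rotation of angle |omega|*dt about the axis omega/|omega| (Rodrigues' formula), rather than three sequential Euler rotations.

    import numpy as np

    def sora_rotation(omega, dt):
        """Rotation matrix for angular velocity omega (rad/s) applied over dt seconds,
        treating the three axis rotations as one simultaneous rotation."""
        phi = np.linalg.norm(omega) * dt                 # total rotation angle
        if phi < 1e-12:
            return np.eye(3)                             # negligible rotation
        k = np.asarray(omega, dtype=float) / np.linalg.norm(omega)
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])               # skew-symmetric cross-product matrix
        # Rodrigues' formula: R = I + sin(phi) K + (1 - cos(phi)) K^2
        return np.eye(3) + np.sin(phi) * K + (1.0 - np.cos(phi)) * np.dot(K, K)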
The sensor is an IMU encapsulated inside a flexible silicone enclosure. It is capable of wireless charging and data transfer via Bluetooth.
- IMU sensor
Bosch Sensortec BMI160 IMU
Data Sheet
- Device
Sensor encapsulation and the development of the sensor's mechanical and electrical components were performed by the Rogers Research Group at Northwestern University.
- Python 2.7.15+
- ROS Melodic
root: ~/catkin_ws/src/cp_simulator/
launch/ : contains launch file that initiates the nodes
rviz/ : contains .rviz file with configured setting
src/ : contains python node scripts
urdf/ : contains Unified Robot Description Format (URDF) files
- upper_body_transform.py (node):
takes in position and SO(3) rotation matrix components and broadcasts them to the tf topic as quaternions
Input: .csv file (position, SO(3) matrix)
Output: ROS tf msg
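A minimal sketch of that broadcasting step, assuming ROS Melodic's tf package, a 3x3 rotation matrix RR, and a position already read from the .csv file; the frame names are placeholders rather than the ones used by the node.

    import numpy as np
    import rospy
    import tf

    def broadcast_pose(br, position, RR, child='sensor_frame', parent='world'):
        """Convert a 3x3 rotation matrix to a quaternion and broadcast it on tf."""
        T = np.eye(4)
        T[:3, :3] = RR                                   # embed rotation in a 4x4 homogeneous matrix
        quat = tf.transformations.quaternion_from_matrix(T)
        br.sendTransform(tuple(position), tuple(quat), rospy.Time.now(), child, parent)

    # usage inside a node:
    # rospy.init_node('upper_body_transform')
    # br = tf.TransformBroadcaster()
    # broadcast_pose(br, [0.0, 0.0, 1.0], np.eye(3))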
- data_parser.py (helper function script):
imports the sensor measurements in .tsv format and parses them into individual containers of 3-axis acceleration and 3-axis angular velocity
Input: filename (.tsv)
Output: 6 lists (3 acc, 3 ang_vel)
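A minimal sketch of the parsing step, assuming the six columns are ordered as ax, ay, az, wx, wy, wz; the actual column order and any header rows in the recorded files may differ.

    import csv

    def parse_tsv(filename):
        """Read a tab-separated file with 6 columns into 6 lists:
        3-axis acceleration followed by 3-axis angular velocity."""
        acc_x, acc_y, acc_z = [], [], []
        ang_x, ang_y, ang_z = [], [], []
        with open(filename) as f:
            for row in csv.reader(f, delimiter='\t'):
                if len(row) < 6:
                    continue                             # skip malformed or empty lines
                ax, ay, az, wx, wy, wz = [float(v) for v in row[:6]]
                acc_x.append(ax); acc_y.append(ay); acc_z.append(az)
                ang_x.append(wx); ang_y.append(wy); ang_z.append(wz)
        return acc_x, acc_y, acc_z, ang_x, ang_y, ang_z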
- computation.py (helper function script):
computes the position and orientation of each sensor frame in the 3-D environment and saves them to a .csv file
Input: 6 lists (3 acc, 3 ang_vel)
Output: position, SO(3) matrix
From the initial sensor acceleration in the sensor frame ({b}) and gravity represented in the world frame ({w}) as (0, 0, -|gravity|), get RR (rotation matrix).
Sensor frame {b} (sensor coordinate) in {w} <- RR (dot) identity matrix
While not done:
    rotation of frame = Simultaneous Orthogonal Rotation Angle(angular velocity)
    updated sensor coordinate = sensor coordinate (dot) rotation of frame
    updated RR = RR (dot) rotation of frame
    get quaternion from SO(3) matrix RR
    broadcast sensor coordinate in {w}, as quaternion
    sensor position updated from (sensor coordinate, model physical constraint)
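A minimal sketch of that loop, assuming NumPy/SciPy, a fixed sample period dt, angular-velocity samples already parsed into 3-vectors, and an initial orientation RR0 obtained from the gravity-alignment step; the matrix exponential of the skew-symmetric angular-velocity matrix is equivalent to the Rodrigues-formula rotation sketched in the background section.

    import numpy as np
    from scipy.linalg import expm

    def skew(v):
        # skew-symmetric cross-product matrix of a 3-vector
        return np.array([[0.0, -v[2], v[1]],
                         [v[2], 0.0, -v[0]],
                         [-v[1], v[0], 0.0]])

    def dead_reckon_orientation(ang_vel_samples, dt, RR0):
        """Update the sensor orientation RR in the world frame {w}, applying one
        simultaneous rotation per angular-velocity measurement."""
        RR = np.asarray(RR0, dtype=float)   # initial rotation {b} -> {w}, e.g. derived from gravity
        orientations = [RR]
        for omega in ang_vel_samples:
            dR = expm(skew(omega) * dt)     # rotation of frame over one time step
            RR = np.dot(RR, dR)             # updated RR = RR (dot) rotation of frame
            orientations.append(RR)
            # the sensor position then follows from RR and the model's physical
            # constraints (URDF), rather than from double integration of acceleration
        return orientations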
- Run
python computation.py
with tab-separated-value files containing 6 columns of linear acceleration and angular velocity.
- ROS launch
roslaunch cp_simulator upper_body_cp.launch
to initiate the RViz simulator
Refer to the Demo section for a walkthrough of launching the demo files.
During the development of the software, a number of technical challenges were encountered and addressed.
It is a well-known issue that dead reckoning, i.e. estimating the position and orientation of an object by integrating linear acceleration and angular velocity, is inherently prone to error accumulation. There are different approaches to address this, but they tend to involve external devices such as a magnetometer or GPS. Within the scope of this application, however, the position and orientation can be estimated by constraining the robot model to its designed geometry. Instead of updating each sensor's position in the 3-D environment by double integration of linear acceleration, only the orientation of the sensor is computed by single integration of angular velocity. The body model, constrained by the URDF description, admits only one position for a given set of orientations.
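For instance, under the rigid-link constraint the elbow position follows directly from the shoulder position, the upper-arm orientation, and the link length defined in the URDF; a minimal sketch with a placeholder link length:

    import numpy as np

    def elbow_position(shoulder_pos, R_upper_arm, upper_arm_length=0.3):
        """Elbow position implied by the upper-arm orientation alone.
        The 0.3 m link length is a placeholder for the value in the URDF."""
        link = np.array([0.0, 0.0, -upper_arm_length])   # link vector in the upper-arm frame
        return np.asarray(shoulder_pos) + np.dot(R_upper_arm, link)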
IMU sensors carry DC bias and noise. Both were observed by plotting trials with stationary sensor measurements, as shown in the figure below. Although the DC bias can be balanced manually by subtracting the individual bias values element-wise from the measurement data, it can also be addressed by performing sensor calibration at the firmware level.
The bias appears as the offset of each axis's solid line from zero on the y-axis, and the noise appears as high-frequency variation of the values.
Shown above are FFT plots of measurements from a single stationary sensor and a single moving sensor. The first plot shows, as confirmed in the previous section, the bias appearing at 0 Hz (DC bias). The second plot captures the frequency range of human motion. Detailed research in other scholarly articles supports the observation that the relevant frequency range of human motion lies between 0 and 20 Hz. The author therefore designed and integrated a low-pass filter that removes noise above 20 Hz. The frequency response of the designed low-pass filter is shown below on the left, and the plot on the right depicts the signal processed with this filter.
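A minimal sketch of such a filter, assuming SciPy and a placeholder sampling rate; the filter order, sampling rate, and exact cutoff used by the author may differ.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def low_pass(signal, fs=100.0, cutoff=20.0, order=4):
        """Zero-phase Butterworth low-pass filter removing content above cutoff (Hz).
        fs is the sensor sampling rate in Hz; 100 Hz here is a placeholder."""
        b, a = butter(order, cutoff / (0.5 * fs), btype='low')
        return filtfilt(b, a, np.asarray(signal, dtype=float))

    # DC bias can be removed by subtracting the mean of a stationary segment, e.g.:
    # unbiased = np.asarray(acc_x) - np.mean(acc_x[:200])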
Refer to ROS.org to build the package using catkin before attempting to run the demo. The demo requires comma-separated-value files, produced by computation.py, in the directory ~/catkin_ws/src/cp_simulator/demo/.
source ~/catkin_ws/devel/setup.bash
cd ~/catkin_ws/src/cp_simulator/
roslaunch cp_simulator upper_body_cp.launch
Shown below is the demo of the proof of concept with synthetic data. The data were created to show that the algorithm can compute the pose and orientation updates in the world frame, {w}, by iterating over data points that represent linear acceleration and angular velocity in a sensor's body frame, {b}, using the previously introduced algorithm. The synthetic data are shown in the picture below, and the video demo can be played by clicking the image at the bottom.
Shown below is the demo of a data collection performed with two sensors as an initial prototype. A person in the demo is wearing two wireless IMU sensors, one on the forearm and the other on the upper arm. One can click the image below to play the video.
Shown below is the demo of a data collection performed with four sensors attached to both arms. The person in the demo is wearing two wireless IMU sensors on each arm, one on the forearm and the other on the upper arm. One can click the image below to play the video.
Shown below is the demo of a data collection performed with eight sensors attached to the body. The person in the demo is wearing one sensor on each limb on both sides: upper arm, forearm, thigh, and shin. The subject in the demo is walking. One can click the image below to play the video.
Shown below is the demo of a data collection performed with eight sensors attached to the body. The person in the demo is wearing one sensor on each limb on both sides: upper arm, forearm, thigh, and shin. One can click the image below to play the video.
@misc{human_motion_simulator_with_wearable_imu_sensor,
author = {James Sohn},
title = {{Human Motion Simulator with Wearable IMU Sensor}},
month = dec,
year = 2019,
doi = {10.5281/zenodo.3690141},
version = {1.0.0},
publisher = {Zenodo},
url = {https://doi.org/10.5281/zenodo.3690141}
}