TS4231DataFrame is too low level to be useful #152
Comments
- Initial docs for all headstage-64 associated classes except for TS4231DataFrame, because I have a feeling that will change per #152
While I agree that doing basic 3D reconstruction over pulses detected from a single sensor is definitely useful and should be included in the library, I don't think baking this directly into the TS4231Data node is the right approach. The reason is that there are multiple possible approaches to 3D reconstruction and pose estimation using these sensors, either based on fusion of multiple sensors, or multiple base stations, or even model-based approaches that combine TS sensor data with the IMU data to reconstruct both position and orientation. By restricting access to the raw sensor data we would be excluding these possibilities a priori, and also making it harder to collect raw data that could be analyzed offline to develop better methods. My preferred design here would be to introduce a new downstream node that takes care of both queueing and reconstructing the 3D poses, with its own data frame emitting values at a lower rate. This is similar to the post-processing nodes that handle the BNO data. Finally, this would also make it easier for us to have multiple versions of the downstream node without breaking the raw data source.
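To make the downstream-node idea concrete, here is a minimal Rx-style sketch. It is only an illustration under stated assumptions: the RawPulse type and its fields stand in for the real data frame, the pulsesPerEstimate grouping is invented for the example, and the triangulation step is a stub rather than the actual geometry.

```csharp
using System;
using System.Collections.Generic;
using System.Numerics;
using System.Reactive.Linq;

// Hypothetical raw pulse record standing in for the low-level frame; field names are assumptions.
public class RawPulse
{
    public ulong Clock;       // acquisition clock count
    public int SensorIndex;   // photodiode that detected the pulse
    public int PulseType;     // sweep/sync classification
    public double PulseWidth; // pulse width in microseconds
}

public static class LighthousePositionNode
{
    // Group pulses by sensor, buffer enough of them to estimate a position,
    // and emit one (sensor, position) pair per completed measurement cycle.
    // This is the "downstream node" pattern: raw data stays available upstream,
    // positions are produced at a lower rate downstream.
    public static IObservable<(int SensorIndex, Vector3 Position)> Process(
        IObservable<RawPulse> source, int pulsesPerEstimate)
    {
        return source
            .GroupBy(pulse => pulse.SensorIndex)
            .SelectMany(group => group
                .Buffer(pulsesPerEstimate)
                .Select(pulses => (group.Key, Triangulate(pulses))));
    }

    // Placeholder: the real geometric reconstruction from sweep timings would go here.
    static Vector3 Triangulate(IList<RawPulse> pulses)
    {
        return Vector3.Zero;
    }
}
```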
Yes, these are good points. To properly do this, we should then qualify all clock values with their units. So, I think we should do three things; one of them is to convert the hardware clock counts to seconds, because any downstream processing node must have access to seconds rather than clock ticks to perform 3D reconstruction.
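For concreteness, the conversion being asked for here is just a division by the acquisition clock rate. A minimal sketch, assuming a hypothetical hard-coded rate; in practice the rate would come from the device or context metadata:

```csharp
public static class ClockConversion
{
    // Assumed example acquisition clock rate, not necessarily the actual hardware rate.
    const double AcquisitionClockHz = 250e6;

    // Convert a raw hardware clock count into seconds.
    public static double ToSeconds(ulong clockTicks)
    {
        return clockTicks / AcquisitionClockHz;
    }
}
```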
@jonnew Sounds good, let's brainstorm tomorrow how much of this we want to include for next week. For now I am moving this to 0.2.0.
I have a working implementation, but I found a hardware bug that needs to be fixed regardless of the way we do things, so I held my commit until I work that out.
- Addresses #152
- Addresses feedback in #154
- Tested
- Provides two possible data sources for TS4231 lighthouse sensor arrays:
  - TS4231V1Data provides the low-level sensor index, pulse type, and pulse widths in microseconds, which can potentially be combined with IMU data in a predictive filter to improve 3D tracking.
  - TS4231GeometricPositionData provides a naive geometric calculation of 3D positions.
Fixed in #166
TS4231Data produces a stream of TS4231DataFrame objects: https://github.com/neurogears/onix-refactor/blob/f42fb50cfc9d845e806eecc74564b9b79c453418/OpenEphys.Onix/OpenEphys.Onix/TS4231DataFrame.cs#L5

These require a lot of post-processing to be useful. We need to include functionality for converting these data into 3D positions. The question is where to do that. Do people really care about the raw data format of this device, or do they want to get 3D position data directly? I would strongly argue that the latter is the case, and that internal buffering and conversion should be performed so that this node produces pre-calculated 3D positions rather than the current form of TS4231DataFrame.

If we want to do this conversion internally, then the relevant transform is provided here: https://github.com/open-ephys/Bonsai.ONIX/blob/main/Bonsai.ONIX/TS4231V1FrameToPosition.cs
I would suggest the modified DataFrame should look something like:
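One possible shape for such a frame, purely as an illustration: the class name, fields, and units below are assumptions for discussion, not the actual proposal.

```csharp
using System.Numerics;

// Hypothetical position-level frame; names and units are illustrative assumptions.
public class TS4231PositionDataFrame
{
    public ulong Clock { get; }       // acquisition clock count for the measurement
    public ulong HubClock { get; }    // headstage-local clock count
    public int SensorIndex { get; }   // photodiode the position was computed from
    public Vector3 Position { get; }  // reconstructed 3D position

    public TS4231PositionDataFrame(ulong clock, ulong hubClock, int sensorIndex, Vector3 position)
    {
        Clock = clock;
        HubClock = hubClock;
        SensorIndex = sensorIndex;
        Position = position;
    }
}
```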
Since each receiver can produce a 3D position, another question is whether TS4231Data should produce data from only one sensor (in the way that NeuropixelsData nodes produce data from a single probe), or whether a frame should be produced whenever enough data has been collected from a given sensor to calculate a position, with the sensor index included in the frame to indicate which photodiode it corresponds to. My vote is for the latter.