
TS4231DataFrame is too low level to be useful #152

Closed
jonnew opened this issue Jul 24, 2024 · 5 comments
jonnew (Member) commented Jul 24, 2024

TS4231Data produces a stream of TS4231DataFrame objects

https://github.com/neurogears/onix-refactor/blob/f42fb50cfc9d845e806eecc74564b9b79c453418/OpenEphys.Onix/OpenEphys.Onix/TS4231DataFrame.cs#L5

These require a lot of post-processing to be useful. We need to include functionality for converting these data into 3D positions. The question is where to do that. Do people really care about the raw data format of this device, or do they want 3D position data directly? I would strongly argue that the latter is the case, and that internal buffering and conversion should be performed so that this node produces pre-calculated 3D positions rather than the current form of TS4231DataFrame.

If we want to do this conversion internally, then the relevant transform is provided here: https://github.com/open-ephys/Bonsai.ONIX/blob/main/Bonsai.ONIX/TS4231V1FrameToPosition.cs
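For orientation only, below is a minimal sketch of the core geometric step such a transform has to perform: triangulating a position as the midpoint of the shortest segment between two base-station rays. This is the textbook closest-point-between-two-rays construction, not the linked implementation, and turning lighthouse sweep timing into ray origins and directions is omitted because it depends on the base-station protocol.

using System.Numerics;

public static class RayTriangulation
{
    // p1/d1 and p2/d2 are the origin and unit direction of the rays cast from
    // two base stations toward the sensor, expressed in a common world frame.
    public static Vector3 Triangulate(Vector3 p1, Vector3 d1, Vector3 p2, Vector3 d2)
    {
        var w0 = p1 - p2;
        float a = Vector3.Dot(d1, d1);
        float b = Vector3.Dot(d1, d2);
        float c = Vector3.Dot(d2, d2);
        float d = Vector3.Dot(d1, w0);
        float e = Vector3.Dot(d2, w0);
        float denom = a * c - b * b; // approaches zero when the rays are parallel

        float t1 = (b * e - c * d) / denom;
        float t2 = (a * e - b * d) / denom;

        var q1 = p1 + t1 * d1; // closest point on ray 1
        var q2 = p2 + t2 * d2; // closest point on ray 2
        return (q1 + q2) * 0.5f;
    }
}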

I would suggest the modified DataFrame should look something like:

public class TS4231DataFrame
{
    public ulong Clock { get; }

    public ulong HubClock { get; }

    public int SensorIndex { get; }

    public Vector3 Position { get; }
}

Since each receiver can produce a 3D position, another question is whether TS4231Data should produce data from only one sensor (in the way that NeuropixelsData nodes produce data from a single probe), or whether a frame should be produced whenever enough data has been collected from a given sensor to calculate a position, with the sensor index included in the frame to indicate which photodiode it corresponds to. My vote is for the latter (see the sketch below).
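As a minimal sketch of why including the sensor index in a combined stream is sufficient: a downstream observer can always recover per-sensor streams with a single GroupBy. The frame type here is a stand-in for the proposal above, not an existing class.

using System;
using System.Numerics;
using System.Reactive.Linq;

// Stand-in for the proposed frame type above.
public class TS4231DataFrame
{
    public ulong Clock { get; init; }
    public ulong HubClock { get; init; }
    public int SensorIndex { get; init; }
    public Vector3 Position { get; init; }
}

public static class SensorDemux
{
    // Split the combined stream into one sub-stream per photodiode.
    public static IObservable<IGroupedObservable<int, TS4231DataFrame>> PerSensor(
        IObservable<TS4231DataFrame> source)
    {
        return source.GroupBy(frame => frame.SensorIndex);
    }
}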

jonnew added the proposal (Request for a new feature) label Jul 24, 2024
jonnew added this to the 0.1.0 milestone Jul 24, 2024
jonnew added a commit that referenced this issue Jul 24, 2024
- Initial docs for all headstage-64 associated classes except for
TS4231DataFrame because I have a feeling that will change per #152
glopesdev (Collaborator) commented Jul 24, 2024

While I agree that doing basic 3D reconstruction over pulses detected from a single sensor is definitely useful and should be included in the library, I don't think baking this directly into the TS4231Data node is the right approach.

The reason is that there are multiple possible approaches to 3D reconstruction and pose estimation using these sensors, based either on fusion of multiple sensors, on multiple base stations, or even on model-based approaches that combine TS sensor data with the IMU data to reconstruct both position and orientation. By restricting access to the raw sensor data we would be excluding these possibilities a priori, and also making it harder to collect raw data that could be analyzed offline to develop better methods.

My preferred design here would be to introduce a new downstream node that takes care of both queueing and reconstructing the 3D poses, with its own data frame emitting values at a lower rate. This is similar to the post-processing nodes that handle the BNO data.

Finally, this would also make it easier for us to have multiple versions of the downstream node without breaking the TS4231Data stream.
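A minimal Rx sketch of this downstream-node idea, assuming a raw frame that carries a sensor index and pulse measurements: buffer pulses per sensor and emit positions at a lower rate through a pluggable solver. The type and member names (RawPulseFrame, PulseWidthUsec, solvePosition) are placeholders, not the library's API.

using System;
using System.Collections.Generic;
using System.Numerics;
using System.Reactive.Linq;

// Placeholder raw frame: the real TS4231 frame fields may differ.
public record RawPulseFrame(int SensorIndex, double PulseWidthUsec);
public record PositionFrame(int SensorIndex, Vector3 Position);

public static class PositionReconstruction
{
    // Collect a fixed number of pulses per sensor, then hand the window to a
    // user-supplied solver that turns it into one 3D position.
    public static IObservable<PositionFrame> Process(
        IObservable<RawPulseFrame> source,
        int pulsesPerSolve,
        Func<IList<RawPulseFrame>, Vector3> solvePosition)
    {
        return source
            .GroupBy(frame => frame.SensorIndex)
            .SelectMany(group => group
                .Buffer(pulsesPerSolve)
                .Select(window => new PositionFrame(group.Key, solvePosition(window))));
    }
}

Keeping the solver as a parameter mirrors the point above: the queueing node can stay stable while different reconstruction methods (geometric, sensor-fusion, model-based) are swapped in behind it.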

jonnew (Member, Author) commented Jul 25, 2024

Yes, these are good points.

To do this properly, we should then qualify all TS4231* nodes with ViveV1, V1, or similar. The "raw" data type that TS4231s produce is not truly raw, but already conditioned by the FPGA with the expectation that Vive V1 lighthouses are being used. We have V2 variants of the FPGA core, but those are not present on e.g. headstage-64.

So, I think we should do three things:

  1. Change the type names of all existing TS4231* objects to include V1 or some other indicator that these are for capturing and representing V1 lighthouse data.

  2. Change the following line:

     public uint EnvelopeWidth { get; }
    

to

    public double EnvelopeV1WidthuSec { get; }

because any downstream processing node must have access to seconds rather than clock ticks to perform 3D reconstruction, regardless of the method. To get seconds, the hub-clock rate must be known, and that is much easier to do in nodes with easy access to DeviceInfo (see the conversion sketch after this list).

  3. Add another "Data" node, e.g. TS4231V1GeometricPositionData, that behaves similarly to the changes in this commit and sits next to the TS4231V1Data node, which emits "raw" data.
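A minimal sketch of the tick-to-microsecond conversion argued for in point 2, assuming the hub clock rate is available (e.g. via DeviceInfo, whose exact API is not shown in this thread):

public static class EnvelopeWidthConversion
{
    // Convert an envelope width measured in hub clock ticks to microseconds.
    public static double TicksToMicroseconds(uint envelopeWidthTicks, double hubClockHz)
    {
        return envelopeWidthTicks / hubClockHz * 1e6; // ticks / Hz = seconds; * 1e6 = microseconds
    }
}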

glopesdev (Collaborator) commented

@jonnew Sounds good, let's brainstorm tomorrow how much of this we want to include for next week. For now I am moving this to 0.2.0.

glopesdev modified the milestones: 0.1.0, 0.2.0 Jul 25, 2024
jonnew (Member, Author) commented Jul 25, 2024

I have a working implementation, but I found a hardware bug that needs to be fixed regardless of the way we do things, so I'm holding my commit until I work that out.

jonnew modified the milestones: 0.2.0, 0.1.0 Jul 25, 2024
jonnew added a commit that referenced this issue Jul 26, 2024
- Addresses #152
- Addresses feedback in #154
- Tested
- Provides two possible data sources for TS4231 lighthouse sensor
arrays.
- TS4231V1Data provides low-level sensor index, pulse type, and pulse
widths in microseconds that can be potentially combined with IMU data in
a predictive filter to improve 3D tracking
- TS4231GeometricPositionData provides naive geometric calculation of 3D
positions.
jonnew (Member, Author) commented Jul 26, 2024

Fixed in #166

jonnew closed this as completed Jul 26, 2024