Dear CEBRA community,
I am taking my first steps in the ML/AI world, so please forgive any misunderstanding of the functioning and underlying principles. I would love to have any input from you on this.
I have a dataset containing neural and kinematic features from multiple experimental runs (55). I don't want to concatenate all the runs since, as mentioned in #49, this would artificially create weird junctions at the boundaries. So I am trying the multi-session training strategy.
At first, just to get the pipeline up and running, I used the whole dataset; of course, I plan to do a proper train/test split.
Briefly, X is a list of 55 2D matrices containing my features, and y is a list of 1D arrays containing labels that classify different "states" of my subjects (possible values: 0, 1, 2).

Q1: does the pipeline so far make sense?
EDIT: I realized I am misusing the labels as continuous, since discrete labels are not supported by multi-session models yet. I'll be waiting for #135.
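For context, this is roughly what my pipeline looks like at the moment (a minimal sketch; the hyperparameter values are placeholders rather than anything tuned):

```python
import numpy as np
import cebra

# X: list of 55 arrays, each of shape (n_timesteps_i, n_features_i)
# y: list of 55 arrays, each of shape (n_timesteps_i,) with values in {0, 1, 2}

multi_model = cebra.CEBRA(
    model_architecture="offset10-model",
    output_dimension=3,     # placeholder values, not tuned
    batch_size=512,
    max_iterations=5000,
)

# Passing lists of arrays triggers multi-session training.
# The labels are cast to float, i.e. treated as continuous auxiliary
# variables -- the misuse mentioned in the EDIT above.
multi_model.fit(X, [yi.astype(float) for yi in y])

# One embedding per session; session_id selects that session's encoder.
embeddings = [
    multi_model.transform(X[i], session_id=i) for i in range(len(X))
]
```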
My next step is to train the model only on a subset of sessions and see if I can decode the states of unseen sessions.
However, since I am using the multi-session model, I have multiple embeddings (one per session), and I am unsure how (if at all) to combine them to train the decoder.
I know this might be more of a basic question, not strictly related to CEBRA, but I would really appreciate any input and feedback, both on my reasoning and on the practical steps to achieve what I have in mind.
Q2: is it possible to train a decoder with the embeddings from a multi-session model?
EDIT: I realized the Demo_Allen notebook does something similar enough; I will try to use that.
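In the meantime, this is a rough sketch of what I have in mind for Q2, using a plain scikit-learn classifier on the concatenated embeddings. The train/held-out split of the 55 sessions and the n_neighbors value are made up, and reusing a training session's encoder on a held-out session assumes the feature dimensionality matches, which is exactly the part I am unsure about:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical split of the 55 sessions into train and held-out sets.
train_ids = list(range(40))
test_ids = list(range(40, 55))

# Concatenate the per-session embeddings and labels so a single decoder
# can be trained across all training sessions.
Z_train = np.concatenate(
    [multi_model.transform(X[i], session_id=i) for i in train_ids]
)
y_train = np.concatenate([y[i] for i in train_ids]).astype(int)

decoder = KNeighborsClassifier(n_neighbors=25)  # placeholder value
decoder.fit(Z_train, y_train)

# For a held-out session I naively reuse the encoder of one training session;
# this only works if the held-out session has the same number of features,
# and whether it is a sensible thing to do is exactly my question.
for i in test_ids:
    Z_test = multi_model.transform(X[i], session_id=train_ids[0])
    accuracy = decoder.score(Z_test, y[i].astype(int))
    print(f"session {i}: decoding accuracy = {accuracy:.3f}")
```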
Thanks a lot for your help! And thanks a lot for this tool, I'm having a lot of fun using it!