Thanks again for providing this great tool and also keeping the discussion here on GitHub! Based on the results presented in the paper, but also on our own results, I wondered why CEBRA embeddings usually lie on a sphere? This is purely out of curiosity. Is it related to the InfoNCE loss function? At first I thought it was related to the "circular" task presented in Figure 1, where the rat runs back and forth on a track. But the other embeddings also seem to lie on a sphere.
Replies: 1 comment 1 reply
@timonmerk, this is a choice you can influence by picking the loss function and model properties. The `offset10-model` with the `cosine` similarity metric will learn an embedding on the hypersphere, while the `offset10-model-mse` with `euclidean` similarity gives you an embedding in Euclidean space (cf. e.g. the behavior embedding in Figure 2 of the paper, which is trained with this loss). It depends a bit on which latent space is more suitable for your application, but you are free to use either in CEBRA.