This repository has been archived by the owner on May 4, 2023. It is now read-only.

Timestamp awareness of .mat files #210

Closed
AHEsmaeili opened this issue Mar 18, 2022 · 4 comments

Comments

@AHEsmaeili

AHEsmaeili commented Mar 18, 2022

Hi Fernando!

I recently came across wave_clus and want to use its automatic/batch sorting capabilities to process my single-channel .mat files. These files were converted from Alphalab .map files (since the latter format is currently not supported by the package) and contain continuous data.

Following the guidelines in the wiki, I made a .mat file with two variables: data (for the continuous data) and sr (for the sampling rate, 25 kHz).

Understandably, however, since there was no indication of the start and end times of the recorded data, the timestamps in the spike-sorting results do not align with the actual timestamps of the recordings.

I wanted to ask if there is a way to make the scripts aware of the actual timestamps of the continuous .mat files, either as an extra variable or file.

Thanks for your insight.

@ferchaure
Member

ferchaure commented Mar 19, 2022

I would say the easiest alternative is to save the time offset of sample zero and then add it back in your own processing after wave_clus. But if you want, you can add a way to read the .map files (or just a custom file type with a made-up extension); check one of the existing reader codes as a template. You only have to change the function that converts samples to milliseconds so that it adds the offset.
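The offset bookkeeping described above could look like the following sketch (a hypothetical helper, not part of wave_clus), assuming wave_clus reports spike times in ms relative to sample zero:

```python
# Sketch (hypothetical helper, not part of wave_clus): add the saved
# time offset of sample zero back onto wave_clus spike times.
def to_absolute_seconds(spike_times_ms, start_offset_s):
    """Convert spike times in ms (relative to sample zero) to absolute seconds."""
    return [start_offset_s + t / 1000.0 for t in spike_times_ms]

# e.g. with a recording whose first sample is at 3305.35936 s:
print(to_absolute_seconds([0.0, 40.0, 1000.0], 3305.35936))
```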

@AHEsmaeili
Author

Thanks for your response Fernando! Will try that asap.

Two more questions:

  1. The data for each session are divided into separate files. Given the way wave_clus handles sorting, would the sorting results still be reliable if I zero- (or mean-) pad the time gaps between the experiment blocks in each session?

  2. The timestamps that wave_clus calculates are three orders of magnitude larger than those in the raw data. I was wondering whether I'm setting the sampling rate in the test file (25000 Hz) incorrectly, or whether wave_clus generates spike times on the ms scale automatically (and whether that could be changed via a parameter).

I've attached a sample file that I use for testing.

Start timestamp: 3305.35936 (seconds)
End timestamp: 3528.76032 (seconds)
testFile.zip

@ferchaure
Member

  1. Wave_clus doesn't use the timing for clustering, so it's possible to sort a recording concatenated in that way (just take care when mapping the output timestamps back to the real timestamps). Be careful: if you have a moving animal or long periods between sessions, the waveforms could change a lot and you could get more than one cluster per neuron.
  2. The times in wave_clus are in ms; it's a design choice.
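This also explains the three-orders-of-magnitude observation above: second-based raw timestamps versus ms-based wave_clus times differ by exactly a factor of 1000. A sketch of the conversion (not the actual wave_clus code):

```python
# Sketch of the sample-to-ms conversion (not the actual wave_clus code):
# with sr = 25000 Hz, a spike at sample index n falls at n / sr seconds,
# which on the ms scale becomes n / sr * 1000 -- hence the factor of
# 1000 relative to second-based timestamps.
def sample_to_ms(n, sr):
    return n / sr * 1000.0

print(sample_to_ms(25000, 25000))  # one second of samples -> 1000.0 ms
```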

@AHEsmaeili
Author

AHEsmaeili commented Mar 21, 2022

That's great!

The time between the experiment blocks is on the scale of ~5 minutes and the animal was head-fixed, but I will take care of the timestamp conversion after concatenating (admittedly an intricate procedure).
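The conversion after concatenation could be sketched like this (a hypothetical helper, not part of wave_clus; assumes spike times were already converted from ms to seconds, and that each block's absolute start time and duration were recorded before concatenating):

```python
# Sketch (hypothetical helper, not part of wave_clus): map a spike time
# in the concatenated recording back to absolute time, given each
# block's absolute start time and duration in seconds.
def unconcatenate(spike_s, blocks):
    """blocks: list of (absolute_start_s, duration_s) in concatenation order."""
    elapsed = 0.0
    for start_s, dur_s in blocks:
        if spike_s < elapsed + dur_s:
            return start_s + (spike_s - elapsed)
        elapsed += dur_s
    raise ValueError("spike time beyond the concatenated recording")

# Example: the test file's block (3305.35936-3528.76032 s) followed by a
# hypothetical second block starting ~5 minutes later:
blocks = [(3305.35936, 223.40096), (3832.0, 200.0)]
print(unconcatenate(0.0, blocks))  # start of the first block
```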

Many thanks for your insight, and for this great package Fernando.
