NOTE: this version of Combinato has been edited from the main branch.
Edits / updates include:
- loading of HDF5 files now requires a sampling-rate argument
- block-size extraction now assumes a 30 kHz sampling rate
- a change to the files that the GUI can see / load (those including `chan`)
These changes are designed to make Combinato easier to use in the HSNPipeline, including with files at different sampling rates (e.g. BlackRock files sampled at 30 kHz, as compared to the Neuralynx files at 32 kHz that Combinato assumes).
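As a rough illustration of the first change, a loader that takes an explicit sampling rate might look like the sketch below. The function name `load_h5_data` and the dataset key `'data'` are hypothetical placeholders, not Combinato's actual API; the point is only that the rate is passed in rather than assumed to be 32 kHz.

```python
import h5py


def load_h5_data(filename, sampling_rate):
    """Load a continuous recording from an HDF5 file.

    `sampling_rate` (in Hz) must be supplied explicitly, so that
    files recorded at e.g. 30 kHz (BlackRock) or 32 kHz (Neuralynx)
    are both handled correctly.
    """
    with h5py.File(filename, 'r') as f:
        data = f['data'][:]  # hypothetical dataset name
    duration_s = data.shape[0] / sampling_rate
    return data, duration_s


# usage sketch:
# data, duration = load_h5_data('chan01.h5', sampling_rate=30000)
```

Requiring the rate as an argument keeps downstream timing computations (spike times, block sizes) correct regardless of the recording system.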
Combinato Spike Sorting is software for spike extraction, automatic spike sorting, manual improvement of sorting, artifact rejection, and visualization of continuous recordings and spikes. It offers a toolchain that transforms raw data into single- and multi-unit spike trains. The software is largely modular, so it is useful even if you are interested only in spike extraction or only in spike sorting.
Combinato Spike Sorting works very well with large raw data files (tested with 100-channel, 15-hour recordings, i.e. > 300 GB of raw data). Most parts make use of multiprocessing and scale well with tens of CPUs.
Combinato is a collection of a few command-line tools and two GUIs, written in Python and depending on a few standard modules. It is being developed mostly for Linux, but it works on Windows and OS X, too.
The documentation of Combinato is maintained as a Wiki.
- Installation on Linux (recommended)
- Installation on Windows
- Installation on OS X
Please walk through our instructive Tutorial.
When using Combinato in your work, please cite this paper:
Johannes Niediek, Jan Boström, Christian E. Elger, Florian Mormann. "Reliable Analysis of Single-Unit Recordings from the Human Brain under Noisy Conditions: Tracking Neurons over Hours". PLOS ONE 11 (12): e0166598. 2016. doi:10.1371/journal.pone.0166598.
Please feel free to use the GitHub infrastructure for questions, bug reports, feature requests, etc.
Johannes Niediek, 2016-2023, jonied@posteo.de