Frequently asked questions (FAQ)
- Should I use `normcorre` or `normcorre_batch`?

The two functions give (almost) identical results. If you have access to the Parallel Computing Toolbox, `normcorre_batch` is much faster since it aligns different frames in parallel. However, if the FOV is very large (e.g., during volumetric 3D imaging), `normcorre_batch` can lead to slowdowns and memory issues since it tries to load too much data into memory at once. In this case, use of the plain `normcorre` function is recommended.
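As a minimal sketch (assuming the NoRMCorre toolbox is on the MATLAB path and that `Y` and `options` have already been set up), switching between the two functions is a one-line change:

```matlab
% Y: loaded movie (d1 x d2 x T) or a path to a file; options: parameter struct.
% Serial version; lower memory footprint, suitable for very large FOVs.
[M1, shifts1, template1] = normcorre(Y, options);

% Parallel version; requires the Parallel Computing Toolbox and more RAM,
% since batches of frames are loaded and registered in parallel.
[M2, shifts2, template2] = normcorre_batch(Y, options);
```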
- How should I choose the patch size (`grid_size`) and related parameters?

That depends a lot on the properties of the dataset, e.g., the existence of non-rigid motion, the expression level, the SNR, etc. Smaller patches are preferable when there is a lot of non-rigid motion, but they can become less robust if the expression level is low and/or the SNR is poor. Empirically, for a standard two-photon experiment with a FOV of 512 x 512 pixels, pixel size ~1μm x 1μm, imaged at 30Hz, we have observed that the choice `grid_size = [128,128]`, `overlap_pre = [32,32]`, `mot_uf = 4`, `max_shift = [20,20]`, and `max_dev = [8,8]` typically gives good results, as in the sketch below.
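As a sketch, these values can be collected into an options struct with the toolbox's parameter constructor `NoRMCorreSetParms`; the dimensions `d1` and `d2` here assume the 512 x 512 FOV from the example above:

```matlab
% Non-rigid motion correction parameters for a 512 x 512, ~1μm/pixel, 30Hz dataset.
options = NoRMCorreSetParms(...
    'd1', 512, 'd2', 512, ...        % FOV dimensions in pixels
    'grid_size', [128,128], ...      % size of each patch
    'overlap_pre', [32,32], ...      % overlap between neighboring patches
    'mot_uf', 4, ...                 % upsampling factor for motion estimation
    'max_shift', [20,20], ...        % maximum allowed rigid shift
    'max_dev', [8,8]);               % maximum deviation of patch shifts from rigid shift
```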
- My dataset does not fit in RAM. What should I do?

Instead of passing an already loaded dataset as the first input `Y` to `normcorre` or `normcorre_batch`, `Y` can simply be a pointer to where the file is located (e.g., `Y = 'big_file.tif'`). Supported file types include `.tif`, `.h5`, and memory-mapped `.mat` files. However, make sure that when setting the `options` struct, `options.d1` and `options.d2` correspond to the actual dimensions of the FOV, since `size(Y,1)` and `size(Y,2)` cannot be used to infer them when `Y` is a string.
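A minimal sketch, assuming the movie is stored as a multi-page TIFF (`'big_file.tif'` is a hypothetical filename); MATLAB's `imfinfo` is used here only to read the FOV dimensions from the header without loading any frames:

```matlab
Y = 'big_file.tif';                  % pointer to the file, not the data itself

% Read the FOV dimensions from the TIFF header without loading the movie.
info = imfinfo(Y);
options = NoRMCorreSetParms(...
    'd1', info(1).Height, ...        % number of rows in the FOV
    'd2', info(1).Width);            % number of columns in the FOV

[M, shifts, template] = normcorre_batch(Y, options);
```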
Similarly, the registered dataset can and should be saved directly to the hard drive by modifying `options.output_type`. Supported file types include `.tif`, `.h5`, and memory-mapped `.mat` files. See also the motion-correction-on-large-datasets entry on the wiki.
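As a sketch of this setup (the exact accepted `output_type` strings and the `h5_filename` field name are assumptions based on the supported formats listed above; check `NoRMCorreSetParms.m` for the definitive names):

```matlab
% Write the registered movie directly to disk instead of returning it in RAM.
% NOTE: 'h5' as the output_type value and the 'h5_filename' field are
% assumptions; verify against NoRMCorreSetParms.m.
options = NoRMCorreSetParms(...
    'd1', 512, 'd2', 512, ...
    'output_type', 'h5', ...             % save the output as an HDF5 file
    'h5_filename', 'registered.h5');     % where the registered data is written

[~, shifts, template] = normcorre_batch('big_file.tif', options);
```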
- Can NoRMCorre be used with 1p (one-photon) data?

Yes. The simple approach of 1) high-pass spatial filtering the data, 2) estimating the motion on the high-pass filtered data, and 3) applying the estimated motion to the original data seems to work quite well. See the script `demo_1p.m` for details, and the sketch below.
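A hedged sketch of the three-step approach (`demo_1p.m` constructs its filter differently; `gSig = 7` and the plain Gaussian subtraction here are illustrative choices, not the demo's exact ones; `apply_shifts` is the toolbox function that applies precomputed shifts):

```matlab
% 1) High-pass filter each frame: subtract its Gaussian-smoothed (low-pass) version.
gSig = 7;                                 % illustrative smoothing width in pixels
Yf = zeros(size(Y), 'single');
for t = 1:size(Y,3)
    Yf(:,:,t) = single(Y(:,:,t)) - imgaussfilt(single(Y(:,:,t)), gSig);
end

% 2) Estimate motion on the high-pass filtered data...
[~, shifts, template] = normcorre_batch(Yf, options);

% 3) ...then apply the estimated shifts to the original data.
M = apply_shifts(Y, shifts, options);
```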
- Where can I ask questions or report bugs?

Please use the project's gitter channel for questions. If you believe you have found a bug, open an issue on GitHub.