Merge pull request #91 from MouseLand/dev
New logo and bug fixes
Atika-Syeda authored Feb 22, 2023
2 parents af4564b + c525841 commit 5893145
Showing 15 changed files with 303 additions and 14 deletions.
2 changes: 2 additions & 0 deletions README.md
Original file line number Diff line number Diff line change
@@ -128,6 +128,8 @@ Facemap allows pupil tracking, blink tracking and running estimation, see more d
You can draw ROIs to compute the motion/movie SVD within the ROI, and/or compute the full video SVD by checking `multivideo`. Then check `motSVD` and/or `movSVD` and click `process`. The processed SVD `*_proc.npy` (and optionally `*_proc.mat`) file will be saved in the selected output folder.
For more details see [SVD python tutorial](docs/svd_python_tutorial.md) or [SVD MATLAB tutorial](docs/svd_matlab_tutorial.md).
([video](https://www.youtube.com/watch?v=Rq8fEQ-DOm4) with old install instructions)
<img src="figs/face_fast.gif" width="100%" alt="face gif">
6 changes: 4 additions & 2 deletions docs/index.rst
Original file line number Diff line number Diff line change
@@ -1,14 +1,16 @@
Facemap
===================================

This is an example file with default values.
Facemap is a framework for predicting neural activity from mouse orofacial movements. It includes a pose estimation model for tracking distinct keypoints on the mouse face, a neural network model for predicting neural activity from the pose estimates, and can also be used to compute the singular value decomposition (SVD) of behavioral videos.

.. toctree::
:maxdepth: 3
:caption: Basics:

installation
inputs
pose_tracking_gui_tutorial
roi_proc
outputs
neural_activity_prediction_tutorial


65 changes: 61 additions & 4 deletions docs/inputs.rst
Original file line number Diff line number Diff line change
@@ -1,5 +1,12 @@
Inputs
~~~~~~~~~~~~~~~~~~~
=============================

Facemap supports grayscale and RGB movies. The software can process multi-camera videos for pose tracking and SVD analysis.
Supported movie file extensions include:

'.mj2','.mp4','.mkv','.avi','.mpeg','.mpg','.asf'

Here are some `example movies <https://drive.google.com/open?id=1cRWCDl8jxWToz50dCX1Op-dHcAC-ttto>`__.

Processing multiple movies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -22,10 +29,60 @@ Note: if you have many simultaneous videos / overall pixels (e.g. 2000 x 2000) y

You will be able to see all simultaneously collected videos at once. However, you can only draw ROIs within ONE video. Only the "multivideo SVD" is computed over all videos.

.. figure:: https://github.com/MouseLand/facemap/blob/main/figs/multivideo_fast.gif?raw=true
:alt: example GUI with pupil, blink and motion SVD

Batch processing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Load a video or a set of videos and draw your ROIs and choose your processing settings.
Then click `save ROIs`. This will save a *_proc.npy file in the output folder.
Default output folder is the same folder as the video. Use file menu to change path of the output folder. The name of saved proc file will be listed below `process batch` (this button will also activate). You can then repeat this process: load the video(s), draw ROIs, choose settings, and click `save ROIs`. Then to process all the listed *_proc.npy files click `process batch`.
Load a video or a set of videos, draw your ROIs, and choose your processing settings, then click "save ROIs". This saves a `\*_proc.npy` file in the output folder. The default output folder is the same folder as the video; use the file menu to change the output folder path. The name of the saved proc file will be listed below "process batch" (this button will also activate). You can then repeat this process: load the video(s), draw ROIs, choose settings, and click "save ROIs". Finally, to process all the listed `\*_proc.npy` files, click "process batch".

Data acquisition info
~~~~~~~~~~~~~~~~~~~~~~~~~

IR ILLUMINATION
---------------------

For recording in darkness we use `IR
illumination <https://www.amazon.com/Logisaf-Invisible-Infrared-Security-Cameras/dp/B01MQW8K7Z/ref=sr_1_12?s=security-surveillance&ie=UTF8&qid=1505507302&sr=1-12&keywords=ir+light>`__
at 850nm, which works well with 2p imaging at 970nm and even 920nm.
Depending on your needs, you might want to choose a different
wavelength, which changes all the filters below as well. 950nm works
just as well, and probably so does 750nm, which is still outside the
visible range for rodents.

If you want to focus the illumination on the mouse eye or face, you will
need a different, more expensive system. Here is an example, courtesy of
Michael Krumin from the Carandini lab:
`driver <https://www.thorlabs.com/thorproduct.cfm?partnumber=LEDD1B>`__,
`power
supply <https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=1710&pn=KPS101#8865>`__,
`LED <https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=2692&pn=M850L3#4426>`__,
`lens <https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=259&pn=AC254-030-B#2231>`__,
and `lens
tube <https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=4109&pn=SM1V10#3389>`__,
and another `lens
tube <https://www.thorlabs.com/thorproduct.cfm?partnumber=SM1L10>`__.

CAMERAS
---------------------

We use `ptgrey
cameras <https://www.ptgrey.com/flea3-13-mp-mono-usb3-vision-vita-1300-camera>`__.
The software we use for simultaneous acquisition from multiple cameras
is `BIAS <http://public.iorodeo.com/notes/bias/>`__ software. A basic
lens that works for zoomed-out views is
`here <https://www.bhphotovideo.com/c/product/414195-REG/Tamron_12VM412ASIR_12VM412ASIR_1_2_4_12_F_1_2.html>`__.
To see the pupil well you might need a better zoom lens `10x
here <https://www.edmundoptics.com/imaging-lenses/zoom-lenses/10x-13-130mm-fl-c-mount-close-focus-zoom-lens/#specs>`__.

For 2p imaging, you’ll need a tighter filter around 850nm so you don’t
see the laser shining through the mouse’s eye/head, for example
`this <https://www.thorlabs.de/thorproduct.cfm?partnumber=FB850-40>`__.
Depending on your lenses you’ll need to figure out the right adapter(s)
for such a filter. For our 10x lens above, you might need all of these:
`adapter1 <https://www.edmundoptics.com/optics/optical-filters/optical-filter-accessories/M52-to-M46-Filter-Thread-Adapter/>`__,
`adapter2 <https://www.thorlabs.de/thorproduct.cfm?partnumber=SM2A53>`__,
`adapter3 <https://www.thorlabs.de/thorproduct.cfm?partnumber=SM2A6>`__,
`adapter4 <https://www.thorlabs.de/thorproduct.cfm?partnumber=SM1L03>`__.

4 changes: 2 additions & 2 deletions docs/neural_activity_prediction_tutorial.rst
Original file line number Diff line number Diff line change
@@ -8,7 +8,7 @@ To process neural activity prediction using pose estimates extracted
using the tracker:

1. Load or process keypoints (`see pose tracking
tutorial <docs/pose_tracking_gui_tutorial.md>`__).
tutorial <https://github.com/MouseLand/facemap/blob/main/docs/pose_tracking_gui_tutorial.md>`__).
2. Select ``Neural activity`` from the file menu, then ``Load neural data``.
3. Load neural activity data (2D-array stored in *.npy) and (optionally)
   timestamps for neural and behavioral data (1D-array stored in *.npy)
@@ -25,7 +25,7 @@ To process neural activity prediction using pose estimates extracted
using the tracker:

1. Load or process SVDs for the video. (`see SVD
tutorial <docs/svd_tutorial.md>`__).
tutorial <https://github.com/MouseLand/facemap/blob/main/docs/svd_python_tutorial.md>`__).
2. Follow steps 2-5 above.

Note: a linear model is used for prediction using SVDs.
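
For reference, a minimal sketch of preparing input files in the expected
format (the file names and array orientation here are illustrative
assumptions, not requirements from the source):

::

   import numpy as np

   # hypothetical example: a 2D neural activity array and optional
   # 1D timestamps, saved as .npy for loading through the GUI
   neural_activity = np.random.rand(100, 5000)
   np.save('neural_activity.npy', neural_activity)
   np.save('neural_timestamps.npy', np.linspace(0.0, 500.0, 5000))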
95 changes: 95 additions & 0 deletions docs/outputs.rst
Original file line number Diff line number Diff line change
@@ -0,0 +1,95 @@
Outputs
=======================

ROI and SVD processing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


Processed output
~~~~~~~~~~~~~~~~~

The GUIs create one file for all videos (saved in the current folder);
the npy file is named “videofile_proc.npy” and the mat file
“videofile_proc.mat”.

- **filenames**: list of lists of video filenames - each list contains the videos taken simultaneously

- **Ly**, **Lx**: list of number of pixels in Y (Ly) and X (Lx) for each video taken simultaneously

- **sbin**: spatial bin size for motion SVDs

- **Lybin**, **Lxbin**: list of number of pixels binned by sbin in Y (Ly) and X (Lx) for each video taken simultaneously

- **sybin**, **sxbin**: coordinates of multivideo (for plotting/reshaping ONLY)

- **LYbin**, **LXbin**: full-size of all videos embedded in rectangle (binned)

- **fullSVD**: whether or not “multivideo SVD” is computed

- **save_mat**: whether or not to save proc as `\*.mat` file

- **avgframe**: list of average frames for each video from a subset of frames (binned by sbin)

- **avgframe_reshape**: average frame reshaped to be y-pixels x x-pixels

- **avgmotion**: list of average motions for each video from a subset of frames (binned by sbin)

- **avgmotion_reshape**: average motion reshaped to be y-pixels x x-pixels

- **iframes**: array containing number of frames in each consecutive video

- **motion**: list of absolute motion energies across time - first is “multivideo” motion energy (empty if not computed)

- **motSVD**: list of motion SVDs - first is “multivideo SVD” (empty if not computed) - each is nframes x components

- **motMask**: list of motion masks for each motion SVD - each motMask is pixels x components

- **motMask_reshape**: motion masks reshaped to be y-pixels x x-pixels x components

- **motSv**: array containing singular values for motSVD

- **movSv**: array containing singular values for movSVD

- **pupil**: list of pupil ROI outputs - each is a dict with ‘area’, ‘area_smooth’, and ‘com’ (center-of-mass)

- **blink**: list of blink ROI outputs - each is nframes, the blink area on each frame

- **running**: list of running ROI outputs - each is nframes x 2, for X and Y motion on each frame

- **rois**: ROIs that were drawn and computed:

- rind: type of ROI in number

- rtype: what type of ROI (‘motion SVD’, ‘pupil’, ‘blink’, ‘running’)

- ivid: in which video is the ROI

- color: color of ROI

- yrange: y indices of ROI

- xrange: x indices of ROI

- saturation: saturation of ROI (0-255)

- pupil_sigma: number of stddevs used to compute pupil radius (for pupil ROIs)

- yrange_bin: binned indices in y (if motion SVD)

- xrange_bin: binned indices in x (if motion SVD)

Loading outputs
''''''''''''''''''''

Note this is a dict; e.g., to load it in Python:

::

   import numpy as np
   proc = np.load('cam1_proc.npy', allow_pickle=True).item()
   print(proc.keys())
   motion = proc['motion']
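
Building on this, a minimal sketch reconstructing a motion-energy image
from the saved masks and traces (assumes at least one motion ROI was
processed; per the field list above, index 0 of each list is the
“multivideo” entry, so index 1 is the first ROI):

::

   import numpy as np

   proc = np.load('cam1_proc.npy', allow_pickle=True).item()
   V = proc['motSVD'][1]           # nframes x components traces, first ROI
   U = proc['motMask_reshape'][1]  # y-pixels x x-pixels x components masks
   # approximate (mean-subtracted) motion energy image at frame t
   t = 100
   motion_image = (U * V[t]).sum(axis=-1)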

These \*_proc.npy files can be loaded into the GUI (and will
automatically be loaded after processing). The checkboxes in the lower
left allow you to view different traces from the processing.
11 changes: 9 additions & 2 deletions docs/pose_tracking_gui_tutorial.rst
Original file line number Diff line number Diff line change
@@ -1,5 +1,12 @@
Pose tracking **(GUI)**
===============================
.. image:: https://github.com/MouseLand/facemap/blob/main/figs/tracker.gif
:width: 100%
:height: 500
:alt: tracker
:align: middle

The latest python version is integrated with Facemap network for
tracking 14 distinct keypoints on mouse face and an additional point for
@@ -37,7 +44,7 @@ Follow the steps below to generate keypoints for your videos:
appear. Drag the red rectangle to select region of interest on the
frame where the keypoints will be tracked. Please ensure that the
bounding box is focused on the face where all the keypoints will be
visible. See example frames `here <figs/mouse_views.png>`__. If a
visible. See example frames `here <https://github.com/MouseLand/facemap/blob/main/figs/mouse_views.png>`__. If a
‘Face (pose)’ ROI has already been added then this step will be
skipped.
- Click ``Done`` to process video. Alternatively, click ``Skip`` to
@@ -77,7 +84,7 @@ steps below:
show keypoints with higher confidence estimates.

Note: this feature is currently only supported for single video. Please
see `CLI instructions <pose_tracking_cli_tutorial.md>`__ for viewing
see `CLI instructions <https://github.com/MouseLand/facemap/blob/main/docs/pose_tracking_cli_tutorial.md>`__ for viewing
keypoints for multiple videos.

Finetune model to refine keypoints for a video
126 changes: 126 additions & 0 deletions docs/roi_proc.rst
Original file line number Diff line number Diff line change
@@ -0,0 +1,126 @@
ROI and SVD processing
==============================

Choose a type of ROI to add and then click “add ROI” to add it to the
view. The pixels in the ROI will show up in the right window (with
different processing depending on the ROI type - see below). You can
move and resize the ROI at any time. You can delete the ROI by
right-clicking and selecting “remove”. You can change the saturation of
the ROI with the upper right saturation bar. You can also just click on
the ROI at any time to see what it looks like in the right view.

.. figure:: https://github.com/MouseLand/facemap/blob/main/figs/face_fast.gif?raw=true
:alt: example GUI with pupil, blink and motion SVD

By default, the “multivideo” box is unchecked. If you check
it, then the motion SVD or movie SVD is computed across ALL videos - all videos are
concatenated at each timepoint, and the SVD of this matrix of ALL_PIXELS
x timepoints is computed. If you have just one video acquired at a time,
then it is the SVD of the full video.

**To compute motion SVD and/or movie SVD, please check one or both boxes in the GUI before hitting process.**

If you want to open the GUI with a movie file and/or save path
specified, use:

::

   python -m facemap --movie '/home/carsen/movie.avi' --savedir '/media/carsen/SSD/'

Note this will only work if you have a single file to load (it cannot
handle multiple files in series / multiple views).


ROI types
~~~~~~~~~~~~~

Motion SVD
^^^^^^^^^^^

The motion/movie SVDs (small ROIs / multivideo) are computed on the movie
downsampled in space by the spatial downsampling input box in the GUI
(default 4 pixels). Note the saturation set in this window is NOT used
for any processing.

The motion *M* is defined as the abs(current_frame - previous_frame),
and the average motion energy across frames is computed using a subset
of frames (*avgmot*) (at least 1000 frames). Then the singular vectors of the
motion energy are computed on chunks of data, also from a subset of
frames (15 chunks of 1000 frames each): *uMotMask*. These are the motion masks
that are then projected onto the video
at all timepoints (done in chunks of size *nt*\ =500).
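
A minimal numpy sketch of this pipeline (an illustrative simplification,
not the exact implementation: it builds the masks from a single
1000-frame chunk rather than 15, and assumes the movie fits in memory as
a ``frames`` array):

::

   import numpy as np

   def motion_svd(frames, ncomp=500, nt=500):
       # frames: nframes x npixels binned grayscale movie
       M = np.abs(np.diff(frames.astype(np.float32), axis=0))  # motion energy
       avgmot = M[:1000].mean(axis=0)        # average motion from a subset
       # singular vectors of mean-subtracted motion energy = motion masks
       uMotMask = np.linalg.svd((M[:1000] - avgmot).T,
                                full_matrices=False)[0][:, :ncomp]
       # project the masks onto the movie at all timepoints, in chunks of nt
       motSVD = np.concatenate([(M[i:i + nt] - avgmot) @ uMotMask
                                for i in range(0, len(M), nt)], axis=0)
       return uMotMask, motSVD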

Example motion masks *uMotMask* and traces *motSVD*:

.. figure:: https://github.com/MouseLand/facemap/blob/main/figs/exsvds.png?raw=true
:alt: example SVDs

The SVDs can be computed on the motion or on the raw movie, please check the
corresponding box for "motion SVD" and/or "movie SVD" before hitting process to
compute one or both of these.

We found that these extracted singular vectors explained up to half of
the total explainable variance in neural activity in visual cortex and
in other forebrain areas. See our
`paper <https://science.sciencemag.org/content/364/6437/eaav7893>`__ for
more details.

We also compute the average of *M* across all
pixels in each motion ROI and that is returned as the **motion**. The
first **motion** field is non-empty if “multivideo SVD” is on, and in
that case it is the average motion energy across all pixels in all
views.

Pupil computation
^^^^^^^^^^^^^^^^^

The minimum pixel value is subtracted from the ROI. Use the saturation
bar to reduce the background of the eye. The algorithm zeros out any
pixels less than the saturation level (I recommend a *very* low value -
so most pixels are white in the GUI).

Next it finds the pixel with the largest magnitude. It draws a box
around that area (1/2 the size of the ROI) and then finds the
center-of-mass of that region. It then centers the box on that area. It
fits a multivariate gaussian to the pixels in the box using maximum
likelihood (see `pupil.py <https://github.com/MouseLand/facemap/blob/main/facemap/pupil.py>`__).

After a Gaussian is fit, it zeros out pixels whose squared distance from
the center (normalized by the standard deviation of the Gaussian fit) is
greater than 2 \* sigma^2 where sigma is set by the user in the GUI
(default sigma = 2.5). It now performs the fit again with these points
erased, and repeats this process 4 more times. The pupil is then defined
as an ellipse sigma standard deviations away from the center-of-mass of
the gaussian. This is plotted with ‘+’ around the ellipse and with one
‘+’ at the center.
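
A minimal sketch of this iterative fit (a simplified, moment-based
stand-in for the actual implementation in ``pupil.py``; it omits the
initial box-centering step):

::

   import numpy as np

   def fit_pupil(img, sigma=2.5, n_iter=5):
       # img: 2D ROI with pixels below the saturation level already zeroed
       keep = img > 0
       for _ in range(n_iter):
           y, x = np.nonzero(keep)
           w = img[y, x].astype(np.float64)
           w /= w.sum()
           # intensity-weighted center-of-mass and covariance
           mu = np.array([(w * y).sum(), (w * x).sum()])
           dy, dx = y - mu[0], x - mu[1]
           cov = np.array([[(w * dy * dy).sum(), (w * dy * dx).sum()],
                           [(w * dy * dx).sum(), (w * dx * dx).sum()]])
           # erase pixels whose normalized squared distance from the
           # center exceeds 2 * sigma^2, then refit
           yy, xx = np.nonzero(img > 0)
           d = np.stack([yy - mu[0], xx - mu[1]])
           dist2 = (d * (np.linalg.inv(cov) @ d)).sum(axis=0)
           keep = np.zeros_like(img, dtype=bool)
           keep[yy[dist2 <= 2 * sigma**2], xx[dist2 <= 2 * sigma**2]] = True
       return mu, cov  # ellipse axes: sigma * sqrt(eigenvalues of cov)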

If there are reflections on the mouse’s eye, then you can draw ellipses
to account for this “corneal reflection” (plotted in black). You can add
as many of these per pupil ROI as needed. The algorithm fills in these
areas of the image with the predicted values, which allows for smooth
transitions between big and small pupils.

.. figure:: https://github.com/MouseLand/facemap/blob/main/figs/out.gif?raw=true
:alt: pupil tracking zoom

This raw pupil area trace is post-processed. The trace is median filtered
with a window of 30 timeframes. At each timepoint, the difference
between the raw trace and the median filtered trace is computed. If the
difference at a given point exceeds half the standard deviation of the
raw trace, then the raw value is replaced by the median filtered value.
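
A minimal sketch of this post-processing step (assuming ``scipy`` is
available):

::

   import numpy as np
   from scipy.ndimage import median_filter

   def clean_pupil_trace(area, win=30):
       filtered = median_filter(area, size=win)  # 30-timeframe median filter
       outliers = np.abs(area - filtered) > 0.5 * area.std()
       cleaned = area.copy()
       cleaned[outliers] = filtered[outliers]    # replace flagged values
       return cleaned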

.. figure:: https://github.com/MouseLand/facemap/blob/main/figs/pupilfilter.png?raw=true
:alt: pupil filtering

Blink computation
^^^^^^^^^^^^^^^^^

You may want to ignore frames in which the animal is blinking if you are
looking at pupil size. The blink area is defined as the number of pixels above the
saturation level that you set (all non-white pixels).
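
As a sketch, this is just a per-frame pixel count (following the
definition above; the function name and ROI array layout are
illustrative):

::

   import numpy as np

   def blink_area(roi_frames, saturation_level):
       # roi_frames: nframes x y-pixels x x-pixels blink ROI movie
       return (roi_frames > saturation_level).sum(axis=(1, 2))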


Running computation
^^^^^^^^^^^^^^^^^^^

The phase-correlation between consecutive frames (in the running ROI) is
computed in the Fourier domain (see `running.py <https://github.com/MouseLand/facemap/blob/main/facemap/running.py>`__). The XY
position of maximal correlation gives the amount of shift between the
two consecutive frames. Depending on how fast the movement is
frame-to-frame, you may want at least a 50x50 pixel ROI to compute this.
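
A minimal sketch of phase correlation between two frames (a generic
stand-in, not the exact ``running.py`` implementation):

::

   import numpy as np

   def phase_corr_shift(frame0, frame1):
       # normalized cross-power spectrum -> phase correlation surface
       R = np.fft.fft2(frame0) * np.conj(np.fft.fft2(frame1))
       R /= np.abs(R) + 1e-8
       corr = np.real(np.fft.ifft2(R))
       dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
       # wrap shifts larger than half the frame size to negative values
       ny, nx = frame0.shape
       if dy > ny // 2:
           dy -= ny
       if dx > nx // 2:
           dx -= nx
       return dy, dx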
2 changes: 1 addition & 1 deletion facemap/gui/gui.py
Original file line number Diff line number Diff line change
Expand Up @@ -784,7 +784,7 @@ def make_buttons(self):
self.plot1_checkboxes[-1], istretch + 1 + i, 0, 1, 1
)
# Set plot 2 checkboxes
for k in range(4):
for k in range(7):
self.plot2_checkboxes.append(QCheckBox(""))
self.scene_grid_layout.addWidget(
self.plot2_checkboxes[-1], istretch + 2 + k, 1, 1, 1
2 changes: 1 addition & 1 deletion facemap/gui/io.py
Original file line number Diff line number Diff line change
Expand Up @@ -231,7 +231,7 @@ def open_proc(parent, file_name=None):
int(parent.saturation[parent.iROI] * 100 / 255)
)
parent.ROIs[parent.iROI].plot(parent)
if parent.processed and k < 5:
if parent.processed and k <= 7:
parent.plot2_checkboxes[k].setText(
"%s%d" % (parent.typestr[r["rind"]], kt[r["rind"]])
)
Binary file modified facemap/gui/ops_user.npy
Binary file not shown.
Binary file modified facemap/mouse.png
100755 → 100644
