Merge pull request #44 from MouseLand/dev
Dev
carsen-stringer authored Mar 17, 2022
2 parents 0e8fa78 + c9ef624 commit 5011e15
Showing 44 changed files with 2,705 additions and 935 deletions.
4 changes: 4 additions & 0 deletions .gitignore
@@ -29,7 +29,11 @@ suite2p.egg-info/
build/
FaceMap/ops_user.npy
*.ipynb
*.pth
*.pt
*.gif

#
# =========================
# Operating System Files
# =========================
302 changes: 49 additions & 253 deletions README.md

Large diffs are not rendered by default.

75 changes: 75 additions & 0 deletions docs/installation.md
@@ -0,0 +1,75 @@
# Installation (Python)

This package only supports Python 3. We recommend installing Python 3 with **[Anaconda](https://www.anaconda.com/download/)**.


### For using the pose tracker and SVD processing
Please run
~~~
pip install git+https://github.com/mouseland/facemap.git
~~~
which will install the latest development version from GitHub.

### For the latest released version (from PyPI), SVD processing only

Run the following for the command line interface (CLI), i.e. the headless version:
~~~
pip install facemap
~~~
or the following to use the GUI:
~~~
pip install facemap[gui]
~~~

To upgrade Facemap (package [here](https://pypi.org/project/facemap/)), run the following within the environment:
~~~
pip install facemap --upgrade
~~~

Using the environment.yml file (recommended installation method):

1. Download the `environment.yml` file from the repository or clone the GitHub repository: `git clone https://www.github.com/mouseland/facemap.git`
2. Open an Anaconda prompt / command prompt with `conda` for **Python 3** in the path
3. Change directory to the facemap folder: `cd facemap`
4. Run `conda env create -f environment.yml`
5. To activate this new environment, run `conda activate facemap`
6. You should see `(facemap)` on the left side of the terminal line. Now run `python -m facemap` and you're all set.

## Common installation issues

If you have pip issues, there might be some interaction between pre-installed dependencies and the ones Facemap needs. The first thing to try is upgrading pip:
~~~
python -m pip install --upgrade pip
~~~

While running `python -m facemap`, if you receive the error `No module named PyQt5.sip`, try uninstalling and reinstalling pyqt5:
~~~
pip uninstall pyqt5 pyqt5-tools
pip install pyqt5 pyqt5-tools pyqt5.sip
~~~

PyQt doesn't work on Mac OS Yosemite, so you won't be able to install Facemap there. More recent versions of Mac OS are fine.

The software has been heavily tested on Ubuntu 18.04, and less well tested on Windows 10 and Mac OS. Please post an issue if you have installation problems.

### Python dependencies

Facemap python relies on these awesome packages:
- [pyqtgraph](http://pyqtgraph.org/)
- [PyQt5](http://pyqt.sourceforge.net/Docs/PyQt5/)
- [numpy](http://www.numpy.org/) (>=1.13.0)
- [scipy](https://www.scipy.org/)
- [opencv](https://opencv.org/)
- [numba](http://numba.pydata.org/numba-doc/latest/user/5minguide.html)
- [natsort](https://natsort.readthedocs.io/en/master/)
- [PyTorch](https://pytorch.org)
- [Matplotlib](https://matplotlib.org)
- [tqdm](https://tqdm.github.io)
- [pandas](https://pandas.pydata.org)
- [UMAP](https://umap-learn.readthedocs.io/en/latest/)


# Installation (MATLAB)

The MATLAB version supports SVD processing only and does not include the pose tracker. The package can be downloaded/cloned from GitHub (no install required). It works in MATLAB 2014b and above - please submit issues if it's not working. The Image Processing Toolbox is necessary to use the GUI. For GPU functionality, the Parallel Computing Toolbox is required. If you don't have the Parallel Computing Toolbox, uncheck the box next to "use GPU" in the GUI before processing.
1 change: 1 addition & 0 deletions docs/pose_tracking_cli_tutorial.md
@@ -0,0 +1 @@
# Pose tracking **(CLI)**
34 changes: 34 additions & 0 deletions docs/pose_tracking_gui_tutorial.md
@@ -0,0 +1,34 @@
# Pose tracking **(GUI)** :mouse:

<img src="../figs/tracker.gif" width="100%" height="500" title="Tracker" alt="tracker" algin="middle" vspace = "10">

The latest Python version is integrated with the Facemap network for tracking 14 distinct keypoints on the mouse face, plus an additional keypoint for tracking the paw. The keypoints can be tracked from different camera views (some examples are shown below).

<p float="middle">
<img src="../figs/mouse_face1_keypoints.png" width="310" height="290" title="View 1" alt="view1" align="left" vspace = "10" hspace="30" style="border: 0.5px solid white" />
<img src="../figs/mouse_face0_keypoints.png" width="310" height="290" title="View 2" alt="view2" algin="right" vspace = "10" style="border: 0.5px solid white">
</p>

For pose tracking in the GUI, after following the [installation instructions](installation.md), proceed with the following steps:

1. Load video
- Select `File` from the menu bar
- To process a single video, select `Load single movie file`
- Alternatively, to process multiple videos, select `Open folder of movies` and then select the files you want to process. Please note that multiple videos are processed sequentially.
2. Select output folder
- Use the file menu to `Set output folder`.
- The processed keypoints file will be saved in the selected output folder with the extension `.h5`, along with a corresponding metadata file with the extension `.pkl` (a sketch for loading both follows these steps).
3. Choose processing options
- Check at least one of the following boxes:
- `Keypoints` for face pose tracking.
- `motSVD` for SVD processing of the difference between consecutive frames (motion energy).
- `movSVD` for SVD processing of raw movie frames.
- Click the `process` button and monitor the progress bar at the bottom of the window for updates.
4. Select ROI/bounding box for face
- Once you hit `process`, a dialog box will appear for selecting a bounding box for the face. The keypoints will be tracked within the selected bounding box. Please ensure that the bounding box is focused on the face so that all the keypoints shown above are visible. See example frames [here](figs/mouse_views.png). Once the bounding box is set, click 'Done' to continue.
- Alternatively, if you wish to use the entire frame, click 'Skip' to continue.
- If a 'Face (pose)' ROI has already been selected using the dropdown menu for ROIs, the bounding box will be selected automatically and the keypoints will be tracked within that ROI.

The videos will be processed in the order they appear in the file list, and the output will be saved in the output folder. The tracker gif at the top of this page demonstrates the steps above for tracking keypoints in a video.
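
To sanity-check the output of step 2, the files can be opened in Python. Below is a minimal sketch, assuming `h5py` is installed; the file names are hypothetical examples, and the exact group/key layout inside the `.h5` file may differ between Facemap versions, so the snippet just prints the hierarchy.

```python
import pickle

import h5py

# Hypothetical output file names; use the ones in your output folder.
with h5py.File("cam1_FacemapPose.h5", "r") as f:
    f.visit(print)  # print every group/dataset name to see the keypoint layout

with open("cam1_FacemapPose_metadata.pkl", "rb") as f:
    metadata = pickle.load(f)
print(type(metadata))  # inspect what the metadata object contains
```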


130 changes: 130 additions & 0 deletions docs/svd_matlab_tutorial.md
@@ -0,0 +1,130 @@
# *HOW TO GUI* (MATLAB)

To start the GUI, run the command `MovieGUI` in this folder. The following window should appear. After you click an ROI button and draw an area, you have to **double-click** inside the drawn box to confirm it. To compute the SVD across multiple simultaneously acquired videos, you need to use the "multivideo SVD" options to draw ROIs on each video one at a time.

<div align="center">
<img src="../figs/GUIscreenshot.png" width="80%" alt="gui screenshot" >
</div>

The default starting folder is set at line 59 of MovieGUI.m (`h.filepath`).

#### File loading structure

If you choose a folder instead of a single file, it will assemble a list of all video files in that folder and also all videos one folder down. The MATLAB GUI will ask *"would you like to process all movies?"*. If you say no, a list of movies to choose from will appear. The Python version shows you a list of movies by default; if you choose no movies in the Python version, it is assumed you want to process ALL of them.

#### Processing movies captured simultaneously (multiple camera setups)

Both GUIs will then ask *"are you processing multiple videos taken simultaneously?"*. If you say yes, the script will check whether the **FIRST FOUR** letters of the filenames vary across movies. If the first four letters of two movies are the same, the GUI assumes they were acquired *sequentially*, not *simultaneously*.

Example file list:
+ cam1_G7c1_1.avi
+ cam1_G7c1_2.avi
+ cam2_G7c1_1.avi
+ cam2_G7c1_2.avi
+ cam3_G7c1_1.avi
+ cam3_G7c1_2.avi

*"are you processing multiple videos taken simultaneously?"* ANSWER: Yes

Then the GUIs assume {cam1_G7c1_1.avi, cam2_G7c1_1.avi, cam3_G7c1_1.avi} were acquired simultaneously and {cam1_G7c1_2.avi, cam2_G7c1_2.avi, cam3_G7c1_2.avi} were acquired simultaneously. They will be processed in alphabetical order (1 before 2) and the results from the videos will be concatenated in time. If one of these files is missing, the GUI will error and you will have to choose the file folders again. You will also get errors if files acquired at the same time don't have the same number of frames (e.g. {cam1_G7c1_1.avi, cam2_G7c1_1.avi, cam3_G7c1_1.avi} should all have the same number of frames).
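
As a toy illustration of this grouping rule, the sketch below groups the example file list by everything after the first four letters (it assumes, as in the example, that simultaneously acquired files differ only in their camera prefix):

```python
from collections import defaultdict

files = ["cam1_G7c1_1.avi", "cam1_G7c1_2.avi", "cam2_G7c1_1.avi",
         "cam2_G7c1_2.avi", "cam3_G7c1_1.avi", "cam3_G7c1_2.avi"]

groups = defaultdict(list)
for f in sorted(files):
    # files sharing the first four letters are sequential, so
    # group simultaneous files by everything after those letters
    groups[f[4:]].append(f)

for suffix, simultaneous in sorted(groups.items()):
    print(suffix, "->", simultaneous)
# _G7c1_1.avi -> ['cam1_G7c1_1.avi', 'cam2_G7c1_1.avi', 'cam3_G7c1_1.avi']
# _G7c1_2.avi -> ['cam1_G7c1_2.avi', 'cam2_G7c1_2.avi', 'cam3_G7c1_2.avi']
```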

Note: if you have many simultaneous videos / overall pixels (e.g. 2000 x 2000) you will need around 32GB of RAM to compute the full SVD motion masks.

After the file choosing process is over, you will see all the movies in the drop-down menu (by filename). You can switch between them and inspect how well an ROI works for each of the movies.

### ROI types

#### Pupil computation

The minimum pixel value is subtracted from the ROI. Use the saturation bar to reduce the background of the eye. The algorithm zeros out any pixels less than the saturation level (I recommend a *very* low value - so most pixels are white in the GUI).

Next it finds the pixel with the largest magnitude. It draws a box around that area (1/2 the size of the ROI), finds the center-of-mass of that region, and re-centers the box on it. It then fits a multivariate Gaussian to the pixels in the box using maximum likelihood (see [pupil.py](facemap/pupil.py) or [fitMVGaus.m](matlab/utils/fitMVGaus.m)).

After a Gaussian is fit, it zeros out pixels whose squared distance from the center (normalized by the standard deviation of the Gaussian fit) is greater than 2 * sigma^2, where sigma is set by the user in the GUI (default sigma = 2.5). It then performs the fit again with these points erased, and repeats this process 4 more times. The pupil is then defined as an ellipse sigma standard deviations away from the center-of-mass of the Gaussian. This is plotted with '+' around the ellipse and with one '+' at the center.
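
Below is a minimal numpy sketch of this iterative fit (the real implementations are [pupil.py](facemap/pupil.py) and [fitMVGaus.m](matlab/utils/fitMVGaus.m); this version fits by weighted moments rather than exact maximum likelihood and is only meant to show the loop structure):

```python
import numpy as np

def fit_pupil_gaussian(img, sigma=2.5, n_refits=5):
    # img: 2D ROI with the background already zeroed out
    ys, xs = np.nonzero(img)
    pts = np.stack([ys, xs], axis=1).astype(float)  # pixel coordinates [N x 2]
    w = img[ys, xs].astype(float)                   # intensities used as weights
    for _ in range(n_refits + 1):
        mu = (w[:, None] * pts).sum(0) / w.sum()           # weighted center
        d = pts - mu
        cov = np.einsum("n,ni,nj->ij", w, d, d) / w.sum()  # weighted covariance
        # squared distance from the center, normalized by the fitted covariance
        md2 = np.einsum("ni,ij,nj->n", d, np.linalg.inv(cov), d)
        keep = md2 <= 2 * sigma**2  # erase pixels far from the current fit
        pts, w = pts[keep], w[keep]
    # pupil ellipse: sigma standard deviations along the covariance eigenvectors
    return mu, cov
```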

If there are reflections on the mouse's eye, then you can draw ellipses to account for this "corneal reflection" (plotted in black). You can add as many of these per pupil ROI as needed. The algorithm fills in these areas of the image with the predicted values, which allows for smooth transitions between big and small pupils.

<img src="../figs/out.gif" width="80%" alt="pupil gif">

This raw pupil area trace is post-processed (see [smoothPupil.m](pupil/smoothPupil.m)). The trace is median filtered with a window of 30 timeframes. At each timepoint, the difference between the raw trace and the median-filtered trace is computed. If the difference at a given point exceeds half the standard deviation of the raw trace, then the raw value is replaced by the median-filtered value.

![pupil](../figs/pupilfilter.png?raw=true "pupil filtering")
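
A short sketch of this filtering step is below (the real version is smoothPupil.m; scipy's `medfilt` requires an odd window, so 31 stands in for 30):

```python
import numpy as np
from scipy.signal import medfilt

def smooth_pupil_area(area, win=31):
    filt = medfilt(area, kernel_size=win)              # median-filtered trace
    outliers = np.abs(area - filt) > 0.5 * area.std()  # deviations > half the std
    smoothed = area.copy()
    smoothed[outliers] = filt[outliers]                # swap in the filtered values
    return smoothed
```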

#### Blink computation

You may want to ignore frames in which the animal is blinking if you are looking at pupil size. The blink area is the number of pixels above the saturation level that you set (all non-white pixels).
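
In code this is a simple threshold count; a hedged sketch (`frames` and `saturation` are assumed inputs, not Facemap API):

```python
import numpy as np

def blink_area(frames, saturation):
    # frames: [time x Ly x Lx] array from the blink ROI;
    # count the pixels above the GUI saturation level in each frame
    return (frames > saturation).sum(axis=(1, 2))
```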

#### Motion SVD

The motion SVDs (small ROIs / multivideo) are computed on the movie downsampled in space by the spatial downsampling input box in the GUI (default 4 pixels). Note the saturation set in this window is NOT used for any processing.

The motion *M* is defined as the abs(current_frame - previous_frame), and the average motion energy across frames is computed using a subset of frames (*avgmot*) (at least 1000 frames - set at line 45 in [subsampledMean.m](matlab/subsampledMean.m) or line 183 in [process.py](facemap/process.py)). Then the singular vectors of the motion energy are computed on chunks of data, also from a subset of frames (15 chunks of 1000 frames each). Let *F* be the chunk of frames [pixels x time]. Then
```
uMot = [];
for j = 1:nchunks
  M = abs(diff(F,1,2));            % motion energy of chunk j: |frame(t) - frame(t-1)|
  [u,~,~] = svd(M - avgmot);       % singular vectors of mean-subtracted motion
  uMot = cat(2, uMot, u);          % collect components across chunks
end
[uMot,~,~] = svd(uMot);            % merge the chunk bases into one SVD
uMotMask = normc(uMot(:, 1:500)); % keep 500 components
```
*uMotMask* are the motion masks that are then projected onto the video at all timepoints (done in chunks of size *nt*=500):
```
for j = 1:nchunks
  M = abs(diff(F,1,2));                   % motion energy of chunk j
  motSVD0 = (M - avgmot)' * uMotMask;     % project onto masks: [nt x components]
  motSVD((j-1)*nt + [1:nt],:) = motSVD0;  % insert chunk into the full trace
end
```
Example motion masks *uMotMask* and traces *motSVD*:

<img src="../figs/exsvds.png" width="50%" alt="example SVDs">

We found that these extracted singular vectors explained up to half of the total explainable variance in neural activity in visual cortex and in other forebrain areas. See our [paper](https://science.sciencemag.org/content/364/6437/eaav7893) for more details.

In the Python version, we also compute the average of *M* across all pixels in each motion ROI, and that is returned as the **motion**. The first **motion** field is non-empty if "multivideo SVD" is on; in that case it is the average motion energy across all pixels in all views.

#### Running computation

The phase correlation between consecutive frames (in the running ROI) is computed in the Fourier domain (see [running.py](../facemap/running.py) or [processRunning.m](../matlab/running/processRunning.m)). The XY position of maximal correlation gives the amount of shift between the two consecutive frames. Depending on how fast the movement is frame-to-frame, you may want at least a 50x50 pixel ROI to compute this.
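
Below is a minimal sketch of phase correlation between two frames (the real versions are running.py and processRunning.m; sign conventions for the shift may differ):

```python
import numpy as np

def phase_corr_shift(frame0, frame1):
    # cross-power spectrum, normalized so that only phase information remains
    f0, f1 = np.fft.fft2(frame0), np.fft.fft2(frame1)
    cross = f0 * np.conj(f1)
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-10)).real
    # the location of the correlation peak gives the frame-to-frame shift
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    # wrap shifts larger than half the ROI back to negative values
    if dy > r.shape[0] // 2:
        dy -= r.shape[0]
    if dx > r.shape[1] // 2:
        dx -= r.shape[1]
    return dy, dx
```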

#### Multivideo SVD ROIs

You can draw areas to be included and excluded in the multivideo SVD (or single video if you only have one view). The buttons are "area to keep" and "area to exclude" and will draw blue and red boxes respectively. The union of all pixels inside the "keep" areas is used, minus any pixels that fall inside an "exclude" area (you can toggle between viewing the boxes and viewing the included pixels using the "Show areas" checkbox; see the example below).

<img src="../figs/incexcareas.png" width="60%" alt="example areas">

The motion energy is then computed from these non-red pixels.
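
The include/exclude logic amounts to a boolean pixel mask; here is a sketch using the [y0 x0 Ly Lx] box format described under "Processed output" below:

```python
import numpy as np

def multivideo_mask(shape, keep_boxes, exclude_boxes):
    # shape: (Ly, Lx) of the downsampled frame; boxes are [y0, x0, Ly, Lx]
    mask = np.zeros(shape, dtype=bool)
    for y0, x0, ly, lx in keep_boxes:
        mask[y0:y0 + ly, x0:x0 + lx] = True   # union of "areas to keep"
    for y0, x0, ly, lx in exclude_boxes:
        mask[y0:y0 + ly, x0:x0 + lx] = False  # carve out "areas to exclude"
    return mask
```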

### Processed output

The GUIs create one file for all videos (saved in the current folder); the processed .mat file is named "videofile_proc.mat".

**MATLAB output**:
- **nX**,**nY**: cell arrays of number of pixels in X and Y in each video taken simultaneously
- **sc**: spatial downsampling constant used
- **ROI**: [# of videos x # of areas] - areas to be included for multivideo SVD (in downsampled reference)
- **eROI**: [# of videos x # of areas] - areas to be excluded from multivideo SVD (in downsampled reference)
- **locROI**: location of small ROIs (in order running, ROI1, ROI2, ROI3, pupil1, pupil2); in downsampled reference
- **ROIfile**: which movie each small ROI belongs to
- **plotROIs**: which ROIs are being processed (these are the ones shown on the frame in the GUI)
- **files**: all the files you processed together
- **npix**: array of number of pixels from each video used for multivideo SVD
- **tpix**: array of number of pixels in each view that was used for SVD processing
- **wpix**: cell array of which pixels were used from each video for multivideo SVD
- **avgframe**: [sum(tpix) x 1] average frame across videos computed on a subset of frames
- **avgmotion**: [sum(tpix) x 1] average motion energy across videos computed on a subset of frames
- **motSVD**: cell array of motion SVDs [components x time] (in order: multivideo, ROI1, ROI2, ROI3)
- **uMotMask**: cell array of motion masks [pixels x components]
- **runSpeed**: 2D running speed computed using phase correlation [time x 2]
- **pupil**: structure of size 2 (pupil1 and pupil2) with 3 fields: area, area_raw, and com
- **thres**: pupil sigma used
- **saturation**: saturation levels (array in order running, ROI1, ROI2, ROI3, pupil1, pupil2); only saturation levels for pupil1 and pupil2 are used in the processing, others are just for viewing ROIs

An ROI is [1x4]: [y0 x0 Ly Lx].
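
To poke at these fields from Python, a minimal sketch (this assumes the file was not saved in MATLAB's v7.3 format; v7.3 files need h5py instead):

```python
from scipy import io

proc = io.loadmat("videofile_proc.mat", squeeze_me=True)
# list the fields described above, skipping MATLAB's internal keys
print([k for k in proc if not k.startswith("__")])
```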

#### Motion SVD Masks in MATLAB

Use the script [plotSVDmasks.m](../figs/plotSVDmasks.m) to easily view motion masks from the multivideo SVD. The motion masks from the smaller ROIs have been reshaped to be [xpixels x ypixels x components].

