
Dev #44

Merged 127 commits on Mar 17, 2022

Commits
62b7833
add tqdm
Atika-Syeda Jun 14, 2021
1ae320b
multivideo test fix
Atika-Syeda Jun 14, 2021
d17dcb6
pytest edit
Atika-Syeda Jun 16, 2021
e662aaf
sbin param in process
Atika-Syeda Jun 17, 2021
523a193
online tests working
Atika-Syeda Jun 17, 2021
f278afd
setup.py edit
Atika-Syeda Jun 17, 2021
306a566
mkl-fft vers
Atika-Syeda Jun 17, 2021
261b76c
setup edit
Atika-Syeda Jun 17, 2021
c9dc7c3
Update test_and_deploy.yml
Atika-Syeda Jun 17, 2021
5aa4c6e
setup edit
Atika-Syeda Jun 17, 2021
cb8e72e
Merge branch 'dev' of https://github.com/MouseLand/facemap into dev
Atika-Syeda Jun 17, 2021
e74bf7a
setup edit
Atika-Syeda Jun 17, 2021
4592771
test py3.6 only
Atika-Syeda Jun 17, 2021
1baaae9
Update test_and_deploy.yml
Atika-Syeda Jun 17, 2021
1719362
test py3.6 only
Atika-Syeda Jun 17, 2021
e5e5fe0
Update setup.py
Atika-Syeda Jun 17, 2021
34c230d
Update tox.ini
Atika-Syeda Jun 17, 2021
f2ae586
Update test_and_deploy.yml
Atika-Syeda Jun 17, 2021
293e032
Update setup.py
Atika-Syeda Jun 17, 2021
7cdf468
removing mkl-fft
Atika-Syeda Jun 21, 2021
aac6563
removing mkl-fft
Atika-Syeda Jun 21, 2021
bc64725
numba in tox
Atika-Syeda Jun 21, 2021
cb773c3
all deps in tox
Atika-Syeda Jun 21, 2021
9a1e031
all deps in tox
Atika-Syeda Jun 21, 2021
7b4e018
update test
Atika-Syeda Jun 21, 2021
ba673da
update pytqtgraph
Atika-Syeda Jun 21, 2021
339e67c
update branch
Atika-Syeda Jun 24, 2021
cd3e735
process.run usage
Atika-Syeda Jul 12, 2021
e6eb5ad
Update README.md
Atika-Syeda Aug 5, 2021
e664e45
Update README.md
Atika-Syeda Aug 5, 2021
7a748a9
Update README.md
Atika-Syeda Aug 5, 2021
559141e
Update README.md
Atika-Syeda Aug 5, 2021
bdf7d1e
unet model
Atika-Syeda Aug 13, 2021
611a58c
fix merge
Atika-Syeda Aug 13, 2021
55a0d11
Update README.md
Atika-Syeda Aug 25, 2021
d1d7c3d
Update README.md
Atika-Syeda Aug 26, 2021
a735e94
Update README.md
Atika-Syeda Aug 26, 2021
c2fdfc2
transforms file
Atika-Syeda Aug 26, 2021
82fd33a
merge
Atika-Syeda Aug 26, 2021
2107682
bounding box functions
Atika-Syeda Aug 26, 2021
8b01a1d
bbox complete
Atika-Syeda Aug 27, 2021
5804a49
predict labels and wrtie df
Atika-Syeda Aug 30, 2021
72659b9
tutorial notebooks
Atika-Syeda Aug 30, 2021
37dd678
Update test_and_deploy.yml
Atika-Syeda Aug 30, 2021
45001c2
save and plot pose
Atika-Syeda Aug 30, 2021
54d6c2c
pull remote change Merge branch 'dev' of ssh://github.com/MouseLand/f…
Atika-Syeda Aug 30, 2021
28424e3
pose command line - part implementation
Atika-Syeda Aug 31, 2021
2245404
pose GUI/CLI partial implementation
Atika-Syeda Sep 13, 2021
ee355cc
pose GUI/CLI partial implementation II
Atika-Syeda Sep 13, 2021
cd9d01e
facemap setup [gui]
Atika-Syeda Sep 16, 2021
b633447
Update README.md
Atika-Syeda Sep 16, 2021
cecc30b
pose GUI/CLI partial implementation III
Atika-Syeda Sep 16, 2021
0f39feb
pose GUI/CLI partial implementation III
Atika-Syeda Sep 16, 2021
a070766
user defined bbox adjustable
Atika-Syeda Nov 22, 2021
54332ae
fixed ROI creation
Atika-Syeda Nov 22, 2021
1ba03e0
cv2 bbox method
Atika-Syeda Nov 23, 2021
ec36845
correct bbox selection and pred
Atika-Syeda Nov 23, 2021
3dbc547
corrected bbox cropping
Atika-Syeda Nov 29, 2021
addbdcd
CPU batch processing and likelihood
Atika-Syeda Nov 30, 2021
0b93d49
minor edits
Atika-Syeda Nov 30, 2021
bc1d7ff
edits for ubuntu & gpu
Atika-Syeda Nov 30, 2021
96a1975
zoom options
Atika-Syeda Dec 10, 2021
81681b7
minor edit
Atika-Syeda Dec 11, 2021
908200b
using pyqt instead of opencv for pose bbox
Atika-Syeda Dec 12, 2021
7e0d937
adjusted cropping projection
Atika-Syeda Dec 13, 2021
5f46b43
using opencv headless
Atika-Syeda Dec 13, 2021
030d457
highlight point w/ video playback
Atika-Syeda Dec 14, 2021
e418e6c
pose estimation w/o bbox
Atika-Syeda Dec 16, 2021
72f11fa
plot running speed trace
Atika-Syeda Dec 17, 2021
04ba38d
ethograms plot
Atika-Syeda Dec 18, 2021
7e61909
organize code chunks
Atika-Syeda Dec 18, 2021
e25ee6a
cluster labels trace on p1
Atika-Syeda Dec 20, 2021
473c042
bbox for processing multiple videos
Atika-Syeda Dec 30, 2021
56b50dd
pose estimates for multiview simultaneous recordings
Atika-Syeda Dec 30, 2021
640d1c8
load multiview pose files
Atika-Syeda Dec 30, 2021
698d0eb
process batch includes pose estimation
Atika-Syeda Jan 1, 2022
339e362
fix tracker button position
Atika-Syeda Jan 1, 2022
58f70b8
fix svd calculation
Atika-Syeda Jan 5, 2022
4cb1e90
batchlist and pyqt5 imports
Atika-Syeda Jan 6, 2022
d6740dc
cluster legend adjustment
Atika-Syeda Jan 6, 2022
e70f2d7
reload traces
Atika-Syeda Jan 7, 2022
385b2a2
cluster labels
Atika-Syeda Jan 8, 2022
149df7b
inference speed improvements
Atika-Syeda Jan 10, 2022
8fd861c
inference speed improvement
Atika-Syeda Jan 13, 2022
c4cd358
pupil, blink, and run process_roi correction
Atika-Syeda Jan 17, 2022
501902d
facemap_network
Atika-Syeda Mar 8, 2022
7ac4787
time inference
Atika-Syeda Mar 8, 2022
027146a
time inference edit
Atika-Syeda Mar 8, 2022
fc54801
400 fps model
Atika-Syeda Mar 8, 2022
75b7599
Update README.md
Atika-Syeda Mar 10, 2022
9386c6c
Update README.md
Atika-Syeda Mar 10, 2022
24da1f9
Update README.md
Atika-Syeda Mar 10, 2022
45021f9
tsne upload
Atika-Syeda Mar 12, 2022
0135903
Merge branch 'dev' of ssh://github.com/MouseLand/facemap into dev
Atika-Syeda Mar 12, 2022
e6019e3
download pose model from url
Atika-Syeda Mar 13, 2022
60d5096
time prediction
Atika-Syeda Mar 13, 2022
dc8a3d6
adding progress bar for pose
Atika-Syeda Mar 13, 2022
84a9066
keep progress visible
Atika-Syeda Mar 13, 2022
1028f36
progressbar edit
Atika-Syeda Mar 13, 2022
4bfdbb5
progressbar edit II
Atika-Syeda Mar 13, 2022
3d8f989
move svd details
Atika-Syeda Mar 13, 2022
5517840
pose tracking section
Atika-Syeda Mar 13, 2022
48781aa
Detailed instructions for installation
Atika-Syeda Mar 13, 2022
a339918
moving details from main readme
Atika-Syeda Mar 13, 2022
eabd6d6
Update README.md
Atika-Syeda Mar 13, 2022
4139a6a
add torchvision
Atika-Syeda Mar 13, 2022
c3e4e0a
add torchvision
Atika-Syeda Mar 13, 2022
d9e1885
Update installation.md
Atika-Syeda Mar 13, 2022
87853ca
matlab instructions
Atika-Syeda Mar 13, 2022
d5d2105
separate instructions for MATLAB
Atika-Syeda Mar 13, 2022
b6b6336
Update svd_tutorial_matlab.md
Atika-Syeda Mar 13, 2022
6be4a3a
Update svd_tutorial_matlab.md
Atika-Syeda Mar 13, 2022
a44d481
Create svd_python_tutorial.md
Atika-Syeda Mar 13, 2022
511ad03
Create pose_tracking_tutorial.md
Atika-Syeda Mar 13, 2022
775300b
example face view images
Atika-Syeda Mar 13, 2022
611da94
Update README.md
Atika-Syeda Mar 13, 2022
57dc98a
Update README.md
Atika-Syeda Mar 13, 2022
997db40
Update README.md
Atika-Syeda Mar 13, 2022
0dd5b0c
Update README.md
Atika-Syeda Mar 13, 2022
b0c8fcb
place images
Atika-Syeda Mar 13, 2022
5d8e764
readme update
Atika-Syeda Mar 13, 2022
03353b0
readme update
Atika-Syeda Mar 13, 2022
8cc620c
readme fix links
Atika-Syeda Mar 13, 2022
20e807c
bug fix: svd process
Atika-Syeda Mar 13, 2022
b5f9fa8
tracker gif
Atika-Syeda Mar 13, 2022
329ca66
removing hdbscan
Atika-Syeda Mar 14, 2022
c9ef624
pose gui readme
Atika-Syeda Mar 14, 2022
4 changes: 4 additions & 0 deletions .gitignore
@@ -29,7 +29,11 @@ suite2p.egg-info/
build/
FaceMap/ops_user.npy
*.ipynb
*.pth
*.pt
*.gif

#
# =========================
# Operating System Files
# =========================
302 changes: 49 additions & 253 deletions README.md

Large diffs are not rendered by default.

75 changes: 75 additions & 0 deletions docs/installation.md
@@ -0,0 +1,75 @@
# Installation (Python)

This package only supports python 3. We recommend installing python 3 with **[Anaconda](https://www.anaconda.com/download/)**.


### Pose tracker and SVD processing

To use both the pose tracker and SVD processing, run
~~~
pip install git+https://github.com/mouseland/facemap.git
~~~
which installs the latest development version from GitHub.

### Latest released version (from PyPI, SVD processing only)

For the command-line interface (CLI), i.e. the headless version, run:
~~~
pip install facemap
~~~
or, to include the GUI:
~~~
pip install facemap[gui]
~~~

To upgrade Facemap (package [here](https://pypi.org/project/facemap/)), run within the environment:
~~~
pip install facemap --upgrade
~~~

Using the environment.yml file (recommended installation method):

1. Download the `environment.yml` file from the repository or clone the github repository: `git clone https://www.github.com/mouseland/facemap.git`
2. Open an anaconda prompt / command prompt with `conda` for **python 3** in the path
3. Change directory to facemap folder `cd facemap`
4. Run `conda env create -f environment.yml`
5. To activate this new environment, run `conda activate facemap`
6. You should see `(facemap)` on the left side of the terminal line. Now run `python -m facemap` and you're all set.

## Common installation issues

If you have pip issues, there might be some interaction between pre-installed dependencies and the ones Facemap needs. The first thing to try is upgrading pip:
~~~
python -m pip install --upgrade pip
~~~

While running `python -m facemap`, if you receive the error `No module named PyQt5.sip`, try uninstalling and reinstalling pyqt5:
~~~
pip uninstall pyqt5 pyqt5-tools
pip install pyqt5 pyqt5-tools pyqt5.sip
~~~

If you are on Yosemite Mac OS, PyQt doesn't work, and you won't be able to install Facemap. More recent versions of Mac OS are fine.

The software has been heavily tested on Ubuntu 18.04, and less well tested on Windows 10 and Mac OS. Please post an issue if you have installation problems.

### Python dependencies

Facemap (Python) relies on these awesome packages:
- [pyqtgraph](http://pyqtgraph.org/)
- [PyQt5](http://pyqt.sourceforge.net/Docs/PyQt5/)
- [numpy](http://www.numpy.org/) (>=1.13.0)
- [scipy](https://www.scipy.org/)
- [opencv](https://opencv.org/)
- [numba](http://numba.pydata.org/numba-doc/latest/user/5minguide.html)
- [natsort](https://natsort.readthedocs.io/en/master/)
- [PyTorch](https://pytorch.org)
- [Matplotlib](https://matplotlib.org)
- [tqdm](https://tqdm.github.io)
- [pandas](https://pandas.pydata.org)
- [UMAP](https://umap-learn.readthedocs.io/en/latest/)


# Installation (MATLAB)

The MATLAB version supports SVD processing only and does not include the pose tracker. The package can be downloaded/cloned from GitHub (no install required). It works in MATLAB 2014b and above - please submit issues if it's not working. The Image Processing Toolbox is necessary to use the GUI. For GPU functionality, the Parallel Computing Toolbox is required. If you don't have the Parallel Computing Toolbox, uncheck the box next to "use GPU" in the GUI before processing.
1 change: 1 addition & 0 deletions docs/pose_tracking_cli_tutorial.md
@@ -0,0 +1 @@
# Pose tracking **(CLI)**
34 changes: 34 additions & 0 deletions docs/pose_tracking_gui_tutorial.md
@@ -0,0 +1,34 @@
# Pose tracking **(GUI)** :mouse:

<img src="../figs/tracker.gif" width="100%" height="500" title="Tracker" alt="tracker" align="middle" vspace="10">

The latest Python version is integrated with the Facemap network, which tracks 14 distinct keypoints on the mouse face plus an additional keypoint for the paw. The keypoints can be tracked from different camera views (some examples are shown below).

<p float="middle">
<img src="../figs/mouse_face1_keypoints.png" width="310" height="290" title="View 1" alt="view1" align="left" vspace = "10" hspace="30" style="border: 0.5px solid white" />
<img src="../figs/mouse_face0_keypoints.png" width="310" height="290" title="View 2" alt="view2" align="right" vspace="10" style="border: 0.5px solid white">
</p>

To perform pose tracking in the GUI after following the [installation instructions](installation.md), proceed with the following steps:

1. Load video
- Select `File` from the menu bar
- To process a single video, select `Load single movie file`
- Alternatively, to process multiple videos, select `Open folder of movies` and then select the files you want to process. Please note that multiple videos are processed sequentially.
2. Select output folder
- Use the file menu to `Set output folder`.
- The processed keypoints file will be saved in the selected output folder with an extension of `.h5` and corresponding metadata file with extension `.pkl`.
3. Choose processing options
- Check at least one of the following boxes:
- `Keypoints` for face pose tracking.
- `motSVD` for SVD processing of the difference between consecutive frames (motion energy).
- `movSVD` for SVD processing of raw movie frames.
- Click the `process` button and monitor the progress bar at the bottom of the window for updates.
4. Select ROI/bounding box for face
- Once you hit `process`, a dialog box will appear for selecting a bounding box for the face. The keypoints will be tracked within the selected bounding box. Please ensure that the bounding box is focused on the face so that all the keypoints shown above are visible. See example frames [here](figs/mouse_views.png). Once the bounding box is set, click 'Done' to continue.
- Alternatively, if you wish to use the entire frame, click 'Skip' to continue.
- If a 'Face (pose)' ROI has already been selected using the dropdown menu for ROIs, the bounding box will be set automatically and the keypoints will be tracked within the selected ROI.

The videos will be processed in the order they are listed in the file list, and the output will be saved in the output folder. The gif at the top of this page demonstrates the steps above for tracking keypoints in a video.


130 changes: 130 additions & 0 deletions docs/svd_matlab_tutorial.md
@@ -0,0 +1,130 @@
# *HOW TO GUI* (MATLAB)

To start the GUI, run the command `MovieGUI` in this folder. The following window should appear. After you click an ROI button and draw an area, you must **double-click** inside the drawn box to confirm it. To compute the SVD across multiple simultaneously acquired videos, use the "multivideo SVD" options to draw ROIs on each video one at a time.

<div align="center">
<img src="../figs/GUIscreenshot.png" width="80%" alt="gui screenshot" >
</div>

The default starting folder is set at line 59 of MovieGUI.m (`h.filepath`).

#### File loading structure

If you choose a folder instead of a single file, it will assemble a list of all video files in that folder and also all videos one folder down. The MATLAB GUI will ask *"would you like to process all movies?"*. If you say no, a list of movies to choose from will appear. By default, the python version shows you a list of movies; if you choose no movies in the python version, it is assumed you want to process ALL of them.

#### Processing movies captured simultaneously (multiple camera setups)

Both GUIs will then ask *"are you processing multiple videos taken simultaneously?"*. If you say yes, the script checks whether the **FIRST FOUR** letters of the filenames vary across movies. If the first four letters of two movies are the same, the GUI assumes they were acquired *sequentially*, not *simultaneously*.

Example file list:
+ cam1_G7c1_1.avi
+ cam1_G7c1_2.avi
+ cam2_G7c1_1.avi
+ cam2_G7c1_2.avi
+ cam3_G7c1_1.avi
+ cam3_G7c1_2.avi

*"are you processing multiple videos taken simultaneously?"* ANSWER: Yes

Then the GUIs assume {cam1_G7c1_1.avi, cam2_G7c1_1.avi, cam3_G7c1_1.avi} were acquired simultaneously and {cam1_G7c1_2.avi, cam2_G7c1_2.avi, cam3_G7c1_2.avi} were acquired simultaneously. They will be processed in alphabetical order (1 before 2) and the results from the videos will be concatenated in time. If one of these files was missing, then the GUI will error and you will have to choose file folders again. Also you will get errors if the files acquired at the same time aren't the same frame length (e.g. {cam1_G7c1_1.avi, cam2_G7c1_1.avi, cam3_G7c1_1.avi} should all have the same number of frames).
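The grouping rule above can be sketched in a few lines of Python. This is a hedged illustration, not the actual facemap/MATLAB code: `group_simultaneous` is a hypothetical helper that treats everything after the first four letters as the "time" suffix, so files sharing a suffix are grouped as simultaneous views.

```python
from collections import defaultdict

def group_simultaneous(files):
    """Group videos whose first four letters differ (simultaneous views);
    the remainder of the filename orders the groups in time."""
    groups = defaultdict(list)
    for f in sorted(files):
        groups[f[4:]].append(f)  # same suffix => acquired simultaneously
    return [groups[k] for k in sorted(groups)]
```

Applied to the example file list, this yields the two groups described above.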

Note: if you have many simultaneous videos / overall pixels (e.g. 2000 x 2000) you will need around 32GB of RAM to compute the full SVD motion masks.

After the file choosing process is over, you will see all the movies in the drop down menu (by filename). You can switch between them and inspect how well an ROI works for each of the movies.

### ROI types

#### Pupil computation

The minimum pixel value is subtracted from the ROI. Use the saturation bar to reduce the background of the eye. The algorithm zeros out any pixels less than the saturation level (I recommend a *very* low value - so most pixels are white in the GUI).

Next it finds the pixel with the largest magnitude. It draws a box around that area (1/2 the size of the ROI) and then finds the center-of-mass of that region. It then centers the box on that area. It fits a multivariate gaussian to the pixels in the box using maximum likelihood (see [pupil.py](facemap/pupil.py) or [fitMVGaus.m](matlab/utils/fitMVGaus.m)).

After a Gaussian is fit, it zeros out pixels whose squared distance from the center (normalized by the standard deviation of the Gaussian fit) is greater than 2 * sigma^2 where sigma is set by the user in the GUI (default sigma = 2.5). It now performs the fit again with these points erased, and repeats this process 4 more times. The pupil is then defined as an ellipse sigma standard deviations away from the center-of-mass of the gaussian. This is plotted with '+' around the ellipse and with one '+' at the center.
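A minimal NumPy sketch of this iterative fit follows. The function name and defaults are hypothetical; the real implementations are [pupil.py](facemap/pupil.py) and [fitMVGaus.m](matlab/utils/fitMVGaus.m), which this only approximates (weighted Gaussian fit, then repeated rejection of points beyond the normalized distance threshold).

```python
import numpy as np

def fit_pupil(roi, saturation=30.0, sigma=2.5, n_iter=5):
    """Iteratively fit a weighted 2D Gaussian to bright ROI pixels."""
    img = roi - roi.min()                        # subtract minimum pixel value
    img = np.where(img < saturation, 0.0, img)   # zero pixels below saturation
    ys, xs = np.nonzero(img)
    w = img[ys, xs].astype(float)                # pixel intensities as weights
    pts = np.stack([ys, xs], axis=1).astype(float)
    keep = np.ones(len(w), dtype=bool)
    for _ in range(n_iter):
        # Weighted mean and covariance of the remaining pixels.
        mu = np.average(pts[keep], axis=0, weights=w[keep])
        d = pts[keep] - mu
        cov = (d * w[keep][:, None]).T @ d / w[keep].sum()
        # Squared distance normalized by the Gaussian fit; erase points
        # farther than 2 * sigma**2 and refit on the next iteration.
        dist2 = np.einsum('ij,jk,ik->i', pts - mu, np.linalg.inv(cov), pts - mu)
        keep = dist2 <= 2 * sigma**2
    return mu, cov
```

The returned mean and covariance define the ellipse drawn in the GUI.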

If there are reflections on the mouse's eye, then you can draw ellipses to account for this "corneal reflection" (plotted in black). You can add as many of these per pupil ROI as needed. The algorithm fills in these areas of the image with the predicted values, which allows for smooth transitions between big and small pupils.

<img src="../figs/out.gif" width="80%" alt="pupil gif">

This raw pupil area trace is post-processed (see [smoothPupil.m](pupil/smoothPupil.m)). The trace is median filtered with a window of 30 timeframes. At each timepoint, the difference between the raw trace and the median-filtered trace is computed. If the difference at a given point exceeds half the standard deviation of the raw trace, then the raw value is replaced by the median-filtered value.
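The same post-processing can be sketched in Python (hypothetical function name; assumes SciPy is available, and the actual smoothPupil.m may differ in edge handling):

```python
import numpy as np
from scipy.signal import medfilt

def smooth_pupil(area, win=30):
    """Median-filter the raw pupil area trace, then replace points that
    deviate from the filtered trace by more than half the raw std."""
    win = win + 1 - win % 2                 # medfilt requires an odd window
    filtered = medfilt(area, kernel_size=win)
    bad = np.abs(area - filtered) > 0.5 * np.std(area)
    out = area.copy()
    out[bad] = filtered[bad]
    return out
```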

![pupil](../figs/pupilfilter.png?raw=true "pupil filtering")

#### Blink computation

You may want to ignore frames in which the animal is blinking if you are looking at pupil size. The blink area is the number of pixels above the saturation level that you set (all non-white pixels).
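The blink signal is therefore a simple pixel count per frame; a one-line sketch (hypothetical helper name):

```python
import numpy as np

def blink_area(roi, saturation):
    """Number of pixels in the eye ROI above the saturation level."""
    return int((roi > saturation).sum())
```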

#### Motion SVD

The motion SVDs (small ROIs / multivideo) are computed on the movie downsampled in space by the spatial downsampling input box in the GUI (default 4 pixels). Note the saturation set in this window is NOT used for any processing.

The motion *M* is defined as the abs(current_frame - previous_frame), and the average motion energy across frames is computed using a subset of frames (*avgmot*) (at least 1000 frames - set at line 45 in [subsampledMean.m](matlab/subsampledMean.m) or line 183 in [process.py](facemap/process.py)). Then the singular vectors of the motion energy are computed on chunks of data, also from a subset of frames (15 chunks of 1000 frames each). Let *F* be the chunk of frames [pixels x time]. Then
```
uMot = [];
for j = 1:nchunks
M = abs(diff(F,1,2));
[u,~,~] = svd(M - avgmot);
uMot = cat(2, uMot, u);
end
[uMot,~,~] = svd(uMot);
uMotMask = normc(uMot(:, 1:500)); % keep 500 components
```
*uMotMask* are the motion masks that are then projected onto the video at all timepoints (done in chunks of size *nt*=500):
```
for j = 1:nchunks
M = abs(diff(F,1,2));
motSVD0 = (M - avgmot)' * uMotMask;
motSVD((j-1)*nt + [1:nt],:) = motSVD0;
end
```
Example motion masks *uMotMask* and traces *motSVD*:

<img src="../figs/exsvds.png" width="50%" alt="example SVDs">

We found that these extracted singular vectors explained up to half of the total explainable variance in neural activity in visual cortex and in other forebrain areas. See our [paper](https://science.sciencemag.org/content/364/6437/eaav7893) for more details.

In the python version, we also compute the average of *M* across all pixels in each motion ROI and that is returned as the **motion**. The first **motion** field is non-empty if "multivideo SVD" is on, and in that case it is the average motion energy across all pixels in all views.
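For a single chunk, the motion-SVD steps above translate to NumPy roughly as follows. This is a simplified sketch (one chunk, hypothetical function name), not the actual [process.py](facemap/process.py) code, which accumulates singular vectors across chunks before the final SVD:

```python
import numpy as np

def motion_svd(F, avgmot, n_comp=500):
    """F is one chunk [pixels x time]; avgmot is the average motion
    energy per pixel. Returns masks, projections, and mean motion."""
    M = np.abs(np.diff(F, axis=1))           # motion energy between frames
    u, _, _ = np.linalg.svd(M - avgmot[:, None], full_matrices=False)
    mask = u[:, :n_comp]
    mask = mask / np.linalg.norm(mask, axis=0, keepdims=True)  # ~ normc
    motsvd = (M - avgmot[:, None]).T @ mask  # project motion onto masks
    motion = M.mean(axis=0)                  # avg motion energy per frame
    return mask, motsvd, motion
```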

#### Running computation

The phase-correlation between consecutive frames (in running ROI) are computed in the fourier domain (see [running.py](../facemap/running.py) or [processRunning.m](../matlab/running/processRunning.m)). The XY position of maximal correlation gives the amount of shift between the two consecutive frames. Depending on how fast the movement is frame-to-frame you may want at least a 50x50 pixel ROI to compute this.
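A minimal sketch of phase correlation between two frames (hypothetical helper; the actual running.py/processRunning.m additionally smooth and mask the correlation):

```python
import numpy as np

def phase_corr_shift(frame1, frame2):
    """Return the (dy, dx) shift of frame1 relative to frame2 from the
    peak of the phase correlation computed in the Fourier domain."""
    f1, f2 = np.fft.fft2(frame1), np.fft.fft2(frame2)
    cross = f1 * np.conj(f2)
    cross = cross / (np.abs(cross) + 1e-10)      # keep phase only
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the frame to negative values.
    if dy > frame1.shape[0] // 2:
        dy -= frame1.shape[0]
    if dx > frame1.shape[1] // 2:
        dx -= frame1.shape[1]
    return dy, dx
```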

#### Multivideo SVD ROIs

You can draw areas to be included and excluded in the multivideo SVD (or single video if you only have one view). The buttons are "area to keep" and "area to exclude" and will draw blue and red boxes respectively. The union of all pixels in "areas to include" are used, excluding any pixels that intersect this union from "areas to exclude" (you can toggle between viewing the boxes and viewing the included pixels using the "Show areas" checkbox, see example below).

<img src="../figs/incexcareas.png" width="60%" alt="example areas">

The motion energy is then computed from these non-red pixels.
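The include/exclude logic can be sketched as a boolean mask over the frame. Function name is hypothetical; boxes use the [y0 x0 Ly Lx] ROI convention given in the output section below:

```python
import numpy as np

def multivideo_pixels(shape, keep_boxes, exclude_boxes):
    """Union of 'area to keep' boxes minus any 'area to exclude' boxes."""
    keep = np.zeros(shape, dtype=bool)
    for y0, x0, ly, lx in keep_boxes:
        keep[y0:y0 + ly, x0:x0 + lx] = True      # blue boxes: include
    for y0, x0, ly, lx in exclude_boxes:
        keep[y0:y0 + ly, x0:x0 + lx] = False     # red boxes: exclude
    return keep
```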

### Processed output

The GUIs create one file for all videos (saved in the current folder); the processed mat file is named "videofile_proc.mat".

**MATLAB output**:
- **nX**,**nY**: cell arrays of number of pixels in X and Y in each video taken simultaneously
- **sc**: spatial downsampling constant used
- **ROI**: [# of videos x # of areas] - areas to be included for multivideo SVD (in downsampled reference)
- **eROI**: [# of videos x # of areas] - areas to be excluded from multivideo SVD (in downsampled reference)
- **locROI**: location of small ROIs (in order running, ROI1, ROI2, ROI3, pupil1, pupil2); in downsampled reference
- **ROIfile**: in which movie is the small ROI
- **plotROIs**: which ROIs are being processed (these are the ones shown on the frame in the GUI)
- **files**: all the files you processed together
- **npix**: array of number of pixels from each video used for multivideo SVD
- **tpix**: array of number of pixels in each view that was used for SVD processing
- **wpix**: cell array of which pixels were used from each video for multivideo SVD
- **avgframe**: [sum(tpix) x 1] average frame across videos computed on a subset of frames
- **avgmotion**: [sum(tpix) x 1] average motion energy across videos computed on a subset of frames
- **motSVD**: cell array of motion SVDs [components x time] (in order: multivideo, ROI1, ROI2, ROI3)
- **uMotMask**: cell array of motion masks [pixels x components]
- **runSpeed**: 2D running speed computed using phase correlation [time x 2]
- **pupil**: structure of size 2 (pupil1 and pupil2) with 3 fields: area, area_raw, and com
- **thres**: pupil sigma used
- **saturation**: saturation levels (array in order running, ROI1, ROI2, ROI3, pupil1, pupil2); only saturation levels for pupil1 and pupil2 are used in the processing, others are just for viewing ROIs

An ROI is [1x4]: [y0 x0 Ly Lx].

#### Motion SVD Masks in MATLAB

Use the script [plotSVDmasks.m](../figs/plotSVDmasks.m) to easily view motion masks from the multivideo SVD. The motion masks from the smaller ROIs have been reshaped to be [xpixels x ypixels x components].
