Merge pull request #88 from MouseLand/main
Update dev
Atika-Syeda authored Feb 16, 2023
2 parents e0c553c + be616c5 commit 352da94
Showing 3 changed files with 21 additions and 62 deletions.
26 changes: 13 additions & 13 deletions README.md
@@ -11,6 +11,8 @@

Facemap is a framework for predicting neural activity from mouse orofacial movements. It includes a pose estimation model for tracking distinct keypoints on the mouse face and a neural network model for predicting neural activity from the pose estimates, and it can also be used to compute the singular value decomposition (SVD) of behavioral videos.

Please find the detailed documentation at **[facemap.readthedocs.io](https://facemap.readthedocs.io/en/latest/index.html)**.

To learn about Facemap, read the [paper](https://www.biorxiv.org/content/10.1101/2022.11.03.515121v1) or check out the tweet [thread](https://twitter.com/Atika_Ibrahim/status/1588885329951367168?s=20&t=AhE3vBTnCvW36QiTyhu0qQ). For support, please open an [issue](https://github.com/MouseLand/facemap/issues).

- For the latest released version (from PyPI), which includes SVD processing only, run `pip install facemap` for the headless version or `pip install facemap[gui]` for the GUI. Note: the latest tracker and neural model are not yet available via `pip install facemap`; instead, install with `pip install git+https://github.com/mouseland/facemap.git`
@@ -113,19 +115,7 @@ For more details on using the tracker, please refer to the [GUI Instructions](do
Facemap aims to provide a simple and easy-to-use tool for tracking mouse orofacial movements. The tracker's performance on new datasets could be further improved by expanding our training set. You can contribute to the model by sharing videos/frames at the following email address(es): `asyeda1[at]jh.edu` or `stringerc[at]janelia.hhmi.org`.
- # II. Neural activity prediction
- Facemap includes a deep neural network encoding model for predicting neural activity or principal components of neural activity from mouse orofacial pose estimates extracted using the tracker or SVDs.
- The encoding model used for prediction is described as follows:
- <p float="middle">
- <img src="figs/encoding_model.png" width="70%" height="300" title="View 1" alt="view1" align="center" vspace = "10" hspace="30" style="border: 0.5px solid white" />
- </p>
- Please see neural activity prediction [tutorial](docs/neural_activity_prediction_tutorial.md) for more details.
- # III. SVD processing
+ # II. SVD processing
Facemap provides options for singular value decomposition (SVD) of single- and multi-camera videos. SVD analysis can be performed on the static frames, called movie SVD (`movSVD`), to extract the spatial components of the video, or on the difference between consecutive frames, called motion SVD (`motSVD`), to extract its temporal components. The first 500 principal components from the SVD analysis are saved as output along with other variables. For more details, see the [python tutorial](docs/svd_python_tutorial.md). The process for SVD analysis is as follows:
1. Load video. (Optional) Use the file menu to set output folder.
@@ -144,6 +134,16 @@ python -m facemap
```
The default starting folder is set to wherever you run `python -m facemap`.
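As a rough illustration of the two SVD modes described above, here is a minimal numpy sketch; it is not Facemap's implementation, and the function name, shapes, and component count are invented for this example:

```python
import numpy as np

def video_svd(frames, n_comp=500, motion=False):
    """Sketch of movie vs. motion SVD on a video.

    frames: array of shape (n_frames, height, width).
    motion=False -> movie SVD (movSVD) on the static frames;
    motion=True  -> motion SVD (motSVD) on frame-to-frame differences.
    Illustration only, not Facemap's implementation.
    """
    X = np.diff(frames, axis=0) if motion else frames
    X = X.reshape(X.shape[0], -1).astype(np.float64)
    X = X - X.mean(axis=0)  # center each pixel over time
    # economy SVD: U holds temporal components, Vt spatial components
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    k = min(n_comp, S.size)
    return U[:, :k] * S[:k], Vt[:k]  # temporal projections, spatial masks

# toy video: 20 frames of 8x8 noise
rng = np.random.default_rng(0)
frames = rng.standard_normal((20, 8, 8))
V_mov, masks_mov = video_svd(frames, n_comp=5)
V_mot, masks_mot = video_svd(frames, n_comp=5, motion=True)
print(V_mov.shape, V_mot.shape)  # (20, 5) (19, 5)
```

Note that `motSVD` has one fewer time point than `movSVD` because differencing consumes a frame.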
+ # III. Neural activity prediction
+ Facemap includes a deep neural network encoding model for predicting neural activity, or principal components of neural activity, from mouse orofacial pose estimates extracted using the tracker or from SVDs.
+ The encoding model used for prediction is illustrated below:
+ <p float="middle">
+ <img src="figs/encoding_model.png" width="70%" height="300" title="View 1" alt="view1" align="center" vspace = "10" hspace="30" style="border: 0.5px solid white" />
+ </p>
+ Please see the neural activity prediction [tutorial](docs/neural_activity_prediction_tutorial.md) for more details.
### [*HOW TO GUI* (MATLAB)](docs/svd_matlab_tutorial.md)
45 changes: 2 additions & 43 deletions docs/installation.rst
@@ -1,50 +1,9 @@
Installation
===================================

- This package only supports python 3. We recommend installing python 3 with `Anaconda <https://www.anaconda.com/download/>`_.
+ Please see the GitHub README for full install instructions.


Pose tracker and SVD processing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To use the tracker and SVD processing, follow the instructions below:

1. ``git clone https://github.com/MouseLand/facemap.git``
2. Change directory to facemap folder containing ``environment.yml`` file
3. ``conda env create -f environment.yml``
4. ``conda activate facemap``
5. ``python -m facemap``

This will install and run the latest development version on github.

SVD processing only
~~~~~~~~~~~~~~~~~~~~

Run the following for the command-line interface (CLI), i.e. the headless version:
::

pip install facemap

or the following for using GUI:
::

pip install facemap[gui]


To upgrade Facemap package (https://pypi.org/project/facemap/), within the environment run:
::

pip install facemap --upgrade


Using the environment.yml file (recommended installation method):

1. Download the ``environment.yml`` file from the repository or clone the github repository: ``git clone https://www.github.com/mouseland/facemap.git``
2. Open an anaconda prompt / command prompt with ``conda`` for **python 3** in the path
3. Change directory to facemap folder ``cd facemap``
4. Run ``conda env create -f environment.yml``
5. To activate this new environment, run ``conda activate facemap``
6. You should see ``(facemap)`` on the left side of the terminal line. Now run ``python -m facemap`` and you're all set.

Common installation issues
~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -96,4 +55,4 @@ Facemap python relies on these awesome packages:
MATLAB package installation
~~~~~~~~~~~~~~~~~~~~~~~~~~~

- The matlab version supports SVD processing only and does not include face tracker. The package can be downloaded/cloned from github (no install required). It works in Matlab 2014b and above - please submit issues if it's not working. The Image Processing Toolbox is necessary to use the GUI. For GPU functionality, the Parallel Processing Toolbox is required. If you don't have the Parallel Processing Toolbox, uncheck the box next to "use GPU" in the GUI before processing.
+ The MATLAB version supports SVD processing only and does not include the face tracker. The package can be downloaded/cloned from GitHub (no install required). It works in MATLAB 2014b and above. The Image Processing Toolbox is necessary to use the GUI. For GPU functionality, the Parallel Processing Toolbox is required; if you don't have it, uncheck the box next to "use GPU" in the GUI before processing. Note that this version is no longer supported.
12 changes: 6 additions & 6 deletions facemap/neural_prediction/prediction_utils.py
@@ -147,7 +147,7 @@ def reduced_rank_regression(X, Y, rank=None, lam=0, device=torch.device("cuda"))

# compute inverse square root of matrix
# s, u = eigh(CXX.cpu().numpy())
- u, s = torch.svd(CXX)[:2]
+ u, s = torch.svd_lowrank(CXX, q=rank)[:2]
CXXMH = (u * (s + lam) ** -0.5) @ u.T

# project into prediction space
@@ -156,7 +156,7 @@ def reduced_rank_regression(X, Y, rank=None, lam=0, device=torch.device("cuda"))
# model = PCA(n_components=rank).fit(M)
# c = model.components_.T
# s = model.singular_values_
- s, c = torch.svd(M)[1:]
+ s, c = torch.svd_lowrank(M, q=rank)[1:]
A = M @ c
B = CXXMH @ c
return A, B
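The function being patched here implements reduced-rank regression. As a hedged, self-contained sketch of the same idea (numpy instead of torch; variable names mirror the snippet above, but this is not Facemap's exact code):

```python
import numpy as np

def reduced_rank_regression_np(X, Y, rank, lam=0.0):
    """Illustrative reduced-rank regression: whiten X, regress Y on the
    whitened inputs, then keep only the top `rank` components.
    Mirrors the structure of the torch code above; not Facemap's code."""
    n = X.shape[0]
    CXX = X.T @ X / n                        # input covariance
    CXY = X.T @ Y / n                        # input-output covariance
    u, s, _ = np.linalg.svd(CXX)             # CXX is symmetric PSD
    CXXMH = (u * (s + lam) ** -0.5) @ u.T    # inverse square root of CXX
    M = (CXXMH @ CXY).T                      # outputs vs. whitened inputs
    _, _, vt = np.linalg.svd(M, full_matrices=False)
    c = vt[:rank].T                          # top-rank right singular vectors
    A = M @ c                                # output projection, (n_out, rank)
    B = CXXMH @ c                            # input projection, (n_in, rank)
    return A, B                              # prediction is X @ B @ A.T

# toy check: a rank-2 linear map is recovered by a rank-2 model
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
W = rng.standard_normal((10, 2)) @ rng.standard_normal((2, 6))
Y = X @ W
A, B = reduced_rank_regression_np(X, Y, rank=2)
print(np.allclose(Y, X @ B @ A.T, atol=1e-6))  # True
```

The `torch.svd_lowrank(..., q=rank)` change in the diff computes only the leading components rather than a full SVD, which is cheaper when `rank` is much smaller than the matrix dimensions.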
@@ -200,7 +200,7 @@ def rrr_prediction(
itest: 1D int array (optional, default None)
times in test set
- tbin: int (optional, default 0)
+ tbin: int (optional, default None)
also compute variance explained in bins of tbin
Returns
@@ -226,8 +226,8 @@
if itrain is None and itest is None:
itrain, itest = split_traintest(n_t)
itrain, itest = itrain.flatten(), itest.flatten()
- X = torch.from_numpy(X).to(device, dtype=torch.float64)
- Y = torch.from_numpy(Y).to(device, dtype=torch.float64)
+ X = torch.from_numpy(X).to(device)
+ Y = torch.from_numpy(Y).to(device)
A, B = reduced_rank_regression(
X[itrain], Y[itrain], rank=rank, lam=lam, device=device
)
@@ -254,7 +254,7 @@ def rrr_prediction(
residual = ((Y[itest] - Y_pred_test) ** 2).mean(axis=0)
varexpf[r] = (1 - residual / Y_test_var).cpu().numpy()
varexp[r, 0] = (1 - residual.mean() / Y_test_var.mean()).cpu().numpy()
- if tbin != 0 and tbin > 1:
+ if tbin is not None and tbin > 1:
varexp[r, 1] = (
compute_varexp(
bin1d(Y[itest], tbin).flatten(), bin1d(Y_pred_test, tbin).flatten()
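The `tbin` branch above recomputes variance explained after temporal binning. A minimal sketch of why that helps, using `bin1d_np` and `varexp_np` as invented stand-ins for Facemap's `bin1d` and `compute_varexp` helpers (the real signatures may differ):

```python
import numpy as np

def bin1d_np(x, tbin):
    """Average over non-overlapping bins of length tbin along axis 0
    (stand-in for Facemap's bin1d helper)."""
    n = (x.shape[0] // tbin) * tbin
    return x[:n].reshape(-1, tbin, *x.shape[1:]).mean(axis=1)

def varexp_np(y_true, y_pred):
    """Fraction of variance explained, pooled over all entries
    (stand-in for Facemap's compute_varexp)."""
    residual = ((y_true - y_pred) ** 2).mean()
    return 1 - residual / y_true.var()

# slow "neural" signal; the prediction is corrupted by fast noise
rng = np.random.default_rng(0)
t = np.arange(200)
y_true = np.stack([np.sin(2 * np.pi * t / 40), np.cos(2 * np.pi * t / 50)], axis=1)
y_pred = y_true + 0.5 * rng.standard_normal(y_true.shape)

ve = varexp_np(y_true.flatten(), y_pred.flatten())
# binning averages out the fast noise but keeps the slow signal,
# so variance explained improves in the binned comparison
ve_binned = varexp_np(bin1d_np(y_true, 4).flatten(), bin1d_np(y_pred, 4).flatten())
print(ve < ve_binned)  # True
```

This is why the diff's guard `tbin is not None and tbin > 1` is the natural check once the default becomes `None`: binning only makes sense for bin sizes of at least 2 samples.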
