Update readthedocs #113

Merged
merged 4 commits into from
Jul 21, 2023
2 changes: 1 addition & 1 deletion docs/conf.py
@@ -24,7 +24,7 @@
author = "Carsen Stringer & Atika Syeda & Renee Tung"

# The full version, including alpha/beta/rc tags
release = "1.0.0-rc1"
release = "1.0.1"


# -- General configuration ---------------------------------------------------
21 changes: 21 additions & 0 deletions docs/gui.rst
@@ -0,0 +1,21 @@
GUI
-----

Starting the GUI
~~~~~~~~~~~~~~~~~~~~~~~

The quickest way to start the GUI is to run the following in a command line terminal:
::

python -m facemap

Using the GUI
~~~~~~~~~~~~~~~~~~~~~~~
The GUI can be used for processing keypoints and computing the SVD of mouse behavioral videos. It can also be used to predict neural activity from the behavioral data. For more details on each feature, see the following tutorials:

.. toctree::
:maxdepth: 3

pose_tracking_gui_tutorial
roi_proc
neural_activity_prediction_tutorial
13 changes: 9 additions & 4 deletions docs/index.rst
@@ -1,17 +1,22 @@
Facemap
Facemap

.. figure:: https://github.com/MouseLand/facemap/blob/main/facemap/mouse.png
:alt: facemap

===================================

Facemap is a framework for predicting neural activity from mouse orofacial movements. It includes a pose estimation model for tracking distinct keypoints on the mouse face, a neural network model for predicting neural activity using the pose estimates, and can also be used to compute the singular value decomposition (SVD) of behavioral videos.

For more details, please see our `paper <https://www.biorxiv.org/content/10.1101/2022.11.03.515121v1>`__ and `twitter thread <https://twitter.com/Atika_Ibrahim/status/1588885329951367168?s=20>`__.

.. toctree::
:maxdepth: 3
:caption: Basics:

installation
gui
inputs
pose_tracking_gui_tutorial
roi_proc
outputs
neural_activity_prediction_tutorial

.. toctree::
:caption: Tutorials
4 changes: 2 additions & 2 deletions docs/inputs.rst
@@ -43,7 +43,7 @@ Load a video or a set of videos and draw your ROIs and choose your processing se
Data acquisition info
~~~~~~~~~~~~~~~~~~~~~~~~~

IR ILLUMINATION
IR illumination
---------------------

For recording in darkness we use `IR
@@ -67,7 +67,7 @@ tube <https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=4109&pn=SM1V10#3
and another `lens
tube <https://www.thorlabs.com/thorproduct.cfm?partnumber=SM1L10>`__.

CAMERAS
Cameras
---------------------

We use `ptgrey
2 changes: 1 addition & 1 deletion docs/installation.md
@@ -1,6 +1,6 @@
# Installation (Python)

This package only supports python 3. We recommend installing python 3 with **[Anaconda](https://www.anaconda.com/download/)**.s
This package only supports python 3. We recommend installing python 3 with **[Anaconda](https://www.anaconda.com/download/)**.

## Common installation issues

79 changes: 53 additions & 26 deletions docs/outputs.rst
@@ -1,50 +1,43 @@
Outputs
=======================
========

ROI and SVD processing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~
SVD processing saves two outputs: a \*.npy file and a \*.mat file. The output file contains the following variables:

- **filenames**: A 2D list of video filenames - each inner list contains videos recorded simultaneously, while sequential videos are stored as separate lists

Processed output
~~~~~~~~~~~~~~~~~
- **Ly**, **Lx**: list of frame length in y-dim (Ly) and x-dim (Lx) for each video taken simultaneously

The GUI creates one file for all videos (saved in the current folder); the
npy file is named “videofile_proc.npy” and the mat file is named
“videofile_proc.mat”.

- **filenames**: list of lists of video filenames - each list are the videos taken simultaneously

- **Ly**, **Lx**: list of number of pixels in Y (Ly) and X (Lx) for each video taken simultaneously

- **sbin**: spatial bin size for motion SVDs
- **sbin**: spatial bin size for SVDs

- **Lybin**, **Lxbin**: list of number of pixels binned by sbin in Y (Ly) and X (Lx) for each video taken simultaneously

- **sybin**, **sxbin**: coordinates of multivideo (for plotting/reshaping ONLY)

- **LYbin**, **LXbin**: full-size of all videos embedded in rectangle (binned)

- **fullSVD**: whether or not “multivideo SVD” is computed
- **fullSVD**: bool flag indicating whether “multivideo SVD” is computed

- **save_mat**: whether or not to save proc as `\*.mat` file
- **save_mat**: bool flag indicating whether to save proc as `\*.mat` file

- **avgframe**: list of average frames for each video from a subset of frames (binned by sbin)

- **avgframe_reshape**: average frame reshaped to be y-pixels x x-pixels
- **avgframe_reshape**: average frame reshaped to size y-pixels by x-pixels

- **avgmotion**: list of average motions for each video from a subset of frames (binned by sbin)
- **avgmotion**: list of average motion computed for each video from a subset of frames (binned by sbin)

- **avgmotion_reshape**: average motion reshaped to be y-pixels x x-pixels
- **avgmotion_reshape**: average motion reshaped to size y-pixels by x-pixels

- **iframes**: array containing number of frames in each consecutive video
- **iframes**: an array containing the number of frames in each consecutive video

- **motion**: list of absolute motion energies across time - first is “multivideo” motion energy (empty if not computed)

- **motSVD**: list of motion SVDs - first is “multivideo SVD” (empty if not computed) - each is nframes x components
- **motSVD**: list of motion SVDs - first is “multivideo SVD” (empty if not computed) - each is of size (number of frames) x (number of components = 500)

- **motMask**: list of motion masks for each motion SVD - each motMask is pixels x components

- **motMask_reshape**: motion masks reshaped to be y-pixels x x-pixels x components
- **motMask_reshape**: motion masks reshaped to: y-pixels x x-pixels x components

- **motSv**: array containing singular values for motSVD

@@ -81,15 +74,49 @@ npy file has name “videofile_proc.npy” and the mat file has name
Loading outputs
''''''''''''''''''''

Note this is a dict, e.g. to load in python:
The saved \*.npy file is a dict which can be loaded in python as follows:

::

import numpy as np
proc = np.load('cam1_proc.npy', allow_pickle=True).item()
proc = np.load('filename_proc.npy', allow_pickle=True).item()
print(proc.keys())
motion = proc['motion']

These \*_proc.npy\* files can be loaded into the GUI (and will
automatically be loaded after processing). The checkboxes in the lower
left allow you to view different traces from the processing.
These \*_proc.npy\* files can be loaded in the GUI (and are
automatically loaded after processing). The checkboxes on the lower
left panel of the GUI can be used to toggle the display of different traces/variables.
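
For convenience, the optional \*.mat output can be read in python as well. A minimal sketch, assuming scipy is installed (the filename is a placeholder):

::

    import scipy.io as sio

    # keys mirror the variables listed above (motSVD, motMask, avgframe, ...)
    proc = sio.loadmat('videofile_proc.mat')
    print(proc.keys())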

Keypoints processing
~~~~~~~~~~~~~~~~~~~~

Keypoints processing saves two outputs: a \*.h5 file and a \*_metadata.pkl file.

- \*.h5 file: contains keypoints stored as a 3D array of shape (3, number of bodyparts, number of frames). The first dimension of size 3 is in the order: (x, y, likelihood). For more details on using/loading the \*.h5 file in python, see this `tutorial <https://github.com/MouseLand/facemap/blob/main/notebooks/load_visualize_keypoints.ipynb>`__.
- \*_metadata.pkl file: contains a dictionary consisting of the following variables:

  - batch_size: batch size used for inference
  - image_size: frame size
  - bbox: bounding box for cropping the video [x1, x2, y1, y2]
  - total_frames: number of frames
  - bodyparts: names of bodyparts
  - inference_speed: processing speed

To load the pkl file in python, use the following code:

::

import pickle
with open('filename_metadata.pkl', 'rb') as f:
    metadata = pickle.load(f)
print(metadata.keys())
print(metadata['bodyparts'])
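
To inspect the \*.h5 keypoints file directly, here is a minimal sketch using h5py; the exact group/dataset names are an assumption, so print the tree first (the linked tutorial shows the full layout):

::

    import h5py

    # list every group/dataset before indexing, since key names can vary
    with h5py.File('filename.h5', 'r') as f:
        f.visititems(lambda name, obj: print(name, obj))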


Neural activity prediction output
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The output of neural activity prediction is saved in a \*.npy file and optionally a \*.mat file. The output contains a dictionary with the following keys:

- predictions: a 2D array containing the predicted neural activity of shape (number of features x time)
- test_indices: a list of indices indicating sections of data used as test data for computing variance explained by the model
- variance_explained: variance explained by the model for test data
- plot_extent: extent of the plot used for plotting the predicted neural activity in the order [x1, y1, x2, y2]
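
This file can be loaded like the other \*.npy outputs above. A minimal sketch (the filename is a placeholder):

::

    import numpy as np

    pred = np.load('filename_neural_pred.npy', allow_pickle=True).item()
    print(pred['variance_explained'])
    # predictions is (number of features x time); plot_extent is [x1, y1, x2, y2]
    print(pred['predictions'].shape, pred['plot_extent'])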


3 changes: 0 additions & 3 deletions docs/pose_tracking_cli_tutorial.md

This file was deleted.

2 changes: 1 addition & 1 deletion docs/pose_tracking_gui_tutorial.md
@@ -24,7 +24,7 @@ Follow the steps below to generate keypoints for your videos:
- Check `Keypoints` for pose tracking.
- Click `process`.
3. Set ROI/bounding box for face region
- A dialog box for selecting a bounding box for the face will appear. Drag the red rectangle to select the region of interest on the frame where the keypoints will be tracked. Please ensure that the bounding box is focused on the face where all the keypoints will be visible. See example frames [here](figs/mouse_views.png). If a 'Face (pose)' ROI has already been added then this step will be skipped.
- A dialog box for selecting a bounding box for the face will appear. Drag the red rectangle to select the region of interest on the frame where the keypoints will be tracked. Please ensure that the bounding box is focused on the face where all the keypoints will be visible. See example frames [here](https://github.com/MouseLand/facemap/blob/main/figs/mouse_views.png). If a 'Face (pose)' ROI has already been added then this step will be skipped.
- Click `Done` to process video. Alternatively, click `Skip` to use the entire frame region. Monitor progress bar at the bottom of the window for updates.
4. View keypoints
- Keypoints will be automatically loaded after processing.
27 changes: 8 additions & 19 deletions docs/pose_tracking_gui_tutorial.rst
@@ -1,18 +1,10 @@
Pose tracking **(GUI)** :mouse:
===============================
Pose tracking **(GUI)**
========================

The latest python version is integrated with the Facemap network for
tracking 14 distinct keypoints on the mouse face and an additional point for
tracking paw. The keypoints can be tracked from different camera views
(some examples shown below).
tracking the paw. The keypoints can be tracked from different camera views (see `examples <https://github.com/MouseLand/facemap/blob/dev/figs/mouse_views.png>`__).

.. raw:: html

<p float="middle">

.. raw:: html

</p>

Generate keypoints
------------------
@@ -33,20 +25,21 @@ Follow the steps below to generate keypoints for your videos:
- Use the file menu to ``Set output folder``.
- The processed keypoints (``*.h5``) and metadata (``*.pkl``) will
be saved in the selected output folder or folder containing the
video (default).
video (by default).

2. Process video(s)

- Check ``Keypoints`` for pose tracking.
- Click ``process``.
- Note: The first time facemap runs keypoint processing, it downloads the latest available trained model weights from our website.

3. Set ROI/bounding box for face region

- A dialog box for selecting a bounding box for the face will
appear. Drag the red rectangle to select the region of interest on the
frame where the keypoints will be tracked. Please ensure that the
bounding box is focused on the face where all the keypoints will be
visible. See example frames `here <figs/mouse_views.png>`__. If a
visible. See example frames `here <https://github.com/MouseLand/facemap/blob/main/figs/mouse_views.png>`__. If a
‘Face (pose)’ ROI has already been added then this step will be
skipped.
- Click ``Done`` to process video. Alternatively, click ``Skip`` to
@@ -62,7 +55,7 @@ Follow the steps below to generate keypoints for your videos:
Visualize keypoints
-------------------

To load keypoints (*.h5) for a video generated using Facemap or other
To load keypoints (\*.h5) for a video generated using Facemap or other
software in the same format (such as DeepLabCut and SLEAP), follow the
steps below:

@@ -75,7 +68,7 @@ steps below:

- Select ``Pose`` from the menu bar
- Select ``Load keypoints``
- Select the keypoints (*.h5) file
- Select the keypoints (\*.h5) file

3. View keypoints

@@ -85,10 +78,6 @@ steps below:
keypoints with lower confidence estimates. Higher threshold will
show keypoints with higher confidence estimates.

Note: this feature is currently only supported for single video. Please
see `CLI instructions <pose_tracking_cli_tutorial.md>`__ for viewing
keypoints for multiple videos.

Finetune model to refine keypoints for a video
----------------------------------------------

4 changes: 2 additions & 2 deletions docs/roi_proc.rst
@@ -1,5 +1,5 @@
ROI and SVD processing
==============================
SVD processing and ROIs
========================

Choose a type of ROI to add and then click “add ROI” to add it to the
view. The pixels in the ROI will show up in the right window (with
1 change: 1 addition & 0 deletions facemap/__init__.py
@@ -2,3 +2,4 @@
Copyright © 2023 Howard Hughes Medical Institute, Authored by Carsen Stringer and Atika Syeda.
"""
name = "facemap"
from facemap.version import version, version_str
6 changes: 5 additions & 1 deletion facemap/__main__.py
@@ -9,7 +9,7 @@

from facemap import process
from facemap.gui import gui

from facemap import version_str

def tic():
    return time.time()
@@ -25,6 +25,7 @@ def main():


if __name__ == "__main__":

parser = argparse.ArgumentParser(description="Movie files")
parser.add_argument("--ops", default=[], type=str, help="options")
parser.add_argument(
@@ -88,6 +89,9 @@ def main():
parser.set_defaults(autoload_proc=True)

args = parser.parse_args()

print(version_str)

ops = {}
if len(args.ops) > 0:
ops = np.load(args.ops)
5 changes: 3 additions & 2 deletions facemap/gui/help_windows.py
@@ -20,6 +20,7 @@
QVBoxLayout,
QWidget,
)
from ..version import version_str


class MainWindowHelp(QDialog):
@@ -165,12 +166,12 @@ def __init__(self, parent=None, window_size=None):
<b>License:</b> GPLv3
</p>
<p>
<b>Version:</b> 0.2.0
<b>Version:</b> {version}
</p>
<p>
Visit our <a href="https://github.com/MouseLand/FaceMap"> github page </a> for more information.
</p>
"""
""".format(version=version_str)
text = QLabel(text, self)
text.setStyleSheet(
"font-size: 12pt; font-family: Arial; color: white; text-align: center; "
Binary file modified facemap/gui/ops_user.npy
Binary file not shown.
3 changes: 0 additions & 3 deletions facemap/pose/pose_helper_functions.py
@@ -4,7 +4,6 @@
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Import packages ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
import numpy as np

print("numpy version: %s" % np.__version__)
import random
from platform import python_version

@@ -17,8 +16,6 @@
from PyQt5.QtWidgets import QDialog, QPushButton
from scipy.ndimage import gaussian_filter

print("python version:", python_version())

# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Global variables~~~~~~~~~~~~~~~~~~~~~~~~~~~~~`
N_FACTOR = 2**4 // (2**2)
SIGMA = 3 * 4 / N_FACTOR
21 changes: 21 additions & 0 deletions facemap/version.py
@@ -0,0 +1,21 @@
"""
Copyright © 2023 Howard Hughes Medical Institute, Authored by Carsen Stringer and Atika Syeda.
"""

from importlib.metadata import PackageNotFoundError, version
import sys
from platform import python_version
import torch, numpy

try:
    version = version("facemap")
except PackageNotFoundError:
    version = 'unknown'

version_str = f"""
facemap version: \t{version}
platform: \t{sys.platform}
python version: \t{python_version()}
torch version: \t{torch.__version__}
numpy version: \t{numpy.__version__}
"""