Minor fixes - docs build, refactor common (#654)
* Restructure for mkdocs

* Minor docstring edits for mkdocs

* Adjust installer docs

* Permit json config

* Update docstrings for mkdocs. Add images

* Minor fixes. Rename publish action. Add changelog notes

* hard wrap changes, markdownlint

* blackify

* Fix version cmd. Minor doc wording

* mkdocstrings require empty inits to identify submodules

* 🤦‍♂️ Publish docs CI/CD `main`->`master`

* Edit `get_part`; Add `merge_fetch`

* Spelling fixes

* Refactor restrict_parts. Adjust mkdocs nav.

* Docs adjust for Merge tables

* Fix `merge_get_part`

* Use hatch for docs version

* Update changelog

* Typo

* Spellcheck config

* WIP: dict/str restrict consistency

* Normalize restriction/classmeth. Add notes on why

* Typos

* Docs publish on tag

* Edit changelog: Add links, patch version bump

* 🧪 Test gh-actions debug

* get-part multi-source flag

* Add mutual exclusivity flag from pos branch

* See details. Notebook work, config overhaul

gitignore: add exclude example config
mkdocs: new notebook names
notebooks: complete revamp for minirec data and more links to docs
init: add new load_config, isort imports
common_lab:
	- adjust to accept names in Last, First format (nwb-compliant)
	- continue to use First Last name structure in database - yes?
common_nwb: use new load_config, change `assert` to `raise`
insert_sessions: permit paths, use file name, use raw dir
storage_dirs: remove redundant funcs for base_dir
settings: implement new base_dir system
	- allows base/raw/etc to be independent
	- defaults to dj.config, then env vars, then sets default rel paths

* WIP: fix failing tests related to base_dir edits

* underscore-prefix Merge. Linter fixes

* See Details. Notebook overhaul

- dj_config: accept base dir as arg, refactor for single responsibility
- mkdocs, installation.md
  - condense installation information to single page
  - reference new notebook
  - remove local and production subpages due to redundancy
- environment and env_position.yml: add install current dir to avoid
  additional step in installation process
- notebooks: rewrite with Docker optional and minirec as demo data
- common_lab: raise error for invalid name
- common_position: get raw dir from settings, not hardcode
- settings.py: Should this be a class with properties?
  - add options for kachery dirs set via same dj_config mechanism
  - add raw_dir helper function

* remove note to self

* WIP notebook edits

* Revise 04_LFP nb

* Reorder/revise notebooks; #609

* Notebook formatting

* Remove old

* jupytext backup note

* Blackify py scripts. Continue config changes

* Refactor common

* WIP: notebooks, plus improved merge_delete_downstream

* WIP: PositionSource add part table

* Refactor trodes position #613

* WIP: Fix Trodes Video

* WIP: Spellcheck. Remove debug params. Remove assigned lambda E713

* WIP: Pass tests. Remove codespell offending link

* WIP: blackify

* Selective fetch from cbroz/master

* Fetch additional file from cbroz1/master to pass CI/CD

* Add RawPos fetch method implementations. Object -> PosObject

* Refactor PosIntervalMap helpers

* Revert typo

* Bugfixes for ripple

* Blackify

* WIP: minor edits

* Add restriction to fetch1_dataframe

* Update Trodes notebook, revise others

* Set pos id default for migration. Rename Trodes params

* Fix typos

* Spelling; Jupytext sync; Blackify

* Edit gitignore for new notebook numbering

* Update changelog and notebooks

* Refactor position helpers

* Minor position_trodes fixes

* Docs fixes

* Minor docs nav rename

* Fix typos

* Config to OOP. And #365

* Fix failing test from prev commit. Also #585

* Fixes from review comments
CBroz1 authored Oct 12, 2023
1 parent fba9ad4 commit 0d29760
Showing 30 changed files with 1,467 additions and 1,116 deletions.
57 changes: 57 additions & 0 deletions docs/README.md
@@ -0,0 +1,57 @@
# Building Docs

## Adding new pages

`mkdocs.yml` is the site configuration file. To add a new page, edit the `nav`
section of this file. New pages should be either:

1. A markdown file in the `docs/` directory.
2. A Jupyter notebook in the `notebooks/` directory.

The remainder of `mkdocs.yml` specifies the site's
[configuration](https://www.mkdocs.org/user-guide/configuration/).
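
For example, a new markdown page can be registered under `nav` roughly as
follows (the page title and file name here are placeholders):

```yaml
nav:
  - Home: index.md
  - Your New Page: your_new_page.md
```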

## Deployment

### GitHub

Whenever a new tag is pushed, GitHub actions will run
`.github/workflows/publish-docs.yml`. Progress can be monitored in the
'Actions' tab within the repo.

Releases should be tagged with `X.Y.Z`. A tag to redeploy docs should use the
current version, with an alpha release suffix, e.g. `X.Y.Za1`.
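
For example, a docs redeploy for an existing release could be triggered by
pushing an alpha-suffixed tag (substitute the real version for the
placeholder):

```console
git tag X.Y.Za1
git push origin X.Y.Za1
```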

To deploy on your own fork without a tag, turn on GitHub Pages in your fork's
settings, set it to serve from the `documentation` branch, and then push to
`test_branch`. This branch is protected on `LorenFrankLab/spyglass`, but not on
forks.

## Testing

To test edits to the site, be sure docs dependencies are installed:

```console
cd /your/path/to/spyglass
pip install .[docs]
```

Then, run the build script:

```console
bash ./docs/build-docs.sh serve
```

Notably, this will make a copy of notebooks in `docs/src/notebooks`. Changes to
the root notebooks directory may not be reflected when rebuilding.

Use a browser to navigate to `localhost:8000/` to inspect the site. For
auto-reload of markdown files during development, use `mkdocs serve -f
./docs/mkdocs.yml`. The `mike` package used in the build script manages
versioning, but does not support dynamic versioning.
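
For instance, run from the repository root (assuming the docs dependencies
above are installed):

```console
mkdocs serve -f ./docs/mkdocs.yml
```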

The following items can be commented out in `mkdocs.yml` to reduce build time:

- `mkdocstrings`: Turns code docstrings to API pages.
- `mkdocs-jupyter`: Generates tutorial pages from notebooks.

To end the process in your console, use `ctrl+c`.
14 changes: 10 additions & 4 deletions docs/build-docs.sh
@@ -7,14 +7,20 @@
# Copy top-level repo files for docs display
cp ./CHANGELOG.md ./docs/src/
cp ./LICENSE ./docs/src/LICENSE.md
cp -r ./notebooks/ ./docs/src/
cp -r ./notebook-images ./docs/src/notebooks
mkdir -p ./docs/src/notebooks
cp ./notebooks/*ipynb ./docs/src/notebooks/
cp -r ./notebook-images ./docs/src/notebooks/
cp -r ./notebook-images ./docs/src/

# Get major version
FULL_VERSION=$(hatch version) # Most recent tag
export MAJOR_VERSION="${FULL_VERSION%.*}"
FULL_VERSION=$(hatch version) # Most recent tag, may include periods
export MAJOR_VERSION="${FULL_VERSION:0:3}" # First 3 chars of tag
echo "$MAJOR_VERSION"

# Get ahead of errors
export JUPYTER_PLATFORM_DIRS=1
# jupyter notebook --generate-config

# Generate site docs
mike deploy "$MAJOR_VERSION" --config ./docs/mkdocs.yml -b documentation

57 changes: 31 additions & 26 deletions docs/mkdocs.yml
@@ -47,30 +47,32 @@ nav:
- Home: index.md
- Installation: installation.md
- Miscellaneous:
- FigURL: misc/figurl_views.md
- Session Groups: misc/session_groups.md
- Insert Data: misc/insert_data.md
- Merge Tables: misc/merge_tables.md
- FigURL: misc/figurl_views.md
- Session Groups: misc/session_groups.md
- Insert Data: misc/insert_data.md
- Merge Tables: misc/merge_tables.md
- Tutorials:
- General:
- Setup: notebooks/00_Setup.ipynb
- Insert Data: notebooks/01_Insert_Data.ipynb
- Data Sync: notebooks/02_Data_Sync.ipynb
- Ephys:
- Spike Sorting: notebooks/10_Spike_Sorting.ipynb
- Curation: notebooks/11_Curation.ipynb
- LFP: notebooks/12_LFP.ipynb
- Theta: notebooks/14_Theta.ipynb
- Position:
- Position Trodes: notebooks/20_Position_Trodes.ipynb
- Position DLC 1: notebooks/21_Position_DLC_1.ipynb
- Position DLC 2: notebooks/22_Position_DLC_2.ipynb
- Linearization: notebooks/24_Linearization.ipynb
- Combined:
- Ripple Detection: notebooks/30_Ripple_Detection.ipynb
- Extract Mark Indicators: notebooks/31_Extract_Mark_Indicators.ipynb
- Decoding with GPUs: notebooks/32_Decoding_with_GPUs.ipynb
- Decoding Clusterless: notebooks/33_Decoding_Clusterless.ipynb
- Overview: notebooks/README.md
- General:
- Setup: notebooks/00_Setup.ipynb
- Insert Data: notebooks/01_Insert_Data.ipynb
- Data Sync: notebooks/02_Data_Sync.ipynb
- Ephys:
- Spike Sorting: notebooks/10_Spike_Sorting.ipynb
- Curation: notebooks/11_Curation.ipynb
- LFP: notebooks/12_LFP.ipynb
- Theta: notebooks/14_Theta.ipynb
- Position:
- Position Trodes: notebooks/20_Position_Trodes.ipynb
- DLC From Scratch: notebooks/21_Position_DLC_1.ipynb
- DLC From Model: notebooks/22_Position_DLC_2.ipynb
- DLC Prediction: notebooks/23_Position_DLC_3.ipynb
- Linearization: notebooks/24_Linearization.ipynb
- Combined:
- Ripple Detection: notebooks/30_Ripple_Detection.ipynb
- Extract Mark Indicators: notebooks/31_Extract_Mark_Indicators.ipynb
- Decoding with GPUs: notebooks/32_Decoding_with_GPUs.ipynb
- Decoding Clusterless: notebooks/33_Decoding_Clusterless.ipynb
- API Reference: api/ # defer to gen-files + literate-nav
- How to Contribute: contribute.md
- Change Log: CHANGELOG.md
@@ -85,6 +87,7 @@ plugins:
glob:
- "temp*"
- "0*yaml"
- "*py_scripts/*"
- mike:
canonical_version: latest
css_dir: stylesheets
@@ -98,8 +101,8 @@ plugins:
group_by_category: false
line_length: 80
docstring_style: numpy
watch:
- src/spyglass/
# watch:
# - src/spyglass/
- literate-nav:
nav_file: navigation.md
- exclude-search:
@@ -109,8 +112,10 @@
scripts:
- ./src/api/make_pages.py
- mkdocs-jupyter: # Comment this block during dev to reduce build time
execute: False # Very slow, needs gh-action edit to work/link to db
include_source: False
ignore_h1_titles: True
ignore: ["*make_pages.py", "**checkpoints**"]
ignore: ["*make_pages.py", "**checkpoints**", "*/py_scripts/*"]

markdown_extensions:
- attr_list
2 changes: 1 addition & 1 deletion docs/src/contribute.md
@@ -169,7 +169,7 @@ There are a few places where a name needs to be given to objects. Follow these r

- You may want to create a development/testing environment independent of the
lab datajoint server. To do so, run your own datajoint server with Docker. See
[example](./notebooks/docker_mysql_tutorial.ipynb).
[example](./notebooks/00_Setup.ipynb).
- Datajoint is unable to set delete permissions on a per-table basis. In other
words, if a user is able to delete entries in a given table, she can delete
entries in any table in the schema. Some tables that hold important data
2 changes: 1 addition & 1 deletion docs/src/index.md
@@ -7,7 +7,7 @@ format and integrates open-source tools into a coherent framework.

## Installation

To install to this project, see [Installation](./installation/).
To install to this project, see [Installation](./installation.md).

## Contributing

16 changes: 8 additions & 8 deletions docs/src/installation.md
@@ -7,7 +7,7 @@ with the `-e` flag: `pip install -e /path/to/spyglass`
## Basic Installation

For basic installation steps, see the
[Setup notebook](../notebooks/00_Setup.ipynb) 'local installation' section,
[Setup notebook](./notebooks/00_Setup.ipynb) 'local installation' section,
including python, mamba (for managing a
[virtual environment](https://en.wikipedia.org/wiki/Virtual_environment_software)),
VSCode, Jupyter, and git. This notebook also covers
@@ -41,7 +41,7 @@ pip install ghostipy
## Database access

For basic installation steps, see the
[Setup notebook](../notebooks/00_Setup.ipynb) 'database connection' section. For
[Setup notebook](./notebooks/00_Setup.ipynb) 'database connection' section. For
additional details, see the
[DataJoint documentation](https://datajoint.com/docs/elements/user-guide/#relational-databases).

@@ -58,12 +58,12 @@ specified, the subfolder names below are included as defaults.
"database.prefix": "username_",
"spyglass_dirs": {
"base": "/your/base/path",
"raw":"/your/base/path/raw",
"analysis":"/your/base/path/analysis",
"recording":"/your/base/path/recording",
"spike_sorting_storage":"/your/base/path/spikesorting",
"waveforms":"/your/base/path/waveforms",
"temp":"/your/base/path/tmp",
"raw": "/your/base/path/raw",
"analysis": "/your/base/path/analysis",
"recording": "/your/base/path/recording",
"spike_sorting_storage": "/your/base/path/spikesorting",
"waveforms": "/your/base/path/waveforms",
"temp": "/your/base/path/tmp"
}
}
}
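
As a sketch, assuming the standard DataJoint config file name, settings like
those above can be loaded before importing spyglass:

```python
import datajoint as dj

# Load a saved DataJoint config (file name is illustrative) so the directory
# settings above are picked up when spyglass is imported
dj.config.load("dj_local_conf.json")
```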
111 changes: 8 additions & 103 deletions docs/src/misc/merge_tables.md
@@ -33,7 +33,7 @@ pipeline. By convention...
from spyglass.utils.dj_merge_tables import _Merge

@schema
class MergeTable(_Merge):
class MergeOutput(_Merge):
definition = """
merge_id: uuid
---
Expand All @@ -57,6 +57,11 @@ class MergeTable(_Merge):

![Merge diagram](../images/merge_diagram.png)

By convention, Merge Tables have been named with the pipeline name plus `Output`
(e.g., `LFPOutput`, `PositionOutput`). Using the underscore alias for this class
allows us to circumvent a DataJoint protection that interprets the class as a
table itself.

## How

### Merging
Expand All @@ -65,7 +70,7 @@ The Merge class in Spyglass's utils is a subclass of DataJoint's [Manual
Table](https://datajoint.com/docs/core/design/tables/tiers/#data-entry-lookup-and-manual)
and adds functions to make the awkwardness of part tables more manageable.
These functions are described in the
[API section](../../api/src/spyglass/utils/dj_merge_tables/), under
[API section](../../api/src/spyglass/utils/dj_merge_tables.md), under
`utils.dj_merge_tables`.
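
As a minimal sketch of the kind of call these helpers support (the table below
is an existing Merge table; the file name is illustrative):

```python
from spyglass.lfp.lfp_merge import LFPOutput  # a Merge table

# Union of the master table with all of its part tables
LFPOutput.merge_view()

# The same view, restricted to a single NWB file
LFPOutput.merge_view(restriction={"nwb_file_name": "example20230101_.nwb"})
```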

### Restricting
@@ -106,104 +111,4 @@ is not present in the parent.

## Example

First, we'll import various items related to the LFP Merge Table...

```python
from spyglass.utils.dj_merge_tables import delete_downstream_merge, Merge
from spyglass.common.common_ephys import LFP as CommonLFP # Upstream 1
from spyglass.lfp.lfp_merge import LFPOutput # Merge Table
from spyglass.lfp.v1.lfp import LFPV1 # Upstream 2
```

Merge Tables have multiple custom methods that begin with `merge`. `help` can
show us the docstring of each

```python
merge_methods=[d for d in dir(Merge) if d.startswith('merge')]
help(getattr(Merge,merge_methods[-1]))
```

We'll use this example to explore populating both `LFPV1` and the `LFPOutput`
Merge Table.

```python
nwb_file_dict = { # We'll use this later when fetching from the Merge Table
"nwb_file_name": "tonks20211103_.nwb",
}
lfpv1_key = {
**nwb_file_dict,
"lfp_electrode_group_name": "CA1_test",
"target_interval_list_name": "test interval2",
"filter_name": "LFP 0-400 Hz",
"filter_sampling_rate": 30000,
}
LFPV1.populate(lfpv1_key) # Also populates LFPOutput
```

The Merge Table can also be populated with keys from `common_ephys.LFP`.

```python
common_keys_CH = CommonLFP.fetch(limit=3, as_dict=True) # CH61
LFPOutput.insert1(common_keys_CH[0], skip_duplicates=True)
LFPOutput.insert(common_keys_CH[1:], skip_duplicates=True)
common_keys_J1 = CommonLFP.fetch(limit=3, offset=80, as_dict=True) # J16
LFPOutput.insert(common_keys_J1, skip_duplicates=True)
```

`merge_view` shows a union of the master and all part tables.

```python
LFPOutput.merge_view()
LFPOutput.merge_view(restriction=lfpv1_key)
```

UUIDs help retain unique entries across all part tables. We can fetch NWB file
by referencing this or other features.

```python
uuid_key = LFPOutput.fetch(limit=1, as_dict=True)[-1]
restrict = LFPOutput & uuid_key
result1 = restrict.fetch_nwb()

nwb_key = LFPOutput.merge_restrict(nwb_file_dict).fetch(as_dict=True)[0]
result2 = (LFPOutput & nwb_key).fetch_nwb()
```

There are also functions for retrieving part/parent table(s) and fetching data.

1. These `get` functions will either return the part table of the Merge table or
the parent table with the source information for that part.

2. This `fetch` will collect all relevant entries and return them as a list in
the format specified by keyword arguments and one's DataJoint config.

```python
result4 = LFPOutput.merge_get_part(restriction=common_keys_CH[0],join_master=True)
result5 = LFPOutput.merge_get_parent(restriction='nwb_file_name LIKE "CH%"')
result6 = result5.fetch('lfp_sampling_rate') # Sample rate for all CH* files
result7 = LFPOutput.merge_fetch("filter_name", "nwb_file_name")
result8 = LFPOutput.merge_fetch(as_dict=True)
```

When deleting from Merge Tables, we can either...

1. delete from the Merge Table itself with `merge_delete`, deleting both
the master and part.

2. use `merge_delete_parent` to delete from the parent sources, getting rid of
the entries in the source table they came from.

3. use `delete_downstream_merge` to find Merge Tables downstream and get rid
full entries, avoiding orphaned master table entries.

The two latter cases can be destructive, so we include an extra layer of
protection with `dry_run`. When true (by default), these functions return
a list of tables with the entries that would otherwise be deleted.

```python
LFPOutput.merge_delete(common_keys_CH[0]) # Delete from merge table
LFPOutput.merge_delete_parent(restriction=nwb_file_dict, dry_run=True)
delete_downstream_merge(
table=CommonLFP, restriction=common_keys_CH[0], dry_run=True
)
```
For example usage, see our Merge Table notebook.
1 change: 0 additions & 1 deletion notebooks/.gitignore

This file was deleted.

20 changes: 9 additions & 11 deletions notebooks/01_Insert_Data.ipynb
@@ -2081,17 +2081,15 @@
" group_name=\"test\",\n",
" electrode_list=[0],\n",
")\n",
"lfp.v1.LFPSelection.insert1(\n",
" {\n",
" \"nwb_file_name\": nwb_copy_file_name,\n",
" \"lfp_electrode_group_name\": \"test\",\n",
" \"target_interval_list_name\": \"01_s1\",\n",
" \"filter_name\": \"LFP 0-400 Hz\",\n",
" \"filter_sampling_rate\": 30_000,\n",
" },\n",
" skip_duplicates=True,\n",
")\n",
"lfp.v1.LFPV1().populate()\n",
"lfp_key = {\n",
" \"nwb_file_name\": nwb_copy_file_name,\n",
" \"lfp_electrode_group_name\": \"test\",\n",
" \"target_interval_list_name\": \"01_s1\",\n",
" \"filter_name\": \"LFP 0-400 Hz\",\n",
" \"filter_sampling_rate\": 30_000,\n",
"}\n",
"lfp.v1.LFPSelection.insert1(lfp_key, skip_duplicates=True)\n",
"lfp.v1.LFPV1().populate(lfp_key)\n",
"```\n",
"</details>\n",
"<details>\n",