Fix spelling & grammar in documentation #97

Merged 1 commit on Feb 6, 2024
24 changes: 12 additions & 12 deletions docs/contributing/contributing.md
@@ -1,20 +1,20 @@
# Contributing to SCATTR

-SCATTR python package dependencies are managed with Poetry (`v1.2.0+`), which
-you will need installed on your machine. You can find installation instructions
-on the [Poetry website](https://python-poetry.org/docs/master/#installation).
+SCATTR python package dependencies are managed with Poetry, which
+will need to be installed on your machine. Installation instructions can be
+found on the
+[Poetry website](https://python-poetry.org/docs/master/#installation).

SCATTR also has a few dependencies outside of python, including popular
neuroimaging packages like `ANTs`, `Freesurfer`, `MRtrix3`, and others. We
**strongly** recommend using SCATTR with the `--use-singularity` flag, which
-will pull and use the required containers, unless you are comfortable using
+will pull and use the required containers, unless you are comfortable
installing and using all of these tools yourself.
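
As a hedged illustration of that flag (the `scattr` entry point and the paths below are placeholders rather than values taken from this page), a run that lets Snakemake pull and use the containers would look something like:

```
scattr /path/to/bids /path/to/output participant --use-singularity
```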

-_Note: These instructions are only recommended if you are making changes to the
-SCATTR codebase and committing these back to the repository, or if you are
+_Note: These instructions are only recommended if you are making changes to
+the SCATTR codebase and committing these back to the repository or if you are
using Snakemake's cluster execution profiles. If not, it is easier to run
-SCATTR using the packaged singularity container (e.g.
-`docker://khanlab/scattr:latest`)._
+SCATTR using the packaged container (e.g. `docker://khanlab/scattr:latest`)._

## Setup the development environment
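
As a minimal sketch of the Poetry-based setup described above, assuming Poetry is already installed and you are working from a clone of the scattr repository:

```
# from the root of a scattr clone, install the package and its dependencies
poetry install

# optionally enter the project's virtual environment
poetry shell
```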

@@ -54,10 +54,9 @@ You can see what commands are available by running:
poetry run poe
```

-We use a a few tools, including `black`, `flake8`, `isort`, `snakefmt`, and
-`yamlfix` to ensure formatting and style of our codebase is consistent. There
-are two task runners you can use to check and fix your code, which can be
-invoked with:
+We use a few tools, including `ruff`, `snakefmt`, and `yamlfix` to ensure
+formatting and style of our codebase is consistent. There are two task runners
+you can use to check and fix your code, which can be invoked with:

```
poetry run poe quality-check
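# the matching auto-fix task (task name assumed here; run `poetry run poe` to list the real task names)
poetry run poe quality-fix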
@@ -68,6 +67,7 @@ _Note: If you are in a poetry shell, you do not need to prepend `poetry run` to
the command._

## Dry-run / testing your workflow

Using Snakemake's dry-run option (`--dry-run`/`-n`) is an easy way to verify
any changes made to the workflow are working directly. The `test/data` folder
contains a _fake_ BIDS dataset (i.e. dataset with zero-sized files) that is
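
A sketch of such a dry run against the bundled fake dataset, borrowing the paths and FreeSurfer license location from the Docker example further down this page (the `scattr` entry point is an assumption here):

```
# -n / --dry-run prints the jobs Snakemake would run without executing them
poetry run scattr test/data/bids test/data/derivatives participant --fs-license test/fs_license -n
```
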
11 changes: 6 additions & 5 deletions docs/getting_started/docker.md
@@ -1,8 +1,9 @@
# Running SCATTR with Docker on Windows

-_Note, these instructions you have Docker installed already on a Windows system.
-Docker can also be run on Linux or MacOS with similar commands, but here, we
-will assume the default Windows CLI is being used._
+_Note, these instructions assume you have Docker installed already on a
+Windows system. Docker can also be run on Linux or MacOS with similar
+commands, but here, we will assume the default Windows CLI is being
+used._

## First time setup

@@ -17,7 +18,7 @@ cd c:\Users\username\Downloads
```

Pull the container (this will take some time and storage space, but like an
-installation, it only needs to be done once and can be then be run on many
+installation, it only needs to be done once and can then be run on many
datasets). The example below pulls the latest versioned container (replace
`latest` with `vX.X.X` for a specific version).

@@ -49,7 +50,7 @@ docker run -it --rm khanlab_scattr_latest.sif --help-snakemake
We will use the `test` folder found from the
[Github repository](https://github.com/khanlab/scattr/tree/main/test/) via
`git clone` to the previously mentioned folder to demonstrate an example of
-how to run SCATTR
+how to run SCATTR.
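
A sketch of that clone step, with the destination folder following the earlier `cd` example and the standard clone URL of the linked repository:

```
cd c:\Users\username\Downloads
git clone https://github.com/khanlab/scattr.git
```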

```
docker run -it --rm -v c:\Users\username\Downloads\scattr\test:\test khanlab_scattr_latest.sif /test/data/bids /test/data/derivatives participant --fs-license /test/fs_license --force-output -n
2 changes: 1 addition & 1 deletion docs/getting_started/installation.md
@@ -16,7 +16,7 @@ and DWI derivatives (which can be stored separately).

## Docker on Windows / Mac (Intel) / Linux

-The SCATTR BIDS App is available on DuckerHub as versioned releases.
+The SCATTR BIDS App is available on DockerHub as versioned releases.
Instructions can be found in the [Docker](https://scattr.readthedocs.io/en/stable/getting_started/docker.html) documentation page.

### Pros
4 changes: 2 additions & 2 deletions docs/getting_started/singularity.md
@@ -1,7 +1,7 @@
# Running SCATTR with Singularity

## Pre-requisites
-1. Singularity / Apptainer is is installed on your system. For more info, see
+1. Singularity / Apptainer is installed on your system. For more info, see
the detailed [Apptainer install instructions](https://apptainer.org/docs/admin/main/installation.html#install-from-pre-built-packages).
1. The following command-line tools are installed:
* wget
@@ -10,7 +10,7 @@ the detailed [Apptainer install instructions](https://apptainer.org/docs/admin/m
* in your working folder to store the container (~20GB)
* for SCATTR outputs (~40GB per subject using default parameters)
1. Sufficient CPU and memory - the more you have, the faster it will run and
-the more streamlines that can be estimated. We recommand at least 8 CPU cores
+the more streamlines that can be estimated. We recommend at least 8 CPU cores
and 64GB memory if using default parameters.

## First time setup
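
A minimal sketch of that setup, assuming the container URI given on this page and an output filename matching the one used in the Docker instructions:

```
# pull the SCATTR container into your working folder (~20GB of free space needed)
singularity pull khanlab_scattr_latest.sif docker://khanlab/scattr:latest
```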