Merge pull request #189 from twosixlabs/173-revise-user-and-developer-docs

173 revise user and developer docs
deprit authored Nov 20, 2024
2 parents 5381d5e + 786c993 commit 185b425
Showing 13 changed files with 585 additions and 426 deletions.
6 changes: 3 additions & 3 deletions docs/CONTRIBUTING.md
@@ -22,9 +22,9 @@ mechanical standards before submitting.

## Documentation

We assume that contributors will be making small changes to the documentation that
are in the [code repository `docs` directory](./). If you are making larger
changes that would require more substantial changes to the documentation, you'd
We assume that contributors will be making small changes to the documentation files
that are in the code repository [`docs`](./) directory. If you are making larger
changes that would require more substantial changes to the documentation, you would
likely want to talk with us first to discuss your plans.

## Code of Conduct
Binary file added docs/assets/MSTAR-classes.png
Binary file removed docs/assets/charmory.png
Binary file not shown.
14 changes: 7 additions & 7 deletions docs/developers/development_environment.md
@@ -1,11 +1,11 @@
# Setting Up VSCode for Python Development

This guide will walk you through the process of setting up Visual Studio Code (VSCode) for Python development with a focus on machine learning.
This guide will walk you through the process of setting up [Visual Studio Code (VSCode)](https://code.visualstudio.com/) for Python development with a focus on machine learning.

## Prerequisites

- VSCode installed
- Python installed (the version depends on your project requirements)
- Python installed (Armory-Library requires at least version 3.8)

## 1. Installing Essential VSCode Extensions

@@ -15,7 +15,6 @@ VSCode provides a rich ecosystem of extensions that can make Python development
2. **Jupyter** (`ms-toolsai.jupyter`) - Provides Jupyter notebook support, interactive programming and computing.
3. **Python Test Explorer** (`LittleFoxTeam.vscode-python-test-adapter`) - Supports unit testing in Python.


To install an extension, follow these steps:

- Press `Ctrl+Shift+X` to open the Extensions view.
@@ -43,12 +42,13 @@ You can also configure the interpreter used by VSCode by modifying the `.vscode/

To set up debugging in Python with VSCode, see the [Troubleshooting Guide](./troubleshooting.md#visual-studio-codes-debugger).

## 4. Enabling Pair Programming in VSCode
## 4. Configuring Jupyter Notebook Support in VSCode

To enable pair programming, install the "Live Share" extension. This allows you to share your workspace with others for collaborative work.
With the Jupyter extension installed, you can create a new Jupyter notebook by clicking on the new file button in the Explorer view and giving the file a .ipynb extension.

## 5. Configuring Jupyter Notebook Support in VSCode
## 5. Enabling Pair Programming in VSCode

To enable pair programming, install the "Live Share" extension. This allows you to share your workspace with others for collaborative work.

With the Jupyter extension installed, you can create a new Jupyter notebook by clicking on the new file button in the Explorer view and giving the file a .ipynb extension.

With these steps, you will have a robust and efficient Python development environment set up in VSCode.
105 changes: 0 additions & 105 deletions docs/developers/new_model_to_armory.md

This file was deleted.

19 changes: 15 additions & 4 deletions docs/developers/style.md
@@ -16,16 +16,28 @@ We will update black versioning annually following their [Stability Policy](http

As of version 0.16.1 `tools/format_json.py` no longer exists. Instead the built-in [json.tool](https://docs.python.org/3/library/json.html#module-json.tool) is used along with the `--sort-keys` and `--indent=4` flags.

We use [isort](https://pycqa.github.io/isort/) to sort Python imports.

isort --profile black *

We use [Flake8](https://flake8.pycqa.org/) for non-formatting PEP style enforcement.

flake8

Our repo-specific Flake8 configuration is detailed in `.flake8`.

We use [isort](https://pycqa.github.io/isort/) to sort Python imports.
We use [mypy](https://mypy-lang.org/) for static type checking.

isort --profile black *
mypy

Notebooks are type checked using [nbqa](https://github.com/nbQA-dev/nbQA) and output stripped using [nbstripout](https://github.com/kynan/nbstripout).

nbqa mypy
nbstripout

Once clean, the notebook JSON source is checked and formatted.

Finally, large file commits are prevented with `check-added-large-files`.

### Pre-commit Hooks

@@ -37,8 +49,7 @@ python -m pip install pre-commit
python -m pre_commit install
```
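
If you want to exercise every configured hook by hand rather than waiting for a commit, `pre-commit` supports a one-off pass over the whole repository. This is a standard `pre-commit` invocation; the exact hooks that run depend on the repo's `.pre-commit-config.yaml`.

```bash
# Run all configured hooks against every file in the repository,
# not just the files staged for the next commit.
pre-commit run --all-files
```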

Note: the `pre-commit` package is already installed as part of the `developer`
flavor dependencies.
Note: the `pre-commit` package is already installed as part of the `developer` flavor dependencies.

```bash
pip install .[developer]
24 changes: 12 additions & 12 deletions docs/experiment_tracking.md
@@ -1,16 +1,16 @@
# Experiment Tracking

Armory provides integration with [MLFlow] to provide tracking of evaluation
runs and storage of results from evaluations.
Armory provides integration with [MLFlow](https://mlflow.org/) to track evaluation
runs and store results of metrics evaluations.

When the Armory evaluation engine runs, an experiment is created using the
Running the Armory evaluation engine creates an experiment using the
`evaluation.name` if one doesn't already exist. Then a parent run is created to
store any global parameters that aren't chain-specific. Each chain within the
evaluation results in a separate nested run. This nested run will contain all
the chain-specific parameters, metrics, and exports for that chain.
evaluation parent run produces a separate nested run. This nested run will contain all
the chain-specific parameters, metrics, and exports.

The following table summarizes how Armory evaluation components map to records
in MLFlow:
in MLFlow.

| Armory Component | MLFlow Record |
|-----------------------|----------------------------------------|
@@ -19,7 +19,7 @@ in MLFlow:
| Evaluation chain run | Nested run |
| Tracked params | Parent or nested run parameters |
| Metrics | Nested run metrics _or_ JSON artifacts |
| Exports | Nested run artifacts |

## Usage

@@ -55,7 +55,7 @@ class TheDataset:
dataset = TheDataset(batch_size=...)
```

For third-party functions or classes that cannot have the decorator already
For third-party functions or classes that do not have the decorator already
applied, use the `track_call` utility function.

```python
@@ -66,8 +66,8 @@ dataset = track_call(TheDataset, batch_size=...)
```

`track_call` will invoke the function or class initializer given as the first
positional argument, forwarding all following arguments to the function or class
and recording the keyword arguments as parameters.
positional argument, forward all following arguments to the function or class
and record the keyword arguments as parameters.

Additional parameters may be recorded manually using the
`armory.track.track_param` function before the evaluation is run.
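
As a minimal sketch (the parameter key and value below are purely illustrative, and this assumes `track_param` accepts a name followed by a value):

```python
from armory.track import track_param

# Record an extra, manually chosen parameter on the evaluation's MLFlow run.
# The key and value here are hypothetical examples, not required names.
track_param("dataset.split", "test")
```
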
@@ -225,7 +225,7 @@ And you can view it at `http://localhost:5000` in your browser.

### Remote

If using a remote MLFlow tracking server, set the `MLFLOW_TRACKING_URI`
When using a remote MLFlow tracking server, set the `MLFLOW_TRACKING_URI`
environment variable to the tracking server's URI.

```sh
@@ -249,4 +249,4 @@ You may also store your credentials
[MLFlow]: https://mlflow.org/docs/latest/tracking.html
[`mlflow.log_metric`]: https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.log_metric
[`mlflow.log_artifact`]: https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.log_artifact
[`mlflow.log_artifacts`]: https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.log_artifacts