
[proposal] make the CLI available without cloning the repo #373

Status: Closed. Wants to merge 6 commits (showing changes from 3 commits).
2 changes: 1 addition & 1 deletion .gitignore
@@ -131,4 +131,4 @@ dmypy.json
.pyre/

# Training outputs
-./bin/train/outputs/
+./train/outputs/
102 changes: 40 additions & 62 deletions README.md
@@ -17,7 +17,7 @@ There are three main ways to use the NAM trainer. There are two simplified train
### Google Colab

If you don't have a good computer for training ML models, you can use Google Colab to train
-in the cloud using the pre-made notebooks under `bin\train`.
+in the cloud using the pre-made notebooks under `train`.

For the very easiest experience, open
[`easy_colab.ipynb` on Google Colab](https://colab.research.google.com/github/sdatkinson/neural-amp-modeler/blob/48353508431a62a17bf5e35deee862f83f730f6c/bin/train/easy_colab.ipynb)
@@ -29,32 +29,9 @@ After installing the Python package, a GUI can be accessed by running `nam` in the terminal.

### The command line trainer (all features)

-Alternatively, you can clone this repo to your computer and use it locally.
> **Owner:**
>
> [BLOCKING] Don't delete this. This is instructions for how to do development with the repo.
>
> It's ok w/ me if you want to add some small section saying that `pip install neural-amp-modeler` is a thing if you want to depend on it w/o modifying it. But I want developers to understand how to correctly set up their development environment.

> **@Eraz1997 (Contributor, Author), Mar 26, 2024:**
>
> Sounds good 👌 My (obviously questionable) suggestion is to clarify what is a CONTRIBUTING guide and what are usage guidelines. This paragraph looks more like user guidelines, so I'd specify that.
>
> I'll leave them as they are and possibly add just a paragraph about pip 💪
-#### Installation
-
-Installation uses [Anaconda](https://www.anaconda.com/) for package management.
-
-For computers with a CUDA-capable GPU (recommended):
-
-```bash
-conda env create -f environment_gpu.yml
-```
-_Note: you may need to modify the CUDA version if your GPU is older. Have a look at [NVIDIA's documentation](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#cuda-major-component-versions__table-cuda-toolkit-driver-versions) if you're not sure._
-
-Otherwise, for a CPU-only install (will train much more slowly):
-
-```bash
-conda env create -f environment_cpu.yml
-```
-
-_Note: if Anaconda takes a long time "`Solving environment...`", then you can speed up installing the environment by using the experimental libmamba solver with `--experimental-solver=libmamba`._
-
-Then activate the environment you've created with
-
-```bash
-conda activate nam
-```
+#### Installation of the Python package
+
+After installing the Python package, the CLI trainer can be accessed by running `nam-cli` in the command line.
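A minimal sketch of what this enables, assuming the package is published on PyPI under the name mentioned in the review thread (`neural-amp-modeler`) and that the `nam-cli` entry point from the `setup.py` diff below is installed:

```bash
# Hypothetical usage without cloning the repo; the package name comes from
# the owner's review comment, and --help is generated by the argparse-based CLI.
pip install neural-amp-modeler
nam-cli --help
```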

#### Train models (GUI)
After installing, you can open a GUI trainer by running
@@ -65,14 +42,14 @@ nam

from the terminal.

-#### Train models (Python script)
+#### Train models (CLI)
For users looking to get more fine-grained control over the modeling process,
NAM includes a training script that can be run from the terminal. In order to run it
#### Download audio files
Download the [v1_1_1.wav](https://drive.google.com/file/d/1CMj2uv_x8GIs-3X1reo7squHOVfkOa6s/view?usp=drive_link) and [output.wav](https://drive.google.com/file/d/1e0pDzsWgtqBU87NGqa-4FbriDCkccg3q/view?usp=drive_link) to a folder of your choice

##### Update data configuration
-Edit `bin/train/data/single_pair.json` to point to relevant audio files:
+Edit `train/data/single_pair.json` to point to relevant audio files:
```json
"common": {
"x_path": "C:\\path\\to\\v1_1_1.wav",
@@ -82,13 +59,13 @@ Edit `bin/train/data/single_pair.json` to point to relevant audio files:
```

##### Run training script
-Open up a terminal. Activate your nam environment and call the training with
+After installing, open up a terminal and run
```bash
-python bin/train/main.py \
-bin/train/inputs/data/single_pair.json \
-bin/train/inputs/models/demonet.json \
-bin/train/inputs/learning/demo.json \
-bin/train/outputs/MyAmp
+nam-cli \
+train/inputs/data/single_pair.json \
+train/inputs/models/demonet.json \
+train/inputs/learning/demo.json \
+train/outputs/MyAmp
```

`data/single_pair.json` contains the information about the data you're training
@@ -100,28 +77,43 @@ is being trained. The example used here uses a `feather` configured `wavenet`.
The configuration above runs a short (demo) training. For a real training you may prefer to run something like,

```bash
-python bin/train/main.py \
-bin/train/inputs/data/single_pair.json \
-bin/train/inputs/models/wavenet.json \
-bin/train/inputs/learning/default.json \
-bin/train/outputs/MyAmp
+nam-cli \
+train/inputs/data/single_pair.json \
+train/inputs/models/wavenet.json \
+train/inputs/learning/default.json \
+train/outputs/MyAmp
```

As a side note, NAM uses [PyTorch Lightning](https://lightning.ai/pages/open-source/)
-under the hood as a modeling framework, and you can control many of the PyTorch Lightning configuration options from `bin/train/inputs/learning/default.json`.
+under the hood as a modeling framework, and you can control many of the PyTorch Lightning configuration options from `train/inputs/learning/default.json`.

#### Export a model (to use with [the plugin](https://github.com/sdatkinson/NeuralAmpModelerPlugin))
-Exporting the trained model to a `.nam` file for use with the plugin can be done
-with:
+Then, point the plugin at the exported `model.nam` file and you're good to go!
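As a side note, export now happens as part of training: `run_inner` in the `nam/train/cli.py` diff below ends with `model.net.export(outdir)`, so the `.nam` file should land in the output directory passed to `nam-cli`. A sketch, assuming the demo paths used above:

```bash
# Directory layout assumed; the exported file is named model.nam per the text above.
ls train/outputs/MyAmp
# expect a model.nam file among the training outputs
```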

+#### Cloning and installing
+
+Alternatively, you can clone this repo to your computer and use it locally.
+Installation uses [Anaconda](https://www.anaconda.com/) for package management.
+
+For computers with a CUDA-capable GPU (recommended):
+
```bash
-python bin/export.py \
-path/to/config_model.json \
-path/to/checkpoints/epoch=123_val_loss=0.000010.ckpt \
-path/to/exported_models/MyAmp
+conda env create -f environment_gpu.yml
```
+_Note: you may need to modify the CUDA version if your GPU is older. Have a look at [NVIDIA's documentation](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#cuda-major-component-versions__table-cuda-toolkit-driver-versions) if you're not sure._

-Then, point the plugin at the exported `model.nam` file and you're good to go!
+Otherwise, for a CPU-only install (will train much more slowly):
+
+```bash
+conda env create -f environment_cpu.yml
+```
+
+_Note: if Anaconda takes a long time "`Solving environment...`", then you can speed up installing the environment by using the experimental libmamba solver with `--experimental-solver=libmamba`._
+
+Then activate the environment you've created with
+
+```bash
+conda activate nam
+```
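For development after cloning, the diff doesn't spell out the final step; a sketch, assuming the standard setuptools editable-install workflow applies to this repo:

```bash
# Run from the repo root, inside the activated `nam` conda environment.
# An editable install registers the console_scripts from setup.py,
# making `nam` (GUI) and `nam-cli` (CLI) available on the PATH.
pip install -e .
nam-cli --help
```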

## Standardized reamping files

@@ -133,17 +125,3 @@ You can use any of the following files:
* [v2_0_0.wav](https://drive.google.com/file/d/1xnyJP_IZ7NuyDSTJfn-Jmc5lw0IE7nfu/view?usp=drive_link)
* [v1_1_1.wav](https://drive.google.com/file/d/1CMj2uv_x8GIs-3X1reo7squHOVfkOa6s/view?usp=drive_link)
* [v1.wav](https://drive.google.com/file/d/1jxwTHOCx3Zf03DggAsuDTcVqsgokNyhm/view?usp=drive_link)

-## Other utilities
-
-#### Run a model on an input signal ("reamping")
-
-Handy if you want to just check it out without needing to use the plugin:
-
-```bash
-python bin/run.py \
-path/to/source.wav \
-path/to/config_model.json \
-path/to/checkpoints/epoch=123_val_loss=0.000010.ckpt \
-path/to/output.wav
-```
2 changes: 1 addition & 1 deletion nam/_version.py
@@ -1 +1 @@
__version__ = "0.7.4"
__version__ = "0.8.0"
14 changes: 9 additions & 5 deletions bin/train/main.py → nam/train/cli.py
@@ -146,7 +146,7 @@ def _create_callbacks(learning_config):
return [checkpoint_best, checkpoint_last, checkpoint_epoch]


-def main(args):
+def run(args):
outdir = ensure_outdir(args.outdir)
# Read
with open(args.data_config_path, "r") as fp:
@@ -155,10 +155,10 @@ def main(args):
model_config = json.load(fp)
with open(args.learning_config_path, "r") as fp:
learning_config = json.load(fp)
-    main_inner(data_config, model_config, learning_config, outdir, args.no_show)
+    run_inner(data_config, model_config, learning_config, outdir, args.no_show)


-def main_inner(
+def run_inner(
data_config, model_config, learning_config, outdir, no_show, make_plots=True
):
# Write
@@ -225,11 +225,15 @@ def main_inner(
model.net.export(outdir)


if __name__ == "__main__":
def main():
> **Owner:**
>
> [Nit] Now that there are 3 different "entry points" with different interfaces, it'd probably be good to docstring why they're all here and what they achieve. [I can handle this.]
    parser = ArgumentParser()
    parser.add_argument("data_config_path", type=str)
    parser.add_argument("model_config_path", type=str)
    parser.add_argument("learning_config_path", type=str)
    parser.add_argument("outdir")
    parser.add_argument("--no-show", action="store_true", help="Don't show plots")
-    main(parser.parse_args())
+    run(parser.parse_args())


+if __name__ == "__main__":
+    main()
1 change: 1 addition & 0 deletions setup.py
@@ -52,6 +52,7 @@ def get_additional_requirements():
entry_points={
"console_scripts": [
"nam = nam.train.gui:run",
"nam-cli = nam.train.cli:main",
]
},
)
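To illustrate what the new entry point buys (a sketch, assuming the package is installed): `nam-cli` dispatches to `nam.train.cli:main`, and since `cli.py` keeps an `if __name__ == "__main__"` guard, running the module directly should be equivalent:

```bash
# Two invocations of the same argparse-based CLI (assumed equivalent):
nam-cli --help
python -m nam.train.cli --help
```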
Empty file removed: tests/test_bin/__init__.py
@@ -3,6 +3,7 @@
# Author: Steven Atkinson (steven@atkinson.mn)

import json
+from argparse import Namespace
from enum import Enum
from pathlib import Path
from subprocess import check_call
@@ -14,10 +15,7 @@
import torch

from nam.data import REQUIRED_RATE, np_to_wav

-_BIN_TRAIN_MAIN_PY_PATH = Path(__file__).absolute().parent.parent.parent.parent / Path(
-    "bin", "train", "main.py"
-)
+from nam.train.cli import run


class _Device(Enum):
@@ -173,22 +171,20 @@ def _setup_files(self, root_path: Path, device: _Device):

    def _t_main(self, device: _Device):
        """
-        End-to-end test of bin/train/main.py
+        End-to-end test of the CLI
        """
        with TemporaryDirectory() as tempdir:
            tempdir = Path(tempdir)
            self._input_path(tempdir, ensure=True)
            self._setup_files(tempdir, device)
-            check_call(
-                [
-                    "python",
-                    str(_BIN_TRAIN_MAIN_PY_PATH),
-                    str(self._data_config_path(tempdir)),
-                    str(self._model_config_path(tempdir)),
-                    str(self._learning_config_path(tempdir)),
-                    str(self._output_path(tempdir, ensure=True)),
-                    "--no-show",
-                ]
+            run(
+                Namespace(
+                    data_config_path=str(self._data_config_path(tempdir)),
+                    model_config_path=str(self._model_config_path(tempdir)),
+                    learning_config_path=str(self._learning_config_path(tempdir)),
+                    outdir=str(self._output_path(tempdir, ensure=True)),
+                    no_show=True,
+                )
            )

@classmethod
7 files renamed without changes.