Update docs to reference new input.wav and output.wav (#469)
Update docs
sdatkinson authored Sep 20, 2024
1 parent 38ad854 commit 0d840ee
Showing 8 changed files with 60 additions and 20 deletions.
8 changes: 4 additions & 4 deletions docs/source/model-file.rst
@@ -21,10 +21,10 @@ There are a few keys you should expect to find with the following values:
* ``"weights"``: a list of float-type numbers that are the weights (parameters)
of the model. How they map into the model is architecture-specific. Looking at
``._export_weights()`` will usually tell you what you need to know (e.g. for
``WaveNet``
`here <https://github.com/sdatkinson/neural-amp-modeler/blob/cb100787af4b16764ac94a2edf9bcf7dc5ae59a7/nam/models/wavenet.py#L428>`_
and ``LSTM``
`here <https://github.com/sdatkinson/neural-amp-modeler/blob/cb100787af4b16764ac94a2edf9bcf7dc5ae59a7/nam/models/recurrent.py#L317>`_.)
``WaveNet`` at
`wavenet.py <https://github.com/sdatkinson/neural-amp-modeler/blob/cb100787af4b16764ac94a2edf9bcf7dc5ae59a7/nam/models/wavenet.py#L428>`_
and ``LSTM`` at
`recurrent.py <https://github.com/sdatkinson/neural-amp-modeler/blob/cb100787af4b16764ac94a2edf9bcf7dc5ae59a7/nam/models/recurrent.py#L317>`_.)
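
Since an exported model file is plain JSON, you can inspect the weights
yourself. A minimal sketch (the filename here is hypothetical; substitute any
exported model):

.. code-block:: python

    import json

    # Load an exported model file and peek at its flat list of weights.
    with open("my_model.nam", "r") as fp:
        model = json.load(fp)

    weights = model["weights"]  # Architecture-specific ordering
    print(f"{len(weights)} weights; first few: {weights[:5]}")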

There are also some optional keys that ``nam`` may use:

11 changes: 7 additions & 4 deletions docs/source/tutorials/colab.rst
@@ -43,11 +43,10 @@ have made high-quality tutorials.
However, if you want to skip reamping for your first model, you can download
these pre-made files:

* `v1_1_1.wav <https://drive.google.com/file/d/1CMj2uv_x8GIs-3X1reo7squHOVfkOa6s/view?usp=drive_link>`_,
* `input.wav <https://drive.google.com/file/d/1KbaS4oXXNEuh2aCPLwKrPdf5KFOjda8G/view?usp=sharing>`_,
a standardized input file.
* `output.wav <https://drive.google.com/file/d/1e0pDzsWgtqBU87NGqa-4FbriDCkccg3q/view?usp=drive_link>`_,
a reamp of the same overdrive used to make
`ParametricOD <https://www.neuralampmodeler.com/post/the-first-publicly-available-parametric-neural-amp-model>`_.
* `output.wav <https://drive.google.com/file/d/1NrpQLBbCDHyu0RPsne4YcjIpi5-rEP6w/view?usp=sharing>`_,
a reamp of a high-gain tube head.

To upload your data to Colab, click the Folder icon here:

@@ -88,3 +87,7 @@ If you don't see it, you might have to refresh the file browser:

.. image:: media/colab/refresh.png
:scale: 20 %

To use it, point
`the plugin <https://github.com/sdatkinson/NeuralAmpModelerPlugin>`_ at the file
and you're good to go!
8 changes: 3 additions & 5 deletions docs/source/tutorials/full.rst
@@ -28,10 +28,8 @@ signal from it (either by reamping a pre-recorded test signal or by
simultaneously recording your DI and the effected tone). For your first time,
you can download the following pre-made files:

* `v1_1_1.wav <https://drive.google.com/file/d/1CMj2uv_x8GIs-3X1reo7squHOVfkOa6s/view?usp=drive_link>`_
(input)
* `output.wav <https://drive.google.com/file/d/1e0pDzsWgtqBU87NGqa-4FbriDCkccg3q/view?usp=drive_link>`_
(output)
* `input.wav <https://drive.google.com/file/d/1KbaS4oXXNEuh2aCPLwKrPdf5KFOjda8G/view?usp=sharing>`_
* `output.wav <https://drive.google.com/file/d/1NrpQLBbCDHyu0RPsne4YcjIpi5-rEP6w/view?usp=sharing>`_

Next, make a file called e.g. ``data.json`` by copying
`nam_full_configs/data/single_pair.json <https://github.com/sdatkinson/neural-amp-modeler/blob/main/nam_full_configs/data/single_pair.json>`_
@@ -40,7 +38,7 @@ and editing it to point to your audio files like this:
.. code-block:: json

    "common": {
        "x_path": "C:\\path\\to\\v1_1_1.wav",
        "x_path": "C:\\path\\to\\input.wav",
        "y_path": "C:\\path\\to\\output.wav",
        "delay": 0
    }
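
Before training, it can save time to check that the paths in your config
actually resolve. A small sketch (it assumes the ``data.json`` filename used
above, with ``"common"`` at the top level as in ``single_pair.json``):

.. code-block:: python

    import json
    from pathlib import Path

    # Confirm that the audio files referenced by data.json exist on disk.
    with open("data.json", "r") as fp:
        data_config = json.load(fp)

    common = data_config["common"]
    for key in ("x_path", "y_path"):
        path = Path(common[key])
        print(f"{key}: {path} -> {'found' if path.exists() else 'MISSING'}")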
53 changes: 46 additions & 7 deletions docs/source/tutorials/gui.rst
@@ -8,13 +8,52 @@ with:
$ nam
Training with the GUI requires a reamp based on one of the standardized training
files:
You'll see a GUI like this:

* `v3_0_0.wav <https://drive.google.com/file/d/1Pgf8PdE0rKB1TD4TRPKbpNo1ByR3IOm9/view?usp=drive_link>`_
(preferred)
* `v2_0_0.wav <https://drive.google.com/file/d/1xnyJP_IZ7NuyDSTJfn-Jmc5lw0IE7nfu/view?usp=drive_link>`_
* `v1_1_1.wav <https://drive.google.com/file/d/1CMj2uv_x8GIs-3X1reo7squHOVfkOa6s/view?usp=drive_link>`_
* `v1.wav <https://drive.google.com/file/d/1jxwTHOCx3Zf03DggAsuDTcVqsgokNyhm/view?usp=drive_link>`_
.. image:: media/gui/gui.png
:scale: 30 %

Start by pressing the "Download input file" button to download the audio that
you'll reamp through your gear to make your model,
`input.wav <https://drive.google.com/file/d/1KbaS4oXXNEuh2aCPLwKrPdf5KFOjda8G/view?usp=sharing>`_.
Reamp this through the gear that you want to model and render the output as a
WAVE file. Be sure to match the sample rate (48 kHz) and bit depth (24-bit) of
the input file, and make sure that your render matches the input file's length.
An example can be found here:
`output.wav <https://drive.google.com/file/d/1NrpQLBbCDHyu0RPsne4YcjIpi5-rEP6w/view?usp=sharing>`_.
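
If you'd like to double-check your render before training, here is a rough
sketch using Python's standard-library ``wave`` module (the filenames are
examples; point it at your own files):

.. code-block:: python

    import wave

    # Report the basic properties that need to match between the two files.
    def describe(path: str) -> dict:
        with wave.open(path, "rb") as w:
            return {
                "sample_rate": w.getframerate(),    # should be 48000
                "bit_depth": 8 * w.getsampwidth(),  # should be 24
                "frames": w.getnframes(),           # should match the input
            }

    print(describe("input.wav"))
    print(describe("output.wav"))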

Return to the trainer and pick the input and output files as well as where you
want your model to be saved.

.. note:: To train a batch of models, pick all of their reamp (output) files.

Once you've selected these, the "Train" button should become available:

.. image:: media/gui/ready-to-train.png
:scale: 30 %

Click "Train", and the program will check your files for any problems, then
start training.

Some recording setups have round-trip latency that should be accounted for, and
some DAWs try to compensate for it but can overcompensate. The trainer
automatically attempts to line up the input and output audio. To help with this,
the input file has two impulses near its beginning; the trainer looks for their
responses in the output. You'll see a plot showing where it thinks the output
first reacted to the input (black dashed line) as well as the two responses
overlaid on each other. They should overlap, and the black dashed line should
sit just before the start of the response, like this:

.. image:: media/gui/impulse-responses.png
:scale: 50 %
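
If you're curious what this alignment step amounts to, here is an illustrative
sketch of delay estimation by cross-correlation (this is not the trainer's
actual implementation, just the general idea; the function is hypothetical):

.. code-block:: python

    import numpy as np

    def estimate_delay(x: np.ndarray, y: np.ndarray, window: int = 48000) -> int:
        """Estimate how many samples the output ``y`` lags the input ``x``.

        Cross-correlates a window around the start of the files, where the
        alignment impulses are located.
        """
        x_w, y_w = x[:window], y[:window]
        corr = np.correlate(y_w, x_w, mode="full")
        return int(np.argmax(corr)) - (len(x_w) - 1)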

Close this figure, and then you will see training proceed. At the end, you'll
see a plot that compares the model's prediction against your recording:

.. image:: media/gui/result.png
:scale: 30 %

Close that plot, and your model will be saved. To use it, point
`the plugin <https://github.com/sdatkinson/NeuralAmpModelerPlugin>`_ at the file
and you're good to go!
Binary file added docs/source/tutorials/media/gui/gui.png
Binary file added docs/source/tutorials/media/gui/impulse-responses.png
Binary file added docs/source/tutorials/media/gui/ready-to-train.png
Binary file added docs/source/tutorials/media/gui/result.png

1 comment on commit 0d840ee

@38github


Great work!
