Merge pull request #1799 from lissyx/better-cuda-doc
Improve first user experience for CUDA deps
lissyx authored Dec 19, 2018
2 parents b976032 + 924c144 commit da608dc
Showing 1 changed file with 13 additions and 0 deletions.
README.md
@@ -25,13 +25,16 @@ pip3 install deepspeech-gpu
deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio my_audio_file.wav
```

Please ensure you have the required [CUDA dependency](#cuda-dependency).

See the output of `deepspeech -h` for more information on the use of `deepspeech`. (If you experience problems running `deepspeech`, please check [required runtime dependencies](native_client/README.md#required-dependencies)).

**Table of Contents**

- [Prerequisites](#prerequisites)
- [Getting the code](#getting-the-code)
- [Getting the pre-trained model](#getting-the-pre-trained-model)
- [CUDA dependency](#cuda-dependency)
- [Using the model](#using-the-model)
- [Using the Python package](#using-the-python-package)
- [Using the command line client](#using-the-command-line-client)
@@ -82,6 +85,10 @@ There are three ways to use DeepSpeech inference:
- [The Node.JS package](#using-the-nodejs-package)


### CUDA dependency

The GPU-capable builds (Python, NodeJS, C++, etc.) depend on the same CUDA runtime as upstream TensorFlow. Currently, with TensorFlow r1.12, that means CUDA 9.0 and CuDNN v7.2.
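Before installing a GPU build, it can help to confirm the loader can actually find those libraries. A minimal sketch, assuming conventional Linux shared-library names (`libcudart`, `libcudnn`); the helper name is hypothetical and not part of DeepSpeech:

```python
from ctypes.util import find_library


def cuda_runtime_present():
    """Best-effort check that the CUDA runtime and CuDNN shared libraries
    are visible to the dynamic loader.

    "cudart" and "cudnn" are the conventional Linux library names; this
    does not verify the exact versions (CUDA 9.0 / CuDNN v7.2) that
    TensorFlow r1.12 expects.
    """
    return {name: find_library(name) is not None for name in ("cudart", "cudnn")}
```

If either entry is `False`, the GPU packages will fail to load at import time even though `pip3` installed them successfully.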

### Using the Python package

Pre-built binaries that can be used for performing inference with a trained model can be installed with `pip3`. You can then use the `deepspeech` binary to do speech-to-text on an audio file:
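The pre-trained English model works on 16 kHz, mono, 16-bit PCM audio, so it is worth validating your WAV files before feeding them to `deepspeech`. A minimal stdlib-only sketch; the helper name is hypothetical, not part of the DeepSpeech API:

```python
import struct
import wave


def read_wav_pcm16(path):
    """Read a WAV file and return (sample_rate, samples as Python ints).

    Asserts the format DeepSpeech's pre-trained English model expects:
    mono, 16-bit PCM. Audio at other sample rates should be resampled
    to 16 kHz first.
    """
    with wave.open(path, "rb") as w:
        assert w.getsampwidth() == 2, "expected 16-bit PCM"
        assert w.getnchannels() == 1, "expected mono audio"
        rate = w.getframerate()
        frames = w.readframes(w.getnframes())
        # Unpack little-endian signed 16-bit samples.
        samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    return rate, list(samples)
```

A file that fails these checks is a common cause of garbage transcriptions rather than hard errors.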
@@ -133,6 +140,8 @@ $ pip3 install --upgrade deepspeech-gpu

In both cases, `pip3` should take care of installing all the required dependencies. Once installation is complete, you should be able to call the sample binary by running `deepspeech` on your command line.

Please ensure you have the required [CUDA dependency](#cuda-dependency).

Note: the following command assumes you [downloaded the pre-trained model](#getting-the-pre-trained-model).

```bash
@@ -189,6 +198,8 @@ npm install deepspeech-gpu

See [client.js](native_client/javascript/client.js) for an example of how to use the bindings.

Please ensure you have the required [CUDA dependency](#cuda-dependency).

### Installing bindings from source

If pre-built binaries aren't available for your system, you'll need to build them from scratch. Follow [these instructions](native_client/README.md).
@@ -230,6 +241,8 @@ pip3 uninstall tensorflow
pip3 install 'tensorflow-gpu==1.12.0'
```
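After installing `tensorflow-gpu`, you can confirm that TensorFlow actually sees a CUDA device. `device_lib.list_local_devices()` is the standard TensorFlow 1.x way to enumerate devices; the wrapper function here is a hypothetical convenience, not part of DeepSpeech:

```python
def visible_gpus():
    """Return the names of GPU devices TensorFlow can see, or an empty
    list if tensorflow-gpu is not installed or no GPU is usable.
    """
    try:
        from tensorflow.python.client import device_lib
    except ImportError:
        return []
    return [d.name for d in device_lib.list_local_devices()
            if d.device_type == "GPU"]
```

An empty list usually points back to the [CUDA dependency](#cuda-dependency) not being satisfied rather than a problem with the Python package itself.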

Please ensure you have the required [CUDA dependency](#cuda-dependency).

### Common Voice training data

The Common Voice corpus consists of voice samples that were donated through [Common Voice](https://voice.mozilla.org/).
