Merge pull request #692 from desh2608/style_change_2.0
Style change 2.0
csukuangfj committed Nov 19, 2022
2 parents b3920e5 + fbe1e35 commit 500792d
Showing 439 changed files with 3,965 additions and 7,430 deletions.
3 changes: 3 additions & 0 deletions .git-blame-ignore-revs
@@ -0,0 +1,3 @@
# Migrate to 88 characters per line (see: https://github.com/lhotse-speech/lhotse/issues/890)
107df3b115a58f1b68a6458c3f94a130004be34c
d31db010371a4128856480382876acdc0d1739ed
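The file above only takes effect once git is told to read it. A sketch of the one-time setup, shown against a throwaway repo so the snippet is self-contained (inside a real icefall clone, only the `git config` call is needed):

```python
# Configure a clone so `git blame` skips the bulk-reformat commits listed
# in .git-blame-ignore-revs. The throwaway repo below only exists to make
# this runnable end to end.
import subprocess
import tempfile

repo = tempfile.mkdtemp()
subprocess.run(["git", "init", "-q", repo], check=True)

# Write one of the reformat commit hashes from the file above.
with open(f"{repo}/.git-blame-ignore-revs", "w") as f:
    f.write("107df3b115a58f1b68a6458c3f94a130004be34c\n")

# The actual setup step: point blame at the ignore file.
subprocess.run(
    ["git", "-C", repo, "config", "blame.ignoreRevsFile", ".git-blame-ignore-revs"],
    check=True,
)

# Verify the setting took effect.
out = subprocess.run(
    ["git", "-C", repo, "config", "blame.ignoreRevsFile"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(out)  # .git-blame-ignore-revs
```

After this, `git blame` attributes lines to their real authors instead of to the reformatting commits.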
11 changes: 6 additions & 5 deletions .github/workflows/style_check.yml
@@ -45,17 +45,18 @@ jobs:
 
       - name: Install Python dependencies
         run: |
-          python3 -m pip install --upgrade pip black==21.6b0 flake8==3.9.2 click==8.0.4
-          # See https://github.com/psf/black/issues/2964
-          # The version of click should be selected from 8.0.0, 8.0.1, 8.0.2, 8.0.3, and 8.0.4
+          python3 -m pip install --upgrade pip black==22.3.0 flake8==5.0.4 click==8.1.0
+          # Click issue fixed in https://github.com/psf/black/pull/2966
       - name: Run flake8
         shell: bash
         working-directory: ${{github.workspace}}
         run: |
           # stop the build if there are Python syntax errors or undefined names
-          flake8 . --count --show-source --statistics
-          flake8 .
+          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
+          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
+          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 \
+            --statistics --extend-ignore=E203,E266,E501,F401,E402,F403,F841,W503
       - name: Run black
         shell: bash
28 changes: 20 additions & 8 deletions .pre-commit-config.yaml
@@ -1,26 +1,38 @@
 repos:
   - repo: https://github.com/psf/black
-    rev: 21.6b0
+    rev: 22.3.0
     hooks:
       - id: black
-        args: [--line-length=80]
-        additional_dependencies: ['click==8.0.1']
+        args: ["--line-length=88"]
+        additional_dependencies: ['click==8.1.0']
         exclude: icefall\/__init__\.py
 
   - repo: https://github.com/PyCQA/flake8
-    rev: 3.9.2
+    rev: 5.0.4
     hooks:
       - id: flake8
-        args: [--max-line-length=80]
+        args: ["--max-line-length=88", "--extend-ignore=E203,E266,E501,F401,E402,F403,F841,W503"]
+
+        # What are we ignoring here?
+        # E203: whitespace before ':'
+        # E266: too many leading '#' for block comment
+        # E501: line too long
+        # F401: module imported but unused
+        # E402: module level import not at top of file
+        # F403: 'from module import *' used; unable to detect undefined names
+        # F841: local variable is assigned to but never used
+        # W503: line break before binary operator
+        # In addition, the default ignore list is:
+        # E121,E123,E126,E226,E24,E704,W503,W504
 
   - repo: https://github.com/pycqa/isort
-    rev: 5.9.2
+    rev: 5.10.1
     hooks:
       - id: isort
-        args: [--profile=black, --line-length=80]
+        args: ["--profile=black"]
 
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v4.0.1
+    rev: v4.2.0
     hooks:
       - id: check-executables-have-shebangs
       - id: end-of-file-fixer
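Of the codes in the ignore list above, E203 is the one most directly tied to black: when a slice bound is a compound expression, black puts a space before the `:`, which flake8's default E203 check would flag. An illustrative snippet (plain Python, not icefall code):

```python
# black formats a slice with a compound lower bound like this, treating
# the colon as an operator with symmetric spacing. flake8 would report
# E203 (whitespace before ':') here unless E203 is ignored.
items = list(range(10))
n = 3
tail = items[len(items) - n :]  # E203-style slice, as black writes it
print(tail)  # [7, 8, 9]
```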
24 changes: 12 additions & 12 deletions docker/README.md
@@ -2,15 +2,15 @@

Two sets of configuration are provided: (a) Ubuntu18.04-pytorch1.12.1-cuda11.3-cudnn8, and (b) Ubuntu18.04-pytorch1.7.1-cuda11.0-cudnn8.

If your NVIDIA driver supports CUDA Version: 11.3, please go for case (a) Ubuntu18.04-pytorch1.12.1-cuda11.3-cudnn8.

Otherwise, since the older PyTorch images are not updated with the [apt-key rotation by NVIDIA](https://developer.nvidia.com/blog/updating-the-cuda-linux-gpg-repository-key), you have to go for case (b) Ubuntu18.04-pytorch1.7.1-cuda11.0-cudnn8. Ensure that your NVIDIA driver supports at least CUDA 11.0.

You can check the highest CUDA version your NVIDIA driver supports with the `nvidia-smi` command below. In this example, the highest CUDA version is 11.0, i.e. case (b) Ubuntu18.04-pytorch1.7.1-cuda11.0-cudnn8.

```bash
$ nvidia-smi
Tue Sep 20 00:26:13 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.119.03 Driver Version: 450.119.03 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
@@ -26,7 +26,7 @@ Tue Sep 20 00:26:13 2022
| 41% 30C P8 11W / 280W | 6MiB / 24220MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
@@ -40,15 +40,15 @@ Tue Sep 20 00:26:13 2022
```
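When scripting the image choice, the CUDA version can be pulled out of the `nvidia-smi` banner programmatically. A minimal sketch in Python; the regex is an assumption about the banner layout, and the sample string stands in for real `nvidia-smi` output:

```python
# Extract the highest supported CUDA version from the nvidia-smi banner.
# On a real machine, feed in the output of `nvidia-smi` instead of the
# sample line below (taken from the output shown above).
import re

banner = "| NVIDIA-SMI 450.119.03    Driver Version: 450.119.03    CUDA Version: 11.0     |"
match = re.search(r"CUDA Version:\s*([\d.]+)", banner)
cuda_version = match.group(1) if match else None
print(cuda_version)  # 11.0 -> image (b); 11.3 or newer -> image (a)
```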

## Building images locally
If your environment requires a proxy to access the Internet, remember to add that information directly into the Dockerfile.
In most cases, you can uncomment these lines in the Dockerfile and fill in your proxy details.

```dockerfile
ENV http_proxy=http://aaa.bb.cc.net:8080 \
https_proxy=http://aaa.bb.cc.net:8080
```

Then, proceed with these commands.

### If you are case (a), i.e. your NVIDIA driver supports CUDA version >= 11.3:

@@ -72,11 +72,11 @@ docker run -it --runtime=nvidia --shm-size=2gb --name=icefall --gpus all icefall
```

### Tips:
1. Since your data and models most probably won't be in the docker, you must use the -v flag to access the host machine. Do this by specifying `-v {/path/in/host/machine}:{/path/in/docker}`.

2. Also, if your environment requires a proxy, this would be a good time to add it in too: `-e http_proxy=http://aaa.bb.cc.net:8080 -e https_proxy=http://aaa.bb.cc.net:8080`.

Overall, your docker run command should look like this.

```bash
docker run -it --runtime=nvidia --shm-size=2gb --name=icefall --gpus all -v {/path/in/host/machine}:{/path/in/docker} -e http_proxy=http://aaa.bb.cc.net:8080 -e https_proxy=http://aaa.bb.cc.net:8080 icefall/pytorch1.12.1
@@ -86,9 +86,9 @@ You can explore more docker run options [here](https://docs.docker.com/engine/re

### Linking to icefall in your host machine

If you already have icefall downloaded onto your host machine, you can use that repository instead so that changes in your code are visible inside and outside of the container.

Note: Remember to set the -v flag above during the first run of the container, as that is the only way for your container to access your host machine.
Warning: Check that the icefall in your host machine is visible from within your container before proceeding to the commands below.

Use these commands once you are inside the container.
@@ -103,12 +103,12 @@ ln -s {/path/in/docker/to/icefall} /workspace/icefall
docker exec -it icefall /bin/bash
```

## Restarting a killed container that has been run before.
```bash
docker start -ai icefall
```

## Sample usage of the CPU based images:
```bash
docker run -it icefall /bin/bash
```
14 changes: 7 additions & 7 deletions docker/Ubuntu18.04-pytorch1.12.1-cuda11.3-cudnn8/Dockerfile
@@ -1,7 +1,7 @@
FROM pytorch/pytorch:1.12.1-cuda11.3-cudnn8-devel

# ENV http_proxy=http://aaa.bbb.cc.net:8080 \
#     https_proxy=http://aaa.bbb.cc.net:8080

# install normal source
RUN apt-get update && \
@@ -38,22 +38,22 @@ RUN wget -P /opt https://cmake.org/files/v3.18/cmake-3.18.0.tar.gz && \
rm -rf cmake-3.18.0.tar.gz && \
find /opt/cmake-3.18.0 -type f \( -name "*.o" -o -name "*.la" -o -name "*.a" \) -exec rm {} \; && \
cd -

# flac
RUN wget -P /opt https://downloads.xiph.org/releases/flac/flac-1.3.2.tar.xz && \
cd /opt && \
xz -d flac-1.3.2.tar.xz && \
tar -xvf flac-1.3.2.tar && \
cd flac-1.3.2 && \
./configure && \
make && make install && \
rm -rf flac-1.3.2.tar && \
find /opt/flac-1.3.2 -type f \( -name "*.o" -o -name "*.la" -o -name "*.a" \) -exec rm {} \; && \
cd -

RUN conda install -y -c pytorch torchaudio=0.12 && \
pip install graphviz


# install k2 from source
RUN git clone https://github.com/k2-fsa/k2.git /opt/k2 && \
@@ -68,7 +68,7 @@ RUN git clone https://github.com/k2-fsa/icefall /workspace/icefall && \
cd /workspace/icefall && \
pip install -r requirements.txt

RUN pip install kaldifeat
ENV PYTHONPATH /workspace/icefall:$PYTHONPATH

WORKDIR /workspace/icefall
17 changes: 8 additions & 9 deletions docker/Ubuntu18.04-pytorch1.7.1-cuda11.0-cudnn8/Dockerfile
@@ -1,12 +1,12 @@
FROM pytorch/pytorch:1.7.1-cuda11.0-cudnn8-devel

# ENV http_proxy=http://aaa.bbb.cc.net:8080 \
#     https_proxy=http://aaa.bbb.cc.net:8080

RUN rm /etc/apt/sources.list.d/cuda.list && \
rm /etc/apt/sources.list.d/nvidia-ml.list && \
apt-key del 7fa2af80

# install normal source
RUN apt-get update && \
apt-get install -y --no-install-recommends \
@@ -36,7 +36,7 @@ RUN curl -fsSL https://developer.download.nvidia.com/compute/cuda/repos/ubuntu18
curl -fsSL https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/7fa2af80.pub | apt-key add - && \
echo "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 /" > /etc/apt/sources.list.d/cuda.list && \
echo "deb https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 /" > /etc/apt/sources.list.d/nvidia-ml.list && \
rm -rf /var/lib/apt/lists/* && \
mv /opt/conda/lib/libcufft.so.10 /opt/libcufft.so.10.bak && \
mv /opt/conda/lib/libcurand.so.10 /opt/libcurand.so.10.bak && \
mv /opt/conda/lib/libcublas.so.11 /opt/libcublas.so.11.bak && \
@@ -56,18 +56,18 @@ RUN wget -P /opt https://cmake.org/files/v3.18/cmake-3.18.0.tar.gz && \
rm -rf cmake-3.18.0.tar.gz && \
find /opt/cmake-3.18.0 -type f \( -name "*.o" -o -name "*.la" -o -name "*.a" \) -exec rm {} \; && \
cd -

# flac
RUN wget -P /opt https://downloads.xiph.org/releases/flac/flac-1.3.2.tar.xz && \
cd /opt && \
xz -d flac-1.3.2.tar.xz && \
tar -xvf flac-1.3.2.tar && \
cd flac-1.3.2 && \
./configure && \
make && make install && \
rm -rf flac-1.3.2.tar && \
find /opt/flac-1.3.2 -type f \( -name "*.o" -o -name "*.la" -o -name "*.a" \) -exec rm {} \; && \
cd -

RUN conda install -y -c pytorch torchaudio=0.7.1 && \
pip install graphviz
@@ -79,7 +79,7 @@ RUN git clone https://github.com/k2-fsa/k2.git /opt/k2 && \
cd -

# install lhotse
RUN pip install git+https://github.com/lhotse-speech/lhotse

RUN git clone https://github.com/k2-fsa/icefall /workspace/icefall && \
cd /workspace/icefall && \
@@ -88,4 +88,3 @@ RUN git clone https://github.com/k2-fsa/icefall /workspace/icefall && \
ENV PYTHONPATH /workspace/icefall:$PYTHONPATH

WORKDIR /workspace/icefall

15 changes: 11 additions & 4 deletions docs/source/contributing/code-style.rst
@@ -11,9 +11,9 @@ We use the following tools to make the code style to be as consistent as possible
 
 The following versions of the above tools are used:
 
-- ``black == 12.6b0``
-- ``flake8 == 3.9.2``
-- ``isort == 5.9.2``
+- ``black == 22.3.0``
+- ``flake8 == 5.0.4``
+- ``isort == 5.10.1``
 
 After running the following commands:
 
@@ -54,10 +54,17 @@ it should succeed this time:
 
 If you want to check the style of your code before ``git commit``, you
 can do the following:
 
+.. code-block:: bash
+
+  $ pre-commit install
+  $ pre-commit run
+
+Or without installing the pre-commit hooks:
+
 .. code-block:: bash
 
   $ cd icefall
-  $ pip install black==21.6b0 flake8==3.9.2 isort==5.9.2
+  $ pip install black==22.3.0 flake8==5.0.4 isort==5.10.1
   $ black --check your_changed_file.py
   $ black your_changed_file.py  # modify it in-place
   $
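The tools above enforce black's 88-character line length (the migration recorded in `.git-blame-ignore-revs`). For a quick dependency-free check of the same limit, a sketch with a hypothetical helper, not part of icefall:

```python
# Report every line of a source string that exceeds the 88-character limit
# that black/flake8 are configured for in this PR.
def overlong_lines(source: str, limit: int = 88):
    """Return (line_number, length) pairs for lines exceeding `limit`."""
    return [
        (i, len(line))
        for i, line in enumerate(source.splitlines(), start=1)
        if len(line) > limit
    ]

# Line 1 is short; line 2 is 106 characters and should be flagged.
code = "x = 1\n" + "y = '" + "a" * 100 + "'\n"
print(overlong_lines(code))  # [(2, 106)]
```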
2 changes: 1 addition & 1 deletion docs/source/installation/images/k2-gt-v1.9-blueviolet.svg
2 changes: 1 addition & 1 deletion docs/source/installation/images/python-gt-v3.6-blue.svg
2 changes: 1 addition & 1 deletion docs/source/installation/images/torch-gt-v1.6.0-green.svg
1 change: 0 additions & 1 deletion docs/source/recipes/aishell/index.rst
@@ -19,4 +19,3 @@ It can be downloaded from `<https://www.openslr.org/33/>`_
tdnn_lstm_ctc
conformer_ctc
stateless_transducer

1 change: 0 additions & 1 deletion docs/source/recipes/timit/index.rst
@@ -6,4 +6,3 @@ TIMIT

tdnn_ligru_ctc
tdnn_lstm_ctc


0 comments on commit 500792d
