From 89184ab1c604bcb0729ecf6be7cb0e458e179581 Mon Sep 17 00:00:00 2001 From: Bharath Ramaswamy Date: Fri, 13 Sep 2024 09:13:31 -0700 Subject: [PATCH] Fixed installation docs for 1.34 release (#3667) * Fixed wheel file naming mechanism in installation (#3292) * Updated versioning and naming of wheel files in conf.py and install RST file * Updated installation instructions and fixed wheel file installation commands (#3335) * Updated docker and host wheel install instructions to make them more generic across releases * Updated install instructions: torch GPU to cuda 2.1 and onnx to 1.16 * Added pytorch variant cuda versions Signed-off-by: Bharath Ramaswamy Signed-off-by: Joel Polizzi Co-authored-by: Joel Polizzi Co-authored-by: Bharath Ramaswamy Signed-off-by: Bharath Ramaswamy --- Docs/conf.py | 1 + Docs/install/index.rst | 39 +++++++++-------- Docs/install/install_docker.rst | 74 +++++++++++---------------------- Docs/install/install_host.rst | 70 ++++++++++--------------------- 4 files changed, 68 insertions(+), 116 deletions(-) diff --git a/Docs/conf.py b/Docs/conf.py index a043d0c4397..2006057761e 100644 --- a/Docs/conf.py +++ b/Docs/conf.py @@ -173,6 +173,7 @@ def setup(app): .. |author| replace:: {author} .. |project| replace:: {project} .. |default-quantsim-config-file| replace:: aimet_common/quantsim_config/default_config.json +.. |version| replace:: {version} """.format(project=project, author=author, version=version) # -- Options for LaTeX output ------------------------------------------------ diff --git a/Docs/install/index.rst b/Docs/install/index.rst index ef0abb4eeef..2953f697a6d 100644 --- a/Docs/install/index.rst +++ b/Docs/install/index.rst @@ -47,10 +47,10 @@ The AIMET PyTorch GPU PyPI packages are available for environments that meet the * 64-bit Intel x86-compatible processor * Linux Ubuntu 22.04 LTS [Python 3.10] or Ubuntu 20.04 LTS [Python 3.8] -* Cuda 12.0 +* CUDA 12.0 * Torch 2.2.2 -**Pip install:** +**Pip install** .. code-block:: @@ -61,16 +61,20 @@ The AIMET PyTorch GPU PyPI packages are available for environments that meet the Release Packages ~~~~~~~~~~~~~~~~ -For other aimet variants, install the latest version from the .whl files hosted at https://github.com/quic/aimet/releases +For other AIMET variants, install the *latest* version from the .whl files hosted at https://github.com/quic/aimet/releases **PyTorch** .. parsed-literal:: + # Pytorch 2.1 with CUDA 12.x + python3 -m pip install |download_url|\ |version|/aimet_torch-\ |version|.cu121\ |whl_suffix| + + # Pytorch 2.1 CPU only + python3 -m pip install |download_url|\ |version|/aimet_torch-\ |version|.cpu\ |whl_suffix| + # Pytorch 1.13 with CUDA 11.x - python3 -m pip install |download_url|\ |version|/aimet_torch-torch_gpu\_\ |version|\ |whl_suffix| - # Pytorch 1.13 CPU only - python3 -m pip install |download_url|\ |version|/aimet_torch-torch_cpu\_\ |version|\ |whl_suffix| + python3 -m pip install |download_url|\ |version|/aimet_torch-\ |version|.cu117\ |whl_suffix| **TensorFlow** @@ -78,31 +82,26 @@ For other aimet variants, install the latest version from the .whl files hosted .. 
parsed-literal:: # Tensorflow 2.10 GPU with CUDA 11.x - python3 -m pip install |download_url|\ |version|/aimet_tensorflow-tf_gpu\_\ |version|\ |whl_suffix| + python3 -m pip install |download_url|\ |version|/aimet_tensorflow-\ |version|.cu118\ |whl_suffix| + # Tensorflow 2.10 CPU only - python3 -m pip install |download_url|\ |version|/aimet_tensorflow-tf_cpu\_\ |version|\ |whl_suffix| + python3 -m pip install |download_url|\ |version|/aimet_tensorflow-\ |version|.cpu\ |whl_suffix| **Onnx** .. parsed-literal:: - # ONNX 1.14 GPU - python3 -m pip install |download_url|\ |version|/aimet_onnx-onnx_gpu\_\ |version|\ |whl_suffix| - # ONNX 1.14 CPU - python3 -m pip install |download_url|\ |version|/aimet_onnx-onnx_cpu\_\ |version|\ |whl_suffix| + # ONNX 1.16 GPU with CUDA 11.x + python3 -m pip install |download_url|\ |version|/aimet_onnx-\ |version|.cu117\ |whl_suffix| -For previous AIMET releases, browse packages at https://github.com/quic/aimet/releases. Each release includes multiple python packages of the following format: - -.. parsed-literal:: + # ONNX 1.16 CPU + python3 -m pip install |download_url|\ |version|/aimet_onnx-\ |version|.cpu\ |whl_suffix| - # VARIANT in {torch_gpu, torch_cpu, tf_gpu, tf_cpu, onnx_gpu, onnx_cpu} - # PACKAGE_PREFIX in {aimet_torch, aimet_tensorflow, aimet_onnx} - -_\ |whl_suffix| +For older versions, please browse the releases at https://github.com/quic/aimet/releases and follow the documentation corresponding to that release to select and install the appropriate package. -.. |version| replace:: 1.31.0 -.. |whl_suffix| replace:: -cp38-cp38-linux_x86_64.whl +.. |whl_suffix| replace:: -cp310-cp310-manylinux_2_34_x86_64.whl .. |download_url| replace:: \https://github.com/quic/aimet/releases/download/ System Requirements diff --git a/Docs/install/install_docker.rst b/Docs/install/install_docker.rst index 64e648d0c2c..8a46723bb07 100644 --- a/Docs/install/install_docker.rst +++ b/Docs/install/install_docker.rst @@ -48,7 +48,6 @@ Set the `` to ONE of the following depending on your desired var #. For the PyTorch 2.1 GPU variant, use `torch-gpu` #. For the PyTorch 2.1 CPU variant, use `torch-cpu` #. For the PyTorch 1.13 GPU variant, use `torch-gpu-pt113` - #. For the PyTorch 1.13 CPU variant, use `torch-cpu-pt113` #. For the TensorFlow GPU variant, use `tf-gpu` #. For the TensorFlow CPU variant, use `tf-cpu` #. For the ONNX GPU variant, use `onnx-gpu` @@ -121,73 +120,50 @@ Install AIMET packages From PyPI ========= -Aimet Torch GPU can install from pypi through the following method: +The default AIMET Torch GPU variant may be installed from PyPI as follows: + - Go to https://pypi.org/project/aimet-torch + - Browse the Requirements section of each Release to identify the version you wish to install. Following are some tips: + - For Pytorch 2.2.2 GPU with CUDA 12.1, use aimet-torch>=1.32.2 + - For Pytorch 2.1.2 GPU with CUDA 12.1, use aimet-torch==1.32.1.post1 + - For PyTorch 1.13 GPU with CUDA 11.7, use aimet-torch==1.31.2 -Go to https://pypi.org/project/aimet-torch to identify a version you wish to install - - - For PyTorch 1.13 GPU you should use aimet-torch==1.31.1 - - For Pytorch 2.1.2 GPU you should use aimet-torch >= 1.32.0 +Run the following commands to install the package (prepend with "sudo" and/or package version as needed): .. 
 .. code-block:: bash

-    sudo apt-get install liblapacke -y
-    pip install aimet-torch
-
+    apt-get install liblapacke -y
+    python3 -m pip install aimet-torch

 From Release Package
 ====================

-Alternatively, we host .whl packages for each release at https://github.com/quic/aimet/releases. Identify the release tag
-of the package you wish to install, then follow the instructions below to install AIMET from the .whl file.
-
-Set the  to ONE of the following depending on your desired variant
+We also host Python wheel packages for the different AIMET variants, which may be installed as follows:
+    - Go to https://github.com/quic/aimet/releases
+    - Identify the release tag of the package that you wish to install
+    - Identify the .whl file corresponding to the package variant that you wish to install
+    - Follow the instructions below to install AIMET from the .whl file

-#. For the PyTorch 2.1 GPU variant, use "torch_gpu"
-#. For the PyTorch 2.1 CPU variant, use "torch_cpu"
-#. For the PyTorch 1.13 GPU variant, use "torch_gpu-pt113"
-#. For the PyTorch 1.13 CPU variant, use "torch_cpu-pt113"
-#. For the TensorFlow GPU variant, use "tf_gpu"
-#. For the TensorFlow CPU variant, use "tf_cpu"
-#. For the ONNX GPU variant, use "onnx_gpu"
-#. For the ONNX CPU variant, use "onnx_cpu"
+Set the package details as follows:

 .. code-block:: bash

-    export AIMET_VARIANT=
-
-Replace  in the steps below with the appropriate tag:
-
-.. code-block:: bash
-
-    export release_tag=
-
-Set the package download URL as follows:
-
-.. code-block:: bash
+    # Set the release tag ex. "1.34.0"
+    export release_tag=""

+    # Construct the download root URL
     export download_url="https://github.com/quic/aimet/releases/download/${release_tag}"

-Set the common suffix for the package files as follows:
-
-.. code-block:: bash
-
-    export wheel_file_suffix="cp310-cp310-linux_x86_64.whl"
-
-Install the AIMET packages in the order specified below:
+    # Set the wheel file name with extension
+    # ex. "aimet_torch-1.34.0.cu121-cp310-cp310-manylinux_2_34_x86_64.whl"
+    export wheel_file_name=""

-**NOTE:**
-    #. Please pre-pend the "apt-get install" and "pip3 install" commands with "sudo -H" as appropriate.
-    #. These instructions assume that pip packages will be installed in the path: /usr/local/lib/python3.10/dist-packages. If that is not the case, please modify it accordingly.
-    #. Python dependencies will automatically get installed.
+Install the selected AIMET package as specified below; a worked example follows the command.

+**NOTE:** Python dependencies will automatically get installed.

 .. code-block:: bash

-    # Install ONE of the following depending on the variant
-    python3 -m pip install ${download_url}/aimet_torch-${AIMET_VARIANT}_${release_tag}-${wheel_file_suffix} -f https://download.pytorch.org/whl/torch_stable.html
-    # OR
-    python3 -m pip install ${download_url}/aimet_tensorflow-${AIMET_VARIANT}_${release_tag}-${wheel_file_suffix}
-    # OR
-    python3 -m pip install ${download_url}/aimet_onnx-${AIMET_VARIANT}_${release_tag}-${wheel_file_suffix}
+    python3 -m pip install ${download_url}/${wheel_file_name}
+
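+For example, to install the PyTorch GPU (CUDA 12.1) variant of the 1.34.0 release, the values might look like this (the wheel file name is illustrative; confirm the exact name listed under the release assets):
+
+.. code-block:: bash
+
+    # Illustrative values; verify the wheel file name on the releases page
+    export release_tag="1.34.0"
+    export download_url="https://github.com/quic/aimet/releases/download/${release_tag}"
+    export wheel_file_name="aimet_torch-1.34.0.cu121-cp310-cp310-manylinux_2_34_x86_64.whl"
+    python3 -m pip install ${download_url}/${wheel_file_name}
+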

 Environment setup
 ~~~~~~~~~~~~~~~~~
diff --git a/Docs/install/install_host.rst b/Docs/install/install_host.rst
index 294a4298064..a67a2a7ab37 100755
--- a/Docs/install/install_host.rst
+++ b/Docs/install/install_host.rst
@@ -100,73 +100,49 @@ Install AIMET packages
 From PyPI
 =========

-Aimet Torch GPU can install from pypi through the following method:
+The default AIMET Torch GPU variant may be installed from PyPI as follows:
+    - Go to https://pypi.org/project/aimet-torch
+    - Browse the Requirements section of each release to identify the version you wish to install. Some tips (see the pinned-version example after the commands below):
+        - For PyTorch 2.2.2 GPU with CUDA 12.1, use aimet-torch>=1.32.2
+        - For PyTorch 2.1.2 GPU with CUDA 12.1, use aimet-torch==1.32.1.post1
+        - For PyTorch 1.13 GPU with CUDA 11.7, use aimet-torch==1.31.2

-Go to https://pypi.org/project/aimet-torch to identify a version you wish to install
-
-    - For PyTorch 1.13 GPU you should use aimet-torch==1.31.1
-    - For Pytorch 2.1.2 GPU you should use aimet-torch >= 1.32.0
+Run the following commands to install the package (prepend with "sudo" and/or pin a specific package version as needed):

 .. code-block:: bash

-    sudo apt-get install liblapacke -y
-    pip install aimet-torch
-
+    apt-get install liblapacke -y
+    python3 -m pip install aimet-torch
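+
+For example, to pin the PyTorch 2.1.2 / CUDA 12.1 variant from the tips above (the pin is illustrative; choose the version that matches your environment):
+
+.. code-block:: bash
+
+    # Illustrative pinned install; substitute the aimet-torch version you selected above
+    python3 -m pip install "aimet-torch==1.32.1.post1"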
"1.34.0" + export release_tag="" + # Construct the download root URL export download_url="https://github.com/quic/aimet/releases/download/${release_tag}" -Set the common suffix for the package files as follows: - -**NOTE:** Set wheel_file_suffix to cp310-cp310-linux_x86_64.whl OR cp38-cp38-linux_x86_64.whl OR cp36-cp36m-linux_x86_64 OR cp37-cp37m-linux_x86_64 OR py3-none-any as appropriate depending on the actual wheel filename(s) on the https://github.com/quic/aimet/releases. - -.. code-block:: bash - - export wheel_file_suffix="cp310-cp310-linux_x86_64.whl" - -Install the AIMET packages in the order specified below: + # Set the wheel file name with extension + # ex. "aimet_torch-1.33.0.cu121-cp310-cp310-manylinux_2_34_x86_64.whl" + export wheel_file_name="" +Install the selected AIMET package as specified below: **NOTE:** Python dependencies will automatically get installed. .. code-block:: bash - # Install ONE of the following depending on the variant - python3 -m pip install ${download_url}/aimet_torch-${AIMET_VARIANT}_${release_tag}-${wheel_file_suffix} -f https://download.pytorch.org/whl/torch_stable.html - # OR - python3 -m pip install ${download_url}/aimet_tensorflow-${AIMET_VARIANT}_${release_tag}-${wheel_file_suffix} - # OR - python3 -m pip install ${download_url}/aimet_onnx-${AIMET_VARIANT}_${release_tag}-${wheel_file_suffix} - + python3 -m pip install ${download_url}/${wheel_file_name} Install common debian packages ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~