diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 82801b928..12e8efa33 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -71,17 +71,29 @@ for a minimal build of NVIDIA cuOpt without using conda are also listed below. Compilers: -* `gcc` version 11.4+ -* `nvcc` version 11.8+ -* `cmake` version 3.29.6+ +These are installed automatically while creating the Conda environment: + +* `gcc` version 13.0+ +* `nvcc` version 12.8+ +* `cmake` version 3.30.4+ CUDA/GPU Runtime: -* CUDA 11.4+ +* CUDA 12.8 * Volta architecture or better ([Compute Capability](https://docs.nvidia.com/deploy/cuda-compatibility/) >=7.0) -You can obtain CUDA from -[https://developer.nvidia.com/cuda-downloads](https://developer.nvidia.com/cuda-downloads). +Python: + +* Python >=3.10.x, <= 3.12.x + +OS: + +* Only Linux is supported + +Architecture: + +* x86_64 (64-bit) +* aarch64 (64-bit) ### Build NVIDIA cuOpt from source @@ -219,6 +231,12 @@ set_source_files_properties(src/routing/data_model_view.cu PROPERTIES COMPILE_OP This will add the device debug symbols for this object file in `libcuopt.so`. You can then use `cuda-gdb` to debug into the kernels in that source file. +## Adding dependencies + +Add any new dependencies in the [dependencies.yaml](dependencies.yaml) file, and refer to that file for details on how to add them. +It takes care of conda, requirements (pip-based dependencies), and pyproject. +Please do not add dependencies directly to the environment.yaml files under the `conda/environments` directory or to the pyproject.toml files under the `python` directories. 
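When sanity-checking the toolchain requirements above, version strings should be compared numerically rather than lexically (so that, for example, 3.30.4 correctly exceeds 3.29.6). A minimal sketch in Python; the helper name is illustrative, and the minimums mirror the list above:

```python
def meets_minimum(version: str, minimum: str) -> bool:
    """Compare dotted version strings numerically, e.g. "3.30.4" >= "3.29.6"."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(version) >= parse(minimum)

# Minimum toolchain versions for building cuOpt from source
minimums = {"gcc": "13.0", "nvcc": "12.8", "cmake": "3.30.4"}

print(meets_minimum("3.30.4", minimums["cmake"]))  # True: exactly the minimum
print(meets_minimum("3.29.6", minimums["cmake"]))  # False: older cmake
```

A plain string comparison would get cases like `"12.10" >= "12.8"` wrong, which is why the versions are split into integer tuples first.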
+ ## Code Formatting ### Using pre-commit hooks @@ -303,6 +321,5 @@ You can skip these checks with `git commit --no-verify` or with the short versio (d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved. ``` - diff --git a/README.md b/README.md index 61011ecf2..39d45e4af 100644 --- a/README.md +++ b/README.md @@ -2,24 +2,30 @@ [![Build Status](https://github.com/NVIDIA/cuopt/actions/workflows/build.yaml/badge.svg)](https://github.com/NVIDIA/cuopt/actions/workflows/build.yaml) -NVIDIA® cuOpt™ is a GPU-accelerated optimization engine that excels in mixed integer programming (MIP), linear programming (LP), and vehicle routing problems (VRP). It enables near real-time solutions for large-scale challenges with millions of variables and constraints, offering easy integration into existing solvers and seamless deployment across hybrid and multi-cloud environments. +NVIDIA® cuOpt™ is a GPU-accelerated optimization engine that excels in mixed integer linear programming (MILP), linear programming (LP), and vehicle routing problems (VRP). It enables near real-time solutions for large-scale challenges with millions of variables and constraints, offering +easy integration into existing solvers and seamless deployment across hybrid and multi-cloud environments. -For the latest stable version ensure you are on the `main` branch. - -## Build from Source +The core engine is written in C++ and is wrapped by the C API, Python API, and Server API. -Please see our [guide for building cuOpt from source](CONTRIBUTING.md#build-nvidia-cuopt-from-source) - -## Contributing Guide +For the latest stable version, ensure you are on the `main` branch. 
-Review the [CONTRIBUTING.md](CONTRIBUTING.md) file for information on how to contribute code and issues to the project. +## Supported APIs -## Resources +cuOpt supports the following APIs: -- [cuopt (Python) documentation](https://docs.nvidia.com/cuopt/user-guide/latest/introduction.html) -- [libcuopt (C++/CUDA) documentation](https://docs.nvidia.com/cuopt/user-guide/latest/introduction.html) -- [Examples and Notebooks](https://github.com/NVIDIA/cuopt-examples) -- [Test cuopt with Brev](https://brev.nvidia.com/launchable/deploy?launchableID=env-2qIG6yjGKDtdMSjXHcuZX12mDNJ): Examples notebooks are pulled and hosted on [Brev](https://docs.nvidia.com/brev/latest/). +- C API support + - Linear Programming (LP) + - Mixed Integer Linear Programming (MILP) +- C++ API support + - cuOpt is written in C++ and includes a native C++ API. However, we do not provide documentation for the C++ API at this time. We anticipate that the C++ API will change significantly in the future. Use it at your own risk. +- Python support + - Routing (TSP, VRP, and PDP) + - Linear Programming (LP) and Mixed Integer Linear Programming (MILP) + - cuOpt includes a Python API that is used as the backend of the cuOpt server. However, we do not provide documentation for the Python API at this time. We suggest using cuOpt server to access cuOpt via Python. We anticipate that the Python API will change significantly in the future. Use it at your own risk. 
+- Server support + - Linear Programming (LP) + - Mixed Integer Linear Programming (MILP) + - Routing (TSP, VRP, and PDP) ## Installation @@ -29,8 +35,24 @@ Review the [CONTRIBUTING.md](CONTRIBUTING.md) file for information on how to con * NVIDIA driver >= 525.60.13 (Linux) and >= 527.41 (Windows) * Volta architecture or better (Compute Capability >=7.0) +### Python requirements + +* Python >=3.10.x, <= 3.12.x + +### OS requirements + +* Linux is supported natively; Windows is supported via WSL2 + * x86_64 (64-bit) + * aarch64 (64-bit) + +Note: WSL2 is tested for running cuOpt, but not for building it. + +More details on system requirements can be found [here](https://docs.nvidia.com/cuopt/user-guide/latest/system-requirements.html). + ### Pip +Pip wheels are easy to install and configure. Users whose existing workflows are built around pip can use it to install cuOpt. + cuOpt can be installed via `pip` from the NVIDIA Python Package Index. Be sure to select the appropriate cuOpt package depending on the major version of CUDA available in your environment: @@ -38,21 +60,50 @@ on the major version of CUDA available in your environment: For CUDA 12.x: ```bash -pip install --extra-index-url=https://pypi.nvidia.com cuopt-cu12 +pip install --extra-index-url=https://pypi.nvidia.com cuopt-server-cu12==25.5 cuopt-sh-client==25.5 nvidia-cuda-runtime-cu12==12.8.* ``` ### Conda cuOpt can be installed with conda (via [miniforge](https://github.com/conda-forge/miniforge)) from the `nvidia` channel: +All other dependencies are installed automatically when cuopt-server and cuopt-sh-client are installed. + +Users who prefer conda-based environment workflows can use the readily available cuOpt conda packages. 
For CUDA 12.x: ```bash conda install -c rapidsai -c conda-forge -c nvidia \ - cuopt=25.05 python=3.12 cuda-version=12.8 + cuopt-server=25.05 cuopt-sh-client=25.05 python=3.12 cuda-version=12.8 ``` We also provide [nightly Conda packages](https://anaconda.org/rapidsai-nightly) built from the HEAD of our latest development branch. -Note: cuOpt is supported only on Linux, and with Python versions 3.10 and later. \ No newline at end of file +### Container + +Users can pull the cuOpt container from Docker Hub. + +```bash +docker pull nvidia/cuopt:25.5.0-cuda12.8-py312 +``` +More information about the cuOpt container can be found [here](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-server/quick-start.html#container-from-docker-hub). + +The cuOpt container is a quick way to get started, whether for testing and research or for plugging cuOpt into a workflow as a service. Users running it as a service are required to build security layers around it to safeguard the service from untrusted users. + +## Build from Source and Test + +Please see our [guide for building cuOpt from source](CONTRIBUTING.md#setting-up-your-build-environment). Building from source is helpful if users want to add new features, fix bugs, or customize cuOpt for use cases that require changes to the cuOpt source code. + +## Contributing Guide + +Review the [CONTRIBUTING.md](CONTRIBUTING.md) file for information on how to contribute code and issues to the project. 
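After installing via pip or conda as described above, a quick way to confirm which cuOpt packages landed in the environment is to query the installed package metadata. A minimal sketch; the package names follow the pip install command above, and the helper name is illustrative:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package: str) -> "str | None":
    """Return the installed version of a distribution, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# Report which of the cuOpt packages are present in this environment
for pkg in ["cuopt-server-cu12", "cuopt-sh-client"]:
    v = installed_version(pkg)
    print(f"{pkg}: {v if v else 'not installed'}")
```

This only checks package metadata; it does not verify that a compatible GPU and driver are available.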
+ +## Resources + +- [libcuopt (C) documentation](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-c/index.html) +- [cuopt (Python) documentation](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-python/index.html) +- [cuopt (Server) documentation](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-server/index.html) +- [Examples and Notebooks](https://github.com/NVIDIA/cuopt-examples) +- [Test cuopt with NVIDIA Launchable](https://brev.nvidia.com/launchable/deploy?launchableID=env-2qIG6yjGKDtdMSjXHcuZX12mDNJ): Example notebooks are pulled and hosted on [NVIDIA Launchable](https://docs.nvidia.com/brev/latest/). +- [Test cuopt on Google Colab](https://colab.research.google.com/github/nvidia/cuopt-examples/): Example notebooks can be opened in Google Colab. Please note that you need to select a GPU runtime in order to run the notebooks. \ No newline at end of file diff --git a/benchmarks/README.md b/benchmarks/README.md new file mode 100644 index 000000000..9ce20f69e --- /dev/null +++ b/benchmarks/README.md @@ -0,0 +1,6 @@ +# Benchmark Scripts + +This directory contains the scripts for the benchmarks. + + + diff --git a/ci/README.md b/ci/README.md new file mode 100644 index 000000000..b1344a947 --- /dev/null +++ b/ci/README.md @@ -0,0 +1,49 @@ +# CI scripts + +This directory contains the scripts for the CI pipeline. + +CI builds are triggered by the `pr.yaml`, `build.yaml` and `test.yaml` files in the `.github/workflows` directory, and the scripts here are used from those workflows to build and test the code. + +cuOpt is packaged in the following ways: + +## PIP package + +### Build + +The scripts for building the PIP packages are named `build_wheel_<package>.sh`. For example, `build_wheel_cuopt.sh` is used to build the PIP package for cuOpt. + +Please refer to existing scripts for more details on how you can add a new script for a new package. + +### Test + +The scripts for testing the PIP packages are named `test_wheel_<package>.sh`. 
For example, `test_wheel_cuopt.sh` is used to test the PIP package for cuOpt. + +Please refer to existing scripts for more details on how you can add a new script for a new package. + +## Conda Package + +### Build + +For Conda packages: + +- all cpp libraries are built under one script called `build_cpp.sh`. +- all python bindings are built under one script called `build_python.sh`. + +So if there are new cpp libraries or python bindings, you need to add them to the respective scripts. + + +### Test + +Similarly, for Conda packages: + +- all cpp libraries are tested under one script called `test_cpp.sh`. +- all python bindings are tested under one script called `test_python.sh`. + + +There are other scripts in this directory that are used from the workflows as utilities to build and test the code. + + + + + + diff --git a/cmake/README.md b/cmake/README.md new file mode 100644 index 000000000..b0ab92484 --- /dev/null +++ b/cmake/README.md @@ -0,0 +1,6 @@ +# CMake for RAPIDS configuration + +This directory contains the CMake files for the RAPIDS configuration. + + + diff --git a/conda/README.md b/conda/README.md new file mode 100644 index 000000000..8188d0a7d --- /dev/null +++ b/conda/README.md @@ -0,0 +1,8 @@ +# Conda Recipes and Environment + +This directory contains the conda recipes for the cuOpt packages, which are used to build the conda packages in CI. + +It also contains the environment files that are used to create the conda environment for cuOpt development and CI testing. + + + diff --git a/cpp/README.md b/cpp/README.md new file mode 100644 index 000000000..974f22cc7 --- /dev/null +++ b/cpp/README.md @@ -0,0 +1,61 @@ +# C++ Modules + +This directory contains the C++ modules for the cuOpt project. + +Please refer to the [CMakeLists.txt](CMakeLists.txt) file for details on how to add new modules and tests. + +Most of the dependencies are defined in the [dependencies.yaml](../dependencies.yaml) file. 
Please refer to the different sections in the [dependencies.yaml](../dependencies.yaml) file for more details. However, some dependencies, for example `cccl` and `rmm`, are defined in [thirdparty modules](cmake/thirdparty/) in cases where the source code is needed for the build. + + +## Include Structure + +Add headers for any new modules under the `include/cuopt/` directory. + +```bash +cpp/ +├── include/ +│   ├── cuopt/ +│   │   └── linear_programming/ +│   │       └── ... +│   │   └── routing/ +│   │       └── ... +│   └── ... +└── ... +``` + +## Source Structure + +Add sources for any new modules under the `src/cuopt/` directory. + +```bash +cpp/ +├── src/ +│   ├── cuopt/ +│   │   └── linear_programming/ +│   │       └── ... +│   │   └── routing/ +│   │       └── ... +└── ... +``` + +## Test Structure + +Add tests for any new modules under the `test/cuopt/` directory. + +```bash +cpp/ +├── test/ +│   ├── cuopt/ +│   │   └── linear_programming/ +│   │       └── ... +│   │   └── routing/ +│   │       └── ... +└── ... +``` + +## MPS parser + +The MPS parser is a standalone module that parses MPS files and converts them into a format that can be used by the cuOpt library. + +It is located in the `libmps_parser` directory. This also contains the `CMakeLists.txt` file to build the module. + diff --git a/docs/cuopt/Makefile b/docs/cuopt/Makefile index 9a675115b..7102ea1d4 100644 --- a/docs/cuopt/Makefile +++ b/docs/cuopt/Makefile @@ -32,9 +32,8 @@ help: clean: @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) - rm -rf "$(SOURCEDIR)/user_guide/api_docs/api" # Catch-all target: route all unknown targets to Sphinx using the new # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). 
%: Makefile - @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) \ No newline at end of file + @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) diff --git a/docs/cuopt/README.md b/docs/cuopt/README.md index f6b36f7f8..435224424 100644 --- a/docs/cuopt/README.md +++ b/docs/cuopt/README.md @@ -1,36 +1,33 @@ # Building Documentation -Documentation dependencies are installed while installing conda environment, please refer to the [CONTRIBUTING](https://github.com/NVIDIA/cuopt/blob/main/CONTRIBUTING.md) for more details. Doc generation -does not get run by default. There are two ways to generate the docs: +Documentation dependencies are installed while installing the Conda environment; please refer to [Build and Test](../../CONTRIBUTING.md#building-with-a-conda-environment) for more details. Assuming you have set up the Conda environment, you can build the documentation along with all the cuOpt libraries by running: -Note: It is assumed that all required libraries are already installed locally. If they haven't been installed yet, please first install all libraries by running: ```bash ./build.sh ``` -1. Run +In subsequent runs where there are no changes to the cuOpt libraries, documentation can be built by running: + +1. From the root directory: ```bash -make clean;make html +./build.sh docs ``` -from the `docs/cuopt` directory. -2. Run + +2. From the `docs/cuopt` directory: ```bash -./build.sh docs +make clean;make html ``` -from the root directory. Outputs to `build/html/index.html` ## View docs web page by opening HTML in browser: -First navigate to `/build/html/` folder, i.e., `cd build/html` and then run the following command: - ```bash -python -m http.server +python -m http.server --directory=build/html/ ``` Then, navigate a web browser to the IP address or hostname of the host machine at port 8000: ``` -https://:8000 +http://:8000 ``` Now you can check that your doc edits are formatted correctly and read well. 
diff --git a/docs/cuopt/source/cuopt-c/quick-start.rst b/docs/cuopt/source/cuopt-c/quick-start.rst index 18a18334a..a913f9593 100644 --- a/docs/cuopt/source/cuopt-c/quick-start.rst +++ b/docs/cuopt/source/cuopt-c/quick-start.rst @@ -13,11 +13,14 @@ pip For CUDA 12.x: +This wheel is a Python wrapper around the C++ library that simplifies installing and accessing libcuopt. It also allows pip environments to load the libraries dynamically when using the Python SDK. + + .. code-block:: bash - # This is deprecated module and not longer used, but share same name for the CLI, so we need to uninstall it first if it exists. + # This is a deprecated module and no longer used, but it shares the same name for the CLI, so we need to uninstall it first if it exists. pip uninstall cuopt-thin-client - pip install --extra-index-url=https://pypi.nvidia.com libcuopt-cu12==25.5.* + pip install --extra-index-url=https://pypi.nvidia.com libcuopt-cu12==25.5.* nvidia-cuda-runtime-cu12==12.8.0 Conda @@ -29,10 +32,10 @@ For CUDA 12.x: .. code-block:: bash - # This is deprecated module and not longer used, but share same name for the CLI, so we need to uninstall it first if it exists. + # This is a deprecated module and no longer used, but it shares the same name for the CLI, so we need to uninstall it first if it exists. conda remove cuopt-thin-client conda install -c rapidsai -c conda-forge -c nvidia \ - libcuopt=25.5.* python=3.12 cuda-version=12.8 + libcuopt=25.05.* python=3.12 cuda-version=12.8 Please visit examples under each section to learn how to use the cuOpt C API. \ No newline at end of file diff --git a/docs/cuopt/source/cuopt-python/quick-start.rst b/docs/cuopt/source/cuopt-python/quick-start.rst index 27d46b8f7..993b25e1c 100644 --- a/docs/cuopt/source/cuopt-python/quick-start.rst +++ b/docs/cuopt/source/cuopt-python/quick-start.rst @@ -14,7 +14,7 @@ For CUDA 12.x: .. 
code-block:: bash - pip install --extra-index-url=https://pypi.nvidia.com cuopt-cu12==25.5.* + pip install --extra-index-url=https://pypi.nvidia.com cuopt-cu12==25.5.* nvidia-cuda-runtime-cu12==12.8.* Conda @@ -27,7 +27,7 @@ For CUDA 12.x: .. code-block:: bash conda install -c rapidsai -c conda-forge -c nvidia \ - cuopt=25.5.* python=3.12 cuda-version=12.8 + cuopt=25.05.* python=3.12 cuda-version=12.8 Container @@ -43,7 +43,7 @@ The container includes both the Python API and self-hosted server components. To .. code-block:: bash - docker run --gpus all -it --rm nvidia/cuopt:25.5.0 + docker run --gpus all -it --rm nvidia/cuopt:25.5.0-cuda12.8-py312 This will start an interactive session with cuOpt pre-installed and ready to use. @@ -51,10 +51,10 @@ This will start an interactive session with cuOpt pre-installed and ready to use Make sure you have the NVIDIA Container Toolkit installed on your system to enable GPU support in containers. See the `installation guide `_ for details. -Brev ----- +NVIDIA Launchable +------------------- -NVIDIA cuOpt can be tested with `Brev Launchable `_ with `example notebooks `_. For more details, please refer to the `Brev documentation `_. +NVIDIA cuOpt can be tested with `NVIDIA Launchable `_ with `example notebooks `_. For more details, please refer to the `NVIDIA Launchable documentation `_. 
Smoke Test ---------- diff --git a/docs/cuopt/source/cuopt-server/examples/lp-examples.rst b/docs/cuopt/source/cuopt-server/examples/lp-examples.rst index c1b82d8ea..01bb356ff 100644 --- a/docs/cuopt/source/cuopt-server/examples/lp-examples.rst +++ b/docs/cuopt/source/cuopt-server/examples/lp-examples.rst @@ -662,7 +662,7 @@ To use a previous solution as the initial/warm start solution for a new request # Please update these values if the server is running on a different IP address or port export ip="localhost" export port=5000 - reqId=$(cuopt_sh -t LP data.json -i $ip -p $port -k | sed "s/'/\"/g" | jq -r '.reqId') + reqId=$(cuopt_sh -t LP data.json -i $ip -p $port -k | sed "s/'/\"/g" | sed 's/False/false/g' | jq -r '.reqId') cuopt_sh data.json -t LP -i $ip -p $port -wid $reqId diff --git a/docs/cuopt/source/cuopt-server/examples/routing-examples.rst b/docs/cuopt/source/cuopt-server/examples/routing-examples.rst index 2d2aeb013..f66a58f9f 100644 --- a/docs/cuopt/source/cuopt-server/examples/routing-examples.rst +++ b/docs/cuopt/source/cuopt-server/examples/routing-examples.rst @@ -338,7 +338,7 @@ To use a previous solution as an initial solution for a new request ID, you are cuopt_sh data.json -i $ip -p $port -id $reqId # delete previous saved solutions using the following command - cuopt_sh $ip $port -d $reqId + cuopt_sh -i $ip -p $port -d $reqId Uploading a Solution diff --git a/docs/cuopt/source/cuopt-server/quick-start.rst b/docs/cuopt/source/cuopt-server/quick-start.rst index 4aba1e6aa..5eed0cdc8 100644 --- a/docs/cuopt/source/cuopt-server/quick-start.rst +++ b/docs/cuopt/source/cuopt-server/quick-start.rst @@ -12,7 +12,7 @@ For CUDA 12.x: .. code-block:: bash - pip install --extra-index-url=https://pypi.nvidia.com cuopt-server-cu12==25.5.* cuopt-sh==25.5.* + pip install --extra-index-url=https://pypi.nvidia.com cuopt-server-cu12==25.05.* cuopt-sh-client==25.05.* nvidia-cuda-runtime-cu12==12.8.* Conda @@ -25,7 +25,7 @@ For CUDA 12.x: .. 
code-block:: bash conda install -c rapidsai -c conda-forge -c nvidia \ - cuopt-server=25.5.* cuopt-sh=25.5.* python=3.12 cuda-version=12.8 + cuopt-server=25.05.* cuopt-sh-client=25.05.* python=3.12 cuda-version=12.8 Container from Docker Hub @@ -35,13 +35,13 @@ NVIDIA cuOpt is also available as a container from Docker Hub: .. code-block:: bash - docker pull nvidia/cuopt:25.5.0 + docker pull nvidia/cuopt:25.5.0-cuda12.8-py312 The container includes both the Python API and self-hosted server components. To run the container: .. code-block:: bash - docker run --gpus all -it --rm -p 8000:8000 -e CUOPT_SERVER_PORT=8000 nvidia/cuopt:25.5.0 /bin/bash -c "python3 -m cuopt_server.cuopt_service" + docker run --gpus all -it --rm -p 8000:8000 -e CUOPT_SERVER_PORT=8000 nvidia/cuopt:25.5.0-cuda12.8-py312 /bin/bash -c "python3 -m cuopt_server.cuopt_service" .. note:: Make sure you have the NVIDIA Container Toolkit installed on your system to enable GPU support in containers. See the `installation guide `_ for details. @@ -82,10 +82,10 @@ The container includes both the Python API and self-hosted server components. To docker run --gpus all -it --rm -p 8000:8000 -e CUOPT_SERVER_PORT=8000 /bin/bash -c "python3 -m cuopt_server.cuopt_service" -Brev ----- +NVIDIA Launchable +------------------- -NVIDIA cuOpt can be tested with `Brev Launchable `_ with `example notebooks `_. For more details, please refer to the `Brev documentation `_. +NVIDIA cuOpt can be tested with `NVIDIA Launchable `_ with `example notebooks `_. For more details, please refer to the `NVIDIA Launchable documentation `_. Smoke Test ---------- diff --git a/docs/cuopt/source/faq.rst b/docs/cuopt/source/faq.rst index ba1c48d39..d452813c9 100644 --- a/docs/cuopt/source/faq.rst +++ b/docs/cuopt/source/faq.rst @@ -11,7 +11,6 @@ General FAQ - NVIDIA docker hub (https://hub.docker.com/r/nvidia/) - NVIDIA NGC registry (https://catalog.ngc.nvidia.com/orgs/nvidia/teams/cuopt/containers/cuopt/tags) with NVAIE license. - .. 
dropdown:: How to get an NVAIE license? Please refer to `NVIDIA NVAIE `_ for more information. @@ -44,14 +43,13 @@ General FAQ docker pull - .. dropdown:: Do I need a GPU to use cuOpt? Yes, please refer to `system requirements `_ for GPU specifications. You can acquire a cloud instance with a supported GPU and launch cuOpt; alternatively, you can launch it in your local machine if it meets the requirements. -.. dropdown:: Does cuOpt use multiple GPUs? +.. dropdown:: Does cuOpt use multiple GPUs/multi-GPUs/multi GPUs? - #. Yes, in cuOpt self-hosted server, a solver process per GPU can be configured to run multiple solvers. Requests are accepted in a round-robin queue. More details are available in `server api `_. + #. Yes, in cuOpt self-hosted server, a solver process per GPU can be configured to run multiple solvers. Requests are accepted in a round-robin queue. More details are available in `server api `_. #. There is no support for leveraging multiple GPUs to solve a single problem or oversubscribing a single GPU for multiple solvers. .. dropdown:: The cuOpt Service is not starting: Issue with port? @@ -83,6 +81,19 @@ General FAQ #. The complete round-trip solve time might be more than what was set. +.. dropdown:: Why am I getting a "libcuopt.so: cannot open shared object file: No such file or directory" error? + + This error indicates that the cuOpt shared library is not found. Please check the following: + + - cuOpt is installed + - Use ``find / -name libcuopt.so`` to search for the library path from the root directory. You might need to run this command as the root user. + - If the library is found, please add it to the ``LD_LIBRARY_PATH`` environment variable as shown below: + + .. code-block:: bash + + export LD_LIBRARY_PATH=/path/to/cuopt/lib:$LD_LIBRARY_PATH + + - If the library is not found, it means it is not yet installed. Please check the cuOpt installation guide for more details. .. 
dropdown:: Is there a way to make cuOpt also account for other overheads in the same time limit provided? diff --git a/docs/cuopt/source/introduction.rst b/docs/cuopt/source/introduction.rst index 85949293c..d3878d3ea 100644 --- a/docs/cuopt/source/introduction.rst +++ b/docs/cuopt/source/introduction.rst @@ -15,6 +15,8 @@ As part of `NVIDIA AI Enterprise `__ for more information about the NVIDIA Developer Program. +The core engine is written in C++, and all the APIs are wrappers built on top of it. For example, the cuOpt Python API uses Cython to wrap the C++ core engine and provide a Python interface. Similarly, other interfaces wrap different layers to communicate with the core engine. + Routing (TSP, VRP, and PDP) ============================= @@ -104,15 +106,44 @@ Supported APIs cuOpt supports the following APIs: - C API support - - Linear Programming (LP) - - Mixed Integer Linear Programming (MILP) + - `Linear Programming (LP) - C `_ + - `Mixed Integer Linear Programming (MILP) - C `_ - C++ API support - cuOpt is written in C++ and includes a native C++ API. However, we do not provide documentation for the C++ API at this time. We anticipate that the C++ API will change significantly in the future. Use it at your own risk. - Python support - - Routing (TSP, VRP, and PDP) - - Linear Programming (LP) and Mixed Integer Linear Programming (MILP) + - `Routing (TSP, VRP, and PDP) - Python `_ + - Linear Programming (LP) and Mixed Integer Linear Programming (MILP) - cuOpt includes a Python API that is used as the backend of the cuOpt server. However, we do not provide documentation for the Python API at this time. We suggest using the cuOpt server to access cuOpt via Python. We anticipate that the Python API will change significantly in the future. Use it at your own risk. 
- Server support - - Linear Programming (LP) - - Mixed Integer Linear Programming (MILP) - - Routing (TSP, VRP, and PDP) + - `Linear Programming (LP) - Server `_ + - `Mixed Integer Linear Programming (MILP) - Server `_ + - `Routing (TSP, VRP, and PDP) - Server `_ + +================================== +Installation Options +================================== + +NVIDIA cuOpt is available in several formats to suit different deployment needs: + +Source Code +=========== +For users who want to customize cuOpt or contribute to its development, the source code is available on `GitHub `_. Building from source allows maximum flexibility but requires setting up the build environment. + +Pip Wheels +========== +For Python users with existing pip-based workflows, cuOpt can be installed directly via pip from the NVIDIA Python Package Index. This is the simplest installation method for most users. + +Conda Packages +=============== +Available from the NVIDIA channel, conda packages provide a convenient way to manage cuOpt and its dependencies in conda environments. This is ideal for users who prefer conda-based workflow management. + +Containers +=========== +NVIDIA provides ready-to-use containers with cuOpt pre-installed, available from: + +- Docker Hub (``nvidia/cuopt``) +- NVIDIA NGC (for NVIDIA AI Enterprise subscribers) + +Containers offer a consistent, isolated environment and are particularly useful for cloud deployments or microservices architectures. + +For detailed installation instructions for each option, please refer to the respective quickstart guides in the documentation. 
\ No newline at end of file diff --git a/docs/cuopt/source/resources.rst b/docs/cuopt/source/resources.rst index 978778ef7..dde06479d 100644 --- a/docs/cuopt/source/resources.rst +++ b/docs/cuopt/source/resources.rst @@ -6,8 +6,12 @@ Resources `Sample Notebooks `_ ---------------------------------------------------------------------------------- -`Test cuopt with Brev `_ +`Test cuopt with NVIDIA Launchable `_ +------------------------------------------------------------------------------------------------------------------------------ + +`Test cuOpt on Google Colab `_ ------------------------------------------------------------------------------------------------------------------------ +Please note that you need to select a GPU runtime in order to run the notebooks. `File a Bug `_ ----------------------------------------------------------------- diff --git a/docs/cuopt/source/system-requirements.rst b/docs/cuopt/source/system-requirements.rst index 7313ff0ed..216b4e4e2 100644 --- a/docs/cuopt/source/system-requirements.rst +++ b/docs/cuopt/source/system-requirements.rst @@ -2,6 +2,8 @@ System Requirements =================== +Dependencies are installed automatically when using the pip and Conda installation methods. However, users still need to make sure the system meets the minimum requirements. + .. 
dropdown:: Minimum Requirements * System Architecture: @@ -23,9 +25,13 @@ System Requirements * CUDA: - 12.0+ + * Python: + - >= 3.10.* and <= 3.12.* + * NVIDIA drivers: - - 525.60.13+ (linux) - - 527.41+ (windows) + - 525.60.13+ (Linux) + - 527.41+ (Windows) + * OS: - Linux distributions with glibc>=2.28 (released in August 2018): * Arch Linux (minimum version 2018-08-02) @@ -91,4 +97,4 @@ Thin-client for Self-Hosted - x86-64 - ARM64 -* Python > 3.10.x \ No newline at end of file +* Python >= 3.10.x, <= 3.12.x \ No newline at end of file diff --git a/notebooks/README.md b/notebooks/README.md new file mode 100644 index 000000000..58bea996e --- /dev/null +++ b/notebooks/README.md @@ -0,0 +1,5 @@ +# Notebooks + +This directory contains the sample notebooks for the cuOpt project. + +Users can find more advanced examples in the [cuOpt Examples](https://github.com/nvidia/cuopt-examples) repository. \ No newline at end of file diff --git a/python/README.md b/python/README.md new file mode 100644 index 000000000..47536d698 --- /dev/null +++ b/python/README.md @@ -0,0 +1,42 @@ +# Python Modules + +This directory contains the Python modules for the cuOpt project. + +## Package Structure + +- Each subdirectory contains the Python modules for a specific cuOpt package. For example, the `libcuopt` directory contains the Python wrappers for the cuOpt C++ library. This is the main package for the cuOpt project; it simply loads the shared libraries and makes them available to other Python modules. The `cuopt` Python package uses the `libcuopt` package as a dependency and builds on top of it. + +```bash +python/ +├── libcuopt/ +├── cuopt/ +└── ... +``` +- Each of these Python modules has a `tests` directory that contains the tests for the module. Python tests are written using `pytest`. For example, the `python/cuopt/cuopt/tests/` directory contains the tests for the `cuopt` Python package. + +```bash +python/ +├── cuopt/ +│ ├── cuopt/ +│ │ └── tests/ +│ └── ... +└── ... 
+``` + +- Each of these Python modules has a `pyproject.toml` file that lists the dependencies for the module. For example, the `python/cuopt/pyproject.toml` file contains the dependencies for the `cuopt` Python package. + +```bash +python/ +├── cuopt/ +│ ├── pyproject.toml +│ └── ... +└── ... +``` + +- The dependencies themselves are defined in the [dependencies.yaml](../dependencies.yaml) file in the root folder, and the per-package `pyproject.toml` files are kept in sync with it. Therefore, any changes to dependencies should be made in the [dependencies.yaml](../dependencies.yaml) file. Please refer to the different sections in the [dependencies.yaml](../dependencies.yaml) file for more details. 