Merged
2 changes: 1 addition & 1 deletion .github/workflows/build.yaml
@@ -72,7 +72,7 @@ jobs:
script: ci/build_wheel_libcuopt.sh
package-name: libcuopt
package-type: cpp
matrix_filter: map(select((.CUDA_VER | startswith("12")) and .PY_VER != "3.13"))
matrix_filter: map(select((.CUDA_VER | startswith("12")) and .PY_VER == "3.12"))
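The `matrix_filter` values above are jq programs applied to the CI build matrix. A minimal sketch of what the new filter keeps, using invented sample matrix entries (not the real CI matrix):

```shell
# Illustration only: run the new matrix_filter jq program against a
# hypothetical matrix of CUDA/Python combinations.
echo '[{"CUDA_VER":"12.8","PY_VER":"3.12"},{"CUDA_VER":"12.8","PY_VER":"3.13"},{"CUDA_VER":"11.8","PY_VER":"3.12"}]' |
  jq -c 'map(select((.CUDA_VER | startswith("12")) and .PY_VER == "3.12"))'
# → [{"CUDA_VER":"12.8","PY_VER":"3.12"}]
```

Only the CUDA 12.x / Python 3.12 entry survives; the previous filter (`!= "3.13"`) would also have kept other Python versions.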
wheel-build-cuopt:
needs: [wheel-build-cuopt-mps-parser, wheel-build-libcuopt]
secrets: inherit
16 changes: 4 additions & 12 deletions .github/workflows/test.yaml
@@ -25,19 +25,8 @@ jobs:
branch: ${{ inputs.branch }}
date: ${{ inputs.date }}
sha: ${{ inputs.sha }}
matrix_filter: map(select((.CUDA_VER | startswith("12")) and .PY_VER != "3.13"))
script: ci/test_cpp.sh
conda-cpp-memcheck-tests:
secrets: inherit
uses: rapidsai/shared-workflows/.github/workflows/custom-job.yaml@branch-25.06
with:
build_type: ${{ inputs.build_type }}
branch: ${{ inputs.branch }}
date: ${{ inputs.date }}
sha: ${{ inputs.sha }}
node_type: "gpu-l4-latest-1"
arch: "amd64"
container_image: "rapidsai/ci-conda:cuda11.8.0-ubuntu22.04-py3.10"
run_script: "ci/test_cpp_memcheck.sh"
conda-python-tests:
secrets: inherit
uses: rapidsai/shared-workflows/.github/workflows/conda-python-tests.yaml@branch-25.06
@@ -46,6 +35,7 @@ jobs:
branch: ${{ inputs.branch }}
date: ${{ inputs.date }}
sha: ${{ inputs.sha }}
matrix_filter: map(select((.CUDA_VER | startswith("12")) and .PY_VER != "3.13"))
script: ci/test_python.sh
wheel-tests-cuopt:
secrets: inherit
@@ -55,6 +45,7 @@
branch: ${{ inputs.branch }}
date: ${{ inputs.date }}
sha: ${{ inputs.sha }}
matrix_filter: map(select((.CUDA_VER | startswith("12")) and .PY_VER != "3.13"))
script: ci/test_wheel_cuopt.sh
wheel-tests-cuopt-server:
secrets: inherit
@@ -64,4 +55,5 @@
branch: ${{ inputs.branch }}
date: ${{ inputs.date }}
sha: ${{ inputs.sha }}
matrix_filter: map(select((.CUDA_VER | startswith("12")) and .PY_VER != "3.13"))
script: ci/test_wheel_cuopt_server.sh
1 change: 1 addition & 0 deletions .gitignore
@@ -59,6 +59,7 @@ error_log.txt
docs/cuopt/source/cuopt-c/lp-milp/cuopt-cli-help.txt
docs/cuopt/source/cuopt-server/client-api/sh-cli-help.txt
docs/cuopt/source/cuopt-server/server-api/server-cli-help.txt
docs/cuopt/source/cuopt-cli/cuopt-cli-help.txt
docs/cuopt/source/cuopt_spec.yaml
python/cuopt_self_hosted/cuopt_sh_client/tests/utils/certs/*.key
docs/cuopt/build
31 changes: 24 additions & 7 deletions CONTRIBUTING.md
@@ -71,17 +71,29 @@ for a minimal build of NVIDIA cuOpt without using conda are also listed below.

Compilers:

* `gcc` version 11.4+
* `nvcc` version 11.8+
* `cmake` version 3.29.6+
These will be installed when creating the Conda environment:

* `gcc` version 13.0+
* `nvcc` version 12.8+
* `cmake` version 3.30.4+

CUDA/GPU Runtime:

* CUDA 11.4+
* CUDA 12.8
* Volta architecture or better ([Compute Capability](https://docs.nvidia.com/deploy/cuda-compatibility/) >=7.0)

You can obtain CUDA from
[https://developer.nvidia.com/cuda-downloads](https://developer.nvidia.com/cuda-downloads).
Python:

* Python >=3.10.x, <= 3.12.x

OS:

* Only Linux is supported

Architecture:

* x86_64 (64-bit)
* aarch64 (64-bit)
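A quick way to sanity-check these minimum versions is a `sort -V` comparison (a sketch assuming GNU coreutils; the version arguments below are placeholders — in practice substitute the output of `gcc --version`, `nvcc --version`, and `cmake --version`):

```shell
# Sketch: version-floor check using sort -V (GNU coreutils assumed).
# The actual versions passed in are placeholders, e.g. in practice:
#   have_cmake=$(cmake --version | awk 'NR==1{print $3}')
check_min() {  # usage: check_min NAME MINIMUM ACTUAL
  if [ "$(printf '%s\n' "$2" "$3" | sort -V | head -n1)" = "$2" ]; then
    echo "$1 OK"
  else
    echo "$1 too old"
  fi
}
check_min gcc   13.0   13.2.0
check_min nvcc  12.8   12.8.61
check_min cmake 3.30.4 3.31.1
```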

### Build NVIDIA cuOpt from source

@@ -219,6 +231,12 @@ set_source_files_properties(src/routing/data_model_view.cu PROPERTIES COMPILE_OP
This will add the device debug symbols for this object file in `libcuopt.so`. You can then use
`cuda-dbg` to debug into the kernels in that source file.

## Adding dependencies

Please refer to the [dependencies.yaml](dependencies.yaml) file for details on how to add new dependencies.
Add any new dependencies to `dependencies.yaml`; it generates the conda, requirements (pip-based), and pyproject dependency lists.
Please do not add dependencies directly to the `environment.yaml` files under the `conda/environments` directory or to the `pyproject.toml` files under the `python` directories.
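As an illustration, a dependency entry in `dependencies.yaml` typically looks something like the following (the package name here is hypothetical and the field names follow the general RAPIDS dependency-file-generator style — check them against the conventions already in the file):

```yaml
# Hypothetical sketch of a dependencies.yaml entry; verify field names
# against the existing entries in the file.
dependencies:
  run_python:
    common:
      - output_types: [conda, requirements, pyproject]
        packages:
          - some-new-package>=1.2   # hypothetical dependency
```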

## Code Formatting

### Using pre-commit hooks
@@ -303,6 +321,5 @@ You can skip these checks with `git commit --no-verify` or with the short versio

(d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved.
```



82 changes: 68 additions & 14 deletions README.md
@@ -1,22 +1,31 @@
# cuOpt - GPU accelerated Optimization Engine

NVIDIA® cuOpt™ is a GPU-accelerated optimization engine that excels in mixed integer programming (MIP), linear programming (LP), and vehicle routing problems (VRP). It enables near real-time solutions for large-scale challenges with millions of variables and constraints, offering easy integration into existing solvers and seamless deployment across hybrid and multi-cloud environments.
[![Build Status](https://github.com/NVIDIA/cuopt/actions/workflows/build.yaml/badge.svg)](https://github.com/NVIDIA/cuopt/actions/workflows/build.yaml)

For the latest stable version ensure you are on the `main` branch.

## Build from Source
NVIDIA® cuOpt™ is a GPU-accelerated optimization engine that excels in mixed integer linear programming (MILP), linear programming (LP), and vehicle routing problems (VRP). It enables near real-time solutions for large-scale challenges with millions of variables and constraints, offering
easy integration into existing solvers and seamless deployment across hybrid and multi-cloud environments.

Please see our [guide for building cuOpt from source](CONTRIBUTING.md#build-nvidia-cuopt-from-source)
The core engine is written in C++ and is wrapped by C, Python, and server APIs.

## Contributing Guide
For the latest stable version ensure you are on the `main` branch.

Review the [CONTRIBUTING.md](CONTRIBUTING.md) file for information on how to contribute code and issues to the project.
## Supported APIs

## Resources
cuOpt supports the following APIs:

- [cuopt (Python) documentation](https://docs.nvidia.com/cuopt/user-guide/latest/introduction.html)
- [libcuopt (C++/CUDA) documentation](https://docs.nvidia.com/cuopt/user-guide/latest/introduction.html)
- [Examples and Notebooks](https://github.com/NVIDIA/cuopt-examples)
- C API support
- Linear Programming (LP)
- Mixed Integer Linear Programming (MILP)
- C++ API support
- cuOpt is written in C++ and includes a native C++ API. However, we do not provide documentation for the C++ API at this time. We anticipate that the C++ API will change significantly in the future. Use it at your own risk.
- Python support
- Routing (TSP, VRP, and PDP)
- Linear Programming (LP) and Mixed Integer Linear Programming (MILP)
- cuOpt includes a Python API that is used as the backend of the cuOpt server. However, we do not provide documentation for the Python API at this time. We suggest using cuOpt server to access cuOpt via Python. We anticipate that the Python API will change significantly in the future. Use it at your own risk.
- Server support
- Linear Programming (LP)
- Mixed Integer Linear Programming (MILP)
- Routing (TSP, VRP, and PDP)

## Installation

@@ -26,30 +35,75 @@ Review the [CONTRIBUTING.md](CONTRIBUTING.md) file for information on how to con
* NVIDIA driver >= 525.60.13 (Linux) and >= 527.41 (Windows)
* Volta architecture or better (Compute Capability >=7.0)

### Python requirements

* Python >=3.10.x, <= 3.12.x

### OS requirements

* Linux is supported natively; Windows is supported via WSL2
* x86_64 (64-bit)
* aarch64 (64-bit)

Note: WSL2 is tested for running cuOpt, but not for building it.

More details on system requirements can be found [here](https://docs.nvidia.com/cuopt/user-guide/latest/system-requirements.html).

### Pip

Pip wheels are easy to install and configure. Users whose existing workflows are built on pip can use it to install cuOpt.

cuOpt can be installed via `pip` from the NVIDIA Python Package Index.
Be sure to select the appropriate cuOpt package depending
on the major version of CUDA available in your environment:

For CUDA 12.x:

```bash
pip install --extra-index-url=https://pypi.nvidia.com cuopt-cu12
pip install --extra-index-url=https://pypi.nvidia.com cuopt-server-cu12==25.5 cuopt-sh-client==25.5 nvidia-cuda-runtime-cu12==12.8.*
```

### Conda

cuOpt can be installed with conda (via [miniforge](https://github.com/conda-forge/miniforge)) from the `nvidia` channel:

All other dependencies are installed automatically when cuopt-server and cuopt-sh-client are installed.

Users accustomed to conda-based workflows benefit from the conda packages readily available for cuOpt.

For CUDA 12.x:
```bash
conda install -c rapidsai -c conda-forge -c nvidia \
cuopt=25.05 python=3.12 cuda-version=12.8
cuopt-server=25.05 cuopt-sh-client=25.05 python=3.12 cuda-version=12.8
```

We also provide [nightly Conda packages](https://anaconda.org/rapidsai-nightly) built from the HEAD
of our latest development branch.

Note: cuOpt is supported only on Linux, and with Python versions 3.10 and later.
### Container

Users can pull the cuOpt container from the NVIDIA container registry.

```bash
docker pull nvidia/cuopt:25.5.0-cuda12.8-py312
```
More information about the cuOpt container can be found [here](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-server/quick-start.html#container-from-docker-hub).

The cuOpt container is suited to quick testing and research, and it also lets users who plan to plug cuOpt in as a service in their workflow get started quickly. Note, however, that users must build security layers around the service to safeguard it from untrusted users.
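The image tag encodes the cuOpt, CUDA, and Python versions. A small sketch composing it (the tag scheme is inferred from the pull command above, and the commented `docker run` flags are assumptions to verify against the linked container docs):

```shell
# Compose the container tag from its version components
# (scheme inferred from the docker pull example above).
CUOPT_VER=25.5.0
CUDA_VER=12.8
PY_TAG=py312
IMAGE="nvidia/cuopt:${CUOPT_VER}-cuda${CUDA_VER}-${PY_TAG}"
echo "$IMAGE"   # → nvidia/cuopt:25.5.0-cuda12.8-py312
# Hypothetical run invocation (flags assumed; see the container docs):
# docker run --gpus all --rm -p 5000:5000 "$IMAGE"
```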

## Build from Source and Test

Please see our [guide for building cuOpt from source](CONTRIBUTING.md#setting-up-your-build-environment). This is helpful for users who want to add new features, fix bugs, or customize cuOpt for use cases that require changes to the cuOpt source code.

## Contributing Guide

Review the [CONTRIBUTING.md](CONTRIBUTING.md) file for information on how to contribute code and issues to the project.

## Resources

- [libcuopt (C) documentation](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-c/index.html)
- [cuopt (Python) documentation](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-python/index.html)
- [cuopt (Server) documentation](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-server/index.html)
- [Examples and Notebooks](https://github.com/NVIDIA/cuopt-examples)
- [Test cuOpt with NVIDIA Launchable](https://brev.nvidia.com/launchable/deploy?launchableID=env-2qIG6yjGKDtdMSjXHcuZX12mDNJ): Example notebooks are pulled and hosted on [NVIDIA Launchable](https://docs.nvidia.com/brev/latest/).
- [Test cuOpt on Google Colab](https://colab.research.google.com/github/nvidia/cuopt-examples/): Example notebooks can be opened in Google Colab. Note that you must select `GPU` as the `Runtime` type in order to run the notebooks.