Merged
31 changes: 24 additions & 7 deletions CONTRIBUTING.md
Original file line number Diff line number Diff line change
@@ -71,17 +71,29 @@ for a minimal build of NVIDIA cuOpt without using conda are also listed below.

Compilers:

* `gcc` version 11.4+
* `nvcc` version 11.8+
* `cmake` version 3.29.6+
These will be installed when creating the Conda environment:

* `gcc` version 13.0+
* `nvcc` version 12.8+
* `cmake` version 3.30.4+

CUDA/GPU Runtime:

* CUDA 11.4+
* CUDA 12.8
* Volta architecture or better ([Compute Capability](https://docs.nvidia.com/deploy/cuda-compatibility/) >=7.0)

You can obtain CUDA from
[https://developer.nvidia.com/cuda-downloads](https://developer.nvidia.com/cuda-downloads).
Python:

* Python >=3.10.x, <= 3.12.x

OS:

* Only Linux is supported

Architecture:

* x86_64 (64-bit)
* aarch64 (64-bit)

### Build NVIDIA cuOpt from source

@@ -219,6 +231,12 @@ set_source_files_properties(src/routing/data_model_view.cu PROPERTIES COMPILE_OP
This will add the device debug symbols for this object file in `libcuopt.so`. You can then use
`cuda-gdb` to debug the kernels in that source file.
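For reference, a minimal sketch of the CMake property shown in the hunk above (the source file path is the example from this repository; substitute the file you want to debug):

```cmake
# Sketch: compile one .cu file with device debug symbols (-G) so its
# kernels can be stepped through in cuda-gdb. Device debug builds run
# much slower, so scope this to the file under investigation.
set_source_files_properties(src/routing/data_model_view.cu
  PROPERTIES COMPILE_OPTIONS "-G")
```

Add the line to the relevant `CMakeLists.txt`, rebuild, and the kernels in that translation unit become steppable.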

## Adding dependencies

Add any new dependencies to the [dependencies.yaml](dependencies.yaml) file, and refer to that file for details on the expected format. It takes care of conda environments, requirements files (pip-based dependencies), and pyproject entries.
Please do not add dependencies directly to the environment.yaml files under the `conda/environments` directory or the pyproject.toml files under the `python` directories.
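As an illustrative sketch only (the set name `my_feature` and package `some-package` are placeholders; follow the existing entries in `dependencies.yaml` for the authoritative format), a dependency-set entry in the RAPIDS-style `dependencies.yaml` looks roughly like:

```yaml
# Hypothetical dependency set; names and versions are placeholders.
dependencies:
  my_feature:
    common:
      - output_types: [conda, requirements, pyproject]
        packages:
          - some-package>=1.2
```

The `output_types` list controls which generated files (conda environment, requirements, pyproject) receive the packages.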

## Code Formatting

### Using pre-commit hooks
@@ -303,6 +321,5 @@ You can skip these checks with `git commit --no-verify` or with the short versio

(d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved.
```



83 changes: 67 additions & 16 deletions README.md
@@ -2,24 +2,30 @@

[![Build Status](https://github.com/NVIDIA/cuopt/actions/workflows/build.yaml/badge.svg)](https://github.com/NVIDIA/cuopt/actions/workflows/build.yaml)

NVIDIA® cuOpt™ is a GPU-accelerated optimization engine that excels in mixed integer programming (MIP), linear programming (LP), and vehicle routing problems (VRP). It enables near real-time solutions for large-scale challenges with millions of variables and constraints, offering easy integration into existing solvers and seamless deployment across hybrid and multi-cloud environments.
NVIDIA® cuOpt™ is a GPU-accelerated optimization engine that excels in mixed integer linear programming (MILP), linear programming (LP), and vehicle routing problems (VRP). It enables near real-time solutions for large-scale challenges with millions of variables and constraints, offering
easy integration into existing solvers and seamless deployment across hybrid and multi-cloud environments.

For the latest stable version ensure you are on the `main` branch.

## Build from Source
The core engine is written in C++ and is wrapped by a C API, a Python API, and a server API.

Please see our [guide for building cuOpt from source](CONTRIBUTING.md#build-nvidia-cuopt-from-source)

## Contributing Guide
For the latest stable version ensure you are on the `main` branch.

Review the [CONTRIBUTING.md](CONTRIBUTING.md) file for information on how to contribute code and issues to the project.
## Supported APIs

## Resources
cuOpt supports the following APIs:

- [cuopt (Python) documentation](https://docs.nvidia.com/cuopt/user-guide/latest/introduction.html)
- [libcuopt (C++/CUDA) documentation](https://docs.nvidia.com/cuopt/user-guide/latest/introduction.html)
- [Examples and Notebooks](https://github.com/NVIDIA/cuopt-examples)
- [Test cuopt with Brev](https://brev.nvidia.com/launchable/deploy?launchableID=env-2qIG6yjGKDtdMSjXHcuZX12mDNJ): Examples notebooks are pulled and hosted on [Brev](https://docs.nvidia.com/brev/latest/).
- C API support
- Linear Programming (LP)
- Mixed Integer Linear Programming (MILP)
- C++ API support
- cuOpt is written in C++ and includes a native C++ API. However, we do not provide documentation for the C++ API at this time. We anticipate that the C++ API will change significantly in the future. Use it at your own risk.
- Python support
- Routing (TSP, VRP, and PDP)
- Linear Programming (LP) and Mixed Integer Linear Programming (MILP)
- cuOpt includes a Python API that is used as the backend of the cuOpt server. However, we do not provide documentation for the Python API at this time. We suggest using cuOpt server to access cuOpt via Python. We anticipate that the Python API will change significantly in the future. Use it at your own risk.
- Server support
- Linear Programming (LP)
- Mixed Integer Linear Programming (MILP)
- Routing (TSP, VRP, and PDP)

## Installation

@@ -29,30 +35,75 @@ Review the [CONTRIBUTING.md](CONTRIBUTING.md) file for information on how to contribute code and issues to the project.
* NVIDIA driver >= 525.60.13 (Linux) and >= 527.41 (Windows)
* Volta architecture or better (Compute Capability >=7.0)

### Python requirements

* Python >=3.10.x, <= 3.12.x

### OS requirements

* Linux is supported natively; Windows is supported via WSL2
* x86_64 (64-bit)
* aarch64 (64-bit)

Note: WSL2 is tested to run cuOpt, but not for building.

More details on system requirements can be found [here](https://docs.nvidia.com/cuopt/user-guide/latest/system-requirements.html).

### Pip

Pip wheels are easy to install and configure. Users whose existing workflows are built on pip can use it to install cuOpt.

cuOpt can be installed via `pip` from the NVIDIA Python Package Index.
Be sure to select the appropriate cuOpt package depending
on the major version of CUDA available in your environment:

For CUDA 12.x:

```bash
pip install --extra-index-url=https://pypi.nvidia.com cuopt-cu12
pip install --extra-index-url=https://pypi.nvidia.com cuopt-server-cu12==25.5 cuopt-sh-client==25.5 nvidia-cuda-runtime-cu12==12.8.*
```

### Conda

cuOpt can be installed with conda (via [miniforge](https://github.com/conda-forge/miniforge)) from the `nvidia` channel:

All other dependencies are installed automatically when `cuopt-server` and `cuopt-sh-client` are installed.

Users accustomed to conda-environment-based workflows benefit from the readily available cuOpt conda packages.

For CUDA 12.x:
```bash
conda install -c rapidsai -c conda-forge -c nvidia \
cuopt=25.05 python=3.12 cuda-version=12.8
cuopt-server=25.05 cuopt-sh-client=25.05 python=3.12 cuda-version=12.8
```

We also provide [nightly Conda packages](https://anaconda.org/rapidsai-nightly) built from the HEAD
of our latest development branch.

Note: cuOpt is supported only on Linux, and with Python versions 3.10 and later.

### Container

Users can pull the cuOpt container from the NVIDIA container registry.

```bash
docker pull nvidia/cuopt:25.5.0-cuda12.8-py312
```

More information about the cuOpt container can be found [here](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-server/quick-start.html#container-from-docker-hub).

The container is well suited for quick testing and research, and also lets users who plan to plug cuOpt into their workflow as a service get started quickly. Note, however, that you are responsible for building security layers around the service to safeguard it from untrusted users.
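As a sketch, the pulled image can be started with `docker run`; the published port (5000) and the GPU flags below are assumptions to verify against the container quick-start guide:

```shell
# Assumed invocation: expose the cuOpt server on host port 5000 and give
# the container access to all GPUs. Verify flags against the quick-start docs.
image="nvidia/cuopt:25.5.0-cuda12.8-py312"
run_cmd="docker run --rm --gpus all -p 5000:5000 ${image}"
echo "${run_cmd}"
# Uncomment to actually launch the server:
# ${run_cmd}
```

`--rm` cleans up the container on exit; drop it if you want to inspect the stopped container afterwards.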

## Build from Source and Test

Please see our [guide for building cuOpt from source](CONTRIBUTING.md#setting-up-your-build-environment). This is helpful if you want to add new features, fix bugs, or customize cuOpt for use cases that require changes to the cuOpt source code.

## Contributing Guide

Review the [CONTRIBUTING.md](CONTRIBUTING.md) file for information on how to contribute code and issues to the project.

## Resources

- [libcuopt (C) documentation](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-c/index.html)
- [cuopt (Python) documentation](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-python/index.html)
- [cuopt (Server) documentation](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-server/index.html)
- [Examples and Notebooks](https://github.com/NVIDIA/cuopt-examples)
- [Test cuopt with NVIDIA Launchable](https://brev.nvidia.com/launchable/deploy?launchableID=env-2qIG6yjGKDtdMSjXHcuZX12mDNJ): Examples notebooks are pulled and hosted on [NVIDIA Launchable](https://docs.nvidia.com/brev/latest/).
- [Test cuopt on Google Colab](https://colab.research.google.com/github/nvidia/cuopt-examples/): Examples notebooks can be opened in Google Colab. Please note that you need to choose a `Runtime` as `GPU` in order to run the notebooks.
6 changes: 6 additions & 0 deletions benchmarks/README.md
@@ -0,0 +1,6 @@
# Benchmark Scripts

This directory contains the scripts for the benchmarks.



49 changes: 49 additions & 0 deletions ci/README.md
@@ -0,0 +1,49 @@
# CI scripts

This directory contains the scripts for the CI pipeline.

CI builds are triggered by the `pr.yaml`, `build.yaml`, and `test.yaml` files in the `.github/workflows` directory; those workflows use the scripts in this directory to build and test the code.

cuOpt is packaged in the following ways:

## PIP package

### Build

The scripts for building the PIP packages are named `build_wheel_<package_name>.sh`. For example, `build_wheel_cuopt.sh` builds the PIP package for cuOpt.

Please refer to the existing scripts for details on how to add a script for a new package.

### Test

The scripts for testing the PIP packages are named `test_wheel_<package_name>.sh`. For example, `test_wheel_cuopt.sh` tests the PIP package for cuOpt.

Please refer to the existing scripts for details on how to add a script for a new package.
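As a sketch of the naming convention, adding scripts for a hypothetical `cuopt_foo` wheel (the package name is illustrative and does not exist) would look like:

```shell
# Hypothetical example: derive the CI script names for a new wheel
# package called "cuopt_foo", following the ci/ naming convention.
pkg="cuopt_foo"
build_script="build_wheel_${pkg}.sh"
test_script="test_wheel_${pkg}.sh"
echo "ci/${build_script}"   # ci/build_wheel_cuopt_foo.sh
echo "ci/${test_script}"    # ci/test_wheel_cuopt_foo.sh
```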

## Conda Package

### Build

For the Conda packages:

- all C++ libraries are built by one script, `build_cpp.sh`.
- all Python bindings are built by one script, `build_python.sh`.

If you add new C++ libraries or Python bindings, add them to the respective scripts.


### Test

Similarly, for the Conda packages:

- all C++ libraries are tested by one script, `test_cpp.sh`.
- all Python bindings are tested by one script, `test_python.sh`.


There are other scripts in this directory that build and test the code; they are used by the workflows as utilities.






6 changes: 6 additions & 0 deletions cmake/README.md
@@ -0,0 +1,6 @@
# CMake for RAPIDS configuration

This directory contains the CMake files for the RAPIDS configuration.



8 changes: 8 additions & 0 deletions conda/README.md
@@ -0,0 +1,8 @@
# Conda Recipes and Environment

This directory contains the conda recipes for the cuOpt packages, which are used to build the conda packages in CI.

It also contains the environment files used to create the conda environments for cuOpt development and CI testing.



61 changes: 61 additions & 0 deletions cpp/README.md
@@ -0,0 +1,61 @@
# C++ Modules

This directory contains the C++ modules for the cuOpt project.

Please refer to the [CMakeLists.txt](CMakeLists.txt) file for details on how to add new modules and tests.

Most of the dependencies are defined in the [dependencies.yaml](../dependencies.yaml) file; please refer to its various sections for more details. However, some dependencies that must be built from source, such as `cccl` and `rmm`, are defined in [thirdparty modules](cmake/thirdparty/).
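As a sketch of the pattern those thirdparty modules follow (the package name `foo` and its repository are placeholders), a `cmake/thirdparty/get_foo.cmake` file built on the rapids-cmake CPM helpers might look roughly like:

```cmake
# Hypothetical thirdparty module: fetch and configure a source dependency
# via rapids-cmake's CPM wrapper, as is done for cccl and rmm.
function(find_and_configure_foo VERSION)
  include(${rapids-cmake-dir}/cpm/find.cmake)
  rapids_cpm_find(foo ${VERSION}
    GLOBAL_TARGETS foo::foo
    CPM_ARGS
      GIT_REPOSITORY https://github.com/example/foo.git
      GIT_TAG        v${VERSION}
  )
endfunction()

find_and_configure_foo(1.0.0)
```

Follow the existing modules under `cmake/thirdparty/` for the authoritative structure.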


## Include Structure

Add the headers for any new module under the `include/cuopt/<module_name>` directory.

```text
cpp/
├── include/
│   ├── cuopt/
│   │   ├── linear_programming/
│   │   │   └── ...
│   │   └── routing/
│   │       └── ...
│   └── ...
└── ...
```

## Source Structure

Add the sources for any new module under the `src/cuopt/<module_name>` directory.

```text
cpp/
├── src/
│   └── cuopt/
│       ├── linear_programming/
│       │   └── ...
│       └── routing/
│           └── ...
└── ...
```

## Test Structure

Add the tests for any new module under the `test/cuopt/<module_name>` directory.

```text
cpp/
├── test/
│   └── cuopt/
│       ├── linear_programming/
│       │   └── ...
│       └── routing/
│           └── ...
└── ...
```

## MPS parser

The MPS parser is a standalone module that parses MPS files and converts them into a format that can be used by the cuOpt library.

It is located in the `libmps_parser` directory. This also contains the `CMakeLists.txt` file to build the module.

3 changes: 1 addition & 2 deletions docs/cuopt/Makefile
@@ -32,9 +32,8 @@ help:

clean:
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
rm -rf "$(SOURCEDIR)/user_guide/api_docs/api"

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
23 changes: 10 additions & 13 deletions docs/cuopt/README.md
@@ -1,36 +1,33 @@
# Building Documentation

Documentation dependencies are installed while installing the Conda environment; please refer to [Build and Test](../../CONTRIBUTING.md#building-with-a-conda-environment) for more details. Assuming you have set up the Conda environment, you can build the documentation along with all the cuOpt libraries by running:

```bash
./build.sh
```

In subsequent runs where there are no changes to the cuOpt libraries, the documentation can be built in either of two ways:

1. From the root directory:

   ```bash
   ./build.sh docs
   ```

2. From the `docs/cuopt` directory:

   ```bash
   make clean; make html
   ```

The output lands at `build/html/index.html`.

## View docs web page by opening HTML in browser

Run:

```bash
python -m http.server --directory=build/html/
```

Then navigate a web browser to the IP address or hostname of the host machine at port 8000:

```
http://<host IP-Address>:8000
```

Now you can check that your docs edits are formatted correctly and read well.