2 changes: 1 addition & 1 deletion build.sh
@@ -358,5 +358,5 @@ if buildAll || hasArg docs; then

cd "${REPODIR}"/docs/cuopt
make clean
make html
make html linkcheck
fi
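
For reference, a minimal way to exercise this hook locally might look like the sketch below (an assumption: ``build.sh`` is run from the repository root and accepts a ``docs`` target, as the ``hasArg docs`` guard above suggests):

.. code-block:: bash

   # Build the docs; with this change the HTML build is followed by linkcheck
   ./build.sh docs
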
16 changes: 13 additions & 3 deletions docs/cuopt/Makefile
@@ -23,12 +23,22 @@ SPHINXPROJ = cuOpt
SOURCEDIR = source
BUILDDIR = build

# Put it first so that "make" without argument is like "make help".
# Default target: build documentation and run link check
all: html linkcheck

# Build HTML documentation
html:
@$(SPHINXBUILD) -M html "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

# Check all external links in the documentation
linkcheck:
@$(SPHINXBUILD) -M linkcheck "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

# Show help
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)


.PHONY: help clean Makefile
.PHONY: all html linkcheck help clean Makefile

clean:
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
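
With the new targets in place, local usage could look like the following sketch (assuming a working Sphinx installation and that the commands are run from ``docs/cuopt``):

.. code-block:: bash

   cd docs/cuopt
   make            # default target: builds HTML, then runs linkcheck
   make html       # build HTML only
   make linkcheck  # check external links only
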
21 changes: 21 additions & 0 deletions docs/cuopt/source/conf.py
@@ -325,6 +325,27 @@ def setup(app):
}
html_search = True

# Link checker settings
linkcheck_retries = 3
linkcheck_timeout = 30
linkcheck_workers = 5
linkcheck_rate_limit_timeout = 60

# GitHub and GitLab link checker exceptions
linkcheck_ignore = [
# GitHub (Rate Limited)
r'https://github\.com/.*',
r'https://api\.github\.com/.*',
r'https://raw\.githubusercontent\.com/.*',
r'https://gist\.github\.com/.*',

# GitLab (Rate Limited)
r'https://gitlab\.com/.*',
r'https://api\.gitlab\.com/.*',
r'https://gitlab\.org/.*',
r'https://api\.gitlab\.org/.*',
]

def setup(app):
from sphinx.application import Sphinx
from typing import Any, List
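
A hedged sketch of how the link checker configured above might be run and inspected (it assumes ``sphinx-build`` is on the PATH and relies on Sphinx's usual behavior of writing the ``-M linkcheck`` report under ``build/linkcheck/output.txt``):

.. code-block:: bash

   cd docs/cuopt
   # Same invocation the Makefile wraps
   sphinx-build -M linkcheck source build
   # Broken links are marked "broken" in the report; ignored hosts are skipped
   grep -i "broken" build/linkcheck/output.txt || echo "no broken links reported"
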
2 changes: 1 addition & 1 deletion docs/cuopt/source/cuopt-c/lp-milp/lp-example.rst
@@ -5,7 +5,7 @@ LP C API Examples
Example With Data
-----------------

This example demonstrates how to use the LP solver in C. More details on the API can be found in `C API <lp-milp-c-api.html>`_.
This example demonstrates how to use the LP solver in C. More details on the API can be found in :doc:`C API <lp-milp-c-api>`.

Copy the code below into a file called ``lp_example.c``:

23 changes: 14 additions & 9 deletions docs/cuopt/source/cuopt-c/lp-milp/lp-milp-c-api.rst
@@ -48,15 +48,15 @@ Certain constants are needed to define an optimization problem. These constants
Objective Sense Constants
-------------------------

These constants are used to define the objective sense in the `cuOptCreateProblem <lp-milp-c-api.html#c.cuOptCreateProblem>`_ and `cuOptCreateRangedProblem <lp-milp-c-api.html#c.cuOptCreateRangedProblem>`_ functions.
These constants are used to define the objective sense in the :c:func:`cuOptCreateProblem` and :c:func:`cuOptCreateRangedProblem` functions.

.. doxygendefine:: CUOPT_MINIMIZE
.. doxygendefine:: CUOPT_MAXIMIZE

Constraint Sense Constants
--------------------------

These constants are used to define the constraint sense in the `cuOptCreateProblem <lp-milp-c-api.html#c.cuOptCreateProblem>`_ and `cuOptCreateRangedProblem <lp-milp-c-api.html#c.cuOptCreateRangedProblem>`_ functions.
These constants are used to define the constraint sense in the :c:func:`cuOptCreateProblem` and :c:func:`cuOptCreateRangedProblem` functions.

.. doxygendefine:: CUOPT_LESS_THAN
.. doxygendefine:: CUOPT_GREATER_THAN
@@ -65,15 +65,15 @@ These constants are used to define the constraint sense in the `cuOptCreateProbl
Variable Type Constants
-----------------------

These constants are used to define the the variable type in the `cuOptCreateProblem <lp-milp-c-api.html#c.cuOptCreateProblem>`_ and `cuOptCreateRangedProblem <lp-milp-c-api.html#c.cuOptCreateRangedProblem>`_ functions.
These constants are used to define the variable type in the :c:func:`cuOptCreateProblem` and :c:func:`cuOptCreateRangedProblem` functions.

.. doxygendefine:: CUOPT_CONTINUOUS
.. doxygendefine:: CUOPT_INTEGER

Infinity Constant
-----------------

This constant may be used to represent infinity in the `cuOptCreateProblem <lp-milp-c-api.html#c.cuOptCreateProblem>`_ and `cuOptCreateRangedProblem <lp-milp-c-api.html#c.cuOptCreateRangedProblem>`_ functions.
This constant may be used to represent infinity in the :c:func:`cuOptCreateProblem` and :c:func:`cuOptCreateRangedProblem` functions.

.. doxygendefine:: CUOPT_INFINITY

@@ -118,7 +118,7 @@ When you are done with a solve you should destroy a `cuOptSolverSettings` object

Setting Parameters
------------------
The following functions are used to set and get parameters. You can find more details on the available parameters in the `LP/MILP settings <../../lp-milp-settings.html>`_ section.
The following functions are used to set and get parameters. You can find more details on the available parameters in the :doc:`LP/MILP settings <../../lp-milp-settings>` section.

.. doxygenfunction:: cuOptSetParameter
.. doxygenfunction:: cuOptGetParameter
@@ -127,11 +127,12 @@ The following functions are used to set and get parameters. You can find more de
.. doxygenfunction:: cuOptSetFloatParameter
.. doxygenfunction:: cuOptGetFloatParameter

.. _parameter-constants:

Parameter Constants
-------------------

These constants are used as the parameter name in the `cuOptSetParameter <lp-milp-c-api.html#c.cuOptSetParameter>`_ , `cuOptGetParameter <lp-milp-c-api.html#c.cuOptGetParameter>`_ and similar functions. More details on the parameters can be found in the `LP/MILP settings <../../lp-milp-settings.html>`_ section.
These constants are used as parameter names in the :c:func:`cuOptSetParameter`, :c:func:`cuOptGetParameter`, and similar functions. For more details on the available parameters, see the :doc:`LP/MILP settings <../../lp-milp-settings>` section.

.. LP/MIP parameter string constants
.. doxygendefine:: CUOPT_ABSOLUTE_DUAL_TOLERANCE
@@ -161,20 +162,24 @@ These constants are used as the parameter name in the `cuOptSetParameter <lp-mil
.. doxygendefine:: CUOPT_NUM_CPU_THREADS
.. doxygendefine:: CUOPT_USER_PROBLEM_FILE

.. _pdlp-solver-mode-constants:

PDLP Solver Mode Constants
--------------------------

These constants are used to configure `CUOPT_PDLP_SOLVER_MODE` via `cuOptSetIntegerParameter <lp-milp-c-api.html#c.cuOptSetIntegerParameter>`_.
These constants are used to configure `CUOPT_PDLP_SOLVER_MODE` via :c:func:`cuOptSetIntegerParameter`.

.. doxygendefine:: CUOPT_PDLP_SOLVER_MODE_STABLE1
.. doxygendefine:: CUOPT_PDLP_SOLVER_MODE_STABLE2
.. doxygendefine:: CUOPT_PDLP_SOLVER_MODE_METHODICAL1
.. doxygendefine:: CUOPT_PDLP_SOLVER_MODE_FAST1

.. _method-constants:

Method Constants
----------------

These constants are used to configure `CUOPT_METHOD` via `cuOptSetIntegerParameter <lp-milp-c-api.html#c.cuOptSetIntegerParameter>`_.
These constants are used to configure `CUOPT_METHOD` via :c:func:`cuOptSetIntegerParameter`.

.. doxygendefine:: CUOPT_METHOD_CONCURRENT
.. doxygendefine:: CUOPT_METHOD_PDLP
@@ -214,7 +219,7 @@ When you are finished with a `cuOptSolution` object you should destory it with
Termination Status Constants
----------------------------

These constants define the termination status received from the `cuOptGetTerminationStatus <lp-milp-c-api.html#c.cuOptGetTerminationStatus>`_ function.
These constants define the termination status received from the :c:func:`cuOptGetTerminationStatus` function.

.. LP/MIP termination status constants
.. doxygendefine:: CUOPT_TERIMINATION_STATUS_NO_TERMINATION
2 changes: 1 addition & 1 deletion docs/cuopt/source/cuopt-c/lp-milp/milp-examples.rst
@@ -5,7 +5,7 @@ MILP C API Examples
Example With Data
-----------------

This example demonstrates how to use the MILP solver in C. More details on the API can be found in `C API <lp-milp-c-api.html>`_.
This example demonstrates how to use the MILP solver in C. More details on the API can be found in :doc:`C API <lp-milp-c-api>`.

Copy the code below into a file called ``milp_example.c``:

4 changes: 2 additions & 2 deletions docs/cuopt/source/cuopt-cli/quick-start.rst
@@ -2,7 +2,7 @@
Quickstart Guide
=================

cuopt_cli is built as part of the libcuopt package and you can follow these `instructions <../cuopt-c/quick-start.html>`_ to install it.
cuopt_cli is built as part of the libcuopt package, and you can follow the instructions in :doc:`../cuopt-c/quick-start` to install it.

To see all available options and their descriptions:

@@ -17,4 +17,4 @@ This will display the complete list of command-line arguments and their usage:
:language: shell
:linenos:

Please refer to `parameter settings <../lp-milp-settings.html>`_ for more details on default values and other options.
Please refer to :doc:`../lp-milp-settings` for more details on default values and other options.
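
As a minimal illustration of the workflow above, an invocation might look like the following sketch (``my_problem.mps`` is a placeholder file name, and the assumption that the CLI accepts an MPS file path with default settings should be confirmed against the ``--help`` output shown above):

.. code-block:: bash

   # Solve a problem from an MPS file using default parameter settings
   cuopt_cli my_problem.mps
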
11 changes: 7 additions & 4 deletions docs/cuopt/source/cuopt-python/quick-start.rst
@@ -24,7 +24,7 @@ For CUDA 12.x:
Conda
-----

NVIDIA cuOpt can be installed with Conda (via `miniforge <https://github.com/conda-forge/miniforge>`_ from the ``nvidia`` channel:
NVIDIA cuOpt can be installed with Conda (via `miniforge <https://github.com/conda-forge/miniforge>`_) from the ``nvidia`` channel:

.. code-block:: bash

@@ -41,16 +41,19 @@ NVIDIA cuOpt is also available as a container from Docker Hub:

.. code-block:: bash

docker pull nvidia/cuopt:latest-cuda12.8-py312
docker pull nvidia/cuopt:latest-cuda12.8-py3.12

.. note::
The ``latest`` tag is the latest stable release of cuOpt. If you want to use a specific version, you can use the ``<version>-cuda12.8-py312`` tag. For example, to use cuOpt 25.5.0, you can use the ``25.5.0-cuda12.8-py312`` tag. Please refer to `cuOpt dockerhub page <https://hub.docker.com/r/nvidia/cuopt>`_ for the list of available tags.
The ``latest`` tag is the latest stable release of cuOpt. If you want to use a specific version, you can use the ``<version>-cuda12.8-py3.12`` tag. For example, to use cuOpt 25.5.0, you can use the ``25.5.0-cuda12.8-py3.12`` tag. Please refer to the `cuOpt Docker Hub page <https://hub.docker.com/r/nvidia/cuopt>`_ for the list of available tags.
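
For example, pinning to the release mentioned in the note above might look like this (tag taken from the note; available tags should be confirmed on Docker Hub):

.. code-block:: bash

   docker pull nvidia/cuopt:25.5.0-cuda12.8-py3.12
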

.. note::
The nightly version of cuOpt is available under the ``[VERSION]a-cuda12.8-py3.12`` tag. For example, to use cuOpt 25.8.0a, you can use the ``25.8.0a-cuda12.8-py3.12`` tag.

The container includes both the Python API and self-hosted server components. To run the container:

.. code-block:: bash

docker run --gpus all -it --rm nvidia/cuopt:latest-cuda12.8-py312
docker run --gpus all -it --rm nvidia/cuopt:latest-cuda12.8-py3.12 /bin/bash

This will start an interactive session with cuOpt pre-installed and ready to use.
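
A quick way to confirm the environment could be a one-off command like the sketch below (assumptions: the image's entrypoint allows overriding the command, and it exposes a ``cuopt`` Python package with a ``__version__`` attribute):

.. code-block:: bash

   docker run --gpus all --rm nvidia/cuopt:latest-cuda12.8-py3.12 \
       python -c "import cuopt; print(cuopt.__version__)"
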

8 changes: 4 additions & 4 deletions docs/cuopt/source/cuopt-server/client-api/sh-cli-build.rst
@@ -73,9 +73,9 @@ Success Response:
{"reqId":"1df28c33-8b8c-4bb7-9ff9-1e19929094c6"}


When sending files to the server, the server must be configured with appropriate data and result directories to temporarily store these files. These directories can be set using the ``-d`` and ``-r`` options when starting the server. Please refer to the `Server CLI documentation <../server-api/server-cli.html>`_ for more details on configuring these directories.
When sending files to the server, the server must be configured with appropriate data and result directories to temporarily store these files. These directories can be set using the ``-d`` and ``-r`` options when starting the server. Please refer to the :doc:`Server CLI documentation <../server-api/server-cli>` for more details on configuring these directories.
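
For instance, a server started with explicit directories might look like the following sketch (the directory paths are illustrative; the ``-d``/``-r`` options are the ones referenced above, and the module invocation mirrors the server start command shown elsewhere in these docs, for example in the LP examples):

.. code-block:: bash

   # Create temporary data and result directories (paths are placeholders)
   mkdir -p /tmp/cuopt_data /tmp/cuopt_results
   python -m cuopt_server.cuopt_service --ip "$ip" --port "$port" \
       -d /tmp/cuopt_data -r /tmp/cuopt_results
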

``JSON_DATA`` should follow the `spec <../../open-api.html#operation/postrequest_cuopt_request_post>`_ described for cuOpt input.
``JSON_DATA`` should follow the :doc:`spec under "POST /cuopt/request" schema <../../open-api>` described for cuOpt input.

Polling for Request Status:
---------------------------
@@ -92,7 +92,7 @@ Users can poll the request id for status with the help of ``/cuopt/request/{requ

In case the solver has completed the job, the response will be "completed".

Please refer to the `Solver status in spec <../../open-api.html#operation/getrequest_cuopt_request__id__get>`_ for more details on responses.
Please refer to the :doc:`Solver status in spec using "GET /cuopt/request/{request-id}" <../../open-api>` for more details on responses.
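
A polling call might look like the sketch below (it mirrors the solution-retrieval ``curl`` shown further down and assumes the same ``$ip``/``$port`` variables; replace ``{request-id}`` with the id returned when the job was submitted):

.. code-block:: bash

   curl --location "http://$ip:$port/cuopt/request/{request-id}"
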


cuOpt Result Retrieval
@@ -106,7 +106,7 @@ Once you have received successful response from solver with status "completed",
curl --location "http://$ip:$port/cuopt/solution/{request-id}"


This would fetch the result in JSON format. Please refer to the `Response structure in spec <../../open-api.html#operation/getrequest_cuopt_solution__id__get>`_ for more details on responses.
This would fetch the result in JSON format. Please refer to the :doc:`Response structure in spec using "GET /cuopt/solution/{request-id}" <../../open-api>` for more details on responses.


.. important::
4 changes: 2 additions & 2 deletions docs/cuopt/source/cuopt-server/csp-guides/csp-aws.rst
@@ -46,10 +46,10 @@ Step 1: Create an AWS VM with NVAIE Image
Step 2: Activate NVAIE Subscription
------------------------------------

Once connected to the VM, generate an identity token. Activate your NVIDIA AI Enterprise subscription using that identity token on NGC. Follow the instructions `here <https://docs.nvidia.com/ai-enterprise/deployment-guide-cloud/0.1.0/azure-ai-enterprise-vmi.html#accessing-the-nc-on-ngc>`__.
Once connected to the VM, generate an identity token. Activate your NVIDIA AI Enterprise subscription using that identity token on NGC. Follow the instructions `here <https://docs.nvidia.com/ai-enterprise/deployment/cloud/latest/azure-ai-enterprise-vmi.html#accessing-the-ngc-catalog-on-ngc>`__.

Step 3: Run cuOpt
------------------

To run cuOpt, you will need to log in to the NVIDIA Container Registry, pull the cuOpt container, and then run it. To test that it is successfully running, you can run a sample cuOpt request. This process is the same for deploying cuOpt on your own infrastructure. Refer to `Self-Hosted Service Quickstart Guide </cuopt-server/quick-start.html#container-from-nvidia-ngc>`__.
To run cuOpt, you will need to log in to the NVIDIA Container Registry, pull the cuOpt container, and then run it. To test that it is successfully running, you can run a sample cuOpt request. This process is the same for deploying cuOpt on your own infrastructure. Refer to :ref:`Self-Hosted Service Quickstart Guide <container-from-nvidia-ngc>`.
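
A rough outline of those steps, with placeholders for the registry path and tag (the exact image location and tag should be taken from the NGC catalog entry and the quickstart guide referenced above):

.. code-block:: bash

   # Log in to the NVIDIA Container Registry with an NGC API key
   docker login nvcr.io          # username: $oauthtoken, password: <NGC API key>

   # Pull and run the cuOpt container (image path and tag are placeholders)
   docker pull nvcr.io/<org>/<image>:<tag>
   docker run --gpus all -it --rm nvcr.io/<org>/<image>:<tag>
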

4 changes: 2 additions & 2 deletions docs/cuopt/source/cuopt-server/csp-guides/csp-azure.rst
@@ -56,12 +56,12 @@ Step 1: Create an Azure VM with NVAIE Image
Step 2: Activate NVAIE Subscription
------------------------------------

Once connected to the VM, generate an identity token. Activate your NVIDIA AI Enterprise subscription using that identity token on NGC. Follow the instructions `here <https://docs.nvidia.com/ai-enterprise/deployment-guide-cloud/0.1.0/azure-ai-enterprise-vmi.html#accessing-the-nc-on-ngc>`__.
Once connected to the VM, generate an identity token. Activate your NVIDIA AI Enterprise subscription using that identity token on NGC. Follow the instructions `here <https://docs.nvidia.com/ai-enterprise/deployment/cloud/latest/azure-ai-enterprise-vmi.html#accessing-the-ngc-catalog-on-ngc>`__.

Step 3: Run cuOpt
------------------

To run cuOpt, you will need to log in to the NVIDIA Container Registry, pull the cuOpt container, and then run it. To test that it is successfully running, you can run a sample cuOpt request. This process is the same for deploying cuOpt on your own infrastructure. Refer `Self-Hosted Service Quickstart Guide </cuopt-server/quick-start.html#container-from-nvidia-ngc>`__.
To run cuOpt, you will need to log in to the NVIDIA Container Registry, pull the cuOpt container, and then run it. To test that it is successfully running, you can run a sample cuOpt request. This process is the same for deploying cuOpt on your own infrastructure. Refer to :ref:`Self-Hosted Service Quickstart Guide <container-from-nvidia-ngc>`.


Step 4: Mapping Visualization with Azure
14 changes: 9 additions & 5 deletions docs/cuopt/source/cuopt-server/examples/lp-examples.rst
@@ -4,7 +4,7 @@ LP Python Examples

The following example showcases how to use the ``CuOptServiceSelfHostClient`` to solve a simple LP problem in normal mode and batch mode (where multiple problems are solved at once).

The OpenAPI specification for the server is available in `open-api spec <../../open-api.html>`_. The example data is structured as per the OpenAPI specification for the server, please refer `LPData <../../open-api.html#/default/postrequest_cuopt_request_post>`_ under schema section. LP and MILP share same spec.
The OpenAPI specification for the server is available in the :doc:`open-api spec <../../open-api>`. The example data is structured according to that specification; please refer to :doc:`LPData under "POST /cuopt/request" <../../open-api>` in the schema section. LP and MILP share the same spec.

If you want to run server locally, please run the following command in a terminal or tmux session so you can test examples in another terminal.

@@ -15,6 +15,8 @@ If you want to run server locally, please run the following command in a termina
export port=5000
python -m cuopt_server.cuopt_service --ip $ip --port $port

.. _generic-example-with-normal-and-batch-mode:

Generic Example With Normal Mode and Batch Mode
------------------------------------------------

@@ -225,6 +227,8 @@ Batch mode response:
.. note::
Warm start is only applicable to LP and not for MILP.

.. _warm-start:

Warm Start
----------

@@ -428,7 +432,7 @@ The response is:
Generate Datamodel from MPS Parser
----------------------------------

Use a datamodel generated from mps file as input; this yields a solution object in response. For more details please refer to `LP/MILP parameters <../../lp-milp-settings.html>`_.
Use a datamodel generated from an MPS file as input; this yields a solution object in the response. For more details, please refer to :doc:`LP/MILP parameters <../../lp-milp-settings>`.

.. code-block:: python
:linenos:
@@ -560,13 +564,13 @@ The response would be as follows:

Example with DataModel is available in the `Examples Notebooks Repository <https://github.com/NVIDIA/cuopt-examples>`_.

The ``data`` argument to ``get_LP_solve`` may be a dictionary of the format shown in `LP Open-API spec <../../open-api.html#operation/postrequest_cuopt_request_post>`_. More details on the response can be found under the responses schema `request and solution API spec <../../open-api.html#/default/getrequest_cuopt_request__id__get>`_.
The ``data`` argument to ``get_LP_solve`` may be a dictionary of the format shown in :doc:`LP Open-API spec <../../open-api>`. More details on the response can be found under the responses schema :doc:`"get /cuopt/request" and "get /cuopt/solution" API spec <../../open-api>`.


Aborting a Running Job in Thin Client
-------------------------------------

Please refer to the `MILP Example on Aborting a Running Job in Thin Client <milp-examples.html#aborting-a-running-job-in-thin-client>`_ for more details.
Please refer to :ref:`aborting-thin-client` in the MILP examples for more details.


=================================================
@@ -709,7 +713,7 @@ In the case of batch mode, you can send a bunch of ``mps`` files at once, and ac
Aborting a Running Job In CLI
-----------------------------

Please refer to the `MILP Example <milp-examples.html#aborting-a-running-job-in-cli>`_ for more details.
Please refer to :ref:`aborting-cli` in the MILP examples for more details.

.. note::
Please use solver settings while using .mps files.