[doc] Cleanup document for building from source. #11145

Merged 7 commits on Jan 7, 2025
Changes from 5 commits
2 changes: 0 additions & 2 deletions doc/R-package/adding_parameters.rst
@@ -1,5 +1,3 @@
.. _index_base:

Developer guide: parameters from core library
=============================================

318 changes: 118 additions & 200 deletions doc/build.rst

Large diffs are not rendered by default.

1 change: 0 additions & 1 deletion doc/contrib/donate.rst
@@ -32,7 +32,6 @@ All expenses incurred for hosting CI will be submitted to the fiscal host with r
* Cloud expenses for the cloud test farm
* Cost of domain https://xgboost-ci.net
* Annual subscription for RunsOn
* Hosting cost of the User Forum (https://discuss.xgboost.ai)

Administration of cloud CI infrastructure
-----------------------------------------
2 changes: 1 addition & 1 deletion doc/contrib/python_packaging.rst
@@ -74,7 +74,7 @@ built at the time of install. So ``pip install`` with the binary wheel
completes quickly:

.. code-block:: console

   $ pip install xgboost-2.0.0-py3-none-linux_x86_64.whl  # Completes quickly

.. rubric:: Footnotes
2 changes: 1 addition & 1 deletion doc/gpu/index.rst
@@ -103,4 +103,4 @@ Many thanks to the following contributors (alphabetical order):
* Sriram Chandramouli
* Vinay Deshpande

Please report bugs to the XGBoost issues list: https://github.com/dmlc/xgboost/issues. For general questions please visit our user form: https://discuss.xgboost.ai/.
Please report bugs to the XGBoost `issues list <https://github.com/dmlc/xgboost/issues>`__.
7 changes: 4 additions & 3 deletions doc/jvm/xgboost4j_spark_gpu_tutorial.rst
@@ -216,8 +216,9 @@ and the prediction for each instance.
Submit the application
**********************

Assuming you have configured the Spark standalone cluster with GPU support. Otherwise, please
refer to `spark standalone configuration with GPU support <https://nvidia.github.io/spark-rapids/docs/get-started/getting-started-on-prem.html#spark-standalone-cluster>`_.
This assumes you have configured the Spark standalone cluster with GPU support. Otherwise,
please refer to `spark standalone configuration with GPU support
<https://docs.nvidia.com/spark-rapids/user-guide/latest/getting-started/on-premise.html>`__.
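As a concrete illustration of this submission step, a hedged sketch of a ``spark-submit`` invocation against a GPU-enabled standalone cluster; the master URL, jar name, class name, and resource amounts below are placeholders and assumptions, not values from this PR:

```shell
# Placeholders throughout: adjust the master URL, jar names, resource
# amounts, and main class to your own setup.
spark-submit \
  --master spark://<master-host>:7077 \
  --conf spark.plugins=com.nvidia.spark.SQLPlugin \
  --conf spark.executor.resource.gpu.amount=1 \
  --conf spark.task.resource.gpu.amount=0.125 \
  --jars rapids-4-spark_2.12-<version>.jar \
  --class <your.main.Class> \
  your-xgboost-app.jar
```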

Starting from XGBoost 2.1.0, stage-level scheduling is automatically enabled. Therefore,
if you are using Spark standalone cluster version 3.4.0 or higher, we strongly recommend
@@ -257,4 +258,4 @@ Spark Standalone cluster.
For details about other ``RAPIDS Accelerator`` configurations, please refer to the `configuration <https://nvidia.github.io/spark-rapids/docs/configs.html>`_.

For ``RAPIDS Accelerator Frequently Asked Questions``, please refer to the
`frequently-asked-questions <https://nvidia.github.io/spark-rapids/docs/FAQ.html#frequently-asked-questions>`_.
`frequently-asked-questions <https://docs.nvidia.com/spark-rapids/user-guide/latest/faq.html>`_.
2 changes: 1 addition & 1 deletion doc/tutorials/dask.rst
@@ -237,7 +237,7 @@ For most of the use cases with GPUs, the `Dask-CUDA <https://docs.rapids.ai/api/
Working with other clusters
***************************

Using Dask's ``LocalCluster`` is convenient for getting started quickly on a local machine. Once you're ready to scale your work, though, there are a number of ways to deploy Dask on a distributed cluster. You can use `Dask-CUDA <https://docs.rapids.ai/api/dask-cuda/stable/quickstart.html>`_, for example, for GPUs and you can use Dask Cloud Provider to `deploy Dask clusters in the cloud <https://docs.dask.org/en/stable/deploying.html#cloud>`_. See the `Dask documentation for a more comprehensive list <https://docs.dask.org/en/stable/deploying.html#distributed-computing>`_.
Using Dask's ``LocalCluster`` is convenient for getting started quickly on a local machine. Once you're ready to scale your work, though, there are a number of ways to deploy Dask on a distributed cluster. You can use `Dask-CUDA <https://docs.rapids.ai/api/dask-cuda/stable/quickstart.html>`_, for example, for GPUs and you can use Dask Cloud Provider to `deploy Dask clusters in the cloud <https://docs.dask.org/en/stable/deploying.html#cloud>`_. See the `Dask documentation for a more comprehensive list <https://docs.dask.org/en/stable/deploying.html>`__.

In the example below, a ``KubeCluster`` is used for `deploying Dask on Kubernetes <https://docs.dask.org/en/stable/deploying-kubernetes.html>`_:

2 changes: 1 addition & 1 deletion doc/tutorials/input_format.rst
@@ -32,7 +32,7 @@ Auxiliary Files for Additional Information

Group Input Format
==================
For `ranking task <https://github.com/dmlc/xgboost/tree/master/demo/rank>`_, XGBoost supports the group input format. In ranking task, instances are categorized into *query groups* in real world scenarios. For example, in the learning to rank web pages scenario, the web page instances are grouped by their queries. XGBoost requires an file that indicates the group information. For example, if the instance file is the ``train.txt`` shown above, the group file should be named ``train.txt.group`` and be of the following format:
For ranking tasks, XGBoost supports the group input format. In a ranking task, instances are categorized into *query groups* in real-world scenarios. For example, in the learning-to-rank web pages scenario, the web page instances are grouped by their queries. XGBoost requires a file that indicates the group information. For example, if the instance file is the ``train.txt`` shown above, the group file should be named ``train.txt.group`` and be of the following format:

.. code-block:: none
   :caption: ``train.txt.group``
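Since the body of the code block above is truncated in this view, here is a minimal, hypothetical sketch (not taken from the PR) of parsing such a group file; the helper name and file contents are assumptions for illustration only:

```python
from io import StringIO


def read_group_file(fh):
    """Parse a group file: one integer per line, each giving the number of
    consecutive instances that belong to the same query group."""
    return [int(line) for line in fh if line.strip()]


# Hypothetical contents for a 7-row instance file: two query groups.
sizes = read_group_file(StringIO("3\n4\n"))
print(sizes)       # [3, 4]
print(sum(sizes))  # 7 -- must equal the number of rows in train.txt
```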
9 changes: 3 additions & 6 deletions jvm-packages/create_jni.py
@@ -16,9 +16,6 @@

CONFIG = {
    "USE_OPENMP": "ON",
    "USE_HDFS": "OFF",
    "USE_AZURE": "OFF",
    "USE_S3": "OFF",
    "USE_CUDA": "OFF",
    "USE_NCCL": "OFF",
    "JVM_BINDINGS": "ON",
@@ -70,10 +67,9 @@ def normpath(path):
    return normalized


def native_build(args):
def native_build(cli_args: argparse.Namespace) -> None:
    CONFIG["USE_OPENMP"] = cli_args.use_openmp
    if sys.platform == "darwin":
        # Enable of your compiler supports OpenMP.
        CONFIG["USE_OPENMP"] = "OFF"
    os.environ["JAVA_HOME"] = (
        subprocess.check_output("/usr/libexec/java_home").strip().decode()
    )
@@ -184,5 +180,6 @@ def native_build(args):
"--log-capi-invocation", type=str, choices=["ON", "OFF"], default="OFF"
)
parser.add_argument("--use-cuda", type=str, choices=["ON", "OFF"], default="OFF")
parser.add_argument("--use-openmp", type=str, choices=["ON", "OFF"], default="ON")
cli_args = parser.parse_args()
native_build(cli_args)
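The new flag wiring can be sketched in isolation. This is a trimmed stand-in for ``create_jni.py``, not the full script: only the two flags visible in the diff are kept, and the ``CONFIG`` dict is shortened for the example.

```python
import argparse

# Trimmed stand-in for the CONFIG dict in create_jni.py.
CONFIG = {"USE_OPENMP": "ON", "USE_CUDA": "OFF", "JVM_BINDINGS": "ON"}

parser = argparse.ArgumentParser()
parser.add_argument("--use-cuda", type=str, choices=["ON", "OFF"], default="OFF")
parser.add_argument("--use-openmp", type=str, choices=["ON", "OFF"], default="ON")

# Simulate `python create_jni.py --use-openmp OFF`.
cli_args = parser.parse_args(["--use-openmp", "OFF"])
CONFIG["USE_OPENMP"] = cli_args.use_openmp
print(CONFIG)  # {'USE_OPENMP': 'OFF', 'USE_CUDA': 'OFF', 'JVM_BINDINGS': 'ON'}
```

Because ``choices=["ON", "OFF"]`` is enforced by argparse, an invalid value such as ``--use-openmp maybe`` exits with a usage error instead of silently reaching CMake.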
1 change: 1 addition & 0 deletions jvm-packages/pom.xml
@@ -56,6 +56,7 @@
<maven.wagon.http.retryHandler.count>5</maven.wagon.http.retryHandler.count>
<log.capi.invocation>OFF</log.capi.invocation>
<use.cuda>OFF</use.cuda>
<use.openmp>ON</use.openmp>
<cudf.version>24.10.0</cudf.version>
<spark.rapids.version>24.10.0</spark.rapids.version>
<spark.rapids.classifier>cuda12</spark.rapids.classifier>
2 changes: 2 additions & 0 deletions jvm-packages/xgboost4j/pom.xml
@@ -98,6 +98,8 @@
<argument>${log.capi.invocation}</argument>
<argument>--use-cuda</argument>
<argument>${use.cuda}</argument>
<argument>--use-openmp</argument>
<argument>${use.openmp}</argument>
</arguments>
<workingDirectory>${user.dir}</workingDirectory>
<skip>${skip.native.build}</skip>
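With the ``use.openmp`` property defined in ``jvm-packages/pom.xml`` and forwarded to ``create_jni.py`` as ``--use-openmp``, the toggle can presumably be overridden from the Maven command line. A hedged sketch only: the property name follows the diff, while the goal and extra flags are assumptions about a typical invocation.

```shell
# Build the JVM packages with OpenMP disabled; -Duse.openmp overrides the
# <use.openmp> property added in jvm-packages/pom.xml (default ON).
mvn --batch-mode install -Duse.openmp=OFF
```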