diff --git a/doc/source/tune/api/execution.rst b/doc/source/tune/api/execution.rst
index a759758845e6c..f4d0005276292 100644
--- a/doc/source/tune/api/execution.rst
+++ b/doc/source/tune/api/execution.rst
@@ -54,3 +54,4 @@ tune.run_experiments
 
     run_experiments
     Experiment
+    TuneError
diff --git a/doc/source/tune/api/logging.rst b/doc/source/tune/api/logging.rst
index 81e55df295f52..f8692a19b2d0a 100644
--- a/doc/source/tune/api/logging.rst
+++ b/doc/source/tune/api/logging.rst
@@ -59,8 +59,10 @@ See the :doc:`tutorial here `.
 
 .. autosummary::
     :nosignatures:
+    :toctree: doc/
 
-    tune.logger.mlflow.MLflowLoggerCallback
+    ~air.integrations.mlflow.MLflowLoggerCallback
+    ~air.integrations.mlflow.setup_mlflow
 
 Wandb Integration
 -----------------
@@ -71,8 +73,10 @@ See the :doc:`tutorial here `.
 
 .. autosummary::
     :nosignatures:
+    :toctree: doc/
 
-    tune.logger.wandb.WandbLoggerCallback
+    ~air.integrations.wandb.WandbLoggerCallback
+    ~air.integrations.wandb.setup_wandb
 
 
 Comet Integration
@@ -84,8 +88,9 @@ See the :doc:`tutorial here `.
 
 .. autosummary::
     :nosignatures:
+    :toctree: doc/
 
-    tune.logger.comet.CometLoggerCallback
+    ~air.integrations.comet.CometLoggerCallback
 
 Aim Integration
 ---------------
@@ -119,4 +124,3 @@ The non-relevant metrics (like timing stats) can be disabled on the left
 to show relevant ones (like accuracy, loss, etc.).
 
 .. image:: ../images/ray-tune-viskit.png
-
diff --git a/doc/source/tune/examples/tune_analyze_results.ipynb b/doc/source/tune/examples/tune_analyze_results.ipynb
index a7a11690a0b9e..aafc6f1bbaac9 100644
--- a/doc/source/tune/examples/tune_analyze_results.ipynb
+++ b/doc/source/tune/examples/tune_analyze_results.ipynb
@@ -479,7 +479,7 @@
    "id": "184bd3ee",
    "metadata": {},
    "source": [
-    "The last reported metrics might not contain the best accuracy each trial achieved. If we want to get maximum accuracy that each trial reported throughout its training, we can do so by using {meth}`ResultGrid.get_dataframe <ray.tune.ResultGrid.get_dataframe>` specifying a metric and mode used to filter each trial's training history."
+    "The last reported metrics might not contain the best accuracy each trial achieved. If we want to get maximum accuracy that each trial reported throughout its training, we can do so by using {meth}`~ray.tune.ResultGrid.get_dataframe` specifying a metric and mode used to filter each trial's training history."
    ]
   },
   {
diff --git a/doc/source/tune/faq.rst b/doc/source/tune/faq.rst
index 2c5e884a60087..53a1744610acc 100644
--- a/doc/source/tune/faq.rst
+++ b/doc/source/tune/faq.rst
@@ -634,7 +634,7 @@ You can configure this by setting the `RAY_CHDIR_TO_TRIAL_DIR=0` environment var
 This explicitly tells Tune to not change the working directory to the trial directory, giving access to paths relative to the original working directory.
 
 One caveat is that the working directory is now shared between workers, so the
-:meth:`train.get_context().get_trial_dir() `
+:meth:`train.get_context().get_trial_dir() `
 API should be used to get the path for saving trial-specific outputs.
 
 .. literalinclude:: doc_code/faq.py
diff --git a/doc/source/tune/tutorials/tune-distributed.rst b/doc/source/tune/tutorials/tune-distributed.rst
index e25987f045b73..23bff47c69d01 100644
--- a/doc/source/tune/tutorials/tune-distributed.rst
+++ b/doc/source/tune/tutorials/tune-distributed.rst
@@ -237,7 +237,7 @@ even after failure.
 
 Recovering From Failures
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
-Tune automatically persists the progress of your entire experiment (a ``Tuner.fit()`` session), so if an experiment crashes or is otherwise cancelled, it can be resumed through :meth:`Tuner.restore() <ray.tune.Tuner.restore>`.
+Tune automatically persists the progress of your entire experiment (a ``Tuner.fit()`` session), so if an experiment crashes or is otherwise cancelled, it can be resumed through :meth:`~ray.tune.Tuner.restore`.
 
 .. _tune-distributed-common:
diff --git a/doc/source/tune/tutorials/tune-storage.rst b/doc/source/tune/tutorials/tune-storage.rst
index 07efe6cb6af23..8a55d7f6fe88a 100644
--- a/doc/source/tune/tutorials/tune-storage.rst
+++ b/doc/source/tune/tutorials/tune-storage.rst
@@ -212,11 +212,11 @@ you can resume it any time starting from the experiment state saved in the cloud
 
 There are a few options for restoring an experiment:
 ``resume_unfinished``, ``resume_errored`` and ``restart_errored``.
 Please see the documentation of
-:meth:`Tuner.restore() <ray.tune.Tuner.restore>` for more details.
+:meth:`~ray.tune.Tuner.restore` for more details.
 
 Advanced configuration
 ----------------------
 
 See :ref:`Ray Train's section on advanced storage configuration `.
-All of the configurations also apply to Ray Tune.
\ No newline at end of file
+All of the configurations also apply to Ray Tune.
diff --git a/python/ray/tune/experiment/trial.py b/python/ray/tune/experiment/trial.py
index f0a2bccab4729..0834181fdfb86 100644
--- a/python/ray/tune/experiment/trial.py
+++ b/python/ray/tune/experiment/trial.py
@@ -98,7 +98,7 @@ def validate(formats):
         """Validates formats.
 
         Raises:
-            ValueError if the format is unknown.
+            ValueError: if the format is unknown.
         """
         for i in range(len(formats)):
             formats[i] = formats[i].strip().lower()
@@ -660,7 +660,7 @@ def update_resources(self, resources: Union[dict, PlacementGroupFactory]):
         Should only be called when the trial is not running.
 
         Raises:
-            ValueError if trial status is running.
+            ValueError: if trial status is running.
         """
         if self.status is Trial.RUNNING:
             raise ValueError("Cannot update resources while Trial is running.")
diff --git a/python/ray/tune/tuner.py b/python/ray/tune/tuner.py
index c4da01e4c88a4..86d7cae553758 100644
--- a/python/ray/tune/tuner.py
+++ b/python/ray/tune/tuner.py
@@ -391,7 +391,7 @@ def fit(self) -> ResultGrid:
 
     def get_results(self) -> ResultGrid:
         """Get results of a hyperparameter tuning run.
 
-        This method returns the same results as :meth:`fit() <ray.tune.Tuner.fit>`
+        This method returns the same results as :meth:`~ray.tune.Tuner.fit`
         and can be used to retrieve the results after restoring a tuner
         without calling ``fit()`` again.
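
For context, here is a minimal usage sketch (not part of the patch) tying together the APIs whose references this diff updates: the relocated `ray.air.integrations` import path and `Tuner.restore`. It assumes Ray >= 2.7 with `wandb` installed; the project name, metric, and search space below are illustrative only.

```python
# Sketch under the assumptions above; not a definitive implementation.
from ray import train, tune
from ray.air.integrations.wandb import WandbLoggerCallback  # new path per this diff


def objective(config):
    # Report a metric each iteration so the W&B callback has something to log.
    train.report({"loss": (config["lr"] - 0.05) ** 2})


tuner = tune.Tuner(
    objective,
    param_space={"lr": tune.grid_search([0.01, 0.05, 0.1])},
    run_config=train.RunConfig(
        callbacks=[WandbLoggerCallback(project="my-demo-project")],
    ),
)
results = tuner.fit()

# If the Tuner.fit() session crashes or is cancelled, it can be resumed from
# the saved experiment state, matching the Tuner.restore references above:
# restored = tune.Tuner.restore(results.experiment_path, trainable=objective)
```

The `setup_mlflow`/`setup_wandb` helpers newly listed in the autosummaries follow a different pattern: rather than attaching a driver-side callback, they are called inside the training function to set up per-trial, in-process logging.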