
[Doc] Clean up remaining RLlib broken links #46102

Merged

Conversation

peytondmurray
Contributor

Why are these changes needed?

This PR cleans up the last remaining broken links in the RLlib documentation.

Related issue number

Partially completes #39658.

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

Signed-off-by: pdmurray <peynmurray@gmail.com>
@peytondmurray peytondmurray added the rllib (RLlib related issues) and docs (An issue or change related to documentation) labels Jun 17, 2024
@peytondmurray peytondmurray mentioned this pull request Jun 17, 2024
@@ -42,7 +42,7 @@ are represented by slightly different maps that the agent has to navigate.
:end-before: __END_curriculum_learning_example_env_options__

Then, define the central piece controlling the curriculum, which is a custom callbacks class
-overriding the :py:meth:`~ray.rllib.algorithms.callbacks.Callbacks.on_train_result`.
+overriding the :py:meth:`~ray.rllib.algorithms.callbacks.DefaultCallbacks.on_train_result`.
Oh wow, thanks for catching this! This wrong link must have been in there forever.
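
For context, a minimal sketch of the kind of curriculum callback the corrected `DefaultCallbacks.on_train_result` reference points at. This is not the documentation's example: the `episode_reward_mean` key, the 200.0 threshold, and the per-env `set_task()` method are illustrative assumptions.

```python
from ray.rllib.algorithms.callbacks import DefaultCallbacks


class CurriculumCallbacks(DefaultCallbacks):
    """Bump the curriculum level once the agent clears a return threshold."""

    def on_train_result(self, *, algorithm, result, **kwargs) -> None:
        # Hypothetical metric key and threshold; use whatever success
        # criterion the environment actually reports.
        if result.get("episode_reward_mean", float("-inf")) > 200.0:
            # Hypothetical per-env hook: assumes each sub-environment
            # implements set_task(level) to switch to a harder map.
            algorithm.workers.foreach_worker(
                lambda worker: worker.foreach_env(lambda env: env.set_task(1))
            )
```

One way to register such a class is `config.callbacks(CurriculumCallbacks)` on the algorithm's `AlgorithmConfig` before building.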

-Example of how to override the :py:meth:`~ray.rllib.algorithms.algorithm.training_step` method of the
-:py:class:`~ray.rllib.algorithms.algorithm.Algorithm` class to train two different policies in parallel
-(also using multi-agent API).
+Example of how to override the :py:meth:`~ray.rllib.algorithms.algorithm.Algorithm.training_step` method
nice!
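
As a rough illustration of what the corrected `Algorithm.training_step` reference points at, here is a minimal sketch of an override. It is not the two-policy example from the docs; it only delegates to PPO's built-in step and marks where per-policy logic would go.

```python
from ray.rllib.algorithms.ppo import PPO


class TwoPolicyPPO(PPO):
    """Sketch of a custom training_step() override."""

    def training_step(self):
        # A real multi-policy workflow would sample and run separate update
        # passes per policy here; this sketch reuses PPO's step and only
        # tags the result dict.
        results = super().training_step()
        results["custom_training_step_ran"] = True
        return results
```

The docs example builds the multi-policy training logic inside this method instead of delegating; the subclass shape is the part the fixed link documents.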

@sven1977 sven1977 left a comment

LGTM! Thanks for these important fixes @peytondmurray ! :)

@peytondmurray peytondmurray added the go (add ONLY when ready to merge, run all tests) label Jun 18, 2024
@can-anyscale can-anyscale merged commit 49b57c5 into ray-project:master Jun 20, 2024
6 checks passed