[Doc] Clean up remaining RLlib broken links (#46102)
## Why are these changes needed?

This PR cleans up the last remaining broken links in the RLlib
documentation.

## Related issue number

Partially completes #39658.

## Checks

- [x] I've signed off every commit (by using the `-s` flag, i.e., `git commit -s`) in this PR.
- [x] I've run `scripts/format.sh` to lint the changes in this PR.
- [x] I've included any doc changes needed for
https://docs.ray.io/en/master/.
- [x] I've added any new APIs to the API Reference. For example, if I added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file.
- [x] I've made sure the tests are passing. Note that there might be a few flaky tests; see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
   - [x] Unit tests
   - [ ] Release tests
   - [ ] This PR is not tested :(

Signed-off-by: pdmurray <peynmurray@gmail.com>
peytondmurray authored Jun 20, 2024
1 parent 231a013 commit 49b57c5
Showing 3 changed files with 6 additions and 4 deletions.
3 changes: 3 additions & 0 deletions doc/source/rllib/package_ref/rl_modules.rst
@@ -82,6 +82,9 @@ Forward methods
 ~RLModule.forward_train
 ~RLModule.forward_exploration
 ~RLModule.forward_inference
+~RLModule._forward_train
+~RLModule._forward_exploration
+~RLModule._forward_inference
 
 IO specifications
 +++++++++++++++++
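For context, the three new entries document the private `_forward_train`, `_forward_exploration`, and `_forward_inference` hooks that custom `RLModule` subclasses override; the public `forward_*` methods delegate to them. A minimal sketch, assuming the new-API-stack `TorchRLModule` base class; the `TinyTorchModule` name, the layer sizes, and the exact import path are illustrative assumptions and may differ between Ray versions:

```python
# Hypothetical RLModule subclass showing the private _forward_* hooks the new
# doc entries refer to. Import path and base class follow RLlib's new API stack
# around the time of this commit and may differ in other Ray versions.
import torch
import torch.nn as nn

from ray.rllib.core.rl_module.torch.torch_rl_module import TorchRLModule


class TinyTorchModule(TorchRLModule):
    def setup(self):
        # Hypothetical fixed sizes; a real module derives these from its spaces.
        self.encoder = nn.Linear(4, 32)
        self.policy_head = nn.Linear(32, 2)

    def _forward_inference(self, batch, **kwargs):
        # Serving path: map observations to action-distribution inputs.
        logits = self.policy_head(torch.relu(self.encoder(batch["obs"])))
        return {"action_dist_inputs": logits}

    def _forward_exploration(self, batch, **kwargs):
        # Rollout/sampling path; reuses the inference computation here.
        return self._forward_inference(batch, **kwargs)

    def _forward_train(self, batch, **kwargs):
        # Training path; the learner's loss consumes the returned tensors.
        return self._forward_inference(batch, **kwargs)
```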
2 changes: 1 addition & 1 deletion doc/source/rllib/rllib-advanced-api.rst
@@ -42,7 +42,7 @@ are represented by slightly different maps that the agent has to navigate.
 :end-before: __END_curriculum_learning_example_env_options__
 
 Then, define the central piece controlling the curriculum, which is a custom callbacks class
-overriding the :py:meth:`~ray.rllib.algorithms.callbacks.Callbacks.on_train_result`.
+overriding the :py:meth:`~ray.rllib.algorithms.callbacks.DefaultCallbacks.on_train_result`.
 
 
 .. TODO move to doc_code and make it use algo configs.
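For context, the corrected cross-reference points at `DefaultCallbacks.on_train_result`, the hook the curriculum-learning example overrides. A hedged sketch of such a callback; `env.set_task(...)` and the reward threshold are hypothetical, user-defined details rather than RLlib APIs:

```python
# Sketch of a curriculum callback built on DefaultCallbacks.on_train_result.
# `env.set_task(...)` is a hypothetical method on the user's own environment,
# and the reward threshold is made up for illustration.
from ray.rllib.algorithms.callbacks import DefaultCallbacks


class CurriculumCallbacks(DefaultCallbacks):
    def on_train_result(self, *, algorithm, result, **kwargs):
        # Advance to a harder task once mean episode reward clears a threshold.
        new_task = 2 if result.get("episode_reward_mean", 0.0) > 200.0 else 1
        # Broadcast the task to every environment on every rollout worker.
        algorithm.workers.foreach_worker(
            lambda worker: worker.foreach_env(lambda env: env.set_task(new_task))
        )
```

Such a class is typically plugged into training via `AlgorithmConfig.callbacks(CurriculumCallbacks)`.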
5 changes: 2 additions & 3 deletions doc/source/rllib/rllib-examples.rst
@@ -64,9 +64,8 @@ Algorithms
 ----------
 
 - |new_stack| `How to write a custom Algorith.training_step() method combining on- and off-policy training <https://github.com/ray-project/ray/blob/master/rllib/examples/algorithms/custom_training_step_on_and_off_policy_combined.py>`__:
-  Example of how to override the :py:meth:`~ray.rllib.algorithms.algorithm.training_step` method of the
-  :py:class:`~ray.rllib.algorithms.algorithm.Algorithm` class to train two different policies in parallel
-  (also using multi-agent API).
+  Example of how to override the :py:meth:`~ray.rllib.algorithms.algorithm.Algorithm.training_step` method
+  to train two different policies in parallel (also using multi-agent API).
 
 Checkpoints
 -----------
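For context, the reworded entry describes overriding `Algorithm.training_step()`. A skeletal sketch of what such an override can look like, assuming a PPO subclass; the body is a placeholder, not the linked example's actual on-/off-policy logic:

```python
# Skeletal sketch of overriding Algorithm.training_step(); the body below is a
# placeholder, not the logic of the linked combined on-/off-policy example.
from ray.rllib.algorithms.ppo import PPO


class OnAndOffPolicyAlgo(PPO):
    def training_step(self):
        # Run the parent's regular (on-policy) update first ...
        results = super().training_step()
        # ... then a custom off-policy update would go here, with its metrics
        # merged into `results` before returning.
        return results
```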
