
[RLlib] Fix SAC/DQN/CQL GPU and multi-GPU. #47179

Merged: 21 commits merged into ray-project:master from fix_sac_cql_multi_gpu on Aug 19, 2024

Conversation

sven1977 (Contributor) commented Aug 16, 2024

Fix DQN/SAC/CQL GPU and multi-GPU.

  • Fix a bug in DQN/SAC/CQL where the target nets do not get updated due to missing timestep information on the Learner side (this only happens when there are one or more remote Learners). See the sketch after this list.
  • For torch DDP to work with a complex setup like SAC's, in which the same network (the Q-net) is passed through twice, but only one of those passes should record gradients: Implement "straight-through" gradients for the Q-net forward pass on the resampled actions (computed by the policy net). In other words, make sure that for this forward pass the Q-net does NOT get its gradients recorded, but the policy net does.
  • Add CI learning tests for combinations of [DQN | SAC] x [single-agent | multi-agent] x [CPU Learner | GPU Learner | 2 CPU Learners | 2 GPU Learners].
  • Add release test for SAC on HalfCheetah-v4.
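A minimal, hypothetical sketch of the target-update gating the first bullet describes (the names maybe_update_target, update_freq, and tau are illustrative, not RLlib's actual API): if the global timestep never reaches a remote Learner, the gate never opens and the target nets silently go stale.

import torch
import torch.nn as nn

def maybe_update_target(main_net, target_net, timestep, last_update_ts, update_freq, tau):
    # Polyak (soft) update: target <- tau * main + (1 - tau) * target,
    # gated on the global timestep. If `timestep` never reaches a remote
    # Learner, this condition never fires and the target net goes stale.
    if timestep - last_update_ts >= update_freq:
        with torch.no_grad():
            for tp, mp in zip(target_net.parameters(), main_net.parameters()):
                tp.mul_(1.0 - tau).add_(tau * mp)
        return timestep
    return last_update_ts

# Usage sketch:
q_net, target_q = nn.Linear(4, 2), nn.Linear(4, 2)
last_ts = maybe_update_target(q_net, target_q, timestep=1000, last_update_ts=0, update_freq=500, tau=0.005)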

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

@sven1977 sven1977 enabled auto-merge (squash) August 16, 2024 19:21
@github-actions github-actions bot added the go add ONLY when ready to merge, run all tests label Aug 16, 2024
@sven1977 sven1977 requested review from maxpumperla and a team as code owners August 16, 2024 19:23
@sven1977 sven1977 added the tests-ok The tagger certifies test failures are unrelated and assumes personal liability. label Aug 17, 2024
simonsays1980 (Collaborator) left a comment

LGTM. Great PR with a big achievement. Multi-GPU on SAC is awesome!

tags = ["team:rllib", "exclusive", "learning_tests", "torch_only", "learning_tests_discrete", "learning_tests_pytorch_use_all_core", "gpu"],
size = "large",
srcs = ["tuned_examples/dqn/cartpole_dqn.py"],
args = ["--as-test", "--enable-new-api-stack", "--num-gpus=1"]
simonsays1980 (Collaborator) commented:

Does num-gpus=1 use a local or a remote Learner? IMO, we should test with both. What do you think, @sven1977?

sven1977 (Contributor, Author) replied:

For IMPALA/APPO, we should add a validation that these are never run with a local Learner, because these are async algos that suffer tremendously from the Learner not being async. Will add this check/error in a separate PR ...
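A hypothetical sketch of what that validation could look like (the num_learners attribute and learners() setter are assumptions about the config API; the real check lands in the follow-up PR):

from ray.rllib.algorithms.algorithm_config import AlgorithmConfig

def validate_async_learners(config: AlgorithmConfig) -> None:
    # Async algos (IMPALA/APPO) need at least one *remote* Learner; a local
    # Learner (num_learners=0) would serialize the async sample/learn loop.
    if config.num_learners == 0:
        raise ValueError(
            "IMPALA/APPO must not be run with a local Learner. "
            "Set `config.learners(num_learners=1)` or higher."
        )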

tags = ["team:rllib", "exclusive", "learning_tests", "torch_only", "learning_tests_discrete", "learning_tests_pytorch_use_all_core", "gpu"],
size = "large",
srcs = ["tuned_examples/dqn/multi_agent_cartpole_dqn.py"],
args = ["--as-test", "--enable-new-api-stack", "--num-agents=2", "--num-cpus=4", "--num-gpus=1"]
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Interesting, I thought combining --num-gpus > 0 with --num-cpus > 0 does not work :)

sven1977 (Contributor, Author) replied:

Good point. We need to get rid of this confusion sometime soon. Note that these are the command line options of the example scripts, not directly translatable to Algo config properties. Here:
--num-cpus is the number of CPUs Ray provides for the entire cluster.
--num-gpus is the number of Learner workers; note that if no GPUs are available, --num-gpus still sets the number of Learner workers, but each worker then gets one CPU (instead of one GPU). :|
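A rough, assumed mapping of those two flags onto Ray/RLlib settings (illustrative only; the example scripts' actual parser code may differ):

import ray
from ray.rllib.algorithms.dqn import DQNConfig

ray.init(num_cpus=4)  # --num-cpus=4: CPUs for the entire Ray cluster.

config = (
    DQNConfig()
    .environment("CartPole-v1")
    # --num-gpus=1: one Learner worker, backed by one GPU if available
    # (otherwise the script falls back to one CPU per Learner).
    .learners(num_learners=1, num_gpus_per_learner=1)
)
algo = config.build()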

main = "tuned_examples/sac/multi_agent_pendulum_sac.py",
tags = ["team:rllib", "exclusive", "learning_tests", "torch_only", "learning_tests_continuous"],
size = "large",
srcs = ["tuned_examples/sac/multi_agent_pendulum_sac.py"],
simonsays1980 (Collaborator) commented:

Do we actually need the srcs for files that can be executed directly via python?

# Reduce EnvRunner metrics over the n EnvRunners.
self.metrics.merge_and_log_n_dicts(
    env_runner_results, key=ENV_RUNNER_RESULTS
)

# Add the sampled experiences to the replay buffer.
with self.metrics.log_time((TIMERS, REPLAY_BUFFER_ADD_DATA_TIMER)):
simonsays1980 (Collaborator) commented:

Nice :)

# here). This is different from doing `.detach()` or `with torch.no_grad()`,
# as these two methods would fully block all gradient recordings, including
# the needed policy ones.
all_params = (
simonsays1980 (Collaborator) commented:

Nice!
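For readers following along: a minimal standalone sketch of the straight-through behavior this hunk implements (pi and qf are hypothetical stand-ins, not RLlib's actual module attributes). Freezing the Q-net parameters for just this forward pass keeps the graph through the resampled actions intact, whereas .detach() or torch.no_grad() would sever the policy gradients as well.

import torch
import torch.nn as nn

pi = nn.Linear(4, 2)  # hypothetical policy net
qf = nn.Linear(6, 1)  # hypothetical Q-net: concat(obs, action) -> Q-value

obs = torch.randn(8, 4)
actions = torch.tanh(pi(obs))  # resampled actions; gradients must reach `pi`

# Freeze the Q-net parameters for this one forward pass: no gradients get
# recorded for them, but the graph through `actions` (and thus `pi`) stays
# intact.
for p in qf.parameters():
    p.requires_grad_(False)
q_resampled = qf(torch.cat([obs, actions], dim=-1))
for p in qf.parameters():
    p.requires_grad_(True)

(-q_resampled.mean()).backward()
assert pi.weight.grad is not None  # policy net received gradients
assert qf.weight.grad is None      # Q-net did not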

@sven1977 sven1977 merged commit b040318 into ray-project:master Aug 19, 2024
5 checks passed
@sven1977 sven1977 deleted the fix_sac_cql_multi_gpu branch August 19, 2024 13:34