
[HANDS-ON mistake] Unit 4 Policy gradient - mistake in exercise explanation #582

Open
migolan opened this issue Jan 11, 2025 · 0 comments

In the hands-on section of Unit 4 (policy gradient), when building the REINFORCE policy, the exercise contains an intentional bug that produces the error message
ValueError: The value argument to log_prob must be a Tensor.
The notebook's explanation says that this error occurs because we should sample from the probability distribution rather than take its argmax, but that is not the reason the error occurs.
The error occurs because the code uses np.argmax(Categorical(probs)) rather than torch.argmax(probs), so log_prob does not receive a torch.Tensor.
We would indeed like to replace the argmax with sampling, but for different reasons, so the reasoning in the notebook is somewhat misleading. A sketch illustrating the distinction follows below.
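
To make the distinction concrete, here is a minimal, self-contained sketch of the sampling step (the probability values and variable names are illustrative, not copied from the notebook):

```python
import numpy as np
import torch
from torch.distributions import Categorical

# Illustrative action probabilities for a batch of one state.
probs = torch.tensor([[0.1, 0.7, 0.2]])
m = Categorical(probs)

# Buggy version from the exercise: np.argmax over the Categorical object
# does not return a torch.Tensor, so m.log_prob(action) raises
#   ValueError: The value argument to log_prob must be a Tensor.
# action = np.argmax(m)

# torch.argmax(probs) would avoid that ValueError, but REINFORCE wants to
# sample from the distribution so the policy explores and log_prob matches
# the action actually taken.
action = m.sample()
log_prob = m.log_prob(action)
print(action.item(), log_prob.item())
```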
