In the hands-on section of Unit 4 (policy gradient), when building the REINFORCE policy, the exercise has an intentional bug that causes the error `ValueError: The value argument to log_prob must be a Tensor`.
The explanation says that this error occurs because we should sample from the probability distribution rather than take its argmax - but that's not the reason the error occurs.
The error occurs because we use `np.argmax(Categorical(probs))` rather than `torch.argmax(probs)`.
Indeed, we would still like to replace the argmax with sampling (REINFORCE needs actions drawn from the stochastic policy, both for exploration and for the log-probability terms in its gradient estimate), but that is a different issue, so the reasoning in the notebook is somewhat misleading.
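For reference, here is a minimal sketch of the `act` method; the network class and layer sizes are illustrative (a CartPole-style setup), and only the marked lines correspond to the bug and the two possible changes discussed above:

```python
import torch
from torch import nn
from torch.distributions import Categorical


class Policy(nn.Module):
    """Illustrative policy network; sizes are placeholders."""

    def __init__(self, s_size=4, a_size=2, h_size=16):
        super().__init__()
        self.fc1 = nn.Linear(s_size, h_size)
        self.fc2 = nn.Linear(h_size, a_size)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        return torch.softmax(self.fc2(x), dim=1)

    def act(self, state):
        state = torch.from_numpy(state).float().unsqueeze(0)
        probs = self.forward(state)
        m = Categorical(probs)

        # Buggy line from the exercise: np.argmax applied to the distribution
        # object yields a plain NumPy integer, not a torch.Tensor, so the
        # m.log_prob(action) call below raises
        # "ValueError: The value argument to log_prob must be a Tensor".
        # action = np.argmax(m)

        # Type-correct greedy variant: torch.argmax keeps the action a Tensor,
        # so log_prob no longer raises.
        # action = torch.argmax(probs)

        # What REINFORCE actually wants: sample from the stochastic policy.
        action = m.sample()
        return action.item(), m.log_prob(action)
```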