Misc improvements #1333
base: pytorch
Conversation
…nd debug message improvements. (cherry picked from commit a455316)
alf/algorithms/merlin_algorithm.py
Outdated
if output_activation is None:
    output_activation = alf.math.identity
Seems unnecessary. alf.math.identity can be provided directly as the argument's default.
Good point. Removed.
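For illustration, a minimal sketch of the suggested pattern, assuming only that alf.math.identity is an ordinary callable; the function name and signature here are hypothetical, not the actual merlin_algorithm.py code:

```python
import alf

# Hypothetical constructor for illustration only. Because
# alf.math.identity is a plain callable, it can serve as the default
# value itself, removing the need for a None check in the body.
def make_head(output_activation=alf.math.identity):
    return output_activation

assert make_head()(3.0) == 3.0  # identity passes values through unchanged
```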
alf/utils/data_buffer_test.py
Outdated
# ox = (x * torch.arange(
#     batch_size, dtype=torch.float32, requires_grad=True,
#     device="cpu").unsqueeze(1) * torch.arange(
#     dim, dtype=torch.float32, requires_grad=True,
#     device="cpu").unsqueeze(0))
# Expand scalar inputs so that `x` and `t` are consistent with batch_size.
if batch_size > 1 and x.ndim > 0 and batch_size == x.shape[0]:
    a = x  # x is already batched.
else:
    # x is a scalar; broadcast it to a (batch_size,) tensor.
    a = x * torch.ones(batch_size, dtype=torch.float32, device="cpu")
if not (batch_size > 1 and t.ndim > 0 and batch_size == t.shape[0]):
    # t is a scalar; broadcast it as well.
    t = t * torch.ones(batch_size, dtype=torch.int32, device="cpu")
ox = a.unsqueeze(1).clone().requires_grad_(True)
What is the purpose of this change?
This is needed because we allow the x and t inputs to be scalars, which are expanded to be consistent with batch_size. I also made the code easier to read and added comments.
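As a standalone illustration of the expansion (plain torch only; the variable names mirror the test but are otherwise arbitrary):

```python
import torch

batch_size = 4
x = torch.tensor(2.0)  # scalar input, ndim == 0
t = torch.tensor(7, dtype=torch.int32)

# Broadcasting against torch.ones turns each scalar into a
# (batch_size,) tensor.
a = x * torch.ones(batch_size, dtype=torch.float32, device="cpu")
t = t * torch.ones(batch_size, dtype=torch.int32, device="cpu")
ox = a.unsqueeze(1).clone().requires_grad_(True)

assert a.shape == (batch_size,)
assert ox.shape == (batch_size, 1) and ox.requires_grad
```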
alf/config_util.py
Outdated
# Most of the time, for command-line flags, this warning is a false alarm.
# It can be useful for other failures, e.g. when the Config has already
# been used before its value was configured.
logging.warning("pre_config potential error: %s", e)
This warning is hard to understand. It would be better to identify the specific case where the Config has already been used. Perhaps throw a different type of exception when the config has already been used in config1()?
Good point. Now logging an error in config1.
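A minimal sketch of the resulting behavior, assuming a simplified config registry; config1 is the function name touched in alf/config_util.py, but the registry dict, the is_used attribute, and the exception class are all hypothetical:

```python
import logging

class ConfigAlreadyUsedError(ValueError):
    """Raised when a value is set after the config has already been read."""

def config1(name, value, registry):
    cfg = registry[name]  # `registry` and `cfg.is_used` are assumptions.
    if cfg.is_used:
        # Surface the specific failure instead of a vague generic warning.
        logging.error("Config '%s' has already been used; configuring it "
                      "now cannot affect earlier reads.", name)
        raise ConfigAlreadyUsedError(name)
    cfg.value = value
```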
Thanks for the comments. All done, PTAL.
Adding various asserts, debug messages, and small improvements.