Update find_root_secant to use dynamic tensor shape (R0.13) #1415
base: main
Conversation
Update TF version.
Bump version suffix to rc0
keras in 2.5.0 is not exposing the `tf.keras.__internal__` symbol for some reason, even though it's there in tf-nightly.
Update distribution_layer.py
This deprecation is too late for 2.5.0, as it requires the change in TF's `deprecation.py` to `_safe_eq`.
A better fix for the keras utils lib.
PiperOrigin-RevId: 374718460
…ing. This also removes a spurious deprecation warning that appears when `JointDistribution`s are used in public colabs. (JDs have a multi-level hierarchy in which no class defines its own `_parameter_properties`, but since they all inherit the base class def that raises `NotImplementedError`, there's no inheritance problem.)

Justification for this change: there are legitimate 'quick-and-dirty' uses of `parameter_properties` inheritance, even if we wouldn't do it in TFP. For example, somewhere in the DeepMind silo is a Normal distribution that takes a `log_scale` rather than a `scale`:

    class NormalWithLogScale(tfd.Normal):
      def __init__(self, loc, log_scale):
        super().__init__(loc=loc, scale=tf.exp(log_scale))

Ignoring whether this is the best way to accomplish any particular goal, from a general Pythonic standpoint one might expect that this class would at least be basically functional. It may not support batch slicing or AutoCompositeTensor, but one should at least be able to call `sample` and `log_prob`, which means it should at least define properties like `batch_shape`. But now that batch shape depends on `parameter_properties` (as of cl/373590501), breaking `parameter_properties` inheritance means breaking `batch_shape` inheritance.

To allow quick subclasses like this, I propose we simply warn when an inherited `parameter_properties` is called. In this example, the batch shape would be computed (correctly) using the base Normal parameters, just as if an explicit `batch_shape` method had been inherited. For full functionality, including batch slicing, CompositeTensor, etc., a subclass would need to both (a) set `self.parameters = dict(locals())` in its own constructor, and (b) define its own `_parameter_properties`. I've tried to articulate these requirements in the warning message.

PiperOrigin-RevId: 374554087
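For context, a minimal runnable sketch of that quick-and-dirty subclass, exercised the way the commit message describes (the class name, shapes, and printed values are illustrative; this is not code from the PR):

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions


class NormalWithLogScale(tfd.Normal):
  """Quick-and-dirty subclass parameterized by the log of the scale."""

  def __init__(self, loc, log_scale):
    # Deliberately does NOT set `self.parameters = dict(locals())` or define
    # its own `_parameter_properties`; it leans on what `tfd.Normal` provides.
    super().__init__(loc=loc, scale=tf.exp(log_scale))


dist = NormalWithLogScale(loc=tf.zeros([3]), log_scale=tf.zeros([3]))

# Basic functionality works via the inherited machinery; under the proposed
# change this warns rather than errors, and the batch shape is computed from
# the base Normal parameters (loc, scale).
print(dist.batch_shape)              # TensorShape([3])
print(dist.sample())                 # a draw of shape [3]
print(dist.log_prob(tf.zeros([3])))  # log densities of shape [3]
```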
…ity` in preparation to convert `DeferredTensor` to `CompositeTensor` (such that `tf.identity` will newly return a `DeferredTensor` instead of a `Tensor`). PiperOrigin-RevId: 373877243
PiperOrigin-RevId: 373905388
…TransformedVariable` to `CompositeTensor`. PiperOrigin-RevId: 373906496
PiperOrigin-RevId: 374319681
PiperOrigin-RevId: 374478908
… preserved through flatten/unflatten and appear in serializations, but are omitted from comparison/equality checks and hashes. PiperOrigin-RevId: 374718734
…preserves the `name` attribute through flattening/unflattening and in serialization. PiperOrigin-RevId: 374731367
…lease. PiperOrigin-RevId: 374899448
Thanks for your pull request. It looks like this may be your first contribution to a Google open source project (if not, look below for help). Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please visit https://cla.developers.google.com/ to sign. Once you've signed (or fixed any issues), please reply here.

What to do if you already signed the CLA: Individual signers · Corporate signers

ℹ️ Googlers: Go here for more info.
There are several places in the function where the static tensor shape is used. As a result, errors appear when the shape is not fully specified, for example when running with an input whose static shape is `TensorShape([None])`.
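To make the motivation concrete, here is a small sketch of the general pattern the PR is after, not the actual diff to `find_root_secant`: when the static shape is only partially known (e.g. `TensorShape([None])`), shape-dependent computations should use the dynamic `tf.shape(x)` rather than the static `x.shape`. The function name below is hypothetical.

```python
import tensorflow as tf


@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.float32)])
def zeros_like_positions(initial_position):
  """Builds a tensor matching an input whose static shape is [None]."""
  # Static shape: inside this trace, initial_position.shape is
  # TensorShape([None]), so e.g. tf.zeros(initial_position.shape) would fail
  # because the leading dimension is unknown.
  # Dynamic shape: resolved at runtime, so it always has concrete values.
  dynamic_shape = tf.shape(initial_position)
  return tf.zeros(dynamic_shape, dtype=initial_position.dtype)


print(zeros_like_positions(tf.constant([1.0, 2.0, 3.0])))  # tf.Tensor([0. 0. 0.], ...)
```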