
Multiple Classes fails in Eager Mode ("tf.keras.Model") #18763

Closed
sml0820 opened this issue Apr 21, 2018 · 2 comments · Fixed by #78963
Labels
stat:awaiting response Status - Awaiting response from author

Comments


sml0820 commented Apr 21, 2018

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
    No
  • Bazel version:
    N/A
  • CUDA/cuDNN version:
    N/A
  • GPU model and memory:
    N/A
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
    Tried on macOS using tensorflow as well as Linux Ubuntu 16.04 using tensorflow-gpu
  • TensorFlow installed from (source or binary):
    Installed via pip
  • TensorFlow version (use command below):
    1.7
  • Python version:
    3.6
  • Exact command to reproduce:
import tensorflow as tf  
import tensorflow.contrib.eager as tfe  

tfe.enable_eager_execution()

class CustomLayer(tf.keras.Model):
    def __init__(self):
        super(CustomLayer, self).__init__()
        print("blah")

class CustomNetwork(tf.keras.Model):
    def __init__(self):
        super(CustomNetwork, self).__init__()
        self.custom_layers = CustomLayer()

    def forward(self, x, y=None):
        x = self.custom_layers(x)

CustomNetwork().forward(tf.convert_to_tensor([1]))

Describe the problem

Nesting one "tf.keras.Model" subclass inside another fails in TensorFlow eager mode. If I change "tf.keras.Model" to "tfe.Network" it works (note that I am using TensorFlow 1.7). Running the code above produces the error below:

Source code / logs

blah
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-12-9afa9b91ddef> in <module>()
----> 1 CustomNetwork().forward(tf.convert_to_tensor([1]))

<ipython-input-11-484119102aec> in forward(self, x, y)
      5 
      6     def forward(self, x, y=None):
----> 7         x = self.custom_layers(x)

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/keras/_impl/keras/engine/base_layer.py in __call__(self, inputs, **kwargs)
    237     """
    238     # Actually call the layer (optionally building it).
--> 239     output = super(Layer, self).__call__(inputs, **kwargs)
    240     if context.executing_eagerly():
    241       return output

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/layers/base.py in __call__(self, inputs, *args, **kwargs)
    712 
    713         if not in_deferred_mode:
--> 714           outputs = self.call(inputs, *args, **kwargs)
    715           if outputs is None:
    716             raise ValueError('A layer\'s `call` method should return a Tensor '

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/keras/_impl/keras/engine/network.py in call(self, inputs, training, mask)
    635     outputs, _ = self._run_internal_graph(inputs,
    636                                           training=training,
--> 637                                           mask=masks)
    638     return outputs
    639 

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/keras/_impl/keras/engine/network.py in _run_internal_graph(self, inputs, training, mask)
    770     # does not return a list the same size as `call`
    771     tensor_map = {}
--> 772     for x, y, mask in zip(self.inputs, inputs, masks):
    773       tensor_map[str(id(x))] = (y, mask)
    774 

TypeError: zip argument #1 must support iteration
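The final TypeError is consistent with `self.inputs` being None: the subclassed model never defines `call`, so Keras falls back to the graph-network `call` path, which iterates `zip(self.inputs, inputs, masks)` on a model that was never built as a graph network. A standalone sketch of just that failing step (plain Python, not TensorFlow itself):

```python
# Sketch of the failing line: zip() requires every argument to be
# iterable, and self.inputs is still None for a subclassed model that
# was never built as a graph network.
self_inputs = None            # stands in for self.inputs
inputs, masks = [1], [None]

try:
    for x, y, mask in zip(self_inputs, inputs, masks):
        pass
except TypeError as e:
    print(e)                  # zip argument #1 must support iteration
```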
@tensorflowbutler tensorflowbutler added the stat:awaiting response Status - Awaiting response from author label Apr 22, 2018
@tensorflowbutler (Member)

Thank you for your post. We noticed you have not filled out the following field in the issue template. Could you update them if they are relevant in your case, or leave them as N/A? Thanks.
Bazel version
CUDA/cuDNN version
GPU model and memory


sml0820 commented Apr 22, 2018

I believe I need to define .call instead of .forward.
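That matches how Keras dispatches: `Layer.__call__` always routes to `call`, so a method named `forward` is never invoked. A minimal pure-Python sketch of that dispatch pattern (hypothetical class names, not the real Keras implementation):

```python
# Sketch of the Keras-style dispatch: __call__ always routes to
# self.call, so subclasses must override `call`, not `forward`.

class Layer:
    def __call__(self, inputs):
        return self.call(inputs)

    def call(self, inputs):
        raise NotImplementedError("subclasses must override call()")

class BrokenModel(Layer):
    def forward(self, x):     # never reached via __call__
        return x * 2

class WorkingModel(Layer):
    def call(self, x):        # correct hook name
        return x * 2

print(WorkingModel()(3))      # __call__ dispatches to call(): prints 6
```

Calling `BrokenModel()(3)` raises NotImplementedError, because `forward` is invisible to the `__call__` dispatch.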

@sml0820 sml0820 closed this as completed Apr 22, 2018
copybara-service bot pushed a commit that referenced this issue Oct 29, 2024
Imported from GitHub PR openxla/xla#18763

These conditions seem to break the auto-pgle workflow. Removing them.
Copybara import of the project:

--
4bf87b309b5eabef94ce7a7fe00346d5faaea241 by Shraiysh Vaishay <svaishay@nvidia.com>:

Remove the conditions that break auto-pgle workflow.

Merging this change closes #18763

FUTURE_COPYBARA_INTEGRATE_REVIEW=openxla/xla#18763 from shraiysh:fix-pgle-mistype 4bf87b309b5eabef94ce7a7fe00346d5faaea241
PiperOrigin-RevId: 691052017
copybara-service bot pushed a commit that referenced this issue Oct 30, 2024
Imported from GitHub PR openxla/xla#18763

These conditions seem to break the auto-pgle workflow. Removing them.
Copybara import of the project:

--
99e456fbda8e82a9dcd5b600398071de04344a5e by Shraiysh Vaishay <svaishay@nvidia.com>:

Remove the conditions that break auto-pgle workflow.

Merging this change closes #18763

FUTURE_COPYBARA_INTEGRATE_REVIEW=openxla/xla#18763 from shraiysh:fix-pgle-mistype 99e456fbda8e82a9dcd5b600398071de04344a5e
PiperOrigin-RevId: 691052017
copybara-service bot pushed a commit that referenced this issue Oct 31, 2024
Imported from GitHub PR openxla/xla#18763

These conditions seem to break the auto-pgle workflow. Removing them.
Copybara import of the project:

--
99e456fbda8e82a9dcd5b600398071de04344a5e by Shraiysh Vaishay <svaishay@nvidia.com>:

Remove the conditions that break auto-pgle workflow.

Merging this change closes #18763

PiperOrigin-RevId: 691751033