KeyValueTensorInitializer based on tft.count_per_key output raises AssertionError: Tried to export a function which references untracked resource Tensor #240
Comments
Thanks for reporting this, we'll be working on a fix for tft.count_per_key. The changes include specifying a key_vocabulary_filename, which results in getting a vocabulary path from …
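As a rough sketch of the workaround being discussed (an illustrative, non-runnable fragment, not a tested pipeline — the feature name 'label' and the filename 'label_counts' are made up): tft.count_per_key accepts a key_vocabulary_filename argument, in which case the key/count pairs are written to a vocabulary file and a deferred path to that file is returned, instead of in-graph tensors.

```python
# Illustrative fragment only (assumed names: inputs['label'], 'label_counts').
def preprocessing_fn(inputs):
    # With key_vocabulary_filename set, count_per_key writes the key/count
    # pairs to a vocabulary file and returns its deferred path. The lookup
    # table can then be initialized from the file at load time instead of
    # capturing analyzer output tensors in the traced graph.
    vocab_path = tft.count_per_key(
        inputs['label'], key_vocabulary_filename='label_counts')
    ...
```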
Thanks a lot for the quick and clear answer, I will keep an eye on the coming releases. Also, if you could advise me on the workaround: I wonder whether hasattr(deferred_vocab_filename_tensor, 'numpy') is the way one should check whether we are in eager mode, and why it is necessary here?
No, this is purely a workaround for this specific case; it appears that outputs of …

This was added to the workaround in order to wrap the table initialization with a tf.init_scope. We found that checking whether the tensor has a numpy attribute distinguishes eager tensors from graph tensors in this context. Hopefully this clears things up.
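The check discussed above can be illustrated with a small standalone sketch (using stand-in classes rather than real TensorFlow tensors, so it runs anywhere): an EagerTensor exposes a .numpy() method, while a symbolic graph tensor does not, so hasattr(t, 'numpy') tells the two apart.

```python
# Stand-in classes for illustration only: a real tf.Tensor in eager mode
# is an EagerTensor and exposes a .numpy() method, while a symbolic tensor
# inside a tf.function / graph context does not.
class FakeEagerTensor:
    def numpy(self):
        return 42

class FakeGraphTensor:
    pass

def is_eager_tensor(t):
    # Mirrors the hasattr(tensor, 'numpy') check from the workaround.
    return hasattr(t, 'numpy')

print(is_eager_tensor(FakeEagerTensor()))  # True
print(is_eager_tensor(FakeGraphTensor()))  # False
```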
It does a lot. Thank you for the help!
Hi @jccarles, could you please move this to closed status as it is resolved? Thank you!
Hey @pindinagesh, sure! When was it fixed? Or is the workaround of going through an intermediate vocabulary file the definitive solution?

```
WARNING:tensorflow:You are passing instance dicts and DatasetMetadata to TFT which will not provide optimal performance. Consider following the TFT guide to upgrade to the TFXIO format (Apache Arrow RecordBatch).
WARNING:tensorflow:You are passing instance dicts and DatasetMetadata to TFT which will not provide optimal performance. Consider following the TFT guide to upgrade to the TFXIO format (Apache Arrow RecordBatch).
WARNING:absl:Tables initialized inside a tf.function will be re-initialized on every invocation of the function. This re-initialization can have significant impact on performance. Consider lifting them out of the graph context using `tf.init_scope`.: key_value_init/LookupTableImportV2
WARNING:absl:Tables initialized inside a tf.function will be re-initialized on every invocation of the function. This re-initialization can have significant impact on performance. Consider lifting them out of the graph context using `tf.init_scope`.: key_value_init/LookupTableImportV2
WARNING:tensorflow:You are passing instance dicts and DatasetMetadata to TFT which will not provide optimal performance. Consider following the TFT guide to upgrade to the TFXIO format (Apache Arrow RecordBatch).
WARNING:tensorflow:You are passing instance dicts and DatasetMetadata to TFT which will not provide optimal performance. Consider following the TFT guide to upgrade to the TFXIO format (Apache Arrow RecordBatch).
WARNING:apache_beam.options.pipeline_options:Discarding unparseable args: ['/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py', '-f', '/root/.local/share/jupyter/runtime/kernel-981dce64-aebf-4813-a00a-1f14564d61f8.json']
WARNING:root:Make sure that locally built Python SDK docker image has Python 3.7 interpreter.
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/apache_beam/runners/common.cpython-37m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process()

65 frames

ValueError: Unable to save function b'__inference__initializer_247' because it captures graph tensor Tensor("count_per_key/StringToNumber:0", shape=(None,), dtype=int64) from a parent function which cannot be converted to a constant with `tf.get_static_value`.

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/save.py in map_resources(self)
    402       if capture_constant_value is None:
    403         raise ValueError(
--> 404             f"Unable to save function {concrete_function.name} because it "
    405             f"captures graph tensor {capture} from a parent function which "
    406             "cannot be converted to a constant with `tf.get_static_value`.")

ValueError: Unable to save function b'__inference__initializer_247' because it captures graph tensor Tensor("count_per_key/StringToNumber:0", shape=(None,), dtype=int64) from a parent function which cannot be converted to a constant with `tf.get_static_value`. [while running 'AnalyzeAndTransformDataset/AnalyzeDataset/CreateSavedModelForAnalyzerInputs[Phase0][tf_v2_only]/CreateSavedModel']
```

The workaround still works perfectly! Thank you for your answer.
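For context on the error above: tf.get_static_value is the helper the SavedModel exporter falls back to when a function captures a tensor from a parent graph. It returns the Python value of a tensor when that value can be computed at graph-construction time, and None otherwise, which is why a data-dependent analyzer output cannot be folded into a constant. A small sketch of that behavior (assuming TensorFlow 2.x is installed):

```python
import tensorflow as tf

# An eager constant has a static value.
c = tf.constant([1, 2, 3])
print(tf.get_static_value(c))  # [1 2 3]

@tf.function
def f(x):
    # Inside a tf.function, a tensor that depends on the function's input
    # has no value at trace time, so get_static_value returns None.
    y = x * 2
    print(tf.get_static_value(y))  # None (printed during tracing)
    return y

f(tf.constant(1))
```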
Hello TFT team,
I am migrating to TensorFlow Transform 1.0 and I am running into a tracing issue with TF2 behavior enabled. I wish to instantiate a lookup table based on the result of a tensorflow-transform analyzer, but when tft_beam.AnalyzeDataset tries to save the transformation graph I run into an error referring to an "untracked resource Tensor".

Versions
Steps to reproduce
I created a small snippet which reproduces the error: it creates a few data examples and a basic preprocessing_fn which instantiates a KeyValueTensorInitializer based on the result of the tft.count_per_key analyzer.

Stack trace
When the above script is run I get the following error:
I am unsure what is causing the issue, although I do not encounter it when force_tf_compat_v1=True. Any help would be appreciated.