TFX 1.12.0 Issues #5604
TFX 1.12.0 seems to depend on Apache Beam 2.41.0 || 2.42.0 || 2.43.0, instead of 2.40.0 as described on the README page.
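For anyone who wants to verify the Apache Beam constraint that a given tfx release actually declares, here is a minimal sketch using only the standard library; it assumes Python 3.8+ and that tfx is already installed in the active environment:

# Sketch: list the Apache Beam requirement declared by the installed tfx package.
# Assumes Python 3.8+ and that tfx is installed in the active environment.
from importlib import metadata

beam_requirements = [
    req for req in (metadata.requires("tfx") or [])
    if req.startswith("apache-beam")
]
print(beam_requirements)  # shows the declared apache-beam version range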
Cannot install TFX 1.12.0 using pip (tried the stable 22.0.4 and the newest 22.3.1). The following is the related error message.
A similar error message was also raised for
Update: It appears that the 1.12.0 wheels are now built for macOS 12_0 instead of 10_9 (https://pypi.org/project/tfx-bsl/1.12.0/#files). Are older versions of macOS no longer supported?
@EdwardCuiPeacock, thank you for bringing up this point. We are updating our packages and supported OS versions to prevent security vulnerabilities. Please try updating your macOS, and TFX should install without any issues. Kindly let us know if you face other issues. Thank you!
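As a quick sanity check before and after updating, the macOS version the interpreter sees can be printed with the standard library (a minimal sketch; it assumes CPython running on macOS):

# Sketch: print the macOS version reported to Python.
# Assumes CPython running on macOS.
import platform

print(platform.mac_ver()[0])  # e.g. '12.6' or '13.3.1'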
I cannot run the following. In previous versions this worked:
In 1.12.0 it no longer does:
Hi, I encountered the same issue as @IzakMaraisTAL and found a workaround to fix it, but it may not be the most robust solution:
If possible, could you provide a more reliable way to fix this issue, please? Thank you in advance!
TFX Cloud Tuner job does not fail / runs forever when an error is encountered. I am using Python 3.9.15 with TFX 1.12.0, TensorFlow 2.11.0, and keras_tuner 1.3.0. The error came from my data generation script (as indicated in the logs), but I expected the error to also terminate the tuner job instead of it hanging indefinitely. Tuner component setup:
tuner_args = {
"tuner_fn": settings.TUNER_SPECS["tuner_fn"],
"examples": transform.outputs["transformed_examples"],
"transform_graph": transform.outputs["transform_graph"],
"train_args": TrainArgs(num_steps=settings.TUNER_SPECS["train_num_steps"]),
"eval_args": EvalArgs(num_steps=settings.TUNER_SPECS["eval_num_steps"]),
"custom_config": custom_config,
}
if settings.GCP_AI_PLATFORM_TUNING_ARGS is not None: # cloud tuner
tuner_args.update(
{
"tune_args": TuneArgs(
num_parallel_trials=settings.TUNER_SPECS["num_parallel_trials"]
),
}
)
tuner_args["custom_config"].update(
{
# Note that this TUNING_ARGS_KEY will be used to start the CAIP
# job for parallel tuning (CAIP job X above).
# num_parallel_trials will be used to fill/overwrite the
# workerCount specified by TUNING_ARGS_KEY:
# num_parallel_trials = workerCount + 1 (for master)
TUNING_ARGS_KEY: settings.GCP_AI_PLATFORM_TUNING_ARGS,
# This working directory has to be a valid GCS path and will be
# used to launch remote training job per trial.
REMOTE_TRIALS_WORKING_DIR_KEY: os.path.join(
settings.PIPELINE_ROOT, "trials"
),
ENABLE_VERTEX_KEY: True,
VERTEX_REGION_KEY: settings.GPU_REGION,
"use_gpu": settings.GCP_AI_PLATFORM_TUNING_ARGS is not None,
"num_gpus": settings.GCP_AI_PLATFORM_TUNING_ARGS["job_spec"][
"worker_pool_specs"
][0]["machine_spec"]["accelerator_count"],
}
)
tuner = AI_Platform_Tuner(**tuner_args)
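For reference, the tuner_fn named in tuner_args above is expected to return a TunerFnResult wrapping a KerasTuner tuner. A minimal sketch of that shape follows; the _build_model and _input_fn helpers are hypothetical placeholders standing in for the user's own model-building and data-loading code, not something taken from the original comment:

# Sketch of the tuner_fn shape expected by the Tuner component.
# _build_model and _input_fn are hypothetical placeholders for the user's own
# model-building and data-loading code.
import keras_tuner
from tfx import v1 as tfx

def tuner_fn(fn_args: tfx.components.FnArgs) -> tfx.components.TunerFnResult:
    tuner = keras_tuner.RandomSearch(
        hypermodel=_build_model,        # hypothetical model-building function
        objective="val_loss",
        max_trials=10,
        directory=fn_args.working_dir,  # trial working directory provided by TFX
        project_name="tfx_tuning",
    )
    return tfx.components.TunerFnResult(
        tuner=tuner,
        fit_kwargs={
            # _input_fn is a hypothetical helper that builds the input dataset
            "x": _input_fn(fn_args.train_files, fn_args.transform_graph_path),
            "validation_data": _input_fn(fn_args.eval_files, fn_args.transform_graph_path),
            "steps_per_epoch": fn_args.train_steps,
            "validation_steps": fn_args.eval_steps,
        },
    )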
I can confirm that this is fixed in 1.13.0-rc0.
Hello, I updated my macOS version but I still get the error. I am on macOS 13.3.1 and Python 3.7.16. This is blocking my upgrade to TFX 1.12, which I believe is the last version compatible with Python 3.7.
It turned out that even though I updated my OS, I was still using a Python that had been compiled for my old OS version, which is why it couldn't see packages built for macOS 12, as shown by running the following commands:
>>> from distutils import util
>>> util.get_platform()
'macosx-10.9-x86_64'
Reinstalling a new Python compiled for my updated OS version fixed my problem. Hope I didn't waste anyone's time on that issue.
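Since distutils is deprecated in newer Python versions, an equivalent check can be done with sysconfig, and the wheel tags pip would accept can be listed with the third-party packaging library (a minimal sketch, assuming packaging is installed, e.g. via pip install packaging):

# Sketch: show the platform the interpreter was built for and a few of the
# wheel tags pip would consider compatible.
# Assumes the third-party 'packaging' library is installed.
import sysconfig
from packaging import tags

print(sysconfig.get_platform())        # e.g. 'macosx-10.9-x86_64' vs 'macosx-12.0-x86_64'
for tag in list(tags.sys_tags())[:5]:  # the most preferred compatible tags
    print(tag)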
Closing this issue since all comments are resolved. Please open a new issue here. Thanks.
Please comment or link any issues you find with TFX 1.12.0.
Thanks.