Airflow TaskInstance endpoint API error for old task instances: 'V1Container' object has no attribute '_startup_probe' #27084
Comments
In 2.4.1 you probably still get that error because your raw value has already been pickled with the old k8s library, and with the new version of the library you can't unpickle it, so there's nothing you can do at that point. What #24117 does, though, is fix it on a go-forward basis (by serializing to JSON), so we shouldn't have issues. Note that we followed up that one with #26191, which fixed an issue we didn't catch in testing where the webserver would bork the executor config by repeatedly applying the serialization logic. I don't think there's anything to be done about configs that are already pickled with the old library in this way, but if you have something to contribute, feel free to open a PR. There's always a way to test. In any case, this appears to be a duplicate of #23727.
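A minimal sketch of why the old pickles break (class names mirror the kubernetes client here, but the setup is illustrative): pickle restores an object's `__dict__` without calling `__init__`, so an attribute added by a newer kubernetes client, such as `_startup_probe`, is simply absent from objects pickled with the old release:

```python
import pickle


class V1Container:  # stand-in for kubernetes.client.V1Container
    def __init__(self, name):
        self.name = name
        self._startup_probe = None  # attribute that only exists in newer releases

    @property
    def startup_probe(self):
        return self._startup_probe


# Simulate an object pickled before ``_startup_probe`` existed: build an
# instance whose __dict__ lacks the attribute, exactly as unpickling an old
# blob would (pickle bypasses __init__ and just restores __dict__).
old = V1Container.__new__(V1Container)
old.__dict__ = {"name": "base"}
blob = pickle.dumps(old)

restored = pickle.loads(blob)
restored.startup_probe  # AttributeError: 'V1Container' object has no attribute '_startup_probe'
```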
Thanks @dstandish, agreed that the pickled value stored in the database is incompatible. Some of our users use ~ to fetch all task instances of all dagruns, where one faulty task instance causes a server error for all the other, compatible task instance objects in the API. They don't really use …
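For illustration, a request like the following against the stable REST API (host and credentials are placeholders) fails with a 500 as soon as a single listed task instance carries an old pickled `executor_config`:

```python
import requests

# "~" is the documented wildcard for dag_id and dag_run_id, so this lists
# task instances across all DAGs and all runs in one call.
resp = requests.get(
    "http://localhost:8080/api/v1/dags/~/dagRuns/~/taskInstances",
    auth=("admin", "admin"),
)
# One unreadable executor_config makes the whole response fail instead of
# returning the other, perfectly valid task instances.
print(resp.status_code)
```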
Oh I see, so your concern is specifically with the API. I didn't register that initially. Yeah, I'm sure there could be a fix for the scenario you mention.
Do mention this issue in the PR and tag me.
@tirkarthi - I assigned you to it :). If you think you can make a PR - cool; if you think it's not worth it - let us know and we will close this one (you can also close it yourself).
I believe this is resolved by #28454
Closing it provisionally then as fixed (we can always re-open if it is not).
Apache Airflow version
main (development)
What happened
Opening this issue as per comment: #23727 (comment). We have also noticed this issue: we have a 2.1.x setup using the Kubernetes executor, and after upgrading, new task instances created post-upgrade work fine in the task instance endpoint, while the old objects fail with a similar traceback. We also tried the main branch (2.4.1 as of writing) and it has the same issue. It seems a fix similar to #24117 has to be made, with a custom String field for `executor_config` in the task instance schema that calls the `_serialize` method and on error returns an empty dict as a string. We have a fix internally, though a test case might not be possible since it needs an older value of `executor_config` that we cannot export due to private data. I will be happy to make a PR with the fix and opened this issue for discussion.

cc: @dstandish @joshzana
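A rough sketch of the fix described above, assuming the task instance schema is a marshmallow schema (the field name and the exact exception handling here are illustrative, not Airflow's actual code):

```python
from marshmallow import fields


class ExecutorConfigField(fields.String):
    """String field that degrades gracefully for unreadable pickled values."""

    def _serialize(self, value, attr, obj, **kwargs):
        try:
            # str() on an old pickled k8s object can raise AttributeError
            # (e.g. a missing _startup_probe) from inside its __repr__.
            return super()._serialize(str(value), attr, obj, **kwargs)
        except AttributeError:
            # Return an empty dict as a string instead of failing the
            # whole API response for every other task instance.
            return "{}"
```

With a field like this, one corrupted `executor_config` no longer turns the entire list endpoint into a 500; the faulty instance just reports an empty config.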
What you think should happen instead
Older task instances that cannot be serialized should possibly return an empty dict instead of failing completely.
How to reproduce
Operating System
Redhat
Versions of Apache Airflow Providers
No response
Deployment
Other Docker-based deployment
Deployment details
No response
Anything else
No response
Are you willing to submit PR?
Yes I am willing to submit a PR!
Code of Conduct
I agree to follow this project's Code of Conduct