stat /argo/podmetadata/annotations: no such file or directory #5656
Comments
Can you please attach the Pod's YAML?
I think I've seen this occasionally, and it usually happens when the apiserver is unstable and fails to write/update the pod's annotations.
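For context, the wait container reads the pod's annotations from a file projected by a Kubernetes Downward API volume, which is why a missed annotation write can surface as a missing file. A minimal sketch of such a mount (the volume name and structure here are illustrative, chosen to match the path in the error message, not copied from Argo's actual manifests):

```yaml
# Illustrative Downward API volume: the kubelet projects the pod's
# annotations into a file that the wait container stats. If the
# apiserver never persists the annotations, there is nothing to
# project, and reading the file fails with:
#   stat /argo/podmetadata/annotations: no such file or directory
volumes:
  - name: podmetadata
    downwardAPI:
      items:
        - path: annotations            # the file the executor reads
          fieldRef:
            fieldPath: metadata.annotations
containers:
  - name: wait
    volumeMounts:
      - name: podmetadata
        mountPath: /argo/podmetadata
```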
This is a very interesting theory; we have certainly seen this behaviour before with AKS. However, one workflow fails with this error every single time we run it, while another workflow with a similar step has never failed. That contradicts the idea that the apiserver is at fault, because an unstable apiserver would produce this error on only some runs, not consistently on one workflow.
Can you please try in v2.12?
Okay, then the causes might be different. In our case, we only observed this when the apiserver was under extremely high load or unstable. We observed this in v2.12.
This might be a pre-existing timing issue surfaced by changes in v3.
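If it really is a race between the executor's first read and the kubelet projecting the annotations file, retrying with a deadline would mask it, while a single `stat` fails hard. A minimal sketch of that idea (the function name, timings, and annotation key are hypothetical, not Argo's actual implementation):

```python
import os
import time

def wait_for_annotations(path, timeout=5.0, interval=0.1):
    """Poll until the Downward API annotations file appears.

    A single os.stat() at container start would fail with
    'no such file or directory' if the kubelet has not projected
    the volume yet; polling up to `timeout` seconds tolerates
    that window. Returns the file contents on success.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with open(path) as f:
                return f.read()
        except FileNotFoundError:
            time.sleep(interval)  # file not projected yet; retry
    raise TimeoutError(
        f"stat {path}: no such file or directory (gave up after {timeout}s)"
    )
```

This only helps if the annotations eventually arrive; if the apiserver never persists them at all (the unstable-apiserver theory above), the loop still times out, just with a clearer failure mode.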
Thanks for taking the time to look into this.
Summary
After upgrading to v3.0.1, the wait container consistently fails the workflow with the following output:
This works perfectly fine in v2.12.11. Any idea why this is happening?
Diagnostics
What Kubernetes provider are you using?
Azure Kubernetes Service (AKS)
What version of Argo Workflows are you running?
v3.0.1
Message from the maintainers:
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.