Pod startup failing due to new startup.sh #1557
Comments
Seeing the same behavior; the pod logs show:
@rich-bain can you show which version you pinned to?
Pinning the image to
resolved the issue for us!
We faced the same error on the new image. The error log is one line. Environment: AWS EKS v1.22.9-eks-a64ea69 with Karpenter (containerd).
We've already reverted.
Getting the same issue; reverted to
Controller Version: 0.24.1
Helm Chart Version: 0.19.1
CertManager Version: 1.8.1
Deployment Method: Other (cert-manager installation)
Checks
Resource Definitions
To Reproduce
Describe the bug
See the textPayload key below. This is a k8s_container resource, which means the pod was correctly scheduled but failed during startup. The symptom is a restart loop of pods every ~1 second.
Describe the expected behavior
Pod should start. Changing nothing but going back to
summerwind/actions-runner-dind:v2.293.0-ubuntu-20.04-933b0c7@sha256:635aa33ed5fc83f5df7a27986f654500fc28eeb619498888f3442a133b54258b
fixes the issue.
Controller Logs
Runner Pod Logs
Additional Context
Worked fine yesterday. Spun up some new nodes, which invalidated my Docker cache. Pinning back to the old version fixes the issue.
The issue is in either https://github.com/actions-runner-controller/actions-runner-controller/blob/master/runner/startup.sh#L30 or https://github.com/actions-runner-controller/actions-runner-controller/blob/master/runner/startup.sh#L45
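For anyone applying the workaround, the reverted image above can be pinned by digest so a node's cold Docker cache can never pull a different build under the same tag. This is a minimal sketch, not the reporter's actual manifest: the RunnerDeployment name and repository are hypothetical placeholders; only the image reference (copied from this issue) and the actions.summerwind.dev/v1alpha1 API are taken from the project.

```yaml
# Hypothetical RunnerDeployment pinning the runner image by digest.
# "example-runner" and "my-org/my-repo" are placeholders; the image
# reference is the known-good one reported in this issue.
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runner
spec:
  replicas: 1
  template:
    spec:
      repository: my-org/my-repo
      # Pinning by tag AND digest: the digest is what actually guarantees
      # the exact image, even if the tag is later re-pushed.
      image: summerwind/actions-runner-dind:v2.293.0-ubuntu-20.04-933b0c7@sha256:635aa33ed5fc83f5df7a27986f654500fc28eeb619498888f3442a133b54258b
      dockerdWithinRunnerContainer: true
```

Pinning by digest trades automatic patch pickup for reproducibility, which is usually the right call while a regression like this one is open.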