"Failed to proxy to backend" #5059
Comments
Can you post the full pod logs somewhere? Also, you're configuring 2 workers in your deployment, which I don't think does what you intend; you probably want to set the server replicas to 2 and the worker replicas to 1 instead.
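For reference, a minimal sketch of that replica split in the authentik Helm chart values; the exact keys (`server.replicas`, `worker.replicas`) vary between chart versions, so treat them as assumptions and verify against the chart's values.yaml:

```yaml
# Sketch only: scale out the HTTP-serving server pods and keep a single
# background worker. Key names are assumed and may differ per chart version.
server:
  replicas: 2   # pods that run the Go proxy + Python API (ports 9000/9443)
worker:
  replicas: 1   # background task worker; does not serve HTTP traffic
```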
@BeryJu any idea what could be causing this issue?
So it seems the core service is running correctly for some time, and only then do these timeouts happen. The reason this error happens is that the server container actually runs two processes: the core authentik server (which is Python) and a small Go proxy that serves the static files, provides TLS, and also runs the embedded outpost. The main ports 9000 and 9443 point to that Go proxy, which then proxies API requests to the actual backend. This error is caused by that proxying from the Go proxy to the backend timing out.

The main reason this can happen is high load on the system or in the container. It's also possible that there's a bug in the core service that causes its threads to hang (the logs don't point to anything like that, but I won't fully exclude it as the cause).

Do you use CPU throttling in Kubernetes? (It doesn't look like it from the config above, but there might be other things modifying the resource limits.) Also, as I said above, set the worker replicas to 1 and the server replicas to 2, which should better distribute the load.
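To illustrate the CPU-throttling point, a hedged sketch of resource settings that avoid a CPU limit on the server pods; the placement of the `resources` block and the values themselves are assumptions and depend on the chart version:

```yaml
# Sketch only: request CPU/memory but set no CPU limit, so the kubelet
# never throttles the Go proxy or the Python backend under load.
# Values and key placement are assumptions; adjust for your chart version.
server:
  resources:
    requests:
      cpu: 500m
      memory: 512Mi
    limits:
      memory: 1Gi   # memory limit only; deliberately no CPU limit
```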
No resource limits are set for Authentik, and we increased the replicas. @Jdavid77 will report back after a while to see whether the problem keeps happening. We also suspected a problem with the PostgreSQL connection, but it seems a bit unlikely.
I'm also receiving this error on a fresh install of authentik using Helm (2023.3.1). I use an external database provisioned by Stackgres, and it is SSL-enabled. The initial install went perfectly (all migrations ran and such), and then something weird happens afterwards. Here are the logs from when I start seeing these errors:
A really odd thing is that Authentik initially connected to the DB server over SSL successfully and the installation ran fine. In my setup, …
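For context, a rough sketch of how an external, SSL-enabled PostgreSQL is typically wired into the authentik Helm values; the key names (`authentik.postgresql.*`, `postgresql.enabled`), the hostname, and the `sslmode` key are assumptions based on the chart's documented layout, not the exact configuration from this report:

```yaml
# Sketch only: point the chart at an external PostgreSQL instead of the
# bundled one. Hostname, credentials and the sslmode key are placeholders;
# check the chart's values.yaml and the authentik docs for the exact keys.
postgresql:
  enabled: false                # don't deploy the bundled PostgreSQL chart
authentik:
  postgresql:
    host: "authentik-db.databases.svc"   # hypothetical Stackgres service name
    name: "authentik"
    user: "authentik"
    password: ""                # supplied via a secret / SOPS in practice
    sslmode: "require"          # assumed key, mapped to AUTHENTIK_POSTGRESQL__SSLMODE
```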
It's always the DNS... always. I just needed to add an additional zone to my CoreDNS config (so that the external database hostname resolves).
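In case anyone hits the same thing, here is roughly what that looks like; the zone name and upstream resolver below are hypothetical, and in practice the extra server block is merged into the existing Corefile rather than replacing it:

```yaml
# Sketch only: an extra server block in the kube-system CoreDNS ConfigMap so
# the external database's domain resolves from inside the cluster.
# Zone name and upstream IP are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    # ... the default ".:53" zone stays as-is, omitted here ...
    db.example.internal:53 {
        errors
        cache 30
        forward . 10.0.0.53   # upstream DNS that can resolve the DB hostname
    }
```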
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
Describe the bug
Since upgrading to the latest version, I've been getting this error:
{"error":"context canceled","event":"failed to proxy to backend","level":"warning","logger":"authentik.router","timestamp":"2023-03-23T17:38:43Z"}
It makes authentik restart continuously, and the pod gets shut down every time.
To Reproduce
Steps to reproduce the behavior:
My current Helm configuration is as follows:
P.S. - Using SOPS for variable substitution.
Expected behavior
A clear and concise description of what you expected to happen.
Version and Deployment (please complete the following information):
Additional context
Zero clue how to proceed with this error; it never happened before, and all our previous updates have been smooth.