client_idle_timeout does not work #2166
if you change the settings in
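For readers following along, the setting under discussion lives in the `auth_service` section of `teleport.yaml`. The fragment below is a hypothetical minimal example (key names and duration syntax per the Teleport docs of that era, not this user's actual config):

```yaml
# teleport.yaml (fragment) - illustrative only
auth_service:
  enabled: yes
  # disconnect clients after 60 seconds of inactivity; 0 disables the check
  client_idle_timeout: 60s
```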
@klizhentas I have done that plenty of times. I've also disabled it, and run it manually with --debug on to see if I could get more information.
ok, we will take a look. thanks for your bug report
@datasage I've looked into it and could not reproduce. Can you paste your configuration here (removing all specifics, of course) and point out the places where you are looking?
Starting with the config:
I updated the session storage setting to s3 and that worked right away. Are auth service settings initialized once and then stored in the cluster state? The code I looked at seems to indicate that. Cluster config from cache file (cache/auth/cluster_configuration):
I set up the cluster originally with 2.3.x so it uses the boltdb backend by default. I was able to find a way to read that db. This does show the correct values.
I am not sure why the cache file is showing a different value, or which value the system uses to decide when to terminate an idle session. I've never seen any of the idle session entries in the debug log, so I would assume the value being used on a given connection is 0.
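The suspicion above is that the effective timeout is 0, which conventionally means "disabled". A small sketch of that semantics (an assumed convention for illustration, not code from the Teleport source):

```python
from datetime import datetime, timedelta

def should_disconnect(last_activity: datetime, idle_timeout_seconds: int,
                      now: datetime) -> bool:
    """Return True if an idle connection should be terminated.

    Follows the common convention (assumed here) that a timeout of 0
    means the idle check is disabled entirely.
    """
    if idle_timeout_seconds == 0:
        return False  # idle disconnection disabled
    return now - last_activity > timedelta(seconds=idle_timeout_seconds)

start = datetime(2018, 9, 1, 12, 0, 0)
# With the cached value of 0, even hours of idleness never disconnect:
print(should_disconnect(start, 0, start + timedelta(hours=3)))   # False
# With the configured 60-second timeout, the same idle period would:
print(should_disconnect(start, 60, start + timedelta(hours=3)))  # True
```

This matches the observed symptom: if the cached `cluster_configuration` value of 0 wins over the file config, no idle-check log entries would ever appear.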
The state is the same for both node and proxy. I have tried a low timeout, 60 seconds in my case, and it did not disconnect the client. I recently changed the s3 storage location and that updated in the cache, but the idle timeout and client expiration settings did not. |
This PR fixes #2166 and adds suite tests.
Hi @datasage, is your issue solved? I'm having issues with client_idle_timeout too, but the behavior is a little different (I'm using tsh). Since it's an issue with idle_timeout, might they be related?
Regards,
My issues have been solved, but I primarily use the Web UI. This sounds like an idle connection issue to me: a router or firewall is dropping idle connections after 5 minutes.
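A common workaround when a middlebox drops idle TCP connections is to enable keepalives so the client generates periodic traffic. A hedged sketch (the `enable_keepalive` helper is hypothetical; the per-probe tuning knobs are Linux-specific, while `SO_KEEPALIVE` itself is portable):

```python
import socket

def enable_keepalive(sock: socket.socket, idle: int = 60, interval: int = 15) -> None:
    """Turn on TCP keepalives so a NAT/firewall sees periodic traffic."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):   # Linux: idle seconds before first probe
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):  # Linux: seconds between probes
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_keepalive(s)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))  # nonzero once enabled
s.close()
```

Whether this helps for tsh specifically depends on whether the client exposes keepalive settings; the sketch only illustrates the underlying socket option.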
I just found that it might be related to the tsh version: I tried v2.5.6 and it worked properly, but tsh v3.0.1 didn't. Thank you anyway!
What happened:
I upgraded my cluster to 2.7.3 (from 2.4.7, using the expected upgrade path) so that I could enable idle timeout. In my testing, no matter what value I set for client_idle_timeout, it was always disabled.
I am currently using the community version of teleport. Configuration is currently set up as one instance serving as auth and proxy server. Storage is set up to use file directory.
From what I could tell, the teleport auth service does not use config values after it is first initialized. I dug through the cache files, and cluster_configuration always had client_idle_timeout set to 0, regardless of what the config file contains.
What you expected to happen:
Client should disconnect when idle timeout period is reached.
How to reproduce it (as minimally and precisely as possible):
Environment:
- Teleport version (output of `teleport version`): 2.7.3
- Tsh version (output of `tsh version`): 2.7.3
Relevant Debug Logs If Applicable:
I ran with debug and never saw the debug output from the client idle checks. It would appear that the config is getting read as 0.