headObject returning 403 (sometimes) #1492
Comments
I've searched all over the issues here and found that it could be something to do with the time of the request. It seems the Kubernetes server's instances have their clocks out of sync. I don't know what else to look at. I read about the request time in issue 221.
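If the clock really is the problem, the JavaScript SDK can compensate for skew on retries. A minimal sketch, assuming the AWS SDK for JavaScript v2 (the SDK in use isn't named in this thread), is:

```ts
import * as AWS from 'aws-sdk';

// With correctClockSkew enabled, the SDK measures the offset between the
// local clock and the service's Date response header after a skew-related
// failure and applies that offset when retrying the request.
AWS.config.update({ correctClockSkew: true, maxRetries: 3 });

const s3 = new AWS.S3();
```

This only helps if the failures actually are clock skew; if the problem is credentials, the 403 will persist regardless of this setting.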
Have you tried running an NTP sync on the production server?
Kubernetes seems to run its own internal NTP, but that didn't work, so we ran an NTP sync from the main server, which didn't work either, and at that point we didn't know what else to try. Our suspicion is still that the problem lies there: the request time is what triggers the 403.
I have to add something. Yesterday we ran just one pod and everything worked fine, but when we scaled it to two pods we got the 403 again. We don't know why it works with just one pod.
If the same code base is deployed across three environments and you're encountering 403 errors in one of them, the errors are almost certainly related either to credentials or to server time. Clock skew issues come up more frequently with containerized environments, but if running an NTP sync didn't cause the errors to go away, then clock skew probably isn't the root cause. How are you loading credentials? Are they coming from a synchronous source (e.g., environment variables or an ini file) in one environment and an asynchronous source (e.g., STS or container roles) in another?
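To make that distinction concrete, here is roughly what the two cases look like (a sketch, assuming the AWS SDK for JavaScript v2; these are the usual provider classes, not something confirmed from the reporter's setup):

```ts
import * as AWS from 'aws-sdk';

// Synchronous source: reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from
// the environment once, at construction time.
const envCreds = new AWS.EnvironmentCredentials('AWS');

// Asynchronous source: fetched from the container credential endpoint
// (ECS task roles and similar) and refreshed when they expire.
const containerCreds = new AWS.ECSCredentials();

// Pinning the client to one explicit source keeps the provider chain from
// picking something different in each environment.
const s3 = new AWS.S3({ credentials: envCreds });

// For an async source, resolving it once at startup turns a broken
// credential into an immediate, loud failure instead of intermittent 403s.
containerCreds.get((err) => {
  if (err) console.error('Credential resolution failed:', err.message);
});
```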
We load credentials from environment variables provided by our .env file, which is the same for all Kubernetes pods. But, as I said, something strange is happening: if we run the server with just one ReplicaSet instance (pod), everything works fine and we never get the 403, but as soon as we scale to two or more instances we get the 403 again, even though each instance takes its credentials from the same static file, which is present on every pod.
...I'm not sure I know enough about Kubernetes to be of much help on this one. If the .env file is shared between pods via NFS or a similar shared mount, you might be running into a race condition where the file is not fully available when the application is booting (and the environment is therefore populated with partial credentials like a full access key ID and a truncated secret key). Are you able to verify that the .env file is present in its entirety on each pod?
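One cheap way to rule that out is to validate the credentials at boot, before the application starts serving (a sketch; it assumes the standard AWS variable names and the `dotenv` package, which may differ from the reporter's setup):

```ts
import * as dotenv from 'dotenv';

dotenv.config();

// Fail fast if the .env file was missing or only partially readable when the
// pod started, instead of letting a truncated secret key surface later as an
// intermittent 403 from S3.
const requiredVars = ['AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY'];
for (const name of requiredVars) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing environment variable: ${name}`);
  }
  // Standard access key IDs are 20 characters and secret keys are 40; a much
  // shorter value suggests the file was truncated when it was read.
  console.log(`${name}: length ${value.length}`);
}
```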
I contacted our hosting provider, they fixed something (I don't know what) on the main server and said: "the time is now in sync, run your Kubernetes cluster again". We ran it with all four instances... IT WORKED! Thanks a lot for the replies and the help. If anything comes up again I will reopen the issue.
Happy to help!
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs and link to relevant comments in this thread.
I have 3 environments:

- Local
- Develop
- Production

In Local everything works fine (which is kind of obvious, no? haha). In Develop everything works fine too, although on rare occasions I get the Forbidden error. BUT in the Production environment the headObject function is throwing 403 Forbidden at least 2 out of every 4 or 5 calls, and I don't know why. It doesn't make sense: I try to upload some files, some are uploaded and others aren't, and the ones that don't upload are the ones that threw the 403. I don't know where else to look and I have no idea why this only happens "sometimes".

Thanks in advance
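For reference, this is roughly the kind of call and error logging that makes an intermittent 403 easier to pin down (a sketch, assuming the AWS SDK for JavaScript v2; the bucket and key below are placeholders, not taken from the reporter's code):

```ts
import * as AWS from 'aws-sdk';

const s3 = new AWS.S3();

// Logging the error code, HTTP status, and timestamp for each failed call
// makes it easier to see whether the 403s correlate with a particular pod,
// a particular time window, or a particular set of credentials.
s3.headObject({ Bucket: 'my-bucket', Key: 'uploads/example.png' })
  .promise()
  .then((head) => console.log('object exists, size:', head.ContentLength))
  .catch((err: AWS.AWSError) => {
    console.error('headObject failed:', err.code, err.statusCode, new Date().toISOString());
  });
```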