[BUG] 429 too many requests #1583
Comments
I'm sorry there are no steps to reproduce, but I'm facing this issue with no obvious cause, so hopefully someone who knows what is going on can give me some insight.
We experienced the same issue. I ended up increasing the hardware specs for now, which reduced it. I wish I didn't have to, as we're trying to be budget friendly. AWS talks about it at this link: https://aws.amazon.com/premiumsupport/knowledge-center/opensearch-resolve-429-error/
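For context, the knowledge-base article linked above recommends, among other things, retrying throttled requests with exponential backoff on the client side. Below is a minimal sketch of that idea using the `@elastic/elasticsearch` Node.js client mentioned later in this thread; the endpoint, helper name, and retry/backoff values are illustrative, not from the thread.

```typescript
// Sketch: retry search requests that come back with HTTP 429 (Too Many Requests).
// The endpoint, helper name, and retry/backoff values are placeholders.
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'https://your-domain.eu-west-1.es.amazonaws.com' });

// searchWithBackoff is a hypothetical wrapper, not part of the client API.
async function searchWithBackoff(params: any, maxRetries = 5): Promise<any> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await client.search(params);
    } catch (err: any) {
      const status = err?.meta?.statusCode ?? err?.statusCode;
      if (status !== 429 || attempt >= maxRetries) throw err;
      // Exponential backoff with a little jitter before retrying a throttled request.
      const delayMs = Math.min(1000 * 2 ** attempt, 30_000) + Math.random() * 250;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

Retries only smooth over short throttling spikes; the sustained 429s described below needed a capacity or instance-type change.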
Something changed because of OpenSearch; with the same metrics and instances, we never had this issue before.
Looking into this.
@rawpixel-vincent Hi, could you please state which version of OpenSearch you're using - 1.0, 1.1, or 1.2?
Hi @Poojita-Raj, we are still on Elasticsearch; we are stuck with this until opensearch-project/opensearch-js#187 has landed.
Since you're currently using Elasticsearch, this is an issue with the AWS OpenSearch Service offering. Please open a ticket against the AWS OpenSearch team; AWS support is the right place to get the assistance required to resolve this issue. Hope this helps!
Thank you, I have a ticket open with AWS support so they can look into that OpenSearch Service bug.
Closing this out.
@rawpixel-vincent curious what the solution here was? We are running into a similar 429 issue.
We were able to resolve the 429 issue by switching to a non-Graviton AWS instance type. The Graviton instance would have memory spikes over 85%, which triggered 429 responses.
Switching from Graviton to non-Graviton instance types fixes this issue.
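For anyone trying to confirm the same pattern: the memory spikes described above show up as the JVMMemoryPressure metric that the managed service publishes to CloudWatch. Here is a minimal sketch of reading it with the AWS SDK for JavaScript v3; the region, domain name, and account ID are placeholders, and the 85% threshold is simply the level mentioned in the comment above.

```typescript
// Sketch: check recent JVMMemoryPressure for a managed domain (all identifiers are placeholders).
import {
  CloudWatchClient,
  GetMetricStatisticsCommand,
} from '@aws-sdk/client-cloudwatch';

const cloudwatch = new CloudWatchClient({ region: 'eu-west-1' });

async function maxJvmMemoryPressure(domainName: string, accountId: string): Promise<number> {
  const now = new Date();
  const { Datapoints } = await cloudwatch.send(
    new GetMetricStatisticsCommand({
      Namespace: 'AWS/ES', // namespace used by the managed Elasticsearch/OpenSearch service
      MetricName: 'JVMMemoryPressure',
      Dimensions: [
        { Name: 'DomainName', Value: domainName },
        { Name: 'ClientId', Value: accountId },
      ],
      StartTime: new Date(now.getTime() - 60 * 60 * 1000), // last hour
      EndTime: now,
      Period: 300, // 5-minute datapoints
      Statistics: ['Maximum'],
    }),
  );
  // Sustained values above ~85% are what correlated with 429 responses in this thread.
  return Math.max(0, ...(Datapoints ?? []).map((d) => d.Maximum ?? 0));
}
```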
@anthonygerrard @cameron-hurd Do you have tickets open with the Amazon managed service on these Graviton-related issues? If so, would you mind opening them and/or sending me the ticket numbers (dblock[at]amazon[dot]com works), please? There's a team that has looked at similar issues, but I can't tell from the above whether it's the same problem or not.
@dblock we reached out to our AWS support - they mentioned
@cameron-hurd Can you please make it clear which type of GC causes this and which type resolved it?
@amitmun Switching to a non-Graviton instance type resolved it. We have Auto-Tune enabled and use the managed OpenSearch service, so we do not have the ability to set any GC settings.
We've only just raised a support case with Amazon. No resolution yet.
We had a call with AWS support today. The solution offered was for us to raise a support request to increase the JVM utilization threshold from 85% to 95% after we create a cluster using Graviton instance types. We're not going to make use of this because we're operating fine on m5 instance types now and have a fully automated infrastructure-as-code deployment process. I sent a message to our account manager requesting a feature to improve OpenSearch support on newer instance types.
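Since the workaround that held up here was pinning the domain to m5 data nodes through infrastructure as code, here is a minimal sketch of what that can look like with the AWS CDK in TypeScript. The thread does not say which IaC tool was actually used, and the construct IDs, node count, and engine version below are illustrative.

```typescript
// Sketch: pin an Amazon OpenSearch Service domain to non-Graviton (m5) data nodes.
// Illustrative only; the thread does not specify the IaC tool or exact sizes used.
import { Stack, StackProps } from 'aws-cdk-lib';
import { Domain, EngineVersion } from 'aws-cdk-lib/aws-opensearchservice';
import { Construct } from 'constructs';

export class SearchStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    new Domain(this, 'SearchDomain', {
      version: EngineVersion.ELASTICSEARCH_7_10, // "last supported elasticsearch version"
      capacity: {
        dataNodes: 2,
        dataNodeInstanceType: 'm5.large.search', // non-Graviton, per the workaround above
      },
    });
  }
}
```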
We are still sometimes getting the 429 issue on t3 machines.
We had a similar issue and resolved it as described below. By default, OPENSEARCH_JAVA_OPTS comes with 512M; depending on the data load, the JVM might require additional memory to process the data. We edited the StatefulSet to increase OPENSEARCH_JAVA_OPTS to 2g, which solved the issue (see the sketch below).
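A minimal sketch of that StatefulSet change, assuming a self-managed OpenSearch deployment on Kubernetes; the resource and container names are placeholders, and only the 2g heap value comes from the comment above.

```yaml
# Sketch only: StatefulSet and container names are placeholders.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: opensearch-cluster-master
spec:
  template:
    spec:
      containers:
        - name: opensearch
          env:
            # Raise the JVM heap from the 512M default to 2g, as described above.
            - name: OPENSEARCH_JAVA_OPTS
              value: "-Xms2g -Xmx2g"
```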
Is it time to increase this default for 3.0? |
Describe the bug
We're using the AWS hosted OpenSearch service. About 10 days ago we started to get 429 Too Many Requests responses from the Elasticsearch API (from what I can tell, only from the search endpoint). It happened even though we haven't seen any increase in the number of requests, and we have been working since then to reduce the number of requests. The search request queue is steady at 0, with occasional peaks around 10 or 20.
Expected behavior
Why did we start to get these 429s, which look like an API rate limit, when the request count didn't increase from our usual workload and all the critical metrics are green (as before)?
Plugins
none
Host/Environment (please complete the following information):
ECS / Fargate / Elasticsearch hosted by AWS / Graviton-powered containers
Last supported Elasticsearch version, requesting with the last compatible Elasticsearch Node.js client
Additional context
<img width="746" alt="Screen Shot 2564-11-19 at 01 43 33" src="https://user-


images.githubusercontent.com/22284209/142477316-e24a8a44-1e6f-4a08-95f0-65abc4c4a3e1.png">