Rate limiting of requests #89
-
Hi @jpeacock29 Sorry for the late reply. At the moment we do not have any mechanism for rate limiting. The easiest thing you can do is to split your data into mini-batches and call predict several times. This solution is far from perfect, but it is very easy to implement.
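The mini-batch suggestion above can be sketched roughly like this. Note this is just an illustration, not part of the library: `predict_in_batches`, `batch_size`, and `delay` are made-up names, `_EchoClassifier` is a stand-in for any fitted classifier with a scikit-learn-style `predict`, and the right batch size and delay depend entirely on your account's rate limits.

```python
import time

class _EchoClassifier:
    """Hypothetical stand-in for a fitted classifier with a
    scikit-learn-style predict(); replace with your real model."""
    def predict(self, X):
        return ["label" for _ in X]

def predict_in_batches(clf, X, batch_size=20, delay=1.0):
    """Call clf.predict on small slices of X, pausing between calls
    to stay under the provider's requests-per-minute limit."""
    predictions = []
    for start in range(0, len(X), batch_size):
        batch = X[start:start + batch_size]
        predictions.extend(clf.predict(batch))
        # Crude rate limiting: sleep between batches (skip after the last one).
        if start + batch_size < len(X):
            time.sleep(delay)
    return predictions

clf = _EchoClassifier()
texts = ["some text to classify"] * 30
labels = predict_in_batches(clf, texts, batch_size=5, delay=0.0)
```

Tuning `batch_size` down and `delay` up trades throughput for fewer rate-limit errors; wrapping the `clf.predict(batch)` call in a retry with backoff would make it more robust.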
-
Hi @OKUA1, thank you so much for your reply. That seems like a good solution for fitted models, but I was actually hitting this error during the fitting process. In particular, I'm fitting a text classifier on about 1000 rows of training data. Is there any way to batch that? Maybe by using an online algorithm instead?
-
I'm encountering a lot of RateLimitErrors. Is there any built-in rate limiting available? (Of course, this is ultimately an issue with my low-tier OpenAI account, but I'm curious if it's possible to work around that.) Or is there another workaround for limiting the rate of requests on large classification tasks? Thank you so much for your help!