I set up crewai with Anthropic's Claude 3.5 Sonnet LLM and followed along with your 2-hour YT course. When I ran the ai-news project, I got this error partway through:
raise AnthropicError( litellm.llms.anthropic.common_utils.AnthropicError: {"type":"error","error":{"type":"rate_limit_error","message":"This request would exceed the rate limit for your organization (c4b9cc94-844c-487b-ba7d-07cd087edc62) of 40,000 input tokens per minute. For details, refer to: https://docs.anthropic.com/en/api/rate-limits. You can see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}}
When I probed further, I saw that the project had scraped this website: https://blog.dataiku.com/a-dizzying-year-for-language-models-2024-in-review#:~:text=With%20powerful%20models%20and%20features,in%20light%20of%20these%20evolutions.
and the entire page content was passed as input to the agent performing the website-scraping task, which sent the input token count through the roof. What's a more efficient and conservative way to do this?
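For reference, here is a rough sketch of the kind of workaround I have in mind: a custom scraping tool that truncates the page text before it ever reaches the LLM. This is only a sketch under assumptions, not how the course does it; it assumes a recent crewai version where custom tools subclass crewai.tools.BaseTool (older releases import BaseTool from crewai_tools instead), and the tool name, field names, and the 8,000-character cap are illustrative choices rather than anything from the project.

```python
# Hedged sketch: cap how much scraped text is handed to the agent.
# Assumes crewai.tools.BaseTool is available; adjust the import for older versions.
import requests
from bs4 import BeautifulSoup
from crewai.tools import BaseTool


class TruncatedScrapeTool(BaseTool):
    name: str = "Truncated website scraper"
    description: str = (
        "Fetches a web page and returns only the first few thousand "
        "characters of its visible text."
    )
    max_chars: int = 8000  # illustrative cap to stay well under 40,000 input tokens/minute

    def _run(self, website_url: str) -> str:
        html = requests.get(website_url, timeout=30).text
        # Strip tags and collapse whitespace so only readable text goes to the LLM.
        text = " ".join(BeautifulSoup(html, "html.parser").get_text(separator=" ").split())
        return text[: self.max_chars]
```

Separately, I understand that Crew and Agent both accept a max_rpm argument that throttles how often requests are sent, but that limits requests per minute rather than input tokens per minute, so it would only help indirectly here.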