"You have exceeded a secondary rate limit." #22
Comments
I'm actually still getting this issue with operations-per-run set to 10
I am also running into this. We have an old but still active repo with 1000+ stale branches we're trying to remove.

Workflow:

```yaml
name: Stale Branch Cleanup
on:
  schedule:
    - cron: "0 0 * * 1" # At 00:00 on Monday.
  workflow_dispatch: # set this so the workflow can be run on demand

permissions:
  contents: write # Required to delete branches
  pull-requests: read # Ensures open PRs are checked

jobs:
  remove-stale-branches:
    name: Remove Stale Branches
    runs-on: ubuntu-latest
    steps:
      - uses: fpicalausa/remove-stale-branches@v2.1.0
        with:
          dry-run: false
          ignore-unknown-authors: true
          days-before-branch-stale: 365
          days-before-branch-delete: 0
          default-recipient: tucker-pw
          operations-per-run: 1500
          ignore-branches-with-open-prs: true
          exempt-authors-regex: "^(dependabot|renovate)"
```

Error:
@tucker-pw With 1500 operations per run, GitHub is likely complaining that we're trying to mark too many branches as stale (i.e., writing a comment on each) within a short period of time. @austinpray-mixpanel If you are still hitting this issue even with 10 operations per run, could it be that the same action with the same access token is reused across different repos at around the same time? I ran into a similar issue, and the solution was to stagger the runs throughout the day.
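The staggering suggested above amounts to giving each repo's workflow a different cron offset so runs sharing a token never overlap. A sketch (the times are illustrative, not a recommendation from the action's docs):

```yaml
# Repo A's workflow: run at 01:00 UTC on Monday
on:
  schedule:
    - cron: "0 1 * * 1"

# Repo B's workflow would instead use, e.g., 13:00 UTC on Monday:
# on:
#   schedule:
#     - cron: "0 13 * * 1"
```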
@fpicalausa I'm using the default GitHub Actions token. The problem is definitely this:
The GraphQL queries fetching all the branches are hitting that 2,000-points-per-minute limit, since we have 15k branches.
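As an aside, the GraphQL API can report how many points a query costs and how many remain, via the standard `rateLimit` object, which is one way to confirm this diagnosis:

```graphql
query {
  rateLimit {
    limit     # points allowed per window
    cost      # cost of this particular query
    remaining # points left in the current window
    resetAt   # when the window resets
  }
}
```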
Hey friends!
I'm using this action to clean up 15,000+ stale branches.
I set operations-per-run to a somewhat conservative 100 so we can make slow but steady progress on that giant backlog, but I'm still getting hit with secondary rate limit errors:
Full error output
https://docs.github.com/en/rest/using-the-rest-api/rate-limits-for-the-rest-api?apiVersion=2022-11-28#about-secondary-rate-limits
I think Octokit has retry and throttling middleware that complies with these best practices and might make this issue less prevalent:
https://docs.github.com/en/rest/using-the-rest-api/best-practices-for-using-the-rest-api?apiVersion=2022-11-28
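To illustrate what such a retry layer does, here is a minimal sketch of the backoff behavior those best practices describe: on a 403/429 response, honor the `Retry-After` header if present, otherwise back off exponentially with jitter. This is a hypothetical helper for illustration, not code from this action or from Octokit:

```python
import random
import time


def with_secondary_rate_limit_retry(request, max_retries=5):
    """Call request() -> (status, headers, body) and retry when GitHub
    signals a secondary rate limit (HTTP 403 or 429)."""
    for attempt in range(max_retries + 1):
        status, headers, body = request()
        if status not in (403, 429):
            return status, headers, body
        if attempt == max_retries:
            return status, headers, body  # give up, surface the error
        # Honor Retry-After when present, otherwise back off exponentially.
        wait = float(headers.get("retry-after", 2 ** attempt))
        wait += random.uniform(0, 0.2)  # jitter to avoid synchronized retries
        time.sleep(wait)
```

The key points are the ones GitHub's docs call out: never retry immediately, always respect `Retry-After`, and spread retries out so concurrent jobs don't hammer the API in lockstep.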