
Rate limits and query limits for the GraphQL API

The GitHub GraphQL API has limitations in place to protect against excessive or abusive calls to GitHub's servers.

Primary rate limit

The GraphQL API assigns points to each query and limits the points that you can use within a specific amount of time. This limit helps prevent abuse and denial-of-service attacks, and ensures that the API remains available for all users.

The REST API also has a separate primary rate limit. For more information, see Rate limits for the REST API.

In general, you can calculate your primary rate limit for the GraphQL API based on your method of authentication:

  • For users: 5,000 points per hour per user. This includes requests made with a personal access token as well as requests made by a GitHub App or OAuth app on behalf of a user that authorized the app. Requests made on a user's behalf by a GitHub App that is owned by a GitHub Enterprise Cloud organization have a higher rate limit of 10,000 points per hour. Similarly, requests made on your behalf by an OAuth app that is owned or approved by a GitHub Enterprise Cloud organization have a higher rate limit of 10,000 points per hour if you are a member of the GitHub Enterprise Cloud organization.
  • For GitHub App installations not on a GitHub Enterprise Cloud organization: 5,000 points per hour per installation. Installations that have more than 20 repositories receive another 50 points per hour for each repository. Installations on an organization that has more than 20 users receive another 50 points per hour for each user. The rate limit cannot increase beyond 12,500 points per hour. The rate limit for user access tokens (as opposed to installation access tokens) is dictated by the primary rate limit for users.
  • For GitHub App installations on a GitHub Enterprise Cloud organization: 10,000 points per hour per installation. The rate limit for user access tokens (as opposed to installation access tokens) is dictated by the primary rate limit for users.
  • For OAuth apps: 5,000 points per hour, or 10,000 points per hour if the app is owned by a GitHub Enterprise Cloud organization. This only applies when the app uses its client ID and client secret to request public data. The rate limit for OAuth access tokens generated by an OAuth app is dictated by the primary rate limit for users.
  • For GITHUB_TOKEN in GitHub Actions workflows: 1,000 points per hour per repository. For requests to resources that belong to an enterprise account on GitHub.com, the limit is 15,000 points per hour per repository.

You can check the point value of a query or calculate the expected point value as described in the following sections. The formula for calculating points and the rate limit are subject to change.

Checking the status of your primary rate limit

You can use the headers that are sent with each response to determine the current status of your primary rate limit.

  • x-ratelimit-limit: The maximum number of points that you can use per hour.
  • x-ratelimit-remaining: The number of points remaining in the current rate limit window.
  • x-ratelimit-used: The number of points you have used in the current rate limit window.
  • x-ratelimit-reset: The time at which the current rate limit window resets, in UTC epoch seconds.
  • x-ratelimit-resource: The rate limit resource that the request counted against. For GraphQL requests, this is always graphql.
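As a minimal sketch of how a client might interpret these headers, the function below reads the remaining point budget and computes how long to wait for the window to reset. The header values in the example are hypothetical.

```python
import time

def seconds_until_reset(headers, now=None):
    """Return (points remaining, seconds until the rate limit window resets)
    from x-ratelimit-* response headers."""
    now = time.time() if now is None else now
    remaining = int(headers["x-ratelimit-remaining"])
    reset_at = int(headers["x-ratelimit-reset"])  # UTC epoch seconds
    return remaining, max(0, reset_at - now)

# Hypothetical headers from an exhausted rate limit window:
remaining, wait = seconds_until_reset(
    {"x-ratelimit-remaining": "0", "x-ratelimit-reset": "1700000120"},
    now=1700000000,
)
# remaining is 0, so the client should wait `wait` seconds before retrying.
```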

You can also query the rateLimit object to check your rate limit. When possible, you should use the rate limit response headers instead of querying the API to check your rate limit.

query {
  viewer {
    login
  }
  rateLimit {
    limit
    remaining
    used
    resetAt
  }
}
  • limit: The maximum number of points that you can use per hour.
  • remaining: The number of points remaining in the current rate limit window.
  • used: The number of points you have used in the current rate limit window.
  • resetAt: The time at which the current rate limit window resets, in UTC epoch seconds.

Returning the point value of a query

You can return the point value of a query by querying the cost field on the rateLimit object:

query {
  viewer {
    login
  }
  rateLimit {
    cost
  }
}

Predicting the point value of a query

You can also roughly calculate the point value of a query before you make the query.

  1. Add up the number of requests needed to fulfill each unique connection in the call. Assume every request will reach the first or last argument limits.
  2. Divide the number by 100 and round the result to the nearest whole number to get the final aggregate point value. This step normalizes large numbers.

Note

The minimum point value of a call to the GraphQL API is 1.

Here's an example query and score calculation:

query {
  viewer {
    login
    repositories(first: 100) {
      edges {
        node {
          id

          issues(first: 50) {
            edges {
              node {
                id

                labels(first: 60) {
                  edges {
                    node {
                      id
                      name
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

This query requires 5,101 requests to fulfill:

  • Although we're returning 100 repositories, the API has to connect to the viewer's account once to get the list of repositories. So, requests for repositories = 1
  • Although we're returning 50 issues, the API has to connect to each of the 100 repositories to get the list of issues. So, requests for issues = 100
  • Although we're returning 60 labels, the API has to connect to each of the 5,000 potential total issues to get the list of labels. So, requests for labels = 5,000
  • Total = 5,101

Dividing by 100 and rounding gives us the final score of the query: 51
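The two-step calculation above can be sketched as a small helper. The request counts passed in are the ones derived for the example query; the function name is illustrative, not part of any API.

```python
def graphql_point_value(request_counts):
    """Estimate a query's point value: sum the requests needed per
    connection, divide by 100, and round (minimum point value is 1)."""
    total_requests = sum(request_counts)
    return max(1, round(total_requests / 100))

# The example query: 1 request for repositories, 100 for issues,
# and 5,000 for labels, i.e. 5,101 requests in total.
score = graphql_point_value([1, 100, 5000])  # -> 51
```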

Secondary rate limits

In addition to primary rate limits, GitHub enforces secondary rate limits to prevent abuse and keep the API available for all users.

You may encounter a secondary rate limit if you:

  • Make too many concurrent requests. No more than 100 concurrent requests are allowed. This limit is shared across the REST API and the GraphQL API.
  • Make too many requests to a single endpoint per minute. No more than 900 points per minute are allowed for REST API endpoints, and no more than 2,000 points per minute are allowed for the GraphQL API endpoint. For more information about points, see Calculating points for the secondary rate limit.
  • Make too many requests per minute. No more than 90 seconds of CPU time per 60 seconds of real time is allowed. No more than 60 seconds of this CPU time may be for the GraphQL API. You can roughly estimate the CPU time by measuring the total response time for your API requests.
  • Make too many requests that consume excessive compute resources in a short period of time.
  • Create too much content on GitHub in a short amount of time. In general, no more than 80 content-generating requests per minute and no more than 500 content-generating requests per hour are allowed. Some endpoints have lower content creation limits. Content creation limits include actions taken on the GitHub web interface as well as via the REST API and the GraphQL API.

These secondary rate limits are subject to change without notice. You may also encounter a secondary rate limit for undisclosed reasons.

Calculating points for the secondary rate limit

Some secondary rate limits are determined by the point values of requests. For GraphQL requests, these point values are separate from the point value calculations used for the primary rate limit.

  • GraphQL requests without mutations: 1 point
  • GraphQL requests with mutations: 5 points
  • Most REST API GET, HEAD, and OPTIONS requests: 1 point
  • Most REST API POST, PATCH, PUT, or DELETE requests: 5 points

Some REST API endpoints have a different point cost that is not publicly disclosed.
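The published point values can be expressed as a small lookup, sketched below. The function name and parameters are illustrative only, and the undisclosed per-endpoint costs are not modeled.

```python
def secondary_points(api, method="GET", has_mutation=False):
    """Point values counted against the secondary rate limit.
    Some REST API endpoints have different, unpublished costs."""
    if api == "graphql":
        return 5 if has_mutation else 1
    # REST API: reads cost 1 point, most writes cost 5.
    if method in ("GET", "HEAD", "OPTIONS"):
        return 1
    return 5  # most POST, PATCH, PUT, or DELETE requests
```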

Exceeding the rate limit

If you exceed your primary rate limit, the response status will still be 200, but you will receive an error message, and the value of the x-ratelimit-remaining header will be 0. You should not retry your request until after the time specified by the x-ratelimit-reset header.

If you exceed a secondary rate limit, the response status will be 200 or 403, and you will receive an error message that indicates that you hit a secondary rate limit. If the retry-after response header is present, you should not retry your request until after that many seconds have elapsed. If the x-ratelimit-remaining header is 0, you should not retry your request until after the time, in UTC epoch seconds, specified by the x-ratelimit-reset header. Otherwise, wait for at least one minute before retrying. If your request continues to fail due to a secondary rate limit, wait for an exponentially increasing amount of time between retries, and throw an error after a specific number of retries.
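The retry guidance above can be sketched as a single decision function. This is a simplified illustration: it only computes the delay, and the header values in the tests are hypothetical.

```python
import time

def retry_delay(headers, attempt, now=None):
    """Seconds to wait before retrying a secondary-rate-limited request.
    `attempt` is the number of failed retries so far."""
    now = time.time() if now is None else now
    # Prefer an explicit retry-after header when present.
    if "retry-after" in headers:
        return int(headers["retry-after"])
    # If the point budget is exhausted, wait until the window resets.
    if headers.get("x-ratelimit-remaining") == "0":
        return max(0, int(headers["x-ratelimit-reset"]) - now)
    # Otherwise wait at least one minute, backing off exponentially.
    return 60 * (2 ** attempt)
```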

Continuing to make requests while you are rate limited may result in the banning of your integration.

Staying under the rate limit

To avoid exceeding a rate limit, you should pause at least 1 second between mutative requests and avoid concurrent requests.

You should also subscribe to webhook events instead of polling the API for data. For more information, see the webhooks documentation.

You can also stream the audit log in order to view API requests. This can help you troubleshoot integrations that are exceeding the rate limit. For more information, see Streaming the audit log for your enterprise.

Node limit

To pass schema validation, all GraphQL API calls must meet these standards:

  • Clients must supply a first or last argument on any connection.
  • Values of first and last must be within 1-100.
  • Individual calls cannot request more than 500,000 total nodes.

Calculating nodes in a call

These two examples show how to calculate the total nodes in a call.

  1. Simple query:

    query {
      viewer {
        repositories(first: 50) {
          edges {
            repository:node {
              name
    
              issues(first: 10) {
                totalCount
                edges {
                  node {
                    title
                    bodyHTML
                  }
                }
              }
            }
          }
        }
      }
    }

    Calculation:

    50          =    50 repositories
     +
    50 x 10     =   500 repository issues

                =   550 total nodes
  2. Complex query:

    query {
      viewer {
        repositories(first: 50) {
          edges {
            repository:node {
              name
    
              pullRequests(first: 20) {
                edges {
                  pullRequest:node {
                    title
    
                    comments(first: 10) {
                      edges {
                        comment:node {
                          bodyHTML
                        }
                      }
                    }
                  }
                }
              }
    
              issues(first: 20) {
                totalCount
                edges {
                  issue:node {
                    title
                    bodyHTML
    
                    comments(first: 10) {
                      edges {
                        comment:node {
                          bodyHTML
                        }
                      }
                    }
                  }
                }
              }
            }
          }
        }
    
        followers(first: 10) {
          edges {
            follower:node {
              login
            }
          }
        }
      }
    }

    Calculation:

    50             =     50 repositories
     +
    50 x 20        =  1,000 pullRequests
     +
    50 x 20 x 10   = 10,000 pullRequest comments
     +
    50 x 20        =  1,000 issues
     +
    50 x 20 x 10   = 10,000 issue comments
     +
    10             =     10 followers

                   = 22,060 total nodes
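The arithmetic in both calculations follows the same rule: multiply each connection's page size down the nesting, then sum. As plain Python:

```python
# Simple query: 50 repositories, each with up to 10 issues.
simple_total = 50 + 50 * 10

# Complex query: each connection's size multiplied down the nesting.
complex_total = (
    50                  # repositories
    + 50 * 20           # pullRequests
    + 50 * 20 * 10      # pullRequest comments
    + 50 * 20           # issues
    + 50 * 20 * 10      # issue comments
    + 10                # followers
)
# simple_total is 550; complex_total is 22,060 — well under the
# 500,000-node limit, but large page sizes compound quickly.
```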

Timeouts

If GitHub takes more than 10 seconds to process an API request, GitHub will terminate the request and you will receive a timeout response and a message reporting that "We couldn't respond to your request in time".

GitHub reserves the right to change the timeout window to protect the speed and reliability of the API.

You can check the status of the GraphQL API at githubstatus.com to determine whether the timeout is due to a problem with the API. You can also try to simplify your request or try your request later. For example, if you are requesting a large number of objects in a single request, you can try requesting fewer objects split over multiple queries.

If a timeout occurs for any of your API requests, additional points will be deducted from your primary rate limit for the next hour to protect the speed and reliability of the API.

Other resource limits

To protect the speed and reliability of the API, GitHub also enforces other resource limitations. If your GraphQL query consumes too many resources, GitHub will terminate the request and return partial results along with an error indicating that resource limits were exceeded.

Examples of queries that may exceed resource limits:

  • Requesting thousands of objects or deeply nested relationships in a single query.
  • Using large first or last arguments in multiple connections simultaneously.
  • Fetching extensive details for each object, such as all comments, reactions, and related issues for every repository.

Query optimization strategies

  • Limit the number of objects: Use smaller values for first or last arguments and paginate through results.
  • Reduce query depth: Avoid requesting deeply nested objects unless necessary.
  • Filter results: Use arguments to filter data and return only what you need.
  • Split large queries: Break up complex queries into multiple simpler queries.
  • Request only required fields: Select only the fields you need, rather than requesting all available fields.

By following these strategies, you can reduce the likelihood of hitting resource limits and improve the performance and reliability of your API requests.
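The "limit the number of objects" and pagination advice amounts to a cursor loop: request a small page with first, then pass the last cursor back as after until no pages remain. In the sketch below, fetch_page is a stand-in for a real GraphQL request; it pages through an in-memory list purely to show the loop shape, and its cursor format is an assumption of this example.

```python
def fetch_page(items, first, after=None):
    """Stand-in for a GraphQL call taking `first` and `after` arguments.
    Returns (page, end_cursor, has_next_page)."""
    start = 0 if after is None else int(after) + 1
    page = items[start:start + first]
    end_cursor = str(start + len(page) - 1) if page else after
    has_next = start + first < len(items)
    return page, end_cursor, has_next

def paginate(items, first):
    """Collect all results page by page instead of in one large query."""
    results, cursor, has_next = [], None, True
    while has_next:
        page, cursor, has_next = fetch_page(items, first, cursor)
        results.extend(page)
    return results
```

Splitting one request for 250 objects into three pages of 100 keeps each query's node count, and therefore its cost, small.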