
GitHub announced that it will switch to a usage-based billing model for its GitHub Copilot AI service starting June 1. The move was pitched as a way to “better align pricing with actual usage” and a necessary step to keep Copilot financially sustainable amid growing demand for limited AI computing resources.
GitHub Copilot subscribers currently receive a monthly allotment of “requests” and “premium requests,” which are consumed when they ask Copilot’s AI models for assistance. But these broad categories cover many different AI tasks with a wide range of back-end computing costs, GitHub says.
“Today, a quick chat question and a multi-hour standalone coding session can cost a user the same amount,” the Microsoft-owned company wrote in its announcement. While GitHub says it has “absorbed much of the incremental inference cost behind this usage” to this point, lumping all “premium requests” together is “no longer sustainable.”
Under the new pricing system, GitHub Copilot subscribers will receive a monthly allotment of “AI Credits” that match their monthly subscription payment. Pricing for additional AI usage beyond those credits will be calculated “based on token consumption, including inputs, outputs, and cached tokens, using the API rates listed for each model.”
API rates can vary greatly depending on the sophistication of the model being used; OpenAI’s cutting-edge GPT models currently range in price from $4.50 per million output tokens (GPT-5.4 Mini) to $30 per million output tokens (GPT-5.5), for example. The total number of tokens consumed by an individual AI request can also vary widely depending on how much “thinking” time the model needs to formulate its output.
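To make the arithmetic concrete, here is a minimal sketch of how token-based overage billing could be computed from per-token API rates. Only the output rates come from the figures quoted above; the input and cached-token rates, model names as dictionary keys, and token counts are illustrative assumptions, not published prices.

```python
# Hypothetical per-million-token rates in USD: (input, cached input, output).
# Output rates match the article's quoted figures; the rest are assumed.
RATES_PER_MILLION = {
    "gpt-5.4-mini": (0.90, 0.45, 4.50),
    "gpt-5.5": (6.00, 3.00, 30.00),
}

def request_cost(model, input_tokens, cached_tokens, output_tokens):
    """Estimate the dollar cost of one request at the listed API rates."""
    in_rate, cached_rate, out_rate = RATES_PER_MILLION[model]
    return (input_tokens * in_rate
            + cached_tokens * cached_rate
            + output_tokens * out_rate) / 1_000_000

# A quick chat question versus a long agentic session on the same model:
quick_chat = request_cost("gpt-5.5", 2_000, 0, 500)          # ≈ $0.027
long_session = request_cost("gpt-5.5", 50_000, 200_000, 120_000)  # ≈ $4.50
```

The two example calls illustrate the point GitHub is making: under flat per-request billing both interactions would cost the subscriber the same, while their underlying inference costs differ by more than two orders of magnitude.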
GitHub Copilot subscribers will still be able to use simple AI features like code completion and next edit suggestions without consuming AI credits. Copilot code reviews, however, will come at an additional cost, charged in GitHub Actions minutes.