Estimating OpenAI usage for workspace operations? #144557
Replies: 1 comment
- Never mind, I just realized that all GPT usage is apparently included.
Topic Area: Question
Body
I've been waiting for these features: getting suggestions for the whole code base based on a question or an issue inside of VS Code.
But before I do anything, I'd like to know the expected costs beyond the Copilot subscription.
The current pricing for GPT-4o is $25.00 / 1M training tokens.
Assuming I'm working on a project with 100K lines of code at an average of 10 tokens per line, that's 1M tokens, or $25 to train the model on the whole workspace.
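As a sanity check, here's that arithmetic as a minimal sketch; the line count, tokens-per-line average, and price are the assumptions above, not measured values:

```python
# Back-of-the-envelope estimate for training on a whole workspace.
# All three inputs are assumptions from the question, not measurements;
# check OpenAI's pricing page for the current rate.
lines_of_code = 100_000               # assumed project size
tokens_per_line = 10                  # rough average; real code varies a lot
price_per_1m_training_tokens = 25.00  # USD, the GPT-4o rate cited above

total_tokens = lines_of_code * tokens_per_line
cost = total_tokens / 1_000_000 * price_per_1m_training_tokens
print(f"{total_tokens:,} tokens -> ${cost:.2f} per full training pass")
# 1,000,000 tokens -> $25.00 per full training pass
```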
But what happens when a file changes?
My assumption would be that it retrains from the latest checkpoint. But does that work? Can you tell the model to "forget" the previous version, or at least to always refer to the latest by default?
And if a team uses Copilot, is that model shared?
If that works, the ongoing cost seems like it could be pretty reasonable after the initial training.
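To put a number on "pretty reasonable": under the question's own premise that only changed files would need retraining from a checkpoint (speculative, not documented Copilot behavior), the per-edit cost would be tiny:

```python
# Illustrative only: assumes incremental retraining where you pay only
# for the tokens in the changed file, which is the question's premise,
# not confirmed Copilot behavior.
changed_lines = 200                   # a typical edit touching one file
tokens_per_line = 10                  # same rough average as above
price_per_1m_training_tokens = 25.00  # USD

incremental_cost = (changed_lines * tokens_per_line
                    / 1_000_000 * price_per_1m_training_tokens)
print(f"${incremental_cost:.4f} to retrain on just the changed file")
# $0.0500 to retrain on just the changed file
```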
And a minor follow-up question: if that works, I'm wondering whether it ever makes sense to drag only a few files into the context of the "edit" feature. I guess it could make sense if you never use workspace features and Copilot doesn't train the model until the first workspace-wide prompt. But once that's done, why bother with granularity? Another reason might be to limit visibility, though ideally GPT can figure out what you're talking about without having to restrict visibility.