The project is built on gencay/vscode-chatgpt, the most loved ChatGPT extension for VS Code, which has been downloaded by ~500,000 developers.
Unfortunately, the original author decided to stop maintaining the project, and the Genie AI extension now recommended there is not open source. So I decided to fork it and continue the development.
- ➕ Use GPT-4, GPT-3.5, Claude 3, or OpenAI-compatible local models with your API key from OpenAI, Azure OpenAI Service, or Anthropic.
- 📃 Get streaming answers to your prompts in the sidebar conversation window.
- 🔥 Stop the responses to save your tokens.
- 📝 Create files or fix your code with one click or with keyboard shortcuts.
- ➡️ Export all your conversation history at once in Markdown format.
- Automatic partial code response detection: when a response is cut off, it is continued and combined automatically.
- Ad-hoc prompt prefixes to customize what you are asking ChatGPT.
- Edit and resend a previous prompt.
- Copy, insert, or create a new file from the code ChatGPT suggests, right in your editor.
| Configuration | Description |
|---|---|
| chatgpt.gpt3.apiKey | Required. Obtain an API key from OpenAI, Azure OpenAI, or Anthropic. |
| chatgpt.gpt3.apiBaseUrl | Optional, defaults to "https://api.openai.com/v1". For Azure OpenAI Service, set it to "https://[YOUR-ENDPOINT-NAME].openai.azure.com/openai/deployments/[YOUR-DEPLOYMENT-NAME]". |
| chatgpt.gpt3.model | Optional, defaults to "gpt-3.5-turbo". |
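These settings go in your VS Code settings.json (user or workspace). As a minimal sketch, combining the three options from the table above with placeholder values:

```jsonc
{
  // Required: API key from OpenAI, Azure OpenAI, or Anthropic
  "chatgpt.gpt3.apiKey": "<api-key>",
  // Optional: defaults to the OpenAI endpoint
  "chatgpt.gpt3.apiBaseUrl": "https://api.openai.com/v1",
  // Optional: defaults to "gpt-3.5-turbo"
  "chatgpt.gpt3.model": "gpt-3.5-turbo"
}
```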
Refer to the following sections for details on how to configure each service.
"chatgpt.gpt3.apiKey": "<api-key>",
"chatgpt.gpt3.apiBaseUrl": "https://api.openai.com/v1", // Optional
"chatgpt.gpt3.apiKey": "<api-key>",
"chatgpt.gpt3.model": "gpt-3.5-turbo",
"chatgpt.gpt3.apiBaseUrl": "https://<endpoint-name>.openai.azure.com/openai/deployments/<deployment-name>", // Required
"chatgpt.gpt3.model": "claude-3-sonnet-20240229",
"chatgpt.gpt3.apiKey": "<api-key>",
"chatgpt.gpt3.apiBaseUrl": "https://api.anthropic.com/v1", // Optional
"chatgpt.gpt3.apiKey": "<api-key>",
"chatgpt.gpt3.apiBaseUrl": "<base-url>",
To use a custom model name for local or self-hosted LLMs compatible with the OpenAI API, set the chatgpt.gpt3.model configuration to "custom" and specify your custom model name in the chatgpt.gpt3.customModel configuration.

Example configuration for a custom model name with Groq:

    "chatgpt.gpt3.model": "custom",
    "chatgpt.gpt3.apiKey": "<your-custom-key>",
    "chatgpt.gpt3.customModel": "mixtral-8x7b-32768",
    "chatgpt.gpt3.apiBaseUrl": "https://api.groq.com/openai/v1",
- Install `vsce` (the Visual Studio Code Extension Manager) if you don't have it on your machine: `npm install --global vsce`
- Run `vsce package`
- Follow the instructions and install manually.
    npm run build
    npm run package
    code --uninstall-extension jeanibarz.chatgpt-copilot
    code --install-extension chatgpt-copilot-*.vsix
This project is released under the ISC License. See LICENSE for details. The copyright notice and the respective permission notices must appear in all copies.