> Simple and effective AI integration with your favorite Neovim text editor! Ask a question, and let robots expl[AI]n-it!
Visually select a block of text in your buffer and have OpenAI tell you what it does! Optionally customize the prompt included in your request.
Have questions about the full buffer? A shortcut sends the entire buffer in your request, no visual selection necessary.
Try it in a minimal container, no commitment necessary:

- Set your API key in your terminal: `export CHAT_GPT_API_KEY=<replace_with_your_key>`
- Execute `make docker`
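Put together, the quick start is just two commands (this assumes Docker is installed; the `docker` target comes from the repository's Makefile):

```sh
# Make your OpenAI API key available to the container
export CHAT_GPT_API_KEY=<replace_with_your_key>

# Build and launch the disposable Neovim container defined in the repo's Makefile
make docker
```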
Neovim integration with the OpenAI API:

- Send your entire buffer to the OpenAI API! This allows you to do things like:
  - Explain what code does
  - Generate code snippets
  - Write unit tests
  - Fetch non-code responses, such as document generation or answering questions
- Keybindings for quick integration with separate models (see the sketch after this list)
- Write responses to text files for persistence
- Visual selection is supported, too
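As a taste of what a keybinding could look like, here is a minimal sketch. The `explain_buffer` function name is an illustrative assumption, not the plugin's confirmed API; check the plugin's documentation for the actual entry points it exposes.

```lua
-- Hypothetical binding: `explain_buffer` is an illustration,
-- not a function the plugin is confirmed to export.
vim.keymap.set("n", "<leader>ee", function()
  require("explain-it").explain_buffer()
end, { desc = "Explain the current buffer with OpenAI" })
```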
Install the plugin with your favorite package manager.

[packer.nvim](https://github.com/wbthomason/packer.nvim):

```lua
use({
  "tdfacer/explain-it.nvim",
  requires = {
    "rcarriga/nvim-notify",
  },
  config = function()
    require("explain-it").setup({
      -- Prints useful log messages
      debug = true,
      -- Customize notification window width
      max_notification_width = 200,
      -- Retry API calls
      max_retries = 3,
      -- Customize response text file persistence location
      output_directory = "/tmp/chat_output",
      -- Toggle splitting responses in notification window
      split_responses = false,
      -- Set the token limit: lower keeps costs down, higher allows longer, higher-quality responses
      token_limit = 2000,
      -- Per-filetype default prompt questions
      default_prompts = {
        ["markdown"] = "Answer this question:",
        ["txt"] = "Explain this block of text:",
        ["lua"] = "What does this code do?",
        ["zsh"] = "Answer this question:",
      },
    })
  end,
})
```

[lazy.nvim](https://github.com/folke/lazy.nvim):

```lua
{
  "tdfacer/explain-it.nvim",
  dependencies = "rcarriga/nvim-notify",
  config = function()
    require("explain-it").setup({
      -- Prints useful log messages
      debug = true,
      -- Customize notification window width
      max_notification_width = 200,
      -- Retry API calls
      max_retries = 3,
      -- Customize response text file persistence location
      output_directory = "/tmp/chat_output",
      -- Toggle splitting responses in notification window
      split_responses = false,
      -- Set the token limit: lower keeps costs down, higher allows longer, higher-quality responses
      token_limit = 2000,
      -- Per-filetype default prompt questions
      default_prompts = {
        ["markdown"] = "Answer this question:",
        ["txt"] = "Explain this block of text:",
        ["lua"] = "What does this code do?",
        ["zsh"] = "Answer this question:",
      },
    })
  end,
},
```
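The `default_prompts` keys appear to be filetype names as reported by Neovim's `filetype` option, so adding a prompt for another language should be a one-line change. The `python` entry below is an illustrative assumption, not a shipped default, and depending on how the plugin merges options you may want to restate any defaults you rely on:

```lua
require("explain-it").setup({
  default_prompts = {
    -- Hypothetical addition: key on the buffer's 'filetype' value
    ["python"] = "Explain this Python code:",
  },
})
```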
To get set up:

- Sign up for a paid account at https://platform.openai.com/signup
- Be sure to note pricing! It is recommended to use something like privacy.com so that you do not accidentally exceed your spending limit. Note that ChatGPT usage and API usage are billed separately; this plugin uses the API.
- After adding payment info, copy your API key and set it as an environment variable in your shell, e.g. by appending `export CHAT_GPT_API_KEY=<replace_with_your_key>` to your `.zshrc` file (spelled out below)
- Install the plugin using your favorite package manager as described above
- Sensible default config values have been set. Customize them with the standard `setup` function. See `M.options` in the plugin source for a full list of options.
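The environment variable step, spelled out for zsh (adapt the file name for bash or another shell):

```sh
# Persist the key in your shell profile
echo 'export CHAT_GPT_API_KEY=<replace_with_your_key>' >> ~/.zshrc

# Reload the profile and confirm the key is visible before launching Neovim
source ~/.zshrc
echo "$CHAT_GPT_API_KEY"
```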
PRs and issues are always welcome. Make sure to provide as much context as possible when opening one. See CONTRIBUTING.md.