
Ollama provider returns no results when Ollama is running behind an HTTPS proxy on a remote server #64

Closed
FlippingBinary opened this issue Sep 8, 2024 · 7 comments

@FlippingBinary

It's not clear to me if the server is returning an empty result or if there is some other connectivity problem, but that issue is covered by #35.

Here is the configuration I'm using:

return {
  "gsuuon/model.nvim",
  cmd = { "M", "Model", "Mchat" },
  init = function()
    vim.filetype.add({
      extension = {
        mchat = "mchat",
      },
    })
  end,
  ft = "mchat",
  config = function()
    local ollama = require("model.providers.ollama")
    local mode = require("model").mode
    require("model").setup({
      chats = {
        ["review"] = {
          provider = ollama,
          options = {
            url = "https://ollama.local",
          },
          system = "You are an expert programmer that gives constructive feedback. Review the changes in the user's git diff.",
          params = {
            model = "starling-lm",
          },
          create = function()
            local git_diff = vim.fn.system({ "git", "diff", "--staged" })
            ---@cast git_diff string
            if not git_diff:match("^diff") then
              error("Git error:\n" .. git_diff)
            end
            return git_diff
          end,
          run = function(messages, config)
            if config.system then
              table.insert(messages, 1, {
                role = "system",
                content = config.system,
              })
            end
            return { messages = messages }
          end,
        },
      },
      prompts = {
        ["ollama:starling"] = {
          provider = ollama,
          options = {
            url = "https://ollama.local",
          },
          params = {
            model = "starling-lm",
          },
          builder = function(input)
            return {
              prompt = "GPT4 Correct User: " .. input .. "<|end_of_turn|>GPT4 Correct Assistant: ",
            }
          end,
        },
      },
    })
  end,
  keys = {
    { "<C-m>d", ":Mdelete<cr>", mode = "n" },
    { "<C-m>s", ":Mselect<cr>", mode = "n" },
    { "<C-m><space>", ":Mchat<cr>", mode = "n" },
  },
}

Has anyone else tried using this plugin with Ollama behind an https reverse proxy?

@zbindenren

I am also behind a proxy and I get the following error with Hugging Face:

Error executing Lua callback: ...local/share/nvim/lazy/model.nvim/lua/model/core/chat.lua:202: attempt to call field 'create' (a nil value)
stack traceback:
        ...local/share/nvim/lazy/model.nvim/lua/model/core/chat.lua:202: in function 'build_contents'
        .../rz/.local/share/nvim/lazy/model.nvim/lua/model/init.lua:295: in function <.../rz/.local/share/nvim/lazy/model.nvim/lua/model/init.lua:259>

And I have the following config:

{
    'gsuuon/model.nvim',
    cmd = { 'M', 'Model', 'Mchat' },
    init = function()
      vim.filetype.add({
        extension = {
          mchat = 'mchat',
        }
      })
    end,
    ft = 'mchat',

    -- keys = {
    --   { '<C-m>d',       ':Mdelete<cr>', mode = 'n' },
    --   { '<C-m>s',       ':Mselect<cr>', mode = 'n' },
    --   { '<C-m><space>', ':Mchat<cr>',   mode = 'n' }
    -- },

    -- To override defaults add a config field and call setup()

    config = function()
      require('model').setup({
        prompts = require('model.util').module.autoload('prompt_library'),
        chats = {
          ['hf:starcoder'] = {
            provider = require('model.providers.huggingface'),
            options = {
              model = 'bigcode/starcoder'
            },
            builder = function(input)
              return { inputs = input }
            end
          },
        },
      })
    end
  },

I'm not sure whether it is the proxy or my config. Usually curl uses the configured http_proxy environment variable.

@FlippingBinary
Author

@zbindenren That would be a different type of problem. It looks like you may have put a completion prompt where a chat prompt was expected: chat prompts require create and run functions, while completion prompts require a builder function. You can see the difference in the example I posted, with the caveat that I'm new to this plugin as well, so there may be errors. The case you ran into should probably be handled a little more gracefully, so it might warrant a separate issue, but I'm pretty sure it's distinct from mine.
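
To make the distinction concrete, here is a minimal sketch of the two prompt shapes, stripped down from the config in my first post (the entry names and return values are placeholders, not anything from the plugin's docs):

local ollama = require("model.providers.ollama")

require("model").setup({
  chats = {
    -- chat prompt: `create` builds the initial chat contents,
    -- `run` turns the accumulated messages into the request body
    ["example-chat"] = {
      provider = ollama,
      params = { model = "starling-lm" },
      create = function()
        return "" -- initial contents of the chat buffer
      end,
      run = function(messages, config)
        return { messages = messages }
      end,
    },
  },
  prompts = {
    -- completion prompt: `builder` maps the input text to the request body
    ["example-completion"] = {
      provider = ollama,
      params = { model = "starling-lm" },
      builder = function(input)
        return { prompt = input }
      end,
    },
  },
})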

@gsuuon
Owner

gsuuon commented Sep 16, 2024

Are you able to directly curl that endpoint? You can set require('model.util.curl')._is_debugging = true to get a vim.notify of the curl args and body. Getting an error response but not surfacing it would be a bug (likely with the provider) - though I probably didn't add enough error handling to each of the specific providers.
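
For anyone following along, the flag can be set from a running session before re-running the prompt (only the _is_debugging field mentioned above is involved here):

-- run via :lua, or put it in the plugin's config function before setup();
-- the next request then produces a vim.notify with the curl args and body
require("model.util.curl")._is_debugging = true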

@zbindenren I think @FlippingBinary is right here. I should improve the docs with respect to the distinction between chat prompts and completion prompts (or maybe unify the interface).

@FlippingBinary
Author

@gsuuon When I curl the URL in the console, I get "Ollama is running." I also tried sending a prompt to the URL plus /api/generate, like this:

curl https://ollama.local/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Why is the sky blue?"
}'

This gave me a response as a stream that looks valid.

Hmm.. I tried enabling the debugging messages and got two errors instead of the curl args when calling :Mchat review:

msg_show.lua_error > Mchat review E1510: Value too large: 
msg_show.emsg > Mchat review Error executing Lua callback: 

Aside from a single blank space, there was no text after Value too large: or Error executing Lua callback:.

@gsuuon
Owner

gsuuon commented Sep 16, 2024

Are you using a graphical neovim client? There may be something going awry with the dialog event, but the vim.notify's might work in terminal neovim. If you can get the curl args the plugin builds, try calling curl directly with those args to see what you get.

Pipe the body into curl with the args, so something like echo <body> | curl <args>, or you can replace the @- part of the args with the body (may need to handle escapes).

Do you see this issue with any of the other providers?

@FlippingBinary
Author

Well this is weird. I was sure neither the chat nor the completion was working before. Now the error message I reported above appears to be a problem with Git, not the plugin. When I run git diff --staged in the console, it returns the diff or nothing at all, as expected. But the text returned from vim.fn.system({ 'git', 'diff', '--staged' }) is:

error: unknown option `staged'

... followed by the big usage message. I'm not sure why that is, but it doesn't seem to be related to this plugin. I'll close this issue for now because it looks like I just didn't configure the plugin correctly. Thanks for the tips.
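
For anyone who hits the same git error: a possible explanation (my assumption, not something confirmed here) is that a different or older git ends up on Neovim's PATH. A sketch of a more defensive create that falls back to --cached, git's long-standing synonym for --staged, and checks vim.v.shell_error so failures surface clearly:

-- Hypothetical replacement for the `create` function in the chat config above.
-- `git diff --cached` is equivalent to `git diff --staged`, and
-- vim.v.shell_error reports a non-zero exit from vim.fn.system().
local review_create = function()
  local git_diff = vim.fn.system({ "git", "diff", "--cached" })
  if vim.v.shell_error ~= 0 then
    error("Git error:\n" .. git_diff)
  end
  if not git_diff:match("^diff") then
    error("No staged changes found")
  end
  return git_diff
end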

@gsuuon
Owner

gsuuon commented Sep 16, 2024

No problem! Hope it works out.
