Langchain OpenAI HTTP response code 422 #187
Comments
@rodrigo-pedro Sorry about that; this was a bug that got re-introduced recently. The OpenAI API accepts arrays of prompts. I'll push a new PyPI release tomorrow, but it should already be fixed in the GitHub version.
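For context, a minimal sketch of the kind of request-schema change described above: validating `prompt` as a plain string makes LangChain's array-style requests fail with HTTP 422, while a union type accepts both forms. This is illustrative only; the class and field names below are assumptions, not the project's actual code.

```python
# Illustrative pydantic sketch, not llama-cpp-python's real request model.
from typing import List, Union

from pydantic import BaseModel


class CompletionRequest(BaseModel):
    # Declaring `prompt: str` alone rejects list-valued prompts with a 422;
    # the OpenAI API allows either a string or an array of strings.
    prompt: Union[str, List[str]] = ""
    max_tokens: int = 16


def normalize_prompt(req: CompletionRequest) -> str:
    # The server generates one completion at a time, so a single-element
    # list is unwrapped and anything longer is rejected explicitly.
    if isinstance(req.prompt, list):
        if len(req.prompt) != 1:
            raise ValueError("only one prompt per request is supported")
        return req.prompt[0]
    return req.prompt
```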
* Bugfix: Ensure logs are printed when streaming
* Update llama.cpp
* Update llama.cpp
* Add missing tfs_z parameter
* Bump version
* Fix docker command
* Revert "llama_cpp server: prompt is a string". Closes abetlen#187. This reverts commit b9098b0.
* Only support generating one prompt at a time.
* Allow model to tokenize strings longer than context length and set add_bos. Closes abetlen#92
* Update llama.cpp
* Bump version
* Update llama.cpp
* Fix obscure Windows DLL issue. Closes abetlen#208
* chore: add note for Mac M1 installation
* Add winmode arg only on Windows if the Python version supports it
* Bump mkdocs-material from 9.1.11 to 9.1.12
* Update README.md (fix typo)
* Fix CMakeLists.txt
* Add sampling defaults for generate
* Update llama.cpp
* Add model_alias option to override model_path in completions. Closes abetlen#39
* Update variable name
* Update llama.cpp
* Fix top_k value. Closes abetlen#220
* Fix last_n_tokens_size
* Implement penalize_nl
* Format
* Update token checks
* Move docs link up
* Fixed CUBLAS DLL load issue on Windows
* Check for CUDA_PATH before adding

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Andrei Betlen <abetlen@gmail.com>
Co-authored-by: Anchen <anchen.li+alias@pepperstone.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Xiyou Zhou <xiyou.zhou@gmail.com>
Co-authored-by: Aneesh Joy <aneeshjoy@gmail.com>
Ahh interesting, OpenAI works but ChatOpenAI is broken.
Adding the max_tokens parameter when calling ChatOpenAI worked for me.
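For reference, a minimal sketch of that workaround, assuming the LangChain import paths current around the time of this issue and a local llama-cpp-python server at http://localhost:8000/v1 (the endpoint and key below are placeholders):

```python
# Sketch of the workaround described above: pass max_tokens explicitly when
# constructing ChatOpenAI. Endpoint and API key values are placeholders.
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat = ChatOpenAI(
    openai_api_base="http://localhost:8000/v1",  # local llama-cpp-python server
    openai_api_key="sk-xxx",                     # dummy key; the local server does not check it
    max_tokens=256,                              # setting this explicitly avoided the 422 for some users
)

print(chat([HumanMessage(content="Hello!")]))
```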
* add ggml_rms_norm
* update op num
Can we close this?
I just tried running commit 92b0013 from git, and it still has the same issue when using langchain with GPTeam (101dotxyz/GPTeam#63).
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Expected Behavior
I am trying to run the following code, as demonstrated in https://github.com/abetlen/llama-cpp-python/blob/main/examples/notebooks/Clients.ipynb:
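The code block did not survive the copy, so here is a rough sketch of the kind of LangChain client call that notebook demonstrates, not the exact cell contents; the model path, endpoint, and key are placeholders.

```python
# Rough reconstruction of the style of client code shown in Clients.ipynb.
# Assumes a local server started with something like:
#   python3 -m llama_cpp.server --model <path-to-ggml-model>
from langchain.llms import OpenAI

llm = OpenAI(
    openai_api_base="http://localhost:8000/v1",  # local OpenAI-compatible endpoint
    openai_api_key="sk-xxx",                     # dummy key; not checked by the local server
)

# LangChain's OpenAI wrapper sends the prompt as an array, which is what
# triggered the 422 when the server only accepted a plain string.
print(llm("The quick brown fox"))
```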
I expect to receive a successful response.
Current Behavior
The response I receive is an HTTP 422 (Unprocessable Entity) error:
On the server side, this is the corresponding log message:
Environment and Context
Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except under certain specific conditions.
$ lscpu
$ uname -a
Failure Information (for bugs)
See above
Steps to Reproduce
Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.
Failure Logs
Please include any relevant log snippets or files. If it works under one configuration but not under another, please provide logs for both configurations and their corresponding outputs so it is easy to see where behavior changes.
Also, please try to avoid using screenshots if at all possible. Instead, copy/paste the console output and use GitHub's Markdown to cleanly format your logs for easy readability.
Environment info: