fix(deps): update dependency node-llama-cpp to v3.1.1 #123
This PR contains the following updates:

| Package | Change |
|---|---|
| [node-llama-cpp](https://github.com/withcatai/node-llama-cpp) | `3.0.0-beta.44` -> `3.1.1` |

> **Warning**
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.
### Release Notes

**[withcatai/node-llama-cpp](https://github.com/withcatai/node-llama-cpp)** (node-llama-cpp)
#### v3.1.1

**Features**

- `Llama` instance (#360) (8145c94)

Shipped with `llama.cpp` release `b3889`
#### v3.1.0

**Bug Fixes**

**Features**

- `resolveModelFile` method (#351) (4ee10a9)
- `hf:` URI support (#351) (4ee10a9)

Shipped with `llama.cpp` release `b3887`
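
A minimal sketch of how the new `resolveModelFile` method and `hf:` URI support can be combined, assuming the documented 3.1.0 API; the model URI and the rest of the setup below are placeholders, not part of this PR:

```ts
import {resolveModelFile, getLlama, LlamaChatSession} from "node-llama-cpp";

// Resolve a model reference to a local file path, downloading the file if needed.
// "hf:" URIs point at GGUF files on Hugging Face; this one is a placeholder.
const modelPath = await resolveModelFile("hf:user/repo/model.gguf");

const llama = await getLlama();
const model = await llama.loadModel({modelPath});
const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

console.log(await session.prompt("Hello!"));
```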
#### v3.0.3 (2024-09-25)

**Bug Fixes**

- `llama.cpp` breaking change (#344) (2e751c8)

Shipped with `llama.cpp` release `b3825`
#### v3.0.2 (2024-09-25)

**Bug Fixes**

- `README.md` (#340) (8ab983b)

Shipped with `llama.cpp` release `b3821`
#### v3.0.1 (2024-09-24)

**Bug Fixes**

Shipped with `llama.cpp` release `b3808`
#### v3.0.0 (2024-09-24)

✨ `node-llama-cpp` 3.0 is here! ✨ Read about the release in the blog post.

**Features**

- `pull` command (#214) (453c162)
- `inspect gpu` command (#175) (5a70576)
- `inspect gguf` command (#182) (35e6f50)
- `inspect estimate` command (#309) (4b3ad61)
- `inspect measure` command (#182) (35e6f50)
- `init` command to scaffold a new project from a template (with `node-typescript` and `electron-typescript-react` templates) (#217) (d6a0f43)
- `download`, `build` and `clear` commands to be subcommands of a `source` command (#309) (4b3ad61)
- `seed` option to the prompt level (#309) (4b3ad61)
- `TemplateChatWrapper`: custom history template for each message role (#309) (4b3ad61)
- `onTextChunk` option (#273) (e3e0994)
- `stopOnAbortSignal` and `customStopTriggers` on `LlamaChat` and `LlamaChatSession` (#214) (453c162)
- `--gpu` flag in generation CLI commands (#205) (ef501f9)
- `specialTokens` parameter on `model.detokenize` (#205) (ef501f9)
- `LlamaModel` (#182) (35e6f50)
- `tokenizer.chat_template` header from the `gguf` file when available - use it to find a better specialized chat wrapper or use `JinjaTemplateChatWrapper` with it as a fallback (#182) (35e6f50)
- `chat`, `complete`, `infill` (#182) (35e6f50)
- `getLlama` when using `"lastBuild"` (#164) (ede69c1)
- `chatWrapper` getter on a `LlamaChatSession` (#161) (46235a2)
- `LlamaChat` (#139) (5fcdf9b)
- `LlamaText` util (#139) (5fcdf9b)
- `llama.cpp` release in GitHub releases (#142) (36c779d)

Shipped with `llama.cpp` release `b3808`
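
To make two of the 3.0.0 additions above concrete, here is a hedged sketch of the prompt-level `seed` option and the `onTextChunk` streaming option; the model path is a placeholder and the setup mirrors the earlier sketch:

```ts
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "path/to/model.gguf"}); // placeholder path
const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

// `seed` can now be set per prompt, and `onTextChunk` receives
// generated text incrementally as it streams in.
const answer = await session.prompt("Summarize the last message", {
    seed: 42,
    onTextChunk(chunk) {
        process.stdout.write(chunk);
    }
});
```

The new CLI entries listed above (`pull`, `init`, `inspect gpu`, and the `source` subcommands) are exposed through the package's CLI rather than this programmatic API.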
#### v3.0.0-beta.47

**Bug Fixes**

**Features**

- `resetChatHistory` function on a `LlamaChatSession` (#327) (ebc4e83)

Shipped with `llama.cpp` release `b3804`
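
A short sketch of the new `resetChatHistory` function on a `LlamaChatSession`; the setup follows the same placeholder pattern as the earlier sketches, and the exact call signature is an assumption based on the release note:

```ts
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "path/to/model.gguf"}); // placeholder path
const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

await session.prompt("Remember the number 7.");

// Discard the conversation so far and return the session
// to its initial chat history (assumed signature).
session.resetChatHistory();
```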
#### v3.0.0-beta.46

**Bug Fixes**

- `defineChatSessionFunction` types and docs (#322) (2204e7a)
- `electron-builder` version used in Electron template (#323) (6c644ff)

Shipped with `llama.cpp` release `b3787`
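
Since this release fixes the `defineChatSessionFunction` types, a brief illustrative example of that API may help; the function name and behavior here are invented for illustration, and the model path is a placeholder:

```ts
import {getLlama, defineChatSessionFunction, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "path/to/model.gguf"}); // placeholder path
const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

// An illustrative function the model can call while answering a prompt
const functions = {
    getCurrentTime: defineChatSessionFunction({
        description: "Returns the current time as an ISO string",
        handler() {
            return new Date().toISOString();
        }
    })
};

console.log(await session.prompt("What time is it?", {functions}));
```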
#### v3.0.0-beta.45

**Bug Fixes**

- `llama.cpp` sampling refactor (#309) (4b3ad61)
- `chat` command when using `--printTimings` or `--meter` (#309) (4b3ad61)

**Features**

- `inspect estimate` command (#309) (4b3ad61)
- `seed` option to the prompt level (#309) (4b3ad61)
- `autoDisposeSequence` default to `false` (#309) (4b3ad61)
- `download`, `build` and `clear` commands to be subcommands of a `source` command (#309) (4b3ad61)
- `TokenBias` (#309) (4b3ad61)
- `threads` default value (#309) (4b3ad61)
- `LlamaEmbedding` an object (#309) (4b3ad61)
- `HF_TOKEN` env var support for reading GGUF file metadata (#309) (4b3ad61)
- `TemplateChatWrapper`: custom history template for each message role (#309) (4b3ad61)
- `inspect gpu` command (#309) (4b3ad61)
- `--gpuLayers max` and `--contextSize max` flag support for `inspect estimate` command (#309) (4b3ad61)

Shipped with `llama.cpp` release `b3785`
### Configuration

📅 **Schedule**: Branch creation - "every weekend" in timezone UTC, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update again.

---

This PR was generated by Mend Renovate. View the repository job log.