
Releases: himself65/LlamaIndexTS

Release refs/tags/llamaindex@0.6.12

02 Oct 21:18
449274c

llamaindex

0.6.12

Patch Changes

  • f7b4e94: feat: add filters for pinecone
  • 78037a6: fix: bypass service context embed model
  • 1d9e3b1: fix: export llama reader in non-nodejs runtime

0.6.11

Patch Changes

  • df441e2: fix: consoleLogger is missing from @llamaindex/env
  • Updated dependencies [df441e2]
    • @llamaindex/cloud@0.2.9
    • @llamaindex/core@0.2.8
    • @llamaindex/env@0.1.13
    • @llamaindex/ollama@0.0.3
    • @llamaindex/openai@0.1.10
    • @llamaindex/groq@0.0.9

0.6.10

Patch Changes

  • ebc5105: feat: support @vercel/postgres
  • 6cce3b1: feat: support npm:postgres
  • Updated dependencies [96f72ad]
  • Updated dependencies [6cce3b1]
    • @llamaindex/openai@0.1.9
    • @llamaindex/core@0.2.7
    • @llamaindex/groq@0.0.8
    • @llamaindex/ollama@0.0.2

0.6.9

Patch Changes

  • Updated dependencies [ac41ed3]
    • @llamaindex/cloud@0.2.8

0.6.8

Patch Changes

  • 8b7fdba: refactor: move chat engine & retriever into core.

    • chatHistory in BaseChatEngine now returns ChatMessage[] | Promise<ChatMessage[]> instead of BaseMemory (see the sketch after this entry)
    • update retrieve-end type
  • Updated dependencies [8b7fdba]

    • @llamaindex/core@0.2.6
    • @llamaindex/openai@0.1.8
    • @llamaindex/groq@0.0.7
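
    Because chatHistory may now be a plain array or a promise, callers can normalize it with await. A minimal sketch, assuming a SimpleChatEngine instance; the helper name is illustrative:

    ```ts
    import { SimpleChatEngine, type ChatMessage } from "llamaindex";

    // Hypothetical helper: normalize ChatMessage[] | Promise<ChatMessage[]> into ChatMessage[].
    async function getHistory(engine: SimpleChatEngine): Promise<ChatMessage[]> {
      // `await` handles both the plain-array and the promise case.
      return await engine.chatHistory;
    }
    ```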

0.6.7

Patch Changes

  • 23bcc37: fix: add serializer in doc store

    PostgresDocumentStore no longer uses JSON.stringify, for better performance.

0.6.6

Patch Changes

  • d902cc3: Fix context not being sent using ContextChatEngine
  • 025ffe6: fix: update PostgresKVStore constructor params
  • a659574: Adds Upstash vector store as a storage option
  • Updated dependencies [d902cc3]
    • @llamaindex/core@0.2.5
    • @llamaindex/openai@0.1.7
    • @llamaindex/groq@0.0.6

0.6.5

Patch Changes

  • e9714db: feat: update PGVectorStore (a migration sketch follows this list)

    • move constructor parameter config.user | config.database | config.password | config.connectionString into config.clientConfig
    • if you pass pg.Client or pg.Pool instance to PGVectorStore, move it to config.client, setting config.shouldConnect to false if it's already connected
    • default value of PGVectorStore.collection is now "data" instead of "" (empty string)
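
    A minimal migration sketch for the new constructor shape described above; the import path and connection string are assumptions (see also the 0.5.25 note about llamaindex/vector-store):

    ```ts
    import { PGVectorStore } from "llamaindex/vector-store";

    // Before 0.6.5 (sketch): new PGVectorStore({ connectionString, user, password, database })
    // After 0.6.5: plain pg connection options live under clientConfig.
    const vectorStore = new PGVectorStore({
      clientConfig: {
        connectionString: process.env.PG_CONNECTION_STRING, // assumed env var
      },
      // If you manage your own pg.Client or pg.Pool instead:
      // client: existingPool,
      // shouldConnect: false, // it is already connected
    });
    ```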

0.6.4

Patch Changes

  • b48bcc3: feat: add load-transformers event type when loading @xenova/transformers module

    This benefits users who want to customize the transformers environment (a hedged sketch follows this entry).

  • Updated dependencies [b48bcc3]

    • @llamaindex/core@0.2.4
    • @llamaindex/env@0.1.12
    • @llamaindex/openai@0.1.6
    • @llamaindex/groq@0.0.5
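
    A hedged sketch of listening for this event to tweak the @xenova/transformers environment; the payload shape shown here is an assumption, not a documented contract:

    ```ts
    import { Settings } from "llamaindex";

    // Assumption: the "load-transformers" event exposes the loaded transformers module
    // on event.detail so its env can be customized before any model is fetched.
    Settings.callbackManager.on("load-transformers", (event) => {
      const { transformers } = event.detail as { transformers: any }; // assumed payload shape
      transformers.env.allowLocalModels = false; // example tweak of the transformers env
    });
    ```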

0.6.3

Patch Changes

  • 2cd1383: refactor: align response-synthesizers & chat-engine module

    • built-in event system
    • correct class extends relationships
    • align APIs and naming with llama-index Python
    • move stream from the first parameter to the second parameter for better type checking
    • remove JSONQueryEngine from @llamaindex/experimental, as the code quality is not satisfactory; we will bring it back later
  • 5c4badb: Extend JinaAPIEmbedding parameters

  • Updated dependencies [fb36eff]

  • Updated dependencies [d24d3d1]

  • Updated dependencies [2cd1383]

    • @llamaindex/cloud@0.2.7
    • @llamaindex/core@0.2.3
    • @llamaindex/openai@0.1.5
    • @llamaindex/groq@0.0.4

0.6.2

Patch Changes

  • 749b43a: fix: clip embedding transform function
  • Updated dependencies [b42adeb]
  • Updated dependencies [749b43a]
    • @llamaindex/cloud@0.2.6
    • @llamaindex/core@0.2.2
    • @llamaindex/openai@0.1.4
    • @llamaindex/groq@0.0.3

0.6.1

Patch Changes

  • fbd5e01: refactor: move groq as llm package

  • 6b70c54: feat: update JinaAIEmbedding, support embedding v3

  • 1a6137b: feat: experimental support for browser

    If you see bundler issues in the Next.js edge runtime, please bump to the latest next@14 version.

  • 85c2e19: feat: @llamaindex/cloud package update

    • Bump to latest openapi schema
    • Move the LlamaParse class out of llamaindex; this allows you to use LlamaParse in more non-Node.js environments
  • Updated dependencies [ac07e3c]

  • Updated dependencies [fbd5e01]

  • Updated dependencies [70ccb4a]

  • Updated dependencies [1a6137b]

  • Updated dependencies [85c2e19]

  • Updated dependencies [ac07e3c]

    • @llamaindex/core@0.2.1
    • @llamaindex/env@0.1.11
    • @llamaindex/groq@0.0.2
    • @llamaindex/cloud@0.2.5
    • @llamaindex/openai@0.1.3

0.6.0

Minor Changes

Patch Changes

  • Updated dependencies [11feef8]
    • @llamaindex/core@0.2.0
    • @llamaindex/openai@0.1.2

0.5.27

Patch Changes

  • 7edeb1c: feat: decouple openai from llamaindex module

    This should be a non-breaking change; going forward, you can install only @llamaindex/openai to reduce the bundle size (see the import sketch after this entry).

  • Updated dependencies [7edeb1c]

    • @llamaindex/openai@0.1.1
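
    A minimal sketch of using the decoupled package directly; the model name is just an example:

    ```ts
    import { OpenAI } from "@llamaindex/openai";
    import { Settings } from "llamaindex";

    // Use the dedicated OpenAI package instead of the re-export from llamaindex.
    Settings.llm = new OpenAI({ model: "gpt-4o-mini", temperature: 0 });
    ```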

0.5.26

Patch Changes

  • ffe0cd1: feat: add openai o1 support
  • ffe0cd1: feat: add PostgreSQL storage

0.5.25

Patch Changes

  • 4810364: fix: handle RouterQueryEngine with string query

  • d3bc663: refactor: export vector store only in nodejs environment on top level

    If you see missing-module errors, change vector-store-related imports to llamaindex/vector-store (as sketched after this entry).

  • Updated dependencies [4810364]

    • @llamaindex/cloud@0.2.4
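
    For example, an import that previously came from the package root now comes from the subpath (sketch; PGVectorStore is just one such store):

    ```ts
    // Before (top-level export, Node.js only):
    // import { PGVectorStore } from "llamaindex";

    // After 0.5.25: import vector stores from the dedicated subpath.
    import { PGVectorStore } from "llamaindex/vector-store";
    ```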

0.5.24

Patch Changes

  • Updated dependencies [0bf8d80]
    • @llamaindex/cloud@0.2.3

0.5.23

Patch Changes

  • Updated dependencies [711c814]
    • @llamaindex/core@0.1.12

0.5.22

Patch Changes

  • 4648da6: fix: wrong tiktoken version caused the Next.js CL template run to fail
  • Updated dependencies [4648da6]
    • @llamaindex/env@0.1.10
    • @llamaindex/core@0.1.11

0.5.21

Patch Changes

  • ae1149f: feat: add JSON streaming to JSONReader

  • 2411c9f: Auto-create index for MongoDB vector store (if not exists)

  • e8f229c: Remove logging from MongoDB Atlas Vector Store

  • 11b3856: implement filters for MongoDBAtlasVectorSearch

  • 83d7f41: Fix database insertion for PGVectorStore

    It will now:

    • throw an error if there is an insertion error.
    • Upsert documents with the same id.
    • add all documents to the database as a single INSERT call (inside a transaction).
  • 0148354: refactor: prompt system

    Add PromptTemplate module with strong type check (a usage sketch follows this entry).

  • 1711f6d: Export imageToDataUrl for using images in chat

  • Updated dependencies [0148354]

    • @llamaindex/core@0.1.10
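
    A usage sketch of the new prompt module, assuming PromptTemplate is re-exported from llamaindex (it lives in @llamaindex/core):

    ```ts
    import { PromptTemplate } from "llamaindex";

    // Template variables are checked against templateVars (sketch).
    const qaPrompt = new PromptTemplate({
      template: "Context:\n{context}\n\nAnswer the question: {query}",
      templateVars: ["context", "query"],
    });

    const prompt = qaPrompt.format({ context: "LlamaIndexTS changelog", query: "What changed?" });
    ```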

0.5.20

Patch Changes

  • d9d6c56: Add support for MetadataFilters for PostgreSQL
  • 22ff486: Add tiktoken WASM to withLlamaIndex (see the next.config sketch after this list)
  • eed0b04: fix: use LLM metadata mode for generating context of ContextChatEngine
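
    A sketch of wiring the helper into a Next.js config, assuming the llamaindex/next entry point:

    ```ts
    // next.config.mjs (sketch)
    import withLlamaIndex from "llamaindex/next";

    /** @type {import('next').NextConfig} */
    const nextConfig = {};

    // withLlamaIndex adjusts the webpack config, e.g. so the tiktoken WASM asset is bundled.
    export default withLlamaIndex(nextConfig);
    ```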

0.5.19

Patch Changes

  • fcbf183: implement llamacloud file service

0.5.18

Patch Changes

  • 8b66cf4: feat: support organization id in llamacloud index
  • Updated dependencies [e27e7dd]
    • @llamaindex/core@0.1.9

0.5.17

Patch Changes

  • c654398: Implement Weaviate Vector Store in TS

0.5.16

Patch Changes

  • 58abc57: fix: align version
  • Updated dependencies [58abc57]
    • @llamaindex/cloud@0.2.2
    • @llamaindex/core@0.1.8
    • @llamaindex/env@0.1.9

0.5.15

Patch Changes

  • 01c184c: Add is_empty operator for filtering vector store
  • 07a275f: chore: bump openai

0.5.14

Patch Changes

  • c825a2f: Add gpt-4o-mini to Azure. Add 2024-06-01 API version for Azure

0.5.13

Patch Changes

  • Updated dependencies [04b2f8e]
    • @llamaindex/core@0.1.7

0.5.12

Patch Changes

  • 345300f: feat: add splitByPage mode to LlamaParseReader
  • da5cfc4: Add metadata filter options to retriever constructors (see the sketch after this entry)
  • da5cfc4: Fix system prompt not used in ContextChatEngine
  • Updated dependencies [0452af9]
    • @llamaindex/core@0.1.6
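
    A hedged sketch of passing metadata filters to a retriever; the filter values and the exact option shape are illustrative assumptions:

    ```ts
    import { VectorStoreIndex } from "llamaindex";

    // Assume `index` was built earlier, e.g. via VectorStoreIndex.fromDocuments(documents).
    declare const index: VectorStoreIndex;

    const retriever = index.asRetriever({
      similarityTopK: 3,
      // Assumed filter shape: only nodes whose metadata matches are retrieved.
      filters: {
        filters: [{ key: "category", value: "changelog", operator: "==" }],
      },
    });

    const nodes = await retriever.retrieve({ query: "PGVectorStore changes" });
    ```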

0.5.11

Patch Changes

  • Updated dependencies [1f680d7]
    • @llamaindex/cloud@0.2.1

0.5.10

Patch Changes

  • 086b940: feat: add DeepSeek LLM
  • 5d5716b: feat: add a reader for JSON data
  • 91d02a4: feat: support transform component callable
  • fb6db45: feat: add pageSeparator params to LlamaParseReader
  • Updated dependencies [91d02a4]
    • @llamaindex/core@0.1.5

0.5.9

Patch Changes

  • 15962b3: feat: node parser refactor

    Aligns the text splitter logic with Python; it now has almost the same behavior as the Python version, with Zod input validation, better error messages, and an event system.

    This is not considered a breaking change since the output does not differ significantly from the previous version,
    but some edge cases will change, such as the page separator and the constructor parameters.

  • Updated dependencies [15962b3]

    • @llamaindex/core@0.1.4

0.5.8

Patch Changes

  • 3d5ba08: fix: update user agent in AssemblyAI
  • d917cdc: Add azure interpreter tool to tool factory

0.5.7

Patch Changes

  • ec59acd: fix: bundling issue with pnpm

0.5.6

Patch Changes

  • 2562244: feat: add gpt4o-mini
  • 325aa51: Implement Jina embedding through Jina api
  • ab700ea: Add missing authentication to LlamaCloudIndex.fromDocuments
  • 92f0782: feat: use query bundle
  • 6cf6ae6: feat: abstract query type (see the query sketch after this entry)
  • b7cfe5b: fix: passing max_token option to replicate's api call
  • Updated dependencies [6cf6ae6]
    • @llamaindex/core@0.1.3
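
    A sketch of querying an index; the query engine accepts a plain string query, and richer query-bundle objects are accepted wherever the abstract query type is expected (assumption based on the entries above):

    ```ts
    import { VectorStoreIndex } from "llamaindex";

    declare const index: VectorStoreIndex;

    const queryEngine = index.asQueryEngine();
    const response = await queryEngine.query({
      query: "Which release moved groq into its own package?",
    });
    console.log(response.toString());
    ```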

0.5.5

Patch Changes

  • b974eea: Add support for Metadata filters
  • Updated dependencies [b974eea]
    • @llamaindex/core@0.1.2

0.5.4

Patch Changes

  • 1a65ead: feat: add vendorMultimodal params to LlamaParseReader

0.5.3

Patch Changes

  • 9bbbc67: feat: add a reader for Discord messages
  • b3681bf: fix: DataCloneError when using FunctionTool
  • Updated dependencies [b3681bf]
    • @llamaindex/core@0.1.1

0.5.2

Patch Changes

  • Updated dependencies [3ed6acc]
    • @llamaindex/cloud@0.2.0

0.5.1

Patch Changes

  • 2774681: Add mixedbread's embeddings and reranking API
  • a0f424e: corrected the regex in the react.ts file in extractToolUse & extractJsonStr functions, as mentioned in run-llama#1019

0.5.0

Minor Changes

  • 16ef5dd: refactor: simplify callback manager

    Change event.detail.payload to event.detail (see the migration sketch below)
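
    A migration sketch for event handlers; "llm-start" is just an example event name:

    ```ts
    import { Settings } from "llamaindex";

    Settings.callbackManager.on("llm-start", (event) => {
      // Before 0.5.0 (sketch): const payload = event.detail.payload;
      // After 0.5.0: the payload is the detail itself.
      const payload = event.detail;
      console.log(payload);
    });
    ```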

Patch Changes

  • 16ef5dd: refactor: move callback manager & llm to core module …

Release refs/tags/llamaindex@0.6.7

23 Sep 21:45
22ae8d0

llamaindex

0.6.7

Patch Changes

  • 23bcc37: fix: add serializer in doc store

    PostgresDocumentStore no longer uses JSON.stringify, for better performance.

0.6.6

Patch Changes

  • d902cc3: Fix context not being sent using ContextChatEngine
  • 025ffe6: fix: update PostgresKVStore constructor params
  • a659574: Adds Upstash vector store as a storage option
  • Updated dependencies [d902cc3]
    • @llamaindex/core@0.2.5
    • @llamaindex/openai@0.1.7
    • @llamaindex/groq@0.0.6

0.6.5

Patch Changes

  • e9714db: feat: update PGVectorStore

    • move constructor parameter config.user | config.database | config.password | config.connectionString into config.clientConfig
    • if you pass pg.Client or pg.Pool instance to PGVectorStore, move it to config.client, setting config.shouldConnect to false if it's already connected
    • default value of PGVectorStore.collection is now "data" instead of "" (empty string)

0.6.4

Patch Changes

  • b48bcc3: feat: add load-transformers event type when loading @xenova/transformers module

    This benefits users who want to customize the transformers environment.

  • Updated dependencies [b48bcc3]

    • @llamaindex/core@0.2.4
    • @llamaindex/env@0.1.12
    • @llamaindex/openai@0.1.6
    • @llamaindex/groq@0.0.5

0.6.3

Patch Changes

  • 2cd1383: refactor: align response-synthesizers & chat-engine module

    • built-in event system
    • correct class extends relationships
    • align APIs and naming with llama-index Python
    • move stream from the first parameter to the second parameter for better type checking
    • remove JSONQueryEngine from @llamaindex/experimental, as the code quality is not satisfactory; we will bring it back later
  • 5c4badb: Extend JinaAPIEmbedding parameters

  • Updated dependencies [fb36eff]

  • Updated dependencies [d24d3d1]

  • Updated dependencies [2cd1383]

    • @llamaindex/cloud@0.2.7
    • @llamaindex/core@0.2.3
    • @llamaindex/openai@0.1.5
    • @llamaindex/groq@0.0.4

0.6.2

Patch Changes

  • 749b43a: fix: clip embedding transform function
  • Updated dependencies [b42adeb]
  • Updated dependencies [749b43a]
    • @llamaindex/cloud@0.2.6
    • @llamaindex/core@0.2.2
    • @llamaindex/openai@0.1.4
    • @llamaindex/groq@0.0.3

0.6.1

Patch Changes

  • fbd5e01: refactor: move groq as llm package

  • 6b70c54: feat: update JinaAIEmbedding, support embedding v3

  • 1a6137b: feat: experimental support for browser

    If you see bundler issues in the Next.js edge runtime, please bump to the latest next@14 version.

  • 85c2e19: feat: @llamaindex/cloud package update

    • Bump to latest openapi schema
    • Move the LlamaParse class out of llamaindex; this allows you to use LlamaParse in more non-Node.js environments
  • Updated dependencies [ac07e3c]

  • Updated dependencies [fbd5e01]

  • Updated dependencies [70ccb4a]

  • Updated dependencies [1a6137b]

  • Updated dependencies [85c2e19]

  • Updated dependencies [ac07e3c]

    • @llamaindex/core@0.2.1
    • @llamaindex/env@0.1.11
    • @llamaindex/groq@0.0.2
    • @llamaindex/cloud@0.2.5
    • @llamaindex/openai@0.1.3

0.6.0

Minor Changes

Patch Changes

  • Updated dependencies [11feef8]
    • @llamaindex/core@0.2.0
    • @llamaindex/openai@0.1.2

0.5.27

Patch Changes

  • 7edeb1c: feat: decouple openai from llamaindex module

    This should be a non-breaking change; going forward, you can install only @llamaindex/openai to reduce the bundle size.

  • Updated dependencies [7edeb1c]

    • @llamaindex/openai@0.1.1

0.5.26

Patch Changes

  • ffe0cd1: feat: add openai o1 support
  • ffe0cd1: feat: add PostgreSQL storage

0.5.25

Patch Changes

  • 4810364: fix: handle RouterQueryEngine with string query

  • d3bc663: refactor: export vector store only in nodejs environment on top level

    If you see missing-module errors, change vector-store-related imports to llamaindex/vector-store.

  • Updated dependencies [4810364]

    • @llamaindex/cloud@0.2.4

0.5.24

Patch Changes

  • Updated dependencies [0bf8d80]
    • @llamaindex/cloud@0.2.3

0.5.23

Patch Changes

  • Updated dependencies [711c814]
    • @llamaindex/core@0.1.12

0.5.22

Patch Changes

  • 4648da6: fix: wrong tiktoken version caused the Next.js CL template run to fail
  • Updated dependencies [4648da6]
    • @llamaindex/env@0.1.10
    • @llamaindex/core@0.1.11

0.5.21

Patch Changes

  • ae1149f: feat: add JSON streaming to JSONReader

  • 2411c9f: Auto-create index for MongoDB vector store (if not exists)

  • e8f229c: Remove logging from MongoDB Atlas Vector Store

  • 11b3856: implement filters for MongoDBAtlasVectorSearch

  • 83d7f41: Fix database insertion for PGVectorStore

    It will now:

    • throw an error if there is an insertion error.
    • Upsert documents with the same id.
    • add all documents to the database as a single INSERT call (inside a transaction).
  • 0148354: refactor: prompt system

    Add PromptTemplate module with strong type check.

  • 1711f6d: Export imageToDataUrl for using images in chat

  • Updated dependencies [0148354]

    • @llamaindex/core@0.1.10

0.5.20

Patch Changes

  • d9d6c56: Add support for MetadataFilters for PostgreSQL
  • 22ff486: Add tiktoken WASM to withLlamaIndex
  • eed0b04: fix: use LLM metadata mode for generating context of ContextChatEngine

0.5.19

Patch Changes

  • fcbf183: implement llamacloud file service

0.5.18

Patch Changes

  • 8b66cf4: feat: support organization id in llamacloud index
  • Updated dependencies [e27e7dd]
    • @llamaindex/core@0.1.9

0.5.17

Patch Changes

  • c654398: Implement Weaviate Vector Store in TS

0.5.16

Patch Changes

  • 58abc57: fix: align version
  • Updated dependencies [58abc57]
    • @llamaindex/cloud@0.2.2
    • @llamaindex/core@0.1.8
    • @llamaindex/env@0.1.9

0.5.15

Patch Changes

  • 01c184c: Add is_empty operator for filtering vector store
  • 07a275f: chore: bump openai

0.5.14

Patch Changes

  • c825a2f: Add gpt-4o-mini to Azure. Add 2024-06-01 API version for Azure

0.5.13

Patch Changes

  • Updated dependencies [04b2f8e]
    • @llamaindex/core@0.1.7

0.5.12

Patch Changes

  • 345300f: feat: add splitByPage mode to LlamaParseReader
  • da5cfc4: Add metadata filter options to retriever constructors
  • da5cfc4: Fix system prompt not used in ContextChatEngine
  • Updated dependencies [0452af9]
    • @llamaindex/core@0.1.6

0.5.11

Patch Changes

  • Updated dependencies [1f680d7]
    • @llamaindex/cloud@0.2.1

0.5.10

Patch Changes

  • 086b940: feat: add DeepSeek LLM
  • 5d5716b: feat: add a reader for JSON data
  • 91d02a4: feat: support transform component callable
  • fb6db45: feat: add pageSeparator params to LlamaParseReader
  • Updated dependencies [91d02a4]
    • @llamaindex/core@0.1.5

0.5.9

Patch Changes

  • 15962b3: feat: node parser refactor

    Aligns the text splitter logic with Python; it now has almost the same behavior as the Python version, with Zod input validation, better error messages, and an event system.

    This is not considered a breaking change since the output does not differ significantly from the previous version,
    but some edge cases will change, such as the page separator and the constructor parameters.

  • Updated dependencies [15962b3]

    • @llamaindex/core@0.1.4

0.5.8

Patch Changes

  • 3d5ba08: fix: update user agent in AssemblyAI
  • d917cdc: Add azure interpreter tool to tool factory

0.5.7

Patch Changes

  • ec59acd: fix: bundling issue with pnpm

0.5.6

Patch Changes

  • 2562244: feat: add gpt4o-mini
  • 325aa51: Implement Jina embedding through Jina api
  • ab700ea: Add missing authentication to LlamaCloudIndex.fromDocuments
  • 92f0782: feat: use query bundle
  • 6cf6ae6: feat: abstract query type
  • b7cfe5b: fix: passing max_token option to replicate's api call
  • Updated dependencies [6cf6ae6]
    • @llamaindex/core@0.1.3

0.5.5

Patch Changes

  • b974eea: Add support for Metadata filters
  • Updated dependencies [b974eea]
    • @llamaindex/core@0.1.2

0.5.4

Patch Changes

  • 1a65ead: feat: add vendorMultimodal params to LlamaParseReader

0.5.3

Patch Changes

  • 9bbbc67: feat: add a reader for Discord messages
  • b3681bf: fix: DataCloneError when using FunctionTool
  • Updated dependencies [b3681bf]
    • @llamaindex/core@0.1.1

0.5.2

Patch Changes

  • Updated dependencies [3ed6acc]
    • @llamaindex/cloud@0.2.0

0.5.1

Patch Changes

  • 2774681: Add mixedbread's embeddings and reranking API
  • a0f424e: corrected the regex in the react.ts file in extractToolUse & extractJsonStr functions, as mentioned in run-llama#1019

0.5.0

Minor Changes

  • 16ef5dd: refactor: simplify callback manager

    Change event.detail.payload to event.detail

Patch Changes

  • 16ef5dd: refactor: move callback manager & llm to core module

    For people who import llamaindex/llms/base or llamaindex/llms/utils,
    use @llamaindex/core/llms and @llamaindex/core/utils instead.

  • 36ddec4: fix: typo in custom page separator parameter for LlamaParse

  • Updated dependencies [16ef5dd]

  • Updated dependencies [16ef5dd]

  • Updated dependencies [36ddec4]

    • @llamaindex/core@0.1.0
    • @llamaindex/cloud@0.1.4

0.4.14

Patch Changes

  • Updated dependencies [1c444d5]
    • @llamaindex/cloud@0.1.3

0.4.13

Patch Changes

  • e8f8bea: feat: add boundingBox and targetPages to LlamaParseReader
  • 304484b: feat: add ignoreErrors flag to LlamaParseReader (see the sketch after this entry)
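
    A hedged sketch of these reader options; the apiKey env var, the targetPages format, and the sample file path are assumptions:

    ```ts
    import { LlamaParseReader } from "llamaindex";

    const reader = new LlamaParseReader({
      apiKey: process.env.LLAMA_CLOUD_API_KEY, // assumed env var
      resultType: "markdown",
      targetPages: "0,1,2", // assumed format: zero-based, comma-separated
      ignoreErrors: true,   // skip failing pages/files instead of throwing
    });

    const documents = await reader.loadData("./report.pdf");
    ```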

0.4.12

Patch Changes

  • f326ab8: chore: bump version
  • Updated dependencies [f326ab8]
    • @llamaindex/cloud@0.1.2
    • @llamaindex/core@0.0.3
    • @llamaindex/env@0.1.8

0.4.11

Patch Changes

  • 8bf5b4a: fix: llama parse input spreadsheet

0.4.10

Patch Changes

  • 7dce3d2: fix: disable External Filters for Gemini

0.4.9

Patch Changes

  • 3a96a48: fix: anthropic image input

0.4.8

Patch Changes

  • 83ebdfb: fix: next.js build error

0.4.7

Patch Changes

  • 41fe871: Add support for azure dynamic session tool
  • 321c39d: fix: generate api as class …

Release refs/tags/llamaindex@0.6.5

22 Sep 23:02
726eb41

llamaindex

0.6.5

Patch Changes

  • e9714db: feat: update PGVectorStore

    • move constructor parameter config.user | config.database | config.password | config.connectionString into config.clientConfig
    • if you pass pg.Client or pg.Pool instance to PGVectorStore, move it to config.client, setting config.shouldConnect to false if it's already connected
    • default value of PGVectorStore.collection is now "data" instead of "" (empty string)

0.6.4

Patch Changes

  • b48bcc3: feat: add load-transformers event type when loading @xenova/transformers module

    This benefits users who want to customize the transformers environment.

  • Updated dependencies [b48bcc3]

    • @llamaindex/core@0.2.4
    • @llamaindex/env@0.1.12
    • @llamaindex/openai@0.1.6
    • @llamaindex/groq@0.0.5

0.6.3

Patch Changes

  • 2cd1383: refactor: align response-synthesizers & chat-engine module

    • built-in event system
    • correct class extends relationships
    • align APIs and naming with llama-index Python
    • move stream from the first parameter to the second parameter for better type checking
    • remove JSONQueryEngine from @llamaindex/experimental, as the code quality is not satisfactory; we will bring it back later
  • 5c4badb: Extend JinaAPIEmbedding parameters

  • Updated dependencies [fb36eff]

  • Updated dependencies [d24d3d1]

  • Updated dependencies [2cd1383]

    • @llamaindex/cloud@0.2.7
    • @llamaindex/core@0.2.3
    • @llamaindex/openai@0.1.5
    • @llamaindex/groq@0.0.4

0.6.2

Patch Changes

  • 749b43a: fix: clip embedding transform function
  • Updated dependencies [b42adeb]
  • Updated dependencies [749b43a]
    • @llamaindex/cloud@0.2.6
    • @llamaindex/core@0.2.2
    • @llamaindex/openai@0.1.4
    • @llamaindex/groq@0.0.3

0.6.1

Patch Changes

  • fbd5e01: refactor: move groq as llm package

  • 6b70c54: feat: update JinaAIEmbedding, support embedding v3

  • 1a6137b: feat: experimental support for browser

    If you see bundler issues in the Next.js edge runtime, please bump to the latest next@14 version.

  • 85c2e19: feat: @llamaindex/cloud package update

    • Bump to latest openapi schema
    • Move the LlamaParse class out of llamaindex; this allows you to use LlamaParse in more non-Node.js environments
  • Updated dependencies [ac07e3c]

  • Updated dependencies [fbd5e01]

  • Updated dependencies [70ccb4a]

  • Updated dependencies [1a6137b]

  • Updated dependencies [85c2e19]

  • Updated dependencies [ac07e3c]

    • @llamaindex/core@0.2.1
    • @llamaindex/env@0.1.11
    • @llamaindex/groq@0.0.2
    • @llamaindex/cloud@0.2.5
    • @llamaindex/openai@0.1.3

0.6.0

Minor Changes

Patch Changes

  • Updated dependencies [11feef8]
    • @llamaindex/core@0.2.0
    • @llamaindex/openai@0.1.2

0.5.27

Patch Changes

  • 7edeb1c: feat: decouple openai from llamaindex module

    This should be a non-breaking change; going forward, you can install only @llamaindex/openai to reduce the bundle size.

  • Updated dependencies [7edeb1c]

    • @llamaindex/openai@0.1.1

0.5.26

Patch Changes

  • ffe0cd1: feat: add openai o1 support
  • ffe0cd1: feat: add PostgreSQL storage

0.5.25

Patch Changes

  • 4810364: fix: handle RouterQueryEngine with string query

  • d3bc663: refactor: export vector store only in nodejs environment on top level

    If you see missing-module errors, change vector-store-related imports to llamaindex/vector-store.

  • Updated dependencies [4810364]

    • @llamaindex/cloud@0.2.4

0.5.24

Patch Changes

  • Updated dependencies [0bf8d80]
    • @llamaindex/cloud@0.2.3

0.5.23

Patch Changes

  • Updated dependencies [711c814]
    • @llamaindex/core@0.1.12

0.5.22

Patch Changes

  • 4648da6: fix: wrong tiktoken version caused the Next.js CL template run to fail
  • Updated dependencies [4648da6]
    • @llamaindex/env@0.1.10
    • @llamaindex/core@0.1.11

0.5.21

Patch Changes

  • ae1149f: feat: add JSON streaming to JSONReader

  • 2411c9f: Auto-create index for MongoDB vector store (if not exists)

  • e8f229c: Remove logging from MongoDB Atlas Vector Store

  • 11b3856: implement filters for MongoDBAtlasVectorSearch

  • 83d7f41: Fix database insertion for PGVectorStore

    It will now:

    • throw an error if there is an insertion error.
    • Upsert documents with the same id.
    • add all documents to the database as a single INSERT call (inside a transaction).
  • 0148354: refactor: prompt system

    Add PromptTemplate module with strong type check.

  • 1711f6d: Export imageToDataUrl for using images in chat

  • Updated dependencies [0148354]

    • @llamaindex/core@0.1.10

0.5.20

Patch Changes

  • d9d6c56: Add support for MetadataFilters for PostgreSQL
  • 22ff486: Add tiktoken WASM to withLlamaIndex
  • eed0b04: fix: use LLM metadata mode for generating context of ContextChatEngine

0.5.19

Patch Changes

  • fcbf183: implement llamacloud file service

0.5.18

Patch Changes

  • 8b66cf4: feat: support organization id in llamacloud index
  • Updated dependencies [e27e7dd]
    • @llamaindex/core@0.1.9

0.5.17

Patch Changes

  • c654398: Implement Weaviate Vector Store in TS

0.5.16

Patch Changes

  • 58abc57: fix: align version
  • Updated dependencies [58abc57]
    • @llamaindex/cloud@0.2.2
    • @llamaindex/core@0.1.8
    • @llamaindex/env@0.1.9

0.5.15

Patch Changes

  • 01c184c: Add is_empty operator for filtering vector store
  • 07a275f: chore: bump openai

0.5.14

Patch Changes

  • c825a2f: Add gpt-4o-mini to Azure. Add 2024-06-01 API version for Azure

0.5.13

Patch Changes

  • Updated dependencies [04b2f8e]
    • @llamaindex/core@0.1.7

0.5.12

Patch Changes

  • 345300f: feat: add splitByPage mode to LlamaParseReader
  • da5cfc4: Add metadata filter options to retriever constructors
  • da5cfc4: Fix system prompt not used in ContextChatEngine
  • Updated dependencies [0452af9]
    • @llamaindex/core@0.1.6

0.5.11

Patch Changes

  • Updated dependencies [1f680d7]
    • @llamaindex/cloud@0.2.1

0.5.10

Patch Changes

  • 086b940: feat: add DeepSeek LLM
  • 5d5716b: feat: add a reader for JSON data
  • 91d02a4: feat: support transform component callable
  • fb6db45: feat: add pageSeparator params to LlamaParseReader
  • Updated dependencies [91d02a4]
    • @llamaindex/core@0.1.5

0.5.9

Patch Changes

  • 15962b3: feat: node parser refactor

    Aligns the text splitter logic with Python; it now has almost the same behavior as the Python version, with Zod input validation, better error messages, and an event system.

    This is not considered a breaking change since the output does not differ significantly from the previous version,
    but some edge cases will change, such as the page separator and the constructor parameters.

  • Updated dependencies [15962b3]

    • @llamaindex/core@0.1.4

0.5.8

Patch Changes

  • 3d5ba08: fix: update user agent in AssemblyAI
  • d917cdc: Add azure interpreter tool to tool factory

0.5.7

Patch Changes

  • ec59acd: fix: bundling issue with pnpm

0.5.6

Patch Changes

  • 2562244: feat: add gpt4o-mini
  • 325aa51: Implement Jina embedding through Jina api
  • ab700ea: Add missing authentication to LlamaCloudIndex.fromDocuments
  • 92f0782: feat: use query bundle
  • 6cf6ae6: feat: abstract query type
  • b7cfe5b: fix: passing max_token option to replicate's api call
  • Updated dependencies [6cf6ae6]
    • @llamaindex/core@0.1.3

0.5.5

Patch Changes

  • b974eea: Add support for Metadata filters
  • Updated dependencies [b974eea]
    • @llamaindex/core@0.1.2

0.5.4

Patch Changes

  • 1a65ead: feat: add vendorMultimodal params to LlamaParseReader

0.5.3

Patch Changes

  • 9bbbc67: feat: add a reader for Discord messages
  • b3681bf: fix: DataCloneError when using FunctionTool
  • Updated dependencies [b3681bf]
    • @llamaindex/core@0.1.1

0.5.2

Patch Changes

  • Updated dependencies [3ed6acc]
    • @llamaindex/cloud@0.2.0

0.5.1

Patch Changes

  • 2774681: Add mixedbread's embeddings and reranking API
  • a0f424e: corrected the regex in the react.ts file in extractToolUse & extractJsonStr functions, as mentioned in run-llama#1019

0.5.0

Minor Changes

  • 16ef5dd: refactor: simplify callback manager

    Change event.detail.payload to event.detail

Patch Changes

  • 16ef5dd: refactor: move callback manager & llm to core module

    For people who import llamaindex/llms/base or llamaindex/llms/utils,
    use @llamaindex/core/llms and @llamaindex/core/utils instead.

  • 36ddec4: fix: typo in custom page separator parameter for LlamaParse

  • Updated dependencies [16ef5dd]

  • Updated dependencies [16ef5dd]

  • Updated dependencies [36ddec4]

    • @llamaindex/core@0.1.0
    • @llamaindex/cloud@0.1.4

0.4.14

Patch Changes

  • Updated dependencies [1c444d5]
    • @llamaindex/cloud@0.1.3

0.4.13

Patch Changes

  • e8f8bea: feat: add boundingBox and targetPages to LlamaParseReader
  • 304484b: feat: add ignoreErrors flag to LlamaParseReader

0.4.12

Patch Changes

  • f326ab8: chore: bump version
  • Updated dependencies [f326ab8]
    • @llamaindex/cloud@0.1.2
    • @llamaindex/core@0.0.3
    • @llamaindex/env@0.1.8

0.4.11

Patch Changes

  • 8bf5b4a: fix: llama parse input spreadsheet

0.4.10

Patch Changes

  • 7dce3d2: fix: disable External Filters for Gemini

0.4.9

Patch Changes

  • 3a96a48: fix: anthropic image input

0.4.8

Patch Changes

  • 83ebdfb: fix: next.js build error

0.4.7

Patch Changes

  • 41fe871: Add support for azure dynamic session tool
  • 321c39d: fix: generate api as class
  • f7f1af0: fix: throw error when no pipeline found
  • Updated dependencies [41fe871]
  • Updated dependencies [f10b41d]
  • Updated dependencies [321c39d]
    • @llamaindex/env@0.1.7
    • @llamaindex/core@0.0.2
    • @llamaindex/cloud@0.1.1

0.4.6

Patch Changes

  • 1feb23b: feat: Gemini tool calling for agent support
  • 08c55ec: Add metadata to PDFs and use Uint8Array for readers content

0.4.5

Patch Changes

  • 6c3e5d0: fix: switch to correct reference for a static function …

Release refs/tags/llamaindex@0.5.27

13 Sep 15:52
9c5ff16

llamaindex

0.5.27

Patch Changes

  • 7edeb1c: feat: decouple openai from llamaindex module

    This should be a non-breaking change; going forward, you can install only @llamaindex/openai to reduce the bundle size.

  • Updated dependencies [7edeb1c]

    • @llamaindex/openai@0.1.1

0.5.26

Patch Changes

  • ffe0cd1: feat: add openai o1 support
  • ffe0cd1: feat: add PostgreSQL storage

0.5.25

Patch Changes

  • 4810364: fix: handle RouterQueryEngine with string query

  • d3bc663: refactor: export vector store only in nodejs environment on top level

    If you see missing-module errors, change vector-store-related imports to llamaindex/vector-store.

  • Updated dependencies [4810364]

    • @llamaindex/cloud@0.2.4

0.5.24

Patch Changes

  • Updated dependencies [0bf8d80]
    • @llamaindex/cloud@0.2.3

0.5.23

Patch Changes

  • Updated dependencies [711c814]
    • @llamaindex/core@0.1.12

0.5.22

Patch Changes

  • 4648da6: fix: wrong tiktoken version caused the Next.js CL template run to fail
  • Updated dependencies [4648da6]
    • @llamaindex/env@0.1.10
    • @llamaindex/core@0.1.11

0.5.21

Patch Changes

  • ae1149f: feat: add JSON streaming to JSONReader

  • 2411c9f: Auto-create index for MongoDB vector store (if not exists)

  • e8f229c: Remove logging from MongoDB Atlas Vector Store

  • 11b3856: implement filters for MongoDBAtlasVectorSearch

  • 83d7f41: Fix database insertion for PGVectorStore

    It will now:

    • throw an error if there is an insertion error.
    • Upsert documents with the same id.
    • add all documents to the database as a single INSERT call (inside a transaction).
  • 0148354: refactor: prompt system

    Add PromptTemplate module with strong type check.

  • 1711f6d: Export imageToDataUrl for using images in chat

  • Updated dependencies [0148354]

    • @llamaindex/core@0.1.10

0.5.20

Patch Changes

  • d9d6c56: Add support for MetadataFilters for PostgreSQL
  • 22ff486: Add tiktoken WASM to withLlamaIndex
  • eed0b04: fix: use LLM metadata mode for generating context of ContextChatEngine

0.5.19

Patch Changes

  • fcbf183: implement llamacloud file service

0.5.18

Patch Changes

  • 8b66cf4: feat: support organization id in llamacloud index
  • Updated dependencies [e27e7dd]
    • @llamaindex/core@0.1.9

0.5.17

Patch Changes

  • c654398: Implement Weaviate Vector Store in TS

0.5.16

Patch Changes

  • 58abc57: fix: align version
  • Updated dependencies [58abc57]
    • @llamaindex/cloud@0.2.2
    • @llamaindex/core@0.1.8
    • @llamaindex/env@0.1.9

0.5.15

Patch Changes

  • 01c184c: Add is_empty operator for filtering vector store
  • 07a275f: chore: bump openai

0.5.14

Patch Changes

  • c825a2f: Add gpt-4o-mini to Azure. Add 2024-06-01 API version for Azure

0.5.13

Patch Changes

  • Updated dependencies [04b2f8e]
    • @llamaindex/core@0.1.7

0.5.12

Patch Changes

  • 345300f: feat: add splitByPage mode to LlamaParseReader
  • da5cfc4: Add metadata filter options to retriever constructors
  • da5cfc4: Fix system prompt not used in ContextChatEngine
  • Updated dependencies [0452af9]
    • @llamaindex/core@0.1.6

0.5.11

Patch Changes

  • Updated dependencies [1f680d7]
    • @llamaindex/cloud@0.2.1

0.5.10

Patch Changes

  • 086b940: feat: add DeepSeek LLM
  • 5d5716b: feat: add a reader for JSON data
  • 91d02a4: feat: support transform component callable
  • fb6db45: feat: add pageSeparator params to LlamaParseReader
  • Updated dependencies [91d02a4]
    • @llamaindex/core@0.1.5

0.5.9

Patch Changes

  • 15962b3: feat: node parser refactor

    Aligns the text splitter logic with Python; it now has almost the same behavior as the Python version, with Zod input validation, better error messages, and an event system.

    This is not considered a breaking change since the output does not differ significantly from the previous version,
    but some edge cases will change, such as the page separator and the constructor parameters.

  • Updated dependencies [15962b3]

    • @llamaindex/core@0.1.4

0.5.8

Patch Changes

  • 3d5ba08: fix: update user agent in AssemblyAI
  • d917cdc: Add azure interpreter tool to tool factory

0.5.7

Patch Changes

  • ec59acd: fix: bundling issue with pnpm

0.5.6

Patch Changes

  • 2562244: feat: add gpt4o-mini
  • 325aa51: Implement Jina embedding through Jina api
  • ab700ea: Add missing authentication to LlamaCloudIndex.fromDocuments
  • 92f0782: feat: use query bundle
  • 6cf6ae6: feat: abstract query type
  • b7cfe5b: fix: passing max_token option to replicate's api call
  • Updated dependencies [6cf6ae6]
    • @llamaindex/core@0.1.3

0.5.5

Patch Changes

  • b974eea: Add support for Metadata filters
  • Updated dependencies [b974eea]
    • @llamaindex/core@0.1.2

0.5.4

Patch Changes

  • 1a65ead: feat: add vendorMultimodal params to LlamaParseReader

0.5.3

Patch Changes

  • 9bbbc67: feat: add a reader for Discord messages
  • b3681bf: fix: DataCloneError when using FunctionTool
  • Updated dependencies [b3681bf]
    • @llamaindex/core@0.1.1

0.5.2

Patch Changes

  • Updated dependencies [3ed6acc]
    • @llamaindex/cloud@0.2.0

0.5.1

Patch Changes

  • 2774681: Add mixedbread's embeddings and reranking API
  • a0f424e: corrected the regex in the react.ts file in extractToolUse & extractJsonStr functions, as mentioned in run-llama#1019

0.5.0

Minor Changes

  • 16ef5dd: refactor: simplify callback manager

    Change event.detail.payload to event.detail

Patch Changes

  • 16ef5dd: refactor: move callback manager & llm to core module

    For people who import llamaindex/llms/base or llamaindex/llms/utils,
    use @llamaindex/core/llms and @llamaindex/core/utils instead.

  • 36ddec4: fix: typo in custom page separator parameter for LlamaParse

  • Updated dependencies [16ef5dd]

  • Updated dependencies [16ef5dd]

  • Updated dependencies [36ddec4]

    • @llamaindex/core@0.1.0
    • @llamaindex/cloud@0.1.4

0.4.14

Patch Changes

  • Updated dependencies [1c444d5]
    • @llamaindex/cloud@0.1.3

0.4.13

Patch Changes

  • e8f8bea: feat: add boundingBox and targetPages to LlamaParseReader
  • 304484b: feat: add ignoreErrors flag to LlamaParseReader

0.4.12

Patch Changes

  • f326ab8: chore: bump version
  • Updated dependencies [f326ab8]
    • @llamaindex/cloud@0.1.2
    • @llamaindex/core@0.0.3
    • @llamaindex/env@0.1.8

0.4.11

Patch Changes

  • 8bf5b4a: fix: llama parse input spreadsheet

0.4.10

Patch Changes

  • 7dce3d2: fix: disable External Filters for Gemini

0.4.9

Patch Changes

  • 3a96a48: fix: anthropic image input

0.4.8

Patch Changes

  • 83ebdfb: fix: next.js build error

0.4.7

Patch Changes

  • 41fe871: Add support for azure dynamic session tool
  • 321c39d: fix: generate api as class
  • f7f1af0: fix: throw error when no pipeline found
  • Updated dependencies [41fe871]
  • Updated dependencies [f10b41d]
  • Updated dependencies [321c39d]
    • @llamaindex/env@0.1.7
    • @llamaindex/core@0.0.2
    • @llamaindex/cloud@0.1.1

0.4.6

Patch Changes

  • 1feb23b: feat: Gemini tool calling for agent support
  • 08c55ec: Add metadata to PDFs and use Uint8Array for readers content

0.4.5

Patch Changes

  • 6c3e5d0: fix: switch to correct reference for a static function

0.4.4

Patch Changes

  • 42eb73a: Fix IngestionPipeline not working without vectorStores

0.4.3

Patch Changes

  • 2ef62a9: feat: added support for embeddings via HuggingFace Inference API
  • Updated dependencies [d4e853c]
  • Updated dependencies [a94b8ec]
    • @llamaindex/env@0.1.6

0.4.2

Patch Changes

  • a87a4d1: feat: added tool calling support for Bedrock's Claude and general LLM support for agents
  • 0730140: include node relationships when converting jsonToDoc
  • Updated dependencies [f3b34b4]
    • @llamaindex/env@0.1.5

0.4.1

Patch Changes

  • 3c47910: fix: groq llm
  • ed467a9: Add model ids for Anthropic Claude 3.5 Sonnet model on Anthropic and Bedrock
  • cba5406: fix: every Llama Parse job being called "blob"
  • Updated dependencies [56fabbb]
    • @llamaindex/env@0.1.4

0.4.0

Minor Changes

  • 436bc41: Unify chat engine response and agent response

Patch Changes

  • a44e54f: Truncate text to embed for OpenAI if it exceeds maxTokens
  • a51ed8d: feat: add support for managed identity for Azure OpenAI
  • d3b635b: fix: agents to use chat history

0.3.17

Patch Changes

  • 6bc5bdd: feat: add cache disabling, fast mode, do not unroll columns mode and custom page separator to LlamaParseReader
  • bf25ff6: fix: polyfill for cloudflare worker
  • e6d6576: chore: use unpdf

0.3.16

Patch Changes

  • 11ae926: feat: add numCandidates setting to MongoDBAtlasVectorStore for tuning queries
  • 631f000: feat: DeepInfra LLM implementation
  • 1378ec4: feat: set default model to gpt-4o
  • 6b1ded4: add gpt4o-mode, invalidate cache and skip diagonal text to LlamaParseReader
  • 4d4bd85: Show error message if agent tool is called with partial JSON
  • 24a9d1e: add json mode and image retrieval to LlamaParseReader
  • 45952de: add concurrency management for SimpleDirectoryReader
  • 54230f0: feat: Gemini GA release models
  • a29d835: setDocumentHash should be async
  • 73819bf: Unify metadata and ID handling of documents, allow files to be read by Buffer

0.3.15

Patch Changes

  • 6e156ed: Use images in context chat engine
  • 265976d: fix bug with node decorator
  • 8e26f75: Add retrieval for images using multi-modal messages

0.3.14

Patch Changes

  • 6ff7576: Added GPT-4o for Azure
  • 94543de: Added the latest preview Gemini models and took multi-modal images into account

0.3.13

Patch Changes

  • 1b1081b: Add vectorStores to storage context to define vector store per modality
  • 37525df: Added support for accessing Gemini via Vertex AI
  • 660a2b3: Fix text before heading in markdown reader
  • a1f2475: Add system prompt to ContextChatEngine

0.3.12

Patch Changes

0.3.11

Patch Changes …


Release refs/tags/llamaindex@0.5.26

12 Sep 20:05
8b95abd

llamaindex

0.5.26

Patch Changes

  • ffe0cd1: feat: add openai o1 support
  • ffe0cd1: feat: add PostgreSQL storage

0.5.25

Patch Changes

  • 4810364: fix: handle RouterQueryEngine with string query

  • d3bc663: refactor: export vector store only in nodejs environment on top level

    If you see missing-module errors, change vector-store-related imports to llamaindex/vector-store.

  • Updated dependencies [4810364]

    • @llamaindex/cloud@0.2.4

0.5.24

Patch Changes

  • Updated dependencies [0bf8d80]
    • @llamaindex/cloud@0.2.3

0.5.23

Patch Changes

  • Updated dependencies [711c814]
    • @llamaindex/core@0.1.12

0.5.22

Patch Changes

  • 4648da6: fix: wrong tiktoken version caused the Next.js CL template run to fail
  • Updated dependencies [4648da6]
    • @llamaindex/env@0.1.10
    • @llamaindex/core@0.1.11

0.5.21

Patch Changes

  • ae1149f: feat: add JSON streaming to JSONReader

  • 2411c9f: Auto-create index for MongoDB vector store (if not exists)

  • e8f229c: Remove logging from MongoDB Atlas Vector Store

  • 11b3856: implement filters for MongoDBAtlasVectorSearch

  • 83d7f41: Fix database insertion for PGVectorStore

    It will now:

    • throw an error if there is an insertion error.
    • Upsert documents with the same id.
    • add all documents to the database as a single INSERT call (inside a transaction).
  • 0148354: refactor: prompt system

    Add PromptTemplate module with strong type check.

  • 1711f6d: Export imageToDataUrl for using images in chat

  • Updated dependencies [0148354]

    • @llamaindex/core@0.1.10

0.5.20

Patch Changes

  • d9d6c56: Add support for MetadataFilters for PostgreSQL
  • 22ff486: Add tiktoken WASM to withLlamaIndex
  • eed0b04: fix: use LLM metadata mode for generating context of ContextChatEngine

0.5.19

Patch Changes

  • fcbf183: implement llamacloud file service

0.5.18

Patch Changes

  • 8b66cf4: feat: support organization id in llamacloud index
  • Updated dependencies [e27e7dd]
    • @llamaindex/core@0.1.9

0.5.17

Patch Changes

  • c654398: Implement Weaviate Vector Store in TS

0.5.16

Patch Changes

  • 58abc57: fix: align version
  • Updated dependencies [58abc57]
    • @llamaindex/cloud@0.2.2
    • @llamaindex/core@0.1.8
    • @llamaindex/env@0.1.9

0.5.15

Patch Changes

  • 01c184c: Add is_empty operator for filtering vector store
  • 07a275f: chore: bump openai

0.5.14

Patch Changes

  • c825a2f: Add gpt-4o-mini to Azure. Add 2024-06-01 API version for Azure

0.5.13

Patch Changes

  • Updated dependencies [04b2f8e]
    • @llamaindex/core@0.1.7

0.5.12

Patch Changes

  • 345300f: feat: add splitByPage mode to LlamaParseReader
  • da5cfc4: Add metadata filter options to retriever constructors
  • da5cfc4: Fix system prompt not used in ContextChatEngine
  • Updated dependencies [0452af9]
    • @llamaindex/core@0.1.6

0.5.11

Patch Changes

  • Updated dependencies [1f680d7]
    • @llamaindex/cloud@0.2.1

0.5.10

Patch Changes

  • 086b940: feat: add DeepSeek LLM
  • 5d5716b: feat: add a reader for JSON data
  • 91d02a4: feat: support transform component callable
  • fb6db45: feat: add pageSeparator params to LlamaParseReader
  • Updated dependencies [91d02a4]
    • @llamaindex/core@0.1.5

0.5.9

Patch Changes

  • 15962b3: feat: node parser refactor

    Aligns the text splitter logic with Python; it now has almost the same behavior as the Python version, with Zod input validation, better error messages, and an event system.

    This is not considered a breaking change since the output does not differ significantly from the previous version,
    but some edge cases will change, such as the page separator and the constructor parameters.

  • Updated dependencies [15962b3]

    • @llamaindex/core@0.1.4

0.5.8

Patch Changes

  • 3d5ba08: fix: update user agent in AssemblyAI
  • d917cdc: Add azure interpreter tool to tool factory

0.5.7

Patch Changes

  • ec59acd: fix: bundling issue with pnpm

0.5.6

Patch Changes

  • 2562244: feat: add gpt4o-mini
  • 325aa51: Implement Jina embedding through Jina api
  • ab700ea: Add missing authentication to LlamaCloudIndex.fromDocuments
  • 92f0782: feat: use query bundle
  • 6cf6ae6: feat: abstract query type
  • b7cfe5b: fix: passing max_token option to replicate's api call
  • Updated dependencies [6cf6ae6]
    • @llamaindex/core@0.1.3

0.5.5

Patch Changes

  • b974eea: Add support for Metadata filters
  • Updated dependencies [b974eea]
    • @llamaindex/core@0.1.2

0.5.4

Patch Changes

  • 1a65ead: feat: add vendorMultimodal params to LlamaParseReader

0.5.3

Patch Changes

  • 9bbbc67: feat: add a reader for Discord messages
  • b3681bf: fix: DataCloneError when using FunctionTool
  • Updated dependencies [b3681bf]
    • @llamaindex/core@0.1.1

0.5.2

Patch Changes

  • Updated dependencies [3ed6acc]
    • @llamaindex/cloud@0.2.0

0.5.1

Patch Changes

  • 2774681: Add mixedbread's embeddings and reranking API
  • a0f424e: corrected the regex in the react.ts file in extractToolUse & extractJsonStr functions, as mentioned in run-llama#1019

0.5.0

Minor Changes

  • 16ef5dd: refactor: simplify callback manager

    Change event.detail.payload to event.detail

Patch Changes

  • 16ef5dd: refactor: move callback manager & llm to core module

    For people who import llamaindex/llms/base or llamaindex/llms/utils,
    use @llamaindex/core/llms and @llamaindex/core/utils instead.

  • 36ddec4: fix: typo in custom page separator parameter for LlamaParse

  • Updated dependencies [16ef5dd]

  • Updated dependencies [16ef5dd]

  • Updated dependencies [36ddec4]

    • @llamaindex/core@0.1.0
    • @llamaindex/cloud@0.1.4

0.4.14

Patch Changes

  • Updated dependencies [1c444d5]
    • @llamaindex/cloud@0.1.3

0.4.13

Patch Changes

  • e8f8bea: feat: add boundingBox and targetPages to LlamaParseReader
  • 304484b: feat: add ignoreErrors flag to LlamaParseReader

0.4.12

Patch Changes

  • f326ab8: chore: bump version
  • Updated dependencies [f326ab8]
    • @llamaindex/cloud@0.1.2
    • @llamaindex/core@0.0.3
    • @llamaindex/env@0.1.8

0.4.11

Patch Changes

  • 8bf5b4a: fix: llama parse input spreadsheet

0.4.10

Patch Changes

  • 7dce3d2: fix: disable External Filters for Gemini

0.4.9

Patch Changes

  • 3a96a48: fix: anthropic image input

0.4.8

Patch Changes

  • 83ebdfb: fix: next.js build error

0.4.7

Patch Changes

  • 41fe871: Add support for azure dynamic session tool
  • 321c39d: fix: generate api as class
  • f7f1af0: fix: throw error when no pipeline found
  • Updated dependencies [41fe871]
  • Updated dependencies [f10b41d]
  • Updated dependencies [321c39d]
    • @llamaindex/env@0.1.7
    • @llamaindex/core@0.0.2
    • @llamaindex/cloud@0.1.1

0.4.6

Patch Changes

  • 1feb23b: feat: Gemini tool calling for agent support
  • 08c55ec: Add metadata to PDFs and use Uint8Array for readers content

0.4.5

Patch Changes

  • 6c3e5d0: fix: switch to correct reference for a static function

0.4.4

Patch Changes

  • 42eb73a: Fix IngestionPipeline not working without vectorStores

0.4.3

Patch Changes

  • 2ef62a9: feat: added support for embeddings via HuggingFace Inference API
  • Updated dependencies [d4e853c]
  • Updated dependencies [a94b8ec]
    • @llamaindex/env@0.1.6

0.4.2

Patch Changes

  • a87a4d1: feat: added tool calling support for Bedrock's Claude and general LLM support for agents
  • 0730140: include node relationships when converting jsonToDoc
  • Updated dependencies [f3b34b4]
    • @llamaindex/env@0.1.5

0.4.1

Patch Changes

  • 3c47910: fix: groq llm
  • ed467a9: Add model ids for Anthropic Claude 3.5 Sonnet model on Anthropic and Bedrock
  • cba5406: fix: every Llama Parse job being called "blob"
  • Updated dependencies [56fabbb]
    • @llamaindex/env@0.1.4

0.4.0

Minor Changes

  • 436bc41: Unify chat engine response and agent response

Patch Changes

  • a44e54f: Truncate text to embed for OpenAI if it exceeds maxTokens
  • a51ed8d: feat: add support for managed identity for Azure OpenAI
  • d3b635b: fix: agents to use chat history

0.3.17

Patch Changes

  • 6bc5bdd: feat: add cache disabling, fast mode, do not unroll columns mode and custom page separator to LlamaParseReader
  • bf25ff6: fix: polyfill for cloudflare worker
  • e6d6576: chore: use unpdf

0.3.16

Patch Changes

  • 11ae926: feat: add numCandidates setting to MongoDBAtlasVectorStore for tuning queries
  • 631f000: feat: DeepInfra LLM implementation
  • 1378ec4: feat: set default model to gpt-4o
  • 6b1ded4: add gpt4o-mode, invalidate cache and skip diagonal text to LlamaParseReader
  • 4d4bd85: Show error message if agent tool is called with partial JSON
  • 24a9d1e: add json mode and image retrieval to LlamaParseReader
  • 45952de: add concurrency management for SimpleDirectoryReader
  • 54230f0: feat: Gemini GA release models
  • a29d835: setDocumentHash should be async
  • 73819bf: Unify metadata and ID handling of documents, allow files to be read by Buffer

0.3.15

Patch Changes

  • 6e156ed: Use images in context chat engine
  • 265976d: fix bug with node decorator
  • 8e26f75: Add retrieval for images using multi-modal messages

0.3.14

Patch Changes

  • 6ff7576: Added GPT-4o for Azure
  • 94543de: Added the latest preview Gemini models and took multi-modal images into account

0.3.13

Patch Changes

  • 1b1081b: Add vectorStores to storage context to define vector store per modality
  • 37525df: Added support for accessing Gemini via Vertex AI
  • 660a2b3: Fix text before heading in markdown reader
  • a1f2475: Add system prompt to ContextChatEngine

0.3.12

Patch Changes

0.3.11

Patch Changes

  • e072c45: fix: remove non-standard API pipeline

  • 9e133ac: refactor: remove defaultFS from parameters

    We no longer accept passing fs as a parameter since it's unnecessary for a given JS environment.

    This was a polyfill for the non-Node.js environment, but now we use another way to polyfill APIs. …


Release refs/tags/llamaindex@0.5.25

11 Sep 22:20
28b877e

llamaindex

0.5.25

Patch Changes

  • 4810364: fix: handle RouterQueryEngine with string query

  • d3bc663: refactor: export vector store only in nodejs environment on top level

    If you see missing-module errors, change vector-store-related imports to llamaindex/vector-store.

  • Updated dependencies [4810364]

    • @llamaindex/cloud@0.2.4

0.5.24

Patch Changes

  • Updated dependencies [0bf8d80]
    • @llamaindex/cloud@0.2.3

0.5.23

Patch Changes

  • Updated dependencies [711c814]
    • @llamaindex/core@0.1.12

0.5.22

Patch Changes

  • 4648da6: fix: wrong tiktoken version caused the Next.js CL template run to fail
  • Updated dependencies [4648da6]
    • @llamaindex/env@0.1.10
    • @llamaindex/core@0.1.11

0.5.21

Patch Changes

  • ae1149f: feat: add JSON streaming to JSONReader

  • 2411c9f: Auto-create index for MongoDB vector store (if not exists)

  • e8f229c: Remove logging from MongoDB Atlas Vector Store

  • 11b3856: implement filters for MongoDBAtlasVectorSearch

  • 83d7f41: Fix database insertion for PGVectorStore

    It will now:

    • throw an error if there is an insertion error.
    • Upsert documents with the same id.
    • add all documents to the database as a single INSERT call (inside a transaction).
  • 0148354: refactor: prompt system

    Add PromptTemplate module with strong type check.

  • 1711f6d: Export imageToDataUrl for using images in chat

  • Updated dependencies [0148354]

    • @llamaindex/core@0.1.10

0.5.20

Patch Changes

  • d9d6c56: Add support for MetadataFilters for PostgreSQL
  • 22ff486: Add tiktoken WASM to withLlamaIndex
  • eed0b04: fix: use LLM metadata mode for generating context of ContextChatEngine

0.5.19

Patch Changes

  • fcbf183: implement llamacloud file service

0.5.18

Patch Changes

  • 8b66cf4: feat: support organization id in llamacloud index
  • Updated dependencies [e27e7dd]
    • @llamaindex/core@0.1.9

0.5.17

Patch Changes

  • c654398: Implement Weaviate Vector Store in TS

0.5.16

Patch Changes

  • 58abc57: fix: align version
  • Updated dependencies [58abc57]
    • @llamaindex/cloud@0.2.2
    • @llamaindex/core@0.1.8
    • @llamaindex/env@0.1.9

0.5.15

Patch Changes

  • 01c184c: Add is_empty operator for filtering vector store
  • 07a275f: chore: bump openai

0.5.14

Patch Changes

  • c825a2f: Add gpt-4o-mini to Azure. Add 2024-06-01 API version for Azure

0.5.13

Patch Changes

  • Updated dependencies [04b2f8e]
    • @llamaindex/core@0.1.7

0.5.12

Patch Changes

  • 345300f: feat: add splitByPage mode to LlamaParseReader
  • da5cfc4: Add metadata filter options to retriever constructors
  • da5cfc4: Fix system prompt not used in ContextChatEngine
  • Updated dependencies [0452af9]
    • @llamaindex/core@0.1.6

0.5.11

Patch Changes

  • Updated dependencies [1f680d7]
    • @llamaindex/cloud@0.2.1

0.5.10

Patch Changes

  • 086b940: feat: add DeepSeek LLM
  • 5d5716b: feat: add a reader for JSON data
  • 91d02a4: feat: support transform component callable
  • fb6db45: feat: add pageSeparator params to LlamaParseReader
  • Updated dependencies [91d02a4]
    • @llamaindex/core@0.1.5

0.5.9

Patch Changes

  • 15962b3: feat: node parser refactor

    Aligns the text splitter logic with Python; it now has almost the same behavior as the Python version, with Zod input validation, better error messages, and an event system.

    This is not considered a breaking change since the output does not differ significantly from the previous version,
    but some edge cases will change, such as the page separator and the constructor parameters.

  • Updated dependencies [15962b3]

    • @llamaindex/core@0.1.4

0.5.8

Patch Changes

  • 3d5ba08: fix: update user agent in AssemblyAI
  • d917cdc: Add azure interpreter tool to tool factory

0.5.7

Patch Changes

  • ec59acd: fix: bundling issue with pnpm

0.5.6

Patch Changes

  • 2562244: feat: add gpt4o-mini
  • 325aa51: Implement Jina embedding through Jina api
  • ab700ea: Add missing authentication to LlamaCloudIndex.fromDocuments
  • 92f0782: feat: use query bundle
  • 6cf6ae6: feat: abstract query type
  • b7cfe5b: fix: passing max_token option to replicate's api call
  • Updated dependencies [6cf6ae6]
    • @llamaindex/core@0.1.3

0.5.5

Patch Changes

  • b974eea: Add support for Metadata filters
  • Updated dependencies [b974eea]
    • @llamaindex/core@0.1.2

0.5.4

Patch Changes

  • 1a65ead: feat: add vendorMultimodal params to LlamaParseReader

0.5.3

Patch Changes

  • 9bbbc67: feat: add a reader for Discord messages
  • b3681bf: fix: DataCloneError when using FunctionTool
  • Updated dependencies [b3681bf]
    • @llamaindex/core@0.1.1

0.5.2

Patch Changes

  • Updated dependencies [3ed6acc]
    • @llamaindex/cloud@0.2.0

0.5.1

Patch Changes

  • 2774681: Add mixedbread's embeddings and reranking API
  • a0f424e: corrected the regex in the react.ts file in extractToolUse & extractJsonStr functions, as mentioned in run-llama#1019

0.5.0

Minor Changes

  • 16ef5dd: refactor: simplify callback manager

    Change event.detail.payload to event.detail

Patch Changes

  • 16ef5dd: refactor: move callback manager & llm to core module

    For people who import llamaindex/llms/base or llamaindex/llms/utils,
    use @llamaindex/core/llms and @llamaindex/core/utils instead.

  • 36ddec4: fix: typo in custom page separator parameter for LlamaParse

  • Updated dependencies [16ef5dd]

  • Updated dependencies [16ef5dd]

  • Updated dependencies [36ddec4]

    • @llamaindex/core@0.1.0
    • @llamaindex/cloud@0.1.4

0.4.14

Patch Changes

  • Updated dependencies [1c444d5]
    • @llamaindex/cloud@0.1.3

0.4.13

Patch Changes

  • e8f8bea: feat: add boundingBox and targetPages to LlamaParseReader
  • 304484b: feat: add ignoreErrors flag to LlamaParseReader

0.4.12

Patch Changes

  • f326ab8: chore: bump version
  • Updated dependencies [f326ab8]
    • @llamaindex/cloud@0.1.2
    • @llamaindex/core@0.0.3
    • @llamaindex/env@0.1.8

0.4.11

Patch Changes

  • 8bf5b4a: fix: llama parse input spreadsheet

0.4.10

Patch Changes

  • 7dce3d2: fix: disable External Filters for Gemini

0.4.9

Patch Changes

  • 3a96a48: fix: anthropic image input

0.4.8

Patch Changes

  • 83ebdfb: fix: next.js build error

0.4.7

Patch Changes

  • 41fe871: Add support for azure dynamic session tool
  • 321c39d: fix: generate api as class
  • f7f1af0: fix: throw error when no pipeline found
  • Updated dependencies [41fe871]
  • Updated dependencies [f10b41d]
  • Updated dependencies [321c39d]
    • @llamaindex/env@0.1.7
    • @llamaindex/core@0.0.2
    • @llamaindex/cloud@0.1.1

0.4.6

Patch Changes

  • 1feb23b: feat: Gemini tool calling for agent support
  • 08c55ec: Add metadata to PDFs and use Uint8Array for readers content

0.4.5

Patch Changes

  • 6c3e5d0: fix: switch to correct reference for a static function

0.4.4

Patch Changes

  • 42eb73a: Fix IngestionPipeline not working without vectorStores

0.4.3

Patch Changes

  • 2ef62a9: feat: added support for embeddings via HuggingFace Inference API
  • Updated dependencies [d4e853c]
  • Updated dependencies [a94b8ec]
    • @llamaindex/env@0.1.6

0.4.2

Patch Changes

  • a87a4d1: feat: added tool calling support for Bedrock's Claude and general LLM support for agents
  • 0730140: include node relationships when converting jsonToDoc
  • Updated dependencies [f3b34b4]
    • @llamaindex/env@0.1.5

0.4.1

Patch Changes

  • 3c47910: fix: groq llm
  • ed467a9: Add model ids for Anthropic Claude 3.5 Sonnet model on Anthropic and Bedrock
  • cba5406: fix: every Llama Parse job being called "blob"
  • Updated dependencies [56fabbb]
    • @llamaindex/env@0.1.4

0.4.0

Minor Changes

  • 436bc41: Unify chat engine response and agent response

Patch Changes

  • a44e54f: Truncate text to embed for OpenAI if it exceeds maxTokens
  • a51ed8d: feat: add support for managed identity for Azure OpenAI
  • d3b635b: fix: agents to use chat history

0.3.17

Patch Changes

  • 6bc5bdd: feat: add cache disabling, fast mode, do not unroll columns mode and custom page separator to LlamaParseReader
  • bf25ff6: fix: polyfill for cloudflare worker
  • e6d6576: chore: use unpdf

0.3.16

Patch Changes

  • 11ae926: feat: add numCandidates setting to MongoDBAtlasVectorStore for tuning queries
  • 631f000: feat: DeepInfra LLM implementation
  • 1378ec4: feat: set default model to gpt-4o
  • 6b1ded4: add gpt4o mode, cache invalidation, and skip-diagonal-text options to LlamaParseReader
  • 4d4bd85: Show error message if agent tool is called with partial JSON
  • 24a9d1e: add json mode and image retrieval to LlamaParseReader
  • 45952de: add concurrency management for SimpleDirectoryReader
  • 54230f0: feat: Gemini GA release models
  • a29d835: setDocumentHash should be async
  • 73819bf: Unify metadata and ID handling of documents, allow files to be read by Buffer

0.3.15

Patch Changes

  • 6e156ed: Use images in context chat engine
  • 265976d: fix bug with node decorator
  • 8e26f75: Add retrieval for images using multi-modal messages

0.3.14

Patch Changes

  • 6ff7576: Added GPT-4o for Azure
  • 94543de: Added the latest preview Gemini models and took multi-modal images into account

0.3.13

Patch Changes

  • 1b1081b: Add vectorStores to storage context to define vector store per modality
  • 37525df: Added support for accessing Gemini via Vertex AI
  • 660a2b3: Fix text before heading in markdown reader
  • a1f2475: Add system prompt to ContextChatEngine

0.3.12

Patch Changes

0.3.11

Patch Changes

  • e072c45: fix: remove non-standard API pipeline

  • 9e133ac: refactor: remove defaultFS from parameters

    Passing fs as a parameter is no longer accepted, since it is unnecessary once the JS environment is known.

    This was a polyfill mechanism for non-Node.js environments; APIs are now polyfilled in a different way.

  • 447105a: Improve Gemini message and context preparation

  • 320be3f: F...

Read more

Release refs/tags/llamaindex@0.5.24

11 Sep 16:45
2dcad52

llamaindex

0.5.24

Patch Changes

  • Updated dependencies [0bf8d80]
    • @llamaindex/cloud@0.2.3

0.5.23

Patch Changes

  • Updated dependencies [711c814]
    • @llamaindex/core@0.1.12

0.5.22

Patch Changes

  • 4648da6: fix: wrong tiktoken version caused the NextJS CL template run to fail
  • Updated dependencies [4648da6]
    • @llamaindex/env@0.1.10
    • @llamaindex/core@0.1.11

0.5.21

Patch Changes

  • ae1149f: feat: add JSON streaming to JSONReader

  • 2411c9f: Auto-create index for MongoDB vector store (if not exists)

  • e8f229c: Remove logging from MongoDB Atlas Vector Store

  • 11b3856: implement filters for MongoDBAtlasVectorSearch

  • 83d7f41: Fix database insertion for PGVectorStore

    It will now:

    • throw an error if there is an insertion error,
    • upsert documents with the same id, and
    • add all documents to the database as a single INSERT call (inside a transaction).
  • 0148354: refactor: prompt system

    Adds a PromptTemplate module with strong type checking (see the sketch after this list).

  • 1711f6d: Export imageToDataUrl for using images in chat

  • Updated dependencies [0148354]

    • @llamaindex/core@0.1.10
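
    A minimal sketch of the new prompt module. The @llamaindex/core/prompts import path, the templateVars/template option names, and the format() call are assumptions based on the refactor description above; verify them against your installed version.

      // Sketch (assumed API): a typed prompt template with named template variables.
      import { PromptTemplate } from "@llamaindex/core/prompts";

      const qaPrompt = new PromptTemplate({
        templateVars: ["context", "query"],
        template: "Context:\n{context}\n\nGiven the context, answer the query: {query}",
      });

      // format() substitutes the declared variables into the template string (assumed behavior).
      console.log(qaPrompt.format({ context: "LlamaIndex is a data framework.", query: "What is LlamaIndex?" }));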

0.5.20

Patch Changes

  • d9d6c56: Add support for MetadataFilters for PostgreSQL
  • 22ff486: Add tiktoken WASM to withLlamaIndex
  • eed0b04: fix: use LLM metadata mode for generating context of ContextChatEngine

0.5.19

Patch Changes

  • fcbf183: implement llamacloud file service

0.5.18

Patch Changes

  • 8b66cf4: feat: support organization id in llamacloud index
  • Updated dependencies [e27e7dd]
    • @llamaindex/core@0.1.9

0.5.17

Patch Changes

  • c654398: Implement Weaviate Vector Store in TS

0.5.16

Patch Changes

  • 58abc57: fix: align version
  • Updated dependencies [58abc57]
    • @llamaindex/cloud@0.2.2
    • @llamaindex/core@0.1.8
    • @llamaindex/env@0.1.9

0.5.15

Patch Changes

  • 01c184c: Add is_empty operator for filtering vector store
  • 07a275f: chore: bump openai

0.5.14

Patch Changes

  • c825a2f: Add gpt-4o-mini to Azure. Add 2024-06-01 API version for Azure

0.5.13

Patch Changes

  • Updated dependencies [04b2f8e]
    • @llamaindex/core@0.1.7

0.5.12

Patch Changes

  • 345300f: feat: add splitByPage mode to LlamaParseReader
  • da5cfc4: Add metadata filter options to retriever constructors
  • da5cfc4: Fix system prompt not used in ContextChatEngine
  • Updated dependencies [0452af9]
    • @llamaindex/core@0.1.6

0.5.11

Patch Changes

  • Updated dependencies [1f680d7]
    • @llamaindex/cloud@0.2.1

0.5.10

Patch Changes

  • 086b940: feat: add DeepSeek LLM
  • 5d5716b: feat: add a reader for JSON data
  • 91d02a4: feat: support transform component callable
  • fb6db45: feat: add pageSeparator param to LlamaParseReader (see the sketch after this list)
  • Updated dependencies [91d02a4]
    • @llamaindex/core@0.1.5
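
    A minimal sketch of the new pageSeparator option for LlamaParseReader. The option name comes from the changelog entry above; the remaining constructor options and the loadData() call are assumptions, so adjust them to your installed version.

      // Sketch: parse a PDF with LlamaParse and join pages with a custom separator.
      import { LlamaParseReader } from "llamaindex";

      const reader = new LlamaParseReader({
        apiKey: process.env.LLAMA_CLOUD_API_KEY, // assumed: the key can also come from the env
        resultType: "markdown",                  // assumed option
        pageSeparator: "\n\n---\n\n",            // added in 0.5.10
      });

      const documents = await reader.loadData("./my-report.pdf");
      console.log(documents.length);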

0.5.9

Patch Changes

  • 15962b3: feat: node parser refactor

    Aligns the text splitter logic with the Python implementation and adds Zod input validation, better error messages, and an event system.

    This is not considered a breaking change since the output does not differ significantly from the previous version,
    but some edge cases will change, such as the page separator and the constructor parameters.

  • Updated dependencies [15962b3]

    • @llamaindex/core@0.1.4

0.5.8

Patch Changes

  • 3d5ba08: fix: update user agent in AssemblyAI
  • d917cdc: Add azure interpreter tool to tool factory

0.5.7

Patch Changes

  • ec59acd: fix: bundling issue with pnpm

0.5.6

Patch Changes

  • 2562244: feat: add gpt4o-mini
  • 325aa51: Implement Jina embedding through the Jina API
  • ab700ea: Add missing authentication to LlamaCloudIndex.fromDocuments
  • 92f0782: feat: use query bundle
  • 6cf6ae6: feat: abstract query type
  • b7cfe5b: fix: passing max_token option to replicate's api call
  • Updated dependencies [6cf6ae6]
    • @llamaindex/core@0.1.3

0.5.5

Patch Changes

  • b974eea: Add support for Metadata filters
  • Updated dependencies [b974eea]
    • @llamaindex/core@0.1.2

0.5.4

Patch Changes

  • 1a65ead: feat: add vendorMultimodal params to LlamaParseReader

0.5.3

Patch Changes

  • 9bbbc67: feat: add a reader for Discord messages
  • b3681bf: fix: DataCloneError when using FunctionTool
  • Updated dependencies [b3681bf]
    • @llamaindex/core@0.1.1

0.5.2

Patch Changes

  • Updated dependencies [3ed6acc]
    • @llamaindex/cloud@0.2.0

0.5.1

Patch Changes

  • 2774681: Add mixedbread's embeddings and reranking API
  • a0f424e: Corrected the regex in the extractToolUse & extractJsonStr functions in react.ts, as mentioned in run-llama#1019

0.5.0

Minor Changes

  • 16ef5dd: refactor: simplify callback manager

    Change event.detail.payload to event.detail

Patch Changes

  • 16ef5dd: refactor: move callback manager & llm to core module

    If you import llamaindex/llms/base or llamaindex/llms/utils,
    use @llamaindex/core/llms and @llamaindex/core/utils instead.

  • 36ddec4: fix: typo in custom page separator parameter for LlamaParse

  • Updated dependencies [16ef5dd]

  • Updated dependencies [16ef5dd]

  • Updated dependencies [36ddec4]

    • @llamaindex/core@0.1.0
    • @llamaindex/cloud@0.1.4

0.4.14

Patch Changes

  • Updated dependencies [1c444d5]
    • @llamaindex/cloud@0.1.3

0.4.13

Patch Changes

  • e8f8bea: feat: add boundingBox and targetPages to LlamaParseReader
  • 304484b: feat: add ignoreErrors flag to LlamaParseReader

0.4.12

Patch Changes

  • f326ab8: chore: bump version
  • Updated dependencies [f326ab8]
    • @llamaindex/cloud@0.1.2
    • @llamaindex/core@0.0.3
    • @llamaindex/env@0.1.8

0.4.11

Patch Changes

  • 8bf5b4a: fix: llama parse input spreadsheet

0.4.10

Patch Changes

  • 7dce3d2: fix: disable External Filters for Gemini

0.4.9

Patch Changes

  • 3a96a48: fix: anthropic image input

0.4.8

Patch Changes

  • 83ebdfb: fix: next.js build error

0.4.7

Patch Changes

  • 41fe871: Add support for azure dynamic session tool
  • 321c39d: fix: generate api as class
  • f7f1af0: fix: throw error when no pipeline found
  • Updated dependencies [41fe871]
  • Updated dependencies [f10b41d]
  • Updated dependencies [321c39d]
    • @llamaindex/env@0.1.7
    • @llamaindex/core@0.0.2
    • @llamaindex/cloud@0.1.1

0.4.6

Patch Changes

  • 1feb23b: feat: Gemini tool calling for agent support
  • 08c55ec: Add metadata to PDFs and use Uint8Array for readers content

0.4.5

Patch Changes

  • 6c3e5d0: fix: switch to correct reference for a static function

0.4.4

Patch Changes

  • 42eb73a: Fix IngestionPipeline not working without vectorStores

0.4.3

Patch Changes

  • 2ef62a9: feat: added support for embeddings via HuggingFace Inference API
  • Updated dependencies [d4e853c]
  • Updated dependencies [a94b8ec]
    • @llamaindex/env@0.1.6

0.4.2

Patch Changes

  • a87a4d1: feat: added tool calling support for Bedrock's Claude and general LLM support for agents
  • 0730140: include node relationships when converting jsonToDoc
  • Updated dependencies [f3b34b4]
    • @llamaindex/env@0.1.5

0.4.1

Patch Changes

  • 3c47910: fix: groq llm
  • ed467a9: Add model ids for Anthropic Claude 3.5 Sonnet model on Anthropic and Bedrock
  • cba5406: fix: every Llama Parse job being called "blob"
  • Updated dependencies [56fabbb]
    • @llamaindex/env@0.1.4

0.4.0

Minor Changes

  • 436bc41: Unify chat engine response and agent response

Patch Changes

  • a44e54f: Truncate text to embed for OpenAI if it exceeds maxTokens
  • a51ed8d: feat: add support for managed identity for Azure OpenAI
  • d3b635b: fix: agents to use chat history

0.3.17

Patch Changes

  • 6bc5bdd: feat: add cache disabling, fast mode, do not unroll columns mode and custom page separator to LlamaParseReader
  • bf25ff6: fix: polyfill for cloudflare worker
  • e6d6576: chore: use unpdf

0.3.16

Patch Changes

  • 11ae926: feat: add numCandidates setting to MongoDBAtlasVectorStore for tuning queries
  • 631f000: feat: DeepInfra LLM implementation
  • 1378ec4: feat: set default model to gpt-4o
  • 6b1ded4: add gpt4o mode, cache invalidation, and skip-diagonal-text options to LlamaParseReader
  • 4d4bd85: Show error message if agent tool is called with partial JSON
  • 24a9d1e: add json mode and image retrieval to LlamaParseReader
  • 45952de: add concurrency management for SimpleDirectoryReader
  • 54230f0: feat: Gemini GA release models
  • a29d835: setDocumentHash should be async
  • 73819bf: Unify metadata and ID handling of documents, allow files to be read by Buffer

0.3.15

Patch Changes

  • 6e156ed: Use images in context chat engine
  • 265976d: fix bug with node decorator
  • 8e26f75: Add retrieval for images using multi-modal messages

0.3.14

Patch Changes

  • 6ff7576: Added GPT-4o for Azure
  • 94543de: Added the latest preview Gemini models and took multi-modal images into account

0.3.13

Patch Changes

  • 1b1081b: Add vectorStores to storage context to define vector store per modality
  • 37525df: Added support for accessing Gemini via Vertex AI
  • 660a2b3: Fix text before heading in markdown reader
  • a1f2475: Add system prompt to ContextChatEngine

0.3.12

Patch Changes

0.3.11

Patch Changes

  • e072c45: fix: remove non-standard API pipeline

  • 9e133ac: refactor: remove defaultFS from parameters

    Passing fs as a parameter is no longer accepted, since it is unnecessary once the JS environment is known.

    This was a polyfill mechanism for non-Node.js environments; APIs are now polyfilled in a different way.

  • 447105a: Improve Gemini message and context preparation

  • 320be3f: Force ChromaDB version to 1.7.3 (to prevent NextJS issues)

  • Updated dependencies [e072c45]

  • Updated dependencies [9e133ac]

    • @llamaindex/env@0.1.3

0.3.10

Patch Changes

0.3.9

Patch Changes

  • c3747d0: fix: import @xenova/transformers

    For now, if you use llamaindex in next.js, you need to ad...

Read more

Release refs/tags/llamaindex@0.5.7

23 Jul 02:50
b370edf

llamaindex

0.5.7

Patch Changes

  • ec59acd: fix: bundling issue with pnpm

0.5.6

Patch Changes

  • 2562244: feat: add gpt4o-mini
  • 325aa51: Implement Jina embedding through the Jina API
  • ab700ea: Add missing authentication to LlamaCloudIndex.fromDocuments
  • 92f0782: feat: use query bundle
  • 6cf6ae6: feat: abstract query type
  • b7cfe5b: fix: passing max_token option to replicate's api call
  • Updated dependencies [6cf6ae6]
    • @llamaindex/core@0.1.3

0.5.5

Patch Changes

  • b974eea: Add support for Metadata filters
  • Updated dependencies [b974eea]
    • @llamaindex/core@0.1.2

0.5.4

Patch Changes

  • 1a65ead: feat: add vendorMultimodal params to LlamaParseReader

0.5.3

Patch Changes

  • 9bbbc67: feat: add a reader for Discord messages
  • b3681bf: fix: DataCloneError when using FunctionTool
  • Updated dependencies [b3681bf]
    • @llamaindex/core@0.1.1

0.5.2

Patch Changes

  • Updated dependencies [3ed6acc]
    • @llamaindex/cloud@0.2.0

0.5.1

Patch Changes

  • 2774681: Add mixedbread's embeddings and reranking API
  • a0f424e: Corrected the regex in the extractToolUse & extractJsonStr functions in react.ts, as mentioned in run-llama#1019

0.5.0

Minor Changes

  • 16ef5dd: refactor: simplify callback manager

    Change event.detail.payload to event.detail

Patch Changes

  • 16ef5dd: refactor: move callback manager & llm to core module

    If you import llamaindex/llms/base or llamaindex/llms/utils,
    use @llamaindex/core/llms and @llamaindex/core/utils instead.

  • 36ddec4: fix: typo in custom page separator parameter for LlamaParse

  • Updated dependencies [16ef5dd]

  • Updated dependencies [16ef5dd]

  • Updated dependencies [36ddec4]

    • @llamaindex/core@0.1.0
    • @llamaindex/cloud@0.1.4

0.4.14

Patch Changes

  • Updated dependencies [1c444d5]
    • @llamaindex/cloud@0.1.3

0.4.13

Patch Changes

  • e8f8bea: feat: add boundingBox and targetPages to LlamaParseReader
  • 304484b: feat: add ignoreErrors flag to LlamaParseReader

0.4.12

Patch Changes

  • f326ab8: chore: bump version
  • Updated dependencies [f326ab8]
    • @llamaindex/cloud@0.1.2
    • @llamaindex/core@0.0.3
    • @llamaindex/env@0.1.8

0.4.11

Patch Changes

  • 8bf5b4a: fix: llama parse input spreadsheet

0.4.10

Patch Changes

  • 7dce3d2: fix: disable External Filters for Gemini

0.4.9

Patch Changes

  • 3a96a48: fix: anthropic image input

0.4.8

Patch Changes

  • 83ebdfb: fix: next.js build error

0.4.7

Patch Changes

  • 41fe871: Add support for azure dynamic session tool
  • 321c39d: fix: generate api as class
  • f7f1af0: fix: throw error when no pipeline found
  • Updated dependencies [41fe871]
  • Updated dependencies [f10b41d]
  • Updated dependencies [321c39d]
    • @llamaindex/env@0.1.7
    • @llamaindex/core@0.0.2
    • @llamaindex/cloud@0.1.1

0.4.6

Patch Changes

  • 1feb23b: feat: Gemini tool calling for agent support
  • 08c55ec: Add metadata to PDFs and use Uint8Array for readers content

0.4.5

Patch Changes

  • 6c3e5d0: fix: switch to correct reference for a static function

0.4.4

Patch Changes

  • 42eb73a: Fix IngestionPipeline not working without vectorStores

0.4.3

Patch Changes

  • 2ef62a9: feat: added support for embeddings via HuggingFace Inference API
  • Updated dependencies [d4e853c]
  • Updated dependencies [a94b8ec]
    • @llamaindex/env@0.1.6

0.4.2

Patch Changes

  • a87a4d1: feat: added tool calling support for Bedrock's Claude and general LLM support for agents
  • 0730140: include node relationships when converting jsonToDoc
  • Updated dependencies [f3b34b4]
    • @llamaindex/env@0.1.5

0.4.1

Patch Changes

  • 3c47910: fix: groq llm
  • ed467a9: Add model ids for Anthropic Claude 3.5 Sonnet model on Anthropic and Bedrock
  • cba5406: fix: every Llama Parse job being called "blob"
  • Updated dependencies [56fabbb]
    • @llamaindex/env@0.1.4

0.4.0

Minor Changes

  • 436bc41: Unify chat engine response and agent response

Patch Changes

  • a44e54f: Truncate text to embed for OpenAI if it exceeds maxTokens
  • a51ed8d: feat: add support for managed identity for Azure OpenAI
  • d3b635b: fix: agents to use chat history

0.3.17

Patch Changes

  • 6bc5bdd: feat: add cache disabling, fast mode, do not unroll columns mode and custom page separator to LlamaParseReader
  • bf25ff6: fix: polyfill for cloudflare worker
  • e6d6576: chore: use unpdf

0.3.16

Patch Changes

  • 11ae926: feat: add numCandidates setting to MongoDBAtlasVectorStore for tuning queries
  • 631f000: feat: DeepInfra LLM implementation
  • 1378ec4: feat: set default model to gpt-4o
  • 6b1ded4: add gpt4o mode, cache invalidation, and skip-diagonal-text options to LlamaParseReader
  • 4d4bd85: Show error message if agent tool is called with partial JSON
  • 24a9d1e: add json mode and image retrieval to LlamaParseReader
  • 45952de: add concurrency management for SimpleDirectoryReader
  • 54230f0: feat: Gemini GA release models
  • a29d835: setDocumentHash should be async
  • 73819bf: Unify metadata and ID handling of documents, allow files to be read by Buffer

0.3.15

Patch Changes

  • 6e156ed: Use images in context chat engine
  • 265976d: fix bug with node decorator
  • 8e26f75: Add retrieval for images using multi-modal messages

0.3.14

Patch Changes

  • 6ff7576: Added GPT-4o for Azure
  • 94543de: Added the latest preview Gemini models and took multi-modal images into account

0.3.13

Patch Changes

  • 1b1081b: Add vectorStores to storage context to define vector store per modality
  • 37525df: Added support for accessing Gemini via Vertex AI
  • 660a2b3: Fix text before heading in markdown reader
  • a1f2475: Add system prompt to ContextChatEngine

0.3.12

Patch Changes

0.3.11

Patch Changes

  • e072c45: fix: remove non-standard API pipeline

  • 9e133ac: refactor: remove defaultFS from parameters

    Passing fs as a parameter is no longer accepted, since it is unnecessary once the JS environment is known.

    This was a polyfill mechanism for non-Node.js environments; APIs are now polyfilled in a different way.

  • 447105a: Improve Gemini message and context preparation

  • 320be3f: Force ChromaDB version to 1.7.3 (to prevent NextJS issues)

  • Updated dependencies [e072c45]

  • Updated dependencies [9e133ac]

    • @llamaindex/env@0.1.3

0.3.10

Patch Changes

0.3.9

Patch Changes

  • c3747d0: fix: import @xenova/transformers

    For now, if you use llamaindex in next.js, you need to add a plugin from llamaindex/next to ensure some module resolutions are correct.
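
    For reference, a minimal sketch of wiring that plugin into a Next.js config, assuming the withLlamaIndex helper exported from llamaindex/next (the helper name also appears in later changelog entries); treat it as a sketch rather than the exact setup for your version.

      // next.config.mjs (sketch): wrap the Next.js config so llamaindex module
      // resolutions (e.g. @xenova/transformers) work in the Next.js build.
      import withLlamaIndex from "llamaindex/next";

      /** @type {import("next").NextConfig} */
      const nextConfig = {};

      export default withLlamaIndex(nextConfig);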

0.3.8

Patch Changes

  • ce94780: Add page number to read PDFs and use generated IDs for PDF and markdown content

0.3.7

Patch Changes

  • b6a6606: feat: allow change host of ollama
  • b6a6606: chore: export ollama in default js runtime

0.3.6

Patch Changes

  • efa326a: chore: update package.json
  • Updated dependencies [efa326a]
  • Updated dependencies [efa326a]
    • @llamaindex/env@0.1.2

0.3.5

Patch Changes

  • bc7a11c: fix: inline ollama build
  • 2fe2b81: fix: filter with multiple filters in ChromaDB
  • 5596e31: feat: improve @llamaindex/env
  • e74fe88: fix: change <-> to <=> in the SELECT query
  • be5df5b: fix: anthropic agent on multiple chat
  • Updated dependencies [5596e31]
    • @llamaindex/env@0.1.1

0.3.4

Patch Changes

  • 1dce275: fix: export StorageContext on edge runtime
  • d10533e: feat: add hugging face llm
  • 2008efe: feat: add verbose mode to Agent
  • 5e61934: fix: remove clone object in CallbackManager.dispatchEvent
  • 9e74a43: feat: add top k to asQueryEngine
  • ee719a1: fix: streaming for ReAct Agent

0.3.3

Patch Changes

  • e8c41c5: fix: wrong gemini streaming chat response

0.3.2

Patch Changes

  • 61103b6: fix: streaming for Agent.createTask API

0.3.1

Patch Changes

  • 46227f2: fix: build error on next.js nodejs runtime

0.3.0

Minor Changes

  • 5016f21: feat: improve next.js/cloudflare/vite support

Patch Changes

  • Updated dependencies [5016f21]
    • @llamaindex/env@0.1.0

0.2.13

Patch Changes

  • 6277105: fix: allow passing empty tools to llms

0.2.12

Patch Changes

  • d8d952d: feat: add gemini llm and embedding

0.2.11

Patch Changes

  • 87142b2: refactor: use ollama official sdk
  • 5a6cc0e: feat: support jina ai embedding and reranker
  • 87142b2: feat: support output to json format

0.2.10

Patch Changes

0.2.9

Patch Changes

  • 76c3fd6: Add score to source nodes response

  • 208282d: feat: init anthropic agent

    Removes the tool and function types from MessageType; use assistant instead.
    These two types are specific to OpenAI, and since OpenAI has deprecated the function type,
    we now support the Claude 3 tool call instead (see the sketch below).
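
    A minimal sketch of the resulting message shape, assuming the ChatMessage type exported from llamaindex.

      // Sketch: tool results are now represented as assistant messages rather than
      // the removed OpenAI-only "tool" / "function" roles.
      import type { ChatMessage } from "llamaindex";

      const toolResult: ChatMessage = {
        role: "assistant",
        content: "The weather in Tokyo is 22°C.",
      };
      console.log(toolResult.role);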

0.2.8

Patch Changes

  • Add ToolsFactory to generate agent tools

0.2.7

Patch Changes

  • 96f8f40: fix: agent stream
  • Updated dependencies
    • @llamaindex/env@0.0.7

0.2.6

Patch Changes

  • a3b4409: Fix agent streaming with new OpenAI models

0.2.5

Patch Changes

  • 7d56cdf: Allow OpenAIAgent to be called without tools

0.2.4

Patch Changes

  • 3bc77f7: gpt-4-turbo GA
  • 8d2b21e: Mistral 0.1.3

0.2.3

Patch Changes

  • f0704ec: Support streaming for OpenAI agent (and OpenAI tool calls)
  • Removed 'parentEvent' - Use 'event.reason?.computedCallers' instead
  • 3cbfa98: Added LlamaCloudIndex.fromDocuments

0.2.2

Patch Changes

  • 3f8407c: Add pipeline.register to create a managed index in LlamaCloud
  • 60a1603: fix: make edge run build after core
  • fececd8: feat: add tool factory
  • 1115f83: fix: throw error when no pipelines exist for the retriever
  • 7a23cc6: feat: improve CallbackManager
  • ea467fa: Update the list of supported Azure OpenAI API versions as of 2024-04-02.
  • 6d9e015: feat: use claude3 with react agent
  • 0b665bd: feat: add wikip...
Read more

Release refs/tags/llamaindex@0.5.1

10 Jul 19:30
a699086

llamaindex

0.5.1

Patch Changes

  • 2774681: Add mixedbread's embeddings and reranking API
  • a0f424e: Corrected the regex in the extractToolUse & extractJsonStr functions in react.ts, as mentioned in run-llama#1019

0.5.0

Minor Changes

  • 16ef5dd: refactor: simplify callback manager

    Change event.detail.payload to event.detail

Patch Changes

  • 16ef5dd: refactor: move callback manager & llm to core module

    If you import llamaindex/llms/base or llamaindex/llms/utils,
    use @llamaindex/core/llms and @llamaindex/core/utils instead.

  • 36ddec4: fix: typo in custom page separator parameter for LlamaParse

  • Updated dependencies [16ef5dd]

  • Updated dependencies [16ef5dd]

  • Updated dependencies [36ddec4]

    • @llamaindex/core@0.1.0
    • @llamaindex/cloud@0.1.4

0.4.14

Patch Changes

  • Updated dependencies [1c444d5]
    • @llamaindex/cloud@0.1.3

0.4.13

Patch Changes

  • e8f8bea: feat: add boundingBox and targetPages to LlamaParseReader
  • 304484b: feat: add ignoreErrors flag to LlamaParseReader

0.4.12

Patch Changes

  • f326ab8: chore: bump version
  • Updated dependencies [f326ab8]
    • @llamaindex/cloud@0.1.2
    • @llamaindex/core@0.0.3
    • @llamaindex/env@0.1.8

0.4.11

Patch Changes

  • 8bf5b4a: fix: llama parse input spreadsheet

0.4.10

Patch Changes

  • 7dce3d2: fix: disable External Filters for Gemini

0.4.9

Patch Changes

  • 3a96a48: fix: anthropic image input

0.4.8

Patch Changes

  • 83ebdfb: fix: next.js build error

0.4.7

Patch Changes

  • 41fe871: Add support for azure dynamic session tool
  • 321c39d: fix: generate api as class
  • f7f1af0: fix: throw error when no pipeline found
  • Updated dependencies [41fe871]
  • Updated dependencies [f10b41d]
  • Updated dependencies [321c39d]
    • @llamaindex/env@0.1.7
    • @llamaindex/core@0.0.2
    • @llamaindex/cloud@0.1.1

0.4.6

Patch Changes

  • 1feb23b: feat: Gemini tool calling for agent support
  • 08c55ec: Add metadata to PDFs and use Uint8Array for readers content

0.4.5

Patch Changes

  • 6c3e5d0: fix: switch to correct reference for a static function

0.4.4

Patch Changes

  • 42eb73a: Fix IngestionPipeline not working without vectorStores

0.4.3

Patch Changes

  • 2ef62a9: feat: added support for embeddings via HuggingFace Inference API
  • Updated dependencies [d4e853c]
  • Updated dependencies [a94b8ec]
    • @llamaindex/env@0.1.6

0.4.2

Patch Changes

  • a87a4d1: feat: added tool calling support for Bedrock's Claude and general LLM support for agents
  • 0730140: include node relationships when converting jsonToDoc
  • Updated dependencies [f3b34b4]
    • @llamaindex/env@0.1.5

0.4.1

Patch Changes

  • 3c47910: fix: groq llm
  • ed467a9: Add model ids for Anthropic Claude 3.5 Sonnet model on Anthropic and Bedrock
  • cba5406: fix: every Llama Parse job being called "blob"
  • Updated dependencies [56fabbb]
    • @llamaindex/env@0.1.4

0.4.0

Minor Changes

  • 436bc41: Unify chat engine response and agent response

Patch Changes

  • a44e54f: Truncate text to embed for OpenAI if it exceeds maxTokens
  • a51ed8d: feat: add support for managed identity for Azure OpenAI
  • d3b635b: fix: agents to use chat history

0.3.17

Patch Changes

  • 6bc5bdd: feat: add cache disabling, fast mode, do not unroll columns mode and custom page separator to LlamaParseReader
  • bf25ff6: fix: polyfill for cloudflare worker
  • e6d6576: chore: use unpdf

0.3.16

Patch Changes

  • 11ae926: feat: add numCandidates setting to MongoDBAtlasVectorStore for tuning queries
  • 631f000: feat: DeepInfra LLM implementation
  • 1378ec4: feat: set default model to gpt-4o
  • 6b1ded4: add gpt4o mode, cache invalidation, and skip-diagonal-text options to LlamaParseReader
  • 4d4bd85: Show error message if agent tool is called with partial JSON
  • 24a9d1e: add json mode and image retrieval to LlamaParseReader
  • 45952de: add concurrency management for SimpleDirectoryReader
  • 54230f0: feat: Gemini GA release models
  • a29d835: setDocumentHash should be async
  • 73819bf: Unify metadata and ID handling of documents, allow files to be read by Buffer

0.3.15

Patch Changes

  • 6e156ed: Use images in context chat engine
  • 265976d: fix bug with node decorator
  • 8e26f75: Add retrieval for images using multi-modal messages

0.3.14

Patch Changes

  • 6ff7576: Added GPT-4o for Azure
  • 94543de: Added the latest preview Gemini models and took multi-modal images into account

0.3.13

Patch Changes

  • 1b1081b: Add vectorStores to storage context to define vector store per modality
  • 37525df: Added support for accessing Gemini via Vertex AI
  • 660a2b3: Fix text before heading in markdown reader
  • a1f2475: Add system prompt to ContextChatEngine

0.3.12

Patch Changes

0.3.11

Patch Changes

  • e072c45: fix: remove non-standard API pipeline

  • 9e133ac: refactor: remove defaultFS from parameters

    Passing fs as a parameter is no longer accepted, since it is unnecessary once the JS environment is known.

    This was a polyfill mechanism for non-Node.js environments; APIs are now polyfilled in a different way.

  • 447105a: Improve Gemini message and context preparation

  • 320be3f: Force ChromaDB version to 1.7.3 (to prevent NextJS issues)

  • Updated dependencies [e072c45]

  • Updated dependencies [9e133ac]

    • @llamaindex/env@0.1.3

0.3.10

Patch Changes

0.3.9

Patch Changes

  • c3747d0: fix: import @xenova/transformers

    For now, if you use llamaindex in next.js, you need to add a plugin from llamaindex/next to ensure some module resolutions are correct.

0.3.8

Patch Changes

  • ce94780: Add page number to read PDFs and use generated IDs for PDF and markdown content

0.3.7

Patch Changes

  • b6a6606: feat: allow change host of ollama
  • b6a6606: chore: export ollama in default js runtime

0.3.6

Patch Changes

  • efa326a: chore: update package.json
  • Updated dependencies [efa326a]
  • Updated dependencies [efa326a]
    • @llamaindex/env@0.1.2

0.3.5

Patch Changes

  • bc7a11c: fix: inline ollama build
  • 2fe2b81: fix: filter with multiple filters in ChromaDB
  • 5596e31: feat: improve @llamaindex/env
  • e74fe88: fix: change <-> to <=> in the SELECT query
  • be5df5b: fix: anthropic agent on multiple chat
  • Updated dependencies [5596e31]
    • @llamaindex/env@0.1.1

0.3.4

Patch Changes

  • 1dce275: fix: export StorageContext on edge runtime
  • d10533e: feat: add hugging face llm
  • 2008efe: feat: add verbose mode to Agent
  • 5e61934: fix: remove clone object in CallbackManager.dispatchEvent
  • 9e74a43: feat: add top k to asQueryEngine
  • ee719a1: fix: streaming for ReAct Agent

0.3.3

Patch Changes

  • e8c41c5: fix: wrong gemini streaming chat response

0.3.2

Patch Changes

  • 61103b6: fix: streaming for Agent.createTask API

0.3.1

Patch Changes

  • 46227f2: fix: build error on next.js nodejs runtime

0.3.0

Minor Changes

  • 5016f21: feat: improve next.js/cloudflare/vite support

Patch Changes

  • Updated dependencies [5016f21]
    • @llamaindex/env@0.1.0

0.2.13

Patch Changes

  • 6277105: fix: allow passing empty tools to llms

0.2.12

Patch Changes

  • d8d952d: feat: add gemini llm and embedding

0.2.11

Patch Changes

  • 87142b2: refactor: use ollama official sdk
  • 5a6cc0e: feat: support jina ai embedding and reranker
  • 87142b2: feat: support output to json format

0.2.10

Patch Changes

0.2.9

Patch Changes

  • 76c3fd6: Add score to source nodes response

  • 208282d: feat: init anthropic agent

    Removes the tool and function types from MessageType; use assistant instead.
    These two types are specific to OpenAI, and since OpenAI has deprecated the function type,
    we now support the Claude 3 tool call instead.

0.2.8

Patch Changes

  • Add ToolsFactory to generate agent tools

0.2.7

Patch Changes

  • 96f8f40: fix: agent stream
  • Updated dependencies
    • @llamaindex/env@0.0.7

0.2.6

Patch Changes

  • a3b4409: Fix agent streaming with new OpenAI models

0.2.5

Patch Changes

  • 7d56cdf: Allow OpenAIAgent to be called without tools

0.2.4

Patch Changes

  • 3bc77f7: gpt-4-turbo GA
  • 8d2b21e: Mistral 0.1.3

0.2.3

Patch Changes

  • f0704ec: Support streaming for OpenAI agent (and OpenAI tool calls)
  • Removed 'parentEvent' - Use 'event.reason?.computedCallers' instead
  • 3cbfa98: Added LlamaCloudIndex.fromDocuments

0.2.2

Patch Changes

  • 3f8407c: Add pipeline.register to create a managed index in LlamaCloud
  • 60a1603: fix: make edge run build after core
  • fececd8: feat: add tool factory
  • 1115f83: fix: throw error when no pipelines exist for the retriever
  • 7a23cc6: feat: improve CallbackManager
  • ea467fa: Update the list of supported Azure OpenAI API versions as of 2024-04-02.
  • 6d9e015: feat: use claude3 with react agent
  • 0b665bd: feat: add wikipedia tool
  • 24b4033: feat: add result type json
  • 8b28092: Add support for doc store strategies to VectorStoreIndex.fromDocuments
  • Updated dependencies [7a23cc6]
    • @llamaindex/env@0.0.6

0.2.1

Patch Changes

  • 41210df: Add auto create milvus collection and add milvus node metadata
  • 137cf67: Use Pinecone namespaces for all operations
  • 259c842: Add support for edge runtime by using @llamaindex/edge

0.2.0

Minor Changes

  • bf583a7: Use parameter object for retrieve function of Retriever (to align usage with query function of QueryEngine)
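
    A minimal sketch of the new call shape. The index construction is illustrative (it assumes a configured embedding model, e.g. OPENAI_API_KEY in the environment); the key change is that retrieve() now takes a parameter object.

      // Sketch: retrieve() now takes a parameter object, mirroring queryEngine.query().
      import { Document, VectorStoreIndex } from "llamaindex";

      const index = await VectorStoreIndex.fromDocuments([
        new Document({ text: "LlamaIndex.TS is a TypeScript data framework." }),
      ]);
      const retriever = index.asRetriever();

      // Before 0.2.0: retriever.retrieve("what is LlamaIndex.TS?")
      const nodes = await retriever.retrieve({ query: "what is LlamaIndex.TS?" });
      console.log(nodes.length);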

Patch Changes

  • d2e8d0c: add support for Milvus vector store
  • aefc326: feat: experimental package + json query engine
  • 484a710:
    • Add missing exports: IndexStructType, IndexDict, jsonToIndexStruct, IndexList, IndexStruct
    • Fix IndexDict.toJson() method
  • d766bd0: Add streaming to agents
  • dd95927: add Claude Haiku support and update anthropic SDK

0.1.21

...

Read more

Release refs/tags/llamaindex@0.4.13

05 Jul 21:33
1f910f7

llamaindex

0.4.13

Patch Changes

  • e8f8bea: feat: add boundingBox and targetPages to LlamaParseReader
  • 304484b: feat: add ignoreErrors flag to LlamaParseReader

0.4.12

Patch Changes

  • f326ab8: chore: bump version
  • Updated dependencies [f326ab8]
    • @llamaindex/cloud@0.1.2
    • @llamaindex/core@0.0.3
    • @llamaindex/env@0.1.8

0.4.11

Patch Changes

  • 8bf5b4a: fix: llama parse input spreadsheet

0.4.10

Patch Changes

  • 7dce3d2: fix: disable External Filters for Gemini

0.4.9

Patch Changes

  • 3a96a48: fix: anthropic image input

0.4.8

Patch Changes

  • 83ebdfb: fix: next.js build error

0.4.7

Patch Changes

  • 41fe871: Add support for azure dynamic session tool
  • 321c39d: fix: generate api as class
  • f7f1af0: fix: throw error when no pipeline found
  • Updated dependencies [41fe871]
  • Updated dependencies [f10b41d]
  • Updated dependencies [321c39d]
    • @llamaindex/env@0.1.7
    • @llamaindex/core@0.0.2
    • @llamaindex/cloud@0.1.1

0.4.6

Patch Changes

  • 1feb23b: feat: Gemini tool calling for agent support
  • 08c55ec: Add metadata to PDFs and use Uint8Array for readers content

0.4.5

Patch Changes

  • 6c3e5d0: fix: switch to correct reference for a static function

0.4.4

Patch Changes

  • 42eb73a: Fix IngestionPipeline not working without vectorStores

0.4.3

Patch Changes

  • 2ef62a9: feat: added support for embeddings via HuggingFace Inference API
  • Updated dependencies [d4e853c]
  • Updated dependencies [a94b8ec]
    • @llamaindex/env@0.1.6

0.4.2

Patch Changes

  • a87a4d1: feat: added tool calling support for Bedrock's Claude and general LLM support for agents
  • 0730140: include node relationships when converting jsonToDoc
  • Updated dependencies [f3b34b4]
    • @llamaindex/env@0.1.5

0.4.1

Patch Changes

  • 3c47910: fix: groq llm
  • ed467a9: Add model ids for Anthropic Claude 3.5 Sonnet model on Anthropic and Bedrock
  • cba5406: fix: every Llama Parse job being called "blob"
  • Updated dependencies [56fabbb]
    • @llamaindex/env@0.1.4

0.4.0

Minor Changes

  • 436bc41: Unify chat engine response and agent response

Patch Changes

  • a44e54f: Truncate text to embed for OpenAI if it exceeds maxTokens
  • a51ed8d: feat: add support for managed identity for Azure OpenAI
  • d3b635b: fix: agents to use chat history

0.3.17

Patch Changes

  • 6bc5bdd: feat: add cache disabling, fast mode, do not unroll columns mode and custom page separator to LlamaParseReader
  • bf25ff6: fix: polyfill for cloudflare worker
  • e6d6576: chore: use unpdf

0.3.16

Patch Changes

  • 11ae926: feat: add numCandidates setting to MongoDBAtlasVectorStore for tuning queries
  • 631f000: feat: DeepInfra LLM implementation
  • 1378ec4: feat: set default model to gpt-4o
  • 6b1ded4: add gpt4o mode, cache invalidation, and skip-diagonal-text options to LlamaParseReader
  • 4d4bd85: Show error message if agent tool is called with partial JSON
  • 24a9d1e: add json mode and image retrieval to LlamaParseReader
  • 45952de: add concurrency management for SimpleDirectoryReader
  • 54230f0: feat: Gemini GA release models
  • a29d835: setDocumentHash should be async
  • 73819bf: Unify metadata and ID handling of documents, allow files to be read by Buffer

0.3.15

Patch Changes

  • 6e156ed: Use images in context chat engine
  • 265976d: fix bug with node decorator
  • 8e26f75: Add retrieval for images using multi-modal messages

0.3.14

Patch Changes

  • 6ff7576: Added GPT-4o for Azure
  • 94543de: Added the latest preview Gemini models and took multi-modal images into account

0.3.13

Patch Changes

  • 1b1081b: Add vectorStores to storage context to define vector store per modality
  • 37525df: Added support for accessing Gemini via Vertex AI
  • 660a2b3: Fix text before heading in markdown reader
  • a1f2475: Add system prompt to ContextChatEngine

0.3.12

Patch Changes

0.3.11

Patch Changes

  • e072c45: fix: remove non-standard API pipeline

  • 9e133ac: refactor: remove defaultFS from parameters

    Passing fs as a parameter is no longer accepted, since it is unnecessary once the JS environment is known.

    This was a polyfill mechanism for non-Node.js environments; APIs are now polyfilled in a different way.

  • 447105a: Improve Gemini message and context preparation

  • 320be3f: Force ChromaDB version to 1.7.3 (to prevent NextJS issues)

  • Updated dependencies [e072c45]

  • Updated dependencies [9e133ac]

    • @llamaindex/env@0.1.3

0.3.10

Patch Changes

0.3.9

Patch Changes

  • c3747d0: fix: import @xenova/transformers

    For now, if you use llamaindex in next.js, you need to add a plugin from llamaindex/next to ensure some module resolutions are correct.

0.3.8

Patch Changes

  • ce94780: Add page number to read PDFs and use generated IDs for PDF and markdown content

0.3.7

Patch Changes

  • b6a6606: feat: allow change host of ollama
  • b6a6606: chore: export ollama in default js runtime

0.3.6

Patch Changes

  • efa326a: chore: update package.json
  • Updated dependencies [efa326a]
  • Updated dependencies [efa326a]
    • @llamaindex/env@0.1.2

0.3.5

Patch Changes

  • bc7a11c: fix: inline ollama build
  • 2fe2b81: fix: filter with multiple filters in ChromaDB
  • 5596e31: feat: improve @llamaindex/env
  • e74fe88: fix: change <-> to <=> in the SELECT query
  • be5df5b: fix: anthropic agent on multiple chat
  • Updated dependencies [5596e31]
    • @llamaindex/env@0.1.1

0.3.4

Patch Changes

  • 1dce275: fix: export StorageContext on edge runtime
  • d10533e: feat: add hugging face llm
  • 2008efe: feat: add verbose mode to Agent
  • 5e61934: fix: remove clone object in CallbackManager.dispatchEvent
  • 9e74a43: feat: add top k to asQueryEngine
  • ee719a1: fix: streaming for ReAct Agent

0.3.3

Patch Changes

  • e8c41c5: fix: wrong gemini streaming chat response

0.3.2

Patch Changes

  • 61103b6: fix: streaming for Agent.createTask API

0.3.1

Patch Changes

  • 46227f2: fix: build error on next.js nodejs runtime

0.3.0

Minor Changes

  • 5016f21: feat: improve next.js/cloudflare/vite support

Patch Changes

  • Updated dependencies [5016f21]
    • @llamaindex/env@0.1.0

0.2.13

Patch Changes

  • 6277105: fix: allow passing empty tools to llms

0.2.12

Patch Changes

  • d8d952d: feat: add gemini llm and embedding

0.2.11

Patch Changes

  • 87142b2: refactor: use ollama official sdk
  • 5a6cc0e: feat: support jina ai embedding and reranker
  • 87142b2: feat: support output to json format

0.2.10

Patch Changes

0.2.9

Patch Changes

  • 76c3fd6: Add score to source nodes response

  • 208282d: feat: init anthropic agent

    Removes the tool and function types from MessageType; use assistant instead.
    These two types are specific to OpenAI, and since OpenAI has deprecated the function type,
    we now support the Claude 3 tool call instead.

0.2.8

Patch Changes

  • Add ToolsFactory to generate agent tools

0.2.7

Patch Changes

  • 96f8f40: fix: agent stream
  • Updated dependencies
    • @llamaindex/env@0.0.7

0.2.6

Patch Changes

  • a3b4409: Fix agent streaming with new OpenAI models

0.2.5

Patch Changes

  • 7d56cdf: Allow OpenAIAgent to be called without tools

0.2.4

Patch Changes

  • 3bc77f7: gpt-4-turbo GA
  • 8d2b21e: Mistral 0.1.3

0.2.3

Patch Changes

  • f0704ec: Support streaming for OpenAI agent (and OpenAI tool calls)
  • Removed 'parentEvent' - Use 'event.reason?.computedCallers' instead
  • 3cbfa98: Added LlamaCloudIndex.fromDocuments

0.2.2

Patch Changes

  • 3f8407c: Add pipeline.register to create a managed index in LlamaCloud
  • 60a1603: fix: make edge run build after core
  • fececd8: feat: add tool factory
  • 1115f83: fix: throw error when no pipelines exist for the retriever
  • 7a23cc6: feat: improve CallbackManager
  • ea467fa: Update the list of supported Azure OpenAI API versions as of 2024-04-02.
  • 6d9e015: feat: use claude3 with react agent
  • 0b665bd: feat: add wikipedia tool
  • 24b4033: feat: add result type json
  • 8b28092: Add support for doc store strategies to VectorStoreIndex.fromDocuments
  • Updated dependencies [7a23cc6]
    • @llamaindex/env@0.0.6

0.2.1

Patch Changes

  • 41210df: Add auto create milvus collection and add milvus node metadata
  • 137cf67: Use Pinecone namespaces for all operations
  • 259c842: Add support for edge runtime by using @llamaindex/edge

0.2.0

Minor Changes

  • bf583a7: Use parameter object for retrieve function of Retriever (to align usage with query function of QueryEngine)

Patch Changes

  • d2e8d0c: add support for Milvus vector store
  • aefc326: feat: experimental package + json query engine
  • 484a710:
    • Add missing exports: IndexStructType, IndexDict, jsonToIndexStruct, IndexList, IndexStruct
    • Fix IndexDict.toJson() method
  • d766bd0: Add streaming to agents
  • dd95927: add Claude Haiku support and update anthropic SDK

0.1.21

Patch Changes

  • 552a61a: Add quantized parameter to HuggingFaceEmbedding
  • d824876: Add support for Claude 3

0.1.20

Patch Changes

  • 64683a5: fix: prefix messages always true
  • 698cd9c: fix: step wise agent + examples
  • 7257751: fixed removeRefDocNode and persist store on delete
  • 5116ad8: fix: compatibility issue with Deno
  • Updated dependencies [5116ad8]
    • @llamaindex/env@0.0.5

0.1.19

Patch Changes

  • 026d068: feat: enhance pinecone usage

0.1.18

Patch Changes

  • 90027a7: Add splitLongSentences option to SimpleNodeParser
  • c57bd11: feat: update and refactor title extractor

0.1.17

Patch Changes

  • c8396c5: feat: add base evaluator and correctness evaluator
  • c8396c5: feat: add base evaluator and correctness evaluator
  • cf87f84: fix: type backward compatibility
  • 09bf27a: Add Groq LLM to LlamaIndex
  • Updated dependencies [cf87f84]
    • @llamaindex/env@0.0.4

0.1.16

Patch ...

Read more