Releases: logancyang/obsidian-copilot
2.7.12
2.7.11 (#970)
Happy holidays everyone! Thanks for your support in 2024! The highlight of this update is a MUCH faster indexing process with batch embedding, and a small but strong embedding model (stronger than OpenAI's large embedding model) exclusive to Plus users, called copilot-plus-small. It just works with a Plus license key! Let me know how it goes!
Improvements
- #969 Enable batch embedding and add the experimental copilot-plus-small embedding model for Plus users @logancyang
- #964 Increase the number of partitions; skip empty files during indexing @logancyang
- #958 Update system prompt to better handle the user's language and LaTeX equations @logancyang
Bug Fixes
- #961 Fix Radix portal @zeroliu
- #967 Fix lost embeddings critical bug @logancyang
- #952 Add a small delay to avoid race conditions @logancyang
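For readers curious how the batch embedding speedup above works in principle: instead of one API request per chunk, chunks are grouped into fixed-size batches and each batch is embedded in a single call. The following is a minimal TypeScript sketch of that idea; the names (`toBatches`, `embedAllChunks`, `batchSize`, `EmbedFn`) are illustrative, not the plugin's actual API.

```typescript
// A function that embeds many texts in one request, as most embedding
// APIs (and the plugin's providers) support.
type EmbedFn = (texts: string[]) => Promise<number[][]>;

// Split an array into consecutive batches of at most `batchSize` items.
function toBatches<T>(items: T[], batchSize: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

// Embed all chunks with far fewer round-trips than one call per chunk.
async function embedAllChunks(
  chunks: string[],
  embed: EmbedFn,
  batchSize = 16
): Promise<number[][]> {
  const vectors: number[][] = [];
  for (const batch of toBatches(chunks, batchSize)) {
    vectors.push(...(await embed(batch)));
  }
  return vectors;
}
```

With a batch size of 16, a 1,600-chunk vault needs 100 requests instead of 1,600, which is where most of the indexing speedup comes from.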
2.7.10
A BIG update incoming!
- A more robust indexing module is introduced. Partitioned indexing can handle extremely large vaults now!
- LM Studio has been added as an embedding provider. It's lightning-fast!
- A "Verify Connection" button is added when you add a Custom Model, so you can check if it works before you add it! (This was first implemented by @Emt-lin, updated by @logancyang)
Check out the details below!
Improvements
- Big upgrade to the indexing logic for a more robust UX
- Enable incremental indexing. Now "refresh index" respects inclusion/exclusion filters
- Implement partitioning logic for large vaults
- Inclusion filters no longer eclipse exclusion filters.
- Add a Stop indexing button
- Add the "Remove files from Copilot index" command, which takes the same list format as "List indexed files"
- Add a confirmation modal for actions in settings that trigger reindexing
- Add LM Studio to embedding providers
- Add a Verify Connection button for adding custom models.
- Update the max sources setting to 30 per user request. Be warned: a large number of sources may lead to poor answer quality with weaker chat models
- Add metadata to context; now you can directly ask "what files did I create/modify in (time period)"
Bug Fixes
- Fix safeFetch for third-party APIs with CORS enabled, including the Moonshot API, Perplexity API, etc.
- Fix time-based queries for some special cases
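The partitioning mentioned above can be pictured as follows: files are deterministically hashed to a fixed number of partitions, so each on-disk index stays small and only the partitions whose files changed need rewriting. This is a hypothetical sketch; `partitionFor`, `groupByPartition`, and the djb2 hash choice are assumptions, not the plugin's real implementation.

```typescript
// Simple stable string hash (djb2 variant), kept unsigned via >>> 0.
function hash(s: string): number {
  let h = 5381;
  for (let i = 0; i < s.length; i++) {
    h = ((h << 5) + h + s.charCodeAt(i)) >>> 0;
  }
  return h;
}

// A given file path always maps to the same partition, which is what
// makes incremental re-indexing cheap: unchanged partitions stay on disk.
function partitionFor(path: string, numPartitions: number): number {
  return hash(path) % numPartitions;
}

// Group a vault's file paths by their target partition.
function groupByPartition(
  paths: string[],
  numPartitions: number
): Map<number, string[]> {
  const groups = new Map<number, string[]>();
  for (const p of paths) {
    const k = partitionFor(p, numPartitions);
    const bucket = groups.get(k);
    if (bucket) bucket.push(p);
    else groups.set(k, [p]);
  }
  return groups;
}
```

The design choice worth noting is determinism: because partition assignment depends only on the path, a re-index run can load, update, and save one partition at a time instead of holding one giant index in memory.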
2.7.9
[Plus] Quick Fixes
- Enhance vault search with current time info
- Fix "file already exists" error for "List indexed files"
- Fix web search request (safeFetch GET)
2.7.8
Improvements
- #916 Refresh VectorStoreManager on settings changes @logancyang
Bug fixes
- #918 Fix Brevilabs CORS issue @logancyang
- #917 Clear chat context on new chat @logancyang
2.7.7
Improvements
- #908 Add setting to exclude copilot index in obsidian sync @logancyang
- #906 Update current note in context on change @logancyang
Bug fixes
- #913 Validate or invalidate the current model when the API key is updated @logancyang
- #912 Fix Index not loaded, add better index checks for a fresh install @logancyang
- #911 Avoid using jotai default store @zeroliu
2.7.6
Critical Bug Fix
- #893 New users could not load the plugin @logancyang
2.7.5
Great news: no more "Save and Reload", thanks to @zeroliu! Settings now save automatically!
Improvements
- #890 Implement indexing checkpointing @logancyang
- #886 UX improvements (Fix long titles in context menu, chat error as AI response, etc.) @logancyang
- #882 Add user message shade @logancyang
- #881 Copilot command: list all indexed files in a markdown note @logancyang
- #874 Auto save settings @zeroliu
- Settings now automatically save after changes without requiring manual save and reload!
- #872 Add New chat confirm modal, restructure components dir @logancyang
- #851 Support certain providers to customize the base URL @Emt-lin
- #850 Fix system message handling for o1-xx models, convert systemMessage to aiMessage for compatibility @Emt-lin
- #880 Append user system prompt instead of override @logancyang
Bug fixes
- #846 Fix disappearing note in context menu @logancyang
- #845 Make open window command focus active view @zeroliu
- #887 Fix note cannot be removed bug @logancyang
- #873 Fix URL mention behavior in Chat mode @logancyang
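The indexing checkpointing from #890 follows a common pattern: persist the set of already-indexed files every N files, so an interrupted run resumes from the last checkpoint instead of starting over. Here is a minimal sketch of that pattern, assuming synchronous callbacks for brevity; all names and signatures are illustrative, not the plugin's actual code.

```typescript
// Index `files`, skipping anything finished in a prior run, and persist
// progress via `saveCheckpoint` every `checkpointEvery` newly indexed files.
function indexWithCheckpoints(
  files: string[],
  alreadyIndexed: Set<string>,
  indexOne: (path: string) => void,
  saveCheckpoint: (done: Set<string>) => void,
  checkpointEvery = 50
): Set<string> {
  const done = new Set(alreadyIndexed);
  let sinceSave = 0;
  for (const f of files) {
    if (done.has(f)) continue; // resume: skip work finished in a prior run
    indexOne(f);
    done.add(f);
    if (++sinceSave >= checkpointEvery) {
      saveCheckpoint(new Set(done)); // snapshot progress to disk
      sinceSave = 0;
    }
  }
  saveCheckpoint(new Set(done)); // final checkpoint
  return done;
}
```

The trade-off is checkpoint frequency: saving more often means less lost work after a crash but more disk writes during a long indexing run.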
2.7.4
Improvements
- #824 Improve settings and chat focus @zeroliu
- #843 Implement Copilot command "list indexed files" @logancyang
Bug fixes
- #842 Fix system message for non-openai models @logancyang
- #843 Alpha quick fixes @logancyang
- Fix message edit
- Unblock saveDB
- Skip rerank call if max score is 0
- Fix double indexing trigger at mode switch
2.7.3
Improvements
Copilot Plus Alpha is here! I've been working on this for a long time! A test license key is on its way to project sponsors and early supporters.
- Time-based Queries: Ask questions like "Give me a recap of last week @vault" or "List all highlights from my daily notes in Oct @vault". Copilot Plus understands time!
- Cursor-like Context Menu: Enjoy a more intuitive and streamlined context menu specifically designed for Plus Mode. It shows not only note titles but also PDF files and URLs!
- URL Mention Capability: Quickly reference URLs in your chat input. Copilot Plus can grab the webpage in the background!
- Vault Search with Cmd + Shift + Enter: Search your vault with a simple keyboard shortcut; this is equivalent to having @vault in your query.
- Dynamic Note Reindexing: The Copilot index is updated on note modify events in Copilot Plus mode (this is not the case in Vault QA basic mode), ensuring your data is always up to date.
- Image Support in Chat: Add and send image(s) in your chat for any LLM with vision support.
- PDF Integration in Chat Context: Easily incorporate PDF files, or notes with embedded PDFs, into your chat context.
- Web Search Functionality: Access the web directly from Copilot Plus Mode with @web.
- YouTube Transcript: Easy access to video transcripts with @youtube video_url in chat.
- #839 Add Copilot Plus suggested prompts @logancyang
- #838 Return YouTube transcript directly without LLM for long transcripts @logancyang
- #835 Introduce Copilot Plus Alpha to testers @logancyang
Bug fixes
- #826 Fix delete message in memory @logancyang
- #825 Fix "index not loaded" @logancyang
- #812 Fix model and mode menu side offset @logancyang
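To illustrate the kind of resolution behind time-based queries like "recap of last week": a relative time expression has to be mapped to a concrete date range before the vault search runs. The sketch below shows one naive way to do that; the actual Copilot Plus logic is not public, and `resolveTimeExpression` is an invented name. Millisecond arithmetic here ignores DST edge cases.

```typescript
interface DateRange {
  start: Date; // inclusive
  end: Date;   // exclusive
}

// Resolve a relative expression to a [start, end) range, given "now".
function resolveTimeExpression(expr: string, now: Date): DateRange | null {
  const day = 24 * 60 * 60 * 1000;
  const startOfDay = new Date(now.getFullYear(), now.getMonth(), now.getDate());
  switch (expr.toLowerCase()) {
    case "today":
      return { start: startOfDay, end: new Date(startOfDay.getTime() + day) };
    case "yesterday":
      return { start: new Date(startOfDay.getTime() - day), end: startOfDay };
    case "last week": {
      // Treat a week as Monday through Sunday; find last week's Monday.
      const dow = (startOfDay.getDay() + 6) % 7; // 0 = Monday
      const thisMonday = new Date(startOfDay.getTime() - dow * day);
      const lastMonday = new Date(thisMonday.getTime() - 7 * day);
      return { start: lastMonday, end: thisMonday };
    }
    default:
      return null; // unknown expression: fall back to plain vault search
  }
}
```

Once the range is known, it can filter notes by creation or modification time, which is also what makes queries like "what files did I create/modify in (time period)" answerable.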