
🎉 feat: Optimizations and Anthropic Title Generation #2184

Merged — 12 commits merged into main from minor-updates on Mar 24, 2024
Conversation

danny-avila (Owner) commented Mar 23, 2024

Summary

  • Add conversation titling using Anthropic Function Calling in the AnthropicClient

  • Optimize saveMessage calls mid-stream via throttling to improve performance

  • Ensure the assistant model is always the default when creating new chats:

    • Update the last conversation setup with the current assistant model and call newConvo again once assistants load, allowing a fast initial load while ensuring the assistant model, not the last selected model, is the default
  • Explicitly add TTL of 2 minutes when setting titleCache and add default TTL of 10 minutes to abortKeys cache for better caching control

  • Consolidate addMetadata operations in BaseClient to reduce code duplication

  • Add claude-3-haiku-20240307 to default Anthropic model list

  • Attempt to specify the correct model mapping for Azure as accurately as possible, addressing the Assistant model display error reported in #2177

  • Fix an unhandled edge case in conversation grouping (undefined conversation)

  • Improve the Search Bar style following the recent UI update (light mode adjusted as well)

  • before:

    • focus & default [screenshot]
    • hover [screenshot]
  • after:

    • default [screenshot]
    • focus & hover [screenshot]
  • Always show code option in General Settings
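The title-generation change above can be sketched as follows. This is a hedged illustration, not LibreChat's actual AnthropicClient code: the tool name `submit_title`, its schema, and the response shape (an Anthropic Messages API style `tool_use` content block) are assumptions made for the example.

```javascript
// Illustrative tool definition: the model is asked to "call" a title tool
// so we can read the title out of a structured response instead of
// parsing free-form text. Names and schema are assumptions.
const titleTool = {
  name: 'submit_title',
  description: 'Submit a short, descriptive conversation title (5 words or fewer).',
  input_schema: {
    type: 'object',
    properties: { title: { type: 'string' } },
    required: ['title'],
  },
};

// Build the single-turn prompt that asks for a title.
function buildTitlePrompt(userMessage) {
  return [
    {
      role: 'user',
      content: `Generate a concise title for a conversation that starts with:\n\n"${userMessage}"`,
    },
  ];
}

// Pull the title out of a tool_use content block in the response payload.
function extractTitle(response) {
  const block = (response.content || []).find(
    (b) => b.type === 'tool_use' && b.name === 'submit_title',
  );
  return block ? block.input.title : null;
}
```

Reading the title from a structured tool call avoids brittle string parsing of the model's prose.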
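The mid-stream `saveMessage` throttling can be sketched like this. The helper and its names are illustrative (the actual code may use a library throttle), but the pattern is the same: persist partial output at most once per interval while tokens stream in, then flush the final state when the stream ends.

```javascript
// Sketch of mid-stream save throttling (assumed helper names).
// Saves the latest partial message at most once per intervalMs,
// and exposes flush() to persist whatever is pending at stream end.
function createThrottledSave(saveMessage, intervalMs = 3000) {
  let lastSave = 0;
  let pending = null;

  return {
    save(message) {
      const now = Date.now();
      pending = message;
      if (now - lastSave >= intervalMs) {
        lastSave = now;
        const toSave = pending;
        pending = null;
        return saveMessage(toSave);
      }
      return null; // skipped; a later save or flush() persists it
    },
    flush() {
      if (pending !== null) {
        const toSave = pending;
        pending = null;
        return saveMessage(toSave);
      }
      return null;
    },
  };
}
```

This turns one database write per streamed chunk into one write per interval plus a final flush.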
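A minimal sketch of the TTL behavior described above. LibreChat's cache layer wraps a shared store; this toy class only illustrates the expiry semantics and the two TTL values mentioned in the summary.

```javascript
// Toy TTL cache: each entry carries an expiry timestamp; reads past
// the expiry delete the entry and return undefined.
class TTLCache {
  constructor(defaultTtlMs) {
    this.defaultTtlMs = defaultTtlMs;
    this.store = new Map();
  }
  set(key, value, ttlMs = this.defaultTtlMs) {
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
  }
  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }
}

const titleCache = new TTLCache(2 * 60 * 1000);  // explicit 2-minute TTL
const abortKeys = new TTLCache(10 * 60 * 1000);  // 10-minute default TTL
```

Bounding both caches with TTLs keeps stale titles and orphaned abort keys from accumulating.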
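The conversation-grouping fix can be illustrated with a sketch (field names like `updatedAt` are assumptions, not the actual schema): an undefined entry in the conversation list is skipped instead of dereferenced.

```javascript
// Group conversations by date, tolerating undefined entries
// (the previously unhandled edge case).
function groupConversations(convos) {
  const groups = new Map();
  for (const convo of convos) {
    if (!convo) continue; // edge case: undefined conversation
    const key = convo.updatedAt ? convo.updatedAt.slice(0, 10) : 'unknown';
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(convo);
  }
  return groups;
}
```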

Checklist

  • My code adheres to this project's style guidelines
  • I have performed a self-review of my own code
  • I have commented my code, particularly in complex areas
  • I have made pertinent documentation changes
  • My changes do not introduce new warnings
  • I have written tests demonstrating that my changes are effective or that my feature works
  • Local unit tests pass with my changes

@danny-avila danny-avila merged commit 1f0fb49 into main Mar 24, 2024
3 checks passed
@danny-avila danny-avila deleted the minor-updates branch March 24, 2024 00:21
jinzishuai pushed a commit to aitok-ai/LibreChat that referenced this pull request May 20, 2024
* feat: add claude-3-haiku-20240307 to default anthropic list

* refactor: optimize `saveMessage` calls mid-stream via throttling

* chore: remove addMetadata operations and consolidate in BaseClient

* fix(listAssistantsForAzure): attempt to specify correct model mapping as accurately as possible (danny-avila#2177)

* refactor(client): update last conversation setup with current assistant model, call newConvo again when assistants load to allow fast initial load and ensure assistant model is always the default, not the last selected model

* refactor(cache): explicitly add TTL of 2 minutes when setting titleCache and add default TTL of 10 minutes to abortKeys cache

* feat(AnthropicClient): conversation titling using Anthropic Function Calling

* chore: remove extraneous token usage logging

* fix(convos): unhandled edge case for conversation grouping (undefined conversation)

* style: Improved style of Search Bar after recent UI update

* chore: remove unused code, content part helpers

* feat: always show code option
kenshinsamue pushed a commit to intelequia/LibreChat that referenced this pull request Sep 17, 2024