
Next: Allow TIs to be either a key or a name in the prompt during our transition to using keys #5817

Merged
merged 5 commits into from
Feb 28, 2024

Conversation

@brandonrising brandonrising commented Feb 27, 2024

What type of PR is this? (check all applicable)

  • Refactor
  • Feature
  • Bug Fix
  • Optimization
  • Documentation Update
  • Community Node Submission

Have you discussed this change with the InvokeAI team?

  • Yes
  • No, because:

Have you updated all relevant documentation?

  • Yes
  • No

Description

While we decide the best avenue for passing TIs/embeddings to the backend, this PR allows the TI trigger word to be either the key of the TI model or its name.

Related Tickets & Documents

  • Related Issue #5804

@github-actions github-actions bot added python PRs that change python files invocations PRs that change invocations labels Feb 27, 2024
@maryhipp (Collaborator)

This worked well for me locally using both name and key in the prompt. The invocation throws if an incompatible base model is used; not sure if that is new behavior or not. I would expect it to just use the embedding as plaintext, I think, but not sure.

@brandonrising (Collaborator, Author)

> This worked well for me locally using both name and key in the prompt. The invocation throws if an incompatible base model is used; not sure if that is new behavior or not. I would expect it to just use the embedding as plaintext, I think, but not sure.

It probably makes sense to add that exception to the catch block and let it pass through

@hipsterusername (Member)

> > This worked well for me locally using both name and key in the prompt. The invocation throws if an incompatible base model is used; not sure if that is new behavior or not. I would expect it to just use the embedding as plaintext, I think, but not sure.
>
> It probably makes sense to add that exception to the catch block and let it pass through

Agree - If it's invalid, should just pass through as text.

@brandonrising (Collaborator, Author)

Should never fail a graph for any exception while trying to load TIs now @hipsterusername @maryhipp
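The pass-through behavior agreed on above can be illustrated with a minimal sketch, assuming a token-by-token prompt expansion; `expand_prompt` and `load_ti` are hypothetical names, not the actual invocation code:

```python
# Minimal sketch (assumed shapes, not InvokeAI's invocation code):
# any exception while loading a TI is swallowed, and the trigger
# token stays in the prompt as plain text instead of failing the graph.

def expand_prompt(prompt_tokens: list[str], load_ti) -> list[str]:
    """Replace loadable TI triggers; leave every other token untouched."""
    out = []
    for token in prompt_tokens:
        try:
            embedding = load_ti(token)  # may raise for unknown/incompatible TIs
            out.append(f"<ti:{embedding}>")
        except Exception:
            # Never fail the whole graph over a bad TI: keep the raw token.
            out.append(token)
    return out
```

The broad `except Exception` is deliberate here: during the transition, any failure mode (missing model, incompatible base model) should degrade to plaintext rather than abort the graph.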

@brandonrising brandonrising merged commit 8c6860a into next Feb 28, 2024
7 of 8 checks passed
@brandonrising brandonrising deleted the next-refractor-ti-injection branch February 28, 2024 14:49