Replies: 4 comments
-
This is the intended behavior, so that the captioning model can generate a natural continuation of the `start_caption_with` text. If this were instead done the way you described, the model might initially generate a caption that doesn't connect naturally with the prepended text.
-
Haha, it seems my previous understanding was wrong. When I used A1111's wd14 tagger extension or other tagging plugins before, they also had a similar prefix text box, but they simply prepended the text-box contents to the tags the model output. So I expected the same behavior from TagGUI. However, if you want the model to start its output with a certain piece of text, I think it may be more reasonable to describe that directly in the prompt.
-
I see. I haven't used those tools, so I'm not familiar with their behavior.
That can work sometimes.
-
I see. Thanks for your explanation.
-
Hi, I found that after using `start_caption_with`, the caption output by the model became different.
After checking the code, I found that in the `get_model_inputs` method, you merge `start_caption_with` and `prompt` and pass them to the model together. Is this intentional?
What I expected `start_caption_with` to do is what you do in the `get_caption_from_generated_tokens` method: simply add the contents of `start_caption_with` to the beginning of the text returned by the model.
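To make the distinction concrete, here is a minimal sketch of the two strategies being discussed. All names here (`fake_model`, `caption_merged`, `caption_prepended`) are hypothetical illustrations, not TagGUI's actual code:

```python
def fake_model(prompt: str) -> str:
    # Stand-in for a captioning model. It continues the prompt if the
    # prefix is part of its input; otherwise it writes a standalone caption.
    if prompt.endswith("A photo of a cat"):
        return " sitting on a windowsill."
    return "A cat is sitting on a windowsill."

def caption_merged(prompt: str, start_caption_with: str) -> str:
    # TagGUI-style: the prefix is merged into the model input, so the
    # model generates a natural continuation of it.
    generated = fake_model(prompt + " " + start_caption_with)
    return start_caption_with + generated

def caption_prepended(prompt: str, start_caption_with: str) -> str:
    # Tagger-extension-style: the model never sees the prefix; it is
    # simply pasted in front of whatever the model produced.
    generated = fake_model(prompt)
    return start_caption_with + " " + generated

print(caption_merged("Describe the image.", "A photo of a cat"))
# → A photo of a cat sitting on a windowsill.
print(caption_prepended("Describe the image.", "A photo of a cat"))
# → A photo of a cat A cat is sitting on a windowsill.
```

The second output shows why simply prepending can read awkwardly: the model produces a complete caption on its own, and the prefix gets stacked in front of it.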