Model responds normally despite prompt set to "answer only with questions" #140611
Replies: 2 comments
-
💬 Your Product Feedback Has Been Submitted 🎉 Thank you for taking the time to share your insights with us! Your feedback is invaluable as we build a better GitHub experience for all our users. Here's what you can expect moving forward ⏩
Where to look to see what's shipping 👀
What you can do in the meantime 💻
As a member of the GitHub community, your participation is essential. While we can't promise that every suggestion will be implemented, we want to emphasize that your feedback is instrumental in guiding our decisions and priorities. Thank you once again for your contribution to making GitHub even better! We're grateful for your ongoing support and collaboration in shaping the future of our platform. ⭐
-
It seems the model is not strictly adhering to the prompt because of how it interprets instructions. Models are designed to process context and generate responses, but they are not always reliable at following highly restrictive directives like "only respond with questions." This behavior can stem from the model's tendency to produce a complete answer, reinforced by training data that emphasizes informative responses. In practice, an instruction like this is often treated as a suggestion rather than an absolute rule, which is a known limitation of prompt control. You could experiment with more explicit phrasing or with system-level instructions, but some of this comes down to inherent constraints in how the model processes language.
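If you want to experiment with the system-instruction route, here is a minimal sketch of moving the constraint into a system message and adding a client-side check on the reply. It assumes an OpenAI-compatible chat API; the endpoint and token environment variables and the model name are placeholders, not the exact setup where the issue was observed.

```python
# Sketch only: assumes an OpenAI-compatible chat endpoint. The base_url,
# token variable, and model name below are placeholders.
import os

from openai import OpenAI

client = OpenAI(
    base_url=os.environ.get("CHAT_API_BASE_URL"),  # placeholder endpoint
    api_key=os.environ["CHAT_API_TOKEN"],          # placeholder credential
)

SYSTEM_RULE = (
    "You must reply only with questions. Every sentence you produce must "
    "end with a question mark. Never answer in declarative form."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_RULE},
        {"role": "user", "content": "Explain how HTTP caching works."},
    ],
    temperature=0,  # lower temperature tends to help instruction following
)

reply = response.choices[0].message.content

# Rough client-side check: flag declarative sentences so the caller can
# retry or post-process, since the model may still ignore the rule.
chunks = [c.strip() for c in reply.replace("!", ".").split(".") if c.strip()]
if any(not c.endswith("?") for c in chunks):
    print("Constraint violated; consider retrying or tightening the system rule.")

print(reply)
```

Even with a strict system rule and temperature 0, the model can still drift into declarative sentences, so a retry or post-processing step on the client side is usually the more reliable way to enforce this kind of constraint.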
-
Select Topic Area
Bug
Body
Model responds normally despite prompt set to "answer only with questions"
I encountered an issue where I set the prompt to "answer only with questions," but the model didn't follow that instruction. Instead of responding only with questions as expected, it answered normally, which looks like a bug in the behavior.
Has anyone else experienced this? Could this be a limitation in how prompts are interpreted? Any thoughts on why the model didn’t adhere to the prompt?