-
As the title says. I'm using LLMCharacter.Complete to generate responses. With debug mode on it shows this: https://pastebin.com/E6jSGmUC

The completion just suspends forever; it never eventually finishes. However, I can generate responses again if I restart it after a few seconds. I suspect an error is being thrown in LlamaFile, but I'm not sure how I would detect or debug that. The prompt is exactly the same in both situations.

It may have something to do with simultaneous prompts, but sometimes simultaneous prompts work perfectly fine. I've also tried waiting for one response to finish generating before starting the next, but that just makes the game wait forever because the response never comes.

This happens regardless of other settings, like which model I use or what the prompt is, but for reference I'm using Qwen2-1.5B-Q8.0 for this test (which is giving great results apart from this!). If anyone has any experience with this issue I would love to hear about it; I've done a lot of testing and can't seem to pin it down. Thank you!
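For context, this is roughly the call pattern, reduced to a minimal sketch. I've added a timeout check here just to make the stall visible in the console; the exact Complete signature (returning a Task<string> and taking an optional streaming callback) is my assumption, so adjust it to whatever your version of the package actually exposes.

```csharp
using System;
using System.Threading.Tasks;
using LLMUnity;
using UnityEngine;

public class CompletionTest : MonoBehaviour
{
    public LLMCharacter llmCharacter;

    // Fire off a completion and warn if it never comes back.
    public async void Generate(string prompt)
    {
        // Assumed signature: Complete(prompt, streaming callback) returning Task<string>.
        Task<string> completion = llmCharacter.Complete(prompt, OnToken);

        // Race the completion against a 30 s timer so a hang at least shows up in the console.
        Task finished = await Task.WhenAny(completion, Task.Delay(TimeSpan.FromSeconds(30)));
        if (finished != completion)
        {
            Debug.LogWarning("Completion appears stuck: no response after 30 seconds.");
            return;
        }

        string text = await completion; // already completed, so this also surfaces any exception
        Debug.Log("Full response: " + text);
    }

    void OnToken(string partial)
    {
        // Streaming callback: receives the partial response as it is generated.
        Debug.Log(partial);
    }
}
```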
-
hi there 🙂! I would need some more info.
-
I don't see anything off 🤔.
-
I'm very sorry, the problem was with my code. It was throwing an error when editing the sentence, but oddly the error never appeared anywhere in the logs, which is why I thought it had to be the LLM. I wonder if that has something to do with how the onComplete method is invoked, but I'm not sure.
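For anyone who lands here with the same symptom, my best guess at the mechanism is sketched below. This is only an illustration with a made-up EditSentence stand-in, not how the package itself invokes callbacks: if the callback runs as a task continuation that nothing awaits, an exception it throws gets captured in that task and never reaches the Unity console, which matches what I saw. Wrapping the callback body in try/catch made the error visible.

```csharp
using System;
using System.Threading.Tasks;
using UnityEngine;

public class SwallowedExceptionDemo : MonoBehaviour
{
    void Start()
    {
        // Stand-in for a finished completion; the real one would come from the LLM call.
        Task<string> fakeCompletion = Task.FromResult("some generated sentence");

        // The exception thrown inside this continuation is captured in the continuation
        // task. Since nothing awaits or observes it, nothing appears in the console.
        fakeCompletion.ContinueWith(t => EditSentence(t.Result));

        // Wrapping the callback body in try/catch (or awaiting the chain) makes it visible.
        fakeCompletion.ContinueWith(t =>
        {
            try { EditSentence(t.Result); }
            catch (Exception e) { Debug.LogException(e); }
        });
    }

    void EditSentence(string sentence)
    {
        // Stand-in for my post-processing code that was throwing.
        throw new InvalidOperationException("bug while editing the sentence");
    }
}
```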
Thanks for the guidance; you definitely pointed me in the right direction.