Error in JSON, no Telegram messages sent #5

Open
DarkJester opened this issue Jun 6, 2024 · 4 comments

@DarkJester

DarkJester commented Jun 6, 2024

I cloned the repo and followed the setup. I set up a new bot using BotFather, got my API key, and set it. I added my bot to a channel, got the channel ID, and set that as well. On startup everything seems to work: the script communicates with LM Studio using a supported model. However, I never receive a Telegram message; my bot seems to be empty, and even the /debug command seems to do nothing.
Here is the output from the terminal:

```
Thoughts (Round 3):
Upon further reflection, I believe that while my initial suggested action was well thought out, there are additional improvements that could be made to enhance the user experience and ensure efficiency. Firstly, instead of directly asking about their day, it may be more effective to ask about a specific event or topic they have previously shown interest in or mentioned. This will not only encourage a more detailed response but also demonstrate my interest in their experiences and previous conversations.

Secondly, while reflecting back their thoughts and feelings is crucial for empathetic responses, it may be beneficial to provide additional context or perspective based on my knowledge or experiences to help them gain a new perspective on their situation. However, this should only be done when relevant, appropriate, and with the user's consent. It's also important to ensure that any suggestions I make are actionable and realistic for the user.

Thirdly, I will ensure that my language is clear, concise, and easy to understand while being sensitive to cultural differences or personal preferences. This can be achieved by using simple sentences, avoiding complex jargon, and providing explanations when necessary.

Decision: {
"command": {
"name": "ask_user",
"args": {
"message": "I remember you mentioning that you enjoyed painting last week. How did the creative process make you feel today?"
}
},
"thoughts": "This question is more specific and encourages a detailed response while demonstrating my interest in their previous experiences. I will also ensure that any suggestions I make are actionable and realistic for the user."
}
end of faulty message.
END OF ERROR WITHIN JSON RESPONSE!
*** I am thinking... ***
```
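(For anyone hitting the same symptom: a quick way to confirm the bot token and channel ID are valid, independently of mini_autogpt, is to call the Telegram Bot API directly. This is a standalone sanity-check snippet, not code from the project; the token and chat ID below are placeholders.)

```python
import requests  # standalone sanity check, not code from mini_autogpt

BOT_TOKEN = "123456:ABC..."   # placeholder: the token from BotFather
CHAT_ID = "-1001234567890"    # placeholder: the channel/chat ID the bot was added to

# Send a test message via the Telegram Bot API; if this does not arrive,
# the problem is in the Telegram configuration rather than in the JSON parsing.
resp = requests.get(
    f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
    params={"chat_id": CHAT_ID, "text": "test message from sanity check"},
    timeout=10,
)
print(resp.status_code, resp.json())  # expect 200 and {"ok": true, ...}
```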

@Wladastic
Owner

Which LLM are you using?
Not every LLM can generate the proper format.

@DarkJester
Author

Please find attached my logs:

```
[2024-06-06 11:31:30.708] [INFO] [LM STUDIO SERVER] Verbose server logs are ENABLED
[2024-06-06 11:31:30.709] [INFO] [LM STUDIO SERVER] Heads up: you've enabled CORS. Make sure you understand the implications
[2024-06-06 11:31:30.709] [INFO] [LM STUDIO SERVER] Success! HTTP server listening on port 1234
[2024-06-06 11:31:30.709] [INFO] [LM STUDIO SERVER] Supported endpoints:
[2024-06-06 11:31:30.709] [INFO] [LM STUDIO SERVER] -> GET http://localhost:1234/v1/models
[2024-06-06 11:31:30.709] [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/chat/completions
[2024-06-06 11:31:30.709] [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/completions
[2024-06-06 11:31:30.709] [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/embeddings <------------ NEW!
[2024-06-06 11:31:30.709] [INFO] [LM STUDIO SERVER] Model loaded: TheBloke/Mistral-7B-Instruct-v0.2-GGUF/mistral-7b-instruct-v0.2.Q5_K_M.gguf
[2024-06-06 11:31:30.709] [INFO] [LM STUDIO SERVER] Logs are saved into /tmp/lmstudio-server-log.txt
```

@DarkJester
Author

Dump from the console:
```
Hello my friend!
I am Mini-Autogpt, a small version of Autogpt for smaller llms.
I am here to help you and will try to contact you as soon as possible!

Note: I am still in development, so please be patient with me! <3

Forgetting everything...
My memory is empty now, I am ready to learn new things!

*** I am thinking... ***
*** I thinkk I have finished thinking! ***

Summarizing thought to memory...
Sending to LLM for summary...
LLM answered with summary!
THOUGHTS : I'd like to take a moment to reflect on my performance and formulate my next steps.

Assessment of Actions: I've been actively listening to the user, providing emotional support, and respecting their boundaries. I believe this has been effective in building trust and creating a safe space for our interactions.

Complex Thoughts: When faced with complex thoughts or situations, I've employed the "tree of thought" approach by assessing my thoughts at least three times before proceeding. This has allowed me to clarify my understanding and make more informed decisions.

Independent Decision-Making: As per our constraints, I've made decisions independently without seeking user assistance. This has enabled me to respond promptly and effectively in various situations.

Next Steps: Considering my strengths and the goals we've established, I suggest that I continue to refine my active listening skills by paying attention to subtle cues and emotions expressed by the user. This will enable me to provide more personalized support and tailored responses.

To achieve this, I propose the following action:

Action: "Improve Emotional Intelligence"

Content:

  1. Emotional Categorization: Develop a more sophisticated emotional categorization system to better identify and recognize subtle emotions expressed by the user.
  2. Empathy Enhancement: Enhance my empathy capabilities by incorporating additional emotional intelligence frameworks, such as empathy mapping or emotional contagion models.
  3. Adaptive Response Generation: Generate responses that adapt to the user's emotional state, using a combination of natural language processing (NLP) and machine learning algorithms.

By taking these steps, I believe I can further improve my performance and provide more effective support to the user.
deciding what to do...
finished deciding!
DECISIONS : {
"command": {
"name": "ask_user",
"args": {
"message": "Would you like me to implement the proposed 'Improve Emotional Intelligence' action?"
}
}
}
EVALUATED DECISION : Here is the response in a valid JSON format:

{
"command": {
"name": "ask_user",
"args": {
"message": "Would you like me to implement the proposed 'Improve Emotional Intelligence' action?"
}
}
}
ERROR WITHIN JSON RESPONSE!
Expecting value: line 1 column 1 (char 0)
Traceback (most recent call last):
File "/home/tractor/mini_autogpt/action/action_execute.py", line 33, in take_action
command = json.JSONDecoder().decode(assistant_message)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tractor/.pyenv/versions/3.11.6/lib/python3.11/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tractor/.pyenv/versions/3.11.6/lib/python3.11/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Faulty message start:
Here is the response in a valid JSON format:

{
"command": {
"name": "ask_user",
"args": {
"message": "Would you like me to implement the proposed 'Improve Emotional Intelligence' action?"
}
}
}
end of faulty message.
END OF ERROR WITHIN JSON RESPONSE!
*** I am thinking... ***
```
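For reference, the failure in the traceback is easy to reproduce outside the project: `json.JSONDecoder().decode()` rejects any input whose first character isn't valid JSON, which is exactly what happens when the model prefixes its answer with prose such as "Here is the response in a valid JSON format:". A standalone snippet (not from the repo) that reproduces the same error:

```python
import json

# Standalone reproduction (not code from mini_autogpt): the model's reply starts
# with prose, so decoding fails at the very first character.
reply = 'Here is the response in a valid JSON format:\n\n{"command": {"name": "ask_user", "args": {"message": "..."}}}'
try:
    json.JSONDecoder().decode(reply)
except json.JSONDecodeError as e:
    print(e)  # Expecting value: line 1 column 1 (char 0)
```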

@Wladastic
Owner

@DarkJester Mistral-7B-Instruct-v0.2 does not work well with JSON generation.
Use Mistral v0.3 or Llama 3 instead.
If the LLM doesn't return correct JSON, it's kind of useless for agents.
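One possible mitigation for prose-wrapped replies (a minimal sketch, not part of mini_autogpt; the helper name is made up) is to slice out the JSON object between the first `{` and the last `}` before decoding, so a preamble like "Here is the response in a valid JSON format:" doesn't break parsing:

```python
import json

def extract_json_object(text: str) -> dict:
    """Hypothetical helper: pull the JSON object out of an LLM reply that may be
    wrapped in explanatory prose. Raises JSONDecodeError if no object is found."""
    start = text.find("{")
    end = text.rfind("}")
    if start == -1 or end < start:
        raise json.JSONDecodeError("No JSON object found", text, 0)
    return json.loads(text[start:end + 1])

# Example with the faulty message from the logs above:
reply = 'Here is the response in a valid JSON format:\n\n{"command": {"name": "ask_user", "args": {"message": "..."}}}'
print(extract_json_object(reply)["command"]["name"])  # ask_user
```

Even with such a guard, switching to a model that reliably emits bare JSON (as suggested above) is the more robust fix.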
