[Bug]: Function calling in groupchat does not work #1440
Comments
Update: from testing, this seems to occur only if there is more than one user proxy in a group. When I have only one user proxy in a groupchat, function calling works correctly.
@LUK3ARK I tried the sample notebook, it didn't raise any issue. https://github.com/microsoft/autogen/blob/fix_1440/notebook/agentchat_groupchat_RAG.ipynb Could you share your notebook? Thanks.
Here is the script I ran:
Here is the execution result:

```
Boss (to chat_manager):

How to use spark for parallel training in FLAML? Give me sample code.

INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"

To use Spark for parallel training in FLAML, you need to follow these steps:

from flaml import AutoML
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("FLAML with Spark") \
    .getOrCreate()

data = spark.read.format("csv").option("header", "true").load("path_to_your_data.csv")

automl = AutoML()
automl.initialize(spark=spark)

settings = {
    "time_budget": 60,        # total running time in seconds
    "metric": 'accuracy',     # primary metric for optimization
    "task": 'classification', # task type
    "n_jobs": -1,             # number of parallel jobs, -1 means using all available cores
    "log_file_name": 'flaml.log', # flaml log file
}

automl.fit(data, **settings)

best_model = automl.best_model
predictions = best_model.predict(data)

spark.stop()

That's it! You have successfully used Spark for parallel training in FLAML. Remember to adjust the settings and data loading code according to your specific use case.

TERMINATE
```

I have tried with gpt-3.5-turbo-0613 and gpt-4. However, when I made the script again just now, I was not able to reproduce it even making the function call in the first place. I have been able to get my use case working, so this is not a burning issue anymore, but I had to go a different way around; I am still unable to just copy the setup and run it.
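As an aside, the generated answer in the log above does not match FLAML's documented Spark integration (I am not aware of any `AutoML.initialize` method in FLAML). Per the FLAML docs, Spark-parallel tuning is enabled through the fit settings instead; the sketch below shows only that config fragment, with the knob names (`use_spark`, `n_concurrent_trials`) assumed from the docs and worth verifying against your installed version:

```python
# Sketch of FLAML fit settings for Spark-parallel tuning (config only,
# not a runnable training job). "use_spark" and "n_concurrent_trials"
# are assumed from the FLAML docs; verify against your flaml version.
settings = {
    "time_budget": 60,        # total tuning time in seconds
    "metric": "accuracy",     # primary metric for optimization
    "task": "classification", # task type
    "use_spark": True,        # run trials as Spark jobs instead of locally
    "n_concurrent_trials": 2, # number of trials to run in parallel
}
print(settings["use_spark"], settings["n_concurrent_trials"])
```

With settings like these you would call `automl.fit(...)` on a pandas-on-Spark dataframe rather than initializing AutoML with a Spark session.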
@LUK3ARK I had the same problem, how did you fix it?
I rolled back the version to 0.2.3 and the issue stopped happening.
@thinkall could you take note of this thread and inform users in your RAG refactor roadmap?
* Reproduce microsoft#1440
* Updated code with latest APIs
* Reran notebook
* Fix usage of cache (microsoft#1661)
Describe the bug
I picked out the RAG groupchat example from the notebook and made several variations of it. I have been able to get it working if I chat only with the retrieve agent, but when I initiate a conversation with the group manager, it suggests a function call and then fails.
When I run the agentchat_groupchat_RAG.ipynb example, it suggests calling the retrieve function and then fails with this error:
openai.BadRequestError: Error code: 400 - {'error': {'message': "None is not of type 'array' - 'messages.2.tool_calls'", 'type': 'invalid_request_error', 'param': None, 'code': None}}
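The 400 suggests the group chat manager is forwarding an assistant message whose `tool_calls` field is `None`; the OpenAI chat completions endpoint requires that key, when present, to be an array, so the field has to be stripped before the request. A minimal sketch of such a sanitizer (a hypothetical helper for illustration, not autogen's actual fix):

```python
def sanitize_messages(messages):
    """Drop None-valued optional fields (e.g. tool_calls) that the
    OpenAI chat completions endpoint rejects with a 400."""
    return [{k: v for k, v in msg.items() if v is not None} for msg in messages]

msgs = [
    {"role": "user", "content": "call the retrieve function"},
    {"role": "assistant", "content": "ok", "tool_calls": None},  # offending shape
]
print(sanitize_messages(msgs)[1])  # the tool_calls key is removed entirely
```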
I get this error with multiple different functions.
I have been debugging this for a while, and my current conclusion is that this bug comes from the groupchat manager itself. I tried hardcoding it so that an empty list is provided instead of None, but then it complains that it is still too short.
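That second failure is consistent with my reading of the API's validation rules: `tool_calls`, when present on an assistant message, must be a non-empty array, so neither `None` nor `[]` passes, and the key has to be omitted entirely. A toy validator mimicking the two errors seen in this thread (my interpretation of the error messages, not OpenAI's actual code):

```python
def validate_assistant_message(msg):
    """Mimic the two 400s seen in this thread: None is not an array,
    and an empty array is 'too short'. Returns an error string, or
    None if the message would pass."""
    if "tool_calls" not in msg:
        return None  # omitting the key entirely is valid
    calls = msg["tool_calls"]
    if not isinstance(calls, list):
        return "None is not of type 'array' - 'messages.N.tool_calls'"
    if len(calls) == 0:
        return "[] is too short - 'messages.N.tool_calls'"
    return None

print(validate_assistant_message({"role": "assistant", "content": "hi"}))
print(validate_assistant_message({"role": "assistant", "tool_calls": None}))
print(validate_assistant_message({"role": "assistant", "tool_calls": []}))
```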
I know that OpenAI is deprecating function calling in favor of tool calling and wanted to know whether this is expected behaviour or whether I am missing something vital.
Again, this seems to happen only when a manager is in the middle of the conversation; I have seen other agents call functions within a chat without issue.
Steps to reproduce
Copy the agentchat_groupchat_RAG.ipynb and run it
Expected Behavior
When using the call RAG function, it should call the retrieve content function correctly.
I expect to be able to reproduce the same results as the examples out of the box.
Screenshots and logs
lib/python3.11/site-packages/openai/_base_client.py", line 960, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "None is not of type 'array' - 'messages.2.tool_calls'", 'type': 'invalid_request_error', 'param': None, 'code': None}}
Additional Information
The issue is consistent in all versions of pyautogen down to 2.0.4; below that it kind of works, but other issues start to tangle together.