The simplest code:

Define:
- config_list
- llm_config

Instantiate:
- assistant agent => AssistantAgent
- user agent => UserProxyAgent

Call:
- initiate_chat

Testing several prompts on several LLMs locally (no OpenAI):
- AutoGen's response caching is a problem: rerunning the same prompt returns the cached reply instead of a fresh completion, which defeats prompt comparison
- How to disable caching? Set `"cache_seed": None` in `llm_config` (AutoGen 0.2; older releases used the `seed` key)
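A minimal sketch of the steps above, assuming the AutoGen 0.2 (`pyautogen`) API and a locally hosted OpenAI-compatible endpoint. The `base_url`, port, and model name are placeholders for whatever local server you run (e.g. LiteLLM, Ollama, FastChat); local servers typically ignore the API key.

```python
# Minimal AutoGen two-agent chat against a local OpenAI-compatible server.
from autogen import AssistantAgent, UserProxyAgent

# Define: config_list -- one entry per local model endpoint to test.
config_list = [
    {
        "model": "local-model",                  # placeholder model name
        "base_url": "http://localhost:8000/v1",  # placeholder local endpoint
        "api_key": "not-needed",                 # local servers ignore the key
    }
]

# Define: llm_config -- cache_seed=None disables AutoGen's response cache,
# so every run actually hits the model (essential when comparing prompts).
llm_config = {"config_list": config_list, "cache_seed": None}

# Instantiate: the assistant agent and the user proxy agent.
assistant = AssistantAgent(name="assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",      # fully automated, no human in the loop
    code_execution_config=False,   # no local code execution
    max_consecutive_auto_reply=1,  # one round-trip is enough for a smoke test
)

# Call: initiate_chat starts the conversation.
user_proxy.initiate_chat(assistant, message="Say hello in one sentence.")
```

Alternatively, AutoGen 0.2 exposes a `Cache` context manager (`from autogen import Cache; with Cache.disk(cache_seed=1) as cache: ...`) to scope caching per call instead of per agent.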