🚀 Describe the new functionality needed
Currently, the remote vLLM inference provider only supports a single tool call function per agent. For example, if you run https://github.com/meta-llama/llama-stack-apps/blob/main/examples/agents/e2e_loop_with_client_tools.py, only the first function passed to the `client_tools` argument of `AgentConfig` is used; any additional tools are silently ignored. A minimal sketch of the setup is below.
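For reference, a sketch of the failing configuration, adapted from the linked example. The model id, base URL, and tool bodies are placeholders, and the exact import paths and `Agent` constructor signature may differ across `llama-stack-client` SDK versions:

```python
from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent
from llama_stack_client.lib.agents.client_tool import client_tool
from llama_stack_client.types.agent_create_params import AgentConfig


@client_tool
def get_weather(city: str) -> str:
    """Return a (fake) weather report for a city.

    :param city: the city to look up
    """
    return f"Sunny in {city}"


@client_tool
def get_time(city: str) -> str:
    """Return a (fake) local time for a city.

    :param city: the city to look up
    """
    return f"12:00 in {city}"


client = LlamaStackClient(base_url="http://localhost:5001")  # placeholder endpoint

agent_config = AgentConfig(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model id
    instructions="You are a helpful assistant.",
    client_tools=[t.get_tool_definition() for t in (get_weather, get_time)],
    enable_session_persistence=False,
)

# Two client tools are registered here, but with the remote vLLM provider
# only the first one (get_weather) ends up available to the model.
agent = Agent(client, agent_config, (get_weather, get_time))
```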
💡 Why is this needed? What if we don't build it?
Users won't be able to use multiple client tool call functions with an agent when running against the remote vLLM provider, which makes the provider unusable for any agent workflow that relies on more than one tool.