
[BUG] output_json not working with custom_openai #2282

@WoBuGs

Description

I have a setup with Ollama and Open-WebUI. Agents perform some tasks and then output a JSON file.

I ensure that the output is valid by using the output_json parameter in my task definition:

    @task
    def my_task(self) -> Task:
        return Task(
            config=self.tasks_config['my_task'],
            tools=[],
            output_file="outputs/task_output.json",
            output_json=TaskOutput
        )
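For reference, `output_json` expects a Pydantic `BaseModel` subclass. Here is a minimal sketch of what `TaskOutput` might look like; the field names below are hypothetical, since the issue does not show the actual schema:

```python
from pydantic import BaseModel

# Hypothetical fields -- the real TaskOutput schema is not shown in this issue.
class TaskOutput(BaseModel):
    title: str
    summary: str

# output_json validates the LLM's final answer against this model before
# writing output_file; a valid payload round-trips like this:
parsed = TaskOutput.model_validate_json('{"title": "t", "summary": "done"}')
```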

So far I have used Ollama as the model provider in my crews. Now I want to move everything to the OpenAI-compatible API from Open-WebUI, to handle users, API keys, etc.

For testing, I have the following two ini files:

MODEL=ollama/granite3.2:8b
BASE_URL=http://<ollama url>:11434

and

OPENAI_MODEL_NAME=custom_openai/granite3.2:8b
OPENAI_API_BASE=https://<openwebui url>/ollama/v1
OPENAI_API_KEY=<API key>
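As a diagnostic (an assumption worth testing, not a confirmed fix): crewAI routes model strings through LiteLLM, and the provider prefix determines which code path handles structured output. A third env file using the plain `openai/` prefix against the same Open-WebUI endpoint would isolate whether the `custom_openai` prefix itself triggers the Instructor error:

```
OPENAI_MODEL_NAME=openai/granite3.2:8b
OPENAI_API_BASE=https://<openwebui url>/ollama/v1
OPENAI_API_KEY=<API key>
```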

The crew runs for both, BUT for the second one, I get the following error at the end of the execution:

 Failed to convert text into JSON, error: Instructor does not support multiple tool calls, use List[Model] instead. Using raw output instead.

The output JSON file is not valid, and this last step hangs for a few minutes.
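Until the conversion is fixed, one stdlib-only workaround (a local sketch, not part of crewAI's API) is to re-parse the raw fallback output yourself, pulling out the first `{...}` block:

```python
import json
import re

def extract_first_json(raw: str) -> dict:
    """Best-effort recovery of a JSON object from the raw fallback output.

    When the Instructor conversion fails and crewAI writes the raw text
    instead, grab the outermost {...} span and parse it with the stdlib.
    """
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in raw output")
    return json.loads(match.group(0))

# Example raw string as the agent might emit it (illustrative only):
raw_output = 'Final Answer: {"title": "report", "status": "done"} (raw)'
data = extract_first_json(raw_output)
```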

Steps to Reproduce

  1. Set up a task with the output_json parameter.
  2. Run the task using Ollama as the backend.
  3. Run the task using Open-WebUI (OpenAI-compatible API) as the backend.

Expected behavior

Valid JSON output and no error.

Screenshots/Code snippets

See description.

Operating System

Ubuntu 24.04

Python Version

3.12

crewAI Version

0.102.0

crewAI Tools Version

0.36.0

Virtual Environment

Venv

Evidence

See description.

Possible Solution

None

Additional context

None
