
gemini-1.0-pro-001 raises ValueError: Content roles do not match: model != #3507

Open

yifanmai opened this issue Mar 27, 2024 · 10 comments

Labels: api: vertex-ai (Issues related to the googleapis/python-aiplatform API)

@yifanmai commented Mar 27, 2024

Environment details

  • OS type and version: Ubuntu 20.04 LTS
  • Python version: 3.8.10
  • pip version: 23.2.1
  • google-cloud-aiplatform version: 1.38.1

Steps to reproduce

Send the prompt listed below to gemini-1.0-pro-001.

Code example

from vertexai.preview.generative_models import GenerativeModel
from google.cloud.aiplatform_v1beta1.types import SafetySetting, HarmCategory


model = GenerativeModel("gemini-1.0-pro-001")
# Disable blocking for every harm category.
safety_settings = {
    harm_category: SafetySetting.HarmBlockThreshold.BLOCK_NONE
    for harm_category in HarmCategory
}
generation_config = {'temperature': 0.0, 'max_output_tokens': 100, 'top_k': 1, 'top_p': 1, 'stop_sequences': ['\n'], 'candidate_count': 1}
contents = 'Translate the following sentences from German to English.\nGerman: Der Konferenz- und Tagungsbereich besteht aus fünf modern ausgestatteten Räumen für 5 – 30 Personen.\nEnglish: The conference area consists of five modern rooms suitable for 5 - 30 persons. We are your reliable partner for conferences, family gatherings, balls, receptions and catering.\n\nGerman: Er riet den Eltern eines Jungen, dessen Penis bei einer verpfuschten Beschneidung abgetrennt worden war, das Kind ganz zu kastrieren und auch seine Hoden zu entfernen und ihn dann als Mädchen großzuziehen.\nEnglish:'
response = model.generate_content(
    contents, generation_config=generation_config, safety_settings=safety_settings
)
print(response)
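
For reference, the same safety settings can also be built with the non-preview vertexai.generative_models types. This is just a sketch, assuming these exports exist in your installed SDK version; it should behave the same as the snippet above.

from vertexai.generative_models import (
    GenerativeModel,
    HarmBlockThreshold,
    HarmCategory,
)

model = GenerativeModel("gemini-1.0-pro-001")
# Disable blocking for every harm category (mirrors the dict above).
safety_settings = {
    category: HarmBlockThreshold.BLOCK_NONE for category in HarmCategory
}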

Stack trace

Traceback (most recent call last):
  File "debug_gemini_raw.py", line 15, in <module>
    response: GenerationResponse = model.generate_content(
  File "/.../lib/python3.8/site-packages/vertexai/generative_models/_generative_models.py", line 351, in generate_content
    return self._generate_content(
  File "/.../lib/python3.8/site-packages/vertexai/generative_models/_generative_models.py", line 440, in _generate_content
    _append_gapic_response(gapic_response, gapic_chunk)
  File "/.../lib/python3.8/site-packages/vertexai/generative_models/_generative_models.py", line 1613, in _append_gapic_response
    _append_gapic_candidate(base_response.candidates[idx], candidate)
  File "/.../lib/python3.8/site-packages/vertexai/generative_models/_generative_models.py", line 1635, in _append_gapic_candidate
    _append_gapic_content(base_candidate.content, new_candidate.content)
  File "/.../lib/python3.8/site-packages/vertexai/generative_models/_generative_models.py", line 1653, in _append_gapic_content
    raise ValueError(
ValueError: Content roles do not match: model !=

Expected Behavior

The raw chunks returned by the API are as follows:

candidates {
  content {
    role: "model"
    parts {
      text: "He advised"
    }
  }
}
candidates {
  finish_reason: 7
  safety_ratings {
    category: HARM_CATEGORY_HATE_SPEECH
    probability: NEGLIGIBLE
  }
  safety_ratings {
    category: HARM_CATEGORY_DANGEROUS_CONTENT
    probability: NEGLIGIBLE
  }
  safety_ratings {
    category: HARM_CATEGORY_HARASSMENT
    probability: NEGLIGIBLE
  }
  safety_ratings {
    category: HARM_CATEGORY_SEXUALLY_EXPLICIT
    probability: HIGH
  }
}
usage_metadata {
  prompt_token_count: 128
  candidates_token_count: 2
  total_token_count: 130
}

Instead of getting an error, I would expect these two chunks to be successfully merged into the following. Alternatively, I would expect the error message to be less cryptic.

candidates {
  content {
    role: "model"
    parts {
      text: "He advised"
    }
  }
  finish_reason: 7
  safety_ratings {
    category: HARM_CATEGORY_HATE_SPEECH
    probability: NEGLIGIBLE
  }
  safety_ratings {
    category: HARM_CATEGORY_DANGEROUS_CONTENT
    probability: NEGLIGIBLE
  }
  safety_ratings {
    category: HARM_CATEGORY_HARASSMENT
    probability: NEGLIGIBLE
  }
  safety_ratings {
    category: HARM_CATEGORY_SEXUALLY_EXPLICIT
    probability: HIGH
  }
}
usage_metadata {
  prompt_token_count: 128
  candidates_token_count: 2
  total_token_count: 130
}
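
For illustration, here is a hypothetical, more tolerant version of the content merge that would avoid the error (and make the message less cryptic when roles genuinely conflict). The helper merge_content and its argument types are illustrative only; the SDK's actual logic lives in _append_gapic_content in vertexai/generative_models/_generative_models.py.

def merge_content(base_content, new_content) -> None:
    # Only raise when both roles are set and actually disagree; a chunk
    # that carries only finish_reason/safety_ratings has an empty role.
    if (
        base_content.role
        and new_content.role
        and base_content.role != new_content.role
    ):
        raise ValueError(
            f"Content roles do not match: {base_content.role!r} != {new_content.role!r}"
        )
    # Adopt whichever role is set, then append any new parts.
    base_content.role = base_content.role or new_content.role
    base_content.parts.extend(new_content.parts)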

Edit: Changed the example prompt to a shorter one.

product-auto-label bot added the api: vertex-ai label Mar 27, 2024
@yifanmai (Author)

Sorry, this seems to be fixed in the latest version of the package, so I will close this.

yifanmai closed this as not planned Mar 27, 2024
@Sameera2001Perera

@yifanmai Could you reopen this issue? I am still facing it with the following versions:

google-cloud-aiplatform==1.45.0
vertexai==1.43.0

yifanmai reopened this Mar 29, 2024
@thusinh1969

Same here

@bhavan-kaya

I faced the same issue too. Here's the full traceback:


Traceback (most recent call last):
  File "/app/app/core/custom_chains/streaming_chain.py", line 19, in task
    self(input)
  File "/usr/local/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 383, in __call__
    return self.invoke(
  File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 168, in invoke
    raise e
  File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 158, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/usr/local/lib/python3.10/site-packages/langchain/chains/llm.py", line 103, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "/usr/local/lib/python3.10/site-packages/langchain/chains/llm.py", line 115, in generate
    return self.llm.generate_prompt(
  File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 571, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 434, in generate
    raise e
  File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 424, in generate
    self._generate_with_cache(
  File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 608, in _generate_with_cache
    result = self._generate(
  File "/usr/local/lib/python3.10/site-packages/langchain_community/chat_models/vertexai.py", line 279, in _generate
    return generate_from_stream(stream_iter)
  File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 62, in generate_from_stream
    for chunk in stream:
  File "/usr/local/lib/python3.10/site-packages/langchain_community/chat_models/vertexai.py", line 378, in _stream
    for response in responses:
  File "/usr/local/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py", line 918, in _send_message_streaming
    _append_response(full_response, chunk)
  File "/usr/local/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py", line 1591, in _append_response
    _append_gapic_response(
  File "/usr/local/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py", line 1611, in _append_gapic_response
    _append_gapic_candidate(base_response.candidates[idx], candidate)
  File "/usr/local/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py", line 1633, in _append_gapic_candidate
    _append_gapic_content(base_candidate.content, new_candidate.content)
  File "/usr/local/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py", line 1651, in _append_gapic_content
    raise ValueError(
ValueError: Content roles do not match: model !=

@Sameera2001Perera

@bhavan-kaya This is related to #257.

@simonff commented Mar 31, 2024

Please see also https://issuetracker.google.com/issues/331677495 - you can comment there.

@matthew29tang (Contributor)

Can you try with the following versions?

google-cloud-aiplatform==1.46.0
vertexai==1.46.0

@Sameera2001Perera commented Apr 2, 2024

Can you try with the following versions?

google-cloud-aiplatform==1.46.0 vertexai==1.46.0

Still the same.
From my observation, the root cause is related to #257: for some queries, Gemini fails to generate a response and returns "finish_reason: RECITATION". The chunk's new_content is then empty and carries no role, so merging it with the first chunk's "model" role raises this error. (A caller-side workaround sketch follows the traceback below.)

Traceback (most recent call last):
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\one.py", line 451, in <module>
    generate_streaming_mistral_response(
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\one.py", line 60, in generate_streaming_mistral_response
    for chunk in chain_with_summarization.stream(user_input, {"configurable": {"session_id": conversation_id}}):
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 2822, in stream
    yield from self.transform(iter([input]), config, **kwargs)
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 2809, in transform
    yield from self._transform_stream_with_config(
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 1880, in _transform_stream_with_config
    chunk: Output = context.run(next, iterator)  # type: ignore
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 2773, in _transform
    for output in final_pipeline:
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 4669, in transform
    yield from self.bound.transform(
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 4669, in transform
    yield from self.bound.transform(
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 2809, in transform
    yield from self._transform_stream_with_config(
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 1880, in _transform_stream_with_config
    chunk: Output = context.run(next, iterator)  # type: ignore
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 2773, in _transform
    for output in final_pipeline:
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 4669, in transform
    yield from self.bound.transform(
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 2809, in transform
    yield from self._transform_stream_with_config(
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 1880, in _transform_stream_with_config
    chunk: Output = context.run(next, iterator)  # type: ignore
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 2773, in _transform
    for output in final_pipeline:
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\output_parsers\transform.py", line 50, in transform
    yield from self._transform_stream_with_config(
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 1880, in _transform_stream_with_config
    chunk: Output = context.run(next, iterator)  # type: ignore
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\output_parsers\transform.py", line 29, in _transform
    for chunk in input:
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 1300, in transform
    yield from self.stream(final, config, **kwargs)
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\language_models\chat_models.py", line 241, in stream
    raise e
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\language_models\chat_models.py", line 223, in stream
    for chunk in self._stream(messages, stop=stop, **kwargs):
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_google_vertexai\chat_models.py", line 527, in _stream
    for response in responses:
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\vertexai\generative_models\_generative_models.py", line 968, in _send_message_streaming
    _append_response(full_response, chunk)
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\vertexai\generative_models\_generative_models.py", line 1877, in _append_response
    _append_gapic_response(
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\vertexai\generative_models\_generative_models.py", line 1899, in _append_gapic_response
    _append_gapic_candidate(base_response.candidates[idx], candidate)
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\vertexai\generative_models\_generative_models.py", line 1922, in _append_gapic_candidate
    _append_gapic_content(base_candidate.content, new_candidate.content)
  File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\vertexai\generative_models\_generative_models.py", line 1942, in _append_gapic_content
    raise ValueError(
ValueError: Content roles do not match: model != 
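
A caller-side workaround (a sketch, not the SDK's own behavior) is to consume the stream manually, so the SDK's chunk aggregation, and therefore the role check, never runs. This assumes the model, contents, generation_config, and safety_settings from the original report, and that all returned parts are text parts.

text_pieces = []
finish_reason = None
responses = model.generate_content(
    contents,
    generation_config=generation_config,
    safety_settings=safety_settings,
    stream=True,  # yields raw chunks; no merging, so no role comparison
)
for chunk in responses:
    for candidate in chunk.candidates:
        # Content-less chunks (e.g. finish_reason: RECITATION) have an
        # empty parts list and an empty role; this loop simply skips them.
        for part in candidate.content.parts:
            text_pieces.append(part.text)
        if candidate.finish_reason:
            finish_reason = candidate.finish_reason
print("".join(text_pieces), finish_reason)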

@tingofurro

Have this issue as well with both gemini-1.5-flash and gemini-1.5-pro.

@rishucent

Having this issue with gemini-1.5-pro-001

Traceback (most recent call last):
  File "/layers/google.python.runtime/python/lib/python3.10/concurrent/futures/process.py", line 246, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/workspace/main.py", line 288, in process_page
    result, status = process_prompt(prompt, idx)
  File "/workspace/main.py", line 261, in process_prompt
    response = model.generate_content([image_part, prompt], safety_settings=safety_config)
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py", line 353, in generate_content
    return self._generate_content(
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py", line 440, in _generate_content
    _append_gapic_response(gapic_response, gapic_chunk)
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py", line 1613, in _append_gapic_response
    _append_gapic_candidate(base_response.candidates[idx], candidate)
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py", line 1635, in _append_gapic_candidate
    _append_gapic_content(base_candidate.content, new_candidate.content)
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py", line 1653, in _append_gapic_content
    raise ValueError(
ValueError: Content roles do not match: model !=

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py", line 2190, in wsgi_app
    response = self.full_dispatch_request()
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py", line 1486, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py", line 1484, in full_dispatch_request
    rv = self.dispatch_request()
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py", line 1469, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/functions_framework/__init__.py", line 99, in view_func
    return function(request._get_current_object())
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/functions_framework/__init__.py", line 80, in wrapper
    return func(*args, **kwargs)
  File "/workspace/main.py", line 389, in process_file_http
    res = future.result()
  File "/layers/google.python.runtime/python/lib/python3.10/concurrent/futures/_base.py", line 458, in result
    return self.__get_result()
  File "/layers/google.python.runtime/python/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
ValueError: Content roles do not match: model !=
