Integration test for Opus thinking block constraints#1584

Draft
csmith49 wants to merge 13 commits into main from fix/opus-thinking

Conversation


@csmith49 csmith49 commented Jan 3, 2026

Summary

Opus 4.5 seems to occasionally respond with a malformed-signature error. This PR adds an integration test that reproduces one possible triggering scenario as simply as possible.
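For background on why a signature can end up "malformed": Anthropic's extended-thinking responses include thinking blocks carrying a signature field, and those blocks must be sent back unmodified on subsequent turns. A minimal sketch of the replay step, with illustrative stand-in message shapes (these dicts are examples, not actual API payloads):

```python
# Sketch: thinking blocks returned by the model carry a `signature` that must
# be replayed verbatim on the next request. The content shapes below are
# illustrative stand-ins, not real API responses.

def replay_assistant_turn(assistant_content: list[dict]) -> list[dict]:
    """Build the assistant content for the follow-up request, keeping
    thinking blocks (and their signatures) byte-for-byte intact."""
    replayed = []
    for block in assistant_content:
        if block["type"] == "thinking":
            # Dropping or re-serializing the signature is the kind of thing
            # that triggers "malformed signature"-style errors next turn.
            replayed.append({
                "type": "thinking",
                "thinking": block["thinking"],
                "signature": block["signature"],
            })
        else:
            replayed.append(block)
    return replayed

turn = [
    {"type": "thinking", "thinking": "plan the edit", "signature": "sig-abc123"},
    {"type": "text", "text": "I'll update the file."},
]
```

The integration test below tries to reproduce the failure by exercising exactly this replay path through the condenser.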

Checklist

  • If the PR is changing/adding functionality, are there tests to reflect this?
  • If there is an example, have you run the example to make sure that it works?
  • If there are instructions on how to run the code, have you followed the instructions and made sure that it works?
  • If the feature is significant enough to require documentation, is there a PR open on the OpenHands/docs repository with the same branch name?
  • Is the GitHub CI passing?

Agent Server images for this PR

GHCR package: https://github.com/OpenHands/agent-sdk/pkgs/container/agent-server

Variants & Base Images

Variant | Architectures | Base Image                                 | Docs / Tags
java    | amd64, arm64  | eclipse-temurin:17-jdk                     | Link
python  | amd64, arm64  | nikolaik/python-nodejs:python3.12-nodejs22 | Link
golang  | amd64, arm64  | golang:1.21-bookworm                       | Link

Pull (multi-arch manifest)

# Each variant is a multi-arch manifest supporting both amd64 and arm64
docker pull ghcr.io/openhands/agent-server:143a01d-python

Run

docker run -it --rm \
  -p 8000:8000 \
  --name agent-server-143a01d-python \
  ghcr.io/openhands/agent-server:143a01d-python

All tags pushed for this build

ghcr.io/openhands/agent-server:143a01d-golang-amd64
ghcr.io/openhands/agent-server:143a01d-golang_tag_1.21-bookworm-amd64
ghcr.io/openhands/agent-server:143a01d-golang-arm64
ghcr.io/openhands/agent-server:143a01d-golang_tag_1.21-bookworm-arm64
ghcr.io/openhands/agent-server:143a01d-java-amd64
ghcr.io/openhands/agent-server:143a01d-eclipse-temurin_tag_17-jdk-amd64
ghcr.io/openhands/agent-server:143a01d-java-arm64
ghcr.io/openhands/agent-server:143a01d-eclipse-temurin_tag_17-jdk-arm64
ghcr.io/openhands/agent-server:143a01d-python-amd64
ghcr.io/openhands/agent-server:143a01d-nikolaik_s_python-nodejs_tag_python3.12-nodejs22-amd64
ghcr.io/openhands/agent-server:143a01d-python-arm64
ghcr.io/openhands/agent-server:143a01d-nikolaik_s_python-nodejs_tag_python3.12-nodejs22-arm64
ghcr.io/openhands/agent-server:143a01d-golang
ghcr.io/openhands/agent-server:143a01d-java
ghcr.io/openhands/agent-server:143a01d-python

About Multi-Architecture Support

  • Each variant tag (e.g., 143a01d-python) is a multi-arch manifest supporting both amd64 and arm64
  • Docker automatically pulls the correct architecture for your platform
  • Individual architecture tags (e.g., 143a01d-python-amd64) are also available if needed


# Check if this atomic unit has any events with thinking blocks
for event in view.events[start_idx:end_idx]:
    if isinstance(event, ActionEvent) and event.thinking_blocks:
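For readers without the surrounding diff, a self-contained sketch of the check this excerpt performs. The names ActionEvent and thinking_blocks mirror the snippet; the event classes here are minimal stand-ins, not the SDK's real types:

```python
# Minimal stand-ins for the SDK's event types (illustrative only).
from dataclasses import dataclass, field


@dataclass
class Event:
    pass


@dataclass
class ActionEvent(Event):
    thinking_blocks: list = field(default_factory=list)


def unit_has_thinking_blocks(events: list[Event], start_idx: int, end_idx: int) -> bool:
    """Return True if any ActionEvent in the atomic unit carries thinking blocks."""
    for event in events[start_idx:end_idx]:
        if isinstance(event, ActionEvent) and event.thinking_blocks:
            return True
    return False
```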
Collaborator
Thank you! I appreciate this; I do wonder, though, whether it belongs as an integration test or as something else.

thinking_blocks are an Anthropic Claude parameter; Gemini 3 has them too, but they follow different rules (different from Gemini 2.5 as well 😅 ...); I think MiniMax has them too. I can't think of another.

Since it feels very LLM specific, I wonder if maybe a script in scripts/ would work? Alternatively, we now have an examples/ directory named llm_specific/ ... Neither seems great, idk, WDYT?

Collaborator Author

It probably is too LLM specific to be an integration test. The scaffold is just convenient for condenser tests -- repeatable, no mocked objects, arbitrary LLMs configured from outside the test, etc.

What if we made it like the behavior tests? Using the same infra as the integration tests, but triggered separately and non-blocking. This would basically become c01_test... instead.

Would be good to have some batch of tests that stress the condenser across multiple LLMs we can run on occasion to make sure behavior isn't drifting.

Collaborator

I actually have a somewhat similar problem: we want an integration test for conversation restore, where the user does a few actions with an LLM, exits, switches to another LLM, then restores the conversation.

The structure of the integration tests is almost right, except that it's actually unnecessary to run all 6 or so LLMs; the test doesn't really depend on them. It seems a bit wasteful: it matters that it "really" works, with real LLMs picking up and continuing the conversation, but it doesn't matter for the whole matrix. Maybe it matters for more than one pair (we might want a reasoning LLM paired with a non-reasoning one, for fun), but those are... different rules for choosing LLMs than the current matrix uses.

I'd call this similar, in the sense that thinking_blocks are under test, so we know exactly which LLM(s) to run it for, and can skip it for the others. If that's possible within the integration-test framework... for C01, C02, ...
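The per-feature model selection discussed above could be sketched roughly like this. Everything here is hypothetical: the model names, the capability sets, and the models_for helper are illustrative examples, not the repo's real test matrix or API:

```python
# Hedged sketch: instead of running every integration test across the full
# LLM matrix, each test declares the capability it exercises and only the
# matching models run it. Names and feature sets below are made up.

MODEL_MATRIX = {
    "claude-opus": {"thinking_blocks", "tool_use"},
    "gemini-3": {"thinking_blocks", "tool_use"},
    "gpt-small": {"tool_use"},
}


def models_for(required_feature: str) -> list[str]:
    """Models whose declared capabilities include the required feature."""
    return sorted(
        name for name, feats in MODEL_MATRIX.items() if required_feature in feats
    )
```

A condenser test tagged with "thinking_blocks" would then parametrize over models_for("thinking_blocks") and run on two models instead of the whole matrix.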

@all-hands-bot

[Automatic Post]: It has been a while since there was any activity on this PR. @csmith49, are you still working on it? If so, please go ahead, if not then please request review, close it, or request that someone else follow up.

1 similar comment

@enyst enyst added the behavior-initiative This is related to the system prompt sections and LLM steering. label Feb 14, 2026

openhands-ai bot commented Feb 14, 2026

Looks like there are a few issues preventing this PR from being merged!

  • GitHub Actions are failing:
    • Run Integration Tests behavior-initiative
    • Pre-commit checks
  • There are merge conflicts

If you'd like me to help, just leave a comment, like

@OpenHands please fix the merge conflicts on PR #1584 at branch `fix/opus-thinking`

or

@OpenHands please fix the failing actions on PR #1584 at branch `fix/opus-thinking`

Feel free to include any additional details that might help me get this PR into a better state.


@enyst enyst removed the behavior-initiative This is related to the system prompt sections and LLM steering. label Feb 14, 2026