
Conversation

@iamemilio (Contributor) commented Oct 14, 2025

What does this PR do?

Adds a test and a standardized way to build out future telemetry tests in Llama Stack.
Contributes to #3806

Test Plan

This is the test plan 😎

@meta-cla bot added the CLA Signed label on Oct 14, 2025
@iamemilio force-pushed the telemetry-coverage-tests branch from bcb801e to d057549 on October 14, 2025 15:53
@iamemilio changed the title from "Telemetry Tests" to "Feat(telemetry): Telemetry Tests" on Oct 14, 2025
@iamemilio force-pushed the telemetry-coverage-tests branch from d057549 to 94bcedd on October 14, 2025 16:22
@iamemilio changed the title from "Feat(telemetry): Telemetry Tests" to "test(telemetry): Telemetry Tests" on Oct 14, 2025
@iamemilio force-pushed the telemetry-coverage-tests branch 2 times, most recently from da28bd0 to 86ec389, on October 14, 2025 17:29
Comment on lines 71 to 72
# Verify token metrics in response
# Note: Llama Stack emits token metrics in the response JSON, not via OTel Metrics API
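For illustration only, this is roughly the kind of check being discussed; the "metrics" field name and entry shape below are assumptions for the sketch, not a confirmed Llama Stack schema:

# Sketch only: assumes the response JSON carries a "metrics" list of
# {"metric": <name>, "value": <number>} entries for token accounting.
metrics = {m["metric"]: m["value"] for m in response_json.get("metrics", [])}
assert metrics.get("prompt_tokens", 0) > 0
assert metrics.get("completion_tokens", 0) > 0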
These shouldn't be in telemetry tests? Should they be in responses tests instead?


pytestmark = pytest.mark.skipif(
    os.environ.get("LLAMA_STACK_TEST_STACK_CONFIG_TYPE") == "server",
    reason="In-memory telemetry tests only work in library_client mode (server mode runs in separate process)",
)

I think it would be great to have the server tested as that's the main production path. We can have the server use an in-memory collector and maybe write the collected results to a file somehow, which the tests can then load and validate?
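One possible shape for that, as a rough sketch under assumptions (the exporter class and file path are hypothetical, and the actual wiring into the server would differ), is a file-backed span exporter built on the OTel SDK's SpanExporter interface that the server process installs, with the test process reading the file back:

import json
from collections.abc import Sequence

from opentelemetry.sdk.trace import ReadableSpan
from opentelemetry.sdk.trace.export import SpanExporter, SpanExportResult

class FileSpanExporter(SpanExporter):
    """Appends one JSON object per finished span so another process can read them."""

    def __init__(self, path: str):
        self._path = path

    def export(self, spans: Sequence[ReadableSpan]) -> SpanExportResult:
        with open(self._path, "a") as f:
            for span in spans:
                # Re-serialize compactly so each span occupies a single line.
                f.write(json.dumps(json.loads(span.to_json())) + "\n")
        return SpanExportResult.SUCCESS

    def shutdown(self) -> None:
        pass

The server-mode test could then parse the file line by line and assert on span names and attributes, much like the in-memory collector does in library_client mode.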

Can be in a follow-up PR.

@iamemilio force-pushed the telemetry-coverage-tests branch from 8bbdadc to 862076f on October 14, 2025 23:27
@iamemilio force-pushed the telemetry-coverage-tests branch 2 times, most recently from dd9f6d4 to d82abf4, on October 15, 2025 14:31
@iamemilio force-pushed the telemetry-coverage-tests branch 2 times, most recently from de02a38 to 56d2bd2, on October 16, 2025 20:03
@iamemilio force-pushed the telemetry-coverage-tests branch from 56d2bd2 to 2ca139d on October 16, 2025 20:06
Comment on lines 52 to 59
# Handle both dict and Pydantic model for usage
# This occurs due to the replay system returning a dict for usage, while the client returns a Pydantic model
# TODO: Fix this by making the replay system return a Pydantic model for usage
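For illustration, a minimal sketch of what that normalization might look like (the "usage" attribute and token field names are assumptions, not taken from the diff):

usage = response.usage
if isinstance(usage, dict):
    # Replay system path: usage arrives as a plain dict.
    prompt_tokens = usage.get("prompt_tokens")
    completion_tokens = usage.get("completion_tokens")
else:
    # Live client path: usage is a Pydantic model with attribute access.
    prompt_tokens = usage.prompt_tokens
    completion_tokens = usage.completion_tokens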

@iamemilio force-pushed the telemetry-coverage-tests branch from 2ca139d to 73a7af0 on October 16, 2025 20:35
@ehhuang (Contributor) left a comment


LG! One more nit, please address.

contains_model_id = True
assert args["model_id"] == text_model_id

assert contains_model_id
Clearer to store logged_model_id directly in the loop above and assert logged_model_id == text_model_id here.
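A sketch of the suggested shape (the iteration variable and the structure holding the logged call arguments are assumptions):

logged_model_id = None
for _, args in logged_calls:
    if "model_id" in args:
        logged_model_id = args["model_id"]

assert logged_model_id == text_model_id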

@iamemilio force-pushed the telemetry-coverage-tests branch from 73a7af0 to 4f82002 on October 17, 2025 13:38
if "model_id" in args:
logged_model_id = args["model_id"]

assert logged_model_id is not None
This is redundant.

@ehhuang merged commit 943558a into llamastack:main on Oct 17, 2025
23 checks passed
@iamemilio deleted the telemetry-coverage-tests branch on October 20, 2025 14:01