Conversation

@JimmyWhitaker
Contributor

Description

This PR adds support for prefix repetition scenarios, a new traffic scenario type designed for benchmarking KV cache performance. In this scenario, all concurrent requests share an identical prefix (enabling KV cache reuse), while each request carries a unique suffix to produce distinct completions. This is particularly useful for evaluating how well LLM serving systems leverage prefix caching to improve performance.

PR Type / Label

/kind feature

Related Issue

No related issues, but there are alternative implementations in #36 and as part of #95.

Changes

  • New Scenario Type: Added PrefixRepetitionScenario class with format P(prefix_len,suffix_len)/output_len
    • Example: P(2000,500)/200 creates requests with 2000-token shared prefix, 500-token unique suffix, and 200-token output
  • TextSampler Integration: Extended TextSampler to support prefix repetition scenarios
    • Added prefix caching mechanism to share prefixes across concurrent requests
    • Implements _sample_prefix_repetition_request() method
    • Added reset_prefix_cache() method for cache management between scenario runs
  • CLI Support: Updated CLI help text and validation to include prefix repetition scenario
  • Documentation: Added documentation in:
    • docs/user-guide/scenario-definition.md - Scenario format and usage
  • Tests: Added test coverage:
    • Scenario parsing and validation tests
    • Prefix cache mechanism tests
    • Concurrent request simulation tests
    • Cache isolation and reset tests
    • Logging and edge case tests
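The caching behavior described above can be sketched roughly as follows. This is an illustrative model only, not the PR's actual code: the class and method names (other than `reset_prefix_cache`) and the token generation are assumptions, and the real implementation lives in `TextSampler` in genai_bench/sampling/text.py.

```python
import random
import string


class PrefixRepetitionSamplerSketch:
    """Illustrative sketch: shared-prefix cache plus unique suffixes."""

    def __init__(self):
        self._shared_prefix_cache = {}
        self._suffix_counter = 0

    def _make_tokens(self, n, seed):
        # Stand-in for real text sampling: n deterministic pseudo-tokens.
        rng = random.Random(seed)
        return " ".join(rng.choice(string.ascii_lowercase) for _ in range(n))

    def sample(self, prefix_len, suffix_len):
        # All requests for a given prefix length reuse the same prefix,
        # so the serving system can hit its prefix/KV cache.
        if prefix_len not in self._shared_prefix_cache:
            self._shared_prefix_cache[prefix_len] = self._make_tokens(
                prefix_len, seed=prefix_len
            )
        prefix = self._shared_prefix_cache[prefix_len]

        # Embedding a counter in the suffix guarantees each request
        # produces a different completion.
        self._suffix_counter += 1
        suffix = f"request-{self._suffix_counter} " + self._make_tokens(
            max(suffix_len - 1, 0), seed=self._suffix_counter
        )
        return f"{prefix} {suffix}"

    def reset_prefix_cache(self):
        # Called between scenario runs so prefixes are not shared across scenarios.
        self._shared_prefix_cache.clear()
        self._suffix_counter = 0
```

The key property is that two samples drawn for the same scenario share a byte-identical prefix while differing in their suffix.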

Correctness Tests

All tests pass successfully:

Scenario Tests:

Name                                                     Stmts   Miss  Cover   Missing
--------------------------------------------------------------------------------------
genai_bench/__init__.py                                      0      0   100%
genai_bench/analysis/__init__.py                             0      0   100%
genai_bench/analysis/experiment_loader.py                   82      3    96%   74, 78, 174
genai_bench/auth/__init__.py                                 0      0   100%
genai_bench/auth/auth_provider.py                            9      2    78%   15, 24
genai_bench/auth/aws/__init__.py                             0      0   100%
genai_bench/auth/aws/bedrock_auth.py                        24      0   100%
genai_bench/auth/aws/s3_auth.py                             30      0   100%
genai_bench/auth/azure/__init__.py                           0      0   100%
genai_bench/auth/azure/blob_auth.py                         53      1    98%   84
genai_bench/auth/azure/openai_auth.py                       33      0   100%
genai_bench/auth/factory.py                                 27      6    78%   5-10
genai_bench/auth/gcp/__init__.py                             0      0   100%
genai_bench/auth/gcp/gcs_auth.py                            28      0   100%
genai_bench/auth/gcp/vertex_auth.py                         34      0   100%
genai_bench/auth/github/__init__.py                          0      0   100%
genai_bench/auth/github/github_auth.py                      28      0   100%
genai_bench/auth/model_auth_provider.py                     14      4    71%   17, 26, 35, 43
genai_bench/auth/oci/__init__.py                             0      0   100%
genai_bench/auth/oci/instance_principal.py                  17      1    94%   35
genai_bench/auth/oci/model_auth_adapter.py                  26      0   100%
genai_bench/auth/oci/obo_token.py                           14      0   100%
genai_bench/auth/oci/session.py                             31      2    94%   68, 72
genai_bench/auth/oci/storage_auth_adapter.py                16      0   100%
genai_bench/auth/oci/user_principal.py                      20      0   100%
genai_bench/auth/openai/__init__.py                          0      0   100%
genai_bench/auth/openai/auth.py                             14      1    93%   43
genai_bench/auth/openai/model_auth_adapter.py               18      1    94%   29
genai_bench/auth/storage_auth_provider.py                   14      3    79%   17, 26, 35
genai_bench/auth/unified_factory.py                         46      0   100%
genai_bench/cli/__init__.py                                  0      0   100%
genai_bench/cli/cli.py                                     189     40    79%   56, 191-226, 270, 294-298, 372, 445-453, 545-625, 636
genai_bench/cli/option_groups.py                            90      0   100%
genai_bench/cli/utils.py                                    35      0   100%
genai_bench/cli/validation.py                              170      8    95%   113-117, 173-174, 183
genai_bench/data/__init__.py                                 0      0   100%
genai_bench/data/config.py                                  48     10    79%   76-78, 101-110
genai_bench/data/loaders/__init__.py                         0      0   100%
genai_bench/data/loaders/base.py                            45      6    87%   39-40, 51-52, 62, 80
genai_bench/data/loaders/factory.py                         26      0   100%
genai_bench/data/loaders/image.py                           25      2    92%   44, 56
genai_bench/data/loaders/text.py                            21      2    90%   39, 49
genai_bench/data/sources.py                                 96      3    97%   33, 91, 182
genai_bench/distributed/__init__.py                          0      0   100%
genai_bench/distributed/runner.py                          185     20    89%   193-201, 231, 236-243, 247, 263, 289, 326, 332, 351-352, 384, 429
genai_bench/metrics/__init__.py                              0      0   100%
genai_bench/metrics/aggregated_metrics_collector.py        127      4    97%   169-170, 176, 232
genai_bench/metrics/metrics.py                              84      6    93%   49, 86, 114, 182-184
genai_bench/metrics/request_metrics_collector.py            33      0   100%
genai_bench/protocol.py                                     63      0   100%
genai_bench/sampling/__init__.py                             3      0   100%
genai_bench/sampling/base.py                                46      4    91%   80, 99-100, 112
genai_bench/sampling/image.py                               77      5    94%   81, 178, 181, 201, 216
genai_bench/sampling/text.py                               129      5    96%   68, 106, 130, 174, 220
genai_bench/scenarios/__init__.py                            4      0   100%
genai_bench/scenarios/base.py                               69      3    96%   78, 83, 89
genai_bench/scenarios/multimodal.py                         22      0   100%
genai_bench/scenarios/text.py                              107      0   100%
genai_bench/storage/__init__.py                              0      0   100%
genai_bench/storage/aws_storage.py                         102     20    80%   67, 107-109, 143, 178-182, 209-213, 226-230
genai_bench/storage/azure_storage.py                       104     37    64%   68-100, 135-137, 150-169, 200-204, 231-233, 253-257
genai_bench/storage/base.py                                 22      6    73%   23, 37, 51, 67, 78, 87
genai_bench/storage/factory.py                              24      0   100%
genai_bench/storage/gcp_storage.py                          96     29    70%   58-59, 70, 107-109, 122-141, 173-179, 204-206, 228-234
genai_bench/storage/github_storage.py                      127      0   100%
genai_bench/storage/oci_object_storage/__init__.py           0      0   100%
genai_bench/storage/oci_object_storage/datastore.py         10      2    80%   27, 44
genai_bench/storage/oci_object_storage/object_uri.py        40      0   100%
genai_bench/storage/oci_object_storage/os_datastore.py      91      7    92%   241-253
genai_bench/storage/oci_storage.py                          52      0   100%
genai_bench/time_units.py                                   55      1    98%   136
genai_bench/user/__init__.py                                 0      0   100%
genai_bench/user/aws_bedrock_user.py                       154     99    36%   45, 53-77, 92-167, 182-231, 247-336
genai_bench/user/azure_openai_user.py                      137      0   100%
genai_bench/user/base_user.py                               33      1    97%   32
genai_bench/user/cohere_user.py                            110      6    95%   58, 166, 208, 212, 220, 224
genai_bench/user/gcp_vertex_user.py                        168     20    88%   55, 68-69, 76-94, 176, 236, 240, 297-298, 428-439, 460, 462
genai_bench/user/oci_cohere_user.py                        152      4    97%   52, 165, 356, 404
genai_bench/user/oci_genai_user.py                         119      4    97%   93-95, 175
genai_bench/user/openai_user.py                            131      7    95%   184, 244, 267-276
genai_bench/utils.py                                        37      6    84%   74-84
genai_bench/version.py                                       2      0   100%
--------------------------------------------------------------------------------------
TOTAL                                                     4038    391    90%

Test Command:

make test

Checklist

  • I have rebased my branch on top of the latest main branch (git pull origin main)
  • I have run make check to ensure code is properly formatted and passes all lint checks
  • I have run make test or make test_changed to verify test coverage (~90% required)
  • I have added tests that fail without my code changes (for bug fixes)
  • I have added tests covering variants of new features (for new features)

Additional Information

Example Usage

genai-bench benchmark \
  --api-backend openai \
  --api-base "http://localhost:8082" \
  --task text-to-text \
  --traffic-scenario "P(2000,500)/200" \
  --traffic-scenario "P(4000,1000)/200" \
  --num-concurrency 1 \
  --num-concurrency 8 \
  --num-concurrency 16 \
  --num-concurrency 32

@gemini-code-assist
Contributor

Summary of Changes

Hello @JimmyWhitaker, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the benchmarking capabilities of genai-bench by introducing a novel traffic scenario type, PrefixRepetitionScenario. This addition allows users to precisely evaluate how well Large Language Model (LLM) serving systems leverage KV cache reuse, automatic prefix caching (APC), and chunked prefill techniques to improve performance metrics like Time To First Token (TTFT). The changes involve integrating this new scenario into the request sampling logic, updating the CLI, and documenting its usage, all while maintaining high test coverage.

Highlights

  • New Traffic Scenario Type: Introduced PrefixRepetitionScenario (format P(prefix_len,suffix_len)/output_len) specifically designed for benchmarking KV cache performance in LLM serving systems. This scenario generates requests where all concurrent requests share a common prefix for cache reuse, while each has a unique suffix for varied completions.
  • KV Cache Benchmarking Support: Extended TextSampler to support the new prefix repetition scenario, including a mechanism to cache and reuse shared prefixes across requests and a reset_prefix_cache() method for proper cache management between scenario runs.
  • CLI and Documentation Updates: Updated the command-line interface (CLI) help text and validation to include the new scenario type. Comprehensive documentation has been added to docs/user-guide/scenario-definition.md detailing the scenario's format and usage.
  • Comprehensive Test Coverage: Added extensive test cases covering scenario parsing and validation, the prefix caching mechanism, concurrent request simulation, cache isolation, and reset functionality, ensuring the robustness and correctness of the new feature.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new PrefixRepetitionScenario for benchmarking KV cache performance, which is a valuable addition. The implementation is solid, with a clear separation between the scenario definition and the sampling logic. The prefix caching mechanism in the TextSampler is well-designed, including the necessary cache reset functionality. The changes are also well-documented and thoroughly tested. My feedback includes a few minor suggestions to improve code style by moving local imports to the top of their respective files, which will enhance readability and maintain consistency with Python best practices.

Comment on lines +78 to +81
from genai_bench.scenarios.text import PrefixRepetitionScenario

if isinstance(scenario, PrefixRepetitionScenario):
    return self._sample_prefix_repetition_request(scenario)

medium

The import of PrefixRepetitionScenario is done locally within the _sample_chat_request method. According to PEP 8, imports should usually be at the top of the file. This improves readability and helps avoid potential issues with circular dependencies (though there doesn't seem to be one here). Please move from genai_bench.scenarios.text import PrefixRepetitionScenario to the top of the file and simplify this block.

Suggested change

- from genai_bench.scenarios.text import PrefixRepetitionScenario
-
- if isinstance(scenario, PrefixRepetitionScenario):
-     return self._sample_prefix_repetition_request(scenario)
+ if isinstance(scenario, PrefixRepetitionScenario):
+     return self._sample_prefix_repetition_request(scenario)

self._shared_prefix_cache[cache_key] = prefix

# Calculate hash for verification
import hashlib

medium

The hashlib module is imported locally. It's also imported again on lines 288 and 302. To follow Python best practices (PEP 8) and improve efficiency, please move this import to the top of the file and remove all three local import hashlib statements from this method.


# Log cache reuse (only for first few to avoid spam)
if self._suffix_counter < 5:
    import hashlib

medium

Please remove this redundant local import of hashlib. As mentioned in a previous comment, it should be imported once at the top of the file.


# Log suffix info for first few requests
if self._suffix_counter <= 5:
    import hashlib

medium

Please remove this redundant local import of hashlib. As mentioned in a previous comment, it should be imported once at the top of the file.

"""
# Parse P(prefix_len,suffix_len)/output_len
# params_str will be "(2000,500)/200"
import re

medium

The re module is imported locally. It's standard practice to place all imports at the top of the file for clarity and consistency. Please move this import to the top of the file.
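The parsing this comment refers to could look roughly like the following, assuming only the `P(prefix_len,suffix_len)/output_len` format described in the PR (the function name, the `params_str` value of `"(2000,500)/200"` noted in the excerpt, and the module-level `re` import are illustrative, not the PR's exact code):

```python
import re

# Matches the params portion of "P(2000,500)/200", i.e. "(2000,500)/200".
_PREFIX_PARAMS_RE = re.compile(r"^\((\d+),(\d+)\)/(\d+)$")


def parse_prefix_repetition(params_str):
    """Parse "(prefix_len,suffix_len)/output_len" into three ints."""
    match = _PREFIX_PARAMS_RE.match(params_str)
    if not match:
        raise ValueError(
            f"Invalid prefix repetition scenario params: {params_str!r}"
        )
    prefix_len, suffix_len, output_len = map(int, match.groups())
    return prefix_len, suffix_len, output_len
```

With the import hoisted to module level as the reviewer suggests, the regex is also compiled once instead of on every call.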


def test_prefix_repetition_scenario_invalid_format():
    """Test PrefixRepetitionScenario parsing with invalid format."""
    import pytest

medium

The pytest module is imported locally. It's standard practice to place all imports at the top of the file for clarity and consistency, even in test files. Please move this import to the top of the file.

