Conversation

@TedThemistokleous commented Dec 30, 2025

Adds a cache lookup for precompiled MXR files. A user can specify the maximum dynamic batch size, and we either look up or generate the desired files in powers of two, then select the pre-compiled model with the appropriate batch size at runtime.

Description

  • Cache lookup for batch values
  • Request to build MXRs in powers of two up to a specified batch size
  • Enable thread-safe addition to the lookup for batch sizes (see the sketch after this list)
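
A minimal sketch of what a thread-safe batch lookup could look like; the `BatchProgramCache` name, the `CompiledProgram` stand-in, and the member names are illustrative assumptions rather than the actual implementation in this PR.

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <mutex>
#include <shared_mutex>

// Illustrative stand-in for a compiled MIGraphX program (MXR contents).
struct CompiledProgram {};

// Sketch of a lookup keyed by batch size, safe to populate from the compile
// path while compute threads read from it concurrently.
class BatchProgramCache {
 public:
  void Add(int64_t batch_size, std::shared_ptr<CompiledProgram> prog) {
    std::unique_lock lock(mutex_);           // exclusive lock for writers
    programs_[batch_size] = std::move(prog);
  }

  std::shared_ptr<CompiledProgram> Find(int64_t batch_size) const {
    std::shared_lock lock(mutex_);           // shared lock for concurrent readers
    auto it = programs_.find(batch_size);
    return it != programs_.end() ? it->second : nullptr;
  }

 private:
  mutable std::shared_mutex mutex_;
  std::map<int64_t, std::shared_ptr<CompiledProgram>> programs_;
};
```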

Motivation and Context

We want something that can handle symbolic-shape inputs without modifying MIGraphX, while leveraging the fact that we can statically compile a model for fixed input shapes. This lets us select the model pre-compiled for the appropriate dynamic batch size on the fly while avoiding the load-time penalty.
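
As a rough sketch of the selection step described above, the runtime batch size could be rounded up to the next power of two (capped at the user's maximum) before the cache lookup; the helper below is an assumption for illustration, not the shipped code.

```cpp
#include <algorithm>
#include <cstdint>

// Round a runtime batch size up to the next power of two, capped at the
// user-specified max dynamic batch size (illustrative helper).
int64_t SelectCompiledBatch(int64_t runtime_batch, int64_t max_dynamic_batch) {
  int64_t selected = 1;
  while (selected < runtime_batch) {
    selected <<= 1;
  }
  return std::min(selected, max_dynamic_batch);
}
// e.g. runtime_batch = 3 -> 4, runtime_batch = 5 -> 8 (assuming max >= 8)
```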

Tested with batch size 1 and concurrency level 2 in a customer case. I'm still working through other quirks as they come up.

…iple batch models created then loaded into memory on initialization
@TedThemistokleous changed the title add cache of preloaded models and use max_dynamic batch size for mult… Dynamic Preload cache for precompiled MXRs Dec 30, 2025
…ng is enabled

- Wrap the output shape verification loop in a check for verbose logging mode
- Only runs the shape verification code when logging severity <= kVERBOSE (see the sketch below)
- Reduces overhead during normal inference execution
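
A minimal sketch of the gating pattern this commit describes; the severity enum and helper names below are placeholders, not the actual ONNX Runtime logging API.

```cpp
// Placeholder severity levels mirroring "verbose is the lowest level".
enum class Severity { kVERBOSE = 0, kINFO, kWARNING, kERROR };

// Hypothetical stand-ins for the real logger query and the expensive check.
Severity CurrentLogSeverity() { return Severity::kWARNING; }
void VerifyOutputShapes() { /* per-output shape comparison loop */ }

void MaybeVerifyOutputShapes() {
  // Skip the verification loop entirely unless verbose logging is enabled.
  if (CurrentLogSeverity() <= Severity::kVERBOSE) {
    VerifyOutputShapes();
  }
}
```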
… threads start

Key changes:
1. Unified batch pre-compilation into a single block that runs after main
   compilation (both cache hit and cache miss cases)
2. Fixed hash calculation to include ALL inputs with updated batch sizes,
   not just the first input, ensuring correct cache file names for
   multi-input models (see the hash sketch after this list)
3. Always compile if load_precompiled_model fails for any batch size
4. Removed duplicate pre-compilation block that ran only on cache miss
5. Removed unused CompileProgramWithBatch helper function
6. Removed unused precompile_done_ member variable
7. Only initialize batch_program_cache_ if not already initialized to
   preserve any programs compiled earlier

The batch cache is now fully populated before NodeComputeInfo is created,
ensuring compute threads have access to all pre-compiled programs.
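
To illustrate what "include all inputs with updated batch sizes in the hash" could look like (note that a later commit below reverts to hashing only the first input), here is a rough sketch; the hash-combine scheme, function names, and file-name convention are assumptions.

```cpp
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Combine every input's shape (with its batch dimension already overridden)
// into a single hash used to build the cache file name. Illustrative only.
std::size_t HashInputShapes(const std::vector<std::vector<int64_t>>& input_shapes) {
  std::size_t seed = 0;
  for (const auto& shape : input_shapes) {
    for (int64_t dim : shape) {
      // Standard hash-combine mixing step.
      seed ^= std::hash<int64_t>{}(dim) + 0x9e3779b97f4a7c15ULL + (seed << 6) + (seed >> 2);
    }
  }
  return seed;
}

std::string MakeCacheFileName(std::size_t shape_hash, int64_t batch_size) {
  // Hypothetical naming convention, e.g. "<hash>_b<batch>.mxr".
  return std::to_string(shape_hash) + "_b" + std::to_string(batch_size) + ".mxr";
}
```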
…ilation logic

Key changes:
1. Restored CompileProgramWithBatch helper function (now handles ALL inputs,
   not just the first one)
2. When max_dynamic_batch > 0: compile power-of-2 batch sizes up to max (see the sketch below)
3. When max_dynamic_batch == 0 (not set): only compile the single batch size
   from the model's input shape
4. Uses CompileProgramWithBatch for cleaner compilation with proper input
   shape handling for multi-input models
5. Always compiles if load_precompiled_model fails for any batch size

This ensures:
- Models with max_dynamic_batch set get all power-of-2 batch sizes compiled
- Models without max_dynamic_batch only compile the necessary batch size
- All compiled programs are stored in batch cache before compute threads start
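
A condensed sketch of the load-or-compile flow described in this list; `LoadPrecompiledModel`, `CompileProgramWithBatch`, and the cache variable are stand-ins with simplified signatures, stubbed here so the snippet is self-contained.

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <optional>

struct CompiledProgram {};  // stand-in for a compiled MIGraphX program

// Hypothetical helpers with simplified signatures, stubbed for illustration.
std::optional<std::shared_ptr<CompiledProgram>> LoadPrecompiledModel(int64_t /*batch*/) {
  return std::nullopt;  // pretend no cached MXR was found on disk
}
std::shared_ptr<CompiledProgram> CompileProgramWithBatch(int64_t /*batch*/) {
  return std::make_shared<CompiledProgram>();  // compile with the batch dim overridden on all inputs
}

std::map<int64_t, std::shared_ptr<CompiledProgram>> batch_program_cache;

void PopulateBatchCache(int64_t model_batch, int64_t max_dynamic_batch) {
  if (max_dynamic_batch > 0) {
    // Power-of-2 batch sizes up to the requested maximum: 1, 2, 4, ...
    for (int64_t batch = 1; batch <= max_dynamic_batch; batch <<= 1) {
      auto loaded = LoadPrecompiledModel(batch);              // try the MXR cache first
      batch_program_cache[batch] =
          loaded ? *loaded : CompileProgramWithBatch(batch);  // always compile on a load failure
    }
  } else {
    // max_dynamic_batch not set: only the model's own batch size is needed.
    batch_program_cache[model_batch] = CompileProgramWithBatch(model_batch);
  }
}
```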
Reverted the hash calculation for batch cache files to use only the first
input's shape (with batch dimension) instead of all inputs. This matches
the previous behavior.
The batch cache was empty because:
1. Pre-compilation block only ran when model_cache_path_ was not empty
2. The main compiled program was never stored in batch_program_cache_

Fix:
1. Always store the main compiled program in batch_program_cache_ with its
   batch size (extracted from the first input tensor); see the sketch below
2. Pre-compilation of additional batch sizes only runs when max_dynamic_batch > 0
   AND model_cache_path_ is set
3. When max_dynamic_batch == 0, we still have at least the main batch size
   in the cache from the main compilation

This ensures compute threads always have at least one batch size available
in the batch cache.
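
A small sketch of the fix described above, where the batch size is taken from the first input tensor's leading dimension and the main program is always inserted into the batch cache; the function and parameter names are illustrative.

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <vector>

struct CompiledProgram {};  // stand-in for a compiled MIGraphX program

// Illustrative: always record the main program under the batch size taken
// from the first input's leading dimension, so the cache is never empty.
void StoreMainProgram(const std::vector<int64_t>& first_input_shape,
                      std::shared_ptr<CompiledProgram> main_prog,
                      std::map<int64_t, std::shared_ptr<CompiledProgram>>& batch_program_cache) {
  const int64_t batch = first_input_shape.empty() ? 1 : first_input_shape[0];
  batch_program_cache[batch] = std::move(main_prog);
  // Extra power-of-2 batch sizes are only pre-compiled when
  // max_dynamic_batch > 0 and a model cache path is configured.
}
```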
…shapes

When no_input_shape is true (model has dynamic/symbolic dimensions), the
code was previously deferring compilation and leaving prog empty. This
caused the batch cache to be empty, and compute threads would fail.

Fix:
1. When no_input_shape is true, still compile the model with default shapes
   (batch size 1) so we always have something in the batch cache (see the
   sketch below)
2. Always store the main compiled program in the batch cache (removed the
   !no_input_shape condition)

This ensures compute threads always find at least batch size 1 in the cache,
even for models with dynamic shapes.
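
Roughly, the fallback could look like the snippet below, where symbolic dimensions are replaced with 1 before compiling so the cache is never empty; this is a hedged illustration under the assumption that unknown dimensions are encoded as non-positive values, not the actual code.

```cpp
#include <cstdint>
#include <vector>

// Illustrative: replace symbolic dimensions (encoded here as values <= 0)
// with 1 so a model with dynamic shapes can still be compiled and cached
// at batch size 1.
std::vector<int64_t> DefaultShapeForSymbolicDims(const std::vector<int64_t>& shape) {
  std::vector<int64_t> concrete(shape);
  for (auto& dim : concrete) {
    if (dim <= 0) dim = 1;
  }
  return concrete;
}
```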