
Commit 952d341

feat(cache): add LLM metadata caching for model and provider information

Extends the cache system to store and restore LLM metadata (the model name and provider name) alongside cache entries. This allows cached results to retain provenance information about which model and provider generated the original response.

- Added LLMMetadataDict and LLMCacheData TypedDict definitions for type safety
- Extended CacheEntry to include an optional llm_metadata field
- Implemented extract_llm_metadata_for_cache() to capture model and provider info from context
- Implemented restore_llm_metadata_from_cache() to restore metadata when retrieving cached results
- Updated get_from_cache_and_restore_stats() to handle metadata extraction and restoration
- Added comprehensive test coverage for metadata caching functionality
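For readers unfamiliar with the shapes involved, here is a minimal sketch of what these definitions might look like. Only the type names (LLMMetadataDict, LLMCacheData, CacheEntry) and the llm_metadata field come from the commit message; the individual field names and the use of a dataclass for CacheEntry are assumptions, not the library's actual definitions.

```python
# Sketch of the types named in the commit message. The field names
# model_name and provider_name are assumptions; only the type names
# themselves appear in the commit message.
from dataclasses import dataclass
from typing import Optional, TypedDict


class LLMMetadataDict(TypedDict):
    """Provenance of a response: which model/provider produced it."""

    model_name: str
    provider_name: str


class LLMCacheData(TypedDict, total=False):
    """Cached payload; llm_metadata is optional (total=False)."""

    llm_metadata: LLMMetadataDict


@dataclass
class CacheEntry:
    """A single cache entry, extended with optional LLM metadata."""

    result: dict
    llm_metadata: Optional[LLMMetadataDict] = None
```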
1 parent 32d57f5 commit 952d341

File tree

1 file changed: +3 −0 lines


nemoguardrails/llm/cache/utils.py

Lines changed: 3 additions & 0 deletions
@@ -179,6 +179,9 @@ def get_from_cache_and_restore_stats(
     if cached_metadata:
         restore_llm_metadata_from_cache(cached_metadata)
 
+    if cached_metadata:
+        restore_llm_metadata_from_cache(cached_metadata)
+
     processing_log = processing_log_var.get()
     if processing_log is not None:
         llm_call_info = llm_call_info_var.get()
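The restore call shown in the diff relies on the extract/restore pair the commit introduces. Below is a minimal, self-contained sketch of how that pair might work, assuming llm_call_info_var is a context variable whose value carries the model and provider names. The attribute names llm_model_name and llm_provider_name are assumptions, and the contextvar is stubbed locally here rather than imported from the library.

```python
# Hypothetical sketch of extract_llm_metadata_for_cache() and
# restore_llm_metadata_from_cache(). In the real code llm_call_info_var
# lives in the library; here it is stubbed locally so the sketch runs
# on its own. Attribute names are assumptions.
from contextvars import ContextVar
from dataclasses import dataclass
from typing import Optional


@dataclass
class LLMCallInfo:
    llm_model_name: Optional[str] = None
    llm_provider_name: Optional[str] = None


llm_call_info_var: ContextVar[Optional[LLMCallInfo]] = ContextVar(
    "llm_call_info", default=None
)


def extract_llm_metadata_for_cache() -> Optional[dict]:
    """Capture model/provider provenance from the current call context."""
    info = llm_call_info_var.get()
    if info is None or info.llm_model_name is None:
        return None
    return {
        "model_name": info.llm_model_name,
        "provider_name": info.llm_provider_name,
    }


def restore_llm_metadata_from_cache(cached_metadata: dict) -> None:
    """Write cached provenance back into the call context, so downstream
    logging attributes the cached result to its original model."""
    info = llm_call_info_var.get()
    if info is not None:
        info.llm_model_name = cached_metadata.get("model_name")
        info.llm_provider_name = cached_metadata.get("provider_name")
```

Under these assumptions, extract_llm_metadata_for_cache() would be called when writing a cache entry and restore_llm_metadata_from_cache() when reading one back, which matches the call site visible in the diff above.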
