Cache bearer tokens based on the authority #1755
Hi @heaths Heath, I spent some time yesterday and today and got a working proof of concept of the interior-mutability caching concept, but looking deeper at other examples in the codebase, it seems that refreshing should be handled on the token side and not the policy side? For example, in:

```rust
impl TokenCredential for DefaultAzureCredential {
    async fn get_token(&self, scopes: &[&str]) -> azure_core::Result<AccessToken> {
        self.cache.get_token(scopes, self.get_token(scopes)).await
    }
}
```

And so, looking at:

```rust
pub(crate) async fn get_token(
    &self,
    scopes: &[&str],
    callback: impl Future<Output = azure_core::Result<AccessToken>>,
) -> azure_core::Result<AccessToken> {
    // if the current cached token for this resource is good, return it.
    let token_cache = self.0.read().await;
    let scopes = scopes.iter().map(ToString::to_string).collect::<Vec<_>>();
    if let Some(token) = token_cache.get(&scopes) {
        if !is_expired(token) {
            trace!("returning cached token");
            return Ok(token.clone());
        }
    }
    // otherwise, drop the read lock and get a write lock to refresh the token
    drop(token_cache);
    let mut token_cache = self.0.write().await;
    // check again in case another thread refreshed the token while we were
    // waiting on the write lock
    if let Some(token) = token_cache.get(&scopes) {
        if !is_expired(token) {
            trace!("returning token that was updated while waiting on write lock");
            return Ok(token.clone());
        }
    }
    trace!("falling back to callback");
    let token = callback.await?;
    // NOTE: we do not check to see if the token is expired here, as at
    // least one credential, `AzureCliCredential`, specifies the token is
    // immediately expired after it is returned, which indicates the token
    // should always be refreshed upon use.
    token_cache.insert(scopes, token.clone());
    Ok(token)
}
```
And for completeness' sake, `is_expired`'s implementation:

```rust
pub fn is_expired(&self, window: Option<Duration>) -> bool {
    self.expires_on < OffsetDateTime::now_utc() + window.unwrap_or(Duration::from_secs(30))
}
```

It would seem that when the policy asks for a token, we should not maintain a cache on the policy, as the token itself should maintain its own caching. Wondering if this logic tracks or if I am misunderstanding something! CC @demoray

Hi Brian, I see that you are still actively working on this as of last week and wanted to possibly expand this discussion here to help improve my understanding of these tokens and caching mechanisms! Thanks!
I don't think it matters if the caching is done at the policy layer or at the token side, though implementing the caching at the policy layer would resolve one of the struggles. As is, a cache needs to be implemented at each of the individual credential layers and at the DAC/chained-credential layer. Assume you use just IMDS. Then you expect caching of the results of IMDS. Now assume you have a list of sources to check: if this is used in an environment that uses the CLI, the expectation would be that the DAC/chained-credential would have the query cached such that it doesn't hit IMDS + ENV before hitting the CLI cache.
Agreed: token caching should be done in the policy, as it is in other languages, so that it just works regardless of what `TokenCredential` implementation is used.
We need to cache auth tokens until they expire or are close to expiring so we're not fetching them each time, which hurts performance and may impact quotas.
See #1726 (review) for context.