We want to use ZenML in an AKS Kubernetes cluster with workload identities enabled.
For this we use implicit authentication via the workload identities for all services where possible.
This works great for the secret store, the Kubernetes orchestrator, and the Kubernetes step operator.
As the documentation says, this does not work for blob storage:

> The only Azure authentication method that works with Azure blob storage resources is the service principal authentication method. [1]
And it also does not work for ACR without the admin account enabled:

> If an authentication method other than the Azure service principal is used for authentication, the admin account must be enabled for the registry, otherwise, clients will not be able to authenticate to the registry. See the official Azure documentation on the admin account for more information. [2]
We could certainly use the service principal, but we would like to avoid that to keep static credentials out of our system.
Technically, supporting `DefaultAzureCredential` does not seem problematic at first glance. In `zenml/src/zenml/integrations/azure/artifact_stores/azure_artifact_store.py` (line 81 in c68275b) the credentials are checked for being of the correct type, and they are then passed to `adlfs.AzureBlobFileSystem` (lines 99 to 114 in c68275b):
"""The adlfs filesystem to access this artifact store.
Returns:
The adlfs filesystem to access this artifact store.
"""
ifnotself._filesystem:
secret=self.get_credentials()
credentials=secret.get_values() ifsecretelse {}
self._filesystem=adlfs.AzureBlobFileSystem(
**credentials,
anon=False,
use_listings_cache=False,
)
returnself._filesystem
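One way this could be relaxed (a minimal sketch, not ZenML's actual implementation; `build_adlfs_kwargs` is a hypothetical helper) is to pass explicit credentials only when a secret is configured, and otherwise rely on `anon=False` alone so that adlfs falls back to `DefaultAzureCredential`:

```python
from typing import Any, Dict


def build_adlfs_kwargs(
    secret_values: Dict[str, Any], account_name: str
) -> Dict[str, Any]:
    """Build kwargs for adlfs.AzureBlobFileSystem (hypothetical sketch).

    Explicit secret values (e.g. a service principal) take precedence;
    with no secret configured, anon=False alone lets adlfs resolve
    credentials via DefaultAzureCredential, which covers workload
    identity on AKS.
    """
    kwargs: Dict[str, Any] = {"account_name": account_name, "anon": False}
    kwargs.update(secret_values)  # empty dict when no secret is configured
    return kwargs
```

With an empty secret this would behave exactly like the "auto credential solving" case from the adlfs documentation quoted in this issue.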
This seems to have no problem consuming the `DefaultAzureCredential` [3]:
> The filesystem can be instantiated for different use cases based on a variety of `storage_options` combinations. The following list describes some common use cases utilizing `AzureBlobFileSystem`, i.e. protocols `abfs` or `az`. Note that all cases require the `account_name` argument to be provided:
>
> 1. Anonymous connection to public container: `storage_options={'account_name': ACCOUNT_NAME, 'anon': True}` will assume the ACCOUNT_NAME points to a public container, and attempt to use an anonymous login. Note, the default value for `anon` is True.
> 2. Auto credential solving using Azure's `DefaultAzureCredential()` library: `storage_options={'account_name': ACCOUNT_NAME, 'anon': False}` will use [DefaultAzureCredential](https://learn.microsoft.com/en-us/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python) to get valid credentials to the container ACCOUNT_NAME. `DefaultAzureCredential` attempts to authenticate via the [mechanisms and order visualized here](https://learn.microsoft.com/en-us/python/api/overview/azure/identity-readme?view=azure-python#defaultazurecredential).
>
> [...]
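For intuition, the `DefaultAzureCredential` behaviour described above is a credential chain: each mechanism (environment variables, workload identity, managed identity, ...) is tried in order and the first one that succeeds wins. A toy sketch of that pattern (the names here are illustrative, not the azure-identity API):

```python
from typing import Callable, List, Optional


def first_available(
    providers: List[Callable[[], Optional[str]]]
) -> Optional[str]:
    """Return the first credential produced by the chain, or None."""
    for provider in providers:
        token = provider()
        if token is not None:
            return token
    return None


# Illustrative chain: env-based auth is absent, workload identity succeeds.
chain = [
    lambda: None,                       # e.g. EnvironmentCredential not configured
    lambda: "workload-identity-token",  # e.g. WorkloadIdentityCredential on AKS
    lambda: "managed-identity-token",   # never reached
]
```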
I might be overlooking something that blocks this from being feasible, but support for workload identities wherever possible would be great for us.
Happy to get feedback from you regarding this :)
@lukas-reining Good one - I also think we might be able to enable this in a future release. We'd welcome a contribution if you already have a clear idea how to add the DefaultAzureCredential as well!
> We'd welcome a contribution if you already have a clear idea how to add the DefaultAzureCredential as well!
I will have a look, maybe there will be some time over the holidays, but I can't promise :)
Just for my understanding: do you know why there is an explicit check for only the service principal?
Was there a restriction in the past?
If you cannot tell, I will try to find out; I just want to avoid stepping into a pitfall that you might already have seen.
Contact Details [Optional]
lukas.reining@codecentric.de
[1]: https://docs.zenml.io/how-to/infrastructure-deployment/auth-management/azure-service-connector#azure-blob-storage-container
[2]: https://docs.zenml.io/how-to/infrastructure-deployment/auth-management/azure-service-connector#acr-container-registry
[3]: https://pypi.org/project/adlfs/