tickets/DM-34218: expand user reset env beyond .local #133
Conversation
src/nublado2/resourcemgr.py (Outdated)
@@ -190,6 +186,9 @@ async def _create_kubernetes_resources(
    async def _build_dask_template(self, spawner: KubeSpawner) -> str:
        """Build a template for dask workers from the jupyter pod manifest."""
        api_client = shared_client("ApiClient")
        # This works around an infelicity in upstream Kubespawner's
        # shared client implementation
        api_client.api_client = api_client
Could you show me the error message that you get if you don't do this? I've read the relevant source code for both kubespawner and kubernetes_asyncio and can't find any reason why this would be necessary. The kubernetes_asyncio ApiClient object has a sanitize_for_serialization method that's an entirely normal method and doesn't use an api_client attribute of itself in any way, and shared_client("ApiClient") should just return a kubernetes_asyncio ApiClient. And you don't seem to have to do this down below in the code that's making actual Kubernetes calls. I also don't see anything like this code in kubespawner with a search.
So I'm fairly confused, and the commit message and comment didn't really help unconfuse me.
ERROR:asyncio:Task exception was never retrieved
future: <Task finished name='Task-81' coro=<shared_client.<locals>.close_client_task() done, defined at /usr/local/lib/python3.8/dist-packages/kubespawner/clients.py:55> exception=AttributeError("'ApiClient' object has no attribute 'api_client'")>
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/kubespawner/clients.py", line 57, in close_client_task
async with client.api_client:
AttributeError: 'ApiClient' object has no attribute 'api_client'
The issue is that shared_client's close_client_task() uses an async with on the api_client attr of the client object to do its thing and cleanly shut down the shared client when it goes out of scope (that is, when the Spawner object is finalized). Client objects that point to actual K8s APIs (e.g. CoreV1Api) have such an attr; shared_client("ApiClient") does not.
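The failure mode can be sketched with stand-in classes. MockApiClient, MockCoreV1Api, and close_client below are hypothetical illustrations of the attribute difference being described, not the real kubernetes_asyncio or kubespawner code:

```python
import asyncio


class MockApiClient:
    """Stand-in for kubernetes_asyncio's ApiClient: it IS the low-level
    client, so it exposes no .api_client attribute of its own."""

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        return False


class MockCoreV1Api:
    """Stand-in for an API wrapper like CoreV1Api: it holds the
    low-level client in its .api_client attribute."""

    def __init__(self):
        self.api_client = MockApiClient()


async def close_client(client):
    # Mirrors the shape of kubespawner's close_client_task(), which
    # assumes every shared client wraps a low-level ApiClient.
    async with client.api_client:
        pass


async def demo():
    # Works: the wrapper has an .api_client attribute.
    await close_client(MockCoreV1Api())
    try:
        # Fails: a bare ApiClient has no .api_client attribute,
        # which is the AttributeError in the traceback above.
        await close_client(MockApiClient())
    except AttributeError as exc:
        return str(exc)


error = asyncio.run(demo())
print(error)
```

The workaround in the diff (api_client.api_client = api_client) makes the bare client satisfy close_client_task()'s assumption by pointing the attribute at itself.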
Oh! The problem is that shared_client("ApiClient") is simply invalid; the only things that should be retrieved via shared_client are specific API clients. That makes sense. I think the fix is to replace every instance of api_client = shared_client("ApiClient") with:

api_client = shared_client("CoreV1Api").api_client
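A minimal sketch of that replacement, with a hypothetical cache standing in for kubespawner.clients.shared_client (the mock classes are illustrations, not the real library):

```python
class MockApiClient:
    """Plays the role of the low-level kubernetes_asyncio ApiClient."""


class MockCoreV1Api:
    """Plays the role of CoreV1Api: it wraps a low-level ApiClient."""

    def __init__(self):
        self.api_client = MockApiClient()


_shared = {}


def shared_client(name):
    # Stand-in for kubespawner's helper: one cached instance per
    # API class name, reused across callers.
    if name not in _shared:
        _shared[name] = {"CoreV1Api": MockCoreV1Api}[name]()
    return _shared[name]


# Instead of the invalid shared_client("ApiClient"), go through a
# specific API client and borrow its underlying ApiClient:
api_client = shared_client("CoreV1Api").api_client
```

Because the CoreV1Api instance is cached, every caller ends up sharing the same underlying ApiClient, which is the point of the shared-client machinery.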
Better still: I already have one of those, because I have a spawner everywhere I use that client, and its api attr is already a CoreV1Api shared client.
Force-pushed from d605b76 to 0d88dc9
Looks great to me. Thanks for iterating on that!
While I'm at it, freshen dependencies and upstream Helm charts, thereby losing the need to carry our own KubeSpawner.