List other UI servers running #27
From the perspective of user B, a discoverable UI would be the better option. (It is also clearer what views/commands are available to user B, so the UI can be better tested.)
Shared on Riot, copying here:
When testing the SSH Spawner I had to troubleshoot the proxy too (I had to troubleshoot pretty much the whole setup 😫). I learned a good trick that might be helpful here, so a note to self (or to whoever works on this one):
Example response:

{
  "/": {
    "hub": true,
    "target": "http://127.0.0.1:8081",
    "jupyterhub": true,
    "last_activity": "2019-06-05T11:09:56.477Z"
  },
  "/user/kinow": {
    "user": "kinow",
    "server_name": "",
    "target": "http://127.0.0.1:44755",
    "jupyterhub": true,
    "last_activity": "2019-06-05T11:09:57.888Z"
  }
}
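A routes payload like the one above can be turned into a per-user listing. The sketch below is a minimal example of that filtering step; the fetching details noted in the comment (proxy REST API on port 8001, `CONFIGPROXY_AUTH_TOKEN`) come from configurable-http-proxy's documented API but are assumptions about this particular deployment.

```python
# Sketch: list user UI servers from a configurable-http-proxy
# /api/routes response like the example above.
#
# Fetching the payload would typically be:
#   GET http://127.0.0.1:8001/api/routes
#   Authorization: token $CONFIGPROXY_AUTH_TOKEN
# (port and token source are assumptions for this setup).
import json


def list_user_servers(routes: dict) -> dict:
    """Map each user to the target URL of their UI server.

    User routes carry a "user" key; the hub's own route ("/") does not.
    """
    return {
        info["user"]: info["target"]
        for info in routes.values()
        if "user" in info
    }


routes = json.loads("""
{
  "/": {"hub": true, "target": "http://127.0.0.1:8081", "jupyterhub": true},
  "/user/kinow": {"user": "kinow", "server_name": "",
                  "target": "http://127.0.0.1:44755", "jupyterhub": true}
}
""")
print(list_user_servers(routes))  # {'kinow': 'http://127.0.0.1:44755'}
```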
One important thing that I have also been postponing investigating. To save resources, I assume, if you run... Question: is that desired for the Cylc UI Server? The reason for asking is that a UI Server may start a workflow that runs... well... forever? If the UI Server dies, then the feature discussed in this issue is directly affected. Example: Bruno starts workflow
Hmmm, looks like it's not killing inactive notebooks. There is a default setting for inactivity, which defaults to 300 seconds. It's possible to see that, after a while and without touching the UI Server, the Hub keeps "checking routes". I believe it will remove the ones that do not reply after each inactivity interval. It is also possible to see the same PID running for well over 10 minutes, without any use, and not being killed. So ignore my previous question/comment 😬
Even if we're not yet, I think we should be killing (or shutting down) inactive UI servers.
Spent the afternoon trying to use the REST services from within the Hub, but without much success. I only got 403s, but learned a bit more about the endpoints and their access control... will try again another day and try to document how much we can rely on the Hub for listing other users' UI servers.
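The 403s mentioned above are what the JupyterHub REST API returns for unauthenticated requests: it expects an API token in an `Authorization: token ...` header. A minimal sketch of building such a request follows; the Hub URL/port and the token value are assumptions for this setup.

```python
# Sketch: building an authenticated JupyterHub REST API request.
# Requests without a token get a 403, as described above.
# The base URL/port and token value are placeholders for this setup.
import urllib.request


def hub_request(path: str, token: str,
                base: str = "http://127.0.0.1:8081/hub/api") -> urllib.request.Request:
    """Build a request for the Hub REST API with token auth."""
    return urllib.request.Request(
        base + path,
        headers={"Authorization": f"token {token}"},
    )


# e.g. list users and their servers (needs a sufficiently privileged token):
req = hub_request("/users", "abc123")
print(req.full_url)                     # http://127.0.0.1:8081/hub/api/users
print(req.get_header("Authorization"))  # token abc123
```

Sending it with `urllib.request.urlopen(req)` against a running Hub should then return JSON instead of a 403, provided the token is authorised for that endpoint.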
I haven't fully tested JupyterHub, but my understanding is that it won't kill the UI Server, even if inactive. But it provides an example of a JupyterHub service: cull-idle. If you start that service, it will read the last activity of each server and cull the idle ones. But at least that's yet another thing where we can tell users to use something from the JupyterHub project :)
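For reference, the cull-idle service mentioned above is typically wired into `jupyterhub_config.py` as a Hub-managed service, along the lines of the JupyterHub example below. The script path and timeout here are placeholders, not a tested configuration for this project.

```python
# jupyterhub_config.py fragment -- registering the cull-idle example
# service from the JupyterHub repository. Script location and timeout
# are placeholders for this setup.
import sys

c.JupyterHub.services = [
    {
        "name": "cull-idle",
        "admin": True,  # needs Hub API access to read activity and stop servers
        "command": [
            sys.executable,
            "cull_idle_servers.py",  # script from the jupyterhub examples
            "--timeout=3600",        # cull servers idle for an hour
        ],
    }
]
```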
Prepared a draft pull request that lists all the UI servers running in a Hub, using the configurable-http-proxy REST API. There is no authorization layer here, so in theory with that change you would be able to see all the UI servers, and also open any of them (perhaps useful for development).
I'd have thought what we need to do is present users with a list of the UI servers they are authorised to access (whether they are running or not). I don't think normal users should ever need to worry about which UI servers are currently running.
Oh I see. So for that we may not even need to access the Hub or proxy APIs. That info could be stored somewhere else, like a DB (maybe the same one as the authorization info). But at least I learned a bit more about JupyterHub with this experiment :-)
There's some other work happening on JupyterHub that may benefit us: jupyterhub/jupyterhub#1758 (comment) Specifically this part:
If they improve the REST API, and we get methods for this, maybe backed by some changes in their database, then we won't have too much work left to get that into the UI & UI Server.
From CylcCon2020: we will implement a configuration file for assigning authorisation to users in a read/write/execute type fashion (with some finer-grained privilege control).
Not mentioned at CylcCon2020: it would be nice to permit temporary authorisation, e.g. for support purposes; ideally this would fit in with the same configuration file scheme.
(For authorization, see #10.)
We will have one UI server per user, and each UI server will look after one or more workflows.
We want to allow users to access each other's UI servers and see their running workflows - subject to proper authorization, of course.
We need to define what the workflow is going to look like.