Add cmd for creating user kubeconfigs (cont. from #545) #598
Conversation
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Conflicts: Reviewers, this pull request conflicts with the following ones:

If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
Force-pushed from ad3e543 to 297ded4
Tested out creating and auth-ing from the config files in this branch; everything seemed to work. k9s even refused to list namespaces after switching context to alice-redteam...

I know you are still planning to write docs, which is good. I don't understand how the actual network of tanks relates... do you just add namespace: redteam to the individual tanks in network.yaml? And then is there still a "master" namespace where an admin can monitor all tanks at once?

I left a few comments and suggestions below.
warnet admin init also copies in the default 6-node network without asking if I want a custom network, and it does not copy in the scenarios. One suggestion: instead of "initializing", the warnet admin command could be more of an "upgrade" to a project directory that adds namespace stuff without disturbing networks or scenarios.

OR it would be even cooler if warnet admin init did create a network as well, with inquirer, and also distributed the namespaces according to how many teams the user wants, etc.
```python
os.makedirs(kubeconfig_dir, exist_ok=True)

# Get all namespaces that start with prefix
# This assumes when deploying multiple namespacs for the purpose of team games, all namespaces start with a prefix,
```
s "namespaces"
```python
# Get all namespaces that start with prefix
# This assumes when deploying multiple namespacs for the purpose of team games, all namespaces start with a prefix,
# e.g., tabconf-wargames-*. Currently, this is a bit brittle, but we can improve on this in the future
```
Yeah, it feels like instead of using naming conventions, k8s probably wants us to use metadata labels or something.
Yeah, it would be great to have a "cluster level" configmap. We could have a wargames-admin namespace, where we create a configmap for all of the war-games related metadata (e.g., all of the namespaces, players), and then any administration commands can reference that to get all of the correct values. This admin namespace could also be used for deploying the signet miner, etc.
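For illustration, a minimal sketch of how an admin command might read that metadata with the kubernetes python client (the wargames-admin namespace, the wargames-metadata ConfigMap, and its namespaces key are all hypothetical, not code from this PR):

```python
# Hypothetical sketch: pull war-games metadata from a cluster-level ConfigMap
# in a dedicated admin namespace. All names here are assumptions.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

cm = v1.read_namespaced_config_map(name="wargames-metadata", namespace="wargames-admin")
# e.g. the ConfigMap could hold "wargames-red-team,wargames-blue-team"
team_namespaces = cm.data["namespaces"].split(",")
print(team_namespaces)
```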
```python
# Create a kubeconfig file for the user
kubeconfig_file = os.path.join(kubeconfig_dir, f"{sa}-{namespace}-kubeconfig")

# TODO: move yaml out of python code to resources/manifests/
```
You could use yaml.dump() like in deploy_namespaces() and deploy_network().
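A minimal sketch of that approach, assuming the surrounding function already has the cluster name, server URL, CA data, and token in local variables (those four names are placeholders here; sa, namespace, and kubeconfig_file are from the diff above):

```python
# Sketch only: build the kubeconfig as a dict and let yaml.dump() handle
# serialization, rather than templating a YAML string inside Python.
# cluster_name, server, ca_data, and token are assumed local variables.
import yaml

kubeconfig = {
    "apiVersion": "v1",
    "kind": "Config",
    "clusters": [{"name": cluster_name,
                  "cluster": {"server": server,
                              "certificate-authority-data": ca_data}}],
    "users": [{"name": sa, "user": {"token": token}}],
    "contexts": [{"name": f"{sa}-{namespace}",
                  "context": {"cluster": cluster_name,
                              "user": sa,
                              "namespace": namespace}}],
    "current-context": f"{sa}-{namespace}",
}

with open(kubeconfig_file, "w") as f:
    yaml.dump(kubeconfig, f)
```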
```python
def get_kubeconfig_value(jsonpath):
    command = f"kubectl config view --minify -o jsonpath={jsonpath}"
```
seems a little silly to me to import the kubernetes python client library into this module and then just use shell commands anyway, but the whole file is like that so it should just be a cleanup PR later
Heh, I had the same thought. I am somewhat in favour of using the Kubernetes python client for everything, but noticed everyone was defaulting to string-ifying kubectl commands. So long as we keep all the k8s logic in this file, it should be super easy to refactor at some point in the future.
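For illustration, the same query in both styles (a sketch, not code from this PR):

```python
# Contrast of the two styles under discussion: stringified kubectl vs. the
# python client this module already imports. Both list namespace names.
import subprocess

from kubernetes import client, config

# current style in this file: shell out to kubectl
ns_shell = subprocess.run(
    ["kubectl", "get", "namespaces", "-o", "jsonpath={.items[*].metadata.name}"],
    capture_output=True, text=True, check=True,
).stdout.split()

# equivalent call through the python client
config.load_kube_config()
ns_client = [ns.metadata.name for ns in client.CoreV1Api().list_namespace().items]

assert set(ns_shell) == set(ns_client)
```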
Thanks for the review @pinheadmz, good questions!
A network of tanks is always meant to be deployed into a single namespace. The nice thing is, since we are using bitcoin core's
In my mind, there really is no such thing as a "master" namespace. Namespaces are for grouping resources and limiting permissions, so if you are the cluster admin, you already have access to all of the namespaces. Namespaces can also be given permissions to look into other namespaces, e.g.,
I really like this idea, I'll try to implement this. I do agree it's a bit clunky how the
Would be nice for the big tabconf scoreboard if we could view all the pod statuses like with k9s. If the logging stack can aggregate all pod info, that might be enough...
Do the users then just run scenarios? These permissions do not allow the logging stack, but I'm not sure that's a problem?

```
(.venv) ➜ adminin warnet deploy networks/6_node_bitcoin
Found collectLogs or metricsExport in network definition, Deploying logging stack
"grafana" already exists with the same configuration, skipping
"prometheus-community" already exists with the same configuration, skipping
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "traefik" chart repository
...Successfully got an update from the "grafana" chart repository
...Successfully got an update from the "prometheus-community" chart repository
...Successfully got an update from the "scalinglightning" chart repository
Update Complete. ⎈Happy Helming!⎈
Error: query: failed to query with labels: secrets is forbidden: User "system:serviceaccount:wargames-blue-team:carol" cannot list resource "secrets" in API group "" in the namespace "warnet-logging"
Traceback (most recent call last):
  File "/Users/maxedwards/source/warnet/.venv/bin/warnet", line 8, in <module>
    sys.exit(cli())
             ^^^^^
  File "/Users/maxedwards/source/warnet/.venv/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/maxedwards/source/warnet/.venv/lib/python3.11/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/Users/maxedwards/source/warnet/.venv/lib/python3.11/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/maxedwards/source/warnet/.venv/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/maxedwards/source/warnet/.venv/lib/python3.11/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/maxedwards/source/warnet/src/warnet/deploy.py", line 50, in deploy
    dl = deploy_logging_stack(directory, debug)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/maxedwards/source/warnet/src/warnet/deploy.py", line 94, in deploy_logging_stack
    if not stream_command(command):
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/maxedwards/source/warnet/src/warnet/process.py", line 28, in stream_command
    raise Exception(process.stderr)
Exception: None
```
The expectation is the logging stack would be deployed into warnet-logging. I don't think it makes sense to have logging running in each namespace, especially for something like fork-observer. I've got a local branch where I'm reworking some of the flow and permissions to make things more seamless, but had to pivot a bit this week. Will try to finish today/tomorrow.
This is ideally how it should work. We know it already works across namespaces because warnet-logging aggregates information from the "default" (warnet) namespace. The only thing that needs to change is the assumption that there is only ever one namespace. For this, I added some code to annotate the team namespaces with a label, so that whenever we want to get all the pods in all of the team namespaces, we query the namespaces by label and then use that.
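A sketch of that label-based lookup (the warnet=team-namespace label key/value is an assumption for illustration, not this PR's actual label):

```python
# Sketch: find team namespaces by label instead of name prefix, then collect
# their pods. The label selector is illustrative.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for ns in v1.list_namespace(label_selector="warnet=team-namespace").items:
    for pod in v1.list_namespaced_pod(namespace=ns.metadata.name).items:
        print(ns.metadata.name, pod.metadata.name, pod.status.phase)
```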
namespaces.yaml is meant for describing the overall structure of what you want, with specific overrides for specific users as needed. The "default" roles should be defined in namespace-defaults.yaml so that they are automatically applied by default for each user in each namespace. At a lower level, defaults that should be applied for *any* namespaces deployment should be defined in values.yaml. namespace-defaults.yaml is meant to override values.yaml in the event that, for a particular namespaces deployment, the admin wants to create tailor-made roles and permissions. Otherwise, this can stay empty and whatever is in values.yaml will be applied.

update example prefix to wargames, to illustrate this is not relying on a default namespace of warnet. this probably needs some more thought, but I think it's best to address how to pipe through the name in a followup rather than slow this PR down.
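A toy sketch of the precedence that commit message describes, with illustrative keys (per-user entries in namespaces.yaml win over namespace-defaults.yaml, which wins over the chart's values.yaml):

```python
# Illustration only: later dicts override earlier ones, mirroring the
# values.yaml -> namespace-defaults.yaml -> namespaces.yaml layering.
chart_values = {"roles": ["pod-viewer", "pod-manager"]}  # values.yaml defaults
deployment_defaults = {}                                 # namespace-defaults.yaml (may stay empty)
user_overrides = {"roles": ["pod-viewer"]}               # per-user entry in namespaces.yaml

effective = {**chart_values, **deployment_defaults, **user_overrides}
print(effective)  # {'roles': ['pod-viewer']}
```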
Replace the setup_contexts.sh script with a proper warnet command.

Co-authored-by: mplsgrant <58152638+mplsgrant@users.noreply.github.com>
Force-pushed from acc63ed to 36e4f3c
allow the user to specify a namespace to deploy into (overrides kubectl default namespace)
Force-pushed from 36e4f3c to f84777a
Closing in favour of the replacement #616
Continues on #545. This command is used to create user-specific authentication files after running warnet deploy namespaces/<your custom config>.

The custom config here is for creating a namespace for each team, creating user service accounts in each namespace, and applying whatever roles are defined for those users in the namespaces.yaml and namespace-defaults.yaml files. If no roles are specified in the config files, whatever is in values.yaml for the namespaces chart is used. Currently, the defaults in the chart are to create pod viewer and pod manager roles for each user.

Testing

- warnet admin init - this copies over the example namespaces directory
- warnet deploy namespaces/two_namespaces_two_users/
- warnet admin create-kubeconfigs wargames- - issues a token for each user in each namespace with the prefix "wargames-"

To test, run warnet auth <user_kubeconfig>. This will switch your current-context in kubectl to this user context. You should be able to deploy and run scenarios, and do everything needed to participate in a war game. If you see a permissions error, you can easily update the namespaces.yaml file to fix the permissions for that user, then redeploy the namespaces and reissue the tokens. A quick programmatic sanity check is sketched after the Todo section below.

Todo
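The sanity check referenced above, as a minimal sketch: load one of the generated kubeconfigs and list pods in the matching team namespace. The kubeconfigs/ directory name is an assumption, and the carol / wargames-blue-team values are borrowed from the report above; the file name follows this PR's "<sa>-<namespace>-kubeconfig" scheme.

```python
# Sketch: confirm an issued user kubeconfig can list pods in its own
# namespace. Path and names are illustrative.
from kubernetes import client, config

config.load_kube_config(config_file="kubeconfigs/carol-wargames-blue-team-kubeconfig")
pods = client.CoreV1Api().list_namespaced_pod(namespace="wargames-blue-team")
print([p.metadata.name for p in pods.items])
```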