Vault execution module broken in pillar lookups #49671
Comments
Also ping @garethgreenaway @gtmanfred. I've currently done the legwork on #49343 / #43288; these are about adding pillar support to the Vault config for doling out policies to the tokens (rather than the completely arbitrary grains used now, see the discussions there).

@mchugh19 you're right in that the docs promise the behaviour you seek. If the master wants to render a pillar, it should at most be using the local config and getting an impersonated token. The behaviour therefore doesn't introduce breakage, and the only very small benefit I can see is that you can be positive your pillar doesn't violate the policies on the tokens you're now no longer using.

All of this considered, I'd like to propose closing this; I'll handle doc updates as part of my work on the other issues, as I'd much prefer to be able to dole out policies based on pillar records and maintain pillar functionality by using the master's token.
@The-Loeki the behavior described here isn't just a theoretical implementation described in the documentation, but the way it previously worked. We currently keep an older version of _utils/vault.py to run salt 2018.3. We've been running the vault execution modules since before they were in upstream salt, and only noticed this breakage when we attempted to clean up our local modules in favor of upstream.

I don't disagree that relying on grains for vault policies is a potential security issue. However, in our environment the same ops team that manages the hosts also manages vault. So while a machine could be made to spoof another, the internal folks with the access and knowledge to do so already have access to vault. I agree grains still aren't the best solution here, but it just isn't at the top of our priority list to update. If we were able to use pillars for this functionality, we'd gladly switch.
This is incorrect, and it is at the root of this issue. In our environment, the salt-master only has vault access to generate tokens from vault (using orphan tokens).

Minion policies are different from the master's, and the master itself has no other vault privileges. With the code changes in 2018.3, we would now need to grant our salt-master access to all vault secrets in order to allow a minion to access anything, which would be a security leak. The current functionality of the salt master returning
Well, I only figured that one out after reading this bug and plowing through the code trying to trigger the promised infinite recursion ;) What I meant to say is that I'd rather fix the docs and add pillar-based policies than revert to the old behaviour.
I'm relatively new to Vault, but I understand that a token can only give out policies the token itself already has? So from what I can tell
Nope. That's the orphan functionality.
In code, yes, but not via salt. Since the salt-master has sudo to generate tokens, it could generate a token with all policies (if you knew the names), or generate a multi-use token with a 3-year TTL. However, the tokens generated by the salt vault implementation are single-use and only for the requested policies.

Might it make sense to create a similar vault.read_secret_pillar_policy function which contains the desired pillar policy lookup behavior?
So what's the difference in terms of security, then? Anyone with sufficient write access to the Salt master's repos has the capability to abuse that power, with or without impersonation. So, as I mentioned before, the main difference (aside from having to make the access explicit) is the operators having to be aware of which secrets they are passing on through the pillar (which is already true anyway).

That's why I vastly prefer building in pillar-based policies and updating the docs to reflect this (lack of) impersonation. Fixing it outright seems impossible without reintroducing the aforementioned infinite recursion. Other than that, the only solution I currently see is a big fat WARNING in the docs ;)
@mchugh19 Apologies for the delay on this one; I circled back to it today. Can you provide some examples of your Vault policies for both the minions and the master, as well as the relevant Salt master configuration for Vault? Also, to confirm: in your setup the Salt master does not have access to the Vault endpoints, only access to generate tokens, while the minions do have access to the endpoints?
Sure thing... Master vault config:
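Presumably something along these lines (values are illustrative; the shape follows the documented master configuration for the vault runner):

```yaml
vault:
  url: https://vault.example.com:8200
  auth:
    method: token
    # the master's own token, created below
    token: 11111111-2222-3333-4444-555555555555
  policies:
    # {minion} is expanded to the minion's id when tokens are generated
    - saltstack/minions
    - saltstack/{minion}
```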
The salt master's vault token was created with
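Most likely a token-create call of this form (the -orphan flag is inferred from the orphan-token setup described above; older Vault CLI releases spell the command vault token-create):

```console
vault token create -orphan -policy=salt-master-creator
```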
using the salt-master-creator policy of
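A plausible reconstruction of that policy, based on the capabilities described just below (token creation only, no access to secrets):

```hcl
# salt-master-creator: may mint tokens, but can read no secrets itself
path "auth/token/create" {
  capabilities = ["create", "read", "update", "sudo"]
}

path "auth/token/create-orphan" {
  capabilities = ["create", "read", "update", "sudo"]
}
```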
This means that the salt-master can create tokens (with the sudo permission on the auth/token/create endpoints), but because the salt-master's token was created with only that policy, it has no access to secrets itself.

Policies applied to the test minion:
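One way to inspect this is the vault.show_policies runner; the output shape and policy names here are illustrative:

```console
$ salt-run vault.show_policies testminion
- saltstack/minions
- saltstack/219829223289
```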
As you can see, the minion is successfully getting the Test Vault policy - using the
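The policy body would be along these lines (reconstructed from the sentence that follows):

```hcl
# saltstack/219829223289
path "secret/aws-devqa/firehose_send" {
  capabilities = ["read"]
}
```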
This shows members of the saltstack/219829223289 policy should have read permission on the secret/aws-devqa/firehose_send endpoint. With the initial utils/vault.py from 2017.7, the minion is able to successfully access the firehose_send endpoint
While the master is not
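The commands being contrasted are, in essence (a sketch; outputs omitted):

```console
# On the minion: salt generates a token carrying the minion's
# policies, so the read succeeds
salt-call vault.read_secret secret/aws-devqa/firehose_send

# With the master's own token (e.g. via the vault CLI directly):
# permission denied, since salt-master-creator grants no secret access
VAULT_TOKEN=<the-master-token> vault read secret/aws-devqa/firehose_send
```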
If that endpoint is used in a pillar, and assigned to the host, the minion can access the pillar just fine:
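For example, a pillar file like this (path and key name hypothetical), assigned to the host in the pillar top file; salt 'testminion' pillar.get firehose then returns the secret:

```sls
# /srv/pillar/firehose.sls
firehose: {{ salt['vault.read_secret']('secret/aws-devqa/firehose_send') }}
```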
Pshew! This sets up the current state. The problem comes when we remove the local copy of _utils/vault.py, and use the upstream one from 2018.3.
With the following in the master log:
So obviously the minion is now being denied by the vault policy. At this point, as described in the initial description of this ticket, it seems that the master, when running the vault.read_secret from the pillar, is no longer creating a vault token with the minion's policy list, but is instead performing the lookup itself with its own local config. If I were to modify the vault policy to allow the salt master to read this data, then the pillar lookup would work for this minion.

Having the salt-master create a vault token with the minion's vault policy has been the design of the vault modules from the beginning, so it is this behavior that I think should be restored.
@mchugh19 I believe I have this figured out, can you give this patch a try:
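The patch itself did not survive formatting; judging from the fix that eventually shipped, it plausibly made the master impersonate the minion whenever a minion id is in scope, along these lines (a reconstruction, not the verbatim diff; helper names assumed from the surrounding discussion):

```python
if 'vault' in __opts__ and __opts__.get('__role', 'minion') == 'master':
    if 'id' in __grains__:
        # Rendering a pillar for a specific minion: fetch a token scoped
        # to that minion's policies instead of using the master's config
        log.debug('Contacting master for Vault connection details')
        return _get_token_and_url_from_master()
    return _use_local_config()
```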
Yep! That works. I just wasn't sure if that was the proper fix. If you are all good with it, then that works for me. Thanks!
@garethgreenaway you do realize you're possibly regressing on #45928 doing that? |
@The-Loeki Definitely. Always a concern 😄 Before sending along the patch for @mchugh19 to test, I did some manual testing with both the scenario described above and the previous salt-ssh scenario; with the above change, both scenarios worked as expected. The section of code that is used for salt-ssh was not changed; it continues to use the local Vault configuration.

As for the discussion above about adding pillar-based policies: I like it, and I think that would be a great feature to add. Definitely looking forward to seeing a future PR 😄. My main focus on this particular issue was fixing the bug that we introduced in 2018.3 without introducing any additional issues.
Description of Issue/Question
When using salt minions connected to a master, where the master does have access to a vault endpoint but the minion does not, placing a vault.read_secret into a pillar should fail. As of 2018.3, this broke, and those lookups now succeed.
Steps to Reproduce Issue
For the case of vault lookups in a pillar, the master should be running the vault token generation on behalf of the minion, not just connecting to vault to perform the lookup itself; this occurs in the `_get_token_and_url_from_master()` function. It looks like the logic in https://github.com/saltstack/salt/blob/develop/salt/utils/vault.py#L136-L138 broke this flow, and the `_get_token_and_url_from_master()` function is never called. The problem code is currently:
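Reconstructed from the description below (the exact block was lost in formatting; the `any(...)` form and the `_use_local_config()` helper are inferred):

```python
if 'vault' in __opts__ and __opts__.get('__role', 'minion') == 'master':
    return _use_local_config()
elif any((__opts__['local'], __opts__['file_client'] == 'local',
          __opts__['master_type'] == 'disable')):
    return _use_local_config()
else:
    log.debug('Contacting master for Vault connection details')
    return _get_token_and_url_from_master()
```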
The previous version of the code read:
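Likewise reconstructed; per the next paragraph, the old check carried a `not` and the salt-ssh `elif` did not yet exist:

```python
if 'vault' in __opts__ and not __opts__.get('__role', 'minion') == 'master':
    return _use_local_config()
else:
    log.debug('Contacting master for Vault connection details')
    return _get_token_and_url_from_master()
```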
As you can see, the logic was adjusted to remove the `not` in the `not __opts__.get('__role', 'minion') == 'master'` lookup. Restoring it just results in the `elif` erroring out on a missing key with `__opts__['master_type']`. If that is corrected to use `__opts__.get('master_type')`, then that conditional still succeeds, because `__opts__['local']` evaluates to True.

In the case of a vault lookup in a pillar, where the master connects to vault with the vault profile of a minion, the first two conditionals in that if/elif should fail. The commit messages state these lines were added to support vault lookups in salt-ssh scenarios, but since we don't have that environment to test, I'm not sure of the best way to progress.
Leaks from vault can be catastrophic, so this requires complete unit tests validating all of these scenarios.
Setup
Versions Report