
Populate entire files via Pillar #1543

Closed
QuinnyPig opened this issue Jun 30, 2012 · 49 comments

@QuinnyPig
Contributor

If I want to populate a file (say, /etc/ssl/cert.pem) that not every minion should be able to see, Pillar is ideal for this; however right now I don't have that option.

@thatch45
Contributor

Good call, remind me to place this on 0.10.3 when the time comes

@QuinnyPig
Contributor Author

This attempted workaround via jinja just results in a blank /tmp/date; should it work, or am I being too optimistic?

[cquinn@salt www]$ cd /srv/pillar/
[cquinn@salt pillar]$ cat top.sls 
base:
  '*':
    - base
  'www*':
    - www
[cquinn@salt pillar]$ cat www/init.sls 
date: {% include "blah" %}

[cquinn@salt pillar]$ cat /srv/salt/www/init.sls 
/tmp/date:
  file:
    - managed
    - source: salt://www/files/date
    - template: jinja

@rnd42

rnd42 commented Oct 15, 2012

I addressed a similar problem doing something along the lines of the following example:

top.sls: (the same for both /srv/pillar and /srv/salt directories)

base:
    '*':
        - ssh

/srv/pillar/ssh.sls:

ssh_certs:
{% if grains['fqdn'] == 'server1.example.com' %}
    dsa: |
        -----BEGIN DSA PRIVATE KEY-----
        {# key text goes here with consistent indentation... #}
        -----END DSA PRIVATE KEY-----
    ecdsa: |
        -----BEGIN ECDSA PRIVATE KEY-----
        {# key text goes here with consistent indentation... #}
        -----END ECDSA PRIVATE KEY-----
    rsa: |
        -----BEGIN RSA PRIVATE KEY-----
        {# key text goes here with consistent indentation... #}
        -----END RSA PRIVATE KEY-----
{% elif grains['fqdn'] == 'server2.example.com' %}
    # same as above but with different key texts of course....
{% endif %}

/srv/salt/ssh.sls:

{# using .get() and providing a default on the next line is important to avoid a KeyError for
   hosts that don't include the Pillar ssh.sls file: #}

{% for key_type in pillar.get('ssh_certs', {}) %}
/etc/ssh/ssh_host_{{ key_type }}_key:
    file.managed:
        - context:
            key_type: {{ key_type }}
        - mode: 600
        - source: salt://ssh/files/private_key
        - template: jinja
{% endfor %}

/srv/salt/ssh/files/private_key:

{{ pillar['ssh_certs'][key_type] }}

It's a little bit convoluted, but works just fine for me... Of course this approach wouldn't work very well for binary data or any kind of data that might cause PyYaml or Jinja2 problems....

@tf198
Contributor

tf198 commented Jan 3, 2013

I've been looking into this as well, as I need to be able to manage systems for multiple unconnected companies, and came up with the following:

tf198/salt@8a30f60

Basically it adds path_hide_regex and path_hide_glob options to the master config which prevent files from being listed to minions. Then you can use a suitably long and securely generated secret, distributed via pillar, to restrict access to individual minions or groups of minions.

/etc/salt/master - hide the private directory

path_hide_glob:
  - private/*

/srv/pillar/private.sls - distribute the secret through pillar mechanism

{% if grains['id'] == 'server1.example.com' %}
secret: gK9yOuETT9pKdnsvWAhbbHoYqJlchxDD
{% endif %}

/srv/salt/private/gK9yOuETT9pKdnsvWAhbbHoYqJlchxDD/cert.pem

-----BEGIN RSA PRIVATE KEY-----
...
-----END RSA PRIVATE KEY-----

/srv/salt/ssl.sls - install the file

/etc/ssl/cert.pem:
  file.managed:
    - source: salt://private/{{ pillar.secret }}/cert.pem
    - mode: 600

Minions can't see the files:

$ salt 'server1.*' cp.list_master
{'server1.example.com': ['top.sls', 'ssl.sls']}
$ salt 'server1.*' cp.list_master_dirs
{'server1.example.com': ['.', 'private']}

The only issue I can see (until some security guru pops up and shoots this down in flames) is that you cannot use the file.recurse state for a hidden path, though I guess there could be other side effects.

From my reading of the ZMQ docs, any other access control would require a relatively major change to the transport layer involving tokens, so I can't imagine it coming any time soon unless Thatch has something in the wings.

Comments welcomed...

@pille
Contributor

pille commented Jan 3, 2013

there's issue #636 that was closed wrt pillar ;-)

@thatch45
Contributor

thatch45 commented Jan 3, 2013

This is not a bad idea @tf198, although it does seem too hacky. I have been thinking about this a little, and I am wondering if we could allow files to be stored in pillar and then realized via a pillar file request; the files could then be enabled within the pillar data itself for the specific minion, while direct file downloads from the pillar_roots remain restricted.
It is also noteworthy that this patch no longer applies to master because of some significant changes I have made to the file server for 0.12.0 allowing multiple fileserver backends. Take a look at the salt/fileserver/roots.py module, as that is where the fileserver code now lives.

@tf198
Contributor

tf198 commented Jan 4, 2013

Man this project moves fast - I was only away for a week :-)

The problem as I see it is that you'll end up duplicating a lot of code to allow file requests from the pillar - a new handler for pillar:// etc. I'm glad to hear that you are still looking into this though, as it is a blocker for several usage scenarios at the moment.

I've reworked my patch for the new fileserver backends (tf198/salt@4f9c713) so that you can declare environments as private, which seems like a more natural separation and gives cleaner code. Usage is pretty much the same as above, except you need to specify the env in the state file.

/etc/salt/master

file_roots:
  base:
    - /srv/salt
  private:
    - /srv/private

private_envs:
  - private

/srv/pillar/private.sls - distribute the secret through pillar mechanism

{% if grains['id'] == 'server1.example.com' %}
secret: gK9yOuETT9pKdnsvWAhbbHoYqJlchxDD
{% endif %}

/srv/private/gK9yOuETT9pKdnsvWAhbbHoYqJlchxDD/cert.pem

-----BEGIN RSA PRIVATE KEY-----
...
-----END RSA PRIVATE KEY-----

/srv/salt/ssl.sls

/etc/ssl/cert.pem:
  file.managed:
    - source: salt://{{ pillar.secret }}/cert.pem
    - env: private
    - mode: 600

Minions can't see the files:

$ salt 'server1.*' cp.list_master
{'server1.example.com': ['top.sls', 'ssl.sls']}
$ salt 'server1.*' cp.list_master private
{'server1.example.com': []}

@thatch45
Contributor

thatch45 commented Jan 4, 2013

Hmm, this is a good direction, but we would need to evaluate some of the security implications more.
Let's do this: we will put this on hold until after 0.12.0 is released, since we need to get it out the door. Then we can move this to the forefront and evaluate it more aggressively. Sound alright?

@tf198
Contributor

tf198 commented Jan 4, 2013

Agreed - I'm still at the evaluation stage so am just kicking a few ideas about to see whether we can make SaltStack bend to our particular needs. Obviously full granular access control is preferable but this gives me enough to be going on with.

Assuming of course that there is no way the secret leaks through any of the logging/caching internals...

BTW. I can't find anywhere in the current test suite to add master/config behaviour tests - is this currently not covered?

@thatch45
Contributor

thatch45 commented Jan 4, 2013

Yes, you can set config values on the master, in here:
https://github.com/saltstack/salt/blob/develop/tests/integration/files/conf/master
This is for the primary master that we start for all of the integration tests.

@Mrten
Contributor

Mrten commented Apr 18, 2013

I've worked around this via mako (er, python).
Basically:

#!mako|yaml

<%!
import re
import os
import yaml
%>
<%

path = '/home/salt/pillar'
data = yaml.safe_load(open(path + '/ssl.conf').read())
out  = []

if grains['id'] in data:

  sslkeys = {}

  for ssldir in data[grains['id']]:
    sslpath = path + '/ssl/' + ssldir + '/'

    keydata  = open(sslpath + 'key').read().split('\n')
    certdata = open(sslpath + 'certificate').read().split('\n')
    intermediatedata = None
    if os.path.exists(sslpath + 'intermediate'):
      intermediatedata = open(sslpath + 'intermediate').read().split('\n')

    sslkeys[ssldir] = { 'key' : keydata, 'cert' : certdata, 'int' : intermediatedata }

  # now that we have read everything, build the yaml

  for sslname, ssl in sslkeys.iteritems():
    out.append(  '  ' + sslname + ': ')
    out.append(  '    key: |\n'+         '      ' + '\n      '.join(ssl['key']))
    out.append(  '    certificate: |\n'+ '      ' + '\n      '.join(ssl['cert']))
    if ssl['int'] != None:
      out.append('    intermediate: |\n'+'      ' + '\n      '.join(ssl['int']))

%>
ssl:
${'\n'.join(out)}

Then, have a state that references this:

{% if 'ssl' in pillar %}
{% for ssldir in pillar['ssl'] %}

/etc/apache2/ssl/{{ ssldir }}:
  file.directory:
    - mode: 700
    - user: root
    - require:
      - file: /etc/apache2/ssl

/etc/apache2/ssl/{{ ssldir }}/key:
  file.managed:
    - mode: 400
    - user: root
    - source: salt://apache/ssl-file
    - template: jinja
    - context:
      content: |-
        {{ pillar['ssl'][ssldir]['key'] | indent(8) }}
    - require:
      - file: /etc/apache2/ssl/{{ ssldir }}

/etc/apache2/ssl/{{ ssldir }}/certificate:
  file.managed:
    - mode: 444
    - user: root
    - source: salt://apache/ssl-file
    - template: jinja
    - context:
      content: |-
        {{ pillar['ssl'][ssldir]['certificate'] | indent(8) }}
    - require:
      - file: /etc/apache2/ssl/{{ ssldir }}

and ssl-file prints the data:

{{ content }}

Won't work for binary data, obviously, but if you encode it as base64 at the start and decode it in ssl-file (use mako instead of jinja) you'd get there eventually.
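
To make that base64 idea concrete, here is a rough sketch (all names hypothetical; assumes Python 2 era mako, where str.decode('base64') exists and ignores embedded newlines). Encode the binary up front with something like base64 license.bin and paste the output into the pillar:

/srv/pillar/license.sls (hypothetical):

license_blob: |
    <paste the base64-encoded contents of the binary file here>

and a mako-rendered file template that decodes it back to bytes:

#!mako
${ pillar['license_blob'].decode('base64') }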

@jollyroger

I've solved this by writing custom ext_pillar: https://gist.github.com/jollyroger/6037683. This module has some assumptions but is pretty straightforward:

import os.path
def ext_pillar(pillar, **kwargs):
    ca_pillar = {}
    host_id = __opts__['id']
    for pillar_key, ca_dir in kwargs.iteritems():
        cacert_path = os.path.join(ca_dir, 'cacert.pem')
        cert_path = os.path.join(ca_dir, 'certs', "".join([host_id, '.crt']))
        key_path = os.path.join(ca_dir, 'private', "".join([host_id, '.key']))
        try:
            cacert = open(cacert_path,'r').read()
            key = open(key_path,'r').read()
            cert = open(cert_path, 'r').read()
            ca_pillar[pillar_key] = {
                "cacert": cacert,
                "key": key,
                "cert": cert 
                }
        except IOError:
            continue
    return ca_pillar

You need to configure ext_pillar in the master's config, and also point Salt at the ext_pillar module's path (for example, the code above is saved as /srv/salt/modules/pillar/ca.py) via the extension_modules setting:

extension_modules: /srv/salt/modules
ext_pillar:
  - ca: 
      example_ca: /srv/ca.example.com/demoCA

In this example, ext_pillar will populate the pillar key example_ca with a dictionary containing the keys cacert, key, and cert, which hold the CA root certificate and the host's private key and certificate respectively, based on the host's id. Assuming the target host is server1.example.com, the following pillar values will be available for it:

  • example_ca:cacert will be read from /srv/ca.example.com/demoCA/cacert.pem
  • example_ca:key will be read from /srv/ca.example.com/demoCA/private/server1.example.com.key
  • example_ca:cert will be read from /srv/ca.example.com/demoCA/certs/server1.example.com.crt

The actual states using this pillar can look like this:

{{ "/etc/ssl/certs/%s.crt"|format(grains['id']) }}:
  file.managed:
    - mode: 0600
    - user: root
    - group: root
    - contents: |-
        {{ salt['pillar.get']('example_ca:cert')|indent(8) }}

{{ "/etc/ssl/private/%s.key"|format(grains['id']) }}:
  file.managed:
    - mode: 0600
    - user: root
    - group: root
    - contents: |-
        {{ salt['pillar.get']('example_ca:key')|indent(8) }}

Thanks to @Mrten for inspiration since most of the ideas are his, I just better like the ext_pillar idea.

@tf198
Contributor

tf198 commented Jul 31, 2013

One extra thing to add to this discussion - any solution that uses pillars results in the file being transferred every time regardless. Hardening salt envs in some way retains all the benefits of file.managed so is preferable from a network overheads point of view...

@jarus
Contributor

jarus commented Aug 18, 2013

+1
It would be very nice and helpful to deploy ssl certs/key via Salt.

@terminalmage
Contributor

This was implemented about 7 weeks ago in 3077e63 and will be in 0.17.0. Closing.

@aptiko

aptiko commented Nov 12, 2013

I don't think that the fix of 3077e63 fixes this. 3077e63 is for using pillar variables as the contents of a file. My understanding is that this ticket was about getting an entire file from the pillar.

@terminalmage
Contributor

I don't quite understand the difference. Pillar data aren't files, they're strings, integers, etc. This issue asked for the ability to populate an entire file from pillar data, and the above commit does that.

@aptiko

aptiko commented Nov 13, 2013

@terminalmage Yes, it isn't very clear how to interpret what the original poster wanted; however, he gives an example in his second post, where the pillar contains:

  date: {% include "blah" %}

Apparently he wants the pillar to store "blah" as a separate file, not as a string inside the pillar. I think that this is not possible; the "include" trick above won't work if "blah" contains newlines, and I don't see any way in jinja to make it work; you'd need something like

  date: |
    {% include "blah" | replace("\n", "\n        ") %}

which doesn't work of course (is there anything like this? I'm far from a jinja guru).
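
For what it's worth, Jinja's filter block gets close; this is an untested sketch (by default the indent() filter leaves the first line alone, which the literal leading spaces on the template line cover):

  date: |
    {% filter indent(4) %}{% include "blah" %}{% endfilter %}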

The only workaround seems to be to have the required contents inside the pillar file:

  date: |
    This is an important date
    just because it is important:
    2013-11-13

However, I think that in complicated pillars it might be beneficial to be able to store the stuff in a separate file.

@QuinnyPig
Contributor Author

@aptiko What I wanted was the ability to store SSL certificates in Pillar. This has been working for me for ages now, but I see where you're going with it.

@ruimarinho

@aptiko I thought this ticket referenced this need too - I share your opinion. Storing SSL certificates in separate files inside pillar would be much easier to manage than inlining the content. Should this be ticketed as a new issue?

@aptiko

aptiko commented Nov 17, 2013

@ruimarinho So far inlining the content works for me (I have PGP and SSL keys, and also small html templates - see, for example, https://github.com/openmeteo/salt-enhydris/blob/master/pillar.example#L41-L47). I can imagine that this could become messy, so I chose to share this thought in this ticket. I don't know if it should be ticketed. I'm relatively new to Salt and I don't know how it's managed and where it's going.

@ruimarinho

@aptiko I also inline SSL keys like you did. Nice tip about jinja templates too, though. Thanks!

So, what's the outcome of this issue? Can anyone related to Salt shed some light?

@clearclaw

So, how do we distribute binary file data (eg binary license keys) from pillars?

@basepi
Contributor

basepi commented Jan 3, 2014

@clearclaw I'm not sure that this is supported by the contents_pillar option. One roadblock would be getting the actual binary data into the pillar data structure. Even then, I'd be worried about that data being mangled in some way by msgpack. Then again, we distribute binary files from the fileserver just fine, so maybe it wouldn't be a problem.

That said, I think you should still open a new issue specifically for the binary file data problem. I think we would need a special external pillar module to get this data in, plus probably an additional file module function to deliver it.

@clearclaw

#9569

@arnisoph
Contributor

+1

@iwinux

iwinux commented Jul 30, 2014

+1 for this feature

(otherwise, a helper script that generates key files as YAML would be appreciated LOL)

@rnd42

rnd42 commented Jul 30, 2014

At its simplest this script would be absolutely trivial:

script.py:

#!python
import sys

# read the key from stdin and re-emit it as an indented YAML block scalar
print("key: |")
for line in sys.stdin.read().splitlines():
    print(" " * 4 + line)

call it with cat my_file.key | python script.py > my_file.yaml...

Adjust it to suit your needs...

@igorwidlinski

+1

@h0jeZvgoxFepBQ2C

So is it still not possible to reference full files as pillar data? I also want to store different client SSL certificates as separate files inside my pillar folder.

@kevinquinnyo
Contributor

The solutions here are hacky -- is there really not a solution for this? I need to keep SSL keys private, so pillar data is the only logical answer with salt, but inline data in pillar files for this is a bit of an upkeep nightmare.

Just bumping this issue so someone can point me in the right direction if there's a more elegant solution I don't know about.

@basepi
Contributor

basepi commented Jul 27, 2015

@kevinquinnyo This is being tracked in a new issue, #9569.

@igorwidlinski

Lack of major features like this makes me wonder whether anyone actually uses salt in production environments as their primary configuration management software, and if people do, to what extent.

@basepi
Contributor

basepi commented Jul 27, 2015

We have many thousands of users managing millions of servers with salt. My guess is most people have gotten around this by putting their keys in pillar in a similar fashion to this:

not_my_private_ssh_key: |
    -----BEGIN RSA PRIVATE KEY-----
    MIIEowIBAAKCAQEAw/o8DDnheOqOjH9pRhxPpHQ7TEXNtZswaenF66crWglIdno7
    MDAz+wPHYH4HJh2LO1oXW14Hd5JxFSxr5HzbLzecwqrLf7e8lEHic5ArBKEon0Rx
    j9WTN7a8OE2iA9+HKnVpERMImECbTfl6NpXrtODC72vDZPhF9HA5snEqN9D+DYTH
    ddgJGOyczPmqRJwQNFzP66U49oMrrQA+KY2lZbqfzxCAmlvNoiX6vtDb3aqKPGY3
    oh7MhgbQ3Inik23vS9/vDgKTgiRoMXZfHI4HUmG77aBs4hXNn0z9+78WWxu+qhWt
    UegmlHU4pddwuHGOcsd+NEJi5CUwZdz6XAVLVwIDAQABAoIBAF9mjhqphoAVNqVg
    VMADgiWdS0x64oPYcv3sBiQMMcdXo4XBRNTVckhsc38eep5sXV3cImig4mOrzw6u
    nCsTOKPIn7AH8p5OtCc711/IO5i6VwsJB3ssTckeVIvYBtl509OwaiAcst9i+/c+
    TecKnj5j96ETRX/+eBdhFkUuX43hfZMlsRnyrk/wc45qYzoLOUjbFnsNINvHSVAt
    BmZnh2dXDqJCpJ9TTDtkCbMnq+Eze583X3j/S08cH+o6EaSIrkvtJ8P58/aHg5k0
    OXLLDjPT8GeTsGVBuuXCt4T5Qnyuvfmz8ktGdTtAb+kExQdB+yWPoWNUGEic8Elq
    Z04pEwkCgYEA7Kq0c/aueQPd4mit1r1JsUttEyYaCXKd6Qzi8vWjEVRWzpax8TIW
    KGeWlQwQVDxIoy2gSDz88xJnRAZINhXfUpXOh+kXuUyoYt9Kjo2MaRuumZjOkV0e
    ATCL4c0ohYkFQ4J40eh30Q+yasIEFFK9EJIG801nCvXgfpAmGJK8ASMCgYEA0/yd
    JSCGdVULnmNRU2imMLBtG1I2lfyQbSaROX1bAjwjypj64grD51f5o3nZPpP2fYpO
    xLXwZJ9N3s+EKAcyGeC0otw0BtQMnjoLHkxR9SRhCxzhpktkdnTCdsVkGuVJxnm5
    69PufQhe3Clb5PiXGGn+cVAC/qmYKhvLFNk6Qj0CgYAEdMplDJYIbUw2QSZUzsee
    cP8ixyriVqgmhTmYvYtOfjoMNcYv9nN4W0r4j3uXOnNbrzY/ZfaVVRlgrIWbjxnf
    Yja7VGY/9POOuQmcWYn1SLIm7julfQ1dlF5t6AEFUqkotkI0IZ5v7026uOB+yXgJ
    4dYDqsdv62VIvMoa2Fh15QKBgQCq2IKFs4rp8RqmEgEvKb4Wq3mjdUTM6Ho1ncmY
    /bBlQrbNxzEbD/YG8t4cpE0zo+gaEWeeHcuaLNGDatdlszbrqC+sua+seSWaN8lS
    J8w9t44GeMZbUZOr7Dn1ouwkyPoGXYA70df5KM/au/J1vOt5H6OTCxr+xwv9k9y9
    9rx/OQKBgCMAQeXrMr3L2IwwvrEQe4b+qrsBcliLBNJaHspBScGpndCIQl/UaztU
    f8P71vixGXjkmZi/lVq8X1e2pht81xH1LMp559WvIA8oBolLsArqEZR40Ew2d36Z
    vZr6tM2rUyJWuXDU7Pmhc60E/ymC7YVRL7to2p3FvNTDLRdU8EYm
    -----END RSA PRIVATE KEY-----

Then you can use file.managed with contents_pillar: not_my_private_ssh_key to deploy that key.
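
For reference, the state side of that looks something like this (destination path hypothetical):

/root/.ssh/id_rsa:
  file.managed:
    - contents_pillar: not_my_private_ssh_key
    - user: root
    - mode: 600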

Little hacky, but workable. I do agree that this feature (#9579) would be great, and have labeled it such that it will be reviewed for inclusion in our next feature release of salt.

@h0jeZvgoxFepBQ2C

Yeah, I do it exactly like this... hacky.. but works..

@igorwidlinski

We do not have millions of servers, just hundreds, and we do exactly the same workaround for ssh keys and keytabs right now. The problem is we have a different ssh key and a different keytab for every server. That means hundreds of ssh keys and keytabs, and they all have to go into yaml files exactly as you described.

What happens when you want to redo all ssh keys or keytabs on all servers? You need to take out the old entries (manually, or we'd have to write something that manipulates the .yml files) and put in new entries. Repeat hundreds of times.

Having something like this:

source: pillar://sshkey.H_{{ grains['fqdn'] }}

Is soo much easier to manage and cleaner.

If we want to rekey all servers all we do is:

source: pillar://NEWsshkey.H_{{ grains['fqdn'] }}

That would be great.

@QuinnyPig
Contributor Author

I agree with you, but I'd rather shove secrets management into Vault before having to manage that many keys in Pillar.

Alternately, I'd generate those Pillar files programmatically; there's little value in doing them by hand.

@arnisoph
Contributor

@h0jeZvgoxFepBQ2C

Oh, looks interesting.. thanks...

But doesn't this fail with salt-ssh, because nodegroups don't work in salt-ssh?
I mean I can't use the mentioned file directories for a group of servers... (in pillars I can still specify them with L@x,y,z, but at the filesystem level there's nothing similar?)

@igorwidlinski

We don't use nodegroups ourselves, so file_tree would not work for us. We use the reclass adapter, which is awesome.

@QuinnyPig: of course, we could also just write a whole configuration management software ourselves...
Transferring files to servers in a secure manner does not seem out of scope for configuration management software. It should be one of its primary features.

So far we have resorted to converting files to base64 and pasting them into .yml files. I find this solution unacceptable and hacky. Imagine if 'source: salt://' did not exist and you just had to paste all contents into a .yml pillar file.

There is also the matter of supporting it and making it understandable to other team members. I could come up with a bizarre and complex solution that only I understand, but I'd prefer something supported, standard, and available at http://docs.saltstack.com/ (I'd like to add that support is something my company would pay money for).

@Mrten
Contributor

Mrten commented Jul 28, 2015

Can you please take the discussion to the mailing lists?

@basepi
Contributor

basepi commented Jul 28, 2015

It's a closed issue, I'm fine with the discussion happening here. Remember you can unsubscribe in the sidebar on the right so you don't get e-mails for this issue.

@fbretel
Contributor

fbretel commented Feb 5, 2016

Hi all, I'm having a hard time using file_tree and contents_pillar because:

  • the gpg renderer seems ineffective
  • I'm not sure how to use environments with file_tree

I'm in a similar situation as @igorwidlinski.

@terminalmage
Contributor

  • "seems ineffective" is extremely vague, you'll need to clarify
  • file_tree doesn't appear to support environments, please file a feature request for this.

@fbretel
Contributor

fbretel commented Feb 8, 2016

@terminalmage thx for your feedback. The problem is:

# master
ext_pillar:
  - file_tree:
      root_dir: /srv/salt/pillar_files
      follow_dir_links: False
      raw_data: False

After putting a gpg-encrypted file in /srv/salt/pillar_files/host1/files/encrypted.gpg, sudo salt host1 pillar.items gives:

host1:
    ----------
    files:
        ----------
        encrypted.gpg:
            -----BEGIN PGP MESSAGE-----
            Version: GnuPG v2.0.14 (GNU/Linux)

#...            

I tried to add #!yaml|gpg and #!gpg at the beginning of encrypted.gpg, or renderer: jinja | gpg | yaml in the master config file, but the encrypted file is never decrypted.

The gpg renderer works fine with regular pillar data though.

@terminalmage
Contributor

The ext_pillar is working as expected, as is contents_pillar. The better solution here is an added argument to the file.managed state which passes any of contents, contents_grains or contents_pillar through the gpg renderer to decrypt them.

@terminalmage
Contributor

I have opened #31006 for this.

@dawidmalina

@fbretel I have exactly the same issue. I am trying to use the gpg renderer (it works perfectly with regular pillars) but no luck getting it working with ext_pillar: the pillar data is not decrypted. I've tried to debug it a bit, and it looks like ext_pillar only uses the yaml renderer, even though __opts__ has all renderers configured on the master side (including the gpg renderer). @terminalmage do you think this is an issue in salt, or an issue with the configuration?

@lopezio

lopezio commented Jul 20, 2016

I've been reading this and related threads - like many of you, looking for a solution for PKIs.
I just came up with this idea: maybe the best way currently to manage different file repositories for minions (the main usage is, here too, PKI management) would be to create git repositories (accessed via ssh), and then manage the files on the minions with the git state module. The only thing to be managed via pillar (and multiline content) would then be one (or more) ssh keypairs to use as the 'identity' in the state...
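
Sketched as a state, that idea might look something like the following (all repo and path names hypothetical; git.latest's identity argument points at the deploy key distributed via pillar):

pki_repo:
  git.latest:
    - name: git@git.example.com:pki/{{ grains['id'] }}.git
    - target: /etc/pki/minion
    - identity: /root/.ssh/pki_deploy_key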
