Use dev/staging pillars to remove branching logic in provisioning #1053

Open
1 task done
Tracked by #1058
rocodes opened this issue May 30, 2024 · 5 comments

Comments

@rocodes
Contributor

rocodes commented May 30, 2024

  • I have searched for duplicates or related issues

Description

Filing for discussion

Right now our provisioning logic copies everything needed for provisioning into one common Salt directory and uses conditionals in .sls files to choose the right path(s), leading to lots of branching logic ({% if d.environment == "staging" %} ... {% elif d.environment == "prod" %}). Part of why the provisioning is complex is that it's trying to handle different build flavours/environments and can't tell which one to use until orchestration/apply time.

I think we could use saltenv/pillarenv and simplify our lives:

  • define separate pillars (data stores) for staging, dev, and prod data, but keep Salt states shared across all environments, so there is no duplication of states
  • at provisioning time, specify the environment when invoking the salt command (qubesctl state.apply pillarenv=dev)
    • (use the pillarenv_from_saltenv configuration option for sanity; a sketch of the configuration follows this list)
  • in each of dev, staging, and prod, we'd keep the pillar data specific to that build environment (securedrop-workstation.repo, sd-release-key.asc), meaning sd-default-config{.sls,.yml} wouldn't be needed
  • we could also simplify the conditional provisioning of systemd services (WIP: Configure environment-specific services in dom0 via systemd #1038) by provisioning them via preset files that vary by build environment (this could happen anyway: Use preset files for conditionally enabling/disabling systemd services #1052)
  • I think this structure will translate even when we move away from Salt (Ansible environments, etc.)
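
A minimal sketch of what this could look like in dom0's Salt configuration (the drop-in path and pillar directory layout are assumptions for illustration, not a settled design):

/etc/salt/minion.d/sdw-environments.conf (hypothetical drop-in)
# Pick the pillar environment automatically from the saltenv used at apply time
pillarenv_from_saltenv: True

# One data store per build environment; the states themselves stay shared
pillar_roots:
  base:
    - /srv/pillar/base
  dev:
    - /srv/pillar/dev
  staging:
    - /srv/pillar/staging

The environment would then be selected at apply time, e.g. sudo qubesctl state.apply saltenv=dev (assuming qubesctl forwards the argument through to salt-call).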

How will this impact SecureDrop/SecureDrop Workstation users?

  • The more logic we remove from dom0 salt states, the faster provisioning is, but more importantly, the faster every updater run is

How would this affect the SecureDrop Workstation threat model?

  • More reliance on systemd
  • Maybe easier to reason about system state
@cfm
Member

cfm commented Jun 4, 2024

I support this approach wholeheartedly. As you point out, @rocodes, it has precedent in how we parameterize securedrop's Ansible playbooks. It would be wonderful to be able to separate configuration from provisioning logic.

@rocodes
Contributor Author

rocodes commented Jan 20, 2025

I think a good initial step towards less conditional logic throughout our provisioning files (I've thought about this a lot in the context of #945 and the other related proposals I have made around build variants) could go like this:

  • move hardcoded data ({% set fedora_version = "f37" %}) into pillar data. Values like supported_debian_version, supported_dom0_fedora, and current_system_fedora can be pillar data used in any build variant.
  • Our sls files in securedrop_salt use this pillar data and strictly configure a prod setup (no dev/staging conditional logic remains).
  • Prod settings that are not shared across all build variants (things like debian_component: main, apt_repo: apt.f.p, etc.) could either be shipped as pillar data or remain in our sls files. To support dev and staging setups, we ship an additional testing pillar that overrides these values but is disabled by default. (This can also include any non-production orchestration actions. Upstream example below.)
  • Functionally, make dev and make staging would then basically amount to qubesctl top.enable testing-pillar && qubesctl top.enable testing-pillar pillar=True before running the standard sdw-admin --apply (a sketch of such a top file follows this list), although they would of course keep their make aliases so the dev experience is unchanged.
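
As a hedged sketch (the name and location are placeholders, not a settled layout), the testing pillar would be switched on via a standard Qubes-style top file, for example:

./securedrop_pillar/testing.top (hypothetical)
# Only enabled on dev/staging machines via `qubesctl top.enable`; disabled (or absent) in prod
base:
  dom0:
    - securedrop_pillar.testing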

Good outcomes:

  • less branching logic, only one place to change hardcoded things like version numbers
  • environment/build variant is set in one place and all configs are in one place; there is no more if d.environment == "dev" in the code
  • brings us one step closer to deprecating config.json (things like VM sizes can also be pillar data)
  • We don't even have to ship the extra pillar data in the RPM; it could live in the repo only, if we wanted to fix Protect production users from setting up a dev or staging environment in error using production RPM #1058.

upstream examples/precedent:

@rocodes
Contributor Author

rocodes commented Jan 21, 2025

To make this more concrete, here is an example of what this would look like.

  • sd-default-config.yml and sd-default-config.sls are replaced by sd-default-config.j2 (below). All hard-coded values go in this file, and it is the only file that needs changing if we want to adjust a production value (such as a supported OS version), so all the {% set sd_supported_fedora = "41" %} lines are gone from the rest of the salt files (confined to this one file and referenced like standard Jinja variables)
  • Everywhere we currently have {% import sdvars with context %} becomes {% import sd-default-config as config %}
  • Anything non-prod is purged from securedrop_salt (and moved to a new securedrop_pillar directory), so all the if d.environment == "dev" etc. is gone
  • In the Makefile, make dev (for example) becomes sudo qubesctl top.enable securedrop_pillar.dev && sudo qubesctl top.enable securedrop_pillar.dev pillar=True && sdw-admin --apply

Here's an example of what the default-config.j2 file would look like. All configuration details are in one file.

./securedrop_salt/sd-default-config.j2
# Default SDW configuration (Default prod configuration).
# To update a value (eg supported OS version) in production, adjust here.
# To update a value in non-production (dev/staging), adjust in its respective pillar (sls) file.

# Common across dev/staging/prod
{% set whonix_version = salt['pillar.get']('securedrop_pillar:os:whonix_version', '17') %}
{% set debian_distribution = salt['pillar.get']('securedrop_pillar:os:debian_distribution', 'bookworm') %}
{% set fedora_version_dom0 = salt['pillar.get']('securedrop_pillar:os:fedora_version_dom0', '37') %}
{% set fedora_version_sys_vms = salt['pillar.get']('securedrop_pillar:os:fedora_version_sys_vms', '41') %}

# Build-variant dependent; prod values are here, dev/staging values are stored in their respective pillar (sls) files.
{% set build_variant = salt['pillar.get']('securedrop_pillar:env:build_variant', 'prod') %}
{% set apt_repo_url = salt['pillar.get']('securedrop_pillar:env:apt_repo_url', 'https://apt.freedom.press') %}
{% set apt_repo_component = salt['pillar.get']('securedrop_pillar:env:apt_repo_component', 'main') %}

# End-user-configurable (build-variant independent) values
{% set vmsize_sdapp = salt['pillar.get']('securedrop_pillar:cfg:vmsize_sdapp', '10') %}
{% set vmsize_sdlog = salt['pillar.get']('securedrop_pillar:cfg:vmsize_sdlog', '5') %}

And here's an example of the drop-in dev configuration (staging would have its own similar pillar file)

./securedrop_pillar/dev.sls
securedrop_pillar:
# These are the same, don't need to override them, but just showing for completeness.
# To enable testing of eg a new debian or whonix version in a dev setup, adjust these values
#    os:   
#        whonix_version: 17
#        debian_distribution: bookworm
#        fedora_version_dom0: 37
#        fedora_version_sys_vms: 41
# These overwrite the prod config the same way we do now with the `if d.environment == "dev"` stuff
    env:
        build_variant: dev
        apt_repo_url: https://apt-test.freedom.press
        apt_repo_component: main nightlies
        yum_repo_file: securedrop-workstation-dom0-test.repo
        yum_repo_signing_key_salt: apt-test-pubkey.asc
    cfg:
        vmsize_sdapp: 20
        vmsize_sdlog: 5
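
For illustration only, here is a hedged sketch of how an individual state file might then consume the shared config (the state ID and pkgrepo stanza are made up for this example, not taken from the real securedrop_salt states):

./securedrop_salt/sd-apt-repo.sls (hypothetical)
# Pull all shared/default values from the single config file
{% import "securedrop_salt/sd-default-config.j2" as config with context %}

securedrop-apt-repo:
  pkgrepo.managed:
    - name: "deb [arch=amd64] {{ config.apt_repo_url }} {{ config.debian_distribution }} {{ config.apt_repo_component }}"
    - file: /etc/apt/sources.list.d/securedrop.list

With the dev pillar enabled, the same state renders against the apt-test values, with no conditional logic in the state itself.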

@deeplow
Contributor

deeplow commented Jan 23, 2025

I like the approach of centralizing the variables in one place, and pillars sound like the correct mechanism.

One potential problem I can see with this approach is that pillarenvs are mutually exclusive when explicitly set via pillarenv=. Ideally we'd want our specified environment to override only what it specifies. Even more annoying is that when no pillarenv is specified, all environments are squashed together. So if we leave the dev pillar environment in place, it will be applied even if we're testing in production. So even if we could go this route, we'd have to manage the files quite carefully to prevent accidents in QA.

In practice this would prevent us from requiring default Qubes formulas like qvm.anon-whonix, which rely on the base environment.
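
To illustrate the mutual exclusivity (assuming qubesctl passes these arguments through to salt-call, as in the examples elsewhere in this thread):

# Only the dev environment's pillar data is consulted; base-only pillar data
# (such as what qvm.anon-whonix expects) is no longer available:
sudo qubesctl state.apply pillarenv=dev

# No pillarenv given: pillar top files from all environments are merged together,
# so a leftover dev pillar environment would still apply in a prod test:
sudo qubesctl state.apply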

I was trying to look at other approaches that would let us programmatically inject additional pillar data. Ideally this would not rely on files, because without saltenv we can't really count on being able to dynamically change which files get picked up. I looked into --pillar-root, but it only lets us specify one root, which defeats the purpose of overriding only what we want while keeping the rest from the default pillar root. Everything else relating to pillar_roots required manipulating files.

The most promising approach that avoids files is passing in pillar data directly (next section).

Pass in pillar data via salt-call

We can also inject pillar data with qubesctl [...] pillar='{"key": "value"}'. It would look ugly, but we would not need to worry about file management, and more importantly, this overrides only the values we specify while keeping all the other original ones.

One downside is that the command would look very verbose. Luckily there is a way to pass arguments via stdin.

echo 'pillar={"key1": "value1", "key2": "value2"}' | qubesctl pillar.items --args-stdin

Then we could programmatically inject pillar overrides based on the configuration we desire.
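
For instance (purely a sketch: the override payload and target are placeholders, reusing the --args-stdin mechanism shown above), a dev run could be driven like this:

# Build the override JSON for the desired environment and feed it in on stdin
ENV_OVERRIDES='{"securedrop_pillar": {"env": {"build_variant": "dev", "apt_repo_url": "https://apt-test.freedom.press", "apt_repo_component": "main nightlies"}}}'
echo "pillar=${ENV_OVERRIDES}" | sudo qubesctl state.apply --args-stdin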

@rocodes rocodes changed the title Use pillarenv/saltenv to remove branching logic in provisioning Use dev/staging pillars to remove branching logic in provisioning Jan 23, 2025
@rocodes
Contributor Author

rocodes commented Jan 23, 2025

Thanks so much for all the thoughtful feedback, @deeplow :)

I should have renamed this issue when I posted the last comment, since pillarenv/saltenv was the original thing I was wondering about several months ago but not the thing I most recently proposed. I've slightly renamed it now to clarify that the approach is still about pillar data and avoiding branching logic, but doesn't hinge on pillarenv/saltenv.

In this last suggestion I am not proposing that we use our own pillarenv. The same way we kept securedrop_salt in /srv/salt, I'm suggesting we keep using the base pillar and just create additional pillar data, which I think should be fine. The securedrop_pillar data structure that I outlined here would live alongside the qvm pillar, also in the base environment but in its own subdirectory.

Ideally we'd want our specified environment to override only what it specifies. Even more annoying is the fact that when not specified all environments are squashed together. So if we leave the dev pillar environment, it will run even if we're testing in production. So even if we could go this route, we'd have to manage quite well the files to prevent accidents in QA.

What I am imagining here is that make dev would enable the dev pillar, and any other make target would disable it (and first clear the salt cache or tell the user to reboot); see the sketch below. As I mentioned, with this approach we would not ship dev/staging pillar data in the prod RPM, meaning that end users could never end up with dev/staging pillars on their machines, and we'd just be making sure that dev users who switch between environments do so in a standardized way (e.g. with make targets).
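
A minimal Makefile sketch of that flow (the target names, the top.disable invocation, and the cache-clearing step are assumptions for illustration, not an agreed-upon interface):

# Hypothetical Makefile fragment
dev:  # enable the dev pillar, then run the standard apply
	sudo qubesctl top.enable securedrop_pillar.dev
	sudo qubesctl top.enable securedrop_pillar.dev pillar=True
	sdw-admin --apply

clean-dev:  # assumed counterpart: disable the dev pillar and clear the Salt cache
	sudo qubesctl top.disable securedrop_pillar.dev pillar=True
	sudo qubesctl top.disable securedrop_pillar.dev
	sudo rm -rf /var/cache/salt/minion  # or prompt the user to reboot instead

Other make targets would depend on clean-dev (or an equivalent check) before running sdw-admin --apply, so a leftover dev pillar can't leak into a prod-like run.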
