
Salt 3004 broke extend [BUG] #61121

Closed
MartinEmrich opened this issue Oct 25, 2021 · 10 comments · Fixed by #61303
Labels: Bug (broken, incorrect, or confusing behavior), Regression (a bug that breaks functionality known to work in previous releases), severity-medium (3rd level: incorrect or bad functionality, confusing, and lacks a workaround)

@MartinEmrich
Description
A pair of states (one extending the other) works with minions up to 3003, but not with 3004. Instead I get:

salt-call state.apply rsyslog test=True
local:
    Data failed to compile:
----------
    Cannot extend ID 'rsyslog' in 'base:services.rsyslog'. It is not part of the high state.
This is likely due to a missing include statement or an incorrectly typed ID.
Ensure that a state with an ID of 'rsyslog' is available
in environment 'base' and to SLS 'services.rsyslog'

Setup

Salt master (happens with both 3000.3 and 3004; both tested).
Two states:
packages.rsyslog:

rsyslog:
  pkg:
    - latest

services.rsyslog:

include:
  - packages.rsyslog

extend:
  rsyslog:
    service:
     - running
     - enable: True
     - require:
       - pkg: rsyslog
     - watch:
       - pkg: rsyslog

(This could be simplified, but it has worked for years; a sketch of one extend-free form follows.)
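
For reference, a minimal sketch of that simplified, extend-free form (the rsyslog-service ID is an assumption, not taken from the original report):

include:
  - packages.rsyslog

# Hypothetical ID; `name` points at the actual service. Plain require/watch
# across an included SLS does not go through the extend machinery, so it is
# not hit by the strict check introduced in 3004.
rsyslog-service:
  service.running:
    - name: rsyslog
    - enable: True
    - require:
      - pkg: rsyslog
    - watch:
      - pkg: rsyslog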

Steps to Reproduce the behavior
Run salt-call state.apply services.rsyslog (or a highstate including it) on salt-minion 3004.
It works fine with older minions.

Expected behavior
Both pkg.installed and service.running, including the require and watch requisites, should work as before.

Versions Report

# salt-minion --versions-report
Salt Version:
          Salt: 3004

Dependency Versions:
          cffi: Not Installed
      cherrypy: Not Installed
      dateutil: 2.4.2
     docker-py: 2.6.1
         gitdb: Not Installed
     gitpython: Not Installed
        Jinja2: 2.11.1
       libgit2: Not Installed
      M2Crypto: 0.35.2
          Mako: Not Installed
       msgpack: 0.6.2
  msgpack-pure: Not Installed
  mysql-python: Not Installed
     pycparser: Not Installed
      pycrypto: 2.6.1
  pycryptodome: Not Installed
        pygit2: Not Installed
        Python: 3.6.8 (default, Nov 16 2020, 16:55:22)
  python-gnupg: Not Installed
        PyYAML: 3.13
         PyZMQ: 17.0.0
         smmap: Not Installed
       timelib: Not Installed
       Tornado: 4.5.3
           ZMQ: 4.1.4

System Versions:
          dist: centos 7 Core
        locale: UTF-8
       machine: x86_64
       release: 3.10.0-1160.21.1.el7.x86_64
        system: Linux
       version: CentOS Linux 7 Core


MartinEmrich added the Bug (broken, incorrect, or confusing behavior) and needs-triage labels on Oct 25, 2021
@votdev (Contributor) commented Oct 26, 2021

I can confirm this behaviour in a different setup, in combination with Jinja in the SLS file. This worked perfectly in Salt 3003.

omv6box:
    Data failed to compile:
----------
    Cannot extend ID 'start_proftpd_service' in 'base:omv.deploy.proftpd.modules.20mod_wrap'. It is not part of the high state.
This is likely due to a missing include statement or an incorrectly typed ID.
Ensure that a state with an ID of 'start_proftpd_service' is available
in environment 'base' and to SLS 'omv.deploy.proftpd.modules.20mod_wrap'

default.sls:

{% set config = salt['omv_conf.get']('conf.service.ftp') %}

include:
  - .modules

{% if config.enable | to_bool %}

...

start_proftpd_service:
  service.running:
    - name: proftpd
    - enable: True
    - require:
      - cmd: test_proftpd_service_config
...

{% else %}

...

start_proftpd_service:
  test.nop

...

{% endif %}

modules/20mod_wrap.sls:

...

configure_proftpd_mod_wrap:
  file.append:
    - name: "/etc/proftpd/proftpd.conf"
    - text: |
        <IfModule mod_wrap.c>
          TCPAccessFiles {{ tcp_access_files }}
          TCPAccessSyslogLevels {{ tcp_access_syslog_levels }}
          TCPServiceName {{ tcp_service_name }}
        </IfModule>
    - watch_in:
      - service: start_proftpd_service

...

See https://github.com/openmediavault/openmediavault/tree/master/deb/openmediavault/srv/salt/omv/deploy/proftpd for the full code.

OrangeDog added the Regression (a bug that breaks functionality known to work in previous releases) and Silicon (v3004.0 release code name) labels on Oct 26, 2021
@OrangeDog (Contributor)

I have similar includes and extends, but they work fine (Ubuntu 20.04 package).

anilsil removed the Silicon (v3004.0 release code name) label on Oct 26, 2021
@nkukard (Contributor) commented Nov 1, 2021

I'm seeing the same issue on Arch Linux after upgrading to 3004.

@pauldalewilliams

I ran into a similar error but with a different setup (not using extend). I suspect it has to do with this change: #59943. I can confirm that my now-broken watch_in requisite worked prior to 3004.

Mine cropped up when I had this in an SLS (abbreviated but enough to demonstrate the issue):

nginx-proxy:
  docker_network.present

Ensure nginx-proxy container is running:
  docker_container.running:
    - name: nginx-proxy
    - image: jwilder/nginx-proxy
    - restart_policy: always
    ...

Turn off server tokens on nginx-proxy:
  file.managed:
    - name: /data/nginx-proxy/conf/custom_settings.conf
    - user: root
    - group: root
    - mode: 644
    - contents: |2
        server_tokens off;
        client_max_body_size 1g;
    - require:
      - docker_volume: nginx-conf
    - watch_in:
      - docker_container: nginx-proxy

If I modify the ID for the nginx-proxy docker_container state, it works once again:

nginx-proxy:
  docker_network.present: []
  docker_container.running:
    - image: jwilder/nginx-proxy
    - restart_policy: always
    ...

Turn off server tokens on nginx-proxy:
  file.managed:
    - name: /data/nginx-proxy/conf/custom_settings.conf
    - user: root
    - group: root
    - mode: 644
    - contents: |2
        server_tokens off;
        client_max_body_size 1g;
    - require:
      - docker_volume: nginx-conf
    - watch_in:
      - docker_container: nginx-proxy

OrangeDog added the severity-medium (3rd level: incorrect or bad functionality, confusing, and lacks a workaround) label and removed the needs-triage label on Nov 5, 2021
OrangeDog added this to the Approved milestone on Nov 5, 2021
@daks (Contributor) commented Nov 29, 2021

We are also affected by this bug, in at least one place in our code, since Salt 3004.

We have a file.managed state with a watch_in parameter to the postfix-init-service-running-postfix-restart state from saltstack-formulas/postfix-formula.

Our solution was to reimplement the service.running state, which is not a long-term viable solution.

edit: fix comment, no extend
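
A minimal sketch of the pattern described above (the state ID, file path, and source are assumptions; only the watch_in target ID comes from the comment, and the service requisite type is guessed from that ID):

include:
  - postfix

# Hypothetical state; the watch_in points at the formula's restart state,
# which lives in the included SLS rather than this one.
postfix-custom-main-cf:
  file.managed:
    - name: /etc/postfix/main.cf
    - source: salt://files/postfix/main.cf
    - watch_in:
      - service: postfix-init-service-running-postfix-restart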

@whytewolf (Collaborator)

@pauldalewilliams, it looks like you are correct in your guess about #59943. Looking at the code: instead of making the extend requirements strict for just the _in versions, that change made them strict for all extends, so the state type needs to already exist for every extend. That goes against some of the original meaning of extend.
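
For context, the _in requisites are compiled into extend declarations before requisites are resolved, which is why a check aimed at extend also rejects the plain watch_in cases reported above. A rough sketch of that equivalence (IDs and paths are illustrative, not from this issue):

# What the SLS author writes:
configure_foo:
  file.managed:
    - name: /etc/foo.conf
    - watch_in:
      - service: foo_service

# Roughly what the state compiler turns the watch_in into, and where the new
# strict "ID must already be part of the high state" check now fires:
extend:
  foo_service:
    service:
      - watch:
        - file: configure_foo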

Ch3LL pushed a commit that referenced this issue Dec 7, 2021
adding a modifier to the extend resolver that lets it be strict
when needed.. as well as added that up the chain in find_names

fixes: #61121
garethgreenaway pushed a commit to bryceml/salt that referenced this issue Jan 21, 2022
adding a modifier to the extend resolver that lets it be strict
when needed.. as well as added that up the chain in find_names

fixes: saltstack#61121
@camproto commented Jul 27, 2022

How can I work around this bug?

We are affected by this issue as well.

I'm new to the Salt world but have quite a comprehensive set of states in my project.

I'm on salt-call v3004.2 on CentOS, and this fix is scheduled for 3005.0.

How can I downgrade my salt-call to v3003 to work around this bug? Any hint appreciated.

local:
Data failed to compile:

Cannot extend ID 'mock-mv-core' in 'base:konz.mock-mv-core'. It is not part of the high state.
This is likely due to a missing include statement or an incorrectly typed ID.
Ensure that a state with an ID of 'mock-mv-core' is available
in environment 'base' and to SLS 'konz.mock-mv-core'

@camproto

Solved: I pinned the version in the Salt provisioner in my Vagrantfile; 3003.5 worked in the end.

@votdev (Contributor) commented Jul 27, 2022

This issue was reported nearly one year ago. Is there any progress in 3005? When can we expect that to be released? IMO, I would love to see a backport of the 3005 fix to 3004, because this is a major regression.

@OrangeDog (Contributor)

@votdev, as you can see above, it has already been fixed and is in the 3005 RC that is currently available. I believe the final release will be in a couple of weeks.

votdev added a commit to openmediavault/openmediavault that referenced this issue Apr 22, 2023
This is necessary because Salt 3004 is broken and 3005 has some other issues that prevent an upgrade to the latest version.

References:
- https://www.debian.org/doc/debian-policy/ch-relationships.html#syntax-of-relationship-fields
- https://askubuntu.com/questions/766846/version-range-in-debian-control/918432#918432
- saltstack/salt#61121 (comment)

Signed-off-by: Volker Theile <votdev@gmx.de>