does not work with PE modules #193
Comments
We use this module with PE 2016.4.2 (and previously to upgrade from PE 3.8 to 2016.2.1) without any issues. We do not override any service names. We are using RHEL and Windows - perhaps it's platform related?
I am not sure if you are running the PE console, where classification sets up mcollective for all nodes. This can be found in the PE MCollective group in the PE Console. The classes in that group come from the puppet_enterprise module and conflict with puppet_agent, as mentioned above.
I'm really confused how this isn't a problem in our testing. Can you add details about how you're running it? What PE versions and platform? I'll take a look. /cc @highb have any ideas?
Since non-Puppet folks don't have access to the puppet_enterprise repo, this is harder to debug. Basically you would just need to include the same manifest that gets applied to all PE infrastructure via the PE classifier. In order to run a test against this you would have to make the puppet_enterprise module a public repo and then include it. This boils down to the following manifest:
Code from the PE module:

```puppet
file { "${puppet_enterprise::params::mco_etc}/server.cfg":
  content => template('puppet_enterprise/mcollective/server.cfg.erb'),
  mode    => '0660',
  notify  => Service['mcollective'],
}
```

Code from the puppet_agent module which conflicts:

```puppet
# below two vars are from params.pp
$mco_dir    = '/etc/puppetlabs/mcollective'
$mco_server = "${mco_dir}/server.cfg"

$mco_server = $::puppet_agent::params::mco_server
file { $mco_server:
  ensure => file,
  source => $::mco_server_config,
}
```
That class (mco_server_config) is only applied when upgrading from Puppet 3. So I will assume you're attempting to upgrade a Puppet 3 agent against a PE 2016.5 install. My guess would be that in some environments,
The simple solution is probably to add an
PE modules also manage server.cfg, so only manage it if it is not already managed by another module. Fixes puppetlabs#193.
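In other words, the proposed fix guards the resource with Puppet's built-in defined() function. A minimal sketch, assuming the path and variable names from the snippets above rather than the module's exact code:

```puppet
# Minimal sketch: only declare server.cfg if no other module (for example the
# puppet_enterprise classification) has already declared the same File resource.
$mco_server = '/etc/puppetlabs/mcollective/server.cfg'

unless defined(File[$mco_server]) {
  file { $mco_server:
    ensure => file,
    source => $::mco_server_config,
  }
}
```

Note that defined() is parse-order dependent, so this only avoids the collision when the PE-managed resource happens to be evaluated first.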
You should also declare a dependency on puppet_enterprise in the metadata.json file. That can be tricky, though, because you can't pull down that module from the Forge.
It doesn't and shouldn't require the puppet_enterprise module to function.
Except you're using a function that is only available in the puppet_enterprise module. Note: it is only available in 2015+ versions of Puppet Enterprise, at that.
We only use that function if the
Also, where are you seeing a dependency on the puppet_enterprise module? https://github.com/puppetlabs/puppetlabs-puppet_agent/blob/master/metadata.json#L124 adds dependencies for
With regards to the dependency in metadata.json, I was just mentioning that it does not exist and I think it should. Furthermore, future versions of Puppet will fail to load Puppet functions from other modules if those modules are not expressed as dependencies in metadata.json. However, this is only the case for v4 functions. If the folks that develop the puppet_enterprise module decide to use v4 functions only, then the puppet_agent module would no longer work unless you add the dependency in metadata.json. So it might be a good idea to remove that coupling by bringing that function into this module.
Removing that coupling seems like a great idea.
There is a direct duplicate resource collision happening because the puppet_enterprise module is already managing `Service['mcollective']` and `File['/etc/puppetlabs/mcollective/server.cfg']`, which this module tries to manage as well. Anybody using PE and the modules it comes with will most likely see this error. While you can override the service_names to only include `[puppet]`, there is no way to stop managing the mco config files.

I think to resolve this issue, one of the following should be done:

- Use the `defined` function to determine if the resource is already managed, e.g. `unless defined(File['/etc/puppetlabs/mcollective/server.cfg'])`.
- The module is called puppet_agent and should not have anything to do with mcollective, so remove the mcollective portion altogether.
- Add a feature flag to allow the user to decide whether mcollective (services, files) is managed (a minimal sketch follows this list).
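For the third option, a feature flag could look roughly like this; the class body and the manage_mcollective parameter are hypothetical illustrations, not the module's actual API:

```puppet
# Hypothetical sketch of a feature flag that lets users opt out of mcollective
# management entirely (names are illustrative, not puppet_agent's real parameters).
class puppet_agent (
  $manage_mcollective = true,
) {
  if $manage_mcollective {
    file { '/etc/puppetlabs/mcollective/server.cfg':
      ensure => file,
      source => $::mco_server_config,
    }

    service { 'mcollective':
      ensure => running,
      enable => true,
    }
  }
}
```

Setting manage_mcollective to false on PE-managed nodes would then leave both the file and the service entirely to the puppet_enterprise module.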