---
title: "PuppetDB 3.1 » Maintaining and Tuning"
layout: default
canonical: "/puppetdb/latest/maintain_and_tune.html"
---
PuppetDB requires a relatively small amount of maintenance and tuning. You should become familiar with the following occasional tasks:
Once you have PuppetDB running, visit this URL, substituting the name
and port of your PuppetDB server:
http://localhost:8080/pdb/dashboard/index.html
Note: You may need to edit PuppetDB's HTTP configuration first, changing the `host` setting to the server's externally accessible hostname. If you've used the PuppetDB module to install, you'll need to set the `listen_address` parameter instead. When you do this, you should also configure your firewall to control access to PuppetDB's cleartext HTTP port.
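As a rough sketch, assuming a default package install where the HTTP settings live in `/etc/puppetlabs/puppetdb/conf.d/jetty.ini`, the `host` setting belongs in the `[jetty]` section:

    # /etc/puppetlabs/puppetdb/conf.d/jetty.ini (path assumes a default package install)
    [jetty]
    # Listen on all interfaces (or use the server's externally accessible hostname)
    host = 0.0.0.0
    port = 8080

If you manage PuppetDB with the puppetlabs-puppetdb module, the roughly equivalent declaration is `class { 'puppetdb': listen_address => '0.0.0.0' }`.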
PuppetDB uses this page to display a web-based dashboard with performance information and metrics, including its memory use, queue depth, command processing metrics, duplication rate, and query stats. It displays min/max/median of each metric over a configurable duration, as well as an animated SVG "sparkline" (a simple line chart that shows general variation). It also displays the current version of PuppetDB and checks for updates, showing a link to the latest package if your deployment is out of date.
You can use the following URL parameters to change the attributes of the dashboard:
- `width` = width of each sparkline, in pixels
- `height` = height of each sparkline, in pixels
- `nHistorical` = how many historical data points to use in each sparkline
- `pollingInterval` = how often to poll PuppetDB for updates, in milliseconds
E.g.: http://localhost:8080/pdb/dashboard/index.html?height=240&pollingInterval=1000
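If you'd rather read the raw numbers from the command line, the same data is available through the metrics API. A minimal sketch, assuming PuppetDB 3.x's `/metrics/v1/mbeans` endpoint and the standard JVM memory MBean:

    $ curl 'http://localhost:8080/metrics/v1/mbeans/java.lang:type=Memory'

This returns a JSON description of current heap and non-heap memory usage.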
When you remove a node from your Puppet deployment, it should be marked as deactivated in PuppetDB. This will ensure that any resources exported by that node will stop appearing in the catalogs served to the remaining agent nodes.
- PuppetDB can automatically mark nodes that haven't checked in recently as expired. Expiration is simply the automatic version of deactivation; the distinction is important only for record keeping. Expired nodes behave the same as deactivated nodes. To enable this, set the `node-ttl` setting (see the example after this list).
- If you prefer to manually deactivate nodes, use the following command on your puppet master:

        $ sudo puppet node deactivate <node> [<node> ...]
- Any deactivated or expired node will be reactivated if PuppetDB receives new catalogs or facts for it.
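As an example of enabling automatic expiration, assuming a default package install where the database settings live in `/etc/puppetlabs/puppetdb/conf.d/database.ini`:

    # /etc/puppetlabs/puppetdb/conf.d/database.ini (path assumes a default package install)
    [database]
    # Expire nodes that haven't checked in for 14 days
    node-ttl = 14d

Restart the PuppetDB service after changing the setting so it takes effect.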
Although deactivated and expired nodes will be excluded from storeconfigs queries, their data is still preserved.
Note: Deactivating a node does not remove (e.g. `ensure => absent`) exported resources from other systems; it only stops managing those resources. If you want to actively destroy resources from deactivated nodes, you will probably need to purge that resource type using the `resources` metatype. Note that some types can't be purged, and several others usually shouldn't be purged (e.g. users).
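Purely as an illustration (the resource type to purge depends on what your nodes export; `host` is a hypothetical choice here), purging in a manifest applied to the collecting nodes might look like:

    # Remove Host entries that exist on the system but are no longer managed,
    # including entries previously exported by nodes that have been deactivated.
    resources { 'host':
      purge => true,
    }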
When the PuppetDB report processor is enabled on your Puppet master, PuppetDB will retain reports for each node for a fixed amount of time. This defaults to 14 days, but you can alter it to suit your needs using the `report-ttl` setting. The larger the value you provide for this setting, the more history you will retain; however, your database size will grow accordingly.
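Like `node-ttl`, the `report-ttl` setting lives in the `[database]` section; a sketch assuming the default config layout:

    # /etc/puppetlabs/puppetdb/conf.d/database.ini (path assumes a default package install)
    [database]
    # Keep reports for 30 days instead of the default 14
    report-ttl = 30d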
PuppetDB will react to certain types of processing failures by storing a complete copy of the offending input, along with retry timestamps and error traces, in the "dead letter office" (DLO). Over time, the DLO can get pretty large. If you're not actively troubleshooting an issue, you might be able to recover a significant amount of space by deleting the contents of `/var/lib/puppetdb/mq/discarded` (or `/var/lib/pe-puppetdb/mq/discarded` on Puppet Enterprise).
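A hedged example of clearing out older entries by hand (the 30-day cutoff is arbitrary, and the path is the open source default mentioned above):

    $ sudo find /var/lib/puppetdb/mq/discarded -type f -mtime +30 -delete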
PuppetDB's log file lives at `/var/log/puppetlabs/puppetdb/puppetdb.log`. Check the log when you need to confirm that PuppetDB is working correctly or to troubleshoot visible malfunctions. If you have changed the logging settings, consult your logback.xml file to find the log's location.
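For example, to watch the log in real time or scan it for recent errors (plain shell tools, nothing PuppetDB-specific):

    $ sudo tail -f /var/log/puppetlabs/puppetdb/puppetdb.log
    $ sudo grep -i error /var/log/puppetlabs/puppetdb/puppetdb.log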
The PuppetDB packages install a logrotate job in `/etc/logrotate.d/puppetdb`, which will keep the log from becoming too large.
Although we provide rule-of-thumb memory recommendations, PuppetDB's RAM usage depends on several factors: memory needs will vary with the number of nodes, the frequency of Puppet runs, and the amount of managed resources. For example, 1,000 nodes that check in once a day will require much less memory than 1,000 nodes that check in every 30 minutes.
The best way to manage PuppetDB's max heap size is therefore to estimate a ballpark figure, then monitor the performance dashboard and increase the heap size if the "JVM Heap" metric keeps approaching the maximum. You may need to revisit your memory needs whenever your site grows substantially.
The good news is that memory starvation is actually not very destructive. It will cause `OutOfMemoryError` exceptions to appear in the log, but you can restart PuppetDB with a larger memory allocation and it'll pick up where it left off: any requests successfully queued up in PuppetDB will get processed.
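The max heap is set through the JVM arguments in PuppetDB's init configuration. A sketch assuming a Red Hat-style package install (on Debian and Ubuntu the file is typically `/etc/default/puppetdb`):

    # /etc/sysconfig/puppetdb
    # Raise the max heap to 2 GB
    JAVA_ARGS="-Xmx2g"

Restart the PuppetDB service after changing this value.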
When viewing the performance dashboard, note the depth of the message queue (labeled 'Command Queue depth'). If it is rising and you have CPU cores to spare, increasing the number of threads may help process the backlog faster.
If you are saturating your CPU, we recommend lowering the number of threads. This prevents other PuppetDB subsystems (such as the web server, or the MQ itself) from being starved of resources and can actually increase throughput.
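In PuppetDB 3.x the worker count is controlled by the `threads` setting in the `[command-processing]` section of the configuration; a sketch assuming the default config layout:

    # /etc/puppetlabs/puppetdb/conf.d/config.ini (path assumes a default package install)
    [command-processing]
    # Use four command-processing threads
    threads = 4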
If you've recently changed the certificates in use by the PuppetDB server, you'll also need to update the SSL configuration for PuppetDB itself.
If you've installed PuppetDB from Puppet Labs packages, you can simply re-run the `puppetdb ssl-setup` command. Otherwise, you'll need to again perform all of the SSL configuration steps outlined in the installation instructions.
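As a sketch (the restart command depends on your init system; `systemctl` is an assumption here):

    $ sudo puppetdb ssl-setup
    $ sudo systemctl restart puppetdb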