The Icinga Vagrant boxes allow you to run Icinga 2, Icinga Web 2 and integrations (Graphite, InfluxDB, Grafana, Elastic Stack, Graylog) in various scenarios.
A simple `vagrant up` fully installs these VMs, and you are ready to explore the Icinga ecosystem and possible integrations.
You can use these boxes for your own local demos, or to learn how to use Icinga in your environment. The Puppet provisioner uses official upstream modules including puppet-icinga2 and puppet-icingaweb2.
## Overview
Below are some sample screenshots. Keep in mind that software is under steady development, so screenshots and features may change.
Box specific code is licensed under the terms of the GNU General Public License Version 2, you will find a copy of this license in the LICENSE file included in the source package.
The Puppet modules included in the `.puppet/modules` directory provide their own license details.
These boxes are built for demos and development tests only. Team members and partners may use these for their Icinga Camp presentations or any other event too.
Join the Icinga community channels for questions.
> **Note**
>
> Boxes can run snapshot builds and unstable code to test the latest and greatest.
> You can also use them to test Icinga packages prior to the next release.
> If you find a problem or want to submit a patch, please open an issue on GitHub and/or create a PR.
- Vagrant >= 1.8.x
- One of these virtualization providers:
Each Vagrant box setup requires at least 2 cores and 2 GB RAM. The required resources are configured automatically during the `vagrant up` run.
> **Note**
>
> OpenStack VMs are provisioned remotely at your cloud provider. Please continue here for the full documentation.
Optional:
- `vagrant-hostmanager` >= 1.8.1
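If the optional plugin is installed, its behavior can be tuned in the Vagrantfile. A minimal sketch using the documented vagrant-hostmanager options:

```ruby
# Sketch: enable vagrant-hostmanager so VM hostnames are written to
# /etc/hosts on both the host and the guests (options per the plugin docs).
Vagrant.configure("2") do |config|
  config.hostmanager.enabled      = true
  config.hostmanager.manage_host  = true
  config.hostmanager.manage_guest = true
end
```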
Example on Fedora (needs the RPMFusion repository for VirtualBox):

```shell
$ sudo dnf install vagrant
$ sudo dnf install virtualbox
# The VirtualBox provider is built into Vagrant; no separate plugin is required.
```

Fedora uses libvirt by default. More details on VirtualBox can be found here.
Example on Ubuntu:

```shell
$ sudo apt-get install vagrant
$ sudo apt-get install virtualbox
```
libvirt uses NFS for shared folders in the VMs; `nfs_udp: false` is already set. `nfs3` needs to be enabled in your local firewall to allow connections:

```shell
# firewall-cmd --permanent --add-service=nfs3
# firewall-cmd --reload
```
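The `nfs_udp: false` setting mentioned above maps to a documented Vagrant synced folder option. A minimal sketch of such a folder definition (paths are illustrative):

```ruby
# Sketch: an NFS synced folder with UDP disabled (nfs_udp is a documented
# Vagrant option for NFS-type synced folders).
Vagrant.configure("2") do |config|
  config.vm.synced_folder ".", "/vagrant", type: "nfs", nfs_udp: false
end
```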
macOS runs best with the Parallels provider; VirtualBox works as well.
Windows requires VirtualBox as the provider. You'll also need the Git package, which includes SSH.

Install the Git package and set `autocrlf` to `false`.

You can also set the option on the command line afterwards:

```shell
C:\Users\michi> git.exe config --global core.autocrlf false
```
Set the Windows command line as default:
> **Note**
>
> If `vagrant up` hangs with Vagrant 2.0.0 on Windows 7, you might need to upgrade your PowerShell version. See this note for details.
Choose one of the providers below. VirtualBox can be used nearly everywhere. If you have a Parallels Pro license on macOS, or prefer to use libvirt, that's possible too.

If VirtualBox is installed, it is enabled by default.

The VirtualBox provider uses the bento base box.
You'll need to install the vagrant-parallels plugin first:

```shell
$ vagrant plugin install vagrant-parallels
```
The Parallels provider uses the bento base box.
Both VMware Workstation and the Vagrant plugin require their own license.
The Vagrant plugin installation is described here.
The VMware provider uses the bento base box.
You should have `qemu` and `libvirt` installed if you plan to run Vagrant on your local system. Then install the `vagrant-libvirt` plugin:

```shell
$ vagrant plugin install vagrant-libvirt
```
The libvirt provider uses the official CentOS base boxes.
```shell
$ git clone https://github.com/Icinga/icinga-vagrant && cd icinga-vagrant
```
Change into the directory of the scenario and start the box(es):

```shell
$ cd standalone
$ vagrant up
```
Proceed here for an overview of all available boxes.
Clone this repository:
```shell
C:\Users\michi\Documents> git.exe clone https://github.com/Icinga/icinga-vagrant
```
Change into the directory of the scenario and start the box(es).
Proceed here for an overview of all available boxes.
Each setup comes with the following basic tools installed:
- Icinga 2
- Icinga Web 2
- Reporting with the IDO Reports data provider
- Director, Business Process, Cube, Map modules
- Community themes
Additionally, specific integrations, tools and modules are prepared for each scenario.
Run Vagrant:

```shell
$ cd standalone && vagrant up
```
| Application | URL | Credentials |
|-------------|-----|-------------|
| Icinga Web 2 | http://192.168.33.5/icingaweb2 | icingaadmin/icinga |
| Icinga 2 API | https://192.168.33.5:5665/v1 | root/icinga |
| Graphite Web | http://192.168.33.5:8003 | - |
| Grafana | http://192.168.33.5:8004 | admin/admin |
| Dashing | http://192.168.33.5:8005 | - |
Note: If Dashing is not running, restart it manually:

```shell
$ vagrant ssh -c "sudo systemctl start dashing-icinga2"
```
- 2 VMs as an Icinga 2 Master/Satellite scenario
Run Vagrant:

```shell
$ cd distributed && vagrant up
```
| Application | URL | Credentials |
|-------------|-----|-------------|
| Icinga Web 2 | http://192.168.33.101/icingaweb2 | icingaadmin/icinga |
| Icinga Web 2 | http://192.168.33.102/icingaweb2 | icingaadmin/icinga |
| Icinga 2 API | https://192.168.33.101:5665/v1 | root/icinga |
| Icinga 2 API | https://192.168.33.102:5665/v1 | root/icinga |
Run Vagrant:

```shell
$ cd influxdb && vagrant up
```
| Application | URL | Credentials |
|-------------|-----|-------------|
| Icinga Web 2 | http://192.168.33.8/icingaweb2 | icingaadmin/icinga |
| Icinga 2 API | https://192.168.33.8:5665/v1 | root/icinga |
| Grafana | http://192.168.33.8:8004 | admin/admin |
- Elastic Stack
  - Elasticsearch
  - icingabeat, filebeat
  - Kibana
- Elasticsearch module for Icinga Web 2
Run Vagrant:

```shell
$ cd elastic && vagrant up
```
| Application | URL | Credentials |
|-------------|-----|-------------|
| Icinga Web 2 | http://192.168.33.7/icingaweb2 | icingaadmin/icinga |
| Icinga 2 API | https://192.168.33.7:5665/v1 | root/icinga |
| Kibana | http://192.168.33.7:5602 | icinga/icinga |
| Elasticsearch/Nginx | http://192.168.33.7:9202 | icinga/icinga |
| Kibana (TLS) | https://192.168.33.7:5603 | icinga/icinga |
| Elasticsearch/Nginx (TLS) | https://192.168.33.7:9203 | icinga/icinga |
Run Vagrant:

```shell
$ cd graylog && vagrant up
```
| Application | URL | Credentials |
|-------------|-----|-------------|
| Icinga Web 2 | http://192.168.33.6/icingaweb2 | icingaadmin/icinga |
| Icinga 2 API | https://192.168.33.6:5665/v1 | root/icinga |
| Graylog | http://192.168.33.6:9000 | admin/admin |
The default configuration for specific scenarios is stored in the `Vagrantfile.nodes` file.

If you want to modify its content, e.g. to add synced folders or change the host-only IP address, copy its content into the `Vagrantfile.local` file and modify it there. `Vagrantfile.local` is not tracked by Git.
If you change the base box, keep in mind that provisioning has only been tested and developed with CentOS 7; no other distributions are currently supported.
Example for additional synced folders:

```shell
$ vim standalone/Vagrantfile.local
```

```ruby
nodes = {
  'icinga2' => {
    :box_virtualbox => 'bento/centos-7.4',
    :box_parallels  => 'bento/centos-7.4',
    :box_hyperv     => 'bento/centos-7.4',
    :box_libvirt    => 'centos/7',
    :net            => 'demo.local',
    :hostonly       => '192.168.33.5',
    :memory         => '2048',
    :cpus           => '2',
    :mac            => '020027000500',
    :forwarded      => {
      '443'  => '8443',
      '80'   => '8082',
      '22'   => '2082',
      '8003' => '8082'
    },
    :synced_folders => {
      '../../icingaweb2-module-graphite' => '/usr/share/icingaweb2-modules/graphite'
    }
  }
}
```
If the `vagrant-hostmanager` plugin is installed, an entry in `/etc/hosts` will be created to provide access by name.
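For the standalone box above, such an entry could look like this (the exact hostname is an assumption derived from the node name and the `:net` domain):

```
192.168.33.5  icinga2.demo.local
```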
This requires you to edit the Hiera configuration tracked by Git. The setting below controls whether the Icinga release or snapshot package repositories are enabled by default. That way you can easily test the development snapshots, or keep stable packages for demos.

```shell
$ vim .puppet/hieradata/common.yaml
```

```yaml
icinga::repo::type: "snapshot" # you can use 'release' too
#icinga::repo::type: "release"
```
Start all VMs:

```shell
$ vagrant up
```

Depending on the provider you have chosen above, you might want to set it explicitly:

```shell
$ vagrant up --provider=virtualbox
```
SSH into the box as the local `vagrant` user (tip: use `sudo -i` to become `root`):

```shell
$ vagrant ssh
```
> **Note**
>
> Multi-VM boxes require the hostname for `vagrant ssh`, e.g. `vagrant ssh icinga2b`. This works in a similar fashion for other subcommands.
Stop all VMs:

```shell
$ vagrant halt
```

Update packages/reset configuration for all VMs:

```shell
$ vagrant provision
```

Destroy the VM (add `-f` to skip the confirmation prompt):

```shell
$ vagrant destroy
```
Documentation for the software used inside these boxes:
| Project | URL |
|---------|-----|
| Icinga 2 | https://www.icinga.com/docs/icinga2/latest/doc/01-about/ |
| Icinga Web 2 | https://www.icinga.com/docs/icingaweb2/latest/doc/01-About/ |
| Director | https://www.icinga.com/docs/director/latest/doc/01-Introduction/ |
| Graphite | https://graphite.readthedocs.io |
| InfluxDB | https://docs.influxdata.com/influxdb/ |
| Grafana | https://docs.grafana.org |
| Elastic | https://www.elastic.co/guide/ |
| Graylog | http://docs.graylog.org |
After a local configuration change (e.g. a `git pull` for this repository), re-run the provisioner:

```shell
$ pwd
$ git pull
$ git log
$ vagrant provision
```
If you are working behind a proxy, you can use the proxyconf plugin.

Install the plugin:

```shell
$ vagrant plugin install vagrant-proxyconf
```

Export the proxy variables into your environment:

```shell
$ export VAGRANT_HTTP_PROXY=http://proxy:8080
$ export VAGRANT_HTTPS_PROXY=http://proxy:8080
```

Vagrant exports the proxy settings into the VM, and provisioning will then work.
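Instead of exporting environment variables, the proxy can also be configured statically in the Vagrantfile. A sketch using the plugin's documented options (the proxy host is a placeholder):

```ruby
# Sketch: static vagrant-proxyconf settings; adjust host/port to your proxy.
Vagrant.configure("2") do |config|
  if Vagrant.has_plugin?("vagrant-proxyconf")
    config.proxy.http     = "http://proxy:8080"
    config.proxy.https    = "http://proxy:8080"
    config.proxy.no_proxy = "localhost,127.0.0.1"
  end
end
```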
Thanks to all contributors! :)
- lippserd for the initial Vagrant box idea from Icinga Web 2.
- gunnarbeutner for the base setup with Icinga 2.
- NETWAYS for sponsoring the initial Icinga 2 Cluster setup.
- bernd for the Graylog box.
- nbuchwitz for fixes and workarounds on broken packages.
- kornm for the Vagrant HTTP proxy FAQ.
- ruzickap for the libvirt provider.
- mightydok for fixes on the VirtualBox provider.
- joonas for Puppet provisioner fixes.
- tomdc for his contributions to Icinga 1.x/Jasper.
- martbhell for the OpenStack provider.
Each box uses a generic Vagrantfile to set the required resources for the initial VM startup. The `Vagrantfile` includes the `Vagrantfile.nodes` file, which defines VM-specific settings. In addition, `tools/vagrant_helper.rb` loads all pre-defined functions for provider and provisioner instantiation. Furthermore, it configures `vagrant-hostmanager` if the plugin is installed.

The generic `shell_provisioner.sh` script ensures that all VM requirements are fulfilled and also takes care of installing Puppet, which is used as the provisioner in the next step.
For OpenStack, there's a special SSH IP address override in place which provisions Puppet/Hiera with an auto-generated config file. This is needed for all integrations to work properly.
The main entry point is the Puppet provisioner, which calls the `default.pp` environment manifest. Anything compiled into this catalog will be installed into the VM.
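In a plain Vagrant setup, such wiring would look roughly like the sketch below; the paths and environment name are assumptions for illustration, not the exact values used by this repository's helper code:

```ruby
# Sketch: a Puppet provisioner applying default.pp via a Puppet environment.
Vagrant.configure("2") do |config|
  config.vm.provision :puppet do |puppet|
    puppet.environment_path  = ".puppet/environments" # assumed path
    puppet.environment       = "vagrant"              # assumed name
    puppet.hiera_config_path = ".puppet/hiera.yaml"   # assumed path
  end
end
```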
| Provider | Base Box |
|----------|----------|
| VirtualBox | Bento |
| Parallels | Bento |
| libvirt | libvirt |
| OpenStack | NWS CentOS 7 |
Pull updates:

```shell
$ vagrant box update
```
Current version via HTTP API:

```shell
$ curl -sI 192.168.33.8:8086/ping
```
Show tags on a database:

```shell
# influx
> use icinga2
> show tag keys on icinga2
```
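The same information is available over the InfluxDB 1.x HTTP API, which avoids entering the CLI (same host and port as the ping example above):

```shell
$ curl -G 'http://192.168.33.8:8086/query' \
    --data-urlencode 'db=icinga2' \
    --data-urlencode 'q=SHOW TAG KEYS'
```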
The following Puppet modules are used for provisioning the boxes, installing packages and configuring everything for your needs. In addition to these official modules, specific Puppet profiles have been created to avoid code duplication.
The modules are pulled into this repository as git subtrees. The main reason for not using submodules or the official way of installing Puppet modules is that the upstream source may be gone or unreachable; that must not happen with this Vagrant environment.
General:
Specific projects:
Notes for developers only.
Add a subtree:

```shell
$ git subtree add --prefix .puppet/modules/vim https://github.com/saz/puppet-vim master --squash
```

Update a subtree:

```shell
$ git subtree pull --prefix .puppet/modules/postgresql https://github.com/puppetlabs/puppetlabs-postgresql.git master --squash
```