High availability
A highly available Tempesta FW cluster, either in a cloud (see the Clouds Wiki page for a description of using Tempesta FW in different cloud environments) or on bare metal machines, can be configured using keepalived. This document describes a cluster configuration of two machines. Each machine must run Tempesta FW and keepalived. Each keepalived instance sends periodic heartbeat messages to the other instance and starts the failover process if that instance doesn't respond. It's recommended to use separate network interfaces for ingress HTTP traffic and internal keepalived (VRRP) traffic: if the cluster receives enormous ingress traffic and processes both ingress traffic and VRRP on the same interfaces, then VRRP messages can be dropped by an interface and keepalived won't be able to handle actual server failures.
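To confirm that heartbeats actually flow over the dedicated private interface (assumed here to be eth0, matching the configurations below), you can capture them directly; VRRP is IP protocol 112:

```shell
# Watch VRRP advertisements on the private interface. With the unicast
# setup below, packets travel between 192.168.100.5 and 192.168.100.6
# roughly once per second (advert_int 1).
tcpdump -n -i eth0 ip proto 112
```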
Keepalived is usually available through standard Linux distribution packages. Use
# yum install keepalived
to install it in CentOS or
# apt-get install keepalived
to install it in Debian.
If a server fails and comes back up, it must restart all required services automatically. Thus, add keepalived to the boot process:
# systemctl enable keepalived
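After enabling the service, it's worth verifying that keepalived is running now and will start again at boot (systemd commands shown; adjust for non-systemd distributions):

```shell
# Start keepalived and confirm both its current and boot-time state.
systemctl start keepalived
systemctl is-active keepalived    # expect: active
systemctl is-enabled keepalived   # expect: enabled
```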
Configuration files for keepalived are shown below. Note that an active-active configuration with two virtual (floating) IP addresses, VIPs, is used, i.e. both nodes process traffic and each node can acquire the VIP of the other node if that node fails. Two VRRP instances are used for active-active mode: one instance is configured as master on the first node and as backup on the second node, and the second instance is configured as backup on the first node and as master on the second one. You can use only one instance for an active-passive configuration.
The first node configuration:
vrrp_script chk_tfw {
    script "wget -q -O /dev/null http://127.0.0.1/"
    interval 1
}

vrrp_instance TFW_1 {
    state BACKUP
    interface eth0
    virtual_router_id 1
    priority 100
    advert_int 1
    dont_track_primary
    unicast_src_ip 192.168.100.6
    unicast_peer {
        192.168.100.5
    }
    virtual_ipaddress {
        172.16.0.5/24 dev eth1
    }
    track_script {
        chk_tfw
    }
}

vrrp_instance TFW_2 {
    state MASTER
    interface eth0
    virtual_router_id 2
    priority 200
    advert_int 1
    dont_track_primary
    unicast_src_ip 192.168.100.6
    unicast_peer {
        192.168.100.5
    }
    virtual_ipaddress {
        172.16.0.6/24 dev eth1
    }
    track_script {
        chk_tfw
    }
}
Note that eth0 is a private network interface for VRRP communications and eth1 is an external interface. It's supposed that Tempesta FW is listening at 0.0.0.0:80, and we use wget -q -O /dev/null http://127.0.0.1/ to verify that it works as expected. 0.0.0.0 should be used so that Tempesta FW accepts traffic at the VIP addresses: when an address appears in the system, nothing else needs to be done for Tempesta FW to accept connections at the new address. We don't use authentication, e-mail notifications, and other nice keepalived features in the configuration file for brevity.
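A minimal Tempesta FW configuration matching this setup might look as follows (a sketch: the listen and server directives are assumed, and the backend address 127.0.0.1:8080 is hypothetical — adjust both to your deployment):

```
# Listen on port 80 on all addresses, so connections arriving at any
# VIP are accepted without further reconfiguration.
listen 80;

# Hypothetical upstream backend; replace with your real servers.
server 127.0.0.1:8080;
```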
The second node configuration looks similar, except that the master and backup roles are assigned to the other instances:
vrrp_script chk_tfw {
    script "wget -q -O /dev/null http://127.0.0.1/"
    interval 1
}

vrrp_instance TFW_1 {
    state MASTER
    interface eth0
    virtual_router_id 1
    priority 200
    advert_int 1
    dont_track_primary
    unicast_src_ip 192.168.100.5
    unicast_peer {
        192.168.100.6
    }
    virtual_ipaddress {
        172.16.0.5/24 dev eth1
    }
    track_script {
        chk_tfw
    }
}

vrrp_instance TFW_2 {
    state BACKUP
    interface eth0
    virtual_router_id 2
    priority 100
    advert_int 1
    dont_track_primary
    unicast_src_ip 192.168.100.5
    unicast_peer {
        192.168.100.6
    }
    virtual_ipaddress {
        172.16.0.6/24 dev eth1
    }
    track_script {
        chk_tfw
    }
}
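To test failover, stop keepalived (or Tempesta FW) on one node and check that the other node acquires both VIPs. A sketch of the drill, with node roles and addresses following the configurations above:

```shell
# On the second node: simulate a failure.
systemctl stop keepalived

# On the first node: both VIPs should now be assigned to eth1,
# i.e. 172.16.0.5 and 172.16.0.6 both appear in the output.
ip -4 addr show dev eth1

# Restore the second node; as master for TFW_2 with the higher
# priority, it should take 172.16.0.6 back.
systemctl start keepalived
```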