This guide covers procedures to upgrade PacketFence servers.
- Clustering Guide: covers installation in a clustered environment.
- Developer's Guide: covers the API, captive portal customization, application code customizations and instructions for supporting new equipment.
- Installation Guide: covers installation and configuration of PacketFence.
- Network Devices Configuration Guide: covers configuration of switches, WiFi controllers and access points.
- PacketFence News: covers noteworthy features, improvements and bug fixes by release.
These files are included in the package and release tarballs.
The MariaDB root password that was provided during the initial configuration is required.
Note: Starting from PacketFence 11.0.0, this step is not necessary when performing an automated upgrade.
Taking a complete backup of the current installation is strongly recommended. Perform a backup using one of the following scripts, depending on which one exists on the installation:
/usr/local/pf/addons/database-backup-and-maintenance.sh
/usr/local/pf/addons/backup-and-maintenance.sh
Note: Starting from PacketFence 11.0.0, this step is not necessary when performing an automated upgrade.
If monit is installed and running, stop and disable it with:
systemctl stop monit
systemctl disable monit
Starting from PacketFence 11.0.0, the PacketFence installation can be upgraded in two ways:
For all PacketFence versions prior to 11.0.0, follow the steps described in the Upgrade procedure.
In cluster environments, perform the following steps on one server at a time. To avoid multiple moves of the virtual IP addresses, start with the nodes that don't own any virtual IP addresses. Ensure all services have been restarted correctly before moving to the next node.
If monit is installed and running, shut it down with:
systemctl stop monit
systemctl disable monit
It is recommended to stop all PacketFence services that are currently running before proceeding any further:
/usr/local/pf/bin/pfcmd service pf stop
systemctl stop packetfence-config
Warning: All non-configuration files will be overwritten by the new packages. All changes made to any other files will be lost during the upgrade.
Follow instructions related to automation of upgrades.
Please refer to the PacketFence Clustering Guide, more specifically the Performing an upgrade on a cluster section.
Starting from PacketFence 11.0.0, support for Debian 9 and CentOS 7 is dropped in favor of Debian 11 and RHEL 8. In-place upgrades are not supported: provision new operating system(s) in order to migrate.
To simplify the upgrade process to PacketFence 11.0.0 and future versions, an export/import mechanism is now used. Before doing anything else, be sure to read the assumptions and limitations of this mechanism.
Follow instructions related to export process.
Follow instructions related to import process.
If the import mechanism is not used to upgrade the previous PacketFence installation, follow the instructions in this section to upgrade the configuration and database schema.
# Only run if the previous configuration is not imported
/usr/local/pf/addons/upgrade/to-11.0-firewall_sso-conf.pl
/usr/local/pf/addons/upgrade/to-11.0-no-slash-32-switches.pl
/usr/local/pf/addons/upgrade/to-11.0-openid-username_attribute.pl
Changes have been made to the database schema. An SQL upgrade script has been provided to upgrade the database schema from 10.3 to 11.0.
To upgrade the database schema, run the following command:
# Only run if the previous configuration is not imported
mysql -u root -p pf -v < /usr/local/pf/db/upgrade-10.3-11.0.sql
The NTLM cache background job option and its associated parameters have been deprecated. If this option was previously used on at least one of the domains, the NTLM cache on connection method will be used automatically.
The pf-maint.pl script used to get maintenance patches has been deprecated. Get maintenance patches using the package manager; see the Apply maintenance patches section.
Upgrades are now automated for standalone servers starting from PacketFence 11.0.0. Follow instructions related to automation of upgrades.
PacketFence now provides a way to add custom rules in /usr/local/pf/conf/iptables.conf using two files:
- /usr/local/pf/conf/iptables-input.conf.inc for all input traffic
- /usr/local/pf/conf/iptables-input-management.conf.inc for all input traffic related to the management interface
If custom rules were previously created in iptables.conf, we recommend moving these rules into these files.
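For example, a rule that previously lived in iptables.conf to allow SSH to the management interface could be moved into /usr/local/pf/conf/iptables-input-management.conf.inc. The subnet, port and chain name below are illustrative only; verify them against the rules PacketFence actually generates in iptables.conf:

```
# Example only: allow SSH from an admin subnet to the management interface
-A input-management-if --protocol tcp --source 192.168.10.0/24 --match tcp --dport 22 --jump ACCEPT
```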
PacketFence now allows enabling or disabling local authentication for 802.1X directly in the web admin. If packetfence-local-auth was previously enabled in /usr/local/pf/conf/radiusd/packetfence-tunnel, we recommend enabling this feature in the PacketFence web admin (see EAP local user authentication).
Monit configuration is now managed directly in /usr/local/pf/conf/pf.conf. During the upgrade process, an upgrade script will automatically migrate the existing Monit configuration into /usr/local/pf/conf/pf.conf.
Cluster upgrades are not automated, follow the instructions in this section to upgrade the configuration and database schema.
# Only run this for cluster upgrades
/usr/local/pf/addons/upgrade/to-11.1-cleanup-ntlm-cache-batch-fields.pl
/usr/local/pf/addons/upgrade/to-11.1-migrate-monit-configuration-to-pf-conf.pl
/usr/local/pf/addons/upgrade/to-11.1-remove-unused-sources.pl
/usr/local/pf/addons/upgrade/to-11.1-update-reports.pl
Changes have been made to the database schema. An SQL upgrade script has been provided to upgrade the database from the 11.0 schema to 11.1.
To upgrade the database schema, run the following command:
# Only run this for cluster upgrades
mysql -u root -p pf -v < /usr/local/pf/db/upgrade-11.0-11.1.sql
Upgrades are now automated for standalone servers starting from PacketFence 11.0.0. Follow instructions related to automation of upgrades.
Cluster upgrades are not automated, follow the instructions in this section to upgrade the configuration and database schema.
/usr/local/pf/addons/upgrade/to-11.2-pfcron.pl
/usr/local/pf/addons/upgrade/to-11.2-pfcron-populate_ntlm_redis_cache.pl
/usr/local/pf/addons/upgrade/to-11.2-upgrade-pf-privileges.sh
Changes have been made to the database schema. An SQL upgrade script has been provided to upgrade the database from the 11.1 schema to 11.2.
To upgrade the database schema, run the following command:
# Only run this for cluster upgrades
mysql -u root -p pf -v < /usr/local/pf/db/upgrade-11.1-11.2.sql
If any condition in the filters (VLAN, RADIUS, Switch, DNS, DHCP and Profile) uses a 'not equals' operator, check whether the logic still holds when the value is null/undef. If a filter must ensure a value is defined, add an additional 'defined' condition to the filter.
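As an illustration, a filter rule relying on a 'not equals' test could be paired with an explicit defined check. The rule name and field below are hypothetical, and the exact condition syntax should be verified against the filter engine documentation:

```
[not_guest_ssid]
status=enabled
condition=defined(ssid) && ssid != "Guest-WiFi"
```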
If pfpki is used and PKI templates were created without an email attribute, we recommend setting a value for this attribute. pfpki will then use the email addresses defined in the PKI templates to send notifications about upcoming certificate expirations for certificates that have no email address.
The code used to manage tenants in PacketFence has been removed. If tenants are required, consider staying on any release prior to 12.0.
PacketFence previously used haproxy (via the haproxy-db service) to load balance and fail over database connections from the PacketFence services to the database servers. This is now performed by ProxySQL, which allows splitting reads and writes to different members and offers greater performance and scalability.
If ProxySQL causes issues in the deployment, revert back to haproxy-db by following these instructions.
Tracking of bandwidth accounting information is now disabled by default. If bandwidth reports or security events are required, enable it as follows: go to Configuration → System Configuration → RADIUS → General and enable 'Process Bandwidth Accounting'. The pfacct service has to be restarted to apply the change.
API calls used to fix permissions and to perform checkups from web admin have been deprecated. With the containerization of several services, it didn’t make sense to keep them available.
However, it's still possible to perform these commands on a PacketFence server using pfcmd fixpermissions and pfcmd checkup.
Note: This applies to administrators that have a RADIUS authentication source configured in PacketFence. If PacketFence is used as a RADIUS server but no RADIUS authentication source is configured, this section can be ignored.
RADIUS authentication sources previously used the source IP of the packet in the NAS-IP-Address field when communicating with the RADIUS server. This behavior has been deprecated in favor of using the management IP address (or VIP in a cluster) in the NAS-IP-Address. If another value in the NAS-IP-Address attribute is required, it is configurable in the RADIUS authentication source directly.
The names of some log files have changed:

Service | Old log file(s) | New log file(s)
---|---|---
MariaDB | mariadb_error.log | mariadb.log
httpd.aaa (Apache requests) | httpd.aaa.access and httpd.aaa.error | httpd.apache
httpd.collector (Apache requests) | httpd.collector.log and httpd.collector.error | httpd.apache
httpd.portal (Apache requests) | httpd.portal.access, httpd.portal.error and httpd.portal.catalyst | httpd.apache
httpd.proxy (Apache requests) | httpd.proxy.error and httpd.proxy.access | httpd.apache
httpd.webservices (Apache requests) | httpd.webservices.error and httpd.webservices.access | httpd.apache
api-frontend (Apache requests) | httpd.api-frontend.access | httpd.apache
HAProxy (all services) | /var/log/syslog or /var/log/messages | haproxy.log
The ability to backup a remote database configured in PacketFence has been deprecated. From now on, a dedicated tool on the database server itself must be used to backup the external database. If the database is hosted on the PacketFence server (default behavior), then no adjustment is required.
The configreload call has been deprecated on pfcmd service pf restart due to a file synchronization issue on each restart. If a configuration file is modified directly on the filesystem, a manual configreload is required:
/usr/local/pf/bin/pfcmd configreload hard
The attribute used for dynamic ACLs on Aruba/HP switches has been changed to Aruba-NAS-Filter-Rule. Ensure the switches run a recent firmware so that this attribute is honored.
Due to the containerization of the pfacct service, network devices must send a RADIUS NAS-IP-Address attribute in Accounting-Request packets. The value of this attribute needs to be an IP address defined in the Switches menu (or part of a CIDR declaration). If this RADIUS attribute is not sent by the network devices, declare them in the Switches menu using MAC addresses (the value of the RADIUS Called-Station-Id attribute).
A bug has been identified on ZEN 12.1 installations. With a ZEN 12.1 installation, apply the following patch:
cd /tmp/
wget https://github.com/inverse-inc/packetfence/files/10897043/rc-local.patch
patch /etc/rc.local /tmp/rc-local.patch
LDAP conditions added in the LDAP authentication source use an LDAP search to retrieve the values.
Two switch types will be converted to the new way of defining a switch: a switch can now be defined according to its OS, not only its model.
Since v13.1, PacketFence moved from Samba to a new NTLM_AUTH_API service. To upgrade the domain join, ensure the domain controller is running Windows Server 2008 or later, then perform the following steps:
First run the following script:
/usr/local/pf/addons/upgrade/to-13.1-move-ntlm-auth-to-rest.pl
Running the previous script will extract the current Samba configuration and convert it to the NTLM_AUTH_API format.
The script will detect if PacketFence is running in a cluster environment and will compare the Samba machine name with the hostname:
- If the Samba machine name matches the hostname, the script will migrate the configuration to the NTLM_AUTH_API format and replace the machine name with %h.
- If the Samba machine name does not match the hostname, manually delete the machine accounts in the AD and reconfigure the join.
In both cases the NTLM_AUTH_API is supported in a cluster, and each machine joined to the domain must have the exact same password.
Depending on the action of the script, there may be a configuration change for the domain(s) in Configuration → Policies and Access Control → Active Directory Domains.
Important: When creating or editing a Domain, specifying the Server Name as %h will use the hostname of the server. The hostname differs for each member of a cluster.
Fill out the form, specifying the Machine account password (record it so it can be reused later) and the credentials of an AD admin account that is able to join a machine to the Domain. Click Save and check that the Machine account was created in the Active Directory Domain.
For each remaining server in the cluster:
- Visit Status → Services and, on the right side, click API Redirect, then choose the Nth server.
- Visit Configuration → Policies and Access Control → Active Directory Domains and choose the domain created or modified above.
- The Machine account password will show as a hash of the original password. Retype the password used above.
- Click Save.
Since 13.2, PacketFence implements a local NT Key cache that tracks failed login attempts, to prevent accounts from being locked on the AD. To enable the NT Key cache, perform the following steps:
/usr/local/pf/addons/upgrade/to-13.2-update-domain-config.pl
Since 13.2 PacketFence is able to receive events from the AD to report password changes, which allows PacketFence to reset failed login attempts in the NT Key cache. To add a new admin role to receive these events through the PacketFence API perform the following steps:
/usr/local/pf/addons/upgrade/to-13.2-adds-new-admin-roles.pl
Since 13.2 PacketFence has reworked the Cisco, Juniper and Meraki switch modules to use OS versions rather than hardware versions. To update the current switch configurations to the new OS versions perform the following:
/usr/local/pf/addons/upgrade/to-13.2-convert-switch-types.pl
/usr/local/pf/addons/upgrade/to-13.2-convert-juniper-switch-types.pl
/usr/local/pf/addons/upgrade/to-13.2-convert-merakiswitch-types.pl
Since 14.0 PacketFence is able to receive events from the FleetDM servers, which allows PacketFence to detect policy violations or CVEs of devices managed by FleetDM. To add a new admin role to receive these events through the PacketFence API perform the following steps:
/usr/local/pf/addons/upgrade/to-14.0-adds-admin-roles-fleetdm.pl
Since 14.0, the structure of domain.conf has changed: a host identifier prefix has been added to each domain profile.
Here is an example of a node joined to both domains "a.com" and "b.com". The hostname of the node is pfv14. The domain.conf structure prior to v14.0.0:
[domainA]
ntlm_auth_port=5000
server_name=%h
dns_name=a.com
....
[domainB]
ntlm_auth_port=5001
server_name=%h
dns_name=b.com
....
The domain.conf structure after v14.0.0:
[pfv14 domainA]
ntlm_auth_port=5000
server_name=%h
dns_name=a.com
....
[pfv14 domainB]
ntlm_auth_port=5001
server_name=%h
dns_name=b.com
....
For a standalone PacketFence installation, comparing the two versions of the configuration file, the only change is the hostname prefix. In a PacketFence cluster, however, the content of domain.conf is "duplicated" once per node in the cluster. This structural change allows each member to have its own configuration, including an individual machine account, password, etc. All the nodes can now join the Windows AD using customized machine accounts and passwords, without having to use %h as part of the machine account name.
Here is an example of a PacketFence cluster of 3 nodes whose hostnames are pf-node1, pf-node2 and pf-node3, all joined to "a.com". Assuming %h was used as the machine account name, there will be 3 individual machine accounts on the Windows Domain Controller, named pf-node1, pf-node2 and pf-node3.
Now domain.conf looks like the following:
[pf-node1 domainA]
ntlm_auth_port=5000
server_name=node1
dns_name=a.com
....
[pf-node2 domainA]
ntlm_auth_port=5000
server_name=node2
dns_name=a.com
....
[pf-node3 domainA]
ntlm_auth_port=5000
server_name=node3
dns_name=a.com
....
A node will look for its configuration in the sections that start with its hostname.
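The lookup can be sketched as follows, using a sample file and hypothetical hostnames (on a real node the hostname would come from the hostname command, and the file would be /usr/local/pf/conf/domain.conf):

```shell
# Sketch: how a v14 node finds its own sections in domain.conf.
# Sample data only; section names follow the "<hostname> <domain-id>" pattern.
cat > /tmp/domain.conf.sample <<'EOF'
[pf-node1 domainA]
ntlm_auth_port=5000
server_name=node1
dns_name=a.com
[pf-node2 domainA]
ntlm_auth_port=5000
server_name=node2
dns_name=a.com
EOF
HOST=pf-node1   # in practice: HOST=$(hostname)
grep "^\[$HOST " /tmp/domain.conf.sample   # prints: [pf-node1 domainA]
```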
During the upgrade process, the following script is executed on each node. It adds the hostname prefix to each domain section to match the new domain.conf structure.
/usr/local/pf/addons/upgrade/to-14.0-update-domain-config-section.pl
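The renaming performed by the script can be approximated with a one-line sed over a sample pre-14.0 file (illustrative only; the real script does more than this):

```shell
# Sketch: prefix every [section] header with the local hostname,
# turning "[domainA]" into "[pfv14 domainA]". Sample data only.
cat > /tmp/domain.conf.old <<'EOF'
[domainA]
ntlm_auth_port=5000
dns_name=a.com
[domainB]
ntlm_auth_port=5001
dns_name=b.com
EOF
HOST=pfv14   # in practice: HOST=$(hostname)
sed "s/^\[\(.*\)\]\$/[$HOST \1]/" /tmp/domain.conf.old
```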
When upgrading a standalone PacketFence installation from a version prior to v14.0.0, nothing more is required after the upgrade script has completed.
When upgrading a PacketFence cluster, however, there are additional steps: the domain configuration may need to be changed manually, or some nodes may need to be re-joined.
This is because PacketFence can convert its own domain.conf to the new structure, but it cannot access the configuration of the other nodes.
If a forced configuration sync has already been done before the domain.conf files were merged on the master node, the configuration on the synced nodes is lost.
There are two ways to do this:
- Check domain.conf on each node and make sure every node has both its own section and the sections of the other cluster members.
- If parts are missing, go to each node and copy-paste the corresponding part into the master node's domain.conf.
- Save the changes on the master node, then do a forced configuration sync on the other nodes.
Note: Hostnames using the %h prefix or suffix must still be used when upgrading from a previous version, unless specifying individual machine account names for each node.
- Do a configuration sync after the upgrade; all the slave nodes will lose their domain configuration.
- Open the PacketFence Admin UI and go to Configuration → Policies and Access Control → Active Directory Domains.
- Take note of the configuration: the entire configuration will need to be replicated on the slave nodes.
- Use API redirect to switch between nodes, or directly access the API using a node's IP:
  - Using API redirect: visit Admin UI → Status → Services → API redirect, then select the node to handle the API requests.
  - Directly accessing a node by IP address: use https://node_ip:1443/ to select the node to handle the API requests.
  - The Domain Joining operation will only be performed on the selected node.
- Switch across all the nodes using either API redirect or manual selection.
- Fill in the identical domain information on each node and click Create; this creates the domain.conf file and joins the corresponding machine on the Windows AD.
- Repeat the joining steps on all the nodes to make sure they all have the same domain profile.
In-place upgrades are supported on RedHat EL8. Follow the current Upgrade to another version (major or minor) procedure.
PacketFence 14.0.0 has removed support for Debian 11 (bullseye) and added support for Debian 12 (bookworm). In-place upgrades from Debian 11 to Debian 12 are not supported. A new operating system will need to be provisioned in order to migrate from either Debian 11 or RedHat EL8 to Debian 12.
To simplify the upgrade process to PacketFence 14.0.0 and future versions, we utilize a custom export/import procedure.
The mariadb-backup package is installed with a PacketFence cluster and can also be used with a standalone installation. The mariadb-backup package should have the same major version as the mariadb-server package.
To know which package version of mariadb-backup is installed:
# Debian 11
# /usr/bin/mariabackup --version
/usr/bin/mariabackup based on MariaDB server 10.5.24-MariaDB debian-linux-gnu (x86_64)

# Debian 12
# /usr/bin/mariabackup --version
/usr/bin/mariabackup based on MariaDB server 10.11.6-MariaDB debian-linux-gnu (x86_64)
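To compare major versions programmatically, the banner string can be parsed as below. The banner here is the sample Debian 11 output from above; on a live host it would come from /usr/bin/mariabackup --version:

```shell
# Sketch: extract "major.minor" from a mariabackup version banner.
ver_line="/usr/bin/mariabackup based on MariaDB server 10.5.24-MariaDB debian-linux-gnu (x86_64)"
echo "$ver_line" | grep -oE '[0-9]+\.[0-9]+' | head -1   # prints: 10.5
```

Running the same extraction against the mariadb-server package's version string lets the two major versions be compared before the migration.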
If it is not installed follow the default export process at export on current installation.
Before continuing, be sure to read assumptions and limitations.
PacketFence versions < 11.1 must upgrade to 11.1 before continuing.
Perform the backup using the following script; the database export is created using mariadb-backup (10.5). This backup is used to import the database on the new host.
/usr/local/pf/addons/backup-and-maintenance.sh
Ensure the backup exists in /root/backup/packetfence-db-dump-innobackup-YYYY-MM-DD_HHhmm.xbstream.gz
This export is only used to Import the configuration files in the new host.
/usr/local/pf/addons/full-import/export.sh /tmp/export.tgz
Restore the database backup locally into a new copy for mariabackup:
gunzip /root/backup/packetfence-db-dump-innobackup-YYYY-MM-DD_HHhmm.xbstream.gz
mkdir -p /root/backup/restore/
pushd /root/backup/restore/
mv /root/backup/packetfence-db-dump-innobackup-YYYY-MM-DD_HHhmm.xbstream /root/backup/restore/
mbstream -x < packetfence-db-dump-innobackup-*.xbstream
rm packetfence-db-dump-innobackup-*.xbstream
mariabackup --prepare --target-dir=./
⇒ SCP (copy) the restored files and the export.tgz to the Debian 12 server
# create the restore directory
ssh root@PacketFence_Debian_12 mkdir -p /root/backup/restore/
scp -r /root/backup/restore/* root@PacketFence_Debian_12:/root/backup/restore/
scp /tmp/export.tgz root@PacketFence_Debian_12:/tmp/export.tgz
systemctl stop packetfence-mariadb
pkill -9 -f mariadbd || echo 1 > /dev/null
mv /var/lib/mysql/ "/var/lib/mysql-`date +%s`"
mkdir /var/lib/mysql
cd /root/backup/restore/
mariabackup --innobackupex --defaults-file=/usr/local/pf/var/conf/mariadb.conf --move-back --force-non-empty-directories ./
chown -R mysql: /var/lib/mysql
systemctl start packetfence-mariadb
mysql_upgrade -p
systemctl restart packetfence-mariadb
Import only the configuration files; do not import the database.
/usr/local/pf/addons/full-import/import.sh --conf -f /tmp/export.tgz
The configuration and database are now migrated to the new host.
If all goes well, restart the services using the following instructions.
To build a new cluster or rebuild an existing cluster, follow instructions in Cluster setup section.
If the previous installation was a cluster, some steps may not be required. The export archive will contain the previous cluster.conf file.
Run the following command BEFORE starting the fully automated upgrade:
yum localinstall https://www.packetfence.org/downloads/PacketFence/RHEL8/packetfence-upgrade-14.1.el8.noarch.rpm
Then follow the standard Full upgrade.
This is the same as [_performing_an_upgrade_on_a_cluster]; when at the [_upgrade_node_c] step for node C, follow the upgrade instructions in [_upgrade_standalone_redhat_el8].