This guide covers procedures to upgrade PacketFence servers.
- Clustering Guide: Covers installation in a clustered environment.
- Developer’s Guide: Covers API, captive portal customization, application code customizations and instructions for supporting new equipment.
- Installation Guide: Covers installation and configuration of PacketFence.
- Network Devices Configuration Guide: Covers switches, WiFi controllers and access points configuration.
- PacketFence News: Covers noteworthy features, improvements and bug fixes by release.
These files are included in the package and release tarballs.
The MariaDB root password that was provided during the initial configuration is required.
Note: Starting from PacketFence 11.0.0, this step is not necessary when performing an automated upgrade.
Taking a complete backup of the current installation is strongly recommended. Perform a backup using the script that matches your installed version:
/usr/local/pf/addons/database-backup-and-maintenance.sh
/usr/local/pf/addons/backup-and-maintenance.sh
Note: Starting from PacketFence 11.0.0, this step is not necessary when performing an automated upgrade.
If monit is installed and running, stop and disable it with:
systemctl stop monit
systemctl disable monit
Starting from PacketFence 11.0.0, the PacketFence installation can be upgraded in two ways:
For all PacketFence versions prior to 11.0.0, follow the steps described in the Upgrade procedure.
In cluster environments, you need to perform the following steps on one server at a time. To avoid multiple moves of the virtual IP addresses, start with the nodes that do not own any virtual IP addresses. You must ensure all services have been restarted correctly before moving to the next node.
If monit is installed and running, shut it down with:
systemctl stop monit
systemctl disable monit
It is recommended to stop all PacketFence services that are currently running before proceeding any further:
/usr/local/pf/bin/pfcmd service pf stop
systemctl stop packetfence-config
Warning: All non-configuration files will be overwritten by new packages. All changes made to any other files will be lost during the upgrade.
Follow the instructions related to automation of upgrades.
Please refer to the PacketFence Clustering Guide, more specifically the Performing an upgrade on a cluster section.
Note: This step needs to be done before the packages upgrade.
In this version, the kernel development package matching your current kernel version is required in order to build the Netflow kernel module.
yum install kernel-devel-$(uname -r)
The headers for your specific kernel may not be published anymore in the CentOS repository. If that is the case, then perform the following prior to the upgrade:
yum update kernel
reboot
yum install kernel-devel-$(uname -r)
Note: Be sure to follow the instructions in the [_rebooting_after_services_have_been_stopped] section to ensure services will not restart.
The timezone set in pf.conf is applied to the operating system every time PacketFence reloads its configuration. For this reason, you should review the timezone setting in the general section of pf.conf (System Configuration → General Configuration in the admin). If it is empty, PacketFence will use the timezone that is already set on the server and you have nothing to do. Otherwise, PacketFence will apply the timezone from this setting at the operating system layer for consistency, which may modify the timezone setting of your operating system. In that case, you should reboot the server after completing all the steps of the upgrade so that the services start with the right timezone.
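For reference, a minimal sketch of the relevant snippet in /usr/local/pf/conf/pf.conf, assuming the usual INI-style general section (the timezone value below is only an illustrative placeholder):

[general]
# Example only: pin the timezone PacketFence applies to the operating system.
# Leave the value empty to keep the timezone already configured on the server.
timezone=America/Montreal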
The packetfence-tracking-config service is now enabled by default. This means that all manual changes to configuration files will be recorded, including passwords. You can disable this service from the PacketFence web admin if you do not want this behavior.
Note: If you do not use the PacketFence PKI, you can safely ignore this step.
PacketFence-pki is deprecated in favour of the new PacketFence PKI written in Golang. If you previously used PacketFence-pki, you will need to migrate from the SQLite database to MariaDB. To migrate, make sure that both the database and the new PKI are running, then do the following:
/usr/local/pf/addons/upgrade/to-10.0-packetfence-pki-migrate.pl
Next, edit the PKI providers (Configuration → PKI Providers) and redefine the profile to use. Finally, if you use OCSP, change the URL to: http://127.0.0.1:22225/api/v1/pki/ocsp
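As a hedged way to verify that the responder answers on the new URL (the ca.pem and client.pem file names below are placeholders for your CA and client certificates):

openssl ocsp -issuer ca.pem -cert client.pem -url http://127.0.0.1:22225/api/v1/pki/ocsp -text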
This release adds a new service that will automatically attempt to recover broken Galera cluster members and can also perform a full recovery of a Galera cluster. These automated decisions may lead to potential data loss. If this is not acceptable for you, disable the galera-autofix service in pf.conf or in "System Configuration→Services". More details and documentation are available in the "The galera-autofix service" section of the clustering guide.
The file /usr/local/pf/conf/currently-at is no longer needed and can be removed:
rm /usr/local/pf/conf/currently-at
You also need to disable access to the configurator by running:
printf '\n[advanced]\nconfigurator=disabled\n' >> /usr/local/pf/conf/pf.conf
Some queries now need CREATE TEMPORARY TABLE privilege. You will be prompted for the MariaDB root password when running this script:
/usr/local/pf/addons/upgrade/to-10.0-upgrade-pf-privileges.sh
We are now using a new format for the VLAN/DNS/DHCP/RADIUS/Switch filters. This script will convert the old format to the new one:
/usr/local/pf/addons/upgrade/to-10.0-filter_engines.pl
The httpd.admin daemon is now disabled by default and the web admin interface is managed by HAProxy through the haproxy-admin daemon. This means that if you used a dedicated SSL certificate for the web admin interface (different from the captive portal certificate), it has been replaced by your captive portal certificate, which can be found at /usr/local/pf/conf/ssl/server.pem.
Changes have been made to the database schema. You will need to update it accordingly. An SQL upgrade script has been provided to upgrade the database from the 9.3 schema to 10.0.
To upgrade the database schema, run the following command:
mysql -u root -p pf -v < /usr/local/pf/db/upgrade-9.3.0-10.0.0.sql
RADIUS attributes used in rules of authentication sources are now prefixed by radius_request. This script will add the prefix:
/usr/local/pf/addons/upgrade/to-10.1-authentication-prefix.pl
In order to improve LDAP support when using RADIUS, new files and configuration parameters have been added. This script will update your current configuration:
/usr/local/pf/addons/upgrade/to-10.1-move-radius-configuration-parmeters.pl
RADIUS filters now support templated values like switch templates. This script will update your RADIUS filters to new format:
/usr/local/pf/addons/upgrade/to-10.1-radius-filter-template.pl
A new EAP parameter has been added to the realm.conf file. This script will add this parameter to your current configuration file:
/usr/local/pf/addons/upgrade/to-10.1-realm-conf.pl
It’s now possible to enable/disable rules in authentication sources. This script will add the new status parameter:
/usr/local/pf/addons/upgrade/to-10.1-rule-status.pl
CoA is now supported for Unifi APs, but requires the latest controller and AP firmware. Make sure you run the latest version of the controller and firmware if you use Ubiquiti equipment.
Changes have been made to the database schema. You will need to update it accordingly. An SQL upgrade script has been provided to upgrade the database from the 10.0.0 schema to 10.1.0.
To upgrade the database schema, run the following command:
mysql -u root -p pf -v < /usr/local/pf/db/upgrade-10.0.0-10.1.0.sql
Note: This step needs to be done before the packages upgrade.
Debian package upgrades will remove the /usr/local/pf/conf/pfmon.conf file in favor of /usr/local/pf/conf/pfcron.conf. In order to keep your configuration, you need to make a backup of your pfmon.conf file before running the package upgrades:
cp /usr/local/pf/conf/pfmon.conf /root/pfmon.conf.rpmsave
After the package upgrades have been performed, you can move the file back:
mv /root/pfmon.conf.rpmsave /usr/local/pf/conf/pfmon.conf.rpmsave
The configuration will be moved to the /usr/local/pf/conf/pfcron.conf file during the configuration migration step.
Warning: The rpmsave extension is not an error; the to-10.2-pfmon-maintenance.pl script will migrate the configuration using this filename.
The device_registration_role parameter has been renamed to device_registration_roles. To apply the change, run the following script:
/usr/local/pf/addons/upgrade/to-10.2-selfservice-conf.pl
If the switch type was not defined, this script will set it to Generic:
/usr/local/pf/addons/upgrade/to-10.2-default-switch-packetfence-standard.pl
Convert the pfmon configuration file to pfcron:
/usr/local/pf/addons/upgrade/to-10.2-pfmon-maintenance.pl
Rename PFMON actions to PFCRON actions:
/usr/local/pf/addons/upgrade/to-10.2-adminroles-conf.pl
Add the tenant_id to pfdetect:
/usr/local/pf/addons/upgrade/to-10.2-pfdetect-conf.pl
Changes have been made to the database schema. You will need to update it accordingly. An SQL upgrade script has been provided to upgrade the database from the 10.1.0 schema to 10.2.0.
To upgrade the database schema, run the following command:
mysql -u root -p pf -v < /usr/local/pf/db/upgrade-10.1.0-10.2.0.sql
Note: This step needs to be done before the packages upgrade.
PacketFence now depends on MariaDB version 10.2. In order to upgrade MariaDB, you need to execute the following steps before upgrading PacketFence.
In order to be able to work on the server, we first need to stop all the PacketFence application services on it; see the Stop all PacketFence services section.
Now stop packetfence-mariadb:
systemctl stop packetfence-mariadb
Now proceed with the MariaDB upgrade.

On RHEL/CentOS-based systems:

rpm -e --nodeps MariaDB-client MariaDB-common MariaDB-server MariaDB-shared
yum install --enablerepo=packetfence MariaDB-server

On Debian-based systems:

dpkg -r --force-depends mariadb-server mariadb-client-10.1 mariadb-client-core-10.1 \
mariadb-common mariadb-server-10.1 mariadb-server-core-10.1 libmariadbclient18 mysql-common
apt update
apt install mariadb-server mariadb-client-10.2 mariadb-client-core-10.2 \
mariadb-common mariadb-server-10.2 mariadb-server-core-10.2 libmariadbclient18 libmariadb3 \
mysql-common
Note: If you manually installed Percona XtraBackup to take your backups, you need to install MariaDB-backup (RPM-based systems) or mariadb-backup-10.2 (Debian) as a replacement.
Note: On Debian, ignore prompts related to changing the root password during the package upgrade.
At this moment you have the newest version of MariaDB installed on your system. Ensure MariaDB is running:
systemctl unmask mariadb
systemctl start mariadb
You can check that you are running MariaDB 10.2 with the following command:
mysql -u root -p -e "show variables where Variable_name='version';"
Next step is to upgrade your databases:
mysql_upgrade -u root -p
Note: If the error "Recovering after a crash using tc.log" appears, delete the file /var/lib/mysql/tc.log.
After the databases have been upgraded, you can disable the default MariaDB service.

On RHEL/CentOS-based systems:

systemctl stop mariadb
systemctl mask mariadb

On Debian-based systems:

systemctl stop mysql
pkill -u mysql
systemctl mask mysql
The packetfence-mariadb service will be started later by the upgrade of the PacketFence package(s).
At this point, the MariaDB 10.2 database is ready. You can now upgrade PacketFence by following the instructions in the Packages upgrades section.
Caution: Performing a live upgrade on a PacketFence cluster is not a straightforward operation and should be done meticulously.
In this procedure, the three nodes are named A, B and C, in that order in cluster.conf. When we refer to their hostnames, we mean the hostnames defined in cluster.conf.
First, ensure you have taken backups of your data. We highly encourage you to take snapshots of all the virtual machines prior to the upgrade. You should also take a backup of the database and of the /usr/local/pf directory using the database and configuration backup instructions.
The PacketFence clustering stack has a mechanism that handles configuration conflicts across the servers. This will conflict with your upgrade, so you should disable it. In order to do so, go to Configuration→System Configuration→Maintenance and disable the Cluster Check task.
Once this is done, restart pfmon or pfcron on all nodes using:
/usr/local/pf/bin/pfcmd service pfmon restart
/usr/local/pf/bin/pfcmd service pfcron restart
You should disable the galera-autofix service in the configuration to disable the automated resolution of cluster issues during the upgrade. In order to do so, go to Configuration→System Configuration→Services and disable the galera-autofix service. Once this is done, stop the galera-autofix service on all nodes using:
/usr/local/pf/bin/pfcmd service galera-autofix updatesystemd
/usr/local/pf/bin/pfcmd service galera-autofix stop
In order to be able to work on node C, we first need to stop all the PacketFence application services on it:
/usr/local/pf/bin/pfcmd service pf stop
packetfence-config needs to stay up in order to disable nodes A and B in the configuration.
Note: The steps below will cause a temporary loss of service.
First, we need to tell A and B to ignore C in their cluster configuration. In order to do so, execute the following command on A and B, replacing node-C-hostname with the actual hostname of node C:
/usr/local/pf/bin/cluster/node node-C-hostname disable
Once this is done, restart the following services on nodes A and B, one node at a time. This will cause a service failure during the restart on node A:
/usr/local/pf/bin/pfcmd service radiusd restart
/usr/local/pf/bin/pfcmd service pfdhcplistener restart
/usr/local/pf/bin/pfcmd service haproxy-admin restart
/usr/local/pf/bin/pfcmd service haproxy-db restart
/usr/local/pf/bin/pfcmd service haproxy-portal restart
/usr/local/pf/bin/pfcmd service keepalived restart
Then, we should tell C to ignore A and B in its cluster configuration. In order to do so, execute the following commands on node C, replacing node-A-hostname and node-B-hostname with the hostnames of nodes A and B respectively:
/usr/local/pf/bin/cluster/node node-A-hostname disable
/usr/local/pf/bin/cluster/node node-B-hostname disable
The commands above will make sure that nodes A and B will not forward requests to C even if it is alive. The same goes for C, which won’t send traffic to A and B. This means A and B will continue to have the same database information while C will start to diverge from it when it goes live. We’ll make sure to reconcile this data afterwards.
Now stop packetfence-mariadb on node C:
systemctl stop packetfence-mariadb
Now proceed with the MariaDB upgrade.

On RHEL/CentOS-based systems:

rpm -e --nodeps MariaDB-client MariaDB-common MariaDB-server MariaDB-shared
yum install --enablerepo=packetfence MariaDB-server MariaDB-backup

On Debian-based systems:

dpkg -r --force-depends mariadb-server mariadb-client-10.1 mariadb-client-core-10.1 \
mariadb-common mariadb-server-10.1 mariadb-server-core-10.1 libmariadbclient18 \
mysql-common
apt update
apt install mariadb-server-10.2 mariadb-common mariadb-client-10.2 \
mariadb-client-core-10.2 mariadb-server-core-10.2 libmariadb3 \
libmariadbclient18 mariadb-server mariadb-backup-10.2 mysql-common
Note: On Debian, ignore prompts related to changing the root password during the package upgrade.
At this moment you have the newest version of MariaDB installed on your system. Ensure MariaDB is running:
systemctl unmask mariadb
systemctl start mariadb
You can check that you are running MariaDB 10.2 with the following command:
mysql -u root -p -e "show variables where Variable_name='version';"
Next step is to upgrade your databases:
mysql_upgrade -u root -p
Note: If the error "Recovering after a crash using tc.log" appears, delete the file /var/lib/mysql/tc.log.
After the databases have been upgraded, you can disable the default MariaDB service.

On RHEL/CentOS-based systems:

systemctl stop mariadb
systemctl mask mariadb

On Debian-based systems:

systemctl stop mysql
pkill -u mysql
systemctl mask mysql
At this point, the MariaDB 10.2 database is ready. In order to start MariaDB as standalone on node C, you need to regenerate the MariaDB configuration (the packetfence-mariadb service will be started later by the upgrade of the PacketFence package(s)):
/usr/local/pf/bin/pfcmd generatemariadbconfig
Next, you can upgrade your operating system and/or PacketFence on node C by following the instructions in the Packages upgrades section.
Important: If you are on a RHEL/CentOS-based system, the command to install the packetfence-release package released with version 10.3.0 will use the following URL:
https://www.packetfence.org/downloads/PacketFence/RHEL7/packetfence-release-7.stable.noarch.rpm
Now, make sure you follow the directives in the upgrade guide as you would on a standalone server including the database schema updates.
Now, start the application services on node C using the following instructions:
/usr/local/pf/bin/pfcmd fixpermissions
/usr/local/pf/bin/pfcmd pfconfig clear_backend
systemctl restart packetfence-config
/usr/local/pf/bin/pfcmd configreload hard
/usr/local/pf/bin/pfcmd service pf restart
Next, stop all application services on nodes A and B:

- Stop all PacketFence services:

/usr/local/pf/bin/pfcmd fixpermissions
/usr/local/pf/bin/pfcmd pfconfig clear_backend
systemctl restart packetfence-config
/usr/local/pf/bin/pfcmd configreload hard
/usr/local/pf/bin/pfcmd service pf stop

- Stop the database:

systemctl stop packetfence-mariadb
You should now have full service on node C and should validate that all functionalities are working as expected. Once you continue past this point, there will be no way to migrate back to nodes A and B in case of issues other than using the snapshots taken prior to the upgrade.
If your migration to node C goes wrong, you can fail back to nodes A and B by stopping all services on node C and starting them on nodes A and B.

On node C:

systemctl stop packetfence-mariadb
/usr/local/pf/bin/pfcmd service pf stop

On nodes A and B:

systemctl start packetfence-mariadb
/usr/local/pf/bin/pfcmd service pf start
Once you feel confident enough to try your failover to node C again, you can do the exact opposite of the commands above and try your upgrade again.
Now proceed with the MariaDB upgrade.

On RHEL/CentOS-based systems:

rpm -e --nodeps MariaDB-client MariaDB-common MariaDB-server MariaDB-shared
yum install --enablerepo=packetfence MariaDB-server MariaDB-backup

On Debian-based systems:

dpkg -r --force-depends mariadb-server mariadb-client-10.1 mariadb-client-core-10.1 \
mariadb-common mariadb-server-10.1 mariadb-server-core-10.1 libmariadbclient18 \
mysql-common
apt update
apt install mariadb-server-10.2 mariadb-common mariadb-client-10.2 \
mariadb-client-core-10.2 mariadb-server-core-10.2 libmariadb3 \
libmariadbclient18 mariadb-server mariadb-backup-10.2 mysql-common
Note: On Debian, ignore prompts related to changing the root password during the package upgrade.
To let nodes A and B rejoin the cluster before upgrading the PacketFence packages, you need to update the MariaDB configuration:
sed -i "s/xtrabackup/mariabackup/g" /usr/local/pf/conf/mariadb/mariadb.conf.tt
At this moment you have the newest version of MariaDB installed on nodes A and B.
On Debian-based systems only, you need to stop the default mysql service:
systemctl stop mysql
pkill -u mysql
systemctl mask mysql
At this point, the MariaDB 10.2 database is ready.
When you re-establish the cluster using node C in the steps below, your environment will be set to read-only mode for the duration of the database sync (which needs to be done from scratch).
This can take from a few minutes to an hour depending on your database size.
We highly suggest you delete data from the following tables if you don’t need it:
- radius_audit_log: contains the data shown in Auditing→RADIUS Audit Logs
- ip4log_history: archiving data for the IPv4 history
- ip4log_archive: archiving data for the IPv4 history
- locationlog_history: archiving data for the node location history
You can safely delete the data from all of these tables without affecting functionality, as they are used for reporting and archiving purposes. Deleting the data from these tables can make the sync process considerably faster.
In order to truncate a table:
mysql -u root -p pf
MariaDB> truncate TABLE_NAME;
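For example, to clear the reporting and archiving tables listed above before re-establishing the cluster:

mysql -u root -p pf
MariaDB> truncate radius_audit_log;
MariaDB> truncate ip4log_history;
MariaDB> truncate ip4log_archive;
MariaDB> truncate locationlog_history;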
In order for node C to be able to elect itself as database master, we must tell it there are other members in its cluster by re-enabling nodes A and B:
/usr/local/pf/bin/cluster/node node-A-hostname enable
/usr/local/pf/bin/cluster/node node-B-hostname enable
Next, enable node C on nodes A and B by executing the following command on the two servers:
systemctl start packetfence-config
/usr/local/pf/bin/cluster/node node-C-hostname enable
Now, stop packetfence-mariadb on node C, regenerate the MariaDB configuration and start it as a new master:
Note: Before starting this step, be sure that the galera_replication_username has the PROCESS grant permission.
mysql -u root -p
select * from information_schema.user_privileges where PRIVILEGE_TYPE="PROCESS";
# If it's not the case
GRANT PROCESS ON *.* TO '`galera_replication_username`'@localhost;
systemctl stop packetfence-mariadb
/usr/local/pf/bin/pfcmd generatemariadbconfig
/usr/local/pf/sbin/pf-mariadb --force-new-cluster
You should validate that you are able to connect to the MariaDB database even though it is in read-only mode using the MariaDB command line:
mysql -u root -p pf -h localhost
If it is not, check the MariaDB log (/usr/local/pf/logs/mariadb_error.log).
On each of the servers you want to discard the data from, stop packetfence-mariadb, destroy all the data in /var/lib/mysql, and start packetfence-mariadb again so it resyncs its data from scratch:
systemctl stop packetfence-mariadb
rm -fr /var/lib/mysql/*
/usr/local/pf/bin/pfcmd generatemariadbconfig
systemctl start packetfence-mariadb
Should there be any issues during the sync, make sure you look into the MariaDB log (/usr/local/pf/logs/mariadb_error.log).
Once both nodes have completely synced (try connecting to them using the MariaDB command line), you can break the cluster election command you have running on node C and start node C normally (using systemctl start packetfence-mariadb).
Next, you can upgrade your operating system and/or PacketFence on nodes A and B by following the instructions in the Packages upgrades section.
Warning: You only need to merge changes of new configuration files that will not be synced by the /usr/local/pf/bin/cluster/sync command described below.
Important: If you are on a RHEL/CentOS-based system, the command to install the packetfence-release package released with version 10.3.0 will use the following URL:
https://www.packetfence.org/downloads/PacketFence/RHEL7/packetfence-release-7.stable.noarch.rpm
You do not need to follow the upgrade procedure when upgrading these nodes. You should instead do a sync from node C on nodes A and B:
/usr/local/pf/bin/cluster/sync --from=192.168.1.5 --api-user=packet --api-password=anotherMoreSecurePassword
/usr/local/pf/bin/pfcmd configreload hard
Where:

- 192.168.1.5 is the management IP of node C
- packet is the webservices username (Configuration→Webservices)
- anotherMoreSecurePassword is the webservices password (Configuration→Webservices)
Before starting PacketFence services on nodes A and B, packetfence-mariadb needs to be restarted again to take into account changes introduced by the package upgrades:
systemctl restart packetfence-mariadb
You can now safely start PacketFence on nodes A and B using the following instructions:
/usr/local/pf/bin/pfcmd fixpermissions
/usr/local/pf/bin/pfcmd pfconfig clear_backend
systemctl restart packetfence-config
/usr/local/pf/bin/pfcmd configreload hard
/usr/local/pf/bin/pfcmd service pf restart
Now, you should restart PacketFence on node C using the following instructions:
/usr/local/pf/bin/pfcmd fixpermissions
/usr/local/pf/bin/pfcmd pfconfig clear_backend
systemctl restart packetfence-config
/usr/local/pf/bin/pfcmd configreload hard
/usr/local/pf/bin/pfcmd service pf restart
This way, node C becomes aware of its peers again.
You should now have full service on all 3 nodes using the latest version of PacketFence.
Now that your cluster is back to a healthy state, you should reactivate the configuration conflict resolution. In order to do so, go to Configuration→System Configuration→Maintenance and re-enable the Cluster Check task.
Once this is done, restart pfcron on all nodes using:
/usr/local/pf/bin/pfcmd service pfcron restart
You now need to reactivate and restart the galera-autofix service so that it’s aware that all the members of the cluster are online again.
In order to do so, go to Configuration→System Configuration→Services and re-enable the galera-autofix service.
Once this is done, restart the galera-autofix service on all nodes using:
/usr/local/pf/bin/pfcmd service galera-autofix updatesystemd
/usr/local/pf/bin/pfcmd service galera-autofix restart
/usr/local/pf/addons/upgrade/to-10.3-provisioners-windows_agent_download_uri.pl
The ability to define a specific port per host in the list of the LDAP servers of a single authentication source has been deprecated. If you have such entries, adjust them accordingly. If you have been using the same LDAP port for all the hosts in an authentication source, then this will not apply to you.
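As a hedged illustration only (the host and port key names follow the usual layout of LDAP sources in authentication.conf, and the hostnames and port below are placeholders; verify against your own source definition):

# Before: per-host ports, now deprecated
host=ldap1.example.com:636,ldap2.example.com:636
# After: one shared port for all hosts of the source
host=ldap1.example.com,ldap2.example.com
port=636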
The inline_accounting table will be removed by the database schema upgrade (see below) because it has been replaced by the bandwidth_accounting table since v10. This item only concerns you if you extracted data from the inline_accounting table before v10 for external usage.
To add the default tenant_id (1) to all network configurations, run:
/usr/local/pf/addons/upgrade/to-10.3-network-conf.pl
Changes have been made to the database schema. You will need to update it accordingly. An SQL upgrade script has been provided to upgrade the database from the 10.2.0 schema to 10.3.0.
To upgrade the database schema, run the following command:
mysql -u root -p pf -v < /usr/local/pf/db/upgrade-10.2.0-10.3.0.sql
Starting from PacketFence 11.0.0, Debian 9 and CentOS 7 support is dropped in favor of Debian 11 and RHEL 8. In-place upgrades are not supported. You will have to provision new operating system(s) in order to migrate.
To simplify the upgrade process to PacketFence 11.0.0 and future versions, we now rely on an export/import mechanism. Before doing anything else, be sure to read the assumptions and limitations of this mechanism.
- Follow the upgrade path to PacketFence 10.3.0
- Go to the next section
Follow the instructions related to the export process.
Follow the instructions related to the import process.
If you don’t use the import mechanism to upgrade your previous PacketFence installation, you will need to follow the instructions in this section to upgrade the configuration and database schema.
# Only run this if you don't import your previous configuration
/usr/local/pf/addons/upgrade/to-11.0-firewall_sso-conf.pl
/usr/local/pf/addons/upgrade/to-11.0-no-slash-32-switches.pl
/usr/local/pf/addons/upgrade/to-11.0-openid-username_attribute.pl
Changes have been made to the database schema. You will need to update it accordingly. An SQL upgrade script has been provided to upgrade the database from the 10.3 schema to 11.0.
To upgrade the database schema, run the following command:
# Only run this if you don't import your previous configuration
mysql -u root -p pf -v < /usr/local/pf/db/upgrade-10.3-11.0.sql
The NTLM cache background job option and its associated parameters have been deprecated. If you previously used this option on one of your domains, it will now automatically use the NTLM cache on connection method.
The pf-maint.pl script used to get maintenance patches has been deprecated. You can now get maintenance patches using your package manager; see the Apply maintenance patches section.
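For example, on RHEL/CentOS-based systems, maintenance patches can be pulled in with the same repository pattern used later in this guide (on Debian-based systems, use the equivalent apt commands):

yum clean all --enablerepo=packetfence
yum update --enablerepo=packetfence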
TLS 1.0 and TLS 1.1 are now disabled by default. If you still have supplicants using these protocols, you should move to TLS 1.2. If that is not possible, you can adjust TLS Minimum version in Configuration → System configuration → RADIUS → TLS profiles.
Upgrades are now automated for standalone servers starting from PacketFence 11.0.0. Follow the instructions related to automation of upgrades.
PacketFence now provides a way to add custom rules to /usr/local/pf/conf/iptables.conf using two files:

- /usr/local/pf/conf/iptables-input.conf.inc for all input traffic
- /usr/local/pf/conf/iptables-input-management.conf.inc for all input traffic related to the management interface

If you previously added custom rules to iptables.conf, we recommend moving these rules into these files, as shown in the example below.
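As a hedged sketch (the input-management-if chain name is an assumption and should be checked against the chains present in your generated iptables.conf; the address and port are placeholders), a custom rule in /usr/local/pf/conf/iptables-input-management.conf.inc could look like:

# Hypothetical rule: allow SNMP polling of the management interface from a monitoring host
-A input-management-if --protocol udp --match udp --dport 161 --source 10.0.0.10 --jump ACCEPT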
PacketFence now allows you to enable or disable local authentication for 802.1X directly in the web admin. If you previously enabled the packetfence-local-auth feature in /usr/local/pf/conf/radiusd/packetfence-tunnel, we recommend enabling this feature in the PacketFence web admin instead (see EAP local user authentication).
Monit configuration is now managed directly in /usr/local/pf/conf/pf.conf. An upgrade script will be used during the upgrade process to automatically migrate existing Monit configuration into /usr/local/pf/conf/pf.conf.
If you use a cluster, its upgrade isn’t yet automated, so you will need to follow the instructions in this section to upgrade the configuration and database schema.
# Only run this for cluster upgrades
/usr/local/pf/addons/upgrade/to-11.1-cleanup-ntlm-cache-batch-fields.pl
/usr/local/pf/addons/upgrade/to-11.1-migrate-monit-configuration-to-pf-conf.pl
/usr/local/pf/addons/upgrade/to-11.1-remove-unused-sources.pl
/usr/local/pf/addons/upgrade/to-11.1-update-reports.pl
Changes have been made to the database schema. You will need to update it accordingly. An SQL upgrade script has been provided to upgrade the database from the 11.0 schema to 11.1.
To upgrade the database schema, run the following command:
# Only run this for cluster upgrades
mysql -u root -p pf -v < /usr/local/pf/db/upgrade-11.0-11.1.sql
Upgrades are now automated for standalone servers starting from PacketFence 11.0.0. Follow the instructions related to automation of upgrades.
If you use a cluster, its upgrade isn’t yet automated, so you will need to follow the instructions in this section to upgrade the configuration and database schema.
/usr/local/pf/addons/upgrade/to-11.2-pfcron.pl
/usr/local/pf/addons/upgrade/to-11.2-pfcron-populate_ntlm_redis_cache.pl
/usr/local/pf/addons/upgrade/to-11.2-upgrade-pf-privileges.sh
Changes have been made to the database schema. You will need to update it accordingly. An SQL upgrade script has been provided to upgrade the database from the 11.1 schema to 11.2.
To upgrade the database schema, run the following command:
# Only run this for cluster upgrades
mysql -u root -p pf -v < /usr/local/pf/db/upgrade-11.1-11.2.sql
If any condition in your filters (VLAN, RADIUS, Switch, DNS, DHCP and Profile) uses a "not equals" operator, check whether the logic is still correct when the value is null/undef. If the filter needs to ensure that the value is defined, add an additional "defined" condition to that filter.
If you use pfpki and you created PKI templates without an email attribute, we recommend setting a value for this attribute. By doing this, pfpki will use the email addresses defined in the PKI templates to send notifications about upcoming certificate expirations for certificates without emails.
The code used to manage tenants in PacketFence has been removed. If you previously used tenants in PacketFence, you should consider staying on a release prior to v12.
PacketFence previously used haproxy (via the haproxy-db service) to load balance and fail over database connections from the PacketFence services to the database servers. This is now performed by ProxySQL, which allows splitting reads and writes to different members and offers greater performance and scalability. If you suspect that using ProxySQL causes issues in your deployment, you can revert back to using haproxy-db by following these instructions.
Tracking of bandwidth accounting information is now disabled by default. If you rely on bandwidth reports or security events, enable it by doing the following: go to Configuration → System Configuration → RADIUS → General, then enable 'Process Bandwidth Accounting'. The pfacct service needs to be restarted to apply the change.
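For example, using the same pfcmd pattern used elsewhere in this guide:

/usr/local/pf/bin/pfcmd service pfacct restart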
API calls used to fix permissions and to perform checkups from the web admin have been deprecated. With the containerization of several services, it didn’t make sense to keep them available. However, it’s still possible to run these commands on a PacketFence server using pfcmd fixpermissions and pfcmd checkup.
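For reference, using the same paths as elsewhere in this guide:

/usr/local/pf/bin/pfcmd fixpermissions
/usr/local/pf/bin/pfcmd checkup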
Note: This applies to administrators that have a RADIUS authentication source configured in PacketFence. If you are using PacketFence as a RADIUS server but do not have any RADIUS authentication source configured, this section does not apply to you.
RADIUS authentication sources previously used the source IP of the packet in the NAS-IP-Address field when communicating with the RADIUS server. This behavior has been deprecated in favor of using the management IP address (or VIP in a cluster) in the NAS-IP-Address. If you do need to use another value in the NAS-IP-Address attribute, it is configurable in the RADIUS authentication source directly.
The names of some log files have changed. You can find a list below:
Service | Old log file(s) | New log file(s)
---|---|---
MariaDB | mariadb_error.log | mariadb.log
httpd.aaa (Apache requests) | httpd.aaa.access and httpd.aaa.error | httpd.apache
httpd.collector (Apache requests) | httpd.collector.log and httpd.collector.error | httpd.apache
httpd.portal (Apache requests) | httpd.portal.access, httpd.portal.error, httpd.portal.catalyst | httpd.apache
httpd.proxy (Apache requests) | httpd.proxy.error and httpd.proxy.access | httpd.apache
httpd.webservices (Apache requests) | httpd.webservices.error and httpd.webservices.access | httpd.apache
api-frontend (Apache requests) | httpd.api-frontend.access | httpd.apache
HAProxy (all services) | /var/log/syslog or /var/log/messages | haproxy.log
The ability to backup a remote database configured in PacketFence has been deprecated. From now on, a dedicated tool on the database server itself must be used to backup the external database. If your database is hosted on the PacketFence server (default behavior), then no adjustment is required for this.
The configreload call has been deprecated on pfcmd service pf restart due to a file synchronisation issue on each restart. If you modify a config file directly on the filesystem, you have to do the configreload manually:
/usr/local/pf/bin/pfcmd configreload hard
The attribute used for dynamic ACLs on Aruba/HP switches has been changed to Aruba-NAS-Filter-Rule. Make sure you are running a recent firmware on these switches so that this attribute is honored.
Due to the containerization of the pfacct service, network devices must send a RADIUS NAS-IP-Address attribute in Accounting-Request packets. The value of this attribute needs to be an IP address defined in the Switches menu (or part of a CIDR declaration). If this RADIUS attribute is not sent by your network devices, you need to declare them in the Switches menu using MAC addresses (the value of the RADIUS Called-Station-Id attribute).
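As a hedged sketch (assuming the usual switches.conf layout where the switch identifier is the section name; the MAC address, description and type below are placeholders), declaring such a device by MAC address could look like:

[aa:bb:cc:dd:ee:ff]
description=Access point that does not send NAS-IP-Address in accounting
type=<your switch type>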
A bug has been identified on ZEN 12.1 installations. If you performed a ZEN 12.1 installation, you need to patch your setup using the following instructions:
cd /tmp/
wget https://github.com/inverse-inc/packetfence/files/10897043/rc-local.patch
patch /etc/rc.local /tmp/rc-local.patch
LDAP conditions added in the LDAP authentication source use an LDAP search to retrieve the values.
Two switch types will be converted to the new way of defining a switch. A switch can now be defined according to its OS and not only its model.
Since v13.1, PacketFence has moved from Samba to a new NTLM_AUTH_API service. In order to upgrade the domain join, make sure your domain controller is running Windows Server 2008 or later, then perform the following steps:
First run the following script:
/usr/local/pf/addons/upgrade/to-13.1-move-ntlm-auth-to-rest.pl
Running the previous script will extract the current Samba configuration and convert it to the NTLM_AUTH_API format.
The script will detect if you are running PacketFence in a cluster environment and will compare the Samba machine name with the hostname:
- If the Samba machine name matches the hostname, the script will migrate the configuration to the NTLM_AUTH_API format and replace the machine name with %h.
- If the Samba machine name does not match the hostname, manually delete the machine accounts in the AD and reconfigure the join.
In both cases the NTLM_AUTH_API is supported in a cluster, and each machine joined to the domain must have the exact same password.
Depending on the action taken by the script, there may be a configuration change for the domain(s) in Configuration → Policies and Access Control → Active Directory Domains.
Important: When creating or editing a Domain, specifying the Server Name as %h will use the hostname of the server. The hostname differs for each member of a cluster.
Fill out the form and specify the Machine account password (record it to reuse it again later) and the credentials of an AD admin account who is able to join a machine to the Domain. Click Save and you should be able to see the Machine account created in the Active Directory Domain.
For each remaining server in the cluster:
- Visit Status → Services and on the right side, click API Redirect, then choose the Nth server.
- Visit Configuration → Policies and Access Control → Active Directory Domains and choose the domain created or modified above.
- The Machine account password will be a hash of the original password. Retype the password used above.
- Click Save.
Since 13.2, PacketFence implements a local NT Key cache to track failed login attempts and prevent the account from being locked on the AD. To implement the NT Key cache, perform the following steps:
/usr/local/pf/addons/upgrade/to-13.2-update-domain-config.pl
Since 13.2, PacketFence is able to receive events from the AD reporting password changes, which allows PacketFence to reset failed login attempts in the NT Key cache. To add a new admin role that can receive these events through the PacketFence API, perform the following steps:
/usr/local/pf/addons/upgrade/to-13.2-adds-new-admin-roles.pl
Since 13.2, PacketFence has reworked the Cisco, Juniper and Meraki switch modules to use OS versions rather than hardware versions. To update your current switch configurations to the new OS-based types, perform the following tasks:
/usr/local/pf/addons/upgrade/to-13.2-convert-switch-types.pl
/usr/local/pf/addons/upgrade/to-13.2-convert-juniper-switch-types.pl
/usr/local/pf/addons/upgrade/to-13.2-convert-merakiswitch-types.pl
Since 14.0, PacketFence is able to receive events from FleetDM servers, which allows PacketFence to detect policy violations or CVEs on devices managed by FleetDM. To add a new admin role that can receive these events through the PacketFence API, perform the following steps:
/usr/local/pf/addons/upgrade/to-14.0-adds-admin-roles-fleetdm.pl
In-place upgrades are supported for RedHat EL8. You can follow the current Upgrade to another version (major or minor) procedure.
PacketFence 14.0.0 has removed support for Debian 11 (bullseye) and added support for Debian 12 (bookworm). In-place upgrades from Debian 11 to Debian 12 are not supported. A new operating system will need to be provisioned in order to migrate from either Debian 11 or RedHat EL8 to Debian 12.
To simplify the upgrade process to PacketFence 14.0.0 and future versions, we utilize a custom export/import procedure.
The mariadb-backup package is installed with a PacketFence cluster and can also be used with a standalone installation. The mariadb-backup package should have the same major version as the mariadb-server package.
To know which package version of mariadb-backup is installed:
# Debian 11
/usr/bin/mariabackup --version
/usr/bin/mariabackup based on MariaDB server 10.5.24-MariaDB debian-linux-gnu (x86_64)

# Debian 12
/usr/bin/mariabackup --version
/usr/bin/mariabackup based on MariaDB server 10.11.6-MariaDB debian-linux-gnu (x86_64)
If it is not installed, follow the default export process at Export on current installation.
Before continuing, be sure to read assumptions and limitations.
PacketFence versions < 11.1 must upgrade to 11.1 before continuing.
Perform a backup using the following script; the database export is created using mariadb-backup (10.5). This backup is used to import the database on the new host.
/usr/local/pf/addons/backup-and-maintenance.sh
Ensure the backup exists in /root/backup/packetfence-db-dump-innobackup-YYYY-MM-DD_HHhmm.xbstream.gz
This export is only used to import the configuration files on the new host.
/usr/local/pf/addons/full-import/export.sh /tmp/export.tgz
Restore the database backup locally into a new copy for mariabackup.
gunzip /root/backup/packetfence-db-dump-innobackup-YYYY-MM-DD_HHhmm.xbstream.gz
mkdir -p /root/backup/restore/
pushd /root/backup/restore/
mv /root/backup/packetfence-db-dump-innobackup-YYYY-MM-DD_HHhmm.xbstream /root/backup/restore/
mbstream -x < packetfence-db-dump-innobackup-*.xbstream
rm packetfence-db-dump-innobackup-*.xbstream
mariabackup --prepare --target-dir=./
Copy (scp) the restored files and the export.tgz to the Debian 12 server:
# create the restore directory
ssh root@PacketFence_Debian_12 mkdir -p /root/backup/restore/
scp -r /root/backup/restore/* root@PacketFence_Debian_12:/root/backup/restore/
scp /tmp/export.tgz root@PacketFence_Debian_12:/tmp/export.tgz
systemctl stop packetfence-mariadb
pkill -9 -f mariadbd || echo 1 > /dev/null
mv /var/lib/mysql/ "/var/lib/mysql-`date +%s`"
mkdir /var/lib/mysql
cd /root/backup/restore/
mariabackup --innobackupex --defaults-file=/usr/local/pf/var/conf/mariadb.conf --move-back --force-non-empty-directories ./
chown -R mysql: /var/lib/mysql
systemctl start packetfence-mariadb
mysql_upgrade -p
systemctl restart packetfence-mariadb
Import only the configuration files; do not import the database.
/usr/local/pf/addons/full-import/import.sh --conf -f /tmp/export.tgz
The configuration and database are now migrated to the new host. If all goes well, you can restart the services using the following instructions.
If you want to build or rebuild a cluster, you need to follow instructions in Cluster setup section.
If your previous installation was a cluster, some steps may not be necessary. Your export archive will contain your previous cluster.conf file.
Please run the following commands in order to upgrade the database.
yum clean all --enablerepo=packetfence
yum update --enablerepo=packetfence
systemctl stop monit
systemctl disable monit
/usr/local/pf/bin/pfcmd service pf stop
systemctl stop packetfence-mariadb
rpm -e --nodeps MariaDB-server
rpm -e --nodeps MariaDB-client
yum localinstall -y https://www.packetfence.org/downloads/PacketFence/RHEL8/14.1/x86_64/RPMS/MariaDB-client-10.11.6-1.el8.x86_64.rpm
yum localinstall -y https://www.packetfence.org/downloads/PacketFence/RHEL8/14.1/x86_64/RPMS/galera-4-26.4.16-1.el8.x86_64.rpm
yum localinstall -y https://www.packetfence.org/downloads/PacketFence/RHEL8/14.1/x86_64/RPMS/MariaDB-server-10.11.6-1.el8.x86_64.rpm
yum localinstall -y https://www.packetfence.org/downloads/PacketFence/RHEL8/14.1/x86_64/RPMS/freeradius-mysql-3.2.6-1.el8.x86_64.rpm
systemctl start packetfence-mariadb
mysql_upgrade -p
addons/upgrade/do-upgrade.sh
It is the same as [_performing_an_upgrade_on_a_cluster], but when you reach the [_upgrade_node_c] step for node C, please follow the upgrade instructions in [_upgrade_standalone_redhat_el8].