A clustered cache's reported 'max cache size' is wrong when it is based on Openfire-provided (default) values.

Openfire expresses those values in bytes, whereas Hazelcast uses megabytes. Furthermore, Hazelcast doesn't accept negative values. When a cache is created from Openfire-provided (default) values, the value is therefore converted:

- Openfire's representation of 'unlimited' (`-1`) is replaced by `Integer.MAX_VALUE`
- Any non-negative value is divided by `1024*1024` (with some rounding)

Thus, the configuration is roughly correct. However, when the cache size is read back (mostly for diagnostic purposes, as far as I can see), this conversion isn't reverted. That causes Openfire to 'see' values that are wrong (e.g., values off by a factor of roughly a million, or a large integer value when 'unlimited' is intended).
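The round trip described above can be sketched as follows. This is a minimal illustration with hypothetical method names, not the plugin's actual code (which lives in `org.jivesoftware.openfire.plugin.util.cache.ClusteredCacheFactory#createCache`); the exact rounding behaviour is assumed:

```java
public class CacheSizeConversion {
    /** Openfire's sentinel value for an unlimited cache size. */
    static final long UNLIMITED = -1L;
    static final long MB = 1024L * 1024L;

    /**
     * Converts an Openfire max-cache-size (bytes, -1 = unlimited) into the
     * value handed to Hazelcast (whole megabytes, never negative).
     */
    static int toHazelcast(long openfireBytes) {
        if (openfireBytes == UNLIMITED) {
            return Integer.MAX_VALUE; // Hazelcast rejects negative values.
        }
        // Round up, so a small non-zero byte count doesn't become a 0 MB limit.
        return (int) Math.max(1, (openfireBytes + MB - 1) / MB);
    }

    /**
     * The missing inverse: reverts the conversion when the size is read back,
     * so Openfire sees bytes (or -1) again rather than Hazelcast's megabytes.
     */
    static long toOpenfire(int hazelcastMegabytes) {
        if (hazelcastMegabytes == Integer.MAX_VALUE) {
            return UNLIMITED;
        }
        return hazelcastMegabytes * MB;
    }
}
```

Without `toOpenfire` being applied on read, a 50 MB limit is reported as `50` (interpreted as 50 bytes), which is the "off by a factor of a million" symptom described above.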
guusdk added a commit to guusdk/openfire-hazelcast-plugin that referenced this issue on Nov 4, 2024:
The maximum cache size was reported by the ClusteredCache implementation incorrectly, when based on Openfire-provided configuration.
This change effectively reverts the value modifications performed by `org.jivesoftware.openfire.plugin.util.cache.ClusteredCacheFactory#createCache`
guusdk added a commit to guusdk/openfire-hazelcast-plugin that referenced this issue on Nov 5, 2024:
The maximum cache size was reported by the ClusteredCache implementation incorrectly, when based on Openfire-provided configuration.
This change effectively reverts the value modifications performed by `org.jivesoftware.openfire.plugin.util.cache.ClusteredCacheFactory#createCache`