Optimise Performance for Reverse Chaining in FHIR Search #1772
This is a tiny cache at 100 entries, which means the number of internal hash bins is kept very low. These bins act as locks in `ConcurrentHashMap`, so fewer bins lead to a higher chance of collisions between two independent writes. A simple solution is to increase the initial capacity:

```java
Caffeine.newBuilder()
    .initialCapacity(10_000)
    .maximumSize(100)
    .build();
```

A slightly more complex approach is to use an `AsyncCache` and complete the future manually, so the long-running computation happens outside the map's internal locks:

```java
AsyncCache<K, V> cache = Caffeine.newBuilder().buildAsync();

V get(K key, Function<K, V> mappingFunction) {
  var future = new CompletableFuture<V>();
  var prior = cache.asMap().putIfAbsent(key, future);
  if (prior != null) {
    return prior.join();
  }
  try {
    var value = mappingFunction.apply(key);
    future.complete(value);
    return value;
  } catch (Throwable t) {
    future.completeExceptionally(t);
    throw t;
  }
}
```
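To make the manual-future pattern concrete, here is a self-contained sketch using only the JDK (a plain `ConcurrentHashMap` of futures stands in for Caffeine's `AsyncCache.asMap()` view; the class and method names are hypothetical). The point it demonstrates: because the map stores futures rather than values, `putIfAbsent` returns immediately and the expensive computation runs with no map bin lock held, while concurrent callers for the same key still wait on a single computation.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Hypothetical stdlib-only demo of the manual-future pattern above.
public class FutureCacheDemo {
    static final ConcurrentHashMap<String, CompletableFuture<String>> cache =
        new ConcurrentHashMap<>();
    static final AtomicInteger computations = new AtomicInteger();

    static String get(String key, Function<String, String> fn) {
        var future = new CompletableFuture<String>();
        // putIfAbsent is a cheap, short critical section: it only inserts
        // the (not yet completed) future, never runs the computation.
        var prior = cache.putIfAbsent(key, future);
        if (prior != null) {
            return prior.join(); // another caller is (or was) computing this key
        }
        try {
            var value = fn.apply(key); // long-running work, no map lock held
            future.complete(value);
            return value;
        } catch (Throwable t) {
            future.completeExceptionally(t);
            cache.remove(key, future); // don't cache failures
            throw t;
        }
    }

    public static void main(String[] args) {
        Function<String, String> slow = k -> {
            computations.incrementAndGet();
            return k.toUpperCase();
        };
        System.out.println(get("patient", slow)); // PATIENT (computed)
        System.out.println(get("patient", slow)); // PATIENT (from cache)
        System.out.println(computations.get());   // 1
    }
}
```

With Caffeine, the same shape applies against `cache.asMap()`, and completed futures become regular cache entries subject to eviction.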
Currently the implementation of the `_has` search parameter in `blaze.db.impl.search-param.has` uses a cache for resource handles. The cache is filled by the computation function `resource-handles*`, which can take a long time. One site has experienced a warning from Caffeine that the long-running computation halted eviction. So we should revisit why we need that cache and how we can handle the cache updates differently.