Fix concurrency and performance issues #14
Conversation
When implementing asynchronous chunk loading, numerous concurrency issues were found.

Also replaced quite a few bad uses of Map.containsKey. containsKey "reads" cleaner, but doubles the cost of the map operation: in most cases where null values are not stored, containsKey followed by get is equivalent to a single get() == null check. Considering how deep data fixers go in call stacks, with tons of map lookups, this micro-optimization could provide some real gains.

Additionally, many of the containsKey/get/put style operations were also a concurrency risk, resulting in multiple computations/insertions.
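For illustration only (hypothetical helper names, not the actual patch), the pattern being described looks roughly like this: a containsKey check followed by get performs two lookups, a bare get with a null check performs one, and on a map shared across threads the check-then-act sequence can race, which ConcurrentHashMap.computeIfAbsent avoids.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

class LookupSketch {
    // Two lookups per hit: containsKey walks the map, then get walks it again.
    static <K, V> V doubleLookup(final Map<K, V> map, final K key) {
        if (map.containsKey(key)) {
            return map.get(key);
        }
        return null;
    }

    // One lookup: equivalent to the above whenever null values are never stored.
    static <K, V> V singleLookup(final Map<K, V> map, final K key) {
        return map.get(key);
    }

    // Racy on a shared map: two threads can both miss and both compute/insert.
    static <K, V> V racyGetOrCompute(final Map<K, V> map, final K key, final Function<K, V> compute) {
        V value = map.get(key);
        if (value == null) {
            value = compute.apply(key);
            map.put(key, value); // may repeat work or overwrite another thread's value
        }
        return value;
    }

    // Atomic alternative: at most one computation per key on a ConcurrentHashMap.
    static <K, V> V atomicGetOrCompute(final ConcurrentHashMap<K, V> map, final K key, final Function<K, V> compute) {
        return map.computeIfAbsent(key, compute);
    }
}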
@@ -22,8 +23,8 @@
 import java.util.function.Supplier;

 public class Schema {
-    protected final Object2IntMap<String> RECURSIVE_TYPES = new Object2IntOpenHashMap<>();
+    protected final Object2IntMap<String> RECURSIVE_TYPES = Object2IntMaps.synchronize(new Object2IntOpenHashMap<>());
     private final Map<String, Supplier<TypeTemplate>> TYPE_TEMPLATES = Maps.newHashMap();
All schema construction should be done during bootstrap and frozen after that, synchronization inside buildTypes doesn't quite make sense since it's a one-off operation anyway. This class can hopefully be cleaned up slightly, and made to construct immutable maps instead, but synchronization is not the right approach.
Agreed. I went a bit heavy-handed to make sure I covered any risks, but looking at it closer I agree it shouldn't be needed, since it's all done in the constructor.
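As a rough illustration of the "build during bootstrap, then freeze" idea discussed above (illustrative class and field names, with a stand-in for TypeTemplate; not a proposed patch), the maps could be populated in the constructor and then wrapped in unmodifiable/immutable views so later reads need no synchronization:

import com.google.common.collect.ImmutableMap;
import it.unimi.dsi.fastutil.objects.Object2IntMap;
import it.unimi.dsi.fastutil.objects.Object2IntMaps;
import it.unimi.dsi.fastutil.objects.Object2IntOpenHashMap;

import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

class FrozenSchemaSketch {
    interface TypeTemplateStub {} // stand-in for TypeTemplate

    private final Object2IntMap<String> recursiveTypes;
    private final Map<String, Supplier<TypeTemplateStub>> typeTemplates;

    FrozenSchemaSketch() {
        // Mutate only local builders during construction (the one-off bootstrap step)...
        final Object2IntMap<String> recursive = new Object2IntOpenHashMap<>();
        final Map<String, Supplier<TypeTemplateStub>> templates = new HashMap<>();
        // ...buildTypes()-style registration would populate them here...

        // ...then publish frozen views through final fields; no locking needed afterwards.
        this.recursiveTypes = Object2IntMaps.unmodifiable(recursive);
        this.typeTemplates = ImmutableMap.copyOf(templates);
    }
}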
@@ -16,7 +16,7 @@
     private Function<DynamicOps<?>, T> value;

     @SuppressWarnings("ConstantConditions")
-    public Function<DynamicOps<?>, T> evalCached() {
+    public synchronized Function<DynamicOps<?>, T> evalCached() {
This seems too heavy-weight, since evalCached should be a hot spot. There might be a better option, like https://en.wikipedia.org/wiki/Double-checked_locking
This is something I was concerned about too, and I was considering a double-checked lock as well. I will update it.
I wasn't able to detect any performance concerns, considering all of our data fixer usage is done on the thread pool now, but helping improve conversion speed is always good.
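For reference, a double-checked locking version of a lazily computed value usually looks like the sketch below (illustrative names, not the actual evalCached code); the important detail is that the cached field must be volatile so the computed result is safely published to other threads:

import java.util.function.Supplier;

final class LazyValue<T> {
    private final Supplier<T> compute;
    private volatile T value; // volatile is what makes the unsynchronized read safe

    LazyValue(final Supplier<T> compute) {
        this.compute = compute;
    }

    T get() {
        T result = value;          // first check, no lock
        if (result == null) {
            synchronized (this) {  // only contended until the value is cached
                result = value;    // second check, under the lock
                if (result == null) {
                    result = compute.get();
                    value = result;
                }
            }
        }
        return result;
    }
}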
Optional<? extends RewriteResult<?, ?>> rewrite = REWRITE_CACHE.get(key);
//noinspection OptionalAssignedToNull
if (rewrite != null) {
    return (Optional<RewriteResult<A, ?>>) rewrite;
The cast should not be necessary if the map stores CompletableFuture<? extends Optional<? extends RewriteResult<?, ?>>>
Not worth changing this; that breaks everything else. The class is generic but uses a static cache, so the cast is required unless the caches are moved to be instance caches instead.
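A compressed illustration of why the cast shows up (hypothetical stand-in types, not the real Type/RewriteResult classes): because the cache is a static field shared by every instantiation of the generic class, its value type can only use wildcards, and the compiler cannot relate those wildcards back to a particular instance's type parameter.

import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

abstract class CachedTypeSketch<A> {
    interface Result<B> {} // stand-in for RewriteResult

    // One static cache for every CachedTypeSketch<A>, so values must be wildcard-typed.
    private static final Map<Object, Optional<? extends Result<?>>> REWRITE_CACHE = new ConcurrentHashMap<>();

    @SuppressWarnings("unchecked")
    Optional<Result<A>> cachedRewrite(final Object key) {
        final Optional<? extends Result<?>> cached = REWRITE_CACHE.get(key);
        if (cached != null) {
            // The compiler cannot prove the cached element matches this instance's A,
            // so an unchecked cast is unavoidable with a static cache.
            return (Optional<Result<A>>) cached;
        }
        return null; // mirrors the "Optional assigned to null means not cached" idiom above
    }
}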
Just noticed the cache key references this anyway... I'm just going to move it to an instance property.
Edit: not doing that, it breaks all the things.
I think the reason that failed is that you'd need another static field holding an object to synchronize on, rather than syncing on the instance cache, wouldn't you?
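A minimal sketch of that suggestion (illustrative names only, not a patch): keep the cache static, but guard it with a dedicated static lock object rather than synchronizing on any particular instance.

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

final class StaticCacheLockSketch {
    private static final Object CACHE_LOCK = new Object();
    private static final Map<Object, Object> REWRITE_CACHE = new HashMap<>();

    static Object getOrCompute(final Object key, final Function<Object, Object> compute) {
        synchronized (CACHE_LOCK) { // every reader and writer takes the same static lock
            return REWRITE_CACHE.computeIfAbsent(key, compute);
        }
    }
}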
* upstream/master:
  Corrected minor grammatical issue in README.md
  Fixed a run-on sentence
  Removed the extra "is"
  Added links to README.md
I've tested the requested changes against Paper with our Async Chunk Loading, deleted my region files, and reconverted chunks, and have not seen any issues arise. Should be good to go.
Hold on, I'm investigating what appears to be a server-destroying performance regression with these changes... I believe it's from the move from a static to an instance property, as types that would be considered equal now won't share the same cache.
Yep, OK, I'm reverting those last commits, that was it. Casting it is :)
These changes have been in use in Paper's Async Chunk system now and have no more known issues. We are doing chunk conversions over the thread pool now.