Threads bottlenecking in DefaultSingletonBeanRegistry when using Wicket's @SpringBean annotation for injection [SPR-5360] #10033
Comments
Xavi Arias commented Hello, we are facing this problem as well; our environment is JDK 1.5 with Tomcat 6 on Windows servers. Is it possible to know how you hacked around it to avoid this bottleneck? Thanks
Juergen Hoeller commented Actually, looking at that stacktrace, I wonder why Wicket's SpringBeanLocator recalculates the bean name for every such call. The bottleneck seems to be the introspection of all applicable singleton beans in getBeanNamesForType, which really shouldn't get invoked for obtaining a simple bean instance... Of course we could consider caching getBeanNamesForType results in the first place, for any such caller, in particular for concurrent invocations. If it happens to be unavoidable to reobtain the bean names every time, the factory itself could do some caching. Wicket's SpringBeanLocator could cache as well, though. Juergen
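A minimal sketch of the caller-side caching Juergen describes: resolve the bean name for a given type once and reuse it, instead of invoking getBeanNamesForType on every injection. The class and field names here (CachingBeanNameResolver, beanNameCache) are invented for illustration and are not part of Spring or Wicket.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.beans.factory.ListableBeanFactory;

public class CachingBeanNameResolver {

    private final ListableBeanFactory beanFactory;

    // Cache of type -> resolved bean name, so the expensive type introspection
    // in getBeanNamesForType runs at most once per type.
    private final Map<Class<?>, String> beanNameCache = new ConcurrentHashMap<Class<?>, String>();

    public CachingBeanNameResolver(ListableBeanFactory beanFactory) {
        this.beanFactory = beanFactory;
    }

    public Object lookup(Class<?> type) {
        String beanName = beanNameCache.get(type);
        if (beanName == null) {
            String[] names = beanFactory.getBeanNamesForType(type);
            if (names.length != 1) {
                throw new IllegalStateException(
                        "Expected exactly one bean of type " + type.getName() + ", found " + names.length);
            }
            beanName = names[0];
            beanNameCache.put(type, beanName);
        }
        return beanFactory.getBean(beanName);
    }
}
```

Once the name is cached, subsequent lookups fall through to a plain getBean(String) call, which avoids the repeated singleton introspection shown in the stacktrace.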
Leo Kim commented The Wicket folks may have fixed this in 1.4.2 actually: https://issues.apache.org/jira/browse/WICKET-2344 We're using Wicket 1.3.7, where it's not yet fixed. We hacked around it by relying on a convention that we used for our Spring beans. I'm too embarrassed to share it here. :-)
Daniel Gredler commented I've also run into trouble with this lock in the DefaultSingletonBeanRegistry#getSingleton(...) methods. Specifically, I was trying to reduce application startup time by parallelizing the initialization of singleton beans in DefaultListableBeanFactory#preInstantiateSingletons() [1]. I'm not sure what the best fix is... maybe make the DefaultSingletonBeanRegistry.singletonObjects instance variable volatile and reduce the size of the synchronized block(s)? I see that the synchronized block in getSingleton(String, boolean) is a tad more forgiving than the synchronized block in getSingleton(String, ObjectFactory) (the null check is outside of the synchronized block). However, I'm not sure that the smaller synchronized block is safe without making the instance variable volatile and adding a second null check inside the synchronized block (see "Fixing Double-Checked Locking using Volatile" in [2]). [1] http://github.com/gredler/spriths
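For reference, a minimal illustration of the "double-checked locking with volatile" pattern the comment points to (this is generic Java, not Spring code): the field must be volatile and the null check must be repeated inside the synchronized block for a lazy-creation path to be safe.

```java
public class LazyHolder {

    private volatile Object instance;

    public Object getInstance() {
        Object result = instance;           // first (unsynchronized) check
        if (result == null) {
            synchronized (this) {
                result = instance;          // second check under the lock
                if (result == null) {
                    result = createInstance();
                    instance = result;      // safe publication via the volatile write
                }
            }
        }
        return result;
    }

    private Object createInstance() {
        return new Object();                // placeholder for real creation logic
    }
}
```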
Juergen Hoeller commented Daniel, the singletonObjects variable is final and hence doesn't need to be marked as volatile, since it is properly visible in any case. The double-check in the getSingleton(String, boolean) method should be safe as well, since it doesn't create the singleton: the inner block just checks for an 'early' singleton reference, after the outer block has checked for a regular singleton reference. The classic double-checked locking problem would only apply when lazily creating an instance: you might accidentally create the instance twice. Admittedly, singleton creation in getSingleton(String, ObjectFactory) is not meant to happen in parallel at the moment, since none of the out-of-the-box ApplicationContexts operates that way. We'd have to revisit the locking to use a per-bean-name lock instead of a global singletonObjects lock. Juergen
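A hedged sketch of what per-bean-name locking could look like, using a concurrent map of lock objects keyed by bean name. This is not how DefaultSingletonBeanRegistry is actually implemented; it only illustrates the idea discussed above, and names like singletonLocks and PerBeanNameLockingRegistry are invented for the example.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import org.springframework.beans.factory.ObjectFactory;

public class PerBeanNameLockingRegistry {

    private final Map<String, Object> singletonObjects = new ConcurrentHashMap<String, Object>();
    private final ConcurrentMap<String, Object> singletonLocks = new ConcurrentHashMap<String, Object>();

    public Object getSingleton(String beanName, ObjectFactory objectFactory) {
        Object singleton = singletonObjects.get(beanName);
        if (singleton == null) {
            // Lock only the bean being created, so unrelated beans can be
            // created concurrently instead of contending on one global monitor.
            synchronized (lockFor(beanName)) {
                singleton = singletonObjects.get(beanName);
                if (singleton == null) {
                    singleton = objectFactory.getObject();
                    singletonObjects.put(beanName, singleton);
                }
            }
        }
        return singleton;
    }

    private Object lockFor(String beanName) {
        Object lock = singletonLocks.get(beanName);
        if (lock == null) {
            Object newLock = new Object();
            Object existing = singletonLocks.putIfAbsent(beanName, newLock);
            lock = (existing != null ? existing : newLock);
        }
        return lock;
    }
}
```

Registry-wide operations such as getSingletonNames() or getSingletonCount() would still need some consistent view of the whole map, which is exactly the trade-off raised in the next comment.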
Daniel Gredler commented Good point about singletonObjects already being final; I completely missed that. If DefaultSingletonBeanRegistry moves to per-bean-name locks, how would that affect methods like getSingletonNames() and getSingletonCount(), which aren't bean-scoped?
Arne Limburg commented Hi Jürgen, we ran into the same issue under load, using the javax.inject.Provider interface with Spring 3.0.2. We have a workaround for this, using a custom bean factory; I'll attach the code later. To fix this you probably should make the earlySingletonObjects map a ConcurrentHashMap and just move the synchronized block into the second if-block in getSingleton.
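A hedged sketch of the change Arne describes, written against a simplified stand-in for getSingleton(String, boolean): read the concurrent maps without locking and synchronize only when an early reference actually has to be materialized from a factory. This is illustrative only and omits most details of the real DefaultSingletonBeanRegistry.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.beans.factory.ObjectFactory;

public class NarrowedLockRegistrySketch {

    private final Map<String, Object> singletonObjects = new ConcurrentHashMap<String, Object>();
    private final Map<String, Object> earlySingletonObjects = new ConcurrentHashMap<String, Object>();
    private final Map<String, ObjectFactory> singletonFactories = new ConcurrentHashMap<String, ObjectFactory>();

    protected Object getSingleton(String beanName, boolean allowEarlyReference) {
        Object singletonObject = singletonObjects.get(beanName);
        if (singletonObject == null) {
            // Lock-free read of the early-reference map (a ConcurrentHashMap here).
            singletonObject = earlySingletonObjects.get(beanName);
            if (singletonObject == null && allowEarlyReference) {
                // Synchronize only when an early reference must be created.
                synchronized (singletonObjects) {
                    singletonObject = earlySingletonObjects.get(beanName);
                    if (singletonObject == null) {
                        ObjectFactory singletonFactory = singletonFactories.get(beanName);
                        if (singletonFactory != null) {
                            singletonObject = singletonFactory.getObject();
                            earlySingletonObjects.put(beanName, singletonObject);
                            singletonFactories.remove(beanName);
                        }
                    }
                }
            }
        }
        return singletonObject;
    }
}
```

The fast path for already-published singletons and early references then never touches the monitor, which is where the threads in the reported stack were piling up.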
Arne Limburg commented As an attachment you will find an application context that can be used as a replacement for XmlWebApplicationContext to work around the synchronization issue.
Arne Limburg commented For the case of using the javax.inject.Provider interface, I wonder if it would be cleverer to resolve the reference to the actual bean at injection time instead of doing this at runtime. In our case we use the provider to look up beans of a smaller scope from beans of containing scopes, i.e. to look up session information from within singleton beans. In these cases (I guess this is the main usage of the Provider interface) the referenced bean is known at injection time, but it is not available since the needed scope is just not active.
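A small example of the usage pattern Arne describes: a singleton holding a javax.inject.Provider for a narrower-scoped bean, so the scoped bean is resolved on every call rather than once at injection time. SessionInfo and ReportService are hypothetical names for this sketch, not types from the original report.

```java
import javax.inject.Inject;
import javax.inject.Provider;

public class ReportService {

    private final Provider<SessionInfo> sessionInfoProvider;

    @Inject
    public ReportService(Provider<SessionInfo> sessionInfoProvider) {
        this.sessionInfoProvider = sessionInfoProvider;
    }

    public String currentUser() {
        // Each call resolves the session-scoped bean for the current session;
        // it is this repeated lookup that goes through the registry under load.
        return sessionInfoProvider.get().getUsername();
    }
}

interface SessionInfo {
    String getUsername();
}
```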
Lari Hotari commented |
Leo Kim opened SPR-5360 and commented
I actually wrote to the Wicket mailing list initially about this:
http://www.nabble.com/SpringBeanLocator-and-%40SpringBean-performance-issue-td20964687.html
and they suggest that this is a deeper Spring issue. Basically, I'm using Wicket's @SpringBean annotation to inject beans throughout our webapp. When we load tested the app, we found threads blocking for extended periods at this particular point: Object blocked: 145.133 ms, Object wait: 0 ms, CPU wait: 2.118 ms, I/O wait: 9.017 ms, CPU: 73.847 ms
[...snip...]
We're able to hack around it in our code and avoid this bottleneck, which resulted in us getting 50-75% more requests per second. Looking at some of the Spring 3.0 code, it looks like this class has not changed much so we'll probably run into this problem when we upgrade. Is there a way to make this code path more concurrent? With Spring 3.0 using Java 5, it seems like the use of read/write locks might squeeze more concurrency out of these bean lookups.
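A hedged sketch of the read/write-lock idea from the report: concurrent readers look up already-created singletons in parallel under a shared read lock, and the exclusive write lock is taken only when a singleton actually has to be created. This is illustrative only, not a patch to DefaultSingletonBeanRegistry.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import org.springframework.beans.factory.ObjectFactory;

public class ReadWriteLockSingletonRegistry {

    private final Map<String, Object> singletonObjects = new HashMap<String, Object>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public Object getSingleton(String beanName, ObjectFactory objectFactory) {
        // Fast path: shared read lock, so many threads can look up singletons at once.
        lock.readLock().lock();
        try {
            Object singleton = singletonObjects.get(beanName);
            if (singleton != null) {
                return singleton;
            }
        }
        finally {
            lock.readLock().unlock();
        }
        // Slow path: exclusive write lock for creation, with a re-check after acquiring it.
        lock.writeLock().lock();
        try {
            Object singleton = singletonObjects.get(beanName);
            if (singleton == null) {
                singleton = objectFactory.getObject();
                singletonObjects.put(beanName, singleton);
            }
            return singleton;
        }
        finally {
            lock.writeLock().unlock();
        }
    }
}
```

The re-check under the write lock is what prevents two threads that missed on the read path from creating the same singleton twice.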
Affects: 2.5.6
10 votes, 14 watchers