Because connection.reset() is deprecated and reported to cause bugs, we try to emulate a reset by closing the connection when we hit an exception and opening a brand-new one.
We assumed the pattern below would work: connect, then call closeAsync() whenever we hit an exception, repeatedly.
connection = redisClusterClient.connect(SOME_CODEC);
// do some work
connection.closeAsync();
connection = null;
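A self-contained sketch of this close-and-reopen pattern. Everything here is a plain-Java stand-in, not Lettuce API: the `Connection` interface, the `newConnection()` factory (playing the role of `redisClusterClient.connect(SOME_CODEC)`), and the deliberately flaky first connection are all hypothetical, just to make the retry path observable:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

public class ReconnectOnError {

    // Minimal stand-in for a Redis connection: just enough surface for the pattern.
    interface Connection {
        String ping();
        default CompletableFuture<Void> closeAsync() {
            return CompletableFuture.completedFuture(null);
        }
    }

    // Counts opened connections so the reconnect is observable.
    static final AtomicInteger OPENED = new AtomicInteger();

    // Hypothetical factory standing in for redisClusterClient.connect(SOME_CODEC).
    static Connection newConnection() {
        int id = OPENED.incrementAndGet();
        return () -> {
            if (id == 1) {
                // The first connection always fails, forcing the close-and-reopen path.
                throw new IllegalStateException("broken connection");
            }
            return "PONG";
        };
    }

    static String pingWithReconnect() {
        Connection connection = newConnection();
        for (int attempt = 0; attempt < 3; attempt++) {
            try {
                return connection.ping();
            } catch (RuntimeException e) {
                // Emulate reset: close the broken connection, open a brand-new one.
                connection.closeAsync().join();
                connection = newConnection();
            }
        }
        throw new IllegalStateException("still failing after retries");
    }

    public static void main(String[] args) {
        System.out.println(pingWithReconnect()); // PONG
        System.out.println(OPENED.get());       // 2 connections were opened
    }
}
```

The bounded retry loop is a deliberate choice in the sketch: an unbounded `while (true)` version of this pattern never terminates if every fresh connection also fails.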
Eventually, threads become blocked; see the stack trace below.
Once the lock is engaged, it is never released.
Stack trace
"lettuce-epollEventLoop-6-37" #6307 daemon prio=5 os_prio=0 tid=0x620018a00000 nid=0x350646 [ JVM thread_state=_thread_blocked, locked by VM at safepoint, polling bits: safep ]
java.lang.Thread.State: RUNNABLE
at io.lettuce.core.protocol.SharedLock.lockWritersExclusive(SharedLock.java:139)
at io.lettuce.core.protocol.SharedLock.doExclusive(SharedLock.java:114)
at io.lettuce.core.protocol.DefaultEndpoint.doExclusive(DefaultEndpoint.java:741)
at io.lettuce.core.cluster.ClusterNodeEndpoint.closeAsync(ClusterNodeEndpoint.java:70)
at io.lettuce.core.RedisChannelHandler.closeAsync(RedisChannelHandler.java:179)
at io.lettuce.core.internal.AsyncConnectionProvider.lambda$close$2(AsyncConnectionProvider.java:162)
at io.lettuce.core.internal.AsyncConnectionProvider$$Lambda$lambda$close$2$56014052/0x0000000000003ac0.accept(Unknown Source)
at io.lettuce.core.internal.AsyncConnectionProvider$Sync.doWithConnection(AsyncConnectionProvider.java:287)
at io.lettuce.core.internal.AsyncConnectionProvider.lambda$forEach$4(AsyncConnectionProvider.java:207)
at io.lettuce.core.internal.AsyncConnectionProvider$$Lambda$lambda$forEach$4$3588936050/0x0000000000003ac1.accept(Unknown Source)
at java.util.concurrent.ConcurrentHashMap.forEach(java.base@17.0.8.1.101/ConcurrentHashMap.java:1603)
at io.lettuce.core.internal.AsyncConnectionProvider.forEach(AsyncConnectionProvider.java:207)
at io.lettuce.core.internal.AsyncConnectionProvider.close(AsyncConnectionProvider.java:160)
at io.lettuce.core.cluster.PooledClusterConnectionProvider.closeAsync(PooledClusterConnectionProvider.java:513)
at io.lettuce.core.cluster.ClusterDistributionChannelWriter.closeAsync(ClusterDistributionChannelWriter.java:439)
at io.lettuce.core.RedisChannelHandler.closeAsync(RedisChannelHandler.java:179)
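The trace shows `SharedLock.lockWritersExclusive` spinning (the thread state is RUNNABLE, yet it never makes progress). As a purely illustrative toy, and emphatically not Lettuce's actual `SharedLock` implementation, the following shows one way a shared/exclusive lock of this general shape can block exclusive callers forever once a shared unlock is missed; all names and the `-1` exclusive marker are assumptions of the sketch:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy shared/exclusive lock: shared holders bump 'writers'; an exclusive
// locker can only proceed when the count is exactly 0.
public class ToySharedLock {
    private final AtomicInteger writers = new AtomicInteger();

    void lockShared() { writers.incrementAndGet(); }

    void unlockShared() { writers.decrementAndGet(); }

    // Try to take the exclusive lock, giving up after timeoutMillis.
    // (A real spin loop with no timeout would hang forever instead.)
    boolean tryLockExclusive(long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (writers.compareAndSet(0, -1)) {
                return true; // -1 marks "exclusively locked"
            }
            Thread.sleep(1);
        }
        return false; // still held by a shared locker: this is the hang
    }

    void unlockExclusive() { writers.set(0); }

    public static void main(String[] args) throws InterruptedException {
        ToySharedLock lock = new ToySharedLock();

        // Balanced shared use: the exclusive lock succeeds.
        lock.lockShared();
        lock.unlockShared();
        System.out.println(lock.tryLockExclusive(100)); // true
        lock.unlockExclusive();

        // One shared lock is never released: exclusive work (as closeAsync
        // does in the trace above) can never acquire the lock.
        lock.lockShared();
        System.out.println(lock.tryLockExclusive(100)); // false
    }
}
```

This only models the observed symptom (an exclusive acquisition that spins indefinitely because the shared count never returns to zero); whether an unbalanced shared unlock is the actual root cause in Lettuce is exactly what this report asks the maintainers to determine.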
Expected behavior/code
Expected closeAsync() to complete without blocking any threads.
Environment
Lettuce version(s): 6.3.0.RELEASE
Redis version: 7.0.1
Additional context
None
Bug Report
Thread blocked when using one global instance of ClientResources while repeatedly opening and closing connections.
I found similar issues involving multiple ClientResources instances in #1269; however, we use only one global ClientResources and still hit a similar-looking error.
Current Behavior
Following Lettuce's guidelines, we use only one ClientResources instance in a single JVM, and create multiple RedisClusterClient instances from it.