
Thread blocked when using one global instance of ClientResources but repeatedly opening and closing connections #2879

Closed
TomatoCream opened this issue Jun 7, 2024 · 1 comment
Labels: type: bug (A general bug)

TomatoCream commented Jun 7, 2024

Bug Report

Thread blocked when using one global instance of ClientResources but repeatedly opening and closing connections.

A similar issue involving multiple ClientResources instances was reported in #1269.

However, we use only one global ClientResources and still hit a similar-looking error.

Current Behavior

Following the Lettuce guidelines, we use a single ClientResources instance per JVM and create multiple RedisClusterClient instances from it:

    RedisClusterClient redisClusterClient = RedisClusterClient.create(globalClientResources, redisURI);
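
For reference, a minimal sketch of that setup (a hedged illustration; the resource and URI names are placeholders, not our production code):

    import io.lettuce.core.RedisURI;
    import io.lettuce.core.cluster.RedisClusterClient;
    import io.lettuce.core.resource.ClientResources;
    import io.lettuce.core.resource.DefaultClientResources;

    // One ClientResources per JVM, shared by every client, as the Lettuce docs recommend.
    ClientResources globalClientResources = DefaultClientResources.create();
    RedisURI redisURI = RedisURI.create("redis://localhost:6379"); // placeholder URI

    // Multiple cluster clients all share the same event loops and timers.
    RedisClusterClient clientA = RedisClusterClient.create(globalClientResources, redisURI);
    RedisClusterClient clientB = RedisClusterClient.create(globalClientResources, redisURI);

    // On application shutdown: close the clients first, then the shared resources.
    clientA.shutdown();
    clientB.shutdown();
    globalClientResources.shutdown();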

Because connection.reset() is deprecated and reported to cause bugs, we emulate a reset by closing the connection when we hit an exception and opening a brand-new one.

We assumed the following pattern would work: repeatedly connect(), then call closeAsync() whenever we hit an exception:

    connection = redisClusterClient.connect(SOME_CODEC);
    // do some work
    connection.closeAsync();   // discard the connection when an exception occurs
    connection = null;
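
In full, the pattern looks roughly like the sketch below (hedged: the RedisException handling and the generic types on SOME_CODEC are our assumptions, and the join() must never run on a Lettuce event-loop thread):

    // Sketch of the reset emulation: close the broken connection, then reconnect.
    StatefulRedisClusterConnection<String, String> connection = redisClusterClient.connect(SOME_CODEC);
    try {
        // do some work on the connection
    } catch (RedisException e) {
        // Emulate the deprecated reset(): discard the connection and open a new one.
        connection.closeAsync().join(); // wait for the close to finish before reconnecting
        connection = redisClusterClient.connect(SOME_CODEC);
    }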

Eventually, threads become blocked; see the stack trace below.
Once the lock is stuck in this state, it is never released.

Stack trace
"lettuce-epollEventLoop-6-37" #6307 daemon prio=5 os_prio=0 tid=0x620018a00000 nid=0x350646 [ JVM thread_state=_thread_blocked, locked by VM at safepoint, polling bits: safep ]
  java.lang.Thread.State: RUNNABLE
      at io.lettuce.core.protocol.SharedLock.lockWritersExclusive(SharedLock.java:139)
      at io.lettuce.core.protocol.SharedLock.doExclusive(SharedLock.java:114)
      at io.lettuce.core.protocol.DefaultEndpoint.doExclusive(DefaultEndpoint.java:741)
      at io.lettuce.core.cluster.ClusterNodeEndpoint.closeAsync(ClusterNodeEndpoint.java:70)
      at io.lettuce.core.RedisChannelHandler.closeAsync(RedisChannelHandler.java:179)
      at io.lettuce.core.internal.AsyncConnectionProvider.lambda$close$2(AsyncConnectionProvider.java:162)
      at io.lettuce.core.internal.AsyncConnectionProvider$$Lambda$lambda$close$2$56014052/0x0000000000003ac0.accept(Unknown Source)
      at io.lettuce.core.internal.AsyncConnectionProvider$Sync.doWithConnection(AsyncConnectionProvider.java:287)
      at io.lettuce.core.internal.AsyncConnectionProvider.lambda$forEach$4(AsyncConnectionProvider.java:207)
      at io.lettuce.core.internal.AsyncConnectionProvider$$Lambda$lambda$forEach$4$3588936050/0x0000000000003ac1.accept(Unknown Source)
      at java.util.concurrent.ConcurrentHashMap.forEach(java.base@17.0.8.1.101/ConcurrentHashMap.java:1603)
      at io.lettuce.core.internal.AsyncConnectionProvider.forEach(AsyncConnectionProvider.java:207)
      at io.lettuce.core.internal.AsyncConnectionProvider.close(AsyncConnectionProvider.java:160)
      at io.lettuce.core.cluster.PooledClusterConnectionProvider.closeAsync(PooledClusterConnectionProvider.java:513)
      at io.lettuce.core.cluster.ClusterDistributionChannelWriter.closeAsync(ClusterDistributionChannelWriter.java:439)
      at io.lettuce.core.RedisChannelHandler.closeAsync(RedisChannelHandler.java:179)
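
For context on what the trace shows: closeAsync() ends up in SharedLock.doExclusive on an epoll event-loop thread, and acquiring the exclusive writer lock reentrantly deadlocks in 6.3.0, so a thread that re-enters the exclusive section while already holding the lock waits on itself forever. A standalone illustration of that failure mode (not Lettuce's actual SharedLock code):

    import java.util.concurrent.Semaphore;

    public class NonReentrantDeadlock {
        // A binary semaphore behaves like a non-reentrant exclusive lock.
        private static final Semaphore exclusive = new Semaphore(1);

        static void doExclusive(Runnable action) throws InterruptedException {
            exclusive.acquire(); // a second acquire by the SAME thread blocks here
            try {
                action.run();
            } finally {
                exclusive.release();
            }
        }

        public static void main(String[] args) throws InterruptedException {
            // The outer doExclusive re-enters doExclusive: the inner acquire()
            // never succeeds because the permit is held by this very thread.
            doExclusive(() -> {
                try {
                    doExclusive(() -> System.out.println("never reached"));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
    }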

Expected behavior/code

Expected closeAsync() to complete without permanently blocking the event-loop threads.

Environment

  • Lettuce version(s): 6.3.0.RELEASE
  • Redis version: 7.0.1

Additional context

None

@tishun added this to the Backlog milestone on Jun 28, 2024
@tishun added the "status: help-wanted" (An issue that a contributor can help us with) and "type: bug" (A general bug) labels on Jun 28, 2024
tishun (Collaborator) commented Jun 28, 2024

Possibly related to #1429

@tishun modified the milestones: Backlog → 7.x on Jun 28, 2024
@tishun added the "for: team-attention" (An issue we need to discuss as a team to make progress) and "status: waiting-for-triage" labels and removed the "status: help-wanted" label on Jul 17, 2024
tishun pushed a commit that referenced this issue Sep 13, 2024
* fix:deadlock when reentrant exclusive lock #2905

* confirm won't blocking other thread

* apply suggestions
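
The fix referenced above makes the exclusive section survive reentrant acquisition. One common way to achieve that (a sketch of the general technique, not the actual patch) is to record the owning thread and let it re-enter:

    // Sketch of a reentrant exclusive lock: the owning thread may re-enter freely.
    public class ReentrantExclusive {
        private Thread owner;     // guarded by synchronized(this)
        private int holdCount;

        public synchronized void lock() throws InterruptedException {
            Thread me = Thread.currentThread();
            if (owner == me) {    // reentrant acquisition: just bump the hold count
                holdCount++;
                return;
            }
            while (owner != null) {
                wait();           // block until the current owner releases
            }
            owner = me;
            holdCount = 1;
        }

        public synchronized void unlock() {
            if (owner != Thread.currentThread()) {
                throw new IllegalMonitorStateException("not the lock owner");
            }
            if (--holdCount == 0) {
                owner = null;
                notifyAll();      // wake waiting threads
            }
        }
    }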
@tishun removed the "for: team-attention" and "status: waiting-for-triage" labels on Sep 13, 2024
@tishun modified the milestones: 7.x → 6.5.0.RELEASE on Sep 13, 2024
@tishun closed this as completed on Sep 13, 2024
tishun pushed a commit to tishun/lettuce-core that referenced this issue Nov 1, 2024
…is#2961)

* fix:deadlock when reentrant exclusive lock redis#2905

* confirm won't blocking other thread

* apply suggestions
tishun added a commit that referenced this issue Nov 1, 2024
* fix:deadlock when reentrant exclusive lock #2905

* confirm won't blocking other thread

* apply suggestions

Co-authored-by: Andy(Jingzhang)Chen <iRoiocam@gmail.com>