# Frequently Asked Questions
**Symptoms:**

`RedisCommandTimeoutException` with a stack trace like:

```
io.lettuce.core.RedisCommandTimeoutException: Command timed out after 1 minute(s)
    at io.lettuce.core.ExceptionFactory.createTimeoutException(ExceptionFactory.java:51)
    at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:114)
    at io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:69)
    at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80)
    at com.sun.proxy.$Proxy94.set(Unknown Source)
```
**Diagnosis:**

- Check the debug log (log level `DEBUG` or `TRACE` for the logger `io.lettuce.core.protocol`)
- Take a thread dump to investigate thread activity
**Cause:**

A command timeout means that a command was not completed within the configured timeout. Timeouts can have various causes:

- The Redis server crashed or a network partition occurred, and your Redis service did not recover within the configured timeout.
- The command did not finish in time. This can happen if your Redis server is overloaded or if the connection is blocked by a command (e.g. `BLPOP 0` or a long-running Lua script). See also the next entry, "`blpop(Duration.ZERO, …)` gives `RedisCommandTimeoutException`".
- The configured timeout does not match Redis's performance.
- You block the `EventLoop` (e.g. by calling blocking methods in a `RedisFuture` callback or in a Reactive pipeline).
**Action:**

Check for the causes above. If the configured timeout does not match your Redis latency characteristics, consider increasing the timeout (see the sketch below). Never block the `EventLoop` from your code.
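As a minimal sketch of raising the default command timeout when constructing the connection URI; the `localhost` host and the 10-second value are placeholder assumptions, not recommendations:

```java
import java.time.Duration;

import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;

// Sketch: raise the default command timeout (60 seconds out of the box) to a
// value that matches your observed Redis latency characteristics.
RedisURI uri = RedisURI.Builder.redis("localhost")
        .withTimeout(Duration.ofSeconds(10))
        .build();
RedisClient client = RedisClient.create(uri);
```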
**Symptoms:**

Calling `blpop`, `brpop`, or any other blocking command, followed by a `RedisCommandTimeoutException` with a stack trace like:

```
io.lettuce.core.RedisCommandTimeoutException: Command timed out after 1 minute(s)
    at io.lettuce.core.ExceptionFactory.createTimeoutException(ExceptionFactory.java:51)
    at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:114)
    at io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:69)
    at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80)
    at com.sun.proxy.$Proxy94.set(Unknown Source)
```
**Cause:**

The configured command timeout applies to all commands, regardless of any command-specific timeout (such as the blocking duration passed to `BLPOP`). A blocking command that waits longer than the command timeout therefore fails on the client even though it is still behaving as intended on the server.
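For illustration, a minimal sketch assuming the default 60-second command timeout, an existing connection named `connection`, and a hypothetical key `my-queue`. The call below instructs Redis to block indefinitely, so the client-side timeout fires first:

```java
RedisCommands<String, String> commands = connection.sync();

// BLPOP with a timeout of 0 blocks on the server indefinitely. The
// client-side command timeout (60 seconds by default) expires first and
// raises RedisCommandTimeoutException.
KeyValue<String, String> value = commands.blpop(0, "my-queue");
```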
**Action:**

There are several options:

- Configure a higher default timeout.
- When calling blocking commands, choose a blocking duration that stays within the default timeout.
- Configure `TimeoutOptions` with a custom `TimeoutSource`:
```java
TimeoutOptions timeoutOptions = TimeoutOptions.builder().timeoutSource(new TimeoutSource() {

    @Override
    public long getTimeout(RedisCommand<?, ?, ?> command) {

        if (command.getType() == CommandType.BLPOP) {
            // BLPOP's timeout argument is expressed in seconds
            return TimeUnit.SECONDS.toNanos(CommandArgsAccessor.getFirstInteger(command.getArgs()));
        }

        // -1 indicates fallback to the default timeout
        return -1;
    }
}).build();
```
Note that a command that has timed out on the client may still block the connection until either its own timeout elapses on the server or Redis sends a response.
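To apply these timeout options, a minimal sketch (`client` is an assumed existing `RedisClient`):

```java
// Register the custom TimeoutOptions through ClientOptions.
ClientOptions options = ClientOptions.builder()
        .timeoutOptions(timeoutOptions)
        .build();
client.setOptions(options);
```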
**Symptoms:**

`RedisException` with one of the following messages:

```
io.lettuce.core.RedisException: Request queue size exceeded: n. Commands are not accepted until the queue size drops.
io.lettuce.core.RedisException: Internal stack size exceeded: n. Commands are not accepted until the stack size drops.
```

Or excessive memory allocation.
**Diagnosis:**

- Check Redis connectivity
- Inspect memory usage
**Cause:**

Lettuce auto-reconnects to Redis by default to minimize service disruption. Commands issued while there is no Redis connection are buffered and replayed once the connection is reestablished. By default, the request queue is unbounded, which can lead to memory exhaustion.
**Action:**

You can configure the disconnected behavior and the request queue size through `ClientOptions` for your workload profile, as sketched below. See Client Options for further reference.
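A minimal sketch, assuming an existing `RedisClient` named `client`; the queue size of 1000 is an arbitrary placeholder to be tuned for your workload:

```java
import io.lettuce.core.ClientOptions;

// Bound the request queue and reject commands while disconnected instead of
// buffering them indefinitely.
ClientOptions options = ClientOptions.builder()
        .disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS)
        .requestQueueSize(1000)
        .build();
client.setOptions(options);
```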
**Symptoms:**

Performance degradation when using the Reactive API with a single connection (i.e. a non-pooled connection arrangement).

**Diagnosis:**

- Inspect the thread affinity of reactive signals

**Cause:**

Netty's threading model assigns a single thread to each connection, which makes I/O for a single `Channel` effectively single-threaded. Under a significant computation load, and without further thread switching, the whole pipeline runs on that single thread, which leads to contention.
**Action:**

You can configure signal multiplexing for the Reactive API through `ClientOptions` by enabling `publishOnScheduler(true)`. See Client Options for further reference. Alternatively, you can configure a `Scheduler` on each result stream through `publishOn(Scheduler)`, as sketched below. Note that the asynchronous API exhibits the same behavior, and you may want to use the `then…Async(…)`, `run…Async(…)`, `apply…Async(…)`, or `handleAsync(…)` methods along with an `Executor` object.
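Both variants as a minimal sketch; `client` and `connection` are assumed to exist, and the key `"key"` and the length computation are placeholders:

```java
import io.lettuce.core.ClientOptions;
import reactor.core.scheduler.Schedulers;

// Variant 1: emit all reactive signals on a dedicated scheduler instead of
// the Netty EventLoop thread.
client.setOptions(ClientOptions.builder()
        .publishOnScheduler(true)
        .build());

// Variant 2: switch an individual stream to another scheduler so that
// downstream computation does not run on the I/O thread.
connection.reactive()
        .get("key")
        .publishOn(Schedulers.parallel())
        .map(String::length) // placeholder for an expensive computation
        .subscribe();
```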
The Lettuce documentation has moved to https://redis.github.io/lettuce/overview/