From 299d8e78f5f47bad971352e4e88acd9c7dedbbee Mon Sep 17 00:00:00 2001 From: Igor Malinovskyi Date: Tue, 6 Aug 2024 16:30:20 +0200 Subject: [PATCH 01/12] Convert wiki to markdown docs --- docs/Advanced-usage.md | 2686 +++++++++++++++++ docs/Connecting-Redis.md | 2139 +++++++++++++ docs/Frequently-Asked-Questions.md | 171 ++ docs/Getting-Started.md | 94 + docs/High-Availability-and-Sharding.md | 671 ++++ docs/Integration-and-Extension.md | 280 ++ docs/New--Noteworthy.md | 172 ++ docs/Overview.md | 142 + ...g-with-dynamic-Redis-Command-Interfaces.md | 584 ++++ docs/index.md | 11 + 10 files changed, 6950 insertions(+) create mode 100644 docs/Advanced-usage.md create mode 100644 docs/Connecting-Redis.md create mode 100644 docs/Frequently-Asked-Questions.md create mode 100644 docs/Getting-Started.md create mode 100644 docs/High-Availability-and-Sharding.md create mode 100644 docs/Integration-and-Extension.md create mode 100644 docs/New--Noteworthy.md create mode 100644 docs/Overview.md create mode 100644 docs/Working-with-dynamic-Redis-Command-Interfaces.md create mode 100644 docs/index.md diff --git a/docs/Advanced-usage.md b/docs/Advanced-usage.md new file mode 100644 index 0000000000..561098de0b --- /dev/null +++ b/docs/Advanced-usage.md @@ -0,0 +1,2686 @@ +# Advanced usage + +## Configuring Client resources + +Client resources are configuration settings for the client related to +performance, concurrency, and events. A vast part of Client resources +consists of thread pools (`EventLoopGroup`s and a `EventExecutorGroup`) +which build the infrastructure for the connection workers. In general, +it is a good idea to reuse instances of `ClientResources` across +multiple clients. + +Client resources are stateful and need to be shut down if they are +supplied from outside the client. + +### Creating Client resources + +Client resources are required to be immutable. You can create instances +using two different patterns: + +**The `create()` factory method** + +By using the `create()` method on `DefaultClientResources` you create +`ClientResources` with default settings: + +``` java +ClientResources res = DefaultClientResources.create(); +``` + +This approach fits the most needs. + +**Resources builder** + +You can build instances of `DefaultClientResources` by using the +embedded builder. It is designed to configure the resources to your +needs. The builder accepts the configuration in a fluent fashion and +then creates the ClientResources at the end: + +``` java +ClientResources res = DefaultClientResources.builder() + .ioThreadPoolSize(4) + .computationThreadPoolSize(4) + .build() +``` + +### Using and reusing `ClientResources` + +A `RedisClient` and `RedisClusterClient` can be created without passing +`ClientResources` upon creation. The resources are exclusive to the +client and are managed itself by the client. When calling `shutdown()` +of the client instance `ClientResources` are shut down. + +``` java +RedisClient client = RedisClient.create(); +... +client.shutdown(); +``` + +If you require multiple instances of a client or you want to provide +existing thread infrastructure, you can configure a shared +`ClientResources` instance using the builder. The shared Client +resources can be passed upon client creation: + +``` java +ClientResources res = DefaultClientResources.create(); +RedisClient client = RedisClient.create(res); +RedisClusterClient clusterClient = RedisClusterClient.create(res, seedUris); +... 
+client.shutdown(); +clusterClient.shutdown(); +res.shutdown(); +``` + +Shared `ClientResources` are never shut down by the client. Same applies +for shared `EventLoopGroupProvider`s that are an abstraction to provide +`EventLoopGroup`s. + +#### Why `Runtime.getRuntime().availableProcessors()` \* 3? + +Netty requires different `EventLoopGroup`s for NIO (TCP) and for EPoll +(Unix Domain Socket) connections. One additional `EventExecutorGroup` is +used to perform computation tasks. `EventLoopGroup`s are started lazily +to allocate Threads on-demand. + +#### Shutdown + +Every client instance requires a call to `shutdown()` to clear used +resources. Clients with dedicated `ClientResources` (i.e. no +`ClientResources` passed within the constructor/`create`-method) will +shut down `ClientResources` on their own. + +Client instances with using shared `ClientResources` (i.e. +`ClientResources` passed using the constructor/`create`-method) won’t +shut down the `ClientResources` on their own. The `ClientResources` +instance needs to be shut down once it’s not used anymore. + +### Configuration settings + +The basic configuration options are listed in the table below: + +| Name | Method | Default | +|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------|------------------------| +| **I/O Thread Pool Size** | `ioThreadPoolSize` | `Number of processors` | +| The number of threads in the I/O thread pools. The number defaults to the number of available processors that the runtime returns (which, as a well-known fact, sometimes does not represent the actual number of processors). Every thread represents an internal event loop where all I/O tasks are run. The number does not reflect the actual number of I/O threads because the client requires different thread pools for Network (NIO) and Unix Domain Socket (EPoll) connections. The minimum I/O threads are `3`. A pool with fewer threads can cause undefined behavior. | | | +| **Computation Thread Pool Size** | `comput ationThreadPoolSize` | `Number of processors` | +| The number of threads in the computation thread pool. The number defaults to the number of available processors that the runtime returns (which, as a well-known fact, sometimes does not represent the actual number of processors). Every thread represents an internal event loop where all computation tasks are run. The minimum computation threads are `3`. A pool with fewer threads can cause undefined behavior. | | | + +### Advanced settings + +Values for the advanced options are listed in the table below and should +not be changed unless there is a truly good reason to do so. + + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Name | Method | Default |
|------|--------|---------|
| **Provider for `EventLoopGroup`** | `eventLoopGroupProvider` | none |
| For those who want to reuse existing netty infrastructure or want total control over the thread pools, the `EventLoopGroupProvider` API provides a way to do so. `EventLoopGroup`s are obtained and managed by an `EventLoopGroupProvider`. A provided `EventLoopGroupProvider` is not managed by the client and needs to be shut down once you no longer need the resources. | | |
| **Provided `EventExecutorGroup`** | `eventExecutorGroup` | none |
| Those who want to reuse existing netty infrastructure or want total control over the thread pools can provide an existing `EventExecutorGroup` to the client resources. A provided `EventExecutorGroup` is not managed by the client and needs to be shut down once you no longer need the resources. | | |
| **Event bus** | `eventBus` | `DefaultEventBus` |
| The event bus system is used to transport events from the client to subscribers. Events are about connection state changes, metrics, and more. Events are published using an RxJava subject and the default implementation drops events on backpressure. Learn more about the Reactive API. You can also publish your own events. If you wish to do so, make sure that your events implement the `Event` marker interface. | | |
| **Command latency collector options** | `commandLatencyCollectorOptions` | `DefaultCommandLatencyCollectorOptions` |
| The client can collect latency metrics while dispatching commands. The options allow configuring the percentiles, the level of metrics (per connection or server) and whether the metrics are cumulative or reset after obtaining them. Command latency collection is enabled by default and can be disabled by setting `commandLatencyPublisherOptions(…)` to `DefaultEventPublisherOptions.disabled()`. The latency collector requires LatencyUtils to be on your class path. | | |
| **Command latency collector** | `commandLatencyCollector` | `DefaultCommandLatencyCollector` |
| The client can collect latency metrics while dispatching commands. Command latency metrics are collected on connection or server level. Command latency collection is enabled by default and can be disabled by setting `commandLatencyCollectorOptions(…)` to `DefaultCommandLatencyCollectorOptions.disabled()`. | | |
| **Latency event publisher options** | `commandLatencyPublisherOptions` | `DefaultEventPublisherOptions` |
| Command latencies can be published using the event bus. Latency events are emitted by default every 10 minutes. Event publishing can be disabled by setting `commandLatencyPublisherOptions(…)` to `DefaultEventPublisherOptions.disabled()`. | | |
| **DNS Resolver** | `dnsResolver` | `DnsResolvers.JVM_DEFAULT` (or netty if present) |
| Since: 3.5, 4.2. Configures a DNS resolver to resolve hostnames to a `java.net.InetAddress`. Defaults to the JVM DNS resolution that uses blocking hostname resolution and caching of lookup results. Users of DNS-based Redis-HA setups (e.g. AWS ElastiCache) might want to configure a different DNS resolver. Lettuce comes with `DirContextDnsResolver` that uses Java's `DnsContextFactory` to resolve hostnames. `DirContextDnsResolver` allows using either the system DNS or custom DNS servers without caching of results, so each hostname lookup yields a DNS lookup. Since 4.4: Defaults to `DnsResolvers.UNRESOLVED` to use netty's `AddressResolver` that resolves DNS names on `Bootstrap.connect()` (requires netty 4.1). | | |
| **Reconnect Delay** | `reconnectDelay` | `Delay.exponential()` |
| Since: 4.2. Configures a reconnect delay used to delay reconnect attempts. Defaults to binary exponential delay with an upper boundary of `30 SECONDS`. See `Delay` for more delay implementations. | | |
| **Netty Customizer** | `nettyCustomizer` | none |
| Since: 4.4. Configures a netty customizer to enhance netty components. Allows customization of the `Bootstrap` after `Bootstrap` configuration by Lettuce and `Channel` customization after all Lettuce handlers are added to the `Channel`. The customizer allows custom SSL configuration (requires `RedisURI` in plain-text mode, otherwise Lettuce configures SSL), adding custom handlers or setting customized `Bootstrap` options. Misconfiguring the `Bootstrap` or `Channel` can cause connection failures or undesired behavior. | | |
| **Tracing** | `tracing` | disabled |
| Since: 5.1. Configures a `Tracing` instance to trace Redis calls. Lettuce wraps Brave data models to support tracing in a vendor-agnostic way if Brave is on the class path. A Brave tracing instance can be created using `BraveTracing.create(clientTracing)`, where `clientTracing` is a created or existing Brave tracing instance. | | |

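As an illustration of how these resources are configured, the following sketch (assuming Lettuce 4.2 or later; the chosen values are examples, not recommendations) sets a reconnect delay and a custom DNS resolver on shared client resources:

``` java
ClientResources res = DefaultClientResources.builder()
        .reconnectDelay(Delay.exponential())         // binary exponential backoff (the default)
        .dnsResolver(new DirContextDnsResolver())     // resolve hostnames without JVM-side caching
        .build();

RedisClient client = RedisClient.create(res);
```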
+ +## Client Options + +Client options allow controlling behavior for some specific features. + +Client options are immutable. Connections inherit the current options at +the moment the connection is created. Changes to options will not affect +existing connections. + +``` java +client.setOptions(ClientOptions.builder() + .autoReconnect(false) + .pingBeforeActivateConnection(true) + .build()); +``` + + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Name | Method | Default |
|------|--------|---------|
| **PING before activating connection** | `pingBeforeActivateConnection` | `true` |
| Since: 3.1, 4.0. Perform a lightweight `PING` connection handshake when establishing a Redis connection. If `true` (the default), every connect and reconnect will issue a `PING` command and await its response before the connection is activated and enabled for use. If the check fails, the connect/reconnect is treated as a failure. This option has no effect unless forced to use the RESP2 protocol version; RESP3/protocol discovery performs a `HELLO` handshake. Failed `PING`s on reconnect are handled as protocol errors and can suspend reconnection if `suspendReconnectOnProtocolFailure` is enabled. The `PING` handshake validates whether the other end of the connected socket is a service that behaves like a Redis server. | | |
| **Auto-Reconnect** | `autoReconnect` | `true` |
| Since: 3.1, 4.0. Controls auto-reconnect behavior on connections. As soon as a connection gets closed/reset without the intention to close it, the client will try to reconnect, activate the connection and re-issue any queued commands. This flag also has the effect that disconnected connections will refuse commands and cancel them with an exception. | | |
| **Cancel commands on reconnect failure** | `cancelCommandsOnReconnectFailure` | `false` |
| Since: 3.1, 4.0. This flag is deprecated and should not be used as it can lead to race conditions and protocol offsets. SSL is natively supported by Lettuce and no longer requires the use of SSL tunnels where protocol traffic can get out of sync. If this flag is `true`, any queued commands are canceled when a reconnect fails within the activation sequence. The reconnect itself has two phases: socket connection and protocol/connection activation. Connect timeouts, connection resets and failed host lookups do not affect the cancellation of commands. In contrast, if the protocol/connection activation fails due to SSL errors or a failed PING before connection activation, queued commands are canceled. | | |
| **Policy how to reclaim decode buffer memory** | `decodeBufferPolicy` | ratio-based at 75% |
| Since: 6.0. Policy to discard read bytes from the decoding aggregation buffer to reclaim memory. See `DecodeBufferPolicies` for available strategies. | | |
| **Suspend reconnect on protocol failure** | `suspendReconnectOnProtocolFailure` | `false` (was introduced in 3.1 with default `true`) |
| Since: 3.1, 4.0. If this flag is `true`, the reconnect will be suspended on protocol errors. The reconnect itself has two phases: socket connection and protocol/connection activation. Connect timeouts, connection resets and failed host lookups do not affect the cancellation of commands. In contrast, if the protocol/connection activation fails due to SSL errors or a failed PING before connection activation, queued commands are canceled. Reconnection can be activated again, but there is no public API to obtain the `ConnectionWatchdog` instance. | | |
| **Request queue size** | `requestQueueSize` | `2147483647` (`Integer#MAX_VALUE`) |
| Since: 3.4, 4.1. Controls the per-connection request queue size. A command invocation will lead to a `RedisException` if the queue size is exceeded. Setting `requestQueueSize` to a lower value leads to exceptions earlier during overload or while the connection is in a disconnected state. A higher value means hitting the boundary will take longer to occur, but more requests will potentially be queued and more heap space is used. | | |
| **Disconnected behavior** | `disconnectedBehavior` | `DEFAULT` |
| Since: 3.4, 4.1. A connection can behave in a disconnected state in various ways. The auto-reconnect feature in particular allows retriggering commands that have been queued while a connection is disconnected. The disconnected behavior setting allows fine-grained control over the behavior. The following settings are available: `DEFAULT`: Accept commands when auto-reconnect is enabled, reject commands when auto-reconnect is disabled. `ACCEPT_COMMANDS`: Accept commands in disconnected state. `REJECT_COMMANDS`: Reject commands in disconnected state. | | |
| **Protocol Version** | `protocolVersion` | Latest/Auto-discovery |
| Since: 6.0. Configuration of which protocol version (RESP2/RESP3) to use. Leaving this option unconfigured performs a protocol discovery to use the latest available protocol. | | |
| **Script Charset** | `scriptCharset` | UTF-8 |
| Since: 6.0. Charset to use for Lua scripts. | | |
| **Socket Options** | `socketOptions` | 10 seconds connection timeout, no keep-alive, no TCP noDelay |
| Since: 4.3. Options to configure low-level socket options for the connections kept to Redis servers. | | |
| **SSL Options** | `sslOptions` | (none), use JDK defaults |
| Since: 4.3. Configure SSL options regarding SSL providers (JDK/OpenSSL) and key store/trust store. | | |
| **Timeout Options** | `timeoutOptions` | Do not timeout commands. |
| Since: 5.1. Options to configure command timeouts applied to timeout commands after dispatching these (active connections, queued while disconnected, batch buffer). By default, the synchronous API times out commands using `RedisURI.getTimeout()`. | | |
| **Publish Reactive Signals on Scheduler** | `publishOnScheduler` | Use I/O thread. |
| Since: 5.1.4. Use a dedicated `Scheduler` to emit reactive data signals. Enabling this option can be useful for reactive sequences that require a significant amount of processing; with a single or only a few Redis connections, performance otherwise suffers from single-thread-like behavior. Enabling this option uses the `EventExecutorGroup` configured through `ClientResources` for data/completion signals. The used `Thread` is sticky across all signals for a single `Publisher` instance. | | |

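Putting a few of these options together, a sketch for Lettuce 6.x (the timeout values and the pinned protocol version below are purely illustrative) could look like this:

``` java
ClientOptions options = ClientOptions.builder()
        .socketOptions(SocketOptions.builder()
                .connectTimeout(Duration.ofSeconds(5))    // illustrative connect timeout
                .keepAlive(true)
                .build())
        .timeoutOptions(TimeoutOptions.enabled(Duration.ofSeconds(2)))   // per-command timeout
        .protocolVersion(ProtocolVersion.RESP2)           // pin the protocol instead of auto-discovery
        .build();

client.setOptions(options);
```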
+ +### Cluster-specific options + +Cluster client options extend the regular client options by some cluster +specifics. + +Cluster client options are immutable. Connections inherit the current +options at the moment the connection is created. Changes to options will +not affect existing connections. + +``` java +ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder() + .enablePeriodicRefresh(refreshPeriod(10, TimeUnit.MINUTES)) + .enableAllAdaptiveRefreshTriggers() + .build(); + +client.setOptions(ClusterClientOptions.builder() + .topologyRefreshOptions(topologyRefreshOptions) + .build()); +``` + + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Name | Method | Default |
|------|--------|---------|
| **Periodic cluster topology refresh** | `enablePeriodicRefresh` | `false` |
| Since: 3.1, 4.0. Enables or disables periodic cluster topology refresh. The refresh is handled in the background. Partitions, the view on the Redis Cluster topology, are valid for a whole `RedisClusterClient` instance, not a connection. All connections created by this client operate on the one cluster topology. The refresh job is executed regularly; the period between the runs can be set with `refreshPeriod`. The refresh job starts after either opening the first connection with the job enabled or by calling `reloadPartitions`. The job can be disabled without discarding the full client by setting new client options. | | |
| **Cluster topology refresh period** | `refreshPeriod` | `60 SECONDS` |
| Since: 3.1, 4.0. Set the period between the refresh job runs. The effective interval cannot be changed once the refresh job is active. Changes to the value will be ignored. | | |
| **Adaptive cluster topology refresh** | `enableAdaptiveRefreshTrigger` | (none) |
| Since: 4.2. Enables adaptive topology refresh triggers selectively. Adaptive refresh triggers initiate topology view updates based on events that happened during Redis Cluster operations. Adaptive triggers lead to an immediate topology refresh. These refreshes are rate-limited using a timeout since events can happen on a large scale. Adaptive refresh triggers are disabled by default. The following triggers can be enabled: `MOVED_REDIRECT`, `ASK_REDIRECT`, `PERSISTENT_RECONNECTS`, `UNKNOWN_NODE` (since 5.1), and `UNCOVERED_SLOT` (since 5.2) (see also reconnect attempts for the reconnect trigger). | | |
| **Adaptive refresh triggers timeout** | `adaptiveRefreshTriggersTimeout` | `30 SECONDS` |
| Since: 4.2. Set the timeout between the adaptive refresh job runs. Multiple triggers within the timeout will be ignored; only the first enabled trigger leads to a topology refresh. The effective period cannot be changed once the refresh job is active. Changes to the value will be ignored. | | |
| **Reconnect attempts (Adaptive topology refresh trigger)** | `refreshTriggersReconnectAttempts` | `5` |
| Since: 4.2. Set the threshold for the `PERSISTENT_RECONNECTS` refresh trigger. Topology updates based on persistent reconnects lead to a refresh only if the reconnect process tries at least the specified number of attempts. The first reconnect attempt starts with `1`. | | |
| **Dynamic topology refresh sources** | `dynamicRefreshSources` | `true` |
| Since: 4.2. Discover cluster nodes from the topology and use only the discovered nodes as the source for the cluster topology. Using dynamic refresh will query all discovered nodes for the cluster topology details. If set to `false`, only the initial seed nodes will be used as sources for topology discovery and the number of clients will be obtained only for the initial seed nodes. This can be useful when using Redis Cluster with many nodes. Note that enabling dynamic topology refresh sources uses node addresses reported by the Redis `CLUSTER NODES` output, which typically contains IP addresses. | | |
| **Close stale connections** | `closeStaleConnections` | `true` |
| Since: 3.3, 4.1. Stale connections are existing connections to nodes which are no longer part of the Redis Cluster. If this flag is set to `true`, then stale connections are closed upon topology refreshes. It's strongly advised to close stale connections as open connections will attempt to reconnect to nodes that are no longer available, and open connections require system resources. | | |
| **Limitation of cluster redirects** | `maxRedirects` | `5` |
| Since: 3.1, 4.0. When the assignment of a slot-hash is moved in a Redis Cluster and a client requests a key that is located on the moved slot-hash, the Cluster node responds with a `-MOVED` response. In this case, the client follows the redirection and queries the cluster node specified within the redirection. Under some circumstances, the redirection can be endless. To protect the client and also the Cluster, a limit of max redirects can be configured. Once the limit is reached, the `-MOVED` error is returned to the caller. This limit also applies to `-ASK` redirections in case a slot is set to `MIGRATING` state. | | |
| **Filter nodes from Topology** | `nodeFilter` | no filter |
| Since: 6.1.6. When providing a `nodeFilter`, `RedisClusterNode`s can be filtered from the topology view to remove unwanted nodes (e.g. failed replicas). Note that the filter is applied only after obtaining the topology, so it does not prevent connection attempts to such nodes during topology discovery. | | |
| **Validate cluster node membership** | `validateClusterNodeMembership` | `true` |
| Since: 3.3, 4.0. Validate the cluster node membership before allowing connections to a node. The current implementation performs redirects using `MOVED` and `ASK` and allows obtaining connections to the particular cluster nodes. The validation was introduced during the development of version 3.3 to prevent security breaches and to only allow connections to the known hosts of the `CLUSTER NODES` output. There are some scenarios where the strict validation is an obstruction: `MOVED`/`ASK` redirection while the cluster topology view is stale; connecting to cluster nodes using different IPs/hostnames (e.g. private/public IPs); connecting to non-cluster members to reconfigure those while using the `RedisClusterClient` connection. | | |

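As a sketch of how these cluster-specific options combine with the topology refresh options shown above (the redirect limit and the filter predicate are illustrative only; `nodeFilter` requires 6.1.6 or later):

``` java
ClusterClientOptions clusterOptions = ClusterClientOptions.builder()
        .topologyRefreshOptions(topologyRefreshOptions)
        .maxRedirects(3)                                                // illustrative redirect limit
        .nodeFilter(node -> !node.is(RedisClusterNode.NodeFlag.FAIL))   // drop failed nodes from the topology view
        .build();

clusterClient.setOptions(clusterOptions);
```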
+ +### Request queue size and cluster + +Clustered operations use multiple connections. The resulting +overall-queue limit is +`requestQueueSize * ((number of cluster nodes * 2) + 1)`. + +## SSL Connections + +Lettuce supports SSL connections since version 3.1 on Redis Standalone +connections and since version 4.2 on Redis Cluster. Redis has no native +SSL support, SSL is implemented usually by using +[stunnel](https://www.stunnel.org/index.html). + +An example stunnel configuration can look like: + + cert=/etc/ssl/cert.pem + key=/etc/ssl/key.pem + capath=/etc/ssl/cert.pem + cafile=/etc/ssl/cert.pem + delay=yes + pid=/etc/ssl/stunnel.pid + foreground = no + + [redis] + accept = 127.0.0.1:6443 + connect = 127.0.0.1:6479 + +Next step is connecting lettuce over SSL to Redis. + +``` java +RedisURI redisUri = RedisURI.Builder.redis("localhost") + .withSsl(true) + .withPassword("authentication") + .withDatabase(2) + .build(); + +RedisClient client = RedisClient.create(redisUri); +``` + +``` java +RedisURI redisUri = RedisURI.create("rediss://authentication@localhost/2"); +RedisClient client = RedisClient.create(redisUri); +``` + +``` java +RedisURI redisUri = RedisURI.Builder.redis("localhost") + .withSsl(true) + .withPassword("authentication") + .build(); + +RedisClusterClient client = RedisClusterClient.create(redisUri); +``` + +### Limitations + +Lettuce supports SSL only on Redis Standalone and Redis Cluster +connections and since 5.2, also for Master resolution using Redis +Sentinel or Redis Master/Replicas. + +### Connection Procedure and Reconnect + +When connecting using SSL, Lettuce performs an SSL handshake before you +can use the connection. Plain text connections do not perform a +handshake. Errors during the handshake throw +`RedisConnectionException`s. + +Reconnection behavior is also different to plain text connections. If an +SSL handshake fails on reconnect (because of peer/certification +verification or peer does not talk SSL) reconnection will be disabled +for the connection. You will also find an error log entry within your +logs. + +### Certificate Chains/Root Certificate/Self-Signed Certificates + +Lettuce uses Java defaults for the trust store that is usually `cacerts` +in your `jre/lib/security` directory and comes with customizable SSL +options via [client options](#client-options). If you need to add you +own root certificate, so you can configure `SslOptions`, import it +either to `cacerts` or you provide an own trust store and set the +necessary system properties: + +``` java +SslOptions sslOptions = SslOptions.builder() + .jdkSslProvider() + .truststore(new File("yourtruststore.jks"), "changeit") + .build(); + +ClientOptions clientOptions = ClientOptions.builder().sslOptions(sslOptions).build(); +``` + +``` java +System.setProperty("javax.net.ssl.trustStore", "yourtruststore.jks"); +System.setProperty("javax.net.ssl.trustStorePassword", "changeit"); +``` + +### Host/Peer Verification + +By default, Lettuce verifies the certificate against the validity and +the common name (Name validation not supported on Java 1.6, only +available on Java 1.7 and higher) of the Redis host you are connecting +to. This behavior can be turned off: + +``` java +RedisURI redisUri = ... +redisUri.setVerifyPeer(false); +``` + +or + +``` java +RedisURI redisUri = RedisURI.Builder.redis(host(), sslPort()) + .withSsl(true) + .withVerifyPeer(false) + .build(); +``` + +### StartTLS + +If you need to issue a StartTLS before you can use SSL, set the +`startTLS` property of `RedisURI` to `true`. 
StartTLS is disabled by +default. + +``` java +RedisURI redisUri = ... +redisUri.setStartTls(true); +``` + +or + +``` java +RedisURI redisUri = RedisURI.Builder.redis(host(), sslPort()) + .withSsl(true) + .withStartTls(true) + .build(); +``` + +## Native Transports + +Netty provides three platform-specific JNI transports: + +- epoll on Linux + +- io_uring on Linux (Incubator) + +- kqueue on MacOS/BSD + +Lettuce defaults to native transports if the appropriate library is +available within its runtime. Using a native transport adds features +specific to a particular platform, generate less garbage and generally +improve performance when compared to the NIO based transport. Native +transports are required to connect to Redis via [Unix Domain +Sockets](#unix-domain-sockets) and are suitable for TCP connections as +well. + +Native transports are available with: + +- Linux **epoll** x86_64 systems with a minimum netty version of + `4.0.26.Final`, requiring `netty-transport-native-epoll`, classifier + `linux-x86_64` + + ``` xml + + io.netty + netty-transport-native-epoll + ${netty-version} + linux-x86_64 + + ``` + +- Linux **io_uring** x86_64 systems with a minimum netty version of + `4.1.54.Final`, requiring `netty-incubator-transport-native-io_uring`, + classifier `linux-x86_64`. Note that this transport is still + experimental. + + ``` xml + + io.netty.incubator + netty-incubator-transport-native-io_uring + 0.0.1.Final + linux-x86_64 + + ``` + +- MacOS **kqueue** x86_64 systems with a minimum netty version of + `4.1.11.Final`, requiring `netty-transport-native-kqueue`, classifier + `osx-x86_64` + + ``` xml + + io.netty + netty-transport-native-kqueue + ${netty-version} + osx-x86_64 + + ``` + +You can disable native transport use through system properties. Set +`io.lettuce.core.epoll`, `io.lettuce.core.iouring` respective +`io.lettuce.core.kqueue` to `false` (default is `true`, if unset). + +### Limitations + +Native transport support does not work with the shaded version of +Lettuce because of two reasons: + +1. `netty-transport-native-epoll` and `netty-transport-native-kqueue` + are not packaged into the shaded jar. So adding the jar to the + classpath will resolve in different netty base classes (such as + `io.netty.channel.EventLoopGroup` instead of + `com.lambdaworks.io.netty.channel.EventLoopGroup`) + +2. Support for using epoll/kqueue with shaded netty requires netty 4.1 + and all parts of netty to be shaded. + +See also Netty [documentation on native +transports](http://netty.io/wiki/native-transports.html). + +## Unix Domain Sockets + +Lettuce supports since version 3.2 Unix Domain Sockets for local Redis +connections. + +``` java +RedisURI redisUri = RedisURI.Builder + .socket("/tmp/redis") + .withPassword("authentication") + .withDatabase(2) + .build(); + +RedisClient client = RedisClient.create(redisUri); +``` + +``` java +RedisURI redisUri = RedisURI.create("redis-socket:///tmp/redis"); +RedisClient client = RedisClient.create(redisUri); +``` + +Unix Domain Sockets are inter-process communication channels on POSIX +compliant systems. They allow exchanging data between processes on the +same host operating system. When using Redis, which is usually a network +service, Unix Domain Sockets are usable only if connecting locally to a +single instance. Redis Sentinel and Redis Cluster, maintain tables of +remote or local nodes and act therefore as a registry. Unix Domain +Sockets are not beneficial with Redis Sentinel and Redis Cluster. 
+ +Using `RedisClusterClient` with Unix Domain Sockets would connect to the +local node using a socket and open TCP connections to all the other +hosts. A good example is connecting locally to a standalone or a single +cluster node to gain performance. + +See [Native Transports](#native-transports) for more details and +limitations. + +## Streaming API + +Redis can contain a huge set of data. Collections can burst your memory, +when the amount of data is too massive for your heap. Lettuce can return +your collection data either as List/Set/Map or can push the data on +`StreamingChannel` interfaces. + +`StreamingChannel`s are similar to callback methods. Every method, which +can return bulk data (except transactions/multi and some config methods) +specifies beside a regular method with a collection return class also +method which accepts a `StreamingChannel`. Lettuce interacts with a +`StreamingChannel` as the data arrives so data can be processed while +the command is running and is not yet completed. + +There are 4 StreamingChannels accepting different data types: + +- [KeyStreamingChannel](http://redis.paluch.biz/docs/api/releases/latest/com/lambdaworks/redis/output/KeyStreamingChannel.html) + +- [ValueStreamingChannel](http://redis.paluch.biz/docs/api/releases/latest/com/lambdaworks/redis/output/ValueStreamingChannel.html) + +- [KeyValueStreamingChannel](http://redis.paluch.biz/docs/api/releases/latest/com/lambdaworks/redis/output/KeyValueStreamingChannel.html) + +- [ScoredValueStreamingChannel](http://redis.paluch.biz/docs/api/releases/latest/com/lambdaworks/redis/output/ScoredValueStreamingChannel.html) + +The result of the steaming methods is the count of keys/values/key-value +pairs as `long` value. + +> [!NOTE] +> Don’t issue blocking calls (includes synchronous API calls to Lettuce) +> from inside of callbacks such as the streaming API as this would block +> the EventLoop. If you need to fetch data from Redis from inside a +> `StreamingChannel` callback, please use the asynchronous API or use +> the reactive API directly. + +``` java +Long count = redis.hgetall(new KeyValueStreamingChannel() + { + @Override + public void onKeyValue(String key, String value) + { + ... + } + }, key); +``` + +Streaming happens real-time to the redis responses. The method call +(future) completes after the last call to the StreamingChannel. + +### Examples + +``` java +redis.lpush("key", "one") +redis.lpush("key", "two") +redis.lpush("key", "three") + +Long count = redis.lrange(new ValueStreamingChannel() + { + @Override + public void onValue(String value) + { + System.out.println("Value: " + value); + } + }, "key", 0, -1); + +System.out.println("Count: " + count); +``` + +will produce the following output: + + Value: one + Value: two + Value: three + Count: 3 + +## Events + +### Before 3.4/4.1 + +lettuce can notify its users of certain events: + +- Connected + +- Disconnected + +- Exceptions in the connection handler pipeline + +You can subscribe to these events using `RedisClient#addListener()` and +unsubscribe with `RedisClient.removeListener()`. Both methods accept a +`RedisConnectionStateListener`. + +`RedisConnectionStateListener` receives as connection the async +implementation of the connection. This means if you use a sync way (e. +g. 
`RedisConnection`) you will receive the `RedisAsyncConnectionImpl` +instance + +**Example** + +``` java +RedisClient client = new RedisClient(host, port); +client.addListener(new RedisConnectionStateListener() +{ + @Override + public void onRedisConnected(RedisChannelHandler connection) + { + + } + @Override + public void onRedisDisconnected(RedisChannelHandler connection) + { + + } + @Override + public void onRedisExceptionCaught(RedisChannelHandler connection, Throwable cause) + { + + } +}); +``` + +### Since 3.4/4.1 + +The client produces events during its operation and uses an event bus +for the transport. The `EventBus` can be configured and obtained from +the [client resources](#configuring-client-resources) and is used for +client- and custom events. + +Following events are sent by the client: + +- Connection events + +- Metrics events + +- Cluster topology events + +#### Subscribing to events + +The simple-most approach to subscribing to the client events is +obtaining the event bus from the client’s client resources. + +``` java +RedisClient client = RedisClient.create() +EventBus eventBus = client.getresources().eventBus(); + +eventBus.get().subscribe(e -> System.out.println(event)); + +... +client.shutdown(); +``` + +Calls to the `subscribe()` method will return a `Subscription`. If you +plan to unsubscribe from the event stream, you can do so by calling the +`Subscription.unsubscribe()` method. The event bus utilizes +[RxJava](http://reactivex.io) and the {reactive-api} to transport events +from the publisher to its subscribers. + +A thread of the computation thread pool (can be configured using [client +resources](#configuring-client-resources)) transports the events. + +#### Connection events + +When working with events, multiple events occur. These can be used to +monitor connections or react to these. Connection events transport the +local and the remote connection points. The regular order of connection +events is: + +1. Connected: The transport-layer connection is established (TCP or + Unix Domain Socket connection established). Event type: + `ConnectedEvent` + +2. Connection activated: The logical connection is activated and can be + used to dispatch Redis commands (SSL handshake complete, PING before + activating response received). Event type: + `ConnectionActivatedEvent` + +3. Disconnected: The transport-layer connection is closed/reset. That + event occurs on regular connection shutdowns and connection + interruptions (outage). Event type: `DisconnectedEvent` + +4. Connection deactivated: The logical connection is deactivated. The + internal processing state is reset and the `isOpen()` flag is set to + `false` That event occurs on regular connection shutdowns and + connection interruptions (outage). Event type: + `ConnectionDeactivatedEvent` + +5. Since 5.3: Reconnect failed: A reconnect attempt failed. Contains + the reconnect failure and and the retry counter. Event type: + `ReconnectFailedEvent` + +#### Metrics events + +Client command metrics is published using the event bus. The current +event carries command latency metrics. Latency metrics is segregated by +connection or server and command which means you can get detailed +statistics on every command. Connection distinction allows seeing how +particular connections perform. Server distinction how particular +servers perform. You can configure metrics collection using [client +resources](#configuring-client-resources). + +In detail, two command latencies are recorded: + +1. 
RTT from dispatching the command until the first command response is + processed (first response) + +2. RTT from dispatching the command until the full command response is + processed and at the moment the command is completed (completion) + +The latency metrics provide following statistics: + +- Number of commands + +- min latency + +- max latency + +- latency percentiles + +**First Response Latency** + +The first response latency measuring begins at the moment the command +sending begins (command flush on the netty event loop). That is not the +time at when at which the command was issued from the client API. The +latency time recording ends at the moment the client receives the first +command bytes and starts to process the command response. Both +conditions must be met to end the latency recording. The client could be +busy with processing the previous command while the first bytes are +already available to read. That scenario would be a good time to file an +[issue](https://github.com/mp911de/lettuce/issues) for improving the +client performance. The first response latency value is good to +determine the lag/network performance and can give a hint on the client +and server performance. + +**Completion Latency** + +The completion latency begins at the same time as the first response +latency but lasts until the time where the client is just about to call +the `complete()` method to signal command completion. That means all +command response bytes arrived and were decoded/processed, and the +response data structures are ready for consumption for the user of the +client. On completion callback duration (such as async or observable +callbacks) are not part of the completion latency. + +#### Cluster events + +When using Redis Cluster, you might want to know when the cluster +topology changes. As soon as the cluster client discovers the cluster +topology change, a `ClusterTopologyChangedEvent` event is published to +the event bus. The time at which the event is published is not +necessarily the time the topology change occurred. That is because the +client polls the topology from the cluster. + +The cluster topology changed event carries the topology view before and +after the change. + +Make sure, you enabled cluster topology refresh in the [Client +options](#cluster-specific-options). + +### Java Flight Recorder Events (since 6.1) + +Lettuce emits Connection and Cluster events as Java Flight Recorder +events. `EventBus` emits all events to `EventRecorder` and the actual +event bus. + +`EventRecorder` verifies whether your runtime provides the required JFR +classes (available as of JDK 8 update 262 or later) and if so, then it +creates Flight Recorder variants of the event and commits these to JFR. + +The following events are supported out of the box: + +**Redis Connection Events** + +- Connection Attempt + +- Connect, Disconnect, Connection Activated, Connection Deactivated + +- Reconnect Attempt and Reconnect Failed + +**Redis Cluster Events** + +- Topology Refresh initiated + +- Topology Changed + +- ASK and MOVED redirects + +**Redis Master/Replica Events** + +- Sentinel Topology Refresh initiated + +- Master/Replica Topology Changed + +Events come with a rich set of event attributes such as channelId, epId +(endpoint Id), Redis URI and many more. + +You can record data by starting your application with: + +``` shell +java -XX:StartFlightRecording:filename=recording.jfr,duration=10s … +``` + +You can disable JFR events use through system properties. 
Set +`io.lettuce.core.jfr` to `false`. + +## Observability + +The following section explains Lettuces metrics and tracing +capabilities. + +### Metrics + +Command latency metrics give insight into command execution and +latencies. Metrics are collected for every completed command. Lettuce +has two mechanisms to collect latency metrics: + +- [Built-in](#built-in-latency-tracking) (since version 3.4 using + HdrHistogram and LatencyUtils. Enabled by default if both libraries + are available on the classpath.) + +- [Micrometer](#micrometer) (since version 6.1) + +### Built-in latency tracking + +Each command is tracked with: + +- Execution count + +- Latency to first response (min, max, percentiles) + +- Latency to complete (min, max, percentiles) + +Command latencies are tracked on remote endpoint (distinction by host +and port or socket path) and command type level (`GET`, `SET`, …​). It is +possible to track command latencies on a per-connection level (see +`DefaultCommandLatencyCollectorOptions`). + +Command latencies are transported using Events on the `EventBus`. The +`EventBus` can be obtained from the [client +resources](#configuring-client-resources) of the client instance. Please +keep in mind that the `EventBus` is used for various event types. Filter +on the event type if you’re interested only in particular event types. + +``` java +RedisClient client = RedisClient.create(); +EventBus eventBus = client.getResources().eventBus(); + +Subscription subscription = eventBus.get() + .filter(redisEvent -> redisEvent instanceof CommandLatencyEvent) + .cast(CommandLatencyEvent.class) + .subscribe(e -> System.out.println(e.getLatencies())); +``` + +The `EventBus` uses Reactor Processors to publish events. This example +prints the received latencies to `stdout`. The interval and the +collection of command latency metrics can be configured in the +`ClientResources`. + +#### Prerequisites + +Lettuce requires the LatencyUtils dependency (at least 2.0) to provide +latency metrics. Make sure to include that dependency on your classpath. +Otherwise, you won’t be able using latency metrics. + +If using Maven, add the following dependency to your pom.xml: + +``` xml + + org.latencyutils + LatencyUtils + 2.0.3 + +``` + +#### Disabling command latency metrics + +To disable metrics collection, use own `ClientResources` with a disabled +`DefaultCommandLatencyCollectorOptions`: + +``` java +ClientResources res = DefaultClientResources + .builder() + .commandLatencyCollectorOptions( DefaultCommandLatencyCollectorOptions.disabled()) + .build(); + +RedisClient client = RedisClient.create(res); +``` + +#### CommandLatencyCollector Options + +The following settings are available to configure from +`DefaultCommandLatencyCollectorOptions`: + +| Name | Method | Default | +|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------|---------------------------------| +| **Disable metrics tracking** | `disable` | `false` | +| Disables tracking of command latency metrics. | | | +| **Latency time unit** | `targetUnit` | `MICROSECONDS` | +| The target unit for command latency values. 
All values in the `CommandLatencyEvent` and a `CommandMetrics` instance are `long` values scaled to the `targetUnit`. | | | +| **Latency percentiles** | `targetPercentiles` | `50.0, 90 .0, 95.0, 99.0, 99.9` | +| A `double`-array of percentiles for latency metrics. The `CommandMetrics` contains a map that holds the percentile value and the latency value according to the percentile. Note that percentiles here must be specified in the range between 0 and 100. | | | +| **Reset latencies after publish** | `reset LatenciesAfterEvent` | `true` | +| Allows controlling whether the latency metrics are reset to zero one they were published. Setting `reset LatenciesAfterEvent` allows accumulating metrics over a long period for long-term analytics. | | | +| **Local socket distinction** | `localDistinction` | `false` | +| Enables per connection metrics tracking instead of per host/port. If `true`, multiple connections to the same host/connection point will be recorded separately which allows to inspection of every connection individually. If `false`, multiple connections to the same host/connection point will be recorded together. This allows a consolidated view on one particular service. | | | + +#### EventPublisher Options + +The following settings are available to configure from +`DefaultEventPublisherOptions`: + +| Name | Method | Default | +|---------------------------------------------------|--------------------------|-----------| +| **Disable event publisher** | `disable` | `false` | +| Disables event publishing. | | | +| **Event publishing time unit** | `ev entEmitIntervalUnit` | `MINUTES` | +| The `TimeUnit` for the event publishing interval. | | | +| **Event publishing interval** | `eventEmitInterval` | `10` | +| The interval for the event publishing. | | | + +### Micrometer + +Commands are tracked by using two Micrometer `Timer`s: +`lettuce.command.firstresponse` and `lettuce.command.completion`. The +following tags are attached to each timer: + +- `command`: Name of the command (`GET`, `SET`, …​) + +- `local`: Local socket (`localhost/127.0.0.1:45243` or `ANY` when local + distinction is disabled, which is the default behavior) + +- `remote`: Remote socket (`localhost/127.0.0.1:6379`) + +Command latencies are reported using the provided `MeterRegistry`. + +``` java +MeterRegistry meterRegistry = …; +MicrometerOptions options = MicrometerOptions.create(); +ClientResources resources = ClientResources.builder().commandLatencyRecorder(new MicrometerCommandLatencyRecorder(meterRegistry, options)).build(); + +RedisClient client = RedisClient.create(resources); +``` + +#### Prerequisites + +Lettuce requires Micrometer (`micrometer-core`) to integrate with +Micrometer. 
+ +If using Maven, add the following dependency to your pom.xml: + +``` xml + + io.micrometer + micrometer-core + ${micrometer.version} + +``` + +#### Micrometer Options + +The following settings are available to configure from +`MicrometerOptions`: + +| Name | Method | Default | +|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------|------------------------------------------------------------------------------------| +| **Disable metrics tracking** | `disable` | `false` | +| Disables tracking of command latency metrics. | | | +| **Histogram** | `histogram` | `false` | +| Enable histogram buckets used to generate aggregable percentile approximations in monitoring systems that have query facilities to do so. | | | +| **Local socket distinction** | `localDistinction` | `false` | +| Enables per connection metrics tracking instead of per host/port. If `true`, multiple connections to the same host/connection point will be recorded separately which allows inspection of every connection individually. If `false`, multiple connections to the same host/connection point will be recorded together. This allows a consolidated view on one particular service. | | | +| **Maximum Latency** | `maxLatency` | `5 Minutes` | +| Sets the maximum value that this timer is expected to observe. Applies only if Histogram publishing is enabled. | | | +| **Minimum Latency** | `minLatency` | `1ms` | +| Sets the minimum value that this timer is expected to observe. Applies only if Histogram publishing is enabled. | | | +| **Additional Tags** | `tags` | `Tags.empty()` | +| Extra tags to add to the generated metrics. | | | +| **Latency percentiles** | `targetPercentiles` | `0.5, 0.9, 0.95, 0.99, 0.999 (corresp onding with 50.0, 90. 0, 95.0, 99.0, 99.9)` | +| A `double`-array of percentiles for latency metrics. Values must be supplied in the range of `0.0` (0th percentile) up to `1.0` (100th percentile). The `CommandMetrics` contains a map that holds the percentile value and the latency value according to the percentile. This applies only if Histogram publishing is enabled. | | | + +### Tracing + +Tracing gives insights about individual Redis commands sent to Redis to +trace their frequency, duration and to trace of which commands a +particular activity consists. Lettuce provides a tracing SPI to avoid +mandatory tracing library dependencies. Lettuce ships integrations with +[Micrometer Tracing](https://github.com/micrometer-metrics/tracing) and +[Brave](https://github.com/openzipkin/brave) which can be configured +through [client resources](#configuring-client-resources). + +#### Micrometer Tracing + +With Micrometer tracing enabled, Lettuce creates an observation for each +Redis command resulting in spans per Command and corresponding Meters if +configured in Micrometer’s `ObservationContext`. + +##### Prerequisites + +Lettuce requires the Micrometer Tracing dependency to provide Tracing +functionality. Make sure to include that dependency on your classpath. 
+ +If using Maven, add the following dependency to your pom.xml: + +``` xml + + io.micrometer + micrometer-tracing + +``` + +The following example shows how to configure tracing through +`ClientResources`: + +``` java +ObservationRegistry observationRegistry = …; + +MicrometerTracing tracing = new MicrometerTracing(observationRegistry, "Redis"); + +ClientResources resources = ClientResources.builder().tracing(tracing).build(); +``` + +#### Brave + +With Brave tracing enabled, Lettuce creates a span for each Redis +command. The following options can be configured: + +- `serviceName` (defaults to `redis`). + +- `Endpoint` customizer. This option can be used together with a custom + `SocketAddressResolver` to attach custom endpoint details. + +- `Span` customizer. Allows for customization of spans based on the + actual Redis `Command` object. + +- Inclusion/Exclusion of all command arguments in a span. By default, + all arguments are included. + +##### Prerequisites + +Lettuce requires the Brave dependency (at least 5.1) to provide Tracing +functionality. Make sure to include that dependency on your classpath. + +If using Maven, add the following dependency to your pom.xml: + +``` xml + + io.zipkin.brave + brave + +``` + +The following example shows how to configure tracing through +`ClientResources`: + +``` java +brave.Tracing clientTracing = …; + +BraveTracing tracing = BraveTracing.builder().tracing(clientTracing) + .excludeCommandArgsFromSpanTags() + .serviceName("custom-service-name-goes-here") + .spanCustomizer((command, span) -> span.tag("cmd", command.getType().name())) + .build(); + +ClientResources resources = ClientResources.builder().tracing(tracing).build(); +``` + +Lettuce ships with a Tracing SPI in `io.lettuce.core.tracing` that +allows custom tracer implementations. + +## Pipelining and command flushing + +Redis is a TCP server using the client-server model and what is called a +Request/Response protocol. This means that usually a request is +accomplished with the following steps: + +- The client sends a query to the server and reads from the socket, + usually in a blocking way, for the server response. + +- The server processes the command and sends the response back to the + client. + +A request/response server can be implemented so that it is able to +process new requests even if the client did not already read the old +responses. This way it is possible to send multiple commands to the +server without waiting for the replies at all, and finally read the +replies in a single step. + +Using the synchronous API, in general, the program flow is blocked until +the response is accomplished. The underlying connection is busy with +sending the request and receiving its response. Blocking, in this case, +applies only from a current Thread perspective, not from a global +perspective. + +To understand why using a synchronous API does not block on a global +level we need to understand what this means. Lettuce is a non-blocking +and asynchronous client. It provides a synchronous API to achieve a +blocking behavior on a per-Thread basis to create await (synchronize) a +command response. Blocking does not affect other Threads per se. Lettuce +is designed to operate in a pipelining way. Multiple threads can share +one connection. While one Thread may process one command, the other +Thread can send a new command. As soon as the first request returns, the +first Thread’s program flow continues, while the second request is +processed by Redis and comes back at a certain point in time. 
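To make the per-thread nature of this blocking concrete, here is a small sketch (the client, key and thread names are illustrative) in which two threads share one connection through the synchronous API; each call blocks only its own thread:

``` java
StatefulRedisConnection<String, String> sharedConnection = client.connect();
RedisCommands<String, String> sync = sharedConnection.sync();

Runnable worker = () -> {
    // blocks only the calling thread while awaiting its own response;
    // both threads pipeline their commands over the same connection
    String value = sync.get("some-key");
    System.out.println(Thread.currentThread().getName() + ": " + value);
};

new Thread(worker, "worker-1").start();
new Thread(worker, "worker-2").start();
```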
Lettuce is built on top of netty to decouple reading from writing and to
provide thread-safe connections. The result is that reading and writing
can be handled by different threads and that commands are written and
read independently of each other, but in sequence. You can find more
details about [message ordering](#message-ordering) to learn
about command ordering rules in single- and multi-threaded arrangements.
The transport and command execution layer does not block the processing
until a command is written, processed and its response is read. Lettuce
sends commands at the moment they are invoked.

A good example is the [async API](Connecting-Redis.md#asynchronous-api). Every
invocation on the [async API](Connecting-Redis.md#asynchronous-api) returns a
`Future` (response handle) after the command is written to the netty
pipeline. A write to the pipeline does not mean the command is written
to the underlying transport. Multiple commands can be written without
awaiting the response. Invocations to the API (sync, async and, starting
with `4.0`, also the reactive API) can be performed by multiple threads.

Sharing a connection between threads is possible but keep in mind:

**The longer commands need for processing, the longer other invokers wait
for their results**

You should not use transactional commands (`MULTI`) on a shared
connection. If you use Redis-blocking commands (e.g. `BLPOP`), all
invocations of the shared connection will be blocked until the blocking
command returns, which impacts the performance of other threads. Blocking
commands can be a reason to use multiple connections.

### Command flushing

> [!NOTE]
> Command flushing is an advanced topic and in most cases (i.e. unless
> your use-case is a single-threaded mass import application) you won’t
> need it as Lettuce uses pipelining by default.

The normal operation mode of Lettuce is to flush every command, which
means that every command is written to the transport after it was
issued. Any regular user desires this behavior. You can control command
flushing since version `3.3`.

Why would you want to do this? A flush is an [expensive system
call](https://github.com/netty/netty/issues/1759) and impacts
performance. Batching, i.e. disabling auto-flushing, can be used under
certain conditions and is recommended if:

- You perform multiple calls to Redis and you’re not depending
  immediately on the result of the call

- You’re bulk-importing

Controlling the flush behavior is only available on the async API. The
sync API emulates blocking calls, and as soon as you invoke a command,
you can no longer interact with the connection until the blocking call
ends.

The `AutoFlushCommands` state is set per connection and is therefore
visible to all threads using a shared connection. If you want to avoid
this effect, use dedicated connections. The `AutoFlushCommands` state
cannot be set on pooled connections by the Lettuce connection pooling.

> [!WARNING]
> Do not use `setAutoFlushCommands(…)` when sharing a connection across
> threads, at least not without proper synchronization. According to the
> many questions and (invalid) bug reports, using
> `setAutoFlushCommands(…)` in a multi-threaded scenario causes a lot of
> complexity overhead and is very likely to cause issues on your side.
> `setAutoFlushCommands(…)` can only be used reliably on single-threaded
> connection usage in scenarios like bulk-loading.
+ +``` java +StatefulRedisConnection connection = client.connect(); +RedisAsyncCommands commands = connection.async(); + +// disable auto-flushing +commands.setAutoFlushCommands(false); + +// perform a series of independent calls +List> futures = Lists.newArrayList(); +for (int i = 0; i < iterations; i++) { + futures.add(commands.set("key-" + i, "value-" + i)); + futures.add(commands.expire("key-" + i, 3600)); +} + +// write all commands to the transport layer +commands.flushCommands(); + +// synchronization example: Wait until all futures complete +boolean result = LettuceFutures.awaitAll(5, TimeUnit.SECONDS, + futures.toArray(new RedisFuture[futures.size()])); + +// later +connection.close(); +``` + +#### Performance impact + +Commands invoked in the default flush-after-write mode perform in an +order of about 100Kops/sec (async/multithreaded execution). Grouping +multiple commands in a batch (size depends on your environment, but +batches between 50 and 1000 work nice during performance tests) can +increase the throughput up to a factor of 5x. + +Pipelining within the Redis docs: + +## Connection Pooling + +Lettuce connections are designed to be thread-safe so one connection can +be shared amongst multiple threads and Lettuce connections +[auto-reconnection](#client-options) by default. While connection +pooling is not necessary in most cases it can be helpful in certain use +cases. Lettuce provides generic connection pooling support. + +### Is connection pooling necessary? + +Lettuce is thread-safe by design which is sufficient for most cases. All +Redis user operations are executed single-threaded. Using multiple +connections does not impact the performance of an application in a +positive way. The use of blocking operations usually goes hand in hand +with worker threads that get their dedicated connection. The use of +Redis Transactions is the typical use case for dynamic connection +pooling as the number of threads requiring a dedicated connection tends +to be dynamic. That said, the requirement for dynamic connection pooling +is limited. Connection pooling always comes with a cost of complexity +and maintenance. + +### Execution Models + +Lettuce supports two execution models for pooling: + +- Synchronous/Blocking via Apache Commons Pool 2 + +- Asynchronous/Non-Blocking via a Lettuce-specific pool implementation + (since version 5.1) + +### Synchronous Connection Pooling + +Using imperative programming models, synchronous connection pooling is +the right choice as it carries out all operations on the thread that is +used to execute the code. + +#### Prerequisites + +Lettuce requires Apache’s +[common-pool2](https://commons.apache.org/proper/commons-pool/) +dependency (at least 2.2) to provide connection pooling. Make sure to +include that dependency on your classpath. Otherwise, you won’t be able +using connection pooling. + +If using Maven, add the following dependency to your `pom.xml`: + +``` xml + + org.apache.commons + commons-pool2 + 2.4.3 + +``` + +#### Connection pool support + +Lettuce provides generic connection pool support. It requires a +connection `Supplier` that is used to create connections of any +supported type (Redis Standalone, Pub/Sub, Sentinel, Master/Replica, +Redis Cluster). `ConnectionPoolSupport` will create a +`GenericObjectPool` or `SoftReferenceObjectPool`, depending on your +needs. The pool can allocate either wrapped or direct connections. + +- Wrapped instances will return the connection back to the pool when + called `StatefulConnection.close()`. 
+ +- Regular connections need to be returned to the pool with + `GenericObjectPool.returnObject(…)`. + +**Basic usage** + +``` java +RedisClient client = RedisClient.create(RedisURI.create(host, port)); + +GenericObjectPool> pool = ConnectionPoolSupport + .createGenericObjectPool(() -> client.connect(), new GenericObjectPoolConfig()); + +// executing work +try (StatefulRedisConnection connection = pool.borrowObject()) { + + RedisCommands commands = connection.sync(); + commands.multi(); + commands.set("key", "value"); + commands.set("key2", "value2"); + commands.exec(); +} + +// terminating +pool.close(); +client.shutdown(); +``` + +**Cluster usage** + +``` java +RedisClusterClient clusterClient = RedisClusterClient.create(RedisURI.create(host, port)); + +GenericObjectPool> pool = ConnectionPoolSupport + .createGenericObjectPool(() -> clusterClient.connect(), new GenericObjectPoolConfig()); + +// execute work +try (StatefulRedisClusterConnection connection = pool.borrowObject()) { + connection.sync().set("key", "value"); + connection.sync().blpop(10, "list"); +} + +// terminating +pool.close(); +clusterClient.shutdown(); +``` + +### Asynchronous Connection Pooling + +Asynchronous/non-blocking programming models require a non-blocking API +to obtain Redis connections. A blocking connection pool can easily lead +to a state that blocks the event loop and prevents your application from +progress in processing. + +Lettuce comes with an asynchronous, non-blocking pool implementation to +be used with Lettuces asynchronous connection methods. It does not +require additional dependencies. + +#### Asynchronous Connection pool support + +Lettuce provides asynchronous connection pool support. It requires a +connection `Supplier` that is used to asynchronously connect to any +supported type (Redis Standalone, Pub/Sub, Sentinel, Master/Replica, +Redis Cluster). `AsyncConnectionPoolSupport` will create a +`BoundedAsyncPool`. The pool can allocate either wrapped or direct +connections. + +- Wrapped instances will return the connection back to the pool when + called `StatefulConnection.closeAsync()`. + +- Regular connections need to be returned to the pool with + `AsyncPool.release(…)`. 
+ +**Basic usage** + +``` java +RedisClient client = RedisClient.create(); + +CompletionStage>> poolFuture = AsyncConnectionPoolSupport.createBoundedObjectPoolAsync( + () -> client.connectAsync(StringCodec.UTF8, RedisURI.create(host, port)), BoundedPoolConfig.create()); + +// await poolFuture initialization to avoid NoSuchElementException: Pool exhausted when starting your application + +// execute work +CompletableFuture transactionResult = pool.acquire().thenCompose(connection -> { + + RedisAsyncCommands async = connection.async(); + + async.multi(); + async.set("key", "value"); + async.set("key2", "value2"); + return async.exec().whenComplete((s, throwable) -> pool.release(connection)); +}); + +// terminating +pool.closeAsync(); + +// after pool completion +client.shutdownAsync(); +``` + +**Cluster usage** + +``` java +RedisClusterClient clusterClient = RedisClusterClient.create(RedisURI.create(host, port)); + +CompletionStage>> poolFuture = AsyncConnectionPoolSupport.createBoundedObjectPoolAsync( + () -> clusterClient.connectAsync(StringCodec.UTF8), BoundedPoolConfig.create()); + +// execute work +CompletableFuture setResult = pool.acquire().thenCompose(connection -> { + + RedisAdvancedClusterAsyncCommands async = connection.async(); + + async.set("key", "value"); + return async.set("key2", "value2").whenComplete((s, throwable) -> pool.release(connection)); +}); + +// terminating +pool.closeAsync(); + +// after pool completion +clusterClient.shutdownAsync(); +``` + +## Custom commands + +Lettuce covers nearly all Redis commands. Redis development is an +ongoing process and the Redis Module system is intended to introduce new +commands which are not part of the Redis Core. This requirement +introduces the need to invoke custom commands or use custom outputs. +Custom commands can be dispatched on the one hand using Lua and the +`eval()` command, on the other side Lettuce 4.x allows you to trigger +own commands. That API is used by Lettuce itself to dispatch commands +and requires some knowledge of how commands are constructed and +dispatched within Lettuce. + +Lettuce provides two levels of command dispatching: + +1. Using the synchronous, asynchronous or reactive API wrappers which + invoke commands according to their nature + +2. Using the bare connection to influence the command nature and + synchronization (advanced) + +**Example using `dispatch()` on the synchronous API** + +``` java +RedisCodec codec = StringCodec.UTF8; +RedisCommands commands = ... + +String response = redis.dispatch(CommandType.SET, new StatusOutput<>(codec), + new CommandArgs<>(codec) + .addKey(key) + .addValue(value)); +``` + +**Example using `dispatch()` on the asynchronous API** + +``` java +RedisCodec codec = StringCodec.UTF8; +RedisAsyncCommands commands = ... + +RedisFuture response = redis.dispatch(CommandType.SET, new StatusOutput<>(codec), + new CommandArgs<>(codec) + .addKey(key) + .addValue(value)); +``` + +**Example using `dispatch()` on the reactive API** + +``` java +RedisCodec codec = StringCodec.UTF8; +RedisReactiveCommands commands = ... 
+ +Observable response = redis.dispatch(CommandType.SET, new StatusOutput<>(codec), + new CommandArgs<>(codec) + .addKey(key) + .addValue(value)); +``` + +**Example using a `RedisFuture` command wrapper** + +``` java +StatefulRedisConnection connection = redis.getStatefulConnection(); + +RedisCommand command = new Command<>(CommandType.PING, + new StatusOutput<>(StringCodec.UTF8)); + +AsyncCommand async = new AsyncCommand<>(command); +connection.dispatch(async); + +// async instanceof CompletableFuture == true +``` + +### Mechanics of Lettuce commands + +Lettuce uses the command pattern to implement to execute commands. Every +time a command is invoked, Lettuce creates a command object (`Command` +or types implementing `RedisCommand`). Commands can carry arguments +(`CommandArgs`) and an output (subclasses of `CommandOutput`). Both are +optional. The two mandatory properties are the command type (see +`CommandType` or a type implementing `ProtocolKeyword`) and a +`RedisCodec`. If you dispatch commands by yourself, do not reuse command +instances to dispatch commands more than once. Commands that were +executed once have the completed flag set and cannot be reused. + +#### Arguments + +`CommandArgs` is a container for command arguments that follow the +command keyword (`CommandType`). A `PING` or `QUIT` command do not +require commands whereas the `GET` or `SET` commands require arguments +in the form of keys and values. + +**The `PING` command** + +``` java +RedisCommand command = new Command<>(CommandType.PING, + new StatusOutput<>(StringCodec.UTF8)); +``` + +**The `SET` command** + +``` java +StringCodec codec = StringCodec.UTF8; +RedisCommand command = new Command<>(CommandType.SET, + new StatusOutput<>(codec), new CommandArgs<>(codec) + .addKey("key") + .addValue("value")); +``` + +`CommandArgs` allow to add one or more: + +- key and arrays of keys + +- value and arrays of values + +- `String`, `long` (the Redis integer), `double` + +- byte array + +- `CommandType`, `CommandKeyword` and generic `ProtocolKeyword` + +The sequence of args and keywords is not validated by Lettuce beyond the +supported data types, meaning Redis will report errors if the command +syntax is not correct. + +#### Outputs + +Commands producing an output are required to consume the output. Lettuce +supports type-safe conversion of the response into the appropriate +result types. The output handlers derive from the `CommandOutput` base +class. Lettuce provides a wide range of output types (see the +`com.lambdaworks.redis.output` package for details). Command outputs are +mostly used to return the result as the whole object. The response is +available as soon as the whole command output is processed. There are +cases, where you might want to stream the response instead of allocating +a significant amount of memory and return the whole response as one. +These types are called streaming outputs. Following implementations ship +with Lettuce: + +- `KeyStreamingOutput` + +- `KeyValueScanStreamingOutput` + +- `KeyValueStreamingOutput` + +- `ScoredValueStreamingOutput` + +- `ValueScanStreamingOutput` + +- `ValueStreamingOutput` + +Those outputs take a streaming channel (see `ValueStreamingChannel`) and +invoke the callback method (e.g. `onValue(V value)`) for every data +element. + +Implementing an own output is, in general, a good idea when you want to +support a different data type, or you want to work with different types +than the basic collection, map, String, and primitive types. 
You might +get an impression of the custom types idea by taking a look on +`GeoWithinListOutput`, which takes a bunch of strings and nested lists +to construct a list of `GeoWithin` instances. + +Please note that using an output that does not fit the command output +can jam the response processing and lead to not usable connections. Use +either `ArrayOutput` or `NestedMultiOutput` when in doubt, so you +receive a list of objects (nested lists). + +**Output for the `PING` command** + +``` java +Command command = new Command<>(CommandType.PING, + new StatusOutput<>(StringCodec.UTF8)); +``` + +**Output for the `HGETALL` command** + +``` java +StringCodec codec = StringCodec.UTF8; +Command> command = new Command<>(CommandType.HGETALL, + new MapOutput<>(codec), + new CommandArgs<>(codec).addKey(key)); +``` + +**Output for the `HKEYS` command** + +``` java +StringCodec codec = StringCodec.UTF8; +Command> command = new Command<>(CommandType.HKEYS, + new KeyListOutput<>(codec), + new CommandArgs<>(codec).addKey(key)); +``` + +### Synchronous, asynchronous and reactive + +Great, that you made it up to here. You might want to know now, how to +synchronize the command completion, work with `Future`s or how about the +reactive API. The simple way is using the `dispatch(…)` method of the +according wrapper. If this is not sufficient, then continue on reading. + +The `dispatch()` method on a stateful Redis connection is not +opinionated at all how you are using Lettuce, whether it is synchronous +or reactive. The only thing this method does is dispatching the command. +The response handler handles decoding the command and completing the +command once it’s done. The asynchronous command processing is the only +operating mode of Lettuce. + +The `RedisCommand` interface provides methods to `complete()`, +`cancel()` and `completeExceptionally()` the command. The `complete()` +methods are called by the response handler as soon as the command is +completed. Redis commands can be wrapped and augmented by that way. +Wrapping is used when using transactions (`MULTI`) or Redis Cluster. + +You are free to implement your command type or use one of the provided +commands: + +- Command (default implementation) + +- AsyncCommand (the `CompleteableFuture` wrapper for `RedisCommand`) + +- CommandWrapper (generic wrapper) + +- TransactionalCommand (wraps `RedisCommand`s when `MULTI` is active) + +#### Fire & Forget + +Fire&Forget is the simple-most way to dispatch commands. You just +trigger it and then you do not care what happens, whether the command +completes or not, and you don’t have access to the command output: + +``` java +StatefulRedisConnection connection = redis.getStatefulConnection(); + +connection.dispatch(CommandType.PING, VoidOutput.create()); +``` + +> [!NOTE] +> `VoidOutput.create()` swallows also Redis error responses. If you want +> to just avoid response decoding, create a `VoidCodec` instance using +> its constructor to retain error response decoding. + +#### Asynchronous + +The asynchronous API works in general with the `AsyncCommand` wrapper +that extends `CompleteableFuture`. `AsyncCommand` can be synchronized by +`await()` or `get()` which corresponds with the asynchronous pull style. +By using the methods from the `CompletionStage` interface (such as +`handle()` or `thenAccept()`) the response handler will trigger the +functions ("listeners") on command completion. Lear more about +asynchronous usage in the [Asynchronous API](Connecting-Redis.md#asynchronous-api) topic. 
+ +``` java +StatefulRedisConnection connection = redis.getStatefulConnection(); + +RedisCommand command = new Command<>(CommandType.PING, + new StatusOutput<>(StringCodec.UTF8)); + +AsyncCommand async = new AsyncCommand<>(command); +connection.dispatch(async); + +// async instanceof CompletableFuture == true +``` + +#### Synchronous + +The synchronous API of Lettuce uses future synchronization to provide a +synchronous view. + +#### Reactive + +Reactive commands are dispatched at the moment of subscription (see +[Reactive API](Connecting-Redis.md#reactive-api) for more details on reactive APIs). In the +context of Lettuce this means, you need to start before calling the +`dispatch()` method. The reactive API uses internally an +`ObservableCommand`, but that is internal stuff. If you want to dispatch +commands the reactive way, you’ll need to wrap commands (or better: +command supplier to be able to retry commands) with the +`ReactiveCommandDispatcher`. The dispatcher implements the `OnSubscribe` +API to create an `Observable`, handles command dispatching at the +time of subscription and can dissolve collection types to particular +elements. An instance of `ReactiveCommandDispatcher` allows creating +multiple `Observable`s as long as you use a `Supplier`. +Commands that were executed once have the completed flag set and cannot +be reused. + +``` java +StatefulRedisConnection connection = redis.getStatefulConnection(); + +RedisCommand command = new Command<>(CommandType.PING, + new StatusOutput<>(StringCodec.UTF8)); +ReactiveCommandDispatcher dispatcher = new ReactiveCommandDispatcher<>(command, + connection, false); + +Observable observable = Observable.create(dispatcher); +String result = observable.toBlocking().first(); + +result == "PONG" +``` + +## Graal Native Image + +This section explains how to use Lettuce with Graal Native Image +compilation. + +### Why Create a Native Image? + +The GraalVM +[`native-image`](http://www.graalvm.org/docs/reference-manual/aot-compilation/) +tool enables ahead-of-time (AOT) compilation of Java applications into +native executables or shared libraries. While traditional Java code is +just-in-time (JIT) compiled at run time, AOT compilation has two main +advantages: + +1. First, it improves the start-up time since the code is already + pre-compiled into efficient machine code. + +2. Second, it reduces the memory footprint of Java applications since + it eliminates the need to include infrastructure to load and + optimize code at run time. + +There are additional advantages such as more predictable performance and +less total CPU usage. + +### Building Native Images + +Native images assume a closed world principle in which all code needs to +be known at the time the native image is built. Graal’s SubstrateVM +analyzes class files during native image build-time to determine what +bytecode needs to be translated into a native image. While this task can +be achieved to a good extent by analyzing static bytecode, it’s harder +for dynamic parts of the code such as reflection. When using reflective +access or Java proxies, the native image build process requires a little +bit of help so it can include parts that are required during runtime. + +Lettuce ships with configuration files that specifically describe which +classes are used by Lettuce during runtime and which Java proxies get +created. 
+ 
+As of Lettuce 5.3.2, the following configuration files are 
+available: 
+ 
+- `META-INF/native-image/io.lettuce/lettuce-core/native-image.properties` 
+ 
+- `META-INF/native-image/io.lettuce/lettuce-core/proxy-config.json` 
+ 
+- `META-INF/native-image/io.lettuce/lettuce-core/reflect-config.json` 
+ 
+Those cover Lettuce operations for `RedisClient` and 
+`RedisClusterClient`. 
+ 
+Depending on your configuration, you might need additional configuration 
+for Netty, HdrHistogram (metrics collection), Reactive Libraries, and 
+dynamic Redis Command interfaces. 
+ 
+### HdrHistogram/Command Latency Metrics 
+ 
+Lettuce uses HdrHistogram and LatencyUtils to accumulate metrics. You 
+can use your application without them. If you want to use Command 
+Latency Metrics, please add the following lines to your own 
+`reflect-config.json` file: 
+ 
+``` json 
+  { 
+    "name": "org.HdrHistogram.Histogram" 
+  }, 
+  { 
+    "name": "org.LatencyUtils.PauseDetector" 
+  } 
+``` 
+ 
+### Dynamic Command Interfaces 
+ 
+You can use Dynamic Command Interfaces when compiling your code to a 
+GraalVM Native Image. GraalVM requires two pieces of information, as 
+Lettuce inspects command interfaces using reflection and creates a Java 
+proxy for each of them: 
+ 
+1. Add the command interface class name to your `reflect-config.json`, 
+   ideally using `allDeclaredMethods: true`. 
+ 
+2. Add the command interface class name to your `proxy-config.json`. 
+ 
+
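+For illustration, `com.example.MyCommands` in the snippets below is a 
+hypothetical dynamic command interface; under that assumption it could 
+look like the following (see the chapter on dynamic Redis command 
+interfaces for details): 
+ 
+``` java 
+package com.example; 
+ 
+import io.lettuce.core.dynamic.Commands; 
+ 
+// one method per Redis command; Lettuce creates the implementation as a Java proxy 
+public interface MyCommands extends Commands { 
+ 
+    String set(String key, String value); 
+ 
+    String get(String key); 
+} 
+``` 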
+ +**`reflect-config.json`** + +
+ 
+``` json 
+[ 
+  { 
+    "name": "com.example.MyCommands", 
+    "allDeclaredMethods": true 
+  } 
+] 
+``` 
+
+ +**`proxy-config.json`** + +
+ +``` json +[ + ["com.example.MyCommands"] +] +``` + +#### Reactive Libraries + +If you decide to use a specific reactive library with dynamic command +interfaces, please add the following lines to your `reflect-config.json` +file, depending on the presence of Rx Java 1-3: + +``` json + { + "name": "rx.Completable" + }, + { + "name": "io.reactivex.Flowable" + }, + { + "name": "io.reactivex.rxjava3.core.Flowable" + } +``` + +### Limitations + +For now, native images must be compiled with +`--report-unsupported-elements-at-runtime` to ignore missing Method +Handles and annotation synthetization failures. + +#### Netty Config + +To properly start up the netty stack, the following reflection +configuration is required for netty and the JDK in +`reflect-config.json`: + +``` json + { + "name":"io.netty.util.internal.shaded.org.jctools.queues.BaseMpscLinkedArrayQueueColdProducerFields", + "fields":[{"name":"producerLimit","allowUnsafeAccess" : true}] + }, + { + "name":"io.netty.util.internal.shaded.org.jctools.queues.BaseMpscLinkedArrayQueueConsumerFields", + "fields":[{"name":"consumerIndex","allowUnsafeAccess" : true}] + }, + { + "name":"io.netty.util.internal.shaded.org.jctools.queues.BaseMpscLinkedArrayQueueProducerFields", + "fields":[{"name":"producerIndex", "allowUnsafeAccess" : true}] + }, + { + "name":"io.netty.util.internal.shaded.org.jctools.queues.MpscArrayQueueConsumerIndexField", + "fields":[{"name":"consumerIndex", "allowUnsafeAccess" : true}] + }, + { + "name":"io.netty.util.internal.shaded.org.jctools.queues.MpscArrayQueueProducerIndexField", + "fields":[{"name":"producerIndex", "allowUnsafeAccess" : true}] + }, + { + "name":"io.netty.util.internal.shaded.org.jctools.queues.MpscArrayQueueProducerLimitField", + "fields":[{"name":"producerLimit","allowUnsafeAccess" : true}] + }, + { + "name":"java.nio.Buffer", + "fields":[{"name":"address", "allowUnsafeAccess":true}] + }, + { + "name":"java.nio.DirectByteBuffer", + "fields":[{"name":"cleaner", "allowUnsafeAccess":true}], + "methods":[{"name":"","parameterTypes":["long","int"] }] + }, + { + "name":"io.netty.buffer.AbstractReferenceCountedByteBuf", + "fields":[{"name":"refCnt", "allowUnsafeAccess":true}] + }, + { + "name":"io.netty.buffer.AbstractByteBufAllocator", + "allPublicMethods": true, + "allDeclaredFields":true, + "allDeclaredMethods":true, + "allDeclaredConstructors":true + }, + { + "name":"io.netty.buffer.PooledByteBufAllocator", + "allPublicMethods": true, + "allDeclaredFields":true, + "allDeclaredMethods":true, + "allDeclaredConstructors":true + }, + { + "name":"io.netty.channel.ChannelDuplexHandler", + "allPublicMethods": true, + "allDeclaredConstructors":true + }, + { + "name":"io.netty.channel.ChannelHandlerAdapter", + "allPublicMethods": true, + "allDeclaredConstructors":true + }, + { + "name": "io.netty.channel.ChannelInboundHandlerAdapter", + "allPublicMethods": true, + "allDeclaredConstructors":true + }, + { + "name": "io.netty.channel.ChannelInitializer", + "allPublicMethods": true, + "allDeclaredConstructors":true + }, + { + "name": "io.netty.channel.ChannelOutboundHandlerAdapter", + "allPublicMethods": true, + "allDeclaredConstructors":true + }, + { + "name": "io.netty.channel.DefaultChannelPipeline$HeadContext", + "allPublicMethods": true, + "allDeclaredConstructors":true + }, + { + "name": "io.netty.channel.DefaultChannelPipeline$TailContext", + "allPublicMethods": true, + "allDeclaredConstructors":true + }, + { + "name": "io.netty.channel.socket.nio.NioSocketChannel", + "allPublicMethods": true, + 
"allDeclaredConstructors":true + }, + { + "name": "io.netty.handler.codec.MessageToByteEncoder", + "allPublicMethods": true, + "allDeclaredConstructors":true + }, + { + "name":"io.netty.util.ReferenceCountUtil", + "allPublicMethods": true, + "allDeclaredConstructors":true + } +``` + +#### Functionality + +We don’t have found a way yet to invoke default interface methods on +proxies without `MethodHandle`. Hence the `NodeSelection` API +(`masters()`, `all()` and others on `RedisAdvancedClusterCommands` and +`RedisAdvancedClusterAsyncCommands`) do not work. + +## Command execution reliability + +Lettuce is a thread-safe and scalable Redis client that allows multiple +independent connections to Redis. + +### General + +Lettuce provides two levels of consistency; these are the rules for +Redis command sends: + +Depending on the chosen consistency level: + +- **at-most-once execution**, i. e. no guaranteed execution + +- **at-least-once execution**, i. e. guaranteed execution (with [some + exceptions](#exceptions-to-at-least-once)) + +Always: + +- command ordering in the order of invocations + +### What does *at-most-once* mean? + +When it comes to describing the semantics of an execution mechanism, +there are three basic categories: + +- **at-most-once** execution means that for each command handed to the + mechanism, that command is execution zero or one time; in more casual + terms it means that commands may be lost. + +- **at-least-once** execution means that for each command handed to the + mechanism potentially multiple attempts are made at execution it, such + that at least one succeeds; again, in more casual terms this means + that commands may be duplicated but not lost. + +- **exactly-once** execution means that for each command handed to the + mechanism exactly one execution is made; the command can neither be + lost nor duplicated. + +The first one is the cheapest - the highest performance, least +implementation overhead - because it can be done without tracking +whether the command was sent or got lost within the transport mechanism. +The second one requires retries to counter transport losses, which means +keeping the state at the sending end and having an acknowledgment +mechanism at the receiving end. The third is most expensive—and has +consequently worst performance—because also to the second it requires a +state to be kept at the receiving end to filter out duplicate +executions. + +### Why No Guaranteed Delivery? + +At the core of the problem lies the question what exactly this guarantee +shall mean: + +1. The command is sent out on the network? + +2. The command is received by the other host? + +3. The command is processed by Redis? + +4. The command response is sent by the other host? + +5. The command response is received by the network? + +6. The command response is processed successfully? + +Each one of these have different challenges and costs, and it is obvious +that there are conditions under which any command sending library would +be unable to comply. Think for example about how a network partition +would affect point three, or even what it would mean to decide upon the +“successfully” part of point six. + +The only meaningful way for a client to know whether an interaction was +successful is by receiving a business-level acknowledgment command, +which is not something Lettuce could make up on its own. + +Lettuce allows two levels of consistency; each one has its costs and +benefits, and therefore it does not try to lie and emulate a leaky +abstraction. 
+ 
+### Message Ordering 
+ 
+More specifically, the rule is that sent commands are not executed 
+out of order. 
+ 
+The following illustrates the guarantee: 
+ 
+- Thread `T1` sends commands `C1`, `C2`, `C3` to Redis 
+ 
+- Thread `T2` sends commands `C4`, `C5`, `C6` to Redis 
+ 
+This means that: 
+ 
+- If `C1` is executed, it must be executed before `C2` and `C3`. 
+ 
+- If `C2` is executed, it must be executed before `C3`. 
+ 
+- If `C4` is executed, it must be executed before `C5` and `C6`. 
+ 
+- If `C5` is executed, it must be executed before `C6`. 
+ 
+- Redis executes commands from `T1` interleaved with commands from `T2`. 
+ 
+- If there is no guaranteed delivery, any of the commands may be 
+  dropped, i.e. not arrive at Redis. 
+ 
+### Failures and *at-least-once* execution 
+ 
+Lettuce’s *at-least-once* execution is scoped to the lifecycle of a 
+logical connection. Redis commands are not persisted to be executed 
+after a JVM or client restart. All Redis command state is held in 
+memory. A retry mechanism re-executes commands that are not successfully 
+completed if a network failure occurs. In more casual terms, when Redis 
+is available again, the retry mechanism fires all queued commands. 
+Commands that are issued while the failure persists are buffered. 
+ 
+*at-least-once* execution ensures a higher consistency level than 
+*at-most-once* but comes with some caveats: 
+ 
+- Commands can be executed more than once 
+ 
+- Higher usage of resources since commands are buffered and sent again 
+  after reconnect 
+ 
+#### Exceptions to *at-least-once* 
+ 
+Lettuce does not lose commands while sending them. A command execution 
+can, however, fail for the same reasons as a normal method call can on 
+the JVM: 
+ 
+- `StackOverflowError` 
+ 
+- `OutOfMemoryError` 
+ 
+- other `Error`s 
+ 
+Also, executions can fail in specific ways: 
+ 
+- The command runs into a timeout 
+ 
+- The command cannot be encoded 
+ 
+- The command cannot be decoded, because: 
+ 
+  - The output is not compatible with the command output 
+ 
+  - Exceptions occur while command decoding/processing. This may happen 
+    when a `StreamingChannel` results in an error, or when a consumer of 
+    Pub/Sub events fails during listener notification. 
+ 
+While the first is clearly a matter of configuration, the second 
+deserves some thought: The command execution does not get feedback if 
+there was a timeout. This is in general not distinguishable from a lost 
+message. By using the Sync API, commands that exceeded their timeout are 
+canceled. This behavior cannot be changed. When using the Async API, 
+users can decide how to proceed with the command, i.e. whether the 
+command should be canceled. 
+ 
+Commands which run into `Exception`s while encoding or decoding reach a 
+non-recoverable state. Commands that cannot be *encoded* are **not** 
+executed but get canceled. Commands that cannot be *decoded* were 
+already executed; only the result is not available. These errors are 
+mostly caused by a wrong implementation. The result of a command which 
+cannot be *decoded* is that the command gets canceled, and the 
+causing `Exception` is available in the result. The command is cleared 
+from the response queue, and the connection stays usable. 
+ 
+In general, when `Errors` occur while operating on a connection, you 
+should close the connection and use a new one. Connections that 
+experienced such severe failures get into an unrecoverable state, and no 
+further response processing is possible. 
+ 
+#### Executing commands more than once 
+ 
+In terms of consistency, Redis commands can be grouped into two 
+categories: 
+ 
+- Idempotent commands 
+ 
+- Non-idempotent commands 
+ 
+Idempotent commands are commands that lead to the same state if they are 
+executed more than once. Read commands are a good example of 
+idempotency since they do not change the state of data. Another set of 
+idempotent commands are commands that write a whole data structure/entry 
+at once such as `SET`, `DEL` or `CLIENT SETNAME`. Those commands change 
+the data to the desired state. Subsequent executions of the same command 
+leave the data in the same state. 
+ 
+Non-idempotent commands change the state with every execution. This 
+means that if you execute a command twice, the resulting state is 
+different from the previous one. Examples of non-idempotent Redis 
+commands are `LPUSH`, `PUBLISH` or `INCR`. 
+ 
+Note: When using master-replica replication, different rules apply to 
+*at-least-once* consistency. Replication between Redis nodes works 
+asynchronously. A command can be processed successfully from Lettuce’s 
+client perspective, but the result is not necessarily replicated to the 
+replica yet. If a failover occurs at that moment, a replica takes over, 
+and the not yet replicated data is lost. Replication behavior is 
+Redis-specific. Further documentation about failover and consistency 
+from Redis perspective is available within the Redis docs: 
+ 
+ 
+### Switching between *at-least-once* and *at-most-once* operations 
+ 
+Lettuce’s consistency levels are bound to retries on reconnects and the 
+connection state. By default, Lettuce operates in the *at-least-once* 
+mode. Auto-reconnect is enabled and as soon as the connection is 
+re-established, queued commands are re-sent for execution. While a 
+connection failure persists, issued commands are buffered. 
+ 
+To change to the *at-most-once* consistency level, disable auto-reconnect 
+mode. Connections can no longer be reconnected and thus no retries are 
+issued. Commands that did not complete successfully are canceled. New 
+commands are rejected. 
+ 
+### Clustered operations 
+ 
+In clustered operations, Lettuce sticks to the same rules as for 
+standalone operations, with one exception: 
+ 
+Commands whose execution is rejected by a master node with a `MOVED` 
+response are re-executed using the appropriate connection. 
+`MOVED` errors occur on master nodes when a slot’s responsibility is 
+moved from one cluster node to another node. Afterwards *at-least-once* 
+and *at-most-once* rules apply. 
+ 
+When the cluster topology changes (generally speaking, when cluster slots 
+or the master/replica state are reconfigured), the following rules apply: 
+ 
+- **at-most-once** If the connection is disconnected, queued commands 
+  are canceled and buffered commands, which were not sent, are executed 
+  by using the new cluster view 
+ 
+- **at-least-once** If the connection is disconnected, queued and 
+  buffered commands, which were not sent, are executed by using the new 
+  cluster view 
+ 
+- If the connection is not disconnected, queued commands are finished 
+  and buffered commands, which were not sent, are executed by using the 
+  new cluster view 
+ 
diff --git a/docs/Connecting-Redis.md b/docs/Connecting-Redis.md 
new file mode 100644 
index 0000000000..6ba3c69d9f 
--- /dev/null 
+++ b/docs/Connecting-Redis.md 
@@ -0,0 +1,2139 @@ 
+# Connecting Redis 
+ 
+Connections to a Redis Standalone, Sentinel, or Cluster require a 
+specification of the connection details. The unified form is `RedisURI`. 
+You can provide the database, password and timeouts within the +`RedisURI`. You have following possibilities to create a `RedisURI`: + +1. Use an URI: + + ``` java + RedisURI.create("redis://localhost/"); + ``` + +2. Use the Builder + + ``` java + RedisURI.Builder.redis("localhost", 6379).auth("password").database(1).build(); + ``` + +3. Set directly the values in `RedisURI` + + ``` java + new RedisURI("localhost", 6379, 60, TimeUnit.SECONDS); + ``` + +## URI syntax + +**Redis Standalone** + + redis :// [[username :] password@] host [:port][/database] + [?[timeout=timeout[d|h|m|s|ms|us|ns]] [&clientName=clientName] + [&libraryName=libraryName] [&libraryVersion=libraryVersion] ] + +**Redis Standalone (SSL)** + + rediss :// [[username :] password@] host [: port][/database] + [?[timeout=timeout[d|h|m|s|ms|us|ns]] [&clientName=clientName] + [&libraryName=libraryName] [&libraryVersion=libraryVersion] ] + +**Redis Standalone (Unix Domain Sockets)** + + redis-socket :// [[username :] password@]path + [?[timeout=timeout[d|h|m|s|ms|us|ns]] [&database=database] + [&clientName=clientName] [&libraryName=libraryName] + [&libraryVersion=libraryVersion] ] + +**Redis Sentinel** + + redis-sentinel :// [[username :] password@] host1[:port1] [, host2[:port2]] [, hostN[:portN]] [/database] + [?[timeout=timeout[d|h|m|s|ms|us|ns]] [&sentinelMasterId=sentinelMasterId] + [&clientName=clientName] [&libraryName=libraryName] + [&libraryVersion=libraryVersion] ] + +**Schemes** + +- `redis` Redis Standalone + +- `rediss` Redis Standalone SSL + +- `redis-socket` Redis Standalone Unix Domain Socket + +- `redis-sentinel` Redis Sentinel + +**Timeout units** + +- `d` Days + +- `h` Hours + +- `m` Minutes + +- `s` Seconds + +- `ms` Milliseconds + +- `us` Microseconds + +- `ns` Nanoseconds + +Hint: The database parameter within the query part has higher precedence +than the database in the path. + +RedisURI supports Redis Standalone, Redis Sentinel and Redis Cluster +with plain, SSL, TLS and unix domain socket connections. + +Hint: The database parameter within the query part has higher precedence +than the database in the path. RedisURI supports Redis Standalone, Redis +Sentinel and Redis Cluster with plain, SSL, TLS and unix domain socket +connections. + +## Authentication + +Redis URIs may contain authentication details that effectively lead to +usernames with passwords, password-only, or no authentication. +Connections are authenticated by using the information provided through +`RedisCredentials`. Credentials are obtained at connection time from +`RedisCredentialsProvider`. When configuring username/password on the +URI statically, then a `StaticCredentialsProvider` holds the configured +information. + +**Notes** + +- When using Redis Sentinel, the password from the URI applies to the + data nodes only. Sentinel authentication must be configured for each + sentinel node. + +- Usernames are supported as of Redis 6. + +- Library name and library version are automatically set on Redis 7.2 or + greater. + +## Basic Usage + +``` java +RedisClient client = RedisClient.create("redis://localhost"); + +StatefulRedisConnection connection = client.connect(); + +RedisCommands commands = connection.sync(); + +String value = commands.get("foo"); + +... + +connection.close(); + +client.shutdown(); +``` + +- Create the `RedisClient` instance and provide a Redis URI pointing to + localhost, Port 6379 (default port). + +- Open a Redis Standalone connection. 
The endpoint is used from the + initialized `RedisClient` + +- Obtain the command API for synchronous execution. Lettuce supports + asynchronous and reactive execution models, too. + +- Issue a `GET` command to get the key `foo`. + +- Close the connection when you’re done. This happens usually at the + very end of your application. Connections are designed to be + long-lived. + +- Shut down the client instance to free threads and resources. This + happens usually at the very end of your application. + +Each Redis command is implemented by one or more methods with names +identical to the lowercase Redis command name. Complex commands with +multiple modifiers that change the result type include the CamelCased +modifier as part of the command name, e.g. `zrangebyscore` and +`zrangebyscoreWithScores`. + +Redis connections are designed to be long-lived and thread-safe, and if +the connection is lost will reconnect until `close()` is called. Pending +commands that have not timed out will be (re)sent after successful +reconnection. + +All connections inherit a default timeout from their RedisClient and +and will throw a `RedisException` when non-blocking commands fail to +return a result before the timeout expires. The timeout defaults to 60 +seconds and may be changed in the RedisClient or for each connection. +Synchronous methods will throw a `RedisCommandExecutionException` in +case Redis responds with an error. Asynchronous connections do not throw +exceptions when Redis responds with an error. + +### RedisURI + +The RedisURI contains the host/port and can carry +authentication/database details. On a successful connect you get +authenticated, and the database is selected afterward. This applies +also after re-establishing a connection after a connection loss. + +A Redis URI can also be created from an URI string. Supported formats +are: + +- `redis://[password@]host[:port][/databaseNumber]` Plaintext Redis + connection + +- `rediss://[password@]host[:port][/databaseNumber]` [SSL + Connections](Advanced-usage.md#ssl-connections) Redis connection + +- `redis-sentinel://[password@]host[:port][,host2[:port2]][/databaseNumber]#sentinelMasterId` + for using Redis Sentinel + +- `redis-socket:///path/to/socket` [Unix Domain + Sockets](Advanced-usage.md#unix-domain-sockets) connection to Redis + +### Exceptions + +In the case of an exception/error response from Redis, you’ll receive a +`RedisException` containing +the error message. `RedisException` is a `RuntimeException`. + +### Examples + +``` java +RedisClient client = RedisClient.create(RedisURI.create("localhost", 6379)); +client.setDefaultTimeout(20, TimeUnit.SECONDS); + +// … + +client.shutdown(); +``` + +``` java +RedisURI redisUri = RedisURI.Builder.redis("localhost") + .withPassword("authentication") + .withDatabase(2) + .build(); +RedisClient client = RedisClient.create(redisUri); + +// … + +client.shutdown(); +``` + +``` java +RedisURI redisUri = RedisURI.Builder.redis("localhost") + .withSsl(true) + .withPassword("authentication") + .withDatabase(2) + .build(); +RedisClient client = RedisClient.create(redisUri); + +// … + +client.shutdown(); +``` + +``` java +RedisURI redisUri = RedisURI.create("redis://authentication@localhost/2"); +RedisClient client = RedisClient.create(redisUri); + +// … + +client.shutdown(); +``` + +## Asynchronous API + +This guide will give you an impression how and when to use the +asynchronous API provided by Lettuce 4.x. 
+ +### Motivation + +Asynchronous methodologies allow you to utilize better system resources, +instead of wasting threads waiting for network or disk I/O. Threads can +be fully utilized to perform other work instead. Lettuce facilitates +asynchronicity from building the client on top of +[netty](http://netty.io) that is a multithreaded, event-driven I/O +framework. All communication is handled asynchronously. Once the +foundation is able to processes commands concurrently, it is convenient +to take advantage from the asynchronicity. It is way harder to turn a +blocking and synchronous working software into a concurrently processing +system. + +#### Understanding Asynchronicity + +Asynchronicity permits other processing to continue before the +transmission has finished and the response of the transmission is +processed. This means, in the context of Lettuce and especially Redis, +that multiple commands can be issued serially without the need of +waiting to finish the preceding command. This mode of operation is also +known as [Pipelining](http://redis.io/topics/pipelining). The following +example should give you an impression of the mode of operation: + +- Given client *A* and client *B* + +- Client *A* triggers command `SET A=B` + +- Client *B* triggers at the same time of Client *A* command `SET C=D` + +- Redis receives command from Client *A* + +- Redis receives command from Client *B* + +- Redis processes `SET A=B` and responds `OK` to Client *A* + +- Client *A* receives the response and stores the response in the + response handle + +- Redis processes `SET C=D` and responds `OK` to Client *B* + +- Client *B* receives the response and stores the response in the + response handle + +Both clients from the example above can be either two threads or +connections within an application or two physically separated clients. + +Clients can operate concurrently to each other by either being separate +processes, threads, event-loops, actors, fibers, etc. Redis processes +incoming commands serially and operates mostly single-threaded. This +means, commands are processed in the order they are received with some +characteristic that we’ll cover later. + +Let’s take the simplified example and enhance it by some program flow +details: + +- Given client *A* + +- Client *A* triggers command `SET A=B` + +- Client *A* uses the asynchronous API and can perform other processing + +- Redis receives command from Client *A* + +- Redis processes `SET A=B` and responds `OK` to Client *A* + +- Client *A* receives the response and stores the response in the + response handle + +- Client *A* can access now the response to its command without waiting + (non-blocking) + +The Client *A* takes advantage from not waiting on the result of the +command so it can process computational work or issue another Redis +command. The client can work with the command result as soon as the +response is available. + +#### Impact of asynchronicity to the synchronous API + +While this guide helps you to understand the asynchronous API it is +worthwhile to learn the impact on the synchronous API. The general +approach of the synchronous API is no different than the asynchronous +API. In both cases, the same facilities are used to invoke and transport +commands to the Redis server. The only difference is a blocking behavior +of the caller that is using the synchronous API. 
Blocking happens on +command level and affects only the command completion part, meaning +multiple clients using the synchronous API can invoke commands on the +same connection and at the same time without blocking each other. A call +on the synchronous API is unblocked at the moment a command response was +processed. + +- Given client *A* and client *B* + +- Client *A* triggers command `SET A=B` on the synchronous API and waits + for the result + +- Client *B* triggers at the same time of Client *A* command `SET C=D` + on the synchronous API and waits for the result + +- Redis receives command from Client *A* + +- Redis receives command from Client *B* + +- Redis processes `SET A=B` and responds `OK` to Client *A* + +- Client *A* receives the response and unblocks the program flow of + Client *A* + +- Redis processes `SET C=D` and responds `OK` to Client *B* + +- Client *B* receives the response and unblocks the program flow of + Client *B* + +However, there are some cases you should not share a connection among +threads to avoid side-effects. The cases are: + +- Disabling flush-after-command to improve performance + +- The use of blocking operations like `BLPOP`. Blocking operations are + queued on Redis until they can be executed. While one connection is + blocked, other connections can issue commands to Redis. Once a command + unblocks the blocking command (that said an `LPUSH` or `RPUSH` hits + the list), the blocked connection is unblocked and can proceed after + that. + +- Transactions + +- Using multiple databases + +#### Result handles + +Every command invocation on the asynchronous API creates a +`RedisFuture` that can be canceled, awaited and subscribed +(listener). A `CompleteableFuture` or `RedisFuture` is a pointer +to the result that is initially unknown since the computation of its +value is yet incomplete. A `RedisFuture` provides operations for +synchronization and chaining. + +``` java +CompletableFuture future = new CompletableFuture<>(); + +System.out.println("Current state: " + future.isDone()); + +future.complete("my value"); + +System.out.println("Current state: " + future.isDone()); +System.out.println("Got value: " + future.get()); +``` + +The example prints the following lines: + + Current state: false + Current state: true + Got value: my value + +Attaching a listener to a future allows chaining. Promises can be used +synonymous to futures, but not every future is a promise. A promise +guarantees a callback/notification and thus it has come to its name. + +A simple listener that gets called once the future completes: + +``` java +final CompletableFuture future = new CompletableFuture<>(); + +future.thenRun(new Runnable() { + @Override + public void run() { + try { + System.out.println("Got value: " + future.get()); + } catch (Exception e) { + e.printStackTrace(); + } + + } +}); + +System.out.println("Current state: " + future.isDone()); +future.complete("my value"); +System.out.println("Current state: " + future.isDone()); +``` + +The value processing moves from the caller into a listener that is then +called by whoever completes the future. The example prints the following +lines: + + Current state: false + Got value: my value + Current state: true + +The code from above requires exception handling since calls to the +`get()` method can lead to exceptions. Exceptions raised during the +computation of the `Future` are transported within an +`ExecutionException`. Another exception that may be thrown is the +`InterruptedException`. 
This is because calls to `get()` are blocking +calls and the blocked thread can be interrupted at any time. Just think +about a system shutdown. + +The `CompletionStage` type allows since Java 8 a much more +sophisticated handling of futures. A `CompletionStage` can consume, +transform and build a chain of value processing. The code from above can +be rewritten in Java 8 in the following style: + +``` java +CompletableFuture future = new CompletableFuture<>(); + +future.thenAccept(new Consumer() { + @Override + public void accept(String value) { + System.out.println("Got value: " + value); + } +}); + +System.out.println("Current state: " + future.isDone()); +future.complete("my value"); +System.out.println("Current state: " + future.isDone()); +``` + +The example prints the following lines: + + Current state: false + Got value: my value + Current state: true + +You can find the full reference for the `CompletionStage` type in the +[Java 8 API +documentation](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletionStage.html). + +### Creating futures using Lettuce + +Lettuce futures can be used for initial and chaining operations. When +using Lettuce futures, you will notice the non-blocking behavior. This +is because all I/O and command processing are handled asynchronously +using the netty EventLoop. The Lettuce `RedisFuture` extends a +`CompletionStage` so all methods of the base type are available. + +Lettuce exposes its futures on the Standalone, Sentinel, +Publish/Subscribe and Cluster APIs. + +Connecting to Redis is insanely simple: + +``` java +RedisClient client = RedisClient.create("redis://localhost"); +RedisAsyncCommands commands = client.connect().async(); +``` + +In the next step, obtaining a value from a key requires the `GET` +operation: + +``` java +RedisFuture future = commands.get("key"); +``` + +### Consuming futures + +The first thing you want to do when working with futures is to consume +them. Consuming a futures means obtaining the value. Here is an example +that blocks the calling thread and prints the value: + +``` java +RedisFuture future = commands.get("key"); +String value = future.get(); +System.out.println(value); +``` + +Invocations to the `get()` method (pull-style) block the calling thread +at least until the value is computed but in the worst case indefinitely. +Using timeouts is always a good idea to not exhaust your threads. + +``` java +try { + RedisFuture future = commands.get("key"); + String value = future.get(1, TimeUnit.MINUTES); + System.out.println(value); +} catch (Exception e) { + e.printStackTrace(); +} +``` + +The example will wait at most 1 minute for the future to complete. If +the timeout exceeds, a `TimeoutException` is thrown to signal the +timeout. + +Futures can also be consumed in a push style, meaning when the +`RedisFuture` is completed, a follow-up action is triggered: + +``` java +RedisFuture future = commands.get("key"); + +future.thenAccept(new Consumer() { + @Override + public void accept(String value) { + System.out.println(value); + } +}); +``` + +Alternatively, written in Java 8 lambdas: + +``` java +RedisFuture future = commands.get("key"); + +future.thenAccept(System.out::println); +``` + +Lettuce futures are completed on the netty EventLoop. Consuming and +chaining futures on the default thread is always a good idea except for +one case: Blocking/long-running operations. As a rule of thumb, never +block the event loop. 
If you need to chain futures using blocking calls, +use the `thenAcceptAsync()`/`thenRunAsync()` methods to fork the +processing to another thread. The `…​async()` methods need a threading +infrastructure for execution, by default the `ForkJoinPool.commonPool()` +is used. The `ForkJoinPool` is statically constructed and does not grow +with increasing load. Using default `Executor`s is almost always the +better idea. + +``` java +Executor sharedExecutor = ... +RedisFuture future = commands.get("key"); + +future.thenAcceptAsync(new Consumer() { + @Override + public void accept(String value) { + System.out.println(value); + } +}, sharedExecutor); +``` + +### Synchronizing futures + +A key point when using futures is the synchronization. Futures are +usually used to: + +1. Trigger multiple invocations without the urge to wait for the + predecessors (Batching) + +2. Invoking a command without awaiting the result at all (Fire&Forget) + +3. Invoking a command and perform other computing in the meantime + (Decoupling) + +4. Adding concurrency to certain computational efforts (Concurrency) + +There are several ways how to wait or get notified in case a future +completes. Certain synchronization techniques apply to some motivations +why you want to use futures. + +#### Blocking synchronization + +Blocking synchronization comes handy if you perform batching/add +concurrency to certain parts of your system. An example to batching can +be setting/retrieving multiple values and awaiting the results before a +certain point within processing. + +``` java +List> futures = new ArrayList>(); + +for (int i = 0; i < 10; i++) { + futures.add(commands.set("key-" + i, "value-" + i)); +} + +LettuceFutures.awaitAll(1, TimeUnit.MINUTES, futures.toArray(new RedisFuture[futures.size()])); +``` + +The code from above does not wait until a certain command completes +before it issues another one. The synchronization is done after all +commands are issued. The example code can easily be turned into a +Fire&Forget pattern by omitting the call to `LettuceFutures.awaitAll()`. + +A single future execution can be also awaited, meaning an opt-in to wait +for a certain time but without raising an exception: + +``` java +RedisFuture future = commands.get("key"); + +if(!future.await(1, TimeUnit.MINUTES)) { + System.out.println("Could not complete within the timeout"); +} +``` + +Calling `await()` is friendlier to call since it throws only an +`InterruptedException` in case the blocked thread is interrupted. You +are already familiar with the `get()` method for synchronization, so we +will not bother you with this one. + +At last, there is another way to synchronize futures in a blocking way. +The major caveat is that you will become responsible to handle thread +interruptions. If you do not handle that aspect, you will not be able to +shut down your system properly if it is in a running state. + +``` java +RedisFuture future = commands.get("key"); +while (!future.isDone()) { + // do something ... +} +``` + +While the `isDone()` method does not aim primarily for synchronization +use, it might come handy to perform other computational efforts while +the command is executed. + +#### Chaining synchronization + +Futures can be synchronized/chained in a non-blocking style to improve +thread utilization. Chaining works very well in systems relying on +event-driven characteristics. Future chaining builds up a chain of one +or more futures that are executed serially, and every chain member +handles a part in the computation. 
The `CompletionStage` API offers +various methods to chain and transform futures. A simple transformation +of the value can be done using the `thenApply()` method: + +``` java +future.thenApply(new Function() { + @Override + public Integer apply(String value) { + return value.length(); + } +}).thenAccept(new Consumer() { + @Override + public void accept(Integer integer) { + System.out.println("Got value: " + integer); + } +}); +``` + +Alternatively, written in Java 8 lambdas: + +``` java +future.thenApply(String::length) + .thenAccept(integer -> System.out.println("Got value: " + integer)); +``` + +The `thenApply()` method accepts a function that transforms the value +into another one. The final `thenAccept()` method consumes the value for +final processing. + +You have already seen the `thenRun()` method from previous examples. The +`thenRun()` method can be used to handle future completions in case the +data is not crucial to your flow: + +``` java +future.thenRun(new Runnable() { + @Override + public void run() { + System.out.println("Finished the future."); + } +}); +``` + +Keep in mind to execute the `Runnable` on a custom `Executor` if you are +doing blocking calls within the `Runnable`. + +Another chaining method worth mentioning is the either-or chaining. A +couple of `…​Either()` methods are available on a `CompletionStage`, +see the [Java 8 API +docs](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletionStage.html) +for the full reference. The either-or pattern consumes the value from +the first future that is completed. A good example might be two services +returning the same data, for instance, a Master-Replica scenario, but +you want to return the data as fast as possible: + +``` java +RedisStringAsyncCommands master = masterClient.connect().async(); +RedisStringAsyncCommands replica = replicaClient.connect().async(); + +RedisFuture future = master.get("key"); +future.acceptEither(replica.get("key"), new Consumer() { + @Override + public void accept(String value) { + System.out.println("Got value: " + value); + } +}); +``` + +### Error handling + +Error handling is an indispensable component of every real world +application and should to be considered from the beginning on. Futures +provide some mechanisms to deal with errors. + +In general, you want to react in the following ways: + +- Return a default value instead + +- Use a backup future + +- Retry the future + +`RedisFuture`s transport exceptions if any occurred. Calls to the +`get()` method throw the occurred exception wrapped within an +`ExecutionException` (this is different to Lettuce 3.x). You can find +more details within the Javadoc on +[CompletionStage](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletionStage.html). 

The following code falls back to a default value after it runs into an
exception by using the `handle()` method:

``` java
future.handle(new BiFunction<String, Throwable, String>() {
    @Override
    public String apply(String value, Throwable throwable) {
        if (throwable != null) {
            return "default value";
        }
        return value;
    }
}).thenAccept(new Consumer<String>() {
    @Override
    public void accept(String value) {
        System.out.println("Got value: " + value);
    }
});
```

More sophisticated code could decide, based on the throwable type, what
value to return, as in this shortcut example using the `exceptionally()`
method:

``` java
future.exceptionally(new Function<Throwable, String>() {
    @Override
    public String apply(Throwable throwable) {
        if (throwable instanceof IllegalStateException) {
            return "default value";
        }

        return "other default value";
    }
});
```

Retrying futures and recovery using futures is not part of the Java 8
`CompletableFuture`. See the [Reactive API](#reactive-api) for more
convenient ways of handling exceptions.

### Examples

``` java
RedisAsyncCommands<String, String> async = client.connect().async();
RedisFuture<String> set = async.set("key", "value");
RedisFuture<String> get = async.get("key");

set.get() == "OK"
get.get() == "value"
```

``` java
RedisAsyncCommands<String, String> async = client.connect().async();
RedisFuture<String> set = async.set("key", "value");
RedisFuture<String> get = async.get("key");

set.await(1, SECONDS) == true
set.get() == "OK"
get.get(1, TimeUnit.MINUTES) == "value"
```

``` java
RedisStringAsyncCommands<String, String> async = client.connect().async();
RedisFuture<String> set = async.set("key", "value");

Runnable listener = new Runnable() {
    @Override
    public void run() {
        ...;
    }
};

set.thenRun(listener);
```

## Reactive API

This guide helps you to understand the Reactive Streams pattern and aims
to give you a general understanding of how to build reactive
applications.

### Motivation

Asynchronous and reactive methodologies allow you to make better use of
system resources, instead of wasting threads waiting for network or disk
I/O. Threads can be fully utilized to perform other work instead.

A broad range of technologies exists to facilitate this style of
programming, ranging from the very limited and less usable
`java.util.concurrent.Future` to complete libraries and runtimes like
Akka. [Project Reactor](http://projectreactor.io/) has a very rich set
of operators to compose asynchronous workflows, has no further
dependencies on other frameworks, and supports the very mature Reactive
Streams model.

### Understanding Reactive Streams

Reactive Streams is an initiative to provide a standard for asynchronous
stream processing with non-blocking back pressure. This encompasses
efforts aimed at runtime environments (JVM and JavaScript) as well as
network protocols.

The scope of Reactive Streams is to find a minimal set of interfaces,
methods, and protocols that will describe the necessary operations and
entities to achieve the goal—asynchronous streams of data with
non-blocking back pressure.

It is an interoperability standard between multiple reactive composition
libraries that allows interaction without the need for bridging between
libraries in application code.

The integration of Reactive Streams is usually accompanied by the use
of a composition library that hides the complexity of bare
`Publisher` and `Subscriber` types behind an easy-to-use API.
Lettuce uses [Project Reactor](http://projectreactor.io/) that exposes
its publishers as `Mono` and `Flux`.
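
As a rough orientation — assuming `client` is a `RedisClient` connected
to a Redis Standalone instance — single-value commands are exposed as
`Mono` while potentially multi-valued commands are exposed as `Flux`:

``` java
RedisReactiveCommands<String, String> reactive = client.connect().reactive();

Mono<String> value = reactive.get("key"); // emits zero or one element
Flux<String> keys = reactive.keys("*");   // emits zero to many elements
```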
+ +For more information about Reactive Streams see +. + +### Understanding Publishers + +Asynchronous processing decouples I/O or computation from the thread +that invoked the operation. A handle to the result is given back, +usually a `java.util.concurrent.Future` or similar, that returns either +a single object, a collection or an exception. Retrieving a result, that +was fetched asynchronously is usually not the end of processing one +flow. Once data is obtained, further requests can be issued, either +always or conditionally. With Java 8 or the Promise pattern, linear +chaining of futures can be set up so that subsequent asynchronous +requests are issued. Once conditional processing is needed, the +asynchronous flow has to be interrupted and synchronized. While this +approach is possible, it does not fully utilize the advantage of +asynchronous processing. + +In contrast to the preceding examples, `Publisher` objects answer the +multiplicity and asynchronous questions in a different fashion: By +inverting the `Pull` pattern into a `Push` pattern. + +**A Publisher is the asynchronous/push “dual” to the synchronous/pull +Iterable** + +| event | Iterable (pull) | Publisher (push) | +|----------------|------------------|--------------------| +| retrieve data | T next() | onNext(T) | +| discover error | throws Exception | onError(Exception) | +| complete | !hasNext() | onCompleted() | + +An `Publisher` supports emission sequences of values or even infinite +streams, not just the emission of single scalar values (as Futures do). +You will very much appreciate this fact once you start to work on +streams instead of single values. Project Reactor uses two types in its +vocabulary: `Mono` and `Flux` that are both publishers. + +A `Mono` can emit `0` to `1` events while a `Flux` can emit `0` to `N` +events. + +A `Publisher` is not biased toward some particular source of +concurrency or asynchronicity and how the underlying code is executed - +synchronous or asynchronous, running within a `ThreadPool`. As a +consumer of a `Publisher`, you leave the actual implementation to the +supplier, who can change it later on without you having to adapt your +code. + +The last key point of a `Publisher` is that the underlying processing +is not started at the time the `Publisher` is obtained, rather its +started at the moment an observer subscribes or signals demand to the +`Publisher`. This is a crucial difference to a +`java.util.concurrent.Future`, which is started somewhere at the time it +is created/obtained. So if no observer ever subscribes to the +`Publisher`, nothing ever will happen. + +### A word on the lettuce Reactive API + +All commands return a `Flux`, `Mono` or `Mono` to which a +`Subscriber` can subscribe to. That subscriber reacts to whatever item +or sequence of items the `Publisher` emits. This pattern facilitates +concurrent operations because it does not need to block while waiting +for the `Publisher` to emit objects. Instead, it creates a sentry in +the form of a `Subscriber` that stands ready to react appropriately at +whatever future time the `Publisher` does so. + +### Consuming `Publisher` + +The first thing you want to do when working with publishers is to +consume them. Consuming a publisher means subscribing to it. 
Here is an +example that subscribes and prints out all the items emitted: + +``` java +Flux.just("Ben", "Michael", "Mark").subscribe(new Subscriber() { + public void onSubscribe(Subscription s) { + s.request(3); + } + + public void onNext(String s) { + System.out.println("Hello " + s + "!"); + } + + public void onError(Throwable t) { + + } + + public void onComplete() { + System.out.println("Completed"); + } +}); +``` + +The example prints the following lines: + + Hello Ben + Hello Michael + Hello Mark + Completed + +You can see that the Subscriber (or Observer) gets notified of every +event and also receives the completed event. A `Publisher` emits +items until either an exception is raised or the `Publisher` finishes +the emission calling `onCompleted`. No further elements are emitted +after that time. + +A call to the `subscribe` registers a `Subscription` that allows to +cancel and, therefore, do not receive further events. Publishers can +interoperate with the un-subscription and free resources once a +subscriber unsubscribed from the `Publisher`. + +Implementing a `Subscriber` requires implementing numerous methods, +so lets rewrite the code to a simpler form: + +``` java +Flux.just("Ben", "Michael", "Mark").doOnNext(new Consumer() { + public void accept(String s) { + System.out.println("Hello " + s + "!"); + } +}).doOnComplete(new Runnable() { + public void run() { + System.out.println("Completed"); + } +}).subscribe(); +``` + +alternatively, even simpler by using Java 8 Lambdas: + +``` java +Flux.just("Ben", "Michael", "Mark") + .doOnNext(s -> System.out.println("Hello " + s + "!")) + .doOnComplete(() -> System.out.println("Completed")) + .subscribe(); +``` + +You can control the elements that are processed by your `Subscriber` +using operators. The `take()` operator limits the number of emitted +items if you are interested in the first `N` elements only. + +``` java +Flux.just("Ben", "Michael", "Mark") // + .doOnNext(s -> System.out.println("Hello " + s + "!")) + .doOnComplete(() -> System.out.println("Completed")) + .take(2) + .subscribe(); +``` + +The example prints the following lines: + + Hello Ben + Hello Michael + Completed + +Note that the `take` operator implicitly cancels its subscription from +the `Publisher` once the expected count of elements was emitted. + +A subscription to a `Publisher` can be done either by another `Flux` +or a `Subscriber`. Unless you are implementing a custom `Publisher`, +always use `Subscriber`. The used subscriber `Consumer` from the example +above does not handle `Exception`s so once an `Exception` is thrown you +will see a stack trace like this: + + Exception in thread "main" reactor.core.Exceptions$BubblingException: java.lang.RuntimeException: Example exception + at reactor.core.Exceptions.bubble(Exceptions.java:96) + at reactor.core.publisher.Operators.onErrorDropped(Operators.java:296) + at reactor.core.publisher.LambdaSubscriber.onError(LambdaSubscriber.java:117) + ... + Caused by: java.lang.RuntimeException: Example exception + at demos.lambda$example3Lambda$4(demos.java:87) + at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:157) + ... 23 more + +It is always recommended to implement an error handler right from the +beginning. At a certain point, things can and will go wrong. 
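
If you prefer lambdas over a full `Subscriber` implementation, the
`subscribe(…)` overload that accepts separate `onNext`, `onError`, and
`onComplete` callbacks covers the same ground; a minimal sketch:

``` java
Flux.just("Ben", "Michael", "Mark")
    .doOnNext(s -> {
        if (s.equals("Michael")) {
            throw new IllegalStateException("Example exception");
        }
    })
    .subscribe(
        s -> System.out.println("Hello " + s + "!"),               // onNext
        throwable -> System.out.println("onError: " + throwable),  // onError
        () -> System.out.println("Completed"));                    // onComplete
```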
+ +A fully implemented subscriber declares the `onCompleted` and `onError` +methods allowing you to react to these events: + +``` java +Flux.just("Ben", "Michael", "Mark").subscribe(new Subscriber() { + public void onSubscribe(Subscription s) { + s.request(3); + } + + public void onNext(String s) { + System.out.println("Hello " + s + "!"); + } + + public void onError(Throwable t) { + System.out.println("onError: " + t); + } + + public void onComplete() { + System.out.println("Completed"); + } +}); +``` + +### From push to pull + +The examples from above illustrated how publishers can be set up in a +not-opinionated style about blocking or non-blocking execution. A +`Flux` can be converted explicitly into an `Iterable` or +synchronized with `block()`. Avoid calling `block()` in your code as you +start expressing the nature of execution inside your code. Calling +`block()` removes all non-blocking advantages of the reactive chain to +your application. + +``` java +String last = Flux.just("Ben", "Michael", "Mark").last().block(); +System.out.println(last); +``` + +The example prints the following line: + + Mark + +A blocking call can be used to synchronize the publisher chain and find +back a way into the plain and well-known `Pull` pattern. + +``` java +List list = Flux.just("Ben", "Michael", "Mark").collectList().block(); +System.out.println(list); +``` + +The `toList` operator collects all emitted elements and passes the list +through the `BlockingPublisher`. + +The example prints the following line: + + [Ben, Michael, Mark] + +### Creating `Flux` and `Mono` using Lettuce + +There are many ways to establish publishers. You have already seen +`just()`, `take()` and `collectList()`. Refer to the [Project Reactor +documentation](http://projectreactor.io/docs/) for many more methods +that you can use to create `Flux` and `Mono`. + +Lettuce publishers can be used for initial and chaining operations. When +using Lettuce publishers, you will notice the non-blocking behavior. +This is because all I/O and command processing are handled +asynchronously using the netty EventLoop. + +Connecting to Redis is insanely simple: + +``` java +RedisClient client = RedisClient.create("redis://localhost"); +RedisStringReactiveCommands commands = client.connect().reactive(); +``` + +In the next step, obtaining a value from a key requires the `GET` +operation: + +``` java +commands.get("key").subscribe(new Consumer() { + + public void accept(String value) { + System.out.println(value); + } +}); +``` + +Alternatively, written in Java 8 lambdas: + +``` java +commands + .get("key") + .subscribe(value -> System.out.println(value)); +``` + +The execution is handled asynchronously, and the invoking Thread can be +used to processed in processing while the operation is completed on the +Netty EventLoop threads. Due to its decoupled nature, the calling method +can be left before the execution of the `Publisher` is finished. + +Lettuce publishers can be used within the context of chaining to load +multiple keys asynchronously: + +``` java +Flux.just("Ben", "Michael", "Mark"). + flatMap(key -> commands.get(key)). + subscribe(value -> System.out.println("Got value: " + value)); +``` + +### Hot and Cold Publishers + +There is a distinction between Publishers that was not covered yet: + +- A cold Publishers waits for a subscription until it emits values and + does this freshly for every subscriber. + +- A hot Publishers begins emitting values upfront and presents them to + every subscriber subsequently. 
+ +All Publishers returned from the Redis Standalone, Redis Cluster, and +Redis Sentinel API are cold, meaning that no I/O happens until they are +subscribed to. As such a subscriber is guaranteed to see the whole +sequence from the beginning. So just creating a Publisher will not cause +any network I/O thus creating and discarding Publishers is cheap. +Publishers created for a Publish/Subscribe emit `PatternMessage`s and +`ChannelMessage`s once they are subscribed to. Publishers guarantee +however to emit all items from the beginning until their end. While this +is true for Publish/Subscribe publishers, the nature of subscribing to a +Channel/Pattern allows missed messages due to its subscription nature +and less to the Hot/Cold distinction of publishers. + +### Transforming publishers + +Publishers can transform the emitted values in various ways. One of the +most basic transformations is `flatMap()` which you have seen from the +examples above that converts the incoming value into a different one. +Another one is `map()`. The difference between `map()` and `flatMap()` +is that `flatMap()` allows you to do those transformations with +`Publisher` calls. + +``` java +Flux.just("Ben", "Michael", "Mark") + .flatMap(commands::get) + .flatMap(value -> commands.rpush("result", value)) + .subscribe(); +``` + +The first `flatMap()` function is used to retrieve a value and the +second `flatMap()` function appends the value to a Redis list named +`result`. The `flatMap()` function returns a Publisher whereas the +normal map just returns ``. You will use `flatMap()` a lot when +dealing with flows like this, you’ll become good friends. + +An aggregation of values can be achieved using the `reduce()` +transformation. It applies a function to each value emitted by a +`Publisher`, sequentially and emits each successive value. We can use +it to aggregate values, to count the number of elements in multiple +Redis sets: + +``` java +Flux.just("Ben", "Michael", "Mark") + .flatMap(commands::scard) + .reduce((sum, current) -> sum + current) + .subscribe(result -> System.out.println("Number of elements in sets: " + result)); +``` + +The aggregation function of `reduce()` is applied on each emitted value, +so three times in the example above. If you want to get the last value, +which denotes the final result containing the number of elements in all +Redis sets, apply the `last()` transformation: + +``` java +Flux.just("Ben", "Michael", "Mark") + .flatMap(commands::scard) + .reduce((sum, current) -> sum + current) + .last() + .subscribe(result -> System.out.println("Number of elements in sets: " + result)); +``` + +Now let’s take a look at grouping emitted items. The following example +emits three items and groups them by the beginning character. + +``` java +Flux.just("Ben", "Michael", "Mark") + .groupBy(key -> key.substring(0, 1)) + .subscribe( + groupedFlux -> { + groupedFlux.collectList().subscribe(list -> { + System.out.println("First character: " + groupedFlux.key() + ", elements: " + list); + }); + } +); +``` + +The example prints the following lines: + + First character: B, elements: [Ben] + First character: M, elements: [Michael, Mark] + +### Absent values + +The presence and absence of values is an essential part of reactive +programming. Traditional approaches consider `null` as an absence of a +particular value. With Java 8, `Optional` was introduced to +encapsulate nullability. Reactive Streams prohibits the use of `null` +values. 
+ +In the scope of Redis, an absent value is an empty list, a non-existent +key or any other empty data structure. Reactive programming discourages +the use of `null` as value. The reactive answer to absent values is just +not emitting any value that is possible due the `0` to `N` nature of +`Publisher`. + +Suppose we have the keys `Ben` and `Michael` set each to the value +`value`. We query those and another, absent key with the following code: + +``` java +Flux.just("Ben", "Michael", "Mark") + .flatMap(commands::get) + .doOnNext(value -> System.out.println(value)) + .subscribe(); +``` + +The example prints the following lines: + + value + value + +The output is just two values. The `GET` to the absent key `Mark` does +not emit a value. + +The reactive API provides operators to work with empty results when you +require a value. You can use one of the following operators: + +- `defaultIfEmpty`: Emit a default value if the `Publisher` did not + emit any value at all + +- `switchIfEmpty`: Switch to a fallback `Publisher` to emit values + +- `Flux.hasElements`/`Flux.hasElement`: Emit a `Mono` that + contains a flag whether the original `Publisher` is empty + +- `next`/`last`/`elementAt`: Positional operators to retrieve the + first/last/`N`th element or emit a default value + +### Filtering items + +The values emitted by a `Publisher` can be filtered in case you need +only specific results. Filtering does not change the emitted values +itself. Filters affect how many items and at which point (and if at all) +they are emitted. + +``` java +Flux.just("Ben", "Michael", "Mark") + .filter(s -> s.startsWith("M")) + .flatMap(commands::get) + .subscribe(value -> System.out.println("Got value: " + value)); +``` + +The code will fetch only the keys `Michael` and `Mark` but not `Ben`. +The filter criteria are whether the `key` starts with a `M`. + +You already met the `last()` filter to retrieve the last value: + +``` java +Flux.just("Ben", "Michael", "Mark") + .last() + .subscribe(value -> System.out.println("Got value: " + value)); +``` + +the extended variant of `last()` allows you to take the last `N` values: + +``` java +Flux.just("Ben", "Michael", "Mark") + .takeLast(3) + .subscribe(value -> System.out.println("Got value: " + value)); +``` + +The example from above takes the last `2` values. + +The opposite to `next()` is the `first()` filter that is used to +retrieve the next value: + +``` java +Flux.just("Ben", "Michael", "Mark") + .next() + .subscribe(value -> System.out.println("Got value: " + value)); +``` + +### Error handling + +Error handling is an indispensable component of every real world +application and should to be considered from the beginning on. Project +Reactor provides several mechanisms to deal with errors. + +In general, you want to react in the following ways: + +- Return a default value instead + +- Use a backup publisher + +- Retry the Publisher (immediately or with delay) + +The following code falls back to a default value after it throws an +exception at the first emitted item: + +``` java +Flux.just("Ben", "Michael", "Mark") + .doOnNext(value -> { + throw new IllegalStateException("Takes way too long"); + }) + .onErrorReturn("Default value") + .subscribe(); +``` + +You can use a backup `Publisher` which will be called if the first +one fails. 
+ +``` java +Flux.just("Ben", "Michael", "Mark") + .doOnNext(value -> { + throw new IllegalStateException("Takes way too long"); + }) + .switchOnError(commands.get("Default Key")) + .subscribe(); +``` + +It is possible to retry the publisher by re-subscribing. Re-subscribing +can be done as soon as possible, or with a wait interval, which is +preferred when external resources are involved. + +``` java +Flux.just("Ben", "Michael", "Mark") + .flatMap(commands::get) + .retry() + .subscribe(); +``` + +Use the following code if you want to retry with backoff: + +``` java +Flux.just("Ben", "Michael", "Mark") + .doOnNext(v -> { + if (new Random().nextInt(10) + 1 == 5) { + throw new RuntimeException("Boo!"); + } + }) + .doOnSubscribe(subscription -> + { + System.out.println(subscription); + }) + .retryWhen(throwableFlux -> Flux.range(1, 5) + .flatMap(i -> { + System.out.println(i); + return Flux.just(i) + .delay(Duration.of(i, ChronoUnit.SECONDS)); + })) + .blockLast(); +``` + +The attempts get passed into the `retryWhen()` method delayed with the +number of seconds to wait. The delay method is used to complete once its +timer is done. + +### Schedulers and threads + +Schedulers in Project Reactor are used to instruct multi-threading. Some +operators have variants that take a Scheduler as a parameter. These +instruct the operator to do some or all of its work on a particular +Scheduler. + +Project Reactor ships with a set of preconfigured Schedulers, which are +all accessible through the `Schedulers` class: + +- Schedulers.parallel(): Executes the computational work such as + event-loops and callback processing. + +- Schedulers.immediate(): Executes the work immediately in the current + thread + +- Schedulers.elastic(): Executes the I/O-bound work such as asynchronous + performance of blocking I/O, this scheduler is backed by a thread-pool + that will grow as needed + +- Schedulers.newSingle(): Executes the work on a new thread + +- Schedulers.fromExecutor(): Create a scheduler from a + `java.util.concurrent.Executor` + +- Schedulers.timer(): Create or reuse a hash-wheel based TimedScheduler + with a resolution of 50ms. + +Do not use the computational scheduler for I/O. + +Publishers can be executed by a scheduler in the following different +ways: + +- Using an operator that makes use of a scheduler + +- Explicitly by passing the Scheduler to such an operator + +- By using `subscribeOn(Scheduler)` + +- By using `publishOn(Scheduler)` + +Operators like `buffer`, `replay`, `skip`, `delay`, `parallel`, and so +forth use a Scheduler by default if not instructed otherwise. + +All of the listed operators allow you to pass in a custom scheduler if +needed. Sticking most of the time with the defaults is a good idea. + +If you want the subscribe chain to be executed on a specific scheduler, +you use the `subscribeOn()` operator. 
The code is executed on the main +thread without a scheduler set: + +``` java +Flux.just("Ben", "Michael", "Mark").flatMap(key -> { + System.out.println("Map 1: " + key + " (" + Thread.currentThread().getName() + ")"); + return Flux.just(key); + } +).flatMap(value -> { + System.out.println("Map 2: " + value + " (" + Thread.currentThread().getName() + ")"); + return Flux.just(value); + } +).subscribe(); +``` + +The example prints the following lines: + + Map 1: Ben (main) + Map 2: Ben (main) + Map 1: Michael (main) + Map 2: Michael (main) + Map 1: Mark (main) + Map 2: Mark (main) + +This example shows the `subscribeOn()` method added to the flow (it does +not matter where you add it): + +``` java +Flux.just("Ben", "Michael", "Mark").flatMap(key -> { + System.out.println("Map 1: " + key + " (" + Thread.currentThread().getName() + ")"); + return Flux.just(key); + } +).flatMap(value -> { + System.out.println("Map 2: " + value + " (" + Thread.currentThread().getName() + ")"); + return Flux.just(value); + } +).subscribeOn(Schedulers.parallel()).subscribe(); +``` + +The output of the example shows the effect of `subscribeOn()`. You can +see that the Publisher is executed on the same thread, but on the +computation thread pool: + + Map 1: Ben (parallel-1) + Map 2: Ben (parallel-1) + Map 1: Michael (parallel-1) + Map 2: Michael (parallel-1) + Map 1: Mark (parallel-1) + Map 2: Mark (parallel-1) + +If you apply the same code to Lettuce, you will notice a difference in +the threads on which the second `flatMap()` is executed: + +``` java +Flux.just("Ben", "Michael", "Mark").flatMap(key -> { + System.out.println("Map 1: " + key + " (" + Thread.currentThread().getName() + ")"); + return commands.set(key, key); +}).flatMap(value -> { + System.out.println("Map 2: " + value + " (" + Thread.currentThread().getName() + ")"); + return Flux.just(value); +}).subscribeOn(Schedulers.parallel()).subscribe(); +``` + +The example prints the following lines: + + Map 1: Ben (parallel-1) + Map 1: Michael (parallel-1) + Map 1: Mark (parallel-1) + Map 2: OK (lettuce-nioEventLoop-3-1) + Map 2: OK (lettuce-nioEventLoop-3-1) + Map 2: OK (lettuce-nioEventLoop-3-1) + +Two things differ from the standalone examples: + +1. The values are set rather concurrently than sequentially + +2. The second `flatMap()` transformation prints the netty EventLoop + thread name + +This is because Lettuce publishers are executed and completed on the +netty EventLoop threads by default. + +`publishOn` instructs an Publisher to call its observer’s `onNext`, +`onError`, and `onCompleted` methods on a particular Scheduler. Here, +the order matters: + +``` java +Flux.just("Ben", "Michael", "Mark").flatMap(key -> { + System.out.println("Map 1: " + key + " (" + Thread.currentThread().getName() + ")"); + return commands.set(key, key); +}).publishOn(Schedulers.parallel()).flatMap(value -> { + System.out.println("Map 2: " + value + " (" + Thread.currentThread().getName() + ")"); + return Flux.just(value); +}).subscribe(); +``` + +Everything before the `publishOn()` call is executed in main, everything +below in the scheduler: + + Map 1: Ben (main) + Map 1: Michael (main) + Map 1: Mark (main) + Map 2: OK (parallel-1) + Map 2: OK (parallel-1) + Map 2: OK (parallel-1) + +Schedulers allow direct scheduling of operations. Refer to the [Project +Reactor +documentation](https://projectreactor.io/core/docs/api/reactor/core/scheduler/Schedulers.html) +for further information. 
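
For completeness, a minimal sketch of such direct scheduling — not
specific to Lettuce — that submits a task to the parallel scheduler and
keeps the returned `Disposable` for cancellation:

``` java
Disposable task = Schedulers.parallel().schedule(
        () -> System.out.println("Runs on " + Thread.currentThread().getName()));

// cancel the task if it has not started yet
task.dispose();
```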
+ +### Redis Transactions + +Lettuce provides a convenient way to use Redis Transactions in a +reactive way. Commands that should be executed within a transaction can +be executed after the `MULTI` command was executed. Functional chaining +allows to execute commands within a closure, and each command receives +its appropriate response. A cumulative response is also returned with +`TransactionResult` in response to `EXEC`. + +See [Transactions](#transactions-using-the-reactive-api) for +further details. + +#### Other examples + +**Blocking example** + +``` java +RedisStringReactiveCommands reactive = client.connect().reactive(); +Mono set = reactive.set("key", "value"); +set.block(); +``` + +**Non-blocking example** + +``` java +RedisStringReactiveCommands reactive = client.connect().reactive(); +Mono set = reactive.set("key", "value"); +set.subscribe(); +``` + +**Functional chaining** + +``` java +RedisStringReactiveCommands reactive = client.connect().reactive(); +Flux.just("Ben", "Michael", "Mark") + .flatMap(key -> commands.sadd("seen", key)) + .flatMap(value -> commands.randomkey()) + .flatMap(commands::type) + .doOnNext(System.out::println).subscribe(); +``` + +**Redis Transaction** + + RedisReactiveCommands reactive = client.connect().reactive(); + + reactive.multi().doOnSuccess(s -> { + reactive.set("key", "1").doOnNext(s1 -> System.out.println(s1)).subscribe(); + reactive.incr("key").doOnNext(s1 -> System.out.println(s1)).subscribe(); + }).flatMap(s -> reactive.exec()) + .doOnNext(transactionResults -> System.out.println(transactionResults.wasRolledBack())) + .subscribe(); + +## Kotlin API + +Kotlin Coroutines are using Kotlin lightweight threads allowing to write +non-blocking code in an imperative way. On language side, suspending +functions provides an abstraction for asynchronous operations while on +library side kotlinx.coroutines provides functions like `async { }` and +types like `Flow`. + +Lettuce ships with extensions to provide support for idiomatic Kotlin +use. + +### Dependencies + +Coroutines support is available when `kotlinx-coroutines-core` and +`kotlinx-coroutines-reactive` dependencies are on the classpath: + +``` xml + + org.jetbrains.kotlinx + kotlinx-coroutines-core + ${kotlinx-coroutines.version} + + + org.jetbrains.kotlinx + kotlinx-coroutines-reactive + ${kotlinx-coroutines.version} + +``` + +### How does Reactive translate to Coroutines? + +`Flow` is an equivalent to `Flux` in Coroutines world, suitable for hot +or cold streams, finite or infinite streams, with the following main +differences: + +- `Flow` is push-based while `Flux` is a push-pull hybrid + +- Backpressure is implemented via suspending functions + +- `Flow` has only a single suspending collect method and operators are + implemented as extensions + +- Operators are easy to implement thanks to Coroutines + +- Extensions allow to add custom operators to Flow + +- Collect operations are suspending functions + +- `map` operator supports asynchronous operations (no need for + `flatMap`) since it takes a suspending function parameter + +### Coroutines API based on reactive operations + +Example for retrieving commands and using it: + +``` kotlin +val api: RedisCoroutinesCommands = connection.coroutines() + +val foo1 = api.set("foo", "bar") +val foo2 = api.keys("fo*") +``` + +> [!NOTE] +> Coroutine Extensions are experimental and require opt-in using +> `@ExperimentalLettuceCoroutinesApi`. The API ships with a reduced +> feature set. 
Deprecated methods and `StreamingChannel` are left out +> intentionally. Expect evolution towards a `Flow`-based API to consume +> large Redis responses. + +### Extensions for existing APIs + +#### Transactions DSL + +Example for the synchronous API: + +``` kotlin +val result: TransactionResult = connection.sync().multi { + set("foo", "bar") + get("foo") +} +``` + +Example for async with coroutines: + +``` kotlin +val result: TransactionResult = connection.async().multi { + set("foo", "bar") + get("foo") +} +``` + +## Publish/Subscribe + +Lettuce provides support for Publish/Subscribe on Redis Standalone and +Redis Cluster connections. The connection is notified on +message/subscribed/unsubscribed events after subscribing to channels or +patterns. [Synchronous](#basic-usage), [asynchronous](#asynchronous-api) +and [reactive](#reactive-api) API’s are provided to interact with Redis +Publish/Subscribe features. + +### Subscribing + +A connection can notify multiple listeners that implement +`RedisPubSubListener` (Lettuce provides a `RedisPubSubAdapter` for +convenience). All listener registrations are kept within the +`StatefulRedisPubSubConnection`/`StatefulRedisClusterConnection`. + +``` java +StatefulRedisPubSubConnection connection = client.connectPubSub() +connection.addListener(new RedisPubSubListener() { ... }) + +RedisPubSubCommands sync = connection.sync(); +sync.subscribe("channel"); + +// application flow continues +``` + +> [!NOTE] +> Don’t issue blocking calls (includes synchronous API calls to Lettuce) +> from inside of Pub/Sub callbacks as this would block the EventLoop. If +> you need to fetch data from Redis from inside a callback, please use +> the asynchronous API. + +``` java +StatefulRedisPubSubConnection connection = client.connectPubSub() +connection.addListener(new RedisPubSubListener() { ... }) + +RedisPubSubAsyncCommands async = connection.async(); +RedisFuture future = async.subscribe("channel"); + +// application flow continues +``` + +### Reactive API + +The reactive API provides hot `Observable`s to listen on +`ChannelMessage`s and `PatternMessage`s. The `Observable`s receive all +inbound messages. You can do filtering using the observable chain if you +need to filter out the interesting ones, The `Observable` stops +triggering events when the subscriber unsubscribes from it. + +``` java +StatefulRedisPubSubConnection connection = client.connectPubSub() + +RedisPubSubReactiveCommands reactive = connection.reactive(); +reactive.subscribe("channel").subscribe(); + +reactive.observeChannels().doOnNext(patternMessage -> {...}).subscribe() + +// application flow continues +``` + +### Redis Cluster + +Redis Cluster support Publish/Subscribe but requires some attention in +general. User-space Pub/Sub messages (Calling `PUBLISH`) are broadcasted +across the whole cluster regardless of subscriptions to particular +channels/patterns. This behavior allows connecting to an arbitrary +cluster node and registering a subscription. The client isn’t required +to connect to the node where messages were published. + +A cluster-aware Pub/Sub connection is provided by +`RedisClusterClient.connectPubSub()` allowing to listen for cluster +reconfiguration and reconnect if the topology changes. + +``` java +StatefulRedisClusterPubSubConnection connection = clusterClient.connectPubSub() +connection.addListener(new RedisPubSubListener() { ... 
}) + +RedisPubSubCommands sync = connection.sync(); +sync.subscribe("channel"); +``` + +Redis Cluster also makes a distinction between user-space and key-space +messages. Key-space notifications (Pub/Sub messages for key-activity) +stay node-local and are not broadcasted across the Redis Cluster. A +notification about, e.g. an expiring key, stays local to the node on +which the key expired. + +Clients that are interested in keyspace notifications must subscribe to +the appropriate node (or nodes) to receive these notifications. You can +either use `RedisClient.connectPubSub()` to establish Pub/Sub +connections to the individual nodes or use `RedisClusterClient`'s +message propagation and NodeSelection API to get a managed set of +connections. + +``` java +StatefulRedisClusterPubSubConnection connection = clusterClient.connectPubSub() +connection.addListener(new RedisClusterPubSubListener() { ... }) +connection.setNodeMessagePropagation(true); + +RedisPubSubCommands sync = connection.sync(); +sync.masters().commands().subscribe("__keyspace@0__:*"); +``` + +There are two things to pay special attention to: + +1. Replication: Keys replicated to replica nodes, especially + considering expiry, generate keyspace events on all nodes holding + the key. If a key expires and it is replicated, it will expire on + the master and all replicas. Each Redis server will emit keyspace + events. Subscribing to non-master nodes, therefore, will let your + application see multiple events of the same type for the same key + because of Redis distributed nature. + +2. Topology Changes: Subscriptions are issued either by using the + NodeSelection API or by calling `subscribe(…)` on the individual + cluster node connections. Subscription registrations are not + propagated to new nodes that are added on a topology change. + +## Transactions/Multi + +Transactions allow the execution of a group of commands in a single +step. Transactions can be controlled using `WATCH`, `UNWATCH`, `EXEC`, +`MULTI` and `DISCARD` commands. Synchronous, asynchronous, and reactive +APIs allow the use of transactions. + +> [!NOTE] +> Transactional use requires external synchronization when a single +> connection is used by multiple threads/processes. This can be achieved +> either by serializing transactions or by providing a dedicated +> connection to each concurrent process. Lettuce itself does not +> synchronize transactional/non-transactional invocations regardless of +> the used API facade. + +Redis responds to commands invoked during a transaction with a `QUEUED` +response. The response related to the execution of the command is +received at the moment the `EXEC` command is processed, and the +transaction is executed. The particular APIs behave in different ways: + +- Synchronous: Invocations to the commands return `null` while they are + invoked within a transaction. The `MULTI` command carries the response + of the particular commands. + +- Asynchronous: The futures receive their response at the moment the + `EXEC` command is processed. This happens while the `EXEC` response is + received. + +- Reactive: An `Obvervable` triggers `onNext`/`onCompleted` at the + moment the `EXEC` command is processed. This happens while the `EXEC` + response is received. 

As soon as you’re within a transaction, you won’t receive any responses
when triggering the commands:

``` java
redis.multi() == "OK"
redis.set(key, value) == null
redis.exec() == list("OK")
```

You’ll receive the transactional response when calling `exec()` at the
end of your transaction.

``` java
redis.multi() == "OK"
redis.set(key1, value) == null
redis.set(key2, value) == null
redis.exec() == list("OK", "OK")
```

### Transactions using the asynchronous API

Asynchronous use of Redis transactions is very similar to
non-transactional use. The asynchronous API returns `RedisFuture`
instances that eventually complete; they are handles to a future
result. Regular commands complete as soon as Redis sends a response.
Transactional commands complete as soon as the `EXEC` result is
received.

Each command is completed individually with its own result so users of
`RedisFuture` will see no difference between transactional and
non-transactional `RedisFuture` completion. That said, transactional
command results are available twice: once via the `RedisFuture` of the
command and once through the `List<Object>` (`TransactionResult` since
Lettuce 5) of the `EXEC` command future.

``` java
RedisAsyncCommands<String, String> async = client.connect().async();

RedisFuture<String> multi = async.multi();

RedisFuture<String> set = async.set("key", "value");

RedisFuture<TransactionResult> exec = async.exec();

TransactionResult transactionResult = exec.get();
String setResult = set.get();

transactionResult.get(0) == setResult
```

### Transactions using the reactive API

The reactive API can be used to execute multiple commands in a single
step. The nature of the reactive API encourages nesting of commands. It
is essential to understand the time at which an `Observable` emits a
value when working with transactions. Redis responds with `QUEUED` to
commands invoked during a transaction. The response related to the
execution of the command is received at the moment the `EXEC` command is
processed, and the transaction is executed. Subsequent calls in the
processing chain are executed after the transactional end. The following
code starts a transaction, executes two commands within the transaction
and finally executes the transaction.

``` java
RedisReactiveCommands<String, String> reactive = client.connect().reactive();
reactive.multi().subscribe(multiResponse -> {
    reactive.set("key", "1").subscribe();
    reactive.incr("key").subscribe();
    reactive.exec().subscribe();
});
```

### Transactions on clustered connections

Clustered connections perform routing by default. This means that you
can’t really be sure on which host your command is executed. So if you
are working in a clustered environment, rather use a regular connection
to a particular node, since you are then bound to that node and know
which hash slots it handles, as sketched below.
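
A minimal sketch of that approach — the host and port are hypothetical
and must point to the master that owns the hash slots of the keys used
in the transaction (assuming `clusterClient` is a `RedisClusterClient`):

``` java
StatefulRedisClusterConnection<String, String> connection = clusterClient.connect();

// Obtain a connection bound to a specific cluster node and run the transaction there
RedisCommands<String, String> node = connection.getConnection("127.0.0.1", 7000).sync();

node.multi();
node.set("key", "value");
TransactionResult result = node.exec();
```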

### Examples

**Multi executing multiple commands**

``` java
redis.multi();

redis.set("one", "1");
redis.set("two", "2");
redis.mget("one", "two");
redis.llen(key);

redis.exec(); // result: list("OK", "OK", list("1", "2"), 0L)
```

**Multi executing multiple asynchronous commands**

``` java
redis.multi();

RedisFuture<String> set1 = redis.set("one", "1");
RedisFuture<String> set2 = redis.set("two", "2");
RedisFuture<List<KeyValue<String, String>>> mget = redis.mget("one", "two");
RedisFuture<Long> llen = redis.llen(key);


set1.thenAccept(value -> …); // OK
set2.thenAccept(value -> …); // OK

RedisFuture<TransactionResult> exec = redis.exec(); // result: list("OK", "OK", list("1", "2"), 0L)

mget.get(); // list("1", "2")
llen.thenAccept(value -> …); // 0L
```

**Using WATCH**

``` java
redis.watch(key);

RedisCommands<String, String> redis2 = client.connect().sync();
redis2.set(key, value + "X");
redis2.getStatefulConnection().close();

redis.multi();
redis.append(key, "foo");
redis.exec(); // result is an empty list because of the changed key
```

## Scripting and Functions

Redis functionality can be extended in many ways, of which [Lua
Scripting](https://redis.io/topics/eval-intro) and
[Functions](https://redis.io/topics/functions-intro) are two approaches
that do not require specific prerequisites on the server.

### Lua Scripting

[Lua](https://redis.io/topics/lua-api) is a powerful scripting language
that is supported at the core of Redis. Lua scripts can be invoked
dynamically by providing the script contents to Redis or used as a
stored procedure by loading the script into Redis and using its digest
to invoke it.
+ +``` java +String helloWorld = redis.eval("return ARGV[1]", STATUS, new String[0], "Hello World"); +``` + +
+ +Using Lua scripts is straightforward. Consuming results in Java requires +additional details to consume the result through a matching type. As we +do not know what your script will return, the API uses call-site +generics for you to specify the result type. Additionally, you must +provide a `ScriptOutputType` hint to `EVAL` so that the driver uses the +appropriate output parser. See [Output Formats](#output-formats) for +further details. + +Lua scripts can be stored on the server for repeated execution. +Dynamically-generated scripts are an anti-pattern as each script is +stored in Redis' script cache. Generating scripts during the application +runtime may, and probably will, exhaust the host’s memory resources for +caching them. Instead, scripts should be as generic as possible and +provide customized execution via their arguments. You can register a +script through `SCRIPT LOAD` and use its SHA digest to invoke it later: + +

``` java
String digest = redis.scriptLoad("return ARGV[1]");

// later
String helloWorld = redis.evalsha(digest, STATUS, new String[0], "Hello World");
```

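
The script cache is not persisted across server restarts, so a common
defensive pattern is to fall back to `EVAL` when `EVALSHA` reports a
missing script. A minimal sketch, assuming `redis` is a synchronous
`RedisCommands<String, String>`:

``` java
String script = "return ARGV[1]";
String digest = redis.scriptLoad(script);

String helloWorld;
try {
    helloWorld = redis.evalsha(digest, STATUS, new String[0], "Hello World");
} catch (RedisCommandExecutionException e) {
    // NOSCRIPT: the script cache was flushed or the server restarted
    helloWorld = redis.eval(script, STATUS, new String[0], "Hello World");
}
```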
+ +### Redis Functions + +[Redis Functions](https://redis.io/topics/functions-intro) is an +evolution of the scripting API to provide extensibility beyond Lua. +Functions can leverage different engines and follow a model where a +function library registers functionality to be invoked later with the +`FCALL` command. + +

``` java
redis.functionLoad("#!lua name=mylib\nredis.register_function('knockknock', function() return 'Who\\'s there?' end)");

String response = redis.fcall("knockknock", STATUS);
```


Using Functions is straightforward. Consuming results in Java requires
additional details to consume the result through a matching type. As we
do not know what your function will return, the API uses call-site
generics for you to specify the result type. Additionally, you must
provide a `ScriptOutputType` hint to `FCALL` so that the driver uses the
appropriate output parser. See [Output Formats](#output-formats) for
further details.

### Output Formats

You can choose from one of the following:

- `BOOLEAN`: Boolean output, expects a number `0` or `1` to be converted
  to a boolean value.

- `INTEGER`: 64-bit Integer output, represented as Java `Long`.

- `MULTI`: List of flat arrays.

- `STATUS`: Simple status value such as `OK`. The Redis response is
  parsed as ASCII.

- `VALUE`: Value return type decoded through `RedisCodec`.

- `OBJECT`: RESP3-defined object output supporting all Redis response
  structures.

### Leveraging Scripting and Functions through Command Interfaces

Using dynamic functionality without a documented response structure can
impose quite some complexity on your application. If you consider using
scripting or functions, then you can use [Command
Interfaces](Working-with-dynamic-Redis-Command-Interfaces.md) to declare
an interface along with methods that represent your scripting or
function landscape. Declaring a method with input arguments and a
response type not only makes it obvious how the script or function is
supposed to be called, but also what the response structure looks like.

Let’s take a look at a simple function call first:
+ +``` lua +local function my_hlastmodified(keys, args) + local hash = keys[1] + return redis.call('HGET', hash, '_last_modified_') +end +``` + +
+ +
+ +``` java +Long lastModified = redis.fcall("my_hlastmodified", INTEGER, "my_hash"); +``` + +

This example calls the `my_hlastmodified` function, expecting a `Long`
response for the given hash key input argument. Calling a function from
a single place in your code isn’t an issue on its own. The arrangement
becomes problematic once the number of functions grows or you start
calling the functions with different arguments from various places in
your code. Without the function code, it becomes impossible to
investigate how the response mechanics work or determine the argument
semantics, as there is no single place to document the function
behavior.

Let’s apply the Command Interface pattern to see how the declaration
and call sites change:

``` java
interface MyCustomCommands extends Commands {

    /**
     * Retrieve the last modified value from the hash key.
     * @param hashKey the key of the hash.
     * @return the last modified timestamp, can be {@code null}.
     */
    @Command("FCALL my_hlastmodified 1 :hashKey")
    Long getLastModified(@Param("hashKey") String hashKey);

}

MyCustomCommands myCommands = …;
Long lastModified = myCommands.getLastModified("my_hash");
```

+ +By declaring a command method, you create a place that allows for +storing additional documentation. The method declaration makes clear +what the function call expects and what you get in return. + diff --git a/docs/Frequently-Asked-Questions.md b/docs/Frequently-Asked-Questions.md new file mode 100644 index 0000000000..8c540fe885 --- /dev/null +++ b/docs/Frequently-Asked-Questions.md @@ -0,0 +1,171 @@ +# Frequently Asked Questions + +## I’m seeing `RedisCommandTimeoutException` + +**Symptoms:** + +`RedisCommandTimeoutException` with a stack trace like: + + io.lettuce.core.RedisCommandTimeoutException: Command timed out after 1 minute(s) + at io.lettuce.core.ExceptionFactory.createTimeoutException(ExceptionFactory.java:51) + at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:114) + at io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:69) + at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80) + at com.sun.proxy.$Proxy94.set(Unknown Source) + +**Diagnosis:** + +1. Check the debug log (log level `DEBUG` or `TRACE` for the logger + `io.lettuce.core.protocol`) + +2. Take a Thread dump to investigate Thread activity + +3. Investigate Lettuce usage, specifically for + `setAutoFlushCommands(false)` calls + +4. Do you use a custom `RedisCodec`? + +**Cause:** + +Command timeouts are caused by the fact that a command was not completed +within the configured timeout. Timeouts may be caused for various +reasons: + +1. Redis server has crashed/network partition happened and your Redis + service didn’t recover within the configured timeout + +2. Command was not finished in time. This can happen if your Redis + server is overloaded or if the connection is blocked by a command + (e.g. `BLPOP 0`, long-running Lua script). See also + [gives](#blpopdurationzero--gives-rediscommandtimeoutexception). + +3. Configured timeout does not match Redis’s performance. + +4. If you block the `EventLoop` (e.g. calling blocking methods in a + `RedisFuture` callback or in a Reactive pipeline). That can easily + happen when calling Redis commands in a Pub/Sub listener or a + `RedisConnectionStateListener`. + +5. If you manually control the flushing behavior of commands + (`setAutoFlushCommands(true/false)`), you should have a good reason + to do so. In multi-threaded environments, race conditions may easily + happen, and commands are not flushed. Updating a missing or + misplaced `flushCommands()` call might solve the problem. + +6. If you’re using a custom `RedisCodec` that can fail during encoding, + this will desynchronize the protocol state. + +**Action:** + +Check for the causes above. If the configured timeout does not match +your Redis latency characteristics, consider increasing the timeout. +Never block the `EventLoop` from your code. Make sure that your +`RedisCodec` doesn’t fail on encode. 
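
If the configured timeout is the limiting factor, a minimal sketch of
raising it (the ten- and thirty-second values are only examples):

``` java
RedisURI uri = RedisURI.Builder.redis("localhost")
        .withTimeout(Duration.ofSeconds(10)) // default command timeout for connections using this URI
        .build();

RedisClient client = RedisClient.create(uri);
StatefulRedisConnection<String, String> connection = client.connect();

// the timeout can also be adjusted per connection at runtime
connection.setTimeout(Duration.ofSeconds(30));
```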
+ +## `blpop(Duration.ZERO, …)` gives `RedisCommandTimeoutException` + +**Symptoms:** + +Calling `blpop`, `brpop` or any other blocking command followed by +`RedisCommandTimeoutException` with a stack trace like: + + io.lettuce.core.RedisCommandTimeoutException: Command timed out after 1 minute(s) + at io.lettuce.core.ExceptionFactory.createTimeoutException(ExceptionFactory.java:51) + at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:114) + at io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:69) + at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80) + at com.sun.proxy.$Proxy94.set(Unknown Source) + +**Cause:** + +The configured command timeout applies without considering +command-specific timeouts. + +**Action:** + +There are various options: + +1. Configure a higher default timeout. + +2. Consider a timeout that meets the default timeout when calling + blocking commands. + +3. Configure `TimeoutOptions` with a custom `TimeoutSource` + +``` java +TimeoutOptions timeoutOptions = TimeoutOptions.builder().timeoutSource(new TimeoutSource() { + @Override + public long getTimeout(RedisCommand command) { + + if (command.getType() == CommandType.BLPOP) { + return TimeUnit.MILLISECONDS.toNanos(CommandArgsAccessor.getFirstInteger(command.getArgs())); + } + + // -1 indicates fallback to the default timeout + return -1; + } +}).build(); +``` + +Note that commands that timed out may block the connection until either +the timeout exceeds or Redis sends a response. + +## Excessive Memory Usage or `RedisException` while disconnected + +**Symptoms:** + +`RedisException` with one of the following messages: + + io.lettuce.core.RedisException: Request queue size exceeded: n. Commands are not accepted until the queue size drops. + + io.lettuce.core.RedisException: Internal stack size exceeded: n. Commands are not accepted until the stack size drops. + +Or excessive memory allocation. + +**Diagnosis:** + +1. Check Redis connectivity + +2. Inspect memory usage + +**Cause:** + +Lettuce auto-reconnects by default to Redis to minimize service +disruption. Commands issued while there’s no Redis connection are +buffered and replayed once the server connection is reestablished. By +default, the queue is unbounded which can lead to memory exhaustion. + +**Action:** + +You can configure disconnected behavior and the request queue size +through `ClientOptions` for your workload profile. See [Client +Options](Advanced-usage.md#client-options) for further reference. + +## Performance Degradation using the Reactive API with a single connection + +**Symptoms:** + +Performance degradation when using the Reactive API with a single +connection (i.e. non-pooled connection arrangement). + +**Diagnosis:** + +1. Inspect Thread affinity of reactive signals + +**Cause:** + +Netty’s threading model assigns a single Thread to each connection which +makes I/O for a single `Channel` effectively single-threaded. With a +significant computation load and without further thread switching, the +system leverages a single thread and therefore leads to contention. + +**Action:** + +You can configure signal multiplexing for the reactive API through +`ClientOptions` by enabling `publishOnScheduler(true)`. See [Client +Options](Advanced-usage.md#client-options) for further reference. Alternatively, you can +configure `Scheduler` on each result stream through +`publishOn(Scheduler)`. 
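
A minimal sketch of the `ClientOptions` route:

``` java
ClientOptions options = ClientOptions.builder()
        .publishOnScheduler(true) // emit reactive signals on the computation thread pool instead of the I/O thread
        .build();

RedisClient client = RedisClient.create("redis://localhost");
client.setOptions(options);
```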
Note that the asynchronous API features the same +behavior and you might want to use `then…Async(…)`, `run…Async(…)`, +`apply…Async(…)`, or `handleAsync(…)` methods along with an `Executor` +object. diff --git a/docs/Getting-Started.md b/docs/Getting-Started.md new file mode 100644 index 0000000000..dcd44d8163 --- /dev/null +++ b/docs/Getting-Started.md @@ -0,0 +1,94 @@ +# Getting Started + +You can get started with Lettuce in various ways. + +## 1. Get it + +### For Maven users + +Add these lines to file pom.xml: + +``` xml + + io.lettuce + lettuce-core + 6.3.2.RELEASE + +``` + +### For Ivy users + +Add these lines to file ivy.xml: + +``` xml + + + + + +``` + +### For Gradle users + +Add these lines to file build.gradle: + +``` groovy +dependencies { + implementation 'io.lettuce:lettuce-core:6.3.2.RELEASE' +} +``` + +### Plain Java + +Download the latest binary package from + and extract the +archive. + +## 2. Start coding + +So easy! No more boring routines, we can start. + +Import required classes: + +``` java +import io.lettuce.core.*; +``` + +and now, write your code: + +``` java +RedisClient redisClient = RedisClient.create("redis://password@localhost:6379/0"); +StatefulRedisConnection connection = redisClient.connect(); +RedisCommands syncCommands = connection.sync(); + +syncCommands.set("key", "Hello, Redis!"); + +connection.close(); +redisClient.shutdown(); +``` + +Done! + +Do you want to see working examples? + +- [Standalone + Redis](https://github.com/redis/lettuce/blob/6.3.2.RELEASE/src/test/java/io/lettuce/examples/ConnectToRedis.java) + +- [Standalone Redis with + SSL](https://github.com/redis/lettuce/blob/6.3.2.RELEASE/src/test/java/io/lettuce/examples/ConnectToRedisSSL.java) + +- [Redis + Sentinel](https://github.com/redis/lettuce/blob/6.3.2.RELEASE/src/test/java/io/lettuce/examples/ConnectToRedisUsingRedisSentinel.java) + +- [Redis + Cluster](https://github.com/redis/lettuce/blob/6.3.2.RELEASE/src/test/java/io/lettuce/examples/ConnectToRedisCluster.java) + +- [Connecting to a ElastiCache + Master](https://github.com/redis/lettuce/blob/6.3.2.RELEASE/src/test/java/io/lettuce/examples/ConnectToElastiCacheMaster.java) + +- [Connecting to ElastiCache with + Master/Replica](https://github.com/redis/lettuce/blob/6.3.2.RELEASE/src/test/java/io/lettuce/examples/ConnectToMasterSlaveUsingElastiCacheCluster.java) + +- [Connecting to Azure Redis + Cluster](https://github.com/redis/lettuce/blob/6.3.2.RELEASE/src/test/java/io/lettuce/examples/ConnectToRedisClusterSSL.java) + diff --git a/docs/High-Availability-and-Sharding.md b/docs/High-Availability-and-Sharding.md new file mode 100644 index 0000000000..f9cd050126 --- /dev/null +++ b/docs/High-Availability-and-Sharding.md @@ -0,0 +1,671 @@ +# High-Availability and Sharding + +## Master/Replica + +Redis can increase availability and read throughput by using +replication. Lettuce provides dedicated Master/Replica support since 4.2 +for topologies and ReadFrom-Settings. + +Redis Master/Replica can be run standalone or together with Redis +Sentinel, which provides automated failover and master promotion. +Failover and master promotion is supported in Lettuce already since +version 3.1 for master connections. + +Connections can be obtained from the `MasterReplica` connection provider +by supplying the client, Codec, and one or multiple RedisURIs. + +### Redis Sentinel + +Master/Replica using [Redis Sentinel](#redis-sentinel) uses Redis +Sentinel as registry and notification source for topology events. 
+Details about the master and its replicas are obtained from [Redis +Sentinel](#redis-sentinel). Lettuce subscribes to [Redis +Sentinel](#redis-sentinel) events for notifications to all supplied +Sentinels. + +### Standalone Master/Replica + +Running a Standalone Master/Replica setup requires one seed address to +establish a Redis connection. Providing one `RedisURI` will discover +other nodes which belong to the Master/Replica setup and use the +discovered addresses for connections. The initial URI can point either +to a master or a replica node. + +### Static Master/Replica with predefined node addresses + +In some cases, topology discovery shouldn’t be enabled, or the +discovered Redis addresses are not suited for connections. AWS +ElastiCache falls into this category. Lettuce allows to specify one or +more Redis addresses as `List` and predefine the node topology. +Master/Replica URIs will be treated in this case as static topology, and +no additional hosts are discovered in such case. Redis Standalone +Master/Replica will discover the roles of the supplied `RedisURI`s and +issue commands to the appropriate node. + +### Topology discovery + +Master-Replica topologies are either static or semi-static. Redis +Standalone instances with attached replicas provide no failover/HA +mechanism. Redis Sentinel managed instances are controlled by Redis +Sentinel and allow failover (which include master promotion). The +`MasterReplica` API supports both mechanisms. The topology is provided +by a `TopologyProvider`: + +- `MasterReplicaTopologyProvider`: Dynamic topology lookup using the + `INFO REPLICATION` output. Replicas are listed as replicaN=…​ entries. + The initial connection can either point to a master or a replica, and + the topology provider will discover nodes. The connection needs to be + re-established outside of Lettuce in a case of a Master/Replica + failover or topology changes. + +- `StaticMasterReplicaTopologyProvider`: Topology is defined by the list + of URIs and the ROLE output. MasterReplica uses only the supplied + nodes and won’t discover additional nodes in the setup. The connection + needs to be re-established outside of Lettuce in case of a + Master/Replica failover or topology changes. + +- `SentinelTopologyProvider`: Dynamic topology lookup using the Redis + Sentinel API. In particular, `SENTINEL MASTER` and `SENTINEL REPLICAS` + output. Master/Replica failover is handled by Lettuce. + +### Topology Updates + +- Standalone Master/Replica: Performs a one-time topology lookup which + remains static afterward + +- Redis Sentinel: Subscribes to all Sentinels and listens for Pub/Sub + messages to trigger topology refreshing + +#### Transactions + +Since version 5.1, transactions and commands during a transaction are +routed to the master node to ensure atomic transaction execution on a +single node. Transactions can contain read- and write-operations so the +driver cannot decide upfront which node can be used to run the actual +transaction. 
+ +#### Examples + +``` java +RedisClient redisClient = RedisClient.create(); + +StatefulRedisMasterReplicaConnection connection = MasterReplica.connect(redisClient, StringCodec.UTF8, + RedisURI.create("redis://localhost")); +connection.setReadFrom(ReadFrom.MASTER_PREFERRED); + +System.out.println("Connected to Redis"); + +connection.close(); +redisClient.shutdown(); +``` + +``` java +RedisClient redisClient = RedisClient.create(); + +StatefulRedisMasterReplicaConnection connection = MasterReplica.connect(redisClient, StringCodec.UTF8, + RedisURI.create("redis-sentinel://localhost:26379,localhost:26380/0#mymaster")); +connection.setReadFrom(ReadFrom.MASTER_PREFERRED); + +System.out.println("Connected to Redis"); + +connection.close(); +redisClient.shutdown(); +``` + +``` java +RedisClient redisClient = RedisClient.create(); + +List nodes = Arrays.asList(RedisURI.create("redis://host1"), + RedisURI.create("redis://host2"), + RedisURI.create("redis://host3")); + +StatefulRedisMasterReplicaConnection connection = MasterReplica + .connect(redisClient, StringCodec.UTF8, nodes); +connection.setReadFrom(ReadFrom.MASTER_PREFERRED); + +System.out.println("Connected to Redis"); + +connection.close(); +redisClient.shutdown(); +``` + +## Redis Sentinel + +When using Lettuce, you can interact with Redis Sentinel and Redis +Sentinel-managed nodes in multiple ways: + +1. [Direct connection to Redis + Sentinel](#direct-connection-redis-sentinel-nodes), for issuing + Redis Sentinel commands + +2. Using Redis Sentinel to [connect to a + master](#redis-discovery-using-redis-sentinel) + +3. Using Redis Sentinel to connect to master nodes and replicas through + the {master-replica-api-link}. + +In both cases, you need to supply a `RedisURI` since the Redis Sentinel +integration supports multiple Sentinel hosts to provide high +availability. + +Please note: Redis Sentinel (Lettuce 3.x) integration provides only +asynchronous connections and no connection pooling. + +### Direct connection Redis Sentinel nodes + +Lettuce exposes an API to interact with Redis Sentinel nodes directly. +This is useful for performing administrative tasks using Lettuce. You +can monitor new master nodes, query master addresses, replicas and much +more. A connection to a Redis Sentinel node is established by +`RedisClient.connectSentinel()`. Use a [Publish/Subscribe +connection](Connecting-Redis.md#publishsubscribe) to subscribe to Sentinel events. + +### Redis discovery using Redis Sentinel + +One or more Redis Sentinels can monitor Redis instances . These Redis +instances are usually operated together with a replica of the Redis +instance. Once the master goes down, the replica is promoted to a +master. Once a master instance is not reachable anymore, the failover +process is started by the Redis Sentinels. Usually, the client +connection is terminated. The disconnect can result in any of the +following options: + +1. The master comes back: The connection is restored to the Redis + instance + +2. A replica is promoted to a master: Lettuce performs an address + lookup using the `masterId`. 
As soon as the Redis Sentinel provides + an address the connection is restored to the new Redis instance + +Read more at + +### Examples + +``` java +RedisURI redisUri = RedisURI.create("redis://sentinelhost1:26379"); +RedisClient client = new RedisClient(redisUri); + +RedisSentinelAsyncConnection connection = client.connectSentinelAsync(); + +Map map = connection.master("mymaster").get(); +``` + +``` java +RedisURI redisUri = RedisURI.Builder.sentinel("sentinelhost1", "mymaster").withSentinel("sentinelhost2").build(); +RedisClient client = RedisClient.create(redisUri); + +RedisConnection connection = client.connect(); +``` + +> [!NOTE] +> Every time you connect to a Redis instance using Redis Sentinel, the +> Redis master is looked up using a new connection to a Redis Sentinel. +> This can be time-consuming, especially when multiple Redis Sentinels +> are used and one or more of them are not reachable. + +## Redis Cluster + +Lettuce supports Redis Cluster with: + +- Support of all `CLUSTER` commands + +- Command routing based on the hash slot of the commands' key + +- High-level abstraction for selected cluster commands + +- Execution of commands on multiple cluster nodes + +- `MOVED` and `ASK` redirection handling + +- Obtaining direct connections to cluster nodes by slot and host/port + (since 3.3) + +- SSL and authentication (since 4.2) + +- Periodic and adaptive cluster topology updates + +- Publish/Subscribe + +Connecting to a Redis Cluster requires one or more initial seed nodes. +The full cluster topology view (partitions) is obtained on the first +connection so you’re not required to specify all cluster nodes. +Specifying multiple seed nodes helps to improve resiliency as Lettuce is +able to connect the cluster even if a seed node is not available. +Lettuce holds multiple connections, which are opened on demand. You are +free to operate on these connections. + +Connections can be bound to specific hosts or nodeIds. Connections bound +to a nodeId will always stick to the nodeId, even if the nodeId is +handled by a different host. Requests to unknown nodeId’s or host/ports +that are not part of the cluster are rejected. Do not close the +connections. Otherwise, unpredictable behavior will occur. Keep also in +mind that the node connections are used by the cluster connection itself +to perform cluster operations: If you block one connection all other +users of the cluster connection might be affected. + +### Command routing + +The [concept of Redis Cluster](http://redis.io/topics/cluster-tutorial) +bases on sharding. Every master node within the cluster handles one or +more slots. Slots are the [unit of +sharding](http://redis.io/topics/cluster-tutorial#redis-cluster-data-sharding) +and calculated from the commands' key using `CRC16 MOD 16384`. Hash +slots can also be specified using hash tags such as `{user:1000}.foo`. + +Every request, which incorporates at least one key is routed based on +its hash slot to the corresponding node. Commands without a key are +executed on the *default* connection that points most likely to the +first provided `RedisURI`. The same rule applies to commands operating +on multiple keys but with the limitation that all keys have to be in the +same slot. Commands operating on multiple slots will be terminated with +a `CROSSSLOT` error. 
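+
+For illustration (a sketch that is not part of the original text; it
+assumes an existing `RedisClusterClient` named `clusterClient` as in the
+examples further below), hash tags keep related keys in the same slot so
+that multi-key commands complete without a `CROSSSLOT` error:
+
+``` java
+RedisAdvancedClusterCommands<String, String> sync = clusterClient.connect().sync();
+
+// Both keys share the hash tag {user:1000} and therefore the same slot.
+Map<String, String> values = new HashMap<>();
+values.put("{user:1000}.name", "Walter");
+values.put("{user:1000}.surname", "White");
+sync.mset(values);
+
+// Multi-key read on keys within one slot; no CROSSSLOT error is raised.
+List<KeyValue<String, String>> result = sync.mget("{user:1000}.name", "{user:1000}.surname");
+```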
+ +### Cross-slot command execution and cluster-wide execution for selected commands + +Regular Redis Cluster commands are limited to single-slot keys operation +– either single key commands or multi-key commands that share the same +hash slot. + +The cross slot limitation can be mitigated by using the advanced cluster +API for *a set of selected* multi-key commands. Commands that operate on +keys with different slots are decomposed into multiple commands. The +single commands are fired in a fork/join fashion. The commands are +issued concurrently to avoid synchronous chaining. Results are +synchronized before the command is completed. + +Following commands are supported for cross-slot command execution: + +- `DEL`: Delete the `KEY`s. Returns the number of keys that were + removed. + +- `EXISTS`: Count the number of `KEY`s that exist across the master + nodes being responsible for the particular key. + +- `MGET`: Get the values of all given `KEY`s. Returns the values in the + order of the keys. + +- `MSET`: Set multiple key/value pairs for all given `KEY`s. Returns + always `OK`. + +- `TOUCH`: Alters the last access time of all given `KEY`s. Returns the + number of keys that were touched. + +- `UNLINK`: Delete the `KEY`s and reclaiming memory in a different + thread. Returns the number of keys that were removed. + +Following commands are executed on multiple cluster nodes operations: + +- `CLIENT SETNAME`: Set the client name on all known cluster node + connections. Returns always `OK`. + +- `KEYS`: Return/Stream all keys that are stored on all masters. + +- `DBSIZE`: Return the number of keys that are stored on all masters. + +- `FLUSHALL`: Flush all data on the cluster masters. Returns always + `OK`. + +- `FLUSHDB`: Flush all data on the cluster masters. Returns always `OK`. + +- `RANDOMKEY`: Return a random key from a random master. + +- `SCAN`: Scan the keyspace across the whole cluster according to + `ReadFrom` settings. + +- `SCRIPT FLUSH`: Remove all the scripts from the script cache on all + cluster nodes. + +- `SCRIPT LOAD`: Load the script into the Lua script cache on all nodes. + +- `SCRIPT KILL`: Kill the script currently in execution on all cluster + nodes. This call does not fail even if no scripts are running. + +- `SHUTDOWN`: Synchronously save the dataset to disk and then shut down + all nodes of the cluster. + +Cross-slot command execution is available on the following APIs: + +- `RedisAdvancedClusterCommands` + +- `RedisAdvancedClusterAsyncCommands` + +- `RedisAdvancedClusterReactiveCommands` + +### Execution of commands on one or multiple cluster nodes + +Sometimes commands have to be executed on multiple cluster nodes. The +advanced cluster API allows to select a set of nodes (e.g. all masters, +all replicas) and trigger a command on this set. + +``` java +RedisAdvancedClusterAsyncCommands async = clusterClient.connect().async(); +AsyncNodeSelection replicas = connection.slaves(); + +AsyncExecutions> executions = replicas.commands().keys("*"); +executions.forEach(result -> result.thenAccept(keys -> System.out.println(keys))); +``` + +The commands are triggered concurrently. This API is currently only +available for async commands. Commands are dispatched to the nodes +within the selection, the result (CompletionStage) is available through +`AsyncExecutions`. + +A node selection can be either dynamic or static. A dynamic node +selection updates its node set upon a [cluster topology view +refresh](#refreshing-the-cluster-topology-view). 
Node +selections can be constructed by the following presets: + +- masters + +- replicas (operate on connections with activated `READONLY` mode) + +- all nodes + +A custom selection of nodes is available by implementing [custom +predicates](http://redis.paluch.biz/docs/api/current/com/lambdaworks/redis/cluster/api/async/RedisAdvancedClusterAsyncCommands.html#nodes-java.util.function.Predicate-) +or lambdas. + +The particular results map to a cluster node (`RedisClusterNode`) that +was involved in the node selection. You can obtain the set of involved +`RedisClusterNode`s and all results as `CompletableFuture` from +`AsyncExecutions`. + +The node selection API is a technical preview and can change at any +time. That approach allows powerful operations but it requires further +feedback from the users. So feel free to contribute. + +### Refreshing the cluster topology view + +The Redis Cluster configuration may change at runtime. New nodes can be +added, the master for a specific slot can change. Lettuce handles +`MOVED` and `ASK` redirects transparently but in case too many commands +run into redirects, you should refresh the cluster topology view. The +topology is bound to a `RedisClusterClient` instance. All cluster +connections that are created by one `RedisClusterClient` instance share +the same cluster topology view. The view can be updated in three ways: + +1. Either by calling `RedisClusterClient.reloadPartitions` + +2. [Periodic updates](Advanced-usage.md#cluster-specific-options) in the background + based on an interval + +3. [Adaptive updates](Advanced-usage.md#cluster-specific-options) in the background + based on persistent disconnects and `MOVED`/`ASK` redirections + +By default, commands follow `-ASK` and `-MOVED` redirects [up to 5 +times](Advanced-usage.md#cluster-specific-options) until the command execution is +considered to be failed. Background topology updating starts with the +first connection obtained through `RedisClusterClient`. + +### Connection Count for a Redis Cluster Connection Object + +With Standalone Redis, a single connection object correlates with a +single transport connection. Redis Cluster works differently: A +connection object with Redis Cluster consists of multiple transport +connections. These are: + +- Default connection object (Used for key-less commands and for Pub/Sub + message publication) + +- Connection per node (read/write connection to communicate with + individual Cluster nodes) + +- When using `ReadFrom`: Read-only connection per read replica node + (read-only connection to read data from read replicas) + +Connections are allocated on demand and not up-front to start with a +minimal set of connections. Formula to calculate the maximum number of +transport connections for a single connection object: + + 1 + (N * 2) + +Where `N` is the number of cluster nodes. + +Apart of connection objects, `RedisClusterClient` uses additional +connections for topology refresh. These are created on topology refresh +and closed after obtaining the topology: + +- Set of connections for cluster topology refresh (a connection to each + cluster node) + +### Client-options + +See [Cluster-specific Client options](Advanced-usage.md#cluster-specific-options). 
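+
+As a small illustration (assuming a `RedisClusterClient` named
+`clusterClient`, as in the examples below), the redirect limit mentioned
+above can be tuned through the cluster-specific client options:
+
+``` java
+ClusterClientOptions options = ClusterClientOptions.builder()
+        .maxRedirects(5) // how many -MOVED/-ASK redirects a command may follow
+        .build();
+
+clusterClient.setOptions(options);
+```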
+ +#### Examples + +``` java +RedisURI redisUri = RedisURI.Builder.redis("localhost").withPassword("authentication").build(); + +RedisClusterClient clusterClient = RedisClusterClient.create(redisUri); +StatefulRedisClusterConnection connection = clusterClient.connect(); +RedisAdvancedClusterCommands syncCommands = connection.sync(); + +... + +connection.close(); +clusterClient.shutdown(); +``` + +``` java +RedisURI node1 = RedisURI.create("node1", 6379); +RedisURI node2 = RedisURI.create("node2", 6379); + +RedisClusterClient clusterClient = RedisClusterClient.create(Arrays.asList(node1, node2)); +StatefulRedisClusterConnection connection = clusterClient.connect(); +RedisAdvancedClusterCommands syncCommands = connection.sync(); + +... + +connection.close(); +clusterClient.shutdown(); +``` + +``` java +RedisClusterClient clusterClient = RedisClusterClient.create(RedisURI.create("localhost", 6379)); + +ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder() + .enablePeriodicRefresh(10, TimeUnit.MINUTES) + .build(); + +clusterClient.setOptions(ClusterClientOptions.builder() + .topologyRefreshOptions(topologyRefreshOptions) + .build()); +... + +clusterClient.shutdown(); +``` + +``` java +RedisURI node1 = RedisURI.create("node1", 6379); +RedisURI node2 = RedisURI.create("node2", 6379); + +RedisClusterClient clusterClient = RedisClusterClient.create(Arrays.asList(node1, node2)); + +ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder() + .enableAdaptiveRefreshTrigger(RefreshTrigger.MOVED_REDIRECT, RefreshTrigger.PERSISTENT_RECONNECTS) + .adaptiveRefreshTriggersTimeout(30, TimeUnit.SECONDS) + .build(); + +clusterClient.setOptions(ClusterClientOptions.builder() + .topologyRefreshOptions(topologyRefreshOptions) + .build()); +... + +clusterClient.shutdown(); +``` + +``` java +RedisURI node1 = RedisURI.create("node1", 6379); +RedisURI node2 = RedisURI.create("node2", 6379); + +RedisClusterClient clusterClient = RedisClusterClient.create(Arrays.asList(node1, node2)); +StatefulRedisClusterConnection connection = clusterClient.connect(); + +RedisClusterCommands node1 = connection.getConnection("host", 7379).sync(); + +... +// do not close node1 + +connection.close(); +clusterClient.shutdown(); +``` + +## ReadFrom Settings + +The ReadFrom setting describes how Lettuce routes read operations to +replica nodes. + +By default, Lettuce routes its read operations in multi-node connections +to the master node. Reading from the master returns the most recent +version of the data because write operations are issued to the single +master node. Reading from masters guarantees strong consistency. + +You can reduce latency or improve read throughput by distributing reads +to replica members for applications that do not require fully up-to-date +data. + +Be careful if using other ReadFrom settings than `MASTER`. Settings +other than `MASTER` may return stale data because the replication is +asynchronous. Data in the replicas may not hold the most recent data. + +### Redis Cluster + +Redis Cluster is a multi-node operated Redis setup that uses one or more +master nodes and allows to setup replica nodes. Redis Cluster +connections allow to set a `ReadFrom` setting on connection level. This +setting applies for all read operations on this connection. 
+
+``` java
+RedisClusterClient client = RedisClusterClient.create(RedisURI.create("host", 7379));
+StatefulRedisClusterConnection connection = client.connect();
+connection.setReadFrom(ReadFrom.REPLICA);
+
+RedisAdvancedClusterCommands sync = connection.sync();
+sync.set("key", "value");
+
+sync.get("key"); // replica read
+
+connection.close();
+client.shutdown();
+```
+
+### Master/Replica connections ("Master/Slave")
+
+Redis nodes can be operated in a Master/Replica setup to achieve
+availability and performance. Master/Replica setups can be run either
+standalone or managed using Redis Sentinel. Lettuce allows using
+replica nodes for read operations through the `MasterReplica` API that
+supports both Master/Replica setups:
+
+1. Redis Standalone Master/Replica (no failover)
+
+2. Redis Sentinel Master/Replica (Sentinel-managed failover)
+
+In both cases, the resulting connection uses the primary connection
+point to dispatch non-read operations.
+
+#### Redis Sentinel
+
+Master/Replica with Redis Sentinel is very similar to regular Redis
+Sentinel operations. When the master fails over, a replica is promoted
+by Redis Sentinel to the new master and the client obtains the new
+topology from Redis Sentinel.
+
+Connections to Master/Replica require one or more Redis Sentinel
+connection points and a master name. The primary connection point is the
+Sentinel-monitored master node.
+
+``` java
+RedisURI sentinelUri = RedisURI.Builder.sentinel("sentinel-host", 26379, "master-name").build();
+RedisClient client = RedisClient.create();
+
+StatefulRedisMasterReplicaConnection connection = MasterReplica.connect(
+    client,
+    StringCodec.UTF8,
+    sentinelUri);
+
+connection.setReadFrom(ReadFrom.REPLICA);
+
+connection.sync().get("key"); // Replica read
+
+connection.close();
+client.shutdown();
+```
+
+#### Redis Standalone
+
+Master/Replica with Redis Standalone is very similar to regular Redis
+Standalone operations. A Redis Standalone Master/Replica setup is static
+and provides no built-in failover. Replicas are discovered from the
+output of the Redis master node’s `INFO` command.
+
+Connecting to Redis Standalone Master/Replica nodes requires the
+`RedisURI` to point to the Redis master. The node used within the
+`RedisURI` is the primary connection point.
+
+``` java
+RedisURI masterUri = RedisURI.Builder.redis("master-host", 6379).build();
+RedisClient client = RedisClient.create();
+
+StatefulRedisMasterReplicaConnection connection = MasterReplica.connect(
+    client,
+    StringCodec.UTF8,
+    masterUri);
+
+connection.setReadFrom(ReadFrom.REPLICA);
+
+connection.sync().get("key"); // Replica read
+
+connection.close();
+client.shutdown();
+```
+
+### Use Cases for non-master reads
+
+The following use cases are common for using non-master read settings
+and encourage eventual consistency:
+
+- Providing local reads for geographically distributed applications. If
+  you have Redis and application servers in multiple data centers, you
+  may consider having a geographically distributed cluster. Using the
+  `LOWEST_LATENCY` setting allows the client to read from the
+  lowest-latency members, rather than always reading from the master
+  node.
+
+- Maintaining availability during a failover. Use `MASTER_PREFERRED` if
+  you want an application to read from the master by default, but to
+  allow stale reads from replicas when the master node is unavailable.
+  `MASTER_PREFERRED` allows a "read-only mode" for your application
+  during a failover.
+
+- Increasing read throughput by allowing stale reads. If you want to
+  increase your read throughput by adding additional replica nodes to
+  your cluster, use `REPLICA` to read explicitly from replicas and
+  reduce the read load on the master node. Keep in mind that replica
+  reads can return stale data.
+
+### Read from settings
+
+All `ReadFrom` settings except `MASTER` may return stale data because
+replica replication is asynchronous and requires some delay. You need
+to ensure that your application can tolerate stale data.
+
+| Setting             | Description                                                                      |
+|---------------------|----------------------------------------------------------------------------------|
+| `MASTER`            | Default mode. Read from the current master node.                                |
+| `MASTER_PREFERRED`  | Read from the master, but if it is unavailable, read from replica nodes.        |
+| `REPLICA`           | Read from replica nodes.                                                         |
+| `REPLICA_PREFERRED` | Read from replica nodes, but if none is available, read from the master.        |
+| `LOWEST_LATENCY`    | Read from any node of the cluster with the lowest latency.                      |
+| `ANY`               | Read from any node of the cluster.                                              |
+| `ANY_REPLICA`       | Read from any replica of the cluster.                                           |
+
+> [!TIP]
+> The latency of the nodes is determined upon the cluster topology
+> refresh. If the topology view is never refreshed, values from the
+> initial cluster nodes read are used.
+
+Custom read settings can be implemented by extending the
+`io.lettuce.core.ReadFrom` class.
+
diff --git a/docs/Integration-and-Extension.md b/docs/Integration-and-Extension.md
new file mode 100644
index 0000000000..22e0847477
--- /dev/null
+++ b/docs/Integration-and-Extension.md
@@ -0,0 +1,280 @@
+# Integration and Extension
+
+## Codecs
+
+Codecs are a pluggable mechanism for transcoding keys and values between
+your application and Redis. The default codec supports UTF-8 encoded
+String keys and values.
+
+Each connection may have its codec passed to the extended
+`RedisClient.connect` methods:
+
+``` java
+StatefulRedisConnection connect(RedisCodec codec)
+StatefulRedisPubSubConnection connectPubSub(RedisCodec codec)
+```
+
+Lettuce ships with predefined codecs:
+
+- `io.lettuce.core.codec.ByteArrayCodec` - use `byte[]` for keys and
+  values
+
+- `io.lettuce.core.codec.StringCodec` - use Strings for keys and values.
+  Using the default charset or a specified `Charset` with improved
+  support for `US_ASCII` and `UTF-8`.
+
+- `io.lettuce.core.codec.CipherCodec` - used for transparent encryption
+  of values.
+
+- `io.lettuce.core.codec.CompressionCodec` - apply `GZIP` or `DEFLATE`
+  compression to values.
+
+Publish/Subscribe connections use channel names and patterns for keys;
+messages are treated as values.
+
+Keys and values can be encoded independently from each other, which means
+the key can be a `java.lang.String` while the value is a `byte[]`. Many
+other combinations are possible, such as:
+
+- Representing your data as JSON if your data is mapped to a particular
+  Java type. Different types are complex to map since the codec applies
+  to all operations.
+
+- Serializing your data using the Java Serializer
+  (`ObjectInputStream`/`ObjectOutputStream`). This allows type-safe
+  conversions but is less interoperable with other languages.
+
+- Serializing your data using
+  [Kryo](https://github.com/EsotericSoftware/kryo) for improved
+  type-safe serialization.
+ +- Any specialized codecs like the `BitStringCodec` (see below) + +### Exception handling during Encoding + +Codecs should be designed in a way that doesn’t allow encoding +exceptions except for Out-of-Memory scenarios. Encoding of keys and +values happens on the Event Loop after registering a command in the +protocol stack and sending a command to the write queue. Exceptions at +that stage will leave the command in the protocol stack while a command +might have not been sent to Redis because encoding has failed. Such a +state desynchronizes the protocol state and your commands will fail +with: +`Cannot encode command. Please close the connection as the connection state may be out of sync.`. + +JSON and JDK serialization can fail because the underlying object graph +cannot be serialized (i.e. an object does not implement `Serializable` +or Jackson cannot serialize a value because of misconfiguration). If you +want to remain safe (and remove encoding load from the Event Loop), +rather serialize such objects beforehand and use the resulting `byte[]` +as value input to Redis commands. + +### Why `ByteBuffer` instead of `byte[]` + +The `RedisCodec` interface accepts and returns `ByteBuffer`s for data +interchange. A `ByteBuffer` is not opinionated about the source of the +underlying bytes. The `byte[]` interface of Lettuce 3.x required the +user to provide an array with the exact data for interchange. So if you +have an array where you want to use only a subset, you’re required to +create a new instance of a byte array and copy the data. The same +applies if you have a different byte source (e.g. netty’s `ByteBuf` or +an NIO `ByteBuffer`). The `ByteBuffer`s for decoding are pointers to the +underlying data. `ByteBuffer`s for encoding data can be either pure +pointers or allocated memory. Lettuce does not free any memory (such as +pooled buffers). + +### Diversity in Codecs + +As in every other segment of technology, there is no one-fits-it-all +solution when it comes to Codecs. Redis data structures provide a +variety of The key and value limitation of codecs is intentionally and a +balance amongst convenience and simplicity. The Redis API allows much +more variance in encoding and decoding particular data elements. A good +example is Redis hashes. A hash is identified by its key but stores +another key/value pairs. The keys of the key-value pairs could be +encoded using a different approach than the key of the hash. Another +different approach might be to use different encodings between lists and +sets. Using a base codec (such as UTF-8 or byte array) and performing an +own conversion on top of the base codec is often the better idea. + +### Multi-Threading + +A key point in Codecs is that Codecs are shared resources and can be +used by multiple threads. Your Codec needs to be thread-safe (by +shared-nothing, pooling or synchronization). Every logical Lettuce +connection uses its codec instance. Codec instances are shared as soon +as multiple threads are issuing commands or if you use Redis Cluster. + +### Compression + +Compression can be a good idea when storing larger chunks of data within +Redis. Any textual data structures (such as JSON or XML) are suited for +compression. Compression is handled at Codec-level which means you do +not have to change your application to apply compression. 
The +`CompressionCodec` provides basic and transparent compression for values +using either GZIP or Deflate compression: + +``` java +StatefulRedisConnection connection = client.connect( + CompressionCodec.valueCompressor(new SerializedObjectCodec(), CompressionCodec.CompressionType.GZIP)).sync(); + +StatefulRedisConnection connection = client.connect( + CompressionCodec.valueCompressor(StringCodec.UTF8, CompressionCodec.CompressionType.DEFLATE)).sync(); +``` + +Compression can be used with any codec, the compressor just wraps the +inner `RedisCodec` and compresses/decompresses the data that is +interchanged. You can build your own compressor the same way as you can +provide own codecs. + +### Examples + +``` java +public class BitStringCodec extends StringCodec { + @Override + public String decodeValue(ByteBuffer bytes) { + StringBuilder bits = new StringBuilder(bytes.remaining() * 8); + while (bytes.remaining() > 0) { + byte b = bytes.get(); + for (int i = 0; i < 8; i++) { + bits.append(Integer.valueOf(b >>> i & 1)); + } + } + return bits.toString(); + } +} + +StatefulRedisConnection connection = client.connect(new BitStringCodec()); +RedisCommands redis = connection.sync(); + +redis.setbit(key, 0, 1); +redis.setbit(key, 1, 1); +redis.setbit(key, 2, 0); +redis.setbit(key, 3, 0); +redis.setbit(key, 4, 0); +redis.setbit(key, 5, 1); + +redis.get(key) == "00100011" +``` + +``` java +public class SerializedObjectCodec implements RedisCodec { + private Charset charset = Charset.forName("UTF-8"); + + @Override + public String decodeKey(ByteBuffer bytes) { + return charset.decode(bytes).toString(); + } + + @Override + public Object decodeValue(ByteBuffer bytes) { + try { + byte[] array = new byte[bytes.remaining()]; + bytes.get(array); + ObjectInputStream is = new ObjectInputStream(new ByteArrayInputStream(array)); + return is.readObject(); + } catch (Exception e) { + return null; + } + } + + @Override + public ByteBuffer encodeKey(String key) { + return charset.encode(key); + } + + @Override + public ByteBuffer encodeValue(Object value) { + try { + ByteArrayOutputStream bytes = new ByteArrayOutputStream(); + ObjectOutputStream os = new ObjectOutputStream(bytes); + os.writeObject(value); + return ByteBuffer.wrap(bytes.toByteArray()); + } catch (IOException e) { + return ByteBuffer.wrap(new byte[0]); + } + } +} +``` + +## CDI Support + +CDI support for Lettuce is available for `RedisClient` and +`RedisClusterClient`. You need to provide a `RedisURI` in order to get +Lettuce injected. 
+ +### RedisURI producer + +Implement a simple producer (either field producer or producer method) +of `RedisURI`: + +``` java +@Produces +public RedisURI redisURI() { + return RedisURI.Builder.redis("localhost").build(); +} +``` + +Lettuce also supports qualified `RedisURI`'s: + +``` java +@Produces +@PersonDB +public RedisURI redisURI() { + return RedisURI.Builder.redis("localhost").build(); +} +``` + +### Injection + +After declaring your `RedisURI`'s you can start using Lettuce in your +classes: + +``` java +public class InjectedClient { + + @Inject + private RedisClient redisClient; + + @Inject + private RedisClusterClient redisClusterClient; + + @Inject + @PersonDB + private RedisClient redisClient; + + private RedisConnection connection; + + @PostConstruct + public void postConstruct() { + connection = redisClient.connect(); + } + + public void pingRedis() { + connection.ping(); + } + + @PreDestroy + public void preDestroy() { + if (connection != null) { + connection.close(); + } + } +} +``` + +### Activating Lettuce’s CDI extension + +By default, you just drop Lettuce on your classpath and declare at least +one `RedisURI` bean. That’s all. + +The CDI extension registers one bean pair (`RedisClient` and +`RedisClusterClient`) per discovered `RedisURI`. This means, if you do +not declare any `RedisURI` producers, the CDI extension won’t be +activated at all. This way you can use Lettuce in CDI-capable containers +without even activating the CDI extension. + +All produced beans (`RedisClient` and `RedisClusterClient`) remain +active as long as your application is running since the beans are +`@ApplicationScoped`. + diff --git a/docs/New--Noteworthy.md b/docs/New--Noteworthy.md new file mode 100644 index 0000000000..5bde497de0 --- /dev/null +++ b/docs/New--Noteworthy.md @@ -0,0 +1,172 @@ +# New & Noteworthy + +## What’s new in Lettuce 6.3 + +- [Redis Function support](Connecting-Redis.md#redis-functions) (`fcall` and `FUNCTION` + commands). + +- Support for Library Name and Version through `LettuceVersion`. + Automated registration of the Lettuce library version upon connection + handshake. + +- Support for Micrometer Tracing to trace observations (distributed + tracing and metrics). + +## What’s new in Lettuce 6.2 + +- [`RedisCredentialsProvider`](Connecting-Redis.md#authentication) abstraction to + externalize credentials and credentials rotation. + +- Retrieval of Redis Cluster node connections using `ConnectionIntent` + to obtain read-only connections. + +- Master/Replica now uses `SENTINEL REPLICAS` to discover replicas + instead of `SENTINEL SLAVES`. + +## What’s new in Lettuce 6.1 + +- Kotlin Coroutines support for `SCAN`/`HSCAN`/`SSCAN`/`ZSCAN` through + `ScanFlow`. + +- Command Listener API through + `RedisClient.addListener(CommandListener)`. + +- [Micrometer support](Advanced-usage.md#micrometer) through + `MicrometerCommandLatencyRecorder`. + +- [Experimental support for `io_uring`](Advanced-usage.md#native-transports). + +- Configuration of extended Keep-Alive options through + `KeepAliveOptions` (only available for some transports/Java versions). + +- Configuration of netty’s `AddressResolverGroup` through + `ClientResources`. Uses `DnsAddressResolverGroup` when + `netty-resolver-dns` is on the classpath. + +- Add support for Redis ACL commands. 
+ +- [Java Flight Recorder Events](Advanced-usage.md#java-flight-recorder-events-since-61) + +## What’s new in Lettuce 6.0 + +- Support for RESP3 usage with Redis 6 along with RESP2/RESP3 handshake + and protocol version discovery. + +- ACL authentication using username and password or password-only + authentication. + +- Cluster topology refresh is now non-blocking. + +- [Kotlin Coroutine Extensions](Connecting-Redis.md#kotlin-api). + +- RxJava 3 support. + +- Refined Scripting API accepting the Lua script either as `byte[]` or + `String`. + +- Connection and Queue failures now no longer throw an exception but + properly associate the failure with the Future handle. + +- Removal of deprecated API including timeout methods accepting + `TimeUnit`. Use methods accepting `Duration` instead. + +- Lots of internal refinements. + +- `xpending` methods return now `List` and + `PendingMessages` + +- Spring support removed. Use Spring Data Redis for a seamless Spring + integration with Lettuce. + +- `AsyncConnectionPoolSupport.createBoundedObjectPool(…)` methods are + now blocking to await pool initialization. + +- `DecodeBufferPolicy` for fine-grained memory reclaim control. + +- `RedisURI.toString()` renders masked password. + +- `ClientResources.commandLatencyCollector(…)` refactored into + `ClientResources.commandLatencyRecorder(…)` returning + `CommandLatencyRecorder`. + +## What’s new in Lettuce 5.3 + +- Improved SSL configuration supporting Cipher suite selection and + PEM-encoded certificates. + +- Fixed method signature for `randomkey()`. + +- Un-deprecated `ClientOptions.pingBeforeActivateConnection` to allow + connection verification during connection handshake. + +## What’s new in Lettuce 5.2 + +- Allow randomization of read candidates using Redis Cluster. + +- SSL support for Redis Sentinel. + +## What’s new in Lettuce 5.1 + +- Add support for `ZPOPMIN`, `ZPOPMAX`, `BZPOPMIN`, `BZPOPMAX` commands. + +- Add support for Redis Command Tracing through Brave, see [Configuring + Client resources](Advanced-usage.md#configuring-client-resources). + +- Add support for [Redis + Streams](https://redis.io/topics/streams-intro). + +- Asynchronous `connect()` for Master/Replica connections. + +- [Asynchronous Connection Pooling](Advanced-usage.md#asynchronous-connection-pooling) + through `AsyncConnectionPoolSupport` and `AsyncPool`. + +- Dedicated exceptions for Redis `LOADING`, `BUSY`, and `NOSCRIPT` + responses. + +- Commands in at-most-once mode (auto-reconnect disabled) are now + canceled already on disconnect. + +- Global command timeouts (also for reactive and asynchronous API usage) + configurable through [Client Options](Advanced-usage.md#client-options). + +- Host and port mappers for Lettuce usage behind connection + tunnels/proxies through `SocketAddressResolver`, see [Configuring + Client resources](Advanced-usage.md#configuring-client-resources). + +- `SCRIPT LOAD` dispatch to all cluster nodes when issued through + `RedisAdvancedClusterCommands`. + +- Reactive `ScanStream` to iterate over the keyspace using `SCAN` + commands. + +- Transactions using Master/Replica connections are bound to the master + node. + +## What’s new in Lettuce 5.0 + +- New artifact coordinates: `io.lettuce:lettuce-core` and packages moved + from `com.lambdaworks.redis` to `io.lettuce.core`. + +- [Reactive API](Connecting-Redis.md#reactive-api) now Reactive Streams-based using + [Project Reactor](https://projectreactor.io/). 
+ +- [Redis Command + Interfaces](Working-with-dynamic-Redis-Command-Interfaces.md) supporting + dynamic command invocation and Redis Modules. + +- Enhanced, immutable Key-Value objects. + +- Asynchronous Cluster connect. + +- Native transport support for Kqueue on macOS systems. + +- Removal of support for Guava. + +- Removal of deprecated `RedisConnection` and `RedisAsyncConnection` + interfaces. + +- Java 9 compatibility. + +- HTML and PDF reference documentation along with a new project website: + . + diff --git a/docs/Overview.md b/docs/Overview.md new file mode 100644 index 0000000000..a5d12533a1 --- /dev/null +++ b/docs/Overview.md @@ -0,0 +1,142 @@ +# Overview + +This document is the reference guide for Lettuce. It explains how to use +Lettuce, its concepts, semantics, and the syntax. + +You can read this reference guide in a linear fashion, or you can skip +sections if something does not interest you. + +This section provides some basic introduction to Redis. The rest of the +document refers only to Lettuce features and assumes the user is +familiar with Redis concepts. + +## Knowing Redis + +NoSQL stores have taken the storage world by storm. It is a vast domain +with a plethora of solutions, terms and patterns (to make things worse +even the term itself has multiple +[meanings](https://www.google.com/search?q=nosql+acronym)). While some +of the principles are common, it is crucial that the user is familiar to +some degree with Redis. The best way to get acquainted to these +solutions is to read and follow their documentation - it usually doesn't +take more than 5-10 minutes to go through them and if you are coming +from an RDMBS-only background many times these exercises can be an +eye-opener. + +The jumping off ground for learning about Redis is +[redis.io](https://www.redis.io/). Here is a list of other useful +resources: + +- The [interactive tutorial](https://try.redis.io/) introduces Redis. + +- The [command references](https://redis.io/commands) explains Redis + commands and contains links to getting started guides, reference + documentation and tutorials. + +## Project Reactor + +[Reactor](https://projectreactor.io) is a highly optimized reactive +library for building efficient, non-blocking applications on the JVM +based on the [Reactive Streams +Specification](https://github.com/reactive-streams/reactive-streams-jvm). +Reactor based applications can sustain very high throughput message +rates and operate with a very low memory footprint, making it suitable +for building efficient event-driven applications using the microservices +architecture. + +Reactor implements two publishers +[Flux\](https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html) +and +[Mono\](https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Mono.html), +both of which support non-blocking back-pressure. This enables exchange +of data between threads with well-defined memory usage, avoiding +unnecessary intermediate buffering or blocking. + +## Non-blocking API for Redis + +Lettuce is a scalable thread-safe Redis client based on +[netty](https://netty.io) and Reactor. Lettuce provides +[synchronous](Connecting-Redis.md#basic-usage), [asynchronous](Connecting-Redis.md#asynchronous-api) and +[reactive](Connecting-Redis.md#reactive-api) APIs to interact with Redis. + +## Requirements + +Lettuce 6.x binaries require JDK level 8.0 and above. + +In terms of [Redis](https://redis.io/), at least 2.6. 
+ +## Additional Help Resources + +Learning a new framework is not always straight forward.In this section, +we try to provide what we think is an easy-to-follow guide for starting +with Lettuce. However, if you encounter issues or you are just looking +for an advice, feel free to use one of the links below: + +### Support + +There are a few support options available: + +- Lettuce on Stackoverflow + [Stackoverflow](https://stackoverflow.com/questions/tagged/lettuce) is + a tag for all Lettuce users to share information and help each + other.Note that registration is needed **only** for posting. + +- Get in touch with the community on + [Gitter](https://gitter.im/lettuce-io/Lobby). + +- GitHub Discussions: + + +- Report bugs (or ask questions) in GitHub issues + . + +### Following Development + +For information on the Lettuce source code repository, nightly builds +and snapshot artifacts please see the [Lettuce +homepage](https://lettuce.io). You can help make lettuce best serve the +needs of the lettuce community by interacting with developers through +the Community on +[Stackoverflow](https://stackoverflow.com/questions/tagged/lettuce). If +you encounter a bug or want to suggest an improvement, please create a +ticket on the lettuce issue +[tracker](https://github.com/redis/lettuce/issues). + +### Project Metadata + +- Version Control – + +- Releases and Binary Packages – + + +- Issue tracker – + +- Release repository – (Maven Central) + +- Snapshot repository – + (OSS + Sonatype Snapshots) + +## Where to go from here + +- Head to [Getting Started](Getting-Started.md) if you feel like jumping + straight into the code. + +- Go to [High-Availability and + Sharding](High-Availability-and-Sharding.md) for Master/Replica + ("Master/Slave"), Redis Sentinel and Redis Cluster topics. + +- In order to dig deeper into the core features of Reactor: + + - If you’re looking for client configuration options, performance + related behavior and how to use various transports, go to [Advanced + usage](Advanced-usage.md). + + - See [Integration and Extension](Integration-and-Extension.md) for + extending Lettuce with codecs or integrate it in your CDI/Spring + application. + + - You want to know more about **at-least-once** and **at-most-once**? + Take a look into [Command execution + reliability](Advanced-usage.md#command-execution-reliability). + diff --git a/docs/Working-with-dynamic-Redis-Command-Interfaces.md b/docs/Working-with-dynamic-Redis-Command-Interfaces.md new file mode 100644 index 0000000000..e59b4dd9ad --- /dev/null +++ b/docs/Working-with-dynamic-Redis-Command-Interfaces.md @@ -0,0 +1,584 @@ +# Working with dynamic Redis Command Interfaces + +The Redis Command Interface abstraction provides a dynamic way for +typesafe Redis command invocation. It allows you to declare an interface +with command methods to significantly reduce boilerplate code required +to invoke a Redis command. + +## Introduction + +Redis is a data store supporting over 190 documented commands and over +450 command permutations. The community supports actively Redis +development; each major Redis release comes with new commands. Command +growth and keeping track with upcoming modules are challenging for +client developers and Redis user as there is no full command coverage +for each module in a single Redis client. + +The central interface in Lettuce Command Interface abstraction is +`Commands`. This interface acts primarily as a marker interface to help +you to discover interfaces that extend this one. 
The `KeyCommands` +interface below declares some command methods. + +``` java +public interface KeyCommands extends Commands { + + String get(String key); + + String set(String key, String value); + + String set(String key, byte[] value); +} +``` + +- Retrieves a key by its name. + +- Sets a key and value. + +- Sets a key and a value by using bytes. + +The interface from above declares several methods. Let’s take a brief +look at `String set(String key, String value)`. We can derive from that +declaration certain things: + +- It should be executed synchronously – there’s no + [asynchronous](#asynchronous-future-execution) or + [reactive](#reactive-execution) wrapper declared in the result type. + +- The Redis command method returns a `String` - that reveals something + regarding the command result expectation. This command expects a reply + that can be represented as `String`. + +- The method is named `set` so the derived command will be named `set`. + +- There are two parameters defined: `String key` and `String value`. + Although Redis does not take any other parameter types than bulk + strings, we still can apply a transformation to the parameters – we + can conclude their serialization from the declared type. + +The `set` command from above called would look like: + + commands.set("key", "value"); + +This command translates to: + + SET key value + +## Command methods + +With Lettuce, declaring command methods becomes a four-step process: + +1. Declare an interface extending `Commands`. + + ``` java + interface KeyCommands extends Commands { … } + ``` + +2. Declare command methods on the interface. + + ``` java + interface KeyCommands extends Commands { + String get(String key); + } + ``` + +3. Set up Lettuce to create proxy instances for those interfaces. + + ``` java + RedisClient client = … + RedisCommandFactory factory = new RedisCommandFactory(client.connect()); + ``` + +4. Get the commands instance and use it. + + ``` java + public class SomeClient { + + KeyCommands commands; + + public SomeClient(RedisCommandFactory factory) { + commands = factory.getCommands(KeyCommands.class); + } + + public void doSomething() { + String value = commands.get("Walter"); + } + } + ``` + +The sections that follow explain each step in detail. + +## Defining command methods + +As a first step, you define a specific command interface. The interface +must extend `Commands`. + +Command methods are declared inside the commands interface like regular +methods (probably not that much of a surprise). Lettuce derives commands +(name, arguments, and response) from each declared method. + +### Command naming + +The commands proxy has two ways to derive a Redis command from the +method name. It can derive the command name from the method name +directly, or by using a manually defined `@Command` annotation. However, +there’s got to be a strategy that decides what actual command is +created. Let’s have a look at the available options. + +``` java +public interface MixedCommands extends Commands { + + List mget(String... keys); + + @Command("MGET") + List mgetAsValues(String... keys); + + @CommandNaming(strategy = DOT) + double nrRun(String key, int... indexes) +} +``` + +- Plain command method. Lettuce will derive to the `MGET` command. + +- Command method annotated with `@Command`. Lettuce will execute `MGET` + since annotations have a higher precedence than method-based name + derivation. + +- Redis commands consist of one or multiple command parts or follow a + different naming strategy. 
The recommended pattern for commands + provided by modules is using dot notation. Command methods can derive + from "camel humps" that style by placing a dot (`.`) between name + parts. + +> [!NOTE] +> Command names are attempted to be resolved against `CommandType` to +> participate in settings for known commands. These are primarily used +> to determine a command intent (whether a command is a read-only one). +> Commands are resolved case-sensitive. Use lower-case command names in +> `@Command` to resolve to an unknown command to e.g. enforce +> master-routing. + +### CamelCase in method names + +Command methods use by default the method name command type. This is +ideal for commands like `GET`, `SET`, `ZADD` and so on. Some commands, +such as `CLIENT SETNAME` consist of multiple command segments and +passing `SETNAME` as argument to a method `client(…)` feels rather +clunky. + +Camel case is a natural way to express word boundaries in method names. +These "camel humps" (changes in letter casing) can be interpreted in +different ways. The most common case is to translate a change in case +into a space between command segments. + +``` java +interface ServerCommands extends Commands { + String clientSetname(String name); +} +``` + +Invoking `clientSetname(…)` will execute the Redis command +`CLIENT SETNAME name`. + +#### `@CommandNaming` + +Camel humps are translated to whitespace-delimited command segments by +default. Methods and the commands interface can be annotated with +`@CommandNaming` to apply a different strategy. + +``` java +@CommandNaming(strategy = Strategy.DOT) +interface MixedCommands extends Commands { + + @CommandNaming(strategy = Strategy.SPLIT) + String clientSetname(String name); + + @CommandNaming(strategy = Strategy.METHOD_NAME) + String mSet(String key1, String value1, String key2, String value2); + + double nrRun(String key, int... indexes) +} +``` + +You can choose amongst multiple strategies: + +- `SPLIT`: Splits camel-case method names into multiple command + segments: `clientSetname` executes `CLIENT SETNAME`. This is the + default strategy. + +- `METHOD_NAME`: Uses the method name as-is: `mSet` executes `MSET`. + +- `DOT`: Translates camel-case method names into dot-notation that is + the recommended pattern for module-provided commands. `nrRun` executes + `NR.RUN`. + +### `@Command` annotation + +You already learned, that method names are used as command type any by +default all arguments are appended to the command. Some cases, such as +the example from above, require in Java declaring a method with a +different name because of variance in the return type. `mgetAsValues` +would execute a non-existent command `MGETASVALUES`. + +Annotating command methods with `@Command` lets you take control over +implicit conventions. The annotation value overrides the command name +and provides command segments to command methods. Command segments are +parts of a command that are sent to Redis. The semantics of a command +segment depend on context and the command itself. +`@Command("CLIENT SETNAME")` denotes a subcommand of the `CLIENT` +command while a method annotated with `@Command("SET key")` invokes +`SET`, using `mykey` as key. `@Command` lets you specify whole command +strings and reference [parameters](#parameters) to construct custom +commands. + +``` java +interface MixedCommands extends Commands { + + @Command("CLIENT SETNAME") + String setName(String name); + + @Command("MGET") + List mgetAsValues(String... 
keys); + + @Command("SET mykey") + String set(String value); + + @Command("NR.OBSERVE ?0 ?1 -> ?2 TRAIN") + List nrObserve(String key, int[] in, int... out) +} +``` + +### Parameters + +Most Redis commands take one or more parameters to operate with your +data. Using command methods with Redis appends all parameters in their +specified order to the command as arguments. You have already seen +commands annotated with `@Command("MGET")` or with no annotation at all. +Commands append their parameters as command arguments as declared in the +method signature. + +``` java +interface MixedCommands extends Commands { + + @Command("SET ?1 ?0") + String set(String value, String key); + + @Command("NR.OBSERVE :key :in -> :out TRAIN") + List nrObserve(@Param("key") String key, @Param("in") int[] in, @Param("out") int... out) +} +``` + +`@Command`-annotated command methods allow references to parameters. You +can use index-based or name-based parameter references. Index-based +references (`?0`, `?1`, …) are zero-based. Name-based parameters +(`:key`, `:in`) reference parameters by their name. Java 8 provides +access to parameter names if the code was compiled with +`javac -parameters`. Parameter names can be supplied alternatively by +`@Param`. Please note that all parameters are required to be annotated +if using `@Param`. + +> [!NOTE] +> The same parameter can be referenced multiple times. Not referenced +> parameters are appended as arguments after the last command segment. + +#### Keys and values + +Redis commands are usually less concerned about key and value type since +all data is bytes anyway. In the context of Redis Cluster, the very +first key affects command routing. Keys and values are discovered by +verifying their declared type assignability to `RedisCodec` key and +value types. In some cases, where keys and values are indistinguishable +from their types, it might be required to hint command methods about +keys and values. You can annotate key and value parameters with `@Key` +and `@Value` to control which parameters should be treated as keys or +values. + +``` java +interface KeyCommands extends Commands { + + String set(@Key String key, @Value String value); +} +``` + +Hinting command method parameters influences +[`RedisCodec`](#codecs) selection. + +#### Parameter types + +Command method parameter types are just limited by the +[`RedisCodec`s](#codecs) that are supplied to +`RedisCommandFactory`. Command methods, however, support a basic set of +parameter types that are agnostic to the selected codec. If a parameter +is identified as key or value and the codec supports that parameter, +this specific parameter is encoded by applying codec conversion. + +Built-in parameter types: + +- `String` - encoded to bytes using `ASCII`. + +- `byte[]` + +- `double`/`Double` + +- `ProtocolKeyword` - using its byte-representation. `ProtocolKeyword` + is useful to declare/reuse commonly used Redis keywords, see + `io.lettuce.core.protocol.CommandType` and + `io.lettuce.core.protocol.CommandKeyword`. + +- `Map` - key and value encoding of key-value pairs using `RedisCodec`. + +- types implementing `io.lettuce.core.CompositeParameter` - Lettuce + comes with a set of command argument types such as `BitFieldArgs`, + `SetArgs`, `SortArgs`, … that can be used as parameter. Providing + `CompositeParameter` will ontribute multiple command arguments by + invoking the `CompositeParameter.build(CommandArgs)` method. 
+ +- `Value`, `KeyValue`, and `ScoredValue` that are encoded to their + value, key and value and score and value representation using + `RedisCodec`. + +- `GeoCoordinates` - contribute longitude and latitude command arguments + +- `Limit` - used together with `ZRANGEBYLEX`/`ZRANGEBYSCORE` commands. + Will add `LIMIT (offset) (count)` segments to the command. + +- `Range` - used together with `ZCOUNT`/`ZRANGEBYLEX`/`ZRANGEBYSCORE` + commands. Numerical commands are converted to numerical boundaries + (`` inf`, `(1.0`, `[1.0`). Value-typed `Range` parameters are encoded to their value boundary representation (` ``, + `-`, `[value`, `(value`). + +Command methods accept other, special parameter types such as `Timeout` +or `FlushMode` that control [execution-model +specific](#execution-models) behavior. Those parameters are filtered +from command arguments. + +### Codecs + +Redis command interfaces use `RedisCodec`s for key/value encoding and +decoding. Each command method performs `RedisCodec` resolution so each +command method can use a different `RedisCodec`. Codec resolution is +based on key and value types declared in the command method signature. +Key and value parameters can be annotated with `@Key`/`@Value` +annotations to hint codec resolution to the appropriate types. Codec +resolution checks all annotated parameters for compatibility. If types +are assignable to codec types, the codec is selected for a particular +command method. + +Codec resolution without annotation is based on a compatible type +majority. A command method resolves to the codec accepting the most +compatible types. See also [Keys and values](#keys-and-values) for +details on key/value encoding. Depending on provided codecs and the +command method signature it’s possible that no codec can be resolved. +You need to provide either a compatible `RedisCodec` or adjust parameter +types in the method signature to provide a compatible method signature. +`RedisCommandFactory` uses `StringCodec` (UTF-8) and `ByteArrayCodec` by +default. + +``` java +RedisCommandFactory factory = new RedisCommandFactory(connection, Arrays.asList(new ByteArrayCodec(), new StringCodec(LettuceCharsets.UTF8))); +``` + +The resolved codec is also applied to command response deserialization +that allows you to use parametrized command response types. + +### Response types + +Another aspect of command methods is their response type. Redis command +responses consist of simple strings, bulk strings (byte streams) or +arrays with nested elements depending on the issued command. + +You can choose amongst various return types that map to a particular +{custom-commands-command-output-link}. A command output can return +either its return type directly (`List` for `StringListOutput`) +or stream individual elements (`String` for `StringListOutput` as it +implements `StreamingOutput`). Command output resolution depends +on whether the declared return type supports streaming. The currently +only supported streaming output are reactive wrappers such as `Flux`. + +`RedisCommandFactory` comes with built-in command outputs that are +resolved from `OutputRegistry`. You can choose from built-in command +output types or register your own `CommandOutput`. + +A command method can return its response directly or wrapped in a +response wrapper. See [Execution models](#execution-models) for +execution-specific wrapper types. 

| `CommandOutput` class | return type | streaming type |
|----|----|----|
| `ListOfMapsOutput` | `List<Map<K, V>>` | |
| `ArrayOutput` | `List<Object>` | |
| `DoubleOutput` | `Double`, `double` | |
| `ByteArrayOutput` | `byte[]` | |
| `IntegerOutput` | `Long`, `long` | |
| `KeyOutput` | `K` (Codec key type) | |
| `KeyListOutput` | `List<K>` (Codec key type) | `K` (Codec key type) |
| `ValueOutput` | `V` (Codec value type) | |
| `ValueListOutput` | `List<V>` (Codec value type) | `V` (Codec value type) |
| `ValueSetOutput` | `Set<V>` (Codec value type) | |
| `MapOutput` | `Map<K, V>` | |
| `BooleanOutput` | `Boolean`, `boolean` | |
| `BooleanListOutput` | `List<Boolean>` | `Boolean` |
| `GeoCoordinatesListOutput` | `List<GeoCoordinates>` | |
| `GeoCoordinatesValueListOutput` | `List<Value<GeoCoordinates>>` | `Value<GeoCoordinates>` |
| `ScoredValueListOutput` | `List<ScoredValue<V>>` | `ScoredValue<V>` |
| `StringValueListOutput` (ASCII) | `List<Value<String>>` | `Value<String>` |
| `StringListOutput` (ASCII) | `List<String>` | `String` |
| `ValueValueListOutput` | `List<Value<V>>` | `Value<V>` |
| `VoidOutput` | `Void`, `void` | |

Built-in command output types

## Execution models

Each declared command method requires a synchronization mode, or more
specifically, an execution model. Lettuce uses an event-driven command
execution model to send commands, process responses, and signal
completion. Command methods can execute their commands in a synchronous,
[asynchronous](Connecting-Redis.md#asynchronous-api) or [reactive](Connecting-Redis.md#reactive-api) way.

The choice of a particular execution model is made at the return type
level, more specifically on the return type wrapper. Each command method
may use a different execution model, so command methods within a command
interface may mix different execution models.

### Synchronous (Blocking) Execution

Declaring a non-wrapped return type (like `List<String>` or `String`) will
execute commands synchronously. See [Basic usage](Connecting-Redis.md#basic-usage)
for more details on synchronous command execution.

Blocking command execution applies, by default, the timeouts set at
connection level. Command methods support timeouts at invocation level by
defining a special `Timeout` parameter. The parameter position does not
affect command segments since special parameters are filtered from the
command arguments. Supplying `null` will apply the connection defaults.

``` java
interface KeyCommands extends Commands {

  String get(String key, Timeout timeout);
}

KeyCommands commands = …

commands.get("key", Timeout.create(10, TimeUnit.SECONDS));
```

### Asynchronous (Future) Execution

Command methods wrapping their response in `Future`,
`CompletableFuture`, `CompletionStage` or `RedisFuture` will execute
their commands asynchronously. Invoking an asynchronous command method
will send the command to Redis at invocation time and return a handle
that allows you to synchronize on or chain the command execution.

``` java
interface KeyCommands extends Commands {

  RedisFuture<String> get(String key, Timeout timeout);
}
```

### Reactive Execution

You can declare command methods that wrap their response in a reactive
type for reactive command execution. Invoking a reactive command method
will not send the command to Redis until the resulting subscriber
signals demand for data to its subscription. Using reactive wrapper
types allows [result streaming](#response-types) by emitting data as it’s
received from the I/O channel.

Currently supported reactive types:

- Project Reactor `Mono` and `Flux` (native)

- RxJava 1 `Single` and `Observable` (via `rxjava-reactive-streams`)

- RxJava 2 `Single`, `Maybe` and `Flowable` (via `rxjava` 2.0)

See [Reactive API](Connecting-Redis.md#reactive-api) for more details.

``` java
interface KeyCommands extends Commands {

  @Command("GET")
  Mono<String> get(String key);

  @Command("GET")
  Maybe<String> getRxJava2Maybe(String key);

  Flowable<String> lrange(String key, long start, long stop);
}
```

### Batch Execution

Command interfaces support command batching to collect multiple commands
in a batch queue and flush the batch in a single write to the transport.
Command batching executes commands in a deferred manner. This means that
at the time of invocation no result is available. Batching can only be
used with synchronous methods without a return value (`void`) or
asynchronous methods returning a `RedisFuture`. Reactive command
batching is not supported because reactively executed commands maintain
their own subscription lifecycle that is decoupled from command method
batching.

Command batching can be enabled at two levels:

- At class level, by annotating the command interface with `@BatchSize`.
  All methods participate in command batching.

- At method level, by adding `CommandBatching` to the arguments. The
  method participates selectively in command batching.

``` java
@BatchSize(50)
interface StringCommands extends Commands {

  void set(String key, String value);

  RedisFuture<String> get(String key);

  RedisFuture<String> get(String key, CommandBatching batching);
}

StringCommands commands = …

commands.set("key", "value"); // queued until 50 command invocations are reached.
                              // The 50th invocation flushes the queue.

commands.get("key", CommandBatching.queue()); // invocation-level queueing control
commands.get("key", CommandBatching.flush()); // invocation-level queueing control,
                                              // flushes all queued commands
```

Batching can be controlled per invocation by passing a
`CommandBatching` argument. `CommandBatching` has precedence over
`@BatchSize`.

To flush queued commands at any time (without a further command
invocation), add `BatchExecutor` to your interface definition.

``` java
@BatchSize(50)
interface StringCommands extends Commands, BatchExecutor {

  RedisFuture<String> get(String key);
}

StringCommands commands = …

commands.get("key");

commands.flush(); // force-flush
```

#### Batch execution synchronization

Queued command batches are flushed either on reaching the batch size or
on a force flush (via `BatchExecutor.flush()` or `CommandBatching.flush()`).
Errors are transported through `RedisFuture`. Synchronous commands don’t
receive any result/exception signal unless the batch is flushed
through a synchronous method call. Synchronous flushing throws a
`BatchException` containing the failed commands.
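
To make that failure mode concrete, a synchronous force-flush can be
guarded as in the following sketch. The interface and key names are
illustrative, and `getFailedCommands()` is assumed here as the accessor
for the commands carried by the `BatchException`:

``` java
@BatchSize(50)
interface StringCommands extends Commands, BatchExecutor {

  void set(String key, String value);
}

StringCommands commands = …

commands.set("key", "value"); // queued, no result available yet

try {
    commands.flush(); // synchronous force-flush of the queued batch
} catch (BatchException e) {
    // the exception carries the commands that failed within the flushed batch
    e.getFailedCommands().forEach(failed -> System.err.println("Failed: " + failed));
}
```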
+ diff --git a/docs/index.md b/docs/index.md new file mode 100644 index 0000000000..983037e1b3 --- /dev/null +++ b/docs/index.md @@ -0,0 +1,11 @@ +# Table of Contents + +- [Overview](<./Overview.md>) +- [New & Noteworthy](<./New--Noteworthy.md>) +- [Getting Started](<./Getting-Started.md>) +- [Connecting Redis](<./Connecting-Redis.md>) +- [High-Availability and Sharding](<./High-Availability-and-Sharding.md>) +- [Working with dynamic Redis Command Interfaces](<./Working-with-dynamic-Redis-Command-Interfaces.md>) +- [Advanced usage](<./Advanced-usage.md>) +- [Integration and Extension](<./Integration-and-Extension.md>) +- [Frequently Asked Questions](<./Frequently-Asked-Questions.md>) \ No newline at end of file From b5b2da4ceac1b334573db4e465588f7d6dc3d103 Mon Sep 17 00:00:00 2001 From: Igor Malinovskyi Date: Fri, 9 Aug 2024 12:44:46 +0200 Subject: [PATCH 02/12] Overhaul docs and fix obvious conversion issues --- .github/workflows/docs.yml | 37 + docs/Connecting-Redis.md | 2139 ----------------- docs/README.md | 18 + docs/{Advanced-usage.md => advanced-usage.md} | 242 +- .../{Frequently-Asked-Questions.md => faq.md} | 4 +- ...{Getting-Started.md => getting-started.md} | 0 ...ability-and-Sharding.md => ha-sharding.md} | 35 +- docs/index.md | 11 - ...-Extension.md => integration-extension.md} | 0 docs/{New--Noteworthy.md => new-features.md} | 24 +- docs/{Overview.md => overview.md} | 14 +- ...erfaces.md => redis-command-interfaces.md} | 24 +- docs/static/logo-redis.svg | 10 + docs/user-guide/async-api.md | 572 +++++ docs/user-guide/connecting-redis.md | 239 ++ docs/user-guide/kotlin-api.md | 90 + docs/user-guide/lua-scripting.md | 42 + docs/user-guide/pubsub.md | 118 + docs/user-guide/reactive-api.md | 792 ++++++ docs/user-guide/redis-functions.md | 114 + docs/user-guide/transactions-multi.md | 168 ++ mkdocs.yml | 44 + 22 files changed, 2382 insertions(+), 2355 deletions(-) create mode 100644 .github/workflows/docs.yml delete mode 100644 docs/Connecting-Redis.md create mode 100644 docs/README.md rename docs/{Advanced-usage.md => advanced-usage.md} (95%) rename docs/{Frequently-Asked-Questions.md => faq.md} (97%) rename docs/{Getting-Started.md => getting-started.md} (100%) rename docs/{High-Availability-and-Sharding.md => ha-sharding.md} (95%) delete mode 100644 docs/index.md rename docs/{Integration-and-Extension.md => integration-extension.md} (100%) rename docs/{New--Noteworthy.md => new-features.md} (84%) rename docs/{Overview.md => overview.md} (91%) rename docs/{Working-with-dynamic-Redis-Command-Interfaces.md => redis-command-interfaces.md} (96%) create mode 100644 docs/static/logo-redis.svg create mode 100644 docs/user-guide/async-api.md create mode 100644 docs/user-guide/connecting-redis.md create mode 100644 docs/user-guide/kotlin-api.md create mode 100644 docs/user-guide/lua-scripting.md create mode 100644 docs/user-guide/pubsub.md create mode 100644 docs/user-guide/reactive-api.md create mode 100644 docs/user-guide/redis-functions.md create mode 100644 docs/user-guide/transactions-multi.md create mode 100644 mkdocs.yml diff --git a/.github/workflows/docs.yml b/.github/workflows/docs.yml new file mode 100644 index 0000000000..ac0ca9f2a3 --- /dev/null +++ b/.github/workflows/docs.yml @@ -0,0 +1,37 @@ +name: Publish Docs +on: + push: + branches: ["main", "markdown_docs"] +permissions: + contents: read + pages: write + id-token: write +concurrency: + group: "pages" + cancel-in-progress: false +jobs: + build-and-deploy: + concurrency: ci-${{ github.ref }} + runs-on: ubuntu-latest + 
steps: + - uses: actions/checkout@v3 + - uses: actions/setup-python@v4 + with: + python-version: 3.9 + cache: 'pip' + - name: Install dependencies + run: | + python -m pip install --upgrade pip + pip install mkdocs mkdocs-material pymdown-extensions + - name: Build docs + run: | + mkdocs build -d docsbuild + - name: Setup Pages + uses: actions/configure-pages@v3 + - name: Upload artifact + uses: actions/upload-pages-artifact@v1 + with: + path: 'docsbuild' + - name: Deploy to GitHub Pages + id: deployment + uses: actions/deploy-pages@v2 \ No newline at end of file diff --git a/docs/Connecting-Redis.md b/docs/Connecting-Redis.md deleted file mode 100644 index 6ba3c69d9f..0000000000 --- a/docs/Connecting-Redis.md +++ /dev/null @@ -1,2139 +0,0 @@ -# Connecting Redis - -Connections to a Redis Standalone, Sentinel, or Cluster require a -specification of the connection details. The unified form is `RedisURI`. -You can provide the database, password and timeouts within the -`RedisURI`. You have following possibilities to create a `RedisURI`: - -1. Use an URI: - - ``` java - RedisURI.create("redis://localhost/"); - ``` - -2. Use the Builder - - ``` java - RedisURI.Builder.redis("localhost", 6379).auth("password").database(1).build(); - ``` - -3. Set directly the values in `RedisURI` - - ``` java - new RedisURI("localhost", 6379, 60, TimeUnit.SECONDS); - ``` - -## URI syntax - -**Redis Standalone** - - redis :// [[username :] password@] host [:port][/database] - [?[timeout=timeout[d|h|m|s|ms|us|ns]] [&clientName=clientName] - [&libraryName=libraryName] [&libraryVersion=libraryVersion] ] - -**Redis Standalone (SSL)** - - rediss :// [[username :] password@] host [: port][/database] - [?[timeout=timeout[d|h|m|s|ms|us|ns]] [&clientName=clientName] - [&libraryName=libraryName] [&libraryVersion=libraryVersion] ] - -**Redis Standalone (Unix Domain Sockets)** - - redis-socket :// [[username :] password@]path - [?[timeout=timeout[d|h|m|s|ms|us|ns]] [&database=database] - [&clientName=clientName] [&libraryName=libraryName] - [&libraryVersion=libraryVersion] ] - -**Redis Sentinel** - - redis-sentinel :// [[username :] password@] host1[:port1] [, host2[:port2]] [, hostN[:portN]] [/database] - [?[timeout=timeout[d|h|m|s|ms|us|ns]] [&sentinelMasterId=sentinelMasterId] - [&clientName=clientName] [&libraryName=libraryName] - [&libraryVersion=libraryVersion] ] - -**Schemes** - -- `redis` Redis Standalone - -- `rediss` Redis Standalone SSL - -- `redis-socket` Redis Standalone Unix Domain Socket - -- `redis-sentinel` Redis Sentinel - -**Timeout units** - -- `d` Days - -- `h` Hours - -- `m` Minutes - -- `s` Seconds - -- `ms` Milliseconds - -- `us` Microseconds - -- `ns` Nanoseconds - -Hint: The database parameter within the query part has higher precedence -than the database in the path. - -RedisURI supports Redis Standalone, Redis Sentinel and Redis Cluster -with plain, SSL, TLS and unix domain socket connections. - -Hint: The database parameter within the query part has higher precedence -than the database in the path. RedisURI supports Redis Standalone, Redis -Sentinel and Redis Cluster with plain, SSL, TLS and unix domain socket -connections. - -## Authentication - -Redis URIs may contain authentication details that effectively lead to -usernames with passwords, password-only, or no authentication. -Connections are authenticated by using the information provided through -`RedisCredentials`. Credentials are obtained at connection time from -`RedisCredentialsProvider`. 
When configuring username/password on the -URI statically, then a `StaticCredentialsProvider` holds the configured -information. - -**Notes** - -- When using Redis Sentinel, the password from the URI applies to the - data nodes only. Sentinel authentication must be configured for each - sentinel node. - -- Usernames are supported as of Redis 6. - -- Library name and library version are automatically set on Redis 7.2 or - greater. - -## Basic Usage - -``` java -RedisClient client = RedisClient.create("redis://localhost"); - -StatefulRedisConnection connection = client.connect(); - -RedisCommands commands = connection.sync(); - -String value = commands.get("foo"); - -... - -connection.close(); - -client.shutdown(); -``` - -- Create the `RedisClient` instance and provide a Redis URI pointing to - localhost, Port 6379 (default port). - -- Open a Redis Standalone connection. The endpoint is used from the - initialized `RedisClient` - -- Obtain the command API for synchronous execution. Lettuce supports - asynchronous and reactive execution models, too. - -- Issue a `GET` command to get the key `foo`. - -- Close the connection when you’re done. This happens usually at the - very end of your application. Connections are designed to be - long-lived. - -- Shut down the client instance to free threads and resources. This - happens usually at the very end of your application. - -Each Redis command is implemented by one or more methods with names -identical to the lowercase Redis command name. Complex commands with -multiple modifiers that change the result type include the CamelCased -modifier as part of the command name, e.g. `zrangebyscore` and -`zrangebyscoreWithScores`. - -Redis connections are designed to be long-lived and thread-safe, and if -the connection is lost will reconnect until `close()` is called. Pending -commands that have not timed out will be (re)sent after successful -reconnection. - -All connections inherit a default timeout from their RedisClient and -and will throw a `RedisException` when non-blocking commands fail to -return a result before the timeout expires. The timeout defaults to 60 -seconds and may be changed in the RedisClient or for each connection. -Synchronous methods will throw a `RedisCommandExecutionException` in -case Redis responds with an error. Asynchronous connections do not throw -exceptions when Redis responds with an error. - -### RedisURI - -The RedisURI contains the host/port and can carry -authentication/database details. On a successful connect you get -authenticated, and the database is selected afterward. This applies -also after re-establishing a connection after a connection loss. - -A Redis URI can also be created from an URI string. Supported formats -are: - -- `redis://[password@]host[:port][/databaseNumber]` Plaintext Redis - connection - -- `rediss://[password@]host[:port][/databaseNumber]` [SSL - Connections](Advanced-usage.md#ssl-connections) Redis connection - -- `redis-sentinel://[password@]host[:port][,host2[:port2]][/databaseNumber]#sentinelMasterId` - for using Redis Sentinel - -- `redis-socket:///path/to/socket` [Unix Domain - Sockets](Advanced-usage.md#unix-domain-sockets) connection to Redis - -### Exceptions - -In the case of an exception/error response from Redis, you’ll receive a -`RedisException` containing -the error message. `RedisException` is a `RuntimeException`. 
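
As a short sketch (key name and connection setup are illustrative), a
synchronous call can distinguish a Redis error reply from a client-side
failure like this:

``` java
RedisCommands<String, String> commands = connection.sync();

try {
    commands.get("key");
} catch (RedisCommandExecutionException e) {
    // Redis answered with an error response, e.g. WRONGTYPE
    System.err.println("Redis error: " + e.getMessage());
} catch (RedisException e) {
    // client-side failure, e.g. the command timed out
    System.err.println("Client error: " + e.getMessage());
}
```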
- -### Examples - -``` java -RedisClient client = RedisClient.create(RedisURI.create("localhost", 6379)); -client.setDefaultTimeout(20, TimeUnit.SECONDS); - -// … - -client.shutdown(); -``` - -``` java -RedisURI redisUri = RedisURI.Builder.redis("localhost") - .withPassword("authentication") - .withDatabase(2) - .build(); -RedisClient client = RedisClient.create(redisUri); - -// … - -client.shutdown(); -``` - -``` java -RedisURI redisUri = RedisURI.Builder.redis("localhost") - .withSsl(true) - .withPassword("authentication") - .withDatabase(2) - .build(); -RedisClient client = RedisClient.create(redisUri); - -// … - -client.shutdown(); -``` - -``` java -RedisURI redisUri = RedisURI.create("redis://authentication@localhost/2"); -RedisClient client = RedisClient.create(redisUri); - -// … - -client.shutdown(); -``` - -## Asynchronous API - -This guide will give you an impression how and when to use the -asynchronous API provided by Lettuce 4.x. - -### Motivation - -Asynchronous methodologies allow you to utilize better system resources, -instead of wasting threads waiting for network or disk I/O. Threads can -be fully utilized to perform other work instead. Lettuce facilitates -asynchronicity from building the client on top of -[netty](http://netty.io) that is a multithreaded, event-driven I/O -framework. All communication is handled asynchronously. Once the -foundation is able to processes commands concurrently, it is convenient -to take advantage from the asynchronicity. It is way harder to turn a -blocking and synchronous working software into a concurrently processing -system. - -#### Understanding Asynchronicity - -Asynchronicity permits other processing to continue before the -transmission has finished and the response of the transmission is -processed. This means, in the context of Lettuce and especially Redis, -that multiple commands can be issued serially without the need of -waiting to finish the preceding command. This mode of operation is also -known as [Pipelining](http://redis.io/topics/pipelining). The following -example should give you an impression of the mode of operation: - -- Given client *A* and client *B* - -- Client *A* triggers command `SET A=B` - -- Client *B* triggers at the same time of Client *A* command `SET C=D` - -- Redis receives command from Client *A* - -- Redis receives command from Client *B* - -- Redis processes `SET A=B` and responds `OK` to Client *A* - -- Client *A* receives the response and stores the response in the - response handle - -- Redis processes `SET C=D` and responds `OK` to Client *B* - -- Client *B* receives the response and stores the response in the - response handle - -Both clients from the example above can be either two threads or -connections within an application or two physically separated clients. - -Clients can operate concurrently to each other by either being separate -processes, threads, event-loops, actors, fibers, etc. Redis processes -incoming commands serially and operates mostly single-threaded. This -means, commands are processed in the order they are received with some -characteristic that we’ll cover later. 
- -Let’s take the simplified example and enhance it by some program flow -details: - -- Given client *A* - -- Client *A* triggers command `SET A=B` - -- Client *A* uses the asynchronous API and can perform other processing - -- Redis receives command from Client *A* - -- Redis processes `SET A=B` and responds `OK` to Client *A* - -- Client *A* receives the response and stores the response in the - response handle - -- Client *A* can access now the response to its command without waiting - (non-blocking) - -The Client *A* takes advantage from not waiting on the result of the -command so it can process computational work or issue another Redis -command. The client can work with the command result as soon as the -response is available. - -#### Impact of asynchronicity to the synchronous API - -While this guide helps you to understand the asynchronous API it is -worthwhile to learn the impact on the synchronous API. The general -approach of the synchronous API is no different than the asynchronous -API. In both cases, the same facilities are used to invoke and transport -commands to the Redis server. The only difference is a blocking behavior -of the caller that is using the synchronous API. Blocking happens on -command level and affects only the command completion part, meaning -multiple clients using the synchronous API can invoke commands on the -same connection and at the same time without blocking each other. A call -on the synchronous API is unblocked at the moment a command response was -processed. - -- Given client *A* and client *B* - -- Client *A* triggers command `SET A=B` on the synchronous API and waits - for the result - -- Client *B* triggers at the same time of Client *A* command `SET C=D` - on the synchronous API and waits for the result - -- Redis receives command from Client *A* - -- Redis receives command from Client *B* - -- Redis processes `SET A=B` and responds `OK` to Client *A* - -- Client *A* receives the response and unblocks the program flow of - Client *A* - -- Redis processes `SET C=D` and responds `OK` to Client *B* - -- Client *B* receives the response and unblocks the program flow of - Client *B* - -However, there are some cases you should not share a connection among -threads to avoid side-effects. The cases are: - -- Disabling flush-after-command to improve performance - -- The use of blocking operations like `BLPOP`. Blocking operations are - queued on Redis until they can be executed. While one connection is - blocked, other connections can issue commands to Redis. Once a command - unblocks the blocking command (that said an `LPUSH` or `RPUSH` hits - the list), the blocked connection is unblocked and can proceed after - that. - -- Transactions - -- Using multiple databases - -#### Result handles - -Every command invocation on the asynchronous API creates a -`RedisFuture` that can be canceled, awaited and subscribed -(listener). A `CompleteableFuture` or `RedisFuture` is a pointer -to the result that is initially unknown since the computation of its -value is yet incomplete. A `RedisFuture` provides operations for -synchronization and chaining. 
- -``` java -CompletableFuture future = new CompletableFuture<>(); - -System.out.println("Current state: " + future.isDone()); - -future.complete("my value"); - -System.out.println("Current state: " + future.isDone()); -System.out.println("Got value: " + future.get()); -``` - -The example prints the following lines: - - Current state: false - Current state: true - Got value: my value - -Attaching a listener to a future allows chaining. Promises can be used -synonymous to futures, but not every future is a promise. A promise -guarantees a callback/notification and thus it has come to its name. - -A simple listener that gets called once the future completes: - -``` java -final CompletableFuture future = new CompletableFuture<>(); - -future.thenRun(new Runnable() { - @Override - public void run() { - try { - System.out.println("Got value: " + future.get()); - } catch (Exception e) { - e.printStackTrace(); - } - - } -}); - -System.out.println("Current state: " + future.isDone()); -future.complete("my value"); -System.out.println("Current state: " + future.isDone()); -``` - -The value processing moves from the caller into a listener that is then -called by whoever completes the future. The example prints the following -lines: - - Current state: false - Got value: my value - Current state: true - -The code from above requires exception handling since calls to the -`get()` method can lead to exceptions. Exceptions raised during the -computation of the `Future` are transported within an -`ExecutionException`. Another exception that may be thrown is the -`InterruptedException`. This is because calls to `get()` are blocking -calls and the blocked thread can be interrupted at any time. Just think -about a system shutdown. - -The `CompletionStage` type allows since Java 8 a much more -sophisticated handling of futures. A `CompletionStage` can consume, -transform and build a chain of value processing. The code from above can -be rewritten in Java 8 in the following style: - -``` java -CompletableFuture future = new CompletableFuture<>(); - -future.thenAccept(new Consumer() { - @Override - public void accept(String value) { - System.out.println("Got value: " + value); - } -}); - -System.out.println("Current state: " + future.isDone()); -future.complete("my value"); -System.out.println("Current state: " + future.isDone()); -``` - -The example prints the following lines: - - Current state: false - Got value: my value - Current state: true - -You can find the full reference for the `CompletionStage` type in the -[Java 8 API -documentation](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletionStage.html). - -### Creating futures using Lettuce - -Lettuce futures can be used for initial and chaining operations. When -using Lettuce futures, you will notice the non-blocking behavior. This -is because all I/O and command processing are handled asynchronously -using the netty EventLoop. The Lettuce `RedisFuture` extends a -`CompletionStage` so all methods of the base type are available. - -Lettuce exposes its futures on the Standalone, Sentinel, -Publish/Subscribe and Cluster APIs. 
- -Connecting to Redis is insanely simple: - -``` java -RedisClient client = RedisClient.create("redis://localhost"); -RedisAsyncCommands commands = client.connect().async(); -``` - -In the next step, obtaining a value from a key requires the `GET` -operation: - -``` java -RedisFuture future = commands.get("key"); -``` - -### Consuming futures - -The first thing you want to do when working with futures is to consume -them. Consuming a futures means obtaining the value. Here is an example -that blocks the calling thread and prints the value: - -``` java -RedisFuture future = commands.get("key"); -String value = future.get(); -System.out.println(value); -``` - -Invocations to the `get()` method (pull-style) block the calling thread -at least until the value is computed but in the worst case indefinitely. -Using timeouts is always a good idea to not exhaust your threads. - -``` java -try { - RedisFuture future = commands.get("key"); - String value = future.get(1, TimeUnit.MINUTES); - System.out.println(value); -} catch (Exception e) { - e.printStackTrace(); -} -``` - -The example will wait at most 1 minute for the future to complete. If -the timeout exceeds, a `TimeoutException` is thrown to signal the -timeout. - -Futures can also be consumed in a push style, meaning when the -`RedisFuture` is completed, a follow-up action is triggered: - -``` java -RedisFuture future = commands.get("key"); - -future.thenAccept(new Consumer() { - @Override - public void accept(String value) { - System.out.println(value); - } -}); -``` - -Alternatively, written in Java 8 lambdas: - -``` java -RedisFuture future = commands.get("key"); - -future.thenAccept(System.out::println); -``` - -Lettuce futures are completed on the netty EventLoop. Consuming and -chaining futures on the default thread is always a good idea except for -one case: Blocking/long-running operations. As a rule of thumb, never -block the event loop. If you need to chain futures using blocking calls, -use the `thenAcceptAsync()`/`thenRunAsync()` methods to fork the -processing to another thread. The `…​async()` methods need a threading -infrastructure for execution, by default the `ForkJoinPool.commonPool()` -is used. The `ForkJoinPool` is statically constructed and does not grow -with increasing load. Using default `Executor`s is almost always the -better idea. - -``` java -Executor sharedExecutor = ... -RedisFuture future = commands.get("key"); - -future.thenAcceptAsync(new Consumer() { - @Override - public void accept(String value) { - System.out.println(value); - } -}, sharedExecutor); -``` - -### Synchronizing futures - -A key point when using futures is the synchronization. Futures are -usually used to: - -1. Trigger multiple invocations without the urge to wait for the - predecessors (Batching) - -2. Invoking a command without awaiting the result at all (Fire&Forget) - -3. Invoking a command and perform other computing in the meantime - (Decoupling) - -4. Adding concurrency to certain computational efforts (Concurrency) - -There are several ways how to wait or get notified in case a future -completes. Certain synchronization techniques apply to some motivations -why you want to use futures. - -#### Blocking synchronization - -Blocking synchronization comes handy if you perform batching/add -concurrency to certain parts of your system. An example to batching can -be setting/retrieving multiple values and awaiting the results before a -certain point within processing. 
- -``` java -List> futures = new ArrayList>(); - -for (int i = 0; i < 10; i++) { - futures.add(commands.set("key-" + i, "value-" + i)); -} - -LettuceFutures.awaitAll(1, TimeUnit.MINUTES, futures.toArray(new RedisFuture[futures.size()])); -``` - -The code from above does not wait until a certain command completes -before it issues another one. The synchronization is done after all -commands are issued. The example code can easily be turned into a -Fire&Forget pattern by omitting the call to `LettuceFutures.awaitAll()`. - -A single future execution can be also awaited, meaning an opt-in to wait -for a certain time but without raising an exception: - -``` java -RedisFuture future = commands.get("key"); - -if(!future.await(1, TimeUnit.MINUTES)) { - System.out.println("Could not complete within the timeout"); -} -``` - -Calling `await()` is friendlier to call since it throws only an -`InterruptedException` in case the blocked thread is interrupted. You -are already familiar with the `get()` method for synchronization, so we -will not bother you with this one. - -At last, there is another way to synchronize futures in a blocking way. -The major caveat is that you will become responsible to handle thread -interruptions. If you do not handle that aspect, you will not be able to -shut down your system properly if it is in a running state. - -``` java -RedisFuture future = commands.get("key"); -while (!future.isDone()) { - // do something ... -} -``` - -While the `isDone()` method does not aim primarily for synchronization -use, it might come handy to perform other computational efforts while -the command is executed. - -#### Chaining synchronization - -Futures can be synchronized/chained in a non-blocking style to improve -thread utilization. Chaining works very well in systems relying on -event-driven characteristics. Future chaining builds up a chain of one -or more futures that are executed serially, and every chain member -handles a part in the computation. The `CompletionStage` API offers -various methods to chain and transform futures. A simple transformation -of the value can be done using the `thenApply()` method: - -``` java -future.thenApply(new Function() { - @Override - public Integer apply(String value) { - return value.length(); - } -}).thenAccept(new Consumer() { - @Override - public void accept(Integer integer) { - System.out.println("Got value: " + integer); - } -}); -``` - -Alternatively, written in Java 8 lambdas: - -``` java -future.thenApply(String::length) - .thenAccept(integer -> System.out.println("Got value: " + integer)); -``` - -The `thenApply()` method accepts a function that transforms the value -into another one. The final `thenAccept()` method consumes the value for -final processing. - -You have already seen the `thenRun()` method from previous examples. The -`thenRun()` method can be used to handle future completions in case the -data is not crucial to your flow: - -``` java -future.thenRun(new Runnable() { - @Override - public void run() { - System.out.println("Finished the future."); - } -}); -``` - -Keep in mind to execute the `Runnable` on a custom `Executor` if you are -doing blocking calls within the `Runnable`. - -Another chaining method worth mentioning is the either-or chaining. A -couple of `…​Either()` methods are available on a `CompletionStage`, -see the [Java 8 API -docs](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletionStage.html) -for the full reference. 
The either-or pattern consumes the value from -the first future that is completed. A good example might be two services -returning the same data, for instance, a Master-Replica scenario, but -you want to return the data as fast as possible: - -``` java -RedisStringAsyncCommands master = masterClient.connect().async(); -RedisStringAsyncCommands replica = replicaClient.connect().async(); - -RedisFuture future = master.get("key"); -future.acceptEither(replica.get("key"), new Consumer() { - @Override - public void accept(String value) { - System.out.println("Got value: " + value); - } -}); -``` - -### Error handling - -Error handling is an indispensable component of every real world -application and should to be considered from the beginning on. Futures -provide some mechanisms to deal with errors. - -In general, you want to react in the following ways: - -- Return a default value instead - -- Use a backup future - -- Retry the future - -`RedisFuture`s transport exceptions if any occurred. Calls to the -`get()` method throw the occurred exception wrapped within an -`ExecutionException` (this is different to Lettuce 3.x). You can find -more details within the Javadoc on -[CompletionStage](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletionStage.html). - -The following code falls back to a default value after it runs to an -exception by using the `handle()` method: - -``` java -future.handle(new BiFunction() { - @Override - public Integer apply(String value, Throwable throwable) { - if(throwable != null) { - return "default value"; - } - return value; - } -}).thenAccept(new Consumer() { - @Override - public void accept(String value) { - System.out.println("Got value: " + value); - } -}); -``` - -More sophisticated code could decide on behalf of the throwable type -that value to return, as the shortcut example using the -`exceptionally()` method: - -``` java -future.exceptionally(new Function() { - @Override - public String apply(Throwable throwable) { - if (throwable instanceof IllegalStateException) { - return "default value"; - } - - return "other default value"; - } -}); -``` - -Retrying futures and recovery using futures is not part of the Java 8 -`CompleteableFuture`. See the [Reactive API](#reactive-api) for -comfortable ways handling with exceptions. - -### Examples - -``` java -RedisAsyncCommands async = client.connect().async(); -RedisFuture set = async.set("key", "value"); -RedisFuture get = async.get("key"); - -set.get() == "OK" -get.get() == "value" -``` - -``` java -RedisAsyncCommands async = client.connect().async(); -RedisFuture set = async.set("key", "value"); -RedisFuture get = async.get("key"); - -set.await(1, SECONDS) == true -set.get() == "OK" -get.get(1, TimeUnit.MINUTES) == "value" -``` - -``` java -RedisStringAsyncCommands async = client.connect().async(); -RedisFuture set = async.set("key", "value"); - -Runnable listener = new Runnable() { - @Override - public void run() { - ...; - } -}; - -set.thenRun(listener); -``` - -## Reactive API - -This guide helps you to understand the Reactive Stream pattern and aims -to give you a general understanding of how to build reactive -applications. - -### Motivation - -Asynchronous and reactive methodologies allow you to utilize better -system resources, instead of wasting threads waiting for network or disk -I/O. Threads can be fully utilized to perform other work instead. 
- -A broad range of technologies exists to facilitate this style of -programming, ranging from the very limited and less usable -`java.util.concurrent.Future` to complete libraries and runtimes like -Akka. [Project Reactor](http://projectreactor.io/), has a very rich set -of operators to compose asynchronous workflows, it has no further -dependencies to other frameworks and supports the very mature Reactive -Streams model. - -### Understanding Reactive Streams - -Reactive Streams is an initiative to provide a standard for asynchronous -stream processing with non-blocking back pressure. This encompasses -efforts aimed at runtime environments (JVM and JavaScript) as well as -network protocols. - -The scope of Reactive Streams is to find a minimal set of interfaces, -methods, and protocols that will describe the necessary operations and -entities to achieve the goal—asynchronous streams of data with -non-blocking back pressure. - -It is an interoperability standard between multiple reactive composition -libraries that allow interaction without the need of bridging between -libraries in application code. - -The integration of Reactive Streams is usually accompanied with the use -of a composition library that hides the complexity of bare -`Publisher` and `Subscriber` types behind an easy-to-use API. -Lettuce uses [Project Reactor](http://projectreactor.io/) that exposes -its publishers as `Mono` and `Flux`. - -For more information about Reactive Streams see -. - -### Understanding Publishers - -Asynchronous processing decouples I/O or computation from the thread -that invoked the operation. A handle to the result is given back, -usually a `java.util.concurrent.Future` or similar, that returns either -a single object, a collection or an exception. Retrieving a result, that -was fetched asynchronously is usually not the end of processing one -flow. Once data is obtained, further requests can be issued, either -always or conditionally. With Java 8 or the Promise pattern, linear -chaining of futures can be set up so that subsequent asynchronous -requests are issued. Once conditional processing is needed, the -asynchronous flow has to be interrupted and synchronized. While this -approach is possible, it does not fully utilize the advantage of -asynchronous processing. - -In contrast to the preceding examples, `Publisher` objects answer the -multiplicity and asynchronous questions in a different fashion: By -inverting the `Pull` pattern into a `Push` pattern. - -**A Publisher is the asynchronous/push “dual” to the synchronous/pull -Iterable** - -| event | Iterable (pull) | Publisher (push) | -|----------------|------------------|--------------------| -| retrieve data | T next() | onNext(T) | -| discover error | throws Exception | onError(Exception) | -| complete | !hasNext() | onCompleted() | - -An `Publisher` supports emission sequences of values or even infinite -streams, not just the emission of single scalar values (as Futures do). -You will very much appreciate this fact once you start to work on -streams instead of single values. Project Reactor uses two types in its -vocabulary: `Mono` and `Flux` that are both publishers. - -A `Mono` can emit `0` to `1` events while a `Flux` can emit `0` to `N` -events. - -A `Publisher` is not biased toward some particular source of -concurrency or asynchronicity and how the underlying code is executed - -synchronous or asynchronous, running within a `ThreadPool`. 
As a -consumer of a `Publisher`, you leave the actual implementation to the -supplier, who can change it later on without you having to adapt your -code. - -The last key point of a `Publisher` is that the underlying processing -is not started at the time the `Publisher` is obtained, rather its -started at the moment an observer subscribes or signals demand to the -`Publisher`. This is a crucial difference to a -`java.util.concurrent.Future`, which is started somewhere at the time it -is created/obtained. So if no observer ever subscribes to the -`Publisher`, nothing ever will happen. - -### A word on the lettuce Reactive API - -All commands return a `Flux`, `Mono` or `Mono` to which a -`Subscriber` can subscribe to. That subscriber reacts to whatever item -or sequence of items the `Publisher` emits. This pattern facilitates -concurrent operations because it does not need to block while waiting -for the `Publisher` to emit objects. Instead, it creates a sentry in -the form of a `Subscriber` that stands ready to react appropriately at -whatever future time the `Publisher` does so. - -### Consuming `Publisher` - -The first thing you want to do when working with publishers is to -consume them. Consuming a publisher means subscribing to it. Here is an -example that subscribes and prints out all the items emitted: - -``` java -Flux.just("Ben", "Michael", "Mark").subscribe(new Subscriber() { - public void onSubscribe(Subscription s) { - s.request(3); - } - - public void onNext(String s) { - System.out.println("Hello " + s + "!"); - } - - public void onError(Throwable t) { - - } - - public void onComplete() { - System.out.println("Completed"); - } -}); -``` - -The example prints the following lines: - - Hello Ben - Hello Michael - Hello Mark - Completed - -You can see that the Subscriber (or Observer) gets notified of every -event and also receives the completed event. A `Publisher` emits -items until either an exception is raised or the `Publisher` finishes -the emission calling `onCompleted`. No further elements are emitted -after that time. - -A call to the `subscribe` registers a `Subscription` that allows to -cancel and, therefore, do not receive further events. Publishers can -interoperate with the un-subscription and free resources once a -subscriber unsubscribed from the `Publisher`. - -Implementing a `Subscriber` requires implementing numerous methods, -so lets rewrite the code to a simpler form: - -``` java -Flux.just("Ben", "Michael", "Mark").doOnNext(new Consumer() { - public void accept(String s) { - System.out.println("Hello " + s + "!"); - } -}).doOnComplete(new Runnable() { - public void run() { - System.out.println("Completed"); - } -}).subscribe(); -``` - -alternatively, even simpler by using Java 8 Lambdas: - -``` java -Flux.just("Ben", "Michael", "Mark") - .doOnNext(s -> System.out.println("Hello " + s + "!")) - .doOnComplete(() -> System.out.println("Completed")) - .subscribe(); -``` - -You can control the elements that are processed by your `Subscriber` -using operators. The `take()` operator limits the number of emitted -items if you are interested in the first `N` elements only. 
- -``` java -Flux.just("Ben", "Michael", "Mark") // - .doOnNext(s -> System.out.println("Hello " + s + "!")) - .doOnComplete(() -> System.out.println("Completed")) - .take(2) - .subscribe(); -``` - -The example prints the following lines: - - Hello Ben - Hello Michael - Completed - -Note that the `take` operator implicitly cancels its subscription from -the `Publisher` once the expected count of elements was emitted. - -A subscription to a `Publisher` can be done either by another `Flux` -or a `Subscriber`. Unless you are implementing a custom `Publisher`, -always use `Subscriber`. The used subscriber `Consumer` from the example -above does not handle `Exception`s so once an `Exception` is thrown you -will see a stack trace like this: - - Exception in thread "main" reactor.core.Exceptions$BubblingException: java.lang.RuntimeException: Example exception - at reactor.core.Exceptions.bubble(Exceptions.java:96) - at reactor.core.publisher.Operators.onErrorDropped(Operators.java:296) - at reactor.core.publisher.LambdaSubscriber.onError(LambdaSubscriber.java:117) - ... - Caused by: java.lang.RuntimeException: Example exception - at demos.lambda$example3Lambda$4(demos.java:87) - at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:157) - ... 23 more - -It is always recommended to implement an error handler right from the -beginning. At a certain point, things can and will go wrong. - -A fully implemented subscriber declares the `onCompleted` and `onError` -methods allowing you to react to these events: - -``` java -Flux.just("Ben", "Michael", "Mark").subscribe(new Subscriber() { - public void onSubscribe(Subscription s) { - s.request(3); - } - - public void onNext(String s) { - System.out.println("Hello " + s + "!"); - } - - public void onError(Throwable t) { - System.out.println("onError: " + t); - } - - public void onComplete() { - System.out.println("Completed"); - } -}); -``` - -### From push to pull - -The examples from above illustrated how publishers can be set up in a -not-opinionated style about blocking or non-blocking execution. A -`Flux` can be converted explicitly into an `Iterable` or -synchronized with `block()`. Avoid calling `block()` in your code as you -start expressing the nature of execution inside your code. Calling -`block()` removes all non-blocking advantages of the reactive chain to -your application. - -``` java -String last = Flux.just("Ben", "Michael", "Mark").last().block(); -System.out.println(last); -``` - -The example prints the following line: - - Mark - -A blocking call can be used to synchronize the publisher chain and find -back a way into the plain and well-known `Pull` pattern. - -``` java -List list = Flux.just("Ben", "Michael", "Mark").collectList().block(); -System.out.println(list); -``` - -The `toList` operator collects all emitted elements and passes the list -through the `BlockingPublisher`. - -The example prints the following line: - - [Ben, Michael, Mark] - -### Creating `Flux` and `Mono` using Lettuce - -There are many ways to establish publishers. You have already seen -`just()`, `take()` and `collectList()`. Refer to the [Project Reactor -documentation](http://projectreactor.io/docs/) for many more methods -that you can use to create `Flux` and `Mono`. - -Lettuce publishers can be used for initial and chaining operations. When -using Lettuce publishers, you will notice the non-blocking behavior. -This is because all I/O and command processing are handled -asynchronously using the netty EventLoop. 
- -Connecting to Redis is insanely simple: - -``` java -RedisClient client = RedisClient.create("redis://localhost"); -RedisStringReactiveCommands commands = client.connect().reactive(); -``` - -In the next step, obtaining a value from a key requires the `GET` -operation: - -``` java -commands.get("key").subscribe(new Consumer() { - - public void accept(String value) { - System.out.println(value); - } -}); -``` - -Alternatively, written in Java 8 lambdas: - -``` java -commands - .get("key") - .subscribe(value -> System.out.println(value)); -``` - -The execution is handled asynchronously, and the invoking Thread can be -used to processed in processing while the operation is completed on the -Netty EventLoop threads. Due to its decoupled nature, the calling method -can be left before the execution of the `Publisher` is finished. - -Lettuce publishers can be used within the context of chaining to load -multiple keys asynchronously: - -``` java -Flux.just("Ben", "Michael", "Mark"). - flatMap(key -> commands.get(key)). - subscribe(value -> System.out.println("Got value: " + value)); -``` - -### Hot and Cold Publishers - -There is a distinction between Publishers that was not covered yet: - -- A cold Publishers waits for a subscription until it emits values and - does this freshly for every subscriber. - -- A hot Publishers begins emitting values upfront and presents them to - every subscriber subsequently. - -All Publishers returned from the Redis Standalone, Redis Cluster, and -Redis Sentinel API are cold, meaning that no I/O happens until they are -subscribed to. As such a subscriber is guaranteed to see the whole -sequence from the beginning. So just creating a Publisher will not cause -any network I/O thus creating and discarding Publishers is cheap. -Publishers created for a Publish/Subscribe emit `PatternMessage`s and -`ChannelMessage`s once they are subscribed to. Publishers guarantee -however to emit all items from the beginning until their end. While this -is true for Publish/Subscribe publishers, the nature of subscribing to a -Channel/Pattern allows missed messages due to its subscription nature -and less to the Hot/Cold distinction of publishers. - -### Transforming publishers - -Publishers can transform the emitted values in various ways. One of the -most basic transformations is `flatMap()` which you have seen from the -examples above that converts the incoming value into a different one. -Another one is `map()`. The difference between `map()` and `flatMap()` -is that `flatMap()` allows you to do those transformations with -`Publisher` calls. - -``` java -Flux.just("Ben", "Michael", "Mark") - .flatMap(commands::get) - .flatMap(value -> commands.rpush("result", value)) - .subscribe(); -``` - -The first `flatMap()` function is used to retrieve a value and the -second `flatMap()` function appends the value to a Redis list named -`result`. The `flatMap()` function returns a Publisher whereas the -normal map just returns ``. You will use `flatMap()` a lot when -dealing with flows like this, you’ll become good friends. - -An aggregation of values can be achieved using the `reduce()` -transformation. It applies a function to each value emitted by a -`Publisher`, sequentially and emits each successive value. 
We can use -it to aggregate values, to count the number of elements in multiple -Redis sets: - -``` java -Flux.just("Ben", "Michael", "Mark") - .flatMap(commands::scard) - .reduce((sum, current) -> sum + current) - .subscribe(result -> System.out.println("Number of elements in sets: " + result)); -``` - -The aggregation function of `reduce()` is applied on each emitted value, -so three times in the example above. If you want to get the last value, -which denotes the final result containing the number of elements in all -Redis sets, apply the `last()` transformation: - -``` java -Flux.just("Ben", "Michael", "Mark") - .flatMap(commands::scard) - .reduce((sum, current) -> sum + current) - .last() - .subscribe(result -> System.out.println("Number of elements in sets: " + result)); -``` - -Now let’s take a look at grouping emitted items. The following example -emits three items and groups them by the beginning character. - -``` java -Flux.just("Ben", "Michael", "Mark") - .groupBy(key -> key.substring(0, 1)) - .subscribe( - groupedFlux -> { - groupedFlux.collectList().subscribe(list -> { - System.out.println("First character: " + groupedFlux.key() + ", elements: " + list); - }); - } -); -``` - -The example prints the following lines: - - First character: B, elements: [Ben] - First character: M, elements: [Michael, Mark] - -### Absent values - -The presence and absence of values is an essential part of reactive -programming. Traditional approaches consider `null` as an absence of a -particular value. With Java 8, `Optional` was introduced to -encapsulate nullability. Reactive Streams prohibits the use of `null` -values. - -In the scope of Redis, an absent value is an empty list, a non-existent -key or any other empty data structure. Reactive programming discourages -the use of `null` as value. The reactive answer to absent values is just -not emitting any value that is possible due the `0` to `N` nature of -`Publisher`. - -Suppose we have the keys `Ben` and `Michael` set each to the value -`value`. We query those and another, absent key with the following code: - -``` java -Flux.just("Ben", "Michael", "Mark") - .flatMap(commands::get) - .doOnNext(value -> System.out.println(value)) - .subscribe(); -``` - -The example prints the following lines: - - value - value - -The output is just two values. The `GET` to the absent key `Mark` does -not emit a value. - -The reactive API provides operators to work with empty results when you -require a value. You can use one of the following operators: - -- `defaultIfEmpty`: Emit a default value if the `Publisher` did not - emit any value at all - -- `switchIfEmpty`: Switch to a fallback `Publisher` to emit values - -- `Flux.hasElements`/`Flux.hasElement`: Emit a `Mono` that - contains a flag whether the original `Publisher` is empty - -- `next`/`last`/`elementAt`: Positional operators to retrieve the - first/last/`N`th element or emit a default value - -### Filtering items - -The values emitted by a `Publisher` can be filtered in case you need -only specific results. Filtering does not change the emitted values -itself. Filters affect how many items and at which point (and if at all) -they are emitted. - -``` java -Flux.just("Ben", "Michael", "Mark") - .filter(s -> s.startsWith("M")) - .flatMap(commands::get) - .subscribe(value -> System.out.println("Got value: " + value)); -``` - -The code will fetch only the keys `Michael` and `Mark` but not `Ben`. -The filter criteria are whether the `key` starts with a `M`. 
- -You already met the `last()` filter to retrieve the last value: - -``` java -Flux.just("Ben", "Michael", "Mark") - .last() - .subscribe(value -> System.out.println("Got value: " + value)); -``` - -the extended variant of `last()` allows you to take the last `N` values: - -``` java -Flux.just("Ben", "Michael", "Mark") - .takeLast(3) - .subscribe(value -> System.out.println("Got value: " + value)); -``` - -The example from above takes the last `2` values. - -The opposite to `next()` is the `first()` filter that is used to -retrieve the next value: - -``` java -Flux.just("Ben", "Michael", "Mark") - .next() - .subscribe(value -> System.out.println("Got value: " + value)); -``` - -### Error handling - -Error handling is an indispensable component of every real world -application and should to be considered from the beginning on. Project -Reactor provides several mechanisms to deal with errors. - -In general, you want to react in the following ways: - -- Return a default value instead - -- Use a backup publisher - -- Retry the Publisher (immediately or with delay) - -The following code falls back to a default value after it throws an -exception at the first emitted item: - -``` java -Flux.just("Ben", "Michael", "Mark") - .doOnNext(value -> { - throw new IllegalStateException("Takes way too long"); - }) - .onErrorReturn("Default value") - .subscribe(); -``` - -You can use a backup `Publisher` which will be called if the first -one fails. - -``` java -Flux.just("Ben", "Michael", "Mark") - .doOnNext(value -> { - throw new IllegalStateException("Takes way too long"); - }) - .switchOnError(commands.get("Default Key")) - .subscribe(); -``` - -It is possible to retry the publisher by re-subscribing. Re-subscribing -can be done as soon as possible, or with a wait interval, which is -preferred when external resources are involved. - -``` java -Flux.just("Ben", "Michael", "Mark") - .flatMap(commands::get) - .retry() - .subscribe(); -``` - -Use the following code if you want to retry with backoff: - -``` java -Flux.just("Ben", "Michael", "Mark") - .doOnNext(v -> { - if (new Random().nextInt(10) + 1 == 5) { - throw new RuntimeException("Boo!"); - } - }) - .doOnSubscribe(subscription -> - { - System.out.println(subscription); - }) - .retryWhen(throwableFlux -> Flux.range(1, 5) - .flatMap(i -> { - System.out.println(i); - return Flux.just(i) - .delay(Duration.of(i, ChronoUnit.SECONDS)); - })) - .blockLast(); -``` - -The attempts get passed into the `retryWhen()` method delayed with the -number of seconds to wait. The delay method is used to complete once its -timer is done. - -### Schedulers and threads - -Schedulers in Project Reactor are used to instruct multi-threading. Some -operators have variants that take a Scheduler as a parameter. These -instruct the operator to do some or all of its work on a particular -Scheduler. - -Project Reactor ships with a set of preconfigured Schedulers, which are -all accessible through the `Schedulers` class: - -- Schedulers.parallel(): Executes the computational work such as - event-loops and callback processing. 
- -- Schedulers.immediate(): Executes the work immediately in the current - thread - -- Schedulers.elastic(): Executes the I/O-bound work such as asynchronous - performance of blocking I/O, this scheduler is backed by a thread-pool - that will grow as needed - -- Schedulers.newSingle(): Executes the work on a new thread - -- Schedulers.fromExecutor(): Create a scheduler from a - `java.util.concurrent.Executor` - -- Schedulers.timer(): Create or reuse a hash-wheel based TimedScheduler - with a resolution of 50ms. - -Do not use the computational scheduler for I/O. - -Publishers can be executed by a scheduler in the following different -ways: - -- Using an operator that makes use of a scheduler - -- Explicitly by passing the Scheduler to such an operator - -- By using `subscribeOn(Scheduler)` - -- By using `publishOn(Scheduler)` - -Operators like `buffer`, `replay`, `skip`, `delay`, `parallel`, and so -forth use a Scheduler by default if not instructed otherwise. - -All of the listed operators allow you to pass in a custom scheduler if -needed. Sticking most of the time with the defaults is a good idea. - -If you want the subscribe chain to be executed on a specific scheduler, -you use the `subscribeOn()` operator. The code is executed on the main -thread without a scheduler set: - -``` java -Flux.just("Ben", "Michael", "Mark").flatMap(key -> { - System.out.println("Map 1: " + key + " (" + Thread.currentThread().getName() + ")"); - return Flux.just(key); - } -).flatMap(value -> { - System.out.println("Map 2: " + value + " (" + Thread.currentThread().getName() + ")"); - return Flux.just(value); - } -).subscribe(); -``` - -The example prints the following lines: - - Map 1: Ben (main) - Map 2: Ben (main) - Map 1: Michael (main) - Map 2: Michael (main) - Map 1: Mark (main) - Map 2: Mark (main) - -This example shows the `subscribeOn()` method added to the flow (it does -not matter where you add it): - -``` java -Flux.just("Ben", "Michael", "Mark").flatMap(key -> { - System.out.println("Map 1: " + key + " (" + Thread.currentThread().getName() + ")"); - return Flux.just(key); - } -).flatMap(value -> { - System.out.println("Map 2: " + value + " (" + Thread.currentThread().getName() + ")"); - return Flux.just(value); - } -).subscribeOn(Schedulers.parallel()).subscribe(); -``` - -The output of the example shows the effect of `subscribeOn()`. You can -see that the Publisher is executed on the same thread, but on the -computation thread pool: - - Map 1: Ben (parallel-1) - Map 2: Ben (parallel-1) - Map 1: Michael (parallel-1) - Map 2: Michael (parallel-1) - Map 1: Mark (parallel-1) - Map 2: Mark (parallel-1) - -If you apply the same code to Lettuce, you will notice a difference in -the threads on which the second `flatMap()` is executed: - -``` java -Flux.just("Ben", "Michael", "Mark").flatMap(key -> { - System.out.println("Map 1: " + key + " (" + Thread.currentThread().getName() + ")"); - return commands.set(key, key); -}).flatMap(value -> { - System.out.println("Map 2: " + value + " (" + Thread.currentThread().getName() + ")"); - return Flux.just(value); -}).subscribeOn(Schedulers.parallel()).subscribe(); -``` - -The example prints the following lines: - - Map 1: Ben (parallel-1) - Map 1: Michael (parallel-1) - Map 1: Mark (parallel-1) - Map 2: OK (lettuce-nioEventLoop-3-1) - Map 2: OK (lettuce-nioEventLoop-3-1) - Map 2: OK (lettuce-nioEventLoop-3-1) - -Two things differ from the standalone examples: - -1. The values are set rather concurrently than sequentially - -2. 
The second `flatMap()` transformation prints the netty EventLoop - thread name - -This is because Lettuce publishers are executed and completed on the -netty EventLoop threads by default. - -`publishOn` instructs an Publisher to call its observer’s `onNext`, -`onError`, and `onCompleted` methods on a particular Scheduler. Here, -the order matters: - -``` java -Flux.just("Ben", "Michael", "Mark").flatMap(key -> { - System.out.println("Map 1: " + key + " (" + Thread.currentThread().getName() + ")"); - return commands.set(key, key); -}).publishOn(Schedulers.parallel()).flatMap(value -> { - System.out.println("Map 2: " + value + " (" + Thread.currentThread().getName() + ")"); - return Flux.just(value); -}).subscribe(); -``` - -Everything before the `publishOn()` call is executed in main, everything -below in the scheduler: - - Map 1: Ben (main) - Map 1: Michael (main) - Map 1: Mark (main) - Map 2: OK (parallel-1) - Map 2: OK (parallel-1) - Map 2: OK (parallel-1) - -Schedulers allow direct scheduling of operations. Refer to the [Project -Reactor -documentation](https://projectreactor.io/core/docs/api/reactor/core/scheduler/Schedulers.html) -for further information. - -### Redis Transactions - -Lettuce provides a convenient way to use Redis Transactions in a -reactive way. Commands that should be executed within a transaction can -be executed after the `MULTI` command was executed. Functional chaining -allows to execute commands within a closure, and each command receives -its appropriate response. A cumulative response is also returned with -`TransactionResult` in response to `EXEC`. - -See [Transactions](#transactions-using-the-reactive-api) for -further details. - -#### Other examples - -**Blocking example** - -``` java -RedisStringReactiveCommands reactive = client.connect().reactive(); -Mono set = reactive.set("key", "value"); -set.block(); -``` - -**Non-blocking example** - -``` java -RedisStringReactiveCommands reactive = client.connect().reactive(); -Mono set = reactive.set("key", "value"); -set.subscribe(); -``` - -**Functional chaining** - -``` java -RedisStringReactiveCommands reactive = client.connect().reactive(); -Flux.just("Ben", "Michael", "Mark") - .flatMap(key -> commands.sadd("seen", key)) - .flatMap(value -> commands.randomkey()) - .flatMap(commands::type) - .doOnNext(System.out::println).subscribe(); -``` - -**Redis Transaction** - - RedisReactiveCommands reactive = client.connect().reactive(); - - reactive.multi().doOnSuccess(s -> { - reactive.set("key", "1").doOnNext(s1 -> System.out.println(s1)).subscribe(); - reactive.incr("key").doOnNext(s1 -> System.out.println(s1)).subscribe(); - }).flatMap(s -> reactive.exec()) - .doOnNext(transactionResults -> System.out.println(transactionResults.wasRolledBack())) - .subscribe(); - -## Kotlin API - -Kotlin Coroutines are using Kotlin lightweight threads allowing to write -non-blocking code in an imperative way. On language side, suspending -functions provides an abstraction for asynchronous operations while on -library side kotlinx.coroutines provides functions like `async { }` and -types like `Flow`. - -Lettuce ships with extensions to provide support for idiomatic Kotlin -use. 
- -### Dependencies - -Coroutines support is available when `kotlinx-coroutines-core` and -`kotlinx-coroutines-reactive` dependencies are on the classpath: - -``` xml - - org.jetbrains.kotlinx - kotlinx-coroutines-core - ${kotlinx-coroutines.version} - - - org.jetbrains.kotlinx - kotlinx-coroutines-reactive - ${kotlinx-coroutines.version} - -``` - -### How does Reactive translate to Coroutines? - -`Flow` is an equivalent to `Flux` in Coroutines world, suitable for hot -or cold streams, finite or infinite streams, with the following main -differences: - -- `Flow` is push-based while `Flux` is a push-pull hybrid - -- Backpressure is implemented via suspending functions - -- `Flow` has only a single suspending collect method and operators are - implemented as extensions - -- Operators are easy to implement thanks to Coroutines - -- Extensions allow to add custom operators to Flow - -- Collect operations are suspending functions - -- `map` operator supports asynchronous operations (no need for - `flatMap`) since it takes a suspending function parameter - -### Coroutines API based on reactive operations - -Example for retrieving commands and using it: - -``` kotlin -val api: RedisCoroutinesCommands = connection.coroutines() - -val foo1 = api.set("foo", "bar") -val foo2 = api.keys("fo*") -``` - -> [!NOTE] -> Coroutine Extensions are experimental and require opt-in using -> `@ExperimentalLettuceCoroutinesApi`. The API ships with a reduced -> feature set. Deprecated methods and `StreamingChannel` are left out -> intentionally. Expect evolution towards a `Flow`-based API to consume -> large Redis responses. - -### Extensions for existing APIs - -#### Transactions DSL - -Example for the synchronous API: - -``` kotlin -val result: TransactionResult = connection.sync().multi { - set("foo", "bar") - get("foo") -} -``` - -Example for async with coroutines: - -``` kotlin -val result: TransactionResult = connection.async().multi { - set("foo", "bar") - get("foo") -} -``` - -## Publish/Subscribe - -Lettuce provides support for Publish/Subscribe on Redis Standalone and -Redis Cluster connections. The connection is notified on -message/subscribed/unsubscribed events after subscribing to channels or -patterns. [Synchronous](#basic-usage), [asynchronous](#asynchronous-api) -and [reactive](#reactive-api) API’s are provided to interact with Redis -Publish/Subscribe features. - -### Subscribing - -A connection can notify multiple listeners that implement -`RedisPubSubListener` (Lettuce provides a `RedisPubSubAdapter` for -convenience). All listener registrations are kept within the -`StatefulRedisPubSubConnection`/`StatefulRedisClusterConnection`. - -``` java -StatefulRedisPubSubConnection connection = client.connectPubSub() -connection.addListener(new RedisPubSubListener() { ... }) - -RedisPubSubCommands sync = connection.sync(); -sync.subscribe("channel"); - -// application flow continues -``` - -> [!NOTE] -> Don’t issue blocking calls (includes synchronous API calls to Lettuce) -> from inside of Pub/Sub callbacks as this would block the EventLoop. If -> you need to fetch data from Redis from inside a callback, please use -> the asynchronous API. - -``` java -StatefulRedisPubSubConnection connection = client.connectPubSub() -connection.addListener(new RedisPubSubListener() { ... 
}) - -RedisPubSubAsyncCommands async = connection.async(); -RedisFuture future = async.subscribe("channel"); - -// application flow continues -``` - -### Reactive API - -The reactive API provides hot `Observable`s to listen on -`ChannelMessage`s and `PatternMessage`s. The `Observable`s receive all -inbound messages. You can do filtering using the observable chain if you -need to filter out the interesting ones, The `Observable` stops -triggering events when the subscriber unsubscribes from it. - -``` java -StatefulRedisPubSubConnection connection = client.connectPubSub() - -RedisPubSubReactiveCommands reactive = connection.reactive(); -reactive.subscribe("channel").subscribe(); - -reactive.observeChannels().doOnNext(patternMessage -> {...}).subscribe() - -// application flow continues -``` - -### Redis Cluster - -Redis Cluster support Publish/Subscribe but requires some attention in -general. User-space Pub/Sub messages (Calling `PUBLISH`) are broadcasted -across the whole cluster regardless of subscriptions to particular -channels/patterns. This behavior allows connecting to an arbitrary -cluster node and registering a subscription. The client isn’t required -to connect to the node where messages were published. - -A cluster-aware Pub/Sub connection is provided by -`RedisClusterClient.connectPubSub()` allowing to listen for cluster -reconfiguration and reconnect if the topology changes. - -``` java -StatefulRedisClusterPubSubConnection connection = clusterClient.connectPubSub() -connection.addListener(new RedisPubSubListener() { ... }) - -RedisPubSubCommands sync = connection.sync(); -sync.subscribe("channel"); -``` - -Redis Cluster also makes a distinction between user-space and key-space -messages. Key-space notifications (Pub/Sub messages for key-activity) -stay node-local and are not broadcasted across the Redis Cluster. A -notification about, e.g. an expiring key, stays local to the node on -which the key expired. - -Clients that are interested in keyspace notifications must subscribe to -the appropriate node (or nodes) to receive these notifications. You can -either use `RedisClient.connectPubSub()` to establish Pub/Sub -connections to the individual nodes or use `RedisClusterClient`'s -message propagation and NodeSelection API to get a managed set of -connections. - -``` java -StatefulRedisClusterPubSubConnection connection = clusterClient.connectPubSub() -connection.addListener(new RedisClusterPubSubListener() { ... }) -connection.setNodeMessagePropagation(true); - -RedisPubSubCommands sync = connection.sync(); -sync.masters().commands().subscribe("__keyspace@0__:*"); -``` - -There are two things to pay special attention to: - -1. Replication: Keys replicated to replica nodes, especially - considering expiry, generate keyspace events on all nodes holding - the key. If a key expires and it is replicated, it will expire on - the master and all replicas. Each Redis server will emit keyspace - events. Subscribing to non-master nodes, therefore, will let your - application see multiple events of the same type for the same key - because of Redis distributed nature. - -2. Topology Changes: Subscriptions are issued either by using the - NodeSelection API or by calling `subscribe(…)` on the individual - cluster node connections. Subscription registrations are not - propagated to new nodes that are added on a topology change. - -## Transactions/Multi - -Transactions allow the execution of a group of commands in a single -step. 
Transactions can be controlled using `WATCH`, `UNWATCH`, `EXEC`, -`MULTI` and `DISCARD` commands. Synchronous, asynchronous, and reactive -APIs allow the use of transactions. - -> [!NOTE] -> Transactional use requires external synchronization when a single -> connection is used by multiple threads/processes. This can be achieved -> either by serializing transactions or by providing a dedicated -> connection to each concurrent process. Lettuce itself does not -> synchronize transactional/non-transactional invocations regardless of -> the used API facade. - -Redis responds to commands invoked during a transaction with a `QUEUED` -response. The response related to the execution of the command is -received at the moment the `EXEC` command is processed, and the -transaction is executed. The particular APIs behave in different ways: - -- Synchronous: Invocations to the commands return `null` while they are - invoked within a transaction. The `MULTI` command carries the response - of the particular commands. - -- Asynchronous: The futures receive their response at the moment the - `EXEC` command is processed. This happens while the `EXEC` response is - received. - -- Reactive: An `Obvervable` triggers `onNext`/`onCompleted` at the - moment the `EXEC` command is processed. This happens while the `EXEC` - response is received. - -As soon as you’re within a transaction, you won’t receive any responses -on triggering the commands - -``` java -redis.multi() == "OK" -redis.set(key, value) == null -redis.exec() == list("OK") -``` - -You’ll receive the transactional response when calling `exec()` on the -end of your transaction. - -``` java -redis.multi() == "OK" -redis.set(key1, value) == null -redis.set(key2, value) == null -redis.exec() == list("OK", "OK") -``` - -### Transactions using the asynchronous API - -Asynchronous use of Redis transactions is very similar to -non-transactional use. The asynchronous API returns `RedisFuture` -instances that eventually complete and they are handles to a future -result. Regular commands complete as soon as Redis sends a response. -Transactional commands complete as soon as the `EXEC` result is -received. - -Each command is completed individually with its own result so users of -`RedisFuture` will see no difference between transactional and -non-transactional `RedisFuture` completion. That said, transactional -command results are available twice: Once via `RedisFuture` of the -command and once through `List` (`TransactionResult` since -Lettuce 5) of the `EXEC` command future. - -``` java -RedisAsyncCommands async = client.connect().async(); - -RedisFuture multi = async.multi(); - -RedisFuture set = async.set("key", "value"); - -RedisFuture> exec = async.exec(); - -List objects = exec.get(); -String setResult = set.get(); - -objects.get(0) == setResult -``` - -### Transactions using the reactive API - -The reactive API can be used to execute multiple commands in a single -step. The nature of the reactive API encourages nesting of commands. It -is essential to understand the time at which an `Observable` emits a -value when working with transactions. Redis responds with `QUEUED` to -commands invoked during a transaction. The response related to the -execution of the command is received at the moment the `EXEC` command is -processed, and the transaction is executed. Subsequent calls in the -processing chain are executed after the transactional end. 
The following
-code starts a transaction, executes two commands within the transaction
-and finally executes the transaction.
-
-``` java
-RedisReactiveCommands<String, String> reactive = client.connect().reactive();
-reactive.multi().subscribe(multiResponse -> {
-    reactive.set("key", "1").subscribe();
-    reactive.incr("key").subscribe();
-    reactive.exec().subscribe();
-});
-```
-
-### Transactions on clustered connections
-
-Clustered connections perform routing by default. This means that you
-cannot be sure on which host your command is executed. If you are
-working in a clustered environment, use a regular connection to a
-particular node instead; you are then bound to that node and know which
-hash slots it handles.
-
-### Examples
-
-**Multi executing multiple commands**
-
-``` java
-redis.multi();
-
-redis.set("one", "1");
-redis.set("two", "2");
-redis.mget("one", "two");
-redis.llen(key);
-
-redis.exec(); // result: list("OK", "OK", list("1", "2"), 0L)
-```
-
-**Multi executing multiple asynchronous commands**
-
-``` java
-redis.multi();
-
-RedisFuture<String> set1 = redis.set("one", "1");
-RedisFuture<String> set2 = redis.set("two", "2");
-RedisFuture<List<KeyValue<String, String>>> mget = redis.mget("one", "two");
-RedisFuture<Long> llen = redis.llen(key);
-
-set1.thenAccept(value -> …); // OK
-set2.thenAccept(value -> …); // OK
-
-RedisFuture<TransactionResult> exec = redis.exec(); // result: list("OK", "OK", list("1", "2"), 0L)
-
-mget.get(); // list("1", "2")
-llen.thenAccept(value -> …); // 0L
-```
-
-**Using WATCH**
-
-``` java
-redis.watch(key);
-
-RedisConnection redis2 = client.connect();
-redis2.set(key, value + "X");
-redis2.close();
-
-redis.multi();
-redis.append(key, "foo");
-redis.exec(); // result is an empty list because of the changed key
-```
-
-## Scripting and Functions
-
-Redis functionality can be extended in many ways, of which [Lua
-Scripting](https://redis.io/topics/eval-intro) and
-[Functions](https://redis.io/topics/functions-intro) are two approaches
-that do not require specific prerequisites on the server.
-
-### Lua Scripting
-
-[Lua](https://redis.io/topics/lua-api) is a powerful scripting language
-that is supported at the core of Redis. Lua scripts can be invoked
-dynamically by providing the script contents to Redis or used as a stored
-procedure by loading the script into Redis and using its digest to
-invoke it.
-
- -``` java -String helloWorld = redis.eval("return ARGV[1]", STATUS, new String[0], "Hello World"); -``` - -
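Scripts can also operate on keys. A minimal sketch of passing a key array together with a different output type (the key name `mycounter` and the increment value are illustrative assumptions, not part of the original example):

``` java
// Increment a key inside the script and read the reply as a 64-bit integer (INTEGER output)
Long counter = redis.eval("return redis.call('INCRBY', KEYS[1], ARGV[1])",
        ScriptOutputType.INTEGER, new String[] { "mycounter" }, "5");
```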
- -Using Lua scripts is straightforward. Consuming results in Java requires -additional details to consume the result through a matching type. As we -do not know what your script will return, the API uses call-site -generics for you to specify the result type. Additionally, you must -provide a `ScriptOutputType` hint to `EVAL` so that the driver uses the -appropriate output parser. See [Output Formats](#output-formats) for -further details. - -Lua scripts can be stored on the server for repeated execution. -Dynamically-generated scripts are an anti-pattern as each script is -stored in Redis' script cache. Generating scripts during the application -runtime may, and probably will, exhaust the host’s memory resources for -caching them. Instead, scripts should be as generic as possible and -provide customized execution via their arguments. You can register a -script through `SCRIPT LOAD` and use its SHA digest to invoke it later: - -
-
-``` java
-String digest = redis.scriptLoad("return ARGV[1]");
-
-// later
-String helloWorld = redis.evalsha(digest, STATUS, new String[0], "Hello World");
-```
-
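Because the script cache can be emptied (for example by `SCRIPT FLUSH` or a server restart), `EVALSHA` may fail with a `NOSCRIPT` error. One possible fallback, sketched here as an illustration and not part of the original text, is to check the digest with `SCRIPT EXISTS` before choosing between `EVALSHA` and `EVAL`:

``` java
String script = "return ARGV[1]";
String helloWorld;

// Re-send the full script only if the digest is no longer cached on the server
if (redis.scriptExists(digest).get(0)) {
    helloWorld = redis.evalsha(digest, STATUS, new String[0], "Hello World");
} else {
    helloWorld = redis.eval(script, STATUS, new String[0], "Hello World");
}
```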
- -### Redis Functions - -[Redis Functions](https://redis.io/topics/functions-intro) is an -evolution of the scripting API to provide extensibility beyond Lua. -Functions can leverage different engines and follow a model where a -function library registers functionality to be invoked later with the -`FCALL` command. - -
-
-``` java
-redis.functionLoad("#!lua name=mylib\nredis.register_function('knockknock', function() return 'Who\\'s there?' end)");
-
-String response = redis.fcall("knockknock", STATUS);
-```
-
-
-Using Functions is straightforward. Consuming results in Java requires
-mapping the response to a matching type. As we do not know what your
-function will return, the API uses call-site generics for you to specify
-the result type. Additionally, you must provide a `ScriptOutputType`
-hint to `FCALL` so that the driver uses the appropriate output parser.
-See [Output Formats](#output-formats) for further details.
-
-### Output Formats
-
-You can choose from one of the following:
-
-- `BOOLEAN`: Boolean output, expects a number `0` or `1` to be converted
-  to a boolean value.
-
-- `INTEGER`: 64-bit Integer output, represented as Java `Long`.
-
-- `MULTI`: List of flat arrays.
-
-- `STATUS`: Simple status value such as `OK`. The Redis response is
-  parsed as ASCII.
-
-- `VALUE`: Value return type decoded through `RedisCodec`.
-
-- `OBJECT`: RESP3-defined object output supporting all Redis response
-  structures.
-
-### Leveraging Scripting and Functions through Command Interfaces
-
-Using dynamic functionality without a documented response structure can
-impose quite some complexity on your application. If you consider using
-scripting or functions, you can use [Command
-Interfaces](Working-with-dynamic-Redis-Command-Interfaces.md) to declare
-an interface along with methods that represent your scripting or
-function landscape. Declaring a method with input arguments and a
-response type not only makes it obvious how the script or function is
-supposed to be called, but also what the response structure looks like.
-
-Let’s take a look at a simple function call first:
-
- -``` lua -local function my_hlastmodified(keys, args) - local hash = keys[1] - return redis.call('HGET', hash, '_last_modified_') -end -``` - -
- -
- -``` java -Long lastModified = redis.fcall("my_hlastmodified", INTEGER, "my_hash"); -``` - -
-
-This example calls the `my_hlastmodified` function, expecting a `Long`
-response and passing the hash key as input argument. Calling a function
-from a single place in your code isn’t an issue on its own. The
-arrangement becomes problematic once the number of functions grows or
-you start calling the functions with different arguments from various
-places in your code. Without the function code, it becomes impossible to
-investigate how the response mechanics work or determine the argument
-semantics, as there is no single place to document the function
-behavior.
-
-Let’s apply the Command Interface pattern to see how the declaration
-and call sites change:
-
-
-``` java
-interface MyCustomCommands extends Commands {
-
-    /**
-     * Retrieve the last modified value from the hash key.
-     *
-     * @param hashKey the key of the hash.
-     * @return the last modified timestamp, can be {@code null}.
-     */
-    @Command("FCALL my_hlastmodified 1 :hashKey")
-    Long getLastModified(@Param("hashKey") String hashKey);
-
-}
-
-MyCustomCommands myCommands = …;
-Long lastModified = myCommands.getLastModified("my_hash");
-```
-
- -By declaring a command method, you create a place that allows for -storing additional documentation. The method declaration makes clear -what the function call expects and what you get in return. - diff --git a/docs/README.md b/docs/README.md new file mode 100644 index 0000000000..c19f52b9cc --- /dev/null +++ b/docs/README.md @@ -0,0 +1,18 @@ +# Table of Contents + +- [Overview](<./overview.md>) +- [New & Noteworthy](<./new-features.md>) +- [Getting Started](<./getting-started.md>) +- [Connecting Redis](<./user-guide/connecting-redis.md>) +- [Async API](<./user-guide/async-api.md>) +- [Reactive API](<./user-guide/reactive-api.md>) +- [Kotlin API](<./user-guide/kotlin-api.md>) +- [Transactions and Pipelining](<./user-guide/transactions-multi.md>) +- [Pub/Sub](<./user-guide/pubsub.md>) +- [Lua Scripting](<./user-guide/lua-scripting.md>) +- [Redis Functions](<./user-guide/redis-functions.md>) +- [High-Availability and Sharding](<./ha-sharding.md>) +- [Working with dynamic Redis Command Interfaces](<./redis-command-interfaces.md>) +- [Advanced usage](<./advanced-usage.md>) +- [Integration and Extension](<./integration-extension.md>) +- [Frequently Asked Questions](<./faq.md>) \ No newline at end of file diff --git a/docs/Advanced-usage.md b/docs/advanced-usage.md similarity index 95% rename from docs/Advanced-usage.md rename to docs/advanced-usage.md index 561098de0b..f6de542d2c 100644 --- a/docs/Advanced-usage.md +++ b/docs/advanced-usage.md @@ -125,19 +125,17 @@ not be changed unless there is a truly good reason to do so. Provider for EventLoopGroup -eve ntLoopGroupProvider +eventLoopGroupProvider none -For those who want to reuse existing netty infrastructure or the +For those who want to reuse existing netty infrastructure or the total control over the thread pools, the -Eve ntLoopGroupProvider API provides a way to do so. +EventLoopGroupProvider API provides a way to do so. EventLoopGroups are obtained and managed by an -Even tLoopGroupProvider. A provided -Eve ntLoopGroupProvider is not managed by the client and -needs to be shut down once you do not longer need the resources. - - +EventLoopGroupProvider. A provided +EventLoopGroupProvider is not managed by the client and +needs to be shut down once you no longer need the resources. Provided EventExecutorGroup @@ -145,13 +143,11 @@ needs to be shut down once you do not longer need the resources. none -For those who want to reuse existing netty infrastructure or the +For those who want to reuse existing netty infrastructure or the total control over the thread pools can provide an existing EventExecutorGroup to the Client resources. A provided EventExecutorGroup is not managed by the client and needs to be shut down once you do not longer need the resources. - - Event bus @@ -159,82 +155,72 @@ to be shut down once you do not longer need the resources. DefaultEventBus -The event bus system is used to transport events from the client to +The event bus system is used to transport events from the client to subscribers. Events are about connection state changes, metrics, and more. Events are published using a RxJava subject and the default -implementation drops events on backpressure. Learn more about the Reactive API. You can also publish your own +implementation drops events on backpressure. Learn more about the Reactive API. You can also publish your own events. If you wish to do so, make sure that your events implement the Event marker interface. 
- - Command latency collector options -commandLate ncyCollectorOptions -DefaultCommandLat encyCollectorOptions +commandLatencyCollectorOptions +DefaultCommandLatencyCollectorOptions -The client can collect latency metrics during while dispatching +The client can collect latency metrics during while dispatching commands. The options allow configuring the percentiles, level of metrics (per connection or server) and whether the metrics are cumulative or reset after obtaining these. Command latency collection is enabled by default and can be disabled by setting -commandLatency PublisherOptions(…) to -D efaultEventPublisher Options.disabled(). Latency +commandLatencyPublisherOptions(…) to +DefaultEventPublisherOptions.disabled(). Latency collector requires LatencyUtils to be on your class path. - - Command latency collector -comm andLatencyCollector -DefaultCom mandLatencyCollector +commandLatencyCollector +DefaultCommandLatencyCollector -The client can collect latency metrics during while dispatching +The client can collect latency metrics during while dispatching commands. Command latency metrics is collected on connection or server level. Command latency collection is enabled by default and can be disabled by setting commandLatency CollectorOptions(…) to DefaultCom mandLatencyCollector Options.disabled(). - - Latency event publisher options -commandLate ncyPublisherOptions -DefaultE ventPublisherOptions +commandLatencyPublisherOptions +DefaultEventPublisherOptions -Command latencies can be published using the event bus. Latency +Command latencies can be published using the event bus. Latency events are emitted by default every 10 minutes. Event publishing can be -disabled by setting commandLatency PublisherOptions(…) to -D efaultEventPublisher Options.disabled(). - - +disabled by setting commandLatencyPublisherOptions(…) to +DefaultEventPublisherOptions.disabled(). DNS Resolver dnsResolver -DnsRe solvers.JVM_DEFAULT ( or netty if present) +DnsResolvers.JVM_DEFAULT ( or netty if present) -

Since: 3.5, 4.2

+

Since: 3.5, 4.2

Configures a DNS resolver to resolve hostnames to a -ja va.net.InetAddress. Defaults to the JVM DNS resolution +java.net.InetAddress. Defaults to the JVM DNS resolution that uses blocking hostname resolution and caching of lookup results. Users of DNS-based Redis-HA setups (e.g. AWS ElastiCache) might want to configure a different DNS resolver. Lettuce comes with -Di rContextDnsResolver that uses Java’s +DirContextDnsResolver that uses Java’s DnsContextFactory to resolve hostnames. -Di rContextDnsResolver allows using either the system DNS +DirContextDnsResolver allows using either the system DNS or custom DNS servers without caching of results so each hostname lookup yields in a DNS lookup.

-

Since 4.4: Defaults to DnsR esolvers.UNRESOLVED to use +

Since 4.4: Defaults to DnsResolvers.UNRESOLVED to use netty’s AddressResolver that resolves DNS names on Bootstrap.connect() (requires netty 4.1)

- - Reconnect Delay @@ -242,13 +228,11 @@ netty’s AddressResolver that resolves DNS names on Delay.exponential() -

Since: 4.2

+

Since: 4.2

Configures a reconnect delay used to delay reconnect attempts. Defaults to binary exponential delay with an upper boundary of 30 SECONDS. See Delay for more delay implementations.

- - Netty Customizer @@ -256,7 +240,7 @@ implementations.

none -

Since: 4.4

+

Since: 4.4

Configures a netty customizer to enhance netty components. Allows customization of Bootstrap after Bootstrap configuration by Lettuce and Channel customization after @@ -266,8 +250,6 @@ otherwise Lettuce’s configures SSL), adding custom handlers or setting customized Bootstrap options. Misconfiguring Bootstrap or Channel can cause connection failures or undesired behavior.

- - Tracing @@ -275,15 +257,13 @@ failures or undesired behavior.

disabled -

Since: 5.1

+

Since: 5.1

Configures a tracing instance to trace Redis calls. Lettuce wraps Brave data models to support tracing in a vendor-agnostic way if Brave is on the class path. A Brave tracing instance -can be created using BraveTracing.crea te(clientTracing);, +can be created using BraveTracing.create(clientTracing);, where clientTracing is a created or existent Brave tracing instance .

- - @@ -323,7 +303,7 @@ client.setOptions(ClientOptions.builder() true -

Since: 3.1, 4.0

+

Since: 3.1, 4.0

Perform a lightweight PING connection handshake when establishing a Redis connection. If true (default is true), every connection and reconnect will issue a PING command @@ -334,12 +314,10 @@ protocol version. RESP 3/protocol discovery performs a HELLO handshake.

Failed PING's on reconnect are handled as protocol errors and can suspend reconnection if -suspendReconne ctOnProtocolFailure is enabled.

+suspendReconnectOnProtocolFailure is enabled.

The PING handshake validates whether the other end of the connected socket is a service that behaves like a Redis server.

- - Auto-Reconnect @@ -347,15 +325,13 @@ server.

true -

Since: 3.1, 4.0

+

Since: 3.1, 4.0

Controls auto-reconnect behavior on connections. As soon as a connection gets closed/reset without the intention to close it, the client will try to reconnect, activate the connection and re-issue any queued commands.

This flag also has the effect that disconnected connections will refuse commands and cancel these with an exception.

- - Cancel commands on reconnect failure @@ -363,7 +339,7 @@ refuse commands and cancel these with an exception.

false -

Since: 3.1, 4.0

+

Since: 3.1, 4.0

This flag is deprecated and should not be used as it can lead to race conditions and protocol offsets. SSL is natively supported by Lettuce and does no longer requires the use of SSL tunnels where @@ -376,8 +352,6 @@ connection reset, host lookup fails, this does not affect the cancelation of commands. In contrast, where the protocol/connection activation fails due to SSL errors or PING before activating connection failure, queued commands are canceled.

- - Policy how to reclaim decode buffer memory @@ -385,20 +359,18 @@ failure, queued commands are canceled.

ratio-based at 75% -

Since: 6.0

+

Since: 6.0

Policy to discard read bytes from the decoding aggregation buffer to -reclaim memory. See D ecodeBufferPolicies for available +reclaim memory. See DecodeBufferPolicies for available strategies.

- - Suspend reconnect on protocol failure -suspendReconne ctOnProtocolFailure +suspendReconnectOnProtocolFailure false (was introduced in 3. 1 with default true) -

Since: 3.1, 4.0

+

Since: 3.1, 4.0

If this flag is true the reconnect will be suspended on protocol errors. The reconnect itself has two phases: Socket connection and protocol/connection activation. In case a connect timeout occurs, a @@ -408,8 +380,6 @@ activation fails due to SSL errors or PING before activating connection failure, queued commands are canceled.

Reconnection can be activated again, but there is no public API to obtain the ConnectionWatchdog instance.

- - Request queue size @@ -417,7 +387,7 @@ obtain the ConnectionWatchdog instance.

2147483647 (Integer#MAX_VALUE) -

Since: 3.4, 4.1

+

Since: 3.4, 4.1

Controls the per-connection request queue size. The command invocation will lead to a RedisException if the queue size is exceeded. Setting the requestQueueSize to a lower value @@ -425,16 +395,14 @@ will lead earlier to exceptions during overload or while the connection is in a disconnected state. A higher value means hitting the boundary will take longer to occur, but more requests will potentially be queued, and more heap space is used.

- - Disconnected behavior -d isconnectedBehavior +disconnectedBehavior DEFAULT -

Since: 3.4, 4.1

+

Since: 3.4, 4.1

A connection can behave in a disconnected state in various ways. The auto-connect feature allows in particular to retrigger commands that have been queued while a connection is disconnected. The disconnected @@ -446,21 +414,17 @@ reject commands when auto-reconnect is disabled.

state.

REJECT_COMMANDS: Reject commands in disconnected state.

- - Protocol Version protocolVersion -L atest/Auto-discovery +Latest/Auto-discovery -

Since: 6.0

+

Since: 6.0

Configuration of which protocol version (RESP2/RESP3) to use. Leaving this option unconfigured performs a protocol discovery to use the -lastest available protocol.

- - +latest available protocol.

Script Charset @@ -468,10 +432,8 @@ lastest available protocol.

UTF-8 -

Since: 6.0

+

Since: 6.0

Charset to use for Luascripts.

- - Socket Options @@ -479,11 +441,9 @@ lastest available protocol.

10 seconds Connecti on-Timeout, no keep-a live, no TCP noDelay -

Since: 4.3

+

Since: 4.3

Options to configure low-level socket options for the connections kept to Redis servers.

- - SSL Options @@ -491,11 +451,9 @@ kept to Redis servers.

(non e), use JDK defaults -

Since: 4.3

+

Since: 4.3

Configure SSL options regarding SSL providers (JDK/OpenSSL) and key store/trust store.

- - Timeout Options @@ -503,13 +461,11 @@ store/trust store.

Do n ot timeout commands. -

Since: 5.1

+

Since: 5.1

Options to configure command timeouts applied to timeout commands after dispatching these (active connections, queued while disconnected, batch buffer). By default, the synchronous API times out commands using -Red isURI.getTimeout().

- - +RedisURI.getTimeout().

Publish Reactive Signals on Scheduler @@ -517,7 +473,7 @@ batch buffer). By default, the synchronous API times out commands using Use I/O thread. -

Since: 5.1.4

+

Since: 5.1.4

Use a dedicated Scheduler to emit reactive data signals. Enabling this option can be useful for reactive sequences that require a significant amount of processing with a single/a few Redis connections @@ -526,8 +482,6 @@ option uses EventExecutorGroup configured through ClientResources for data/completion signals. The used Thread is sticky across all signals for a single Publisher instance.

- - @@ -572,7 +526,7 @@ client.setOptions(ClusterClientOptions.builder() false -

Since: 3.1, 4.0

+

Since: 3.1, 4.0

Enables or disables periodic cluster topology refresh. The refresh is handled in the background. Partitions, the view on the Redis cluster topology, are valid for a whole RedisClusterClient @@ -583,8 +537,6 @@ can be set with refreshPeriod. The refresh job starts after either opening the first connection with the job enabled or by calling reloadPartitions. The job can be disabled without discarding the full client by setting new client options.

- - Cluster topology refresh period @@ -592,12 +544,10 @@ discarding the full client by setting new client options.

60 SECONDS -

Since: 3.1, 4.0

+

Since: 3.1, 4.0

Set the period between the refresh job runs. The effective interval cannot be changed once the refresh job is active. Changes to the value will be ignored.

- - Adaptive cluster topology refresh @@ -605,7 +555,7 @@ will be ignored.

(none) -

Since: 4.2

+

Since: 4.2

Enables selectively adaptive topology refresh triggers. Adaptive refresh triggers initiate topology view updates based on events happened during Redis Cluster operations. Adaptive triggers lead to an immediate @@ -616,8 +566,6 @@ disabled by default. Following triggers can be enabled:

PER SISTENT_RECONNECTS, UNKNOWN_NODE (since 5.1), and UNCOVERED_SLOT (since 5.2) (see also reconnect attempts for the reconnect trigger)

- - Adaptive refresh triggers timeout @@ -625,14 +573,12 @@ attempts for the reconnect trigger)

30 SECONDS -

Since: 4.2

+

Since: 4.2

Set the timeout between the adaptive refresh job runs. Multiple triggers within the timeout will be ignored, only the first enabled trigger leads to a topology refresh. The effective period cannot be changed once the refresh job is active. Changes to the value will be ignored.

- - Reconnect attempts (Adaptive topology refresh trigger) @@ -640,14 +586,12 @@ ignored.

5 -

Since: 4.2

+

Since: 4.2

Set the threshold for the PE RSISTENT_RECONNECTS refresh trigger. Topology updates based on persistent reconnects lead only to a refresh if the reconnect process tries at least the number of specified attempts. The first reconnect attempt starts with 1.

- - Dynamic topology refresh sources @@ -655,7 +599,7 @@ attempts. The first reconnect attempt starts with true -

Since: 4.2

+

Since: 4.2

Discover cluster nodes from the topology and use only the discovered nodes as the source for the cluster topology. Using dynamic refresh will query all discovered nodes for the cluster topology details. If set to @@ -666,8 +610,6 @@ with many nodes.

Note that enabling dynamic topology refresh sources uses node addresses reported by Redis CLUSTER NODES output which typically contains IP addresses.

- - Close stale connections @@ -675,15 +617,13 @@ typically contains IP addresses.

true -

Since: 3.3, 4.1

+

Since: 3.3, 4.1

Stale connections are existing connections to nodes which are no longer part of the Redis Cluster. If this flag is set to true, then stale connections are closed upon topology refreshes. It’s strongly advised to close stale connections as open connections will attempt to reconnect nodes if the node is no longer available and open connections require system resources.

- - Limitation of cluster redirects @@ -691,7 +631,7 @@ available and open connections require system resources.

5 -

Since: 3.1, 4.0

+

Since: 3.1, 4.0

When the assignment of a slot-hash is moved in a Redis Cluster and a client requests a key that is located on the moved slot-hash, the Cluster node responds with a -MOVED response. In this case, @@ -702,8 +642,6 @@ redirects can be configured. Once the limit is reached, the -MOVED error is returned to the caller. This limit also applies for -ASK redirections in case a slot is set to MIGRATING state.

- - Filter nodes from Topology @@ -711,14 +649,12 @@ applies for -ASK redirections in case a slot is set to no filter -

Since: 6.1.6

+

Since: 6.1.6

When providing a nodeFilter, then RedisClusterNodes can be filtered from the topology view to remove unwanted nodes (e.g. failed replicas). Note that the filter is applied only after obtaining the topology so the filter does not prevent trying to connect the node during topology discovery.

- - Validate cluster node membership @@ -726,7 +662,7 @@ trying to connect the node during topology discovery.

true -

Since: 3.3, 4.0

+

Since: 3.3, 4.0

Validate the cluster node membership before allowing connections to that node. The current implementation performs redirects using MOVED and ASK and allows obtaining connections @@ -741,8 +677,6 @@ topology view is stale Connecting to cluster nodes using different IP’s/hostnames (e.g. private/public IP’s)

Connecting to non-cluster members to reconfigure those while using the RedisClusterClient connection.

- - @@ -1027,12 +961,12 @@ There are 4 StreamingChannels accepting different data types: The result of the steaming methods is the count of keys/values/key-value pairs as `long` value. -> [!NOTE] -> Don’t issue blocking calls (includes synchronous API calls to Lettuce) -> from inside of callbacks such as the streaming API as this would block -> the EventLoop. If you need to fetch data from Redis from inside a -> `StreamingChannel` callback, please use the asynchronous API or use -> the reactive API directly. +!!! NOTE + Don’t issue blocking calls (includes synchronous API calls to Lettuce) + from inside of callbacks such as the streaming API as this would block + the EventLoop. If you need to fetch data from Redis from inside a + `StreamingChannel` callback, please use the asynchronous API or use + the reactive API directly. ``` java Long count = redis.hgetall(new KeyValueStreamingChannel() @@ -1611,8 +1545,8 @@ The transport and command execution layer does not block the processing until a command is written, processed and while its response is read. Lettuce sends commands at the moment they are invoked. -A good example is the [async API](Connecting-Redis.md#asynchronous-api). Every -invocation on the [async API](Connecting-Redis.md#asynchronous-api) returns a +A good example is the [async API](user-guide/async-api.md). Every +invocation on the [async API](user-guide/async-api.md) returns a `Future` (response handle) after the command is written to the netty pipeline. A write to the pipeline does not mean, the command is written to the underlying transport. Multiple commands can be written without @@ -1632,10 +1566,10 @@ commands can be a reason to use multiple connections. ### Command flushing -> [!NOTE] -> Command flushing is an advanced topic and in most cases (i.e. unless -> your use-case is a single-threaded mass import application) you won’t -> need it as Lettuce uses pipelining by default. +!!! NOTE + Command flushing is an advanced topic and in most cases (i.e. unless + your use-case is a single-threaded mass import application) you won’t + need it as Lettuce uses pipelining by default. The normal operation mode of Lettuce is to flush every command which means, that every command is written to the transport after it was @@ -1662,14 +1596,14 @@ visible to all threads using a shared connection. If you want to omit this effect, use dedicated connections. The `AutoFlushCommands` state cannot be set on pooled connections by the Lettuce connection pooling. -> [!WARNING] -> Do not use `setAutoFlushCommands(…)` when sharing a connection across -> threads, at least not without proper synchronization. According to the -> many questions and (invalid) bug reports using -> `setAutoFlushCommands(…)` in a multi-threaded scenario causes a lot of -> complexity overhead and is very likely to cause issues on your side. -> `setAutoFlushCommands(…)` can only be reliably used on single-threaded -> connection usage in scenarios like bulk-loading. +!!! WARNING + Do not use `setAutoFlushCommands(…)` when sharing a connection across + threads, at least not without proper synchronization. According to the + many questions and (invalid) bug reports using + `setAutoFlushCommands(…)` in a multi-threaded scenario causes a lot of + complexity overhead and is very likely to cause issues on your side. + `setAutoFlushCommands(…)` can only be reliably used on single-threaded + connection usage in scenarios like bulk-loading. 
``` java StatefulRedisConnection connection = client.connect(); @@ -2127,10 +2061,10 @@ StatefulRedisConnection connection = redis.getStatefulConnection connection.dispatch(CommandType.PING, VoidOutput.create()); ``` -> [!NOTE] -> `VoidOutput.create()` swallows also Redis error responses. If you want -> to just avoid response decoding, create a `VoidCodec` instance using -> its constructor to retain error response decoding. +!!! NOTE + `VoidOutput.create()` swallows also Redis error responses. If you want + to just avoid response decoding, create a `VoidCodec` instance using + its constructor to retain error response decoding. #### Asynchronous @@ -2140,7 +2074,7 @@ that extends `CompleteableFuture`. `AsyncCommand` can be synchronized by By using the methods from the `CompletionStage` interface (such as `handle()` or `thenAccept()`) the response handler will trigger the functions ("listeners") on command completion. Lear more about -asynchronous usage in the [Asynchronous API](Connecting-Redis.md#asynchronous-api) topic. +asynchronous usage in the [Asynchronous API](user-guide/async-api.md) topic. ``` java StatefulRedisConnection connection = redis.getStatefulConnection(); @@ -2162,7 +2096,7 @@ synchronous view. #### Reactive Reactive commands are dispatched at the moment of subscription (see -[Reactive API](Connecting-Redis.md#reactive-api) for more details on reactive APIs). In the +[Reactive API](user-guide/reactive-api.md) for more details on reactive APIs). In the context of Lettuce this means, you need to start before calling the `dispatch()` method. The reactive API uses internally an `ObservableCommand`, but that is internal stuff. If you want to dispatch diff --git a/docs/Frequently-Asked-Questions.md b/docs/faq.md similarity index 97% rename from docs/Frequently-Asked-Questions.md rename to docs/faq.md index 8c540fe885..4bb14401d2 100644 --- a/docs/Frequently-Asked-Questions.md +++ b/docs/faq.md @@ -139,7 +139,7 @@ default, the queue is unbounded which can lead to memory exhaustion. You can configure disconnected behavior and the request queue size through `ClientOptions` for your workload profile. See [Client -Options](Advanced-usage.md#client-options) for further reference. +Options](advanced-usage.md#client-options) for further reference. ## Performance Degradation using the Reactive API with a single connection @@ -163,7 +163,7 @@ system leverages a single thread and therefore leads to contention. You can configure signal multiplexing for the reactive API through `ClientOptions` by enabling `publishOnScheduler(true)`. See [Client -Options](Advanced-usage.md#client-options) for further reference. Alternatively, you can +Options](advanced-usage.md#client-options) for further reference. Alternatively, you can configure `Scheduler` on each result stream through `publishOn(Scheduler)`. Note that the asynchronous API features the same behavior and you might want to use `then…Async(…)`, `run…Async(…)`, diff --git a/docs/Getting-Started.md b/docs/getting-started.md similarity index 100% rename from docs/Getting-Started.md rename to docs/getting-started.md diff --git a/docs/High-Availability-and-Sharding.md b/docs/ha-sharding.md similarity index 95% rename from docs/High-Availability-and-Sharding.md rename to docs/ha-sharding.md index f9cd050126..d771ddbfac 100644 --- a/docs/High-Availability-and-Sharding.md +++ b/docs/ha-sharding.md @@ -16,11 +16,10 @@ by supplying the client, Codec, and one or multiple RedisURIs. 
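As a quick illustration of that entry point, a minimal sketch for a standalone Master/Replica setup (the URI, the key, and the use of `MasterReplica.connect(…)` from the current Lettuce API are assumptions for illustration):

``` java
RedisClient client = RedisClient.create();

// Connect through the Master/Replica entry point with a codec and a single seed URI
StatefulRedisMasterReplicaConnection<String, String> connection = MasterReplica.connect(
        client, StringCodec.UTF8, RedisURI.create("redis://localhost:6379"));

String value = connection.sync().get("key");

connection.close();
client.shutdown();
```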
### Redis Sentinel -Master/Replica using [Redis Sentinel](#redis-sentinel) uses Redis +Master/Replica using Redis Sentinel uses Redis Sentinel as registry and notification source for topology events. -Details about the master and its replicas are obtained from [Redis -Sentinel](#redis-sentinel). Lettuce subscribes to [Redis -Sentinel](#redis-sentinel) events for notifications to all supplied +Details about the master and its replicas are obtained from Redis +Sentinel. Lettuce subscribes to Redis Sentinel events for notifications to all supplied Sentinels. ### Standalone Master/Replica @@ -158,7 +157,7 @@ This is useful for performing administrative tasks using Lettuce. You can monitor new master nodes, query master addresses, replicas and much more. A connection to a Redis Sentinel node is established by `RedisClient.connectSentinel()`. Use a [Publish/Subscribe -connection](Connecting-Redis.md#publishsubscribe) to subscribe to Sentinel events. +connection](user-guide/pubsub.md) to subscribe to Sentinel events. ### Redis discovery using Redis Sentinel @@ -197,11 +196,11 @@ RedisClient client = RedisClient.create(redisUri); RedisConnection connection = client.connect(); ``` -> [!NOTE] -> Every time you connect to a Redis instance using Redis Sentinel, the -> Redis master is looked up using a new connection to a Redis Sentinel. -> This can be time-consuming, especially when multiple Redis Sentinels -> are used and one or more of them are not reachable. +!!! NOTE + Every time you connect to a Redis instance using Redis Sentinel, the + Redis master is looked up using a new connection to a Redis Sentinel. + This can be time-consuming, especially when multiple Redis Sentinels + are used and one or more of them are not reachable. ## Redis Cluster @@ -386,14 +385,14 @@ the same cluster topology view. The view can be updated in three ways: 1. Either by calling `RedisClusterClient.reloadPartitions` -2. [Periodic updates](Advanced-usage.md#cluster-specific-options) in the background +2. [Periodic updates](advanced-usage.md#cluster-specific-options) in the background based on an interval -3. [Adaptive updates](Advanced-usage.md#cluster-specific-options) in the background +3. [Adaptive updates](advanced-usage.md#cluster-specific-options) in the background based on persistent disconnects and `MOVED`/`ASK` redirections By default, commands follow `-ASK` and `-MOVED` redirects [up to 5 -times](Advanced-usage.md#cluster-specific-options) until the command execution is +times](advanced-usage.md#cluster-specific-options) until the command execution is considered to be failed. Background topology updating starts with the first connection obtained through `RedisClusterClient`. @@ -430,7 +429,7 @@ and closed after obtaining the topology: ### Client-options -See [Cluster-specific Client options](Advanced-usage.md#cluster-specific-options). +See [Cluster-specific Client options](advanced-usage.md#cluster-specific-options). #### Examples @@ -661,10 +660,10 @@ to ensure that your application can tolerate stale data. | `ANY` | Read from any node of the cluster. | | `ANY_REPLICA` | Read from any replica of the cluster. | -> [!TIP] -> The latency of the nodes is determined upon the cluster topology -> refresh. If the topology view is never refreshed, values from the -> initial cluster nodes read are used. +!!! TIP + The latency of the nodes is determined upon the cluster topology + refresh. If the topology view is never refreshed, values from the + initial cluster nodes read are used. 
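For example, selecting replica-preferred reads on a Redis Cluster connection might look like the following sketch (the client setup and key are assumed):

``` java
StatefulRedisClusterConnection<String, String> connection = clusterClient.connect();
connection.setReadFrom(ReadFrom.REPLICA_PREFERRED);

// Reads may now be served by replicas; writes are still routed to the slot master
String value = connection.sync().get("key");
```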
Custom read settings can be implemented by extending the `io.lettuce.core.ReadFrom` class. diff --git a/docs/index.md b/docs/index.md deleted file mode 100644 index 983037e1b3..0000000000 --- a/docs/index.md +++ /dev/null @@ -1,11 +0,0 @@ -# Table of Contents - -- [Overview](<./Overview.md>) -- [New & Noteworthy](<./New--Noteworthy.md>) -- [Getting Started](<./Getting-Started.md>) -- [Connecting Redis](<./Connecting-Redis.md>) -- [High-Availability and Sharding](<./High-Availability-and-Sharding.md>) -- [Working with dynamic Redis Command Interfaces](<./Working-with-dynamic-Redis-Command-Interfaces.md>) -- [Advanced usage](<./Advanced-usage.md>) -- [Integration and Extension](<./Integration-and-Extension.md>) -- [Frequently Asked Questions](<./Frequently-Asked-Questions.md>) \ No newline at end of file diff --git a/docs/Integration-and-Extension.md b/docs/integration-extension.md similarity index 100% rename from docs/Integration-and-Extension.md rename to docs/integration-extension.md diff --git a/docs/New--Noteworthy.md b/docs/new-features.md similarity index 84% rename from docs/New--Noteworthy.md rename to docs/new-features.md index 5bde497de0..84a2d23bf8 100644 --- a/docs/New--Noteworthy.md +++ b/docs/new-features.md @@ -2,7 +2,7 @@ ## What’s new in Lettuce 6.3 -- [Redis Function support](Connecting-Redis.md#redis-functions) (`fcall` and `FUNCTION` +- [Redis Function support](user-guide/redis-functions.md) (`fcall` and `FUNCTION` commands). - Support for Library Name and Version through `LettuceVersion`. @@ -14,7 +14,7 @@ ## What’s new in Lettuce 6.2 -- [`RedisCredentialsProvider`](Connecting-Redis.md#authentication) abstraction to +- [`RedisCredentialsProvider`](user-guide/connecting-redis.md#authentication) abstraction to externalize credentials and credentials rotation. - Retrieval of Redis Cluster node connections using `ConnectionIntent` @@ -31,10 +31,10 @@ - Command Listener API through `RedisClient.addListener(CommandListener)`. -- [Micrometer support](Advanced-usage.md#micrometer) through +- [Micrometer support](advanced-usage.md#micrometer) through `MicrometerCommandLatencyRecorder`. -- [Experimental support for `io_uring`](Advanced-usage.md#native-transports). +- [Experimental support for `io_uring`](advanced-usage.md#native-transports). - Configuration of extended Keep-Alive options through `KeepAliveOptions` (only available for some transports/Java versions). @@ -45,7 +45,7 @@ - Add support for Redis ACL commands. -- [Java Flight Recorder Events](Advanced-usage.md#java-flight-recorder-events-since-61) +- [Java Flight Recorder Events](advanced-usage.md#java-flight-recorder-events-since-61) ## What’s new in Lettuce 6.0 @@ -57,7 +57,7 @@ - Cluster topology refresh is now non-blocking. -- [Kotlin Coroutine Extensions](Connecting-Redis.md#kotlin-api). +- [Kotlin Coroutine Extensions](user-guide/kotlin-api.md). - RxJava 3 support. @@ -110,14 +110,14 @@ - Add support for `ZPOPMIN`, `ZPOPMAX`, `BZPOPMIN`, `BZPOPMAX` commands. - Add support for Redis Command Tracing through Brave, see [Configuring - Client resources](Advanced-usage.md#configuring-client-resources). + Client resources](advanced-usage.md#configuring-client-resources). - Add support for [Redis Streams](https://redis.io/topics/streams-intro). - Asynchronous `connect()` for Master/Replica connections. 
-- [Asynchronous Connection Pooling](Advanced-usage.md#asynchronous-connection-pooling) +- [Asynchronous Connection Pooling](advanced-usage.md#asynchronous-connection-pooling) through `AsyncConnectionPoolSupport` and `AsyncPool`. - Dedicated exceptions for Redis `LOADING`, `BUSY`, and `NOSCRIPT` @@ -127,11 +127,11 @@ canceled already on disconnect. - Global command timeouts (also for reactive and asynchronous API usage) - configurable through [Client Options](Advanced-usage.md#client-options). + configurable through [Client Options](advanced-usage.md#client-options). - Host and port mappers for Lettuce usage behind connection tunnels/proxies through `SocketAddressResolver`, see [Configuring - Client resources](Advanced-usage.md#configuring-client-resources). + Client resources](advanced-usage.md#configuring-client-resources). - `SCRIPT LOAD` dispatch to all cluster nodes when issued through `RedisAdvancedClusterCommands`. @@ -147,11 +147,11 @@ - New artifact coordinates: `io.lettuce:lettuce-core` and packages moved from `com.lambdaworks.redis` to `io.lettuce.core`. -- [Reactive API](Connecting-Redis.md#reactive-api) now Reactive Streams-based using +- [Reactive API](user-guide/reactive-api.md) now Reactive Streams-based using [Project Reactor](https://projectreactor.io/). - [Redis Command - Interfaces](Working-with-dynamic-Redis-Command-Interfaces.md) supporting + Interfaces](redis-command-interfaces.md) supporting dynamic command invocation and Redis Modules. - Enhanced, immutable Key-Value objects. diff --git a/docs/Overview.md b/docs/overview.md similarity index 91% rename from docs/Overview.md rename to docs/overview.md index a5d12533a1..020d93c88d 100644 --- a/docs/Overview.md +++ b/docs/overview.md @@ -56,8 +56,8 @@ unnecessary intermediate buffering or blocking. Lettuce is a scalable thread-safe Redis client based on [netty](https://netty.io) and Reactor. Lettuce provides -[synchronous](Connecting-Redis.md#basic-usage), [asynchronous](Connecting-Redis.md#asynchronous-api) and -[reactive](Connecting-Redis.md#reactive-api) APIs to interact with Redis. +[synchronous](user-guide/connecting-redis.md#basic-usage), [asynchronous](user-guide/async-api.md) and +[reactive](user-guide/reactive-api.md) APIs to interact with Redis. ## Requirements @@ -119,24 +119,24 @@ ticket on the lettuce issue ## Where to go from here -- Head to [Getting Started](Getting-Started.md) if you feel like jumping +- Head to [Getting Started](getting-started.md) if you feel like jumping straight into the code. - Go to [High-Availability and - Sharding](High-Availability-and-Sharding.md) for Master/Replica + Sharding](ha-sharding.md) for Master/Replica ("Master/Slave"), Redis Sentinel and Redis Cluster topics. - In order to dig deeper into the core features of Reactor: - If you’re looking for client configuration options, performance related behavior and how to use various transports, go to [Advanced - usage](Advanced-usage.md). + usage](advanced-usage.md). - - See [Integration and Extension](Integration-and-Extension.md) for + - See [Integration and Extension](integration-extension.md) for extending Lettuce with codecs or integrate it in your CDI/Spring application. - You want to know more about **at-least-once** and **at-most-once**? Take a look into [Command execution - reliability](Advanced-usage.md#command-execution-reliability). + reliability](advanced-usage.md#command-execution-reliability). 
diff --git a/docs/Working-with-dynamic-Redis-Command-Interfaces.md b/docs/redis-command-interfaces.md similarity index 96% rename from docs/Working-with-dynamic-Redis-Command-Interfaces.md rename to docs/redis-command-interfaces.md index e59b4dd9ad..e6252aa902 100644 --- a/docs/Working-with-dynamic-Redis-Command-Interfaces.md +++ b/docs/redis-command-interfaces.md @@ -149,13 +149,13 @@ public interface MixedCommands extends Commands { from "camel humps" that style by placing a dot (`.`) between name parts. -> [!NOTE] -> Command names are attempted to be resolved against `CommandType` to -> participate in settings for known commands. These are primarily used -> to determine a command intent (whether a command is a read-only one). -> Commands are resolved case-sensitive. Use lower-case command names in -> `@Command` to resolve to an unknown command to e.g. enforce -> master-routing. +!!! NOTE + Command names are attempted to be resolved against `CommandType` to + participate in settings for known commands. These are primarily used + to determine a command intent (whether a command is a read-only one). + Commands are resolved case-sensitive. Use lower-case command names in + `@Command` to resolve to an unknown command to e.g. enforce + master-routing. ### CamelCase in method names @@ -276,9 +276,9 @@ access to parameter names if the code was compiled with `@Param`. Please note that all parameters are required to be annotated if using `@Param`. -> [!NOTE] -> The same parameter can be referenced multiple times. Not referenced -> parameters are appended as arguments after the last command segment. +!!! NOTE + The same parameter can be referenced multiple times. Not referenced + parameters are appended as arguments after the last command segment. #### Keys and values @@ -433,7 +433,7 @@ Each declared command methods requires a synchronization mode, more specific an execution model. Lettuce uses an event-driven command execution model to send commands, process responses, and signal completion. Command methods can execute their commands in a synchronous, -[asynchronous](Connecting-Redis.md#asynchronous-api) or [reactive](Connecting-Redis.md#reactive-api) way. +[asynchronous](user-guide/async-api.md) or [reactive](user-guide/reactive-api.md) way. The choice of a particular execution model is made on return type level, more specific on the return type wrapper. Each command method may use a @@ -496,7 +496,7 @@ Currently supported reactive types: - RxJava 2 `Single`, `Maybe` and `Flowable` (via `rxjava` 2.0) -See [Reactive API](Connecting-Redis.md#reactive-api) for more details. +See [Reactive API](user-guide/reactive-api.md) for more details. ``` java interface KeyCommands extends Commands { diff --git a/docs/static/logo-redis.svg b/docs/static/logo-redis.svg new file mode 100644 index 0000000000..a8de68d23c --- /dev/null +++ b/docs/static/logo-redis.svg @@ -0,0 +1,10 @@ + + + + + + + + + + diff --git a/docs/user-guide/async-api.md b/docs/user-guide/async-api.md new file mode 100644 index 0000000000..cc100b07cb --- /dev/null +++ b/docs/user-guide/async-api.md @@ -0,0 +1,572 @@ +## Asynchronous API + +This guide will give you an impression how and when to use the +asynchronous API provided by Lettuce 4.x. + +### Motivation + +Asynchronous methodologies allow you to utilize better system resources, +instead of wasting threads waiting for network or disk I/O. Threads can +be fully utilized to perform other work instead. 
Lettuce facilitates +asynchronicity from building the client on top of +[netty](http://netty.io) that is a multithreaded, event-driven I/O +framework. All communication is handled asynchronously. Once the +foundation is able to processes commands concurrently, it is convenient +to take advantage from the asynchronicity. It is way harder to turn a +blocking and synchronous working software into a concurrently processing +system. + +#### Understanding Asynchronicity + +Asynchronicity permits other processing to continue before the +transmission has finished and the response of the transmission is +processed. This means, in the context of Lettuce and especially Redis, +that multiple commands can be issued serially without the need of +waiting to finish the preceding command. This mode of operation is also +known as [Pipelining](http://redis.io/topics/pipelining). The following +example should give you an impression of the mode of operation: + +- Given client *A* and client *B* + +- Client *A* triggers command `SET A=B` + +- Client *B* triggers at the same time of Client *A* command `SET C=D` + +- Redis receives command from Client *A* + +- Redis receives command from Client *B* + +- Redis processes `SET A=B` and responds `OK` to Client *A* + +- Client *A* receives the response and stores the response in the + response handle + +- Redis processes `SET C=D` and responds `OK` to Client *B* + +- Client *B* receives the response and stores the response in the + response handle + +Both clients from the example above can be either two threads or +connections within an application or two physically separated clients. + +Clients can operate concurrently to each other by either being separate +processes, threads, event-loops, actors, fibers, etc. Redis processes +incoming commands serially and operates mostly single-threaded. This +means, commands are processed in the order they are received with some +characteristic that we’ll cover later. + +Let’s take the simplified example and enhance it by some program flow +details: + +- Given client *A* + +- Client *A* triggers command `SET A=B` + +- Client *A* uses the asynchronous API and can perform other processing + +- Redis receives command from Client *A* + +- Redis processes `SET A=B` and responds `OK` to Client *A* + +- Client *A* receives the response and stores the response in the + response handle + +- Client *A* can access now the response to its command without waiting + (non-blocking) + +The Client *A* takes advantage from not waiting on the result of the +command so it can process computational work or issue another Redis +command. The client can work with the command result as soon as the +response is available. + +#### Impact of asynchronicity to the synchronous API + +While this guide helps you to understand the asynchronous API it is +worthwhile to learn the impact on the synchronous API. The general +approach of the synchronous API is no different than the asynchronous +API. In both cases, the same facilities are used to invoke and transport +commands to the Redis server. The only difference is a blocking behavior +of the caller that is using the synchronous API. Blocking happens on +command level and affects only the command completion part, meaning +multiple clients using the synchronous API can invoke commands on the +same connection and at the same time without blocking each other. A call +on the synchronous API is unblocked at the moment a command response was +processed. 
+ +- Given client *A* and client *B* + +- Client *A* triggers command `SET A=B` on the synchronous API and waits + for the result + +- Client *B* triggers at the same time of Client *A* command `SET C=D` + on the synchronous API and waits for the result + +- Redis receives command from Client *A* + +- Redis receives command from Client *B* + +- Redis processes `SET A=B` and responds `OK` to Client *A* + +- Client *A* receives the response and unblocks the program flow of + Client *A* + +- Redis processes `SET C=D` and responds `OK` to Client *B* + +- Client *B* receives the response and unblocks the program flow of + Client *B* + +However, there are some cases you should not share a connection among +threads to avoid side-effects. The cases are: + +- Disabling flush-after-command to improve performance + +- The use of blocking operations like `BLPOP`. Blocking operations are + queued on Redis until they can be executed. While one connection is + blocked, other connections can issue commands to Redis. Once a command + unblocks the blocking command (that said an `LPUSH` or `RPUSH` hits + the list), the blocked connection is unblocked and can proceed after + that. + +- Transactions + +- Using multiple databases + +#### Result handles + +Every command invocation on the asynchronous API creates a +`RedisFuture` that can be canceled, awaited and subscribed +(listener). A `CompleteableFuture` or `RedisFuture` is a pointer +to the result that is initially unknown since the computation of its +value is yet incomplete. A `RedisFuture` provides operations for +synchronization and chaining. + +``` java +CompletableFuture future = new CompletableFuture<>(); + +System.out.println("Current state: " + future.isDone()); + +future.complete("my value"); + +System.out.println("Current state: " + future.isDone()); +System.out.println("Got value: " + future.get()); +``` + +The example prints the following lines: + + Current state: false + Current state: true + Got value: my value + +Attaching a listener to a future allows chaining. Promises can be used +synonymous to futures, but not every future is a promise. A promise +guarantees a callback/notification and thus it has come to its name. + +A simple listener that gets called once the future completes: + +``` java +final CompletableFuture future = new CompletableFuture<>(); + +future.thenRun(new Runnable() { + @Override + public void run() { + try { + System.out.println("Got value: " + future.get()); + } catch (Exception e) { + e.printStackTrace(); + } + + } +}); + +System.out.println("Current state: " + future.isDone()); +future.complete("my value"); +System.out.println("Current state: " + future.isDone()); +``` + +The value processing moves from the caller into a listener that is then +called by whoever completes the future. The example prints the following +lines: + + Current state: false + Got value: my value + Current state: true + +The code from above requires exception handling since calls to the +`get()` method can lead to exceptions. Exceptions raised during the +computation of the `Future` are transported within an +`ExecutionException`. Another exception that may be thrown is the +`InterruptedException`. This is because calls to `get()` are blocking +calls and the blocked thread can be interrupted at any time. Just think +about a system shutdown. + +The `CompletionStage` type allows since Java 8 a much more +sophisticated handling of futures. A `CompletionStage` can consume, +transform and build a chain of value processing. 
The code from above can +be rewritten in Java 8 in the following style: + +``` java +CompletableFuture future = new CompletableFuture<>(); + +future.thenAccept(new Consumer() { + @Override + public void accept(String value) { + System.out.println("Got value: " + value); + } +}); + +System.out.println("Current state: " + future.isDone()); +future.complete("my value"); +System.out.println("Current state: " + future.isDone()); +``` + +The example prints the following lines: + + Current state: false + Got value: my value + Current state: true + +You can find the full reference for the `CompletionStage` type in the +[Java 8 API +documentation](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletionStage.html). + +### Creating futures using Lettuce + +Lettuce futures can be used for initial and chaining operations. When +using Lettuce futures, you will notice the non-blocking behavior. This +is because all I/O and command processing are handled asynchronously +using the netty EventLoop. The Lettuce `RedisFuture` extends a +`CompletionStage` so all methods of the base type are available. + +Lettuce exposes its futures on the Standalone, Sentinel, +Publish/Subscribe and Cluster APIs. + +Connecting to Redis is insanely simple: + +``` java +RedisClient client = RedisClient.create("redis://localhost"); +RedisAsyncCommands commands = client.connect().async(); +``` + +In the next step, obtaining a value from a key requires the `GET` +operation: + +``` java +RedisFuture future = commands.get("key"); +``` + +### Consuming futures + +The first thing you want to do when working with futures is to consume +them. Consuming a futures means obtaining the value. Here is an example +that blocks the calling thread and prints the value: + +``` java +RedisFuture future = commands.get("key"); +String value = future.get(); +System.out.println(value); +``` + +Invocations to the `get()` method (pull-style) block the calling thread +at least until the value is computed but in the worst case indefinitely. +Using timeouts is always a good idea to not exhaust your threads. + +``` java +try { + RedisFuture future = commands.get("key"); + String value = future.get(1, TimeUnit.MINUTES); + System.out.println(value); +} catch (Exception e) { + e.printStackTrace(); +} +``` + +The example will wait at most 1 minute for the future to complete. If +the timeout exceeds, a `TimeoutException` is thrown to signal the +timeout. + +Futures can also be consumed in a push style, meaning when the +`RedisFuture` is completed, a follow-up action is triggered: + +``` java +RedisFuture future = commands.get("key"); + +future.thenAccept(new Consumer() { + @Override + public void accept(String value) { + System.out.println(value); + } +}); +``` + +Alternatively, written in Java 8 lambdas: + +``` java +RedisFuture future = commands.get("key"); + +future.thenAccept(System.out::println); +``` + +Lettuce futures are completed on the netty EventLoop. Consuming and +chaining futures on the default thread is always a good idea except for +one case: Blocking/long-running operations. As a rule of thumb, never +block the event loop. If you need to chain futures using blocking calls, +use the `thenAcceptAsync()`/`thenRunAsync()` methods to fork the +processing to another thread. The `…​async()` methods need a threading +infrastructure for execution, by default the `ForkJoinPool.commonPool()` +is used. The `ForkJoinPool` is statically constructed and does not grow +with increasing load. 
Using default `Executor`s is almost always the +better idea. + +``` java +Executor sharedExecutor = ... +RedisFuture future = commands.get("key"); + +future.thenAcceptAsync(new Consumer() { + @Override + public void accept(String value) { + System.out.println(value); + } +}, sharedExecutor); +``` + +### Synchronizing futures + +A key point when using futures is the synchronization. Futures are +usually used to: + +1. Trigger multiple invocations without the urge to wait for the + predecessors (Batching) + +2. Invoking a command without awaiting the result at all (Fire&Forget) + +3. Invoking a command and perform other computing in the meantime + (Decoupling) + +4. Adding concurrency to certain computational efforts (Concurrency) + +There are several ways how to wait or get notified in case a future +completes. Certain synchronization techniques apply to some motivations +why you want to use futures. + +#### Blocking synchronization + +Blocking synchronization comes handy if you perform batching/add +concurrency to certain parts of your system. An example to batching can +be setting/retrieving multiple values and awaiting the results before a +certain point within processing. + +``` java +List> futures = new ArrayList>(); + +for (int i = 0; i < 10; i++) { + futures.add(commands.set("key-" + i, "value-" + i)); +} + +LettuceFutures.awaitAll(1, TimeUnit.MINUTES, futures.toArray(new RedisFuture[futures.size()])); +``` + +The code from above does not wait until a certain command completes +before it issues another one. The synchronization is done after all +commands are issued. The example code can easily be turned into a +Fire&Forget pattern by omitting the call to `LettuceFutures.awaitAll()`. + +A single future execution can be also awaited, meaning an opt-in to wait +for a certain time but without raising an exception: + +``` java +RedisFuture future = commands.get("key"); + +if(!future.await(1, TimeUnit.MINUTES)) { + System.out.println("Could not complete within the timeout"); +} +``` + +Calling `await()` is friendlier to call since it throws only an +`InterruptedException` in case the blocked thread is interrupted. You +are already familiar with the `get()` method for synchronization, so we +will not bother you with this one. + +At last, there is another way to synchronize futures in a blocking way. +The major caveat is that you will become responsible to handle thread +interruptions. If you do not handle that aspect, you will not be able to +shut down your system properly if it is in a running state. + +``` java +RedisFuture future = commands.get("key"); +while (!future.isDone()) { + // do something ... +} +``` + +While the `isDone()` method does not aim primarily for synchronization +use, it might come handy to perform other computational efforts while +the command is executed. + +#### Chaining synchronization + +Futures can be synchronized/chained in a non-blocking style to improve +thread utilization. Chaining works very well in systems relying on +event-driven characteristics. Future chaining builds up a chain of one +or more futures that are executed serially, and every chain member +handles a part in the computation. The `CompletionStage` API offers +various methods to chain and transform futures. 
A simple transformation +of the value can be done using the `thenApply()` method: + +``` java +future.thenApply(new Function() { + @Override + public Integer apply(String value) { + return value.length(); + } +}).thenAccept(new Consumer() { + @Override + public void accept(Integer integer) { + System.out.println("Got value: " + integer); + } +}); +``` + +Alternatively, written in Java 8 lambdas: + +``` java +future.thenApply(String::length) + .thenAccept(integer -> System.out.println("Got value: " + integer)); +``` + +The `thenApply()` method accepts a function that transforms the value +into another one. The final `thenAccept()` method consumes the value for +final processing. + +You have already seen the `thenRun()` method from previous examples. The +`thenRun()` method can be used to handle future completions in case the +data is not crucial to your flow: + +``` java +future.thenRun(new Runnable() { + @Override + public void run() { + System.out.println("Finished the future."); + } +}); +``` + +Keep in mind to execute the `Runnable` on a custom `Executor` if you are +doing blocking calls within the `Runnable`. + +Another chaining method worth mentioning is the either-or chaining. A +couple of `…​Either()` methods are available on a `CompletionStage`, +see the [Java 8 API +docs](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletionStage.html) +for the full reference. The either-or pattern consumes the value from +the first future that is completed. A good example might be two services +returning the same data, for instance, a Master-Replica scenario, but +you want to return the data as fast as possible: + +``` java +RedisStringAsyncCommands master = masterClient.connect().async(); +RedisStringAsyncCommands replica = replicaClient.connect().async(); + +RedisFuture future = master.get("key"); +future.acceptEither(replica.get("key"), new Consumer() { + @Override + public void accept(String value) { + System.out.println("Got value: " + value); + } +}); +``` + +### Error handling + +Error handling is an indispensable component of every real world +application and should to be considered from the beginning on. Futures +provide some mechanisms to deal with errors. + +In general, you want to react in the following ways: + +- Return a default value instead + +- Use a backup future + +- Retry the future + +`RedisFuture`s transport exceptions if any occurred. Calls to the +`get()` method throw the occurred exception wrapped within an +`ExecutionException` (this is different to Lettuce 3.x). You can find +more details within the Javadoc on +[CompletionStage](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletionStage.html). 
+ +The following code falls back to a default value after it runs to an +exception by using the `handle()` method: + +``` java +future.handle(new BiFunction() { + @Override + public Integer apply(String value, Throwable throwable) { + if(throwable != null) { + return "default value"; + } + return value; + } +}).thenAccept(new Consumer() { + @Override + public void accept(String value) { + System.out.println("Got value: " + value); + } +}); +``` + +More sophisticated code could decide on behalf of the throwable type +that value to return, as the shortcut example using the +`exceptionally()` method: + +``` java +future.exceptionally(new Function() { + @Override + public String apply(Throwable throwable) { + if (throwable instanceof IllegalStateException) { + return "default value"; + } + + return "other default value"; + } +}); +``` + +Retrying futures and recovery using futures is not part of the Java 8 +`CompleteableFuture`. See the [Reactive API](reactive-api.md) for +comfortable ways handling with exceptions. + +### Examples + +``` java +RedisAsyncCommands async = client.connect().async(); +RedisFuture set = async.set("key", "value"); +RedisFuture get = async.get("key"); + +set.get() == "OK" +get.get() == "value" +``` + +``` java +RedisAsyncCommands async = client.connect().async(); +RedisFuture set = async.set("key", "value"); +RedisFuture get = async.get("key"); + +set.await(1, SECONDS) == true +set.get() == "OK" +get.get(1, TimeUnit.MINUTES) == "value" +``` + +``` java +RedisStringAsyncCommands async = client.connect().async(); +RedisFuture set = async.set("key", "value"); + +Runnable listener = new Runnable() { + @Override + public void run() { + ...; + } +}; + +set.thenRun(listener); +``` \ No newline at end of file diff --git a/docs/user-guide/connecting-redis.md b/docs/user-guide/connecting-redis.md new file mode 100644 index 0000000000..dac3102b88 --- /dev/null +++ b/docs/user-guide/connecting-redis.md @@ -0,0 +1,239 @@ +# Connecting Redis + +Connections to a Redis Standalone, Sentinel, or Cluster require a +specification of the connection details. The unified form is `RedisURI`. +You can provide the database, password and timeouts within the +`RedisURI`. You have following possibilities to create a `RedisURI`: + +1. Use an URI: + + ``` java + RedisURI.create("redis://localhost/"); + ``` + +2. Use the Builder + + ``` java + RedisURI.Builder.redis("localhost", 6379).auth("password").database(1).build(); + ``` + +3. 
Set directly the values in `RedisURI` + + ``` java + new RedisURI("localhost", 6379, 60, TimeUnit.SECONDS); + ``` + +## URI syntax + +**Redis Standalone** + + redis :// [[username :] password@] host [:port][/database] + [?[timeout=timeout[d|h|m|s|ms|us|ns]] [&clientName=clientName] + [&libraryName=libraryName] [&libraryVersion=libraryVersion] ] + +**Redis Standalone (SSL)** + + rediss :// [[username :] password@] host [: port][/database] + [?[timeout=timeout[d|h|m|s|ms|us|ns]] [&clientName=clientName] + [&libraryName=libraryName] [&libraryVersion=libraryVersion] ] + +**Redis Standalone (Unix Domain Sockets)** + + redis-socket :// [[username :] password@]path + [?[timeout=timeout[d|h|m|s|ms|us|ns]] [&database=database] + [&clientName=clientName] [&libraryName=libraryName] + [&libraryVersion=libraryVersion] ] + +**Redis Sentinel** + + redis-sentinel :// [[username :] password@] host1[:port1] [, host2[:port2]] [, hostN[:portN]] [/database] + [?[timeout=timeout[d|h|m|s|ms|us|ns]] [&sentinelMasterId=sentinelMasterId] + [&clientName=clientName] [&libraryName=libraryName] + [&libraryVersion=libraryVersion] ] + +**Schemes** + +- `redis` Redis Standalone + +- `rediss` Redis Standalone SSL + +- `redis-socket` Redis Standalone Unix Domain Socket + +- `redis-sentinel` Redis Sentinel + +**Timeout units** + +- `d` Days + +- `h` Hours + +- `m` Minutes + +- `s` Seconds + +- `ms` Milliseconds + +- `us` Microseconds + +- `ns` Nanoseconds + +Hint: The database parameter within the query part has higher precedence +than the database in the path. + +RedisURI supports Redis Standalone, Redis Sentinel and Redis Cluster +with plain, SSL, TLS and unix domain socket connections. + +Hint: The database parameter within the query part has higher precedence +than the database in the path. RedisURI supports Redis Standalone, Redis +Sentinel and Redis Cluster with plain, SSL, TLS and unix domain socket +connections. + +## Authentication + +Redis URIs may contain authentication details that effectively lead to +usernames with passwords, password-only, or no authentication. +Connections are authenticated by using the information provided through +`RedisCredentials`. Credentials are obtained at connection time from +`RedisCredentialsProvider`. When configuring username/password on the +URI statically, then a `StaticCredentialsProvider` holds the configured +information. + +**Notes** + +- When using Redis Sentinel, the password from the URI applies to the + data nodes only. Sentinel authentication must be configured for each + sentinel node. + +- Usernames are supported as of Redis 6. + +- Library name and library version are automatically set on Redis 7.2 or + greater. + +## Basic Usage + +``` java +RedisClient client = RedisClient.create("redis://localhost"); + +StatefulRedisConnection connection = client.connect(); + +RedisCommands commands = connection.sync(); + +String value = commands.get("foo"); + +... + +connection.close(); + +client.shutdown(); +``` + +- Create the `RedisClient` instance and provide a Redis URI pointing to + localhost, Port 6379 (default port). + +- Open a Redis Standalone connection. The endpoint is used from the + initialized `RedisClient` + +- Obtain the command API for synchronous execution. Lettuce supports + asynchronous and reactive execution models, too. + +- Issue a `GET` command to get the key `foo`. + +- Close the connection when you’re done. This happens usually at the + very end of your application. Connections are designed to be + long-lived. 
+ +- Shut down the client instance to free threads and resources. This + happens usually at the very end of your application. + +Each Redis command is implemented by one or more methods with names +identical to the lowercase Redis command name. Complex commands with +multiple modifiers that change the result type include the CamelCased +modifier as part of the command name, e.g. `zrangebyscore` and +`zrangebyscoreWithScores`. + +Redis connections are designed to be long-lived and thread-safe, and if +the connection is lost will reconnect until `close()` is called. Pending +commands that have not timed out will be (re)sent after successful +reconnection. + +All connections inherit a default timeout from their RedisClient and +and will throw a `RedisException` when non-blocking commands fail to +return a result before the timeout expires. The timeout defaults to 60 +seconds and may be changed in the RedisClient or for each connection. +Synchronous methods will throw a `RedisCommandExecutionException` in +case Redis responds with an error. Asynchronous connections do not throw +exceptions when Redis responds with an error. + +### RedisURI + +The RedisURI contains the host/port and can carry +authentication/database details. On a successful connect you get +authenticated, and the database is selected afterward. This applies +also after re-establishing a connection after a connection loss. + +A Redis URI can also be created from an URI string. Supported formats +are: + +- `redis://[password@]host[:port][/databaseNumber]` Plaintext Redis + connection + +- `rediss://[password@]host[:port][/databaseNumber]` [SSL + Connections](../advanced-usage.md#ssl-connections) Redis connection + +- `redis-sentinel://[password@]host[:port][,host2[:port2]][/databaseNumber]#sentinelMasterId` + for using Redis Sentinel + +- `redis-socket:///path/to/socket` [Unix Domain + Sockets](../advanced-usage.md#unix-domain-sockets) connection to Redis + +### Exceptions + +In the case of an exception/error response from Redis, you’ll receive a +`RedisException` containing +the error message. `RedisException` is a `RuntimeException`. + +### Examples + +``` java +RedisClient client = RedisClient.create(RedisURI.create("localhost", 6379)); +client.setDefaultTimeout(20, TimeUnit.SECONDS); + +// … + +client.shutdown(); +``` + +``` java +RedisURI redisUri = RedisURI.Builder.redis("localhost") + .withPassword("authentication") + .withDatabase(2) + .build(); +RedisClient client = RedisClient.create(redisUri); + +// … + +client.shutdown(); +``` + +``` java +RedisURI redisUri = RedisURI.Builder.redis("localhost") + .withSsl(true) + .withPassword("authentication") + .withDatabase(2) + .build(); +RedisClient client = RedisClient.create(redisUri); + +// … + +client.shutdown(); +``` + +``` java +RedisURI redisUri = RedisURI.create("redis://authentication@localhost/2"); +RedisClient client = RedisClient.create(redisUri); + +// … + +client.shutdown(); +``` + diff --git a/docs/user-guide/kotlin-api.md b/docs/user-guide/kotlin-api.md new file mode 100644 index 0000000000..cb39d85c49 --- /dev/null +++ b/docs/user-guide/kotlin-api.md @@ -0,0 +1,90 @@ +## Kotlin API + +Kotlin Coroutines are using Kotlin lightweight threads allowing to write +non-blocking code in an imperative way. On language side, suspending +functions provides an abstraction for asynchronous operations while on +library side kotlinx.coroutines provides functions like `async { }` and +types like `Flow`. 
+ +Lettuce ships with extensions to provide support for idiomatic Kotlin +use. + +### Dependencies + +Coroutines support is available when `kotlinx-coroutines-core` and +`kotlinx-coroutines-reactive` dependencies are on the classpath: + +``` xml + + org.jetbrains.kotlinx + kotlinx-coroutines-core + ${kotlinx-coroutines.version} + + + org.jetbrains.kotlinx + kotlinx-coroutines-reactive + ${kotlinx-coroutines.version} + +``` + +### How does Reactive translate to Coroutines? + +`Flow` is an equivalent to `Flux` in Coroutines world, suitable for hot +or cold streams, finite or infinite streams, with the following main +differences: + +- `Flow` is push-based while `Flux` is a push-pull hybrid + +- Backpressure is implemented via suspending functions + +- `Flow` has only a single suspending collect method and operators are + implemented as extensions + +- Operators are easy to implement thanks to Coroutines + +- Extensions allow to add custom operators to Flow + +- Collect operations are suspending functions + +- `map` operator supports asynchronous operations (no need for + `flatMap`) since it takes a suspending function parameter + +### Coroutines API based on reactive operations + +Example for retrieving commands and using it: + +``` kotlin +val api: RedisCoroutinesCommands = connection.coroutines() + +val foo1 = api.set("foo", "bar") +val foo2 = api.keys("fo*") +``` + +!!! NOTE + Coroutine Extensions are experimental and require opt-in using + `@ExperimentalLettuceCoroutinesApi`. The API ships with a reduced + feature set. Deprecated methods and `StreamingChannel` are left out + intentionally. Expect evolution towards a `Flow`-based API to consume + large Redis responses. + +### Extensions for existing APIs + +#### Transactions DSL + +Example for the synchronous API: + +``` kotlin +val result: TransactionResult = connection.sync().multi { + set("foo", "bar") + get("foo") +} +``` + +Example for async with coroutines: + +``` kotlin +val result: TransactionResult = connection.async().multi { + set("foo", "bar") + get("foo") +} +``` \ No newline at end of file diff --git a/docs/user-guide/lua-scripting.md b/docs/user-guide/lua-scripting.md new file mode 100644 index 0000000000..b31970e500 --- /dev/null +++ b/docs/user-guide/lua-scripting.md @@ -0,0 +1,42 @@ +### Lua Scripting + +[Lua](https://redis.io/topics/lua-api) is a powerful scripting language +that is supported at the core of Redis. Lua scripts can be invoked +dynamically by providing the script contents to Redis or used as stored +procedure by loading the script into Redis and using its digest to +invoke it. + +
+ +``` java +String helloWorld = redis.eval("return ARGV[1]", STATUS, new String[0], "Hello World"); +``` + +
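+
+A slightly extended sketch (assuming, as above, that `redis` is a synchronous
+command API and that the key and member values are only illustrative) returns
+a number instead of a status reply; the result type and output parser are
+selected as described below:
+
+``` java
+// the script returns an integer reply, so INTEGER maps it to a Java Long
+Long added = redis.eval("return redis.call('sadd', KEYS[1], ARGV[1])",
+        ScriptOutputType.INTEGER, new String[] { "my-set" }, "member");
+```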
+ +Using Lua scripts is straightforward. Consuming results in Java requires +additional details to consume the result through a matching type. As we +do not know what your script will return, the API uses call-site +generics for you to specify the result type. Additionally, you must +provide a `ScriptOutputType` hint to `EVAL` so that the driver uses the +appropriate output parser. See [Output Formats](redis-functions.md#output-formats) for +further details. + +Lua scripts can be stored on the server for repeated execution. +Dynamically-generated scripts are an anti-pattern as each script is +stored in Redis' script cache. Generating scripts during the application +runtime may, and probably will, exhaust the host’s memory resources for +caching them. Instead, scripts should be as generic as possible and +provide customized execution via their arguments. You can register a +script through `SCRIPT LOAD` and use its SHA digest to invoke it later: + +
+
+``` java
+String digest = redis.scriptLoad("return ARGV[1]");
+
+// later
+String helloWorld = redis.evalsha(digest, STATUS, new String[0], "Hello World");
+```
+
\ No newline at end of file diff --git a/docs/user-guide/pubsub.md b/docs/user-guide/pubsub.md new file mode 100644 index 0000000000..0680ad7eb2 --- /dev/null +++ b/docs/user-guide/pubsub.md @@ -0,0 +1,118 @@ +## Publish/Subscribe + +Lettuce provides support for Publish/Subscribe on Redis Standalone and +Redis Cluster connections. The connection is notified on +message/subscribed/unsubscribed events after subscribing to channels or +patterns. [Synchronous](connecting-redis.md#basic-usage), [asynchronous](async-api.md) +and [reactive](reactive-api.md) API’s are provided to interact with Redis +Publish/Subscribe features. + +### Subscribing + +A connection can notify multiple listeners that implement +`RedisPubSubListener` (Lettuce provides a `RedisPubSubAdapter` for +convenience). All listener registrations are kept within the +`StatefulRedisPubSubConnection`/`StatefulRedisClusterConnection`. + +``` java +StatefulRedisPubSubConnection connection = client.connectPubSub() +connection.addListener(new RedisPubSubListener() { ... }) + +RedisPubSubCommands sync = connection.sync(); +sync.subscribe("channel"); + +// application flow continues +``` + +!!! NOTE + Don’t issue blocking calls (includes synchronous API calls to Lettuce) + from inside of Pub/Sub callbacks as this would block the EventLoop. If + you need to fetch data from Redis from inside a callback, please use + the asynchronous API. + +``` java +StatefulRedisPubSubConnection connection = client.connectPubSub() +connection.addListener(new RedisPubSubListener() { ... }) + +RedisPubSubAsyncCommands async = connection.async(); +RedisFuture future = async.subscribe("channel"); + +// application flow continues +``` + +### Reactive API + +The reactive API provides hot `Observable`s to listen on +`ChannelMessage`s and `PatternMessage`s. The `Observable`s receive all +inbound messages. You can do filtering using the observable chain if you +need to filter out the interesting ones, The `Observable` stops +triggering events when the subscriber unsubscribes from it. + +``` java +StatefulRedisPubSubConnection connection = client.connectPubSub() + +RedisPubSubReactiveCommands reactive = connection.reactive(); +reactive.subscribe("channel").subscribe(); + +reactive.observeChannels().doOnNext(patternMessage -> {...}).subscribe() + +// application flow continues +``` + +### Redis Cluster + +Redis Cluster support Publish/Subscribe but requires some attention in +general. User-space Pub/Sub messages (Calling `PUBLISH`) are broadcasted +across the whole cluster regardless of subscriptions to particular +channels/patterns. This behavior allows connecting to an arbitrary +cluster node and registering a subscription. The client isn’t required +to connect to the node where messages were published. + +A cluster-aware Pub/Sub connection is provided by +`RedisClusterClient.connectPubSub()` allowing to listen for cluster +reconfiguration and reconnect if the topology changes. + +``` java +StatefulRedisClusterPubSubConnection connection = clusterClient.connectPubSub() +connection.addListener(new RedisPubSubListener() { ... }) + +RedisPubSubCommands sync = connection.sync(); +sync.subscribe("channel"); +``` + +Redis Cluster also makes a distinction between user-space and key-space +messages. Key-space notifications (Pub/Sub messages for key-activity) +stay node-local and are not broadcasted across the Redis Cluster. A +notification about, e.g. an expiring key, stays local to the node on +which the key expired. 
+ +Clients that are interested in keyspace notifications must subscribe to +the appropriate node (or nodes) to receive these notifications. You can +either use `RedisClient.connectPubSub()` to establish Pub/Sub +connections to the individual nodes or use `RedisClusterClient`'s +message propagation and NodeSelection API to get a managed set of +connections. + +``` java +StatefulRedisClusterPubSubConnection connection = clusterClient.connectPubSub() +connection.addListener(new RedisClusterPubSubListener() { ... }) +connection.setNodeMessagePropagation(true); + +RedisPubSubCommands sync = connection.sync(); +sync.masters().commands().subscribe("__keyspace@0__:*"); +``` + +There are two things to pay special attention to: + +1. Replication: Keys replicated to replica nodes, especially + considering expiry, generate keyspace events on all nodes holding + the key. If a key expires and it is replicated, it will expire on + the master and all replicas. Each Redis server will emit keyspace + events. Subscribing to non-master nodes, therefore, will let your + application see multiple events of the same type for the same key + because of Redis distributed nature. + +2. Topology Changes: Subscriptions are issued either by using the + NodeSelection API or by calling `subscribe(…)` on the individual + cluster node connections. Subscription registrations are not + propagated to new nodes that are added on a topology change. \ No newline at end of file diff --git a/docs/user-guide/reactive-api.md b/docs/user-guide/reactive-api.md new file mode 100644 index 0000000000..3af432a258 --- /dev/null +++ b/docs/user-guide/reactive-api.md @@ -0,0 +1,792 @@ +## Reactive API + +This guide helps you to understand the Reactive Stream pattern and aims +to give you a general understanding of how to build reactive +applications. + +### Motivation + +Asynchronous and reactive methodologies allow you to utilize better +system resources, instead of wasting threads waiting for network or disk +I/O. Threads can be fully utilized to perform other work instead. + +A broad range of technologies exists to facilitate this style of +programming, ranging from the very limited and less usable +`java.util.concurrent.Future` to complete libraries and runtimes like +Akka. [Project Reactor](http://projectreactor.io/), has a very rich set +of operators to compose asynchronous workflows, it has no further +dependencies to other frameworks and supports the very mature Reactive +Streams model. + +### Understanding Reactive Streams + +Reactive Streams is an initiative to provide a standard for asynchronous +stream processing with non-blocking back pressure. This encompasses +efforts aimed at runtime environments (JVM and JavaScript) as well as +network protocols. + +The scope of Reactive Streams is to find a minimal set of interfaces, +methods, and protocols that will describe the necessary operations and +entities to achieve the goal—asynchronous streams of data with +non-blocking back pressure. + +It is an interoperability standard between multiple reactive composition +libraries that allow interaction without the need of bridging between +libraries in application code. + +The integration of Reactive Streams is usually accompanied with the use +of a composition library that hides the complexity of bare +`Publisher` and `Subscriber` types behind an easy-to-use API. +Lettuce uses [Project Reactor](http://projectreactor.io/) that exposes +its publishers as `Mono` and `Flux`. + +For more information about Reactive Streams see +. 
+ +### Understanding Publishers + +Asynchronous processing decouples I/O or computation from the thread +that invoked the operation. A handle to the result is given back, +usually a `java.util.concurrent.Future` or similar, that returns either +a single object, a collection or an exception. Retrieving a result, that +was fetched asynchronously is usually not the end of processing one +flow. Once data is obtained, further requests can be issued, either +always or conditionally. With Java 8 or the Promise pattern, linear +chaining of futures can be set up so that subsequent asynchronous +requests are issued. Once conditional processing is needed, the +asynchronous flow has to be interrupted and synchronized. While this +approach is possible, it does not fully utilize the advantage of +asynchronous processing. + +In contrast to the preceding examples, `Publisher` objects answer the +multiplicity and asynchronous questions in a different fashion: By +inverting the `Pull` pattern into a `Push` pattern. + +**A Publisher is the asynchronous/push “dual” to the synchronous/pull +Iterable** + +| event | Iterable (pull) | Publisher (push) | +|----------------|------------------|--------------------| +| retrieve data | T next() | onNext(T) | +| discover error | throws Exception | onError(Exception) | +| complete | !hasNext() | onCompleted() | + +An `Publisher` supports emission sequences of values or even infinite +streams, not just the emission of single scalar values (as Futures do). +You will very much appreciate this fact once you start to work on +streams instead of single values. Project Reactor uses two types in its +vocabulary: `Mono` and `Flux` that are both publishers. + +A `Mono` can emit `0` to `1` events while a `Flux` can emit `0` to `N` +events. + +A `Publisher` is not biased toward some particular source of +concurrency or asynchronicity and how the underlying code is executed - +synchronous or asynchronous, running within a `ThreadPool`. As a +consumer of a `Publisher`, you leave the actual implementation to the +supplier, who can change it later on without you having to adapt your +code. + +The last key point of a `Publisher` is that the underlying processing +is not started at the time the `Publisher` is obtained, rather its +started at the moment an observer subscribes or signals demand to the +`Publisher`. This is a crucial difference to a +`java.util.concurrent.Future`, which is started somewhere at the time it +is created/obtained. So if no observer ever subscribes to the +`Publisher`, nothing ever will happen. + +### A word on the lettuce Reactive API + +All commands return a `Flux`, `Mono` or `Mono` to which a +`Subscriber` can subscribe to. That subscriber reacts to whatever item +or sequence of items the `Publisher` emits. This pattern facilitates +concurrent operations because it does not need to block while waiting +for the `Publisher` to emit objects. Instead, it creates a sentry in +the form of a `Subscriber` that stands ready to react appropriately at +whatever future time the `Publisher` does so. + +### Consuming `Publisher` + +The first thing you want to do when working with publishers is to +consume them. Consuming a publisher means subscribing to it. 
Here is an +example that subscribes and prints out all the items emitted: + +``` java +Flux.just("Ben", "Michael", "Mark").subscribe(new Subscriber() { + public void onSubscribe(Subscription s) { + s.request(3); + } + + public void onNext(String s) { + System.out.println("Hello " + s + "!"); + } + + public void onError(Throwable t) { + + } + + public void onComplete() { + System.out.println("Completed"); + } +}); +``` + +The example prints the following lines: + + Hello Ben + Hello Michael + Hello Mark + Completed + +You can see that the Subscriber (or Observer) gets notified of every +event and also receives the completed event. A `Publisher` emits +items until either an exception is raised or the `Publisher` finishes +the emission calling `onCompleted`. No further elements are emitted +after that time. + +A call to the `subscribe` registers a `Subscription` that allows to +cancel and, therefore, do not receive further events. Publishers can +interoperate with the un-subscription and free resources once a +subscriber unsubscribed from the `Publisher`. + +Implementing a `Subscriber` requires implementing numerous methods, +so lets rewrite the code to a simpler form: + +``` java +Flux.just("Ben", "Michael", "Mark").doOnNext(new Consumer() { + public void accept(String s) { + System.out.println("Hello " + s + "!"); + } +}).doOnComplete(new Runnable() { + public void run() { + System.out.println("Completed"); + } +}).subscribe(); +``` + +alternatively, even simpler by using Java 8 Lambdas: + +``` java +Flux.just("Ben", "Michael", "Mark") + .doOnNext(s -> System.out.println("Hello " + s + "!")) + .doOnComplete(() -> System.out.println("Completed")) + .subscribe(); +``` + +You can control the elements that are processed by your `Subscriber` +using operators. The `take()` operator limits the number of emitted +items if you are interested in the first `N` elements only. + +``` java +Flux.just("Ben", "Michael", "Mark") // + .doOnNext(s -> System.out.println("Hello " + s + "!")) + .doOnComplete(() -> System.out.println("Completed")) + .take(2) + .subscribe(); +``` + +The example prints the following lines: + + Hello Ben + Hello Michael + Completed + +Note that the `take` operator implicitly cancels its subscription from +the `Publisher` once the expected count of elements was emitted. + +A subscription to a `Publisher` can be done either by another `Flux` +or a `Subscriber`. Unless you are implementing a custom `Publisher`, +always use `Subscriber`. The used subscriber `Consumer` from the example +above does not handle `Exception`s so once an `Exception` is thrown you +will see a stack trace like this: + + Exception in thread "main" reactor.core.Exceptions$BubblingException: java.lang.RuntimeException: Example exception + at reactor.core.Exceptions.bubble(Exceptions.java:96) + at reactor.core.publisher.Operators.onErrorDropped(Operators.java:296) + at reactor.core.publisher.LambdaSubscriber.onError(LambdaSubscriber.java:117) + ... + Caused by: java.lang.RuntimeException: Example exception + at demos.lambda$example3Lambda$4(demos.java:87) + at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:157) + ... 23 more + +It is always recommended to implement an error handler right from the +beginning. At a certain point, things can and will go wrong. 
+ +A fully implemented subscriber declares the `onCompleted` and `onError` +methods allowing you to react to these events: + +``` java +Flux.just("Ben", "Michael", "Mark").subscribe(new Subscriber() { + public void onSubscribe(Subscription s) { + s.request(3); + } + + public void onNext(String s) { + System.out.println("Hello " + s + "!"); + } + + public void onError(Throwable t) { + System.out.println("onError: " + t); + } + + public void onComplete() { + System.out.println("Completed"); + } +}); +``` + +### From push to pull + +The examples from above illustrated how publishers can be set up in a +not-opinionated style about blocking or non-blocking execution. A +`Flux` can be converted explicitly into an `Iterable` or +synchronized with `block()`. Avoid calling `block()` in your code as you +start expressing the nature of execution inside your code. Calling +`block()` removes all non-blocking advantages of the reactive chain to +your application. + +``` java +String last = Flux.just("Ben", "Michael", "Mark").last().block(); +System.out.println(last); +``` + +The example prints the following line: + + Mark + +A blocking call can be used to synchronize the publisher chain and find +back a way into the plain and well-known `Pull` pattern. + +``` java +List list = Flux.just("Ben", "Michael", "Mark").collectList().block(); +System.out.println(list); +``` + +The `toList` operator collects all emitted elements and passes the list +through the `BlockingPublisher`. + +The example prints the following line: + + [Ben, Michael, Mark] + +### Creating `Flux` and `Mono` using Lettuce + +There are many ways to establish publishers. You have already seen +`just()`, `take()` and `collectList()`. Refer to the [Project Reactor +documentation](http://projectreactor.io/docs/) for many more methods +that you can use to create `Flux` and `Mono`. + +Lettuce publishers can be used for initial and chaining operations. When +using Lettuce publishers, you will notice the non-blocking behavior. +This is because all I/O and command processing are handled +asynchronously using the netty EventLoop. + +Connecting to Redis is insanely simple: + +``` java +RedisClient client = RedisClient.create("redis://localhost"); +RedisStringReactiveCommands commands = client.connect().reactive(); +``` + +In the next step, obtaining a value from a key requires the `GET` +operation: + +``` java +commands.get("key").subscribe(new Consumer() { + + public void accept(String value) { + System.out.println(value); + } +}); +``` + +Alternatively, written in Java 8 lambdas: + +``` java +commands + .get("key") + .subscribe(value -> System.out.println(value)); +``` + +The execution is handled asynchronously, and the invoking Thread can be +used to processed in processing while the operation is completed on the +Netty EventLoop threads. Due to its decoupled nature, the calling method +can be left before the execution of the `Publisher` is finished. + +Lettuce publishers can be used within the context of chaining to load +multiple keys asynchronously: + +``` java +Flux.just("Ben", "Michael", "Mark"). + flatMap(key -> commands.get(key)). + subscribe(value -> System.out.println("Got value: " + value)); +``` + +### Hot and Cold Publishers + +There is a distinction between Publishers that was not covered yet: + +- A cold Publishers waits for a subscription until it emits values and + does this freshly for every subscriber. + +- A hot Publishers begins emitting values upfront and presents them to + every subscriber subsequently. 
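+
+A minimal sketch (assuming `commands` is a `RedisStringReactiveCommands<String, String>`
+obtained via `connect().reactive()`) illustrates the cold behavior that the
+next paragraph describes:
+
+``` java
+// declaring the publisher does not send the GET command yet
+Mono<String> value = commands.get("key");
+
+// the command is only sent once a subscriber subscribes and signals demand
+value.subscribe(v -> System.out.println("Got value: " + v));
+```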
+ +All Publishers returned from the Redis Standalone, Redis Cluster, and +Redis Sentinel API are cold, meaning that no I/O happens until they are +subscribed to. As such a subscriber is guaranteed to see the whole +sequence from the beginning. So just creating a Publisher will not cause +any network I/O thus creating and discarding Publishers is cheap. +Publishers created for a Publish/Subscribe emit `PatternMessage`s and +`ChannelMessage`s once they are subscribed to. Publishers guarantee +however to emit all items from the beginning until their end. While this +is true for Publish/Subscribe publishers, the nature of subscribing to a +Channel/Pattern allows missed messages due to its subscription nature +and less to the Hot/Cold distinction of publishers. + +### Transforming publishers + +Publishers can transform the emitted values in various ways. One of the +most basic transformations is `flatMap()` which you have seen from the +examples above that converts the incoming value into a different one. +Another one is `map()`. The difference between `map()` and `flatMap()` +is that `flatMap()` allows you to do those transformations with +`Publisher` calls. + +``` java +Flux.just("Ben", "Michael", "Mark") + .flatMap(commands::get) + .flatMap(value -> commands.rpush("result", value)) + .subscribe(); +``` + +The first `flatMap()` function is used to retrieve a value and the +second `flatMap()` function appends the value to a Redis list named +`result`. The `flatMap()` function returns a Publisher whereas the +normal map just returns ``. You will use `flatMap()` a lot when +dealing with flows like this, you’ll become good friends. + +An aggregation of values can be achieved using the `reduce()` +transformation. It applies a function to each value emitted by a +`Publisher`, sequentially and emits each successive value. We can use +it to aggregate values, to count the number of elements in multiple +Redis sets: + +``` java +Flux.just("Ben", "Michael", "Mark") + .flatMap(commands::scard) + .reduce((sum, current) -> sum + current) + .subscribe(result -> System.out.println("Number of elements in sets: " + result)); +``` + +The aggregation function of `reduce()` is applied on each emitted value, +so three times in the example above. If you want to get the last value, +which denotes the final result containing the number of elements in all +Redis sets, apply the `last()` transformation: + +``` java +Flux.just("Ben", "Michael", "Mark") + .flatMap(commands::scard) + .reduce((sum, current) -> sum + current) + .last() + .subscribe(result -> System.out.println("Number of elements in sets: " + result)); +``` + +Now let’s take a look at grouping emitted items. The following example +emits three items and groups them by the beginning character. + +``` java +Flux.just("Ben", "Michael", "Mark") + .groupBy(key -> key.substring(0, 1)) + .subscribe( + groupedFlux -> { + groupedFlux.collectList().subscribe(list -> { + System.out.println("First character: " + groupedFlux.key() + ", elements: " + list); + }); + } +); +``` + +The example prints the following lines: + + First character: B, elements: [Ben] + First character: M, elements: [Michael, Mark] + +### Absent values + +The presence and absence of values is an essential part of reactive +programming. Traditional approaches consider `null` as an absence of a +particular value. With Java 8, `Optional` was introduced to +encapsulate nullability. Reactive Streams prohibits the use of `null` +values. 
+ +In the scope of Redis, an absent value is an empty list, a non-existent +key or any other empty data structure. Reactive programming discourages +the use of `null` as value. The reactive answer to absent values is just +not emitting any value that is possible due the `0` to `N` nature of +`Publisher`. + +Suppose we have the keys `Ben` and `Michael` set each to the value +`value`. We query those and another, absent key with the following code: + +``` java +Flux.just("Ben", "Michael", "Mark") + .flatMap(commands::get) + .doOnNext(value -> System.out.println(value)) + .subscribe(); +``` + +The example prints the following lines: + + value + value + +The output is just two values. The `GET` to the absent key `Mark` does +not emit a value. + +The reactive API provides operators to work with empty results when you +require a value. You can use one of the following operators: + +- `defaultIfEmpty`: Emit a default value if the `Publisher` did not + emit any value at all + +- `switchIfEmpty`: Switch to a fallback `Publisher` to emit values + +- `Flux.hasElements`/`Flux.hasElement`: Emit a `Mono` that + contains a flag whether the original `Publisher` is empty + +- `next`/`last`/`elementAt`: Positional operators to retrieve the + first/last/`N`th element or emit a default value + +### Filtering items + +The values emitted by a `Publisher` can be filtered in case you need +only specific results. Filtering does not change the emitted values +itself. Filters affect how many items and at which point (and if at all) +they are emitted. + +``` java +Flux.just("Ben", "Michael", "Mark") + .filter(s -> s.startsWith("M")) + .flatMap(commands::get) + .subscribe(value -> System.out.println("Got value: " + value)); +``` + +The code will fetch only the keys `Michael` and `Mark` but not `Ben`. +The filter criteria are whether the `key` starts with a `M`. + +You already met the `last()` filter to retrieve the last value: + +``` java +Flux.just("Ben", "Michael", "Mark") + .last() + .subscribe(value -> System.out.println("Got value: " + value)); +``` + +the extended variant of `last()` allows you to take the last `N` values: + +``` java +Flux.just("Ben", "Michael", "Mark") + .takeLast(3) + .subscribe(value -> System.out.println("Got value: " + value)); +``` + +The example from above takes the last `2` values. + +The opposite to `next()` is the `first()` filter that is used to +retrieve the next value: + +``` java +Flux.just("Ben", "Michael", "Mark") + .next() + .subscribe(value -> System.out.println("Got value: " + value)); +``` + +### Error handling + +Error handling is an indispensable component of every real world +application and should to be considered from the beginning on. Project +Reactor provides several mechanisms to deal with errors. + +In general, you want to react in the following ways: + +- Return a default value instead + +- Use a backup publisher + +- Retry the Publisher (immediately or with delay) + +The following code falls back to a default value after it throws an +exception at the first emitted item: + +``` java +Flux.just("Ben", "Michael", "Mark") + .doOnNext(value -> { + throw new IllegalStateException("Takes way too long"); + }) + .onErrorReturn("Default value") + .subscribe(); +``` + +You can use a backup `Publisher` which will be called if the first +one fails. 
+ +``` java +Flux.just("Ben", "Michael", "Mark") + .doOnNext(value -> { + throw new IllegalStateException("Takes way too long"); + }) + .switchOnError(commands.get("Default Key")) + .subscribe(); +``` + +It is possible to retry the publisher by re-subscribing. Re-subscribing +can be done as soon as possible, or with a wait interval, which is +preferred when external resources are involved. + +``` java +Flux.just("Ben", "Michael", "Mark") + .flatMap(commands::get) + .retry() + .subscribe(); +``` + +Use the following code if you want to retry with backoff: + +``` java +Flux.just("Ben", "Michael", "Mark") + .doOnNext(v -> { + if (new Random().nextInt(10) + 1 == 5) { + throw new RuntimeException("Boo!"); + } + }) + .doOnSubscribe(subscription -> + { + System.out.println(subscription); + }) + .retryWhen(throwableFlux -> Flux.range(1, 5) + .flatMap(i -> { + System.out.println(i); + return Flux.just(i) + .delay(Duration.of(i, ChronoUnit.SECONDS)); + })) + .blockLast(); +``` + +The attempts get passed into the `retryWhen()` method delayed with the +number of seconds to wait. The delay method is used to complete once its +timer is done. + +### Schedulers and threads + +Schedulers in Project Reactor are used to instruct multi-threading. Some +operators have variants that take a Scheduler as a parameter. These +instruct the operator to do some or all of its work on a particular +Scheduler. + +Project Reactor ships with a set of preconfigured Schedulers, which are +all accessible through the `Schedulers` class: + +- Schedulers.parallel(): Executes the computational work such as + event-loops and callback processing. + +- Schedulers.immediate(): Executes the work immediately in the current + thread + +- Schedulers.elastic(): Executes the I/O-bound work such as asynchronous + performance of blocking I/O, this scheduler is backed by a thread-pool + that will grow as needed + +- Schedulers.newSingle(): Executes the work on a new thread + +- Schedulers.fromExecutor(): Create a scheduler from a + `java.util.concurrent.Executor` + +- Schedulers.timer(): Create or reuse a hash-wheel based TimedScheduler + with a resolution of 50ms. + +Do not use the computational scheduler for I/O. + +Publishers can be executed by a scheduler in the following different +ways: + +- Using an operator that makes use of a scheduler + +- Explicitly by passing the Scheduler to such an operator + +- By using `subscribeOn(Scheduler)` + +- By using `publishOn(Scheduler)` + +Operators like `buffer`, `replay`, `skip`, `delay`, `parallel`, and so +forth use a Scheduler by default if not instructed otherwise. + +All of the listed operators allow you to pass in a custom scheduler if +needed. Sticking most of the time with the defaults is a good idea. + +If you want the subscribe chain to be executed on a specific scheduler, +you use the `subscribeOn()` operator. 
The code is executed on the main +thread without a scheduler set: + +``` java +Flux.just("Ben", "Michael", "Mark").flatMap(key -> { + System.out.println("Map 1: " + key + " (" + Thread.currentThread().getName() + ")"); + return Flux.just(key); + } +).flatMap(value -> { + System.out.println("Map 2: " + value + " (" + Thread.currentThread().getName() + ")"); + return Flux.just(value); + } +).subscribe(); +``` + +The example prints the following lines: + + Map 1: Ben (main) + Map 2: Ben (main) + Map 1: Michael (main) + Map 2: Michael (main) + Map 1: Mark (main) + Map 2: Mark (main) + +This example shows the `subscribeOn()` method added to the flow (it does +not matter where you add it): + +``` java +Flux.just("Ben", "Michael", "Mark").flatMap(key -> { + System.out.println("Map 1: " + key + " (" + Thread.currentThread().getName() + ")"); + return Flux.just(key); + } +).flatMap(value -> { + System.out.println("Map 2: " + value + " (" + Thread.currentThread().getName() + ")"); + return Flux.just(value); + } +).subscribeOn(Schedulers.parallel()).subscribe(); +``` + +The output of the example shows the effect of `subscribeOn()`. You can +see that the Publisher is executed on the same thread, but on the +computation thread pool: + + Map 1: Ben (parallel-1) + Map 2: Ben (parallel-1) + Map 1: Michael (parallel-1) + Map 2: Michael (parallel-1) + Map 1: Mark (parallel-1) + Map 2: Mark (parallel-1) + +If you apply the same code to Lettuce, you will notice a difference in +the threads on which the second `flatMap()` is executed: + +``` java +Flux.just("Ben", "Michael", "Mark").flatMap(key -> { + System.out.println("Map 1: " + key + " (" + Thread.currentThread().getName() + ")"); + return commands.set(key, key); +}).flatMap(value -> { + System.out.println("Map 2: " + value + " (" + Thread.currentThread().getName() + ")"); + return Flux.just(value); +}).subscribeOn(Schedulers.parallel()).subscribe(); +``` + +The example prints the following lines: + + Map 1: Ben (parallel-1) + Map 1: Michael (parallel-1) + Map 1: Mark (parallel-1) + Map 2: OK (lettuce-nioEventLoop-3-1) + Map 2: OK (lettuce-nioEventLoop-3-1) + Map 2: OK (lettuce-nioEventLoop-3-1) + +Two things differ from the standalone examples: + +1. The values are set rather concurrently than sequentially + +2. The second `flatMap()` transformation prints the netty EventLoop + thread name + +This is because Lettuce publishers are executed and completed on the +netty EventLoop threads by default. + +`publishOn` instructs an Publisher to call its observer’s `onNext`, +`onError`, and `onCompleted` methods on a particular Scheduler. Here, +the order matters: + +``` java +Flux.just("Ben", "Michael", "Mark").flatMap(key -> { + System.out.println("Map 1: " + key + " (" + Thread.currentThread().getName() + ")"); + return commands.set(key, key); +}).publishOn(Schedulers.parallel()).flatMap(value -> { + System.out.println("Map 2: " + value + " (" + Thread.currentThread().getName() + ")"); + return Flux.just(value); +}).subscribe(); +``` + +Everything before the `publishOn()` call is executed in main, everything +below in the scheduler: + + Map 1: Ben (main) + Map 1: Michael (main) + Map 1: Mark (main) + Map 2: OK (parallel-1) + Map 2: OK (parallel-1) + Map 2: OK (parallel-1) + +Schedulers allow direct scheduling of operations. Refer to the [Project +Reactor +documentation](https://projectreactor.io/core/docs/api/reactor/core/scheduler/Schedulers.html) +for further information. 
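+
+A minimal sketch of such direct scheduling, assuming only Project Reactor on
+the classpath:
+
+``` java
+// schedule a Runnable directly on the parallel Scheduler
+Scheduler scheduler = Schedulers.parallel();
+scheduler.schedule(() -> System.out.println("Running on " + Thread.currentThread().getName()));
+```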
+
+### Redis Transactions
+
+Lettuce provides a convenient way to use Redis Transactions in a
+reactive way. Commands that should be executed within a transaction can
+be executed after the `MULTI` command was executed. Functional chaining
+allows executing commands within a closure, and each command receives
+its appropriate response. A cumulative response is also returned with
+`TransactionResult` in response to `EXEC`.
+
+See [Transactions](transactions-multi.md#transactions-using-the-reactive-api) for
+further details.
+
+#### Other examples
+
+**Blocking example**
+
+``` java
+RedisStringReactiveCommands<String, String> reactive = client.connect().reactive();
+Mono<String> set = reactive.set("key", "value");
+set.block();
+```
+
+**Non-blocking example**
+
+``` java
+RedisStringReactiveCommands<String, String> reactive = client.connect().reactive();
+Mono<String> set = reactive.set("key", "value");
+set.subscribe();
+```
+
+**Functional chaining**
+
+``` java
+RedisReactiveCommands<String, String> commands = client.connect().reactive();
+Flux.just("Ben", "Michael", "Mark")
+    .flatMap(key -> commands.sadd("seen", key))
+    .flatMap(value -> commands.randomkey())
+    .flatMap(commands::type)
+    .doOnNext(System.out::println).subscribe();
+```
+
+**Redis Transaction**
+
+``` java
+RedisReactiveCommands<String, String> reactive = client.connect().reactive();
+
+reactive.multi().doOnSuccess(s -> {
+    reactive.set("key", "1").doOnNext(s1 -> System.out.println(s1)).subscribe();
+    reactive.incr("key").doOnNext(s1 -> System.out.println(s1)).subscribe();
+}).flatMap(s -> reactive.exec())
+        .doOnNext(transactionResults -> System.out.println(transactionResults.wasDiscarded()))
+        .subscribe();
+```
\ No newline at end of file
diff --git a/docs/user-guide/redis-functions.md b/docs/user-guide/redis-functions.md
new file mode 100644
index 0000000000..f317816731
--- /dev/null
+++ b/docs/user-guide/redis-functions.md
@@ -0,0 +1,114 @@
+## Redis Functions
+
+[Redis Functions](https://redis.io/topics/functions-intro) is an
+evolution of the scripting API to provide extensibility beyond Lua.
+Functions can leverage different engines and follow a model where a
+function library registers functionality to be invoked later with the
+`FCALL` command.
+
+
+``` java
+redis.functionLoad("#!lua name=mylib\nredis.register_function('knockknock', function() return 'Who\\'s there?' end)");
+
+String response = redis.fcall("knockknock", STATUS);
+```
+
+
+Using Functions is straightforward. Consuming the result in Java,
+however, requires a matching type. As we do not know what your function
+will return, the API uses call-site generics for you to specify the
+result type. Additionally, you must provide a `ScriptOutputType` hint to
+`FCALL` so that the driver uses the appropriate output parser. See
+[Output Formats](#output-formats) for further details.
+
+### Output Formats
+
+You can choose from one of the following:
+
+- `BOOLEAN`: Boolean output, expects a number `0` or `1` to be converted
+  to a boolean value.
+
+- `INTEGER`: 64-bit Integer output, represented as Java `Long`.
+
+- `MULTI`: List of flat arrays.
+
+- `STATUS`: Simple status value such as `OK`. The Redis response is
+  parsed as ASCII.
+
+- `VALUE`: Value return type decoded through `RedisCodec`.
+
+- `OBJECT`: RESP3-defined object output supporting all Redis response
+  structures.
+
+### Leveraging Scripting and Functions through Command Interfaces
+
+Using dynamic functionality without a documented response structure can
+impose quite some complexity on your application. If you consider using
+scripting or functions, then you can use [Command
+Interfaces](../redis-command-interfaces.md) to declare
+an interface along with methods that represent your scripting or
+function landscape. Declaring a method with input arguments and a
+response type not only makes it obvious how the script or function is
+supposed to be called, but also what the response structure looks like.
+
+Let’s take a look at a simple function call first:
+
+
+``` lua
+#!lua name=mylib
+
+local function my_hlastmodified(keys, args)
+    local hash = keys[1]
+    return redis.call('HGET', hash, '_last_modified_')
+end
+
+redis.register_function('my_hlastmodified', my_hlastmodified)
+```
+
+ +
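+
+Before the function can be invoked, the library has to be loaded once, as shown
+earlier for the `knockknock` example. A sketch, with `libraryCode` standing in as
+a placeholder for the library source above:
+
+``` java
+// libraryCode is a placeholder for the Lua library source shown above.
+redis.functionLoad(libraryCode);
+```
+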
+ +``` java +Long lastModified = redis.fcall("my_hlastmodified", INTEGER, "my_hash"); +``` + +
+
+This example calls the `my_hlastmodified` function, expecting a `Long`
+response and passing `my_hash` as input key. Calling a function from a
+single place in your code isn’t an issue on its own. The arrangement
+becomes problematic once the number of functions grows or you start
+calling the functions with different arguments from various places in
+your code. Without the function code, it becomes impossible to
+investigate how the response mechanics work or determine the argument
+semantics, as there is no single place to document the function
+behavior.
+
+Let’s apply the Command Interface pattern to see how the declaration
+and call sites change:
+
+
+``` java
+interface MyCustomCommands extends Commands {
+
+    /**
+     * Retrieve the last modified value from the hash key.
+     * @param hashKey the key of the hash.
+     * @return the last modified timestamp, can be {@code null}.
+     */
+    @Command("FCALL my_hlastmodified 1 :hashKey")
+    Long getLastModified(@Param("hashKey") String hashKey);
+
+}
+
+MyCustomCommands myCommands = …;
+Long lastModified = myCommands.getLastModified("my_hash");
+```
+
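+
+How such an interface is turned into an executable instance is covered in
+[Command Interfaces](../redis-command-interfaces.md). As a rough sketch (assuming
+an already established `StatefulRedisConnection<String, String>` named
+`connection`), the proxy could be obtained through `RedisCommandFactory`:
+
+``` java
+// Sketch: create the dynamic command proxy from an existing connection.
+RedisCommandFactory factory = new RedisCommandFactory(connection);
+MyCustomCommands myCommands = factory.getCommands(MyCustomCommands.class);
+
+Long lastModified = myCommands.getLastModified("my_hash");
+```
+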
+
+By declaring a command method, you create a place that allows for
+storing additional documentation. The method declaration makes clear
+what the function call expects and what you get in return.
\ No newline at end of file
diff --git a/docs/user-guide/transactions-multi.md b/docs/user-guide/transactions-multi.md
new file mode 100644
index 0000000000..8fbcb4d6ea
--- /dev/null
+++ b/docs/user-guide/transactions-multi.md
@@ -0,0 +1,168 @@
+## Transactions/Multi
+
+Transactions allow the execution of a group of commands in a single
+step. Transactions can be controlled using `WATCH`, `UNWATCH`, `EXEC`,
+`MULTI` and `DISCARD` commands. Synchronous, asynchronous, and reactive
+APIs allow the use of transactions.
+
+!!! note
+    Transactional use requires external synchronization when a single
+    connection is used by multiple threads/processes. This can be achieved
+    either by serializing transactions or by providing a dedicated
+    connection to each concurrent process. Lettuce itself does not
+    synchronize transactional/non-transactional invocations regardless of
+    the used API facade.
+
+Redis responds to commands invoked during a transaction with a `QUEUED`
+response. The response related to the execution of the command is
+received at the moment the `EXEC` command is processed, and the
+transaction is executed. The particular APIs behave in different ways:
+
+- Synchronous: Invocations to the commands return `null` while they are
+  invoked within a transaction. The `EXEC` command carries the response
+  of the particular commands.
+
+- Asynchronous: The futures receive their response at the moment the
+  `EXEC` command is processed. This happens while the `EXEC` response is
+  received.
+
+- Reactive: An `Observable` triggers `onNext`/`onCompleted` at the
+  moment the `EXEC` command is processed. This happens while the `EXEC`
+  response is received.
+
+As soon as you’re within a transaction, you won’t receive any responses
+on triggering the commands:
+
+``` java
+redis.multi() == "OK"
+redis.set(key, value) == null
+redis.exec() == list("OK")
+```
+
+You’ll receive the transactional response when calling `exec()` at the
+end of your transaction.
+
+``` java
+redis.multi() == "OK"
+redis.set(key1, value) == null
+redis.set(key2, value) == null
+redis.exec() == list("OK", "OK")
+```
+
+### Transactions using the asynchronous API
+
+Asynchronous use of Redis transactions is very similar to
+non-transactional use. The asynchronous API returns `RedisFuture`
+instances that eventually complete and are handles to a future
+result. Regular commands complete as soon as Redis sends a response.
+Transactional commands complete as soon as the `EXEC` result is
+received.
+
+Each command is completed individually with its own result so users of
+`RedisFuture` will see no difference between transactional and
+non-transactional `RedisFuture` completion. That said, transactional
+command results are available twice: once via the `RedisFuture` of the
+command and once through `List<Object>` (`TransactionResult` since
+Lettuce 5) of the `EXEC` command future.
+
+``` java
+RedisAsyncCommands<String, String> async = client.connect().async();
+
+RedisFuture<String> multi = async.multi();
+
+RedisFuture<String> set = async.set("key", "value");
+
+RedisFuture<TransactionResult> exec = async.exec();
+
+TransactionResult objects = exec.get();
+String setResult = set.get();
+
+objects.get(0) == setResult
+```
+
+### Transactions using the reactive API
+
+The reactive API can be used to execute multiple commands in a single
+step. The nature of the reactive API encourages nesting of commands. It
+is essential to understand the time at which an `Observable` emits a
+value when working with transactions. Redis responds with `QUEUED` to
+commands invoked during a transaction. The response related to the
+execution of the command is received at the moment the `EXEC` command is
+processed, and the transaction is executed. Subsequent calls in the
+processing chain are executed after the transactional end. The following
+code starts a transaction, executes two commands within the transaction
+and finally executes the transaction.
+
+``` java
+RedisReactiveCommands<String, String> reactive = client.connect().reactive();
+reactive.multi().subscribe(multiResponse -> {
+    reactive.set("key", "1").subscribe();
+    reactive.incr("key").subscribe();
+    reactive.exec().subscribe();
+});
+```
+
+### Transactions on clustered connections
+
+Clustered connections perform routing by default, which means that you
+cannot be sure on which host your command is executed. If you are
+working in a clustered environment, rather use a regular connection to a
+particular node; you are then bound to that node and know which hash
+slots it handles.
+
+### Examples
+
+**Multi with executing multiple commands**
+
+``` java
+redis.multi();
+
+redis.set("one", "1");
+redis.set("two", "2");
+redis.mget("one", "two");
+redis.llen(key);
+
+redis.exec(); // result: list("OK", "OK", list("1", "2"), 0L)
+```
+
+**Multi executing multiple asynchronous commands**
+
+``` java
+redis.multi();
+
+RedisFuture<String> set1 = redis.set("one", "1");
+RedisFuture<String> set2 = redis.set("two", "2");
+RedisFuture<List<KeyValue<String, String>>> mget = redis.mget("one", "two");
+RedisFuture<Long> llen = redis.llen(key);
+
+
+set1.thenAccept(value -> …); // OK
+set2.thenAccept(value -> …); // OK
+
+RedisFuture<TransactionResult> exec = redis.exec(); // result: list("OK", "OK", list("1", "2"), 0L)
+
+mget.get(); // list("1", "2")
+llen.thenAccept(value -> …); // 0L
+```
+
+**Using WATCH**
+
+``` java
+redis.watch(key);
+
+StatefulRedisConnection<String, String> connection2 = client.connect();
+connection2.sync().set(key, value + "X");
+connection2.close();
+
+redis.multi();
+redis.append(key, "foo");
+redis.exec(); // result is an empty list because of the changed key
+```
+
+## Scripting and Functions
+
+Redis functionality can be extended in many ways, of which [Lua
+Scripting](https://redis.io/topics/eval-intro) and
+[Functions](https://redis.io/topics/functions-intro) are two approaches
+that do not require specific prerequisites on the server. 
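+
+For a first impression, a minimal Lua scripting call through the synchronous API
+could look like the following sketch (assuming `redis` is a connected
+`RedisCommands<String, String>` instance); both topics are covered in detail on
+the Lua Scripting and Redis Functions pages.
+
+``` java
+// Sketch: run a small Lua script and read the SADD reply as a Long via the INTEGER output hint.
+Long added = redis.eval("return redis.call('SADD', KEYS[1], ARGV[1])",
+        ScriptOutputType.INTEGER, new String[] { "seen" }, "Ben");
+```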
+ diff --git a/mkdocs.yml b/mkdocs.yml new file mode 100644 index 0000000000..bfdf7f3990 --- /dev/null +++ b/mkdocs.yml @@ -0,0 +1,44 @@ +site_name: Lettuce Reference Guide +theme: + name: material + logo: static/logo-redis.svg + font: + text: 'Geist' + code: 'Geist Mono' + features: + - content.code.copy + palette: + primary: white + accent: red + +markdown_extensions: + - pymdownx.highlight: + anchor_linenums: true + line_spans: __span + pygments_lang_class: true + - pymdownx.inlinehilite + - pymdownx.snippets + - pymdownx.superfences + - admonition + - pymdownx.details + - toc: + permalink: true +nav: + - Overview: overview.md + - New & Noteworthy: new-features.md + - Getting Started: getting-started.md + - User Guide: + - Connecting Redis: user-guide/connecting-redis.md + - Asynchronous API: user-guide/async-api.md + - Reactive API: user-guide/reactive-api.md + - Kotlin API: user-guide/kotlin-api.md + - Publish/Subscribe: user-guide/pubsub.md + - Transactions/Multi: user-guide/transactions-multi.md + - Redis programmability: + - LUA Scripting: user-guide/lua-scripting.md + - Redis Functions: user-guide/redis-functions.md + - High-Availability and Sharding: ha-sharding.md + - Working with dynamic Redis Command Interfaces: redis-command-interfaces.md + - Advanced Usage: advanced-usage.md + - Integration and Extension: integration-extension.md + - Frequently Asked Questions: faq.md \ No newline at end of file From 87326582e4b86571603a0ec2006a7e75468a73fe Mon Sep 17 00:00:00 2001 From: Igor Malinovskyi Date: Fri, 9 Aug 2024 12:47:25 +0200 Subject: [PATCH 03/12] Update docs workflow --- .github/workflows/docs.yml | 1 - 1 file changed, 1 deletion(-) diff --git a/.github/workflows/docs.yml b/.github/workflows/docs.yml index ac0ca9f2a3..d8eb8fa906 100644 --- a/.github/workflows/docs.yml +++ b/.github/workflows/docs.yml @@ -19,7 +19,6 @@ jobs: with: python-version: 3.9 cache: 'pip' - - name: Install dependencies run: | python -m pip install --upgrade pip pip install mkdocs mkdocs-material pymdown-extensions From 1024670caaa9747669db4753273233b711a25b50 Mon Sep 17 00:00:00 2001 From: Igor Malinovskyi Date: Fri, 9 Aug 2024 12:50:11 +0200 Subject: [PATCH 04/12] Update docs workflow --- .github/workflows/docs.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/docs.yml b/.github/workflows/docs.yml index d8eb8fa906..94dde0c618 100644 --- a/.github/workflows/docs.yml +++ b/.github/workflows/docs.yml @@ -18,7 +18,7 @@ jobs: - uses: actions/setup-python@v4 with: python-version: 3.9 - cache: 'pip' + - name: Install dependencies run: | python -m pip install --upgrade pip pip install mkdocs mkdocs-material pymdown-extensions From 264dec0aa441a55661cb467329934e21573473c1 Mon Sep 17 00:00:00 2001 From: Igor Malinovskyi Date: Fri, 9 Aug 2024 15:40:47 +0200 Subject: [PATCH 05/12] Fix links and include main README --- .github/workflows/docs.yml | 2 +- README.md | 28 ++++++------ docs/README.md | 18 -------- docs/advanced-usage.md | 12 +++--- docs/getting-started.md | 27 +++++------- docs/ha-sharding.md | 8 ++-- docs/index.md | 1 + docs/new-features.md | 6 +++ docs/overview.md | 82 ++---------------------------------- docs/user-guide/async-api.md | 5 +-- mkdocs.yml | 8 +++- 11 files changed, 55 insertions(+), 142 deletions(-) delete mode 100644 docs/README.md create mode 100644 docs/index.md diff --git a/.github/workflows/docs.yml b/.github/workflows/docs.yml index 94dde0c618..14d633f787 100644 --- a/.github/workflows/docs.yml +++ b/.github/workflows/docs.yml @@ -21,7 +21,7 
@@ jobs: - name: Install dependencies run: | python -m pip install --upgrade pip - pip install mkdocs mkdocs-material pymdown-extensions + pip install mkdocs mkdocs-material pymdown-extensions mkdocs-macros-plugin - name: Build docs run: | mkdocs build -d docsbuild diff --git a/README.md b/README.md index a1484030eb..d20cfc1bd3 100644 --- a/README.md +++ b/README.md @@ -11,18 +11,18 @@ Supports advanced Redis features such as Sentinel, Cluster, Pipelining, Auto-Rec This version of Lettuce has been tested against the latest Redis source-build. -* [synchronous](https://github.com/lettuce-io/lettuce-core/wiki/Basic-usage), [asynchronous](https://github.com/lettuce-io/lettuce-core/wiki/Asynchronous-API-%284.0%29) and [reactive](https://github.com/lettuce-io/lettuce-core/wiki/Reactive-API-%285.0%29) usage -* [Redis Sentinel](https://github.com/lettuce-io/lettuce-core/wiki/Redis-Sentinel) -* [Redis Cluster](https://github.com/lettuce-io/lettuce-core/wiki/Redis-Cluster) -* [SSL](https://github.com/lettuce-io/lettuce-core/wiki/SSL-Connections) and [Unix Domain Socket](https://github.com/lettuce-io/lettuce-core/wiki/Unix-Domain-Sockets) connections -* [Streaming API](https://github.com/lettuce-io/lettuce-core/wiki/Streaming-API) -* [CDI](https://github.com/lettuce-io/lettuce-core/wiki/CDI-Support) and [Spring](https://github.com/lettuce-io/lettuce-core/wiki/Spring-Support) integration -* [Codecs](https://github.com/lettuce-io/lettuce-core/wiki/Codecs) (for UTF8/bit/JSON etc. representation of your data) +* [synchronous](https://redis.github.io/lettuce/user-guide/connecting-redis/#basic-usage), [asynchronous](https://redis.github.io/lettuce/user-guide/async-api/) and [reactive](https://redis.github.io/lettuce/user-guide/reactive-api/) usage +* [Redis Sentinel](https://redis.github.io/lettuce/ha-sharding/#redis-sentinel_1) +* [Redis Cluster](https://redis.github.io/lettuce/ha-sharding/#redis-cluster) +* [SSL](https://redis.github.io/lettuce/advanced-usage/#ssl-connections) and [Unix Domain Socket](https://redis.github.io/lettuce/advanced-usage/#unix-domain-sockets) connections +* [Streaming API](https://redis.github.io/lettuce/advanced-usage/#streaming-api) +* [CDI](https://redis.github.io/lettuce/integration-extension/#cdi-support) +* [Codecs](https://redis.github.io/lettuce/integration-extension/#codecss) (for UTF8/bit/JSON etc. representation of your data) * multiple [Command Interfaces](https://github.com/lettuce-io/lettuce-core/wiki/Command-Interfaces-%284.0%29) -* Support for [Native Transports](https://github.com/lettuce-io/lettuce-core/wiki/Native-Transports) +* Support for [Native Transports](https://redis.github.io/lettuce/advanced-usage/#native-transports) * Compatible with Java 8++ (implicit automatic module w/o descriptors) -See the [reference documentation](https://lettuce.io/docs/) and [Wiki](https://github.com/lettuce-io/lettuce-core/wiki) for more details. +See the [reference documentation](https://redis.github.io/lettuce/) and [API Reference](https://www.javadoc.io/doc/io.lettuce/lettuce-core/latest/index.html) for more details. ## How do I Redis? @@ -109,7 +109,7 @@ to the lowercase Redis command name. Complex commands with multiple modifiers that change the result type include the CamelCased modifier as part of the command name, e.g. zrangebyscore and zrangebyscoreWithScores. -See [Basic usage](https://github.com/lettuce-io/lettuce-core/wiki/Basic-usage) for further details. 
+See [Basic usage](https://redis.github.io/lettuce/user-guide/connecting-redis/#basic-usage) for further details. Asynchronous API ------------------------ @@ -126,7 +126,7 @@ set.get() == "OK" get.get() == "value" ``` -See [Asynchronous API](https://github.com/lettuce-io/lettuce-core/wiki/Asynchronous-API-%284.0%29) for further details. +See [Asynchronous API](https://redis.github.io/lettuce/user-guide/async-api/) for further details. Reactive API ------------------------ @@ -142,7 +142,7 @@ set.subscribe(); get.block() == "value" ``` -See [Reactive API](https://github.com/lettuce-io/lettuce-core/wiki/Reactive-API-%285.0%29) for further details. +See [Reactive API](https://redis.github.io/lettuce/user-guide/reactive-api/) for further details. Pub/Sub ------- @@ -177,7 +177,7 @@ $ make test Bugs and Feedback ----------- -For bugs, questions and discussions please use the [GitHub Issues](https://github.com/lettuce-io/lettuce-core/issues). +For bugs, questions and discussions please use the [GitHub Issues](https://github.com/redis/lettuce/issues). License ------- @@ -189,4 +189,4 @@ Contributing ------- Github is for social coding: if you want to write code, I encourage contributions through pull requests from forks of this repository. -Create Github tickets for bugs and new features and comment on the ones that you are interested in and take a look into [CONTRIBUTING.md](https://github.com/lettuce-io/lettuce-core/blob/main/.github/CONTRIBUTING.md) +Create Github tickets for bugs and new features and comment on the ones that you are interested in and take a look into [CONTRIBUTING.md](https://github.com/redis/lettuce/blob/main/.github/CONTRIBUTING.md) diff --git a/docs/README.md b/docs/README.md deleted file mode 100644 index c19f52b9cc..0000000000 --- a/docs/README.md +++ /dev/null @@ -1,18 +0,0 @@ -# Table of Contents - -- [Overview](<./overview.md>) -- [New & Noteworthy](<./new-features.md>) -- [Getting Started](<./getting-started.md>) -- [Connecting Redis](<./user-guide/connecting-redis.md>) -- [Async API](<./user-guide/async-api.md>) -- [Reactive API](<./user-guide/reactive-api.md>) -- [Kotlin API](<./user-guide/kotlin-api.md>) -- [Transactions and Pipelining](<./user-guide/transactions-multi.md>) -- [Pub/Sub](<./user-guide/pubsub.md>) -- [Lua Scripting](<./user-guide/lua-scripting.md>) -- [Redis Functions](<./user-guide/redis-functions.md>) -- [High-Availability and Sharding](<./ha-sharding.md>) -- [Working with dynamic Redis Command Interfaces](<./redis-command-interfaces.md>) -- [Advanced usage](<./advanced-usage.md>) -- [Integration and Extension](<./integration-extension.md>) -- [Frequently Asked Questions](<./faq.md>) \ No newline at end of file diff --git a/docs/advanced-usage.md b/docs/advanced-usage.md index f6de542d2c..f224eec381 100644 --- a/docs/advanced-usage.md +++ b/docs/advanced-usage.md @@ -950,13 +950,13 @@ the command is running and is not yet completed. 
There are 4 StreamingChannels accepting different data types: -- [KeyStreamingChannel](http://redis.paluch.biz/docs/api/releases/latest/com/lambdaworks/redis/output/KeyStreamingChannel.html) +- [KeyStreamingChannel](https://www.javadoc.io/static/io.lettuce/lettuce-core/6.4.0.RELEASE/io/lettuce/core/output/KeyStreamingChannel.html) -- [ValueStreamingChannel](http://redis.paluch.biz/docs/api/releases/latest/com/lambdaworks/redis/output/ValueStreamingChannel.html) +- [ValueStreamingChannel](https://www.javadoc.io/static/io.lettuce/lettuce-core/6.4.0.RELEASE/io/lettuce/core/output/ValueStreamingChannel.html) -- [KeyValueStreamingChannel](http://redis.paluch.biz/docs/api/releases/latest/com/lambdaworks/redis/output/KeyValueStreamingChannel.html) +- [KeyValueStreamingChannel](https://www.javadoc.io/static/io.lettuce/lettuce-core/6.4.0.RELEASE/io/lettuce/core/output/KeyValueStreamingChannel.html) -- [ScoredValueStreamingChannel](http://redis.paluch.biz/docs/api/releases/latest/com/lambdaworks/redis/output/ScoredValueStreamingChannel.html) +- [ScoredValueStreamingChannel](https://www.javadoc.io/static/io.lettuce/lettuce-core/6.4.0.RELEASE/io/lettuce/core/output/ScoredValueStreamingChannel.html) The result of the steaming methods is the count of keys/values/key-value pairs as `long` value. @@ -1638,7 +1638,7 @@ multiple commands in a batch (size depends on your environment, but batches between 50 and 1000 work nice during performance tests) can increase the throughput up to a factor of 5x. -Pipelining within the Redis docs: +Pipelining within the Redis docs: ## Connection Pooling @@ -2577,7 +2577,7 @@ replica yet. If a failover occurs at that moment, a replica takes over, and the not yet replicated data is lost. Replication behavior is Redis-specific. Further documentation about failover and consistency from Redis perspective is available within the Redis docs: - + ### Switching between *at-least-once* and *at-most-once* operations diff --git a/docs/getting-started.md b/docs/getting-started.md index dcd44d8163..af6d3a6838 100644 --- a/docs/getting-started.md +++ b/docs/getting-started.md @@ -12,7 +12,7 @@ Add these lines to file pom.xml: io.lettuce lettuce-core - 6.3.2.RELEASE + 6.4.0.RELEASE ``` @@ -23,7 +23,7 @@ Add these lines to file ivy.xml: ``` xml - + ``` @@ -34,7 +34,7 @@ Add these lines to file build.gradle: ``` groovy dependencies { - implementation 'io.lettuce:lettuce-core:6.3.2.RELEASE' + implementation 'io.lettuce:lettuce-core:6.4.0.RELEASE' } ``` @@ -71,24 +71,17 @@ Done! Do you want to see working examples? 
-- [Standalone - Redis](https://github.com/redis/lettuce/blob/6.3.2.RELEASE/src/test/java/io/lettuce/examples/ConnectToRedis.java) +- [Standalone Redis](https://github.com/redis/lettuce/blob/main/src/test/java/io/lettuce/examples/ConnectToRedis.java) -- [Standalone Redis with - SSL](https://github.com/redis/lettuce/blob/6.3.2.RELEASE/src/test/java/io/lettuce/examples/ConnectToRedisSSL.java) +- [Standalone Redis with SSL](https://github.com/redis/lettuce/blob/main/src/test/java/io/lettuce/examples/ConnectToRedisSSL.java) -- [Redis - Sentinel](https://github.com/redis/lettuce/blob/6.3.2.RELEASE/src/test/java/io/lettuce/examples/ConnectToRedisUsingRedisSentinel.java) +- [Redis Sentinel](https://github.com/redis/lettuce/blob/main/src/test/java/io/lettuce/examples/ConnectToRedisUsingRedisSentinel.java) -- [Redis - Cluster](https://github.com/redis/lettuce/blob/6.3.2.RELEASE/src/test/java/io/lettuce/examples/ConnectToRedisCluster.java) +- [Redis Cluster](https://github.com/redis/lettuce/blob/main/src/test/java/io/lettuce/examples/ConnectToRedisCluster.java) -- [Connecting to a ElastiCache - Master](https://github.com/redis/lettuce/blob/6.3.2.RELEASE/src/test/java/io/lettuce/examples/ConnectToElastiCacheMaster.java) +- [Connecting to a ElastiCache Master](https://github.com/redis/lettuce/blob/main/src/test/java/io/lettuce/examples/ConnectToElastiCacheMaster.java) -- [Connecting to ElastiCache with - Master/Replica](https://github.com/redis/lettuce/blob/6.3.2.RELEASE/src/test/java/io/lettuce/examples/ConnectToMasterSlaveUsingElastiCacheCluster.java) +- [Connecting to ElastiCache with Master/Replica](https://github.com/redis/lettuce/blob/main/src/test/java/io/lettuce/examples/ConnectToMasterSlaveUsingElastiCacheCluster.java) -- [Connecting to Azure Redis - Cluster](https://github.com/redis/lettuce/blob/6.3.2.RELEASE/src/test/java/io/lettuce/examples/ConnectToRedisClusterSSL.java) +- [Connecting to Azure Redis Cluster](https://github.com/redis/lettuce/blob/main/src/test/java/io/lettuce/examples/ConnectToRedisClusterSSL.java) diff --git a/docs/ha-sharding.md b/docs/ha-sharding.md index d771ddbfac..67c547b65d 100644 --- a/docs/ha-sharding.md +++ b/docs/ha-sharding.md @@ -176,7 +176,7 @@ following options: lookup using the `masterId`. As soon as the Redis Sentinel provides an address the connection is restored to the new Redis instance -Read more at +Read more at ### Examples @@ -244,10 +244,10 @@ users of the cluster connection might be affected. ### Command routing -The [concept of Redis Cluster](http://redis.io/topics/cluster-tutorial) +The [concept of Redis Cluster](https://redis.io/docs/latest/operate/oss_and_stack/management/scaling/) bases on sharding. Every master node within the cluster handles one or more slots. Slots are the [unit of -sharding](http://redis.io/topics/cluster-tutorial#redis-cluster-data-sharding) +sharding](https://redis.io/docs/latest/operate/oss_and_stack/management/scaling/#redis-cluster-data-sharding) and calculated from the commands' key using `CRC16 MOD 16384`. Hash slots can also be specified using hash tags such as `{user:1000}.foo`. 
@@ -361,7 +361,7 @@ selections can be constructed by the following presets: - all nodes A custom selection of nodes is available by implementing [custom -predicates](http://redis.paluch.biz/docs/api/current/com/lambdaworks/redis/cluster/api/async/RedisAdvancedClusterAsyncCommands.html#nodes-java.util.function.Predicate-) +predicates](https://www.javadoc.io/static/io.lettuce/lettuce-core/6.4.0.RELEASE/io/lettuce/core/cluster/api/async/RedisAdvancedClusterAsyncCommands.html#nodes-java.util.function.Predicate-) or lambdas. The particular results map to a cluster node (`RedisClusterNode`) that diff --git a/docs/index.md b/docs/index.md new file mode 100644 index 0000000000..074a48bf84 --- /dev/null +++ b/docs/index.md @@ -0,0 +1 @@ +{% include 'README.md' %} \ No newline at end of file diff --git a/docs/new-features.md b/docs/new-features.md index 84a2d23bf8..bd5abe02fa 100644 --- a/docs/new-features.md +++ b/docs/new-features.md @@ -1,5 +1,11 @@ # New & Noteworthy +## What’s new in Lettuce 6.4 + +- [Hash Field Expiration](https://redis.io/docs/latest/develop/data-types/hashes/#field-expiration) is now fully supported +- [Sharded Pub/Sub](https://redis.io/docs/latest/develop/interact/pubsub/#sharded-pubsub) is now fully supported +- Support `CLIENT KILL` with `[MAXAGE]` parameter and `HSCAN` with `NOVALUES` parameter + ## What’s new in Lettuce 6.3 - [Redis Function support](user-guide/redis-functions.md) (`fcall` and `FUNCTION` diff --git a/docs/overview.md b/docs/overview.md index 020d93c88d..780cc189aa 100644 --- a/docs/overview.md +++ b/docs/overview.md @@ -12,26 +12,7 @@ familiar with Redis concepts. ## Knowing Redis -NoSQL stores have taken the storage world by storm. It is a vast domain -with a plethora of solutions, terms and patterns (to make things worse -even the term itself has multiple -[meanings](https://www.google.com/search?q=nosql+acronym)). While some -of the principles are common, it is crucial that the user is familiar to -some degree with Redis. The best way to get acquainted to these -solutions is to read and follow their documentation - it usually doesn't -take more than 5-10 minutes to go through them and if you are coming -from an RDMBS-only background many times these exercises can be an -eye-opener. - -The jumping off ground for learning about Redis is -[redis.io](https://www.redis.io/). Here is a list of other useful -resources: - -- The [interactive tutorial](https://try.redis.io/) introduces Redis. - -- The [command references](https://redis.io/commands) explains Redis - commands and contains links to getting started guides, reference - documentation and tutorials. +If you are new to Redis, you can find a good introduction to Redis on [redis.io](https://redis.io/docs/latest/develop/) ## Project Reactor @@ -65,78 +46,23 @@ Lettuce 6.x binaries require JDK level 8.0 and above. In terms of [Redis](https://redis.io/), at least 2.6. -## Additional Help Resources - -Learning a new framework is not always straight forward.In this section, -we try to provide what we think is an easy-to-follow guide for starting -with Lettuce. However, if you encounter issues or you are just looking -for an advice, feel free to use one of the links below: - -### Support - -There are a few support options available: - -- Lettuce on Stackoverflow - [Stackoverflow](https://stackoverflow.com/questions/tagged/lettuce) is - a tag for all Lettuce users to share information and help each - other.Note that registration is needed **only** for posting. 
- -- Get in touch with the community on - [Gitter](https://gitter.im/lettuce-io/Lobby). - -- GitHub Discussions: - - -- Report bugs (or ask questions) in GitHub issues - . - -### Following Development - -For information on the Lettuce source code repository, nightly builds -and snapshot artifacts please see the [Lettuce -homepage](https://lettuce.io). You can help make lettuce best serve the -needs of the lettuce community by interacting with developers through -the Community on -[Stackoverflow](https://stackoverflow.com/questions/tagged/lettuce). If -you encounter a bug or want to suggest an improvement, please create a -ticket on the lettuce issue -[tracker](https://github.com/redis/lettuce/issues). - -### Project Metadata - -- Version Control – - -- Releases and Binary Packages – - - -- Issue tracker – - -- Release repository – (Maven Central) - -- Snapshot repository – - (OSS - Sonatype Snapshots) - ## Where to go from here - Head to [Getting Started](getting-started.md) if you feel like jumping straight into the code. -- Go to [High-Availability and - Sharding](ha-sharding.md) for Master/Replica +- Go to [High-Availability and Sharding](ha-sharding.md) for Master/Replica ("Master/Slave"), Redis Sentinel and Redis Cluster topics. - In order to dig deeper into the core features of Reactor: - If you’re looking for client configuration options, performance - related behavior and how to use various transports, go to [Advanced - usage](advanced-usage.md). + related behavior and how to use various transports, go to [Advanced usage](advanced-usage.md). - See [Integration and Extension](integration-extension.md) for extending Lettuce with codecs or integrate it in your CDI/Spring application. - You want to know more about **at-least-once** and **at-most-once**? - Take a look into [Command execution - reliability](advanced-usage.md#command-execution-reliability). + Take a look into [Command execution reliability](advanced-usage.md#command-execution-reliability). diff --git a/docs/user-guide/async-api.md b/docs/user-guide/async-api.md index cc100b07cb..c11c350d44 100644 --- a/docs/user-guide/async-api.md +++ b/docs/user-guide/async-api.md @@ -23,7 +23,7 @@ transmission has finished and the response of the transmission is processed. This means, in the context of Lettuce and especially Redis, that multiple commands can be issued serially without the need of waiting to finish the preceding command. This mode of operation is also -known as [Pipelining](http://redis.io/topics/pipelining). The following +known as [Pipelining](https://redis.io/docs/latest/develop/use/pipelining/). The following example should give you an impression of the mode of operation: - Given client *A* and client *B* @@ -455,8 +455,7 @@ doing blocking calls within the `Runnable`. Another chaining method worth mentioning is the either-or chaining. A couple of `…​Either()` methods are available on a `CompletionStage`, -see the [Java 8 API -docs](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletionStage.html) +see the [Java 8 API docs](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletionStage.html) for the full reference. The either-or pattern consumes the value from the first future that is completed. 
A good example might be two services returning the same data, for instance, a Master-Replica scenario, but diff --git a/mkdocs.yml b/mkdocs.yml index bfdf7f3990..eb085c35ff 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -11,6 +11,11 @@ theme: primary: white accent: red +plugins: + - search + - macros: + include_dir: . + markdown_extensions: - pymdownx.highlight: anchor_linenums: true @@ -41,4 +46,5 @@ nav: - Working with dynamic Redis Command Interfaces: redis-command-interfaces.md - Advanced Usage: advanced-usage.md - Integration and Extension: integration-extension.md - - Frequently Asked Questions: faq.md \ No newline at end of file + - Frequently Asked Questions: faq.md + - API Reference: https://www.javadoc.io/doc/io.lettuce/lettuce-core/latest/index.html \ No newline at end of file From f67a2c482941e6f2b728dad9c7a3c38dc6c77ffd Mon Sep 17 00:00:00 2001 From: Igor Malinovskyi Date: Fri, 9 Aug 2024 15:45:14 +0200 Subject: [PATCH 06/12] Update TLS section --- docs/advanced-usage.md | 20 +++----------------- 1 file changed, 3 insertions(+), 17 deletions(-) diff --git a/docs/advanced-usage.md b/docs/advanced-usage.md index f224eec381..cf982ca0a7 100644 --- a/docs/advanced-usage.md +++ b/docs/advanced-usage.md @@ -690,23 +690,9 @@ overall-queue limit is ## SSL Connections Lettuce supports SSL connections since version 3.1 on Redis Standalone -connections and since version 4.2 on Redis Cluster. Redis has no native -SSL support, SSL is implemented usually by using -[stunnel](https://www.stunnel.org/index.html). - -An example stunnel configuration can look like: - - cert=/etc/ssl/cert.pem - key=/etc/ssl/key.pem - capath=/etc/ssl/cert.pem - cafile=/etc/ssl/cert.pem - delay=yes - pid=/etc/ssl/stunnel.pid - foreground = no - - [redis] - accept = 127.0.0.1:6443 - connect = 127.0.0.1:6479 +connections and since version 4.2 on Redis Cluster. [Redis supports SSL since version 6.0](https://redis.io/docs/latest/operate/oss_and_stack/management/security/encryption/). + +First, you need to [enable SSL on your Redis server](https://redis.io/docs/latest/operate/oss_and_stack/management/security/encryption/). Next step is connecting lettuce over SSL to Redis. From 57cb34e7db2b8128f9a2d161ba86d61538e541df Mon Sep 17 00:00:00 2001 From: Igor Malinovskyi Date: Fri, 9 Aug 2024 15:59:31 +0200 Subject: [PATCH 07/12] Fix more links --- README.md | 16 +++++++--------- 1 file changed, 7 insertions(+), 9 deletions(-) diff --git a/README.md b/README.md index d20cfc1bd3..e778d2ea1b 100644 --- a/README.md +++ b/README.md @@ -18,7 +18,7 @@ This version of Lettuce has been tested against the latest Redis source-build. * [Streaming API](https://redis.github.io/lettuce/advanced-usage/#streaming-api) * [CDI](https://redis.github.io/lettuce/integration-extension/#cdi-support) * [Codecs](https://redis.github.io/lettuce/integration-extension/#codecss) (for UTF8/bit/JSON etc. 
representation of your data) -* multiple [Command Interfaces](https://github.com/lettuce-io/lettuce-core/wiki/Command-Interfaces-%284.0%29) +* multiple [Command Interfaces](https://github.com/redis/lettuce/wiki/Command-Interfaces-%284.0%29) * Support for [Native Transports](https://redis.github.io/lettuce/advanced-usage/#native-transports) * Compatible with Java 8++ (implicit automatic module w/o descriptors) @@ -41,27 +41,25 @@ See the [reference documentation](https://redis.github.io/lettuce/) and [API Ref Communication --------------- -* [GitHub Discussions](https://github.com/lettuce-io/lettuce-core/discussions) (Q&A, Ideas, General discussion) +* [GitHub Discussions](https://github.com/redis/lettuce/discussions) (Q&A, Ideas, General discussion) * Stack Overflow (Questions): [https://stackoverflow.com/questions/tagged/lettuce](https://stackoverflow.com/questions/tagged/lettuce) * Discord: [![Discord](https://img.shields.io/discord/697882427875393627.svg?style=social&logo=discord)](https://discord.gg/redis) * Twitter: [![Twitter](https://img.shields.io/twitter/follow/redisinc?style=social)](https://twitter.com/redisinc) -* [GitHub Issues](https://github.com/lettuce-io/lettuce-core/issues) (Bug reports, feature requests) +* [GitHub Issues](https://github.com/redis/lettuce/issues) (Bug reports, feature requests) Documentation --------------- -* [Reference documentation](https://lettuce.io/docs/) -* [Wiki](https://github.com/lettuce-io/lettuce-core/wiki) -* [Javadoc](https://lettuce.io/core/release/api/) - +* [Reference documentation](https://redis.github.io/lettuce/) +* [Javadoc](https://www.javadoc.io/doc/io.lettuce/lettuce-core/latest/index.html) Binaries/Download ---------------- Binaries and dependency information for Maven, Ivy, Gradle and others can be found at http://search.maven.org. -Releases of lettuce are available in the Maven Central repository. Take also a look at the [Releases](https://github.com/lettuce-io/lettuce-core/releases). +Releases of lettuce are available in the Maven Central repository. Take also a look at the [Releases](https://github.com/redis/lettuce/releases). Example for Maven: @@ -162,7 +160,7 @@ are configured using a ```Makefile```. 
Tests run by default against Redis `unsta To build: ``` -$ git clone https://github.com/lettuce-io/lettuce-core.git +$ git clone https://github.com/redis/lettuce.git $ cd lettuce/ $ make prepare ssl-keys $ make test From 9d13428b245b017a201988caa57d80de82ab0607 Mon Sep 17 00:00:00 2001 From: Igor Malinovskyi Date: Fri, 9 Aug 2024 15:59:43 +0200 Subject: [PATCH 08/12] Fix more links --- .github/CONTRIBUTING.md | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md index 77b1708308..55194335a0 100644 --- a/.github/CONTRIBUTING.md +++ b/.github/CONTRIBUTING.md @@ -57,17 +57,16 @@ If you have a question, then check one of the following places first as GitHub i **Checkout the docs** -* [Reference documentation](https://lettuce.io/docs/) -* [Wiki](https://github.com/lettuce-io/lettuce-core/wiki) -* [Javadoc](https://lettuce.io/core/release/api/) +* [Reference documentation](https://redis.github.io/lettuce/) +* [Javadoc](https://www.javadoc.io/doc/io.lettuce/lettuce-core/latest/index.html) **Communication** -* GitHub Discussions (Q&A, Ideas, General discussion): https://github.com/lettuce-io/lettuce-core/discussions +* [GitHub Discussions](https://github.com/redis/lettuce/discussions) (Q&A, Ideas, General discussion) * Stack Overflow (Questions): [https://stackoverflow.com/questions/tagged/lettuce](https://stackoverflow.com/questions/tagged/lettuce) -* Gitter (chat): [![Join the chat at https://gitter.im/lettuce-io/Lobby](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/lettuce-io/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) -* Twitter: [@LettuceDriver](https://twitter.com/LettuceDriver) -* [GitHub Issues](https://github.com/lettuce-io/lettuce-core/issues) (Bug reports, feature requests) +* Discord: [![Discord](https://img.shields.io/discord/697882427875393627.svg?style=social&logo=discord)](https://discord.gg/redis) +* Twitter: [![Twitter](https://img.shields.io/twitter/follow/redisinc?style=social)](https://twitter.com/redisinc) +* [GitHub Issues](https://github.com/redis/lettuce/issues) (Bug reports, feature requests) ### Building from Source From 5489c2bca649f04644ab26bc6fe7d5974f58ad99 Mon Sep 17 00:00:00 2001 From: Igor Malinovskyi Date: Fri, 9 Aug 2024 16:00:55 +0200 Subject: [PATCH 09/12] Remove ascii docs and maven profile --- pom.xml | 124 +- src/main/asciidoc/advanced-usage.asciidoc | 74 - src/main/asciidoc/faq.asciidoc | 5 - src/main/asciidoc/getting-started.asciidoc | 48 - src/main/asciidoc/ha-sharding.asciidoc | 30 - .../asciidoc/images/apple-touch-icon-144.png | Bin 18372 -> 0 bytes .../asciidoc/images/apple-touch-icon-180.png | Bin 19515 -> 0 bytes .../asciidoc/images/lettuce-green-text@2x.png | Bin 11672 -> 0 bytes .../asciidoc/images/touch-icon-192x192.png | Bin 19800 -> 0 bytes src/main/asciidoc/index-docinfo.html | 5 - src/main/asciidoc/index.asciidoc | 33 - .../asciidoc/integration-extension.asciidoc | 10 - src/main/asciidoc/kotlin-api.asciidoc | 79 - src/main/asciidoc/new-features.adoc | 94 - src/main/asciidoc/overview.asciidoc | 83 - .../redis-command-interfaces.asciidoc | 4 - .../asciidoc/scripting-and-functions.asciidoc | 4 - src/main/asciidoc/stylesheets/golo.css | 1990 ----------------- 18 files changed, 1 insertion(+), 2582 deletions(-) delete mode 100644 src/main/asciidoc/advanced-usage.asciidoc delete mode 100644 src/main/asciidoc/faq.asciidoc delete mode 100644 src/main/asciidoc/getting-started.asciidoc delete mode 100644 
src/main/asciidoc/ha-sharding.asciidoc delete mode 100644 src/main/asciidoc/images/apple-touch-icon-144.png delete mode 100644 src/main/asciidoc/images/apple-touch-icon-180.png delete mode 100644 src/main/asciidoc/images/lettuce-green-text@2x.png delete mode 100644 src/main/asciidoc/images/touch-icon-192x192.png delete mode 100644 src/main/asciidoc/index-docinfo.html delete mode 100644 src/main/asciidoc/index.asciidoc delete mode 100644 src/main/asciidoc/integration-extension.asciidoc delete mode 100644 src/main/asciidoc/kotlin-api.asciidoc delete mode 100644 src/main/asciidoc/new-features.adoc delete mode 100644 src/main/asciidoc/overview.asciidoc delete mode 100644 src/main/asciidoc/redis-command-interfaces.asciidoc delete mode 100644 src/main/asciidoc/scripting-and-functions.asciidoc delete mode 100644 src/main/asciidoc/stylesheets/golo.css diff --git a/pom.xml b/pom.xml index c1c5f9215f..1bd0368b50 100644 --- a/pom.xml +++ b/pom.xml @@ -893,7 +893,7 @@ org.apache.maven.plugins maven-release-plugin - sonatype-oss-release,documentation + sonatype-oss-release deploy true @{project.version} @@ -1238,128 +1238,6 @@ - - - documentation - - - - - - - org.apache.maven.plugins - maven-antrun-plugin - - - - rename-reference-docs - process-resources - - - - - - - run - - - - - - - - org.asciidoctor - asciidoctor-maven-plugin - 2.2.4 - - - org.asciidoctor - asciidoctorj-pdf - 2.3.9 - - - - - - html - generate-resources - - process-asciidoc - - - html5 - - ${project.build.directory}/site/reference/html - - book - - true - true - stylesheets - golo.css - - - - - - pdf - generate-resources - - process-asciidoc - - - pdf - - - - - - - src/main/asciidoc - index.asciidoc - book - - ${project.version} - true - 3 - true - - https://raw.githubusercontent.com/wiki/lettuce-io/lettuce-core/ - - - - - font - coderay - - - - - - org.apache.maven.plugins - maven-assembly-plugin - - - docs - package - - single - - - - src/assembly/docs.xml - - gnu - true - - - - - - - - - diff --git a/src/main/asciidoc/advanced-usage.asciidoc b/src/main/asciidoc/advanced-usage.asciidoc deleted file mode 100644 index 50b657e54b..0000000000 --- a/src/main/asciidoc/advanced-usage.asciidoc +++ /dev/null @@ -1,74 +0,0 @@ -:auto-reconnect-link: <> -:client-options-link: <> -:client-resources-link: <> - -:custom-commands-command-output-link: <> -:custom-commands-command-exec-model-link: <> - -[[advanced-usage]] -== Advanced usage - -[[client-resources]] -=== Configuring Client resources -include::{ext-doc}/Configuring-Client-resources.asciidoc[leveloffset=+2] - -[[client-options]] -=== Client Options -include::{ext-doc}/Client-Options.asciidoc[leveloffset=+2] - -[[ssl]] -=== SSL Connections -include::{ext-doc}/SSL-Connections.asciidoc[leveloffset=+2] - -[[native-transports]] -=== Native Transports -include::{ext-doc}/Native-Transports.asciidoc[leveloffset=+2] - -[[unix-domain-sockets]] -=== Unix Domain Sockets -include::{ext-doc}/Unix-Domain-Sockets.asciidoc[leveloffset=+2] - -[[streaming-api]] -=== Streaming API -include::{ext-doc}/Streaming-API.asciidoc[leveloffset=+1] - -[[events]] -=== Events -include::{ext-doc}/Connection-Events.asciidoc[leveloffset=+2] - -[[observability]] -=== Observability - -The following section explains Lettuces metrics and tracing capabilities. 
- -[[observability.metrics]] -==== Metrics - -include::{ext-doc}/Command-Latency-Metrics.asciidoc[leveloffset=+2] - -[[observability.tracing]] -==== Tracing - -include::{ext-doc}/Tracing.asciidoc[leveloffset=+2] - -=== Pipelining and command flushing - -include::{ext-doc}/Pipelining-and-command-flushing.asciidoc[leveloffset=+2] - -=== Connection Pooling - -include::{ext-doc}/Connection-Pooling.asciidoc[leveloffset=+2] - -=== Custom commands - -include::{ext-doc}/Custom-commands%2C-outputs-and-command-mechanics.asciidoc[leveloffset=+2] - -=== Graal Native Image - -include::{ext-doc}/Using-Lettuce-with-Native-Images.asciidoc[leveloffset=+2] - -[[command-execution-reliability]] -=== Command execution reliability - -include::{ext-doc}/Command-execution-reliability.asciidoc[leveloffset=+2] - diff --git a/src/main/asciidoc/faq.asciidoc b/src/main/asciidoc/faq.asciidoc deleted file mode 100644 index 1793a75438..0000000000 --- a/src/main/asciidoc/faq.asciidoc +++ /dev/null @@ -1,5 +0,0 @@ -:client-options-link: <> - -[[faq]] -== Frequently Asked Questions -include::{ext-doc}/Frequently-Asked-Questions.asciidoc[leveloffset=+1] diff --git a/src/main/asciidoc/getting-started.asciidoc b/src/main/asciidoc/getting-started.asciidoc deleted file mode 100644 index 54f166c867..0000000000 --- a/src/main/asciidoc/getting-started.asciidoc +++ /dev/null @@ -1,48 +0,0 @@ -:ssl-link: <> -:uds-link: <> -:native-transport-link: <> -:basic-synchronous-link: <> -:asynchronous-api-link: <> -:reactive-api-link: <> -:asynchronous-link: <> -:reactive-link: <> - -[[getting-started]] -== Getting Started -include::{ext-doc}/Getting-started.asciidoc[leveloffset=+1] - -[[connecting-redis]] -== Connecting Redis -include::{ext-doc}/Redis-URI-and-connection-details.asciidoc[] - -[[basic-usage]] -=== Basic Usage -include::{ext-doc}/Basic-usage.asciidoc[leveloffset=+1] - -[[asynchronous-api]] -=== Asynchronous API - -include::{ext-doc}/Asynchronous-API.asciidoc[leveloffset=+2] - -[[reactive-api]] -=== Reactive API - -include::{ext-doc}/Reactive-API.asciidoc[leveloffset=+2] - -[[kotlin]] -=== Kotlin API - -include::kotlin-api.asciidoc[leveloffset=+2] - -=== Publish/Subscribe - -include::{ext-doc}/Pub-Sub.asciidoc[leveloffset=+1] - -=== Transactions/Multi - -include::{ext-doc}/Transactions.asciidoc[leveloffset=+1] - -[[scripting-and-functions]] -=== Scripting and Functions - -include::scripting-and-functions.asciidoc[] diff --git a/src/main/asciidoc/ha-sharding.asciidoc b/src/main/asciidoc/ha-sharding.asciidoc deleted file mode 100644 index aacd855bf3..0000000000 --- a/src/main/asciidoc/ha-sharding.asciidoc +++ /dev/null @@ -1,30 +0,0 @@ -:redis-sentinel-link: <> -:upstream-replica-api-link: <> -:cco-up-to-5-times: <> -:cco-link: <> -:cco-periodic-link: <> -:cco-adaptive-link: <> - -[[ha-sharding]] -== High-Availability and Sharding - -[[master-slave]] -[[master-replica]] -[[upstream-replica]] -=== Master/Replica - -include::{ext-doc}/Master-Replica.asciidoc[leveloffset=+2] - -[[redis-sentinel]] -=== Redis Sentinel - -include::{ext-doc}/Redis-Sentinel.asciidoc[leveloffset=+2] - -[[redis-cluster]] -=== Redis Cluster -include::{ext-doc}/Redis-Cluster.asciidoc[leveloffset=+2] - -[[readfrom-settings]] -=== ReadFrom Settings -include::{ext-doc}/ReadFrom-Settings.asciidoc[leveloffset=+2] - diff --git a/src/main/asciidoc/images/apple-touch-icon-144.png b/src/main/asciidoc/images/apple-touch-icon-144.png deleted file mode 100644 index 8adb9fff097aa835424a0fb1c0fea7797fef91a3..0000000000000000000000000000000000000000 GIT binary patch literal 
0 HcmV?d00001 literal 18372 zcmeI3c|25o`^V4N%D(Sm8Y%nC*vc3hS)-9P31u)bV#YF6B)gWILM6YE?mBhXJXZ|=*mKG-591fZj^+#kn28lcBD74+2pa_WI^+7?j|y3$xe6BCx&&cu#`qA%DEhAq!i# zLkHZn-CcJoYHv}~P==}@kXmqrHXNy;fFBH+rqRl!D7GF{^j}l`24bT z0NE&rhTtpGUpfP95BlQa*7yKYpuZd5D2R3`vcI}|GGYJs!TBN6O3!c49ZwLE%<^x9Py>UNS+yA|kHxtWh$~zb(D0v4-$(p= zKD;50jMrQ7svy)=5Sq3~BuZTqrHNEpnVJYtl2zqTIj|&m!v2H5$f2QuQr`^D0aE^) zgSLX)ab(<#BgI?I-#B;K+IeADjms?o0GQ!@oCzl;7qR%7Ele@~52^UQZnj z|5Y={^<}Q1wi5%$IHDVVyP+PfL6tyoMFa*#3{yh9tMZmG$y9wfldW8f5=R>Ye@s+FpnA25DB#{=Y5pi!}cz zWl9L3&Cr9OnM<3$Uv@Jayx)(le|LBjzU)fAIR60r%9_)K{k`bEZDZedsTI$cwTr^J zt?XxdZYxUy?~aE5(E9zfzB#|z2>;&!T+I)%|Gq}BC!Y9k3u-mh7uD~R1d#TVgK_?N zeGl54_|I1RYTh4>KNmp5SH1zf2|o_SYU__9^!J?o=ZKI@4!_(?fVabfFlvK*H$?T7@jp4!)HKQ)6~E4%ON z#$ZFcQNZhIX=3^IUoQ_chOoMLBz(p4kSiryqra?Dm z;G&BNEMVXQ)1VtOaM48s7BFyuY0!-sxacAR3mCY-H0Z_*TyznE1q@tZ8gyd@F1m=o z0tPNH4Z1M{7hObP0RtD92Hlu}i!LItfPo84gKo^gMHdlRz`zBjK{saLqKgPDVBiAN zpc^xA(M1FnFmQos(2W_m=pq6O7`VVR=*A3ObP<6C3|wFubYliCx`@C61}-oSx-kP6 zT|{630~eSE-I#%kE+VjifeTE7Zp^?%7ZF&%zy+p3H)i0XiwG=W-~!X28#8dxMFbWw zaDi#ijsF!F$M**S@kH8Ff5EiJ`zp4kH`5*sgu0npV*%ih8~{*{1Hj@k?Q;SE0ucZ( z=>h<#6aWw-kG%pLelHtMa-vA38ntth)%*H?PsI5jFS^O5nc%)D~NzFl+kOw^QE zBfrL}dfSVd*X6+y1n={@ei_x z2FqkrCO)uI-<3(-JFA=#B353}HtH7dB!%f-IEI=_I#Uq1=8|N2O!Bijz%(7uaX7(G zP0^Rl?4*eT?+3}|^aKNYN8R}MNn+Tkn*iR~GE>Pr+nk=>RknX|QZaD77z~)_ipW2N z-Qkmy4LBaF6F_~?(Tw3#EV*szR;9?AuG}bA-&qrM%aF=K?Fv;kmh&?Np1n7esl=%o zN$TA9XaNM6Mx?S^Vdo#rfBfqso)}9Vn@6C#bd{4r8J@4 zljKs|WPI}J$@~Y4d+@GEv!bR=V-+onyLU{?+5q#jWeff)gwVX!yqdY^2EY&yN;?B? z*1z)CJ!08z25p|ZW-EAd!$?mA+Y`%eJQ7^Ub)@Av)qQRap2!cH7J_P(`>>fl;RUp>d{~!t*7)Nc z7_|rZAl!Bk$&t-GF7o~fU;WiqSbLR()q4?_N<&;5QJ8gjCZr4q_in0hiqyXU%Jd8q zn-x;#6mZkQTlP+d>W=*C_?dOyt&PIeY>6dCzxq`K~ z(J))fkNJ?L*R8Pbr6U&mSc7tndUC5$dK~vWe)I(7Q#t@?M@Y*?Q6X&#F-2v*l(iu4hQJ~<7QN|RH`=t?l?i{u-FO}sCe`D-PXWVqkzX&SyhySwkK8yQZr{Aw{7hlu z1C>1*#wV5n_Gp2s7)J z*LO;}0}^k1PE)XP?YpM}*?=8@=rk(d=IIP|zNOnM*E=g&ADkBB$!%%jV3=X*e1h1C zvD4hPEBACaAN%{xtC#w{V$BZ3$fw8jvYa~VK2Wjo5yu{g;pTbxd*0v+0y?#GrW1a#D3!`N531dk;-dDPIbSKgnv}r$T(i zn%A9=yw`eOBHP+m**_P1Wnqv4Sx(UMLvRAcbBr)6so#1Wymyz=h;EoEIWkqkLsU!XtJ1Z zqd&jACkv&B<_}8rNoL_KJ=&&uRUlrCs-=9K-8Xet(B$QQE#Icdj;g#m~IetCz+AIak z<81XWgdrOSpTOIC5)%FjWR8w~*_`m1q?Igg{sOfX!KPsSohm5rKnr1+jmi3 zLYJ;r#Wg<0*7#T03TkYBQLEtBC2U|j6_3V9XI#?%U7U=a*?-v%v8wc`zZFOW53W=7e0L7KBgKL zB_2rR&EkOoYr-~%)=R_X1%4yZI!cp(K;O`UGsRf3IJT3OM^#f=Vv?o#X6cKWUF-$D z3lyQm*sxogJOu`9U0kAjcNcJFFv%a5kFNAG=N37KI>6PG5dK6=er)mfL>oWPowWr@ z>D8eIMaygO$E-&i7fhc}v?TgB%iXwy8aN^;$6OeUle_!=K&O$Cja|zl>Y-w$Y@M(| zn~dUZ=ehmvw`InM7PedG#;|qY8aiH1DY5_Sr0gl7aX&o~h_2h`NAY*;Ws)bdIPy8p zRzk*mV5|b?@^KHTD64JEy|$5+@34+ka=*9_BDuV=2`}v0pLV>Up}!Ccw{+#}Hx8fa z>CKB%@*6OVI-17ZG`ScqM{IHoI3;!OPs-7g9HKWkSVBwK-2^h~any*mt_&_>yG0?8 zbca+My;-bWjyn@~n|sQT^k0oGw^L zuJd)vzjIis2$N73k^|%b98&hs3zB!eS@ZHgdNEnHkuGMp*YM#- zu5_P}(7GoS!reOHdeqrtfB9Rf6mvpBc12Twy3yeZtS(js7;KP88R)wAdMkN3K1?xh}s2WCu<-uFzNO?h4y*p+6z7_L|j;g{YHtazJz15ud?x#ei!|ASubHhtMMR>^g_SyGkrD zC!Z7%wBu>ka9DG$E%b=fd~FI~cX{0ZpuXmhcY~JoU2*F~u^@ zvhmVQ6a=ti5CkY$JEk|JPUS~?M<`26@EeL#PnI~PZj_FG zCbMq#&z>5#zT(e?uiDX=Zru7sJ89coshr;6Px|s2lZ{Fd<3{XhtfxYHc0rm@(_!5h zdDW7_!`<@|ky%RNn|2)=l>~TmReJ*-pt`cR3@PXgONmRKFa;K7qtANsN_%mNYvksB zE=?4b$gR#0#hRp@Fs)IaVmdH*`T9<-SSS3>4!s1AR3}~@!k7A zVKngO<;+a+-O~n7J!R^H{pCJ+_}67|hOi-Co{Gwjv5{yVyMDXx$jJ5PF`qKWv}dYH zjV;Qd{xPhvfWiy>#05@mK+4V>fTkq7B>$V!*3z@P#S=lMD zsqftLb(rW+@6DaeOt;Dei8=0gbOXku0Gkh*yV2e<%o#GYyrCED`@&jGin4n(Uh4SJ|LOXZEvfPc=tEtaA!q z8a!*6^AcRxdKb?cjXPpnZ9QuI^0m0VloCWSO1sU;%<%R@yVKVjh`eu$43n6c6uf|W z=KjmF>ZKc*_irz_Rzc8fP^2n8m{5x|-oaX9eMPuBVH%S{0d!fRUiFXT96P#?`Xh#~6zzDRfZ--Q 
diff --git a/src/main/asciidoc/images/apple-touch-icon-180.png b/src/main/asciidoc/images/apple-touch-icon-180.png
deleted file mode 100644
index d0928b5316ed19c37568eb4af5e8053277f8c269..0000000000000000000000000000000000000000
GIT binary patch
[binary PNG data omitted]
diff --git a/src/main/asciidoc/images/lettuce-green-text@2x.png b/src/main/asciidoc/images/lettuce-green-text@2x.png
deleted file mode 100644
index adff15525a19361a304f718107bcb8bf12b9a458..0000000000000000000000000000000000000000
GIT binary patch
[binary PNG data omitted]
diff --git a/src/main/asciidoc/index-docinfo.html b/src/main/asciidoc/index-docinfo.html
deleted file mode 100644
index d47a3c38a1..0000000000
--- a/src/main/asciidoc/index-docinfo.html
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
-
-
-
\ No newline at end of file
diff --git a/src/main/asciidoc/index.asciidoc b/src/main/asciidoc/index.asciidoc
deleted file mode 100644
index 1bde6343e5..0000000000
--- a/src/main/asciidoc/index.asciidoc
+++ /dev/null
@@ -1,33 +0,0 @@
-= Lettuce Reference Guide
-Mark Paluch ;
-:ext-doc: https://raw.githubusercontent.com/wiki/lettuce-io/lettuce-core
-{version}
-:doctype: book
-:icons: font
-:toc:
-:sectnums:
-:sectanchors:
-:docinfo:
-ifdef::backend-pdf[]
-:title-logo-image: images/lettuce-green-text@2x.png
-endif::[]
-
-ifdef::backend-html5[]
-image::images/lettuce-green-text@2x.png[width=50%,link=https://lettuce.io]
-endif::[]
-
-include::overview.asciidoc[]
-
-include::new-features.adoc[leveloffset=+1]
-
-include::getting-started.asciidoc[]
-
-include::ha-sharding.asciidoc[]
-
-include::redis-command-interfaces.asciidoc[]
-
-include::advanced-usage.asciidoc[]
-
-include::integration-extension.asciidoc[]
-
-include::faq.asciidoc[]
diff --git a/src/main/asciidoc/integration-extension.asciidoc b/src/main/asciidoc/integration-extension.asciidoc
deleted file mode 100644
index 4839d122f8..0000000000
--- a/src/main/asciidoc/integration-extension.asciidoc
+++ /dev/null
@@ -1,10 +0,0 @@
-[[integration-extension]]
-== Integration and Extension
-
-[[codecs]]
-=== Codecs
-include::{ext-doc}/Codecs.asciidoc[leveloffset=+1]
-
-[[cdi-support]]
-=== CDI Support
-include::{ext-doc}/CDI-Support.asciidoc[leveloffset=+1]
diff --git a/src/main/asciidoc/kotlin-api.asciidoc b/src/main/asciidoc/kotlin-api.asciidoc
deleted file mode 100644
index e630347ce4..0000000000
--- a/src/main/asciidoc/kotlin-api.asciidoc
+++ /dev/null
@@ -1,79 +0,0 @@
-Kotlin Coroutines are using Kotlin lightweight threads allowing to write non-blocking code in an imperative way.
-On language side, suspending functions provides an abstraction for asynchronous operations while on library side kotlinx.coroutines provides functions like `async { }` and types like `Flow`.
-
-Lettuce ships with extensions to provide support for idiomatic Kotlin use.
-
-== Dependencies
-
-Coroutines support is available when `kotlinx-coroutines-core` and `kotlinx-coroutines-reactive` dependencies are on the classpath:
-
-.pom.xml
-====
-[source,xml]
-----
-<dependency>
-    <groupId>org.jetbrains.kotlinx</groupId>
-    <artifactId>kotlinx-coroutines-core</artifactId>
-    <version>${kotlinx-coroutines.version}</version>
-</dependency>
-<dependency>
-    <groupId>org.jetbrains.kotlinx</groupId>
-    <artifactId>kotlinx-coroutines-reactive</artifactId>
-    <version>${kotlinx-coroutines.version}</version>
-</dependency>
-----
-====
-
-== How does Reactive translate to Coroutines?
-
-`Flow` is an equivalent to `Flux` in Coroutines world, suitable for hot or cold streams, finite or infinite streams, with the following main differences:
-
-* `Flow` is push-based while `Flux` is a push-pull hybrid
-* Backpressure is implemented via suspending functions
-* `Flow` has only a single suspending collect method and operators are implemented as extensions
-* Operators are easy to implement thanks to Coroutines
-* Extensions allow to add custom operators to Flow
-* Collect operations are suspending functions
-* `map` operator supports asynchronous operations (no need for `flatMap`) since it takes a suspending function parameter
-
-== Coroutines API based on reactive operations
-
-Example for retrieving commands and using it:
-
-[source,kotlin]
-----
-val api: RedisCoroutinesCommands<String, String> = connection.coroutines()
-
-val foo1 = api.set("foo", "bar")
-val foo2 = api.keys("fo*")
-----
-
-NOTE: Coroutine Extensions are experimental and require opt-in using `@ExperimentalLettuceCoroutinesApi`.
-The API ships with a reduced feature set.
-Deprecated methods and `StreamingChannel` are left out intentionally.
-Expect evolution towards a `Flow`-based API to consume large Redis responses.
-
-== Extensions for existing APIs
-
-=== Transactions DSL
-
-Example for the synchronous API:
-
-[source,kotlin]
-----
-val result: TransactionResult = connection.sync().multi {
-    set("foo", "bar")
-    get("foo")
-}
-----
-
-Example for async with coroutines:
-
-[source,kotlin]
-----
-val result: TransactionResult = connection.async().multi {
-    set("foo", "bar")
-    get("foo")
-}
-----
-
diff --git a/src/main/asciidoc/new-features.adoc b/src/main/asciidoc/new-features.adoc
deleted file mode 100644
index 795df139bb..0000000000
--- a/src/main/asciidoc/new-features.adoc
+++ /dev/null
@@ -1,94 +0,0 @@
-[[new-features]]
-= New & Noteworthy
-
-[[new-features.6-3-0]]
-== What's new in Lettuce 6.3
-
-* <<_redis_functions,Redis Function support>> (`fcall` and `FUNCTION` commands).
-* Support for Library Name and Version through `LettuceVersion`.
-Automated registration of the Lettuce library version upon connection handshake.
-* Support for Micrometer Tracing to trace observations (distributed tracing and metrics).
-
-[[new-features.6-2-0]]
-== What's new in Lettuce 6.2
-
-* <> abstraction to externalize credentials and credentials rotation.
-* Retrieval of Redis Cluster node connections using `ConnectionIntent` to obtain read-only connections.
-* Master/Replica now uses `SENTINEL REPLICAS` to discover replicas instead of `SENTINEL SLAVES`.
-
-[[new-features.6-1-0]]
-== What's new in Lettuce 6.1
-
-* Kotlin Coroutines support for `SCAN`/`HSCAN`/`SSCAN`/`ZSCAN` through `ScanFlow`.
-* Command Listener API through `RedisClient.addListener(CommandListener)`.
-* <> through `MicrometerCommandLatencyRecorder`.
-* <>.
-* Configuration of extended Keep-Alive options through `KeepAliveOptions` (only available for some transports/Java versions).
-* Configuration of netty's `AddressResolverGroup` through `ClientResources`.
-Uses `DnsAddressResolverGroup` when `netty-resolver-dns` is on the classpath.
-* Add support for Redis ACL commands.
-* <>
-
-[[new-features.6-0-0]]
-== What's new in Lettuce 6.0
-
-* Support for RESP3 usage with Redis 6 along with RESP2/RESP3 handshake and protocol version discovery.
-* ACL authentication using username and password or password-only authentication.
-* Cluster topology refresh is now non-blocking.
-* <>.
-* RxJava 3 support.
-* Refined Scripting API accepting the Lua script either as `byte[]` or `String`. -* Connection and Queue failures now no longer throw an exception but properly associate the failure with the Future handle. -* Removal of deprecated API including timeout methods accepting `TimeUnit`. -Use methods accepting `Duration` instead. -* Lots of internal refinements. -* `xpending` methods return now `List` and `PendingMessages` -* Spring support removed. -Use Spring Data Redis for a seamless Spring integration with Lettuce. -* `AsyncConnectionPoolSupport.createBoundedObjectPool(…)` methods are now blocking to await pool initialization. -* `DecodeBufferPolicy` for fine-grained memory reclaim control. -* `RedisURI.toString()` renders masked password. -* `ClientResources.commandLatencyCollector(…)` refactored into `ClientResources.commandLatencyRecorder(…)` returning `CommandLatencyRecorder`. - -[[new-features.5-3-0]] -== What's new in Lettuce 5.3 - -* Improved SSL configuration supporting Cipher suite selection and PEM-encoded certificates. -* Fixed method signature for `randomkey()`. -* Un-deprecated `ClientOptions.pingBeforeActivateConnection` to allow connection verification during connection handshake. - -[[new-features.5-2-0]] -== What's new in Lettuce 5.2 - -* Allow randomization of read candidates using Redis Cluster. -* SSL support for Redis Sentinel. - -[[new-features.5-1-0]] -== What's new in Lettuce 5.1 - -* Add support for `ZPOPMIN`, `ZPOPMAX`, `BZPOPMIN`, `BZPOPMAX` commands. -* Add support for Redis Command Tracing through Brave, see <>. -* Add support for https://redis.io/topics/streams-intro[Redis Streams]. -* Asynchronous `connect()` for Master/Replica connections. -* <> through `AsyncConnectionPoolSupport` and `AsyncPool`. -* Dedicated exceptions for Redis `LOADING`, `BUSY`, and `NOSCRIPT` responses. -* Commands in at-most-once mode (auto-reconnect disabled) are now canceled already on disconnect. -* Global command timeouts (also for reactive and asynchronous API usage) configurable through <>. -* Host and port mappers for Lettuce usage behind connection tunnels/proxies through `SocketAddressResolver`, see <>. -* `SCRIPT LOAD` dispatch to all cluster nodes when issued through `RedisAdvancedClusterCommands`. -* Reactive `ScanStream` to iterate over the keyspace using `SCAN` commands. -* Transactions using Master/Replica connections are bound to the master node. - -[[new-features.5-0-0]] -== What's new in Lettuce 5.0 - -* New artifact coordinates: `io.lettuce:lettuce-core` and packages moved from `com.lambdaworks.redis` to `io.lettuce.core`. -* <> now Reactive Streams-based using https://projectreactor.io/[Project Reactor]. -* <> supporting dynamic command invocation and Redis Modules. -* Enhanced, immutable Key-Value objects. -* Asynchronous Cluster connect. -* Native transport support for Kqueue on macOS systems. -* Removal of support for Guava. -* Removal of deprecated `RedisConnection` and `RedisAsyncConnection` interfaces. -* Java 9 compatibility. -* HTML and PDF reference documentation along with a new project website: https://lettuce.io. diff --git a/src/main/asciidoc/overview.asciidoc b/src/main/asciidoc/overview.asciidoc deleted file mode 100644 index 83b4f5e771..0000000000 --- a/src/main/asciidoc/overview.asciidoc +++ /dev/null @@ -1,83 +0,0 @@ -[[overview]] -== Overview - -This document is the reference guide for Lettuce. It explains how to use Lettuce, its concepts, semantics, and the syntax. 
- -You can read this reference guide in a linear fashion, or you can skip sections if something does not interest you. - -This section provides some basic introduction to Redis. The rest of the document refers only to Lettuce features and assumes the user is familiar with Redis concepts. - -[[overview.redis]] -=== Knowing Redis - -NoSQL stores have taken the storage world by storm. -It is a vast domain with a plethora of solutions, terms and patterns (to make things worse even the term itself has multiple https://www.google.com/search?q=nosql+acronym[meanings]). -While some of the principles are common, it is crucial that the user is familiar to some degree with Redis. -The best way to get acquainted to these solutions is to read and follow their documentation - it usually doesn't take more than 5-10 minutes to go through them and if you are coming from an RDMBS-only background many times these exercises can be an eye-opener. - -The jumping off ground for learning about Redis is https://www.redis.io/[redis.io]. -Here is a list of other useful resources: - -* The https://try.redis.io/[interactive tutorial] introduces Redis. -* The https://redis.io/commands[command references] explains Redis commands and contains links to getting started guides, reference documentation and tutorials. - -=== Project Reactor - -https://projectreactor.io[Reactor] is a highly optimized reactive library for building efficient, non-blocking applications on the JVM based on the https://github.com/reactive-streams/reactive-streams-jvm[Reactive Streams Specification]. -Reactor based applications can sustain very high throughput message rates and operate with a very low memory footprint, making it suitable for building efficient event-driven applications using the microservices architecture. - -Reactor implements two publishers https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html[Flux] and -https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Mono.html[Mono], both of which support non-blocking back-pressure. -This enables exchange of data between threads with well-defined memory usage, avoiding unnecessary intermediate buffering or blocking. - -=== Non-blocking API for Redis - -Lettuce is a scalable thread-safe Redis client based on https://netty.io[netty] and Reactor. -Lettuce provides <>, <> and <> APIs to interact with Redis. - -[[overview.requirements]] -=== Requirements - -Lettuce 4.x and 5.x binaries require JDK level 8.0 and above. - -In terms of https://redis.io/[Redis], at least 2.6. - -=== Additional Help Resources - -Learning a new framework is not always straight forward.In this section, we try to provide what we think is an easy-to-follow guide for starting with Lettuce. However, if you encounter issues or you are just looking for an advice, feel free to use one of the links below: - -[[overview.support]] -==== Support - -There are a few support options available: - -* Lettuce on Stackoverflow https://stackoverflow.com/questions/tagged/lettuce[Stackoverflow] is a tag for all Lettuce users to share information and help each other.Note that registration is needed *only* for posting. -* Get in touch with the community on https://gitter.im/lettuce-io/Lobby[Gitter]. -* GitHub Discussions: https://github.com/lettuce-io/lettuce-core/discussions -* Report bugs (or ask questions) in GitHub issues https://github.com/lettuce-io/lettuce-core/issues. 
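As a minimal sketch of the synchronous, asynchronous, and reactive API styles mentioned above (assuming a Redis server reachable at `redis://localhost` with Lettuce and Project Reactor on the classpath; the class name and keys are illustrative only):

[source,java]
----
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisFuture;
import io.lettuce.core.api.StatefulRedisConnection;
import reactor.core.publisher.Mono;

public class ApiStylesExample {

    public static void main(String[] args) {
        // Assumption for this sketch: a Redis server is running at redis://localhost.
        RedisClient client = RedisClient.create("redis://localhost");
        StatefulRedisConnection<String, String> connection = client.connect();

        // Synchronous API: blocks the calling thread until the reply arrives.
        connection.sync().set("key", "value");

        // Asynchronous API: returns a RedisFuture (a CompletionStage) immediately.
        RedisFuture<String> async = connection.async().get("key");

        // Reactive API: returns a Mono that emits the value once subscribed.
        Mono<String> reactive = connection.reactive().get("key");

        System.out.println(async.toCompletableFuture().join());
        System.out.println(reactive.block());

        connection.close();
        client.shutdown();
    }
}
----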
- -[[overview.development]] -==== Following Development - -For information on the Lettuce source code repository, nightly builds and snapshot artifacts please see the https://lettuce.io[Lettuce homepage]. -You can help make lettuce best serve the needs of the lettuce community by interacting with developers through the Community on https://stackoverflow.com/questions/tagged/lettuce[Stackoverflow]. -If you encounter a bug or want to suggest an improvement, please create a ticket on the lettuce issue https://github.com/lettuce-io/lettuce-core/issues[tracker]. - -==== Project Metadata - -* Version Control – https://github.com/lettuce-io/lettuce-core -* Releases and Binary Packages – https://github.com/lettuce-io/lettuce-core/releases -* Issue tracker – https://github.com/lettuce-io/lettuce-core/issues -* Release repository – https://repo1.maven.org/maven2/ (Maven Central) -* Snapshot repository – https://oss.sonatype.org/content/repositories/snapshots/ (OSS Sonatype Snapshots) - -=== Where to go from here - -* Head to <> if you feel like jumping straight into the code. -* Go to <> for Master/Replica ("Master/Slave"), Redis Sentinel and Redis Cluster topics. -* In order to dig deeper into the core features of Reactor: -** If you’re looking for client configuration options, performance related behavior and how to use various transports, go to <>. -** See <> for extending Lettuce with codecs or integrate it in your CDI/Spring application. -** You want to know more about *at-least-once* and *at-most-once*? -Take a look into <>. - diff --git a/src/main/asciidoc/redis-command-interfaces.asciidoc b/src/main/asciidoc/redis-command-interfaces.asciidoc deleted file mode 100644 index ae5b750bf6..0000000000 --- a/src/main/asciidoc/redis-command-interfaces.asciidoc +++ /dev/null @@ -1,4 +0,0 @@ - -[[redis-command-interfaces]] -include::{ext-doc}/Redis-Command-Interfaces.asciidoc[leveloffset=+1] - diff --git a/src/main/asciidoc/scripting-and-functions.asciidoc b/src/main/asciidoc/scripting-and-functions.asciidoc deleted file mode 100644 index 73c7f66345..0000000000 --- a/src/main/asciidoc/scripting-and-functions.asciidoc +++ /dev/null @@ -1,4 +0,0 @@ -:command-interfaces-link: <> -[[redis-scripting-and-functions]] -include::{ext-doc}/Scripting-and-Functions.asciidoc[leveloffset=+2] - diff --git a/src/main/asciidoc/stylesheets/golo.css b/src/main/asciidoc/stylesheets/golo.css deleted file mode 100644 index b7699baf53..0000000000 --- a/src/main/asciidoc/stylesheets/golo.css +++ /dev/null @@ -1,1990 +0,0 @@ -@import url('https://fonts.googleapis.com/css?family=Raleway:300:400:700'); -@import url(https://cdnjs.cloudflare.com/ajax/libs/semantic-ui/1.6.2/semantic.min.css); - - -#header .details br+span.author:before { - content: "\00a0\0026\00a0"; - color: rgba(0,0,0,.85); -} - -#header .details br+span.email:before { - content: "("; -} - -#header .details br+span.email:after { - content: ")"; -} - -/*! normalize.css v2.1.2 | MIT License | git.io/normalize */ -/* ========================================================================== HTML5 display definitions ========================================================================== */ -/** Correct `block` display not defined in IE 8/9. */ -@import url(https://cdnjs.cloudflare.com/ajax/libs/font-awesome/3.2.1/css/font-awesome.css); - -article, aside, details, figcaption, figure, footer, header, hgroup, main, nav, section, summary { - display: block; -} - -/** Correct `inline-block` display not defined in IE 8/9. 
*/ -audio, canvas, video { - display: inline-block; -} - -/** Prevent modern browsers from displaying `audio` without controls. Remove excess height in iOS 5 devices. */ -audio:not([controls]) { - display: none; - height: 0; -} - -/** Address `[hidden]` styling not present in IE 8/9. Hide the `template` element in IE, Safari, and Firefox < 22. */ -[hidden], template { - display: none; -} - -script { - display: none !important; -} - -/* ========================================================================== Base ========================================================================== */ -/** 1. Set default font family to sans-serif. 2. Prevent iOS text size adjust after orientation change, without disabling user zoom. */ -html { - font-family: sans-serif; /* 1 */ - -ms-text-size-adjust: 100%; /* 2 */ - -webkit-text-size-adjust: 100%; /* 2 */ -} - -/** Remove default margin. */ -body { - margin: 0; -} - -/* ========================================================================== Links ========================================================================== */ -/** Remove the gray background color from active links in IE 10. */ -a { - background: transparent; -} - -/** Address `outline` inconsistency between Chrome and other browsers. */ -a:focus { - outline: thin dotted; -} - -/** Improve readability when focused and also mouse hovered in all browsers. */ -a:active, a:hover { - outline: 0; -} - -/* ========================================================================== Typography ========================================================================== */ -/** Address variable `h1` font-size and margin within `section` and `article` contexts in Firefox 4+, Safari 5, and Chrome. */ -h1 { - font-size: 2em; - margin: 1.2em 0; -} - -/** Address styling not present in IE 8/9, Safari 5, and Chrome. */ -abbr[title] { - border-bottom: 1px dotted; -} - -/** Address style set to `bolder` in Firefox 4+, Safari 5, and Chrome. */ -b, strong { - font-weight: bold; -} - -/** Address styling not present in Safari 5 and Chrome. */ -dfn { - font-style: italic; -} - -/** Address differences between Firefox and other browsers. */ -hr { - -moz-box-sizing: content-box; - box-sizing: content-box; - height: 0; -} - -/** Address styling not present in IE 8/9. */ -mark { - background: #ff0; - color: #000; -} - -/** Correct font family set oddly in Safari 5 and Chrome. */ -code, kbd, pre, samp { - font-family: Menlo, Monaco, 'Liberation Mono', Consolas, monospace; - font-size: 1em; -} - -/** Improve readability of pre-formatted text in all browsers. */ -pre { - white-space: pre-wrap; -} - -/** Set consistent quote types. */ -q { - quotes: "\201C" "\201D" "\2018" "\2019"; -} - -/** Address inconsistent and variable font size in all browsers. */ -small { - font-size: 80%; -} - -/** Prevent `sub` and `sup` affecting `line-height` in all browsers. */ -sub, sup { - font-size: 75%; - line-height: 0; - position: relative; - vertical-align: baseline; -} - -sup { - top: -0.5em; -} - -sub { - bottom: -0.25em; -} - -/* ========================================================================== Embedded content ========================================================================== */ -/** Remove border when inside `a` element in IE 8/9. */ -img { - border: 0; -} - -/** Correct overflow displayed oddly in IE 9. 
*/ -svg:not(:root) { - overflow: hidden; -} - -/* ========================================================================== Figures ========================================================================== */ -/** Address margin not present in IE 8/9 and Safari 5. */ -figure { - margin: 0; -} - -/* ========================================================================== Forms ========================================================================== */ -/** Define consistent border, margin, and padding. */ -fieldset { - border: 1px solid #c0c0c0; - margin: 0 2px; - padding: 0.35em 0.625em 0.75em; -} - -/** 1. Correct `color` not being inherited in IE 8/9. 2. Remove padding so people aren't caught out if they zero out fieldsets. */ -legend { - border: 0; /* 1 */ - padding: 0; /* 2 */ -} - -/** 1. Correct font family not being inherited in all browsers. 2. Correct font size not being inherited in all browsers. 3. Address margins set differently in Firefox 4+, Safari 5, and Chrome. */ -button, input, select, textarea { - font-family: inherit; /* 1 */ - font-size: 100%; /* 2 */ - margin: 0; /* 3 */ -} - -/** Address Firefox 4+ setting `line-height` on `input` using `!important` in the UA stylesheet. */ -button, input { - line-height: normal; -} - -/** Address inconsistent `text-transform` inheritance for `button` and `select`. All other form control elements do not inherit `text-transform` values. Correct `button` style inheritance in Chrome, Safari 5+, and IE 8+. Correct `select` style inheritance in Firefox 4+ and Opera. */ -button, select { - text-transform: none; -} - -/** 1. Avoid the WebKit bug in Android 4.0.* where (2) destroys native `audio` and `video` controls. 2. Correct inability to style clickable `input` types in iOS. 3. Improve usability and consistency of cursor style between image-type `input` and others. */ -button, html input[type="button"], input[type="reset"], input[type="submit"] { - -webkit-appearance: button; /* 2 */ - cursor: pointer; /* 3 */ -} - -/** Re-set default cursor for disabled elements. */ -button[disabled], html input[disabled] { - cursor: default; -} - -/** 1. Address box sizing set to `content-box` in IE 8/9. 2. Remove excess padding in IE 8/9. */ -input[type="checkbox"], input[type="radio"] { - box-sizing: border-box; /* 1 */ - padding: 0; /* 2 */ -} - -/** 1. Address `appearance` set to `searchfield` in Safari 5 and Chrome. 2. Address `box-sizing` set to `border-box` in Safari 5 and Chrome (include `-moz` to future-proof). */ -input[type="search"] { - -webkit-appearance: textfield; /* 1 */ - -moz-box-sizing: content-box; - -webkit-box-sizing: content-box; /* 2 */ - box-sizing: content-box; -} - -/** Remove inner padding and search cancel button in Safari 5 and Chrome on OS X. */ -input[type="search"]::-webkit-search-cancel-button, input[type="search"]::-webkit-search-decoration { - -webkit-appearance: none; -} - -/** Remove inner padding and border in Firefox 4+. */ -button::-moz-focus-inner, input::-moz-focus-inner { - border: 0; - padding: 0; -} - -/** 1. Remove default vertical scrollbar in IE 8/9. 2. Improve readability and alignment in all browsers. */ -textarea { - overflow: auto; /* 1 */ - vertical-align: top; /* 2 */ -} - -/* ========================================================================== Tables ========================================================================== */ -/** Remove most spacing between table cells. 
*/ -table { - border-collapse: collapse; - border-spacing: 0; -} - -meta.foundation-mq-small { - font-family: "only screen and (min-width: 768px)"; - width: 768px; -} - -meta.foundation-mq-medium { - font-family: "only screen and (min-width:1280px)"; - width: 1280px; -} - -meta.foundation-mq-large { - font-family: "only screen and (min-width:1440px)"; - width: 1440px; -} - -*, *:before, *:after { - -moz-box-sizing: border-box; - -webkit-box-sizing: border-box; - box-sizing: border-box; -} - -html, body { - font-size: 100%; -} - -body { - background: white; - color: #34302d; - padding: 0; - margin: 0; - font-family: "Helvetica Neue", "Helvetica", Helvetica, Arial, sans-serif; - font-weight: 400; - font-style: normal; - line-height: 1.8em; - position: relative; - cursor: auto; -} - -#content, #content p { - line-height: 1.8em; - margin-top: 1.5em; -} - -#content li p { - margin-top: 0.25em; -} - -a:hover { - cursor: pointer; -} - -img, object, embed { - max-width: 100%; - height: auto; -} - -object, embed { - height: 100%; -} - -img { - -ms-interpolation-mode: bicubic; -} - -#map_canvas img, #map_canvas embed, #map_canvas object, .map_canvas img, .map_canvas embed, .map_canvas object { - max-width: none !important; -} - -.left { - float: left !important; -} - -.right { - float: right !important; -} - -.text-left { - text-align: left !important; -} - -.text-right { - text-align: right !important; -} - -.text-center { - text-align: center !important; -} - -.text-justify { - text-align: justify !important; -} - -.hide { - display: none; -} - -.antialiased, body { - -webkit-font-smoothing: antialiased; -} - -img { - display: inline-block; - vertical-align: middle; -} - -textarea { - height: auto; - min-height: 50px; -} - -select { - width: 100%; -} - -p.lead, .paragraph.lead > p, #preamble > .sectionbody > .paragraph:first-of-type p { - font-size: 1.21875em; -} - -.subheader, #content #toctitle, .admonitionblock td.content > .title, .exampleblock > .title, .imageblock > .title, .listingblock > .title, .literalblock > .title, .mathblock > .title, .openblock > .title, .paragraph > .title, .quoteblock > .title, .sidebarblock > .title, .tableblock > .title, .verseblock > .title, .videoblock > .title, .dlist > .title, .olist > .title, .ulist > .title, .qlist > .title, .hdlist > .title, .tableblock > caption { - color: #6db33f; - font-weight: 300; - margin-top: 0.2em; - margin-bottom: 0.5em; -} - -/* Typography resets */ -div, dl, dt, dd, ul, ol, li, h1, h2, h3, #toctitle, .sidebarblock > .content > .title, h4, h5, h6, pre, form, p, blockquote, th, td { - margin: 0; - padding: 0; - direction: ltr; -} - -/* Default Link Styles */ -a { - color: #6db33f; - line-height: inherit; - text-decoration: none; -} - -a:hover, a:focus { - color: #6db33f; - text-decoration: underline; -} - -a img { - border: none; -} - -/* Default paragraph styles */ -p { - font-family: inherit; - font-weight: normal; - font-size: 1em; - margin-bottom: 1.25em; - text-rendering: optimizeLegibility; -} - -p aside { - font-size: 0.875em; - font-style: italic; -} - -/* Default header styles */ -h1, h2, h3, #toctitle, .sidebarblock > .content > .title, h4, h5, h6 { - font-family: "Raleway", Arial, sans-serif; - font-weight: normal; - font-style: normal; - color: #34302d; - text-rendering: optimizeLegibility; - margin-top: 1.6em; - margin-bottom: 0.6em; -} - -h1 small, h2 small, h3 small, #toctitle small, .sidebarblock > .content > .title small, h4 small, h5 small, h6 small { - font-size: 60%; - color: #6db33f; - line-height: 0; -} - -h1 
{ - font-size: 2.125em; - line-height: 2em; -} - -h2 { - font-size: 1.6875em; - line-height: 1.5em; -} - -h3, #toctitle, .sidebarblock > .content > .title { - font-size: 1.375em; - line-height: 1.3em; -} - -h4 { - font-size: 1.125em; -} - -h5 { - font-size: 1.125em; -} - -h6 { - font-size: 1em; -} - -hr { - border: solid #dcd2c9; - border-width: 1px 0 0; - clear: both; - margin: 1.25em 0 1.1875em; - height: 0; -} - -/* Helpful Typography Defaults */ -em, i { - font-style: italic; - line-height: inherit; -} - -strong, b { - font-weight: bold; - line-height: inherit; -} - -small { - font-size: 60%; - line-height: inherit; -} - -code { - font-family: Consolas, "Liberation Mono", Courier, monospace; - font-weight: bold; - color: #305CB5; -} - -/* Lists */ -ul, ol, dl { - font-size: 1em; - margin-bottom: 1.25em; - list-style-position: outside; - font-family: inherit; -} - -ul, ol { - margin-left: 1.5em; -} - -ul.no-bullet, ol.no-bullet { - margin-left: 1.5em; -} - -/* Unordered Lists */ -ul li ul, ul li ol { - margin-left: 1.25em; - margin-bottom: 0; - font-size: 1em; /* Override nested font-size change */ -} - -ul.square li ul, ul.circle li ul, ul.disc li ul { - list-style: inherit; -} - -ul.square { - list-style-type: square; -} - -ul.circle { - list-style-type: circle; -} - -ul.disc { - list-style-type: disc; -} - -ul.no-bullet { - list-style: none; -} - -/* Ordered Lists */ -ol li ul, ol li ol { - margin-left: 1.25em; - margin-bottom: 0; -} - -/* Definition Lists */ -dl dt { - margin-bottom: 0.3125em; - font-weight: bold; -} - -dl dd { - margin-bottom: 1.25em; -} - -/* Abbreviations */ -abbr, acronym { - text-transform: uppercase; - font-size: 90%; - color: #34302d; - border-bottom: 1px dotted #dddddd; - cursor: help; -} - -abbr { - text-transform: none; -} - -/* Blockquotes */ -blockquote { - margin: 0 0 1.25em; - padding: 0.5625em 1.25em 0 1.1875em; - border-left: 1px solid #dddddd; -} - -blockquote cite { - display: block; - font-size: 0.8125em; - color: #655241; -} - -blockquote cite:before { - content: "\2014 \0020"; -} - -blockquote cite a, blockquote cite a:visited { - color: #655241; -} - -blockquote, blockquote p { - color: #34302d; -} - -/* Microformats */ -.vcard { - display: inline-block; - margin: 0 0 1.25em 0; - border: 1px solid #dddddd; - padding: 0.625em 0.75em; -} - -.vcard li { - margin: 0; - display: block; -} - -.vcard .fn { - font-weight: bold; - font-size: 0.9375em; -} - -.vevent .summary { - font-weight: bold; -} - -.vevent abbr { - cursor: auto; - text-decoration: none; - font-weight: bold; - border: none; - padding: 0 0.0625em; -} - -@media only screen and (min-width: 768px) { - h1, h2, h3, #toctitle, .sidebarblock > .content > .title, h4, h5, h6 { - } - - h1 { - font-size: 2.75em; - } - - h2 { - font-size: 2.3125em; - } - - h3, #toctitle, .sidebarblock > .content > .title { - font-size: 1.6875em; - } - - h4 { - font-size: 1.4375em; - } -} - -/* Print styles. 
Inlined to avoid required HTTP connection: www.phpied.com/delay-loading-your-print-css/ Credit to Paul Irish and HTML5 Boilerplate (html5boilerplate.com) -*/ -.print-only { - display: none !important; -} - -@media print { - * { - background: transparent !important; - color: #000 !important; /* Black prints faster: h5bp.com/s */ - box-shadow: none !important; - text-shadow: none !important; - } - - a, a:visited { - text-decoration: underline; - } - - a[href]:after { - content: " (" attr(href) ")"; - } - - abbr[title]:after { - content: " (" attr(title) ")"; - } - - .ir a:after, a[href^="javascript:"]:after, a[href^="#"]:after { - content: ""; - } - - pre, blockquote { - border: 1px solid #999; - page-break-inside: avoid; - } - - thead { - display: table-header-group; /* h5bp.com/t */ - } - - tr, img { - page-break-inside: avoid; - } - - img { - max-width: 100% !important; - } - - @page { - margin: 0.5cm; - } - - p, h2, h3, #toctitle, .sidebarblock > .content > .title { - orphans: 3; - widows: 3; - } - - h2, h3, #toctitle, .sidebarblock > .content > .title { - page-break-after: avoid; - } - - .hide-on-print { - display: none !important; - } - - .print-only { - display: block !important; - } - - .hide-for-print { - display: none !important; - } - - .show-for-print { - display: inherit !important; - } -} - -/* Tables */ -table { - background: white; - margin-bottom: 1.25em; - border: solid 1px #34302d; -} - -table thead, table tfoot { - font-weight: bold; -} - -table thead tr th, table thead tr td, table tfoot tr th, table tfoot tr td { - padding: 0.5em 0.625em 0.625em; - font-size: inherit; - color: #34302d; - text-align: left; -} - -table thead tr th { - color: white; - background: #34302d; -} - -table tr th, table tr td { - padding: 0.5625em 0.625em; - font-size: inherit; - color: #34302d; - border: 0 none; -} - -table tr.even, table tr.alt, table tr:nth-of-type(even) { - background: #f2F2F2; -} - -table thead tr th, table tfoot tr th, table tbody tr td, table tr td, table tfoot tr td { - display: table-cell; -} - -.clearfix:before, .clearfix:after, .float-group:before, .float-group:after { - content: " "; - display: table; -} - -.clearfix:after, .float-group:after { - clear: both; -} - -*:not(pre) > code { - font-size: inherit; - padding: 0; - white-space: nowrap; - background-color: inherit; - border: 0 solid #dddddd; - -webkit-border-radius: 6px; - border-radius: 6px; - text-shadow: none; -} - -pre, pre > code { - color: black; - font-family: monospace, serif; - font-weight: normal; -} - -.keyseq { - color: #774417; -} - -kbd:not(.keyseq) { - display: inline-block; - color: #211306; - font-size: 0.75em; - background-color: #F7F7F7; - border: 1px solid #ccc; - -webkit-border-radius: 3px; - border-radius: 3px; - -webkit-box-shadow: 0 1px 0 rgba(0, 0, 0, 0.2), 0 0 0 2px white inset; - box-shadow: 0 1px 0 rgba(0, 0, 0, 0.2), 0 0 0 2px white inset; - margin: -0.15em 0.15em 0 0.15em; - padding: 0.2em 0.6em 0.2em 0.5em; - vertical-align: middle; - white-space: nowrap; -} - -.keyseq kbd:first-child { - margin-left: 0; -} - -.keyseq kbd:last-child { - margin-right: 0; -} - -.menuseq, .menu { - color: black; -} - -b.button:before, b.button:after { - position: relative; - top: -1px; - font-weight: normal; -} - -b.button:before { - content: "["; - padding: 0 3px 0 2px; -} - -b.button:after { - content: "]"; - padding: 0 2px 0 3px; -} - -p a > code:hover { - color: #541312; -} - -#header, #content, #footnotes, #footer { - width: 100%; - margin-left: auto; - margin-right: auto; - margin-top: 0; - 
margin-bottom: 0; - max-width: 62.5em; - *zoom: 1; - position: relative; - padding-left: 4em; - padding-right: 4em; -} - -#header:before, #header:after, #content:before, #content:after, #footnotes:before, #footnotes:after, #footer:before, #footer:after { - content: " "; - display: table; -} - -#header:after, #content:after, #footnotes:after, #footer:after { - clear: both; -} - -#header { - margin-bottom: 2.5em; -} - -#header > h1 { - color: #34302d; - font-weight: 400; -} - -#header span { - color: #34302d; -} - -#header #revnumber { - text-transform: capitalize; -} - -#header br { - display: none; -} - -#header br + span { -} - -#revdate { - display: block; -} - -#toc { - border-bottom: 1px solid #e6dfd8; - padding-bottom: 1.25em; -} - -#toc > ul { - margin-left: 0.25em; -} - -#toc ul.sectlevel0 > li > a { - font-style: italic; -} - -#toc ul.sectlevel0 ul.sectlevel1 { - margin-left: 0; - margin-top: 0.5em; - margin-bottom: 0.5em; -} - -#toc ul { - list-style-type: none; -} - -#toctitle { - color: #385dbd; -} - -@media only screen and (min-width: 768px) { - body.toc2 { - padding-left: 15em; - padding-right: 0; - } - - #toc.toc2 { - position: fixed; - width: 15em; - left: 0; - border-bottom: 0; - z-index: 1000; - padding: 1em; - height: 100%; - top: 0px; - background: #F1F1F1; - overflow: auto; - - -moz-transition-property: top; - -o-transition-property: top; - -webkit-transition-property: top; - transition-property: top; - -moz-transition-duration: 0.4s; - -o-transition-duration: 0.4s; - -webkit-transition-duration: 0.4s; - transition-duration: 0.4s; - } - - #reactor-header { - position: fixed; - top: -75px; - left: 0; - right: 0; - height: 75px; - - - -moz-transition-property: top; - -o-transition-property: top; - -webkit-transition-property: top; - transition-property: top; - -moz-transition-duration: 0.4s; - -o-transition-duration: 0.4s; - -webkit-transition-duration: 0.4s; - transition-duration: 0.4s; - } - - body.head-show #toc.toc2 { - top: 75px; - } - body.head-show #reactor-header { - top: 0; - } - - #toc.toc2 a { - color: #34302d; - font-family: "Raleway", Arial, sans-serif; - } - - #toc.toc2 #toctitle { - margin-top: 0; - font-size: 1.2em; - } - - #toc.toc2 > ul { - font-size: .90em; - } - - #toc.toc2 ul ul { - margin-left: 0; - padding-left: 0.4em; - } - - #toc.toc2 ul.sectlevel0 ul.sectlevel1 { - padding-left: 0; - margin-top: 0.5em; - margin-bottom: 0.5em; - } - - body.toc2.toc-right { - padding-left: 0; - padding-right: 15em; - } - - body.toc2.toc-right #toc.toc2 { - border-right: 0; - border-left: 1px solid #e6dfd8; - left: auto; - right: 0; - } -} - -@media only screen and (min-width: 1280px) { - body.toc2 { - padding-left: 20em; - padding-right: 0; - } - - #toc.toc2 { - width: 20em; - } - - #toc.toc2 #toctitle { - font-size: 1.375em; - } - - #toc.toc2 > ul { - font-size: 0.95em; - } - - #toc.toc2 ul ul { - padding-left: 1.25em; - } - - body.toc2.toc-right { - padding-left: 0; - padding-right: 20em; - } -} - -#content #toc { - border-style: solid; - border-width: 1px; - border-color: #d9d9d9; - margin-bottom: 1.25em; - padding: 1.25em; - background: #f2f2f2; - border-width: 0; - -webkit-border-radius: 6px; - border-radius: 6px; -} - -#content #toc > :first-child { - margin-top: 0; -} - -#content #toc > :last-child { - margin-bottom: 0; -} - -#content #toc a { - text-decoration: none; -} - -#content #toctitle { - font-weight: bold; - font-family: "Raleway", Arial, sans-serif; - font-size: 1em; - padding-left: 0.125em; -} - -#footer { - max-width: 100%; - background-color: 
white; - padding: 1.25em; - color: #CCC; - border-top: 3px solid #F1F1F1; -} - -#footer-text { - color: #444; - line-height: 1.44; -} - -.sect1 { - padding-bottom: 1.25em; -} - -.sect1 + .sect1 { - border-top: 1px solid #e6dfd8; -} - -#content h1 > a.anchor, h2 > a.anchor, h3 > a.anchor, #toctitle > a.anchor, .sidebarblock > .content > .title > a.anchor, h4 > a.anchor, h5 > a.anchor, h6 > a.anchor { - position: absolute; - width: 1em; - margin-left: -1em; - display: block; - text-decoration: none; - visibility: hidden; - text-align: center; - font-weight: normal; -} - -#content h1 > a.anchor:before, h2 > a.anchor:before, h3 > a.anchor:before, #toctitle > a.anchor:before, .sidebarblock > .content > .title > a.anchor:before, h4 > a.anchor:before, h5 > a.anchor:before, h6 > a.anchor:before { - content: '\00A7'; - font-size: .85em; - vertical-align: text-top; - display: block; - margin-top: 0.05em; -} - -#content h1:hover > a.anchor, #content h1 > a.anchor:hover, h2:hover > a.anchor, h2 > a.anchor:hover, h3:hover > a.anchor, #toctitle:hover > a.anchor, .sidebarblock > .content > .title:hover > a.anchor, h3 > a.anchor:hover, #toctitle > a.anchor:hover, .sidebarblock > .content > .title > a.anchor:hover, h4:hover > a.anchor, h4 > a.anchor:hover, h5:hover > a.anchor, h5 > a.anchor:hover, h6:hover > a.anchor, h6 > a.anchor:hover { - visibility: visible; -} - -#content h1 > a.link, h2 > a.link, h3 > a.link, #toctitle > a.link, .sidebarblock > .content > .title > a.link, h4 > a.link, h5 > a.link, h6 > a.link { - color: #34302d; - text-decoration: none; -} - -#content h1 > a.link:hover, h2 > a.link:hover, h3 > a.link:hover, #toctitle > a.link:hover, .sidebarblock > .content > .title > a.link:hover, h4 > a.link:hover, h5 > a.link:hover, h6 > a.link:hover { - color: #34302d; -} - -.imageblock, .literalblock, .listingblock, .mathblock, .verseblock, .videoblock { - margin-bottom: 1.25em; - margin-top: 1.25em; -} - -.admonitionblock td.content > .title, .exampleblock > .title, .imageblock > .title, .listingblock > .title, .literalblock > .title, .mathblock > .title, .openblock > .title, .paragraph > .title, .quoteblock > .title, .sidebarblock > .title, .tableblock > .title, .verseblock > .title, .videoblock > .title, .dlist > .title, .olist > .title, .ulist > .title, .qlist > .title, .hdlist > .title { - text-align: left; - font-weight: bold; -} - -.tableblock > caption { - text-align: left; - font-weight: bold; - white-space: nowrap; - overflow: visible; - max-width: 0; -} - -table.tableblock #preamble > .sectionbody > .paragraph:first-of-type p { - font-size: inherit; -} - -.admonitionblock > table { - border: 0; - background: none; - width: 100%; -} - -.admonitionblock > table td.icon { - text-align: center; - width: 80px; -} - -.admonitionblock > table td.icon img { - max-width: none; -} - -.admonitionblock > table td.icon .title { - font-weight: bold; - text-transform: uppercase; -} - -.admonitionblock > table td.content { - padding-left: 1.125em; - padding-right: 1.25em; - border-left: 1px solid #dcd2c9; - color: #34302d; -} - -.admonitionblock > table td.content > :last-child > :last-child { - margin-bottom: 0; -} - -.exampleblock > .content { - border-top: 1px solid #6db33f; - border-bottom: 1px solid #6db33f; - margin-bottom: 1.25em; - padding: 1.25em; - background: white; -} - -.exampleblock > .content > :first-child { - margin-top: 0; -} - -.exampleblock > .content > :last-child { - margin-bottom: 0; -} - -.exampleblock > .content h1, .exampleblock > .content h2, .exampleblock > .content h3, 
.exampleblock > .content #toctitle, .sidebarblock.exampleblock > .content > .title, .exampleblock > .content h4, .exampleblock > .content h5, .exampleblock > .content h6, .exampleblock > .content p { - color: #333333; -} - -.exampleblock > .content h1, .exampleblock > .content h2, .exampleblock > .content h3, .exampleblock > .content #toctitle, .sidebarblock.exampleblock > .content > .title, .exampleblock > .content h4, .exampleblock > .content h5, .exampleblock > .content h6 { - margin-bottom: 0.625em; -} - -.exampleblock > .content h1.subheader, .exampleblock > .content h2.subheader, .exampleblock > .content h3.subheader, .exampleblock > .content .subheader#toctitle, .sidebarblock.exampleblock > .content > .subheader.title, .exampleblock > .content h4.subheader, .exampleblock > .content h5.subheader, .exampleblock > .content h6.subheader { -} - -.exampleblock.result > .content { - -webkit-box-shadow: 0 1px 8px #d9d9d9; - box-shadow: 0 1px 8px #d9d9d9; -} - -.sidebarblock { - padding: 1.25em 2em; - background: #F1F1F1; - margin: 2em -2em; - -} - -.sidebarblock > :first-child { - margin-top: 0; -} - -.sidebarblock > :last-child { - margin-bottom: 0; -} - -.sidebarblock h1, .sidebarblock h2, .sidebarblock h3, .sidebarblock #toctitle, .sidebarblock > .content > .title, .sidebarblock h4, .sidebarblock h5, .sidebarblock h6, .sidebarblock p { - color: #333333; -} - -.sidebarblock h1, .sidebarblock h2, .sidebarblock h3, .sidebarblock #toctitle, .sidebarblock > .content > .title, .sidebarblock h4, .sidebarblock h5, .sidebarblock h6 { - margin-bottom: 0.625em; -} - -.sidebarblock h1.subheader, .sidebarblock h2.subheader, .sidebarblock h3.subheader, .sidebarblock .subheader#toctitle, .sidebarblock > .content > .subheader.title, .sidebarblock h4.subheader, .sidebarblock h5.subheader, .sidebarblock h6.subheader { -} - -.sidebarblock > .content > .title { - color: #6db33f; - margin-top: 0; - font-size: 1.2em; -} - -.exampleblock > .content > :last-child > :last-child, .exampleblock > .content .olist > ol > li:last-child > :last-child, .exampleblock > .content .ulist > ul > li:last-child > :last-child, .exampleblock > .content .qlist > ol > li:last-child > :last-child, .sidebarblock > .content > :last-child > :last-child, .sidebarblock > .content .olist > ol > li:last-child > :last-child, .sidebarblock > .content .ulist > ul > li:last-child > :last-child, .sidebarblock > .content .qlist > ol > li:last-child > :last-child { - margin-bottom: 0; -} - -.literalblock pre:not([class]), .listingblock pre:not([class]) { - background-color:#f2f2f2; -} - -.literalblock pre, .literalblock pre[class], .listingblock pre, .listingblock pre[class] { - border-width: 1px; - border-style: solid; - border-color: rgba(21, 35, 71, 0.1); - -webkit-border-radius: 6px; - border-radius: 6px; - padding: 0.8em; - word-wrap: break-word; -} - -.literalblock pre.nowrap, .literalblock pre[class].nowrap, .listingblock pre.nowrap, .listingblock pre[class].nowrap { - overflow-x: auto; - white-space: pre; - word-wrap: normal; -} - -.literalblock pre > code, .literalblock pre[class] > code, .listingblock pre > code, .listingblock pre[class] > code { - display: block; -} - -@media only screen { - .literalblock pre, .literalblock pre[class], .listingblock pre, .listingblock pre[class] { - font-size: 0.72em; - } -} - -@media only screen and (min-width: 768px) { - .literalblock pre, .literalblock pre[class], .listingblock pre, .listingblock pre[class] { - font-size: 0.81em; - } -} - -@media only screen and (min-width: 1280px) { - 
.literalblock pre, .literalblock pre[class], .listingblock pre, .listingblock pre[class] { - font-size: 0.9em; - } -} - -.listingblock pre.highlight { - padding: 0; - line-height: 1.4em; -} - -.listingblock pre.highlight > code { - padding: 0.8em; -} - -.listingblock > .content { - position: relative; -} - -.listingblock:hover code[class*=" language-"]:before { - text-transform: uppercase; - font-size: 0.9em; - color: #999; - position: absolute; - top: 0.375em; - right: 0.375em; -} - -.listingblock:hover code.asciidoc:before { - content: "asciidoc"; -} - -.listingblock:hover code.clojure:before { - content: "clojure"; -} - -.listingblock:hover code.css:before { - content: "css"; -} - -.listingblock:hover code.groovy:before { - content: "groovy"; -} - -.listingblock:hover code.html:before { - content: "html"; -} - -.listingblock:hover code.java:before { - content: "java"; -} - -.listingblock:hover code.javascript:before { - content: "javascript"; -} - -.listingblock:hover code.python:before { - content: "python"; -} - -.listingblock:hover code.ruby:before { - content: "ruby"; -} - -.listingblock:hover code.sass:before { - content: "sass"; -} - -.listingblock:hover code.scss:before { - content: "scss"; -} - -.listingblock:hover code.xml:before { - content: "xml"; -} - -.listingblock:hover code.yaml:before { - content: "yaml"; -} - -.listingblock.terminal pre .command:before { - content: attr(data-prompt); - padding-right: 0.5em; - color: #999; -} - -.listingblock.terminal pre .command:not([data-prompt]):before { - content: '$'; -} - -table.pyhltable { - border: 0; - margin-bottom: 0; -} - -table.pyhltable td { - vertical-align: top; - padding-top: 0; - padding-bottom: 0; -} - -table.pyhltable td.code { - padding-left: .75em; - padding-right: 0; -} - -.highlight.pygments .lineno, table.pyhltable td:not(.code) { - color: #999; - padding-left: 0; - padding-right: .5em; - border-right: 1px solid #dcd2c9; -} - -.highlight.pygments .lineno { - display: inline-block; - margin-right: .25em; -} - -table.pyhltable .linenodiv { - background-color: transparent !important; - padding-right: 0 !important; -} - -.quoteblock { - margin: 0 0 1.25em; - padding: 0.5625em 1.25em 0 1.1875em; - border-left: 3px solid #dddddd; -} - -.quoteblock blockquote { - margin: 0 0 1.25em 0; - padding: 0 0 0.5625em 0; - border: 0; -} - -.quoteblock blockquote > .paragraph:last-child p { - margin-bottom: 0; -} - -.quoteblock .attribution { - margin-top: -.25em; - padding-bottom: 0.5625em; - font-size: 0.8125em; -} - -.quoteblock .attribution br { - display: none; -} - -.quoteblock .attribution cite { - display: block; - margin-bottom: 0.625em; -} - -table thead th, table tfoot th { - font-weight: bold; -} - -table.tableblock.grid-all { - border-collapse: separate; - border-radius: 6px; - border-top: 1px solid #34302d; - border-bottom: 1px solid #34302d; -} - -table.tableblock.frame-topbot, table.tableblock.frame-none { - border-left: 0; - border-right: 0; -} - -table.tableblock.frame-sides, table.tableblock.frame-none { - border-top: 0; - border-bottom: 0; -} - -table.tableblock td .paragraph:last-child p > p:last-child, table.tableblock th > p:last-child, table.tableblock td > p:last-child { - margin-bottom: 0; -} - -th.tableblock.halign-left, td.tableblock.halign-left { - text-align: left; -} - -th.tableblock.halign-right, td.tableblock.halign-right { - text-align: right; -} - -th.tableblock.halign-center, td.tableblock.halign-center { - text-align: center; -} - -th.tableblock.valign-top, td.tableblock.valign-top { - 
vertical-align: top; -} - -th.tableblock.valign-bottom, td.tableblock.valign-bottom { - vertical-align: bottom; -} - -th.tableblock.valign-middle, td.tableblock.valign-middle { - vertical-align: middle; -} - -tbody tr th { - display: table-cell; - background: rgba(105, 60, 22, 0.25); -} - -tbody tr th, tbody tr th p, tfoot tr th, tfoot tr th p { - color: #211306; - font-weight: bold; -} - -td > div.verse { - white-space: pre; -} - -ol { - margin-left: 1.75em; -} - -ul li ol { - margin-left: 1.5em; -} - -dl dd { - margin-left: 1.125em; -} - -dl dd:last-child, dl dd:last-child > :last-child { - margin-bottom: 0; -} - -ol > li p, ul > li p, ul dd, ol dd, .olist .olist, .ulist .ulist, .ulist .olist, .olist .ulist { - margin-bottom: 0.625em; -} - -ul.unstyled, ol.unnumbered, ul.checklist, ul.none { - list-style-type: none; -} - -ul.unstyled, ol.unnumbered, ul.checklist { - margin-left: 0.625em; -} - -ul.checklist li > p:first-child > i[class^="icon-check"]:first-child, ul.checklist li > p:first-child > input[type="checkbox"]:first-child { - margin-right: 0.25em; -} - -ul.checklist li > p:first-child > input[type="checkbox"]:first-child { - position: relative; - top: 1px; -} - -ul.inline { - margin: 0 auto 0.625em auto; - margin-left: -1.375em; - margin-right: 0; - padding: 0; - list-style: none; - overflow: hidden; -} - -ul.inline > li { - list-style: none; - float: left; - margin-left: 1.375em; - display: block; -} - -ul.inline > li > * { - display: block; -} - -.unstyled dl dt { - font-weight: normal; - font-style: normal; -} - -ol.arabic { - list-style-type: decimal; -} - -ol.decimal { - list-style-type: decimal-leading-zero; -} - -ol.loweralpha { - list-style-type: lower-alpha; -} - -ol.upperalpha { - list-style-type: upper-alpha; -} - -ol.lowerroman { - list-style-type: lower-roman; -} - -ol.upperroman { - list-style-type: upper-roman; -} - -ol.lowergreek { - list-style-type: lower-greek; -} - -.hdlist > table, .colist > table { - border: 0; - background: none; -} - -.hdlist > table > tbody > tr, .colist > table > tbody > tr { - background: none; -} - -td.hdlist1 { - padding-right: .75em; - font-weight: bold; -} - -td.hdlist1, td.hdlist2 { - vertical-align: top; -} - -.literalblock + .colist, .listingblock + .colist { - margin-top: -0.5em; -} - -.colist > table tr > td:first-of-type { - padding: 0 .75em; -} - -.colist > table tr > td:last-of-type { - padding: 0.25em 0; -} - -.qanda > ol > li > p > em:only-child { - color: #063f40; -} - -.thumb, .th { - line-height: 0; - display: inline-block; - border: solid 4px white; - -webkit-box-shadow: 0 0 0 1px #dddddd; - box-shadow: 0 0 0 1px #dddddd; -} - -.imageblock.left, .imageblock[style*="float: left"] { - margin: 0.25em 0.625em 1.25em 0; -} - -.imageblock.right, .imageblock[style*="float: right"] { - margin: 0.25em 0 1.25em 0.625em; -} - -.imageblock > .title { - margin-bottom: 0; -} - -.imageblock.thumb, .imageblock.th { - border-width: 6px; -} - -.imageblock.thumb > .title, .imageblock.th > .title { - padding: 0 0.125em; -} - -.image.left, .image.right { - margin-top: 0.25em; - margin-bottom: 0.25em; - display: inline-block; - line-height: 0; -} - -.image.left { - margin-right: 0.625em; -} - -.image.right { - margin-left: 0.625em; -} - -a.image { - text-decoration: none; -} - -span.footnote, span.footnoteref { - vertical-align: super; - font-size: 0.875em; -} - -span.footnote a, span.footnoteref a { - text-decoration: none; -} - -#footnotes { - padding-top: 0.75em; - padding-bottom: 0.75em; - margin-bottom: 0.625em; -} - -#footnotes hr { - 
width: 20%; - min-width: 6.25em; - margin: -.25em 0 .75em 0; - border-width: 1px 0 0 0; -} - -#footnotes .footnote { - padding: 0 0.375em; - font-size: 0.875em; - margin-left: 1.2em; - text-indent: -1.2em; - margin-bottom: .2em; -} - -#footnotes .footnote a:first-of-type { - font-weight: bold; - text-decoration: none; -} - -#footnotes .footnote:last-of-type { - margin-bottom: 0; -} - -#content #footnotes { - margin-top: -0.625em; - margin-bottom: 0; - padding: 0.75em 0; -} - -.gist .file-data > table { - border: none; - background: #fff; - width: 100%; - margin-bottom: 0; -} - -.gist .file-data > table td.line-data { - width: 99%; -} - -div.unbreakable { - page-break-inside: avoid; -} - -.big { - font-size: larger; -} - -.small { - font-size: smaller; -} - -.underline { - text-decoration: underline; -} - -.overline { - text-decoration: overline; -} - -.line-through { - text-decoration: line-through; -} - -.aqua { - color: #00bfbf; -} - -.aqua-background { - background-color: #00fafa; -} - -.black { - color: black; -} - -.black-background { - background-color: black; -} - -.blue { - color: #0000bf; -} - -.blue-background { - background-color: #0000fa; -} - -.fuchsia { - color: #bf00bf; -} - -.fuchsia-background { - background-color: #fa00fa; -} - -.gray { - color: #606060; -} - -.gray-background { - background-color: #7d7d7d; -} - -.green { - color: #006000; -} - -.green-background { - background-color: #007d00; -} - -.lime { - color: #00bf00; -} - -.lime-background { - background-color: #00fa00; -} - -.maroon { - color: #600000; -} - -.maroon-background { - background-color: #7d0000; -} - -.navy { - color: #000060; -} - -.navy-background { - background-color: #00007d; -} - -.olive { - color: #606000; -} - -.olive-background { - background-color: #7d7d00; -} - -.purple { - color: #600060; -} - -.purple-background { - background-color: #7d007d; -} - -.red { - color: #bf0000; -} - -.red-background { - background-color: #fa0000; -} - -.silver { - color: #909090; -} - -.silver-background { - background-color: #bcbcbc; -} - -.teal { - color: #006060; -} - -.teal-background { - background-color: #007d7d; -} - -.white { - color: #bfbfbf; -} - -.white-background { - background-color: #fafafa; -} - -.yellow { - color: #bfbf00; -} - -.yellow-background { - background-color: #fafa00; -} - -span.icon > [class^="icon-"], span.icon > [class*=" icon-"] { - cursor: default; -} - -.admonitionblock td.icon [class^="icon-"]:before { - font-size: 2.5em; - text-shadow: 1px 1px 2px rgba(0, 0, 0, 0.5); - cursor: default; -} - -.admonitionblock td.icon .icon-note:before { - content: "\f05a"; - color: #095557; - color: #064042; -} - -.admonitionblock td.icon .icon-tip:before { - content: "\f0eb"; - text-shadow: 1px 1px 2px rgba(155, 155, 0, 0.8); - color: #111; -} - -.admonitionblock td.icon .icon-warning:before { - content: "\f071"; - color: #bf6900; -} - -.admonitionblock td.icon .icon-caution:before { - content: "\f06d"; - color: #bf3400; -} - -.admonitionblock td.icon .icon-important:before { - content: "\f06a"; - color: #bf0000; -} - -.conum { - display: inline-block; - color: white !important; - background-color: #211306; - -webkit-border-radius: 100px; - border-radius: 100px; - text-align: center; - width: 20px; - height: 20px; - font-size: 12px; - font-weight: bold; - line-height: 20px; - font-family: Arial, sans-serif; - font-style: normal; - position: relative; - top: -2px; - letter-spacing: -1px; -} - -.conum * { - color: white !important; -} - -.conum + b { - display: none; -} - -.conum:after { - 
content: attr(data-value); -} - -.conum:not([data-value]):empty { - display: none; -} - -body { - padding-top: 60px; -} - -#toc.toc2 ul ul { - padding-left: 1em; -} -#toc.toc2 ul ul.sectlevel2 { -} - -#toctitle { - color: #34302d; - display: none; -} - -#header h1 { - font-weight: bold; - position: relative; - left: -0.0625em; -} - -#header h1 span.lo { - color: #dc9424; -} - -#content h2, #content h3, #content #toctitle, #content .sidebarblock > .content > .title, #content h4, #content h5, #content #toctitle { - font-weight: normal; - position: relative; - left: -0.0625em; -} - -#content h2 { - font-weight: bold; -} - -.literalblock .content pre.highlight, .listingblock .content pre.highlight { - background-color:#f2f2f2; -} - -.admonitionblock > table td.content { - border-color: #e6dfd8; -} - -table.tableblock.grid-all { - -webkit-border-radius: 0; - border-radius: 0; -} - -#footer { - background-color: #while; - color: #34302d; -} - -.imageblock .title { - text-align: center; -} - -#content h1.sect0 { - font-size: 48px; -} - -#toc > ul > li > a { - font-size: large; -} From 2905ac113afb690895bc07424c8208975b370351 Mon Sep 17 00:00:00 2001 From: Igor Malinovskiy Date: Wed, 21 Aug 2024 18:13:40 +0200 Subject: [PATCH 10/12] Fix spelling --- .github/wordlist.txt | 170 +++++++++++++++++++++++++++++++ docs/advanced-usage.md | 10 +- docs/redis-command-interfaces.md | 6 +- 3 files changed, 178 insertions(+), 8 deletions(-) diff --git a/.github/wordlist.txt b/.github/wordlist.txt index 2714717cb1..5e1f9d0e49 100644 --- a/.github/wordlist.txt +++ b/.github/wordlist.txt @@ -80,3 +80,173 @@ DnsResolver dnsResolver evalReadOnly gg +ACL +AOT +APIs +API’s +Akka +Async +AsyncCommand +Asynchronicity +Backpressure +CamelCase +Charset +ClientResources +CommandLatencyCollector +CommandWrapper +CompletionStage +Config +Coroutine +Coroutines +Customizer +DNS +DSL +EPoll +ElastiCache +EventExecutorGroup +EventLoop +EventLoopGroup +EventPublisher +Failover +GZIP +Graal +GraalVM +Graal’s +HdrHistogram +IP’s +Iterable +JDK +JFR +JIT +JNI +KeyStreamingChannel +KeyValueStreamingChannel +Kops +Kqueue +Kryo +LatencyUtils +Luascripts +MasterReplica +Misconfiguring +Mult +NIO +Netty’s +NodeSelection +OpenSSL +PEM +POSIX +Plaintext +RTT +Reconnection +RedisClient +RedisClusterClient +RedisURIs +RxJava +SHA +SPI +ScoredValueStreamingChannel +Serializer +Sharded +Sharding +SomeClient +StartTLS +StreamingChannel +StreamingChannels +SubstrateVM +TCP +TLS +TimedScheduler +TransactionalCommand +URIs +Un +ValueStreamingChannel +aggregable +amongst +analytics +args +assignability +async +asynchronicity +backoff +backpressure +boolean +broadcasted +bytecode +cancelation +channelId +charset +classpath +codecs +config +coroutines +customizable +customizer +dataset +deserialization +desynchronize +desynchronizes +encodings +epId +epoll +executables +extensibility +failover +fromExecutor +gradle +hasNext +hostnames +idempotency +integrations +interoperable +interoperate +invoker +json +keyspace +kotlinx +kqueue +latencies +lifecycle +localhost +macOS +microservices +misconfiguration +multithreaded +natively +netty’s +newSingle +nodeId +nodeIds +nodeId’s +nullability +onCompleted +onError +onNext +oss +parametrized +pipelining +pluggable +pre +preconfigured +predefine +reconnection +redirections +replicaN +retrigger +runtimes +se +sharding +stateful +subclasses +subcommand +synthetization +th +throwable +topologies +transcoding +typesafe +un +unconfigured +unix +uring +whitespace +xml diff --git a/docs/advanced-usage.md 
b/docs/advanced-usage.md
index cf982ca0a7..f0f192ba9d 100644
--- a/docs/advanced-usage.md
+++ b/docs/advanced-usage.md
@@ -349,7 +349,7 @@ canceled when a reconnect fails within the activation sequence. The reconnect itself has two phases: Socket connection and protocol/connection activation. In case a connect timeout occurs, a connection reset, host lookup fails, this does not affect the
-cancelation of commands. In contrast, where the protocol/connection
+cancellation of commands. In contrast, where the protocol/connection
activation fails due to SSL errors or PING before activating connection failure, queued commands are canceled.
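The hunk above belongs to the section on which queued commands get canceled when the reconnect or the protocol/connection activation fails. As a rough sketch of how that behavior can be tuned (the option names are assumptions that depend on the Lettuce version in use, and `redis://localhost` is a placeholder), `ClientOptions` controls auto-reconnect, disconnected behavior and command timeouts:

``` java
import java.time.Duration;

import io.lettuce.core.ClientOptions;
import io.lettuce.core.RedisClient;
import io.lettuce.core.TimeoutOptions;

public class ReconnectBehaviorSketch {

    public static void main(String[] args) {

        ClientOptions options = ClientOptions.builder()
                .autoReconnect(true) // reconnect automatically after the transport drops (default behavior)
                .disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS) // fail fast instead of buffering
                .timeoutOptions(TimeoutOptions.enabled(Duration.ofSeconds(10))) // cap how long issued commands may wait
                .build();

        RedisClient client = RedisClient.create("redis://localhost"); // placeholder endpoint
        client.setOptions(options);

        // connect(), run commands, then release resources
        client.shutdown();
    }
}
```

Rejecting commands while disconnected trades buffering for fast failure, which leaves fewer queued commands to cancel when the activation phase fails.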

@@ -811,7 +811,7 @@ Netty provides three platform-specific JNI transports: - io_uring on Linux (Incubator) -- kqueue on MacOS/BSD +- kqueue on macOS/BSD Lettuce defaults to native transports if the appropriate library is available within its runtime. Using a native transport adds features @@ -850,7 +850,7 @@ Native transports are available with: ``` -- MacOS **kqueue** x86_64 systems with a minimum netty version of +- macOS **kqueue** x86_64 systems with a minimum netty version of `4.1.11.Final`, requiring `netty-transport-native-kqueue`, classifier `osx-x86_64` @@ -2162,7 +2162,7 @@ Those cover Lettuce operations for `RedisClient` and `RedisClusterClient`. Depending on your configuration you might need additional configuration -for Netty, HdrHistorgram (metrics collection), Reactive Libraries, and +for Netty, HdrHistogram (metrics collection), Reactive Libraries, and dynamic Redis Command interfaces. ### HdrHistogram/Command Latency Metrics @@ -2526,7 +2526,7 @@ already executed; only the result is not available. These errors are caused mostly due to a wrong implementation. The result of a command, which cannot be *decoded* is that the command gets canceled, and the causing `Exception` is available in the result. The command is cleared -from the response queue, and the connection stays useable. +from the response queue, and the connection stays usable. In general, when `Errors` occur while operating on a connection, you should close the connection and use a new one. Connections, that diff --git a/docs/redis-command-interfaces.md b/docs/redis-command-interfaces.md index e6252aa902..6a644bdc85 100644 --- a/docs/redis-command-interfaces.md +++ b/docs/redis-command-interfaces.md @@ -199,7 +199,7 @@ interface MixedCommands extends Commands { } ``` -You can choose amongst multiple strategies: +You can choose among multiple strategies: - `SPLIT`: Splits camel-case method names into multiple command segments: `clientSetname` executes `CLIENT SETNAME`. This is the @@ -329,7 +329,7 @@ Built-in parameter types: - types implementing `io.lettuce.core.CompositeParameter` - Lettuce comes with a set of command argument types such as `BitFieldArgs`, `SetArgs`, `SortArgs`, … that can be used as parameter. Providing - `CompositeParameter` will ontribute multiple command arguments by + `CompositeParameter` will contribute multiple command arguments by invoking the `CompositeParameter.build(CommandArgs)` method. - `Value`, `KeyValue`, and `ScoredValue` that are encoded to their @@ -386,7 +386,7 @@ Another aspect of command methods is their response type. Redis command responses consist of simple strings, bulk strings (byte streams) or arrays with nested elements depending on the issued command. -You can choose amongst various return types that map to a particular +You can choose among various return types that map to a particular {custom-commands-command-output-link}. 
A command output can return either its return type directly (`List` for `StringListOutput`) or stream individual elements (`String` for `StringListOutput` as it From 0a65e2465894d586badcac033afbad09ea777094 Mon Sep 17 00:00:00 2001 From: Igor Malinovskiy Date: Wed, 21 Aug 2024 18:26:32 +0200 Subject: [PATCH 11/12] Another attempt to fix spelling --- .github/wordlist.txt | 9 +++++---- docs/advanced-usage.md | 6 +++--- docs/integration-extension.md | 2 +- docs/new-features.md | 2 +- docs/user-guide/pubsub.md | 2 +- 5 files changed, 11 insertions(+), 10 deletions(-) diff --git a/.github/wordlist.txt b/.github/wordlist.txt index 5e1f9d0e49..97da8e4180 100644 --- a/.github/wordlist.txt +++ b/.github/wordlist.txt @@ -111,9 +111,9 @@ Failover GZIP Graal GraalVM -Graal’s +Graal's HdrHistogram -IP’s +IPs Iterable JDK JFR @@ -193,6 +193,7 @@ extensibility failover fromExecutor gradle +Graal's hasNext hostnames idempotency @@ -212,11 +213,11 @@ microservices misconfiguration multithreaded natively -netty’s +netty's newSingle nodeId nodeIds -nodeId’s +nodeId's nullability onCompleted onError diff --git a/docs/advanced-usage.md b/docs/advanced-usage.md index f0f192ba9d..23c3528de4 100644 --- a/docs/advanced-usage.md +++ b/docs/advanced-usage.md @@ -219,7 +219,7 @@ configure a different DNS resolver. Lettuce comes with or custom DNS servers without caching of results so each hostname lookup yields in a DNS lookup.

Since 4.4: Defaults to DnsResolvers.UNRESOLVED to use
-netty’s AddressResolver that resolves DNS names on
+netty's AddressResolver that resolves DNS names on
Bootstrap.connect() (requires netty 4.1)
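A minimal sketch of wiring a `DnsResolver` into `ClientResources`, assuming a Lettuce version where `DnsResolvers` and the `dnsResolver(...)` builder method are still available (newer releases favor netty's `AddressResolverGroup`); the URI is a placeholder:

``` java
import io.lettuce.core.RedisClient;
import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;
import io.lettuce.core.resource.DnsResolvers;

public class DnsResolverSketch {

    public static void main(String[] args) {

        // UNRESOLVED defers name resolution to netty's AddressResolver at connect time,
        // JVM_DEFAULT performs InetAddress-based lookups up front.
        ClientResources resources = DefaultClientResources.builder()
                .dnsResolver(DnsResolvers.UNRESOLVED)
                .build();

        RedisClient client = RedisClient.create(resources, "redis://localhost"); // placeholder URI

        // connect(), run commands, then release resources in this order
        client.shutdown();
        resources.shutdown(); // shared resources must be shut down by their owner
    }
}
```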

@@ -674,7 +674,7 @@ output.

obstruction:

MOVED/ASK redirection but the cluster topology view is stale Connecting to cluster nodes using different -IP’s/hostnames (e.g. private/public IP’s)

+IPs/hostnames (e.g. private/public IPs)

Connecting to non-cluster members to reconfigure those while using the RedisClusterClient connection.

@@ -2137,7 +2137,7 @@ less total CPU usage. ### Building Native Images Native images assume a closed world principle in which all code needs to -be known at the time the native image is built. Graal’s SubstrateVM +be known at the time the native image is built. Graal's SubstrateVM analyzes class files during native image build-time to determine what bytecode needs to be translated into a native image. While this task can be achieved to a good extent by analyzing static bytecode, it’s harder diff --git a/docs/integration-extension.md b/docs/integration-extension.md index 22e0847477..d58004ca69 100644 --- a/docs/integration-extension.md +++ b/docs/integration-extension.md @@ -77,7 +77,7 @@ underlying bytes. The `byte[]` interface of Lettuce 3.x required the user to provide an array with the exact data for interchange. So if you have an array where you want to use only a subset, you’re required to create a new instance of a byte array and copy the data. The same -applies if you have a different byte source (e.g. netty’s `ByteBuf` or +applies if you have a different byte source (e.g. netty's `ByteBuf` or an NIO `ByteBuffer`). The `ByteBuffer`s for decoding are pointers to the underlying data. `ByteBuffer`s for encoding data can be either pure pointers or allocated memory. Lettuce does not free any memory (such as diff --git a/docs/new-features.md b/docs/new-features.md index bd5abe02fa..5b02a98533 100644 --- a/docs/new-features.md +++ b/docs/new-features.md @@ -45,7 +45,7 @@ - Configuration of extended Keep-Alive options through `KeepAliveOptions` (only available for some transports/Java versions). -- Configuration of netty’s `AddressResolverGroup` through +- Configuration of netty's `AddressResolverGroup` through `ClientResources`. Uses `DnsAddressResolverGroup` when `netty-resolver-dns` is on the classpath. diff --git a/docs/user-guide/pubsub.md b/docs/user-guide/pubsub.md index 0680ad7eb2..96186f4103 100644 --- a/docs/user-guide/pubsub.md +++ b/docs/user-guide/pubsub.md @@ -4,7 +4,7 @@ Lettuce provides support for Publish/Subscribe on Redis Standalone and Redis Cluster connections. The connection is notified on message/subscribed/unsubscribed events after subscribing to channels or patterns. [Synchronous](connecting-redis.md#basic-usage), [asynchronous](async-api.md) -and [reactive](reactive-api.md) API’s are provided to interact with Redis +and [reactive](reactive-api.md) APIs are provided to interact with Redis Publish/Subscribe features. ### Subscribing From f4c18073174ebbeeaf42ce607fec5f12671148fa Mon Sep 17 00:00:00 2001 From: Igor Malinovskiy Date: Wed, 21 Aug 2024 18:28:09 +0200 Subject: [PATCH 12/12] Build docs only from the main branch --- .github/workflows/docs.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/docs.yml b/.github/workflows/docs.yml index 14d633f787..9d8400e426 100644 --- a/.github/workflows/docs.yml +++ b/.github/workflows/docs.yml @@ -1,7 +1,7 @@ name: Publish Docs on: push: - branches: ["main", "markdown_docs"] + branches: ["main"] permissions: contents: read pages: write
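The last documentation hunk above touches the Pub/Sub page, which lists synchronous, asynchronous and reactive APIs. A small synchronous sketch of subscribing and receiving messages (the URI and channel name are placeholders, and the listener type names assume a current Lettuce version):

``` java
import io.lettuce.core.RedisClient;
import io.lettuce.core.pubsub.RedisPubSubAdapter;
import io.lettuce.core.pubsub.StatefulRedisPubSubConnection;

public class PubSubSketch {

    public static void main(String[] args) throws InterruptedException {

        RedisClient client = RedisClient.create("redis://localhost"); // placeholder URI

        StatefulRedisPubSubConnection<String, String> connection = client.connectPubSub();

        // Print every message received on subscribed channels.
        connection.addListener(new RedisPubSubAdapter<String, String>() {
            @Override
            public void message(String channel, String message) {
                System.out.printf("channel=%s message=%s%n", channel, message);
            }
        });

        connection.sync().subscribe("notifications"); // example channel name

        Thread.sleep(10_000); // keep the process alive long enough to observe messages

        connection.close();
        client.shutdown();
    }
}
```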