diff --git a/.github/spellcheck-settings.yml b/.github/spellcheck-settings.yml new file mode 100644 index 0000000000..96abbe6da8 --- /dev/null +++ b/.github/spellcheck-settings.yml @@ -0,0 +1,29 @@ +matrix: +- name: Markdown + expect_match: false + aspell: + lang: en + d: en_US + ignore-case: true + dictionary: + wordlists: + - .github/wordlist.txt + output: wordlist.dic + pipeline: + - pyspelling.filters.markdown: + markdown_extensions: + - markdown.extensions.extra: + - pyspelling.filters.html: + comments: false + attributes: + - alt + ignores: + - ':matches(code, pre)' + - code + - pre + - blockquote + - img + sources: + - '*.md' + - 'docs/*.rst' + - 'docs/*.ipynb' diff --git a/.github/wordlist.txt b/.github/wordlist.txt new file mode 100644 index 0000000000..be16c437ff --- /dev/null +++ b/.github/wordlist.txt @@ -0,0 +1,142 @@ +APM +ARGV +BFCommands +CFCommands +CMSCommands +ClusterNode +ClusterNodes +ClusterPipeline +ClusterPubSub +ConnectionPool +CoreCommands +EVAL +EVALSHA +GraphCommands +Grokzen's +INCR +IOError +Instrumentations +JSONCommands +Jaeger +Ludovico +Magnocavallo +McCurdy +NOSCRIPT +NUMPAT +NUMPT +NUMSUB +OSS +OpenCensus +OpenTelemetry +OpenTracing +Otel +PubSub +READONLY +RediSearch +RedisBloom +RedisCluster +RedisClusterCommands +RedisClusterException +RedisClusters +RedisGraph +RedisInstrumentor +RedisJSON +RedisTimeSeries +SHA +SearchCommands +SentinelCommands +SentinelConnectionPool +Sharded +Solovyov +SpanKind +Specfiying +StatusCode +TCP +TOPKCommands +TimeSeriesCommands +Uptrace +ValueError +WATCHed +WatchError +api +args +async +asyncio +autoclass +automodule +backoff +bdb +behaviour +bool +boolean +booleans +bysource +charset +del +dev +eg +exc +firsttimersonly +fo +genindex +gmail +hiredis +http +idx +iff +ini +json +keyslot +keyspace +kwarg +linters +localhost +lua +makeapullrequest +maxdepth +mget +microservice +microservices +mset +multikey +mykey +nonatomic +observability +opentelemetry +oss +performant +pmessage +png +pre 
+psubscribe +pubsub +punsubscribe +py +pypi +quickstart +readonly +readwrite +redis +redismodules +reinitialization +replicaof +repo +runtime +sedrik +sharded +ssl +str +stunnel +subcommands +thevalueofmykey +timeseries +toctree +topk +tox +triaging +txt +un +unicode +url +virtualenv +www diff --git a/.github/workflows/spellcheck.yml b/.github/workflows/spellcheck.yml new file mode 100644 index 0000000000..e152841553 --- /dev/null +++ b/.github/workflows/spellcheck.yml @@ -0,0 +1,14 @@ +name: spellcheck +on: + pull_request: +jobs: + check-spelling: + runs-on: ubuntu-latest + steps: + - name: Checkout + uses: actions/checkout@v3 + - name: Check Spelling + uses: rojopolis/spellcheck-github-actions@0.33.1 + with: + config_path: .github/spellcheck-settings.yml + task_name: Markdown diff --git a/README.md b/README.md index 67912eb3ef..f8c3a78ae7 100644 --- a/README.md +++ b/README.md @@ -44,13 +44,13 @@ Looking for a high-level library to handle object mapping? See [redis-om-python] The most recent version of this library supports redis version [5.0](https://github.com/redis/redis/blob/5.0/00-RELEASENOTES), [6.0](https://github.com/redis/redis/blob/6.0/00-RELEASENOTES), [6.2](https://github.com/redis/redis/blob/6.2/00-RELEASENOTES), and [7.0](https://github.com/redis/redis/blob/7.0/00-RELEASENOTES). -The table below higlights version compatibility of the most-recent library versions and redis versions. +The table below highlights version compatibility of the most-recent library versions and redis versions. | Library version | Supported redis versions | |-----------------|-------------------| | 3.5.3 | <= 6.2 Family of releases | | >= 4.5.0 | Version 5.0 to 7.0 | -| >= 5.0.0 | Versiond 5.0 to current | +| >= 5.0.0 | Version 5.0 to current | ## Usage diff --git a/docs/advanced_features.rst b/docs/advanced_features.rst index 5fd20c2ba2..fd29d2f684 100644 --- a/docs/advanced_features.rst +++ b/docs/advanced_features.rst @@ -37,7 +37,7 @@ the client and server. 
Pipelines are quite simple to use: -.. code:: pycon +.. code:: python >>> r = redis.Redis(...) >>> r.set('bing', 'baz') @@ -54,7 +54,7 @@ Pipelines are quite simple to use: For ease of use, all commands being buffered into the pipeline return the pipeline object itself. Therefore calls can be chained like: -.. code:: pycon +.. code:: python >>> pipe.set('foo', 'bar').sadd('faz', 'baz').incr('auto_number').execute() [True, True, 6] @@ -64,7 +64,7 @@ executed atomically as a group. This happens by default. If you want to disable the atomic nature of a pipeline but still want to buffer commands, you can turn off transactions. -.. code:: pycon +.. code:: python >>> pipe = r.pipeline(transaction=False) @@ -84,7 +84,7 @@ prior the execution of that transaction, the entire transaction will be canceled and a WatchError will be raised. To implement our own client-side INCR command, we could do something like this: -.. code:: pycon +.. code:: python >>> with r.pipeline() as pipe: ... while True: @@ -117,7 +117,7 @@ Pipeline is used as a context manager (as in the example above) reset() will be called automatically. Of course you can do this the manual way by explicitly calling reset(): -.. code:: pycon +.. code:: python >>> pipe = r.pipeline() >>> while True: @@ -137,7 +137,7 @@ that should expect a single parameter, a pipeline object, and any number of keys to be WATCHed. Our client-side INCR command above can be written like this, which is much easier to read: -.. code:: pycon +.. code:: python >>> def client_side_incr(pipe): ... current_value = pipe.get('OUR-SEQUENCE-KEY') @@ -165,7 +165,7 @@ dramatically increase the throughput of Redis Cluster by significantly reducing the number of network round trips between the client and the server. -.. code:: pycon +.. code:: python >>> with rc.pipeline() as pipe: ... pipe.set('foo', 'value1') @@ -198,7 +198,7 @@ Publish / Subscribe redis-py includes a PubSub object that subscribes to channels and listens for new messages. 
Creating a PubSub object is easy. -.. code:: pycon +.. code:: python >>> r = redis.Redis(...) >>> p = r.pubsub() @@ -206,7 +206,7 @@ listens for new messages. Creating a PubSub object is easy. Once a PubSub instance is created, channels and patterns can be subscribed to. -.. code:: pycon +.. code:: python >>> p.subscribe('my-first-channel', 'my-second-channel', ...) >>> p.psubscribe('my-*', ...) @@ -215,7 +215,7 @@ The PubSub instance is now subscribed to those channels/patterns. The subscription confirmations can be seen by reading messages from the PubSub instance. -.. code:: pycon +.. code:: python >>> p.get_message() {'pattern': None, 'type': 'subscribe', 'channel': b'my-second-channel', 'data': 1} @@ -240,7 +240,7 @@ following keys. Let's send a message now. -.. code:: pycon +.. code:: python # the publish method returns the number matching channel and pattern # subscriptions. 'my-first-channel' matches both the 'my-first-channel' @@ -256,7 +256,7 @@ Let's send a message now. Unsubscribing works just like subscribing. If no arguments are passed to [p]unsubscribe, all channels or patterns will be unsubscribed from. -.. code:: pycon +.. code:: python >>> p.unsubscribe() >>> p.punsubscribe('my-*') @@ -279,7 +279,7 @@ the message dictionary is created and passed to the message handler. In this case, a None value is returned from get_message() since the message was already handled. -.. code:: pycon +.. code:: python >>> def my_handler(message): ... print('MY HANDLER: ', message['data']) @@ -305,7 +305,7 @@ passing ignore_subscribe_messages=True to r.pubsub(). This will cause all subscribe/unsubscribe messages to be read, but they won't bubble up to your application. -.. code:: pycon +.. code:: python >>> p = r.pubsub(ignore_subscribe_messages=True) >>> p.subscribe('my-channel') @@ -325,7 +325,7 @@ to a message handler. If there's no data to be read, get_message() will immediately return None. 
This makes it trivial to integrate into an existing event loop inside your application. -.. code:: pycon +.. code:: python >>> while True: >>> message = p.get_message() @@ -339,7 +339,7 @@ your application doesn't need to do anything else but receive and act on messages received from redis, listen() is an easy way to get up an running. -.. code:: pycon +.. code:: python >>> for message in p.listen(): ... # do something with the message @@ -360,7 +360,7 @@ handlers. Therefore, redis-py prevents you from calling run_in_thread() if you're subscribed to patterns or channels that don't have message handlers attached. -.. code:: pycon +.. code:: python >>> p.subscribe(**{'my-channel': my_handler}) >>> thread = p.run_in_thread(sleep_time=0.001) @@ -374,7 +374,7 @@ appropriately. The exception handler will take as arguments the exception itself, the pubsub object, and the worker thread returned by run_in_thread. -.. code:: pycon +.. code:: python >>> p.subscribe(**{'my-channel': my_handler}) >>> def exception_handler(ex, pubsub, thread): @@ -401,7 +401,7 @@ when reconnecting. Messages that were published while the client was disconnected cannot be delivered. When you're finished with a PubSub object, call its .close() method to shutdown the connection. -.. code:: pycon +.. code:: python >>> p = r.pubsub() >>> ... @@ -410,7 +410,7 @@ object, call its .close() method to shutdown the connection. The PUBSUB set of subcommands CHANNELS, NUMSUB and NUMPAT are also supported: -.. code:: pycon +.. code:: python >>> r.pubsub_channels() [b'foo', b'bar'] @@ -421,6 +421,38 @@ supported: >>> r.pubsub_numpat() 1204 +Sharded pubsub +~~~~~~~~~~~~~~ + +`Sharded pubsub `_ is a feature introduced with Redis 7.0, and fully supported by redis-py as of 5.0. It helps scale the usage of pub/sub in cluster mode, by having the cluster shard messages to nodes that own a slot for a shard channel. Here, the cluster ensures the published shard messages are forwarded to the appropriate nodes. 
Clients subscribe to a channel by connecting to either the master responsible for the slot, or any of its replicas. + +This makes use of the `SSUBSCRIBE `_ and `SPUBLISH `_ commands within Redis. + +The following is a simplified example: + +.. code:: python + + >>> from redis.cluster import RedisCluster, ClusterNode + >>> r = RedisCluster(startup_nodes=[ClusterNode('localhost', 6379), ClusterNode('localhost', 6380)]) + >>> p = r.pubsub() + >>> p.ssubscribe('foo') + >>> # assume someone sends a message along the channel via a publish + >>> message = p.get_sharded_message() + +Similarly, the same process can be used to acquire sharded pubsub messages that have already been sent to a specific node, by passing the node to get_sharded_message: + +.. code:: python + + >>> from redis.cluster import RedisCluster, ClusterNode + >>> first_node = ClusterNode('localhost', 6379) + >>> second_node = ClusterNode('localhost', 6380) + >>> r = RedisCluster(startup_nodes=[first_node, second_node]) + >>> p = r.pubsub() + >>> p.ssubscribe('foo') + >>> # assume someone sends a message along the channel via a publish + >>> message = p.get_sharded_message(target_node=second_node) + + Monitor ~~~~~~~ @@ -428,7 +460,7 @@ redis-py includes a Monitor object that streams every command processed by the Redis server. Use listen() on the Monitor object to block until a command is received. -.. code:: pycon +.. code:: python >>> r = redis.Redis(...) >>> with r.monitor() as m: diff --git a/docs/clustering.rst b/docs/clustering.rst index 34cb7f1f69..9b4dee1c9f 100644 --- a/docs/clustering.rst +++ b/docs/clustering.rst @@ -26,7 +26,7 @@ cluster instance can be created: - Using ‘host’ and ‘port’ arguments: -.. code:: pycon +.. code:: python >>> from redis.cluster import RedisCluster as Redis >>> rc = Redis(host='localhost', port=6379) @@ -35,14 +35,14 @@ cluster instance can be created: - Using the Redis URL specification: -.. code:: pycon +.. 
code:: python >>> from redis.cluster import RedisCluster as Redis >>> rc = Redis.from_url("redis://localhost:6379/0") - Directly, via the ClusterNode class: -.. code:: pycon +.. code:: python >>> from redis.cluster import RedisCluster as Redis >>> from redis.cluster import ClusterNode @@ -77,7 +77,7 @@ you can change it using the ‘set_default_node’ method. The ‘target_nodes’ parameter is explained in the following section, ‘Specifying Target Nodes’. -.. code:: pycon +.. code:: python >>> # target-nodes: the node that holds 'foo1's key slot >>> rc.set('foo1', 'bar1') @@ -105,7 +105,7 @@ topology of the cluster changes during the execution of a command, the client will be able to resolve the nodes flag again with the new topology and attempt to retry executing the command. -.. code:: pycon +.. code:: python >>> from redis.cluster import RedisCluster as Redis >>> # run cluster-meet command on all of the cluster's nodes @@ -127,7 +127,7 @@ topology changes, a retry attempt will not be made, since the passed target node/s may no longer be valid, and the relevant cluster or connection error will be returned. -.. code:: pycon +.. code:: python >>> node = rc.get_node('localhost', 6379) >>> # Get the keys only for that specific node @@ -140,7 +140,7 @@ In addition, the RedisCluster instance can query the Redis instance of a specific node and execute commands on that node directly. The Redis client, however, does not handle cluster failures and retries. -.. code:: pycon +.. code:: python >>> cluster_node = rc.get_node(host='localhost', port=6379) >>> print(cluster_node) @@ -170,7 +170,7 @@ to the relevant slots, sending the commands to the slots’ node owners. Non-atomic operations batch the keys according to their hash value, and then each batch is sent separately to the slot’s owner. -.. code:: pycon +.. 
code:: python # Atomic operations can be used when all keys are mapped to the same slot >>> rc.mset({'{foo}1': 'bar1', '{foo}2': 'bar2'}) @@ -202,7 +202,7 @@ the commands are not currently recommended for use. See documentation `__ for more. -.. code:: pycon +.. code:: python >>> p1 = rc.pubsub() # p1 connection will be set to the node that holds 'foo' keyslot @@ -224,7 +224,7 @@ READONLY mode can be set at runtime by calling the readonly() method with target_nodes=‘replicas’, and read-write access can be restored by calling the readwrite() method. -.. code:: pycon +.. code:: python >>> from cluster import RedisCluster as Redis # Use 'debug' log level to print the node that the command is executed on diff --git a/docs/examples/asyncio_examples.ipynb b/docs/examples/asyncio_examples.ipynb index 7fdcc36bc5..f7e67e2ca7 100644 --- a/docs/examples/asyncio_examples.ipynb +++ b/docs/examples/asyncio_examples.ipynb @@ -355,7 +355,7 @@ "source": [ "import redis.asyncio as redis\n", - "url_connection = redis.from_url(\"redis://localhost:6379?decode_responses=Trueprotocol=3\")\n", + "url_connection = redis.from_url(\"redis://localhost:6379?decode_responses=True&protocol=3\")\n", "url_connection.ping()" ] } diff --git a/docs/examples/connection_examples.ipynb b/docs/examples/connection_examples.ipynb index e6d147c920..cddded2865 100644 --- a/docs/examples/connection_examples.ipynb +++ b/docs/examples/connection_examples.ipynb @@ -68,12 +68,10 @@ ] }, { - "cell_type": "code", - "execution_count": null, + "cell_type": "markdown", "metadata": {}, - "outputs": [], "source": [ - "### by default this library uses the RESP 2 protocol. To eanble RESP3, set protocol=3." + "### By default this library uses the RESP 2 protocol. To enable RESP3, set protocol=3." ] }, { diff --git a/docs/index.rst b/docs/index.rst index a6ee05e917..2c0557cbbe 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -64,15 +64,16 @@ Module Documentation .. 
toctree:: :maxdepth: 1 - backoff connections + clustering exceptions + backoff lock retry - advanced_features - clustering lua_scripting opentelemetry + resp3_features + advanced_features examples Contributing @@ -86,4 +87,4 @@ Contributing License ******* -This projectis licensed under the `MIT license `_. +This project is licensed under the `MIT license `_. diff --git a/docs/lua_scripting.rst b/docs/lua_scripting.rst index 8276dad051..0edb6b6723 100644 --- a/docs/lua_scripting.rst +++ b/docs/lua_scripting.rst @@ -24,7 +24,7 @@ The following trivial Lua script accepts two parameters: the name of a key and a multiplier value. The script fetches the value stored in the key, multiplies it with the multiplier value and returns the result. -.. code:: pycon +.. code:: python >>> r = redis.Redis() >>> lua = """ @@ -47,7 +47,7 @@ function. Script instances accept the following optional arguments: Continuing the example from above: -.. code:: pycon +.. code:: python >>> r.set('foo', 2) >>> multiply(keys=['foo'], args=[5]) @@ -60,7 +60,7 @@ executes the script and returns the result, 10. Script instances can be executed using a different client instance, even one that points to a completely different Redis server. -.. code:: pycon +.. code:: python >>> r2 = redis.Redis('redis2.example.com') >>> r2.set('foo', 3) @@ -79,7 +79,7 @@ should be passed as the client argument when calling the script. Care is taken to ensure that the script is registered in Redis's script cache just prior to pipeline execution. -.. code:: pycon +.. code:: python >>> pipe = r.pipeline() >>> pipe.set('foo', 5) diff --git a/docs/resp3_features.rst b/docs/resp3_features.rst new file mode 100644 index 0000000000..11c01985a0 --- /dev/null +++ b/docs/resp3_features.rst @@ -0,0 +1,69 @@ +RESP 3 Features +=============== + +As of version 5.0, redis-py supports the `RESP 3 standard `_. 
Practically, this means that clients using RESP 3 will be more performant, as fewer type translations occur in the client. It also means new response types like doubles, true simple strings, maps, and booleans are available. + +Connecting +----------- + +Enabling RESP 3 is no different from establishing any other connection in redis-py. In all cases, the connection must be configured by setting `protocol=3`. The following are some base examples illustrating how to enable a RESP 3 connection. + +Connect with a standard connection, specifying RESP 3: + +.. code:: python + + >>> import redis + >>> r = redis.Redis(host='localhost', port=6379, protocol=3) + >>> r.ping() + +Or using the URL scheme: + +.. code:: python + + >>> import redis + >>> r = redis.from_url("redis://localhost:6379?protocol=3") + >>> r.ping() + +Connect with async, specifying RESP 3: + +.. code:: python + + >>> import redis.asyncio as redis + >>> r = redis.Redis(host='localhost', port=6379, protocol=3) + >>> await r.ping() + +The URL scheme with the async client: + +.. code:: python + + >>> import redis.asyncio as redis + >>> r = redis.from_url("redis://localhost:6379?protocol=3") + >>> await r.ping() + +Connecting to an OSS Redis Cluster with RESP 3: + +.. code:: python + + >>> from redis.cluster import RedisCluster, ClusterNode + >>> r = RedisCluster(startup_nodes=[ClusterNode('localhost', 6379), ClusterNode('localhost', 6380)], protocol=3) + >>> r.ping() + +Push notifications +------------------ + +Push notifications are a way that redis sends out-of-band data. The RESP 3 protocol includes a `push type `_ that allows our client to intercept these out-of-band messages. By default, clients will log simple messages, but redis-py includes the ability to bring your own function processor. + +This means that, should you want to react to a given push notification, you can specify a handler function when connecting, as in this example: + +.. 
code:: python + + >>> from redis import Redis + >>> + >>> def our_func(message): + ...     if "This special thing happened" in message: + ...         raise IOError("This was the message: \n" + message) + >>> + >>> r = Redis(protocol=3) + >>> p = r.pubsub(push_handler_func=our_func) + +In the example above, upon receipt of a push notification, rather than logging the message, an IOError is raised whenever the specific text occurs. This example highlights how one could start implementing a customized message handler.
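+ +The handler above only runs against a live Redis server, but its filtering logic can be exercised on its own. A minimal, server-free sketch (the trigger text and handler name are illustrative, matching the example above): + +.. code:: python

```python
def our_func(message: str) -> None:
    # Use a substring membership test rather than str.find(): find() returns
    # -1 when the text is absent, and -1 is truthy, so `if message.find(...)`
    # would fire on almost every message.
    if "This special thing happened" in message:
        raise IOError("This was the message: \n" + message)

# Ordinary push messages pass through silently:
our_func("routine keyspace notification")

# A message containing the trigger text raises:
try:
    our_func("This special thing happened just now")
except IOError as exc:
    print("handler raised:", exc)
```

Once this behaves as expected, the same callable can be handed to `r.pubsub(push_handler_func=our_func)` as shown above.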