Commit
fix: PR fixes
Co-authored-by: Michał Olender <92638966+TC-MO@users.noreply.github.com>
drobnikj and TC-MO authored Dec 10, 2024
1 parent 983ad20 commit 8e1f283
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions sources/platform/storage/request_queue.md
@@ -407,15 +407,15 @@ You can lock a request so that no other clients receive it when they fetch the q
This feature is seamlessly integrated into Crawlee, requiring minimal extra setup. By default, requests are locked for the same duration as the timeout for processing requests in the crawler ([`requestHandlerTimeoutSecs`](https://crawlee.dev/api/next/basic-crawler/interface/BasicCrawlerOptions#requestHandlerTimeoutSecs)).
If the Actor processing the request fails, the lock expires, and the request is processed again eventually. For more details, refer to the [Crawlee documentation](https://crawlee.dev/docs/next/experiments/experiments-request-locking).

- In the following example, we demonstrate how we can use locking mechanisms to avoid concurrent processing of the same request across multiple Actor runs.
+ In the following example, we demonstrate how you can use locking mechanisms to avoid concurrent processing of the same request across multiple Actor runs.

:::info
The lock mechanism works on the client level, as well as the run level, when running the Actor on the Apify platform.

This means you can unlock or prolong the lock on a locked request only if:

- 1. You are using the same client key, or
- 2. The operation is being called from the same Actor run.
+ - You are using the same client key, or
+ - The operation is being called from the same Actor run.

:::

