
Race condition when scheduling commands with withoutOverlapping() using Redis/Memcached #50330

Open
nickma42 opened this issue Mar 1, 2024 · 5 comments


nickma42 commented Mar 1, 2024

Laravel Version

10.46.0

PHP Version

8.1.27

Database Driver & Version

No response

Description

We've identified an issue in multi-server environments: when commands are scheduled with withoutOverlapping() and onOneServer(), they will very occasionally not run on any server at all (about 0.2% of the time in our environment). This isn't a big problem for commands running every minute, which is where we first spotted it, but it's more impactful when a command that runs only a few times or once per day is skipped.

I'll start by outlining my understanding of how scheduled commands get run when withoutOverlapping() and onOneServer() are used:

  • Commands due to run are iterated over (ScheduleRunCommand::handle, L121)
  • Each command is checked to ensure filters pass before proceeding (ScheduleRunCommand::handle, L122)
    • Because we used withoutOverlapping() a check is added to the filters to ensure the event mutex lock does not currently exist (Event::withoutOverlapping, L713)
  • For each command that passes the filters (i.e. considered to be not currently running), serverShouldRun() is then called to check for and take a scheduling lock for the current server to run the command for the current hour/minute (ScheduleRunCommand::runSingleServerEvent, L156)
  • Once confirmed that the event is not currently running and the server has the lock to run the event for the current schedule run, Event::run is called (Event::run, L217)
  • As a final check before the event runs, shouldSkipDueToOverlapping() is called which checks whether withoutOverlapping() was used and checks that the event lock can be obtained (Event::shouldSkipDueToOverlapping, L237)

Just for clarity, there are two locks in play:

  • The event lock which is used by withoutOverlapping() to identify whether a command is already being run.
  • The scheduling lock used by onOneServer() to check whether a command has already been run on a server for the current schedule run (identified by the current hour and minute).
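For reference, a schedule entry that combines the two modifiers looks something like this (the command name is illustrative, not from the report):

```php
// In app/Console/Kernel.php — an illustrative schedule entry combining
// both modifiers discussed above.
protected function schedule(Schedule $schedule): void
{
    $schedule->command('reports:generate')
        ->dailyAt('02:00')
        ->withoutOverlapping() // guarded by the event lock
        ->onOneServer();       // guarded by the scheduling lock
}
```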

The problem stems from the way in which the event mutex lock is checked for existence, which was introduced in PR #45963.

Originally, the CacheEventMutex would just check whether the cache had an entry for the lock (CacheEventMutex::exists, L69), which worked fine, but PR #45963 changed that behavior. For cache stores that implement the LockProvider interface (Redis, Memcached), it now attempts to acquire the lock and then immediately releases it again (and this is not done atomically).
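The post-#45963 check looks approximately like this (paraphrased for illustration, not the exact framework source):

```php
public function exists(Event $event)
{
    $store = $this->cache->store($this->store)->getStore();

    if ($store instanceof LockProvider) {
        // Acquire the lock, run the callback, then release it in a finally
        // block. If acquisition succeeds, the event is "not running" — but
        // for the duration of the callback the lock IS held, and the
        // acquire/release pair is not atomic as an existence check.
        return ! $store->lock($event->mutexName(), $event->expiresAt * 60)
            ->get(fn () => true);
    }

    // Non-LockProvider stores keep the original read-only check.
    return $this->cache->store($this->store)->has($event->mutexName());
}
```

The window between acquire and release is what the race below exploits.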

So the race condition looks something like this if Server B's system clock is just slightly slower than Server A's:

| Time | Server A | Server B |
| --- | --- | --- |
| 12:00:00 | Checking that Event A is not currently running; takes event lock for Event A | |
| 12:00:01 | Releases event lock for Event A, determining that it is not currently running so should be run | |
| 12:00:02 | Takes scheduling lock for Event A to run on Server A | |
| 12:00:03 | | Checking that Event A is not currently running; takes event lock for Event A |
| 12:00:04 | Attempts to take the event lock for Event A to run it; fails because Server B has the lock; skips due to overlapping | |
| 12:00:05 | | Releases event lock for Event A, determining that it is not currently running so should be run |
| 12:00:06 | | Attempts to take scheduling lock for Event A to run on Server B; fails as the scheduling lock was already taken by Server A; skips, assuming the command was already run on another server |
So in this scenario the scheduled command is never run, there's no error message, and the ScheduledTaskSkipped event doesn't even get fired, because the call to run() just returns if shouldSkipDueToOverlapping() returns true.

In terms of a solution, it might make sense to be able to check for the existence of a lock without taking and releasing it. One option would be a new abstract exists() method on Lock (https://github.com/laravel/framework/blob/10.x/src/Illuminate/Cache/Lock.php), which CacheEventMutex could use and which RedisLock/MemcachedLock could implement with their own atomic means of checking for lock existence, although that probably constitutes a backwards-compatibility break. Happy to put together a PR for whatever the agreed solution is.

As a workaround, I've bound an older version of CacheEventMutex to Illuminate\Console\Scheduling\EventMutex to override the changes introduced in #45963, which has solved the issue in our project for now.
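A minimal sketch of that workaround, assuming a local LegacyCacheEventMutex class (hypothetical name) that reimplements the pre-#45963 has()-based check:

```php
// In a service provider's register() method. LegacyCacheEventMutex is an
// assumed app-local class copied from the framework's older CacheEventMutex.
use Illuminate\Console\Scheduling\EventMutex;
use App\Scheduling\LegacyCacheEventMutex;

$this->app->bind(EventMutex::class, function ($app) {
    // The mutex takes the cache factory, same as the framework's own binding.
    return new LegacyCacheEventMutex($app->make('cache'));
});
```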

Steps To Reproduce

Given it's a race condition, it's very hard to reproduce, but hopefully there's enough detail in the description to understand what's going on.

github-actions bot commented Mar 4, 2024

Thank you for reporting this issue!

As Laravel is an open source project, we rely on the community to help us diagnose and fix issues as it is not possible to research and fix every issue reported to us via GitHub.

If possible, please make a pull request fixing the issue you have described, along with corresponding tests. All pull requests are promptly reviewed by the Laravel team.

Thank you!

@iBotPeaches
Contributor

Just wanted to say thanks for the highly detailed report. I was at the very beginning of researching what I believe is this same issue. Since Sentry now provides Cron check-in reports, I was digging into a random problem of jobs not checking in.

[Screenshot: Sentry Cron check-in report showing missed job check-ins, 2024-03-04]

Having read this issue: we indeed use Redis with both withoutOverlapping() and onOneServer(), so I think you've saved me all that research.

@johanrosenson
Contributor

> In terms of a solution, it might make sense to be able to check for the existence of a lock without taking and releasing it, maybe a new abstract exists() on Lock (https://github.com/laravel/framework/blob/10.x/src/Illuminate/Cache/Lock.php) which CacheEventMutex could use and RedisLock/MemcachedLock could implement their own atomic means of checking for lock existence, although that probably constitutes a backwards-compatibility break. Happy to put together a PR for whatever the agreed solution is.

I'm the author of #45963 and I would be very happy if we can find a way to add a lockExists method to the LockProvider contract, I was never very happy with the way I did the exists check.

In terms of backwards-compatibility, the release of Laravel 11 is only a few days away, maybe such a bc breaking fix could be added for Laravel 11 now at the last moment?

This is my suggestion for a lockExists method on RedisStore (the same approach should work for Memcached too):

    public function lockExists($name): bool
    {
        $lockName = $this->prefix.$name;

        return $this->lockConnection()->get($lockName) !== null;

        // Alternate version (Redis only):
        // return (bool) $this->lockConnection()->eval(
        //     "return redis.call('exists', KEYS[1])", 1, $lockName,
        // );
    }

Usage in CacheEventMutex.php

    public function exists(Event $event)
    {
        if ($this->cache->store($this->store)->getStore() instanceof LockProvider) {
            return $this->cache->store($this->store)->getStore()
                ->lockExists($event->mutexName());
        }

        return $this->cache->store($this->store)->has($event->mutexName());
    }

This would at least solve the issue of the lock being acquired just to be immediately released.

@AlexKapustinZitcha

I hope you are not serious about lockExists. This will inevitably lead to another race condition.
Imagine you check for the lock while, in parallel (with a few ms delay), a request comes in to acquire it.
By the time you get your result the lock will already exist, but your information is outdated.
You should use an atomic operation to acquire and check locks.
For example, increment: if the increment returns 1, then you are the lock owner; otherwise you are not. The lock owner should be responsible for deleting the key in order to release the lock.
Cheers
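The increment approach described above could be sketched like this (function names are illustrative, not a framework API):

```php
use Illuminate\Support\Facades\Redis;

// INCR is atomic in Redis: exactly one caller will see the value 1,
// so only that caller becomes the lock owner.
function acquireLock(string $key, int $ttlSeconds): bool
{
    if (Redis::incr($key) === 1) {
        // Add a TTL so a crashed owner doesn't leave the lock held forever.
        Redis::expire($key, $ttlSeconds);

        return true;
    }

    return false;
}

// Only the owner should call this to release the lock.
function releaseLock(string $key): void
{
    Redis::del($key);
}
```

One caveat with this sketch: if the process dies between incr and expire, the key is left with no TTL; a production version would set the value and TTL atomically (e.g. SET with NX and EX, or a Lua script).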

@vkulov

vkulov commented Jul 4, 2024

Actually, the solution suggested by @johanrosenson would work fine. While the information could indeed be outdated, it won't cause deadlocks, overlaps, or missed runs.
