
fix(ludicrous): Fix logical race condition in concurrent execution of mutations #7269

Merged
merged 1 commit into master from ahsan/watermark on Jan 13, 2021

Conversation

ahsanbarkati
Contributor

@ahsanbarkati ahsanbarkati commented Jan 11, 2021

The following crash is seen in alpha if we run high-load writes in ludicrous mode for a long time.

2021/01/07 00:43:14 Name: Applied watermark doneUntil: 42339870. Index: 42339868
github.com/dgraph-io/badger/v2/y.AssertTruef
        /go/pkg/mod/github.com/dgraph-io/badger/v2@v2.0.1-rc1.0.20201214114056-bcfae6104545/y/error.go:62
github.com/dgraph-io/badger/v2/y.(*WaterMark).process.func1
        /go/pkg/mod/github.com/dgraph-io/badger/v2@v2.0.1-rc1.0.20201214114056-bcfae6104545/y/watermark.go:165
github.com/dgraph-io/badger/v2/y.(*WaterMark).process
        /go/pkg/mod/github.com/dgraph-io/badger/v2@v2.0.1-rc1.0.20201214114056-bcfae6104545/y/watermark.go:232
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1374

The error was due to a logical race condition in the concurrent execution of mutations in ludicrous mode. Consider the following mutation scenario:

mutation M1 -> (uid:12, name:"alice")
mutation M2 -> (uid:12, name:"bob")

The conflict keys for both of these mutations will be the same. Assume that processing of M1 has already been started by the e.worker() goroutine, and then M2 arrives. M2 will have a dependency on M1, and the inDeg of M2 will be 1. But by the time M2 goes to check whether it has inDeg == 0 here ->

if atomic.LoadInt64(&m.inDeg) == 0 {
it is possible that M1 has already completed, unblocked M2, reduced the inDeg of M2 to 0, and started the processing of M2. In the check at line 249, M2 will see inDeg as 0 and will start its processing again. This causes a double done on the watermark.
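To make the timing concrete, here is a minimal, self-contained Go sketch of the same register-then-check pattern. It is not Dgraph's actual executor code; the mutation struct, field names, and counters are invented for illustration. The submitter goroutine re-checks inDeg after the dependency on M1 has already been recorded, while M1's completion path also decrements inDeg and starts M2 when it reaches 0, so both paths can start M2:

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// mutation is a stand-in for a queued mutation: inDeg counts the unfinished
// mutations it conflicts with, runs counts how often it was started.
type mutation struct {
	inDeg int64
	runs  int64
}

func main() {
	doubleStarts := 0
	for i := 0; i < 100000; i++ {
		m1 := &mutation{}
		m2 := &mutation{inDeg: 1} // M2 shares M1's conflict key, so it waits on M1

		var wg sync.WaitGroup
		wg.Add(2)

		// Worker already processing M1. On completion it decrements M2's
		// inDeg and, seeing it reach 0, starts M2 (the completion path).
		go func() {
			defer wg.Done()
			atomic.AddInt64(&m1.runs, 1) // "apply" M1
			if atomic.AddInt64(&m2.inDeg, -1) == 0 {
				atomic.AddInt64(&m2.runs, 1) // completion path starts M2
			}
		}()

		// Submitter of M2: after the dependency has been registered, it
		// re-checks inDeg. If M1 finished in the meantime, it also sees 0
		// and starts M2 a second time (the racy check quoted above).
		go func() {
			defer wg.Done()
			if atomic.LoadInt64(&m2.inDeg) == 0 {
				atomic.AddInt64(&m2.runs, 1) // submitter path starts M2 too
			}
		}()

		wg.Wait()
		if atomic.LoadInt64(&m2.runs) > 1 {
			doubleStarts++
		}
	}
	fmt.Println("M2 started twice in", doubleStarts, "of 100000 iterations")
}

Across enough iterations the count is typically non-zero but small, matching the observation that the crash only shows up after running high-load writes for a long time.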

This issue can be consistently reproduced if we add a sleep after releasing the lock here ->

g.Unlock()
and do mutations on the same <uid, predicate> multiple times.
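The same trick makes the double start deterministic in the sketch above: widening the window in the submitter goroutine, at the point corresponding to just after g.Unlock() in the real executor, gives M1's completion path time to finish and start M2 first. This fragment replaces the submitter goroutine in the sketch and additionally needs the time import:

		go func() {
			defer wg.Done()
			// Stand-in for the "sleep after releasing the lock" reproduction:
			// let M1 complete, decrement inDeg to 0, and start M2 first.
			time.Sleep(time.Millisecond)
			if atomic.LoadInt64(&m2.inDeg) == 0 {
				atomic.AddInt64(&m2.runs, 1) // now fires on every iteration
			}
		}()

With the sleep in place every iteration reports a double start, which mirrors the double execution that produces the watermark assertion failure shown above.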



@ahsanbarkati ahsanbarkati merged commit 594ff63 into master Jan 13, 2021
@ahsanbarkati ahsanbarkati deleted the ahsan/watermark branch January 13, 2021 18:52
ahsanbarkati added a commit that referenced this pull request Jan 15, 2021
…#7269)

Fix the race condition in the concurrent execution of mutations in
the ludicrous mode. The issue caused same mutation to be called
for execution twice, which resulted in double done on the watermark.

(cherry picked from commit 594ff63)
ahsanbarkati added a commit that referenced this pull request Jan 15, 2021
…#7269) (#7309)

Fix the race condition in the concurrent execution of mutations in
the ludicrous mode. The issue caused same mutation to be called
for execution twice, which resulted in double done on the watermark.

(cherry picked from commit 594ff63)