Cherry-pick commits for v2.0.3 #1273

Merged: jarifibrahim merged 35 commits into release/v2.0 from ibrahim/release/v2.0-cherry-pick-v2.0.3 on Mar 26, 2020
Conversation
This reverts commit 8fd5740.
(cherry picked from commit 03af216)
Fix Windows build and some DeepSource warnings. (cherry picked from commit 0f2e629)
The overlap check in compaction would keep additional keys in case the levels under compaction had overlap amongst themselves. This commit fixes it.

This commit also fixes the `numVersionsToKeep` check. Without this commit, we would end up keeping `2` versions of a key even when the number of versions to keep was set to `1`. See the `level 0 to level 1 with lower overlap` test.

Fixes #1053

The following test fails on master but passes with this commit:

```go
func TestCompaction(t *testing.T) {
	t.Run("level 0 to level 1", func(t *testing.T) {
		dir, err := ioutil.TempDir("", "badger-test")
		require.NoError(t, err)
		defer removeDir(dir)

		// Disable compactions and keep single version of each key.
		opt := DefaultOptions(dir).WithNumCompactors(0).WithNumVersionsToKeep(1)
		db, err := OpenManaged(opt)
		require.NoError(t, err)

		l0 := []keyValVersion{{"foo", "bar", 3}, {"fooz", "baz", 1}}
		l01 := []keyValVersion{{"foo", "bar", 2}}
		l1 := []keyValVersion{{"foo", "bar", 1}}
		// Level 0 has table l0 and l01.
		createAndOpen(db, l0, 0)
		createAndOpen(db, l01, 0)
		// Level 1 has table l1.
		createAndOpen(db, l1, 1)

		// Set a high discard timestamp so that all the keys are below the discard timestamp.
		db.SetDiscardTs(10)

		getAllAndCheck(t, db, []keyValVersion{
			{"foo", "bar", 3}, {"foo", "bar", 2},
			{"foo", "bar", 1}, {"fooz", "baz", 1},
		})
		cdef := compactDef{
			thisLevel: db.lc.levels[0],
			nextLevel: db.lc.levels[1],
			top:       db.lc.levels[0].tables,
			bot:       db.lc.levels[1].tables,
		}
		require.NoError(t, db.lc.runCompactDef(0, cdef))
		// foo version 2 should be dropped after compaction.
		getAllAndCheck(t, db, []keyValVersion{{"foo", "bar", 3}, {"fooz", "baz", 1}})
	})
}
```

(cherry picked from commit 0a06173)
Fixes #1031. (There wasn't a bug to fix. The log statement shouldn't have been there.)

This PR removes the warning message `WARNING: This entry should have been caught.`. The warning message assumed that we will always find the **newest key if two keys have the same version**. This assumption is valid in the case of a normal key, but it's **NOT TRUE** in the case of **move keys**.

Here's how we can end up fetching the older version of a move key if two move keys have the same version:

```
It might be possible that the entry read from the LSM tree points to an older vlog file.
This can happen in the following situation. Assume DB is opened with numberOfVersionsToKeep=1.

Now, if we have ONLY one key in the system "FOO" which has been updated 3 times, and the
same key has been garbage collected 3 times, we'll have 3 versions of the movekey for the
same key "FOO".

NOTE: moveKeyi is the moveKey with version i.

Assume we have 3 move keys in L0:
- moveKey1 (points to vlog file 10),
- moveKey2 (points to vlog file 14), and
- moveKey3 (points to vlog file 15).

Also assume there is another move key "moveKey1" (points to vlog file 6) (this is also a
move key for key "FOO") on upper levels (let's say level 3). The move key "moveKey1" on
level 0 was inserted because vlog file 6 was GCed.

Here's what the arrangement looks like:
L0 => (moveKey1 => vlog10), (moveKey2 => vlog14), (moveKey3 => vlog15)
L1 => ....
L2 => ....
L3 => (moveKey1 => vlog6)

When L0 compaction runs, it keeps only moveKey3 because the number of versions to keep is
set to 1. (We've dropped moveKey1's latest version.)

The new arrangement of keys is:
L0 => ....
L1 => (moveKey3 => vlog15)
L2 => ....
L3 => (moveKey1 => vlog6)

Now if we try to GC vlog file 10, the entry read from the vlog file will point to vlog10,
but the entry read from the LSM tree will point to vlog6. The move key read from the LSM
tree will point to vlog6 because we've asked for version 1 of the move key.

This might seem like an issue, but it's not really an issue, because the user has set the
number of versions to keep to 1 and the latest version of the moveKey points to the
correct vlog file and offset. The stale move key on L3 will eventually be dropped by
compaction because there is a newer version in the upper levels.
```

(cherry picked from commit 2a90c66)
This makes db.Sync() a no-op when badger is running in in-memory mode. The previous code would unnecessarily load an atomic and acquire locks. (cherry picked from commit 2698bfc)
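For illustration, a minimal sketch of the early-return guard this describes, with stand-in types (the `InMemory` field name follows badger's in-memory option; the rest is not badger's actual code):

```go
package main

import "sync/atomic"

// Minimal stand-in types for illustration; badger's real DB and
// valueLog are far richer.
type options struct{ InMemory bool }

type valueLog struct{ writableOffset uint32 }

func (v *valueLog) sync() error {
	// Stand-in for flushing the active value log file to disk.
	_ = atomic.LoadUint32(&v.writableOffset)
	return nil
}

type DB struct {
	opt  options
	vlog valueLog
}

// Sync illustrates the guard: in in-memory mode nothing is persisted to
// disk, so there is nothing to sync and we skip the atomic load and any
// lock acquisition entirely.
func (db *DB) Sync() error {
	if db.opt.InMemory {
		return nil
	}
	return db.vlog.sync()
}

func main() {
	db := &DB{opt: options{InMemory: true}}
	_ = db.Sync() // no-op in in-memory mode
}
```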
The math/rand package (https://golang.org/src/math/rand/rand.go) uses a global lock to allow concurrent access to the rand object. This PR replaces `math/rand` with `ristretto/z.FastRand()`. `FastRand` is much faster than the `math/rand` and `rand.New(..)` generators.

The change improves concurrent writes to the skiplist by ~30%.

```go
func BenchmarkWrite(b *testing.B) {
	value := newValue(123)
	l := NewSkiplist(int64((b.N + 1) * MaxNodeSize))
	defer l.DecrRef()
	b.ResetTimer()
	b.RunParallel(func(pb *testing.PB) {
		rng := rand.New(rand.NewSource(time.Now().UnixNano()))
		for pb.Next() {
			l.Put(randomKey(rng), y.ValueStruct{Value: value, Meta: 0, UserMeta: 0})
		}
	})
}
```

```
name      old time/op  new time/op  delta
Write-16  657ns ± 3%   441ns ± 1%   -32.94%  (p=0.000 n=9+10)
```

(cherry picked from commit 9d6512b)
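For reference, a small sketch of using `z.FastRand` in place of a locked `math/rand` call (the bound `n` and the modulo reduction are illustrative assumptions, not code from this PR):

```go
package main

import (
	"fmt"

	"github.com/dgraph-io/ristretto/z"
)

func main() {
	// z.FastRand returns a pseudo-random uint32 without taking a global
	// lock, unlike the top-level math/rand functions.
	n := uint32(100)
	r := z.FastRand() % n // illustrative reduction into [0, n)
	fmt.Println(r)
}
```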
(cherry picked from commit 01a00cb)
This could possibly be a bug in the `go test` command (golang/go#36527). The `go test` command would skip tests in sub-packages if the top-level package has a custom flag defined and the sub-packages don't define it. Issue golang/go#36527 (comment) has an example of this.

This PR also removes the code from the test that would unnecessarily start a web server. I see two problems here:
1. An unnecessary web server running.
2. We cannot run multiple tests at the same time, since the second run of the test would try to start a web server and crash saying `port already in use`.

(cherry picked from commit 5870b7b)
We don't need to stall writes if Level 1 does not have enough space. Level 1 is stored on disk, and it should be okay to have more tables (more data) on Level 1 than the `max level 1 size`. These tables will eventually be compacted to lower levels.

This commit changes the following:
- We no longer stall writes if L1 doesn't have enough space.
- We stall writes on level 0 only if `KeepL0InMemory` is true.
- Upper levels (L0, L1, etc.) get priority in compaction (previously, the level with the higher priority score would get preference).

(cherry picked from commit 3747be5)
This commit pulls the following changes from ristretto:

```
git log c1f00be0418e...8f368f2 --oneline
```

```
8f368f2 (HEAD -> master) Fix DeepSource warnings
adb35f0 delete item immediately
fce5e91 Support nil *Cache values in Clear and Close
4e224f9 Add .travis.yml
8350826 Fix the way metrics are handled for deletions
99d1bbb (tag: v0.0.1) default to 128bit hashing for collision checks
1eea1b1 atomic Get-Delete operation for eviction
8d6a8a7 add cache test for identifying MaxCost overflows
ae09250 use the custom KeyToHash function if one is set
1dd5a4d #19 Fixes memory leak due to Oct 1st regression in processItems
```

(cherry picked from commit 82381ac)
The tests `TestL0Stall` and `TestL1Stall` would never fail because of errors in the manifest file. This commit fixes that. (cherry picked from commit 0acb3f6)
This PR:
- Disables compression. By default, badger does not use any compression.
- Sets the default ZSTD compression level to 1. Level 15 is very slow for any practical use of badger.

```
no_compression-16              10    502848865 ns/op   165.46 MB/s
zstd_compression/level_1-16     7    739037966 ns/op   112.58 MB/s
zstd_compression/level_3-16     7    756950250 ns/op   109.91 MB/s
zstd_compression/level_15-16    1  11135686219 ns/op     7.47 MB/s
```

(cherry picked from commit c3333a5)
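A usage sketch of opting back into compression at the new default level (the `WithCompression` and `WithZSTDCompressionLevel` setters follow badger v2's options API; treat the exact calls as an assumption):

```go
package main

import (
	badger "github.com/dgraph-io/badger/v2"
	"github.com/dgraph-io/badger/v2/options"
)

func main() {
	// Default options now ship with compression disabled.
	opt := badger.DefaultOptions("/tmp/badger")

	// Opting back into ZSTD at the new default level 1; higher levels
	// trade write throughput for compression ratio.
	opt = opt.WithCompression(options.ZSTD).WithZSTDCompressionLevel(1)

	db, err := badger.Open(opt)
	if err != nil {
		panic(err)
	}
	defer db.Close()
}
```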
This PR adds support for caching bloom filters in ristretto. The bloom filters and blocks are removed from the cache when the table is deleted. (cherry picked from commit 4676ca9)
Move all access to `valueLog.maxFid` under `valueLog.filesLock`, while all mutations happen either with writes stopped or sequentially under `valueLog.write`. This fixes a concurrency issue in `valueLog.Read` where the `maxFid` variable and the `writableLogOffset` variable could point to two different log files. (cherry picked from commit 3e25d77)
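A minimal sketch of the guarded-read pattern this describes (field names follow the text; the struct layout and methods are illustrative, not badger's actual code):

```go
package main

import (
	"fmt"
	"sync"
)

// Sketch of the locking pattern: every read of maxFid takes filesLock so
// it stays consistent with the set of open log files.
type valueLog struct {
	filesLock         sync.RWMutex
	maxFid            uint32
	writableLogOffset uint32
}

func (vlog *valueLog) currentMaxFid() uint32 {
	vlog.filesLock.RLock()
	defer vlog.filesLock.RUnlock()
	return vlog.maxFid
}

// Mutations happen with writes stopped or serially under the write path,
// and take the write lock so readers never observe maxFid and the
// writable offset pointing at two different log files.
func (vlog *valueLog) rotate() {
	vlog.filesLock.Lock()
	defer vlog.filesLock.Unlock()
	vlog.maxFid++
	vlog.writableLogOffset = 0
}

func main() {
	vlog := &valueLog{}
	vlog.rotate()
	fmt.Println(vlog.currentMaxFid()) // 1
}
```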
(cherry picked from commit eee1602)
Fixes #1197

The `TestPageBufferReader2` test would fail often because of an off-by-one issue. The problem can be reproduced by statically setting `randOffset` to the biggest number that `rand.Int31n` may return:

```
//randOffset := int(rand.Int31n(int32(b.length)))
randOffset := int(int32(b.length-1))
```

This makes the problem reliably reproducible, as the offset is now pointing at EOF. Changing the line as follows should solve the problem:

```
randOffset := int(rand.Int31n(int32(b.length-1)))
```

(cherry picked from commit c51748e)
We are using the following pattern in tests, which can be replaced with `require.NoError(t, err)`:

```go
if err != nil {
	t.Fatal(err)
}
```

(cherry picked from commit 78d405a)
(cherry picked from commit 8734e3a)
The test `ExampleDB_Subscribe` doesn't run reliably on AppVeyor. This commit removes it. (cherry picked from commit e029e93)
- Fix tests for 32-bit systems
- Enable 32-bit builds on Travis

(cherry picked from commit bce069c)
Co-authored-by: Ibrahim Jarif <ibrahim@dgraph.io> (cherry picked from commit e908818)
If compaction is started before the vlog is opened, badger might crash with a nil pointer error when trying to push discard stats at the end of compactions, because discard stats are initialized when the vlog is opened. This commit fixes it. (cherry picked from commit 617ed7c)
This commit increases the ValueThreshold for in-memory mode. The existing threshold was 1 MB, which means badger would crash if len(value) was greater than 1 MB. This commit sets the value threshold to MaxInt32 (around 2 GB). A badger transaction will return an error if badger is running in in-memory mode and the length of the value is greater than the value threshold. Fixes #1234 (cherry picked from commit 5b4c0a6)
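For context, a short sketch of opening badger in in-memory mode, where the raised threshold applies (standard badger v2 in-memory usage; the 4 MB value size is illustrative):

```go
package main

import (
	"bytes"

	badger "github.com/dgraph-io/badger/v2"
)

func main() {
	// In-memory mode keeps everything in RAM; nothing touches disk.
	db, err := badger.Open(badger.DefaultOptions("").WithInMemory(true))
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Values larger than the old 1 MB threshold no longer crash badger;
	// they are accepted up to the raised in-memory threshold.
	big := bytes.Repeat([]byte("x"), 4<<20) // 4 MB, illustrative
	err = db.Update(func(txn *badger.Txn) error {
		return txn.Set([]byte("key"), big)
	})
	if err != nil {
		panic(err)
	}
}
```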
The project depends on BadgerDB (cherry picked from commit 8dbc982)
(cherry picked from commit bdb2b13)
This commit adds support for compression/encryption in the background. All incoming writes are saved to a common buffer, and then worker goroutines pick up each block (a block is just a start and end point in the main buffer). The worker goroutines also fix the table start and end points. We fix the interleaving space (which might be caused because the block was compressed) in the `builder.Finish()` call. All the blocks are copied over to their correct position.

Master branch vs this commit (new) benchmarks:

```
benchstat master.txt new.txt
name                           old time/op     new time/op     delta     (-ve is better. Reduction in time)
no_compression-16               177ms ± 1%      177ms ± 2%       ~       (p=0.690 n=5+5)
encryption-16                   313ms ± 4%      245ms ± 3%    -21.69%    (p=0.008 n=5+5)
zstd_compression/level_1-16     330ms ± 1%      227ms ± 2%    -31.22%    (p=0.008 n=5+5)
zstd_compression/level_3-16     345ms ± 2%      227ms ± 3%    -34.39%    (p=0.008 n=5+5)
zstd_compression/level_15-16    10.7s ± 1%       1.2s ± 0%    -88.96%    (p=0.008 n=5+5)

name                           old speed       new speed       delta     (+ve is better. Speed improvement)
no_compression-16              471MB/s ± 1%    471MB/s ± 2%      ~       (p=0.690 n=5+5)
encryption-16                  266MB/s ± 4%    340MB/s ± 2%    +27.67%   (p=0.008 n=5+5)
zstd_compression/level_1-16    252MB/s ± 1%    367MB/s ± 2%    +45.40%   (p=0.008 n=5+5)
zstd_compression/level_3-16    241MB/s ± 2%    367MB/s ± 3%    +52.44%   (p=0.008 n=5+5)
zstd_compression/level_15-16  7.80MB/s ± 1%  70.62MB/s ± 0%   +805.33%   (p=0.008 n=5+5)
```

Please look at #1227 (comment) for the code used for benchmarking.

(cherry picked from commit b13b927)
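A generic sketch of the worker pattern described above, not badger's builder code: blocks (start/end offsets into a shared buffer) are handed to goroutines that compress them concurrently, and a Finish()-style step would then copy results into place (gzip is a stand-in codec; badger uses ZSTD/encryption here):

```go
package main

import (
	"bytes"
	"compress/gzip" // stand-in codec for this sketch
	"fmt"
	"sync"
)

type block struct{ start, end int } // a range within the common buffer

func main() {
	buf := bytes.Repeat([]byte("badger block data "), 1024)
	blocks := []block{{0, 6144}, {6144, 12288}, {12288, len(buf)}}

	out := make([][]byte, len(blocks))
	jobs := make(chan int)
	var wg sync.WaitGroup

	// Worker goroutines pick up blocks and compress them independently.
	for w := 0; w < 4; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := range jobs {
				var c bytes.Buffer
				zw := gzip.NewWriter(&c)
				zw.Write(buf[blocks[i].start:blocks[i].end])
				zw.Close()
				out[i] = c.Bytes()
			}
		}()
	}
	for i := range blocks {
		jobs <- i
	}
	close(jobs)
	wg.Wait()

	// A Finish()-style step would now copy compressed blocks into their
	// final, contiguous positions and fix up the table's block offsets.
	for i, b := range out {
		fmt.Printf("block %d: %d -> %d bytes\n", i, blocks[i].end-blocks[i].start, len(b))
	}
}
```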
This PR adds support for watching empty prefixes (all keys) in the subscribe API. To subscribe to all changes in badger, the user can run:

```go
db.Subscribe(ctx, handler, nil)
```

or

```go
db.Subscribe(ctx, handler, []byte{})
```

(cherry picked from commit c6c1e5e)
Fixes #988. Based on #1046.

This PR adds a `BypassDirLock` option which allows badger to work without acquiring a lock on the data directory. This option could lead to data corruption if used with multiple badger instances writing to the same badger directory.

Co-authored-by: Ehsan Noureddin Moosa <ehsannm@users.noreply.github.com>
(cherry picked from commit 1bcbefc)
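A hypothetical usage sketch, assuming the option is exposed as a field on `Options` under the name used in this PR (the released option name or setter may differ):

```go
package main

import (
	badger "github.com/dgraph-io/badger/v2"
)

func main() {
	// Hypothetical: the BypassDirLock field name is taken from this PR's
	// description; verify the released option name before relying on it.
	opt := badger.DefaultOptions("/path/to/shared-dir")
	opt.BypassDirLock = true // skip the directory lock; risks corruption
	// if another badger instance writes to the same directory.

	db, err := badger.Open(opt)
	if err != nil {
		panic(err)
	}
	defer db.Close()
}
```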
#1256 showed that a single cache might not be enough to store the data blocks and the bloom filters. This commit adds a separate cache for the bloom filters. It also adds a new flag, `LoadBloomsOnOpen`, which determines whether the bloom filters should be loaded when the table is opened. The default value of `MaxBfCacheSize` is zero, and `LoadBloomsOnOpen` defaults to true.

This change significantly improves read speeds: with a single cache, bloom filters would get evicted and we would have to read them back from disk. (cherry picked from commit eaf64c0)
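A sketch of tuning these two options (the setters follow badger v2's `WithX` convention around the `MaxBfCacheSize` and `LoadBloomsOnOpen` names from this PR; treat the exact calls as assumptions):

```go
package main

import (
	badger "github.com/dgraph-io/badger/v2"
)

func main() {
	// Dedicated bloom-filter cache plus lazy bloom loading; names come
	// from this PR's description.
	opt := badger.DefaultOptions("/tmp/badger").
		WithMaxBfCacheSize(64 << 20). // 64 MB bloom-filter cache
		WithLoadBloomsOnOpen(false)   // load bloom filters lazily

	db, err := badger.Open(opt)
	if err != nil {
		panic(err)
	}
	defer db.Close()
}
```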
Disabling the cache by default leads to better performance when compression and encryption are disabled.

Result of the read bench tool (data is not encrypted or compressed):
1. With 1 GB cache:
   Average read speed: 41.3833 MB/s
   Total bytes read in 1 minute: 2.4 GB
2. Without cache:
   Average read speed: 58.86 MB/s
   Total bytes read in 1 minute: 3.3 GB

This PR also fixes a small issue with the boolean condition in `table.Open()`. (cherry picked from commit 91c31eb)
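Since the cache now defaults to off, a short sketch of turning it back on for compressed workloads, where cached decompressed blocks pay off (the `WithMaxCacheSize` setter follows badger v2's options API; the 1 GB size is illustrative):

```go
package main

import (
	badger "github.com/dgraph-io/badger/v2"
	"github.com/dgraph-io/badger/v2/options"
)

func main() {
	// With compression (or encryption) enabled, re-enable the block
	// cache explicitly: blocks are expensive to rebuild on every read.
	opt := badger.DefaultOptions("/tmp/badger").
		WithCompression(options.ZSTD).
		WithMaxCacheSize(1 << 30) // 1 GB block cache, illustrative size

	db, err := badger.Open(opt)
	if err != nil {
		panic(err)
	}
	defer db.Close()
}
```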
jarifibrahim requested review from ashish-goswami, manishrjain and a team as code owners on March 24, 2020.

danielmai approved these changes on Mar 26, 2020.
This PR actually cherry-picks only 9 new commits. All the other commits are here because the last commit was a squash and merge of 24 commits. I've reverted that commit and cherry-picked the individual commits (24 in total) into this branch.

The new commits are:

This PR also adds the changelog for v2.0.3.