Update release/v3.2011 branch #1634
Merged
Conversation
We removed the global z.Allocator pool from the z package. Instead, we now use a new z.AllocatorPool type in the places that need a pool; in this case, we brought it to TableBuilder and Stream. Also fix a memory leak in Stream. Co-authored-by: Ibrahim Jarif <ibrahim@dgraph.io>
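A minimal sketch of the per-component pooling pattern described above. This is not the real z.AllocatorPool API; it is a hypothetical illustration built on Go's sync.Pool, with each component owning its own pool instead of sharing a global one.

```go
// Hypothetical illustration of a per-component allocator pool (not z.AllocatorPool).
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// allocatorPool hands out reusable buffers to the component that owns it
// (e.g. a table builder or a stream writer), instead of a single global pool.
type allocatorPool struct {
	pool sync.Pool
}

func newAllocatorPool() *allocatorPool {
	return &allocatorPool{
		pool: sync.Pool{
			New: func() interface{} { return new(bytes.Buffer) },
		},
	}
}

func (p *allocatorPool) Get() *bytes.Buffer  { return p.pool.Get().(*bytes.Buffer) }
func (p *allocatorPool) Put(b *bytes.Buffer) { b.Reset(); p.pool.Put(b) }

func main() {
	// Each component (TableBuilder, Stream, ...) would own its own pool.
	builderPool := newAllocatorPool()

	buf := builderPool.Get()
	buf.WriteString("key=value")
	fmt.Println(buf.String())
	builderPool.Put(buf) // return to the pool so the owner can reuse it
}
```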
Decrease the size of the DISCARD file and the WAL for memtables.
Remove z.Buffer from the skiplist because we are using a static-sized buffer.
Do not use AllocatorPool, because it has shown strange crashes due to the way Go interprets slices. The new system uses Go memory for z.Allocator and avoids reusing it.
The orchestrate function would block forever if the send function returned an error. The produceKv goroutines would also block, because the error channel had a capacity of 1.
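A hedged sketch of why a capacity-1 error channel can deadlock multiple producers, and the common non-blocking send pattern that avoids it. The names below are illustrative, not Badger's actual orchestrate/produceKv code.

```go
// Illustrative sketch: several producers report errors through a channel
// with capacity 1; a non-blocking send keeps them from blocking forever.
package main

import (
	"fmt"
	"sync"
)

func main() {
	errCh := make(chan error, 1) // capacity 1: only the first error fits
	var wg sync.WaitGroup

	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			err := fmt.Errorf("producer %d: send failed", id)
			// Non-blocking send: if an error is already queued, drop this one
			// instead of blocking the producer goroutine forever.
			select {
			case errCh <- err:
			default:
			}
		}(i)
	}

	wg.Wait()
	close(errCh)
	if err, ok := <-errCh; ok {
		fmt.Println("first error:", err)
	}
}
```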
This fixes two issues:
- The atomic variable was not being accessed correctly.
- The atomic variable should be the first member of the struct to ensure proper alignment; failure to do so will cause a segmentation fault.
Fixes DGRAPH-2773.
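A short sketch of the alignment rule behind the second point: 64-bit values passed to sync/atomic must be 64-bit aligned, and placing the field first in the struct guarantees that even on 32-bit platforms. The struct and field names are illustrative.

```go
// Sketch of the struct-layout rule: keep the atomically accessed 64-bit
// field first so it stays 64-bit aligned on 32-bit platforms, where a
// misaligned atomic access faults at runtime.
package main

import (
	"fmt"
	"sync/atomic"
)

type counter struct {
	// The atomically accessed uint64 is the FIRST field on purpose.
	hits uint64
	name string
}

func main() {
	c := &counter{name: "requests"}
	atomic.AddUint64(&c.hits, 1)
	fmt.Println(c.name, atomic.LoadUint64(&c.hits))
}
```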
Add debugging information to the yieldItemValue function to find the root cause of the missing vlog files error. Note: this commit should be reverted once the issue has been resolved.
Stream.Send now sends out z.Buffer instead of pb.KVList. z.Buffer marshals each KV as a separate slice, which significantly reduces the memory required by the Stream framework. Stream no longer uses z.Allocator or tries to put pb.KV structs on the Allocator, for memory safety reasons. Bring back the z.AllocatorPool for table.Builder.
Changes:
* Use z.Buffer for stream.Send
* Only use 8 streams in write bench
* Revert "Bug Fix: Fix up how we use z.Allocator" (this reverts commit 5ff9e1d)
* Bring allocator back. Use z.Buffer for send
* Add BufferToKVList function
* Print jemalloc stats while streaming
* Bring in latest Ristretto
* Fix memory leak and benchmark read test
Co-authored-by: Ibrahim Jarif <ibrahim@dgraph.io>
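A hedged sketch of the "each KV as a separate slice" idea: length-prefixed records appended into one flat buffer, so the receiver can walk them back out without materializing a single large list. This uses a plain []byte and encoding/binary, not the real z.Buffer or pb.KV API.

```go
// Illustrative sketch (not z.Buffer / pb.KV) of marshalling each record as
// its own length-prefixed slice inside one flat buffer.
package main

import (
	"encoding/binary"
	"fmt"
)

// appendRecord writes a 4-byte little-endian length followed by the payload.
func appendRecord(buf, payload []byte) []byte {
	var hdr [4]byte
	binary.LittleEndian.PutUint32(hdr[:], uint32(len(payload)))
	buf = append(buf, hdr[:]...)
	return append(buf, payload...)
}

// forEachRecord walks the buffer and calls fn for every payload slice,
// roughly what a BufferToKVList-style helper would do on the receiving side.
func forEachRecord(buf []byte, fn func([]byte)) {
	for len(buf) >= 4 {
		n := binary.LittleEndian.Uint32(buf[:4])
		buf = buf[4:]
		fn(buf[:n])
		buf = buf[n:]
	}
}

func main() {
	var buf []byte
	buf = appendRecord(buf, []byte("kv-1"))
	buf = appendRecord(buf, []byte("kv-2"))
	forEachRecord(buf, func(p []byte) { fmt.Println(string(p)) })
}
```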
Remove `Github issues` links
Even for cheap integer fields like MaxVersion, KeyCount, and OffsetsLength, we end up calling `fetchIndex`, which hits the Ristretto cache. Instead, we can keep these fields in memory at all times, because they are so cheap. This PR does that.
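A hedged sketch of that change: copy the cheap integer fields onto the table struct once, so reading them never needs to fetch the full index from the cache. The type, field, and function names below are illustrative, not Badger's actual API.

```go
// Illustrative sketch of keeping cheap integer metadata in memory instead
// of re-fetching the full table index from a cache on every access.
// Names (tableIndex, Table, openTable) are hypothetical.
package main

import "fmt"

type tableIndex struct {
	// Imagine a large, cache-resident index here (bloom filters, offsets, ...).
	maxVersion    uint64
	keyCount      uint32
	offsetsLength int
}

type Table struct {
	// Cheap fields copied out of the index once, at open time.
	maxVersion    uint64
	keyCount      uint32
	offsetsLength int
}

func openTable(idx *tableIndex) *Table {
	return &Table{
		maxVersion:    idx.maxVersion,
		keyCount:      idx.keyCount,
		offsetsLength: idx.offsetsLength,
	}
}

// MaxVersion no longer needs to touch the cache at all.
func (t *Table) MaxVersion() uint64 { return t.maxVersion }

func main() {
	t := openTable(&tableIndex{maxVersion: 42, keyCount: 10, offsetsLength: 3})
	fmt.Println(t.MaxVersion())
}
```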
We store a slice of pb.KVs in Iterator, so it can be used by Stream users.
NewKeyIterator uses pickTables, which was optimized in the past, but a recent PR (#1546) removed that optimization, making NewKeyIterator quite expensive. This PR brings the optimization back.
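A hedged sketch of the kind of optimization being restored: binary-searching tables sorted by their biggest key so a key-specific iterator only considers tables whose range can contain the key, instead of scanning every table. This is a simplified illustration, not Badger's actual pickTables.

```go
// Illustrative sketch of narrowing the candidate tables for a key using
// sort.Search over tables sorted by their biggest key.
package main

import (
	"bytes"
	"fmt"
	"sort"
)

type table struct{ smallest, biggest []byte }

// pickTables assumes tables are sorted by their biggest key and returns
// only the tables whose range could still contain key.
func pickTables(tables []table, key []byte) []table {
	i := sort.Search(len(tables), func(i int) bool {
		return bytes.Compare(tables[i].biggest, key) >= 0
	})
	if i >= len(tables) {
		return nil // key is beyond every table's range
	}
	return tables[i:]
}

func main() {
	tables := []table{
		{[]byte("a"), []byte("f")},
		{[]byte("g"), []byte("m")},
		{[]byte("n"), []byte("z")},
	}
	fmt.Println(len(pickTables(tables, []byte("k")))) // 2: the first table is skipped
}
```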
DefaultOptions already has Snappy compression set as the default, so the stream CLI tool now aligns with that.
* chore(cmd/info): Fix printed spacing of summary. Column-aligns the output for the Summary section, e.g.:
  [Summary]
  Level 0 size:       0 B
  Level 1 size:    2.3 kB
  Total SST size:  2.3 kB
  Value log size:    20 B
* fix: Set block cache and index cache sizes. This fixes a panic when running `badger info`:
  panic: BlockCacheSize should be set since compression/encryption are enabled
* Let's use the allocator again
* Switch to z.NumAllocBytes
* Make stream work with either another Badger DB or a file to back up to
* Add a test for Allocator
* Use the allocator for Backup
* Bring in latest Ristretto
Co-authored-by: Daniel Mai <daniel@dgraph.io>
In edge cases, we end up with too many splits during compactions, which makes compactions take up too much RAM. Avoid that by limiting splits to a maximum of 5. Also, avoid running more compactions when memory usage goes above 16 GB.
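A hedged sketch of those two guards: cap the number of splits, and skip new compactions when process memory is already high. The 5-split cap and 16 GB threshold come from the commit message; the memory-usage helper and function names are placeholders, not Badger code.

```go
// Illustrative sketch of the two compaction guards described above.
package main

import "fmt"

const (
	maxSplits   = 5
	memoryLimit = 16 << 30 // 16 GB
)

// memoryUsed is a placeholder; a real implementation would read runtime
// or jemalloc statistics.
func memoryUsed() uint64 { return 4 << 30 }

func planSplits(want int) int {
	if want > maxSplits {
		return maxSplits // cap splits so one compaction can't balloon RAM usage
	}
	return want
}

func maybeCompact(want int) {
	if memoryUsed() > memoryLimit {
		fmt.Println("skipping compaction: memory usage above limit")
		return
	}
	fmt.Println("running compaction with", planSplits(want), "splits")
}

func main() {
	maybeCompact(12)
}
```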
If a table has a mixture of value log pointers and embedded values, badger will carry over the last length from a value log entry into the subsequent embedded entries. Co-authored-by: Raúl Kripalani <raul@protocol.ai>
Since we are in the process of moving to Netlify, we need this change for the docs to work. This change has no effect on the current Badger docs.
This sets relativeURLs to false (the default value). If it's set to true, then the URLs generated by Hugo are incorrect. For example, in the HTML the incorrect URL starts with "./":
<link href='./docs/badger/css/theme.css?ed88a5fdbf06b9737b9afdf41f9e2902' rel="stylesheet" />
With this change, the correct URL starts with "/":
<link href='/docs/badger/css/theme.css?ed88a5fdbf06b9737b9afdf41f9e2902' rel="stylesheet" />
When keys are moved because of the GC, we were removing bitDiscardEarlierVersions and other bits. Only the transaction markers should be removed; all the other bits should be kept.
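A hedged sketch of that bit handling: clear only the transaction-marker bits while preserving everything else, including the discard-earlier-versions bit. The bit positions below are made up for the example and are not Badger's actual constants.

```go
// Illustrative sketch of stripping only transaction-marker bits from an
// entry's meta byte while preserving all other bits.
package main

import "fmt"

const (
	bitTxn                    byte = 1 << 6 // hypothetical transaction marker
	bitFinTxn                 byte = 1 << 7 // hypothetical transaction-finish marker
	bitDiscardEarlierVersions byte = 1 << 2 // hypothetical; must be preserved
)

func stripTxnMarkers(meta byte) byte {
	// Clear only the transaction markers; every other bit survives the move.
	return meta &^ (bitTxn | bitFinTxn)
}

func main() {
	meta := bitTxn | bitDiscardEarlierVersions
	fmt.Printf("before: %08b after: %08b\n", meta, stripTxnMarkers(meta))
}
```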
…v3.2011
Conflicts:
  go.mod
  go.sum
  stream.go
  stream_test.go
  test.sh
jarifibrahim commented Jan 6, 2021
aman-bansal approved these changes Jan 15, 2021
jarifibrahim commented Jan 15, 2021
LGTM