
core/bloombits, eth/filter: transformed bloom bitmap based log search #14631

Merged 5 commits on Sep 6, 2017

Conversation

zsfelfoldi (Contributor)

This PR is based on #14522 and supersedes #3749, which did the same thing but was based on the old version of the chain processor. Both versions are kept until one of them is merged.

Further parts of the code may be moved into separate PRs to make the review process easier (suggestions are welcome).

This PR optimizes log searching by creating a data structure (BloomBits) that makes it cheaper to retrieve the bloom filter data relevant to a specific filter. When searching a long section of the block history, we check three specific bits of each bloom filter per address/topic. To do that, we currently read or retrieve a roughly 500-byte block header for each block. The new structure optimizes this with a "bitwise 90 degree rotation" of the bloom filters. Blocks are grouped into sections (SectionSize is currently 4096 blocks); BloomBits[bitIdx][sectionIdx] is a 4096-bit (512-byte) vector that contains a single bit of each bloom filter from the block range [sectionIdx*SectionSize ... (sectionIdx+1)*SectionSize-1]. (Since bloom filters are usually sparse, simple data compression makes this structure even more efficient, especially for ODR retrieval.) By reading and binary AND-ing three BloomBits sections, we can filter for an address/topic in 4096 blocks at once ("1" bits in the binary AND result mean bloom matches).
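
As a rough illustration (not code from this PR; the function and variable names, the MSB-first bit order within a byte, and everything except the 4096-block section size are assumptions of this sketch), this is how a single address/topic could be matched against one section once its three bit vectors are available:

```go
package main

import "fmt"

const sectionSize = 4096 // blocks per section, as in the PR

// matchSection ANDs the three bit vectors belonging to one address/topic
// (one vector per bloom bit index) and reports which blocks of the section
// are potential bloom matches.
func matchSection(sectionIdx uint64, vectors [3][]byte) []uint64 {
	var matches []uint64
	for i := 0; i < sectionSize/8; i++ {
		b := vectors[0][i] & vectors[1][i] & vectors[2][i]
		for bit := 0; bit < 8; bit++ {
			if b&(0x80>>uint(bit)) != 0 { // MSB-first bit order is an assumption of this sketch
				matches = append(matches, sectionIdx*sectionSize+uint64(i*8+bit))
			}
		}
	}
	return matches
}

func main() {
	// Three 512-byte vectors sharing a single set bit at block offset 10.
	var vectors [3][]byte
	for i := range vectors {
		vectors[i] = make([]byte, sectionSize/8)
		vectors[i][1] = 0x20 // bit 10 of the section
	}
	fmt.Println(matchSection(7, vectors)) // [28682] == 7*4096 + 10
}
```

Compared with reading roughly 4096 headers (about 2 MB at ~500 bytes each), this touches only 3 x 512 bytes per address/topic per section.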

Implementation and design rationale of the matcher logic

  • Pipelined structure

The matcher was designed with the needs of both full and light nodes in mind. A simpler architecture would probably be satisfactory for full nodes (where the bit vectors are available in the local database), but the network retrieval bottleneck of light clients justifies a more sophisticated algorithm that tries to minimize the amount of retrieved data and to return results as soon as possible. The current implementation is a pipelined structure based on input and output channels (receiving section indexes and sending potential matches). The matcher is built from sub-matchers, one for the addresses and one for each topic group. Since we are only interested in matches that every sub-matcher signals as positive, they are daisy-chained so that each subsequent sub-matcher only retrieves and matches the bit vectors of sections where the previous matchers have found a potential match. The "1" bits of the output of the last sub-matcher are returned as bloom filter matches.
Sub-matchers use a set of fetchers to retrieve streams of bit vectors belonging to certain bloom filter bit indexes. Fetchers are also pipelined, receiving a stream of section indexes and sending a stream of bit vectors. As soon as every fetcher has returned the bit vector belonging to a given section index, the sub-matcher performs the necessary binary AND and OR operations and outputs the resulting vector, along with its section index, if the vector is not all zeroes.
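
A minimal channel-level sketch of this daisy-chaining (illustrative only; the type and function names are assumptions, the fetch callback stands in for the fetcher pipeline, and the real sub-matchers additionally handle fetch scheduling, ordering and shutdown):

```go
// partialMatch carries the intermediate result for one section through the pipeline.
type partialMatch struct {
	section uint64
	bitset  []byte
}

// subMatch ANDs the incoming bitset with the OR of its own criteria (each
// criterion being the AND of three bloom bit vectors) and only forwards
// sections that still contain at least one "1" bit.
func subMatch(prev chan partialMatch, bits [][3]uint, fetch func(bit uint, section uint64) []byte) chan partialMatch {
	next := make(chan partialMatch)
	go func() {
		defer close(next)
		for pm := range prev {
			merged := make([]byte, len(pm.bitset))
			for _, triple := range bits { // OR together the alternatives (addresses, or topics of one group)
				and := append([]byte(nil), fetch(triple[0], pm.section)...)
				for _, bit := range triple[1:] {
					vec := fetch(bit, pm.section)
					for i := range and {
						and[i] &= vec[i]
					}
				}
				for i := range merged {
					merged[i] |= and[i]
				}
			}
			nonZero := false
			for i := range merged {
				merged[i] &= pm.bitset[i]
				if merged[i] != 0 {
					nonZero = true
				}
			}
			if nonZero {
				next <- partialMatch{section: pm.section, bitset: merged}
			}
		}
	}()
	return next
}
```

Chaining several of these gives exactly the behaviour described: later sub-matchers only ever fetch vectors for sections that the earlier ones have not already ruled out.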

  • Prioritizing and batching requests

Light clients retrieve the bit vectors with Merkle proofs, which makes it much more efficient to retrieve batches of vectors (whose Merkle proofs share most of their trie nodes) in a single request. It is also preferable to prioritize requests by section index (regardless of bit index), to ensure that matches are found and returned as soon as possible, and in sequential order. Prioritizing and batching are realized by a common request distributor that receives individual bit index/section index requests from the fetchers and keeps an ordered list of section indexes to be requested, grouped by bit index. It does not call any retrieval backend function itself; instead it is driven by a "server" process (see serveMatcher in filter.go): NextRequest returns the next batch to be requested, and the retrieved vectors are handed back through the Deliver function. This way the bloombits package only has to care about the matching logic, while the caller retains full control over the resources (CPU/disk/network bandwidth) assigned to the task.
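
A hedged sketch of such a serving loop (the NextRequest/Deliver names come from the description above, but their exact signatures here, and the fetch callback, are assumptions):

```go
// matcherAPI captures the two entry points described above; the exact
// signatures in the PR may differ, so treat these as assumptions.
type matcherAPI interface {
	NextRequest() (bit uint, sections []uint64)            // next prioritized batch, grouped by bit index
	Deliver(bit uint, sections []uint64, vectors [][]byte) // hand back the retrieved vectors
}

// serveMatcher is a sketch of the "server" process mentioned above: it owns
// the retrieval backend and therefore all resource decisions, while the
// matcher itself only performs the matching logic.
func serveMatcher(m matcherAPI, fetch func(bit uint, section uint64) ([]byte, error), stop chan struct{}) error {
	for {
		select {
		case <-stop:
			return nil
		default:
		}
		bit, sections := m.NextRequest()
		vectors := make([][]byte, len(sections))
		for i, section := range sections {
			data, err := fetch(bit, section)
			if err != nil {
				return err
			}
			vectors[i] = data
		}
		m.Deliver(bit, sections, vectors)
	}
}
```

A full node's fetch can be a plain database read, while a light client can turn the sections of one bit index into a single batched proof request.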

@Arachnid (Contributor) left a comment:


I don't think I fully understand the operation of the various goroutines here. Could we schedule a call to discuss and review interactively?

reqLock sync.RWMutex
}

type req struct {

Contributor:

Can you decompress these identifiers a bit? bitIndex, requestMap, request, etc.

func (f *fetcher) fetch(sectionCh chan uint64, distCh chan distReq, stop chan struct{}, wg *sync.WaitGroup) chan []byte {
dataCh := make(chan []byte, channelCap)
returnCh := make(chan uint64, channelCap)
wg.Add(2)

Contributor:

Why 2?

returnCh := make(chan uint64, channelCap)
wg.Add(2)

go func() {

Contributor:

Why two separate goroutines? Can't these be combined?

for i, idx := range sectionIdxList {
r := f.reqMap[idx]
if r.data != nil {
panic("BloomBits section data delivered twice")

Contributor:

Is this really a cause for panic?

// set up fetchers
fetchIdx := make([][3]chan uint64, len(idxs))
fetchData := make([][3]chan []byte, len(idxs))
for i, idx := range idxs {

Contributor:

Can you give 'idxs' and 'ii' more descriptive names?

return
}

for _, ff := range fetchIdx {

Contributor:

These variable names are also pretty hard to follow.


m.wg.Add(2)
// goroutine for starting retrievals
go func() {

Contributor:

It's unclear to me why there need to be two independent goroutines here, or how they work together. Can you elaborate?

func (m *Matcher) distributeRequests(stop chan struct{}) {
m.distWg.Add(1)
stopDist := make(chan struct{})
go func() {

Contributor:

Is this really necessary?

@zsfelfoldi force-pushed the bloombits2 branch 2 times, most recently from 7fae8bc to ef18689 on August 9, 2017
@zsfelfoldi force-pushed the bloombits2 branch 3 times, most recently from 5f9be14 to 900d968 on August 19, 2017
@karalabe force-pushed the bloombits2 branch 2 times, most recently from b37d46f to 06ca7f7 on September 4, 2017

@karalabe (Member) commented Sep 4, 2017:

I am now satisfied with this PR, though given that I added heavy reworks to it, I cannot mark it as reviewed. @zsfelfoldi Please double check that the code still makes sense from your perspective. @fjl or @Arachnid Please do a final review.

@Arachnid (Contributor) left a comment:

Generally looks good, but I'm concerned the code to retrieve data from disk is massively overengineered.


// AddBloom takes a single bloom filter and sets the corresponding bit column
// in memory accordingly.
func (b *Generator) AddBloom(bloom types.Bloom) error {

Contributor:

I'm a little concerned that the implied state here (which bit is being set) could lead to something becoming invisibly out of sync. Could this take the bit being set as a parameter and error if it's not the one it expects?

Contributor Author (zsfelfoldi):

you're right, done
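
For reference, the shape of the discussed change as a hedged sketch rather than the merged code (type, field and constant names, and the error texts, are illustrative):

```go
import "fmt"

const sectionSize = 4096 // blooms (blocks) per section

type Generator struct {
	blooms  [2048][]byte // one bit vector per bloom bit index
	nextIdx uint         // index of the next bloom expected within the section
}

// AddBloom takes the expected bloom index explicitly and refuses
// out-of-order additions, so implied state can never drift invisibly.
func (g *Generator) AddBloom(index uint, bloom [256]byte) error {
	if g.nextIdx >= sectionSize {
		return fmt.Errorf("section already full")
	}
	if index != g.nextIdx {
		return fmt.Errorf("bloom %d added out of order, expected %d", index, g.nextIdx)
	}
	byteIdx := g.nextIdx / 8
	bitMask := byte(1) << (7 - g.nextIdx%8) // MSB-first position of this bloom within the section
	for i := 0; i < 8*len(bloom); i++ {
		bloomByteIdx := len(bloom) - 1 - i/8 // header blooms are big-endian bit vectors
		bloomBitMask := byte(1) << (i % 8)
		if bloom[bloomByteIdx]&bloomBitMask != 0 {
			if g.blooms[i] == nil {
				g.blooms[i] = make([]byte, sectionSize/8)
			}
			g.blooms[i][byteIdx] |= bitMask
		}
	}
	g.nextIdx++
	return nil
}
```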

bitMask := byte(1) << byte(7-b.nextBit%8)

for i := 0; i < types.BloomBitLength; i++ {
bloomByteMask := types.BloomByteLength - 1 - i/8

Contributor:

This is an index, not a mask, isn't it?

Contributor Author (zsfelfoldi):

It is, and so is byteMask. Fixed.

b = crypto.Keccak256(b)

var idxs bloomIndexes
for i := 0; i < len(idxs); i++ {

Contributor:

There's a lot of magic numbers here. Could they be made into constants?

Member:

These are more or less specified by the yellow paper's construction of the header bloom filters. I'm unsure whether it makes sense to separate them out, since there's not much room to reuse them elsewhere. But I'm open to suggestions.
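
For readers outside the review, the constants in question are the header bloom parameters: 2048 filter bits, 3 probes, 11 hash bits per probe. A sketch of the derivation (the function name is illustrative):

```go
import "github.com/ethereum/go-ethereum/crypto"

// bloomIndexes derives the three bloom bit indexes for a filtered value
// (address or topic): each index is the low 11 bits of one of the first
// three 16-bit big-endian pairs of the value's Keccak-256 hash, matching
// the 2048-bit, 3-probe header bloom construction.
func bloomIndexes(data []byte) [3]uint {
	hash := crypto.Keccak256(data)
	var idxs [3]uint
	for i := range idxs {
		idxs[i] = (uint(hash[2*i])<<8 | uint(hash[2*i+1])) % 2048
	}
	return idxs
}
```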

type Matcher struct {
sectionSize uint64 // Size of the data batches to filter on

addresses []bloomIndexes // Addresses the system is filtering for

Contributor:

Wouldn't it be cleaner to just accept a list of byte slices, and leave the address/topic distinction to the caller?

Member:

API-wise, I think it might be nice for NewMatcher to cater explicitly for the filter use case. Internally, I'm unsure whether we need to retain this distinction. @zsfelfoldi ?

Contributor:

Even in NewMatcher, this seems to me like it shouldn't care about what it's matching on - they should just be opaque hashes to it.

Contributor Author (zsfelfoldi):

I changed it as @Arachnid suggested in the last commit; we now have to convert addresses and hashes to byte slices in filter.go, but maybe it's still nicer this way. If you don't like it, we can drop the last commit.
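
Roughly, the conversion now lives on the caller's side, along these lines (a sketch; the exact filter.go code and the filter shape the matcher accepts may differ):

```go
import "github.com/ethereum/go-ethereum/common"

// toMatcherFilters flattens the typed filter criteria (addresses plus topic
// groups) into opaque byte slices, leaving the address/topic distinction
// entirely to the caller.
func toMatcherFilters(addresses []common.Address, topics [][]common.Hash) [][][]byte {
	filters := make([][][]byte, 0, 1+len(topics))

	addrFilter := make([][]byte, len(addresses))
	for i, addr := range addresses {
		addrFilter[i] = addr.Bytes()
	}
	filters = append(filters, addrFilter)

	for _, group := range topics {
		topicFilter := make([][]byte, len(group))
		for i, topic := range group {
			topicFilter[i] = topic.Bytes()
		}
		filters = append(filters, topicFilter)
	}
	return filters
}
```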

// Iterate over all the blocks in the section and return the matching ones
for i := first; i <= last; i++ {
// If the bitset is nil, we're a special match-all cornercase
if res.bitset == nil {

Contributor:

Why the corner case? Why not just return a fully set bitset?

Member:

Fair enough.

Contributor Author (zsfelfoldi):

Because that's what comes from the source: https://github.com/zsfelfoldi/go-ethereum/blob/bloombits2/core/bloombits/matcher.go#L229
I don't want to do unnecessary binary ANDs every time in the first subMatcher, so I'm sending nils at the source. But in this corner case there are no subMatchers at all (we are receiving the source at the end), so we have to do some special-case handling anyway. We can do it some other way, but I'm not sure it would be any nicer. I'm open to suggestions, though.

Member:

I've added a commit to always send 0xff-s in the first instance. I don't think doing a binary AND on a few bytes will matter, given that the data comes from disk or the network.
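
i.e. something along the lines of (illustrative sketch):

```go
import "bytes"

// fullBitset builds the all-ones vector the pipeline source now emits,
// removing the need for a nil match-all special case downstream.
func fullBitset(sectionSize uint64) []byte {
	return bytes.Repeat([]byte{0xff}, int(sectionSize/8))
}
```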


// distributor receives requests from the schedulers and queues them into a set
// of pending requests, which are assigned to retrievers wanting to fulfil them.
func (m *Matcher) distributor(dist chan *request, session *MatcherSession) {

Contributor:

Does having multiple parallel retrievers help performance? I wonder if it wouldn't be far simpler to just have each filter fetch data as it needs it, with a cache to prevent duplicate reads?

Member:

I think the complexity is there to aid the light client, which needs to batch multiple requests together and send multiple batches out to different light servers. Hence the whole distribution "mess".

Contributor:

Ah, good point. Objection withdrawn, then.

@karalabe (Member) commented Sep 6, 2017:

@Arachnid Issues should be addressed now, PTAL.

@karalabe added this to the 1.7.0 milestone on Sep 6, 2017