Bitswap rounds #438
Conversation
```diff
@@ -29,6 +30,8 @@ const maxProvidersPerRequest = 3
 const providerRequestTimeout = time.Second * 10
 const hasBlockTimeout = time.Second * 15
 
+const roundTime = time.Second / 2
```
maybe all these consts should be related to each other:

```go
const roundTime = time.Second / 2
const providerRequestTimeout = roundTime * 20
const hasBlockTimeout = roundTime * 30
```

Also, note you can probably set these down 10 or 100x for tests. (though... timeouts in other subsystems may affect this. it may be worth exploring the instrumented Signals idea, and maybe configuring timeouts in one place.)
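To illustrate the "configure timeouts in one place" idea, here is a minimal sketch; the `setTimeouts` helper and the use of vars instead of consts are hypothetical, not part of this PR. Every timeout derives from one base duration, so a test can shrink them all at once:

```go
package bitswap

import "time"

var (
	roundTime              time.Duration
	providerRequestTimeout time.Duration
	hasBlockTimeout        time.Duration
)

// setTimeouts derives every timeout from a single base duration, so a test
// can shrink them all together, e.g. setTimeouts(5 * time.Millisecond).
func setTimeouts(base time.Duration) {
	roundTime = base
	providerRequestTimeout = base * 20
	hasBlockTimeout = base * 30
}

func init() { setTimeouts(500 * time.Millisecond) }
```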
```go
	}
	wg.Wait()
}

func (bs *bitswap) roundWorker(ctx context.Context) {
	roundTicker := time.NewTicker(roundTime)
	bandwidthPerRound := 500000
```
`bandwidthPerRound` should be a const/var up at top.
slipped my mind, thanks
and include the unit, e.g. `bytesPerRound`
if all blocks are larger than 500K, will anything ever be sent?
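One way to make sure oversized blocks still move is to always send at least the first queued block, even when it alone exceeds the per-round budget. A sketch of that guard, with a hypothetical `blocksForRound` helper (this is not what the PR does):

```go
package bitswap

// bytesPerRound is the per-round send budget, named with its unit as
// suggested above; the value matches the 500000 used in the PR.
const bytesPerRound = 500000

// blocksForRound picks block sizes (in bytes) for one round. The first block
// is sent unconditionally, guaranteeing progress even when a single block is
// larger than the whole budget.
func blocksForRound(sizes []int) []int {
	var picked []int
	budget := 0
	for _, size := range sizes {
		if len(picked) > 0 && budget+size > bytesPerRound {
			break
		}
		picked = append(picked, size)
		budget += size
	}
	return picked
}
```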
jenkins is angry because i forgot to make vendor...
```go
// implies Priority(A) > Priority(B)
AddWanted(u.Key)
// AddEntry adds an entry to the Wantlist.
AddEntry(u.Key, int, bool)
```
it's not obvious what bools mean in parameter lists

```go
type Type int

const (
	Normal = iota
	Revoke // or Cancel
)
```
or named params:

```go
AddEntry(key u.Key, priority int, cancel bool)
```

(a thought that occurred to me is we could use `priority = 0`, but this type of overloading can lead to problems later)
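Putting the two suggestions together, here is a minimal self-contained sketch of how a typed constant plus named parameters could read. `EntryType`, `Key`, and this `Wantlist` shape are hypothetical stand-ins, not the PR's types:

```go
package wantlist

// Key stands in for u.Key in this sketch.
type Key string

// EntryType replaces the bare bool, making call sites self-describing.
type EntryType int

const (
	Normal EntryType = iota
	Cancel
)

// Wantlist is a minimal stand-in for illustration only.
type Wantlist struct {
	entries map[Key]int
}

// AddEntry records a want at the given priority, or drops it when t == Cancel.
func (w *Wantlist) AddEntry(k Key, priority int, t EntryType) {
	if w.entries == nil {
		w.entries = make(map[Key]int)
	}
	if t == Cancel {
		delete(w.entries, k)
		return
	}
	w.entries[k] = priority
}
```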
Large problem: right now, with massive bandwidth allocation, this means that actually sending out all the data for each round will take longer than the round timeout. On one hand, we can let round duration be determined by send time (just keep looping, and sleep while there is nothing to send (empty wantlists, we can use a condvar)). On the other hand, if we do, one slow peer may slow everything down, because if we wait for the network/transport to be done sending, we could be waiting a long time. (Waiting for the transport is not something we do currently (we buffer inside channels), but we may want to go that route (by shrinking the buffers), because otherwise rounds will be allocated one after another really fast and everything will just hang on the pipes.) Solution is non-trivial.
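As a rough illustration of the first option (round duration driven by send time, sleeping on a condvar while wantlists are empty), here is a standalone sketch. Every name in it is hypothetical, it is not the PR's code, and it deliberately ignores the slow-peer concern:

```go
package bitswap

import "sync"

// roundLoop sleeps on a condition variable while there is nothing to send
// and otherwise runs rounds back to back, so round duration follows send time.
type roundLoop struct {
	mu      sync.Mutex
	cond    *sync.Cond
	pending int  // queued entries, guarded by mu
	closed  bool // set by stop, guarded by mu
}

func newRoundLoop() *roundLoop {
	r := &roundLoop{}
	r.cond = sync.NewCond(&r.mu)
	return r
}

// enqueue records new work and wakes the worker.
func (r *roundLoop) enqueue(n int) {
	r.mu.Lock()
	r.pending += n
	r.mu.Unlock()
	r.cond.Signal()
}

// run executes rounds until stop is called. sendRound should block until the
// round's data is handed to the transport, which is exactly where the
// slow-peer concern above comes in.
func (r *roundLoop) run(sendRound func()) {
	for {
		r.mu.Lock()
		for r.pending == 0 && !r.closed {
			r.cond.Wait()
		}
		if r.closed {
			r.mu.Unlock()
			return
		}
		r.pending = 0
		r.mu.Unlock()

		sendRound()
	}
}

// stop wakes the worker so it can observe closed and exit.
func (r *roundLoop) stop() {
	r.mu.Lock()
	r.closed = true
	r.mu.Unlock()
	r.cond.Broadcast()
}
```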
may want to rebase this out before the branch gets big and it becomes more difficult to do
Force-pushed from 67af5a6 to 5613d0a.
How is the add/cat use case? Any issues with files > 100 MB?
(squashed the vendor fix so all commits build)
it's only used in two places, but i think we've been using maps on IPFS types so much now that the specificity is no longer necessary License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
If we put the lock next to the fields it protects, it can sometimes make it easier to reason about threadsafety. In this case, it reveals that the task queue (not threadsafe) isn't protected by the mutex, yet shared between the worker and callers. @whyrusleeping License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
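A small sketch of that convention, with hypothetical field names: grouping the mutex with the fields it guards makes the guarded set explicit, and makes an unguarded shared field stand out.

```go
package decision

import "sync"

// ledgerManager illustrates keeping the lock directly above what it protects.
type ledgerManager struct {
	lock    sync.Mutex
	ledgers map[string]*ledger // guarded by lock

	// taskQueue sits outside the guarded group on purpose: in the situation
	// described above, it was shared between the worker and callers without
	// being protected by the mutex.
	taskQueue *taskQueue
}

// ledger and taskQueue are stand-ins so the sketch compiles.
type ledger struct{}
type taskQueue struct{}
```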
bitswap keeps the threadsafe version. observing the ledger shows that it doesn't need it anymore (ledgermanager is protected and safe). License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
this opens up the possibility of having multiple queues. And for all outgoing messages to be managed by the decision engine License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
only sort SortedEntries() License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
addresses #438 (comment) License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
addresses #438 (comment) License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
not changing this because i don't want to write a test for it now License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
in many places, entries are assigned from one slice to another and in different goroutines. In one place, entries were modified (in the queue). To avoid shared mutable state, probably best to handle entries by value. License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
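For illustration, a tiny sketch of the by-value approach; `Entry` and `snapshot` here are hypothetical stand-ins, not the PR's types. Copying entries before they cross a goroutine boundary means a later mutation in the queue cannot race with a reader's copy.

```go
package decision

// Entry stands in for a wantlist entry.
type Entry struct {
	Key      string
	Priority int
}

// snapshot copies entries by value before handing them to another goroutine.
func snapshot(src []Entry) []Entry {
	out := make([]Entry, len(src))
	copy(out, src)
	return out
}
```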
License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
@whyrusleeping may wanna have a look and make sure i didn't screw anything up here

```
BenchmarkInstantaneousAddCat1MB-4    200    10763761 ns/op    97.42 MB/s
BenchmarkInstantaneousAddCat2MB-4
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x0 pc=0xbedd]

goroutine 14297 [running]:
github.com/jbenet/go-ipfs/exchange/bitswap/decision.(*taskQueue).Remove(0xc2087553a0, 0xc2085ef200, 0x22, 0x56f570, 0xc208367a40)
	/Users/btc/go/src/github.com/jbenet/go-ipfs/exchange/bitswap/decision/taskqueue.go:66 +0x82
github.com/jbenet/go-ipfs/exchange/bitswap/decision.(*Engine).MessageSent(0xc20871b5c0, 0x56f570, 0xc208367a40, 0x570040, 0xc208753d40, 0x0, 0x0)
	/Users/btc/go/src/github.com/jbenet/go-ipfs/exchange/bitswap/decision/engine.go:177 +0x29e
github.com/jbenet/go-ipfs/exchange/bitswap.(*bitswap).send(0xc20871b7a0, 0x56f4d8, 0xc208379800, 0x56f570, 0xc208367a40, 0x570040, 0xc208753d40, 0x0, 0x0)
	/Users/btc/go/src/github.com/jbenet/go-ipfs/exchange/bitswap/bitswap.go:352 +0x11c
github.com/jbenet/go-ipfs/exchange/bitswap.(*bitswap).taskWorker(0xc20871b7a0, 0x56f4d8, 0xc208379800)
	/Users/btc/go/src/github.com/jbenet/go-ipfs/exchange/bitswap/bitswap.go:238 +0x165
created by github.com/jbenet/go-ipfs/exchange/bitswap.New
	/Users/btc/go/src/github.com/jbenet/go-ipfs/exchange/bitswap/bitswap.go:66 +0x49e
```
License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
addresses... https://github.com/jbenet/go-ipfs/pull/438/files#r21878994 License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
@whyrusleeping License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
Force-pushed from 1be37e4 to 9fafec1.
addresses ipfs/kubo#438 (comment) License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
addresses ipfs/kubo#438 (comment) License: MIT Signed-off-by: Brian Tiger Chow <brian@perfmode.com>
This commit was moved from ipfs/go-bitswap@edaafa9
This PR implements the first revision of 'Bitswap Rounds' as defined by @jbenet in issue #417