feat(depthmonitor): depthmonitor service #3061
Conversation
pkg/node/node.go (Outdated)

```diff
@@ -849,6 +850,10 @@ func NewBee(interrupt chan os.Signal, addr string, publicKey *ecdsa.PublicKey, s
 	return nil, fmt.Errorf("pullsync protocol: %w", err)
 }
+	depthMonitor := depthmonitor.New(kad, pullSyncProtocol, storer, stateStore, logger, warmupTime)
```
depthmonitor should only run in full node mode.
> depthmonitor should only run in full node mode.
While I agree with that, it raises another question: Should light and ultra-light nodes be pinning chunks into a reserve to begin with? Or should preserveOrCache always cache and never preserve (pin) in any mode except full-node?
bee/pkg/localstore/mode_put.go, line 517 in 9ece392:

```go
func (db *DB) preserveOrCache(batch *leveldb.Batch, item shed.Item, forcePin, forceCache bool) (gcSizeChange, reserveSizeChange int64, err error) {
```
@ldeffenb preserveOrCache is only called when the put mode is sync or request. As light nodes and ultra-light nodes would only ingest data through the API, only the PutUpload mode will be used, which does not touch the reserve. So this component would not affect that behavior.
Thank you. I guess I didn't follow that rabbit far enough down the trail. I don't have time to look at it in detail at the moment, but aren't retrieved chunks stored with a PutRequest? That's actually the path to the reserve that I had in mind with the original question. If a (ultra) light node user retrieves chunks for local use, some of them will end up in the reserve via the PutRequest caching that may turn into preserving when stored locally.
Also, doesn't a PutUpload eventually turn into a SetSync invocation when the chunks are pushed into the swarm? SetSync also invokes preserveOrCache, so as I understand the code paths, even locally uploaded chunks may end up in the reserve on a light or ultra-light node.
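To make the decision being debated above concrete, here is a simplified, hypothetical sketch of the preserveOrCache branching under the behavior proposed in this thread (only full nodes preserve; light and ultra-light nodes always cache). The mode names follow the localstore's put-mode vocabulary, but the function body is an illustration, not the actual bee code.

```go
package main

import "fmt"

// putMode mirrors the localstore's ingestion paths discussed above.
type putMode int

const (
	putUpload  putMode = iota // data ingested through the local API
	putSync                   // data received via pull/push sync
	putRequest                // data fetched to serve a retrieval
)

// preserveOrCache sketches the proposed rule: a chunk is preserved
// (pinned into the reserve) only on a full node, when it arrived via
// sync or request and falls within the node's storage radius;
// everything else is merely cached and subject to garbage collection.
func preserveOrCache(mode putMode, fullNode, withinRadius bool) string {
	if fullNode && withinRadius && (mode == putSync || mode == putRequest) {
		return "preserve"
	}
	return "cache"
}

func main() {
	// A light node retrieving chunks for local use: cached, never reserved.
	fmt.Println(preserveOrCache(putRequest, false, true))
	// A full node syncing a chunk inside its radius: preserved.
	fmt.Println(preserveOrCache(putSync, true, true))
}
```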
Force-pushed from cdf7a6f to 1e2ac9d.
I'd suggest moving the depthmonitor package under the topology package.
Reviewed 16 of 19 files at r7, 1 of 4 files at r8, 4 of 4 files at r10, all commit messages.
Reviewable status: all files reviewed, 4 unresolved discussions (waiting on @acud, @aloknerurkar, @istae, @metacertain, and @notanatol)
pkg/depthmonitor/depthmonitor.go
line 52 at r10 (raw file):
```go
// Service implements the depthmonitor service
type Service struct {
	depthLock sync.Mutex
```
It isn't clear which fields the mutex is guarding, so it would be nice to add some comments here. It's also common practice to put the mutex right before the field(s) it guards.
pkg/depthmonitor/depthmonitor.go
line 197 at r10 (raw file):
```go
go func() {
	defer close(stopped)
	s.wg.Wait()
```
This WaitGroup seems a little bit like overkill; I'd suggest putting and using the stopped channel directly in the Service struct.
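A minimal sketch of that suggestion, assuming a single worker goroutine (names like quit, stopped, and manage are illustrative): the worker closes stopped itself, so no WaitGroup is needed.

```go
package main

import (
	"fmt"
	"time"
)

// Service holds the shutdown channels directly, as suggested.
type Service struct {
	quit    chan struct{} // closed by Close to signal shutdown
	stopped chan struct{} // closed by the worker once it has exited
}

func New() *Service {
	s := &Service{
		quit:    make(chan struct{}),
		stopped: make(chan struct{}),
	}
	go s.manage()
	return s
}

// manage is a placeholder for the real work loop; it signals its own
// exit by closing stopped, replacing the WaitGroup-wrapping goroutine.
func (s *Service) manage() {
	defer close(s.stopped)
	<-s.quit
}

// Close asks the worker to stop and waits for it, with a timeout.
func (s *Service) Close() error {
	close(s.quit)
	select {
	case <-s.stopped:
		return nil
	case <-time.After(5 * time.Second):
		return fmt.Errorf("depthmonitor: stopping timed out")
	}
}

func main() {
	s := New()
	fmt.Println(s.Close())
}
```

This works cleanly only while there is exactly one worker; with several goroutines the WaitGroup becomes the natural tool again.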
pkg/rate/rate.go
line 19 at r10 (raw file):
```go
type Rate struct {
	windows map[int64]int
	mtx     sync.Mutex
```
Is this mutex guarding all fields?
pkg/rate/rate.go
line 73 at r10 (raw file):
```go
}

func (r *Rate) SetTimeFunc(f func() int64) {
```
I assume that this method exists only because of testing; if so, I'd suggest moving it into the export_test.go file.
Reviewed 6 of 6 files at r11, all commit messages.
Reviewable status: all files reviewed, 4 unresolved discussions (waiting on @acud, @aloknerurkar, @istae, @metacertain, and @notanatol)
Reviewed 15 of 19 files at r7, 4 of 6 files at r11, 2 of 2 files at r12, all commit messages.
Reviewable status: all files reviewed, 9 unresolved discussions (waiting on @aloknerurkar, @istae, @metacertain, and @notanatol)
pkg/localstore/localstore.go
line 666 at r12 (raw file):
```go
}

func (db *DB) Size() (uint64, error) {
```
I would rename these to ReserveSize and ReserveCapacity, since the "size" of the localstore can have different semantics under different circumstances.
pkg/rate/rate.go
line 20 at r12 (raw file):
```go
mtx        sync.Mutex
windows    map[int64]int
windowSize int64 // window size in milliseconds
```
I think that working with the standard library time.Time types here would be nicer, also for the now function.
pkg/rate/rate.go
line 62 at r12 (raw file):
```go
// cleanup removes windows older than the most recent two windows
func (r *Rate) cleanup() {
```
The comment should mention that this method must be called while holding the lock.
pkg/rate/rate.go
line 67 at r12 (raw file):
```go
for k := range r.windows {
	if k <= window-2 {
```
What does 2 mean in this sense? Would it make sense to factor it out into a variable/const?
pkg/rate/rate_test.go
line 67 at r12 (raw file):
```go
got := rate.Rate()
want := float64(r + (r - r/int(tc)))
```
Generally I would go with the approach of hard-coding r and want at the test-case level, rather than calculating them dynamically within the test.
@istae when you use external PRs to resolve the comments on this PR, it makes it more difficult to follow up on the review process: the reviewers now have to go through the diffs and resolve the comments themselves. I would advise addressing the comments within the PR in the future. Also, since you worked on the PR, maybe it is better that you don't approve it yourself :)
removing accidental approval
test
test
Force-pushed from 7430504 to 3ce7d99.
Some issues not in scope for this PR were discovered in gc (#3231).
LGTM