
feat(depthmonitor): depthmonitor service #3061

Merged: 3 commits from depthmonitor.0 into master on Aug 30, 2022
Conversation

@aloknerurkar (Contributor) commented Jul 1, 2022

Checklist

  • I have read the coding guide
  • My change requires a documentation update and I have done it
  • I have added tests to cover my changes.


pkg/node/node.go (outdated; thread resolved)
@aloknerurkar aloknerurkar marked this pull request as ready for review July 4, 2022 10:25
@aloknerurkar added the "ready for review" label Jul 4, 2022
pkg/node/node.go (outdated)

```go
@@ -849,6 +850,10 @@ func NewBee(interrupt chan os.Signal, addr string, publicKey *ecdsa.PublicKey, s
	return nil, fmt.Errorf("pullsync protocol: %w", err)
}

depthMonitor := depthmonitor.New(kad, pullSyncProtocol, storer, stateStore, logger, warmupTime)
```
Member left a comment
depthmonitor should only run in full node mode.
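
For illustration, a minimal sketch of that gating in NewBee, assuming a full-node flag on the node options (both o.FullNodeMode and b.depthMonitorCloser are assumed names here, not taken from the PR):

```go
// Sketch only: construct and track the depth monitor solely for full
// nodes; light and ultra-light nodes skip reserve-depth tracking.
if o.FullNodeMode {
	depthMonitor := depthmonitor.New(kad, pullSyncProtocol, storer, stateStore, logger, warmupTime)
	b.depthMonitorCloser = depthMonitor
}
```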

@ldeffenb (Collaborator) commented Jul 4, 2022

> depthmonitor should only run in full node mode.

While I agree with that, it raises another question: Should light and ultra-light nodes be pinning chunks into a reserve to begin with? Or should preserveOrCache always cache and never preserve (pin) in any mode except full-node?

```go
func (db *DB) preserveOrCache(batch *leveldb.Batch, item shed.Item, forcePin, forceCache bool) (gcSizeChange, reserveSizeChange int64, err error) {
```

@aloknerurkar (Contributor, Author) commented Jul 5, 2022

@ldeffenb preserveOrCache is only called when the put mode is sync or request. Since light and ultra-light nodes only ingest data through the API, only the PutUpload mode is used, which does not touch the reserve. So this component would not affect that behavior.
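
A sketch of the mode dispatch being described, simplified from the real localstore put logic (the switch below is illustrative, not the PR's code):

```go
// Only sync and request puts reach preserveOrCache and hence the
// reserve accounting; upload puts go to the push index instead.
switch mode {
case storage.ModePutSync, storage.ModePutRequest:
	gcSizeChange, reserveSizeChange, err = db.preserveOrCache(batch, item, false, false)
case storage.ModePutUpload:
	// upload path: indexed for push-sync, reserve untouched
}
```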

@ldeffenb (Collaborator) commented

Thank you. I guess I didn't follow that rabbit far enough down the trail. I don't have time to look at it in detail at the moment, but aren't retrieved chunks stored with a PutRequest? That's actually the path to the reserve that I had in mind with the original question. If an (ultra-)light node user retrieves chunks for local use, some of them will end up in the reserve via the PutRequest caching, which may turn into preserving when stored locally.

@ldeffenb (Collaborator) commented

Also, doesn't a PutUpload eventually turn into a SetSync invocation when the chunks are pushed into the swarm? SetSync also invokes preserveOrCache, so even locally uploaded chunks may end up in the reserve on a light or ultra-light node, as I understand the code paths.

@mrekucci mrekucci requested a review from istae July 6, 2022 11:39
@mrekucci (Contributor) left a comment

I'd suggest moving the depthmonitor package under the topology package.

Reviewed 16 of 19 files at r7, 1 of 4 files at r8, 4 of 4 files at r10, all commit messages.
Reviewable status: all files reviewed, 4 unresolved discussions (waiting on @acud, @aloknerurkar, @istae, @metacertain, and @notanatol)


pkg/depthmonitor/depthmonitor.go line 52 at r10 (raw file):

```go
// Service implements the depthmonitor service
type Service struct {
	depthLock        sync.Mutex
```

It isn't clear which fields the mutex is guarding, so it would be nice to add some comments here. It's also common practice to put the mutex right before the field(s) it guards.
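
A minimal sketch of that convention (the guarded field names below are assumed for illustration):

```go
// Sketch: the mutex sits directly above the fields it guards, and the
// comment names them explicitly.
type Service struct {
	// depthLock guards storageDepth and oldStorageRadius.
	depthLock        sync.Mutex
	storageDepth     uint8
	oldStorageRadius uint8
}
```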


pkg/depthmonitor/depthmonitor.go line 197 at r10 (raw file):

```go
	go func() {
		defer close(stopped)
		s.wg.Wait()
```

This WaitGroup seems a little bit like overkill; I'd suggest putting and using the stopped channel directly in the Service struct.
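
A sketch of that suggestion, with the stopped channel owned by the Service instead of a WaitGroup (all names and the timeout are illustrative):

```go
package depthmonitor

import (
	"errors"
	"time"
)

type Service struct {
	quit    chan struct{} // closed by Close to request shutdown
	stopped chan struct{} // closed by manage when its loop exits
}

func (s *Service) manage(interval time.Duration) {
	defer close(s.stopped)
	t := time.NewTicker(interval)
	defer t.Stop()
	for {
		select {
		case <-s.quit:
			return
		case <-t.C:
			// periodic depth adjustment would happen here
		}
	}
}

func (s *Service) Close() error {
	close(s.quit)
	select {
	case <-s.stopped:
		return nil
	case <-time.After(5 * time.Second):
		return errors.New("depthmonitor: shutdown timed out")
	}
}
```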


pkg/rate/rate.go line 19 at r10 (raw file):

```go
type Rate struct {
	windows    map[int64]int
	mtx        sync.Mutex
```

Is this mutex guarding all fields?


pkg/rate/rate.go line 73 at r10 (raw file):

```go
}

func (r *Rate) SetTimeFunc(f func() int64) {
```

I assume this method exists only for testing; if so, I'd suggest moving it into the export_test.go file.
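
For reference, a sketch of the export_test.go pattern: the file lives in the same package but is compiled only during tests, so the hook never ships in the production API (the now field is assumed):

```go
// pkg/rate/export_test.go
package rate

// SetTimeFunc swaps the clock used by Rate; visible to tests only.
func (r *Rate) SetTimeFunc(f func() int64) {
	r.now = f
}
```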

@mrekucci (Contributor) left a comment

Reviewed 6 of 6 files at r11, all commit messages.
Reviewable status: all files reviewed, 4 unresolved discussions (waiting on @acud, @aloknerurkar, @istae, @metacertain, and @notanatol)

@acud (Member) left a comment

Reviewed 15 of 19 files at r7, 4 of 6 files at r11, 2 of 2 files at r12, all commit messages.
Reviewable status: all files reviewed, 9 unresolved discussions (waiting on @aloknerurkar, @istae, @metacertain, and @notanatol)


pkg/localstore/localstore.go line 666 at r12 (raw file):

```go
}

func (db *DB) Size() (uint64, error) {
```

I would rename these fields to ReserveSize and ReserveCapacity, since the "size" of the localstore can have different semantics under different circumstances.
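
A sketch of the suggested naming (the delegation is illustrative; a real rename would replace Size outright):

```go
// ReserveSize reports the number of chunks currently in the reserve.
// In this sketch it simply delegates to the existing method.
func (db *DB) ReserveSize() (uint64, error) {
	return db.Size()
}
```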


pkg/rate/rate.go line 20 at r12 (raw file):

```go
	mtx        sync.Mutex
	windows    map[int64]int
	windowSize int64        // window size in milliseconds
```

I think working with the standard library time types here would be nicer, also for the now function.
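
A sketch of that suggestion, using time.Duration for the window size and a time.Time-returning clock (field names assumed):

```go
package rate

import (
	"sync"
	"time"
)

type Rate struct {
	mtx        sync.Mutex
	windows    map[int64]int
	windowSize time.Duration    // e.g. 15 * time.Minute
	now        func() time.Time // swappable clock for tests
}

// windowIndex maps a point in time onto its window bucket.
func (r *Rate) windowIndex(t time.Time) int64 {
	return t.UnixNano() / int64(r.windowSize)
}
```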


pkg/rate/rate.go line 62 at r12 (raw file):

```go
// cleanup removes windows older than the most recent two windows
func (r *Rate) cleanup() {
```

The comment should mention that this method must be called under lock.
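
For example, the doc comment could carry the locking contract explicitly:

```go
// cleanup removes windows older than the most recent two windows.
// Callers must hold r.mtx.
func (r *Rate) cleanup() {
```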


pkg/rate/rate.go line 67 at r12 (raw file):

```go
	for k := range r.windows {
		if k <= window-2 {
```

What does the 2 mean here? Would it make sense to factor it out into a named constant?
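
A sketch of the extraction, with an assumed constant name:

```go
// keepWindows is how many recent windows cleanup retains: the current
// window plus the previous one.
const keepWindows = 2

for k := range r.windows {
	if k <= window-keepWindows {
		delete(r.windows, k)
	}
}
```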


pkg/rate/rate_test.go line 67 at r12 (raw file):

```go
		got := rate.Rate()
		want := float64(r + (r - r/int(tc)))
```

Generally I would hard-code r and want at the test-case level rather than calculating them dynamically within the test.
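
A sketch of that table-driven style, with the expected values spelled out per case (the numbers are illustrative):

```go
for _, tc := range []struct {
	name string
	r    int     // increments fed into the counter
	want float64 // hard-coded expectation, not derived in the loop
}{
	{name: "one window", r: 100, want: 100},
	{name: "two windows", r: 100, want: 150},
} {
	t.Run(tc.name, func(t *testing.T) {
		// ...feed tc.r increments into the counter, then:
		if got := rate.Rate(); got != tc.want {
			t.Fatalf("rate: got %v, want %v", got, tc.want)
		}
	})
}
```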

@acud (Member) commented Jul 13, 2022

@istae when you use external PRs to resolve the comments on this PR, it makes the review harder to follow: reviewers now have to go through the diffs and resolve the comments themselves. I would advise addressing the comments within the PR in the future. Also, since you worked on the PR, maybe it is better that you don't approve it yourself :)

@istae istae requested a review from acud July 18, 2022 09:15
@istae (Member) left a comment

removing accidental approval

@istae (Member) left a comment

test

@istae (Member) left a comment

test

@istae istae self-requested a review July 18, 2022 09:16
@istae force-pushed the depthmonitor.0 branch 2 times, most recently from 7430504 to 3ce7d99, on July 25, 2022 10:27
@istae istae requested a review from mrekucci August 4, 2022 11:26
@istae istae linked an issue Aug 4, 2022 that may be closed by this pull request
@metacertain (Member) left a comment

An issue not in scope for this PR was discovered in gc (#3231).
LGTM

@istae istae merged commit 9c5cb2a into master Aug 30, 2022
@istae istae deleted the depthmonitor.0 branch August 30, 2022 08:24
@aloknerurkar aloknerurkar added this to the 1.8.0 milestone Sep 13, 2022
Labels: pull-request, ready for review

Successfully merging this pull request may close the issue: Depth Package - storage incentives

7 participants