
feat: write cache in batches #13

Merged: merged 10 commits into master from write-cache-batch, Apr 12, 2022

Conversation

rem1niscence
Contributor

Writes the cache items in a single batched operation, regardless of the number of sessions checked.

Note: this is my first time writing concurrent code for the monitor, so it could definitely be better; open to any suggestions.
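The batching approach can be sketched roughly as follows. cacheItem mirrors the struct added in the PR, but collectBatch and everything in main are simplified stand-ins for the monitor's real session checks, not the actual implementation:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// cacheItem mirrors the struct introduced in this PR.
type cacheItem struct {
	Key   string
	Value interface{}
	Ttl   time.Duration
}

// collectBatch drains the channel until it is closed, gathering every item
// so they can then be written to the cache in a single operation.
func collectBatch(batch <-chan *cacheItem) []*cacheItem {
	var items []*cacheItem
	for item := range batch {
		items = append(items, item)
	}
	return items
}

func main() {
	batch := make(chan *cacheItem, 10)

	// One consumer goroutine accumulates the batch.
	var cacheWg sync.WaitGroup
	cacheWg.Add(1)
	var collected []*cacheItem
	go func() {
		defer cacheWg.Done()
		collected = collectBatch(batch)
		// The real code would issue one batched Redis write here.
	}()

	// Producers (the session checks) send items concurrently.
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			batch <- &cacheItem{
				Key:   fmt.Sprintf("session-%d", i),
				Value: "payload",
				Ttl:   30 * time.Second,
			}
		}(i)
	}

	wg.Wait()      // all producers finished
	close(batch)   // now it is safe to close the channel
	cacheWg.Wait() // wait for the batch writer to drain it

	fmt.Println(len(collected))
}
```

The key property is that the cache is touched once per batch instead of once per session check.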

@height

height bot commented Apr 8, 2022

Link Height tasks by mentioning a task ID in the pull request title, commit messages, description, or comments, together with the keyword "link" (e.g. "Link T-123").

💡Tip: You can also use "Close T-X" to automatically close a task when the pull request is merged.

Contributor

@Evalir Evalir left a comment

Overall looking solid, and monitorCacheBatch/writeCacheBatch is pretty straightforward to understand. Just a few comments:

type cacheItem struct {
Key string
Value interface{}
Ttl time.Duration
Contributor

Suggested change:
-	Ttl time.Duration
+	TTL time.Duration

Just wanna make it consistent with the rest of the codebase

Comment on lines 227 to 229
close(batch)
// Wait for the remaining items in the batch if any
cacheWg.Wait()
Contributor

Won't there be a problem if you close the channel before the goroutine is done?

Contributor Author

As you can see, it happens after the sync/chain-checks WaitGroup finishes (I mean after the wg.Wait() on line 225), so no more items will be sent to the batch channel.
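For context on the ordering question above: the concern is real in general, since a send on a closed channel panics at runtime, which is exactly why the producers' wg.Wait() has to happen before close(batch). A minimal stand-alone sketch of the failure mode:

```go
package main

import "fmt"

// sendAfterClose reports whether sending on an already-closed channel
// panics (it always does; recover captures the runtime panic).
func sendAfterClose() (panicked bool) {
	defer func() { panicked = recover() != nil }()
	ch := make(chan int, 1)
	close(ch)
	ch <- 1 // this send panics: the channel is closed
	return
}

func main() {
	fmt.Println(sendAfterClose()) // true
}
```

As long as close(batch) runs strictly after every sender's Done(), this situation can't occur.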

-func (ac *applicationChecks) eraseNodesFailureMark(ctx context.Context, options pocket.SyncCheckOptions, nodes []string, caches []*cache.Redis) error {
-	getNodeFailureKey := func(blockchain, node string) string {
+func (ac *applicationChecks) eraseNodesFailureMark(ctx context.Context, options pocket.SyncCheckOptions, nodes []string, caches []*cache.Redis) {
+	nodeFailureKey := func(blockchain, node string) string {
Contributor

More of a Golang question: is it common to declare small utility functions like this inline? Asking because to me this looks like something that could go in a utils file.

Contributor Author

I wouldn't say it's a Golang thing, and you're definitely right in general. In this case I put it there because there's no other place where it's needed, so IMO moving it would pollute the utils file with something that's only ever used in one place.
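The trade-off being discussed might look like this. eraseNodesFailureMark is trimmed down to a stand-in here, and the key format is invented for illustration; only the inline-closure pattern itself is from the PR:

```go
package main

import "fmt"

// eraseNodesFailureMark is a simplified stand-in for the method in the PR.
// Declaring nodeFailureKey as a closure keeps it scoped to the only
// function that needs it, instead of exporting it from a shared utils file.
func eraseNodesFailureMark(blockchain string, nodes []string) []string {
	// Hypothetical key format, for illustration only.
	nodeFailureKey := func(blockchain, node string) string {
		return fmt.Sprintf("%s-%s-failure", blockchain, node)
	}

	keys := make([]string, 0, len(nodes))
	for _, node := range nodes {
		keys = append(keys, nodeFailureKey(blockchain, node))
	}
	return keys
}

func main() {
	fmt.Println(eraseNodesFailureMark("0021", []string{"node1", "node2"}))
}
```

Moving the helper to a utils file only pays off once a second caller appears.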

// Embedding the current block height within the session so it can be checked when read from the cache
session.BlockHeight = dispatch.BlockHeight

-	err = cache.WriteJSONToCaches(ctx, caches, cacheKey, session, uint(cacheTTL))
+	sessionMarshalled, err := json.Marshal(session)
Contributor

Suggested change:
-	sessionMarshalled, err := json.Marshal(session)
+	marshalledSession, err := json.Marshal(session)


batch <- &cache.Item{
Key: cacheKey,
Value: sessionMarshalled,
Contributor

Suggested change:
-	Value: sessionMarshalled,
+	Value: marshalledSession,

@@ -292,7 +292,7 @@ func (ac *applicationChecks) syncCheck(ctx context.Context, options pocket.SyncC
ttl = 30
}

-	marshalledNodes, err := json.Marshal(nodes)
+	nodesMarshalled, err := json.Marshal(nodes)
Contributor

Suggested change:
-	nodesMarshalled, err := json.Marshal(nodes)
+	marshalledNodes, err := json.Marshal(nodes)

	Key:   cacheKey,
-	Value: marshalledNodes,
+	Value: nodesMarshalled,
Contributor

Suggested change:
-	Value: nodesMarshalled,
+	Value: marshalledNodes,

@rem1niscence rem1niscence merged commit 2ca5d21 into master Apr 12, 2022
@rem1niscence rem1niscence deleted the write-cache-batch branch April 12, 2022 17:11