
Horizon Lite: Improve the performance and functionality of the batch-based indexer. #4566

Closed · 3 tasks · Tracked by #4571

sreuland commented Aug 31, 2022

Context

There are several necessary improvements to the existing map-reduce batch job for index creation:

  • Poor performance: reduce throughput is very low when the target/source index is remote, for example S3 (jobs never complete, running forever and churning slowly in the account/tx merging routines).
  • Low visibility into performance: I/O rates are opaque because the jobs emit almost no metrics or logging.
  • Lack of flexibility: the reduce job operates on all modules, even if the map job only specified one module.

Suggestions

  1. In the tx index merge routine, first query the 'source' index for map job output under the tx/ folder; if the map job output has no 'tx' folder, skip iterating all 255 tx prefixes entirely. (This happens when the map job was configured to exclude transactions from its MODULES.)
  2. Change the entire map/reduce flow to use a shared persistent volume across all workers, then upload the volume to the remote store once at the end:
    • have all map jobs write to a single on-disk volume or storage source,
    • have the reduce jobs merge them together on that same on-disk source,
    • as a final step, upload/sync that disk to the remote target index.
  3. For account index merging, pre-download all of the map jobs' account summary files from the 'source' index and load them into a map keyed job_id:account_id -> present. The per-worker, per-account merge loop can then check that map for account presence first, avoiding iterative network round trips to the remote 'source' index that would return empty responses anyway.

Acceptance Criteria

It's entirely possible that this task can/should be broken down into many sub-tasks based on the above suggestions, but the general criteria for completion should be:

  • Emit more metrics, such as upload times, from both the map and reduce jobs.
  • The reduce job does not do unnecessary work when the map job did not apply all modules (per the first suggestion above).
  • The performance of the reduce batch job is significantly improved (per all three suggestions).
@sreuland sreuland moved this to Next Sprint Proposal in Platform Scrum Aug 31, 2022
@Shaptic Shaptic changed the title exp/lighthorizon/cmd/batch: performance optimizations for reduce Horizon Lite: Optimize the performance of the indexer reduce job. Aug 31, 2022
@Shaptic Shaptic changed the title Horizon Lite: Optimize the performance of the indexer reduce job. Horizon Lite: Improve the performance and functionality of the batch-based indexer. Aug 31, 2022
@Shaptic Shaptic mentioned this issue Sep 1, 2022
7 tasks

2opremio commented Sep 1, 2022

We should also consider using something other than S3, since we may not end up using S3 in production (for cost reasons).

@jcx120 jcx120 moved this from Next Sprint Proposal to Backlog in Platform Scrum Sep 1, 2022

sreuland commented Sep 1, 2022

@Shaptic @2opremio, I re-worded the acceptance criteria per the scrum feedback to make this ticket's scope S3-agnostic and more about optimization regardless of the 'target' index's interface (S3, file, others).

Labels: None yet
Projects: Status: Done
Development: No branches or pull requests
4 participants