
feat: move libp2p to network worker thread #5229

Merged — 22 commits merged into unstable on May 17, 2023

Conversation

dapplion
Contributor

@dapplion dapplion commented Mar 3, 2023

Motivation

Goals

  • Merge a first minimum viable solution, going for simplicity first and optimization later
  • Allow libp2p to be run either in the main thread or in a worker by switching a runtime flag
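The runtime-flag goal can be sketched as a small factory over a shared interface. `NetworkCore`, `useWorker`, and the class names below are illustrative, not Lodestar's actual API:

```typescript
// A common interface both implementations satisfy (illustrative)
interface NetworkCore {
  publishGossip(topic: string, data: Uint8Array): Promise<void>;
}

// Runs libp2p directly on the main thread
class MainThreadNetworkCore implements NetworkCore {
  async publishGossip(_topic: string, _data: Uint8Array): Promise<void> {
    // ...call into libp2p directly
  }
}

// Proxies every call to the worker via postMessage and awaits the response
class WorkerNetworkCore implements NetworkCore {
  async publishGossip(_topic: string, _data: Uint8Array): Promise<void> {
    // ...post a message to the worker and await its reply
  }
}

function createNetworkCore(opts: {useWorker: boolean}): NetworkCore {
  return opts.useWorker ? new WorkerNetworkCore() : new MainThreadNetworkCore();
}
```

Because both variants share one async interface, consumers need no changes when the flag flips.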

Description

TBD

TODO

  • Wire metrics on libp2p worker
  • Connect NetworkEventBus through the worker boundary
  • Complete reqresp async iterables via events
  • Handle serialization/deserialization of req/resp across worker boundary
  • Move discv5 worker out of libp2p worker, and connect them via main thread
  • Logger in worker with user provided settings
  • Add metrics to track async iterable bridge Maps
  • Add tests for all events and methods crossing the worker boundary to programmatically ensure the structures are clonable. Otherwise a non-clonable payload causes a runtime error and kills Lodestar
  • Add more tests for both usage modes (in worker, outside worker)
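The clonability tests described above can be sketched with the global `structuredClone` (Node.js 17+), which applies the same structured clone algorithm as `postMessage`. The payload shapes here are illustrative, not Lodestar's real event types:

```typescript
// Throws if a value would not survive postMessage's structured clone.
// structuredClone is a global in Node.js 17+.
function assertClonable(value: unknown): void {
  try {
    structuredClone(value);
  } catch (e) {
    throw new Error(`value is not structured-clonable: ${(e as Error).message}`);
  }
}

// Clonable: plain objects, arrays, typed arrays, Maps, Sets
assertClonable({topic: "beacon_block", data: new Uint8Array([1, 2, 3])});

// Not clonable: anything carrying functions, e.g. callbacks or class methods.
// assertClonable({onResponse: () => {}}); // throws DataCloneError
```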

Closes #5447

@wemeetagain
Member

todo
Move discv5 worker out of libp2p worker, and connect them via main thread

why should this happen?

@dapplion
Contributor Author

dapplion commented Mar 7, 2023

@wemeetagain Can a worker spawn another worker?

@wemeetagain
Member

@wemeetagain Can a worker spawn another worker?

Afaik yes
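This is easy to verify with Node's `worker_threads`: a worker can indeed spawn another worker. A quick self-contained check (the inline `eval` scripts are only for illustration):

```typescript
import {Worker} from "node:worker_threads";

// Outer worker script: itself spawns a second-level worker and relays its message.
const outerScript = `
  const {Worker, parentPort} = require("node:worker_threads");
  const inner = new Worker(
    'require("node:worker_threads").parentPort.postMessage("hello from nested worker")',
    {eval: true}
  );
  inner.on("message", (msg) => parentPort.postMessage(msg));
`;

function spawnNested(): Promise<string> {
  return new Promise((resolve, reject) => {
    const outer = new Worker(outerScript, {eval: true});
    outer.once("message", resolve);
    outer.once("error", reject);
  });
}
```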

@dapplion dapplion force-pushed the dapplion/network-thread branch from cee8592 to 74fd057 on March 7, 2023 13:13
@dapplion
Contributor Author

dapplion commented Mar 7, 2023

@wemeetagain regarding ReqResp

  • Split the ReqResp interface into two parts: IReqRespBeaconNodePeerManager is only used by the peer manager so it never has to cross the worker boundary; IReqRespBeaconNodeBeacon does have to cross it. Temporary names, let's fix them later.
  • Responses of the IReqRespBeaconNodeBeaconBytes methods are kept encoded and only decoded once in the main thread. That happens at the very top Network class to prevent code duplication with the current hierarchy.
  • For handling async generators through the worker boundary, see AsyncIterableWorker, which communicates all the necessary signals
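The async-generator bridging in the last bullet can be sketched as follows, assuming a message stream with explicit item/done/error signals. The message shapes and names are illustrative, not the actual AsyncIterableWorker implementation:

```typescript
// Message shapes crossing the worker boundary (illustrative)
type BridgeMsg<T> =
  | {type: "item"; item: T}
  | {type: "done"}
  | {type: "error"; message: string};

// Consumer side: buffer incoming messages and expose them as an async iterable.
function messagesToAsyncIterable<T>(
  subscribe: (onMsg: (msg: BridgeMsg<T>) => void) => void
): AsyncIterable<T> {
  const queue: BridgeMsg<T>[] = [];
  let notify: (() => void) | null = null;

  subscribe((msg) => {
    queue.push(msg);
    notify?.();
    notify = null;
  });

  return {
    async *[Symbol.asyncIterator]() {
      while (true) {
        while (queue.length === 0) {
          await new Promise<void>((resolve) => (notify = resolve));
        }
        const msg = queue.shift()!;
        if (msg.type === "item") yield msg.item;
        else if (msg.type === "done") return;
        else throw new Error(msg.message);
      }
    },
  };
}
```

A real bridge also needs backpressure and a return/cancel signal flowing the other way; this sketch only covers the forward path.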

@nazarhussain
Contributor

@dapplion I reviewed the changes, focusing mainly on the ReqResp changes. I found them understandable and usable.


The point I can't understand is why this effort is being done with such a particular focus on the network. I believe that for the planning phase we should expand our focus to a higher level and answer a few more questions.

  1. Which components can and can't be split?
  2. Which interfaces are most vulnerable to a DoS attack?
  3. Which interfaces are most impactful under a DoS attack?

We may definitely start with the network, but then the focus for the foreseeable future will be clear. E.g. I foresee similar arguments for the REST interface as well, but I can't see how this refactor would enable us to do that next.

@wemeetagain
Member

The network is the source of all untrusted and adversarial input, so it holds the vast majority of the risk.
The other factor we're considering is minimal disturbance to existing interfaces, especially interfaces with consumers that rely on fast synchronous access. Most data access across threads will become async, unless there is some caching across threads or use of exotic techniques à la SharedArrayBuffer. So this is another big consideration/constraint to think about when deciding what to split out and how. A third consideration is codebase simplicity/maintainability: having a solid grasp of the cost/benefit of splitting out a module and how that split relates to the rest of the codebase.

So the network is a natural candidate to split out first, with medium cost and high benefit. We can offload a lot of normal-case and worst-case work from the main thread, and the network is somewhat decoupled from the need for fast access to our chain. All connection management (especially in DoS cases), network crypto (handshakes, connection encryption/decryption), and simple req/resp (ping, status, metadata) can be handled entirely off the main thread.

My preference for further splitting is that we are careful to avoid splitting 'just because', and have a clear sense of the benefits.
IMO the next module with a clear benefit to split is backfill sync.
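The sync-vs-async constraint described above can be illustrated with a toy peer-score store. The names are hypothetical, and the worker round-trip is simulated with a resolved promise:

```typescript
// Main-thread version: a plain synchronous Map lookup.
class LocalPeerScores {
  private scores = new Map<string, number>();
  set(peerId: string, score: number): void {
    this.scores.set(peerId, score);
  }
  getScore(peerId: string): number {
    return this.scores.get(peerId) ?? 0;
  }
}

// Worker-backed version: the same data, but every read becomes an async
// round-trip, so all call sites must be rewritten to await it. The real
// round-trip (postMessage + response) is simulated here with a promise.
class RemotePeerScores {
  constructor(private readonly backing: LocalPeerScores) {}
  async getScore(peerId: string): Promise<number> {
    return this.backing.getScore(peerId);
  }
}
```

This is why consumers relying on fast synchronous access weigh so heavily in deciding what to split out.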

@dapplion dapplion force-pushed the dapplion/network-thread branch from 3d76f24 to 8cdd0c5 on May 2, 2023 10:55
@dapplion dapplion changed the title from "Move libp2p to a thread" to "Move libp2p to network worker thread" on May 2, 2023
@dapplion
Contributor Author

dapplion commented May 2, 2023

Rebased the branch on current unstable with almost everything implemented and wired.

Pushed the previous branch to a backup branch in case it needs review: https://github.com/ChainSafe/lodestar/compare/dapplion/network-thread-may1

CC @wemeetagain

@dapplion dapplion force-pushed the dapplion/network-thread branch from 8cdd0c5 to 6ba3c21 on May 2, 2023 11:16
@dapplion dapplion force-pushed the dapplion/network-thread branch from 6ba3c21 to 516b44e on May 2, 2023 12:00
@dapplion dapplion changed the title from "Move libp2p to network worker thread" to "feat: move libp2p to network worker thread" on May 3, 2023
@github-actions
Contributor

github-actions bot commented May 3, 2023

Performance Report

✔️ no performance regression detected

🚀🚀 Significant benchmark improvement detected

Benchmark suite Current: 83d3782 Previous: fe85482 Ratio
isKnown normal case - 2 super set checks 262.00 ns/op 912.00 ns/op 0.29
bytes32 Buffer.toString(hex) 443.00 ns/op 1.3410 us/op 0.33
bytes32 Buffer.toString(hex) from Uint8Array 631.00 ns/op 2.2980 us/op 0.27
Map access 1 prop 0.16600 ns/op 0.55500 ns/op 0.30
prioritizePeers score -10:0 att 32-0.1 sync 2-0 108.91 us/op 332.75 us/op 0.33
Full benchmark results
Benchmark suite Current: 83d3782 Previous: fe85482 Ratio
getPubkeys - index2pubkey - req 1000 vs - 250000 vc 623.45 us/op 1.3513 ms/op 0.46
getPubkeys - validatorsArr - req 1000 vs - 250000 vc 56.705 us/op 100.45 us/op 0.56
BLS verify - blst-native 1.2803 ms/op 2.2499 ms/op 0.57
BLS verifyMultipleSignatures 3 - blst-native 2.5531 ms/op 6.6265 ms/op 0.39
BLS verifyMultipleSignatures 8 - blst-native 5.3872 ms/op 13.310 ms/op 0.40
BLS verifyMultipleSignatures 32 - blst-native 19.525 ms/op 41.644 ms/op 0.47
BLS aggregatePubkeys 32 - blst-native 26.050 us/op 58.304 us/op 0.45
BLS aggregatePubkeys 128 - blst-native 101.50 us/op 231.85 us/op 0.44
getAttestationsForBlock 66.731 ms/op 156.85 ms/op 0.43
isKnown best case - 1 super set check 269.00 ns/op 735.00 ns/op 0.37
isKnown normal case - 2 super set checks 262.00 ns/op 912.00 ns/op 0.29
isKnown worse case - 16 super set checks 284.00 ns/op 523.00 ns/op 0.54
CheckpointStateCache - add get delete 6.6440 us/op 11.635 us/op 0.57
validate gossip signedAggregateAndProof - struct 2.8727 ms/op 6.0538 ms/op 0.47
validate gossip attestation - struct 1.3931 ms/op 2.7391 ms/op 0.51
pickEth1Vote - no votes 1.4763 ms/op 2.6228 ms/op 0.56
pickEth1Vote - max votes 12.620 ms/op 22.915 ms/op 0.55
pickEth1Vote - Eth1Data hashTreeRoot value x2048 10.228 ms/op 17.752 ms/op 0.58
pickEth1Vote - Eth1Data hashTreeRoot tree x2048 21.070 ms/op 37.228 ms/op 0.57
pickEth1Vote - Eth1Data fastSerialize value x2048 785.98 us/op 1.5945 ms/op 0.49
pickEth1Vote - Eth1Data fastSerialize tree x2048 5.4814 ms/op 18.104 ms/op 0.30
bytes32 toHexString 721.00 ns/op 1.3530 us/op 0.53
bytes32 Buffer.toString(hex) 443.00 ns/op 1.3410 us/op 0.33
bytes32 Buffer.toString(hex) from Uint8Array 631.00 ns/op 2.2980 us/op 0.27
bytes32 Buffer.toString(hex) + 0x 443.00 ns/op 847.00 ns/op 0.52
Object access 1 prop 0.20400 ns/op 0.58600 ns/op 0.35
Map access 1 prop 0.16600 ns/op 0.55500 ns/op 0.30
Object get x1000 6.9330 ns/op 15.570 ns/op 0.45
Map get x1000 0.60000 ns/op 1.5400 ns/op 0.39
Object set x1000 69.783 ns/op 132.60 ns/op 0.53
Map set x1000 55.748 ns/op 119.30 ns/op 0.47
Return object 10000 times 0.24430 ns/op 1.1099 ns/op 0.22
Throw Error 10000 times 4.2875 us/op 14.022 us/op 0.31
fastMsgIdFn sha256 / 200 bytes 3.5890 us/op 7.6990 us/op 0.47
fastMsgIdFn h32 xxhash / 200 bytes 308.00 ns/op 723.00 ns/op 0.43
fastMsgIdFn h64 xxhash / 200 bytes 454.00 ns/op 901.00 ns/op 0.50
fastMsgIdFn sha256 / 1000 bytes 11.834 us/op 22.596 us/op 0.52
fastMsgIdFn h32 xxhash / 1000 bytes 463.00 ns/op 1.0840 us/op 0.43
fastMsgIdFn h64 xxhash / 1000 bytes 542.00 ns/op 1.0240 us/op 0.53
fastMsgIdFn sha256 / 10000 bytes 103.93 us/op 216.10 us/op 0.48
fastMsgIdFn h32 xxhash / 10000 bytes 2.0010 us/op 3.8020 us/op 0.53
fastMsgIdFn h64 xxhash / 10000 bytes 1.4830 us/op 2.9570 us/op 0.50
enrSubnets - fastDeserialize 64 bits 1.5480 us/op 3.1370 us/op 0.49
enrSubnets - ssz BitVector 64 bits 590.00 ns/op 1.3300 us/op 0.44
enrSubnets - fastDeserialize 4 bits 177.00 ns/op 400.00 ns/op 0.44
enrSubnets - ssz BitVector 4 bits 669.00 ns/op 1.2040 us/op 0.56
prioritizePeers score -10:0 att 32-0.1 sync 2-0 108.91 us/op 332.75 us/op 0.33
prioritizePeers score 0:0 att 32-0.25 sync 2-0.25 152.22 us/op 434.57 us/op 0.35
prioritizePeers score 0:0 att 32-0.5 sync 2-0.5 176.36 us/op 422.49 us/op 0.42
prioritizePeers score 0:0 att 64-0.75 sync 4-0.75 356.36 us/op 827.01 us/op 0.43
prioritizePeers score 0:0 att 64-1 sync 4-1 431.94 us/op 1.2227 ms/op 0.35
array of 16000 items push then shift 1.6751 us/op 4.4666 us/op 0.38
LinkedList of 16000 items push then shift 9.1360 ns/op 19.469 ns/op 0.47
array of 16000 items push then pop 124.21 ns/op 187.50 ns/op 0.66
LinkedList of 16000 items push then pop 9.2730 ns/op 19.417 ns/op 0.48
array of 24000 items push then shift 2.3815 us/op 6.3865 us/op 0.37
LinkedList of 24000 items push then shift 9.7160 ns/op 18.914 ns/op 0.51
array of 24000 items push then pop 86.402 ns/op 159.25 ns/op 0.54
LinkedList of 24000 items push then pop 9.3500 ns/op 17.349 ns/op 0.54
intersect bitArray bitLen 8 13.322 ns/op 30.377 ns/op 0.44
intersect array and set length 8 99.598 ns/op 154.45 ns/op 0.64
intersect bitArray bitLen 128 45.575 ns/op 83.662 ns/op 0.54
intersect array and set length 128 1.2336 us/op 2.2222 us/op 0.56
Buffer.concat 32 items 3.2110 us/op 4.6030 us/op 0.70
Uint8Array.set 32 items 2.8440 us/op 3.6460 us/op 0.78
pass gossip attestations to forkchoice per slot 2.4041 ms/op 3.4259 ms/op 0.70
computeDeltas 3.0251 ms/op 3.9837 ms/op 0.76
computeProposerBoostScoreFromBalances 1.7903 ms/op 2.1806 ms/op 0.82
altair processAttestation - 250000 vs - 7PWei normalcase 2.9946 ms/op 2.9860 ms/op 1.00
altair processAttestation - 250000 vs - 7PWei worstcase 3.7198 ms/op 5.5834 ms/op 0.67
altair processAttestation - setStatus - 1/6 committees join 142.30 us/op 181.51 us/op 0.78
altair processAttestation - setStatus - 1/3 committees join 282.30 us/op 348.66 us/op 0.81
altair processAttestation - setStatus - 1/2 committees join 377.07 us/op 514.54 us/op 0.73
altair processAttestation - setStatus - 2/3 committees join 469.94 us/op 573.87 us/op 0.82
altair processAttestation - setStatus - 4/5 committees join 660.57 us/op 1.2221 ms/op 0.54
altair processAttestation - setStatus - 100% committees join 756.75 us/op 1.1718 ms/op 0.65
altair processBlock - 250000 vs - 7PWei normalcase 18.438 ms/op 37.549 ms/op 0.49
altair processBlock - 250000 vs - 7PWei normalcase hashState 25.681 ms/op 41.369 ms/op 0.62
altair processBlock - 250000 vs - 7PWei worstcase 56.432 ms/op 75.814 ms/op 0.74
altair processBlock - 250000 vs - 7PWei worstcase hashState 67.751 ms/op 89.426 ms/op 0.76
phase0 processBlock - 250000 vs - 7PWei normalcase 2.1368 ms/op 3.1777 ms/op 0.67
phase0 processBlock - 250000 vs - 7PWei worstcase 28.885 ms/op 38.099 ms/op 0.76
altair processEth1Data - 250000 vs - 7PWei normalcase 526.79 us/op 712.27 us/op 0.74
vc - 250000 eb 1 eth1 1 we 0 wn 0 - smpl 15 8.0100 us/op 8.0690 us/op 0.99
vc - 250000 eb 0.95 eth1 0.1 we 0.05 wn 0 - smpl 219 24.976 us/op 35.539 us/op 0.70
vc - 250000 eb 0.95 eth1 0.3 we 0.05 wn 0 - smpl 42 9.9680 us/op 10.610 us/op 0.94
vc - 250000 eb 0.95 eth1 0.7 we 0.05 wn 0 - smpl 18 8.0360 us/op 7.3160 us/op 1.10
vc - 250000 eb 0.1 eth1 0.1 we 0 wn 0 - smpl 1020 102.32 us/op 81.778 us/op 1.25
vc - 250000 eb 0.03 eth1 0.03 we 0 wn 0 - smpl 11777 646.66 us/op 701.15 us/op 0.92
vc - 250000 eb 0.01 eth1 0.01 we 0 wn 0 - smpl 16384 905.95 us/op 1.6600 ms/op 0.55
vc - 250000 eb 0 eth1 0 we 0 wn 0 - smpl 16384 872.86 us/op 1.3277 ms/op 0.66
vc - 250000 eb 0 eth1 0 we 0 wn 0 nocache - smpl 16384 2.3353 ms/op 2.7640 ms/op 0.84
vc - 250000 eb 0 eth1 1 we 0 wn 0 - smpl 16384 1.6829 ms/op 3.3164 ms/op 0.51
vc - 250000 eb 0 eth1 1 we 0 wn 0 nocache - smpl 16384 3.9445 ms/op 6.9399 ms/op 0.57
Tree 40 250000 create 320.36 ms/op 581.68 ms/op 0.55
Tree 40 250000 get(125000) 189.47 ns/op 232.70 ns/op 0.81
Tree 40 250000 set(125000) 1.0219 us/op 1.2209 us/op 0.84
Tree 40 250000 toArray() 22.032 ms/op 23.708 ms/op 0.93
Tree 40 250000 iterate all - toArray() + loop 22.178 ms/op 25.177 ms/op 0.88
Tree 40 250000 iterate all - get(i) 75.592 ms/op 90.338 ms/op 0.84
MutableVector 250000 create 11.418 ms/op 13.692 ms/op 0.83
MutableVector 250000 get(125000) 6.2620 ns/op 8.0990 ns/op 0.77
MutableVector 250000 set(125000) 292.18 ns/op 323.09 ns/op 0.90
MutableVector 250000 toArray() 2.9939 ms/op 3.8952 ms/op 0.77
MutableVector 250000 iterate all - toArray() + loop 3.1066 ms/op 3.7899 ms/op 0.82
MutableVector 250000 iterate all - get(i) 1.5050 ms/op 2.0817 ms/op 0.72
Array 250000 create 3.1964 ms/op 4.1144 ms/op 0.78
Array 250000 clone - spread 1.2147 ms/op 1.5382 ms/op 0.79
Array 250000 get(125000) 0.54900 ns/op 0.75900 ns/op 0.72
Array 250000 set(125000) 0.62200 ns/op 0.83100 ns/op 0.75
Array 250000 iterate all - loop 82.532 us/op 112.87 us/op 0.73
effectiveBalanceIncrements clone Uint8Array 300000 35.120 us/op 45.878 us/op 0.77
effectiveBalanceIncrements clone MutableVector 300000 342.00 ns/op 461.00 ns/op 0.74
effectiveBalanceIncrements rw all Uint8Array 300000 168.51 us/op 220.23 us/op 0.77
effectiveBalanceIncrements rw all MutableVector 300000 82.349 ms/op 107.97 ms/op 0.76
phase0 afterProcessEpoch - 250000 vs - 7PWei 113.78 ms/op 148.27 ms/op 0.77
phase0 beforeProcessEpoch - 250000 vs - 7PWei 44.238 ms/op 38.764 ms/op 1.14
altair processEpoch - mainnet_e81889 335.31 ms/op 402.07 ms/op 0.83
mainnet_e81889 - altair beforeProcessEpoch 73.192 ms/op 74.150 ms/op 0.99
mainnet_e81889 - altair processJustificationAndFinalization 16.553 us/op 19.447 us/op 0.85
mainnet_e81889 - altair processInactivityUpdates 5.9323 ms/op 6.1611 ms/op 0.96
mainnet_e81889 - altair processRewardsAndPenalties 53.657 ms/op 59.333 ms/op 0.90
mainnet_e81889 - altair processRegistryUpdates 2.5880 us/op 2.8060 us/op 0.92
mainnet_e81889 - altair processSlashings 468.00 ns/op 526.00 ns/op 0.89
mainnet_e81889 - altair processEth1DataReset 780.00 ns/op 900.00 ns/op 0.87
mainnet_e81889 - altair processEffectiveBalanceUpdates 1.2882 ms/op 1.6392 ms/op 0.79
mainnet_e81889 - altair processSlashingsReset 9.4980 us/op 5.9070 us/op 1.61
mainnet_e81889 - altair processRandaoMixesReset 7.0960 us/op 5.1510 us/op 1.38
mainnet_e81889 - altair processHistoricalRootsUpdate 758.00 ns/op 695.00 ns/op 1.09
mainnet_e81889 - altair processParticipationFlagUpdates 4.7270 us/op 3.8870 us/op 1.22
mainnet_e81889 - altair processSyncCommitteeUpdates 595.00 ns/op 697.00 ns/op 0.85
mainnet_e81889 - altair afterProcessEpoch 127.98 ms/op 150.85 ms/op 0.85
phase0 processEpoch - mainnet_e58758 378.20 ms/op 422.35 ms/op 0.90
mainnet_e58758 - phase0 beforeProcessEpoch 152.70 ms/op 162.07 ms/op 0.94
mainnet_e58758 - phase0 processJustificationAndFinalization 20.451 us/op 21.881 us/op 0.93
mainnet_e58758 - phase0 processRewardsAndPenalties 66.007 ms/op 66.785 ms/op 0.99
mainnet_e58758 - phase0 processRegistryUpdates 11.933 us/op 12.502 us/op 0.95
mainnet_e58758 - phase0 processSlashings 779.00 ns/op 980.00 ns/op 0.79
mainnet_e58758 - phase0 processEth1DataReset 644.00 ns/op 875.00 ns/op 0.74
mainnet_e58758 - phase0 processEffectiveBalanceUpdates 1.0344 ms/op 1.6269 ms/op 0.64
mainnet_e58758 - phase0 processSlashingsReset 4.9810 us/op 6.9690 us/op 0.71
mainnet_e58758 - phase0 processRandaoMixesReset 6.5220 us/op 8.1190 us/op 0.80
mainnet_e58758 - phase0 processHistoricalRootsUpdate 1.1380 us/op 1.4730 us/op 0.77
mainnet_e58758 - phase0 processParticipationRecordUpdates 4.3000 us/op 7.2400 us/op 0.59
mainnet_e58758 - phase0 afterProcessEpoch 100.25 ms/op 116.41 ms/op 0.86
phase0 processEffectiveBalanceUpdates - 250000 normalcase 1.2699 ms/op 1.8397 ms/op 0.69
phase0 processEffectiveBalanceUpdates - 250000 worstcase 0.5 1.6891 ms/op 2.2455 ms/op 0.75
altair processInactivityUpdates - 250000 normalcase 22.213 ms/op 36.703 ms/op 0.61
altair processInactivityUpdates - 250000 worstcase 25.786 ms/op 38.505 ms/op 0.67
phase0 processRegistryUpdates - 250000 normalcase 8.0050 us/op 14.922 us/op 0.54
phase0 processRegistryUpdates - 250000 badcase_full_deposits 300.97 us/op 416.12 us/op 0.72
phase0 processRegistryUpdates - 250000 worstcase 0.5 145.42 ms/op 169.56 ms/op 0.86
altair processRewardsAndPenalties - 250000 normalcase 73.553 ms/op 91.107 ms/op 0.81
altair processRewardsAndPenalties - 250000 worstcase 72.676 ms/op 93.530 ms/op 0.78
phase0 getAttestationDeltas - 250000 normalcase 7.1146 ms/op 11.199 ms/op 0.64
phase0 getAttestationDeltas - 250000 worstcase 6.7632 ms/op 10.084 ms/op 0.67
phase0 processSlashings - 250000 worstcase 3.4853 ms/op 5.4138 ms/op 0.64
altair processSyncCommitteeUpdates - 250000 182.99 ms/op 253.55 ms/op 0.72
BeaconState.hashTreeRoot - No change 351.00 ns/op 403.00 ns/op 0.87
BeaconState.hashTreeRoot - 1 full validator 51.887 us/op 78.867 us/op 0.66
BeaconState.hashTreeRoot - 32 full validator 541.82 us/op 744.77 us/op 0.73
BeaconState.hashTreeRoot - 512 full validator 5.7874 ms/op 7.4174 ms/op 0.78
BeaconState.hashTreeRoot - 1 validator.effectiveBalance 64.289 us/op 97.850 us/op 0.66
BeaconState.hashTreeRoot - 32 validator.effectiveBalance 918.40 us/op 1.3704 ms/op 0.67
BeaconState.hashTreeRoot - 512 validator.effectiveBalance 13.212 ms/op 18.893 ms/op 0.70
BeaconState.hashTreeRoot - 1 balances 72.069 us/op 74.759 us/op 0.96
BeaconState.hashTreeRoot - 32 balances 643.36 us/op 625.31 us/op 1.03
BeaconState.hashTreeRoot - 512 balances 7.3081 ms/op 7.0009 ms/op 1.04
BeaconState.hashTreeRoot - 250000 balances 102.35 ms/op 117.38 ms/op 0.87
aggregationBits - 2048 els - zipIndexesInBitList 19.942 us/op 25.500 us/op 0.78
regular array get 100000 times 46.485 us/op 72.294 us/op 0.64
wrappedArray get 100000 times 43.752 us/op 46.557 us/op 0.94
arrayWithProxy get 100000 times 16.056 ms/op 23.746 ms/op 0.68
ssz.Root.equals 588.00 ns/op 824.00 ns/op 0.71
byteArrayEquals 584.00 ns/op 826.00 ns/op 0.71
shuffle list - 16384 els 7.1489 ms/op 9.9483 ms/op 0.72
shuffle list - 250000 els 109.56 ms/op 150.82 ms/op 0.73
processSlot - 1 slots 11.216 us/op 13.076 us/op 0.86
processSlot - 32 slots 1.4244 ms/op 2.1170 ms/op 0.67
getEffectiveBalanceIncrementsZeroInactive - 250000 vs - 7PWei 50.837 ms/op 53.350 ms/op 0.95
getCommitteeAssignments - req 1 vs - 250000 vc 4.5824 ms/op 5.0448 ms/op 0.91
getCommitteeAssignments - req 100 vs - 250000 vc 5.2054 ms/op 6.7362 ms/op 0.77
getCommitteeAssignments - req 1000 vs - 250000 vc 6.0424 ms/op 6.0964 ms/op 0.99
RootCache.getBlockRootAtSlot - 250000 vs - 7PWei 5.6200 ns/op 7.5600 ns/op 0.74
state getBlockRootAtSlot - 250000 vs - 7PWei 979.67 ns/op 1.0329 us/op 0.95
computeProposers - vc 250000 17.225 ms/op 18.071 ms/op 0.95
computeEpochShuffling - vc 250000 131.03 ms/op 157.10 ms/op 0.83
getNextSyncCommittee - vc 250000 224.28 ms/op 287.43 ms/op 0.78
computeSigningRoot for AttestationData 19.092 us/op 23.385 us/op 0.82
hash AttestationData serialized data then Buffer.toString(base64) 2.8199 us/op 4.3793 us/op 0.64
toHexString serialized data 2.2224 us/op 2.1256 us/op 1.05
Buffer.toString(base64) 511.99 ns/op 644.86 ns/op 0.79

by benchmarkbot/action

@dapplion
Contributor Author

dapplion commented May 3, 2023

The big remaining item is the logger in the worker thread:

  • How do we instantiate the logger with the same settings as the main thread?
  • Should we ignore file logging completely in the worker, or duplicate the transports?
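One possible approach to the first question, sketched below: pass the user's plain (structured-clonable) logger options through `workerData` and reconstruct the logger inside the worker. `LoggerOpts` and the echoing worker script are illustrative only:

```typescript
import {Worker} from "node:worker_threads";

// Plain-data logger options; only clonable data may cross the worker
// boundary, so we pass these rather than the logger instance itself.
interface LoggerOpts {
  level: "debug" | "info" | "warn" | "error";
  timestampFormat?: string;
}

// Spawn a worker with the logger options in workerData. This demo worker
// merely echoes them back; a real network worker would construct its own
// equivalently configured logger from workerData.loggerOpts instead.
function spawnWithLoggerOpts(loggerOpts: LoggerOpts): Promise<LoggerOpts> {
  const script = `
    const {parentPort, workerData} = require("node:worker_threads");
    parentPort.postMessage(workerData.loggerOpts);
  `;
  return new Promise((resolve, reject) => {
    const worker = new Worker(script, {eval: true, workerData: {loggerOpts}});
    worker.once("message", resolve);
    worker.once("error", reject);
  });
}
```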

CC @wemeetagain

@dapplion dapplion force-pushed the dapplion/network-thread branch from ddef290 to b16d9e1 on May 4, 2023 05:42
@twoeths
Contributor

twoeths commented May 5, 2023

Attached is the profile of the network thread from an lg1k node:
0505_lg1k_network_thread.cpuprofile.zip

I was not able to capture it with Chrome DevTools, but VS Code worked.

@wemeetagain wemeetagain marked this pull request as ready for review May 16, 2023 00:14
@wemeetagain wemeetagain requested a review from a team as a code owner May 16, 2023 00:14
wemeetagain
wemeetagain previously approved these changes May 16, 2023
* feat: add ThreadBoundaryError

* Remove ClonableLodestarError

* Fix unit test

---------

Co-authored-by: Cayman <caymannava@gmail.com>
@wemeetagain wemeetagain merged commit 2d7356b into unstable May 17, 2023
@wemeetagain wemeetagain deleted the dapplion/network-thread branch May 17, 2023 16:26
@dapplion
Contributor Author

dapplion commented May 18, 2023

🎉 thanks @wemeetagain ❤️

@wemeetagain
Member

🎉 This PR is included in v1.9.0 🎉

Development

Successfully merging this pull request may close these issues.

Move libp2p to worker thread