rlp: add functions for encoding #257
Conversation
I'm reasonably confident that the encoding matches the output of ethutil.Encode for values that it supports. Some of the tests have been adapted from the Ethereum testing repository. There are still TODOs in the code.
thanks a million @fjl long overdue and badly needed ;) eager to test it out
Note that this isn't used by anything yet. It will be a bit of work to move over to the new encoder.
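A minimal sketch of how code could move over to the new encoder, assuming the Encode/EncodeToBytes entry points and import path of today's rlp package (the exact API introduced in this PR may differ in detail):

package main

import (
	"bytes"
	"fmt"

	"github.com/ethereum/go-ethereum/rlp"
)

// Exported struct fields are encoded in declaration order.
type item struct {
	Name  string
	Count uint
}

func main() {
	// EncodeToBytes returns the RLP encoding as a byte slice.
	enc, err := rlp.EncodeToBytes(item{Name: "foo", Count: 3})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%x\n", enc)

	// Encode streams the encoding to any io.Writer instead.
	var buf bytes.Buffer
	if err := rlp.Encode(&buf, []uint{1, 2, 3}); err != nil {
		panic(err)
	}
	fmt.Printf("%x\n", buf.Bytes())
}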
	return writeInterface, nil
case typ.AssignableTo(reflect.PtrTo(bigInt)):
	return writeBigIntPtr, nil
case typ.AssignableTo(bigInt):
I don't think non-ptr big.Ints will ever happen. Was there a specific reason / use case?
That can be useful if you have a field of type big.Int as part of a struct. Its zero value is zero and it can never be nil.
The real reason is that they should be supported because every other non-pointer value of a supported type is encodable. I think it would be strange to return an error for them. Another reason is that Decode also supports them.
Possible, though hardly useful. It's fine to leave it in; I just wondered if there was a specific use case you had in mind.
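To illustrate the use case discussed above, a hedged sketch of a struct with a non-pointer big.Int field next to a *big.Int field; it assumes the EncodeToBytes entry point of today's rlp package:

package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/rlp"
)

// Balance is a big.Int by value: its zero value is usable and the
// field can never be nil, unlike the pointer field Stake.
type account struct {
	Nonce   uint
	Balance big.Int
	Stake   *big.Int
}

func main() {
	acct := account{Nonce: 1, Stake: big.NewInt(42)}
	acct.Balance.SetInt64(1000)

	// Both integer fields are encoded; the value field never needs a nil check.
	enc, err := rlp.EncodeToBytes(&acct)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%x\n", enc)
}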
I'm submitting this because I need the encoder to implement Kademlia Node Discovery.
This has been sitting around 95% done on my laptop for the last few weeks.