Currently, `Tokenizer` doesn't support whitespace between the items of tuples and arrays: instead of being ignored, the whitespace is passed into the lower-level parser, often causing errors. It'd be great to just trim the whitespace; AFAIK it doesn't carry any useful information for any of the tokenized types.
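For illustration, here's a minimal sketch of what trimming at the tokenizer level could look like. This is not ethabi's actual code, and `split_items` is a hypothetical helper; the point is just that items can be trimmed before they reach the type-specific parsers:

```rust
// Split the inside of an array/tuple literal into items while respecting
// nesting, and trim surrounding whitespace from each item before it is
// handed to the lower-level parser.
fn split_items(inner: &str) -> Vec<String> {
    let mut items = Vec::new();
    let mut current = String::new();
    let mut depth: usize = 0;

    for ch in inner.chars() {
        match ch {
            '[' | '(' => {
                depth += 1;
                current.push(ch);
            }
            ']' | ')' => {
                depth = depth.saturating_sub(1);
                current.push(ch);
            }
            // A comma at depth 0 separates two top-level items.
            ',' if depth == 0 => {
                items.push(current.trim().to_string());
                current.clear();
            }
            _ => current.push(ch),
        }
    }
    if !current.trim().is_empty() {
        items.push(current.trim().to_string());
    }
    items
}

fn main() {
    // With trimming, both spellings tokenize identically.
    assert_eq!(split_items("0x0a, 0x0b"), vec!["0x0a", "0x0b"]);
    assert_eq!(split_items("0x0a,0x0b"), vec!["0x0a", "0x0b"]);
}
```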
For example, parsing `bytes[]` from the string `[0x0a, 0x0b]` fails with the cryptic message `expected value of type: bytes[]`. This happens because the space between `0x0a` and `0x0b` isn't ignored: it's treated as part of `0x0b`, which confuses the hex decoder. The only valid payload is `[0x0a,0x0b]`, which is difficult to guess and less human-readable.
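A standalone reproduction of that failure mode using the `hex` crate (this mirrors the behaviour described above; it is not ethabi's actual decoding path):

```rust
fn main() {
    // The second item of "[0x0a, 0x0b]" reaches the decoder with its
    // leading space intact:
    let item = " 0x0b";

    // Because of the space, the "0x" prefix isn't stripped, so the decoder
    // sees " 0x0b" verbatim and rejects it.
    assert!(hex::decode(item.trim_start_matches("0x")).is_err());

    // Trimming first yields the expected bytes.
    assert_eq!(
        hex::decode(item.trim().trim_start_matches("0x")),
        Ok(vec![0x0b])
    );
}
```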
For context, I'm coming at this as a user of Foundry, where `cast abi-encode` is extremely tricky to use with arrays and tuples.
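Concretely, with a hypothetical signature `f(bytes[])`, this is what it looks like today:

```sh
# Fails with "expected value of type: bytes[]" because of the space:
cast abi-encode "f(bytes[])" "[0x0a, 0x0b]"

# Works, but the whitespace-free spelling is hard to guess:
cast abi-encode "f(bytes[])" "[0x0a,0x0b]"
```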