Add SS and LS instructions. #410
Comments
It is definitely more efficient at the DB layer if we can do a batch multi-key get rather than individual consecutive fetches. Even better than a multi-key get is iteration over slot keys that are neighbors when sorted lexicographically: each key lookup in an LSM is a logarithmic operation, while stepping to the next sorted key is O(1), because the database maintains a linked list over the sorted keys. Writing slots in consecutive batches would share a similar benefit, since we read before writing. The write operation itself generally won't see the same level of efficiency gain, though, because the entire state diff from a transaction is held in memory and committed as a single batch afterward, so consecutive writes are automatically batched. In any case, the new benchmark framework can tell us whether there are any major cost savings.
It's probably ok if we don't get quite the same benefit for write operations.
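To make the lookup-cost point above concrete, here is a minimal Rust sketch. It uses a `BTreeMap` as a stand-in for the database's sorted key index, and the function names and key/value types are hypothetical rather than fuel-core's actual storage API; the intent is only to contrast N independent point lookups with one seek followed by neighbor iteration.

```rust
use std::collections::BTreeMap;

// Stand-in for the sorted key index of an LSM-backed store; the real
// database and its API are not specified in this thread.
type SlotKey = [u8; 32];
type SlotValue = [u8; 32];

/// N individual point lookups: each `get` is a fresh O(log n) search.
fn read_slots_individually(
    db: &BTreeMap<SlotKey, SlotValue>,
    keys: &[SlotKey],
) -> Vec<Option<SlotValue>> {
    keys.iter().map(|k| db.get(k).copied()).collect()
}

/// One O(log n) seek to the first key, then O(1) steps to each neighbor,
/// which is the access pattern consecutive storage slots would enable.
fn read_slots_by_range(
    db: &BTreeMap<SlotKey, SlotValue>,
    start: SlotKey,
    count: usize,
) -> Vec<(SlotKey, SlotValue)> {
    db.range(start..).take(count).map(|(k, v)| (*k, *v)).collect()
}

fn main() {
    let mut db = BTreeMap::new();
    for i in 0u8..8 {
        let mut key = [0u8; 32];
        key[31] = i;
        db.insert(key, [i; 32]);
    }
    let start = [0u8; 32];
    println!("{} slots via one range scan", read_slots_by_range(&db, start, 4).len());
    println!("{} slot via a point lookup", read_slots_individually(&db, &[start]).len());
}
```

An LSM-backed store follows the same shape: the initial seek is logarithmic and each subsequent neighbor is roughly constant-time.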
A couple notes:
@adlerjohn should we close this issue? Looks like we've opted to put this functionality into our existing quad-word opcodes rather than introducing new ones. Related: #422
There is an existing problem with using union types (i.e. `Identity`) in storage. For the `Identity` example, we need to store 2 slots; one for the tag and one for the `b256` value. This is quite wasteful/expensive, and the cost grows even more if we consider storing/reading a struct with multiple fields.

We have an open issue where we proposed one way to solve this for the specific case of the `Identity` type. It is very limited in scope, as it doesn't unlock any other use-cases beyond that specific type.

It has been suggested by @adlerjohn that another solution to the problem of expensive storage for enums & structs would be to add instructions that allow reading & writing `n` consecutive storage slots in a single operation. So, reading 2 storage slots would be slightly more expensive than reading one slot, but not twice as expensive (I would expect the savings to increase linearly with respect to `n`, at least to a point).

We could add "Store Slots" & "Load Slots" (or `SNS`/`LNS`, "Store N Words" & "Load N Words") instructions to allow this. Something along these lines:
SS: Store slots

`$rC` words starting at `$rB` are stored at the address `$rA`.

- Operation: `MEM[$rA, ($rC * 8)] = [$rB, $rB + ($rC * 8)];`
- Syntax: `ss $rA, $rB, $rC`
- Encoding: `0x00 rA rB rC`
LS: Load slots

`$rC` words are loaded into `$rA` starting from `$rB`.

- Operation: `$rA = MEM[$rB, ($rC * 8)];`
- Syntax: `ls $rA, $rB, $rC`
- Encoding: `0x00 rA rB rC`
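For concreteness, here is a rough Rust sketch of the kind of batched semantics described above, written as interpreter-style helpers. Everything in it is an assumption for illustration: the `Vm` struct, the `nth_key` derivation of consecutive slot keys, and the 32-byte slot granularity (the pseudo-ops above count 8-byte words) are hypothetical, not the actual FuelVM implementation.

```rust
use std::collections::BTreeMap;

// Hypothetical stand-ins for VM state; the real FuelVM structures and the
// final opcode semantics are not defined by this issue.
type SlotKey = [u8; 32];

struct Vm {
    memory: Vec<u8>,
    storage: BTreeMap<SlotKey, [u8; 32]>,
}

/// Derive the key of the i-th slot after `base` (illustrative scheme only).
fn nth_key(base: SlotKey, i: u64) -> SlotKey {
    let mut k = base;
    let n = u64::from_be_bytes(k[24..32].try_into().unwrap()) + i;
    k[24..32].copy_from_slice(&n.to_be_bytes());
    k
}

impl Vm {
    /// "Load Slots": copy `count` consecutive 32-byte slots starting at
    /// `base` into memory at `dst` — one batched call instead of `count`
    /// independent storage reads.
    fn load_slots(&mut self, dst: usize, base: SlotKey, count: u64) {
        for i in 0..count {
            let slot = self
                .storage
                .get(&nth_key(base, i))
                .copied()
                .unwrap_or([0u8; 32]);
            let off = dst + (i as usize) * 32;
            self.memory[off..off + 32].copy_from_slice(&slot);
        }
    }

    /// "Store Slots": the mirror operation, writing `count` consecutive
    /// slots from memory at `src`.
    fn store_slots(&mut self, src: usize, base: SlotKey, count: u64) {
        for i in 0..count {
            let off = src + (i as usize) * 32;
            let mut slot = [0u8; 32];
            slot.copy_from_slice(&self.memory[off..off + 32]);
            self.storage.insert(nth_key(base, i), slot);
        }
    }
}

fn main() {
    let mut vm = Vm { memory: vec![0u8; 1024], storage: BTreeMap::new() };
    vm.memory[0..32].copy_from_slice(&[7u8; 32]);
    vm.memory[32..64].copy_from_slice(&[9u8; 32]);
    vm.store_slots(0, [0u8; 32], 2); // write 2 consecutive slots in one call
    vm.load_slots(64, [0u8; 32], 2); // read them back into memory
    assert_eq!(&vm.memory[64..96], &[7u8; 32]);
}
```

The main benefit this is meant to capture is that the per-call overhead (key derivation, bounds checks, and the storage seek) is paid once per batch of `n` slots instead of once per slot.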
Eliminates the need for #324
cc @adlerjohn