Add a double ended queue #3153
Conversation
If FIFO and LIFO are just wrappers that restrict the available functions, we should just have Vector. But FIFO doesn't need negative indices, so I don't know if this is the best implementation. What is the extra cost of using a mapping vs using a contiguous array?
Using an array could save some gas, but it would prevent
Note that, if we want to keep AFAIK, this means that we cannot use
I think the array would win against the mapping if you do multiple pushes in a row, because there is a single hash you need to do and then you just increment the slot number.
I was wrong in my previous comment: in the case of the array you don't even need to hash on-chain, because you know the position statically, so the compiler just precomputes the hash and embeds it in the bytecode. Anyway, the difference between these two methods is less than 0.1% of the cost of a push, so I think either option is fine in terms of gas cost.
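For reference, a rough sketch of the slot math behind this claim, under standard Solidity storage-layout rules (the library and function names here are purely illustrative, not part of the PR):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustrative only: where Solidity places mapping and dynamic-array elements in storage.
library SlotMath {
    // Mapping element: one keccak256 per access, keyed on (key, declaration slot).
    function mappingElementSlot(uint256 mappingSlot, bytes32 key) internal pure returns (bytes32) {
        return keccak256(abi.encode(key, mappingSlot));
    }

    // Dynamic array element (32-byte items): a single base hash, then element i lives at base + i,
    // so consecutive pushes only increment a slot number, and for a state variable the compiler
    // can precompute the base hash at compile time.
    function arrayElementSlot(uint256 arraySlot, uint256 index) internal pure returns (bytes32) {
        unchecked {
            return bytes32(uint256(keccak256(abi.encode(arraySlot))) + index);
        }
    }
}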
I really think that having an array is not realistic if we are not using the built-in length. And if we use the built-in length, that is one slot that we cannot share. IMO, it makes a lot of sense to use the same slot for begin + end (2x uint128), and we need begin + end for FIFO operations.
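Something like the following layout, sketched here only to illustrate the point (field names are assumptions; the later snippets in this thread use int128 indices, so that is what is shown):

struct Bytes32Deque {
    int128 begin; // index of the first element (inclusive); shares a single slot with `end`
    int128 end;   // index one past the last element (exclusive)
    mapping(int128 => bytes32) data;
}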
contracts/utils/structs/Vector.sol (outdated diff)

}

function length(Bytes32Vector storage vector) internal view returns (uint256) {
    unchecked {
Same here. Why don't we just use checked arithmetic, and even SafeCast?
begin and end are only updated by ++ or --. They are not expected to overflow (it would take 2**127 calls to overflow). As a consequence, any checks (like the checked arithmetic that Solidity includes by default, or SafeCast) would just be wasted gas. For example, on a pushX/popX operation, unchecked saves ~100 gas per operation. I'm not sure what the SafeCast impact would be here, but it would be wasted gas.
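As a concrete illustration of the savings being discussed, a minimal sketch of a pop with an unchecked index update (the Empty error and the exact structure are assumptions, not the PR's code):

function popFront(Bytes32Deque storage deque) internal returns (bytes32 value) {
    int128 frontIndex = deque.begin;
    if (frontIndex == deque.end) revert Empty(); // assumed emptiness check and error name
    value = deque.data[frontIndex];
    delete deque.data[frontIndex];
    unchecked {
        deque.begin = frontIndex + 1; // no overflow check: it would take ~2**127 pops to wrap
    }
}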
function at(Bytes32Deque storage deque, uint256 i) internal view returns (bytes32 value) {
    // int256(deque.begin) is a safe upcast
    int128 idx = SafeCast.toInt128(int256(deque.begin) + SafeCast.toInt256(i));
It will never happen in practice, but if the queue is longer than type(int256).max, this code will fail. IMO, we will never really get any deque of length > 2**64, so I would argue that using SafeCast makes all operations more expensive, with no real security gain.
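A rough sketch of the SafeCast-free variant being argued for here (the OutOfBounds error and the length() helper are assumptions; this is not the committed code):

function at(Bytes32Deque storage deque, uint256 index) internal view returns (bytes32 value) {
    if (index >= length(deque)) revert OutOfBounds(); // assumed bounds check and error name
    int128 idx;
    unchecked {
        // fine for any realistic deque: index < length, far below 2**127, so no wrap or truncation
        idx = deque.begin + int128(int256(index));
    }
    return deque.data[idx];
}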
deque.data[backIndex] = value;
unchecked {
    deque.end = backIndex + 1;
}
IMO, the previous implementation

unchecked {
    deque.data[deque.end++] = value;
}

was much easier to read and understand (the same argument applies to all push and pop functions).
Pre/post increments and decrements are quick to read, but definitely not easy... The statement deque.data[deque.end++] = value is too dense: there are two different storage updates, and it's not immediately clear which value is used to index the data mapping.
I tried a few alternatives and ended up going with the current option. I think this code is quite ok:

int128 backIndex = deque.end;
deque.data[backIndex] = value;
deque.end = backIndex + 1;

The unchecked makes it harder to read. Perhaps if we put everything in the unchecked block it would look better?
I definitely think we want the +1 unchecked. Putting everything in the unchecked block might make it cleaner to read (all the logic is indented at the same level).
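For reference, this is roughly what the fully unchecked variant would look like (a sketch of the suggestion, not necessarily the committed code):

function pushBack(Bytes32Deque storage deque, bytes32 value) internal {
    unchecked {
        int128 backIndex = deque.end;
        deque.data[backIndex] = value;
        deque.end = backIndex + 1; // overflow would require on the order of 2**127 pushes
    }
}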
@Amxx Please review and merge if everything looks good.
DoubleEndedQueue is an array-like structure with O(1) read access, O(1) push to front and back, and O(1) pop from front and back. It can also be cleared (reset) in O(1). This structure can be used to create FIFO, LIFO, or other similar structures.
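A minimal usage sketch, assuming the library ends up exposed as utils/structs/DoubleEndedQueue.sol with the Bytes32Deque type and the pushBack/popFront names used in this thread:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.4;

import "@openzeppelin/contracts/utils/structs/DoubleEndedQueue.sol"; // assumed final path

// A FIFO queue built on the deque: push at the back, pop from the front.
contract Fifo {
    using DoubleEndedQueue for DoubleEndedQueue.Bytes32Deque;

    DoubleEndedQueue.Bytes32Deque private _queue;

    function enqueue(bytes32 item) external {
        _queue.pushBack(item);
    }

    function dequeue() external returns (bytes32) {
        return _queue.popFront();
    }
}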
PR Checklist