Possible "Out of Memory" danger for opcode NEWBUFFER and PUSHDATA #369
Comments
Could you provide the script?
But yes, we should reduce the limits or use a different system. A while ago I proposed #245 in order to get rid of these limits.
Yeah, that's an example where the result of transaction execution will differ across nodes.
Why do you need a contiguous 2 GB block of RAM?
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        // Each 1 MB array is a separate allocation; ~10 GB total, no contiguous block needed.
        List<byte[]> list = new List<byte[]>();
        for (int i = 0; i < 10240; i++)
            list.Add(new byte[1024 * 1024]);
    }
}

I don't think it's a problem; this code works fine. The OS manages memory with paging and does not need contiguous memory at all. I think what you have encountered must be caused by something else.
Currently MaxStackSize = 2048 and MaxItemSize = 1M, which means we can push at most 2 GB of data onto the stack. 2 GB may not seem like much compared with the RAM in today's computers. However, this data is stored in a List, which is held contiguously in RAM, and finding a contiguous 2 GB region can raise an Out of Memory exception even on computers with far more than 2 GB of RAM. My computer has 16 GB of RAM; tests show that an Out of Memory exception is thrown after roughly 1500 1 MB buffers are created and pushed onto the stack.
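For a sense of scale, here is a minimal plain-C# sketch of the worst case the current limits permit. It is not the actual NeoVM script that triggers the failure; the MaxStackSize and MaxItemSize constants simply mirror the values quoted above, the LimitSketch class name is illustrative only, and allocating plain byte arrays only approximates what repeated NEWBUFFER/PUSHDATA does inside the VM.

using System;
using System.Collections.Generic;

class LimitSketch
{
    const int MaxStackSize = 2048;        // current stack item limit
    const int MaxItemSize = 1024 * 1024;  // current per-item limit (1 MB)

    static void Main()
    {
        // Worst case allowed by the limits: 2048 * 1 MB = 2 GB of buffer data.
        var buffers = new List<byte[]>();
        try
        {
            for (int i = 0; i < MaxStackSize; i++)
                buffers.Add(new byte[MaxItemSize]);
            Console.WriteLine($"Allocated {buffers.Count} buffers (~{buffers.Count} MB).");
        }
        catch (OutOfMemoryException)
        {
            // On the reporter's 16 GB machine, the failure point inside the VM was around 1500 buffers.
            Console.WriteLine($"Out of memory after {buffers.Count} buffers.");
        }
    }
}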
This creates a serious problem: the result of transaction execution will differ across nodes, which may lead to further problems and eventually a consensus failure.