feat: freeze reducing #240
Conversation
IMHO, this is just going to hide the root cause of the mismatched hashes, which is fundamentally reserves within a neighborhood not matching. Mismatched reserves is a fundamental issue within the Swarm ecosystem and should be identified and fixed, rather than just catering to a reduction in freezes to keep a better public image.
The team is thinking the same, but until this is solved, this change would help people get punished less without proper reason.
I would agree if this was an actual slash, but a freeze is self-correcting over time. If you want to reduce the punishment while retaining the visibility into the problem, why not simply reduce the freeze duration dramatically? That way we would still see the reserve discrepancies that cause the freeze, but the freeze wouldn't last long enough to actually be a "punishment".
As a matter of fact, if I read the source correctly, you could just have a Redistribution ADMIN invoke […]. Two rounds is a lot better than 1,024.
I will call others to weigh in on this; it seems to me that what you propose is just too lenient. Freezing for just 2 rounds is very little time, and will probably be an open invitation to spam the network, since there would be very little penalty.
Hi All - first of all, many thanks ldeffenb for looking into this, and for being incredibly helpful and constructive (as always) in moving things forward!

My name is Gyuri; I am relatively new to Swarm, having joined just over a year ago. This means that while I did want to give some input here, it comes with the disclaimer that you probably know the system much better than I do. So: the idea of reducing the freezing time (instead of stochastically freezing for […])

But that is just a generic / philosophical point. The other point is the one that was brought up by Mark above as well: if freezing times are vastly shorter than the expected waiting time between two rounds in which a node can participate, then there is no incentive to be an honest revealer. This can have real consequences. For example, one could tamper with the storage price by spamming fake reveals in every neighbourhood, thus pretending to the system that there are many more nodes than there actually are - which would make the price oracle start pushing prices downward.

I am no expert, but intuitively, it feels to me (as it did to Mark) that having a stochastic possibility of 20% for a real punishment will have greater deterring power against similar types of exploitative behaviour than a 100% probability of an inconsequential punishment. But I am more than happy to listen to alternative suggestions, or to any feedback in case there are flaws in this logic. Do let me know your thoughts!
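The trade-off discussed here can be made concrete with a quick expected-value calculation. This is an illustrative sketch, not code from the PR: the 1,024-round figure and the 2-round suggestion come from the earlier comments, and the 20% probability from this proposal.

```python
# Rough expected-freeze comparison (illustrative numbers only):
# Option A: deterministic but short freeze, e.g. 2 rounds every time.
# Option B: stochastic freeze, 20% chance of the full 1,024 rounds.

full_freeze_rounds = 1024   # freeze length cited in the thread
short_freeze_rounds = 2     # ldeffenb's suggested reduction
p_freeze = 0.20             # proposed stochastic freeze probability

expected_a = 1.0 * short_freeze_rounds      # always frozen, but briefly
expected_b = p_freeze * full_freeze_rounds  # rarely frozen, but hard

print(expected_a)  # 2.0
print(expected_b)  # 204.8
```

On expectation, the stochastic scheme is roughly a hundred times harsher per misbehaving reveal, which matches the intuition above: a 20% chance of a real punishment deters spam far better than a guaranteed but trivial one.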
My opinion is that it all depends on why we're trying to reduce the freezing to begin with. If this is a permanent workaround for the actual underlying issue of the reserve contents within a neighborhood not matching (presuming that's the cause of the mismatched hashes), then definitely, the considerations that have been recently discussed make sense. But if this is a temporary reaction to ease the freezes (which is what I'm hoping it is) while actively trying to address the underlying root cause of mismatched hashes, then personally, I don't think it's worth worrying too much about the level of penalty or possible storage-price fixing. Presuming, of course, that the cause of the mismatched hashes in well-behaved neighborhood peers can be identified and fixed.
It is certainly not meant to be a permanent solution, merely a temporary measure to allow node operators to get a fair deal in terms of their return on investment. We sure hope we can fix the underlying issue (and do so quickly), but it is work in progress. OK, I can bring this point up to the others and see what they say. (Or they can just comment here.)
I was assuming temporary relief, which is why I initially suggested zeroing the […]
We will certainly discuss this option - many thanks again for your input!
I wanted to get back to you on what the plans are for handling the freezing issue. We discussed it with the people in research. The verdict was that at this stage, reducing the actual number of freezing events is a big priority, so people were more keen on the stochastic method than on reducing the severity of any one freezing event.

There was another aspect to the discussion as well: although we are of course trying to get to the bottom of the freezing problem, it might take a while (on the order of months). If that is the case, then instead of a short-term bandaid solution, we are looking for a mid-term bandaid. For that duration, it might make more sense to prevent exploits (e.g. price manipulation) by using stochastic freezing.

I do hope this makes things clear. Again, sorry for not taking up your idea, which was indeed a quick and elegant way of addressing the problem. But of course the most important thing would be to find the actual source of the freezes and fix that - then none of this will be relevant anymore.
…deployer/multisig
Freeze the deposit if any truth is false; make it a 20% chance for this to happen.
For more info on RANDAO, see https://eth2book.info/capella/part2/building_blocks/randomness/
This should be a good enough source of randomness: when nodes are doing their commits/reveals, they don't know what the PrevRANDAO value will be in the future. The PrevRANDAO value does stay the same for one epoch, which is 32 slots (i.e. 32 blocks), but our rounds are longer, and that is the reason the system can't be gamed.
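To illustrate the mechanism, here is a minimal sketch of a PrevRANDAO-based 20% freeze check. This is not the actual contract code from this PR; the function name `should_freeze`, the SHA-256 mixing, and the modulo-5 test are all illustrative choices, just one way to derive a per-node 20% outcome from a randomness beacon value.

```python
import hashlib

def should_freeze(prev_randao: int, node_overlay: bytes) -> bool:
    """Decide stochastically (1 in 5, i.e. 20%) whether a node whose
    reveal disagreed with the truth gets frozen this round.

    prev_randao models the beacon-chain PrevRANDAO value, which is
    unknown at commit/reveal time. It stays fixed for a whole epoch
    (32 slots), but rounds are longer than an epoch, so revealers
    cannot predict it when they commit.
    """
    # Mix the beacon randomness with the node's overlay address so
    # different nodes in the same round get independent outcomes.
    seed = hashlib.sha256(
        prev_randao.to_bytes(32, "big") + node_overlay
    ).digest()
    return int.from_bytes(seed, "big") % 5 == 0  # 20% of outcomes

# Quick empirical check of the rate over many simulated rounds:
hits = sum(should_freeze(r, b"node-overlay") for r in range(100_000))
print(hits / 100_000)  # roughly 0.2
```

The key property the description relies on is the unpredictability of the input at commit time, not the particular hash or modulus used here; the contract itself would read the randomness from `block.prevrandao`.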