
feat: freeze reducing #240

Merged
merged 12 commits into master from freezing_reducing on May 13, 2024

Conversation

0xCardinalError (Contributor) commented Feb 5, 2024

Freeze the deposit if any truth is false, with a 20% chance of the freeze actually being applied.

For more info on RANDAO, see https://eth2book.info/capella/part2/building_blocks/randomness/

This should be a good enough source of randomness: when nodes are doing their commits/reveals, they don't know what the PrevRANDAO value will be in the future. PrevRANDAO does stay the same for one epoch (32 slots, i.e. 32 blocks), but our rounds are longer than that, which is why the system can't be gamed.
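
For illustration, here is a minimal sketch of the kind of check described above (the contract and function names are hypothetical and the modulo test is only one way to get a ~20% chance; this is not the PR's actual diff):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

// Hypothetical sketch: decide whether a disagreeing node's stake actually gets frozen.
contract FreezeChanceSketch {
    // block.prevrandao exposes the beacon chain's RANDAO value to the EVM. It stays
    // constant for a whole epoch (32 slots / 32 blocks), but since rounds are longer
    // than an epoch, nodes cannot know the value in advance at commit/reveal time.
    function shouldFreeze(address node, uint256 roundNumber) internal view returns (bool) {
        uint256 rand = uint256(
            keccak256(abi.encodePacked(block.prevrandao, node, roundNumber))
        );
        // Freeze in roughly 1 out of 5 cases, i.e. a ~20% chance.
        return rand % 5 == 0;
    }
}
```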

0xCardinalError changed the title from "feat: freez reducing" to "feat: freeze reducing" on Feb 5, 2024
ldeffenb commented Feb 5, 2024

IMHO, this is just going to hide the root cause of the mismatched hashes, which is fundamentally that reserves within a neighborhood do not match. Mismatched reserves are a fundamental issue within the Swarm ecosystem and should be identified and fixed, rather than just catering to a reduction in freezes to keep a better public image.

0xCardinalError commented Feb 8, 2024

> IMHO, this is just going to hide the root cause of the mismatched hashes, which is fundamentally that reserves within a neighborhood do not match. Mismatched reserves are a fundamental issue within the Swarm ecosystem and should be identified and fixed, rather than just catering to a reduction in freezes to keep a better public image.

The team is thinking the same, but until that is solved, this would help people get punished less without a proper reason.

ldeffenb commented Feb 8, 2024

> The team is thinking the same, but until that is solved, this would help people get punished less without a proper reason.

I would agree if this were an actual slash, but a freeze is self-correcting over time. If you want to reduce the punishment while retaining visibility into the problem, why not simply reduce the freeze duration dramatically? That way we would still see the reserve discrepancies that cause the freeze, but the freeze wouldn't last long enough to actually be a "punishment".

ldeffenb commented Feb 8, 2024

> The team is thinking the same, but until that is solved, this would help people get punished less without a proper reason.

As a matter of fact, if I read the source correctly, you could just have a Redistribution ADMIN invoke setFreezingParams() with _penaltyMultiplierDisagreement set to zero (0). This would end up going into the Staking contract's freezeDeposit method with a time of zero, which would effectively only set the lastUpdatedBlockNumber on the stake. This would then only "freeze" the stake for the absolute minimum number of blocks (actually 2 rounds, MustStake2Rounds), as if the stake holder had modified the stake, IIRC.

Two rounds is a lot better than 1,024 (ROUND_LENGTH * uint256(2 ** truthRevealedDepth)) rounds at the current depth of 10. I'd even prefer a 100% chance of a 2-round freeze over a 20% chance of a 1,024-round freeze, especially since it is highly unlikely that the same neighborhood would be selected again within the 2-round freeze interval.
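
For a rough sense of the numbers, here is an illustrative sketch of the duration arithmetic referenced above (the ROUND_LENGTH value is assumed for the example and may differ from the contract's actual constant; this is not the real Staking/Redistribution code):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

// Illustrative arithmetic only.
contract FreezeDurationSketch {
    uint256 public constant ROUND_LENGTH = 152; // blocks per round (assumed for the example)

    function freezeBlocks(uint256 penaltyMultiplierDisagreement, uint8 truthRevealedDepth)
        public
        pure
        returns (uint256)
    {
        // With a multiplier of 1 and depth 10 this is ROUND_LENGTH * 1024 blocks,
        // i.e. 1,024 rounds. With the multiplier set to 0 it collapses to zero,
        // leaving only the minimum 2-round "stake updated" delay mentioned above.
        return penaltyMultiplierDisagreement * ROUND_LENGTH * (uint256(2) ** truthRevealedDepth);
    }
}
```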

0xCardinalError (Contributor, Author) commented

> _penaltyMultiplierDisagreement

I will ask others to weigh in on this; it seems to me that what you propose is just too lenient. Freezing for just 2 rounds is very little time and would probably be an open invitation to spam the network, since there would be very little penalty.

dysordys commented

Hi all - first of all, many thanks ldeffenb for looking into this and being incredibly helpful and constructive (as always) in moving things forward! My name is Gyuri; I am relatively new to Swarm, having joined just over a year ago. This means that while I did want to give some input here, it comes with the disclaimer that you probably know the system much better than I do.

So: the idea of reducing the freezing time (instead of stochastically freezing for 2^truthRevealedDepth rounds) is interesting, and we could (and should) definitely discuss it. Two things came to my mind when thinking about implementing this, though. The first is a more generic concern that I should probably take up with the rest of research as well: 2^truthRevealedDepth rounds is an awfully permissive, tiny amount of time to be frozen for, even without any further reductions. This is because the probability of picking a neighbourhood for the game is inversely proportional to the number of neighbourhoods, so in each round there is a 2^(-truthRevealedDepth) chance for any given neighbourhood to play. That is, the default freezing duration will on average exclude a node from playing in exactly one round out of those in which it would have been eligible. So if anything, the freezing duration ought to be stricter than that.
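
Spelling out the expectation behind this point (a back-of-the-envelope calculation using only the quantities mentioned in this comment, with d = truthRevealedDepth):

```math
P(\text{a given neighbourhood plays in a round}) = 2^{-d}
\quad\Rightarrow\quad
\mathbb{E}[\text{rounds between two games}] = \frac{1}{2^{-d}} = 2^{d}
```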

But that is just a generic / philosophical point. The other point is the one that was brought up by Mark above as well: if freezing times are vastly shorter than the expected waiting time between two rounds in which a node can participate, then there is no incentive to be an honest revealer. This can have real consequences. For example, one could tamper with the storage price by spamming fake reveals in every neighbourhood, thus pretending to the system that there are many more nodes than there actually are - which will make the price oracle start pushing the prices downward. I am no expert, but intuitively, it feels to me (like it did to Mark) that having a stochastic possibility of 20% for a real punishment will have a greater deterring power against similar types of exploitative behaviour than a 100% probability for an inconsequential punishment.

But I am more than happy to listen to alternative suggestions, or to any feedback in case there are flaws in this logic. Do let me know your thoughts!

ldeffenb commented Feb 12, 2024

My opinion is that it all depends on why we're trying to reduce the freezing to begin with.

If this is a permanent workaround for the actual underlying issue of the reserve contents within a neighborhood not matching (presuming that's the cause of the mismatched hashes), then the considerations that have recently been discussed definitely make sense.

But if this is a temporary reaction to ease the freezes (which is what I'm hoping it is) while actively trying to address the underlying root cause of the mismatched hashes, then personally I don't think it's worth worrying too much about the level of penalty or possible storage-price fixing. Presuming, of course, that the cause of the mismatched hashes in well-behaving neighborhood peers can be identified and fixed.

dysordys commented

It is certainly not meant to be a permanent solution, merely a temporary measure to allow node operators to get a fair deal in terms of their return on investment. We certainly hope we can fix the underlying issue (and do so quickly), but it is a work in progress.

OK, I can bring this point up to the others and see what they say. (Or they can just comment here.)

ldeffenb commented

I was assuming temporary relief, which is why I initially suggested zeroing the _penaltyMultiplierDisagreement. That requires no contract changes, and it can be reinstated to 1 if anything untoward is seen in the redistribution contract game. A minimal-impact approach to grant immediate relief.

dysordys commented

We will certainly discuss this option - many thanks again for your input!

dysordys commented

I wanted to get back to you on what the plans are for handling the freezing issue. We discussed it with the people in research. The verdict was that at this stage, reducing the actual number of freezing events is a big priority, so people were more keen on the stochastic method than on reducing the severity of any one freezing event. There was another aspect to the discussion as well: although we are of course trying to get to the bottom of the freezing problem, it might take a while (on the order of months). If that is the case, then instead of a short-term band-aid, we are looking for a mid-term band-aid. For that duration, it might make more sense to prevent exploits such as price manipulation by using stochastic freezing.

I do hope this makes things clear. Again, sorry for not taking up your idea, which was indeed a quick and elegant way of addressing the problem. But of course the most important thing would be to find the actual source of the freezes and fix that - then none of this will be relevant anymore.

0xCardinalError merged commit 595ebfe into master on May 13, 2024
1 check passed
0xCardinalError deleted the freezing_reducing branch on May 13, 2024 14:05