# Posts can be flagged and owners can appeal #50
## Comments
### Premise

I fully support this feature, but I think there are some challenges that need to be thought through very deeply first.

### Difficulties

Suppose we implement this feature as you mentioned, so:
#### Who should manage these situations?

The first question that comes to my mind is: who should the owner of the flagged post speak to? We have two different options to keep things decentralized.

**1. The handling is done through a decentralized governance made of the whole community.**
This can unfortunately lead to two different bad situations (which can even occur together):
**2. An elected group of people.**
I personally like best the idea of an elected group of people (elected by the whole community via a specific governance proposal) that rotates a couple of times per year and handles such situations. In this case, what they would do is:
This would allow for faster management of such situations, but could make it easier for the user that posted the flagged message to corrupt such a group.

#### How to prevent double posting?

When the community has flagged a post as inappropriate, and it is being reviewed or has already been hidden, how do we prevent the same person from posting the same message again? What we could do is "jail" them until the message has been reviewed. By jailing I mean not allowing them to post any other message. I, however, find this option really brutal and think it should not be applied. On the other hand, we could implement a sort of exponential jail time for flagged posts that end up being classified as inappropriate. For example:
We could then go up exponentially, from 1 hour to 1 day, 1 week, 1 month, etc. This would surely push posters to avoid messages that can easily be flagged as inappropriate.

#### Where to hide the posts?

Suppose we find a way to manage the above problems. From where should we hide the posts that have been marked as inappropriate? The first thing that comes to my mind is filtering them out of the list returned by the proper query endpoint (#55). While this can be a solution for applications based on such an endpoint, we cannot guarantee that all clients will hide such posts. A client that reads transactions directly, and does not handle the flagging of posts, could simply go through all the txs done in the past and let its users see their contents. On the other hand, such an approach is particularly expensive, and I would personally not implement it considering how much work it would take to support such an option.
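The endpoint-level filtering could be sketched as follows; `Post`, the report counter, and the threshold of 2 are all illustrative stand-ins, not actual Desmos types:

```go
package main

import "fmt"

// Post is a simplified stand-in for a Desmos post.
type Post struct {
	ID      string
	Reports int // number of reports received
}

// hideThreshold is an assumed cutoff: posts with at least this
// many reports are filtered out of query results.
const hideThreshold = 2

// visiblePosts filters out posts that have been reported enough
// times. Note that this hides posts only at the query-endpoint
// level: clients reading raw transactions would still see them.
func visiblePosts(posts []Post) []Post {
	var out []Post
	for _, p := range posts {
		if p.Reports < hideThreshold {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	posts := []Post{{"1", 0}, {"2", 3}, {"3", 1}}
	fmt.Println(len(visiblePosts(posts)))
}
```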
## Event-based solution

### Problems with the solution proposed above

During the latest weekly call, on Tuesday 3 March 2020, some problems were raised about creating a universal reporting system that is 100% on-chain and governed by the Desmos users or by the chain governance. The problems can be synthesized in the following key points.

#### Censorship attacks

Having a governance-based way of handling reports might lead to censorship attacks.

#### Different applications' Terms of Service

Desmos is meant to be the protocol that allows the creation of any kind of social network. That being said, we should consider what would happen if two completely different applications were developed on top of it. Suppose, for example, that one day we end up with PornHub as well as Facebook developing their social networks on top of Desmos. For this reason, a shared reporting system might be problematic to handle in the case of multiple, very different applications built on top of Desmos.

### The event-based solution

In order to solve both of the problems depicted above, what we could do is implement an event-based system. This should work as follows. Every time a post is reported, the following happens:
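The core of this flow can be pictured with a minimal in-process sketch: a report is emitted as an event, and each application (keyed here by its subspace) decides on its own how to react. All names are illustrative; on-chain, this would be a Tendermint event that each application's backend subscribes to.

```go
package main

import "fmt"

// ReportEvent mirrors the data a "post reported" chain event would carry.
type ReportEvent struct {
	Subspace string // the application the post belongs to
	PostID   string
	Reason   string
}

// bus maps each application subspace to its own handler, so every
// app applies its own Terms of Service instead of a shared policy.
var bus = map[string]func(ReportEvent){}

// subscribe registers an application's own handler for report events.
func subscribe(subspace string, handler func(ReportEvent)) {
	bus[subspace] = handler
}

// emit dispatches a report event to the owning application, if any.
func emit(e ReportEvent) {
	if h, ok := bus[e.Subspace]; ok {
		h(e)
	}
}

func main() {
	subscribe("app-a", func(e ReportEvent) {
		fmt.Printf("app-a hides post %s (%s)\n", e.PostID, e.Reason)
	})
	subscribe("app-b", func(e ReportEvent) {
		fmt.Printf("app-b only logs post %s\n", e.PostID)
	})

	emit(ReportEvent{"app-a", "42", "spam"})
	emit(ReportEvent{"app-b", "42", "spam"})
}
```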
In order to make the system as generic as possible, the following features should be implemented. **NOTE**: the best thing would be to implement all of these features inside a new, dedicated module.

#### Types

A new `Report` type should be defined:

```go
type Report struct {
	Type    string         `json:"type"`    // Identifies the type of the report
	Message string         `json:"message"` // Contains the user message
	User    sdk.AccAddress `json:"user"`    // Identifies the reporting user
}
```

To be consistent across different applications, we should define a list of supported report types.

#### Messages

A new `MsgReportPost` message should be created:

```go
type MsgReportPost struct {
	PostID types.PostID `json:"post_id"` // Identifies the reported post
	Report types.Report `json:"report"`  // Contains the report data
}
```

#### Keeper

The keeper should expose the following methods:

```go
func (k Keeper) SaveReport(ctx sdk.Context, postID types.PostID, report types.Report) {}
func (k Keeper) GetPostReports(ctx sdk.Context, postID types.PostID) []types.Report {}
```

#### Storage

The storage should be pretty simple, allowing us to store the list of reports associated with each post.
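As a rough illustration of that layout, here is an in-memory sketch of the two keeper methods, using a plain map as a stand-in for the module's KVStore (the string-keyed store and simplified types are assumptions for the example):

```go
package main

import "fmt"

// Report is a trimmed-down version of the type above.
type Report struct {
	Type    string
	Message string
	User    string
}

// store simulates the KVStore: one list of reports per post ID,
// which is what SaveReport and GetPostReports read and write.
var store = map[string][]Report{}

// SaveReport appends a report to the list stored under the post ID.
func SaveReport(postID string, r Report) {
	store[postID] = append(store[postID], r)
}

// GetPostReports returns all reports stored for the given post.
func GetPostReports(postID string) []Report {
	return store[postID]
}

func main() {
	SaveReport("19de02e1", Report{"nudity", "inappropriate content", "user-a"})
	SaveReport("19de02e1", Report{"spam", "repeated message", "user-b"})
	fmt.Println(len(GetPostReports("19de02e1")))
}
```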
### Conclusion

The event-based solution allows both for on-chain storage of all the reports created by users and for custom handling of such reports by different applications. Combined with Djuno, this is in my opinion the best approach to pursue for the posts reporting implementation. It also allows applications that want to handle this in a decentralized way to do so by using the soon-to-be-implemented CosmWasm module (#115). What are your thoughts @bragaz @kwunyeung?
I'm wondering if emitting an event is enough for this. What if the event subscription is lost at some point? Would the application still know that a specific post has been reported and needs to be handled? For the case of having Facebook and PornHub, as long as we separate the posts and flags into their own subspaces, would that confusion happen?
The reports will be stored on-chain as well, so that should do it: the application can fetch them later and perform a diff.
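That catch-up-by-diff idea could look like the following sketch: on reconnect, the application fetches all on-chain reports and processes only the ones it has not yet seen. Report identifiers are simplified to plain strings here.

```go
package main

import "fmt"

// missedReports returns the on-chain reports the application has
// not processed yet, so a lost event subscription is recoverable:
// the chain state is the source of truth, and events are only a hint.
func missedReports(onChain, seen []string) []string {
	seenSet := map[string]bool{}
	for _, id := range seen {
		seenSet[id] = true
	}
	var missed []string
	for _, id := range onChain {
		if !seenSet[id] {
			missed = append(missed, id)
		}
	}
	return missed
}

func main() {
	onChain := []string{"r1", "r2", "r3"}
	seen := []string{"r1"} // processed before the subscription dropped
	fmt.Println(missedReports(onChain, seen))
}
```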
Right, I didn't think about subspaces. Users could still perform censorship attacks by reporting posts cross-app though 🤔
@RiccardoM Hm... actually there is no contradiction with your design. The spec you are proposing is good enough to implement the idea. We don't need the appeal feature at this moment. We only need to let users flag the posts. Whether to display them or not all depends on the apps themselves, based on their own rules. The most basic info we can have would be the number of reports, and the storage association is simple enough. In the future, when we have more values regarding user accounts, like reputations, we may also list them out along with the flags for the applications to consider whether those values should be taken into account. That will be another story. As for the cross-app censorship attack, it may be solvable if we move this to a smart contract. The state update has to be done by the smart contract, and it depends on who can trigger the smart contract.
What do you guys think about implementing this into
@bragaz Yes, this might be good for
I really think that leaving applications the freedom to support their own types might be better and easier to manage in the future.
@bragaz agree to have it in |
A user can flag messages as inappropriate. Conditions for hiding messages should be implemented: for example, a message will be hidden if it has been flagged by at least 2 individuals.
The owner of the flagged message can appeal. The appeal should be judged by the community, maybe via some kind of governance?
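A hedged sketch of those two rules as a tiny state machine (the states, method names, and threshold constant are all illustrative):

```go
package main

import "fmt"

type Status int

const (
	Visible     Status = iota
	Hidden             // reached the flag threshold
	UnderAppeal        // owner appealed, awaiting community jury
)

// flagThreshold implements "hidden if flagged by at least 2 individuals".
const flagThreshold = 2

type Post struct {
	Flags  int
	Status Status
}

// Flag registers one flag and hides the post once the threshold is met.
func (p *Post) Flag() {
	p.Flags++
	if p.Status == Visible && p.Flags >= flagThreshold {
		p.Status = Hidden
	}
}

// Appeal moves a hidden post into community review.
func (p *Post) Appeal() {
	if p.Status == Hidden {
		p.Status = UnderAppeal
	}
}

func main() {
	p := &Post{}
	p.Flag()
	fmt.Println(p.Status == Visible) // one flag is not enough
	p.Flag()
	fmt.Println(p.Status == Hidden) // second flag hides it
	p.Appeal()
	fmt.Println(p.Status == UnderAppeal)
}
```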