Posts can be flagged and owners can appeal #50

Closed
kwunyeung opened this issue Nov 23, 2019 · 8 comments · Fixed by #179
Labels
kind/new-feature Propose the addition of a new feature that does not yet exist x/posts Post module

Comments

@kwunyeung
Contributor

kwunyeung commented Nov 23, 2019

A user can flag messages as inappropriate. Conditions for hiding messages should be implemented. For example, a message could be hidden once it has been flagged by at least 2 individuals.

The owner of the flagged message can appeal. The appeal should be judged by the community, maybe via some kind of governance?

@kwunyeung kwunyeung added kind/new-feature Propose the addition of a new feature that does not yet exist x/posts Post module labels Dec 6, 2019
@RiccardoM
Contributor

RiccardoM commented Dec 13, 2019

Premise

I fully support this feature, but I think there are some challenges that need to be thought through very deeply first.

Difficulties

Suppose we implement this feature as you mentioned, so:

  • users can mark posts as inappropriate
  • after a defined number of flags (let's say 2 to be coherent with your example) the posts will be hidden

Who should manage these situations?

The first question that comes into my mind is: who should the owner of the flagged post speak to?

We have two different options to keep things decentralized.

1. The handling is done through a decentralized governance made of all the community.

This can unfortunately lead to two different bad situations (which can even occur together):

  1. The overall management of such a situation can take too long.
    If we take the Cosmos Hub as an example, it takes up to a month to have a simple proposal sorted out properly. Since the flagged post is of high importance, things should be cleared up as soon as possible.

  2. The community can ban users who do not think like the majority.
    This could lead to politics-related censorship, as well as other forms of censorship that emerge when a group of people with a certain mindset has the power to vote on whether a group of people with a different mindset can express their opinion or not.

2. An elected group of people

I personally like best the idea of an elected group of people (elected by the whole community via a specific governance proposal) that rotates a couple of times per year and handles such situations. In this case, what they would do is:

  1. Visualize the message
  2. Express an individual vote on whether the post should be hidden or not.

This would allow for faster management of such situations, but could make it easier for the user who posted the flagged message to corrupt those people.

How to prevent double posting?

When the community has flagged a post as inappropriate, and it is being reviewed or has already been hidden, how do we prevent the same person from posting the same message again?

What we could do is "jail" them until the message has been reviewed. By jailing I mean not allowing them to post any other message. I, however, find this option really brutal, and it should not be applied.

On the other hand, we could implement a sort of exponential jail time for flagged posts that end up being classified as inappropriate. For example:

  1. Bob posts a message, which is flagged and classified as inappropriate. This leads to him being jailed for 1 hour.
  2. While the first post was under review, Bob posted another one that was flagged and later classified as inappropriate. This increases his jail time to 1 day and 1 hour.
  3. After 25 hours, Bob is unjailed and posts yet another message that is flagged and classified as inappropriate. This leads to him being jailed for 1 week.

We can then go up exponentially from 1 hour to 1 day, 1 week, 1 month, etc.

This will surely discourage posters from publishing new messages that can easily be flagged as inappropriate.
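
The escalation above can be sketched as a small helper. Note that the ladder values beyond 1 week, and the doubling rule past the end of the ladder, are assumptions for illustration, not a finalized spec:

```go
package main

import (
	"fmt"
	"time"
)

// jailDuration returns the jail time for the n-th confirmed
// inappropriate post (1-indexed): 1 hour, 1 day, 1 week, 1 month,
// then doubling at each further offense. The exact ladder is an
// assumption for illustration only.
func jailDuration(offense int) time.Duration {
	ladder := []time.Duration{
		time.Hour,
		24 * time.Hour,
		7 * 24 * time.Hour,
		30 * 24 * time.Hour,
	}
	if offense <= 0 {
		return 0
	}
	if offense <= len(ladder) {
		return ladder[offense-1]
	}
	// Beyond the ladder, keep doubling the last step.
	d := ladder[len(ladder)-1]
	for i := len(ladder); i < offense; i++ {
		d *= 2
	}
	return d
}

func main() {
	for n := 1; n <= 5; n++ {
		fmt.Printf("offense %d -> %v\n", n, jailDuration(n))
	}
}
```

As in Bob's example above, a second offense committed while the first is under review simply moves him one step up the ladder, so the durations accumulate (1 day and 1 hour).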

Where to hide the posts?

Suppose we find a way to manage the above problems. From where should we hide the posts that have been marked as inappropriate? The first thing that comes to my mind is filtering them out of the list returned by the proper query endpoint (#55).

While this can be a solution for applications based on such an endpoint, we cannot guarantee that all clients will be unable to show such posts. A client that reads the transactions directly, and skips the handling of post flags, could simply go through all the txs done in the past and let its users see their contents. On the other hand, such an approach is particularly expensive, and I would personally not implement it considering how much work it would take to support such a ridiculous option.
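
The endpoint-side filtering could look like the sketch below; the `Post` fields, the report-count lookup, and the threshold of 2 flags (from the original example) are assumptions for illustration:

```go
package main

import "fmt"

// Post is a minimal stand-in for the real x/posts type.
type Post struct {
	ID      string
	Message string
}

// hideThreshold is the number of flags after which a post is hidden
// (2, as in the example above); an assumption, not a finalized value.
const hideThreshold = 2

// filterVisible drops the posts whose report count has reached the
// threshold, mimicking what the query endpoint (#55) could do before
// returning results to clients.
func filterVisible(posts []Post, reportCount map[string]int) []Post {
	visible := make([]Post, 0, len(posts))
	for _, p := range posts {
		if reportCount[p.ID] < hideThreshold {
			visible = append(visible, p)
		}
	}
	return visible
}

func main() {
	posts := []Post{{ID: "1", Message: "ok"}, {ID: "2", Message: "spam"}}
	counts := map[string]int{"2": 3}
	fmt.Println(len(filterVisible(posts, counts))) // only post "1" remains
}
```

As noted above, this only protects clients that go through the endpoint: a client reading raw transactions bypasses the filter entirely.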

@RiccardoM RiccardoM added this to the v0.3.0 milestone Dec 17, 2019
@RiccardoM RiccardoM removed this from the v0.3.0 milestone Jan 7, 2020
@RiccardoM RiccardoM added this to the v0.5.0 milestone Mar 10, 2020
@RiccardoM
Contributor

RiccardoM commented Mar 10, 2020

Event-based solution

Problems with the solution proposed above

During the latest weekly call on Tuesday 3 March 2020, some problems were raised about creating a universal reporting system that is 100% on-chain and governed by the Desmos users or by the chain governance.

The problems can be summarized with the following key points.

Censorship attacks

Having a governance-based way of handling reports might lead to censorship attacks.
Suppose, for example, that a user is targeted by a group of users (think racists, diverging political ideals, etc.). In order to completely censor that user, the attacking group can report any of their posts multiple times, leading to an initial block of those posts. If they also manage to acquire great governance power (they possess a lot of tokens or corrupt the right people), they can also have the posts permanently hidden from the chain.
Of course, the posts will not be deleted from the chain state, but they could be made very difficult to read.

Different applications Terms of Services

Desmos is meant to be the protocol that allows the creation of any kind of social network. That being said, we should consider what would happen if two completely different applications were developed on top of it.

Suppose, for example, that one day we end up with both PornHub and Facebook developing their social networks on top of Desmos.
If the two social networks share the same reporting system, this might cause some problems. As Desmos-based applications will share their users too, every Facebook user could report the content present inside PornHub's social network. Of course, they could report it for nudity or other porn-related motivations, but PornHub does not have the same ToS as Facebook, and such content should (obviously) be visible inside its social network.

For this reason, a shared reporting system might be problematic to handle in the case of multiple, very different applications built on top of Desmos.

Event-based solution

In order to solve both of the problems described above, we could implement an event-based system. This should work as follows.

Every time a post is reported, the following happens:

  1. The report is saved on-chain.
  2. A post_reported event is emitted.
  3. Desmos-based applications will listen to such event and act accordingly.

In order to make the system as generic as possible, the following features should be implemented.

NOTE. The best approach would be to implement all these features inside a new report module.

Types

A Report type should be created, containing the following data:

type Report struct {
  Type    string         `json:"type"`     // Identifies the type of the report
  Message string         `json:"message"`  // Contains the user message
  User    sdk.AccAddress `json:"user"`     // Identifies the reporting user
}

To be consistent across different applications, we should define a list of supported Types (probably inside the genesis) so that custom applications can refer to it.

Messages

A new MsgReportPost should be created, having the following structure:

type MsgReportPost struct {
  PostID types.PostID `json:"post_id"`  // Identifies the reported post
  Report types.Report `json:"report"`   // Contains the report data
}
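
A handler for this message could then implement the three-step event flow described earlier. The snippet below is a self-contained sketch: plain strings stand in for `sdk.AccAddress` and `types.PostID`, and the in-memory map stands in for the on-chain store, so the names and event attributes are assumptions:

```go
package main

import "fmt"

// Report mirrors the type above, with plain strings standing in for
// the SDK types.
type Report struct {
	Type    string
	Message string
	User    string
}

// Event is a simplified stand-in for an sdk.Event.
type Event struct {
	Type       string
	Attributes map[string]string
}

// reports is an in-memory stand-in for the on-chain store.
var reports = make(map[string][]Report)

// handleMsgReportPost follows the steps above: it persists the
// report (step 1) and emits a post_reported event (step 2) that
// Desmos-based applications can listen for (step 3).
func handleMsgReportPost(postID string, report Report) Event {
	reports[postID] = append(reports[postID], report)
	return Event{
		Type: "post_reported",
		Attributes: map[string]string{
			"post_id":  postID,
			"reporter": report.User,
		},
	}
}

func main() {
	ev := handleMsgReportPost("post-1", Report{Type: "nudity", Message: "report msg", User: "desmos1abc"})
	fmt.Println(ev.Type, ev.Attributes["post_id"])
}
```

Because the report is written to the store before the event is emitted, an application whose event subscription drops can always recover by reading the stored reports and diffing, as discussed below.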

Keeper

The Keeper should have a method that allows storing a report for a post, as well as one to read all the reports of a post.

func (k Keeper) SaveReport(ctx sdk.Context, postID types.PostID, report types.Report) {}

func (k Keeper) GetPostReports(ctx sdk.Context, postID types.PostID) []types.Report {}

Storage

The storage should be pretty simple, holding a list of Reports associated with a specific PostID:

PostID -> []Report

Conclusion

The event-based solution allows both for on-chain storage of all the reports created by users and for custom handling of such reports by different applications.

Combined with Djuno, this is in my opinion the best path to pursue for the post reporting implementation. It also allows applications that want to handle this in a decentralized way to do so by using the soon-to-be-implemented CosmWasm module (#115).

What are your thoughts @bragaz @kwunyeung?

@kwunyeung
Contributor Author

I'm wondering if emitting an event is enough for this. What if the event subscription is lost at some point? Would the application still know that a specific post has been reported and needs to be handled? As for the case of Facebook and PornHub, as long as we separate the posts and flags into their own subspaces, would that confusion happen?

@RiccardoM
Contributor

What if the event subscription is lost at some point? Would the application still knows a specific post is being reported and needed to be handled?

The reports will be stored on-chain as well, so that should do it: the application can fetch them later and perform a diff.

For the case of having Facebook and PornHub, as long as we separate the posts and flags in their own subspaces, would that confusion happen?

Right, I didn't think about subspaces. Users can still perform censorship attacks by reporting cross-app though 🤔

@kwunyeung
Contributor Author

@RiccardoM Hm... actually there is no contradiction with your design. The spec you are proposing is good enough to implement the idea. We don't need the appeal feature at this moment. We only need to let users flag the posts. Whether to display them or not depends entirely on the apps themselves, based on their own rules. The most basic info we can have would be the number of reports, and the storage association is simple enough.

In the future, when we have more values attached to user accounts, like reputation, we may also list them along with the flags so that applications can consider whether those values should be taken into account. That will be another story.

As for the cross-app censorship attack, it may be solvable if we move this to a smart contract. The state update would have to be done by the smart contract, and it depends on who can trigger it.

@leobragaz
Contributor

What do you guys think about implementing this into v0.7.0?

@RiccardoM
Contributor

@bragaz Yes, this might be good for v0.7.0. When implementing this, please do not consider the following point:

To be consistent across different applications, we should define a list of supported Types (probably inside the genesis) so that custom applications can have a list of them.

I really think that leaving applications the freedom to support their own types might be better and easier to manage in the future.

@RiccardoM RiccardoM added this to the v0.7.0 milestone May 14, 2020
@kwunyeung
Contributor Author

@bragaz agree to have it in v0.7.0. Please also study the feasibility of adopting the smart contract system in this aspect.
