Grant users privileges based on activity level #3548
Comments
I would suggest renaming this issue to "Grant users powers based on activity level". I think it would be nice to have as an option. For example, editing titles or marking posts as NSFW could work as powers, though it could also cover closing, as in #2619.
There’s a clear interest in this among fedizens: https://writing.exchange/@erlend/110391232157395456 Also worth noting that once a Trust Levels system is in place, it’d be possible to explore ways to federate it.
The MVP version of this could already mitigate much of the recent uploading of illegal images as an attack vector: disallow image uploads for new or low-trust users.
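A minimal sketch of what such an upload gate might look like, written in Rust since that is Lemmy's language; the `TrustLevel` enum, its ordering, and the `can_upload_images` helper are illustrative assumptions, not existing Lemmy code:

```rust
// Illustrative sketch only; none of these types exist in Lemmy today.
#[derive(Clone, Copy, PartialEq, PartialOrd)]
enum TrustLevel {
    New,     // freshly registered account
    Basic,   // minimum account age and a little activity
    Member,  // sustained participation
    Regular, // long-term, well-regarded contributor
}

/// Hypothetical check run before accepting an image upload.
fn can_upload_images(level: TrustLevel) -> bool {
    // MVP rule: accounts below Basic cannot attach images.
    level >= TrustLevel::Basic
}

fn main() {
    assert!(!can_upload_images(TrustLevel::New));
    assert!(can_upload_images(TrustLevel::Member));
}
```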
I'm very much against adding special abilities and privileges for users (in the same way that Stack Overflow does), outside of admins and mods. These systems get endlessly complicated and quickly become too difficult to maintain.
Requirements
Is your proposal related to a problem?
Moderators and admins are experiencing burnout due to the increasing number of users on the platform. This leads to a rise in unmoderated communities as the workload becomes too much. Currently, admins must create posts to request community moderation, adding to their existing workload. Potential moderators may also be discouraged by the full-time commitment required.
Describe the solution you'd like.
Implement a hierarchical trust level system similar to Discourse's [1], where users can gain privileges and responsibilities based on activity metrics [2]. This distributes moderation work and allows admins to focus on adjusting the trust levels of top-tier users, without micromanaging every user.
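As a rough illustration of how activity metrics might map to trust levels, here is a hedged sketch; the metric names and thresholds are invented for the example rather than taken from Discourse or Lemmy:

```rust
// Hypothetical per-user activity metrics; names and thresholds are placeholders.
struct ActivityMetrics {
    account_age_days: u32,
    posts_and_comments: u32,
    upvotes_received: u32,
    reports_upheld_against: u32,
}

/// Maps activity to a numeric trust level (0 = new user).
/// Thresholds are placeholders an admin would tune.
fn trust_level(m: &ActivityMetrics) -> u8 {
    if m.reports_upheld_against > 3 {
        return 0; // repeated upheld reports reset trust
    }
    match (m.account_age_days, m.posts_and_comments, m.upvotes_received) {
        (age, posts, votes) if age >= 180 && posts >= 200 && votes >= 500 => 3,
        (age, posts, votes) if age >= 30 && posts >= 50 && votes >= 100 => 2,
        (age, posts, _) if age >= 7 && posts >= 5 => 1,
        _ => 0,
    }
}

fn main() {
    let newcomer = ActivityMetrics {
        account_age_days: 2,
        posts_and_comments: 1,
        upvotes_received: 0,
        reports_upheld_against: 0,
    };
    assert_eq!(trust_level(&newcomer), 0);
}
```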
Describe alternatives you've considered.
Admins could configure:
The number of trust levels
The number of users per trust level
The reputation thresholds for each level
The reputation score for different actions
The privileges granted at each level
Either the target number of users per level or the reputation thresholds could be calculated automatically from the other configured parameters.
This allows instances to define tailored trust and moderation models; a rough sketch of what such a configuration could look like follows.
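A hypothetical shape for that configuration, sketched as a Rust struct; every field name and value here is illustrative, not an existing Lemmy setting:

```rust
use std::collections::HashMap;

// Sketch of the configurable parameters as an instance-level setting.
// Field names are illustrative, not an existing Lemmy config.
struct TrustLevelConfig {
    /// How many trust levels the instance uses.
    num_levels: u8,
    /// Reputation needed to reach each level (one entry per level).
    reputation_thresholds: Vec<u32>,
    /// Optional target share of users per level, used to auto-derive thresholds.
    target_users_per_level: Option<Vec<f32>>,
    /// Reputation awarded per action, e.g. "comment" -> 2.
    action_scores: HashMap<String, i32>,
    /// Privileges unlocked at each level, e.g. ["edit_titles", "mark_nsfw"].
    privileges_per_level: Vec<Vec<String>>,
}

fn main() {
    let config = TrustLevelConfig {
        num_levels: 3,
        reputation_thresholds: vec![50, 500, 5000],
        target_users_per_level: None,
        action_scores: HashMap::from([("post".into(), 5), ("comment".into(), 2)]),
        privileges_per_level: vec![
            vec!["upload_images".into()],
            vec!["edit_titles".into(), "mark_nsfw".into()],
            vec!["review_appeals".into()],
        ],
    };
    assert_eq!(config.privileges_per_level.len() as u8, config.num_levels);
}
```

If `target_users_per_level` is set, the thresholds could be derived from the observed reputation distribution instead of being fixed by hand, which is one way to realize the automatic calculation mentioned above.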
Configurability
The platform would implement trust levels on a per-community or per-instance basis; instance admins could choose which scope applies.
In communities with human moderators, admins could restrict the privileges that trust levels grant.
Appeals Process
There could be an appeals process where users can contest moderator actions. A user with a higher trust level would review the appeal and penalize the incorrect party.
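One way such an appeal could be modeled, as a hedged sketch; the types and the rule that the reviewer must outrank both parties are assumptions made for illustration:

```rust
// Illustrative only: these types do not exist in Lemmy.
struct Appeal {
    appellant_trust: u8, // trust level of the user contesting the action
    moderator_trust: u8, // trust level of the moderator who acted
    reviewer_trust: u8,  // trust level of the user asked to review
}

enum AppealOutcome {
    Upheld,     // moderator action stands; the appellant may be penalized
    Overturned, // action reversed; the moderator may be penalized
    NeedsHigherReviewer,
}

/// A reviewer may only decide an appeal if they outrank both parties.
fn review(appeal: &Appeal, action_was_correct: bool) -> AppealOutcome {
    if appeal.reviewer_trust <= appeal.appellant_trust
        || appeal.reviewer_trust <= appeal.moderator_trust
    {
        return AppealOutcome::NeedsHigherReviewer;
    }
    if action_was_correct {
        AppealOutcome::Upheld
    } else {
        AppealOutcome::Overturned
    }
}

fn main() {
    let a = Appeal { appellant_trust: 1, moderator_trust: 2, reviewer_trust: 3 };
    assert!(matches!(review(&a, false), AppealOutcome::Overturned));
}
```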
Additional context
The current state of moderation across various online communities, especially on platforms like Reddit, has been a topic of much debate and dissatisfaction. Users have voiced concerns over issues such as moderator rudeness, abuse, bias, and a failure to adhere to their own guidelines. Moreover, many communities suffer from a lack of active moderation, as moderators often disengage due to the overwhelming demands of what essentially amounts to an unpaid, full-time job. This has led to a reliance on automated moderation tools and restrictions on user actions, which can stifle community engagement and growth.
In light of these challenges, it's time to explore alternative models of community moderation that can distribute responsibilities more equitably among users, reduce moderator burnout, and improve overall community health. One promising approach is the implementation of a trust level system, similar to that used by Discourse. Such a system rewards users for positive contributions and active participation by gradually increasing their privileges and responsibilities within the community. This not only incentivizes constructive behavior but also allows for a more organic and scalable form of moderation.
The key feature of such a system is that privileges and responsibilities are unlocked gradually, as users demonstrate sustained, constructive participation.
Implementing a trust level system could significantly alleviate the current strains on moderators and create a more welcoming and self-sustaining community environment. It encourages users to be more active and responsible members of their communities, knowing that their efforts will be recognized and rewarded. Moreover, it reduces the reliance on a small group of moderators, distributing moderation tasks across a wider base of engaged and trusted users.
For communities within the Fediverse, adopting a trust level system could mark a significant step forward in how we think about and manage online interactions. It offers a path toward more democratic and self-regulating communities, where moderation is not a burden shouldered by the few but a shared responsibility of the many.
As we continue to navigate the complexities of online community management, it's clear that innovative approaches like trust level systems could hold the key to creating more inclusive, respectful, and engaging spaces for everyone.
Related Issues
Footnotes
1. Understanding Discourse Trust Levels
2. Voting Affinity and Engagement Analysis