Replies: 5 comments 4 replies
---
Update from the same facilitator: The moderation policy I use is usually based on the following:

Request from the facilitator: Ways of reducing the load on moderators involved in making any of these assessments or carrying out these tasks (e.g. the ability to edit a statement) would help. We are about to start trying some specialist qualitative analysis tools to see if they are useful for moderation, and then also for analysis when we combine metadata and statements.

More thoughts from the facilitator: Strong moderation is important in order to not place too much burden on participants. There has been plenty of feedback that people get bored assessing randomly assigned 'tweets', or that assessing even high-quality statements is taxing when they are not part of a narrative. I've wondered about moderation transparency and accountability: should people be able to see what has been submitted, which statements have been accepted and rejected, and the reasons for rejection? Deliberative democracy places a premium on reason-giving. On the other hand, no one has ever contacted me about whether their statement was accepted in any of the Polis discussions I've moderated, so this wouldn't be on my list of priorities.
---
Hi everyone,
---
Response from the facilitator:
---
Idea from another facilitator of Very Large Conversation:
---
I have another suggestion: This might be related to the existing "Priority" mechanism.
---
📬 I'm posting this discussion on behalf of a powerful facilitation partner who runs national-scale conversations.
What we mean when we say moderation
At the outset, this facilitator describes themselves as a "strong moderator", which they define as:
This facilitator does not subscribe to the permissive moderation mode of "let the system handle the noise", which is often recommended for large conversations, because that approach does not protect the experience of the "next participant" in terms of:
Current manual workflow for moderating large conversations
Here is my understanding of the manual workflow that this national-scale facilitator uses for moderation:
Problem:
This workflow is maxed out: it is expensive in human time and inefficient.
Suggested solution:
Not sure. Improve Polis? Use a third-party site for processing the similarity of submitted statements?
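To make the similarity-processing idea concrete, here is a minimal, stdlib-only sketch of one possible approach (this is an illustration of the concept, not an existing Polis feature or the partner's actual tooling): flag pairs of submitted statements whose word sets overlap heavily, so a moderator can review likely near-duplicates together instead of one by one.

```python
def jaccard(a: str, b: str) -> float:
    """Similarity of two statements as the overlap of their word sets (0.0-1.0)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def flag_near_duplicates(statements, threshold=0.6):
    """Return (i, j, score) pairs of statements a moderator should review together."""
    flagged = []
    for i in range(len(statements)):
        for j in range(i + 1, len(statements)):
            score = jaccard(statements[i], statements[j])
            if score >= threshold:
                flagged.append((i, j, round(score, 2)))
    return flagged

statements = [
    "We should invest more in public transport",
    "we should invest much more in public transport",
    "Taxes are already too high",
]
print(flag_near_duplicates(statements))  # → [(0, 1, 0.88)]
```

A production version would likely use embeddings or TF-IDF rather than raw word overlap, but even a crude pass like this could cut the number of statements needing fresh human judgment.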
Alternative suggestions:
Um...
Additional context:
Please add any other context or screenshots about the feature request here.
I will add a link to their existing Google Spreadsheet if I can get one.