@dmarti wrote in #91 (comment):
Public accountability will be valuable as long as the number of sets requiring review is manageable and the review work is seen as meaningful. It would be useful to limit the number of proposed sets requiring human attention, especially for the "associated" type, by requiring supporting user research and by setting sensible waiting periods before the same domain can be re-submitted in a new set after a previous rejection.
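To make the waiting-period idea concrete, here is a minimal sketch of how a submission pipeline could enforce it. Everything in it is hypothetical: the 90-day window, the `last_rejection` record, and the `resubmission_allowed` function are illustrations under assumed policy parameters, not part of any existing spec or tooling.

```python
from datetime import datetime, timedelta
from typing import Dict, Optional

# Hypothetical cooldown before a rejected domain may appear in a new
# proposed set; the real window would be chosen by the governance process.
RESUBMISSION_COOLDOWN = timedelta(days=90)

def resubmission_allowed(domain: str,
                         last_rejection: Dict[str, datetime],
                         now: Optional[datetime] = None) -> bool:
    """Return True if `domain` may be included in a newly proposed set.

    `last_rejection` maps each domain to the timestamp of its most
    recent rejection (an illustrative record, not part of any spec).
    """
    now = now or datetime.utcnow()
    rejected_at = last_rejection.get(domain)
    if rejected_at is None:
        # Never rejected before: no waiting period applies.
        return True
    return now - rejected_at >= RESUBMISSION_COOLDOWN

# Example: a domain rejected on 2023-01-15 is still in its cooldown on
# 2023-02-01 but may be re-submitted by 2023-06-01.
rejections = {"example.com": datetime(2023, 1, 15)}
assert not resubmission_allowed("example.com", rejections,
                                now=datetime(2023, 2, 1))
assert resubmission_allowed("example.com", rejections,
                            now=datetime(2023, 6, 1))
```

A check like this keeps human reviewers from re-litigating the same set every week, while still leaving the rejection decision itself to the accountability process.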
It is always important to consider the incentives for participating in public accountability. People will be less likely to participate if their efforts are seen as duplicating a task that could have been automated, or as joining an open-ended argument about whether a set is valid. Participation is likely to be greater and of higher quality if review is framed as evaluating the existing user research about a set, rather than as an open invitation to express opinions.
There will also need to be appropriately designed and enforced anti-abuse and anti-harassment measures for public review participants. (Nobody wants to get SWATted because their comment kept some scammer's bogus set from getting in.)