
9. Do No Harm by Design

Bolaji Ayodeji edited this page Sep 4, 2024 · 18 revisions

Note

Indicator Requirement: "Digital public goods must be designed to anticipate, prevent, and do no harm by design."

This indicator is further broken into three parts, described as follows:

  • (A) Digital public goods that collect, store and distribute personally identifiable data, must demonstrate how they ensure the privacy, security and integrity of this data in addition to the steps taken to prevent adverse impacts resulting from its collection, storage and distribution.
  • (B) Digital public goods that collect, store or distribute content must have policies identifying inappropriate and illegal content such as child sexual abuse materials in addition to processes for detecting, moderating, reporting and removing inappropriate/illegal content.
  • (C) If the digital public good facilitates interactions with or between users or contributors there must be a process for users and contributors to protect themselves against grief, abuse, and harassment. The project must have system(s) to address the safety and security of underage users.

A good way to provide evidence of this is to:

  • Provide any links relevant to user terms and conditions, privacy policy, code of conduct, or similar documents.
  • State clearly how your digital solution was designed to account for the handling of personally identifiable information (PII), usage by underage users, illegal content, enforcement of the code of conduct, etc.

Below are some tips and best practices for each subpart of this indicator.

9A. Data Privacy & Security


Designing DPGs with robust data security is crucial to protecting sensitive information and user privacy. The definition of personal data is laid out in the "Guidance note on big data for achievement of the 2030 Sustainable Development Goals agenda":

For the purposes of this document, personal data means data, in any form or medium, relating to an identified or identifiable individual who can be identified, directly or indirectly, by means reasonably likely to be used, including where an individual can be identified by linking the data to other information reasonably available. Personal data is defined by many regional and national instruments and can also be referenced as personal information or personally identifiable information.

Here are some software data security best practices to consider during platform development:

  • Data Encryption: Use strong encryption algorithms to secure data at rest and in transit. Implement encryption for sensitive data, such as passwords, personal information, and payment details, to prevent unauthorized access.
  • Secure Authentication and Authorization: Implement a robust authentication system with secure password storage and multifactor authentication (MFA) options. Use role-based access control (RBAC) to ensure that users only have access to the data they need.
  • Regular Security Audits and Pen-testing: Conduct security audits and penetration testing on the platform to identify vulnerabilities and weaknesses. Regularly review and update security measures based on the findings.
  • Secure Communication: Use HTTPS and SSL/TLS protocols to encrypt data transmitted between clients and servers. Avoid transmitting sensitive data over unencrypted channels.
  • Regular Software Updates and Patch Management: Keep all software components and dependencies up-to-date with the latest security patches to address known vulnerabilities.
  • User Input Validation: Validate and sanitize user input to prevent common security issues like SQL injection and cross-site scripting (XSS) attacks.
  • Least Privilege Principle: Follow the principle of least privilege, giving users and system components only the minimum access required to perform their functions.
  • Secure Error Handling: Implement proper error handling and logging mechanisms to avoid exposing sensitive information in error messages.
  • Secure API Design: If the platform uses APIs (Application Programming Interfaces), ensure they are designed securely, with proper authentication, authorization, and input validation mechanisms.
  • Privacy by Design: Incorporate privacy measures into the platform's architecture and consider data protection throughout the development lifecycle.
  • Compliance with Data Regulations: Ensure that the platform complies with relevant data protection and privacy regulations, such as GDPR (General Data Protection Regulation) or CCPA (California Consumer Privacy Act), depending on the target audience and location.

9B. Inappropriate & Illegal Content


DPGs can employ various mechanisms to handle inappropriate and illegal content. These mechanisms are essential for maintaining a safe and compliant online environment. It's important for digital platforms to strike a balance between protecting users from harmful content and respecting their rights to freedom of expression. Implementing a combination of the mechanisms below and adapting them as needed can help achieve this balance.

Content Moderation:

  • Human Moderation: Employ a team of human moderators to review and remove content that violates the platform's policies. This can include hate speech, harassment, explicit content, or illegal activities.
  • Automated Moderation: Implement AI-driven algorithms to detect and remove inappropriate content. These algorithms can flag content based on keywords, image recognition, or behavioural patterns.
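At its simplest, automated moderation can start with keyword flagging that routes matches to human reviewers. The sketch below is a minimal illustration under that assumption; the `BLOCKED_TERMS` set and `flag_content` function are hypothetical, and real deployments rely on maintained term lists, ML classifiers, and human review rather than a hard-coded set.

```python
import re

# Hypothetical blocklist for illustration only.
BLOCKED_TERMS = {"spamword", "slur_example"}

def flag_content(text: str) -> list[str]:
    """Return the blocked terms found in a post, for routing to moderators."""
    words = re.findall(r"[a-z_]+", text.lower())
    return sorted(set(words) & BLOCKED_TERMS)
```

Flagged posts would typically be queued for human review rather than removed automatically, to reduce false positives.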

Reporting Systems:

  • Provide users with a straightforward reporting system to flag content that violates the platform's guidelines. Make sure to review these reports promptly.
  • Encourage users to report inappropriate content and assure them that their reports will remain confidential.
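One way to keep reports confidential is to separate the internal record (which stores the reporter's identity) from the view exposed to reviewers. A minimal sketch, assuming an in-memory store; the `file_report`/`public_view` names are hypothetical:

```python
import itertools

_report_ids = itertools.count(1)
_reports: dict[int, dict] = {}

def file_report(reporter: str, content_id: str, reason: str) -> int:
    """Store a report keyed by id; the reporter's identity stays internal."""
    rid = next(_report_ids)
    _reports[rid] = {
        "reporter": reporter,  # kept internal, never exposed in public views
        "content_id": content_id,
        "reason": reason,
        "status": "open",
    }
    return rid

def public_view(rid: int) -> dict:
    """The review-facing view of a report, with the reporter's identity omitted."""
    r = _reports[rid]
    return {"content_id": r["content_id"], "reason": r["reason"], "status": r["status"]}
```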

User Guidelines and Policies:

  • Clearly communicate community guidelines and policies to all users. Ensure that these guidelines address what constitutes inappropriate or illegal content.
  • Enforce penalties for users who violate these guidelines, including warnings, temporary suspensions, or permanent bans.
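The escalating penalties above (warning, temporary suspension, permanent ban) can be expressed as a simple escalation ladder. This is a hedged sketch; the `PENALTIES` list and `penalty_for` function are hypothetical names, and a real platform would also weigh the severity of each violation, not just the count.

```python
# Hypothetical escalation ladder: repeated violations escalate the penalty.
PENALTIES = ["warning", "temporary_suspension", "permanent_ban"]

def penalty_for(violation_count: int) -> str:
    """Map a user's cumulative violation count to the next enforcement step."""
    index = min(violation_count, len(PENALTIES)) - 1
    return PENALTIES[max(index, 0)]
```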

Age Verification:

  • Implement age verification mechanisms to restrict access to age-restricted or explicit content to adults only.
  • Ensure that users below the age limit cannot easily falsify their age.
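A basic building block of age gating is computing a user's age correctly from a declared date of birth, accounting for whether the birthday has passed this year. A minimal sketch, assuming a verified birth date is available (the `age_on`/`may_view_restricted` names are hypothetical; robust age verification requires more than self-declared dates):

```python
from datetime import date

def age_on(born: date, today: date) -> int:
    """Full years elapsed between a birth date and a reference date."""
    # Subtract one year if this year's birthday hasn't occurred yet.
    return today.year - born.year - ((today.month, today.day) < (born.month, born.day))

def may_view_restricted(born: date, today: date, minimum_age: int = 18) -> bool:
    """Gate access to age-restricted content."""
    return age_on(born, today) >= minimum_age
```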

Content Filters:

  • Use keyword filters and content recognition technology to automatically block or flag potentially inappropriate content.
  • Allow users to customize their own content filters to control what they see on the platform.

Machine Learning and AI:

  • Train machine learning models to identify and categorize inappropriate and illegal content more accurately.
  • Continuously update and improve these models to adapt to evolving online threats.

Legal Compliance:

  • Comply with local, national, and international laws and regulations concerning online content. This includes reporting illegal activities to law enforcement when necessary.
  • Implement mechanisms to handle content removal requests in accordance with relevant laws.

Transparency and Appeals:

  • Provide a transparent process for content removal, so users understand why their content was flagged or removed.
  • Allow users to appeal content moderation decisions and provide a mechanism for re-evaluating and reinstating content if necessary.

Education and Awareness:

  • Educate users about the platform's policies and the importance of responsible online behaviour.
  • Raise awareness about the potential consequences of sharing inappropriate or illegal content.

Regular Auditing and Reporting:

  • Conduct regular audits of the platform's content and moderation processes to ensure they are effective and compliant.
  • Publish transparency reports detailing content removal and enforcement actions.

9C. Protection from Harassment


For DPGs, enabling users and contributors to protect themselves from harassment is crucial to fostering a safe and welcoming open-source community. Combining the mechanisms below with clear communication and community-building efforts can contribute to a more respectful digital environment.

  • Privacy Settings: Let users customize their privacy settings to control who can interact with them. This includes options to restrict messages or comments from non-friends or followers.
  • Blocking and Muting: Provide easy-to-use blocking and muting features that enable users to prevent specific individuals from contacting or interacting with them. Blocked users should not be able to view or comment on the blocker's content.
  • Report and Flagging Tools: Implement clear and accessible reporting systems that enable users to report abusive or harassing behaviour. Ensure that these reports are reviewed promptly.
  • Content Moderation Filters: Allow users to set custom content filters to automatically filter out or hide comments or messages containing specific keywords, phrases, or content that they find offensive.
  • Anonymity Controls: Let users control who can interact with them while maintaining their anonymity. This could include allowing interactions only from verified or trusted users.
  • Message Controls: Offer features that allow users to filter and manage incoming messages, such as sorting messages from unknown senders into a separate folder or providing options to filter messages by content or sender.
  • Profile and Comment Visibility: Let users customize the visibility of their profiles and posts. They can choose to make their profiles and content visible only to friends or followers.
  • Feedback Mechanisms: Encourage users to provide feedback about other users' behaviour. If multiple users report the same individual, it can trigger automatic or manual review by content moderators.
  • Time Restrictions: Implement features that limit the frequency and timing of interactions from a single user to prevent spamming or harassment.
  • Community Guidelines and Reporting Transparency: Ensure that platform guidelines clearly address harassment and its consequences. Users should know what behaviour is unacceptable, and they should be informed about the status of their reports.
  • User Support and Counselling: Provide resources for users who are experiencing harassment, including an email or contact form to get in touch with admins.
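Blocking is the most direct of the mechanisms above: a blocked user should simply be unable to contact the blocker. The sketch below is a minimal in-memory illustration; the `BlockList` class and its method names are hypothetical, and a real platform would persist this state and also enforce it on content visibility, not just messaging.

```python
from collections import defaultdict

class BlockList:
    """Per-user block registry; blocked users cannot contact the blocker."""

    def __init__(self) -> None:
        # Maps each user to the set of users they have blocked.
        self._blocked: defaultdict[str, set[str]] = defaultdict(set)

    def block(self, user: str, target: str) -> None:
        self._blocked[user].add(target)

    def unblock(self, user: str, target: str) -> None:
        self._blocked[user].discard(target)

    def can_contact(self, sender: str, recipient: str) -> bool:
        """A sender may contact a recipient only if the recipient hasn't blocked them."""
        return sender not in self._blocked[recipient]
```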

Other Reference Docs


Tip

Here's a collection of extra resources and helpful links, curated by the DPGA and the DPG community, that you can explore or contribute to.