
Toxic Comment Detection Using a Kaggle Dataset

In this project, we compare the performance of machine learning and deep learning models, including Naive Bayes, Logistic Regression, Decision Tree, LSTM, CNN, and BERT, on a toxic comment detection challenge from Kaggle. The main goal of this project is to support content moderation. After training the models, we analyze popular online and social media platforms, including Reddit and Twitter, by detecting toxic content with the trained models. We further show the vulnerability of the models using adversarial examples.
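
As a quick illustration of the classical baselines, below is a minimal sketch of a TF-IDF + Logistic Regression classifier in scikit-learn. The train.csv file name and the comment_text/toxic columns follow the Kaggle Jigsaw toxic comment dataset; treat them as assumptions rather than the exact pipeline used in the project scripts.

```python
# Minimal TF-IDF + Logistic Regression baseline sketch.
# Assumes the Kaggle Jigsaw "train.csv" with a "comment_text" column and
# a binary "toxic" label; not the project's exact training pipeline.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("train.csv")  # assumed dataset path

X_train, X_test, y_train, y_test = train_test_split(
    df["comment_text"], df["toxic"], test_size=0.2, random_state=42
)

# Word/bigram TF-IDF features, capped for memory.
vectorizer = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train_vec, y_train)

probs = clf.predict_proba(X_test_vec)[:, 1]
print(f"ROC-AUC: {roc_auc_score(y_test, probs):.3f}")
```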

Scripts

NOTE: if a script fails to load, reload the page and try again.

Training

Social / Online Platform Analysis
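
The platform analysis applies a trained classifier to comments collected from sites such as Reddit. Below is a hypothetical sketch of that loop using PRAW; the credentials are placeholders, the 0.5 threshold is an assumption, and clf/vectorizer refer to the baseline sketch above rather than the project's actual models.

```python
# Hypothetical sketch: score recent Reddit comments with a trained model.
# Requires PRAW credentials (placeholders below) and the `clf` /
# `vectorizer` objects from the baseline sketch above.
import praw

reddit = praw.Reddit(
    client_id="CLIENT_ID",          # placeholder
    client_secret="CLIENT_SECRET",  # placeholder
    user_agent="toxic-comment-analysis",
)

# Pull the newest comments site-wide and flag likely-toxic ones.
for comment in reddit.subreddit("all").comments(limit=50):
    prob = clf.predict_proba(vectorizer.transform([comment.body]))[0, 1]
    if prob > 0.5:  # assumed decision threshold
        print(f"{prob:.2f}  {comment.body[:80]!r}")
```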

Adversarial Vulnerability Attack

  • Script of the adversarial attack using the Perspective API; a minimal sketch of the idea follows below.
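
The sketch below shows the general shape of such an attack: score a comment with the Perspective API, apply a small character-level perturbation, and score it again. The perturbation rule and the API key are assumptions for illustration, not necessarily the attack implemented in the script.

```python
# Sketch of a character-level adversarial probe against Perspective.
# The "i" -> "1" substitution is an illustrative perturbation, not the
# project's attack; API_KEY is a placeholder.
from googleapiclient import discovery

API_KEY = "YOUR_PERSPECTIVE_API_KEY"

client = discovery.build(
    "commentanalyzer",
    "v1alpha1",
    developerKey=API_KEY,
    discoveryServiceUrl="https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1",
    static_discovery=False,
)

def toxicity(text: str) -> float:
    """Return Perspective's TOXICITY summary score for `text`."""
    body = {"comment": {"text": text}, "requestedAttributes": {"TOXICITY": {}}}
    response = client.comments().analyze(body=body).execute()
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

original = "you are an idiot"
perturbed = original.replace("i", "1")  # visually similar obfuscation

print(f"original : {toxicity(original):.3f}")
print(f"perturbed: {toxicity(perturbed):.3f}")  # score often drops
```

A large drop in score for a comment that a human still reads as toxic is the vulnerability the project demonstrates.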

Team Members:

  • Danni Chen
  • Fuzail Khan
  • Don Le

About

Final Project for CS263 - UCLA
