Detect abusive comments in conversations

Our model detects whether a comment contains several types of toxicity. Type a comment to check its toxicity.

The model reports a probability (0–100 %) for each of six toxicity labels:

  • Toxic
  • Severe Toxic
  • Obscene
  • Threat
  • Insult
  • Hate
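A model of this kind can be approximated by a simple multi-label baseline: one binary classifier per label over shared text features. The sketch below uses TF-IDF features and per-label logistic regression (not the attention model described on this page); the training comments and label assignments are invented toy data, standing in for the Kaggle dataset.

```python
# Hypothetical multi-label toxicity baseline: TF-IDF features plus one
# logistic-regression classifier per label via OneVsRestClassifier.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "hate"]

# Toy training comments (invented); a real model would be trained on
# the Kaggle Toxic Comment Classification dataset.
comments = [
    "you are a complete idiot",
    "have a lovely day",
    "i will find you and hurt you",
    "thanks for the helpful answer",
    "go to hell you disgusting pig",
    "people like you should not exist",
]
# One binary indicator per label for each comment (rows align with
# `comments`, columns with LABELS).
y = np.array([
    [1, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 0],
    [1, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [1, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 0, 1],
])

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(comments)
clf = OneVsRestClassifier(LogisticRegression())
clf.fit(X, y)

def toxicity(comment: str) -> dict:
    """Return a probability per toxicity label for one comment."""
    probs = clf.predict_proba(vectorizer.transform([comment]))[0]
    return dict(zip(LABELS, probs))

scores = toxicity("you are a complete idiot")
```

Each returned probability maps directly onto the 0–100 % scale shown in the demo; a per-label threshold then decides which labels to report.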

The model was trained on data from the Kaggle Toxic Comment Classification Challenge, in which we placed in the top 1% of more than 4,500 teams. A detailed explanation of the attention mechanism used to highlight different parts of the sentence can be found here.
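The attention mechanism itself is not reproduced on this page, but its core idea, a softmax over per-token relevance scores that the model uses to weight (and highlight) individual words, can be sketched as follows. The token scores here are invented for illustration, not produced by the actual model.

```python
# Minimal sketch of attention-style highlighting: a softmax turns raw
# per-token relevance scores into weights that sum to 1.
import numpy as np

def attention_weights(scores: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over per-token relevance scores."""
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

tokens = ["you", "are", "an", "idiot"]
scores = np.array([0.1, 0.2, 0.1, 2.5])  # hypothetical relevance scores
weights = attention_weights(scores)
# The highest-weighted token is the one the UI would highlight most.
```

In the demo, such weights drive the per-word highlighting: tokens with larger attention weights are shaded more strongly.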