In this work, we demonstrate how existing classifiers for identifying toxic
comments online fail to generalize to the diverse concerns of Internet users.
We survey 17,280 participants to understand how user expectations for what
constitutes toxic content differ across demographics, beliefs, and personal
experiences. We find that groups historically at risk of harassment – such as
people who identify as LGBTQ+ or young adults – are more likely to flag a
random comment drawn from Reddit, Twitter, or 4chan as toxic, as are people who
have personally experienced harassment in the past. Based on our findings, we
show how the accuracy of current one-size-fits-all toxicity classifiers, such
as Jigsaw's Perspective API, can be improved by 86% on average through
personalized model tuning. Ultimately, we highlight current pitfalls and new
design directions that can improve the equity and efficacy of toxic content
classifiers for all users.