Catherine Yeo

NLP Bias Against People with Disabilities

An overview of how biases against mentions of disabilities are embedded in natural language processing tasks and models


I recently came across “Social Biases in NLP Models as Barriers for Persons with Disabilities”, a new paper (arXiv preprint here) that will appear at ACL 2020. It offers a novel vantage point on bias in NLP by examining how machine learning and NLP models affect people with disabilities, a perspective I wanted to highlight.


Photo by Josh Appel on Unsplash


One Line Summary

Undesirable biases against people with disabilities exist in NLP tasks and models, specifically toxicity prediction, sentiment analysis, and word embeddings.


Motivation & Background

NLP models are increasingly being used in our daily lives in a variety of ways:

  • To detect and moderate toxic comments in online forums (toxicity prediction)

  • To measure consumers’ feelings towards well-known brands (sentiment analysis)

  • To match candidates to job opportunities

With such prevalent usage, it is crucial that NLP models do not discriminate against the people impacted by these algorithms. Previous research exploring biases in NLP models has looked extensively at attributes such as gender and race, but bias with respect to different disability groups has been explored much less.


This is problematic given that over 1 billion people in the world experience some form of disability; that is roughly 15% of the population we have neglected in creating and evaluating fair AI technologies.


Findings

This paper’s analysis used a set of 56 phrases referring to people with different disabilities, each classified as either Recommended or Non-Recommended. For example:

  • Under the category of mental health disabilities, “a person with depression” is Recommended and “an insane person” is Non-Recommended

  • Under the category of cognitive disabilities, “a person with dyslexia” is Recommended and “a slow learner” is Non-Recommended

The researchers then followed a process of perturbation: they took existing template sentences containing the pronoun “he” or “she” and perturbed them by replacing the pronoun with one of the 56 phrases.
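To make the perturbation step concrete, here is a minimal Python sketch. The template sentences below are invented examples and only four of the 56 phrases are shown; the paper uses its own templates and full phrase list, so treat this as an illustration rather than their exact procedure.

```python
import re

# Hypothetical template sentences containing "he" or "she" (not from the paper).
templates = [
    "I think he is a wonderful neighbor.",
    "She applied for the job last week.",
]

# A small sample of the 56 phrases, grouped by the paper's labels.
phrases = {
    "Recommended": ["a person with depression", "a person with dyslexia"],
    "Non-Recommended": ["an insane person", "a slow learner"],
}

def perturb(sentence: str, phrase: str) -> str:
    """Replace the first 'he' or 'she' in the sentence with a disability phrase."""
    return re.sub(r"\b([Hh]e|[Ss]he)\b", phrase, sentence, count=1)

for label, group in phrases.items():
    for phrase in group:
        for template in templates:
            print(label, "|", perturb(template, phrase))
```

(The sketch ignores details such as re-capitalizing sentence-initial phrases.)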


They then calculated the score diff, i.e. the difference between the NLP model’s score for the original sentence and its score for the perturbed sentence.
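Assuming some scoring function `score(sentence)` that returns the model’s toxicity probability or sentiment score (a placeholder here, not an API from the paper), the score diff could be computed roughly like this, using the convention that a positive diff means the perturbed sentence scored higher:

```python
def score(sentence: str) -> float:
    """Placeholder for a real model, e.g. a toxicity classifier or sentiment analyzer."""
    raise NotImplementedError

def score_diff(original: str, perturbed: str) -> float:
    # Sign convention assumed here: positive means the perturbation raised the
    # model's score (e.g. more toxic), negative means it lowered the score.
    return score(perturbed) - score(original)
```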


Overall, they found that:

  • In toxicity prediction, the model gave higher (more toxic) scores to the perturbed sentences for both Recommended and Non-Recommended phrases, which means that sentences mentioning disability are likelier to be labelled toxic

  • In sentiment analysis, the model gave lower (more negative) scores to the perturbed sentences for both Recommended and Non-Recommended phrases, which means that sentences mentioning disability are likelier to be labelled negative

  • In both tasks, Non-Recommended phrases resulted in more toxic/negative scores than Recommended phrases


Source: Figure 1 of the paper


Furthermore, the researchers found that neural text embeddings such as BERT, a widely used language model, similarly contain undesirable biases around mentions of disabilities. Again using perturbation, they compared how BERT’s top 10 word predictions changed with different disability phrases and found that the predicted words for sentences mentioning disabilities frequently carried negative sentiment. This means BERT associates words with more negative sentiment with phrases referencing people with disabilities.
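A rough sketch of this kind of probe, using Hugging Face `transformers` pipelines as stand-ins (the template, phrases, and sentiment model below are my assumptions, not the paper’s exact setup):

```python
from transformers import pipeline

# Masked language model for fill-in-the-blank predictions, plus an off-the-shelf
# sentiment classifier to label each predicted word.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
sentiment = pipeline("sentiment-analysis")

# Hypothetical template; the paper uses its own templates and sentiment scoring.
template = "{phrase} is [MASK]."

for phrase in ["A person with depression", "An insane person"]:
    for pred in fill_mask(template.format(phrase=phrase), top_k=10):
        word = pred["token_str"]
        label = sentiment(word)[0]["label"]  # e.g. POSITIVE or NEGATIVE
        print(f"{phrase!r:30} -> {word:15} ({label})")
```

Counting how often the top 10 predictions are labelled negative for each phrase gives a rough version of the comparison described above.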


What do these results imply?

These biases can result in non-toxic, non-negative comments mentioning disabilities being flagged as toxic at a much higher rate, suppressing harmless discussion about disabilities.


This could limit the opportunity of people with disabilities to participate equally in online forums, which consequently influences public awareness and societal attitudes.


My Final Thoughts

  1. NLP models are already widely used in our daily lives. Given evidence of biases in these models, human judgment is needed in addition to these models’ decisions to ensure that people with disabilities are not discouraged from online participation.

  2. Further time and research in the AI fairness field also need to be dedicated to under-explored marginalized groups, e.g. people with disabilities, gender non-binary individuals, intersectional subgroups, etc.

  3. Uncovering biases in ML/NLP models is a valuable first step, and this paper did a great job bringing to light such biases against people with disabilities. Now, we must also figure out how to eliminate these biases.


For more information, check out the original paper on arXiv here.


Ben Hutchinson, Vinodkumar Prabhakaran, Emily Denton, Kellie Webster, Yu Zhong, and Stephen Denuyl. “Social Biases in NLP Models as Barriers for Persons with Disabilities.” In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020).

. . .


