Catherine Yeo

What is Transparency in AI?

What does it mean for a machine learning algorithm to be “transparent”?



In discussions of fair and ethical algorithms, transparency is a key concept that comes up again and again. But what exactly does it mean for a machine learning algorithm to be “transparent”?


Like “fairness” and “privacy”, transparency sounds important and useful, but the concept is ambiguous and worth exploring in more detail.


Definition

When we seek transparent algorithms, we are asking for an understandable explanation of how they work. For example (a short sketch after this list shows how a simple model can answer these questions):

  • What is the algorithm doing?

  • Why is it outputting this particular decision?

  • How did it know to do this?
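To make these questions concrete, here is a minimal sketch, my own illustration rather than anything from the article, using a shallow decision tree, a model often held up as transparent because its learned rules can be printed and read directly:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
# A shallow tree keeps the learned rules short enough for a human to read.
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(iris.data, iris.target)

# "What is the algorithm doing?" -- print the decision rules themselves.
print(export_text(model, feature_names=iris.feature_names))

# "Why is it outputting this particular decision?" -- classify one sample;
# its answer corresponds to one readable path through the rules above.
sample = iris.data[:1]
print("Predicted class:", iris.target_names[model.predict(sample)[0]])
```

The same questions become much harder to answer for, say, a deep neural network, which is precisely why transparency is an active research topic.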

Types of Transparency

Have you ever read an article or watched a lecture that made no sense to you, but made complete sense to a friend or classmate? Likewise, a machine learning system might be understandable to one person but not another. Thus, we need to consider multiple parties:

  • The developer who builds the system

  • The deployer who owns and releases the system to the public

  • The user of the system

For example, developers might be hired to build a personalized recommendation system for online shopping; Amazon (or any large e-commerce company) deploys it; and a regular member of the public uses it.


Then, with these parties in mind, there are many different types and goals of transparency (as delineated in Weller’s paper, listed under Related Work below), including but not limited to:


  • For a user to understand what the system is doing and why, which gives the user a sense of what it might do given different actions

  • For society to overcome fear of the unknown and build trust in technology

  • For a user to understand why a particular decision was made, which provides a check that the system worked as expected (critical in applications like criminal sentencing)

  • For experts to test and monitor the system

  • For a deployer to retain users, because transparency makes users feel more comfortable with the system and therefore keep using it

In the last example, while the user receives the explanation, the deployer is the true beneficiary of the system’s transparency. When we evaluate transparency, we must consider who the explanation is for versus who actually benefits from it.


We can also differentiate between global and local transparency. Global interpretability explains the overall system, while local interpretability explains a particular decision or classification. When we measure transparency, we must decide which mode provides a “better” explanation in the given context.
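To make the global/local distinction concrete, here is a minimal sketch (again my own illustration, assuming a scikit-learn linear model): a global explanation reads off the model’s coefficients, which describe its behavior across all inputs, while a local explanation weights those coefficients by one particular sample’s feature values.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)  # scale so coefficients are comparable
model = LogisticRegression(max_iter=1000).fit(X, data.target)

# Global transparency: coefficients describe the overall system --
# which features push predictions up or down across all inputs.
global_importance = sorted(
    zip(data.feature_names, model.coef_[0]), key=lambda p: -abs(p[1])
)
print("Top global features:", global_importance[:3])

# Local transparency: for one particular decision, each feature's
# contribution is its coefficient times this sample's (scaled) value.
sample = X[0]
local_contrib = sorted(
    zip(data.feature_names, model.coef_[0] * sample), key=lambda p: -abs(p[1])
)
print("Top local contributions for sample 0:", local_contrib[:3])
```

The features that matter most on average (globally) need not be the ones that drove any single prediction (locally), which is why the two modes can give very different explanations of the same system.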


Conclusion

Creating transparent algorithms and AI systems helps us explain, inspect, and reproduce how they make decisions and use data. There are many types of transparency with different motivations, so we must find better ways to articulate and measure them precisely.


Related Work

Adrian Weller. “Transparency: Motivations and Challenges.” Workshop on Human Interpretability in Machine Learning (WHI), ICML 2017. https://arxiv.org/pdf/1708.01870.pdf
