by Cynthia L. Bennett
From the Editor: Many who believe we need to be saved from our blindness or from its more difficult consequences first suggest that we need vision. When this is understood to be impossible, artificial intelligence is offered as the next best thing. It will liberate us by freeing us from the necessity of asking other people about things we cannot see. Coupled with a camera, it will tell us what we could otherwise see for ourselves.
But as Cynthia Bennett tells us, there are pitfalls as we journey into the ever-widening use of artificial intelligence. Sometimes it is because the artificial intelligence is modeled on our own biases and prejudices. Sometimes it is because the artificial intelligence lacks the experience to understand what we think it does, meaning it generates biases of its own. Still, we give it great weight in the privacy we sacrifice and the decisions we let it make about us. Here is a most provocative presentation delivered by our friend and fellow Federationist, Cynthia Bennett:
Thank you, President Riccobono, for inviting me to address the convention. I’ll start with some access notes. I tweeted a text transcript of this speech from my account on Twitter @clb5590 using our hashtag #NFB20. There are a lot of linked resources in that transcript, so I encourage you to check it out. And I’ll provide a visual description. I’m a blind white woman with dark blond hair worn down. It’s not completely visible on video, but I’m wearing a navy blue shirt that says, “Access is Love.” Finally, my virtual background is a view of the city where I live, Pittsburgh, Pennsylvania. Specifically, it shows downtown, where the Monongahela, Allegheny, and Ohio rivers meet.
Today I am going to be talking about some of the things I’ve been thinking about over the past couple of years, and specifically during my time at Carnegie Mellon. Professionally, I am a researcher. An important evaluation of a researcher’s work occurs through a process called peer review. During peer review, researchers submit work for publication. People who are deemed to be peers or colleagues with relevant expertise are then recruited to evaluate the work by drawing out its strengths and requesting improvements. But I’ve been submitting publications for eight years, and I have never been reviewed by my peers. Sure, reviewers may have degrees, but the vast majority of them are not blind like me. So I invite you, my blind peer reviewers, to join in using my aforementioned Twitter handle, @clb5590, and our hashtag #NFB20 to share your feedback on what I have to say.
During this talk I’ll argue that blind people should be organizing for the ethical study, deployment, and yes, sometimes withholding, of artificial intelligence that analyzes humans and our data and which shares particular information or makes decisions based on that analysis. To make this argument, I will first offer some definitions and examples. Second, I will share some biases and consequences of AI that have already proved harmful. Finally, I will offer some suggestions for moving forward.
Before we dive in, I’ll recognize some leaders in AI bias research who have scaffolded my education in this area, many of whom are Black women scholars. They include Ruha Benjamin, Simone Browne, Joy Buolamwini, Timnit Gebru, Safiya Umoja Noble, Morgan Scheuerman, and Meredith Whittaker.
Artificial intelligence, or AI, is a branch of computer science focused on mimicking “cognitive” functions we often associate with human intelligence, like “learning” and “problem solving.” A key term in computer science is the word algorithm. Algorithms are sets of instructions that dictate how computer programs will work. Traditionally, humans write and update algorithms. But one type, called machine learning algorithms, learns and changes based on the data provided to it. As machine learning algorithms are exposed to data, they recognize patterns, classify those patterns, and make predictions based on these classifications. And as machine learning algorithms are exposed to more data, what counts as a pattern and what happens when a pattern is recognized will change. For example, as your search engine learns from your data, possibly in the form of which types of search results you open and which types you ignore, machine learning algorithms will make predictions about which types of search results you may open in the future and rank those results higher on the list.
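To make the search engine example concrete, here is a minimal sketch, written in Python, of the kind of behavior described above. It is purely illustrative: the ClickLearner class and the result categories are made up for this example and are not any real search engine’s algorithm. The idea is simply that the program records which kinds of results a user opens, treats those records as data, and ranks future results by how often similar results were opened before.

from collections import defaultdict

class ClickLearner:
    """Toy learner: tracks which result categories a user opens,
    then predicts interest and re-ranks results accordingly.
    Hypothetical example only, not a real search engine."""

    def __init__(self):
        self.opened = defaultdict(int)  # times a category's result was opened
        self.shown = defaultdict(int)   # times a category's result was shown

    def observe(self, category, was_opened):
        # One piece of "training data": a result was shown and either opened or ignored.
        self.shown[category] += 1
        if was_opened:
            self.opened[category] += 1

    def predicted_interest(self, category):
        # Predict how likely the user is to open a result of this category.
        if self.shown[category] == 0:
            return 0.5  # no data yet, so assume indifference
        return self.opened[category] / self.shown[category]

    def rank(self, results):
        # Put the categories this user opens most often at the top.
        return sorted(results,
                      key=lambda r: self.predicted_interest(r["category"]),
                      reverse=True)

# Example: a user who opens video results and ignores shopping results
# will see video results ranked higher next time.
learner = ClickLearner()
learner.observe("video", was_opened=True)
learner.observe("video", was_opened=True)
learner.observe("shopping", was_opened=False)

results = [{"title": "Buy a white cane", "category": "shopping"},
           {"title": "Origami tutorial video", "category": "video"}]
print([r["title"] for r in learner.rank(results)])
# Prints: ['Origami tutorial video', 'Buy a white cane']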
Many blind people, including me, benefit from machine learning, and that’s why I’m talking about it. For example, machine learning may help us do things nonvisually a little more easily, like frame scenes in our camera’s viewfinder, tell identically shaped objects apart, and learn what is shown in photos.
Now that I’ve provided some definitions and examples, I’ll move on to contextualize my argument for why blind people should be organizing for the ethical study, deployment, and yes, sometimes withholding, of AI that analyzes humans and our data and which shares particular information or makes decisions based on that analysis.
AI impacts everyone, so why is this issue uniquely important for blind people? I will offer two reasons:
First, blind people, particularly those living at intersections of systematic marginalization, including our blind members who are Black, Indigenous, and people of color, are disproportionately and negatively impacted by some applications of AI. I will offer some examples. Some of these have been retracted and are no longer in use. But just as we know that updates may make accessible technology inaccessible again, the fact that some mistakes are now in the past does not mean they won’t happen again. These are just a few recent examples; you can search AI bias and discrimination to learn more.
First example: Hiring software uses machine learning to judge whether someone should be called in for an interview. In one now-retracted instance, women applicants were ranked lower. We know that, because of systematic discrimination against blind people, we do not always take the traditional paths to employment that machine learning algorithms would come to recognize as patterns and classify as qualified. A lot of us might not fit those patterns, so we would be classified as unqualified.
Second example: A new AI system very recently claimed it could reconstruct sharper images of people's faces from blurry images, but it "reconstructed" a blurry yet recognizable image of Barack Obama into an unrecognizable image of a man with much lighter skin. And finally, Robert Williams, a Black resident of Detroit, Michigan, was recently wrongfully arrested when AI incorrectly labeled him a suspect. These instances may seem like one-off mistakes. But the scholars I mentioned earlier teach us that extreme cases of consequential classification by machine learning often point to deeper systemic patterns. For example, unjust surveillance and classification of Black people is not new; it has a long history of being encoded into laws and human behaviors. Even when automated, AI is built and maintained by humans, and it tends to replicate and amplify existing bias and discrimination, which most impact our members living at intersections of marginalization.
My second reason for why blind people should care about AI and bias is that stories of our access barriers motivate its innovation. While our narratives are powerful, my research has unpacked the ways our stories have been misused to promote development that may not actually reflect what blind people want. Given AI’s biased track record, we should be very concerned as blind people about how our stories are used to promote it.
So how should we move forward? We will all be able to engage in different ways. I’ll begin with some general advice that has helped me. I agree with recent calls to educate ourselves about injustice, and as President Riccobono mentioned earlier, I hope that part of this learning turns inward so that we better learn about ourselves. If we learn new things while failing to connect them to our own lives and use of technology, we risk believing we are not implicated and therefore not responsible to act. On this topic specifically, as we educate ourselves, we might ask ourselves questions like: Why haven’t I learned about the potential harms of AI when it has been promoted to me as a tool to increase access? Why do I think it’s okay for access technology to work for some of our members and maybe not work well for others? In what ways have I been asked to put aside parts of my identity in order to promote access for blind people? I am still processing these questions myself, and they are helping me recognize how I have power, including the power to share my lived experiences and the power to listen and act, given that I’m an accessibility researcher who is able to work in this field. These recognitions are a first step toward directing what types of action I, and hopefully you, can take.
Specifically, at an organizational level we can craft resolutions that concern not only our direct user experiences with technology but also how our data should be used, including whether and how it can be used with machine learning algorithms. Those of us with the power to research, design, and deploy technologies need to follow up on our commitments to diversity and inclusion by widening how we collect feedback. Feedback must come from blind people with a variety of life experiences, and we should build in research methods and activities that allow us to work through the potential harms this technology might cause, so that we aren’t just presenting people with its potential positives.
Individually, I want you to tell your story, and tell it directly, not filtered through someone else. Great outlets to share it include the Braille Monitor, the Disability Visibility podcast, or your personal blog. There isn’t a lot of documentation of the impacts of AI on people with disabilities; it is a glaring gap in the AI fairness conversation right now. We need more stories from blind people about those impacts, both positive and negative, and about how AI affects not only you as a blind person but you as a whole person and the various identities that make you who you are.
Finally, we might think about promoting automation that turns the gaze away from humans and away from classifying us and our data. A lot of AI is deployed to increase access to visual information after the fact, because that information wasn’t accessible in the first place; the access is remediation. How can we instead use machine learning or automation to make the processes we navigate because of our disability a little bit easier from the start, and to educate the humans who are a part of these workflows? These are just a few starting points.
I realize this talk is sharp and critical. But I know that blind people are well-positioned for this work. We constantly repurpose things from their intended use and invent new objects and processes to make our lives easier. And crucially, we are great at sharing this genius with our blind family. For example, highlights of my quarantine include learning electronics circuitry and soldering from The Blind Arduino Project, led by people including Josh Miele and our own Chancey Fleet from the NFB of New York. And I’ve been learning origami from NFB of California’s Lisamaria Martinez. I know we can apply this creativity, laden with our persistent hope for better futures and the connections we make along the way, to develop more equitable applications of AI for accessibility. This is already happening. Visual interpreting services are giving users more control over how sessions are recorded, and companies are developing AI-for-accessibility solutions that do as much computation on people’s local devices as possible to minimize the amount of data you have to share. So let’s stay the course with reminders to ourselves and others that we have a lot to offer to this challenge and that change is possible.
To close, writing this presentation was difficult. Being blind and working with engineers makes positioning myself a challenge sometimes. I’m supposed to love technological innovation, yet my life experiences of being incorrectly classified and discriminated against keep me cautious, as I have demonstrated today. As such, I sought feedback on this presentation from people like Chancey Fleet and my colleagues Sarah Fox and Daniela Rosner, and thanks to J.J. Meddaugh for picking my intro music, because I had no idea what to pick. Finally, thank you, everyone, for receiving my message and for composing my first accurately named blind peer review. I’ll meet you on Twitter for continued discussions. Thank you.