Authentic Intelligence: A Blind Researcher Bringing Wisdom to the Future of Technology Innovations

MARK RICCOBONO:  The next item is one I'm really excited about, and it is titled Authentic Intelligence: A Blind Researcher Bringing Wisdom to the Future of Technology Innovations. There are a lot of things I could say about my admiration for this rising professional. She splits her time as a post-doctoral researcher at Carnegie Mellon University's Human-Computer Interaction Institute and as a researcher at Apple. Her research sits at the intersection of human-computer interaction, accessibility, and disability studies. In her own words: "Irrespective of the project, my aim is to inform what I do as much as possible with the lived experience and creativity of people with disabilities." She is a blind person. She is a graduate of BLIND Incorporated. Here is Doctor Cynthia Bennett! (Pentatonix Daft Punk medley playing)

CYNTHIA BENNETT: Thank you, President Riccobono, for inviting me to address the convention. I'll start with some access notes. I've tweeted a text transcript of this speech from my account on Twitter using our hashtag #NFB20. There are a lot of linked resources in that transcript, so I encourage you to check it out. And I'll provide a visual description: I'm a blind white woman with dark hair worn down. It may not be completely visible in the video, but I'm wearing a shirt that says "access is love". Finally, my virtual background is a view of my city of residence, Pittsburgh, Pennsylvania; specifically, it shows downtown, where the Monongahela, Allegheny, and Ohio Rivers meet.

I want to talk about what I've learned over the years, specifically at Carnegie Mellon University. I'm a researcher, and an important part of a researcher's work is peer review. During peer review, researchers submit work for publication, and people who are deemed to be peers, or colleagues with relevant expertise, are recruited to evaluate the work by drawing out its strengths and requesting improvements. But I've been submitting to publications for eight years, and I have never been reviewed by my peers. Sure, reviewers may have degrees, but the vast majority of them are not blind like me. And so today I invite you, my blind peer reviewers, to join in: find me on Twitter at @clb5990, and use the hashtag #NFB20 to say what you have to say. In this talk, I'll be arguing that blind people should be organizing for the ethical study, deployment, and, yes, sometimes withholding of artificial intelligence that analyzes humans and our data, and which shares particular information or makes decisions based on that analysis. I'm going to pause and turn off my speech, because I forgot to do that. Apologies.

So, to make this argument, I will first offer some definitions and examples. And second, I will share some biases and consequences of AI that have already proved harmful. Finally, I will offer some suggestions for moving forward.

Before we dive in, I'll recognize some leaders in AI bias research who have scaffolded my education in this area, many of whom are Black women scholars. They include Ruha Benjamin, Simone Browne, Joy Buolamwini, Timnit Gebru, Safiya Umoja Noble, and Meredith Whittaker.

Artificial intelligence, or AI, is a branch of computer science focused on imitating cognitive functions we associate with human intelligence, like learning or problem solving. A key term in computer science is algorithms, which are sets of instructions that dictate how computer programs will work; traditionally, humans write and update algorithms. But one type, called machine learning algorithms, learns and changes based on data provided to it. As machine learning algorithms are exposed to data, they recognize patterns, categorize those patterns, and make classifications based on that information. As they are exposed to more data, what counts as a pattern, and what happens when a pattern is recognized, will change automatically. For example, as your search engine learns from your data, possibly in the form of which search results you open and which you ignore, machine learning algorithms will make predictions about which types of search results you may open in the future and rank those results higher on the list. Many blind people, including me, benefit from machine learning, and that's why I'm talking about it. For example, machine learning may help us do things nonvisually a little more easily, like frame a scene in our camera's viewfinder, discern identically shaped objects from one another, and learn what is shown in photos.

Now that I've provided some definitions and examples, I'll move on to contextualize my argument for why blind people should be organizing for the ethical study, deployment, and, yes, sometimes withholding of AI that analyzes humans and our data, and which shares particular information or makes decisions based on that analysis. AI impacts everyone, so why is this issue uniquely important for blind people to care about? I'll offer two reasons. First, blind people, particularly those living at intersections of systematic marginalization, including our blind members who are Black, Indigenous, and people of color, are disproportionately impacted by some applications of AI. I'll offer some examples. Some of these examples have been retracted and are no longer in use. But just as updates can make accessible technology inaccessible again, the fact that some mistakes are now in the past doesn't mean they won't happen again, so we need to keep them in mind. These are just a few recent examples, and you can search "AI bias and discrimination" to learn more.

First, hiring software. This type of software has become extremely popular; it uses machine learning to judge whether a job applicant should be called in for an interview. In one now-retracted instance, women applicants were ranked lower. And we know that because of systematic discrimination against blind people, we don't always take the traditional paths to employment that would come to be recognized as patterns by machine learning algorithms and classified as qualified. A lot of us might not fit those patterns and would be categorized as unqualified.
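To make that mechanism concrete, here is a deliberately tiny, hypothetical sketch in Python. The data, the single feature, and the scoring rule are all invented for illustration and do not describe any actual hiring product; the point is that the same pattern-learning that ranks search results will, when trained on discriminatory hiring records, reproduce the discrimination.

```python
# A deliberately tiny, hypothetical "hiring model" (invented data, not any
# vendor's product). The mechanism is the same pattern-learning that ranks
# search results: estimate from history which patterns led to which outcomes.

# Hypothetical training records: whether each past applicant took a
# "traditional" path (degree plus unbroken work history) and whether a human
# recruiter advanced them. Historical discrimination is baked into the data.
history = [
    {"traditional_path": True, "advanced": True},
    {"traditional_path": True, "advanced": True},
    {"traditional_path": True, "advanced": False},
    {"traditional_path": False, "advanced": False},
    {"traditional_path": False, "advanced": False},
]

def learn_advance_rates(records):
    """'Training': per pattern, how often did past applicants advance?"""
    rates = {}
    for pattern in (True, False):
        matching = [r for r in records if r["traditional_path"] == pattern]
        rates[pattern] = sum(r["advanced"] for r in matching) / len(matching)
    return rates

model = learn_advance_rates(history)

# 'Prediction': a blind applicant with a nontraditional path to employment
# inherits the historical rejection rate, regardless of actual qualification.
applicant = {"traditional_path": False}
print(f"Predicted advance score: {model[applicant['traditional_path']]:.2f}")
# Prints 0.00 -- the bias in the history, replicated as a 'pattern'.
```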
Second, a new AI system very recently claimed to be able to reconstruct sharper images of people's faces from blurry images. But it reconstructed a blurry yet recognizable image of Barack Obama into an unrecognizable image of a man with much lighter skin. Finally, Robert Williams, a Black resident of Detroit, Michigan, was wrongfully arrested when AI mistakenly identified him. These may seem like one-off mistakes, but the scholars I mentioned earlier taught us that extreme cases of consequential decisions made by machine learning highlight larger patterns. Any of you who heard Angela Frederick earlier will know what I'm talking about: deeper historical patterns. For example, the surveillance of Black people is not new but has a long history of being encoded in laws and human behavior. Though automated, AI is built and maintained by humans, and it tends to replicate and amplify existing biases and discrimination, which most impact our members living at the intersections of marginalization.
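To see why a disparity in error rates becomes a pattern of harm rather than a one-off mistake, here is some back-of-the-envelope arithmetic in Python. The rates below are invented for illustration, not measured figures, though audits such as NIST's face recognition evaluations have documented demographic differences in false match rates of this general kind.

```python
# Back-of-the-envelope arithmetic with invented rates (not measured figures):
# if a face recognition system's false match rate is ten times higher for one
# group, wrongful identifications concentrate in that group even when every
# person searched is innocent.

searches_per_group = 100_000  # hypothetical searches against a watchlist
false_match_rate = {"group_a": 0.0001, "group_b": 0.001}  # hypothetical 10x gap

for group, rate in false_match_rate.items():
    wrongful = searches_per_group * rate
    print(f"{group}: ~{wrongful:.0f} wrongful matches "
          f"per {searches_per_group:,} searches")
# group_a: ~10 wrongful matches; group_b: ~100 wrongful matches.
# Ten times the error rate means ten times the wrongful arrests to absorb.
```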

My second reason why blind people should care about AI and bias is that stories of our access barriers motivate its innovation. While our narratives are powerful, research I would encourage you all to check out has unpacked the ways our stories are misused to promote development that may not actually reflect what blind people want. Given AI's biased track record, we as blind people should be very concerned about how our stories are being used to promote it. So how should we move forward? We will all be able to engage in different ways, so I will begin with some general advice that has helped me. I agree with recent calls to educate ourselves about injustice. And as President Riccobono mentioned earlier, I hope that part of this learning turns inward so that we better learn about ourselves. If we learn new things while failing to connect them with our own lives and our own use of technology, we risk believing that we are not implicated and therefore not responsible to act. On this topic specifically, as we educate ourselves, we might ask ourselves questions, including: why haven't I learned about the potential harms of AI when it has been promoted to me as a tool to increase access? Why do I think it's okay for access technology to work for some of our members and not as well for others? And in what ways have I been asked to put aside parts of my identity in order to promote access for blind people?

I am still processing these questions myself, and they are helping me to recognize how I have power, including the power to share my lived experiences and the power to listen and act, given that I'm an accessibility researcher who is able to work in this field. These recognitions are a first step toward directing what types of action I, and hopefully you, can take.

Specifically, at an organizational level, we could craft resolutions that concern not only our direct user experience of technology, but also how our data is used and whether or how it can be used by machine learning algorithms; or we could say that companies that don't already have a track record of hiring people with disabilities shouldn't automate that process! Those of us with the power to research, design, and deploy technologies need to follow up on our commitments to diversity and inclusion and widen the ways we get feedback from blind people in the community. Feedback must come from blind people with a variety of life experiences, and we need to build in research methods and activities that help us work through the potential harms technology might cause, in addition to its potential benefits for accessibility.

Individually, I want you to tell your story, and tell it directly, not filtered through someone else. Great outlets include the Braille Monitor, the Disability Visibility podcast, or your own personal blog. There isn't a lot of documentation of the impact of AI on people with disabilities; this is a missing conversation in AI fairness right now. We need more stories of how AI impacts you, both positively and negatively, not only as a blind person but as a whole person, with all the various identities that make you who you are. And finally, we might think about promoting automation that moves the gaze away from analyzing and classifying us humans and our data. A lot of AI is implemented to increase access to visual information because that information wasn't accessible in the first place; it's remediating. How can we instead use machine learning or automation to make the process of building accessibility in from the start a little bit easier, and to educate the humans who are part of these workflows?
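As one hedged sketch of what building access in from the start might look like (hypothetical HTML and a standard-library Python check, not any particular product): instead of AI guessing an image description after the fact, automation can flag a missing description while the human author can still write one.

```python
# A minimal sketch of automation that builds access in from the start: a
# standard-library check that flags images missing alt text while a human
# author can still write a description, instead of AI guessing one later.
# The HTML snippet is hypothetical.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []  # srcs of images with no usable description

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not (attrs.get("alt") or "").strip():
                self.missing.append(attrs.get("src", "<unknown>"))

page = '<img src="team.jpg"><img src="logo.png" alt="NFB logo">'
checker = AltTextChecker()
checker.feed(page)
for src in checker.missing:
    print(f"Please describe this image before publishing: {src}")
```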

These are just a few starting points.

I realize this talk is sharp and critical. But I know that blind people are well positioned for this work. We constantly repurpose things from their intended use and invent new objects and processes to make our lives easier. And crucially, we're great at sharing our genius with our growing family. For example, highlights of my quarantine include learning circuitry and soldering through a project led by our own NFB of New York, and learning origami from very visual directions translated into tactile ones. So I hope we can develop more equitable applications of AI for accessibility. This is already happening: visual interpreting services are giving users more control over how their sessions are recorded, and companies are developing AI solutions for disability that do as much computation on people's local devices as possible, to minimize the amount of data you have to share. So let's stay the course, with reminders to ourselves and others that we have a lot to offer to this challenge, and change is possible.

To close, writing this presentation was difficult. Being blind and working with engineers makes positioning myself a challenge sometimes. I'm supposed to LOVE technological innovation, yet my life experiences of being incorrectly classified and discriminated against keep me cautious, as I have demonstrated today. As such, I sought feedback on today's presentation from people like Chancey Fleet, and thanks to JJ Meddaugh for picking my intro music, because I didn't know what to pick. Finally, thank you, everyone, for receiving my message and composing my first accurately named blind peer review. I'll be on Twitter for discussion.

MARK RICCOBONO:  Dr. Bennett, thank you so much for your words. I know you will continue the conversation online. I appreciate your thought leadership and action in bringing authentic intelligence to this space, continuing the conversation, and challenging our members and our organization to do the same internally to create change externally. So thank you very much for your expertise and leadership, and for being with us at NFB 20!

CYNTHIA BENNETT:  Thank you.