Last Thursday brought a whirlwind of events and learning opportunities, culminating in a much-anticipated virtual encounter with Dr. Joy Buolamwini, a vanguard of the movement for fair and ethical artificial intelligence. As a poet of code, Dr. Joy merges art with research to illuminate the often-overlooked social consequences of AI technologies. With degrees from Georgia Tech, Oxford, and MIT, and as the founder of the Algorithmic Justice League, she is more than just an academic – she is a movement.
Her new book, Unmasking AI: My Mission to Protect What Is Human in a World of Machines, is a beacon for change-makers and a testament to her tireless advocacy for algorithmic justice. It launched on October 31, 2023, and has already sold out on Amazon. This work does not simply outline the problem; it offers a deeply personal narrative charting the journey of a researcher who dares to confront big tech. Dr. Joy, a Rhodes Scholar and Fulbright Fellow, has continually been a voice of transformation in a field that desperately needs it.
Looking to buy the book? Get it here! (Affiliate link – you don’t pay extra but you do support the dream)
Dr. Joy’s discussion with Karen Hao was not just an interview; it was an exposition of the role we all play in the technology narrative. The principle that “if you have a face, you have a place” in the conversation about technology was at the forefront of the discussion, signifying the universal impact of AI on society. Our very identities are at stake, with our data shaping the algorithms that, in turn, shape our futures – whether we know it or not.
Despite not having the book in hand just yet (mine will be arriving in early December and I can’t wait), insights from the virtual tour revealed a work that offers not only a critique but also a pathway to empowerment and engagement in the discourse on technology. Dr. Joy’s decision to stand up against the biases ingrained within the tech industry, despite potential risks to her career, resonates deeply with the struggles many women of color face in professional environments.
One of the most stirring elements of Dr. Joy’s work is the ‘white mask’ incident during her time at MIT, which uncovered racial and gender biases in facial recognition technology. That study led to the Gender Shades methodology, which Bloomberg later applied to Stable Diffusion, the text-to-image generative AI model. They evaluated the model by generating images for prompts such as high- and low-paying jobs, then analyzing the skin tones of the average faces produced. Unsurprisingly, lighter-skinned men were shown for high-paying roles, while darker-skinned men were correlated with low-paying jobs and criminal stereotypes.
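To make that kind of audit concrete, here is a minimal sketch of the core idea: tally the skin-tone groups appearing in images generated for each job prompt and compare their shares. The prompt names, group labels, and counts below are hypothetical illustrations, not Bloomberg's actual data or code.

```python
from collections import Counter

# Hypothetical audit data: each generated image is labeled with a
# skin-tone group (e.g., binned from a Fitzpatrick-scale estimate).
results = {
    "CEO (high-paying)":    ["lighter"] * 9 + ["darker"] * 1,
    "janitor (low-paying)": ["lighter"] * 3 + ["darker"] * 7,
}

def representation(labels):
    """Return each skin-tone group's share of the generated images."""
    counts = Counter(labels)
    total = len(labels)
    return {group: counts[group] / total for group in counts}

for prompt, labels in results.items():
    shares = representation(labels)
    print(f"{prompt}: {shares}")
```

Comparing the per-prompt shares against real-world workforce demographics is what turns these raw counts into a bias finding.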
Unmasking AI also explores how change is made. In Dr. Joy’s case, she went from her white-mask experiment and Gender Shades research to advising President Biden on what his next moves should be with AI. Wow! I can’t wait to read all about it. As a student, it can be hard to see how you go from researching in a classroom to a position of authority and expertise on a subject. I am in awe of Dr. Joy for being so transparent in her book and, as I said, can’t wait to read it.
What truly stands out from the virtual event was Dr. Joy’s candid conversation about her health. In the relentless pursuit of change, the importance of self-care became evident. This point is especially poignant as it underscores the need for balance amidst our ambitions. One of the ways Dr. Joy takes care of herself is knowing when it’s too much and it’s time to say no. She talked about canceling a TEDAI appearance because she had learned to listen to the warning signs from her body. I appreciate her for being so transparent about it.
Now, I’d like to sum up my review with some of the lessons and insights I learned from Dr. Joy in the virtual book chat:
- Preparation is key: There’s no such thing as being overprepared. Being a young woman in tech means your opportunities are not just for you; they pave the way for those coming behind you.
- Courage to speak up: Speaking out about controversial topics, like racial bias in AI, may put you or your career at risk. But doing so is important and can make a huge impact on the industry – just as Dr. Joy’s Gender Shades methodology is now being used to audit big tech companies’ systems.
- Accountability in AI: Deleting the data is not enough; we need to ask companies to delete the models they trained with that data. I was shocked when she mentioned that Facebook, for example, went through a lawsuit that forced them to delete a ton of data – but they didn’t delete the models trained on it, so clearly deleting the data alone isn’t enough. Companies should make sure their models are trained on ethical data in the first place.
- Heed your body’s signals: Listen when your body tells you to slow down, or it will slow you down for you.
Stay tuned for more insights and discussions on tech, and remember, the conversation about ethical AI isn’t just for those at the forefront of the industry—it involves us all. Because if you have a face, you indeed have a place in this pivotal dialogue.