(Ir)Responsible AI

AI-generated image of woman looking in mirror and seeing a robot

By Nick Martin, Founder and CEO of TechChange

From the impact of AI on the human rights of marginalized populations, to its potential to help address (and exacerbate) the climate crisis, even to using AI tools for public health writing, artificial intelligence was on EVERYONE’s agenda at the recent Global Digital Development Forum, a signature TechChange event. This included the launch of a brand-new tool at the conference’s closing keynote, which featured speakers from USAID, the Government of Canada, the Global Center for Responsible AI, and more.

That tool is The Global Index on Responsible AI, the first global report to establish benchmarks for responsible AI and assess those standards across 138 countries, providing an overview of how nations are (or aren’t) addressing the ethical, social, and regulatory challenges of AI.

As Sabeen Dhanani, Deputy Director for the Technology Division of USAID, said in her opening remarks on the launch: “The fact is that we are all working in real time to determine the right approach to AI, and in order to chart a path forward we need to see where we currently stand.” 

The Global Index on Responsible AI is taking on that challenge, and its findings were likely not entirely surprising to anyone who’s been paying attention. Here are a few of the top takeaways from the report that stood out to me:

Key issues of inclusion and equality in AI are not being addressed

Governments are largely not prioritizing issues of inclusion and equality, which means that existing gaps may be exacerbated as AI spreads. But civil society (including researchers, development organizations, and universities) is helping to draw governments’ attention to these issues. I like to think that the conversations and workshops at GDDF add much-needed fuel to this fire.

There are major gaps in ensuring the safety, security, and reliability of AI 

This one’s more about the technical security of AI, something I confess I hadn’t thought much about. According to the report, only 38 countries have taken any steps to address the safety and reliability of AI systems. This is a major oversight, and one that will take the work of committed professionals in diverse contexts to remedy.

There’s still a long way to go to achieve adequate levels of responsible AI worldwide

Nearly SIX BILLION people live in countries without adequate policies or oversight to protect their human rights from AI. AI can go global instantly, so even if a system is developed in a place with regulation, it can quickly reach more vulnerable groups in settings where there is no such oversight. And don’t get it twisted: the majority of countries have yet to take any action to protect the rights of marginalized people in the context of AI.

Politico covered the launch of the Index at GDDF

There’s so much more to read and learn from this interdisciplinary report, which took a global team of researchers three years to develop. I encourage you to take some time and read it for yourself. As Dr. Adams said in the panel: “We know the risks are real, and that they threaten the very fabric of society, from misinformation weakening democracy to exacerbating inequalities of vulnerable groups.” 
Some of those risks were made painfully clear in another GDDF workshop, “AI through Feminist Lenses Workshop: Reimagining Tech with Popular Education,” where a team of researchers from the Data-Pop Alliance shared excerpts from books and movies about algorithmic racism and misogyny, including some pretty upsetting examples, like facial recognition systems failing on darker skin and pornographic deepfakes. But at the end of the session, the speakers gave attendees the opportunity to express their feelings and reactions to these distressing issues using an image generation algorithm.

I love the meta-aspect of this: using AI to help us express our very human reactions to the dangers presented by AI. Plus, it was a neat trick for making an engaging hybrid workshop, which I’m always a fan of. We might even have to add that one to our hybrid playbook.

One of the AI images created in the Feminist Lenses on AI workshop, in response to the prompt “woman looking at herself in the mirror and seeing the reflection of a robot”

This is only the beginning of the rise of AI, and it’s pretty clear that none of us know exactly where it’s headed. But I, for one, am glad that there are new tools to track its use and oversight, and smart, socially conscious professionals working with them.

As Gloria Guerrero, the Executive Director of the Latin American Alliance for Open Data, said in the closing GDDF workshop on Responsible AI: “the race to govern AI isn’t about getting there first, but making it work for everyone.”

If you want to watch these sessions from GDDF, it’s not too late. The power of a TechChange event isn’t just the people in the room; it’s extending the incredible digital development community to an inclusive audience around the world.
