
Spotlight on Women in AI

Plus, our work on the Wanja chatbot quickly demonstrated that the LLMs provided by OpenAI are not up to scratch

We are in build mode. It’s taking some time, but we are pushing on. Thomas, co-founder of The Rundown, pictured below, has completed milestone 1 of our back end and has almost finished the front-end work.

He’s here!

We will move on to our main product, then to the course content. We have paused work on Wanja for the moment to assess where we are and what we need to improve. As you know, The Rundown is building Wanja, our chatbot, with African datasets, to counter bias and tell stories from our own perspectives. We are exploring how she can address the problems Africans face with Western-designed tools. Wanja.AI has been positioned as your guide to understanding Africa: her (limited) curated datasets offer a diverse and authentically African perspective.

Our Analysis: Not Good Enough

Wanja quickly demonstrated that the LLMs provided by OpenAI are not up to scratch. While hallucinations with LLMs are well documented, the information they played back was clichéd and harmful. For example, when asked questions about the Africa CDC and its “Five C” vision, the model would often lean into contraception and HIV.

With a bit of prompt engineering and retrieval-augmented generation (RAG), we started getting back meaningful responses that made it useful as a tool, but that doesn’t change the fact that the perspective it takes is what the USA and Europe think of Africa, rather than the many perspectives held within Africa.
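
For anyone curious what “prompt engineering and retrieval-augmented generation” looks like in practice, here is a minimal sketch, not our Wanja pipeline: a hypothetical two-passage corpus, a naive keyword-overlap retriever, and an OpenAI chat-completions call that grounds the answer in the retrieved passage.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical curated passages standing in for an African-sourced dataset;
# these are illustrative placeholders, not Wanja's actual data.
CORPUS = [
    "Placeholder passage summarising Africa CDC strategy documents.",
    "Placeholder passage summarising African tech and startup reporting.",
]

def retrieve(question: str) -> str:
    """Pick the passage sharing the most words with the question (naive retrieval)."""
    q_words = set(question.lower().split())
    return max(CORPUS, key=lambda passage: len(q_words & set(passage.lower().split())))

def ask(question: str) -> str:
    """Inject the retrieved passage into the prompt before calling the model."""
    context = retrieve(question)
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat-completions model would work here
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer using only the provided context. "
                    "If the context does not cover the question, say so."
                ),
            },
            {
                "role": "user",
                "content": f"Context:\n{context}\n\nQuestion: {question}",
            },
        ],
    )
    return response.choices[0].message.content

print(ask("What does the Africa CDC strategy focus on?"))
```

In a real pipeline the keyword overlap would be replaced by embedding search over a much larger curated corpus, but the shape of the prompt stays the same: the model is asked to answer from the material you give it.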

We are in the process of narrowing Wanja’s use case to make her more relevant to, and focused on, some of the trends we are seeing.

Why Wanja

Data bias in current large language models impacts Africa negatively. These models are trained primarily on English-language data scraped from the internet. This means current AI systems lack authentic local context, cultural nuance, and language range. All of this alienates, excludes, and often exploits African communities, affecting diverse participation, business opportunities, and authentic perceptions of Africa. African culture and communities are not adequately represented in language and multimodal models.

TechCrunch Spotlight

TechCrunch is putting a spotlight on women in AI, running a series of interviews focused on women’s contributions to the AI revolution. Women are underrepresented in the research, development, and training of AI models.

According to a 2021 Stanford study, just 16% of tenure-track faculty focused on AI are women. (TechCrunch)

While that was a few years ago, not much has changed. What is keeping women away from the AI space? Lack of access to educational opportunities is part of it. A 2021 Deloitte survey of women working in AI found that:

78% of women said they didn’t have a chance to intern in AI or machine learning while they were undergraduates.

Diverse perspectives are needed now more than ever, to help us navigate the societal and ethical impact of the new technology. AI is in a major period of innovation and growth, and that needs more input from as broad a base as possible, not less. Diversity, across race, gender, and geography, helps prevent biases in AI systems. Innovation and growth in the tech industry thrive on many colliding viewpoints, and without them, we’re risking major stagnation and missed opportunities. 

Over half (58%) said they ended up leaving at least one employer because of how men and women were treated differently, while 73% considered leaving the industry altogether due to unequal pay and an inability to advance in their careers. - Deloitte 2021

Here are the first 10 women in the series we want to highlight:

Irene Solaiman, head of global policy at Hugging Face (formerly a researcher and public policy manager at OpenAI during the release of GPT-2)

I viewed AI as a means of working on human rights and building a better future. Being able to do technical research and lead policy in a field with so many unanswered questions and untaken paths keeps my work exciting.

Eva Maydell, member of European Parliament and EU AI Act adviser

The big issues politicians will need to address are: Firstly, how can this technology make our economies more competitive while ensuring wider social benefit? Secondly, how do we stop AI from fueling disinformation? And thirdly, how do we set international rules to ensure AI is developed and utilized according to democratic standards?

Lee Tiedrich, AI expert at the Global Partnership on AI

Society faces the grand challenge of developing frameworks that unlock AI’s benefits and mitigate the risks. This requires multidisciplinary collaboration, as laws and policies need to factor in relevant technologies as well as market and societal realities.

Rashida Richardson, senior counsel at Mastercard, focusing on AI and privacy

What I’ve noticed both from legal practice and my research is that there are areas that remain unresolved by this legal patchwork and will only be resolved when there’s more litigation involving AI development and use.

Krystal Kauffman, research fellow at the Distributed AI Research Institute

One of the most pressing issues facing the evolution of AI is accessibility. Who has access to the tools? Who’s providing the data and maintaining the system? Who’s benefiting from AI? What populations are being left behind, and how do we change that? How are the workers behind the system being treated? The other issue I would raise here would be bias. How do we create systems completely free from bias?

Miranda Bogen is creating solutions to help govern AI

The best way to responsibly build AI is with humility. Consider how the success of the AI system you are working on has been defined, who that definition serves, and what context may be missing. Think about for whom the system might fail and what will happen if it does. And build systems not just with the people who will use them but with the communities that will be subject to them.

Mutale Nkonde’s nonprofit is working to make AI less biased

One thing to consider is pursuing research questions that center on people living on the margins of the margins. The easiest way to do this is by taking notes on cultural trends and then considering how this impacts technological development.

Karine Perset helps governments understand AI

One of the OECD AI Principles refers to the accountability that AI actors bear for the proper functioning of the AI systems they develop and use. This means that AI actors must take measures to ensure that the AI systems they build are trustworthy.

Chinasa T. Okolo is a researcher on AI’s impact in the Global South

One of the most prominent issues will be improving the equitable representation of non-Western cultures in prominent language and multimodal models. The vast majority of AI models are trained in English and on data that primarily represents Western contexts, which leaves out valuable perspectives from the majority of the world.

Sandra Wachter, Professor of data ethics at Oxford

AI is plagued by biased data that leads to discriminatory and unfair outcomes. AI is inherently opaque and difficult to understand, yet it is tasked with deciding who gets a loan, who gets the job, who has to go to prison and who is allowed to go to university … We have no time to lose; we need to have addressed these issues yesterday.

And we are adding Wakanyi Hoffman. Check out her views on Ubuntu ethics and AI systems, also on The Rundown. Do give us feedback and let us know if you would like her to teach a course on these topics on The Rundown.

More short, bite-size, on-the-go videos are coming soon.

Have a great weekend. We will be working away.

Interested in learning more?

Sign up today to be notified when the course on the future of comms and AI becomes available.

More from the blog

OpenAI's Own Goal and Women's Servitude in AI Voices. Plus, a summer of super AI from Google and Microsoft

Each week, we bring you Embedded, our news and interview series. We are working on a 10-part podcast season on The Rundown.

Our product update, and introducing Pressmate

We are building a press release tool that helps communications teams buy time, stay on point, and craft their messages.

A Landmark AI Report, CAIO Baby, and Adobe's Fuzzy Ethics

Both the overall number of AI-related investment events and the number of AI businesses that received funding fell recently.

A big leap forward in GPT-4o

It’s a strong foundational tool for developers: twice as fast and half the price.