AI has a systemic bias problem – can it also be part of the solution?

We’re seeing the early signs of a massive wave of AI disrupting industries. AI-powered tools aren’t just making existing processes faster or more efficient – they’re replacing those processes entirely, unlocking capabilities we haven’t seen before.

This disruption has real downsides, though. Generative AI systems like Stable Diffusion or ChatGPT can end up propagating and reinforcing the systemic biases and stereotypes present in the data they were trained on. Recent research has shown how generative AI imaging tools amplify stereotypes – for example, returning images of white men for prompts like ‘firefighter’ and men of color for prompts like ‘janitor’. The popular Lensa app, which generates stylized avatars and portraits, has been roundly criticized for sexualizing women’s avatars, but not men’s. And AI chatbots have been shown to propagate misinformation, including racist conspiracy theories.

As these AI systems become increasingly ubiquitous across industries (and in our daily lives) – generating visual media, answering questions and search queries, even writing code – there is a tremendous risk that AI doesn’t just preserve the most pernicious biases on the internet, but amplifies them in both blatant and subtle ways.

Fortunately, the power of AI can also be used to help identify and measure these biases – something humans cannot do effectively at scale. For example, it can be used to measure the representation of different demographic groups in popular media. Companies building sourcing and recruitment tools have attempted to use AI to reduce bias in hiring, albeit with mixed results.
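
To give a concrete sense of what “measuring representation” can mean in practice, here is a minimal sketch. It assumes a hypothetical classifier, classify_group(), standing in for a trained perception model; the audit itself is simple counting against a reference distribution (the reference shares in the usage example are illustrative, not real statistics):

```python
from collections import Counter

def classify_group(image) -> str:
    # Hypothetical: in practice this would be a trained perception model
    # that labels an image with a demographic group.
    raise NotImplementedError("placeholder for a real classifier")

def representation_audit(images, reference: dict[str, float]) -> dict[str, float]:
    """Compare observed group frequencies in a set of images against a
    reference distribution (e.g., census or occupational statistics).

    Returns, per group, observed share minus reference share: positive
    values indicate over-representation, negative under-representation.
    """
    counts = Counter(classify_group(img) for img in images)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - expected
        for group, expected in reference.items()
    }

# Example: audit images generated from the prompt "firefighter" against
# an (illustrative) reference distribution.
# skew = representation_audit(generated_images,
#                             reference={"women": 0.17, "men": 0.83})
```

The hard part, of course, is the classifier itself – which is why auditing tools need the same scrutiny for bias as the systems they evaluate.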

Thus far, the work of finding bias in AI systems has largely been driven by people whose focus is on equity, such as academics, industry researchers, and critics. But should companies actually productize and sell AI-powered tools that help detect and mitigate bias in AI products and services? AI tools to reduce systemic bias could become an essential part of companies’ digital infrastructure – just another thing to account for, like cybersecurity or analytics or environmental impact. If your company could pay a reasonable price for off-the-shelf tools that make your product more socially responsible and better for the world, why would you not do it?

At TVE, we’ve been working with the People & AI Research team at Google, supporting their efforts to improve inclusivity in AI by understanding how effectively skin tone scales represent people of different backgrounds and skin tones.
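
As a rough illustration of one building block in this kind of evaluation – mapping a sampled skin pixel to its nearest swatch on a scale – here is a sketch using placeholder RGB values (these are not the swatches of any published scale, and a real evaluation would likely use a perceptual color space such as CIELAB rather than raw RGB):

```python
import math

# Illustrative 5-point scale from light to dark; these RGB values are
# placeholders, not the swatches of any real published scale.
SCALE = [
    (246, 237, 228),
    (224, 187, 149),
    (189, 143, 104),
    (136, 92, 60),
    (60, 40, 30),
]

def nearest_swatch(rgb: tuple[int, int, int]) -> int:
    """Return the index of the scale swatch closest to a sampled skin
    pixel, using Euclidean distance in RGB space."""
    return min(
        range(len(SCALE)),
        key=lambda i: math.dist(rgb, SCALE[i]),
    )

print(nearest_swatch((200, 160, 120)))  # -> 2 with these placeholder values
```

A scale represents a population well only if sampled skin tones map onto swatches without large perceptual gaps; counting how samples spread across swatches is one simple way to probe that coverage.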

By: Alan Clark
