Industry Contributor 14 Apr 2021 - 4 min read

Black data matters: My new phone camera has built-in bias against darker skin

By Maurice Riley, Chief Data Officer - Digitas Australia & New Zealand

When Digitas data boss Maurice Riley used his new phone's camera at an Easter lunch to take a picture of his partner and family, who happen to be white, he was stunned to find them in perfect focus and his "wonderfully melanated face" slightly out of focus in every shot. It turns out if the light source is artificial, digital camera technology struggles with darker skin. Some people believe technology is neutral but it’s far from it, says Riley. It's time for courageous conversations about inclusive standards in the age of AI and data-driven marketing automation – black data matters, he says.

What you need to know:

  • Some of the algorithms that underwrite our lives and influence our decisions are built by biased humans and trained on biased data sources.
  • Artificial Intelligence (AI) is being used without widespread oversight in the public or private sector, despite proven baked-in bias.
  • More Australians want to learn about AI; trusted Australian companies can lead the way, building more brand value in return.
  • Before deploying AI, we must employ forward-thinking ethics to understand who it might serve or harm.

Does your mobile phone gaslight you? Mine does. When I ask my phone to do two specific tasks, it tells me I didn’t ask or it dismisses me by responding with a false positive, as if to say: “It's not me, it's you.”

Take the recent Easter long weekend for example. I tried to use the facial recognition feature to unlock my new phone with no success. Then, after a few tries it responded: “face doesn't match.” How? Did my face change? Are you telling me I look tired?

When the phone finally unlocked, I tried to use the fancy new camera to take an Easter lunch picture with my partner and his family, who happen to be white Australians. The indoor photo shows them in perfect focus, while my wonderfully melanated face is slightly out of focus in every shot. Turns out if the light source is artificial, digital camera technology struggles with darker skin.

And you thought unconscious bias was limited to human-to-human interactions.

This is something known as algorithmic bias: a kind of error associated with the use of AI in decision making, which often results in unfairness. One way it’s caused is when humans use data with racially biased historical norms to train algorithms.
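
To make that mechanism concrete, here is a minimal, synthetic sketch: the lending scenario, the data and the deliberately simple “model” are all invented for illustration. Two groups are identically qualified, but the historical approvals applied a harsher cut-off to one of them, and a model fitted faithfully to those historical labels simply learns the bias back.

```python
# Synthetic illustration only: equally qualified applicants in groups A and B,
# but the *historical* decisions used a harsher approval cut-off for group B.
import random

random.seed(0)
GROUPS = ("A", "B")

def historical_decision(score, group):
    # The biased historical norm baked into the training labels.
    return score > (0.5 if group == "A" else 0.7)

# Qualification scores drawn from the same distribution for both groups.
applicants = [(random.random(), group) for group in GROUPS for _ in range(10_000)]
labels = [historical_decision(score, group) for score, group in applicants]

def fit_threshold(group):
    """'Train' the simplest possible model: the per-group cut-off that best matches history."""
    points = [(s, y) for (s, g), y in zip(applicants, labels) if g == group]
    return min((t / 100 for t in range(101)),
               key=lambda t: sum((s > t) != y for s, y in points))

for group in GROUPS:
    threshold = fit_threshold(group)
    scores = [s for s, g in applicants if g == group]
    approval_rate = sum(s > threshold for s in scores) / len(scores)
    print(f"group {group}: learned cut-off {threshold:.2f}, approval rate {approval_rate:.0%}")
# Prints roughly 50% approvals for group A and 30% for group B,
# even though both groups are identically qualified.
```

Nothing in that code “intends” to discriminate; the skew arrives entirely through the historical labels, which is exactly the point.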

I understand the inherent problems with algorithmic bias in AI. Still, these small daily battles I have with my phone leave me feeling like I am at fault. In reality, the fault lies with the incomplete, biased data sources driving automated gate-keeping decisions that block my access to features that are intended to make my life easier.

This kind of bias is a well-researched area in the rapidly growing AI landscape – and it’s making its way to the masses. A brand-new documentary on Netflix called Coded Bias unpacks the prejudice that’s built into the algorithms we use every day. Through the righteous work of researchers, like Joy Buolamwini of the MIT Media Lab, the documentary shines a light on the negative social and emotional impacts of bias in facial recognition algorithms, which far exceed my own mobile phone drama.  

After I watched Coded Bias, I began to contemplate the social contract that should exist in Australia when it comes to AI technology. I wanted to explore how our public and private sectors are preparing for the ways algorithms might alter how we interact with each other – without us even realising it.

I came across three numbers that have stuck with me: 100%, 75% and 46%.

These numbers represent, respectively: the higher rate at which Indigenous Australians consume online media compared with Australians of European descent; the proportion of Australians who hold an implicit negative bias against Indigenous Australians; and the proportion of people who are unaware that important decisions are made about them using AI.

When I considered the underlying societal issues, including the bias in news media coverage against Indigenous Australians and the inequitable access to technology education, I realised these three numbers represent a vicious cycle. One that exists when:

  1. Indigenous Australians spend a disproportionate amount of time with online platforms that are driven by unchecked algorithms while these algorithms perpetuate negative perceptions about Indigenous communities.
  2. Australians influenced by these skewed perceptions continue to unconsciously program racially biased inputs into algorithms, leading to more skewed results that reach more Australians.
  3. Indigenous Australians continue to spend more time on these platforms ingesting content that may impact their sense of self, while few people are even aware they’re participating in this engine of inequality.

From media platforms recommending movie titles to marketers determining who is affluent enough to receive a product or service, the algorithms that permeate our lives reflect the societal issues embedded in the data they learn from. These “life algorithms” are often designed by (and beneficial to) a single demographic, despite being used to accommodate an entire country.

Not convinced? Try searching "professional haircut". Then, search "unprofessional haircut". Notice the difference in the images returned at the top of the page or within image search? That’s algorithmic bias in action.

My lived experiences as a person of colour inform my semantic and moral stances when I deliver data solutions designed to better target humans and grow my clients’ businesses. Those experiences have taught me we can't afford to separate our social realities from our technical ambitions in order to drive commercial outcomes. 

As Mark Twain once said: “Plan for the future because that's where you are going to spend the rest of your life”. And if today’s data is tomorrow's destiny, we need to have the hard conversations now to ensure we don’t bake in discrimination when deploying data-centric technology.

Put simply, we need to know better to do better.

First, do no harm

Know better: AI is increasingly being trialled and deployed around Australia. Just last week, the government announced an expansion of facial verification algorithms for government services. However, given the long history of technological advances being used to exclude Indigenous people, we must vigorously confront the potential of automated unconscious bias impacting access to these services.

From bank loan assessments to job screenings, the proven bias in facial recognition and verification algorithms based on skin tones poses a very real risk of automated racial profiling. While Australia presses play on expanding its use of facial recognition algorithms, big tech companies like Amazon are pressing pause over fears of wide-scale mistakes and misuse.

Do better: Companies must be proactive in identifying risks as they use proprietary or “rented” AI. This may seem like a daunting task given conversations about AI are future-forward by nature; however, AI is trained by data. We need to ask the same questions of our data that we ask of other parts of our businesses to ensure we’re always within the law when providing or refusing to provide goods, services or facilities.
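
What does asking the same questions of our data look like in practice? A minimal sketch follows. The field names, the sample records and the 0.8 threshold are illustrative assumptions (the threshold borrows the well-known “four-fifths” rule of thumb from US employment guidance rather than any Australian legal standard), but the underlying question is simple: do outcome rates differ materially across groups?

```python
# A simple pre-deployment audit question: do positive-outcome rates differ across groups?
# Field names, the sample decision log and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

def outcome_rates_by_group(records, group_key, outcome_key):
    """records: iterable of dicts -> {group: share of positive outcomes}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        positives[group] += bool(record[outcome_key])
    return {group: positives[group] / totals[group] for group in totals}

def flag_disparity(rates, threshold=0.8):
    """Flag groups whose rate falls below `threshold` times the best-served group's rate."""
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items() if rate / best < threshold}

# Hypothetical decision log (e.g. loan approvals or ad eligibility):
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
]
rates = outcome_rates_by_group(decisions, "group", "approved")
print(rates)                  # {'A': 1.0, 'B': 0.5}
print(flag_disparity(rates))  # {'B': 0.5} -> well below the 0.8 rule of thumb
```

None of this requires exotic tooling; the barrier is rarely technical, it is deciding to look.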

Rule number one when deploying AI: Do no harm. 

Radical indifference won't lead to radical outcomes

Know better: In her book The Age of Surveillance Capitalism, Shoshana Zuboff coins the principle of “radical indifference”. She posits that Big Tech companies act without regard for the social consequences of their actions: they view the positive and negative outcomes of AI as equivalent and don’t seek to improve the fundamental lack of reciprocity with populations. In contrast, 80% of Australians believe search engines should disclose details of changes to the algorithms they use to target them. And, while it’s unrealistic to expect Big Tech to reveal the detail of the algorithms their businesses are built on, it does show a societal shift away from blind faith in the systems we engage with every day.

Do better: Many Australian companies have established social contracts and trust with Australians. These same companies are also the biggest users of “life algorithms” to drive business outcomes. Despite shouting about the benefits of human-centered design, we as an industry are propelling the growth of unregulated algorithms that misidentify or exclude Indigenous Australians. To do better, it’s imperative that marketers are among the loudest voices to provoke conversations around inclusive data, influencing the development of accountable AI that’s advantageous to Indigenous communities.

From the death of the cookie to the birth of the cohort

Know better: Computer-powered target marketing systems have existed since 1974, starting with Census data at the postcode level. Since then, populations have had a healthy paranoia about data that can be manipulated to predict behaviour. Today, 86% of Australians want to know more about AI and the behavioural data it uses, yet only one in three Australians report they are willing to trust AI systems. As we introduce new solutions to replace third-party cookies but fail to make it clear how they work, we continue to erode trust in AI.

One of those new solutions is Google’s Federated Learning of Cohorts (FLoC). Rather than tracking and targeting “individual identifiers” like cookies or persistent IDs, FLoC identifies groups of people based on their common interests. But data ethics researchers and privacy advocates warn cohort-based targeting could be used to deliberately discriminate against particular groups and prevent users from easily opting out of algorithmic categorisations.
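
For readers who want a feel for how cohort-style grouping works, here is a deliberately simplified sketch: each user’s interests are hashed into a short cohort ID, so downstream targeting sees the cohort rather than the individual. It illustrates the general SimHash-style idea only; it is not Google’s FLoC implementation, its parameters or its privacy thresholds.

```python
# Simplified cohort assignment: users with similar interest sets tend to share
# (or sit near) a cohort ID. Illustrative only; not Google's actual FLoC code.
import hashlib

def _feature_hash(feature: str) -> int:
    # Stable 64-bit hash of a single interest.
    return int.from_bytes(hashlib.sha256(feature.encode("utf-8")).digest()[:8], "big")

def cohort_id(interests: set[str], cohort_bits: int = 8) -> int:
    """SimHash-style fingerprint truncated to `cohort_bits`, i.e. at most 2**cohort_bits cohorts."""
    counts = [0] * 64
    for feature in interests:
        h = _feature_hash(feature)
        for bit in range(64):
            counts[bit] += 1 if (h >> bit) & 1 else -1
    fingerprint = sum(1 << bit for bit in range(64) if counts[bit] > 0)
    return fingerprint & ((1 << cohort_bits) - 1)

# Users with overlapping interests often land in the same or a nearby cohort,
# while a very different user usually lands elsewhere.
print(cohort_id({"cricket", "gardening", "recipes"}))
print(cohort_id({"cricket", "gardening", "netball"}))
print(cohort_id({"crypto", "gaming", "sneakers"}))
```

The concern follows directly from this structure: once people are bucketed by inferred interests, a cohort can become a proxy for a sensitive attribute, and the person inside it has little visibility into, or control over, which bucket they have been placed in.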

Do better: Clear evidence shows that helping people understand AI is an important driver of trust in AI systems. Companies need to first create AI policies based on fairness, transparency and inclusivity, then lean into supporting AI products related to their core business. Lastly, and most importantly, when companies deploy their AI products, they need to clearly explain how the AI is used to give people equal access to products and services.

Where do we start?

The appeal of artificial intelligence is undeniable – but whose intelligence does it reflect? Bias isn’t limited to Indigenous matters. Gender, language and socio-economic bias exists in these algorithms too. If we do nothing, we risk careening into a future where biased algorithms proliferate unchecked, undermining human rights and harming socially disadvantaged populations by gatekeeping access to products and services.

We need to ensure there is a diverse and inclusive set of stakeholders around the same table shaping the direction of smart design and the ethical use of AI in line with social contracts. And we need to do it before it’s too late.
