SxSW Sydney '23 16 Oct 2023 - 8 min read

Facebook whistleblower Frances Haugen: Digital platforms actively resist transparency around disinformation, misinformation, hate speech and even child sexual exploitation; Australia’s eSafety commissioner agrees, goes after Google, X

By Andrew Birmingham - Editor - CX | Martech | Ecom

An Mi3 editorial series brought to you by
7 / 7plus


Kellie Nuttall, Frances Haugen, Nic Suzor and Julie Inman Grant on tackling misinformation – and going after platforms.

Australia's failed Voice to Parliament referendum, along with the horrors of war in the Middle East and Ukraine, has brought the issue of misinformation and disinformation back to the centre of digital industry debate. But digital giants and social media platforms now also face heat over their failure to adequately address the distribution of child sexual abuse content, with Australia's eSafety commissioner issuing X with a $610,000 infringement notice for failing to provide the regulator with the information it requested. The damage could swell to tens, potentially hundreds, of millions of dollars if the courts get involved. Google is also on notice and has been issued with a formal warning. This lack of care and transparency is par for the course, according to Facebook whistleblower Frances Haugen, who told SxSW that Facebook had to rely on Twitter data from external researchers to identify misinformation being deployed on its own platform.

What you need to know

  • Digital giants like Meta, Alphabet, and Twitter are allergic to transparency and avoid investing in the capabilities to tackle child sexual abuse, hate speech, misinformation and disinformation, claimed Facebook whistleblower Frances Haugen, speaking on a panel at SxSW.
  • Facebook, per Haugen, had to rely on Twitter data provided by researchers to identify and shut down misinformation and hate speech.
  • Australia's eSafety commissioner Julie Inman Grant seems to agree and is going after Google and X for their failure to address child sexual exploitation issues on their platforms – the first time the regulator has taken such action, after earlier warnings were ignored. If X refuses to play ball, the courts can fine it up to $780,000 a day, backdated to March.
  • Tech businesses are not exceptional, and should be subject to safety rules in Australia just as manufacturers, or food companies are, according to Inman Grant.
  • QUT's Nicolas Suzor, chief investigator at the Digital Media Research Centre, says the replication and amplification of misinformation and hate speech is effectively a design feature of platforms built as advertising systems with algorithms that reflect the biased and divisive nature of societies.


Let's be honest. If you can build a sophisticated AI system and target advertising with deadly precision, you should be able to do the same with hate speech or child sexual abuse material.

Australian eSafety commissioner Julie Inman Grant

The wrapping paper had barely been ripped off the first sessions of the Sydney chapter of SxSW before the issue of misinformation and disinformation took center stage.

Speaking in the first panel session in the Seven House, Kellie Nuttall, Deloitte's Strategy and Business Design Leader (and previously lead of the consulting firm's AI Institute), said that among the risks AI is creating, misinformation is one that keeps her awake at night.

“We have strategies for managing risks around bias, around secure algorithms, cyberattacks. All these things we know how to manage," Nuttall told delegates. "The misinformation thing – it's more cultural and psychological, and that is something it's really hard to get a grasp on. That goes from misinformation around bullying in schools and creating naked photos of children, all the way up to geopolitical tensions, so it's a broad spectrum."

But it is one that is beyond an organisation's control to deal with, as it involves individuals acting maliciously, she said.

However, at Sydney’s latest great festival of ideas, not everyone agreed. Speakers at a different session held concurrently took a contrary view when it comes to social media platforms and other big tech companies.

Facebook whistleblower and Beyond the Screen founder Frances Haugen claimed Facebook’s commitment to transparency is so poor, and its funding of work to counter information operations so limited, that the company actually had to rely on Twitter’s data firehose to identify bad actors.

Haugen, who testified to the US Congress against the social networking giant in 2021, offered the insight during an SxSW panel discussion entitled Blowing the whistle on big tech: transparency and accountability in the age of AI, which also featured Australia’s eSafety commissioner Julie Inman Grant and Nicolas Suzor, professor of law at Queensland University of Technology and a chief investigator at its Digital Media Research Centre.

The overall tenor of the panel was that none of the global digital platforms are taking the issue seriously enough, and the failure of their systems, including much-vaunted AI solutions, often gets mistaken for bias when in fact it's just incompetence and penny-pinching.

That's awkward

“To give you a sense of how little transparency there is in the social media space, and how little places like Facebook do to fund [countering] things like Information Operations – for instance, foreign involvement in other countries made possible by their platform – when I worked at Facebook, they would regularly find influence operations on Facebook using Twitter's data,” stated Haugen.

Researchers with access to the Twitter data firehose – a database of one in ten tweets – would alert the companies and Twitter would literally send the IP addresses to Facebook, she added.

Per Haugen: “The fact that Facebook needed Twitter's Firehose to catch Information Operations is ridiculous.”

She made the comments in response to a point from Inman Grant, who revealed during the session that earlier that day she had taken regulatory action against X and Google for failing to adequately tackle child sexual exploitation material, sexual extortion, and the livestreaming of abuse.

It is the second tranche of notices issued by the commissioner. In February she issued legal notices to Twitter (subsequently rebranded as “X”), Google, TikTok, Twitch, and Discord under Australia’s Online Safety Act. The latest announcement noted that the new transparency notices require the tech companies to answer questions about the measures they have in place to deal with the issue.

“You may have seen today that we've just announced our second tranche of regulatory notices. It's the first time I'm taking regulatory action against both Google, through a formal warning, and Twitter/X, through a service provider notification and an infringement notice.

“Twitter (X) has 28 days to pay it. If they do not, then we can take civil proceedings and action. And then fines will be determined by the courts, or they can petition to have their infringement notice removed.”

Inman Grant noted that if X chooses not to pay the fine, eSafety can bring civil proceedings against the company, and if the court finds against it, the fine could swell to $780,000 a day, backdated to the first day of non-compliance in March.

She also gave short shrift to the complaints by the tech companies that it’s all too hard and labour-intensive.

Calling bullshit

“We gave them 35 days and, having worked inside these matrixed companies, yes, sometimes the systems are insufficient, and it might be hard to organise a global conference call. But it really doesn't take seven months," she said.

“We went back and forth. The simple fact of the matter? Transparency is very uncomfortable for these companies, because they haven't had to deliver it – not in a meaningful way, and not [by] answering questions with datasets without the qualifiers that they want to provide.”

Inman Grant said Twitter (now X) point blank refused to answer some questions.

“We had some of these companies leaving entire answers blank – questions like, how many trust and safety people do you have, Twitter, now that you've eviscerated your trust and safety teams? No answer. Well, clearly, if you've got an HR payroll system, you know the answer. Why wouldn't you give an online safety regulator the information if you were doing the right thing, if you have the people, policies and processes in place, [if] you're using the right technologies?”

The eSafety commissioner said it was time to stop treating tech companies as exceptional cases.

“I went to the government when they reformed the Online Safety Act and said we've got to enshrine safety by design or figure out a way to do this so that there are basic online safety expectations about what [these] services are delivering to Australians. Just as car manufacturers were required to invent seatbelts 50 years ago; when we have food safety standards; we've got surge protection on electrical goods, you name it,” said Inman Grant.

“This technological exceptionalism shouldn't stand. And so they gave us powers to legally compel transparency.”

Digital platforms like Google and Facebook have built vast businesses over the last 15-20 years by convincing advertisers that they can discern the intent of one buyer out of a billion in a millisecond – a point not lost on the eSafety commissioner.

“Let's be honest. If you can build a sophisticated AI system and target advertising with deadly precision, you should be able to do the same with hate speech or child sexual abuse material," per Inman Grant.

“But instead, what we're seeing is the companies are making it harder. They're making their platforms more opaque by putting the Twitter data firehose (a data set of one in ten tweets that was previously made available to researchers), for instance, out of the reach of advocates, NGOs and small regulators.”

Discord, some of the Google services, and Twitter – none of them are using live detection tools for child sexual abuse on live streaming services. For those of you who work in the industry, most of these companies use the same people, processes, and technologies to deal with illegal content. If they're not doing it for child sexual abuse material, then they're not doing it for terrorists and white extremist content either.

Frances Haugen, Facebook whistleblower and founder of Beyond the Screen

Artificial stupidity

The other theme to emerge from the panel discussion is just how unsophisticated the artificial intelligence used by the platforms to tackle disinformation, misinformation and hate speech really is – and how poorly understood this is by the rest of us.

For instance, QUT's Nic Suzor told delegates at the session that platforms will claim 97 per cent of hate speech is taken down by AI. But what the platforms don’t reveal is that the AI does not distinguish between hate speech and legitimate comment addressing the same content. In other words, both good and bad content is caught in the net and taken down.

Likewise, drawing on a current area of contention, Inman Grant discussed the apparently different treatment of posts by Israelis and Palestinians on Twitter.

Rather than a conspiracy by a platform to favour one side over the other, the different experiences of voices on either side of the argument are due to a deliberate design choice driven by budget, she suggested.

“Twitter is now working with only 12 languages. I think TikTok and Google [work with] around 70. One of the languages that they're not working with on Twitter is Hebrew," said Inman Grant.

Haugen then described the implications of that choice. “Often platforms will support Arabic in terms of safety systems at some level but will not support Hebrew because it’s considered too small. This has serious consequences. People have commented that an Israeli can say something and someone in Gaza can say something and the person in Gaza’s [comment] will be taken down while the [comment from the] person in Israel won’t.”

“If you see that take place, you think the platform is really biased against you when in reality it’s just a cost-cutting measure.”

While there is a lot of focus on politics at the moment, Haugen said the platforms are also failing on issues such as child sexual abuse for the same penny-pinching reasons.

“Discord, some of the Google services, and Twitter – none of them are using live detection tools for child sexual abuse on live streaming services,” stated Haugen.

“For those of you who work in the industry, most of these companies use the same people, processes, and technologies to deal with illegal content. If they're not doing it for child sexual abuse material, then they're not doing it for terrorists and white extremist content either.”

A feature, not a bug

According to QUT’s Nic Suzor, digital platforms replicate and amplify the worst aspects of human behaviour, even if unintentionally.

“If you build a technical system, and it is effective in a world which is already biased and divisive – and that system works – you will amplify those biases. And typically, if you're not really careful, you will develop a system that is most dangerous for the people who are most vulnerable and already marginalised.”

The impact is that the voices of marginalised communities are silenced through self-censorship, said Inman Grant.

“Let's be clear, the way that targeted online misogynistic harassment manifests against women is very different versus men – it's sexualised, it's violent, there are rape threats, it talks about fertility, appearance, and supposed virtue," she said.

“And then when you delve into the intersectional layers, if you're a First Nations Australian, you're twice as likely to receive online hate. The same goes for those who are LGBTQI-plus, as well as those with a disability. And it is meant to demean, it is meant to silence, it's meant to make people self-censor,” she added.

“So it further entrenches inequalities that already exist.”

Technology companies simply have not been able to grapple effectively with the issue, according to QUT’s Suzor.

“Social media companies and large technology companies have talked about their advertising systems or ranking in technologically-neutral ways that were just [about] providing a system for someone else to use. ‘We don't want to discriminate. We think that everyone's voice is important.’ On a surface level, that sounds great. But what you end up seeing is a lot of very targeted attacks by sophisticated, coordinated bad actors.”

He said these were amplified by the populist end of the mainstream media picking up, recycling and recirculating misinformation and disinformation.

“Then ordinary people get caught up as well, for many reasons. Not in an attempt to deliberately undermine democracy but because [they wanted] to participate in the issues of the day. [But they] are also caught up in spreading false information.”

Old school reporting

Back at the Seven panel there was some hope that regulatory and legislative engagement might force a change by the technology companies. Lucio Ribeiro, director of marketing, digital and innovation at Seven, said the big tech players are taking note of emerging legislation that will hold them responsible for spreading fake news. "So they are speeding up [the development] of their own tools."

His colleague Edwina Bartholomew, who hosted the panel, noted the seriousness of the issues for news organisations such as hers.

"It comes down to individual news organisations verifying that their sources are reputable. A lot of the images we are seeing being spread around TikTok and the net aren't actually AI generated. They just happen to be from another conflict or from another source. That just raises a whole other area of misinformation that can't be solved by technology. It has to be solved by individual organisations and citizens."
