News 1 Aug 2023 - 6 min read

Senate urges government TikTok ban extension to critical infrastructure providers - telco, transport, finance, food, grocery, healthcare all in scope; LinkedIn, Meta, YouTube, Twitter all pose risks

By Andrew Birmingham & Brendan Coyne

TikTok ban extension recommended, but Senate looks to US for outright powers to force TikTok's hand.

The Select Committee on Foreign Interference through Social Media has recommended extending the ban on TikTok on government devices to those working in critical infrastructure – which could include workers across food and grocery to financial services, communications, healthcare, utilities, transport, education and beyond. Meanwhile, if the US government forces ByteDance to divest of TikTok, the Australian government should consider doing the same. But the platform has reportedly spent $100m in a bid to curb those powers.

What you need to know:

  • Senate Committee says Australia should follow US lead on TikTok ban.
  • But proposes powers to extend existing ban on government devices more broadly to critical infrastructure providers – with sectors spanning telco to financial services, food and grocery, healthcare, transport and beyond potentially in scope.
  • Recommends minimum levels of transparency for social media services.
  • Blasts TikTok and WeChat's attitudes to the Senate review: "TikTok was evasive" and WeChat's answers labelled "disingenuous". "WeChat showed contempt for Parliament", per the report.
  • But Committee says Australians more likely to see disinformation on major platforms like Facebook and Twitter.
  • Warns that LinkedIn users are targets for foreign agents. ASIO head Mike Burgess said the 16,000 Australians it found declaring they have a security clearance on professional networking sites, and the 1,000 more revealing they have worked in the intelligence community, "may as well add 'high value target' to their profiles."
  • Flags Meta's Threads is collecting more data than Twitter, "almost everything it can".
  • Warns targeted advertising can be weaponised.
  • Incoming AI tsunami deemed "exponential" threat; committee urges urgent government AI regulation.

TikTok was reluctant to provide witnesses sought by the committee, and evasive in their answers when they finally did agree to appear. WeChat showed its contempt for the Parliament by failing to appear before the committee at all, and through its disingenuous answers to questions in writing.

Select Committee on Foreign Interference through Social Media

TikTok's fate in Australia is now largely in the hands of the US government. Australia's Select Committee on Foreign Interference through Social Media has recommended the government consider forcing ByteDance to divest of TikTok locally if the US does the same.

The Biden administration is pushing for powers under the RESTRICT Act that would enable it to ban TikTok in the US. However, TikTok is reportedly lobbying hard, with one senator claiming it has spent $100m, in a bid to prevent the proposed bill going through without significant amendments.

Either way, the app's ban on government devices in Australia could be significantly expanded after the committee recommended the Government extend, "via policy or appropriate legislation, directives issued under the Protective Security Policy Framework regarding the banning of specific applications (e.g. TikTok) on all government contractors' devices who have access to Australian government data."

It also urged the Minister for Home Affairs to potentially expand that ban to swathes of the private sector by reviewing the application of the Security of Critical Infrastructure Act 2018, to "allow applications banned under the Protective Security Policy Framework to be banned on work-issued devices of entities designated [as] Systems of National Significance."

If so, a TikTok ban could apply to those working across critical infrastructure – which, per the Department of Home Affairs' list, now spans industries and verticals including electricity, communications, data storage or processing, financial services and markets, water, healthcare and medical, higher education and research, food and grocery, transport, space technology, and defence. Which assets and systems within those sectors qualify as nationally significant is less clear cut.

Consumer distrust

Sam Higgins, Principal Analyst and Global Public Sector lead for Forrester, told Mi3: “According to Forrester’s Consumer Trust Imperative research, despite the high usage of social media by Australians, including the 20 per cent of the population who use TikTok at least once a week, the operators of these platforms are one of the least trusted types of organisations in the market. Indeed, over half (52 per cent) of all Australians indicated in 2022 that they distrust social networking companies – with a mere 18 per cent of the population placing any faith in the way they treat their customers at all.”

According to Higgins, it is also important to note that when it comes to building trust in brands like Apple, Google, Meta, ByteDance, Tencent and other technology companies, the second most important lever for growing and maintaining trust with Australians is transparency.

"Put simply Australian’s want social media platforms that do business in an open way and make every effort to share information about the business that is based on accurate, verifiable facts. It’s clear from both the report and poor trust sentiment we see in our data that yet again the market has failed to meet consumer, and therefore, policymaker expectations - even before we consider the wider geopolitical environment and clear lines being drawn in the race to establish virtual borders of digital sovereignty."

As such, he said, he was not surprised by the recommendations. "Nor am I surprised by the close alignment with policies, past and even future actions being taken by our defence allies like the US. What is encouraging is that these rules, like the media and digital platforms bargaining code, are being applied to all comers regardless of which point of the compass they might access Australian consumers from, ensuring that we balance national security with the need for open global digital commerce.”

Transparency push, banning powers

Mi3 Australia obtained a copy of the report prior to its official release. It contains a wide set of recommendations regarding social media generally, including a requirement for "all large social media platforms operating in Australia to meet a minimum set of transparency requirements, enforceable with fines. Any platform which repeatedly fails to meet the transparency requirements could, as a last resort, be banned by the Minister for Home Affairs."

Among the recommendations, social media platforms must be required to:

  • Have an Australian presence
  • Proactively label state affiliated media
  • Be transparent about any content censorship or account takedowns on their platform
  • Disclose any government directions they receive about content on their platform, subject to national security considerations
  • Disclose cyber-enabled foreign interference activity, including transnational repression and surveillance originating from foreign authoritarian governments
  • Disclose any takedowns of coordinated inauthentic behaviour (CIB) networks, and report how and when the platform identified those CIB networks
  • Disclose any instances where a platform removes or takes adverse action against an elected official's account
  • Disclose any changes to their platform's data collection practices or security protection policies as soon as reasonably practicable
  • Make their platform open to independent cyber analysts and researchers to examine cyber-enabled foreign interference activities
  • Disclose which countries they have employees operating in who could access Australian data, and keep auditable logs of any instance of Australian data being transmitted, stored or accessed offshore
  • Maintain a public library of advertisements on their platform.

TikTok: existential threat?

All of the platforms were found to be lacking in transparency and open to manipulation. But it is the commentary and conclusions regarding TikTok that will draw most attention, given the platform's meteoric rise – with advertisers piling in. The National President of the Public Relations Institute of Australia (PRIA), Shane Allison, told Mi3 in March that the TikTok "reputational risk" for companies and brands was on the radar of the corporate affairs community, but that brands and marketers were "addicted to platforms that drive demand generation and aren't aware of the reputational cost..."

According to yesterday's Senate committee report: "The committee was particularly concerned with the unique national security risks posed by social media companies like TikTok and WeChat, whose parent companies ByteDance and Tencent respectively, are irrefutably headquartered in and run from authoritarian countries like China. China's 2017 National Intelligence Law means the Chinese Government can require these social media companies to secretly cooperate with Chinese intelligence agencies.

"In the case of TikTok, the committee heard that its China-based employees can and have accessed Australian user data, and can manipulate content algorithms—but TikTok cannot tell us how often this data is accessed despite initially suggesting that this information was logged. Nor was TikTok able to provide the legal basis upon which its employees can refuse to comply with Chinese law—the short answer is, it can't."

The report said TikTok was reluctant to provide witnesses sought by the committee, and evasive in its answers when they finally did agree to appear. It was even more scathing of WeChat which it said "showed its contempt for the Parliament by failing to appear before the committee at all, and through its disingenuous answers to questions in writing."

It contrasted the approach of the two giant Chinese apps with what it described as the "more constructive engagement of platforms based in Western countries who at least recognised the fundamental importance of the checks and balances inherent in democratic systems."

LinkedIn, Meta, YouTube, Twitter

However, the report says the large Western platforms, including those owned by Meta, Google's YouTube, Microsoft-owned LinkedIn and Twitter, also pose serious security and societal risks.

As well as their larger user numbers in Australia, much of that risk stems from people's seeming ignorance of it. Per submissions to the Committee:

  • 59 per cent of people post names or photos of children.
  • 27 per cent of people post names or photos of their partner.
  • 93 per cent of people post employment updates.
  • 36 per cent of people post information about their company, job, boss or colleagues.
  • 32 per cent of people post updates and photos during business trips.
  • Around 55 per cent of people do not have privacy settings activated on their social media.

The report offered a withering critique of those oversharing on LinkedIn from Australia's head of national security. The Microsoft-owned platform, it said, presents a "different risk for foreign interference than other social media platforms as it is a professional networking site".

Per the report: "In February 2023, the Australian Security Intelligence Organisation (ASIO) head Mr Mike Burgess noted that ASIO identified nearly 16,000 Australians publicly declaring on professional networking sites they have a security clearance, and 1,000 more revealing they worked in the intelligence community. Mr Burgess said 'these people may as well add "high-value target"' to their profiles".

The report included submissions that LinkedIn has become "heavily abused by threat actors seeking to distribute malware, perform cyberespionage, steal credentials, or conduct financial fraud."

It added: "LinkedIn noted that the largest threat is posed by fake accounts and reported that in 2022 it 'blocked more than 80 million fake accounts worldwide, of which about 400,000 were attributed to Australia'", citing Joshua Reiten, Senior Director, Legal–Digital Safety at LinkedIn.

Meta, YouTube and Twitter were criticised by those giving evidence to the Select Committee as suffering from similar misappropriation by bad actors – and as being slow to react to mis- and disinformation, with takedown requests often falling on deaf ears.

Meta's Threads, per the report, "has been noted to collect more user data than Twitter: Threads is collecting almost everything it can, including data on your health, purchases, financial info, location, contact info, search history, and browsing history". Meanwhile, "Threads is not currently available in the European Union because it does not comply with safety regulations." That could prove interesting for Australian users and Meta's local operation when Australia completes its Privacy Act overhaul.

Manipulated algorithms, content farms, micro targeting

The scale of interference and manipulation is partially due to the success of the platforms' own algorithms and recommendation engines in driving engagement. The problem is that they can also amplify harmful content seeded by foreign agents and can normalise "prejudice, hate and distrust in public institutions", according to Australia's eSafety Commissioner.

The report also flagged tools developed by the ad industry, specifically highly granular targeting, as problematic when applied at a one-to-one level, known as 'dark' or 'micro' targeting. "Micro-targeting is the weaponisation of the social media environment to further the goals of various actors–corporate, international or even malicious actors," per the report.

Meanwhile, troll farms and content farms continue to both spread misinformation, and in the latter case, monetise it through advertising – usually placed by brands that are not looking too carefully at where their money is being spent.

New threat: AI tsunami

The report warned that AI has the potential to "exponentially increase" foreign interference and operations by bad actors, some of whom are already engaging 'interference as a service' providers. Some experts labelled AI the biggest threat Australia faces:

"Social media is so important in elections because people make decisions based on what the group is saying or doing, and government makes decisions in its normal running based on what it thinks the population wants," stated David Robinson, a former Australian Army Intelligence Officer and co-founder of cybersecurity firm Internet 2.0.

"[However] a social media platform with AI and botnets can create hundreds of thousands of accounts and talk about the same issue. That influences what you as lawmakers and politicians and what the population do and say to each other. I think AI is very dangerous, and if we don't have laws to regulate how it acts on social media – about talking about elections, for example –I think we're running out of time, to be honest. I think it's the most dangerous thing."

The Senate Committee appears to agree.

"The committee is alarmed at the increasing national security risks associated with the potential weaponisation of artificial intelligence (AI) technologies by malicious actors. These concerns have sharply increased with the advent of generative AI tools such as ChatGPT and Midjourney, which can be used to generate content at a speed and scale we have never seen before."

It called for the government to "urgently address the national security concerns" around AI.

Update – TikTok Director of Public Policy AUNZ, Ella Woods-Joyce, offered the following statement:

"While we disagree with many of the characterisations and statements made regarding TikTok, on our initial reading, we welcome the fact that the Committee has not recommended a ban. We are also encouraged that recommendations largely appear to apply equally to all platforms. TikTok remains committed to continuing an open and transparent dialogue with all levels of Australian Government."
