Five stories you may have missed over Xmas; Adobe and Figma pull pin on $20bn merger; NYT sues OpenAI, Microsoft; Morgan Stanley drools over retail media; More data suggests TikTok-CCP alignment; Botshit era dawns

While you were sensibly ignoring developments in the worlds of marketing technology, generative AI and social media platforms over Christmas, the news cycle continued rolling. Amid the filler, here are five stories worth familiarising yourself with. First up, Adobe and Figma abandoned their merger, although both have reasons to be satisfied with the outcome. Meanwhile the New York Times isn't waiting for regulators to move on generative AI – it is suing OpenAI and Microsoft for hoovering up, and reproducing, its IP without permission. Morgan Stanley has run the numbers on retail media and spies pay dirt. Rutgers University and the Network Contagion Research Institute meanwhile analysed a tonne of TikTok posts and concluded there's less to like, at least from the perspective of western governments. Finally, it's bad enough having to guard against the lies humans tell and share; now we have to worry about AI stuffing the information pipes – welcome to the world of botshit.
$20 billion deal unravels
Adobe and Figma called off their $20bn merger last month, saying they saw no path forward that would allow them to overcome the concerns of regulators in the UK and Europe (and, according to industry commentary, eventually from the US as well).
Barely a month before the pin was pulled, Figma's VP Legal Brendan Mulligan was still singing from the previously agreed hymn sheet:
"Figma’s focus is collaborative product design and development. Adobe builds world-class creative tools that reach hundreds of millions of people around the globe. By bringing our complementary strengths together, we have the potential to unlock new benefits for consumers that neither company could deliver on its own."
Cut forward to December and the tune had changed.
According to statements from both companies, they each continue to believe in the merits and procompetitive benefits of the deal, but mutually agreed to terminate the transaction based on a joint assessment that there is no clear path to receive necessary regulatory approvals from the European Commission and the UK Competition and Markets Authority.
“Adobe and Figma strongly disagree with the recent regulatory findings, but we believe it is in our respective best interests to move forward independently,” said Shantanu Narayen, chair and CEO, Adobe. “While Adobe and Figma shared a vision to jointly redefine the future of creativity and productivity, we continue to be well positioned to capitalise on our massive market opportunity and mission to change the world through personalised digital experiences.”
Figma CEO Dylan Field expressed no little exasperation. "It’s not the outcome we had hoped for, but despite thousands of hours spent with regulators around the world detailing differences between our businesses, our products, and the markets we serve, we no longer see a path toward regulatory approval of the deal."
But it's not entirely bad news for either partner. Field's business benefits from a billion dollar break fee, filling the war chest for another round of development and expansion without the need for shareholders to dilute their holdings.
And, at least according to analysts like Constellation Research's Liz Miller, in a world of generative AI, where Adobe is already making great strides, it may not need Figma as much as it once did. Channeling Taylor Swift, Miller says that regulators may have done Adobe a solid by dragging their heels and bungling these inquiries.
"While regulators were lamenting that Adobe would use their posture in the marketplace to miss out on improvements and innovations, tools like Generative Fill and Firefly Image models have radically changed the work of creation in ways we couldn’t have truly envisioned when the headlines first broke about Adobe and Figma," per Miller.
She's also in the camp of analysts who feel Figma has done ok despite not getting the originally sought outcome. "Adobe may have just handed over a billion-dollar budget boost that Figma needs to further power innovations and bring on the next evolution of digital product design and digital collaboration. During this vetting period, Figma has continued to grow in all the right places."
Other analysts questioned what the failed merger means for the wider market, with concerns the result signposts a harder row to hoe for scale-ups in future seeking exits via trade sales rather than IPO.
NYT sues OpenAI and Microsoft
While Australia barbecued itself into oblivion, concerns over IP and other legal matters elsewhere dominated the regulatory conversation. The New York Times announced it is suing OpenAI and Microsoft for hoovering up its intellectual property and reproducing its content without permission.
It's the latter of those two concerns that has the NYT on firmer ground, according to observers, as the courts were generally hands off on the matter of firms ingesting information during the rise of social media.
According to the publisher's filing in the Federal District Court in Manhattan, "The Times’ journalism provides a service that has grown even more valuable to the public by supplying trustworthy information, news analysis, and commentary".
The Times argues that the defendants’ unlawful use of its work to create artificial intelligence products that compete with it threatens The Times’s ability to provide that service.
"Defendants’ generative artificial intelligence (“GenAI”) tools rely on large-language models (“LLMs”) that were built by copying and using millions of The Times’s copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides, and more. While Defendants engaged in widescale copying from many sources, they gave Times content particular emphasis when building their LLMs—revealing a preference that recognises the value of those works. Through Microsoft’s Bing Chat (recently rebranded as “Copilot”) and OpenAI’s ChatGPT, Defendants seek to free-ride on The Times’ massive investment in its journalism by using it to build substitutive products without permission or payment."
For its part, OpenAI initially said the Times' claim is without merit. More recently, OpenAI founder Sam Altman has said OpenAI doesn't want to train its models on NYT content. Read into that what you will, about either the NYT or OpenAI.
Morgan Stanley on the virtues of retail media
Investment banking and financial services giant Morgan Stanley sees revenue gold from the rise of retail media. In a paper titled, "Can e-commerce margins converge in-store? Exploring the benefits of supply chain investment and retail media", the banking powerhouse flags the high overall margins for retail media, which it says run between 70 and 90 per cent. That's an important consideration for businesses that are otherwise highly leveraged to a fixed cost base.
"As the segment grows, more retailers are attuned to its economic potential, increasing the appetite to monetise their customer platforms by building out their own retail media offering."
It will also help build customer loyalty, according to the bank.
Elsewhere, Morgan Stanley also noted that there will be losers from the rise of retail media. Much of the money for retail media campaigns – considered trade marketing these days – will come directly out of the pockets of traditional media players. It was ever thus.
Dance, comrades?
A study by Rutgers University and the Network Contagion Research Institute (NCRI) supports the idea that TikTok is being used by the Chinese government as a channel of influence. According to the study, called "A Tik-Toking Timebomb: How TikTok's global platform anomalies align with the Chinese Communist Party's geostrategic objectives," the data strongly suggests that TikTok systematically promotes and demotes content on the basis of whether it is aligned with or opposed to the interests of the Chinese government.
The NCRI conducted an analysis of hashtags on both TikTok and Instagram and found that "While ratios for non-sensitive topics (e.g., general political and pop-culture) generally followed user ratios (~2:1), ratios for topics sensitive to the Chinese Government were much higher (>10:1)."
The study has certainly been noticed in the US, where there has been a long-running debate about whether to ban the platform. Brendan Carr, Commissioner of the US Federal Communications Commission, shared the research on Twitter, noting, "research shows massive difference in pro-CCP content on TikTok compared to other major social media platform."
The belief locally is that if the US goes down the route of banning TikTok, Australia will follow suit.
The Australian Senate has already publicly flagged concerns. Per Mi3's report last year, if the Senate does take action, a wide pool of people, including much of the marketing industry and its supply chain, will have to ditch the app – at least on their work phones.
"The Select Committee on Foreign Interference through Social Media has recommended extending the ban on TikTok on government devices to those working in critical infrastructure – which could include workers across food and grocery to financial services, communications, healthcare, utilities, transport, education and beyond. Meanwhile, if the US government forces ByteDance to divest of TikTok, the Australian government should consider doing the same."
GenAI: The dawn of the "botshit" age
In an age of misinformation, the lies spewed by propagandists, agents of influence, and lazy, incompetent middle managers were hard enough to filter out. Now a new academic paper with authors from the business schools of the University of Alberta, Simon Fraser University and City University London warns that we need to prepare for the robotic equivalent.
Once that robotic content is utilised or shared by humans it becomes what they term "botshit", as opposed to bullshit – which they describe as "...an important technical concept in management theory for understanding how to comprehend, recognise, act on, and prevent acts of communication that have no grounding in truth."
In a paper called "Beware of Botshit: How to manage the epistemic risks of generative chatbots," the authors write that advances in large language model (LLM) technology enable chatbots to generate and analyse content to help people in their jobs. "Generative chatbots do this work by ‘predicting’ responses rather than ‘knowing’ the meaning of their responses. This means chatbots can produce coherent sounding but inaccurate or fabricated content, referred to as ‘hallucinations’. When humans use this untruthful content for tasks, it becomes what we call ‘botshit’."
Drawing on research from the world of risk management, they describe a framework that identifies four modes of chatbot work (authenticated, autonomous, automated, and augmented), which they overlay with the botshit-related risks of ignorance, miscalibration, routinisation, and black boxing.
They write: "While LLM chatbots are powerful content generation and analysis tools, they do not have any sense of truth or reality beyond the words that tend to co-occur in their training data and processes (i.e., the supervised fine-tuning, the reward model, and policy optimisation). This training data is often date-limited and necessarily retrospective."
According to the authors, it is crucial for organisations to understand that they still need substantial human input for LLMs to function effectively – leaving the marketing supply chain to breathe a collective sigh of relief. For now.