News Plus 1 May 2023 - 15 min read

Everything, everywhere, all at once: Plan for generative AI's impact now or face ungovernable disruption as literally every business process gets compressed and SaaS platforms generate a tsunami

By Andrew Birmingham - Editor - CX | Martech | Ecom

L-R, Scott Brinker, Liz Miller, Michael Fagan, Tanya Graham, Nicole McInnes, Paul Connell, Amanda Blevins

Generative AI offers breathtaking levels of time compression, turning weeks into days and in some cases months into minutes. But as vendors desperately race to bolt the tech onto their SaaS platforms, there’s a risk that a dumpster fire of unfettered, unregulated, ungoverned disruption could derail organisations ill-equipped to handle massive acceleration from all angles, simultaneously. Meanwhile, some firms are already learning the hard way what happens when you let teams input company data and IP into ChatGPT. Marketers and senior business leaders from Adobe, AKQA, Amperity, Big Red Group, Coelius Capital, Constellation Research, Digitas, Healthscope, Kyndryl, Ogilvy Australia, Tribal DDB, Village Roadshow and WW/Weight Watchers weigh in on the risks ahead. Ahead of the incoming tsunami, organisational inefficiency may yet be our salvation, reckons martech expert Scott Brinker. Just prepare for some major bottlenecks.

What you need to know:

  • Huge time compression of workflows is common in digital transformation. But traditional digital projects are (usually) tightly structured and carefully governed, even in agile environments. What's coming next is very different.
  • Competitive pressures are forcing SaaS vendors to deliver generative AI capabilities rapidly into their platforms, creating potential for dozens or even hundreds of work processes to compress dramatically, at scale, at speed and all at once. The impact of those processes still needs to be understood and effectively governed.
  • Scott Brinker, chiefmartec: There's little from an AI perspective focussed on plugging the downstream impact of time compression into the corporate machinery.
  • Paul Connell, Big Red Group: A great leveller for smaller businesses but some serious questions remain.
  • Sam Bessey, Amperity: Operationalising training and usage is as important as the technology itself.
  • Zach Coelius, Coelius Capital: AI has suddenly put every enterprise SaaS company back into startup product mode, whether they like it or not.
  • Nicole McInnes, WW/Weight Watchers: First experience of time compression was nothing like what people are about to face with AI.
  • Tanya Graham, Healthscope: Be conscious of creating new limitations.
  • Michael Fagan, Village Roadshow: Every task expands to fill the amount of time so you need to make physical, bankable, dollar savings.
  • Maurice Riley, Digitas Australia: Time compression can force organisational realignment.
  • Angela Bliss, AKQA: Businesses will need new policies, new procedures and new processes.
  • Liz Miller, Constellation Research: People mistakenly assume that if we program the algorithm to be perfect then the behaviour, the action, and the rollout is also perfect.
  • Amanda Blevins, VMware: We cannot trust machine learning without human intervention.
  • Alexandru Costin, Adobe: Creatives and marketers need to keep control. Or else.
  • Davy Rennie, Tribal DDB: Someone needs to own the single source of truth, or things fall apart.
  • Nicolas Sekkaki, Kyndryl: Stopping innovation is a problem. You need to be able to experiment. Just not with company IP.
  • Jason Davey, Ogilvy Australia: It may be too early to even attempt to build out a fully formed governance architecture – and those that do risk being left behind. Managed experimentation is key.

It’s not a Ron Popeil chicken machine. I'm not buying it on an infomercial, sticking a chicken in it and assuming that chicken is gonna taste great. Even when we bought those machines, you had to season the dang chicken, right? You need to stand there and make sure there's not a grease fire.

Liz Miller, VP and Principal Analyst, Constellation Research

For two decades we have been fooling ourselves that we're living on the vertical axis of the digital-fuelled exponential transformation curve – rocket-propelled argonauts trailblazing into the future.

We may be about to discover that we have actually been squatting at the base of a very modest horizontal curve during the last lazy days of the analogue age – something of a monolith moment.

For marketers and digital executives who have already experienced massive disruption over the last decade, the incoming wave of generative AI could turn out to be especially jarring.

But, at least to begin with, operational inertia – and simple human pigheadedness – may offer a buttress against the most immediate effects.

Martech’s Law – the idea that technology is exponential while organisational change is logarithmic – was first popularised by chiefmartec editor-in-chief Scott Brinker in 2013, as the marketing technology sector was just starting to ramp up.

Brinker (who at the time was the CTO of Ion interactive and who is now VP platform ecosystem for Hubspot) has tracked the rise of martech for over a decade, watching it swell from a nascent cohort of about 150 companies to a global industry now topping 11,000 companies. 

He believes there will be a generative AI twist on Martech’s Law, a heuristic that can be applied far beyond marketing: “The inherent rate limiting of slow organisational change may be a good thing here, to temper the potential chaos from too rapid change. Generative AI will require the development of new organisational capital, which takes time.”
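As a purely illustrative aside (ours, not Brinker's, using arbitrary numbers), the exponential-versus-logarithmic gap at the heart of Martech's Law can be sketched in a few lines of Python: the technology curve races ahead while the organisational-change curve flattens, and the widening gap is the absorption problem Brinker describes.

    # Illustrative sketch only: hypothetical curves for Martech's Law.
    # Technology capability modelled as exponential growth; organisational
    # change modelled as logarithmic growth. The gap between the two is what
    # organisations have to absorb.
    import math

    for year in range(1, 11):
        tech = 2 ** year                  # arbitrary exponential technology curve
        org = 10 * math.log(year + 1)     # arbitrary logarithmic organisational curve
        print(f"Year {year:2d}: tech={tech:6.0f}  org={org:5.1f}  gap={tech - org:7.1f}")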

Brinker told Mi3: “We're seeing an explosion of Gen AI [tools], an augmentation of individual pieces. I can create a piece of content here, I can edit this thing, write a program – I can do all this stuff that used to take a long period of time. Now it happens relatively speaking in a fraction of that [time]. But right now, there's very little from an AI perspective, solving the issue of exactly … how do you plug that into the machinery [of the business]?”

Waving vs drowning

Generative AI tools like Dall-e, Lumen, and Midjourney have made headway over the last 12-18 months. But the launch of ChatGPT in late November 2022 saw the technology – and its disruptive potential – explode into widespread consciousness. Which means marketers need to get across the implications.

Paul Connell, chief customer officer at Big Red Group, which operates marketplaces across travel, tourism and leisure, thinks generative AI offers both efficiencies and the capacity to even the playing field for its thousands of small suppliers.

“I’m quite excited for what this means for small businesses as a leveller," says Connell. "If I think about our suppliers and what we're trying to do to make data and insights available to help them … some of these business 'co-pilot' functions [via generative AI] could help someone at say a four-person company achieve the same level of insights and capability that normally only a large business could expect.”

Connell, however, is hardly starry-eyed about the prospects of generative AI. Instead, he is reminded that, in the same way the rise of digital media two decades ago raised questions about where ads were placed and where a brand might be seen, the emergence of generative AI raises questions about the extent to which brands can trust the outputs of the machine. “There’s a really interesting question to come on that.”

That question has emerged repeatedly over the last few weeks as Mi3 interviewed marketing, digital and CX leaders locally and globally.

But tech vendors aren’t waiting for the answer. Instead, SaaS firms are scrambling to bolt on generative AI capabilities with the potential to massively compress organisational work processes en masse.

The martech sector is no exception. At its global customer summit this year Adobe touted Firefly, a generative AI tool it claims can cut campaign design times from weeks into hours. That quantum of compression has underpinned many digital transformation business cases for the last decade or two. What's different this time is the scale-at-speed disruption dynamic we are likely to experience as Adobe's thousands of peers, partners, and competitors pile in simultaneously – or risk being rendered obsolete.

Everything, everywhere, all at once

Mi3 has identified more than 50 SaaS vendors which have already integrated Gen AI capabilities. In addition to Adobe, these include Salesforce, Sitecore, Microsoft, Google, Zoom, Hubspot, Mailchimp, Zendesk, Atlassian and Pegasystems.

Expect a tidal wave to follow. As Zach Coelius, Managing Partner at early stage VC firm Coelius Capital observed last month, “AI has suddenly put every enterprise SaaS company back on a start-up product tempo whether they like it or not. Usually product congeals in the first couple of years and changes very little after that. AI has now created huge inflection potential and speed is critical.”

Imagine dozens or even hundreds of months-into-minutes compressions across your entire enterprise technology ecosystem, none of it via traditional program management governance. That's the potential of unregulated corporate generative AI uptake fuelled by the competitive necessity of software markets. And it is rapidly coming at all of us.

What could possibly go wrong?

Saving grace

At least digital time compression has precedent, with lessons baked in, says Sam Bessey, lead solution consultant at customer data platform (CDP) provider Amperity. He thinks the tech will only be as good as the people using it. "Training people in the organisation around how they operationalise these things is just as important – making sure that they are driving the metrics in the right way," per Bessey.

Organisations have been managing digital transformations since the early 2000s, coinciding with the rise of SaaS, cloud computing, social media, and smart mobility.

In this report, we hear from B2B and B2C marketers, marketing technologists, AI experts, agency leaders, and industry analysts including: Village Roadshow's Michael Fagan, Healthscope's Tanya Graham, Big Red Group's Paul Connell, Chiefmartec's Scott Brinker, Amperity's Sam Bessey, Constellation Research's Liz Miller, VMware's Amanda Blevins, Kyndryl's Nicolas Sekkaki, Tribal's Davy Rennie, AKQA's Angela Bliss, Digitas Australia's Maurice Riley, and Adobe's Alexandru Costin.

They flag generative AI's powerful impact and potential, right along with serious issues of trust, and the need to orchestrate holistically a scale of change that will leave the last decade’s workplace upheavals in its wake.

But we begin by unpacking exactly what time compression looks like.

The challenge we faced was the measurement. We got the opportunity to use an econometric model but they couldn't tell us the effectiveness because we were always on – and there was no gap for them to baseline against. The methodology hadn't caught up with how we were actually doing performance marketing.

Nicole McInnes, Managing Director WW/Weight Watchers

Nicole McInnes, managing director, WW/Weight Watchers

Nicole McInnes recalls encountering the compressive effects of technology on marketing in around 2016 while working for eHarmony. 

These days, McInnes, whose marketing credentials include stints at Dell, Adshel, American Express and WooliesX, is managing director of Weight Watchers Australia.

“Much of my career has been in marketing,” she says, noting that the first big compression she experienced centred on campaign management in the middle of the last decade.

"You did the lead-up to a campaign, that took three to six months. Then there was three months post-analysis. Then you started it all again... All of the measurement for campaign effectiveness was set up for those gaps, for those peaks and troughs."

In other words, the friction from the inefficiency of the process provided the space for analysis, but that analysis was often too late to provide usable insights.

Her earlier experience as a marketer at Dell in 2005 had demonstrated the value of constantly optimising always-on campaigns, but even there the process was very manual.  

She took that Dell experience and applied it to her role at eHarmony, breaking down the ways they were working in silos and moving money around on a weekly basis.

“But the challenge we faced was measurement. We got the opportunity to use an econometric model but they couldn't tell us the effectiveness, because we were always on and there was no gap for them to baseline against. The methodology hadn't caught up with how we were actually doing performance marketing," says McInnes. "And that was really small compression versus what people will now face with AI."

In that instance, the new ways of working meant the company moved faster than the infrastructure to support the measurement optimisation could sustain. That scenario is about to play out across hundreds of work processes, and not just in marketing. 

The first question executives ask is are we already using almost obsolete technology? With all the media attention, leaders want to understand how generative AI will play into this scenario because soon there will be a world where not only is there an automated process, but the software will be able to interact with other processes, and people in the process, to ask them questions or source information that's missing.

Tanya Graham, Healthscope

Tanya Graham, executive general manager, strategic programs, Healthscope 

Tanya Graham has first-hand and contemporaneous experience of the time compression effect of digital technologies, having led several enterprise level technology transformation programs. 

Now working in healthcare, Graham sees myriad opportunities and use cases for applying digital technology. Back-office processes are typically highly manual and paper-based (which can lead to inefficiencies and sometimes errors), and Healthscope has recently commenced automating key billing processes using robotic process automation (RPA).

“In a hospital, the back-office processes need to be accurate and efficient to ensure billing is completed successfully to reliably bring revenue into the business and ensure a good patient and doctor experience,” she says. 

But process issues can be compounded in a tight labour market where there can be turnover of staff with specialist knowledge, or where teams are under additional pressure while they wait for open roles to be filled. 

That's where digital technologies add real benefit.

“The trial is in an early phase, and when scaled will process 5-6 times the number of items a day, which will free people up from repetitive tasks to focus on other activities within the process.” 

“What that means in terms of time compression, just in a short time period, is a saving of several hundred hours. So clearly, once we start to scale that up and use the technology across multiple processes in the business, that's going to be really significant.”

While Healthscope is using RPA – a mature and well-understood technology that has been in wide application in business for over a decade – there is already a generative AI sting in the tail.

“The first question that executives have been asking is 'are we already using almost obsolete technology?' With all the recent media attention, leaders want to understand how generative AI will play into this scenario because soon there will be a world where not only is there an automated process, but the software will be able to interact with other processes, and people in the process, to ask them questions or source information that's missing. So even that next level of human interaction could potentially become automated by generative AI.”

That Healthscope executives focussed immediately on the issue of future-proofing, as well as efficiency gains, demonstrates how quickly generative AI has caught the attention of directors and organisational leaders.

That's almost certainly a necessity now, since all truly disruptive technologies – by definition – introduce risk. Generative AI looks especially disruptive.

Graham is a former CIO at Healthscope, and prior to that was chief transformation officer at Alinta Energy. That experience gives her perspective on what will be required to tame the potentially huge disruptions to workflow that a rapid and ubiquitous uptake of generative AI could cause.

“The challenges come from the fact that you will start to have bottlenecks in other parts of the process unless you can automate something end-to-end. But that’s not always possible," she says.

“If you are looking at automating a component of the process you need to understand the upstream and downstream impacts. You need to be conscious that you might actually create limitations to the true benefit that you can achieve. You've got to think creatively around what you can do with that extra service capacity that you create, and what are the capabilities that you can redeploy into other parts of the organisation?” adds Graham.

"There is a danger of being too narrowly focused with what you're looking at. It’s important to understand how automation will free up people's time to do other tasks. What are those tasks and how do they add value to the business?”

“Every task expands to fill the amount of time, so you produce a time saving and then it just kind of disappears. At an operational level, if you save someone five minutes, you’re not really going to save five minutes of payroll. That time is going to go elsewhere. If you can't save three hours, then you actually haven't saved anything. You made life better for people, but you haven't made a physical, bankable, dollar saving.

Michael Fagan, chief transformation officer, Village Roadshow

Michael Fagan, chief transformation officer, Village Roadshow.

Graham’s last point resonates with Village Roadshow’s Michael Fagan.

“Every task expands to fill the amount of time, so you produce a time saving and then it just disappears. At an operational level, if you save someone five minutes, you’re not really going to save five minutes of payroll,” he says. “That time is going to go elsewhere.”

During his 15 years at Target, including the last two as chief technology officer, Fagan said the benchmark for a measurable change was three hours – the length of a minimum shift.

“If you can't save three hours, then you actually haven't saved anything. You made life better for people, but you haven't made a physical, bankable, dollar saving.”

That’s important in the context of generative AI and time compression because it means that businesses will be looking for the technology to deliver significant efficiency gains, rather than incremental changes.

Village Roadshow is currently running its first generative AI trial within its procurement group to generate questions for RFPs and even potentially sample contracts, according to Fagan.

“For example, give me a one-page contract for rebates that covers off the headings of ABCDE. Or maybe you wanted a simple one-to-two-page contract, either as a thought starter or as a simple draft.”

Fagan is already mulling how to scale.

“How do you get it out into the population? How do you educate and empower the population that can use it, rather than trying to force it upon them?"

He says it’s about empowering team members to embrace rather than fear change.

Fagan has deep experience with technology's compressive effects, having rolled out agile methods at Kmart and Target prior to Village Roadshow.

“It not only cuts the amount of time it takes to develop and build new products, but it's also getting people to think differently about what a product is. So you say, 'this is ready to go now. It’s not perfect but what you have now is a candidate for release. It's not going to be perfect, we can live with it, but this is viable to go'."

The rollout of Village Roadshow’s digital food and beverage service is a case in point. It was built in three months and launched in time for the busy Christmas season, versus the nine months forecast under traditional development cycles.

The data prep, the data wrangling is usually 80 per cent of the job. So that's what gen AI has done in the sense that all of the data it is using to make those decisions and bring those things together, it’s already done in the background. Now we can spit out the answers.

Maurice Riley, chief data officer at Digitas Australia

Maurice Riley has seen radical levels of compression in the predictive analytics field. Where it used to take months to set up predictive analytics models due to all the data wrangling, it's now weeks or even days. 

For instance, CDP solutions by companies like Tealium and Amperity offer the promise of model deployment with multiple clicks rather than multiple months.

"The data prep, the data wrangling is usually 80 per cent of the job. So that's what gen AI has done in the sense that all of the data it is using to make those decisions and bring those things together, it’s already done in the background. And now we can spit out the answers.”

Riley says the new operating velocity has forced a change in analytics service models – the speed of operating models has increased to such an extent that organisations have redeployed their analysts and data scientists back into business units and disbanded standalone analytics service departments. 

"You really want to drive transformational change ... so now you need to have greater accountability, [agility] and influence by being embedded into the team.”

We had no way of gathering insights fast enough to respond to them. So you're increasing expectations of response time by putting surveys in front of employees twice a week, and they're thinking fantastic, I’ll get double the response, double the kind of remedial action from the business. And what happened was it was a big, fat mess.

Angela Bliss, executive partner, experience at AKQA. 

Angela Bliss, executive partner, experience at AKQA

Angela Bliss recounts her time brand-side implementing an employee experience program to switch from an annual 360 review process to a bi-weekly employee pulse.

“That project delivered not only compression but also a massive increase in velocity and frequency hitting an ecosystem at the same time," she said.

But the impact of compression and the extra velocity was not what was anticipated.

"We didn't have the operating models to support that philosophy. We didn't have an insight process that could respond to bi-weekly surveys. We had no way of gathering insights fast enough to respond to them. So you're increasing expectations of response time by putting surveys in front of employees twice a week, and they're thinking fantastic, I’ll get double the response, double the remedial action from the business. And what happened was it was big, fat mess.”

Ultimately, Bliss pulled the pin, but at least it was a "teachable moment" – and she is clear about the lessons.

“You’re going to fundamentally change workflows across multiple teams within an organisation. You’re going to need new policies, new procedures and new processes to be put in place, new operating models, to account for much more human/machine interactions. You're also going to need a workforce that understands how to exploit the capabilities of these technologies. And that's not a quick turnaround for a lot of people. You'll need to build out much bigger data and ML teams.”

Yet, she says, “What I've been hearing is that data and machine learning employees are actually the most resistant, in a way, to using new technologies – because they make their jobs much easier and faster and more accessible to other people.”

In each of the cases above the technology and the methods used to deploy them led (or in the case of Healthscope likely will lead) to significant time compression. But each dedicated project was managed under an agreed methodology and controlled with well-developed governance methods.

The collision of generative AI with regular business operations will be much messier as vendors race to supercharge their SaaS platforms. That's a recipe for an outcome that looks much more like Bliss’s employee survey project than Maurice Riley’s predictive analytics compression.

It will also require a huge leap of trust – a leap many marketers and other company leaders may initially be unwilling to take.

When was the last time data was perfect? The adage of garbage in garbage out still exists. If we don't know what data is going into it, but if we have a perfect model, then that perfect model will only deal with the worst of the data and deliver the most perfect worst data. It doesn't clean the data perfectly.

Liz Miller, VP and Principal Analyst at Constellation Research

Liz Miller, VP and principal analyst, Constellation Research

AI can automate processes at a scale far beyond human capacity, says Constellation Research’s Liz Miller. “That's really the brilliance of AI. It's that opportunity to be able to look at those processes where the limiting factor is human capacity. It's not about human talent. It's about capacity. How many decisions can we make in a second? How many processes can we optimise in a second? That's where AI becomes really important, because customers want to get to something quickly. And if they can get to something faster, they're going to be happier. If they see something that's really personalised and relevant to them, it just gets even better.”

She gives the example of a travel business developing a campaign for women who want to travel alone.

“You should probably have images of women traveling alone! You're able to make those changes generatively thanks to generative AI. That’s how granular I can get with my AI-driven audience segmentation now. You don’t want to wait a week for creative to bring that back, you want to compress that." 

However, she also told Mi3 that while AI compresses the time required to make decisions to a fraction of what it was, there is a critical caveat: velocity cannot mean abandoning human judgment. Do not assume AI is a silver bullet, Miller cautions.

“It's not going to solve all the problems. It’s not a Ron Popeil chicken machine. I'm not buying it on an infomercial, sticking a chicken in it and assuming that chicken is going to taste great. Even when we bought those machines, you had to season the dang chicken, right? You need to stand there and make sure there's not a grease fire. It's the same thing.”

With AI, according to Miller, people mistakenly assume that because we program the algorithm to be perfect, the behaviour, the action, and the rollout will also be perfect.

“When was the last time data was perfect? The adage of garbage in garbage out still exists. If we don't know what data is going into it, but if we have a perfect model, then that perfect model will only deal with the worst of the data and deliver the most perfect worst data. It doesn't clean the data perfectly.”

We have to be realistic about what AI is – and what AI is not, says Miller. “AI is a decision tree. It is not judgment. Humans have judgment. We can understand we probably shouldn't stand so close to that cliff. AI is going to tell you just how close you can stand to that cliff.”

Safety last?

There’s another element to human judgment that Miller – and many of the execs we spoke with – raised: trust.

“There’s a line that keeps popping up that I think a lot of people gloss over. But it's really important: Is AI safe for commercial use? It's one thing to tell someone that AI is safe for use. That's great, my kid's not going to be able to draw naked pictures of her classmates. Awesome, great. But safe for commercial use is a completely different animal.”

Marketers will want to train AI to understand subtle differences, she says.

"For instance, the difference between red and red for Coca Cola, script and script for Coca Cola because they are two completely different things. I can train my model to look for red. I need to train my model to look for Coca Cola red.”

There is not a machine learning (ML) technology out there we completely trust. It just doesn't exist – and we've been doing ML for a long time. We've been trying, but we've not got to the point where we trust ML without human supervision.

Amanda Blevins, VP and CTO, Americas, VMware

Amanda Blevins, VP and CTO, Americas, VMware

Technology executives with a deeply formed view about AI’s capabilities also underlined a need to exercise caution.

"There is not an machine learning (ML) technology out there we completely trust, it just doesn't exist," says Amanda Blevins who heads innovations at the giant B2B technology firm VMware. And we've been doing ML for a long time. We've been trying, but we've not got to the point where we trust ML without human supervision.”

VMware was at the vanguard of the first digital revolution, with its pioneering work in virtualisation that unlocked cloud computing. According to Blevins, generative AI is very good at helping to perform tasks that someone has done before. “All it’s doing is bringing together all these data sources and trying to answer the question you provide,” she says.

“It doesn't know if it's right or not. It might think it is. It might argue that it's right and will be very firm about it, but it really does not know. We still need to apply our human experience and our human brains to its output, and to then be able to take that and modify as necessary for a use case.”

Blevins says companies must create and train their own models within the domains of their business expertise, using their own data.

That's likely to be true for a long time, she says.

“That means that there's going to be this period of caring for these models [and] these algorithms until we get to a point where there's trust, so that I don't have to verify everything.”

Blevins also offers a cautionary note for those who advocate generative AI as a creative engine.

“It's trained on existing things, it’s not necessarily a creator. It can create more of a variation of what exists, but it is not an algorithm that is made to be an artist. It's not like a Picasso. It's not Andy Warhol. It's not coming up with the next big thing. And it can't create what's necessary to do that, because it's never been done before.”

Nobody wants to risk their brand by just blindly activating a potential feedback loop.

Alexandru Costin, VP, Generative AI and Sensei at Adobe

Alexandru Costin, VP, Generative AI and Sensei at Adobe

Alexandru Costin, the man behind Adobe’s generative AI implementation – Firefly – shares a similar view about creativity.

He tells Mi3, “It’s our policy both for creatives and marketers: we think the artists and the marketers should be in control.”

Costin says it will be a long time before ads generate themselves and campaigns are executed without human intervention.

“We think the ingredients will be generated by a human [who] will compose. The human artists will be able to do meta-documentation that serves multiple categories at once, so they will be way more productive.

“But then a marketer will take a look at those and make sure that they're connected with the segments the AI has recommended. They will make the determination ‘this is what I want and need’ by getting feedback from the system of what's working.”

He echoes Miller's sentiments about the commercial safety of generative AI. “Nobody wants to risk their brand yet by just blindly activating a potential feedback loop.” 

The question isn’t 'will it be making decisions?' It is 'what decisions are we comfortable with the Gen AI making? And what decisions still need a human lens?'

Davy Rennie, National Managing Director, Tribal DDB

Davy Rennie, National Managing Director, Tribal DDB

As machines become more embedded in the decision-making process, agency bosses such as Tribal DDB's Davy Rennie argue guardianship of a single source of truth becomes critical.

“That means a human being is still making sure that [AIs] are making the right decision – because if it's just a black-and-white decision, if it doesn't consider human-to-human interaction, doesn't consider relationships ... it could be problematic for the organisation.”

Rennie also believes people are focussing on the wrong question.

“The question isn’t 'will it be making decisions?' It is 'what decisions are we comfortable with the Gen AI making? And what decisions still need a human lens?'”

According to Rennie: “Even with us, as an organisation, when we look at deals, we look at them collectively across all the different business units, and ask, is this right for us as a group. There will still need to be someone making that call on the insights coming from generative AI.”

You cannot stop innovation. Stopping innovation is a problem. So you need to be able to experiment.

Nicolas Sekkaki, General Manager, Global Practice Managed Applications, Data and AI, Kyndryl

Nicolas Sekkaki, General Manager, Global Practice Managed Applications, Data and AI, Kyndryl

Nor is trust merely a matter of output, as Samsung discovered last month when workers from its semiconductor division accidentally leaked some of the company’s most valuable IP after using ChatGPT to check for source code errors.

Samsung may have been the first high-profile such case but it is not alone. Global IT services business Kyndryl recently shut down an internal project that was utilising ChatGPT after discovering its own data and work methods were leaking out into the ether.

“You cannot stop innovation. And by the way, stopping innovation is a problem. So you need to be able to experiment,” says Sekkaki, while cautioning that firms must equally be alive to the risks. Using live data and processes to test ChatGPT's capabilities, as Kyndryl employees did, is probably not ideal.

“They did it in good faith – and by the way, it's great to have people that understand what can we do with this disruptive innovation. But we quickly realised that we could not let anybody and everybody test it. So we shut down ChatGPT and said you cannot use it with our own data because they were creating bridges between our IP and GPT and GPT was taking all the IP and giving it back to everybody.”

Kyndryl's swift reaction demonstrated how corporate leaders are already realising the need for rapid response units. The executive leadership team was already conscious of the issue and had been discussing the disruption ChatGPT was likely to bring, hence moving at speed to contain the problem. "It was really quick," per Sekkaki. "It was at the board level."

Sekkaki says you don’t need to be in a technology company to recognise the implications of a technology that harvests the internet and ingests everything it is fed. But despite the early hiccup, Sekkaki is bullish on platforms such as ChatGPT. “I think it’s great that we have ChatGPT. It shows the potential of [generative AI] and how far this can go. But it also shows the limitation.”

As a global technology services business (Kyndryl was spun off from IBM in 2021), a sustained ban on platforms such as ChatGPT was never really feasible, especially as its customers already want the B2B giant's help understanding disruptive technology.

“We had customers knocking at the door and saying how can we use this technology? Either ChatGPT or the underlying technology. And to be able to answer that, you have to be savvy about it and you have to make your own opinion. You have to have your people looking at it. So we have opened it up … for a few people.”

What we do know is that there are two sides. You've got understanding, exploring, and discovering the potential. And then you've got governance. They are at the opposite ends of the spectrum, and my personal view is that I think you could spend a lot of time trying to work out all of the governance issues and get behind the market.

Jason Davey

Jason Davey, chief experience officer, Ogilvy Australia

To complicate matters, some industry leaders believe it may be too early to even attempt to build out a fully formed governance architecture.

“I don't think we've got a complete answer as an industry," says Ogilvy’s Jason Davey. "What we do know is that there are two sides. You've got understanding, exploring, and discovering the potential. And then you've got governance. They are kind of at the opposite ends of the spectrum, and my personal view is that I think you could spend a lot of time trying to work out all of the governance issues and end up behind the market.”

Davey says the best approach to generative AI is to implement managed trials. “Experimentation in a managed fashion is the way to go.”

His view reflects the immaturity of an environment that regulators and legislators are only just beginning to consider how to tackle – the same regulators that have taken more than a decade to tackle digital media and privacy.

“For example, the law doesn't have a point of view that's concrete yet on the copyright of generative artwork. So, therefore, can you use generative AI to develop art and designs that you put in the marketplace if you can't copyright them? What risks does that put you at? I think all industries are grappling with some of these issues.”

But managed trials will at least let you understand what's possible, he says, which gives firms a foundational start point.

“How do you govern something if you don't have experience with it? It's such a fluid point at the moment.”

The problem is going to be less about all this stuff happening in an uncoordinated fashion because the natural defences we have in companies are just not ready to adapt yet. Instead, what you're going to feel is this enormous bottlenecking. And I think people are going to start very quickly to have to think about ways to get past those bottlenecks.

Scott Brinker, editor-in-chief, chiefmartec

Scott Brinker, editor-in-chief, chiefmartec

Amid the incoming tsunami, inefficiency may yet be our salvation, according to chiefmartec’s Scott Brinker.

He told Mi3 that when he first conceived of Martech’s Law (technology changes fast, organisational change is slow) he only really considered the negative context. Now, however, our human inability to match the speed of the machine may actually help us.

“Take something for instance like a company’s website. It is possible for any one person to instantly create a piece of content or even a little interactive app, but the way most organisations are structured today is that even if someone built that content or built that app over a period of six months [instead of instantly], they wouldn't be allowed to just turn around and immediately publish it on the website."

Instead, he said there are a whole series of checks and governances. "And none of those have gone away."

Per Brinker: "The problem is going to be less about all this stuff happening in an uncoordinated fashion because the natural defences we have in companies are just not ready to adapt yet. Instead, what you're going to feel is this enormous bottlenecking.”

People are going to have all these things they want to do – and the speed to do them because of AI-driven compression – but the issue of deployment will be critical, says Brinker.

“People are going to start very quickly to have to think about ways to get past those bottlenecks.”

We don’t know yet whether the breathtaking exponentiality of Brinker’s Martech’s Law will prove a curse or a blessing, but we can surmise that its impact on businesses will be far-reaching and disruptive, beyond anything we have yet experienced.

It turns out our very human and occasional inability to cope may be the most useful coping mechanism available. At least it will allow us to draw breath, and from a business’s perspective, hopefully not our last.

Maybe it’s time to ask ChatGPT about the implications of putting highly compressive technologies into the hands of end users with little coordination or regard for the impact of accelerating work processes on the rest of the organisation’s information supply chain, and see what it says.

Go ahead, we’ll wait. We’ve got all the time in the world.

