Born Clippy
In part 2 of this series, we look at some of the more insidious reasons for inserting AI Clippys into everything and what may actually be driving those decisions. We also examine how vendors are deliberately marketing a distorted vision of AI, and how this has the potential to (badly) backfire.
Part 2 (of 4): Rhyming, not repeating
By Darragh Coakley
“History Doesn't Repeat Itself, but It Often Rhymes” – Mark Twain
As outlined in part 1 of this series, Clippy - one of the great design blunders of our time - has shed its old skin and risen in a new genAI-enabled form.
I am not the first to invoke his name when describing how genAI is being inserted and inflated into everything.
And what that results in.
The fundamental issue with Clippy was intrusion, unrequested.
GenAI “helpers”, and much genAI functionality integrated into existing software, now adopt the Clippy model. Across all tools, all software. Even Microsoft's humble Notepad will now feature AI, as 'Copilot all the things' continues apace.
Like Clippy, this is a feature neither requested, desired, nor functional. It ultimately serves no purpose other than to inflate and insert AI where it is neither needed nor wanted.
What must be undone here is the assumption that this is done to improve the software, or to make users' lives easier or better. That is the pitch they will give you. But it's far from the reality.
The rush to insert these features into everything, “for the sake of it”, belies the actual intention.
Ed Zitron articulates how this process of integrating and inflating AI into everything can potentially profit companies, if not actually help users.
“Generative AI may not be super useful, but it is really easy to integrate into stuff and make “new things” happen, creating all sorts of new things that a company could theoretically charge for”.
At a larger scale, for the likes of Meta, it's less that Meta AI is designed to help specific software and more that the line between distinct pieces of software is being blurred. One thing for everything. Zuckerberg more or less said as much in Meta's recent antitrust trial.
I can’t claim to know what machinations turn and grind in that dark heart.
But I do find it notable that Meta AI is now the common element across some of the earth's most popular software. The common mechanism across all these systems now needs to read everything you say or post or message - “to serve you better”.
Again, Zitron has identified the danger lurking down this route.
“The biggest innovation here isn’t what Generative AI does, or can do, but rather the creation of an ecosystem that’s hopelessly dependent upon a handful of hyperscalers, and has no prospect of ever shaking its dependence”.
In many ways, that’s the least of our worries.
Inserting AI into software, systems, and services just to charge more is at least somewhat planned and somewhat clear in its intent. However megalomaniacal.
What’s potentially far more insidious is AI being given the reins to systems purely on the assumption and promise it will work. Only for it to spectacularly fail.
Imagine a post-apocalyptic vision of our world brought about by AI. But instead of a malevolent AGI laying waste to humanity, it's just a dumb one. An over-hyped, half-assed, untested Skynet.
A potential test case for this could come from Microsoft’s plans to add Grok into their AI architecture.
Some new and exciting Grok features:
“Who’s to say the holocaust definitely happened?”
Now, personally, if I asked Liam from accounts to summarise the Q2 budget for me and he went off about white genocide and the holocaust being a fabrication, I'd probably be reluctant to have him coordinate the big public server contract.
But that’s just me. Maybe I’m just not appreciative of dynamic and disruptive thinkers.
It's harder still to swallow the irony that Microsoft are plugging in a system which denies genocide, even as they themselves are actively helping to enable and censor one.
On that same topic, please, please, please do not read further without donating to Gaza Go Bragh or other charities trying to stem the horror.
More often than not, when we talk about "computer stuff", we're actually talking about people stuff.
The problem of AI Clippys everywhere is less to do with tech, more to do with how tech is used by people. Made available to people. Shoved down the throats of people.
This is not to say that no AI helper can be useful or well-made. Many people might well perceive the current crop of helper tools as useful.
But poor design and usability decisions (or a complete lack thereof) can only produce more Clippys.
Similarly, it's not that generative AI can't be good, can't be useful, can't help.
But only really if it is developed for bespoke purposes, with a view to actual processes, and integrated with care and consideration.
A specific tool to provide a specific solution for a specific need within a specific context (I’ll talk more on that in the next entry in this series).
It is also important to try to be clear about what we call “AI”.
An excellent explanation of what constitutes AI is offered by Dr Richard Whittle in his talk from the GenAI Seminar series, facilitated by the Dept of TEL in MTU.
“The term “AI” itself has a variety of different perspectives and emotions and nuances. This is much more akin to something like advanced data analytics.”
This is not just semantics.
The current iteration of AI is always sold as just a shade off the singularity. This allows it to be pitched as the solution to just about any problem (always without specifics, always vaguely invoking “innovation” as the answer).
The hard part in all this is identifying the actual truth about the benefits of AI, divorced from the boosterism from companies like OpenAI and Meta.
The path here sounds clear, but it's not, despite what many might assume. The notion that a kind of advanced analytics engine will, can, and should be everywhere, in everything, replacing everything, is ill-considered at best and a straight-up con at worst.
That won’t stop it being sold as such.
But we can at least try to keep clear eyes and full hearts ourselves.
Life During Wartime - Talking Heads