AI’s unintended consequences
It's hard to predict where AI is going, but I'm pretty confident of this list. From a talk at Data Universe in 2024.
Alistair Croll
Last week, I chaired the inaugural Data Universe conference in New York. Before welcoming some amazing speakers to the keynote stage, I speculated on some of the unintended consequences of abundant generative AI. David Boyle was kind enough to record a transcript, so I went back through it to extract some key takeaways. And no, I didn’t use an AI to do so. ;-)
Until recently, computers mostly did what we told them to do. Even the earliest forms of AI were programmed to follow rules. If you knew what you wanted from the computer, it did its best to give it to you—even if it ended up with a mistake or a bug. Let’s call this deterministic computing.
By contrast, AI—specifically, generative AI and Large Language Models—extracts the mechanism of thought from billions of answers, and produces unexpected things. It basically makes stuff up. We give it a prompt or some input, and it spits out images, text, or even code that wasn’t there before. It’s not always good or accurate, but these generative AI models are getting better at an astonishing rate. Call this nondeterministic computing.
Soon, we’ll take these two modes for granted, just like we do search or mapping. What fascinates me are the second-order consequences of this technology going mainstream, which we’re only starting to think about. With that in mind, here are some predictions you can probably act on immediately.
Humans assume liability for AI
AIs get to do the fun jobs (art, prose, and so on) precisely because those jobs don’t have to be correct. Humans will sign off on whatever must be correct. We’ll get much smarter about false positives and false negatives. Expect liability, insurance, and anything else that captures risk or decides negligence to undergo big changes in the coming year.
Machines prompt humans
We already let algorithms decide what we should watch next, which messages matter most, or which tasks we have to do. In business, we should expect an AI assistant telling us what we should be working on, and completing those tasks while we’re still figuring out what to do. As prompts give way to anticipation, AI agents will show more initiative, surveilling what you’re working on and offering to pitch in where they can.
Reconsider how you assign value to tasks
When governments review grant applications, they assume that if someone created a 150-page document, it represents effort and value. What happens when that’s effortless? Every organization treats time spent or work produced as a proxy for quality. We’ll need to look at every process in our org, then ask whether generative AI changes the time- or work-to-value metric.
This is a widespread issue: Are you paying artists per concept generated? Does your law firm charge a human hourly rate for work that can now be done automatically?
Every qualitative metric becomes quantifiable
In analytics, quantities are easy to analyze. If you can count, average, or plot something, it’s quantitative. But there’s plenty of qualitative information we can’t easily analyze, from open-ended survey questions to the look on a customer’s face to how energetically an employee completes a task.
Now that AI makes it cheap to analyze every frame of a video or every audio recording, those qualitative things suddenly become quantifiable (for better or worse). Metrics that were once subjective (like how happy restaurant customers are, and whether patrons tip more when servers smile) will now be trackable and measurable.
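To make the idea concrete, here’s a minimal sketch of turning qualitative feedback into a number. The keyword scorer below is a hypothetical stand-in for whatever model you’d actually use; the point is that free-text survey answers become a trackable metric.

```python
# Hypothetical keyword lists standing in for a real sentiment model.
POSITIVE = {"great", "friendly", "delicious", "fast"}
NEGATIVE = {"slow", "cold", "rude", "bland"}

def satisfaction_score(comment: str) -> int:
    """Collapse an open-ended comment into a single signed score."""
    words = set(comment.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

comments = [
    "great food, friendly server",
    "slow service and a cold entree",
]
scores = [satisfaction_score(c) for c in comments]
print(scores)  # [2, -2]
```

Once qualitative input is a number, it can be averaged, plotted, and compared across locations or time, which is exactly the shift the paragraph above describes.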
We’re losing our sensemaking abilities
We’re already struggling with fake news and generated disinformation (search for the Liar’s Dividend for a taste of what’s to come). But it’s not just falsehood that worries me; it’s context distortion.
Rather than look at information and consult the surrounding metadata and context to understand what’s happening, we’ll just ask an AI. And it’ll help us confirm our own biases (which David McRaney walked us through on April 11) so we won’t even want to find contradictory viewpoints. Decayed critical thinking skills and the ability to understand what’s real will be huge problems that undermine our ability to govern ourselves.
Everything may be decided by (benevolent?) AI overlords
The idea that AIs could save us from ourselves is tempting.
But what powers do we give these AIs, and at what cost? Every algorithm has an objective function—the “good outcome” for which the AI is optimizing. On social media, that might be engagement; on an e-commerce site, it might be shopping cart size. But who decides what “good” is when it comes to society as a whole? Protip: Go play Universal Paperclips for a couple of hours.
I promise it will change you.
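A toy illustration of the objective-function point above, with hypothetical data and scoring: a recommender that optimizes purely for engagement will happily surface outrage over substance, because nothing else counts toward its definition of “good.”

```python
# Hypothetical content catalog.
posts = [
    {"title": "calm explainer",   "clicks": 120, "outrage": 0.1},
    {"title": "nuanced analysis", "clicks": 90,  "outrage": 0.2},
    {"title": "furious hot take", "clicks": 400, "outrage": 0.9},
]

def engagement(post):
    # The objective function: the "good outcome" is clicks, full stop.
    return post["clicks"]

ranked = sorted(posts, key=engagement, reverse=True)
print([p["title"] for p in ranked])
# → ['furious hot take', 'calm explainer', 'nuanced analysis']
```

The hot take wins not because anyone chose outrage, but because the objective never asked about anything except engagement. Deciding what goes in that one function is the whole ballgame.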
Billing goes from per-user or per-month to per-transaction
SaaS software is usually billed on a per-seat, per-month basis. Google’s Paige Bailey recorded her screen as she searched for a house on Zillow; once she uploaded the video, Google’s LLM generated a script in a programming language. If automating the use of a SaaS is this easy, what happens when you give an AI your username and password and set it loose? The usage economics of a SaaS product change completely when one user account eats up 10 times the resources of all other users combined.
Software vendors will have little choice but to start charging for consumption. You might not try to trick a human, but you’ll game an AI: Humans behave differently when they know they’re dealing with a machine. How does your company’s AI model handle someone offering to buy a product at a crazy discount? How does a self-driving car respond when attacked? Humans can anticipate and react to this creatively, but there’s no human training model that can serve as a precedent for a car being set on fire by haters.
The rise of ephemeral micro-apps
Since non-deterministic software (the stuff AI generates) can write deterministic software (the stuff we’re used to), when we have a small task we’d once have done by hand, we’ll instead create a custom-made micro-app to automate it.
Think of macros in word processors and spreadsheets—but written by explaining the task in simple words or a screen recording. Where once software was written by the IT department, and then software developers, now we’ll see user-created software, with all the complexities and problems that entails.
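Here’s a sketch of the kind of ephemeral micro-app an AI might generate from a one-line description such as “prefix every .txt report in this folder with today’s date.” The function name and directory layout are hypothetical; the point is that this is disposable, single-purpose code, written to be used once and thrown away.

```python
import datetime
import pathlib

def prefix_reports_with_date(folder: str) -> list[str]:
    """Rename every .txt file in `folder` to start with today's ISO date."""
    today = datetime.date.today().isoformat()
    renamed = []
    for path in sorted(pathlib.Path(folder).glob("*.txt")):
        target = path.with_name(f"{today}-{path.name}")
        path.rename(target)
        renamed.append(target.name)
    return renamed
```

Nobody reviews this code, versions it, or maintains it; it exists for one afternoon, which is both the appeal and the problem of user-created software.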
Non-GPT is the new non-GMO
While generative results are a good starting point, there’s still demand for the human touch.
It’s far from perfect, but improving very quickly
When David Boyle initially sent me the transcript of my talk, I was excited to throw it into an LLM and generate a blog post. It seemed appropriate, after all, to walk the walk. But the results were disappointing.
Some of the summary missed the point, and somehow, it felt like pablum and platitudes. So I wrote this myself. I suspect we’ll see a backlash against generative content in the coming months, with some brands promising not to rely on AI, or to be transparent about its use.
The current wave of AI (non-deterministic computing) isn’t a panacea for business or a cure-all for society. But it is going to make some expensive things cheap, and some cheap things expensive. As we recalibrate for this new reality, it will fundamentally change how we, as a species, function. I’ve been lucky enough to chat with many of the people creating these technologies, and last week was a great opportunity to speculate on where some of those changes might take us.