What Comes After Peak AI Hype
We're told arresting tech CEOs is for our own protection. Is it? It's time to take The Sniff Test.
Is This Really You?
In the film Total Recall, Arnold Schwarzenegger plays a construction worker who escapes his mundane existence by having false memories of a Mars vacation implanted in his brain. Thereafter we don’t know whether what happens is imagination or reality.
Years ago I read a science fiction story in which human brains were switched off at the age of 25. Peak performance was maintained by switching to an artificial alternative trained from birth to mimic the brain. The story involved a man resisting this dehumanisation, but ended by questioning whether we would be conscious of the switch.
AI is excellent at pattern recognition, which makes it great at understanding our revealed preferences. It promises to be a tool for hyper-personalisation of experiences. Will these be the perfect reflection of personality as in the short story, or corporate controlled as in Total Recall?
A fear of technology is that it is always watching you. The real risk is that we allow it to develop so it is always controlling you.
Peak AI
Sentiment on the US stock market rests in large part on the prospects for Nvidia and its big tech clients. Google is spending $10 billion on more than 400,000 of Nvidia’s new Blackwell GB200 chips. Meta placed a similar order and Microsoft wants up to 65,000 of them operating ChatGPT by early 2025.
Sentiment peaked a while ago, judging by stories mentioning AI on the Bloomberg financial platform.
If the news cycle has peaked, then we’re entering the long period of gradual adoption, when companies figure out how to make a return on the vast sums spent on new technology. The internet developed this way, as did railways over a century before.
Four weeks ago I wrote about how technology pushes creativity to the fringes of society. Words like practical, scalable and accessible become important as firms seek the standardisations that deliver profitability. Technology is a tool for the exercise of power, which makes it political.
The detention of Telegram founder Pavel Durov in Paris has had a sympathetic response. There are genuine fears about ISIS and organised crime using the app for fraud and drug trafficking. The Guardian recommends going much further. French prosecutors may agree judging by claims made against Elon Musk, Logan Paul and JK Rowling.
Why is this happening in Europe? The US Communications Decency Act of 1996 protects social media platforms from responsibility for the content they carry. Europe does not have such protections.
Controlling social media is a bipartisan issue. Trump tried to overturn the act while President, and plenty of Democrats want the ability to censor communication.
This raises a question: if AI personalises your online presence, and big tech or the government censors what you say and do, how will you know the personalisation is a true reflection of you?
An Excellent Meal
Last weekend, Michala and I had the best meal we’ve had for months. The tiny restaurant served dishes inspired by the chefs’ experiences travelling in Italy. The ingredients were sourced in Devon where possible, and the service was a delight.
That weekend I also caught up with an article from the peak of AI hype. Penned by an academic from the Harvard Kennedy School around the theory of informational bottlenecks, it illustrates why academics are not in business. It suggests several future uses for AI.
The first is eating out. Imagine being able to engage with the chef at length in advance and have a meal designed to your tastes. No more disappointing restaurant experiences.
Politics is considered the most important example. While we are too busy to vote in referenda, imagine an AI that knew our preferences and voted on our behalf on any issue. Thereafter, AI will delve into school records to find better fits for job vacancies, and customise the clothes we buy to our precise requirements.
This had me wondering how precise our requirements are. A standardised restaurant experience might be reassuring, but when would we try something new? How much do past preferences dictate the decisions we make?
Cognitive and Collaborative Familiarity
It’s marketing lore that we buy products that are familiar. Psychologists recognise two types of familiarity and our preference depends on personality and situation.
Cognitive familiarity is preferring what we have liked, done or bought before. Think of the Netflix recommendations for what to watch next, based on past choices. The few respondents to my poll on LinkedIn this week all voted that this was how they chose shows.
Collaborative familiarity is preferring recommendations from the crowd. Netflix also offers a top ten of what others are watching. It would not do this if it didn’t work, even if business owners on LinkedIn prefer their own opinion.
When it started streaming, Netflix employed people to label content and create the categories for recommendations. A combination of these presents you with rom-coms about twenty-somethings starring trending actors, if that's your bag.
AI is quick and efficient at such categorisation. Deploying it should mean greater accuracy about how well a show fits our tastes. Netflix's aim is more bingeworthy viewing, and the former CEO only half-joked that his competition was sleep.
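The two modes of familiarity above can be sketched in a few lines of Python. Everything here is hypothetical — the titles, tags, play counts and function names are invented for illustration, not Netflix's actual system — but the contrast is real: cognitive familiarity scores unseen titles by overlap with your own past choices, while collaborative familiarity simply defers to the crowd's most-watched list.

```python
from collections import Counter

# Hypothetical catalogue: each title tagged with genres, plus a
# platform-wide play count standing in for "what others are watching".
CATALOGUE = {
    "Heartstrings": {"rom-com", "twenty-somethings"},
    "Deep Space": {"sci-fi"},
    "Late Shift": {"rom-com", "workplace"},
    "Iron Ledger": {"thriller", "finance"},
}
PLAY_COUNTS = Counter({"Iron Ledger": 900, "Deep Space": 750,
                       "Heartstrings": 400, "Late Shift": 150})

def cognitive(history):
    """Recommend the unseen title whose tags best match past choices."""
    liked_tags = set().union(*(CATALOGUE[t] for t in history))
    unseen = [t for t in CATALOGUE if t not in history]
    return max(unseen, key=lambda t: len(CATALOGUE[t] & liked_tags))

def collaborative(history):
    """Recommend the most-watched title the user hasn't seen: the crowd's pick."""
    return next(t for t, _ in PLAY_COUNTS.most_common() if t not in history)

history = ["Heartstrings"]
print(cognitive(history))      # tag overlap favours the other rom-com
print(collaborative(history))  # the crowd favours the overall top title
```

Note how the two recommenders diverge for the same viewer: the cognitive pick echoes the past, while the collaborative pick ignores it entirely. Real systems blend both signals, which is why the surprise element discussed below matters.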
When streaming for relaxation, I do not want to be challenged. I prefer shows that are familiar to me. But what about restaurants?
We didn’t choose the tiny place on the edge of Dartmoor because it was Italian. It had rave reviews and markets itself as different from the norm. I use recommendations when searching for somewhere to eat and prefer new experiences. I only choose based on the menu if committed to dining in a particular location.
Collaborative familiarity becomes more important when lacking experience. I often buy Amazon’s Choice selection. I appreciate the wisdom of crowds from my stock market days.
Netflix’s marketing is not all about familiarity. A former head of product at the company told me it aims to delight customers in hard-to-copy ways. This element of surprise is what goes missing when recommendations are based only on what we’ve done in the past.
A Limited Intelligence
Part of the enjoyment of a great meal is the delight in discovering new flavour combinations. Often you do not know what you want until your imagination is fired by a description from the waiter. Lots of people ask for recommendations and still get food envy when other dishes arrive. Envy would be all that remained of the experience if your avatar decided in advance what you would eat.
The purpose of voting is to choose leaders. Most of us have little interest in the minutiae of politics and limited knowledge of the issues involved. Democracy is the ability to throw the bad actors out of office, not have individuals make every decision based on past preferences. It is not clear that politics has the informational gap that academics think it does.
What about job applications and the extent to which past performance determines future outcomes? Digital records already wed us to prior misjudgements. Allowing employers to delve deep into ancient records would give a misleading picture of who a person is today.
Barack Obama and Steve Jobs wore the same style of clothes every day to reduce the number of decisions they had to make. That is not going to work for everyone. Most people want to see new designs and become tired of what’s in their wardrobe.
All of this points to the existing limits of AI. New knowledge comes from conjecture, which by definition is forward-looking. Pattern recognition looks backwards and is therefore incapable of knowledge creation. A world where personalised AI makes our decisions for us, for the foreseeable future at least, is a world without progress.
It is also a world without delight. While familiarity is important in marketing and selling, people only buy what’s new when moved to do so. A change of emotion triggers the behaviour: impulsive for small purchases, reflective for large ones. Either way, change is necessary to do something different.
Governing the Ungovernable
Human behaviour is unpredictable. When I turn on Spotify, I skip the recommendations because the tune I want is already in my head. Either that, or I am checking for an update of a favourite podcast. Music depends on mood and AI is a long way from determining my temperament.
Unpredictability is bad for business. A consumer company does well if one quarter of buyers return the next time they buy a similar product. This means three-quarters of sales are new, or people returning after a break. It is expensive to keep attracting these buyers, which is one reason why Google makes a fortune from advertising.
Erratic behaviour is worse for politicians. People lie to pollsters, governments are blindsided by social changes and we generally don’t respond as required. This is the purpose of nudge theory, which creates incentives for preferred behaviours.
But what if politicians could control, rather than just influence, those behaviours? Some scientists estimate humans make over 35,000 decisions a day, many of them automatic and not even registering. You can walk home on autopilot after a few drinks, I am told.
The idea of a brain chip taking over before cognition deteriorates remains a fantasy. But giving people options based on previous preferences is real. This narrowing of choice exploits availability bias: decision-making determined by what little we know, rather than all that is knowable.
There is already significant control over what we see, hear and consequently feel. Social media offers a stand against this, to the degree that it is censorship resistant. One promise of AI is breaking Google’s monopoly on search and opening our access to opinions. But it is the control of search recommendations that’s behind the unprecedented spending on Nvidia’s chips.
To some, censorship resistance is shocking. Imagine what small children may be exposed to. But social media wants you engaged, the way Netflix wants you binge-watching. The only reason you’re seeing right-wing hate content is that you’ve gone looking for it. Kids were mean before the internet, and access to knowledge is not a prize to surrender without a fight.
The Future is Now
Business wants you doing more of the same. It only cares what that is when it might be punished for allowing you to do it. Hence ID checks in bars and advertising restrictions on kids’ television.
Politicians want predictable behaviour. But they also want to control what that is. If they don’t like what you’re seeing or hearing, then they want to censor it.
There is a long history of fascist governments burning books and banning media. Right now it’s the left that has taken up the mantle. It has penetrated western bureaucracies to the extent that it has great control over what kids are taught and how we behave in public. Social media threatens that control.
A preference for censorship is why we are seeing the war on tech. But there is a greater prize from humbling the giants. If politicians control the development of AI, then the possibilities seem endless. Imagine government censoring personal profiles to eliminate what is undesirable.
Manipulation does not require anything as futuristic as brain implants. People choose experiences based on a recommendation. Personal familiarity with what to expect seals the deal.
Much of what is familiar is presented to us as fact online. How often do you check “why am I seeing this ad”? Science fiction is often about a future where technology controls us. Reality is more mundane and already here.