Why Big Business Has Little Interest in True Artificial Intelligence
Fortunes are pouring into a technology with severe limitations. What happens when Big Tech wants its money back? It’s time to take The Sniff Test.
Man’s Best Friend
“Look, look, it’s right there, you daft dog.”
I see people in the park pointing at rubber balls. The finger vibrates while the dog watches it, head tilting curiously from side to side. Humans anthropomorphise: we project human behaviour onto animals. Now we do it with AI.
“Put your hands together. Like this.”
The parents mime catching the ball. They end up all but dropping it into the toddler’s cupped hands. Everyone claps and the child drops the ball.
Humans are hard-wired to acquire knowledge. In our formative years, we learn to look away from the pointing finger and how a ball arcs through the air. Much of our fundamental reasoning is acquired by the age of four.
The ARC Prize for artificial general intelligence (AGI) is almost five years old. To date, no entry has scored even half marks. You can try the tests yourself; young children can solve them.
Dogs have specific knowledge in their genetic code that results in a set of skills. Humans have an extra ability to create knowledge and expand their skills. AI may be trained to execute to high levels of achievement. AGI would be able to learn what it has not been taught.
Human-like Intelligence
The founder of the ARC prize is researcher François Chollet. He says that AI cannot solve his puzzles because it has not seen them before. The website provides examples to help train models, but the exam versions are novel.
David Deutsch argues there is nothing in the laws of physics to prevent machines from reasoning. But if we don’t know how reasoning works, we cannot code it. Those claiming AGI is a few years away assume machines will evolve humanity’s ability to create. But they cannot say how.
Chollet’s criticism is about the way we are developing AI. There is a fascination with large language models (LLMs) to the exclusion of most other technologies. He expects his tests to be solved by a chain of different models.
The proponents of LLMs push back that the machines are getting better. Even if they don’t learn like humans, delivering the same results is indistinguishable from human intelligence; therefore, the argument goes, it is such intelligence. Chollet disagrees.
LLMs are memorising. They are given all the answers and just have to retrieve them. Even for an unseen maths question, it’s a case of remembering the theory and inserting the numbers given. But isn’t this what schoolchildren do?
A week ago, Google announced that its AI had achieved silver-medal standard at the Maths Olympiad, a competition in which pre-college students solve six hard problems. Each is worth seven points, and the AI’s perfect answers to four questions scored 28 points, one below the gold-medal level.
The AI combines a pre-trained language model with a reinforcement-learning algorithm. The first is an LLM; the second is akin to trial and error, with rewards from a teacher for making progress. Google calls it a formal approach to reasoning.
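The trial-and-error idea can be caricatured in a few lines. This is a toy sketch, not Google’s system: the step vocabulary, the target, and the reward rule are all invented for illustration. A “prover” proposes steps, a verifier rewards any step that makes progress towards a target, and the policy shifts weight towards rewarded choices.

```python
import random

# Toy reinforcement-style loop (all details invented for illustration):
# candidate "proof steps" are integers; a step is rewarded when it moves
# a running total closer to the target, and its selection weight grows.
STEPS = [-3, -1, 1, 3]
TARGET = 10

def train(episodes=2000, lr=0.1, seed=0):
    rng = random.Random(seed)
    weights = {s: 1.0 for s in STEPS}   # unnormalised preferences
    for _ in range(episodes):
        total = 0
        for _ in range(10):             # ten steps per episode
            choices = list(weights)
            step = rng.choices(choices, [weights[c] for c in choices])[0]
            before = abs(TARGET - total)
            total += step
            # reward progress towards the target, penalise regress
            reward = 1.0 if abs(TARGET - total) < before else -0.5
            weights[step] = max(0.01, weights[step] + lr * reward)
    return weights

weights = train()
```

After training, the policy ends up preferring the positive steps, because they are the ones the verifier rewards most often. The point of the sketch is the loop, not the maths: no answer key is consulted, only a signal saying “closer” or “further”.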
There is nothing novel about Maths Olympiad problems: no new solutions are developed. Learning to be as smart as smart college entrants is an achievement, but it is not artificial general intelligence. Chollet points to AI’s failure to master his tests as proof that AGI neither exists nor is getting much closer.
But this is not why AI is being developed.
Battle for Supremacy
There are three stages in any technology revolution. The first is the tools, the second the models and the third the applications. It’s a gold rush.
The first fortunes are made selling digging tools. The second from mining gold, but the lasting legacies are created by investing the proceeds.
Nvidia makes the tools to make AI. OpenAI, Google and Anthropic AI lead the race to develop the best models. What these models will be used for, and the fortunes that derive from those applications, are as yet unknown.
Meta, the owner of Facebook and WhatsApp, made Mark Zuckerberg’s fortune from internet applications. He wants to do the same with artificial intelligence, with one big difference.
Zuckerberg resents Apple, which controls how much he can make from iPhone users. He does not want to be beholden to the best AI model the way he is to Apple.
To that end, Meta released Llama 3, an LLM with capabilities on a par with the latest versions of ChatGPT, Gemini and Claude. The difference is that it is free to use and to build upon. The more people adopt it, the better it becomes, and the more people use Meta’s network.
The Information reports that OpenAI is on track to lose $5 billion this year, despite the rapid adoption of ChatGPT models. Microsoft owns 49% of those losses, and its share price flirted with disappointment this week when quarterly results failed to live up to the AI hype. Meta may have made recouping that money much harder.
Meta makes almost all of its $35 billion quarterly revenue from advertising. Microsoft makes almost twice as much, mostly from selling cloud-based services to companies and individuals. It also has advertising revenue and the fast-growing Xbox division, but overall it sits in stage two of the cycle, enabling others to do things with the technology.
Google makes as much from advertising as Microsoft makes in total. Most of this is from search. That is the prize that LLMs are pursuing. Any pretence at developing AGI for the good of mankind is marketing spin.
So what will the AI applications be that house the adverts of the future?
A Cure for Loneliness
Our understanding of happiness has been turned on its head since 2011. The coming of age of the first generation raised on smartphones changes everything.
In 2020, David Blanchflower published a paper about happiness in 145 countries. He described a U-shaped curve: happiness high until around 30 years old, falling to a low around 50, and picking back up again in retirement.
Now he has released another study showing a steady deterioration in young people’s well-being relative to older people, starting in 2011. The results are replicated across 34 countries and worsened in places such as the UK during Covid.
Deteriorating trends in mental health pre-date smartphones. Smaller family sizes, loss of community activities, and economic hardship may all cause social isolation. Quick fixes such as drugs and smartphones exacerbate the problems rather than cause them. Nonetheless, things are getting worse.
Unhappiness is not caused by a phone, but interaction with phone software may worsen it. Many of the interactions are designed to generate advertising dollars. Is there much hope that AI changes this?
150 million people have sent over 10 billion messages to Snapchat’s My AI. The company promotes it with the phrase “Test it out and just have fun”. What could be more fun than an imaginary friend?
How about a necklace? This is how Friend, which is not imaginary, seeks to solve the loneliness epidemic.
LLMs collate consensus from their training data. Hence the concern that human bias in the data is replicated in models. Fine-tuning introduces new data to retrain models. This keeps the core understanding of the world, but tailors it for specific situations.
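The fine-tuning idea can be shown with a deliberately crude stand-in for a model. This is a sketch under invented assumptions: the “model” here is nothing more than word frequencies, and the corpora and blending weight are made up. Real fine-tuning adjusts neural network parameters, but the shape of the operation is similar: the base knowledge is kept, and the new data tilts the model towards the specific situation.

```python
from collections import Counter

# Toy stand-in for a model: word frequencies from general text.
def train_counts(text):
    return Counter(text.lower().split())

# "Fine-tune" by blending in up-weighted domain data, keeping the base.
def fine_tune(base, domain, weight=5.0):
    tuned = Counter(base)
    for word, n in domain.items():
        tuned[word] += weight * n
    return tuned

base = train_counts("the cat sat on the mat the dog ran")
domain = train_counts("creatine protein caffeine protein")
tuned = fine_tune(base, domain)
```

The tuned counts still contain everything the base knew, but the domain vocabulary now carries far more weight, which is the trade-off fine-tuning makes: general understanding preserved, specific behaviour sharpened.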
There is a consensus about health advice. It evolves more slowly than science and remains based on the average human. Over the decades we’ve been advised to avoid sugar, avoid fat, eat more protein and avoid red meat.
I had a DNA test earlier this year. I am hypersensitive to caffeine and sugar and impervious to creatine. I’ve cut back my coffee consumption and abandoned hopes of a short cut to a gym-bod.
I refer to machine friends as social AI. Meta has its version, built with Llama 3. Where will social AI get its information to talk to the people it befriends? If it’s from consensus about the average human, it may be inappropriate. If it sympathises with our deepest personal preferences, it may be even more so. Depressed people don’t need their emotions reinforced.
There is also the issue of how we interact with machines. Online surveys deliver more extreme views than phone calls. Here are expectations about inflation in America. The middle of opinion, shown on the left, is unchanged. The average, shown on the right, is pulled up in online surveys by the extent of extreme opinions.
The tendency for a minority to express stronger opinions online may explain the growing polarisation of electorates. The question is whether we are our true selves in digital surveys, or when talking with humans.
In social situations people fear criticism more than rejection. This causes moderation of behaviour in a group. Only our closest companions may know our true selves.
Social AI offers this close companionship. But human relations are two way. They fall apart due to disagreements and we learn the type of friend who is best for us. It’s called growing up.
Lasting relationships have give and take. Social AI seems all give to me, the friend that appears to be a pushover. The black comedy Saltburn is about how the tables are turned in such a relationship. I see echoes of the creepy lead character as I learn about social AI.
A New Religion
Education, and in particular the sciences, overturned the doctrines of the church. Now it assumes many of the same rituals. Society rewards those who conform rather than challenge understanding. Most education is a memory test, combined with an ability to apply frameworks from one situation to another.
In this way education reinforces consensus, at least until doctorate level when you are expected to come up with something new. Here you break the rules you’ve learned, but most people are long gone before this point. They just have the rules.
LLMs mimic education. That’s why they are getting better at exams where the answers are known in advance. It’s also why they are not making much of a dent in AGI. They can neither reason nor create.
Human teachers are biased and have favourites. AI need not be, although social AI is supposed to be on your side. There is a role for AI in improving today’s education. But a greater emphasis on creativity and reasoning would serve children better.
Once you have your education, it’s time to apply for a job. Your CV will be judged by a machine, so you learn how to write for one. Once more, conformity is valued over creativity.
Many jobs require conformity to a process. Yet these are the jobs that AI is learning to do. By interacting with humans and speeding them towards conformity, it may accelerate the obsolescence of many workers.
The creative human voices have always been on the edges of society. They learn the rules and reject them when inventing. The evolution of Picasso’s self portraits illustrates this.
Technology hides productive creativity on the fringes of searches and data retrieval. It does this while provoking more extreme views from those who shed inhibition in the absence of direct human contact. Meanwhile loneliness feeds on itself. An AI app that provides less reason to engage with humans is not going to fix this.
Social progressives dream of the state supplanting the family in raising children. Conservatives recoil at the horror. Technology is a tool for pushing the beliefs of whoever controls it. For now, it’s not in the hands of politicians. It’s within the grasp of a man who wants to sell adverts on applications.