AI will take your job, but won’t take over the world
Investing In and With Artificial Intelligence
Your job is at risk: If it can be written down, it can be automated
Learning to learn: AI learns what humans already know
A question of intelligence: “It works because it works” is a common investment thesis
Quants and Black Swans: GPT-4 is not truly intelligent
Bad Actors: Regulating AI will increase the risks from its use
Investing with AI: ChatGPT magnifies your strengths and weaknesses
Investing in AI: You most likely already do
Applying the Sniff Test: Why AI takes your job, but does not take over the world
My daughter used to ask what I thought she should be when she grew up. When she was younger she knew and simply wanted to correct me. As she grew older, I told her that most of the jobs she could do were yet to be invented.
It wasn’t always like this. Historically, children did the same work as their parents, using tools and skills handed down from one generation to the next. Change was slow, and economic and population growth were limited as a result.
Technology changed this. The application of new knowledge using energy transformed the pace at which humans developed. The creation of new knowledge is what makes humans unique in the animal kingdom and, as far as we know, throughout the universe.
Hence the fascination with whether we can create something more powerful than us with artificial intelligence.
Your job is at risk
How would you build a self-driving car? You might assemble a team of top engineers and drivers and have them share everything they know. You would then write computer code to do all of those things and run your car as if it were permanently in the hands of the world’s best driver.
This is not a good way to build one. Any list comes to an end, which means items will be missing. Maybe no one thought of what to do when skidding on black ice during a hurricane. Perhaps they ignored the problem of whether to hit three drunks who jumped in front of you, when the alternatives were to swerve into a wall or a bus queue of schoolchildren.
Self-driving software instead learns from human experience, analyzing countless hours of actual driving and determining what works best. It is better to gather more examples of driving than to keep trying to refine the code by hand: just watch the video, which is why Tesla records everything you do.
Your job is probably a lot simpler than driving, which means it isn’t safe. Analysts at Goldman Sachs reckon 300 million roles, or roughly 1 in 11 jobs on the planet, could be fully automated. The most common assumption is that lower-paying clerical and data-analysis jobs will be first to go, but creative types, such as artists and writers, are increasingly at risk.
This is for the same reason that self-driving is possible after studying the driver and learning from the outcomes. Now machines consume novels, works of art, academic texts and computer code, and reproduce them to a better than average level. Suddenly it seems that every job is on the line.
Learning to Learn
Traditionally there are three ways to learn. You can be instructed in what to do, as you are at school; you can be given the tools and left to get on with it; or there can be a combination of the two, where you are rewarded for right answers and punished for wrong ones. Artificial intelligence is developed in these ways, using what are called supervised, unsupervised and reinforcement learning.
Now there is another way, self-supervised learning, where the machine creates its own training signal from the data itself, for example by predicting the next word in a sentence, and applies what it learns to new situations. This underpins generative artificial intelligence, which builds models that can be applied to many tasks. It is how ChatGPT and the more powerful GPT-4 work, and the ‘G’ in both names stands for generative.
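To make the idea concrete, here is a deliberately tiny sketch in Python of what it means for the data to supervise itself: the ‘label’ for each word is simply the word that follows it, so no human labelling is required. The text and function names are invented for illustration; GPT-4 does something vastly more sophisticated with neural networks, but the principle of learning from the structure of the data itself is the same.

```python
from collections import Counter, defaultdict

# Toy self-supervised learning: the "label" for each word is simply the word
# that follows it in the text, so the data provides its own supervision.
text = "the cat sat on the mat because the cat was tired"
words = text.split()

# Count how often each word is followed by each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most common continuation seen during 'training'."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat' -- the most frequent next word in the sample
```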
To emphasize this step change, consider creating intelligence by perfecting every task that a human can do, one by one in separate robots or machines, and then connecting them. This would require huge databases, energy and computing power, and retraining the machine every time we wanted a new task. Software that can learn does not need this huge infrastructure, because when it needs to know something, it learns it.
GPT-4 works with extremely large amounts of data and can search the internet and use other tools to fill gaps in its knowledge. We measure its success, however, by judging the outcome, not through a deep understanding of why it produces its results. We do not fully know how it learns, and researchers at Microsoft concede that the real test of understanding, the creation of new knowledge, is currently out of reach.
The creators of GPT-4 have designed a way for their machine to learn as much existing knowledge as is out there. Yet they have not made the jump to human creativity and the ability to generate new knowledge. This is not because it is impossible, but because we do not yet know how: we do not properly understand how humans do it themselves.
A Question of Intelligence
People will argue that robots that can do anything humans can do are already intelligent. After all, if a robot can be seen to do whatever a human can do, then what is the difference between the two? If humans are intelligent, then the robots must be too.
The argument explains today very well, but it cannot explain tomorrow. And there is always a tomorrow. While humans can lead by creating new knowledge, the machine will only follow once the knowledge is documented.
A description of something is not the same as understanding why it happens. To argue that something works because it works is to accept that might not always be the case. Here there are strong parallels with investing.
Quants and Black Swans
Different types of investors use observation and experience to forecast the future. The most common approach is technical analysis, which is looked down on by those involved in behavioural economics and quantitative analysis, despite the three having similar roots. All three learn from experience and make recommendations about the future based on what happened in the past.
The most honest forecasts use a probabilistic set of outcomes, meaning there is no certainty, though some things are more likely than others. Even then, there are unknown outcomes that catch us out, known as Black Swans: until someone recorded seeing a black swan, it was assumed that every swan was white.
Quantitative analysis involves large amounts of number crunching to detect patterns that are invisible to the human eye, increasingly using machine learning to perform far more calculations than conventional code allows. It is a rule of thumb that a freshly observed pattern conveys an investing advantage only for a finite time, before being spotted and copied by others.
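For a flavour of the number crunching involved, the sketch below tests one of the simplest patterns imaginable, a moving-average crossover, on made-up prices. It is illustrative only; real quantitative strategies are far more elaborate, which is precisely why any freshly found edge gets copied away so quickly.

```python
import numpy as np

# Illustrative only: a moving-average crossover on invented prices, one of the
# most basic "patterns" a quantitative analyst might test.
prices = np.array([100, 101, 103, 102, 105, 107, 106, 108, 110, 109, 111, 112], dtype=float)

def moving_average(series, window):
    """Simple trailing average over the given window."""
    return np.convolve(series, np.ones(window) / window, mode="valid")

short_ma = moving_average(prices, 3)
long_ma = moving_average(prices, 6)

# Align the two averages and flag periods where the short average sits above the
# long one: a crude momentum signal that stops working once widely known.
offset = len(short_ma) - len(long_ma)
signal = short_ma[offset:] > long_ma
print(signal)
```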
Behavioural economics deploys psychological concepts, such as loss aversion and bias, to explain why humans make investment decisions that are less than ideal. It is an attempt to explain the world as it is, rather than as academic theory says it should be. The concept of bounded rationality, which argues that limits on our information and foresight cause imperfect decision making, has been explored at length by Daniel Kahneman.
Kahneman’s signature work is “Thinking, Fast and Slow”. Humans have a fast, gut reaction to everyday occurrences that has its origins in our ancestry, and we must work hard to make more considered, slow judgements. GPT-4 makes similarly fast decisions about the next word in a sentence, and future development of the program is intended to add a layer of slow thinking to oversee the thought process.
In the absence of slow thinking, GPT-4 makes mistakes and delivers falsehoods, although impressively few. Without advanced control of the direction of an argument it is not truly intelligent, merely putting one foot in front of another until it reaches a destination. Without control there is no sense of purpose, which is why artificial intelligence is not about to break out and take over the world.
The analogy of single steps is apt because GPT-4 is trained using ‘gradient descent’. Picture descending a hill in heavy mist, able to see only a short distance in front of you. One way to get down is to take the steepest route visible, which most of the time will deliver you to the bottom and only occasionally over a cliff edge.
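For readers who want to see the misty hill in code, here is a minimal sketch of gradient descent on a simple curve. It is nothing like the scale of GPT-4’s training, just the bare idea: repeatedly step in the direction of steepest descent until you stop going downhill.

```python
# Minimal gradient descent: walk downhill on the curve y = (x - 3)^2.
def gradient(x):
    # The derivative of (x - 3)^2 is 2 * (x - 3): the local slope of the hill.
    return 2 * (x - 3)

x = 10.0             # start somewhere on the hillside
learning_rate = 0.1  # stride length; too large and you overshoot the valley

for _ in range(50):
    x -= learning_rate * gradient(x)  # step in the direction of steepest descent

print(round(x, 3))  # approximately 3.0, the bottom of this particular hill
```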
Bad Actors
Arguing that artificial intelligence is easily smart enough to do your nine-to-five, but lacks the sense of purpose to enslave you, is not to say it can’t be used for evil ends. The media are alive with warnings of false narratives, lies and deceits, rendered convincing by AI software designed to fool us. If these regularly disrupt share prices, we can expect a new class of securities fraud to be laid at the door of the creators of these fakes.
Before then, there are calls to regulate AI and slow its development in case some untold disaster ensues. Some of these calls come from people who believe that AI will spontaneously make the leap to knowledge creation, an event called the singularity, without anyone understanding how this would happen and hence how to program for it. Others make the call because they fear bad actors using the technology, rather than the technology itself.
Throughout history the pessimists seem to outnumber the optimists. Whether a soothsayer warning of the wrath of the gods, an 18th century cleric calculating that the world would run short of food, or the opponents of AI, the prophets of doom do not account for new knowledge creation. The way to control something is to understand it and the path to understanding is best travelled as rapidly as possible.
It is not unusual for incumbents in an industry to plead for regulation. This is nothing more than pulling up the ladder behind you, to prevent others from reaching your level. This usually comes with the argument that they are the good guys, but action is needed now because the bad guys are coming.
Politicians are listening because they love to regulate. The number of rules is proportional to political power and the ability to tax and fine those who break them. Yet regulations can only be applied within a country or economic region, and will do nothing to halt progress on AI elsewhere.
New knowledge is the black swan. The best way to find it is to go out and look for it. The alternative is to cower at home, slowing progress and worrying about what you do not understand.
Investing with AI
We used to have a dog that watched the TV. He loved Crufts, the annual pedigree canine competition, and his favourite film was ‘Dances With Wolves’. He would almost levitate when Kevin Costner rode across the screen.
The same sense of wonder at the unexpected behaviour of a pet is seen in the excitement of parents with each new action of their baby. Experiences that are new to us are delightful. This does not, unfortunately, make them unique or even interesting to others.
The internet is awash with people telling tales of amazement at the output from ChatGPT. This includes advisers using the software to replicate their investment process. What they are in fact uncovering is that their favourite theory is common knowledge, and that prompts designed to deliver specific answers will do just that.
There are quant investors doing sophisticated number crunching on computers running 24 hours a day who still struggle to beat the market. There are others looking for patterns of investor behaviour that can be picked off and front-run. The chances are that your sophisticated fundamental theory of why certain stocks should outperform is well known and already discounted in share prices, and when you go to buy, someone sees you coming.
ChatGPT is not going to fix this. More likely it will magnify it, because someone smart will get hold of the search records and work out which stocks ChatGPT investors are most likely to buy. They will buy them first, pushing up the price, before selling them to you.
This is the problem with humans learning from artificial intelligence. The software is very good at uncovering information that you did not know, but someone else does. As it cannot create new knowledge, by definition everything it tells you is already known by other people.
If you do have a favourite way of investing and are going to use it to figure out which stocks to buy, then AI software should speed the process up for you. It is better to pay for GPT-4, however, than to use the free ChatGPT, and you will need to keep upgrading as the programs improve. If things work out, congratulations; if you underperform, remember it was your thesis that failed rather than the software.
Investing in AI
Passive investment means buying the shares on a list, for example the S&P 500 in the US or FTSE 100 in the UK, and being happy to match the market. Active investment means narrowing the focus to try and pick winners. A lot of active investment is driven by narratives about the latest and greatest investment theme.
Thematic investing works best when economic policies are supportive, because the extra money available magnifies trends. The dotcom boom at the end of the last century was spectacular partly because of the money pumped into the financial system over fears of the Y2K bug in computer software. More recently, rallies in cryptocurrencies and bankrupt companies benefited from handouts aimed at alleviating the pain of the pandemic.
Some narratives are short-lived. Covid vaccines were developed using new techniques that will have lasting benefits for the manufacture of medicines, yet Pfizer’s share price is now little changed from the beginning of 2019.
Other narratives fade away because they are about technologies that are destined to become ubiquitous. Most dotcom technology companies failed, but every company today has a website and Walmart is still the world’s largest retailer. If AI is destined to take most of our jobs, then it is going to benefit the majority of companies and your long term investment strategy can reflect this.
The development of AI is mostly about the availability of data. This is most plentiful in China and the US, and it is from there that the winners will most likely emerge. Europe, by contrast, has fewer cutting-edge technology companies and greater restrictions on data access, due in large part to privacy laws.
Most cutting-edge development is done away from the mainstream, by smaller and start-up companies that are difficult for ordinary investors to access. Larger companies that are leaders in AI, such as Microsoft and Alphabet, the parent company of Google, earn most of their profits and cash from businesses that have nothing to do with AI. Nevertheless, the AI narrative has already pushed a group of the largest US technology companies to their highest valuation yet relative to the rest of the market.
There is nothing about AI that should change the way you invest. The more time you have to actively manage your money, the narrower the focus of your investments can be. Remember, however, that it is the professional investors, operating at far greater size and speed than you, who will dictate when the investment narrative changes.
Over time, very few fund managers outperform the general market once their fees have been deducted. The less frequently you review your holdings, the more general an investment you should hold: funds rather than stocks, broad technology funds over AI-themed investments, and lower-cost passive funds over higher-priced active funds.
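To see why fees matter so much, here is a simple compounding calculation. The return and fee figures are assumptions chosen for illustration, not forecasts.

```python
# Illustrative fee drag: the same 7% gross annual return compounded for 30 years,
# with a 0.1% passive fee versus a 1.5% active fee (assumed figures, not forecasts).
def final_value(start, gross_return, annual_fee, years):
    value = start
    for _ in range(years):
        value *= 1 + gross_return - annual_fee
    return value

passive = final_value(10_000, 0.07, 0.001, 30)
active = final_value(10_000, 0.07, 0.015, 30)
print(f"Passive fund: {passive:,.0f}")  # roughly 74,000
print(f"Active fund:  {active:,.0f}")   # roughly 50,000
```

On these assumptions the active manager has to beat the market by about 1.4 percentage points a year, every year, just to keep pace with the cheaper fund.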
Applying the Sniff Test
If you work with a computer most of the time, your job is likely to be automated. If you crunch numbers, read or write text, or analyze patterns in digital images, your boss is already planning to replace you. The further your work is from the keyboard, the longer your current job should last.
Automation is not a bad thing. Technology replaced most agricultural workers and many industrial jobs, but new opportunities were generated by the greater wealth that resulted from automation. Individually there will be painful adjustments, but collectively progress will be made.
Machines becoming better at human tasks does not mean that computing power will grow to the point where genuine intelligence, the ability to create new knowledge, spontaneously erupts in the software. To date, all software is programmed to perform a task, and while we might not know exactly how neural networks deliver their results, our lack of understanding does not make the machine intelligent.
Most likely, we will need to understand how humans generate new knowledge before we can program computers to do the same. Until we can, our software is not going rogue and deciding to do anything other than what it is programmed to do.
Machine learning and artificial intelligence are not new tools in investing. ChatGPT and similar software may be the first time that AI tools have been available to the masses, but that does not create a level playing field with the professionals. If you use AI to select investments, the results will depend on your prompts and reflect common knowledge.
To invest in AI today is to be fashionably late to the party. Most of us have little access to the cutting-edge technologies developed by companies that are not listed on stock markets, and the large technology companies available to buy directly or via funds offer diluted exposure to AI. The more time you spend monitoring your investments, the narrower your focus can potentially be, but the evidence suggests you are unlikely to beat the market over the long term.
Will AI take your job – yes, given time.
Will AI take over the world – not until our knowledge of knowledge is vastly improved.
Should you use ChatGPT to invest – only to speed up research you would gladly do without it.
Should you invest in AI – if you own stocks, directly, in a fund or a pension, you already do.
Notes
S&P 500
The Standard and Poor’s 500 Index is a collection of roughly 500 of the largest companies listed on US stock markets.
You cannot buy an index, but you can buy funds that track it, aiming to deliver the same price performance as the weighted average performance of the 500 constituents. The S&P 500 has more money tracking it than any other index in the world.
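As a toy example of what ‘weighted average performance’ means, consider an index of just three invented companies; the names and numbers below are made up for illustration.

```python
# Toy cap-weighted index of three invented companies: each stock's return counts
# in proportion to its share of the index's total market value.
market_caps = {"Alpha Corp": 600, "Beta Inc": 300, "Gamma Ltd": 100}   # billions, invented
returns = {"Alpha Corp": 0.10, "Beta Inc": -0.05, "Gamma Ltd": 0.20}   # one-year returns, invented

total_cap = sum(market_caps.values())
index_return = sum(market_caps[name] / total_cap * returns[name] for name in market_caps)
print(f"Index return: {index_return:.1%}")  # 6.5%
```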
Active fund managers will benchmark the performance of their funds. The benchmark might be the S&P 500 or another index that aligns with the mandate of the fund.
FTSE 100
The FTSE (pronounced “footsie”) 100 was previously a joint venture between the Financial Times and the London Stock Exchange (LSE). Today it is owned by the LSE and just known by the acronym FTSE. It is an index of the largest 100 companies listed in the UK, with one or two exceptions due to other inclusion criteria.