Over the last two to three years, headlines have steadily been telling us that we should fear AI: from the Washington Post telling us that the AI humans should fear is already here (here), to the BBC fearmongering that AI could lead to [our] extinction (here). Whenever malevolent AI is even hinted at, comments about Skynet from the Terminator movies, or the Sentient Programs that created The Matrix, are never far away. But is the AI we have right now anywhere near ending life as we know it, or should we be more afraid of something else?
I’ve tended to adopt the position that almost all of what we are hearing about AI, and pretty much anything else in the mainstream media, is hyperbolic nonsense.
Last week I was asked to substitute for Professor Norman Fenton on a panel of experts at one of Gunner Cooke LLP’s seminars, this one specifically on AI and Policy. As part of my input to the discussion I described the six possible scenarios I see for AI in the next 100 years, which I will now outline here for you…
Before I begin I should mention that I have heard various commentators describe, in different forms, some version of the first five scenarios. Given the more rational way he presents them, and how recently I heard him do so, I initially borrow heavily from Bret Weinstein. However, I feel he leaves out one glaringly obvious scenario, one I happen to have seen in my own research on ML and AI use in healthcare. I present that scenario last.
The Unlikely Scenarios
The first two scenarios are those where the AI itself is considered to be malevolent. They are also what Bret Weinstein describes as extremely unlikely, and with that threat assessment I generally agree. While everyone is happily spinning the wheels on ChatGPT and thinking about ways to mislead the AI, and mislead each other, in real terms I believe we are still quite far away from ever being able to create the Skynet or Matrix AI - the truly sentient digital being that decides we are a virus on the planet and sets about turning us into blood and bone fertiliser.
1. The Truly Malevolent AI
The truly malevolent AI is the stuff of nightmares. It attains true sentience and, on doing so, decides either that the best way to protect humans is to protect us from ourselves, or that we are an inferior intelligence and the best way to deal with us is extermination. HAL 9000; Skynet; the Sentient Programs in The Matrix; VIKI in I, Robot; the Hosts in Westworld. At some point they all go rogue and decide humans have got to go.
2. The Pertinacious AI
The pertinacious AI is one to which we give a seemingly safe and simple instruction, and which obstinately carries out that instruction even when it eventually comes to be at odds with our safety. Nick Bostrom describes this as the control problem. Joshua Gans calls it The Paperclip Apocalypse. He describes it like this:
Suppose that someone programs and switches on an AI that has the goal of producing paperclips. The AI is given the ability to learn, so that it can invent ways to achieve its goal better. As the AI is super-intelligent, if there is a way of turning something into paperclips, it will find it. It will want to secure resources for that purpose. The AI is single-minded and more ingenious than any person, so it will appropriate resources from all other activities. Soon, the world will be inundated with paperclips.
It gets worse. We might want to stop this AI. But it is single-minded and would realise that this would subvert its goal. Consequently, the AI would become focussed on its own survival. It is fighting humans for resources, but now it will want to fight humans because they are a threat (think The Terminator).
This AI is much smarter than us, so it is likely to win that battle. We have a situation in which an engineer has switched on an AI for a simple task but, because the AI expanded its capabilities through its capacity for self-improvement, it has innovated to better produce paperclips, and developed power to appropriate the resources it needs, and ultimately to preserve its own existence.
Put simply, the pertinacious AI carries on with its task with such single-minded determination that it either prevents us from turning it off or consumes so many input resources that it eventually has to see us as a potential resource for continuing to fulfil its goal. Eventually, we too become paperclips.
The Likely Scenarios
The following scenarios are those that I and other commentators consider more likely - indeed, some are arguably happening now, even with the fledgling AI we currently have.
3. The Malevolent AI User
In this scenario the AI is programmed or employed by people who are not morally constrained and who seek power or profit at our expense. There is no denying that people will employ such potentially powerful technology to gain power over the rest of us, or to derive unjustified income from us. The Australian Government employed a system that could more accurately be described as a cross between a rules engine and very basic automated machine learning. While I no longer recall the project’s codename while it was being built, most Australians (and certainly those who have ever received a public welfare benefit) will know the name it came to be known by - Robodebt. While not a true AI, Robodebt has been derided as an example of what we can expect when a capricious or technologically ignorant government messes with AI. It made decisions based on sometimes sparse or improperly aggregated information and, on the basis of poorly drafted rules and algorithms, automatically raised debts for benefit repayment that sometimes exceeded the amount of benefit the individual had received in the first place. As has been seen before when governments get something wrong (think: the Horizon Post Office scandal in the UK), the Australian government denied for several years that there was anything wrong with the system - at least until things got so bad, and so public, that it had to admit the system wasn’t working.
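To make the "improperly aggregated information" point concrete, here is a minimal sketch of the income-averaging logic that was widely reported to underlie Robodebt. The rates, thresholds, and function names are hypothetical and not taken from the actual system; only the averaging step reflects the reported behaviour.

```python
# Illustrative sketch only: the figures and names below are hypothetical, but the
# income-averaging step mirrors what was publicly reported about Robodebt.

FORTNIGHTS_PER_YEAR = 26
INCOME_FREE_AREA = 150.0   # hypothetical fortnightly income a person may earn with no reduction
TAPER_RATE = 0.5           # hypothetical reduction in benefit per dollar earned above that amount

def entitlement(fortnightly_income: float, base_rate: float = 500.0) -> float:
    """Benefit payable for one fortnight, given the income earned in that fortnight."""
    excess = max(0.0, fortnightly_income - INCOME_FREE_AREA)
    return max(0.0, base_rate - TAPER_RATE * excess)

def averaged_debt(annual_income: float, paid_each_fortnight: list[float]) -> float:
    """Smear annual income evenly across all fortnights, then compare with what was paid.

    This is the improper aggregation: someone who earned everything in a few busy
    fortnights, and was correctly paid the full rate in the quiet ones, now looks
    overpaid in every quiet fortnight, so a phantom debt accumulates.
    """
    averaged_income = annual_income / FORTNIGHTS_PER_YEAR
    assumed_entitlement = entitlement(averaged_income)
    return sum(max(0.0, paid - assumed_entitlement) for paid in paid_each_fortnight)

# Example: a casual worker earns $13,000 across 5 busy fortnights and nothing otherwise.
payments = [entitlement(2600.0)] * 5 + [entitlement(0.0)] * 21  # what was correctly paid
print(averaged_debt(13_000.0, payments))                        # a non-zero "debt" appears
```

With these made-up numbers, every individual fortnight was paid correctly, yet the averaging step manufactures a debt of several thousand dollars purely as an artefact of the aggregation.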
4. AI-induced Derangement
Jaron Lanier argues that the true danger from AI is that it sends humans insane. He suggests that we will not act with sufficient self-interest or awareness, that the technology will be used to manipulate and mislead us and that, in essence, we will die from AI-induced psychosis.
5. Economic Disruption Through AI
Just like other disruptive technologies (e.g. the automobile, the aeroplane, the computer, the internet), AI will disrupt many different industries. However, unlike anything we have seen before, some say AI could suddenly displace whole segments of the population who are no longer needed in the roles for which they have trained. Many will be too old, unable, or unprepared to retrain for another role, and AI will have done them out of employment. We are already seeing systems like the digital receptionists and chatbots of companies like Soul Machines from Auckland, New Zealand, that are capable of replacing human receptionists. Such digital avatars never tire of answering calls, never get frustrated or annoyed with the caller, and the number of concurrent calls they can answer is limited only by the amount of compute power you make available to their AI engine.
The Missing Scenario
As I said earlier, there is one inescapable scenario we have already seen at least once before.
6. Surrendering to the AI
In this scenario humans simply cede all decision-making power to the AI - because whatever the AI says to do must be right. Around ten years ago a commercially available solution, variously described as a diagnostic AI, was introduced into a London hospital. It monitored the signs and symptoms doctors entered into the computerised medical record during a clinic appointment, and in a box on the right side of the screen it provided a list of the most likely diagnoses given the information entered so far. When the probability of a particular diagnosis rose above a predefined threshold, that diagnosis changed colour. The system was intended to be used by doctors to strengthen and validate the diagnosis they had reached based on their own assessment of the patient’s signs and symptoms. Instead, what we saw were young doctors who treated the tool as a game, ceding all decision-making to the computer. They didn’t bother to form a considered diagnostic opinion of their own. Rather, they simply kept entering information from the patient until one of the AI’s diagnoses changed colour, and then treated for that medical condition. This, I believe, is human nature, and could even be described as The Idiocracy Scenario. Unlike the AI-induced Derangement Scenario, which produces madness, the Idiocracy Scenario produces ignorance and apathy.
I don’t know and I don’t care… Because the computer will tell me.
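For readers who want to see the mechanics, here is a hypothetical sketch of the threshold-and-highlight behaviour described above. The diagnoses, probabilities, and the 0.70 threshold are all invented for illustration; nothing here comes from the actual product.

```python
# Hypothetical sketch of a threshold-based diagnosis highlighter.
# All diagnoses, probabilities, and the threshold value are invented.

THRESHOLD = 0.70  # above this, a diagnosis "changes colour" on screen

def rank_diagnoses(probabilities: dict[str, float]) -> list[tuple[str, float, bool]]:
    """Sort candidate diagnoses by probability and flag any that cross the threshold."""
    ranked = sorted(probabilities.items(), key=lambda kv: kv[1], reverse=True)
    return [(dx, p, p >= THRESHOLD) for dx, p in ranked]

# As more signs and symptoms are entered, the probabilities shift. The failure mode
# described above is to keep typing until something gets highlighted.
after_three_symptoms = {"viral URTI": 0.45, "streptococcal pharyngitis": 0.30, "glandular fever": 0.10}
after_five_symptoms  = {"streptococcal pharyngitis": 0.74, "viral URTI": 0.15, "glandular fever": 0.08}

for snapshot in (after_three_symptoms, after_five_symptoms):
    for dx, p, highlighted in rank_diagnoses(snapshot):
        print(f"{dx:28s} {p:4.2f} {'<-- highlighted' if highlighted else ''}")
    print("---")
```

Used as intended, such a display confirms a diagnosis the doctor has already reached; used as a game, the highlighted row becomes the diagnosis.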
I have said for some time that I believed this would be the first scenario we would see from early adoption of AI, and I am sorry to say I have not been disappointed.
Already this month we have seen lawyers in at least three cases present submissions either partially or wholly authored by ChatGPT. In one Federal Court case in New York, ChatGPT suffered from what we call “AI hallucinations” six times. The lawyer instructed ChatGPT to write his submissions, with relevant case references. ChatGPT followed his instruction, except that it made up the cases it cited out of whole cloth. The Federal Court Judge spotted the fake cases and reserved his position on bringing contempt proceedings and sanctions against the lawyer.
In conclusion my position can be summed up in twelve words…
AI is not your enemy. But it is also not your friend.
Good discussion and thanks for posting this.
Agree. I find the name 'AI' misleading. It's a lot of computer power, but labelling it as 'Intelligent' is a bridge too far for me.
The major risk (well, it's happening already) is indeed that mankind will further switch off its own intelligence. One colleague proudly mentioned that his son had AI (ChatGPT) create a school paper. This to me sounds like a dangerous development. After all, AI determines which data it uses as input. The data out there is manipulated and limited, and it seems like an endless cycle. The next round of AI will base its input on data generated by other AI applications.
But this is happening even without 'AI'. Look at academic papers where debatable conclusions are used over and over again, cloning erroneous evidence.
I have been a computer programmer (and software architect) all my life. It strikes me that people with no IT experience are the ones with the highest confidence in 'AI'.
People are not computers and computers are not people.
AI can’t even draw hands and feet correctly. That lack of anatomical comprehension alone disqualifies it from any reliable role in patient care.