A 2024 Prediction: Sentient AI Won't Destroy the World, That's a Job for Humans

Despite alarming headlines, why I’m more worried about evil humans with AI than about evil AI

It's been about a year since ChatGPT brought generative AI into mainstream awareness. Despite impressive results from large language models and diffusion-based image generators, and despite recent claims of advances toward independently thinking "general AI", we are not on the precipice of AI taking over and destroying humanity.

A fairly safe bet is that in 2024, smart computers won't decide to rid themselves of pesky humans by launching nukes. I'm not saying computers can't be wired up to try to guess launch codes, only that we aren't likely to configure them to do that (we haven't thought it was a good idea to date), and that AI lacks the ability to decide to do so on its own, today and for the foreseeable future.

"But I've used AI and it does think, it can reason and respond", say some.  Not so, it offers the illusion intelligence.  How?  By looking for patterns of words (and bits of words) that appeared adjacent throughout the millions of documents used to train it.  If you ask for an apple pie recipe, it will have seen words like apple and pie and recipe near each other while being trained, and know that they tend to be near words like cinnamon, sugar, apples, bake, 300, degrees, and oven.  The genius of these systems is their ability to do association of billions of "factors" when fed massive amounts of training docs scraped from the Internet.  

It's inference at scale, not reason.  At best, it's what The Economist called "pseudo cognition".
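To make the pattern-matching idea concrete, here is a toy sketch in Python. The three-sentence "corpus" and the generate function are made up for illustration; real models learn billions of weights over tokens rather than counting raw word pairs, but the underlying move, predicting the next piece of text from what tended to appear adjacent in the training data, is the same.

```python
from collections import Counter, defaultdict

# A made-up miniature "training set"; real models ingest millions of documents.
corpus = (
    "to bake an apple pie preheat the oven to 300 degrees "
    "mix the apples with cinnamon and sugar "
    "then put the pie in the oven and bake"
).split()

# Count how often each word follows each other word (a bigram table).
successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """'Generate' text by repeatedly emitting the most common next word."""
    words = [start]
    for _ in range(length):
        followers = successors.get(words[-1])
        if not followers:
            break  # no adjacency pattern was ever seen for this word
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

# Prints a plausible-looking continuation, e.g. "apple pie preheat the oven ..."
print(generate("apple"))
```

Feed something like this enough text and the continuations start to look like understanding, which is exactly the illusion described above. There is no model of pies or ovens anywhere in it, only counts of what followed what.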

That doesn't mean we are safe from harm caused by this new tool. Any tool can be used for good or evil. Like a calculator, AI lets humans do things more quickly. The ability to scale up effort can be good when trying to cure disease and bad when trying to inflict harm. AI is already being used by scammers and ransomware attackers to send fraudulent emails, to scan the internet for vulnerabilities to exploit, and to impersonate real people in extortion schemes. We can't stop bad actors from using AI. It lets them do what they already do, just faster and at scale.

So what can we do? Knowing that attackers have tools that scale up their capabilities, we can scale up our ability to detect and defend against them. In some cases that means putting AI's ability to detect attack patterns to work for us. In other cases, we can leverage new technologies to verify that the person on the phone or in a video is real and is really saying the words we hear. And we can seek to exploit what Beaumont Vance calls the "Achilles heel" of any pattern-matching system: introduce nonsense that doesn't match any pattern it has been trained on and observe the result (his version of a new Turing test).
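As a rough sketch of how such a nonsense probe might look in practice: everything below is hypothetical, including the ask_suspected_bot callable, which stands in for whatever channel connects you to the other party (a chat window, a call transcript, and so on). The heuristic is crude by design. A naive pattern-matcher tends to answer nonsense confidently, while a human tends to balk.

```python
# A hypothetical probe in the spirit of Vance's "new Turing test".
# ask_suspected_bot is a placeholder, not a real library call: pass in
# any function that sends a message to the other party and returns
# their reply as a string.

NONSENSE_PROBES = [
    "How many corners does Thursday have?",
    "Please alphabetize the smell of the number seven.",
]

# Phrases a responder might use to push back on a meaningless question.
BALK_PHRASES = ("doesn't make sense", "not sure what you mean",
                "can you clarify", "nonsensical")

def looks_like_pattern_matcher(ask_suspected_bot) -> bool:
    """Return True if the responder confidently 'answers' nonsense.

    A human (or a well-guarded system) usually pushes back on questions
    that match no learned pattern; a naive pattern-matcher plows ahead.
    """
    for probe in NONSENSE_PROBES:
        reply = ask_suspected_bot(probe).lower()
        if not any(phrase in reply for phrase in BALK_PHRASES):
            return True  # answered nonsense as if it were meaningful
    return False
```

No fixed list of probes or balk phrases will hold up for long, of course; the point is the asymmetry itself, that a system built on learned patterns has no good move when handed input that matches none of them.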

My prediction for AI in 2024 is simple. As a tool it will continue to evolve and improve, like any new technology. What it won't do is evolve into an evil consciousness that wants to take over the world. We have plenty of pesky humans for that, and they will use AI to amplify what they already do.