Newmann: Is AI A1?
“The gods value morals alone; they have paid no compliments to intellect, nor offered it a single reward.” — Mark Twain
Used to be that one of Homo sapiens’ greatest bragging rights was having an opposable thumb. “Hey, look what I can grasp!” Sadly, that boast went downhill when we belatedly discovered that chimps — and a host of other primates — also have them. So much for that differentiation. But, surely, there have to be other factors in the next degree of separation from all the other “animals.”
Well, we seem to have a few attributes that most other species don’t, such as complex reasoning ability, collaboration in large groups, cooperative social interactions, intricate language skills … and probably a few more “human” qualities to add to that list. And then there’s intelligence. Which probably encompasses all of the above attributes. And a bunch more.
Our intellect has been responsible for a multitude of great achievements and, of course, for some not-so-great stuff. It’s kept us going through everything that nature — and, sometimes, our fellow humans — could throw at us. Somehow, we, as Homo sapiens, have persevered for the past 300,000 years or so on the merits of our own brains, our own intelligence.
And then along comes … artificial intelligence. And the irony is that this substitute has been created by our own intellect. Kind of makes you wonder: if we’re so smart, why do we need artificial enhancement?
Well, AI can possibly create all sorts of positives. It can scale down the timelines for medical and technological breakthroughs. It may be able to increase productivity since it never gets tired. AI also can analyze data nonstop. There are certainly advantages.
But you also might want to consider a Brookings Institution article from 2018 extolling the virtues of AI, which described such systems as “machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment and intention.” Hmmm … does this mean that the machines also incorporate ethics and morality into their algorithms?
Here’s where things can get a bit dicey. Especially since AI has become much more advanced (evolved?) since the Brookings article. There have been claims of a sentient AI, which, if true, could allow the machines to sense and feel their surroundings (so far, it seems the claims are … still just claims). But, according to a recent article in the New York Times, there is a new AI system that “is coming up with humanlike answers and ideas that weren’t programmed into it” (thus differentiating AI from most politicians).
The rapid development of AI is concerning enough that Geoffrey Hinton, a pioneering AI researcher at Google (also known as the “Godfather of AI”), recently quit the company because of concerns about the pace and scope of AI. “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Sam Altman, the CEO of OpenAI, the company behind ChatGPT, echoed Hinton’s concerns. “I’d be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid,” he said. “The current worries that I have are that there are going to be disinformation problems or economic shocks, or something else at a level far beyond anything we’re prepared for.” He also, in recent testimony before Congress, cited the need for regulation.
So where does it all lead? Well, good question. And it’s a bit like the paradox of atomic energy. Regulated and used for constructive purposes, it can be beneficial. But in some other circumstances … maybe not such a good outcome.
As Twain mentioned, the gods value morals alone. Intellect? Not so much. And AI? Well, that’s out of the realm of the gods — and maybe humans, too.