Column: AI… Not that Smart
By Terry Stone of Goldendale
Stone created this paper for Michael Sequeira, Emeritus Professor of Sciences
Here is where I stand as a former computer professional: Artificial Intelligence (AI) is not intelligence, at least not based on any of the multitude of iterations of the Turing Test, which is my go-to for making that determination.
(The Turing Test is a test for intelligence in a computer, requiring that a human being should be unable to distinguish the machine from another human being by using the replies to questions put to both.)
In fact, AI is such an imprecise and even misleading term that I usually cringe when I hear it. AI can sometimes fool humans, but it has never been shown to be self-aware, which is a fundamental part of intelligence as far as I’m concerned. AI is highly prone to error and to making up results (euphemistically and laughably called hallucinations—it can’t possibly just be “wrong”).
Interestingly enough, compelling and accurate portrayals of what true AI would look like can be found in science fiction literature. A humorous romp with an early idea of AI is told in the book The Adolescence of P-1 by Thomas J. Ryan. In that novel, a piece of software becomes sentient and inhabits every networked computer in the world to try to preserve itself, with comic and mayhem-filled results.
It even makes friends with the protagonist. In Do Androids Dream of Electric Sheep? by Philip K. Dick—a rather obscure dystopian novel made into the cult-classic Blade Runner films—bounty hunters in a future San Francisco (Los Angeles in the films) interrogate suspects they believe to be rogue and dangerous androids, using a form of Turing Test to trip them up and get them to answer questions in a way no human would. The scenes are portrayed with rather explosive tension in the first Blade Runner movie.
As used in the industry, AI is a loosely defined umbrella term for a whole set of large-language models (LLMs) and numerical-prediction models (NPMs) that tie digital mining of huge datasets to machine learning and digital neural networks. Artificial Intelligence is generally broken into three classes: generative AI, predictive AI, and content-moderation AI. The last two I regard, for the most part, as snake oil. They simply don't work, yet they have been visited on us at every turn in our online and in-person interactions, often against our will and our personal interests. The only predictive AI models that consistently produce reliable outcomes are weather-forecasting models (NPMs) and models used in chaos theory, such as those that describe how a natural gas flame ignites and propagates. These models only work well because they have access to databases that have been growing for at least the last 30 years (and continue to be added to daily), they have been run continuously on supercomputers, and, most importantly, they have had their datasets cleaned of corrupted, irrelevant, or misleading information by human intervenors.
And that remains the largest problem with every other class of AI in use today: LLM datasets are too general and too large to be cleaned. No AI is yet capable of cleaning its own datasets, which is why reliability remains an issue. This traces back to a related vulnerability: no AI can even link to other datasets and determine how clean they are. LLMs are, ironically, too simple; they mostly use algorithms to predict what the most common response in their datasets might be for a given string of words or numbers, but no thinking, in the human sense, goes into the process. The AI has no stake in the outcome, so it cannot self-police; it doesn't know how. In fact, most AI coders will confess that beyond a certain point they have no idea what their AI will do and have not figured out how to make that determination. It feels as inscrutable (impossible to understand) as an Egyptian god.
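To make that concrete, here is a minimal sketch in Python of the frequency-counting idea behind that kind of prediction. The toy corpus is entirely made up for illustration; real LLMs use neural networks trained on vastly larger datasets, but the underlying move is still statistical pattern-matching rather than thought.

```python
from collections import Counter, defaultdict

# Toy stand-in for a training dataset (entirely made up for illustration).
corpus = (
    "the cat sat on the mat . the cat sat on the rug . "
    "the cat chased the dog . the dog chased the ball ."
).split()

# Count how often each word follows a given two-word context.
continuations = defaultdict(Counter)
for a, b, nxt in zip(corpus, corpus[1:], corpus[2:]):
    continuations[(a, b)][nxt] += 1

def predict_next(context):
    """Return the continuation seen most often after this context,
    or None if the context never appeared -- the model is simply stumped."""
    counts = continuations.get(tuple(context[-2:]))
    return counts.most_common(1)[0][0] if counts else None

print(predict_next(["the", "cat"]))   # -> 'sat' (the most common follower in the corpus)
print(predict_next(["the", "moon"]))  # -> None (no "thinking" steps in to fill the gap)
```

Nothing in that sketch understands cats or mats; it simply reports what followed those words most often in its data, and it is helpless the moment a context falls outside that data.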
In addition, all AI software produces results that are often based upon the biases of its coders and developers, making those results not just unpredictable and unreliable but skewed (or, at the very least, lacking a certain veracity) for or against whatever bias was introduced in programming. Even AI available to the public is still held in private trust, so getting a look under the hood is not possible; corporate developers hide behind trade-secret laws. Studies have been conducted privately on these predictive AI models, ostensibly looking for biases in the algorithms, but further science, for the most part, cannot be conducted on those results because they are all kept behind impenetrable trade-secret walls.
AI as an academic concept goes back to 1955, the year I was born. Computer and cognitive scientist John McCarthy, whom I've met, coined the term "artificial intelligence" and rather loosely laid out the structure of the field as a discipline. He worked with the first crude AI language, IPL (Information Processing Language). From 1974 to 1976, I was privileged to spend some time in college with LISP (LISt Processing), an AI programming language McCarthy later developed as IPL's cousin. This language pioneered many of the higher-order ideas of computer processing that underlie AI today, such as tree data structures, large data modeling, and recursion. The computers I worked on back then were slow, building-sized mainframes that sucked electricity like a firehose in reverse and had a fraction of the processing power and data storage found in a modern smartphone, yet the results of querying software written in LISP were uncanny, even human-like. With the advent of hybrid circuitry created for the space program, miniaturization in consumer electronics ensued. Over 50 years, mega-server farms with enormous parallel-processing capabilities and spectacularly large data storage capacities, along with a proliferation of petabit bandwidth, finally came together and made it practical to use LLMs to serve up AI results to end users. That, as we have discussed, creates another problem: these server farms consume orders of magnitude more power than our largest mainframes ever would have. That has been detrimental to our electrical grids and to the environment, so AI has residual negative effects never contemplated by its creators.
As a tool, AI shows incredible promise, especially in picking out low-level data in noisy datasets. CT scans and mammography images run through AI software have found hidden cancers that human eyes missed, with very few false results. As reporting in The Wall Street Journal has shown, AI remains a helper rather than a replacement for humans, but it is starting to coalesce into more accurate, more useful products. Even with the advent of powerful AI algorithms like transformers, the technology remains primarily imitative of what it finds in its databases. Faced suddenly with unique information missing from their datasets, all AIs are either stumped or produce hallucinations in response to queries on the unfamiliar data, depending on how they are written. An AI that's only seen a rubber ball will still call a marble it's never seen a "rubber ball." Or it will say it can't identify it. AI thus can't reliably extrapolate from the qualitative to the quantitative—and vice-versa—without first having every conceivable iteration of an example in the known universe in a given dataset. That's a huge handicap.
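A minimal sketch of that failure mode, again in Python and with entirely made-up object features, shows the two responses described above: a nearest-match model either forces an unfamiliar object into a known label or gives up entirely.

```python
import math

# Hypothetical training examples: (diameter_cm, squishiness), label.
# This "model" has only ever seen balls.
known_objects = [
    ((6.0, 0.9), "rubber ball"),
    ((6.5, 0.8), "rubber ball"),
    ((6.7, 0.6), "tennis ball"),
]

def classify(features, reject_distance=None):
    """Label an object by its nearest known example.

    Without a reject threshold, anything unfamiliar is forced into a known
    label (a marble becomes a "rubber ball"); with one, the model simply
    gives up. Neither response extrapolates beyond the training data."""
    dist, label = min(
        (math.dist(features, known), lbl) for known, lbl in known_objects
    )
    if reject_distance is not None and dist > reject_distance:
        return "can't identify it"
    return label

marble = (1.5, 0.0)               # small and hard: nothing like the training data
print(classify(marble))           # -> 'rubber ball' (confidently wrong)
print(classify(marble, 1.0))      # -> "can't identify it" (stumped)
```

Either way, nothing new is inferred; the model can only reshuffle what it has already seen.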
Computer scientists continue to strive to use AI to explore aspects of reality we could never know otherwise. That aspect of the concept is pretty exciting. As the most brilliant Unix guru I’ve ever known (with the unlikely name of Barry Vines) once told me, “Reality is just a convenient measure of complexity.”
If you'd like a good scientific reference on AI that is intellectually and practically accessible to the layman, I recommend you purchase AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference by Arvind Narayanan and Sayash Kapoor, just published in 2024. It has exhaustive endnotes and a comprehensive index that can take you down any number of AI rabbit trails, enough to inspire an aspiring author's science-fiction writing.