The AI Illusion: Death in an Algorithm

Image Credit: Britannica

I have always been a bit of a techie. Ever since earning the 21 Boy Scout merit badges required to become an Eagle Scout, about seven decades ago, I have been fascinated by tools and their uses. Each merit badge taught me about one technology or another—cooking, knot tying, marksmanship, first aid, camping, lifesaving, swimming, and so on. From an early age I wanted to fly airplanes, build things in my father’s workshop, and race any kind of vehicle I could; the first, of course, was the bicycle.

So far, I have flown airplanes, helicopters, and gliders, piloted sailboats, and raced cars and bicycles. There is nothing like riding your bicycle down a hill as fast as you can make it go, or making an instrument landing ‘to minimums’ in a winter storm. I have built computers, written application software, and designed user interfaces for both education and business. Every one of those ‘devices’ was a tool for accomplishing some purpose or completing some mission. Now I focus on fine woodworking, engaging precise tools and their purpose to achieve an artistic expression of form in the made object.

Every tool has a purpose, the saying goes. But then, once while I was building a house I had designed in part to fulfill a teenage dream of becoming an architect, an old carpenter told me, “Hammers are for putting in screws; screwdrivers are for taking them out.” He was joking, of course. But he also said that anyone could put a pencil to paper, while it is a whole different thing to actually create the structure depicted in a drawing. The truth of that statement was borne out as we worked around flaws in my design. It is not just about the skills in using the tool; it is also about producing something meaningful.

Doing Things with Tools

It is also said that to a chimp with a hammer, everything is a nail. The same can be said of humans. I once saw a video in which some jungle fighters handed a loaded Kalashnikov (AK-47) to a chimp. The chimp waved it around, intermittently releasing bursts of bullets, as the fighters scattered in every direction. A human with a tool s/he does not understand can be just as dangerous.

In many years of studying and practicing research, I noticed that far more people try to fit their information-generating purpose to their favorite research method than fit the proper method to the research purpose. That is when the means become the end. The tool has become part of a mindset that has failed to ground itself in the field of action, which leads to ‘technophilia,’ the obsessive misunderstanding and inappropriate application of technology.

Ignorance and misapplication of “Artificial Intelligence” (AI)

I put quotation marks around “artificial intelligence” because the ‘machine learning’ for which the term became a (very successful) marketing device is not intelligent at all. Machine learning at a much smaller scale has been around for a long time (in the modern history of computers). It is an increasingly complex set of algorithms for recognizing patterns in vast quantities of data. Only the increased capacity of computer processors and memory has made such large-scale data processing operations possible.

There are many useful applications of AI, such as medical diagnostics, large-scale document analysis, financial fraud detection, workflow automation, and robotics. But it is one thing to automate a discrete manufacturing process, and quite another to write an essay for a student. Generative AI in the form of Large Language Models, such as the popular ChatGPT, can produce text in response to a prompt based on pattern recognition applied to huge databases of text. Intelligence involves far more than pattern recognition. (See Jeremy Lent, The Patterning Instinct.)

Early and even recent ChatGPT output seems like human writing on the surface, but the experienced reader can often tell the difference. Recent studies have shown that students who use AI to do their schoolwork actually lose cognitive skills in the process. And what is the purpose of education if not to develop intellectual skills in the student, not just in her/his ‘output’? As large language models improve, their output will become more difficult to distinguish from human writing. But, really, what’s the point?

Bottom line: the value of this, as of any technology, lies in its proper application.

Dangers of Technological Hubris

Whatever else it is, AI is a fad, and a remarkably widespread one at that. Of course, it is also much more than that, since it has penetrated most major commercial and institutional operations. It has penetrated the protocols of military operations as well. In the illegal and devastating war by Trump and Netanyahu on the nation of Iran, the misapplication of AI to decision making in the “kill chain” resulted in the February 28, 2026, strike on the Shajareh Tayyebeh girls’ school in Minab, Iran, killing approximately 165 to 175 people, the majority of them elementary school girls, ages 7 to 12.

Most news outlets left it as an unfortunate, deadly mistake in the ‘fog of war.’ No, it was not nearly so simple. “Project Maven” is the integration of AI into the U.S. military targeting-and-attack process, developed by Palantir, a well-connected AI applications company. That system is how the military could comply with Pete Hegseth’s demand that thousands of targets be identified and attacked quickly. The Project Maven targeting system used erroneous data, and its target identification had no human verification process. Simply looking at satellite imaging data would have identified the target as a school. The old computing admonition “Garbage In, Garbage Out” still applies, but now with far more potential, and real, danger.

For a full discussion of this and related matters of corruption and complicity in war crimes by Silicon Valley billionaires and others in the insane complex that enabled the war on Iran and more, it is important to watch the podcast “Left Hook” with Wajahat Ali and his guest Cy Canterel of “Abstract Machines,” an AI expert with direct experience developing AI systems and with the associated political machinations. Any tool misunderstood and misused can be dangerous. That is why some woodworkers have missing fingers. But the development of increasingly complex AI tools, hardly understood and indiscriminately promoted by their developers to eager customers seeking new powers of surveillance and autonomous weaponry, is far too dangerous in its undiscovered ramifications; these systems have simply gotten out of control. The willingness to turn over life-and-death decision making to autonomous systems is the worst kind of technological hubris imaginable.

