
Who Thought of Making Intelligence Artificial?


Certain archetypes of quasi-thinking robots reside stubbornly in the collective consciousness of most Americans. Art of every kind lodges images in people’s memories that persist for a lifetime, even when the source is trite or visibly silly. Comedy, cartoons, great movies, and certain moments in the world of sports produce iconic notions and lasting images. Consider the great 1968 cinematic production of 2001: A Space Odyssey and its infamous supercomputer. The heuristically programmed algorithmic computer had each letter of its acronym, HAL, falling one letter alphabetically before the corresponding letter of IBM, the acronym of International Business Machines, the dominant force in all computing of the 1960s.

HAL, the fictional computer, somehow gained sentience and, in a somewhat undefined manner, threatened the emotional security of movie patrons throughout the late ’60s, all by manipulating their own imaginations. Of course, in 2025, we have minicomputers in our pockets that pack processing power many orders of magnitude greater than Sir Arthur Charles Clarke could have conceived of, but his fictional imagery is still palpable. The ominous Richard Strauss fanfare Stanley Kubrick embedded in multiple generations’ minds triggered apprehension about the onset of incomprehensible technology.

A few years before, in 1962, William Hanna and Joseph Barbera created the adorable cartoon world of the short-lived animated series The Jetsons. Despite a seemingly middle-class lifestyle, George, the futuristic patriarch of the Jetson family, owned a flying car along with a series of automated and convenient devices that happily performed mundane tasks. It’s not clear if the Internet of Things was available to George. The most compelling of those devices were a lovable robot maid named Rosie and a robot dog named Astro. Both were, to some extent, de facto family members.

The series only lasted three seasons, but after 60 years, it remains a fixture as an archetype in our collective memories.

Clarke, who wrote The Sentinel, the short story on which 2001: A Space Odyssey was based, and director Kubrick, who cowrote the screenplay with him, likely had sufficient egos to recognize the classic they had created. However, I doubt Hanna or Barbera had any clue that what they had produced would outlast the average mid-level animated series.

And then came society-changing iconic figures. Nope, not the Beatles. I’m referring to Star Trek’s Captain Kirk and Mr. Spock, who 10 years later had to compete with both sides of The Force for dominance in the ferocious imaginations of the audiences of Western civilization.

How did all these outrageous notions emerge? Did 2001: A Space Odyssey come from the overstimulated dreams of Sir Arthur? Was it Gene Roddenberry, George Lucas, or even Steven Spielberg? Nope. In 1956, a Dartmouth College professor named John McCarthy coined the term “artificial intelligence.” Around 1960, he drew on the existing programming language Information Processing Language (IPL) and a mathematical formalism called the lambda calculus to create the first true programming language for AI.

It was forever to be known as List Processing, or LISP. It’s fair to take the historical position that the inception of AI goes back to the famed codebreaker and British mathematician Alan Turing or to other contributors. However, I’m not trying to assign credit as much as describe the evolution of a mindset: a way of thinking that is innately connected to today’s perceptions of AI and that led Pat Gelsinger, the former CEO of both VMware and Intel, to call it a “50-year overnight success story.”

Now, after this stroll down computer memory lane, I’m going to declare, in an entirely humble manner, that “artificial intelligence” is nothing but a buzz phrase: clever marketing combined with arithmetic. It is also true that the world’s richest man, the omnipresent Elon Musk, has declared that AI “has only a 20% chance of annihilation.”

He was referring to humanity’s possible downfall. Musk may have his ashes scattered on Mars one day, but I think he has seen a few too many reruns of Star Trek.

It is true that humanity doesn’t understand the potential of AI. I would also suggest that 10,000 generations ago, humanity didn’t understand the potential of fire either, and I think that worked out well. But it is sobering to deliberate on how little we comprehend of so much and how often our predictions for the future are completely mistaken. Consider that most computer scientists in the early days of AI would have confidently predicted that George Jetson’s maid Rosie would become a reality well before a computer could play chess with sufficient skill to defeat a true chess master. However, in 1997, the IBM supercomputer Deep Blue defeated Garry Kasparov, possibly history’s greatest chess player. Rosie and Astro still await invention, although Astro is close.

Less than 10 years ago, Silicon Valley, its behemoths, and aspiring startups were all frantically working to attain superiority in the discipline of “Big Data,” which somewhat morphed into “machine learning” and “deep learning.” All these ideas branched into more sophisticated technologies. High-performance computing (HPC) serendipitously combined with video game processors, known as graphics processing units (GPUs), and the history of computing took a quantum leap, maybe multiple quanta, even before quantum computing.

At this point in my discourse, I should probably get to the point. The present technology of AI is the logical extension of machine learning wrapped in a marketing term. Machine learning is a method by which tremendously powerful processors filter and distribute huge amounts of data, arithmetically iterating, interpolating, and extrapolating over it with near-infinite precision to produce inferences that satisfy the question presented. The results arrive very fast, and the models are “trained” over many iterations, becoming more effective with each pass.
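To ground that description, here is a minimal sketch in Python (standard library only; the toy data, learning rate, and iteration count are invented purely for illustration) of the iterate-and-adjust loop described above: a model’s parameters are nudged over many passes until its inferences fit the data.

```python
# A minimal sketch of iterative "training": fit y = w*x + b to toy data
# by repeatedly nudging w and b to reduce the squared error.
# The data, learning rate, and iteration count are invented for illustration.

data = [(1.0, 3.1), (2.0, 5.0), (3.0, 6.9), (4.0, 9.2)]  # (x, y) pairs

w, b = 0.0, 0.0          # model parameters, initially naive
learning_rate = 0.01

for step in range(5000):                 # many iterations, each a small correction
    grad_w = grad_b = 0.0
    for x, y in data:
        error = (w * x + b) - y          # how far the current inference misses
        grad_w += 2 * error * x          # gradient of squared error w.r.t. w
        grad_b += 2 * error              # gradient of squared error w.r.t. b
    w -= learning_rate * grad_w / len(data)
    b -= learning_rate * grad_b / len(data)

print(f"learned w={w:.2f}, b={b:.2f}")   # inference: predict y for a new x as w*x + b
```

Scale that loop up to billions of parameters and trillions of data points on racks of GPUs, and you have the arithmetic behind the buzz phrase.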

In the case of deep learning, different models are used. Sometimes the approach is an artificial neural network loosely patterned on how a human brain works, executed many orders of magnitude faster than a brain can work.
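As a loose sketch of what such a network looks like in code (again plain Python, with a toy task and an arbitrary layer size chosen only for illustration), the example below learns XOR, a pattern no single linear rule can capture, by passing examples forward through a layer of simulated “neurons” and nudging every weight backward:

```python
import math, random

# A toy artificial neural network: 2 inputs -> 4 hidden "neurons" -> 1 output.
# Layer width, learning rate, and epoch count are illustrative choices.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
H = 4                                                    # hidden-layer width
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b_h = [0.0] * H
w_o = [random.uniform(-1, 1) for _ in range(H)]
b_o = 0.0
lr = 0.5
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR truth table

def forward(x):
    h = [sigmoid(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j]) for j in range(H)]
    out = sigmoid(sum(w_o[j] * h[j] for j in range(H)) + b_o)
    return h, out

for epoch in range(10000):
    for x, target in data:
        h, out = forward(x)                       # forward pass
        d_out = (out - target) * out * (1 - out)  # output error signal
        for j in range(H):                        # backward pass: adjust every weight
            d_h = d_out * w_o[j] * h[j] * (1 - h[j])
            w_o[j] -= lr * d_out * h[j]
            w_h[j][0] -= lr * d_h * x[0]
            w_h[j][1] -= lr * d_h * x[1]
            b_h[j] -= lr * d_h
        b_o -= lr * d_out

for x, target in data:
    print(x, target, round(forward(x)[1], 2))     # learned outputs settle near 0 or 1
```

The “deep” in deep learning simply means stacking many such layers, with the same forward-and-backward arithmetic repeated at enormous scale.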

As every company in the world anxiously tries to maintain a competitive edge, a free enterprise system will provide many options to meet that demand. Some of those options are superior to others. In many other articles over the last decade, I’ve discussed the six cities of Silicon Valley and their dominance in all things associated with technology.

And these usual suspects have not disappointed us. They will continue to provide us with numerous approaches, products, and services, all falling under the menacing and seemingly omniscient umbrella of AI. Innovation is not in short supply when a Silicon Valley software engineer can command $10 million in restricted stock units vesting over four years. Nope, I didn’t make that up.

It is, however, important to maintain common sense as the world accelerates through this evolution in computing faster than a test driver on a Tesla runway. At least until now, AI has just been math powered by incredibly powerful processors and fed by huge amounts of data. Frightening problems such as fabricated videos and pictures exist, and incredibly difficult questions surrounding intellectual property and copyright remain. So far, AI can’t answer those questions. I’ve been told that I’m underestimating the dire peril these issues present and that I need to awaken to these dangers.

However, I see a much greater question on the not-so-distant horizon. Wake me up when we get to artificial intuition.
