Newsletter 02/11/2023

I Ask the Question.  You decide.
AI — Artificial Intelligence or Abject Idiocy?
Part 1: What is AI? 
Why is AI Such A Good Thing? 
And How Come I Have Such a Mean and Nasty Jones for AI?

Confusions say
When your Karma runs over your neighbor's Dogma
You're going to need a good lawyer

It is certainly the hypothesis among various media of all stripes that the advent of Artificial Intelligence will be either the realization of all humankind's hopes and digital dreams or the final ushering in of the Age of the Robot Overlords.  We humans tend to anthropomorphize otherwise inanimate things and objects in order to come to a better understanding of new inanimate things and objects.  The computer easily lends itself to such an illusion, if for no other reason than that the damn machines can seem so humanlike.  And, as greater computing resources have become available, and as our machines can increasingly accommodate ever growing datasets and increased network bandwidth, the computing experience becomes ever more lifelike.  What ChatGPT, Google's Bard, IBM's Watson, and the other AI models reflect is the ability of web spiders and web scrapers to find, record, and organize heretofore unimaginable amounts of data, and return that data to a computer user in a natural, albeit very artificial sounding, human speech pattern.

Essentially, the AI models are all much like Search, but with greater and different functionality.  Vast amounts of otherwise unrelated data can be "trained" and then purposed and repurposed to perform certain tasks.  According to Stanford University's Center for Research on Foundation Models, these Foundation Models lie at the heart of this shiny new AI technology.  Academics like those at Stanford, who are not bought and sold by their corporate sponsors, understand that this ability to synthesize large, seemingly unrelated datasets into a myriad of different uses and purposes represents both the promise that this new generation of Artificial Intelligence offers and the threat AI poses.  From Stanford's website comes this very prescient observation about the current State of the Art.

— WHAT IS A FOUNDATION MODEL?
In recent years, a new successful paradigm for building AI systems has emerged: Train one model on a huge amount of data and adapt it to many applications. We call such a model a foundation model.

— WHY DO WE CARE? Foundation models (e.g.,
GPT-3) have demonstrated impressive behavior, but can fail unexpectedly, harbor biases, and are poorly understood. Nonetheless, they are being deployed at scale.

It is the GPT-3 (Generative Pre-trained Transformer) Foundation Model in particular that has the Digerati, and just about every other member of the commentariat, all gaga.

As is often the case in the world of IT, few things are ever really new.  Indeed, AI powered Search can be seen as not all that different from Search as we know it.  In practice, it is the outcome of the question asked of the Search technology that differentiates AI from Search as we currently understand Search.  We can all Google up the gazoo.  What is returned from our question asked of Google is what might best be described as an Index of the available information relevant to the question asked.  Search engine results are analogous to the Table of Contents, as well as the Index, of a non-fiction book.  The results from an AI powered Search are more like the prose of the book between the Table of Contents and the Index.  The difference is not that one result is superior to another.  The distinction lies in the purpose the user had in asking the question.
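The Table of Contents analogy can be sketched in a few lines of Python.  This is a toy illustration with invented documents, not how any real search engine works: classic Search builds an inverted index and returns pointers to where the information lives, rather than the prose itself.

```python
# Toy inverted index: classic Search returns *pointers* to documents,
# like a Table of Contents, not an answer written in prose.
documents = {
    "doc1": "the nimda worm spread through email and web servers",
    "doc2": "eliza emulates a rogerian psychotherapist",
    "doc3": "the nimda worm launched denial of service attacks",
}

def build_inverted_index(docs):
    # Map each word to the set of documents that contain it.
    index = {}
    for doc_id, text in docs.items():
        for word in text.split():
            index.setdefault(word, set()).add(doc_id)
    return index

def search(index, query):
    # Intersect the posting lists for every query word and return
    # sorted document IDs -- an index of where to look, nothing more.
    words = query.lower().split()
    hits = set.intersection(*(index.get(w, set()) for w in words))
    return sorted(hits)

index = build_inverted_index(documents)
print(search(index, "nimda worm"))  # ['doc1', 'doc3']
```

An AI powered answer, by contrast, would stitch the contents of those documents into a paragraph of readable prose; the index only tells you where to go read it yourself.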

The one salient fact about computing that users need to understand (besides all the 1s and 0s stuff) is that what the computer actually does is digitize things that already exist; record those now digital things; organize those digital things in a manner requested by a user; and then play that arrangement back.  Artificial Intelligence has been around as long as the computer.  Like computing itself, AI now benefits from the exponential growth in hardware capacity, software functionality, and network bandwidth to such a degree that a level of sophistication and humanlike response is possible whenever a question is posed to the AI technology.  Or so goes the pitch.

Back in the days of MS-DOS, and long before the arrival of the public Internet and widespread computer viruses, a generic DOS diskette often came with some pretty cool things that us Geezers used to call "programs."  One of these little shareware applications I found was the lovely and gentle Digital Psychotherapist, Eliza.  You can still book a free session with Eliza over at Cal State Fullerton.  At least, that's where I found Eliza practicing these days.  "ELIZA is a computer program that emulates a Rogerian psychotherapist." 

Eliza is also a fine example of early Artificial Intelligence and, as such, demonstrates some of the limitations of computing when this AI model was built.  All Eliza really does is play back your recorded text in a different format.  Essentially, Eliza is an early iteration of what we might call today a Chatbot.  (But thankfully Eliza does more than point you to the website.)
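For readers who have never booked a session with the good doctor, the trick can be sketched in a few lines of Python.  These are hypothetical rules in the spirit of ELIZA, not the original script: a rule matches a pattern in the user's statement, first and second person words are swapped, and the user's own words are played back in a new frame.

```python
import re

# Pronoun swaps so the playback reads as a reply (toy subset).
REFLECTIONS = {"my": "your", "your": "my", "i": "you", "am": "are"}

# Hypothetical pattern/response rules, not the original ELIZA script.
RULES = [
    (re.compile(r"my (.+) just (.+)", re.I), "Why do you say your {0} just {1}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
]

def reflect(text):
    # Swap first and second person words, word by word.
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in text.split())

def eliza(statement):
    statement = statement.strip().rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.match(statement)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    # No rule matched: the classic therapist's dodge.
    return "Please tell me more."

print(eliza("My girlfriend's Karma just ran over my Dogma."))
# Why do you say your girlfriend's Karma just ran over your Dogma?
```

Modern chat models still echo the user's framing, but from a vastly larger store of recorded text; Eliza's entire "database" is the handful of rules above, which is why she so often has to fall back on "Please tell me more."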

So I created a little experiment that could visually demonstrate the difference between early AI and that current All the Rage Thing that Microsoft has sunk $10 billion into, ChatGPT.  I fed what I thought was a variation on an obscure statement I had seen and read only once on a bumper sticker, "My Karma ran over your Dogma," into three technologies: Eliza, ChatGPT, and Google Search.  The results of this little exercise surprised me.  I told each of the three technologies the same seemingly nonsensical statement.

My girlfriend's Karma just ran over my Dogma.

Here is my discussion with Eliza.  Eliza is only billed as having "the illusion of intelligence."  Eliza doesn't have enough information in her database to get the joke.  And the joke is really one of the best one-liners of all time, in the opinion of this avowed Marxist (Groucho, not Karlo).  There is, however, something quite human about Eliza's response.  When the poor girl did not have enough information to process the dichotomy inherent in the joke, Eliza resorted to a subtle put down of her user.

Now, ChatGPT got the joke.  Got the joke to such a degree that the response explained the joke fairly well.  The joke is indeed a "play on words."  As presented, the response could well serve as a beginning talking point for discussion in any college level Philosophy class.

The thoroughness of the response above was frankly shocking to me.  Here I have been for years dropping this one-liner among folks who I thought might get the joke, all the while not one person calling me out about it not being original.  "Where did that come from?" I might be asked.  And I would tell the truth with a wry grin: "I read it on a bumper sticker once."  So how on Earth did ChatGPT become so well versed in existential humor?  That was a question for Google.  Who all has indexed my private joke?

Not only did Google deflate my humorous ego by informing me that this quip is now so old as to be cliché, but in true Google fashion, I was also offered the chance to buy a T-Shirt or coffee mug emblazoned with my not so private joke, and an actual copy of the original bumper sticker.  Man, does Reality ever Byte.

Of the three examples, Eliza lacked the capacity to get the joke; Google was best at describing where the joke came from and, maybe more importantly from the viewpoint of Google advertisers, told me "Don't forget to stop by the souvenir stand on your way out."  It was only ChatGPT that, in a most humanlike manner, kind of chuckled at the joke and elaborated on the joke's greater meaning.  If you did not get the joke at first, you might have gotten the word play after ChatGPT explained it.  The joke is philosophical in nature.  So, in this case, I think ChatGPT did the joke the most justice.  Of course, dogs and cats can play with us humans, too.  What about real research?

I asked ChatGPT about a major event in the history of malware, and also a topic that I had thought I knew a great deal about.  I fought the Nimda worm on the front lines of the cyberwar for well over a year beginning in September 2001.  What follows is directly copied and pasted from the chat session.  The purpose here was to perform a deep dive of research on Nimda starting with the chat responses.

Q. What did the Nimda worm do?

A. The Nimda worm was a computer worm that spread rapidly in 2001. It exploited several vulnerabilities in Microsoft software to spread quickly, infecting computers running Microsoft IIS web server, Microsoft Outlook email software, and Microsoft Windows operating systems. Once a computer was infected, the Nimda worm would replicate itself and attempt to spread to other computers. It also made changes to the infected system, such as modifying web pages and creating new files, which could cause the affected computer to crash or run more slowly. In addition to spreading itself, the Nimda worm was also capable of launching denial-of-service (DoS) attacks against websites, which could cause the targeted site to become temporarily unavailable. The worm was also used to download and execute additional malicious software on infected computers. Overall, the Nimda worm was considered to be a major security threat at the time and caused widespread disruption to businesses and individuals. It was one of the first computer worms to utilize multiple methods of replication and attack, making it particularly effective in spreading quickly and causing harm.

Q. What was innovative about the Nimda worm?

A. The Nimda worm, which was released in September 2001, was considered innovative for a few reasons:

Multi-vector attack: Nimda was the first worm that utilized multiple methods of propagation, including email attachments, network shares, and exploiting vulnerabilities in web servers. This made it difficult for anti-virus software to detect and stop the worm.

Fast spread: Nimda was able to spread rapidly, infecting thousands of systems within a matter of hours. This was largely due to its ability to infect both Windows and IIS systems and its use of multiple vectors of attack.

Code obfuscation: Nimda used obfuscation techniques to conceal its code, making it more difficult for anti-virus software to detect and analyze.

Payload: The Nimda worm had a dual payload, causing both a denial-of-service attack on certain websites and modifying web pages to include a link to the worm, which further propagated its spread.

Overall, the Nimda worm was considered a significant threat at the time due to its multi-vector attack, fast spread, and ability to evade detection.

Indeed, Nimda was the granddaddy of all our current blended threats.  If you began with no real knowledge about Nimda, ChatGPT would be an excellent starting point for focused research into Nimda.  Considering how research is performed in the real world, the two technologies, AI and Search, in fact complement one another.

And here is where the turf battles begin.  Both Microsoft and Google will soon implement AI technology into their respective Search engines.  When you ask Google about Nimda in general, Google returns an index of relevant websites based on facts about Nimda.  But a new element now appears on your search results page.  The About Box highlighted in red is an example of how Google will integrate AI technology into its search results.  Here is the index; but over here is a natural language discussion about the topic that you might better comprehend.

What is clear is that AI technology is no more threatening than any other powerful piece of technology.  Like the bone picked up by early humans in the opening scenes of the movie 2001: A Space Odyssey, a primitive tool can be used to help keep the tribe warm and fed, or that primitive technology can be used to beat fellow humans over the head and take their goods.

The totality of the human experience always boils down primarily to three aspects: The Good, The Bad, and The Ugly.  Herein we have discussed what AI is and what its promises for Good are.  AI also has a Bad side that is inherent in the technology itself.  AI also has the potential to get very Ugly.  So stay tuned and keep aclickin'.

 “Artificial intelligence would be the ultimate version of Google.
The ultimate search engine that would understand everything on the web.
It would understand exactly what you wanted, and it would give you the right thing.
We're nowhere near doing that now.
However, we can get incrementally closer to that, and that is basically what we work on.”
Larry Page

¯\_(ツ)_/¯
Gerald Reiff