Newsletter 05/20/2023


Artificial Intelligence and The Return of FUD

A new criticism of the AI vendors and all the Doomsayers centers on those same vendors telling anyone who will listen that Artificial Intelligence will destroy the planet faster than the melting ice caps.  These Prophets of Paranoia seem to agree that a North Korean nuke might be a bad thing, but...
Dudes and Dudettes, y'all better watch out for that incoming AI!

The burgeoning AI industry just may want anyone ignorant about AI to remain so.  In a posting dated March 27, 2022, I discussed how Fear, Uncertainty, and Doubt (FUD) has a long tradition in marketing.

FUD as a concept in and of itself has been around marketing and other forms of mass persuasion since at least the 1920s, although its history can also be traced back to at least 1693, with a similar statement of "doubts, fears, and uncertainties" entering the literature.

A case is now being made that the purveyors of AI, even the seemingly most adamant critics of its general release to the public, many of whom worked on AI projects for decades before this year's unveiling, are using the age-old sales tactic of FUD to actually create greater consumer demand for AI products.  Author Stephen Marche makes exactly this point in an opinion piece that appeared in The Guardian, May 15, 2023, entitled "The apocalypse isn’t coming. We must resist cynicism and fear about AI."  Marche lists all the Doomsday-by-Technology scares that have come before, and notes that "every discussion about artificial intelligence must begin with the possibility of total human extinction. It’s silly and, worse, it’s an alibi, a distraction from the real dangers technology presents."  The author then goes on to remind his readers "that tech doomerism in general is ... a form of advertising, a species of hype."  Indeed, the public has been force-fed a steady and endless torrent of AI-induced Armageddon.  He lays much of the blame for this hysteria on the tech engineering class, if there is such a thing.

One of the defining features of our time is that the engineers – who do not, in my experience, have even the faintest education in the humanities or even recognize that society and culture are worthy of study – simply have no idea how their inventions interact with the world.

He points to Elon Musk and his self-induced Twitter fiasco as evidence for the statement above.

The foundations of FUD are foisted upon us by the Prophets of Paranoia pushing the possibility of a precipitous AI Armageddon befalling one and all.  The crescendo of oncoming, crashing computer Karma, the still-unknown capabilities of AI, and its consequences not yet imagined all center on the notion of AI's "emergent capabilities."  Recent research, however, indicates that any emergent capacities of AI are, in truth, merely "a mirage."

In an article posted to Medium, May 2, 2023, entitled "Emergent Abilities in AI: Are We Chasing a Myth?," author Salvatore Raieli makes the point that "Emergent properties are not only a concept that belongs to artificial intelligence but to all disciplines (from physics to biology)."  These emergent abilities "can be defined as an emergent property, a property that appears as the complexity of the system increases and cannot be predicted."  In AI, emergent properties are demonstrated when the AI produces an outcome beyond what its underlying language model should have allowed.

These conclusions about how AI demonstrates its emergent capacities might, however, be faulty.  A growing body of opinion in academia suggests that these emergent capabilities are artifacts of the biases of the researchers observing them.  In a piece published by Stanford University's Human-Centered Artificial Intelligence institute, May 8, 2023, entitled "AI’s Ostensible Emergent Abilities Are a Mirage," author Katharine Miller cites recent academic research that stands in clear contrast to the Sturm und Drang currently embraced in the mainstream media's reporting on AI.

The plethora of paranoia pitched by these false prophets of AI perfidy is predicated upon faulty science, proposes Ms. Miller.  The whole is not greater than the sum of the parts; it is simply the case that the sum of the parts exceeds the capacity of prior language model datasets.  Miller cites a recent study published by Stanford University entitled "Are Emergent Abilities of Large Language Models a Mirage?" [pdf will open], which posits that the illusion of emergent capabilities is the result of the methodologies employed to measure such output.  Here the Stanford scholars succinctly lay out the proposition that AI might exhibit behaviors that exceed the sum of its inputs.

Recent work claims that large language models display emergent abilities, abilities not present in smaller-scale models that are present in larger-scale models. What makes emergent abilities intriguing is two-fold: their sharpness, transitioning seemingly instantaneously from not present to present, and their unpredictability, appearing at seemingly unforeseeable model scales.

The scholars' alternative theory is that "for a particular task and model family, when analyzing fixed model outputs, one can choose a metric which leads to the inference of an emergent ability or another metric which does not."  Thus, they conclude that "existing claims of emergent abilities are creations of the researcher's analyses, not fundamental changes in model behavior on specific tasks with scale."  Or, another way of saying this is:

The mirage of emergent abilities only exists because of the programmers' choice of metric... Once you investigate by changing the metrics, the mirage disappears.
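
For the technically curious, here is a minimal back-of-the-envelope sketch of the Stanford argument, written in toy Python with made-up numbers of my own; it is not the researchers' code or data.  Imagine a hypothetical model whose per-token accuracy improves smoothly and predictably with scale.  Score it with a linear, partial-credit metric and the improvement looks gradual; score the very same outputs with a nonlinear, all-or-nothing metric, say exact match on a 20-token answer, and an "ability" appears to erupt out of nowhere.

    # Toy sketch, not the Stanford paper's code: smooth per-token gains
    # masquerade as sudden "emergence" under an all-or-nothing metric.
    import math

    ANSWER_LENGTH = 20  # exact match requires every token to be correct

    def per_token_accuracy(n_params: float) -> float:
        """Hypothetical smooth, log-linear improvement with model scale."""
        return min(0.99, 0.55 + 0.04 * math.log10(n_params))

    for n_params in [1e6, 1e8, 1e10, 1e12]:
        p = per_token_accuracy(n_params)   # linear metric: creeps upward
        exact = p ** ANSWER_LENGTH         # nonlinear metric: "leaps"
        print(f"{n_params:.0e} params | per-token {p:.2f} | exact-match {exact:.3f}")

Run it and the per-token column inches from 0.79 to 0.99, while the exact-match column vaults from about 0.01 to 0.82 across the same scales.  Same model, same smooth progress; the "emergence" lives entirely in the choice of metric.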

Furthering the theory that all this fallacious phooey surrounding the public's fears about AI is nothing more than the marketing technique of our old friend and/or fiend, FUD, is an opinion piece published in The Los Angeles Times, March 31, 2023, entitled "Afraid of AI? The startups selling it want you to be."  Columnist Brian Merchant asks the most pertinent question when it comes to how and why the presentation of AI to the public has proceeded in the manner in which it has.

Why would you, a CEO or executive at a high-profile technology company, repeatedly return to the public stage to proclaim how worried you are about the product you are building and selling?

Merchant answers his own question by stating: "Answer: If apocalyptic doomsaying about the terrifying power of AI serves your marketing strategy."  Furthermore, Merchant adds, "AI, like other, more basic forms of automation, isn't a traditional business. Scaring off customers isn't a concern when what you're selling is the fearsome power that your service promises."

Merchant goes on to summarize well the overall purpose of these seemingly scary sales strategies employed in the marketing of AI.

Now, the benefits of this apocalyptic AI marketing are twofold. First, it encourages users to try the "scary" service in question - what better way to generate a buzz than to insist, with a certain presumed credibility, that your new technology is so potent it might unravel the world as we know it?

The second is more mundane: The bulk of OpenAI's income is unlikely to come from average users paying for premium-tier access. The business case for a rando paying monthly fees to access a chatbot that is marginally more interesting and useful than, say, Google Search, is highly unproven.

There is nothing new in the proposition that the next wave of technological innovation will bring about the demise of those businesses that fail to adopt the new technology quickly.  Merchant posits that "If companies believe a labor-saving technology is so powerful or efficient that their competitors are sure to adopt it, they don't want to miss out - regardless of the ultimate utility."  Merchant therefore concludes that the real fear surrounding AI arises from the very plausible prospect that AI will soon replace many human workers.  He lists the common business tasks most likely to be performed by AI in the very near future.

The great promise of OpenAI's suite of AI services is, at root, that companies and individuals will save on labor costs - they can generate the ad copy, art, slide deck presentations, email marketing and data entry processes fast and cheap.

Of all the prognostications, pro and con, offered by AI's most adamant adherents and by a growing number of skeptics, the most salient is that AI will make many human workers obsolete.  The dilemma that will face democratic and free-market economies is how to address and provide for the needs of legions of newly displaced white-collar "information workers," a group that will also comprise reliable voters.

Presently, I hear only the barely audible sound of clicking crickets far away in the distance.  Closer to the 2024 elections, however, what is now only a whisper will undoubtedly become a thundering, resonant chorus of disgruntled and discontented voters, whose message to their disparate representatives will be the same.

¯\_(ツ)_/¯
Gerald Reiff
