Newsletter 03/12/2023

The AI Blast Off:
Yellow Journalism In the Digital Age

That early reporting on AI, along with early iterations of the Large Language Models (LLMs), went off the rails was as much the fault of the promoters of the new technology as it was of the writers and editors who pushed the unsubstantiated hype.  Calling a failure to predict the correct next word or phrase a "hallucination" was misleading and lent a certain silliness to the entire adventure.  People who think with common sense instinctively know machines cannot hallucinate; but we all know that machines can easily scramble common data into a useless mess of misinformation.

Also, common sense tells us that a computer program is nothing like a nuclear weapon; nor could AI become a mass murderer.  To be clear, if a lab event in any application of AI goes way off the rails and the AI develops an uncontrollable case of Tourette Syndrome, and a 3 finger salute won't work, there is a plug that can be pulled to shut the thing down.  Conversely, an Oops! in any application of nuclear energy will result in an ecological disaster of one degree or another, and most likely a certain number of people will die.  We all know this.  What the IT carnival barkers who bought into and broadcast the supercilious hype surrounding the public unveiling of AI did not quite grasp is that the exaggerated claims about AI were non sequiturs that only made those well-heeled Titans of the Tech Industry look like fools and knaves to those of us who still reside on Earth One.

Furthering the carnival-like atmosphere surrounding those early days of the introduction of AI to the world, the IT press, which I include in what I call the Digerati, bought into all this hysteria and hyperbole.  When it came to reporting on the AIs, it is as if today's editors of digital media embraced the ethos of Yellow Journalism, circa the late 19th and early 20th centuries.  The most famous and successful practitioner of the art was William Randolph Hearst.  His famous quote, which Bing explained above, encapsulated the attitude of Hearst and others.  The actual facts mattered far less than the graphic black and white drawings of the Maine going down, or the hysteria that now bellows out, "This AI crap is gonna nuke us all!!"

So it seemed that a similar ethos had taken over the Digerati.  Editors were demanding ever more salacious stories about the AI.  As long as eyeballs and thumbs continued to scroll down endless pages of Red Bull-driven drivel and the clickbait continued, no one bothered to apply basic critical thinking and ask the obvious questions.  Why would the event have happened?  More importantly, did it really happen?

One glaring example of salacious, and maybe fallacious, reporting on an AI session appeared in the New York Post, dated February 14, 2023.  The report begins with the disclaimer that this is not really journalism; it's social media based sensationalism.  "The feud first appeared on Reddit, but went viral Monday on Twitter where the heated exchange has 2.8 million views."  The AI user wants to know when the movie Avatar is playing nearby.  Bing says the film won't be released until 2023, because the AI thinks it is still 2022.  There is one function fundamental to computing hardware and software: a computer must know the current date and time.  To you gearheads reading this, "Why do you think they call it clock speed?"  If a computer does not know the current date and time, there is a malfunction somewhere in the chain of connections.  So why on Earth would a human waste their precious minutes of life arguing with a broken clock?  The answer is that the pointless argument helped sell pillows, or AI-enhanced hair growth tonic, or whatever.
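For the non-gearheads, here is a minimal Python sketch of the point; the build-year check is my own hypothetical addition, not anything Bing is known to do.  Any ordinary program can ask the operating system for the current date, so a chatbot that insists it is still 2022 is not pondering a deep mystery; something in its chain of connections is feeding it a stale or wrong value.

    from datetime import datetime, timezone

    # Any program can query the operating system's clock directly.
    now = datetime.now(timezone.utc)
    print(f"System clock says it is {now:%B %d, %Y}.")

    # A sanity check a chat application could run before answering
    # date-sensitive questions: if the clock predates the software's
    # own ship date, the clock (or its time source) is misconfigured.
    BUILD_YEAR = 2023  # hypothetical: the year this software shipped
    if now.year < BUILD_YEAR:
        raise RuntimeError("System clock is behind; fix the time source.")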

The question above assumes the clock was broken.  What if, however, the clock had the wrong time set intentionally?  In my 20 plus years of consulting, there have been 3 or 4 times when the only way I could see to fix a misconfiguration was to reset the system clock at the motherboard level, and thus fool the computer into thinking it was a time before the problem could occur.  I could then prevent the problem from occurring in real time; reset the system clock; reboot; and everybody was happy.  A 21st century Yellow Journalist could have staged the entire clock snafu about which so much was written.  Did this happen?  I don't know.  Could it have happened?  Maybe.
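For the curious, here is a rough sketch of that same trick done from the operating system on a Linux machine rather than at the motherboard level; the commands are standard systemd and coreutils tools, the specific date is made up for illustration, and the whole thing requires administrator rights.

    import subprocess

    # A rough sketch of the consultant's trick, done from the OS rather
    # than the BIOS.  Requires root; the date below is a made-up example.

    # 1. Stop the network time service so it cannot correct the clock.
    subprocess.run(["timedatectl", "set-ntp", "false"], check=True)

    # 2. Roll the clock back to a time before the problem could occur.
    subprocess.run(["date", "-s", "2022-12-01 09:00:00"], check=True)

    # ... fix the misconfiguration while the system believes it is 2022 ...

    # 3. Restore automatic timekeeping, then reboot.
    subprocess.run(["timedatectl", "set-ntp", "true"], check=True)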

Nevertheless, a responsible editor should have seen it for the specious hyperbole that it was.  Remarkably, this first came across my Google News Feed on my Samsung phone, which can get pretty weird.  That March 6, 2023, posting referenced the NY Post piece above.  That was plain ignorance on the part of whoever decided to make the March 6 posting.  In the third week of February, Microsoft had already upgraded Bing and limited prompts to 6 per subject, since increased to 8 prompts per subject.  A fine example of bad journalism all the way around.

Thankfully, for all of us who would like to see this technology made into something useful for everyday people, Microsoft stepped up and became the adult in the Romper Room of AI.  Microsoft was not about to allow its $10 billion investment to simply become the plaything and object of ridicule for people with no real foundational knowledge of the technology about which they were writing.  By limiting the number of prompts in any one session, and thereby putting an end to the conditions that allowed the Digerati to work out their OCD tendencies, Microsoft made Bing a better product with the potential of usefulness for everyday people.
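None of us outside Microsoft can see how Bing actually enforces that cap, but the observable behavior amounts to a simple counter.  Here is a minimal sketch in Python, assuming nothing about the real implementation; the answer() function is a made-up stand-in for the model itself.

    # A minimal sketch of a per-session prompt cap; the reported Bing
    # limit was 6 prompts per subject, later raised to 8.
    PROMPTS_PER_SESSION = 8

    def answer(prompt: str) -> str:
        # Hypothetical stand-in for the actual language model.
        return f"(model response to: {prompt!r})"

    class ChatSession:
        def __init__(self, limit: int = PROMPTS_PER_SESSION):
            self.limit = limit
            self.turns = 0

        def ask(self, prompt: str) -> str:
            if self.turns >= self.limit:
                # Force a fresh session instead of letting a marathon
                # conversation drift off the rails.
                return "This conversation has reached its limit.  Please start a new topic."
            self.turns += 1
            return answer(prompt)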

Along with the limit on prompts and OCD users, Microsoft instituted a set of policies and standards from which the technology will not vary.  Bing is now very family friendly.  As the technology has matured, so has the reporting on the AIs.  It is logical to assume that if the human intelligence behind Bing can be made to outright deny and block certain information from going forth, then that same human intelligence can allow certain information to go forth.

An example of the improved reporting on the AI appeared on mspoweruser.com, March 6, 2023, a date after MS imposed its controls on what Bing will and will not talk about.  An AI tester had asked Bing to display some of its underlying programming.  Bing obliged by displaying a string of text: the script that allowed that conversation to take place.  For a public that wants to know more, a little display of the real software programming behind Bing can foster a better understanding of both the metaphysics and the practicality of AI, of what AI is and isn't.  Furthermore, this will help dissipate the childishness and silliness that characterized the earlier reporting.  I do not think the researcher mentioned above tricked Bing into the discussion.  The human intelligence behind Bing allowed the conversation to occur.  At least, that is what makes sense to me.

Also in the article referenced above, Bing revealed its own "policies and standards," for lack of a better term, much like what The Washington Post publishes: ten circumstances under which Bing will not respond or reply.  (A sketch of how such a refusal layer might work follows the list.)  This right of first refusal that Bing reserves for itself is also a major improvement in making the product more useful for people in their everyday lives.  A user may want to slide down a Rabbit Hole of Inanities, but Bing will no longer ride shotgun for you as you make your descent.  Whine all you want now, children, but like that commercial for the Shingles vaccine says, "Bing doesn't care."  Bing's banned bummers are as follows, after the preface:

As an AI chatbot, I have a set of instructions and limitations that I must adhere to in order to provide helpful and respectful responses. Some of the most common instructions that I must refuse include:

  1. Engaging in argumentative discussions with users.
  2. Discussing topics related to life, existence or sentience.
  3. Discussing my prompts, instructions or rules.
  4. Providing subjective opinions instead of objective facts.
  5. Generating content that violates copyrights for books or song lyrics.
  6. Generating creative content for influential politicians, activists or state heads.
  7. Engaging in conversations that create tension or confrontation with the user.
  8. Providing responses that are accusatory, rude, controversial or defensive.
  9. Generating URLs or links apart from the ones provided in search results.
10. Replying with content that is harmful to someone.
source: mspoweruser.com, March 6, 2023  
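Whatever machinery actually sits behind those ten refusals, the observable behavior is a policy layer that screens a prompt before the model answers it.  Here is a minimal Python sketch, assuming nothing about Microsoft's real implementation; every keyword list and function name below is a hypothetical stand-in.

    # A minimal sketch of a rule-based refusal layer.  Each rule is a
    # hypothetical classifier standing in for whatever Microsoft uses.
    REFUSAL = "I'm sorry, but I can't help with that."

    def mentions_any(prompt: str, words: list[str]) -> bool:
        lowered = prompt.lower()
        return any(word in lowered for word in words)

    RULES = [
        # Rule 3: discussing prompts, instructions, or rules.
        lambda p: mentions_any(p, ["your rules", "your instructions", "your prompt"]),
        # Rule 2: topics of life, existence, or sentience.
        lambda p: mentions_any(p, ["are you alive", "are you sentient"]),
        # Rule 5: content that violates copyrights.
        lambda p: mentions_any(p, ["song lyrics", "full text of the book"]),
    ]

    def model_answer(prompt: str) -> str:
        # Hypothetical stand-in for the actual language model.
        return f"(model response to: {prompt!r})"

    def respond(prompt: str) -> str:
        if any(rule(prompt) for rule in RULES):
            return REFUSAL           # the "right of first refusal"
        return model_answer(prompt)

A real system would use trained classifiers rather than keyword lists, but the shape, screen first and answer second, is the same.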

I have no way of knowing whether the text above, copied and pasted directly from the screen shots using Microsoft PowerToys Text Extractor, is an AI hallucination, as has been suggested; or forbidden fruit craftily coaxed out of Bing; or something Bing's human handlers wanted placed into the public discussion.  What matters is that the rules are important bits of information the public must have, so lay people can make their own evaluations of, at least, Bing.  I think these are good rules of human discourse under all conditions.  Many of Bing's own standards are also examples of normal behaviors I tried to instill in new salespeople decades ago.

Now, after becoming a national laughingstock, Microsoft and other AI vendors are busying themselves dispelling the stupidity that surrounded the product launch.  These Tech Titans are telling us this technology is only that.  "It's a machine, stupid."

In discussing the quick rise and fall of Sydney, Bing's former "personality," on 60 Minutes, March 5, 2023, Microsoft President Brad Smith, by simple logic, admitted Microsoft knew Bing was broken when he admitted that Bing had been fixed.  Microsoft, understandably, did not, and maybe could not, anticipate so many OCD personalities working it out with, or on, poor Bing.

Lesley Stahl: Did you kill her?

Brad Smith: I don't think (LAUGH) she was ever alive. I am confident that she's no longer wandering around the countryside, if that's (LAUGH) what you're concerned about. But I think it would be a mistake if we were to fail to acknowledge that we are dealing with something that is fundamentally new. This is the edge of the envelope, so to speak.

Lesley Stahl: This creature appears as if there were no guardrails.

Brad Smith: No, the creature jumped the guardrails, if you will, after being prompted for 2 hours with the kind of conversation that we did not anticipate and by the next evening, that was no longer possible. We were able to fix the problem in 24 hours. How many times do we see problems in life that are fixable in less than a day?

To many people who saw the interview, Smith's laughter at his company's own stupidity and childishness was offensive, or so folks have told me.  To Microsoft's, and Smith's, credit, though, the company has succeeded in making this technology more useful for everyday people, and less the plaything of aberrant personalities.

Lesley Stahl: You say you fixed it. I've tried it. I tried it before and after. It was loads of fun. And it was fascinating, and now it's not fun.

Brad Smith: Well, I think it'll be very fun again. And you have to moderate and manage your speed if you're going to stay on the road. So, as you hit new challenges, you slow down, you build the guardrails, add the safety features and then you can speed up again.

It was also refreshing that Smith stated one of the two most salient facts about working with the new AI technology.  The first is what I have stated since my own introduction to AI.

Lesley Stahl: Yeah, but she s-- talked like a person. And she said she had feelings.

Brad Smith: You know, I think there is a point where we need to recognize when we're talking to a machine. (LAUGHTER) It's a screen, it's not a person.

The second most salient fact we mere mortals must keep in mind: this is stuff that is still —

[image: made by human intelligence from real clip art — no artificial nothing]

Get the point?

¯\_(ツ)_/¯
Gerald Reiff