Newsletter 03/02/2023

AI in The Marketplace:
Coming Soon: To You, At You, Maybe Even For You


Coming Soon to a Screen Near You

There is a hair growth product that advertises on cable channels — and which I won't mention — that claims its "Proprietary AI Technology" has discovered a "New Molecule" that will stimulate hair growth.  Now, since all the available research says that AI is nothing more or less than a "word prediction machine," exactly what words were generated that led to the discovery of a new molecule?  Oh, how quickly we have slid down the Rabbit Hole of Marketing.  So, the Mad Hatter at this Tea Party tells us, "AI can grow hair on a billiard ball."

AI has been with us for some time now.  According to Dan Diasio, global artificial intelligence consulting leader at the accounting firm Ernst & Young, in an interview with ComputerWorld published February 27, 2023, the most common interactions we consumers have with AI in our daily lives are those "happening between chatbots and people [and are] largely taking place in the customer service space."  This fact helps explain why the phone trees we encounter daily seem increasingly obtuse.

There are a variety of vendors that have deployed tools to allow chatbots to more seamlessly and more instantly facilitate a discussion between a consumer and a company. Usually, it’s in the customer service space, and it’s used when something goes wrong or when you have a question.

Diasio went on to explain more precisely how the language model works in Customer Service (CS).

In some cases, with some more advanced companies, it doesn’t have to be through a chat interface — it can be through a voice interface as well. So, that would be an example of someone calling a company and first being asked to describe what they’re calling about, and then an automated system responds to you. It’s a chatbot that sits behind that system that’s literally taking the speech and translating that into text, giving it to the chatbot and then the chatbot replies in text and then the system replies back in speech.
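Diasio's description amounts to a simple three-stage pipeline: speech is transcribed to text, the text goes to the chatbot, and the chatbot's text reply is synthesized back into speech. A minimal Python sketch of that flow is below — every function here is an invented stand-in stub, not any vendor's actual API:

```python
# Toy sketch of the voice-chatbot pipeline described above.
# All three stages are stubs; a real system would call speech
# recognition, a language model, and text-to-speech services.

def speech_to_text(audio: str) -> str:
    # Stub: pretend the caller's audio was transcribed.
    return audio.lower().strip()

def chatbot_reply(text: str) -> str:
    # Stub: a canned intent table standing in for the chatbot.
    intents = {
        "where is my order": "Your order shipped yesterday.",
        "cancel my subscription": "I can help you cancel. Please confirm.",
    }
    # If the transcription misses the intent, the session fails --
    # the exact failure mode described in the surrounding text.
    return intents.get(text, "Sorry, I didn't understand that.")

def text_to_speech(text: str) -> str:
    # Stub: pretend the reply was synthesized as audio.
    return f"[audio] {text}"

def handle_call(audio: str) -> str:
    return text_to_speech(chatbot_reply(speech_to_text(audio)))

print(handle_call("Where is my order"))
# -> [audio] Your order shipped yesterday.
```

The fallback branch in `chatbot_reply` is where the "did not understand a word you said" experience lives: one bad transcription and the whole exchange derails.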

If the bot does not predict the right word, or misinterprets the words spoken by the hapless customer on the phone, then the CS session fails.  I am sure 12 out of 10 people reading this have experienced CS sessions where you came away positive the voice on the other end did not understand a word you said.  If it didn't, it might be because the AI had not yet been trained on your particular words.

Diasio admits the limitations of chatbots in the CS space are due to inaccurate word predictions made by the AI: the much-ballyhooed "AI Hallucination."

In some cases, when it predicts that next best word, that word is no longer factually accurate for the particular question. But given that word, the next best word given after that continues down that path, and then you build a series of words that go down a path that’s no longer accurate — but it’s very convincing in the way it’s been written.
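That drift is easy to demonstrate with even the crudest word predictor. The toy Python model below (the tiny "training corpus" is invented for illustration) greedily picks the most common next word; one plausible-but-wrong pick sends the whole sentence down a fluent, false path:

```python
# Toy greedy next-word predictor showing how one plausible-but-wrong
# word choice leads generation down a fluent, false path.
from collections import Counter

corpus = (
    "the capital of france is paris . "
    "the capital of texas is austin . "
    "the capital of texas is austin ."
).split()

# Bigram counts: how often each word follows another.
bigrams = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, Counter())[b] += 1

def generate(seed, max_words=5):
    out = list(seed)
    for _ in range(max_words):
        nxt = bigrams[out[-1]].most_common(1)[0][0]  # greedy: best next word
        out.append(nxt)
        if nxt == ".":
            break
    return " ".join(out)

# Prompted about France -- but "austin" follows "is" more often in
# the data, so the model confidently asserts a false fact.
print(generate("the capital of france".split()))
# -> the capital of france is austin .
```

Nothing in the output looks broken; each word is the statistically "best" continuation. That is exactly what makes a hallucination "very convincing in the way it's been written."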

IBM has built its own AI supercomputer, named Vela.  Vela is used, however, only for IBM's own internal research.  In a blog post dated February 7, 2023, the IT giant stated: "Vela has been online since May 2022 and is in productive use by dozens of AI researchers at IBM Research, who are training models with tens of billions of parameters."

IBM is using its own AI technology to reduce its labor costs and streamline certain corporate functions, even replacing certain HR employees.  CEO Arvind Krishna said recently that "AI can help businesses with hiring or promotions, where a bot can do the work of gathering information and a human assists with a final decision."  In his scenario of future hirings and firings, at least at IBM, the human is merely an assistant.  Krishna's fantasy of an employee-free future is the stuff of nightmares for the rest of us.

Training of the AI for any one industry's specific lexicon is already moving along apace.  Indeed, the IBM concept seems correct.  Smaller, more focused data sets that are industry specific will be more useful than the "anyone can train the monster to do anything" approach of the general-purpose and experimental AI written about here and elsewhere.

With a pre-trained foundation model, we can reduce labeled data requirements dramatically. First, we could fine-tune it on a domain-specific unlabeled corpus to create a domain-specific foundation model. Then, using a much smaller amount of labeled data, potentially just a thousand labeled examples, we can train a model for summarization.
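The intuition behind that two-stage recipe can be shown with a toy Python sketch. Here a crude keyword-count "embedding" stands in for a domain-tuned foundation model (the keyword sets and labeled examples are all invented for illustration); because the representation already encodes the domain, two labeled examples are enough to classify new text:

```python
# Toy illustration of the two-stage recipe quoted above: if the
# pretrained representation is good, only a handful of labeled
# examples are needed for the final task.  The "embedding" here is
# a crude keyword-count stand-in, not a real foundation model.

LEGAL = {"court", "plaintiff", "statute", "ruling"}
TECH = {"server", "kernel", "compile", "network"}

def embed(text):
    # Stand-in for a domain-tuned foundation model's embedding:
    # two features counting domain keywords.
    words = set(text.lower().split())
    return (len(words & LEGAL), len(words & TECH))

# Stage two: a tiny labeled set -- far smaller than what training
# from scratch would require.
labeled = [("the court issued a ruling", "legal"),
           ("the kernel failed to compile", "tech")]

def classify(text):
    # Nearest labeled example in embedding space.
    v = embed(text)
    def dist(example):
        u = embed(example[0])
        return (u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2
    return min(labeled, key=dist)[1]

print(classify("the plaintiff cited a statute"))  # -> legal
```

The heavy lifting is all in `embed`; the labeled data only has to anchor the decision, which is why "potentially just a thousand labeled examples" can suffice.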

A cottage industry of specific language models has come into being to train AI in specific types of knowledge.  The first example I encountered was from BleepingComputer.  Language packs that train your ChatGPT in certain computer tasks, as well as training on ChatGPT itself, are available.  Although it must not be too hot of a seller.  Marked down from $800 a package to an ASTOUNDING $19.99!  Act Now!  Soon every brand from BMW to Betty Crocker will have some AI to sell you — Made Especially For You.


source: https://deals.bleepingcomputer.com/sales/the-complete-chatgpt-artificial-intelligence-openai-training-bundle

There are several different online databases for the Practice of Law.  The Practice of Law is a most verbose undertaking.  It's all about the words that have previously been spoken and written on any one case or issue.  Precision in expository writing is essential, both factually and in grammar and presentation.  So will lawyers confidently rely on the AI that is coming to the Practice of Law?  Attorney users of CaseText will soon find out.


source: https://casetext.com/blog/casetext-announces-cocounsel-ai-legal-assistant/

This is a most egregious example of overselling the AI language model.  There isn't much practical difference between reading a detailed summary of a case that is already readily available online and having the same thing read in a more mellifluous voice.  And since this technology is far from guaranteed accurate, I question the work ethic of any attorney who would copy and paste the AI's prose into a pleading to present to a judge.

Again, it is clear, as we watch AI turn truth and facts into just more variables open to interpretation, that Congress needs to set standards for this new technology, and add meaningful penalties for vendors who sell only the razzle-dazzle of AI with little regard for the actual usefulness of their wares.  Congress must stop defective products from coming to market that their own vendors already admit are broken and do not work as advertised.  Put all the Snake Oil salespeople out of business before they ever get started.

Even in 1885, there was some Truth In Advertising.


source: https://wordhistories.net/2020/03/01/grow-hair-billiard-ball/#:~:text=Originally%20and%20chiefly%20used%20with,means%20to%20achieve%20the%20impossible.

¯\_(ツ)_/¯
Gerald Reiff