Revised 05/31/2023

The Regulating of AI, Pt.1:
Please Stop Me Before I Hallucinate Again

You say you want some AI regulations
Well you know
We all want to save the world

oh
shew be do wah
oh shew be do wah
You tell us there could be a nuclear hallucination
Well you know
We’re so tired of all your bull
— With Apologies to
The Beatles

 

Among the seemingly irreconcilable issues vexing the Writers Guild of America and its members' strike is the looming application of AI technology in all facets of American industry, including media productions of all types.  The Writers' demands and the industry responses are summarized in a brief two-page PDF available here.  The WGA demands that no artificial intelligence be used in any project covered by the Minimum Basic Agreement (MBA), in any production, nor at any stage of production.

The need to regulate AI has become a hot topic in and out of government, and in all the different media.  The loudest clarion call to regulate AI comes from the majority of the AI hypesters themselves, calling out to anyone who will listen: "Please regulate and rein in our electronic Frankenstein before our monstrous creation is allowed to roam the countryside and hallucinates that a young girl is really a flower."  And it turns out that young girls don't float like flower petals do.  Hmm...

The latest bomb dropped into this hotbed of hyperbole comes from a newly formed organization called the Center for AI Safety (CAIS).  Well over 100 individuals associated with any and all aspects of AI development and its application are signatories to the organization's simple statement concerning the regulation of AI.

The CAIS statement about what it sees as the dangers of AI is a simple and unambiguous pronouncement:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Well, there you have it.  Throw in global warming, and you have the Four Horsemen of our Impending Doom, or so the FUD would have it.  All your End Of The World scenarios are rolled up into one pithy statement that adds the specter of an AI-induced Armageddon to global pandemics and global nuclear war, planting Fear, Uncertainty, and Doubt in the psyches of all near and far.

Certainly, were a tool as powerful as AI placed into the wrong hands, the technology might wreak destruction on a wide scale.  There was a certain naiveté on the part of the early builders of AI, according to Yoshua Bengio, a Canadian professor of computer science who is "Recognized worldwide as one of the leading experts in artificial intelligence."  Bengio has been quoted as saying that he "would have prioritised safety over usefulness had he realised the pace at which it would evolve."  In an interview on MSNBC, May 31, 2023, Bengio noted that although the current state of AI technology is not likely to lead to human extinction, he feared that, given both the rapidity of the advancements in AI technology and the quickness of its almost universal proliferation and adoption, the technological means for an AI-induced Armageddon might be as little as three years off.  In a recent interview with the BBC, Professor Bengio professed that he feared "bad actors getting hold of AI, especially as it became more sophisticated and powerful":

It might be military, it might be terrorists, it might be somebody very angry, psychotic. And so if it's easy to program these AI systems to ask them to do something very bad, this could be very dangerous.

Exactly how does government regulate something that, in its existential metaphysics, is merely preprogrammed electrons dancing across some conductive material, and yet exists or soon will exist in every Internet-connected nation on the planet?  The White House has its own broad outline for the regulation of artificial intelligence in its Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People.  Published by the Office of Science and Technology Policy, the Blueprint lays out all its proposals in an easy-to-navigate website.  The document rests on five pillars:

  Safe and Effective Systems.  You should be protected from unsafe or ineffective systems. Automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system.
  Algorithmic Discrimination Protections.  You should not face discrimination by algorithms and systems should be used and designed in an equitable way. Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.
  Data Privacy.  You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used. You should be protected from violations of privacy through design choices that ensure such protections are included by default, including ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected.
  Notice and Explanation.  You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you. Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible.
  Human Alternatives, Consideration, and Fallback.  You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter. You should be able to opt out from automated systems in favor of a human alternative, where appropriate.

Within the website, each category listed has another linked section called "From Principles to Practice," where the authors lay out how their recommendations might play out in real-world instances.

Members of Congress, too, have written their own legislative proposals for AI regulations.  Of course, these proposals all reflect a focus on protecting consumers from abuses by entities employing AI for commercial purposes.  In fact, 38 different pieces of legislation focusing on reining in runaway AI have been written and then mostly forgotten.  A running list of these proposals is compiled by "Anna Lenhart, a policy fellow at the Institute for Data Democracy and Policy at George Washington University, and a former Hill staffer," as Ms. Lenhart is described in the MIT Technology Review, May 23, 2023.  This continually updated list, entitled "Federal Legislative Proposals Pertaining to Generative AI," is in Google Doc format and can be read here.

Among those legislative proposals are several that mirror what CISA Director Jen Easterly alluded to in her speech at Carnegie Mellon University introducing the Secure by Design initiative.  The MIT Technology Review article cited above listed three attempts to establish some regulatory agency to oversee the impacts of AI.  "But these attempts have failed, as most US bills without bipartisan support are doomed to do," the article laments.

Beginning with the cheeky remark, “Don’t ask what computers can do, ask what they should do,” Microsoft Vice Chair and President Brad Smith adds Microsoft's buck-eighty-five [you know, inflation] to the ongoing national conversation.  Smith outlines "A five-point blueprint for the public governance of AI," which has as its first principle the implementation of existing government frameworks, work begun in 2021 that has now culminated in the publication of the AI Risk Management Framework.

The National Institute of Standards and Technology (NIST) Framework is a 48-page PDF document that can be had here.  At the core of the NIST Framework is the articulation of the "characteristics of trustworthy AI," along with "guidance for addressing them."  (A toy sketch of this checklist follows the list below.)

A trustworthy AI system would be an AI that is:

  valid and reliable
  safe, secure and resilient
  accountable and transparent
  explainable and interpretable
  privacy-enhanced
  fair with harmful bias managed
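
To make that checklist concrete, here is a minimal sketch, entirely hypothetical (the NIST Framework prescribes no code, and these field names are my own invention), of how an auditor might record a system's standing against the six characteristics:

```python
from dataclasses import dataclass, fields

# Hypothetical audit record for the six NIST trustworthiness
# characteristics; the Framework itself defines no such schema.
@dataclass
class TrustworthinessAudit:
    valid_and_reliable: bool
    safe_secure_resilient: bool
    accountable_and_transparent: bool
    explainable_and_interpretable: bool
    privacy_enhanced: bool
    fair_with_harmful_bias_managed: bool

    def gaps(self) -> list[str]:
        """Return the names of the characteristics the system fails to meet."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Example: a system that falls short only on privacy.
audit = TrustworthinessAudit(True, True, True, True, False, True)
print("Unmet characteristics:", audit.gaps())
```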

In his second point, Smith proposes that the government must decide which AI products and services would need the recommended "safety brakes for AI systems."

The third pillar of the blueprint borrows from the existing regulatory frameworks that govern financial services businesses of all stripes today.  The legal obligation that banks and lenders "Know Your Customer" is well established, with clear guidelines for institutions to follow.

The “Know Your Customer” – or KYC – principle requires that financial institutions verify customer identities, establish risk profiles, and monitor transactions to help detect suspicious activity. It would make sense to take this principle and apply a KY3C approach that creates in the AI context certain obligations to know one’s cloud, one’s customers, and one’s content.
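
As a rough illustration only, and not anything Microsoft has actually specified, a KY3C-style gate in front of an AI service might boil down to three auditable checks; every name and data structure below is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Request:
    cloud_provider: str  # where the model is hosted ("know your cloud")
    customer_id: str     # the verified caller ("know your customer")
    content: str         # what the model is asked to produce ("know your content")

# Assumed allow-lists and screens; a real system would use vetted registries.
APPROVED_CLOUDS = {"contoso-cloud"}
VERIFIED_CUSTOMERS = {"acme-media-co"}
FLAGGED_PHRASES = ("clone the voice of",)

def ky3c_check(req: Request) -> bool:
    """Allow the request only if all three 'know your...' checks pass."""
    knows_cloud = req.cloud_provider in APPROVED_CLOUDS
    knows_customer = req.customer_id in VERIFIED_CUSTOMERS
    knows_content = not any(p in req.content.lower() for p in FLAGGED_PHRASES)
    return knows_cloud and knows_customer and knows_content

req = Request("contoso-cloud", "acme-media-co", "Draft a product description.")
print("allowed" if ky3c_check(req) else "blocked")
```

The point of the sketch is only that each of the three obligations can be checked at the point of use, which is what makes the KYC analogy workable in the first place.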

The fourth pillar of Smith's broad outline for AI safety and security is to "promote transparency and ensure academic and nonprofit access to AI.  We believe a critical public goal is to advance transparency and broaden access to AI resources."

Fifth and lastly, Smith wants to pursue new "public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology."

In his written testimony before the Senate Judiciary Committee, May 16, 2023, OpenAI CEO Sam Altman also called for urgent government intervention in and regulation of AI products.  "OpenAI believes that regulation of AI is essential, and we're eager to help policymakers as they determine how to facilitate regulation that balances incentivizing safety while ensuring that people are able to access the technology's benefits."  Altman also invoked the NIST Framework, the same as Microsoft's Smith.  Altman said, "We appreciate the work National Institute of Standards and Technology has done on its risk management framework, and are currently researching how to specifically apply it to the type of models we develop."

None of these proposals, however, addresses the real risk AI poses right now.  Some future demonic global force may grab the AI and turn it against all humanity, but right now the real dangers AI poses come from present-day crooks, miscreants, and other assorted sociopaths who have or will soon adopt AI technology to advance their own personal peccadillos.  And OpenAI doesn't seem to care much about these abuses and misuses of its technology.

¯\_(ツ)_/¯
Gerald Reiff
