Top  
Revised 06/03/2023 Back to Contents

The Regulating of AI, Pt. 3:
Observations From Abroad and Conclusions for Here At Home

You say you want some new legislation
Well you know
Why don't you just fix your broken stuff instead
oh shew be do wah
oh shew be do wah
You tell us it's not really defective
Well you know
Then stop saying it's gonna lay us all out for dead
— With Apologies to
The Beatles

 

The calls for Government interference in and regulation of Artificial Intelligence products made by Sam Altman and others must be met with a high degree of incredulity.  No where in his written statement to the Senate did Mr. Altman address what might be penalties for any violations of the national standards of AI performance that he envisions.  As we have seen, OpenAI is fully aware that its products may not work correctly at any one moment in time.  For example, the LLM's are susceptible to injection attacks, where "a hacker poisons a webpage and hides malicious prompts by adding comments or using zero-point fonts on the webpage," and other forms of hacking. 

The calls for the regulation of AI are not just coming from the US.  The United Nations is concerned about the global implications of the ability of the LLMs to be veritable fountainheads of disinformation As The United Nations Human rights Council noted, in a Press Release, June 2, 2023.

... so-called “generative AI” systems can enable the cheap and rapid mass production of synthetic content that spreads disinformation or promotes and amplifies incitement to hatred, discrimination or violence on the basis of race, sex, gender and other characteristics.

The Human Rights Council has "called for regulation to address the lightning-fast development of generative AI that’s enabling mass production of fake online content which spreads disinformation and hate speech."  The UN takes note of the fact that AIs are, in reality, somewhat nebulous and impossible to regulate in the traditional understanding of the term.  And, therefore, The United Nations Human Rights Council recommends that: "Specific technologies and applications should be avoided altogether where the regulation of human rights complaints is not possible."  [Ibid]

The European Union has its own proposed regulations of AI.  The complete proposal is a 108 page pdf file with the lofty Title of, "REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL. LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS, also simply known as "The Act."  Within The Act is a statement about governing AI that makes the endeavor different when reining in other new technologies. The Act, pg 34.

Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory oversight and a safe space for experimentation, while ensuring responsible innovation and integration of appropriate safeguards and risk mitigation measures.

The EU also offers a more friendly Executive Summary, which is a 21 page very graphic rich pdf file.  The images here are taken from the Executive Summary.  The EU envisions two levels of risk from different applications of AI, and thus imagines different regulating schemes.  One of these would be the more consumer oriented, but not financial, types of applications.  Compliance in this "Low Risk" Tier would be voluntary on the part of the AI vendor.

A second classification of AI is noted as "High Risk,"  according to The Act, pg 14:

The classification of an AI system as high-risk is based on the intended purpose of the AI system, in line with existing product safety legislation. Therefore, the classification as high-risk does not only depend on the function performed by the AI system, but also on the specific purpose and modalities for which that system is used.

One component of the EU's rather elaborate set of rules and regulations pertaining to high risk systems, is the need for testing high risk systems before they are put on the market.  "In order to ensure a high level of trustworthiness of high-risk AI systems, those systems should be subject to a conformity assessment prior to their placing on the market or putting into service."  The Act, pg 33.  Some emblem that signifies the AI product has passed such testing is envisioned.  "High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the internal market."

The EU is unambiguous about what it considers outlaw activity by AI producers, vendors, and users.  Its proposals center much on outlawing what has become known as "deep fakes" of any individual.

For AI systems considered to  be "high risk," the EU envisions rigid safety standards and testing to ensure the greatest adherence to the new standards.  The Act, ppg 46-47.  Moreover, the EU has no desire to have its Frankenstein roam the countryside.  The Act, pg 51

High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use.

Ultimately, the EU wants to establish its own "European Artificial Intelligence Board."  The Act, pg 72.  The board would have the legal authority to take down and make inoperable those AI High Risk systems found not in compliance. The Act, pg 78.

Where, in the course of that evaluation, the market surveillance authority finds that the AI system does not comply with the requirements and obligations laid down in this Regulation, it shall without delay require the relevant operator to take all appropriate corrective actions to bring the AI system into compliance, to withdraw the AI system from the market, or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe.

To further the the authority of the board, "Member States shall lay down the rules on penalties, including administrative fines, applicable to infringements of this Regulation and shall take all measures necessary to ensure that they are properly and effectively implemented."    These fines increase as do the offenses.  The Act, pg 82

The EU is clear about who its proposed intervention and regulations into the AI marketplace are aimed to protect.  It envisions the policing of high risk AI products to be not all that different than the policing of other products.  The EU certainly does consider AI a product, subject to government inspection and termination when its efficacy and safety is not up to standards.  The EU seeks to outlaw the most egregious antisocial behavior associated with certain online activities, including now AI. 

It's reliance on assessments and evaluations of how AIs are implemented imagines a new bureaucracy, a Ministry of Artificial Intelligence, if you will.  This is, however, what top down government regulation looks like.  A set of rules that are clearly expressed to all concerned about how adherence to those rules will be assessed.  And penalties clearly stated for stubborn and willful non-compliance to those rules.

The AI Police will be the Meat Inspectors of the 21st century.

The European Union model may not be a good fit, however, for American politicians or the American political economy.  Market forces, driven by and in avoidance of, large jury awards are considered self-regulating dynamics that force compliance to established norms of behavior.  Furthermore, there is also no great hue and cry among the American public for a large top heavy new bureaucracy to enforce what is not seen as a real problem yet, especially when considering all the other real issues facing Americans today. 

As a society, when it comes to AI, we are not yet at the stage of:

Holy Mother of God, What Have We Done to Our Children?

Which is how we now understand Social Media, and has become a Zeitgeist of our age. 

As was mentioned in "Artificial Intelligence and The Return of FUD," the OpenAI business model for profiting from its AI is different than that of Microsoft and Google.  Both Microsoft and Google are in the process of integrating their AI into existing products.  Their reputations in the marketplace and the need to be best in class will compel Microsoft and Google to continue to build quality products, with the AI as one new feature within their existing offerings, and maybe bring new products to market centered around existing products.  Furthermore, as one who has studied the cybersecurity issues since it became a thing, Microsoft and Google are both in the forefront of entities who now place security first.  Maybe not yet at the Secure By Design stage, but the two are working towards that goal.

OpenAI's business model is predicated upon licensing of their AI products to third parties.  Already, OpenAI is marketing Plug-Ins for its GPT 4.  As has often been true in the browser wars, these GPT plug-ins themselves are vulnerable to injection attacks and other forms AI compromise.  Rather try and restrict and guess what is safe for OpenAI to license to whomever, I suggest Uncle Sam simply buy OpenAI, and let the Federal Government actually be the licensing authority for non-proprietary third-party AI products.  The Federal Governent can take a no compromises approach to the application of what will be its technology.  At a market value of a mere 30 billion US dollars, buying OpenAI would be cheaper than setting up some huge new bureaucracy and a better use of taxpayers money.  Uncle Sam could turn a profit if done right, but would not need to.

Like what the EU envisions, some labeling and marketing scheme can be devised that will declare any US AI product is an US CISA Certified Safe AI.  The American Way is to make it right the first time.  And then sell it like hotcakes.  The AIs are, however, like few other products, and are just too powerful and prone to abuse and misuse to allow the AIs to proliferate without the guiding authority of government. 

As daft as this may sound to some, there is both a historical and a legal precedent to the federal government being the entity that regulates the sale and distribution of  anew powerful product and industry.  "The United States Atomic Energy Commission (AEC) was an agency of the United States government established after World War II by U.S. Congress to foster and control the peacetime development of atomic science and technology."  This analogy is especially fitting when it comes to AI.  We are told over and over again that AI is more powerful and potentially more dangerous than atomic energy.  So lets regulate and distribute AI the same way atomic energy was in its infancy.  And simply eliminate the middle man.

 

¯\_(ツ)_/¯
Gerald Reiff

Back to Top/strong> previous post next post