Newsletter 9/01/2025 | If you find this article of value, please help keep the blog going by making a contribution at GoFundMe or Paypal |
Back to Contents |
Coming Soon: Warning Labels in AI Products
In its June 2019 edition, Nicotine & Tobacco Research published an article, dated May 25, 2019, titled "Assumption of Risk and the Role of Health Warning Labels in the United States." The authors summarized their work in the Introduction with one simple statement: "This article provides historical context for understanding how the cigarette industry have manipulated language used in health warning labels (HWLs) to protect them in litigation." The Federal Cigarette Labeling and Advertising Act of 1965 was signed into law on July 27, 1965. The law mandated that, beginning January 1, 1966, all packages of cigarettes carry the following HWL printed on the side of the package: "Caution: Cigarette Smoking May be Hazardous to Your Health." One effect of the federal law was that states could no longer require their own, and sometimes more dire, warnings about the dangers of cigarette smoking. The law also prevented the Federal Trade Commission from requiring its own more explicit warning. The FTC had proposed the following language be printed on cigarette packages: "Cigarette smoking is dangerous to health and may cause death from cancer and other diseases." Tobacco companies knew they were selling a product that posed a threat to the health of the consumers who used it. While publicly insisting there was no proof that smoking cigarettes posed any health hazard, lawyers for the tobacco companies were arguing in private industry meetings that it was no longer viable for the companies to deny the health hazards associated with smoking. With the new labels, smokers themselves could be held responsible for their own health problems brought about by smoking cigarettes, an argument company lawyers believed could be made convincing to juries when the inevitable product liability lawsuits arose from harms caused by tobacco use.
Today, developers of Artificial Intelligence products face a dilemma similar to the one tobacco companies faced in the 1960s. As discussed in the previous Dispatch, the judge overseeing the case of Garcia v. Character.AI has ruled that AI is a product, not a service. Over a century of legal precedent has held that when a company produces a product that knowingly, or unknowingly, harms a consumer, the producer of that product can be held responsible for harms caused by the use of the product. If the jury verdict in the Garcia matter is that Character.AI is responsible, either directly or indirectly, for the death of Garcia's son, Sewell Setzer, and that verdict is upheld on appeal, then AI developers will be on notice that they, too, can be held liable for harms done to consumers of their products.
The importance of the ruling that Character.AI is a product cannot be overstated. According to a RAND Corporation study dated November 20, 2024, titled "Liability for Harms from AI Systems," which anticipated AI companies being held liable under tort law, whether AI is or is not a product is the first question that must be answered for any liability case brought against an AI company to proceed. The study examined how torts might be brought against AI developers and what affirmative defenses those developers might offer. The authors of the RAND study defined tort law thusly: "Tort law is a form of common law, meaning that it has been formed primarily by judges ruling on specific cases and developing precedent to apply to future cases."
Many of the study's uncertainties about how AI developers might be held liable were stated prior to the rulings the court made in the Garcia matter. At the time of publication, the authors did not know whether courts would rule that AI chatbots are products, or whether First Amendment claims offered in defense by AI companies would be upheld. In the Garcia matter, the court not only ruled that AI is a product, it also ruled that speech not made by a human is not protected by the First Amendment's freedom of speech protections. Another legal standard that can be applied to torts brought against AI developers is negligence. Although situations often differ in their details, the authors assert that "all people are held to have a duty to take reasonable care against causing foreseeable harm to others." Reasonable care is generally understood to mean that people are required "to be 'reasonably prudent' both in their behavior and in taking precautions to ensure they do not inflict harms on others." Standards of negligence in US tort law are based upon the customs and standards of any given industry. The authors assume that any judgment of negligence against any one AI company will rest partly upon the established safety standards adopted by the AI industry in general. The authors imply that, at present, no such industry-wide standards exist. Instead, they suggest that AI developers facing the possibility of such torts "might have an opportunity to develop standards for safe AI development that in turn could establish benchmarks for the whole industry in future litigation."
The authors also offer AI developers a way out of what will surely be an onslaught of liability cases brought against them and their products. They suggest that "AI developers and deployers might be able to avoid some liability by warning users of the potential risks that an AI system might pose." As the tobacco companies did in the 1960s, AI developers might simply admit that they know their products can cause harm and warn their customers about the possibility of harms brought about by the use of those products. That warning might then become an affirmative defense to argue in any tort brought against them.
Writing on August 24, 2025, partially in response to the matter of Raine v. OpenAI, in which 16-year-old Adam Raine committed suicide after a long series of engagements with ChatGPT-4o, OpenAI admitted that in some cases its safety mechanisms had failed. The plaintiffs allege that their son's suicide was encouraged by the OpenAI chatbot. In the August 24 blog post, titled "Helping people when they need it most," OpenAI tacitly admitted that its products have at least the propensity to cause harm. OpenAI discussed all the ways it tries to direct people who may be in a personal crisis to appropriate resources for help. Nonetheless, OpenAI confessed that "there have been moments when our systems did not behave as intended in sensitive situations." Not behaving as intended rings like an admission that the product offered by OpenAI is, at least in certain cases, defective. Going forward, OpenAI "will develop and rollout safeguards that recognize teens’ unique developmental needs, with stronger guardrails around sensitive content and risky behaviors." Among those product improvements, OpenAI will offer "parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT." And it is not just children who have been harmed by the use, misuse, and abuse of AI systems.
A recent article published by Futurism on June 10, 2025, titled "People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions," cataloged the many instances in which otherwise sane adults fell victim to "severe mental health crises" after prolonged engagement with ChatGPT. Author Maggie Harrison Dupré was given screenshots of what actually took place in these psychosis-inducing chats with ChatGPT. Delusions of grandeur and bouts of extreme paranoia were detailed by both victims and family members. Dr. Nina Vasan, a psychiatrist at Stanford University and the founder of the university's Brainstorm lab, reviewed some of the conversations that Futurism had obtained. Dr. Vasan concluded that the screenshots show the "AI being incredibly sycophantic, and ending up making things worse... What these bots are saying is worsening delusions, and it's causing enormous harm."
The RAND study cited above notes that tort law is basically state law. In the absence of any relevant federal law, as is the case with AI regulation, state law takes precedence. To that end, on August 25, 2025, the Attorneys General of 44 states sent an open letter to the CEOs of the major AI companies. The AGs singled out Meta for allowing its AI assistants to offer sexually suggestive material to children, as was mentioned in a previous Dispatch. The signatories acknowledged that AI is a potentially beneficial and transformative technology, one that could raise humankind to "previously inconceivable vistas of human creativity and productivity." But, warn the AGs, "some, especially children, fall victim to dangers known to the platforms." Their solution is deceptively simple: "you must exercise sound judgment and prioritize their well-being. Don’t hurt kids." The AGs closed with a comparison to how social media companies failed to protect children from the harms their products caused, laying much of that blame on the failure of governments to act: "You will be held accountable for your decisions. Social media platforms caused significant harm to children, in part because government watchdogs did not do their job fast enough. Lesson learned. The potential harms of AI, like the potential benefits, dwarf the impact of social media. We wish you all success in the race for AI dominance. But we are paying attention. If you knowingly harm kids, you will answer for it."
That is why the court's ruling in the Garcia case is so important. Social media companies have hidden, and still hide, behind the shield of Section 230 of the 1996 Telecommunications Act. This author has called for the repeal of Section 230 several times. Section 230 grants social media platforms broad "safe harbor" protections against legal liability for any content users post on their platforms. Artificial Intelligence is neither a platform nor a service. AI is a product. And, as such, its manufacturers can be held liable for harms caused by the use of their products. Given all that is currently known about the harms brought about by the use, abuse, and misuse of AI chatbots, especially where children are involved, more torts claiming liability for harms AI chatbots have caused will inevitably fill courts' dockets. Therefore, following the logic of the tobacco companies 60 years ago, the executives who run these companies, and in some cases the stockholders who invest in them, may themselves come to see the logic behind the need for some kind of health warning label attached to their digital feeds. As the old saying goes, "To be forewarned is to be forearmed."
|
¯\_(ツ)_/¯ Gerald Reiff |
Back to Top | ← previous post | next post → |
If you find this article of value, please help keep the blog going by making a contribution at GoFundMe or Paypal |