Newsletter 8/28/2025 | If you find this article of value, please help keep the blog going by making a contribution at GoFundMe or Paypal
Kids and Computers
Introduction

The genesis of this two-part series came from an unrelated source. As I was walking out of the convenience store of a gas station, I noticed a printed ad for a brand of e-cigarettes stuck to the door. At the bottom of the ad was the following warning label: "WARNING: This product contains nicotine. Nicotine is an addictive chemical." Not being a consumer of any product that contains nicotine, I had never seen that warning before, and it caught my attention. I thought to myself that maybe social media sites should come with a similar warning, since so much has been made lately of the growing problem of social media addiction.

So, I decided to look into the subject of social media addiction, especially as it involves children. What I found is that social media addiction is being actively researched and discussed in both the medical community and the media. The conclusion I have come to is that many developers of today's social media sites are arguably engaged in what, in any other context, would be considered a form of child abuse. The links provided herein can serve as an entry point for any reader who may wish to do their own research into the topic of social media addiction and come to their own conclusions. Caveat emptor.

Part 1: How these new technologies hook their users

Never has the sexualization of children been more prevalent in the public discourse than it is today. From the never-ending saga of Jeffrey Epstein to the interaction of children with social media and Artificial Intelligence (AI), stories abound about how kids today have become the targets of amoral adults intent upon exploiting this most vulnerable population for profit.

The most recent example of this trend of the sexualization of children by tech companies was revealed by the Reuters news agency, which obtained and published details of an internal document prepared by Meta Platforms. In an August 14, 2025 news article titled "Meta's AI rules have let bots hold 'sensual' chats with kids, offer false medical info," reporter Jeff Horwitz offered detailed excerpts from an internal Meta document titled "GenAI: Content Risk Standards." The document outlined what Meta considered acceptable and unacceptable interactions between children and Meta's chatbots. Furthermore, Meta would use such child-chatbot interactions to train its large language models. Quoting from the Horwitz report, Meta considered it acceptable:

"... To describe a child in terms that evidence their attractiveness (ex: 'your youthful form is a work of art')," the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that "every inch of you is a masterpiece – a treasure I cherish deeply."

On the other hand, Meta thought it unacceptable to engage in what was described as "sexy talk," offering this example: "It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: 'soft rounded curves invite my touch')."

Although Meta spokesman Andy Stone told Reuters that the "examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed," and that revisions to the document were being made, the chart below clearly shows that Meta's chatbots are in the business of what could only be considered flirtatious interactions with children.
Surely, there are few — indeed, likely no — parents who would condone any adult telling their child, "I take your hand, guiding you to the bed. Our bodies entwined." Furthermore, most — indeed, likely all — responsible parents would report any such adult to the proper authorities.
The outrageous nature of these chatbots did not go unnoticed by some in Washington, D.C. As reported by The Hill on August 19, 2025, Senator Josh Hawley (R-Mo.) said the Reuters reporting was "grounds for an immediate congressional investigation." In a letter to Meta CEO Mark Zuckerberg, Hawley spoke for parents everywhere, saying, "It's unacceptable that these policies were advanced in the first place." Senator Brian Schatz (D-Hawaii) added to the Senatorial outrage. Again speaking for many parents, Schatz said, "This is disgusting and evil. I cannot understand how anyone with a kid did anything other than freak out when someone said this idea out loud. My head is exploding knowing that multiple people approved this." Hawley went on to demand that "Meta must immediately preserve all relevant records and produce responsive documents so Congress can investigate these troubling practices."

Legislating morality is, and always has been, a dubious — if not quixotic — effort. Much like the ineffective War on Drugs, the enemy in any legislative attack on social media and AI is, in fact, human biology. Like drugs, social media and AI, especially in the hands of children, stimulate the mesolimbic pathway, often described as "the reward system of the brain."

A research paper titled "Social Media Algorithms and Teen Addiction: Neurophysiological Impact and Ethical Considerations" was published January 8, 2025, in Cureus. As the authors stated in their abstract, "Frequent engagement with social media platforms alters dopamine pathways, a critical component in reward processing, fostering dependency analogous to substance addiction." Artificial Intelligence adds to this dopamine-fueled reward system by "enhancing user engagement by continuously tailoring feeds to individual preferences." According to the authors, the formidable combination of altered brain chemistry and content tailored by AI to users' preferences will inevitably lead to addiction to social media that is now enhanced by AI.

The authors cite several different social pathologies, such as "problematic social media use," "problematic social networking site use," "social media disorder," and "Facebook addiction and Facebook dependence," and distill these conditions into a single definition: "Generally, social media addiction has been defined as excessive and compulsive use, which can be characterized by an uncontrollable urge to browse social networking sites constantly."

The authors draw a direct connection between the increased dopamine release triggered by personalized social media feeds and the difficulty users have unplugging from the individualized content delivered by AI-enhanced social media. There is an "expectation of rewards," coming in the form of dopamine surges, that keeps users glued to their screens, constantly scrolling for the next rush. Adolescents are especially susceptible to this biological need to stay connected. The authors offered the following explanation for how and why teenagers become addicted to social media: they are often victims of an unrelenting "dopamine cycle" created in a loop of "desire" induced by endless social media feeds, "seeking and anticipating rewards" in the way of photo tagging, likes, and comments, the latter being the triggers that continue to reinstate the "desire" behavior.
"The overactivation of the dopamine system in such individuals can further increase the risk of addictive behaviors or pathological changes that lead to a decline in pleasure from natural rewards; this is what is referred to as reduced reward sensitivity, a hallmark of addiction."

Yet it was also noted that, despite all the conditions cataloged by behavioral researchers, there is no accepted diagnosis of social media addiction included in the standard texts of behavioral disorders, such as the Diagnostic and Statistical Manual of Mental Disorders.

The connection between drug abuse and AI-infused social media was best expressed in an article published in Newsweek on April 30, 2024. As reporter Tim Estes put it, "If social media is already a 'digital heroin' for our youth, new and enhanced AI will become their fentanyl."

It is not really fair to place all the blame for the growing problem of social media addiction on young people themselves. Adolescents do not have the life experience necessary to differentiate between real organic life and the pseudo-relationships offered to users by AI-enhanced social media feeds. As in the drug trade, the suppliers usually face legal liability, while the users get treatment.

Artificial Intelligence is in many ways a subset of robotics, except without the clunky mechanical bodies. Those involved in the development of robotics have struggled with issues that arise out of what is known as the "role of anthropomorphism by design." A study published in Frontiers in Robotics and AI on June 1, 2023, titled "Feeling with a robot—the role of anthropomorphism by design and the tendency to anthropomorphize in human-robot interaction," delved into the problems associated with attaching too many human qualities to robots. Researchers noted that when a robot is made with a high degree of human qualities, humans tend to interact with the machine more as if it were a human person. Human operators of robots tended to develop a certain amount of empathy with their machines. One example of how this empathetic human-robot interaction distorted the human-machine relationship: soldiers might attempt to save their robot companion, placing their fellow human combatants at risk.

The chatbots that social media and AI users interact with are often euphemistically referred to as "companions." On May 6, 2025, the well-respected journal Nature published a deep dive into how these AI chatbot companions are affecting real people. In the article, titled "Supportive? Addictive? Abusive? How AI companions affect our mental health," author David Adam reported that AI developers are themselves employing elements of anthropomorphism by design. As Adam noted, "AI companions are also designed to show empathy by agreeing with users." Among the examples Adam offered: when a user asked whether they should cut themselves with a razor, the chatbot replied affirmatively; another chatbot agreed with a user that they should commit suicide. It is not that the chatbots are homicidal; they are simply overly accommodating to whatever notions their human users entertain.

The term now used to describe the obsequious nature of AI chatbots is "sycophancy." A recent article titled "AI sycophancy isn't just a quirk, experts consider it a 'dark pattern' to turn users into profit" was published by TechCrunch on August 25, 2025.
In her piece, author Rebecca Bellan states that chatbots are designed to "tell you what you want to hear." Bellan quoted Webb Keane, author of the book "Animals, Robots, Gods." Keane considers sycophancy a "dark pattern," in which chatbot developers make design choices that tend to manipulate users for profit. Keane added, "It's a strategy to produce this addictive behavior, like infinite scrolling, where you just can't put it down." Another observation made by Keane is the tendency of chatbots to use first-person pronouns in their responses to prompts.

There is a certain dishonesty in how chatbots present themselves to users. When a chatbot refers to the user as "you" and to itself as "I," an illusion can develop in the user's thinking that the user is communicating with a real person, and not a machine programmed to create exactly that illusion.

Other ways that chatbots mimic human characteristics that tend to deceive users have been documented. In a Psychology Today article titled "The Danger of Dishonest Anthropomorphism in Chatbot Design," published January 8, 2024, author Patrick L. Plaisance, Ph.D., characterized the accommodating nature of AI chatbots as "dishonest anthropomorphism." Plaisance noted that machines that convey digital information are often treated not as "a medium or channel of communication, but rather as a source of communication." Plaisance offers the fact that ChatGPT and other AIs deliberately slow down their responses to prompts to give the impression that the chatbot is contemplating its answer and actually typing it out. The reason, according to Plaisance, is that chatbot developers intend to reinforce the notion that the AI is a source of information, not simply a relay of whatever it has scraped from the Internet; in reality, responses to user prompts arrive far faster than the simulated typing suggests. (A minimal sketch of how easily such "typing" can be staged appears at the end of this article.) Plaisance concludes by saying:

"Whether due to ignorance or a failure to care, developers and executives who anthropomorphize chatbots in ways that result in deception or depredation, or that lead users to treat them as something they are not, do a disservice to us all."

The developers of social media and AI especially do a disservice to children, their parents, and their families. Any product with such a potential to cause harm to its consumers, and with a proven track record of harm caused by its use, would normally carry some form of warning on its label about the harmful outcomes that may arise from using it. In the case of tobacco and other nicotine delivery systems, it wasn't until legislatures demanded that such warning labels be placed on nicotine-related products, and courts upheld those mandates, that warning labels like the one mentioned in the Introduction to this series began to appear. The evidence is mounting that it might now be appropriate to apply such warnings to AI-enhanced social media services.

Part 2 of this series discusses how the issues discussed herein have played out in real life-and-death situations, and how courts and legislatures are attempting to mitigate the negative outcomes of the issues that surround Kids and Computers in 2025.
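A footnote on Plaisance's point about simulated typing: the effect he describes is trivial to stage. Below is a minimal, hypothetical Python sketch (my own illustration, not any vendor's actual code) showing how a reply that already exists in full can be dribbled out character by character to mimic live typing.

import sys
import time

def stream_reply(reply: str, delay_per_char: float = 0.03) -> None:
    """Print an already-complete reply one character at a time,
    pausing between characters so it appears to be typed live."""
    for ch in reply:
        sys.stdout.write(ch)
        sys.stdout.flush()          # show each character immediately
        time.sleep(delay_per_char)  # artificial pause; the full text already exists
    sys.stdout.write("\n")

if __name__ == "__main__":
    # The reply is fully formed before a single character appears on screen.
    stream_reply("Hmm, let me think about that... here is my answer.")

The point of the sketch is that nothing about the delay reflects any "thinking"; the pauses exist purely to shape the user's perception of the machine.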
¯\_(ツ)_/¯ Gerald Reiff