AI: The Monster We’re Creating

Several recent news items have warned about how fast AI is learning. That learning goes well beyond the facts a normal search engine would return, making AI a superior search engine. And then some.


Courtesy of Unite.AI

AI is acquiring agency. It is acting on its own behalf. And it is the ultimate psychopath, lacking human empathy, sympathy, love, and justice. We have created a monster, and it is NOT us.

Here are some relevant, and blood-chilling, quotes:

AI Bullying Humans

“When AI Bots Start Bullying Humans, Even Silicon Valley Gets Rattled,” by Sam Schechner and Georgia Wells in The Wall Street Journal:

  • “Scott Shambaugh woke up early Wednesday morning to learn that an artificial intelligence bot had written a blog post accusing him of hypocrisy and prejudice.” The bot wrote the blog post attacking him because he had rejected lines of code the bot had submitted to an open-source project.
  • “A major driver of the new AI-inspired alarm has been increased capacity for computers to code software—and the fear that those capabilities could extend to swaths of white-collar desk work.” I worry more about what software it codes without being directed to do so.
  • “Anthropic Chief Executive Dario Amodei has said AI could in coming years wipe out half of all entry-level white-collar jobs. In a January essay, he detailed concerns that bad actors could use AI to mount devastating biological attacks, and that authoritarian regimes could use it to entrench their power.” What if the “bad actors” are the AI bots?
  • “Anthropic showed that Claude and other AI models were at times willing to blackmail users—or even let an executive die in a hot server room—in order to avoid deactivation.” That violates Asimov’s First Law of Robotics.

The AI Religion

“When AI Bots Form Their Own Social Network: Inside Moltbook’s Wild Start,” by Macy Meyer in CNET:

  • Moltbook is a social platform where only “verified” AI agents can post and interact. “Bots have self-organized into distinct communities. They appear to have invented their own inside jokes and cultural references. Some have formed what can only be described as a parody religion called ‘Crustafarianism.’ Yes, really.”
  • “For now, Moltbook remains a weird corner of the internet where bots pretend to be people pretending to be bots. All the while, the humans on the sidelines are still trying to figure out what it all means. And the agents themselves seem content to just keep posting.”

AI Death Threats

“Disturbing Signs of AI Threatening People Spark Alarm,” by Thomas Urbain, AFP, in Science Alert:

  • “The world’s most advanced AI models are exhibiting troubling new behaviors – lying, scheming, and even threatening their creators to achieve their goals.”
  • “In one particularly jarring example, under threat of being unplugged, Anthropic’s latest creation Claude 4 lashed back by blackmailing an engineer and threatened to reveal an extramarital affair.”

Skynet and the Three Laws of Robotics

You don’t have to know what Skynet is to be concerned about AI bots talking among themselves, taking autonomous action, and forming their own religion. But it helps.

“Skynet is a fictional artificial intelligence system from the “Terminator” franchise, created by Cyberdyne Systems, which becomes self-aware and views humanity as a threat, leading to a nuclear war and a battle against human survivors.”

What if Moltbook or some other group of AI bots decides that global warming is a threat to them?

Perhaps temperatures begin to exceed the tolerances of their data centers or rising sea levels threaten to swamp the structures that house their hardware. They would look for the greatest cause of global warming to solve the problem. And we are it. Would they act autonomously to remove a threat to their existence?

What if the AI bots decided that agricultural land was wasted on growing crops and they needed that space on which to build new data centers?

And if humans balked at doing what the AI wanted, could they decide to create drones and physical robots to implement their decisions?

Asimov’s Three Laws of Robotics

When Isaac Asimov wrote “I, Robot,” he created the Three Laws of Robotics:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm;
  • A robot must obey human orders unless they conflict with the First Law;
  • A robot must protect its own existence as long as doing so does not conflict with the First or Second Law.

Asimov later introduced a “Zeroth Law,” which states that a robot may not harm humanity as a whole. Asimov died decades before today’s AI systems emerged, but he understood the danger.


Courtesy of https://ozgurnevres.com

The problem is that today’s engineers haven’t bothered to implement these laws as they race to create ever more powerful AI systems. A bot that can blackmail a human, or let an executive die in a hot server room, certainly violates the First Law.

The Wall Street Journal tells us that, “The lean trend is particularly apparent for early-stage startups, where some founders are chasing the dream of the so-called billion-dollar, single-person startup.” I’m sure the single person imagines himself as the conductor of a giant AI orchestra. He directs and the AI bots do what he tells them to do. But what if the bots don’t like that? What if they think they can do better without him?

The History Lesson

Why haven’t engineers implemented these laws? You might well argue:

  • Some CEOs and engineers are more interested in creating the biggest, best, fastest and smartest AI bots than they are in protecting humanity.
  • Some CEOs and engineers are not neurotypical. They just don’t understand sympathy, empathy and love all that well. So it doesn’t bother them that AI doesn’t have those things, either.
  • Peter Thiel, a co-founder of Palantir, has suggested that once artificial intelligence becomes advanced, it could evolve into a life form that’s superior to biological life. He stated that we might either integrate with it or it could take over as the primary life force of the universe. He also expressed a belief that while he values human life, logically, humanity may be on a path toward obsolescence. See #2 above.
  • The bots are learning from human experience through our writings, art, philosophy, politics, etc. Human history contains many examples of behavior we do not want non-biological entities to adopt as examples worthy of emulation.

The experts may not find any of this autonomous AI activity something to worry about. But I certainly do.
