When we want to learn from the past, we read history and biographies. Those tell us how people behaved. But the past can’t warn us about the dangers of technology that hasn’t yet been invented.
To find the people who can raise a red flag about new technologies and unknown devices, look in the science fiction aisle.
Future Strength
Dealing with the future is our strength, although sometimes it can take a while for real technology to catch up with our imaginations. We write speculative fiction by speculating about how the future might turn out.
AI, or Artificial Intelligence, offers a relevant case in point. Is it benign and helpful or dangerous? Pundits are now arguing both sides of this question.
Artificial Intelligence may be a new concept to you. But even though the actual technology has only just appeared on the market, science fiction writers have been imagining AI and writing about it for decades.
Science Fiction Warnings
While reading about ChatGPT and other new artificial intelligence products, I can’t help thinking about how science fiction has warned us about it. Here are just three examples from the world of TV and movies:
- Admiral Adama refuses to network Battlestar Galactica to other ships in the fleet for fear that the ship’s controls will be hacked by the enemy Cylons. (Battlestar Galactica)
- Skynet, an artificial, neural network-based, superintelligence system, hunts humanity to near extinction. (The Terminator franchise)
- Most recently, Starfleet’s advanced new fleet formation technology allows the enemy (whom I will not identify, to avoid spoilers) to take over every linked Starfleet vessel and turn the fleet’s weapons against the Earth. (Star Trek: Picard)
The message, of course, is that technology can provide great advances but also may have significant downsides—and those hazards may not be immediately apparent.
The Not-So-Positive Results
Apps like TikTok, Twitter, and SnapChat at first offer fascinating new ways to communicate, but we now know some of the less positive results. They have addicted children to screentime, spread depression and suicidal tendencies among young people, diminished our ability to focus and concentrate, and allowed lies and propaganda to run amok in American society.
How long will it take to identify the negative repercussions of ChatGPT, Dall-E 2, Gen-1, Deep Nostalgia, and Murf, among others? A new one seems to hit the market every day.
My husband and I will not have a “smart device” like Alexa in our home because we don’t want to have something listening to every word we say. What’s that, you argue? It only listens when you speak to it? Well, if it’s not listening to every word, how does it know when you’re speaking to it?
Did you ever read Ray Bradbury’s short story “The Veldt”? If not, I recommend it.
Why Are We Doing This?
I fear we have lost sight of the adage, “Just because we can do something doesn’t mean we should do it.” Yes, there might be short-term advantages to AI-based products but the long-term repercussions can do a great deal more damage than we think.
If ChatGPT can write better than most people, why bother teaching people to write? And if human beings lose the capacity to research and write their own documents, how will we know whether what the AI provides is fact, fiction, misdirection, or lies? Somebody please explain to me how this technology advances the human race, because I just don’t see it. What I see is a malicious piece of tech that makes people lazy, gullible, and easily influenced.
TikTok and The Game
We have seen what happens when an app is misused to increase a company’s profits or altered to reflect its owner’s political leanings. Algorithms can lead users down dark pathways by offering unwanted and dangerous content. We know that in China, the country that created TikTok, the government limits children’s daily usage while, here in the US, they are free to spend as many hours a day on it as they like.
Here’s another work of fiction to consider. It’s an episode of Star Trek: The Next Generation called “The Game,” in which the crew of the Enterprise becomes addicted to a new game that stimulates the pleasure centers of their brains each time they successfully complete a level. While the game is physically and mentally pleasurable, it renders its players extremely susceptible to the power of suggestion. The game compels them to aid its creators in a nefarious attempt to control the Enterprise and, eventually, the Federation. The players, mindlessly pursuing the next level and the next thrill, are not only oblivious to the danger but actively recruit new players.
AI, the Present and the Future
Now, those of you who are firmly grounded in reality may (and probably will) dismiss these examples as flights of fancy at best and silliness at worst. But they provide a glimpse of what happens when technology is released in a market governed only by profit and pleasure. That’s what is happening today.
And don’t expect our Congress to act on the AI threat quickly, or at all. Most of them are so old they’ve just mastered email and have never heard of ChatGPT.
You can fight a war with guns and bombs or you can fight it by taking over the “minds and hearts” of your enemy. If you are smart about it, your opponents will render themselves vulnerable and easy to defeat without even realizing it.
The examples above may have a basis in science fiction, but their lessons apply to what is happening right here and right now. The worst part is that we are, indeed, complicit. It is so much easier not to look behind the curtain and find the darkness there, because then you have to do something about it.
Here’s another adage for you, one suited to the dangers of AI: “Those who do not learn from the future may be compelled to experience it.”