AI Evolution and the Rise of the Machines

“Don’t look back. Something might be gaining on you.”
— Satchel Paige

Perhaps the tech reporters who have been praising innovations that allow robots to replace human workers should ignore that advice and look over their own shoulders. AI evolution is moving us toward the rise of the machines faster than they anticipate.

The Robot Reporter

Image by Jack Fitzgerald

The Chinese have invented an AI that can write a 300-character news article in just one second. The robot reporter, named Xiao Nan, created the article for the Chinese media outlet Southern Metropolis Daily. Its developer, Wan Xiaojun, is a professor at Peking University who is working on developing several AI machines.

He sees only the benefits, of course. “When compared with the staff reporters, Xiao Nan has a stronger ability to analyze data and is quicker at writing stories.”

Human reporters should not worry about AI evolution replacing them, however, because Prof. Wan reassures us that:

“Robots are still unable to conduct face-to-face interviews and respond intuitively with follow-up questions. They also do not have the ability to select the news angle from an interview or conversation.”

We keep hearing this kind of reassurance from experts who tell us that AI won’t be taking over the world any time soon because the machines can’t really think for themselves or act independently. No Skynet tomorrow, nope. Just can’t happen. AIs are just not that smart.

AI Evolution Begins

This article, “The Mind-Blowing AI Announcement by Google That You Probably Missed,” by creative technologist Gil Fewster, casts a spotlight on exactly how fast artificial intelligences are learning and developing independent of human direction. Mr. Fewster examines a December 2016 Google article published under a title only a techie could love: “Zero-Shot Translation with Google’s Multilingual Neural Machine Translation System.”

To take Fewster’s example one step further, imagine that everyone on the planet experienced the same phenomenon, and then tried to tell others about what they saw, heard, and felt. No two answers would be exactly the same, either factually or in the terms used to express them. Sometimes, the result would be a Tower-of-Babel rush of confusing talk. Or it could produce the great literature and music of the world. Probably, both.

The Tower of Babel

It seems that Google has developed a new language translation tool, the Google Neural Machine Translation System. That doesn’t sound too threatening—who hasn’t wished for a device that would make translating from one language to another simpler and easier? It’s the kind of thing that helps humans communicate with one another, right?

The Smarter AI

Well, the GNMT kind of blasted right past that. The AI smartened up and figured out that the best and fastest way to accomplish translation was to create its own language. No human directed it to do this. No human programmed it to do this. The GNMT learned from the people who used it and made “educated guesses about the content, tone and meaning of phrases based on the context of other words and phrases around them.”

The GNMT did all this in a matter of weeks.

So how long do you think it would take an AI that can learn this quickly to figure out more complex tasks, like interviewing a human being or selecting a news angle? How fast do you think it could outgrow Prof. Wan’s assurance that “robots will be able to act as a supplement, helping newspapers and related media, as well as editors and reporters”?

Sure they will. All we need is some professor or researcher with more arrogance than foresight or more intellect than common sense to program ambition into an AI. Or self-reliance. Or ruthless efficiency.

The Rise of the Machines

Well, they’re starting small. Sean Martin in the Express tells us about “Robots Being Developed That Have a ‘Brain’ and Can Learn New Things Like a Human Child.” This time the genius developers are building a robot that can “mimic the human learning experience” by learning from nothing, as a human child would. The Italian firm Goal Robots hopes to have its first learning robot within four years.

Image by Fox News

Essentially, they are programming curiosity into their robot with the aim of “finding something surprising.” Then the robot will figure out how it works or how to make it work.

Project Coordinator Gianluca Baldassare of the Institute of Cognitive Sciences and Technologies of the National Research Council (CNR-Istc), says, “We do not give them the goals from outside, they have to choose them themselves.”

Great. How many of us have lived with two-year-olds? Do we always approve of the goals they set for themselves? Or the way they go about reaching them? Do we really need a robot child that has just learned to say “No!”?

The Bigger Question

What is the point of all this AI evolution? Why do we need a robot reporter when we have a multitude of human ones? Why do we need a robot that can learn like a child when we have our own children? Do we really need an AI that makes binary decisions about efficiency? Do we need an AI that gets creative all by itself when no one is watching?

C3PO, R2D2 and BB8

And then we have the bigger question: How do these AI developments help humanity? Google and the other genius developers don’t ever seem to address it. They pursue technology for its own sake with seemingly little thought for the potential downside.

The darker issue is, as some of us have said all along, that these robots won’t help humanity at all. Instead they will hurt it in ways that we have so far dealt with only in the realm of science fiction, which most people don’t read. Will we end up with C3PO, R2D2 and BB8, or Skynet and Judgment Day?

And do we want to find out the hard way?

18 thoughts on “AI Evolution and the Rise of the Machines”

  1. Sounds to me like you’re awfully bitter. Neither of you has said much, if anything, about any potential “positives” of this evolution. Not everything that might evolve on this topic will necessarily be bad, right?

    • I would love to see the potential positives of this evolution, Mike. I would love to think, like Gene Roddenberry, that we’re headed toward a future utopia in which people do only the work they want to do. I’m just not seeing it. It’s easy to say that everything will work out and the AI evolution will create more jobs than it takes. But few people are showing how. I saw one on the CBS News last night about a bicycle plant that uses robots to speed production but has kept all their employees. That’s great. More of that would get me a long way toward feeling positive. But there are far more examples of people losing their jobs or not finding jobs in the first place. What’s your vision for the potential positives?

      • I will opine that it’s not that we see ONLY negatives, but rather that you, Mike, (and others) only see positives.

        I think I’m also safe in opining for myself AND Aline that, based on human history, taking away things from people that they’ve come to view as regular and normal – work – and replacing it with massive amounts of idle time expecting they’ll proactively fill it is NOT a realistic assessment.

        And if people DO resent it, and the numbers are as big as predicted, that’s a powderkeg waiting for a spark.

        Based on human history, lots of people with lots of free time and lots of anger at having things taken away from them does not equal sunshine and rainbows and unicorns, but rather “very bad things”.

        • Oh, I see the negatives alright. Also, I’m not advocating any utopian result will necessarily emerge either. All I mean to infer here is that the potential exists for both negative and positive outcomes, and I’ve got to say that you have only espoused what I view (meaning my opinion only) the POTENTIAL negatives based on the history you’ve collected or witnessed. History is no guarantee of future events, or even potential calamity.

          • Anything is possible, Mike. And I, for one, would be thrilled to see a positive outcome. History may not be a guarantee of future events but what we have seen so far is layoffs and job reductions. The only positive came from a CBS Morning News report about a bicycle factory in Alabama. More of those would be great.

          • It’s simply NOT “The only positive came from a CBS Morning News Report…”, but rather the only one you’ve seen or heard about in recent times. I agree there aren’t enough of these stories, but I would suggest to young Davis that there are more than the three of us are probably aware of today, and I hope to see more. Who knows?

  2. One other thing. This video by one of my favorite (Conservative) commentators is well worth the 15 minutes. Note, I’ve advanced the time past the openly-partisan part to the relevant section. (He does, still, get a couple of jabs in so be warned about that.)

    Bill Whittle – How To Stop The Civilizational Collapse

    The curve of civilization does not automatically continue upward. If you look at the history of humanity, civilizations grow, develop, reach the peak of their powers, and just when you think they’d be set… they collapse, and it’s almost overnight. To think that we’re immune from this is hubris.

    One thing Bill pointed out in a different video was that WORRY and FEAR are beneficial. He uses the example of two tribes. One says “Let’s post watches and keep the fires burning because there are leopards in the area.” The other doesn’t, saying “Well, what’s the worst that could happen?”

    WORRY and FEAR are SURVIVAL TRAITS; to arbitrarily dismiss fears based on nothing but pure hope is not only baseless, but contradicts the known and documented history of every civilization that has been founded, grown, and collapsed.

    • FOUND IT. This is a 1 hour 44 minute presentation, but I’ve found the place where he talks about WORRY and being trapped under the asymptote.
      ** I highly endorse the whole video. He’s a great speaker!

      It’s captivating; what will happen when people who are bored beyond reason by the robotic jobpocalypse do something rash…? DO stick with it; he talks about the WORRY situation I mentioned above.

      • Well, David, I watched this piece of Conservative back-patting claptrap. I enjoyed the animal analyses but thought his sociology and history were lacking. Now, could boredom cause revolution? Sure it could. But that’s not what this guy is really saying. He says the leaders opened the gates through boredom and a lack of concern about external threats. But history shows us far more examples of leaders corrupted by greed and weakened by inbreeding. Greed is exactly what’s going to cause the jobpocalypse, led by the current administration’s kleptocracy. Every company, every CEO, out for himself, will implement the AIs and the robots and the kiosks without regard to their impact on the economy. People won’t revolt out of boredom; they’ll revolt out of hunger and poverty and despair.

          • Well, we have just seen that happen with people who voted for the candidate they thought would create change even if they didn’t trust him and were aware of his many faults. The consequences of “change at any cost” will only become apparent over time.

        • Well, Aline, I was with you until your last two sentences. Sure, it’s possible this could occur, but it might not, because we have no guarantee of this outcome coming to fruition in our time. Perhaps, as you suggest, the history of human behavior has provided you with a jaundiced perspective. And perhaps I have more faith in human nature than you do today. Only time will tell what will actually unfold before us.

          • Maybe it’s because much of the science fiction I have read goes to either the utopian or the dystopian view. Neither of those may occur. But doesn’t it give you pause when an AI starts making its own decisions and creating its own language? What happens when AIs decide to start talking together in a language humans don’t understand?

          • Well, actually no, it doesn’t (give me pause). Computers have been involved in “decision-making” for decades, from everything benign like controlling how your dishwasher works to something more fear-inducing like war games. In my old coding days we worked on the notion of “if-then-else” logic creation. Now we call it IFTTT – “If This Then That.” This has of course evolved, and I’ve personally used AI when I worked at NASA to help manage unknown contingencies that our old Symbolics computers could learn when we were fueling TDRSS satellites for the Space Shuttle program. So I know something about this. AI can “learn” and “decide,” as we know, but that’s a far cry from “sentience,” which is decades away, if ever. Have you seen “Elysium” with Matt Damon? I liked the outcome of that from the dystopian future it initially portrayed. I continue to believe that humanity in general will find a way, Mr. Trump notwithstanding.

  3. So you are worried that this evolution might hurt rather than help humanity. Is that the point of this or did I miss something more important?

    If so, you don’t think there’s a huge leap to C3PO, Skynet and all the dystopian sci-fi we read and see movies about?

    Are you really that worried that something bad might evolve, as you’re suggesting, in our lifetimes?

    I’m not.

    • I am.

      Personally, I’m starting to think about the possibility of a work-devoid world of aimless people, angry, frustrated, and declaring a Frank Herbert-like “Butlerian Jihad”: THOU SHALT NOT MAKE A MACHINE IN THE LIKENESS OF THE HUMAN MIND.

      • …And equally possible is the notion that none of this may ever occur whatsoever. There again, positive and negative possibilities may be on the horizon, or maybe not at all.
