Surfing the Artificial Intelligence Wave

A lot has been written in the last couple of years about artificial intelligence (AI) and its impact on the business world. By extension, that also means its impact on the economy and our society.

The arguments run like this:

  • Artificial Intelligence will replace human beings in work that is (1) repetitive and (2) predictable.
  • Not to worry, though. These are just the 3-D jobs: Dull, Dirty, and Dangerous.
  • Some people will lose their jobs.
  • Okay, a lot of people will lose their jobs.
  • But that’s all right because AI will create new jobs, better jobs, and lots of them.
  • Yes, the people who will get those new jobs are not likely to be the same people who lost their old 3-D jobs.
  • Sure, that will disrupt people’s lives negatively but only in the short term.
  • In the long run, everyone will be better off, society will adjust, and the economy will boom.

In the long run, as economist John Maynard Keynes said, we are all dead.

Human Nature and Technology

But I digress. I recently read two articles that shifted my opinion somewhat. In CIO magazine, author Thor Olavsrud gives us “5 artificial intelligence trends that will dominate 2018,” and a couple of them have to do with good, old-fashioned human nature. Who’d a thunk it?

In his analysis, Mr. Olavsrud points out that emotional humans are still in charge of when, where and how to implement AI in the business world—and that’s not a slam dunk. In his article, he quotes Ramon Chen, Chief Product Officer of Reltio, a data management company:

“. . .most enterprises are reluctant to get started due to a combination of skepticism, lack of expertise, and most important of all, a lack of confidence in the reliability of their data sets.”

Lack of Understanding

Well, that makes sense. I spent years in the high-tech world and learned that the men running companies that produced extremely complex products did not necessarily use, much less understand, them. Back in the day, some of the executives at computer manufacturer Digital Equipment Corporation had their secretaries print out their emails. The top brass would then read them and dictate replies for the secretaries to key in and send. That happened a long time ago, but it demonstrates the technology gap often found between product and executive. Not every tech CEO is Steve Jobs.

More than once, I instructed executives, including the VP of sales, on how to use software like Salesforce. They had never learned how, and either could not or did not want to take the time. Using the software themselves would have made their jobs easier and made them more productive, but instead they asked me to run reports.

The introduction of social media raised a whole new set of anxieties, even among marketers. The number of channels multiplied overnight like Tribbles, and the rules on how to maximize them shifted even faster.

Some senior managers tried to corral the company’s social media presence by limiting the number of employees who could use it and insisting that the Legal Department approve everything being said. They didn’t understand the nature of social media, which is, at heart, social. So they hobbled its effectiveness in promoting the company.

That brings us to this question: If a CEO doesn’t know how to use Twitter, is he going to turn his company over to an Artificial Intelligence anytime soon?

Underfunding the IT Nerds

The second part of Mr. Olavsrud’s article raises another important point. For years, senior management has under-invested in the Information Technology (IT) department. While they understand they need IT, if only to fix the printer, they don’t really know what it does or how it works. That means they also can’t comprehend its full potential—or ramifications.

We have all seen movies and TV shows in which the nerd explains something complex in technical terms and the detective/CEO/FBI agent/newspaper reporter says, “In English, please.”

Companies are now rushing to implement Artificial Intelligence in their products because executives know they have to do it to remain competitive, cut costs, and stay on the cutting edge. That doesn’t mean they know how to implement the various steps required to do it well. The CIO and the IT department have a much better grasp. If they don’t have the manpower or the budget required, however, they may not be able to execute on the vision.

Two Kinds of Artificial Intelligence

The second article, by Marcos Lima in The Conversation, separates AI into two segments: Artificial General Intelligence (AGI)—machine intelligence running everything—and narrow Artificial Intelligence. The latter involves teaching intelligent machines to do complex, time-consuming tasks.

He uses technology industry analysis to demonstrate that the former is 10+ years away while the latter is going on around us every day. Good news—as long as we don’t kick the employment can down the road to our great-grandchildren.

Artificial Intelligence and Human Emotion

My point, and I do have one, is that the implementation of AGI will be affected by human emotion: fear of change, reluctance to learn, lack of confidence in the outcome, misallocation of resources, or simple procrastination.

That will give us a breather—time in which to adapt ourselves to the narrow AI implementations that are taking place all around us. Time for school systems to adapt their curricula to a new type of learning. Time for government to do something about all those displaced workers. Time for us to evaluate the pros and cons before leaping ahead.

A couple of sentences in Mr. Olavsrud’s analysis did jump out at me, though. He quotes Nima Negahban, CTO and co-founder of Kinetica, a specialist in GPU-accelerated databases for high-performance analytics:

“Detecting exactly what caused the final incorrect decision leading to a serious problem is something enterprises will start to look at in 2018. Auditing and tracking every input and every score that a framework produces will help with detecting the human-written code that ultimately caused the problem.”
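For readers who like concrete details, here is a rough sketch, in Python, of what that kind of audit trail might look like. Everything in it (the loan-scoring example, the field names, the log format) is my own illustration, not anything from Kinetica or Mr. Negahban; the idea is simply to record every input and every score so a bad decision can later be traced back to the code that produced it.

    # Minimal sketch of the audit trail Negahban describes: log every input
    # and every score a model produces so a bad decision can be traced back.
    # All names here are hypothetical illustrations, not a real product's API.
    import json
    import time
    import uuid

    def score_loan_application(features: dict) -> float:
        """Stand-in for a real model; returns a score between 0 and 1."""
        # A toy rule, not a real model: weigh income against existing debt.
        return max(0.0, min(1.0, features["income"] / (features["debt"] + features["income"])))

    def audited_score(features: dict, model_version: str, log_path: str = "audit.log") -> float:
        """Score the input and append a full audit record before returning."""
        score = score_loan_application(features)
        record = {
            "id": str(uuid.uuid4()),        # unique key for tracing one decision
            "timestamp": time.time(),       # when the score was produced
            "model_version": model_version, # which code produced it
            "inputs": features,             # every input, verbatim
            "score": score,                 # every score, verbatim
        }
        with open(log_path, "a") as log:
            log.write(json.dumps(record) + "\n")
        return score

    if __name__ == "__main__":
        print(audited_score({"income": 52000, "debt": 18000}, model_version="v1.3.0"))

With a trail like that, an analyst can replay any decision and see exactly which inputs, and which version of the code, led to it.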

But what if the Artificial Intelligence decides that the cause of the problem is the human, not the code?
