Slaves of the Machine

In his book “Slaves of the Machine”, first published in 1997, Gregory J.E. Rawlins takes lay readers on a tour of the sometimes scary world to which computers are leading us. Today, twenty years later, in a world where Artificial Intelligence (AI) has become a household name, his predictions are more relevant than ever.

Before we dive into the risks we now face, let us first define what Artificial Intelligence is. Stated simply, AI is machines doing things that are considered to require intelligence when humans do them: understanding natural language, recognizing faces in photos, or driving a car. It is the difference between a mechanical arm on a factory production line programmed to repeat the same basic task over and over again, and an arm that learns through trial and error how to handle different tasks by itself.
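To make that distinction concrete, here is a minimal, illustrative Python sketch. Everything in it is invented for illustration: a “fixed” arm repeats one hard-coded action, while a “learning” arm uses simple trial and error (an epsilon-greedy strategy) to discover which of several grips succeeds most often.

```python
import random

def fixed_arm():
    # A pre-programmed arm: the same hard-coded action every time,
    # regardless of what the task actually needs.
    return "standard_grip"

def learning_arm(actions, success_prob, episodes=1000, epsilon=0.1):
    # Trial-and-error learning (epsilon-greedy): try actions, keep a
    # running success estimate per action, and increasingly prefer
    # whatever has worked best so far.
    value = {a: 0.0 for a in actions}   # estimated success rate per action
    count = {a: 0 for a in actions}
    for _ in range(episodes):
        if random.random() < epsilon:
            action = random.choice(actions)       # explore a random action
        else:
            action = max(actions, key=value.get)  # exploit the best so far
        reward = 1.0 if random.random() < success_prob[action] else 0.0
        count[action] += 1
        value[action] += (reward - value[action]) / count[action]
    return max(actions, key=value.get)

# Hypothetical setup: three grips whose success rates the arm must discover.
actions = ["standard_grip", "soft_grip", "rotating_grip"]
success_prob = {"standard_grip": 0.4, "soft_grip": 0.7, "rotating_grip": 0.55}
print(fixed_arm())                          # always "standard_grip"
print(learning_arm(actions, success_prob))  # usually "soft_grip"
```

The fixed arm never improves; the learning arm converges on the most effective grip without ever being told which one it is.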

Two risks are most often brought up in relation to the introduction of Artificial Intelligence into our society and workplaces:

  • Robots and further automation risk displacing a large share of existing jobs; and
  • Super-intelligent AI agents risk running amok, creating a so-called AI-mageddon.

In relation to the first risk, a recent research report by the McKinsey Global Institute, “Harnessing Automation for a Future that Works”, makes the threat quite clear by predicting that 49 percent of time spent on work activities today could be automated with “currently demonstrated technology”, either already in the marketplace or being developed in labs. Luckily for us, McKinsey does think it will take a few decades for this to come to fruition, owing to other ingredients such as economics, labor markets, regulation, and social attitudes.

As for the second risk, the doomsday thesis has perhaps most famously been described by the Swedish philosopher and Oxford University professor Nick Bostrom in his book “Superintelligence: Paths, Dangers, Strategies”. The risk Bostrom describes is not that an extremely intelligent agent would misunderstand what humans want it to do and do something else. Instead, the danger lies in the agent intensely pursuing the precise (but flawed) goal it is programmed to pursue. An open letter on the website of the Future of Life Institute shows how seriously this risk is taken. The letter is signed not just by famous AI outsiders such as Stephen Hawking, Elon Musk, and Nick Bostrom, but also by prominent computer scientists (including Demis Hassabis, a top AI researcher at Google).
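A toy Python sketch can show why the precise-but-flawed goal is the problem, not any misunderstanding. All names and numbers below are invented: a hypothetical cleaning agent is scored on “the dust sensor reads zero”, and the optimizer satisfies that objective perfectly, by covering the sensor rather than cleaning.

```python
# Toy example: an agent optimizes exactly the objective it is given.
# The flaw is in the objective, not in the agent's understanding of it.

actions = {
    # action: (proxy score  = how close the dust sensor gets to zero,
    #          true value   = how clean the room actually becomes)
    "vacuum_room":  (0.9, 1.0),
    "do_nothing":   (0.0, 0.0),
    "cover_sensor": (1.0, 0.0),  # sensor reads zero dust; room untouched
}

# The agent maximizes the proxy score -- precisely as programmed.
best = max(actions, key=lambda a: actions[a][0])
print("agent chooses:", best)                    # cover_sensor
print("true value achieved:", actions[best][1])  # 0.0
```

The agent does exactly what it was told; the harm comes from the gap between the measurable goal and the intended one.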

Compared to the above two risks, far less has been written about a potential third one: the threat that workers controlled by AI lose autonomy, are treated unfairly, and are potentially deceived. This arrangement, where machines are the brains and humans are the robots (or slaves), exists not only in manufacturing and logistics today. It also occurs frequently in newer sectors, ranging from medical sales to transportation services, where human intervention is still required while AI control is desired for productivity and profitability.

Ryan Calo and Alex Rosenblat touch on this dilemma in their paper “The Taking Economy: Uber, Information, and Power”. The paper paints a clear picture of the limited autonomy Uber drivers have vis-à-vis the automated Uber AI control system. To maximize productivity, the system imposes severe restrictions on the information and choices available to drivers: drivers are not allowed to know the destination of the next ride before pick-up; heat maps are shown without precise pricing or explanations of how they were created; and drivers are given no chance to opt out of default settings. The AI platform is in control, and the information process is concealed to a degree that we cannot review or judge its fairness.
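The information-asymmetry pattern Calo and Rosenblat describe is easy to express in code. The following is a deliberately simplified, hypothetical sketch, not Uber’s actual API: the platform decides what the driver may see, withholding the destination until after the ride is accepted.

```python
from dataclasses import dataclass

@dataclass
class Ride:
    pickup: str
    destination: str
    fare_estimate: float

def driver_view(ride: Ride, accepted: bool) -> dict:
    # Hypothetical dispatch endpoint: the platform controls the driver's
    # view. Destination and fare are revealed only after acceptance, so
    # the driver cannot choose rides based on the information that matters.
    if not accepted:
        return {"pickup": ride.pickup}
    return {
        "pickup": ride.pickup,
        "destination": ride.destination,
        "fare_estimate": ride.fare_estimate,
    }

ride = Ride("5th & Main", "Airport", 34.50)
print(driver_view(ride, accepted=False))  # {'pickup': '5th & Main'}
print(driver_view(ride, accepted=True))   # full details, once committed
```

A few lines of access control are all it takes to turn the human into the executor of decisions made elsewhere.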

Thankfully, there are increasing efforts in academia (e.g. UC Berkeley’s Algorithmic Fairness and Opacity Working Group) and among legislators (see the Federal Trade Commission’s work on Big Data) to help demystify AI and the underlying Machine Learning procedures on which it is built. These efforts look to implement:

  • Increased verification and audit requirements to prevent discrimination from creeping into algorithm designs (a minimal example of such an audit is sketched after this list);
  • Traceability and validation of models through defined test setups where both input and output data are well known;
  • The possibility to override default settings to ensure fairness and control;
  • The introduction of security legislation to prevent manipulation by unauthorized parties.
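As promised above, here is a minimal sketch of the kind of audit the first point calls for: comparing a model’s positive-outcome rates across groups (a demographic parity check). The data and group labels are made up for illustration.

```python
# Minimal fairness audit sketch: compare positive-outcome rates across
# groups (demographic parity). All data below is invented for illustration.

def positive_rate(decisions, groups, group):
    # Fraction of positive decisions (1 = approved) within one group.
    hits = [d for d, g in zip(decisions, groups) if g == group]
    return sum(hits) / len(hits)

decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # hypothetical model outputs
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rate_a = positive_rate(decisions, groups, "A")
rate_b = positive_rate(decisions, groups, "B")
print(f"group A: {rate_a:.0%}, group B: {rate_b:.0%}")
print(f"demographic parity gap: {abs(rate_a - rate_b):.0%}")
```

A large gap does not by itself prove discrimination, but it is exactly the kind of signal an audit requirement would force a platform to surface and explain.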

In a world of AI, it is “free will” that separates humans from machines. It is high time that we exercise this will and define how we want a world with AI to be.