The techno-optimists are driving AI forward.

And we, as citizens, are bombarded by the promises and portents of its consequences. AI will destroy our jobs. AI will eliminate drudgery and leave us more time to be creative. AI will solve our information overload. AI will save lives—on the road, in healthcare, on the battlefield. AI will end humanity. It will be our friend, says Bill Gates. It will be our enemy, says Elon Musk.

So which is it?

I’m still mentally unpacking from my trip to Estonia, the week before last. One of my most stimulating conversations that week was with Marten Kaevats, National Digital Advisor to the Prime Minister. Marten is a thirty-something thinker with shocking hair, a rambling, breathless rate of speech, and a knack for explaining difficult concepts using only the objects in his pockets. His job is to help the government of Estonia create the policies that will help build a better society atop digital foundations.

Marten and I had a long chat about AI—by which I mean, I nudged Marten once, snowball-like, at the top of an imaginary hill, and he rolled down it, gaining speed and size all the time, until flipcharts and whiteboard markers were fleeing desperately out of his path.

Here’s what I took away from it.

Fuzzy Language = Fuzzy Thinking = Fuzzy Talk

Marten’s very first sentence on the topic hit me the hardest: ‘You cannot get the discussion going if people misunderstand the topic.’

That is our problem, isn’t it? ‘AI’—artificial intelligence—is a phrase from science fiction that has suddenly entered ordinary speech. We read it in headlines. We hear it on the news. It’s on the lips of businesspeople and technologists and academics and politicians around the world. But no one pauses to define it before they use it. They just assume we know what they mean. But I don’t. Science fiction is littered with contradictory visions of AI. Are we talking about Arnold Schwarzenegger’s Terminator? Alex Garland’s Ex Machina? Stanley Kubrick’s HAL in 2001: A Space Odyssey? Ridley Scott’s replicants in Blade Runner? Star Wars’ C-3PO? Star Trek’s Lt. Commander Data?

Our use of the term ‘AI’ in present-day technology doesn’t clear things up much, either. Is it Amazon’s Echo? Apple’s Siri? Elon Musk’s self-driving Tesla? Is it the algorithm that predicts which show I’ll want to watch next on Netflix? Is it the annoying ad for subscription-service men’s razors that seems to follow me around everywhere while I browse the Internet? Is that AI? If so, god help us all…

We don’t have a clear idea of what any of these people are talking about. So how can society possibly get involved in the conversation—a conversation that, apparently, could decide the fate of humanity?

We’re Confusing Two Separate Conversations

Society needs to have two separate conversations about ‘artificial intelligence’. One conversation has to do with the Terminators and the C-3POs of our imagination. This is what we might call strong AI: self-aware software or machines with the ability to choose their own goals and agendas. Whether they choose to work with us, or against us, is a question that animates much of science fiction—and which we might one day have to face in science-reality. Maybe before the mid-point of this century. Or maybe never. (Some AI experts, like my good friend Robert Elliott Smith, have deep doubts about whether it’ll ever be possible to build artificial consciousness. Consciousness might prove to be a unique property of complex, multi-celled organisms like us.)

The other, more urgent conversation we need to have concerns the kind of AI that we know is possible. Call it weak AI. It’s not capable of having its own goals or agendas, but it can act on our behalf. And it’s smart enough to perform specific tasks as well as, or better than, we could ourselves. This is Tesla’s autopilot: it can drive my car more safely than I can, but it doesn’t know that it’s ‘driving a car’, nor can it decide it’d rather read a book. This is IBM’s chess-playing Deep Blue, or Google DeepMind’s AlphaGo: they can play strategy games better than the best human, but they do not know that they’re ‘playing a game’, nor could they decide that they’d really rather bake cookies.
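To make the distinction concrete, here is a minimal sketch in Python. Everything in it is hypothetical, invented for illustration rather than drawn from any real product: the point is simply that a ‘weak AI’ agent’s objective is handed to it from outside, and nothing in its code lets it choose a different one.

```python
# A minimal sketch of 'weak AI': the objective is fixed by a human
# owner; the agent can only get better at pursuing it.
# All names here are illustrative, not any real product's API.

from typing import Callable, List


class WeakAgent:
    """An agent that acts on its owner's behalf toward a fixed goal."""

    def __init__(self, objective: Callable[[str], float]):
        # The goal is supplied from outside and never changes.
        self._objective = objective

    def choose(self, actions: List[str]) -> str:
        # Pick whichever available action scores best against the
        # owner-supplied objective. Nothing here lets the agent
        # invent a new objective ('bake cookies instead').
        return max(actions, key=self._objective)


# The owner defines what 'good' means; the agent merely executes.
stay_in_lane = lambda action: 1.0 if action == "steer_center" else 0.0
autopilot = WeakAgent(objective=stay_in_lane)
print(autopilot.choose(["steer_left", "steer_center", "steer_right"]))
# -> steer_center
```

However good such an agent gets at scoring its options, the question ‘score them against what?’ is always answered by a human. Strong AI is, precisely, the machine that could answer it for itself.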

Most present-day public discourse on AI confuses these two very different conversations, so it’s hard to make clear arguments, or reach clear views, about either of them.

A Clearer Conversation (If You Speak Estonian)

Back to my chat two weeks ago with Marten. What makes him such a powerful voice in Estonia on the questions of how technology and society fit together is that he doesn’t have a background in computer science. He began his career as a professional protestor (advocating rights for cyclists), then spent a decade as an architect and urban planner, and only from there began to explore the digital foundations of cities. When Marten talks technology, he draws, not upon the universal language and concepts of programmers, but upon the local language and concepts of his heritage.

Marten and his colleagues in the Estonian government have drawn from local folklore to conduct the conversation that Estonians need to have about ‘weak AI’ in language that every Estonian can understand. So, instead of talking with the public about algorithms and AI, they talk about ‘kratt’.

Every Estonian—even every child—is familiar with the concept of kratt. For them it’s a common, centuries-old folk tale. Take a personal object and some straw to a crossroads in the forest, and the Devil will animate the straw-thing as your personal slave in exchange for a drop of blood. In the old stories, these kratt had to do everything their master ordered them to. Often they were used for fetching things, but also for stealing things on their master’s behalf or for battling other kratt. ‘Kratt’ turns out to be an excellent metaphor to help Estonians—regardless of age or technical literacy—debate deeply the specific opportunities and ethical questions, the new rights and new responsibilities, that they will encounter in the fast-emerging world of weak AI servants.

Already, Estonian policy makers have clarified a lot of the rules these agents will live under. #KrattLaw has become a national conversation, from Twitter to the floor of their parliament, out of which is emerging the world’s first legislation for the legal personhood, liability and taxation of AI.

Translating ‘Kratt’?

Is there an equivalent metaphor to help the rest of us do the same? In 1920, the Czech science fiction writer Karel Čapek introduced the word ‘robot’ (from the Czech word ‘robota’, meaning forced labor). At the time—and ever since—it has helped us to imagine, to create and to debate a world in which animated machines serve us.

Now, we need to nuance that concept to imagine and debate a world in which our robots represent us in society and exercise rights and responsibilities on our behalf: as drivers of our cars, as shoppers for our groceries, as traders of our stock portfolios or as security guards for our property.

I haven’t found the perfect metaphor yet; if you do, please, please share it with me. The ideal metaphor would:

  1. Capture the notion of an agent that represents, or is an extension of, our will;
  2. Omit the notion that the agent could formulate its own goals or agenda; and
  3. Be instantly familiar, and thus intuitive, to a wide range of people.

My first thought was a ‘genie’, but that’s not quite right. Yes, a genie is a slave to the master of the lamp (1), and yes we’re all familiar with it (3), but it also has its own agenda: to trick the master into setting it free (failing 2). That will to escape would forever muddle our public conversation, mixing up ‘weak’ and ‘strong’ AI.

My other thought was a ‘familiar’, which fits the concept of ‘weak AI’ closely. In Western folklore, a familiar (or familiar spirit) is a creature, often a small animal like a cat or a rat, that serves the commands of a witch or wizard (1) and doesn’t have much in the way of its own plans (2). But I doubt enough people are familiar (ba-dum tss) with the idea for it to be of much use in public policy debates—except, perhaps, among Harry Potter fans and other readers of fantasy fiction.

We Can Start Here

I only know that we need this conceptual language. During last month’s stock market collapse, billions of dollars were lost by trading bots that joined in the sell-off. Is anyone to blame? If so, who? Or who will be to blame when—as will eventually happen—a Tesla on autopilot runs over a child? The owner of the algorithm? Its creator? The government, for letting the autopilot drive on the road?
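To see how bots could ‘join in’ a sell-off, here is a toy sketch with entirely made-up numbers (no real market data or trading API): a market of simple stop-loss bots, each selling once the price dips a fixed percentage below its peak, where every sale pushes the price down far enough to trigger the next wave.

```python
# A toy sketch of how simple trading bots can amplify a sell-off.
# All numbers are invented for illustration.

N_BOTS = 50
PEAK = 100.0
IMPACT = 0.4  # how far one bot's sale pushes the price down

# Each bot sells once the price dips its own threshold below the
# peak: 1% for the twitchiest bot, up to ~9% for the calmest.
triggers = [0.01 + 0.0016 * i for i in range(N_BOTS)]
sold = [False] * N_BOTS

price = PEAK - 2.0  # a small initial shock starts the cascade
wave = 0
while True:
    sellers = [i for i in range(N_BOTS)
               if not sold[i] and price < PEAK * (1 - triggers[i])]
    if not sellers:
        break
    for i in sellers:
        sold[i] = True
    price -= IMPACT * len(sellers)  # every sale deepens the dip...
    wave += 1                       # ...which triggers the next wave
    print(f"wave {wave}: {len(sellers)} bots sold, price now {price:.2f}")
```

No single bot ‘decides’ to crash the market; each just follows its owner’s rule. That is exactly why the question of blame has no obvious answer.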

Every week, artificially intelligent agents generate more and more headlines. Our current laws, policy-making, ethics and intuitions are failing to keep pace.

With new language, we can begin to catch up.