A Machiavellian machine raises ethical questions about artificial intelligence


Researchers have developed a bot capable of the deceptions required to prosper in the online strategy game “Diplomacy”

BY:

Eric De Grasse / Chief Technology Officer

PROJECT COUNSEL MEDIA

1 December 2022 (San Francisco, CA) – Deception is a milestone in cognitive development because it requires an understanding of how others might think and act. That ability is on display, to a limited extent, in Cicero, an artificial intelligence system designed to play “Diplomacy”, a game of wartime strategy in which players negotiate, make alliances, bluff, withhold information, and sometimes mislead. Cicero, developed by Meta and named after the famed Roman orator, pitted its artificial wits against human players online — and outperformed most of them.

The arrival of an AI that can play the game as competently as people opens the door to more sophisticated human-AI interactions, such as better chatbots and optimal problem-solving where compromise is essential. But given that Cicero demonstrates AI can, if necessary, use underhand tactics to fulfil certain goals, the creation of a Machiavellian machine also raises the question of how much agency we should outsource to algorithms, and whether a similar technology should ever be employed in real-world diplomacy.

Last year, the EU commissioned a study into the use of AI in diplomacy and its likely impact on geopolitics. In an open public comment session held last month in Brussels, one AI commentator said:

“We humans are not always good at conflict resolution. If AI could complement human negotiation and stop what’s happening in Ukraine, then why not?” 

Like chess, the game of “Diplomacy” can be played on a board or online. Up to seven players vie to control different European territories. In an initial round of actual diplomacy, players can strike alliances or agreements to hold their positions or move forces around, including to attack or to defend an ally.

The game is regarded as something of a grand challenge in AI because, in addition to strategy, players must be able to understand others’ motivations. There is both co-operation and competition, with betrayal a risk.

That means, unlike in chess or Go, communication with fellow players matters. Cicero, therefore, combines the strategic reasoning of traditional games with natural language processing. During a game, the AI works out how fellow players might behave in negotiations. Then, by generating appropriately worded messages, it persuades, cajoles or coerces other players into making partnerships or concessions to execute its own game plan. Meta scientists trained Cicero using online data from about 40,000 games, including 13 million in-game messages.
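To make that loop concrete, here is a minimal Python sketch of the "plan first, then talk" pattern described above. Everything in it is an illustrative assumption on my part (the function names predict_intents, plan_move and draft_message, the three toy orders, and the scoring), not Cicero's actual code, which relies on large neural models for both the planning and the dialogue steps.

```python
# Illustrative sketch of a Cicero-style "plan first, then talk" turn.
# Every name and data structure below is hypothetical: Meta's system
# uses large neural models for both planning and message generation.

import random

def predict_intents(opponents, history):
    """Guess what each opponent is likely to do this turn.
    Cicero uses a learned model conditioned on the dialogue so far;
    this stub just samples from a few plausible orders."""
    return {name: random.choice(["hold", "attack", "support"])
            for name in opponents}

def plan_move(my_options, predicted_intents):
    """Choose the move with the best expected payoff given the
    predicted intents (toy scoring, for illustration only)."""
    attackers = sum(1 for i in predicted_intents.values() if i == "attack")
    supporters = sum(1 for i in predicted_intents.values() if i == "support")
    payoff = {
        "attack": supporters - attackers,  # attacking pays off with support
        "hold": attackers - 1,             # holding is safer under threat
        "support": 0,                      # neutral baseline
    }
    return max(my_options, key=lambda move: payoff[move])

def draft_message(recipient, planned_move):
    """Produce a negotiation message consistent with the chosen plan.
    Cicero conditions a language model on its plan; we use a template."""
    return f"To {recipient}: I intend to {planned_move} this turn. Will you back me?"

# One simulated negotiation round.
opponents = ["England", "Germany", "Russia"]
intents = predict_intents(opponents, history=[])
move = plan_move(["hold", "attack", "support"], intents)
for opp in opponents:
    print(draft_message(opp, move))
print(f"Planned move: {move}")
```

The ordering is the design point worth noticing: the message is generated to serve a plan that has already been chosen, which is precisely what makes strategic, and potentially deceptive, communication possible.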

After playing 82 people in 40 games in an anonymous online league, Cicero ranked in the top 10 per cent of participants playing more than one game. There were hiccups: it sometimes spat out contradictory messages on invasion plans, confusing participants. Still, only one opponent suspected Cicero might be a bot (all was revealed afterwards).

Professor David Leslie of the Alan Turing Institute in London described Cicero as:

“a very technically adept Frankenstein. This is a very impressive stitching together of multiple technologies, but also a window into a troubling future. I am reminded of the 2018 UK parliamentary committee report on AI, which advised that AI should never be vested with the autonomous power to hurt, destroy or deceive human beings”.

The obvious first worry is anthropomorphic deception: when a person wrongly believes, as one opponent did, that there is another human behind the screen. That can pave the way for people to be manipulated by technology.

The second concern is AI equipped with cunning but lacking a sense of fundamental moral concepts, such as honesty, duty, rights and obligations. A system is being endowed with the capacity to deceive, but it is not operating in the moral life of our community. To state the obvious, an AI system is, at the basic level, amoral. Cicero-like intelligence is probably best applied to tough scientific problems like weather analysis, not to sensitive geopolitical issues.

Interestingly, Cicero’s creators claim that its messages, filtered for toxic language, ended up being “largely honest and helpful” to other players, speculating that success may have arisen from proposing and explaining mutually beneficial moves. Perhaps, instead of marvelling at how well Cicero plays “Diplomacy” against humans, we should be despairing at how poorly humans play diplomacy in real life.
