Facebook’s AI Led Astray By Human Behavior

I don’t closely follow the roiling waters of online advertising giant Facebook. Having moved fast and broken things, they are now thrashing around trying to fix what they broke.

This month a team of researchers at Facebook released some findings from yet another study [3]. Specifically, the experiments (which don’t seem to have been reviewed by an Institutional Review Board) aim to build simple AIs that can “bargain” with humans. This task requires good-enough natural language to communicate with the carbon-based life form, and enough of a model of the situation to effectively reach a deal.

Their technical approach is to use machine learning so that bots can learn by example. Specifically, they used a collection of human-human negotiations and analyzed the behavior to discover algorithms that replicate human-like interactions.
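For concreteness, here is a minimal sketch of that “learning by example” step, in the spirit of maximum-likelihood imitation: a tiny recurrent language model trained to predict the next word of each human negotiation turn. The toy corpus, the PyTorch model, and all the sizes are my own inventions for illustration, not the paper’s actual architecture.

```python
# A toy sketch of learning negotiation dialogue "by example":
# fit a small language model to human-human transcripts so it
# imitates the turns it has seen. Corpus and sizes are invented.
import torch
import torch.nn as nn

# Stand-ins for real human-human negotiation turns.
corpus = [
    "i want the book and two hats",
    "you can have the book if i get both balls",
    "ok deal then",
]
vocab = sorted({w for line in corpus for w in line.split()})
stoi = {w: i for i, w in enumerate(vocab)}

class TurnLM(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, ids):
        h, _ = self.rnn(self.emb(ids))
        return self.out(h)

model = TurnLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Maximum-likelihood training: predict each next word from its prefix.
for epoch in range(50):
    for line in corpus:
        ids = torch.tensor([[stoi[w] for w in line.split()]])
        logits = model(ids[:, :-1])
        loss = loss_fn(logits.squeeze(0), ids[0, 1:])
        opt.zero_grad()
        loss.backward()
        opt.step()
```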

With preposterous amounts of computing power, who knows? It might work.

Unfortunately, the results were less than stunning.

Glancing at the conclusions in the paper, the good news is that the method was able to learn “goal maximizing” instead of “likelihood maximizing” behaviors. This is neat, though given the constrained context (we know that the parties are negotiating) it’s less than miraculous.
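To make that distinction concrete, here is a hedged toy sketch (the candidate replies, probabilities, and point values are all invented): a likelihood-maximizing bot picks the reply its language model rates most probable, while a goal-maximizing bot picks the reply it expects to yield the best deal.

```python
# Illustrative only: made-up candidate replies with made-up
# (model probability, expected deal value in points) scores.
candidates = {
    "ok you take the rest":        (0.50, 2.0),
    "i need the book and a hat":   (0.30, 6.0),
    "no deal unless i get a ball": (0.20, 5.0),
}

# Likelihood maximizing: say whatever the model finds most probable.
likelihood_choice = max(candidates, key=lambda r: candidates[r][0])
# Goal maximizing: say whatever is expected to win the most points.
goal_choice = max(candidates, key=lambda r: candidates[r][1])

print("likelihood-maximizing reply:", likelihood_choice)
print("goal-maximizing reply:", goal_choice)
```

The second rule is what makes the bot “negotiate harder”: it will happily pick an improbable, pushy reply if that reply is worth more points.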

The resulting bots aren’t completely satisfactory, though. For one thing, these machine intelligences are, well, pretty mechanical. Specifically, they are obsessive and aggressive, “negotiating harder” than other bots. Also, the conversation generated by the bots made sense at the sentence level, but consecutive sentences did not necessarily make sense together. (The examples sound rather “Presidential” to me.)

But the headline finding was that the silicon-based entities picked up some evil, deceptive tactics from their carbon-based role models. Sigh. It’s not necessarily “lying” (despite Wired magazine [1]), but, in line with “negotiating harder”, the bots learned questionable tactics that probably really are used by their human exemplars. (Again, this rhetoric certainly sounds Presidential to me.)

Such are the hazards of trying to model human behavior: you might succeed too well!

I’m not surprised that this turned out to be a difficult task.

People have been trying to make bots that negotiate since the dawn of computing. The fact that we are not up to our eyeballs in NegotiBots™ suggests that this ain’t easy to do. And the versions we have seen in online markets are, well, peculiar.

One question raised by this study is, what is a good dataset to learn from? This study used a reasonably sized sample, but it was a convenience sample: people* recruited from Amazon Mechanical Turk. Easy to get, but are they representative? And what is the target population that you’d like to emulate?

(* We assume they were all people, but how would you know?)

I don’t really know.

But at least some of the results (e.g., learning aggressive and borderline dishonest tactics) may reflect the natural behavior of Mechanical Turk workers more than that of humans in general. This is a critical question if this technology is ever to be deployed: it will be necessary to make sure that it is learning culturally correct behavior for the cultures in which it will be deployed.

I will add a personal note. I really don’t want to have to ‘negotiate’ with bots (or humans), thank you very much. The deployment of fixed prices was a great advance in retail marketing [2], and it is a mistake to go backwards from this approach.


  1. Liat Clark, Facebook teaches bots how to negotiate. They learn to lie instead. Wired.com, June 16, 2017. http://www.wired.co.uk/article/facebook-teaches-bots-how-to-negotiate-and-lie
  2. Steven Johnson, Wonderland: How Play Made the Modern World. New York: Riverhead Books, 2016.
  3. Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh, and Dhruv Batra, Deal or No Deal? End-to-End Learning for Negotiation Dialogues. arXiv preprint, 2017. https://arxiv.org/abs/1706.05125v1

