Displaying items by tag: FAIR
Facebook had to shut down its chatbot experiment after two AI (artificial intelligence) agents developed their own language to communicate. Researchers at the Facebook AI Research Lab (FAIR) were experimenting with teaching two chatbots, Alice and Bob, how to negotiate with one another. The researchers soon found that the chatbots had gone off script and were creating a unique language of their own without any human input.
“Our interest was having bots who could talk to people,” Mike Lewis of Facebook’s FAIR program told Fast Co. Design, discussing the bots, which were attempting to imitate human speech when they developed their own language.
The two bots, known as dialog agents, were set up as part of an artificial intelligence program designed to teach them to communicate with each other. Using machine learning algorithms, the dialog agents taught each other about human speech and were left alone to develop their communication and conversational skills.
When the researchers returned after leaving the bots alone to learn, they found that the AI software had begun to deviate from comprehensible speech. The bots had created their own language without any input from their human supervisors. The new language was more efficient for communication, the researchers said, but was not helpful in achieving the original task they had been set.
“Agents will drift off understandable language and invent codewords for themselves,” said Dhruv Batra, a visiting research scientist from Georgia Tech at Facebook AI Research speaking to Fast Co. “If I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthand.”
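Batra’s “five copies” example can be made concrete with a toy decoder. The mapping below is invented for illustration; the bots’ actual shorthand was not published in this form, but the principle is the same: token repetition comes to carry a count.

```python
from collections import Counter

def decode_shorthand(utterance):
    """Toy decoder for the drift Batra describes: repeating a token
    n times is read as a request for n copies of that item."""
    return dict(Counter(utterance.split()))

# "If I say 'the' five times, you interpret that to mean
# I want five copies of this item."
request = decode_shorthand("the the the the the")
```

The shorthand is efficient between the two bots, which is exactly why it drifts away from anything a human reader can follow.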
To complete the negotiation training for the bots, the programmers had to alter the way the machines learned. A FAIR spokesperson said, “During reinforcement learning, the agent attempts to improve its parameters from conversations with another agent. While the other agent could be a human, FAIR used a fixed supervised model that was trained to imitate humans.”
The spokesperson added, “The second model is fixed, because the researchers found that updating the parameters of both agents led to divergence from human language as the agents developed their own language for negotiating.”
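The fix the spokesperson describes, updating only one agent while its partner stays frozen, can be sketched in miniature. This is not FAIR’s code; it is a simplified REINFORCE-style loop over a six-word vocabulary, where the frozen partner stands in for the supervised model trained to imitate humans. Because the frozen model never updates, the reward stays anchored to human-like language and the learner cannot co-drift into a private code.

```python
import math
import random

VOCAB = ["i", "want", "the", "ball", "book", "hat"]

# Frozen partner: a fixed preference over tokens, standing in for a
# supervised model trained on human dialogue. Its parameters never update.
FROZEN_HUMAN_MODEL = {"i": 0.25, "want": 0.25, "the": 0.2,
                      "ball": 0.1, "book": 0.1, "hat": 0.1}

def softmax(logits):
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

def train_learner(steps=2000, lr=0.5, seed=0):
    """Only the learner's logits move; the partner model is fixed."""
    rng = random.Random(seed)
    logits = {t: 0.0 for t in VOCAB}  # learner starts uniform
    for _ in range(steps):
        probs = softmax(logits)
        token = rng.choices(VOCAB, weights=[probs[t] for t in VOCAB])[0]
        # Reward: how "human-like" the frozen partner rates this token.
        reward = FROZEN_HUMAN_MODEL[token]
        baseline = sum(probs[t] * FROZEN_HUMAN_MODEL[t] for t in VOCAB)
        # REINFORCE-style policy-gradient update on the learner alone.
        for t in VOCAB:
            grad = (1.0 if t == token else 0.0) - probs[t]
            logits[t] += lr * (reward - baseline) * grad
    return softmax(logits)

probs = train_learner()
# The learner ends up preferring tokens the frozen model rates as human-like.
```

Had both agents been updated, as in the original experiment, the reward target itself would shift each step, and the pair could converge on codewords that score well against each other but mean nothing to people.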
The researchers claim to have broken new ground by giving the chatbots the ability to negotiate and even make compromises. Researchers Mike Lewis and Dhruv Batra said in a blog post that the technology pushes forward the ability to create bots “that can reason, converse and negotiate, all key steps in building a personalized digital assistant.”
Until now, chatbots have been limited to holding short conversations and performing small tasks such as scheduling a meeting. But Facebook’s new code will enable bots to “engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes,” the researchers say. The bots even have the ability to estimate the “value” of an item.
However, in some cases the bots “initially feigned interest in a valueless item, only to later ‘compromise’ by conceding it – an effective negotiating tactic that people use regularly,” the researchers added. The tactic was not actually implemented in the programming by the researchers, “but was discovered by the bot as a method for trying to achieve its goals.”
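The setup the researchers describe is a multi-issue bargaining game: a shared pool of items, private per-bot values for each item, and a deal scored by the summed value of what each side receives. The item names and numbers below are illustrative, not taken from Facebook’s code, but they show why bluffing over a “valueless item” works: the same item is worth different amounts to each bot.

```python
ITEM_POOL = {"book": 1, "hat": 2, "ball": 3}   # counts available to split

def deal_score(values, allocation):
    """A bot's payoff: summed private value of the items it receives."""
    return sum(values[item] * count for item, count in allocation.items())

def is_valid_split(alloc_a, alloc_b):
    """A deal must account for every item in the pool, none left over."""
    return all(alloc_a.get(i, 0) + alloc_b.get(i, 0) == n
               for i, n in ITEM_POOL.items())

# Illustrative private valuations: balls are worthless to Alice, so she can
# "feign interest" in them and concede them cheaply later.
alice_values = {"book": 6, "hat": 2, "ball": 0}
bob_values   = {"book": 0, "hat": 2, "ball": 2}

split_alice = {"book": 1, "hat": 1, "ball": 0}
split_bob   = {"book": 0, "hat": 1, "ball": 3}

alice_payoff = deal_score(alice_values, split_alice)  # → 8
bob_payoff = deal_score(bob_values, split_bob)        # → 8
```

Because neither bot sees the other’s valuations, estimating an item’s value to the opponent, and misrepresenting its own interest, becomes a learnable negotiating tactic rather than a programmed one.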
The bots were also taught to continue negotiating with one another until they had both reached a successful outcome, meaning they never give up.