Facebook Wants to Make Chatbots More Conversational

The social media giant is making its conversational artificial intelligence training data open source

For all of the buzz over the past couple of years about how chatbots are revolutionizing the way mobile devices deliver information, enable online purchases and handle customer service, these artificially intelligent apps still aren’t much for conversation. That’s because the ability to achieve natural dialogue between person and machine—one that involves more than simple commands and canned responses—is still very much a work in progress. Facebook hopes to change that, however, by creating a shared platform for training conversational AI systems.

Chatbots—automated computer programs used, for example, for customer service or as personal assistants on smartphones—use dialogue that’s mostly pre-scripted, says Yann LeCun, director of Facebook’s AI Research (FAIR) team. “If you go [off-] script they don’t perform very well.” Other types of chatbots are entertaining but not very useful for any particular purpose, he adds, citing Tay, the AI Microsoft introduced last year via Twitter and quickly took offline after it “learned” to produce offensive tweets. “What we don’t have is a chatbot that can actually [learn to] do something useful,” LeCun says.

LeCun and his team are offering to help programmers develop the next generation of bots with the launch of a shared, open-source repository of AI training data and programs called ParlAI. One of the main goals of developing AI is to create intelligent networks that can have a normal conversation with people, says Jason Weston, a FAIR research scientist. “The technology is not there because certain fundamental research is missing, including the development of AI dialogue algorithms that can be trained to speak in natural language,” as opposed to canned responses.


ParlAI launched with 20 data sets, each of which researchers can use to teach an AI dialogue system a particular skill, whether that is answering questions or gathering the information needed to complete a task such as booking a restaurant reservation. One such data set—bAbI tasks—includes 20 tests designed to gauge, for example, how well an AI app can understand and infer meaning when it is presented with a series of sentences and asked questions about the text. The AI agent needs to be able to understand what is happening and use reason to answer the question, Weston says.
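
To make that concrete, here is a minimal sketch, modeled on the quickstart pattern in ParlAI’s public documentation, that loads one of the bAbI tasks and prints a few dialogue exchanges. The RepeatLabelAgent simply echoes the correct answer back, which makes it a convenient way to inspect what a data set contains; exact module paths, flags and task names may differ between ParlAI versions.

```python
# Minimal sketch following ParlAI's documented quickstart pattern
# (module paths, flags and task names may vary by version).
from parlai.core.params import ParlaiParser
from parlai.core.worlds import create_task
from parlai.agents.repeat_label.repeat_label import RepeatLabelAgent

# Point the framework at bAbI task 1 (the 1,000-example variant).
parser = ParlaiParser()
opt = parser.parse_args(['--task', 'babi:task1k:1'])

# RepeatLabelAgent repeats the gold answer, so the loop below
# effectively prints the stories, questions and answers.
agent = RepeatLabelAgent(opt)
world = create_task(opt, agent)

for _ in range(5):
    world.parley()          # one exchange: the teacher asks, the agent answers
    print(world.display())  # human-readable view of that exchange
    if world.epoch_done():
        break
```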

Some of the data sets are text only; others include images and text that aim to teach language by relating a word to something in the real world, LeCun says. ParlAI is designed to give researchers a unified framework for training and testing dialogue models, especially multi-task training over many data sets at once, he adds. “We hope the combination of these tasks will help a machine acquire more knowledge.”
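
The multi-task training LeCun describes is expressed through the task specification: in the documented interface, a comma-separated task string asks the framework to build a combined world that mixes examples from several data sets. Below is a hedged sketch reusing the setup above; the specific task names are illustrative and may not match every ParlAI release.

```python
# Hedged sketch of a multi-task setup in ParlAI: a comma-separated task
# string mixes examples from several data sets in one loop.
from parlai.core.params import ParlaiParser
from parlai.core.worlds import create_task
from parlai.agents.repeat_label.repeat_label import RepeatLabelAgent

parser = ParlaiParser()
# Two data sets at once; a real experiment would swap in a trainable model.
opt = parser.parse_args(['--task', 'babi:task1k:1,squad'])

agent = RepeatLabelAgent(opt)    # placeholder for a dialogue model
world = create_task(opt, agent)  # ParlAI interleaves episodes from both tasks

for _ in range(10):
    world.parley()
```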

The hype surrounding machine learning and dialogue systems has overtaken the reality of the field, says Brad Hayes, a postdoctoral researcher in the Interactive Robotics Group at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory who is not involved in the Facebook effort. “Language understanding is an immensely broad topic, and creating functional chatbots not only requires an understanding of the meaning behind language but also the ability to synthesize relevant responses.”

Making data sets public will enable broader participation in solving these problems, but Hayes says developing more successful language comprehension and use will take more than feeding additional training data into existing systems. These limitations are most obvious in the simplistic way users must speak to be understood by chatbots and intelligent assistants such as Amazon Alexa. “A good example of this: You can ask Alexa to turn the volume up or down or to set it to a number between 0 and 10. But it doesn’t understand percentages and cannot map them back to the internal scale used by the software,” he says. That kind of gap arises when a programmer dictates how the user must speak to the device rather than designing for a more natural interaction.
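
The volume example comes down to a small normalization step that the assistant’s language layer evidently does not perform. The function below is purely illustrative (it is not Alexa’s actual logic, and the names are hypothetical); it shows how a percentage request could be mapped onto the same 0–10 internal scale the device already accepts.

```python
import re

def volume_from_utterance(utterance: str, max_level: int = 10):
    """Map a spoken volume request onto a hypothetical 0-10 device scale.

    Illustrative only: this is not Alexa's implementation, just a sketch of
    the percentage-to-scale normalization Hayes describes as missing.
    """
    text = utterance.lower()
    # "set the volume to 70 percent" -> 7
    pct = re.search(r"(\d{1,3})\s*(?:%|percent)", text)
    if pct:
        return round(int(pct.group(1)) / 100 * max_level)
    # "set the volume to 7" -> 7
    level = re.search(r"\b(\d{1,2})\b", text)
    if level:
        return min(int(level.group(1)), max_level)
    return None  # unclear request: fall back to asking a clarifying question

print(volume_from_utterance("set the volume to 70 percent"))  # 7
print(volume_from_utterance("turn the volume to 4"))          # 4
```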

One way Facebook is trying to avoid that trap is by letting ParlAI participants test an AI app’s dialogue capabilities by having it interact with people via Amazon Mechanical Turk, a crowdsourcing platform that marshals human intelligence to perform tasks computers are currently unable to do. In the end, the ability to make conversational bots and AI agents depends on how well they can be designed and trained—and that requires researchers to see how their programs interact with actual humans, Weston says.

A sound development strategy matters for the technology’s future: as automated chat systems enter widespread use, chatbots will become crucial to the survival of messaging, support and frontline service programs, with an important caveat, Hayes says. “Chatbots can dramatically improve [customer service] systems and act as a force multiplier for human effort but also carry the potential to destroy user experience and brand reputation if prematurely deployed.”