The Salesforce Research team is attempting to capture the nuances of natural language processing with a new generalized model. The team described its approach in a recently published paper on the Natural Language Decathlon (decaNLP).

According to Richard Socher, chief scientist at Salesforce and leader of the research team, natural language processing is opening up new opportunities for machines, but language understanding remains difficult because it connects to, and depends on, other areas of intelligence, such as visual, emotional and logical reasoning.

“There are ambiguities and complexities in word choice, grammar, context, tone, sentiment, humor, cultural reference and more. It takes us humans years to master, so imagine the complexities of teaching a computer to understand these various facets in a single unified model. I’ve focused my career on this challenge,” he said.

The model is designed to tackle 10 different NLP tasks at once and eliminate the need to build and train individual models for each NLP problem.

“Deep learning has improved on many natural language processing (NLP) tasks individually. However, general NLP models cannot emerge within a paradigm that focuses on the particularities of a single metric, dataset, and task. We introduce the Natural Language Decathlon (decaNLP),” the researchers wrote in the paper.

The 10 tasks are:

  1. Question answering
  2. Machine translation
  3. Summarization
  4. Natural language inference
  5. Sentiment analysis
  6. Semantic role labeling
  7. Relation extraction
  8. Goal-oriented dialogue
  9. Semantic parsing
  10. Common sense pronoun resolution
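The unifying idea behind the model, as described in the paper, is to cast every one of these tasks as question answering over a context, so a single input-output signature covers all ten. The Python sketch below illustrates that framing; the triples and the `as_qa_example` helper are illustrative examples, not drawn from the actual decaNLP datasets or code.

```python
def as_qa_example(question, context, answer):
    """Frame any NLP task as a (question, context, answer) triple."""
    return {"question": question, "context": context, "answer": answer}

# Three different tasks, one shared format (hypothetical examples):
examples = [
    # Sentiment analysis
    as_qa_example(
        "Is this sentence positive or negative?",
        "The movie was a delight from start to finish.",
        "positive",
    ),
    # Summarization
    as_qa_example(
        "What is the summary?",
        "A long news article about a product launch ...",
        "A company launched a product.",
    ),
    # Machine translation
    as_qa_example(
        "What is the translation from English to German?",
        "Hello, world.",
        "Hallo, Welt.",
    ),
]

# Because every task shares this shape, one model interface,
# model(question, context) -> answer, can serve all of them.
for ex in examples:
    print(ex["question"], "->", ex["answer"])
```

Under this framing, adding an eleventh task would mean writing new triples rather than designing a new architecture, which is the point of the decathlon setup.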

Socher is calling the model “the Swiss Army knife of NLP” because it packs numerous tasks into one tool. According to Socher, traditional approaches require a customized architecture for each task, hindering the emergence of general NLP models.

“If you look at the broader landscape of AI, there’s been progress in multitask models as we’ve slowly evolved from focusing on feature engineering to feature learning and then to neural-architecture engineering for specific tasks. This has allowed for a fair amount of NLP improvements, but what we’re really looking for is a system that can solve all potential tasks,” said Socher.

This led the team to a related question: whether a dataset could be large enough to cover all of these tasks. In computer vision, for example, ImageNet is a large dataset spanning many visual categories, but no equivalent existed for NLP, so the team set out to build one, Socher explained.

“DecaNLP has the potential to change the way the NLP community is focusing on single tasks. Just like ImageNet spurred a lot of new research, I hope this will allow us to think about new types of architectures that generalize across all kinds of tasks,” he said.

In addition, Socher explains that decaNLP’s multitask question answering model can tackle unknown, never-before-seen tasks, which can lead to better chatbots and a broader range of new applications.

Going forward, the team will continue to work on this model. According to Socher, all NLP tasks can be mapped to question answering, language modeling, and dialogue systems, so it is important to improve the performance of those within decaNLP.

“I hope that providing a powerful single default NLP model will also empower programmers without deep NLP expertise to quickly make progress on new NLP tasks, languages and challenges. This in turn will mean that products and tools we can speak to and interact with will become more broadly available,” he said.