MRQA 2018:
Machine Reading for Question Answering

Workshop at ACL 2018

Date: July 19 or 20, 2018

Contact: mrqa2018@googlegroups.com

Machine Reading for Question Answering (MRQA) has become an important testbed for evaluating how well computer systems understand human language, as well as a crucial technology for industry applications such as search engines and dialog systems. Existing MRQA systems have been trained to answer reading comprehension questions over documents from sources such as Wikipedia (WikiReading, SQuAD), news articles (CNN/Daily Mail, NewsQA), fictional stories (MCTest, CBT), and general web sources (MS MARCO, TriviaQA, SearchQA).

Despite rapid progress in this area, much remains to be understood about MRQA systems and datasets. Beyond increasing in-domain test accuracy, we would like to build MRQA systems that are interpretable, robust to distributional shift, able to recognize unanswerable questions, and able to adequately model inference (e.g., entailment and multi-sentence reasoning). Meanwhile, the recent cascade of new MRQA datasets calls for deeper analysis of the natural language phenomena (coreference, paraphrase, entailment, multi-hop reasoning, etc.) that these datasets present.

The goal of this workshop is to bring together researchers to discuss and advance research on MRQA systems and datasets. We seek submissions in the following areas:

  • Accuracy: How can we improve overall accuracy on MRQA?
  • Interpretability: Can models provide a rationale for their predictions? In what ways can attention over the document be helpful? Can models generate logical forms that justify their predictions?
  • Speed / Scalability: Can models scale to consider multiple, lengthy documents, or even the entire web as an information source? Similarly, can they scale to richer answer spaces, such as sets of spans or entities instead of a single answer span?
  • Robustness: Can models generalize to other datasets and settings beyond the training distribution? Can they guarantee good performance on certain types of questions or documents?
  • Creation, analysis and evaluation of datasets: What kind of datasets do we want to create? What are effective methodologies for creating them? Can we quantify the challenges posed by MRQA datasets?
  • Analysis of model predictions: What types of questions or documents are particularly challenging for existing systems?

Important Dates

Deadline for submission: TBD
Notification of acceptance: TBD
Deadline for camera-ready version: TBD
Early registration deadline: TBD
Workshop Date: TBD

Note: The first three deadlines are at 11:59 PM GMT-12 (anywhere in the world).

Invited Speakers

Steering Committee

Organizing Committee

Program Committee

  • Yoav Artzi
  • Danish Contractor
  • Rajarshi Das
  • Bhuwan Dhingra
  • Xinya Du
  • Matt Gardner
  • Mor Geva
  • Kevin Gimpel
  • Luheng He
  • Jonathan Herzig
  • Mohit Iyyer
  • Mandar Joshi
  • Dongyeop Kang
  • Ni Lao
  • Kenton Lee
  • Omer Levy
  • Nasrin Mostafazadeh
  • Karthik Narasimhan
  • Rodrigo Nogueira
  • Panupong (Ice) Pasupat
  • Hoifung Poon
  • Siva Reddy
  • Xiang Ren
  • Tim Rocktäschel
  • Shimon Salant
  • Swabha Swayamdipta
  • Kristina Toutanova
  • Adam Trischler
  • Shuohang Wang
  • Tong Wang
  • Johannes Welbl
  • Caiming Xiong
  • Victor Zhong