MRQA 2018: Machine Reading for Question Answering

Workshop at ACL 2018
Date: Thursday, July 19, 2018
Room: 210
Contact: mrqa2018@googlegroups.com

Call for Papers

Machine Reading for Question Answering (MRQA) has become an important testbed for evaluating how well computer systems understand human language, as well as a crucial technology for industry applications such as search engines and dialog systems. Successful MRQA systems must deal with a wide range of important phenomena, including syntactic attachment, coreference, and entailment. Recognizing the potential of MRQA as a comprehensive language understanding benchmark, the research community has recently created a multitude of large-scale datasets drawn from text sources such as Wikipedia (WikiReading, SQuAD, WikiHop), news and other articles (CNN/Daily Mail, NewsQA, RACE), fictional stories (MCTest, CBT, NarrativeQA), and general web sources (MS MARCO, TriviaQA, SearchQA). These new datasets have in turn inspired an even wider array of new question answering systems.

Despite this rapid progress, there is much to understand about these datasets and systems. While in-domain test accuracy has been improving rapidly on these datasets, systems struggle to generalize gracefully when tested on new domains and datasets. The ideal MRQA system is not only accurate on in-domain data, but is also interpretable, robust to distributional shift, able to abstain from answering when there is no adequate answer, and capable of making logical inferences (e.g., via entailment and multi-sentence reasoning). Meanwhile, the diversity of recent datasets calls for an analysis of the various natural language phenomena (e.g., coreference, paraphrase, entailment, multi-step reasoning) these datasets present.

We seek submissions on the following topics:

  • Accuracy: How can we make MRQA systems more accurate?
  • Interpretability: How can systems provide rationales for their predictions? To what extent can cues such as attention over the document be helpful, compared to direct explanations? Can models generate derivations that justify their predictions?
  • Speed and Scalability: Can models scale to consider multiple, lengthy documents, or even the entire web, as an information source? Similarly, can they scale to richer answer spaces, such as sets of spans or entities, instead of a single answer?
  • Robustness: How can systems generalize to other datasets and settings beyond the training distribution? Can we guarantee good performance on certain types of questions or documents?
  • Dataset Creation: What are effective methods for building new MRQA datasets?
  • Dataset Analysis: What challenges do current MRQA datasets pose?
  • Error Analysis: What types of questions or documents are particularly challenging for existing systems?

Important Dates

  • Deadline for submission: Monday, April 23, 2018
  • Notification of acceptance: Tuesday, May 15, 2018
  • Deadline for camera-ready version: Monday, May 28, 2018
  • Workshop Date: Thursday, July 19, 2018

All submission deadlines are 11:59 PM UTC-12 (anywhere in the world).

Financial Assistance

We can offer partial financial aid to student authors who demonstrate significant financial need. Instructions on how to apply for financial assistance will be provided after paper acceptance decisions have been finalized.

Best Paper Award

An award of $500 will be given to the best paper of MRQA 2018.

Submission Guidelines

We seek submissions of at least 4 and at most 8 pages, excluding references. All submissions will be reviewed in a single track, regardless of length. Please format your papers using the standard ACL style files. Submission is electronic, via the Softconf START system.

We also accept submissions of work that has been published or is under submission elsewhere. Recently published work should clearly indicate the original venue and will be accepted if the organizers think it will benefit from exposure to the workshop audience; such work will not be included in the workshop proceedings. All other submissions will go through a double-blind review process.