[Adapt] [Seminar] Automatic Question Generation (AQG) Metrics

朱煜烨 709351575 at sjtu.edu.cn
Wed Mar 23 09:51:39 CST 2022


Hi Adapters,

Automatic Question Generation (AQG) is the task of automatically generating questions from a given source (e.g., a document). Given the rising interest in AQG systems, there is a need to closely examine the similarity metrics used to evaluate them. In this seminar, I will discuss the pros and cons of two families of question generation evaluation metrics: (1) human-centric evaluation metrics and (2) n-gram based automatic metrics (a short illustration of the latter follows the abstract below). I will also introduce an EMNLP 2018 paper, 'Towards a Better Metric for Evaluating Question Generation Systems' (Nema and Khapra, 2018). The following is its abstract:

There has always been criticism of using n-gram based similarity metrics, such as BLEU, NIST, etc., for evaluating the performance of NLG systems. However, these metrics continue to remain popular and are recently being used for evaluating the performance of systems which automatically generate questions from documents, knowledge graphs, images, etc. Given the rising interest in such automatic question generation (AQG) systems, it is important to objectively examine whether these metrics are suitable for this task. In particular, it is important to verify whether such metrics used for evaluating AQG systems focus on answerability of the generated question by preferring questions which contain all relevant information such as question type (Wh-types), entities, relations, etc. In this work, we show that current automatic evaluation metrics based on n-gram similarity do not always correlate well with human judgments about answerability of a question. To alleviate this problem and as a first step towards better evaluation metrics for AQG, we introduce a scoring function to capture answerability and show that when this scoring function is integrated with existing metrics, they correlate significantly better with human judgments. The scripts and data developed as a part of this work are made publicly available. (https://github.com/PrekshaNema25/Answerability-Metric)
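As a quick illustration of the n-gram based family mentioned above, the snippet below (my own toy example, not taken from the paper or its data) scores a generated question against a reference question with sentence-level BLEU from nltk. The generated question drops the Wh-word, which makes it much harder to answer, yet it still shares most of its n-grams with the reference:

    # Toy illustration of an n-gram metric on generated questions
    # (the example sentences are made up, not taken from the paper).
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    reference = "who wrote the novel war and peace".split()
    generated = "wrote the novel war and peace".split()  # missing the Wh-word "who"

    score = sentence_bleu([reference], generated,
                          smoothing_function=SmoothingFunction().method1)
    print(f"sentence BLEU: {score:.3f}")  # fairly high despite the poor answerability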
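And here is a minimal sketch of the general idea behind the paper's proposal: compute an answerability-style score from the overlap of answerability-relevant word categories (question type, content words, function words) and mix it with an existing metric such as BLEU. The word categories, weights, and the mixing parameter delta below are illustrative assumptions of mine, not the authors' exact formulation:

    # Sketch of mixing an answerability-style score with BLEU; categories,
    # weights, and delta are illustrative assumptions, not the paper's values.
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    QUESTION_WORDS = {"who", "what", "when", "where", "why", "which", "how"}
    FUNCTION_WORDS = {"is", "are", "was", "were", "the", "a", "an", "of", "in", "to"}

    def f1(ref, hyp):
        """F1 overlap between two token sets (0 if either is empty)."""
        if not ref or not hyp:
            return 0.0
        overlap = len(ref & hyp)
        p, r = overlap / len(hyp), overlap / len(ref)
        return 2 * p * r / (p + r) if p + r else 0.0

    def split_categories(tokens):
        """Partition tokens into question-type, content, and function words."""
        qw = {t for t in tokens if t in QUESTION_WORDS}
        fw = {t for t in tokens if t in FUNCTION_WORDS}
        cw = set(tokens) - qw - fw
        return qw, cw, fw

    def answerability(ref_tokens, hyp_tokens, weights=(0.5, 0.3, 0.2)):
        """Toy answerability score: weighted per-category F1 overlap."""
        return sum(w * f1(r, h) for w, r, h in
                   zip(weights, split_categories(ref_tokens), split_categories(hyp_tokens)))

    def q_bleu(reference, hypothesis, delta=0.7):
        """Combined score: delta * answerability + (1 - delta) * BLEU."""
        ref, hyp = reference.split(), hypothesis.split()
        bleu = sentence_bleu([ref], hyp, smoothing_function=SmoothingFunction().method1)
        return delta * answerability(ref, hyp) + (1 - delta) * bleu

    print(q_bleu("who wrote the novel war and peace",
                 "wrote the novel war and peace"))  # penalized for the missing Wh-word

The combined score drops when answerability-relevant words (here, the question type word) are missing, even though plain BLEU stays relatively high.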

Time: Wed 4:00pm

Venue: Tencent meeting 498-963-814

Best Regards,

Yuye Zhu

