Using BERT Extraction Combined with Transformer-Based Question Generation on CNN Stories

Authors

  • Eric Li
  • Mihai Boicu

DOI:

https://doi.org/10.13021/jssr2020.3167

Abstract

A recurring challenge in automatic question generation (QG) is handling large amounts of context. Most models are trained on SQuAD or similar datasets, whose passages are short and contain little context, and they therefore struggle with longer inputs. Here, we propose a new model that combines extractive text summarization with transformer-based question generation. The context is first condensed using a BERT model and then fed through a single, simple pre-trained language model that generates the questions. The model was applied to 100 CNN news articles taken from the NewsQA dataset, which are longer and contain far more context than SQuAD passages. It generated an average of roughly four questions per article, which were evaluated manually. Around half of the questions focused on the main topic of the story, while the rest focused on other details, showing that the model can ask relevant questions while handling large amounts of context. Although the results were promising, there is still room for improvement, such as reducing the word overlap between the generated questions and the original article.
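
As a rough illustration of the two-stage pipeline described above, the sketch below condenses a long article with BERT extractive summarization and then passes the condensed text to a pre-trained sequence-to-sequence model for question generation. The specific package (bert-extractive-summarizer), the question-generation checkpoint (valhalla/t5-small-e2e-qg), the "generate questions:" prompt, the condensation ratio, and the generate_questions helper are illustrative assumptions, not the exact components used in the paper.

# Two-stage sketch: (1) BERT extractive summarization condenses the
# article, (2) a pre-trained seq2seq model generates questions from
# the condensed context. Checkpoints and prompt format are assumptions.
from summarizer import Summarizer          # pip install bert-extractive-summarizer
from transformers import pipeline          # pip install transformers

def generate_questions(article: str, ratio: float = 0.2, max_questions: int = 5):
    # Stage 1: keep only the most salient sentences of the long article.
    extractor = Summarizer()
    condensed = extractor(article, ratio=ratio)

    # Stage 2: an end-to-end question-generation checkpoint (assumed here
    # to be valhalla/t5-small-e2e-qg) maps the condensed context to a
    # single string of questions separated by "<sep>".
    qg = pipeline("text2text-generation", model="valhalla/t5-small-e2e-qg")
    output = qg("generate questions: " + condensed, max_length=128)[0]["generated_text"]

    questions = [q.strip() for q in output.split("<sep>") if q.strip()]
    return questions[:max_questions]

if __name__ == "__main__":
    # cnn_story.txt is a placeholder for one NewsQA/CNN article.
    with open("cnn_story.txt") as f:
        for q in generate_questions(f.read()):
            print(q)

In practice, the condensation ratio and the cap on the number of questions would be tuned to the article length, in the spirit of the roughly four questions per article reported in the 100-article evaluation.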

Published

2022-12-13

Section

College of Engineering and Computing: Department of Information Sciences and Technology
