Using BERT Extraction Combined with Transformer-Based Question Generation on CNN Stories
DOI: https://doi.org/10.13021/jssr2020.3167

Abstract
One recurring challenge in automatic question generation (QG) is handling large amounts of context. Most models are trained on SQuAD or similar datasets, which consist of short text inputs with little context, and therefore struggle with longer inputs. Here, we propose a new model that combines extractive text summarization with transformer-based question generation. The context is first condensed using a BERT extractive summarizer and then fed through a single pre-trained language model that generates the questions. The model was applied to 100 CNN news articles from the NewsQA dataset, which are longer and contain far more context than SQuAD passages. It generated an average of about four questions per article, which were evaluated manually. We found that roughly half of the questions focused on the main topic of the story while the rest focused on other details, showing that the model can ask relevant questions while handling large amounts of context. Although the results were promising, there is still room for improvement, such as generating questions with less word overlap with the original article.
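The following is a minimal sketch of the extract-then-generate pipeline described in the abstract. The library and model choices here are assumptions, not the authors' exact stack: the bert-extractive-summarizer package stands in for the BERT condensation step, and the community checkpoint valhalla/t5-base-e2e-qg (with its "generate questions:" prompt and "<sep>" output delimiter) stands in for the question generator, which the abstract does not name.

# Sketch of the two-stage pipeline: BERT extractive condensation followed by
# transformer-based question generation. All model names are assumptions.
from summarizer import Summarizer          # pip install bert-extractive-summarizer
from transformers import pipeline          # pip install transformers

def generate_questions(article_text: str, ratio: float = 0.3) -> list:
    # Step 1: condense the long article with BERT extractive summarization,
    # keeping roughly `ratio` of the original sentences.
    extractor = Summarizer()
    condensed = extractor(article_text, ratio=ratio)

    # Step 2: feed the condensed context to a seq2seq question generator.
    # The prompt prefix and "<sep>" delimiter follow the conventions of the
    # assumed checkpoint and may differ for other models.
    qg = pipeline("text2text-generation", model="valhalla/t5-base-e2e-qg")
    output = qg("generate questions: " + condensed, max_length=256)[0]["generated_text"]
    return [q.strip() for q in output.split("<sep>") if q.strip()]

if __name__ == "__main__":
    # "cnn_article.txt" is a hypothetical file containing one NewsQA/CNN story.
    with open("cnn_article.txt") as f:
        print(generate_questions(f.read()))

In this sketch, the ratio parameter controls how aggressively the context is condensed before generation; the abstract does not specify the compression level used in the study.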
License
Copyright (c) 2022 Eric Li, Mihai Boicu
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.