ADAGE: Answer Data Automatic GEneration

David Topps, Laura Bennion, Jean Rawling, Maureen Topps.

One of the main challenges we face in constructing series of cases for assessment purposes is the sheer volume of cases required. We have found that clinical authors are initially willing to generate case stems and responses but quickly tire of repeated requests. We have also found that only a small fraction of clinical teachers are skilled at creating good case materials and questions, which concentrates those repeated requests on a few individuals. The burn-out rate is high.

Interestingly, we have found, as other groups have, that only small variations in the question stem or response phrasing are needed for a question to appear to be an entirely new item, provided that the interval between item exposures is not too short. In other words, with small tweaks, you can reuse your questions extensively. Mark Gierl and Hollis Lai have made extensive use of this in their work on automatic item generation [1], creating some very impressive algorithms and techniques for the fully automatic creation of standardized item banks.

Now, the item banks that Gierl and Lai are addressing run to many thousands of items. They also need to assure the respective institutions that the variance across these generated questions falls within an acceptable and reproducible range, because the items are used in high-stakes exams. Accordingly, they are appropriately using very high-end computing and data resources, which are beyond the budgets available to most small programs.

With ADAGE, we are taking a much simpler approach. Instead of generating hundreds of variations for each question stem, we aim to produce an order of magnitude fewer. Because these question banks are merely being used for practice exams and formative assessments, there is less need at this stage for the same degree of reliability.

Based on the current data structures afforded by OpenLabyrinth, our educational research platform, we are developing an approach in which simple placeholders within the question stems and response phrases are replaced by text and numeric values. The data sources for these values will be simple external tables, such as Microsoft Excel spreadsheets, which researchers can manipulate easily without needing to become familiar with the intricacies of good learning design in the OpenLabyrinth case-writing environment.
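As a rough sketch of the substitution idea, assuming a CSV export of the Excel table and a $-prefixed placeholder syntax (both hypothetical; OpenLabyrinth's actual variable notation and data pipeline may differ), the following Python fragment shows how one row of values can be merged into a question stem and its response phrases:

    import csv
    import random
    import string

    def load_variable_sets(path):
        # Each row of the CSV (exported from the author's Excel sheet)
        # holds one complete set of values for the placeholders.
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def generate_item(stem_template, response_templates, variable_sets):
        # Pick one value set and substitute it into the stem and
        # each response phrase.
        values = random.choice(variable_sets)
        stem = string.Template(stem_template).substitute(values)
        responses = [string.Template(r).substitute(values)
                     for r in response_templates]
        return stem, responses

    # Hypothetical templates: $name marks each replaceable variable.
    stem = "A $age-year-old $sex presents with $symptom. What would you do next?"
    responses = ["Order $test", "Start $drug", "Reassure and review"]

    sets = load_variable_sets("adage_variables.csv")  # assumed file name
    question, options = generate_item(stem, responses, sets)

With, say, ten rows in the spreadsheet, the same stem yields ten surface-distinct items: roughly the scale of variation ADAGE is aiming for, rather than the hundreds generated in fully automatic item-generation systems.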

An example case can be played here: ‘ADAGE testing’. If you are asked for a keyword, use ‘demo’ to unlock the case.

As this functionality evolves, we will keep interested researchers abreast of these changes through this website.

[1] Gierl, M. J., Lai, H. and Turner, S. R. (2012), Using automatic item generation to create multiple-choice test items. Medical Education, 46: 757–765. doi: 10.1111/j.1365-2923.2012.04289.x