There are many educational support tools and applications available, but the vast majority focus on individual performance and assessment. OLab4 is unique in the extent to which it supports the integration and assessment of team-based learning designs.
Why does team-based learning matter?
Few professionals now work in isolation. This is especially true in healthcare but applies across a broad variety of disciplines. Recognition of the importance of team performance, and of the performance of individual members within a team, has driven substantial improvements in workplace safety. The airline industry has led this movement through its simulation training for many years; other disciplines, especially healthcare, are now following suit.
Team-based learning was a major recommendation of the Institute of Medicine’s report on patient safety, ‘To Err is Human’.(1) Much past work has focused on established teams that train and work together regularly. However, many teams, especially in healthcare, are assembled ad hoc, so a more flexible approach must be taken to training and assessing team performance.
Assessment of team performance
Most current work on team-based learning, and the evaluation frameworks in this area, focuses on overall team performance. It also tends to rely on subjective assessment by facilitators and observers using standardized checklists. This in itself gives rise to several problems.
Many team-based scenarios feature bursts of intense activity interspersed with long lulls.(2) Anyone who has flown a flight simulator, even a simple one, can relate to the difference between cruising at altitude and final approach to landing. These activity bursts affect all team members at once, making it very difficult for an observer to catch every detail.
All teams have, and need, both leaders and quiet collaborators. Facilitators and observers are much more prone to notice the actions of voluble leaders, often missing the essential contributions of those in supporting roles.
Observers are inevitably biased. We know that teachers are more prone to assess positively those learners whom they like and who are like them.(3) Despite many years’ work on cognitive bias, recent research shows that even the best of us are blind to our own biases.(4)
It is crucial to develop workplace- and simulation-based assessment systems that can objectively capture the activities of all members of a team. Competency-based assessment is producing general improvements in individual assessment, but it is also generating survey fatigue. The complexity of team activities demands assessment systems that capture team members’ actions more directly, through the workflow tools they already use.
While it might initially appear tempting to reproduce an entire workplace environment within a simulation system, this is fraught with problems. Within healthcare, we have seen multiple attempts to create an electronic medical record (EMR) for teaching. Not only are there fundamental architectural principles in EMR design that conflict with teaching purposes,(5) it is also hard to modify such designs to incorporate activity metrics internally.
Most workflows also incorporate many online resources and tools, which further frustrates researchers’ attempts to track team activities. Our approach (6) has been to use the Experience API (xAPI) to extract activity metrics from a variety of online tools into a common, external Learning Record Store (LRS).
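As a rough sketch of this pattern (not OLab4’s actual implementation), the snippet below records a single learner action in an external LRS by POSTing an xAPI statement to the standard /statements resource. The endpoint URL, credentials, verb, and activity identifiers are all illustrative placeholders; any xAPI 1.0.3-conformant LRS accepts statements in this form.

    # Minimal sketch: pushing one activity statement from a workflow tool
    # into an external LRS. All URLs and credentials are placeholders.
    import requests

    LRS_ENDPOINT = "https://lrs.example.org/xapi"  # hypothetical LRS
    LRS_AUTH = ("client_key", "client_secret")     # HTTP Basic credentials

    statement = {
        "actor": {
            "objectType": "Agent",
            "name": "Jo Learner",
            "mbox": "mailto:jo.learner@example.org",
        },
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/answered",
            "display": {"en-US": "answered"},
        },
        "object": {
            "objectType": "Activity",
            "id": "https://example.org/olab/questions/q42",  # placeholder IRI
            "definition": {"name": {"en-US": "Cumulative question 42"}},
        },
    }

    response = requests.post(
        f"{LRS_ENDPOINT}/statements",
        json=statement,
        auth=LRS_AUTH,
        headers={"X-Experience-API-Version": "1.0.3"},
    )
    response.raise_for_status()
    print("Stored statement id(s):", response.json())

Because each tool emits statements in this one common format, the LRS becomes the single point of aggregation, no matter how many separate online tools a team’s workflow touches.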
OLab4 team-oriented functions
OpenLabyrinth v3 already provides a variety of functions that specifically support team-based learning and, more importantly, the assessment of team and team-member performance:
● Scenarios – a mechanism for collating teams or groups of users, along with specific sets of maps and learning activities
● Turk Talk – group-oriented live chats with a facilitator
● Cumulative Questions – a free-text survey tool that affords group input
● Real-time collaborative annotation of notes and worksheets
● CURIOS video mashups – annotated snippets of YouTube videos
● Integrated discussion forums
All of these powerful functions can be used by advanced case authors in their learning designs. In OLab4, such functionality will be afforded by a modular approach that makes it much more intuitive for occasional authors.
Underlying all of this is the need to incorporate more sophisticated approaches to xAPI tracking and analytics, which will also be built into OLab4. We are now working on a new xAPI Profile designed to better accommodate the tracking of team performance, and especially of individual members within a team.
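To give a flavour of what team-aware tracking involves, the sketch below is purely illustrative and does not reflect the actual profile under development: the xAPI specification already defines a context.team property, a Group object that lets a statement credit an action to one named member while still tying it to the team as a whole. Names, verbs, and IRIs here are placeholders.

    # Illustrative only: attributing an individual action to one member of a
    # team. xAPI's context.team (a Group) carries the team alongside the
    # individual actor, so analytics can aggregate either way.
    statement = {
        "actor": {
            "objectType": "Agent",
            "name": "Jo Learner",
            "mbox": "mailto:jo.learner@example.org",
        },
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/responded",
            "display": {"en-US": "responded"},
        },
        "object": {
            "objectType": "Activity",
            "id": "https://example.org/olab/turktalk/session-7",  # placeholder
            "definition": {"name": {"en-US": "Turk Talk session 7"}},
        },
        "context": {
            "team": {
                "objectType": "Group",
                "name": "Resus Team A",  # hypothetical ad hoc team
                "member": [
                    {"objectType": "Agent", "mbox": "mailto:jo.learner@example.org"},
                    {"objectType": "Agent", "mbox": "mailto:sam.lee@example.org"},
                ],
            }
        },
    }

Because the team sits in the statement’s context rather than as the actor, later queries can aggregate by team or filter to a single member without losing either view, which matters for the quiet collaborators noted above.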
References
1. Kohn LT, Corrigan J, Donaldson MS, eds. To Err Is Human: Building a Safer Health System. Washington, DC: National Academies Press; 2000. doi:10.17226/9728.
2. Ellaway RH, Topps D, Lachapelle K, Cooperstock J. Integrating simulation devices and systems. Stud Health Technol Inform. 2009;142:88-90. http://www.ncbi.nlm.nih.gov/pubmed/19377120.
3. Beckman TJ, Ghosh AK, Cook DA, Erwin PJ, Mandrekar JN. How reliable are assessments of clinical teaching? J Gen Intern Med. 2004;19(9):971-977. doi:10.1111/j.1525-1497.2004.40066.x.
4. Zwaan L, Monteiro S, Sherbino J, Ilgen J, Howey B, Norman G. Is bias in the eye of the beholder? A vignette study to assess recognition of cognitive biases in clinical case workups. BMJ Qual Saf. Published online January 2016. doi:10.1136/bmjqs-2015-005014.
5. Topps D. Notes on an EMR for Learners. 2010. doi:10.13140/RG.2.1.5064.6484.
6. Topps D, Meiselman E, Ellaway R, Downes A. Aggregating Ambient Student Tracking Data for Assessment. In: Ottawa Conference. Perth, WA: AMEE; 2016. http://www.slideshare.net/topps/aggregating-ambient-student-tracking-data-for-assessment.