How does OLab4 support team-based learning?

There are many educational support tools and applications available but the vast majority focus on individual performance and assessment. OLab4 is unique in the extent to which the platform supports the integration and assessment of team-based learning designs.

Why does team-based learning matter?

Few professionals now work in isolation. This is especially true in healthcare, but it applies to a broad variety of disciplines. Recognizing the importance of both overall team performance and the performance of individual members within a team has led to substantial improvements in workplace safety. The airline industry has led this movement through its simulation training for many years; other disciplines, especially healthcare, are now catching up.

Team-based learning was a major recommendation of the Institute of Medicine’s report on patient safety, ‘To Err is Human’.(1) Much past work has focused on established, pre-existing teams, who train and work together regularly. However, many teams are ad hoc, especially in healthcare, and so a more flexible approach must be taken in training and assessing team performance.

Assessment of team performance

Most current work on team-based learning, and the evaluation frameworks in this area, focuses on overall team performance. It also tends to rely on subjective assessment by facilitators and observers, using standardized checklists. However, this in itself gives rise to many problems.

Many team-based scenarios have bursts of intense activity, interspersed with long lulls.(2) Anyone who has flown a flight simulator, even a simple one, can relate to the difference between cruising at altitude and final approach to landing. These activity bursts affect all team members at once, which makes it very difficult for an observer to catch all the details.

All teams have, and need, both leaders and quiet collaborators. Facilitator-observers are much more prone to notice the actions of voluble leaders, often missing the essential actions of those in lesser supporting roles.

Observers are inevitably biased. We know that teachers are more prone to assess positively those learners whom they like and who are like them.(3) Despite many years' work on cognitive bias, recent research shows that even the best of us are blind to our own biases.(4)

It is crucial to develop workplace- and simulation-based assessment systems that can objectively capture the activities of all members of a team. Competency-based assessment is improving individual assessment in general, but it is also producing survey fatigue. The complexity of team activities demands assessment systems that capture team members' actions more directly, through their workflow tools.

While it might initially appear tempting to reproduce an entire workplace environment within a simulation system, this is fraught with problems. Within healthcare, we have seen multiple attempts to create an electronic medical record (EMR) for teaching. Not only are there fundamental architectural principles in EMR design that conflict with teaching purposes (5), it is also hard to modify such designs to incorporate activity metrics internally.

Most workflows also incorporate many online resources and tools, which further frustrates researchers' attempts to track team activities. Our approach (6) has been to use the Experience API (xAPI) to extract activity metrics from a variety of online tools into a common external Learning Record Store (LRS).
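As a concrete illustration of this approach, each tracked action is expressed as an xAPI statement and POSTed to the LRS's `statements` resource. The sketch below follows the xAPI specification, but the actor, activity IDs, endpoint and credentials are all invented placeholders, not OLab4's actual vocabulary:

```python
import base64
import json
import urllib.request

# A minimal xAPI statement recording one team member's action.
# All identifiers here are illustrative, not OLab4's real vocabulary.
statement = {
    "actor": {"mbox": "mailto:learner@example.org", "name": "A. Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/answered",
        "display": {"en-US": "answered"},
    },
    "object": {
        "id": "http://example.org/scenarios/team-sim/node-12",
        "definition": {"name": {"en-US": "Team simulation, node 12"}},
    },
}

def post_statement(stmt, endpoint, user, password):
    """POST a statement to an LRS, per the xAPI specification."""
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(
        endpoint.rstrip("/") + "/statements",
        data=json.dumps(stmt).encode(),
        headers={
            "Content-Type": "application/json",
            "X-Experience-API-Version": "1.0.3",
            "Authorization": "Basic " + auth,
        },
    )
    return urllib.request.urlopen(req)  # the LRS replies with the statement ID
```

Because every tool emits the same statement shape, the LRS can aggregate activity streams from the whole workflow, which is what makes the individual contributions of each team member recoverable afterwards.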

OLab4 team-oriented functions

OpenLabyrinth v3 provides a variety of functions that specifically support team-based learning, and more importantly, the assessment of team and team-member performance.
● Scenarios – a mechanism to collate teams or groups of users, along with specific sets of maps and learning activities
● Turk Talk – group-oriented live chats with the facilitator
● Cumulative Questions – a free-text survey tool that affords group input
● Real-time collaborative annotation of notes and worksheets
● CURIOS video mashups – annotated snippets of YouTube videos
● Integrated discussion forums

All of these powerful functions can be used by advanced case authors in their learning designs. In OLab4, such functionality will be afforded by a modular approach that makes it much more intuitive for occasional authors.

Underlying this is the need to incorporate more sophisticated approaches to xAPI tracking and analytics. This too will be built into OLab4.


1. Kohn LT, Corrigan J, Donaldson MS, eds. To Err Is Human: Building a Safer Health System. Washington, DC: National Academies Press; 2000.
2. Ellaway RH, Topps D, Lachapelle K, Cooperstock J. Integrating simulation devices and systems. Stud Health Technol Inform. 2009;142:88-90.
3. Beckman TJ, Ghosh AK, Cook DA, Erwin PJ, Mandrekar JN. How reliable are assessments of clinical teaching? J Gen Intern Med. 2004;19(9):971-977. doi:10.1111/j.1525-1497.2004.40066.x.
4. Zwaan L, Monteiro S, Sherbino J, Ilgen J, Howey B, Norman G. Is bias in the eye of the beholder? A vignette study to assess recognition of cognitive biases in clinical case workups. BMJ Qual Saf. January 2016. doi:10.1136/bmjqs-2015-005014.
5. Topps D. Notes on an EMR for Learners. 2010. doi:10.13140/RG.2.1.5064.6484.
6. Topps D, Meiselman E, Ellaway R, Downes A. Aggregating Ambient Student Tracking Data for Assessment. In: Ottawa Conference. Perth, WA: AMEE; 2016.

YouTube videos for new OpenLabyrinth authors

With the help of the Taylor Institute for Teaching and Learning at the University of Calgary, we have created a series of short YouTube tutorials for those who want to learn the basics of authoring in OpenLabyrinth.

You can access this listing here:

If you have suggestions on other topics that should be added to this list of how-to videos, please let us know.

OpenLabyrinth at the OHMES Symposium

Coming up this week, on Wed 22nd and Thu 23rd Feb, we are hosting the annual symposium for OHMES (the Office of Health & Medical Education Scholarship) at the Cumming School of Medicine. We have some great keynote speakers, including Lorelei Lingard, Kevin Eva and Stella Ng. For full details on the program, check out

We have had lots of interest this year – we hope you have registered already.

One of the things that we will be demonstrating at this year’s symposium is the continuing work we are doing with our Rushing Roulette stress tests.

Check out this page for more info on how we are combining multiple activity streams, using xAPI and a Learning Record Store (LRS), OpenLabyrinth, and a cheap $30 Arduino board.

You can also use this shortcode link to reach that same page:

Turk Talk sees broader use

Our natural language processing function in OpenLabyrinth is seeing increasing adoption across a variety of educational disciplines, not just medicine.

This unique approach to parsing text input, so that scenarios can provide a more flexible and realistic form of interaction with learners, is creating great interest amongst case and learning design experts.

We have continued to refine the approach, and now find it to be scalable and flexible for a number of different learning design approaches. Check out the numerous articles and tips about Turk Talk on this web site but if you are still confused as to what this adds, contact us.
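Turk Talk's actual parser is more sophisticated than we can show here, but the general idea of routing a learner's free-text reply by pattern matching can be sketched in a few lines. Everything in this example (the patterns, the branch names, the fallback queue) is invented for illustration and is not OpenLabyrinth's real implementation:

```python
import re

# Illustrative only: map regex patterns in a learner's free-text reply
# to the scenario branch that reply should trigger.
RULES = [
    (re.compile(r"\b(chest pain|angina)\b", re.I), "cardiac-workup"),
    (re.compile(r"\bshort(ness)? of breath\b", re.I), "respiratory-workup"),
]

def route_reply(text, default="facilitator-queue"):
    """Return the first matching scenario branch, else hand off to the facilitator."""
    for pattern, branch in RULES:
        if pattern.search(text):
            return branch
    return default
```

The key design point, matching what Turk Talk does at a higher level, is the fallback: anything the rules cannot handle goes to a live human facilitator rather than to a wrong automated response.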

An update on OpenLabyrinth and virtual scenarios

While we have been working with Scenario-Based Learning Design (SBLD) for some time, it has taken us a while to explore all the different ways in which OpenLabyrinth can be helpful in this regard. It is time to provide better notes for others on just how useful OpenLabyrinth can be in SBLD and the powerful additional functionality that OpenLabyrinth’s Scenarios provide.

We will be posting a series of help pages that help you get more out of Scenarios.

We will be working with groups like WAVES to continue to improve how our Scenarios can be used.

Internationalization of OpenLabyrinth’s interface

Over the years, we have received occasional questions about how OpenLabyrinth can support virtual scenarios in languages other than English. Since OpenLabyrinth is used widely around the world, we are keen to explore a less anglocentric approach.

Now, as we have noted before (More languages in OpenLabyrinth cases), the node and page content in OpenLabyrinth is quite flexible. We have authors who have written cases in Greek, Russian, Slovak, French and even Klingon.

Although I am not qualified to comment, I believe we have also had groups explore the use of right-to-left languages, with some success. For a quick look at how such a case might appear, check out this case on our demo server, although sadly it is all Greek to me. (OK, enough of the puns!)

Ever since OpenLabyrinth v2.6.1, we have had some basic internationalization functions built into the code base. If you select French in your User Profile, you will see that the top level menus are rendered in French. But sadly, that is as far as it goes – no group has yet funded the code writing needed to take it further, with i18n tables and the like. So this is feasible – if anyone wants to take a crack at this, the source code for OpenLabyrinth is all up on GitHub.

Now, the next thing that I would like to throw open to our case and scenario writers is this: How important is it to have a few menu and button labels changed? I would hate for this just to be a token, if it is not that helpful. I wonder if we should instead concentrate our efforts on translating the User Guide or parts of this support web site? Please let us know in the comments below, or if you have strong suggestions, use the Contact page.

Turk Talk provides complex text capacity to OpenLabyrinth

We are conducting another round of Turk Talk sessions on our OpenLabyrinth server.

Tonight, we are stress testing the Auto Text Expander macros in Google Chrome, so that our teachers can have much richer interactions with our students in their next Turk Talk session.

For more info on how to use Auto Text Expander with OpenLabyrinth, check out our how-to page.

Being able to quickly enter strings of text while facilitating a Turk Talk session makes the flow of the Scenario much smoother. This is important because the facilitator is trying to keep up with simultaneous conversations from up to 8 learners.

OpenLabyrinth stress testing at CHES scholarship day

On Wed, 5th October, the Centre for Health Education Scholarship (CHES) at UBC held its annual scholarship symposium, in Vancouver.

There were many interesting sessions, including a stirring keynote address from Rachel Ellaway (Professor, Education Research, University of Calgary).

OpenLabyrinth featured at a few presentations at the CHES symposium, including a short presentation on Activity Metrics by David Topps and Corey Albersworth. (See )

In one of the afternoon demonstration sessions, we were able to show our Arduino stress-detector kit in action to conference participants. Here we have a short video of the Arduino sensors being calibrated.

This was the same basic setup as that first shown at the Medbiq Conference in Baltimore earlier this year. However, for this conference, no expense was spared: we splurged another $29.99 on a second Arduino device. Yes, it nearly broke the budget!

We also managed to set up the software on both Windows 10 and OS X Yosemite, which highlights the platform independence of the Eclipse IDE that we used for collecting the Arduino data and sending it to the LRS.

Here we have a short video of the OpenLabyrinth stress test in action. Our participant is playing a rapid-fire series of case vignettes on the Mac, while the Arduino sensors connected to the Windows machine record real-time data on her heart rate and Galvanic Skin Response.

We initially created this project as a simple technical demonstration that one could use a cheap, easy combination of Arduino hardware, OpenLabyrinth, and xAPI statement collection into the GrassBlade Learning Record Store. We had only intended to show that such collection from multiple activity streams was feasible with the time and resources available to the average education researcher (i.e. not much).
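The exact serial format emitted by our Arduino sketch is not shown here, so the parser below assumes a hypothetical 'HR:<bpm>,GSR:<raw>' line format; the collector script would parse each line like this before wrapping the reading in an xAPI statement for the LRS:

```python
def parse_reading(line):
    """Parse an assumed Arduino serial line like 'HR:72,GSR:512' into a dict.

    The 'HR:<bpm>,GSR:<raw>' format is hypothetical; adapt it to whatever
    the actual Arduino sketch prints over the serial port.
    """
    reading = {}
    for field in line.strip().split(","):
        key, _, value = field.partition(":")
        reading[key.strip()] = int(value)
    return reading

# Each parsed reading would then become the 'result' of an xAPI statement,
# timestamped so it can be correlated with the learner's case activity.
```

Keeping the sensor readings and the OpenLabyrinth navigation events in the same Learning Record Store is what lets us line up a stress spike against the exact case vignette the participant was working through at that moment.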

We were delighted to find that the stress detector was much more sensitive than we anticipated and will be useful in real-world research.

OpenLabyrinth and REDCap

OpenLabyrinth is increasingly used for data gathering and survey-like functions in medical education research. The number of question types has grown enormously, and there are now few formats that are not supported. This, along with secure user management, connectivity via IMS-LTI authentication to other educational tools such as LMSs (Moodle, Desire2Learn, etc.), and connectivity to Learning Record Stores (such as Watershed, GrassBlade, Wax LRS, etc.), has made OpenLabyrinth a powerful educational research platform.

But there are other survey tools out there that you should also know about and consider in education research, especially in the health professions. REDCap is such a tool and we are fortunate that, at the University of Calgary, our Clinical Research Unit (CRU) is very effective at supporting the use of REDCap.

So, what is the difference between OpenLabyrinth, REDCap, and other survey tools such as SurveyMonkey? Well, the main thing that both OpenLabyrinth and REDCap offer is better control over your data: where it resides, and the custodianship/governance of that data. Cloud-based third-party tools such as SurveyMonkey and FluidSurveys (recently bought out by SurveyMonkey) are very good, but you have no say as to where the data resides, which is of significant concern to ethics boards when it comes to anything health related. Patriot Act, anyone?

Both OpenLabyrinth and REDCap are open-source and freely available, although most end-users will not want to go to the hassle of setting up their own server. Both are secure and can be linked to other authentication mechanisms such as OAuth. REDCap is definitely focused more on clinical research. There are hundreds of existing validated survey instruments that you can make use of.

OpenLabyrinth is more focused on educational research. It also offers access to much more in the way of educational interactions, such as the CURIOS video mashup tool, linking to clinical scenarios and other forms of simulation, and complex branching pathways with logic rules etc.

Thanks to Mark Lowerison and the helpful folks at CRU, we have also been able to successfully connect OpenLabyrinth and REDCap servers in a secure manner, which means that you can take advantage of the best aspects of both research tools. Contact us if you are interested in exploring this further.
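Our actual OpenLabyrinth-to-REDCap link is not documented here, but to give a flavour of what talking to REDCap looks like, the sketch below builds the form payload for REDCap's standard "import records" API call. The field names follow REDCap's published API; the server URL and the token are placeholders you would get from your own REDCap administrator:

```python
import json
import urllib.parse
import urllib.request

def build_redcap_import(records, token):
    """Build the form payload for REDCap's 'import records' API call.

    'records' is a list of dicts keyed by REDCap field names; the token
    is project-specific and issued by your REDCap administrator.
    """
    return {
        "token": token,
        "content": "record",
        "format": "json",
        "type": "flat",
        "data": json.dumps(records),
    }

def send(payload, url="https://redcap.example.edu/api/"):
    """POST the payload to a (placeholder) REDCap endpoint over HTTPS."""
    data = urllib.parse.urlencode(payload).encode()
    return urllib.request.urlopen(urllib.request.Request(url, data=data))
```

Note that the token travels in the POST body over HTTPS rather than in the URL, which matters for the kind of data-governance concerns discussed above.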

OpenLabyrinth at AMEE

The AMEE conference just finished in Barcelona last week. With over 3500 attendees, this had to be one of their biggest yet.

Lots of activity there, including lots of projects and papers making use of OpenLabyrinth. One group was using OpenLabyrinth and Situational Judgement Testing for teaching ethics cases. Another group was using OpenLabyrinth as a publication and tracking mechanism for Teaching Tips.

Our group continues to collaborate with the Learning Layers project, a very interesting approach to supporting informal collaborative learning. The Barcamp was very well received – it was very similar in format to some Unconferences that our group has held previously.

The WAVES Project (Widening Access to Virtual Educational Scenarios), led by St George’s University, London, continues to make strong use of OpenLabyrinth, integrating it with MOOCs and Open edX.