OLab3 to OLab4 converter

Yes, we are making steady progress in the development of OLab4.

High on our list is a greatly improved authoring interface, but this will take time to get right. However, some teams are keen to make use of the new features of OLab4 right away.

To facilitate this, we are working on a converter for OpenLabyrinth v3 cases. We estimate that for many current cases, very little additional work will be needed after conversion.

But there are a number of functions and features in OLab3 that are so rarely used that we have decided not to support them in OLab4. For a list of what is going to be deprecated, please refer to the Technical Forum and look for this topic (or use the link below to jump straight there)…

OLab3 items to be deprecated in OLab4 converter

If you are concerned that this might create problems for existing OLab3 cases that you want to use in OLab4, contact us through the Forum.

How does OLab4 support team-based learning?

There are many educational support tools and applications available but the vast majority focus on individual performance and assessment. OLab4 is unique in the extent to which the platform supports the integration and assessment of team-based learning designs.

Why does team-based learning matter?

Few professionals now work in isolation. This is especially true in healthcare but applies to a broad variety of disciplines. Recognition of the importance of both team performance and the performance of individual members within a team has led to marked improvements in workplace safety. The airline industry has led this movement through its simulation training for many years; other disciplines, especially healthcare, are now following suit.

Team-based learning was a major recommendation of the Institute of Medicine’s report on patient safety, ‘To Err is Human’.(1) Much past work has focused on established teams that train and work together regularly. However, many teams are ad hoc, especially in healthcare, so a more flexible approach must be taken in training and assessing team performance.

Assessment of team performance

Most current work on team-based learning, and the evaluation frameworks in this area, focuses on overall team performance. It also tends to rely on subjective assessment by facilitators and observers, using standardized checklists. However, this in itself gives rise to many problems.

Many team-based scenarios have bursts of intense activity, interspersed with long lulls.(2) Anyone who has flown a flight simulator, even a simple one, can relate to the difference between cruising at altitude and final approach to landing. These activity bursts affect all team members at once, which makes it very difficult for an observer to catch all the details.

All teams have, and need, leaders and quiet collaborators. Facilitator-observers are much more prone to notice the actions of voluble leaders, often missing the essential contributions of those in quieter supporting roles.

Observers are inevitably biased. We know that teachers are more likely to give positive assessments to learners whom they like and who are like them.(3) Despite many years’ work on cognitive bias, recent research shows that even the best of us are blind to our own biases.(4)

It is crucial to develop workplace- and simulation-based assessment systems that can objectively capture the activities of all members of a team. Competency-based assessment is bringing general improvements to individual assessment but is also producing survey fatigue. The complexity of team activities demands that assessment systems capture team members’ actions more directly, through their workflow tools.

While it might initially appear tempting to reproduce an entire workplace environment within a simulation system, this is fraught with problems. Within healthcare, we have seen multiple attempts to create an electronic medical record (EMR) for teaching. Not only are there fundamental architectural principles in EMR design that conflict with teaching purposes (5), it is also hard to modify such designs to incorporate activity metrics internally.

Most workflows also incorporate many online resources and tools, which further frustrates researchers’ attempts to track team activities. Our approach (6) has been to use the Experience API (xAPI) to extract activity metrics from a variety of online tools into a common external Learning Record Store (LRS).
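To give a concrete sense of what that looks like in practice, here is a minimal sketch of posting a single xAPI statement to an LRS from Python. The endpoint, credentials, actor, activity ID and timestamp are placeholders for illustration, not our production configuration.

```python
import requests

# Hypothetical LRS endpoint and credentials (placeholders only).
LRS_ENDPOINT = "https://lrs.example.org/xapi/statements"
LRS_AUTH = ("lrs_key", "lrs_secret")

# A minimal xAPI statement: actor, verb, object, plus a timestamp.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.org",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/answered",
        "display": {"en-US": "answered"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://demo.openlabyrinth.ca/node/12345",  # hypothetical activity ID
        "definition": {"name": {"en-US": "Case node: triage decision"}},
    },
    "timestamp": "2017-02-20T14:05:00Z",
}

# POST the statement; the LRS responds with the ID(s) it assigned.
response = requests.post(
    LRS_ENDPOINT,
    json=statement,
    auth=LRS_AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
)
response.raise_for_status()
print("Stored:", response.json())
```

Any xAPI-conformant LRS should accept statements in this shape, which is what lets activity data from several different tools be pooled into one store.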

OLab4 team-oriented functions

OpenLabyrinth v3 provides a variety of functions that specifically support team-based learning and, more importantly, the assessment of team and team-member performance:
● Scenarios – a mechanism to collate teams or groups of users, along with specific sets of maps and learning activities
● Turk Talk – group-oriented live chats with the facilitator
● Cumulative Questions – a free-text survey tool that affords group input
● Real-time collaborative annotation of notes and worksheets
● CURIOS video mashups – annotated snippets of YouTube videos
● Integrated discussion forums

All of these powerful functions can be used by advanced case authors in their learning designs. In OLab4, such functionality will be provided through a modular approach that makes these features much more intuitive for occasional authors.

Underlying this is the need to incorporate more sophisticated approaches to xAPI tracking and analytics, which will also be built into OLab4.
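To give a flavour of what such analytics might involve, the sketch below pulls recent statements back from an LRS and tallies activity per team member. The endpoint, credentials and paging details are again illustrative assumptions, not OLab4 code.

```python
from collections import Counter
from urllib.parse import urljoin

import requests

# Hypothetical LRS endpoint and credentials (placeholders only).
LRS_ENDPOINT = "https://lrs.example.org/xapi/statements"
LRS_AUTH = ("lrs_key", "lrs_secret")


def activity_counts_since(since_iso: str) -> Counter:
    """Tally xAPI statements per actor since a given ISO 8601 timestamp."""
    counts = Counter()
    url, params = LRS_ENDPOINT, {"since": since_iso, "limit": 100}
    while url:
        resp = requests.get(
            url,
            params=params,
            auth=LRS_AUTH,
            headers={"X-Experience-API-Version": "1.0.3"},
        )
        resp.raise_for_status()
        body = resp.json()
        for stmt in body.get("statements", []):
            actor = stmt.get("actor", {})
            counts[actor.get("name") or actor.get("mbox", "unknown")] += 1
        # A conformant LRS pages results via a "more" link (often a relative IRL).
        more = body.get("more")
        url = urljoin(LRS_ENDPOINT, more) if more else None
        params = None  # the "more" link already carries the query parameters
    return counts


if __name__ == "__main__":
    # e.g. everyone's activity since the start of a team scenario session
    print(activity_counts_since("2017-02-01T00:00:00Z"))
```

Richer analytics would filter by verb or activity as well, but even this simple tally starts to show who is contributing and who is staying quiet.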

References

1. Kohn LT, Corrigan J, Donaldson MS, eds. To Err Is Human: Building a Safer Health System. Washington, D.C.: Institute of Medicine, National Academies Press; 2000. doi:10.17226/9728.
2. Ellaway RH, Topps D, Lachapelle K, Cooperstock J. Integrating simulation devices and systems. Stud Health Technol Inform. 2009;142:88-90. http://www.ncbi.nlm.nih.gov/pubmed/19377120.
3. Beckman TJ, Ghosh AK, Cook DA, Erwin PJ, Mandrekar JN. How reliable are assessments of clinical teaching? J Gen Intern Med. 2004;19(9):971-977. doi:10.1111/j.1525-1497.2004.40066.x.
4. Zwaan L, Monteiro S, Sherbino J, Ilgen J, Howey B, Norman G. Is bias in the eye of the beholder? A vignette study to assess recognition of cognitive biases in clinical case workups. BMJ Qual Saf. Published online January 2016. doi:10.1136/bmjqs-2015-005014.
5. Topps D. Notes on an EMR for Learners. 2010. doi:10.13140/RG.2.1.5064.6484.
6. Topps D, Meiselman E, Ellaway R, Downes A. Aggregating Ambient Student Tracking Data for Assessment. In: Ottawa Conference. Perth, WA: AMEE; 2016. http://www.slideshare.net/topps/aggregating-ambient-student-tracking-data-for-assessment.

YouTube videos for new OpenLabyrinth authors

With the help of the Taylor Institute for Teaching and Learning at the University of Calgary, we have created a series of short YouTube tutorials for those who want to learn more about the basics of authoring in OpenLabyrinth.

You can access this listing here: http://openlabyrinth.ca/youtube-howto-videos-about-openlabyrinth/

If you have suggestions on other topics that should be added to this list of how-to videos, please let us know.

OpenLabyrinth at the OHMES Symposium

Coming up this week, on Wed 22nd Feb and Thu 23rd, we are hosting the annual symposium for OHMES (the Office of Health & Medical Education Scholarship) at the Cumming School of Medicine. We have some great keynote speakers, including Lorelei Lingard, Kevin Eva and Stella Ng. For full details on the program, check out

http://cumming.ucalgary.ca/ohmes/events/health-and-medical-education-scholarship-symposium

We have had lots of interest this year – we hope you have registered already.

One of the things that we will be demonstrating at this year’s symposium is the continuing work we are doing with our Rushing Roulette stress tests.

Check out this page for more info on how we are combining multiple activity streams, using xAPI and a Learning Record Store (LRS), OpenLabyrinth, and a cheap $30 Arduino board.

You can also use this shortened link to reach the same page: http://tiny.cc/RRdemo

Turk Talk sees broader use

Our natural language processing function in OpenLabyrinth is seeing increasing adoption across a variety of educational disciplines, not just medicine.

This unique approach to parsing text input, which allows scenarios to provide a more flexible and realistic form of interaction with learners, is generating great interest amongst case and learning design experts.

We have continued to refine the approach and now find it to be scalable and flexible enough for a number of different learning design approaches. Check out the numerous articles and tips about Turk Talk on this website, but if you are still unsure what this adds, contact us.

An update on OpenLabyrinth and virtual scenarios

While we have been working with Scenario-Based Learning Design (SBLD) for some time, it has taken us a while to explore all the different ways in which OpenLabyrinth can be helpful in this regard. It is time to provide better notes for others on just how useful OpenLabyrinth can be in SBLD, and on the powerful additional functionality that OpenLabyrinth’s Scenarios provide.

We will be posting a series of help pages to help you get more out of Scenarios.

We will be working with groups like WAVES to continue to improve how our Scenarios can be used.

Internationalization of OpenLabyrinth’s interface

Over the years, we have received occasional questions about how OpenLabyrinth can support virtual scenarios in languages other than English. Since OpenLabyrinth is used widely around the world, we are keen to explore a less anglocentric approach.

Now, as we have noted before (More languages in OpenLabyrinth cases), the node and page content in OpenLabyrinth is quite flexible. We have authors who have written cases in Greek, Russian, Slovak, French and even Klingon.

We have also had groups explore the use of right-to-left languages, with some success. For a quick sense of how such a case might appear, check out Multi-lingual cases on our demo server, which shows what can be done.

Ever since OpenLabyrinth v2.6.1, we have had some basic internationalization functions built into the code base. If you select French in your User Profile, you will see that the top-level menus are rendered in French. But sadly, that is as far as it goes – no group has yet funded the coding needed to take it further, with i18n tables and the like. So this is feasible – if anyone wants to take a crack at it, the source code for OpenLabyrinth is all up on GitHub.
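To illustrate the kind of i18n string tables we have in mind, here is a minimal, language-agnostic sketch of looking up interface labels per locale with an English fallback. The label keys and translations are invented for illustration and are not taken from the OpenLabyrinth code base.

```python
# A minimal sketch of an i18n string table with an English fallback.
# Keys and translations below are invented for illustration only.
UI_STRINGS = {
    "en": {"btn.save": "Save", "menu.maps": "My Maps", "menu.scenarios": "Scenarios"},
    "fr": {"btn.save": "Enregistrer", "menu.maps": "Mes labyrinthes"},
    "el": {"btn.save": "Αποθήκευση"},
}


def t(key: str, locale: str = "en") -> str:
    """Return the label for `key` in `locale`, falling back to English."""
    return UI_STRINGS.get(locale, {}).get(key) or UI_STRINGS["en"].get(key, key)


# The language chosen in the User Profile would drive the lookup:
print(t("btn.save", "fr"))        # -> "Enregistrer"
print(t("menu.scenarios", "fr"))  # -> "Scenarios" (falls back to English)
```

A full implementation would keep these tables in the database and cover every label, but this lookup-with-fallback pattern is the core of what is needed.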

It is also pretty easy to have the menu and button labels translated on the fly, using Google Translate. Here are some quick screenshots of the main authoring menu for https://demo.openlabyrinth.ca/renderLabyrinth/index/60 (Multi-lingual cases):

[Screenshots of the main authoring menu rendered in the original English (EN), French (FR), Spanish (ES), Greek, Russian, and Farsi (Persian)]

Please let us know what you think in the comments below, or if you have strong suggestions, use the Contact page.

Turk Talk provides complex text capabilities in OpenLabyrinth

We are conducting another round of Turk Talk sessions on our OpenLabyrinth server.

Tonight, we are stress testing the Auto Text Expander macros in Google Chrome, so that our teachers can have much richer interactions with our students in their next Turk Talk session.

For more info on how to use Auto Text Expander with OpenLabyrinth, check out our how-to page.

Being able to quickly enter strings of text while facilitating a Turk Talk session makes the flow of the Scenario much smoother. This is important because the facilitator is trying to keep up with simultaneous conversations from up to 8 learners.

OpenLabyrinth stress testing at CHES scholarship day

On Wed, 5th October, the Centre for Health Education Scholarship (CHES) at UBC held its annual scholarship symposium in Vancouver.

There were many interesting sessions, including a stirring keynote address from Rachel Ellaway (Professor, Education Research, University of Calgary).

OpenLabyrinth featured in a few presentations at the CHES symposium, including a short presentation on Activity Metrics by David Topps and Corey Albersworth. (See http://www.slideshare.net/topps/activity-metrics-for-ches-day )

In one of the afternoon demonstration sessions, we were able to show our Arduino stress-detector kit in action to conference participants. Here we have a short video of the Arduino sensors being calibrated.

This was the same basic setup as that first shown at the MedBiquitous Conference in Baltimore earlier this year. However, for this conference, no expense was spared: we splurged $29.99 on a second Arduino device. Yes, it nearly broke the budget!

We also managed to set up the software on both Windows 10 and OS X Yosemite, which highlights the platform independence of the Eclipse IDE that we used for collecting the Arduino data and sending it to the LRS.

Here we have a short video of the OpenLabyrinth stress test in action. Our participant is playing a rapid-fire series of case vignettes on the Mac on the right, while the Arduino sensors connected to the Windows machine on the left record real-time data on her heart rate and Galvanic Skin Response.

We initially created this project as a simple technical demonstration that one could use a cheap, easy combination of Arduino hardware, OpenLabyrinth, and xAPI statement collection into the GrassBlade Learning Record Store. We had only intended to show that collecting from multiple activity streams was feasible with the time and resources available to the average education researcher, i.e. not much.
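For anyone curious about the plumbing, here is a rough sketch of that collection pipeline: read heart-rate and GSR values from the Arduino over a serial port, wrap each reading as an xAPI statement, and post it to the LRS. The port name, the line format coming from the Arduino, the verb and extension IRIs, and the LRS details are all assumptions for illustration; our actual collector, as noted above, was built with the Eclipse IDE.

```python
import time

import requests
import serial  # pyserial

# Assumptions for illustration: the port name, a 9600 baud link, and lines such
# as "72,0.43" (heart rate in bpm, GSR in microsiemens) from the Arduino sketch.
# The LRS endpoint, credentials and extension IRIs are placeholders.
PORT = "/dev/ttyUSB0"
LRS_ENDPOINT = "https://lrs.example.org/xapi/statements"
LRS_AUTH = ("lrs_key", "lrs_secret")


def to_statement(heart_rate: float, gsr: float) -> dict:
    """Wrap one sensor reading as an xAPI statement, using result extensions."""
    return {
        "actor": {"mbox": "mailto:participant@example.org", "objectType": "Agent"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/experienced",
            "display": {"en-US": "experienced"},
        },
        "object": {
            "id": "https://demo.openlabyrinth.ca/stress-test",  # hypothetical activity ID
            "objectType": "Activity",
        },
        "result": {
            "extensions": {
                "https://example.org/xapi/ext/heart-rate": heart_rate,
                "https://example.org/xapi/ext/gsr": gsr,
            }
        },
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }


with serial.Serial(PORT, 9600, timeout=2) as arduino:
    while True:  # stream readings until interrupted
        line = arduino.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        try:
            heart_rate, gsr = (float(x) for x in line.split(","))
        except ValueError:
            continue  # skip malformed lines
        requests.post(
            LRS_ENDPOINT,
            json=to_statement(heart_rate, gsr),
            auth=LRS_AUTH,
            headers={"X-Experience-API-Version": "1.0.3"},
        )
```

Because OpenLabyrinth’s own activity statements go to the same LRS, the physiological stream and the case-play stream can then be compared side by side.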

We were delighted to find that the stress detector was much more sensitive than we anticipated and will be useful in real-world research.