Back to normal

At last, it seems like we have things back to running normally. Phew – that was way harder than we expected. Many apologies for all the inconvenience.

As well as this WordPress site, our OpenLabyrinth virtual scenario server at http://demo.openlabyrinth.ca should be working properly again.

The mail server and forgot-my-password link were offline for several weeks. Apologies to anyone who was trying to get in.

Our other linked services such as our GrassBlade LRS are now mostly back to normal. If you find continued glitches, please let us know.

Service interruption

We plan to upgrade our servers on Thursday afternoon, May 3rd.

You may notice interruptions to the following services between 1300 and 1600 Mountain Time:

  • openlabyrinth.ca — this WordPress site and the associated forums
  • curios.openlabyrinth.ca — our CURIOS video mashup service
  • our GrassBlade LRS

Our OpenLabyrinth v3 virtual scenario platform at demo.openlabyrinth.ca should continue to function, but you may see some oddities with linked services during this window as well.

We hope that these interruptions will be brief, allowing for the usual glitches.


xAPI and Learning Analytics

The Experience API (xAPI) gives OLab some powerful tools for integrating activity metrics into research. But, of course, there is more to it than just capturing and aggregating data.

Data visualization and learning analytics are increasingly important — this is one of the key pillars in our push towards Precision Education. Some Learning Record Stores (LRSs) come with tools to assist with such analytics. We have spoken of this before: while we currently use GrassBlade as our workaday LRS because it is simple for small pilots, the beauty of the LRS approach is that data can easily be federated across other LRSs. For example, we have made use of the more powerful analytics provided by the Watershed LRS.
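For readers new to xAPI, the mechanics are simple: each tracked action becomes a small JSON "statement" that is POSTed to the LRS over HTTP. Here is a minimal sketch in Python; the endpoint, credentials, and activity id are placeholders, not our production settings.

```python
# A minimal sketch of recording an activity in an LRS, assuming a
# generic xAPI endpoint. The URL, credentials, and activity id below
# are placeholders.
import requests

LRS_ENDPOINT = "https://lrs.example.org/xapi/statements"  # placeholder LRS
AUTH = ("lrs_key", "lrs_secret")                          # placeholder credentials

statement = {
    "actor": {"objectType": "Agent",
              "mbox": "mailto:learner@example.org",
              "name": "Example Learner"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "https://example.org/scenarios/demo-case",
               "definition": {"name": {"en-US": "Example scenario"}}},
}

resp = requests.post(
    LRS_ENDPOINT,
    json=statement,
    auth=AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},  # required by the xAPI spec
)
resp.raise_for_status()
print("Stored statement id:", resp.json()[0])  # the LRS returns a list of ids
```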

However, as we move into more detailed analytics, it helps to be able to work with even more powerful tools. We have just started working with the IEEE ICICLE group to look at better approaches to such learning analytics. LearnSphere is one such tool: it is used extensively at Carnegie Mellon University but is open source, on GitHub.

LearnSphere has a powerful set of analytics tools. The screenshot above shows us using its Tigris workflow tool to map out learning designs and scenarios, aimed at answering questions such as “which kinds of learner activity are worth measuring?” The datasets can be quite varied, and the LearnSphere group is interested in accommodating a wider range of learning research datasets.

Today’s discussion in the IEEE ICICLE xAPI and Learning Analytics SIG focused on how to integrate xAPI activity streams more seamlessly with LearnSphere. We are pleased to be involved with such dataflow integration initiatives. As Koedinger et al. (1) demonstrated in 2016, there is a clear link between “doing” and learning. This is not a new concept at all, but proving it has been remarkably difficult in the world of education, where there are so many confounding factors to consider in a study methodology. This approach, using learning analytics, rests on much more solid ground.
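As a rough illustration of what such an integration involves, the sketch below flattens exported xAPI statements into a DataShop-style tab-delimited transaction table, the kind of input LearnSphere workflows consume. The column choices and file names here are ours for illustration; consult the DataShop import spec for the exact fields a given workflow requires.

```python
# A rough sketch of flattening exported xAPI statements into a
# DataShop-style tab-delimited transaction file. Column names and file
# paths are illustrative only.
import csv
import json

def xapi_to_rows(statements):
    """Map each xAPI statement onto one transaction-style row."""
    for s in statements:
        yield {
            "Anon Student Id": s["actor"].get("mbox", "unknown"),
            "Time": s.get("timestamp", ""),
            "Problem Name": s["object"]["id"],
            "Step Name": s["verb"]["id"].rsplit("/", 1)[-1],
            "Outcome": "CORRECT" if s.get("result", {}).get("success") else "INCORRECT",
        }

with open("statements.json") as f:  # statements previously exported from the LRS
    statements = json.load(f)["statements"]

fields = ["Anon Student Id", "Time", "Problem Name", "Step Name", "Outcome"]
with open("transactions.txt", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=fields, delimiter="\t")
    writer.writeheader()
    writer.writerows(xapi_to_rows(statements))
```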

1.  Koedinger, K. R., McLaughlin, E. A., Jia, J. Z., & Bier, N. L. (2016). Is the doer effect a causal relationship? In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge – LAK ’16 (pp. 388–397). New York, New York, USA: ACM Press. http://doi.org/10.1145/2883851.2883957

Using xAPI to support blended simulation

OLab and OpenLabyrinth have always been good at providing the contextual glue that holds together various simulation modalities. Here are some examples of projects where OpenLabyrinth has supported blended simulation activities:

  • Virtual Spinal Tap – uses haptic simulation to model the feeling of needle insertion
  • Rushing Roulette – timed tasks with a $30 Arduino lie detector!
  • Crevasse Rescue – multiple teams & disciplines, with high & low fidelity simulators
  • R. Ed Nekke – bookending around a Laerdal mannequin scenario

But now, with xAPI providing the background data linking to a Learning Record Store, it is much easier to do this across a wider range of tools and platforms. Some of the above-mentioned projects relied on a sophisticated array of high-speed networks, at considerable cost.

Doing this with xAPI is proving to be much more flexible, scalable and cost-effective. To support haptic projects like Virtual Spinal Tap, we are now working with the MedBiquitous Learning Experience Working Group on an xAPI Haptics Profile. Check it out and give us feedback.
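To make the idea concrete, here is what a haptic event might look like as an xAPI statement. The Haptics Profile is still being drafted, so the extension IRIs below are invented placeholders rather than the published vocabulary.

```python
# Illustrative only: a haptic simulation event expressed as an xAPI
# statement. The extension IRIs are invented placeholders, not the
# vocabulary of the draft Haptics Profile.
haptic_statement = {
    "actor": {"objectType": "Agent", "mbox": "mailto:learner@example.org"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/attempted",
             "display": {"en-US": "attempted"}},
    "object": {
        "id": "https://example.org/activities/spinal-tap/needle-insertion",
        "definition": {"name": {"en-US": "Needle insertion (haptic)"}},
    },
    "result": {
        "success": True,
        "extensions": {
            # Hypothetical extension keys carrying device readings:
            "https://example.org/xapi/ext/peak-force-newtons": 2.4,
            "https://example.org/xapi/ext/insertion-depth-mm": 47,
        },
    },
}
```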

OLab3 to OLab4 converter

Yes, we are making steady progress in the development of OLab4.

High on our list is a greatly improved authoring interface, but this will take time to get right. However, some teams are keen to make use of the new features of OLab4 right away.

To facilitate this, we are working on a converter for OpenLabyrinth v3 cases. We estimate that for many current cases, very little additional work will be needed after conversion.
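Purely as a sketch of the kind of mapping a converter performs, the snippet below reads nodes from an OpenLabyrinth v3 XML export and emits a JSON structure for OLab4. The element names and output fields are hypothetical; the real v3 export schema and OLab4 format differ in detail.

```python
# Hypothetical converter sketch: v3 XML export -> OLab4-style JSON.
# Element names ("node", "title", "link") and output fields are
# assumptions for illustration, not the actual schemas.
import json
import xml.etree.ElementTree as ET

def convert_case(v3_xml_path):
    tree = ET.parse(v3_xml_path)
    nodes = []
    for node in tree.getroot().iter("node"):  # hypothetical element name
        nodes.append({
            "id": node.get("id"),
            "title": node.findtext("title", default=""),
            "text": node.findtext("text", default=""),
            "links": [link.get("target") for link in node.iter("link")],
        })
    return {"map": {"nodes": nodes}}

with open("case_olab4.json", "w") as out:
    json.dump(convert_case("case_olab3.xml"), out, indent=2)
```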

But there are a number of functions and features in OLab3 that are so rarely used that we have decided not to support them in OLab4. For a list of what will be deprecated, please refer to the Technical Forum and look for this topic (or use the link below to jump straight there)…

OLab3 items to be deprecated in OLab4 converter

If you are concerned that this might create problems for existing OLab3 cases that you want to use in OLab4, contact us through the Forum.

How does OLab4 support team-based learning?

There are many educational support tools and applications available but the vast majority focus on individual performance and assessment. OLab4 is unique in the extent to which the platform supports the integration and assessment of team-based learning designs.

Why does team-based learning matter?

Few professionals now work in isolation. This is especially true in healthcare but applies across a broad variety of disciplines. Recognition of the importance of both team performance and the performance of members within a team has led to strong improvements in workplace safety. The airline industry has led this movement through its simulation training for many years; other disciplines, especially healthcare, are now taking it up strongly.

Team-based learning was a major recommendation of the Institute of Medicine’s report on patient safety, ‘To Err Is Human’.(1) Much past work has focused on established, pre-existing teams that train and work together regularly. However, many teams are ad hoc, especially in healthcare, and so a more flexible approach must be taken in training and assessing team performance.

Assessment of team performance

Most current work on team-based learning, and evaluation frameworks in this area, focus on overall team performance. They also tend to focus on subjective assessment by facilitators and observers, using standardized checklists. However, this in itself gives rise to many problems.

Many team-based scenarios tend to have bursts of intense activity, interspersed with long lulls.(2) Anyone who has driven a flight simulator, even a simple one, can relate to the difference between cruising at altitude and the final approach to landing. These activity bursts affect all team members at once, which makes it very difficult for an observer to catch all the details.

All teams have, and need, both leaders and quiet collaborators. Facilitator/observers are much more prone to notice the actions of voluble leaders, often missing the essential actions of those in quieter supporting roles.

Observers are inevitably biased. We know that teachers are more prone to positively assess learners whom they like and who are like them.(3) Despite many years’ work on cognitive bias, recent research shows that even the best of us are blind to our own biases.(4)

It is crucial to develop workplace- and simulation-based assessment systems that can objectively capture the activities of all members of a team. Competency-based assessment is producing general improvements in individual assessment but is also running into survey fatigue. The complexity of team activities demands that assessment systems capture team members’ actions more directly, through their workflow tools.

While it might initially appear tempting to reproduce an entire workplace environment within a simulation system, this is fraught with problems. Within healthcare, we have seen multiple attempts to create an electronic medical record (EMR) for teaching. Not only are there fundamental architectural principles in EMR design that conflict with teaching purposes (5); it is also hard to modify such designs to incorporate activity metrics internally.

Most workflows also incorporate many online resources and tools, which further frustrates researchers’ attempts to track team activities. Our approach (6) has been to use the Experience API (xAPI) to extract activity metrics from a variety of online tools into a common external Learning Record Store (LRS).
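Once statements from different tools land in one LRS, pulling them back out for analysis is a single query. The sketch below tallies recorded actions per team member using the standard xAPI statements API; the endpoint and credentials are placeholders.

```python
# A minimal sketch of querying a common LRS and tallying recorded
# actions per team member. Endpoint and credentials are placeholders;
# the query parameters and version header follow the standard xAPI
# statements API.
from collections import Counter
import requests

LRS_ENDPOINT = "https://lrs.example.org/xapi/statements"
AUTH = ("lrs_key", "lrs_secret")

resp = requests.get(
    LRS_ENDPOINT,
    params={"since": "2018-05-01T00:00:00Z", "limit": 500},
    auth=AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
)
resp.raise_for_status()

# Count how many statements each actor generated.
per_member = Counter(
    s["actor"].get("name", s["actor"].get("mbox", "unknown"))
    for s in resp.json()["statements"]
)
for member, count in per_member.most_common():
    print(f"{member}: {count} recorded actions")
```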

OLab4 team-oriented functions

OpenLabyrinth v3 already provides a variety of functions that specifically support team-based learning and, more importantly, the assessment of team and team-member performance:

  • Scenarios – a mechanism to collate teams or groups of users, along with specific sets of maps and learning activities
  • Turk Talk – group-oriented live chats with the facilitator
  • Cumulative Questions – a free-text survey tool that affords group input
  • Real-time collaborative annotation of notes and worksheets
  • CURIOS video mashups – annotated snippets of YouTube videos
  • Integrated discussion forums

All of these powerful functions can be used by advanced case authors in their learning designs. In OLab4, such functionality will be afforded by a modular approach that makes it much more intuitive for occasional authors.

Underlying all of this is the need to incorporate more sophisticated approaches to xAPI tracking and analytics. This, too, will be built into OLab4.
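One concrete hook for this already exists in the xAPI specification itself: a statement’s context may carry a team as a Group object. A statement like the sketch below (names, mboxes and activity IRIs are placeholders) lets an LRS attribute a single action to both the individual and their team.

```python
# The xAPI spec's context.team property takes a Group object, so one
# statement can credit both the individual and the team. All names,
# mboxes and activity IRIs below are placeholders.
team_statement = {
    "actor": {"objectType": "Agent", "mbox": "mailto:learner@example.org"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/answered",
             "display": {"en-US": "answered"}},
    "object": {"id": "https://example.org/activities/cumulative-question-7"},
    "context": {
        "team": {
            "objectType": "Group",
            "name": "Rescue Team A",
            "member": [
                {"mbox": "mailto:learner@example.org"},
                {"mbox": "mailto:teammate@example.org"},
            ],
        }
    },
}
```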

References

1. Kohn LT, Corrigan J, Donaldson MS, eds.; Institute of Medicine. To Err Is Human: Building a Safer Health System. Washington, DC: National Academies Press; 2000. doi:10.17226/9728.
2. Ellaway RH, Topps D, Lachapelle K, Cooperstock J. Integrating simulation devices and systems. Stud Health Technol Inform. 2009;142:88-90. http://www.ncbi.nlm.nih.gov/pubmed/19377120.
3. Beckman TJ, Ghosh AK, Cook DA, Erwin PJ, Mandrekar JN. How reliable are assessments of clinical teaching? J Gen Intern Med. 2004;19(9):971-977. doi:10.1111/j.1525-1497.2004.40066.x.
4. Zwaan L, Monteiro S, Sherbino J, Ilgen J, Howey B, Norman G. Is bias in the eye of the beholder? A vignette study to assess recognition of cognitive biases in clinical case workups. BMJ Qual Saf. 2016. doi:10.1136/bmjqs-2015-005014.
5. Topps D. Notes on an EMR for Learners. 2010. doi:10.13140/RG.2.1.5064.6484.
6. Topps D, Meiselman E, Ellaway R, Downes A. Aggregating Ambient Student Tracking Data for Assessment. In: Ottawa Conference. Perth, WA: AMEE; 2016. http://www.slideshare.net/topps/aggregating-ambient-student-tracking-data-for-assessment.

YouTube videos for new OpenLabyrinth authors

With the help of the Taylor Institute for Teaching and Learning at the University of Calgary, we have created a series of short YouTube tutorials for those who want to learn more about the basics of authoring in OpenLabyrinth.

You can access the listing here: http://openlabyrinth.ca/youtube-howto-videos-about-openlabyrinth/

If you have suggestions on other topics that should be added to this list of how-to videos, please let us know.