
QuRE project update

The Quality Referral Evolution (QuRE) project, www.ahs.ca/qure, is making good progress.

QuRE checklist for referrals

The QuRE Project started in Alberta as an initiative to enhance the quality of clinical referrals and consults in healthcare. With 2.3 million referrals created every year in Alberta alone, the potential for improvements and savings is enormous. Now several other Canadian provinces, including Saskatchewan and British Columbia, are starting to engage in similar, collaborative efforts.

Earlier this year, the Office of Health & Medical Education Scholarship (OHMES) at the Cumming School of Medicine, University of Calgary, became involved with this project. The QuRE group had already done some tremendous work to establish an evidence-informed approach to quality referrals and consults. Some educational materials had been created and seminars given.

OHMES saw an opportunity to create a more interactive approach to the educational processes of the QuRE Project. Rather than simply telling healthcare learners what to do, we used OpenLabyrinth virtual scenarios and CURIOS video mashups, along with other educational tools such as GrassBlade and PowToons, to create more interactive materials with built-in activity metrics and analytics.

This approach will enable us to continually modify our materials (its own quality evolution, as it were), based on learner performance, not just on yet more questionnaires.

What has particularly excited us at OHMES about this project is that it represents an opportunity to study how an educational intervention can have an impact on patient care. For years, especially in CME circles, there have been repeated calls for educational approaches that can actually demonstrate a change in how we provide care, and ultimately, on improved patient outcomes.

One particular aspect of this project is that we intend to longitudinally track, over several years, how feedback and these interventions may iteratively improve the quality of referrals and consults in participating groups. We will have sufficient data to demonstrate useful changes, thanks to the ongoing use of activity metrics, gathered from across multiple healthcare systems.

Intermittent interruptions

We have an unusual situation where our web server is being clobbered by standard updates to RedHat Linux.

Because of this, we are encountering “Error connecting to database” on an all too frequent basis.

There is a manual fix for this, but a simple OS update should not be clobbering the underlying MariaDB service. We have asked our local support team for a more robust fix.

We apologize for the current patchy service. No data has been lost, and the OpenLabyrinth virtual scenario servers themselves appear to be unaffected.

[Update: 15aug2018] – yay, I think we have a fix. There is still an odd underlying bug but, with UCIT’s help, we now have a more robust workaround in place. Thank you, all.

Medbiq Conference 2018

We just got back from the Medbiquitous Conference 2018 in Baltimore. Great conference with lots of collaborative projects coming out of it.

There continues to be a high degree of interest in activity metrics and xAPI-related projects, and we heard about several of these during the sessions.

It was noted yet again that we are all immersed in our biases and, despite this, also overly dependent on subjective assessments from teachers. Things have to change.


Back to normal

At last, it seems like we have things back to running normally. Phew – that was way harder than we expected. Many apologies for all the inconvenience.

As well as this WordPress site, our OpenLabyrinth virtual scenario server at http://demo.openlabyrinth.ca should be working properly again.

The mail server and forgot-my-password link were offline for several weeks. Apologies to anyone who was trying to get in.

Our other linked services such as our GrassBlade LRS are now mostly back to normal. If you find continued glitches, please let us know.

Service interruption

We plan to upgrade our servers on Thursday afternoon, May 3rd.

You may notice interruptions to the following services between 1300 and 1600 Mountain Time:

  • openlabyrinth.ca — this WordPress site and the associated forums
  • curios.openlabyrinth.ca — our CURIOS video mashup service
  • our GrassBlade LRS

Our OpenLabyrinth v3 virtual scenario platform at demo.openlabyrinth.ca should continue to function, but linked services may behave oddly during this window as well.

We hope that these interruptions will be brief, allowing for the usual glitches.


OLab3 to OLab4 converter

Yes, we are making steady progress in the development of OLab4.

High on our list is a greatly improved authoring interface, but this will take time to get right. However, some teams are keen to make use of the new features of OLab4 right away.

To facilitate this, we are working on a converter for OpenLabyrinth v3 cases. We estimate that for many current cases, very little additional work will be needed after conversion.

But there are a number of functions and features in OLab3 that are so rarely used that we have decided not to support them in OLab4. For a list of what is going to be deprecated, please refer to the Technical Forum and look for this topic (or use the link below to jump straight there)…

OLab3 items to be deprecated in OLab4 converter

If you are concerned that this might create problems with your existing OLab3 cases and want to use them in OLab4, contact us through the Forum.
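To illustrate the shape of such a conversion pass, here is a minimal Python sketch. The field names, feature names, and deprecation list below are all hypothetical placeholders, not the actual OLab3 or OLab4 schemas; the real list of deprecated items is the one in the Technical Forum topic.

```python
from typing import Dict, List, Tuple

# Placeholder names only -- the real deprecation list is maintained
# in the Technical Forum topic, not here.
DEPRECATED_FEATURES = {"legacy_widget", "old_counter_style"}

def convert_node(olab3_node: Dict) -> Tuple[Dict, List[str]]:
    """Convert one hypothetical OLab3 node record into an OLab4-style
    record, collecting warnings for features that will be dropped."""
    warnings: List[str] = []
    olab4_node = {
        "title": olab3_node.get("title", ""),
        "text": olab3_node.get("text", ""),
        "links": list(olab3_node.get("links", [])),
    }
    for feature in olab3_node.get("features", []):
        if feature in DEPRECATED_FEATURES:
            warnings.append(f"'{feature}' is deprecated and was not converted")
        else:
            olab4_node.setdefault("features", []).append(feature)
    return olab4_node, warnings

# Example: one node using a deprecated feature and a supported one
node, warns = convert_node(
    {"title": "Intro", "text": "Welcome.", "features": ["legacy_widget", "counter"]}
)
```

The key design point is that a converter of this kind should never fail silently: anything it cannot carry forward is surfaced to the case author as an explicit warning, so existing cases can be reviewed before being used in OLab4.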

How does OLab4 support team-based learning?

There are many educational support tools and applications available but the vast majority focus on individual performance and assessment. OLab4 is unique in the extent to which the platform supports the integration and assessment of team-based learning designs.

Why does team-based learning matter?

Few professionals now work in isolation. This is especially true in healthcare but applies to a broad variety of disciplines. Recognition of the importance of both the performance of teams, and of members within a team, has led to strong improvements in workplace safety. The airline industry has led this movement through its simulation training for many years. Other disciplines, especially healthcare, are now rapidly following suit.

Team-based learning was a major recommendation of the Institute of Medicine’s report on patient safety, ‘To Err is Human’.(1) Much past work has focused on established, pre-existing teams, who train and work together regularly. However, many teams are ad hoc, especially in healthcare, and so a more flexible approach must be taken in training and assessing team performance.

Assessment of team performance

Most current work on team-based learning, and evaluation frameworks in this area, focus on overall team performance. They also tend to focus on subjective assessment by facilitators and observers, using standardized checklists. However, this in itself gives rise to many problems.

Many team-based scenarios tend to have bursts of intense activity, interspersed with long lulls.(2) Anyone who has driven a flight simulator, even a simple one, can relate to the difference between cruising at altitude and final approach to landing. These activity bursts affect all team members at once, which makes it very difficult for the observer to catch all the details.

All teams have, and need, leaders and quiet collaborators. Facilitator/observers are much more prone to notice the actions of voluble leaders, often missing the essential actions of lesser supporting roles.

Observers are inevitably biased. We know that teachers are more prone to positively assess learners whom they like and who are like them.(3) Despite many years’ work on cognitive bias, recent research shows that even the best of us are blind to our biases.(4)

It is crucial to develop workplace- and simulation-based assessment systems that can objectively capture the activities of all members of a team. Competency-based assessment is driving general improvements in individual assessment but is also suffering from survey fatigue. The complexity of team activities demands that assessment systems capture team member actions more directly, through their workflow tools.

While it would initially appear tempting to reproduce an entire workplace environment within a simulation system, this is fraught with problems. Within healthcare, we have seen multiple attempts to create an electronic medical record (EMR) for teaching. Not only are there fundamental architectural principles in EMR design that conflict with teaching purposes (5), but it is also hard to modify such designs to incorporate activity metrics internally.

Most workflows also incorporate many online resources and tools, which then additionally frustrates the researchers’ attempts to track team activities. Our approach (6) has been to use the Experience API (xAPI) to extract activity metrics from a variety of online tools into a common external Learning Record Store (LRS).
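As a concrete sketch of this approach, the snippet below assembles a minimal xAPI statement in Python. The field structure (actor, verb, object) follows the xAPI specification, but the learner, verb, and activity values here are purely illustrative; in practice such a statement would be POSTed, with authentication, to the statements endpoint of an LRS such as GrassBlade.

```python
import json

def build_xapi_statement(actor_email, actor_name,
                         verb_id, verb_display,
                         object_id, object_name):
    """Assemble a minimal xAPI statement as a Python dict.

    The three required parts are actor (who), verb (did what), and
    object (to what); all values below are illustrative examples.
    """
    return {
        "actor": {
            "objectType": "Agent",
            "name": actor_name,
            "mbox": f"mailto:{actor_email}",
        },
        "verb": {
            "id": verb_id,
            "display": {"en-US": verb_display},
        },
        "object": {
            "objectType": "Activity",
            "id": object_id,
            "definition": {"name": {"en-US": object_name}},
        },
    }

# Example: a learner completing a node in a virtual scenario
stmt = build_xapi_statement(
    "learner@example.org", "Example Learner",
    "http://adlnet.gov/expapi/verbs/completed", "completed",
    "http://demo.openlabyrinth.ca/node/123", "Scenario node 123",
)
print(json.dumps(stmt, indent=2))
```

Because every tool in the workflow emits statements in this same shape, the LRS can aggregate activity from OpenLabyrinth, CURIOS, and other services into a single record of what each team member actually did.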

OLab4 team-oriented functions

OpenLabyrinth v3 provides a variety of functions that specifically support team-based learning, and more importantly, the assessment of team and team-member performance.
● Scenarios – a mechanism to collate teams or groups of users, along with specific sets of maps and learning activities.
● Turk Talk – group oriented, live chats with the facilitator
● Cumulative Questions – a free-text survey tool that affords group input
● Real-time collaborative annotation of notes and worksheets
● CURIOS video mashups — annotated snippets of YouTube videos
● Integrated discussion forums

All of these powerful functions can be used by advanced case authors in their learning designs. In OLab4, such functionality will be afforded by a modular approach that makes it much more intuitive for occasional authors.

Underlying this is the need to further incorporate more sophisticated approaches to xAPI tracking and analytics. This also will be built into OLab4.

References

1. Kohn LT, Corrigan J, Donaldson MS. To Err Is Human : Building a Safer Health System. (Medicine I of, ed.). Washington, D.C.: National Academies Press; 2000. doi:https://doi.org/10.17226/9728.
2. Ellaway RH, Topps D, Lachapelle K, Cooperstock J. Integrating simulation devices and systems. Stud Heal Technol Inf. 2009;142:88-90. http://www.ncbi.nlm.nih.gov/pubmed/19377120.
3. Beckman TJ, Ghosh AK, Cook DA, Erwin PJ, Mandrekar JN. How reliable are assessments of clinical teaching? J Gen Intern Med. 2004;19(9):971-977. doi:10.1111/j.1525-1497.2004.40066.x.
4. Zwaan L, Monteiro S, Sherbino J, Ilgen J, Howey B, Norman G. Is bias in the eye of the beholder? A vignette study to assess recognition of cognitive biases in clinical case workups. BMJ Qual Saf. January 2016:bmjqs-2015-005014-. doi:10.1136/bmjqs-2015-005014.
5. Topps D. Notes on an EMR for Learners. 2010. doi:10.13140/RG.2.1.5064.6484.
6. Topps D, Meiselman E, Ellaway R, Downes A. Aggregating Ambient Student Tracking Data for Assessment. In: Ottawa Conference. Perth, WA: AMEE; 2016. http://www.slideshare.net/topps/aggregating-ambient-student-tracking-data-for-assessment.

YouTube videos for new OpenLabyrinth authors

With the help of the Taylor Institute for Teaching and Learning at the University of Calgary, we have created a series of short YouTube tutorials for those who want to learn more about the basics of authoring in OpenLabyrinth.

You can access this listing here: http://openlabyrinth.ca/youtube-howto-videos-about-openlabyrinth/

If you have suggestions on other topics that should be added to this list of how-to videos, please let us know.