Servers properly secured again

Our various servers, supporting OpenLabyrinth and the OLab education research platform, had some hiccups for a while, which left our support of HTTPS and SSL rather patchy.

These days, most web browsers have conniptions about visiting unsecured sites, so many of you were getting dire warnings that the site was insecure or had an invalid security certificate.

I think we have all that fixed now and there should be no more warnings.

Just to reassure you, there was no actual compromise of our sites and the data remains secure.

OLab4 Designer launch

Things have been really quiet over the summer, but we are now ready to show you our work on the OLab4 authoring interface.

Check out a series of upcoming posts on olab.ca, which will give more information on the OLab4 Designer and other new capabilities.

As before, all the source code will be posted on GitHub.

An Active Repository for OLab scenarios

For several years, we have been looking at different ways to make OpenLabyrinth scenarios more accessible. We think we have found a solution that meets the needs of both consumers and contributors of scenarios: the OLab Dataverse.

Now at first glance, this looks like Yet Another OER. But we think there are a few things that may help this one to be more successful. We are working on ways to make it dead easy for OLab authors to upload their best cases directly to the OLab Dataverse, which should help with the tedious task of metadata entry.
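For the technically curious, here is a rough sketch of what such a direct deposit can look like through the standard Dataverse native API. The host name, collection alias and metadata values are placeholders, and a real deposit needs more citation fields than shown; this illustrates the mechanism, not our finished uploader.

```python
# Minimal sketch: creating a dataset in a Dataverse collection via the
# native API, then attaching an exported OpenLabyrinth map file.
# Host, collection alias and metadata values below are illustrative.
import requests

HOST = "https://dataverse.example.ca"      # hypothetical Dataverse host
API_KEY = "YOUR-API-TOKEN"                 # issued per Dataverse account
COLLECTION = "olab"                        # hypothetical collection alias

metadata = {
    "datasetVersion": {
        "metadataBlocks": {
            "citation": {
                "fields": [
                    # A real deposit also needs author, contact and
                    # description fields; this is truncated for brevity.
                    {"typeName": "title", "multiple": False,
                     "typeClass": "primitive",
                     "value": "Example OLab scenario"},
                    {"typeName": "subject", "multiple": True,
                     "typeClass": "controlledVocabulary",
                     "value": ["Medicine, Health and Life Sciences"]},
                ]
            }
        }
    }
}

# Create the dataset; the repository mints a DOI on publication.
r = requests.post(
    f"{HOST}/api/dataverses/{COLLECTION}/datasets",
    headers={"X-Dataverse-key": API_KEY},
    json=metadata,
)
r.raise_for_status()
doi = r.json()["data"]["persistentId"]

# Attach the exported map (e.g. an MVP-format zip) to the new dataset.
with open("scenario_export.zip", "rb") as f:
    requests.post(
        f"{HOST}/api/datasets/:persistentId/add",
        params={"persistentId": doi},
        headers={"X-Dataverse-key": API_KEY},
        files={"file": f},
    ).raise_for_status()
```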

Because the materials are given a proper citation and DOI by the DataCite service, each scenario becomes a citable reference that can be added to the author's CV, making it easier to get academic credit for publishing cases.

In the meantime, we have created some short notes on how we currently upload OpenLabyrinth maps to the OLab Dataverse using a template.

The OLab Dataverse is hosted at Scholars Portal on Canadian servers. Using a non-USA-based service helps to mitigate some of the concerns raised by certain jurisdictions and granting agencies.

When we say 'Active Repository', we also plan to make this process more useful in providing activity metrics, using xAPI and an LRS. At present, we can create simple Guestbooks, which help us to track when the datasets are downloaded. But we feel it is equally important to create some activity metrics around the contributions by faculty members and teachers. Partly, this will be based on the new xAPI Faculty Profile that we are developing and will incorporate into our OLab uploading mechanisms.
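To give a flavour of what such a contribution metric might look like, here is the shape of an xAPI statement that a Faculty Profile could emit when an author publishes a case. The verb and object IRIs are placeholders, not a published vocabulary.

```python
# Sketch: an xAPI statement recording a faculty contribution event.
# The verb and object IRIs are placeholders, not a published profile.
contribution = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Author",                         # hypothetical author
        "mbox": "mailto:author@example.ca",
    },
    "verb": {
        "id": "http://example.org/xapi/verbs/published",  # placeholder IRI
        "display": {"en-US": "published"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://doi.org/10.5072/example",          # the dataset's DOI
        "definition": {"name": {"en-US": "Example OLab scenario"}},
    },
}
```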

It is time we did a better job of looking at how our contributions to open science are used, appreciated and distributed in the world of Precision Education. We have just submitted an article to MedEdPublish on why this is so important.

If you are interested in working with us in exploring how we can make these processes more accessible and more rewarding, please contact us.

OIPH Catalyst Grant outputs and metrics

In 2015, we were delighted to receive a Catalyst Grant from the O’Brien Institute for Public Health in support of the development of various aspects of OpenLabyrinth as an educational research platform.

We have just reported on the outcomes of this grant and, in general, we are pretty pleased with what came out of it and where things are headed. OpenLabyrinth continues to be used widely in the educational community, and that reach is growing.

But how do we know?

This is more challenging to assess than you might think. Using standard literature search techniques, it is not hard to find journal articles that relate to the ongoing and innovative use of OpenLabyrinth. But that is only a small part of the impact. To give credit to OIPH and its reporting template, it is great to see that they want to know about societal impacts, social media mentions and so on. This is something that we strongly agree with at OHMES.

But the actual measurement of such outputs is not so easy. An obvious way to do this would be via Altmetric, which is revolutionizing how such projects and outputs are seen by the public. It has a powerful suite of tools that can track mentions and reports across public channels, social media and news items. Great stuff.

But Altmetric requires items to be assigned to departments, institutes or faculties. For OHMES and OpenLabyrinth, this creates a significant problem: there is no way to assign a tag or category that spans the range of groups and organizations involved. This is somewhat surprising, given that the general approach in altmetrics is ontological, rather than taxonomical. (1,2)

In our PiHPES Project, partly as a result of such challenges, we are exploring two approaches to this. Firstly, we are using xAPI and the Learning Record Store (LRS) to directly track how our learners and teachers make use of the plethora of learning objects that we create in our LMSs and other platforms: the paradata, that is, data about how things are used, accessed and distributed.

Secondly, we are looking for ways to make such activities and objects more discoverable, both through improved analytics on the generated activity metrics, and by tying the paradata and metadata together in more meaningful ontologies, using semantic linking.
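As a toy illustration of that semantic linking, the sketch below ties a scenario's metadata (its DOI and title) to some paradata (who launched it, how often) as RDF triples, using the rdflib library; the namespace and predicates are invented for the example.

```python
# Toy sketch: linking a scenario's metadata (its DOI) with paradata (usage
# figures) as RDF triples. The namespace and predicates are invented here.
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/olab/")   # hypothetical vocabulary
g = Graph()

scenario = URIRef("https://doi.org/10.5072/example")   # placeholder DOI
g.add((scenario, EX.title, Literal("Example OLab scenario")))
g.add((scenario, EX.launchedBy, URIRef("mailto:learner@example.ca")))
g.add((scenario, EX.launchCount, Literal(42)))

# Serialize as Turtle so the links are human-readable and queryable.
print(g.serialize(format="turtle"))
```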

 

  1. Shirky C. Ontology is Overrated — Categories, Links, and Tags [Internet]. 2005 [cited 2019 May 3]. Available from: http://www.shirky.com/writings/ontology_overrated.html
  2. Leu J. Taxonomy, ontology, folksonomies & SKOS. [Internet]. SlideShare. 2012 [cited 2019 May 3]. p. 21. Available from: https://www.slideshare.net/JanetLeu/taxonomy-ontology-folksonomies-skos

Learning Analytics and xAPI

David Topps, Medical Director, OHMES, University of Calgary
Ellen Meiselman, Senior Analyst, Learning Systems, University of Michigan

The focus on Competency-Based Medical Education (CBME) has drawn welcome attention to the need for good assessment: how do you know when a learner is competent? We need data. All of us are happy when the learner is doing well. Some have suggested (1) that we are becoming too focused on checklists and that gestalt ratings are just as good. But what about those learners on the lower shoulder of the curve?

We need better data for earlier detection of those who need help. Over the years, several groups have used various approaches in intelligent tutoring systems to examine learner engagement. (2-4) However, engagement is only part of the picture: for a skilled learner, the materials may simply not meet their needs. Struggling learners often remain undetected, the failure-to-fail problem. (5-7) We need stronger data to support decisions to terminate, lest we spend weeks in litigation.

We have seen several efforts to support competency assessment: EPAs, milestones, pre-EPAs, each looking at lower levels of complexity on the spectrum of learning activities. But too many of these depend on observer-b(i)ased checklists, and we are already encountering survey fatigue at all levels of learner assessment. Yet, much of what we do is now captured online in one form or another: electronic medical records, learning management systems, simulation systems, enterprise calendars.

Activity metrics take a ground-up, rather than top-down, approach to tracking what learners actually do, rather than what they, or their teachers, say they do. This already happens within limited individual systems, but we need an extensible approach if we are to garner a more comprehensive view. The Experience API (xAPI) (8) is a technically simple approach that is easy to integrate into existing systems. Data is captured into a Learning Record Store (LRS). (9)

xAPI statements follow a very simple actor-verb-object structure: Bob Did This. And yet this simplicity belies great power through a very extensible set of vocabularies. The LRS structure is designed to swallow such xAPI statements from multiple sources, in true Big Data fashion, and can be securely federated so that a wide range of analytic and data visualization tools can be employed.
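To make the actor-verb-object idea concrete, here is a minimal sketch of such a statement being posted to an LRS. The /statements resource and the version header come from the xAPI specification; the endpoint, credentials and activity ID are placeholders.

```python
# Minimal sketch: "Bob completed this scenario" posted to an LRS.
# The endpoint, credentials and activity ID are placeholders.
import requests

statement = {
    "actor": {"name": "Bob", "mbox": "mailto:bob@example.ca"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",  # standard ADL verb
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://demo.olab.ca/olab/scenario/123",    # illustrative activity ID
        "definition": {"name": {"en-US": "Example virtual scenario"}},
    },
}

requests.post(
    "https://lrs.example.ca/xapi/statements",              # placeholder LRS endpoint
    auth=("lrs_key", "lrs_secret"),                        # basic-auth credentials
    headers={"X-Experience-API-Version": "1.0.3"},
    json=statement,
).raise_for_status()
```

Because every source writes this same simple structure, an LRS can accept statements from a virtual scenario player, an LMS and a simulator side by side.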

xAPI is technically well established. Groups such as MedBiquitous (10) are standardizing profiles of activities for medical education and health system outcomes. Now is the perfect time to engage medical educators in the power of these metrics. Their assessment needs should drive how and where these activities are measured.

References:

  1. Hodges B. Scylla or charybdis: navigating between excessive examination and naïve reliance on self-assessment. Nurs Inq. 2007;14(3):177. http://onlinelibrary.wiley.com/doi/10.1111/j.1440-1800.2007.00376.x/full. Accessed January 21, 2017.
  2. D’Mello S, Graesser A. Automatic Detection of Learner’s Affect from Gross Body Language. Appl Artif Intell. 2009;23(2):123-150. doi:10.1080/08839510802631745.
  3. Qu L, Wang N, Johnson WL. Using Learner Focus of Attention to Detect Learner Motivation Factors. In: Springer, Berlin, Heidelberg; 2005:70-73. doi:10.1007/11527886_10.
  4. Soldato T. Detecting and reacting to the learner’s motivational state. In: Springer, Berlin, Heidelberg; 1992:567-574. doi:10.1007/3-540-55606-0_66.
  5. Nixon LJ, Gladding SP, Duffy BL. Describing Failure in a Clinical Clerkship: Implications for Identification, Assessment and Remediation for Struggling Learners. J Gen Intern Med. 2016;31(10):1172-1179. doi:10.1007/s11606-016-3758-3.
  6. Wang FY, Degnan KO, Goren EN. Describing Failure in a Clinical Clerkship. J Gen Intern Med. 2017;32(4):378-378. doi:10.1007/s11606-016-3979-5.
  7. McColl T. MEdIC Series: Case of the Failure to Fail – Expert Review and Curated Community Commentary. AliEM. https://www.aliem.com/2017/06/medic-case-failure-to-fail-expert-review-curated-community-commentary/. Published 2017. Accessed April 29, 2019.
  8. Haag J. xAPI Overview – ADL Net. ADL Net. https://www.adlnet.gov/xAPI. Published 2015. Accessed May 29, 2017.
  9. Downes A. Learning Record Store – Tin Can API. Rustici LLC web site. http://tincanapi.com/learning-record-store/. Published 2015. Accessed May 29, 2017.
  10. Greene P, Smothers V. MedBiquitous Consortium | Advancing the Health Professions Through Technology Standards. http://www.medbiq.org/. Published 2001. Accessed November 1, 2016.

OLab site updates

We have been changing where the olab.ca services are hosted over the past few days. We apologize to those of you who found that some things did not respond while we switched over, or who received a (mildly alarming) message about a bad security certificate. We are gradually fixing all these issues, while continuing to develop the various tools and components of the platform.

The switch is partially complete, and it lets us open up a bunch of new OLab services. This site, https://olab.ca, will be the main distribution point for services and projects related to our OLab4 educational research platform.

At https://demo.olab.ca/olab, we are running a demo version of our OLab4 virtual scenario platform. The player has been complete for some time and has proven to be nice and stable. The authoring interface for creating new scenarios is still unfinished, I’m sorry to say, but we are now making great progress.

There will be three other platforms and three other services linked into this. More info will follow as we gradually integrate them.

Porting Medbiq standard virtual scenarios between servers

In past years, there has been a lot of focus on making virtual scenarios and virtual patients portable between systems. No organization wants to put a lot of effort and resources into creating a set of scenarios for its learners, only to find that they can no longer be used because the systems cannot play them. A huge waste.

Indeed, this problem was the genesis of the Open Educational Resources (OER) repository initiatives that we saw a few years ago. We like to think that the Virtual Patients community was ahead of its time in creating the eViP repository several years ago: https://virtualpatients.eu

This project was largely the stimulus for the creation, under the auspices of MedBiquitous (https://www.medbiq.org), of the ANSI/MedBiquitous Virtual Patient (MVP) standard (https://www.medbiq.org/medbiquitous_virtual_patient). This standard provided the means by which cases could be ported between servers: with VP players that comply with the MVP standard, you can move virtual scenarios between different systems.
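To give a feel for what a ported case looks like in transit, here is a rough sketch of inspecting an MVP-style export package. An MVP case travels as a zip archive of XML components plus media resources; the archive and manifest names used below are assumptions for illustration, not taken from the specification.

```python
# Rough sketch: peeking inside an MVP-style export package.
# The archive and manifest names are assumptions for illustration.
import zipfile
from xml.etree import ElementTree

with zipfile.ZipFile("scenario_export.zip") as pkg:
    # List the XML components and media resources in the package.
    for name in pkg.namelist():
        print(name)

    # Parse one XML component to walk the case structure.
    with pkg.open("imsmanifest.xml") as manifest:   # assumed entry point
        for elem in ElementTree.parse(manifest).iter():
            print(elem.tag)
```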

Given that many virtual patient players do not last very long, being able to take your cases and content to another server or system mitigated the risk of your own system going bust.

We are pleased to note that OpenLabyrinth is now one of the longest-lasting systems, at over 16 years old, and is also the most compliant with the MVP standard.

As we move onwards to OLab4, we will continue to support the MVP standard as far as possible, although we now note that its format does inhibit the migration of more advanced scenario features and functions.

The WAVES Project (http://wavesnetwork.eu) has been exploring this aspect of portability as part of its wide-reaching mandate. In particular, they have created a very useful document describing some of the challenges with the MVP standard. Hosted on GitHub, this is a live document that will continue to evolve: https://github.com/wavesnetwork/mvp-standard/blob/master/FAQ.md

Integration of systems remains a challenge. The impetus now seems to be leaning away from portable content and reusable objects. SCORM was widely heralded but not widely adopted. Now the focus seems to be more on the ability to explore activity streams across a broader range of tools and platforms. This is why, in OLab4 and in the WAVES Project, we are furthering the use of xAPI and the LRS as a common data repository where you can aggregate activity metrics, no matter what educational platform the learner is using.
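As a sketch of what that aggregation looks like in practice, the snippet below queries the standard GET /statements interface of an LRS for completions of a single activity, regardless of which platform reported them; the endpoint and credentials are placeholders.

```python
# Sketch: pulling activity metrics back out of an LRS via the standard
# GET /statements query interface. Endpoint and credentials are placeholders.
import requests

resp = requests.get(
    "https://lrs.example.ca/xapi/statements",              # placeholder LRS endpoint
    auth=("lrs_key", "lrs_secret"),
    headers={"X-Experience-API-Version": "1.0.3"},
    params={
        # Both filters are defined in the xAPI specification.
        "verb": "http://adlnet.gov/expapi/verbs/completed",
        "activity": "https://demo.olab.ca/olab/scenario/123",
        "since": "2019-01-01T00:00:00Z",
    },
)
resp.raise_for_status()

# Results arrive in pages; each statement records which platform sent it.
statements = resp.json()["statements"]
print(f"{len(statements)} completions in this page")
```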

Turk Talk on MedEdPublish

Michelle Cullen and the faculty members at the University of Calgary School of Nursing have been making great use of Turk Talk for teaching and assessing therapeutic communication. An article has just been published that summarizes the work so far:

Turk Talk: human-machine hybrid virtual scenarios for professional education 

Cullen M, Sharma N, Topps D. Turk Talk: human-machine hybrid virtual scenarios for professional education. MedEdPublish. 2018;7(4):45. doi:10.15694/mep.2018.0000266.1

Logical pathways in a Turk Talk map

In the article, they describe how they have been able to scale up this approach, its practical applications and utility, and the potential cost savings compared to using standardized patients.

If you are interested in exploring the Turk Talk approach further, please contact us.