Category Archives: xapi

An Active Repository for OLab scenarios

For several years, we have been looking at different ways to make OpenLabyrinth scenarios more accessible. We think we have found a solution that meets the needs of both consumers and contributors of scenarios: the OLab Dataverse.

Now at first glance, this looks like Yet Another OER. But we think there are a few things that may help this one be more successful. We are working on ways to make it dead easy for OLab authors to upload their best cases directly to the OLab Dataverse, which should help with the tedious task of metadata entry.

Because the materials are given a proper citation and DOI by the DataCite service, each scenario becomes a citable reference that can be added to the authors’ CVs, making it easier for them to get academic credit for publishing their cases.

We have created some short notes on how we currently upload OpenLabyrinth maps to the OLab Dataverse, using a template for now.

The OLab Dataverse is hosted at Scholars Portal on Canadian servers. Using a non-US-based service will help to mitigate some of the concerns raised in some jurisdictions and by some granting agencies.

When we say ‘Active Repository’, we also plan to make this process more useful by providing activity metrics, using xAPI and an LRS. At present, we can create simple Guestbooks, which help us to track when the datasets are downloaded. But we feel it is equally important to create some activity metrics around the contributions by faculty members and teachers. Partly, this will be based on the new xAPI Faculty Profile that we are developing and will incorporate into our OLab uploading mechanisms.
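
To give a flavour of what such a contribution statement might look like, here is a minimal sketch in Python. Everything in it – the author, the verb IRI, the DOI and the activity definition – is an invented placeholder, not the finished Faculty Profile vocabulary:

    # Sketch of a "faculty contribution" xAPI statement, as the Faculty
    # Profile might shape it. All names, IRIs and the DOI are invented.
    import json

    contribution = {
        "actor": {
            "objectType": "Agent",
            "name": "Jane Doe",                      # hypothetical author
            "mbox": "mailto:jane.doe@example.edu",
        },
        "verb": {
            # Placeholder verb; the profile may settle on its own IRI.
            "id": "http://activitystrea.ms/schema/1.0/share",
            "display": {"en-US": "shared"},
        },
        "object": {
            "objectType": "Activity",
            # Using the DataCite DOI as the activity id ties the activity
            # stream back to the same citable record (DOI invented here).
            "id": "https://doi.org/10.5683/SP2/EXAMPLE",
            "definition": {"name": {"en-US": "Example OLab scenario"}},
        },
    }

    print(json.dumps(contribution, indent=2))

Because the object id is the DOI itself, any later download or launch event recorded against it can be rolled up alongside the formal citation metrics.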

It is time we did a better job of looking at how our contributions to open science are used, appreciated and distributed in the world of Precision Education. We just submitted an article to MedEdPublish on why this is so important.

If you are interested in working with us in exploring how we can make these processes more accessible and more rewarding, please contact us.

OIPH Catalyst Grant outputs and metrics

In 2015, we were delighted to receive a Catalyst Grant from the O’Brien Institute of Public Health in support of development of various aspects of OpenLabyrinth as an educational research platform.

We have just reported on what arose as a result of this grant and, in general, we are pretty pleased with what came out of it and where things are headed. OpenLabyrinth continues to be used widely in the educational community and that reach is growing.

But how do we know?

This is more challenging to assess than you might think. Using standard lit search techniques, it is not hard to find journal articles that relate to the ongoing and innovative use of OpenLabyrinth. But that is only a small part of the impact. Now, to give credit to OIPH and its reporting template, it is great to see that they want to know about societal impacts, social media mentions, etc. This is something that we strongly agree with in OHMES.

But the actual measurement of such outputs is not so easy. An obvious way to do this would be via Altmetric, which is revolutionizing how such projects and outputs are seen by the public. It has a powerful suite of tools that allows it to track mentions and reports in various public channels, social media, news items etc. Great stuff.

But Altmetric requires items to be assigned to departments, institutes or faculties. For OHMES and OpenLabyrinth, this creates a significant problem: there is no ability to assign a tag or category which spans the range of groups and organizations that are involved. This is somewhat surprising, given that the general approach in altmetrics is ontological, rather than taxonomical (1,2).

In our PiHPES Project, partly as a result of such challenges, we are exploring two approaches to this. Firstly, we are using xAPI and the Learning Record Store (LRS) to directly track how our learners and teachers make use of the plethora of learning objects that we create in our LMSs and other platforms – the paradata: data about how things are used, accessed and distributed.

Secondly, we are looking for ways to make such activities and objects more discoverable, both through improved analytics on the generated activity metrics, and by tying the paradata and metadata together in more meaningful ontologies, using semantic linking.
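
As a small illustration of the first approach, the sketch below uses the standard xAPI query interface to pull back the statements recorded against a single scenario and tally the verbs – a crude but useful paradata summary. The LRS endpoint, credentials and DOI are placeholders:

    # Minimal paradata query: fetch statements about one scenario and
    # tally how it was used. Endpoint, credentials and DOI are made up.
    from collections import Counter
    import requests

    LRS = "https://lrs.example.org/xapi"
    AUTH = ("lrs_key", "lrs_secret")
    HEADERS = {"X-Experience-API-Version": "1.0.3"}
    SCENARIO = "https://doi.org/10.5683/SP2/EXAMPLE"   # invented DOI

    # "activity" and "limit" are standard xAPI query parameters; real
    # code would also follow the "more" link the LRS returns for paging.
    resp = requests.get(f"{LRS}/statements",
                        params={"activity": SCENARIO, "limit": 500},
                        auth=AUTH, headers=HEADERS)
    resp.raise_for_status()

    verbs = Counter(s["verb"]["id"] for s in resp.json()["statements"])
    for verb, count in verbs.most_common():
        print(f"{count:5d}  {verb}")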

 

  1. Shirky C. Ontology is Overrated — Categories, Links, and Tags [Internet]. 2005 [cited 2019 May 3]. Available from: http://www.shirky.com/writings/ontology_overrated.html
  2. Leu J. Taxonomy, ontology, folksonomies & SKOS. [Internet]. SlideShare. 2012 [cited 2019 May 3]. p. 21. Available from: https://www.slideshare.net/JanetLeu/taxonomy-ontology-folksonomies-skos

Learning Analytics and xAPI

David Topps, Medical Director, OHMES, University of Calgary
Ellen Meiselman, Senior Analyst, Learning Systems, University of Michigan

The focus on Competency-Based Medical Education (CBME) has shone a welcome light on the need for good assessment: how do you know when a learner is competent? We need data. All of us are happy when the learner is doing well. Some have suggested (1) that we are becoming too focused on checklists and that gestalt ratings are just as good. But what about those learners on the lower shoulder of the curve?

We need better data for earlier detection of those who need help. Over the years, several groups have used various approaches in intelligent tutoring systems to examine learner engagement (2-4). However, engagement is only part of the picture: for a skilled learner, the materials may simply not meet their needs. Struggling learners often remain undetected – the ‘failure to fail’ problem (5-7). We need stronger data to support decisions to terminate, lest we spend weeks in litigation.

We have seen several efforts to support competency assessment: EPAs, milestones, pre-EPAs, each looking at lower levels of complexity on the spectrum of learning activities. But too many of these depend on observer-b(i)ased checklists, and we are already encountering survey fatigue at all levels of learner assessment. Yet, much of what we do is now captured online in one form or another: electronic medical records, learning management systems, simulation systems, enterprise calendars.

Activity metrics take a ground-up, rather than top-down, approach to tracking what learners actually do, rather than what they, or their teachers, say they do. This already happens in limited individual systems, but we need an extensible approach if we are to garner a more comprehensive view. The Experience API (xAPI) (8) is a technically simple approach that is easy to integrate into existing systems. Data is captured into a Learning Record Store (LRS) (9).

xAPI statements follow a very simple actor-verb-object structure: Bob Did This. And yet this simplicity belies great power, thanks to a very extensible set of vocabularies. The LRS structure is designed to swallow such xAPI statements from multiple sources, in true Big Data fashion, and can be securely federated so that a wide range of analytic and data visualization tools can be employed.
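
A minimal ‘Bob Did This’ statement, and the single HTTP call needed to store it, might look like the sketch below. The statements endpoint, Basic auth and version header are the standard xAPI plumbing; the LRS URL, credentials and activity IRI are made up:

    # "Bob Did This" as an xAPI statement, POSTed to an LRS.
    import requests

    statement = {
        "actor": {"objectType": "Agent", "name": "Bob",
                  "mbox": "mailto:bob@example.org"},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/experienced",
                 "display": {"en-US": "experienced"}},
        "object": {"objectType": "Activity",
                   "id": "http://example.org/activities/case-vignette-1",
                   "definition": {"name": {"en-US": "Case vignette 1"}}},
    }

    resp = requests.post(
        "https://lrs.example.org/xapi/statements",   # hypothetical LRS
        json=statement,
        auth=("lrs_key", "lrs_secret"),              # Basic auth
        headers={"X-Experience-API-Version": "1.0.3"},
    )
    resp.raise_for_status()
    print("Stored statement id:", resp.json()[0])    # LRS returns a list of ids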

xAPI is technically well established. Groups such as MedBiquitous (10) are standardizing profiles of activities for medical education and health system outcomes. Now is the perfect time to engage medical educators in the power of these metrics. Their assessment needs should drive how and where these activities are measured.

References:

  1. Hodges B. Scylla or charybdis: navigating between excessive examination and naïve reliance on self-assessment. Nurs Inq. 2007;14(3):177. http://onlinelibrary.wiley.com/doi/10.1111/j.1440-1800.2007.00376.x/full. Accessed January 21, 2017.
  2. D’Mello S, Graesser A. Automatic Detection of Learner’s Affect from Gross Body Language. Appl Artif Intell. 2009;23(2):123-150. doi:10.1080/08839510802631745.
  3. Qu L, Wang N, Johnson WL. Using Learner Focus of Attention to Detect Learner Motivation Factors. In: Springer, Berlin, Heidelberg; 2005:70-73. doi:10.1007/11527886_10.
  4. Soldato T. Detecting and reacting to the learner’s motivational state. In: Springer, Berlin, Heidelberg; 1992:567-574. doi:10.1007/3-540-55606-0_66.
  5. Nixon LJ, Gladding SP, Duffy BL. Describing Failure in a Clinical Clerkship: Implications for Identification, Assessment and Remediation for Struggling Learners. J Gen Intern Med. 2016;31(10):1172-1179. doi:10.1007/s11606-016-3758-3.
  6. Wang FY, Degnan KO, Goren EN. Describing Failure in a Clinical Clerkship. J Gen Intern Med. 2017;32(4):378-378. doi:10.1007/s11606-016-3979-5.
  7. McColl T. MEdIC Series: Case of the Failure to Fail – Expert Review and Curated Community Commentary. AliEM. https://www.aliem.com/2017/06/medic-case-failure-to-fail-expert-review-curated-community-commentary/. Published 2017. Accessed April 29, 2019.
  8. Haag J. xAPI Overview – ADL Net. ADL Net. https://www.adlnet.gov/xAPI. Published 2015. Accessed May 29, 2017.
  9. Downes A. Learning Record Store – Tin Can API. Rustici LLC web site. http://tincanapi.com/learning-record-store/. Published 2015. Accessed May 29, 2017.
  10. Greene P, Smothers V. MedBiquitous Consortium | Advancing the Health Professions Through Technology Standards. http://www.medbiq.org/. Published 2001. Accessed November 1, 2016.

xAPI and Learning Analytics

The Experience API (xAPI) provides OLab with some powerful tools to integrate activity metrics as a research tool. But, of course, there is more to it than just capturing and aggregating data.

Data visualization and learning analytics are increasingly important — this is one of the key pillars in our push towards Precision Education. Some of the Learning Record Stores (LRS) come with tools to assist with such analytics. We have spoken of this before: while we currently use GrassBlade as our workaday LRS because it is simple for small pilots, the beauty of the LRS approach is that data can easily be federated across other LRSs. For example, we have made use of the more powerful analytics provided by the Watershed LRS.
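
The federation itself is little more than reading from one statements endpoint and writing to another. Here is a rough sketch of the pattern, with both endpoints and credential pairs invented; production forwarding would also follow the ‘more’ paging link and de-duplicate on statement id:

    # Sketch of LRS-to-LRS federation: copy recent statements from a
    # workaday LRS into a second LRS with richer analytics.
    import requests

    VERSION = {"X-Experience-API-Version": "1.0.3"}
    SRC = "https://workaday-lrs.example.org/xapi"     # placeholder
    DST = "https://analytics-lrs.example.org/xapi"    # placeholder
    SRC_AUTH = ("src_key", "src_secret")
    DST_AUTH = ("dst_key", "dst_secret")

    # Pull one page of recent statements ("since" is a standard filter).
    resp = requests.get(f"{SRC}/statements",
                        params={"since": "2019-01-01T00:00:00Z", "limit": 50},
                        auth=SRC_AUTH, headers=VERSION)
    resp.raise_for_status()
    statements = resp.json()["statements"]

    # The statements endpoint accepts a list, so forward the whole batch.
    if statements:
        requests.post(f"{DST}/statements", json=statements,
                      auth=DST_AUTH, headers=VERSION).raise_for_status()
        print(f"Forwarded {len(statements)} statements")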

However, as we move into more detailed analytics, it is great to be able to work with even more powerful tools. We have just started working with the IEEE ICICLE group on better approaches to such learning analytics. LearnSphere is one such tool, extensively used at Carnegie Mellon University (and open-source, on GitHub).

LearnSphere has a powerful set of analytics tools. The screenshot above shows us using its Tigris workflow tool to map out good learning designs and scenarios, designed to answer questions such as “which kinds of learner activity are worth measuring?” The datasets can be quite varied, and the LearnSphere group is interested in accommodating a wider range of learning research datasets.

Today’s discussion in the IEEE ICICLE xAPI and Learning Analytics SIG focused on how to integrate xAPI activity streams more seamlessly with LearnSphere. We are pleased to be involved with such dataflow integration initiatives. As Koedinger et al (1) demonstrated in 2016, there is a clear link between “doing” and learning. This is not a new concept at all, but proving it has been remarkably difficult in the world of education, where there are so many confounding factors to consider in a study methodology. This approach, using learning analytics, is much more solid.

1.  Koedinger, K. R., McLaughlin, E. A., Jia, J. Z., & Bier, N. L. (2016). Is the doer effect a causal relationship? In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge – LAK ’16 (pp. 388–397). New York, New York, USA: ACM Press. http://doi.org/10.1145/2883851.2883957

Using xAPI to support blended simulation

OLab and OpenLabyrinth have always been good at providing the contextual glue that holds together various simulation modalities. Here are some examples of projects where OpenLabyrinth has supported blended simulation activities:

  • Virtual Spinal Tap – uses haptic simulation to model the feeling of needle insertion
  • Rushing Roulette – timed tasks with a $30 Arduino lie detector!
  • Crevasse Rescue – multiple teams & disciplines, with high & low fidelity simulators
  • R. Ed Nekke – bookending around Laerdal mannequin scenario

But now, with xAPI providing the background data linking to a Learning Record Store, it is much easier to do this across a wider range of tools and platforms. Some of the above-mentioned projects used a very sophisticated gamut of high-speed networks, at considerable cost.

Doing this now with xAPI is proving to be much more flexible, scalable and cost effective. To support haptic projects, like Virtual Spinal Tap, we are now working with the Medbiq Learning Experience Working Group on an xAPI Haptics Profile. Check it out and give us feedback.
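
To make the idea concrete, here is a guess at what a Haptics Profile statement might record for a simulated lumbar puncture. None of these IRIs or extension keys come from the draft profile; they simply show where force and depth readings could sit in the standard result.extensions slot:

    # Purely hypothetical Haptics Profile statement: the verb, activity
    # and extension IRIs below are invented, not the Medbiq draft.
    import json

    haptic_event = {
        "actor": {"objectType": "Agent", "name": "Trainee A",
                  "mbox": "mailto:trainee.a@example.edu"},
        "verb": {"id": "http://example.org/haptics/verbs/inserted",
                 "display": {"en-US": "inserted"}},
        "object": {"objectType": "Activity",
                   "id": "http://example.org/sims/virtual-spinal-tap",
                   "definition": {"name": {"en-US": "Virtual Spinal Tap"}}},
        "result": {
            # result.extensions is the standard xAPI slot for
            # domain-specific measurements; keys and units invented.
            "extensions": {
                "http://example.org/haptics/peak-force-newtons": 6.2,
                "http://example.org/haptics/insertion-depth-mm": 38,
            }
        },
    }

    print(json.dumps(haptic_event, indent=2))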

OpenLabyrinth at the OHMES Symposium

Coming up this week, on Wed 22nd and Thu 23rd Feb, we are hosting the annual symposium for OHMES (the Office of Health & Medical Education Scholarship) at the Cumming School of Medicine. We have some great keynote speakers, including Lorelei Lingard, Kevin Eva and Stella Ng. For full details on the program, check out:

http://cumming.ucalgary.ca/ohmes/events/health-and-medical-education-scholarship-symposium

We have lots of interest this year – we hope you have registered already.

One of the things that we will be demonstrating at this year’s symposium is the continuing work we are doing with our Rushing Roulette stress tests.

Check out this page for more info on how we are combining multiple activity streams, using xAPI and a Learning Record Store (LRS), OpenLabyrinth, and a cheap $30 Arduino board.

You can also use this short link to reach that same page: http://tiny.cc/RRdemo

OpenLabyrinth stress testing at CHES scholarship day

On Wed, 5th October, the Centre for Health Education Scholarship (CHES) at UBC held its annual scholarship symposium, in Vancouver.

There were many interesting sessions, including a stirring keynote address from Rachel Ellaway (Professor, Education Research, University of Calgary).

OpenLabyrinth featured in a few presentations at the CHES symposium, including a short presentation on Activity Metrics by David Topps and Corey Albersworth. (See http://www.slideshare.net/topps/activity-metrics-for-ches-day)

In one of the afternoon demonstration sessions, we were able to show our Arduino stress-detector kit in action to conference participants. Here we have a short video of the Arduino sensors being calibrated.

This was the same basic setup as that first shown at the Medbiq Conference in Baltimore earlier this year. However, for this conference, no expense was spared. We splurged another $29.99 on another Arduino device. Yes, it nearly broke the budget!

We also managed to set up the software on both Windows 10 and OS X Yosemite, which highlights the platform independence of the Eclipse IDE that we used for collecting the Arduino data and sending it to the LRS.

Here we have a short video of the OpenLabyrinth stress-test in action. Our participant is playing a rapid-fire series of case vignettes on the Mac, while the Arduino sensors connected to the Windows machine record real-time data on her heart rate and Galvanic Skin Response.

We initially created this project as a simple technical demonstration that one could use a cheap, easy combination of Arduino hardware, OpenLabyrinth and xAPI statement collection into the GrassBlade Learning Record Store. We had only intended to show that such collection from multiple activity streams was feasible within the time and resources available to the average education researcher (i.e. not much).
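
For anyone wanting to reproduce the general pattern, here is the shape of that pipeline sketched in Python (our actual collector ran in Java under the Eclipse IDE). The serial port name, the line format the Arduino prints, and all the IRIs and LRS details below are assumptions:

    # Rough sketch of the Arduino-to-LRS pipeline. Assumes the Arduino
    # prints lines like "HR:72,GSR:512"; the port, verb and extension
    # IRIs, and LRS credentials are placeholders. Needs pyserial + requests.
    import serial
    import requests

    PORT = "/dev/ttyUSB0"                 # e.g. "COM3" on Windows
    LRS = "https://lrs.example.org/xapi/statements"
    AUTH = ("lrs_key", "lrs_secret")
    HEADERS = {"X-Experience-API-Version": "1.0.3"}

    with serial.Serial(PORT, 9600, timeout=2) as arduino:
        while True:
            raw = arduino.readline().decode("ascii", errors="ignore").strip()
            if not raw:
                continue                  # read timed out with no data
            # Parse "HR:72,GSR:512" into {"HR": "72", "GSR": "512"}.
            reading = dict(kv.split(":", 1) for kv in raw.split(","))
            if "HR" not in reading or "GSR" not in reading:
                continue                  # skip malformed lines
            statement = {
                "actor": {"objectType": "Agent", "name": "Participant 1",
                          "mbox": "mailto:participant1@example.org"},
                "verb": {"id": "http://example.org/verbs/measured",   # invented
                         "display": {"en-US": "measured"}},
                "object": {"objectType": "Activity",
                           "id": "http://example.org/activities/stress-sensors"},
                "result": {"extensions": {                            # invented keys
                    "http://example.org/ext/heart-rate-bpm": int(reading["HR"]),
                    "http://example.org/ext/gsr-raw": int(reading["GSR"]),
                }},
            }
            requests.post(LRS, json=statement, auth=AUTH, headers=HEADERS)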

We were delighted to find that the stress detector was much more sensitive than we anticipated and will be useful in real-world research.

Medbiq xAPI workshop technical report

We just published the interim technical report from our xAPI workshop at the Medbiq annual conference: https://www.researchgate.net/publication/304084961_Medbiq_xAPI_Workshop_2016_Technical_Report. (We also have an updated report, stored internally here: Medbiq xAPI Workshop Report, which corrects a few minor errors in the original.)

Medbiq xAPI Arduino Sensors

As we mentioned in our earlier posts, we were really pleased by the participation at the workshop. We have just heard from Medbiq that it was well received and the evaluations were very positive.

We created this much more detailed Technical Report so that others who may be interested in exploring what you can do with xAPI and Arduino sensors can follow our processes and the challenges we faced. This will hopefully provide enough detail that other groups can make similar explorations. Please feel free to contact us through this site if you are interested in this area of research and development.