More xAPI stuff from OpenLabyrinth

Our work with activity metrics and the Experience API (xAPI) continues apace with the OpenLabyrinth platform. We have been able to integrate xAPI statements into our CURIOS video mashup tool.

Now, when you insert a video mashup into one of our OpenLabyrinth cases, you can track how your users interact with your videos: which segments they watch, skip or replay.
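To give a flavour of what gets tracked, here is a minimal sketch of an xAPI statement for a video "seeked" event (the learner jumping from one point in the video to another). The verb and extension IRIs come from the public xAPI video profile; the actor email and video URL are placeholders, and the exact statements CURIOS emits may differ:

```python
from datetime import datetime, timezone

def video_seeked_statement(actor_email, video_url, time_from, time_to):
    """Build a minimal xAPI statement for a video 'seeked' event."""
    return {
        "actor": {"objectType": "Agent", "mbox": f"mailto:{actor_email}"},
        "verb": {
            "id": "https://w3id.org/xapi/video/verbs/seeked",
            "display": {"en-US": "seeked"},
        },
        "object": {
            "objectType": "Activity",
            "id": video_url,
            "definition": {
                "type": "https://w3id.org/xapi/video/activity-type/video"
            },
        },
        # time-from/time-to record where the learner jumped from and to,
        # in seconds from the start of the video
        "result": {
            "extensions": {
                "https://w3id.org/xapi/video/extensions/time-from": time_from,
                "https://w3id.org/xapi/video/extensions/time-to": time_to,
            }
        },
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```

A stream of statements like this, one per play/pause/seek, is what lets you see which bits of a video your users actually watch.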

This will dovetail nicely with some of the xAPI features that we can now access with the H5P widgets. It will also allow us to track activities across a widening range of educational activities.

OpenLabyrinth has H5P widgets

A couple of weeks ago, we described how we were using H5P widgets here on our WordPress web site. Well, now we also have them fully integrated into OpenLabyrinth itself.

H5P logo

So, what’s the big deal, I hear you say…well, it means that we now have access to a whole new way of interacting with our users. It makes our nodes and pages much richer, with some nicely crafted HTML5 interactive content.

There are many pre-built H5P widgets on their main web site, which you can then easily modify to include your own content. We won’t bore you with descriptions of everything they have because H5P does it better. But the really cool part is that you can download H5P widgets from other web sites and insert them into your own cases and pages.

Given the interest in our recent work on Activity Metrics and xAPI, we are also delighted that H5P widgets provide xAPI tracking. So you can study how your learners interact with your widgets and cases in even greater detail.
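Quiz-style H5P widgets typically report a score in the `result` block of their "answered" or "completed" statements. As a hedged sketch of what you might do with those statements server-side (this helper is illustrative, not part of OpenLabyrinth):

```python
def extract_h5p_result(statement):
    """Pull the scaled score (0.0-1.0) and success flag out of an xAPI
    statement's 'result' block, returning (None, None) when the
    statement carries no result."""
    result = statement.get("result") or {}
    score = (result.get("score") or {}).get("scaled")
    return score, result.get("success")
```

With scores extracted like this, interaction data from H5P widgets can sit alongside the rest of a case's activity metrics.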

OpenLabyrinth v3.4 released

Delighted to announce that we released v3.4 of OpenLabyrinth today.

Lots and lots of changes in this one… maybe too many… we are considering putting out a v3.3.3 with a smaller set of changes.

Those who have been following this blog will be familiar with what we have been working on. We’ll put out a more detailed list of changes on the forum soon. The main things are as follows:

  • xAPI reporting to an LRS
  • H5P widget integration
  • Turk Talk for chat-style small-group communications
  • Improved LTI stability

For the latest release, server administrators can pull this from GitHub. For everyone else, we always run the latest version of the software on our demo server, so if you want to try these things out, contact us for a free trial account.

Activity metrics at Medbiquitous conference

It’s certainly conference season around here. The Medbiq Annual Conference is coming up again soon in Baltimore, May 15-17, 2016.

Medbiq logo

Following on from previous years, activity streams and learning analytics will again feature prominently. OpenLabyrinth will be heavily used in a workshop we are holding about the Experience API (xAPI), along with some interesting widgets and gadgets to track/stress your learners.

This will make a nice extension of some of the other work we have recently presented on big data principles applied to educational metrics, at the Ottawa Conference and CCME over the past month.

Come and play – we’ll make you sweat!

OpenLabyrinth at CCME

The Canadian Conference on Medical Education (CCME) starts tomorrow in Montreal, QC.

This annual gathering brings together medical educators from around the world, discussing a wide range of topics and research interests.

OpenLabyrinth features in several presentations, workshops and posters at this conference. If you are there, come and chat with us about this educational research platform. Since OpenLabyrinth is free and open-source, we don’t have the funds for a fancy exhibitor booth. Plus, we are not selling anything.

But we are always interested in talking to groups who are interested in educational research and who want to explore what can be done with activity metrics, branched pathways, embedded video or facilitated small group scenarios.

OpenLabyrinth’s timings tighten up

We are pleased to announce an interesting new development on our OpenLabyrinth test site. We are experimenting with timestamps that have millisecond accuracy – this opens the tool up to a whole range of new research areas.

For example, you can now start looking at reaction times or which player was first to the buzzer in competitive team scenarios. Lots more fun stuff.

Previously, OpenLabyrinth recorded all of our participants’ activities when playing a case into its database, but the timestamp for each activity point was only accurate to the nearest second. For most purposes, this is just fine.

But now we can track these same activity points much more accurately: the internal database records timestamps in microseconds. Anyone who works with such research will know that you also have to take into account the tiny fraction of a second between an activity occurring and the time it is stored, including the processing time in between. There are established techniques for accommodating these timing offsets.
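As a small sketch of the kind of analysis this enables: with microsecond-precision timestamps, a reaction time is just the difference between a stimulus timestamp and a response timestamp, minus a per-deployment processing offset. The offset value here is a hypothetical calibration number, not something OpenLabyrinth supplies:

```python
from datetime import datetime, timedelta

def reaction_time_ms(stimulus_ts, response_ts, processing_offset_ms=0.0):
    """Reaction time in milliseconds between two microsecond-precision
    timestamps, minus a measured storage/processing offset.

    The offset is a calibration value estimated per deployment
    (e.g. by round-trip timing), not a universal constant."""
    delta = response_ts - stimulus_ts
    return delta / timedelta(milliseconds=1) - processing_offset_ms

# Example: a response logged 250 ms after the stimulus, with a
# calibrated 30 ms processing offset, gives a 220 ms reaction time.
t0 = datetime(2016, 5, 1, 12, 0, 0, 100000)
t1 = datetime(2016, 5, 1, 12, 0, 0, 350000)
rt = reaction_time_ms(t0, t1, processing_offset_ms=30.0)
```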

So, if you have an interest in taking advantage of this greater timing accuracy in one of your projects, please contact us.

Turk Talk improvements in OpenLabyrinth

Today we released a new improved format for our powerful Turk Talk function in OpenLabyrinth. Many small tweaks have made this much more user friendly.

Check out the addendum to the User Guide which explains in better detail how to use this new functionality. Here is what the Turker now sees when guiding up to 8 learners simultaneously:

OLab Turker Chat panel

This is just a glimpse. The instruction notes show this much more clearly, but you can see 5 users waiting (pink columns) for the Turker to respond. The Turker can see where they are in the case, and who has been waiting longest.

OpenLabyrinth and BIG data

The Ottawa Conference, an international medical education conference held every two years (and, despite the name, only occasionally in Ottawa: 2002, 2014…), has its main focus on educational assessment.

This year, as we noted in a previous post, there has been a lot of interest in Big Data in medical education. Now, before I am laughed out of the house by real big data scientists, I hasten to add that the amounts of data generated by medical education are still tiny compared to those from genomics, protein folding or the ginormous stuff from the Large Hadron Collider.

But size isn’t everything.

There are various V’s attributed to big data – initially three, though the list keeps growing and remains controversial, a digression I won’t get into here:

  • Volume
  • Velocity
  • Variety

While our volumes are several orders of magnitude smaller than the big boys’, it is the principles that matter. What we have been finding is that these principles are very useful and usable even when applied to personal learning data. Just before the conference, we posted some test pages about Precision Education. This theme came up over and over again at the Ottawa Conference, with some fascinating insights that can be generated from such data sources.

If you want a nice, easy-to-understand overview of some of the key principles of big data, I suggest (again) that you take a look at Kenneth Cukier’s presentation at TED.

OpenLabyrinth has Experience

Just in time for the Ottawa Conference in Perth last week, we were able to demonstrate activity metrics generated by OpenLabyrinth, via the ADL Experience API (xAPI, a.k.a. the TinCan API), to multiple Learning Record Stores (LRSs).

xAPI is really catching on in educational tracking and research. It is a much lighter, more agile approach than SCORM and looks set to replace it. Much has been written about xAPI over the past two years, which I won’t repeat here. Suffice it to say that it is simple, yet very powerful in what you can do with it.
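Part of that simplicity is the transport: sending a statement to an LRS is a single HTTP POST to its `/statements` resource, with a version header and (commonly) HTTP Basic auth, as the xAPI spec describes. A minimal sketch, with placeholder endpoint and credentials:

```python
import base64
import json
import urllib.request

def build_statement_request(endpoint, username, password, statement):
    """Prepare an HTTP POST of one xAPI statement to an LRS's
    /statements resource. Headers follow the xAPI spec; the endpoint
    and credentials are placeholders for your own LRS account."""
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    return urllib.request.Request(
        endpoint.rstrip("/") + "/statements",
        data=json.dumps(statement).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-Experience-API-Version": "1.0.3",
            "Authorization": "Basic " + auth,
        },
        method="POST",
    )
```

Actually sending it is then just `urllib.request.urlopen(req)`; the LRS replies with the IDs of the stored statements.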

OpenLabyrinth already has a very detailed and powerful set of internal metrics built into it; with xAPI we can now extend what can be done with our platform and track learning experiences across many simulation modalities. OpenLabyrinth has already proven to be remarkably effective at integrating learning activities – our ‘conceptual glue’ as we call it. We have written many times in the past about how we have used OpenLabyrinth to tie together various activities into a consistent logical pathway or narrative.

Now, we can do this with a wider variety of other tools and yet still track what learners and teachers are actually doing within these learning objects. Sharing of open educational objects is not about the metadata of where they are stored and what learners can do with them; it is about the activity streams of what they actually do… in real life… and real time.

Our dev team at ITRex has done a really nice job of integrating xAPI into our core structures. We are now able to perform a post-hoc analysis in great detail over selected cases, scenarios or date ranges. This can even be done, thanks to OpenLabyrinth’s strong internal metrics, on cases that were written and played long before xAPI existed!

We can also do real-time tracking of activity metrics, sending xAPI statements out to the LRS immediately. We have been cautious in implementing this, so as not to bog down our poor little servers. But it works… and opens up some really interesting cross-platform communications.
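One common way to keep real-time reporting from bogging down a server is to buffer statements and flush them in batches, since xAPI allows POSTing a JSON array of statements in one request. A sketch only; the batch size and send callback are assumptions, not OpenLabyrinth's actual implementation:

```python
class StatementBuffer:
    """Collect xAPI statements and hand them off in batches, so that
    statement delivery does not hold up case playback."""

    def __init__(self, send_batch, batch_size=25):
        self._send = send_batch      # callable taking a list of statements
        self._size = batch_size
        self._pending = []

    def add(self, statement):
        self._pending.append(statement)
        if len(self._pending) >= self._size:
            self.flush()

    def flush(self):
        # xAPI permits one POST carrying an array of statements
        if self._pending:
            batch, self._pending = self._pending, []
            self._send(batch)
```

In practice the send callback would POST the batch to the LRS (ideally off the request thread), falling back to retrying later if the LRS is unreachable.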

At present, we are sending statements to our GrassBlade LRS and to SCORM Cloud, hosted by Rustici. If others are interested in exploring this with us, contact us via one of the usual methods.

Activity metrics and big data

OpenLabyrinth and the Experience API (xAPI) will feature prominently in several sessions at the upcoming Ottawa Conference in Perth, Australia, next week.


The main focus of the Ottawa Conference is assessment and evaluation in health professional education. As part of this, there are several discussions on activity metrics and big data.

We have been adapting OpenLabyrinth to make better use of the xAPI and combining it with xAPI data from many other sources to get a fuller picture of what our learners do within their learning context.

We will post links to some of the materials generated at this conference during the workshops and PeArLS sessions that relate to activity metrics.