More on the xAPI stuff… and perhaps a wee bit of clarification about terminology.
OpenLabyrinth was just admitted to the official group of Tin Can Adopters.
Tin Can API was the original name given by Rustici Software. It is now more properly known as the Experience API, or xAPI, but many still call it Tin Can. It is the same thing and the terms are synonymous. Advanced Distributed Learning (ADL) was the group that first commissioned the development of xAPI from Rustici, so I guess they get to name it.
But most importantly, the API will remain open and non-proprietary.
A couple of weeks ago, we described how we were using H5P widgets here on our WordPress web site. Well, now we also have them fully integrated into OpenLabyrinth itself.
So, what’s the big deal, I hear you say…well, it means that we now have access to a whole new way of interacting with our users. It makes our nodes and pages much richer, with some nicely crafted HTML5 interactive content.
There are many pre-built H5P widgets on their main web site, which you can then easily modify to include your own content. We won’t bore you with descriptions of everything they have because H5P does it better. But the really cool part is that you can download H5P widgets from other web sites and insert them into your own cases and pages.
Given the interest in our recent work on Activity Metrics and xAPI, we are also delighted that H5P widgets provide xAPI tracking. So you can study how your learners interact with your widgets and cases in even greater detail.
It’s certainly conference season around here. The Medbiq Annual Conference is coming up again soon in Baltimore, May 15-17, 2016.
Following on from previous years, activity streams and learning analytics will again feature prominently. OpenLabyrinth will be heavily used in a workshop we are holding about the Experience API (xAPI), along with some interesting widgets and gadgets to track/stress your learners.
This makes a nice extension of some of the other work on big data principles, applied to educational metrics, that we presented at the Ottawa Conference and at CCME over the past month.
Come and play – we’ll make you sweat!
We are pleased to announce an interesting new development on our OpenLabyrinth test site. We are experimenting with timestamps that have millisecond accuracy, which opens the tool up to a whole range of new research areas.
For example, you can now start looking at reaction times or which player was first to the buzzer in competitive team scenarios. Lots more fun stuff.
Previously, in OpenLabyrinth, all of our participants’ activities when playing a case were recorded into its database but the timestamps for each activity point were only recorded to the nearest second. For most purposes, this is just fine.
But now we can track these same activity points much more accurately: the internal database records timestamps with microsecond precision. Anyone who works in this kind of research will know that you also have to account for the tiny fraction of a second between an activity occurring and its being stored, including the processing time in between. There are established techniques for accommodating these timing offsets.
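To make the idea concrete, here is a minimal sketch of that kind of offset correction. The function name, the timestamps and the 40 ms processing delay are all illustrative assumptions, not OpenLabyrinth's actual implementation:

```python
from datetime import datetime, timedelta

def corrected_reaction_time(shown_at: datetime, stored_at: datetime,
                            processing_offset_ms: float) -> timedelta:
    """Estimate the learner's true reaction time by subtracting a
    measured processing delay from the raw stored interval."""
    raw = stored_at - shown_at
    return raw - timedelta(milliseconds=processing_offset_ms)

# Example: a question is shown at t0 and the answer is stored
# 1.250 s later; a measured ~40 ms of that is processing delay.
t0 = datetime(2016, 3, 1, 10, 0, 0, 0)
t1 = datetime(2016, 3, 1, 10, 0, 1, 250000)
rt = corrected_reaction_time(t0, t1, processing_offset_ms=40)
print(rt.total_seconds())  # 1.21
```

In practice, the offset itself has to be calibrated per deployment (network and server load vary), which is why we are cautious about over-interpreting single measurements.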
So, if you have an interest in taking advantage of this greater timing accuracy in one of your projects, please contact us.
The Ottawa Conference, an international medical education conference held every two years, and only occasionally in Ottawa (2002, 2014…), has as its main focus educational assessment.
This year, as we noted in a previous post, there has been a lot of interest in Big Data in medical education. Now, before I am laughed out of the house by real big data scientists, I hasten to add that the amounts of data generated by medical education are still tiny compared to those from genomics, protein folding or the ginormous stuff from the Large Hadron Collider.
But size isn’t everything.
There are various V’s attributed to big data – initially three, but the list keeps growing and remains controversial, a digression I won’t get into here.
While our volumes are several orders of magnitude smaller than the big boys, it is the principles that matter. What we have been finding is that these principles are very useful and usable even when applied to personal learning data. Just before the conference, we posted some test pages about Precision Education. This theme came out over and over again at the Ottawa Conference with some fascinating insights that can be generated from such data sources.
If you want a nice, easy-to-understand overview of some of the key principles of big data, I suggest (again) that you take a look at Kenneth Cukier’s presentation at TED.
Just in time for the Ottawa Conference in Perth last week, we were able to demonstrate activity metrics generated by OpenLabyrinth via the ADL Experience API (xAPI, a.k.a. Tin Can API) to multiple Learning Record Stores (LRSs).
xAPI is really catching on in educational tracking and research. It is a much lighter, more agile approach than SCORM and looks set to replace it. Much has been written about xAPI over the past two years, which I won’t repeat here. Enough to say that it is simple, yet very powerful in what you can do with it.
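To give a flavour of that simplicity, here is a minimal xAPI statement – just an actor, a verb and an object, per the xAPI specification. The email address and activity IDs below are illustrative placeholders, not statements OpenLabyrinth actually emits:

```python
import json

# A minimal xAPI statement: who (actor) did what (verb) to what (object).
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.org",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.org/cases/chest-pain/node/12",
        "definition": {"name": {"en-US": "Chest pain case, node 12"}},
    },
}

print(json.dumps(statement, indent=2))
```

Everything beyond this core triple (results, context, attachments) is optional, which is a large part of why xAPI is so much lighter than SCORM.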
OpenLabyrinth already has a very detailed and powerful set of internal metrics built into it; we can now extend what can be done with our platform and track learning experiences across many simulation modalities. OpenLabyrinth has already proven remarkably effective at integrating learning activities – our ‘conceptual glue’, as we call it. We have written many times in the past about how we have used OpenLabyrinth to tie various activities together into a consistent logical pathway or narrative.
Now, we can do this with a wider variety of other tools and yet still track what learners and teachers are actually doing within these learning objects. Sharing of open educational objects is not about the metadata of where they are stored and what learners can do with them; it is about the activity streams of what they actually do… in real life… and real time.
Our dev team at ITRex has done a really nice job of integrating xAPI into our core structures. We are now able to perform a post-hoc analysis in great detail over selected cases, scenarios or date ranges. This can even be done, thanks to OpenLabyrinth’s strong internal metrics, on cases that were written and played long before xAPI existed!
We can also do real-time tracking of activity metrics, sending xAPI statements out to the LRS immediately. We have been cautious in implementing this so as not to bog down our poor little servers. But it works… and opens up some really interesting cross-platform communications.
At present, we are sending statements to our GrassBlade LRS and to SCORM Cloud, hosted by Rustici. If others are interested in exploring this with us, contact us via one of the usual methods.
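For the curious, sending a statement to an LRS is essentially an authenticated HTTP POST to its /statements endpoint, carrying the version header the xAPI specification requires. The sketch below uses a placeholder endpoint and credentials – not our GrassBlade or SCORM Cloud configuration – and prepares the request without actually sending it:

```python
import base64
import json
import urllib.request

def build_lrs_request(endpoint: str, username: str, password: str,
                      statement: dict) -> urllib.request.Request:
    """Prepare (but do not send) a POST of one xAPI statement to an LRS.
    The endpoint and credentials are deployment-specific placeholders."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return urllib.request.Request(
        url=endpoint.rstrip("/") + "/statements",
        data=json.dumps(statement).encode("utf-8"),
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": "Basic " + token,
            # Required by the xAPI specification on every request.
            "X-Experience-API-Version": "1.0.3",
        },
    )

req = build_lrs_request(
    "https://lrs.example.org/xapi", "key", "secret",
    {"actor": {"mbox": "mailto:learner@example.org"},
     "verb": {"id": "http://adlnet.gov/expapi/verbs/launched"},
     "object": {"id": "http://example.org/cases/demo"}})
print(req.full_url, req.get_method())
```

To actually send it you would pass the request to `urllib.request.urlopen` (or any HTTP client); a conforming LRS responds with the stored statement ID.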
OpenLabyrinth and the Experience API (xAPI) will be featuring prominently in several sessions at the upcoming Ottawa Conference in Perth, Australia, in a week.
The main focus of the Ottawa Conference is assessment and evaluation in health professional education. As part of this, there are several discussions on activity metrics and big data.
We have been adapting OpenLabyrinth to make better use of the xAPI and combining it with xAPI data from many other sources to get a fuller picture of what our learners do within their learning context.
We will post links to some of the materials generated at this conference during the workshops and PeArLS sessions that relate to activity metrics.
We came across some neat capabilities yesterday, while working on ways to integrate educational tools and functions from multiple different applications. We were able to combine functionality from OpenLabyrinth, WordPress, GrassBlade and some HTML5 applets from H5P.org, into a neat little cohesive scenario.
Check out this Precision Education page and its embedded applets.
What may not be all that apparent from that page, because the pieces fit together reasonably seamlessly, is that the various functions shown could not be produced by any one of the educational applications alone. Each application was used in a blended manner, but for the learner it is a single experience.
In the background, we are working on being able to track all the activities performed, using the Experience API from ADL, which will send xAPI statements to GrassBlade, our Learning Record Store.
Medbiquitous “is a not-for-profit, international group of professional associations, universities, commercial, and governmental organizations seeking to develop and promote technology standards for the health professions that advance lifelong learning, continuous improvement, and better patient outcomes.”
These guys do great work that underpins many collaborative initiatives in healthcare. The University of Calgary Cumming School of Medicine is a proud and active member.
On 20 Jan 2016, Medbiq announced a new working group, the Learning Experience group.
“Education analytics offers an opportunity to better track learner educational activities and to better understand the strengths and weaknesses of the healthcare workforce as well as factors associated with higher performance…
…The new Learning Experience Working Group will develop a set of Experience API (xAPI) profiles to provide guidance around collecting data on specific types of healthcare learning activities. The scope includes simulations (virtual patients, mannequin-based simulations, preceptor-reviewed simulations, virtual worlds/games, Standardized Patients, etc.) and clinical training activities and experiences.”
Members of the OpenLabyrinth Development Consortium are actively involved in this initiative and in Medbiquitous.
And just a quick heads up, the Medbiquitous Annual Conference is coming up: May 16-17, 2016 in Baltimore. A very innovative and collaborative group – come join us.