
Medbiq xAPI workshop

At the annual MedBiquitous Conference in Baltimore, OpenLabyrinth provided the underpinnings for a workshop demonstrating the capabilities of the Experience API (xAPI).

Medbiq xAPI workshop team

Using a simple Arduino microcontroller, the team of Ellen Meiselman, Corey Albersworth, David Topps and Corey Wirun were able to track the stress levels of workshop participants as they played a very challenging series of OpenLabyrinth mini-cases. Sensors on the Arduino continuously measured heart rate and galvanic skin response; even on these cheap ($30) kits, they were easily sensitive enough to detect subtle changes in stress levels.

We intentionally set a very tight set of timers on the case series so that participants were increasingly pushed to make very rapid decisions. The data from the sensors were collected in our GrassBlade LRS, along with xAPI statements from our OpenLabyrinth cases.
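For the curious, here is a minimal sketch of how one of these sensor readings can be wrapped in an xAPI statement and posted to an LRS, in Python. The endpoint, credentials, activity IRI and extension IRIs are all hypothetical placeholders rather than the actual identifiers from the workshop:

    import requests
    from datetime import datetime, timezone

    # Hypothetical LRS endpoint and credentials; substitute your own.
    LRS_ENDPOINT = "https://lrs.example.org/data/xAPI"
    LRS_AUTH = ("username", "password")

    def send_sensor_statement(actor_email, gsr_value, heart_rate):
        """Wrap one Arduino sensor reading in an xAPI statement and POST it."""
        statement = {
            "actor": {"objectType": "Agent", "mbox": "mailto:" + actor_email},
            "verb": {
                "id": "http://adlnet.gov/expapi/verbs/experienced",
                "display": {"en-US": "experienced"},
            },
            "object": {
                "objectType": "Activity",
                # Hypothetical activity IRI for the mini-case being played.
                "id": "https://example.org/olab/cases/stress-mini-case",
            },
            "result": {
                # These extension IRIs are invented for this sketch.
                "extensions": {
                    "https://example.org/xapi/ext/galvanic-skin-response": gsr_value,
                    "https://example.org/xapi/ext/heart-rate": heart_rate,
                },
            },
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        resp = requests.post(
            LRS_ENDPOINT + "/statements",
            json=statement,
            auth=LRS_AUTH,
            headers={"X-Experience-API-Version": "1.0.3"},
        )
        resp.raise_for_status()  # the LRS replies with the new statement id(s)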

Seeing such simple technology providing quite sophisticated tracking of learner stress levels prompted a lot of vigorous discussion in the workshop on how such activity metrics can be used in other ways.

We really appreciate the collaboration and help we received from the xAPI community in pulling this workshop together. We had naively thought that, since both Arduino and xAPI are simple to work with, this would be a nice quick effort. Corey A ran into a lot of tiny but time-consuming quirks and put in many hours getting this all to work smoothly.

We especially want to acknowledge the detailed help we received from Pankaj Agrawal (GrassBlade LRS) and Andrew Downes (Watershed LRS) for their patience and troubleshooting. Thanks to their collaboration, some parts of the project featured quite sophisticated statement pulls from GrassBlade to Watershed. It really showed us how much more you can achieve by blending the capabilities of these various devices and platforms using xAPI.

OpenLabyrinth officially recognized as TinCan/xAPI Adopter

More on the xAPI stuff… and perhaps a wee bit of clarification about terminology.

OpenLabyrinth was just admitted to the official group of Tin Can Adopters:

http://tincanapi.com/adopters/

Tin Can API was the original name given by Rustici Software. It is now more properly known as the Experience API, or xAPI, but many still call it Tin Can; the terms are synonymous. Advanced Distributed Learning (ADL) was the group that first commissioned the development of xAPI from Rustici, so I guess they get to name it.

But most importantly, the API will remain open and non-proprietary.

More xAPI stuff from OpenLabyrinth

Our work with activity metrics and the Experience API (xAPI) continues apace with the OpenLabyrinth platform. We have been able to integrate xAPI statements into our CURIOS video mashup tool.

Now, when you insert a video mashup into one of our OpenLabyrinth cases, you can track how your users are using your videos: which bits they watch and which they replay.
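As an illustration, a statement for a seek or replay event might look something like the sketch below; the verb and extension IRIs follow the community xAPI video profile, but the builder function itself is hypothetical, not the actual CURIOS code:

    def video_seek_statement(actor_email, video_iri, time_from, time_to):
        """Build an xAPI statement recording that a learner jumped from one
        point in a video to another (a replay when time_to < time_from)."""
        return {
            "actor": {"objectType": "Agent", "mbox": "mailto:" + actor_email},
            "verb": {
                "id": "https://w3id.org/xapi/video/verbs/seeked",
                "display": {"en-US": "seeked"},
            },
            "object": {"objectType": "Activity", "id": video_iri},
            "result": {
                "extensions": {
                    "https://w3id.org/xapi/video/extensions/time-from": time_from,
                    "https://w3id.org/xapi/video/extensions/time-to": time_to,
                },
            },
        }

Posting the statement to the LRS then works exactly as in the sensor sketch from the workshop post above.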

This video tracking will dovetail nicely with some of the xAPI features that we can now access with the H5P widgets. It will also allow us to track learners across a widening range of educational activities.

OpenLabyrinth has H5P widgets

A couple of weeks ago, we described how we were using H5P widgets here on our WordPress web site. Well, now we also have them fully integrated into OpenLabyrinth itself.

H5P logo

So, what’s the big deal, I hear you say… Well, it means that we now have access to a whole new way of interacting with our users. It makes our nodes and pages much richer, with some nicely crafted HTML5 interactive content.

There are many pre-built H5P widgets on their main web site, which you can then easily modify to include your own content. We won’t bore you with descriptions of everything they have because H5P does it better. But the really cool part is that you can download H5P widgets from other web sites and insert them into your own cases and pages.

Given the interest in our recent work on Activity Metrics and xAPI, we are also delighted that H5P widgets provide xAPI tracking. So you can study how your learners interact with your widgets and cases in even greater detail.
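Because those statements all land in your LRS, pulling them back for analysis is a standard xAPI query. A minimal sketch, again with a hypothetical endpoint and credentials; 'activity' and 'since' are standard statement query parameters:

    import requests

    # Hypothetical endpoint and credentials, as in the earlier sketches.
    LRS_ENDPOINT = "https://lrs.example.org/data/xAPI"
    LRS_AUTH = ("username", "password")

    def fetch_statements_for(activity_iri, since_iso):
        """Retrieve all statements recorded against one activity (e.g. an
        H5P widget) since a given ISO 8601 timestamp."""
        resp = requests.get(
            LRS_ENDPOINT + "/statements",
            params={"activity": activity_iri, "since": since_iso},
            auth=LRS_AUTH,
            headers={"X-Experience-API-Version": "1.0.3"},
        )
        resp.raise_for_status()
        return resp.json()["statements"]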

OpenLabyrinth v3.4 released

Delighted to announce that we released v3.4 of OpenLabyrinth today.

Lots and lots of changes in this one… maybe too many… we had considered putting out an interim v3.3.3 with fewer changes.

Those who have been following this blog will be familiar with what we have been working on. We’ll put out a more detailed list of changes on the forum soon. The main things are as follows:

  • xAPI reporting to an LRS
  • H5P widget integration (https://h5p.org/)
  • Turk Talk for chat-style small-group communication
  • Improved LTI stability

For the latest release, server administrators can pull this from GitHub. For the rest of us, we always run the latest version of the software on our demo server, so if you want to try these things out, contact us for a free trial account.

Activity metrics at Medbiquitous conference

It’s certainly conference season around here. The Medbiq Annual Conference is coming up again soon in Baltimore, May 15-17, 2016.

Medbiq logo

Following on from previous years, activity streams and learning analytics will again feature prominently. OpenLabyrinth will be heavily used in a workshop we are holding about the Experience API (xAPI), along with some interesting widgets and gadgets to track/stress your learners.

This nicely extends some of the other work on big data principles applied to educational metrics that we presented at the Ottawa Conference and CCME over the past month.

Come and play – we’ll make you sweat!

OpenLabyrinth at CCME

The Canadian Conference on Medical Education (CCME) starts tomorrow in Montreal, QC.

This annual gathering brings together medical educators from around the world, discussing a wide range of topics and research interests.

OpenLabyrinth features in several presentations, workshops and posters at this conference. If you are there, chat to us more about this educational research platform. Since OpenLabyrinth is free and open-source, we don’t have the funds to have a fancy exhibitor booth. Plus, we are not selling anything.

But we are always interested in talking to groups who are interested in educational research and who want to explore what can be done with activity metrics, branched pathways, embedded video or facilitated small group scenarios.

OpenLabyrinth’s timings tighten up

We are pleased to announce an interesting new development on our OpenLabyrinth test site. We are experimenting with timestamps that have millisecond accuracy, which opens up the tool to a whole new range of research areas.

For example, you can now start looking at reaction times or which player was first to the buzzer in competitive team scenarios. Lots more fun stuff.

Previously in OpenLabyrinth, all of our participants’ activities when playing a case were recorded in its database, but the timestamps for each activity point were only captured to the nearest second. For most purposes, this is just fine.

But now we are able to track these same activity points much more accurately: the internal database records timestamps with microsecond precision. Anyone who works with such research will know that you also have to account for the tiny fraction of a second between an activity occurring and its being stored, including the processing time in between. There are established techniques for accommodating these timing offsets.
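As a toy illustration of the offset idea (the numbers, including the 12 ms write delay, are invented for this example; in practice you would measure your own delays):

    from datetime import datetime, timedelta, timezone

    def corrected_time(stored_at, processing_delay_ms):
        """Estimate when an activity actually happened by subtracting a
        measured processing/storage delay from the recorded timestamp."""
        return stored_at - timedelta(milliseconds=processing_delay_ms)

    # Two recorded events, each assumed to carry about 12 ms of write delay.
    prompt = corrected_time(
        datetime(2016, 5, 15, 14, 3, 7, 250000, tzinfo=timezone.utc), 12.0)
    answer = corrected_time(
        datetime(2016, 5, 15, 14, 3, 8, 41500, tzinfo=timezone.utc), 12.0)

    # Similar delays largely cancel when you take a difference.
    print((answer - prompt).total_seconds())  # 0.7915 seconds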

So, if you have an interest in taking advantage of this greater timing accuracy in one of your projects, please contact us.

Turk Talk improvements in OpenLabyrinth

Today we released a new, improved format for our powerful Turk Talk function in OpenLabyrinth. Many small tweaks have made it much more user-friendly.

Check out the addendum to the User Guide, which explains in more detail how to use this new functionality. Here is what the Turker now sees when guiding up to 8 learners simultaneously:

OLab Turker Chat panel

This is just a glimpse. The instruction notes show this much more clearly, but you can see 5 users waiting (pink columns) for the Turker to respond. The Turker can see where they are in the case, and who has been waiting longest.
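Conceptually, the "waiting longest" ordering is simple to compute. Here is a hypothetical sketch of the idea (not the actual OpenLabyrinth code):

    from dataclasses import dataclass
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class LearnerSession:
        name: str
        current_node: str                  # where the learner is in the case
        waiting_since: Optional[datetime]  # None means not awaiting a reply

    def waiting_order(sessions: List[LearnerSession]) -> List[LearnerSession]:
        """Return the learners awaiting a reply, longest wait first."""
        waiting = [s for s in sessions if s.waiting_since is not None]
        return sorted(waiting, key=lambda s: s.waiting_since)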

OpenLabyrinth and BIG data

The Ottawa Conference, an international medical education conference held every two years (and only occasionally in Ottawa: 2002, 2014…), has educational assessment as its main focus.

This year, as we noted in a previous post, there has been a lot of interest in Big Data in medical education. Now, before I am laughed out of the house by real big data scientists, I hasten to add that the amounts of data generated by medical education are still tiny compared to those from genomics, protein folding or the ginormous stuff from the Large Hadron Collider.

But size isn’t everything.

There are various V’s attributed to big data: initially three, though the list keeps growing (and is controversial, but I won’t get into that digression). The original three are:

  • Volume
  • Velocity
  • Variety

While our volumes are several orders of magnitude smaller than the big boys’, it is the principles that matter. What we have been finding is that these principles are very useful and usable, even when applied to personal learning data. Just before the conference, we posted some test pages about Precision Education. This theme came up over and over again at the Ottawa Conference, with some fascinating insights being generated from such data sources.

If you want a nice, easy-to-understand overview of some of the key principles of big data, I suggest (again) that you take a look at Kenneth Cukier’s presentation at TED.