Category Archives: xapi

xAPI and Learning Analytics

The Experience API (xAPI) gives OLab powerful tools for capturing and integrating activity metrics for research. But, of course, there is more to it than just capturing and aggregating data.
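For those new to xAPI, it may help to see what one of these captured activity records actually looks like. Below is a minimal sketch of a single xAPI statement, expressed as a Python dictionary; the learner, activity ID and timestamp are illustrative placeholders rather than OLab’s actual vocabulary, although “experienced” is a standard ADL verb.

    # A minimal xAPI statement: actor, verb, object.
    # All identifiers below are illustrative placeholders.
    statement = {
        "actor": {"mbox": "mailto:learner@example.com", "name": "Example Learner"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/experienced",
            "display": {"en-US": "experienced"},
        },
        "object": {
            "id": "https://example.org/olab/cases/demo-case/nodes/12",
            "definition": {"name": {"en-US": "Demo case, node 12"}},
        },
        "timestamp": "2016-05-16T14:32:08Z",
    }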

Data visualization and learning analytics are increasingly important; they form one of the key pillars in our push towards Precision Education. Some Learning Record Stores (LRSs) come with tools to assist with such analytics. We have spoken of this before: while we currently use GrassBlade as our workaday LRS because it is simple for small pilots, the beauty of the LRS approach is that data can easily be federated across other LRSs. For example, we have made use of the more powerful analytics provided by the Watershed LRS.
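That federation is possible because every conformant LRS exposes the same statements resource. As a rough illustration (the endpoints and credentials here are made-up placeholders), copying a batch of statements from one LRS to another can be as simple as this Python sketch:

    import requests

    HEADERS = {"X-Experience-API-Version": "1.0.3"}

    # Placeholder endpoints and credentials for the two stores.
    source = "https://source-lrs.example.com/xapi"
    target = "https://target-lrs.example.com/xapi"

    # Pull a page of statements from the source LRS...
    resp = requests.get(source + "/statements", auth=("key", "secret"),
                        headers=HEADERS, params={"limit": 50})
    resp.raise_for_status()
    statements = resp.json()["statements"]

    # ...and push them, unchanged, into the target LRS.
    requests.post(target + "/statements", auth=("key2", "secret2"),
                  headers=HEADERS, json=statements).raise_for_status()

A real federation job would also follow the “more” paging link and de-duplicate statement IDs, but the point stands: the data model and transport are the same everywhere.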

However, as we move into more detailed analytics, it helps to be able to work with even more powerful tools. We have just started working with the IEEE ICICLE group to look at better approaches to such learning analytics. LearnSphere is one such tool: it is used extensively at Carnegie Mellon University, and it is open source on GitHub.

LearnSphere has a powerful set of analytics tools. In the screenshot above, we are using its Tigris learning-workflows tool to map out good learning designs and scenarios, aimed at answering questions such as “which kinds of learner activity are worth measuring?” The datasets can be quite varied, and the LearnSphere group is interested in accommodating a wider range of learning research datasets.

Today’s discussion in the IEEE ICICLE xAPI and Learning Analytics SIG focused on how to integrate xAPI activity streams more seamlessly with LearnSphere. We are pleased to be involved with such dataflow integration initiatives. As Koedinger et al. (1) demonstrated in 2016, there is a clear link between “doing” and learning. This is not a new concept at all, but proving it has been remarkably difficult in the world of education, where there are so many confounding factors to consider in a study methodology. An approach based on learning analytics rests on much firmer ground.

1. Koedinger, K. R., McLaughlin, E. A., Jia, J. Z., & Bier, N. L. (2016). Is the doer effect a causal relationship? In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK ’16) (pp. 388–397). New York, NY, USA: ACM Press. https://doi.org/10.1145/2883851.2883957

Using xAPI to support blended simulation

OLab and OpenLabyrinth have always been good at providing the contextual glue that holds together various simulation modalities. Here are some examples of projects where OpenLabyrinth has supported blended simulation activities:

  • Virtual Spinal Tap – uses haptic simulation to model the feeling of needle insertion
  • Rushing Roulette – timed tasks with a $30 Arduino lie detector!
  • Crevasse Rescue – multiple teams & disciplines, with high & low fidelity simulators
  • R. Ed Nekke – bookending around a Laerdal mannequin scenario

But now, with xAPI linking the background data to a Learning Record Store, it is much easier to do this across a wider range of tools and platforms. Some of the above-mentioned projects relied on a very sophisticated gamut of high-speed networks, at considerable cost.

Doing this now with xAPI is proving to be much more flexible, scalable and cost-effective. To support haptic projects like Virtual Spinal Tap, we are now working with the Medbiq Learning Experience Working Group on an xAPI Haptics Profile. Check it out and give us feedback.
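To give a flavour of what such a profile might capture (and to be clear, the profile is still being drafted, so the extension identifiers below are invented placeholders, not Medbiq vocabulary), a haptic event could be reported as an xAPI statement along these lines:

    # Hypothetical haptics statement; extension IRIs are placeholders only.
    haptic_statement = {
        "actor": {"mbox": "mailto:trainee@example.com"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/attempted",
            "display": {"en-US": "attempted"},
        },
        "object": {"id": "https://example.org/olab/virtual-spinal-tap/needle-insertion"},
        "result": {
            "extensions": {
                # Peak insertion force (newtons) and needle depth (mm)
                "https://example.org/xapi/ext/peak-force-n": 2.7,
                "https://example.org/xapi/ext/insertion-depth-mm": 48,
            }
        },
    }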

OpenLabyrinth at the OHMES Symposium

Coming up this week, on Wednesday 22nd and Thursday 23rd February, we are hosting the annual symposium for OHMES (the Office of Health & Medical Education Scholarship) at the Cumming School of Medicine. We have some great keynote speakers, including Lorelei Lingard, Kevin Eva and Stella Ng. For full details on the program, check out

http://cumming.ucalgary.ca/ohmes/events/health-and-medical-education-scholarship-symposium

We have had lots of interest this year – we hope you have already registered.

One of the things that we will be demonstrating at this year’s symposium is the continuing work we are doing with our Rushing Roulette stress tests.

Check out this page for more info on how we are combining multiple activity streams, using xAPI and a Learning Record Store (LRS), OpenLabyrinth, and a cheap $30 Arduino board.

You can also use this shortlink to reach that same page: http://tiny.cc/RRdemo

OpenLabyrinth stress testing at CHES scholarship day

On Wednesday, 5th October, the Centre for Health Education Scholarship (CHES) at UBC held its annual scholarship symposium in Vancouver.

There were many interesting sessions, including a stirring keynote address from Rachel Ellaway (Professor of Education Research, University of Calgary).

OpenLabyrinth featured in a few presentations at the CHES symposium, including a short presentation on Activity Metrics by David Topps and Corey Albersworth. (See http://www.slideshare.net/topps/activity-metrics-for-ches-day )

In one of the afternoon demonstration sessions, we were able to show our Arduino stress-detector kit in action to conference participants. Here we have a short video of the Arduino sensors being calibrated.

This was the same basic setup as the one first shown at the Medbiq Conference in Baltimore earlier this year. However, for this conference, no expense was spared: we splurged $29.99 on a second Arduino device. Yes, it nearly broke the budget!

We also managed to set up the software on both Windows 10 and OS X Yosemite, which highlights the platform independence of the Eclipse IDE that we used for collecting the Arduino data and sending it to the LRS.

Here we have a short video of the OpenLabyrinth stress-test in action. Our participant is playing a rapid-fire series of case vignettes on the Mac, while the Arduino sensors connected to the Windows machine record real-time data on her heart rate and Galvanic Skin Response.

We initially created this project as a simple technical demonstration that one could use a cheap, easy combination of Arduino hardware, OpenLabyrinth, and xAPI statement collection into the GrassBlade Learning Record Store. We had only intended to show that such collection from multiple activity streams was feasible within the time and resources available to the average education researcher (i.e. not much).
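Our actual collector was a Java application built in Eclipse, as noted above; but as a rough sketch of the same pipeline (with a made-up serial data format, port name, endpoint and extension IRIs), the whole loop from sensor to LRS fits in a few lines of Python:

    import requests
    import serial  # pyserial

    # Assumes the Arduino sketch prints lines like "72,431" (heart rate, GSR).
    LRS = "https://lrs.example.com/xapi/statements"  # placeholder endpoint
    HEADERS = {"X-Experience-API-Version": "1.0.3"}

    with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as port:
        while True:
            line = port.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue
            heart_rate, gsr = (int(v) for v in line.split(","))
            statement = {
                "actor": {"mbox": "mailto:participant@example.com"},
                "verb": {"id": "http://adlnet.gov/expapi/verbs/experienced",
                         "display": {"en-US": "experienced"}},
                "object": {"id": "https://example.org/olab/stress-test/biosensors"},
                "result": {"extensions": {
                    # Placeholder extension IRIs for the two sensor channels
                    "https://example.org/xapi/ext/heart-rate-bpm": heart_rate,
                    "https://example.org/xapi/ext/gsr-raw": gsr,
                }},
            }
            requests.post(LRS, auth=("key", "secret"),
                          headers=HEADERS, json=statement)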

We were delighted to find that the stress detector was much more sensitive than we anticipated and will be useful in real-world research.

Medbiq xAPI workshop technical report

We just published the interim technical report from our xAPI workshop at the Medbiq annual conference: https://www.researchgate.net/publication/304084961_Medbiq_xAPI_Workshop_2016_Technical_Report (We also have an updated report, stored internally here: Medbiq xAPI Workshop Report, which corrects a few minor errors in the original.)


As we mentioned in our earlier posts, we were really pleased by the participation at the workshop. We have just heard from Medbiq that it was well received and the evaluations were very positive.

We created this much more detailed Technical Report so that others who may be interested in exploring what you can do with xAPI and Arduino sensors can follow our processes and the challenges we faced. This will hopefully provide enough detail that other groups can make similar explorations. Please feel free to contact us through this site if you are interested in this area of research and development.

OpenLabyrinth officially recognized as TinCan/xAPI Adopter

More on the xAPI stuff… and perhaps a wee bit of clarification about terminology.

OpenLabyrinth was just admitted to the official group of Tin Can Adopters.


Tin Can API was the original name given by Rustici Software. It is now more properly known as the Experience API, or xAPI, but many still call it Tin Can; the terms are synonymous. Advanced Distributed Learning (ADL) was the group that first commissioned the development of xAPI from Rustici, so I guess they get to name it.

But most importantly, the API will remain open and non-proprietary.

OpenLabyrinth has H5P widgets

A couple of weeks ago, we described how we were using H5P widgets here on our WordPress web site. Well, now we also have them fully integrated into OpenLabyrinth itself.


So, what’s the big deal, I hear you say…well, it means that we now have access to a whole new way of interacting with our users. It makes our nodes and pages much richer, with some nicely crafted HTML5 interactive content.

There are many pre-built H5P widgets on their main web site, which you can then easily modify to include your own content. We won’t bore you with descriptions of everything they have because H5P does it better. But the really cool part is that you can download H5P widgets from other web sites and insert them into your own cases and pages.

Given the interest in our recent work on Activity Metrics and xAPI, we are also delighted that H5P widgets provide xAPI tracking. So you can study how your learners interact with your widgets and cases in even greater detail.
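Because those H5P statements land in the same LRS as everything else, pulling them back out for analysis is straightforward. Here is a minimal sketch (placeholder endpoint and credentials; “answered” is the standard ADL verb that H5P question-type widgets typically emit):

    import requests

    LRS = "https://lrs.example.com/xapi"  # placeholder endpoint
    HEADERS = {"X-Experience-API-Version": "1.0.3"}

    # Fetch recent "answered" statements, e.g. from H5P quiz widgets.
    resp = requests.get(LRS + "/statements", auth=("key", "secret"),
                        headers=HEADERS,
                        params={"verb": "http://adlnet.gov/expapi/verbs/answered",
                                "limit": 100})
    resp.raise_for_status()
    for s in resp.json()["statements"]:
        result = s.get("result", {})
        print(s["actor"].get("name"), s["object"]["id"], result.get("success"))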

Activity metrics at Medbiquitous conference

It’s certainly conference season around here. The Medbiq Annual Conference is coming up again soon in Baltimore, May 15-17, 2016.


Following on from previous years, activity streams and learning analytics will again feature prominently. OpenLabyrinth will be heavily used in a workshop we are holding about the Experience API (xAPI), along with some interesting widgets and gadgets to track/stress your learners.

This will be a nice extension of some of the other work on big data principles, applied to educational metrics, that we have presented at the Ottawa Conference and CCME over the past month.

Come and play – we’ll make you sweat!

OpenLabyrinth’s timings tighten up

We are pleased to announce an interesting new development on our OpenLabyrinth test site. We are experimenting with timestamps that have millisecond accuracy, which opens up the tool to a whole range of new research areas.

For example, you can now start looking at reaction times or which player was first to the buzzer in competitive team scenarios. Lots more fun stuff.

Previously in OpenLabyrinth, all of our participants’ activities when playing a case were recorded in its database, but the timestamps for each activity point were only recorded to the nearest second. For most purposes, that is just fine.

But now we are able to track these same activity points much more accurately: the internal database records timestamps with microsecond precision. Anyone who works with such research will know that you also have to take into account the tiny fraction of a second between an activity occurring and its being stored, including the processing time in between. There are established techniques for accommodating these timing offsets.
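As a minimal sketch of what that bookkeeping looks like (the 1.2 ms processing delay below is purely illustrative; in practice you would measure it for your own setup), a reaction time can be corrected for a known offset like this:

    from datetime import datetime, timezone

    def now_utc():
        # datetime carries microsecond resolution, matching the new database fields
        return datetime.now(timezone.utc)

    stimulus_shown = now_utc()
    # ... participant reacts and clicks the buzzer ...
    response_received = now_utc()

    # Measured delay between the physical action and our handler running;
    # this 1.2 ms figure is purely illustrative.
    PROCESSING_OFFSET_S = 0.0012

    reaction_s = (response_received - stimulus_shown).total_seconds() - PROCESSING_OFFSET_S
    print("Reaction time: {:.1f} ms".format(reaction_s * 1000))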

So, if you have an interest in taking advantage of this greater timing accuracy in one of your projects, please contact us.