As part of a series of resources for new authors, we have been putting together a set of cases illustrating the more basic points of OpenLabyrinth case authoring.
One of the logical structures that we use a lot in OpenLabyrinth is the Dandelion.
So what the heck is a Dandelion? Check out this case for more information.
We just published the interim technical report from our xAPI workshop at the Medbiq annual conference: https://www.researchgate.net/publication/304084961_Medbiq_xAPI_Workshop_2016_Technical_Report. (We also have an updated report, stored internally here: Medbiq xAPI Workshop Report, which corrects a few minor errors in the original.)
As we mentioned in our earlier posts, we were really pleased by the participation at the workshop. We just heard from Medbiq that it was really well received and the evaluations were very positive.
We created this much more detailed Technical Report so that others, who may be interested in exploring what you can do with xAPI and Arduino sensors, can follow our processes and the challenges we faced. This will hopefully provide enough detail that other groups can make similar explorations. Please feel free to contact us through this site if you are interested in this area of research and development.
Tomorrow, we will be testing out our Turk Talk function in OpenLabyrinth for the first time in a live teaching session. A number of nursing students at the University of Calgary will be putting it through its paces.
There have been some nice usability improvements since our early designs and it is now pretty easy to use. Michelle Cullen and her team at the School of Nursing have done a great job in debugging the cases. We are looking forward to a fun session.
Testing this week has gone well and our facilitators even seem to have had fun! We hope the students do tomorrow as well.
The WAVES Project group had its first meeting in London over the past couple of days. More info will gradually be released on the project web site.
OpenLabyrinth will be extensively used in creating and supporting virtual scenarios for this project. And what the heck are virtual scenarios, I hear you ask? Scenarios in this context relate to the work of Ruth Colvin Clark in her book, Scenario-Based eLearning.
We have found this book very useful in our Scenario Based Learning Designs. Scenarios are basically groups of learning activities, put together so that you make the best use of the resources available. We use the concept of Scenarios within OpenLabyrinth as a way to group together virtual patient cases (in a logically connected series if necessary), groups of learners, reports, counters, rules, etc., so that the Scenario Designer can take things well beyond a single simple case.
WAVES is coordinated by St George's, University of London, who are long-time experts in OpenLabyrinth. This is a huge project involving health professional schools from many countries across Europe.
We are excited that the group is keen to explore the use of xAPI as a means of tracking activity metrics, and is also keen to work with Medbiquitous on forging common practices and Profiles around xAPI and Scenarios.
At the annual Medbiquitous Conference in Baltimore, OpenLabyrinth provided the underpinnings for a workshop demonstrating the capabilities of the Experience API (xAPI).
Using a simple Arduino computer, the team of Ellen Meiselman, Corey Albersworth, David Topps and Corey Wirun were able to track the stress levels of workshop participants as they played a very challenging series of OpenLabyrinth mini-cases. Sensors on the Arduino continuously measured heart rate and galvanic skin response which, even on these cheap ($30) kits, were easily sensitive enough to detect subtle changes in stress levels.
We intentionally set a very tight set of timers on the case series so that participants were increasingly pushed to make very rapid decisions. The data from the sensors were collected in our GrassBlade LRS, along with xAPI statements from our OpenLabyrinth cases.
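To make that data flow concrete, here is a minimal sketch in Python of how one pair of sensor readings could be packaged as an xAPI statement and POSTed to an LRS. The endpoint URL, verb IRI, and extension IRIs below are illustrative placeholders, not the actual GrassBlade configuration or vocabulary we used in the workshop.

```python
import json
import urllib.request

# Placeholder LRS endpoint -- not the actual GrassBlade configuration.
LRS_ENDPOINT = "https://lrs.example.org/data/xAPI/statements"

def build_sensor_statement(actor_email, gsr_value, heart_rate):
    """Package one pair of Arduino sensor readings as an xAPI statement.

    The verb and extension IRIs here are hypothetical placeholders; a
    real deployment would use IRIs agreed in a published xAPI profile.
    """
    return {
        "actor": {"mbox": f"mailto:{actor_email}", "objectType": "Agent"},
        "verb": {
            "id": "http://example.org/xapi/verbs/measured",
            "display": {"en-US": "measured"},
        },
        "object": {
            "id": "http://example.org/xapi/activities/stress-sensors",
            "objectType": "Activity",
        },
        "result": {
            "extensions": {
                "http://example.org/xapi/ext/gsr": gsr_value,
                "http://example.org/xapi/ext/heart-rate": heart_rate,
            }
        },
    }

def send_statement(statement):
    """POST one statement to the LRS (auth headers omitted for brevity)."""
    req = urllib.request.Request(
        LRS_ENDPOINT,
        data=json.dumps(statement).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-Experience-API-Version": "1.0.3",
        },
    )
    return urllib.request.urlopen(req)
```

In practice you would batch these readings rather than POSTing one at a time, since the sensors sample far faster than an HTTP round trip.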
Seeing such simple technology providing quite sophisticated tracking of learner stress levels prompted a lot of vigorous discussion in the workshop on how such activity metrics can be used in other ways.
We really appreciate the collaboration and help we received from the xAPI community in pulling this workshop together. We had naively thought that, since both Arduino and xAPI are simple to work with, this would be a nice quick effort. Corey A ran into a lot of tiny but time-consuming quirks and put in many hours getting this all to work smoothly.
We especially want to acknowledge the detailed help we received from Pankaj Agrawal (GrassBlade LRS) and Andrew Downes (Watershed LRS) for their patience and troubleshooting. Thanks to their collaboration, we were able to run some quite sophisticated statement pulls from GrassBlade to Watershed for parts of the project. It really showed us how much more you can achieve by blending the capabilities of these various devices and platforms using xAPI.
More on the xAPI stuff… and perhaps a wee bit of clarification about terminology.
OpenLabyrinth was just admitted to the official group of Tin Can Adopters:
Tin Can API was the original name given by Rustici Software. It is now more properly known as the Experience API, or xAPI, but many still call it Tin Can. The terms are synonymous and refer to the same thing. Advanced Distributed Learning (ADL) was the group that first commissioned the development of xAPI from Rustici, so I guess they get to name it.
But most importantly, the API will remain open and non-proprietary.
Our work with activity metrics and the Experience API (xAPI) continues apace with the OpenLabyrinth platform. We have been able to integrate xAPI statements into our CURIOS video mashup tool.
Now when you insert a video mashup into one of our OpenLabyrinth cases, you can track how your users are using your videos: which parts they watch, and which they replay.
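As a sketch of what such tracking could look like at the statement level, here is a hypothetical Python helper that builds an xAPI statement recording which segments of a video were played. The verb and extension IRIs follow the community xAPI video profile; the vocabulary CURIOS actually emits may differ, and the video URL is just an example.

```python
def build_video_statement(actor_email, video_iri, played_segments):
    """Build an xAPI statement recording which video segments were watched.

    played_segments uses the video-profile notation, e.g.
    "0[.]14.5[,]20[.]32" means 0-14.5s was watched, then 20-32s.
    """
    return {
        "actor": {"mbox": f"mailto:{actor_email}", "objectType": "Agent"},
        "verb": {
            "id": "https://w3id.org/xapi/video/verbs/played",
            "display": {"en-US": "played"},
        },
        "object": {"id": video_iri, "objectType": "Activity"},
        "context": {
            "extensions": {
                "https://w3id.org/xapi/video/extensions/played-segments":
                    played_segments,
            }
        },
    }
```

A statement like this, sent each time the player pauses or ends, is enough for the LRS to reconstruct exactly which bits of the video each learner watched or replayed.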
This will dovetail nicely with some of the xAPI features that we can now access with the H5P widgets. It will also allow us to track activities across a widening range of educational activities.
A couple of weeks ago, we described how we were using H5P widgets here on our WordPress web site. Well, now we also have them fully integrated into OpenLabyrinth itself.
So, what’s the big deal, I hear you say…well, it means that we now have access to a whole new way of interacting with our users. It makes our nodes and pages much richer, with some nicely crafted HTML5 interactive content.
There are many pre-built H5P widgets on their main web site, which you can then easily modify to include your own content. We won’t bore you with descriptions of everything they have because H5P does it better. But the really cool part is that you can download H5P widgets from other web sites and insert them into your own cases and pages.
Given the interest in our recent work on Activity Metrics and xAPI, we are also delighted that H5P widgets provide xAPI tracking. So you can study how your learners interact with your widgets and cases in even greater detail.
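For anyone wanting to analyse those statements afterwards, a minimal sketch of querying an LRS might look like the following. The endpoint URL is a placeholder; `activity` and `limit` are standard xAPI Statements resource query parameters, but check your own LRS's documentation for its authentication requirements.

```python
import json
import urllib.parse
import urllib.request

# Placeholder endpoint -- substitute your own LRS URL and credentials.
LRS_ENDPOINT = "https://lrs.example.org/data/xAPI/statements"

def build_query_url(activity_iri, limit=50):
    """Build a statements query filtered to one activity (e.g. an H5P widget)."""
    query = urllib.parse.urlencode({"activity": activity_iri, "limit": limit})
    return f"{LRS_ENDPOINT}?{query}"

def fetch_statements(activity_iri, limit=50):
    """Fetch recent statements about the activity (auth headers omitted)."""
    req = urllib.request.Request(
        build_query_url(activity_iri, limit),
        headers={"X-Experience-API-Version": "1.0.3"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["statements"]
```

From there it is a short step to pulling the results into a spreadsheet or dashboard for the kind of activity-metrics analysis we have been discussing.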
Delighted to announce that we released v3.4 of OpenLabyrinth today.
Lots and lots of changes in this one… maybe too many… we considered putting out an interim v3.3.3 with fewer changes.
Those who have been following this blog will be familiar with what we have been working on. We’ll put out a more detailed list of changes on the forum soon. The main things are as follows:
- xAPI reporting to an LRS
- H5P widget integration (https://h5p.org/)
- Turk Talk for chat-style small group communications
- Improved LTI stability
For the latest release, server administrators can pull it from GitHub. For the rest of us, we always run the latest version of the software on our demo server, so if you want to try these things out, contact us for a free trial account.
It’s certainly conference season around here. The Medbiq Annual Conference is coming up again soon in Baltimore, May 15-17, 2016.
Following on from previous years, activity streams and learning analytics will again feature prominently. OpenLabyrinth will be heavily used in a workshop we are holding about the Experience API (xAPI), along with some interesting widgets and gadgets to track/stress your learners.
This will make a nice extension of some of the other work we have recently presented on big data principles applied to educational metrics, at the Ottawa Conference and CCME over the past month.
Come and play – we’ll make you sweat!