The AMEE conference just finished in Barcelona last week. With over 3500 attendees, it had to be one of the biggest yet.
Lots of activity there, including many projects and papers making use of OpenLabyrinth. One group was using OpenLabyrinth and Situational Judgement Testing for teaching ethics cases. Another was using OpenLabyrinth as a publication and tracking mechanism for Teaching Tips.
Our group continues to collaborate with the Learning Layers project, a very interesting approach to supporting informal collaborative learning. The Barcamp was well received – very similar in format to some Unconferences that our group has held previously.
The WAVES Project (Widening Access to Virtual Educational Scenarios), led by St George’s University, London, continues to make strong use of OpenLabyrinth, integrating it with MOOCs and Open edX.
We have now tested and successfully connected OpenLabyrinth to a wide range of Learning Record Stores (LRS) using the Experience API (xAPI).
You can find an updated set of notes on how to do this at Using Experience API (xAPI) on OpenLabyrinth. We would be happy to hear from groups who are interested in exploring this extension to OpenLabyrinth for tracking activity metrics and what your learners actually do.
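For groups wanting to experiment, the basic mechanics of talking to an LRS are quite small. Below is a minimal Python sketch of building and posting an xAPI statement over HTTP. The endpoint, credentials and activity IRI are placeholders for illustration, not real OpenLabyrinth values.

```python
import json
import urllib.request

def build_statement(actor_email, verb_id, verb_name, activity_id, activity_name):
    """Assemble a minimal valid xAPI statement as a Python dict."""
    return {
        "actor": {"objectType": "Agent", "mbox": f"mailto:{actor_email}"},
        "verb": {"id": verb_id, "display": {"en-US": verb_name}},
        "object": {
            "id": activity_id,
            "definition": {"name": {"en-US": activity_name}},
        },
    }

def send_statement(endpoint, auth_header, statement):
    """POST the statement to the LRS /statements resource."""
    req = urllib.request.Request(
        url=endpoint.rstrip("/") + "/statements",
        data=json.dumps(statement).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-Experience-API-Version": "1.0.3",  # version header required by the xAPI spec
            "Authorization": auth_header,          # e.g. "Basic <base64 of key:secret>"
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # the LRS replies with the stored statement id(s)
```

The same statement shape works against any conformant LRS, which is what makes swapping between the stores listed above so painless.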
We have explored a number of aspects of Situational Judgment Testing (SJT) in OpenLabyrinth over the past couple of years. This is a very useful assessment format and is widely adopted for candidate selection in the UK.
We just want to say ‘Happy 3rd MilleniDay’, or should that be ‘3rd KiloDay’, to our sister project, Clinisnips. That is, it is 3001 days since the first video went live on the Clinisnips channel on April 16th, 2008.
C/spine 5/6 subluxation - detailed 3D flythrough of spinal canal
C/spine 5/6 subluxation - flythrough of spinal canal in 3D
PocketSnip on PocketSnips
Otoscope and Ophthalmoscope
Chronic Suppurative Otitis Media
Possible TM Perforation
Fluid level behind TM
Acute otitis media
In that time, there have been nearly 4.7M views of the channel, at a rate of about 100 views per hour, which has not dropped off at all since the channel was launched.
Google Analytics is a very powerful tool and we have pulled some impressive stats over that time period. We calculate that the Watch Time over that period is over 7.68 million minutes, or 128,000 hours, of CME!
Now, of course, as we recognized back in our article…
… not every viewer will be a healthcare professional. While we are happy that the reach of Clinisnips has been very broad, as demonstrated by the wide variety of comments that we found in our qualitative analysis, we have not been able to track in detail who has been watching Clinisnips or what else they do around those times.
You can be sure, however, that Google has a very good idea of what its users do on all of its sites, services and channels. It is why they have grown to be the size they are today. This is big data, writ large. While they share little snippets of analytics with their contributors like us, they spend a lot of effort in tracking the activity metrics of all of us.
This is why we are becoming increasingly excited about what can be done with big data and activity metrics (via xAPI etc.) in the education research world. Imagine how much more effective we could make our educational materials if we understood how they are used and what impact they have. We are late to the table, compared to commerce. It is well past time that we started looking at what our users do, not what they (or their teachers) say they do!
As we mentioned in our earlier posts, we were really pleased by the participation at the workshop. We just heard from Medbiq that it was really well received and the evaluations were very positive.
We created this much more detailed Technical Report so that others who may be interested in exploring what you can do with xAPI and Arduino sensors can follow our processes and the challenges we faced. This will hopefully provide enough detail that other groups can make similar explorations. Please feel free to contact us through this site if you are interested in this area of research and development.
Tomorrow, we will be testing out our Turk Talk function in OpenLabyrinth for the first time in a live teaching session. A number of nursing students at the University of Calgary will be putting it through its paces.
There have been some nice usability improvements since our early designs and it is now pretty easy to use. Michelle Cullen and her team at the School of Nursing have done a great job in debugging the cases. We are looking forward to a fun session.
Testing this week has gone well and our facilitators even seem to have had fun! We hope the students do tomorrow as well.
The WAVES Project group had its first meeting in London over the past couple of days. More info will gradually be released on the project web site.
OpenLabyrinth will be extensively used in creating and supporting virtual scenarios for this project. And what the heck are virtual scenarios, I hear you ask? Scenarios in this context relate to the work of Ruth Colvin Clark in her book, Scenario-Based e-Learning.
We have found this book to be very useful in our Scenario-Based Learning designs. Scenarios are basically groups of learning activities, put together so that you make best use of the resources available. We use the concept of Scenarios within OpenLabyrinth as a way to group together virtual patient cases (in a logically connected series if necessary), groups of learners, reports, counters, rules, etc., so that the Scenario Designer can take things way beyond a single simple case.
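As a toy model only (the field names here are ours, invented for illustration, and not OpenLabyrinth's actual schema), a Scenario can be pictured as a container like this:

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """Toy sketch of a Scenario grouping cases, learners, counters and rules."""
    title: str
    cases: list = field(default_factory=list)     # virtual patient cases, possibly a connected series
    learners: list = field(default_factory=list)  # the group of learners enrolled
    counters: dict = field(default_factory=dict)  # named scores shared across the cases
    rules: list = field(default_factory=list)     # e.g. "unlock case B after completing case A"

    def add_case(self, case_id):
        # keep cases in order so a logically connected series is preserved
        self.cases.append(case_id)
```

The point is simply that counters and rules live at the Scenario level, shared across cases, rather than being locked inside a single case.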
WAVES is coordinated by St George’s University, London, who are long-time experts in OpenLabyrinth. This is a huge project involving health professional schools from many countries in Europe.
We are excited that the group is keen to explore the use of xAPI as a means of tracking activity metrics, and is also keen to work with Medbiquitous on forging common practices and Profiles around xAPI and Scenarios.
At the annual Medbiquitous Conference in Baltimore, OpenLabyrinth provided the underpinnings for a workshop demonstrating the capabilities of the Experience API (xAPI).
Using a simple Arduino computer, the team of Ellen Meiselman, Corey Albersworth, David Topps and Corey Wirun were able to track the stress levels of workshop participants as they played a very challenging series of OpenLabyrinth mini-cases. Sensors on the Arduino continuously measured heart rate and galvanic skin response which, even on these cheap ($30) kits, were easily sensitive enough to detect subtle changes in stress levels.
We intentionally set a very tight set of timers on the case series so that participants were increasingly pushed to make very rapid decisions. The data from the sensors were collected in our GrassBlade LRS, along with xAPI statements from our OpenLabyrinth cases.
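As a rough illustration of that pipeline (not the workshop's actual code), the snippet below parses a hypothetical "heart_rate,gsr" line of the kind an Arduino might print over serial, and wraps the readings in an xAPI statement. The verb and extension IRIs are invented for the example and are not part of any agreed profile.

```python
# Illustrative only: the IRIs below are placeholders, not a published xAPI profile.
HR_EXT = "https://example.org/xapi/extensions/heart-rate"
GSR_EXT = "https://example.org/xapi/extensions/galvanic-skin-response"

def parse_reading(line):
    """Parse one 'heart_rate,gsr' line from the Arduino's serial output."""
    hr, gsr = (float(v) for v in line.strip().split(","))
    return hr, gsr

def reading_to_statement(actor_email, hr, gsr):
    """Package a single sensor reading as an xAPI statement with result extensions."""
    return {
        "actor": {"mbox": f"mailto:{actor_email}"},
        "verb": {
            "id": "https://example.org/xapi/verbs/measured",
            "display": {"en-US": "measured"},
        },
        "object": {"id": "https://example.org/xapi/activities/stress-monitor"},
        "result": {"extensions": {HR_EXT: hr, GSR_EXT: gsr}},
    }
```

Because the sensor readings land in the LRS as ordinary statements, they can be queried and correlated with the case-navigation statements from OpenLabyrinth afterwards.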
Seeing such simple technology providing quite sophisticated tracking of learner stress levels prompted a lot of vigorous discussion in the workshop on how such activity metrics can be used in other ways.
We really appreciate the collaboration and help we received from the xAPI community in pulling this workshop together. We had naively thought that, since both Arduino and xAPI are simple to work with, this would be a nice quick effort. Corey A ran into a lot of tiny but time-consuming quirks and put in many hours getting this all to work smoothly.
We especially want to acknowledge the detailed help we received from Pankaj Agrawal (GrassBlade LRS) and Andrew Downes (Watershed LRS) for their patience and troubleshooting. For some parts of the project, we had some quite sophisticated statement pulls from GrassBlade to Watershed, thanks to the collaboration of these folks. It really showed us how much more you can achieve by blending the capabilities of these various devices and platforms using xAPI.
Tin Can API was the original name given by Rustici Software. It is now more properly known as the Experience API, or xAPI, but many still call it Tin Can; the terms are synonymous. Advanced Distributed Learning (ADL) was the group that first commissioned the development of xAPI from Rustici, so I guess they get to name it.
But most importantly, the API will remain open and non-proprietary.