OpenLabyrinth stress testing at CHES scholarship day

On Wed, 5th October, the Centre for Health Education Scholarship (CHES) at UBC held its annual scholarship symposium in Vancouver.

There were many interesting sessions, including a stirring keynote address from Rachel Ellaway (Professor, Education Research, University of Calgary).

OpenLabyrinth featured in a few presentations at the CHES symposium, including a short presentation on Activity Metrics by David Topps and Corey Albersworth.

In one of the afternoon demonstration sessions, we were able to show our Arduino stress-detector kit in action to conference participants. Here we have a short video of the Arduino sensors being calibrated.

This was the same basic setup as that first shown at the Medbiq Conference in Baltimore earlier this year. However, for this conference, no expense was spared. We splurged another $29.99 on another Arduino device. Yes, it nearly broke the budget!

We also managed to set up the software on both Windows 10 and OS X Yosemite, which highlights the platform independence of the Eclipse IDE that we used for collecting the Arduino data and sending it to the LRS.

Here we have a short video of the OpenLabyrinth stress-test in action. Our participant is playing a rapid-fire series of case vignettes on the Mac on the right, while the Arduino sensors connected to the Windows machine on the left are recording real-time data on her heart rate and Galvanic Skin Response.

We initially created this project as a simple technical demonstration that one could combine cheap, easy-to-use Arduino hardware, OpenLabyrinth, and xAPI statement collection into the GrassBlade Learning Record Store. We had only intended to show that such collection from multiple activity streams was feasible with the time and resources available to the average education researcher (i.e. not much).
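For readers curious about the plumbing, the sketch below shows one way sensor readings can be wrapped as xAPI statements and sent to an LRS. The endpoint URL, credentials, verb, and extension IRIs here are all hypothetical placeholders, not the ones we actually used; our workshop code ran in Java via the Eclipse IDE.

```python
import json
import base64
from urllib import request

LRS_URL = "https://lrs.example.org/xapi/statements"  # hypothetical LRS endpoint
AUTH = base64.b64encode(b"key:secret").decode()      # hypothetical credentials


def build_statement(actor_email, heart_rate, gsr):
    """Wrap one pair of sensor readings in a minimal xAPI statement.

    The verb comes from the standard ADL vocabulary; the extension
    IRIs are illustrative placeholders.
    """
    return {
        "actor": {"mbox": "mailto:" + actor_email},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/experienced",
            "display": {"en-US": "experienced"},
        },
        "object": {
            "id": "http://example.org/activities/stress-test",
            "definition": {
                "extensions": {
                    "http://example.org/ext/heart-rate": heart_rate,
                    "http://example.org/ext/gsr": gsr,
                }
            },
        },
    }


def send_statement(stmt):
    """POST the statement to the LRS (requires network access)."""
    req = request.Request(
        LRS_URL,
        data=json.dumps(stmt).encode(),
        headers={
            "Content-Type": "application/json",
            "X-Experience-API-Version": "1.0.3",
            "Authorization": "Basic " + AUTH,
        },
    )
    return request.urlopen(req)
```

Because each reading becomes an independent statement, the same LRS can interleave sensor streams with the learner's OpenLabyrinth navigation events and correlate them afterwards by timestamp and actor.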

We were delighted to find that the stress detector was much more sensitive than we anticipated and will be useful in real-world research.

OpenLabyrinth and REDCap

OpenLabyrinth is increasingly used for data gathering and survey-like functions in medical education research. The number of question types has grown enormously and there are now few formats that are not supported. This, along with secure user management, connectivity via IMS-LTI authentication to other educational tools such as LMSs (Moodle, Desire2Learn etc.), and connectivity to Learning Record Stores (such as Watershed, GrassBlade, Wax LRS etc.), has made OpenLabyrinth into a powerful educational research platform.

But there are other survey tools out there that you should also know about and consider in education research, especially in the health professions. REDCap is such a tool and we are fortunate that, at the University of Calgary, our Clinical Research Unit (CRU) is very effective at supporting the use of REDCap.

So, what is the difference between OpenLabyrinth, REDCap, and other survey tools such as SurveyMonkey? Well, the main thing that both OpenLabyrinth and REDCap offer is better control over your data: where it resides, and the custodianship/governance of that data. Cloud-based third-party tools such as SurveyMonkey and FluidSurveys (recently bought out by SurveyMonkey) are very good, but you have no say as to where the data resides, which is of significant concern to ethics boards when it comes to anything health related. Patriot Act, anyone?

Both OpenLabyrinth and REDCap are open-source and freely available, although most end-users will not want to go to the hassle of setting up their own server. Both are secure and can be linked to other authentication mechanisms such as OAuth. REDCap is definitely focused more on clinical research. There are hundreds of existing validated survey instruments that you can make use of.

OpenLabyrinth is more focused on educational research. It also offers access to much more in the way of educational interactions, such as the CURIOS video mashup tool, linking to clinical scenarios and other forms of simulation, and complex branching pathways with logic rules etc.

Thanks to Mark Lowerison and the helpful folks at CRU, we have also been able to securely connect OpenLabyrinth and REDCap servers, which means that you can take advantage of the best aspects of both research tools. Contact us if you are interested in exploring this further.
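As a rough illustration of what such a bridge involves, the sketch below pushes records into a REDCap project through its standard API, which takes a form-encoded POST with a per-project token. The server URL and token are placeholders, and this is not our actual integration code, just the general shape of an "import records" call.

```python
import json
from urllib import request, parse

REDCAP_URL = "https://redcap.example.edu/api/"  # hypothetical REDCap server
API_TOKEN = "YOUR_PROJECT_TOKEN"                # issued per REDCap project


def build_import_payload(records):
    """Build the form-encoded body for a REDCap 'import records' API call."""
    return parse.urlencode({
        "token": API_TOKEN,
        "content": "record",   # the records endpoint
        "format": "json",      # records are passed as a JSON array
        "type": "flat",        # one row per record
        "data": json.dumps(records),
    }).encode()


def import_records(records):
    """POST records (e.g. exported from OpenLabyrinth) into a REDCap project."""
    req = request.Request(REDCAP_URL, data=build_import_payload(records))
    return request.urlopen(req)
```

Because REDCap issues tokens per project, the same pattern lets an OpenLabyrinth server write only to the specific project it has been authorized for.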

OpenLabyrinth at AMEE

The AMEE conference just finished in Barcelona last week. With over 3500 attendees, it had to be one of their biggest yet.

There was lots of activity there, including many projects and papers making use of OpenLabyrinth. One group was using OpenLabyrinth and Situational Judgement Testing for teaching ethics cases. Another group was using OpenLabyrinth as a publication and tracking mechanism for Teaching Tips.

Our group continues to collaborate with the Learning Layers project, a very interesting approach to supporting informal collaborative learning. The Barcamp was very well received – it was very similar in format to some Unconferences that our group has held previously.

The WAVES Project (Widening Access to Virtual Educational Scenarios), led by St George’s University, London, continues to make strong use of OpenLabyrinth, integrating it with MOOCs and Open edX.

Using OpenLabyrinth for xAPI research

We have now tested and successfully connected OpenLabyrinth to a wide range of Learning Record Stores (LRSs) using the Experience API (xAPI), including all of the following:

You can find an updated set of notes on how to do this at Using Experience API (xAPI) on OpenLabyrinth. We would be happy to hear from groups who are interested in exploring this extension to OpenLabyrinth for tracking activity metrics and what your learners actually do.
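Once statements are flowing into an LRS, pulling them back out for activity-metrics analysis is a single authenticated GET against the xAPI statements resource, filtered by agent and verb. The sketch below shows that query shape; the LRS URL and verb IRI are placeholders, though the `agent`, `verb`, and `limit` filters themselves come from the xAPI specification.

```python
import base64
import json
from urllib import request, parse

LRS_URL = "https://lrs.example.org/xapi/statements"  # hypothetical LRS


def build_query_url(agent_email, verb_id, limit=50):
    """Build a statements query URL using the xAPI spec's GET filters."""
    params = parse.urlencode({
        # the agent filter is itself a JSON-encoded Agent object
        "agent": json.dumps({"mbox": "mailto:" + agent_email}),
        "verb": verb_id,
        "limit": limit,
    })
    return LRS_URL + "?" + params


def fetch_statements(agent_email, verb_id, key, secret):
    """Fetch matching statements (requires network access to a live LRS)."""
    auth = base64.b64encode(f"{key}:{secret}".encode()).decode()
    req = request.Request(
        build_query_url(agent_email, verb_id),
        headers={
            "Authorization": "Basic " + auth,
            "X-Experience-API-Version": "1.0.3",
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["statements"]
```

Because this query interface is part of the xAPI standard, the same analysis code works unchanged against any of the LRSs we have connected to.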

Using Situational Judgment Testing

We have explored a number of aspects of Situational Judgment Testing (SJT) in OpenLabyrinth over the past couple of years. This is a very useful assessment format and is widely adopted for candidate selection in the UK.

Now we have finally pulled together our working notes, along with a few research questions that we would be interested in exploring with others. Check out the notes here: Using OpenLabyrinth for Situational Judgment Testing

Contact us through these pages if you would like to explore this further.

Happy Milestones, Clinisnips

We just want to say 'Happy 3rd MilleniDay', or should that be '3rd KiloDay', to our sister project, Clinisnips. That is, it has been 3001 days since the first video went live on the Clinisnips channel on April 16th, 2008.

In that time, there have been nearly 4.7M views of the channel, at a rate of about 100 views per hour, which has not dropped off at all since the channel was launched.

Google Analytics is a very powerful tool and we have pulled some impressive stats over that time period. We calculate that the Watch Time over that period is over 7.68 million minutes or more than 127,000 hours of CME!

Now, of course, as we recognized back in our article…

Topps, D., Helmer, J., & Ellaway, R. (2013). YouTube as a platform for publishing clinical skills training videos. Academic Medicine, 88(2), 192-197. doi:10.1097/ACM.0b013e31827c5352

… not every viewer will be a healthcare professional. While we are happy that the reach of Clinisnips has been very broad, as demonstrated by the wide variety of comments we found in our qualitative analysis, we have not been able to track in detail who has been watching Clinisnips or what else they do around those times.

You can be sure, however, that Google has a very good idea of what its users do on all of its sites, services and channels. It is why they have grown to be the size they are today. This is big data, writ large. While they share little snippets of analytics with their contributors like us, they spend a lot of effort in tracking the activity metrics of all of us.

This is why we are becoming increasingly excited about what can be done with big data, and activity metrics (via xAPI etc) in the education research world. Imagine how much more effective we could make our educational materials, if we understood how they are used and what impact they have. We are late to the table, compared to commerce. It is well past time that we started looking at what our users do, not what they (or their teachers) say they do!

Medbiq xAPI workshop technical report

We just published the interim technical report from our xAPI workshop at the Medbiq annual conference. (We also have an updated report, stored internally here: Medbiq xAPI Workshop Report, which corrects a few minor errors in the original.)

Medbiq xAPI Arduino Sensors

As we mentioned in our earlier posts, we were really pleased by the participation at the workshop. We just heard from Medbiq that it was really well received and the evaluations were very positive.

We created this much more detailed Technical Report so that others who may be interested in exploring what you can do with xAPI and Arduino sensors can follow our processes and the challenges we faced. This will hopefully provide enough detail that other groups can make similar explorations. Please feel free to contact us through this site if you are interested in this area of research and development.

Turk Talk in action

Tomorrow, we will be testing out our Turk Talk function in OpenLabyrinth for the first time in a live teaching session. A number of nursing students at the University of Calgary will be putting it through its paces.

There have been some nice usability improvements since our early designs and it is now pretty easy to use. Michelle Cullen and her team at the School of Nursing have done a great job in debugging the cases. We are looking forward to a fun session.

Testing this week has gone well and our facilitators even seem to have had fun! We hope the students have fun tomorrow as well.