While we have been working with Scenario-Based Learning Design (SBLD) for some time, it has taken us a while to explore all the ways in which OpenLabyrinth can help. It is time to provide better notes for others on just how useful OpenLabyrinth can be in SBLD, and on the powerful additional functionality that OpenLabyrinth’s Scenarios provide.
Over the years, we have received occasional questions about how OpenLabyrinth can support virtual scenarios in languages other than English. Since OpenLabyrinth is used widely around the world, we are keen to explore a less anglocentric approach.
Now, as we have noted before (More languages in OpenLabyrinth cases), the node and page content in OpenLabyrinth is quite flexible. We have authors who have written cases in Greek, Russian, Slovak, French and even Klingon.
We have also had groups explore the use of right-to-left languages, with some success. For a quick sense of what is possible, check out the Multi-lingual cases demo on our demo server, which shows what can be done.
Ever since OpenLabyrinth v2.6.1, we have had some basic internationalization functions built into the code base. If you select French in your User Profile, you will see that the top level menus are rendered in French. But sadly, that is as far as it goes – no group has yet funded the code writing needed to take it further, with i18n tables and the like. So this is feasible – if anyone wants to take a crack at this, the source code for OpenLabyrinth is all up on GitHub.
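To make the idea of "i18n tables" concrete, here is a rough sketch of the kind of table-driven string lookup that such work would involve. This is illustrative only, written in Python rather than OpenLabyrinth's actual PHP code base, and the keys and translations are invented for the example:

```python
# Minimal sketch of table-driven UI translation, as one might build for
# the OpenLabyrinth menus. Keys and strings here are invented examples.
I18N_TABLES = {
    "en": {"menu.labyrinths": "Labyrinths", "menu.scenarios": "Scenarios"},
    "fr": {"menu.labyrinths": "Labyrinthes", "menu.scenarios": "Scénarios"},
}

def translate(key: str, lang: str, default_lang: str = "en") -> str:
    """Look up a UI string, falling back to the default language,
    and finally to the raw key if no translation exists."""
    table = I18N_TABLES.get(lang, {})
    return table.get(key, I18N_TABLES[default_lang].get(key, key))

print(translate("menu.scenarios", "fr"))  # Scénarios
```

The fallback chain (requested language, then default language, then the raw key) is the important design point: untranslated strings degrade gracefully instead of breaking the page.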
Being able to quickly enter strings of text while facilitating a Turk Talk session makes the flow of the Scenario much smoother. This is important because the facilitator is trying to keep up with simultaneous conversations from up to 8 learners.
In one of the afternoon demonstration sessions, we were able to show our Arduino stress-detector kit in action to conference participants. Here we have a short video of the Arduino sensors being calibrated.
This was the same basic setup as that first shown at the Medbiq Conference in Baltimore earlier this year. However, for this conference, no expense was spared. We splurged another $29.99 on another Arduino device. Yes, it nearly broke the budget!
We also managed to set up the software on both Windows 10 and OS X Yosemite, which highlights the platform independence of the Eclipse IDE that we used for collecting the Arduino data and sending it to the LRS.
Here we have a short video of the OpenLabyrinth stress-test in action. Our participant is playing a rapid-fire series of case vignettes on the Mac on the right, while the Arduino sensors, connected to the Windows machine, are recording real-time data on her heart rate and Galvanic Skin Response.
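For anyone curious what handling the sensor side of such a setup involves, here is a small sketch of parsing one line of sensor output. The line format is invented for illustration; we assume a device writing comma-separated "millis,heart_rate,gsr" readings over its serial port, which is a common Arduino pattern but not necessarily what our kit emits:

```python
# Hypothetical sketch: parse one line of Arduino sensor output.
# Assumed (invented) line format: "<millis>,<heart_rate_bpm>,<gsr_reading>"
from typing import NamedTuple

class Reading(NamedTuple):
    millis: int       # milliseconds since the device booted
    heart_rate: int   # beats per minute
    gsr: int          # raw Galvanic Skin Response reading

def parse_line(line: str) -> Reading:
    millis, hr, gsr = (int(field) for field in line.strip().split(","))
    return Reading(millis, hr, gsr)

print(parse_line("120350,72,510"))
```

In a real setup this parser would sit in a loop reading from the serial port (e.g. via pyserial), with each Reading then wrapped into an xAPI statement and sent on to the LRS.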
We initially created this project as a simple technical demonstration that one could combine cheap Arduino hardware, OpenLabyrinth and xAPI statement collection into the GrassBlade Learning Record Store. We had only intended to show that collecting from multiple activity streams was feasible with the time and resources available to the average education researcher (i.e. not much).
We were delighted to find that the stress detector was much more sensitive than we anticipated and will be useful in real-world research.
OpenLabyrinth is increasingly used for data gathering and survey-like functions in medical education research. The number of question types has grown enormously and there are now few formats that are not supported. This, along with secure user management, connectivity via IMS-LTI authentication to LMSs such as Moodle and Desire2Learn, and connectivity to Learning Record Stores (such as Watershed, GrassBlade, Wax LRS etc), has made OpenLabyrinth into a powerful educational research platform.
But there are other survey tools out there that you should also know about and consider in education research, especially in the health professions. REDCap is such a tool and we are fortunate that, at the University of Calgary, our Clinical Research Unit (CRU) is very effective at supporting the use of REDCap.
So, what is the difference between OpenLabyrinth, REDCap, and other survey tools such as SurveyMonkey? Well, the main thing that both OpenLabyrinth and REDCap offer is better control over your data: where it resides, and who holds custodianship and governance of it. Cloud-based 3rd-party tools such as SurveyMonkey and FluidSurveys (recently bought out by SurveyMonkey) are very good, but you have no say as to where the data resides, which is of significant concern to ethics boards when it comes to anything health related. Patriot Act, anyone?
Both OpenLabyrinth and REDCap are open-source and freely available, although most end-users will not want the hassle of setting up their own server. Both are secure and can be linked to other authentication mechanisms such as OAuth. REDCap is definitely focused more on clinical research, and offers hundreds of existing validated survey instruments that you can make use of.
OpenLabyrinth is more focused on educational research. It also offers access to much more in the way of educational interactions, such as the CURIOS video mashup tool, linking to clinical scenarios and other forms of simulation, and complex branching pathways with logic rules etc.
Thanks to Mark Lowerison and the helpful folks at CRU, we have also been able to successfully connect between OpenLabyrinth and REDCap servers in a secure method, which means that you can take advantage of the best aspects of both research tools. Contact us if you are interested in exploring this further.
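For those wondering what talking to a REDCap server looks like in practice: REDCap exposes a single POST endpoint (/api/) that takes form-encoded parameters, authenticated by a project-specific token. Below is a rough sketch of assembling a record-export request; the URL and token are placeholders, and this is not the specific mechanism our connection uses:

```python
# Sketch of a REDCap API record-export request (not sent here).
# The api_url and token values are placeholders for illustration.
from urllib.parse import urlencode

def build_redcap_export(api_url: str, token: str) -> tuple[str, str]:
    """Return the (url, form-encoded body) for a REDCap record export."""
    payload = {
        "token": token,        # project-specific API token issued by REDCap
        "content": "record",   # ask for record data
        "format": "json",      # response format
        "type": "flat",        # one row per record
    }
    return api_url, urlencode(payload)

url, body = build_redcap_export("https://redcap.example.org/api/", "PLACEHOLDER_TOKEN")
print(url)
print(body)
```

The body would then be POSTed over HTTPS; because the token is the only credential, keeping it out of source control and logs is the key security concern.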
The AMEE conference just finished in Barcelona last week. With over 3500 attendees, this had to be one of their biggest yet.
Lots of activity there, including lots of projects and papers making use of OpenLabyrinth. One group was using OpenLabyrinth and Situational Judgement Testing for teaching ethics cases. Another group was using OpenLabyrinth as a publication and tracking mechanism for Teaching Tips.
Our group continues to collaborate with the Learning Layers project, a very interesting approach to supporting informal collaborative learning. The Barcamp was very well received – it was very similar in format to some Unconferences that our group has held previously.
The WAVES Project (Widening Access to Virtual Educational Scenarios), led by St George’s University, London, continues to make strong use of OpenLabyrinth, integrating it with MOOCs and Open edX.
We have now tested and successfully connected OpenLabyrinth to a wide range of Learning Record Stores (LRSs) using the Experience API (xAPI).
You can find an updated set of notes on how to do this at Using Experience API (xAPI) on OpenLabyrinth. We would be happy to hear from groups who are interested in exploring this extension to OpenLabyrinth for tracking activity metrics and what your learners actually do.
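For a flavour of what goes over the wire, an xAPI statement is just a small JSON document with an actor, a verb and an object, POSTed to the LRS. Here is a hedged sketch of assembling one; the email address and activity URL are placeholders, not what OpenLabyrinth actually emits, though the verb is the standard ADL "completed" verb:

```python
# Sketch of a minimal xAPI statement (actor / verb / object).
# Actor and activity values below are placeholders for illustration.
import json

def build_statement(actor_email: str, verb_id: str, verb_name: str,
                    activity_id: str, activity_name: str) -> dict:
    """Assemble a minimal xAPI statement as a Python dict."""
    return {
        "actor": {"objectType": "Agent", "mbox": f"mailto:{actor_email}"},
        "verb": {"id": verb_id, "display": {"en-US": verb_name}},
        "object": {
            "id": activity_id,
            "definition": {"name": {"en-US": activity_name}},
        },
    }

stmt = build_statement(
    "learner@example.org",
    "http://adlnet.gov/expapi/verbs/completed",  # standard ADL verb URI
    "completed",
    "https://demo.openlabyrinth.ca/case/123",    # placeholder activity id
    "Multi-lingual cases",
)
print(json.dumps(stmt, indent=2))
```

In practice this JSON is POSTed to the LRS's statements endpoint with Basic authentication and an X-Experience-API-Version header; the LRS then handles storage and reporting.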
We explored a number of aspects of Situational Judgment Testing (SJT) in OpenLabyrinth over the past couple of years. This is a very useful assessment format and is widely adopted in selection of candidates in the UK.
We just want to say ‘Happy 3rd MilleniDay’, or should that be ‘3rd KiloDay’, to our sister project, Clinisnips. That is, it is 3001 days since the first video went live on the Clinisnips channel on April 16th, 2008.
In that time, there have been nearly 4.7M views of the channel, an average of about 65 views per hour, a rate that has not dropped off at all since the channel was launched.
Google Analytics is a very powerful tool and we have pulled some impressive stats over that period. We calculate that the total Watch Time is over 7.68 million minutes, or more than 127,000 hours of CME!
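The conversion behind that Watch Time figure is simple to verify:

```python
# Verify the watch-time conversion quoted above.
watch_time_minutes = 7_680_000          # reported total, in minutes
watch_time_hours = watch_time_minutes / 60
print(watch_time_hours)  # 128000.0, i.e. "more than 127,000 hours"
```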
Now, of course, as we recognized back in our article…
… not every viewer will be a healthcare professional. While we are happy that the reach of Clinisnips has been very broad, as demonstrated by the wide variety of Comments that we found in our qualitative analysis, we have not been able to track in detail who has been watching Clinisnips and what else they do around those times.
You can be sure, however, that Google has a very good idea of what its users do on all of its sites, services and channels. It is why they have grown to be the size they are today. This is big data, writ large. While they share little snippets of analytics with their contributors like us, they spend a lot of effort in tracking the activity metrics of all of us.
This is why we are becoming increasingly excited about what can be done with big data, and activity metrics (via xAPI etc) in the education research world. Imagine how much more effective we could make our educational materials, if we understood how they are used and what impact they have. We are late to the table, compared to commerce. It is well past time that we started looking at what our users do, not what they (or their teachers) say they do!