Turk Talk sees broader use

Our natural language processing function in OpenLabyrinth is seeing increasing adoption across a variety of educational disciplines, not just medicine.

This unique approach to parsing free-text input, which allows scenarios to provide a more flexible and realistic form of interaction with learners, is creating great interest among case and learning design experts.

We have continued to refine the approach, and now find it scalable and flexible enough for a number of different learning design approaches. Check out the numerous articles and tips about Turk Talk on this web site, but if you are still unsure what this adds, contact us.

An update on OpenLabyrinth and virtual scenarios

While we have been working with Scenario-Based Learning Design (SBLD) for some time, it has taken us a while to explore all the different ways in which OpenLabyrinth can be helpful in this regard. It is time to provide better notes for others on just how useful OpenLabyrinth can be in SBLD and the powerful additional functionality that OpenLabyrinth’s Scenarios provide.

We will be posting a series of help pages to show you how to get more out of Scenarios.

We will be working with groups like WAVES to continue to improve how our Scenarios can be used.

Internationalization of OpenLabyrinth’s interface

Over the years, we have received occasional questions about how OpenLabyrinth can support virtual scenarios in languages other than English. Since OpenLabyrinth is used widely around the world, we are keen to explore a less anglocentric approach.

Now, as we have noted before (More languages in OpenLabyrinth cases), the node and page content in OpenLabyrinth is quite flexible. We have authors who have written cases in Greek, Russian, Slovak, French and even Klingon.

Although I am not qualified to comment, I believe we have also had groups explore the use of right-to-left languages, with some success. For a quick look at how a case might look, check out this case on our demo server, http://demo.openlabyrinth.ca/renderLabyrinth/index/88, although sadly it is all Greek to me. (ok, enough of the puns!)

Ever since OpenLabyrinth v2.6.1, we have had some basic internationalization functions built into the code base. If you select French in your User Profile, you will see that the top level menus are rendered in French. But sadly, that is as far as it goes – no group has yet funded the code writing needed to take it further, with i18n tables and the like. So this is feasible – if anyone wants to take a crack at this, the source code for OpenLabyrinth is all up on GitHub.
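To make the idea of i18n tables concrete, here is a minimal sketch of a string-lookup table. It is written in Python for brevity (OpenLabyrinth itself is a PHP code base), and the keys and French strings are illustrative placeholders, not the actual labels in the source on GitHub:

```python
# Hypothetical i18n string table: menu labels keyed by language code.
TRANSLATIONS = {
    "en": {"home": "Home", "my_labyrinths": "My Labyrinths"},
    "fr": {"home": "Accueil", "my_labyrinths": "Mes labyrinthes"},
}

def t(key, lang="en"):
    """Look up a UI label, falling back to English if the language
    or the individual string has not been translated yet."""
    return TRANSLATIONS.get(lang, {}).get(key) or TRANSLATIONS["en"][key]

print(t("home", "fr"))            # Accueil
print(t("my_labyrinths", "de"))   # no German table yet: My Labyrinths
```

The fallback behaviour is the important design point: an incomplete translation should degrade to English rather than break the menu, which is why partially-funded i18n work can still be shipped.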

Now, the next thing that I would like to throw open to our case and scenario writers is this: How important is it to have a few menu and button labels changed? I would hate for this just to be a token, if it is not that helpful. I wonder if we should instead concentrate our efforts on translating the User Guide or parts of this support web site? Please let us know in the comments below, or if you have strong suggestions, use the Contact page.

Turk Talk provides complex text capacity to OpenLabyrinth

We are conducting another round of Turk Talk sessions on our OpenLabyrinth server.

Tonight, we are stress testing the Auto Text Expander macros in Google Chrome, so that our teachers can have much richer interactions with our students in their next Turk Talk session.

For more info on how to use Auto Text Expander with OpenLabyrinth, check out our how-to page.

Being able to quickly enter strings of text while facilitating a Turk Talk session makes the flow of the Scenario much smoother. This is important because the facilitator is trying to keep up with simultaneous conversations from up to 8 learners.
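The macro idea is simple: a short trigger string expands into a full sentence. A minimal sketch of that expansion step, with invented shortcuts (the actual macros our facilitators use are defined inside the Auto Text Expander extension, not here):

```python
# Hypothetical shortcut table of the kind a Turk Talk facilitator
# might configure; both shortcuts and phrases are made-up examples.
EXPANSIONS = {
    ";hx": "Can you tell me more about the patient's history?",
    ";ddx": "What is your differential diagnosis at this point?",
}

def expand(text):
    """Replace any known shortcut in the text with its full phrase."""
    for shortcut, phrase in EXPANSIONS.items():
        text = text.replace(shortcut, phrase)
    return text

print(expand(";ddx"))
```

A facilitator juggling eight conversations types the two or three characters of the trigger and the full question appears, which is what keeps the response latency low enough to feel conversational.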

OpenLabyrinth stress testing at CHES scholarship day

On Wed, 5th October, the Centre for Health Education Scholarship (CHES) at UBC held its annual scholarship symposium, in Vancouver.

There were many interesting sessions, including a stirring keynote address from Rachel Ellaway (Professor, Education Research, University of Calgary).

OpenLabyrinth featured in a few presentations at the CHES symposium, including a short presentation on Activity Metrics by David Topps and Corey Albersworth. (See http://www.slideshare.net/topps/activity-metrics-for-ches-day )

In one of the afternoon demonstration sessions, we were able to show our Arduino stress-detector kit in action to conference participants. Here we have a short video of the Arduino sensors being calibrated.

This was the same basic setup as that first shown at the Medbiq Conference in Baltimore earlier this year. However, for this conference, no expense was spared: we splurged $29.99 on another Arduino device. Yes, it nearly broke the budget!

We also managed to set up the software on both Windows 10 and OS X Yosemite, which highlights the platform independence of the Eclipse IDE that we used for collecting the Arduino data and sending it to the LRS.

Here we have a short video of the OpenLabyrinth stress-test in action. Our participant is playing a rapid-fire series of case vignettes on the Mac on the right, while the Arduino sensors connected to the Windows machine on the left are recording real-time data on her heart rate and Galvanic Skin Response.

We initially created this project as a simple technical demonstration that one could use a cheap, easy combination of Arduino hardware, OpenLabyrinth and xAPI statement collection into the GrassBlade Learning Record Store. We had only intended to show that such collection from multiple activity streams was feasible with the time and resources available to the average education researcher (i.e. not much).
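The first step in that pipeline is turning raw sensor lines from the Arduino into structured readings. Here is a hedged sketch of that parsing step; the line format ("HR:72,GSR:512") is an assumption for illustration, not the actual protocol used in the demo, and real code would read lines from a serial port (e.g. with pyserial) rather than from a string:

```python
def parse_sensor_line(line):
    """Turn a comma-separated sensor line such as 'HR:72,GSR:512'
    into a dict of named integer readings, e.g. {'HR': 72, 'GSR': 512}."""
    readings = {}
    for field in line.strip().split(","):
        name, value = field.split(":")
        readings[name] = int(value)
    return readings

print(parse_sensor_line("HR:72,GSR:512"))
```

Each parsed reading can then be timestamped and wrapped in an xAPI statement before being posted to the LRS alongside the learner's OpenLabyrinth activity stream.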

We were delighted to find that the stress detector was much more sensitive than we anticipated and will be useful in real-world research.

OpenLabyrinth and REDCap

OpenLabyrinth is increasingly used for data gathering and survey-like functions in medical education research. The number of question types has grown enormously and there are now few formats that are not supported. This, along with secure user management, connectivity to other educational tools such as LMSs (Moodle, Desire2Learn etc), via IMS-LTI authentication, and to Learning Record Stores (such as Watershed, GrassBlade, Wax LRS etc), has made OpenLabyrinth into a powerful educational research platform.

But there are other survey tools out there that you should also know about and consider in education research, especially in the health professions. REDCap is such a tool and we are fortunate that, at the University of Calgary, our Clinical Research Unit (CRU) is very effective at supporting the use of REDCap.

So, what is the difference between OpenLabyrinth, REDCap, and other survey tools such as SurveyMonkey? Well, the main thing that both OpenLabyrinth and REDCap offer is that you have better control over your data, where it resides and the custodianship/governance of this data. Cloud-based 3rd party tools such as SurveyMonkey and FluidSurveys (recently bought out by SurveyMonkey) are very good but you have no say as to where the data resides, which is of significant concern to ethics boards when it comes to anything that is health related. Patriot Act, anyone?

Both OpenLabyrinth and REDCap are open-source and freely available, although most end-users will not want to go to the hassle of setting up their own server. Both are secure and can be linked to other authentication mechanisms such as OAuth. REDCap is definitely focused more on clinical research. There are hundreds of existing validated survey instruments that you can make use of.

OpenLabyrinth is more focused on educational research. It also offers access to much more in the way of educational interactions, such as the CURIOS video mashup tool, linking to clinical scenarios and other forms of simulation, and complex branching pathways with logic rules etc.

Thanks to Mark Lowerison and the helpful folks at CRU, we have also been able to successfully connect between OpenLabyrinth and REDCap servers in a secure method, which means that you can take advantage of the best aspects of both research tools. Contact us if you are interested in exploring this further.
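For those curious what talking to REDCap programmatically looks like, here is a minimal sketch of building the body for a REDCap API "export records" call. The field names (token, content, format, type) follow the standard REDCap API, but the server URL and token are placeholders, and you should check your own institution's REDCap API documentation before relying on this:

```python
import urllib.parse

def build_record_export(token):
    """Build the form-encoded POST body for a REDCap record export."""
    payload = {
        "token": token,       # project-specific API token issued by REDCap
        "content": "record",  # ask for record data
        "format": "json",     # return JSON rather than csv/xml
        "type": "flat",       # one row per record
    }
    return urllib.parse.urlencode(payload)

body = build_record_export("YOUR-API-TOKEN")
print(body)
```

This body would then be POSTed to your REDCap server's /api/ endpoint over HTTPS; the token-per-project model is part of why ethics boards tend to be comfortable with REDCap's data governance.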

OpenLabyrinth at AMEE

The AMEE conference just finished in Barcelona last week. With over 3500 attendees, this had to be one of their biggest yet.

Lots of activity there, including lots of projects and papers making use of OpenLabyrinth. One group was using OpenLabyrinth and Situational Judgement Testing for teaching ethics cases. Another group was using OpenLabyrinth as a publication and tracking mechanism for Teaching Tips.

Our group continues to collaborate with the Learning Layers project, a very interesting approach to supporting informal collaborative learning. The Barcamp was very well received – it was very similar in format to some Unconferences that our group has held previously.

The WAVES Project (Widening Access to Virtual Educational Scenarios), led by St George’s University, London, continues to make strong use of OpenLabyrinth, integrating it with MOOCs and Open edX.

Using OpenLabyrinth for xAPI research

We have now tested and successfully connected OpenLabyrinth to a wide range of Learning Record Stores (LRSs) using the Experience API (xAPI).

You can find an updated set of notes on how to do this at Using Experience API (xAPI) on OpenLabyrinth. We would be happy to hear from groups who are interested in exploring this extension to OpenLabyrinth for tracking activity metrics and what your learners actually do.
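For a sense of what OpenLabyrinth actually sends to an LRS, here is a minimal xAPI statement. The actor, verb and activity IDs below are illustrative placeholders rather than the exact statements OpenLabyrinth emits; in practice, statements like this are POSTed as JSON to the LRS's statements endpoint with an "X-Experience-API-Version" header and the LRS's auth credentials:

```python
import json

# A minimal actor-verb-object xAPI statement (illustrative values only).
statement = {
    "actor": {"mbox": "mailto:learner@example.org", "name": "Example Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "id": "http://demo.openlabyrinth.ca/renderLabyrinth/index/88",
        "objectType": "Activity",
    },
}

body = json.dumps(statement)
print(body)
```

Because every statement shares this actor-verb-object shape, activity streams from quite different sources (case navigation, sensors, video tools) can land in the same LRS and be queried together.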

Using Situational Judgment Testing

We have explored a number of aspects of Situational Judgment Testing (SJT) in OpenLabyrinth over the past couple of years. This is a very useful assessment format and is widely adopted for candidate selection in the UK.

Now we have finally pulled together our working notes, along with a few research questions that we would be interested in exploring with others. Check out the notes here: Using OpenLabyrinth for Situational Judgment Testing

Contact us through these pages if you would like to explore this further.