Author Archives: dtopps

New development funding for OpenLabyrinth

Excellent news! We have been very fortunate to receive new funding for the further development of OpenLabyrinth as an educational research platform.

The O’Brien Institute for Public Health, the Department of Family Medicine, and the Office of Postgraduate Medical Education at the University of Calgary Cumming School of Medicine have made a combined contribution towards a Catalyst Grant.

The intent of catalyst grants is to improve our infrastructure and tools so that we can springboard into applying for broader research funding. This catalyst grant project is being directed by the Office of Health & Medical Education Scholarship (OHMES), our newly reconstituted education research group.

The grant opens up some interesting new potential functional areas for OpenLabyrinth and will be particularly focused on activity metrics, using the Experience API (xAPI). OHMES members are working closely with the Medbiquitous Learning Experience Working Group, announced just a few days ago, on creating a set of ANSI standards to support such research.
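For readers unfamiliar with xAPI, the core idea is that each learning event is recorded as an "actor – verb – object" statement in JSON. Here is a minimal sketch of such a statement; the learner name, email and case URI are hypothetical examples, not part of any OpenLabyrinth or Medbiquitous profile:

```python
import json

# Illustrative only: a minimal xAPI statement recording that a learner
# completed a virtual patient case. The actor and activity details are
# made-up placeholders.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.org",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.org/cases/sarah-jane",
        "definition": {"name": {"en-US": "Sarah-Jane virtual patient case"}},
    },
}

print(json.dumps(statement, indent=2))
```

An xAPI profile, of the kind the working group is developing, essentially pins down which verbs and activity types to use for a given kind of learning activity so that statements from different systems can be compared.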

Turk Talk in PeerJ

We are getting lots of interest in Turk Talk, our new human-computer hybrid approach to natural language processing in OpenLabyrinth.

PeerJ has just accepted a draft article as a Pre-Print.

Topps D, Cullen ML, Sharma N, Ellaway RH. (2016) Putting the ghost in the machine: exploring human-machine hybrid virtual patient systems for health professional education. PeerJ PrePrints 4:e1659v1. https://doi.org/10.7287/peerj.preprints.1659v1


This article will go forward for full peer review, but we were keen to do an early release to encourage others who might be interested in collaborating in this area of educational research.

Medbiquitous announces new Learning Experience group

Medbiquitous “is a not-for-profit, international group of professional associations, universities, commercial, and governmental organizations seeking to develop and promote technology standards for the health professions that advance lifelong learning, continuous improvement, and better patient outcomes.”

These guys do great work that underpins many collaborative initiatives in healthcare. The University of Calgary Cumming School of Medicine is a proud and active member.

On 20 Jan 2016, Medbiq announced a new working group, the Learning Experience group.

http://www.medbiq.org/announcing_the_learning_experience_working_group

“Education analytics offers an opportunity to better track learner educational activities and to better understand the strengths and weaknesses of the healthcare workforce as well as factors associated with higher performance…

…The new Learning Experience Working Group will develop a set of Experience API (xAPI) profiles to provide guidance around collecting data on specific types of healthcare learning activities. The scope includes simulations (Virtual patients, mannekin-based simulations, preceptor-reviewed simulations, virtual worlds/games, Standardized Patients, etc) and clinical training activities and experiences.”

Members of the OpenLabyrinth Development Consortium are actively involved in this initiative and in Medbiquitous.

And just a quick heads-up: the Medbiquitous Annual Conference is coming up on May 16-17, 2016 in Baltimore. A very innovative and collaborative group – come join us.

Snazzy new home page

Well, that took way longer than it should have. Sorry about that, guys n’ gals – a combination of technical glitches and being pulled away to work on research grants, among other things.

We hope you like the new look and improved access to help, examples and, most of all, free trial accounts on our demo server.

Now that we are running the web server in-house, and have much greater control over how it is linked to other platforms, we hope to bring you a variety of new aggregated services that tie into the OpenLabyrinth education research platform.


New web server for OpenLabyrinth

Some time ago, we said we would be moving our OpenLabyrinth web site to another server. I think we are finally ready to go.

For most users, this will make little difference – a new look and not much else.

For the technical amongst you, the IP address will change.

At the moment, we are just doing a wee bit more testing before we switch.

Doors

Opening a new door into the labyrinths – just as well we are not using Windows.

More Turk Talk sessions with OpenLabyrinth

Our first test session with Turk Talk, where we use a human-computer hybrid setup for simple natural language processing to assist the assessment of decision making, went really well – much better than expected, with very few technical glitches.

We plan to hold the next one in a couple of weeks, when we will stress-test both the Turker and the software to see how much cognitive load can be handled and what needs to be improved to increase it.

At present, we are focusing on mental health conditions for our topics and material but the approach does not have to be clinical at all.

Watch this space!

OLab at AMEE eLearning Symposium

OpenLabyrinth will be featured at the AMEE eLearning Symposium in a couple of weeks.

 


We have a short two-minute video on YouTube for the eLearning Symposium show & tell: https://www.youtube.com/watch?v=b4b62thWFmM

AMEE is the Association for Medical Education in Europe. This year, the annual conference is being held in Glasgow: http://www.amee.org/conferences/amee-2015

We will also be giving a 15-minute paper on the new CURIOS video mashup tool, which is closely integrated with OpenLabyrinth.

See you all there!

OpenLabyrinth around the world

Well, that was interesting. I found a nice little web service called BatchGeo: you can load a bunch of IP addresses into the site and it plots them on a world map. We did this for the first 250 logins on our demo server at http://demo.openlabyrinth.ca – this is what we got:

OLab world logins

It would be even more interesting to do a cumulative map of all 17,000 logins but I’m too cheap to pay the fee to BatchGeo. But this does give you an idea of how widely OpenLabyrinth is used.
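If you want to try the same trick with your own server, the first step is pulling distinct client IPs out of a web server log to paste into a geocoding tool. Here is a rough sketch; the Apache-style log lines and the login URL are made-up placeholders standing in for a real access log:

```python
import re

# Hypothetical example: collect the first 250 distinct client IPs from
# Apache-style access log lines. These sample lines stand in for reading
# a real access.log file.
log_lines = [
    '203.0.113.7 - - [01/Jun/2015:10:00:00 +0000] "POST /login HTTP/1.1" 200 512',
    '198.51.100.23 - - [01/Jun/2015:10:05:00 +0000] "POST /login HTTP/1.1" 200 512',
    '203.0.113.7 - - [01/Jun/2015:11:00:00 +0000] "POST /login HTTP/1.1" 200 512',
]

# The client IP is the first field of each log line.
ip_pattern = re.compile(r"^(\d{1,3}(?:\.\d{1,3}){3}) ")
seen, ips = set(), []
for line in log_lines:
    m = ip_pattern.match(line)
    if m and m.group(1) not in seen:
        seen.add(m.group(1))
        ips.append(m.group(1))
    if len(ips) >= 250:
        break

print("\n".join(ips))  # one IP per line, ready to paste into a mapping tool
```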

 

A classic case – test your skills

The first virtual patient case that I was introduced to by my colleague, Rachel Ellaway, was the Sarah-Jane case, written by Dr Jonathan Round at St George’s, University of London.

I was smitten. And I am not afraid (although somewhat ashamed) to say that the first five times I played this case, I killed Sarah-Jane. It is a beautifully crafted case – deceptively simple in both presentation and demeanour, yet really quite difficult to get right. I kept coming back, determined to save her.

Here was the power of serious games, pulling me in to solve the challenge… and yet it is all just text and a good story. No multimedia. No fancy doohickeys. Just a great narrative.

I have cited this case over and over when preaching about the power of the narrative. Now, we have successfully ported it across to OpenLabyrinth v3. Check it out on our list of exemplar cases at

http://tiny.cc/olab3best

or you can go straight to the case at

http://demo.openlabyrinth.ca/renderLabyrinth/index/272

OpenLabyrinth does Natural Language

We have a breakthrough!

Haven’t you always wanted to use natural language processing in your virtual patient cases? Now we have two ways of doing this. OK, full disclosure: we are not talking Watson-level full AI here! But there are many times in virtual patient case design when you don’t want to prompt the user with possible answers to a question and cue them into the correct one.

Now, there have been virtual patients – or rather, a virtual patient – that did NLP to an amazing degree. The Maryland Project in 2007 had very impressive language processing: you could type almost any question you wanted into it and it would provide a sensible answer. But the cost and the programming effort were huge and not at all scalable.

We have had some basic text-processing capabilities in OpenLabyrinth for a while now. These are useful in limited situations, but it is a pain to anticipate all the variations that a user might type and allow for them in the logic rules.
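To see why rule-based matching gets painful, consider a hypothetical rule meant to recognise a learner asking about chest pain. None of these patterns come from OpenLabyrinth itself; they just show how quickly legitimate phrasings slip past any finite set of rules:

```python
import re

# Hypothetical rule: accept a few ways a learner might ask about chest pain.
rule = re.compile(
    r"\b(chest\s*(pain|discomfort|tightness)|pain\s+in\s+(the\s+)?chest)\b",
    re.IGNORECASE,
)

inputs = [
    "Do you have chest pain?",    # matched
    "any pain in your chest?",    # NOT matched: "your" is not in the rule
    "Is there chest tightness?",  # matched
]

for text in inputs:
    print(bool(rule.search(text)), text)
```

Every near-miss like "pain in your chest" forces the case author to widen the pattern again, which is exactly the maintenance burden that motivates a human-in-the-loop approach.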

Now we have Turk Talk. Based on the concept of the Mechanical Turk, where a human pretends to be a computer, we have developed an interface where a human facilitator can handle text input from up to 8 learners in a small-group session. The interface is done and stable, and is now going into research testing.

If you are interested in a collaborative project working on something like this, contact us.