The first virtual patient case that I was introduced to by my colleague, Rachel Ellaway, was the Sarah-Jane case written by Dr Jonathan Round at St George’s, University of London.
I was smitten. And I am not afraid (although somewhat ashamed) to say that the first five times I played this case, I killed Sarah-Jane. It is a beautifully crafted case – deceptively simple in both presentation and demeanour, yet really quite difficult to get right. I kept coming back, determined to save her.
Here was the power of serious games, pulling me in to solve the challenge… and yet it is all just text and a good story. No multimedia. No fancy doohickeys. Just a great narrative.
I have cited this case over and over when preaching about the power of the narrative. Now, we have successfully ported it across to OpenLabyrinth v3. Check it out on our list of exemplar cases at
or you can go straight to the case at
We have a breakthrough!
Haven’t you always wanted to use natural language processing in your virtual patient cases? Now we have two ways of doing this. OK, full disclosure here: we are not talking Watson-level full AI stuff! But there are many times in virtual patient case design when you don’t want to prompt the user with possible answers to a question and cue them into the correct one.
Now there have been virtual patients… or rather a virtual patient that did NLP to an amazing degree. The Maryland Project in 2007 had very impressive language processing – you could type almost any question you wanted into it and it would provide a sensible answer. But the cost and the programming effort were huge and not at all scalable.
We have had some basic text-processing capabilities in OpenLabyrinth for a while now. They are very useful in limited situations, but it is a pain to anticipate all the variations that a user might type and allow for them in the logic rules.
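To see why this gets painful, here is a minimal sketch of rule-based free-text matching. This is an illustration only, not OpenLabyrinth’s actual rule engine: the accepted-answers set and function names are invented for the example. The core problem is visible right away: every phrasing a learner might type has to be enumerated by hand.

```python
import re

# Hypothetical sketch of rule-based answer matching (not OpenLabyrinth's
# real logic rules). Every acceptable variation must be listed explicitly.
ACCEPTED_ANSWERS = {"chest pain", "pain in the chest", "chest discomfort"}

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    text = re.sub(r"[^\w\s]", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def matches(user_input: str) -> bool:
    """True only if the normalized input is in the hand-built list."""
    return normalize(user_input) in ACCEPTED_ANSWERS
```

Normalization catches trivial differences in case and punctuation, but a perfectly reasonable phrasing like “my chest hurts” still falls through unless someone adds it to the list; that maintenance burden is exactly what pushes case authors toward either a human in the loop (Turk Talk) or concept-level matching.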
Now we have Turk Talk. Based on the concept of the Mechanical Turk, where a human pretends to be a computer, we have developed an interface where a human facilitator can handle free-text input from up to 8 learners in a small group session. The interface is done and stable – it is going into research testing now.
If you are interested in a collaborative project working on something like this, contact us.
After two years of research and code testing, we are delighted to let you know about our progress with semantic indexing in OpenLabyrinth.
Much of this progress is due to the work of Lazaros Ioannidis at Aristotle University, Thessaloniki in Greece. It is being featured in workshops and presentations at MEI 2015 in Thessaloniki today.
So what the heck is semantic indexing? The foundation of Web 3.0, it allows discovery of content in new and interesting ways. For our authors and learners using OpenLabyrinth, it opens up powerful new search capabilities and data visualizations.
Imagine that you want a virtual patient case that addresses that common complaint seen in the emergency department: chest wall pain. How would you find this? With past methods, unless that phrase appeared in the title or descriptors for the case, you would be out of luck. Worse still, there are many synonyms and codes applied to this presentation: costochondritis, ICD-9 786, Tietze’s syndrome, etc.
Semantic indexing opens up the possibility of searching by the concept of chest wall pain, looking up such synonyms and coding in related vocabularies and ontologies that already exist.
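A toy sketch may make the idea concrete. The concept table below is invented for illustration (real semantic indexing would draw on existing vocabularies and ontologies rather than a hand-built dictionary, and the ICD-9 code shown is just the one mentioned above): instead of matching on the literal search string, the lookup resolves any known synonym or code to the underlying concept.

```python
# Illustrative sketch only: the concept table and function are hypothetical.
# In practice the synonyms and codes would come from established
# terminologies, not a dictionary maintained by hand.
CONCEPTS = {
    "chest wall pain": {
        "synonyms": {"chest wall pain", "costochondritis", "tietze's syndrome"},
        "codes": {"icd-9 786"},
    },
}

def find_concept(term: str):
    """Resolve a search term (synonym or code) to its concept, if known."""
    term = term.lower().strip()
    for name, entry in CONCEPTS.items():
        if term in entry["synonyms"] or term in entry["codes"]:
            return name
    return None
```

With this kind of mapping in place, a search for “costochondritis” and a search for “chest wall pain” both land on the same set of cases, even if neither phrase appears in a case’s title or descriptors.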
More on this as we refine the tool. If you are interested in this research and its applications, contact us.