Chatbots and OLab

Thinking of using a chatbot to improve interactions with students?

Chatbots are getting better. Yes, some of the most powerful are now remarkably good. But what about the chatbot services available to the average educator?

Well, these are steadily improving and we have been testing them out for use with OLab. For more information on what we have found, check out this short article.

As you will see, they are proving to be very beneficial to large and medium-sized companies. But the resources needed to create a good scenario are still a bit beyond the average educator.

Using Chats for spoilers in OLab3

Spoiler Alert

Everyone hates spoilers. But sometimes they have a useful function. We recently came across a very neat way in which they had been used.

This reminded us of our Chats widgets in OLab3, a function that has not seen a lot of use. We created a map here that demonstrates how Chats can be used to selectively reveal spoilers and why you might want to consider them. Use ‘demo’ to unlock the case.

CURIOS service up and running again

Nice to have some good news during the current Covid outbreak: it is not a big item, but it will help those who are trying to create content for online learning, which is needed now more than ever.

We have finally managed to fix an annoying bug which was preventing proper use of our CURIOS video mashup service. It is now fully functional again.

For those of you who may have missed it previously, here is more information on how to use the CURIOS video mashup tool.

Servers properly secured again

Our various servers, supporting OpenLabyrinth and the OLab education research platform, had some hiccups for a while, which resulted in our support of HTTPS and SSL being rather patchy.

These days, most web browsers have conniptions about visiting unsecured sites, so many of you were getting dire warnings about the site being insecure or having an invalid security certificate.

I think we have all that fixed now and there should be no more warnings.

Just to reassure you, there was no actual compromise of our sites and the data remains secure.

OLab4 Designer launch

Things have been really quiet over the summer, but we are now ready to show you our work on the OLab4 authoring interface.

Check out a series of upcoming posts on olab.ca which give more information on the OLab4 Designer and other new capabilities.

As before, all the source code will be posted on GitHub.

An Active Repository for OLab scenarios

For several years, we have been looking at different ways to make OpenLabyrinth scenarios more accessible. We think we have found a solution that meets the needs of both consumers and contributors of scenarios: the OLab Dataverse.

Now at first glance, this looks like Yet Another OER. But we think there are a few things that may help this one to be more successful. We are working on ways to make it dead easy for OLab authors to upload their best cases directly to the OLab Dataverse, which should help with the tedious task of metadata entry.

Because the materials are given a proper citation and DOI by the DataCite service, each scenario becomes a citable reference that can be added to the authors’ CVs, making it easier for them to get academic credit for publishing their cases.

In the meantime, we have created some short notes on how we currently upload OpenLabyrinth maps to the OLab Dataverse using a template.
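
If you would rather script that step, something like the sketch below could work. It assumes the standard Dataverse native API; the server address, dataset DOI and API token are placeholders, not our production values.

```python
# Minimal sketch: attaching an exported OpenLabyrinth map (zip) to an
# existing dataset on a Dataverse instance, via the standard native API.
# SERVER, DOI and API_TOKEN are placeholders.
import json
import requests

SERVER = "https://dataverse.example.org"            # hypothetical host
DOI = "doi:10.5072/FK2/EXAMPLE"                     # placeholder dataset DOI
API_TOKEN = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"  # your Dataverse API token

def upload_map(zip_path: str, description: str) -> None:
    """Add a file to the dataset identified by DOI."""
    with open(zip_path, "rb") as fh:
        resp = requests.post(
            f"{SERVER}/api/datasets/:persistentId/add",
            params={"persistentId": DOI},
            headers={"X-Dataverse-key": API_TOKEN},
            files={"file": fh},
            data={"jsonData": json.dumps({"description": description})},
        )
    resp.raise_for_status()

upload_map("my_scenario_export.zip", "OpenLabyrinth map export")
```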

The OLab Dataverse is hosted at Scholars Portal on Canadian servers. Using a non-USA based service helps to mitigate some of the concerns raised by certain jurisdictions and granting agencies.

When we say ‘Active Repository’, we also plan to make this process more useful by providing activity metrics, using xAPI and an LRS. At present, we can create simple Guestbooks, which help us to track when the datasets are downloaded. But we feel it is equally important to create some activity metrics around the contributions by faculty members and teachers. Partly, this will be based on the new xAPI Faculty Profile that we are developing and will incorporate into our OLab uploading mechanisms.
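
To give a flavour of what such metrics might look like, here is a hypothetical xAPI statement recording a dataset download; the actor, verb and activity identifiers are illustrative placeholders, not the finished vocabulary from our Faculty Profile work.

```python
# Illustrative xAPI statement for a dataset download event.
# The verb and activity IRIs are placeholders, not a settled vocabulary.
statement = {
    "actor": {
        "name": "Jane Learner",
        "mbox": "mailto:jane.learner@example.org",
    },
    "verb": {
        "id": "http://id.tincanapi.com/verb/downloaded",  # assumed verb IRI
        "display": {"en-US": "downloaded"},
    },
    "object": {
        "id": "https://doi.org/10.5072/FK2/EXAMPLE",  # the dataset's DOI
        "definition": {"name": {"en-US": "Example OLab scenario dataset"}},
    },
}
```

Once posted to an LRS, statements like this can be aggregated into download and contribution metrics per author, alongside the Guestbook data.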

It is time we did a better job of looking at how our contributions to open science are used, appreciated and distributed in the world of Precision Education. We just submitted an article to MedEdPublish on why this is so important.

If you are interested in working with us in exploring how we can make these processes more accessible and more rewarding, please contact us.

OIPH Catalyst Grant outputs and metrics

In 2015, we were delighted to receive a Catalyst Grant from the O’Brien Institute of Public Health in support of development of various aspects of OpenLabyrinth as an educational research platform.

We have just reported on the outputs of this grant and, in general, we are pretty pleased with what came out of it and where things are headed. OpenLabyrinth continues to be used widely in the educational community and that reach is growing.

But how do we know?

This is more challenging to assess than you might think. Using standard literature search techniques, it is not hard to find journal articles that relate to the ongoing and innovative use of OpenLabyrinth. But that is only a small part of the impact. To give credit to OIPH and its reporting template, it is great to see that they want to know about societal impacts, social media mentions and the like. This is something that we strongly agree with in OHMES.

But the actual measurement of such outputs is not so easy. An obvious way to do this would be via Altmetric, which is revolutionizing how such projects and outputs are seen by the public. It has a powerful suite of tools that allow it to track mentions and reports across public channels, social media and news items. Great stuff.

But Altmetric requires items to be assigned to departments, institutes or faculties. For OHMES and OpenLabyrinth, this creates a significant problem: there is no way to assign a tag or category that spans the range of groups and organizations involved. This is somewhat surprising, given that the general approach in Altmetrics is ontological, rather than taxonomical (1,2).

In our PiHPES Project, partly as a result of such challenges, we are exploring two approaches. Firstly, we are using xAPI and the Learning Record Store (LRS) to directly track how our learners and teachers make use of the plethora of learning objects that we create in our LMSs and other platforms – the paradata, that is, data about how things are used, accessed and distributed.
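
As a sketch of what mining that paradata could look like, the standard xAPI query interface lets you filter stored statements by verb and activity. The LRS endpoint, credentials and activity ID below are placeholders.

```python
# Sketch: pulling paradata back out of an LRS via the standard
# xAPI GET /statements query interface. Endpoint and credentials are placeholders.
import requests

LRS = "https://lrs.example.org/xapi"  # hypothetical LRS endpoint

resp = requests.get(
    f"{LRS}/statements",
    params={
        "verb": "http://adlnet.gov/expapi/verbs/launched",  # standard ADL verb
        "activity": "https://olab.ca/maps/demo",            # hypothetical activity ID
        "since": "2019-01-01T00:00:00Z",
    },
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("username", "password"),
)
resp.raise_for_status()
for s in resp.json()["statements"]:
    print(s["actor"].get("name"), s["verb"]["id"], s["timestamp"])
```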

Secondly, we are looking for ways to make such activities and objects more discoverable, both through improved analytics on the generated activity metrics, and by tying the paradata and metadata together in more meaningful ontologies, using semantic linking.


  1. Shirky C. Ontology is Overrated — Categories, Links, and Tags [Internet]. 2005 [cited 2019 May 3]. Available from: http://www.shirky.com/writings/ontology_overrated.html
  2. Leu J. Taxonomy, ontology, folksonomies & SKOS. [Internet]. SlideShare. 2012 [cited 2019 May 3]. p. 21. Available from: https://www.slideshare.net/JanetLeu/taxonomy-ontology-folksonomies-skos

Learning Analytics and xAPI

David Topps, Medical Director, OHMES, University of Calgary
Ellen Meiselman, Senior Analyst, Learning Systems, University of Michigan

The focus on Competency-Based Medical Education (CBME) has shone welcome attention on the need for good assessment: how do you know when a learner is competent? We need data. All of us are happy when the learner is doing well. Some have suggested (1) that we are becoming too focused on checklists and that gestalt ratings are just as good. But what about those learners on the lower shoulder of the curve?

We need better data for earlier detection of those who need help. Over the years, several groups have used various approaches in intelligent tutoring systems to examine learner engagement (2-4). However, these have focused on engagement and, for a skilled learner, the materials may simply not meet their needs. Struggling learners often remain undetected: the ‘failure to fail’ problem (5-7). We need stronger data to support decisions to terminate, lest we spend weeks in litigation.

We have seen several efforts to support competency assessment: EPAs, milestones, pre-EPAs, each looking at lower levels of complexity on the spectrum of learning activities. But too many of these depend on observer-b(i)ased checklists, and we are already encountering survey fatigue at all levels of learner assessment. Yet, much of what we do is now captured online in one form or another: electronic medical records, learning management systems, simulation systems, enterprise calendars.

Activity metrics take a ground-up, rather than top-down, approach to tracking what learners actually do, rather than what they, or their teachers, say they do. This already happens within limited individual systems, but we need an extensible approach if we are to garner a more comprehensive view. The Experience API (xAPI) (8) is a technically simple approach that is easy to integrate into existing systems. Data is captured into a Learning Record Store (LRS) (9).

xAPI statements follow a very simple actor-verb-object structure: Bob Did This. And yet this simplicity belies great power through a very extensible set of vocabularies. The LRS structure is designed to swallow such xAPI statements from multiple sources, in true Big Data fashion, and can be securely federated so that a wide range of analytic and data visualization tools can be employed.
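
As a concrete illustration, here is roughly what recording ‘Bob completed this scenario’ looks like in practice; a minimal sketch in which the LRS endpoint, credentials and activity ID are placeholders, while the verb IRI and headers follow the xAPI specification.

```python
# Minimal sketch: sending an actor-verb-object xAPI statement to an LRS.
# Endpoint, credentials and activity ID are placeholders.
import requests

statement = {
    "actor": {"name": "Bob", "mbox": "mailto:bob@example.org"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",  # standard ADL verb
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://olab.ca/scenarios/demo",  # hypothetical activity ID
        "definition": {"name": {"en-US": "Demo scenario"}},
    },
}

resp = requests.post(
    "https://lrs.example.org/xapi/statements",  # hypothetical LRS endpoint
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("username", "password"),
)
resp.raise_for_status()  # the LRS returns the stored statement id(s) on success
```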

xAPI is technically well established. Groups such as MedBiquitous (10) are standardizing profiles of activities for medical education and health system outcomes. Now is the perfect time to engage medical educators in the power of these metrics. Their assessment needs should drive how and where these activities are measured.

References:

  1. Hodges B. Scylla or Charybdis: navigating between excessive examination and naïve reliance on self-assessment. Nurs Inq. 2007;14(3):177. http://onlinelibrary.wiley.com/doi/10.1111/j.1440-1800.2007.00376.x/full. Accessed January 21, 2017.
  2. D’Mello S, Graesser A. Automatic Detection of Learner’s Affect from Gross Body Language. Appl Artif Intell. 2009;23(2):123-150. doi:10.1080/08839510802631745.
  3. Qu L, Wang N, Johnson WL. Using Learner Focus of Attention to Detect Learner Motivation Factors. In: Springer, Berlin, Heidelberg; 2005:70-73. doi:10.1007/11527886_10.
  4. Soldato T. Detecting and reacting to the learner’s motivational state. In: Springer, Berlin, Heidelberg; 1992:567-574. doi:10.1007/3-540-55606-0_66.
  5. Nixon LJ, Gladding SP, Duffy BL. Describing Failure in a Clinical Clerkship: Implications for Identification, Assessment and Remediation for Struggling Learners. J Gen Intern Med. 2016;31(10):1172-1179. doi:10.1007/s11606-016-3758-3.
  6. Wang FY, Degnan KO, Goren EN. Describing Failure in a Clinical Clerkship. J Gen Intern Med. 2017;32(4):378-378. doi:10.1007/s11606-016-3979-5.
  7. McColl T. MEdIC Series: Case of the Failure to Fail – Expert Review and Curated Community Commentary. AliEM. https://www.aliem.com/2017/06/medic-case-failure-to-fail-expert-review-curated-community-commentary/. Published 2017. Accessed April 29, 2019.
  8. Haag J. xAPI Overview – ADL Net. ADL Net. https://www.adlnet.gov/xAPI. Published 2015. Accessed May 29, 2017.
  9. Downes A. Learning Record Store – Tin Can API. Rustici LLC web site. http://tincanapi.com/learning-record-store/. Published 2015. Accessed May 29, 2017.
  10. Greene P, Smothers V. MedBiquitous Consortium | Advancing the Health Professions Through Technology Standards. http://www.medbiq.org/. Published 2001. Accessed November 1, 2016.