Measuring Performance in Teamwork

 

Authors:

  • David Topps
  • Ellen Meiselman
  • Valerie Smothers
  • xxx – more to be added

Background

Healthcare is a team sport, but healthcare providers (HCPs) do not participate in team-based learning very often. While we could discuss the logistic challenges and barriers encountered, that is outside the scope of this article. Team-based learning (TBL) is not new: Michaelsen introduced it into general education in 1983 (Michaelsen, 1983). Nor is it new to medical education. A lot has been written about its advantages and about how to introduce it into the curriculum; there is, for example, an entire AMEE Guide devoted to TBL (Parmelee, Michaelsen, Cook, & Hudes, 2012). However, the situation is much less clear when it comes to assessing how teams and team members perform.

As Khogali points out in his commentary (Khogali, 2013) on the AMEE Guide (Parmelee et al., 2012), there has not been much research on the effects or outcomes of TBL. To be fair, Parmelee et al. do give some guidance on how to consider learning outcomes, including a novel description of 'backwards design': working back from the desired learning outcomes towards TBL course content and design.

Where there has been some assessment of learning outcomes, as Fatmi et al. pointed out in their BEME Guide (Fatmi, Hartling, Hillier, Campbell, & Oswald, 2013), many studies simply looked at Kirkpatrick level 1 (user satisfaction and engagement). Those that did move beyond this tended to test fact retention and knowledge acquisition. Sisk, in a systematic review of TBL in nursing (Sisk, 2011), noted similar concerns about satisfaction, engagement and knowledge, and wondered why TBL was not being used to assess effectiveness in work teams.

Haidet et al. have given further consideration to the challenge of reporting outcomes from TBL and call for standardized approaches (Haidet et al., 2012). However, there is not much detail on how to actually measure teamwork performance outcomes. In team-based simulation, where opportunities arise for safe and consistent performance evaluation, there has been some progress. Rosen et al. present a good general overview of team-based assessment in simulation and suggest the use of the Event Based Assessment Techniques (EBAT) tool (Rosen et al., 2010), but its uptake seems limited so far.

The OSCE has been with us as an assessment tool for decades now. The Team OSCE (TOSCE) was first mooted by a team at St George’s in London in 1995. However, there seems to have been little evaluation of the TOSCE beyond user satisfaction. The only paper we found was by Singleton et al., who performed an extensive and rigorous evaluation of the TOSCE as a summative examination tool (Singleton, Smith, Harris, Ross-Harper, & Hilton, 1999). While they demonstrated very good reliability and reasonable validity, the most striking aspect of their approach was the likely cost and the logistic challenges that would arise.

We looked outside of medical education for examples of how collaboration and teamwork might be assessed. Online discussion forums see wide use in all areas of education and are a core function of many learning management systems (LMSs). Given this widespread use, we hoped that there would be a more extensive corpus of studies in this context. Thomas (Thomas, 2002) and Nandi et al. (Nandi, Hamilton, & Harland, 2012) have both made useful comments about how one might evaluate the quality of contributions in online discussions, the latter taking a solid qualitative approach. Dringus and Ellis used discourse analysis in a powerful but cumbersome approach to defining their SCAFFOLD metric (Dringus & Ellis, 2004). Others have attempted to derive similar metrics (Ellis & Hafner, 2005; Kay, 2006).

Our biggest concern with many of the above approaches relates to the highly subjective nature of the assessments, whether by teachers/facilitators or by peers. Apart from the biases that arise from this subjectivity, in synchronous activities it is hard for an observer to keep up with all that is going on during a burst of activity. In our Health Services Virtual Organization (HSVO) project, we found that even when analyzing video-recorded sessions, it was like the hoary old analogy of flying a plane: long tedious stretches where nothing happens, interspersed with flurries of furious activity (R. H. Ellaway, Topps, Lachapelle, & Cooperstock, 2009; R. Ellaway, Joy, Topps, & Lachapelle, 2013). In their review of tools for evaluating team performance in simulation, Rosen et al. noted similar difficulties (Rosen et al., 2010).

The advantage of online learning activities is that they generate data logs of what happened. Dringus and Ellis (Dringus & Ellis, 2005) advocate mining these logs when assessing discussion forums. Talavera and Gaudioso make a strong case for such data mining, but show that the transactional data available from LMSs is generally not well suited to this kind of analysis (Talavera & Gaudioso, 2004).

So there are many ideas on why we should measure team performance, some general concepts of what we should look at (not just fact acquisition), and some attempts at frameworks of metrics to apply. With this in mind, our group has been taking a different look at how one might generate activity metrics from such online learning activities.

Approach

Rather than looking at the consumption of learning objects or materials, there is now increased interest in analyzing activity streams. The commercial world is somewhat ahead of the research scene in this regard: Google, Facebook and Amazon have become hugely profitable on this basis. The annual value of your personal activity data from everyday interactions with the online world (approximately $1,200) far exceeds the annual cost of collecting it (approximately $2) (Madrigal, 2012).

A number of us have been exploring how activity metrics can be applied in learning analytics. In particular, we have been working with the Medbiq Learning Experience Working Group (Smothers, Topps, & Meiselman, 2016) to explore how the Experience API (xAPI; Haag, 2015) can be used. In working towards this, we are also seeking the collaboration of other authors and researchers, using collaborative writing as an ‘eat-your-own-dogfood’ exercise; more on this below. For now, we invite you to view and comment on ‘Learning Analytics and xAPI’ in a shared Google Doc.

We have been using xAPI with a number of different learning tools, such as Moodle, WordPress, H5P widgets and OpenLabyrinth. OpenLabyrinth is being redesigned as OLab4 to better support TBL, and xAPI will be core to the functionality of this new education research platform. This page describes the very simple basis for xAPI statements: Actor Verb Object, or …

Bob Did This.

The detailed structure and function of xAPI are better explained by Jason Haag at Advanced Distributed Learning (ADL) (Haag, 2015). We see this as an important toolset in workplace-based assessment (WBA), including use of that key clinical tool, the electronic medical record (EMR).
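
To make this Actor Verb Object structure concrete, the short sketch below (in Python) shows roughly what one such statement looks like and how it might be posted to a Learning Record Store (LRS). The names, activity IRI, endpoint and credentials are illustrative assumptions only, not part of any of the systems described here.

    import requests  # third-party HTTP library

    # A minimal xAPI statement: Actor (who), Verb (did what), Object (to what).
    statement = {
        "actor": {
            "objectType": "Agent",
            "name": "Bob Example",                    # illustrative actor
            "mbox": "mailto:bob@example.org",
        },
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/completed",
            "display": {"en-US": "completed"},
        },
        "object": {
            "id": "http://example.org/activities/tbl-module-1",   # illustrative activity
            "definition": {"name": {"en-US": "TBL module 1"}},
        },
    }

    # Post the statement to a hypothetical Learning Record Store endpoint.
    response = requests.post(
        "https://lrs.example.org/xapi/statements",
        json=statement,
        headers={"X-Experience-API-Version": "1.0.3"},
        auth=("lrs_username", "lrs_password"),
    )
    response.raise_for_status()

Statements accumulated in an LRS in this way form the activity stream from which the kinds of metrics discussed below can be derived.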

A key component that should be standardized where possible in one’s use of xAPI is what is meant by the Verb, since it links the Actor and the Object. Data analysis is much easier when the meaning of a Verb is consistent within a certain scope. To enable this, xAPI uses Profiles to define sets of Verbs associated with activity types, as described here in our general orientation to xAPI Profiles.

In order to improve the consistency of Verb use across Profiles and activity types, it is important to try to use existing Verb definitions where possible, or at least to align the usage of that Verb and Profile with others that are similar. See this guide on the Medbiq web site for more information: http://groups.medbiq.org/medbiq/display/XIG/Finding+or+creating+new+Verbs+or+Profiles
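
As a small illustration of this reuse principle, a draft vocabulary might begin by mapping everyday team activities onto Verb IRIs that already exist in the ADL vocabulary, minting a new Verb only when no reasonable match can be found. The pairings below are candidate suggestions for illustration only; they should be checked against the registries described in the guide above before being adopted.

    # Candidate mapping of everyday team activities onto existing ADL Verb IRIs.
    # Illustrative only: verify each choice against the published registries
    # before adopting it, and mint a new Verb only when nothing suitable exists.
    CANDIDATE_TEAM_VERBS = {
        "attended a team meeting":         "http://adlnet.gov/expapi/verbs/attended",
        "commented on a shared document":  "http://adlnet.gov/expapi/verbs/commented",
        "replied in a discussion forum":   "http://adlnet.gov/expapi/verbs/responded",
        "shared a resource with the team": "http://adlnet.gov/expapi/verbs/shared",
        "asked the group a question":      "http://adlnet.gov/expapi/verbs/asked",
    }

    def verb_for(activity: str) -> str:
        """Return the reused Verb IRI for an activity, if one has been mapped."""
        if activity not in CANDIDATE_TEAM_VERBS:
            raise KeyError(f"No existing Verb mapped for '{activity}'; see the guide above.")
        return CANDIDATE_TEAM_VERBS[activity]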

After familiarizing yourself with xAPI and how it works in general, we would particularly invite you to comment on and contribute to the xAPI Teamwork Profile. This is an attempt to define the Verbs that may be particularly useful in assessing team member contributions during everyday learning and collaborative workplace activities, such as meetings and discussion forums.
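
To give a flavour of what statements conforming to such a Profile might look like, the sketch below records one team member commenting on a shared document, with the team itself captured in the statement's context (xAPI allows a Group there). The group, document and member details are our own illustrative assumptions, not content already defined by the Teamwork Profile.

    # Hypothetical statement a Teamwork Profile might generate: a team member
    # comments on a shared document, with the team recorded in the context.
    teamwork_statement = {
        "actor": {
            "objectType": "Agent",
            "name": "Alice Example",
            "mbox": "mailto:alice@example.org",
        },
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/commented",
            "display": {"en-US": "commented"},
        },
        "object": {
            "id": "http://example.org/docs/learning-analytics-and-xapi",
            "definition": {"name": {"en-US": "Learning Analytics and xAPI (shared draft)"}},
        },
        "context": {
            "team": {                      # an anonymous Group, listing its members
                "objectType": "Group",
                "name": "Illustrative authoring team",
                "member": [
                    {"objectType": "Agent", "mbox": "mailto:alice@example.org"},
                    {"objectType": "Agent", "mbox": "mailto:bob@example.org"},
                ],
            },
        },
    }

Counting and weighting statements like this, per team member and per activity type, is one way in which the teamwork metrics discussed above could eventually be derived.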

This sounds like a lot of work, and you might well wonder why you would go to such lengths. Well, in the spirit of activity tracking and collaboration/teamwork, we are able to track your activities through the above-mentioned links. So when we invite you to peruse or comment on these documents, we can tell when you have done so, and to what extent. Eating our own dogfood.

For those who like to be more actively engaged, these mechanisms also allow us to offer those who make significant contributions to the project and its authoring the possibility of being included as co-authors. This might sound like pie-in-the-sky (or cloud!), but it has already been done by Eric Raymond in The Cathedral and the Bazaar (Raymond, 2010). Please contact us if you are interested in being more actively involved.

Agile Writing

As part of this exploration of collaborative teamwork, we are also taking the concept of Agile Writing further. ‘Agile’ in this context is closer to its use in software development, though we do hope to remain nimble in our approach. As well as inviting comments and contributions to our work on activity metrics, we invite contributions to our Agile Writing for Busy Clinician Educators piece, for those who are more interested in this approach to collaborative authoring than in the esoterica of software standards.

And if your energies don’t extend to writing at the moment, you can always simply answer our open, anonymous survey (of course there’s a survey!) on Barriers to Productive Writing.

Results

Of course, there are no results yet. But we do have funding for developing these tools, and work is currently in progress on OLab4 and other tools to explore activity metrics. At the very least, we anticipate the creation of both collaboratively written articles on how this should proceed and the toolsets to make these approaches more accessible to education researchers everywhere.

References

Dringus, L. P., & Ellis, T. (2005). Using data mining as a strategy for assessing asynchronous discussion forums. Computers & Education, 45(1), 141–160. http://doi.org/10.1016/j.compedu.2004.05.003

Dringus, L. P., & Ellis, T. J. (2004). Building the SCAFFOLD for evaluating threaded discussion forum activity: Describing and categorizing contributions. In 34th Annual Frontiers in Education Conference (FIE 2004) (pp. 170–175). IEEE. http://doi.org/10.1109/FIE.2004.1408488

Ellaway, R. H., Topps, D., Lachapelle, K., & Cooperstock, J. (2009). Integrating simulation devices and systems. Stud Health Technol Inform, 142, 88–90. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/19377120

Ellaway, R., Joy, A., Topps, D., & Lachapelle, K. (2013). Hybrid simulation with virtual patients. Simulation in Healthcare: Journal of the Society for Simulation in Healthcare, 7(6), 392–468. Retrieved from http://www.dtic.mil/docs/citations/ADA560817

Ellis, T. J., & Hafner, W. (2005). Peer evaluations of collaborative learning experiences conveyed through an asynchronous learning network. In Proceedings of the 38th Annual Hawaii International Conference on System Sciences (pp. 5b–5b). IEEE. http://doi.org/10.1109/HICSS.2005.487

Fatmi, M., Hartling, L., Hillier, T., Campbell, S., & Oswald, A. E. (2013). The effectiveness of team-based learning on learning outcomes in health professions education: BEME Guide No. 30. Medical Teacher. http://doi.org/10.3109/0142159X.2013.849802

Haag, J. (2015). xAPI Overview – ADL Net. Retrieved May 29, 2017, from https://www.adlnet.gov/xAPI

Haidet, P., Levine, R. E., Parmelee, D. X., Crow, S., Kennedy, F., Kelly, P. A., … Richards, B. F. (2012). Perspective: Guidelines for Reporting Team-Based Learning Activities in the Medical and Health Sciences Education Literature. Academic Medicine, 87(3). Retrieved from https://journals.lww.com/academicmedicine/Fulltext/2012/03010/Perspective___Guidelines_for_Reporting_Team_Based.15.aspx

Kay, R. H. (2006). Developing a comprehensive metric for assessing discussion board effectiveness. British Journal of Educational Technology, 37(5), 761–783. http://doi.org/10.1111/j.1467-8535.2006.00560.x

Khogali, S. E. (2013). Team-based learning: A practical guide: Guide Supplement 65.1 – Viewpoint. Medical Teacher, 35(2), 163–165. http://doi.org/10.3109/0142159X.2013.759199

Madrigal, A. C. (2012). How Much Is Your Data Worth? Retrieved August 23, 2016, from http://www.theatlantic.com/technology/archive/2012/03/how-much-is-your-data-worth-mmm-somewhere-between-half-a-cent-and-1-200/254730/

Michaelsen, L. K. (1983). Team learning in large classes. New Directions for Teaching and Learning, 1983(14), 13–22. http://doi.org/10.1002/tl.37219831404

Nandi, D., Hamilton, M., & Harland, J. (2012). Evaluating the quality of interaction in asynchronous discussion forums in fully online courses. Distance Education, 33(1), 5–30. http://doi.org/10.1080/01587919.2012.667957

Parmelee, D., Michaelsen, L. K., Cook, S., & Hudes, P. D. (2012). Team-based learning: A practical guide: AMEE Guide No. 65. Medical Teacher. http://doi.org/10.3109/0142159X.2012.651179

Raymond, E. (2010). The Cathedral and the Bazaar. Retrieved January 7, 2016, from http://www.webcitation.org/6eNHAoNF2

Rosen, M. A., Weaver, S. J., Lazzara, E. H., Salas, E., Wu, T., Silvestri, S., … King, H. B. (2010). Tools for evaluating team performance in simulation-based training. Journal of Emergencies, Trauma, and Shock, 3(4), 353–9. http://doi.org/10.4103/0974-2700.70746

Singleton, A., Smith, F., Harris, T., Ross-Harper, R., & Hilton, S. (1999). An evaluation of the Team Objective Structured Clinical Examination (TOSCE). Medical Education, 33(1), 34–41. http://doi.org/10.1046/j.1365-2923.1999.00264.x

Sisk, R. J. (2011). Team-Based Learning: Systematic Research Review. Journal of Nursing Education, 50(12), 665–669. http://doi.org/10.3928/01484834-20111017-01

Smothers, V., Topps, D., & Meiselman, E. (2016). Learning Experience | MedBiquitous Consortium. Retrieved February 1, 2016, from http://www.medbiq.org/learning_experience

Talavera, L., & Gaudioso, E. (2004). Mining Student Data To Characterize Similar Behavior Groups In Unstructured Collaboration Spaces. In 16th European conference on artificial intelligence. Retrieved from https://pdfs.semanticscholar.org/12fd/4b8d22052875064d43b6a7c4cfcf7f499872.pdf

Thomas, M. J. W. (2002). Learning within incoherent structures: the space of online discussion forums. Journal of Computer Assisted Learning, 18(3), 351–366. http://doi.org/10.1046/j.0266-4909.2002.03800.x