
Computer supported collaborative learning in a clerkship: an exploratory study on the relation of discussion activity and revision of critical appraisal papers

Abstract

Background

Medical students in clerkships are continuously confronted with real and relevant patient problems. To support clinical problem solving skills, students perform a Critical Appraisal of a Topic (CAT) task, often resulting in a paper. Because such a paper may contain errors, students could profit from discussing it with peers and revising it afterwards. Peer discussion supported by a Computer Supported Collaborative Learning (CSCL) environment has been associated with positive student perceptions of subjective knowledge improvement, and high activity during such discussions has been shown to go together with more task-focussed discussion, reflecting higher levels of knowledge construction. However, it remains unclear whether high discussion activity influences students’ decision to revise their CAT paper. The aim of this study was to examine whether students who revise their critical appraisal papers after discussion in a CSCL environment show more task-focussed activity and discuss critical appraisal topics more intensively than students who do not revise their papers.

Methods

Forty-seven medical students, divided into subgroups, participated in a structured asynchronous online discussion of individually written CAT papers on self-selected clinical problems. The discussion was structured around three critical appraisal topics. After the discussion, the students could revise their paper. For analysis purposes, all students’ postings were blinded, counted and analysed by the investigator, who was unaware of students’ characteristics and of whether or not the paper was revised. Postings were assigned to outside activity, non-task-focussed activity or task-focussed activity, and additionally to one of the three critical appraisal topics. Analysis results were compared between revised and unrevised papers.

Results

Twenty-four papers (51%) were revised after the online discussion. The discussions of the revised papers showed significantly higher numbers of postings, more task-focussed activity, and more postings on two of the critical appraisal topics: “appraisal of the selected article(s)” and “relevant conclusion regarding the clinical problem”.

Conclusion

A CSCL environment can support medical students in the execution and critical appraisal of authentic tasks in the clinical workplace. Revision of CAT papers appears to be related to discussion activity, and more specifically to high task-focussed activity on critical appraisal topics.


Background

In the clinical phase of the medical curriculum, during a clerkship, students learn primarily in the authentic context of the workplace [1, 2] and are continuously confronted with clinical problems. Students prefer learning from clinical problems in the workplace because these problems are real and relevant to them [3, 4]. To train clinical problem solving skills, medical students often use critical appraisal [4–6], defined as: “The process of assessing and interpreting evidence (usually by published research) by systematically considering its validity (closeness to the truth), results and relevance to the individual’s work” [7, 8]. A practical task here is a Critical Appraisal of a Topic (CAT). This CAT task requires a student to first formulate a clinical question relating to a clinical problem encountered in the workplace. Next, the literature is searched for articles offering evidence relevant to the problem. Then, the student has to appraise the evidence critically, in relation to the aetiology, diagnosis, prognosis, therapy and follow-up of the case in question, and to describe the evidence table. Finally, the student considers the value of the evidence and presents the conclusion related to the clinical problem concerned [8, 9]. A CAT paper written by an individual student can be considered a first draft that has not been subject to any review. Since a CAT paper may contain errors of fact, calculation and interpretation, students can profit from a thorough discussion of their CAT paper with peers, irrespective of whether they decide to revise the paper afterwards [9, 10].

However, such a collaborative activity poses logistical problems, particularly when students are dispersed over different training locations. Part of a solution may be provided by a Computer Supported Collaborative Learning (CSCL) environment, enabling students to engage in a structured, asynchronous discussion, independent of place and time [11–17]. It has been shown, however, that such collaborative activities do not automatically result in positive learning outcomes. The success of CSCL depends on, among other factors, the intensity of the online activity within groups and its results [18–20]. Research on the use of CSCL by university students has shown that more intense activity during discussions is associated with high task-focussed discussion activity, specifically reflecting higher levels of knowledge construction [21]. In a recent study, we demonstrated that medical students perceived subjective (knowledge) improvement of their learning outcomes during asynchronous discussions of an authentic CAT task in a CSCL environment [22]. Although high activity during asynchronous discussions in a CSCL environment appears to be associated with high task-focussed activity, it remains unclear whether students’ discussion activity influences their decision whether or not to revise the CAT paper. Furthermore, it is not clear whether high discussion activity on CAT topics influences students to revise their CAT paper.

In the present study we hypothesised that students who revise their CAT paper after discussing its content with peers in a CSCL environment conduct a more extensive discussion, with more task-focussed activity, than students who do not revise their paper. In addition, we hypothesised that students who revise their CAT paper show more discussion activity on critical appraisal topics than students who do not. Thus, the first objective of the present study was to examine whether students who revised their paper showed more task-focussed activity than students with unrevised papers. The second objective was to evaluate whether students who revised their paper showed more discussion activity on critical appraisal topics than students with unrevised CAT papers.

This paper details the process of peer discussion of a CAT paper on a clinical problem, and reports on students’ activity during the discussion in a CSCL environment in relation to paper revision.

Methods

Participants and task

Between January 2008 and June 2010, all sixth-year students of the medical curriculum of the Faculty of Health, Medicine and Life Sciences, Maastricht University, the Netherlands, participated in an eighteen-week clerkship in a discipline of their choice. The clerkships were offered in nine different hospitals, eight in the Netherlands and one in Austria. One of the tasks during the clerkship required students to investigate a self-selected clinical problem encountered during the elective and to write a pre-formatted Critical Appraisal of a Topic (CAT) paper on it, a task with which students were familiar from ample earlier experience.

Study design

Sixty-six medical students were invited by e-mail for this study, forty-seven of whom voluntarily agreed to participate. The participants gave informed consent before the start of the study and were free to withdraw their cooperation at any time. They were randomly allocated to sixteen groups: fifteen groups of three students and one group of two. Each student uploaded his or her individual CAT paper to a ‘drop-box’, read the papers of the peers in his or her group and provided comments in an asynchronous structured discussion forum in the open source CSCL environment DOKEOS (http://www.dokeos.com). The discussion was moderated by the student whose paper was the subject of discussion. After the online discussion, students were given the opportunity to revise their paper (Figure 1). Students had access only to their own discussion forum. The discussion activity and postings were automatically logged in the CSCL environment.

Figure 1: Study design

In order to structure the discussion of the CAT papers, the students were asked to address three topics: (1) the selection of the clinical problem, the formulation of the clinical question and the process of the literature search; (2) the study design and the methods of the article(s) selected on the basis of the literature search; and (3) the evidence provided by the article(s) that could be used to address the clinical problem, and relevant clinical conclusions regarding the clinical problem. To help students with the CSCL discussion task, they received, by e-mail, an instruction manual containing information about the design and use of the CSCL environment and about the schedule for the discussion. Students were free to make arrangements within their discussion group regarding the sequence in which the CAT papers were to be discussed, as long as each individual CAT paper was discussed within a two-week period. Students received a password and logon code to access the CSCL environment and could familiarise themselves with the environment before starting the actual task.

Measurement instruments and statistical analysis

1. Content analysis of students’ postings on collaborative problem solving activity

To identify the type of collaborative problem solving activity, content analysis of students’ postings was performed according to the validated Rainbow system [23]. This content analysis system was developed for educational discussion forums in general but, to our knowledge, has not been used in medical education before. The Rainbow system distinguishes seven categories of communicative interaction, which can be grouped into three collaborative problem solving activities: outside activity, non-task-focussed activity and task-focussed activity (Table 1). Within task-focussed activity, categories 5, 6 and 7 are considered to reflect increasingly higher levels of knowledge construction. In the present study, all postings of students’ CAT paper discussions were blinded, and the analysis of the postings was conducted by the investigator, who was unaware of student characteristics and of whether or not the paper under discussion was revised.

Table 1 Rainbow system for content analysis; activity, category, and category definitions

In principle, each individual posting was considered a unit of analysis. However, when a posting contained multiple activities, it was split into separate units of analysis, which were then coded separately [21, 24]. All units of analysis were counted, and descriptive statistics were calculated for the three collaborative problem solving activities as well as for the seven categories.
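For readers who wish to reproduce this kind of tallying, the sketch below illustrates in Python how coded units of analysis could be counted per Rainbow category and per activity group. It is a minimal, illustrative sketch only: the category-to-activity mapping shown is a simplified assumption, the authoritative definitions are those in Table 1, and the example data are invented.

```python
from collections import Counter

# Hypothetical grouping of the seven Rainbow categories into the three activity
# groups, assumed here for illustration; the authoritative mapping is in Table 1.
ACTIVITY_GROUP = {
    1: "outside activity",
    2: "non-task-focussed activity",
    3: "non-task-focussed activity",
    4: "task-focussed activity",
    5: "task-focussed activity",   # Opinions
    6: "task-focussed activity",   # Argumentation
    7: "task-focussed activity",   # Broaden and Deepen
}

def tally_units(coded_units):
    """coded_units: one Rainbow category code (1-7) per unit of analysis."""
    per_category = Counter(coded_units)
    per_activity = Counter(ACTIVITY_GROUP[code] for code in coded_units)
    return per_category, per_activity

# Invented example: eight coded units from one CAT paper discussion.
per_category, per_activity = tally_units([2, 5, 5, 6, 3, 7, 5, 6])
print(per_category)   # Counter({5: 3, 6: 2, 2: 1, 3: 1, 7: 1})
print(per_activity)   # Counter({'task-focussed activity': 6, 'non-task-focussed activity': 2})
```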

2. Content analysis of students’ postings on CAT topics of discussion.

To examine the discussion on the three critical appraisal topics as well as the corresponding CAT task elements (Table 2), all analysis units identified as category 5, 6 or 7 of task-focussed activity (see above) were assigned to one of the topics and their elements. Descriptive statistics were calculated for the frequency of analysis units per discussion topic, overall as well as for revised and unrevised papers.

Table 2 Prescribed critical appraisal topics and elements of the CAT task

A Mann–Whitney U test for independent samples was performed to compare the analysis units of the revised and the unrevised papers with regard to the three collaborative problem solving activities and their corresponding categories, and with regard to the three critical appraisal topics and their elements. P < .05 was considered statistically significant. The CAT task elements identified in either the revised or the unrevised papers were compared by a Chi-square test.
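As an illustration of the reported comparisons, a minimal sketch using SciPy is given below. The unit counts and the contingency table are invented for demonstration purposes and do not reproduce the study data; the paper does not specify the analysis software actually used.

```python
from scipy.stats import mannwhitneyu, chi2_contingency

# Invented counts of task-focussed units per discussion (one value per paper);
# these numbers do not reproduce the study data.
revised = [42, 55, 38, 61, 47, 50]
unrevised = [20, 25, 18, 30, 22, 27]

# Mann-Whitney U test for independent samples (two-sided).
u_stat, p_value = mannwhitneyu(revised, unrevised, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, P = {p_value:.3f}")

# Chi-square test on whether a hypothetical CAT task element was identified in
# the discussion, cross-tabulated against revision status (counts are invented).
#               element identified, element absent
contingency = [[20, 4],     # revised papers
               [8, 15]]     # unrevised papers
chi2, p_chi, dof, expected = chi2_contingency(contingency)
print(f"Chi-square = {chi2:.2f}, df = {dof}, P = {p_chi:.3f}")
```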

Results

1. Content analysis of students’ postings on collaborative problem solving activity.

A total of 1582 units of analysis were identified in students’ postings in the various discussion groups (Table 3). In the discussions of revised papers (n = 24), the number of analysis units was almost twice as high as in the unrevised paper discussions (n = 23) (P < .001). Non-task-focussed activity was identified in discussions on revised papers (23.2%) as well as on unrevised papers (25.8%). Task-focussed activity was relatively high in both revised (73.9%) and unrevised paper discussions (69.5%), but, in absolute terms, the task-focussed activity in the revised paper discussions was twice as high as in the unrevised paper discussions (P < .001). Statistically significantly higher numbers of units in category 5 (Opinions; P < .001), category 6 (Argumentation; P < .005) and category 7 (Broaden and Deepen; P < .016) were found in the revised paper discussions compared with the unrevised paper discussions. Effect sizes (Cohen’s d) for these categories were large, with the exception of category 7.
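The effect sizes reported here are Cohen’s d values; by convention, a d of roughly 0.8 or higher is considered large. The sketch below shows how such an effect size can be computed, assuming the common pooled-standard-deviation formulation and reusing the invented counts from the earlier sketch; it is illustrative only and does not use the study data.

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardized mean difference with a pooled sample standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    s_a, s_b = stdev(group_a), stdev(group_b)
    pooled_sd = sqrt(((n_a - 1) * s_a ** 2 + (n_b - 1) * s_b ** 2) / (n_a + n_b - 2))
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Invented counts, as in the sketch above; not the study data.
revised = [42, 55, 38, 61, 47, 50]
unrevised = [20, 25, 18, 30, 22, 27]
print(f"Cohen's d = {cohens_d(revised, unrevised):.2f}")  # values >= 0.8 are conventionally 'large'
```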

Table 3 Frequencies and descriptive statistics of discussion activity in the revised and unrevised paper discussions

2. Content analysis of students’ postings on CAT topics of discussion.

Analysis units of task-focussed activity relating to the three critical appraisal topics of discussion are presented in Table 4. For topic 2, ‘appraisal of the selected article(s)’, and topic 3, ‘relevant conclusion regarding the clinical problem’, the number of units was significantly higher in the revised than in the unrevised paper discussions. Effect sizes (Cohen’s d) for these topics were large as well.

Table 4 Frequency of analysis units of revised and unrevised papers per critical appraisal topic of discussion

The frequencies and percentages of CAT task elements identified in either revised or unrevised paper discussions are presented in Table 5. Overall, CAT task elements were discussed more in the revised than in the unrevised paper discussions. In the revised paper discussions, every prescribed CAT topic and corresponding CAT task element was identified, whereas in the unrevised paper discussions the elements ‘Study design’ and ‘Study outcome’ of topic (2) were not identified as elements under discussion. Significantly more discussion was identified in the revised papers for the topic (1) elements ‘Preparation for executing literature search’ and ‘Strategy of literature search’, for every CAT task element of topic (2), and for the topic (3) element ‘Relevant clinical conclusion regarding clinical problem’.

Table 5 CAT task elements of critical appraisal topics discussed in revised and unrevised papers

Discussion

The results of the present study indicate that a Computer Supported Collaborative Learning environment can effectively support medical students in learning collaboratively during clinical clerkships. Through the execution of an authentic task, such as a critical appraisal of a relevant clinical problem, students are stimulated to critically discuss and revise their critical appraisal paper. Students’ paper revision seems to be associated with increased activity during discussions with peers, and more specifically with higher task-focussed discussion activity and a more intense discussion of critical appraisal topics.

The discussions of students who revise their CAT paper differ substantially from those of students who do not. Revised paper discussions are more extensive, more social and more task-focussed, reflecting both lower and higher levels of knowledge construction. These findings are consistent with results obtained with CSCL in university classroom environments and in the medical workplace, showing that both social interaction and task orientation are typical of an active discussion leading to higher levels of knowledge construction [25, 26].

Furthermore, students who revise their CAT paper after discussing its content with peers in a CSCL environment show more discussion activity on critical appraisal topics, with a strong focus on the CAT task elements ‘strategy of the literature search’ and ‘appraisal of the study population’. Other elements identified under discussion in the majority of revised papers were ‘preparation for executing literature search’, ‘study design’ and ‘relevant clinical conclusion regarding clinical problem’. A study among undergraduate medical students showed no differences in critical appraisal skills between students who received a computer-based learning session and students who attended classroom lectures [17]. However, two studies on medical students’ individual learning in online critical appraisal modules during clinical clerkships showed positive outcomes in favour of online learning. One study measured medical students’ pre- and post-test scores on an individual, pre-determined critical appraisal task and showed improvement in executing a search strategy and in appraising a study design [27], while another study compared the critical appraisal skills of medical students after online learning with those of students without intervention and reported a higher quality of the literature search after online learning [28]. In the above-mentioned studies, as in the present study, students worked individually on a critical appraisal task. In the present study, however, students worked on a self-selected clinical problem, extended with a collaborative discussion of their paper with peers. These differences in study design may explain why the present study not only identified the same CAT task elements, but also found additional critical appraisal elements under discussion.

A limitation of the present study is the difficulty of controlling all variables in an online collaborative discussion. The design of a structured discussion task provides a certain degree of control. Despite this structure, it cannot be excluded that students’ performance in the course may have influenced their decision to revise their paper. However, this phenomenon has probably played a minor role, since the participants were final-year medical students with a comparable knowledge level. Moreover, all students participating in the present study had performed several critical appraisals during the previous four years of the medical curriculum and can thus be considered experienced in writing a CAT paper. Therefore, it was expected that participants might not feel a great urgency to discuss the task. It is thus remarkable that, even after intensive training in performing a CAT, 51% of the students revised their paper after the collaborative online discussion. Students can therefore profit from a peer discussion of their papers, irrespective of whether they revise them or not [9, 10]. Besides the effect of discussion with peers, other factors could have influenced students to revise the CAT paper. First, since students participated voluntarily in this study, they were probably highly motivated to discuss with peers. Secondly, motivation for discussion may have been high because this critical appraisal task was related to a self-selected, authentic clinical problem. Finally, it cannot be excluded that the discussion itself further stimulated students to engage with the task.

Another limitation is that the content analysis was performed by a single researcher. However, since the analysis was conducted according to a structured and validated analysis system [23], it seems reasonable to assume that the analysis was executed adequately.

Furthermore, it cannot be denied that the sample of participants was relatively small. However, the large effect sizes indicate a substantial effect on students’ discussion activity at two levels of knowledge construction and on two critical appraisal topics of the discussion.

In CSCL research, much attention has been paid to measuring outcomes in terms of cognition, skills, critical thinking and problem solving, but research on what influences student learning during discussions is scant. Further research on a larger scale could help to clarify the learning processes during discussions and the extent to which these processes affect the interaction among students and knowledge construction in CSCL. A controlled study comparing student interaction in intense versus limited discussions would be an interesting intervention for further research. Furthermore, it would be interesting to investigate whether an online discussion could serve as a framework to support students’ learning in existing courses.

Conclusions

A Computer Supported Collaborative Learning environment can support medical students in critically appraising clinical problems encountered during learning in the workplace. An increase in activity during the discussions seems to be related to more task-focussed activities and more discussion of critical appraisal topics.

References

  1. O’Brien BC, Poncelet AN: Transition to clerkship courses: Preparing students to enter the workplace. Academic Med. 2010, 85 (12): 1-8.


  2. Chittenden EH, Henry D, Saxena V, Loeser H, O’Sullivan PS: Transitional clerkship: An experiental course based on workplace learning theory. Academic Med. 2009, 84 (7): 872-876. 10.1097/ACM.0b013e3181a815e9.


  3. Collis B, Margaryan A: Applying activity theory to computer-supported collaborative learning and work-based activities in corporate settings. Educ Technol Res Dev. 2004, 52 (4): 1042-1629.


  4. Manley K, Titchen A, Hardy S: Work-based learning in the context of contemporary health care education and practice: A concept analysis. Pract Dev Health Care. 2009, 8 (2): 87-127. 10.1002/pdh.284.


  5. Rhodes G, Shiel G: Meeting the needs of the workplace and the learner through work-based learning. J Work Learn. 2007, 19 (2): 173-187.


  6. Feltovich PJ, Spiro RJ, Coulson RL: The nature of conceptual understanding in biomedicine: the deep structure of complex ideas and the development of misconceptions. The Cognitive Sciences in Medicine. Edited by: Evans DA, Patel VL. 1989, Cambridge: MIT Press


  7. Hyde C, Parkes J, Deeks J, Milne R: Systematic review of effectiveness of teaching critical appraisal. 2000, Institute of Health Sciences Oxford: ICRF/NHS Centre for Statistics in Medicine


  8. Parkes J, Hyde C, Deeks JJ, Milne R: Teaching critical appraisal skills in health care settings. Cochrane Database of Systematic Reviews. 2009, John Wiley & Sons Ltd


  9. Sauvé S, Lee HN, Meade MO, Lung JD, Farkouh M, Cook DJ, Sackett DL: The Critically Appraised Topic: A Practical Approach to Learning Critical Appraisal. Ann R College Physicians Surgeons Canada. 1995, 28: 396-398.


  10. Bennet K, Sackett D, Haynes R, Neufeld V, Tugwell P, Robert R: A controlled trial of teaching critical appraisal of the clinical literature to medical students. JAMA. 1987, 257 (18): 2451-2454. 10.1001/jama.1987.03390180069025.


  11. Van Bruggen J, Kirschner P, Jochems W: External representation of argumentation in CSCL and the management of cognitive load. Learn Instr. 2002, 12: 121-138. 10.1016/S0959-4752(01)00019-6.


  12. Medélez E, Burgun A, Le Duff F, Le Beux P: Design of a CSCL Environment for Clinical Reasoning Learning and Problem-Based Learning in Medicine. Studies in Health Technology and Informatics. Edited by: Patel VL, Rogers R, Haux R. 2001, MedInfo: IOS Press


  13. Turk RFM, Dankbaar MEW, Van Beeck EF: Computerondersteund samenwerkend leren in het coschap sociale geneeskunde [Computer-supported collaborative learning in the social medicine clerkship]. Tijdschrift voor Medisch Onderwijs. 2009, 28 (6): 269-279.


  14. Michaelsen L, Bauman Knight A, Dee Fink L: Team-based learning: a transformative use of small groups. 2002, Westport, USA: Praeger Publishers


  15. Kreijns K, Kirschner P, Jochems W: The Sociability of Computer-Supported Collaborative Learning Environments. Educ Technol & Soc. 2002, 5 (1): 8-22.


  16. Kreijns K, Kirschner P, Jochems W: Identifying the pitfalls for social interaction in computer-supported collaborative learning environments: a review of the research. Comput Human Behav. 2003, 19: 335-353. 10.1016/S0747-5632(02)00057-2.


  17. Davis J, Crabb S, Rogers E, Zamora J, Khan K: Computer-based teaching is as good as face to face lecture-based teaching of evidence based medicine: a randomized controlled trial. Medical Teacher. 2008, 20: 302-307.


  18. Kreijns K, Kirschner P, Jochems W, Van Buuren H: Measuring perceived sociability of computer-supported collaborative learning environments. Comput Educ. 2005, 49: 176-192.


  19. Dillenbourg P, Järvelä S, Fischer F: The evolution of research on computer-supported collaborative learning. From design to orchestration. Technology-enhanced learning: principles and products. Edited by: Ludvigsen S, Jong T, Lazonder A, Barnes S. 2009, Springer Science & Business Media B.V.


  20. Wang AY, Newlin MH: Characteristics of students who enrol and succeed in psychology web-based classes. J Educ Psychol. 2000, 92 (1): 137-143.


  21. Schellens T, Valcke M: Collaborative learning in asynchronous discussion groups: What about the impact on cognitive processing?. Comput Human Behav. 2005, 21: 957-975. 10.1016/j.chb.2004.02.025.


  22. Koops W, Van der Vleuten C, De Leng B, Oei S, Snoeckx L: Computer-supported collaborative learning in the medical workplace: Students' experiences on formative peer feedback of a critical appraisal of a topic paper. Medical Teacher. 2011, 33 (6): e318-e323. 10.3109/0142159X.2011.575901.


  23. Baker M, Andriessen J, Lund K, Van Amelsvoort M, Quignard M: Rainbow: A framework for analysing computer-mediated pedagogical debates. J Comp-Supported Collaborative Learning. 2007, 2: 315-357. 10.1007/s11412-007-9022-4.


  24. Rourke L, Anderson T, Garrison DR, Archer W: Methodological issues in the content analysis of computer conference transcripts. Int J Artif Intell Educ. 2001, 12: 8-22.


  25. Schellens T, Valcke M: Fostering knowledge construction in university students through asynchronous discussion groups. Comput Educ. 2006, 46: 349-370. 10.1016/j.compedu.2004.07.010.


  26. De Wever B, Van Winckel M, Valcke M: Discussing patient management online: The impact of roles on knowledge construction for students interning at the paediatric ward. Adv Heal Sci Educ. 2008, 13 (1): 25-42. 10.1007/s10459-006-9022-6.


  27. Aronoff S, Evans B, Fleece D, Lyons P, Kaplan L, Rojas R: Integrating evidence based medicine into undergraduate medical education: combining online instruction with clinical clerkships. Teaching Learning Med. 2010, 22 (3): 219-223. 10.1080/10401334.2010.488460.


  28. Schilling K, Wiecha J, Polineni D, Khalil S: An interactive web-based curriculum on evidence-based medicine: design and effectiveness. Family Med. 2006, 38 (2): 126-132.




Author information


Corresponding author

Correspondence to Willem JM Koops.

Additional information

Competing interests

The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.

Authors’ contributions

WJMK initiated the study, wrote the research design, conducted the research, collected and analysed the data, and wrote the manuscript. CPMV discussed the research design and helped to draft the manuscript. BAL discussed the research design and helped to draft the manuscript. LHEHS discussed the research design, helped to analyse the data, and helped to draft the manuscript. All authors read and approved the final manuscript.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Koops, W.J., van der Vleuten, C.P., de Leng, B.A. et al. Computer supported collaborative learning in a clerkship: an exploratory study on the relation of discussion activity and revision of critical appraisal papers. BMC Med Educ 12, 79 (2012). https://doi.org/10.1186/1472-6920-12-79

