
Adaptation and reliability of the Readiness for Interprofessional Learning Scale in a Danish student and health professional setting

Abstract

Background

Shared learning activities aim to enhance the collaborative skills of health students and professionals in relation to both colleagues and patients. The Readiness for Interprofessional Learning Scale (RIPLS) is used to assess such skills. The aim of this study was to validate a Danish four-subscale version of the RIPLS in a sample of 370 health-care students and 200 health professionals.

Methods

The questionnaire was translated following a two-step process, including forward and backward translations, and a pilot test. A test of internal consistency and a test–retest of reliability were performed using a web-based questionnaire.

Results

The questionnaire was completed by 370 health-care students and 200 health professionals (test), whereas the retest was completed by 203 health professionals. A full data set of first-time responses was generated from the 570 students and professionals at baseline (test).

Good internal association was found between items in Positive Professional Identity (Q13–Q16), with factor loadings between 0.61 and 0.72.

The confirmatory factor analysis revealed item uniqueness values between 0.22 and 0.93, with 11 items above 0.50, 18 below 0.50, and none below 0.20. Seven items had weighted kappa values between 0.20 and 0.40, 16 items between 0.40 and 0.60, and six items between 0.60 and 0.80, all with p-values below 0.001.

Conclusion

Strong internal consistency was found for both populations. The Danish RIPLS proved a stable and reliable instrument for the Teamwork and Collaboration, Negative Professional Identity, and Positive Professional Identity subscales, while the Roles and Responsibility subscale showed some limitations. The reason behind these limitations is unclear.


Background

Shared-learning activities have recently entered the health-care curriculum to prepare students for collaboration with colleagues and with patients. Interprofessional learning primarily aims to reduce prejudice among professionals, improve awareness of the roles and duties of other professional groups, and advance teamwork and collaborative competencies. The new activities in the education of health-care students have raised the problem of measuring the effects on student attitudes, a challenge which prompted Parsell and Bligh [1] to develop the Readiness for Interprofessional Learning Scale (RIPLS). The first version of the questionnaire had 19 items, with three subscales relating to Teamwork and Collaboration, Professional Identity, and Roles and Responsibilities. The instrument was further developed by McFadyen et al., who recommended splitting the Professional Identity subscale into Negative Professional Identity and Positive Professional Identity [2, 3]. To strengthen the third subscale, Roles and Responsibilities, while seeking new factors such as patient-centredness and further validating the RIPLS instrument for use with health professionals, the tool was extended to 29 items [4]. The RIPLS has been translated and adapted to fit other cultures; by now, Swedish [5] and Japanese [6] versions exist. Both translations followed a forward and backward translation process, carried out by translators who were experts in both the source and the target language, to ensure a culturally relevant result. A modified version of the English instrument with 29 items and three subscales is used in the United Arab Emirates [7]. All three versions are aimed at health-care students.

The instrument was originally developed for health-care students, and surveys have primarily involved students of health, social, and human care professions [1, 3, 5–13]. Although Reid et al. used a modified instrument aimed at health professionals [4], further research is needed to strengthen the validity and reliability of results obtained in professional contexts.

Danish translation

The Danish version used in our surveys of students and professionals is a translation of the original English RIPLS. De Vet, Terwee, Mokkink, and Knol's recommendations were followed [14]. The process involved 1) forward translation from English into Danish by two native speakers of Danish working independently of each other; 2) synthesis of these two translations; 3) two independent backward translations from Danish into English by two native speakers of English; 4) review and reconciliation of the backward translations; and 5) pilot-testing on a convenience sample of 15 students, after which the final editing took place (a detailed description of the translation process is available on request from the corresponding author). To adapt the student version for use with our health professional group, the following changes were made: we replaced the Danish equivalents of "student" with "professional" and "interdisciplinary" with "interprofessional", and the phrase "working in an interprofessional team" replaced terms describing the effects of interdisciplinary training.

The aim of this study was to validate the Danish version of the Readiness for Interprofessional Learning Scale in a sample of health-care students and professionals.

Methods

Our surveys included a test of internal consistency and a test–retest of reliability; only the professional group was retested. The questionnaires were administered via the web.

Validation of the instrument aimed to check: completeness of the data; subscale assumptions; item-discriminant validity and scaling success rates (i.e., the degree to which each item correlates with its proposed scale); internal consistency reliability for each subscale; test-retest reliability (for the professional sample only); and subscale precision and applicability to the specific population.

Stata Release 13 was used for all statistical analyses (StataCorp. 2013. Stata Statistical Software: Release 13. College Station, TX: StataCorp LP).

Data Collection, Study Population, and Validation

The health professional group included medical doctors, registered nurses, nursing assistants, medical secretaries, and physiotherapists who participated in an intervention concerning interprofessional learning and collaboration. The data were collected in August and September of 2012 using a web-based questionnaire system. The participants received an e-mailed invitation and a link to the form. For the retest, an identical questionnaire was administered after 16 days. For each test, a reminder was dispatched 2 weeks later to those who had not submitted a response.

The Danish student version was validated in a group of health-care students of nursing, medicine, physiotherapy, occupational therapy, laboratory technology, and radiography, who participated in a study to evaluate structured interprofessional clinical training in a hospital setting. Data collection took place from August 2008 to February 2010. A web-based questionnaire was administered through an e-mail link. A reminder was sent if no response was received within 7 days. E-mail addresses were obtained from the included educational institutions.

The Readiness for Interprofessional Learning Scale

The use of subscales in RIPLS studies has varied. The original three-subscale version has been tested in several studies [2, 8], where internal consistency tests and content analyses revealed structural limitations, leading McFadyen et al. to suggest a four-subscale structure [2], which has since been tested thoroughly [11–13].

Individual items have also been revised. The original questionnaire contained 19 items but was expanded for use with professional groups [4]. The resulting 29 items assess Teamwork and Collaboration (nine items), Negative Professional Identity (three items), Positive Professional Identity (four items), and Roles and Responsibilities (13 items). Responses are given on a five-point bipolar symmetric Likert scale (Strongly agree, Agree, Undecided, Disagree, Strongly disagree).

Analyses

To ensure that no items were left unanswered, both survey systems were constructed to disallow blank responses. However, some respondents disconnected before finishing the questionnaire, and we report the proportions of both completed and partly completed responses. As the relevant information was unavailable, no analysis of non-responders was possible. Missing data are therefore reported at the respondent level rather than at the item level.
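As an illustration, respondent-level completeness of this kind can be tabulated in a few lines. The sketch below is ours, not the authors' Stata code, and assumes a pandas DataFrame with one row per respondent, one column per item, and NaN where a respondent disconnected early.

```python
import pandas as pd

def completion_summary(responses: pd.DataFrame) -> pd.Series:
    """Tabulate fully vs. partly completed responses at the respondent level."""
    complete = responses.notna().all(axis=1)   # True where no item is missing
    return pd.Series({
        "n_respondents": len(responses),
        "n_complete": int(complete.sum()),
        "pct_complete": round(100 * complete.mean(), 1),
        "pct_partial": round(100 * (1 - complete.mean()), 1),
    })
```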

Previous studies [1, 4, 11, 15] have indicated that four subscales are appropriate for RIPLS studies in other languages. This was evaluated for our study using a confirmatory factor analysis based on item correlations. We were prepared to reconsider the number of subscales if an item's association with the latent factor fell below 0.2; coefficients above 0.9 were considered highly satisfactory [14].
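The authors ran their analyses in Stata 13; purely as an illustration of the kind of computation involved, a four-factor model with rotated loadings and uniqueness values (the outputs reported in Table 3) can be fitted in Python with the third-party factor_analyzer package. The package choice, rotation method, and variable names here are our assumptions, not the authors' procedure.

```python
# Illustrative four-factor analysis (not the authors' Stata code).
# Requires: pip install factor_analyzer
import pandas as pd
from factor_analyzer import FactorAnalyzer

def four_factor_loadings(items: pd.DataFrame) -> pd.DataFrame:
    """items: one row per respondent, one column per RIPLS item (29 columns)."""
    fa = FactorAnalyzer(n_factors=4, rotation="varimax")
    fa.fit(items)
    loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                            columns=[f"F{i + 1}" for i in range(4)])
    loadings["uniqueness"] = fa.get_uniquenesses()
    # Items whose association with every factor falls below 0.2 would,
    # per the criterion above, prompt reconsideration of the subscales.
    loadings["below_criterion"] = (loadings.iloc[:, :4].abs() < 0.2).all(axis=1)
    return loadings
```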

Item-discriminant validity was tested to determine the correlation between each item and its assumed subscale.
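A minimal sketch of this check, under our assumption that item-discriminant validity is assessed by comparing each item's corrected correlation with its own subscale (the item removed from the scale sum) against its correlations with the other subscales; the subscale boundaries follow the structure described above (Q1–Q9, Q10–Q12, Q13–Q16, Q17–Q29):

```python
import pandas as pd

# Assumed item-to-subscale mapping, following the four-subscale structure.
SUBSCALES = {
    "teamwork":    [f"q{i}" for i in range(1, 10)],   # Q1-Q9
    "neg_prof_id": [f"q{i}" for i in range(10, 13)],  # Q10-Q12
    "pos_prof_id": [f"q{i}" for i in range(13, 17)],  # Q13-Q16
    "roles_resp":  [f"q{i}" for i in range(17, 30)],  # Q17-Q29
}

def scaling_success_rate(items: pd.DataFrame) -> float:
    """Share of items correlating more with their own (corrected) scale
    than with any other scale."""
    successes, total = 0, 0
    for scale, cols in SUBSCALES.items():
        for col in cols:
            # Corrected own-scale correlation: the item is excluded
            # from its own scale sum to avoid inflating the correlation.
            own = items[col].corr(items[cols].drop(columns=col).sum(axis=1))
            best_other = max(items[col].corr(items[other].sum(axis=1))
                             for name, other in SUBSCALES.items() if name != scale)
            successes += int(own > best_other)   # one "scaling success"
            total += 1
    return successes / total
```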

Within each subscale, Cronbach's alpha reliability coefficients were calculated to test internal consistency reliability; values between 0.70 and 0.90 were taken to indicate acceptable reliability [16]. Furthermore, subscale mean scores were examined for differences between the two samples, as such differences would indicate applicability issues related to certain groups.
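Cronbach's alpha follows directly from its definition, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), where k is the number of items in the subscale. A short self-contained implementation (ours, for illustration):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = items of one subscale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale sum
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```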

Test–retest data (for the health professionals only) were analysed by calculating weighted kappa coefficients for each item using quadratic weights (test and retest samples). Values below 0.2 indicate poor agreement, 0.2–0.4 fair agreement, 0.4–0.6 moderate agreement, and 0.6–0.8 substantial agreement; coefficients between 0.8 and 1.0 express almost perfect agreement [17].
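For illustration, a quadratically weighted kappa for a single item is available off the shelf in scikit-learn; the function and the sample data in the comment are ours:

```python
from sklearn.metrics import cohen_kappa_score

def item_weighted_kappa(test_scores, retest_scores) -> float:
    """test_scores/retest_scores: 1-5 responses to one item, paired by respondent."""
    return cohen_kappa_score(test_scores, retest_scores, weights="quadratic")

# Identical answers on both occasions yield kappa = 1.0:
# item_weighted_kappa([1, 3, 5, 2], [1, 3, 5, 2])
```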

Questionnaires occasionally suffer from floor or ceiling effects when applied in new circumstances. As the original 19-item questionnaire was created for a student population, and later expanded to 29 items and adjusted for a population of professionals, we evaluated floor and ceiling effects through the skewness of scores: highly skewed scores would indicate low precision or low applicability to the specific population.
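A sketch of how floor and ceiling proportions and skewness might be computed for one item or subscale score, assuming 1-5 scoring and the 15 % criterion later cited from Terwee et al. [20]; the function layout is our own illustration:

```python
import numpy as np
from scipy.stats import skew

def floor_ceiling(scores: np.ndarray, lo: int = 1, hi: int = 5) -> dict:
    """scores: 1-D array of item or subscale scores."""
    floor_p = np.mean(scores == lo)   # share at the lowest possible score
    ceil_p = np.mean(scores == hi)    # share at the highest possible score
    return {
        "floor_pct": 100 * floor_p,
        "ceiling_pct": 100 * ceil_p,
        "skewness": float(skew(scores)),
        "effect_present": floor_p >= 0.15 or ceil_p >= 0.15,
    }
```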

Ethical considerations

All included participants were of age (18 years or older), and the return of a questionnaire, whether fully or only partially completed, was considered an expression of voluntary consent to participate. All personal identifiers were removed or disguised during the analyses to preclude personal identification.

The study was licensed by the Danish Data Protection Agency and needed no further ethical approval according to Danish legislation.

Results

The questionnaire was completed by 370 students of various health-care professions, 80.5 % of whom were female (response rate 57.5 %). The distribution of students was as follows: 31.9 % nursing, 13.8 % medicine, 19.2 % physiotherapy, 14.6 % occupational therapy, 14.3 % radiography, and 6.2 % medical laboratory technology.

Two hundred health professionals completed the first questionnaire (test) (response rate 41 %); the second questionnaire (retest) was completed by 203. Both forms were returned by 129 professionals. Table 1 gives numbers of respondents by age, gender, and profession. The responses by item and subscale are shown in Table 2.

Table 1 Students, health professionals (baseline and test–retest sample, respectively), and full data set, by age, gender, and profession
Table 2 Responses by item and subscale - full data set (n = 570)

In order to test the overall applicability of the questionnaire, a full data set of first-time responses was generated from the 570 students and professionals at baseline.

Subscale Assumptions

Factor analyses were conducted at baseline on the full data set of students and professionals. Association was established for all items in Teamwork and Collaboration (Q1–Q9), with factor loadings between 0.46 and 0.74. Items Q5, Q6, and Q9 were also associated with Q13–Q16 in Positive Professional Identity, with factor loadings of 0.43, 0.46, and 0.49, respectively.

While the items in Negative Professional Identity (Q10–Q12) were associated with the latent factor (factor loadings between 0.40 and 0.46), they were negatively associated with Positive Professional Identity (factor loadings between −0.51 and −0.55).

Good internal association was found between items in Positive Professional Identity (Q13–Q16), with factor loadings between 0.61 and 0.72.

The Roles and Responsibilities items (Q17–Q29) showed irregular internal correlation; five items (Q17–Q21) had factor loadings between 0.26 and 0.52, while six items (Q24–Q29) were between 0.40 and 0.87. The remaining items (Q22 and Q23) were associated with Teamwork and Collaboration (factor loading 0.27) and with Negative Professional Identity (factor loading 0.43).

The uniqueness of the items (i.e., the proportion of an item's variance not explained by the common factors) ranged between 0.22 and 0.93, with 11 items above 0.50, 18 below 0.50, and no items below 0.20. The results of the factor analysis are shown in Table 3.

Table 3 Rotated factor loadings and unique variances, by item (n = 545)

The rotated factor loadings seem to suggest the need for an independent subscale for the last six items. To further test this hypothesis, an additional factor analysis assuming five subscales was performed. However, this analysis produced only marginally different uniqueness values; the introduction of a fifth subscale was therefore rejected (data not shown).
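A sketch of that robustness check, under the same illustrative setup as the earlier factor-analysis example: refit with five factors and compare per-item uniqueness against the four-factor solution. Only a marginal reduction would argue against the fifth subscale, as concluded here.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer

def uniqueness_reduction(items, n_low: int = 4, n_high: int = 5) -> np.ndarray:
    """Per-item drop in uniqueness when moving from n_low to n_high factors."""
    u = {}
    for n in (n_low, n_high):
        fa = FactorAnalyzer(n_factors=n, rotation="varimax")
        fa.fit(items)
        u[n] = np.asarray(fa.get_uniquenesses())
    return u[n_low] - u[n_high]   # near-zero values favour the smaller model
```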

Item-Discriminant Validity, Scaling Success Rates, and Internal Consistency

The item-discriminant validity of the RIPLS was tested through scaling success rates for each subscale. To test internal consistency between items, Cronbach's alpha values were calculated for each subscale on the full data set.

Table 4 shows the results for item-internal consistency, item-discriminant validity, scaling success rates, homogeneity, and reliability.

Table 4 Item scaling tests and reliability estimates for RIPLS subscales

Test-Retest

Item responses were coded as follows: Strongly disagree = 5; Disagree = 4; Undecided = 3; Agree = 2; Strongly agree = 1. Mean scores were calculated by item, and stability over time was found: 11 items showed identical mean scores in the test and retest samples; 11 items showed a 0.1 decrease in test–retest mean scores; four items decreased by 0.2; and four items showed a 0.1 increase. The test-to-retest mean scores thus changed only marginally (data not shown).
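This mean-score comparison can be reproduced mechanically. The sketch below is our illustration and assumes two DataFrames aligned on the 129 professionals who returned both forms, with one column per item coded 1–5 as above:

```python
import pandas as pd

def test_retest_mean_shift(test: pd.DataFrame, retest: pd.DataFrame) -> pd.Series:
    """Per-item change in mean score; positive values indicate a decrease
    from test to retest, matching the 0.1-0.2 shifts described above."""
    return (test.mean() - retest.mean()).round(1)
```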

Weighted kappa analysis was performed in order to test for reproducibility, resulting in seven items with kappa values between 0.20 and 0.40, 16 items with values between 0.40 and 0.60, and six items between 0.60 and 0.80; all showing p-values below 0.001. Table 5 displays all kappa scores.

Table 5 Weighted kappa values (professionals, test-retest sample, n = 129)

Subscale Precision and Applicability

For each subscale, the proportions of minimum and/or maximum scores among students and professionals were calculated. These counts indicate whether the respondents tended to cluster in the highest or the lowest score categories. In the three subscales with positively formulated items, mean scores were above 3.46 for all samples, and equal to or above 4.0 in two of the three scales. In the subscale with negatively formulated items (Table 2, Q10–12), the mean was 2.0 for each sample, thus indicating skewness, as shown in Table 6.

Table 6 Descriptive statistics of score distributions for subscales, by sample

The entire data set was tested for floor and ceiling effects. For Teamwork and Collaboration, between 23 % and 55 % of respondents gave the top rating, Strongly agree, whereas 1–3 % gave the bottom rating, Strongly disagree. For Negative Professional Identity, top ratings were given by 1–3 % and bottom ratings by 26–37 %. For the items in Positive Professional Identity, 20–27 % gave top ratings and 1–3 % bottom ratings. The first three items in Roles and Responsibility (Q17–Q19) attracted top ratings from 2–4 % and bottom ratings from 10–31 %; for Q20, 1 % gave the top rating and 6 % the bottom rating. Items Q21–Q23 were top-rated by 6–17 % and bottom-rated by 0.7–8 % of respondents. The remaining six Roles and Responsibility items (Q24–Q29) showed 27–67 % top ratings and 0.7–3 % bottom ratings (Table 2).

Discussion

This study has demonstrated that three of the four subscales in the Danish version of the RIPLS make it an appropriate tool for assessing attitudes towards shared learning among students and professionals in the health services. The analysis showed strong internal consistency within the three subscales of Teamwork and Collaboration, Negative Professional Identity, and Positive Professional Identity, with factor loadings and alpha values above the recommended thresholds. However, the Roles and Responsibility subscale showed some limitations in the Danish setting, with a low alpha (0.61) and loadings that failed to meet the criterion thresholds.

The analysis of test-retest reliability revealed that the instrument is stable and reproducible, with weighted kappa scores lying between 0.27 and 0.70, indicating fair to substantial agreement, with the lowest values in Roles and Responsibility. While these results might suggest a need for a fifth subscale, as proposed by Cooper et al. [18], the factor analysis did not support this assumption. Other researchers have suggested creating a new subscale from Items 25–29 to assess patient-centredness [4, 7]. However, in our factor analysis, Item 24 correlated with Items 25–29. We also found a correlation among Items 17–22, which assess Roles and Responsibilities, thus partly matching the results reported in previous research using three subscales [1, 2] or reduced scales of 23 items [4] or 20 items [7].

The RIPLS instrument was originally developed for English students and later proved stable and reliable for use with professionals [1–3]. In our 29-item version with four subscales, used with both student and professional populations, factor loadings were above 0.40 for all but three items, and with an overall alpha of 0.89, internal consistency was good. Our results support those of the aforementioned Swedish study, while the comparison with the Japanese study (both used 19 items and four subscales) is less convincing. In our study, both Cronbach's alpha and the majority of the factor loadings were above the recommended thresholds [14, 16].

Regarding test-retest reliability, no kappa values below 0.2 were identified. We found seven values between 0.2 and 0.4, indicating fair agreement; 16 values between 0.4 and 0.6, indicating moderate agreement; and six values between 0.6 and 0.8, indicating substantial agreement. Relating these results to the four subscales, the strongest agreement was found for Positive Professional Identity (all values above 0.6), followed by Negative Professional Identity (two values between 0.4 and 0.6, one above 0.6). The results for the remaining subscales are more ambiguous: Teamwork and Collaboration (nine items) showed two values between 0.2 and 0.4 and seven between 0.4 and 0.6, while the 13 items in Roles and Responsibilities yielded five values between 0.2 and 0.4, seven between 0.4 and 0.6, and one above 0.6. A review of the literature revealed only a single study assessing the test-retest reliability of the RIPLS: McFadyen, Webster, and Maclaren (2006) used weighted kappa, with 19 items tested on 65 professionals. With the exception of Roles and Responsibilities (which had only three items), its subscales were identical to ours. McFadyen et al. reported weighted kappa values between 0.12 (Item 12) and 0.55 (Item 1), lowest for Positive Professional Identity (0.12–0.39) and Negative Professional Identity (0.31–0.55) [19]. These results differ considerably from ours, where those subscales showed the highest kappa values.

Only minor differences appeared between the test results for the student and the professional settings. Comparing subscale results for the two cohorts, we found students' mean scores to be significantly higher only for Roles and Responsibilities. Their Cronbach's alpha scores were also better than the professionals', although only negligibly so.

The full data set was also tested for ceiling and floor effects, which are defined as situations in which at least 15 % of responses give the highest or lowest possible score [20]. A ceiling effect was found for all items in Teamwork and Collaboration, Positive Professional Identity, and Negative Professional Identity; the situation was more ambiguous for Roles and Responsibilities, where there was a ceiling effect for Items 21–22 and 24–29 and a floor effect for Items 17–18. No earlier research has presented results indicating floor or ceiling effects, which could reflect the fact that, with only five response categories, each category will on average attract 20 % of responses, making the 15 % threshold overly strict. However, McFadyen, Webster, and Maclaren found that Roles and Responsibilities items had, in general, lower scores, leading them to reconsider the scoring of these items [19]. While floor or ceiling effects pose no substantial challenge to validity studies, the ceiling effects found in this study may indicate problems, as they leave limited room for future intervention studies to demonstrate improvement [14]. The ceiling effect might also reflect a possible selection bias, as responders may have had relatively positive attitudes towards interprofessional learning. Drop-out analyses would therefore be useful to assess the representativeness of the respondent case-mix.

Other studies in the field have not reported whether questionnaires were administered by surface mail or e-mail. All samples in this study were collected via an e-mail providing a link to a web-based questionnaire, a method that Looij-Jansen and De Wilde have shown does not influence results or response rates [21].

In De Vet et al.’s view, the adaptation of an instrument across cultural and linguistic boundaries basically requires a translation and a test of score equivalence to ensure comparability of scores across boundaries [14]. Comparison with the Swedish study is precluded, as this used other subscales [5]; and the Japanese study reported neither individual item scores nor subscale means [6]. Claiming that cross-cultural adaptation is successful would thus be contentious; on the other hand, the concordant findings from both groups in our study indicate that the RIPLS is a stable and reliable instrument for use in a Danish setting with professionals as well as students.

Conclusion

We have shown that the Danish version of the RIPLS is a stable and reliable instrument for use in both a student and a professional population with regard to the Teamwork and Collaboration, Negative Professional Identity, and Positive Professional Identity subscales (Items 1–16).

The remaining 13 items (17–29), used for the assessment of Roles and Responsibility, proved unsuitable as a subscale, whether with students or professionals. It remains unclear whether this divergence from previous research should be interpreted as a reflection of cultural differences. We recommend further studies to test the feasibility of dividing the Roles and Responsibility subscale.

Abbreviations

Q: question

RIPLS: Readiness for Interprofessional Learning Scale

References

  1. Parsell G, Bligh J. The development of a questionnaire to assess the readiness of health care students for interprofessional learning (RIPLS). Med Educ. 1999;33(2):95–100.

  2. McFadyen AK, et al. The Readiness for Interprofessional Learning Scale: a possible more stable sub-scale model for the original version of RIPLS. J Interprof Care. 2005;19(6):595–603.

  3. McFadyen AK, Maclaren WM, Webster VS. The Interdisciplinary Education Perception Scale (IEPS): an alternative remodelled sub-scale structure and its reliability. J Interprof Care. 2007;21(4):433–43.

  4. Reid R, et al. Validating the Readiness for Interprofessional Learning Scale (RIPLS) in the postgraduate context: are health care professionals ready for IPL? Med Educ. 2006;40(5):415–22.

  5. Lauffs M, et al. Cross-cultural adaptation of the Swedish version of the Readiness for Interprofessional Learning Scale (RIPLS). Med Educ. 2008;42(4):405–11.

  6. Tamura Y, et al. Cultural adaptation and validating a Japanese version of the Readiness for Interprofessional Learning Scale (RIPLS). J Interprof Care. 2012;26:56–63.

  7. El-Zubeir M, Rizk DE, Al-Khalil RK. Are senior UAE medical and nursing students ready for interprofessional learning? Validating the RIPL scale in a Middle Eastern context. J Interprof Care. 2006;20(6):619–32.

  8. Horsburgh M, Lamdin R, Williamson E. Multiprofessional learning: the attitudes of medical, nursing and pharmacy students to shared learning. Med Educ. 2001;35(9):876–83.

  9. Rose MA, et al. Attitudes of students in medicine, nursing, occupational, and physical therapy toward interprofessional education. J Allied Health. 2009;38(4):196–200.

  10. Goelen G, et al. Measuring the effect of interprofessional problem-based learning on the attitudes of undergraduate health care students. Med Educ. 2006;40(6):555–61.

  11. McFadyen AK, et al. Interprofessional attitudes and perceptions: results from a longitudinal controlled trial of pre-registration health and social care students in Scotland. J Interprof Care. 2010;24(5):549–64.

  12. Bradley P, Cooper S, Duncan F. A mixed-methods study of interprofessional learning of resuscitation skills. Med Educ. 2009;43(9):912–22.

  13. Curran VR, et al. A longitudinal study of the effect of an interprofessional education curriculum on student satisfaction and attitudes towards interprofessional teamwork and education. J Interprof Care. 2010;24(1):41–52.

  14. De Vet HCW, Terwee CB, Mokkink LB, Knol DL. Measurement in Medicine: Practical Guides to Biostatistics and Epidemiology. 1st ed. Cambridge: Cambridge University Press; 2011.

  15. McFadyen AK, et al. The Readiness for Interprofessional Learning Scale: a possible more stable sub-scale model for the original version of RIPLS. J Interprof Care. 2005;19(6):595–603.

  16. Nunnally JC, Bernstein IH. Psychometric Theory. 3rd ed. New York: McGraw-Hill; 1994.

  17. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159–74.

  18. Cooper H, Spencer-Dawe E, McLean E. Beginning the process of teamwork: design, implementation and evaluation of an inter-professional education intervention for first year undergraduate students. J Interprof Care. 2005;19(5):492–508.

  19. McFadyen AK, Webster VS, Maclaren WM. The test-retest reliability of a revised version of the Readiness for Interprofessional Learning Scale (RIPLS). J Interprof Care. 2006;20(6):633–9.

  20. Terwee CB, et al. Quality criteria were proposed for measurement properties of health status questionnaires. J Clin Epidemiol. 2007;60:34–42.

  21. Looij-Jansen PM, De Wilde EJ. Comparison of web-based versus paper-and-pencil self-administered questionnaire: effects on health indicators in Dutch adolescents. Health Serv Res. 2008;43(5):1708–21.


Acknowledgement

We gratefully acknowledge the translation work done by Charlotte Horsted and Charlotte Bruun Pedersen, the literature review by Christina Richard Petersen, and the statistical review by Lars Korsholm. We would also like to acknowledge the support of Hospital Lillebaelt; Department of Orthopaedic Surgery, Kolding Hospital; University of Southern Denmark, Odense; University College Lillebaelt, Vejle; University College West, Esbjerg, and VIA University College, Aarhus.

Author information


Corresponding author

Correspondence to Birgitte Nørgaard.

Additional information

Competing interests

The authors declare that they have no competing interests and no financial conflicts of interest.

Authors’ contributions

BN collected the data from health-care professionals, carried out the analyses, and drafted the manuscript. ED participated in data collection from health-care students and helped draft the manuscript. JS participated in data collection from health-care students and helped draft the manuscript. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Nørgaard, B., Draborg, E. & Sørensen, J. Adaptation and reliability of the Readiness for Interprofessional Learning Scale in a Danish student and health professional setting. BMC Med Educ 16, 60 (2016). https://doi.org/10.1186/s12909-016-0591-7
