
Medical artificial intelligence readiness scale for medical students (MAIRS-MS) – development, validity and reliability study

Abstract

Background

It is unlikely that applications of artificial intelligence (AI) will completely replace physicians. However, it is very likely that AI applications will take over many of their roles and generate new tasks in medical care. To be ready for these new roles and tasks, medical students and physicians will need to understand the fundamentals of AI and data science, mathematical concepts, and related ethical and medico-legal issues in addition to standard medical knowledge. Nevertheless, no valid and reliable instrument is available in the literature to measure medical AI readiness. In this study, we describe the development of a valid and reliable psychometric measurement tool for assessing medical students’ perceived readiness for AI technologies and their applications in medicine.

Methods

To define the competencies medical students require for AI, the opinions of a diverse set of experts were obtained by a qualitative method and used as a theoretical framework while creating the item pool of the scale. Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA) were then applied.

Results

A total of 568 medical students during the EFA phase and 329 medical students during the CFA phase, enrolled in two different public universities in Turkey, participated in this study. The initial pool of 27 items was finalized as a 22-item scale with a four-factor structure (cognition, ability, vision, and ethics), which explained 50.9% of the cumulative variance in the EFA. Cronbach’s alpha reliability coefficient was 0.87. CFA indicated an appropriate fit of the four-factor model (χ2/df = 3.81, RMSEA = 0.094, SRMR = 0.057, CFI = 0.938, and NNFI (TLI) = 0.928). These values showed that the four-factor model has construct validity.

Conclusions

The newly developed Medical Artificial Intelligence Readiness Scale for Medical Students (MAIRS-MS) was found to be a valid and reliable tool for evaluating and monitoring medical students’ perceived readiness for AI technologies and applications.

Medical schools may use MAIRS-MS to add ‘a physician training perspective that is compatible with AI in medicine’ to their curricula. Medical and health science education institutions could also benefit from this scale as a valuable curriculum development tool, both for assessing learner needs and for measuring participants’ perceived readiness at the end of a course.


Background

Information processing technologies, once used to assist humanity with numerical calculations, can now instantaneously process data that is too complex for the human brain, in parallel with the geometric increase in their capacities. In addition to banking, manufacturing, agriculture, transportation, education, psychology, etc., artificial intelligence (AI) has over the past decades begun to influence the healthcare field. Studies on AI in medicine cover a wide range of applications: collecting daily health data, interpreting those data, medical imaging (e.g., radiology and pathology), providing supportive information in therapeutic and surgical procedures, and warning the patient and related personnel when necessary. Since one of the main application areas of AI is medicine and the health sciences, it is an inevitable necessity for vocational education in this field to adapt to AI-related developments [1].

Technology offers new solutions to improve healthcare quality and facilitate access to it. Computer-assisted systems have been producing outputs similar to the human brain in healthcare services since the early 1970s [2]. Nowadays, many health conditions such as eye diseases, pneumonia, and breast and skin cancers can be accurately detected by rapidly analyzing medical images with AI applications [3,4,5,6,7]. In addition, AI applications can detect coronary heart disease by analyzing echocardiograms [8], detect psychotic events and neurological diseases such as Parkinson’s from speech patterns [9], facilitate the diagnosis of polyps and neoplasms in the gastrointestinal system [10], and perform certain procedural tasks such as knot tying during robotic surgery [11]. AI also has the potential to aid in the early detection of infectious disease outbreaks [12] and their sources, such as water contamination, to protect public health [13]. Such AI applications might play an important role in reducing the patient care burden on many healthcare professionals or pave the way for reaching more patients [3].

In this context, employing AI in healthcare applications has generated great interest in the last few years [14], and it can be conjectured that AI-based healthcare/medical applications can help medical professionals diagnose more reliably, improve treatment results, reduce malpractice risks, and treat more patients [15]. Keeping current healthcare conditions and advancements in view, it can be assumed that almost every type of clinician will be using AI technology for various purposes in the near future [3]. Considering all these assumptions, medical education, which is confronting the potential and challenges of emerging AI technologies, will itself be at the center of the search for a solution [16]. Educational research is also needed to figure out how AI can best impact medical education [17]. Learning the fundamentals of AI will help students grasp the effects of AI on daily medical procedures. However, medical students should be taught that the promises of AI are limited and that medical procedures are not simply statistical and procedural [2]. For instance, it has been suggested that medical students should have prior knowledge of clinical AI systems and statistical modelling methods in order to test innovative AI technologies [18].

Medical artificial intelligence readiness

The Merriam-Webster dictionary defines readiness as “the quality or state of being ready”. In the educational context, readiness is considered an indispensable component of the teaching and learning process [19]. The emergence of a new behavior change in education depends on the student’s level of readiness. For this reason, a student must have the cognitive, affective, and psychomotor behaviors that are necessary for the acquisition of the new behavior [20]. Since education is a behavioral change process, measuring readiness at the beginning of the process helps identify where to start the training [21]. Measuring the level of readiness makes it possible, from the first day, to provide guidance in accordance with the individual’s characteristics, to examine the individual’s needs, and to make plans, programs, and preparations accordingly. Keeping these facts in view, defining medical artificial intelligence readiness will serve as a guide for work on this issue.

We propose that medical artificial intelligence readiness is the healthcare provider’s state of preparedness, in knowledge, skills, and attitudes, to utilize healthcare AI applications, in combination with their own professional knowledge, while delivering prevention, diagnosis, treatment, and rehabilitation services.

Considering the global AI boom, it is expected that AI will be one of the main elements of medical education in the coming years [22]. Future physicians are expected to be able to objectively analyze the use of AI systems, consider the discrepancies between algorithms generated for medical tasks, better understand AI, and thereby become educated users [23]. The measurement of medical students’ perceived medical artificial intelligence readiness is therefore important to guide various educational design and development processes such as curriculum development, instructional design, and needs analysis. Although some researchers have tried to capture the current AI knowledge and attitudes of medical students [14, 24, 25], to the best of our knowledge there is no published medical artificial intelligence readiness scale. In the present article we describe the development of a reliable scale for measuring the perceived medical artificial intelligence readiness of medical students and test its validity.

Method

Research design

This study was dedicated to the development of a psychometric scale, followed by its validity and reliability studies, using a sequential exploratory mixed method [26]. Following this method, the main construct to be measured in this research was determined to be the perceived medical artificial intelligence readiness of medical students. An item pool was generated through an extensive literature search and expert opinions. The item format chosen was a Likert scale, with response options expressing various levels of agreement with each item, a format frequently preferred in similar studies. The generated items were reviewed by field experts and the initial scale was developed. The developed scale was evaluated for validity and reliability using Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA), and the final version was established as a reliable medical AI readiness scale (Fig. 1) [27].

Fig. 1 Phases and steps involved in perceived medical AI readiness scale development and its validation

Participants

Medical students are considered an appropriate sample as they are a relatively homogeneous group, given Turkey’s central university admission examination and student selection criteria. The data were collected from undergraduate medical students enrolled in two public universities in Turkey. The study was carried out with 568 participants from Ege University (EU) during the exploratory factor analysis (EFA) phase and 329 participants from Manisa Celal Bayar University (MCBU) during the confirmatory factor analysis (CFA) phase. Medical students were reached through convenience sampling via students’ classmate WhatsApp communication groups.

Data collection and analysis

Both the EFA and CFA data collections were performed in June 2020, at EU and MCBU, respectively. A Microsoft Forms-based online survey questionnaire (in Turkish) was sent to all participants via WhatsApp communication groups. In addition to providing demographic information, the participants were asked to rate all items on a Likert-type rating scale (1 = strongly disagree to 5 = strongly agree) and submitted their responses through the electronic survey form.

The quantitative data were analyzed using descriptive statistics. The factor structure of the Medical Artificial Intelligence Readiness Scale for Medical Students (MAIRS-MS) was evaluated by principal component analysis followed by varimax rotation, by which the scale’s structural validity was assessed. MAIRS-MS factors were retained according to eigenvalues greater than 1.0 and the scree-plot test, with a Kaiser-Meyer-Olkin value of 0.6 or above required for sampling adequacy. The internal consistency of MAIRS-MS was evaluated by Cronbach’s alpha. The statistical analysis was performed using IBM SPSS Statistics v21, Mplus 8.5, and R-4.0.3. The confidence interval (CI) was set at 99%, and p < 0.01 was considered statistically significant.
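The internal-consistency statistic used throughout this study, Cronbach’s alpha, can be illustrated with a minimal dependency-free sketch (the study itself used SPSS; the data and function name below are hypothetical). Alpha is k/(k−1) times one minus the ratio of the summed item variances to the variance of the total score:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns.

    items: list of equal-length lists, one list per scale item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
    Population variances are used; the (n/(n-1)) correction cancels
    in the ratio, so sample variances would give the same alpha.
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical 5-point Likert responses (4 respondents x 3 items).
# Identical rating patterns across items give perfect consistency:
responses = [
    [5, 4, 2, 1],  # item 1
    [5, 4, 2, 1],  # item 2
    [5, 4, 2, 1],  # item 3
]
print(round(cronbach_alpha(responses), 3))  # -> 1.0
```

In real data the item columns differ, pulling alpha below 1; the 0.877 reported for the full scale indicates strong but not redundant item agreement.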

Results

A total of 568 and 329 responses were received for the EFA and CFA, respectively, of which 544 and 321 were valid (24 and 8 responses, respectively, were excluded due to missing values). The demographic characteristics of the participants included in the factor analysis are summarized in Table 1.

Table 1 Demographic characteristics of participants (nEFA = 544, nCFA = 321)

Validity

Item generation

We sought the opinion of a diverse set of experts involved in either using or developing AI in healthcare: (a) healthcare professionals/academics; (b) computer and data science professionals/academics; (c) law and ethics professionals/academics; and (d) medical students. A purposeful snowball sampling method was used to identify and complete the expert group. A total of 94 participants comprised the expert panel. An online survey questionnaire was sent via email. In addition to providing demographic information, all participants were asked to list all competencies that would enable medical students to be ready for artificial intelligence technologies and their possible applications in medicine. Seventy-five (79.8%) expert panel members submitted a total of 220 phrases or sentences. These inputs were reviewed and revised by the researchers in terms of content and wording. Items covering the same or similar content were combined, yielding a list of 41 initial items.

This initial item list was sent to two experts involved in using and developing AI tools/techniques in healthcare (one medical academic professional and one computer and data science academic professional), and their qualitative opinions were requested. The review suggested combining items, omitting items, and changing wording. These suggestions were incorporated into the list, resulting in a scale of 27 items.

Face and content validity

The newly developed AI readiness scale was then sent to seven experts (i.e., four field experts, two psychometricians, and a Turkish language specialist), who evaluated it for content and wording. This peer evaluation provided a critique of the items, instructions, and appearance of the new instrument. The qualitative evaluations proposed by the experts via an opinion form were examined by the researchers.

After verification by the expert panel, two medical students were asked to respond to the instrument, followed by an interview to gather their feedback on the semantic content, clarity, and readability of the items. Some minor wording changes were made according to their suggestions. The scale was then accepted as having adequate face and content validity, with 27 items retained for the EFA.

Exploratory factor analysis (EFA)

To evaluate the factor structure of the 27-item scale, we performed EFA using varimax rotation with Kaiser-Meyer-Olkin normalization. The Kaiser measure of sampling adequacy for the EFA was 0.89, and the Bartlett test of sphericity was significant, χ2(df = 231) = 3711.19, p < 0.001. In order to consider different viewpoints for the judicious retention and deletion of items, the analysis was conducted by the research team together. In the factoring process, eigenvalues were examined first. In addition, the Kaiser criterion [28] and the scree plot [29] were employed. Following these steps, the research team found that the scale was composed of a four-factor structure.
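The Kaiser criterion referred to here retains factors whose eigenvalues exceed 1.0. As a purely illustrative sketch (not the study’s actual analysis), the eigenvalues of an equicorrelated correlation matrix have a closed form, one eigenvalue of 1 + (p − 1)r and p − 1 eigenvalues of 1 − r, which makes the rule easy to demonstrate without a linear-algebra library:

```python
def kaiser_retained(p, r):
    """Eigenvalues of a p x p correlation matrix whose off-diagonal
    entries all equal r, and the subset retained by the Kaiser
    criterion (eigenvalue > 1).  p and r are hypothetical inputs."""
    eigenvalues = [1 + (p - 1) * r] + [1 - r] * (p - 1)
    retained = [ev for ev in eigenvalues if ev > 1.0]
    return eigenvalues, retained

# Hypothetical: 5 items sharing one factor, pairwise correlation 0.4
evs, kept = kaiser_retained(5, 0.4)
print([round(e, 1) for e in evs])  # -> [2.6, 0.6, 0.6, 0.6, 0.6]
print(len(kept))                   # -> 1 factor retained
```

With a single shared factor, all but the first eigenvalue fall below 1, so Kaiser’s rule keeps exactly one factor; a multi-factor structure like the present scale yields several eigenvalues above 1 instead.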

Exploratory factor analysis showed that five items either loaded on more than a single factor with a loading difference smaller than 0.10 or failed to load on any single factor (loading < 0.40) (Additional file 1). After omitting these five items, the four-factor structure explained 50.9% of the cumulative variance in the EFA phase. The four factors, named cognition, ability, vision, and ethics, accounted for 16.60, 14.69, 10.65 and 9.05% of the explained variance, respectively. All communalities were higher than 0.26. The rotated factor matrix is presented in Table 2.

Table 2 Rotated factor matrix
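The item-retention rules described in the preceding paragraph, dropping any item that loads below 0.40 on every factor or whose two highest loadings differ by less than 0.10, can be expressed as a small filter. The loading vectors below are hypothetical examples, not values from Table 2:

```python
def retain_item(loadings, min_loading=0.40, min_gap=0.10):
    """EFA retention check for one item's factor loadings:
    the item must load at least 0.40 on some factor, and its two
    highest absolute loadings must differ by at least 0.10
    (i.e., no strong cross-loading)."""
    top, second = sorted((abs(l) for l in loadings), reverse=True)[:2]
    return top >= min_loading and (top - second) >= min_gap

# Hypothetical loadings on the four factors:
print(retain_item([0.72, 0.18, 0.05, 0.11]))  # True: one clean loading
print(retain_item([0.45, 0.41, 0.10, 0.02]))  # False: cross-loading gap < 0.10
print(retain_item([0.35, 0.20, 0.15, 0.10]))  # False: no loading >= 0.40
```

Applying both rules to all 27 items flags the five items that were omitted, leaving the 22-item final scale.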

The frequencies of all responses were reviewed for outliers and non-normality. The EFA responses showed acceptable skewness (0.040) and kurtosis (− 0.172) values, and since both coefficients fell within the threshold values of ±3, the scale means can be considered normally distributed [30].
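The ±3 skewness/kurtosis rule of thumb applied above rests on plain moment calculations, which a short sketch can reproduce (the sample values below are hypothetical, not the study data):

```python
def skew_kurtosis(x):
    """Moment-based skewness and excess kurtosis of a sample."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n  # 2nd central moment
    m3 = sum((v - mean) ** 3 for v in x) / n  # 3rd central moment
    m4 = sum((v - mean) ** 4 for v in x) / n  # 4th central moment
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3

# Hypothetical, symmetric sample of per-respondent scale means:
scores = [2.0, 2.5, 3.0, 3.0, 3.5, 4.0]
skew, kurt = skew_kurtosis(scores)
assert -3 <= skew <= 3 and -3 <= kurt <= 3  # the rule of thumb used here
print(round(skew, 3))  # -> 0.0 for a symmetric sample
```

Values such as the reported 0.040 (skewness) and −0.172 (kurtosis) sit comfortably inside the ±3 band, supporting the normality assumption.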

Confirmatory factor analysis (CFA)

Before the CFA, the data were analyzed, and the CFA responses showed acceptable skewness (− 0.705) and kurtosis (1.228) values, confirming that the scale means were normally distributed [30]. For the CFA, the data were analyzed using Mplus software. Since the data were ordinal in nature, weighted least square mean and variance adjusted (WLSMV) estimation was used. To further improve model fit, three error covariances with statistical and contextual justification were added to the model with the help of modification indices (Additional file 2). Since structural validity was obtained with the only model tested, no new data were collected.

When the fit indices of the model tested with CFA were examined, the Chi-square value (χ2(df = 200) = 762.203, p < 0.001) was significant. The calculated χ2/df ratio of 3.811 indicated an acceptable fit. The other model fit values (RMSEA = 0.094, SRMR = 0.057, CFI = 0.938, NNFI = 0.928) were all within the acceptable fit interval, as summarized in Table 3.

Table 3 Measures of model fit for the CFA model
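The relative chi-square and RMSEA reported above can be reproduced from the chi-square statistic, its degrees of freedom, and the CFA sample size, assuming the common RMSEA formula √((χ2 − df)/(df·(N − 1))); note that some software uses N rather than N − 1 in the denominator:

```python
from math import sqrt

chi2, df, n = 762.203, 200, 321  # values reported for the CFA model

ratio = chi2 / df  # relative chi-square
# RMSEA, assuming the common sample-size-adjusted formula:
rmsea = sqrt(max(chi2 - df, 0) / (df * (n - 1)))

print(round(ratio, 3))  # -> 3.811
print(round(rmsea, 3))  # -> 0.094
```

Both values match the paper’s Table 3, which is a useful sanity check that the fit indices are internally consistent with the reported chi-square and sample size.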

Measurement invariance by gender

The CFA model (Additional file 2) was tested for gender invariance. We followed the guidelines of Millsap and Yun-Tein [35] and completed the analyses using the semTools package [36] with WLSMV estimation and the Satorra and Bentler [37] chi-square difference test. Strict invariance (Δχ2 = 26.59, p = 0.22) was evident, which indicates that gender-based differences in total scores are not caused by a defect in the scale (Table 4) [38].

Table 4 MAIRS-MS Measurement invariance by gender

Reliability

Cronbach’s alpha reliability

The internal consistency coefficients of all factors were found acceptable [27, 39]. The Cronbach’s alpha coefficient for the whole scale was 0.877, and the coefficients were 0.830, 0.770, 0.723, and 0.632 for the cognition, ability, vision, and ethics factors, respectively. Item loadings ranged from 0.419 to 0.814, and factor reliabilities ranged from 0.632 to 0.830 (Table 5). Furthermore, the correlations between the factors were significant (p < 0.01); the factors were correlated with each other as summarized in Table 5.

Table 5 Descriptive statistics and reliability of factors

Item discrimination

For the item discrimination values, the multidimensional item response theory (mirt) package [40] for R was used to estimate a graded response model separately for each dimension. The item discrimination values were consistent with the high item loadings in the CFA, with an average of 2.41 (Table 6).

Table 6 Scale item discrimination parameters
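In a graded response model, the discrimination parameter scales how sharply the cumulative response curves rise with the latent trait. The sketch below shows Samejima’s model for one Likert item; the threshold values are hypothetical, and only the discrimination 2.41 echoes the scale’s reported average:

```python
from math import exp

def grm_probs(theta, a, thresholds):
    """Category probabilities under Samejima's graded response model
    for one item.  a = discrimination; thresholds = ordered difficulty
    parameters b_1 < ... < b_{K-1} for K response categories.
    P(X >= k) = 1 / (1 + exp(-a * (theta - b_k))); each category's
    probability is the difference of adjacent cumulative curves."""
    cum = [1.0] + [1 / (1 + exp(-a * (theta - b))) for b in thresholds] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]

# Hypothetical 5-point Likert item, discrimination a = 2.41,
# thresholds spread around the average trait level:
probs = grm_probs(theta=0.0, a=2.41, thresholds=[-1.5, -0.5, 0.5, 1.5])
print([round(p, 3) for p in probs])  # five category probabilities, summing to 1
```

Higher discrimination concentrates probability mass in fewer categories at any given trait level, which is why discriminations near 2.41 correspond to the high CFA loadings noted above.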

Discussion

Artificial intelligence is leading towards a new era that will reshape medicine and healthcare delivery services in the coming years [2]. Although AI is not anticipated to replace the role of physicians, it will definitely take over many tasks belonging to physicians, bringing healthcare services to a better level at a faster pace. This creates new tasks and new learning requirements for physicians, which will assist in reshaping their professional identities [2, 23, 41]. Learning the basic logic, the pros and cons, and the applications of machine learning at medical school will prepare future physicians for the data science revolution and for AI competencies such as making data-based decisions [42,43,44]. In this way, students and medical professionals can also acquire adequate skills to participate in the upcoming AI ecosystem [45]. In order to meet these requirements, physicians will need sufficient knowledge of mathematical concepts, the foundations of artificial intelligence, machine learning, data science, and related ethical and legal issues, and they must be able to relate them to the context of medicine [46, 47].

Currently, medical education faces a pedagogical problem concerning how, and with what content, AI should be introduced into medical curricula, given its still controversial position in health services. Curricula should not introduce AI to students merely as a tool in an algorithmic way; instead, they should be based on an approach that regulates and enriches students’ perceptions of clinical problems. Including the principles of AI in formal medical education will help students understand perceptual experiences in practice and the complex clinical reasoning processes that involve AI outputs used in medicine [2]. On the other hand, AI cannot yet offer a solution for direct communication, empathy, and human touch, which have a very important place in healthcare. Based on these important differences, ethical debates on the relationship between the medical profession and artificial intelligence should definitely be included in the curriculum.

AI can be considered a key factor in the identity construction of the physicians of the future [2]. As artificial intelligence continues to redesign the medical field, it will be imperative for physicians to know foundational artificial intelligence algorithms and techniques [44]. To attain maximal efficiency from AI-based technologies in medical practice and to protect the professional identity of the medical profession, curricular developments should be made in medical school programs so that the basic components and algorithms of artificial intelligence are better understood [47, 48]. For instance, it has been reported that medical students who are trained in AI feel more secure about working with AI in the future than students who are not [14]. Another study suggests that prior knowledge of and readiness for AI will become a crucial skill for physicians in interpreting medical literature, assessing possible clinical software developments, and formulating research questions [44].

This study developed the MAIRS-MS, a scale aiming to measure the medical AI readiness of medical students, and evaluated its reliability and validity. The overall results showed good reliability and validity of the MAIRS-MS in medical students. The scale consists of 22 items, and EFA revealed that the MAIRS-MS has four factors: cognition, ability, vision, and ethics (Additional file 3). Concurrent criterion validity could not be investigated against a criterion (gold standard) measurement, as this is the first scale developed on the subject.

The cognition factor of the readiness scale includes items that measure the participant’s cognitive readiness in terms of terminological knowledge about medical artificial intelligence applications, the logic of artificial intelligence applications, and data science. The ability factor includes items that measure the participant’s competencies in choosing the appropriate medical artificial intelligence application, using it appropriately in combination with professional knowledge, and explaining it to the patient. The vision factor includes items that measure the participant’s ability to explain the limitations, strengths, and weaknesses of medical artificial intelligence, to anticipate the opportunities and threats that may arise, and to develop ideas accordingly. The items under the ethics factor measure the participant’s adherence to legal and ethical norms and regulations while using AI technologies in healthcare services.

Despite the rigor of this original research, it has some limitations. We collected the data from two public medical schools located within the same geographic region, and thus the findings might not generalize to all public and private medical schools. Additionally, the study was conducted only in Turkey; hence the results might not be generalizable to other countries, although we expect any such discrepancy to be minor. The convenience sampling approach applied in this study may have introduced selection bias. The findings presented in this study must also be interpreted carefully in light of differences across countries and cultures.

Conclusions

To the best of our knowledge, the MAIRS-MS is the very first scale for assessing the perceived medical artificial intelligence readiness of medical students. Although this new scale was developed for medical students, we argue that it could also be used to measure physicians’ medical AI readiness with the necessary modifications. However, in the absence of validity and reliability studies, the generalization of our findings to physicians and other healthcare professionals is restricted. Further psychometric studies are warranted to replicate the results of this study with physicians, residents, and other healthcare professionals. Studies in specific specialties (e.g., radiology) that are pioneering AI applications in healthcare would contribute to the improvement of the MAIRS-MS. In this way, a set of measurement tools can be produced to assess readiness in different healthcare fields and assist the coming AI transformation.

The MAIRS-MS developed in this study is, to our knowledge, the first of its kind worldwide, and it may contribute to research on assessing medical students’ perceived artificial intelligence readiness. The MAIRS-MS may also benefit medical schools, institutions, faculty members, and instructional designers as a valuable curriculum development tool through its learner needs assessment capability. In addition, it could be useful for measuring the effectiveness of courses or trainings in AI-related curricula in medical schools. Another noteworthy point is that a definition of medical artificial intelligence readiness is introduced for the first time in this article.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

MAIRS-MS:

Medical artificial intelligence readiness scale for medical students

AI:

Artificial intelligence

EFA:

Exploratory factor analysis

CFA:

Confirmatory factor analysis

EU:

Ege University

MCBU:

Manisa Celal Bayar University

CI:

Confidence interval

References

1. Wartman SA, Donald CC. Medical education must move from the information age to the age of artificial intelligence. Acad Med. 2018;93:1107–9.

2. van der Niet AG, Bleakley A. Where medical education meets artificial intelligence: ‘Does technology care?’. Med Educ. 2020:1–7. https://doi.org/10.1111/medu.14131.

3. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25:44–56. https://doi.org/10.1038/s41591-018-0300-7.

4. Hainc N, Federau C, Stieltjes B, Blatow M, Bink A, Stippich C. The bright, artificial intelligence-augmented future of neuroimaging reading. Front Neurol. 2017;8:489. https://doi.org/10.3389/fneur.2017.00489.

5. Wang D, Khosla A, Gargeya R, Irshad H, Beck AH. Deep learning for identifying metastatic breast cancer. 2016:1–6. http://arxiv.org/abs/1606.05718.

6. Kelly M, Ellaway R, Scherpbier A, King N, Dornan T. Body pedagogics: embodied learning for the health professions. Med Educ. 2019;53:967–77.

7. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017. https://doi.org/10.1038/nature21056.

8. Siegersma KR, Leiner T, Chew DP, Appelman Y, Hofstra L, Verjans JW. Artificial intelligence in cardiovascular imaging: state of the art and implications for the imaging cardiologist. Neth Heart J. 2019;27:403–13.

9. Bedi G, Carrillo F, Cecchi GA, Slezak DF, Sigman M, Mota NB, et al. Automated analysis of free speech predicts psychosis onset in high-risk youths. NPJ Schizophr. 2015;1. https://doi.org/10.1038/npjschz.2015.30.

10. Jin H-Y, Man Z, Bing H. Techniques to integrate artificial intelligence systems with medical information in gastroenterology. Artif Intell Gastrointest Endosc. 2020;1:19–27.

11. Kassahun Y, Yu B, Tibebu AT, Stoyanov D, Giannarou S, Metzen JH, et al. Erratum to: Surgical robotics beyond enhanced dexterity instrumentation: a survey of machine learning techniques and their role in intelligent and autonomous surgical actions. Int J Comput Assist Radiol Surg. 2016;11:847. https://doi.org/10.1007/s11548-015-1305-z.

12. Jacobsmeyer B. Focus: tracking down an epidemic’s source. Physics. 2012;5:89.

13. Doshi R, Falzon D, Thomas BV, Temesgen Z, Sadasivan L, Migliori GB, et al. Tuberculosis control, and the where and why of artificial intelligence. ERJ Open Res. 2017;3:1–5.

14. Sit C, Srinivasan R, Amlani A, Muthuswamy K, Azam A, Monzon L, et al. Attitudes and perceptions of UK medical students towards artificial intelligence and radiology: a multicentre survey. Insights Imaging. 2020;11:14. https://doi.org/10.1186/s13244-019-0830-7.

15. Meskó B, Hetényi G, Gyorffy Z. Will artificial intelligence solve the human resource crisis in healthcare? BMC Health Serv Res. 2018;18:1–5.

16. Chen J. Playing to our human strengths to prepare medical students for the future. Korean J Med Educ. 2017;29:193–7.

17. Carin L. On artificial intelligence and deep learning within medical education. Acad Med. 2020;95(11S):S10–1.

18. Sapci AH, Sapci HA. Artificial intelligence education and tools for medical and health informatics students: systematic review. JMIR Med Educ. 2020;6:e19285.

19. Bloom BS. Human characteristics and school learning. New York: McGraw-Hill; 1976.

20. Başar E. Genel Öğretim Yöntemleri [General Teaching Methods]. Samsun: Kardeşler Ofset ve Matbaa; 2001.

21. Harman G, Çelikler D. Eğitimde hazır bulunuşluğun önemi üzerine bir derleme çalışması [A review study on the importance of readiness in education]. J Res Educ Teach. 2012;3:2146–9199.

22. Goh P-S, Sandars J. A vision of the use of technology in medical education after the COVID-19 pandemic. MedEdPublish. 2020;9:1–8.

23. Brouillette M. AI added to the curriculum for doctors-to-be. Nat Med. 2019;25:1808–9.

24. Gong B, Nugent JP, Guest W, Parker W, Chang PJ, Khosa F, et al. Influence of artificial intelligence on Canadian medical students’ preference for radiology specialty: a national survey study. Acad Radiol. 2019;26:566–77.

25. Pinto dos Santos D, Giese D, Brodehl S, Chon SH, Staab W, Kleinert R, et al. Medical students’ attitude towards artificial intelligence: a multicentre survey. Eur Radiol. 2019;29:1640–6. https://doi.org/10.1007/s00330-018-5601-1.

26. Creswell JW, Creswell JD. Research design: qualitative, quantitative, and mixed methods approaches. 5th ed. Thousand Oaks: SAGE Publications; 2018.

27. DeVellis RF. Scale development: theory and applications. Newbury Park: Sage Publications; 2016.

28. Kaiser HF. The varimax criterion for analytic rotation in factor analysis. Psychometrika. 1958;23:187–200.

29. Cattell RB. The scree test for the number of factors. Multivariate Behav Res. 1966;1:245–76.

30. George D, Mallery M. Using SPSS for Windows step by step: a simple guide and reference. 2003.

31. Hooper D, Coughlan J, Mullen MR. Structural equation modelling: guidelines for determining model fit. Electron J Bus Res Methods. 2008;6:53–60. https://doi.org/10.21427/D79B73.

32. Munro BH. Statistical methods for health care research. Lippincott Williams & Wilkins; 2005.

33. Tabachnick BG, Fidell LS, Ullman JB. Using multivariate statistics. Boston: Pearson; 2007.

34. Kline RB. Principles and practice of structural equation modeling. New York: Guilford Publications; 2015.

35. Millsap RE, Yun-Tein J. Assessing factorial invariance in ordered-categorical measures. Multivariate Behav Res. 2004;39:479–515. https://doi.org/10.1207/S15327906MBR3903_4.

36. Jorgensen TD, Pornprasertmanit S, Schoemann AM, Rosseel Y, Miller P, Quick C, et al. semTools: useful tools for structural equation modeling. R package version 0.5–1. 2018.

37. Satorra A, Bentler PM. Ensuring positiveness of the scaled difference chi-square test statistic. Psychometrika. 2010;75:243–8.

38. Wu AD, Li Z, Zumbo BD. Decoding the meaning of factorial invariance and updating the practice of multi-group confirmatory factor analysis: a demonstration with TIMSS data. Pract Assess Res Eval. 2007;12:3. https://doi.org/10.7275/mhqa-cd89.

39. Pallant J. SPSS survival manual: a step by step guide to data analysis using IBM SPSS. 2016.

40. Chalmers RP. mirt: a multidimensional item response theory package for the R environment. J Stat Softw. 2012;48. https://doi.org/10.18637/jss.v048.i06.

41. Masters K. Artificial intelligence in medical education. Med Teach. 2019;41:976–80. https://doi.org/10.1080/0142159X.2019.1595557.

42. Kolachalama VB, Garg PS. Machine learning and medical education. NPJ Digit Med. 2018;1:2–4. https://doi.org/10.1038/s41746-018-0061-1.

43. Long D, Magerko B. What is AI literacy? Competencies and design considerations. In: Proceedings of the CHI Conference on Human Factors in Computing Systems. 2020. p. 1–16.

  44. 44.

    Lindqwister AL, Hassanpour S, Lewis PJ, & Sin JM.  AI-RADS: An Artificial Intelligence Curriculum for Residents. Academic radiology. 2020;S1076-6332(20)30556-0. Advance online publication. https://doi.org/10.1016/j.acra.2020.09.017.

  45. 45.

    Bhardwaj D. Artificial intelligence: patient care and health Professional’s education. J Clin Diagnostic Res. 2019;13:3–4.

    Article  Google Scholar 

  46. 46.

    Hamdy H. Medical College of the Future: from informative to transformative. Med Teach. 2018;40:986–9.

    Article  Google Scholar 

  47. 47.

    Chan KS, Zary N. Applications and challenges of implementing artificial intelligence in medical education: integrative review. J Med Internet Res. 2019;21. https://doi.org/10.2196/13930.

  48. 48.

    Pols J. Good relations with technology: empirical ethics and aesthetics in care. Nurs Philos. 2017;18:1–7.

    Article  Google Scholar 

Acknowledgements

The authors would like to thank Süheyla Rahman for her great support in data collection, Büşra Yen and Ruchan Umay Tümer for sharing their valuable opinions from the participants’ perspective, İlyas Yazar for his contributions on the Turkish language, Elif Buğra Kuzu Demir and Ahmet Acar for their feedback on scale development, Melih Bulut and Süleyman Sevinç for sharing their enlightening opinions on the scale items, Shafiul Haque and Onur Dönmez for their valuable feedback on the draft of this paper, and Yasemin Kahyaoğlu Erdoğmuş for her feedback on the statistical analysis. We are also thankful to our participants in this study from EU and MCBU for their time, commitment, and willingness. Finally, we would like to extend our deep and sincere gratitude to Burak Aydın for his tremendous support on the statistical analysis and revisions of this article.

Funding

The author(s) received no financial support for the research.

Author information

Affiliations

Authors

Contributions

OK, SAC and KD involved in research design, OK, SAC and KD developed the data collection tool, OK and KD collected the data, OK, SAC and KD contributed to the interpretation of the results, KD and SAC took the lead in writing the manuscript. All authors provided critical feedback and helped shape the research, analysis, and manuscript. All authors reviewed and approved the manuscript.

Corresponding author

Correspondence to S. Ayhan Çalışkan.

Ethics declarations

Ethics approval and consent to participate

Data collection for the present study was conducted after approval by the Ege University Scientific Research and Publication Ethics Board, dated 28 November 2019 (Ref. 452). We confirm that all methods used in this study were carried out in accordance with relevant guidelines and regulations. Participation of students was completely voluntary, and informed consent was obtained from all participants or, for participants under 18, from a parent and/or legal guardian.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1:

Five items that loaded on more than one factor and that were subsequently discarded.

Additional file 2:

Medical Artificial Intelligence Readiness Scale for Medical Students (MAIRS-MS): Confirmatory Factor Analysis graphic.

Additional file 3:

Medical Artificial Intelligence Readiness Scale for Medical Students (MAIRS-MS).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Karaca, O., Çalışkan, S.A. & Demir, K. Medical artificial intelligence readiness scale for medical students (MAIRS-MS) – development, validity and reliability study. BMC Med Educ 21, 112 (2021). https://doi.org/10.1186/s12909-021-02546-6

Keywords

  • Artificial intelligence
  • Medicine
  • Readiness
  • Medical students
  • Medical education
  • Scale development
  • Validity and reliability