
Psychometric evaluation of Persian version of medical artificial intelligence readiness scale for medical students

Abstract

Background

The advancement of artificial intelligence in medicine and its worldwide implementation will make it one of the main elements of medical education in the coming years. This study aimed to translate and psychometrically evaluate the Persian version of the medical artificial intelligence readiness scale for medical students.

Methods

The questionnaire was translated according to a backward-forward translation procedure. Reliability was assessed by calculating Cronbach’s alpha coefficient. Confirmatory Factor Analysis was conducted on 302 medical students. Content validity was evaluated using the Content Validity Index and Content Validity Ratio.

Results

The Cronbach’s alpha coefficient for the whole scale was found to be 0.94. The Content Validity Index was 0.92 and the Content Validity Ratio was 0.75. Confirmatory factor analysis revealed a fair fit for four factors: cognition, ability, vision, and ethics.

Conclusion

The Persian version of the medical artificial intelligence readiness scale for medical students, comprising four factors (cognition, ability, vision, and ethics), appears to be a reasonably valid and reliable instrument for evaluating medical artificial intelligence readiness.


Introduction

The application of artificial intelligence (AI) in medicine has grown significantly in recent years and holds great potential for improving healthcare outcomes [1]. Studies on AI in medicine and the health sciences have indicated that AI could be of great help to physicians in making more confident diagnoses, improving treatment outcomes, and mitigating medical malpractice [2,3,4,5,6]. For example, artificial intelligence facilitates the diagnosis of coronary heart disease through echocardiogram analysis [7], the diagnosis of psychotic events and neurological diseases such as Parkinson’s from speech patterns [8], and the diagnosis of polyps and neoplasms in the gastrointestinal system [9]. It even performs certain clinical procedural tasks, such as knot tying during robotic surgery [10]. In the near future, artificial intelligence will play a more prominent role in caring for patients and providing them with medical services [3].

Considering the global expansion of artificial intelligence in medicine, it is expected to be one of the main components of medical education in the forthcoming years [1]. Sit et al. (2020) stated that medical students who were trained for artificial intelligence felt more secure in working with artificial intelligence in the future than students who were not [1]. Lindqwister et al. (2020) pointed out that prior knowledge and AI readiness become critical for physicians to perform clinical reasoning better [11]. Liu et al. (2022) found that formal training and current resources for AI are limited in most US medical schools, and there is a definite knowledge gap in AI education in contemporary medical education in the US [12].

Therefore, teaching the principles of artificial intelligence and its application to medical students -as future physicians who would benefit from this technology in their daily clinical practice- becomes very important [13].

Cognitive constructivism emphasizes that learning must occur according to a student’s stage of cognitive development [14]. Therefore, before instruction begins, it is important to know the entering behavior of students [15]. Entering behavior describes the prerequisite knowledge, attitudes, or skills relevant to the learning that the student already possesses; it refers to what students have previously learned, their ability, development, and motivational state [16]. Based on pre-assessment, one of the basic concepts in education, students’ entering behavior should be measured before new instruction is provided [17]. This is of utmost importance because curriculum planners and faculty design lesson content around learners’ needs once they understand students’ readiness levels in the basic concepts of the subject [18]. In the field of education, preparation is considered an essential part of the teaching and learning process [19]. The emergence of a new behavior change in education depends on students’ level of preparation, so measuring readiness makes it possible to provide education matched to students’ needs from the outset [20]. Therefore, faculty and curriculum developers must take into account students’ previous knowledge of and background in artificial intelligence in medicine, which affects their learning.

Park et al. (2021) investigated the United States of America medical students’ perceptions of the impact of artificial intelligence on the practice of medicine. They reported that more than 75% of students pointed to the important role of artificial intelligence in the future of medicine and emphasized the need for formal teaching of artificial intelligence [21]. Yüzbaşıoğlu (2021) investigated the attitudes and perceptions of dental students towards artificial intelligence and its possible applications in this field and demonstrated students’ willingness to improve their knowledge in the AI field [22]. Gray et al. (2022) identified notable gaps in curriculum and educational resources for the use of artificial intelligence in medicine and indicated obstacles, such as a lack of governance structures and processes, resource constraints, and cultural adjustment. These researchers recommended that more studies should be conducted around the world regarding teaching physicians about artificial intelligence [23].

Considering these issues and the importance of investigating medical students’ readiness for the use of artificial intelligence in medicine, it is necessary to study this readiness to plan the next steps of artificial intelligence education in medical sciences universities in Iran. Karaca et al. (2021) designed and psychometrically evaluated a valid and reliable tool for measuring medical students’ readiness for the use of artificial intelligence in medicine. The Medical Artificial Intelligence Readiness Scale for Medical Students (MAIRS-MS) consists of 22 items in four categories: cognition (8 items), ability (8 items), vision (3 items), and ethics (3 items) [24].

Educational practices depend on aspects of each context; the setting is therefore a major factor that can affect the psychometric properties of instruments [25]. Consequently, psychometric evaluation of a questionnaire across diverse contexts strengthens its application and supports the generalizability of its results. Studying medicine in Iran is a doctorate-level degree that requires about seven years of study, research, and limited hands-on practice under the supervision of reputed professors. Holding the M.D. degree, the graduate becomes a Doctor of Medicine and can pursue studies in various specialty programs, which include five years of university study; graduates can also start professional practice in hospitals, private practices, and clinics [26]. With the current international movement toward artificial intelligence improvement, and Iranian medical universities being no exception, the ability to evaluate medical students’ readiness to use medical artificial intelligence is becoming a crucial component of medical education enhancement. The results of this evaluation are important because they guide various educational design and development processes, such as curriculum development, instructional design, and needs analysis [24].

The review of the literature demonstrated that, so far, the MAIRS-MS questionnaire has not been translated and psychometrically evaluated in the Persian language. This study aimed to translate and psychometrically evaluate the Persian version of the medical artificial intelligence readiness scale for medical students.

Methods

Study design and setting

The research was a cross-sectional study that was conducted at the Kerman University of Medical Sciences in Iran between November 2022 and January 2023.

Participants

The data were collected from 302 undergraduate medical students at Kerman University of Medical Sciences, who entered the study by census method. The inclusion criterion was being a medical student during the study period. The exclusion criterion was a questionnaire with more than 10% of questions unanswered.

Ethical consideration

The KMU’s institutional review board approved the study (No. IR.KMU.AH.REC.1401.253). The participants did not receive any incentives, and participation was voluntary. Verbal and written consent for participation was obtained based on the proposal approved by the ethics committee. The participants were also assured of the confidentiality of their information, and it was explained that the results would only be used for research objectives.

Translation process

The translation process was performed according to the backward-forward translation procedure [27]. Firstly, two English language experts separately translated the questionnaire into Persian. One of them had expertise in translating medical texts and the other had expertise in translating colloquial expressions. They were not aware of the structure of the instrument. Secondly, two medical education faculty members and two artificial intelligence specialists compared both translated versions, and made the necessary revisions in terms of ambiguous words, sentence structure, and meaning of the sentence. Thirdly, two English language experts investigated and confirmed the changes. Fourthly, the translated versions were reviewed and compared with the original instrument by two medical education faculty members and two artificial intelligence specialists in terms of conceptual, semantic, and content equivalence. The instructions for answering the questionnaire were also checked. Finally, the pre-final version was compiled.

Psychometric evaluation

Content validation

The content validity of the initial MAIRS-MS was investigated both quantitatively and qualitatively through expert opinion. Ten faculty members were recruited based on their experience with artificial intelligence and their expertise in medical education; they were selected from within Kerman University of Medical Sciences. The experts were asked to consider each item of the MAIRS-MS against the criteria of “essential,” “relevance,” “clarity,” and “simplicity.” Each item was assessed using Likert scales: a three-point scale for “essential” (1 – unessential; 2 – useful, but not essential; 3 – essential) and four-point scales for “relevance” (1 – not relevant; 2 – rather relevant; 3 – relevant; 4 – completely relevant) and “clarity” (1 – not simple; 2 – rather simple; 3 – simple; 4 – completely simple). In addition, the experts were asked to comment on the “simplicity” of each item (fluency and the use of simple, understandable words) as well as on the most appropriate placement and order of the items.

We examined content validity by computing the Content Validity Ratio (CVR) and the Content Validity Index (CVI) from the content experts’ ratings of item relevancy. With ten experts evaluating the items, the minimum acceptable CVR was 0.62 based on the Lawshe table [28]. In Waltz and Bausell’s method, the CVI for an item is the number of experts who scored 3 or 4 on the “relevancy,” “clarity,” or “simplicity” criterion divided by the total number of experts. Under this formula, an item scoring above 0.79 is retained in the questionnaire; an item with a CVI between 0.70 and 0.79 is questionable and needs correction and revision; and an item scoring below 0.70 is unacceptable and must be deleted [29]. The experts’ corrective comments about the wording of items, such as fluency, the use of simple and understandable words, and suitable word placement, were also applied.
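As an illustration, the two statistics can be sketched in a few lines of Python (a minimal sketch we wrote for clarity; the function names and example ratings are hypothetical, not the study’s data):

```python
# Sketch of the content-validity statistics described above.
# CVR (Lawshe): (n_e - N/2) / (N/2), where n_e is the number of experts
# rating the item "essential" and N is the total number of experts.
# Item-level CVI (Waltz & Bausell): proportion of experts scoring 3 or 4.

def cvr(essential_votes, n_experts):
    """Lawshe's content validity ratio for a single item."""
    return (essential_votes - n_experts / 2) / (n_experts / 2)

def item_cvi(ratings):
    """Item-level CVI: share of experts rating the item 3 or 4 on a 1-4 scale."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

# Hypothetical example with 10 experts, matching this study's panel size:
# 9 of 10 rate an item "essential", and relevance ratings are as listed.
print(cvr(9, 10))                                # 0.8, above the 0.62 cutoff
print(item_cvi([4, 4, 3, 4, 3, 4, 4, 3, 4, 2]))  # 0.9, above the 0.79 retention threshold
```

With ten raters, CVR moves in steps of 0.2, which is why the tabulated Lawshe cutoff (0.62) rather than an arbitrary round number is used.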

Face validation

Students’ opinions were used to check the face validity. In this regard, interviews were conducted with ten medical students using concurrent verbal probing and thinking aloud. The questionnaire items were examined in terms of fluency, appropriate phrasing, avoiding specialized words, and potential ambiguity.

Construct validation

The modified MAIRS-MS, revised based on content and face validation, was sent to the 302 medical students included in the study by census method. The sample size was chosen based on the recommendation, for confirmatory factor analysis, of 5–10 participants per parameter estimated in the measurement model [30]. The questionnaire was sent out two additional times, with a gap of approximately two weeks between distributions; delivery methods included email and follow-ups through social media.

A Confirmatory Factor Analysis (CFA) was performed to examine and verify the assumed four-factor structure of the MAIRS-MS using LISREL software (version 8.8). Several fit indices were computed to assess the fit of the hypothesized model to the data: a relative/normed chi-square (χ²/df) between 2.0 and 5.0, a root mean square error of approximation (RMSEA) below 0.08, a standardized root mean square residual (SRMR) below 0.08, and a Comparative Fit Index (CFI), Normed Fit Index (NFI), and Non-Normed Fit Index (NNFI) greater than 0.95 were recommended. Since it is neither necessary nor reasonable to include all indices, and following Hu and Bentler’s two-index presentation strategy, a combination rule of a CFI of 0.96 or higher together with an SRMR of 0.09 or lower was adopted herein [31]. Convergent validity was also assessed by calculating the Average Variance Extracted (AVE) and Composite Reliability (CR).
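The decision rules above can be sketched as follows (our own illustrative code, not the authors’ analysis; LISREL itself computes the indices, and this function only applies the stated cutoffs to reported values):

```python
# Apply the conventional cutoffs and Hu & Bentler's two-index rule
# (CFI >= 0.96 together with SRMR <= 0.09) to a set of fit indices.

def assess_fit(chi2_df, rmsea, srmr, cfi, nfi, nnfi):
    """Return (per-index verdicts, two-index rule verdict)."""
    verdicts = {
        "chi2/df between 2.0 and 5.0": 2.0 <= chi2_df <= 5.0,
        "RMSEA < 0.08": rmsea < 0.08,
        "SRMR < 0.08": srmr < 0.08,
        "CFI > 0.95": cfi > 0.95,
        "NFI > 0.95": nfi > 0.95,
        "NNFI > 0.95": nnfi > 0.95,
    }
    two_index_rule = cfi >= 0.96 and srmr <= 0.09
    return verdicts, two_index_rule
```

Applied to the indices reported in the Results (χ²/df = 5.5, RMSEA = 0.1, SRMR = 0.09, CFI = 0.95), several strict cutoffs are narrowly missed and the two-index rule falls just short of its CFI threshold, which matches the authors’ characterization of the fit as “fair” rather than good.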

Reliability assessment

The internal consistency of the MAIRS-MS was investigated by Cronbach’s alpha. Internal consistency of more than 0.7 was considered suitable [32].
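Cronbach’s alpha is computed from the item-by-respondent score matrix; a minimal sketch on toy data (hypothetical responses we invented, not the study’s data):

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals),
# where k is the number of items. Toy data only, for illustration.

def cronbach_alpha(rows):
    """rows: one list of item scores per respondent."""
    k = len(rows[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[i] for row in rows]) for i in range(k)]
    total_var = var([sum(row) for row in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Highly consistent toy responses yield an alpha close to 1:
responses = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 5]]
print(round(cronbach_alpha(responses), 2))  # 0.99
```

Items that vary together inflate the variance of the row totals relative to the sum of the item variances, which is what drives alpha toward 1.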

Results

Demographic data

All 10 experts completed the content validation form. Most of them (75%) were women, half were affiliated with medical education, and most were assistant professors. A total of 302 medical students participated in the construct validity investigation; most of them (70%) were interns, 23% were in the clerkship stage, and the rest were in the basic sciences stage.

Content validity

The overall CVR was 0.75, which was acceptable. Using the Waltz and Bausell method, the CVI for all items was 0.92 (relevance 0.96, clarity 0.90, and simplicity 0.90). Two items (“I can explain the AI applications used in healthcare services to the patient” and “I can explain the limitations of AI technology”), with CVR < 0.70, were removed because they were identified as vague or similar to other items.

Face validity

Based on the students’ opinions in the face validation process, all translated items were clear and accepted.

Construct validity

In the CFA, the factor loadings of the items ranged between 0.52 and 0.97; therefore, none of the items were deleted (Table 1). The standardized root mean square residual (SRMR) was 0.09 and the Comparative Fit Index (CFI) was 0.95, indicating a fair fit of the model based on Hu and Bentler’s two-index presentation strategy. The other fit indices were as follows: χ²/df = 5.5, RMSEA = 0.1, NFI = 0.94, and NNFI = 0.94.

Table 1 shows the Cronbach’s alpha coefficients, Average Variance Extracted (AVE), and Composite Reliability (CR) of the four-factor model of the Persian version of the medical artificial intelligence readiness scale for medical students (MAIRS-MS). According to this table, all subscales, as well as the whole questionnaire (α = 0.94), had appropriate internal consistency. The Persian version of MAIRS-MS also showed suitable convergent validity (AVE > 0.5, CR > 0.7, and CR > AVE).

Table 1 The factor loading, Cronbach’s alpha coefficients, Average Variance Extracted (AVE), and Composite Reliability (CR) of the four-factor model of the Persian version of the medical artificial intelligence readiness scale for medical students (MAIRS-MS)
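The convergent-validity statistics summarized in Table 1 can be sketched from standardized factor loadings as follows (the loadings below are hypothetical, chosen within the 0.52–0.97 range reported; the study’s actual subscale loadings are in Table 1):

```python
# AVE = mean of squared standardized loadings.
# CR  = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
# with error variance 1 - loading^2 for standardized loadings.

def ave(loadings):
    """Average Variance Extracted for one factor."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """Composite Reliability (construct reliability) for one factor."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

loads = [0.72, 0.80, 0.65, 0.77]  # hypothetical loadings for one subscale
print(round(ave(loads), 3), round(composite_reliability(loads), 3))  # 0.543 0.826
```

These illustrative values satisfy the criteria cited above (AVE > 0.5, CR > 0.7, and CR > AVE).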

Production of the final questionnaire

After investigating reliability and validity, the Persian version of MAIRS-MS with 20 items in four domains was finalized. These domains included “cognition” with 8 items, “ability” with 7 items, “vision” with 2 items, and “ethics” with 3 items.

Discussion

Given the critical importance of learning about artificial intelligence for medical students, the rapid advancement of this field in the medical sciences, and the lack of a valid and reliable Persian-language tool to measure medical students’ readiness for the use of artificial intelligence in medicine, the results of the present study indicate that the MAIRS-MS questionnaire can be used as a valid and reliable instrument to measure medical students’ readiness for artificial intelligence training in four domains: cognition, ability, vision, and ethics.

Despite being recently developed, this instrument has been used several times in different contexts. Laupichler et al. (2022) assessed the effect of a flipped classroom course to foster medical students’ AI literacy with a focus on medical imaging and reported that the MAIRS-MS questionnaire is a practical and appropriate tool. The results demonstrated that participation in this course led to an increase in medical students’ readiness for the use of artificial intelligence in medical imaging [33]. Gray et al. (2022) introduced this questionnaire as a valid instrument for measuring medical students’ readiness for teaching AI in medicine [23]. Aboalshamat et al. (2022) assessed levels of readiness for AI among medical and dental students and graduates in Saudi Arabia with the MAIRS-MS. They concluded that participants had low levels of AI readiness. It was recommended that artificial intelligence education should be provided from the early years of education in medicine [34]. Xuan et al. (2022) measured AI readiness among medical students using MAIRS-MS in Malaysia. They recommended that education policymakers should set up more artificial intelligence training courses to provide and introduce basic concepts of artificial intelligence, especially for general medicine students. This enables them to gain more confidence in interacting with artificial intelligence technology in the future [35].

Due to the novelty of artificial intelligence in medicine and its application in the treatment of patients, several studies have been conducted to investigate the various aspects of this technology across the globe. These studies, which are often performed based on a quantitative research approach, have used different and numerous instruments for data collection. Nonetheless, the validity of these tools and their design method are debated issues.

Boillat et al. (2022) investigated the familiarity of physicians and medical students with artificial intelligence in medicine using a researcher-made questionnaire. This questionnaire was designed only by reviewing the literature and did not use experts’ opinions. Moreover, for psychometric analysis, only the reliability of the questionnaire was assessed by calculating Cronbach’s alpha coefficient [36]. Doumat et al. (2022) investigated the knowledge and attitude of medical students regarding artificial intelligence in medicine in Lebanon using a researcher-made questionnaire. This tool included 15 items on knowledge and 5 questions assessing attitude. However, no information was provided regarding the design method, as well as the results of the validity and reliability of this questionnaire [37]. Jha et al. (2022) investigated medical students’ knowledge of artificial intelligence, the role of artificial intelligence in medicine, and the priorities related to its education in Nepal using a questionnaire. The face validity of the tool was assessed on 20 graduates, and its reliability was verified by computing Cronbach’s alpha coefficient which was reported as 0.6 [38].

The reliability of the Persian version of MAIRS-MS was confirmed by a Cronbach’s alpha coefficient of 0.94, higher than that of the original instrument (0.87). In the categories of cognition, ability, vision, and ethics, the reliability coefficients were 0.91, 0.92, 0.89, and 0.92, respectively; in all four areas, the reliability of the Persian version exceeded that of the English questionnaire. Another strength of this study was the confirmatory factor analysis conducted on 302 medical students, a sample considerably larger than the minimum recommended for factor analysis.

This study has some limitations that must be considered. All participants were from a single university in Iran; this sampling approach might undermine the external validity of the results and introduce selection bias. Moreover, given the only fair fit of the model according to the fit indices, the MAIRS-MS needs further validation in groups speaking other languages, in different cultures, and at other universities before it is used in other contexts.

Although this tool was designed for medical students, it can also be used to measure AI readiness in medicine in other populations, such as physicians and residents. Therefore, it is suggested that future studies evaluate different samples of medical science specialists. Furthermore, MAIRS-MS can be of great help to educational planners and policymakers as a valuable tool in the design of artificial intelligence-related curricula tailored to students’ needs. This instrument can also be useful in measuring the effectiveness of training courses related to artificial intelligence.

Conclusion

Considering the excellent reliability, appropriate convergent validity, and broadly acceptable construct validity of the Persian version of the MAIRS-MS, as well as its conciseness and ease of administration, it can be used to evaluate medical students’ readiness for the use of artificial intelligence in medicine. The outcomes of this evaluation are significant because they guide various educational design and development processes, such as curriculum development, instructional design, and needs analysis.

Data Availability

The datasets used and/or analyzed in the current study are available from the corresponding author upon reasonable request.

References

  1. Sit C, Srinivasan R, Amlani A, Muthuswamy K, Azam A, Monzon L, et al. Attitudes and perceptions of UK medical students towards artificial intelligence and radiology: a multicentre survey. Insights Imaging. 2020;11:1–6.

  2. van der Niet AG, Bleakley A. Where medical education meets artificial intelligence: ‘Does technology care?’. Med Educ. 2021;55(1):30–6.

  3. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44–56.

  4. Hainc N, Federau C, Stieltjes B, Blatow M, Bink A, Stippich C. The bright, artificial intelligence-augmented future of neuroimaging reading. Front Neurol. 2017;8:489.

  5. Wang D, Khosla A, Gargeya R, Irshad H, Beck AH. Deep learning for identifying metastatic breast cancer. arXiv preprint arXiv:1606.05718. 2016.

  6. Kelly M, Ellaway R, Scherpbier A, King N, Dornan T. Body pedagogics: embodied learning for the health professions. Med Educ. 2019;53(10):967–77.

  7. Siegersma K, Leiner T, Chew D, Appelman Y, Hofstra L, Verjans J. Artificial intelligence in cardiovascular imaging: state of the art and implications for the imaging cardiologist. Neth Heart J. 2019;27:403–13.

  8. Bedi G, Carrillo F, Cecchi GA, Slezak DF, Sigman M, Mota NB, et al. Automated analysis of free speech predicts psychosis onset in high-risk youths. npj Schizophrenia. 2015;1(1):1–7.

  9. Anirvan P, Meher D, Singh SP. Artificial intelligence in gastrointestinal endoscopy in a resource-constrained setting: a reality check. Euroasian J Hepatogastroenterol. 2020;10(2):92–7.

  10. Kassahun Y, Yu B, Tibebu AT, Stoyanov D, Giannarou S, Metzen JH, et al. Surgical robotics beyond enhanced dexterity instrumentation: a survey of machine learning techniques and their role in intelligent and autonomous surgical actions. Int J Comput Assist Radiol Surg. 2016;11:553–68.

  11. Lindqwister AL, Hassanpour S, Lewis PJ, Sin JM. AI-RADS: an artificial intelligence curriculum for residents. Acad Radiol. 2021;28(12):1810–6.

  12. Liu DS, Sawyer J, Luna A, Aoun J, Wang J, Boachie L, et al. Perceptions of US medical students on artificial intelligence in medicine: mixed methods survey study. JMIR Med Educ. 2022;8(4):e38325.

  13. Meskó B, Hetényi G, Győrffy Z. Will artificial intelligence solve the human resource crisis in healthcare? BMC Health Serv Res. 2018;18(1):1–4.

  14. Milena VZ, Petra PP. Cognitive constructivist way of teaching scientific and technical contents. Int J Cogn Res Sci Eng Educ. 2021;9(1):23–36.

  15. Mota-Valtierra G, Rodríguez-Reséndiz J, Herrera-Ruiz G. Constructivism-based methodology for teaching artificial intelligence topics focused on sustainable development. Sustainability. 2019;11(17):4642.

  16. Binder T, Sandmann A, Sures B, Friege G, Theyssen H, Schmiemann P. Assessing prior knowledge types as predictors of academic achievement in the introductory phase of biology and physics study programmes using logistic regression. Int J STEM Educ. 2019;6:1–14.

  17. Guskey TR, McTighe J. Pre-assessment: promises and cautions. Educational Leadership. 2016;73(7):38.

  18. Guskey TR. Does pre-assessment work? Educational Leadership. 2018;75(5).

  19. Hockett JA, Doubet KJ. Turning on the lights: what pre-assessments can do. Educational Leadership. 2014;71(4):50–4.

  20. Sapci AH, Sapci HA. Artificial intelligence education and tools for medical and health informatics students: systematic review. JMIR Med Educ. 2020;6(1):e19285.

  21. Park CJ, Paul HY, Siegel EL. Medical student perspectives on the impact of artificial intelligence on the practice of medicine. Curr Probl Diagn Radiol. 2021;50(5):614–9.

  22. Yüzbaşıoğlu E. Attitudes and perceptions of dental students towards artificial intelligence. J Dent Educ. 2020.

  23. Gray K, Slavotinek J, Dimaguila GL, Choo D. Artificial intelligence education for the health workforce: expert survey of approaches and needs. JMIR Med Educ. 2022;8(2):e35223.

  24. Karaca O, Çalışkan SA, Demir K. Medical artificial intelligence readiness scale for medical students (MAIRS-MS) – development, validity and reliability study. BMC Med Educ. 2021;21:1–9.

  25. Koller I, Levenson MR, Glück J. What do you think you are measuring? A mixed-methods procedure for assessing the content validity of test items and theory-based scaling. Front Psychol. 2017;8:126.

  26. Salajegheh M, Hekmat SN, Malekpour-Afshar R. Identification of alternative topics to diversify medicine, dentistry, and pharmacy student theses: a mixed method study. BMC Med Educ. 2023;23(1):110.

  27. Sousa VD, Rojjanasrirat W. Translation, adaptation and validation of instruments or scales for use in cross-cultural health care research: a clear and user-friendly guideline. J Eval Clin Pract. 2011;17(2):268–74.

  28. Wilson FR, Pan W, Schumsky DA. Recalculation of the critical values for Lawshe’s content validity ratio. Meas Eval Couns Dev. 2012;45(3):197–210.

  29. Wynd CA, Schmidt B, Schaefer MA. Two quantitative approaches for estimating content validity. West J Nurs Res. 2003;25(5):508–18.

  30. Wolf EJ, Harrington KM, Clark SL, Miller MW. Sample size requirements for structural equation models: an evaluation of power, bias, and solution propriety. Educ Psychol Meas. 2013;73(6):913–34.

  31. Tabachenik D, Fidel J. Structural equation modeling: guidelines for determining model fit. J Bus Res Methods. 2012;6:1–55.

  32. Kilic S. Cronbach’s alpha reliability coefficient. Psychiatry and Behavioral Sciences. 2016;6(1):47.

  33. Laupichler MC, Hadizadeh DR, Wintergerst MW, von der Emde L, Paech D, Dick EA, et al. Effect of a flipped classroom course to foster medical students’ AI literacy with a focus on medical imaging: a single group pre- and post-test study. BMC Med Educ. 2022;22(1):1–9.

  34. Aboalshamat K, Alhuzali R, Alalyani A, Alsharif S, Qadh H, Almatrafi R, et al. Medical and dental professionals’ readiness for artificial intelligence for Saudi Arabia Vision 2030. Int J Pharm Res Allied Sci. 2022;11(4).

  35. Pang Yi Xuan, Muhammad Imran bin Al Nazir Hussain, Sujata Khobragade, Htoo Htoo Kyaw Soe, Soe Moe, Mila Nu Nu Htay. Readiness towards artificial intelligence among undergraduate medical students in Malaysia. Education in Medicine Journal. 2023.

  36. Boillat T, Nawaz FA, Rivas H. Readiness to embrace artificial intelligence among medical doctors and students: questionnaire-based study. JMIR Med Educ. 2022;8(2):e34973.

  37. Doumat G, Daher D, Ghanem N-N, Khater B. Knowledge and attitudes of medical students in Lebanon toward artificial intelligence: a national survey study. Front Artif Intell. 2022;5:1015418.

  38. Jha N, Shankar PR, Al-Betar MA, Mukhia R, Hada K, Palaian S. Undergraduate medical students’ and interns’ knowledge and perception of artificial intelligence in medicine. Adv Med Educ Pract. 2022;13:927–37.


Acknowledgements

We thank the students who participated for their support and involvement in the study.

Funding

Not applicable.

Author information

Authors and Affiliations

Authors

Contributions

MS formulated the research idea. HR gathered data. HA performed the analysis of the data. MS and HR wrote the manuscript. MS edited the draft of the paper. All authors approved the final manuscript.

Corresponding author

Correspondence to Mahla Salajegheh.

Ethics declarations

Ethical approval

This study was approved by the Research Ethics Committee of Kerman University of Medical Sciences (No. IR.KMU.AH.REC.1401.253). All methods were carried out in accordance with the relevant guidelines and regulations.

Consent to participate

The participants did not receive any incentives, and participation was voluntary. Informed written consent for participation was obtained based on the proposal approved by the ethics committee. The participants were also assured of the confidentiality of their information, and it was explained that the results would only be used for research objectives.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.



Cite this article

Rezazadeh, H., Ahmadipour, H. & Salajegheh, M. Psychometric evaluation of Persian version of medical artificial intelligence readiness scale for medical students. BMC Med Educ 23, 527 (2023). https://doi.org/10.1186/s12909-023-04516-6
