Anatomy Education Environment Measurement Inventory (AEEMI): a cross-validation study in Malaysian medical schools

Abstract

Background

The Anatomy Education Environment Measurement Inventory (AEEMI) evaluates medical students' perceptions of the educational climate with regard to teaching and learning anatomy. The study aimed to cross-validate the AEEMI, which had previously been validated in a single public medical school, and to propose a valid universal model of the AEEMI across public and private medical schools in Malaysia.

Methods

The initial 11-factor and 132-item AEEMI was distributed to 1930 pre-clinical and clinical year medical students from 11 medical schools in Malaysia. The study examined the construct validity of the AEEMI using exploratory and confirmatory factor analyses.

Results

The best-fit model of the AEEMI was achieved with 5 factors and 26 items (χ² = 3300.71 (df = 1680), P < 0.001, χ²/df = 1.965, Root Mean Square Error of Approximation (RMSEA) = 0.018, Goodness-of-fit Index (GFI) = 0.929, Comparative Fit Index (CFI) = 0.962, Normed Fit Index (NFI) = 0.927, Tucker–Lewis Index (TLI) = 0.956), with Cronbach's alpha values ranging from 0.621 to 0.927. Findings of the cross-validation across institutions and phases of medical training indicated that the AEEMI measures nearly the same constructs as the previously validated version, with several modifications to the placement of items within each factor.

Conclusions

These results confirm that variability exists among institutions in the factors of the anatomy education environment. Hence, with modifications to its internal structure, the proposed model of the AEEMI can be considered universally applicable in the Malaysian context and thus can be used as one of the tools for auditing and benchmarking the anatomy curriculum.

Background

Educational environment is a strong predictor of student learning [1]. It comprises multifactorial elements, such as the content of instruction, learning outcomes, type of curriculum, teaching methods and strategies, learning facilities, teachers' competencies, behavior, and guidance, and peer support, that influence students' motivation and ability to learn [2]. Volatility in educational systems has indirectly influenced the components of educational environments. In keeping with the development of technology, mobile learning (m-learning) and distance learning have emerged as a new generation of learning methods that require digital literacy from learners and instructors for efficient learning [3]. Nowadays, the educational environment is no longer confined to the physical learning space but has extended to social learning situations, where intercultural adaptation and social equity are emphasized to cater to globalization in learning [4, 5]. In addition, social tolerance has been identified as a contributing factor to the psychological well-being of learners, which in turn determines the success of learning in a professional and intercultural educational environment [6]. Hence, social-psychological indices have become one of the educational environment factors that should be continually monitored to ensure the provision of a positive educational environment [7].

In a similar manner, anatomy education has undergone a significant evolution in various aspects of its curriculum [8, 9]. As a pillar of medical education, teaching and learning in anatomy must withstand and adapt to changes in the ecosystem of medical training [10]. Within the past two decades, the literature on anatomy education has documented various forms of technology-enhanced and educational theory-based teaching innovations that either replace or supplement traditional teaching methods (i.e., cadaveric dissection, didactic lectures, and demonstrations) [11,12,13,14,15]. Many factors have underpinned the changes in anatomy teaching methods in Malaysia, which emerged after the revamp of the medical curriculum in 1979 [16]. For instance, the requirement for medical students to learn new medical topics in an integrated medical curriculum has resulted in a reduction of the anatomy syllabus and teaching hours [17]. Nevertheless, such changes in the anatomy education system have attracted attention among anatomists regarding the effectiveness of learning, owing to increasing concern about deficient anatomy knowledge and related skills among medical graduates [18, 19]. This issue has been linked to errors in clinical judgment and medicolegal litigation [20]. Notwithstanding the growing assertion of insufficient anatomy knowledge among medical students and graduates, empirical evidence to support such a claim appears to be lacking [21]. Likewise, previous scholars have argued that the components of the anatomy educational environment remain outdated despite robust academic discussion on changes in anatomy curricula and teaching methods [22]. In fact, debate among anatomy educators on the most effective teaching methods in anatomy and the extent to which the subject should be taught in the medical curriculum has been long-standing [10, 21, 23]. Addressing these issues requires appropriate curriculum evaluation, whereby feedback from various stakeholders, such as medical students, should be measured to ensure empirically based action for improvement.

With the global implementation of outcome-based education in medical training, anatomy educators are expected to show added flexibility in teaching and assessment methods, which requires rapid adaptability to the system. An important point to note is that students take ownership of learning and are free to utilize any learning resources in the process [24]. Meanwhile, lecturers act as facilitators of learning, who may need to play many roles at once to ensure a smooth and efficient learning process [25]. Based on this premise, measuring students' perception of the anatomy education environment, as a feedback mechanism, is imperative for improving the teaching and learning of anatomy. However, to ensure accurate measurement of students' perception of the educational environment, it is important to use a valid and reliable tool that is suitable within the context of anatomy education.

In line with such a requirement, Hadie et al. [26] developed an instrument known as the Anatomy Education Environment Measurement Inventory (AEEMI), which plays a central role in the objective of the present study for several reasons. First, it helps to establish students' perceptions of factors pertaining to the educational climate that influence anatomy learning. Second, the six factors of the AEEMI, namely, students' perception of anatomy as a subject, anatomy teachers, importance of anatomy knowledge, anatomy learning resources, self-effort in learning anatomy, and quality of histology learning facilities, are aligned with issues raised in the anatomy literature [27,28,29,30,31,32]. Third, the AEEMI contains low-inference items of the educational environment, thereby ensuring accurate ratings on the students' part, based on experience and observation rather than opinion [33]. Several studies have indicated that low-inference items in an inventory measure users' perceptions more objectively than high-inference items, which capture subjective feelings and reactions [34,35,36]. Hence, measurement using the AEEMI can identify specific problems that need improvement or point out issues that are rectifiable, because what is measured is objective and observable.

The AEEMI is an instrument that measures the perceptions of medical students regarding the educational climate specific to anatomy as a subject. Hadie et al. [26] developed the AEEMI through the Delphi technique, involving anatomists and medical educators from various countries. A validation of the inventory was conducted on pre-clinical year students in a Malaysian public medical school, where a six-factor, 25-item framework was proposed as the best-fit model of the AEEMI. The validated instrument measures students' perceptions regarding anatomy as a subject, teachers, importance of anatomy knowledge, learning resources, self-effort in learning, and quality of histology learning facilities. A five-point Likert-type scale is used to rate agreement with the items, ranging from 1 = strongly disagree to 5 = strongly agree [26]. Although the tool was demonstrated to have good content, response process, and construct validity, Hadie et al. [26] raised concerns about the generalizability of the AEEMI items because several important items were omitted during the validation process on the basis of statistical considerations. To ensure the trustworthiness of results obtained from measurements using the inventory, further validation is required on a wider scale to take into account the variability that may exist among institutions. Hence, the study aimed to critically examine the construct validity of the AEEMI across institutions and cohorts of students.

In a broad sense, construct validity refers to the accuracy of inferences made from a measurement, that is, whether it measures what it intends to measure [37]. Construct validity comprises five aspects, namely, content validity (i.e., items in the instrument represent the intended factor), response process validity (i.e., users of the instrument can understand the items), internal structure validity (i.e., results are replicable in a different measurement when the same inventory is used), relationship with other variables (i.e., results correlate with those obtained using other tools), and consequence validity (i.e., impact of the measurement) [38]. Although evidence for the validity of the AEEMI has been established in a single-center study, a cross-validation of the instrument will ensure the selection of a robust pool of items and therefore represent the broader scenario of the anatomy education environment. The AEEMI would then be not only valid and reliable but also a universal inventory, at least in the Malaysian context. A universal, valid, and reliable tool will ensure a successful benchmarking process for the anatomy curriculum, which in turn will enable the improvement of the curriculum. Hence, the study intended to critically evaluate the construct validity of the AEEMI across public and private medical schools in Malaysia and to propose a universal framework of the AEEMI. This study aimed to answer the following research questions: (1) What is the best-fit universal model for the AEEMI? and (2) What is the internal consistency reliability of the AEEMI when administered to medical students at different phases of training across public and private medical schools in Malaysia? To answer these questions, the study hypothesized that (1) the AEEMI would demonstrate a good model fit that is universal and (2) it would show a high level of internal consistency reliability across cohorts.

Methods

Study design and ethical approval

A multi-center cross-sectional study was conducted at nine public and two private medical schools in Malaysia, namely, Universiti Sains Malaysia, Universiti Malaya, Universiti Kebangsaan Malaysia, Universiti Putra Malaysia, Universiti Sultan Zainal Abidin, Universiti Malaysia Sarawak, International Islamic University Malaysia, Universiti Sains Islam Malaysia, Universiti Teknologi MARA, Newcastle University Medicine Malaysia, and Cyberjaya University College of Medical Sciences. Permission and ethical clearance were obtained from the Human Research Ethics Committee (HREC), Universiti Sains Malaysia (USM/JEPeM/18040225). Prior to data collection, the institution-led researcher briefed the students at each institution on the study objectives, background, methodology, and the participants' rights and method of withdrawal. Participation in the study was voluntary, and students could withdraw from the study at any time.

Recruitment of participants

The study recruited 1930 medical students from 11 medical schools in Malaysia across 5 years of study and phases of training. The purposive sampling method was used based on one of two criteria, namely, (1) the participant is a pre-clinical year student who is undertaking an anatomy subject under the formal medical curriculum of a participating university or (2) the participant is a clinical year student with a previous learning experience in anatomy under the formal medical curriculum of a participating university.

Sample size

The sample size of the study was determined according to the recommendation of Costello and Osborne [39], who prescribed best practices for factor analysis. The authors stated that the number of subjects required in studies involving factor analysis should be at least five times the number of items or greater than 100 subjects. Because the first version of the AEEMI consisted of 132 items, the minimum sample size was set at 660. Allowing for a non-response rate of 30%, the sample size was increased to 858. Hence, the minimum sample size for each of the 11 institutions was 78 students.
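
The arithmetic behind these figures can be reproduced directly; the short Python sketch below simply restates the calculation (five respondents per item, a 30% inflation for non-response, and an even split across the 11 schools) and is illustrative only.

```python
# Sample-size calculation described above (illustrative restatement).
n_items = 132
minimum_n = 5 * n_items          # at least five subjects per item -> 660
adjusted_n = minimum_n * 1.30    # inflate by a 30% non-response rate -> 858
per_school = adjusted_n / 11     # spread across the 11 participating schools -> 78

print(int(minimum_n), int(adjusted_n), int(per_school))  # 660 858 78
```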

Research instrument

In the new cross-validated version, the researchers initially anticipated the possibility of including items omitted by Hadie et al. [26]. Hence, the current study used the first version of the AEEMI (Additional file 1), which contains 11 factors and 132 items. This version achieved a satisfactory scale-level content validity index/average (S-CVI/Ave) of more than 0.80 for eight factors and borderline S-CVI/Ave values ranging from 0.77 to 0.79 for the three remaining factors [26]. The 132-item version of the inventory required students to rate the items on a five-point Likert-type scale (1 = strongly disagree, 2 = disagree, 3 = not sure, 4 = agree, and 5 = strongly agree).

Data collection process

The guided self-administered questionnaire was distributed during face-to-face sessions in lecture halls or classes by the respective institution-led researchers. The estimated time for completing the questionnaire was 15 min. Completion of the AEEMI was voluntary, and the students were informed that their progress in the medical course would remain unaffected should they decline participation.

Data analysis

Exploratory and confirmatory factor analyses were used to evaluate the psychometric properties of the AEEMI. Exploratory factor analysis (EFA) was performed at the outset of data analysis to determine the factor loading of each item and to explore the extractable factors using Statistical Package for the Social Sciences (SPSS) version 26 (IBM Corp., Armonk, NY). The correlation matrix of the items was considered factorable when the Kaiser–Meyer–Olkin (KMO) value exceeded 0.5 and Bartlett's test of sphericity was significant [40]. The principal axis factoring method was applied to extract factors, and factors with eigenvalues above 1 were retained. Varimax rotation was applied to optimize the factor loading of each item on the extracted factors. Items with factor loadings of more than ±0.4 were selected for confirmatory factor analysis (CFA) [41].
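
For readers without access to SPSS, the same screening steps can be approximated with open-source tools. The sketch below uses the Python factor_analyzer package; the input file name and item columns are hypothetical, and it illustrates the procedure described above rather than reproducing the study's exact analysis.

```python
# Illustrative sketch of the EFA screening described above, using the Python
# factor_analyzer package instead of SPSS. The file name and item columns are
# hypothetical; the 'principal' extraction method approximates principal axis factoring.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = pd.read_csv("aeemi_item_ratings.csv")  # one column per AEEMI item (hypothetical)

# Factorability checks: KMO > 0.5 and a significant Bartlett's test of sphericity.
chi_square, p_value = calculate_bartlett_sphericity(items)
_, kmo_overall = calculate_kmo(items)
print(f"KMO = {kmo_overall:.2f}, Bartlett chi2 = {chi_square:.1f}, p = {p_value:.4f}")

# Determine how many factors have eigenvalues above 1.
fa = FactorAnalyzer(rotation=None, method="principal")
fa.fit(items)
eigenvalues, _ = fa.get_eigenvalues()
n_factors = int((eigenvalues > 1).sum())

# Re-fit with varimax rotation and keep items loading above |0.4| for the CFA stage.
fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="principal")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
retained_items = loadings[(loadings.abs() > 0.4).any(axis=1)].index.tolist()
print(retained_items)
```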

CFA was performed using Analysis of Moment Structures (AMOS) version 24 (SPSS Inc., Chicago, IL) [42]. Goodness-of-fit indices were determined to assess the fit of the AEEMI models, which were considered fit when all indices met the minimum requirements shown in Table 1. Contributions of the observed variables (i.e., AEEMI items) to the latent variables (i.e., AEEMI factors) were estimated by standardized factor loadings, whereby a high factor loading indicates a high contribution of the item to the factor [48]. In addition, modification indices (MIs) reflect the reduction in the chi-square value that would result from relaxing a parameter constraint [48]. The study used MI values as an indicator for selecting observed variables fit for retention in the framework [48]. However, removal of observed variables was based on the opinion of a content expert and a review of the literature [49].
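
As a rough open-source analogue of this AMOS workflow, the sketch below specifies a measurement model and extracts fit indices with the Python semopy package. The factor and item names are hypothetical placeholders, not the actual AEEMI item assignments, and the snippet illustrates the CFA step rather than reproducing the study's analysis.

```python
# Rough open-source analogue of the CFA step, using the Python semopy package.
# Factor and item names below are hypothetical placeholders.
import pandas as pd
import semopy

items = pd.read_csv("aeemi_item_ratings.csv")  # hypothetical item-level ratings

model_desc = """
KnowledgeRelevance =~ item1 + item2 + item3 + item4
TeachersPositive   =~ item5 + item6 + item7
TeachersNegative   =~ item8 + item9 + item10
SubjectMastery     =~ item11 + item12 + item13
LearningResources  =~ item14 + item15 + item16
"""

model = semopy.Model(model_desc)
model.fit(items)

# Parameter estimates, including standardized factor loadings.
print(model.inspect(std_est=True))

# Fit indices (chi-square, GFI, CFI, NFI, TLI, RMSEA, ...) to compare with Table 1.
print(semopy.calc_stats(model).T)
```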

Table 1 Goodness-of-fit indices used to signify model fit

In addition to the factor analyses, internal consistency reliability was investigated to assess the internal structure of the AEEMI using reliability analysis in SPSS version 26 (IBM Corp., Armonk, NY). The results were expressed as Cronbach's alpha coefficients, where values higher than 0.7 were considered to indicate high internal consistency, whereas values between 0.6 and 0.7 were considered satisfactory [50].
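
Cronbach's alpha can also be computed directly from its definition, alpha = k/(k − 1) × (1 − Σ item variances / variance of the total score). The following minimal Python sketch assumes a hypothetical data frame containing the item ratings for a single AEEMI factor.

```python
# Minimal sketch: Cronbach's alpha computed from its definition,
# alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score)).
import pandas as pd

def cronbach_alpha(factor_items: pd.DataFrame) -> float:
    """Respondents in rows, the items belonging to one AEEMI factor in columns."""
    k = factor_items.shape[1]
    item_variances = factor_items.var(axis=0, ddof=1).sum()
    total_variance = factor_items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical usage for one factor's items:
# items = pd.read_csv("aeemi_item_ratings.csv")
# print(round(cronbach_alpha(items[["item1", "item2", "item3", "item4"]]), 3))
```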

Results

Demographic profile of the participants

Out of the 1930 respondents, 51.6% were pre-clinical year students who were actively involved in formal anatomy classes, whereas the remaining 48.4% were clinical year students who had learned clinically applied anatomy integrated into other clinical subjects. Table 2 summarizes the demographic distribution of the participants.

Table 2 Demographic distributions of participants

Factorial analyses

CFA was used to confirm the dimensionality of the AEEMI, as it is established that the AEEMI measures multiple factors of the anatomy education environment. CFA of the original six-factor, 25-item model (Model A) by Hadie et al. [26] indicated poor model fit, as the normed fit index (NFI) was less than 0.9. To improve the fit of the original six-factor AEEMI model, the study performed stepwise item removal based on the MIs, standardized residual covariances, and standardized factor loadings, which resulted in a second model with 20 items (Model B).

Because a large number of items had been removed, important items may have been lost. Thus, the present study sought to identify an alternative model through EFA followed by CFA. The EFA revealed that the correlation matrix of the items was factorable, with a KMO value of 0.7 and a significant Bartlett's test of sphericity. Items with factor loadings of less than ±0.4 were omitted.

Two models that load on five factors were proposed on the basis of the CFA, namely, a five-factor model with 20 items and a five-factor model with 26 items. Both models were found to fit, as satisfactory goodness-of-fit indices were achieved. Table 3 summarizes the goodness-of-fit indices for the original six-factor, 25-item version (Model A), the modified six-factor, 20-item version (Model B), the new five-factor, 20-item version (Model C), and the new five-factor, 26-item version (Model D).

Table 3 Proposed models of AEEMI and goodness-of-fit indices

Internal consistency

Reliability analysis was performed on the three models with good model fit (i.e., Models B, C, and D). Cronbach's alpha values for the constructs of the AEEMI in Model B ranged from 0.369 to 0.901, indicating poor to high reliability. In contrast, the values for Models C and D ranged from 0.621 to 0.927, indicating satisfactory to high reliability. Tables 4, 5, and 6 present the Cronbach's alpha and standardized factor loading values for the factors in Models B, C, and D, respectively.

Table 4 Standardized factor loading and Cronbach’s alpha for Model B (six-factor and 20-item version)
Table 5 Standardized factor loading and Cronbach’s alpha for Model C (five-factor and 20-item version)
Table 6 Standardized factor loading and Cronbach’s alpha for Model D (five-factor and 26-item version)

The final model

Considering these results, Model D, which contains five factors and 26 items, was selected as the final model (AEEMI-26). The analysis revealed that the model achieved good fit with high goodness-of-fit indices. The reliability of each factor in the model ranged from satisfactory to high. In addition, the absolute correlation values between factors were less than 0.85, which indicated good discriminant validity [48], as shown in Fig. 1.

Fig. 1 Standardized factor loadings of the domains in the final model of the Anatomy Education Environment Measurement Inventory

Discussion

The study contributes several important pieces of evidence to support the validity of the AEEMI. First, the best-fit model of the AEEMI (AEEMI-26) consists of five factors with a total of 26 items. The factors relate to anatomy knowledge relevance, positive and negative aspects of anatomy teachers, mastery of the anatomy subject, and anatomy learning resources. Second, 25 of the 26 items obtained standardized factor loadings of approximately 0.5, which indicates that the AEEMI-26 possesses a sound factorial structure that supports its internal structure validity. Third, the five factors were independent and mutually exclusive, as the correlation values between factors were less than 0.85, thus signifying the discriminant validity of the AEEMI-26. Fourth, the internal consistency of the five factors ranged from satisfactory to high, with Cronbach's alpha values between 0.62 and 0.92. Fifth, the AEEMI-26 showed overall high internal consistency and a consistent internal structure across medical schools and years of study, thus confirming that the AEEMI-26 is a cross-validated and reliable tool for measuring the anatomy education environment. Lastly, the results suggest that the AEEMI-26 is a promising benchmarking tool for measuring the quality of the anatomy education environment in medical schools, especially in the Malaysian context.

The AEEMI-26 measures the quality of the anatomy education environment from the students' point of view in terms of anatomy knowledge relevance, anatomy teachers, anatomy subject mastery, and anatomy learning resources. These factors are defined according to the items in the AEEMI-26 that represent them. For instance, anatomy knowledge relevance refers to the usability, applicability, and transferability of anatomy knowledge in future clinical practice, either as medical students or as practitioners. Anatomy teachers refers to teachers' behaviors, skills, and enthusiasm, which may be negative or positive. Anatomy subject mastery reflects the ability of medical students to answer anatomy questions and explain anatomy content to others with confidence and clarity. Anatomy learning resources refer to the learning tools and materials used to support medical students in learning anatomy. In comparison with the initial version of the AEEMI, which comprised six factors and 25 items (AEEMI-25) [26], the factors in the AEEMI-26 are more robust and comprehensive because they better represent the anatomy education environment, covering common areas across different medical schools and phases of training. Furthermore, the six factors of the AEEMI-25 are covered by the AEEMI-26; for example, the effort to learn anatomy, a factor in the AEEMI-25, is covered under anatomy subject mastery in the AEEMI-26, and the quality of histology learning facilities in the AEEMI-25 is part of anatomy learning resources in the AEEMI-26.

The AEEMI-26 measures the ability of students to grasp the subject (anatomy subject mastery), the connection of the subject with real practice (anatomy knowledge relevance), teaching behaviors (anatomy teachers), and support for learning (anatomy learning resources). Based on this notion, the AEEMI-26 measures aspects of the educational environment that are in line with several educational environment frameworks [51]. In addition, the model agrees with the view of Genn [52], who suggested that "the environment of the medical school is notable, not only because it derives from and is a manifestation of the curriculum, but because the environment is a determinant of the behavior of the medical school's students and teachers." These facts support the strength of the AEEMI-26 in measuring important aspects of the anatomy education environment in various medical school settings. Notably, although much has been discussed about the educational environment in medical and allied health sciences education [51, 52], less effort has been exerted to explore the educational climate in a specific domain, such as anatomy. Despite the reduction in the number of factors, the constructive alignment between the AEEMI-26 factors and global issues of anatomy education indicates that the proposed model covers the relevant constructs of the anatomy education environment and thus enhances its validity credentials [26].

Approximately 96% of the items achieved standardized factor loadings of approximately 0.5, which indicates that the AEEMI-26 has a good factorial structure that supports its internal structure validity [40]. A high standardized factor loading indicates the high degree of contribution of an item to the expression of the concept represented by a factor. In contrast, the AEEMI-25 [26] achieved standardized factor loadings of at least 0.5 for approximately 84% of the items, which suggests that the AEEMI-26 with five factors has a better conceptual representation. Internal structure validity is an important indicator that supports the validity of a measurement [38,39,40,41,42, 48,49,50]. Thus, establishing the internal structure of the AEEMI-26 across medical schools in Malaysia is essential to support its cross-validity in measuring the anatomy education environment.

The AEEMI-26 achieved good discriminant validity, with correlations of less than 0.85 between factors [48]. This finding indicates that its five factors are independent and exclusively measure distinct aspects of the anatomy education environment. Discriminant validity is established when factors have low correlations with one another [53]. The good discriminant validity of the AEEMI-26 can be attributed to the robust and rigorous development of a refined version of the AEEMI, which comprises well-defined and non-redundant factors with a good pool of items [26]. Moreover, this finding strengthens its psychometric credentials as a valid and generalizable instrument for measuring the anatomy education environment, as the data were derived from 11 medical schools in Malaysia. The study proposes that the AEEMI-26 be further validated in other countries to provide additional evidence to support its credentials as a global measure of the anatomy education environment.
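
As a simplified illustration of this criterion, the Python sketch below computes correlations between factor scores (approximated here as the mean item rating per factor, with hypothetical item names) and flags whether every absolute pairwise correlation falls below 0.85; in the study itself the criterion was applied to the latent factor correlations estimated in the CFA.

```python
# Simplified discriminant validity check: pairwise correlations between
# (approximate) factor scores should stay below 0.85 in absolute value.
import numpy as np
import pandas as pd

items = pd.read_csv("aeemi_item_ratings.csv")  # hypothetical item-level ratings
factors = {                                     # hypothetical item-to-factor mapping
    "knowledge_relevance": ["item1", "item2", "item3", "item4"],
    "teachers_positive": ["item5", "item6", "item7"],
    "teachers_negative": ["item8", "item9", "item10"],
    "subject_mastery": ["item11", "item12", "item13"],
    "learning_resources": ["item14", "item15", "item16"],
}

scores = pd.DataFrame({name: items[cols].mean(axis=1) for name, cols in factors.items()})
corr = scores.corr()
off_diagonal = corr.values[~np.eye(len(corr), dtype=bool)]
print(corr.round(2))
print("All inter-factor correlations < 0.85:", bool((np.abs(off_diagonal) < 0.85).all()))
```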

In addition, the study provided evidence that the AEEMI-26 is a reliable instrument. Reliability is broadly defined as the ability of a measurement tool to produce consistent results over time and with repetition, and it is commonly expressed as internal consistency and stability [54]. The five factors of the AEEMI-26 showed satisfactory to high levels of reliability, as Cronbach's alpha values ranged from 0.62 to 0.92. In comparison, the factors of the AEEMI-25 [26] reached Cronbach's alpha values ranging from 0.60 to 0.8, indicating a level of reliability nearly similar to that of the AEEMI-26. For a more established instrument, the Dundee Ready Educational Environment Measure, Cronbach's alpha values of the factors ranged from 0.58 to 0.82 in a sample of Malaysian medical students [54], which suggests that the AEEMI-26 has a reliability comparable to other educational environment scales. Moreover, the present study provided essential evidence to support the internal consistency of the AEEMI-26 across 11 medical schools in Malaysia, thus strengthening its validity for measuring the anatomy education environment.

Hence, the AEEMI-26 displayed a valid internal structure, as evidenced by its independent factorial structure and high levels of reliability across medical schools and training phases. This finding suggests that it is a valid and reliable inter-institutional tool for measuring the anatomy education environment, which has several implications for anatomy curriculum improvement. The use of the concise, validated 26-item AEEMI could minimize rating errors and therefore provide more reliable feedback to educators on what should be improved to cater to students' learning needs. In many instances, improvement of an education system has been documented as the result of a high-quality management system that includes feedback as one of its measurement tools, as emphasized by Hattie and Timperley [55].

Hence, it is postulated that the AEEMI would be able to provide significant information on aspects of the current anatomy curriculum that need to be improved and thereby help address deficiencies in anatomy knowledge and related skills among medical graduates. These facts are important evidence supporting the proposal of the AEEMI-26 as a promising benchmarking tool for measuring the quality of the anatomy education environment in medical schools, particularly in the Malaysian context. Furthermore, it can be used as a global benchmarking tool to identify strengths and areas for improvement, facilitate the formulation of an institutional development plan (IDP) to build on strengths and fill identified gaps, prioritize IDP interventions, and monitor progress and achievements [56].

Limitations

Despite the favorable outcomes that support the validity of the AEEMI-26, the study has several limitations that should be considered in future research and interpretation. Although the AEEMI initially comprised 132 items, it underwent extensive item removal; that is, 106 items were removed during the validation process. Many items that may reflect the anatomy education environment in other countries (e.g., cadaveric dissection and learning using anatomy software) are not included in the AEEMI-26. The remaining items reflect actual practice in Malaysian medical schools, where cadaveric dissection is not widely practiced because of a shortage of cadavers and limited teaching time in the curriculum, and anatomy software is not available in most public medical schools because of financial constraints. Moreover, important items that may represent anatomy assessment were excluded from the inventory, although two items in the AEEMI-26 measure students' perception of their confidence in answering anatomy questions. Assessment is typically included as an essential factor of an educational environment. Therefore, future validation may consider a more exhaustive evaluation of the items through item-level refinement. New items that potentially represent the anatomy education environment should be added to the initial pool of items prior to a future cross-validation study. In addition, future validation studies should be conducted in different settings (i.e., different regions and countries).

Inadequacy in the content coverage of the AEEMI-26 items may stem from the similarity of the anatomy curricula of the 11 participating medical institutions. In general, these institutions practice integrated curricula, where anatomy is taught in a system- or course-based manner with emphasis on horizontal and vertical integration. The teaching methods used to teach anatomy are nearly similar, with the subject taught through lectures, practical sessions, and problem-based learning. In terms of practical anatomy, nearly all institutions use anatomy models, prosected specimens, and the microscope as teaching tools. Only a few institutions conduct cadaveric dissections or use anatomy software to supplement teaching. Hence, additional validation studies should be conducted before the AEEMI-26 can be used as a global benchmarking tool, and the AEEMI-26 should be validated across countries. Other sources of validity evidence, such as consequences and relations to other variables [38], should also be evaluated to ensure the robust psychometric credentials of the AEEMI-26.

Conclusion

The study illustrated that the AEEMI-26 is a valid and reliable inventory that measures the anatomy education environment. The key strength of this study lies in the involvement of 1930 medical students at different phases of medical training from 11 public and private medical schools in Malaysia. The variation that may exist in the anatomy education environment among the institutions was captured during the validation process, which therefore contributed to the generalizability of the AEEMI-26. Although the study focused on the cross-validation of the AEEMI in the Malaysian context, the AEEMI-26 may be applicable to other countries, assuming that their anatomy education environments are similar. The findings complement those of Hadie et al. [26], in which the inventory was found to have stable constructs. Hence, the AEEMI-26 is useful for enhancing our understanding of medical students' perceptions of the anatomy education environment. The inventory can be used to obtain students' feedback on anatomy teaching and learning and thus serve as a valid benchmarking tool for anatomy education curricula. Further studies should be carried out to validate the AEEMI on a global scale, which will increase its generalizability.

Availability of data and materials

The datasets used and/or analysed during this study are available from the corresponding author upon reasonable request.

Abbreviations

AEEMI: Anatomy Education Environment Measurement Inventory
CFA: Confirmatory factor analysis
CFI: Comparative Fit Index
EFA: Exploratory factor analysis
GFI: Goodness-of-fit Index
HREC: Human Research Ethics Committee
IDP: Institutional development plan
KMO: Kaiser–Meyer–Olkin
MI: Modification index
NFI: Normed Fit Index
RMSEA: Root Mean Square Error of Approximation
S-CVI/Ave: Scale-level content validity index/average
SPSS: Statistical Package for the Social Sciences
TLI: Tucker–Lewis Index

References

  1. Fraser BJ. Environments for education. In: Wright JD, editor. International encyclopedia of the Social & Behavioral Science. Oxford: Elsevier; 2015. p. 820–3.

  2. Hutchinson L. Educational environment. BMJ. 2003;326:810–2. https://doi.org/10.1136/bmj.326.7393.810.

  3. Alzaza NS, Yaakub AR. Students’ awareness and requirements of mobile learning services in the higher education environment. Am J Econ Bus Admin. 2011. https://doi.org/10.3844/ajebasp.2011.95.100.

  4. James R. Social equity in a mass, globalised higher education environment: the unresolved issue of widening access to university. Melbourne: Centre for the Study of Higher Education, University of Melbourne; 2007.

  5. Tomin VV, Sakharova NS, Eremina NV, Kabanova OV, Terekhova GV. Intercultural adaptation of students in the information field of cross-cultural interaction. Glob Media J S. 2016;2:1–7.

  6. Boghian I. The values of tolerance education. A literature review. J Innov Psychol Educ Didactics. 2017;21:205–20.

  7. Kislyakov PA, Shmeleva EA, Karaseva TV, Silaeva OA. Monitoring of education environment according to the social–psychological safety criterion. Asian Soc Sci. 2014;10:285–91. https://doi.org/10.5539/ass.v10n17p285.

  8. Drake RL. Anatomy education in a changing medical curriculum. Anat Rec. 1998;253:28–31. https://doi.org/10.1002/(SICI)1097-0185(199802)253:1<28::AID-AR11>3.0.CO;2-E.

  9. Khalid S, Akhtar MJ, Shah F. Winds of change: do we need to change with the changing times? J Med Health Sci. 2017;11:946–98.

  10. Estai M, Bunt S. Best teaching practices in anatomy education: a critical review. Ann Anat. 2016;208:151–7. https://doi.org/10.1016/j.aanat.2016.02.010.

  11. Krych AJ, March CN, Bryan RE, Peake BJ, Pawlina W, Carmichael SW. Reciprocal peer teaching: students teaching students in the gross anatomy laboratory. Clin Anat. 2005;18:296–301. https://doi.org/10.1002/ca.20090.

  12. Finn GM, McLachlan JC. A qualitative study of student responses to body painting. Anat Sci Educ. 2010;3:33–8. https://doi.org/10.1002/ase.119.

  13. McMenamin PG, Quayle MR, McHenry CR, Adams JW. The production of anatomical teaching resources using three-dimensional (3D) printing technology. Anat Sci Educ. 2014;7:479–86. https://doi.org/10.1002/ase.1475.

  14. Webb AL, Choi S. Interactive radiological anatomy elearning solution for first year medical students: development, integration, and impact on learning. Anat Sci Educ. 2014;7:350–60.

  15. Hadie SNH, Abdul Manan Sulong H, Hassan A, Mohd Ismail ZI, Talip S, Abdul Rahim AF. Creating an engaging and stimulating anatomy lecture environment using the Cognitive Load Theory-based Lecture Model: students' experiences. J Taibah Univ Med Sci. 2018;13:162–72. https://doi.org/10.1016/j.jtumed.2017.11.001.

  16. Lim VKE. Medical education in Malaysia. Med Teach. 2008;30:119–23. https://doi.org/10.1080/01421590801942102.

  17. Yammine K. The current status of anatomy knowledge: where are we now? Where do we need to go and how do we get there? Teach Learn Med. 2014;26:184–8. https://doi.org/10.1080/10401334.2014.883985.

  18. Prince KJAH, Scherpbier AJAA, Van Mameren H, Drukker J, Van Der Vleuten CPM. Do students have sufficient knowledge of clinical anatomy? Med Educ. 2005;39:326–32. https://doi.org/10.1111/j.1365-2929.2005.02096.x.

  19. Fitzgerald JEF, White MJ, Tang SW, Maxwell-Armstrong CA, James DK. Are we teaching sufficient anatomy at medical school? The opinions of newly qualified doctors. Clin Anat. 2008;21:718–24. https://doi.org/10.1002/ca.20662.

  20. Aggarwal R, Brough H, Ellis H. Medical student participation in surface anatomy classes. Clin Anat. 2006;19:627–31. https://doi.org/10.1002/ca.20225.

  21. Bergman EM, van der Vleuten CPM, Scherpbier AJJA. Why don’t they know enough about anatomy? A narrative review. Med Teach. 2011;33:403–9. https://doi.org/10.3109/0142159X.2010.536276.

  22. Trautman J, McAndrew D, Craig SJ. Anatomy teaching stuck in time? A 10-year follow-up of anatomy education in Australian and New Zealand medical schools. Aust J Educ. 2019;63:340–50. https://doi.org/10.1177/0004944119878263.

  23. Bergman EM, Prince KJ, Drukker J, van der Vleuten CP, Scherpbier AJ. How much anatomy is enough? Anat Sci Educ. 2008;1:184–8.

  24. Holmboe ES, Harden RM. Outcome-based education. In: Dent JA, Harden RM, editors. A Practical Guide for Medical Teachers. 5th ed. Edinburgh: Elsevier; 2017. p. 114–21.

  25. Kelly C. Teacher as facilitator of learning. In: Mårtensson P, Bild M, Nilsson K, editors. Teaching and Learning at Business Schools: Transforming Business Education. 2nd ed. New York: Routledge; 2016. p. 3–16.

  26. Hadie SNH, Hassan A, Ismail ZIM, Asari MA, Khan AA, Kasim F, Yusof NAM, Manan Sulong HA, Tg Muda TFM, Arifin WN, Yusoff MSB. Anatomy education environment measurement inventory: a valid tool to measure the anatomy learning environment. Anat Sci Educ. 2017;10:423–32. https://doi.org/10.1002/ase.1683.

  27. McCuskey RS, Carmichael SW, Kirch DG. The importance of anatomy in health professions education and the shortage of qualified educators. Acad Med. 2005;80:349–51. https://doi.org/10.1097/00001888-200504000-00008.

  28. Raftery AT. Anatomy teaching in the UK. Surgery. 2007;25:1–2. https://doi.org/10.1016/j.mpsur.2006.11.002.

  29. Ganguly PK. Teaching and learning of anatomy in the 21st century: direction and the strategies. Open Educ J. 2010;3:5–10.

  30. Johnson EO, Charchanti AV, Troupis TG. Modernization of an anatomy class: from conceptualization to implementation. A case for integrated multimodal-multidisciplinary teaching. Anat Sci Educ. 2012;5:354–66. https://doi.org/10.1002/ase.1296.

  31. Ali A, Khan ZN, Konczalik W, Coughlin P, El Sayed S. The perception of anatomy teaching among UK medical students. Bulletin. 2015;97:397–400. https://doi.org/10.1308/rcsbull.2015.397.

  32. Moxham BJ, Hennon H, Lignier B, Plaisant O. An assessment of the anatomical knowledge of laypersons and their attitudes towards the clinical importance of gross anatomy in medicine. Ann Anat. 2016;208:194–203. https://doi.org/10.1016/j.aanat.2016.06.001.

  33. Murray HG. Low-inference teaching behaviors and college teaching effectiveness: recent developments and controversies. In: Perry RP, Smart JC, editors. The scholarship of teaching and learning in higher education: an evidence-based perspective. Netherlands: Springer; 2007. p. 145–200.

  34. Babad E. How high is “high inference”? Within classroom differences in students’ perceptions of classroom interaction. J Classroom Interact. 1996;31:1–9.

  35. Hadie SNH, Hassan A, Talip SB, Yusoff MSB. The teacher behavior inventory: validation of teacher behavior in an interactive lecture environment. Teach Dev. 2019;23:36–49. https://doi.org/10.1080/13664530.2018.1464504.

  36. Gupta A, Wood M, Kumar S, Misra S, Turner T. No faculty required: use of a health literacy low inference self-assessment measure to promote behavior change. Acad Pediatr. 2020;20:712–20. https://doi.org/10.1016/j.acap.2020.02.019.

  37. Borsboom D, Mellenbergh GJ, van Heerden J. The concept of validity. Psychol Rev. 2004;111:1061–71. https://doi.org/10.1037/0033-295X.111.4.1061.

  38. Cook DA, Beckman TJ. Current concepts in validity and reliability for psychometric instruments: theory and application. Am J Med. 2006;119:166.e7–16. https://doi.org/10.1016/j.amjmed.2005.10.036.

  39. Costello AB, Osborne J. Best practices in exploratory factor analysis: four recommendations for getting the most from your analysis. Pract Assess Res Eval. 2005;10:1–9.

  40. Field A. Discovering statistics using SPSS. 3rd ed. London: SAGE Ltd.; 2009.

  41. Stevens JP. Applied multivariate statistics for the social sciences. 5th ed. New York: Routledge; 2012.

  42. Arbuckle JL. IBM SPSS Amos 19 user's guide. 1st ed. Chicago: SPSS Inc; 2010. p. 654.

  43. Browne MW, Cudeck R. Alternative ways of assessing model fit. Sociol Methods Res. 1992;21:230–58. https://doi.org/10.1177/0049124192021002005.

  44. Bentler PM. Comparative fit indexes in structural models. Psychol Bull. 1990;107:238–46. https://doi.org/10.1037/0033-2909.107.2.238.

  45. Bentler PM, Bonett DG. Significance tests and goodness of fit in the analysis of covariance structures. Psychol Bull. 1980;88:588–606. https://doi.org/10.1037/0033-2909.88.3.588.

  46. Bollen KA. A new incremental fit index for general structural equation models. Sociol Methods Res. 1989;17:303–16. https://doi.org/10.1177/0049124189017003004.

  47. Marsh HW, Hocevar D. Application of confirmatory factor analysis to the study of self-concept: first- and higher order factor models and their invariance across groups. Psychol Bull. 1985;97:562–82. https://doi.org/10.1037/0033-2909.97.3.562.

  48. Brown TA. In: Brown TA, editor. Confirmatory Factor Analysis for Applied Research. New York: Guildford Press; 2006. p. 236–319.

  49. Piaw CY. Statistik penyelidikan lanjutan. 1st ed. Shah Alam: McGraw-Hill; 2008. p. 423.

  50. Streiner DL, Norman GR. Health measurement scales: a practical guide to their development and use. 4th ed. New York: Oxford University Press; 2008.

  51. Roslan NS, Mohammad JAM, Ismail MAA, Ahmad A, Yusoff MSB. Rethinking education environment: the clinical education environment framework. Educ Med. 2018;10:31–46. https://doi.org/10.21315/eimj2018.10.3.4.

  52. Genn JM. Curriculum, environment, climate, quality and change in medical education—a unifying perspective. Med Teach. 2001;23:337–44. https://doi.org/10.1080/01421590120063330.

  53. Michalos AC. Netherlands: Springer; 2014. p. 61.

  54. Yusoff MSB. The Dundee ready educational environment measure: a confirmatory factor analysis in a sample of Malaysian medical students. Int J Humanit Soc Sci. 2012;2:313–21.

  55. Hattie J, Timperley H. The power of feedback. Rev Educ Res. 2007;77:81–112. https://doi.org/10.3102/003465430298487.

  56. World Health Organization. WHO global benchmarking tool (GBT) for evaluation of national regulatory systems. 2019. Retrieved May 1, 2020 from http://www.who.int/medicines/regulation/benchmarking_tool/en.

Acknowledgements

The authors would like to thank all 11 Malaysian medical schools for allowing us to conduct this study in their institutions. The authors would also like to thank the School of Medical Sciences for providing funding for English editing and proofreading of this manuscript. Finally, our greatest appreciation goes to the 1930 medical students who participated in this study.

Funding

Not applicable.

Author information

Contributions

SNH designed the research work, coordinated the data collection process among the institutions, and was the major contributor in writing the manuscript. MSBY and WNA analyzed and interpreted the data and wrote the final manuscript. FK, ZIMI, MAA, HM, AH, TFM, YIAB, RMZ, ESMR, RH, SBT, KMKMN, YS, MFA, AAL, and MR each collected the data at one institution. All authors approved the submitted version of the manuscript.

Corresponding author

Correspondence to Siti Nurma Hanim Hadie.

Ethics declarations

Ethics approval and consent to participate

Permission and ethical clearance were obtained from the Human Research Ethics Committee (HREC), Universiti Sains Malaysia (USM/JEPeM/18040225). Written consent was obtained from the participants after the briefing session.

Consent for publication

Written consent for publication was obtained from the participants after the briefing session.

Competing interests

The authors declare no conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: List of items that were used in this study.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Hadie, S.N.H., Yusoff, M.S.B., Arifin, W.N. et al. Anatomy Education Environment Measurement Inventory (AEEMI): a cross-validation study in Malaysian medical schools. BMC Med Educ 21, 50 (2021). https://doi.org/10.1186/s12909-020-02467-w
