
Psychometric properties of a modified cultural awareness scale for use in higher education within the health and social care fields



Cultural awareness and cultural competence have become important skills in higher education as populations continue to grow in diversity around the world. However, currently, there are few instruments designed to assess student awareness of the aspects of culture, and the existing instruments need further development and testing for use with different target populations. Therefore, the aim of this study was to test the psychometric properties of a modified version of the Cultural Awareness Scale (CAS) for use in higher education within the health and social care fields.


A modified version of the CAS was developed, which was tested psychometrically using cross-sectional data. In total, 191 undergraduate students from different health and social care undergraduate programs in Sweden and Hong Kong responded to a call to test the modified instrument.


The results showed that the modified CAS is a four-factor measure of cultural awareness and possesses satisfactory internal consistency. Results also support the use of the modified CAS as a generic tool to measure cultural awareness among students in higher education within the health and social care fields.


The modified CAS showed satisfactory psychometric properties and can be recommended as a generic tool to measure cultural awareness among students in higher education within the health and social care fields. However, further psychometric testing on the effectiveness of the modified CAS as a tool to evaluate the efficacy of cultural awareness interventions is required.



Background

Cultural awareness and cultural competence have become important skills in higher education within the health and social care fields as populations grow in diversity around the world as a result of increased migration. For example, in the United States in 2016, more than 40 million people were reported to have been born in another country [1]. In Australia, in 2018, 29% of the estimated resident population were born overseas [2]. Further, in Sweden, in the same year (2018), almost 19% of the population was foreign-born [3]. As a result of this growing diversity, professionals in health care and social work are encountering more and more patients/clients from different cultures [4, 5]. Providing sufficient care and support requires in-depth knowledge and awareness among professionals working in intercultural settings. Consequently, higher education institutions have the responsibility to produce culturally aware graduates who are sensitive in their cross-cultural interactions [6]. However, instruments to assess student and faculty cultural awareness and the implications for practice are few. Moreover, existing instruments may need to be further developed and tested to fit different target populations.

Conceptual framework

Culture can be defined as the values, beliefs, and norms that guide the thinking and actions of specific groups [7]. Further, culture helps individuals adapt to their environment [8]. Cultural competence is a systematic approach focusing on the ability of providers and organizations to effectively deliver health care services that meet the social, cultural, and linguistic needs of patients/clients [9] and can be understood as a process, not an event [4]. A more recent definition explains cultural competence as “using one’s understanding to represent and tailor health care that is equitable and ethical after becoming aware of oneself and others in a diverse cultural encounter” [10]. Conceptually, the literature suggests that cultural competence consists of several components such as cultural awareness, knowledge, skills, and sensitivity [4, 6, 10, 11]. Cultural awareness could be described as the recognition of one’s own cultural and professional background including beliefs, attitudes and behaviors [6]. Cultural knowledge is the process of seeking and obtaining an educational foundation about diverse cultural and ethnic groups which helps us understand the client’s worldview. Cultural skills refer to one’s ability to gather relevant cultural data regarding the client’s concerns and one’s accurate performance of a culturally-based assessment [4]. Cultural sensitivity means to have knowledge about differences and similarities between and among cultures without values assigned or judgements of those differences [11]. However, it has also been suggested that there is a hierarchy of cultural competence beginning with knowledge (the cognitive component), followed by awareness (the affective component) and sensitivity (the attitudinal component) [11]. This means that to continually develop cultural competence, one needs to move from acquiring knowledge, through gaining awareness, to becoming sensitive.

More recently, scholars have challenged the hierarchy of cultural competence and its continuum. Curtis et al. [12] conducted an extensive literature review and concluded that healthcare practices would be more equitable with a transformative shift from “cultural competence” to “cultural safety.” Cultural safety focuses on examining power imbalances within the healthcare setting with the goal of approaching equity. Examining power imbalances within the healthcare setting includes critical appraisal of one’s own biases and stereotypes (p. 14). Cultural awareness, including this critique of an individual’s biases and stereotypes, is thereby a first step toward developing cultural safety and a critical component to address in higher education and research [13]. Consequently, valid and reliable instruments to assess the efficacy of cultural awareness enhancement interventions are increasingly important.

Literature review

There are several different instruments aimed at evaluating cultural competence in clinical and organizational settings. For example, 11 instruments developed to assess cultural competence in nurses and nursing students were found in a review by Loftin et al. [14]. The review revealed that the majority of the 11 instruments measured general cultural aspects and did not distinguish between different cultural groups [14]. Another review, by Lin et al. [15], found 10 instruments developed for measuring cultural competence in health care providers. Of the 10 instruments found, only five were in English. The instruments evaluated different aspects of cultural competence. Two instruments used distinct perspectives to measure cultural awareness: the Cultural Awareness Scale (CAS) and the Nurse Cultural Competence Scale. Of these two instruments, the CAS was considered to be the most appropriate for assessing cultural awareness [15]. Both Loftin et al. [14] and Lin et al. [15] concluded that current instruments lack an operational definition of cultural competence.

The concept of cultural competence is difficult to measure objectively [14], which makes it challenging for researchers to select the best instrument for their research purposes. Given that prior instruments for measuring cultural awareness were predominately tested in a single discipline, another important question is whether existing instruments are generic and can be used in different target groups to further expand our knowledge of their cultural awareness and cultural safety. If so, rigorous psychometric tests and retests of existing instruments are needed.

Rew et al. [6], the developers of the CAS, identified five categories of cultural awareness. The categories were based on a literature review of cultural awareness, cultural sensitivity, and cultural competence.

The scale is theoretically based on a pathway model that focuses on the interaction between nursing faculty and students with different backgrounds. The CAS was developed for use by nursing faculty and nursing students and has been used in several studies to measure cultural awareness among these groups [13, 16, 17]. The CAS has been translated into and tested in several languages, including Swedish [18], Korean [19] and Turkish [20]. However, whether the CAS is suitable for use with other groups of students in higher education within the health and social care fields is still unknown.


Methods

This study aimed to test the psychometric properties of a modified version of the CAS (mCAS) for use in higher education within the health and social care fields.

This study was performed in two stages. In stage one, a modified version of the CAS was developed. This version was tested psychometrically in stage two, using cross-sectional data.

Cultural awareness scale

The CAS was originally developed in English for nursing students and consists of 36 items with responses based on a 7-point scale (ranging from “strongly disagree” to “strongly agree”). There is also one additional alternative response: “does not apply.” Lower scores indicate less cultural awareness, and higher scores indicate greater awareness. Cultural awareness is considered to be the minimal level of cultural competence. Therefore, the CAS evaluates nursing student performance on the first stage of cultural competence development [6]. The scale was initially tested with undergraduate and graduate nursing students in the United States. A content validity index of 0.88 was obtained, and the internal consistency ranged from 0.71 to 0.94. An exploratory factor analysis was performed, and five subscales emerged: “general educational experience,” “cognitive awareness,” “research issues,” “behaviors/comfort with interactions,” and “patient care/clinical care” [6]. These five factors explained 51% of the variance in scale scores. However, during a reanalysis of the CAS, Rew et al. [16] found evidence for a valid three-factor solution (“general attitudes,” “research attitudes,” “clinical experience”), and the analysis supported the reliability of the CAS.
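As a concrete illustration of this scoring scheme, the sketch below totals 7-point responses and flags incomplete cases. The data and variable names are invented for illustration, and “does not apply” is assumed to be recorded as missing:

```python
import numpy as np

# Hypothetical sketch of CAS-style scoring, assuming responses coded 1-7
# ("strongly disagree" .. "strongly agree") and "does not apply" recorded
# as missing (np.nan). Respondents and items are invented, not study data.
responses = np.array([
    [5, 6, 7, 4, np.nan],   # respondent 1; one "does not apply"
    [3, 4, 2, 5, 6],        # respondent 2
], dtype=float)

# Total score per respondent, ignoring "does not apply" items;
# higher totals indicate greater cultural awareness.
totals = np.nansum(responses, axis=1)

# A respondent with any missing item is excluded from analyses
# that require complete cases (as in the factor analysis here).
complete = ~np.isnan(responses).any(axis=1)
print(totals)     # [22. 20.]
print(complete)   # [False  True]
```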

Stage one

The first stage involved careful review of the wording of each of the 36 statements in the original English version of the CAS by the first and last authors. Wordings that referred to nursing and nursing school were changed into terms that are more applicable to a broader group of students in educational programs within the health and social care fields. For example, “The instructors at this nursing school adequately address multicultural issues in nursing” was changed to “The instructors at this university adequately address multicultural issues.” Further, the word “patient” was replaced with the word “client” throughout the scale. Item number 20 in the original CAS was excluded because this statement focused on nursing instructors in clinical settings, a specific nursing educational context. The entire research team reviewed the modified scale and decided on the final version.

The mCAS consists of 35 items, and, similar to the original version, its responses are based on a 7-point scale, ranging from “strongly disagree” to “strongly agree.” The additional alternative response “does not apply” is also used in the modified version.

Stage two

Stage two involved empirical testing of the mCAS in a cross-sectional survey performed at Malmö University (in Sweden) and Hong Kong Polytechnic University.

Study sample

A total of 191 undergraduate students from the fields of social care (n = 18), dental hygiene (n = 30), criminology (n = 40), and occupational therapy and physiotherapy (n = 103) responded to the survey. Of all the respondents, 88 were students at Malmö University, and 103 were students at Hong Kong Polytechnic University. The mean age of the students was 23 years (SD 4.27), and 67 (37%) were men.

Data collection

After permission was obtained from the head of each department, the students’ teachers were contacted, and arrangements for when and how data collection would be performed were decided. In most cases, data were collected in connection with mandatory lectures. All participating students provided written informed consent after receiving oral and written information about the study, including the information that participation in the study was voluntary. The study was approved by the Regional Ethical Review Board, Lund, Sweden (No 2017/198) and the institutional research ethics board at Hong Kong Polytechnic University (HSEARS20170227002–0).


Statistical analysis

Descriptive statistics were used to analyze age, gender, type of undergraduate program, and mCAS scores, which were reported as frequencies, percentages, means, standard deviations, median values, and interquartile ranges. The validity of the mCAS was tested using an exploratory factor analysis (n = 170). The Kaiser-Meyer-Olkin (KMO) test was used to measure sampling adequacy and the appropriateness of proceeding with the factor analysis (KMO > 0.60) [21]. Bartlett’s test of sphericity was performed to analyze the overall significance of correlations within the matrix (p-value < 0.05) [21]. A scree plot was inspected to graphically determine the optimal number of factors to retain [22]. In the next step, a principal component analysis with varimax rotation was used to extract factors. Generally, factor loadings > 0.50 are recommended; however, loadings > 0.30 are considered acceptable [21]. The cut-off criterion of eigenvalue > 1.0 was used for the selection of factors. Cronbach’s alpha coefficients were calculated to assess the internal consistency of the total scale and subscales. Pearson correlation coefficients were calculated for item-scale correlations. Data analysis was performed using the statistical package SPSS 24.0 (IBM Corporation, Armonk, NY, USA).
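The preliminary checks described above can be sketched in plain NumPy/SciPy. The `bartlett_sphericity` and `kmo` helpers and the simulated data below are illustrative assumptions, not the study's SPSS procedure or data:

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data):
    """Bartlett's test of sphericity: H0 is that the correlation matrix is identity."""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    chi_sq = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    df = p * (p - 1) / 2
    return chi_sq, stats.chi2.sf(chi_sq, df)

def kmo(data):
    """Kaiser-Meyer-Olkin measure of sampling adequacy (> 0.60 deemed acceptable)."""
    corr = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(corr)
    # Anti-image (partial) correlations derived from the inverse correlation matrix
    scale = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / scale
    off = ~np.eye(corr.shape[0], dtype=bool)
    r2, q2 = (corr[off] ** 2).sum(), (partial[off] ** 2).sum()
    return r2 / (r2 + q2)

# Simulated responses: 200 cases, 6 items sharing one latent factor
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
items = latent + 0.8 * rng.normal(size=(200, 6))

chi_sq, p_value = bartlett_sphericity(items)
print(f"Bartlett chi2 = {chi_sq:.1f}, p = {p_value:.3g}, KMO = {kmo(items):.2f}")
```

A low Bartlett p-value and a KMO above 0.60 would, as in the study, justify proceeding with factor extraction.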


Results

The mCAS appeared to be easy to answer, and few missing items were identified. However, for some of the items, the “does not apply” response was chosen relatively frequently. For example, 16 students chose this response for item 16 (“In classes, my instructors/supervisors have engaged in behaviors that may have made students from certain cultural backgrounds feel excluded”). Fourteen students thought that item 13 (“I have noticed that the instructors/supervisors at this university call on students from minority cultural groups when issues related to their group come up in class”) did not apply. This affected the sample size in the final factor analysis.

The factor analysis was performed on data from 170 students who had completed the mCAS and answered all items. The internal dropout rate was 11% (21 students with incomplete questionnaires). The initial exploratory factor analysis showed good sampling adequacy, with a KMO value of 0.738 and a statistically significant Bartlett’s test of sphericity (χ² = 2384.231, p < 0.001). Ten components showed an eigenvalue > 1.0, explaining 65.28% of the total variance. When the scree plot was analyzed, a clear break was seen after four components, and four was determined to be the optimal number of factors to retain. A principal component analysis with varimax rotation was performed to analyze the four-factor model. It is preferable that each item load on only one factor. However, five items (10, 15, 21, 24, and 25) loaded on multiple factors, and these items were placed into the most relevant factor group. The final factor model of the mCAS was then constructed (Table 1). The four factors explained 43.4% of the total variance. All items had a factor loading > 0.3. Factor 1, “general educational and research experience,” included 15 items and was the strongest factor (16.87% of the total variance). Factor 2, “behaviors/comfort with interactions,” included 8 items. Factor 3, “cognitive awareness,” and factor 4, “clinical issues,” included 7 and 5 items, respectively (Table 1).

Table 1 Factor loadings for the modified Cultural Awareness Scale (N = 170)
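The extraction-and-rotation procedure described above (Kaiser's eigenvalue > 1.0 criterion followed by varimax rotation) can be sketched as follows. The simulated two-factor data and the `varimax` helper (a standard textbook implementation) are illustrative assumptions, not the study's responses or code:

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Varimax rotation of a loading matrix (standard iterative SVD algorithm)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        b = loadings.T @ (rotated ** 3
                          - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0)))
        u, s, vt = np.linalg.svd(b)
        rotation = u @ vt
        if s.sum() < var * (1 + tol):   # converged: criterion no longer improving
            break
        var = s.sum()
    return loadings @ rotation

# Simulated responses: 300 cases, 6 items driven by two latent factors
rng = np.random.default_rng(1)
factors = rng.normal(size=(300, 2))
X = np.hstack([factors[:, [0]] + 0.7 * rng.normal(size=(300, 3)),
               factors[:, [1]] + 0.7 * rng.normal(size=(300, 3))])

# Principal components of the correlation matrix
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)              # eigh returns ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = int((eigvals > 1.0).sum())                    # Kaiser criterion: eigenvalue > 1.0
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])  # unrotated component loadings
rotated = varimax(loadings)                       # rotate toward simple structure
print(k)  # 2
```

Varimax is an orthogonal rotation, so it redistributes loadings across factors for interpretability without changing how much total variance the retained components explain.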

Reliability analysis

The mean total scale score was 163.5 with a standard deviation of 25.5. The mean item score for all items was 4.66, ranging from 3.78 to 5.61. Internal consistency was analyzed for the total mCAS and for each of the four subscales identified in the factor analysis. Sample size varied across the subscales due to individual missing data. The Cronbach’s alpha value was 0.88 for the total scale and ranged from 0.7 (“cognitive awareness”) to 0.9 (“general educational and research experience”) for the four subscales (Table 2).

Table 2 Internal consistency and average item score for the modified Cultural Awareness Scale and subscales
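Cronbach's alpha, used for the reliability analysis above, can be computed directly from its definition. The data below are simulated and the `cronbach_alpha` helper is an illustrative sketch, not the study's SPSS procedure:

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated subscale: 100 respondents, 5 items sharing one true score
rng = np.random.default_rng(2)
true_score = rng.normal(size=(100, 1))
items = true_score + 0.8 * rng.normal(size=(100, 5))

print(round(cronbach_alpha(items), 2))  # high for strongly intercorrelated items
```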

The intercorrelations between the four subscales showed significant correlations between “general educational and research experience,” “cognitive awareness,” and “clinical issues” and between “behaviors/comfort with interactions” and “cognitive awareness” (Table 3).

Table 3 Correlations between the four subscales of the modified Cultural Awareness Scale


Discussion

The results show that the mCAS is a four-factor measure of cultural awareness with satisfactory reliability in terms of internal consistency. This finding suggests that the mCAS may be used as a generic tool to measure cultural awareness among students in higher education within the health and social care fields.

The factor analysis supported a four-factor structure of the mCAS. Similar findings were seen in a study that tested a Turkish version of the CAS [20], which also resulted in a four-factor solution. However, the items loaded slightly differently from the present study. For example, all of the items that loaded on factor four in the Turkish study differed from those in factor four of the present study. The four-factor solution contrasts with the original five-factor structure [6] that was further confirmed in the Swedish version [18]. A five-factor solution was also confirmed by Oh et al. [19], who tested a Korean version of the CAS. However, the subscales were divided differently from the original CAS, and the subscale “general educational experience” was divided between factors two and four. Moreover, a confirmatory factor analysis in the reanalysis of the CAS provided evidence for a three-factor measure of cultural awareness [16]. This illustrates how construct validity is an ongoing process and emphasizes the importance of investigating psychometric properties for established measurements before further use in other cultural settings or target populations. In particular, establishing construct validity is important in the assessment of complex phenomena such as cultural awareness.

In the present study, five items loaded on multiple factors. It is recognized that this may have complicated interpretation of the factors [23, 24]. A similar pattern was seen in the testing of the Turkish version of the CAS [20] and in a replication of the original study [6], conducted by Krainovich-Miller et al. [13]. However, whether it concerned the same items as in our study is unknown. These findings contradicted those of studies on the original scale [6] and the Swedish [18] and Korean versions [19], none of which reported any cross-loading items. Nevertheless, it is quite common for items to load significantly on multiple factors, and, as recommended by Pett et al. [21], these items were placed into the most relevant factor group.

The internal consistency was moderate to excellent for all the subscales and the total mCAS, suggesting strong intercorrelations between the scale items. This was also the case for the original CAS [6] and in re-analyses of the CAS [13, 16]. However, both the Swedish [18] and Korean [19] versions showed poor alpha values for the factor “behavior/comfort with interactions.” For the mCAS this factor included eight items, which is two more than in the original version, and this may be one reason for the high internal consistency. These two items concerned instructor behaviors that may have made students from certain cultural backgrounds feel excluded (item 16) and aspects of the classroom environment that may alienate students from some cultural backgrounds (item 21). This suggests that the factor “behavior/comfort with interactions” included not only the ability to interact with others from different cultures but also external aspects that may influence interactions. It is worth mentioning that seven of the eight items included in the subscale “behavior/comfort with interactions” are worded negatively and were re-coded in the analysis, which in turn may have attenuated internal consistency [25].
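The re-coding of negatively worded items mentioned above amounts to reflecting responses about the midpoint of the 7-point scale (a response x becomes 8 − x, so 1 and 7 swap, 2 and 6 swap, and so on). A minimal sketch, with invented data and arbitrary column positions for the reversed items:

```python
import numpy as np

# Invented 7-point responses; the column positions of the negatively
# worded items below are illustrative, not the paper's item numbers.
responses = np.array([
    [7, 2, 6, 1],
    [5, 5, 3, 4],
], dtype=float)

negatively_worded = [1, 3]   # column indices of reverse-scored items

# Reflect reversed items about the scale midpoint: x -> 8 - x
recoded = responses.copy()
recoded[:, negatively_worded] = 8 - recoded[:, negatively_worded]
print(recoded)
# [[7. 6. 6. 7.]
#  [5. 3. 3. 4.]]
```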

Analysis of the intercorrelations between the four subscales showed some significant correlations. However, the correlation coefficients were low (r < 0.5) [26], and a negative correlation was seen between “behavior/comfort with interactions” and “clinical issues.” This corresponds to previous studies that tested the CAS [6, 18, 19], except for higher correlations between the subscales “cognitive awareness” and “clinical issues” and between “general educational experience” and “research issues” in the Korean version [19]. However, it is difficult to compare the different studies because the items loaded somewhat differently in the factor analyses. We can assume that some dimensions of cultural awareness are related and can therefore expect some of the subscales to be correlated. Nevertheless, a very high correlation between two scales may imply that the scales are measuring the same factor, and there is a need to consider whether they could be combined into one single scale [27]. The low intercorrelations between the subscales of the mCAS may therefore be acceptable and may suggest that the subscales measure related but distinct dimensions (i.e., satisfactory discriminant validity).
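Subscale intercorrelations of the kind discussed here are plain Pearson correlations between subscale totals. The sketch below uses simulated, weakly related subscale scores (invented for illustration, not the study's data):

```python
import numpy as np
from scipy import stats

# Simulated subscale totals for 200 students: both subscales share a weak
# common component, so they are related but distinct dimensions.
rng = np.random.default_rng(3)
shared = rng.normal(size=200)
subscale_a = shared + 2.0 * rng.normal(size=200)
subscale_b = shared + 2.0 * rng.normal(size=200)

# Pearson correlation between the two subscale totals
r, p_value = stats.pearsonr(subscale_a, subscale_b)
print(f"r = {r:.2f}, p = {p_value:.3g}")
```

A low but non-zero r, as here, is the pattern the paper interprets as subscales tapping related yet separate dimensions of cultural awareness.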

One possible limitation of the study may be the sample size. However, determining the sample size for a factor analysis is challenging, and varying opinions and guidelines exist in the literature [28]. Previous studies have suggested that an adequate sample size for a factor analysis is partly determined by the nature of the data [24, 28]. In fact, it has been demonstrated that sample sizes can be rather small when the communalities are high (> 0.60) and each factor is defined by several items [24, 28, 29]. In the present study, all communalities were high, and at least five items defined each factor, suggesting that the sample size was satisfactory.
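For reference, an item's communality is the share of its variance explained by the retained factors, i.e., the row sum of its squared loadings. The loading matrix below is invented for illustration, not taken from the study's Table 1:

```python
import numpy as np

# Illustrative rotated loading matrix: 3 items x 2 retained factors
loadings = np.array([
    [0.75, 0.10],
    [0.80, 0.05],
    [0.12, 0.72],
])

# Communality of each item = row sum of squared loadings
communalities = (loadings ** 2).sum(axis=1)
print(communalities.round(2))  # [0.57 0.64 0.53]
```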

In both the original CAS and the mCAS, responses to the question items are based on a 7-point scale with a midpoint choice, “no opinion,” and one additional alternative response, “does not apply.” The alternative response “does not apply” was chosen by several of the students, which was one reason for incomplete questionnaires. Some researchers argue that “don’t know” or “no opinion” alternatives lead to incomplete, less valid, and less informative data, whereas others maintain that there is no evidence to support such an impact [30]. In the case of the mCAS, it may be preferable to only have the midpoint choice “no opinion”; however, the effects of deleting the alternative response “does not apply” must be further tested and evaluated.

The mCAS exists only in English; however, most of the students were not native English speakers, which could contribute to misunderstandings and misinterpretations when responding to the questionnaire. We do not fully understand their level of English proficiency, including their ability to read, write, speak, and listen in English. However, one eligibility requirement for studies in higher education both in Sweden and Hong Kong is sufficient grades in English language courses taken in upper secondary school.


Conclusions

The mCAS showed satisfactory psychometric properties and can be recommended as a generic tool to measure cultural awareness among students in higher education. This might be particularly important as interprofessional collaboration is gaining momentum in health care education. The modified scale may thus be useful when evaluating the effectiveness of curricula and educational interventions aimed at improving cultural safety among students within the health and social care fields. However, before the mCAS can be used to evaluate the efficacy of cultural awareness interventions, further psychometric testing is needed. Studies with larger numbers of students are warranted, and the scale’s ability to detect differences between groups and its responsiveness should be tested to establish its capacity to reflect changes in cultural awareness over time.

Availability of data and materials

The data generated and analysed during the current study are available from the corresponding author upon reasonable request.



Abbreviations

CAS: Cultural Awareness Scale

KMO: Kaiser-Meyer-Olkin

mCAS: Modified version of the Cultural Awareness Scale


References

  1. Lopez G, Balik K, Radford J. Key findings about U.S. immigrants. 2018. Accessed 20 Dec 2019.

  2. Australian Bureau of Statistics. Accessed 20 Dec 2019.

  3. Statistics Sweden. Accessed 20 Dec 2019.

  4. Campinha-Bacote J. The process of cultural competence in the delivery of healthcare services: a model of care. J Transcult Nurs. 2002;13(3):181–4.

  5. Jeffreys MR, Dogan E. Evaluating the influence of cultural competence education on students’ transcultural self-efficacy perceptions. J Transcult Nurs. 2012;3(2):188–97.

  6. Rew L, Becker H, Cookston J, Khosropour S, Martinez S. Measuring culture awareness in nursing students. J Nurs Educ. 2003;42(6):249–57.

  7. Leininger M, McFarland M. Cultural care diversity and universality: a worldwide nursing theory (2nd ed). Sudbury, MA: Jones and Bartlett; 2006.

  8. Clinton JF. Cultural diversity and health care in America: knowledge fundamental to cultural competence in baccalaureate nursing students. J Cult Divers. 1996;3(1):4–8.

  9. Betancourt JR, Green AR, Carrillo E, Ananeh-Firempong O. Defining cultural competence: a practical framework for addressing racial/ethnic disparities in health and health care. Public Health Rep. 2003;118:293–302.

  10. Henderson S, Horne M, Hills R, Kendall E. Cultural competence in healthcare in the community: a concept analysis. Health Soc Care Community. 2018;26:590–603.

  11. Povenmire-Kirk TC, Bethune LK, Alverson CY, Gutmann KL. A journey, not a destination developing cultural competence in secondary transition. Teach Except Child. 2015;47(6):319–28.

  12. Curtis E, Jones R, Tipene-Leach D, Walker C, Loring B, Paine S-J, Reid P. Why cultural safety rather than cultural competency is required to achieve health equity: a literature review and recommended definition. Int J Equity Health. 2019;18:174

  13. Krainovich-Miller B, Yost JM, Norman RG, Auerhahn C, Dobal M, Rosedale M, Lowry M, Moffa C. Measuring cultural awareness of nursing students: a first step toward cultural competency. J Transcult Nurs. 2008;19(3):250–8.

  14. Loftin C, Hartin V, Branson M, Reyes H. Measures of cultural competence in nurses: an integrative review. The Scientific World Journal. 2013; Article ID 289101, 10 pages. Accessed 20 Dec 2019.

  15. Lin CJ, Lee CK, Huang MC. Cultural competence of healthcare providers: a systematic review of assessment instruments. J Nurs Res. 2016;00(0):00–00.

  16. Rew L, Becker H, Chontichachalalauk J. Cultural diversity among nursing students: reanalysis of the cultural awareness scale. J Nurs Educ. 2014;3(2):71–6.

  17. Safipour J, Hadziabdic E, Hultsjö S, Bachrach-Lindström M. Measuring nursing students’ cultural awareness: a cross-sectional study among three universities in southern Sweden. J Nurs Educ Pract. 2017;7(1):107–13.

  18. Hadziabdic E, Safipour J, Bachrach-Lindström M, Hultsjö S. Swedish version of measuring cultural awareness in nursing students: validity and reliability test. BMC Nurs. 2016;15:25.

  19. Oh H, Lee J, Schepp KG. Translation and evaluation of the Cultural Awareness Scale for Korean nursing students. International Journal of Nursing Education and Scholarship. 2015;12(1):1–8.

  20. Basalan Iz F, Bayık Temel A. Cultural awareness scale: Psychometric properties of the Turkish version. Collegian. 2017;24:499–504.

  21. Pett MA, Lackey NR, Sullivan JJ. Making sense of factor analysis. The use of factor analysis for instrument development in health care research. Thousand Oaks, California: Sage publications Inc; 2003.

  22. Zoski K, Jurs S. An objective counterpart to the visual scree test for factor analysis: the standard error scree. Educ Psychol Meas. 1996;56(3):443–51.

  23. Streiner D. Figuring out factors: the use and misuse of factor analysis. Can J Psychiatr. 1994;39:135–40.

  24. Costello AB, Osborne J. Best practices in exploratory factor analysis: four recommendations for getting the most from your analysis. Pract Assess Res Eval. 2005;10(7). Available online: http://www.pareonline.net/getvn.asp?v=10&n=7. Accessed 20 Dec 2019.

  25. Irion AL. Survey: negative-wording questions. In: Allan M, editor. The SAGE encyclopedia of communication research methods. 2018. Accessed 28 Oct 2019.

  26. Asuero AG, Sayago A, González AG. The correlation coefficient: an overview. Crit Rev Anal Chem. 2006;36(1):41–59.

  27. Fayers P, Machin D. Quality of life. The assessment, analysis and reporting of patient-reported outcomes. 3rd ed. UK: Wiley; 2016.

  28. Williams B, Brown T, Onsman A. Exploratory factor analysis: a five-step guide for novices. Australas J Paramed. 2010;8(3). Accessed 20 Dec 2019.

  29. MacCallum RC, Widaman KF, Preacher KJ, Hong S. Sample size in factor analysis: The role of model error. Multivariate Behavioral Research. 2001;36(4):611–637. Accessed 20 Dec 2019.

  30. DeCastellarnau A. A classification of response scale characteristics that affect data quality: a literature review. Qual Quant. 2018;52:1523–59.



Acknowledgements

The authors would like to acknowledge the students at Malmö University and Hong Kong Polytechnic University who took the time to answer the CAS questionnaire, and all the teachers who facilitated our access to the students.


Funding

The project was funded by a collaborative research grant from the Faculty of Health and Society, Malmö University, and the Faculty of Health and Social Sciences, Hong Kong Polytechnic University, but the authors had full autonomy over the study design, data collection, analysis and interpretation as well as the contents of the final manuscript. Open Access funding provided by Malmö University.

Author information

Authors and Affiliations



Contributions

Study design: CK, MB, EAC, DL, EC. Data collection and analysis: CK, LR, PSC, DL, EC. Manuscript preparation: CK, MB, EAC, LR, PSC, DL, EC. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Christine Kumlien.

Ethics declarations

Ethics approval and consent to participate

All participating students provided written informed consent after receiving oral and written information about the study, including the information that participation in the study was voluntary. All data were processed anonymously and cannot be traced to any individual student. The study was approved by the Regional Ethical Review Board, Lund, Sweden (No 2017/198) and the institutional research ethics board at Hong Kong Polytechnic University (HSEARS20170227002–0).

Consent for publication

Not applicable.

Competing interests

None declared.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Kumlien, C., Bish, M., Chan, E.A. et al. Psychometric properties of a modified cultural awareness scale for use in higher education within the health and social care fields. BMC Med Educ 20, 406 (2020).



Keywords

  • Cultural awareness
  • Cultural competence
  • Cultural awareness scale
  • Factor analysis