
Effect of a multimodal training on the ability of medical students to administer the MMSE: a comparative study



Abstract

Background

The Mini-Mental State Examination (MMSE) is the main screening and follow-up test for neurocognitive disorders. In France, it is often administered by medical students. The conditions required to administer the MMSE are strict but poorly known by students, leading to scoring errors. Our objectives were to assess the effect of multimodal training on medical students’ ability to administer the MMSE and to describe their previous training.


Methods

Seventy-five medical students in their 4th to 6th year of study were included. Previous MMSE training was assessed by a standardized questionnaire. The teaching material used for our training consisted of the article validating the MMSE in French, a video explaining step by step how to administer the MMSE, and MMSE scoring exercises. The ability to administer the MMSE was assessed by a standardized practical exam (SPE). Students were self-selected and assigned to two groups, one receiving the full training before the SPE, the other receiving only the article before the SPE.


Results

Forty-one students were included in the training group and 34 in the control group. There was no difference between groups regarding previous training. Overall, 71% of the students had already administered an MMSE and only 17% had received specific training; students mostly considered their previous training insufficient. The overall score and the scores of each subpart of the SPE were significantly higher in the training group than in the control group (overall score, median [IQR]: 71 [62–78] vs. 52 [41–57], p < 0.001). The rate of students able to complete the MMSE was higher in the training group than in the control group (85% vs. 44%, p < 0.001). The quality of the training and its usefulness were judged good or very good by all participants.


Conclusions

Multimodal training improves the ability of medical students to administer the MMSE.

Key messages

  • Multimodal training improves the ability of medical students to administer the MMSE.

  • After multimodal training, most medical students are able to administer the MMSE.



Background

Major neurocognitive disorders (MND) are common and their prevalence is increasing [1]. MND are underdiagnosed and lead to dependency, repeated hospitalizations and decreased life expectancy [2, 3]. In addition, this diagnosis carries considerable weight in decisions to limit care, as MND are frequently associated with limitation of surgical, oncological or intensive care [4,5,6]. Screening, diagnosis, severity assessment and monitoring of MND are largely based on neuropsychological tests [7]. These tests assess overall cognitive efficiency or are specific to certain cognitive domains such as memory, language, praxis or executive functions [7, 8].

Among the tests assessing overall cognitive efficiency, the Mini-Mental State Examination (MMSE) is the most widely used for screening, monitoring and evaluating severity of cognitive disorders, in France and worldwide [9]. It is quick to administer (< 10 min) and validated in many languages. However, it requires strict administration conditions, as well as standardized completion and scoring [10,11,12]. The MMSE is scored out of 30 points and evaluates temporo-spatial orientation (10 points), learning with an immediate recall of 3 words (3 points), attention by a calculation test (5 points), memory by the delayed recall of the 3 words (3 points), language (8 points) and visuo-constructive praxis (1 point).
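As an illustration, the point structure just described can be encoded as a simple table whose weights sum to the 30-point total (the domain labels follow the text; the dictionary and helper are only a sketch, not part of the MMSE materials):

```python
# MMSE domain weights as listed in the text (maximum total: 30 points).
MMSE_DOMAINS = {
    "temporo-spatial orientation": 10,
    "learning (immediate recall of 3 words)": 3,
    "attention (calculation test)": 5,
    "memory (delayed recall of the 3 words)": 3,
    "language": 8,
    "visuo-constructive praxis": 1,
}

def total_score(domain_scores: dict) -> int:
    """A patient's total is the sum of the domain scores, each capped
    by that domain's maximum weight."""
    assert all(0 <= domain_scores[d] <= w for d, w in MMSE_DOMAINS.items())
    return sum(domain_scores.values())

print(sum(MMSE_DOMAINS.values()))  # 30
```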

This test can be administered by trained neuropsychologists, neurologists, geriatricians and general practitioners, but also by other health professionals such as advanced practice nurses or trained nurses [13]. In practice, in French hospitals the MMSE is frequently administered by medical students. However, medical students generally score the MMSE incorrectly, and the administration and scoring instructions are frequently unknown to them [14]. Moreover, non-standardized bedside training in hospital wards has poor inter-rater reliability and does not seem to reduce the number of errors made during standardized scoring exercises [14]. In addition, despite short MMSE training, general practitioners score the test significantly higher than neuropsychologists, who are considered the gold standard, and only half of trained nurses adequately rated the MMSE on 6 filmed clinical vignettes [13, 15].

Our objective was to evaluate the effect of standardized multimodal training on the ability of medical students to administer an MMSE. The secondary objectives were to assess students’ previous training in MMSE administration and its impact on our results, and students’ satisfaction with this training.


Methods

This single-centre prospective comparative study was carried out in our geriatric department. We included all medical students in their 4th to 6th year of study carrying out an internship in the geriatric and post-emergency departments between July 2021 and February 2022. In France, medical students learn neurology, psychiatry and geriatrics between the 4th and 6th years of study. They spend around 20 h a week in hospital and are allowed to carry out a clinical examination, including medical history-taking, and follow-up under the supervision of a doctor. At our university, students are randomly assigned to different hospital departments for 2- to 3-month rotations to learn about different medical specialties.

Study design (Additional Fig. 1)

Each student completed a questionnaire (Additional Material 1) assessing the year of study, previous training in administering an MMSE, the number of MMSEs previously seen or administered, whether the first administration was supervised, and whether the student considered the MMSE training received during medical school sufficient.

Students were self-selected and assigned to 2 groups according to their availability to attend the different training modules, regardless of their previous training, year of study or personal preference:

  • A training group, which received the standardized multimodal MMSE training: the original article validating the MMSE in French and detailing the conditions of administration, a training video explaining step by step how to administer an MMSE, and an MMSE scoring exercise session.

  • A control group, which received only the original article validating the MMSE.

The ability to administer the MMSE was assessed in both groups by a standardized practical exam (SPE) with a standardized scenario and a simulated patient, in the same way as a one-station Objective Structured Clinical Examination (OSCE), between 2 and 5 days after completion of the training or after reading the article, depending on the group.

Students in the control group, who did not receive the full training, received it after completing the SPE (Additional Fig. 1).

Students’ satisfaction with the training was evaluated by questionnaire after completion of the training and of the SPE. The quality of the learning materials, the duration and quality of the training, and students’ satisfaction with the learning methods were evaluated. Because the order of the modules differed between groups, students’ satisfaction was compared between groups (Additional Material 2).

Description of the training (Additional Fig. 2)

The learning material was built for the study and was composed of 4 modules:

  • The article validating the MMSE in French and detailing the administration and scoring instructions [10]. This article specifies which vague patient responses may or may not be accepted and how many times the examiner may repeat an instruction, so that the test remains standardized. All participants in the study received this article, which corresponds to the minimum required to administer an MMSE.

  • A 25-minute video explaining the steps of the MMSE. It presented a scenario in which a neuropsychologist administered the test to a simulated patient with a normal MMSE. This allowed the student to understand in practice how to administer the test and how to interact with the patient. Between each step, a commented slide described the administration and scoring instructions and common errors made by examiners.

  • A one-hour session dedicated to MMSE scoring and error detection in exercise videos. These sessions were carried out face-to-face, in groups of 4 to 8 students, with a doctor in charge of the study, to promote interactivity. Each session was divided into 2 parts:

    • The first part was inspired by Hernandorena et al. [14] and consisted of 2 role-playing videos (A and B), used in our study for teaching purposes. In these videos, a neuropsychologist played her own role and administered an MMSE to a simulated patient. In video A, the simulated patient responded with 5 standardized errors and students were asked to score the MMSE appropriately. In video B, the neuropsychologist made 5 administration/scoring errors, which students had to identify in a pre-filled MMSE grid. At the end of each exercise, the doctor in charge of the study provided a detailed debriefing.

    • The second part consisted of scoring the MMSE in 4 videos recorded during real consultations with our neuropsychologist. Patients’ oral consent was obtained before recording and their faces were blurred to preserve anonymity. A debriefing with the doctor in charge of the study was carried out at the end of each video. The 4 situations correspond to types of patients frequently encountered in geriatrics (e.g., hypoacusis, Parkinson’s disease, several degrees of cognitive impairment).

  • An SPE to assess students’ ability to administer the MMSE. This step had both a teaching and an evaluative objective, as described below.

SPE assessment

All the documents related to this SPE are presented in Additional Material 3. In this SPE, the student first had to check that the prerequisite conditions for administering an MMSE were met (e.g., the patient’s identity, level of education and ability to read and count, no severe hearing or visual impairment, no recent psychotropic medication prescription, native French speaker) and then had to administer the test to a simulated patient, played by the doctor in charge of the study, according to a standardized scenario, within a maximum of 10 min. The expected MMSE score of the simulated patient was 18/30.

The weighting of the SPE evaluation grid was validated by two geriatricians and a neuropsychologist trained in the administration of the MMSE. Out of a maximum final score of 100 points, 25 points were allocated to the verification of the prerequisite administration conditions and 65 points to compliance with the rules of administration and scoring of the MMSE: 13 points for the orientation part, 10 for learning, 14 for calculation, 3 for recall, 20 for language and 5 for praxis. The last 10 points rated the quality of the relationship between the medical student and the simulated patient, on a scale from 0 to 10, 10 being the best score. The SPE was tested on 5 students before the start of the study, to train the simulated patient and to identify any unexpected responses or reactions of the students to the scenario. After this pre-test, no changes were made to the SPE scenario.
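A minimal sketch of the grid’s arithmetic (weights as stated in the text; the helper function and its name are ours, not part of the study materials):

```python
# SPE grid weights as described in the text (maximum total: 100 points).
PREREQUISITES_MAX = 25
MMSE_RULES_MAX = {  # compliance with administration/scoring rules: 65 points
    "orientation": 13, "learning": 10, "calculation": 14,
    "recall": 3, "language": 20, "praxis": 5,
}
RELATIONSHIP_MAX = 10

def overall_spe_score(prereq: float, rules: dict, relationship: list) -> float:
    """Sum the three parts; the relationship score is averaged over the
    examiners, as was done in the study for that (more subjective) part."""
    return prereq + sum(rules.values()) + sum(relationship) / len(relationship)

# Sanity check: the weights sum to the 100-point maximum.
print(PREREQUISITES_MAX + sum(MMSE_RULES_MAX.values()) + RELATIONSHIP_MAX)  # 100
```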

The evaluation was carried out by 2 examiners: the neuropsychologist and a geriatrician of our department trained to administer this test. The neuropsychologist was blind to the student’s group (training or control). In addition to the SPE scoring, the neuropsychologist specified for each student whether he or she was able to administer an MMSE.

The SPE also contributed to the training of students, since each student, regardless of group, received a personalized debriefing from the geriatrician in charge of the study immediately after the SPE.

Judgement criteria

The primary outcome was:

  1. The comparison of the overall SPE scores between the training and the control groups.

Secondary outcomes were:

  2. Comparison of the scores of each subpart of the SPE between these 2 groups.

  3. Comparison between these 2 groups of the rate of students able to administer an MMSE without misclassifying the severity of the cognitive impairment. A gap of more than 3 points from the expected MMSE result of the simulated patient, i.e. a score outside the 15–21 range, was considered clinically relevant.

  4. Comparison of the overall SPE scores between the training and the control groups according to students’ previous training or year of study.

  5. Description of students’ satisfaction with the training.

  6. Description of students’ previous training in MMSE administration.
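The severity-misclassification rule of outcome 3 can be written out explicitly (the 18/30 expected score of the simulated patient and the 3-point tolerance come from the text; the function name is a hypothetical illustration):

```python
EXPECTED_MMSE = 18   # expected score of the simulated patient (18/30)
TOLERANCE = 3        # clinically relevant gap

def misclassifies_severity(found_score: int) -> bool:
    """True if the student's MMSE result lies outside the 15-21 range,
    i.e. more than 3 points from the expected 18/30."""
    return abs(found_score - EXPECTED_MMSE) > TOLERANCE

# Scores of 15 to 21 inclusive are considered acceptable.
print([s for s in range(13, 24) if not misclassifies_severity(s)])
```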

Statistical analyses

All data were collected and anonymized in a separate Microsoft Excel® file, and the analysis was carried out after anonymization. Results are presented as median [IQR, 25th–75th percentiles] or as absolute values and percentages (%).

To assess the primary and secondary outcomes, the 2 groups were compared with Fisher’s exact test for qualitative variables and the Mann-Whitney U test for quantitative variables. To assess whether previous training impacted our results (secondary outcome 4), overall SPE scores were compared between groups using linear regression, evaluating the interaction between group and having previously seen an MMSE administered, having previously administered an MMSE, or the year of study.

A p-value < 0.05 was considered significant. Statistics were performed using RStudio 1.4.1106.
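As a minimal sketch of these comparisons (synthetic scores, not the study data; the authors used R, so this Python/SciPy version is only illustrative):

```python
from scipy import stats

# Synthetic overall SPE scores, for illustration only.
training = [71, 62, 78, 75, 66, 80, 69, 73]
control = [52, 41, 57, 49, 55, 44, 50, 47]

# Quantitative variable: Mann-Whitney U test between the two groups.
_, p_quant = stats.mannwhitneyu(training, control, alternative="two-sided")

# Qualitative variable (e.g., "able to administer the MMSE"): Fisher's exact
# test on a 2x2 table, rows = group, columns = able / not able.
_, p_qual = stats.fisher_exact([[35, 6], [15, 19]])

print(f"Mann-Whitney p = {p_quant:.4g}, Fisher p = {p_qual:.4g}")
```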


Results

Population characteristics

Seventy-five students were included; their characteristics are presented in Table 1. Forty-one students were in their 6th year, 18 in their 5th year, and 16 in their 4th year. More than half of the students (n = 43) had already done an internship in a department where the MMSE is commonly administered, such as neurology or geriatrics.

Table 1 Characteristics of the students before training, overall and according to the group

Previous MMSE training (secondary outcome 6)

Before our training, 13 students had been trained on how to administer an MMSE. Before administering their first MMSE, half of the students (n = 38) had seen someone administer one at least once, most often a peer medical student (n = 31). Fifty-three students had already administered one or more MMSEs, and most were alone when administering their first one (n = 41). Three-quarters of the students (n = 56) considered their previous training in administering an MMSE insufficient or very insufficient.

Thirty-four students were included in the control group and forty-one in the training group. There was no difference between groups regarding the characteristics of the population (Table 1).

Comparison of SPE scores

There was no discrepancy in SPE scoring between the neuropsychologist and the geriatrician (intraclass correlation coefficient, ICC = 1) for the parts “verification of prerequisite administration conditions” (25 points) and “compliance with the rules of administration and scoring of the MMSE” (65 points), suggesting excellent inter-observer reproducibility of the scoring grid. The average of the two examiners’ scores was used for the “quality of the doctor-patient relationship” part (10 points), which was more subjective, as shown by a moderate ICC of 0.68 (0.50–0.80).


Primary outcome

Students in the training group had a significantly higher overall SPE score than the control group (median [IQR]: 71 [62–78] vs. 52 [41–57], p < 0.001) (Fig. 1).

Fig. 1 Overall SPE scores in the control and training groups. SPE: Standardized Practical Exam. The horizontal bar is the median. The maximum possible score on the SPE was 100

Secondary outcome: results for each subpart of the SPE

Students in the training group had significantly higher scores than the control group for each subpart of the SPE: “prerequisite conditions verification” (median [IQR]: 8 [8–16] vs. 0 [0–4], p < 0.001), “administration and scoring of the MMSE” (median [IQR]: 55 [51–57] vs. 44 [34–49], p < 0.001) and “quality of the doctor-patient relationship” (median [IQR]: 7 [7–8] vs. 6 [6–7], p = 0.001) (Table 2). The training group also had significantly higher scores than the control group for the learning, calculation, recall and language sub-sections of the “administration and scoring of the MMSE” subpart (Table 2).

Table 2 Comparison of overall and detailed SPE scores and ability to administer the MMSE between the control and training groups

Secondary outcome: ability to administer an MMSE without misclassifying severity

The rate of students considered able to administer an MMSE by the neuropsychologist was significantly higher in the training group than in the control group (85% vs. 44%, p < 0.001) (Table 2). Six students (17.6%) in the control group, and none in the training group, found an MMSE score outside the 15–21 range (p = 0.007) and could thus be considered as misclassifying the severity of the patient’s impairment (Additional Fig. 3).

Secondary outcome: impact of previous training/year of study

Students who had already administered an MMSE had a better overall SPE score than those who had not (β = 11, 95% CI [2.8; 19], p = 0.009) (Fig. 2), but the benefit of the training was similar in these two populations (“group” × “MMSE administered previously” interaction: β = -8.9, 95% CI [-21; 2.8], p = 0.130, adjusted R² = 0.47). Neither the year of study (4th and 5th year vs. 6th year) nor having already seen an MMSE administered impacted the overall SPE score (β = 7.2, 95% CI [-1.0; 16], p = 0.089 and β = 4.3, 95% CI [-4.0; 13], p = 0.302, respectively) or the benefit of the training (p = 0.201, adjusted R² = 0.44 for the “group” × “year of study” interaction; p = 0.415, adjusted R² = 0.43 for the “group” × “MMSE seen previously” interaction).

Fig. 2 Comparison of overall SPE scores between the control and training groups according to whether students had administered an MMSE before the training. SPE: Standardized Practical Exam. The horizontal bar corresponds to the median. The maximum possible score on the SPE was 100

Assessment of students’ satisfaction

Three students did not complete the satisfaction questionnaire. The results of this evaluation are presented in Table 3.

Table 3 Satisfaction of the medical students on the training

The quality of the different modules was judged good or very good in most cases. All participants rated the quality of the SPE and of the scoring exercises as good or very good. The duration of the training video and of the scoring exercises was considered appropriate in more than 80% of cases. Students considered the scoring exercise module the most useful. The usefulness of the training, the quality of the teaching and the post-training ability to administer an MMSE were considered good or very good by all participants. There was no difference between the two groups, or according to the year of study, regarding any of these parameters.


Discussion

This study shows for the first time the benefit of multimodal training on how to administer an MMSE for medical students. We also show that two-thirds of our students had already administered an MMSE, most without prior training or supervision by a senior. These results confirm the overall impression of geriatricians and neurologists, as well as the scarce results from the literature, that French medical students receive insufficient teaching on the MMSE [14]. Neuropsychologists learn the MMSE through theoretical teaching as well as practical teaching by tutoring, and their ability to administer an MMSE is systematically validated by a qualified neuropsychologist accustomed to performing it. Conversely, medical students’ training is incomplete and inhomogeneous, and their ability to administer an MMSE is rarely validated, even though the MMSE requires strict administration conditions that are generally unknown to them [14]. Although most students had previously administered MMSEs, they considered their training in administering this test insufficient. This justifies the implementation of systematic and standardized training during medical school.

Errors in the scoring of an MMSE can occur when the instructions on how to administer or score the test are unknown, or when an MMSE is administered in unfavourable circumstances (e.g., delirium, major sensory impairments). In both situations, the consequences are serious, since a low MMSE is often interpreted by non-specialized doctors as a synonym for MND. This may wrongly lead to decisions to limit invasive therapies, for example in emergency medicine, intensive care or oncology [4,5,6].

Methodological issues

Regarding the methodology used, we provide educational materials that can be used by different instructors. The initial training video can be used on a large scale and in distance learning. Scoring exercise sessions can be carried out by a trained doctor, but also by a neuropsychologist. This teaching material was developed by 2 doctors familiar with the MMSE, as well as by a neuropsychologist, considered the “gold standard” in administering the MMSE.

The time required to carry out this training can be considered short: 25 min of video and 1 h of face-to-face exercise session for up to 8 students. Thus, it is possible to offer standardized training on how to administer an MMSE to numerous students. We believe that all medical students should be trained in how to administer an MMSE, as screening for neurocognitive disorders is not carried out only by geriatricians or neurologists. General practitioners are at the forefront of this screening, which explains the 2016 decision to create a specific fee code for these consultations in France [16, 17]. However, they are not fully trained to carry out these tests [18].

The SPE allows a standardized, reproducible evaluation of learners, with a rating grid developed by two geriatricians and a neuropsychologist. This type of evaluation seems well suited to the MMSE because of its well-codified conditions of administration and the limited room for interpretation in the scoring grid. Indeed, the inter-observer reproducibility of this SPE rating grid was excellent: there was no difference in scoring between the doctor and the neuropsychologist for the “prerequisite administration conditions verification” and “MMSE administration and scoring” parts. At our university, students are used to being assessed by OSCE examinations, so evaluation by an SPE was not a potential bias in our study. Moreover, OSCEs are now part of the French national examination for 6th-year medical students, and a station on the ability to administer an MMSE could be included in this type of examination. Finally, the SPE has a teaching objective, allowing students to note their own difficulties in a “real situation” and to benefit from an immediate debriefing of the main errors at the end of the SPE [19].

In our study, all participants had the article validating the MMSE in French before the SPE. We considered this the minimum knowledge required to administer an MMSE. However, in our study as in others [14], students had already administered MMSEs in everyday clinical practice without this minimum knowledge. Thus, our control group probably had greater knowledge of how to administer an MMSE than that usually available to students before administering their first one. This may reduce the difference in SPE scores observed between the control and training groups. In addition, the overall SPE score was better for students who had administered an MMSE before our training, suggesting a benefit of even minimal informal bedside training.

SPE results

Despite this, learners’ overall ability to administer the MMSE was better in the training group, regardless of whether they had administered MMSEs before the training. Each of the 3 components of the SPE was improved by the training, even the “quality of the doctor-patient relationship”. Some studies do not demonstrate a benefit of short MMSE training for general practitioners or nurses: in these studies, trainees failed to obtain the same final MMSE score as neuropsychologists [13, 15]. However, the differences between general practitioners and neuropsychologists were slight [15] and, in most cases, patients were not misclassified [13]. This suggests a possible benefit of such training, but as it was not standardized and no detailed description of it was given, it is difficult to generalize to all medical students. In addition, the performance of students in MMSE scoring or error detection exercises is poor after informal bedside training in hospital departments, the usual situation observed in France [14]. Another study showed that, when the administration of an MMSE was evaluated during an SPE, the MMSE was correctly scored by only 78% of students after formal training using clinical cases and vignettes, although all students considered themselves able to administer it [20]. However, this study did not specify how scoring success was defined. Our results therefore argue in favour of more standardized training, of longer duration, using multiple teaching materials.


Satisfaction with our training was good or very good in most cases. The duration of the training was mostly considered appropriate, and the quality of the materials and of the learning methods was highlighted. At the end of the training, all students considered their ability to administer the MMSE good or very good, whereas most had considered it insufficient or very insufficient before. This argues in favour of generalizing this type of training tool to medical students. In addition, there was no difference between groups in the assessment of the training’s quality, suggesting that starting with the SPE (control group) was not considered deleterious by students. This can be explained by the value of having been placed in a practical situation before returning to more theoretical training.


Limitations

This study has some limitations. First, SPE scoring is well standardized for the “prerequisite administration conditions” and “MMSE administration” parts, but assessment of the quality of the doctor-patient relationship is more subjective. To limit the risk of bias, we used the average of the scores given by the two examiners. We also cannot be sure how having the simulated patient played by the doctor, rather than using a real patient, influenced the evaluation of the relationship. Conversely, the SPE scores were perfectly correlated between the neuropsychologist and the geriatrician for the “verification” and “compliance” parts, probably because these two examiners work in the same team and were involved in the construction of the scoring grid. This could be a limitation to exporting our SPE to other teams with different examiners.

Second, results might be better in our study than in real life, as students may have revised more before taking the SPE than they would have otherwise [19]. However, this bias affects both groups, and assessing students’ ability to administer the MMSE in an unstandardized situation with a real patient would have been a source of variability. Third, our research protocol could have been improved by using a trained simulated patient rather than a doctor, randomizing students into the two groups, keeping the time between training and SPE exactly the same, or assessing MMSE skills before and after training. Fourth, we did not assess the long-term persistence of the training’s benefit. Finally, interpretation of the MMSE was not assessed because it was outside the objectives of the training.


Perspectives

The perspectives of this work are multiple. First, we want to extend this training to other health care professionals involved in MMSE administration, such as advanced practice nurses, geriatric nurses and general practitioners training in geriatrics. Second, the dissemination and validation of this training course at the national level for French students is already planned. Third, this type of training could be developed for other standardized neuropsychological tests, but also for other geriatric assessment tools, such as those dedicated to the assessment of dependence, autonomy or delirium. This could help raise future doctors’ awareness of geriatric issues and thus improve the care of elderly patients [21].


Conclusion

This study demonstrates for the first time that multimodal MMSE training improves medical students’ ability to administer this test. The perspectives of this work are numerous, particularly in terms of extension to other professionals involved in the screening of neurocognitive disorders. Further studies are needed to assess the long-term persistence of learners’ ability to administer an MMSE.

Data availability

Anonymized data are available on reasonable request from the corresponding author, Dr Frédéric ROCA.



Abbreviations

MMSE: Mini-Mental State Examination

MND: Major neurocognitive disorders

SPE: Standardized practical exam


References

  1. GBD 2016 Dementia Collaborators. Global, regional, and national burden of Alzheimer’s disease and other dementias, 1990–2016: a systematic analysis for the Global Burden of Disease Study 2016. Lancet Neurol. 2019;18:88–106.


  2. Scheltens P, Blennow K, Breteler MMB, de Strooper B, Frisoni GB, Salloway S, et al. Alzheimer’s disease. Lancet. 2016;388:505–17.


  3. Mitchell SL. Advanced Dementia. N Engl J Med. 2015;372:2533–40.


  4. Schulte PJ, Martin DP, Deljou A, Sabov M, Roberts RO, Knopman DS et al. Effect of Cognitive Status on the Receipt of Procedures Requiring Anesthesia and Critical Care Admissions in Older Adults. Mayo Clin Proc. 2018;93:1552–62.

  5. Garrouste-Orgeas M, Tabah A, Vesin A, Philippart F, Kpodji A, Bruel C, et al. The ETHICA study (part II): simulation study of determinants and variability of ICU physician decisions in patients aged 80 or over. Intensive Care Med. 2013;39:1574–83.


  6. Repetto L, Venturino A, Fratino L, Serraino D, Troisi G, Gianni W, et al. Geriatric oncology: a clinical approach to the older patient with cancer. Eur J Cancer. 2003;39:870–80.


  7. Hugonot-Diener L, Sellal F, Thomas-Antérion C. Gremoire 2: tests et échelles des maladies neurologiques avec symptomatologie cognitive. De Boeck Superieur; 2015.

  8. Godefroy O. Grefex (Groupe De Réflexion pour l’évaluation des Fonctions Exécutives): Fonctions exécutives et pathologies neurologiques et psychiatriques: évaluation en pratique clinique. Marseille: Solal; 2008.


  9. Moyer VA, U.S. Preventive Services Task Force. Screening for cognitive impairment in older adults: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med. 2014;160:791–7.


  10. Derouesne C, Poitreneau J, Hugonot L, Kalafat M, Dubois B, Laurent B. [Mini-Mental State Examination: a useful method for the evaluation of the cognitive status of patients by the clinician. Consensual French version]. Presse Med. 1999;28:1141–8.


  11. Molloy DW, Standish TI. A guide to the standardized Mini-mental State Examination. Int Psychogeriatr. 1997;9(Suppl 1):87–94. discussion 143–150.


  12. Kalafat M, Hugonot-Diener L, Poitrenaud J. The Mini Mental State (MMS): French standardization and normative data [Standardisation et étalonnage français du Mini Mental State (MMS) version GRÉCO]. Revue De Neuropsychologie. 2003;13:209–36.


  13. Queally VR, Evans JJ, McMillan TM. Accuracy in scoring vignettes using the mini mental state examination and the short orientation memory concentration test. J Geriatr Psychiatry Neurol. 2010;23:160–4.

    Article  PubMed  Google Scholar 

  14. Hernandorena I, Chauvelier S, Vidal J-S, Piccoli M, Coulon J, Hugonot-Diener L, et al. Do medical French students know how to properly score a mini mental state examination? Geriatr Psychol Neuropsychiatr Vieil. 2017;15:163–9.

    PubMed  Google Scholar 

  15. Fabrigoule C, Lechevallier N, Crasborn L, Dartigues J-F, Orgogozo J-M. Inter-rater reliability of scales and tests used to measure mild cognitive impairment by general practitioners and psychologists. Curr Med Res Opin. 2003;19:603–8.

    Article  PubMed  Google Scholar 

  16. Assurance, Maladie. ameli. Tarifs conventionnels des médecins généralistes en France métropolitaine. 2022. Accessed 31 Mar 2022.

  17. Assurance, Maladie. ameli. Test d’évaluation d’un déficit cognitif - Code ALQP006. Accessed 31 Mar 2022.

  18. Wojtowicz A, Larner AJ. General Practitioner Assessment of Cognition: use in primary care prior to memory clinic referral. Neurodegener Dis Manag. 2015;5:505–10.

    Article  PubMed  Google Scholar 

  19. Pugh D, Desjardins I, Eva K. How do formative objective structured clinical examinations drive learning? Analysis of residents’ perceptions. Med Teach. 2018;40:45–52.

    Article  PubMed  Google Scholar 

  20. Karani R, Leipzig RM, Callahan EH, Thomas DC. An unfolding case with a linked objective structured clinical examination (OSCE): a curriculum in inpatient geriatric medicine. J Am Geriatr Soc. 2004;52:1191–8.

    Article  PubMed  Google Scholar 

  21. St Onge J, Ioannidis G, Papaioannou A, McLeod H, Marr S. Impact of a mandatory geriatric medicine clerkship on the care of older acute medical patients: a retrospective cohort study. BMC Med Educ. 2013;13:168.

    Article  PubMed  PubMed Central  Google Scholar 

Download references


Acknowledgements

The authors thank all medical students who participated in this work, and Sophia Braund for editing the English of the manuscript.



Author information




LL, FR, and PC designed the study; LL, FR, CK, KR, and DL participated in the creation of the tools and in the assessment of students; LL, FR, PC, and CK analyzed the results; LL and FR wrote the manuscript; all authors reviewed the manuscript and approved the final version.

Corresponding author

Correspondence to Frédéric Roca.

Ethics declarations

Ethics approval and consent to participate

Informed consent to participate in the study was obtained from all students. All methods were carried out in accordance with relevant guidelines and regulations. Under French legislation (« Loi n° 2012-300 du 5 mars 2012 relative aux recherches impliquant la personne humaine », article L1121-1), no medical ethics committee approval was required because of the pedagogical nature of the study. Personal data and participants’ privacy were protected in accordance with the guidelines of the CNIL (Commission Nationale de l’Informatique et des Libertés), based on the RGPD (règlement général sur la protection des données, the EU General Data Protection Regulation). The data were protected and stored on a dedicated drive in our hospital.

Consent to publish

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Roca, F., Lepiller, L., Keroulle, C. et al. Effect of a multimodal training on the ability of medical students to administer the MMSE: a comparative study. BMC Med Educ 24, 133 (2024).
