
Students’ perceptions, engagement and satisfaction with the use of an e-rubric for the assessment of manual skills in physiotherapy

Abstract

Introduction

In recent years, formative assessment has gained importance in health care education to facilitate and enhance learning throughout the training period. Within the frame of active methodologies, rubrics have become an essential instrument for formative assessment. Most rubric-based assessment procedures focus on measuring the effects of rubrics on teachers. However, few studies focus their attention on the perception that students have of the evaluation process through rubrics.

Methods

A cross-sectional survey study was carried out with 134 students enrolled in the pre-graduate Physiotherapy education. Assessment of manual skills during a practical examination was performed using an e-rubric tool. Peer-assessment, self-assessment and teacher´s assessment were registered. After completion of the examination process, students’ perceptions, satisfaction and engagement were collected.

Results

Quantitative results were obtained on students’ opinion of the e-rubric-based assessment, students’ engagement, the perceived benefits and drawbacks of the e-rubric, and the overall assessment of the learning experience. 86.6% of the students agreed that “the rubric allowed one to know what is expected from the examination” and 83.6% agreed that “the rubric allowed one to verify the level of competence acquired”. A high rate of agreement (87.3%) was also reached among students concerning feedback.

Conclusions

E-rubrics seemed to have the potential to promote learning by making criteria and expectations explicit and by facilitating feedback, self-assessment and peer-assessment. Giving students an active role in their own learning process required their participation in the assessment task, a fact that was globally appreciated by the students. The learning experience was considered interesting and motivating, and it promoted participation, cooperative work and peer-assessment. The use of e-rubrics increased engagement levels when attention was focused on their guidance and reflection role.


Introduction

With the incorporation of Spanish universities into the European Higher Education Area, an important structural, organizational and methodological adaptation seemed necessary. One of the most relevant changes was the establishment of competencies [1]. In this way, the so-called competency training model arose, and assessment in higher education shifted from traditional testing of knowledge towards assessment for learning. The new assessment culture aimed at assessing higher-order thinking processes and competencies instead of factual knowledge and lower-level cognitive skills [2].

In the health care setting, competency-based medical education (CBME) has become a widely recommended approach to graduate and post-graduate medical education internationally during the last decades. This approach requires summative assessment (assessment of learning) at the end of the student's training period to establish the level of competence and assure that competencies have been achieved to a certain standard [3].

However, in recent years, formative assessment has gained importance in health care education to facilitate and enhance learning throughout the training period [4]. Formative assessment (assessment for learning) aims to identify students’ strengths and weaknesses and to be conducive to progress by identifying learning needs and providing feedback, in the sense of giving information about the difference between a student's current level of skills and a given standard [5]. Although assessment always has a summative aspect, formative assessment has the potential to provide feedback and give direction for further development.

The use of rubrics as part of active learning pedagogies

Within the frame of active learning pedagogies, rubrics have become an essential instrument for formative assessment. Rubrics appear to have the potential to promote learning because they make expectations and criteria explicit, facilitating feedback, self-evaluation and peer-assessment [1].

A rubric can be defined as a set of quality criteria related to the competence or competencies to be evaluated, determined by descriptors that describe different levels of achievement or performance. Rubrics can be easily understood and applied by teachers and students, and even by external evaluators [6, 7]. Rubrics are a very useful tool to provide students with feedback and to align their expectations with the evaluation process. They allow students to check whether their results coincide with the teachers' expectations, avoiding potential disappointment or frustration if the result of the work carried out is not exactly as expected [8].

Thanks to recent developments in technology, students have become more involved in, and aware of, their own assessment process, and interesting cooperative assessment modalities have emerged, such as peer evaluation and self-assessment, all of which enable students to be better judges of their own work. As students increasingly work in technology-based environments, the shift from a paper rubric to an e-rubric offers the following advantages: a) it is easier to use; b) feedback, one of the central points of formative assessment, can be given much more quickly; c) students can better self-regulate their learning process; d) it provides more interaction; and e) it fosters students’ autonomy in the evaluation of their competences [9, 10].

In a peer-assessment activity, students take responsibility for assessing the work of their peers against set assessment criteria. In formative peer assessment, the intention is to help students help each other when planning their learning. The students expand their knowledge in a social context of interaction and collaboration according to social constructivism principles [11,12,13]. Self-assessment has been introduced in the classroom with the intention of promoting students’ monitoring and self-evaluation strategies as it usually leads to a deeper understanding of the academic tasks [14].

The use of rubrics in physiotherapy education

Clinical education is defined as the provision of guidance and feedback on personal, professional and educational development in the student's experience of providing appropriate patient care [15]. Appropriate clinical education within the context of providing patient care is important for the development of health professionals [16,17,18].

In physical therapy (PT), as in other health-related disciplines, professionals have to master competencies from different specialties. One of these is manual therapy (MT), which could be considered one of the most complex specialties, not only because it constitutes a recognized area of specialization within PT in its own right, but also because the competencies acquired are transversal to other areas of knowledge (pediatric physiotherapy, neurological physiotherapy, sports physiotherapy, etc.) [19, 20].

Many studies highlight the importance of developing manual skills across a broad set of PT subjects because they are essential in the professional world [17, 21]. This is the case of MT, where students have to acquire and fluidly apply a wide range of techniques and maneuvers, both on colleagues and on patients [22, 23], in order to be effective when assessing and treating the patient's pain experience [24]. In addition, the process of learning MT techniques and maneuvers is more demanding than in other PT areas, given its greater breadth, diversity and specificity. Consequently, it is particularly relevant to provide students with different types of support to promote their learning, especially in assessment processes. In this regard, instructional or formative rubrics can be particularly useful resources because they provide students with the criteria and performance levels to be reached. This is why they are widely used in the academic setting, not only within PT but also in other healthcare degrees (nursing, psychology, etc.) [25].

Despite their importance, there is limited in-depth knowledge about which assessment strategies are potentially effective to facilitate learning and why some strategies might be more effective than others [26]. Ernstzen et al. [27] found that certain learning opportunities, such as demonstrations and discussion of patient management, are suitable for providing feedback on clinical skills. However, few studies focus their attention on students' perception of the evaluation process through rubrics.

Most rubric-based assessment procedures focus on measuring the effects of rubrics on teachers (since they are in charge of their production) and how they improve the teaching process. These studies usually conclude that the teachers' perceptions are positive in terms of increasing the transparency of the evaluation process and that they constitute a facilitating element of the evaluation process [28, 29].

However, few studies focus on the perception that students have of the evaluation process through rubrics. This is a fundamental element in the correct development of the evaluation process, since the perceptions and attitudes that students show towards this evaluation instrument are key to its success [30, 31].

It is hypothesized that knowing students' opinions about the assessment process with the e-rubric would allow us to understand the potential that this type of assessment has for students' learning experience. Thus, the objectives of the current study are: 1) to describe and understand students’ opinion about the experience of using e-rubrics for the evaluation of practical skills in the context of MT; 2) to describe and understand students’ satisfaction with the e-rubric-based evaluation process; 3) to describe and understand students’ engagement with the e-rubric-based evaluation process; and 4) to describe the advantages and disadvantages of the e-rubric-based evaluation process as experienced by students.

Materials and methods

Study design and sample

A cross-sectional survey study was carried out. The Ethics Committee of Universitat Internacional de Catalunya (UIC) approved the research protocol for the study (Code FIS-2021–10).

The study inclusion criteria were being a student enrolled in the Manual Therapy I and II subjects and sitting the first practical examination (May 2021). These subjects are part of the second and third year of the Physiotherapy Degree at UIC in the 2020–2021 academic year. Each subject lasts one semester, and its practical part consists of 20 h of face-to-face instruction in laboratories in groups of 18–20 students. This practical part ends with a practical examination, which constitutes the context of this study.

Study procedure

Practical examinations took place on the 6th and 7th of May 2021. Before the exam, students were informed about the examination procedure, and the rubric used for evaluating practical performance was made available to them. This rubric was designed by the six teachers involved in the Manual Therapy subjects and consisted of five items (“patient's position”, “PT's position”, “execution procedure”, “effect” and “clinical reasoning”), each assessed according to four levels of skill performance (“expert”, “proficient”, “competent” and “novice”). A description of the items and levels is included in Table 1. The rubric was delivered to students in an e-rubric format using CoRubrics, an add-on for Google Sheets that facilitates the assessment process [32].

Table 1 Items and levels of the e-rubric

On the examination day, students were divided into groups of 8–10 members, each supervised by one teacher. One by one, each student within the group performed a manual therapy technique on a colleague. In the meantime, the rest of the students in the group evaluated the performance (co-evaluation) according to the criteria in the rubric described above. The same was done by the teacher (teacher-evaluation) and by the student being evaluated (self-evaluation).
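The study does not detail how CoRubrics combines the three assessment modalities, but the workflow above can be sketched as a weighted aggregation of teacher, self and peer ratings per rubric item. The sketch below is purely illustrative: the 1–4 point mapping of the levels and the weighting scheme are assumptions, not the study's actual scoring rules.

```python
# Illustrative sketch of combining the three assessment modalities
# (teacher, self, peers) used in the examination. The level-to-point
# mapping and the weights are hypothetical, not taken from the study.
from statistics import mean

RUBRIC_ITEMS = ["patient_position", "pt_position", "execution", "effect", "clinical_reasoning"]
LEVEL_POINTS = {"novice": 1, "competent": 2, "proficient": 3, "expert": 4}


def combined_score(teacher, self_eval, peer_evals, weights=(0.6, 0.1, 0.3)):
    """Weighted average of teacher-, self- and peer-evaluation per rubric item."""
    w_teacher, w_self, w_peer = weights
    scores = {}
    for item in RUBRIC_ITEMS:
        peer_avg = mean(LEVEL_POINTS[p[item]] for p in peer_evals)
        scores[item] = (w_teacher * LEVEL_POINTS[teacher[item]]
                        + w_self * LEVEL_POINTS[self_eval[item]]
                        + w_peer * peer_avg)
    return scores


# Example: teacher rates every item "proficient", the student rates
# themselves "expert", and two peers rate everything "competent".
final = combined_score(
    {i: "proficient" for i in RUBRIC_ITEMS},
    {i: "expert" for i in RUBRIC_ITEMS},
    [{i: "competent" for i in RUBRIC_ITEMS}] * 2,
)
```

With the assumed weights, each item in `final` works out to 0.6·3 + 0.1·4 + 0.3·2 = 2.8 points; any other weighting of the three modalities would follow the same pattern.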

Once all students in the group finished their examination, they were invited to respond to an anonymous online survey about their perceptions of the e-rubric based evaluation process, as well as their level of engagement and satisfaction. Questions related to sociodemographic and educational variables were also included. The first page of the survey described the study characteristics and objectives and requested students´ informed consent to complete the survey.

Measures

The standardized questionnaire “Students' opinion about the e-rubric based evaluation process”, developed by Raposo and Martínez [33], was administered to students in Spanish.

This questionnaire consists of two sections:

  • Section 1 includes 11 closed-answer items evaluating the students' degree of agreement or disagreement on a four-option Likert scale. Its reliability is 0.814 (Cronbach's alpha). This first section covers the following dimensions: rubric characteristics, assessment modality, assessment procedure and learning impact.

  • Section 2 consists of 9 items rated on a 0–10 scale. Its reliability is 0.716 (Cronbach's alpha). It covers the following dimensions: student engagement and students' global perception of the assessment process.
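The reliability figures above (0.814 and 0.716) are Cronbach's alpha coefficients. For reference, the coefficient can be computed from a respondents-by-items score matrix; a minimal sketch with synthetic data (not the study's responses):

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
from statistics import pvariance


def cronbach_alpha(responses):
    """Alpha for a list of respondents, each a list of k item scores."""
    k = len(responses[0])
    item_columns = zip(*responses)                      # one column per item
    sum_item_var = sum(pvariance(col) for col in item_columns)
    total_var = pvariance([sum(r) for r in responses])  # variance of total scores
    return k / (k - 1) * (1 - sum_item_var / total_var)


# Perfectly consistent items yield alpha = 1.0
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # 1.0
```

This uses the population-variance form of the formula; statistical packages such as SPSS report the same coefficient for complete-case data.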

Finally, a questionnaire developed ad hoc for the study was administered to collect the participants’ sociodemographic and educational variables (course of the degree in which they were enrolled, university entrance modality and subjects pending from previous years).

Analysis

Statistical analysis was performed with IBM SPSS Statistics 25.0. A descriptive analysis was carried out: mean and standard deviation were calculated for quantitative variables, and frequencies for qualitative variables.
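The descriptive analysis (means and standard deviations for quantitative variables, frequencies for qualitative ones) can be reproduced outside SPSS. A minimal Python sketch with invented values, not the study data:

```python
from statistics import mean, stdev
from collections import Counter

# Hypothetical sample, for illustration only
ages = [21, 22, 24, 20, 23]
sexes = ["woman", "man", "woman", "woman", "man"]

# Quantitative variable: mean ± standard deviation (sample SD, as SPSS reports)
age_mean, age_sd = mean(ages), stdev(ages)
print(f"{age_mean:.2f} ± {age_sd:.2f}")  # 22.00 ± 1.58

# Qualitative variable: absolute and relative frequencies
counts = Counter(sexes)
percentages = {k: 100 * v / len(sexes) for k, v in counts.items()}
print(counts, percentages)
```

Note that `stdev` is the sample (n − 1) standard deviation; `pstdev` would give the population version.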

Results

Of the 164 students enrolled in the course, 134 sat the first practical examination (69 enrolled in Manual Therapy I and 65 in Manual Therapy II). All of them agreed to participate in the study. Their mean age was 22.29 ± 2.84 years, with a balanced sex distribution (46.3% men and 53.7% women). Of the participants, 93.3% had entered university through the baccalaureate degree and EBAU tests, and only 8.2% had pending subjects from previous courses.

The analysis of the information collected with the described instrument yielded quantitative results on the following aspects: students’ opinion of the e-rubric-based assessment, students’ engagement, the perceived benefits and drawbacks of the e-rubric, and the overall assessment of the learning experience.

Students’ opinion on the e-rubric-based assessment

Satisfaction or dissatisfaction with both the different functions of the rubric and the e-rubric assessment process was measured on a four-grade rating scale: Totally Disagree (TD), Disagree (D), Agree (A) and Strongly Agree (SA). Results were organized around four dimensions:

  • (1) Rubric characteristics:

The ability of the rubric to verify students’ level of performance was registered under two statements, as shown in Table 2. 86.6% of the students agreed or fully agreed that “the rubric allowed one to know what is expected from the examination”, versus 13.4% who disagreed or fully disagreed. Regarding the second statement, 83.6% of the students agreed or fully agreed that “the rubric allowed one to verify the level of competence acquired”, versus 16.4% who disagreed or fully disagreed (Table 2).

  • (2) Assessment modality:

Table 2 Rubric characteristics

During practical examinations, students were assessed both by themselves and by their colleagues. These results are shown in Table 3. Regarding the suitability of the rubric for self-assessment, 86.5% of the students agreed or fully agreed, and just 13.4% disagreed or fully disagreed. Concerning the adequacy of the rubric for peer-assessment (co-evaluation), 94.0% agreed or fully agreed, and just 5.9% disagreed or fully disagreed. The third item in this dimension concerned equal treatment in the assessment of every group/student: 70.9% of the students agreed or fully agreed with this item, and 29.1% disagreed or fully disagreed (Table 3).

  • (3) Assessment process:

Table 3 Assessment modality

The main issue in this dimension is the transparency of the assessment process, which was addressed by the following statements. As shown in Table 4, the objectivity of the evaluation was assessed first: 76.1% of the students agreed or fully agreed that the rubric allowed a more objective assessment, versus 23.9% who disagreed or fully disagreed. A second statement gathered students’ opinions on whether the rubric made teachers clarify their examination objectives; agreement or full agreement reached 84.4%, while just 15.7% of the students disagreed or fully disagreed. The third statement asked students whether the rubric showed how they would be assessed, and 94.6% agreed or fully agreed. Finally, 74.6% of the students agreed or fully agreed that the rubric demonstrated the work done, versus 25.4% who disagreed (Table 4).

  • (4) Learning impact

Table 4 Assessment process

The ability of the rubric to provide feedback and to help students understand the features of the examination were the two issues assessed concerning learning impact. Table 5 shows a high rate of agreement (87.3%) among students concerning feedback; still, 12.7% of the students disagreed or fully disagreed that the rubric provides feedback. Finally, 91.1% of the students agreed or fully agreed that the rubric helped them understand the features of the assessment process, and 8.9% disagreed or fully disagreed (Table 5).

Table 5 Learning impact
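The agreement percentages reported in Tables 2–5 amount to collapsing the four-point scale into two bands (TD/D versus A/SA) and reporting the share of the upper band. A minimal sketch, using invented answers rather than the study's responses:

```python
from collections import Counter


def agreement_rate(answers):
    """Percentage of students answering Agree (A) or Strongly Agree (SA)."""
    counts = Counter(answers)
    return round(100 * (counts["A"] + counts["SA"]) / len(answers), 1)


# Invented distribution: 7 of 10 students fall in the agreement band
print(agreement_rate(["A"] * 5 + ["SA"] * 2 + ["D"] * 2 + ["TD"]))  # 70.0
```

The disagreement rate is simply the complement, which is why each pair of percentages in the tables sums to 100% (up to rounding).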

General assessment of the experience with rubrics

The analysis of the students' responses regarding the evaluation of the whole learning experience is reflected in Tables 6 and 7. Both direct scores (on a 1-to-10 interval scale) and mean values are presented. Items are organized in two main dimensions: students’ engagement and students’ perceptions of the assessment process.

  • (1) Students’ engagement

Table 6 Students’ engagement
Table 7 Students’ global perceptions about the assessment process

The aspect achieving the highest score was “I have performed collaborative work within the group” (84.3% of the students scored above 5 points on the 0-to-10 scale; mean 7.48 points, SD 2.36). It was closely followed by “the rubric has promoted participation” (83.6% above 5 points; mean 7.31, SD 2.35), “the rubric has motivated me” (81.3% above 5 points; mean 6.83, SD 2.51) and “the rubric has made me more responsible” (79.8% above 5 points; mean 6.68, SD 2.35). It should be noted that a very high percentage of students (80.6%) reported no cheating (1–2 points on the 1-to-10 scale) or little cheating (3–4 points) during the assessment process (Table 6).

  • (2) Students´ global perceptions about the assessment process:

The aspect of the assessment process most highlighted by the students was peer-assessment (co-evaluation), which they found very good (78.3% of the students scored it above 5 points on the 1-to-10 scale) and interesting (79.1% above 5 points), with mean values of 7.15 (SD 2.58) and 7.21 (SD 2.54) points, respectively.

The results also show that 73.1% of the students rejected the statement that peer assessment with the e-rubric was not useful, scoring it below 5 points on the 1-to-10 scale (mean 2.57 points; SD 2.21) (Table 7).

Discussion

Our students agreed that the rubric allowed them to know what was expected from the examination and to verify the level of competence acquired. Judging from the data, rubrics appear to have the potential to promote learning because they make expectations and criteria explicit, facilitating feedback and self-assessment [1]. This is in line with the results of the study by Reynolds et al. [34], in which students claimed that they better understood teacher expectations when the assignment involved a rubric. As stated by Panadero et al. [35], students’ anxiety (negative self-regulated learning) may decrease when implementing long-term interventions with rubrics, probably because students know what is expected of their work and how it will relate to their grades.

Great agreement was observed among our students regarding the usefulness of the e-rubric for self-assessment. This is important, as one of the implicit goals of higher education is to enable students to be better judges of their own work [9]. As Martínez-Figueira et al. [36] pointed out, the use of rubrics for self-assessment raises levels of engagement and probably of learning. Self-assessment also facilitates students’ understanding of the learning process by letting them contrast their achievements against the objective criteria presented in the e-rubric.

Similar results were obtained regarding peer-assessment, as a high percentage of the students in our study stated that the e-rubric allowed assessment between colleagues. Peer assessment has a long research tradition and is enhanced by the use of e-rubrics. This type of assessment facilitates peer correction, feedback and peer analysis of the processes involved [37,38,39].

A majority of the students agreed on two facts: that the rubric makes teachers clarify the criteria, and that it shows how students will be assessed. However, nearly 20% of the students did not agree that the rubric allows a more objective assessment or that it demonstrates the work done. This may be because assessment criteria are applied differently depending on whether they are interpreted by teachers or by students [40]. Working together with students on the formation and adoption of criteria may make students active in the process and increase the success rate of peer assessment [41, 42].

Regarding learning impact, again a majority of the students agreed that the rubric provided feedback. This is in line with other studies indicating that e-rubrics contribute to student learning by aiding the feedback process [43] and by giving more informative feedback about students’ strengths and areas in need of improvement [44]. Some authors argue that this positive effect on learning may be influenced by the motivation and satisfaction students show with the use of technology in general [45].

Engagement was one of the main outcomes of our study. Students agreed that the rubric had motivated them, made them participate more and increased their responsibility for their own learning process. This is in line with the results of Hanrahan et al. [46], which showed that throughout the peer assessment process students learn to develop high levels of responsibility and to focus on learning itself. Peer assessment also provides learners with a context in which they can observe the role of their teachers and understand the role of assessment [47].

Results showed that the e-rubric-based evaluation process allowed students to perform collaborative work. Although most students reported no or little cheating during the assessment process, the remainder did cheat. This may be because students often have a negative attitude towards peer assessment. Some students may not like the idea of having their work assessed by peers, or of assessing their peers’ work, as they may feel less capable than their colleagues of achieving a certain standard, or may think that raising a colleague's grade might lead that colleague to raise theirs in return. This could have led some students to cheat [48].

Finally, regarding the global perception of the assessment process, students showed high interest in and satisfaction with the rubric. These results may be due to working with technology, to the fact that teachers explained the purpose of the assessment, and to the couple of sessions held to get familiar with the rubric. This could have helped students become more confident in themselves and in their peer assessors [48].

One limitation of the study is that the rubric was used in only one examination. The literature shows that in studies where the rubric was introduced during a single period, or where students received only a couple of lessons in self-assessment, the reported effects are small and only partial. Other limitations derive from the context in which the examination took place. The rubric was designed specifically to address manual skills within the Manual Therapy subjects, and data from students of the two different courses were not analyzed separately. A variation in the results could be expected, as students enrolled in Manual Therapy II are one course above those in Manual Therapy I and may be more familiar with this type of assessment tool, which is also used in other subjects.

Conclusions

E-rubrics seem to have the potential to promote learning by making criteria and expectations explicit and by facilitating feedback, self-assessment and peer-assessment. Giving students an active role in their own learning process requires their participation in the assessment task, a fact that was globally appreciated by the students. The analysis of the information gathered by the instrument described confirms that the learning experience was considered interesting and motivating, and that it promoted participation, cooperative work and peer-assessment. Transparency and clarity still seem to concern students, and these issues are not solved merely by the use of an instrument. The use of e-rubrics increases engagement levels when attention is focused on their guidance and reflection role.

Availability of data and materials

The datasets generated and analyzed during the current study are not publicly available because the local ethics board requires that members of the research team keep them securely. The dataset may be shared upon reasonable request to the corresponding author.

Abbreviations

CBME: Competency-based medical education

UIC: Universitat Internacional de Catalunya

TD: Totally Disagree

D: Disagree

A: Agree

SA: Strongly Agree

References

  1. Jonsson A, Svingby G. The use of scoring rubrics: Reliability, validity and educational consequences. Educ Res Rev. 2007;2(2):130–44.


  2. Gijbels D, Dochy F. Students’ assessment preferences and approaches to learning: can formative assessment make a difference? 2006;32(4):399–409. https://doi.org/10.1080/03055690600850354.

  3. ten Cate O. Competency-Based Postgraduate Medical Education: Past, Present and Future. GMS J Med Educ. 2017;34(5):Doc69.


  4. Frank JR, Snell LS, ten Cate O, Holmboe ES, Carraccio C, Swing SR, et al. Competency-based medical education: theory to practice. Med Teach. 2010;32(8):638–45.


  5. Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE Guide No. 31. Med Teach. 2007;29(9):855–71.


  6. Brookhart SM. How to create and use rubrics for formative assessment and grading. Alexandria: ASCD; 2013.

  7. Schuwirth LWT, Van Der Vleuten CPM. Programmatic assessment: From assessment of learning to assessment for learning. Med Teach. 2011;33(6):478–85.


  8. Laurian S, Fitzgerald CJ. Effects of using rubrics in a university academic level Romanian literature class. Procedia-Social Behav Sci. 2013;76:431–40.


  9. Schunk DH, Zimmerman BJ. Motivation and self-regulated learning: theory, research, and applications. New York: Taylor and Francis; 2012. p. 1–417.

  10. Vandenberg A, College SN. GPS in the classroom: using rubrics to increase student achievement. Res High Educ J. 2010;9:1–10.


  11. Reece I, Walker S. Teaching, Training and Learning: a Practical Guide. Sunderland: Business Education Publishers Ltd; 2007.  p. 11–13.

  12. Sadler DR. Beyond feedback: developing student capability in complex appraisal. 2010;35(5):535–50. https://doi.org/10.1080/02602930903541015.

  13. Bada OS. Constructivism Learning Theory: A Paradigm for Teaching and Learning. IOSR J Res Method Educ. 2015;5(6):66–70.


  14. Panadero E, Romero M. To rubric or not to rubric? The effects of self-assessment on self-regulation, performance and self-efficacy. 2014;21(2):133–48. https://doi.org/10.1080/0969594X.2013.877872.

  15. Kilminster S, Cottrell D, Grant J, Jolly B. Effective educational and clinical supervision. Med Teach. 2007;29(1):2–19.


  16. Strohschein J, Hagler P, May L. Assessing the need for change in clinical education practices. Phys Ther. 2002;82(2):160–72.


  17. Lekkas P, Larsen T, Kumar S, Grimmer K, Nyland L, Chipchase L, et al. No model of clinical education for physiotherapy students is superior to another: a systematic review. Aust J Physiother. 2007;53(1):19–28.


  18. Grant RA, Wong SD. Barriers to Literacy for Language-Minority Children: An Argument for Change in the Literacy Education Profession. J Adolesc Adult Lit. 2003;46(5):386–94.


  19. Alahmari KA, Marchetti GF, Sparto PJ, Furman JM, Whitney SL. Estimating postural control with the balance rehabilitation unit: measurement consistency, accuracy, validity, and comparison with dynamic posturography. Arch Phys Med Rehabil. 2014;95(1):65–73.


  20. Khan F, Amatya B, Galea MP, Gonzenbach R, Kesselring J. Neurorehabilitation: applied neuroplasticity. J Neurol. 2017;264(3):603–15.


  21. Delany C, Bragge P. A study of physiotherapy students’ and clinical educators’ perceptions of learning and teaching. Med Teach. 2009;31(9):e402-11.


  22. Sharma V, Kaur J. Effect of core strengthening with pelvic proprioceptive neuromuscular facilitation on trunk, balance, gait, and function in chronic stroke. J Exerc Rehabil. 2017;13(2):200–5.


  23. Michielsen M, Vaughan-Graham JA, Holland A, Magri A, Suzuki M. The Bobath concept - a model to illustrate clinical practice: responding to comments on Michielsen et al. Disabil Rehabil. 2019;41(17):2109–10.


  24. Bishop MD, Torres-Cueco R, Gay CW, Lluch-Girbés E, Beneciuk JM, Bialosky JE. What effect can manual therapy have on a patient’s pain experience? Pain Manag. 2015;5(6):455–64.


  25. García-Ros R, Ruescas-Nicolau MA, Cezón-Serrano N, Carrasco JJ, Pérez-Alenda S, Sastre-Arbona C, et al. Students’ Perceptions of Instructional Rubrics in Neurological Physical Therapy and Their Effects on Students’ Engagement and Course Satisfaction. Int J Environ Res Public Health. 2021;18(9):4957.

  26. Rushton A, Lindsay G. Clinical education: A critical analysis using soft systems methodology. Int J Ther Rehabil. 2003;10(6):271–80.

  27. Ernstzen DV, Bitzer E, Grimmer-Somers K. Physiotherapy students’ and clinical teachers’ perceptions of clinical learning opportunities: a case study. Med Teach. 2009;31(3):e102-15.

  28. Chong DYK, Tam B, Yau SY, Wong AYL. Learning to prescribe and instruct exercise in physiotherapy education through authentic continuous assessment and rubrics. BMC Med Educ. 2020;20(1):1–11.

  29. Chan Z, Ho S. Good and bad practices in rubrics: the perspectives of students and educators. Assess Eval High Educ. 2019;44(4):533–45. https://doi.org/10.1080/02602938.2018.1522528.

  30. Wang W. Using rubrics in student self-assessment: student perceptions in the English as a foreign language writing context. Assess Eval High Educ. 2016;42(8):1280–92. https://doi.org/10.1080/02602938.2016.1261993.

  31. Kite J, Phongsavan P. Evaluating standards-based assessment rubrics in a postgraduate public health subject. Assess Eval High Educ. 2016;42(6):837–49. https://doi.org/10.1080/02602938.2016.1199773.

  32. Corubrics (es) [no date]. https://corubrics-es.tecnocentres.org/

  33. Raposo M, Martínez E. La Rúbrica en la Enseñanza Universitaria: Un Recurso Para la Tutoría de Grupos de Estudiantes. Form Univ. 2011;4(4):19–28.

  34. Reynolds JR, Baird CL. Is There a Downside to Shooting for the Stars? Unrealized Educational Expectations and Symptoms of Depression. Am Sociol Rev. 2010;75(1):151–72. https://doi.org/10.1177/0003122409357064.

  35. Panadero E, Alonso-Tapia J, Huertas JA. Rubrics vs. self-assessment scripts: effects on first year university students’ self-regulation and performance / Rúbricas y guiones de autoevaluación: efectos sobre la autorregulación y el rendimiento de estudiantes universitarios de primer año. Infanc y Aprendiz. 2014;37(1):149–83.

  36. Martínez-Figueira E, Tellado-González F, Rivas MR. La rúbrica como instrumento para la autoevaluación: un estudio piloto. REDU Rev Docencia Univ. 2013;11(2):373–90.

  37. Falchikov N. Improving assessment through student involvement: practical solutions for aiding learning in higher and further education. London: RoutledgeFalmer; 2005.

  38. Hargreaves A. Sustainable Leadership and Development in Education: creating the future, conserving the past. Eur J Educ. 2007;42(2):223–33.

  39. Bretones RA. La participación del alumnado de Educación Superior en su evaluación. Rev Educ. 2008;342:181–202.

  40. Brown S, Glasner A. Assessment Matters in Higher Education. Philadelphia; 2003. p. 3–13.

  41. Boud D, Falchikov N. Rethinking assessment in higher education: learning for the longer term. London: Routledge; 2007. p. 206.

  42. Sahin I, Shelley M. Considering Students’ Perceptions: The Distance Education Student. J Educ Technol Soc. 2008;11(3):216–23.

  43. Schamber JF, Mahoney SL. Assessing and Improving the Quality of Group Critical Thinking Exhibited in the Final Projects of Collaborative Learning Groups. J Gen Educ. 2006;55(2):103–37.

  44. Rosaline L. Jurnal Pendidikan Dompet Dhuafa edisi I. 2011.

  45. Panadero E, Jonsson A. The use of scoring rubrics for formative assessment purposes revisited: A review. Educ Res Rev. 2013;9:129–44.

  46. Hanrahan SJ, Isaacs G. Assessing Self- and Peer-assessment: The students’ views. High Educ Res Dev. 2010;21(1):53–70. https://doi.org/10.1080/07294360123776.

  47. van den Berg I, Admiraal W, Pilot A. Peer assessment in university teaching: evaluating seven course designs. Assess Eval High Educ. 2007;31(1):19–36. https://doi.org/10.1080/02602930500262346.

  48. Strijbos JW, Narciss S, Dünnebier K. Peer feedback content and sender’s competence level in academic writing revision tasks: Are they critical for feedback perceptions and efficiency? Learn Instr. 2010;20(4):291–303.

Acknowledgements

The authors wish to thank all the physiotherapy students who voluntarily participated in this study.

Funding

This study received no funding.

Author information

Contributions

S. P-G: methodology, data curation, formal analysis, interpretation of data, investigation, writing – review & editing, visualization; S. P-G, A. C-U, S. C-B: formal analysis, investigation, data curation, writing – original draft, visualization, funding acquisition; S. P-G, A. C-U, S. C-B: conceptualization, methodology, interpretation of data, writing – review & editing; C. L-C, V. G-R: interpretation of data, writing – review & editing; P.R. R-R: supervision. All authors have read and approved the manuscript.

Corresponding author

Correspondence to Silvia Pérez-Guillén.

Ethics declarations

Ethics approval and consent to participate

All methods were carried out in accordance with relevant guidelines and regulations. Informed consent was obtained from all study participants, and the study was approved by the Ethics Committee of Universitat Internacional de Catalunya (approval code FIS-2021-10). All informed consent forms were presented to participants electronically before the questionnaire was started.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Author details

1. Department of Physical Therapy, Universitat Internacional de Catalunya, Carrer Josep Trueta s/n, 08195 Sant Cugat del Vallès, Barcelona, Spain.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Pérez-Guillén, S., Carrasco-Uribarren, A., Celis, C.Ld. et al. Students’ perceptions, engagement and satisfaction with the use of an e-rubric for the assessment of manual skills in physiotherapy. BMC Med Educ 22, 623 (2022). https://doi.org/10.1186/s12909-022-03651-w

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s12909-022-03651-w

Keywords

  • Formative assessment
  • E-rubric
  • Physiotherapy