
WeChat mini program in laboratory biosafety education among medical students at Guangzhou Medical University: a mixed method study of feasibility and usability

Abstract

Background

Laboratory biosafety should be a priority in all healthcare institutions. In traditional laboratory safety teaching, students typically receive knowledge passively from their teachers without active involvement. Combining experiential learning with mobile learning may offer students greater engagement, retention, and application of knowledge. To address this issue, we developed a WeChat mini program (WMP) named WeMed for laboratory biosafety education and conducted a convergent mixed methods study to assess its feasibility and usability among medical laboratory students at Guangzhou Medical University (GMU).

Methods

The study was conducted between November 2022 and October 2023 among second-year undergraduate students at GMU. It involved the concurrent collection, analysis, and interpretation of qualitative and quantitative data to assess feasibility and usability. In the quantitative strand, two evaluations were conducted via online surveys completed by students (n = 67) after a four-week study period. The System Usability Scale (SUS) was used to evaluate usability, while self-developed questions were used to assess feasibility. Additionally, a knowledge test was administered 6 months after program completion. In the qualitative strand, fourteen semi-structured interviews were conducted, and the interview data were analyzed using reflexive thematic analysis.

Results

The overall SUS score was adequate (M = 68.17, SD = 14.39), and the acceptability of the WeMed program was in the marginal high range. Most interviewed students agreed that WeMed was useful for learning biosafety knowledge and skills (13/14, 93%), while 79% (11/14) agreed it was easy to use and intended to continue using it. After 6 months, a significant difference in knowledge test scores was observed between the WeMed group (n = 67; 2nd-year students) and the traditional training group (n = 90; 3rd-year students). However, the results should be interpreted cautiously due to the absence of a pretest.

Conclusion

The combination of experiential learning and mobile learning through a WMP is a feasible approach to providing laboratory biosafety knowledge and skills. Ongoing improvements should be made to increase long-term acceptance.


Background

Professionals in medical laboratory science (MLS) are regularly exposed to biological materials such as blood, urine, and tissue samples, which increases their risk of laboratory-acquired infections (LAIs) [1]. LAIs typically occur when standard operating procedures (SOPs) are not followed or proper personal protective equipment (PPE) is not worn [2]. According to US government data on laboratory biosafety, a number of accidents (e.g., spills) occurred while handling specific agents between 2008 and 2012 [3]. As a result of these accidents, approximately 100 to 275 pathogens were released from laboratories each year, causing significant damage to the environment and to public health. In China, according to a recent review, 37 cases of laboratory-acquired brucellosis were reported between 2006 and 2017 [2]. A total of 27 students were infected during experiments on goats, and seven professionals were infected while identifying or handling suspected Brucella strains. The majority of accidental infections are attributed to substandard laboratory conditions, manipulation outside a biosafety cabinet, or inadequate personal protective equipment [2]. These accidents demonstrate the importance of a comprehensive understanding of biosafety protocols and procedures [2]. Therefore, MLS students should be trained in biosafety as part of their preparation for future careers [2].

Traditional laboratory biosafety education is usually delivered through lectures and demonstrations [4]. This places students in a passive receiving mode and reduces their ability to engage in meaningful learning [4]. As a result, students are less likely to retain knowledge and skills and may have difficulty applying them in real-world situations [5]. Moreover, demonstrations cannot accurately replicate the hazardous materials and procedures that might be encountered in the real world [6]. In China, medical students are often supervised by senior students, which can lead to non-compliance with SOPs [7]. It is therefore necessary for educators to integrate technology, such as interactive platforms and simulations, to provide students with extended resources for engaging in biosafety learning.

Kolb’s experiential learning model (ELM) is a learner-centered model that is widely used in clinical education [8]. In this model, active learners acquire and process knowledge according to their own individual needs. The process of learning is perceived as a continuously diverging and deepening process, during which knowledge is built on experience gained at four stages of the learning cycle (see Fig. 1) [8]. As a result, the knowledge they acquire is more meaningful and likely to be retained for a longer period of time. ELM has been demonstrated to increase learning motivation and satisfaction, as well as clinical competence [8].

As technology and digitalization become the norm in universities, technology can be incorporated into learning in countless ways. When implementing technology, however, it is necessary to take into account the specific needs of the learners and the costs associated with implementation [9]. 'Mobile Learning' (M-Learning) is a methodology that incorporates portable electronic devices into the teaching process within and outside the classroom to enhance learning efficiency [10]. Given the popularity of mobile devices, M-Learning can significantly expand health professions training and education globally [11]. With its quality, mobility, and platform support, M-Learning is widely regarded as a highly effective approach to enhancing medical education [11].

With over 800 million active users in China, WeChat is one of the most popular social media platforms, combining social communication capabilities with platform functionality [6]. In recent years, WeChat-based M-Learning has gained popularity in higher education due to its ease of use, short development cycle, and ability to provide students with a personalized learning experience [12]. The convenience and accessibility of the platform make it more likely to be widely used, especially for M-Learning [6]. Studies have shown that WeChat has been successfully utilized for delivering clinical courses [12], conducting interventions [6], and implementing teaching models [12]. Within WeChat, mini programs are applications that run inside the app. A WMP can be accessed from any device across all networks free of charge, which makes it a cost-effective and scalable solution [6].

The ELM and the WeChat platform have each been implemented individually in laboratory courses [13, 14]. However, the existing study that used the WeChat platform for teaching laboratory safety did not assess its acceptability or effectiveness [14]. According to a recent systematic review, the feasibility of M-Learning in a real-life setting is critical to its long-term success [11]. It is therefore necessary to investigate its feasibility and usability. Furthermore, an in-depth study is needed to examine the combination of these two approaches, particularly the use of a WMP to provide laboratory biosafety education. Therefore, this study consisted of two distinct stages with two objectives: (1) to develop WeMed to deliver laboratory biosafety education; and (2) to assess the feasibility and usability of the program among MLS students at GMU. In this feasibility study, possible issues during program implementation were identified. Most importantly, it ensured that this WMP would be feasible and acceptable to the target population.

Methods

This study was conducted in two stages according to its objectives. First, WeMed was developed according to the conceptual framework for laboratory biosafety education. Then, a mixed-method evaluation was carried out to assess the feasibility and usability of WeMed. Six months after program completion, a knowledge test was administered to the WeMed group students and the traditional training group students.

Stage 1: development of WeMed

The WeMed mini program was developed in Guangzhou by a multidisciplinary team that included MLS experts and educators, as well as software engineers from a professional technology services company. A four-step development process was followed according to the rapid application development (RAD) model. Compared with traditional system development approaches (e.g., the waterfall model), RAD is more flexible and adaptive because of its rapid, iterative development process [15]. This approach adopts a shorter planning phase and places greater focus on development, testing, and feedback [15].

Step 1. Establishing the conceptual framework

Theories and research relating to mobile applications, experiential learning, and laboratory biosafety education were reviewed to establish the conceptual framework. The primary goal of the study was to create a virtual training environment that students could access anytime and anywhere [10]. A WMP was chosen for content delivery for four reasons. First, mobile phones have become a key part of everyday life for this generation [16]. Because college students have grown up alongside the development of the internet and mobile phones, they are used to having access to information and communication at the touch of a finger [16]. As a multifunctional application, WeChat has seamlessly penetrated most aspects of students' daily lives, from staying connected with family to making payments [16]. With these features, educators and software engineers can design, develop, and publish products that reach this population. Second, WeChat provides developers with tools (e.g., the WeChat developer tool) that reduce development difficulty and shorten development cycles [12]. These tools offer a comprehensive set of application programming interfaces and software development kits, enabling developers to quickly build applications and publish them to the platform. This reduces development time and costs, making it well suited to the RAD model. Third, with its user-friendly interface, WeChat users can access mini programs directly from WeChat without downloading or installing anything, making it more convenient for MLS students [6]. Fourth, the messaging and social media capabilities of WeChat enable users to share their thoughts and experiences about the program with teachers and classmates [16].

Step 2. Implementation of Kolb’s experiential learning model

In accordance with Kolb’s ELM, a number of learning activities were designed around the four stages of the learning cycle (see Fig. 1). First, the concrete experience stage provides students with first-hand exposure through reading materials on hazard prevention, risk control guidelines, and outbreak preparedness [17]. Second, reflective observation is used to organize information from the previous stage through critical thinking [17]; with the practice mode, students can practice wearing PPE multiple times with guidance on organizing SOP information. Third, abstract conceptualization involves students explaining what they learned in the previous stages and forming new concepts [17]; self-assessment quizzes provide independent learners with instant feedback on how well they understand key concepts. In laboratory biosafety training, the main stage is active experimentation, which involves simulations designed to allow students to repeat procedures multiple times and master the skills and techniques necessary to maintain biosafety [12]. These skills and techniques can be difficult to develop in classical laboratory sessions because of time constraints. Thus, the distinctive feature of WeMed lies in its use of interactive simulations to deliver the learning content.

Fig. 1 Activities designed to support different aspects of the experiential learning cycle

Step 3. Development of WeMed

WeMed was developed according to the WeChat mini-program design guidelines [18]. It consisted of three modules: the learning content module, the interactive practice module, and the self-assessment module (see Table 1). All content was designed according to the 4th edition of the WHO laboratory biosafety manual [19]. The learning content module included learning materials on SOPs, PPE, waste management, risk control guidelines, and outbreak preparedness. It also covered an introduction to multiple infectious diseases such as coronavirus disease 2019 (COVID-19), human immunodeficiency virus (HIV), viral hepatitis, and hand, foot, and mouth disease (HFMD). In the learning content module, students can gain knowledge through a variety of modes, such as text, drawings, and interactive simulations. In the interactive practice module, the content was delivered via interactive simulations and 'drag and drop' activities, allowing students to interact with and practice safety procedures, such as donning and doffing PPE (i.e., clothing, gloves, masks, and goggles). Figure 2 illustrates an example of an interactive 'drag and drop' activity for donning and doffing clothing. The self-assessment module included quizzes to help students review and assess their understanding of the material (see Fig. 2).

Step 4. Expert validation

In this study, expert validation was used to evaluate the validity of WeMed. A panel of ten experts evaluated its technical quality requirements on a 4-point Likert scale (1 = irrelevant, 4 = very relevant) in accordance with the framework developed by Almaiah et al. [20]. The panel was composed of four MLS technicians with at least 10 years of experience, two university lecturers, and four specialists in Android application development. The Content Validity Index (CVI) was used to measure the appropriateness and accuracy of the content [21], following the calculation proposed by Polit and Beck [21]. WeMed showed excellent expert validity, with item-level CVIs of 0.83 to 1.00 and a scale-level CVI of 0.93 [21].
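For illustration, under the Polit and Beck approach the item-level CVI (I-CVI) is the proportion of experts rating an item 3 ("relevant") or 4 ("very relevant"), and the average scale-level CVI (S-CVI/Ave) is the mean of the item-level values. The R sketch below computes both from a hypothetical rating matrix; the ratings and the item count are placeholders, not the panel's actual responses.

```r
# Minimal sketch of the Polit & Beck CVI calculation, assuming a hypothetical
# matrix of ratings (10 experts x 8 items) on the 4-point relevance scale.
set.seed(1)
ratings <- matrix(sample(1:4, 10 * 8, replace = TRUE, prob = c(0.05, 0.10, 0.35, 0.50)),
                  nrow = 10)  # rows = experts, columns = items

# I-CVI: proportion of experts rating each item 3 or 4
i_cvi <- colMeans(ratings >= 3)

# S-CVI/Ave: mean of the item-level CVIs
s_cvi_ave <- mean(i_cvi)

round(i_cvi, 2)
round(s_cvi_ave, 2)
```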

Fig. 2 Selected screenshots of the WeMed program

Stage 2: evaluation of the feasibility and usability of WeMed

The second stage of the study was to assess the feasibility and usability of WeMed. In this stage, we followed the guidance for applying mixed methods to optimize feasibility studies [22].

Feasibility evaluation

The National Institute for Health and Care Research (NIHR) suggests that feasibility studies are essential because they determine whether a program or intervention can be carried out properly [23]. An evaluation feasibility study allows the acceptability of a program and its evaluation design to be investigated, assisting decisions about whether to proceed with a full-scale effectiveness or efficacy study [24]. According to NIHR guidance, feasibility studies should be conducted first, followed by pilot studies that examine the outcomes of the intervention on a smaller scale than a randomized controlled trial (RCT) [23] (see Fig. 3). It is essential to understand “Can this WeChat mini program work within a university setting?” before examining “Does this WeChat mini program work?” [22]. To this end, we assessed feasibility based on five of the key areas identified by Bowen and colleagues (see Table 2) [25].

Fig. 3 Feasibility tests in the project evaluation process [24]

Usability evaluation

Usability is considered one of the most critical characteristics of a good digital application [26]. A recent scoping review found that a number of published standards list usability as a critical criterion for evaluating digital health applications [26]. Evaluating the usability of applications can bring significant benefits to users, including avoidance of stress and improved accessibility [26].

Table 1 The structure of the WeMed program

Overall methods and data collection

Orsmond and Cohn suggested that a mixed methods design can best match the specific objectives and needs of a feasibility study [27]. This design enabled a comprehensive analysis of the feasibility properties of the program and identified potential usability issues. In the current study, we employed a convergent approach in which the quantitative and qualitative strands were conducted simultaneously, analyzed separately, and given equal priority [28]. It involved gathering quantitative and qualitative data for comparison, or “convergence”, in order to detect any similarities or differences between them [28]. In short, a mixed methods convergent design makes it possible to address relevant knowledge gaps by leveraging the strengths of both qualitative and quantitative methods [28]. Thus, a mixed methods convergent design was used in the second stage of the study to assess the feasibility and usability of WeMed. Because this was a feasibility study intended to inform a future RCT, it could not provide reliable information about effectiveness. The qualitative strand consisted of individual semi-structured interviews to capture user experience. The quantitative strand included an online survey to assess feasibility and usability. In addition, a knowledge test of biosafety practices and procedures was administered 6 months after program completion. Together, the qualitative and quantitative findings provide a comprehensive overview of WeMed’s feasibility and usability and are intended to inform a roadmap for future development.

Participants

The participants were second-year students aged 20–25 years. They came from three classes (Class-1, Class-2, and Class-3) and had not received formal biosafety training. Inclusion criteria were as follows: (1) enrollment in the subject “molecular diagnostics” at GMU; (2) provision of voluntary explicit consent; (3) having an Android® device with an internet connection. The exclusion criterion was being unable to access WeChat.

Based on the eligibility criteria, 73 students were approached, 6 declined, and 67 participated in the program and submitted an online survey to evaluate the feasibility and usability of WeMed (73% were female). This sample approximates the gender demographics of MLS students at GMU (2:1 female-to-male ratio). The response rate was 91.78%, which is considered high [29]. In a feasibility study, it is common and acceptable not to perform a formal sample size calculation; the sample is instead determined by the study design, available resources, and the nature of the study population [30]. Accordingly, the actual sample size of 67 in the quantitative strand was acceptable. In the qualitative strand, a purposeful sampling technique was employed, following the recommendation for conducting a mixed methods convergent study [31]. Due to the academic calendar, the scheduling of interviews was constrained. An invitation message was sent along with the online survey to three class WeChat groups (similar to WhatsApp groups), and sixteen students replied. We selected 14 students to achieve an even representation of gender and class (Class-1, Class-2, and Class-3), as it was important to have a diverse group of students in the interview sample [30]. According to Hennink and Kaiser [32], qualitative data can reach saturation with 9 to 17 participants; our sample size was therefore appropriate.

Procedure

Students were invited to access all modules of the WeMed program twice per week. It was suggested that they access the learning content module and the interactive practice module before taking the self-assessment. Regular reminders were sent to ensure that students accessed the program; monitoring of actual usage was not available. After four weeks, students received a message inviting them to complete a survey via sojump (http://www.sojump.com). The survey consisted of three parts: a demographic questionnaire, the Chinese System Usability Scale (the Chinese SUS), and a survey developed specifically for this study to evaluate feasibility. After submitting the survey, 14 students participated in individual semi-structured interviews. To assess whether students retained the information from WeMed, a knowledge test was administered in paper-and-pencil format 6 months after program completion.

Quantitative strand

Instruments

Demographic questionnaire. The demographic questionnaire consisted of three questions related to age, gender, and whether they had participated in laboratory safety training.

The Chinese System Usability Scale (the Chinese SUS). The System Usability Scale (SUS) is a widely used tool for assessing the usability of a system or application, and studies indicate that it is commonly used to evaluate the usability of medical apps [33]. The SUS contains ten items: five positive statements and five negative statements. According to a recent benchmarking study, a mean SUS score of 68 is a useful reference point, with half of apps scoring below it and half above it [34]. A high SUS score indicates that the application is highly usable and can be adopted easily. In this study, the Chinese SUS was used to measure students’ experience with WeMed. Students rated their responses on a 5-point Likert scale from “Strongly Disagree” to “Strongly Agree”. The Chinese SUS has a reported reliability of 0.84 (95% CI 0.807–0.871) [35].

Feasibility. A set of four questions was developed based on the recommendations of Bowen and colleagues for designing feasibility studies (see Table 2) [25]. According to their suggestions, feasibility can be assessed across eight key areas, such as acceptability, demand, and practicality [25]. Students were asked to respond to questions about their experience on a 7-point Likert scale (1 = strongly disagree to 7 = strongly agree). A high score indicates that WeMed is highly acceptable and practical.

Knowledge test. A set of 31 multiple-choice questions was administered to MLS students to assess knowledge retention after a 6-month period. The questions covered personal protection (1 item), safe specimen handling (2 items), donning and doffing personal protective equipment (26 items), hand hygiene (1 item), and emergency procedures for microbiological laboratories (1 item). Scores were also compared with those of third-year students (n = 90) who had already received traditional laboratory safety training in their previous academic courses.

Data collection and analysis

Data collection was carried out online. All statistical analyses were performed in R software (version 4.1.2). Descriptive statistics (mean and standard deviation) were calculated with the ‘psych’ package, and the Mann–Whitney U test was calculated with the ‘nortest’ package. For the usability test, each individual SUS score was calculated using Formula 1 [36]. The average SUS score across all respondents was calculated to interpret the overall usability level of the program, and two categories of SUS scores (acceptability range and grade scale) were derived from the results [36].

$$ \begin{array}{l}SUS\,score = ((Q1 - 1) + (5 - Q2) + (Q3 - 1) + (5 - Q4) + (Q5 - 1)\\+ (5 - Q6) + (Q7 - 1) + (5 - Q8) + (Q9 - 1) + (5 - Q10)) * 2.5\end{array} $$
(1)
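As an illustration of this scoring and of the group comparison described above, the R sketch below applies Formula 1 to a hypothetical response matrix and compares two hypothetical knowledge-score vectors using base R's wilcox.test(), which implements the Mann–Whitney U test. All data shown are placeholders, not the study's responses.

```r
# Minimal sketch of SUS scoring (Formula 1) plus the Mann-Whitney U comparison,
# using hypothetical data in place of the study's actual responses.
set.seed(1)
responses <- matrix(sample(1:5, 67 * 10, replace = TRUE), nrow = 67)  # 67 students x 10 SUS items

odd  <- responses[, c(1, 3, 5, 7, 9)] - 1    # positively worded items: score - 1
even <- 5 - responses[, c(2, 4, 6, 8, 10)]   # negatively worded items: 5 - score
sus_scores <- (rowSums(odd) + rowSums(even)) * 2.5  # each score ranges from 0 to 100

mean(sus_scores)  # overall usability level
sd(sus_scores)

# Mann-Whitney U test comparing knowledge test scores of two groups
# (hypothetical score vectors; wilcox.test() performs the Mann-Whitney U test)
wemed_scores       <- sample(20:31, 67, replace = TRUE)
traditional_scores <- sample(10:25, 90, replace = TRUE)
wilcox.test(wemed_scores, traditional_scores)
```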

Qualitative strand

Semi-structured interview procedure

To protect the privacy of the students, individual interviews were conducted in a counseling room at GMU. Informed consent was obtained prior to the interviews, and students were informed about the confidentiality and anonymity of the interviews. We asked students again for their permission to record before starting each interview, and they were reminded of their right to refuse to answer questions or to end the interview if they felt uncomfortable. The interviews followed a guide developed by the authors in collaboration with multiple stakeholders (educators, service users, and MLS professionals) to explore the users’ experiences.

Eight interview questions were prepared to examine the feasibility and usability of the WeMed program: (1) What did you like most about WeMed? (2) What module(s) of WeMed was most difficult or challenging for you? (3) What changes has WeMed made to your laboratory safety practices, if any? (4) If you continue to use WeMed, to what extent will it enhance your laboratory safety techniques? (5) What was the helpful component(s) in the WeMed program? (6) What was the unhelpful component(s) in the WeMed program? (7) Have you noticed any changes since you started practicing with the WeMed program? (8) What recommendations do you have to improve WeMed? In addition, students were encouraged to discuss additional areas that they felt were important to the program with the interviewer. Interviews lasted between 20 and 30 min.

Data collection and analysis

Fourteen recordings were transcribed verbatim by Iflyrec, an online transcription platform (https://www.iflyrec.com/). Before the transcripts were uploaded to NVivo Release 1.2, the first authors reviewed them several times for accuracy. Following Braun and Clarke’s six steps, emergent codes, categories, and themes were identified using reflexive thematic analysis (reflexive TA) [37].

Ethics approval

The Ethics Review Committee of Guangzhou Medical University approved the study. In the quantitative strand, students submitted their informed consent online before answering the questions. To maintain confidentiality in the subsequent semi-structured interviews, each student received a unique ID. Students also gave verbal permission for the interviews to be recorded and were informed that the recordings would be transcribed and reported.

Results

Quantitative strand

Feasibility of the WeMed program

A positive response was received regarding the five key areas of feasibility (see Table 2). All item means were above five, with responses ranging from three to seven. The results indicated that WeMed had some level of acceptance and practicality; students were also satisfied with the program and intended to continue using it in the future.

Usability of the WeMed program

The average SUS score of WeMed fell within the high marginal category of the acceptability range (M = 68.17, SD = 14.39). On the grade scale, this corresponds to class D (60–69, according to Kamouna et al. [33]), and the adjective ratings for the SUS suggested good usability [33].

Knowledge test

The mean score of the WeMed group on the knowledge test administered after the 6-month period (Mean = 28.82/31, n = 67, SD = 5.09) was significantly higher than that of the comparator Year 3 group (Mean = 16.76/31, n = 90, SD = 4.82), U = 353, p < 0.001.

Table 2 Questions and results on five key areas of feasibility

Qualitative strand

A total of 14 students (42.86% female) participated in the individual semi-structured interviews. The interview data were analyzed to answer the research questions. This resulted in eight themes that provided insight into the user experiences (see Table 3).

Feasibility of the WeMed program

Five themes were identified regarding user experience and user preferences (see Table 4). Quotes were lightly rephrased to preserve the intent of the students’ words and to ensure grammatical correctness, and all quotes were checked to confirm that they accurately represented the conversations.

Table 3 Themes, subthemes, and percentages

Usability of the WeMed program

Students in the interviews consistently highlighted two key themes, perceived ease of use and intention to continue using the program, regarding its overall user-friendliness (see Table 4). These themes indicated that the program’s interface was simple, intuitive, and user-friendly, which contributed to a positive user experience and increased satisfaction with WeMed.

Table 4 Themes for the semi-structured interviews and examples for each theme

Discussion

The study aimed to investigate a teaching model that integrates M-Learning and the ELM using a self-developed WMP for laboratory biosafety education at GMU. The results of this pilot study provide important insight into the feasibility and usability of WeMed. The quantitative results from the feasibility evaluation showed that WeMed performed above average in five key areas of feasibility, with average scores above 5 out of 7 (n = 67), and the qualitative results confirmed some of these findings through two themes. Usability was evaluated quantitatively via the survey and qualitatively via interviews exploring aspects of the user experience. The average SUS score of 68.17 (n = 67) suggested that WeMed had adequate usability for MLS students, and two usability themes were identified relating to ease of use and intention to continue using it. After 6 months, the WeMed group obtained an average knowledge test score of 28.82/31 (n = 67, SD = 5.09), significantly higher than that of the Year 3 group (Mean = 16.76/31, n = 90, SD = 4.82), U = 353, p < 0.001. However, because the study lacked a pretest and a randomized control group, these results should be interpreted cautiously.

Feasibility of the WeMed program

There was evidence that WeMed is feasible for second-year MLS students at GMU in two respects: perceived usefulness and enhanced learning outcomes. Particularly through the self-assessment module, WeMed offers a learner-centered platform that enables students to learn at their own pace without time restrictions [38]. These findings are consistent with major themes from previous feasibility studies on M-Learning among medical students, such as greater flexibility [39], autonomy [10], user-friendliness [39], and use as a supplementary rather than a replacement tool [10]. Although the study was not designed to detect effects of WeMed on learning outcomes, the results of the knowledge test and the comparison with Year 3 MLS students indicated a positive improvement in knowledge. Considering the small sample size and the design of the study, these findings need to be interpreted cautiously, and further research is needed to determine how WeMed can enhance long-term knowledge retention.

Some themes were mentioned by only a minority of students, such as “well-organized and effective structure”, “clarity and ease of understanding”, and “supplemental learning”. Continuous improvement should consider these themes, along with suggestions gathered from the interviews, to ensure long-term acceptance. This can help make the content more engaging and user-friendly and provide better support for those who struggle with the material. According to a recent study on the usability of health applications in Asia Pacific countries, one of users’ top ten concerns is that an app “addresses specific needs” [40]. Therefore, ongoing improvements should be made to reduce mental effort and screen time for MLS students, resulting in better individual learning [40].

Usability of the WeMed program

Usability is a key factor in the Technology Acceptance Model (TAM), as it ensures that the application is easy to use [36]. Good usability helps to increase user engagement and satisfaction with the application, leading to better adoption and utilization of the application [36]. Overall, findings from quantitative and qualitative strands suggest a positive user experience in this program. This could be attributed to two key factors: the user-friendly interface and the students’ familiarity with using mobile apps in their daily lives [41]. By being embedded directly within the WeChat app, WeMed becomes an integral part of the students’ digital environment. This eliminates the need for a separate download or installation process, thereby enhancing convenience and user-friendliness. Moreover, this embedded approach capitalizes on the widespread use of WeChat among students, leveraging their existing knowledge of the app’s interface and functionalities [12]. This familiarity and ease of use contribute to the positive reception and engagement with WeMed, ultimately enhancing their learning journey in laboratory biosafety.

Limitations

The study had several limitations. The small sample size, self-selection, and geographical constraints hinder generalization of the findings. However, it is essential to keep the scope small at this early stage to facilitate a feasibility study and to gather in-depth feedback from students that can guide subsequent program development [42]. Furthermore, the lack of randomization and of a formal effectiveness evaluation limits the strength of the conclusions; this study was not designed to test WeMed’s effectiveness on learning outcomes, but rather to examine users’ experiences in preparation for future RCTs. In addition, the instrument developed specifically to evaluate feasibility in this study has not been validated. Future research should address these limitations.

Conclusion

This pilot feasibility study indicates that a WMP is a feasible tool for providing laboratory biosafety knowledge and skills. This mixed methods study demonstrates the potential of integrating the ELM and M-Learning within laboratory biosafety education. Continuing to improve the program and conducting a longitudinal follow-up study are essential to better understand the long-term impact of WeMed.

Data availability

The datasets used and analyzed during the current study are available from the corresponding author upon reasonable request.

References

1. Ibeh I, Enitan S, Akele R, Isitua C. A review of the COVID-19 pandemic and the role of medical laboratory scientists in containment. J Med Lab Sci. 2020;30(1):68–89.
2. Song L, Gao J, Wu Z. Laboratory-acquired infections with brucella bacteria in China. Biosaf Health. 2021;3(02):101–4.
3. Kaiser J. The catalyst. American Association for the Advancement of Science; 2014.
4. Yang Q-F, Lian L-W, Zhao J-H. Developing a gamified artificial intelligence educational robot to promote learning effectiveness and behavior in laboratory safety courses for undergraduate students. Int J Educ Technol High Educ. 2023;20(1):18.
5. Hill DJ, Williams OF, Mizzy DP, Triumph TF, Brennan CR, Mason DC, Lawrence DS. Introduction to laboratory safety for graduate students: an active-learning endeavor. J Chem Educ. 2019;96(4):652–9.
6. Duan Y, Li X, Guo L, Liang W, Shang B, Lippke S. A WeChat mini program-based intervention for physical activity, fruit and vegetable consumption among Chinese cardiovascular patients in home-based rehabilitation: a study protocol. Front Public Health. 2022;10:739100.
7. Yu L, Wang W, Liu Z, Liu Z, Xu Y, Lin Y. Construction of a virtual simulation laboratory for gene detection. BMC Med Educ. 2023;23(1):1–11.
8. Nurunnabi ASM, Rahim R, Alo D, Mamun Aa, Kaiser AM, Mohammad T, Sultana F. Experiential learning in clinical education guided by the Kolb’s experiential learning theory. Int J Hum Health Sci (IJHHS). 2022;6(2):155.
9. Frøland TH, Heldal I, Sjøholt G, Ersvær E. Games on mobiles via web or virtual reality technologies: how to support learning for biomedical laboratory science education. Information. 2020;11(4):195.
10. Prados-Carmona A, Fuentes-Jimenez F, Roman de Los Reyes R, García-Rios A, Rioja-Bravo J, Herruzo-Gomez E, Perez-Martinez P, Lopez-Miranda J, Delgado-Lista J. A pilot study on the feasibility of developing and implementing a mobile app for the acquisition of clinical knowledge and competencies by medical students transitioning from preclinical to clinical years. Int J Environ Res Public Health. 2022;19(5):2777.
11. Dunleavy G, Nikolaou CK, Nifakos S, Atun R, Law GCY, Tudor Car L. Mobile digital education for health professions: systematic review and meta-analysis by the digital health education collaboration. J Med Internet Res. 2019;21(2):e12937.
12. Ji H, Zhu K, Shen Z, Zhu H. Research on the application and effect of flipped-classroom combined with TBL teaching model in WeChat-platform-based biochemical teaching under the trend of COVID-19. BMC Med Educ. 2023;23(1):679.
13. Abdulwahed M, Nagy ZK. Applying Kolb’s experiential learning cycle for laboratory education. J Eng Educ. 2009;98(3):283–94.
14. Zeng X. Exploration of university laboratory safety education platform based on WeChat official account. J Educ Educational Res. 2022;1(1):78–80.
15. Delima R, Santosa HB, Purwadi J. Development of Dutatani website using rapid application development. IJITEE (International J Inform Technol Electr Engineering). 2017;1(2):36–44.
16. Tu S, Yan X, Jie K, Ying M, Huang C. WeChat: an applicable and flexible social app software for mobile teaching. Biochem Mol Biol Educ. 2018;46(5):555–60.
17. Kolb DA. Experience as the source of learning and development. Upper Saddle River: Prentice Hall; 1984.
18. WeChat mini programme design guideline. Tencent Inc.; 2023. https://developers.weixin.qq.com/miniprogram/design/
19. World Health Organization. Laboratory biosafety manual, 4th edition. Geneva: WHO; 2020.
20. Almaiah MA, Hajjej F, Lutfi A, Al-Khasawneh A, Alkhdour T, Almomani O, Shehab R. A conceptual framework for determining quality requirements for mobile learning applications using delphi method. Electronics. 2022;11(5):788.
21. Polit DF, Beck CT. The content validity index: are you sure you know what’s being reported? Critique and recommendations. Res Nurs Health. 2006;29(5):489–97.
22. Aschbrenner KA, Kruse G, Gallo JJ, Plano Clark VL. Applying mixed methods to pilot feasibility studies to inform intervention trials. Pilot Feasibility Stud. 2022;8(1):217.
23. The National Institute for Health and Care Research. 2021. https://www.nihr.ac.uk/documents/nihr-research-for-patient-benefit-rfpb-programme-guidance-on-applying-for-feasibility-studies/20474
24. Gadke DL, Kratochwill TR, Gettinger M. Incorporating feasibility protocols in intervention research. J Sch Psychol. 2021;84:1–18.
25. Bowen DJ, Kreuter M, Spring B, Cofta-Woerpel L, Linnan L, Weiner D, Bakken S, Kaplan CP, Squiers L, Fabrizio C. How we design feasibility studies. Am J Prev Med. 2009;36(5):452–7.
26. Maramba I, Chatterjee A, Newman C. Methods of usability testing in the development of eHealth applications: a scoping review. Int J Med Inform. 2019;126:95–104.
27. Orsmond GI, Cohn ES. The distinctive features of a feasibility study: objectives and guiding questions. OTJR: Occupation Participation Health. 2015;35(3):169–77.
28. Creswell JW, Clark VLP. Designing and conducting mixed methods research. Sage; 2017.
29. Fosnacht K, Sarraf S, Howe E, Peck LK. How important are high response rates for college surveys? Rev High Educ. 2017;40(2):245–65.
30. Nakamura Y, Yoshinaga N, Tanoue H, Kato S, Nakamura S, Aoishi K, Shiraishi Y. Development and evaluation of a modified brief assertiveness training for nurses in the workplace: a single-group feasibility study. BMC Nurs. 2017;16(1):1–8.
31. Palinkas LA, Horwitz SM, Green CA, Wisdom JP, Duan N, Hoagwood K. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Adm Policy Ment Health. 2015;42:533–44.
32. Hennink M, Kaiser BN. Sample sizes for saturation in qualitative research: a systematic review of empirical tests. Soc Sci Med. 2022;292:114523.
33. Kamouna A, Alten F, Grabowski E, Eter N, Clemens CR. High user acceptance of a retina e-learning app in times of increasing digitalization of medical training for ophthalmologists. Ophthalmologica. 2022;245(4):368–75.
34. Hyzy M, Bond R, Mulvenna M, Bai L, Dix A, Leigh S, Hunt S. System usability scale benchmarking for digital health apps: meta-analysis. JMIR mHealth uHealth. 2022;10(8):e37290.
35. Wang Y, Lei T, Liu X. Chinese system usability scale: translation, revision, psychological measurement. Int J Hum Comput Interact. 2020;36(10):953–63.
36. Fergo AG, Ratnasari CI. Evaluation of octo mobile user experience using the system usability scale method. Edumatic: Jurnal Pendidikan Informatika. 2023;7(1):151–9.
37. Braun V, Clarke V. Conceptual and design thinking for thematic analysis. Qual Psychol. 2022;9(1):3.
38. Kim SK, Lee Y, Yoon H, Choi J. Adaptation of extended reality smart glasses for core nursing skill training among undergraduate nursing students: usability and feasibility study. J Med Internet Res. 2021;23(3):e24313.
39. Ndolo DO, Wach M, Rüdelsheim P, Craig W. A curriculum-based approach to teaching biosafety through eLearning. Front Bioeng Biotechnol. 2018;6:42.
40. Liew MS, Zhang J, See J, Ong YL. Usability challenges for health and wellness mobile apps: mixed-methods study among mHealth experts and consumers. JMIR mHealth uHealth. 2019;7(1):e12160.
41. Herrmann-Werner A, Loda T, Zipfel S, Holderried M, Holderried F, Erschens R. Evaluation of a language translation app in an undergraduate medical communication course: proof-of-concept and usability study. JMIR mHealth uHealth. 2021;9(12):e31559.
42. Ming DY, Wong W, Jones KA, Antonelli RC, Gujral N, Gonzales S, Rogers U, Ratliff W, Shah N, King HA. Feasibility of implementation of a mobile digital personal health record to coordinate care for children and youth with special health care needs in primary care: protocol for a mixed methods study. JMIR Res Protoc. 2023;12(1):e46847.


Acknowledgements

The authors wish to thank all the participants in this study.

Funding

This study was supported by the Guangzhou Municipal Science and Technology Bureau (Fund Ref ID: 2060206). This study was also supported by the Department of Education of Guangdong Province (Fund Ref ID: 2021455).

Author information


Contributions

JingJing Zhao conceptualized the idea, and reviewed the manuscript. QianJun Li contributed to data collection, semi-structured interviews and funding acquisition. LiJuan Yang contributed to the semi-structured interviews, statistical analysis, and manuscript writing. Xue Li, RuiChao Yan, QiJian Gao, ShiHao Wen, Ying Liang and ZhenYao contributed to the process of developing WeMed. All the authors read and approved the final manuscript.

Corresponding authors

Correspondence to QianJun Li or LiJuan Yang.

Ethics declarations

Ethics approval and consent to participate

The Ethics Review Committee of Guangzhou Medical University approved the study. All methods were carried out in accordance with relevant guidelines and regulations. Informed consent was obtained from the students, and verbal informed consent was obtained from all participants included in the interviews. No human specimens were collected in this study.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Li, Q., Zhao, J., Yan, R. et al. WeChat mini program in laboratory biosafety education among medical students at Guangzhou Medical University: a mixed method study of feasibility and usability. BMC Med Educ 24, 305 (2024). https://doi.org/10.1186/s12909-024-05131-9

