A multi-year retrospective quantitative implementation evaluation of Safer Opioid Prescribing, an opioid prescribing continuing education program for Canadian healthcare providers

Background: Continuing health professions education is considered an important policy intervention for the opioid epidemic. Besides examining effectiveness or impact, it is important to also study health policy implementation to understand how an intervention was delivered within complex policy and practice environments. Implementation outcomes can be used to help interpret continuing health professions education effects and impacts, help answer questions of "how" and "why" programs work, and inform transferability. Methods: We conducted a retrospective quantitative implementation evaluation of the 2014–2017 cohort of Safer Opioid Prescribing, a Canadian continuing health professions education program consisting of three synchronous webinars and an in-person workshop. To measure reach and dose, we examined participation and completion data. We used Ontario physician demographic data, including regulatory status with respect to narcotics, to examine relevant trends. To measure fidelity and participant responsiveness, we analyzed participant-provided evaluations of bias, active learning and relevance to practice. We used descriptive statistics and measures of association for both continuous and categorical variables. We used logistic regression to determine predictors of workshop participation and analysis of covariance to examine variation in satisfaction across different-sized sessions. Results: Eighty-four percent of participants were family physicians, with representative reach to non-major urban physicians. The webinar completion rate was 86.2%, with no differences in completion based on rurality, gender or status with the regulatory college. Participants who had regulatory involvement with respect to opioids were more likely to be male, to have been in practice for longer and to participate in the workshop. Participants reported no significant bias and highly rated both active learning and relevance to practice regardless of their cohort size. 
Conclusions: This evaluation demonstrates that Safer Opioid Prescribing was implemented as intended. Over a short period and without any external funding, the program reached more than 1% of the Ontario physician workforce. This suggests that Safer Opioid Prescribing is a good model for using virtual continuing health professions education to reach a critical mass of prescribers to drive population-level opioid utilization changes. This study represents a methodological advance in adapting evaluation methods from health policy and complex interventions for continuing health professions education.


Introduction
Education about chronic pain and opioid analgesic prescribing has been deficient in terms of both quantity and quality across the medical education continuum (1,2). Opioid manufacturer promotions couched as educational activities have been prominent drivers of inappropriate prescribing and opioid-related harms (3). Consequently, continuing health professional education (CHPE) without industry involvement has been considered an important rectifying intervention for the ongoing opioid epidemics in Canada and the United States, as identified by a variety of national and regional policy documents (4)(5)(6), editorialists (7,8) and national media (9).
Health policies are often characterized as complex interventions and CHPE programs share many similar distinguishing features: they involve the actions of people, include a complex chain of steps, are embedded in social systems shaped by context, and are open systems subject to change (10,11). This suggests that such CHPE programs can and should be subjected to the same rigorous evaluations as other health policies whenever possible.
At least one systematic review of opioid prescribing interventions has characterized education using an evaluative framework for complex interventions (12). This review classified CHPE and other prescribing interventions such as prescription drug monitoring programs in terms of implementation, effectiveness and impact outcomes. The study authors, however, did not relate such outcome categorizations back to any kinds of frameworks, conceptual models or theories of educational development, delivery or evaluation.
Moore et al.'s CHPE outcomes framework (13,14) provides a useful bridge between outcomes for educational programs and complex interventions. The first two levels, participation and satisfaction, map clearly to implementation outcomes; the third through fifth levels map to effectiveness outcomes; and the sixth and seventh levels can be categorized as typical impact outcomes (Table 1). Participation (Level 1), the number of physicians and others who participated in the CME activity, and satisfaction (Level 2), the degree to which the expectations of the participants about the setting and delivery of the CME activity were met, are implementation outcomes. Competence (Level 4) is the degree to which participants show in an educational setting how to do what the CME activity intended them to be able to do, and performance (Level 5) is the degree to which participants do what the CME activity intended them to be able to do in their practices. Patient health (Level 6) is the degree to which the health status of patients improves due to changes in the practice behavior of participants, and population health (Level 7), an impact outcome, is the degree to which the health status of a community of patients changes due to such changes. Moore et al. characterize these outcomes as a hierarchy, with the highest quality programs aiming for, and achieving, demonstrable changes in terms of patient or community health outcomes. Evaluations focused on these higher-level outcomes can answer questions such as "did this program have the intended impacts?" or, more bluntly, "did this program work?". However, if program implementation is not studied concurrently with effectiveness and impact outcomes, then such evaluations cannot answer questions of "how" and "why" the programs did or did not have the intended outcomes (15). Methods of program evaluation which study implementation can aid in opening this so-called "black box" (16). 
For example, by examining participation and satisfaction, one can learn not only if a program intervention reached its target audience and met its learning objectives, but one can also garner valuable data related to how a program has been implemented in a specific context or how it may be sustained in that context. These data are essential in informing transferability of findings into other contexts (17).

Safer Opioid Prescribing: program logic
With this framework of opioid prescribing CHPE as a health policy intervention, we aimed to conduct a comprehensive implementation evaluation of a national Canadian program called Safer Opioid Prescribing (SOP). SOP was initially developed in 2012 and 2013 using Kern's model for curriculum development (18) but with three particular adaptations for CHPE.
First, we used the PRECEDE-PROCEED model (19) as an overlay to guide multi-step program development and evaluation. PRECEDE-PROCEED is a comprehensive framework for program design, implementation and evaluation that is commonly used in the fields of public health and health promotion. In using this model, we formalized our conception of education as a health policy intervention as has been done elsewhere (20). Using PRECEDE-PROCEED allowed us to: a) contextualize CHPE within the specific circumstances of the Canadian contemporary opioid epidemic and the range of other policy options for addressing it; b) involve the target audience for the intervention in program planning; and, c) conceptualize and categorize specific implementation and effectiveness outcomes during the initial design stages (for which we used Moore et al.'s outcome framework).
Our second adaptation of Kern's model to CHPE was to apply Prochaska's Transtheoretical Model (21) to guide content and curriculum progression. Compared to undergraduate or postgraduate health professional education, in CHPE there is a greater imperative to meet the perceived learning needs of the target audience in order to maximize the relevance to their practices and ultimately drive practice change. We aimed to achieve progression along the stages of change from "contemplative" towards "action" and "maintenance" phases with respect to best practices in chronic pain management and opioid analgesic prescribing.
Our third adaptation to CHPE was to use multiple systematic reviews of continuing medical education (CME) effectiveness to identify and incorporate best practices in education for achieving practice change and improvement in patient outcomes (22)(23)(24)(25), including for internet-based CHPE (26).
Through the process of program development, we identified several areas of need and potential gaps in training and education within the context of the contemporary opioid epidemic: 1. Prescribed opioids were identified as an important contributor to opioid-related harms and family physicians were identified as the most common prescribers of opioids (27).
2. The opioid epidemic was growing in scale and was linked to the practices of the majority of family physicians (28).
3. There was an inequitable distribution of harms, with greater rates of overdoses and deaths from opioids in rural and remote communities -places where there might be less access to practice supports and high quality CHPE programs (29). 4. Chronic pain was a major learning priority for family physicians (30,31) and there were important knowledge gaps with respect to opioid prescribing (32).
5. There was a persistent influence of the pharmaceutical industry on prescribing practices and thus a growing skepticism of opioid educational programs because of possible pharmaceutical industry involvement (3). Likewise, existing CHPE programs in the field tended to be based on expert opinion rather than the best available evidence, for example, from systematic reviews or clinical practice guidelines.
6. The provincial medical regulator had an active and substantial influence on opioid prescribing behaviour, which in some cases could be an even stronger driver of prescribing behaviour than certain kinds of educational interventions (33).
We set out to rigorously evaluate the implementation of SOP, aligning the implementation evaluation outcomes with the underlying program logic (Fig. 1). Our implementation evaluation questions were as follows: 1. Who were the participants in the program and were family physicians and prescribers from rural and remote communities well-represented? Does this participation suggest scalability to meet the needs of the opioid epidemic?
2. What was the completion rate of the program? Which participants were more or less likely to complete the program? 3. Did participants note any significant bias in the delivery of the program?

Intervention: Safer Opioid Prescribing
Starting in 2012, we developed SOP to address the six areas of need identified above. The scientific planning committee consisted of family physicians from a diversity of backgrounds including primary care, chronic pain care, addictions medicine, anesthesia, pharmacology and inner-city medicine. The program targeted family physicians, though it was designed to also be relevant to specialist prescribers as well as other professionals involved in opioid prescribing (e.g. pharmacists). Nurse practitioners were not identified as a primary target at the time of development since they were not eligible to prescribe opioids in our jurisdiction until early 2017. Faculty for the program during the study period of interest did not have any history of involvement with opioid or other pharmaceutical manufacturers.
SOP content focused on opioid prescribing but was contextualized within models of the management of chronic pain as a complex medical condition. Foundational documents included a national clinical practice guideline (34) and a tool that was developed to support the implementation of the guideline (35). The program was funded entirely by participant registration fees to ensure sustainability. The program received no funding from industry for either development or delivery. Fees for the program for physician participants were C$450 for the webinars and C$650 for the workshops. A reduced rate for non-physician and resident participants was C$150 for the webinars and C$200 for the workshops. Participants in the program were sometimes required or suggested to attend by their medical regulator; however, program administrators and faculty were blinded to participants' regulatory status.
In terms of evidence-based CHPE practices, SOP utilized multiple interventions (13 distinct interventions), was of substantial duration (3-4 months), utilized a blended-learning approach, was interactive, and identified links between clinical practice and serious health outcomes (36). The program was split into two components: a series of three synchronous evening webinars followed by a one-day in-person workshop to create a flipped classroom (Table 2). The virtual format was intended to make the program accessible to learners regardless of geographic location and the evening timing was intended to make it accessible outside of usual clinic time for the majority of participants. The virtual format also aimed to make the program scalable to reach a large number of participants simultaneously. The webinars were made synchronous and interactive to help create a virtual community of learning (37), which we hypothesized would help normalize a challenging area of practice and also drive higher levels of completion, a known challenge for online learning programs (38,39).
The first of the three webinars focused on the multimodal management of chronic pain; the second on the details of opioid prescribing (e.g. patient risk assessment, medication selection, initiation and titration); and the third on situations in which prescribing can be more challenging (e.g. with the elderly, in pregnancy, with people living with opioid use disorder). The workshop addressed challenging cases and communication issues, focusing on skills and competencies particularly suited for a live workshop as compared to a synchronous webinar. Webinar participation was a pre-requisite for workshop participation. Each webinar and the workshop had specific pre-work and post-work to prime learning and to facilitate integration into practice, respectively. The program was accredited for a total of 27 credits of learning: 9 credits for the webinars and 18 credits for the workshop.

Study population and setting

The study population included all SOP participants from January 1, 2014 through June 14, 2017. This study period was chosen because the program had fully launched in its current form by January 2014 and the content of the program was substantially redeveloped after June 2017 based on the release of new Canadian guidelines for opioid prescribing (40). We included all participants in this study period, regardless of their profession, specialty, location, completion status and whether they were required to attend or attended voluntarily. We excluded medical residents and trainees, participants for whom there was substantial missing participation data, and participants who participated only in the workshop but not in the webinars.
The program was delivered out of the University of Toronto in Ontario, which is Canada's most populous province (population 14,193,384 in 2017). The webinars could be attended from any location and there were no restrictions on participation based on provincial or national practice location. The workshops were held at a university-affiliated conference facility in downtown Toronto. From 2016 to 2017, the national crude rate of opioid-related deaths continued to escalate from 8.4 to 11.3 per 100,000, with consistent increases across the country but substantial inter-provincial variation in the crude rate (41).

Outcome measures
We collected both participation and satisfaction data to assess for the implementation measures of reach, dose, fidelity and participant responsiveness (42). We examined the total number of participants in any webinar as a measure of program reach. For each participant, we recorded information about their profession, specialty and province of practice. For Ontario physician participants, we collected data about gender, graduating medical school (international versus domestic), number of years of practice since Ontario licensure to first participation in SOP, medical specialty and rurality. We also recorded the status of the Ontario physician participants with respect to narcotics prescribing and the provincial medical regulatory college (the College of Physicians and Surgeons of Ontario, CPSO). As a measure of program dose, we collected attendance information for each of the webinars and workshop.
We collected participant-provided evaluations of balance and bias, their ratings of the program's relevance to their clinical practice and the adequacy of active learning time as measures of fidelity and participant responsiveness. These measures were collected anonymously from participants post-intervention and so could not be linked to individual participant demographics.

Data sources
Participation data were collected from the registration system of Continuing Professional Development at the University of Toronto, which administers SOP. Registration data included dates of participation, practice location, profession and specialty. For Ontario physicians, these registration data were linked with gender, graduating medical school and dates of Ontario licensure from the public register of the CPSO. Prior authorization from the CPSO to access these data was obtained. The Ontario Medical Association's (OMA) Rurality Index of Ontario (RIO) was used to determine a rurality score based on practice postal code. This Index has been used in other program evaluations as a measure of rurality for Ontario physicians (43). For data pertaining to the rurality of the Ontario family physician population (as a comparator to our participant group), we combined the OMA-generated RIO score at the Census Sub-Division level (44) with the Ontario Physician Human Resources Data Center's list of Physician Counts by Census Sub-Division for 2017 (45).
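The linkage described above, joining registration records to regulator data and a postal-code-keyed rurality score, can be sketched as follows. This is a minimal illustration only: every field name, postal code, RIO value and the rurality cut-point below are hypothetical placeholders, not the study's actual data model or the RIO's real categories.

```python
# Hypothetical sketch of the record linkage pipeline: registration data
# joined to public-register fields and a postal-code-keyed RIO score.
# All names, codes and cut-points are illustrative assumptions.

registrations = [
    {"id": 1, "postal_code": "M5G 1X8", "specialty": "family medicine"},
    {"id": 2, "postal_code": "P0T 2W0", "specialty": "family medicine"},
]

# Register fields keyed on participant id (after deterministic matching).
cpso_register = {
    1: {"gender": "F", "licensure_year": 2010, "grad": "CMG"},
    2: {"gender": "M", "licensure_year": 1995, "grad": "IMG"},
}

rio_by_postal_code = {"M5G 1X8": 0, "P0T 2W0": 68}  # illustrative scores

def rurality_category(rio):
    # Illustrative cut-point only; the real RIO categorization differs.
    return "rural" if rio >= 40 else "urban"

def link(reg):
    """Merge one registration record with register fields and a RIO score."""
    merged = dict(reg)
    merged.update(cpso_register.get(reg["id"], {}))
    rio = rio_by_postal_code.get(reg["postal_code"])
    merged["rio"] = rio
    merged["setting"] = rurality_category(rio) if rio is not None else None
    return merged

linked = [link(r) for r in registrations]
```

A dictionary-based join like this mirrors what a statistical package's merge would do; keeping the linkage deterministic on a single identifier avoids the ambiguity of probabilistic matching.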
For satisfaction data, we used anonymous program evaluations collected immediately post program. The corresponding study objectives, evaluation framework level, implementation evaluation outcome, and data sources are outlined in Table 3.

Data analysis
We used the descriptive statistics of mean, median, standard deviation, minimum, and maximum for continuous measures, and frequency and percentage for categorical measures to describe the sample. We assessed the association between categorical variables using Chi-squared and Fisher's exact tests. We assessed the association between binary and continuous measures using the two sample t-test. We used logistic regression to assess association between participant factors including gender, years in practice, webinar completion, regulatory college status, setting, and international medical graduate (IMG) or Canadian medical graduate (CMG) status and the likelihood of workshop participation. We used the Hosmer-Lemeshow test to assess the goodness of fit for the logistic regression model. We used analysis of covariance to assess for variability in adequacy of active learning time and clinical relevance across different-sized groups and program types (webinar or workshop). All tests were two-sided and p < 0.05 was considered statistically significant. We used the statistical software SAS 9.4 for data manipulation and statistical analysis.
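To illustrate the categorical association testing described above, the Pearson chi-squared statistic for a 2×2 table (e.g. completion by gender) can be computed directly, with the p-value for 1 degree of freedom obtained from the complementary error function. The counts below are made up for illustration and are not the study data, and the actual analysis was run in SAS rather than with hand-rolled code.

```python
import math

def chi2_2x2(table):
    """Pearson chi-squared test for a 2x2 contingency table (1 df).
    Returns (statistic, p_value)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    # For a chi-squared variable with 1 df, P(X > x) = erfc(sqrt(x / 2)).
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Illustrative counts: rows = gender, columns = completed vs not completed.
stat, p = chi2_2x2([[90, 10], [80, 20]])
```

With small expected cell counts one would switch to Fisher's exact test, as the paper's analysis plan also specifies.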

Results
During this study period, SOP webinars were offered 11 times and the workshop was offered 8 times. 1. Who were the participants in the program and were family physicians and prescribers from rural and remote communities well-represented? Does this participation suggest scalability to meet the needs of the opioid epidemic?
Participant characteristics

There were 517 unique registrants for this program. Participants who only participated in the workshop (n = 10) were excluded from this analysis. We combined data for registrants who participated in the program more than once (n = 20) into a single record. We excluded 18 medical residents (3.5%) and also excluded an additional 17 registrants (3.3%) due to incomplete registration or participation data.
In total, there were 472 unique participants (Table 4). 164 (34.7%) participants were female. The large majority (88.1%) were from Ontario while the remainder were from each of the other Canadian provinces but none of the northern territories. There were three participants from a US state adjacent to Ontario. 398 (84.3%) were family physicians, which included general practitioners as well as those with focused practices in emergency medicine, addiction medicine, anesthesiology, community medicine, geriatrics, palliative care, occupational medicine and psychotherapy. 52 (11.0%) were other medical specialists, with the majority being from emergency medicine and anesthesiology. 21 (4.4%) were other health professionals, including dentists, pharmacists, registered nurses and nurse practitioners, all of whom were from Ontario. Years in practice had a standard deviation of 14.0 and ranged from 0.0 to 50.0 years. There was a clear bimodal distribution, with a peak in the 0-10 year range consisting of nearly even female and male participants and another peak in the 30-35 years-in-practice range consisting mostly of male participants (Fig. 2).
While the overall sample was less urban than was the Ontario physician workforce, this difference was not significant when the setting distribution was compared by physician specialty. The rural-to-urban distributions of Ontario family physician and medical specialist participants were both reflective of the distribution of all physicians in the province (not shown).

Regulatory college status
We analyzed the Ontario physician participants with respect to their status regarding narcotic prescribing with the provincial medical regulatory college (Table 5). We found that those SOP participants who had a public record of restrictions or had a regulatory hearing regarding their narcotic prescribing were much more likely to be male (p < .0001) and to have been in practice for longer (p < .0001). There were no differences in the rural-to-urban distribution, country of medical school graduation or profession type amongst those with identified medical regulatory involvement compared to those without.

Workshop participation
Of the 472 webinar participants, 177 (37.5%) participated in the workshop. Ontario participants were more likely to participate in the workshop than were participants from other provinces (participation rate 39.9% versus 19.6%, p = .003). We conducted a multivariate logistic regression to determine predictors of workshop participation for Ontario physician participants. Webinar completers were 5.1 times more likely to participate in the workshop (χ² = 13.5, p = .0002; 95% CI = 2.1 to 12.1) than non-completers. Those who had involvement with the provincial regulatory college with respect to narcotic prescribing were 4.1 times more likely than those with no involvement to participate in the workshop (χ² = 24.8, p < .0001; 95% CI = 2.3 to 7.2). Urban practitioners were 2.2 times more likely than rural practitioners to participate in the workshop, but this finding was not statistically significant (χ² = 3.0, p = .083; 95% CI = 0.9 to 5.1). There was no difference between urban and non-major urban participants (χ² = 1.0, p = .32; 95% CI = 0.6 to 1.9). Gender, years in practice and country of medical school graduation did not predict workshop participation.
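The odds ratios and Wald confidence intervals reported here are the standard transformations of a logistic regression coefficient and its standard error. As a hedged illustration of that arithmetic only, the sketch below back-calculates a standard error from the published interval for webinar completion; the β and SE are reverse-engineered assumptions, not the fitted model's actual estimates.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% Wald CI from a logit coefficient and its SE."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Illustrative values chosen to roughly reproduce the reported
# OR = 5.1 (95% CI 2.1 to 12.1) for webinar completers:
beta = math.log(5.1)                                  # assumed coefficient
se = (math.log(12.1) - math.log(2.1)) / (2 * 1.96)    # back-calculated SE
or_, lo, hi = odds_ratio_ci(beta, se)
```

Because the CI is symmetric on the log-odds scale, it is asymmetric around the odds ratio itself, which is why the reported interval (2.1 to 12.1) is not centered on 5.1.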
3. Did participants note any significant bias in the delivery of the program?
The response rate for anonymous post-webinar evaluations was between 51% and 53%, and for post-workshop evaluations between 91% and 99%. No significant bias in the delivery of any of the programs was reported by participants. Between 96.7% (webinar 1) and 99.0% (webinar 2) of webinar participants, and 98.0% of workshop participants, reported that the program presentation was balanced and unbiased. An analysis of covariance showed that the effect of group size on relevance to practice when controlling for program type (webinar or workshop) was not significant (p = 0.514).
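The analysis of covariance here amounts to regressing the rating on group size while adjusting for program type; a group-size coefficient indistinguishable from zero means session size does not explain variation in ratings. The simulation below is a minimal sketch of that model fitted by ordinary least squares on made-up data (the effect sizes, noise level and group sizes are invented), not a reproduction of the study's SAS analysis.

```python
import random

def ols_fit(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with naive Gaussian elimination. Fine for a tiny design matrix."""
    k, n = len(X[0]), len(X)
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         + [sum(X[r][i] * y[r] for r in range(n))] for i in range(k)]
    for col in range(k):                      # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k + 1):
                A[r][c] -= f * A[col][c]
    beta = [0.0] * k                          # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (A[r][k] - sum(A[r][c] * beta[c]
                                 for c in range(r + 1, k))) / A[r][r]
    return beta

random.seed(42)
rows, ratings = [], []
for _ in range(400):
    workshop = random.random() < 0.4      # program type, the covariate
    size = random.randint(10, 74)         # session size, as in the webinars
    # Simulated truth: program type shifts ratings slightly; size does not.
    rating = 4.4 + 0.2 * workshop + random.gauss(0, 0.3)
    rows.append([1.0, float(workshop), float(size)])
    ratings.append(rating)

intercept, type_effect, size_effect = ols_fit(rows, ratings)
```

Under this simulated truth the fitted group-size coefficient comes out near zero while the program-type effect is recovered, which is the pattern the reported non-significant ANCOVA result (p = 0.514) reflects.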

Discussion
SOP was designed as a policy intervention for the Canadian opioid crisis, using best practices in CHPE as a means for driving meaningful change at multiple levels of outcomes. We aimed to offer a program that a) targeted family physicians, b) was accessible regardless of geography, c) was free specifically of pharmaceutical industry bias, d) was interactive, and e) was relevant to practice. This evaluation strongly suggests that through 2014-2017, SOP was delivered as intended along multiple implementation outcomes including reach, dose, fidelity and participant responsiveness. Though sustainability was not formally assessed by this evaluation, it is suggested by the fact that the program ran continuously throughout the study period without any external funding.
Family physicians, who are responsible for the majority of long-term and high-dose opioid prescribing, were disproportionately represented in the program. Likewise, participation in the program was representative of the geographic spread of physicians in Ontario, despite the program being delivered virtually from a major urban academic medical centre. This was accomplished in the context of Ontario's vast geography and known issues of poor access to high-speed internet in rural and remote communities (46). Overall, physician participants were not reflective of the gender mix of Ontario physicians. However, this was mostly driven by participants with medical regulatory involvement, who skewed heavily male and toward more years in practice. This cohort of participants with medical regulatory involvement is reflective of patterns in Ontario that have been identified with respect to potentially problematic opioid prescribing (27) and also patterns of medical regulatory referral to opioid education programs (33).
The program was rated as highly relevant to practice, which partly explains the very high engagement and completion rates. The slightly lower completion rates amongst specialist physicians and family physicians with focused practices, such as in emergency medicine, may in part be driven by lower relevance to practice.
It should be noted, however, that completion rates amongst these groups were still very high compared to internet-delivered CHPE norms (47)(48)(49). Another reason for lower completion rates might be less predictable clinical schedules for certain specialists. Importantly, medical regulator involvement was not an important driver of webinar completion.
Out-of-province completion rates were also high at 75.0%, though not as high as for Ontario participants. This may be driven by lower relevance to practice, as some of the program content is focused on epidemiological and practical issues specific to Ontario. However, a more practical reason for this difference may be time zone differences making it challenging for working professionals to participate in the webinars. Notably, there was a significantly lower completion rate for participants in Western time zones two or more hours from Ontario, for whom the webinar time may have conflicted with normal clinical hours. The completion rate for those with less than a two-hour time difference was comparable to Ontario participants.
The scalability of the program is suggested by the variable size of the webinars, with the largest including 74 simultaneous learners. We identified no systematic variation in time for active learning or relevance to practice based on the size of the webinars. This suggests that fidelity to the program is maintained even with very large numbers of participants. Overall, over a short period, the program was able to reach more than 1% of the 32,055-strong Ontario physician workforce. The demonstrated geographic reach of the program and the potential for scalability suggest that this program is a good model for reaching a critical mass of prescribers to drive population-level changes in opioid utilization.
While encouragement or requirement for participation with the medical regulator was not an important driver of webinar completion, it did contribute to the likelihood of workshop participation. Webinar completion was also an important driving factor for workshop participation. Geography played a role in workshop participation as well since non-Ontario participants were less likely to participate in the workshop, which was held in Toronto and thus at least several hours away. The cost of the workshop may have also been prohibitive for some participants given the cumulative costs of registration, travel and lost clinical income.
We also noted that, while the difference was not statistically signi cant, rural Ontario physicians were half as likely as their urban counterparts to participate in the workshop.
As CHPE evaluation has moved increasingly towards outcome-based approaches (15) with a preference for "higher-level" outcomes such as patient-level and population-level outcomes, there has been a related discounting of implementation outcomes such as participation and satisfaction (50). Indeed, Moore et al.'s updated framework refers to three different kinds of assessment, namely summative, performance and impact. Importantly, however, this framework ignores assessment of implementation. This may be because educational interventions are not commonly conceptualized as complex interventions that are delivered in complex and dynamic health system and policy contexts, all of which can affect program delivery and structure (implementation) and thus program effects. Thus, rigorous implementation evaluations can be used to determine how the program was actually carried out and whether it was carried out as intended.
While the implementation of programs as intended is no guarantee of effectiveness, these data are key to then informing subsequent effectiveness and impact evaluations and also to assessing program theory.
Having conducted this implementation evaluation, further evaluation of SOP is now called for to assess effectiveness and impact. To our knowledge, this study is one of few examples of an opioid prescribing CHPE evaluation that has formally assessed implementation outcomes using an evaluation framework for complex interventions together with a CHPE outcome model. Barth et al. (51) describe the use of the Medical Research Council complex intervention framework to develop and evaluate, in a step-wise manner, an academic detailing intervention to improve use of a prescription drug monitoring program (PDMP). A subsequent study evaluates physician self-reports of PDMP utilization -namely a performance (education) or effectiveness (complex intervention) outcome (52). Other opioid prescribing CHPE programs have assessed implementation measures of participation and satisfaction (53-57) but have not directly related these measures to program theory, nor have they used these implementation measures to then inform effectiveness or impact outcomes.
Overall this implementation evaluation adds further support to the feasibility of delivering multicomponent CHPE programs virtually to increase reach, scalability and thus potentially effectiveness and impact (58-60).
There are several important limitations to this study. First, this study was conducted retrospectively using data that were collected both for evaluative purposes and for administrative purposes, such as tracking participation for accreditation reporting. We did have to exclude 17 participants (3.2% of the sample) due to incomplete participation data. This number was small enough that it was unlikely to significantly bias results. Likewise, demographic data were for the most part complete; we could not, for example, determine rurality for only two of the 400 Ontario physician participants. Also, it is important to note that the evaluative data collected (e.g. relevance to practice and amount of interactivity) were defined prior to the delivery of the program and did reflect attempts to assess the underlying program logic. The second data limitation relates to the anonymous nature of the evaluative data. These were kept anonymous as per norms in CHPE to allow participants to freely share their evaluative assessments. However, this did not allow us to link evaluative statements to particular participants and then analyze them by demographic factors. This could be rectified in future evaluations by using, for example, a linking identifier that blinds the scientific planning committee to the identity of participants but allows evaluators to link evaluations to the demographic characteristics of specific de-identified program participants. Likewise, the webinar evaluation data response rates were moderate at 51%, which would introduce an unknown bias to these data. Since these responses were anonymous, it is not possible to further assess the nature of this possible bias.
However, the consistency of these evaluative responses between the webinars, and the consistency of the responses with the workshop data, which had an excellent response rate, provides confidence that these data are reflective of the entire participant population. Third, the available data did not allow for a direct inquiry into the posited drivers of change captured in the logic model, such as that the SOP structure facilitates the creation of a virtual community of learning and practice. Qualitative inquiry using interviews or focus groups of program participants and facilitators would be well suited to better assess this aspect of the program.

Conclusions
This evaluation demonstrates that Safer Opioid Prescribing was implemented as intended. Over a short period and without any external funding, the program reached more than 1% of the Ontario physician workforce. This suggests that Safer Opioid Prescribing is a good model for using virtual continuing health professions education to reach a critical mass of prescribers to drive population-level opioid utilization changes. This study represents a methodological advance in adapting evaluation methods from health policy and complex interventions for continuing health professions education.

Declarations
Ethical approval: Ethical approval for this study was granted by the University of Toronto Research Ethics Board (Protocol 35197).

Consent for publication: not applicable
Availability of data and materials: With the exception of data on physician regulatory status, datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. The data that support the findings of physician regulatory status of this study are available from the College of Physicians and Surgeons of Ontario, but restrictions apply to our further circulation of these data, which were used with their permission for the current study.

Fig. 2: Participant distribution by years in practice, gender and regulatory status