
Construction and preliminary evaluation of the inpatient glycemic control questionnaire (IGCQ): a survey tool assessing perceptions and knowledge of resident physicians



Background

Uncontrolled hyperglycemia in hospitalized patients, with or without diabetes mellitus, is associated with many adverse outcomes. Resident physicians are the primary managers of inpatient glycemic control (IGC) in many academic and community medical centers; however, no validated survey tools related to their perceptions and knowledge of IGC are currently available. As identification of common barriers to successful IGC amongst resident physicians may help foster better educational interventions (ultimately leading to improvements in IGC and patient care), we sought to construct and preliminarily evaluate such a survey tool.


Methods

We developed the IGC questionnaire (IGCQ) by using previously published but unvalidated survey tools related to physician perspectives on inpatient glycemic control as a framework. We administered the IGCQ to a cohort of resident physicians from the University of Mississippi Medical Center, University of Louisville, Emory University, and the University of Virginia. We then used classical test theory and Rasch Partial Credit Model analyses to preliminarily evaluate and revise the IGCQ. The final survey tool contains 16 total items and three answer-choice categories for most items.


Results

Two hundred forty-six of 438 (56.2%) eligible resident physicians completed the IGCQ during various phases of development.


Conclusions

We constructed and preliminarily evaluated the IGCQ, a survey tool that may be useful for future research into resident physician perceptions and knowledge of IGC. Future studies could seek to externally validate the IGCQ and then utilize the survey tool in pre- and post-intervention assessments.



Background

Hyperglycemia is common in the inpatient setting and affects up to one-third of patients admitted to general medical and surgical wards [1,2,3]. Uncontrolled hyperglycemia in hospitalized patients, with or without diabetes mellitus (DM), is associated with adverse outcomes including increased rates of infection and mortality and longer hospital stays [4,5,6,7]. Various studies in both critically and noncritically ill hyperglycemic inpatients demonstrate that improved inpatient glycemic control (IGC) can reduce rates of hospital complications, infections, and cost [8,9,10,11]. Because more than 90% of patients with DM are admitted for reasons unrelated to the disease and may be cared for by staff without specific DM expertise, IGC is often poor [12]. The Planning Research in Inpatient Diabetes (PRIDE) consortium was recently formed to promote clinical research leading to advancement and improvement of IGC. The consortium outlined eight aspects of IGC that need to be addressed; the first called for development and evaluation of provider education tools to improve knowledge of, and address barriers to, achieving appropriate IGC [13]. Since resident physicians are the primary managers of IGC in many academic and community medical centers, it is important to understand their baseline knowledge and perceptions. Currently, scant data are available [14,15,16,17], and no validated survey tools for this topic exist in the medical literature.

Numerous strategies have recently been employed in an attempt to improve IGC, including standardized insulin order sets [18,19,20,21,22,23,24,25,26], mentoring [27], diabetes care team intervention [28,29,30], computerized systems [31, 32], physician and nurse education [19, 33, 34], and resident education [35, 36]. While guidelines and interventions designed to improve IGC are gaining attention, knowledge of and barriers to successful implementation of these guidelines along with how they are being translated into clinical practice by resident physicians remains unclear [14]. Identification of common barriers to successful IGC may help foster better educational interventions, ultimately leading to improvements in IGC and patient care. We therefore aimed to construct and evaluate an easy-to-use survey tool for perceptions and knowledge of IGC among resident physicians.


Methods

Research locations

We performed this study at four locations: University of Mississippi Medical Center (UMMC), Jackson, MS; University of Virginia Health System (UVA), Charlottesville, Virginia; University of Louisville Health Sciences Center (UL), Louisville, KY; and Emory University Healthcare (Emory), Atlanta, GA.

Questionnaire design and administration

To identify relevant prior work on resident physician perspectives of IGC, we searched PubMed and Google Scholar using the search terms “resident physician AND inpatient glycemic control,” “resident AND inpatient glycemic control,” and “resident physician AND inpatient hyperglycemia” (Fig. 1). We then expanded the search by examining citations referenced in the manuscripts initially retrieved. A total of 18 manuscripts were retrieved, four [14,15,16,17] of which were deemed highly relevant for inclusion in our study.

Fig. 1

Flowchart for literature review that led to four references being included in the framework for the Inpatient Glycemic Control Questionnaire

The inpatient glycemic control questionnaire (IGCQ) was constructed by using the previously published but unvalidated surveys [14,15,16,17] as a framework. We then created a novel survey tool by consolidating and adapting these previously published questionnaires, specifically by decreasing the amount of demographic data collected, expanding the scope and focus of question material, utilizing Likert-scale answer choices, and inserting questions designed to assess knowledge of suggested inpatient glycemic targets from consensus guidelines [4]. Institutional Review Board (IRB) approval at each institution and verbal consent from each participant were obtained  prior to questionnaire administration. The IGCQ was then administered to internal medicine (IM) and medicine-pediatric resident and chief resident physicians to determine their comfort with managing IGC, knowledge of inpatient glycemic target values, and perceived barriers to successful IGC. We distributed questionnaires in person during resident physician educational lectures at UMMC, used Google Forms (Google; San Francisco, California) for data collection at Emory, utilized IRB-approved software (QuestionPro Inc.; San Francisco, California) at UVA, and used SurveyMonkey Pro (SurveyMonkey; San Mateo, California) at UL. Anonymous results were collected during February–May 2015 (UMMC), March–June 2016 (Emory), November–December 2016 (UVA), and March–May 2017 (UL). Survey results were tabulated in an Excel (Microsoft; Redmond, Washington) spreadsheet for data analyses.

Evaluation methods

Rasch partial credit model

RPCM is a unidimensional model that enables “specifically objective” comparisons of persons and items when analyzing responses recorded in two or more ordered categories [37, 38]. We conducted RPCM analyses using Winsteps software, examining the data for item fitting, dimensionality, and category response functioning (thresholds).
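Under the PCM, the probability that a person with ability theta responds in category x of an item with step difficulties delta_1..delta_m is proportional to exp(sum over steps j ≤ x of (theta − delta_j)), with the empty sum for category 0 defined as 0. The category probabilities can be sketched as follows (our own minimal illustration; this is not how Winsteps is invoked, and the function name is ours):

```python
import math

def pcm_category_probs(theta, thresholds):
    """Partial Credit Model category probabilities for a person with
    ability `theta` on an item whose step difficulties are `thresholds`.
    Category k's log-numerator is sum_{j<=k} (theta - delta_j); the
    empty sum for category 0 is 0. Returns probabilities summing to 1."""
    numerators = [math.exp(0.0)]  # category 0
    cum = 0.0
    for delta in thresholds:
        cum += theta - delta
        numerators.append(math.exp(cum))
    total = sum(numerators)
    return [n / total for n in numerators]

# Example: a 3-category item (two step difficulties) for an
# average respondent; symmetric thresholds give symmetric extremes.
probs = pcm_category_probs(0.0, [-1.0, 1.0])
print([round(p, 3) for p in probs])
```

With ordered (non-disordered) thresholds, the middle category is most probable for respondents located between the two steps, which is the behavior the threshold analyses below check for.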

Item fitting, dependency, and dimensionality

Fit statistics examine data in comparison with expectations of RPCM. Item fit is calculated using chi-square statistics and may be reported as a mean square (MNSQ), an unstandardized average of squared differences between the RPCM’s expected and observed values for an item. This value should ideally fall between 0.50 and 1.70 for each item in clinical tools [39]. Item dependencies represent correlations between item difficulties and identify items potentially measuring the same concept, which could form a sub-dimension and thereby affect the overall unidimensionality of the test. Principal component analysis (PCA) of the residuals (differences between observed and expected scores) can reveal contrasting items that potentially breach the unidimensionality of the outcome measure [39].

Category response functioning

RPCM compares the probability of a category response to other category responses of the same item as well as category responses from other items [39].

Classical test theory

CTT is a traditional quantitative approach to testing the reliability and validity of a scale, based on that scale's individual items. CTT assumes each subject has a true score, T, that would be obtained if there were no errors in measurement [40]. True scores quantify values on an attribute of interest, defined as the underlying concept, construct, trait, or ability of interest. As values of the true score increase, responses to items representing the same concept should also increase, assuming that item responses are coded so that higher responses reflect more of the concept [40]. We used a Kruskal-Wallis test to evaluate differences in total scores by several variables, including postgraduate year (PGY), program, and gender.
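The Kruskal-Wallis comparison of total scores across groups can be sketched with a small rank-based implementation (our own minimal version of the H statistic, using average ranks for ties and omitting the tie correction and p-value that statistical packages provide):

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic for comparing score distributions
    across independent groups (e.g., total scores by PGY, program,
    or gender). Pools all scores, ranks them (average ranks for
    ties), and compares each group's rank sum to its expectation."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        for k in range(i, j):
            rank_sums[pooled[k][1]] += avg_rank
        i = j
    return 12 / (n * (n + 1)) * sum(
        rs ** 2 / len(g) for rs, g in zip(rank_sums, groups)
    ) - 3 * (n + 1)

# Two small groups of total scores
print(round(kruskal_wallis_h([1, 2], [3, 4]), 1))  # prints 2.4
```

Larger H values indicate greater separation between the groups' score distributions; identical distributions give H near 0.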

Preliminary evaluation

Pretest (phase 1)

Three IM attending physicians, two endocrinology attending physicians, and two IM resident physicians performed pretest review of the survey tool. Our goals were to ensure that: (1) the IGCQ adequately covered key aspects of both IGC and resident education and (2) question construction was neither too leading nor confusing. Feedback received in this stage raised concerns about two Likert scale questions being “vague and open for interpretation.” Based on these recommendations, we revised the IGCQ accordingly. The IGCQ as presented in Additional file 1: Appendix 1 reflects the survey tool after pretest revision but before distribution to resident physicians.

Pilot study (phase 2)

The IGCQ was administered to 182 resident physicians at UMMC, UVA, and Emory. We then used Rasch analysis of collected data to evaluate construct validity of the IGCQ. We did not perform Rasch analysis of knowledge-based items (IGCQ Questions 10–13), as we wanted to preserve five answer choices for these items in order to maintain question complexity and delineate true knowledge of IGC during preliminary evaluation. RPCM analysis demonstrated disordered thresholds for several items using the initial 5-point answer choice scale (Fig. 2). Category responses for non-knowledge-based questions were subsequently merged (e.g., 1 instead of 1 and 2, 3 instead of 4 and 5), and RPCM analysis of the merged data showed improved threshold order (Fig. 3) and acceptable fit statistics (Table 1). The improved psychometric performance of merged data led us to modify the IGCQ by reducing the number of category responses for Likert scale questions from five (1 = “strongly agree,” 2 = “agree,” 3 = “neither agree nor disagree,” 4 = “disagree,” and 5 = “strongly disagree”) to three (1 = “agree,” 2 = “neither agree nor disagree,” and 3 = “disagree”). We also reduced category responses for Questions 1 and 2 from five (1 = “2–3,” 2 = “4–5,” 3 = “6–7,” 4 = “8–9,” and 5 = “≥10”) to three (1 = “2–5,” 2 = “6–7,” and 3 = “≥8”). Category responses for Questions 3 and 4 were also reduced from five (1 = “< 1,” 2 = “1–2,” 3 = “3–4,” 4 = “5–6,” and 5 = “≥7”) to three (1 = “0–2,” 2 = “3–4,” and 3 = “≥5”).
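The Likert-category merging described above can be expressed as a simple recode (a hypothetical helper mirroring the mapping stated in the text; not the authors' analysis code):

```python
# Mapping implied by the text: 1-2 -> agree, 3 -> neutral, 4-5 -> disagree
LIKERT_5_TO_3 = {1: 1, 2: 1, 3: 2, 4: 3, 5: 3}

def merge_likert(response):
    """Recode an original 5-point Likert response into the revised
    3-point scale (1 = agree, 2 = neither agree nor disagree,
    3 = disagree)."""
    return LIKERT_5_TO_3[response]

print([merge_likert(r) for r in [1, 2, 3, 4, 5]])  # [1, 1, 2, 3, 3]
```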

Fig. 2

Category response functioning analyses were performed by ordering scales such that if responses to the individual items were summed, higher scores would indicate greater comfort in managing (Questions 1–9) or lower perceived barriers to (Questions 14–19) inpatient glycemic control. Analyses of pilot study (phase 2) data with initial 5-choice answer scale demonstrated notable threshold disorder with little discrimination. Panel a demonstrates thresholds for Questions 1–9 and Panel b demonstrates thresholds for Questions 14–19

Fig. 3

Category response functioning analyses performed on pilot study (phase 2) data with 3-choice answer scale demonstrated much less threshold disorder. Panel a demonstrates thresholds for Questions 1–9 and Panel b demonstrates thresholds for Questions 14–19

Table 1 Fit statistics for non-medical knowledge questions from pilot study (phase 2) merged data. Item 17 demonstrates mild misfit

Further study (phase 3)

For prospective evaluation of the revised IGCQ, the survey was administered to 64 resident physicians at UL. RPCM was then applied to UL cohort data and revealed no disordered thresholds (Fig. 4), confirming three category responses as the better answer choice scale for non-knowledge-based questions. Fit statistics showed improvement in MNSQ for Questions 2 and 3 (Table 2). Questions 4 and 5 trended toward misfit (but not enough to degrade the quality of the scale), while Question 17 showed severe misfit (Table 2).

We then applied RPCM to collective data from all four centers. Analyses demonstrated no disordered thresholds (Fig. 5), though Questions 2 and 3 again showed little discrimination with the 3-choice answer scale. Fit statistics demonstrated moderate misfit for Question 17 (Table 3).

We also used PCA to assess dimensionality. For Questions 1–9, the first principal component was essentially the average of the items and accounted for 25% of the total variance. A second principal component roughly separated Questions 1–4 from Questions 5–9, indicating that construction of separate scales could be considered. For Questions 14–19, the first principal component accounted for 31% of the total variation and was again essentially the average (without Question 17). The second principal component accounted for 18% of the variation and was mostly due to Question 17, suggesting that the scale would improve if Question 17 were removed. Overall, PCA indicated that it was reasonable to tabulate the responses from Questions 1–9 into a “comfort with managing IGC” scale (with higher scores indicating greater comfort) and Questions 14–19 into a “barriers to IGC” scale (with higher scores indicating lower perception of barriers).

CTT analyses using the Kruskal-Wallis test indicated no difference in performance by gender, program, or PGY for the “barriers to IGC” scale. However, the same analyses performed on the “comfort with managing IGC” scale indicated differences in performance by gender and PGY (Table 4). Specifically, comfort with management scores increased as PGY increased (i.e., resident physicians grow more comfortable managing inpatient glycemic control as they progress through training). Figure 6 demonstrates total score frequencies for both scales.
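The dimensionality check can be sketched as a PCA over a residual matrix (observed minus model-expected scores), reporting the fraction of variance each component explains. This is our own simplified illustration using NumPy, not Winsteps' standardized-residual PCA:

```python
import numpy as np

def residual_pca_variance(residuals):
    """Proportion of total variance explained by each principal
    component of a residual matrix of shape (n_persons, n_items).
    A dominant first component suggests a coherent scale; a sizable
    second component driven by a few items (as with Question 17)
    flags a possible sub-dimension."""
    centered = residuals - residuals.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending
    return eigvals / eigvals.sum()

# Toy residuals for two highly correlated items: the first
# component should explain nearly all of the variance.
res = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.1]])
print(residual_pca_variance(res))
```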

Fig. 4

Category response functioning analyses performed on University of Louisville cohort data demonstrated no disordered thresholds. Panel a demonstrates thresholds for Questions 1–9 and Panel b demonstrates thresholds for Questions 14–19

Table 2 Fit statistics for non-medical knowledge questions from University of Louisville (phase 3) cohort data. Item 17 demonstrates severe misfit
Fig. 5

Category response functioning analyses performed on merged data from all four centers demonstrated no disordered thresholds, though items 2 and 3 again showed little discrimination with the 3-choice answer scale. Panel a demonstrates thresholds for Questions 1–9 and Panel b demonstrates thresholds for Questions 14–19

Table 3 Fit statistics for non-medical knowledge questions from multicenter merged data. Item 17 again shows misfit
Table 4 Differential performance analyses for comfort with managing (IGCQ questions 1–9) and barriers to (IGCQ questions 14–19) inpatient glycemic control scales. P-values calculated using Kruskal-Wallis test
Fig. 6

Total score frequencies for the “comfort with managing inpatient glycemic control” (Panel a) and “barriers to managing inpatient glycemic control” (Panel b) scales

Final revision (phase 4)

Cumulative RPCM data from all phases of study were considered when making final modifications to the IGCQ. We removed Question 17 as fit statistics demonstrated misfit throughout phases 2–3. We also removed Questions 2 and 3 since RPCM analyses demonstrated threshold disorder in phase 2 testing and little discrimination, even with 3-choice answer scale, in both phases 2 and 3. Finally, Question 4 was removed because its data trended toward mild misfit and because it addressed time spent on education in the outpatient setting, which was ultimately viewed as extraneous in light of the IGCQ’s focus on IGC. These final refinements completed preliminary evaluation and led to the 16-item IGCQ (Additional file 2: Appendix 2). Questions 1–6 represent the “comfort with managing IGC” scale, Questions 7–10 represent the “knowledge of IGC” scale, and Questions 11–16 represent the “barriers to managing IGC” scale.
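The final three-scale layout can be sketched as a scoring helper (a hypothetical illustration; the use of simple sums and the response coding are our assumptions, not part of the published instrument):

```python
def score_igcq(responses):
    """Tabulate final 16-item IGCQ responses into the three scales
    described in the text: Questions 1-6 (comfort with managing IGC),
    7-10 (knowledge of IGC), and 11-16 (barriers to managing IGC).
    `responses` maps question number (1-16) to a coded answer."""
    return {
        "comfort": sum(responses[q] for q in range(1, 7)),
        "knowledge": sum(responses[q] for q in range(7, 11)),
        "barriers": sum(responses[q] for q in range(11, 17)),
    }

# Example: every item answered with code 1
print(score_igcq({q: 1 for q in range(1, 17)}))
```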


Results

Questionnaire participation

Previous work estimates that a minimum of 108–243 subjects is needed for Rasch item calibration stable to within ± 0.5 logits at 99% confidence, even if the scale is poorly targeted; a sample of 150 subjects is considered adequate in most cases at this confidence level [41]. In our study, 246 of 438 (56.2%) eligible resident physicians completed the IGCQ during various phases of development, including 182 in the pilot study.


Discussion

To our knowledge, the IGCQ is the first preliminarily evaluated survey tool specifically constructed for assessment of perceptions and knowledge of IGC among resident physicians. Positive attributes of the IGCQ include its basis on previous work in the field, evaluation through RPCM analysis at multiple centers, ease of use, and availability for future research.

The IGCQ is primarily appropriate for use in assessing resident physician perspectives on proper glycemic control of the hospitalized patient. The survey tool has many potential applications and could be used to evaluate the effect of educational interventions on resident physician knowledge of IGC. In the future, it could also be used to compare how resident physician knowledge correlates with real-world clinical care received by hospitalized patients. The IGCQ might also be useful in various other assessments of resident physician perceptions and knowledge. For example, the IGCQ could be easily modified to focus on outpatient glycemic control by reframing Likert scale questions for that specific topic and having knowledge-based questions focus on appropriate outpatient glycemic targets from recent consensus guidelines [42].

Our study has several limitations that should be noted. First, while our results indicate that the IGCQ fits Rasch model standards well, this analysis covers only one cohort of resident physicians; further analyses in future cohorts are therefore indicated. Second, each participating institution had different IRB-approved survey software, so we were unable to standardize survey administration across all centers. Third, no differential item functioning (DIF) analyses were performed. DIF occurs when members of different groups show differing probabilities of endorsing an item despite possessing the same level of the ability that the item is intended to measure [43]. Fourth, neither Rasch analysis of knowledge-based questions nor external validation of the survey tool was performed. Future studies could evaluate external validity of the IGCQ and the psychometric properties of the knowledge-based questions. Further evaluation of any performance differences by gender, ideally with psychometric statistics such as DIF analyses, would also be valuable.


Conclusions

Herein we presented the construction and preliminary evaluation of the IGCQ. Development of the IGCQ was informed by a review of the current literature on resident physician perspectives of IGC. Examination of the IGCQ utilizing RPCM yielded satisfactory results; however, a few potential issues were identified. We addressed these issues and restructured the IGCQ based on study data. The preliminarily evaluated IGCQ could be valuable for studies seeking to examine the effect of educational interventions on resident physician knowledge of IGC. Future studies could evaluate external validity of the IGCQ and the psychometric properties of the knowledge-based questions.

Availability of data and materials

Please contact the first author for data requests.



Abbreviations

CTT: Classical Test Theory
DM: Diabetes Mellitus
Emory: Emory University Healthcare
IGC: Inpatient Glycemic Control
IGCQ: Inpatient Glycemic Control Questionnaire
IRB: Institutional Review Board
MNSQ: Mean Square
PCA: Principal Component Analysis
PGY: Postgraduate Year
PRIDE: Planning Research in Inpatient Diabetes
RPCM: Rasch Partial Credit Model
UL: University of Louisville Health Sciences Center
UMMC: University of Mississippi Medical Center
UVA: University of Virginia Health System


References

1. Galindo RJ, Davis GM, Fayfman M, et al. Comparison of efficacy and safety of glargine and detemir insulin in the management of inpatient hyperglycemia and diabetes. Endocr Pract. 2017;23(9):1059–66.
2. Cook CB, Kongable GL, Potter DJ, et al. Inpatient glucose control: a glycemic survey of 126 U.S. hospitals. J Hosp Med. 2009;4(9):E7–E14.
3. Swanson CM, Potter DJ, Kongable GL, Cook CB. Update on inpatient glycemic control in hospitals in the United States. Endocr Pract. 2011;17(6):853–61.
4. Moghissi ES, Korytkowski MT, DiNardo M, et al. American Association of Clinical Endocrinologists and American Diabetes Association consensus statement on inpatient glycemic control. Endocr Pract. 2009;15(4):353–69.
5. Umpierrez GE, Isaacs SD, Bazargan N, et al. Hyperglycemia: an independent marker of in-hospital mortality in patients with undiagnosed diabetes. J Clin Endocrinol Metab. 2002;87(3):978–82.
6. Kotagal M, Symons RG, Hirsch IB, et al. Perioperative hyperglycemia and risk of adverse events among patients with and without diabetes. Ann Surg. 2015;261(1):97–103.
7. Falciglia M, Freyberg RW, Almenoff PL, et al. Hyperglycemia-related mortality in critically ill patients varies with admission diagnosis. Crit Care Med. 2009;37(12):3001–9.
8. Umpierrez GE, Smiley D, Zisman A, et al. Randomized study of basal-bolus insulin therapy in the inpatient management of patients with type 2 diabetes (RABBIT 2 trial). Diabetes Care. 2007;30(9):2181–6.
9. Umpierrez GE, Smiley D, Jacobs S, et al. Randomized study of basal-bolus insulin therapy in the inpatient management of patients with type 2 diabetes undergoing general surgery (RABBIT 2 surgery). Diabetes Care. 2011;34(2):256–61.
10. Schroeder JE, Liebergall M, Raz I, et al. Benefits of a simple glycaemic protocol in an orthopaedic surgery ward: a randomized prospective study. Diabetes Metab Res Rev. 2012;28(1):71–5.
11. Murad MH, Coburn JA, Coto-Yglesias F, et al. Glycemic control in non-critically ill hospitalized patients: a systematic review and meta-analysis. J Clin Endocrinol Metab. 2012;97(1):49–58.
12. Rayman G. Virtual glucose management in the hospital setting. Ann Intern Med. 2017;166(9):673–4.
13. Draznin B, Gilden J, Golden SH, et al. Pathways to quality inpatient management of hyperglycemia and diabetes: a call to action. Diabetes Care. 2013;36(7):1807–14.
14. Latta S, Alhosaini MN, Al-Solaiman Y, et al. Management of inpatient hyperglycemia: assessing knowledge and barriers to better care among residents. Am J Ther. 2011;18(5):355–65.
15. Biagetti B, Ciudin A, Portela M, et al. Interns' viewpoints and knowledge about management of hyperglycemia in the hospital setting. Endocrinol Nutr. 2012;59(7):423–8.
16. Cheekati V, Osburne RC, Jameson KA, Cook CB. Perceptions of resident physicians about management of inpatient hyperglycemia in an urban hospital. J Hosp Med. 2009;4(1):E1–8.
17. Cook CB, McNaughton DA, Braddy CM, et al. Management of inpatient hyperglycemia: assessing perceptions and barriers to care among resident physicians. Endocr Pract. 2007;13(2):117–24.
18. Schnipper JL, Ndumele CD, Liang CL, Pendergrass ML. Effects of a subcutaneous insulin protocol, clinical education, and computerized order set on the quality of inpatient management of hyperglycemia: results of a clinical trial. J Hosp Med. 2009;4(1):16–27.
19. Wesorick DH, Grunawalt J, Kuhn L, et al. Effects of an educational program and a standardized insulin order form on glycemic outcomes in non-critically ill hospitalized patients. J Hosp Med. 2010;5(8):438–45.
20. Noschese M, Donihi AC, Koerbel G, et al. Effect of a diabetes order set on glycaemic management and control in the hospital. Qual Saf Health Care. 2008;17(6):464–8.
21. Thompson R, Schreuder AB, Wisse B, et al. Improving insulin ordering safely: the development of an inpatient glycemic control program. J Hosp Med. 2009;4(7):E30–5.
22. Schnipper JL, Ndumele CD, Liang CL, Pendergrass ML. Effects of a computerized order set on the inpatient management of hyperglycemia: a cluster-randomized controlled trial. Endocr Pract. 2010;16(2):209–18.
23. Shabbir H, Stein J, Tong D, et al. A real-time nursing intervention reduces dysglycemia and improves best practices in noncritically ill hospitalized patients. J Hosp Med. 2010;5(1):E15–20.
24. Doyle MA, Brez S, Sicoli S, et al. Using standardized insulin orders to improve patient safety in a tertiary care centre. Can J Diabetes. 2014;38(2):118–25.
25. Trujillo JM, Barsky EE, Greenwood BC, et al. Improving glycemic control in medical inpatients: a pilot study. J Hosp Med. 2008;3(1):55–63.
26. Wong B, Mamdani MM, Yu CH. Computerized insulin order sets and glycemic control in hospitalized patients. Am J Med. 2017;130(3):366.e1–6.
27. Rushakoff RJ, Sullivan MM, Seley JJ, et al. Using a mentoring approach to implement an inpatient glycemic control program in United States hospitals. Healthc (Amst). 2014;2(3):205–10.
28. Koproski J, Pretto Z, Poretsky L. Effects of an intervention by a diabetes team in hospitalized patients with diabetes. Diabetes Care. 1997;20(10):1553–6.
29. Moraes MA, Rodrigues J, Cremonesi M, et al. Management of diabetes by a healthcare team in a cardiology unit: a randomized controlled trial. Clinics (Sao Paulo). 2013;68(11):1400–7.
30. Wang YJ, Seggelke S, Hawkins RM, et al. Impact of glucose management team on outcomes of hospitalization in patients with type 2 diabetes admitted to the medical service. Endocr Pract. 2016;22(12):1401–5.
31. Button E, Keaton P. Glycemic control after coronary bypass graft: using intravenous insulin regulated by a computerized system. Crit Care Nurs Clin North Am. 2006;18(2):257–65.
32. Juneja R, Golas AA, Carroll J, et al. Safety and effectiveness of a computerized subcutaneous insulin program to treat inpatient hyperglycemia. J Diabetes Sci Technol. 2008;2(3):384–91.
33. Najarian J, Bartman K, Kaszubva J, Lynch CM. Improving glycemic control in the acute care setting through nurse education. J Vasc Nurs. 2013;31(4):150–7.
34. Moghissi ES, Inzucchi SE, Mann KV, et al. Hyperglycemia grand rounds: descriptive findings of outcomes from a continuing education intervention to improve glycemic control and prevent hypoglycemia in the hospital setting. Hosp Pract. 2015;43(5):270–6.
35. Desimone ME, Blank GE, Virji M, et al. Effect of an educational inpatient diabetes management program on medical resident knowledge and measures of glycemic control: a randomized controlled trial. Endocr Pract. 2012;18(2):238–49.
36. Horton WB, Weeks AQ, Rhinewalt JM, Ballard RD, Asher FH. Analysis of a guideline-derived resident educational program on inpatient glycemic control. South Med J. 2015;108(10):596–8.
37. Masters GN, Wright BD. The partial credit model. In: van der Linden WJ, Hambleton RK, editors. Handbook of modern item response theory. New York: Springer; 1997. p. 101–21.
38. Rasch G. On specific objectivity: an attempt at formalizing the request for generality and validity of scientific statements. Dan Yearb Philos. 1977;14:58–93.
39. Gwathmey KG, Conaway MR, Sadjadi R, et al. Construction and validation of the chronic acquired polyneuropathy patient-reported index (CAP-PRI): a disease-specific, health-related quality-of-life instrument. Muscle Nerve. 2016;54(1):9–17.
40. Cappelleri JC, Lundy JJ, Hays RD. Overview of classical test theory and item response theory for the quantitative assessment of items in developing patient-reported outcomes measures. Clin Ther. 2014;36(5):648–62.
41. Linacre JM. Sample size and item calibration stability. Rasch Meas Trans. 1994;7(4):328.
42. Bailey TS, Grunberger G, Bode BW, et al. American Association of Clinical Endocrinologists and American College of Endocrinology 2016 outpatient glucose monitoring consensus statement. Endocr Pract. 2016;22(2):231–61.
43. Zampetakis LA, Bakatsaki M, Litos C, Kafetsios KG, Moustakis V. Gender-based differential item functioning in the application of the theory of planned behavior for the study of entrepreneurial intentions. Front Psychol. 2017;8:451.



Acknowledgements

The authors wish to acknowledge Mikhail Y. Akbashev, MD, for his oversight of data collection at Emory University.


Funding

The authors have no sources of funding to disclose for this study. No funding body played any role in the design of the study; collection, analysis, and interpretation of data; or writing of the manuscript.

Author information




WBH designed the survey tool, obtained IRB approval at UVA and UMMC, collected and analyzed data from all four study centers, and authored the manuscript. SL obtained IRB approval and collected data at Emory and assisted with writing and revising the manuscript. MD collected data at UL and assisted with writing and revising the manuscript. MRC performed all statistical analyses and assisted with writing the manuscript. NTK obtained IRB approval and oversaw data collection at UL and assisted with writing and revising the manuscript. JLK oversaw data collection at UVA and assisted with writing and revising the manuscript. SCT designed the survey tool, oversaw data collection at UMMC, and assisted with writing and revising the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to William B. Horton.

Ethics declarations

Ethics approval and consent to participate

Institutional Review Board (IRB) approval for study design and verbal consent to participate were granted separately at each institution (Emory University, University of Virginia, University of Louisville, and University of Mississippi) prior to questionnaire administration. Verbal consent was obtained from all participants in this study.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:

Preliminary version of IGCQ prior to initial evaluation. (DOCX 18 kb)

Additional file 2:

Final version of IGCQ after all analyses and revisions were completed. (DOCX 15 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.



Cite this article

Horton, W.B., Law, S., Darji, M. et al. Construction and preliminary evaluation of the inpatient glycemic control questionnaire (IGCQ): a survey tool assessing perceptions and knowledge of resident physicians. BMC Med Educ 19, 228 (2019).

Keywords


  • Graduate medical education
  • Hyperglycemia
  • Physicians
  • Knowledge
  • Biostatistics