Wednesday, July 1, 2015

Resident Selection: Is Moral Reasoning Related to Clinical Performance?

Dena Hofkosh, MD, MEd
Children's Hospital of Pittsburgh

Purpose: Residency programs select applicants on the basis of traditional criteria, including medical school grades and United States Medical Licensing Examination (USMLE) scores, that have not been shown to be related to subsequent clinical performance during residency. Clinical performance during residency is assessed in the six domains of competence defined by the Accreditation Council for Graduate Medical Education (ACGME): Medical Knowledge, Patient Care, Professionalism, Interpersonal and Communication Skills, Systems-Based Practice, and Practice-Based Learning and Improvement. The aspects of clinical performance that are most difficult to predict from traditional selection criteria include the so-called non-cognitive characteristics of interpersonal skills, integrity, and professionalism, all of which are deemed by faculty across many specialties to be among the most important features of successful performance as a physician. The assessment of moral reasoning, using the Defining Issues Test (DIT), has been proposed as a means to identify those individuals who may demonstrate these highly valued characteristics during residency training.

Method: We studied the relationships among overall application score (a composite of several traditional selection criteria), clinical performance evaluations during training, and scores on the Defining Issues Test as an assessment of moral reasoning. The study group comprised 59 pediatric residents in training at the Children's Hospital of Pittsburgh during the 2008-2009 academic year.

Results: We found no significant differences across years of training in mean USMLE Step 1, Step 2, or overall application scores. Faculty rated third-year residents more highly than first- and second-year residents in all domains of clinical performance: Patient Care and Practice-Based Learning and Improvement (p<0.001), Medical Knowledge (p=0.013), Professionalism (p=0.011), Interpersonal and Communication Skills (p=0.022), and Systems-Based Practice (p=0.022). Mean DIT scores (41-50.5) were in the range typically seen in post-graduates; there were no significant differences in mean DIT scores by year of training (p=0.114).

Overall application scores were not related to clinical performance, apart from weak correlations between overall application score and the Medical Knowledge competency domain in all three classes of residents, and between USMLE Step 2 score and Medical Knowledge among first-year residents. Moral reasoning as measured by the DIT was not associated with clinical performance evaluations in any competency domain.

Conclusions: In this small sample of residents from a single institution, the results support our first hypothesis, consistent with prior research suggesting that traditional resident selection criteria are not related to subsequent faculty evaluations of resident clinical performance during training. Our second hypothesis, that the maturity of moral reasoning as measured by the DIT is related to resident clinical performance, was not supported: the data showed no significant relationships between DIT score and faculty evaluations of clinical performance in any competency domain. The community of medical educators needs to examine other ways to assess characteristics of medical students that predict excellent clinical performance during residency and beyond.