Beyond MOC: Should The ABR Scrap The Core Exam And Find A Way To Assess Competency Objectively?

As physicians, we rely heavily upon our boards to verify that members of our profession can competently practice medicine. Nowadays, physicians are questioning whether the requirements for maintenance of certification (MOC) provide a valid measure of competency to practice radiology. In fields like internal medicine and radiology, a few physicians have even taken the issue to court. (1)

But that’s just MOC for attending radiologists. What about the initial exam that academics created to ensure that residents are competent to practice, the core examination? Did the test creators ever correlate these board exams with minimal practice competency at the end of training? Is it even possible to do so? And what, exactly, are these exams testing? Today I am going to provide a voice for the unloved residents who cannot voice their concerns about the examination for fear of reprisal from their faculty.

In doing so, I am going to examine some of the biases that board creators such as the American Board of Radiology (ABR) face. Then I am going to offer some ideas our governing board could use to assess minimal competency objectively, rather than relying on the current, more subjective judgment of what the minimum skill set should be.

Starting From The Beginning: Who Is Making The Exam?

If we think about how the ABR makes its core exam, it farms out question writing to experienced volunteer member radiologists, most of whom are academics. Herein lies the first problem. Who are the majority of radiologists in the country? Are they academic radiologists? Simply put, no.

So, when the test question creators formulate the exam, they do not base their questions on the basic competency level expected of all radiologists. Instead, these test creators may base their questions on their own academic experience. That experience may include fairly esoteric knowledge that only an academic radiologist would need. For instance, the question creator may be an academic radiologist who works in an esthesioneuroblastoma center of excellence. This radiologist may therefore emphasize a rare disease that most radiologists will never encounter. And you might see that question pop up on your examination even though it does not evaluate minimal competency.

Also, some of the question designers practice in highly subspecialized areas. Those subjects may not apply to the future practice of the majority of examinees. Do such questions test for minimal competency? Often, probably not. The core examination should test, more objectively, the knowledge that underpins the required skill set, not random facts or subspecialty minutiae.

The Problem With Correlation: Are We Correlating To The Correct Metrics?

According to the ABR, a candidate passes the test if she meets the minimum cutoff that the organization deems appropriate. No, it is not graded on a curve. But the ABR still needs some basis for that minimum cutoff. So, with what exactly does the ABR correlate this minimum competency level? It has to base it on something. In practice, the test question makers assume that they know what the minimal level of competency should be. I am not so sure that is an objective standard, given how different their skills are from those of the average Joe Radiologist.

Potential Objective Competency Standards For The Core Examination

So, what are some objective standards to which questions could be correlated? I can think of a few. Peer review in practice may be one such metric. If a radiologist performs far outside the norm on peer review yet passed the board exam, that would suggest the examination was faulty. We could correlate the test against that.

What else might be an appropriate metric? Radiologists who cannot hold a job and have been let go by more than one practice. Think about it. If practices keep letting a radiologist go because he does not meet their standards, that is probably a useful measure. Why not use it to gauge the appropriateness of the core exam questions?

Another measure could be surveying physicians in other specialties to assess the competency of the practicing radiologist. If the preponderance of surveys shows poor clinical insight, that would be another useful measure for determining competency.

And finally, perhaps we could use a metric such as a lawsuit count far above the mean for a particular subspecialty. If a radiologist has been sued five times and the average in her subspecialty is one or two, that is a red flag. We could then see whether the test questions correlate with this endpoint.

These are all potentially valid endpoints against which the ABR could correlate the test, and they would lend it a sense of objectivity. Right now, I am only aware of the subjective criteria of individual, potentially biased examiners deciding what a passing physician should know. Perhaps we need to turn this concept around 180 degrees and assess true competency with objectivity.
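
To make the idea concrete, here is a minimal sketch, in Python with entirely made-up numbers, of how one might check whether core exam scores actually track one of these outcome metrics (here, lawsuits in excess of a subspecialty mean). The data, the flagging threshold, and the choice of metric are all hypothetical; the point is only that this kind of correlation is something we could measure rather than assume.

```python
from statistics import mean, pstdev

# Hypothetical (exam_score, lawsuit_count, subspecialty_mean_lawsuits) records.
# All numbers are illustrative, not real data.
records = [
    (620, 1, 1.5),
    (590, 2, 1.5),
    (540, 1, 2.0),
    (505, 4, 2.0),
    (470, 6, 1.5),
    (455, 5, 2.0),
]

# Outcome metric: lawsuits in excess of the subspecialty average.
scores = [r[0] for r in records]
excess = [r[1] - r[2] for r in records]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

# A strongly negative value would suggest the exam tracks this outcome;
# a value near zero would suggest it does not.
print(f"score vs. excess lawsuits: r = {pearson(scores, excess):+.2f}")

# Flag radiologists well above their subspecialty mean (threshold is arbitrary here).
flagged = [r for r in records if r[1] - r[2] >= 3]
print(f"flagged radiologists: {len(flagged)} of {len(records)}")
```

The same outline would work for any of the endpoints above: peer review discordance, terminations, or referring-physician surveys could each stand in for the lawsuit column.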

Summary: Assessing The Correct Metrics?

Currently, the subjectively determined minimum standard of the ABR core examination is not good enough. If we want a test that genuinely assesses minimum competency, we have to build it on more objective criteria that correlate with the quality of practicing general radiologists who have completed the exam. It will take time and may be a difficult chore. But it may be well worth it to develop a test we can rely on to make sure that residents who pass have the minimal competency to practice radiology, rather than merely expertise in test-taking.

(1) https://www.radiologybusiness.com/topics/healthcare-economics/lawsuit-american-board-radiology-antitrust-moc