Practicing physicians like me rely on scientific medical journals to keep us current on medical developments. We learn about new treatments for old diseases. New diagnostic tests are presented as alternatives to existing methods. Established treatments, long regarded as dogma, may be shown to be less effective or less safe than originally believed. It’s a confusing intellectual morass to sort through complex and conflicting studies, some of which reach opposite conclusions in the same medical journal. What’s a practicing physician to do?
While the medical journals that physicians read are fundamental to our education, paradoxically, most physicians have only rudimentary training in properly analyzing and assessing the studies they contain. For example, the quality of a medical study often depends upon statistical analysis, a mathematical discipline that is foreign to most practicing physicians.
Doctors like me hope that the editors of our peer-reviewed journals have done their due diligence and vetted the studies they publish, ensuring that only high-quality work reaches readers.
On a regular basis, a study in a prestigious medical journal is challenged by other experts in the field who dispute the study’s design or its conclusions. Medical progress does not proceed linearly.
The Path of Medical Progress
Although I am a neophyte here, I will offer readers some examples that highlight defects in study design, defects that can lead to tantalizing and exaggerated headlines and sound bites.
The Study is Too Small:
If a new treatment is tested on only 5 patients, and one of them happens
to get better, is it really accurate to announce that there is a 20% response
rate? Would this hold up if the study
had 100 patients?
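For readers who want to see why small numbers are so treacherous, here is a rough sketch I put together in Python; the Wilson score interval is one standard way statisticians put error bars around a proportion, but the sketch itself is mine, not drawn from any particular study:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Approximate 95% confidence interval for a proportion (Wilson score method)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

for responders, patients in [(1, 5), (20, 100)]:
    low, high = wilson_ci(responders, patients)
    print(f"{responders}/{patients} responders: point estimate {responders/patients:.0%}, "
          f"plausible range roughly {low:.0%} to {high:.0%}")
```

Both studies can announce a “20% response rate,” but with 5 patients the plausible range runs from a few percent to over 60%, while with 100 patients it narrows to roughly 13% to 29%. Same headline number, very different strength of evidence.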
Where’s the Control Group?:
Doctors know that many patients get better in spite of what we do. If a new treatment boasts a 35% response rate in a group of sick individuals, was there a second group of patients in the study, a control group, who did not receive the treatment and were compared with those who did? In many cases, the control group shows a significant ‘improvement’ without any treatment, for various reasons. If the treatment group and the control group both show the same 35% improvement, then the drug is not quite the magic bullet.
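To make the point concrete, here is a small made-up simulation; the 25% spontaneous recovery rate and the group size are numbers I chose for illustration, not data from any real trial. A drug that does absolutely nothing still looks like a success if you never look at the control arm:

```python
import random

random.seed(42)

SPONTANEOUS_RECOVERY = 0.25   # assumed rate of improvement with no treatment at all
PATIENTS_PER_ARM = 200

def count_responders(extra_benefit=0.0):
    """Count patients who improve; extra_benefit is the drug's true added effect."""
    return sum(random.random() < SPONTANEOUS_RECOVERY + extra_benefit
               for _ in range(PATIENTS_PER_ARM))

treated = count_responders(extra_benefit=0.0)   # a drug that adds nothing
control = count_responders()

print(f"Treated arm: {treated}/{PATIENTS_PER_ARM} improved ({treated/PATIENTS_PER_ARM:.0%})")
print(f"Control arm: {control}/{PATIENTS_PER_ARM} improved ({control/PATIENTS_PER_ARM:.0%})")
```

Report only the first line and you have a press release; report both lines and the ‘effect’ evaporates.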
Is the Study Randomized? Ideally, the treatment group and the control group should be identical in every respect except for the treatment being tested. This is why higher-quality studies randomly assign patients to each group. Randomization maximizes the chance that the two groups being compared will be very similar with regard to all kinds of variables, including smoking, weight, and other risk factors.
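Here is a toy illustration of why that works; the 30% smoking rate and the group sizes are assumptions I invented for the example. Shuffle a large enough cohort into two arms at random and the risk factors tend to balance themselves out:

```python
import random

random.seed(1)

# Hypothetical cohort in which 30% of patients happen to be smokers.
patients = [{"smoker": random.random() < 0.30} for _ in range(400)]

random.shuffle(patients)                       # random assignment
treatment_arm, control_arm = patients[:200], patients[200:]

for name, arm in [("Treatment", treatment_arm), ("Control", control_arm)]:
    rate = sum(p["smoker"] for p in arm) / len(arm)
    print(f"{name} arm smoking rate: {rate:.0%}")
```

Neither arm was matched by hand, yet their smoking rates come out close, and the same tends to happen for weight, age, and the risk factors nobody even thought to measure.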
Beware the False Association!
This is a very common and deceptive practice, in which investigators present a mere association between two findings as if one caused the other. Newspapers and the airwaves love this stuff because it has sizzle. “Study Shows that Gym Membership Reduces Cancer”. This ‘study’ might be sponsored by the Society of Calisthenics and Aerobic Medicine (S.C.A.M.). Sure, it might be true that gym members have lower cancer rates, but this has nothing to do with pumping iron. These folks are more health-conscious and are likely to be fit non-smokers who pursue preventive medical care. Get the point?
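For the skeptics, here is a small simulation of the S.C.A.M. ‘study’; every rate in it is invented for illustration. In this model, gym membership has no effect on cancer at all, yet the raw comparison still makes the gym look protective, because health-conscious people are both more likely to join a gym and less likely to develop cancer:

```python
import random

random.seed(7)

def simulate_person():
    """One hypothetical person; all probabilities are made-up illustrative numbers."""
    health_conscious = random.random() < 0.5
    gym_member = random.random() < (0.7 if health_conscious else 0.2)
    cancer = random.random() < (0.05 if health_conscious else 0.10)   # the gym plays no role
    return gym_member, cancer

people = [simulate_person() for _ in range(100_000)]

def cancer_rate(is_member):
    group = [cancer for member, cancer in people if member == is_member]
    return sum(group) / len(group)

print(f"Cancer rate among gym members: {cancer_rate(True):.1%}")
print(f"Cancer rate among non-members: {cancer_rate(False):.1%}")
```

The gap is real, but it is the health consciousness doing the work, not the treadmill. That is an association, not cause and effect.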
These are just a few examples to give readers a glimpse of the issue. Of course, I have barely peeled the onion here. Designing medical studies is a profession in its own right. Most physicians have barely a clue about how to properly design a study or how to interpret one. Most of us rely upon others to perform the quality-control function. However, just because a study has been published doesn’t mean it was worthy of publication. Medical research may contain sleight of hand, confusion, and obfuscation, all of which can be hard to recognize. The fact that our highest-quality medical studies are routinely challenged shows how difficult it is for ordinary doctors to make sense of it all. Medicine can be murky. Caveat lector!
It would be nice if uncertainty were completely removed from medical research, but by definition, we know that won't happen. What's more, patients have no idea what makes a study good or not. BTW, patients don't understand "cause and effect" or "placebo controlled". I wish they did, but life goes on. Thanks for your comments.
www.TulsaAllergyNews.com
Thanks so much, Lynn, for your thoughts. I agree that the public often mistakes a catchy headline for solid medical evidence. Keep in touch. MK
Actually, there are many patients with backgrounds in research and research design, medical writing, epidemiology, and related areas who do understand the difference between cause and effect and association, and who are sometimes better positioned to interpret medical research than their own doctors. Not to mention that many studies - particularly those done by pharmaceutical companies - remain unpublished.
It also can be a long time between when clinical practice reveals that established protocols should be revisited or changed and when the changes actually take place.
Seattlelite