Vail health feature: When it comes to research, not all published medical studies are created equal
Special to the Daily
Editor’s note: This is the first article in a two-part series about how to be an educated scientific consumer.
It’s common to see a headline exclaim “New Study Links Blank to Blank.” Sometimes these headlines don’t cause much of a sensation, and sometimes they link two things so seemingly unrelated that a buzz is inevitable. Sometimes these studies link one thing to longevity, and sometimes they link the same thing to heart disease. Sometimes there seems to be so much wavering on a given topic that it’s hard to take any of it seriously.
When it comes to research, not all published studies are created equal. What makes this disparity troublesome is that published studies often get picked up by media outlets that blast research results in a way that leads to drastic misunderstandings among the research community, doctors and patients.
While it’s not always easy to read through a study about a given topic, there are ways to look at some of these studies objectively in order to process research about dietary trends, medical procedures, environmental hazards and a host of other pertinent information.
Behind the headlines of new research findings, there are countless people who have spent hours, days, weeks and years of their time asking a question, developing a hypothesis, building a body of data and then drawing conclusions from that data. While the conclusions of studies often point toward a general trend in what scientists are learning about a given topic, just as frequently studies are reexamined and redesigned within the scientific community to address holes in the data or flaws in the experiment.
Dr. Robert LaPrade, M.D., Ph.D., of The Steadman Clinic, has been a leader in orthopedic knee surgery and is the chief medical officer at the Steadman Philippon Research Institute. He explained that while reworking experiments is part of the scientific process, problems in a study’s design can lead to a disconnect between the way a study is interpreted by the scientific community and how it is presented to the public.
“There are lots of poorly designed studies,” he said. “There are lots of studies that are designed with shortcuts to hit certain results that don’t go through the basics in their design, and that causes problems within the medical profession and for patients.”
Understanding the back and forth
This ongoing process surrounding a given topic can lead to a back and forth within the scientific community on a range of subjects and deepen misunderstandings between researchers and the public, as well.
While studies are often reworked to address problems in experimental design, the data pulled from a population about a given subject can often be inconclusive because of variance within that population. This can lead to follow-up studies attempting to address these inconsistencies in test subjects by drawing data from a larger population, which can lead to discrepancies among bodies of research asking the same question.
Jason Moore, Ph.D., P.A., is a researcher at the Vail Valley Medical Center who often does larger-scale studies of populations for his research. Currently, he is gathering data about the potential link between marijuana use on the mountain and skiing and snowboarding accidents as part of the Colorado Department of Public Health’s effort to learn about the subject. He explained that on top of design flaws within an experiment, the variables within populations being studied can make it especially hard to pin a result to a reason.
“Finding a causal relationship in your research is almost impossible to prove,” Moore said. “Especially with population studies, there is so much variance between the characteristics of groups that have been studied for a given experiment that it often leads to different findings and more experiments to tease out variables that previous studies missed.”
The frequent lack of a causal relationship between two things leads to a larger presence of correlation in research, which means that although a study has linked one thing to another, that link represents a broader connection that hasn’t been pinpointed as a cause-and-effect relationship. This abundance of correlation in research can be confusing to the untrained eye, especially when these publications are interpreted outside the scientific community.
Reading through the research
Both Moore and LaPrade suggest digging deeper into a study to determine whether it is being represented accurately. Looking at the actual design of the experiment is helpful, along with some of the more general characteristics of the study. Simply looking at the publishing body is often a good indication of the type of scrutiny a study has been subjected to within the scientific community.
“Looking at the journal’s impact can give a good representation of the vetting process a particular study has gone through,” LaPrade said. “Publications like Nature or The New England Journal of Medicine are very strict on what they publish, and the studies published in similar journals have gone through the proper channels to ensure the integrity of a study.”
Moore similarly advocates looking at the conflicts of interest in a given study, which are disclosed in the published study itself.
“Not all research is done by outside interests,” he said. “But purity can be a rare gem in research. It’s just all part of looking at the different aspects of a study.”
It’s equally important to realize that even researchers can find it hard to decipher studies on a given topic, with journal clubs and academic classes dedicated to analyzing research publications. And while that may make the task of getting to the bottom of what new research actually means seem a bit daunting, having a bit of objectivity when looking at new findings can lead to a better understanding when talking to doctors, looking into a medical procedure or choosing healthy daily habits.