By Eric Trexler, CSCS
Former Director of Research and Education, INOV8 Elite Performance
It is increasingly common to see a more “evidence-based” approach to physique and strength sports. On various forums and social media outlets, people argue about training and nutrition, hurling PubMed abstracts at each other to support their methods of choice.
Not everybody is well-versed in the practice of critiquing or interpreting research. Some academic papers seem like an entirely different language, and a lot of people simply don’t have access to the full paper. (Yes, you do need to read the full text to take away meaningful conclusions; otherwise my schedule would contain a lot more free time.)
But back to the point: some people aren’t very experienced with critically reading research, and that is totally understandable. But there is a very simple error that seems to come up frequently: drawing emphatic, unequivocal conclusions about something that wasn’t even measured.
I think this comes down to our natural instinct to simplify and dichotomize everything. For example, imagine a new paper is published on some random supplement. The study might contain data pertaining to a number of measured outcomes: one-rep max, repetitions to fatigue, anaerobic sprint performance, body composition, a few endocrine biomarkers, etc. But we naturally boil this down to a simple question: “Does it work, or not?”
Sometimes, this line of thinking prompts us to make some very generous extrapolations and assumptions that may not be valid.
Let’s say that a study investigates the effect of a training method on acute indices of muscle protein synthesis (MPS). We might subconsciously simplify this to the question, “Does this method lead to muscle growth?”
Imagine that the study reveals that the training method led to increased phosphorylation of a protein kinase involved in MPS. This is what you might consider a promising finding, but by no means a “slam dunk.” Too often, you will see people run wild with such a finding, insisting that the method or strategy employed will promote hypertrophy, increase lean mass, or improve body composition.
It might. But if they didn’t measure hypertrophy, lean mass, or body composition, such a conclusion is not really appropriate. Even though it would “make sense” that acutely elevated MPS will lead to hypertrophy and improved body composition, we can’t always make that assumption. Indeed, recent data from Mitchell et al. and Nader et al. indicate that the link between acute indices of protein synthesis and long-term hypertrophy is not as strong or consistent as you might assume.
Sometimes, people make even larger leaps. Someone recently asked me if one particular study indicated that whey protein hydrolysate (WPH) was superior to other protein sources for bodybuilders. The study involved feeding rats a diet in which the protein source was whey protein (WP), casein (CAS), or WPH. In a nutshell, the primary finding was that the rats consuming WPH had greater GLUT-4 translocation 16 hours post-exercise, along with greater liver and muscle glycogen content.
I was hesitant to interpret this study as conclusive evidence that WPH was “better” for bodybuilders. When I think of “better,” my mind immediately goes to body composition and/or performance outcomes; neither was measured in this particular study. I also have to make a lot of assumptions before applying these outcomes to the average bodybuilder.
Specifically, I have to assume that:
- the same effect will be observed in humans;
- the effect can be produced by a “reasonable” human dose of WPH;
- the findings would be similar in the context of resistance (rather than aerobic) training bouts;
- the difference in glycogen storage will actually improve performance; and
- the performance improvement will be large enough to favorably improve body composition in the long term.

The rats were also fasted for 14 hours following the exercise bout, so I further have to assume that the findings regarding GLUT-4 and glycogen would hold if the animals had been eating normally after exercising. It is entirely possible that WPH may be preferable to WP and CAS for bodybuilders, but this particular study is far from “conclusive evidence” of such a claim.
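To put rough numbers on how quickly a chain like that erodes, here is a minimal back-of-the-envelope sketch in Python. Every probability in it is made up purely for illustration; the only real point is that independent assumptions multiply.

```python
# Hypothetical sketch: every probability below is invented for
# illustration only, not taken from any study.
assumptions = {
    "effect translates from rats to humans": 0.7,
    "effect holds at a reasonable human dose of WPH": 0.8,
    "effect holds after resistance (not aerobic) exercise": 0.8,
    "extra glycogen storage actually improves performance": 0.7,
    "performance gain improves body composition long term": 0.6,
}

combined = 1.0
for claim, p in assumptions.items():
    combined *= p
    print(f"{claim}: {p:.0%} -> chained probability {combined:.0%}")

# Even with these made-up (and fairly generous) odds, the full chain
# holds less than 20% of the time: 0.7 * 0.8 * 0.8 * 0.7 * 0.6 ≈ 0.19
```

Even when each link looks individually plausible, the conclusion at the end of the chain is far less secure than any single step suggests.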
But the point of this article is not to directly critique the studies mentioned. Rather, it is a reminder to practice caution when interpreting research, and to read in a very literal manner. As Bryan Chung states in an old blog post: “If you’re going to claim to improve hypertrophy, measure hypertrophy.” If you’re going to claim that something improves performance, measure a performance outcome.
Sometimes researchers collect “indirect” measurements for a number of reasons. They might be more sensitive to change in the given timeframe, easier to collect, less expensive or invasive, or have any number of benefits compared to more direct outcome measures. Sometimes we have to rely on animal models for ethical or methodological reasons. That’s how it goes in research, and there’s nothing wrong with it. But it’s important to realize that the findings of a study are specific to the variables measured and the population studied. The problem arises when people see a new study and use it to support an idea that wasn’t actually tested. For example, we reason that since a given supplement acutely increases a single indicator of MPS in untrained subjects, it must increase hypertrophy, so it must increase muscle mass, so bodybuilders should use it.
I’m not saying we can’t try to extrapolate findings. I’m a male bodybuilder in my twenties, but this doesn’t mean I automatically throw out any study performed in elderly subjects, females, or animal models. I don’t throw out a study just because they didn’t use my exact same training split, or their dietary intakes don’t match mine perfectly. If you’re trying to stay ahead of the curve, you really have to extrapolate findings, and essentially make educated predictions about what that “indirect” data might mean for a competitive bodybuilder.
You might be correct sometimes, but you might get burned. Future research might reveal differential responses in young vs. old subjects, males vs. females, or trained vs. untrained lifters, or show that a particular animal model is not as valid as we originally thought. It might also reveal that the convenient, indirect, acute variable doesn’t track the long-term change as well as we expected it to.
The point is, feel free to apply a study’s outcomes to your training and nutrition practices. However:
- Find out if the study actually says what you believe it says. What did they do, what kind of subjects participated, and what did they actually measure?
- Realize that the more assumptions you make, the lower the chances that the finding will “pan out” in the way you expect it to.
- When sharing information or suggestions with others, be cognizant of when your claims are actually supported by research, and when you’re really going out on a limb.
- If you’re going out on a limb, make that point very clear, and use tempered language. Don’t misrepresent speculation as a well-known “fact.” Many well-intentioned people do this without realizing it.
- If you don’t have a formal background in science, don’t be intimidated by the unfamiliar language or writing style you may encounter in peer-reviewed research. These papers are written by average, everyday people, not geniuses. You are more than capable of understanding and interpreting the paper, and you have every right to read it with a skeptical, critical eye.