9 Great Suggestions for Improving the Quality of Nutrition Research (and 1 that @JamesHeathers says is "Deeply Silly")

Last week saw the publication of an op-ed by Drs. David Ludwig, Cara Ebbeling and Steven Heymsfield entitled "Improving the Quality of Dietary Research". In it they discuss the many limitations of nutrition research and chart a direction forward that includes the following 9 broad suggestions:

  1. Recognize that the design features of phase 3 drug trials are not always feasible or appropriate for nutrition research, and specify the minimum standards necessary for nutrition trials to be considered successful.
  2. Distinguish among study design categories, including mechanistic, pilot (exploratory), efficacy (proof-of-concept), effectiveness (real-world) and translational (with public health and policy implications). Each of these study types is important for building knowledge about diet and chronic disease, and some overlap will always exist. However, findings from small, short-term or low-intensity trials should not be conflated with definitive hypothesis testing.
  3. Define diets as precisely as possible (e.g., with quantitative targets for nutrients and other parameters, rather than qualitative descriptors such as "Mediterranean"), to allow for rigorous and reproducible comparisons.
  4. Improve methods for addressing common design challenges, such as how to promote compliance with dietary prescriptions (i.e., trials of nutrition plus more intensive behavioral and environmental interventions) and how to reduce dropout and loss to follow-up.
  5. Develop sensitive and specific biomarkers of adherence (e.g., metabolomics) and use available methods where feasible (e.g., doubly labeled water for total energy expenditure).
  6. Establish and adequately fund local (or regional) nutrition research cores to enhance research infrastructure.
  7. Standardize practices for mitigating the risk of bias related to conflicts of interest in dietary research, including independent oversight of data management and analysis, as is done for drug trials.
  8. Create accessible databases at the time of study publication to facilitate analyses and scholarly dialogue.
  9. Identify best practices for media relations to help diminish the hype surrounding publication of small, preliminary or inconclusive research with limited generalizability.

But there is one recommendation that seems to contradict the rest,

Recognize that modifications of, or discrepancies with, the trial registrations of dietary trials are common, and update final analysis plans before unveiling randomized group assignments and beginning data analysis.

For those who don't know, trial registries are where researchers prospectively record the predetermined methods and outcomes to be studied in a planned trial. The purpose of preregistration is to reduce the risk of bias, selective reporting and p-hacking that can (and does) occur in dietary research.

Now to be clear, I'm a clinician, not a researcher, and I don't know how common modifications or discrepancies in the trial registrations of dietary trials actually are, but even if they are common, I'm not sure that's an argument in their favour. I do know that two of the authors claiming registry changes are commonplace were recently found to have modified one of their own prespecified statistical analysis plans which, had it been followed, would have rendered their results non-significant.

But common or not, is it good science?

To answer this question, I turned to James Heathers, a researcher and self-described "data thug" whose area of interest is methodology (and whom you should definitely follow on Twitter), who described the notion of accepting that changes and discrepancies in trial registrations are common as "deeply silly".

He went on to explain why,

First: the entire definition of a theory is something that sets your expectations. The idea that "reality is messy" does not interfere with the idea that you have expectations driven by hypotheses derived from theories.

Second: there is nothing stopping you from saying "WE DIDN'T FIND WHAT WE EXPECTED TO FIND" and then *following* that with your thorough exploratory analysis. In fact, this would almost be a better exposition of the facts, as by definition you present your expectations as expectations, and your post-hoc assumptions likewise.

Third: if you have a power analysis that determines there is a right amount of observations needed to reliably observe an effect, having the freedom to "not bother with that" is in no way good.

Fourth: nothing says that the fact changes were made must be included in the manuscript. That is, they are suggesting the ability to make changes to the protocol in the registry *without* saying so. That's a "new plan", not a "modified plan".

Fifth: if you can still run whatever analysis you like, no one will ever believe that you didn't change the plan after looking at the data. You have to protect yourself, and the best way to do that is to follow your own damned plan and be honest about what you get.

Finally, Heathers was so unimpressed by the argument that registry changes are A-OK because they're commonplace that he went on to muse about ancient Aztec punishments for those who would invoke it.

All this to say, there is ample scope for improving the quality of nutrition research. We hope that most of these suggestions will be taken to heart, but please do not hold your breath.