Wow! 29 teams of analysts, one identical data set, one research question, 29 different results.


This is a fascinating paper: Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results. Sixty-one analysts (in 29 teams) were given the same data set to address the same research question: are football referees more likely to give red cards to dark-skinned players than to light-skinned players?

The results:

Twenty teams found a statistically significant positive effect while nine did not, and the effect sizes varied, in odds-ratio units, from 0.89 to 2.93, despite every team working from the same data set. (An odds ratio of 1.0 would mean no effect.)

Why such different results?

Because the results depended heavily on each team's chosen analytic strategy, which in turn was shaped by the statistical methods they were comfortable with and by how those choices interacted with their pre-existing theories.
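To get a feel for how this can happen, here's a minimal sketch in Python. It uses simulated data, not the paper's actual data or models, and every variable name and number is invented. It shows how two equally defensible logistic-regression specifications can produce noticeably different odds ratios from the same data set:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5000

# Hypothetical confounder: in this simulation, players in some leagues
# are, on average, darker-skinned than players in others.
league = rng.integers(0, 4, n)                                   # 4 invented leagues
skin_tone = np.clip(rng.normal(0.3 + 0.1 * league, 0.2), 0, 1)   # 0 = light, 1 = dark

# Simulate red cards so that both skin tone and league affect the outcome.
logit_p = -3.5 + 0.25 * skin_tone + 0.2 * league
red_card = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Specification A: skin tone only (a defensible, simple model).
fit_a = sm.Logit(red_card, sm.add_constant(skin_tone)).fit(disp=0)

# Specification B: skin tone adjusted for league (also defensible).
X_b = sm.add_constant(np.column_stack([skin_tone, league]))
fit_b = sm.Logit(red_card, X_b).fit(disp=0)

print("Odds ratio, unadjusted:", round(float(np.exp(fit_a.params[1])), 2))
print("Odds ratio, adjusted:  ", round(float(np.exp(fit_b.params[1])), 2))
```

On this simulated data the unadjusted specification reports a visibly larger odds ratio than the adjusted one, yet neither model is obviously "wrong," which is exactly the paper's point: reasonable analysts making reasonable choices can land on different answers.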

Now, these results were not deliberate exercises in p-hacking. The study's authors note that the variability they observed stemmed from "justifiable, but subjective, analytic decisions," and while there is no obvious way to guarantee that a researcher has chosen the right methodology for a given study, the authors suggest that:

"Transparency in data, methods and process gives the rest of the community the opportunity to see decisions, challenge them, offer alternatives and test these alternatives to further research".

That matters even more in cases where researchers may have biases that would encourage them to favor a particular result, and it's why I wish medical school offered more in the way of statistics and critical appraisal (and perhaps a little less in the way of embryology, for example).

[Photo by Timur Saglambilek from Pexels]

