Filed under Legal Ethical Issues, Research Methods, Research Methods in ADA, Research Methods in AP, Research Methods in ChD, Research Methods in CP, Research Methods in SP, Social Psychology.

Description: Remember replicability? The mark of a good psychological study is that it is replicable: if someone else reads it and then runs the study on their own, they should get the same results as the original if we are to take the original study seriously. Well, putting aside the fact that academic reputations usually are NOT made by replicating other people's work, what would it mean if we could NOT replicate published psychological research? Read the three articles linked below and then think about the solidity of Psychology's very foundations!

Sources: Psychology’s reproducibility problem is exaggerated, say psychologists, Monya Baker, Nature.

Psychology’s Replication Crisis Can’t Be Wished Away, Ed Yong, The Atlantic.

Psychology is in Crisis over Whether it’s in Crisis, Katie Palmer, Wired.

Date: March 3, 2016

Crisis

Photo Credit: Then One/Wired

Links: Article Link — http://www.nature.com/news/psychology-s-reproducibility-problem-is-exaggerated-say-psychologists-1.19498

http://www.theatlantic.com/science/archive/2016/03/psychologys-replication-crisis-cant-be-wished-away/472272/

http://www.wired.com/2016/03/psychology-crisis-whether-crisis/

Depending on how it was presented in your introductory psychology course, the concept of reproducibility could simply mean whether someone can follow your description of what you did in conducting a study with sufficient accuracy that they will produce similar results. A broader perspective on this question is a little more complicated and involves concerns over what actually ends up being published in psychology journals. One concern is that there is clearly a bias towards publishing only significant results. If you look through psychology journals you rarely, if ever, see research articles that did not produce significant results, as negative-result studies typically end up stuffed in a file drawer somewhere. Another concern is that researchers may conduct a large number of analyses on a broad data set, "cherry pick" those results that are significant, report those, and say nothing at all about all the other analyses they ran. The concern here is that many published psychology studies may essentially reflect quirky or fluky results that arose simply from the kind of random variation we try to control for in psychological research experiments. Responses to these concerns vary dramatically, as you will see when you read through the three articles linked above, with some psychologists becoming quite alarmed about the implications of these concerns for the very foundations of the field of psychology and other researchers arguing that it is really not that big a deal.
This has led some researchers and journal editors, particularly in the area of social psychology, to argue that any article submitted for publication must report on all of the analyses that were run and on the full extent of the data gathered, so that the significant results reported in the paper can be viewed within an appropriate investigative context that allows for a more informed interpretation of the strength, or perhaps the fluky (statistically anomalous) nature, of the results. However psychology as a discipline proceeds from here, it is clear that the concept of empirical rigor, one of the main foundations of the discipline, at least to the extent that psychology wishes to view itself as scientifically based, is in need of some rather close examination.
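The cherry-picking worry above is, at bottom, a multiple-comparisons problem: run enough tests on pure noise and some will come out "significant" by chance alone. A minimal sketch of that idea (a hypothetical simulation, not drawn from the articles) in Python:

```python
# Simulate many "studies" of pure noise and count how often a standard
# two-tailed test at alpha = .05 declares a "significant" effect anyway.
import random
import math

random.seed(42)  # fixed seed so the run is repeatable

def fake_study(n=30):
    """One 'study': sample n observations of pure noise (mean 0, sd 1)
    and z-test the sample mean against zero."""
    sample = [random.gauss(0, 1) for _ in range(n)]
    mean = sum(sample) / n
    z = mean / (1 / math.sqrt(n))  # standard error of the mean is 1/sqrt(n)
    return abs(z) > 1.96           # "significant" at the .05 level

# Even though every effect here is truly zero, roughly 5% of the
# 1,000 studies will look significant.
false_positives = sum(fake_study() for _ in range(1000))
rate = false_positives / 1000
print(f"false positive rate: {rate:.3f}")
```

A researcher who ran 1,000 analyses and reported only the "significant" ones would be publishing exactly this kind of random variation, which is why the full-reporting policies discussed above matter.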

Questions for Discussion:

  1. What is replicability and why is it important in relation to psychological research?
  2. What might a failure to replicate be due to?
  3. What sort of publication policies should be considered by psychology journal editors if they are to take seriously some of the concerns raised about replicability?

References (Read Further):

http://www.nature.com/news/first-results-from-psychology-s-largest-reproducibility-test-1.17433

Estimating the Reproducibility of Psychological Science: Open Science Collaboration https://osf.io/ezcuj/wiki/home/

Open Science Collaboration. (2012). An open, large-scale, collaborative effort to estimate the reproducibility of psychological science. Perspectives on Psychological Science, 7(6), 657-660.

Pashler, H., & Wagenmakers, E. J. (2012). Editors’ introduction to the special section on replicability in psychological science: A crisis of confidence? Perspectives on Psychological Science, 7(6), 528-530.

Klein, R. A., Ratliff, K. A., Vianello, M., Adams Jr, R. B., Bahník, Š., Bernstein, M. J., … & Cemalcilar, Z. (2014). Investigating variation in replicability. Social Psychology. http://econtent.hogrefe.com/doi/full/10.1027/1864-9335/a000178
