Science
Lots of bad science still gets published. Here's how we can change that.
For over a decade, scientists have been grappling with the alarming realization that many published findings in fields ranging from psychology to cancer biology may actually be wrong. Or at least, we don't know if they're right, because they just don't hold up when other scientists repeat the same experiments, a process known as replication.
In a 2015 attempt to reproduce 100 psychology studies from high-ranking journals, only 39 of them replicated. And in 2018, one effort to repeat influential studies found that 14 out of 28, just half, replicated. Another attempt found that only 13 out of 21 social science results picked from the journals Science and Nature could be reproduced.
This is known as the replication crisis, and it's devastating. The ability to repeat an experiment and get consistent results is the bedrock of science. If important experiments didn't really find what they claimed to, that could lead to iffy treatments and a loss of trust in science more broadly. So scientists have done a lot of tinkering to try to fix this crisis. They've come up with open science practices that help somewhat, like preregistration, where a scientist announces how she'll conduct her study before actually doing it, and journals have gotten better about retracting bad papers. Yet top journals still publish shoddy papers, and other researchers still cite and build on them.
This is where the Transparent Replications project comes in.
The project, launched last week by the nonprofit Clearer Thinking, has a simple goal: to replicate any psychology study published in Science or Nature (as long as it's not way too expensive or technically hard). The idea is that, from now on, before researchers submit their papers to a prestigious journal, they'll know that their work will be subjected to replication attempts, and they'll have to worry about whether their findings hold up. Ideally, this will shift their incentives toward producing more robust research in the first place, as opposed to just racking up another publication in hopes of getting tenure.
https://www.vox.com/future-perfect/23489211/replication-crisis-project-meta-science-psychology
My dad lived in the "publish or perish" world, but was a physical scientist (a physical geographer). He was also a meticulous researcher and would die before publishing anything not triple- or quadruple-checked. That's one reason I found this interesting. But psychology isn't a "hard" science....
cyclonefence
(4,873 posts)that aggressively solicit articles from published (and vetted) authors. I was second author of the introduction of an important special issue of a medical journal. Since that time, not a day passes without a solicitation from at least one medical journal I've never heard of, offering me prominent exposure for anything I'd care to write for them.
I am not the scientist who did the research for our introduction. I am a scientific writer, not a person who is in the field. If I accepted one of these invitations, I could be cited as a published authority on something I know absolutely nothing about. There should be a way to identify these e-zine journals when whackos use them as support for insane theories.
Fiendish Thingy
(18,555 posts)Avoid meta-analyses- meta analyses have a high risk for bias, as they are a collection of studies, often cherry picked by someone with an agenda. The flaws of each individual study are masked when the aggregator states something like in a review of 20 studies, x results was found x% of the time.
Avoid preliminary studies or preliminary data: preliminary studies often use very small data sets that cannot be generalized to the larger population. Always look for the n, or number of subjects. If the n is in the dozens, rather than hundreds or, preferably, thousands of subjects, beware. If there are no human subjects, be especially careful about drawing any conclusions from the study.
Redleg
(6,142 posts)I appreciate that some researchers conduct meta-analyses and that I am not one of them. They always seemed so atheoretical to me.
Redleg
(6,142 posts)... may not be "hard sciences," but conducting rigorous research in these areas is a challenge because they can't rely on the same controls to ensure internal and external validity as can the physical sciences. For example, I am a behavioral scientist and college professor who conducts research in organizations. Organizations don't normally (meaning hardly ever) allow researchers to use experimental designs. Without a rigorous design, there is no random assignment, there is no control group and there is no treatment group like you might find in a laboratory study by an experimental psychologist or biologist or medical researcher. Organizational researchers and economists have to statistically control for exogenous factors. Even researchers in psychology often have rely on subjects whose populations might not reflect the overall population, for example, relying on college freshmen and sophomores or white mice as subjects of their experiments. Results from these studies might not have the level of external validity a researcher would want. Social sciences also use a good deal of survey research, which relies on the honesty of the subjects, which can be problematic when asking them questions about sensitive issues. Even assessing something as seemingly simple as "job satisfaction" can be problematic, partly due to how the construct is defined and how it is operationalized.
I know it's a thing for some so-called "hard science" people to mock social and behavioral research. I suggest if they want to help they should get out of their labs and think about how they would research psychological and sociological phenomena. Better yet, they can give me access to their own organizations so I can conduct research there instead of having to go out hat in hand to drum up research sites.