
Jilly_in_VA

(10,907 posts)
Tue Dec 6, 2022, 11:35 AM Dec 2022

Lots of bad science still gets published. Here's how we can change that.

For over a decade, scientists have been grappling with the alarming realization that many published findings — in fields ranging from psychology to cancer biology — may actually be wrong. Or at least, we don’t know if they’re right, because they just don’t hold up when other scientists repeat the same experiments, a process known as replication.

In a 2015 attempt to reproduce 100 psychology studies from high-ranking journals, only 39 of them replicated. And in 2018, one effort to repeat influential studies found that 14 out of 28 — just half — replicated. Another attempt found that only 13 out of 21 social science results picked from the journals Science and Nature could be reproduced.
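For a sense of scale, the three figures quoted above work out to replication rates of roughly 39%, 50%, and 62%. A quick Python sketch (the dictionary keys are informal labels for the attempts described in the article, not official study titles):

```python
# Replication counts cited above: (studies that replicated, studies attempted)
attempts = {
    "2015 psychology reproducibility project": (39, 100),
    "2018 influential-studies effort": (14, 28),
    "Science/Nature social-science replications": (13, 21),
}

# Compute and print the replication rate for each attempt
for name, (replicated, total) in attempts.items():
    rate = replicated / total
    print(f"{name}: {replicated}/{total} = {rate:.0%}")
```

Even the best of these rates means that more than a third of prominent results could not be reproduced.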

This is known as the “replication crisis,” and it’s devastating. The ability to repeat an experiment and get consistent results is the bedrock of science. If important experiments didn’t really find what they claimed to, that could lead to iffy treatments and a loss of trust in science more broadly. So scientists have done a lot of tinkering to try to fix this crisis. They’ve come up with “open science” practices that help somewhat — like preregistration, where a scientist announces how she’ll conduct her study before actually doing the study — and journals have gotten better about retracting bad papers. Yet top journals still publish shoddy papers, and other researchers still cite and build on them.

This is where the Transparent Replications project comes in.

The project, launched last week by the nonprofit Clearer Thinking, has a simple goal: to replicate any psychology study published in Science or Nature (as long as it’s not way too expensive or technically hard). The idea is that, from now on, before researchers submit their papers to a prestigious journal, they’ll know that their work will be subjected to replication attempts, and they’ll have to worry about whether their findings hold up. Ideally, this will shift their incentives toward producing more robust research in the first place, as opposed to just racking up another publication in hopes of getting tenure.

https://www.vox.com/future-perfect/23489211/replication-crisis-project-meta-science-psychology

My dad lived in the "publish or perish" world, but was a physical scientist (a physical geographer). He was also a meticulous researcher who would die before publishing anything that wasn't triple- or quadruple-checked. That's one reason I found this interesting. But psychology isn't a "hard" science....


cyclonefence

(4,873 posts)
1. Another reason: On-line journals
Tue Dec 6, 2022, 12:11 PM
Dec 2022

that aggressively solicit articles from published (and vetted) authors. I was second author of the introduction of an important special issue of a medical journal. Since that time, not a day passes without a solicitation from at least one medical journal I've never heard of, offering me prominent exposure for anything I'd care to write for them.

I am not the scientist who did the research for our introduction. I am a scientific writer, not a person who is in the field. If I accepted one of these invitations, I could be cited as a published authority on something I know absolutely nothing about. There should be a way to identify these e-zine journals when whackos use them as support for insane theories.

Fiendish Thingy

(18,555 posts)
2. Some tips to help screen out bad science or non-science masquerading as science:
Tue Dec 6, 2022, 01:30 PM
Dec 2022
Avoid “pre-publication” studies: these are reports of data implying conclusions that have not been peer reviewed or screened for publication. They are often just thinly veiled press releases, meant to generate headlines and stimulate investor interest.

Avoid meta-analyses: meta-analyses have a high risk of bias, since they are collections of studies, often cherry-picked by someone with an agenda. The flaws of each individual study are masked when the aggregator states something like “in a review of 20 studies, result X was found X% of the time.”

Avoid “preliminary studies” or “preliminary data”: preliminary studies often rest on very small data sets that cannot be generalized to the larger population. Always look for the “n”, the number of subjects. If n is in the dozens rather than the hundreds (or, preferably, thousands), beware. If there are no human subjects, be especially careful about drawing any conclusions from the study.
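The warning about small n can be made concrete: the standard error of an estimated proportion shrinks with the square root of the sample size, so a study with a few dozen subjects gives a far noisier estimate than one with thousands. A minimal sketch, assuming a simple sample-proportion estimate at p = 0.5 (the worst case for variance):

```python
import math

def standard_error(p: float, n: int) -> float:
    """Standard error of a sample proportion p estimated from n subjects."""
    return math.sqrt(p * (1 - p) / n)

# Compare a "dozens" study with progressively larger samples.
# A rough 95% confidence interval is about +/- 1.96 standard errors.
for n in (30, 300, 3000):
    se = standard_error(0.5, n)
    print(f"n={n:>4}: SE = {se:.3f}, 95% CI about +/- {1.96 * se:.3f}")
```

With n = 30 the 95% confidence interval is roughly plus or minus 18 percentage points; it takes a hundredfold increase in subjects to shrink the error tenfold, which is why tiny preliminary samples are so easy to over-interpret.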

Redleg

(6,142 posts)
4. This is good advice
Tue Dec 6, 2022, 03:40 PM
Dec 2022

I appreciate that some researchers conduct meta-analyses and that I am not one of them. They always seemed so atheoretical to me.

Redleg

(6,142 posts)
3. The social and behavioral sciences ...
Tue Dec 6, 2022, 03:38 PM
Dec 2022

... may not be "hard sciences," but conducting rigorous research in these areas is a challenge because they can't rely on the same controls to ensure internal and external validity that the physical sciences can. For example, I am a behavioral scientist and college professor who conducts research in organizations. Organizations don't normally (meaning hardly ever) allow researchers to use experimental designs. Without a rigorous design there is no random assignment, no control group, and no treatment group like you might find in a laboratory study by an experimental psychologist, biologist, or medical researcher. Organizational researchers and economists have to statistically control for exogenous factors.

Even researchers in psychology often have to rely on subjects whose populations might not reflect the overall population, for example college freshmen and sophomores, or white mice, as the subjects of their experiments. Results from these studies might not have the level of external validity a researcher would want. The social sciences also use a good deal of survey research, which relies on the honesty of the subjects; that can be problematic when asking them questions about sensitive issues. Even assessing something as seemingly simple as "job satisfaction" can be problematic, partly due to how the construct is defined and how it is operationalized.

I know it's a thing for some so-called "hard science" people to mock social and behavioral research. I suggest if they want to help they should get out of their labs and think about how they would research psychological and sociological phenomena. Better yet, they can give me access to their own organizations so I can conduct research there instead of having to go out hat in hand to drum up research sites.
