
littlemissmartypants

(25,483 posts)
Mon Sep 4, 2023, 02:10 PM Sep 2023

Good vs. bad science: how to read and understand scientific studies podcast

September 4, 2023
UNDERSTANDING SCIENCE
Podcast


#269 – Good vs. bad science: how to read and understand scientific studies

“I think epidemiology has a place, but I think the pendulum has swung a little too far, and it has been asserted as being more valuable than I think it probably is.” —Peter Attia

This special episode is a rebroadcast of AMA #30, now made available to everyone, in which Peter and Bob Kaplan dive deep into all things related to studying studies to help one sift through the noise to find the signal. They define various types of studies, how a study progresses from idea to execution, and how to identify study strengths and limitations. They explain how clinical trials work, as well as biases and common pitfalls to watch out for. They dig into key factors that contribute to the rigor (or lack thereof) of an experiment, and they discuss how to measure effect size, differentiate relative risk from absolute risk, and what it really means when a study is statistically significant. Finally, Peter lays out his personal process when reading through scientific papers.

We discuss:
●The ever-changing landscape of scientific literature [2:30];
●The process for a study to progress from idea to design to execution [5:00];
●Various types of studies and how they differ [8:00];
●The different phases of clinical trials [19:45];
●Observational studies and the potential for bias [27:00];
●Experimental studies: Randomization, blinding, and other factors that make or break a study [44:30];
●Power, p-values, and statistical significance [56:45];
●Measuring effect size: Relative risk vs. absolute risk, hazard ratios, and “Number Needed to Treat” [1:08:15];
●How to interpret confidence intervals [1:18:00];
●Why a study might be stopped before its completion [1:24:00];
●Why only a fraction of studies are ever published and how to combat publication bias [1:32:00];
●Why certain journals are more respected than others [1:41:00];
●Peter’s process when reading a scientific paper [1:44:15]; and
More.
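The effect-size measures in the list above (relative risk, absolute risk, and "Number Needed to Treat") are easy to confuse in headlines, so here is a minimal sketch with hypothetical trial numbers — nothing here is taken from the episode:

```python
# Hedged sketch: effect-size measures from a hypothetical two-arm trial.
# All numbers are made up for illustration.

def effect_sizes(events_treated, n_treated, events_control, n_control):
    """Return (relative_risk, absolute_risk_reduction, nnt)."""
    risk_treated = events_treated / n_treated
    risk_control = events_control / n_control
    rr = risk_treated / risk_control    # relative risk (what headlines quote)
    arr = risk_control - risk_treated   # absolute risk reduction
    nnt = 1 / arr                       # patients treated to prevent one event
    return rr, arr, nnt

# Hypothetical trial: 10/1000 events on treatment vs. 20/1000 on control.
rr, arr, nnt = effect_sizes(10, 1000, 20, 1000)
print(f"Relative risk: {rr:.2f}")             # 0.50 -- "cuts risk in half!"
print(f"Absolute risk reduction: {arr:.3f}")  # 0.010 -- only 1 in 100
print(f"Number Needed to Treat: {nnt:.0f}")   # 100 treated per event prevented
```

The same trial yields a dramatic-sounding relative risk ("halves the risk") and a modest absolute one (1 fewer event per 100 patients) — which is exactly why the distinction matters when reading a paper.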

Snip...more:
https://peterattiamd.com/how-to-read-and-understand-scientific-studies/

❤️pants

Warpy

(113,130 posts)
1. There is a simpler way to do this
Tue Sep 5, 2023, 12:12 AM
Sep 2023

First and foremost, watch those weasel words. An article heavily laced with "can, could, may, might, and possibly" is one you need to dismiss out of hand. Science reaches some pretty firm conclusions, although those conclusions are frequently overturned down the road when more data sets come in. That weasel-word stuff is pop science, which has little in common with the real thing.

Second, who paid for the research? Confirmation bias is a thing, and you might want to withhold any action based on that article until the study is replicated by someone whose paycheck doesn't depend on reaching those conclusions.

With medical studies, you have to consider how large the study was and, if it's a longitudinal study, what the duration was. Case in point: the "crack baby" hysteria. A study of infants born to crack-smoking mothers showed some pretty horrific effects in the first two weeks. I can attest to how hard it was to watch such babies in the NICU; they were obviously miserable. Eventually, thousands of those kids were followed, with follow-ups at five, ten, and fifteen years, and by the age of five there were no ill effects that couldn't be attributed to the effects of poverty. Turns out keeping mothers in poverty is worse for kids than crack was. Sometimes you need to wait years for the data to come in.

The number of people in a study can also be a red flag. You need large numbers because there is no way to control all the possible variables when studying people. Not only do we have different habits, some of us turn ornery after a while. With fewer than 100 people in a study, you get Andrew Wakefield (if you don't know who he is, look him up; he's a real pip).
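The point about needing large numbers can be made concrete with a standard back-of-the-envelope power calculation. This is a hedged sketch using the textbook normal-approximation formula for comparing two proportions; the event rates are made up, and real trials use more careful designs:

```python
# Hedged sketch: approximate sample size per arm to compare two event rates,
# via the normal approximation. Illustrative only; rates are hypothetical.
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per arm to detect p1 vs p2 at the given alpha and power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z.inv_cdf(power)           # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a drop from a 20% event rate to 10% takes roughly 200 people
# per arm; a subtler drop, 20% to 15%, takes several times more.
print(n_per_group(0.20, 0.10))
print(n_per_group(0.20, 0.15))
```

The smaller the real effect, the faster the required sample size grows — which is why a dramatic claim from a few dozen subjects deserves the skepticism Warpy describes.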

Just these few things will help you read a lot more skeptically, and save you expense and aggravation by disconnecting you from the shrieking headlines.
