Want to explore Scott Alexander's work and his 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

5 posts found
Apr 30, 2024
acx
45 min · 6,211 words · 376 comments · 123 likes · podcast (34 min)
Scott Alexander responds to Robin Hanson's reply on medical effectiveness, clarifying interpretations and reiterating arguments about the limitations of insurance experiments in evaluating medical care.
Scott Alexander responds to Robin Hanson's reply to his original post on medical effectiveness. Scott clarifies his interpretation of Hanson's views, discusses potential misunderstandings, and reiterates his arguments about the limitations of insurance experiments in evaluating medical effectiveness. He also addresses specific points Hanson made about cancer treatment, health insurance studies, and p-hacking in medical research. Scott concludes by restating his position that while some medicine is ineffective, it's crucial to distinguish between effective and ineffective treatments rather than dismissing medicine broadly.
Nov 13, 2018
ssc
38 min · 5,299 words · 164 comments · podcast (38 min)
Scott Alexander critically examines studies on preschool effects, finding mixed and inconsistent evidence for long-term benefits.
Scott Alexander reviews multiple studies on the long-term effects of preschool programs like Head Start. While early studies showed fade-out of test score gains, some found lasting benefits in adult outcomes. However, Scott finds that the studies disagree about which subgroups benefit and on which outcomes, and he notes concerns about potential p-hacking and researcher degrees of freedom. Ultimately, Scott concludes that the evidence is mixed: it permits believing preschool has small positive effects, but does not force that conclusion. He estimates 60% odds that preschool helps in the ways the studies suggest, and 40% odds that it's useless.
Nov 05, 2016
ssc
18 min · 2,444 words · 162 comments
Scott Alexander examines a pseudoscientific claim about the Great Pyramid of Giza to illustrate how coincidences can appear more significant than they are, relating this to challenges in evaluating scientific studies.
Scott Alexander discusses a pseudoscientific claim that the Great Pyramid of Giza's location encodes the speed of light to seven decimal places. He breaks down the coincidence, explaining how it's less impressive than it initially appears due to various degrees of freedom in the calculation. He then uses this as a jumping-off point to discuss how similar issues can arise in legitimate scientific studies, referencing Andrew Gelman's 'garden of forking paths' concept. The post concludes by emphasizing the difficulty of fully dissecting such coincidences, even when actively looking for explanations, and how this applies to evaluating scientific studies.
Apr 28, 2014
ssc
36 min · 4,977 words · 197 comments · podcast (38 min)
Scott Alexander critiques a meta-analysis supporting psychic phenomena to illustrate flaws in scientific methodology and meta-analysis.
Scott Alexander examines a meta-analysis by Daryl Bem that claims to provide strong evidence for psychic phenomena (psi). While Bem's analysis follows many best practices for scientific rigor, Alexander argues it likely suffers from experimenter effects and other biases that can produce false positive results. He uses this to illustrate broader issues with the scientific method and meta-analysis, concluding that even seemingly rigorous studies and meta-analyses can produce incorrect conclusions. This challenges the idea that scientific consensus and meta-analysis are the highest forms of evidence.
Jan 02, 2014
ssc
15 min · 2,049 words · 15 comments
Scott Alexander reviews two papers exposing statistical manipulation techniques in psychology research and addiction treatment program evaluations.
This post discusses two papers on statistical manipulation in scientific studies. The first paper, 'False Positive Psychology', demonstrates how researchers can use four tricks to artificially achieve statistical significance: measuring multiple dependent variables, choosing when to end experiments, controlling for confounders, and testing different conditions. The authors show these tricks can make random data appear significant 61% of the time. The second paper, 'How To Have A High Success Rate In Treatment', reveals how addiction treatment programs can inflate their success rates through various methods like carefully choosing the denominator, selecting promising candidates, redefining success, and omitting control groups. Both papers highlight the ease of manipulating statistics to produce desired results in research and treatment evaluations.
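The first paper's core finding, that pure noise can be made to look "significant" well above the nominal 5% rate, is easy to reproduce in miniature. The Python sketch below is an illustrative simulation (not code from the paper) of null experiments that use two of the four tricks, multiple dependent variables and optional stopping; the function names and parameter values are assumptions chosen for the demo.

```python
import math
import random

def z_test_p(a, b):
    """Two-sided z-test p-value for two samples drawn from N(mu, 1)."""
    n, m = len(a), len(b)
    z = (sum(a) / n - sum(b) / m) / math.sqrt(1 / n + 1 / m)
    return math.erfc(abs(z) / math.sqrt(2))

def run_experiment(rng, n=20, n_dvs=3, extra=10, alpha=0.05):
    """One null experiment (no true effect) using two p-hacking tricks:
    multiple dependent variables, then optional stopping."""
    # n_dvs outcome measures per group, all pure noise
    group_a = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(n_dvs)]
    group_b = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(n_dvs)]
    if any(z_test_p(a, b) < alpha for a, b in zip(group_a, group_b)):
        return True  # "significant" on some measure -- stop and publish
    # Not significant yet: collect more subjects and test everything again
    for a, b in zip(group_a, group_b):
        a.extend(rng.gauss(0, 1) for _ in range(extra))
        b.extend(rng.gauss(0, 1) for _ in range(extra))
    return any(z_test_p(a, b) < alpha for a, b in zip(group_a, group_b))

rng = random.Random(0)
trials = 2000
false_pos = sum(run_experiment(rng) for _ in range(trials)) / trials
print(f"false positive rate under the null: {false_pos:.0%}")
```

Even with only two of the four tricks, the false positive rate lands well above the nominal 5%; adding post-hoc covariates and extra experimental conditions, as the paper does, pushes it higher still.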