Want to explore Scott Alexander's work and his 1,500+ blog posts? This unaffiliated fan website lets you sort and search through the whole codex. Enjoy!

See also Top Posts and All Tags.

21 posts found
Jun 08, 2023
acx
10 min · 1,355 words · 228 comments · 246 likes · podcast (8 min)
Scott Alexander explores the difficulties in contextualizing statistics, providing numerous examples to show how the same data can be presented to seem significant or trivial. Longer summary
Scott Alexander discusses the challenges of putting statistical findings into context, showing how different comparisons can make the same statistic seem either significant or trivial. He provides numerous examples of effect sizes and correlations from various fields to illustrate this point. The post aims to promote awareness of how statistics can be manipulated and encourages readers to be vigilant when interpreting contextual comparisons. Scott also acknowledges the limitations of using standardized effect sizes but argues for their utility in certain situations where more specific measures are difficult to comprehend. Shorter summary
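For readers who want a concrete handle on what a "standardized effect size" is, here is a minimal sketch (mine, not from the post) of Cohen's d, showing how the same raw difference can register as a large or a small effect depending on how spread out the data are; the sample numbers are made up for illustration.

```python
# Illustrative sketch (not from the post): Cohen's d standardizes a raw
# difference by the pooled standard deviation, so the same raw gap can
# look large or small depending on how spread out the data are.
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference between two samples."""
    na, nb = len(group_a), len(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Same 2-point raw difference, different spreads:
tight_a, tight_b = [10, 11, 12, 13], [12, 13, 14, 15]   # d ≈ -1.55 (large)
wide_a,  wide_b  = [5, 10, 15, 20], [7, 12, 17, 22]     # d ≈ -0.31 (small)
print(cohens_d(tight_a, tight_b), cohens_d(wide_a, wide_b))
```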
Jan 10, 2023
acx
3 min · 291 words · 84 comments · 40 likes · podcast (3 min)
Scott Alexander announces Stage 2 of the 2023 Prediction Contest, encouraging participants to use any resources to make accurate predictions. Longer summary
Scott Alexander announces Stage 2 ('Full Mode') of the 2023 Prediction Contest, following the closure of Stage 1 ('Blind Mode'). In this stage, participants are encouraged to use any resources available to make accurate predictions, including personal research, prediction markets, forecasting tournaments, and the 3295 blind mode answers from Stage 1. Scott provides suggestions on how to use these resources and emphasizes that there's no such thing as cheating, except for time travel or harming competitors. He also mentions that the form will ask for a brief description of the strategy used. Shorter summary
Dec 27, 2022
acx
7 min · 963 words · 324 comments · 235 likes · podcast (7 min)
Scott Alexander argues that selection bias, while a concern, is not a valid reason to automatically reject amateur online surveys, as professional studies also face similar limitations. Longer summary
Scott Alexander discusses the issue of selection bias in amateur online surveys, arguing that it's not a valid reason to dismiss their results outright. He points out that professional scientific studies also suffer from selection bias, often using unrepresentative samples like psychology students. The post explains that while selection bias is problematic for polls or census-like studies aiming to determine population-wide statistics, it's less of an issue for correlation studies. Scott argues that the key is to consider the mechanism being studied and how it might generalize, rather than dismissing studies based solely on their sample selection method. Shorter summary
Jul 07, 2022
acx
7 min · 846 words · 370 comments · 139 likes · podcast (7 min)
Scott Alexander examines the poor quality of research on homework effectiveness, finding only one well-designed study showing positive effects for high school algebra. Longer summary
Scott Alexander discusses the lack of reliable research on the effectiveness of homework. He critiques existing studies for their flawed methodologies, particularly their reliance on self-reported time spent on homework as a proxy for homework amount. The post highlights issues with confounding factors and poor study designs. Alexander finds only one well-designed, randomized study on homework effectiveness, which shows a positive effect for 9th-grade algebra homework. However, he notes that this single study doesn't provide enough evidence to draw broad conclusions about homework effectiveness across different subjects and grade levels. Shorter summary
Jan 26, 2022
acx
16 min · 2,164 words · 433 comments · 150 likes · podcast (22 min)
Scott Alexander critiques a study claiming cash payments to poor mothers increased infant brain function, highlighting statistical and methodological issues that undermine its positive conclusions. Longer summary
Scott Alexander critiques a recent study claiming that cash payments to low-income mothers increased brain function in babies. He points out several issues with the study, including the loss of statistical significance after adjusting for multiple comparisons, potential artifacts in EEG data visualization, and deviations from pre-registered analysis plans. He also discusses the broader context of research on poverty and cognition, noting the difficulty in finding shared environmental effects and the tendency for studies in this field to be flawed or overhyped. Scott concludes that while the study doesn't prove cash grants don't affect children's EEGs, it essentially shows no effect and should not have been reported as an unqualified positive result. Shorter summary
Jul 13, 2021
acx
3 min · 321 words · 91 comments · 25 likes · podcast (4 min)
Scott Alexander is finalizing preparations for a Reader Survey, asking participants to confirm their inclusion and make necessary adjustments before the Friday start date. Longer summary
Scott Alexander is preparing for a Reader Survey and is asking participants to confirm their inclusion in the list of surveys. He provides a list of surveys planned, noting some specifics about targeting and demographics collection. Scott also shares a link to a draft demographics survey and asks participants to review it, make necessary changes to their own surveys, and finalize everything before Friday when he plans to start the process. Shorter summary
Apr 06, 2021
acx
14 min · 1,954 words · 273 comments · 67 likes · podcast (15 min)
Scott Alexander examines two cases of multiple hypothesis testing problems in medical and social science research, highlighting the complexities in interpreting results. Longer summary
Scott Alexander discusses two cases of multiple hypothesis testing problems. The first involves a Vitamin D study for COVID-19 where a significant difference in blood pressure between groups complicates the interpretation of results. The second case relates to Scott's own study on ambidexterity and authoritarianism, where he questions the applicability of traditional multiple hypothesis testing corrections. He explores the complexities of interpreting multiple tests of the same hypothesis and considers Bayesian approaches, ultimately acknowledging the limits of his statistical knowledge on this seemingly simple question. Shorter summary
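As a rough illustration of what a "traditional multiple hypothesis testing correction" does, here is a minimal sketch (mine, not Scott's analysis) of the Bonferroni method; the p-values are hypothetical.

```python
# Illustrative sketch (not Scott's analysis): Bonferroni correction divides
# the significance threshold by the number of hypotheses tested, so results
# that look significant one at a time may not survive the family-wise bar.
def bonferroni(p_values, alpha=0.05):
    """Return which hypotheses remain significant after correction."""
    adjusted_alpha = alpha / len(p_values)
    return [p <= adjusted_alpha for p in p_values]

# Hypothetical p-values from five endpoints of one study:
p_values = [0.04, 0.015, 0.20, 0.03, 0.008]
print(bonferroni(p_values))  # only p = 0.008 clears 0.05 / 5 = 0.01
```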
Dec 30, 2019
ssc
5 min · 650 words · 408 comments · podcast (5 min)
Scott Alexander is requesting readers to participate in the 2020 Slate Star Codex Survey, which helps him gather data for research and community planning. Longer summary
Scott Alexander is asking readers to take the 2020 Slate Star Codex Survey. The survey helps him learn about his readers, plan community events, and provides informal research data for interesting posts. It's open to anyone who has read the blog before December 30, 2019. The survey is in two parts: Part I takes about 10 minutes and asks basic questions, while Part II takes about 15 minutes and focuses on research topics. Scott mentions some limitations of the survey and offers the possibility of a monetary prize for randomly selected respondents. Shorter summary
Mar 28, 2019
ssc
3 min · 296 words · 41 comments · podcast (4 min)
Scott Alexander partially retracts his previous post on animal value and neuron number after a commenter's larger survey yielded different results. Longer summary
Scott Alexander partially retracts his previous post about animal value and neuron number. A commenter, Tibbar, replicated Scott's survey using Mechanical Turk and obtained different results with a larger sample size. Scott acknowledges that while Mechanical Turk users might not be the ideal sample and some responses seem rushed, it's difficult to claim his original results represent a universal intuitive understanding. He explains that his original sample was more informed about animal rights issues. Scott adds this to his Mistakes page and considers including a similar survey in the future, hoping readers will have forgotten about this retraction. Shorter summary
Nov 13, 2018
ssc
38 min · 5,299 words · 164 comments · podcast (38 min)
Scott Alexander critically examines studies on preschool effects, finding mixed and inconsistent evidence for long-term benefits. Longer summary
Scott Alexander reviews multiple studies on the long-term effects of preschool programs like Head Start. While early studies showed fade-out of test score gains, some found lasting benefits in adult outcomes. However, Scott finds inconsistencies between studies in which subgroups benefit and on which outcomes. He also notes concerns about potential p-hacking and researcher degrees of freedom. Ultimately, Scott concludes that the evidence is mixed - it permits believing preschool has small positive effects, but does not force that conclusion. He estimates 60% odds preschool helps in ways suggested by the studies, 40% odds it's useless. Shorter summary
Nov 01, 2018
ssc
13 min · 1,706 words · 52 comments · podcast (14 min)
Scott Alexander investigates the complexities of using Google Trends for mental health research, proposing methods to improve accuracy and interpretation of search data. Longer summary
Scott Alexander explores the challenges and nuances of using Google Trends for research, focusing on mental health-related searches. He demonstrates how individual keywords can be misleading and suggests averaging multiple related terms. Scott introduces a two-factor theory: one representing students' intellectual interest in mental health, which is declining, and another representing people with mental health issues, which is increasing. He discusses potential issues like temporal autocorrelation and provides tips for using Google Trends effectively, including the importance of considering school-related search patterns and the lack of need to adjust for general intellectual decline on the Internet. Shorter summary
Feb 26, 2018
ssc
20 min · 2,764 words · 45 comments · podcast (21 min)
Scott Alexander critically examines a major meta-analysis on antidepressant efficacy, noting potential biases and comparing its surprising drug rankings to his own previous analysis. Longer summary
This post reviews a major meta-analysis by Cipriani et al on the efficacy of antidepressants. The study claims to definitively show antidepressants work, but Scott notes it doesn't actually refute previous critiques about their effectiveness. He examines potential biases and methodological issues in the study, particularly around industry funding of trials. Scott also discusses the study's ranking of different antidepressants, noting some matches with conventional wisdom but also some surprising results. He compares these rankings to his own previous analysis, finding major discrepancies, and concludes by urging some caution in interpreting the study's results despite its impressive scope. Shorter summary
Sep 22, 2016
ssc
2 min · 243 words · 440 comments
Scott Alexander is running an experiment to test the effectiveness of persuasive essays about AI risk, asking readers to participate by reading an essay and answering questions. Longer summary
Scott Alexander is conducting an experiment to measure the effectiveness of persuasive essays about AI risk. He's asking readers to participate by reading an essay on AI risk and answering questions about it. The experiment is split into two versions based on the reader's surname. Scott emphasizes that everyone is welcome to participate, especially those unfamiliar with or skeptical about AI risk. He notes that readers don't have to finish long essays if they lose interest, as this is part of what makes an essay persuasive. Shorter summary
Nov 30, 2015
ssc
11 min · 1,459 words · 422 comments
Scott reviews evidence on whether college improves critical thinking, finding modest short-term gains but questioning their long-term persistence. Longer summary
Scott examines the claim that college teaches critical thinking skills. He reviews several studies, finding modest evidence that college improves critical thinking, with effect sizes ranging from 0.18 to 0.44 standard deviations. However, he notes limitations in the research, such as lack of long-term follow-up and potential confounding factors. Scott expresses skepticism about whether these gains persist after college, drawing parallels to other temporary developmental effects. He also discusses specific aspects of college that may contribute to critical thinking gains, finding little evidence for dedicated 'critical thinking' classes but some benefit from liberal arts education and certain study habits. Shorter summary
Mar 11, 2015
ssc
7 min · 941 words · 187 comments
Scott Alexander critiques psychological studies claiming large effects from simple interventions, suggesting their impressive results may be due to flawed research rather than genuinely effective treatments. Longer summary
Scott Alexander examines three psychological studies that claim significant improvements in academic performance and behavior from simple interventions. He contrasts these with a large, expensive early intervention program for troubled youth that showed only modest effects. This leads him to question whether psychological research is flawed or if other interventions are ineffective. After closer examination, he finds potential issues with each study's methodology or reporting, suggesting that the impressive results may be due to poor research standards rather than genuinely effective interventions. He concludes by comparing this situation to an XKCD comic about economic theories, implying that if these psychological interventions truly worked as claimed, we would see much more significant improvements in education, rehabilitation, and mental health. Shorter summary
Scott Alexander examines a study comparing the effectiveness of drugs and therapy for psychiatric disorders, discussing the results and methodological limitations of the research. Longer summary
This post analyzes a study comparing the efficacy of pharmacotherapy and psychotherapy for various psychiatric disorders. The author discusses the graph showing effect sizes for different treatments, noting that most psychiatric treatments have an effect size around 0.5. He expresses some uncertainty about the statistical methods used and highlights three surprising findings: drugs appearing more effective than therapy for borderline personality disorder and insomnia, and drugs being more effective at preventing relapse than stopping acute episodes. The post also discusses the limitations of psychotherapy trials, noting that lower quality trials tend to show much higher effect sizes than high-quality ones, and that psychotherapy research often lacks sufficient blinding and control groups. Shorter summary
Apr 26, 2014
ssc
10 min · 1,299 words · 92 comments
Scott criticizes a study linking childhood bullying to negative adult outcomes, arguing that its method of controlling for confounders is inadequate and proposing alternative explanations for the correlation. Longer summary
Scott criticizes a study claiming that childhood bullying victimization leads to negative adult outcomes. He argues that the study's attempt to control for confounding factors is inadequate, as bullies are likely better at identifying vulnerable children than the researchers' measures. Scott suggests that unmeasured factors like height could explain the correlation, and that the study's method of adjusting for confounders is unreliable. He proposes that a proper study would involve an anti-bullying intervention with control schools. The post also mentions a contrasting study that found no association after adjusting for confounders, and questions the reliability of parent reports on bullying used in the original study. Shorter summary
Jan 19, 2014
ssc
6 min · 734 words · 31 comments
Scott Alexander critiques a study suggesting knowledge of ApoE4 gene status affects memory performance, arguing the results may be due to priming or stereotype threat rather than actual memory decline. Longer summary
Scott Alexander discusses a study by Lineweaver et al. that tested elderly adults for the ApoE4 gene, a risk factor for Alzheimer's. The study found that subjects who knew they had ApoE4 performed worse on memory tests than those who had it but didn't know. Scott critiques the study's methodology and interpretation, suggesting that the results might be due to priming effects or stereotype threat rather than actual memory decline. He expresses concern that the medical community might overinterpret these results and discourage genetic testing without sufficient evidence of harm. Shorter summary
Jan 02, 2014
ssc
15 min · 2,049 words · 15 comments
Scott Alexander reviews two papers exposing statistical manipulation techniques in psychology research and addiction treatment program evaluations. Longer summary
This post discusses two papers on statistical manipulation in scientific studies. The first paper, 'False Positive Psychology', demonstrates how researchers can use four tricks to artificially achieve statistical significance: measuring multiple dependent variables, choosing when to end experiments, controlling for confounders, and testing different conditions. The authors show these tricks can make random data appear significant 61% of the time. The second paper, 'How To Have A High Success Rate In Treatment', reveals how addiction treatment programs can inflate their success rates through various methods like carefully choosing the denominator, selecting promising candidates, redefining success, and omitting control groups. Both papers highlight the ease of manipulating statistics to produce desired results in research and treatment evaluations. Shorter summary
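To make the "choosing when to end experiments" trick concrete, here is a small simulation sketch (mine, not from either paper) showing how optional stopping alone inflates the false-positive rate on pure noise; the function name and parameters are illustrative, and the 61% figure in the paper comes from combining all four tricks rather than this one.

```python
# Illustrative sketch (mine, not from either paper): simulating just one of
# the four tricks -- "choosing when to end experiments" (optional stopping).
# Both groups are pure noise, yet peeking at the p-value after every batch
# and stopping at the first p < 0.05 inflates the false-positive rate well
# above the nominal 5%. Combining tricks, as the paper does, inflates it further.
import random
from math import sqrt
from statistics import NormalDist, mean

def false_positive_with_peeking(batch=10, max_n=100, alpha=0.05):
    """Return True if pure-noise data ever looks 'significant' while peeking."""
    a, b = [], []
    while len(a) < max_n:
        a += [random.gauss(0, 1) for _ in range(batch)]
        b += [random.gauss(0, 1) for _ in range(batch)]
        z = (mean(a) - mean(b)) / sqrt(2 / len(a))      # known sd = 1
        p = 2 * (1 - NormalDist().cdf(abs(z)))
        if p < alpha:
            return True                                  # stop and "publish"
    return False

trials = 2000
rate = sum(false_positive_with_peeking() for _ in range(trials)) / trials
print(f"false-positive rate with optional stopping: {rate:.0%}")  # roughly 15-20%, not 5%
```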
Apr 22, 2013
ssc
9 min · 1,191 words · 41 comments
Scott Alexander reviews a study using Implicit Association Tests to measure suicidal intent, expressing both excitement for the research direction and skepticism about its methodology and practical applications. Longer summary
Scott Alexander discusses a 2007 study by Nock & Banaji that uses Implicit Association Tests (IATs) to measure suicidal intent in psychiatric patients. The study matches categories of self/other with images of self-harm/non-self-harm, claiming to distinguish between healthy controls, those with past suicidal ideation, and past suicide attempters. While Scott is excited about research in this area, he expresses skepticism about the study's methodology, particularly its use of self-harm images instead of actual suicide imagery. He also questions the study's ability to predict future suicide attempts and its potential usefulness in real-world scenarios. Despite his reservations, Scott sees this as a positive step towards using IATs for practical applications beyond social justice projects. Shorter summary
Apr 12, 2013
ssc
13 min · 1,734 words · 48 comments · podcast (12 min)
Scott Alexander explores the concept of 'Lizardman's Constant' and its implications for interpreting poll results, especially those concerning unpopular beliefs. Longer summary
Scott Alexander discusses the concept of 'Lizardman's Constant', which refers to the roughly 4% of respondents in polls who give outlandish or deliberately false answers. He explores this through three examples: a personal experience with survey responses, a poll about conspiracy theories, and a controversial study on climate change skepticism. The post argues that when dealing with unpopular beliefs, polls can only provide weak signals that are easily overwhelmed by noise from various sources, including jokesters, cognitive biases, and deliberate misbehavior. Scott concludes that polls relying on detecting very weak signals should be treated with skepticism. Shorter summary
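A back-of-the-envelope sketch (my numbers, apart from the roughly 4% constant itself) of why that noise floor swamps polls about rare beliefs:

```python
# Back-of-the-envelope sketch (my numbers, apart from the ~4% constant):
# if ~4% of respondents answer at random or as a joke, a poll reporting 5%
# "yes" is compatible with a true rate anywhere from ~1% to ~5%, so the
# signal for a genuinely rare belief is buried in the noise floor.
lizardman_constant = 0.04   # joking / careless / trolling respondents
observed_yes = 0.05         # what the poll reports

plausible_true_rate_low = max(0.0, observed_yes - lizardman_constant)
plausible_true_rate_high = observed_yes  # if, by luck, none of the jokers said yes
print(f"true rate could plausibly be anywhere in "
      f"[{plausible_true_rate_low:.0%}, {plausible_true_rate_high:.0%}]")
```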