How can you explore Scott Alexander's work and his 1,500+ blog posts? This unaffiliated fan website lets you sort the posts and run semantic search across the whole codex. Enjoy!

See also Top Posts and All Tags.

Tag: Biden


2 posts found
Feb 14, 2025
acx
18 min · 2,747 words · 1,114 comments · 567 likes · podcast (19 min)
Scott analyzes Ted Cruz's database of 'woke' NSF grants and finds that only 40% were actually woke, with many ordinary science grants included simply for containing a mandatory diversity statement.
Scott Alexander analyzes Ted Cruz's database of supposedly 'woke' NSF grants by sampling 100 grants at random. He finds that only 40% were actually woke, another 20% were borderline cases, and 40% were completely unrelated to wokeness. Most of the non-woke grants appeared in the database because they included a seemingly mandatory sentence about helping minorities or women, likely added to satisfy grant requirements. Of the genuinely woke grants, only 10-20% were egregiously bad; the rest were mostly benign STEM outreach programs. Scott argues that sorting genuinely woke grants from regular science would be easy, taking only about a week of work, and criticizes both the Biden administration for requiring diversity statements and Republicans for targeting legitimate research.
Nov 22, 2024
acx
14 min · 2,029 words · 530 comments · 370 likes · podcast (12 min)
Scott Alexander explains why dismissing warnings just because previous similar warnings were wrong is a dangerous fallacy, particularly for risks that naturally increase over time.
Scott Alexander criticizes what he calls the "generalized anti-caution argument": the tendency to dismiss warnings about risks because previous similar warnings didn't come true. He explains that for gradually increasing risks (like drug doses or AI capabilities), being wrong about earlier warnings doesn't invalidate later ones. He illustrates this with several examples, including the Ukraine war, Biden's cognitive decline, and AI development, contrasting them with cases where the risk doesn't naturally increase over time. The post ends by arguing that people should maintain appropriate caution even after false alarms, especially for risks that grow over time.