How do you avoid getting lost in Scott Alexander's 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

4 posts found
Jul 16, 2019
ssc
17 min · 2,119 words · 322 comments · podcast
Scott Alexander argues against broadening the definitions of words like 'lie' and 'abuse', as it dilutes their meaning and reduces their usefulness in identifying problematic behavior.
Longer summary: Scott Alexander argues against broadening the definition of words like 'lie' and 'abuse' to include less severe actions. He contends that this dilutes the meaning of these terms, making them less useful for identifying genuinely problematic behavior. The post discusses how overly broad definitions can lead to everyone being labeled as liars or abusers, which removes the stigma and informational value of these terms. Scott also explains how this can be exploited by bad actors to unfairly stigmatize others. He extends this argument to other terms like 'disabled', 'queer', and 'autistic', suggesting that while some broadening of definitions can be useful, it's never right to define a term so broadly that it applies to everyone or no one.
Sep 25, 2018
ssc
18 min · 2,306 words · 191 comments · podcast
The post explores how correlated variables can diverge at extreme values, applying this concept to happiness measures and moral systems.
Longer summary: This post explores the concept of 'tails coming apart' and its application to various domains, particularly morality. The author begins by discussing how strongly correlated variables can diverge at extreme values, using examples like grip strength vs. arm strength. He then applies this concept to happiness, showing how different measures of happiness (e.g., life satisfaction, positive emotions) can lead to different countries being ranked as 'happiest'. The post extends this idea to morality, arguing that while different moral systems may agree in everyday situations, they diverge dramatically when taken to extremes. The author suggests that this divergence poses challenges for developing moral systems that can handle transhuman scenarios.
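The 'tails coming apart' effect described above lends itself to a tiny simulation. The sketch below is not from the original post; the variable names, noise level, and sample size are illustrative assumptions. Two measures built from a shared factor plus independent noise correlate strongly overall, yet the top scorer on one measure is rarely the top scorer on the other.

```python
import numpy as np

# Illustrative (hypothetical) setup: two "strength" measures, e.g. grip
# strength and arm strength, each equal to a shared trait plus noise.
rng = np.random.default_rng(0)
n = 100_000
trait = rng.normal(size=n)               # shared underlying factor
grip = trait + 0.3 * rng.normal(size=n)  # measure 1
arm = trait + 0.3 * rng.normal(size=n)   # measure 2

# Overall, the two measures track each other very closely (~0.9 correlation).
print("correlation:", np.corrcoef(grip, arm)[0, 1])

# But the single best scorer on grip is usually not the best on arm:
# the tails come apart at the extreme.
best_grip = np.argmax(grip)
rank_on_arm = (arm > arm[best_grip]).sum() + 1
print("arm-strength rank of the top grip scorer:", rank_on_arm)
```

The same pattern drives the post's examples: rankings that agree almost everywhere in the middle of the distribution can disagree sharply about which single point is most extreme.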
Oct 30, 2016
ssc
18 min · 2,240 words · 141 comments · podcast
Scott Alexander examines how recent AI progress in neural networks might challenge the Bostromian paradigm of AI risk, exploring potential implications for AI goal alignment and motivation systems.
Longer summary: This post discusses how recent advances in AI, particularly in neural networks and deep learning, might affect the Bostromian paradigm of AI risk. Scott Alexander explores two perspectives: the engineer's view that categorization abilities are just tools and not the core of AGI, and the biologist's view that brain-like neural networks might be adaptable to create motivation systems. He suggests that categorization and abstraction might play a crucial role in developing AI moral sense and motivation, potentially leading to AIs that are less likely to be extreme goal-maximizers. The post ends by acknowledging MIRI's work on logical AI safety while suggesting the need for research in other directions as well.
Nov 21, 2014
ssc
42 min · 5,455 words · 727 comments · podcast
Scott Alexander discusses how categories are human constructs that should be flexible when it serves a useful purpose, using examples from biology, astronomy, and transgender identity.
Longer summary: This post discusses the concept of categorization and how it applies to various topics, including the classification of whales as fish, the definition of planets, and transgender identity. Scott argues that categories are not inherently true or false but are tools we use to make sense of the world, and that we should be flexible in our categorizations when it serves a useful purpose. He uses examples from biology, astronomy, geography, and psychiatry to illustrate his points. The post concludes by addressing criticisms of transgender identity and arguing for compassion and practicality in how we treat people with gender dysphoria.