Looking to explore Scott Alexander's work across his 1,500+ blog posts? This unaffiliated fan website lets you sort and search the whole codex. Enjoy!

See also Top Posts and All Tags.

3 posts found
Feb 13, 2024
acx
17 min · 2,299 words · 441 comments · 245 likes · podcast (13 min)
Scott Alexander analyzes the astronomical costs and resources needed for future AI models, sparked by Sam Altman's reported $7 trillion fundraising goal.
Scott Alexander discusses Sam Altman's reported plan to raise $7 trillion for AI development. He breaks down the potential costs of future GPT models, explaining how each generation requires exponentially more computing power, energy infrastructure, and training data, some of which does not yet exist at the required scale. Scott also considers the implications for AI safety and OpenAI's stance on responsible AI development.
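To make the exponential-cost claim above concrete, here is a toy back-of-the-envelope sketch in Python. The baseline cost and the per-generation multiplier are illustrative assumptions for this example, not figures from Scott's post.

```python
# Toy back-of-the-envelope: how training costs explode if each GPT
# generation costs a fixed multiple of the previous one.
# Both constants are assumptions for illustration, not figures from the post.

BASELINE_COST = 1e8    # assumed ~$100M for a GPT-4-class training run
COST_MULTIPLIER = 30   # assumed ~30x cost growth per generation

cost = BASELINE_COST
for generation in range(4, 9):
    print(f"GPT-{generation}-class run: ~${cost:,.0f}")
    cost *= COST_MULTIPLIER
```

Under these assumed numbers, a GPT-7-class run already lands in the trillions of dollars, which gives a feel for why a figure like $7 trillion can be read as a scaling estimate rather than a typo.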
Dec 12, 2023
acx
19 min · 2,624 words · 266 comments · 446 likes · podcast (15 min)
Scott Alexander satirizes Silicon Valley culture through a fictional house party where everyone is obsessed with Sam Altman's firing from OpenAI.
Scott Alexander writes a satirical account of a Bay Area house party where conversations are dominated by speculation about Sam Altman's firing from OpenAI. The narrator encounters various eccentric characters, including startup founders with unusual ideas and people with conspiracy theories about the Altman situation. The story humorously exaggerates Silicon Valley culture, tech industry obsessions, and the tendency for people to form elaborate theories about current events.
Dec 17, 2015
ssc
31 min · 4,288 words · 798 comments
Scott Alexander argues that OpenAI's open-source strategy for AI development could be dangerous, potentially risking human extinction if AI progresses rapidly.
Scott Alexander critiques OpenAI's strategy of making its AI research open-source, arguing it could be dangerous if AI develops rapidly, comparing it to handing everyone the plans for a nuclear weapon. The post weighs the risks and benefits of open AI, discusses the potential for a hard takeoff in AI development, and examines the AI control problem. Scott worries that competition among AI developers may be forcing desperate strategies, potentially risking human extinction.