How do you explore Scott Alexander's work across his 1,500+ blog posts? This unaffiliated fan website lets you sort and run semantic search through the whole codex. Enjoy!

See also Top Posts and All Tags.

Tag: compositionality


2 posts found
Jul 08, 2025
acx
21 min · 3,191 words · comments pending · podcast (16 min)
Scott Alexander shows how he won his 2022 bet about AI image-generation capabilities, tracing the progression from early failures to complete success in 2025, and uses the result to argue against AI skeptics.
Scott Alexander describes the resolution of a bet he made in June 2022 about AI image-generation capabilities. The bet held that by June 2025, AI would master image compositionality and accurately generate specific complex scenes. The post traces the progression of AI image generation from 2022 to 2025: early failures by DALL-E 2, partial successes with Google Imagen and DALL-E 3, and finally GPT-4o's complete success in May-June 2025. Scott uses this to argue against critics who claimed AI was just a 'stochastic parrot' incapable of true understanding, though he acknowledges some remaining limitations with very complex prompts.
Sep 12, 2022
acx
10 min · 1,502 words · 313 comments · 159 likes · podcast (13 min)
Scott Alexander won his three-year bet on compositionality in AI image generation within three months, demonstrating unexpectedly rapid AI progress.
Scott Alexander discusses his recent bet about AI image models' ability to handle compositionality, which he won far sooner than expected. He explains the concept of compositionality in image generation, showing examples of DALL-E 2's limitations, then describes the bet he made, which predicted improvement by 2025. Newer models such as Google's Imagen reached the required level of compositionality within just three months. This rapid progress supports the idea that AI is advancing faster than many predict, and that scaling and ordinary progress can solve even hard problems in AI.