Scott presents the results of his AI art Turing test, in which most people struggled to distinguish AI-generated art from human art; professional artists did slightly better, and participants unexpectedly preferred the AI art.
Longer summary
Scott analyzes the results of his AI art Turing test, in which 11,000 people tried to distinguish human-made from AI-generated art. The median score was 60%, only slightly above the 50% expected from guessing, showing that most people had difficulty identifying AI art. Participants tended to judge images by style rather than by subtle quality differences, incorrectly assuming that traditional styles were human and that digital art was AI-generated. Interestingly, participants slightly preferred the AI art on average, even when they claimed to hate AI art. However, professional artists and AI critics scored better at detection, suggesting they notice subtle flaws that others miss.
Shorter summary