The Normalization of Deviance in AI
This article applies the 'Normalization of Deviance' concept to AI safety. It warns that as companies treat probabilistic and potentially adversarial LLM outputs as reliable without suffering major incidents, they lower security standards and skip human oversight, creating a dangerous cultural bias that mistakes an absence of attacks for robust security.
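The failure mode the summary describes can be made concrete. Below is a minimal Python sketch, not taken from the article, of the human-in-the-loop gate the article argues teams learn to skip: a model-proposed action is shown to a reviewer before anything privileged runs. All names (`ProposedAction`, `call_llm`, `run_with_oversight`) and the canned model output are hypothetical illustrations.

```python
import subprocess
from dataclasses import dataclass


@dataclass
class ProposedAction:
    description: str
    command: list[str]


def call_llm(prompt: str) -> ProposedAction:
    # Stand-in for a real model call. Its output is probabilistic and,
    # if the prompt embeds untrusted content, potentially adversarial.
    return ProposedAction(
        description="list the workspace contents",
        command=["ls", "-la"],
    )


def run_with_oversight(prompt: str) -> None:
    action = call_llm(prompt)
    # Executing the command directly would treat untrusted model output
    # as reliable, the habit the article calls normalized deviance.
    print(f"Model proposes: {action.description}")
    print(f"Command: {' '.join(action.command)}")
    if input("Approve? [y/N] ").strip().lower() != "y":
        print("Rejected by reviewer; nothing executed.")
        return
    # Only a human-approved action reaches the privileged step.
    subprocess.run(action.command, check=True)


if __name__ == "__main__":
    run_with_oversight("clean up the workspace")
```

The point of the gate is that approval stays cheap while incidents stay rare; the article's warning is that this is exactly when teams are tempted to remove it.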