Reflections of a Developer on LLMs in January 2026
A developer reflects on the dual nature of LLMs in 2026, highlighting their transformative potential and the societal risks they create.
Critique of Apple and Google's failure to enforce their own policies against abusive content on Twitter/X, questioning the legitimacy of their app store monopolies.
Analysis of tech CEOs' inaction on deepfake apps, arguing that fear of political retaliation outweighs their sense of moral responsibility.
Author discusses their blog being banned from Lobste.rs for using AI agents to assist in writing, sparking a debate on AI's role in content creation.
Explores how larger platforms often have worse fraud, spam, and support problems than smaller, more curated services.
An analysis of why achieving consensus on platform moderation rules is impossible, using a simple game about park vehicle rules as an example.
Explores strategies and Azure OpenAI features to mitigate inappropriate use and enhance safety in AI chatbot implementations.
Explores five industry patterns for building robust content moderation and fraud detection systems using ML, including human-in-the-loop and data augmentation.
Examines the debate over private tech platforms' rights to censor content versus arguments for treating them as public utilities.