Defining Normal to See Abnormal
Explores the history of data science through early 20th-century rat diet experiments, drawing parallels to modern statistical methods.
A detailed guide on upgrading a Raspberry Pi-based home surveillance server using the new Exaviz Cruiser CM5 carrier board and a DeskPi mini rack case.
Guide to running Claude Code as a VSCode plugin on OpenShift and integrating it with AI models via vLLM for local development.
A technical guide on implementing RPKI route validation for BGP security on MikroTik RouterOS 7.21.
Learn how to use the built-in Trace feature in Azure API Management to debug and troubleshoot API policies step-by-step.
Analysis of the current tech consulting market downturn, exploring causes like AI adoption and economic uncertainty based on a LinkedIn poll.
A daily roundup of tech links covering Azure, VS Code, .NET, AI, web dev, and Windows updates from February 2026.
Explains fencing tokens and generation clocks in .NET to prevent stale leaders from writing in distributed systems, ensuring data consistency.
A monthly roundup of tech links focusing on data engineering, Kafka, AI, and software development, including personal articles and industry news.
Explores .NET in-process synchronization APIs for managing concurrency and thread safety in multi-threaded applications.
Analyzes the ongoing confusion and compliance challenges surrounding Microsoft Entra's 'One Person One License' licensing model.
A deep dive into vSAN File Services architecture, deployment prerequisites, configuration, and troubleshooting for VMware administrators.
A tech architect compares working at a Silicon Valley giant with working at a traditional insurance firm, debunking myths about digital versus traditional companies.
A daily tech reading list covering AI agents, cloud development, software engineering trends, and new tools like Gemini CLI and jQuery v4.
A developer details the process of building evaluation systems for two AI-powered developer tools to measure their real-world effectiveness.
A developer documents using Claude Code AI to refactor the RestAssured.Net library, focusing on improving code structure and maintainability.
A developer shares notable drawbacks of using Claude Code for AI-assisted programming, focusing on hidden issues such as problematic test generation and maintenance challenges.
Introduces Skill Eval, a TypeScript framework for testing and benchmarking AI coding agent skills to ensure reliability and correct behavior.
A guide to writing effective AGENTS.md files for AI coding agents, based on research data and best practices.
Explores Bitter Lesson Engineering, advocating for AI systems that discover solutions autonomously rather than relying on human-coded logic.