The fate of “small” open source
Discusses the future of small open source libraries in the age of LLMs, questioning their relevance when AI can generate specific code.
A developer reflects on the impact of AI-generated code on small, educational open-source libraries like his popular blob-util npm package.
Release of llm-anthropic plugin 0.22 with support for Claude's structured outputs and web search tool integration.
A guide to building a local, privacy-focused RAG system using Java to query internal documents like Confluence without external dependencies.
Explains how MCP servers enable faster development by using LLMs to dynamically read specs, unlike traditional APIs.
Testing various LLMs to generate a POV-Ray script for a pelican riding a bicycle, comparing results and fixing errors.
Explores how LLMs could lower the barrier to creating and adopting new programming languages by handling syntax and core concepts.
The article argues that writing a simple AI agent is the new 'hello world' for AI engineering and a surprisingly educational experience.
An analysis of using LLMs like ChatGPT for academic research, highlighting their utility and inherent risks as research tools.
A response to a blog post on refining AI-generated "vibe code" through manual refactoring and cleanup.
A technical walkthrough of building a real-time conference demo using Kafka, Flink, and LLMs to summarize live audience observations.
A Python developer discusses their new LLM course, ergonomic keyboards, the diskcache project, and coding agents on the Talk Python podcast.
A guide to building and using an Android app that runs AI models, including LLMs, locally on a phone via the MediaPipe API.
An engineer argues that software development is a learning process, not an assembly line, and explains how to use LLMs as brainstorming partners.
A technical comparison of the DGX Spark and Mac Mini M4 Pro for local PyTorch development, with benchmarks covering both LLM inference and fine-tuning performance.
A guide to improving LLM-generated code quality by using contextual rules and agents to enforce production-ready patterns and architecture.
A timeline and analysis of major generative AI model releases and a security framework for AI agents from late 2025.
Explores how GenAI and agentic tools are shifting developer workflows towards rapid prototyping and focusing on output over implementation details.
Argues against using API keys for securing enterprise AI tools like LLMs and agents, highlighting security flaws and recommending better alternatives.