Model Context Protocol (MCP): An Overview
An overview of the Model Context Protocol (MCP), an open standard for connecting AI applications to external tools and data sources.
A guide to building a production-ready, vendor-neutral AI agent using IBM watsonx.ai, MatrixHub, and MCP Gateway, focusing on decoupled architecture.
A tutorial on building a ReAct AI agent from scratch using Google's Gemini 2.5 Pro/Flash and the LangGraph framework for complex reasoning and tool use.
Explores the ethics of LLM training data collection and proposes a technical method for poisoning AI crawlers that ignore nofollow links.
Uses an LLM to label Hacker News titles and trains a ridge regression model for personalized article ranking based on user preferences.
An introduction to reasoning in Large Language Models, covering key concepts like chain-of-thought and methods to improve LLM reasoning abilities.
Explains a technique using AI-generated summaries of SQL queries to improve the accuracy of text-to-SQL systems with LLMs.
Summary of a panel discussion at NVIDIA GTC 2025 on insights and lessons learned from building real-world LLM-powered applications.
Explores how large language models (LLMs) are transforming industrial recommendation systems and search, covering hybrid architectures, data generation, and unified frameworks.
A tutorial on implementing function calling with Google's Gemma 3 27B LLM, showing how to connect it to external tools and APIs.
A clear explanation of the attention mechanism in Large Language Models, focusing on how words derive meaning from context using vector embeddings.
Argues that AI can improve beyond current transformer models by examining biological examples of superior sample efficiency and planning.
A practical guide to implementing function calling with Google's Gemini 2.0 Flash model, enabling LLMs to interact with external tools and APIs.
Explores the concept of 'generality' in AI models, using examples of ML failures and LLM inconsistencies to question how we assess their capabilities.
Explains how to extract logprobs from OpenAI's structured JSON outputs using the structured-logprobs Python library for better LLM confidence insights.
Introducing Physica, a Physics World Model AI that enforces physical laws to prevent errors in AI-generated simulations, moving beyond token fluency.
Explains the LLMs.txt file, a new standard for providing context and metadata to Large Language Models to improve accuracy and reduce hallucinations.
Introducing Tinbox, an LLM-based tool designed to translate sensitive historical documents that standard models often refuse to process.
An analysis of the ethical debate around LLMs, contrasting their use in creative fields with their potential for scientific advancement.
Reflections on the first unit of the Hugging Face Agents course, focusing on the potential and risks of code agents and their evaluation.