17 Side Projects I Built With Claude Code in Two Months
A developer shares how using Claude Code enabled them to build 17 diverse side projects, including TUIs, games, and tools, in just two months.
Explains the difference between an AI agent's inner loop (verifying work within a task) and outer loop (learning across tasks).
A guide to the core principles and systems thinking required for data engineering, beyond just learning specific tools.
A guide to designing reliable, fault-tolerant data pipelines with architectural principles like idempotency, observability, and DAG-based workflows.
Argues that data quality must be enforced at the pipeline's ingestion point, not patched in dashboards, to ensure consistent, reliable data.
Explains idempotent data pipelines, patterns like partition overwrite and MERGE, and how to prevent duplicate data during retries.
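The two patterns named in that summary can be sketched in plain Python. This is a toy illustration, not the article's code: a dict stands in for a partitioned table, and the names (`table`, `load_partition`, `merge_rows`) are made up for the example.

```python
table = {}  # partition key (a date string) -> list of rows

def load_partition(date, rows):
    """Partition overwrite: replace the whole partition, never append.

    Retrying a failed or repeated run leaves the table unchanged,
    which is what makes the load idempotent.
    """
    table[date] = list(rows)

def merge_rows(target, rows, key="id"):
    """MERGE-style upsert: update rows with matching keys, insert the rest."""
    by_key = {r[key]: r for r in target}
    for r in rows:
        by_key[r[key]] = r
    return list(by_key.values())

# A run and an identical retry produce the same partition contents.
load_partition("2024-01-01", [{"id": 1}, {"id": 2}])
load_partition("2024-01-01", [{"id": 1}, {"id": 2}])  # retry, no duplicates
```

In a real warehouse the same semantics come from `INSERT OVERWRITE` on a partition or a SQL `MERGE` keyed on a natural key; the point is that the write is a function of its inputs, not of how many times it ran.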
Explains how to safely evolve data schemas using API-like discipline to prevent breaking downstream systems like dashboards and ML pipelines.
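The "API-like discipline" idea can be made concrete with a tiny compatibility check: treat a schema as a field-to-type mapping and allow only additive changes, rejecting removals and type changes that would break downstream consumers. The function and schema names here are hypothetical.

```python
def is_backward_compatible(old, new):
    """True if every field in `old` survives in `new` with the same type.

    New fields are allowed (additive change); dropping or retyping an
    existing field is a breaking change for downstream readers.
    """
    for field, ftype in old.items():
        if field not in new or new[field] != ftype:
            return False
    return True

v1 = {"id": "int", "email": "string"}
v2 = {"id": "int", "email": "string", "signup_date": "date"}  # additive: OK
v3 = {"id": "int"}  # drops "email": breaks dashboards reading it
```

Schema registries for Avro and Protobuf enforce essentially this rule automatically before a producer is allowed to publish a new version.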
A guide to choosing between batch and streaming data processing models based on actual freshness requirements and cost.
Explains data partitioning and organization strategies to drastically improve query performance in analytical databases.
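The performance win from partitioning comes from pruning: a filter on the partition key only touches matching buckets instead of scanning every row. A minimal sketch, with illustrative names (`partitions`, `event_date`):

```python
from collections import defaultdict

partitions = defaultdict(list)  # event_date -> rows stored together

def insert(row):
    """Write rows into the bucket for their partition key."""
    partitions[row["event_date"]].append(row)

def query_by_date(date):
    """Partition pruning: read one bucket, ignore all the others."""
    return partitions.get(date, [])

insert({"event_date": "2024-01-01", "user": "a"})
insert({"event_date": "2024-01-02", "user": "b"})
```

An analytical engine does the same thing with directory layouts or partition metadata: a `WHERE event_date = '2024-01-01'` predicate lets it skip every other partition's files entirely.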
Explains the importance of automated testing for data pipelines, covering schema validation, data quality checks, and regression testing.
Explains the importance of pipeline observability for data health, covering metrics, logs, and lineage to detect issues beyond simple execution monitoring.
A practical, tool-agnostic checklist of essential best practices for designing, building, and maintaining reliable data engineering pipelines.
A comprehensive guide to data modeling, explaining its meaning, three abstraction levels, techniques, and importance for modern data systems.
Explains the three levels of data modeling (conceptual, logical, physical) and their importance in database design.
Compares Star Schema and Snowflake Schema data models, explaining their structures, trade-offs, and when to use each for optimal data warehousing.
Explores how data modeling principles adapt for modern lakehouse architectures using open formats like Apache Iceberg and the Medallion pattern.
Explains dimensional modeling for analytics, covering facts, dimensions, grains, and table design for query performance.
Explains Slowly Changing Dimensions (SCD) types 1-3 for managing data history in data warehouses, with practical examples.
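The contrast between the SCD types is easy to see in miniature: Type 1 overwrites in place and loses history, while Type 2 expires the current row and appends a new version with validity dates. A sketch under assumed column names (`key`, `current`, `valid_from`, `valid_to`):

```python
def scd1_update(row, changes):
    """Type 1: overwrite attributes in place; no history is kept."""
    row.update(changes)

def scd2_update(rows, key, changes, today):
    """Type 2: close out the current version and append a new one."""
    current = next(r for r in rows if r["key"] == key and r["current"])
    current["current"] = False
    current["valid_to"] = today
    rows.append({**current, **changes,
                 "current": True, "valid_from": today, "valid_to": None})

customers = [{"key": 1, "city": "Oslo", "current": True,
              "valid_from": "2023-01-01", "valid_to": None}]
scd2_update(customers, 1, {"city": "Bergen"}, "2024-06-01")
```

After the update the table holds both versions, so a query can still answer "where did customer 1 live in 2023?" by filtering on the validity window, which is exactly what Type 1 gives up.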
Explains why transactional data models are inefficient for analytics and how to design denormalized, query-optimized models for better performance.
Explains database denormalization: when to flatten data for faster analytics queries and when to avoid it.
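Denormalization in one picture: copy dimension attributes onto each fact row once at load time, so analytical aggregations need no join at query time. A toy sketch with hypothetical table and column names:

```python
# Normalized form: a small dimension table plus narrow fact rows.
products = {101: {"name": "widget", "category": "tools"}}
sales = [{"product_id": 101, "qty": 3},
         {"product_id": 101, "qty": 1}]

def denormalize(facts, dim):
    """Flatten: merge each fact row with its dimension attributes."""
    return [{**f, **dim[f["product_id"]]} for f in facts]

flat = denormalize(sales, products)

# Aggregating by a dimension attribute now requires no join at all.
by_category = {}
for row in flat:
    by_category[row["category"]] = by_category.get(row["category"], 0) + row["qty"]
```

The trade-off the article describes is visible even here: the flat rows repeat `name` and `category`, so an update to the product dimension must be propagated to every copy, which is why denormalization suits read-heavy analytics rather than transactional writes.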