I write about and work on AI engineering, automation, acceleration, and productivity.

What I Learned Building a Multi-Agent Document Analysis System

This is the retrospective for the multi-agent document analysis project. The first posts covered why to use multiple agents, how the specialist agents work, and how the coordinator synthesizes findings. This one covers what worked, what broke, and what I would change. In short: the architecture worked, the coordinator was the most valuable part, and chunking caused the worst failure mode. What worked: the BaseAgent abstraction was enough. I did not need a framework. A simple base class handled the repeated LLM-call logic: model name, system prompt, max tokens, response cleaning, and JSON parsing. ...
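As a sketch of what such a base class might look like, assuming the Anthropic Python SDK (the Messages API usage is real; the model name, default values, and cleaning details are illustrative, not the post's exact code):

```python
import json
from anthropic import Anthropic

class BaseAgent:
    """Shared LLM-call logic: model name, system prompt, token budget, JSON parsing."""

    model = "claude-sonnet-4-20250514"  # placeholder; each agent can override
    system_prompt = ""
    max_tokens = 2000

    def __init__(self, client: Anthropic | None = None):
        # The SDK reads ANTHROPIC_API_KEY from the environment by default.
        self.client = client or Anthropic()

    def run(self, document_text: str) -> dict:
        response = self.client.messages.create(
            model=self.model,
            max_tokens=self.max_tokens,
            system=self.system_prompt,
            messages=[{"role": "user", "content": document_text}],
        )
        return self._parse(response.content[0].text)

    @staticmethod
    def _parse(raw: str) -> dict:
        # Response cleaning: strip markdown fences the model sometimes wraps around JSON.
        cleaned = raw.strip()
        if cleaned.startswith("```"):
            cleaned = cleaned.strip("`").removeprefix("json").strip()
        return json.loads(cleaned)
```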

April 24, 2026 · 7 min · Tyler

Coordinating Multiple LLM Agents: Cross-Domain Synthesis

After building the specialist agents, the output looked impressive. It was not useful enough. The system produced 12 technical findings, 14 risk findings, 10 cost findings, and a set of timeline findings. That is a lot of analysis. It is also a lot to read. The coordinator is the piece that turns those separate findings into something a person can act on. Aggregation is not synthesis: the first version of the coordinator just ran the agents and returned their results. ...
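A minimal sketch of that shift, assuming the specialists return findings as dicts and an Anthropic client is available (the prompt wording and model name are illustrative): instead of concatenating outputs, the coordinator makes one more LLM call over all of them.

```python
import json

SYNTHESIS_PROMPT = """You are a coordinator reviewing findings from four specialist
analyses of the same document. Do not merely merge the lists. Cross-reference them:
note where findings reinforce or contradict each other, and rank the combined issues
by impact. Return a short, prioritized summary a decision-maker can act on."""

def synthesize(client, findings_by_agent: dict[str, list[dict]]) -> str:
    """One extra LLM pass that turns per-agent findings into a cross-domain report."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=2000,
        system=SYNTHESIS_PROMPT,
        messages=[{"role": "user", "content": json.dumps(findings_by_agent, indent=2)}],
    )
    return response.content[0].text
```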

April 23, 2026 · 6 min · Tyler

Building Specialist LLM Agents: Technical, Risk, Cost, and Timeline Analysis

The first post covered why I split document analysis into multiple agents. This one covers how the specialists are actually built. The Python code is not the hard part. The specialist behavior mostly comes from the system prompt, the output schema, and the boundaries around what the agent should ignore. The code is intentionally repetitive; once you've written a couple of agents, it's a breeze. The shared base class: every agent needs the same basic execution logic. ...
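Building on a BaseAgent like the one sketched above, a specialist can be little more than a class body. The prompt below is an illustrative guess at the shape, not the post's actual prompt:

```python
class RiskAnalyzer(BaseAgent):
    system_prompt = """You are a risk analyst reviewing an RFP or contract.

Identify risks only: liability exposure, penalty clauses, ambiguous obligations,
unrealistic commitments. Ignore technical feasibility, cost, and schedule;
other agents cover those.

Return JSON matching this schema exactly:
{"findings": [{"risk": str, "severity": "low"|"medium"|"high", "evidence": str}]}"""
```

Note how one string carries all three levers the post names: the persona, the output schema, and the explicit ignore list.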

April 22, 2026 · 7 min · Tyler

Why Multi-Agent Systems Beat Single Agents for Complex Documents

I built a document analysis system for RFPs and contracts using multiple specialist LLM agents instead of one general-purpose prompt. The architecture is simple: PDF → text extraction → Technical Analyzer → Risk Analyzer → Cost Analyzer → Timeline Analyzer → Coordinator synthesis → final report. The interesting part is not that it calls an LLM. That's easy. The interesting part is how much the output changes when the model is forced to analyze the same document through different lenses before producing a final answer. ...
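In Python, that pipeline is essentially straight-line orchestration. A sketch, with every function and class name hypothetical:

```python
def analyze_document(pdf_path: str) -> str:
    """Extract text, fan out to the four specialists, then synthesize."""
    text = extract_text(pdf_path)  # any PDF-to-text step, e.g. pypdf

    agents = {
        "technical": TechnicalAnalyzer(),
        "risk": RiskAnalyzer(),
        "cost": CostAnalyzer(),
        "timeline": TimelineAnalyzer(),
    }
    # Each specialist reads the same text through its own lens.
    findings = {name: agent.run(text) for name, agent in agents.items()}

    # Coordinator step: turn separate finding lists into one report.
    return synthesize(findings)
```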

April 21, 2026 · 7 min · Tyler

Building Ozark Ridge: Lessons Learned and What I'd Do Differently

This is the final post in the series. The first four covered what I built and how. This one covers what I learned, what I'd do differently, and why this architecture matters beyond the demo. What worked: archetype-based catalog generation scaled cleanly. Writing 1180 product descriptions by hand would have been infeasible. Generating them one by one with Claude would have been slow and inconsistent. The archetype system with variation logic produced realistic, diverse products at scale, with no manual writing and consistent quality across the catalog. ...

April 16, 2026 · 9 min · Tyler

Building the AI Product Assistant: Context Injection, Multi-Turn Chat, and Cross-Product Retrieval

The previous posts focused on search. This one turns to the AI assistant: a floating chat widget that answers product questions, recommends complementary gear, and builds camping loadouts on request. Under the hood, it is a multi-turn conversation system with history, context injection when viewing a product, and dynamic retrieval when the query requires cross-product knowledge. What the assistant does: three core capabilities. Product Q&A: the user is viewing a tent, asks "Is this waterproof?", and the assistant answers from the product description without retrieving anything. ...
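For the context-injection piece, a sketch of the message-assembly step (field names and formatting are illustrative, not the post's code):

```python
def build_messages(history: list[dict], user_msg: str, current_product: dict | None) -> list[dict]:
    """Assemble a multi-turn message list, injecting page context when available."""
    messages = list(history)  # prior turns: [{"role": "user" / "assistant", "content": ...}]
    if current_product:
        # Context injection: the product the user is looking at rides along
        # with the question, so "Is this waterproof?" has a referent.
        user_msg = (
            f"[Currently viewing: {current_product['name']}]\n"
            f"{current_product['description']}\n\nQuestion: {user_msg}"
        )
    messages.append({"role": "user", "content": user_msg})
    return messages
```

The cross-product case would presumably append retrieved documents to the user message the same way before the call.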

April 15, 2026 · 11 min · Tyler

Keyword Search vs Semantic Search: Why Natural Language Queries Need Vector Embeddings

The previous posts covered architecture and data ingestion. This one is about the core value proposition: why semantic search matters and how to demonstrate it. The approach: build both keyword and AI search, run the same queries through each, and document where keyword search fails. The results make the case for semantic search more effectively than any architectural explanation could. What keyword search actually does: Postgres full-text search works by tokenizing text into lexemes (normalized words), removing stop words, and matching query tokens against indexed documents. It's fast, deterministic, and has been reliable for decades. ...
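The keyword side can be a single query. A sketch using psycopg2 against a hypothetical products table (connection string and schema are illustrative):

```python
import psycopg2

conn = psycopg2.connect("dbname=ozark_ridge")  # connection string is illustrative

def keyword_search(query: str, limit: int = 10):
    """Postgres full-text search: normalize to lexemes, match, rank by overlap."""
    sql = """
        SELECT name,
               ts_rank(to_tsvector('english', name || ' ' || description),
                       plainto_tsquery('english', %s)) AS rank
          FROM products
         WHERE to_tsvector('english', name || ' ' || description)
               @@ plainto_tsquery('english', %s)
         ORDER BY rank DESC
         LIMIT %s;
    """
    with conn.cursor() as cur:
        cur.execute(sql, (query, query, limit))
        return cur.fetchall()
```

Because matching is lexeme overlap, a query phrased as "something to keep me dry" finds nothing unless those exact words appear in a description, which is the failure mode the comparison sets out to document.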

April 14, 2026 · 10 min · Tyler

Building the Catalog and Ingestion Pipeline: Archetypes, Embeddings, and ChromaDB

The first post covered architecture. Here the focus shifts to data: how to generate a realistic product catalog at scale, why description quality matters for RAG, and how the ingestion pipeline embeds everything into ChromaDB. The pipeline produced 1180 products with rich descriptions, embedded them in 39 seconds, and returned retrieval results that actually held up. The archetype strategy: writing 1180 product descriptions by hand is infeasible. Having Claude write them one by one is slow and produces inconsistent output. The solution: archetype-based generation. ...
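The ingestion step itself can stay small. A sketch with the chromadb client API (collection name, path, and field names are illustrative; Chroma falls back to its default embedder here, while the real pipeline may use a dedicated embedding model):

```python
import chromadb

client = chromadb.PersistentClient(path="./chroma")  # storage path is illustrative
collection = client.get_or_create_collection(name="products")

def ingest(products: list[dict]) -> None:
    """Embed and store product descriptions plus filterable metadata."""
    collection.add(
        ids=[p["id"] for p in products],
        documents=[p["description"] for p in products],
        metadatas=[{"name": p["name"], "category": p["category"]} for p in products],
    )
```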

April 13, 2026 · 9 min · Tyler

Building AI Search for a Retail Website: The Stack and Why

I built Ozark Ridge, a mock outdoor gear retail site with AI-powered product search and a Rufus-style product assistant. The project exists to demonstrate RAG (Retrieval-Augmented Generation) in a realistic e-commerce context. This is the first post in a series documenting the build; it covers the architecture and stack decisions. Later posts cover the RAG pipeline, the keyword vs semantic search comparison, and building the AI assistant. What it does: two features. ...
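As a preview of the RAG loop the series builds toward, a minimal retrieve-then-generate sketch (assuming a populated Chroma collection and an Anthropic client; all names are illustrative):

```python
def answer(client, collection, question: str) -> str:
    """Minimal RAG: retrieve top matches, then ground the answer in them."""
    hits = collection.query(query_texts=[question], n_results=5)
    context = "\n\n".join(hits["documents"][0])

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=1000,
        system="Answer using only the product context provided.",
        messages=[{"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"}],
    )
    return response.content[0].text
```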

April 12, 2026 · 7 min · Tyler

AI-Powered QA Testing with playwright-cli and GitHub Copilot

Most AI-assisted QA workflows assume you have access to everything: Playwright MCP configured in VS Code, Copilot Vision enabled, the embedded browser panel working. In an enterprise environment, those assumptions often don't hold. Security policies restrict which tools can connect to which services. Features get disabled. The standard setup isn't available. This post documents an approach that works within those constraints. The combination: playwright-cli for browser interaction, GitHub Copilot CLI for the agent loop, and a plain natural language prompt describing what to test. No MCP. No generated test files. No vision model. Just a coding agent running shell commands against a real browser. ...
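To make "a plain natural language prompt describing what to test" concrete, here is the kind of prompt one might hand the agent; the wording, URL, and checks are illustrative, not taken from the post:

```text
Test the checkout flow at http://localhost:3000 using playwright-cli to drive
a real browser. Add any product to the cart, open the cart, and start checkout.
Verify: the cart badge updates, the line-item price matches the product page,
and an empty required field blocks submission with a visible error.
Report each check as pass/fail with what you observed.
```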

April 9, 2026 · 6 min · Tyler