Too Much Control Plane In Your Context

I’ve consulted with several groups, helping them optimize their AI agent workflows. This is part of a series on basic agent architecture, meant to clarify core mental models and help avoid costly design mistakes. With model capabilities going parabolic, attention is turning to the control plane (or “harness”: the orchestration layer between your code and your model) as the next bottleneck. It seems to be a source of low-hanging fruit as well as contention among the model providers, who have very good reasons to push you toward using theirs. ...

Checking In on AI Agent Architecture: Claude Code, Gas Town, and OpenClaw

I’ve been hesitant to chime in on AI agent architectures beyond saying they’re not there yet. The hard takeoff of OpenClaw (an always-on AI assistant with countless integrations out of the box) and Moltbook (a social network where people’s AI assistants chat with each other) created a compelling public spectacle that clearly signals growing momentum behind LLM-powered abstractions. Always wary of getting caught up in hype, I’d like to weigh in again and try to tease out the common threads running through some of the most interesting and successful experiments. In doing so, I’ll suggest a potential way forward that I’d like to explore. ...

Superfluous Abstractions: The Landscape of LLM Tooling

Judging by the activity of marketing hypemen and growth hackers, it seems “Agents” are a Big Important Thing. I can’t scroll two posts on social media without being offered a course or a low-code service. As usual, the grifters are trailing the edge by a few years. If you try to pin down a definition, it seems to boil down to “a distributed, fault-tolerant software system… with LLM calls”. So that means we just need to throw Claude on the BEAM. Easy enough! For those willing to read some Elixir code, I direct you to a pair of excellent blog posts from 2023: ...