bleugreen lab

a flock of claudes

1. Linear Issues

I spent years building scaffolding for coding agents. Custom prompts, state machines, retry logic, context management: most of it quickly became overhead that the models buckled under.

Then they got trained on tool-use traces, and suddenly the patterns were just there: when to call functions, how to recover from errors, when to retry versus give up. The same infrastructure that used to weigh them down became force multiplication.

Claude Code with Opus 4 was where it clicked. You give it filesystem access, bash, and git, and it works. It builds, tests, debugs, and commits like a developer who never sleeps and occasionally needs to be told to calm down.

Issue Tracking

The first limitation was context. Every conversation started from scratch: “This is a Next.js project, use Bun to build, etc.” CLAUDE.md files helped with project info, but I was still manually narrating session context: “So we were working on this bug, looked like we fixed it, now it’s back.”

So I added the Linear MCP server.

Linear is a project management platform for software teams. Each entity—issues, projects, cycles, teams, users—has defined relationships accessible through a GraphQL API. GitHub integration syncs pull requests with issues automatically.

This structure works well for LLM orchestration. An agent can query the API to understand a team’s work, from project roadmaps down to task dependencies. The MCP server gives Claude tools to view and edit issues directly.

I created a Linear team for my project and added workflow rules to CLAUDE.md:

- Every task starts with a Linear issue (create one if not provided)
- Track all progress via comments on the Linear issue
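In CLAUDE.md those rules are just a few lines of markdown. A sketch of what that section might look like (wording is illustrative, not the exact file):

```markdown
## Workflow
- Every task starts with a Linear issue. If none is given, create one
  before writing code.
- Post progress, insights, and decisions as comments on the issue.
- When the task is done, leave a condensed summary comment on the issue.
```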

Instead of typing tasks into the CLI, I document them in Linear with logs, screenshots, and context. Then I open the CLI with “Let’s get started on GUI-10.”

The difference was immediate. Code became more focused. Claude stopped veering into unrelated refactors.

As each issue progresses, Claude leaves comments documenting insights and decisions. Completed issues come with a condensed report of the entire decision-making process. And beyond just external memory, I think operating through Linear subtly contextualizes that this is professional work on a real team—not a sandbox experiment.

Once I settled in, my explanations shifted from ephemeral CLI messages to durable issues with reference IDs:

Hey, that bug from GUI-12 is happening again.

One sentence gives the model everything: the original problem, attempted solutions, related changes. No retyping from memory.

Legibility

Context-providing tools should deliver relevant information and nothing else. But many tools output excessive structure and require awkward interactions to use.

It might seem strange that we need to make machine structures human-readable before feeding them back to a machine. But this matters more than it seems.

The official Linear MCP server implements their GraphQL API faithfully, which means it refers to everything—issues, comments, labels, states—via 32-character hex UUIDs.

To move TEST-123 into “Planning,” Claude has to do this:

find_issue('TEST-123')
> UUID      'fcb357b6-7719-4550-b6e0-8fa5d8554d69'
  team_uuid 'a210c784-b5d2-4dad-9dab-2ddd404b831e'

find_status('Planning', team='a210c784-b5d2-4dad-9dab-2ddd404b831e')
> 'de936f4d-5d04-41bd-8063-c5d85a319db6'

update_issue('fcb357b6-7719-4550-b6e0-8fa5d8554d69', 'de936f4d-5d04-41bd-8063-c5d85a319db6')

Try reading that out loud. Imagine keeping track of which UUID refers to what. It’s a waste of the model’s cognitive effort to spend tokens on illegible identifiers—and it frequently makes errors with these strings anyway. It recovers, but each failed tool call adds failure history to the context, which can perpetuate more failures.

I want agents clustering around “efficient team building solutions” in vector space, not “robot struggling with basic API.”

And the UUIDs are only half of it. The Linear MCP has a 65KB limit on API responses. Once an issue gets long enough (and with Claude leaving detailed progress comments, they get long fast), anything beyond that threshold becomes invisible.

Hard to get information. And once you get it, you might not have all of it.

So I built my own.

A Better Bridge

My Linear MCP server gives Claude direct, legible access to Linear. Everything uses readable identifiers: issue keys, state names, labels. The server resolves UUIDs internally, and no data gets truncated.

To move TEST-123 to “Planning”:

update_issue('TEST-123', 'Planning')
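The resolution layer can be sketched in a few lines. The dicts below stand in for real Linear GraphQL lookups (hypothetical data, reusing the UUIDs from the example above); the point is that UUIDs never reach the model:

```python
# Sketch: the model-facing tool takes readable names, and the server
# maps them to UUIDs internally. The dicts simulate GraphQL lookups.

ISSUES = {
    "TEST-123": {
        "uuid": "fcb357b6-7719-4550-b6e0-8fa5d8554d69",
        "team_uuid": "a210c784-b5d2-4dad-9dab-2ddd404b831e",
    },
}
STATES = {
    ("a210c784-b5d2-4dad-9dab-2ddd404b831e", "Planning"):
        "de936f4d-5d04-41bd-8063-c5d85a319db6",
}

def update_issue(identifier: str, state_name: str) -> dict:
    """Readable args in; UUID plumbing stays server-side."""
    issue = ISSUES[identifier]
    state_uuid = STATES[(issue["team_uuid"], state_name)]
    # A real server would now send the update mutation to Linear.
    return {"issueId": issue["uuid"], "stateId": state_uuid}
```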

When Claude requests an issue, it gets clean markdown, no JSON structure. Issues appear as readable documents with threaded comments. Images attached to Linear are downloaded and cached locally.
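The markdown rendering is the same idea applied to output. A minimal sketch, with illustrative field names rather than Linear's actual schema:

```python
def render_issue(issue: dict) -> str:
    """Flatten an issue and its comments into a readable markdown doc,
    instead of handing the model raw nested JSON."""
    lines = [
        f"# {issue['identifier']}: {issue['title']}",
        f"State: {issue['state']}",
        "",
        issue["description"],
        "",
    ]
    for c in issue.get("comments", []):
        lines.append(f"## {c['author']} ({c['at']})")
        lines.append(c["body"])
        lines.append("")
    return "\n".join(lines).rstrip() + "\n"
```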

In practice, Claude treats Linear as a natural extension of its built-in todo system. The friction is gone.

2. Scope & Concurrency

After a few days with the Linear MCP, I had a routine:

user: Let's work on GUI-32

claude: [gets issue via mcp]
        [reads files]
        [edits files]
        [build/test/commit/pr]

This worked for isolated tasks. As changes got more complex, two problems emerged:

  1. For cross-file work, Claude would max out its context and compact midway through, losing track of details. I’d find half-finished refactors with dangling imports.

  2. I was finding issues faster than a single agent could fix them. Only one Claude could safely edit the codebase at a time.

Two-Step Workflow

The fix was splitting each task into planning and implementation.

Planning: Claude operates read-only on main. A slash command tells it to identify critical files, document existing patterns, and build a focused roadmap. Everything gets compressed into Linear comments.
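Claude Code slash commands are markdown files under `.claude/commands/`. A hypothetical version of the planning command (a sketch, not the actual prompt) might live in `.claude/commands/plan.md`:

```markdown
Plan the Linear issue $ARGUMENTS. You are read-only on main: do not edit files.
1. Read the issue and its comments via the Linear MCP.
2. Identify the critical files and document the existing patterns they use.
3. Write a focused implementation roadmap as a comment on the issue.
```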

Implementation: A script spins up a fresh Claude with a dedicated branch and worktree, the condensed plan from Linear, full edit privileges, and instructions to build, test, and PR.

This killed both problems. Multiple agents work in parallel on different branches. And each implementation agent starts with pre-digested context instead of burning tokens on exploration.

The planning agent becomes a context compiler—reading widely, summarizing tightly. The implementation agent gets to spend its entire context budget on actually building.

$ claude "/plan TEAM-123"
...
[plan complete]

$ ./cimplement.sh TEAM-123
...
[pr ready]
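The implementation dispatch can be sketched in Python. This assumes a worktree-per-issue layout under `../worktrees/` and the `claude` CLI's non-interactive `-p` flag; the paths and prompt wording are illustrative:

```python
import subprocess

def implement(issue_id: str, base: str = "main", dry_run: bool = True):
    """Sketch of an implementation dispatcher: isolated branch +
    worktree, then a fresh agent with the plan from Linear."""
    branch = f"feature/{issue_id.lower()}"
    worktree = f"../worktrees/{issue_id.lower()}"
    prompt = (f"Implement {issue_id} following the plan in its Linear "
              f"comments. Build, test, commit, and open a PR.")
    cmds = [
        ["git", "worktree", "add", "-b", branch, worktree, base],
        ["claude", "-p", prompt],  # run inside the worktree in practice
    ]
    if not dry_run:
        subprocess.run(cmds[0], check=True)
        subprocess.run(cmds[1], cwd=worktree, check=True)
    return cmds
```

Because each issue gets its own branch and worktree, several of these can run at once without stepping on each other's files.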

Automation

Typing commands got repetitive, so I automated it.

I spun up n8n in Docker and wrote a FastAPI server with a /dispatch/[task]/[issue_id] endpoint. When Linear issues move to “Plan” or “Build” state, n8n catches the webhook and dispatches the appropriate agent.
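The routing behind that endpoint is small. A sketch of the state-to-task mapping, with a hypothetical webhook payload shape (Linear's actual payload differs in detail):

```python
from typing import Optional

# Which Linear workflow state triggers which agent task.
STATE_TO_TASK = {"Plan": "plan", "Build": "implement"}

def route_webhook(payload: dict) -> Optional[str]:
    """Map a Linear state-change webhook to a dispatch path, or None
    if the new state doesn't trigger an agent."""
    state = payload.get("data", {}).get("state", {}).get("name")
    task = STATE_TO_TASK.get(state)
    if task is None:
        return None
    issue = payload["data"]["identifier"]
    return f"/dispatch/{task}/{issue}"
```

n8n catches the webhook, calls this route, and the FastAPI server spawns the corresponding agent.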

The tradeoff: visibility dropped to Linear comments, usually end-of-work summaries. Harder to catch misunderstandings early.