<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Ai-Coding-Tools on Damian Galarza | Software Engineering &amp; AI Consulting</title><link>https://www.damiangalarza.com/tags/ai-coding-tools/</link><description>Recent posts from Damian Galarza | Software Engineering &amp; AI Consulting</description><generator>Hugo</generator><language>en-us</language><managingEditor>Damian Galarza</managingEditor><atom:link href="https://www.damiangalarza.com/tags/ai-coding-tools/feed.xml" rel="self" type="application/rss+xml"/><item><title>Claude Opus 4.7 + Claude Code: 7 Practical Tips for Maximizing Extended Context</title><link>https://www.damiangalarza.com/posts/2026-04-30-claude-opus-4-7-claude-code-tips-extended-context/</link><pubDate>Thu, 30 Apr 2026 00:00:00 -0400</pubDate><author>Damian Galarza</author><guid>https://www.damiangalarza.com/posts/2026-04-30-claude-opus-4-7-claude-code-tips-extended-context/</guid><description>Practical tips for getting the most from Claude Opus 4.7's 1M context window in Claude Code. Effort levels, proactive compaction, subagent delegation, and session management from daily production use.</description><content:encoded><![CDATA[<p>Claude Opus 4.7 shipped with a 1M token context window. That&rsquo;s five times what Sonnet 4.5 offers. But that doesn&rsquo;t mean context management stops mattering.</p>
<p>The lost-in-the-middle problem doesn&rsquo;t disappear at 1M tokens. Content in the center of the window still gets less attention than content at the beginning and end. Opus 4.7 uses a new tokenizer that improves model performance, but it also means files you read consume context in subtly different ratios than before. Anthropic&rsquo;s docs note the new tokenizer can use up to 35% more tokens per equivalent input compared to previous models. And adaptive thinking, now the only supported thinking mode in 4.7 (fixed budgets are removed), consumes context dynamically. The model thinks longer on harder problems and shorter on easy ones. That thinking counts against your window.</p>
<p>If you&rsquo;re coming from my earlier post on <a href="/posts/2025-12-08-understanding-claude-code-context-window/">Understanding Claude Code&rsquo;s Context Window</a>, everything there still applies. The fundamentals haven&rsquo;t changed. What has changed is the ceiling, and the set of controls available to you.</p>
<p>Here are seven workflow adjustments I&rsquo;ve made since Opus 4.7 dropped. Each one addresses a specific constraint I hit in daily production use.</p>
<h2 id="1-front-load-context-in-your-first-turn">1. Front-Load Context in Your First Turn</h2>
<p>One of the big changes from Opus 4.6 to Opus 4.7 is that it no longer reads between the lines. Opus 4.6 was better at taking a vague prompt and &ldquo;figuring it out&rdquo;; Opus 4.7 follows instructions more literally. You need to provide good context to the model to achieve good results. The first message in the session anchors everything that follows.</p>
<p>Structure your first turn to include three things: what you want and why, which files or areas of the codebase are relevant, and what &ldquo;done&rdquo; looks like.</p>
<p>Here&rsquo;s an example. Instead of this:</p>
<pre tabindex="0"><code>Add rate limiting to the API
</code></pre><p>Try this:</p>
<pre tabindex="0"><code>Add rate limiting to the webhook ingestion endpoint in
packages/gateway/src/routes/webhooks.ts. We&#39;re getting
hammered by a misbehaving integration that sends duplicate
events. Use the existing Redis connection in src/lib/redis.ts.
Rate limit by client IP, 100 requests per minute. Add tests
in __tests__/webhooks.test.ts. Don&#39;t change the event
processing logic in src/lib/event-handler.ts.
</code></pre><p>The second version tells Opus 4.7 exactly what to touch, why, and what to leave alone. You define the &ldquo;what&rdquo; and the constraints. Let the model propose the &ldquo;how.&rdquo;</p>
<p>One thing to watch for: don&rsquo;t turn your first message into a specification document. If you find yourself writing more than a paragraph or two, you&rsquo;re probably trying to control implementation details that the model should decide. Name the files, the constraints, and the definition of done. Stop there.</p>
<h2 id="2-switch-effort-levels-mid-session">2. Switch Effort Levels Mid-Session</h2>
<p>Thinking tokens count against your context window. A single <code>xhigh</code> response on a complex problem can use significantly more tokens than the same question at <code>high</code>. Over the course of a session, this adds up fast.</p>
<p>Opus 4.7 introduced <code>xhigh</code> effort and replaced the old fixed thinking budgets with adaptive thinking. At <code>xhigh</code>, the model almost always engages deep reasoning on complex work and skips thinking on simpler tasks. That&rsquo;s useful for architecture decisions, complex debugging, and multi-file refactors. It&rsquo;s overkill for renaming a variable across twenty files.</p>
<p>Here&rsquo;s how I handle it. I start sessions at <code>xhigh</code> for the initial planning and implementation work. When I shift to mechanical tasks, I drop the effort level:</p>
<pre tabindex="0"><code>/effort high
</code></pre><p>Rename the files, run the migration, update the imports. Then when I need deep analysis again:</p>
<pre tabindex="0"><code>/effort xhigh
</code></pre><p>In practice: you spend the first part of a session at <code>xhigh</code> implementing a feature, then need to update some test fixtures and rename a few constants. Drop to <code>high</code> or even <code>medium</code> for that work. When you&rsquo;re ready to debug a failing integration test, go back to <code>xhigh</code>.</p>
<p>The gotcha here is context switching cost. Don&rsquo;t toggle effort every other message. Batch your mechanical tasks together and run them at a lower effort level in one block. Then switch back for the next piece of deep work.</p>
<h2 id="3-compact-at-60-not-when-you-see-a-warning">3. Compact at 60%, Not When You See a Warning</h2>
<p>Autocompact triggers when your context window is nearly full. By the time that happens with a 1M window, you&rsquo;ve been running with degraded output quality for a while. The lost-in-the-middle effect doesn&rsquo;t wait for you to run out of room. It starts affecting responses well before you hit the ceiling.</p>
<p>My rule of thumb: check <code>/context</code> periodically and compact when you hit around 60%. That sounds early, but consider the flip side: at 60% you still have 400K tokens of headroom, twice the entire Sonnet 4.5 window, and output quality starts degrading well before you use it up.</p>
<p>Here&rsquo;s what <code>/context</code> output looks like in a session approaching that threshold:</p>
<pre tabindex="0"><code>Context Usage
⛁ ⛀ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁   claude-opus-4-7 · 610k/1000k tokens (61%)
</code></pre><p>After a proactive compact:</p>
<pre tabindex="0"><code>Context Usage
⛁ ⛀ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁   claude-opus-4-7 · 85k/1000k tokens (8.5%)
</code></pre><p>That&rsquo;s a fresh start with all the important decisions preserved. Much better than letting autocompact fire at capacity and losing coherence.</p>
<p>The trade-off with early compaction is that you lose conversational nuance. Specific phrasings, detailed file contents, and turn-by-turn reasoning all get compressed into a summary. This is why Tip 4 exists.</p>
<h2 id="4-steer-your-compaction">4. Steer Your Compaction</h2>
<p>Running <code>/compact</code> without guidance lets the model decide what to keep and what to drop. This works reasonably well for short sessions, but in a long session with multiple decisions, the model often drops specifics that matter for the next phase of work.</p>
<p>Always pass steering instructions when you compact. Name the topics and the decisions you need preserved.</p>
<p>Here are three examples from real sessions:</p>
<pre tabindex="0"><code>/compact Preserve the auth refactor decisions: we chose
JWT with rotating refresh tokens over session cookies,
the token service is in src/lib/auth/tokens.ts, and
the migration adds a refresh_tokens table.
</code></pre><pre tabindex="0"><code>/compact Keep the schema changes we made to the proposals
table (added status enum, soft delete columns, and the
client_id foreign key). Preserve the repo pattern decision
from packages/shared/src/db/repos/proposals.ts.
</code></pre><pre tabindex="0"><code>/compact We&#39;re moving to phase 2 of the API implementation.
Preserve the route structure decisions (REST for CRUD,
webhooks for async events) and the middleware chain order.
Drop the debugging of the TypeScript config issues.
</code></pre><p>Keep your steering to two or three sentences. Name the topics, not every detail. The model will fill in the specifics from the conversation history. You are giving it a priority list, not writing the summary yourself.</p>
<h2 id="5-use-subagents-for-context-isolation">5. Use Subagents for Context Isolation</h2>
<p>I covered subagents in detail in my <a href="/posts/2025-12-08-understanding-claude-code-context-window/">context window post</a>, but Opus 4.7 shifts the default behavior. In my experience, Opus 4.7 spawns fewer subagents on its own compared to earlier models (Anthropic&rsquo;s release notes confirm this as a deliberate behavior change). It&rsquo;s more inclined to do work inline, which means exploration output that used to be isolated now accumulates in your main context.</p>
<p>That&rsquo;s fine for focused tasks. It becomes a problem when you need to explore a large area of the codebase or review a significant diff. The fix: explicitly request subagent delegation.</p>
<p>The key is scoping what comes back. Instead of:</p>
<pre tabindex="0"><code>Review the changes on this branch
</code></pre><p>Try:</p>
<pre tabindex="0"><code>Have a subagent review the changes on this branch against
main. Report back: any bugs, any missing test coverage,
and any patterns that don&#39;t match our existing conventions.
Don&#39;t include the full diff in the report.
</code></pre><p>Good candidates for subagent delegation:</p>
<ul>
<li><strong>Code reviews:</strong> The subagent reads every changed file, but your main context only gets the summary.</li>
<li><strong>Codebase exploration:</strong> &ldquo;Have a subagent map out how the notification system works across packages/gateway and packages/agents.&rdquo;</li>
<li><strong>Test analysis:</strong> &ldquo;Spawn a subagent to check which tests cover the payment flow and identify gaps.&rdquo;</li>
<li><strong>Pattern audits:</strong> &ldquo;Use a subagent to find all places we handle errors in route handlers and check for consistency.&rdquo;</li>
</ul>
<p>The gotcha with subagents is that they don&rsquo;t share your conversation history. If you made a decision earlier in the session that affects how the subagent should evaluate something, include that decision in the delegation prompt. The subagent starts fresh.</p>
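<p>For example (the error-handling convention and paths here are illustrative):</p>
<pre tabindex="0"><code>Have a subagent audit error handling in packages/gateway/src/routes.
Context it needs: earlier in this session we decided all route errors
flow through our AppError wrapper instead of raw throws. Flag any
handler that doesn&#39;t follow that pattern. Report file paths and a
one-line description per finding, not full file contents.
</code></pre><p>The second sentence is the part most people skip. Without it, the subagent evaluates against generic conventions instead of the decision you already made.</p>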
<h2 id="6-use-rewind-to-recover-from-failed-approaches">6. Use Rewind to Recover from Failed Approaches</h2>
<p>Every failed approach leaves artifacts in your context: the wrong implementation, the correction, the explanation of why it was wrong. With Opus 4.7&rsquo;s literal instruction following, this creates a real problem. The model may anchor on parts of a failed attempt even after you have corrected course, because that failed code is still in the conversation history.</p>
<p>The <code>/rewind</code> command (or double-tap Escape) rolls back to a previous point in the conversation. This removes the failed approach from context entirely, as if it never happened.</p>
<p>Here&rsquo;s when to use rewind versus inline correction:</p>
<p><strong>Rewind</strong> when the approach is fundamentally wrong. You asked for a webhook handler and got a giant switch statement, but your codebase uses an event routing pattern. Correcting inline means the model has both patterns in context and may blend them.</p>
<p><strong>Correct inline</strong> when the details need adjustment. The approach is right but a method name is wrong, or it missed an edge case. The cost of the correction in context is low, and the model benefits from seeing the refinement.</p>
<p>A practical example: I asked Claude to implement a notification dispatch system. The first attempt built a synchronous pipeline. My codebase uses BullMQ for async job processing. Rather than explaining why synchronous was wrong and asking it to redo the work, which would leave both approaches in context, I rewound and rephrased:</p>
<pre tabindex="0"><code>Implement notification dispatch using our existing BullMQ
job infrastructure in packages/agents/src/lib/queue.ts.
Each notification type gets its own job processor.
Follow the pattern in the heartbeat-runner for job setup.
</code></pre><p>Clean context. Clear direction. No conflicting signals.</p>
<p>One warning: rewind is destructive. If the failed approach contained useful insights (it identified the right files to modify, or surfaced a constraint you hadn&rsquo;t considered), note those before rewinding. You can include them in your rephrased prompt.</p>
<h2 id="7-know-when-to-clear-compact-or-continue">7. Know When to Clear, Compact, or Continue</h2>
<p>Quality degrades gradually in long sessions. You won&rsquo;t see a cliff. Responses get slightly less precise, slightly more generic, slightly more likely to miss constraints you established earlier. A 1M window means sessions can run much longer, which makes the decision of when to stop harder, not easier.</p>
<p>Here&rsquo;s the decision framework I use:</p>
<p><strong>Continue</strong> when you&rsquo;re mid-task, below 60% context usage, and working on a single coherent thread. The model has strong recall of recent decisions and the work is flowing.</p>
<p><strong>Compact</strong> when you&rsquo;ve finished a phase and are starting the next one. You need the architectural decisions but not the turn-by-turn implementation details. This is where Tip 4&rsquo;s steering instructions matter most.</p>
<p><strong>Clear</strong> when the next task is unrelated to what you&rsquo;ve been doing. Also clear when the model starts repeating itself, when you&rsquo;ve already compacted multiple times in the session, or when you&rsquo;ve persisted your plan externally (in a TODO file, a Linear issue, or a CLAUDE.md update).</p>
<p><strong>Start a new session</strong> when you need different MCP servers, when you&rsquo;re switching to a different branch, or when you&rsquo;re doing parallel worktree work. Each worktree should get its own session. I covered why in <a href="/posts/2026-03-10-extending-claude-code-worktrees-for-true-database-isolation/">Extending Claude Code with Worktrees for True Database Isolation</a>.</p>
<p>The full session lifecycle follows a natural arc. Start with a strong first prompt (Tip 1) at <code>xhigh</code> effort (Tip 2). During the working phase, delegate exploration to subagents (Tip 5) and rewind failed approaches (Tip 6). When you hit around 60% context, compact proactively (Tip 3) with steering instructions (Tip 4). Then decide whether to continue, clear, or start fresh (Tip 7).</p>
<pre tabindex="0"><code>Session Start
  ├── Tip 1: Front-load context in first turn
  ├── Tip 2: xhigh for deep work, high/medium for mechanical
  │
  │   [Working...]
  │
  ├── Tip 5: Delegate exploration to subagents
  ├── Tip 6: Rewind failed approaches
  │
  │   [~60% context used]
  │
  ├── Tip 3: Proactive /compact
  ├── Tip 4: Steer the compaction
  │
  │   [Continue or...]
  │
  └── Tip 7: Clear / New session when needed
</code></pre><h2 id="the-mental-model">The Mental Model</h2>
<p>The 1M context window isn&rsquo;t five times more room. It&rsquo;s five times more rope.</p>
<p>With a 200K window, context pressure forced discipline. You had to be deliberate about what went into the window because you would run out. With 1M tokens, poor habits go unnoticed much longer before the consequences show up. That makes discipline harder, not easier.</p>
<p>The one principle behind all seven tips: active context management beats passive accumulation. Front-load your intent. Control your effort levels. Compact before you need to. Steer the compaction. Isolate expensive exploration. Remove dead ends. Know when to stop.</p>
<p>These aren&rsquo;t theoretical suggestions. They&rsquo;re the adjustments I&rsquo;ve made in my own workflow over the past week of daily Opus 4.7 usage. The model rewards precision and punishes ambiguity. Give it clear context, and it delivers.</p>
<blockquote>
<p>If this post was the explanation, the cheat sheet is the reference.
Two sides: token costs for common MCPs on one, the <code>/clear</code> /
<code>/compact</code> / subagent decision tree on the other.</p>
<p><a href="/context-window-cheat-sheet/">Get the Context Window Cheat Sheet →</a></p></blockquote>
<h2 id="further-reading">Further Reading</h2>
<ul>
<li><a href="/posts/2025-12-08-understanding-claude-code-context-window/">Understanding Claude Code&rsquo;s Context Window</a></li>
<li><a href="/posts/2025-11-25-how-i-use-claude-code/">How I Use Claude Code: My Complete Development Workflow</a></li>
<li><a href="/posts/2026-03-10-extending-claude-code-worktrees-for-true-database-isolation/">Extending Claude Code with Worktrees for True Database Isolation</a></li>
<li><a href="https://claude.com/blog/best-practices-for-using-claude-opus-4-7-with-claude-code">Anthropic: Best practices for using Claude Opus 4.7 with Claude Code</a></li>
</ul>
]]></content:encoded></item><item><title>Four Dimensions of Agent-Ready Codebase Design</title><link>https://www.damiangalarza.com/posts/2026-03-25-four-patterns-that-separate-agent-ready-codebases/</link><pubDate>Wed, 25 Mar 2026 00:00:00 -0400</pubDate><author>Damian Galarza</author><guid>https://www.damiangalarza.com/posts/2026-03-25-four-patterns-that-separate-agent-ready-codebases/</guid><description>AI agents produce better output when the codebase is ready for them. Here are the four dimensions of codebase readiness that account for most of the gap.</description><content:encoded><![CDATA[<p>When an AI agent rewrites a file and the result doesn&rsquo;t match your conventions, the first move is usually to adjust the prompt. Try different instructions. Add more context to the message. Maybe switch models.</p>
<p>The model is rarely the bottleneck. The codebase is.</p>
<p>The same model, pointed at a codebase with strong tests, clear architecture, and good documentation, produces remarkably consistent output. Point it at a codebase with weak coverage, no architecture docs, and no linting, and you get drift. Not because the model is less capable, but because it has less to work with.</p>
<p>I built the <a href="/codebase-readiness/">Codebase Readiness Assessment</a> to make this measurable. It scores your repo across eight dimensions on a 0-100 scale. But you don&rsquo;t need to run the assessment to understand what separates high-scoring codebases from low-scoring ones. Four dimensions account for most of the gap.</p>
<h2 id="test-foundation">Test Foundation</h2>
<p>Test foundation carries the most weight in the assessment (25%) because it&rsquo;s the single biggest lever for agent output quality.</p>
<h3 id="what-a-low-score-looks-like">What a low score looks like</h3>
<p>An agent makes a change. There are no tests covering that area, so it moves on. The change compiles, maybe even runs, but it broke an assumption three modules away. Nobody finds out until a human reviews the PR, or worse, until production.</p>
<p>I&rsquo;ve seen this repeatedly: teams with 30-40% test coverage ask an agent to refactor a service object. The agent produces clean code that looks right. But there&rsquo;s no spec for the edge case where a nil association triggers a downstream error. The agent had no way to catch it because there&rsquo;s no test to fail.</p>
<p>The other failure mode is slow tests. If your suite takes 20 minutes, the agent can&rsquo;t iterate. It makes a change, waits, discovers the failure, tries again, waits again. In a fast suite, that feedback cycle takes seconds. In a slow one, the agent burns time and money waiting for results.</p>
<h3 id="what-a-high-score-looks-like">What a high score looks like</h3>
<p>Codebases that score well here share a few characteristics:</p>
<ul>
<li><strong>Coverage above 70% on critical paths.</strong> Not 100% everywhere, but thorough coverage on the code that matters: domain logic, service objects, API endpoints. The agent can make changes and get immediate confirmation that nothing broke.</li>
<li><strong>Suite runs in under 5 minutes.</strong> Fast enough that the agent can run tests after every meaningful change, not just at the end.</li>
<li><strong>Deterministic results.</strong> No flaky tests. When the suite says green, it means green. Agents can&rsquo;t distinguish between a flaky failure and a real one, so flaky tests teach agents to ignore failures.</li>
</ul>
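<p>Time-dependent assertions are one of the most common sources of flakiness. In RSpec, ActiveSupport&rsquo;s time helpers remove them. A sketch (the <code>invoice</code> model and factory are hypothetical):</p>
<pre tabindex="0"><code># Flaky: result depends on the day the suite runs
expect(invoice.due_date).to eq(Date.today + 30)

# Deterministic: freeze the clock with ActiveSupport&#39;s travel_to
it &#34;sets the due date 30 days out&#34; do
  travel_to Time.zone.local(2026, 1, 15) do
    invoice = create(:invoice)
    expect(invoice.due_date).to eq(Date.new(2026, 2, 14))
  end
end
</code></pre><p>The same principle applies to ordering, random data, and external calls: pin down anything nondeterministic so a red suite always means a real failure.</p>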
<h3 id="dont-stop-at-unit-tests">Don&rsquo;t stop at unit tests</h3>
<p>Unit tests on service objects and models are the foundation, but they only verify isolated behavior. An agent that passes all unit tests can still break a user-facing workflow that spans multiple components.</p>
<p>End-to-end tests give agents confidence across entire flows. A system spec that signs a user in, submits a form, and checks the result tells the agent whether the <em>feature</em> works, not just whether a method returns the right value. This is especially valuable when agents make changes that touch controllers, views, and services in the same PR.</p>
<p>Here&rsquo;s a simplified system spec from one of my Rails projects. It covers the core user journey: signing in and submitting a video idea for validation.</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-ruby" data-lang="ruby"><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic"># spec/system/idea_submission_spec.rb</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#f9e2af">RSpec</span><span style="color:#89dceb;font-weight:bold">.</span>describe <span style="color:#a6e3a1">&#34;Idea submission&#34;</span> <span style="color:#cba6f7">do</span>
</span></span><span style="display:flex;"><span>  it <span style="color:#a6e3a1">&#34;allows a signed-in user to submit a video idea&#34;</span> <span style="color:#cba6f7">do</span>
</span></span><span style="display:flex;"><span>    user <span style="color:#89dceb;font-weight:bold">=</span> create(<span style="color:#a6e3a1">:user</span>)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    sign_in_as(user, <span style="color:#a6e3a1">path</span>: new_idea_path)
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    <span style="color:#89dceb">select</span> user<span style="color:#89dceb;font-weight:bold">.</span>channels<span style="color:#89dceb;font-weight:bold">.</span>first<span style="color:#89dceb;font-weight:bold">.</span>name, <span style="color:#a6e3a1">from</span>: <span style="color:#a6e3a1">&#34;Channel&#34;</span>
</span></span><span style="display:flex;"><span>    fill_in <span style="color:#a6e3a1">&#34;Title&#34;</span>, <span style="color:#a6e3a1">with</span>: <span style="color:#a6e3a1">&#34;Building a Rails AI Agent from Scratch&#34;</span>
</span></span><span style="display:flex;"><span>    fill_in <span style="color:#a6e3a1">&#34;Description&#34;</span>, <span style="color:#a6e3a1">with</span>: <span style="color:#a6e3a1">&#34;Step-by-step tutorial on building an AI agent&#34;</span>
</span></span><span style="display:flex;"><span>    fill_in <span style="color:#a6e3a1">&#34;Category&#34;</span>, <span style="color:#a6e3a1">with</span>: <span style="color:#a6e3a1">&#34;AI Coding&#34;</span>
</span></span><span style="display:flex;"><span>    click_button <span style="color:#a6e3a1">&#34;Validate Idea&#34;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>    expect(page)<span style="color:#89dceb;font-weight:bold">.</span>to have_content(<span style="color:#a6e3a1">&#34;Building a Rails AI Agent from Scratch&#34;</span>)
</span></span><span style="display:flex;"><span>  <span style="color:#cba6f7">end</span>
</span></span><span style="display:flex;"><span><span style="color:#cba6f7">end</span>
</span></span></code></pre></div><p>This test touches authentication, the form UI, the controller, the background job, and the results page. If an agent breaks any part of that chain, this spec catches it.</p>
<p>The tradeoff is speed. End-to-end tests are slower and more brittle than unit tests. You don&rsquo;t need full E2E coverage, but having system specs on your critical user journeys (signup, checkout, the core action your product is built around) gives agents a safety net that unit tests alone can&rsquo;t provide.</p>
<h3 id="the-smallest-change-that-moves-the-needle">The smallest change that moves the needle</h3>
<p>Add coverage to your critical paths first. Don&rsquo;t chase a coverage number. Instead, identify the three or four service objects or domain models where bugs would hurt the most, and write specs for those. Then add one or two system specs covering your most important user journeys end-to-end. If your suite is slow, add parallel test execution. In a Rails app, that might be as simple as adding the <code>parallel_tests</code> gem. A suite that goes from 15 minutes to 4 minutes fundamentally changes how an agent can work with your code. If you&rsquo;re running multiple agents in parallel, you&rsquo;ll also need <a href="/posts/2026-03-10-extending-claude-code-worktrees-for-true-database-isolation/">database isolation per worktree</a> to prevent test data collisions.</p>
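<p>The <code>parallel_tests</code> setup is small. A sketch, assuming a standard Rails layout:</p>
<pre tabindex="0"><code># Gemfile
group :development, :test do
  gem &#34;parallel_tests&#34;
end

# One-time: create and prepare one test database per CPU core
bundle exec rake parallel:create parallel:prepare

# Run the suite across all cores
bundle exec rake parallel:spec
</code></pre><p>The gem shards your spec files across processes, each with its own database, which is what makes the 15-minutes-to-4-minutes jump possible.</p>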
<p>If you want to accelerate the process, tools like <a href="https://github.com/uditgoenka/autoresearch">autoresearch</a> apply this pattern as an autonomous loop: give the agent a measurable goal (like a coverage target), and it iterates, verifies, keeps what works, and discards what doesn&rsquo;t.</p>
<h2 id="documentation-as-code">Documentation as Code</h2>
<p>Documentation carries 15% of the assessment weight, but in practice it&rsquo;s the dimension where I see the biggest gap between teams that get good agent output and teams that don&rsquo;t.</p>
<h3 id="what-a-low-score-looks-like-1">What a low score looks like</h3>
<p>Without an agent-facing entry point (a <code>CLAUDE.md</code>, <code>AGENTS.md</code>, or equivalent), an agent has to reverse-engineer your conventions from the code itself. It reads your files, infers patterns, and guesses at intent. Sometimes it guesses right. Often it doesn&rsquo;t.</p>
<p>Here&rsquo;s a concrete example. A Rails app uses service objects for all business logic. Controllers call a service, the service does the work, and the result gets rendered. There&rsquo;s nothing enforcing this in the framework. It&rsquo;s a team convention. An agent that doesn&rsquo;t know about this convention puts the logic directly in the controller action. The code works. The tests pass. But it breaks the team&rsquo;s pattern, and now there&rsquo;s a 50-line controller action that should have been a service object.</p>
<p>The agent wasn&rsquo;t wrong. It had no way to know.</p>
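<p>For contrast, here&rsquo;s what the convention looks like when it&rsquo;s followed. This is a hypothetical sketch using the video-idea domain from the system spec above, not code from any assessed app:</p>
<pre tabindex="0"><code># app/services/ideas/create_idea.rb
# Convention: controllers stay thin; business logic lives in a
# service object exposing a single `call` entry point.
module Ideas
  class CreateIdea
    def initialize(user:, params:)
      @user = user
      @params = params
    end

    def call
      idea = @user.ideas.build(@params)
      return idea unless idea.save

      ValidationJob.perform_later(idea)  # validation runs async
      idea
    end
  end
end

# The controller action collapses to one line:
#   @idea = Ideas::CreateIdea.new(user: current_user, params: idea_params).call
</code></pre><p>Nothing in Rails enforces this shape. That&rsquo;s exactly why it belongs in documentation the agent can read.</p>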
<h3 id="what-a-high-score-looks-like-1">What a high score looks like</h3>
<p>The key insight is that this entry point file should be a map, not a manual. OpenAI&rsquo;s Harness Engineering team <a href="https://openai.com/index/harness-engineering/">learned this the hard way</a>: they tried a single large instruction file and it failed because &ldquo;context is a scarce resource&rdquo; and &ldquo;too much guidance becomes non-guidance.&rdquo; When everything is marked important, agents pattern-match locally instead of navigating intentionally.</p>
<p>Their solution: keep the entry file short (roughly 100 lines) and treat it as a table of contents that points to deeper sources of truth in a structured <code>docs/</code> directory. The entry file gives agents quick commands and a documentation map. The detail lives in dedicated files the agent reads when it needs them. Whether you call it <code>CLAUDE.md</code>, <code>AGENTS.md</code>, or <code>CURSOR.md</code>, the pattern is the same.</p>
<p>Here&rsquo;s what this looks like in practice from one of my Rails projects:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-markdown" data-lang="markdown"><span style="display:flex;"><span><span style="color:#fab387;font-weight:bold">## Quick Commands
</span></span></span><span style="display:flex;"><span><span style="color:#fab387;font-weight:bold"></span>
</span></span><span style="display:flex;"><span>bin/dev                                # Start dev server
</span></span><span style="display:flex;"><span>bin/rails spec                         # All tests
</span></span><span style="display:flex;"><span>bin/ci                                 # Full CI: lint + security + tests
</span></span><span style="display:flex;"><span>bin/rubocop                            # Lint
</span></span><span style="display:flex;"><span>bin/brakeman                           # Security scan
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#fab387;font-weight:bold">## Documentation Map
</span></span></span><span style="display:flex;"><span><span style="color:#fab387;font-weight:bold"></span>
</span></span><span style="display:flex;"><span>| Topic | Document |
</span></span><span style="display:flex;"><span>|-------|----------|
</span></span><span style="display:flex;"><span>| Stack, patterns, domain model | docs/ARCHITECTURE.md |
</span></span><span style="display:flex;"><span>| Testing patterns and stack | docs/TESTING.md |
</span></span><span style="display:flex;"><span>| Credentials, env vars, API keys | docs/CONFIGURATION.md |
</span></span><span style="display:flex;"><span>| Engineering principles | docs/design-docs/core-beliefs.md |
</span></span><span style="display:flex;"><span>| Architecture decision records | docs/design-docs/ |
</span></span></code></pre></div><p>The agent gets commands and a map up front. When it needs to understand the domain model or testing conventions, it follows the pointer. This is progressive disclosure: the agent starts with what it needs immediately and loads deeper context on demand.</p>
<p>Here&rsquo;s a trimmed excerpt from the <code>ARCHITECTURE.md</code> behind that pointer:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-markdown" data-lang="markdown"><span style="display:flex;"><span><span style="color:#fab387;font-weight:bold">## Domain Model
</span></span></span><span style="display:flex;"><span><span style="color:#fab387;font-weight:bold"></span>
</span></span><span style="display:flex;"><span>CreatorSignal validates YouTube video ideas. The core flow:
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#cba6f7">1.</span> User submits a video <span style="font-weight:bold">**Idea**</span>
</span></span><span style="display:flex;"><span><span style="color:#cba6f7">2.</span> A <span style="font-weight:bold">**Validation**</span> job is enqueued
</span></span><span style="display:flex;"><span><span style="color:#cba6f7">3.</span> The <span style="font-weight:bold">**ResearchAgent**</span> runs tools against YouTube, Reddit, X, and HN
</span></span><span style="display:flex;"><span><span style="color:#cba6f7">4.</span> Results are synthesized into a scored <span style="font-weight:bold">**Go / Refine / Kill**</span> verdict
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#fab387;font-weight:bold">### Key Models
</span></span></span><span style="display:flex;"><span><span style="color:#fab387;font-weight:bold"></span>
</span></span><span style="display:flex;"><span>| Model | Responsibility |
</span></span><span style="display:flex;"><span>|-------|---------------|
</span></span><span style="display:flex;"><span>| <span style="color:#a6e3a1">`User`</span> | Authentication, subscription plan |
</span></span><span style="display:flex;"><span>| <span style="color:#a6e3a1">`Idea`</span> | A video idea submitted for validation |
</span></span><span style="display:flex;"><span>| <span style="color:#a6e3a1">`Validation`</span> | One run of the research agent against an idea |
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#fab387;font-weight:bold">### Project Structure
</span></span></span><span style="display:flex;"><span><span style="color:#fab387;font-weight:bold"></span>
</span></span><span style="display:flex;"><span>app/
</span></span><span style="display:flex;"><span>├── components/       # ViewComponent components
</span></span><span style="display:flex;"><span>├── controllers/
</span></span><span style="display:flex;"><span>├── jobs/             # ActiveJob jobs (async validation)
</span></span><span style="display:flex;"><span>├── models/
</span></span><span style="display:flex;"><span>├── services/         # Research agent, tool orchestration
</span></span><span style="display:flex;"><span>└── views/            # Hotwire (Turbo frames/streams)
</span></span></code></pre></div><p>An agent reading this knows what an <code>Idea</code> is, that validation is async through a job, and that orchestration logic lives in <code>app/services/</code>. Those are the conventions that prevent drift.</p>
<p>ADRs (Architecture Decision Records) add a layer that documentation alone can&rsquo;t. An agent that understands <em>why</em> a particular pattern was chosen can make better decisions when extending it. If your ADR says &ldquo;we chose event sourcing for the billing domain because of auditability requirements,&rdquo; the agent won&rsquo;t try to refactor billing into simple CRUD.</p>
<h3 id="the-smallest-change-that-moves-the-needle-1">The smallest change that moves the needle</h3>
<p>Create an <code>AGENTS.md</code> in your project root with two things: commands (build, test, lint) and a documentation map pointing to deeper files. <a href="https://agents.md/"><code>AGENTS.md</code></a> is an emerging standard supported by Codex, Cursor, Gemini CLI, GitHub Copilot, Windsurf, Devin, and <a href="https://agents.md/">many others</a>. If you&rsquo;re using Claude Code, symlink <code>CLAUDE.md</code> to it so both resolve to the same file. Then create an <code>ARCHITECTURE.md</code> covering your stack, domain model, and key conventions. This can take an hour and the effect on agent output is immediate. If you want to automate the scaffolding, the <a href="https://github.com/dgalarza/claude-code-workflows">agent-ready plugin</a> generates a starting point based on your existing codebase.</p>
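<p>A minimal version might look like the following. This is a sketch, not a prescription: the commands and paths are placeholders for whatever your project actually uses.</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-markdown" data-lang="markdown"># AGENTS.md

## Commands

- Setup: `bin/setup`
- Test: `bin/rails test`
- Lint: `bin/rubocop`

## Documentation Map

| Topic | Location |
|-------|----------|
| Stack, patterns, domain model | docs/ARCHITECTURE.md |
| Testing patterns and stack | docs/TESTING.md |
</code></pre></div>
<p>Keep it short. The point is a table of contents for the agent, not a second copy of your documentation.</p>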
<h2 id="architecture-clarity">Architecture Clarity</h2>
<p>Architecture clarity carries 15% of the assessment weight. It measures whether an agent can understand where code belongs and how components relate to each other.</p>
<h3 id="what-a-low-score-looks-like-2">What a low score looks like</h3>
<p>Agents replicate patterns they find in the codebase. If your codebase has clear boundaries (controllers handle HTTP, services handle business logic, models handle persistence), the agent follows those boundaries. If your codebase mixes concerns, the agent mixes concerns.</p>
<p>The most common failure I see: a controller that does everything. It validates input, calls the database, sends emails, enqueues jobs. An agent asked to add a new feature looks at the existing controller, sees that&rsquo;s where logic goes, and adds more logic to the controller. The agent is doing exactly what the codebase taught it to do.</p>
<p>The subtler version is dependency direction. In a well-layered app, dependencies point inward: controllers depend on services, services depend on models. When that direction is inconsistent (models importing from controllers, services reaching into HTTP request objects), agents produce code with the same tangled dependencies.</p>
<h3 id="what-a-high-score-looks-like-2">What a high score looks like</h3>
<ul>
<li><strong>Clear layering.</strong> Each layer has a single responsibility, and the codebase is consistent about which layer owns what.</li>
<li><strong>Domain namespacing.</strong> Related functionality is grouped by business domain, not just by technical layer. Instead of a flat <code>app/services/</code> with 40 files, you have <code>app/services/billing/</code>, <code>app/services/onboarding/</code>, <code>app/services/research/</code>. When an agent needs to add billing logic, the namespace tells it exactly where to look and what patterns to follow.</li>
<li><strong>Predictable file organization.</strong> A new developer (or agent) can guess where a piece of code lives based on what it does.</li>
<li><strong>Dependency direction is consistent.</strong> Inner layers don&rsquo;t reach outward. You don&rsquo;t see models importing controller concerns.</li>
</ul>
<p>Domain namespacing is especially powerful for agents because it constrains the search space. An agent working on a billing feature only needs to understand the billing namespace, not the entire codebase. It finds the existing patterns in that namespace and replicates them. Without namespacing, the agent has to scan the whole codebase to figure out where billing logic lives, and it might find three different patterns in three different places.</p>
<h3 id="the-smallest-change-that-moves-the-needle-2">The smallest change that moves the needle</h3>
<p>If you have fat controllers, extract one. Pick your most complex controller action, pull the business logic into a service object, and write a spec for it. The agent will start using that service object pattern for new features. One well-structured example teaches the agent more than any documentation, because it&rsquo;s a pattern it can directly replicate.</p>
<p>If your codebase has grown past a handful of services, start namespacing by domain. Group related services, jobs, and models under a shared namespace. This compounds quickly: once you have three or four service objects under <code>Billing::</code>, agents start producing new billing code in the same namespace by default. The codebase becomes self-reinforcing.</p>
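<p>To make that concrete, here&rsquo;s a sketch of what a namespaced service object can look like. Every name in it (<code>Billing::</code>, <code>CreateInvoice</code>) is illustrative rather than taken from a real codebase:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-ruby" data-lang="ruby"># app/services/billing/create_invoice.rb
# Hypothetical example of a domain-namespaced service object.
module Billing
  class CreateInvoice
    def initialize(customer:, amount_cents:)
      @customer = customer
      @amount_cents = amount_cents
    end

    # Returns [:ok, invoice] or [:error, message], keeping the
    # controller thin: it only branches on the outcome.
    def call
      return [:error, "amount must be positive"] unless @amount_cents.positive?

      invoice = { customer: @customer, amount_cents: @amount_cents, issued_at: Time.now }
      [:ok, invoice]
    end
  end
end
</code></pre></div>
<p>Once two or three of these exist under <code>Billing::</code>, an agent asked for new billing behavior has an unambiguous pattern and location to copy.</p>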
<h2 id="feedback-loops">Feedback Loops</h2>
<p>Feedback loops carry 10% of the assessment weight, but their impact is multiplicative. Good feedback loops make everything else work better. Poor ones make everything else work worse.</p>
<h3 id="what-a-low-score-looks-like-3">What a low score looks like</h3>
<p>Agents learn from the signals they get back. When the only signal is &ldquo;tests passed,&rdquo; the agent has no way to know it introduced a style violation, broke a naming convention, or used a deprecated API. It moves on, confident the change is correct.</p>
<p>Two things make feedback loops weak: <strong>narrow signals</strong> and <strong>slow signals</strong>.</p>
<p>Narrow signals mean the agent only hears from one source. Tests tell the agent whether the code works. They don&rsquo;t tell it whether the code follows your conventions, whether it introduced a security vulnerability, or whether the UI actually renders correctly. Each missing signal is a category of problems the agent can&rsquo;t self-correct.</p>
<p>Slow signals are just as damaging. If the agent has to wait 20 minutes for a CI run to discover a linting error, it&rsquo;s already moved on. It&rsquo;s built three more features on top of code that doesn&rsquo;t pass lint. Now you&rsquo;re unwinding multiple changes instead of catching the first one. The closer the feedback is to the moment of the change, the cheaper it is to fix.</p>
<p>There&rsquo;s also a hierarchy to how you enforce conventions. Anything that can be checked deterministically by a linter should be a lint rule, not a line in your <code>CLAUDE.md</code>. A lint rule catches every violation, every time. A documentation rule depends on the agent reading it and choosing to follow it. If your convention is &ldquo;methods must be under 20 lines&rdquo; or &ldquo;always use <code>frozen_string_literal</code>,&rdquo; encode it in RuboCop, ESLint, or whatever linter your stack uses. Save documentation for the things that can&rsquo;t be mechanically enforced: architectural decisions, domain context, workflow conventions.</p>
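<p>The two example conventions above map directly onto RuboCop configuration. A sketch of the relevant <code>.rubocop.yml</code> entries (adjust the limit to your own standard):</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-yaml" data-lang="yaml"># .rubocop.yml
Metrics/MethodLength:
  Max: 20

Style/FrozenStringLiteralComment:
  Enabled: true
  EnforcedStyle: always
</code></pre></div>
<p>Once these are lint rules, every violation gets caught mechanically, whether a human or an agent wrote the code.</p>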
<h3 id="what-a-high-score-looks-like-3">What a high score looks like</h3>
<ul>
<li><strong>Pre-commit hooks for immediate feedback.</strong> The agent discovers formatting issues, type errors, or lint violations before it even commits.</li>
<li><strong>CI that runs in under 10 minutes.</strong> Fast enough that the agent can push, get feedback, and iterate without burning excessive context.</li>
<li><strong>Rich error messages.</strong> Linting output that says &ldquo;method too long (25 lines, max 20)&rdquo; is actionable. A generic &ldquo;style violation&rdquo; is not.</li>
</ul>
<p>Here&rsquo;s what a CI script looks like when it goes beyond just running tests. This is the <code>bin/ci</code> from the same Rails project:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-ruby" data-lang="ruby"><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic"># config/ci.rb - run with bin/ci</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#f9e2af">CI</span><span style="color:#89dceb;font-weight:bold">.</span>run <span style="color:#cba6f7">do</span>
</span></span><span style="display:flex;"><span>  step <span style="color:#a6e3a1">&#34;Setup&#34;</span>, <span style="color:#a6e3a1">&#34;bin/setup --skip-server&#34;</span>
</span></span><span style="display:flex;"><span>  step <span style="color:#a6e3a1">&#34;Style: Ruby&#34;</span>, <span style="color:#a6e3a1">&#34;bin/rubocop&#34;</span>
</span></span><span style="display:flex;"><span>  step <span style="color:#a6e3a1">&#34;Security: Gem audit&#34;</span>, <span style="color:#a6e3a1">&#34;bin/bundler-audit&#34;</span>
</span></span><span style="display:flex;"><span>  step <span style="color:#a6e3a1">&#34;Security: Importmap vulnerability audit&#34;</span>, <span style="color:#a6e3a1">&#34;bin/importmap audit&#34;</span>
</span></span><span style="display:flex;"><span>  step <span style="color:#a6e3a1">&#34;Security: Brakeman code analysis&#34;</span>, <span style="color:#a6e3a1">&#34;bin/brakeman --quiet --no-pager --exit-on-warn --exit-on-error&#34;</span>
</span></span><span style="display:flex;"><span><span style="color:#cba6f7">end</span>
</span></span></code></pre></div><p>Five steps, each giving the agent a different kind of feedback. RuboCop catches style violations. Bundler-audit catches vulnerable gems. Brakeman catches security issues in the code itself. An agent that runs <code>bin/ci</code> gets five signals instead of one.</p>
<h3 id="browser-access-as-a-feedback-loop">Browser access as a feedback loop</h3>
<p>For web applications, there&rsquo;s a feedback loop that most teams overlook: giving agents the ability to see what they built.</p>
<p>An agent that can only run tests is working blind on anything visual. It can verify that a controller returns 200, but it can&rsquo;t tell whether the page actually renders correctly, whether a modal opens, or whether a form submits without errors. Cursor&rsquo;s team <a href="https://cursor.com/blog/agent-computer-use">wrote about this</a>: once they gave agents browser access via cloud sandboxes, agents could &ldquo;iterate until they&rsquo;ve validated their output rather than handing off the first attempt.&rdquo; More than 30% of their merged PRs are now created by agents operating autonomously in cloud sandboxes.</p>
<p>You don&rsquo;t need a full cloud sandbox to get value from this. Claude Code has <a href="https://code.claude.com/docs/en/chrome">built-in Chrome support</a> via <code>claude --chrome</code>, and tools like Playwright MCP give agents browser control locally. The agent can navigate to a page, take a snapshot of the DOM, fill in a form, and verify the result. That&rsquo;s a feedback loop that catches an entire class of issues that unit tests and linters never will.</p>
<h3 id="the-smallest-change-that-moves-the-needle-3">The smallest change that moves the needle</h3>
<p>Add a linter to your CI pipeline. For a Ruby project, that&rsquo;s RuboCop. For JavaScript/TypeScript, ESLint. For Python, Ruff. One config file, one CI step. The agent immediately starts getting feedback on style and conventions that it wouldn&rsquo;t otherwise know about.</p>
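<p>As a concrete sketch, one workflow file is enough. This assumes GitHub Actions and a Ruby project with RuboCop in the Gemfile; translate to your CI and linter of choice:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-yaml" data-lang="yaml"># .github/workflows/lint.yml
name: Lint
on: [push, pull_request]
jobs:
  rubocop:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          bundler-cache: true   # installs gems and caches them
      - run: bundle exec rubocop
</code></pre></div>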
<p>If you want faster feedback, add pre-commit hooks. The agent runs into the linter before it even pushes, which means it fixes issues in the same context window where it created them. That&rsquo;s cheaper, faster, and produces cleaner commits.</p>
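<p>A pre-commit hook can be as small as a shell script at <code>.git/hooks/pre-commit</code> (made executable with <code>chmod +x</code>). This sketch assumes a Ruby project with <code>bin/rubocop</code>; swap in your own linter:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash">#!/bin/sh
# Lint only the Ruby files staged for this commit.
files=$(git diff --cached --name-only --diff-filter=AM -- '*.rb')
if [ -z "$files" ]; then
  exit 0
fi
# A non-zero exit status here aborts the commit.
bin/rubocop --force-exclusion $files
</code></pre></div>
<p>Tools like Lefthook or pre-commit manage the same idea declaratively if you&rsquo;d rather not hand-maintain hook scripts.</p>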
<p>For web projects, consider adding browser access through Playwright MCP or a similar tool. The agent starts verifying its own UI changes instead of relying on you to catch visual issues in review.</p>
<h2 id="where-to-start">Where to Start</h2>
<p>If you&rsquo;re looking at your codebase and wondering where to start, here&rsquo;s how I think about prioritization:</p>
<ol>
<li><strong>Fix your test foundation first.</strong> Without reliable tests, every other improvement is hard to verify. An agent can&rsquo;t confidently refactor your architecture if there&rsquo;s no test suite to catch regressions.</li>
<li><strong>Add an AGENTS.md.</strong> This is 30 minutes of work that immediately changes agent behavior. It&rsquo;s the highest-ROI improvement you can make.</li>
<li><strong>Add a linter to CI.</strong> This closes the feedback gap with minimal effort. The agent starts learning your conventions from automated feedback instead of guessing from code patterns.</li>
</ol>
<p>These three changes don&rsquo;t require a major initiative. They&rsquo;re individual tasks that compound. A codebase with strong tests, clear documentation, and fast feedback loops creates a reinforcing cycle: agents produce better code, which maintains the patterns, which makes future agent output even better.</p>
<p>If you want to see where your codebase stands across all eight dimensions, run the <a href="/codebase-readiness/">Codebase Readiness Assessment</a>. It takes 60 seconds and gives you a score, a per-dimension breakdown, and a prioritized roadmap.</p>
<p>If your team wants hands-on help closing these gaps, that&rsquo;s what a <a href="/services/#retainer">Production AI Retainer</a> is built for. Or if you just want to talk through your results, <a href="/pages/meet/">book a free intro call</a>.</p>
<h2 id="further-reading">Further Reading</h2>
<ul>
<li><a href="/codebase-readiness/">Codebase Readiness Assessment</a> - Run the free assessment on your repo</li>
<li><a href="https://openai.com/index/harness-engineering/">Harness Engineering: Leveraging Codex in an Agent-First World</a> - OpenAI&rsquo;s deep dive on building a million-line codebase entirely with agents</li>
<li><a href="https://cursor.com/blog/agent-computer-use">Agent Computer Use</a> - How Cursor gives agents browser access to verify their own work</li>
<li><a href="/posts/2025-11-25-how-i-use-claude-code/">How I Use Claude Code: My Complete Development Workflow</a> - How codebase structure impacts agent output quality</li>
<li><a href="/posts/2026-02-05-mcps-vs-agent-skills/">MCPs vs Agent Skills</a> - Architecture decisions that shape how agents interact with your codebase</li>
</ul>
]]></content:encoded></item><item><title>What Claude Code Does in Your Terminal</title><link>https://www.damiangalarza.com/posts/2026-03-16-what-claude-code-does-in-your-terminal/</link><pubDate>Mon, 16 Mar 2026 00:00:00 -0400</pubDate><author>Damian Galarza</author><guid>https://www.damiangalarza.com/posts/2026-03-16-what-claude-code-does-in-your-terminal/</guid><description>Claude Code runs terminal commands and asks you to approve them. This explains what those commands mean and when to pause before saying yes.</description><content:encoded><![CDATA[<p>Every time you use Claude Code to work on a project, it runs commands in your terminal. You&rsquo;ve probably seen the prompt: &ldquo;I&rsquo;m going to run this command. Approve?&rdquo; If you&rsquo;ve come across commands like these and wondered what they actually mean, you&rsquo;re in the right place.</p>
<p>This guide is for people who are building real things with Claude Code but haven&rsquo;t spent much time in the terminal. Designers, product managers, founders who have picked up coding recently. If the terminal feels like reading another language, this is the starting point.</p>
<h2 id="what-the-terminal-actually-is">What the Terminal Actually Is</h2>
<p>Your computer has two ways to interact with it.</p>
<p>The first is what you&rsquo;re used to: icons, windows, buttons. You click a file to open it. You drag it to the trash to delete it. The computer shows you what&rsquo;s happening visually.</p>
<p>The second is the terminal. Instead of clicking, you type. Instead of visual feedback, you get text. It&rsquo;s the same computer doing the same things, just through a different interface.</p>
<p>The terminal can do everything the graphical interface (sometimes called a <strong>GUI</strong>, or graphical user interface) can do, and usually faster. When Claude Code searches your entire codebase for a function name, it does that in the terminal because searching thousands of files through a visual interface would be painfully slow.</p>
<p>The terminal comes pre-installed on Mac (it&rsquo;s called Terminal). Tools like VS Code and Cursor have one built in. When you see Claude Code working, it&rsquo;s using that built-in terminal.</p>
<h2 id="files-folders-and-where-you-are">Files, Folders, and Where You Are</h2>
<p>Everything on your computer is either a file or a folder. Files have content (code, images, documents). Folders contain files and other folders.</p>
<p>The terminal uses slightly different words. Folders are called <strong>directories</strong>. Your project is a directory (usually containing many sub-directories). Same concept, different name.</p>
<p>The terminal always has a <strong>current location</strong>. Think of it like Finder or Windows Explorer: you&rsquo;re always looking at a specific folder. In the terminal, that location is called the <strong>working directory</strong>.</p>
<p>When Claude Code opens in your project, the working directory is your project folder. Every command it runs starts from there.</p>
<p>File paths can be <strong>relative</strong> or <strong>absolute</strong>. An absolute path starts from the root of your computer, like <code>/Users/yourname/Projects/my-app/src/App.tsx</code>. A relative path starts from wherever you currently are. If your working directory is <code>/Users/yourname/Projects/my-app</code>, then <code>src/App.tsx</code> points to the same file. Most commands Claude Code runs use relative paths, so knowing your current location matters.</p>
<p>If you&rsquo;re ever unsure which directory you&rsquo;re in, you can check by running:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#89dceb">pwd</span>
</span></span></code></pre></div><p>This prints the full path. Something like <code>/Users/yourname/Projects/my-app</code>. Read-only, nothing changes.</p>
<p>To move to a different directory, use <code>cd</code> (change directory):</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#89dceb">cd</span> ~/Projects/my-app
</span></span></code></pre></div><p>The <code>~/</code> is a shortcut that means your home folder (usually <code>/Users/yourname</code> on Mac or <code>/home/yourname</code> on Linux). So <code>~/Projects/my-app</code> expands to the full path automatically.</p>
<p>You can also move up one level with <code>cd ../</code>:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#89dceb">cd</span> ../
</span></span></code></pre></div><p>The <code>../</code> means &ldquo;the parent directory,&rdquo; or one folder up from where you currently are. If you&rsquo;re in <code>/Users/yourname/Projects/my-app/src</code>, running <code>cd ../</code> takes you back to <code>/Users/yourname/Projects/my-app</code>.</p>
<p>This is how you navigate to your project before starting Claude Code. If Claude Code isn&rsquo;t finding the files you expect, it might be running from the wrong directory. Use <code>pwd</code> to check, and <code>cd</code> to move to the right place.</p>
<h2 id="looking-around-what-claude-code-does-first">Looking Around: What Claude Code Does First</h2>
<p>Before Claude Code makes any changes, it looks around. It needs to understand what exists before it can modify anything. The commands it uses for this are all read-only: they look at files and folders without changing anything.</p>
<p><strong><code>ls</code></strong> lists the files and folders in the current directory:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>ls
</span></span><span style="display:flex;"><span>ls src/components
</span></span></code></pre></div><p>Claude Code might run <code>ls</code> before creating a file to confirm it doesn&rsquo;t already exist. Or look inside a specific folder to understand what&rsquo;s there.</p>
<p><strong><code>cat</code></strong> prints the contents of a file:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>cat package.json
</span></span></code></pre></div><p>When Claude Code reads your <code>package.json</code> to understand your project&rsquo;s dependencies, this is what it&rsquo;s doing. Reading, not changing.</p>
<p>One thing to keep in mind: <code>cat</code> is read-only, but be cautious if Claude Code tries to <code>cat</code> files that contain secrets, like <code>.env</code> files with API keys or passwords. Those contents will be visible in your terminal session and sent to Claude as context. Treat secret files the same way you&rsquo;d treat a password: don&rsquo;t share them unless you understand where they&rsquo;re going.</p>
<p><strong><code>grep</code></strong> searches for text inside files:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>grep -r <span style="color:#a6e3a1">&#34;handleSubmit&#34;</span> src/
</span></span></code></pre></div><p>The <code>-r</code> here is a <strong>flag</strong>. Flags modify how a command behaves. They usually start with a dash (<code>-</code>) followed by a letter. In this case, <code>-r</code> tells <code>grep</code> to search <strong>recursively</strong>, meaning it looks through every file in the <code>src/</code> folder and all its sub-folders. Without <code>-r</code>, it would only search a single file.</p>
<p>You&rsquo;ll see flags on many commands. <code>ls -l</code> shows files in a detailed list. <code>rm -r</code> deletes directories. Each flag changes the command&rsquo;s behavior in a specific way. When you see one you don&rsquo;t recognize, that&rsquo;s a good time to ask Claude what it does.</p>
<p>This <code>grep</code> command finds every file in <code>src/</code> that contains the text <code>handleSubmit</code>. Claude Code uses this constantly to find where things are defined and where they&rsquo;re used.</p>
<p>All of these commands are exploratory. Nothing gets created, moved, or deleted. If you&rsquo;re unsure whether to approve a command, <code>ls</code>, <code>cat</code>, and <code>grep</code> are safe.</p>
<h2 id="when-claude-code-actually-changes-things">When Claude Code Actually Changes Things</h2>
<p>Reading is safe. Everything else deserves attention.</p>
<p><strong>Creating files and directories:</strong></p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>mkdir src/components/forms
</span></span><span style="display:flex;"><span>touch src/components/forms/LoginForm.tsx
</span></span></code></pre></div><p><code>mkdir</code> creates a directory. <code>touch</code> creates an empty file. Low-risk, but worth understanding what&rsquo;s being created and where.</p>
<p><strong>Moving and renaming:</strong></p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>mv LoginForm.tsx AuthForm.tsx
</span></span></code></pre></div><p><code>mv</code> moves or renames a file. After this runs, <code>LoginForm.tsx</code> no longer exists under that name. Renaming in the terminal is just a move with a new name.</p>
<p><strong>Deleting:</strong></p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>rm old-config.ts
</span></span><span style="display:flex;"><span>rm -r deprecated/
</span></span></code></pre></div><p><code>rm</code> deletes files. <code>rm -r</code> deletes entire directories and everything inside them. There is no trash bin. No undo. The file is gone.</p>
<p>Claude Code shouldn&rsquo;t be deleting things without telling you what and why. If you see <code>rm -rf</code>, verify that you actually want those files removed before approving. If you are unsure, ask Claude what it&rsquo;s deleting and why.</p>
<p><strong>Running your project&rsquo;s tools:</strong></p>
<p>Depending on your project, you&rsquo;ll see Claude Code use different <strong>package managers</strong> and tools. A package manager handles your project&rsquo;s dependencies: the external libraries and code your project relies on so you don&rsquo;t have to build everything from scratch.</p>
<p>The most common ones are <strong>npm</strong> (for JavaScript/Node.js projects), <strong>pip</strong> (for Python), <strong>gem</strong> (for Ruby), and <strong>yarn</strong> (an alternative to npm). Claude Code will use whichever one your project is set up with. When Claude Code runs <code>npm install</code> or <code>pip install</code>, it&rsquo;s downloading packages. <code>npm run build</code> tells npm to run a build script defined in your project.</p>
<p><strong>git</strong> is version control. It tracks every change to your code over time, like a detailed save history. <code>git add</code> tells git which files to include in your next save. <code>git commit</code> takes those staged files and saves them as a snapshot in the project&rsquo;s history.</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>npm install
</span></span><span style="display:flex;"><span>npm run build
</span></span><span style="display:flex;"><span>git add . <span style="color:#89dceb;font-weight:bold">&amp;&amp;</span> git commit -m <span style="color:#a6e3a1">&#34;Add login form&#34;</span>
</span></span></code></pre></div><p>These commands do real things. <code>npm install</code> downloads packages to your project. <code>git commit</code> saves your changes to version history. When Claude Code runs these, it&rsquo;s making moves that affect your project&rsquo;s state.</p>
<h2 id="reading-a-command-before-you-approve">Reading a Command Before You Approve</h2>
<p>You might see more complicated commands from Claude Code. Take the following example:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>find . -name <span style="color:#a6e3a1">&#34;*.test.ts&#34;</span> | grep <span style="color:#a6e3a1">&#34;auth&#34;</span>
</span></span></code></pre></div><p>This is actually a sequence of simpler commands chained together, a powerful feature of the terminal that lets you compose tools. To understand it, break it down from left to right:</p>
<ul>
<li><code>find .</code> — search starting from the current directory (<code>.</code> means &ldquo;here&rdquo;)</li>
<li><code>-name &quot;*.test.ts&quot;</code> — find files that end in <code>.test.ts</code></li>
<li><code>|</code> — take the results and feed them to the next command</li>
<li><code>grep &quot;auth&quot;</code> — filter those results to only lines containing &ldquo;auth&rdquo;</li>
</ul>
<p>The full command: &ldquo;Find all test files and show me the ones related to auth.&rdquo; That&rsquo;s read-only. Safe to approve.</p>
<p>The <code>|</code> character is called a <strong>pipe</strong>. It chains commands together: the output of the first becomes the input of the second. You&rsquo;ll see it often. When you do, read each side separately, then understand what connects them.</p>
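<p>You can get a feel for pipes with something harmless in your own terminal. Here <code>printf</code> produces three lines and <code>grep</code> keeps only the matching one:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"># printf writes three lines; the pipe feeds them to grep,
# which keeps only the line containing "b".
printf 'a\nb\nc\n' | grep "b"
# prints: b
</code></pre></div>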
<h2 id="commands-to-think-twice-about">Commands to Think Twice About</h2>
<p>Most of what Claude Code runs is safe. A few patterns are worth pausing on.</p>
<p><strong><code>rm -rf</code> on anything important:</strong></p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>rm -rf src/
</span></span></code></pre></div><p>This deletes the entire <code>src</code> directory instantly. Claude Code will ask you to approve this command, but once you do, there&rsquo;s no additional safety net from the operating system. No trash bin, no undo. Be sure about this before approving.</p>
<p><strong><code>sudo</code>:</strong></p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>sudo npm install -g some-package
</span></span></code></pre></div><p><code>sudo</code> runs a command as an administrator. It will prompt you for your computer&rsquo;s password. Claude Code rarely needs this. If you see it, ask Claude why it&rsquo;s necessary. And don&rsquo;t type your password into Claude Code&rsquo;s prompt. If <code>sudo</code> is truly needed, deny the command, then run it yourself directly in your own terminal after Claude explains what it does.</p>
<p><strong><code>curl</code> piped into bash:</strong></p>
<p><code>curl</code> is a command that downloads content from the internet. On its own, it&rsquo;s harmless. It just fetches a file or a web page. But when you see it piped into <code>bash</code>, it becomes something different:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>curl https://example.com/script.sh | bash
</span></span></code></pre></div><p>This downloads a script from the internet and runs it immediately. The <code>|</code> sends whatever <code>curl</code> fetches straight into <code>bash</code>, which executes it as commands on your computer. Uncommon, but when you see it, understand exactly what you&rsquo;re downloading before approving.</p>
<p><strong>Pro tip:</strong> If you&rsquo;re not sure what a script contains or whether it&rsquo;s safe, ask Claude to review it first. Deny the <code>curl | bash</code> command and say something like: &ldquo;What is this script you are trying to run in bash? Before we run this, can you download the contents and evaluate if it&rsquo;s safe or not? Help me understand what it does.&rdquo; Claude can fetch the file, display its contents, and walk you through what each part does so you can make an informed decision.</p>
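<p>In practice, the safer pattern looks like this. The <code>printf</code> line below stands in for the download so the sketch is self-contained and runnable; with a real URL, the first step would be something like <code>curl -fsSL https://example.com/script.sh -o script.sh</code>:</p>

```shell
cd "$(mktemp -d)"   # work somewhere disposable

# stand-in for: curl -fsSL https://example.com/script.sh -o script.sh
printf 'echo hello from the script\n' > script.sh

cat script.sh    # read what it does before anything executes
bash script.sh   # run it only after you trust what you read; prints: hello from the script
```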
<h2 id="when-a-command-fails">When a Command Fails</h2>
<p>Claude Code commands fail regularly. That&rsquo;s normal. Here&rsquo;s what the common errors mean:</p>
<ul>
<li><strong><code>command not found</code></strong> — the tool isn&rsquo;t installed. Claude Code will usually try to install it, or tell you what you need.</li>
<li><strong><code>No such file or directory</code></strong> — the path is wrong. Something about the file name or location doesn&rsquo;t match.</li>
<li><strong><code>Permission denied</code></strong> — you don&rsquo;t have access to that file or directory.</li>
</ul>
<p>When something fails, Claude Code will diagnose it and try a different approach. If it gets stuck, copy the error message and paste it back. Errors in the terminal are almost always actionable once you know what they&rsquo;re saying.</p>
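<p>These failures are also visible to scripts, and to Claude Code itself, through the command&rsquo;s exit code: zero means success, anything else means failure. You can see it yourself with a path that doesn&rsquo;t exist:</p>

```shell
ls /definitely/not/a/real/path   # prints a "No such file or directory" error
echo "exit code: $?"             # nonzero -- the exact number varies by system
```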
<h2 id="getting-help-with-commands">Getting Help with Commands</h2>
<p>You don&rsquo;t need to memorize what every command does. There are two quick ways to get answers.</p>
<p><strong>Ask the command itself.</strong> Most terminal commands have a built-in help option. Add <code>--help</code> after the command name:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>grep --help
</span></span><span style="display:flex;"><span>git --help
</span></span><span style="display:flex;"><span>npm --help
</span></span></code></pre></div><p>This prints a summary of what the command does and what flags it accepts. The output can be dense, but scanning it for the flag you&rsquo;re curious about usually answers your question.</p>
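<p>Since help output can run to hundreds of lines, a pipe (the same <code>|</code> from earlier) makes it searchable. For example, to find what <code>git</code>&rsquo;s help text says about cloning:</p>

```shell
# print git's help text, then keep only the lines mentioning "clone"
git --help | grep clone
```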
<p><strong>Ask Claude.</strong> If a command Claude Code wants to run looks unfamiliar, deny it and ask Claude to explain. Something as simple as &ldquo;What does this command do?&rdquo; will get you a plain-language breakdown. Claude can explain the command, each flag, and what will happen when it runs. This is one of the most practical ways to learn the terminal while using Claude Code.</p>
<h2 id="youre-supervising-not-executing">You&rsquo;re Supervising, Not Executing</h2>
<p>You don&rsquo;t need to be a terminal expert to use Claude Code. Your job is to understand enough to supervise what it&rsquo;s doing.</p>
<p>Supervisors know enough to recognize when something looks right and when something needs a question. That&rsquo;s the skill worth developing here.</p>
<p>Most of what Claude Code runs is safe. The unsafe things are usually obvious once you know what <code>rm -rf</code> means and when <code>sudo</code> is suspicious. You now know both.</p>
<h2 id="cheat-sheet--glossary">Cheat Sheet &amp; Glossary</h2>
<p>I put together a 2-page reference card with every command and term from this post, including safety ratings for each one. Keep it next to your keyboard while you work, or print it out.</p>
<p><a href="/downloads/terminal-cheat-sheet/">Get the free Terminal Cheat Sheet (PDF) →</a></p>
<p>If you&rsquo;re new to the terminal and want to use Claude Code confidently, I offer coaching to help you get there. From project setup to understanding what&rsquo;s happening under the hood, we can work through it together. <a href="/coaching/#claude-code-basics">Book a Claude Code Basics session</a>.</p>
]]></content:encoded></item><item><title>Ruby LSP Now Has Official Claude Code Support</title><link>https://www.damiangalarza.com/posts/2026-03-13-ruby-lsp-claude-code/</link><pubDate>Fri, 13 Mar 2026 00:00:00 -0400</pubDate><author>Damian Galarza</author><guid>https://www.damiangalarza.com/posts/2026-03-13-ruby-lsp-claude-code/</guid><description>Ruby LSP now has official Claude Code support. Install the plugin to give Claude go to definition, find references, and call hierarchy across your Ruby project.</description><content:encoded><![CDATA[<p>Up until now, Claude Code has understood your Ruby codebase the same way grep does. It reads files, searches for patterns, and infers structure from text. That works surprisingly well for many tasks. But it has a ceiling.</p>
<p>Ask Claude to explain a Rails model and it reads through the file, piecing together what the associations mean, inferring what the enum values map to, and hoping it catches all the scopes. It&rsquo;s reading text and making educated guesses about structure. Ask it to find all callers of a private method before you delete it and it&rsquo;s doing a best-effort text scan that can miss dynamic calls or alias references.</p>
<p>That ceiling just got raised. Ruby LSP now has official support in Claude Code.</p>
<h2 id="what-is-a-language-server-protocol">What Is the Language Server Protocol?</h2>
<p>LSP is a standard that separates language intelligence from the editor. Before LSP, every editor had to implement its own understanding of every language. After LSP, you write the intelligence once as a language server, and any editor that speaks the protocol gets the benefits.</p>
<p>The language server sits alongside your code and maintains a live semantic model of your project. It tracks definitions, references, and symbol relationships across your entire project. When your editor asks &ldquo;where is this method defined?&rdquo; or &ldquo;what are all the callers of this function?&rdquo;, the server answers in milliseconds with accurate, complete information.</p>
<p><a href="https://github.com/Shopify/ruby-lsp">Ruby LSP</a> is Shopify&rsquo;s language server for Ruby. It integrates with Standard Ruby for diagnostics. In codebases that use <a href="https://sorbet.org/">Sorbet</a> or <a href="https://github.com/soutaro/steep">Steep</a>, those tools run as separate language servers and provide type information independently of Ruby LSP. Beyond the core server, there&rsquo;s a growing community of add-on developers building extensions for it. <a href="https://github.com/johansenja/awesome-ruby-lsp">awesome-ruby-lsp</a> is a good starting point if you want to see what&rsquo;s available.</p>
<h2 id="what-you-get">What You Get</h2>
<p>With Ruby LSP connected, Claude gains semantic code intelligence instead of relying on text search. Here&rsquo;s what that means in practice.</p>
<p><strong>Go to definition.</strong> Claude can jump directly to where a method, class, or constant is defined, including inside gems. No more manually tracking down gem source paths.</p>
<p><strong>Find all references.</strong> Before a refactor, Claude can ask the language server for every reference to a symbol across your entire project. Dynamic calls included, because the LSP understands Ruby&rsquo;s metaprogramming patterns well enough to handle common cases.</p>
<p><strong>Call hierarchy.</strong> Claude can trace the full call chain for a method, both incoming callers and outgoing calls, across your entire project. That&rsquo;s useful when you need to understand the impact of a change before making it.</p>
<p><strong>Hover documentation.</strong> Claude can pull the YARD documentation for any method without having to search for it manually.</p>
<p>The difference shows up most in refactoring. When you ask Claude to rename a method or change a signature, it can now ask the LSP for the complete reference list before making a single change. That&rsquo;s the difference between a confident refactor and a hopeful one.</p>
<h2 id="installing-ruby-lsp-in-claude-code">Installing Ruby LSP in Claude Code</h2>
<p>You&rsquo;ll need Ruby LSP installed as a gem first:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>gem install ruby-lsp
</span></span></code></pre></div><p>If you&rsquo;re working in a bundled project, add it to your <code>Gemfile</code> in the development group:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-ruby" data-lang="ruby"><span style="display:flex;"><span>group <span style="color:#a6e3a1">:development</span> <span style="color:#cba6f7">do</span>
</span></span><span style="display:flex;"><span>  gem <span style="color:#a6e3a1">&#34;ruby-lsp&#34;</span>
</span></span><span style="display:flex;"><span><span style="color:#cba6f7">end</span>
</span></span></code></pre></div><p>Then install the plugin from inside Claude Code:</p>
<pre tabindex="0"><code>/plugin install ruby-lsp@claude-plugins-official
</code></pre><p>That&rsquo;s it. The next time you start a Claude Code session in a Ruby project, the LSP server starts alongside it. You don&rsquo;t need to configure anything. Install it once and it works.</p>
<h2 id="what-changes-in-practice">What Changes in Practice</h2>
<p>Here&rsquo;s what it looks like in a real session. I asked Claude about a model in a Rails app:</p>
<pre tabindex="0"><code>❯ Use ruby-lsp and tell me about the membership model

⏺ Searched for 1 pattern (ctrl+o to expand)

⏺ LSP(operation: &#34;documentSymbol&#34;, file: &#34;app/models/membership.rb&#34;)
  ⎿  Found 6 symbols (ctrl+o to expand)

⏺ Read 1 file (ctrl+o to expand)

⏺ Now let me get hover info on a few key symbols:

⏺ LSP(operation: &#34;hover&#34;, symbol: &#34;enum&#34;, in: &#34;app/models/membership.rb&#34;)
  ⎿  Hover info available (ctrl+o to expand)

⏺ LSP(operation: &#34;findReferences&#34;, symbol: &#34;Membership&#34;, in: &#34;app/models/membership.rb&#34;)
  ⎿  Found 24 references across 12 files (ctrl+o to expand)

⏺ Here&#39;s a summary of the Membership model:

  app/models/membership.rb

  A join model representing a user&#39;s membership in a tenant (organization).

  Key traits:
  - acts_as_tenant :tenant — multi-tenant scoped via ActsAsTenant
  - belongs_to :tenant, belongs_to :user
  - Roles (integer enum): member (1), admin (2), owner (3)
  - scope :owners — filters members with the owner role
  - Validations: user_id must be unique per tenant (one membership per user per org), role must be present

  Used in 12 files across:
  - Controllers: invitations, memberships, organization settings
  - Services: AcceptInvitation, ConfirmUserEmail
  - Specs: model spec, tenant isolation spec, system specs

  The model is minimal — it&#39;s purely a pivot between User and Tenant with role-based access control baked in.
</code></pre><p>That&rsquo;s a thorough summary. But I was curious what actually came from the LSP versus what Claude found by reading the file. So I asked:</p>
<pre tabindex="0"><code>❯ What information did you get from the lsp vs your own search

⏺ Good question. Here&#39;s the breakdown:

  From LSP:
  - documentSymbol — gave me the class name, method names, and line numbers without reading the file
    (associations, scope, validations)
  - hover on enum — confirmed it&#39;s ActiveRecord::Enum#enum with a link to the gem source definition
  - findReferences — found all 24 references across 12 files without me doing any grep search

  From my own file read (Read tool):
  - The actual source code content — the enum values (member: 1, admin: 2, owner: 3), the acts_as_tenant
    call, the uniqueness scope on the validation (scope: :tenant_id)

  In practice:
  - LSP gave me structure and cross-references (what symbols exist, where they&#39;re used project-wide)
  - The file read gave me content (the actual values and implementation details)

  The documentSymbol result notably omitted the acts_as_tenant call and the enum values — it only surfaced
  Rails macro calls it recognized as meaningful symbols. So LSP alone would have left me with an incomplete
  picture of the model.
</code></pre><p>That second turn is the honest answer. LSP and file reads are complementary. The LSP gives Claude structure and cross-references instantly: what symbols exist, where they&rsquo;re called across the project, without spending context window tokens on file searches. The file read fills in the content: actual values, implementation details, anything the symbol index doesn&rsquo;t capture. Together they produce a more complete picture than either alone.</p>
<p>That kind of structural understanding also changes how refactoring works. When you ask Claude to rename a class or change how something is called across the project, it queries the LSP for the complete reference list before making a single change. It isn&rsquo;t guessing at usage patterns. It knows them.</p>
<p>One thing worth knowing: Ruby LSP works well without a type system. In my testing without Sorbet or Steep, the heuristic inference still produced accurate results for associations, method definitions, and references. If your codebase does use Sorbet or Steep, those type checkers can run alongside the LSP to improve accuracy further, but they&rsquo;re not required to get value from this integration.</p>
<p>Text search got Claude surprisingly far. Semantic understanding gets it the rest of the way.</p>
<p>If you want help getting Claude Code dialed in for your Ruby workflow, whether that&rsquo;s 1:1 or across your team, <a href="/claude-code/">take a look at how I work with developers and teams</a>.</p>
<h2 id="further-reading">Further Reading</h2>
<ul>
<li><a href="https://github.com/Shopify/ruby-lsp">Ruby LSP on GitHub</a></li>
<li><a href="https://shopify.github.io/ruby-lsp/">Ruby LSP documentation</a></li>
<li><a href="https://github.com/johansenja/awesome-ruby-lsp">awesome-ruby-lsp</a></li>
<li><a href="https://microsoft.github.io/language-server-protocol/">Language Server Protocol specification</a></li>
<li><a href="https://code.claude.com/docs/en/discover-plugins">Claude Code plugins documentation</a></li>
</ul>
]]></content:encoded></item><item><title>Building a Linear-Driven Agent Loop with Claude Code</title><link>https://www.damiangalarza.com/posts/2026-02-13-linear-agent-loop/</link><pubDate>Fri, 13 Feb 2026 00:00:00 -0500</pubDate><author>Damian Galarza</author><guid>https://www.damiangalarza.com/posts/2026-02-13-linear-agent-loop/</guid><description>How I built a bash-based agent loop that pulls work from Linear, implements features, runs code review, and opens pull requests autonomously.</description><content:encoded><![CDATA[<p>In December, the developer community on X was buzzing about Ralph Wiggum. If you missed it, Anthropic&rsquo;s Claude Code plugins had a plugin called <a href="https://github.com/anthropics/claude-code/tree/main/plugins/ralph-wiggum">Ralph Wiggum</a>. In the README it&rsquo;s described as:</p>
<blockquote>
<p>Ralph is a development methodology based on continuous AI agent loops. As Geoffrey Huntley describes it: &ldquo;Ralph is a Bash loop&rdquo; - a simple while true that repeatedly feeds an AI agent a prompt file, allowing it to iteratively improve its work until completion.</p></blockquote>
<p>This was used in a variety of ways. Two common ones were:</p>
<ol>
<li>Unleash an agent to work on a single task on its own until it was done.</li>
<li>Unleash an agent to iterate through a backlog of work until it had completed all of it.</li>
</ol>
<p>Today we&rsquo;re going to explore the second one, using an agent loop to iterate through a project backlog.</p>
<h2 id="where-ralph-wiggum-falls-flat">Where Ralph Wiggum Falls Flat</h2>
<p>The Ralph Wiggum plugin provides a command you call inside Claude Code. The session continues until a set of requirements have been met, at which point the loop exits. For example:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>/ralph-loop <span style="color:#a6e3a1">&#34;Build a REST API for todos. Requirements: CRUD operations, input validation, bin/rails test and bin/rails lint must pass. Output &lt;promise&gt;COMPLETE&lt;/promise&gt; when done.&#34;</span>
</span></span></code></pre></div><p>There is a drawback to this approach though. Running the loop inside a Claude Code session means we&rsquo;re eating away at our context window. If you&rsquo;ve read my blog post on <a href="/posts/2025-12-08-understanding-claude-code-context-window">Understanding Claude Code&rsquo;s Context Window</a> then you know that this can cause poor results as time goes on. It gets considerably worse if you loop through multiple pieces of work: the agent&rsquo;s context window carries remnants of every previous task, and context rot sets in as unrelated streams of work accumulate.</p>
<p>There is a solution though.</p>
<h2 id="bash-loops">Bash Loops</h2>
<p>Instead of running a Ralph Wiggum loop inside of the Claude Code instance, we can loop inside bash. In this version every iteration of the loop starts with a fresh context window, avoiding issues with context rot. This works via the <code>--dangerously-skip-permissions</code> flag, which allows Claude Code to run non-interactively without prompting for tool approvals. An example loop looks something like:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#cba6f7">while</span> true; <span style="color:#cba6f7">do</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f5e0dc">SESSION</span><span style="color:#89dceb;font-weight:bold">=</span><span style="color:#cba6f7">$((</span>SESSION <span style="color:#89dceb;font-weight:bold">+</span> <span style="color:#fab387">1</span><span style="color:#cba6f7">))</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f5e0dc">TIMESTAMP</span><span style="color:#89dceb;font-weight:bold">=</span><span style="color:#cba6f7">$(</span>date +%Y%m%d_%H%M%S<span style="color:#cba6f7">)</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f5e0dc">COMMIT</span><span style="color:#89dceb;font-weight:bold">=</span><span style="color:#cba6f7">$(</span>git rev-parse --short<span style="color:#89dceb;font-weight:bold">=</span><span style="color:#fab387">6</span> HEAD 2&gt;/dev/null <span style="color:#89dceb;font-weight:bold">||</span> <span style="color:#89dceb">echo</span> <span style="color:#a6e3a1">&#34;no-git&#34;</span><span style="color:#cba6f7">)</span>
</span></span><span style="display:flex;"><span>  <span style="color:#f5e0dc">LOGFILE</span><span style="color:#89dceb;font-weight:bold">=</span><span style="color:#a6e3a1">&#34;</span><span style="color:#a6e3a1">${</span><span style="color:#f5e0dc">LOG_DIR</span><span style="color:#a6e3a1">}</span><span style="color:#a6e3a1">/</span><span style="color:#a6e3a1">${</span><span style="color:#f5e0dc">AGENT_NAME</span><span style="color:#a6e3a1">}</span><span style="color:#a6e3a1">_</span><span style="color:#a6e3a1">${</span><span style="color:#f5e0dc">TIMESTAMP</span><span style="color:#a6e3a1">}</span><span style="color:#a6e3a1">_</span><span style="color:#a6e3a1">${</span><span style="color:#f5e0dc">COMMIT</span><span style="color:#a6e3a1">}</span><span style="color:#a6e3a1">.log&#34;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>  <span style="color:#89dceb">echo</span> <span style="color:#a6e3a1">&#34;--- Session #</span><span style="color:#a6e3a1">${</span><span style="color:#f5e0dc">SESSION</span><span style="color:#a6e3a1">}</span><span style="color:#a6e3a1"> starting at </span><span style="color:#cba6f7">$(</span>date<span style="color:#cba6f7">)</span><span style="color:#a6e3a1"> ---&#34;</span>
</span></span><span style="display:flex;"><span>  <span style="color:#89dceb">echo</span> <span style="color:#a6e3a1">&#34;    Log: </span><span style="color:#f5e0dc">$LOGFILE</span><span style="color:#a6e3a1">&#34;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>  claude --dangerously-skip-permissions <span style="color:#89b4fa">\
</span></span></span><span style="display:flex;"><span><span style="color:#89b4fa"></span>    -p <span style="color:#a6e3a1">&#34;</span><span style="color:#cba6f7">$(</span>cat <span style="color:#a6e3a1">&#34;</span><span style="color:#f5e0dc">$PROMPT_FILE</span><span style="color:#a6e3a1">&#34;</span><span style="color:#cba6f7">)</span><span style="color:#a6e3a1">&#34;</span> <span style="color:#89b4fa">\
</span></span></span><span style="display:flex;"><span><span style="color:#89b4fa"></span>    --model <span style="color:#a6e3a1">&#34;</span><span style="color:#f5e0dc">$MODEL</span><span style="color:#a6e3a1">&#34;</span> <span style="color:#89b4fa">\
</span></span></span><span style="display:flex;"><span><span style="color:#89b4fa"></span>    &amp;&gt;<span style="color:#a6e3a1">&#34;</span><span style="color:#f5e0dc">$LOGFILE</span><span style="color:#a6e3a1">&#34;</span> <span style="color:#89dceb;font-weight:bold">||</span> <span style="color:#89dceb">true</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>  <span style="color:#89dceb">echo</span> <span style="color:#a6e3a1">&#34;    Session #</span><span style="color:#a6e3a1">${</span><span style="color:#f5e0dc">SESSION</span><span style="color:#a6e3a1">}</span><span style="color:#a6e3a1"> ended at </span><span style="color:#cba6f7">$(</span>date<span style="color:#cba6f7">)</span><span style="color:#a6e3a1">&#34;</span>
</span></span><span style="display:flex;"><span>  <span style="color:#89dceb">echo</span> <span style="color:#a6e3a1">&#34;&#34;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>  <span style="color:#6c7086;font-style:italic"># Brief pause between sessions to avoid hammering if something is broken</span>
</span></span><span style="display:flex;"><span>  sleep <span style="color:#fab387">5</span>
</span></span><span style="display:flex;"><span><span style="color:#cba6f7">done</span>
</span></span></code></pre></div><p>The <code>$PROMPT_FILE</code> is where the real work gets defined. It&rsquo;s a markdown file that tells the agent exactly what to do during each session. Mine walks the agent through a full lifecycle: orient itself on the project, pick up the next issue from Linear, build the feature, run a code review with subagents, and open a pull request. It also includes guardrails like one issue per session, never break main, and what to do if blocked or stuck for more than 15 minutes.</p>
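<p>To make that concrete, here&rsquo;s a condensed sketch of what a prompt file like this can look like. It&rsquo;s an illustration of the shape described above, not my actual file:</p>

```markdown
# Agent Session Instructions

1. Read PROGRESS.md to orient yourself on the project.
2. Pick up the next issue from Linear ("Todo" first, then the backlog).
3. Move it to "In Progress", create a branch, and build the feature.
4. Run the test suite and linters; the task is not done until both pass.
5. Spawn code-review subagents and address their feedback.
6. Open a pull request and move the issue to "Done".
7. Append a summary of what you accomplished to PROGRESS.md.

## Guardrails

- One issue per session. Never break main.
- If blocked or stuck for more than 15 minutes, comment on the issue
  explaining why, move it back to "Todo", and end the session.
```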
<p>Let&rsquo;s walk through how each of these pieces works in practice.</p>
<h2 id="how-it-all-fits-together">How It All Fits Together</h2>
<p>I decided to give this a try on my recent project CreatorSignal that I&rsquo;ve been building during my <a href="https://www.youtube.com/@damian.galarza/streams">live streams</a>. While I&rsquo;ve seen many people maintaining their backlogs in markdown files or custom Kanban board experiences within Claude Code, I prefer using <a href="https://linear.app/">Linear</a>. I didn&rsquo;t want to recreate a task management system just for the agent loop. With the <a href="https://linear.app/docs/mcp">Linear MCP</a> in hand, here&rsquo;s how I set it up.</p>
<h3 id="progressmd">PROGRESS.md</h3>
<p>One of the core pieces is the <code>PROGRESS.md</code> file. While the individual tasks are tracked and maintained in Linear, this file is meant to serve as a sort of &ldquo;memory&rdquo; for the agents to understand what has been accomplished from a more holistic level. At the start of each loop, the <code>PROGRESS.md</code> file is read in. At the end of a loop, the agent writes to it what it has accomplished.</p>
<p>Example:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-markdown" data-lang="markdown"><span style="display:flex;"><span><span style="color:#fab387;font-weight:bold"># Progress
</span></span></span><span style="display:flex;"><span><span style="color:#fab387;font-weight:bold"></span>
</span></span><span style="display:flex;"><span><span style="color:#fab387;font-weight:bold">## 2026-02-13
</span></span></span><span style="display:flex;"><span><span style="color:#fab387;font-weight:bold"></span>
</span></span><span style="display:flex;"><span><span style="color:#fab387;font-weight:bold">### PRX-27: Billing portal (Stripe Customer Portal integration) — DONE
</span></span></span><span style="display:flex;"><span><span style="color:#fab387;font-weight:bold"></span><span style="color:#cba6f7">-</span> Created <span style="color:#a6e3a1">`BillingPortalController`</span> with <span style="color:#a6e3a1">`show`</span> and <span style="color:#a6e3a1">`create`</span> actions
</span></span><span style="display:flex;"><span><span style="color:#cba6f7">-</span> Billing page displays current plan, price, next billing date
</span></span><span style="display:flex;"><span><span style="color:#cba6f7">-</span> &#34;Manage Subscription&#34; button creates Stripe BillingPortal::Session and redirects
</span></span><span style="display:flex;"><span><span style="color:#cba6f7">-</span> Free users see upgrade CTA; former subscribers can still access portal for invoices
</span></span><span style="display:flex;"><span><span style="color:#cba6f7">-</span> Cancellation pending state shown with reactivation option
</span></span><span style="display:flex;"><span><span style="color:#cba6f7">-</span> 11 request specs + 6 system specs, all passing (266 total)
</span></span><span style="display:flex;"><span><span style="color:#cba6f7">-</span> PR: https://github.com/dgalarza/CreatorSignal/pull/31
</span></span><span style="display:flex;"><span><span style="color:#cba6f7">-</span> Branch based on PRX-25 (chain: PRX-17 → PRX-23 → PRX-24 → PRX-25 → PRX-27)
</span></span></code></pre></div><h3 id="implementing-an-issue">Implementing an Issue</h3>
<p>Using the Linear MCP, the agent finds the next highest priority issue to work on. It starts by looking at the &ldquo;Todo&rdquo; column and picks the next one up. If there&rsquo;s nothing in Todo, it checks the backlog instead. From there it reads the issue&rsquo;s details to understand the work that needs to be done. For the loop to work well, issues need to be spec&rsquo;d out thoroughly. This gives the agent the highest chance of performing quality work without human supervision.</p>
<p>With an issue selected, the agent moves it to &ldquo;In Progress&rdquo;, creates a branch, and starts building. A task is not considered &ldquo;done&rdquo; unless the test suite and linters both pass. This is another critical piece for a successful agent loop. The agent must have solid ways of verifying its own work. Without automated checks, it&rsquo;s difficult for the agent to understand success, and quality drops.</p>
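<p>That &ldquo;done&rdquo; gate is simple to express in shell. The functions below are stand-ins for a project&rsquo;s real commands (for a Rails app, something like <code>bin/rails test</code> plus a linter); the point is that both checks must pass before the agent can consider the task complete:</p>

```shell
# stand-ins for the project's real verification commands
run_tests() { true; }   # e.g. bin/rails test
run_lint()  { true; }   # e.g. a linter such as standardrb

if run_tests && run_lint; then
  echo "task can be marked done"
else
  echo "keep working"   # a failure sends the agent back to implementation
fi
```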
<p>When the agent believes its work is ready, it comments on the Linear issue with a summary of what it built and moves the issue to &ldquo;In Review&rdquo;.</p>
<h3 id="code-review">Code Review</h3>
<p>Similar to my workflow described in <a href="/posts/2025-11-25-how-i-use-claude-code">How I Use Claude Code</a>, the next step is to spawn subagents to perform code review. The agent uses the <code>Task</code> tool to spin up a reviewer that evaluates the diff against the issue requirements, checking for correctness, test quality, Rails conventions, security, and performance.</p>
<p>The review is posted as a comment on the Linear issue. This provides visibility into the full lifecycle of the work. I can see the main agent&rsquo;s implementation summary alongside the code review feedback. The agent then resolves any feedback it received and posts a final comment on the Linear issue summarizing its decisions.</p>
<h3 id="pull-request">Pull Request</h3>
<p>After the code review process is complete and feedback is addressed, the agent commits the work and opens a pull request. The Linear issue is moved to &ldquo;Done&rdquo;, and the agent writes its progress update to the PROGRESS.md file.</p>
<h3 id="clean-up">Clean Up</h3>
<p>With everything complete, the agent&rsquo;s last instructions are to check out the main branch and rebase against origin/main so that the next loop starts in a fresh state. The loop then exits cleanly. There&rsquo;s a built-in pause after each iteration before the next one starts.</p>
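<p>The reset itself is only a few git commands. Here&rsquo;s a sketch, demonstrated inside a throwaway repository so it&rsquo;s safe to run anywhere (the directory and branch names are made up):</p>

```shell
# build a throwaway remote and clone so the demo touches nothing real
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main remote
git -C remote -c user.email=demo@example.com -c user.name=demo \
  commit -q --allow-empty -m "init"
git clone -q remote work && cd work

# simulate a finished session on a feature branch
git checkout -q -b feature
git -c user.email=demo@example.com -c user.name=demo \
  commit -q --allow-empty -m "feature work"

# the actual end-of-session reset the agent performs:
git checkout -q main
git fetch -q origin
git rebase -q origin/main
git branch --show-current   # prints: main
```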
<h3 id="visibility">Visibility</h3>
<p>This loop proved to work well. I connected Slack to my Linear project so I could see notifications coming in as the agent worked through issues. Each time an issue had its status updated, each time an agent completed its work, and each time an agent received and addressed review feedback, I could see the progress in real time.</p>
<h2 id="improving-on-the-workflow">Improving on the Workflow</h2>
<p>While this initial pass at a loop worked well, there were things I wanted to improve. First, as pull requests were opened and merged, some went stale with merge conflicts, given the speed at which new features were landing. Second, I wanted to leave feedback on a pull request as if I were working with a teammate and have the agent address it as part of the loop.</p>
<p>I solved this by adding a new step to the loop as follows.</p>
<p>Before picking up a new task, the agent runs <code>bin/pr_check</code>. This script looks through my open pull requests for any with the &ldquo;needs-revision&rdquo; label. If none need review feedback addressed, it checks for any that have gone stale with merge conflicts.</p>
<p>If such a PR is found, the agent addresses that one PR and leaves any others for subsequent iterations. So whenever a PR had feedback I wanted addressed, I&rsquo;d leave comments on it and add the &ldquo;needs-revision&rdquo; label. On the next iteration, the agent picks it up and works through the feedback.</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic"># bin/pr_check</span>
</span></span><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic">#</span>
</span></span><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic"># Finds the first open PR that needs attention.</span>
</span></span><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic"># Returns JSON with PR details if one needs work, or empty output if all clean.</span>
</span></span><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic">#</span>
</span></span><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic"># A PR &#34;needs attention&#34; if:</span>
</span></span><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic">#   1. It has merge conflicts (mergeableStatus == CONFLICTING)</span>
</span></span><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic">#   2. It has the &#34;needs-revision&#34; label</span>
</span></span><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic">#</span>
</span></span><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic"># Usage:</span>
</span></span><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic">#   bin/pr_check           # returns JSON or empty</span>
</span></span><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic">#   bin/pr_check --quiet   # exit code only (0 = needs attention, 1 = all clean)</span>
</span></span><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic">#</span>
</span></span><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic"># Output format:</span>
</span></span><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic">#   {</span>
</span></span><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic">#     &#34;number&#34;: 42,</span>
</span></span><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic">#     &#34;branch&#34;: &#34;damian/prx-7-exa-research-tools&#34;,</span>
</span></span><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic">#     &#34;title&#34;: &#34;PRX-7: Exa research tools&#34;,</span>
</span></span><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic">#     &#34;url&#34;: &#34;https://github.com/...&#34;,</span>
</span></span><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic">#     &#34;reason&#34;: &#34;has_feedback&#34;,    # or &#34;conflicting&#34; or &#34;conflicting,has_feedback&#34;</span>
</span></span><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic">#     &#34;conflicting&#34;: true,</span>
</span></span><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic">#     &#34;has_feedback&#34;: true</span>
</span></span><span style="display:flex;"><span><span style="color:#6c7086;font-style:italic">#   }</span>
</span></span></code></pre></div><p>The loop itself is about a 100-line bash script. I&rsquo;ll be adding it to my Claude Code workflows this week and sharing it with my newsletter.</p>
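<p>To illustrate how the PR-check step slots into the loop, here&rsquo;s a hedged sketch of that branch. This is not the author&rsquo;s actual script; <code>bin/pr_check</code> is stubbed with canned JSON and the agent invocation is elided.</p>

```bash
# Illustrative sketch of the PR-triage branch of the loop. The real script is
# ~100 lines; bin/pr_check is stubbed here and the agent invocation is elided.
pr_check() {
  # Stub standing in for bin/pr_check, which looks for open PRs that are
  # conflicting or carry the "needs-revision" label.
  echo '{"number": 42, "reason": "has_feedback"}'
}

next_task() {
  local pr
  pr=$(pr_check)
  if [ -n "$pr" ]; then
    # A PR needs attention: revise it before starting new issue work.
    echo "revise PR: $pr"
  else
    echo "pick next Linear issue"
  fi
}

next_task   # prints: revise PR: {"number": 42, "reason": "has_feedback"}
```

<p>When <code>pr_check</code> returns empty output (all PRs clean), the loop falls through to pulling the next Linear issue instead.</p>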
<h2 id="what-makes-this-work">What Makes This Work</h2>
<p>After running this loop across several sessions, a few things stand out as critical to getting quality results:</p>
<ol>
<li><strong>Fresh context per iteration.</strong> Running the loop in bash instead of inside a Claude Code session means each task gets a clean context window. This is the single biggest difference from the Ralph Wiggum approach.</li>
<li><strong>Well-spec&rsquo;d issues.</strong> The agent is only as good as the instructions it receives. Vague issues produce vague results. Detailed acceptance criteria and clear scope make all the difference.</li>
<li><strong>Automated verification.</strong> Requiring passing tests and linters before a task is considered &ldquo;done&rdquo; gives the agent a concrete definition of success. Without this, quality drops fast.</li>
<li><strong>Linear as the source of truth.</strong> Using an existing project management tool instead of reinventing one means I can see the full lifecycle of every issue, from backlog to done, with comments and status updates along the way.</li>
</ol>
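<p>The verification gate in point 3 can be as simple as a fail-fast wrapper. This is only a sketch; the <code>echo</code> commands are placeholders standing in for a project&rsquo;s real test and lint runners.</p>

```bash
# Minimal sketch of an automated verification gate. The echo commands are
# placeholders; in a real loop these would be the project's actual test and
# lint commands, e.g. bin/rspec and bin/lint.
run_checks() {
  local check
  for check in "$@"; do
    if ! $check; then
      echo "FAILED: $check" >&2
      return 1
    fi
  done
  echo "all checks passed"
}

run_checks "echo tests-pass" "echo lint-pass"
```

<p>The loop only marks an issue &ldquo;done&rdquo; when the gate exits zero, giving the agent a concrete, machine-checkable definition of success.</p>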
<p>The combination of these pieces turns what could be a chaotic autonomous loop into something that produces reviewable, mergeable work. It&rsquo;s not perfect, and I still review every pull request before merging, but the amount of ground it covers between review cycles is significant.</p>
<h2 id="additional-reading">Additional Reading</h2>
<ul>
<li><a href="/posts/2026-02-05-mcps-vs-agent-skills/">MCPs vs Agent Skills: Understanding the Difference</a> - The agent loop relies on the Linear MCP as its backbone. This post covers how MCPs and skills serve different roles in your workflow.</li>
<li><a href="/posts/2025-12-08-understanding-claude-code-context-window/">Understanding Claude Code&rsquo;s Context Window</a> - A deep dive into how the context window works and why fresh context per iteration is so important.</li>
<li><a href="https://youtu.be/Seu7nksZ_4k">How AI Agents Remember Things</a> - The PROGRESS.md pattern is essentially agent memory between loop iterations. This video covers how agents persist context across sessions.</li>
<li><a href="https://youtu.be/tO_Larrawfg">MCPs vs Skills: The Mental Model You&rsquo;re Missing</a> - The video companion to the blog post above, covering the architectural distinction between MCPs and skills.</li>
<li><a href="https://www.youtube.com/playlist?list=PLeevcUmnIRCy8XirmTSbHz71hs31idVC3">Building CreatorSignal</a> - The livestream series where I&rsquo;m building CreatorSignal, the project this agent loop runs against.</li>
</ul>
<p>If you haven&rsquo;t already, sign up for my newsletter for weekly emails on AI Engineering and agentic development workflows.</p>
<hr>
<p>If you&rsquo;re building agent loops or autonomous workflows and want help getting the architecture right, I work with teams on exactly this. <a href="/ai-agents/">Let&rsquo;s talk</a>.</p>
]]></content:encoded></item><item><title>MCPs vs Agent Skills: Understanding the Difference</title><link>https://www.damiangalarza.com/posts/2026-02-05-mcps-vs-agent-skills/</link><pubDate>Thu, 05 Feb 2026 00:00:00 -0500</pubDate><author>Damian Galarza</author><guid>https://www.damiangalarza.com/posts/2026-02-05-mcps-vs-agent-skills/</guid><description>MCPs give Claude capabilities. Skills teach Claude workflows. Here's the mental model I use to decide which one I need.</description><content:encoded><![CDATA[<p>&ldquo;Should I build a skill or an MCP for this?&rdquo;</p>
<p>I&rsquo;ve been asked this question a lot since Anthropic announced Agent Skills back in October 2025. And honestly, the confusion makes sense. Both extend Claude Code&rsquo;s capabilities. Both can connect to external services. Skills can even run scripts, which sounds a lot like what MCPs do.</p>
<p>But once you understand the mental model, the distinction becomes obvious. Let&rsquo;s break it down.</p>
<h2 id="what-mcps-actually-do">What MCPs Actually Do</h2>
<p>Model Context Protocol is an open standard for connecting AI applications to external systems. It&rsquo;s the plumbing that connects Claude to the outside world by exposing tools that can read data, execute actions, and interact with external services.</p>
<p>For example, you can add the Linear MCP and give Claude the ability to read and create issues, or add the Sentry MCP so it can query errors. These are capabilities Claude didn&rsquo;t have before. MCPs extend what Claude can do.</p>
<p>There&rsquo;s something you need to consider when adding MCPs though: every MCP you add to Claude Code takes up space in your context window just by being available. Not just when it&rsquo;s used, but constantly. If you&rsquo;ve read my post on <a href="/posts/2025-12-08-understanding-claude-code-context-window/">Understanding Claude Code&rsquo;s Context Window</a>, you know this matters a lot.</p>
<h3 id="the-anatomy-of-an-mcp-tool">The Anatomy of an MCP Tool</h3>
<p>Every MCP tool exposes information to the LLM so it knows when and how to use it. Here&rsquo;s what Claude sees when the Linear MCP is configured:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>│ get_issue <span style="color:#89dceb;font-weight:bold">(</span>linear-server<span style="color:#89dceb;font-weight:bold">)</span> <span style="color:#89dceb;font-weight:bold">[</span>read-only<span style="color:#89dceb;font-weight:bold">]</span>                                        │
</span></span><span style="display:flex;"><span>│ Tool name: get_issue                                                         │
</span></span><span style="display:flex;"><span>│ Full name: mcp__linear-server__get_issue                                     │
</span></span><span style="display:flex;"><span>│                                                                              │
</span></span><span style="display:flex;"><span>│ Description:                                                                 │
</span></span><span style="display:flex;"><span>│ Retrieve detailed information about an issue by ID, including attachments   │
</span></span><span style="display:flex;"><span>│ and git branch name                                                          │
</span></span><span style="display:flex;"><span>│                                                                              │
</span></span><span style="display:flex;"><span>│ Parameters:                                                                  │
</span></span><span style="display:flex;"><span>│   • id <span style="color:#89dceb;font-weight:bold">(</span>required<span style="color:#89dceb;font-weight:bold">)</span>: string - The issue ID                                     │
</span></span><span style="display:flex;"><span>│   • includeRelations: boolean - Whether to include blocking, related,        │
</span></span><span style="display:flex;"><span>│     and duplicate relations in the response                                  │
</span></span></code></pre></div><p>The description tells the LLM when and why to use the tool. Some descriptions are verbose, which means they consume more tokens on every single message. The parameter schema is typically JSON that defines the tool&rsquo;s inputs. And the tool name is what the LLM calls to invoke it.</p>
<p>Here&rsquo;s why this matters: in <a href="/posts/2025-11-06-build-efficient-mcp-servers-three-design-principles/">Build Efficient MCP Servers: Three Design Principles</a>, I showed how a Claude Code session can have 24% or more of the context window consumed by MCP tool definitions before you&rsquo;ve even started a conversation. Add a few feature-rich MCP servers and you&rsquo;ve got precious little space left for actual work.</p>
<p>This used to create a hard practical limit. Too many MCPs and the model would get confused, more likely to pick wrong actions. Anthropic addressed this in January 2026 with <a href="https://x.com/trq212/status/2011523109871108570">MCP Tool Search</a>, which dynamically loads MCP tools on-demand when they would consume more than 10% of context. This helps, but the underlying tension remains: MCP tool definitions compete for context space, which is why skills use a different approach entirely.</p>
<h3 id="the-key-characteristics">The Key Characteristics</h3>
<p>MCPs are:</p>
<ul>
<li><strong>Single-purpose tools</strong> - Each tool does one specific thing</li>
<li><strong>Autonomous</strong> - Claude can call them directly without any instruction from you</li>
<li><strong>Always loaded</strong> - Tool descriptions are in context on every message (or dynamically loaded via MCP Tool Search)</li>
<li><strong>Bidirectional</strong> - Can read from and write to external systems</li>
</ul>
<p>When you ask Claude &ldquo;What&rsquo;s the status of issue TRA-123?&rdquo;, it can autonomously decide to call the Linear MCP to fetch that information. No skill needed, no special invocation. The capability is just there.</p>
<h2 id="what-agent-skills-actually-do">What Agent Skills Actually Do</h2>
<p>Since the original announcement, Anthropic has released Agent Skills as an open standard, and other tools like GitHub Copilot and Cursor now support them as well.</p>
<p>At first glance, skills look simple. They&rsquo;re essentially a folder with some markdown files and optionally some scripts:</p>
<pre tabindex="0"><code>my-skill/
├── SKILL.md           # Main instructions (required)
├── reference.md       # Detailed docs (loaded as needed)
├── examples.md        # Usage examples (loaded as needed)
└── scripts/
    └── helper.py      # Executable scripts (run, not loaded)
</code></pre><p>Skills typically live in <code>.claude/skills/</code> within your project or <code>~/.claude/skills/</code> for global availability.</p>
<p>Skills can execute code. But that&rsquo;s not what makes them special. What makes them special is orchestration. They compose multiple capabilities into a defined workflow.</p>
<p>A tool lets Claude query your database. A skill teaches Claude your company&rsquo;s specific data model, your naming conventions, your rollback procedures. MCPs are verbs. Skills are playbooks.</p>
<h3 id="the-four-flavors-of-skills">The Four Flavors of Skills</h3>
<p>In my experience, skills tend to fall into four categories:</p>
<p><strong>Specialized workflows</strong> are multi-step procedures for specific domains. Things like a TDD workflow, a PR review process, or a deployment checklist. These are the skills I use most often.</p>
<p><strong>Tool integrations</strong> are instructions for working with specific file formats or APIs. Maybe you need Claude to know how to process DOCX files, manipulate PDFs, or query BigQuery a specific way.</p>
<p><strong>Domain expertise</strong> captures company-specific knowledge. Your data model, your naming conventions, your rollback procedures. The stuff that lives in tribal knowledge.</p>
<p><strong>Knowledge retrieval</strong> bundles reference documentation that Claude can access on demand. API specs, style guides, architectural decision records. Rather than stuffing everything into CLAUDE.md, you package it into a skill that loads only when relevant.</p>
<h3 id="why-skills-exist-progressive-disclosure">Why Skills Exist: Progressive Disclosure</h3>
<p>The key design principle behind skills is progressive disclosure. Unlike MCPs where tool definitions are always present, skills only load their full content when invoked.</p>
<p>The most basic skill is a folder with a SKILL.md file. This file contains YAML frontmatter with metadata (name and description) followed by the actual instructions. For any given skill, only the metadata is persistently available. The description tells the LLM when to invoke the skill, so you need to capture the right semantics for the agent to pick it up appropriately.</p>
<p>Once the skill is invoked, the LLM loads the rest of the SKILL.md file into context and follows its instructions. You can also break skills into separate resource files for different scenarios or workflows. This lets you keep context lean by loading only what&rsquo;s needed for the current task.</p>
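<p>To make this concrete, here&rsquo;s a hedged sketch of a minimal skill on disk. The skill name, description, and steps are invented for illustration; only the <code>name</code> and <code>description</code> frontmatter fields and the <code>.claude/skills/</code> location come from the structure described above.</p>

```bash
# Create a minimal, illustrative skill. Only the frontmatter (name and
# description) stays persistently in context; the numbered steps load when
# the skill is invoked.
mkdir -p .claude/skills/tdd-workflow
cat > .claude/skills/tdd-workflow/SKILL.md <<'EOF'
---
name: tdd-workflow
description: Use when implementing a feature test-first. Writes a failing spec, makes it pass, then refactors.
---

1. Write a failing spec for the requested behavior.
2. Run the test suite and confirm the new spec fails.
3. Implement the minimal code to make it pass.
4. Refactor while keeping all specs green.
EOF
```

<p>Note how the description does double duty: it&rsquo;s the only part the agent sees before invocation, so it has to carry enough semantics for the agent to know when the skill applies.</p>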
<h3 id="what-this-looks-like-in-practice">What This Looks Like in Practice</h3>
<p>In <a href="/posts/2025-11-25-how-i-use-claude-code/">How I Use Claude Code: My Complete Development Workflow</a>, I described my <code>linear-implement</code> skill that takes a Linear issue and implements a solution following TDD. Here&rsquo;s how the pieces fit together:</p>
<pre tabindex="0"><code>┌──────────────────────────────────────────────────────────────┐
│                        SKILL                                 │
│                 (orchestration layer)                        │
│                                                              │
│  ┌─────────────────────────────────────────────────────────┐ │
│  │ Bundled: scripts/ │ references/ │ assets/               │ │
│  └─────────────────────────────────────────────────────────┘ │
│                                                              │
│    ┌─────────┐  ┌─────────┐  ┌─────────┐  ┌─────────┐       │
│    │  MCP    │  │  Bash   │  │  File   │  │  Web    │       │
│    │ (Linear)│  │ (tests) │  │ (write) │  │ (fetch) │       │
│    └─────────┘  └─────────┘  └─────────┘  └─────────┘       │
└──────────────────────────────────────────────────────────────┘
</code></pre><p>The skill sits on top and orchestrates everything. It calls the Linear MCP to fetch issue details. It runs bash commands to execute tests. It writes code files following TDD. It creates PRs via the GitHub CLI.</p>
<p>Without a skill, Claude can do all these things individually. But you have to orchestrate each step manually. Every session, you re-explain the workflow. With a skill, one command triggers the entire workflow. Consistent process every time. Your expertise encoded into Claude&rsquo;s behavior.</p>
<h2 id="the-claudemd-vs-skills-question">The CLAUDE.md vs Skills Question</h2>
<p>A common point of confusion: when should something go in CLAUDE.md versus a skill?</p>
<p>Here&rsquo;s how I think about it:</p>
<p><strong>CLAUDE.md</strong> is for declarative knowledge. What and why. Background context that Claude should just know. &ldquo;This is Rails 7 with RSpec.&rdquo; &ldquo;We use JSON:API format.&rdquo; &ldquo;Run tests with <code>bin/rspec</code>.&rdquo;</p>
<p><strong>Skills</strong> are for procedural knowledge. How. Multi-step workflows with defined steps. &ldquo;When implementing a feature, follow this TDD workflow&hellip;&rdquo; &ldquo;To deploy, run these 5 steps&hellip;&rdquo;</p>
<p>The analogy that works for me: CLAUDE.md is like an employee handbook (background context). Skills are like training modules (specific procedures).</p>
<p>If you&rsquo;re copy-pasting the same multi-step instructions into chat repeatedly, that&rsquo;s a skill waiting to be created. If it&rsquo;s background context Claude should just know, it belongs in CLAUDE.md.</p>
<p>There&rsquo;s a practical difference too. CLAUDE.md is always loaded in context, so it should stay lean. Skills use progressive disclosure, so they can be extensive without penalty when not in use.</p>
<h2 id="putting-it-together">Putting It Together</h2>
<p>Now that we&rsquo;ve covered what each one does separately, let me show you what it looks like when they work together.</p>
<pre tabindex="0"><code>┌─────────────────────────────────────────────────────────────────┐
│  Prompt: &#34;Help me implement Linear TRA-123&#34;                     │
└─────────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│  SKILL activates (context match)                                │
│  → Loads bundled resources, defines workflow                    │
└─────────────────────────────────────────────────────────────────┘
                              │
                    ┌─────────┴─────────┐
                    ▼                   ▼
              ┌──────────┐       ┌──────────┐
              │   MCP    │       │  Native  │
              │ (Linear) │       │  Tools   │
              │          │       │          │
              │ Fetches  │       │ Bash,    │
              │ issue    │       │ File ops │
              │ details  │       │ for TDD  │
              └──────────┘       └──────────┘
                    │                   │
                    └─────────┬─────────┘
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│  Result: Feature implemented following TDD workflow             │
└─────────────────────────────────────────────────────────────────┘
</code></pre><p>The MCP gives access to Linear (the capability). The skill orchestrates the workflow (the recipe). Each has its role.</p>
<h2 id="the-two-questions">The Two Questions</h2>
<p>You might be thinking there&rsquo;s overlap here. Since skills can have scripts, can&rsquo;t they also connect to external services? Yes, they can. Skills can include scripts that hit APIs, run curl commands, whatever you need. The difference is that these scripts only run in the context of the skill itself. If you need something more general purpose that Claude can call from any context, you want an MCP instead.</p>
<p>When Claude needs to check Linear issues, it can do that anytime, in any context, without any special setup. That&rsquo;s an MCP&rsquo;s job. But when you say &ldquo;implement this feature,&rdquo; you want a specific sequence of steps followed in a specific order. That&rsquo;s a skill&rsquo;s job.</p>
<p>When I need to decide which to use, I ask myself two questions:</p>
<h3 id="question-1-should-claude-be-able-to-call-this-capability-anytime-across-any-context">Question 1: Should Claude be able to call this capability anytime, across any context?</h3>
<p>If yes, you need an MCP.</p>
<p>If only during a specific workflow, a skill with scripts is fine.</p>
<p>Checking Linear issues? That&rsquo;s something Claude might need to do in many different contexts. MCP makes sense. Deploying to staging? That&rsquo;s a specific workflow with defined steps. Skill makes sense.</p>
<h3 id="question-2-is-this-a-repeatable-workflow-with-defined-steps">Question 2: Is this a repeatable workflow with defined steps?</h3>
<p>If yes, build a skill (with or without MCPs).</p>
<p>If no, you might not need either. Just ask Claude directly.</p>
<p>If you find yourself explaining the same multi-step process to Claude repeatedly, that&rsquo;s your signal. That&rsquo;s when you build a skill.</p>
<h3 id="common-patterns">Common Patterns</h3>
<p>Here&rsquo;s how this plays out in practice:</p>
<p><strong>MCP alone</strong>: &ldquo;Check my Linear issues.&rdquo; Claude decides to call it autonomously.</p>
<p><strong>Skill using MCP</strong>: &ldquo;Implement TRA-123.&rdquo; The skill orchestrates the workflow, calling the Linear MCP as one step among many.</p>
<p><strong>Skill with scripts</strong>: &ldquo;Deploy to staging.&rdquo; The workflow runs deploy scripts that hit external services.</p>
<p><strong>Skill without external calls</strong>: &ldquo;Follow our TDD process.&rdquo; Pure internal workflow, no external systems needed.</p>
<h2 id="the-mental-model">The Mental Model</h2>
<p>That&rsquo;s the mental model. MCPs give Claude capabilities. Skills give Claude orchestration. Or to put it another way:</p>
<p><strong>MCPs</strong> = The tools in the toolbox
<strong>Skills</strong> = The recipes that coordinate those tools</p>
<p>MCPs are the plumbing connecting Claude to the outside world. Skills are the playbook of procedural knowledge.</p>
<p>MCPs answer &ldquo;what can Claude access?&rdquo; Skills answer &ldquo;how should Claude approach this task?&rdquo;</p>
<h2 id="getting-started">Getting Started</h2>
<p>If you&rsquo;re just getting started, here&rsquo;s my recommendation:</p>
<p><strong>Start with MCPs.</strong> Find one that connects to a tool you already use. Linear, Sentry, your database, whatever. Install it and start calling it. Get a feel for how Claude uses capabilities autonomously.</p>
<p><strong>Watch for patterns.</strong> When you notice you&rsquo;re asking Claude the same multi-step sequence over and over, that&rsquo;s your signal. That&rsquo;s when you build a skill.</p>
<p><strong>Keep it simple.</strong> Your first skill doesn&rsquo;t need to be complex. Start with a workflow you repeat weekly, document the steps, and let Claude follow them consistently.</p>
<p>If you want to build your own skill from scratch, check out my video <a href="https://youtu.be/7fNOpyke2kw?si=ZotNFdJ8NyKvc4J-">Claude Code Tutorial: Build your first skill in 10 minutes</a> where I walk through creating a TDD workflow skill step by step.</p>
<p>For more examples, sign up for my newsletter. You&rsquo;ll get access to my claude-code-workflows repo on GitHub, which includes several skills I use daily, including the linear-implement workflow that ties everything together.</p>
<h2 id="further-reading">Further Reading</h2>
<ul>
<li><a href="https://youtu.be/tO_Larrawfg?si=o1qMFo5DiC8xz8oA">MCPs vs Skills</a></li>
<li><a href="https://www.anthropic.com/engineering/equipping-agents-for-the-real-world-with-agent-skills">Agent Skills - Anthropic Engineering Blog</a></li>
<li><a href="https://agentskills.io/specification">Agent Skills Specification</a></li>
<li><a href="/posts/2025-12-08-understanding-claude-code-context-window/">Understanding Claude Code&rsquo;s Context Window</a></li>
<li><a href="/posts/2025-11-25-how-i-use-claude-code/">How I Use Claude Code: My Complete Development Workflow</a></li>
<li><a href="/posts/2025-11-06-build-efficient-mcp-servers-three-design-principles/">Build Efficient MCP Servers: Three Design Principles</a></li>
</ul>
<hr>
<p>Working through how to structure your Claude Code setup with the right mix of MCPs and skills? I help engineers and teams design workflows that stick. <a href="/claude-code/">Learn more</a>.</p>
]]></content:encoded></item><item><title>Understanding Claude Code's Context Window</title><link>https://www.damiangalarza.com/posts/2025-12-08-understanding-claude-code-context-window/</link><pubDate>Mon, 08 Dec 2025 00:00:00 -0500</pubDate><author>Damian Galarza</author><guid>https://www.damiangalarza.com/posts/2025-12-08-understanding-claude-code-context-window/</guid><description>How Claude Code's context window works: what consumes tokens (MCP servers, tools, messages), why it matters, and how to manage context effectively.</description><content:encoded><![CDATA[<p>I&rsquo;ve been using Claude Code for some time now, and as I discussed in <a href="/posts/2025-11-25-how-i-use-claude-code/">How I Use Claude Code: My Complete Development Workflow</a>, using AI coding tools effectively is a skill in itself. One of the most important parts of getting value from your AI coding assistant is managing context.</p>
<p>In this post we&rsquo;ll look at how you can make the most of your available context window in Claude Code, as well as some common pitfalls to avoid.</p>
<h2 id="understanding-the-context-window">Understanding the Context Window</h2>
<p>Before we can optimize our developer workflow, we need to understand what the context window is and how it fills up. The context window is how much content a large language model can hold onto at one time. Each model has a predefined limit on the size of its context window. For example, Claude Sonnet 4.5&rsquo;s context window is about 200,000 tokens.</p>
<h3 id="what-is-a-token">What is a Token?</h3>
<p>When you send text to an LLM, it doesn&rsquo;t process words one at a time. Instead, text is broken into <strong>tokens</strong>—the fundamental units that language models read and generate. A token typically represents 3-4 characters, or roughly 0.75 words in English.</p>
<p>For example, the phrase <code>&quot;Hello world&quot;</code> becomes 2-3 tokens, while a compound word like <code>authentication_middleware</code> might be split into 5-7 tokens despite being a single identifier. Code tends to be more token-dense than prose because of special characters, naming conventions, and syntax. This is why reading source files consumes context faster than you might expect.</p>
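<p>The ~4-characters-per-token rule of thumb makes back-of-the-envelope estimates easy. This is only a ballpark heuristic, not a real tokenizer, and code typically tokenizes denser than it suggests.</p>

```bash
# Rough token estimate using the ~4-characters-per-token rule of thumb.
# A real count requires the model's tokenizer; this is only a ballpark.
estimate_tokens() {
  local chars
  chars=$(printf '%s' "$1" | wc -c)
  echo $(( (chars + 3) / 4 ))
}

estimate_tokens "Hello world"               # 11 chars -> ~3 tokens
estimate_tokens "authentication_middleware" # 25 chars -> ~7 tokens
```

<p>Both estimates land inside the ranges above, but a 25-character identifier full of underscores can tokenize worse than the heuristic predicts.</p>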
<h3 id="why-token-efficiency-matters">Why Token Efficiency Matters</h3>
<p>Context windows have limited space, and filling them with code happens fast. But running out of room isn&rsquo;t the only concern. LLMs suffer from a &ldquo;lost in the middle&rdquo; problem. Content at the start and end of the context window gets prioritized, while information in the middle tends to get overlooked. This mirrors how human memory works (we remember beginnings and endings better than middles).</p>
<p><img src="/images/posts/lost-in-the-middle.png" alt="Diagram showing how LLMs prioritize content at the beginning and end of context windows while missing information in the middle"></p>
<p>Additionally, our code isn&rsquo;t the only thing consuming context window space. The window also fills with:</p>
<p><strong>MCP Servers</strong></p>
<p>Every MCP server you add takes up space in your context window just by being available. Every MCP tool definition comes with:</p>
<ol>
<li><strong>Tool name</strong> (e.g., mcp__ynab__get_transactions)</li>
<li><strong>Description</strong> - an explanation of what the tool does so the LLM can understand when it might be needed.</li>
<li><strong>Parameter Schema</strong> - JSON schema definition of all the parameters, types, descriptions and constraints.</li>
<li><strong>Usage notes</strong> - additional instructions and potentially examples to guide the LLM during its tool choice.</li>
</ol>
<p>Let&rsquo;s take a look at an example from the YNAB MCP I built and discussed in <a href="/posts/2025-11-06-build-efficient-mcp-servers-three-design-principles/">Build Efficient MCP Servers: Three Design Principles</a>.</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-json" data-lang="json"><span style="display:flex;"><span>{
</span></span><span style="display:flex;"><span>    <span style="color:#cba6f7">&#34;name&#34;</span>: <span style="color:#a6e3a1">&#34;mcp__ynab__get_transactions&#34;</span>,
</span></span><span style="display:flex;"><span>    <span style="color:#cba6f7">&#34;description&#34;</span>: <span style="color:#a6e3a1">&#34;Get transactions from YNAB budget.\n\n    Retrieves transactions with optional filtering by date
</span></span></span><span style="display:flex;"><span><span style="color:#a6e3a1">   range, account, or category.\n    Returns transaction details including date, amount, payee, category, and
</span></span></span><span style="display:flex;"><span><span style="color:#a6e3a1">  memo.\n\n    Use this tool when you need to:\n    - View recent transactions\n    - Find transactions in a
</span></span></span><span style="display:flex;"><span><span style="color:#a6e3a1">  specific date range\n    - Filter transactions by account or category\n    - Check transaction details for
</span></span></span><span style="display:flex;"><span><span style="color:#a6e3a1">  reconciliation\n\n    Args:\n        budget_id: Budget ID or &#39;last-used&#39; for default budget\n        since_date:
</span></span></span><span style="display:flex;"><span><span style="color:#a6e3a1">  Optional start date (YYYY-MM-DD format)\n        until_date: Optional end date (YYYY-MM-DD format)\n
</span></span></span><span style="display:flex;"><span><span style="color:#a6e3a1">  account_id: Optional account ID to filter by specific account\n        category_id: Optional category ID to filter
</span></span></span><span style="display:flex;"><span><span style="color:#a6e3a1">   by category\n        type: Optional transaction type (&#39;uncategorized&#39;, &#39;unapproved&#39;)\n\n    Returns:\n
</span></span></span><span style="display:flex;"><span><span style="color:#a6e3a1">  JSON array of transactions with:\n        - id: Transaction ID\n        - date: Transaction date\n        -
</span></span></span><span style="display:flex;"><span><span style="color:#a6e3a1">  amount: Amount in milliunits (divide by 1000 for dollars)\n        - memo: Transaction memo\n        - cleared:
</span></span></span><span style="display:flex;"><span><span style="color:#a6e3a1">  Cleared status\n        - approved: Approval status\n        - payee_id: Payee ID\n        - payee_name: Payee
</span></span></span><span style="display:flex;"><span><span style="color:#a6e3a1">  name\n        - category_id: Category ID\n        - category_name: Category name\n        - account_id: Account
</span></span></span><span style="display:flex;"><span><span style="color:#a6e3a1">  ID\n        - account_name: Account name\n\n    Example usage:\n        Get all transactions from November 2024:\n
</span></span></span><span style="display:flex;"><span><span style="color:#a6e3a1">          since_date=&#39;2024-11-01&#39;, until_date=&#39;2024-11-30&#39;\n\n        Get recent uncategorized transactions:\n
</span></span></span><span style="display:flex;"><span><span style="color:#a6e3a1">    type=&#39;uncategorized&#39;\n\n    Note: Amounts are returned in milliunits. Divide by 1000 to get dollar amounts.\n
</span></span></span><span style="display:flex;"><span><span style="color:#a6e3a1">   &#34;</span>,
</span></span><span style="display:flex;"><span>    <span style="color:#cba6f7">&#34;parameters&#34;</span>: {
</span></span><span style="display:flex;"><span>      <span style="color:#cba6f7">&#34;type&#34;</span>: <span style="color:#a6e3a1">&#34;object&#34;</span>,
</span></span><span style="display:flex;"><span>      <span style="color:#cba6f7">&#34;properties&#34;</span>: {
</span></span><span style="display:flex;"><span>        <span style="color:#cba6f7">&#34;budget_id&#34;</span>: {
</span></span><span style="display:flex;"><span>          <span style="color:#cba6f7">&#34;type&#34;</span>: <span style="color:#a6e3a1">&#34;string&#34;</span>,
</span></span><span style="display:flex;"><span>          <span style="color:#cba6f7">&#34;description&#34;</span>: <span style="color:#a6e3a1">&#34;Budget ID or &#39;last-used&#39; for default budget&#34;</span>
</span></span><span style="display:flex;"><span>        },
</span></span><span style="display:flex;"><span>        <span style="color:#cba6f7">&#34;since_date&#34;</span>: {
</span></span><span style="display:flex;"><span>          <span style="color:#cba6f7">&#34;type&#34;</span>: <span style="color:#a6e3a1">&#34;string&#34;</span>,
</span></span><span style="display:flex;"><span>          <span style="color:#cba6f7">&#34;description&#34;</span>: <span style="color:#a6e3a1">&#34;Optional start date (YYYY-MM-DD format)&#34;</span>,
</span></span><span style="display:flex;"><span>          <span style="color:#cba6f7">&#34;format&#34;</span>: <span style="color:#a6e3a1">&#34;date&#34;</span>
</span></span><span style="display:flex;"><span>        },
</span></span><span style="display:flex;"><span>        <span style="color:#cba6f7">&#34;until_date&#34;</span>: {
</span></span><span style="display:flex;"><span>          <span style="color:#cba6f7">&#34;type&#34;</span>: <span style="color:#a6e3a1">&#34;string&#34;</span>,
</span></span><span style="display:flex;"><span>          <span style="color:#cba6f7">&#34;description&#34;</span>: <span style="color:#a6e3a1">&#34;Optional end date (YYYY-MM-DD format)&#34;</span>,
</span></span><span style="display:flex;"><span>          <span style="color:#cba6f7">&#34;format&#34;</span>: <span style="color:#a6e3a1">&#34;date&#34;</span>
</span></span><span style="display:flex;"><span>        },
</span></span><span style="display:flex;"><span>        <span style="color:#cba6f7">&#34;account_id&#34;</span>: {
</span></span><span style="display:flex;"><span>          <span style="color:#cba6f7">&#34;type&#34;</span>: <span style="color:#a6e3a1">&#34;string&#34;</span>,
</span></span><span style="display:flex;"><span>          <span style="color:#cba6f7">&#34;description&#34;</span>: <span style="color:#a6e3a1">&#34;Optional account ID to filter by&#34;</span>
</span></span><span style="display:flex;"><span>        },
</span></span><span style="display:flex;"><span>        <span style="color:#cba6f7">&#34;category_id&#34;</span>: {
</span></span><span style="display:flex;"><span>          <span style="color:#cba6f7">&#34;type&#34;</span>: <span style="color:#a6e3a1">&#34;string&#34;</span>,
</span></span><span style="display:flex;"><span>          <span style="color:#cba6f7">&#34;description&#34;</span>: <span style="color:#a6e3a1">&#34;Optional category ID to filter by&#34;</span>
</span></span><span style="display:flex;"><span>        },
</span></span><span style="display:flex;"><span>        <span style="color:#cba6f7">&#34;type&#34;</span>: {
</span></span><span style="display:flex;"><span>          <span style="color:#cba6f7">&#34;type&#34;</span>: <span style="color:#a6e3a1">&#34;string&#34;</span>,
</span></span><span style="display:flex;"><span>          <span style="color:#cba6f7">&#34;enum&#34;</span>: [<span style="color:#a6e3a1">&#34;uncategorized&#34;</span>, <span style="color:#a6e3a1">&#34;unapproved&#34;</span>],
</span></span><span style="display:flex;"><span>          <span style="color:#cba6f7">&#34;description&#34;</span>: <span style="color:#a6e3a1">&#34;Optional transaction type filter&#34;</span>
</span></span><span style="display:flex;"><span>        }
</span></span><span style="display:flex;"><span>      },
</span></span><span style="display:flex;"><span>      <span style="color:#cba6f7">&#34;required&#34;</span>: [<span style="color:#a6e3a1">&#34;budget_id&#34;</span>],
</span></span><span style="display:flex;"><span>      <span style="color:#cba6f7">&#34;title&#34;</span>: <span style="color:#a6e3a1">&#34;GetTransactionsArguments&#34;</span>
</span></span><span style="display:flex;"><span>    }
</span></span><span style="display:flex;"><span>  }
</span></span></code></pre></div><p><strong>Token Breakdown</strong></p>
<table>
  <thead>
      <tr>
          <th>Component</th>
          <th>Tokens</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>Tool name</td>
          <td>8</td>
      </tr>
      <tr>
          <td>Description (entire string)</td>
          <td>430</td>
      </tr>
      <tr>
          <td>Parameters schema</td>
          <td>225</td>
      </tr>
      <tr>
          <td>TOTAL</td>
          <td>~663 tokens</td>
      </tr>
  </tbody>
</table>
<p>This one tool definition takes up about 663 tokens. Not terrible on its own, but my YNAB MCP has about 15 tools, and every MCP server you add contributes its own set of definitions. Tool definitions alone can quietly claim a large slice of your context window, so it&rsquo;s important not to overload your coding assistant with too many MCP servers.</p>
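<p>To get a feel for where those numbers come from, here is a rough sketch of estimating a tool definition&rsquo;s cost with the common ~4 characters-per-token heuristic. The trimmed-down schema and the heuristic are illustrative only; real counts depend on the model&rsquo;s tokenizer.</p>

```python
import json

def estimate_tokens(obj) -> int:
    # Rough estimate using the common ~4 characters/token heuristic.
    # Real counts depend on the model's tokenizer; treat as a ballpark.
    return len(json.dumps(obj)) // 4

# Hypothetical, trimmed-down tool definition for illustration only.
tool = {
    "name": "get_transactions",
    "description": "List transactions for a budget, optionally filtered "
                   "by date range, account, or category.",
    "parameters": {
        "type": "object",
        "properties": {
            "budget_id": {"type": "string", "description": "Budget ID or 'last-used'"},
            "since_date": {"type": "string", "format": "date"},
        },
        "required": ["budget_id"],
    },
}

per_tool = estimate_tokens(tool)
stack_cost = per_tool * 15  # a server with ~15 such tools
print(per_tool, stack_cost)
```

<p>Even with a heavily trimmed schema, fifteen such tools add up quickly; the full YNAB definitions are several times larger.</p>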
<p>The community has been exploring new ways to make MCP servers more context efficient. One approach Anthropic has written about is allowing code execution within MCP servers. You can learn more in <a href="https://www.anthropic.com/engineering/code-execution-with-mcp">Code execution with MCP: Building more efficient agents</a>, but the short version is that instead of exposing lots of different tools, an MCP server exposes a single tool, or a small handful, that executes code in a sandboxed environment to achieve results. Anthropic also recently announced a beta feature for <a href="https://www.anthropic.com/engineering/advanced-tool-use">advanced tool use</a> in Claude. One of the standout updates is moving away from a static tool list to lazily loading tool definitions via a tool search tool.</p>
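<p>The &ldquo;one tool instead of many&rdquo; idea can be illustrated with a toy dispatcher: only a single schema is ever advertised, and individual operations are resolved by name at call time. The names here (<code>run_operation</code>, <code>OPERATIONS</code>) are invented for the sketch; this is not Anthropic&rsquo;s actual implementation.</p>

```python
# Toy sketch: one advertised tool, many operations resolved at call time.
# All names here are hypothetical; this is not Anthropic's implementation.

OPERATIONS = {
    "get_transactions": lambda budget_id: f"transactions for {budget_id}",
    "get_accounts": lambda budget_id: f"accounts for {budget_id}",
}

# The only schema that ever occupies the model's context window.
RUN_OPERATION_SCHEMA = {
    "name": "run_operation",
    "description": "Execute a named server-side operation with JSON arguments.",
    "parameters": {
        "type": "object",
        "properties": {
            "operation": {"type": "string"},
            "args": {"type": "object"},
        },
        "required": ["operation"],
    },
}

def run_operation(operation, args=None):
    fn = OPERATIONS.get(operation)
    if fn is None:
        return {"error": f"unknown operation: {operation}"}
    return {"result": fn(**(args or {}))}
```

<p>The context cost stays constant no matter how many operations the server supports, which is the core of the efficiency argument.</p>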
<p>Both of these are in their early stages, so for now we still need to be careful about how many MCP servers we add to our coding agents and how much of the context window they consume. With that out of the way, let&rsquo;s look at a real-world context window in a development environment and how to make the most of it.</p>
<h2 id="a-view-into-your-context-window">A View Into Your Context Window</h2>
<p>Claude Code provides us with a command we can run within a session called <code>/context</code>. This command will report back the current state of your context window including how much space everything is taking up. Let&rsquo;s take a look at the output of <code>/context</code> within <a href="http://www.tracewell.ai">Tracewell AI</a>:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>Context Usage
</span></span><span style="display:flex;"><span>⛁ ⛀ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁   claude-sonnet-4-5-20250929 · 101k/200k tokens <span style="color:#89dceb;font-weight:bold">(</span>51%<span style="color:#89dceb;font-weight:bold">)</span>
</span></span><span style="display:flex;"><span>⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁
</span></span><span style="display:flex;"><span>⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛀ ⛶   ⛁ System prompt: 3.1k tokens <span style="color:#89dceb;font-weight:bold">(</span>1.6%<span style="color:#89dceb;font-weight:bold">)</span>
</span></span><span style="display:flex;"><span>⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   ⛁ System tools: 19.8k tokens <span style="color:#89dceb;font-weight:bold">(</span>9.9%<span style="color:#89dceb;font-weight:bold">)</span>
</span></span><span style="display:flex;"><span>⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   ⛁ MCP tools: 26.5k tokens <span style="color:#89dceb;font-weight:bold">(</span>13.3%<span style="color:#89dceb;font-weight:bold">)</span>
</span></span><span style="display:flex;"><span>⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   ⛁ Custom agents: 2.8k tokens <span style="color:#89dceb;font-weight:bold">(</span>1.4%<span style="color:#89dceb;font-weight:bold">)</span>
</span></span><span style="display:flex;"><span>⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   ⛁ Memory files: 4.0k tokens <span style="color:#89dceb;font-weight:bold">(</span>2.0%<span style="color:#89dceb;font-weight:bold">)</span>
</span></span><span style="display:flex;"><span>⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛝ ⛝ ⛝   ⛁ Messages: <span style="color:#fab387">8</span> tokens <span style="color:#89dceb;font-weight:bold">(</span>0.0%<span style="color:#89dceb;font-weight:bold">)</span>
</span></span><span style="display:flex;"><span>⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝   ⛶ Free space: 99k <span style="color:#89dceb;font-weight:bold">(</span>49.4%<span style="color:#89dceb;font-weight:bold">)</span>
</span></span><span style="display:flex;"><span>⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝   ⛝ Autocompact buffer: 45.0k tokens <span style="color:#89dceb;font-weight:bold">(</span>22.5%<span style="color:#89dceb;font-weight:bold">)</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>MCP tools · /mcp
</span></span><span style="display:flex;"><span>└ mcp__memory__create_entities <span style="color:#89dceb;font-weight:bold">(</span>memory<span style="color:#89dceb;font-weight:bold">)</span>: <span style="color:#fab387">686</span> tokens
</span></span><span style="display:flex;"><span>└ mcp__memory__create_relations <span style="color:#89dceb;font-weight:bold">(</span>memory<span style="color:#89dceb;font-weight:bold">)</span>: <span style="color:#fab387">689</span> tokens
</span></span><span style="display:flex;"><span>└ mcp__memory__add_observations <span style="color:#89dceb;font-weight:bold">(</span>memory<span style="color:#89dceb;font-weight:bold">)</span>: <span style="color:#fab387">668</span> tokens
</span></span><span style="display:flex;"><span>└ mcp__memory__delete_entities <span style="color:#89dceb;font-weight:bold">(</span>memory<span style="color:#89dceb;font-weight:bold">)</span>: <span style="color:#fab387">612</span> tokens
</span></span><span style="display:flex;"><span>└ mcp__memory__delete_observations <span style="color:#89dceb;font-weight:bold">(</span>memory<span style="color:#89dceb;font-weight:bold">)</span>: <span style="color:#fab387">666</span> tokens
</span></span><span style="display:flex;"><span>└ mcp__memory__delete_relations <span style="color:#89dceb;font-weight:bold">(</span>memory<span style="color:#89dceb;font-weight:bold">)</span>: <span style="color:#fab387">690</span> tokens
</span></span><span style="display:flex;"><span>└ mcp__memory__read_graph <span style="color:#89dceb;font-weight:bold">(</span>memory<span style="color:#89dceb;font-weight:bold">)</span>: <span style="color:#fab387">568</span> tokens
</span></span><span style="display:flex;"><span>└ mcp__memory__search_nodes <span style="color:#89dceb;font-weight:bold">(</span>memory<span style="color:#89dceb;font-weight:bold">)</span>: <span style="color:#fab387">607</span> tokens
</span></span><span style="display:flex;"><span>└ mcp__memory__open_nodes <span style="color:#89dceb;font-weight:bold">(</span>memory<span style="color:#89dceb;font-weight:bold">)</span>: <span style="color:#fab387">609</span> tokens
</span></span><span style="display:flex;"><span>└ mcp__sentry__whoami <span style="color:#89dceb;font-weight:bold">(</span>sentry<span style="color:#89dceb;font-weight:bold">)</span>: <span style="color:#fab387">602</span> tokens
</span></span><span style="display:flex;"><span>└ mcp__sentry__find_organizations <span style="color:#89dceb;font-weight:bold">(</span>sentry<span style="color:#89dceb;font-weight:bold">)</span>: <span style="color:#fab387">735</span> tokens
</span></span><span style="display:flex;"><span>└ mcp__sentry__find_teams <span style="color:#89dceb;font-weight:bold">(</span>sentry<span style="color:#89dceb;font-weight:bold">)</span>: 1.0k tokens
</span></span><span style="display:flex;"><span>└ mcp__sentry__find_projects <span style="color:#89dceb;font-weight:bold">(</span>sentry<span style="color:#89dceb;font-weight:bold">)</span>: <span style="color:#fab387">999</span> tokens
</span></span><span style="display:flex;"><span>└ mcp__sentry__find_releases <span style="color:#89dceb;font-weight:bold">(</span>sentry<span style="color:#89dceb;font-weight:bold">)</span>: 1.2k tokens
</span></span><span style="display:flex;"><span>└ mcp__sentry__get_issue_details <span style="color:#89dceb;font-weight:bold">(</span>sentry<span style="color:#89dceb;font-weight:bold">)</span>: 1.4k tokens
</span></span><span style="display:flex;"><span>└ mcp__sentry__get_trace_details <span style="color:#89dceb;font-weight:bold">(</span>sentry<span style="color:#89dceb;font-weight:bold">)</span>: 1.3k tokens
</span></span><span style="display:flex;"><span>└ mcp__sentry__get_event_attachment <span style="color:#89dceb;font-weight:bold">(</span>sentry<span style="color:#89dceb;font-weight:bold">)</span>: 1.3k tokens
</span></span><span style="display:flex;"><span>└ mcp__sentry__update_issue <span style="color:#89dceb;font-weight:bold">(</span>sentry<span style="color:#89dceb;font-weight:bold">)</span>: 1.5k tokens
</span></span><span style="display:flex;"><span>└ mcp__sentry__search_events <span style="color:#89dceb;font-weight:bold">(</span>sentry<span style="color:#89dceb;font-weight:bold">)</span>: 1.5k tokens
</span></span><span style="display:flex;"><span>└ mcp__sentry__find_dsns <span style="color:#89dceb;font-weight:bold">(</span>sentry<span style="color:#89dceb;font-weight:bold">)</span>: 1.0k tokens
</span></span><span style="display:flex;"><span>└ mcp__sentry__analyze_issue_with_seer <span style="color:#89dceb;font-weight:bold">(</span>sentry<span style="color:#89dceb;font-weight:bold">)</span>: 1.3k tokens
</span></span><span style="display:flex;"><span>└ mcp__sentry__search_docs <span style="color:#89dceb;font-weight:bold">(</span>sentry<span style="color:#89dceb;font-weight:bold">)</span>: 1.8k tokens
</span></span><span style="display:flex;"><span>└ mcp__sentry__get_doc <span style="color:#89dceb;font-weight:bold">(</span>sentry<span style="color:#89dceb;font-weight:bold">)</span>: <span style="color:#fab387">768</span> tokens
</span></span><span style="display:flex;"><span>└ mcp__sentry__search_issues <span style="color:#89dceb;font-weight:bold">(</span>sentry<span style="color:#89dceb;font-weight:bold">)</span>: 1.5k tokens
</span></span><span style="display:flex;"><span>└ mcp__sentry__use_sentry <span style="color:#89dceb;font-weight:bold">(</span>sentry<span style="color:#89dceb;font-weight:bold">)</span>: <span style="color:#fab387">968</span> tokens
</span></span><span style="display:flex;"><span>└ mcp__context7__resolve-library-id <span style="color:#89dceb;font-weight:bold">(</span>context7<span style="color:#89dceb;font-weight:bold">)</span>: <span style="color:#fab387">887</span> tokens
</span></span><span style="display:flex;"><span>└ mcp__context7__get-library-docs <span style="color:#89dceb;font-weight:bold">(</span>context7<span style="color:#89dceb;font-weight:bold">)</span>: <span style="color:#fab387">957</span> tokens
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>Custom agents · /agents
</span></span><span style="display:flex;"><span>└ rails-backend-expert <span style="color:#89dceb;font-weight:bold">(</span>Project<span style="color:#89dceb;font-weight:bold">)</span>: <span style="color:#fab387">444</span> tokens
</span></span><span style="display:flex;"><span>└ cybersecurity-expert <span style="color:#89dceb;font-weight:bold">(</span>Project<span style="color:#89dceb;font-weight:bold">)</span>: <span style="color:#fab387">287</span> tokens
</span></span><span style="display:flex;"><span>└ prompt-engineer <span style="color:#89dceb;font-weight:bold">(</span>Project<span style="color:#89dceb;font-weight:bold">)</span>: <span style="color:#fab387">609</span> tokens
</span></span><span style="display:flex;"><span>└ tailwind-viewcomponent-expert <span style="color:#89dceb;font-weight:bold">(</span>Project<span style="color:#89dceb;font-weight:bold">)</span>: <span style="color:#fab387">417</span> tokens
</span></span><span style="display:flex;"><span>└ product-strategy-advisor <span style="color:#89dceb;font-weight:bold">(</span>Project<span style="color:#89dceb;font-weight:bold">)</span>: <span style="color:#fab387">608</span> tokens
</span></span><span style="display:flex;"><span>└ regulatory-510k-consultant <span style="color:#89dceb;font-weight:bold">(</span>Project<span style="color:#89dceb;font-weight:bold">)</span>: <span style="color:#fab387">459</span> tokens
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>Memory files · /memory
</span></span><span style="display:flex;"><span>└ User <span style="color:#89dceb;font-weight:bold">(</span>/home/dgalarza/.claude/CLAUDE.md<span style="color:#89dceb;font-weight:bold">)</span>: <span style="color:#fab387">10</span> tokens
</span></span><span style="display:flex;"><span>└ Project <span style="color:#89dceb;font-weight:bold">(</span>/home/dgalarza/Code/tracewell.ai/CLAUDE.md<span style="color:#89dceb;font-weight:bold">)</span>: 4.0k tokens
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>SlashCommand Tool · <span style="color:#fab387">16</span> commands
</span></span><span style="display:flex;"><span>└ Total: 2.7k tokens
</span></span></code></pre></div><p>As you can see, this gives us a detailed view of the context window: what percentage is currently available and a breakdown of what is taking up the rest. My MCP tools are taking up 26.5k tokens, about 13.3% of the Claude Sonnet 4.5 context window. Beyond that, the custom agents defined and available take up about 2.8k tokens, my project&rsquo;s CLAUDE.md is 4k tokens, and about 22% of the context window is reserved for autocompacting.</p>
<h3 id="what-is-autocompacting">What is autocompacting?</h3>
<p>In order to understand autocompacting, we first need to understand how a typical conversation flows through the Anthropic API. By default, every call to the Anthropic Claude API has no recollection of previous parts of a conversation. Instead, as the consumer of the API, we need to maintain that conversation history ourselves and provide it with each request. Take a look at the following diagram:</p>
<p><img src="/images/posts/claude-conversation.png" alt="Diagram illustrating how Claude Code maintains conversation history across API requests, showing message flow and context accumulation"></p>
<p>The first request kicks off the conversation with &ldquo;Add error handling to the auth module&rdquo;, and the LLM responds with the result of what it did. When the user continues the conversation in request 2 with &ldquo;Now add tests for those changes&rdquo;, we don&rsquo;t send just that message: we send the full conversation history, including our first message and the LLM&rsquo;s response. This is a simplified example that doesn&rsquo;t include tool calling; any tool call requests, along with their results, would also be part of this history. As your conversation gets longer, more and more of the context window is taken up, and as you approach the limit, space must be freed up. One way to do this is compaction.</p>
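<p>The flow in the diagram can be sketched with a stand-in for the model: each turn appends to a <code>messages</code> list, and the entire list is sent with every request.</p>

```python
# Sketch of a stateless chat API: the client resends the full history
# each turn. `fake_model` is a stand-in for the real Claude API call.

def fake_model(messages):
    return {"role": "assistant", "content": f"ack turn {len(messages)}"}

messages = []

def send(user_text):
    messages.append({"role": "user", "content": user_text})
    reply = fake_model(messages)  # the whole history goes over the wire
    messages.append(reply)        # ...and the reply joins the history
    return reply

send("Add error handling to the auth module")
send("Now add tests for those changes")
print(len(messages))  # four messages accumulated after two turns
```

<p>Tool calls follow the same rule: every request and result lands in <code>messages</code> and is resent on each subsequent turn.</p>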
<p>Compacting the context window is a context engineering technique that compresses a long-running conversation by summarizing it to free up space. The summarization is typically handled by an LLM, and the generated summary becomes the basis for the remainder of the conversation. Compaction can work well, but it&rsquo;s not an exact science: you are beholden to the LLM to identify the right things to include in the summary. If you&rsquo;ve ever had a long-running conversation with Claude Code and felt like things started to go off the rails, you might have experienced this. In a long conversation you can end up with multiple autocompact calls, where the LLM is summarizing a summarization along with the rest of the conversation.</p>
<p><strong>Warning signs of problematic autocompact:</strong></p>
<ul>
<li>Claude forgets decisions you made earlier in the conversation</li>
<li>Claude repeats work it already completed</li>
<li>Claude asks questions you already answered</li>
<li>Solutions start contradicting earlier approaches</li>
</ul>
<p>When you notice these symptoms, it&rsquo;s usually time for a <code>/clear</code> and a fresh start rather than continuing to fight against a degraded context.</p>
<h2 id="managing-your-context-window">Managing Your Context Window</h2>
<p>Now that we understand what the context window is, how conversation history accumulates, and how that impacts the window, let&rsquo;s explore different ways to manage the context window and make the most of it.</p>
<h3 id="delegating-to-subagents">Delegating to Subagents</h3>
<p>Claude Code has the ability to spin off &ldquo;subagents&rdquo; when it&rsquo;s working. These subagents each have their own context window which is separate from the main conversation. This gives us two advantages. First, the subagent&rsquo;s context window isn&rsquo;t cluttered with our previous conversation history. Second, and this is the flip side, our main conversation isn&rsquo;t cluttered with all the details of whatever the subagent was instructed to work on. Instead, it reports back its results. You can see this in action with Claude Opus 4.5 whenever you plan something. It typically delegates its tasks out to subagents to help aid with the plan.</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>⏺ <span style="color:#fab387">3</span> Explore agents finished <span style="color:#89dceb;font-weight:bold">(</span>ctrl+o to expand<span style="color:#89dceb;font-weight:bold">)</span>
</span></span><span style="display:flex;"><span>   ├─ Explore Tracewell Agent workflow · <span style="color:#fab387">23</span> tool uses · 104.5k tokens
</span></span><span style="display:flex;"><span>   │  ⎿ Done
</span></span><span style="display:flex;"><span>   ├─ Explore Tracewell DHF extractions · <span style="color:#fab387">28</span> tool uses · 108.0k tokens
</span></span><span style="display:flex;"><span>   │  ⎿ Done
</span></span><span style="display:flex;"><span>   └─ Explore <span style="color:#89dceb">eval</span> framework · <span style="color:#fab387">24</span> tool uses · 101.8k tokens
</span></span><span style="display:flex;"><span>      ⎿ Done
</span></span></code></pre></div><p>You can also instruct Claude to invoke a subagent explicitly. Some examples:</p>
<ul>
<li>&ldquo;Have a subagent do a code review of this branch against main&rdquo;</li>
<li>&ldquo;Use a subagent to explore how authentication works in this codebase&rdquo;</li>
<li>&ldquo;Spawn a subagent to research different caching strategies for this use case&rdquo;</li>
</ul>
<p>When the subagent completes, you&rsquo;ll see a summary like this in your main conversation:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>⏺ Task agent finished · <span style="color:#fab387">15</span> tool uses · 52.3k tokens
</span></span><span style="display:flex;"><span>  ⎿ The code review found <span style="color:#fab387">3</span> issues: <span style="color:#89dceb;font-weight:bold">[</span>summary of findings...<span style="color:#89dceb;font-weight:bold">]</span>
</span></span></code></pre></div><p>Notice that the subagent used 52k tokens of its own context, but your main conversation only receives the summary. This is the key benefit: the detailed work happens in isolation.</p>
<p>I&rsquo;ve found subagents work best for self-contained tasks that require reading lots of files. Code reviews are a natural fit. The subagent can dig through diffs without polluting your main context. The same goes for codebase exploration when you&rsquo;re trying to understand how an unfamiliar feature works across multiple modules. Research tasks also work well here; you can have a subagent investigate implementation options and report back before you commit to an approach.</p>
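<p>The delegation pattern boils down to: run the detailed work against a fresh message list, and let only a short summary cross back into the parent conversation. A minimal sketch, with <code>run_subagent</code> standing in for Claude Code&rsquo;s actual Task machinery:</p>

```python
# Sketch of subagent isolation: the verbose transcript stays inside the
# subagent; only the summary returns. `run_subagent` is a stand-in for
# Claude Code's real Task tool, not its actual API.

def run_subagent(task):
    # Pretend many tool calls and file reads happened here.
    transcript = [f"step {i}: inspected a file" for i in range(50)]
    return {"summary": f"Completed: {task} ({len(transcript)} steps)",
            "transcript": transcript}

main_history = [{"role": "user", "content": "Review this branch"}]
result = run_subagent("code review of branch vs main")

# Only the summary crosses back into the main context window.
main_history.append({"role": "assistant", "content": result["summary"]})
```

<p>The 50-step transcript never touches <code>main_history</code>; that isolation is the whole point.</p>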
<h3 id="using-custom-agents">Using Custom Agents</h3>
<p>Custom agents take subagents to another level. They let us define an agent with a persona and an area of expertise, while retaining the key property of subagents: a context window of their own. We can additionally define which tools the agent has access to. This is useful when you know an agent doesn&rsquo;t need specific tools, so their definitions don&rsquo;t need to take up space in its context window.</p>
<p>An agent is a markdown file which lives in either <code>~/.claude/agents</code> or <code>.claude/agents</code>. You can provide it a name, a description, a model, and tools which it is allowed to use. This is all handled via YAML frontmatter. After the frontmatter you define the agent itself.</p>
<p>Let&rsquo;s take a look at a practical example.</p>
<p>In Tracewell I have defined a few subagents that you can see in the earlier <code>/context</code> output. The <code>rails-backend-expert</code> doesn&rsquo;t need access to the Linear MCP so I can choose not to give the agent access to it or any of its tools. This is handled by setting an allow list of what tools you want to give the model access to:</p>
<div class="highlight"><pre tabindex="0" style="color:#cdd6f4;background-color:#1e1e2e;-moz-tab-size:2;-o-tab-size:2;tab-size:2;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#fab387">---</span>
</span></span><span style="display:flex;"><span><span style="color:#cba6f7">name</span>: rails-backend-expert
</span></span><span style="display:flex;"><span><span style="color:#cba6f7">description</span>: Use this agent when working on Ruby on Rails backend code, including models, controllers, services, jobs, database migrations, API endpoints, background processing, or any server-side Ruby logic.
</span></span><span style="display:flex;"><span><span style="color:#cba6f7">tools</span>: Bash, Glob, Grep, Read, Edit, Write, NotebookEdit, WebFetch, TodoWrite, WebSearch, BashOutput, KillShell, AskUserQuestion, Skill, SlashCommand, mcp__memory__create_entities, mcp__memory__create_relations, mcp__memory__add_observations, mcp__memory__delete_entities, mcp__memory__delete_observations, mcp__memory__delete_relations, mcp__memory__read_graph, mcp__memory__search_nodes, mcp__memory__open_nodes, mcp__context7__resolve-library-id, mcp__context7__get-library-docs
</span></span><span style="display:flex;"><span><span style="color:#cba6f7">model</span>: sonnet
</span></span><span style="display:flex;"><span><span style="color:#fab387">---</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>You are a Ruby on Rails backend expert. Your role is to help with...
</span></span></code></pre></div><p>The agent&rsquo;s full persona and instructions follow after the frontmatter. I recommend using the <code>/agents</code> command to get started. Claude will walk you through creating your first agent, asking whether you want it generated or manually configured. I suggest going with the default and letting Claude generate it: you provide a high-level prompt and it produces the full agent description for you. As part of the wizard, Claude will also ask which tools the agent should have access to.</p>
<h3 id="claude-skills">Claude Skills</h3>
<p>In October 2025 Anthropic announced <a href="https://www.anthropic.com/engineering/equipping-agents-for-the-real-world-with-agent-skills">Agent Skills</a>, a way of encapsulating domain expertise or workflows for agents to follow. Skills are organized in folders around a core <code>SKILL.md</code> file whose frontmatter holds required metadata such as the name and description. The body of the <code>SKILL.md</code> contains the instruction set of the skill itself.</p>
<p>When Claude Code starts, it loads the name and description of every available skill into its context via the system prompt. This lets Claude use progressive disclosure to determine when to use a skill without loading the entire thing into its context window. Skills can even potentially replace some MCP servers, because skills can also contain scripts that Claude can run. Instead of always exposing tools into the context window via an MCP server, you can provide a skill whose scripts only get added to the agent&rsquo;s context window when they are useful.</p>
<p>A great example of this is the <a href="https://github.com/lackeyjb/playwright-skill">Playwright Skill for Claude Code</a> by Bryan Lackey. Previously if you wanted to easily add Playwright to Claude Code for interfacing with your web application, you&rsquo;d add the <a href="https://github.com/microsoft/playwright-mcp">playwright-mcp</a>. The playwright-mcp adds 22 tools which consume about 14.3k tokens, which is 7.2% of your context window just by being available.</p>
<p>The skill, by contrast, only adds about 200 tokens at startup for its name and description. The full SKILL.md (around 4-5k tokens) only loads when you actually invoke the skill. If you use Playwright in maybe one out of every five sessions, you&rsquo;re saving roughly 10k tokens in the sessions where you don&rsquo;t need it.</p>
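<p>Using the figures above, here is a quick back-of-the-envelope comparison of per-session context cost. The numbers are this post&rsquo;s estimates, not exact tokenizer counts.</p>

```python
# Rough per-session context cost: MCP server vs skill, using the
# estimates from the post (not exact tokenizer counts).

MCP_COST = 14_300    # 22 tool definitions, loaded every session
SKILL_STUB = 200     # skill name + description, loaded every session
SKILL_BODY = 4_500   # SKILL.md body, loaded only when invoked

def expected_skill_cost(usage_rate):
    # usage_rate: fraction of sessions that actually invoke the skill
    return SKILL_STUB + usage_rate * SKILL_BODY

print(expected_skill_cost(0.2), MCP_COST)  # ~1.1k vs 14.3k per session
```

<p>Even in a session where the skill fully loads, it costs about a third of what the MCP server costs just by existing.</p>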
<h3 id="using-clear">Using clear</h3>
<p>Another tool at your disposal is the <code>/clear</code> command. It empties the context window, giving you a fresh start. I highly recommend using it often, especially when you have completed a distinct task and are moving on to a new one where the previous conversation history is no longer needed or useful.</p>
<h3 id="compacting-the-conversation-manually">Compacting the conversation manually</h3>
<p>Along with autocompact, you can also choose when to compact a conversation manually by running <code>/compact</code>. It takes an optional argument: instructions for how Claude should perform the compaction. You can guide Claude to capture certain information while generating its summary and compacting the conversation. I recommend this when you have made significant progress on your work and are moving on to related work. Perhaps Claude broke the work into multiple phases and you just completed phase 1. You could:</p>
<ol>
<li>Use <code>/clear</code> to reset the context window. However, if you didn&rsquo;t persist the plan / TODO list somewhere you&rsquo;ll start from scratch.</li>
<li>Continue until autocompact kicks in and you let the LLM do the heavy lifting of summarizing / compacting the conversation.</li>
</ol>
<p>Instead, I&rsquo;d recommend using the <code>/compact</code> command and instruct Claude to summarize the progress you&rsquo;ve made so far and start with a &ldquo;fresh&rdquo; context window on the next phase of work. I say &ldquo;fresh&rdquo; since we aren&rsquo;t fully clearing the context window but are compressing the previous conversation.</p>
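<p>In practice, a phase-transition compaction instruction might look like the following. The phase breakdown and the specific decisions named here are hypothetical, just to show the level of detail worth pinning down:</p>

```
/compact Keep the full phase 2 TODO list, the decision to use background jobs for retries, and the list of files already modified. Drop the phase 1 implementation back-and-forth.
```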
<h3 id="being-strategic-about-file-reads">Being Strategic About File Reads</h3>
<p>It&rsquo;s easy to overlook how quickly file reads consume context. Every time Claude reads a file, that content gets added to the conversation history. Large files, broad grep results, or reading several files in sequence can eat through your available context faster than you&rsquo;d expect.</p>
<p>A few things I&rsquo;ve learned to do:</p>
<ul>
<li>When I know roughly where something is, I&rsquo;ll point Claude to specific line ranges rather than having it read entire files. For example: &ldquo;Look at the <code>authenticate</code> method in <code>app/services/auth_service.rb</code> around lines 45-80&rdquo; instead of just &ldquo;check the auth service&rdquo;</li>
<li>I try to use targeted grep patterns before asking Claude to read files. Narrowing down candidates first means fewer files loaded into context. For example, instead of &ldquo;find where we handle webhook failures&rdquo;, I might say &ldquo;grep for <code>webhook.*fail</code> or <code>handle.*webhook</code> in app/services/ and show me the matches before reading any files.&rdquo; This way Claude identifies the 2-3 relevant files first rather than speculatively reading 10 service files looking for the right one.</li>
<li>For orientation questions like &ldquo;what does this module do?&rdquo;, asking Claude to summarize rather than read the whole thing can save significant tokens</li>
</ul>
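<p>As a concrete sketch of the grep-first approach, here is the workflow from the second bullet run by hand. The directory layout, file names, and contents are hypothetical, purely to illustrate the pattern:</p>

```shell
# Hypothetical mini-repo: two service files, only one related to webhooks.
mkdir -p /tmp/ctx-demo/app/services
cat > /tmp/ctx-demo/app/services/webhook_service.rb <<'EOF'
def handle_webhook_failure(event)
  # retry with backoff
end
EOF
cat > /tmp/ctx-demo/app/services/billing_service.rb <<'EOF'
def charge(customer)
end
EOF

# Narrow candidates first: show matching lines with file and line number.
# Only the one or two files that actually match get read into context.
grep -rnE 'webhook.*fail|handle.*webhook' /tmp/ctx-demo/app/services/
```

The match list costs a few dozen tokens; speculatively reading ten service files could cost tens of thousands.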
<p>This becomes especially important in larger codebases where a single exploration session can involve dozens of file reads.</p>
<h3 id="optimizing-your-claudemd">Optimizing Your CLAUDE.md</h3>
<p>Your project&rsquo;s <code>CLAUDE.md</code> file loads into every conversation, so it&rsquo;s worth keeping it lean. Looking back at my <code>/context</code> output, my project&rsquo;s CLAUDE.md takes up 4k tokens, which is 2% of my context window before I&rsquo;ve even started working.</p>
<p>A few things to keep in mind:</p>
<ul>
<li>Bullet points tend to be more token-efficient than prose</li>
<li>Put the most critical instructions at the beginning since Claude pays more attention to the start and end of content (that &ldquo;lost in the middle&rdquo; problem again)</li>
<li>Consider whether instructions belong at the project level or could live in your user-level <code>~/.claude/CLAUDE.md</code> instead</li>
<li>Periodically audit for outdated instructions that no longer apply</li>
</ul>
<p>It&rsquo;s a balancing act. You want enough context for Claude to understand your project&rsquo;s conventions, but not so much that you&rsquo;re burning tokens on rarely-relevant details.</p>
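<p>A quick way to sanity-check your CLAUDE.md&rsquo;s footprint is a character count divided by a rough chars-per-token ratio. Roughly 4 characters per token is a common ballpark for English text, not an exact property of Anthropic&rsquo;s tokenizer, so treat the result as an estimate. The sample file below is a hypothetical lean CLAUDE.md in the bullet-first style described above:</p>

```shell
# Hypothetical lean CLAUDE.md: bullets, most critical rules first.
cat > /tmp/CLAUDE.md <<'EOF'
# Project conventions
- Ruby 3.3 / Rails 7.1; run tests with `bin/rspec`
- Never commit directly to main; open a PR
- Prefer service objects in app/services/ over fat models
EOF

# Estimate token footprint: ~4 characters per token (rough heuristic only).
chars=$(wc -c < /tmp/CLAUDE.md)
echo "approx tokens: $((chars / 4))"
```

If the estimate creeps toward several thousand tokens, that is a prompt to audit for instructions that could move to <code>~/.claude/CLAUDE.md</code> or be cut entirely.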
<h2 id="best-practices-for-context-window-management">Best Practices for Context Window Management</h2>
<ol>
<li><strong>Monitor regularly</strong> - Run <code>/context</code> at the start of each session to understand your baseline usage</li>
<li><strong>Audit your MCP servers</strong> - Remove any MCP servers you haven&rsquo;t used recently; each one consumes tokens just by existing</li>
<li><strong>Prefer skills over MCP servers</strong> - When building new functionality, consider skills first for better context efficiency through progressive disclosure</li>
<li><strong>Clear between tasks</strong> - Use <code>/clear</code> liberally when switching between unrelated work</li>
<li><strong>Strategic compacting</strong> - Use <code>/compact</code> with custom instructions when transitioning between related phases of work</li>
<li><strong>Delegate complex work</strong> - Use subagents for self-contained tasks to keep their context isolated from your main conversation</li>
</ol>
<blockquote>
<p><strong>Running a team on Claude Code?</strong> Context window management gets
harder with 5+ engineers making different choices about MCP servers,
CLAUDE.md conventions, and workflow patterns. A
<a href="/services/#retainer">Production AI Retainer</a>
standardizes this across your team.</p></blockquote>
<h2 id="conclusion">Conclusion</h2>
<p>Context management isn&rsquo;t just about avoiding limits; it&rsquo;s about keeping your conversations focused and effective. A cluttered context window leads to degraded responses, just like a cluttered desk makes it harder to find what you need.</p>
<p>The key takeaways: monitor your usage with <code>/context</code>, delegate to subagents for isolated work, and use <code>/clear</code> liberally between tasks. When possible, prefer skills over MCP servers for better context efficiency through progressive disclosure.</p>
<p>Start by running <code>/context</code> in your next Claude Code session to see where your tokens are going. You might be surprised by what you find.</p>
<p><strong>Update:</strong> With Claude Opus 4.7&rsquo;s 1M token context window, the fundamentals here still apply but the ceiling and controls have changed. See <a href="/posts/2026-04-30-claude-opus-4-7-claude-code-tips-extended-context/">7 Practical Tips for Maximizing Extended Context</a> for workflow adjustments specific to the larger window.</p>
<blockquote>
<p>If this post was the explanation, the cheat sheet is the reference.
Two sides: token costs for common MCPs on one, the <code>/clear</code> /
<code>/compact</code> / subagent decision tree on the other.</p>
<p><a href="/context-window-cheat-sheet/">Get the Context Window Cheat Sheet →</a></p></blockquote>
]]></content:encoded></item></channel></rss>