AI Agent · 8 min read · OraCore Editors

Six Months on Claude Code: Five Advanced Patterns I Wish I Knew on Day One

Most people use Claude Code for a week, come out with "it's basically ChatGPT with a terminal," and miss its real differentiator: persistent memory and a system that grows with use. This article compresses six months of production use into five patterns that matter — what CLAUDE.md is really for, what to store (and not store) in MEMORY, the composition logic behind Skills, three ways people misuse Subagent dispatch, and the trade-offs between tmux, MCP servers, and file buses for cross-CLI orchestration.


A week into Claude Code, most people settle into "ask a question, get an answer, close the window." That usage is indistinguishable from ChatGPT and misses the point.

The difference starts when you let it remember things. Claude Code's most undersold capability is not the shell, not web search — it is the cross-session memory stack: CLAUDE.md, MEMORY, Skills, Subagents, Cron, MCP servers. Stitch them together and you have an assistant that grows with use.

This article compresses six months of usage into five patterns. Not a feature list — a list of when to use what, written for the version of me that started six months ago.

1. CLAUDE.md Is a Guardrail, Not a Personality

Most tutorials treat CLAUDE.md as a place to write a personality ("be direct, have opinions, skip customer-service tone"). Those things are fine but miss the real value.

CLAUDE.md earns its keep as a guardrail, not a personality. It is injected into every conversation, so the highest-leverage content is traps you keep falling into, not vague personality descriptions.

Low-value:

Be concise, opinionated, and avoid verbose replies.

Claude already has judgment about "concise" vs "verbose." High-value:

  • Before DELETE/UPDATE, run a SELECT with the same WHERE clause to confirm scope
  • API keys must never be hardcoded; use environment variables
  • Don't mark a task complete until the linter passes

These are executable checkpoints. When Claude violates them, you can point at the rule and it actually corrects.

Keep CLAUDE.md under 1KB. Write event-driven rules (when X, do Y), not state descriptors (you are a Y-type assistant). The first is a guardrail; the second is decoration.
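Put together, a guardrail-style CLAUDE.md might look like the sketch below. The rules are illustrative, drawn from the examples above; adapt them to your own recurring traps.

```markdown
# CLAUDE.md — guardrails, not personality

- Before any DELETE or UPDATE, run a SELECT with the same WHERE clause
  and confirm the affected row count with me.
- Never hardcode API keys; read secrets from environment variables.
- When you believe a task is done, run the linter first; only mark it
  complete if the linter passes.
```

Each line is event-driven (when X, do Y) and checkable after the fact, which is what lets you point at a rule when it gets violated.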

2. Memory Is a Two-Axis Store, Not a File System

Claude Code's memory roughly splits into three layers:

  • Session memory: current conversation, lost on exit
  • Persistent memory: MEMORY.md + USER.md, cross-session
  • Skill memory: reusable workflows extracted from experience

These look like three files. They are really frequency × persistence axes. The longer something needs to live and the less often it's read, the more condensed the storage should be.

The early mistake: dumping everything into MEMORY.md, including "deployed v4.9.3 today" and "handled an edge case for customer X." Two months in, MEMORY.md hit 8KB and every new conversation dragged unrelated history into context. Output quality dropped.

Store this:

  • Traps hard to reproduce but likely to recur ("Supabase RLS is bypassed by service role; all public routes must explicitly set status=eq.published")
  • Long-term project decisions ("Stack A uses Nuxt 3, no Next.js migration")
  • Explicit user preferences ("conventional commits, no emoji")

Don't store:

  • Short-term state (what you deployed today)
  • Anything derivable from code or git (project structure, recent commits)
  • Single-project details (a refactor's file list)

Use task lists, git log, or session memory for the short-term stuff. Writing it into MEMORY.md pollutes every future conversation.
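As a mental model, the two-axis split can be written down as a routing rule. This is an illustrative sketch, not anything Claude Code runs: the layer names mirror the three layers above, and the thresholds are arbitrary assumptions.

```python
# Route a note to a memory layer by how long it must live and how often
# it will be read. Thresholds are illustrative assumptions.
def route(note: str, lifetime_days: int, reads_per_week: int) -> str:
    if lifetime_days <= 1:
        return "session"   # today's deploys, scratch state: let it die with the session
    if reads_per_week >= 3:
        return "skill"     # long-lived AND frequently used: extract a Skill
    return "memory"        # long-lived but rarely read: traps, decisions, preferences

# "deployed v4.9.3 today" should never reach MEMORY.md:
print(route("deployed v4.9.3", lifetime_days=1, reads_per_week=1))   # session
print(route("RLS bypass trap", lifetime_days=365, reads_per_week=1)) # memory
```

The point of the sketch is the decision order: lifetime first, then read frequency. Anything short-lived is disqualified from persistent memory before frequency is even considered.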

3. Skill Value Lives in Composition, Not Individual Strength

Many people treat Skills as plugins: install one, use one. That misses the real value.

Skills compound. An end-to-end content pipeline for a news site can chain five Skills:

  1. keyword-research: topic in, SEO keywords + search intent out
  2. content-brief: keywords in, outline + source material out
  3. draft-writer: outline in, first-pass HTML out
  4. humanizer: scans for AI writing patterns, emits revision suggestions
  5. seo-optimizer: draft in, checks meta, canonical, internal-link density

Each Skill is ordinary on its own. Together they become a hands-off article production line. The real trick is that Skills pass data through the filesystem, not via memory variables. Each step writes to a known file, the next step reads it. No framework required.
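The data flow is simple enough to sketch in a few lines. The real Skills are prompts, not Python; the function bodies below are stand-ins, and the step and file names are assumptions modeled on the pipeline above. What matters is the shape: each step writes a known file, the next step reads it.

```python
# Minimal sketch of a file-based Skill chain. Each step's only contract
# is "read this file, write that file" — no shared state, no framework.
import tempfile
from pathlib import Path

def keyword_research(topic: str, out: Path) -> None:
    out.write_text(f"keywords for: {topic}")          # stand-in for the real Skill

def content_brief(inp: Path, out: Path) -> None:
    out.write_text("outline from: " + inp.read_text())  # reads the previous step's file

work = Path(tempfile.mkdtemp())
keyword_research("ai agents", work / "01-keywords.txt")
content_brief(work / "01-keywords.txt", work / "02-brief.txt")

# Observable: every step lands as a file you can inspect.
print(sorted(p.name for p in work.iterdir()))
```

Because every intermediate result is a file, a failed run is debuggable by opening the folder, and any single step can be re-run in isolation, which is what makes the idempotency principle below practical.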

Three principles for Skill design:

  • Single responsibility: one job per Skill, don't stack two
  • Idempotent: same input, same output, no side effects on repeat
  • Observable: every step lands as a file, so you can open the folder and see what happened

These look like Unix philosophy because that's what they are. The design inspiration for Skills is pipes, not plugins.

4. Three Wrong Ways to Dispatch Subagents

Subagents let you fork independent sub-tasks from the main session. Three common mistakes:

Overparallelization. Dispatching everything to subagents. Tasks with sequential dependencies run slower in parallel because you have to feed each result into the next and re-assemble context. Only dispatch work that is provably independent.

Re-feeding context. Dumping the full conversation history into every subagent. The point of a subagent is a clean context. Replaying history defeats the design. Instead, give the subagent a goal and only the necessary context. Let it work with a fresh head.

No exit criteria. The subagent doesn't know when it's done. Default behavior is to finish a reply, write a summary, and stop — sometimes at 30% completion. Be explicit: "must verify X, must produce file Y, must report metric Z." No checklist, no completion.

For an OraCore article pipeline, the dispatch template looks like:

Convert these 8 markdown files to HTML and insert into the DB. Acceptance:

  1. SELECT count(*) FROM articles WHERE slug LIKE 'xyz-%' returns 8
  2. Each row has rewrite_status = 'done' and status = 'published'
  3. All 4 zh/en pairs are cross-linked via related_article_id

Report back with the SELECT output attached.

The subagent now knows what "done" means. Output quality shifts from "guess how far it got" to "verify the numbers."
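The acceptance checklist above is mechanically checkable, which is the whole point. Below is a sketch of that verification with sqlite3 standing in for the real database; the table and column names are taken from the template, and the seed rows are fabricated for illustration.

```python
# Verifying the three acceptance criteria from the dispatch template.
# sqlite3 is a stand-in for the real DB; schema names follow the template.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE articles (
    slug TEXT, rewrite_status TEXT, status TEXT, related_article_id TEXT)""")
rows = [(f"xyz-{i}", "done", "published", f"xyz-{(i + 4) % 8}") for i in range(8)]
conn.executemany("INSERT INTO articles VALUES (?, ?, ?, ?)", rows)

# 1. Eight rows with the expected slug prefix
count, = conn.execute(
    "SELECT count(*) FROM articles WHERE slug LIKE 'xyz-%'").fetchone()
assert count == 8

# 2. Every row is rewritten and published
bad, = conn.execute(
    "SELECT count(*) FROM articles WHERE rewrite_status != 'done' "
    "OR status != 'published'").fetchone()
assert bad == 0

# 3. Every row is cross-linked
unlinked, = conn.execute(
    "SELECT count(*) FROM articles WHERE related_article_id IS NULL").fetchone()
assert unlinked == 0
```

Criteria that can be expressed as queries like these leave no room for the subagent to declare victory at 30% completion.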

5. Cross-CLI Orchestration: tmux vs MCP vs File Bus

At some point you'll want Claude Code as the orchestrator, dispatching work to Codex, Gemini, local llama.cpp, and so on. The motivation is legitimate: each model has strengths. Opus is good at architectural judgment, Codex is strong on refactors, Gemini handles long-context summarization.

Three implementation paths:

Path 1: tmux send-keys. Open one tmux pane per CLI, dispatch via tmux send-keys, scrape terminal output. Fast to prototype. Four structural problems: no exit codes, no structured responses, terminal escape-sequence noise, memory sharing is manual. Fine for demos, not for production.

Path 2: MCP server. Wrap each CLI as an MCP tool. The orchestrator calls dispatch_to(model, task) and receives JSON. Structured, error-aware, context shareable via MCP resources. Higher upfront cost (you write an MCP server), but the investment keeps paying off.
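What the orchestrator gains from Path 2 is a structured contract. The sketch below is not a real MCP server — it wraps a CLI as a subprocess to show the shape of the tool interface: structured JSON in and out, with a real exit code and an explicit error channel instead of scraped terminal text. `cat` stands in for a worker CLI so the example runs anywhere.

```python
# Shape of the dispatch_to(model, task) contract, minus the MCP plumbing.
import json
import subprocess

def dispatch_to(model_cmd: list, task: str) -> dict:
    """Run a worker CLI as a subprocess, return a structured result."""
    proc = subprocess.run(model_cmd, input=task, capture_output=True, text=True)
    return {
        "ok": proc.returncode == 0,        # real exit code, unlike tmux scraping
        "output": proc.stdout.strip(),
        "error": proc.stderr.strip() or None,
    }

# `cat` echoes its input back, standing in for a real model CLI.
result = dispatch_to(["cat"], "summarize this repo")
print(json.dumps(result))
```

Compare this with Path 1: the same dispatch over tmux send-keys returns neither `ok` nor `error`, only whatever text happened to be on screen.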

Path 3: file bus. A shared directory, all agents communicate through files. Orchestrator writes /tasks/001.json, workers poll the directory and write /results/001.json. Simple, debuggable, language-agnostic workers. Downsides are polling overhead and no real-time signalling.
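The file-bus layout above fits in a short sketch. The directory names follow the article; the task schema and the single poll cycle are minimal assumptions (a real worker would loop with a sleep, and a real deployment would need atomic writes or lock files to avoid half-read tasks).

```python
# File bus: orchestrator writes tasks/001.json, a worker polls the
# directory and writes results/001.json. One poll cycle shown.
import json
import tempfile
from pathlib import Path

bus = Path(tempfile.mkdtemp())
tasks, results = bus / "tasks", bus / "results"
tasks.mkdir(); results.mkdir()

# Orchestrator side: enqueue a task as a file.
(tasks / "001.json").write_text(json.dumps({"id": "001", "task": "summarize"}))

# Worker side: pick up pending tasks, write results.
for task_file in sorted(tasks.glob("*.json")):
    task = json.loads(task_file.read_text())
    out = {"id": task["id"], "status": "done", "result": task["task"].upper()}
    (results / f"{task['id']}.json").write_text(json.dumps(out))

print(json.loads((results / "001.json").read_text())["status"])
```

Every message is a file on disk, which is why this path is the most debuggable of the three: `ls` and `cat` are the entire observability stack.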

My take: tmux for demos, MCP for production, file bus for research experiments. Don't try to make one approach cover all cases.

Closing: From Tool to System

The counterintuitive part of Claude Code is that it isn't a tool. It is a system that grows with use. The longer you use it, the more it understands you. Traps become Skills, preferences become Memory, project rules become CLAUDE.md entries. Three months in, it stops feeling like an assistant and starts feeling like a digital counterpart that keeps working while you're away.

The gap between "chat with AI" and "AI does your work" isn't a feature gap. It's a cognitive gap. Pick the most painful entry point: if you repeat the same instructions daily, start with CLAUDE.md. If you juggle projects, build project-level rules. If you want AI to run while you sleep, wire up Cron + Skills.

The important part is to start using it. The system grows on its own.
