
Claude for Enterprise: How Engineering Teams Are Using Claude in Production

The problem with ad-hoc Claude adoption at work — and what the teams who are getting it right actually do. Shared CLAUDE.md, team-wide commands, MCP servers, and why training is the gap nobody's filling.

Tags: enterprise, teams, claude-code, training
Kev Gary
Senior Software Engineer, Credit Karma at Intuit

The 2025 developer surveys tell a remarkably consistent story: about 84% of professional developers now use AI coding tools at work, and Claude has become one of the most-used options among engineers who want agentic workflows rather than autocomplete. And yet — when you ask how many of those engineers received any structured training on Claude, the number is tiny. Most estimates put it under 10%.

That's the gap this post is about. Claude is in every engineering org now. The tools are in every IDE and every terminal. But the way teams actually use Claude is wildly inconsistent, mostly self-taught, and leaving enormous ROI on the table. The teams that are getting it right aren't doing anything magic — they're doing a handful of concrete things on purpose. This post is about those things.

The current state: adoption without structure

Here's the pattern I see at most engineering orgs, even technically sophisticated ones.

Senior engineers who discovered Claude early have deep workflows. They've tuned CLAUDE.md, they write custom commands, they pipe diffs into headless sessions, they use MCP to connect Claude to internal tools. Their velocity is noticeably higher.

Mid-level engineers use Claude casually. They open Claude Chat, paste a function, ask a question, copy the answer back. That's roughly 80% of "using Claude" for most teams. It's useful, but it leaves the bulk of Claude's capability — agentic workflows, repo-wide context, tool use — on the table.

Junior engineers are either enthusiastically over-relying on it (accepting output without understanding it), or avoiding it entirely because they're not sure what they're allowed to do or how to use it effectively.

Nothing is coordinated. There's no shared CLAUDE.md. Different engineers write wildly different custom commands if they write any at all. Team standards aren't codified anywhere Claude can read. The team pays for Claude Pro or Max licenses, and the actual utilization varies 10x across engineers.

That's the baseline. Most teams are here and don't realize how much room there is above them.

The problems with ad-hoc adoption

Ad-hoc adoption isn't just "suboptimal." It creates real, measurable problems.

Inconsistent output quality. Claude behaves very differently across engineers on the same team because each engineer has a different (or no) CLAUDE.md, different prompting habits, and different conventions. Two engineers fixing the same bug can end up with two wildly different patches — not because of skill difference, but because they gave Claude different context.

No shared standards. When every engineer's Claude is working in isolation, your codebase accumulates micro-inconsistencies. One engineer's Claude writes async/await, another's writes .then() chains. One writes Vitest tests, another writes Jest-style. Your code review load goes up because humans are catching things the tool could have caught.

Wasted tokens and wasted subscriptions. Teams spend $100-$200/month per engineer on Claude Max or equivalent. If half the engineers don't know about /compact, don't know how to pick the right model for the task, and don't know about headless mode for bulk jobs, you're burning budget on patterns that cost 5x more than they need to.

Security blind spots. Ad-hoc use means individual engineers decide what Claude can see. Some paste full database schemas into chat. Some commit secrets to CLAUDE.md accidentally. Some give MCP servers access to production credentials. Without a shared standard, your security posture is the worst-case of every engineer's personal habits.

Onboarding is slow. When a new engineer joins, they have to discover all of this themselves. How to use Claude. What conventions to follow. Which tools to install. Which patterns work on this codebase. Instead of being onboarded into a system, they're dropped into a toolbox.

None of this is catastrophic on its own. But all of it together is why the teams shipping fastest with Claude feel qualitatively different from the teams that just "use Claude."

What the teams doing it right actually do

I've talked to engineering leaders at a bunch of companies that are shipping with Claude at scale. The teams that are doing it well share a few concrete practices. None of them are magic.

1. A shared, version-controlled CLAUDE.md

This is the foundation. A single CLAUDE.md committed at the root of every repo (with subdirectory-scoped ones where needed), tuned to that repo's actual stack and conventions.

Every engineer's Claude Code sessions start with the same context. New engineers don't have to figure anything out — they clone the repo and Claude behaves consistently with team standards out of the gate. When conventions change, you update the file and every subsequent session picks it up automatically.

This alone closes a huge chunk of the output-consistency gap. Here's my full guide on how to write a CLAUDE.md if you're starting from scratch.
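As a concrete sketch, a repo-root CLAUDE.md for a hypothetical TypeScript service might look something like this (the stack details and rules are illustrative, not prescriptive — yours should reflect your actual repo):

```markdown
# CLAUDE.md

## Stack
- TypeScript, Node 20, Fastify, PostgreSQL
- Tests: Vitest (never Jest-style globals)

## Conventions
- Use async/await; never .then() chains
- Every new endpoint needs an integration test in tests/integration/
- Run `npm run lint && npm run test` before declaring a task done

## Don'ts
- Never touch migrations/ unless explicitly asked
- Never commit secrets or .env files
```

The point isn't the specific rules — it's that they live in one version-controlled file instead of in each engineer's head.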

2. Team-wide custom commands committed to .claude/commands/

Shared slash commands are the second half of the standardization story. The team picks the handful of commands it uses most — /review-pr, /write-tests, /new-feature, /run-migrations, whatever — writes them once, commits them to .claude/commands/, and every engineer gets access automatically.

When a junior engineer types /review-pr on their branch, they get the same rigorous review pattern a senior would get. When a new hire types /write-tests, they produce tests matching team conventions from day one. The knowledge of "how we do this" is encoded in a file instead of in someone's head.
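For instance, a team's /review-pr could be a markdown file like this (a sketch — the checklist items are placeholders for whatever your team actually enforces; `$ARGUMENTS` is Claude Code's placeholder for anything typed after the command):

```markdown
<!-- .claude/commands/review-pr.md -->
Review the diff on the current branch against main.

1. Run `git diff main...HEAD` and read every changed file.
2. Check each change against the conventions in CLAUDE.md.
3. Flag anything touching auth, billing, or migrations for extra scrutiny.
4. Output a numbered list of blocking issues, then non-blocking suggestions.

Extra focus area (optional): $ARGUMENTS
```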

3. MCP servers for internal tools

This is where Claude adoption starts to feel transformative. MCP (Model Context Protocol) lets Claude connect to external tools — your database, your GitHub org, your Notion workspace, your internal APIs. Done well, this means Claude can answer questions like:

  • "What's the schema of the users table and how is it used across the API layer?"
  • "Summarize the five most recent PRs to the billing service and flag any with failing CI."
  • "Look up the last three internal ADRs about auth and tell me if my proposed change conflicts with any of them."

Those questions are borderline impossible without tool access. With MCP servers wired up, they're conversational.

The teams getting the most out of Claude have at least two or three MCP servers in production: GitHub, their database, and an internal-tools server they built with FastMCP. Every engineer on the team gets access to the same servers, authenticated via their own credentials.

4. Standard model selection rules

Most engineers use whatever model the tool defaults to. The teams doing it right have explicit guidance: Haiku 4.5 for classification/routing/bulk, Sonnet 4.6 for daily engineering work, Opus 4.6 for architectural decisions and complex debugging. This shows up in the CLAUDE.md, in internal docs, or in custom commands that hardcode the right model.

Model selection might sound minor, but it's a 3-5x cost delta between Opus and Sonnet on the same workload. Teams that default to Opus are paying several times more than they need to. Teams that default to Haiku are getting worse output on complex work and blaming Claude. Getting the rule right matters.
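One way to codify the rule is a short section in the shared CLAUDE.md, so every session starts with it (wording is a sketch):

```markdown
## Model selection
- Haiku: bulk and mechanical work — classification, log triage, renames
- Sonnet (default): day-to-day feature work, tests, code review
- Opus: architecture decisions, complex debugging, cross-cutting refactors
```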

5. A canonical "how we use Claude" doc for new engineers

The best teams I've seen have a short internal doc — one or two pages — that explicitly says "here's how we use Claude here." It covers: which tools to install, how to authenticate, where CLAUDE.md lives and why it matters, what custom commands exist, which MCP servers to add, and what not to do (e.g., "never give an MCP server access to production credentials without security review").

New engineers read it on day one and skip the whole ad-hoc discovery phase. Claude adoption becomes an onboarding topic, not a skunkworks project.

The training gap

Here's the thing nobody's doing, and the thing I think is the highest-leverage fix: structured training.

Your company bought Claude seats. Great. You wouldn't buy JetBrains licenses and never train anyone on how to use IntelliJ. You wouldn't buy Datadog and never train your ops team on how to read the dashboards. But buying Claude seats and never training engineers on how to use Claude is exactly that mistake, at a much larger scale.

The result is predictable: a handful of engineers use Claude deeply and effectively, a much larger group uses it shallowly, and the org-level ROI is dictated by the shallow group because that's where the mass is.

Training closes this gap in days, not months. A structured 3-day program that covers the foundations, Claude Code mastery, and the advanced topics (agents, the API, MCP) gets an entire team to the "deep usage" level in one cohort. Completion rates for live cohort training sit between 85% and 96%, compared to 5-15% for self-paced courses — the delivery mode is the ROI.

The math for engineering leaders is almost embarrassing:

  • Training cost: $225-$299 per engineer.
  • Time saved per engineer per week, post-training: 5-10 hours.
  • Recovered productivity per engineer per year at $75/hour blended cost: $19,500-$39,000.
  • Payback period: under one week.

This isn't speculation. This is just what happens when engineers stop guessing at how to use the tools they already have.

What the training should cover

If you're going to invest in structured Claude training — whether from me or from anyone else — here's the shape of what it should actually cover. If the program doesn't include these topics, it's not going to move the needle.

Foundations. How LLMs actually work, at the level an engineer needs to reason about context windows, model selection, and prompting strategies. This takes maybe half a day and it's the foundation for everything else.

Claude Code mastery. The core tool. CLAUDE.md, the permission model, sessions and memory, custom commands, hooks, skills, MCP. This is the "make every engineer's daily workflow 2x more effective" chunk.

The API layer. Messages API, client SDKs, tool use, structured outputs, prompt caching, Batch API. Not every engineer will build on the API directly, but they should understand what it can do so they know when to reach for it.

Agentic engineering. Multi-agent orchestration, sub-agents, git worktrees, headless mode, the Agent SDK, Managed Agents. This is where the ceiling is, and where most of the future ROI will come from.

MCP. Using MCP servers, and building custom ones with FastMCP. MCP is how you connect Claude to your team's internal tools, and the teams investing here see outsized returns.

A capstone. Something each engineer ships by the end. Not a homework exercise — a real integration they take back to their codebase on Monday.

That's roughly eight hours of content, which maps to a 3-day live cohort. It's not accidental that it matches the structure of Claude Camp.

PDP, L&D, and Section 127

One logistical thing worth flagging for engineering managers: this is exactly what professional development budgets are for.

Most companies offer somewhere between $1,000 and $5,250 per engineer per year in professional development or education assistance. Under IRS Section 127, up to $5,250 of that per year is tax-free to the employee. Structured engineering training like Claude Camp qualifies. It's the same category as a conference ticket or a paid course — no special procurement dance required.

If you're a manager wondering how to get budget, the answer is often "you already have it." If you're an engineer wanting your manager to approve this, the Convince Your Boss page has a pre-written email template you can forward.

Where to go from here

If you lead an engineering org and you want the whole team on a consistent Claude footing:

  1. Audit what your team is doing today. How many engineers have a CLAUDE.md in their main repo? How many use custom commands? How many have an MCP server wired up? The gap is usually bigger than leaders realize.
  2. Standardize the basics. Commit a CLAUDE.md to your main repos. Write 2-3 team commands. Pick model selection defaults. Write the one-page "how we use Claude here" doc.
  3. Invest in training. Either build it in-house (hard, time-consuming, requires someone who's already deep in this) or bring in a structured program. Live cohort training is the fastest way to close the gap across a whole team in days instead of quarters.
  4. Wire up MCP to at least one internal tool. GitHub is the easiest start. From there, your database or your internal API. Once your team feels the power of tool-connected Claude, they won't go back.
  5. Measure. Six weeks after training, ask the team: how many hours per week do you estimate you're saving with Claude? What's changed in your workflow? The numbers are usually substantial, and they're useful for justifying the next round of investment.

Claude Camp is the live cohort bootcamp I built to close exactly this training gap. Three days, eight hours, taught live on Zoom, cohorts capped at 20 engineers. It covers everything I've talked about in this post — foundations, Claude Code, the API layer, agents, MCP — with hands-on exercises on real codebases. Team pricing starts at $225 per seat for groups of 6 or more. See team pricing and curriculum at /for-teams, or forward the Convince Your Boss page to your manager to get approval.

Related reads: my Claude Code tutorial for engineers just starting out, the CLAUDE.md guide for the team standardization piece, and the Agent SDK guide for the teams ready to start automating their own workflows. And if you're still picking tools, my honest comparison of Claude Code, Cursor, and Copilot is worth a read before you make a call.

Published March 31, 2026