The SOW Nobody Reads Until Something Goes Wrong

When a project falls apart, everyone suddenly becomes an expert on what the contract said.

By Mike Phillips


There’s a document that lives at the center of almost every IT professional services engagement. It gets negotiated, revised, signed, and filed. Sometimes it gets emailed around with a note that says “final final v3.” Then it gets forgotten — until a project starts falling apart, at which point everyone involved pulls it out and reads it for the first time like it’s scripture.

That document is the statement of work. And the gap between how it’s treated at signing and how it’s treated at failure is one of the most expensive problems in enterprise technology that nobody talks about directly.

I spent a decade embedded in federal agencies as a senior IT consultant — at the Department of State, at NIH, through firms including Deloitte, Acuity, TEKsystems, and Alcor. I wrote SOWs. I worked inside them. I watched them get stretched, disputed, and weaponized. And I can tell you that the problem with most statements of work isn’t that they’re poorly written. The problem is that they’re written to get the deal done, not to govern the work.

Those are two very different documents. Most organizations only find out which one they have after something goes wrong.


What a SOW Is Supposed to Do

In theory, a statement of work is a precise description of what a vendor will deliver, when they’ll deliver it, what it will cost, and what “done” looks like. It’s the document that answers the question everyone will eventually ask: did they do what we paid them to do?

A well-written SOW defines scope clearly enough that both parties could hand it to someone who wasn’t in the room during negotiations and that person would understand exactly what was agreed to. It identifies deliverables with specificity. It describes acceptance criteria — not just what gets built, but how you’ll know it works. It draws a boundary around what’s included so that everything outside that boundary is visibly out of scope, not invisibly unaddressed.

In practice, most SOWs do none of these things cleanly. They describe work in the language of intentions rather than outcomes. They define deliverables by activity (“the vendor will provide consulting support for…”) rather than by result (“the vendor will deliver a configured, tested, production-ready system that…”). They leave acceptance criteria vague or absent entirely. And they handle scope boundaries the way a bad prenuptial agreement handles assets — by not really handling them at all.

This isn’t always negligence. Sometimes it’s optimism. Sometimes it’s deadline pressure. Sometimes it’s the natural result of writing a document about work that hasn’t started yet, using language everyone in the room agrees sounds reasonable, without stress-testing what that language means when the room empties and the work gets complicated.


The Scope Creep Problem Is Actually a Language Problem

Scope creep is the most common complaint in professional services. The client thinks the vendor agreed to do X. The vendor thinks they agreed to do Y. X and Y looked similar in the conference room. They look nothing alike six months into implementation.

The conventional diagnosis is that scope creep is a project management failure — a matter of poor change control, weak governance, or clients who keep adding requirements. That’s not wrong. But it misses the upstream cause.

Most scope creep starts in the SOW before the project begins, in the form of undefined terms, ambiguous boundaries, and language that sounds specific but isn’t.

Consider a phrase that appears in some variation in nearly every professional services agreement: “the vendor will provide implementation support as needed.” As needed by whom? Determined how? Capped at what? “Support” can mean a two-hour training session or a six-month embedded engagement, depending on who’s reading. When those two interpretations collide mid-project, neither party is lying. They’re each reading the same sentence through the lens of what they expected when they signed it.

The same problem shows up in phrases like “industry best practices,” “standard configuration,” “reasonable efforts,” and my personal favorite: “the vendor will work with the client to determine requirements.” That last one isn’t a deliverable. It’s a description of work that produces a deliverable that isn’t yet defined. If that language survives into a signed SOW, you have a contract to have a conversation, not a contract to build a system.


What Federal Contracting Gets Right (and Why It Still Fails)

Federal contracts are, in theory, the most carefully written professional services agreements in existence. There are entire regulatory frameworks — the Federal Acquisition Regulation, agency-specific supplements, mandatory clauses — designed to make federal SOWs precise. Contracting officers review them. Legal teams review them. Program offices review them. They go through multiple revision cycles before they’re signed.

And they still produce disputes, cost overruns, and failed implementations with remarkable regularity.

Having worked inside federal engagements, I can tell you that the problem isn’t usually the formal contract structure — it’s the gap between the contract and the actual day-to-day understanding of scope. Federal agencies often rely on contracting officers who understand procurement law but not the technical work. Technical staff understand the work but not what the contract says. Program managers sit in the middle trying to translate between both, usually without enough authority on either side.

The result is a situation where the contract says one thing, the technical team is building another thing, and nobody connects those two realities until there’s an invoice dispute or a congressional inquiry.

This isn’t unique to government. It’s the same dynamic in commercial IT services, just with fewer regulations and faster consequences.


The Deliverable Nobody Defines: Acceptance

If scope definition is where most SOWs are weak, acceptance criteria are where they’re almost completely absent.

Acceptance criteria answer a simple question: how will we know when this is done and working? In software and systems implementations, that question is surprisingly hard to answer in advance — and it’s almost always left to be figured out later. The SOW says the vendor will deliver a configured ServiceNow instance. What it doesn’t say is what “configured” means, what test cases need to pass, what user acceptance testing looks like, who signs off, and what happens if the client’s team isn’t available to test on the agreed timeline.

When acceptance criteria are vague or absent, two things happen. First, vendors deliver what they built and declare it complete, because completion is defined by their own internal standards rather than a shared, written agreement. Second, clients reject deliverables — or more often, express dissatisfaction without formally rejecting them — because “done” looks different to them than it does to the vendor. The project enters a grey zone of rework, dispute, and escalating frustration, none of which was contemplated in the original timeline or budget.

I’ve seen federal implementations stall for months in exactly this gap. A system would be technically functional by every measure the vendor used, and genuinely not useful by any measure the agency needed. Both parties were right by the terms of what they’d written. Neither got what they wanted.


Billing Terms: The Other Landmine

Scope ambiguity gets most of the attention in SOW discussions, but billing terms create just as many disputes — and they’re easier to prevent.

The most dangerous billing language in a professional services agreement is anything that ties payment to time rather than outcomes. Time-and-materials contracts, where the client pays for hours worked regardless of what’s produced, shift all the risk to the client and create no incentive for efficient delivery. They’re sometimes appropriate — for genuinely exploratory work where scope can’t be known in advance — but they’re used far more broadly than that justifies.

Even in fixed-price arrangements, billing milestones are often defined by activity (“upon completion of Phase 1”) rather than verified outcomes (“upon client acceptance of Phase 1 deliverables as defined in Section 3.2”). The difference matters enormously when a milestone is disputed. Activity-based milestones get invoiced when the vendor says Phase 1 is done. Outcome-based milestones get invoiced when the client has confirmed, in writing, that the deliverables meet the criteria both parties agreed to in advance.

This isn’t just a client-protection issue. Vendors benefit from clarity too. When acceptance criteria are clear and milestone triggers are specific, there’s less room for clients to withhold payment by claiming dissatisfaction with work that was never precisely defined. Vague contracts protect no one and create leverage for whoever has more lawyers.


Who Actually Writes These Things

Here’s something that gets overlooked in conversations about SOW quality: most statements of work are written by people who are simultaneously trying to close a sale.

On the vendor side, SOWs often originate in sales or presales — people whose incentive is to get the deal signed, not to anticipate every way the project could go wrong. They’re working from templates, time pressure, and the collective optimism of a team that just finished a successful demo. Precision requires slowing down and stress-testing language, which is not what the sales cycle rewards.

On the client side, the people reviewing the SOW are often program managers or IT leads who understand the technical work but haven’t been trained to read contracts as legal documents. They read for reasonableness — does this sound like what we discussed? — rather than for precision. Does this language hold up if the relationship sours? Do these acceptance criteria actually give us recourse if the deliverable doesn’t work? Those are different questions, and most technical reviewers aren’t asking them.

The result is a document that reflects what everyone hoped the project would be, written in language that sounds precise but isn’t, reviewed by people who weren’t looking for the right problems. It gets signed. The project starts. And the first time something goes sideways, everyone reaches for the contract and discovers it doesn’t say what they thought it said.


What Better Looks Like

None of this is unsolvable. The principles of a well-written SOW aren’t complicated — they’re just inconvenient when you’re trying to close a deal by Friday.

Define deliverables by outcome, not activity. “The vendor will provide support” is an activity. “The vendor will deliver a tested, documented, production-ready configuration meeting the requirements in Appendix A, accepted in writing by the client’s designated project manager” is an outcome. The second version is harder to write. It’s also much harder to dispute.

Write acceptance criteria before you write scope. If you can’t describe how you’ll know a deliverable is complete and working, you don’t understand the deliverable well enough to contract for it yet. Forcing that conversation at the SOW stage is uncomfortable. Having it six months in, mid-project, with money already spent, is worse.

Define what’s out of scope explicitly. Most SOWs describe what’s included. Few describe what’s not. Anything left unaddressed becomes negotiable under pressure, and “that’s out of scope” is a much harder argument to make when scope was never clearly bounded in the first place.

Make billing milestones contingent on verified acceptance, not vendor-declared completion. The invoice should follow the client’s signature on an acceptance document, not the vendor’s email saying the work is done.

And build in a change order process with actual teeth — a clear, written mechanism for how scope changes get documented, priced, and authorized. Change is inevitable in any complex implementation. What’s not inevitable is the chaos that results from handling it informally, through email chains and verbal agreements, rather than through a process the contract actually specifies.


The Real Cost

The reason SOW failures are so expensive isn’t just the direct cost of disputes, rework, and legal fees — though those are real. It’s the organizational cost of a failed or troubled implementation: the lost productivity, the damaged relationships, the institutional momentum that got spent on something that didn’t work.

In federal contracting, troubled implementations become audit findings, congressional inquiries, and inspector general reports. In commercial IT, they become leadership changes, vendor terminations, and projects that quietly get cancelled after two years and several million dollars with nothing to show for them.

Behind almost every one of those outcomes, if you pull the thread far enough back, you’ll find a SOW that wasn’t written to govern the work. You’ll find deliverables defined by activity. Acceptance criteria that weren’t there. Billing milestones that triggered on the vendor’s declaration rather than the client’s confirmation. Scope boundaries that were understood verbally but never written down.

You’ll find a document that was written to get the deal done.

There’s nothing wrong with wanting to get the deal done. But the deal isn’t the project. And the project is what you actually paid for.