
What to Look For When Comparing Contracts: NDA, MSA, Employment, and Purchase Agreements

14 min read

Every contract comparison starts the same way: two versions, a pile of highlighted changes, and the question of which changes actually matter. But "which changes matter" depends entirely on what kind of contract you're comparing.

The critical provisions in an NDA are not the same as in a master service agreement. The change patterns in an employment agreement look nothing like those in a purchase agreement. The comparison traps that catch reviewers off guard are different for each contract type.

Most guides on contract comparison treat all contracts the same. This one doesn't. Below is a practical breakdown of what to focus on when comparing four of the most common contract types: NDAs, MSAs, employment agreements, and purchase agreements. For each, we cover the provisions where changes matter most, the comparison patterns to watch for, and where the standard tools tend to fail.

Comparing NDAs: short but deceptive

Non-disclosure agreements are the contracts lawyers compare most often and take least seriously. They're usually 3-5 pages. They look like boilerplate. Many firms have a standard form. The temptation is to skim the comparison output and move on.

That's how NDA comparison mistakes happen. NDAs are short enough that every clause matters. There's no filler. A single changed word in a 4-page NDA represents a much larger percentage of the agreement than the same change in a 40-page MSA. And because NDAs govern what information can be shared and with whom, the consequences of a missed change can extend far beyond the NDA itself, into every deal the confidential information touches.

The provisions that matter most

Definition of Confidential Information. This is the provision that gets quietly expanded or narrowed most often. Watch for changes to the scope of what's considered confidential, the addition or removal of exclusions (publicly available information, independently developed information, information received from third parties), and any carve-outs that weren't in your original draft. A broadened definition imposes more obligations on the receiving party. A narrowed one reduces protection for the disclosing party. Either direction can be problematic depending on which side you represent.

Permitted disclosures. Who can the receiving party share the confidential information with? "Representatives" can mean employees only, or it can mean employees, contractors, advisors, affiliates, and sub-processors. The difference is significant. Check whether the list of permitted recipients expanded between versions, whether any "need to know" qualification was added or removed, and whether the receiving party's obligations extend to its representatives or just to itself.

Term and survival. How long does the NDA last, and how long do the confidentiality obligations survive after termination? A 2-year NDA with 3-year survival is very different from a 2-year NDA with indefinite survival. Watch for changes to both the agreement term and the survival period. Also check whether the return-or-destroy obligations have a deadline and what happens to information that "cannot reasonably be returned" (a carve-out that effectively allows indefinite retention).

Remedies and enforcement. Does the NDA include an injunctive relief clause? Was it mutual in your draft but made one-sided in their markup? Is there a limitation of liability that caps damages for breach? For NDAs, the enforcement provisions are often more important than the confidentiality provisions themselves, because they determine what actually happens when something goes wrong.

NDA comparison traps

Mutual vs. one-sided drift. Your draft said "each party" or "the disclosing party." Their markup changed it to "Company A" in some places but not others. Now some obligations are mutual and some aren't, and the inconsistency might not be obvious in a comparison that shows changes one at a time. Scan for any instance where mutual language became one-sided.

Definition of "Representative." This defined term often gets edited in the definitions section, and the change ripples through every clause that references it. If your comparison tool doesn't connect defined-term changes to their downstream effects, you'll see the definition change in isolation without realizing it affected 8 other provisions.

Governing law and dispute resolution. These clauses are at the end of the NDA, past the confidentiality substance. Reviewers often run out of attention by the time they reach them. A changed jurisdiction or an added arbitration clause in an NDA can affect enforceability of the entire agreement.

Comparing MSAs: where noise buries substance

Master service agreements are where comparison tools earn their keep or fail spectacularly. An MSA is typically 20-50 pages, covers the full commercial relationship, and gets negotiated over multiple rounds with different lawyers touching different sections. By the time you're comparing round 3 against round 4, the change count can be in the hundreds.

MSAs also have the highest incidence of template-driven formatting noise. Each party has its own MSA template with its own styles, numbering conventions, and defined-term formatting. When one side converts the document to its template, every paragraph can show formatting changes even if the text is identical.

The provisions that matter most

Service scope and SOW structure. The MSA often defines what services are included at a high level, with details deferred to statements of work. Watch for changes that shift scope from the SOW into the MSA or vice versa. If the counterparty narrowed the service description in the MSA body while referencing a SOW that doesn't exist yet, the practical scope is undefined. Also watch for "commercially reasonable efforts" replacing "best efforts" or "shall" replacing "will" in service obligations. These seem minor but they change the standard the provider is held to.

Service levels and SLA tables. This is where table comparison quality matters. SLA tables define uptime commitments, response times, resolution targets, and the credits or penalties for missing them. A response time changed from "4 hours" to "4 business days" in an SLA table is a massive reduction in service commitment. If your comparison tool misaligns table rows (common when rows are added or reordered), that change can be invisible. We covered this failure mode in detail in our post on why Word Compare fails for legal contracts.

Payment terms and pricing. Beyond the obvious (check the numbers), watch for changes to payment timing (Net 30 to Net 60), invoice dispute windows (often shortened in later rounds), late payment penalties (added or increased), and price adjustment mechanisms (CPI escalators, annual increase caps). These are frequently spread across multiple sections rather than consolidated in one payment clause, which makes them easy to miss in a section-by-section review.
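Because payment mechanics rarely live in one clause, a keyword sweep can list every section worth re-reading before a section-by-section pass. A rough Python sketch, with a deliberately incomplete, illustrative pattern list:

```python
import re

# Illustrative vocabulary; a real sweep would cover far more terms.
PAYMENT_TERMS = re.compile(
    r"\bNet\s+\d+\b|\blate\s+payment\b|\binvoices?\b|\bCPI\b|\bescalat\w*",
    re.IGNORECASE,
)

def payment_mentions(sections: dict[str, str]) -> list[str]:
    """Name every section that touches payment mechanics, so scattered
    payment provisions are all re-read, not just the fees clause."""
    return [name for name, text in sections.items()
            if PAYMENT_TERMS.search(text)]

msa = {
    "4. Fees": "Invoices are payable Net 30.",
    "7. Term": "The initial term is two years.",
    "12. Misc": "Fees escalate annually by CPI.",
}
flagged = payment_mentions(msa)
# → ['4. Fees', '12. Misc']
```

The point is not the pattern itself but the habit: enumerate every section that mentions money before trusting that the payment review is done.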

Liability caps and indemnification. In MSAs, liability provisions are often the most heavily negotiated section. Watch for changes to the overall liability cap (total contract value vs. 12 months' fees vs. a fixed dollar amount), carve-outs from the cap (IP infringement, data breach, confidentiality breach), and whether indirect/consequential damages are excluded. A common pattern: the counterparty accepts your liability cap but adds carve-outs that effectively make the cap meaningless for the scenarios where liability is most likely to arise.

Termination rights. Pay close attention to: termination for convenience (was it added? what's the notice period?), termination for cause (what constitutes "cause" and what cure period applies?), and what happens to ongoing SOWs when the MSA terminates. A shortened cure period from 30 days to 10 days can make breach-based termination trivially easy to trigger.

MSA comparison traps

Cross-reference renumbering. When sections are added or removed in an MSA, all internal cross-references need to update. "Subject to Section 8.3" might now point to the wrong provision if Section 7 was expanded. A text comparison shows the number change but doesn't tell you whether the reference is still correct. This is a manual verification step that no current tool fully automates.
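You can at least automate the "find every cross-reference" half of that verification. A minimal sketch, assuming references follow the common "Section X.Y" style and that both versions contain the same number of them (added or dropped references would need a set comparison instead):

```python
import re

def section_references(text: str) -> list[str]:
    """Extract internal cross-references like 'Section 8.3', in order."""
    return re.findall(r"Section\s+\d+(?:\.\d+)*", text)

old = "Subject to Section 8.3, fees accrue under Section 4."
new = "Subject to Section 9.3, fees accrue under Section 4."

# Pair references positionally and flag any that were renumbered.
# Each flagged pair still needs a human check that the new number
# points at the provision the drafter intended.
renumbered = [
    (a, b)
    for a, b in zip(section_references(old), section_references(new))
    if a != b
]
# → [('Section 8.3', 'Section 9.3')]
```

The script finds the renumbered references; only a human can confirm that "Section 9.3" is actually the clause the drafter meant.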

Defined term cascades. MSAs can have 30+ defined terms. A change to the definition of "Services," "Deliverables," or "Affiliate" in the definitions section changes the meaning of every clause that uses that term. If you catch the definition change but don't trace its impact through the document, you've caught 10% of the change.

Exhibit and schedule changes. Pricing schedules, SLA exhibits, and security requirements are often in appendices that reviewers check last (or not at all). Changes in these attachments can have more financial impact than changes in the main body. Treat exhibit comparisons with the same attention as body comparisons.

Comparing employment agreements: high stakes, subtle edits

Employment agreements sit in a unique position. They govern someone's livelihood, they affect the employer's competitive position, and they're subject to enforceability constraints that vary by jurisdiction. Changes that look minor on paper can have outsized consequences for both sides.

The comparison challenge with employment agreements is subtlety. These aren't 50-page commercial contracts with hundreds of changes. They're typically 5-15 pages with a handful of edits that each carry significant weight. Missing a single changed word in a non-compete clause or a compensation provision can have career-altering implications.

The provisions that matter most

Compensation and benefits. Salary, bonus, equity, and benefits are usually in the first few pages and get reviewed carefully. But watch for the details that aren't the headline number: bonus eligibility conditions (changed from "guaranteed" to "discretionary"), equity vesting schedules (cliff period lengthened, acceleration on termination removed), benefit start dates, and whether compensation terms are described as "currently" or "as of the date hereof" (which can be used to argue the terms were only accurate at signing).

Non-compete and non-solicitation. These are the clauses most likely to generate litigation and most frequently edited during negotiation. The key variables: geographic scope (unlimited, state-specific, or tied to the company's operating footprint), duration (6 months vs. 24 months), scope of restricted activity (the same role, the same industry, or any role at a competitor), and the definition of "competitor" (named companies vs. a descriptive definition that could include anyone). Each of these variables can be adjusted by a word or two, and each adjustment dramatically changes the restriction's impact and enforceability.

IP assignment and work product. Who owns what the employee creates? Standard employment agreements assign all work product to the employer. Watch for changes to: the definition of "work product" (does it include work done on personal time? with personal equipment? in tangentially related fields?), pre-existing IP exclusions (is the employee's prior work carved out?), and whether the assignment is "of" IP (present transfer) or an agreement "to assign" IP (promise to transfer later, which has different legal implications in some jurisdictions).

Termination and severance. What triggers termination for cause? What does the severance package look like for termination without cause? How long is the notice period? Watch for changes to the definition of "cause" (expanding it makes it easier for the employer to terminate without severance), changes to severance terms (reduced payout, added conditions like signing a release), and changes to notice periods (shorter notice gives less time to prepare).

Employment agreement comparison traps

Obligation standards. Employment agreements shift between "shall," "will," "agrees to," and "shall use reasonable efforts to," and each carries a different enforceability weight depending on jurisdiction. A comparison tool shows the text change. Understanding whether the change weakens or strengthens the obligation requires legal judgment, but catching the change in the first place requires a thorough comparison.

Cross-references to policies. Employment agreements frequently incorporate company policies by reference ("Employee shall comply with the Company's policies as updated from time to time"). If the counterparty added or expanded these references, they've effectively imported additional obligations that aren't in the agreement itself. A comparison of the agreement alone won't reveal what those policies say.

Choice of law changes. Employment law varies significantly by state and country. A changed governing law clause in an employment agreement can determine whether a non-compete is enforceable (California vs. Texas), what severance obligations apply, and how disputes are resolved. This clause is almost always at the end of the agreement and almost always under-reviewed.

Comparing purchase agreements: the numbers game

Purchase and sale agreements (whether for assets, equity, real estate, or goods) are where the most money changes hands and where missed changes in a comparison have the most direct financial consequences. These agreements are dense with numbers: purchase price, earnout calculations, working capital adjustments, escrow amounts, baskets, caps, and deductibles.

The comparison challenge is volume and density. Purchase agreements are often 30-60 pages with extensive schedules and exhibits. They involve multiple workstreams (financial, legal, tax, operational) and multiple rounds of revision. A change to an indemnification basket in round 4 might affect the economics more than the purchase price adjustment everyone focused on in round 2.

The provisions that matter most

Purchase price and adjustments. The headline price is easy to check. The adjustment mechanisms are where changes hide. Working capital targets, inventory valuations, net debt definitions, and earnout formulas all affect the effective price. A changed definition of "Net Working Capital" that excludes certain receivables can reduce the effective purchase price by millions without touching the stated number. Watch for changes to any defined term used in price calculations.

Representations and warranties. The rep section is typically the longest in a purchase agreement and the most granular. Each rep is a statement of fact that, if wrong, can trigger indemnification. Watch for: "knowledge" qualifiers added to reps (which shift the burden of proof), materiality qualifiers inserted or removed ("material" adverse change vs. "any" adverse change), and reps that were deleted entirely. A deleted rep means no claim if that fact turns out to be wrong.

Indemnification mechanics. Beyond the cap and basket (which everyone checks), watch for: the definition of "Losses" (does it include consequential damages? attorney's fees? diminution in value?), the indemnification period (shorter periods limit the buyer's window to discover and claim issues), and whether the indemnification is the exclusive remedy or whether other legal remedies survive. A common negotiation tactic: accept a generous cap while narrowing the definition of "Losses" to exclude the most expensive categories.

Closing conditions and termination rights. What needs to happen before closing? What gives either party the right to walk away? Watch for changes to material adverse change (MAC) definitions (the standard for walking away from the deal), required consents and approvals (new conditions added can delay or block closing), and long-stop dates (when the deal dies if it hasn't closed). A shortened long-stop date creates urgency that can benefit one side.

Purchase agreement comparison traps

Disclosure schedule references. Reps often say "except as set forth in Schedule 3.7." If the counterparty added schedule references (expanding the exceptions) or changed which schedule is referenced, the rep's scope has changed even though the rep language in the body might look the same. Always compare disclosure schedules alongside the main body.

Definition changes with financial impact. "Indebtedness," "Net Working Capital," "Permitted Liens," "Material Contracts," and dozens of other defined terms in a purchase agreement directly affect economics. A change to the definition of "Indebtedness" that adds or removes categories can swing the effective purchase price by hundreds of thousands of dollars. These definitions are typically in a dense section early in the agreement that reviewers sometimes skim.

Table and schedule changes. Purchase agreements rely heavily on schedules: disclosure schedules, financial projections, asset lists, contract lists, employee lists. Changes to these schedules (added disclosures, removed assets, modified projections) are often more consequential than changes to the body of the agreement. If your comparison tool handles tables poorly, the most financially significant changes might be the ones you see least clearly.

Universal comparison priorities across all types

Regardless of contract type, certain comparison principles apply everywhere. Use these as your baseline, then layer the contract-specific priorities on top.

Numbers first. Any change to a dollar amount, a percentage, a date, a deadline, or a quantity is worth attention. These are objective, verifiable, and usually consequential. Scan for number changes before reading anything else.
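If you have plain text of both versions, the numbers-first scan is easy to script. A rough sketch with an illustrative pattern covering dollar amounts, percentages, and day/month/year durations (real documents will need a broader pattern, and the positional pairing assumes no numbers were added or removed):

```python
import re

# Illustrative pattern: dollar amounts, percentages, and durations.
NUMBER = re.compile(
    r"\$[\d,]+(?:\.\d+)?"
    r"|\b\d+(?:\.\d+)?%"
    r"|\b\d+\s+(?:business\s+)?(?:days?|months?|years?|hours?)\b"
)

def numeric_terms(text: str) -> list[str]:
    return NUMBER.findall(text)

old = "Fees are $10,000 payable within 30 days, plus 1.5% interest."
new = "Fees are $10,000 payable within 60 days, plus 1.5% interest."

# Any positional mismatch is a number change worth reading in
# context before anything else in the review.
changed_numbers = [
    (a, b)
    for a, b in zip(numeric_terms(old), numeric_terms(new))
    if a != b
]
# → [('30 days', '60 days')]
```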

Obligation language second. "Shall" to "may." "Must" to "should." "Best efforts" to "commercially reasonable efforts." "Will" to "agrees to use reasonable efforts to." These shifts change who is required to do what and to what standard. They're easy to miss in a sea of text but they change the enforceability of entire provisions.

Defined terms third. Check the definitions section in every round. A changed definition affects every clause that uses the term. This is the highest-leverage change an opposing party can make: edit one definition, change 20 provisions.

Deleted provisions fourth. It's psychologically harder to notice that something is missing than to notice that something changed. If a provision was in your draft and isn't in their markup, that absence is a change, and it's the kind of change that text-level highlighting can obscure. A moved clause, for instance, shows up as a deletion in one place and an insertion in another. Are those the same clause? A comparison tool will show both edits. Connecting them requires structural awareness.
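The "deleted or moved?" question is, mechanically, a fuzzy-matching problem. A minimal sketch using Python's standard difflib, with an illustrative similarity threshold:

```python
import difflib

def classify_missing_clause(clause: str, new_version_clauses: list[str],
                            threshold: float = 0.9) -> str:
    """Distinguish a true deletion from a clause that merely moved.

    A clause missing from its old position but near-identical to text
    elsewhere in the new version was probably relocated, which changes
    its context rather than removing it."""
    best = max(
        (difflib.SequenceMatcher(None, clause, c).ratio()
         for c in new_version_clauses),
        default=0.0,
    )
    # The threshold is an illustrative guess; tune it on real documents.
    return "moved" if best >= threshold else "deleted"

missing = "Either party may terminate on 30 days' written notice."
new_doc = [
    "Fees are payable quarterly.",
    "Either party may terminate on 30 days' written notice.",
]
label = classify_missing_clause(missing, new_doc)
# → 'moved'
```

A "moved" result is not the end of the inquiry: relocating a clause to a different section can narrow or broaden its scope even when the wording is untouched.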

Governing law and dispute resolution last (but don't skip them). These boilerplate-looking clauses at the end of the agreement determine where disputes are heard, under what law, and through what mechanism. They're changed less often, but when they are changed, the impact is structural. Review them every time.

When your comparison tool matters (and when it doesn't)

Different contract types put different demands on your comparison tool. Here's when the tool choice actually affects your review quality, and when it's irrelevant.

For NDAs (3-5 pages, few changes)

Word Compare is usually sufficient. The document is short enough that even a noisy comparison output is manageable. The change count is low enough that flat presentation isn't a burden. If you're comparing NDAs against your firm's standard form and the counterparty used the same template, any tool works. Save the sophisticated tools for contracts where they matter.

For MSAs (20-50 pages, many changes, tables)

This is where tool choice matters most. Formatting noise from template differences can produce 100+ formatting changes alongside 15 content changes. SLA tables need structure-aware comparison. Moved clauses need to be detected as moves, not as separate deletions and insertions. If you regularly compare MSAs, a tool with formatting classification and noise filtering will save meaningful time on every comparison.

For employment agreements (5-15 pages, subtle changes)

The tool matters less than the reviewer's attention. Employment agreement changes are typically few but consequential. The comparison output won't be noisy. The risk is not in missing changes due to volume but in underestimating the impact of the changes you see. That said, a tool that flags obligation-language changes ("best efforts" to "reasonable efforts") as distinct from formatting changes is useful for this contract type.

For purchase agreements (30-60 pages, dense, schedule-heavy)

Tool choice is critical. Purchase agreements combine the MSA's problems (length, formatting noise, table content) with financial density that makes every missed change expensive. The schedules and exhibits alone can run dozens of pages. If your comparison tool can't handle tables reliably and can't help you separate formatting noise from content changes, you're reviewing a purchase agreement at a disadvantage. This is the contract type where the investment in a purpose-built comparison tool has the clearest ROI.

The bottom line

Different contracts need different comparison priorities, but the underlying discipline is the same: know what matters before you open the comparison, scan for the highest-impact changes first, and don't let volume or noise push you into skimming.

For short, simple contracts, any comparison tool and a focused reviewer will catch what matters. For long, complex, or high-stakes contracts, matching your tool to the demands of the contract type isn't optional. It's how you avoid being the reviewer who missed the indemnification cap change buried on page 37 of a purchase agreement between two formatting changes and a renumbered cross-reference.

If you want to see how formatting classification and change priority handle a real comparison of the contracts you work with, try Clausul with a document pair from your practice. The output is most visibly different from standard tools on the contract types where comparison is hardest: long MSAs, dense purchase agreements, and any document where the other side changed templates between rounds.

Frequently asked questions

What is the most important thing to check when comparing any contract?

Money and risk allocation. Regardless of contract type, the changes that matter most are those that affect financial terms (pricing, caps, penalties), risk allocation (indemnification, liability limits, insurance requirements), and exit rights (termination, cure periods, renewal terms). These are the provisions where a missed change has the highest cost. Start every comparison by scanning for changes to numbers, dates, and obligation language ("shall," "must," "will"), then work outward from there.

Do I need a different comparison tool for different contract types?

Not a different tool for each contract type, but tool capability matters more for some types than others. What changes most is your review priorities: a good comparison tool gives you the same thorough detection for an NDA as for a 50-page MSA, and the difference is in what you, the reviewer, focus on first. For short, simple contracts (standard NDAs, basic service agreements), Word Compare is usually sufficient. For longer or more complex contracts (MSAs with SLA tables, purchase agreements with detailed rep schedules), a tool with formatting classification and table comparison becomes more valuable.

How long should a contract comparison review take?

It depends on the contract length, the number of changes, and the stakes. A 3-page NDA with 5 changes might take 10 minutes. A 40-page MSA with 80 changes and template differences could take 1-2 hours with a standard tool, or 30-45 minutes with a tool that filters formatting noise and classifies changes. The right question is not "how fast can I review this?" but "am I confident I caught everything that matters?" Speed without confidence is not a time saving.

What are the most commonly missed changes in contract comparisons?

Based on what we see across legal teams: (1) defined term changes that ripple through the document, where the definition itself changed but the 30 places it appears look identical; (2) liability caps and indemnification limits that were quietly adjusted by a digit or a qualifier; (3) termination provisions where cure periods were shortened or convenience termination was added; (4) table changes (pricing, SLAs, milestones) where row additions or value edits get lost in formatting noise; and (5) moved clauses where a provision was relocated to a different section, narrowing or broadening its scope without changing any wording.

Should I compare every round of a contract negotiation or just the final version?

Compare every round. Each round of negotiation can introduce changes, and not all changes are flagged or discussed by the other party. Some changes are introduced quietly in later rounds after your attention has moved to other provisions. Additionally, always compare the final execution copy against the last agreed draft before signing. The "clean copy" the other side sends for signature should be identical to the agreed version. If it is not, you need to know before the ink dries.

How do I handle a contract comparison with hundreds of changes?

High change counts usually mean a template difference between versions (different firm templates, different formatting defaults). The first step is determining how many of those changes are formatting vs. content. If a tool shows 200 changes and 180 are formatting, you have 20 changes to review, not 200. A tool with formatting classification does this automatically. Without one, you can get a rough filter by comparing with Word and toggling the formatting option off, though this hides all formatting changes rather than letting you review them selectively. For the content changes, triage by category: financial terms first, then obligation language, then dates and deadlines, then everything else.
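The formatting-vs-content split can be approximated even without a specialized tool: if the plain text of two versions matches after normalization, every change the redline reports is formatting-only. A rough sketch; the normalization here is deliberately crude (a genuine wording change that differs only in capitalization would be misclassified, which is why purpose-built tools classify at the formatting-property level instead):

```python
import re

def normalize(text: str) -> str:
    """Strip presentation that doesn't change meaning: collapse
    whitespace and ignore case-only differences such as headings
    one template renders in all caps."""
    return re.sub(r"\s+", " ", text).strip().lower()

old = "5.1  LIMITATION OF LIABILITY\nLiability is capped at fees paid."
new = "5.1 Limitation of Liability\nLiability is capped at fees paid."

# If normalized texts match, every change the redline reports
# between these versions is formatting rather than content.
formatting_only = normalize(old) == normalize(new)
# → True
```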


About this post. Written by the Clausul team. We build document comparison software for legal teams and we've reviewed how lawyers compare each of these contract types to understand where tools help and where they fall short.

Something inaccurate or missing? Let us know.

Last reviewed: February 2026.