How to Make the Build-vs-Buy Decision in the AI Development Era

By Jeff Arnold | Published On: May 1, 2026

The build-vs-buy decision in 2026 is no longer a question of cost or capability. AI-assisted development has flipped the economics, but the right answer still depends on whether the workflow touches a system of record, whether the data has compliance exposure, and whether the team will adopt what gets built. This guide walks through a five-step decision framework that produces the right answer for a specific workflow in a specific organization, not a generic build-or-buy take.

For the broader thesis behind why this decision has changed, see The Death of the SaaS Moat. This how-to is the operator’s playbook — a sequenced procedure that takes a candidate workflow and produces a defensible build-or-buy verdict in roughly half a day of focused work.

Before You Start

Three things have to be in place before this framework produces a useful answer.

  • A specific workflow in mind. Not a category. Not “our intake process generally.” A specific workflow with a defined trigger, defined outputs, and identifiable owners.
  • Honest cost data. The annual subscription cost of any SaaS tool currently doing the job, plus an estimate of the implementation, training, and ongoing-maintenance cost the team has already absorbed.
  • Stakeholder access. The person who would own the workflow operationally has to be available for at least a 30-minute conversation. Without them, the adoption-gate step at the end is guesswork.

Time required: 3–5 hours of focused work spread across a day or two. Risk profile: low — the framework produces a verdict, not a commitment, and the verdict is defensible in either direction.

Step 1 — Classify the Workflow as Pillar or Connective Tissue

The most important question is the first one. A workflow is a pillar if it holds the system of record for a regulated or business-critical data type. ATS, HRIS, CRM, EHR, ERP, accounting, payroll. A workflow is connective tissue if it exists to move data between pillars, surface data from pillars in a custom view, or fill a gap that the pillars do not natively cover. The SaaS replacement checklist goes deeper on the seven categories that count as connective tissue.

If the workflow is pillar, the answer is buy. Almost always. Compliance certifications, integration ecosystems, vendor support, and the cost of replacing system-of-record platforms make the build option indefensible for the foundational layer. Skip the rest of this framework — go to vendor evaluation. If the workflow is connective tissue, continue to Step 2.

Step 2 — Score the Workflow on the API + MCP Filter

Any custom build option requires the workflow’s connected systems to expose strong public APIs. Without API access, the build cannot reach the data, and the build conversation ends. Score the workflow on three sub-criteria:

  • API depth: Can the build read the data it needs and write the data it produces? Yes / No / Partial.
  • API stability: Has the API been stable for 18+ months? Vendors that change APIs aggressively make custom builds expensive to maintain.
  • MCP or Make.com integration: Is there a Model Context Protocol server, or does Make.com already have a robust connector? Either one cuts the build effort dramatically.

If any sub-criterion is No, the buy option wins by default. The build is technically possible but the maintenance burden makes it the wrong economic choice. If all three are Yes or Partial, continue to Step 3. The audit work in the connective-tissue audit covers the same filter applied across the whole stack.
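The Step 2 filter reduces to a simple rule: any hard No ends the build conversation, while Yes and Partial both allow the workflow to proceed. A minimal sketch of that scoring logic — the field names and the `ApiMcpScore` type are illustrative assumptions, not part of the framework itself:

```python
# Illustrative sketch of the Step 2 filter: any hard "No" ends the
# build conversation; Yes or Partial on all three lets it continue.
from dataclasses import dataclass

@dataclass
class ApiMcpScore:
    api_depth: str    # "Yes", "No", or "Partial" — can the build read and write what it needs?
    api_stable: str   # "Yes" or "No" — stable for 18+ months?
    mcp_or_make: str  # "Yes" or "No" — MCP server or robust Make.com connector?

def passes_api_mcp_filter(score: ApiMcpScore) -> bool:
    """Return True if the workflow may continue to Step 3."""
    answers = [score.api_depth, score.api_stable, score.mcp_or_make]
    return "No" not in answers

# Example: deep-enough API, stable, but no MCP server or connector -> buy wins by default.
score = ApiMcpScore(api_depth="Partial", api_stable="Yes", mcp_or_make="No")
print(passes_api_mcp_filter(score))  # False
```

The point of encoding it this way is that the filter is conjunctive: a strong API cannot compensate for an unstable one, and vice versa.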

Step 3 — Calculate the Three-Year Total Cost of Each Option

Build a simple comparison on a single page. Three columns: continue with current SaaS, switch to a different SaaS, custom build. Three rows: Year 1 implementation cost, annual ongoing cost, and three-year total. Be honest about all three columns.

For the SaaS columns, include subscription, integration cost, training cost, and the cost of any workarounds the team has built to compensate for SaaS limitations. For the custom-build column, include the build cost itself, the ongoing maintenance cost (typically 15–25% of build cost annually), and the cost of internal capability to manage the build. AI-assisted development reduces the build cost dramatically — Cursor and Claude Code have moved typical small-portal builds from quarter-million-dollar engagements to single-digit thousands of dollars in pure build cost. Maintenance economics have changed less. Account for both.

If the three-year custom-build cost is higher than both SaaS options, buy wins. Continue to Step 4 only if the build option is at least cost-competitive on the three-year horizon.
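The comparison above is simple arithmetic once the inputs are honest. The dollar figures below are invented placeholders; only the structure — Year 1 cost plus three years of ongoing cost, with maintenance estimated at roughly 20% of build cost annually — comes from the framework:

```python
# Three-year total cost for one column of the comparison.
# All dollar figures below are hypothetical examples.
def three_year_total(year1_cost: float, annual_ongoing: float) -> float:
    """Year 1 implementation cost plus three years of ongoing cost."""
    return year1_cost + 3 * annual_ongoing

# Current SaaS: no new implementation cost; subscription plus
# workaround cost runs every year.
current_saas = three_year_total(year1_cost=0, annual_ongoing=18_000)

# Custom build: AI-assisted build cost up front, maintenance
# estimated at ~20% of build cost per year.
build_cost = 8_000
custom_build = three_year_total(year1_cost=build_cost,
                                annual_ongoing=0.20 * build_cost)

print(current_saas)   # 54000
print(custom_build)   # 12800.0
```

With placeholder numbers like these the build looks dominant, which is exactly why Step 4 exists: the cost comparison alone is not the verdict.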

Step 4 — Run the Adoption Gate

This is the step most build-vs-buy frameworks skip, and it is where most build decisions actually fail. A custom-built tool that the team does not adopt is more expensive than the SaaS subscription it replaced, regardless of what the cost comparison said.

Sit down with the operational owner identified in the prerequisites. Walk through the proposed custom-build experience and ask three questions:

  1. Will the team have to log into a new interface? If yes, the adoption risk is high. The successful pattern is building behind interfaces the team already uses (CRM views, Slack notifications, email-driven workflows, embedded portals on the existing site).
  2. Will the team have to learn new conventions? If yes, the adoption risk is high. The successful pattern is making the work easier without changing the surface area — the form they already fill out gets pre-filled, the data they already copy shows up where they need it.
  3. Will the team have to remember anything new? New passwords, new URLs, new procedures. Each one is friction. The successful pattern is replacing visible procedures with invisible automation.

If the answers are mostly Yes, the build is at high risk of failing on adoption regardless of how technically clean it is. Either redesign the build to address adoption explicitly, or default back to buy.

Step 5 — Make the Verdict and Document the Reasoning

If the workflow passed the pillar filter (Step 1), the API+MCP filter (Step 2), the cost comparison (Step 3), and the adoption gate (Step 4), the build is defensible. Make the verdict. Document the reasoning in a single page so the decision can be audited if the build hits problems later.

If the workflow failed any one filter, the buy is defensible. Make the buy verdict and document which filter killed the build option. That documentation matters: when the build economics improve further or the API ecosystem matures, the same workflow can be re-evaluated and the documentation tells you what changed.
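Taken together, the five steps reduce to a sequence of gates, each able to end the evaluation early and each recording which filter decided the outcome — the documentation the framework asks for. A minimal sketch with hypothetical parameter names:

```python
# Sketch of the full decision sequence. Parameter names are
# illustrative; the gate order matches Steps 1-4 of the framework.
def build_vs_buy_verdict(is_pillar: bool,
                         passes_api_mcp: bool,
                         build_3yr_cost: float,
                         best_saas_3yr_cost: float,
                         adoption_risk_high: bool) -> str:
    """Return the verdict plus the filter that decided it."""
    if is_pillar:
        return "buy (Step 1: pillar system of record)"
    if not passes_api_mcp:
        return "buy (Step 2: failed the API + MCP filter)"
    if build_3yr_cost > best_saas_3yr_cost:
        return "buy (Step 3: build not cost-competitive over three years)"
    if adoption_risk_high:
        return "buy (Step 4: failed the adoption gate)"
    return "build (passed all four filters)"

print(build_vs_buy_verdict(False, True, 12_800, 54_000, False))
# build (passed all four filters)
```

Recording the deciding filter in the return value is the cheap version of the one-page documentation: when conditions change, you know exactly which gate to re-test.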

Expert Take — The Adoption Gate Is Where Most Build Decisions Die

I have watched companies spend a quarter-million dollars on a new platform and then, eight months later, watch the employees go right back to spreadsheets because the new system was “too different.” The problem was never the software — the problem was that nobody addressed adoption as a design constraint. Custom AI-built tools do not solve adoption. If anything they make it harder, because the team now owns a piece of bespoke code with no vendor support contract. The framework above puts adoption at Step 4 deliberately, after the cost comparison, because most operators get to a defensible build-cost number and stop. That is the most expensive mistake in the build-vs-buy decision in 2026, and it is the one this framework is designed to prevent. For the deeper background on why automation and adoption have to be designed together, the SaaS-vs-custom-build comparison covers the tradeoffs in detail.

How to Know It Worked

A correctly applied build-vs-buy decision is identifiable by three signals 90 days after the verdict.

First, the team using the workflow should be able to describe what changed in one sentence. If they cannot, the change was either too small to matter or too complex to land. Second, the cost data should match the projection within 20%. If actuals are dramatically off, the framework was applied to incomplete data and needs to be re-run. Third, the workflow should be measurably faster, cheaper, or more reliable on at least one dimension that the operational owner cares about. If none of those improved, the decision was technically defensible but practically wrong, and the framework needs to incorporate the missing constraint next time.

Common Mistakes

Skipping Step 1. Treating a pillar system as a build candidate because the AI-build economics make it sound feasible. They do not. The pillar layer stays. The pillar-vs-connective-tissue distinction is the single most important filter in the framework.

Underestimating maintenance cost in Step 3. Build cost has dropped sharply. Maintenance cost has dropped less. Accounting for build cost honestly but maintenance cost optimistically produces wrong answers.

Treating the adoption gate as a soft constraint. The adoption gate is a hard constraint. If the answers are mostly Yes, the build will fail on adoption even if every other filter passed. The successful build pattern is invisible to the team. The failed build pattern is yet another portal.

Not documenting the verdict. Both build and buy verdicts need to be documented. When conditions change — and they will — the documentation is the only way to revisit the decision quickly and accurately.

Frequently Asked Questions

Does this framework apply to enterprise software decisions?

Yes, with the same caveat as the broader replacement work — enterprise procurement adds 6–12 months of additional process to either verdict. The framework still produces the right technical answer; the implementation timeline simply stretches.

What if the workflow is a hybrid — partly pillar, partly connective tissue?

Split it. Most workflows that look hybrid are actually two workflows that share inputs. Run the framework separately on each. The pillar half almost always lands on buy; the connective half is where the build option becomes interesting.

How long should the framework take to apply?

Three to five hours of focused work for a single workflow if the cost data and stakeholder access are already in hand. If they are not, plan another half-day to pull both together. Workflows that take longer than a day to evaluate are usually too poorly defined to make any verdict on, and the right move is to define the workflow first.

Can the framework be run by someone who is not technical?

Yes. The most operationally useful applications of this framework happen when the operational owner runs Steps 1, 3, and 4, and a technical advisor scores Step 2. The technical work is narrow and the operational judgment is the hard part.

What if the answer changes a year from now?

That is the design intent. The framework produces a defensible verdict for current conditions. AI-build economics, API ecosystems, and adoption capability will all change. Re-run the framework annually for any workflow where the original verdict was close, and document what changed when the verdict flips.

Get Help Applying the Framework to Your Stack

The framework is straightforward. Applying it across an actual stack — with the cost data, the stakeholder coordination, and the technical scoring all in one room — is where most operators get stuck. We do that work with operators every week.

Book a Working Session With Jeff →

About the Author

Jeff Arnold is the Founder and President of 4Spot Consulting, a Make.com Certified Partner specializing in operational automation and AI implementation. He is the author of the Amazon #1 bestseller The Automated Recruiter and a SHRM Recertification Provider. For more on Jeff’s commentary, see jeff-arnold.com.
