Software Isn't Dead. What's Valuable Is Changing.

We recorded our AI Wave podcast a few weeks before the public market SaaSpocalypse. Since then, roughly $1T has been wiped from software and services stocks. The narrative: AI makes software easier to build and models are increasingly doing the work that renders SaaS products useless. So, software is dead?
The selloff is largely deserved. We noted on the pod, before the selloff started, how surprising it is that 95% of public application companies have shipped nothing meaningful on AI despite having every advantage. If you're selling CRUD interfaces where people click around and the world is moving to models doing the work, you're going to be worth less.
But "software is dead" misses the mark. Code replicability was never the moat. The median public software company spends about 24% of revenue on R&D. Definitionally, the vast majority of what makes a software business valuable has nothing to do with coding. Software isn't stored code, it's embedded judgment. An opinion about how specific people do specific work, encoded in a durable system. Taste, brand, customer success, trust - none of that gets easier to replicate because the next foundation model ships.
So what is actually changing? We think about software value in three compounding tiers. Data feeds workflows, workflows feed automation. And while a lot is going on across the stack, the story is very different depending on which tier you're looking at.
Three Tiers of Software Value
Tier 1: Data / System of Record / Ontology
Every industry has a structured data layer – the canonical representation of how that industry's data and workflows fit together. Over the course of the SaaS wave, we learned that owning this data model is valuable because it allows you to quickly build workflows and makes you the "system of record," or go-to place where everyone goes when they want to access the ground truth on something. These systems of record can be vertically focused (Epic for patient health records, Veeva for life sciences regulatory and clinical trial data, Procore for construction project data) or functionally focused, like CRM for customer and deal pipeline data, or HRIS for employee records and payroll.
As many have learned the hard way, your company's AI "agents" are only as useful as the underlying data on top of which they're making decisions. That makes these systems of record increasingly valuable.
That said, what's currently captured in most systems of record is probably not sufficient. Agents need richer context compared to what lives in a traditional SoR today, and that data layer is going to have to expand. There's an enormous amount of institutional knowledge that lives in people's heads, email threads, call recordings and Slack channels - context that has never been captured in a structured way. This is exactly why there’s focus on claude.md files and system prompts - people are manually writing down the context that agents need to operate because it doesn't exist anywhere in their systems today. That's a gap, and it's significant.
There are two related but distinct opportunities to create truly new systems of record.
The first is obvious: you can now run LLMs over heaps of existing unstructured information and extract structured data. The institutional knowledge was always there, but couldn’t be systematically processed. This is now trivially possible.
Demand is real because companies already pay humans to do it. The sales ops function at many companies exists largely to maintain CRM data hygiene, making sure call notes get logged and customer records are updated. A person manually synthesizes unstructured information into structured data. It's valuable work, and exactly the kind of work LLMs excel at.
Cognition's DeepWiki is a good example of what this looks like as a product. DeepWiki takes all the previously unstructured context around a codebase (issues, tickets, PRs, documentation) and synthesizes it into a structured, searchable knowledge base. That information existed before, scattered across a dozen tools. But no one had turned it into a system of record until now.
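A minimal sketch of this extraction pattern, using the sales ops example above. The `call_llm` function is a stub standing in for a real model API call, and the field names are illustrative, not from any specific product:

```python
import json

# Stubbed model call: a real implementation would hit an LLM API.
# Here we pretend the model read the note and returned structured JSON.
def call_llm(prompt: str) -> str:
    return json.dumps({
        "account": "Acme Corp",
        "stage": "negotiation",
        "next_step": "send revised pricing by Friday",
    })

def extract_crm_fields(call_note: str) -> dict:
    """Turn an unstructured sales call note into structured CRM fields."""
    prompt = (
        "Extract account, deal stage, and next step as JSON "
        f"from this sales call note:\n{call_note}"
    )
    return json.loads(call_llm(prompt))

record = extract_crm_fields(
    "Spoke with Acme Corp today; we're deep in negotiation. "
    "I owe them revised pricing by Friday."
)
print(record["stage"])  # a structured field, ready for the system of record
```

The point is the shape of the pipeline: unstructured text in, schema-conforming records out, run systematically over everything that used to require a person.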
The second opportunity is less obvious but potentially even more interesting. If you're using AI to automate a human driven workflow, you're creating an entirely new data exhaust. Manual work doesn't generate logs. There's no instrumentation when a person does something in their head, on a phone call, or with a pen. When an agent does that same work, every step is captured. You're not just structuring existing information - you're generating entirely new structured data.
This is what the best SaaS 1.0 companies actually did. CRM is the classic example - before Salesforce, sales data lived in notebooks and spreadsheets. CRM structured it digitally, which enabled tracking close rates by funnel phase. Software created the system of record by doing the workflow.
The AI-native version adopts a similar pattern, applied to workflows software couldn't previously touch given the judgment required. Imagine a product that automates legal time tracking and billing for law firms. Lawyers can now focus on revenue-generating work instead of logging hours. And because the product needs access to each lawyer's workspace to do the time tracking, it also generates a dataset of what every lawyer in the firm does and for how long. Seems pretty valuable!
Or take Candid Health, which helps healthcare providers with revenue cycle management. This used to be largely manual work - billers learning through trial and error which payer rules to follow, how to format claims to avoid denials. This knowledge lived in people's heads. By automating the process, Candid is building a structured dataset of payer rules and denial patterns that never existed as a system of record before. They build and scale the dataset by doing the work.
There has been a lot of talk about "proprietary data" as a moat in the AI world. While I'm generally skeptical of how common this actually is in practice, both examples above are variants of the idea, with an important nuance: technically, it's the customer's data. It's not "proprietary" in the traditional sense. You earn the right to access it by delivering a value-added service and capturing the process along the way. That's what ultimately lets you build adjacent applications on top of the data asset. Knowing "what every lawyer at the firm does and how economical it is" or "which payer rules drive claim denials" is valuable far beyond the original workflow.
To sum up: while there's certainly a question about who owns the data layer in the new world (existing systems of record or new agent companies?), two things are clear. Existing SoR data is crucial to making agents useful, and there's ample opportunity to build entirely new systems of record.
Tier 2: Workflow Enablement
For years, Silicon Valley was a manufacturing line converting engineers' ability to decompose white-collar functions into opinionated software products that made said functions more efficient.
Take Procore as an example. Before Procore, a general contractor managed jobs with printed blueprints, faxed change orders, and a filing cabinet full of RFIs. The project manager spent half the day on the phone chasing subcontractors for updates. Procore's value wasn't just digitizing the workflow. It was encoding an opinion about how submittals, RFIs, change orders, and daily logs should connect and tie back to the budget. That opinion is the product.
The question is: how much of that value lives in the opinion about how work should get done, versus the human clicking through it? Because the latter is going away. A year ago, frontier models scored in the low teens on the leading computer use benchmark; now they’re in the 70s, with saturation in sight.
The work that went into building a great opinionated workflow, years of customer conversations, understanding edge cases, deep industry knowledge, was never UI design. It was customer understanding. The GUI was just the vessel. That understanding is exactly what you need to build a great agent, and it’s the real asset.
Encoded business processes and user-facing guidance are still needed, but will look very different. We're clearly moving away from "here are 15 steps for you to click through" toward "here's what the agent needs to know" to complete a task. The blank canvas problem isn't going away, though. Call me crazy, but I don't think a text box with infinite degrees of freedom is great for repetitive work. You end up wanting a button, which is UI reasserting itself.
Think about what Procore could look like in this new world. Instead of a PM clicking through daily logs and manually triaging RFIs, the agent handles all of it and only surfaces the decisions that genuinely need human judgment. The workflow opinion is still there, it's just driving the agent instead of the human.
This is exactly why the consensus at the start of the AI wave was that established application companies were best positioned to win. They had the deepest customer understanding and the encoded workflow knowledge to build on. And yet, they’ve largely blown the head start.
Tier 3: Workflow Automation
Workflow automation is a mostly new layer, built on top of Tiers 1 and 2.
Pre-LLM automation centered on deterministic "if this, then that" logic – valuable for predictable, repeatable tasks, but capped by a natural ceiling: the lack of real intelligence. The demand for workflow automation was always there; the lack of widespread adoption wasn't for lack of trying.
Consider the early days of UiPath. There was a lot of enterprise demand to automate back-office tasks like invoice processing. The problem was that clickstream-based automation was brittle, as any process variance broke it. If an invoice number appeared in a slightly different position on the page, the entire workflow failed because the product had no semantic understanding of an invoice number.
LLMs remove that ceiling: a model that understands what an invoice number is doesn't break when it moves. That is a secular shift, and it lets software companies go after fundamentally new budgets: labor spend. Software no longer just structures data and encodes opinions on how to work – it does the work. Value is now indexed to the economic output of the task, not the cost of the tool.
Taken to its conclusion, what you're actually selling is a digital employee that does real work. Cognition doesn't sell a coding tool, it sells you an engineer who ships production-grade code. You're not buying software; you’re buying operating leverage - the ability for your team to do more with the same headcount.
The interesting thing about this model is that it gets stickier over time. Think about how much more useful an employee is after a year versus their first week. They've learned the context, preferences and how things actually get done. Agents work the same way. The longer an agent is deployed, the better it performs relative to something fresh off the shelf. That’s a genuinely new kind of switching cost – one that compounds over time.
Where This Leaves Us
So what does this mean in practice?
For incumbents, the mandate is clear: sprint to build Tier 3 into your product. You have the data, workflow knowledge, and customer relationships. Use them to do the work, not just present buttons for people to click.
For new entrants, the playbook is different. Leverage computer use and APIs to plug into existing systems of record, get the data you need, and build automation valuable enough that customers give you access to more of it over time.
This is playing out in real time. In home services, companies like Netic, Avoca, and Probook are building AI workflow automation (taking inbound calls, processing jobs, scheduling technicians) that sits alongside existing systems of record like ServiceTitan, reading from and writing into them to deliver value. In many cases, ACVs for these new AI products are already comparable to what customers pay for ServiceTitan itself, because the ROI of replacing human labor is vastly different from the ROI of competing with a tool.
There is obvious tension in the relationship between these layers. Some AI companies are ServiceTitan’s "preferred partners". Others have been shut off from API access entirely. Who plays nice with whom, who opens their system, who tries to own the whole stack – these dynamics will define who captures value in the next era.
Which brings us to a big open question in enterprise software: what happens first - AI companies building their own system of record, or system of record companies building AI?
Zoom out and the shift is fundamental. We used to live in a world where people drove software. Now software drives the work and escalates to people for judgment or approval.
This won't happen overnight. Organizational change management, regulatory adoption, and real-world implementation are the bottlenecks, not model capability. The transition is likely slower than the panic implies.
Hopefully at this point it's clear that software was never just code. It's structured data, workflow opinions, process knowledge and automation intelligence built on top of all of it. The real asset is the customer understanding that produced it. Software isn't dead. What's valuable is changing…and it remains to be seen who captures it.



