The End of Software as We Know It
Late in 2025, I needed a way to process and categorize expense receipts for my company. The kind of task you would normally solve by purchasing accounting software. I spent two hours comparing options. None of them did exactly what I needed.
Then I described the problem to an AI coding agent and developed a local webapp to categorize these receipts, pre-fill the tax-relevant fields, and flag anything unusual. An hour later, I had a working solution. Not a generic product bent to fit my case. A solution built for this specific moment.
It was not a smooth ride. The first version generated a table structure my bookkeeper could not use. The agent assigned expense categories that did not exist in our bookkeeping system. I had to set boundaries: these are the valid categories, this is the output format, these fields are mandatory. That correction loop is the real work. The technology builds the tool. The human defines what “correct” means.
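Those boundaries can be expressed as plain data that any generated tool must validate against. A minimal sketch in Python, where the category names, field names, and the `validate` helper are all hypothetical illustrations, not the agent's actual implementation:

```python
# Hypothetical sketch: human-defined boundaries from the correction loop,
# expressed as data the generated tool checks every receipt against.
VALID_CATEGORIES = {"travel", "office_supplies", "hosting"}  # invented examples
MANDATORY_FIELDS = {"date", "amount", "category", "vat_rate"}

def validate(receipt: dict) -> list[str]:
    """Return a list of boundary violations; an empty list means 'correct'."""
    errors = []
    missing = MANDATORY_FIELDS - receipt.keys()
    if missing:
        errors.append(f"missing mandatory fields: {sorted(missing)}")
    if receipt.get("category") not in VALID_CATEGORIES:
        errors.append(f"unknown category: {receipt.get('category')!r}")
    return errors

# A receipt the agent pre-filled; the human only reviews flagged violations.
print(validate({"date": "2025-11-03", "amount": 42.50, "category": "snacks"}))
```

The point of the sketch is the division of labor: the machine generates and runs the tool, while “correct” remains a human-owned artifact the tool cannot override.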
My Learn and Memory Agent recorded these misconceptions and dead ends and helped me improve the agents accordingly. The next time I used it, the Expenses Agent had learned from my corrections. It pre-filled most fields correctly and generated a new interface where I only confirmed or adjusted. By the fourth time, I just said: do it. And it did. AI Agents in Practice Beyond Coding describes this AI coding agent setup in more detail.
That sequence: build me the tool, confirm what is right, learn for next time. It is not a product feature. It is a paradigm shift. But this is not even the end of the story. Why should I tell the AI agent to process the expenses at all? Why should the agent not actively remind me to submit my latest receipts, because it knows the bookkeeper has been waiting for the monthly report? At that point, the machine initiates. Not me. We are witnessing the end of software as we know it.
That sequence, from operating a tool to governing an agent, is not unique to my expense receipts. It is the latest step in a pattern that stretches back to the earliest days of computing. To see where it leads, it helps to trace how humans have interacted with their tools across six distinct paradigms.
The Evolution of Software Interaction
Each stage shifts responsibility from human to machine, moving from concrete artifacts to abstract context. The progression follows distinct paradigms, each with its own mental model, its own interface style, and its own assumptions about who is in control.
| Paradigm | Machine Role | Human Role | Kano Model | Evolution |
|---|---|---|---|---|
| Input/Output Oriented | Executes instructions | Operator | Basic | Commodity |
| File & Process Oriented | Passive tool | Operator | Basic | Commodity |
| Object Oriented | Passive tool, flexible | Operator | Performance | Product (+Rental) |
| Job-to-be-Done Oriented | Reactive to clicks | Director | Performance | Product (+Rental) |
| Conversation Oriented | Reactive to language | Director | Delighter | Custom-Built / Emerging |
| Situation Oriented | Proactive | Governor | Delighter | Genesis |
Two columns deserve special attention. The Kano Model captures how users perceive each paradigm. What once excited customers eventually becomes expected. File management was a revelation in the 1980s. Today nobody is impressed by “File > Save As.” Conversational AI still delights in 2026, but in a few years users will simply expect it. Early paradigms have become Basic quality: taken for granted, noticed only when missing. Middle stages still offer Performance quality: the more, the better. The latest stages deliver Delighter quality: unexpected, differentiating, hard to compete against. A product stuck at Basic competes on price. One that delivers the next Delighter leads the market.
The Evolution column connects each paradigm to Wardley’s evolution stages. Situation Oriented is at Genesis: a new paradigm, enabled by the components below it that have matured into Product or Commodity. The six stages that follow tell this story.
Input/Output Oriented (1950s-1970s)
The user submits commands or data. The machine returns results. No persistence, no state, no visual feedback. The user must know the exact syntax. The machine does nothing without explicit instruction.
Ted Nelson’s Computer Lib (1974) was already a manifesto against this “priesthood of computing,” envisioning computers as universal creative media rather than fixed command executors.
File & Process Oriented (1970s-1990s)
The computer provides rigid structure. Users create, save, and manage files. Processes guide users through predefined steps. The mental model shifts to “I have a document” or “I follow a transaction.”
IBM 3270 terminals running CICS transactions, WordPerfect, Lotus 1-2-3, SAP R/3 transaction codes. “File > Save As.” Named documents on disk. Menu-driven screens. The user thinks in files and forms.
Alan Kay and Adele Goldberg’s Dynabook vision (1977) already challenged this paradigm, imagining every user as a programmer with “symmetric authoring and consuming.” The market stayed in files and forms for two more decades.
Object Oriented (1990s, largely failed)
The user works with objects directly. A printer is an object, a document is an object. You do not “open an application.” You interact with the thing itself. Objects from different sources can be embedded in one another.
OS/2 Workplace Shell, OpenDoc, OLE and ActiveX, NeXTSTEP, Taligent. Drag-and-drop objects. Compound documents. Right-click context menus on entities. The user thinks in things, not applications.
This paradigm lost commercially. But its ambition, giving users direct control over composable entities, planted seeds that only now begin to grow. What Object Oriented promised through manual composition, Situation Oriented computing delivers through AI generation. The goal was always the same: users working with things, not applications. The mechanism changed entirely.
Object Oriented development itself continues to exist as an engineering paradigm. It is a foundational enabler for the technology we have today. But as a user-facing interaction model, it never took hold. An ACM survey (Ko et al., 2011) found that end-user programmers already outnumbered professional developers. The demand for users building their own tools was massive. The technology was not ready.
Job-to-be-Done Oriented (2007+)
“There is an app for that.” One app per job. No files, no objects. I want a taxi. I want food. I want to track my run. The application disappears behind the task.
iPhone App Store, Uber, Spotify, Slack, Notion, single-purpose SaaS. App icons on a home screen. Single-purpose design. The user thinks in tasks, not tools. Onboarding asks “What do you want to do?” not “Here is your file system.”
Clayton Christensen’s Jobs to be Done framework captures the underlying principle: customers do not buy products. They hire them to make progress in a specific situation. This is still the dominant paradigm for software product design today. Most SaaS products and services are designed to get a job done for the user or enterprise. There is already a contradiction in the name: the user or customer is not asking for software. They are asking to get a job done. The first “S” in SaaS is already vanishing in this phase. Nobody calls Slack “software.” The word is going the way of the Faustkeil, the hand axe our ancestors used for a million years. Nobody remembers the name. The tool did not disappear. The relationship between human and tool changed beyond recognition.
Conversation Oriented (2022+)
The user describes intent in natural language. The machine delivers. For the first time, ambiguity is allowed. You do not need to know which app or which button. But the machine remains passive: it waits for the human to speak.
ChatGPT, Claude, GitHub Copilot, Microsoft Copilot in Office, and the emerging field of Malleable Software, where users reshape their tools at runtime through natural language.
A text input field. Natural language instead of buttons. The user thinks in wishes, not workflows.
Conversation is the last stage of the old paradigm. The human still initiates. Just more comfortably.
Situation Oriented (Genesis)
The machine initiates. It recognizes the situation, generates the appropriate interface, learns from feedback, and eventually acts autonomously. There is no pre-built application.
“The most profound technologies are those that disappear.” Mark Weiser, The Computer for the 21st Century (1991)
OpenClaw, an autonomous AI agent with a heartbeat system that initiates actions without human prompts. Claude Code, which generates tools on demand. AI agents that chain actions autonomously. Adaptive dashboards that reconfigure based on context. From the engineering side, Tudor Girba and Simon Wardley’s Moldable Development approaches the same paradigm: developers create contextual micro tools built for each problem, generating the interface for the moment rather than maintaining a fixed application.
No fixed UI. No app to install. The interface is generated for the moment and discarded after. The user thinks in outcomes, or does not need to think at all.
There are no applications anymore. Only just-in-time solutions.
The Software Inversion
Everything up to and including Conversation Oriented is human-driven. The human initiates, the machine responds. Situation Oriented is machine-driven. The machine acts, the human governs.
This is the fundamental inversion. Not a gradual shift, but a reversal of the basic relationship between human and machine. For the first time in the history of computing, the default mode is not “the human tells the machine what to do” but “the machine acts and the human sets boundaries.”
The human role evolves across three stages:
- Operator (Input/Output, File & Process, Object Oriented): The human executes. They know the commands, manage the files, manipulate the objects. The machine is a passive tool.
- Director (Job-to-be-Done, Conversation Oriented): The human delegates. They choose the app, describe the intent, direct the conversation. The machine reacts, but only when asked.
- Governor (Situation Oriented): The human sets constraints and reviews outcomes. The machine anticipates, proposes, and acts. The human adjusts boundaries, or simply trusts.
The question is no longer “What should the machine do?” It becomes “What should the machine not do?”
But the inversion does not mean the earlier paradigms disappear. They coexist. Every paradigm is an enabler for the one above it. File & Process Oriented systems persist because someone has to provide and maintain all the higher-level services. Object Oriented development is already just an engineering discipline: invisible to users, essential to builders. Job-to-be-Done and Conversation Oriented software will persist for a long time, because users are accustomed to them and because these services are still needed to deliver Situation Oriented capabilities. The paradigms do not replace each other overnight. Each one enables the next and steps into the background as the newer paradigm takes the stage. Situation Oriented is simply the next to step forward.
What mainly changes is who operates at each level. When an AI agent processes your expenses autonomously, it reads files, executes commands, follows processes underneath. The agent operates at Input/Output and File & Process level. The user does not. The earlier paradigms become the agent’s interface, not the user’s. The user sees Governor. The agent sees Operator. Both are real. They just serve different actors in the system.
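This layering, Governor-facing on top and Operator-level underneath, can be sketched as an agent whose public surface exposes only boundaries and outcomes, while the file and process work stays internal. All class, field, and category names here are invented for illustration:

```python
import csv
import io

class ExpenseAgent:
    """Hypothetical agent: the user sets boundaries (Governor role),
    while the agent itself reads files and runs processes (Operator role)."""

    def __init__(self, allowed_categories: set[str]):
        self.allowed = allowed_categories  # the Governor's boundary

    def process(self, raw_csv: str) -> dict:
        # Operator level: the agent, not the user, parses the file.
        rows = list(csv.DictReader(io.StringIO(raw_csv)))
        exceptions = [r for r in rows if r["category"] not in self.allowed]
        # Governor level: only outcomes and exceptions surface to the human.
        return {"processed": len(rows) - len(exceptions), "exceptions": exceptions}

agent = ExpenseAgent(allowed_categories={"travel", "hosting"})
report = agent.process("category,amount\ntravel,120\nsnacks,9\n")
print(report["processed"], len(report["exceptions"]))
```

The user never sees the CSV parsing or the row iteration. They see a count of processed receipts and a short list of exceptions to rule on, which is exactly the Governor's view.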
Beyond Software
The Operator, Director, Governor progression describes any domain where tools mediate between humans and outcomes. Software is simply the most visible domain because it is furthest along the evolution path. The rest will follow.
Consider manufacturing. For decades, a maintenance engineer operated diagnostic instruments directly: reading gauges, checking vibration levels, inspecting wear patterns. That was the Operator stage. Then came digital dashboards that aggregated sensor data and highlighted anomalies: the Director stage. The next step is already emerging. AI agents will monitor sensor data continuously, predict failures, and recommend maintenance actions before breakdowns occur. The engineer will govern: defining acceptable risk levels, reviewing the agent’s decisions, adjusting boundaries when conditions change. In the Meridian Industries story, the predictive maintenance Arena is already moving along this trajectory. Wherever humans currently operate tools to make decisions, this inversion will arrive.
Three Steps to Autonomy
Situation Oriented computing does not arrive as a single leap. It unfolds in three sub-stages, each shifting more initiative from human to machine.
“Build me the interface.” The user states intent, the AI generates a solution. Still dialog-based, still explicit. But the application is not pre-built. It is created on demand, for this specific situation, and discarded afterward. My receipt tool started here: I described what I needed, and the agent built it.
“Confirm if this is correct.” The AI anticipates. It pre-fills, proposes, generates the interface before being asked. The human becomes a reviewer, not an initiator. When my agent learned my categorization patterns and presented its own suggestions for confirmation, it had reached this stage.
“Just do it.” Autonomous execution. The human delegates completely. The agent processes, categorizes, files, and only surfaces exceptions. The human role reduces to governance: defining what “correct” means, setting boundaries, reviewing outcomes periodically. David Tennenhouse called this “proactive computing” in 2000: the human leaves the interaction loop.
These three sub-stages coexist. Simple, well-understood tasks reach “just do it” quickly. Novel or high-stakes tasks may stay at “build me the interface” indefinitely. The appropriate level depends on the situation, not on the technology.
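One way to picture this coexistence is as a per-task autonomy policy: a mapping the Governor maintains and the agent consults before acting. A sketch under assumed names, with every task and level invented for illustration:

```python
from enum import Enum

class Autonomy(Enum):
    BUILD = "build me the interface"        # human initiates, AI generates
    CONFIRM = "confirm if this is correct"  # AI proposes, human reviews
    EXECUTE = "just do it"                  # AI acts, human audits periodically

# Governor-maintained policy: well-understood tasks earn more autonomy,
# novel or high-stakes tasks stay at the dialog stage. Entries are examples.
POLICY = {
    "categorize_receipts": Autonomy.EXECUTE,
    "draft_tax_filing": Autonomy.CONFIRM,
    "negotiate_contract": Autonomy.BUILD,
}

def autonomy_for(task: str) -> Autonomy:
    # Unknown tasks default to the most conservative sub-stage.
    return POLICY.get(task, Autonomy.BUILD)

print(autonomy_for("categorize_receipts").value)
```

The policy, not the technology, decides the level: promoting a task from CONFIRM to EXECUTE is a governance act, taken after enough correct outcomes have been reviewed.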
The business consequence is significant. Whenever the interaction paradigm shifts, existing product categories face disruption. The transition from File & Process to Job-to-be-Done killed desktop software suites and created the SaaS market. The transition from Job-to-be-Done to Situation Oriented will disrupt in turn. SaaS is the most visible case, but the pattern applies wherever packaged logic is the product.
Satya Nadella argues that “SaaS will collapse in the agent era.” Business applications are fundamentally databases with logic on top. When AI agents take over the logic layer, the traditional model of selling packaged application software loses its foundation. The same logic applies to ERP systems whose value sits in the logic layer rather than the data, to training platforms where scalable AI-generated courses are already displacing in-person delivery (as Developing a Strategy for the GenAI Era illustrates with the DasScrumTeam example), and to professional services where the core offering is information processing.
Not all products are affected equally. Infrastructure services like payment processing or cloud computing provide commodity capabilities that agents depend on. These persist. What loses its foundation is the logic-as-product model: the CRM, the ERP, the project management tool that packages business logic into a subscription. SaaS may not collapse overnight. As software architect David Kirwan put it: it may become the new COBOL, durable systems of record running underneath, while the real intelligence and innovation moves to the agent layer above. The interesting question is who owns that layer. Today it is the SaaS vendors. Tomorrow it might be the customers themselves.
What business model replaces it? Honestly, nobody knows yet. I feel exactly like I did when I first saw the Mosaic web browser. Completely fascinated. And completely unable to imagine how anyone would ever make money with it. That uncertainty is a characteristic of Genesis. The web had no business model in 1993 either. The business models emerged as the technology matured. The same will happen here.
Have We Not Heard This Before?
Yes. The Dynabook vision (1977), end-user programming research (1993), and Object Oriented computing all promised composable, user-controlled environments. The ambition was right. The mechanism was wrong.
The difference this time: the underlying components have evolved. In the 1990s, the enabling technologies for situation-oriented computing (natural language processing, on-demand compute, machine learning) were in Genesis. Unreliable, expensive, accessible only to specialists. Today, large language models are products approaching Commodity. Cloud compute is Commodity. The layer above them can now emerge because the layer below has matured. This is Evolution Focus in action: each evolutionary stage enables the one above it.
But the challenges are real. When every user generates their own just-in-time tools, governance becomes extraordinarily difficult. Who audits software that exists for five minutes? Who ensures compliance when the interface is generated fresh each time? These are not objections that invalidate the paradigm. They are characteristics of Genesis. Every technology in Genesis is unstructured, hard to manage, not yet standardized. The governance structures will follow. But they are not here yet. In heavily regulated domains like healthcare and finance, liability and compliance constraints will slow this transition further. Autonomous agents making consequential decisions without human review face legal barriers that technology alone cannot resolve.
What This Means for the Enterprise
Software interaction evolves from human-operated to machine-initiated. The human role shifts from Operator through Director to Governor. And Situation Oriented computing now demands attention.
For enterprises, the consequence is concrete: rethink your product portfolio and reorganize how you work.
Your products and services sit somewhere on this evolution. Some are still File & Process Oriented. Others have reached Conversation Oriented. A few may already be experimenting with Situation Oriented capabilities. Not all will evolve at the same pace. But the direction is clear, and falling behind means competing on price in a market where competitors deliver Delighters.
The way your organization works must change with it. When the machine initiates and the human governs, you need less operational overhead and more governance capability. Teams shift from operating tools to governing AI agents that operate on their behalf. Leadership shifts from directing specific solutions to defining the constraints within which AI-generated solutions must operate. The AI-enhanced team is the organizational expression of this inversion.
Governance in this context means answering new questions. What are the boundaries within which agents may act? Who reviews outcomes when there is no fixed application to audit? How do you ensure compliance when the interface is generated fresh each time? These are not theoretical concerns. They are the operational reality of the Governor role. The technology is moving faster than the organizational capability to govern it. Enterprises that develop this capability first will have a significant advantage.
Map where each of your products sits on the evolution path. Identify the next step for each. Reorganize to match. This is Evolution Focus in practice, as Developing a Strategy for the GenAI Era illustrates with concrete examples.
The end of software as we know it is not the end of the need for structure. It is the beginning of a new kind of structure: one designed for governing intelligence rather than operating tools.
This is not a future scenario. AI Agents in Practice Beyond Coding shows what this architecture looks like in daily practice today: the system that built this book, manages consulting engagements, and publishes the website. One person, dozens of agents, a complete value chain. The inversion is already working.