The Four Acts: How Agentic Systems Evolve
#worksona #architecture #evolution #agents #strategy #ai
David Olsson

The history of the Worksona portfolio is the history of a question that keeps changing. Each time we answered one version of the question, a harder version appeared.
Looking back across the portfolio's development, we can see four distinct acts. Each act answered a question that the previous act made it possible to ask.
Act I: Foundation – How do we coordinate multiple AI agents?
The first question was structural. Single-agent AI assistants were capable at individual tasks but broke under complexity. "Write me a report on our competitive position" requires research, synthesis, writing, formatting, and fact-checking – each a distinct capability that a single model handles poorly when juggling all of them at once.
The answer was specialization and coordination: build 24 agents, each with a narrow, well-defined responsibility, and build a delegation system that routes tasks to the right agents in the right order. The core platform emerged from this act: the Worksona API with its agent registry, the delegation system, the MCP protocol for tool integration, the knowledge graph infrastructure for persistent memory.
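In spirit, the routing half of this is a capability lookup. The sketch below is illustrative only – the class and method names (`Agent`, `Registry`, `route`) are assumptions, not the actual Worksona API:

```python
# Hypothetical sketch of capability-based delegation. Names are
# illustrative assumptions, not the real Worksona agent registry.
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    capabilities: set[str]


class Registry:
    def __init__(self) -> None:
        self._agents: list[Agent] = []

    def register(self, agent: Agent) -> None:
        self._agents.append(agent)

    def route(self, required: str) -> Agent:
        # Delegate to the first agent whose declared capabilities
        # cover the required step.
        for agent in self._agents:
            if required in agent.capabilities:
                return agent
        raise LookupError(f"no agent can handle {required!r}")


registry = Registry()
registry.register(Agent("researcher", {"research", "fact-check"}))
registry.register(Agent("writer", {"write", "format"}))

# A composite task is decomposed into steps, each delegated separately.
plan = ["research", "write", "format", "fact-check"]
assignments = [(step, registry.route(step).name) for step in plan]
```

The point of the sketch is the separation of concerns: agents declare narrow capabilities, and the delegator owns the mapping from task steps to agents.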
The question this act answered: how do we coordinate multiple AI agents to solve hard problems?
Act II: Orchestration – How do we make agent coordination accessible?
Having built the coordination machinery, we ran into a different problem. The machinery was powerful but opaque. Using it required understanding the agent registry, the delegation patterns, the skill loading system. This was reasonable for engineers building new agents. It was unreasonable for anyone else.
The second act was about interfaces. Visual workflow editors exposed agent composition as a node graph (agents as nodes, data flows as edges) that domain experts could manipulate without writing code. Chat interfaces translated natural language intent into structured agent execution and returned results as artifacts. Studio provided drag-and-drop multi-agent choreography with real-time execution feedback.
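A minimal way to see why the node-graph framing works: once edges encode "this agent consumes that agent's output," execution order falls out of a topological sort. This is a sketch of the idea, not the editor's actual data model:

```python
# Sketch: a workflow as a dependency graph, executed in topological
# order. Node names are hypothetical examples, not real Worksona agents.
from graphlib import TopologicalSorter

# node -> the nodes whose output it depends on
workflow = {
    "research": set(),
    "synthesize": {"research"},
    "write": {"synthesize"},
    "fact_check": {"write"},
}

execution_order = list(TopologicalSorter(workflow).static_order())
```

Independent branches of a real workflow would sort into any valid order, which is exactly what lets an orchestrator run them in parallel.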
The question this act answered: how do we make agent coordination intuitive and accessible to non-engineers?
Act III: Domain Specialization – What happens when we apply this to specific industries?
With coordination solved and interfaces accessible, the question shifted from infrastructure to application. What does this infrastructure enable in the real world?
The three vertical portfolios are the answer. ATOMIC47 takes the orchestration infrastructure and applies it to pharmaceutical document processing, a domain with specific data (PDFs, chemical structures), specific agents (vision detection, optical recognition, property calculation), and specific accumulated knowledge (compound libraries built over time). MARKET_RESEARCH applies it to research operations: persona simulation, quota management, hypothesis testing. APPS applies it to field operations: evidence capture, pattern visualization, decision support.
Each vertical revealed domain-specific requirements that fed back into the platform: the knowledge graph layer became more sophisticated from being used to accumulate pharmaceutical intelligence; the delegation patterns became more refined from being applied to research workflows; the agent registry became more flexible from supporting both general-purpose and domain-specific agents.
The question this act answered: what happens when we apply multi-agent orchestration to each industry's specific challenges?
Act IV: Autonomous Evolution – Can AI systems improve themselves?
This act is current and unresolved.
GitSona demonstrates the possibility: an AI agent that writes its own functions when it encounters problems it cannot solve with existing tools, persists those functions to a repository, and uses them in subsequent executions. The positive feedback loop is real: more problems encountered, more tools created, more problems solvable.
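The loop itself is simple to model. GitSona persists functions to a git repository; the in-memory sketch below keeps only the reuse logic, and every name in it is an illustrative assumption:

```python
# Illustrative model of the persist-and-reuse loop, not GitSona's
# actual implementation (which writes functions to a git repository).
from typing import Callable


class ToolRepository:
    """Stores agent-written tools so later runs can reuse them."""

    def __init__(self) -> None:
        self._tools: dict[str, Callable] = {}

    def solve(self, problem: str, write_tool: Callable[[], Callable]) -> Callable:
        # If no persisted tool covers the problem, "write" one (in the
        # real system, the model synthesizes code here) and keep it.
        if problem not in self._tools:
            self._tools[problem] = write_tool()
        return self._tools[problem]


repo = ToolRepository()
synthesis_count = 0


def write_doubler() -> Callable:
    global synthesis_count
    synthesis_count += 1          # track how often a tool is "written"
    return lambda x: x * 2


first = repo.solve("double-a-number", write_doubler)
second = repo.solve("double-a-number", write_doubler)  # reused, not rewritten
```

The feedback loop lives in the second call: the tool written for the first encounter is reused, so the cost of synthesis is paid once per problem class, not once per execution.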
The delegation patterns show a softer version of the same capability: the delegator selects coordination patterns at runtime based on task structure, learning over time which patterns produce better outcomes for which task types. This is not self-modification, but it is adaptation.
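One plausible shape for that adaptation is keeping per-task-type outcome statistics and exploiting the best-scoring pattern. This is an interpretation, not the delegator's actual mechanism, and all names are hypothetical:

```python
# Hypothetical sketch of outcome-driven pattern selection. The pattern
# names and scoring scheme are assumptions for illustration only.
from collections import defaultdict


class PatternSelector:
    """Tracks which coordination pattern scores best per task type."""

    def __init__(self, patterns: list[str]) -> None:
        self.patterns = patterns
        # (task_type, pattern) -> [total_score, run_count]
        self.stats = defaultdict(lambda: [0.0, 0])

    def record(self, task_type: str, pattern: str, score: float) -> None:
        entry = self.stats[(task_type, pattern)]
        entry[0] += score
        entry[1] += 1

    def choose(self, task_type: str) -> str:
        # Exploit the pattern with the best observed mean score so far;
        # untried patterns score 0.0, so early runs fall back to list order.
        def mean(pattern: str) -> float:
            total, count = self.stats[(task_type, pattern)]
            return total / count if count else 0.0

        return max(self.patterns, key=mean)


selector = PatternSelector(["pipeline", "fan-out", "debate"])
selector.record("report", "pipeline", 0.6)
selector.record("report", "fan-out", 0.9)
selector.record("brainstorm", "debate", 0.8)
```

Note that nothing here rewrites the system's own code: the patterns are fixed, and only the choice among them adapts, which is what distinguishes this from GitSona-style self-extension.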
The question this act is asking: can AI systems expand their own capabilities autonomously, and can we make that expansion safe, auditable, and beneficial?
The pattern across acts
Each act was enabled by the previous one and exposed a question the previous one could not ask. You cannot ask how to make orchestration accessible until you have built orchestration. You cannot ask what domains benefit until you have made the system accessible. You cannot ask whether systems can self-improve until you have applied them to real domains and accumulated the knowledge that informs what "improvement" means.
This is the structure of compound technical work: not a roadmap planned in advance, but a sequence of problems where solving each one makes the next one visible.
The fifth act, if the pattern holds, will take up the question that autonomous evolution makes it possible to ask. We do not yet know what that question is. That is how we know we are in Act IV.