17 Apr 2026 · David Olsson
Worksona

Worksona Delegator — Dynamic Teams from a Single Query

#worksona #portfolio #delegator #streaming #dynamic-model-selection #multi-agent #reference-implementation #delegation


The delegator pattern describes how work should be decomposed and routed. Worksona Delegator is what that looks like in a running system.

It is the reference implementation — the most feature-complete version of the pattern in the portfolio, and the one from which a family of specialized descendants was derived. Version 2.0 adds real-time streaming, dynamic per-agent model selection, IndexedDB session persistence, and support for next-generation models including GPT-5 and Claude 4.5 Sonnet.

What It Does

Submit a complex query. The system takes it from there.

The Delegation Engine analyzes the query for complexity, domain breadth, and task structure. It selects an appropriate coordination topology from six options — hierarchical, parallel, sequential, peer-to-peer, iterative, or competitive — or selects automatically when pattern: 'auto' is specified. The Agent Assembler then creates specialist agents tailored to the task: researcher, analyst, writer, critic, or domain-specific variants depending on what the query requires.
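A delegation request might look like the following sketch. Only the six topology names and pattern: 'auto' come from the article; the validatePattern helper and the request's other field names are illustrative assumptions, not the published API.

```javascript
// The six coordination topologies named in the article, plus 'auto',
// which asks the Delegation Engine to pick a topology itself.
const PATTERNS = [
  'hierarchical', 'parallel', 'sequential',
  'peer-to-peer', 'iterative', 'competitive', 'auto',
];

// Hypothetical guard a coordinator might run before assembling a team.
function validatePattern(pattern) {
  if (!PATTERNS.includes(pattern)) {
    throw new Error(`Unknown coordination pattern: ${pattern}`);
  }
  return pattern;
}

// Illustrative request shape (field names are assumptions).
const request = {
  query: 'Compare vector databases for a retrieval pipeline',
  pattern: validatePattern('auto'),
};
```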

Each specialist executes its subtask. Results flow to the Synthesis Layer, which reconciles outputs, resolves conflicts between workers, and produces a final deliverable. Optionally, the system also generates a process report, a cross-agent analysis, and a Mermaid sequence diagram derived programmatically from the actual agent interaction log.

Why Dynamic Team Assembly

Fixed agent graphs make a strong assumption: the team you design in advance is the right team for whatever query arrives. That assumption fails quickly. A market research brief needs different specialists than an architecture review. A simple question needs one or two workers. A multi-domain strategy brief may need six.

Dynamic assembly removes that constraint. The coordinator decides team size and composition based on the query, not based on a pre-configured graph. A granularity setting (1–10) gives the user control over decomposition depth: granularity 3 produces a lean team for a fast answer; granularity 9 produces a fully specialized team for maximum depth.
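One way the granularity dial could map to roster size is sketched below. The exact curve is an assumption; the article only states that the range is 1-10 and that low granularity yields a lean team while high granularity yields a fully specialized one.

```javascript
// Hypothetical mapping from the 1-10 granularity setting to team size.
// Roughly one extra specialist for every two steps of granularity.
function teamSizeFor(granularity) {
  if (granularity < 1 || granularity > 10) {
    throw new RangeError('granularity must be between 1 and 10');
  }
  return Math.max(1, Math.ceil(granularity / 2));
}
```

Under this sketch, granularity 3 yields a two-agent team and granularity 9 yields five specialists, consistent with the lean-versus-deep trade-off described above.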

The six coordination topologies encode structurally different collaboration shapes. Hierarchical: centralized coordinator, top-down assignment, one synthesis pass. Competitive: all agents independently tackle the same problem, an evaluation agent selects the best. Iterative: the roster and plan adapt mid-execution as intermediate results reveal new requirements. These are not configuration labels — they produce qualitatively different outputs from the same set of specialists.

Dynamic Model Selection

Each agent in a delegation run can use a different model. When modelConfig.mode is set to 'dynamic', the coordinator selects the model for each agent based on its role and the nature of its subtask.

In practice: a researcher agent might run on Claude 4.5 Sonnet with web search enabled and high reasoning effort; a structured-output analyst on GPT-5 with a larger token budget for complex reasoning; a synthesis writer on a lighter model where throughput matters more than raw capability. The provider and model fields in the agent schema are independent — switching either without touching business logic is straightforward.
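A role-to-model lookup of this kind might look like the sketch below. The researcher and analyst pairings follow the article's examples; the lookup table shape, the fallback rule, and the writer-tier model name are illustrative assumptions.

```javascript
// Hypothetical per-role model table used when modelConfig.mode === 'dynamic'.
const ROLE_MODELS = {
  researcher: { provider: 'anthropic', model: 'claude-4.5-sonnet', webSearch: true },
  analyst:    { provider: 'openai',    model: 'gpt-5' },
  // Lighter, throughput-oriented tier; the model name here is an assumption.
  writer:     { provider: 'openai',    model: 'gpt-5-mini' },
};

// Roles without a dedicated entry fall back to the writer-tier model.
function modelFor(role) {
  return ROLE_MODELS[role] ?? ROLE_MODELS.writer;
}
```

Because provider and model are separate fields, swapping either for a single role is a one-line change to the table.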

Version 2.0 also handles the API differences between legacy and next-generation models — max_completion_tokens vs max_tokens, reasoning token accounting, per-model capability flags like web search and audio output — transparently, so agent configurations do not need model-specific branching logic.
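The token-limit difference in particular can be absorbed by a small normalizer like the one below: newer OpenAI models take max_completion_tokens where legacy chat models take max_tokens. The isNextGen predicate and its model-name pattern are illustrative assumptions about how such a check might be written.

```javascript
// Sketch of the parameter normalization described above: pick the right
// token-limit key for the target model so agent configs stay model-agnostic.
function tokenLimitParam(model, limit) {
  // Assumed heuristic: gpt-5-family and o-series reasoning models are "next-gen".
  const isNextGen = /^(gpt-5|o\d)/.test(model);
  return isNextGen
    ? { max_completion_tokens: limit }
    : { max_tokens: limit };
}
```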

Streaming and Persistence

Real-time streaming means output appears token-by-token as each specialist works, rather than waiting for a complete response before rendering anything. For long-running delegations — research tasks, multi-step analyses, comprehensive reports — this makes the system feel responsive rather than opaque.
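Consuming such a stream on the client might look like the sketch below. The chunk shape ({ agentId, token }) and the renderStream name are assumptions, not the published wire format; the point is that each specialist's transcript grows incrementally while a callback updates the UI per token.

```javascript
// Sketch: accumulate per-agent transcripts from an async stream of tokens,
// invoking a UI callback as each token arrives rather than on completion.
async function renderStream(chunks, onToken) {
  const transcripts = {};
  for await (const { agentId, token } of chunks) {
    transcripts[agentId] = (transcripts[agentId] ?? '') + token;
    onToken(agentId, token); // e.g. append to that agent's panel in the UI
  }
  return transcripts;
}
```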

Session persistence via IndexedDB means nothing is lost between browser sessions. Every delegation run, every agent conversation, every result, and every model configuration is stored locally. The included database viewer provides search, filtering by agent or date range, and export to JSON, CSV, or Markdown. Iterative work across multiple sessions is supported natively.
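The CSV export, for instance, reduces to flattening stored run records into escaped rows. The field names below are assumptions based on what the article says is stored (runs, agents, results, model configuration); the escaping follows standard CSV double-quote rules.

```javascript
// Sketch of the viewer's CSV export over locally persisted run records.
function runsToCsv(runs) {
  const header = 'timestamp,agent,model,result';
  // Standard CSV escaping: wrap in quotes, double any embedded quotes.
  const escape = (v) => `"${String(v).replace(/"/g, '""')}"`;
  const rows = runs.map((r) =>
    [r.timestamp, r.agent, r.model, r.result].map(escape).join(',')
  );
  return [header, ...rows].join('\n');
}
```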

Where It Lives in the Stack

Browser (worksona.js + UI)
        ↓
Backend (server.js — API proxy, streaming)
        ↓
LLM Providers (OpenAI · Anthropic · Google)

The system runs on a Node/Express backend that proxies API calls to provider endpoints, keeping API keys server-side. For production, the same application deploys to Netlify Functions without code changes. That is a direct expression of the /champion-adoptability leadership command: the barrier to running the system locally is npm install && npm start.
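The key-hiding move the proxy makes can be sketched as a pure mapping from provider name to upstream request: the browser never sees a key, because the backend injects it from the environment. The endpoint URLs and auth headers below match the public OpenAI and Anthropic APIs; the upstreamRequest function itself is an illustrative assumption about server.js, not its actual code.

```javascript
// Provider table: real public endpoints and their auth header conventions.
const PROVIDERS = {
  openai: {
    url: 'https://api.openai.com/v1/chat/completions',
    keyEnv: 'OPENAI_API_KEY',
    header: (k) => ({ Authorization: `Bearer ${k}` }),
  },
  anthropic: {
    url: 'https://api.anthropic.com/v1/messages',
    keyEnv: 'ANTHROPIC_API_KEY',
    header: (k) => ({ 'x-api-key': k }),
  },
};

// Build the server-side fetch arguments for a proxied call; the API key
// comes from the server environment, never from the browser.
function upstreamRequest(provider, body, env = process.env) {
  const p = PROVIDERS[provider];
  if (!p) throw new Error(`Unsupported provider: ${provider}`);
  return {
    url: p.url,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', ...p.header(env[p.keyEnv] ?? '') },
      body: JSON.stringify(body),
    },
  };
}
```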

The Reference Point

Worksona Delegator is the reference implementation from which a family of specialized variants descends: a generation-5 evolution with further architectural refinements, a CLI variant, an MCP-protocol variant for tool-calling integrations, and a visual delegation-pattern editor. Each variant narrows the feature surface for a specific deployment context.

The reference implementation defines the core architecture all variants share: Delegation Engine analyzes and routes, Agent Assembler creates specialists, Synthesis Layer integrates results. That three-part structure is the pattern made concrete. Every delegation run is a demonstration that complex work decomposes, routes well, and synthesizes cleanly — that intelligence is, in fact, a property of structure.


Live: delegator.worksona.io
