Worksona Playground: A Zero-Setup Environment for Multi-Agent Experimentation
#worksona #portfolio #multi-agent #no-code #experimentation #simulation
David Olsson

You open an HTML file in a browser. No installation, no build step, no account. A full multi-agent AI simulation environment is running.
That is the premise of Worksona Playground, and it shapes every design decision in the project.
What It Is
Playground is an interactive React-based application for composing and running multi-agent scenarios. Users define Persona Cards – named expert profiles with specific knowledge domains, communication styles, and analytical lenses – and pair them with Scenario Cards that carry structured context: situation, background, challenges, opportunities, and measurable objectives.
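The two card types can be pictured as plain data records. A minimal sketch – the field names here are illustrative, not the app's actual schema:

```javascript
// Hypothetical shapes for the two card types (field names are illustrative).
const personaCard = {
  name: "Chief Financial Officer",
  domain: "corporate finance",
  style: "terse, numbers-first",
  lens: "cash flow and downside risk",
};

const scenarioCard = {
  situation: "Evaluating a SaaS vendor contract renewal",
  background: "Three-year relationship, 20% price increase proposed",
  challenges: ["budget freeze", "migration cost if we switch"],
  opportunities: ["multi-year discount", "expanded seats"],
  objectives: ["decide renew vs. switch within 30 days"],
};
```

Because personas are data rather than ad-hoc prompt text, the same card can be reused across many scenarios unchanged.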
Once a scenario is assembled, any prompt or uploaded document (PDF, Word, plain text) can be run through the full panel simultaneously. Each persona produces its own response. A CFO, a Risk Officer, and a Customer Success Lead read the same document and respond through their own analytical frames – not because of improvised prompt variation, but because those frames are encoded as persistent, reusable definitions.
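Fanning the same input out to every persona is conceptually a parallel map over the panel. A sketch, assuming a hypothetical `callModel` helper standing in for the real provider API call:

```javascript
// Hypothetical helper: stands in for one provider API call per persona.
async function callModel(persona, input) {
  // The real app would send persona.lens etc. to the configured LLM provider.
  return `${persona.name}: analysis of "${input}" through a ${persona.lens} lens`;
}

// Run the full panel simultaneously and collect per-persona responses.
async function runPanel(personas, input) {
  const responses = await Promise.all(
    personas.map((p) => callModel(p, input))
  );
  return personas.map((p, i) => ({ persona: p.name, response: responses[i] }));
}
```

`Promise.all` keeps the calls concurrent, so panel latency is roughly that of the slowest single persona rather than the sum of all of them.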
The entire application lives in a single HTML file. React runs from a CDN. Data persists in browser IndexedDB. Nothing leaves the machine unless it is explicitly sent to an AI provider API.
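Local persistence via IndexedDB can be sketched as follows. The database and store names here are illustrative, not the app's actual ones; `savePlayground` is browser-only and shown for shape:

```javascript
// Pure helper: turn an in-memory playground into a storable record.
function toRecord(playground) {
  return {
    id: playground.id,
    saved: Date.now(),
    data: JSON.stringify(playground),
  };
}

// Browser-only sketch: write the record into an IndexedDB object store.
// (Database and store names are illustrative.)
function savePlayground(playground) {
  const open = indexedDB.open("worksona-playground", 1);
  open.onupgradeneeded = () =>
    open.result.createObjectStore("playgrounds", { keyPath: "id" });
  open.onsuccess = () => {
    const tx = open.result.transaction("playgrounds", "readwrite");
    tx.objectStore("playgrounds").put(toRecord(playground));
  };
}
```

The important property is architectural: everything round-trips through the browser's own storage, so there is no server to provision and nothing to leak by default.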
Why It Matters
The standard way to use an LLM produces one confident answer. That is useful for many tasks. It is less useful when the goal is to surface tension between perspectives – when you want the risk analyst and the product lead to genuinely disagree, and to see where they diverge.
Playground encodes that structure. The "Expert Panel" abstraction makes multiple conflicting viewpoints the default output, not a special prompt engineering trick.
The zero-infrastructure constraint matters for a different reason. The biggest barrier to AI tool adoption inside organizations is not capability – it is friction. IT approval, deployment pipelines, data residency concerns. A tool that runs in a browser tab, stores nothing externally by default, and requires no installation sidesteps all of that. Teams can evaluate the multi-agent model without a procurement process.
The export system extends that portability. A complete playground – agent definitions, scenario, prompts, LLM settings, and results – exports as a single JSON file. A completed analysis is a tradeable artifact. Recipients can re-import it, inspect the agent configurations, change parameters, and re-run. That is a different kind of analytical handoff than a PDF report.
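The single-file export can be sketched as a pair of functions. The field names and version check are illustrative, not the app's actual export schema:

```javascript
// Sketch: bundle a complete playground into one portable JSON document
// (field names and versioning are illustrative, not the real schema).
function exportPlayground(state) {
  return JSON.stringify(
    {
      version: 1,
      agents: state.agents,     // persona definitions, incl. LLM settings
      scenario: state.scenario, // situation, challenges, objectives
      prompts: state.prompts,
      results: state.results,   // per-persona responses from the last run
    },
    null,
    2
  );
}

// Re-import: a recipient can load, inspect, tweak, and re-run.
function importPlayground(json) {
  const doc = JSON.parse(json);
  if (doc.version !== 1) throw new Error("unsupported export version");
  return doc;
}
```

Because results travel alongside the configuration that produced them, the recipient gets a reproducible analysis rather than a static report.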
How It Works
flowchart TD
A[User opens index.html in browser] --> B[Define Persona Cards]
B --> C[Define Scenario Card\ncontext ยท challenges ยท objectives]
C --> D[Add source material\nupload doc or write prompt]
D --> E{Run Simulation}
E --> F[Persona 1 → LLM API call]
E --> G[Persona 2 → LLM API call]
E --> H[Persona N → LLM API call]
F --> I[Collect all responses]
G --> I
H --> I
I --> J[Display per-persona results]
J --> K[Export as JSON / CSV]
K --> L[IndexedDB – persisted locally]
Each agent card carries its own LLM settings – provider, model, temperature, token limit. A synthesis agent can use Claude Opus for reasoning depth while a quick-scan agent uses GPT-4o Mini for speed and cost. That configuration lives at the agent level, not the session level.
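Because the settings attach to the agent, a mixed panel is just heterogeneous configuration. A sketch – the exact model identifier strings and field names here are illustrative:

```javascript
// Per-agent LLM settings: each card carries its own provider config
// (identifier strings and field names are illustrative).
const synthesisAgent = {
  name: "Synthesis",
  llm: { provider: "anthropic", model: "claude-opus", temperature: 0.7, maxTokens: 4000 },
};

const quickScanAgent = {
  name: "Quick Scan",
  llm: { provider: "openai", model: "gpt-4o-mini", temperature: 0.2, maxTokens: 1000 },
};
```

A session-level setting would force every persona onto the same model; agent-level settings let cost and reasoning depth vary per role.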
The template library provides pre-built patterns – hierarchical, iterative, federation, competitive, peer-to-peer, adaptive – so there is no blank-page problem when starting a new scenario. GitHub repositories are supported as source material alongside uploaded documents, which makes the pattern directly applicable to code review and open-source evaluation workflows.
Persona stacking adds another layer: a role-level agent (CFO) can be filtered through a character-level persona (a late-career risk-averse operator), producing a qualitatively more specific analytical voice than role-based prompting alone.
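One way to picture persona stacking is as prompt composition: the role supplies scope and focus, the character supplies tone and risk appetite. A sketch with a hypothetical `stackPersona` helper and an illustrative prompt template:

```javascript
// Hypothetical sketch of persona stacking: a role-level agent filtered
// through a character-level persona (prompt wording is illustrative).
function stackPersona(role, character) {
  return [
    `You are a ${role.name}. Focus: ${role.lens}.`,
    `Adopt this character: ${character.description}.`,
    `Let the character shape tone and risk appetite, not the role's scope.`,
  ].join("\n");
}

const prompt = stackPersona(
  { name: "CFO", lens: "cash flow and downside risk" },
  { description: "late-career, risk-averse operator" }
);
```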
Where It Fits in Worksona
Playground is the experimental layer of the portfolio. It is not production-grade agent infrastructure – that is worksona-api. It is not a visual workflow IDE – that is worksona-studio. Its job is to let practitioners understand the multi-agent model by building with it, before committing to code or production architecture.
Ideas that prove useful in Playground – delegation patterns, persona structures, scenario formats – become inputs to the rest of the portfolio. It is where the question "what would a panel of agents say about this?" gets answered quickly and cheaply, without any of the scaffolding that production systems require.
The embedded curriculum and self-paced learning environment make that learning function explicit. Playground is a tool for building agents, and simultaneously a structured course in how to think about building agents.
That combination – immediate execution, structured learning, zero operational cost – is what makes it a practical starting point for anyone entering the Worksona platform.
Live: w-playground.netlify.app