Thirty Years in the Room: How David Olsson's Market Research Instincts Built OAIRA
#market research, #AI, #OAIRA, #simulation, #intelligence, #practitioner
There is a particular kind of frustration that only practitioners understand.
It's not the frustration of not knowing what to do. It's the frustration of knowing exactly what needs to happen — what the right model would look like, what the right question would unlock, what the research could reveal if the tooling would only cooperate — and spending most of your career working around the gap between the ambition and the instrument.
For David Olsson, that frustration accumulated across three decades, multiple verticals, and two continents. And it is precisely that frustration — specific, practitioner-grade, earned in the field — that animates every design decision in OAIRA.
The Reid Years: Where the Instinct Was Formed
The story starts in the 1990s, working within the orbit of the Reid family — Angus Reid and Andrew Reid — at a time when Canadian market research was actively trying to evolve beyond its survey-and-tabulation roots.
Angus Reid had built one of the most recognized polling and market research brands in the country. His was a house with serious quantitative ambitions: national omnibus studies, political polling, consumer tracking, large-scale panel infrastructure. The work was rigorous, the datasets were substantial, and the appetite for methodological innovation was real.
Andrew Reid, working in parallel, pushed the innovation frontier further — eventually into what would become the vision for modern insight communities and digital-first research engagement. The Reid lineage, taken together, represents a formative strain of North American MR that understood scale, understood data, and was actively trying to solve for a smarter relationship between respondents and research instruments.
David was in that room — and what the room was working on was simulation.
Early Simulation Work
The simulation tools of the 1990s were primitive by any contemporary standard. But the instinct behind them was not. The goal was to generate synthetic respondent behaviour — to ask "what would this audience do?" before deploying expensive fieldwork — and to use that pre-field intelligence to sharpen instruments, validate hypotheses, and reduce the cost of being wrong.
This is, with better compute and better models, exactly what OAIRA's statistical simulation engine does today. The PersonaTraitVector — an 8-dimensional representation of a respondent's psychology and domain position — and the Beta distribution sampling that generates realistic, internally-consistent response patterns at scale across 1,000 synthetic agents: these are not new ideas. They are early ideas, now properly resourced.
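The mechanics behind that engine are easy to sketch. Here is a minimal illustration in Python; the trait names, scales, and consistency parameter are assumptions for the sake of the example, not OAIRA's actual schema:

import numpy as np
from dataclasses import dataclass

# Hypothetical trait dimensions, for illustration only (not OAIRA's schema).
TRAITS = ["openness", "risk_tolerance", "price_sensitivity", "brand_loyalty",
          "domain_expertise", "tech_adoption", "social_influence", "decision_speed"]

@dataclass
class PersonaTraitVector:
    traits: np.ndarray  # eight values in [0, 1], one per dimension

    def sample_response(self, trait_idx: int, consistency: float = 20.0) -> float:
        """Sample one response anchored on a trait value.

        A Beta distribution centred on the trait keeps an agent's answers
        internally consistent while adding realistic noise; a higher
        `consistency` concentrates probability mass around the trait mean.
        """
        mean = float(self.traits[trait_idx])
        a = mean * consistency + 1e-6          # guard against zero parameters
        b = (1.0 - mean) * consistency + 1e-6
        return float(np.random.beta(a, b))

# A pool of 1,000 synthetic agents answering a 1-5 price-sensitivity question
rng = np.random.default_rng(42)
agents = [PersonaTraitVector(rng.uniform(0, 1, len(TRAITS))) for _ in range(1000)]
idx = TRAITS.index("price_sensitivity")
ratings = [round(1 + 4 * agent.sample_response(idx)) for agent in agents]

The point of the Beta parameterisation is the internal consistency: an agent whose trait mean is 0.8 will keep answering around 0.8, with noise that shrinks as the consistency parameter grows.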
What was understood then is what OAIRA encodes now: that the question-centric model of market research — write questions, collect answers, tabulate — was always a compression of something richer. The richness that got compressed was the person.
The simulation tools were an attempt to put the person back. The technology wasn't ready. The instinct was correct.
The Long Middle: A Palette Carried Across Verticals
After the Reid years, David moved through a succession of industries — technology, professional services, consumer goods, digital product, enterprise SaaS, and more — always carrying the same palette of tools and the same underlying questions.
Not always as a market researcher, exactly. More often as the person in the room who understood what research could and couldn't do, who had a practitioner's skepticism about what data actually represented, and who kept returning to the same structural diagnosis: the research infrastructure that organisations relied on was built around the wrong atom.
The dominant MR platforms — Qualtrics, Confirmit, Forsta, SurveyMonkey — shared an architecture that had been designed for the web era of the late 1990s and extended incrementally ever since. The core data model put the question at the centre:
Survey
└── Questions []
    └── Responses []
        └── Respondent (anonymous row ID)
The respondent is an afterthought. A row identifier. A bucket for answers to accumulate in. Analysis is a reporting layer bolted on top: cross-tabs, filter rules, pivot tables, and auto-generated summaries that each platform added late in its history as AI pressure grew.
This architecture encodes an epistemology. It says: what matters is the question and the aggregate distribution of responses to it. The person who gave those responses is largely irrelevant. Strip out their identity, aggregate their answers, find the mean, ship the deck.
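Invert that tree and you get the architecture the rest of this piece describes. Sketched in the same notation (as a reading of the thesis, not a literal dump of OAIRA's schema), the person-centric model puts the respondent at the root:

Respondent (richly modeled person)
├── PersonaTraitVector (psychology, domain position)
└── Conversations []
    └── Responses [] (interpretable against the person who gave them)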
Across verticals, in industry after industry, David encountered the consequences of that epistemology. Research that couldn't explain why. Decks that described behaviour without illuminating motivation. Segments that were statistically stable and practically useless. Hypotheses that got validated by the instrument design rather than by the market. The data was fine. The architecture was wrong.
The AI Inflection
The arrival of production-quality large language models changed the calculus.
Not because AI could answer the research questions directly — that route leads to hallucination, confirmation bias, and a false sense of certainty that is worse than ignorance. But because AI could finally make the person-centric architecture practical.
Here is what person-centric research requires that the traditional model couldn't operationalise:
Rich persona modeling at scale. You can't run a person-centric research study with 500 respondents if each respondent is a row ID. You need each respondent to be a legible person — with psychology, domain expertise, behavioral tendencies, internal consistency. Generating that richness historically required expensive qualitative work, ethnographic investment, or consultant judgment. LLMs can do it in seconds from a research brief.
Autonomous adaptive interviewing. The depth of understanding you want from a real respondent — the kind of depth that tells you why they said what they said, what they meant by a phrase, what they were trading off — requires conversation. Not a scripted chatbot. Adaptive, context-sensitive conversation that follows threads, identifies gaps in coverage, and knows when to probe and when to move on. This is what OAIRA's autonomous AI interviewer does, complete with a coverage system that tracks depth of understanding (not just question completion) at the level of individual respondents.
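A toy version of that coverage idea, with field names assumed for illustration, tracks depth of understanding per topic and surfaces the thinnest area for the interviewer to probe next:

from dataclasses import dataclass, field

@dataclass
class TopicCoverage:
    """Track depth of understanding per topic, not question completion.

    Depth is a 0-1 score; the scoring scheme here is hypothetical.
    """
    required_depth: float = 0.7
    depth: dict[str, float] = field(default_factory=dict)

    def record(self, topic: str, depth_gain: float) -> None:
        # Each probing exchange raises the topic's depth score, with
        # diminishing returns as understanding saturates.
        current = self.depth.get(topic, 0.0)
        self.depth[topic] = current + (1.0 - current) * depth_gain

    def next_gap(self) -> str | None:
        """Return the least-covered topic, or None if coverage is complete."""
        gaps = {t: d for t, d in self.depth.items() if d < self.required_depth}
        return min(gaps, key=gaps.get) if gaps else None

coverage = TopicCoverage()
for topic in ["switching triggers", "pricing tradeoffs", "workflow friction"]:
    coverage.record(topic, 0.0)            # register topics from the guide
coverage.record("pricing tradeoffs", 0.8)  # one topic now well covered
print(coverage.next_gap())                 # -> "switching triggers"

Completion is then defined as every topic clearing the depth threshold, not as the last scripted question being asked.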
Methodology-aware analysis. Jobs-to-be-Done analysis and Gap Analysis and Journey Mapping each have specific, validated analytical frameworks. Ulwick's opportunity scoring for JTBD. Importance-satisfaction gap matrices for gap analysis. Friction rate by journey stage. These frameworks are not mysterious — they're documented and teachable — but they require being encoded into the analysis layer, not applied manually by a consultant after export. AI makes this encoding practical.
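Of the three, Ulwick's opportunity algorithm is the most compact, and it is public. A minimal implementation, using the conventional 0 to 10 scoring:

def opportunity_score(importance: float, satisfaction: float) -> float:
    """Ulwick's JTBD opportunity algorithm.

    importance, satisfaction: 0-10 scores (conventionally, the share of
    respondents rating an outcome 4 or 5 on a 5-point scale, times 10).
    Underserved outcomes score above roughly 10; overserved ones fall below.
    """
    return importance + max(importance - satisfaction, 0.0)

# A highly important, poorly satisfied outcome is a large opportunity
print(opportunity_score(importance=8.2, satisfaction=3.1))  # roughly 13.3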
Intelligence that compounds. The output of traditional MR is an insight — a curated, interpreted snapshot of a moment. Useful, expensive, and static. The output of OAIRA is intelligence — queryable, continuously updateable, and cumulative. Each study adds to a knowledge graph. Each simulation calibrates the model. Each interview deepens the persona library. The system gets smarter as you use it.
None of these capabilities fell out of AI on its own. They required someone who had been thinking about the right architecture for decades and was finally in a position to build it.
What OAIRA Actually Is
Read the OAIRA blog in sequence and you trace the unfolding of a complete thesis. The first post, The Taxonomy Was Wrong, names the structural problem: the question-centric model is a historical artefact of tooling constraints, not a principled epistemology. OAIRA replaces it with a person-centric architecture where the respondent is a richly modeled subject, not an anonymous row ID.
The second post, What Person-Centric Research Looks Like in Code, makes the philosophy concrete: the PersonaTraitVector, the Beta distribution response generator, the coverage-based interview completion model, the seven context-specific AI agents that participate in the research loop as active collaborators rather than passive tools.
The third post, The Architecture of MR Software, walks a B2B product team through the same research scenario — prioritising a Q3 roadmap — executed on a traditional platform versus OAIRA. The difference is not just speed (hours versus weeks) or cost (a fraction of the traditional $60,000 engagement). The difference is architectural: one system knows what the methodology is for and encodes that knowledge into every layer of the instrument, the analysis, and the output. The other doesn't.
The features that followed that foundation — the autonomous AI interviewer, real-time voice research, the blended human and synthetic respondent model, the five-agent deep research pipeline, the ATLAS knowledge graph, crowd simulation for behavioral research, biometric signal reading, and the white-label platform infrastructure — are elaborations of the same thesis. Each feature is person-centric. Each feature compresses the gap between the ambition of research and the instrument available to pursue it.
This is not a feature roadmap. It's an argument, built in code.
The Practitioner's Advantage
Building research infrastructure is a specific skill. Building good research infrastructure requires understanding research epistemology at a level that most engineers and most product managers never encounter.
What does it mean to measure something? What is the difference between an instrument that tells you what people said and one that tells you what people meant? When is synthetic data generative and when is it deceptive? What does internal validity require, and how do you encode it into a step engine rather than rely on a consultant to enforce it case-by-case?
These are questions that practitioners answer with muscle memory because they've encountered the consequences of getting them wrong. The consultant who gets called in to salvage a badly designed study. The client who makes a major product decision on the basis of data that, on examination, measured the wrong construct. The fieldwork that comes back with ceiling effects on every rating question because nobody caught the ceiling in the pre-test.
David's practitioner background is not incidental to OAIRA. It is load-bearing.
The survey quality checker that detects leading questions, double-barreled questions, and loaded language before a survey goes live — that comes from knowing how many bad surveys have been deployed by well-intentioned teams who didn't know to look for those problems. The 12-item research checklist that tracks decision gates through a study — context established, research question defined, methodology selected, validation simulation run, questions reviewed — that comes from watching how often research programs go wrong because a foundational decision was left implicit rather than explicit. The ATLAS knowledge graph that makes research intelligence cumulative across studies rather than disposable after each engagement — that comes from watching clients commission essentially the same research three years apart because nothing accumulated from the last time.
Every one of those features is an answer to a problem David encountered in the field. The code is applied practitioner knowledge.
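To make the first of those concrete, here is a toy heuristic in the spirit of that quality checker. The patterns are illustrative placeholders, not OAIRA's actual rules:

import re

# Illustrative heuristics only; a real checker would be far more nuanced.
CHECKS = {
    "leading": re.compile(r"\b(don't you|wouldn't you|isn't it true|surely)\b", re.I),
    "double_barreled": re.compile(r"\b\w+\s+(and|or)\s+\w+.*\?", re.I),
    "loaded": re.compile(r"\b(waste|fail|obviously|everyone knows)\b", re.I),
}

def check_question(text: str) -> list[str]:
    """Flag common survey-writing problems in a draft question."""
    return [name for name, pattern in CHECKS.items() if pattern.search(text)]

print(check_question("Don't you agree our fast and reliable service is great?"))
# -> ['leading', 'double_barreled']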
On Building Flexible, Extensible AI Systems
One of the architectural decisions in OAIRA that reflects practitioner maturity is the refusal to hard-code intelligence.
Most MR platforms, when they adopt AI, bolt it on as a fixed feature: an AI-generated summary here, a sentiment classifier there, a chatbot interface for the dashboard. The AI is a capability, separate from the platform, applied to its outputs.
OAIRA treats AI as infrastructure. The Agent Registry — the full, inspectable inventory of every AI agent running in the platform, their system prompts, their models, their roles — is evidence of this philosophy. Every distinct capability is an agent, not a feature. New capabilities are new agents. Behavior is configurable at the instruction level. Multiple models coexist: Claude Sonnet for complex analytical reasoning, Claude Haiku for high-throughput structured generation. The system is auditable because every output can be traced to the agent and instruction set that produced it.
This architecture is flexible and extensible in a way that fixed-feature AI bolted onto a traditional platform can never be. You can add an archetype — a new interviewer persona with specific tone and domain calibration — without writing code. You can configure the Research Designer agent's behavior for a specific client domain. You can run the same study design through different persona pools and compare how the agent analyses shift.
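Under that philosophy, a registry entry can be little more than declarative data. The shape below is a guess at the pattern, not OAIRA's actual schema:

from dataclasses import dataclass

@dataclass(frozen=True)
class AgentSpec:
    """One entry in a hypothetical agent registry: capability as configuration."""
    name: str
    model: str            # which LLM backs this agent
    role: str             # what the agent does in the research loop
    system_prompt: str    # inspectable, editable behavior definition

REGISTRY = [
    AgentSpec(
        name="research_designer",
        model="claude-sonnet",    # complex analytical reasoning
        role="Turns a research brief into methodology and instrument design",
        system_prompt="You are a research methodologist...",
    ),
    AgentSpec(
        name="persona_generator",
        model="claude-haiku",     # high-throughput structured generation
        role="Generates internally consistent synthetic respondents",
        system_prompt="You generate research personas...",
    ),
]
# Adding a capability means adding an AgentSpec, not shipping new code paths,
# and every output is traceable to the spec that produced it.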
The MCP server layer extends this further: OAIRA exposes its research capabilities as tools that external AI systems — Claude desktop, custom agents, enterprise AI infrastructure — can call directly. Your research platform isn't just a web application. It's an intelligence layer that other AI systems can query.
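The MCP pattern itself is public. A minimal sketch using the official Python SDK, with a hypothetical tool name and signature standing in for whatever OAIRA actually exposes:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("oaira-research")  # hypothetical server name

@mcp.tool()
def run_simulation(research_brief: str, n_agents: int = 1000) -> str:
    """Run a synthetic-respondent simulation for a research brief.

    Hypothetical tool: illustrates the pattern of exposing research
    capabilities for external AI systems (e.g. Claude Desktop) to call.
    """
    return f"Simulated {n_agents} agents against brief: {research_brief[:60]}..."

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for MCP clients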
This is what it means to build AI-native rather than AI-augmented. The intelligence isn't added to the research platform. It is the research platform.
The Market Research Industry's Real Problem
There's a version of OAIRA's story that frames it as a disruption narrative: faster, cheaper, better than legacy platforms. That version is true and also inadequate.
The more honest framing is that market research has been underperforming its potential for decades, not because the talent was bad, but because the tools encoded the wrong model. Brilliant researchers have been spending their careers doing manual work that the architecture should have automated, applying statistical frameworks that the platform should have encoded, generating insights that decompose the moment the study closes because nothing accumulates.
The question-centric model was a practical adaptation to 1990s constraints. It was never the right epistemology. The person is the unit of research. The methodology is the analytic frame. The intelligence should compound.
OAIRA is a bet that the industry is ready to adopt the architecture that the epistemology always implied — and that the tools now exist to make it practical.
David's advantage is that he spent thirty years understanding exactly why the old architecture produced the frustrations it produced, and exactly what the new one needs to do instead. That's not a theoretical understanding. It's a practitioner's map, drawn from the field.
What Comes Next
The recent posts on the OAIRA blog trace the frontier: crowd simulation for behavioral research, running 500 synthetic agents through an event environment to study emergent group dynamics before any physical infrastructure is committed. Biometric signal reading via facial expression and voice analysis — asking what the respondent's nervous system is doing alongside what they're reporting consciously. White-label platform infrastructure for research firms and agencies that want the capability without the build cost.
Each of these is an extension of the same argument. The simulation capability that started with synthetic survey respondents can model crowd behavior in physical environments. The voice interview capability that started with text-based depth probing can read emotional signals in real time. The platform infrastructure that started as a single-tenant research tool can become the engine for an entire research ecosystem.
Flexible. Extensible. AI-native in a way that means the capabilities can be composed and recombined as the problems evolve.
For David, this is not a product roadmap conversation. It's the fulfillment of an instinct that formed in the 1990s, in rooms where people were trying to make simulation work with tools that weren't ready, trying to put the person at the center of research with architectures that couldn't quite do it, trying to produce intelligence that compounded with systems that only produced insights.
The tools are ready now. The architecture is right. The practitioner who spent thirty years waiting for this is the one who built it.
OAIRA is an AI-powered market research platform. Read the full series starting at The Taxonomy Was Wrong.