Meshwork: Organizational Knowledge as Infrastructure
#worksona #portfolio #knowledge-graph #mcp #ai-native #knowledge-management
David Olsson

Every organization generates knowledge continuously: in standups, strategy calls, design reviews, and shared documents. Most of that knowledge goes nowhere useful. It sits in transcripts no one re-reads, PDFs buried in Drive folders, and notes that don't survive the quarter. Meshwork exists to change that.
What It Is
Meshwork is a headless-first knowledge graph service built on Next.js with a Neon PostgreSQL backend. It ingests meetings (via Recall.ai recording bots), documents (PDF, DOCX, XLSX, CSV, JSON, Markdown), and pasted text through a unified extraction pipeline. Claude identifies entities (20 types, including people, projects, decisions, tools, concepts, and risks) and the relationships between them. OpenAI generates vector embeddings for semantic search. Everything lands in a structured, deduplicated graph with full provenance back to source passages.
The web UI handles setup and visualization. The product is the MCP server.
Why It Matters
The standard answer to "knowledge management" is a searchable archive of flat text. Meshwork takes a different position: structured, relational knowledge is categorically more useful than indexed documents. When an AI assistant can ask "what decisions have been made about the API migration, and who was involved in those discussions," the answer requires entities, relationships, and citations, not keyword matching over raw text.
Two design decisions make this work in practice. First, the extraction pipeline runs automatically. There are no schemas to define, no tagging workflows, no human curation step. Every meeting and document feeds the graph without any manual intervention. Second, entities are globally deduplicated across all sources. "Sarah" mentioned in 50 meetings is one node with 50 citations, not 50 copies, which means the graph grows richer with every source rather than noisier.
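A minimal sketch of how global deduplication might behave, using hypothetical entity and mention shapes rather than Meshwork's actual schema: keying entities by normalized name and type means a repeat mention increments a counter and appends a citation instead of creating a new node.

```typescript
// Hypothetical shapes for illustration; not the real Meshwork data model.
type Mention = { sourceId: string; passage: string };
type EntityNode = { name: string; type: string; mentionCount: number; citations: Mention[] };

// One global store shared by every source.
const nodes = new Map<string, EntityNode>();

// Key entities by normalized name + type so "Sarah" across 50 meetings
// resolves to a single node with 50 citations.
function upsertEntity(name: string, type: string, mention: Mention): EntityNode {
  const key = `${type}:${name.trim().toLowerCase()}`;
  const existing = nodes.get(key);
  if (existing) {
    existing.mentionCount += 1;      // same entity seen again: count it
    existing.citations.push(mention); // keep provenance back to the passage
    return existing;
  }
  const created: EntityNode = { name, type, mentionCount: 1, citations: [mention] };
  nodes.set(key, created);
  return created;
}
```

The key takeaway is that ingestion is an upsert, never a blind insert: each new source can only enrich an existing node.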
The shared-node multi-graph architecture extends this further. Named graphs such as "Engineering," "Board," and "Research" organize context without duplicating knowledge. Deleting a graph removes its organizational grouping; the underlying entities and their full history persist.
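That persistence guarantee can be sketched by storing graph membership separately from the entities themselves; the structures below are illustrative, not the real schema.

```typescript
// Entities live in one global store; named graphs hold only membership references.
const entities = new Map<string, { name: string }>();
const graphMembership = new Map<string, Set<string>>(); // graph name -> entity keys

function addToGraph(graph: string, entityKey: string): void {
  if (!graphMembership.has(graph)) graphMembership.set(graph, new Set());
  graphMembership.get(graph)!.add(entityKey);
}

// Deleting a graph drops the grouping only; the entities themselves persist.
function deleteGraph(graph: string): void {
  graphMembership.delete(graph);
}
```

Because deletion touches only the membership table, removing "Engineering" can never destroy knowledge that "Board" or "Research" also references.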
How It Works
```mermaid
flowchart TD
A[Recall.ai Bot] -->|webhook| B[Transcript Processor]
C[Document Upload] -->|pdf/docx/xlsx| B
D[Text Paste / MCP import] --> B
B --> E[Chunker\n~400 tokens, speaker-aware]
E --> F[Claude Extraction\nentities + relations]
F --> G[OpenAI Embeddings\n1536d vectors]
G --> H[Upsert Pipeline\ndedup + mention count + citations]
H --> I[(Neon PostgreSQL\n+ pgvector)]
I --> J[MCP Server\n13 tools + 3 resources]
I --> K[REST API v1\nscope-based auth]
I --> L[Web UI\ndashboard + 3D graph viewer]
J --> M[Claude Desktop / Claude Code / Cursor]
K --> N[External Integrations]
```
The pipeline runs in six steps per source: chunking with speaker attribution preserved, Claude entity and relation extraction, OpenAI embedding of chunks and node descriptors, upsert with conflict resolution (existing entities get incremented mention counts and merged descriptions), citation linking from chunk to entity, and graph membership assignment.
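The chunking step can be sketched as follows. The code is illustrative: it approximates tokens by word count (a real implementation would use a tokenizer) and closes chunks only at speaker-turn boundaries so attribution survives.

```typescript
type Turn = { speaker: string; text: string };

// Group transcript turns into ~maxTokens chunks without splitting a speaker turn.
function chunkTranscript(turns: Turn[], maxTokens = 400): string[] {
  const chunks: string[] = [];
  let current: string[] = [];
  let budget = 0;
  for (const turn of turns) {
    const line = `${turn.speaker}: ${turn.text}`;
    const cost = line.split(/\s+/).length; // crude stand-in for a real token count
    if (budget + cost > maxTokens && current.length > 0) {
      // Close the chunk at a turn boundary so attribution stays intact.
      chunks.push(current.join("\n"));
      current = [];
      budget = 0;
    }
    current.push(line);
    budget += cost;
  }
  if (current.length > 0) chunks.push(current.join("\n"));
  return chunks;
}
```

Downstream steps then see coherent, attributed passages, which is what lets extracted entities carry citations back to who said what.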
Both PostgreSQL RPCs โ match_chunks and match_nodes โ support hybrid search: vector similarity plus 1-hop graph expansion to surface structurally related entities that don't appear in the direct text matches.
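A toy version of the hybrid-matching idea, assuming in-memory embeddings and an edge list rather than the actual match_nodes RPC: rank nodes by vector similarity, then expand one hop along graph edges to pull in structurally related nodes that the vectors alone would miss.

```typescript
type Edge = { from: string; to: string };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i]; }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function hybridMatch(
  query: number[],
  embeddings: Map<string, number[]>, // node id -> embedding
  edges: Edge[],
  k = 5,
): Set<string> {
  // Step 1: top-k nodes by vector similarity.
  const seeds = [...embeddings.entries()]
    .map(([id, vec]) => ({ id, score: cosine(query, vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((r) => r.id);
  // Step 2: expand exactly one hop from the seed set.
  const seedSet = new Set(seeds);
  const result = new Set(seeds);
  for (const e of edges) {
    if (seedSet.has(e.from)) result.add(e.to);
    if (seedSet.has(e.to)) result.add(e.from);
  }
  return result;
}
```

In the real system this runs inside PostgreSQL (pgvector for step 1, a join over the relation table for step 2), but the shape of the result is the same: direct matches plus their immediate neighbors.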
The MCP server exposes 13 tools including search, explore_graph, send_bot, create_graph, and get_entity, plus three ambient resources for account overview and top entities. Authentication uses bearer tokens with scope-based access control. Setup for an IDE user is a URL and a bearer token.
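As a hedged sketch of that setup, a client would attach the bearer token to each request. The /api/v1/search path and request shape below are assumptions for illustration, not documented endpoints.

```typescript
// Build the headers a scoped bearer token requires.
function authHeaders(token: string): Record<string, string> {
  return { Authorization: `Bearer ${token}`, "Content-Type": "application/json" };
}

// Hypothetical usage: query the graph over the REST API.
async function searchGraph(baseUrl: string, token: string, query: string): Promise<unknown> {
  const res = await fetch(`${baseUrl}/api/v1/search`, {
    method: "POST",
    headers: authHeaders(token),
    body: JSON.stringify({ query }),
  });
  if (!res.ok) throw new Error(`search failed: ${res.status}`);
  return res.json();
}
```

The same two pieces of information (base URL and token) are all an MCP client configuration needs, which is why IDE setup stays a one-step affair.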
Where It Fits in Worksona
Meshwork occupies the knowledge infrastructure layer of the portfolio. It is not a productivity application; it is a backend that other tools and AI assistants consume. Its value is realized through the MCP interface, not through the web UI.
Within the Worksona ecosystem, Meshwork provides the persistent organizational memory that AI workflows need to be context-aware. Any AI assistant connected via MCP inherits access to everything the organization knows (every decision, every entity, every source passage) without requiring the assistant itself to hold or recall that history.
The headless-first design is a deliberate architectural stance: the primary interface is the AI assistant, and the web UI is a control plane. This is infrastructure for the AI-native era of knowledge work.
Live: meshwork.vercel.app