Memory Vault Has a Dashboard
The CLI works. The MCP server works. The REST API works. But every time I wanted to browse what was actually in my memory database, I'd docker compose exec into the container and run SQL. That's fine for me. It's not fine for anyone else who might want to try this.
Milestone 6 shipped: Memory Vault now has a web dashboard. Four pages, token-gated, baked into the same Docker image as the API. Same docker compose up, open localhost:8000 in a browser, paste your token, and you're in.
What's in it
Four pages, each mapped to an API router:
- Search — hybrid search with space filter, similarity scores, expandable hit content. Submit-on-enter. Similarity clamped at 0% with the raw cosine value in a tooltip, because legitimate negative similarities are confusing to users who aren't reading the math.
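The clamp-for-display idea can be sketched as a small helper — a hypothetical formatter, since the post doesn't show the actual code; it only says the UI floors the percentage at 0% and keeps the raw cosine in a tooltip:

```typescript
// Hypothetical display helper for the Search page (names are assumptions).
// Cosine similarity can legitimately be negative; the UI floors the
// displayed percentage at 0% but keeps the raw value for a tooltip.
interface SimilarityDisplay {
  percent: string; // clamped, user-facing
  tooltip: string; // raw cosine value, for readers who want the math
}

function formatSimilarity(cosine: number): SimilarityDisplay {
  const clamped = Math.max(0, cosine);
  return {
    percent: `${Math.round(clamped * 100)}%`,
    tooltip: `raw cosine: ${cosine.toFixed(4)}`,
  };
}
```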
- Browse — paginated chunk list with space and sort filters (recent / importance), two-step inline delete with 3-second auto-revert. Click Delete → button turns red and says "Confirm?", click again to actually delete. Nothing gets nuked by accident.
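The two-step delete can be modeled as a tiny state machine. This is a sketch, not the dashboard's actual code — time is passed in explicitly (rather than via setTimeout) so the 3-second auto-revert is easy to reason about:

```typescript
// Sketch of the two-step inline delete as a pure state machine.
// First click arms the button ("Confirm?"); a second click within the
// window confirms; otherwise the button auto-reverts.
type DeleteState = { armed: false } | { armed: true; armedAt: number };

const ARM_WINDOW_MS = 3000;

function clickDelete(
  state: DeleteState,
  now: number
): { next: DeleteState; confirmed: boolean } {
  if (state.armed && now - state.armedAt <= ARM_WINDOW_MS) {
    // Second click inside the window: actually delete.
    return { next: { armed: false }, confirmed: true };
  }
  // First click, or a click after the window expired: (re)arm only.
  return { next: { armed: true, armedAt: now }, confirmed: false };
}

// Called on a timer tick in the real UI: disarm once the window elapses.
function tick(state: DeleteState, now: number): DeleteState {
  if (state.armed && now - state.armedAt > ARM_WINDOW_MS) {
    return { armed: false };
  }
  return state;
}
```

The point of the explicit window check in `clickDelete` is that even if the revert timer fires late, a click after 3 seconds re-arms instead of deleting.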
- Ingest — two tabs. Paste text for quick one-off memories with optional speaker and source. Upload file for .md, .txt, and .json drag-and-drop with per-file status dots and a summary banner. Multi-file sequential, not parallel — the API only has a single-file endpoint right now, and I'd rather do the loop client-side than rush a batch endpoint.
- Stats — overview cards (total chunks, space count, embedding model, version), system health, spaces table with bar-chart visualization. Auto-refreshes every 30 seconds.
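The client-side loop for multi-file ingest is simple enough to sketch. This is a hypothetical shape — `uploadOne` stands in for the real single-file ingest call, which the post doesn't show:

```typescript
// Sketch of the sequential multi-file upload loop (hypothetical names).
// Files go up one at a time — deliberately no Promise.all — and each
// result feeds a per-file status dot in the UI.
type FileStatus = "ok" | "error";

async function ingestFiles(
  names: string[],
  uploadOne: (name: string) => Promise<void>
): Promise<Map<string, FileStatus>> {
  const statuses = new Map<string, FileStatus>();
  for (const name of names) {
    try {
      await uploadOne(name); // await inside the loop keeps it sequential
      statuses.set(name, "ok");
    } catch {
      // One failed file doesn't abort the rest; it just gets an error dot.
      statuses.set(name, "error");
    }
  }
  return statuses;
}
```

Because failures are recorded per file rather than thrown, the summary banner can report partial success instead of all-or-nothing.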
The dashboard uses the same bearer token as the API. Create one with docker compose exec app memory-vault token create dashboard, paste it into the token screen, and it's stored in localStorage. On any 401 response, the token gets cleared automatically and the paste screen reappears. No fake auth UI, no custom session layer — same tokens the CLI and API use.
The detour
I was "done" with the ten sub-steps from the plan. Started verifying in the browser and hit the Ingest page. The space dropdown showed default. And only default.
I'd assumed the backend would create spaces on first ingest. It doesn't. The API rejects unknown spaces with a 404, by design — because the MCP server uses the same path, and I don't want Claude creating typo-spaces like defalut or wotk from ambiguous prompts.
So the only way to create a new space was CLI or raw SQL. Not a dashboard feature. I made a detour into the backend: new POST /api/spaces endpoint with a regex-validated name, Pydantic schema, three regression tests, matching memory-vault space create CLI command, and a frontend combobox with a "+ New space…" option. MCP stays read-only for space creation on purpose.
A frontend-only hack would've been quicker, but the right fix needed the backend. Shipping a version where "create a space" worked in the UI but not through the API would've been worse than not shipping it at all.
Technical decisions
Dashboard baked into the same Docker image, not a separate service. Multi-stage build: node:20-slim builds the React bundle, python:3.11-slim copies it into src/api/static/. FastAPI serves it at the root via a StaticFiles mount on /assets plus a SPA catch-all route for any non-API path. One container, one port (8000), one command. No separate frontend deploy, no CORS config in prod.
Strict space name validation, no silent normalization. User types Work — they get an error. User types my space — error. The first version of the frontend lowercased and hyphenated input before validating, which meant Work silently became work and my space silently became my-space. That's a trap. If someone types my space, they probably don't want a space named my-space — they want to know the dashboard doesn't accept spaces. The UI should teach the constraint, not hide it.
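The "teach, don't normalize" check might look like this — a sketch only, since the post doesn't publish the actual regex, just that names are regex-validated and never silently rewritten:

```typescript
// Sketch of strict space-name validation (the exact pattern is an
// assumption). Validate what the user typed; never rewrite it behind
// their back.
const SPACE_NAME = /^[a-z0-9][a-z0-9-]*$/;

function validateSpaceName(
  input: string
): { ok: true } | { ok: false; error: string } {
  if (SPACE_NAME.test(input)) return { ok: true };
  // Deliberately no .toLowerCase() or space-to-hyphen rewriting here:
  // "Work" and "my space" should fail loudly, not silently become
  // "work" and "my-space".
  return {
    ok: false,
    error: `"${input}" is not a valid space name: use lowercase letters, digits, and hyphens`,
  };
}
```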
401 handling as a global reflex. The API client intercepts every response. If it's 401, clear the token from localStorage and reload. The token gate reappears. No per-page error handling for auth, no logout button, no session expiry logic. Revoke a token via CLI, and the dashboard using it auto-clears on its next request.
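The reflex can be sketched as a fetch wrapper. This isn't the dashboard's actual client — storage, reload, and the fetch call are injected here so the logic is self-contained and testable:

```typescript
// Sketch of the global 401 reflex (all names are assumptions).
// Every API call goes through this wrapper; a 401 clears the stored
// token and triggers a reload, which brings the token gate back.
interface TokenStore {
  get(): string | null;
  clear(): void;
}

async function apiFetch(
  path: string,
  store: TokenStore,
  reload: () => void,
  doFetch: (
    path: string,
    init: { headers: Record<string, string> }
  ) => Promise<{ status: number }>
): Promise<{ status: number }> {
  const token = store.get();
  const res = await doFetch(path, {
    headers: token ? { Authorization: `Bearer ${token}` } : {},
  });
  if (res.status === 401) {
    store.clear(); // revoked or invalid token: drop it and show the gate
    reload();
  }
  return res;
}
```

Centralizing this in one wrapper is what makes per-page auth handling unnecessary: no page ever sees a 401, it just gets reloaded into the token gate.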
React 19 + Vite + TanStack Query, Tailwind v3. TanStack for server state (auto-caching, invalidation, background refetch), React state for local form state. No Redux, no Zustand. Tailwind v3 specifically, not v4 — v4 changed the PostCSS setup and I didn't want to debug that on top of everything else.
What's actually running
Pull the repo, docker compose up -d, open http://localhost:8000:
- PostgreSQL 16 + pgvector on port 5432
- Memory Vault API on port 8000
- Interactive docs at http://localhost:8000/docs
- Dashboard at http://localhost:8000 (same port, different path)
- MCP server still available for Claude
78 tests passing — 75 backend tests (integration against a real test database, unit tests for token hashing, rate-limit math, CORS parsing, schema validation) plus 3 new ones for the space-creation endpoint. No frontend unit tests — the dashboard is verified end-to-end in the browser against the real API, which is the integration boundary I actually care about.
I also caught and fixed a bug from M5 while dogfooding the Ingest page. I'd added a speaker field to the ingestion request schema, the CLI passed it through, but the underlying ingest_text() service function didn't accept the parameter and silently dropped it. M5's tests covered the endpoint and the schema but never asserted the value actually landed in the database. I only noticed because I typed my name into the speaker field, searched for it, and no purple speaker tag appeared on any hit. Added the param, updated the INSERT, wrote a regression test. 74 → 75 tests.
What's next
Milestone 7 is the knowledge graph — entity extraction from chunks, relationships between entities, visualization. This is where Memory Vault starts becoming more than "searchable storage" and starts surfacing connections you didn't know were there.
The repo: github.com/MihaiBuilds/memory-vault