Onboarding Flow — Complete Architecture Reference
Canonical reference for the Meshi onboarding system. Covers both the LinkedIn-based and conversational paths, the kickoff pipeline (Steps 01–07), backend orchestration, async workers, the state machine, and the architectural decisions behind key components.
Truth ranking (per AGENTS.md): the code is the source of truth for behavior. This doc captures the architecture, flow, and rationale. When code and doc disagree, code wins — update this doc and record the rationale inline.
Table of Contents
- Overview
- Two Onboarding Paths
- State Machine
- API Endpoints
- Core Service Functions
- Kickoff Station Pipeline (Steps 01–07)
- LLM Station Scaffold (runStation)
- Cache Cascade — by synthesis_method
- Cohort Selection (Read Station)
- Async Workers
- Database Schema
- Domain Events
- Function-Level Dependency Graph
- Source Map
- Resetting Onboarding (Testing)
- Open Issues / Known Sharp Edges
- Coach Agent and Agent-Runtime Integration
Overview
Onboarding is the process by which a new user creates their Meshi profile. It has two entry points:
- LinkedIn Path — User provides a LinkedIn URL; the system fetches profile data, enriches it, and generates a structured profile through 7 kickoff stations.
- Conversational Path — User skips LinkedIn; the system creates a minimal entity and the coach guides them through goal-setting conversationally.
Both paths converge at the completion gate: a claimed_user_onboarding_state row with
status = 'completed'.
Key Design Decisions
- Entity creation is lazy: the entity + person + auth link rows are created on first save, not on signup.
- Dual-mode auth: kickoff routes support both authenticated users and anonymous preflight callers (via the `X-Meshi-Preflight-Entity-Id` header).
- Cache cascade by `synthesis_method`: LLM station results are cached in `canonical_brief` and invalidated when upstream stations regenerate. Lookups are always by `synthesis_method`, never by exact `run_key` — see Cache Cascade.
Two Onboarding Paths
LinkedIn Path (traditional)
User submits LinkedIn URL → `startOnboarding()` creates/claims entity → `syncLinkedinAnchor()` creates identity_anchor → `createOrReplaceSubmissionRecord()` creates source_record → State → `processing/enrichment` → Enrichment kicks off via `PREFLIGHT_ENRICHMENT_REQUESTED` (typical) or `ONBOARDING_UPDATED` → Enrichment worker fetches LinkedIn profile → Trait inference extracts traits from profile → Brief synthesis generates canonical brief (async after trait inference) → Goal inference proposes goals → State → `review/goals_review` (set by trait inference directly) → User completes kickoff stations (01–07) → `completeOnboarding()` fires `ONBOARDING_COMPLETED`

Enrichment trigger note: `startOnboarding` sets state to `processing/enrichment` but does not itself enqueue an event. In the typical UX flow the user went through preflight first (`preflightOnboarding` enqueues `PREFLIGHT_ENRICHMENT_REQUESTED`), so enrichment is already underway when `startOnboarding` claims the entity. If `updateLinkedinUrl` is called instead (e.g. fixing a bad URL), it enqueues `ONBOARDING_UPDATED`, which also triggers enrichment.
Conversational Path (no LinkedIn)
User starts without LinkedIn URL → `startOnboarding()` creates entity (no LinkedIn anchor) → State → `review/goals_review` (immediately) → Coach guides user through conversation → User sets goals via conversation or kickoff UI → `completeOnboarding()` fires `ONBOARDING_COMPLETED`

Preflight Path (anonymous kickoff)
User enters LinkedIn URL on signup page → `preflightOnboarding()` creates unclaimed entity → Enqueues `PREFLIGHT_ENRICHMENT_REQUESTED` → enrichment begins immediately → Returns entity_id for anonymous kickoff calls → User completes kickoff stations (01–07) anonymously → User signs up / logs in → `startOnboarding()` claims entity via `claimImportedEntity()` → `completeOnboarding()` fires `ONBOARDING_COMPLETED`

State Machine
Statuses
| Status | Description |
|---|---|
| processing | Enrichment or inference in progress |
| action_required | User must fix something (e.g., invalid LinkedIn URL) |
| review | Ready for user review (goals review) |
| completed | Onboarding finished |
Phases
| Phase | Description | Status |
|---|---|---|
| enrichment | LinkedIn profile being fetched/processed | processing |
| inference | Traits/goals being inferred | processing |
| linkedin_invalid | LinkedIn URL is malformed or profile not found | action_required |
| linkedin_disputed | LinkedIn URL already linked to another account | action_required |
| goals_review | User reviewing proposed goals | review |
| null | Onboarding completed | completed |
State Transitions
```
[no state]
  ↓ startOnboarding() with LinkedIn
processing/enrichment
  ↓ enrichment succeeds
processing/inference
  ↓ trait inference completes (sets goals_review directly; brief synthesis continues async)
review/goals_review
  ↓ completeOnboarding()
completed/null

[no state]
  ↓ startOnboarding() without LinkedIn
review/goals_review
  ↓ completeOnboarding()
completed/null

processing/enrichment
  ↓ LinkedIn fetch fails (invalid)
action_required/linkedin_invalid
  ↓ PATCH /linkedin-url (updateLinkedinUrl)
processing/enrichment

processing/enrichment
  ↓ LinkedIn URL disputed
action_required/linkedin_disputed
  ↓ PATCH /linkedin-url (updateLinkedinUrl)
processing/enrichment
```

Timing note: `review/goals_review` is set directly by the trait inference worker after writing trait claims — it does not wait for brief synthesis. Brief synthesis (`brief-synthesis.ts`) runs async after `TRAITS_INFERRED` and emits `BRIEF_SYNTHESIZED`, which then triggers goal inference. Proposed goals may therefore appear on the goals review screen a few seconds after the user lands there.
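The legal status/phase combinations can be written down as a discriminated union — a sketch, using the values from the tables above (the authoritative types live in the DB schema, not here):

```typescript
// Illustrative only — enum values taken from the status/phase tables.
type OnboardingState =
  | { status: "processing"; phase: "enrichment" | "inference" }
  | { status: "action_required"; phase: "linkedin_invalid" | "linkedin_disputed" }
  | { status: "review"; phase: "goals_review" }
  | { status: "completed"; phase: null };

// Completed is the only terminal state; every other status can still transition.
function isTerminal(s: OnboardingState): boolean {
  return s.status === "completed";
}
```

Modeling the pair as a union (rather than two independent enums) makes illegal combinations like `completed/enrichment` unrepresentable.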
API Endpoints
Onboarding Routes (/api/v0/onboarding/)
| Method | Path | Handler | Description |
|---|---|---|---|
| POST | / | startOnboarding | Start onboarding with LinkedIn URL (required) |
| POST | /begin | startOnboarding | Start conversational onboarding (LinkedIn optional). Passing linkedin_url here takes the full LinkedIn path identically to POST /. |
| PATCH | /linkedin-url | updateLinkedinUrl | Replace LinkedIn URL and requeue enrichment. Accepts any non-completed state including action_required. |
| GET | /status | getOnboardingStatus | Get current onboarding status + readiness |
| POST | /complete | completeOnboarding | Mark onboarding as complete. Accepts { force: true } body to skip the readiness gate — used by the kickoff UI for both paths after intent has been collected. |
Kickoff Routes (/api/v0/onboarding/kickoff/)
Mounted with kickoffAuthMiddleware (supports anonymous preflight callers via
X-Meshi-Preflight-Entity-Id header).
Generate Endpoints (LLM-backed reads)
| Method | Path | Service | Description |
|---|---|---|---|
| GET | /portrait/generate | generateKickoffPortrait | Generate AI portrait |
| GET | /capabilities/generate | generateKickoffCapabilities | Generate capability suggestions |
| GET | /goals/generate | generateKickoffGoals | Generate proposed goals |
| GET | /read/generate | generateKickoffRead | Generate peer-read analysis |
| GET | /pact/generate | generateKickoffPact | Generate operating pact |
| GET | /brief/generate | generateKickoffBrief | Generate final brief |
Save Endpoints (per-station persistence)
| Method | Path | Service | Description |
|---|---|---|---|
| POST | /portrait | saveKickoffPortraitReaction | Save user reaction to portrait |
| POST | /capabilities | saveKickoffCapabilities | Save selected capabilities |
| POST | /goals | saveKickoffGoals | Save user goals (no upper bound enforced at API layer) |
| POST | /observations | saveKickoffObservations | Save read-station observations |
| POST | /observation-reactions | saveKickoffObservationReaction | Save reaction to observation |
| POST | /corrections | addKickoffCorrection | Log a correction for stations: portrait, capabilities, goals, read, tools |
| POST | /tools | saveKickoffTools | Save location/phone |
Core Service Functions
onboarding.service.ts
| Function | Purpose | Key Side Effects |
|---|---|---|
| startOnboarding(db, authUserId, authUserEmail, body) | Create/claim entity, begin onboarding | Creates entity, person, auth link, identity anchor, source record; sets state |
| updateLinkedinUrl(db, authUserId, body) | Replace LinkedIn URL | Supersedes source record, syncs anchor, fires ONBOARDING_UPDATED |
| getOnboardingStatus(db, authUserId) | Get current status | Returns step, person, linkedin_url, onboarding state, readiness, goal count |
| completeOnboarding(db, authUserId, opts) | Mark onboarding complete | Sets state to completed, fires ONBOARDING_COMPLETED. opts.force=true skips the readiness gate — used by the kickoff UI for both LinkedIn and conversational paths. Welcome quest created inline in the route handler (not here). |
| preflightOnboarding(db, body) | Anonymous preflight lookup | Creates unclaimed entity for new URLs, enqueues PREFLIGHT_ENRICHMENT_REQUESTED, returns entity_id |
Internal Functions
| Function | Purpose |
|---|---|
| ensureClaimedUser(trx, authUserId, authUserEmail) | Create entity + person + auth link + email anchor + onboarding state |
| claimImportedEntity(trx, authUserId, authUserEmail, existingEntityId) | Claim an existing unclaimed entity (preflight → auth bridge) |
| createOrReplaceSubmissionRecord(trx, authUserPersonLinkId, entityId, linkedinUrl) | Create/supersede onboarding_submission source record |
| syncLinkedinAnchor(trx, entityId, linkedinUrl, normalizedLinkedinUrl, sourceRecordId) | Create/supersede linkedin_url identity anchor, handle disputes |
| enqueueOnboardingUpdateEvent(trx, params) | Enqueue ONBOARDING_UPDATED outbox event |
| entityWasPreviouslyClaimed(db, entityId) | Check if entity has prior onboarding submission |
| clearStalePrimaryAuthClaim(trx, entityId) | Delete stale auth link for dead users |
kickoff.service.ts
| Function | Purpose |
|---|---|
| savePortraitReaction(db, caller, input) | Save portrait reaction (right/partly_right/not_really) |
| saveCapabilities(db, caller, input) | Save selected capability keys |
| saveKickoffGoals(db, caller, input) | Bulk-create user-asserted goals |
| saveObservations(db, caller, observations) | Save read-station observations |
| saveObservationReaction(db, caller, input) | Save reaction to observation (pushback/tell_more) |
| addCorrection(db, caller, input) | Log correction for stations: portrait, capabilities, goals, read, tools |
| saveTools(db, caller, input) | Save location/phone to person row |
readiness.service.ts
| Function | Purpose |
|---|---|
| getProfileReadiness(db, entityId, preloaded?) | Compute profile readiness (blockers, goal readiness stages) |

Returns ProfileReadiness with:
- `isMatchReady`: true when there are no blockers and at least one match-ready goal
- `blockers`: array of `BlockerType` (linkedin_pending, linkedin_disputed, linkedin_invalid, no_confirmed_goals, no_match_ready_goals, extraction_pending, goal_embeddings_pending)
- `goals`: array of `GoalReadiness` with stages (review, extraction, embedding, match)
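The shape and the match-ready rule can be sketched in TypeScript — field names come from the doc, everything else is an assumption (the real types live in `readiness.service.ts`):

```typescript
// Illustrative only — field names from the doc; details are assumptions.
type BlockerType =
  | "linkedin_pending" | "linkedin_disputed" | "linkedin_invalid"
  | "no_confirmed_goals" | "no_match_ready_goals"
  | "extraction_pending" | "goal_embeddings_pending";

interface GoalReadiness {
  goalId: string;
  stages: { review: boolean; extraction: boolean; embedding: boolean; match: boolean };
}

interface ProfileReadiness {
  isMatchReady: boolean;
  blockers: BlockerType[];
  goals: GoalReadiness[];
}

// isMatchReady rule as stated: no blockers AND at least one match-ready goal.
function computeIsMatchReady(blockers: BlockerType[], goals: GoalReadiness[]): boolean {
  return blockers.length === 0 && goals.some((g) => g.stages.match);
}
```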
Kickoff Station Pipeline (Steps 01–07)
Station Overview
| # | Station | synthesis_method | Service File | LLM? | Bespoke? |
|---|---|---|---|---|---|
| 01 | Portrait | kickoff_portrait | portrait.service.ts | Yes | No (runStation) |
| 02 | Capabilities | kickoff_capabilities | capabilities.service.ts (LLM shape classification + deterministic rank) | Yes (shape classification only) | Yes |
| 03 | Goals | kickoff_goals | goals-gen.service.ts | Yes | No (runStation) |
| 04 | Read | kickoff_read | read.service.ts | Yes | No (runStation) |
| 05 | Pact | kickoff_pact | pact.service.ts | Yes | Yes (parallel calls) |
| 06 | Tools | (no LLM) | UI-only | No | N/A |
| 07 | Brief | kickoff_brief | brief.service.ts | Yes | No (runStation) |
UI: packages/web/src/routes/onboarding/ — +page.svelte is the page host,
stations/Station*.svelte is one component per station, parts/ holds shared widgets (skeletons,
banners, chat bubbles).
Station 01: Portrait
Input: Entity’s bio, current role, current company, latest brief, confirmed traits
Output: 80–120 word portrait paragraph with sentence-level confidences
Validation: Word count 80–120, hasNumericAnchor regex (digit or written-out numeral), min sentence + overall confidence floors; drops sentences below 0.6
Minimum-signal floor (assertSufficientSignal): loadDossier refuses to call the LLM
(throws 424) when none of {bio, currentRole, currentCompany, latestBrief, ≥1 confirmed trait} are
present. Name + LinkedIn URL alone don’t count — they say who, not what.
Why: without this guard, the LLM writes 80–120 words from nothing and recasts the absence of data as a positive trait (“deliberate anonymity”, “perfectly blank surface”). The word-count + numeric-anchor validator passes that confabulation through. Better to fail with a clear error.
Numeric anchor — accepts written-out numerals: hasNumericAnchor matches both digit forms
(\b\d+\b) and written-out small integers one–twenty plus common tens (hundred, thousand).
Why: Gemini Pro doing editorial prose routinely writes “Eleven years building product…” instead of “11 years”. The original digit-only regex failed on perfectly-anchored paragraphs and 502’d the station.
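The two-form check can be sketched as follows — a hypothetical reconstruction; the real `hasNumericAnchor` in `portrait.service.ts` may use a different word list:

```typescript
// Hypothetical sketch: digit anchors OR written-out small integers / tens.
const WRITTEN_NUMERALS = [
  "one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "ten",
  "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen", "seventeen",
  "eighteen", "nineteen", "twenty", "thirty", "forty", "fifty", "sixty",
  "seventy", "eighty", "ninety", "hundred", "thousand",
];
const writtenPattern = new RegExp(`\\b(${WRITTEN_NUMERALS.join("|")})\\b`, "i");

function hasNumericAnchorSketch(text: string): boolean {
  if (/\b\d+\b/.test(text)) return true; // digit form: "11 years"
  return writtenPattern.test(text);      // written-out form: "Eleven years"
}
```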
Station 02: Capabilities
Input: Fixed list of capability dimensions + entity dossier (role, bio, traits)
Output: User-selected capability keys
Processing: Single LLM call up front to classify user shape, then deterministic ranking against the capability library. The LLM output drives ranking weights — the ranked list itself is computed deterministically from those weights.
Validation: LLM shape-classification output validated (known shapes only); card count, dimension key whitelist for the ranked output.

Why bespoke: the ranking algorithm against a controlled vocabulary is deterministic enough that forcing it through `runStation` (which assumes a freeform LLM output) would add complexity with no benefit. The shape-classification LLM call is fast (smaller model) and its output is a single known-enum value, not prose.
Station 03: Goals
Input: Portrait + capabilities + any existing goals
Output: Exactly 3 goals with success_state, time_horizon_days
Validation: 3 goals exactly, success_state non-empty, time_horizon_days bounded
Station 04: Read
Input: Goals + trait claims + cohort data (kNN over goal embeddings)
Output: 6 dimensions (from STABLE_DIMENSIONS), 4 findings (clear/thin/distinctive/behind), radar chart data
Validation: Exactly 6 dimensions, 4 findings, headline cites ≥2 dim ids + specific number, findings cite ≥4 distinct dimensions
Cohort Selection: kNN over goal_embedding with embedding_type='needs', target 200 peers — see Cohort Selection for the full algorithm.
Station 05: Pact
Input: Goals + read findings
Output: 4–6 operating patterns covering every locked goal
Validation: 4–6 pattern picks, no duplicates, covers all goals
Special: Bespoke flow with parallel LLM calls + deterministic post-processing
Station 06: Tools
Input: None (UI-only)
Output: Location + phone saved to person row
Processing: No LLM, just scalar persistence. Calendar “Configure” button is a no-op (see Open Issues).
Station 07: Brief
Input: All prior stations’ outputs
Output: 5-section brief covering the 5 AI-driven steps (Portrait, Capabilities, Goals, Read, Pact), each 12–40 words. Station 06 (Tools) produces only scalar data and has no brief section.
Validation: 5 sections, each 12–40 words
Per-Station Validators
Every station has a validate(raw) that returns { ok: true, ... } or
{ ok: false, reason: string }. runStation calls the LLM, validates, retries once with a hint,
and 502s on second failure. No silent acceptance of malformed output.
| Station | Notable rules |
|---|---|
| Portrait | Word count 80–120; hasNumericAnchor regex; min sentence + overall confidence floors; drops sentences below 0.6 |
| Capabilities | LLM shape-classification output validated (known shapes only); card count, dimension key whitelist |
| Goals | 3 goals exactly, success_state non-empty, time_horizon_days bounded |
| Read | Exactly 6 dimensions from STABLE_DIMENSIONS, 4 findings (one per quadrant: clear/thin/distinctive/behind), headline cites ≥2 dim ids + a specific number, findings collectively cite ≥4 distinct dimensions |
| Pact | 4–6 pattern picks covering every locked goal, no duplicate patterns |
| Brief | 5 sections covering the 5 AI-driven stations (01–05; Tools excluded), each 12–40 words |
No-Fallback-on-Error
When an LLM station call fails, the UI never renders demo/placeholder content. The error message
stands alone with a “reload to try again” prompt. Continue/advance buttons gate on !error.
Why: the previous behavior surfaced an error banner and fake content, with copy saying “showing the placeholder so you can continue”. Users couldn’t tell whether the content was theirs or a stub — and it carried fictional details forward into downstream stations and saved corrections.
LLM Station Scaffold (runStation)
Four stations (portrait, goals, read, brief) use the generic runStation() helper. Pact and
capabilities are bespoke.
Source: packages/core/src/llm/run-station.ts
What runStation Handles
- Entity resolution: `resolveKickoffEntity(db, caller)` — supports both auth and preflight callers
- Prerequisite checks: 412 if required upstream briefs missing (lookup by `synthesis_method`, NOT `run_key`)
- Prompt-version stamping: sha256 of system prompt → 8-char prefix
- Upstream-hash composition: folds upstream brief `created_at` into `run_key`
- Advisory locking: `pg_try_advisory_xact_lock` prevents concurrent generation (note: reentrant under Neon’s transaction-mode pooler; correctness falls to the `UNIQUE(run_key)` constraint on `canonical_brief`)
- Cache check: looks up `canonical_brief` by `(entity_id, synthesis_method)`
- LLM call + retry: calls LLM, validates, retries once with hint on failure
- Cache write: stores result in `canonical_brief` with composed `run_key`
What the Consumer Provides
`loadDossier`, `buildUserPrompt`, `validate`, `buildBrief`, `decodeCached`, `buildResult`, plus the `synthesisMethod` / `baseRunKey` / `hardPrerequisiteMethods` / `upstreamMethods` config.
Why four stations use it: they were ~80% identical glue. Pact and capabilities are intentionally NOT in this scaffold — the cost of generalizing to fit them would erase the benefit.
Run Key Shape
`{base}:{entityId}[:p={promptVersionHash8}][:u={upstreamHash8}]`

Example: `kickoff_goals:abc123:p=a1b2c3d4:u=e5f6g7h8`
composeRunKey orders: prompt_version before upstream_hash. Both are 8-char sha256 prefixes. Empty
segments are omitted.
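The composition rule can be sketched as follows — a hypothetical reconstruction of the behavior described above, not the real `composeRunKey`:

```typescript
import { createHash } from "node:crypto";

// Both segments are 8-char sha256 prefixes; empty segments are omitted.
const hash8 = (s: string) => createHash("sha256").update(s).digest("hex").slice(0, 8);

function composeRunKeySketch(
  base: string,
  entityId: string,
  systemPrompt?: string,
  upstreamMaterial?: string,
): string {
  let key = `${base}:${entityId}`;
  if (systemPrompt) key += `:p=${hash8(systemPrompt)}`;         // prompt_version first
  if (upstreamMaterial) key += `:u=${hash8(upstreamMaterial)}`; // then upstream_hash
  return key;
}
```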
Cache Cascade — by synthesis_method
Decision: Prerequisite checks and upstream-hash lookups query canonical_brief by
(entity_id, synthesis_method), never by exact run_key. The composed run_key includes
:p={prompt_version}:u={upstream_hash} suffixes, so a bare run_key like kickoff_goals:{entityId}
will never match a cached row.
Sources: packages/core/src/llm/prerequisites.ts, packages/core/src/llm/upstream-hash.ts
Why this rule exists: an earlier version checked by exact run_key, which worked when run_keys
were bare but silently broke once run_key composition shipped. Station 4 returned 412 for every user
because no row matched kickoff_goals:{entityId} exactly; the cached row was
kickoff_goals:{entityId}:p=...:u=.... The cascade-invalidation path had the same bug —
regenerating goals never busted the read cache because computeUpstreamHash was looking up by
run_key too.
How invalidation works: when an upstream station regenerates, its created_at changes. The
downstream station’s :u= suffix is a hash of upstream (synthesis_method, created_at) values, so
the new run_key no longer matches the cached row — cache miss, fresh generation. A force=false
regen that produces an identical run_key (same prompt version + same upstream hash) serves from
cache with no cascade.
Tests that lock this in:
- `packages/core/src/llm/prerequisites_test.ts` — “matches by synthesis_method even when run_key has suffixes”
- `packages/core/src/llm/upstream-hash_test.ts` — same plus “hash changes when an upstream brief is regenerated”
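Why regeneration cascades can be seen in a small sketch: if the `:u=` suffix is a hash over upstream `(synthesis_method, created_at)` pairs, any upstream regeneration (new `created_at`) produces a new downstream run_key. This assumes that derivation; the exact encoding in `upstream-hash.ts` may differ:

```typescript
import { createHash } from "node:crypto";

// Sketch: hash the upstream (method, created_at) pairs into an 8-char prefix.
function upstreamHashSketch(upstream: { method: string; createdAt: string }[]): string {
  const material = upstream
    .map((b) => `${b.method}:${b.createdAt}`)
    .sort() // order-independent
    .join("|");
  return createHash("sha256").update(material).digest("hex").slice(0, 8);
}
```

Same inputs → same hash (cache hit, no cascade); a regenerated upstream brief → different hash (cache miss, fresh generation).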
Cohort Selection (Read Station)
Source: packages/core/src/services/read-cohort.service.ts
Algorithm
1. `SELECT AVG(embedding)::text` over the user’s `needs` goal embeddings → centroid as text-encoded vector.
2. `SELECT entity_id, embedding <=> $centroid::vector ... ORDER BY ... LIMIT 1000` against the partial HNSW index `idx_goal_embedding_hnsw_needs`, excluding the user themselves.
3. JS-side dedupe by entity_id (keeping closest hit per entity), capped at `COHORT_TARGET = 200`.
Why two queries instead of one CTE: HNSW ANN search uses the index only when the right side of
<=> is a literal/parameter. A subquery centroid forces a sequential scan. Round-tripping the
centroid through ::text lets us bind it as a parameter on the second statement.
Why over-fetch 1000 then dedupe: each peer typically has ~3 goals → ~3 needs embeddings. To get
COHORT_TARGET=200 unique entities we pull enough to absorb that duplication without bloating the
round-trip.
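The JS-side dedupe step can be sketched as follows — row shape and names are illustrative, not the real `read-cohort.service.ts` code:

```typescript
// Sketch: keep the closest hit per entity, order by distance, cap at the target.
const COHORT_TARGET = 200;

function dedupeCohort(hits: { entityId: string; dist: number }[]): string[] {
  const best = new Map<string, number>();
  for (const h of hits) {
    const prev = best.get(h.entityId);
    if (prev === undefined || h.dist < prev) best.set(h.entityId, h.dist);
  }
  return Array.from(best.entries())
    .sort((a, b) => a[1] - b[1]) // closest entities first
    .slice(0, COHORT_TARGET)
    .map(([entityId]) => entityId);
}
```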
Edge Cases
- User has no `needs` embedding yet: returns `n=0`, `low_confidence=true`. The UI banner renders “First-pass read against your background — your peer cohort is still being built.” See Open Issue #3.
- Thin corpus: returns whatever is available; `low_confidence` flips at `n < 30`.
Confidence Tiers (computeCohortConfidenceModifier)
| n range | Modifier | low_confidence |
|---|---|---|
| 0 | 0.7 | true |
| 1–29 | 0.7 | true |
| 30–99 | 0.85 | false |
| 100–199 | 0.95 | false |
| 200+ | 1.0 | false |
The modifier multiplies all dimension confidences in the LLM read so a thin cohort can’t claim measured percentiles.
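The tier table maps to a trivial threshold function — a sketch of the behavior, not the real `computeCohortConfidenceModifier` in `read-cohort.service.ts`:

```typescript
// Sketch of the tiers above: modifier + low_confidence flag by cohort size n.
function cohortConfidenceSketch(n: number): { modifier: number; lowConfidence: boolean } {
  if (n < 30) return { modifier: 0.7, lowConfidence: true };
  if (n < 100) return { modifier: 0.85, lowConfidence: false };
  if (n < 200) return { modifier: 0.95, lowConfidence: false };
  return { modifier: 1.0, lowConfidence: false };
}
```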
Cohort Confidence UI — tiered banner
Source: packages/web/src/routes/onboarding/parts/CohortConfidenceBanner.svelte
| Tier | Copy | Color |
|---|---|---|
| n=0 | “First-pass read against your background — we’re still building your peer cohort. Measured percentiles will appear here as soon as it’s ready.” | amber |
| 1–29 | “Found N close-match peer(s) so far — too few to measure precisely. Read percentiles as directional.” | amber |
| 30–99 | “Compared against N peers. Percentiles are real but treat them as estimates.” | neutral |
| 100–199 | “Compared against N peers — confidence is good, with a small haircut for cohort thinness.” | subtle |
| 200+ | (no banner) | — |
Bonus signals: when filter_broadening_applied: true, renders the broadening steps inline. The full
server-side description and confidence_modifier are available via a details tooltip.
Why tiered: the previous binary `low_confidence` flag fired identical copy whether `n=0` or `n=29`. With `n=0` (no cohort at all), saying “your situation is rare” was actively misleading — there’s no cohort to be rare in.
Why kNN by goals (not by trait similarity)
“Comparable peer” for the read station means someone pursuing similar goals, not someone with
similar background. Two PMs with near-identical resumes pursuing different roles (one CPO, one
ICs-only) should not be in the same cohort. The goal_embedding HNSW index is partial on
embedding_type = 'needs' AND goal_status IN ('proposed','confirmed') — exactly the slice we want.
Deferred (intentional, follow-up work)
- ICP / stage / origin filters: schema slot preserved (`filters_used`); only `goal` is set today. v2 will narrow the candidate pool before kNN.
- Filter broadening on thin pools: `filter_broadening_applied: false` always. v2 retries with relaxed filters when initial kNN returns < 30.
- Per-dimension percentile aggregation against cohort `trait_claims`: v1 asks the LLM to produce percentiles given N. v2 will compute them directly from trait_claim counts.
Radar Chart — data-driven
Source: packages/web/src/routes/onboarding/stations/RadarChart.svelte
Computes polygon geometry from { dimension_names, user_pct, median_pct } props — not hardcoded
points. 6 axes at 60° apart starting at 12 o’clock. Background rings at 25/50/75/100% radius. Cohort
median renders as a dashed polygon (always 50 by construction). Loading state renders an empty
baseline with a pulsing placeholder — no layout shift when real data arrives.
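The axis math described above can be sketched on its own (the real component is `RadarChart.svelte`; this is just the geometry):

```typescript
// 6 axes 60° apart starting at 12 o'clock; each vertex radius is scaled by
// the percentile. Screen coordinates: -90° (i.e. -π/2) points up.
function radarPoints(
  pcts: number[], cx: number, cy: number, r: number,
): [number, number][] {
  return pcts.map((pct, i) => {
    const angle = -Math.PI / 2 + (i * Math.PI) / 3; // 60° steps
    const len = (pct / 100) * r;
    return [cx + len * Math.cos(angle), cy + len * Math.sin(angle)] as [number, number];
  });
}
```

Feeding six 50s produces the dashed cohort-median polygon at half radius.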
Async Workers
Enrichment Worker (enrichment.ts)
Triggered by: ONBOARDING_UPDATED, ONBOARDING_COMPLETED, PERSON_LINKEDIN_READY,
CONNECTION_IMPORTED, CONTACT_IMPORTED, PREFLIGHT_ENRICHMENT_REQUESTED,
LINKEDIN_PROFILE_FETCHED
`LINKEDIN_PROFILE_FETCHED` is the Apify async re-entry event: the pipeline sweep (pipeline-sweep.ts) fires Apify batches for entities at the `enriching` stage; when a poll completes, the worker re-enters via this event carrying the resolved profile. The sweep/Apify path is the primary enrichment scheduling backbone for onboarding entities.
Flow:
- Load identity anchors
- Check for disputed LinkedIn anchor → set `action_required/linkedin_disputed`
- Find active LinkedIn anchor
- Fetch LinkedIn profile (or use prefetched/deferred)
- Create/supersede `linkedin_profile` source record
- Extract timeline records (work + education)
- Materialize chunks (bio + skills)
- Mirror profile picture
- Apply LinkedIn names to person
- Emit `ENRICHMENT_COMPLETED`
Failure handling: On LinkedIn fetch failure → handleEnrichmentFailure() →
requestEvidenceResearchBeforeInference() may request web research
Trait Inference Worker (trait-inference.ts)
Triggered by: ENRICHMENT_COMPLETED, PERSON_UPDATED, EVIDENCE_ENRICHMENT_COMPLETED
Flow:
- Load source records + timeline records
- Layer 1: Chunk source records (deterministic)
- Layer 2: Extract field traits from chunks + timeline (deterministic)
- Layer 3: LLM inference per dimension (chunks + timeline + imported → inferred)
- Deduplicate traits
- Write `trait_claim` rows
- Update pipeline stage
- Advance onboarding state to `review/goals_review` (directly — does not wait for brief synthesis)
- Emit `TRAITS_INFERRED`
Brief Synthesis Worker (brief-synthesis.ts)
Triggered by: TRAITS_INFERRED, TRAIT_CONFIRMED, TRAIT_REJECTED, TRAIT_EDITED,
TRAIT_ASSERTED
Flow:
- Load trait claims + source chunks
- LLM synthesizes canonical brief
- Write `canonical_brief` row
- Emit `BRIEF_SYNTHESIZED`

This worker is the link between trait inference and goal inference. It runs async — the onboarding state is already at `goals_review` when it fires.
Goal Inference Worker (goal-inference.ts)
Triggered by: BRIEF_SYNTHESIZED, PERSON_UPDATED, TRAITS_INFERRED
`TRAITS_INFERRED` is a belt-and-suspenders trigger that closes the DAG gap when `BRIEF_SYNTHESIZED` is suppressed by a run-key cache hit.
Flow:
- Check prerequisites (active traits, canonical brief required)
- Load source chunks + build profile text
- LLM extracts proposed goals
- Write `goal` rows with `origin='inferred'`, `status='proposed'`
- Emit `GOAL_CREATED` per goal
Embedding Workers
Three separate workers handle embeddings post-inference:
| Worker | File | Triggered by | What it does |
|---|---|---|---|
| Brief + trait embeddings | embedding.ts | BRIEF_SYNTHESIZED, TRAITS_INFERRED | Generates embeddings for matchmaking/search |
| Goal needs/offers | goal-needs-offers.ts | GOAL_CREATED, GOAL_CONFIRMED, GOAL_SUPERSEDED | Extracts needs/offers text, writes goal_embedding rows |
| Aggregate embedding | aggregate-embedding.ts | TRAITS_INFERRED, TRAIT_CONFIRMED, TRAIT_REJECTED, TRAIT_EDITED, TRAIT_ASSERTED | Computes entity-level aggregated trait embedding |
Database Schema
claimed_user_onboarding_state
| Column | Type | Description |
|---|---|---|
| id | UUID (PK) | Row primary key |
| auth_user_person_link_id | UUID (UNIQUE FK) | Links to auth_user_person_link — unique, not the PK |
| status | enum | processing, action_required, review, completed |
| phase | enum | enrichment, inference, linkedin_invalid, linkedin_disputed, goals_review, null |
| error_code | text | Error code when action_required |
| error_message | text | Human-readable error message |
| completed_at | timestamp | Set when status = completed; NULL for all other statuses (DB CHECK enforced) |
onboarding_correction
| Column | Type | Description |
|---|---|---|
| id | UUID (PK) | Correction ID |
| entity_id | UUID (FK) | Entity being corrected |
| station | enum | portrait, capabilities, goals, read, tools |
| ai_said | text | What the AI generated |
| user_said | text | What the user said |
| fix | text | How to fix |
| created_at | timestamp | When correction was logged |
Related Tables
- `entity` — Top-level identity row
- `person` — Display name, full name, role, company, bio, location, phone
- `auth_user_person_link` — Links Better Auth users to entities
- `identity_anchor` — LinkedIn URLs, emails (with status: observed, verified, disputed, etc.)
- `source_record` — Onboarding submissions, LinkedIn profiles
- `source_record_entity_link` — Links source records to entities
- `canonical_brief` — Cached LLM station outputs (keyed by synthesis_method)
- `goal` — User-asserted and inferred goals
- `goal_embedding` — Needs/offers embeddings per goal
- `trait_claim` — Extracted and inferred traits
- `kickoff_capability_selection` — User-selected capabilities
- `kickoff_observation` — Read-station observations
- `kickoff_observation_reaction` — User reactions to observations
- `quest` — Welcome quest created on completion
Domain Events
| Event | Emitted By | Payload | Consumers |
|---|---|---|---|
| onboarding.updated | onboarding.service.ts | entityId, status, phase, errorCode | Enrichment worker |
| onboarding.completed | onboarding.service.ts | entityId, personId | Enrichment worker |
| preflight.enrichment.requested | onboarding.service.ts (preflight) | entityId | Enrichment worker |
| enrichment.completed | enrichment.ts | entityId, sourceRecordId | Trait inference worker |
| traits.inferred | trait-inference.ts | entityId, claimIds | Brief synthesis worker, goal inference worker (belt-and-suspenders), embedding worker, aggregate-embedding worker |
| brief.synthesized | brief-synthesis.ts | entityId, briefId | Goal inference worker, embedding worker |
| goal.created | goal-inference.ts | entityId, goalId | Goal needs/offers worker (goal-needs-offers.ts) |
| person.linkedin.ready | Various (pipeline sweep, linkedin-poll) | entityId, sourceRecordId | Enrichment worker |
| linkedin.profile.fetched | linkedin-poll.ts / pipeline sweep | entityId, profile | Enrichment worker (Apify async re-entry) |
Welcome quest — created inline (fire-and-forget) inside the `POST /complete` route handler after `setCompleted`. Not triggered by the `onboarding.completed` event; failures are caught and logged but do not affect the response.
Function-Level Dependency Graph
```
API Layer (packages/api)
  Routes:
    POST  /onboarding/            POST /onboarding/begin
    POST  /onboarding/complete    GET  /onboarding/status
    PATCH /onboarding/linkedin-url
      │
      ▼
  onboarding.service.ts
    startOnboarding()     getOnboardingStatus()     updateLinkedinUrl()
    completeOnboarding()  preflightOnboarding()
      │
      ▼
  kickoff.service.ts
    savePortraitReaction()  saveCapabilities()         saveKickoffGoals()
    saveObservations()      saveObservationReaction()  addCorrection()
    saveTools()
      │
      ▼
  kickoff routes (GET/POST)
    generateKickoffPortrait()  generateKickoffCapabilities()
    generateKickoffGoals()     generateKickoffRead()
    generateKickoffPact()      generateKickoffBrief()
      │
      ▼
Core Layer (packages/core)
  LLM Station Services
    portrait.service.ts  capabilities.service.ts  goals-gen.service.ts
    read.service.ts      pact.service.ts          brief.service.ts
      │
      ▼
  LLM Infrastructure
    run-station.ts  prerequisites.ts   upstream-hash.ts
    cache-lock.ts   prompt-version.ts  structured-call.ts
      │
      ▼
  Supporting Services
    readiness.service.ts  linkedin-preflight.ts  auth-helpers.ts
    read-cohort.service.ts
      │
      ▼
DB Layer (packages/db)
  Repositories:
    claimedUserOnboardingStateRepo  onboardingCorrectionRepo
    authLinkRepo                    identityAnchorRepo
    sourceRecordRepo                sourceLinkRepo
    canonicalBriefRepo              goalRepo
    goalEmbeddingRepo               traitClaimRepo
    kickoffCapabilitySelectionRepo  kickoffObservationRepo
    personRepo                      entityRepo
    outboxRepo                      questRepo
      │
      ▼
Workers (packages/workers)
  enrichment.ts ──▶ trait-inference.ts ──▶ brief-synthesis.ts ──▶ goal-inference.ts
  (runEnrichment)   (runTraitInference)    (synthesizeBrief())    (runGoalInference)

  Triggered by:     Triggered by:          Triggered by:          Triggered by:
  - onboarding.*    - enrichment.          - traits.inferred      - brief.synthesized
  - preflight.*       completed            - trait_confirmed      - traits.inferred
  - connection.*    - person.updated       - trait_rejected       - person.updated
  - contact.*       - evidence_enrichment. - trait_edited
  - linkedin_         completed            - trait_asserted
    profile_fetched
```
Detailed Call Graph
```
startOnboarding(db, authUserId, authUserEmail, body)
├── parseLinkedinUrl(body.linkedin_url)
├── db.transaction()
│   ├── authLinkRepo.getPrimaryEntityForAuthUser()
│   ├── claimedUserOnboardingStateRepo.getOrCreateByAuthUserPersonLinkId()
│   ├── getCurrentSubmissionRecord()
│   ├── ensureClaimedUser() OR claimImportedEntity()
│   │   ├── entityRepo.createEntity()
│   │   ├── personRepo.createPerson()
│   │   ├── authLinkRepo.createLink()
│   │   ├── identityAnchorRepo.createIdentityAnchor()
│   │   └── claimedUserOnboardingStateRepo.createState()
│   ├── createOrReplaceSubmissionRecord()
│   │   ├── sourceRecordRepo.createSourceRecord() OR .supersede()
│   │   └── sourceLinkRepo.createLink()
│   ├── syncLinkedinAnchor()
│   │   ├── identityAnchorRepo.getActiveByNormalizedValue()
│   │   ├── identityAnchorRepo.createIdentityAnchor() OR .supersedeAnchor()
│   │   └── entityWasPreviouslyClaimed()
│   └── claimedUserOnboardingStateRepo.setProcessing() OR .setGoalsReview()
│       (NOTE: no outbox event enqueued here — enrichment begins via prior
│        PREFLIGHT_ENRICHMENT_REQUESTED or subsequent ONBOARDING_UPDATED)
```
```
completeOnboarding(db, authUserId, opts)
├── db.transaction()
│   ├── authLinkRepo.getPrimaryEntityForAuthUser()
│   ├── claimedUserOnboardingStateRepo.getByAuthUserPersonLinkId()
│   ├── getProfileReadiness()  [skipped if opts.force=true]
│   │   ├── claimedUserOnboardingStateRepo.getByEntityId()
│   │   ├── identityAnchorRepo.getByEntityId()
│   │   ├── goalRepo.getGoalsForEntity()
│   │   └── goalEmbeddingRepo.getGoalEmbeddings()
│   ├── claimedUserOnboardingStateRepo.setCompleted()
│   └── outboxRepo.enqueueEvent(ONBOARDING_COMPLETED)
│
[route handler, after transaction]
└── questRepo.createQuest() — "Meet your coach" welcome quest (fire-and-forget)
```
```
preflightOnboarding(db, body)
├── parseLinkedinUrl(body.linkedin_url)
├── identityAnchorRepo.getActiveByNormalizedValue()
├── personRepo.getPersonByEntityId()
├── authLinkRepo.getLivePrimaryAuthForEntity()
├── entityWasPreviouslyClaimed()
├── fetchLinkedInName(url)  // RapidAPI call for new URLs
└── db.transaction()
    ├── entityRepo.createEntity()
    ├── personRepo.createPerson()
    ├── identityAnchorRepo.createIdentityAnchor()
    └── outboxRepo.enqueueEvent(PREFLIGHT_ENRICHMENT_REQUESTED)
```
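The runStation scaffold composes its cache key from the entity, a prompt-version hash, and an upstream hash, so editing a prompt or regenerating an upstream station forces a cache miss downstream. A minimal TypeScript sketch of that composition — the digest truncation and `:`-joined key format here are assumptions, not the real `run-station.ts` / `prompt-version.ts` / `upstream-hash.ts` code:

```typescript
import { createHash } from "node:crypto";

const sha256 = (s: string): string =>
  createHash("sha256").update(s).digest("hex");

// Any edit to the system prompt yields a new version → new run key → cache miss.
export function computePromptVersion(systemPrompt: string): string {
  return sha256(systemPrompt).slice(0, 12);
}

// Hash over the upstream stations' cached run keys: regenerating an upstream
// station changes its run key, which cascades a cache miss to this station.
export function computeUpstreamHash(upstreamRunKeys: string[]): string {
  return sha256(upstreamRunKeys.join("|")).slice(0, 12);
}

export function composeRunKey(
  baseRunKey: string,
  entityId: string,
  promptVersion: string,
  upstreamHash: string,
): string {
  return [baseRunKey, entityId, promptVersion, upstreamHash].join(":");
}
```

The point of the shape: identical inputs always rebuild the same key (cache hit), while any single upstream change rebuilds a different one.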
```
runStation(db, caller, opts, config)
├── resolveKickoffEntity(db, caller)
├── requirePrerequisiteBriefs(db, entityId, hardPrerequisiteMethods)
├── computePromptVersion(systemPrompt)
├── computeUpstreamHash(db, entityId, upstreamMethods)
├── composeRunKey(baseRunKey, entityId, promptVersion, upstreamHash)
├── withCacheLock(db, entityId, async (trx) => {
│   ├── canonicalBriefRepo.getBySynthesisMethod(trx, entityId, synthesisMethod)
│   ├── [cache hit] → decodeCached(brief) → return
│   ├── loadDossier(trx, entityId)
│   ├── buildUserPrompt(dossier)
│   ├── callLlmStructured(systemPrompt, userPrompt)
│   ├── validate(raw)
│   │   └── [fail] → retry with hint → validate again
│   ├── buildBrief(validated)
│   └── canonicalBriefRepo.upsert(trx, { entityId, runKey, synthesisMethod, content, productionMeta })
└── buildResult(brief, validated)
```
Source Map
```
packages/api/src/routes/
├─ onboarding.ts — Core onboarding CRUD routes
├─ kickoff.ts — Kickoff station routes (generate + save)
└─ middleware/
   └─ kickoff-auth.ts — Dual-mode auth for kickoff routes

packages/core/src/services/
├─ onboarding.service.ts — Top-level orchestration + LinkedIn preflight gate
├─ kickoff.service.ts — Per-station save functions
├─ readiness.service.ts — Profile readiness computation
├─ linkedin-preflight.ts — RapidAPI name fetch for preflight
├─ auth-helpers.ts — KickoffCaller resolution
├─ portrait.service.ts — Step 01 (via runStation)
├─ capabilities.service.ts — Step 02 (bespoke: LLM shape classify + deterministic rank)
├─ goals-gen.service.ts — Step 03 (via runStation)
├─ read.service.ts — Step 04 (via runStation)
├─ read-cohort.service.ts — Cohort selection (kNN)
├─ pact.service.ts — Step 05 (bespoke, parallel LLM)
└─ brief.service.ts — Step 07 (via runStation)

packages/core/src/llm/
├─ run-station.ts — Generic station scaffold
├─ prerequisites.ts — synthesis_method-based 412 gate
├─ upstream-hash.ts — synthesis_method-based cascade hash
├─ cache-lock.ts — pg_try_advisory_xact_lock wrapper
├─ prompt-version.ts — sha256 hash of system prompt
└─ structured-call.ts — LLM provider chain + retry

packages/db/src/repositories/
├─ claimed-user-onboarding-state-repo.ts — Onboarding state CRUD
├─ onboarding-correction-repo.ts — Correction log
├─ auth-link-repo.ts — Auth user ↔ entity links
├─ identity-anchor-repo.ts — LinkedIn URLs, emails
├─ source-record-repo.ts — Source records
├─ canonical-brief-repo.ts — Cached LLM outputs
├─ goal-repo.ts — Goals
└─ kickoff-*.ts — Kickoff-specific repos

packages/db/migrations/
├─ 008_claimed_user_onboarding_state.ts — Onboarding state table
├─ 071_onboarding_correction.ts — Correction log table
├─ 070_kickoff_capability_selection.ts — Capability selections
├─ 072_kickoff_observation.ts — Observations + reactions
└─ 080_kickoff_schema_real.ts — Kickoff schema finalization

packages/domain/src/
├─ domain-events.ts — Event type constants + Zod schemas
└─ source-namespaces.ts — onboardingSubmissionKey(), SOURCE_NAMESPACE

packages/workers/src/functions/
├─ enrichment.ts — LinkedIn enrichment worker
├─ trait-inference.ts — Trait extraction + inference
├─ brief-synthesis.ts — Canonical brief from traits (link between inference and goal inference)
├─ goal-inference.ts — Goal extraction from profile
├─ goal-needs-offers.ts — Goal embedding (needs/offers) rows
├─ embedding.ts — Brief + trait embeddings for matchmaking
├─ aggregate-embedding.ts — Entity-level aggregated trait embedding
└─ pipeline-sweep.ts — Enrichment scheduling backbone (Apify batch dispatch)

packages/web/src/routes/onboarding/
├─ +page.svelte — Page host, wires save callbacks
├─ data.ts — composeThinking(), shared types
├─ saves.ts — Per-station save callback shapes
├─ finish/+page.svelte — Post-onboarding redirect
├─ stations/Station*.svelte — One per station (7 files)
├─ stations/RadarChart.svelte — Data-driven 6-axis radar
└─ parts/
   ├─ CohortConfidenceBanner.svelte — Tiered cohort warning
   ├─ TypingBubble.svelte — Between-station thinking bubble
   └─ MeshiMessage.svelte — Chat-bubble container
```
Resetting Onboarding (Testing)
Reset onboarding only (preserve user, login, entity, person):
```sql
DELETE FROM canonical_brief
  WHERE entity_id = $entity AND synthesis_method LIKE 'kickoff_%';
DELETE FROM kickoff_observation_reaction
  WHERE observation_id IN
    (SELECT id FROM kickoff_observation WHERE entity_id = $entity);
DELETE FROM kickoff_observation WHERE entity_id = $entity;
DELETE FROM kickoff_capability_selection WHERE entity_id = $entity;
DELETE FROM onboarding_correction WHERE entity_id = $entity;
DELETE FROM goal_embedding WHERE entity_id = $entity;
DELETE FROM goal WHERE entity_id = $entity;
DELETE FROM claimed_user_onboarding_state
  WHERE auth_user_person_link_id IN
    (SELECT id FROM auth_user_person_link WHERE auth_user_id = $user);
```
Full account wipe:
Delete the user row and let `ON DELETE CASCADE` handle dependent rows. See `memory/episodic/` for the full reverse-FK deletion order covering the ~10 FKs on `entity` that don't cascade.
Open Issues / Known Sharp Edges
- RapidAPI quota: `linkedin-preflight.ts` calls `fresh-linkedin-scraper-api.p.rapidapi.com` (env: `RAPIDAPI_LINKEDIN_KEY` + `RAPIDAPI_LINKEDIN_HOST`) — decoupled from the workers enrichment fetch (`RAPIDAPI_API_KEY` + `RAPIDAPI_HOST`) so onboarding and the pipeline have independent quotas. When quota is hit, signup throws 422 with no graceful degradation — the user is fully blocked. Mitigation sketched: a `MESHI_LINKEDIN_PREFLIGHT_STUB=1` dev flag that returns a placeholder name. Not implemented.
- `kickoff_read_cohort` brief not persisted: the cohort is computed at read time and never stored as its own `canonical_brief`. v2 (per-dimension aggregation) should snapshot the cohort + scored percentiles into a `kickoff_read_cohort` brief alongside `kickoff_read`, so the read can be re-rendered without re-computing the cohort.
- Read fires before goal embeddings finish: `kickoff_goals` saves goals synchronously, but `goal_embedding` rows are written async via `goal-needs-offers.ts`. On a fast user, `kickoff_read` runs before the embedding worker completes → `selectCohort` returns n=0. The result is then cached with `cohort_n=0` — a refresh won't recompute it because the cache hit short-circuits. Two v2 fixes: (a) generate goal embeddings inline in the `kickoff_goals` save (couples a fast UI action to an LLM embedding call), or (b) decouple cohort metadata from cached read content so the cohort is re-derived on every fetch while dimensions/findings stay cached.
- Pact `REGEN_OBSERVATIONS` is canned text: when the user clicks "Push back" or "Tell me more" on a read finding, the regenerated body is a hardcoded string keyed by observation index, not a real LLM regeneration. See `packages/web/src/routes/onboarding/data.ts` (`REGEN_OBSERVATIONS`). The spec calls for a real regeneration with the user's pushback text; not implemented.
- Tools station saves only `location` + `phone`: the calendar "Configure" button is a no-op. Integration wiring is out of scope for the kickoff.
- Conversational-path relaxed readiness: `completeOnboarding` with `force=true` skips the goals_review/readiness gate. The conversational path only requires at least 1 confirmed goal (checked by the route before allowing `force` to proceed).
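The relaxed conversational gate can be sketched as a pure check — the `Goal` shape and function name below are illustrative, not the actual route code:

```typescript
// Illustrative sketch of the conversational-path completion gate.
// The Goal shape is hypothetical; the real check lives in the
// /onboarding/complete route handler.
interface Goal {
  id: string;
  status: "draft" | "confirmed" | "rejected";
}

// force=true bypasses the full readiness gate, but the route still refuses
// completion until at least one goal has been confirmed.
export function canForceComplete(goals: Goal[]): boolean {
  return goals.some((g) => g.status === "confirmed");
}
```

The design choice: the gate is deliberately minimal so conversational users aren't blocked on readiness signals (embeddings, anchors) that the LinkedIn path accumulates automatically.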
Coach Agent and Agent-Runtime Integration
This section documents how the onboarding flow connects to the coach agent system and the
meshi-agent-runtime. See agent-runtime-integration.md for the
full backend mode matrix.
The “Meet your coach” welcome quest
After completeOnboarding() fires ONBOARDING_COMPLETED, the POST /api/v0/onboarding/complete
route handler creates a welcome quest inline (fire-and-forget):
```ts
// route handler, after transaction completes
await questRepo.createQuest(...) // "Meet your coach" quest — type: welcome
```
This quest is the bridge from onboarding to the coach: it appears in
GET /api/v0/coach/quests immediately after completion and surfaces in the UI as the user’s first
task. It is not triggered by the onboarding.completed domain event — it is created inline in
the route handler. A failure to create the quest is caught and logged but does not affect the
completion response.
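The catch-and-log fire-and-forget pattern described above can be sketched like this (helper and parameter names hypothetical; the real code is inline in the route handler):

```typescript
// Illustrative fire-and-forget wrapper: a quest-creation failure is logged
// but never surfaces to the completion response. Names are hypothetical.
export async function createWelcomeQuestSafely(
  createQuest: () => Promise<void>,
  log: (msg: string, err: unknown) => void,
): Promise<void> {
  try {
    await createQuest();
  } catch (err) {
    log("welcome quest creation failed", err);
  }
}
```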
Two coach entry points — different entity requirements
The coach has two distinct call paths that handle pre-onboarding users differently:
Legacy session endpoint (POST /api/v0/coach/session) — uses @meshi/coach package runtime:
- Supports an `allow_onboarding: true` body flag that auto-creates an entity for users who have not yet started onboarding:

  ```ts
  // In coach.ts POST /session handler
  if (body.allow_onboarding) {
    const r = await startOnboarding(db, authUserId, email, {});
    // No linkedinUrl → state immediately → review/goals_review
    entityId = r.body.entity_id;
  }
  ```

  This is the mechanism for conversational onboarding: the user speaks to the coach without first providing a LinkedIn URL. The entity is created in the `review/goals_review` state; the coach then guides the user through setting goals conversationally.
Does not go through
getBackend()— it calls@meshi/coachdirectly and does not useAgentBackendorLocalBackend.
New conversations endpoint (POST /api/v0/coach/conversations/:id/messages) — uses getBackend():
- Requires a pre-existing entity. Returns 403 if the user has no entity:
  ```ts
  entityId = await getEntityForAuth(db, authUserId);
  // throws NotFoundError → 403 if no entity exists
  ```
- Routes through `LocalBackend` or `AgentBackend` depending on `MESHI_RUNTIME_BACKEND`.
- Does not support the `allow_onboarding` flow. Users must complete at least the `startOnboarding` step before using this endpoint.
Conversational onboarding — detailed flow
```
User opens coach with no LinkedIn URL
 → POST /api/v0/coach/session { message: "...", allow_onboarding: true }
 → Route: no entity found, allow_onboarding=true
 → startOnboarding(db, authUserId, email, {})   ← no linkedinUrl
 → Entity created, state → review/goals_review
 → runCoachSession() via @meshi/coach
 → Coach guides user through goal-setting via conversation
 → User sets and confirms at least 1 goal
 → POST /api/v0/onboarding/complete { force: true }
 → completeOnboarding() fires ONBOARDING_COMPLETED
 → "Meet your coach" quest created inline
```
The kickoff station pipeline (Steps 01–07) is optional in the conversational path. The route
accepts force: true to bypass the readiness gate; the only enforced requirement is at least 1
confirmed goal. Kickoff stations may still be surfaced in the UI after goal-setting for users who
want to enrich their profile further.
AgentBackend and onboarding tools
When MESHI_RUNTIME_BACKEND=agent, the coach agent has direct access to the four onboarding tools
(get_onboarding_status, start_onboarding, update_linkedin_url, complete_onboarding) as part
of its 20-tool AGENT_TOOLS array in packages/api/src/agent-backend.ts.
This means a coach conversation running under agent backend can:
- Check the user’s onboarding state at any point (`get_onboarding_status`)
- Help a user fix a bad LinkedIn URL without leaving the chat (`update_linkedin_url`)
- Walk a user through goal confirmation and then complete onboarding in-conversation (`complete_onboarding`)
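For orientation, one of these entries plausibly takes the shape of a standard OpenAI function-tool definition. The sketch below is illustrative only — the real definitions live in the `AGENT_TOOLS` array in `packages/api/src/agent-backend.ts`, and the description text here is invented:

```typescript
// Illustrative OpenAI-style function-tool entry; the real AGENT_TOOLS array
// in packages/api/src/agent-backend.ts is the source of truth.
export const completeOnboardingTool = {
  type: "function" as const,
  function: {
    name: "complete_onboarding",
    description:
      "Mark onboarding complete. Pass force only on the conversational path, " +
      "after at least one goal is confirmed.",
    parameters: {
      type: "object",
      properties: {
        force: {
          type: "boolean",
          description: "Skip the readiness gate (conversational path only).",
        },
      },
      required: [],
    },
  },
};
```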
Under local backend (the default staging/prod configuration), these tools are not available
to the runtime. The platform MCP server (packages/mcp/src/server.ts) does not expose them, so
any mcp_call_tool attempt targeting get_onboarding_status returns a tool-not-found error. The
COACH_SYSTEM_PROMPT lists get_onboarding_status as a first-call tool — this is accurate only
under agent backend.
Post-onboarding coach — new conversations flow
Once onboarding is complete, the user’s coach interactions go through the new conversations system:
```
POST /api/v0/coach/conversations
 → create conversation, returns { conversation }

POST /api/v0/coach/conversations/:id/messages { content: "..." }
 → persist user message
 → open agent_run row
 → getBackend().chat() with COACH_SYSTEM_PROMPT + message history
 → tee SSE stream to browser + accumulate content
 → persist assistant message with tool_events in response_object
 → fire-and-forget: auto-title the conversation if untitled
 → emit meshi-meta events with user_message_id + assistant_message_id
```
The COACH_SYSTEM_PROMPT (defined inline in packages/api/src/routes/coach.ts) instructs the
agent to use mcp_call_tool server="meshi-platform" for platform data. It lists the most commonly
needed tools: get_profile, get_brief, list_goals, list_traits, search_people,
get_onboarding_status.
Backend split: under the `local` backend this prompt is literally accurate — the external runtime executes `mcp_call_tool` via its MCP client. Under the `agent` backend, tools are OpenAI function calls from `AGENT_TOOLS`; the `mcp_call_tool` framing is semantic instruction, not MCP transport. `get_onboarding_status` is only reachable under the `agent` backend — the platform MCP server does not register it, so a `mcp_call_tool` attempt for it under `local` returns "tool not found".
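A minimal sketch of how this backend split might be resolved from the environment — illustrative only; the accepted values and the `local` default are assumptions, and the real factory lives in `packages/api/src/agent-runtime.ts`:

```typescript
// Illustrative backend selection keyed on MESHI_RUNTIME_BACKEND.
// packages/api/src/agent-runtime.ts is the source of truth; the "fly"
// value mirrors the FlyBackend mentioned in the source-files list below.
export type BackendKind = "local" | "agent" | "fly";

export function resolveBackendKind(
  env: Record<string, string | undefined>,
): BackendKind {
  const raw = env.MESHI_RUNTIME_BACKEND;
  if (raw === "agent" || raw === "fly") return raw;
  return "local"; // assumed default (the doc calls local the staging/prod config)
}
```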
Source files
```
packages/api/src/routes/coach.ts    — All coach HTTP routes
packages/api/src/agent-backend.ts   — AgentBackend + AGENT_TOOLS (onboarding tools here)
packages/api/src/agent-runtime.ts   — Backend factory (LocalBackend / FlyBackend / AgentBackend)
packages/mcp/src/server.ts          — Platform MCP tool registrations (no onboarding tools)
```