---
name: asmi-avatar-implementation
description: How to implement an ASMI (Avatar State Machine Interface) avatar inside a third-party website using the avatar's deployed MCP blueprint — states, expressions, animations, awareness, personality, tone of voice — with the host site's own LLM provider powering runtime chat.
model: inherit
license: MIT
version: 4
---

# ASMI Avatar Implementation — skill for coding AIs

> **TL;DR — do NOT hand-roll the UI, do NOT hand-write the definition.**
>
> - **UI:** import `<AsmiAvatar>` from `@avatar-state-machine-interface/react`.
>   Do not re-implement the widget in your own component. The package wires
>   `onTrace` so the face updates mid-turn (neutral → attentive → thinking
>   → smiling). Hand-rolled widgets drop this wiring and end up with a face
>   that never moves.
> - **Definition:** fetch via the ASMI MCP `get_avatar` tool (NOT
>   `get_avatar_markdown` — that's human-readable Markdown, not runnable).
>   Save the `definition` field verbatim. Do NOT rename fields (the
>   runtime reads `definition.companyContext`, not `definition.company`)
>   and do NOT hand-write the `stateMachine` section.
> - **Pre-flight:** call `validateAvatarDefinition(def)` from
>   `@avatar-state-machine-interface/runtime` at integration time to fail
>   loudly on shape mistakes instead of surfacing a minified
>   `Cannot read properties of undefined` at the first user message.
>   As of runtime 0.1.7 the validator catches the four most common
>   integrator mistakes with pointed messages: `definition.company` typo
>   (use `companyContext`), `services` written as a string instead of
>   `string[]`, missing `siteContext` block, and missing
>   `siteContext.metaDescription`. The embedding guide
>   (`get_embedding_guide` MCP tool) carries the full required-fields
>   table and a paste-ready minimum definition skeleton.
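If you want a zero-dependency smoke test before the packages are installed, here is a sketch that mirrors the four checks quoted above. It is NOT the runtime's validator (always run `validateAvatarDefinition(def)` as well), and the nesting of `services` under `companyContext` is an assumption made for illustration:

```typescript
// Dependency-free pre-flight sketch mirroring the four common integrator
// mistakes listed above. Not a replacement for validateAvatarDefinition()
// from @avatar-state-machine-interface/runtime. The assumption that
// `services` lives under companyContext is illustrative.
export function preflightCheck(def: Record<string, unknown>): string[] {
  const errors: string[] = [];
  if ("company" in def && !("companyContext" in def)) {
    errors.push('definition.company is a typo; use "companyContext"');
  }
  const ctx = def.companyContext as { services?: unknown } | undefined;
  if (ctx && typeof ctx.services === "string") {
    errors.push("services must be string[], not a string");
  }
  const site = def.siteContext as { metaDescription?: unknown } | undefined;
  if (!site) {
    errors.push("missing siteContext block");
  } else if (!site.metaDescription) {
    errors.push("missing siteContext.metaDescription");
  }
  return errors; // empty array = nothing obviously wrong
}
```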

You are reading the public, avatar-agnostic skill for implementing ASMI
avatars. An individual avatar is a JSON `AvatarDefinition` produced by
the ASMI editor at `https://broen.tech/apps/asmi`.

## Architecture

**ASMI is a design-time tool, not a runtime dependency.** When the
avatar's owner deploys, the avatar becomes discoverable via the ASMI
MCP server. Your job — as the implementing coding AI (Claude Code,
Lovable, Cursor, …) — is to:

1. Connect to the ASMI MCP server.
2. Fetch the avatar's full definition + assets + per-avatar embedding
   guide.
3. Implement the avatar **directly in the target site's own codebase**,
   powered by the target site's own LLM provider key.

Once implemented, the avatar runs on the host site with zero ASMI
dependency. If the owner un-deploys later, the implementation keeps
working — it was built against a snapshot, not a live service.

There is **no public REST runtime** served by `https://broen.tech`.
`/api/asmi/chat` exists but is authenticated and design-time only
(editor test-run + owner-allowlisted demo avatars). Do **not** plan any
integration around calling `/api/asmi/chat` from a deployed site.

## Connect to MCP

One MCP URL (`https://broen.tech/api/asmi/mcp`) works everywhere. The
user generates their API key from the **Deployment panel** in the ASMI
editor (click the Deployment button in the editor header after
deploying an avatar).

The server accepts EITHER `Authorization: Bearer <key>` OR
`x-api-key: <key>` — use whichever header your tool's UI exposes.

### Per-client config snippets

**Claude Code CLI**

```bash
claude mcp add --transport http asmi https://broen.tech/api/asmi/mcp \
  --header "Authorization: Bearer <USER_MCP_API_KEY>"
```

**Claude Desktop** — Settings → Connectors → Add Custom Connector. URL:
`https://broen.tech/api/asmi/mcp`. Paste bearer token.

**Cursor** — `~/.cursor/mcp.json`:

```json
{
  "mcpServers": {
    "asmi": {
      "url": "https://broen.tech/api/asmi/mcp",
      "headers": { "Authorization": "Bearer <USER_MCP_API_KEY>" }
    }
  }
}
```

**Windsurf (Codeium)** — `~/.codeium/windsurf/mcp_config.json` (note the
key is **`serverUrl`** not `url`):

```json
{
  "mcpServers": {
    "asmi": {
      "serverUrl": "https://broen.tech/api/asmi/mcp",
      "headers": { "Authorization": "Bearer <USER_MCP_API_KEY>" }
    }
  }
}
```

**Zed** — `settings.json` (note the key is **`context_servers`** and
Zed's remote MCP support is spotty; use the `mcp-remote` stdio bridge):

```json
{
  "context_servers": {
    "asmi": {
      "source": "custom",
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://broen.tech/api/asmi/mcp",
               "--header", "Authorization: Bearer <USER_MCP_API_KEY>"]
    }
  }
}
```

**Lovable.dev** — Settings → Connectors → **Personal connectors** →
New MCP server. Server name: `ASMI`. Server URL:
`https://broen.tech/api/asmi/mcp`. Authentication: **Bearer token or
API key** → paste `<USER_MCP_API_KEY>`.

**v0 (Vercel)** — Settings → MCP Connections → Add MCP. URL:
`https://broen.tech/api/asmi/mcp`. Auth: **Bearer Token** (or Custom
Headers → `x-api-key: <USER_MCP_API_KEY>`).

**Claude.ai web Connectors** — Settings → Customize → Connectors → +.
URL: `https://broen.tech/api/asmi/mcp`. Claude.ai's UI is OAuth-first;
until ASMI ships OAuth 2.1 (planned), use Claude Desktop or Claude Code
for this URL.

**ChatGPT Developer Mode** — Settings → Apps → Advanced → Developer
mode → Create connector. MCP URL + Bearer token.

**Replit Agent** — Integrations → MCP Servers → Add. URL + Bearer.

**Cline (VS Code)** — `cline_mcp_settings.json`:

```json
{
  "mcpServers": {
    "asmi": {
      "url": "https://broen.tech/api/asmi/mcp",
      "headers": { "Authorization": "Bearer <USER_MCP_API_KEY>" },
      "disabled": false
    }
  }
}
```

## MCP tool index

- `list_avatars` — list the user's **deployed** avatars (drafts excluded
  by default; pass `status: "draft"` to include them).
- `get_avatar` — full `AvatarDefinition` JSON: states, transitions,
  actions, expressions, triggers, LLM config, awareness, brand voice.
- `get_state_machine` / `get_company_context` / `get_brand_voice` —
  slice tools returning individual definition sections; useful when a
  client truncates large `get_avatar` responses (see "Fetching the
  definition" below).
- `get_avatar_markdown` — the same definition in human-readable
  Markdown, good for embedding in your coding-agent's context.
- `get_avatar_assets` — expression image URLs + animation metadata
  (frame count, hold/transition durations, triggers, license).
- `get_embedding_guide` — a per-avatar, step-by-step implementation
  recipe tuned to the target framework (React/Vanilla/etc.).
- `get_runtime_docs` / `get_backend_example` — package docs for
  `@avatar-state-machine-interface/runtime` (state machine) and
  `@avatar-state-machine-interface/react` (drop-in React hook +
  components), which you install on the host site.

## The face is the product (non-negotiable visual rules)

The whole value of an ASMI avatar is a **face that reacts in real
time** — smiling when it welcomes a visitor, thinking while it
answers, concerned when they're frustrated. If you hide the face in a
48 px chat-bubble thumbnail, you have built a chatbot that happens to
have an avatar icon, not an ASMI avatar. Don't.

### Sizes (lower bounds — go bigger if the layout allows)

| Context                                  | Minimum face  | Recommended  |
|------------------------------------------|---------------|--------------|
| Collapsed / corner widget                | **96 px**     | 112–128 px   |
| Expanded chat panel — face region        | **240 px**    | 320–400 px   |
| Inline hero / landing-page feature       | **360 px**    | 480–640 px   |

The expanded chat panel **must have a dedicated face region at the top
of the panel**, not a shrunken header avatar. Users need to see the
expression change while they read the response.

### Anti-patterns

- ❌ **48×48 header thumbnail.** Treats the avatar like a Slack icon.
  The facial expression IS the product.
- ❌ **Only using `neutral`.** A static face defeats the point. Your
  UI must subscribe to `result.newState.expression` on every
  `processMessage` call and swap `<img src>` accordingly.
- ❌ **Rolling your own chat logic.** You can't beat the state machine
  the avatar designer tuned. Use
  `@avatar-state-machine-interface/react` (below) — it owns every
  correctness-critical piece: live mid-turn expression swaps, animation
  playback with all four trigger types, idle auto-return, transparent
  face region split from the chat shell. Don't re-implement these
  against the runtime directly.

---

## Install + use (React)

For React sites the drop-in path is two packages:

```bash
npm install @avatar-state-machine-interface/react \
            @avatar-state-machine-interface/runtime
```

The React package encapsulates the correctness-critical UI surface:
live mid-turn expression swaps via the runtime's trace hook, animation
playback with all four trigger types (`timer`, `mouse-enter`,
`click`, `response-received`), idle auto-return, and the
transparent face region split from the chat shell. The runtime
package handles state-machine evaluation, intent classification,
brand-voice compilation, and awareness.

### Fetching the definition (read this first if MCP responses get truncated)

Call the MCP `get_avatar` tool. The response leads with `_help` and
`definitionUrl` fields that survive client-side truncation. If your client
truncated the `definition` block (~10 KB cap is common; real definitions are
15–30 KB), recover via:

```bash
curl -s -H "x-api-key: $ASMI_MCP_API_KEY" \
  https://broen.tech/api/asmi/avatars/<avatarId>/public-definition \
  | jq '.definition' > asmi-definition.json
```

If HTTP isn't reachable (firewalled sandbox), call `get_state_machine`,
`get_company_context`, `get_brand_voice` and merge with the metadata fields
from `get_avatar`. Each slice tool is small enough to fit any cap and includes
inline `_shapeNote` fields confirming key paths.

**Never reconstruct the `stateMachine` from `get_avatar_markdown`.** The
`### Region:` heading there is a label, not a JSON key — the canonical shape
is `stateMachine.states.{regionName}` (no `regions` wrapper).

### Turnkey widget — `<AsmiAvatar>`

```tsx
"use client";
import { AsmiAvatar, type LlmProvider } from "@avatar-state-machine-interface/react";
import definition from "./asmi-definition.json"; // fetched via MCP (default import: the saved file IS the definition object)

const llmProvider: LlmProvider = {
  async generate({ systemPrompt, userPrompt, history, temperature, maxTokens }) {
    // Call OpenAI / Anthropic / Gemini / whatever your site already uses.
    return (await yourLlmClient.complete({
      system: systemPrompt,
      user: userPrompt,
      history,
      temperature,
      maxTokens,
    })).text;
  },
};

export default function ChatWithAvatar() {
  return <AsmiAvatar definition={definition} llmProvider={llmProvider} debug />;
}
```

Drop `debug` once you've verified the state machine paints the
expected expression trail (devtools will show every state transition,
action, expression emission, and LLM call fired by the runtime).

If the `definition` is malformed, `<AsmiAvatar>` renders a visible red error
card with the failing field path — validation runs synchronously on mount, not
on first message. The legacy "renders fine but does nothing" failure mode is
gone in `@avatar-state-machine-interface/react` v0.2.0+.

### Backend (thin LLM proxy)

The widget runs the state machine in the browser; your backend exists only to
keep the LLM API key off the client. The minimum proxy is a single route:

```typescript
// app/api/avatar/chat/route.ts (Next.js App Router)
import { NextRequest, NextResponse } from "next/server";
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! });

export async function POST(req: NextRequest) {
  const { systemPrompt, userPrompt, history, temperature, maxTokens } =
    await req.json();

  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    temperature,
    max_tokens: maxTokens,
    messages: [
      { role: "system", content: systemPrompt },
      ...history.map((h: { role: string; content: string }) => ({
        role: h.role === "model" ? "assistant" : "user",
        content: h.content,
      })),
      { role: "user", content: userPrompt },
    ],
  });

  return NextResponse.json({
    text: completion.choices[0]?.message?.content ?? "",
  });
}
```

The widget's `LlmProvider` then `fetch`es this route. See the embedding
guide's End-to-End Worked Example at
`https://broen.tech/api/asmi/mcp` (tool: `get_embedding_guide`) for the
complete three-file form (definition + proxy + page) plus Express and FastAPI
backend variants.
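On the client, the matching `LlmProvider` is then a thin `fetch` wrapper around that route. A sketch, with the provider shape mirrored locally for self-containment (import the real `LlmProvider` type from `@avatar-state-machine-interface/react` in an actual integration):

```typescript
// Local mirror of the LlmProvider shape shown in the turnkey example.
// In a real integration, import the type from
// @avatar-state-machine-interface/react instead.
type ChatTurn = { role: string; content: string };
type LlmProvider = {
  generate(args: {
    systemPrompt: string;
    userPrompt: string;
    history: ChatTurn[];
    temperature: number;
    maxTokens: number;
  }): Promise<string>;
};

// Browser-side provider: forwards every field to the proxy route so the
// LLM API key never leaves the server.
export const llmProvider: LlmProvider = {
  async generate(args) {
    const res = await fetch("/api/avatar/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(args),
    });
    if (!res.ok) throw new Error(`avatar proxy failed: ${res.status}`);
    const { text } = (await res.json()) as { text: string };
    return text;
  },
};
```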

### Theming

**v0.1.1+: `<AsmiAvatar>` auto-detects the host's theme on mount.**
It reads `document.body`'s background + text color and scans the
page for a primary CTA button to derive the accent, then applies
those as inline `--asmi-*` CSS custom properties on its wrapper.
You get a widget that matches the host palette out of the box —
no prop wiring, no CSS needed.

Override explicitly only if:
- Auto-detection picked the wrong accent (e.g. the host has no
  obvious primary-button pattern, or the first CTA happens to be
  a secondary button).
- You want dark-mode switching or brand-spec-exact colors.
- You're composing the primitives directly instead of using
  `<AsmiAvatar>` — the hook `useHostTheme()` returns the same
  detected vars, which you spread onto your wrapper's `style`.

To override: set the `--asmi-*` variables on `:root` or any
ancestor of the widget. CSS specificity means they beat the
auto-detected inline values. Pass `autoTheme={false}` to skip
detection entirely.

#### Step 1: read the host's theme

Inspect the host site to discover:

- The page's background color (e.g. `body` or the main layout wrapper).
- The primary text color (usually set on `body`).
- The brand accent color — the call-to-action button background is
  almost always it.
- The secondary / muted text color.
- The font family (inherited automatically via `--asmi-font: inherit`).

For a Tailwind / design-token-driven host, the values live in
`tailwind.config` or the host's `:root` CSS variables — prefer
those. For a hand-styled site, read computed styles off the main
layout container.

#### Step 2: set the variables

Add a scoped `<style>` or a CSS file loaded alongside the widget:

```css
/* Dark host with an orange accent (e.g. Halloween store) */
:root {
  --asmi-panel-bg: #1d1816;              /* host's body background */
  --asmi-panel-fg: #fef6e7;              /* host's body text color */
  --asmi-panel-muted: rgba(254, 246, 231, 0.65);
  --asmi-panel-border: rgba(254, 246, 231, 0.12);
  --asmi-accent: #ff6a00;                /* host's CTA button color */
  --asmi-accent-contrast: #ffffff;
  --asmi-bubble-bg: rgba(255, 255, 255, 0.06);
  --asmi-input-bg: rgba(255, 255, 255, 0.04);
  --asmi-shadow: 0 20px 60px rgba(0, 0, 0, 0.55);
  --asmi-radius: 16px;
  --asmi-font: inherit;
  --asmi-scroll-thumb: rgba(254, 246, 231, 0.25);
  --asmi-scroll-thumb-hover: rgba(254, 246, 231, 0.45);
}
```

```css
/* Light host with a brand accent (e.g. SaaS marketing page) */
:root {
  --asmi-panel-bg: #ffffff;
  --asmi-panel-fg: #0f172a;
  --asmi-panel-muted: #64748b;
  --asmi-panel-border: rgba(15, 23, 42, 0.1);
  --asmi-accent: #2563eb;                /* host's primary button */
  --asmi-accent-contrast: #ffffff;
  --asmi-bubble-bg: rgba(15, 23, 42, 0.04);
  --asmi-input-bg: transparent;
  --asmi-shadow: 0 20px 60px rgba(15, 23, 42, 0.18);
  --asmi-radius: 12px;
  --asmi-font: inherit;
}
```

The face region itself is always transparent — never give it a
background. The expression images ship on transparent PNGs and must
blend into whatever sits behind the widget.

#### Full variable reference

| Variable | Controls | Fallback |
|---|---|---|
| `--asmi-panel-bg` | chat-panel body background | `#fff` |
| `--asmi-panel-fg` | primary text inside the panel | `#0f172a` |
| `--asmi-panel-muted` | secondary text (timestamps, placeholder) | `#64748b` |
| `--asmi-panel-border` | panel outline + input underline | `rgba(0,0,0,0.1)` |
| `--asmi-accent` | send button + user bubble background | `#D36135` |
| `--asmi-accent-contrast` | text on the accent | `#fff` |
| `--asmi-bubble-bg` | avatar-reply bubble background + badge chips | `rgba(0,0,0,0.04)` |
| `--asmi-input-bg` | text-input background | `transparent` |
| `--asmi-shadow` | panel drop-shadow | `0 20px 60px rgba(0,0,0,0.25)` |
| `--asmi-radius` | panel + bubble corner radius | `16px` |
| `--asmi-font` | font-family for everything inside the panel | `inherit` |
| `--asmi-scroll-thumb` | custom chat-log scrollbar thumb | `rgba(128,128,128,0.3)` |
| `--asmi-scroll-thumb-hover` | scrollbar thumb on hover | `rgba(128,128,128,0.5)` |

#### Verify

After setting the vars, open devtools → Elements → select the chat
panel → computed styles. Confirm `background-color` and the send
button's background match the host's palette (NOT `#fff` and
`#D36135`). If they still match the fallbacks, your CSS isn't
reaching the widget's tree — move the `:root` rule higher or
increase specificity.

### Custom chat UI — `useAsmiSession` + `<AsmiFace>`

For file uploads, voice input, paste handling, or a radically
different layout, drop the turnkey wrapper and compose the
primitives yourself. The hook owns the state machine; the face
primitive owns expression rendering + animation triggers. Everything
else is yours.

```tsx
import { useAsmiSession, AsmiFace } from "@avatar-state-machine-interface/react";

export function CustomAvatar({ definition, llmProvider }) {
  const session = useAsmiSession(definition, llmProvider, { debug: true });

  return (
    <div className="my-layout">
      <AsmiFace
        definition={definition}
        expression={session.state.expression}
        size={320}
        responseTriggerKey={session.history.length}
      />
      <MyChatLog history={session.history} />
      <MyInputWithUploads onSubmit={session.send} disabled={session.sending} />
    </div>
  );
}
```

Non-React stacks: the runtime package alone works with any UI layer —
call `get_runtime_docs` for integration examples. A Vue / Svelte /
Web Component wrapper is on the roadmap but not shipped; until then,
wire the runtime directly.
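As a framework-free illustration of the one rule that matters (swap the face image to whatever expression the state machine returns), here is a sketch assuming you build an expression-name-to-URL map from `get_avatar_assets`; the exact session API lives in `get_runtime_docs`:

```typescript
// Map an emitted expression to its image URL, falling back to neutral so
// the face never goes blank on an unknown expression name. `assets` is
// assumed to be built from get_avatar_assets (expression name ->
// transparent-PNG URL).
export function expressionUrl(
  expression: string,
  assets: Record<string, string>,
): string {
  return assets[expression] ?? assets["neutral"] ?? "";
}

// Wiring sketch (browser): after every processMessage call, run
//   faceImg.src = expressionUrl(result.newState.expression, assets);
// so the UI always tracks the state machine's expression.
```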

### Example `LlmProvider` (OpenAI)

This provider instantiates the OpenAI SDK directly, so it must run
server-side (e.g. inside the proxy route above), never in the browser
where the key would be exposed.

```ts
import OpenAI from "openai";
import type { LlmProvider } from "@avatar-state-machine-interface/react";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export const llmProvider: LlmProvider = {
  async generate({ systemPrompt, userPrompt, history, temperature, maxTokens }) {
    const res = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      temperature,
      max_tokens: maxTokens,
      messages: [
        { role: "system", content: systemPrompt },
        ...history.map((h) => ({
          role: h.role === "model" ? "assistant" as const : "user" as const,
          content: h.content,
        })),
        { role: "user", content: userPrompt },
      ],
    });
    return res.choices[0]?.message?.content ?? "";
  },
};
```

The runtime handles intent classification, state transitions,
expression emission, and brand-voice compilation. Your
`LlmProvider` just turns (systemPrompt, userPrompt, history) into a
text response.

## Invariants to preserve

Once the basics are wired, match these invariants — they're all
encoded in the `AvatarDefinition` already.

### 1. State ↔ expression consistency

Each conversation state has an `entry` action list. The runtime runs
`emit_expression` actions in order on state entry; the last one sets
the expression the UI must show. Entering `answering` runs
`emitThinking` then `generateResponse`, so the face shows
`thinking` until the response lands; the `Exit` action `emitSmiling`
then flips it to `smiling`. Ignore `stateChanges.expression` at your
peril — expression drift is always a client-side bug.

### 2. Intent + sentiment drive guards

Messages are classified into `(intent, sentiment, confidence, topic)`
by `llm_classify`. Transitions branch via guards like
`isAnswerableIntent`, `isFrustratedAnswerable`,
`isLowConfidenceOrUnclear`. Do NOT invent your own branching —
delegate everything to the state machine and render whatever state it
returns. Frustration sentiment forces `concerned`; positive sentiment
nudges `smiling`.

### 3. Awareness context

`awareness.temporal` / `.cultural` / `.businessHours` / `.spatial`
compile into a context-aware system prompt. Pass
`sessionContext.visitorTimezone` (via
`Intl.DateTimeFormat().resolvedOptions().timeZone`) when creating a
session so the runtime knows local time.
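A minimal sketch of resolving and passing the timezone; the `sessionContext` object's shape beyond `visitorTimezone` is up to your integration:

```typescript
// Resolve the visitor's IANA timezone (e.g. "Europe/Copenhagen") in the
// browser and pass it as sessionContext.visitorTimezone when creating
// the session, so temporal/businessHours awareness uses local time.
const visitorTimezone = Intl.DateTimeFormat().resolvedOptions().timeZone;

export const sessionContext = { visitorTimezone };
```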

### 4. Personality / tone of voice

`brandVoice` encodes formality, warmth, detail, technical depth,
playfulness, humor, persona. The editor compiles these into the
avatar's system prompt — **do not modify it downstream**. If the owner
wants a different tone, they adjust sliders in the editor.

### 5. Proactive triggers

`proactiveTriggers` can fire `PROACTIVE_TRIGGER` events on page dwell,
exit intent, or return visit. If you're implementing these, use the
runtime's event dispatcher with the thresholds from the definition.
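For the dwell case, the host-side wiring reduces to a cancellable timer keyed off the definition's threshold. A sketch, assuming a plain seconds value (read the real field names from `definition.proactiveTriggers`):

```typescript
// Schedule a PROACTIVE_TRIGGER dispatch after the configured dwell time.
// The bare dwellSeconds argument is an assumption for illustration; take
// the real threshold from definition.proactiveTriggers and dispatch via
// the runtime's event dispatcher inside `fire`.
export function scheduleDwellTrigger(
  dwellSeconds: number,
  fire: () => void,
): () => void {
  const id = setTimeout(fire, dwellSeconds * 1000);
  return () => clearTimeout(id); // cancel on unmount / navigation
}
```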

### 6. Outbound events

`emit_app_event` actions push through `response.outboundEvents`. Wire
them to your host app's side effects:

- `asmi:handoff` — visitor needs a human. Surface your own CTA.
- `asmi:satisfaction_pulse` — positive ending. Log for NPS / lead
  quality.
- `asmi:offered_contact` — avatar suggested a contact method.
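A plain dispatch table keyed on those three names is enough; the `{ name: string }` event shape below is an assumption to confirm against `get_runtime_docs`:

```typescript
// Dispatch table for the documented outbound events. The { name: string }
// event shape is assumed for illustration.
type OutboundEvent = { name: string };

export function handleOutboundEvents(
  events: OutboundEvent[],
  handlers: Partial<Record<string, () => void>>,
): string[] {
  const handled: string[] = [];
  for (const ev of events) {
    const handler = handlers[ev.name];
    if (handler) {
      handler();
      handled.push(ev.name);
    }
  }
  return handled; // names that matched a host-side handler
}

// Example wiring (handler bodies are your host app's own side effects):
// handleOutboundEvents(response.outboundEvents, {
//   "asmi:handoff": () => showHumanHandoffCta(),
//   "asmi:satisfaction_pulse": () => logNpsSignal(),
//   "asmi:offered_contact": () => trackContactOffer(),
// });
```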

---

## Reference (historical — superseded by the React package)

Prior versions of this skill shipped a ~430-line copy-paste-ready
React component. The same correctness guarantees now live in
`@avatar-state-machine-interface/react` — see the "Install + use"
section above. If you're working in a non-React stack and need the
raw pattern, call `get_runtime_docs` for the current framework-free
reference.

---

## Verification checklist

Minimum bar for calling an integration "done":

1.  Open an incognito tab on the host site.
2.  Avatar face appears within 2 s of page load and is **at least
    96 px** wide in the collapsed widget.
3.  Opening the chat panel shows the face at **at least 240 px** in a
    dedicated region above the chat log (not a 48 px header thumbnail).
4.  Asked "What's your name?", the avatar answers with the name the
    designer set in the editor, NOT an invented one.
5.  Sending a real question yields a brand-voice-shaped answer, not a
    generic chatbot reply.
6.  The face **visibly changes expression** at least once between
    user turns (e.g., neutral → thinking → smiling). If it stays on
    one expression, the UI isn't reading
    `result.newState.expression`.
7.  Clicking Send does NOT navigate the page away — covers the
    form-submit trap. The URL stays the same after sending.
8.  Animation triggers fire where defined (timer loops, hover, click).
9.  **The face region has a transparent background.** The avatar
    blends into the host site's surface — no grey/gradient box
    around the face.
10. **After `idleTimeoutSeconds` of inactivity (default 45 s), the
    face returns to `neutral`.** It does not get stuck on whatever
    expression the last turn produced.
11. **The widget inherits the host site's theme.** Using
    `<AsmiAvatar>` on v0.1.1+ you get this free — the component
    auto-detects the host's body bg + text + accent on mount and
    applies them as inline `--asmi-*` vars. Verify in devtools:
    the chat panel's computed `background-color` should match the
    host's surface and the send button should NOT be
    `rgb(211, 97, 53)` (the fallback `#D36135`) unless the host
    genuinely uses terracotta. If auto-detection misfires (wrong
    accent picked up), override via `--asmi-*` on `:root`.
    Composing the primitives manually? Spread `useHostTheme()`
    onto your own wrapper's `style`.

Anything short of 11-for-11 means the integration is incomplete.

## Things you should never do

- **Never call `/api/asmi/chat` from the host site at runtime.** It's
  authenticated and design-time. Runtime chat is the host site's
  responsibility, powered by the host site's own LLM key.
- **Never embed the owner's MCP API key in the host site.** That key is
  for your coding-AI session during implementation, not runtime.
- **Never override the avatar's response text or system prompt.** If
  the tone is wrong, open the ASMI editor and tune `brandVoice`.
- **Never cache chat responses across users.** Each turn runs through
  the state machine with session-scoped context.

---

Last update: v4 (post-v0.9.81 — mandatory `--asmi-*` theming,
after external Lovable validation showed coding AIs left the
CSS vars untouched and shipped the package's white + terracotta
fallbacks). Always fetch a fresh copy of this file — the URL
is stable: `https://broen.tech/skills/asmi-implementation.md`.
