My one-line: Foible AI is how AI calls a human. Not an MTurk-type human — a qualified expert, briefed, trusted, and accountable for the outcome.
The secondary framing I keep returning to: the API for human judgment — the layer that turns AI-generated intent into qualified human outcomes.
My interpretation of what we do. The neutral context-custody layer for the agent↔human boundary. We hold the user's context, carry it across the handoff, and verify the outcome — wherever an AI agent needs a qualified human to finish the job.
The wedge, as I see it. Sell marketplaces on the business upside: each recovered completion starts building end-user relationships and trust, which we can then scale across verticals and use-cases.
My unit of sale. Completed outcomes. Not tokens, not leads, not seats.
Where I think we start vs. where we go. V1 lands in consumer trust-handoff marketplaces — hot buyers, no standards play crowding the slot. The same primitive scales into autonomous agent commerce and vertical B2B specialist work as AI traffic grows.
How I'm drawing the stack: three layers, one neutral middle.
Inside L2.5, my decomposition:
My assumption on the business model: platforms pay on recovered completions. Users free. Agent vendors are a future third lane.
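A toy sketch of what that billing assumption implies: only one event type is billable, and everything else (leads, seats, token usage) prices at zero. The event names and price are illustrative, not from the source.

```python
# Hypothetical billing model: platforms are invoiced only for
# recovered completions; all other events are free.
BILLABLE = {"recovered_completion"}

def platform_invoice(events: list[dict], price_per_completion: float) -> float:
    """Sum charges: a handoff bills only when it ends in a recovered completion."""
    return sum(price_per_completion for e in events if e["type"] in BILLABLE)
```

Under this model the platform's cost scales with outcomes delivered, not with usage, which is the whole contrast with tokens, leads, and seats.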
How I'm segmenting the space:
| | Low-skill unblocker | Qualified specialist |
|---|---|---|
| Collaborative (user in loop) | Cell 1 — 2FA prompts (too thin) | Cell 2 — agent escalates to a qualified human (Upwork, Wyzant) |
| Autonomous (agent end-to-end) | Cell 3 — headless CAPTCHA (Browserbase owns it) | Cell 4 — agent transacts where verified completion matters (ChatGPT→Resy, ChatGPT→Shopify) |
How I've decomposed where value leaks today: