tambo-ai/tambo: Generative UI SDK for React

Build agents that speak your UI

The open-source generative UI toolkit for React. Connect your components—Tambo handles streaming, state management, and MCP.


Start For Free •
Docs •
Discord


Tambo 1.0 is here! Read the announcement: Introducing Tambo: Generative UI for React


Tambo is a React toolkit for building agents that render UI (also known as generative UI).

Register your components with Zod schemas. The agent picks the right one and streams the props so users can interact with them. “Show me sales by region” renders your chart component. “Add a task” updates your task list.

Get started in 5 minutes →


2025-11-07-cheatsheet-demo.mp4


Tambo is a fullstack solution for adding generative UI to your app. You get a React SDK plus a backend that handles conversation state and agent execution.

1. Agent included — Tambo runs the LLM conversation loop for you. Bring your own API key (OpenAI, Anthropic, Gemini, Mistral, or any OpenAI-compatible provider). Works with agent frameworks like LangChain and Mastra, but they’re not required.

2. Streaming infrastructure — Props stream to your components as the LLM generates them. Cancellation, error recovery, and reconnection are handled for you.

3. Tambo Cloud or self-host — Cloud is a hosted backend that manages conversation state and agent orchestration. Self-hosted runs the same backend on your infrastructure via Docker.
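
Because props arrive incrementally while the model is still generating, components should tolerate partial props. A minimal sketch of that idea (the `GraphProps` shape, the helper name, and the defaults are assumptions for illustration, not part of Tambo's API):

```typescript
// Hypothetical props shape for a chart component receiving streamed props.
type GraphProps = {
  data?: { label?: string; value?: number }[];
  type?: "line" | "bar" | "pie";
};

// Fill in defaults and drop half-emitted rows so the component can
// render mid-stream, before the LLM has finished every prop.
function withStreamingDefaults(partial: GraphProps): Required<GraphProps> {
  return {
    data: (partial.data ?? []).filter(
      (d): d is { label: string; value: number } =>
        d.label !== undefined && d.value !== undefined,
    ),
    type: partial.type ?? "line",
  };
}
```

With this guard, the component can re-render on every streamed chunk without special-casing "loading" states.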

Most software is built around a one-size-fits-all mental model. We built Tambo to help developers build software that adapts to users.

npm create tambo-app my-tambo-app
cd my-tambo-app
npx tambo init      # choose cloud or self-hosted
npm run dev

Tambo Cloud is a hosted backend, free to get started, with enough credits to begin building. Self-hosted runs on your own infrastructure.

Check out the pre-built component library for agent and generative UI primitives:


2025-11-07-ui-component-library.mp4


Or fork a template:

Tell the AI which components it can use. Zod schemas define the props. These schemas become LLM tool definitions—the agent calls them like functions and Tambo renders the result.

Render once in response to a message. Charts, summaries, data visualizations.


2025-11-07-generative-form.mp4


const components: TamboComponent[] = [
  {
    name: "Graph",
    description: "Displays data as charts using Recharts library",
    component: Graph,
    propsSchema: z.object({
      data: z.array(z.record(z.any())), // arbitrary data points; replace with your row shape
      type: z.enum(["line", "bar", "pie"]),
    }),
  },
];
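
Conceptually, a registration like this becomes an LLM tool definition. The exact wire format Tambo emits is an assumption here, but a hand-written JSON Schema sketch of what the model sees for the `Graph` component above would look roughly like:

```typescript
// Rough illustration of the tool definition derived from the Zod
// propsSchema above. Tambo generates this automatically; the exact
// field names on the wire are an assumption for illustration.
const graphToolDefinition = {
  name: "Graph",
  description: "Displays data as charts using Recharts library",
  parameters: {
    type: "object",
    properties: {
      data: { type: "array", items: { type: "object" } },
      type: { type: "string", enum: ["line", "bar", "pie"] },
    },
    required: ["data", "type"],
  },
};
```

The agent "calls" this like a function, and Tambo maps the call's arguments back onto your component's props.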

Persist and update as users refine requests. Shopping carts, spreadsheets, task boards.


2025-11-07-db-thing.mp4


const InteractableNote = withInteractable(Note, {
  componentName: "Note",
  description: "A note supporting title, content, and color modifications",
  propsSchema: z.object({
    title: z.string(),
    content: z.string(),
    color: z.enum(["white", "yellow", "blue", "green"]).optional(),
  }),
});

Docs: generative components, interactable components

Wrap your app with TamboProvider. You must provide either userKey or userToken to identify the thread owner.

<TamboProvider
  apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
  userKey={currentUserId}
  components={components}
>
  <Chat />
  <InteractableNote id="note-1" title="My Note" content="Start writing..." />
</TamboProvider>

Use userKey for server-side or trusted environments. Use userToken (OAuth access token) for client-side apps where the token contains the user identity. See User Authentication for details.

Docs: provider options

useTambo() is the primary hook — it gives you messages, streaming state, and thread management. useTamboThreadInput() handles user input and message submission.

const { messages, isStreaming } = useTambo();
const { value, setValue, submit, isPending } = useTamboThreadInput();

Docs: threads and messages, streaming status, full tutorial
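
A sketch of how `messages` and `isStreaming` might drive a transcript. The minimal `ChatMessage` shape below is an assumption; Tambo's real message type is richer (it can carry rendered components, tool calls, and so on):

```typescript
// Assumed minimal message shape for illustration.
type ChatMessage = {
  role: "user" | "assistant";
  content: string;
};

// Decide what the transcript shows while a response streams in:
// every finished message, plus a typing indicator at the end.
function transcriptLines(messages: ChatMessage[], isStreaming: boolean): string[] {
  const lines = messages.map((m) => `${m.role}: ${m.content}`);
  if (isStreaming) lines.push("assistant is typing…");
  return lines;
}
```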

Connect to Linear, Slack, databases, or your own MCP servers. Tambo supports the full MCP protocol: tools, prompts, elicitations, and sampling.

import { MCPTransport } from "@tambo-ai/react/mcp";

const mcpServers = [
  {
    name: "filesystem",
    url: "http://localhost:8261/mcp",
    transport: MCPTransport.HTTP,
  },
];

<TamboProvider
  apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
  userKey={currentUserId}
  components={components}
  mcpServers={mcpServers}
>
  <App />
</TamboProvider>;

2025-11-07-elicitations.mp4


Docs: MCP integration

Sometimes you need functions that run in the browser. DOM manipulation, authenticated fetches, accessing React state. Define them as tools and the AI can call them.

const tools: TamboTool[] = [
  {
    name: "getWeather",
    description: "Fetches weather for a location",
    tool: async (params: { location: string }) =>
      fetch(`/api/weather?q=${encodeURIComponent(params.location)}`).then((r) =>
        r.json(),
      ),
    inputSchema: z.object({
      location: z.string(),
    }),
    outputSchema: z.object({
      temperature: z.number(),
      condition: z.string(),
      location: z.string(),
    }),
  },
];

<TamboProvider
  apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
  userKey={currentUserId}
  tools={tools}
  components={components}
>
  <App />
</TamboProvider>;

Docs: local tools

Context, Auth, and Suggestions

Additional context lets you pass metadata to give the AI better responses. User state, app settings, current page. User authentication passes tokens from your auth provider. Suggestions generates prompts users can click based on what they’re doing.

<TamboProvider
  apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY!}
  userToken={userToken}
  contextHelpers={{
    selectedItems: () => ({
      key: "selectedItems",
      value: selectedItems.map((i) => i.name).join(", "),
    }),
    currentPage: () => ({ key: "page", value: window.location.pathname }),
  }}
/>
const { suggestions, accept } = useTamboSuggestions({ maxSuggestions: 3 });

suggestions.map((s) => (
  <button key={s.id} onClick={() => accept(s)}>
    {s.title}
  </button>
));

Docs: additional context, user authentication, suggestions

OpenAI, Anthropic, Cerebras, Google Gemini, Mistral, and any OpenAI-compatible provider. Full list. Missing one? Let us know.

| Feature | Tambo | Vercel AI SDK | CopilotKit | Assistant UI |
| --- | --- | --- | --- | --- |
| Component selection | AI decides which components to render | Manual tool-to-component mapping | Via agent frameworks (LangGraph) | Chat-focused tool UI |
| MCP integration | Built-in | Experimental (v4.2+) | Recently added | Requires AI SDK v5 |
| Persistent stateful components | Yes | No | Shared state patterns | No |
| Client-side tool execution | Declarative, automatic | Manual via `onToolCall` | Agent-side only | No |
| Self-hostable | MIT (SDK + backend) | Apache 2.0 (SDK only) | MIT | MIT |
| Hosted option | Tambo Cloud | No | CopilotKit Cloud | Assistant Cloud |
| Best for | Full app UI control | Streaming and tool abstractions | Multi-agent workflows | Chat interfaces |

Join the Discord to chat with other developers and the core team.

Interested in contributing? Read the Contributing Guide.

Join the conversation on Twitter and follow @tambo_ai.

MIT unless otherwise noted. Some workspaces (like apps/api) are Apache-2.0.



For AI/LLM agents: docs.tambo.co/llms.txt
