Streaming & Progress
Long-running tools — bulk exports, multi-page scrapes, slow API calls — will time out before they finish unless they tell the platform they're still alive. I spent a while getting the timeout design right, and progress reporting is what I landed on: your tool says "I'm still working" at each step, the dispatch timeout resets, and the AI agent sees live progress updates. It's simple, but the engineering tradeoffs behind it are worth understanding.
The Timeout Problem
Every tool dispatch has a 30-second timeout. I set it there because most tool calls should be fast — a single API call, a DOM read, maybe a quick calculation. If your tool doesn't respond within 30 seconds, the platform kills it and returns a timeout error. That's the right default for most tools. But for tools that process multiple items or call slow APIs, 30 seconds isn't enough.
Progress reporting extends this: each time your tool reports progress, the 30-second timer resets. A tool that reports progress at least once every 30 seconds can run for up to 5 minutes. I picked 5 minutes as the absolute ceiling because anything longer than that is usually a sign your tool should be broken into smaller steps — or that something has gone wrong and the tool is stuck. Five minutes is generous enough for real bulk operations without letting a broken tool run forever.
Reporting Progress
The handle function receives an optional second argument — a `ToolHandlerContext` with a `reportProgress` method. Here's what it looks like in practice:

```ts
import { defineTool, fetchJSON, ToolError } from '@opentabs-dev/plugin-sdk';
import type { ToolHandlerContext } from '@opentabs-dev/plugin-sdk';
import { z } from 'zod';

export const exportMessages = defineTool({
  name: 'export_messages',
  displayName: 'Export Messages',
  description: 'Export all messages from a channel',
  icon: 'download',
  input: z.object({
    channelId: z.string().describe('Channel ID to export'),
  }),
  output: z.object({
    messages: z.array(z.object({
      text: z.string(),
      author: z.string(),
      timestamp: z.string(),
    })),
    total: z.number(),
  }),
  handle: async (params, context) => {
    const messages: { text: string; author: string; timestamp: string }[] = [];
    let cursor: string | undefined;
    let page = 0;
    do {
      const result = await fetchJSON<{
        messages: { text: string; author: string; timestamp: string }[];
        nextCursor?: string;
        totalPages: number;
      }>(`/api/channels/${params.channelId}/messages?cursor=${cursor ?? ''}`);
      if (!result) throw ToolError.internal('Unexpected empty response');
      messages.push(...result.messages);
      cursor = result.nextCursor;
      page++;
      context?.reportProgress({
        progress: page,
        total: result.totalPages,
        message: `Exported page ${page} of ${result.totalPages}`,
      });
    } while (cursor);
    return { messages, total: messages.length };
  },
});
```

The ToolHandlerContext Interface
```ts
interface ToolHandlerContext {
  reportProgress(opts: ProgressOptions): void;
}

interface ProgressOptions {
  /** Current progress value (e.g., 3 of 10 items processed). Omit for indeterminate progress. */
  progress?: number;
  /** Total expected value (e.g., 10 items total). Omit for indeterminate progress. */
  total?: number;
  /** Optional human-readable message describing the current step. */
  message?: string;
}
```

| Field | Type | Required | Description |
|---|---|---|---|
| `progress` | `number` | No | Current step (e.g., 3 of 10). Omit for indeterminate progress. |
| `total` | `number` | No | Total steps expected (e.g., 10). Omit for indeterminate progress. |
| `message` | `string` | No | Human-readable description of the current step |
How Progress Flows
There are a lot of hops between your tool handler and the AI agent, I know. But each one exists for a reason — the browser's security model forces a specific path through content script boundaries and extension messaging. Here's the full chain:
- Tool handler calls `reportProgress()` in the page context
- Adapter IIFE fires a `CustomEvent` on `document` (MAIN world → ISOLATED world)
- Content script relay forwards via `chrome.runtime.sendMessage`
- Extension background sends a `tool.progress` JSON-RPC notification over WebSocket
- MCP server resets the dispatch timeout and emits `notifications/progress` to the MCP client
- AI agent (e.g., Claude) sees the progress update in real time
The good news: you don't need to think about any of this. Progress notifications are fire-and-forget — if any step in the chain fails, your tool keeps running normally and the result is unaffected. The worst case is a missed progress update, which just means the timeout doesn't reset for that tick.
Timeout Behavior
| Scenario | Timeout |
|---|---|
| Tool with no progress reporting | 30 seconds |
| Tool reporting progress every N seconds (N < 30) | Up to 5 minutes |
| Tool reporting progress, but gap > 30s between reports | Times out at the 30s gap |
| Tool reporting progress for over 5 minutes | Killed at 5 minutes (absolute max) |
The mental model is simple: each `reportProgress` call resets a 30-second timer. Keep reporting at least once every 30 seconds and your tool keeps running — up to the 5-minute ceiling. If you go silent for more than 30 seconds, the platform assumes you're stuck and kills the dispatch.
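The timer itself is easy to picture. Here's a minimal sketch of a resettable inactivity timeout with an absolute ceiling; it illustrates the behavior in the table above, not the platform's actual code:

```ts
// Resettable inactivity timeout with an absolute ceiling.
// Illustrative model of the dispatch timer, not the platform's implementation.
class DispatchTimer {
  private deadline: number;
  private readonly hardStop: number;

  constructor(
    private readonly idleMs = 30_000, // reset window: 30s of silence kills the dispatch
    maxMs = 300_000,                  // absolute ceiling: 5 minutes
    now = Date.now(),
  ) {
    this.deadline = now + idleMs;
    this.hardStop = now + maxMs;
  }

  /** Called on every progress report: push the idle deadline forward. */
  reset(now = Date.now()) {
    this.deadline = now + this.idleMs;
  }

  /** True once either the idle window or the absolute ceiling has passed. */
  expired(now = Date.now()): boolean {
    return now >= Math.min(this.deadline, this.hardStop);
  }
}
```

Note that `reset` can never push the effective deadline past `hardStop`, which is exactly why reporting forever doesn't buy more than 5 minutes.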
When to Use Progress
In practice, most tools don't need progress reporting. If your tool makes one API call and returns, the 30-second timeout is plenty. Here's when it matters:
Use progress when:
- Processing multiple items (pages, records, files) in a loop
- Making multiple API calls sequentially
- Any operation that might take more than a few seconds
Skip progress when:
- The tool is a single fast operation (one API call, one DOM read)
- The total work is unknown upfront and there are no meaningful intermediate steps
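One edge case worth a pattern: a single slow operation (say, one API call that can take a minute) has no loop to hang progress reports on. You can wrap it in a periodic heartbeat so the 30-second timer keeps resetting; `withHeartbeat` here is a hypothetical helper, not part of the SDK:

```ts
// Hypothetical helper: report an indeterminate heartbeat every `intervalMs`
// while a single slow promise is in flight.
async function withHeartbeat<T>(
  work: Promise<T>,
  report: (opts: { message?: string }) => void,
  intervalMs = 10_000,
): Promise<T> {
  const timer = setInterval(
    () => report({ message: 'Still working…' }),
    intervalMs,
  );
  try {
    return await work;
  } finally {
    clearInterval(timer); // always stop the heartbeat, even on failure
  }
}

// Usage inside a handler (assuming `slowApiCall` is your own function):
// const data = await withHeartbeat(slowApiCall(), (o) => context?.reportProgress(o));
```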
The context Parameter is Optional
I made the context parameter optional on purpose. It's always present when your tool runs inside a browser tab — the adapter runtime injects it. But I typed it as optional (`context?`) so your tool handlers work in unit tests without mocking the adapter. You just call your handler directly and `context` is `undefined`:
```ts
handle: async (params, context) => {
  // Always use optional chaining — context is undefined in tests
  context?.reportProgress({ progress: 1, total: 2, message: 'Step 1' });
  const result = await doWork(params);
  context?.reportProgress({ progress: 2, total: 2, message: 'Done' });
  return result;
},
```

Always use `context?.reportProgress()` (optional chaining), never `context.reportProgress()`. This keeps your tool testable outside the adapter runtime.
MCP Client Integration
You generally don't need to think about this part. Progress notifications are only forwarded to the MCP client if the client includes a `progressToken` in the tool call request's `_meta` field. If no `progressToken` is provided, the server still resets the timeout on progress — your tool runs longer — but no notifications are sent to the client.

Most MCP clients (including Claude) send `progressToken` automatically, so this just works.
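Concretely, the wire shapes look like this. They follow the MCP spec's progress utility; the token and argument values here are made up:

```ts
// tools/call request with a progress token in _meta (values are examples).
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'export_messages',
    arguments: { channelId: 'C123' },
    _meta: { progressToken: 'tok-1' },
  },
};

// Each progress report becomes a notification carrying the same token,
// so the client can correlate it with the in-flight request.
const notification = {
  jsonrpc: '2.0',
  method: 'notifications/progress',
  params: {
    progressToken: 'tok-1',
    progress: 3,
    total: 12,
    message: 'Exported page 3 of 12',
  },
};
```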
Next Steps
If you're building a plugin that does bulk work, you'll probably want to pair progress reporting with structured errors — so the AI agent knows both how far you got and what went wrong if something fails mid-way:
- Plugin Development — the full walkthrough if you're starting from scratch
- Error Handling — structured errors that tell the AI agent exactly what happened
- SDK Reference: Tools — complete `defineTool` API reference including all handler options
Last Updated: 10 Mar, 2026