Conversation
No actionable comments were generated in the recent review. 🎉

📝 Walkthrough

This PR introduces custom event dispatch functionality for tools, enabling them to emit progress updates and intermediate results during execution. The system pipes custom events through an async generator-based execution pathway, streams them via the processor, and surfaces them to clients through a new onCustomEvent callback across all framework integrations (React, Vue, Svelte, Solid).
Sequence Diagram

```mermaid
sequenceDiagram
    participant App as Client Application
    participant TC as Tool Execution<br/>(executeToolCalls)
    participant Proc as StreamProcessor
    participant CC as ChatClient
    participant CB as App Callback<br/>(onCustomEvent)
    App->>TC: Invoke tool with context
    activate TC
    TC->>TC: Poll for pending events
    Note over TC: emitCustomEvent() queues<br/>CustomEvent chunks
    TC->>Proc: Yield CustomEvent chunk
    deactivate TC
    activate Proc
    Proc->>Proc: handleCustomEvent()
    alt System event?
        Proc->>Proc: Early return (tool-input-available,<br/>approval-requested)
    else User custom event
        Proc->>CC: Invoke onCustomEvent
    end
    deactivate Proc
    activate CC
    CC->>CB: Forward event with data<br/>and toolCallId context
    deactivate CC
    activate CB
    CB->>App: Display progress/update UI
    deactivate CB
```
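To make the flow above concrete, here is a rough usage sketch. The exact tool-definition shape, the argument order of `emitCustomEvent`, and the helper names are assumptions — the review only confirms that `emitCustomEvent` lives on the `ToolExecutionContext` passed to a tool's execute function and that `onCustomEvent` receives `(eventName, data, { toolCallId })`:

```ts
// Hypothetical tool that reports progress while it runs.
// emitCustomEvent(eventName, data) is an assumed signature; only its presence on
// the ToolExecutionContext is confirmed by this PR's review comments.
const searchTool = {
  name: 'search',
  async execute(input: { query: string }, context: ToolExecutionContext) {
    context.emitCustomEvent('search-progress', { step: 'fetching', query: input.query })
    const results = await runSearch(input.query) // hypothetical helper
    context.emitCustomEvent('search-progress', { step: 'done', count: results.length })
    return results
  },
}

// Client side: the chat hook surfaces each event with the originating tool call id.
useChat({
  connection,
  onCustomEvent: (eventName, data, { toolCallId }) => {
    if (eventName === 'search-progress') {
      renderProgress(toolCallId, data) // hypothetical UI helper
    }
  },
})
```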
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
View your CI Pipeline Execution ↗ for commit 356097d
☁️ Nx Cloud last updated this comment.
@tanstack/ai
@tanstack/ai-anthropic
@tanstack/ai-client
@tanstack/ai-devtools-core
@tanstack/ai-fal
@tanstack/ai-gemini
@tanstack/ai-grok
@tanstack/ai-ollama
@tanstack/ai-openai
@tanstack/ai-openrouter
@tanstack/ai-preact
@tanstack/ai-react
@tanstack/ai-react-ui
@tanstack/ai-solid
@tanstack/ai-solid-ui
@tanstack/ai-svelte
@tanstack/ai-vue
@tanstack/ai-vue-ui
@tanstack/preact-ai-devtools
@tanstack/react-ai-devtools
@tanstack/solid-ai-devtools
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.claude/settings.local.json (1)
1-19: ⚠️ Potential issue | 🟡 Minor

Add `.claude/settings.local.json` to `.gitignore` and remove it from this commit. This file contains developer-specific machine paths (line 17: `/Users/jherr/projects/tanstack/ai`) and custom permission sets that should not be shared across the team. Following the existing pattern in `.gitignore` for `.env.local` files, `.claude/settings.local.json` should be gitignored locally rather than committed to the repository.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.claude/settings.local.json around lines 1 - 19, The committed file .claude/settings.local.json contains machine-specific paths and permission sets (see permissions.allow and the path "/Users/jherr/projects/tanstack/ai"); add ".claude/settings.local.json" to .gitignore, remove the file from the repo index without deleting the local copy (so it remains on your machine), and update the commit (amend or create a new commit) to remove the file from the repository so those sensitive entries are not tracked or shared.
🧹 Nitpick comments (6)
packages/typescript/ai/src/activities/chat/tools/tool-calls.ts (1)
259-290: Polling-based event flushing works but has a subtle 10ms busy-wait.

The `executeWithEventPolling` approach is functional and correctly handles both success and error cases (rejections propagate through `Promise.race` in subsequent iterations). The 10ms polling interval is a reasonable trade-off for simplicity, though it means ~100 wake-ups/second per executing tool even when no events are emitted.

An alternative would be a push-based notification (e.g., resolve a "signal" promise from within `emitCustomEvent` so the poller wakes immediately), which would eliminate unnecessary wake-ups for long-running tools. Not critical for correctness, but worth considering if tools frequently take many seconds. A sketch of this idea follows the AI-agent prompt below.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai/src/activities/chat/tools/tool-calls.ts` around lines 259 - 290, The current executeWithEventPolling loop uses a 10ms timeout causing busy-wait; change it to a push-based wakeup by introducing a reusable "signal" promise and resolver that emitCustomEvent calls when it pushes into pendingEvents, pass that signal into executeWithEventPolling (or make it accessible alongside pendingEvents), and replace the setTimeout race participant with awaiting the signal (and executionWithFlag) so the loop wakes immediately on new events; ensure the signal is re-created/reset after being consumed so subsequent events continue to trigger the resolver and preserve existing behavior for executionPromise resolution and final flush.
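A minimal sketch of that push-based wakeup, assuming the names used in the review (`pendingEvents`, `executeWithEventPolling`); `flush()` here stands in for draining `pendingEvents` and yielding CUSTOM chunks from the surrounding generator, which is simplified to a plain callback:

```ts
// Sketch: replace the 10ms setTimeout poll with a resettable "signal" promise.
// pendingEvents is assumed to be the queue that emitCustomEvent pushes into.
let wake: (() => void) | null = null
let signal = new Promise<void>((resolve) => (wake = resolve))

function notifyEvent() {
  // Called from emitCustomEvent right after pushing into pendingEvents.
  wake?.()
}

async function executeWithEventPolling<T>(
  execution: Promise<T>,
  flush: () => void, // drains pendingEvents and yields CUSTOM chunks (simplified)
): Promise<T> {
  let done = false
  const executionWithFlag = execution.finally(() => {
    done = true
    wake?.() // also wake the loop when the tool finishes or throws
  })
  while (!done) {
    await Promise.race([executionWithFlag.catch(() => {}), signal])
    // Re-arm the signal so later events trigger another wakeup.
    signal = new Promise<void>((resolve) => (wake = resolve))
    flush()
  }
  flush() // final flush after completion
  return executionWithFlag // rejections still propagate to the caller
}
```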
packages/typescript/ai/tests/custom-events-integration.test.ts (2)

87-104: Test manually simulates the event emission pipeline — consider a note for clarity.

The mock context on Lines 90–101 manually injects `toolCallId` into `data` (via spread `{ ...data, toolCallId: 'tc-1' }`), which means this test validates the processor's forwarding behavior but not the actual `emitCustomEvent` implementation that would run in production. The `(testTool as any).execute` cast is also fragile.

This is acceptable for integration-level coverage, but worth noting that the real `emitCustomEvent` → CUSTOM chunk pipeline is implicitly trusted here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai/tests/custom-events-integration.test.ts` around lines 87 - 104, Add a short clarifying comment above the mockContext block explaining that the test simulates the real emitCustomEvent → CUSTOM chunk pipeline by manually injecting toolCallId and thus validates processor forwarding but not the emitCustomEvent implementation; also tighten the fragile cast by replacing "(testTool as any).execute" with a safer access (e.g., check for testTool.execute via optional chaining or a typed helper) and reference the symbols toolExecuteFunc, testTool, mockContext, emitCustomEvent, and processor.processChunk so reviewers can easily locate and understand the intent of the simulation.
1-4: Import ordering: `zod` should be imported before local imports per ESLint.

Fix

```diff
 import { describe, expect, it, vi } from 'vitest'
+import { z } from 'zod'
 import { toolDefinition } from '../src/activities/chat/tools/tool-definition'
 import { StreamProcessor } from '../src/activities/chat/stream/processor'
-import { z } from 'zod'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai/tests/custom-events-integration.test.ts` around lines 1 - 4, Imports are out of order: import z from 'zod' should appear before local module imports to satisfy ESLint; reorder the import statements so "import { z } from 'zod'" comes before the local imports (toolDefinition and StreamProcessor) in the file containing those symbols to fix the lint error.

packages/typescript/ai/src/activities/chat/stream/processor.ts (1)
864-868: `(chunk.data as any)?.toolCallId` — consider a narrower type assertion.

The `as any` works but loses type safety. A minor improvement would be a type guard or inline check:

```ts
toolCallId:
  chunk.data != null && typeof chunk.data === 'object' && 'toolCallId' in chunk.data
    ? (chunk.data as { toolCallId?: string }).toolCallId
    : undefined,
```

That said, since `chunk.data` is `unknown`, the `as any` approach is pragmatic and the optional chaining provides runtime safety. (A sketch of an `isToolCallData` helper follows the prompt below.)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai/src/activities/chat/stream/processor.ts` around lines 864 - 868, The current use of (chunk.data as any)?.toolCallId in the onCustomEvent call loses type safety; replace it with a narrow runtime type check (or small type-guard helper like isToolCallData) that ensures chunk.data is a non-null object and has a 'toolCallId' property, then cast to { toolCallId?: string } to read and pass toolCallId into this.events.onCustomEvent?.(chunk.name, chunk.data, { toolCallId: ... }); this updates the extraction in the processor (referenced by chunk.data and the onCustomEvent call) without using as any.
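For reference, the `isToolCallData` helper mentioned above could look roughly like this — a sketch only, since the guard's name and placement are the reviewer's suggestion rather than existing code:

```ts
// Narrow an unknown chunk payload to something that may carry a toolCallId.
function isToolCallData(data: unknown): data is { toolCallId?: string } {
  return data != null && typeof data === 'object' && 'toolCallId' in data
}

// Possible usage inside handleCustomEvent (sketch, mirroring the prompt's wording):
// this.events.onCustomEvent?.(chunk.name, chunk.data, {
//   toolCallId: isToolCallData(chunk.data) ? chunk.data.toolCallId : undefined,
// })
```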
packages/typescript/ai-client/tests/chat-client.test.ts (1)

3-9: Import ordering: `createCustomEventChunks` should be sorted alphabetically per ESLint.

Per the static analysis hint, the import should be sorted.

Fix

```diff
 import {
+  createCustomEventChunks,
   createMockConnectionAdapter,
   createTextChunks,
   createThinkingChunks,
   createToolCallChunks,
-  createCustomEventChunks,
 } from './test-utils'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-client/tests/chat-client.test.ts` around lines 3 - 9, The import list in chat-client.test.ts is out of alphabetical order: move createCustomEventChunks so the imported identifiers are sorted alphabetically (e.g., createCustomEventChunks, createTextChunks, createThinkingChunks, createToolCallChunks, createMockConnectionAdapter) to satisfy ESLint; update the import statement that contains createMockConnectionAdapter, createTextChunks, createThinkingChunks, createToolCallChunks, createCustomEventChunks accordingly.

packages/typescript/ai/src/activities/chat/index.ts (1)

1095-1106: `createCustomEventChunk` — minor: `data` parameter could be widened to `unknown`.

The `data` parameter is typed as `Record<string, any>`, but custom events could theoretically carry any data shape. Since this is an internal helper and the callers (tool execution context) always pass records, this is fine for now — just noting for future flexibility. A sketch of the widened signature follows the prompt below.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai/src/activities/chat/index.ts` around lines 1095 - 1106, The createCustomEventChunk helper currently types its data parameter as Record<string, any>; widen this parameter to unknown so the function can accept any payload shape (change the signature of createCustomEventChunk(eventName: string, data: unknown)), and update the returned CustomEvent's data field typing if necessary to accept unknown; ensure any callers (e.g. the tool execution contexts) that rely on specific properties narrow or assert the unknown before use.
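A possible shape for the widened helper — purely illustrative, since the actual chunk fields beyond an event name and a data payload are not shown in this review, and the `'CUSTOM'` tag is an assumption:

```ts
// Sketch only: widen data to unknown so any payload shape is accepted.
// The chunk's type tag and remaining fields are assumptions, not the real shape.
function createCustomEventChunk(eventName: string, data: unknown) {
  return {
    type: 'CUSTOM' as const,
    name: eventName,
    data,
  }
}
```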
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.claude/settings.local.json:
- Line 17: The setting contains a hardcoded absolute path "Bash(git -C
/Users/jherr/projects/tanstack/ai checkout:*)" that leaks a developer-local
environment; update this entry to use a relative/generic path or placeholder
(e.g., "Bash(git -C . checkout:*)" or "Bash(git checkout:*)") or replace the
path with a variable like "${PROJECT_ROOT}" so it no longer references
/Users/jherr/projects/tanstack/ai.
In `@packages/typescript/ai-react/src/use-chat.ts`:
- Line 67: onCustomEvent is being captured at useMemo creation
(optionsRef.current.onCustomEvent) so updates to options.onCustomEvent won't be
used until clientId changes; change the onCustomEvent property in the object
created by useMemo to be a closure that calls
optionsRef.current.onCustomEvent(...) at invocation time (matching how
onFinish/onError are implemented) so it always dereferences the latest callback
while leaving useMemo and its clientId dependency intact.
---
Outside diff comments:
In @.claude/settings.local.json:
- Around line 1-19: The committed file .claude/settings.local.json contains
machine-specific paths and permission sets (see permissions.allow and the path
"/Users/jherr/projects/tanstack/ai"); add ".claude/settings.local.json" to
.gitignore, remove the file from the repo index without deleting the local copy
(so it remains on your machine), and update the commit (amend or create a new
commit) to remove the file from the repository so those sensitive entries are
not tracked or shared.
---
Nitpick comments:
In `@packages/typescript/ai-client/tests/chat-client.test.ts`:
- Around line 3-9: The import list in chat-client.test.ts is out of alphabetical
order: move createCustomEventChunks so the imported identifiers are sorted
alphabetically (e.g., createCustomEventChunks, createTextChunks,
createThinkingChunks, createToolCallChunks, createMockConnectionAdapter) to
satisfy ESLint; update the import statement that contains
createMockConnectionAdapter, createTextChunks, createThinkingChunks,
createToolCallChunks, createCustomEventChunks accordingly.
In `@packages/typescript/ai/src/activities/chat/index.ts`:
- Around line 1095-1106: The createCustomEventChunk helper currently types its
data parameter as Record<string, any>; widen this parameter to unknown so the
function can accept any payload shape (change the signature of
createCustomEventChunk(eventName: string, data: unknown)), and update the
returned CustomEvent's data field typing if necessary to accept unknown; ensure
any callers (e.g. the tool execution contexts) that rely on specific properties
narrow or assert the unknown before use.
In `@packages/typescript/ai/src/activities/chat/stream/processor.ts`:
- Around line 864-868: The current use of (chunk.data as any)?.toolCallId in the
onCustomEvent call loses type safety; replace it with a narrow runtime type
check (or small type-guard helper like isToolCallData) that ensures chunk.data
is a non-null object and has a 'toolCallId' property, then cast to {
toolCallId?: string } to read and pass toolCallId into
this.events.onCustomEvent?.(chunk.name, chunk.data, { toolCallId: ... }); this
updates the extraction in the processor (referenced by chunk.data and the
onCustomEvent call) without using as any.
In `@packages/typescript/ai/src/activities/chat/tools/tool-calls.ts`:
- Around line 259-290: The current executeWithEventPolling loop uses a 10ms
timeout causing busy-wait; change it to a push-based wakeup by introducing a
reusable "signal" promise and resolver that emitCustomEvent calls when it pushes
into pendingEvents, pass that signal into executeWithEventPolling (or make it
accessible alongside pendingEvents), and replace the setTimeout race participant
with awaiting the signal (and executionWithFlag) so the loop wakes immediately
on new events; ensure the signal is re-created/reset after being consumed so
subsequent events continue to trigger the resolver and preserve existing
behavior for executionPromise resolution and final flush.
In `@packages/typescript/ai/tests/custom-events-integration.test.ts`:
- Around line 87-104: Add a short clarifying comment above the mockContext block
explaining that the test simulates the real emitCustomEvent → CUSTOM chunk
pipeline by manually injecting toolCallId and thus validates processor
forwarding but not the emitCustomEvent implementation; also tighten the fragile
cast by replacing "(testTool as any).execute" with a safer access (e.g., check
for testTool.execute via optional chaining or a typed helper) and reference the
symbols toolExecuteFunc, testTool, mockContext, emitCustomEvent, and
processor.processChunk so reviewers can easily locate and understand the intent
of the simulation.
- Around line 1-4: Imports are out of order: import z from 'zod' should appear
before local module imports to satisfy ESLint; reorder the import statements so
"import { z } from 'zod'" comes before the local imports (toolDefinition and
StreamProcessor) in the file containing those symbols to fix the lint error.
In `packages/typescript/ai-react/src/use-chat.ts`:

```ts
optionsRef.current.onError?.(error)
},
tools: optionsRef.current.tools,
onCustomEvent: optionsRef.current.onCustomEvent,
```
`onCustomEvent` is captured at memo-creation time, unlike `onFinish`/`onError`, which dereference `optionsRef` at call time.

`onFinish` and `onError` (Lines 60–65) are wrapped in closures that read `optionsRef.current` when invoked, so they always use the latest callback. `onCustomEvent` is evaluated once when `useMemo` runs, meaning subsequent changes to `options.onCustomEvent` won't take effect until `clientId` changes.
Suggested fix: wrap in a closure for consistency
```diff
- onCustomEvent: optionsRef.current.onCustomEvent,
+ onCustomEvent: (eventType: string, data: unknown, context: { toolCallId?: string }) => {
+   optionsRef.current.onCustomEvent?.(eventType, data, context)
+ },
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
onCustomEvent: (eventType: string, data: unknown, context: { toolCallId?: string }) => {
  optionsRef.current.onCustomEvent?.(eventType, data, context)
},
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-react/src/use-chat.ts` at line 67, onCustomEvent is
being captured at useMemo creation (optionsRef.current.onCustomEvent) so updates
to options.onCustomEvent won't be used until clientId changes; change the
onCustomEvent property in the object created by useMemo to be a closure that
calls optionsRef.current.onCustomEvent(...) at invocation time (matching how
onFinish/onError are implemented) so it always dereferences the latest callback
while leaving useMemo and its clientId dependency intact.
Actionable comments posted: 1
🧹 Nitpick comments (1)
.changeset/custom-events-dispatch.md (1)
4-7: Consider bumping framework packages to `minor` instead of `patch`.

The framework integrations (`@tanstack/ai-react`, `@tanstack/ai-solid`, `@tanstack/ai-svelte`, `@tanstack/ai-vue`) are receiving new public API surface: the `onCustomEvent` callback in their hook options. While the addition is backward-compatible (optional parameter), semantic versioning typically treats new functionality as a `minor` change rather than a `patch`.

📦 Suggested versioning adjustment
```diff
-'@tanstack/ai-react': patch
-'@tanstack/ai-solid': patch
-'@tanstack/ai-svelte': patch
-'@tanstack/ai-vue': patch
+'@tanstack/ai-react': minor
+'@tanstack/ai-solid': minor
+'@tanstack/ai-svelte': minor
+'@tanstack/ai-vue': minor
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.changeset/custom-events-dispatch.md around lines 4 - 7, The changeset currently marks the framework integration packages as a patch but they introduce a new public API option (onCustomEvent) so update the version bumps for '@tanstack/ai-react', '@tanstack/ai-solid', '@tanstack/ai-svelte', and '@tanstack/ai-vue' from "patch" to "minor" in the .changeset entry to reflect the new optional onCustomEvent hook option being added to their hook options.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.changeset/custom-events-dispatch.md:
- Line 12: The docs incorrectly reference dispatchEvent(); update the text to
use the real API context.emitCustomEvent() and clarify that emitCustomEvent is
called on the ToolExecutionContext passed into a tool's execute function (e.g.,
context.emitCustomEvent({...})), and ensure mentions of the streaming chunk and
client callback refer to the existing `custom_event` stream chunk and the chat
hook `onCustomEvent` so examples, tests and docs match the implementation.
---
Nitpick comments:
In @.changeset/custom-events-dispatch.md:
- Around line 4-7: The changeset currently marks the framework integration
packages as a patch but they introduce a new public API option (onCustomEvent)
so update the version bumps for '@tanstack/ai-react', '@tanstack/ai-solid',
'@tanstack/ai-svelte', and '@tanstack/ai-vue' from "patch" to "minor" in the
.changeset entry to reflect the new optional onCustomEvent hook option being
added to their hook options.
In `.changeset/custom-events-dispatch.md`:

```md
feat: add custom event dispatch support for tools

Tools can now emit custom events during execution via `dispatchEvent()`. Custom events are streamed to clients as `custom_event` stream chunks and surfaced through the client chat hook's `onCustomEvent` callback. This enables tools to send progress updates, intermediate results, or any structured data back to the UI during long-running operations.
```
🧩 Analysis chain
🏁 Script executed:

```sh
cat .changeset/custom-events-dispatch.md
```

Repository: TanStack/ai
Length of output: 626

🏁 Script executed:

```sh
# Search for method definitions and usage patterns
rg -n "emitCustomEvent|dispatchEvent" --type ts --type js -A 3 -B 1
```

Repository: TanStack/ai
Length of output: 5790

🏁 Script executed:

```sh
# Search for ToolExecutionContext interface definition
rg -n "interface ToolExecutionContext|class ToolExecutionContext" -A 20 --type ts
```

Repository: TanStack/ai
Length of output: 1569

🏁 Script executed:

```sh
# Look for any tool execution context type definitions
fd -t f -e ts -e d.ts | xargs grep -l "ToolExecutionContext" | head -5
```

Repository: TanStack/ai
Length of output: 221
Incorrect API name in documentation.
The changeset states tools emit events via dispatchEvent(), but the actual API is context.emitCustomEvent() where context is the ToolExecutionContext passed to the tool's execute function. The implementation, tests, and all usage examples consistently use emitCustomEvent().
Update the documentation to reflect the correct API name:
Suggested fix
```diff
-Tools can now emit custom events during execution via `dispatchEvent()`. Custom events are streamed to clients as `custom_event` stream chunks and surfaced through the client chat hook's `onCustomEvent` callback. This enables tools to send progress updates, intermediate results, or any structured data back to the UI during long-running operations.
+Tools can now emit custom events during execution via `context.emitCustomEvent()` where `context` is the `ToolExecutionContext` passed to the tool's execute function. Custom events are streamed to clients as `custom_event` stream chunks and surfaced through the client chat hook's `onCustomEvent` callback. This enables tools to send progress updates, intermediate results, or any structured data back to the UI during long-running operations.
```
+Tools can now emit custom events during execution via `context.emitCustomEvent()` where `context` is the `ToolExecutionContext` passed to the tool's execute function. Custom events are streamed to clients as `custom_event` stream chunks and surfaced through the client chat hook's `onCustomEvent` callback. This enables tools to send progress updates, intermediate results, or any structured data back to the UI during long-running operations.📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Tools can now emit custom events during execution via `context.emitCustomEvent()` where `context` is the `ToolExecutionContext` passed to the tool's execute function. Custom events are streamed to clients as `custom_event` stream chunks and surfaced through the client chat hook's `onCustomEvent` callback. This enables tools to send progress updates, intermediate results, or any structured data back to the UI during long-running operations.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.changeset/custom-events-dispatch.md at line 12, The docs incorrectly
reference dispatchEvent(); update the text to use the real API
context.emitCustomEvent() and clarify that emitCustomEvent is called on the
ToolExecutionContext passed into a tool's execute function (e.g.,
context.emitCustomEvent({...})), and ensure mentions of the streaming chunk and
client callback refer to the existing `custom_event` stream chunk and the chat
hook `onCustomEvent` so examples, tests and docs match the implementation.
🎯 Changes
Adds a mechanism for tools to dispatch custom events to the client in realtime.
✅ Checklist
`pnpm run test:pr`.

🚀 Release Impact
Summary by CodeRabbit
Tools can now emit custom events during execution, surfaced to clients through the `onCustomEvent` callback, enabling real-time progress tracking and feedback from long-running tool operations.