Overview
The Playground (Chat Tab) is MCPJam Inspector's interactive testing environment for MCP servers with LLM integration. It enables real-time conversation with AI models while automatically invoking MCP tools, handling elicitations, and streaming responses.
Key Features:
- Multi-provider LLM support (OpenAI, Anthropic, DeepSeek, Google, Ollama)
- Free chat via MCPJam-provided models (powered by MCPJam backend)
- Real-time MCP tool execution with OpenAI Apps SDK compatibility
- MCP-UI rendering for custom interactive components
- Server-Sent Events (SSE) for streaming responses
- Interactive elicitation support (MCP servers requesting user input)
- Multi-server MCP integration with automatic tool routing
Key Files:
- Frontend: `client/src/components/ChatTabV2.tsx` (new resizable layout)
- Legacy: `client/src/components/ChatTab.tsx` (original single-panel layout)
- Backend: `server/routes/mcp/chat.ts`
- Hook: `client/src/hooks/use-chat.ts`
Architecture Overview
UI Layout (ChatTabV2)
The playground uses a resizable split-panel layout powered by `ResizablePanelGroup` (a sketch follows the list below):
- Chat panel (left): Message history, tool execution, input form
- Logger panel (right): JSON-RPC messages from MCP servers
- Resizable divider: Users can adjust panel sizes
- Responsive: Minimum widths prevent UI collapse
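A minimal sketch of this layout, assuming the shadcn/ui-style wrappers exported from `client/src/components/ui/resizable.tsx` (panel sizes are illustrative):

```tsx
import {
  ResizablePanelGroup,
  ResizablePanel,
  ResizableHandle,
} from "@/components/ui/resizable";

export function PlaygroundLayout() {
  return (
    <ResizablePanelGroup direction="horizontal">
      {/* Chat panel: message history, tool execution, input form */}
      <ResizablePanel defaultSize={65} minSize={30}>
        {/* <ChatPanel /> */}
      </ResizablePanel>
      {/* Draggable divider between the two panels */}
      <ResizableHandle withHandle />
      {/* Logger panel: JSON-RPC messages from MCP servers */}
      <ResizablePanel defaultSize={35} minSize={20}>
        {/* <JsonRpcLoggerView /> */}
      </ResizablePanel>
    </ResizablePanelGroup>
  );
}
```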
System Components
Chat Flow: Local vs Backend
The Playground supports two execution paths based on the selected model:
1. Local Execution (User API Keys)
Used when the user selects models requiring their own API keys (OpenAI, Anthropic, DeepSeek, Google, or Ollama).
Flow:
- `server/routes/mcp/chat.ts:207-403` - `createStreamingResponse()`
- `server/utils/chat-helpers.ts` - `createLlmModel()`
- `client/src/hooks/use-chat.ts:181-327` - SSE event handling
2. Backend Execution (Free Models via MCPJam Backend)
Used when the user selects MCPJam-provided models (identified by `isMCPJamProvidedModel()`).
Flow:
- `server/routes/mcp/chat.ts:405-589` - `sendMessagesToBackend()`
- `shared/backend-conversation.ts` - `runBackendConversation()`
- `shared/http-tool-calls.ts` - `executeToolCallsFromMessages()`
Free models (`shared/types.ts:109-118`):
- `meta-llama/llama-3.3-70b-instruct`
- `openai/gpt-oss-120b`
- `x-ai/grok-4-fast`
- `openai/gpt-5-nano`
Free Chat: MCPJam Backend Integration
MCPJam Inspector offers free chat powered by the MCPJam backend at `CONVEX_HTTP_URL`.
How It Works
1. Model Selection: the user picks an MCPJam-provided free model.
- `CONVEX_HTTP_URL` - the MCPJam backend endpoint (required for free chat)
- Authenticated users get access via WorkOS tokens
MCP Integration via MCPClientManager
The Playground uses MCPClientManager to orchestrate MCP server connections and tool execution.
Tool Retrieval
What `getToolsForAiSdk()` does (see `docs/contributing/mcp-client-manager.mdx`; a sketch follows the list below):
- Fetches tools from specified servers (or all if undefined)
- Converts MCP tool schemas to AI SDK format
- Attaches `_serverId` metadata to each tool
- Wires up `tool.execute()` to call `mcpClientManager.executeTool()`
- Caches tool `_meta` fields for OpenAI Apps SDK
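A simplified sketch of that conversion, not the actual `tool-converters.ts` source; the `McpToolDef` shape and the AI SDK v5 `tool()`/`jsonSchema()` API are assumptions:

```typescript
import { jsonSchema, tool, type ToolSet } from "ai";

// Illustrative shape of a tool as returned by an MCP server
interface McpToolDef {
  name: string;
  description?: string;
  inputSchema: Record<string, unknown>; // JSON Schema from the MCP server
  serverId: string;                     // attached as _serverId metadata
}

function toAiSdkTools(
  mcpTools: McpToolDef[],
  executeTool: (serverId: string, name: string, args: unknown) => Promise<unknown>,
): ToolSet {
  const tools: ToolSet = {};
  for (const t of mcpTools) {
    tools[t.name] = tool({
      description: t.description,
      // Pass the MCP JSON Schema through without converting to Zod
      inputSchema: jsonSchema(t.inputSchema as any),
      // Route execution back to the manager that owns the server connection
      execute: (args) => executeTool(t.serverId, t.name, args),
    });
  }
  return tools;
}
```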
Tool Execution Flow
Local Execution (AI SDK):
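A minimal sketch of this path, assuming AI SDK v5 (`streamText`, `stepCountIs`); the real wiring lives in `createStreamingResponse()`:

```typescript
import {
  streamText,
  stepCountIs,
  type LanguageModel,
  type ModelMessage,
  type ToolSet,
} from "ai";

declare const llmModel: LanguageModel;  // built by createLlmModel() from the user's API key
declare const messages: ModelMessage[]; // converted conversation history
declare const tools: ToolSet;           // from getToolsForAiSdk()

const result = streamText({
  model: llmModel,
  messages,
  tools,
  stopWhen: stepCountIs(10), // bounds the agent loop, mirroring MAX_AGENT_STEPS
});

for await (const part of result.fullStream) {
  // Each part (text delta, tool call, tool result, ...) is forwarded
  // to the browser as an SSE event.
}
```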
Server Selection
Users can select which MCP servers to use in the chat:
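For example (server names illustrative):

```typescript
declare const mcpClientManager: {
  getToolsForAiSdk(servers?: string[]): Promise<unknown>;
};

// undefined means "all connected servers"; otherwise tools are fetched
// only from the listed servers.
const selectedServers: string[] | undefined = ["filesystem", "github"];
const tools = await mcpClientManager.getToolsForAiSdk(selectedServers);
```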
Streaming Implementation (SSE)
The Playground uses Server-Sent Events (SSE) for real-time streaming of LLM responses and tool execution.
Event Types
Defined in `shared/sse.ts`:
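An illustrative subset of the event union; names and payloads here are simplified, and the authoritative definitions live in `shared/sse.ts`:

```typescript
type SSEEvent =
  | { type: "text"; content: string }                                   // streamed text delta
  | { type: "tool_call"; toolCallId: string; name: string; args: unknown }
  | { type: "tool_result"; toolCallId: string; result: unknown; _meta?: unknown }
  | { type: "elicitation_request"; requestId: string; schema: unknown } // server asks for input
  | { type: "error"; message: string };
```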
Server-Side Streaming
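The route writes one frame per event using the standard SSE wire format. A framework-agnostic sketch (the `sseFrame` helper is illustrative):

```typescript
// Each SSE frame is a `data:` line carrying a JSON payload, terminated
// by a blank line.
function sseFrame(event: { type: string }): string {
  return `data: ${JSON.stringify(event)}\n\n`;
}

// Inside the streaming handler, events are written as they occur, e.g.:
//   write(sseFrame({ type: "text", content: delta }));
//   write(sseFrame({ type: "tool_call", toolCallId, name, args }));
```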
Client-Side Parsing
Parsing happens in the chat hook (`client/src/hooks/use-chat.ts:181-327`):
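A simplified sketch of that loop (the endpoint path is illustrative): read the response body, split on blank lines, and JSON-decode each `data:` payload:

```typescript
declare const request: unknown; // chat request payload (messages, model, servers)

const res = await fetch("/api/mcp/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(request),
});
const reader = res.body!.getReader();
const decoder = new TextDecoder();
let buffer = "";

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });
  const frames = buffer.split("\n\n");
  buffer = frames.pop() ?? ""; // keep any partial frame for the next chunk
  for (const frame of frames) {
    if (!frame.startsWith("data: ")) continue;
    const event = JSON.parse(frame.slice("data: ".length));
    // dispatch on event.type: "text", "tool_call", "tool_result", ...
  }
}
```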
Elicitation Support
Elicitation allows MCP servers to request interactive input from the user during tool execution.
Flow
User Response
The user's answer is sent back through the chat hook (`client/src/hooks/use-chat.ts:563-619`); the dialog UI lives in `client/src/components/ElicitationDialog.tsx`.
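A hedged sketch of the response path; the endpoint is hypothetical, but the `accept`/`decline`/`cancel` actions follow the MCP elicitation spec:

```typescript
// Hypothetical response helper; the real hook posts the user's answer back
// so the paused tool call can resume.
async function respondToElicitation(
  requestId: string,
  action: "accept" | "decline" | "cancel",
  content?: Record<string, unknown>, // form values when action === "accept"
) {
  await fetch("/api/mcp/elicitation/respond", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ requestId, action, content }),
  });
}
```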
OpenAI Apps SDK Integration
MCPJam Inspector supports the OpenAI Apps SDK via `_meta` field preservation in tool results.
Why `_meta`?
The OpenAI Apps SDK uses `_meta` to pass rendering hints to OpenAI's UI (e.g., chart data, markdown formatting, images). Tools can return:
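For example (the template URI and values are illustrative):

```typescript
const toolResult = {
  content: [{ type: "text", text: "Found 3 matching listings" }],
  // Rendering hints preserved for the OpenAI Apps SDK
  _meta: {
    "openai/outputTemplate": "ui://widget/listings.html",
  },
};
```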
Implementation
1. Tool Metadata Caching (MCPClientManager): tool `_meta` is cached when tools are listed
2. Tool Execution (`shared/http-tool-calls.ts:154-168`): `_meta` is carried through tool call results
3. Backend Conversation (`shared/backend-conversation.ts:125-142`): `_meta` survives the backend agent loop
4. Chat Route (`server/routes/mcp/chat.ts:512-528`): `_meta` is forwarded to the client over SSE
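A condensed sketch of that chain (cache `_meta` at list time, re-attach it at result time); function names are illustrative:

```typescript
const toolMetaCache = new Map<string, Record<string, unknown>>();

// 1. While listing/converting tools, remember each tool's _meta
function cacheToolMeta(name: string, meta?: Record<string, unknown>) {
  if (meta) toolMetaCache.set(name, meta);
}

// 2-4. When a tool call completes, merge the cached _meta back into the
//      result before it flows through the backend loop and out over SSE
function withToolMeta<T extends { _meta?: Record<string, unknown> }>(
  toolName: string,
  result: T,
): T {
  const cached = toolMetaCache.get(toolName);
  return cached ? { ...result, _meta: { ...cached, ...result._meta } } : result;
}
```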
Accessing Tool Metadata
Device Globals for ChatGPT Apps and MCP Apps
As of PR #1026 and #1038, the playground provides device context to ChatGPT Apps and MCP Apps through configurable device globals. These settings are only applied when the UI Playground is active; outside the playground, defaults are used.
Device Settings:
- Device Type - `'mobile'`, `'tablet'`, or `'desktop'`. Controlled by the device type selector in the playground toolbar. Outside the playground, automatically detected from window size via `getDeviceType()`.
- Locale - BCP 47 locale code (e.g., `'en-US'`, `'ja-JP'`). Controlled by the locale selector in the playground toolbar. Outside the playground, defaults to `navigator.language`.
- Timezone (MCP Apps only) - IANA timezone identifier (e.g., `'America/New_York'`, `'Asia/Tokyo'`). Controlled by the timezone selector in the playground toolbar. Outside the playground, defaults to `Intl.DateTimeFormat().resolvedOptions().timeZone`.
- Device Capabilities - Input method support: `hover` (boolean) - whether hover interactions are supported; `touch` (boolean) - whether touch input is supported. Defaults to `{ hover: true, touch: false }`.
- Safe Area Insets - Device notches, rounded corners, and gesture areas in pixels: `top`, `bottom`, `left`, `right` (numbers)
Which controls are shown depends on the detected app type:
- ChatGPT Apps - Tools with `openai/outputTemplate` metadata
- MCP Apps - Tools with `ui/resourceUri` metadata
- Mixed/None - Shows ChatGPT Apps controls by default
The playground store (`ui-playground-store.ts`) maintains these settings and provides them to the appropriate renderer:
- ChatGPT Apps - Values are passed via the `window.openai` API
- MCP Apps - Values are passed in the `ui/initialize` host context
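An illustrative shape for these globals; field names mirror the settings above, but this is not the store's actual type:

```typescript
interface DeviceGlobals {
  deviceType: "mobile" | "tablet" | "desktop";
  locale: string;                   // BCP 47, e.g. "en-US"
  timezone?: string;                // IANA identifier, MCP Apps only
  capabilities: {
    hover: boolean;                 // hover interactions supported
    touch: boolean;                 // touch input supported
  };                                // defaults to { hover: true, touch: false }
  safeAreaInsets: { top: number; bottom: number; left: number; right: number };
}
```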
Server Instructions Integration
As of PR #948, MCP server instructions are automatically included in the chat context as system messages. This enables the LLM to understand server-specific guidance and capabilities.
Flow:
- Extract instructions from `connectedServerConfigs[serverName]?.initializationInfo?.instructions`
- Create system messages with metadata `{ source: "server-instruction", serverName }`
- Keep instruction messages in sync with selected servers via `useEffect` (see the sketch below)
- Filter out old instruction messages when servers change
- Prepend instruction messages to conversation history
Instruction extraction (`client/src/components/ChatTabV2.tsx:170-182`) and message synchronization (`client/src/components/ChatTabV2.tsx:252-279`):
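A simplified sketch of the synchronization effect; the message shape and `setMessages` setter are simplifications of the real component state:

```typescript
import { useEffect } from "react";

// Inside the ChatTabV2 component:
useEffect(() => {
  setMessages((prev) => {
    // Drop previously injected instruction messages
    const rest = prev.filter((m) => m.metadata?.source !== "server-instruction");
    // Rebuild one system message per selected server that declares instructions
    const instructionMessages = selectedServers.flatMap((serverName) => {
      const instructions =
        connectedServerConfigs[serverName]?.initializationInfo?.instructions;
      return instructions
        ? [{
            role: "system" as const,
            content: instructions,
            metadata: { source: "server-instruction", serverName },
          }]
        : [];
    });
    // Prepend so instructions lead the conversation history
    return [...instructionMessages, ...rest];
  });
}, [selectedServers, connectedServerConfigs]);
```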
This enables:
- LLM understanding of server-specific capabilities and constraints
- Contextual guidance for tool usage patterns
- Server-defined best practices and limitations
- Multi-server coordination with distinct instruction sets
Widget State Propagation to Model
As of PR #891, widget state changes from OpenAI Apps are automatically propagated to the LLM model as hidden assistant messages. This enables the AI to understand and reason about widget interactions.
Flow:
- The widget calls `window.openai.setWidgetState(state)` in its iframe
- `chatgpt-app-renderer.tsx` receives the `openai:setWidgetState` postMessage
- State is deduped by comparing serialized JSON
- The `onWidgetStateChange` callback is invoked with `(toolCallId, state)`
- `ChatTabV2.tsx` adds/updates a hidden assistant message with ID `widget-state-${toolCallId}`
- The message contains the text: `"The state of widget ${toolCallId} is: ${JSON.stringify(state)}"`
- `thread.tsx` hides messages starting with `widget-state-` from the UI
- The model receives state updates in conversation context
Implementation (`client/src/components/ChatTabV2.tsx:281-326`):
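A sketch of the hidden-message upsert, assuming a simplified message shape:

```typescript
type Msg = { id: string; role: "assistant"; content: string };
declare function setMessages(update: (prev: Msg[]) => Msg[]): void;

function onWidgetStateChange(toolCallId: string, state: unknown) {
  const id = `widget-state-${toolCallId}`;
  const message: Msg = {
    id,
    role: "assistant",
    // thread.tsx hides this message; the model still sees it in context
    content: `The state of widget ${toolCallId} is: ${JSON.stringify(state)}`,
  };
  setMessages((prev) => {
    const i = prev.findIndex((m) => m.id === id);
    if (i === -1) return [...prev, message]; // first update for this widget
    const next = [...prev];
    next[i] = message;                       // replace the stale state message
    return next;
  });
}
```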
This enables:
- LLM understanding of user interactions with charts/dashboards
- Contextual follow-up questions based on widget selections
- Multi-turn conversations referencing widget state
- Debugging widget behavior through model awareness
Technical Details
Agent Loop
Both local and backend execution use an agent loop pattern. Constants (`server/routes/mcp/chat.ts:57-58`):
- `MAX_AGENT_STEPS = 10` - maximum loop iterations
- `ELICITATION_TIMEOUT = 300000` - 5 minutes
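In outline, with illustrative helper names:

```typescript
declare const messages: unknown[];
declare const tools: unknown;
declare function callModel(
  messages: unknown[],
  tools: unknown,
): Promise<{ assistantMessage: unknown; toolCalls: unknown[] }>;
declare function executeToolCalls(calls: unknown[]): Promise<unknown[]>;

const MAX_AGENT_STEPS = 10;

for (let step = 0; step < MAX_AGENT_STEPS; step++) {
  // Ask the model for the next turn (streaming text and tool calls)
  const turn = await callModel(messages, tools);
  if (turn.toolCalls.length === 0) break; // no tool calls: the loop is done
  // Execute the requested MCP tools; this can pause on an elicitation,
  // bounded by ELICITATION_TIMEOUT
  const results = await executeToolCalls(turn.toolCalls);
  messages.push(turn.assistantMessage, ...results);
}
```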
Message Format
Client Messages (`shared/types.ts:19-28`) are converted to model messages on the server (`server/routes/mcp/chat.ts:223-234`):
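An illustrative, simplified shape of a client message (the authoritative type lives in `shared/types.ts`):

```typescript
interface ChatMessage {
  id: string;
  role: "user" | "assistant" | "system";
  content: string;
  // e.g. { source: "server-instruction", serverName } for injected messages
  metadata?: Record<string, unknown>;
}
```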
Content Blocks
Used for rich UI rendering:
Model Selection
Available Models (`shared/types.ts:167-260`):
- Anthropic: Claude Opus 4, Sonnet 4, Sonnet 3.7/3.5, Haiku 3.5
- OpenAI: GPT-4.1, GPT-4.1 Mini/Nano, GPT-4o, GPT-4o Mini
- DeepSeek: Chat, Reasoner
- Google: Gemini 2.5 Pro/Flash, 2.0 Flash Exp, 1.5 Pro/Flash variants
- Meta: Llama 3.3 70B (Free)
- X.AI: Grok 4 Fast (Free)
- OpenAI: GPT-OSS 120B, GPT-5 Nano (Free)
- Ollama: User-defined local models
Model resolution happens in the chat hook (`client/src/hooks/use-chat.ts:159-179`):
Temperature & System Prompt
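Both settings presumably flow into the model call on the local path; a sketch assuming AI SDK v5, with variable names from the earlier sketch:

```typescript
import { streamText } from "ai";

const result = streamText({
  model: llmModel,      // from createLlmModel()
  system: systemPrompt, // user-configured system prompt, if any
  temperature,          // user-selected sampling temperature
  messages,
  tools,
});
```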
Key Files Reference
Frontend
- `client/src/components/ChatTabV2.tsx` - Main chat UI with resizable layout
- `client/src/components/ChatTab.tsx` - Legacy single-panel layout
- `client/src/components/logging/json-rpc-logger-view.tsx` - MCP protocol viewer
- `client/src/components/ui/resizable.tsx` - Resizable panel components
- `client/src/hooks/use-chat.ts` - Chat state management
- `client/src/components/chat/message.tsx` - Message rendering
- `client/src/components/chat/chat-input.tsx` - Input component
- `client/src/components/ElicitationDialog.tsx` - Elicitation UI
- `client/src/lib/sse.ts` - SSE parsing utilities
Backend
- `server/routes/mcp/chat.ts` - Chat endpoint (593 lines)
- `server/utils/chat-helpers.ts` - LLM model creation
Shared
- `shared/types.ts` - Type definitions
- `shared/sse.ts` - SSE event types
- `shared/backend-conversation.ts` - Backend conversation orchestration
- `shared/http-tool-calls.ts` - Tool execution logic
SDK
- `sdk/mcp-client-manager/index.ts` - MCP orchestration
- `sdk/mcp-client-manager/tool-converters.ts` - AI SDK conversion
- See `docs/contributing/mcp-client-manager.mdx` for full docs
Development Patterns
Adding New Model Providers
1. Add the provider to `shared/types.ts`
2. Add models to the `SUPPORTED_MODELS` array
3. Implement in `server/utils/chat-helpers.ts` (see the sketch after this list)
4. Add API key handling in `client/src/hooks/use-ai-provider-keys.ts`
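A hedged sketch of step 3, using an OpenAI-style case as the pattern; the actual `createLlmModel()` signature may differ:

```typescript
import { createOpenAI } from "@ai-sdk/openai";

function createLlmModel(provider: string, modelId: string, apiKey: string) {
  switch (provider) {
    case "openai":
      return createOpenAI({ apiKey })(modelId);
    // Add a case for the new provider here, using its @ai-sdk/* factory,
    // e.g. createMistral({ apiKey })(modelId) from @ai-sdk/mistral
    default:
      throw new Error(`Unsupported provider: ${provider}`);
  }
}
```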
Adding New SSE Event Types
1. Define the event in `shared/sse.ts` (sketched below)
2. Emit it in the backend streaming handler
3. Handle it in the client event dispatch
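In outline (the `usage` event is a made-up example):

```typescript
// 1. shared/sse.ts - extend the event union
type SSEEvent =
  | { type: "text"; content: string }  // ...existing events
  | { type: "usage"; tokens: number }; // new event

// 2. Backend - emit it from the streaming handler
//    write(`data: ${JSON.stringify({ type: "usage", tokens })}\n\n`);

// 3. Client - handle it in the use-chat.ts event dispatch
//    case "usage": setTokenCount(event.tokens); break;
```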
Debugging Tips
Enable RPC Logging:
MCP-UI Integration
MCPJam Inspector supports rendering custom UI components from MCP servers using the MCP-UI specification.
Detection and Rendering
Located in `client/src/components/chat-v2/thread.tsx:151-172`:
MCP-UI Component
The `MCPUIResourcePart` component uses the `@mcp-ui/client` library to render UI resources (see the sketch after the action list below):
Supported Action Types
MCP-UI components can trigger the following actions:
- tool: Request tool execution (converted to chat message)
- link: Open external URLs in new tab
- prompt: Send text prompt as follow-up message
- intent: Send intent string as follow-up message
- notify: Display notification (converted to chat message)
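A sketch using `@mcp-ui/client`'s `UIResourceRenderer`, with the handler wiring mirroring the action list above (`sendChatMessage` is illustrative):

```tsx
import { UIResourceRenderer } from "@mcp-ui/client";

function McpUiPart({
  resource,
  sendChatMessage,
}: {
  resource: { uri?: string; mimeType?: string; text?: string };
  sendChatMessage: (text: string) => void;
}) {
  return (
    <UIResourceRenderer
      resource={resource}
      onUIAction={async (action: any) => {
        switch (action.type) {
          case "tool":   sendChatMessage(`Call tool ${action.payload.toolName}`); break;
          case "link":   window.open(action.payload.url, "_blank"); break; // external URL
          case "prompt": sendChatMessage(action.payload.prompt); break;
          case "intent": sendChatMessage(action.payload.intent); break;
          case "notify": sendChatMessage(action.payload.message); break;
        }
      }}
    />
  );
}
```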
Component Library
The implementation uses the basic component library from `@mcp-ui/client`:
- Button: Interactive buttons with action handlers
- Text: Text display with formatting
- Stack: Layout container for vertical/horizontal stacking
- Card: Container with border and padding
- Image: Image display with alt text
MCP-UI vs OpenAI Apps SDK
MCPJam Inspector supports both MCP-UI and OpenAI Apps SDK for custom UI rendering:
| Feature | MCP-UI | OpenAI Apps SDK |
|---|---|---|
| Specification | MCP-UI (open standard) | OpenAI proprietary |
| Rendering | RemoteDOM components | Sandboxed iframes |
| Tool calls | Via action handlers | Via `window.openai.callTool()` |
| State persistence | Not supported | Via `window.openai.setWidgetState()` |
| Security | Component-level isolation | Full iframe sandbox |
| Use case | Simple interactive components | Complex web applications |
UI Playground (Apps Builder)
The UI Playground is a specialized testing environment for ChatGPT Apps and MCP Apps, providing device emulation, locale testing, and widget debugging capabilities.
Playground Controls
Located in `client/src/components/ui-playground/PlaygroundMain.tsx`, the playground header includes:
Device Selector - Toggle between mobile (430×932), tablet (820×1180), and desktop (1280×800) viewports to test responsive layouts.
Locale Selector - Choose from common BCP 47 locales for internationalization testing:
- English: en-US, en-GB
- European: es-ES, es-MX, fr-FR, de-DE, it-IT, pt-BR
- Asian: ja-JP, zh-CN, zh-TW, ko-KR, hi-IN
- Other: ar-SA, ru-RU, nl-NL
The selected locale is exposed as `window.openai.locale` and included in the `openai:set_globals` message.
Theme Toggle - Switch between light and dark modes. Theme changes are automatically propagated to widgets.
Widget Debugging UI
As of PR #1022, the debug interface uses icon buttons instead of tabs for a more compact layout. Located in `client/src/components/chat-v2/thread.tsx:416-568`:
Debug Controls:
- Data (database icon) - View tool input, output, and error details
- Widget State (box icon) - Inspect current widget state with last updated timestamp
- Globals (globe icon) - View global values (theme, displayMode, locale, maxHeight)
Display Modes:
- Inline (layout icon) - Default message flow rendering
- Picture-in-Picture (picture-in-picture icon) - Floating overlay at top of screen
- Fullscreen (maximize icon) - Full viewport expansion
Locale Propagation Flow
Implementation Details:
1. Locale Selection (`client/src/components/ui-playground/PlaygroundMain.tsx:69-89`):
   - Dropdown with 16 common locales
   - Stored in `useUIPlaygroundStore` globals
   - Fallback to `navigator.language` if not set
2. Widget Storage (`client/src/components/chat-v2/chatgpt-app-renderer.tsx:284-349`):
   - Locale included in the widget storage payload
   - Passed to the backend for iframe initialization
   - Available as `window.openai.locale` in the widget
3. Global Synchronization (`client/src/components/chat-v2/chatgpt-app-renderer.tsx:831-850`):
   - Locale sent via the `openai:set_globals` message (sketched below)
   - Updates when the user changes the locale selector
   - Propagated to both inline and modal views
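A sketch of the synchronization message posted into the widget iframe (payload shape simplified):

```typescript
declare const iframe: HTMLIFrameElement; // the widget's iframe
declare const selectedLocale: string;    // e.g. "ja-JP" from the selector

iframe.contentWindow?.postMessage(
  { type: "openai:set_globals", globals: { locale: selectedLocale } },
  "*",
);
```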
Debug Panel State Management
The debug UI uses local state to track which panel is active:
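A sketch of that local state, assuming a simple union of panel ids:

```typescript
import { useState } from "react";

type DebugPanel = "data" | "widgetState" | "globals" | null;

const [activePanel, setActivePanel] = useState<DebugPanel>(null);

// Each icon button toggles its panel; clicking the active one closes it
const toggle = (panel: Exclude<DebugPanel, null>) =>
  setActivePanel((cur) => (cur === panel ? null : panel));
```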
Related Documentation
- MCPClientManager - MCP orchestration layer
- Elicitation Support - Interactive prompts
- Debugging - JSON-RPC logging
- LLM Playground - User guide
- OpenAI SDK Architecture - OpenAI Apps SDK implementation

