Build custom applications with GWI Spark insights via MCP.

Documentation Index
Fetch the complete documentation index at: https://gwi-c45e8f9c-custom-gpt-apps.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
Overview
GWI Spark MCP lets an MCP client (e.g., Claude Desktop, Cursor, Copilot Studio, or your own MCP-compatible runtime) call GWI Spark tools through a single MCP endpoint using JSON-RPC. It’s designed so a host LLM/agent can pull focused, insight-ready consumer data from GWI while following the standard MCP lifecycle (not “just another API endpoint”).

Recommended pre-read: MCP Lifecycle (initialize → tools/list → tools/call)
Lifecycle - Model Context Protocol: https://modelcontextprotocol.io/specification/2025-11-25/basic/lifecycle
How to use
Spark MCP uses JSON-RPC and follows the MCP lifecycle. In practice, an MCP client will typically:
- initialize
- tools/list (discover what tools are available + how to use them)
- tools/call (invoke a specific tool)
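The three steps above can be sketched as JSON-RPC payloads. This is a minimal illustration: the `protocolVersion`, `clientInfo`, tool name, and prompt below are assumed values for the sketch; the exact handshake fields are defined by the MCP lifecycle spec, and real tool names come from the server's tools/list response.

```python
import json

def rpc(method, params, id_):
    """Wrap a method + params in a JSON-RPC 2.0 request envelope."""
    return {"jsonrpc": "2.0", "id": id_, "method": method, "params": params}

# 1) initialize -- handshake fields here are illustrative; follow the spec.
initialize = rpc("initialize", {
    "protocolVersion": "2025-11-25",
    "capabilities": {},
    "clientInfo": {"name": "my-app", "version": "0.1.0"},
}, id_=1)

# 2) tools/list -- discover available tools.
list_tools = rpc("tools/list", {}, id_=2)

# 3) tools/call -- invoke one tool by name (name/arguments are placeholders).
call_tool = rpc("tools/call", {
    "name": "chat_gwi",
    "arguments": {"prompt": "Which social platforms do Gen Z in the UK use weekly?"},
}, id_=3)

for payload in (initialize, list_tools, call_tool):
    print(json.dumps(payload))
```

Each payload would be sent as the body of a POST to the Spark MCP endpoint, in order.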
Lifecycle essentials
1) initialize
Your MCP client should start by initializing the session (per the MCP lifecycle spec). Exact params/fields depend on the MCP client runtime; follow the lifecycle doc linked above.

2) tools/list
Next, list available tools so your host LLM can “see”:
- tool names
- tool descriptions
- expected arguments
- built-in guidance (including prompt decomposition expectations)
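Per the MCP spec, this information arrives under `result.tools[]`, each entry carrying a name, description, and input schema. A sketch of reading it, using an illustrative hand-written response (the real tool list and descriptions come from the server):

```python
# Illustrative tools/list response; real content is returned by the server.
response = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "tools": [
            {
                "name": "chat_gwi",
                "description": "Ask a focused consumer-insight question.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"prompt": {"type": "string"}},
                },
            }
        ]
    },
}

# Index tools by name so the host LLM/agent can look up how to call each one.
tools = {t["name"]: t for t in response["result"]["tools"]}
for name, tool in tools.items():
    print(name, "-", tool["description"])
```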
3) tools/call
When calling a tool, use:
- method: "tools/call"
- params.name: tool name
- params.arguments: tool inputs
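Put together, a tools/call round trip might look like the sketch below. The response shape follows the MCP convention of `result.content[]` holding typed blocks (commonly `{"type": "text", ...}`); the tool name, prompt, and answer text here are placeholders, not real Spark output.

```python
# A tools/call request: method, params.name, params.arguments.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "chat_gwi",
        "arguments": {"prompt": "How do Gen Z in the UK discover new skincare brands?"},
    },
}

# Illustrative response shape -- the actual text comes from GWI Spark.
response = {
    "jsonrpc": "2.0",
    "id": 3,
    "result": {"content": [{"type": "text", "text": "Top discovery channels are ..."}]},
}

# Read tool output from result.content[], keeping the text blocks.
texts = [c["text"] for c in response["result"]["content"] if c["type"] == "text"]
print("\n".join(texts))
```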
Spark MCP works best when it’s used by a host LLM/agent that can do tool calling.
When your LLM connects to the MCP server, it will typically first discover the tools it can use (via the standard MCP tool discovery flow). The tool definitions include guidance on how to use each tool effectively, including the expectation that complex user requests should be broken into smaller, Spark-style questions. In practice, this means:
- A user can ask a broad question (e.g., “Build me a profile of Gen Z skincare buyers in the UK and how to reach them”)
- Your host LLM reads the available tool descriptions and orchestrates the work by:
- splitting the request into a small number of focused queries, then
- calling chat_gwi multiple times (and explore_insight_gwi where needed), then
- summarising the results back to the user in your UI/app
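A minimal sketch of that orchestration loop. In a real app the host LLM would produce the decomposition and the final summary, and `call_chat_gwi` would be a genuine tools/call round trip; both are stubbed/hardcoded here for illustration.

```python
def call_chat_gwi(prompt: str) -> str:
    # Stand-in for a real JSON-RPC tools/call to chat_gwi.
    return f"[Spark answer for: {prompt}]"

broad_request = "Build me a profile of Gen Z skincare buyers in the UK and how to reach them"

# The host LLM would generate this decomposition; hardcoded for the sketch.
focused_queries = [
    "Who are Gen Z skincare buyers in the UK (demographics, attitudes)?",
    "Which media channels reach Gen Z skincare buyers in the UK?",
    "What motivates Gen Z skincare purchases in the UK?",
]

# One focused call per sub-question, then combine for the user-facing summary.
answers = [call_chat_gwi(q) for q in focused_queries]
summary = "\n".join(answers)
print(summary)
```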
Common errors (and how to fix)
Unauthorized (401)
What happened: Your token is missing or invalid.
Fix: Ensure you’re sending Authorization: Bearer YOUR_TOKEN.
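A sketch of attaching the bearer token with Python's standard library; the host in `BASE_URL` is a placeholder for your own deployment, and the path matches the quick-start checklist:

```python
import json
import urllib.request

BASE_URL = "https://api.example.com"  # placeholder host -- use your own
TOKEN = "YOUR_TOKEN"                  # placeholder token

payload = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}
req = urllib.request.Request(
    BASE_URL + "/v1/spark-api/mcp",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {TOKEN}",  # missing/invalid token -> 401
        "Content-Type": "application/json",
    },
    method="POST",
)
print(req.get_header("Authorization"))
```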
You got a shallow answer
What happened: The prompt is too broad.
Fix: Break the request into 2–4 narrower questions and re-run.

Your app expects “metadata fields”
What happened: Your parser assumes a custom response structure.
Fix: Treat the response as JSON-RPC and read outputs from result.content[] (text).

Quick start checklist
- Use POST /v1/spark-api/mcp for Spark MCP calls
- Authenticate with Authorization: Bearer YOUR_TOKEN
- Follow the MCP lifecycle (initialize → tools/list → tools/call)
- Let the host LLM use tool definitions from tools/list to orchestrate (decompose broad questions into focused queries)
- Avoid “direct API-style” MCP integrations that skip the protocol steps

