
MCP server

The Wholisphere MCP server exposes accessibility capabilities as tools that any Model Context Protocol (MCP)-aware AI client can call.

Tools exposed

  • describe_image — vision LLM produces a screen-reader-style description
  • simplify_text — rewrite at chosen reading level
  • summarize_page — overview + bullets
  • voice_action — map a transcript + DOM into an action plan
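On the wire, each of these tools is invoked with a standard MCP `tools/call` JSON-RPC request. A minimal sketch of what a `describe_image` call might look like — the `image_url` argument name is an illustrative assumption, not the server's documented schema; ask the server via `tools/list` for the real input shape:

```typescript
// Sketch of an MCP tools/call request for describe_image.
// The argument name below is hypothetical; the actual schema
// is advertised by the server's tools/list response.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: {
    name: string;
    arguments: Record<string, unknown>;
  };
}

const request: ToolCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "describe_image",
    arguments: {
      image_url: "https://example.com/chart.png", // hypothetical parameter name
    },
  },
};

// MCP stdio transport frames each message as one line of JSON.
console.log(JSON.stringify(request));
```

Clients like Claude Desktop and Cursor build and send these requests for you; the sketch is only to show what flows over the server's stdin.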

Use with Claude Desktop

Add to ~/Library/Application Support/Claude/claude_desktop_config.json (Mac) or %APPDATA%\Claude\claude_desktop_config.json (Windows):

{
  "mcpServers": {
    "wholisphere": {
      "command": "npx",
      "args": ["-y", "@wholisphere/mcp"],
      "env": {
        "GEMINI_API_KEY": "your-google-ai-studio-key"
      }
    }
  }
}

Restart Claude Desktop. The four Wholisphere tools appear in the tool picker.

Use with Cursor

Cursor’s MCP support is configured per-project. Add .cursor/mcp.json to your repo:

{
  "mcpServers": {
    "wholisphere": {
      "command": "npx",
      "args": ["-y", "@wholisphere/mcp"]
    }
  }
}

Local dev (from this monorepo)

cd agent/apps/mcp
GEMINI_API_KEY=... pnpm dev

To debug the JSON-RPC stream:

npx @modelcontextprotocol/inspector node dist/server.js
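The inspector speaks newline-delimited JSON-RPC over the server's stdin/stdout. If you want to poke at the stream by hand, the first message a client sends is an `initialize` request; a sketch of building one (the protocol version string and client name here are placeholders — use whatever revision your SDK targets):

```typescript
// Minimal MCP initialize request: the first message a client
// writes to the server's stdin. Values below are placeholders.
const initialize = {
  jsonrpc: "2.0" as const,
  id: 0,
  method: "initialize",
  params: {
    protocolVersion: "2025-06-18", // example revision; match your SDK
    capabilities: {},
    clientInfo: { name: "manual-debug", version: "0.0.0" },
  },
};

// Each message is one JSON object terminated by a newline.
process.stdout.write(JSON.stringify(initialize) + "\n");
```

The server replies with its own capabilities; after you acknowledge with a `notifications/initialized` notification, `tools/list` and `tools/call` requests become available.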

Why this matters

MCP is rapidly becoming the standard way AI tools consume external capabilities. By shipping accessibility as MCP tools, Wholisphere becomes a primitive other AIs build on, not just a standalone product: a developer asks Cursor to "describe this image and rewrite it for a 4th grader," and Cursor calls our tools.