A 3D interactive model visualization built with Three.js, enhanced with Model Context Protocol (MCP) server capabilities that allow AI assistants and other MCP clients to manipulate the 3D model in real-time.
- Interactive 3D Model: Rotate with mouse/touch, zoom with mouse wheel/pinch
- MCP Server Integration: Control the model remotely via MCP tools
- Real-time Updates: Changes made through MCP tools are instantly visible in the browser
- WebSocket Communication: Bidirectional communication between MCP server and browser app
- Node.js (v18 or higher)
- npm
1. Install dependencies:

   npm install

2. Start the MCP server (in one terminal). Default (localhost development):

   node server.js

   The server starts on http://localhost:3000/mcp (MCP) and ws://localhost:3001 (WebSocket). It defaults to http://localhost:5173 for browser connection URLs; this is configured in the `.env` file (which defaults to localhost) and can be overridden via command-line arguments.

   For Netlify deployment (optional/advanced):

   node server.js --browser-url https://your-app.netlify.app

   ⚠️ Note: When hosting the frontend on Netlify, use the `-u` (or `--browser-url`) parameter with your Netlify URL. This ensures that connection links generated by the MCP server point to your Netlify deployment. See Netlify Setup for more details.

   Alternative syntax:

   node server.js -u https://your-app.netlify.app

   See help:

   node server.js --help

3. Start the web application (in another terminal):

   npm run dev

   Open http://localhost:5173 in your browser.

4. Connect an MCP client (see Connecting MCP Clients below)
| Client | Cost | Works with Localhost | Requires Public URL | Best For |
|---|---|---|---|---|
| MCP Inspector | Free | ✅ Yes | ❌ No | Testing & debugging tools |
| Cursor | Free | ✅ Yes | ❌ No | Full IDE with AI assistant |
| VS Code + MCP | Free | ✅ Yes | ❌ No | VS Code users |
| Claude Code | Free | ✅ Yes | ❌ No | CLI-based testing |
| Continue.dev | Free | ✅ Yes | ❌ No | VS Code extension users |
| Claude Desktop | Free | ✅ Yes (subprocess mode) | ✅ Yes (HTTP mode + tunnel) | Desktop app with Claude |
| ChatGPT | Paid (Plus) | ❌ No | ✅ Yes (tunnel needed) | OpenAI integration |
Recommendation for Testing: Start with MCP Inspector (free, no setup needed) or Cursor (free IDE with built-in MCP support).
These clients work with localhost, so no additional setup is needed.
Option 1: Deeplink (macOS)
open 'cursor://anysphere.cursor-deeplink/mcp/install?name=3d-model-server&config=eyJ1cmwiOiJodHRwOi8vbG9jYWxob3N0OjMwMDAvbWNwIn0='

Option 2: Manual Configuration
- Open Cursor Settings → Features → Model Context Protocol
- Add server configuration:
{ "mcpServers": { "3d-model-server": { "url": "http://localhost:3000/mcp" } } }
Add to your MCP configuration:
{
"name": "3d-model-server",
"type": "http",
"url": "http://localhost:3000/mcp"
}

claude mcp add --transport http 3d-model-server http://localhost:3000/mcp

Continue.dev is a free VS Code extension that provides AI coding assistance with MCP support.
- Install Continue.dev extension in VS Code
- Configure MCP server in Continue.dev settings:
{ "mcpServers": { "3d-model-server": { "url": "http://localhost:3000/mcp" } } }
The MCP Inspector is a developer tool for testing and debugging MCP servers. It provides an interactive web interface to explore server capabilities, test tools, and view resources.
Example: MCP Inspector connected to the server, showing the change_background_color tool being tested with the color "tin".
To use with your MCP server:
To use with your MCP server:

1. Make sure your MCP server is running:

   node server.js

   The server should be running on http://localhost:3000/mcp.

2. Start the MCP Inspector:

   npx @modelcontextprotocol/inspector http://localhost:3000/mcp

3. Open the Inspector UI:
   - The inspector starts a web interface (usually on http://localhost:5173)
   - Open your browser and navigate to the URL shown in the terminal

4. Configure the connection:
   - Transport Type: Select "Streamable HTTP" (this matches your server's transport)
   - URL: Enter `http://localhost:3000/mcp` (note: port 3000, not 3001)
   - Connection Type: Select "Direct"
   - Click the "Connect" button
   - You should see a green dot and a "Connected" status when successful

5. Browse and test tools:
   - Click the "Tools" tab in the top navigation bar
   - You'll see a list of all available tools in the middle pane:
     - `change_model_color`: change the color of the 3D model
     - `change_model_size`: change the uniform size of the model
     - `scale_model`: scale the model independently in each dimension
     - `change_background_color`: change the background color of the scene
     - `set_key_light_intensity`: set the intensity of the key light
     - `set_key_light_position`: set the position of the key light
     - And more...

6. Call a tool:
   - Click any tool name in the tools list to select it
   - The right pane shows the tool's description and parameters
   - Enter the required parameter value(s) in the input field(s):
     - For `change_background_color`: enter a color name (e.g., "tin") or a hex code (e.g., "#878687")
     - For `change_model_size`: enter a number (e.g., 2.5)
     - For `scale_model`: enter values for the x, y, and z axes
   - Click the "Run Tool" button (paper airplane icon)
   - The result appears below, showing "Success" and the response message
   - If your 3D app is running and connected, you'll see the changes reflected immediately

7. View history:
   - The bottom-left "History" pane shows all your previous tool calls
   - Click any history entry to see its details
   - Use "Clear" to remove history entries

Example: to change the background color to tin:

1. Select `change_background_color` from the tools list
2. Enter `tin` in the "color" parameter field
3. Click "Run Tool"
4. You'll see: "Background color changed to tin (#878687)"
5. The background in your 3D app updates to the new color
Note: The Inspector connects directly to your MCP HTTP endpoint. Make sure your server is running before starting the Inspector. If you're using a tunneled server (for remote access), you can also connect to the tunneled URL:
npx @modelcontextprotocol/inspector https://your-tunnel-url.ngrok-free.app/mcp

These clients require a publicly accessible server (use ngrok or localtunnel). See ChatGPT Setup for tunneling instructions.
Claude Desktop is Anthropic's free desktop application that supports MCP servers.
Claude Desktop supports two connection modes:
- Subprocess Mode (Recommended for localhost) - Claude Desktop manages the server process automatically
- HTTP/SSE Mode (For remote/tunneled servers) - Connect to an already-running server
Prerequisites:
- Claude Desktop installed (download from https://claude.ai/download)
- For HTTP/SSE mode: ngrok or localtunnel installed
This is the simplest setup - Claude Desktop will start and manage your server automatically.
Step-by-Step Setup:
1. Make sure your server is NOT already running:
   - If you have `node server.js` running in a terminal, stop it (Ctrl+C)
   - Claude Desktop will start the server automatically
   - Having both running will cause port conflicts

2. Locate the Claude Desktop configuration file:
   - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
   - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
   - Linux: `~/.config/Claude/claude_desktop_config.json`

3. Edit the configuration file. If the file doesn't exist, create it. Add the `mcpServers` section with a subprocess configuration:

   {
     "mcpServers": {
       "3d-model-server": {
         "command": "node",
         "args": ["/Users/turner/MCPDevelopment/Hello3DLLM/server.js"]
       }
     }
   }

   ⚠️ Important:
   - Replace `/Users/turner/MCPDevelopment/Hello3DLLM/server.js` with the absolute path to your `server.js` file
   - Use the `node` command (or a full path like `/Users/turner/.nvm/versions/node/v22.14.0/bin/node` if using nvm)
   - Make sure no other instance of the server is running
4. Restart Claude Desktop:
   - Quit Claude Desktop completely
   - Reopen Claude Desktop
   - Claude Desktop will automatically start your MCP server

5. Verify the connection:
   - In Claude Desktop, ask: "What tools do you have available?"
   - Claude should list your MCP tools (e.g., `change_model_color`, `change_model_size`, etc.)
   - Check the Claude Desktop logs if there are issues: `~/Library/Logs/Claude/mcp-server-3d-model-server.log` (macOS)

6. Start your 3D app (optional but recommended):

   npm run dev

   This allows you to see changes in real-time when Claude calls the MCP tools.

7. Connect to the 3D app:
   - Ask Claude Desktop: "How do I connect to the 3D app?" or "Get browser URL"
   - Claude will provide a connection URL with your unique session ID (e.g., `http://localhost:5173?sessionId=<unique-uuid>`)
   - Copy and paste the URL into your browser
   - The browser will connect to your Claude Desktop session
Using Netlify-Hosted App with Claude Desktop (Optional/Advanced):
If you want to use your Netlify-hosted app instead of running locally:
1. Update the `.env` file (or set the BROWSER_URL environment variable). Edit the `.env` file in your project root:

   BROWSER_URL=https://your-app.netlify.app

   Or set an environment variable. macOS/Linux: add to your shell profile (`~/.zshrc` or `~/.bashrc`):

   export BROWSER_URL=https://your-app.netlify.app

   Then restart your terminal or run `source ~/.zshrc`.

   Windows: set a system environment variable or use PowerShell:

   $env:BROWSER_URL="https://your-app.netlify.app"

2. Create a WebSocket tunnel (so Netlify can connect to your local WebSocket):

   # Using ngrok
   ngrok http 3001
   # Or using localtunnel
   lt --port 3001 --subdomain hello3dllm-websocket

   Copy the tunnel URL (e.g., `wss://hello3dllm-websocket.loca.lt`)

3. Configure Netlify:
   - Go to your Netlify site settings
   - Add the environment variable `VITE_WS_URL=wss://your-websocket-tunnel-url`
   - Redeploy your site

4. Restart Claude Desktop (to pick up BROWSER_URL from the `.env` file or environment variable)

5. Connect:
   - Ask Claude Desktop: "How do I connect to the 3D app?"
   - It will provide a Netlify URL with your unique session ID (e.g., `https://your-app.netlify.app?sessionId=<unique-uuid>`)
   - Open that URL in your browser
Note: Keep the WebSocket tunnel running while using the app. The tunnel URL may change if you restart it.
Troubleshooting Subprocess Mode:
- Port already in use error:
  - Make sure you've stopped any manually running server instances
  - Check if port 3000 or 3001 is in use: `lsof -i :3000` or `lsof -i :3001` (macOS/Linux)
  - Kill the process if needed: `kill -9 <PID>`
  - Or use: `lsof -ti :3000 -ti :3001 | xargs kill -9`

- Server not starting:
  - Verify the path to `server.js` is correct and absolute
  - Check that the `node` command is in your PATH, or use the full path to the node executable
  - Check the Claude Desktop logs: `~/Library/Logs/Claude/mcp-server-3d-model-server.log` (macOS)

- Tools not appearing:
  - Check the Claude Desktop logs for errors
  - Verify the server started successfully
  - Restart Claude Desktop completely

- Netlify app not connecting:
  - Verify the WebSocket tunnel is running
  - Check that `VITE_WS_URL` is set correctly in Netlify (use the `wss://` protocol)
  - Ensure the tunnel URL matches what's configured in Netlify
  - Check the browser console for WebSocket connection errors
Use this mode if you want to run the server manually or connect via a tunnel.
Step-by-Step Setup:
1. Start your MCP server manually:

   node server.js

   The server should be running on http://localhost:3000/mcp.

2. Create a tunnel for your MCP server (if connecting remotely):

   Option A: Using ngrok

   ngrok http 3000

   Copy the HTTPS URL shown (e.g., `https://abc123.ngrok-free.app`)

   Option B: Using localtunnel

   lt --port 3000 --subdomain hello3dllm-mcpserver

   Creates the URL `https://hello3dllm-mcpserver.loca.lt`

   ⚠️ Important: Keep this tunnel running while using Claude Desktop!

3. Locate the Claude Desktop configuration file:
   - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
   - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
   - Linux: `~/.config/Claude/claude_desktop_config.json`

4. Edit the configuration file. If the file doesn't exist, create it. Add or update the `mcpServers` section.

   For ngrok:

   {
     "mcpServers": {
       "3d-model-server": {
         "url": "https://abc123.ngrok-free.app/mcp",
         "transport": "sse"
       }
     }
   }

   For localtunnel:

   {
     "mcpServers": {
       "3d-model-server": {
         "url": "https://hello3dllm-mcpserver.loca.lt/mcp",
         "transport": "sse"
       }
     }
   }

   ⚠️ Important Notes:
   - Endpoint: Use `/mcp` (NOT `/sse`); your server handles SSE streams on `/mcp`
   - Transport: Use `"transport": "sse"` (Server-Sent Events); this matches your server's `StreamableHTTPServerTransport`
   - The `/mcp` endpoint handles both POST requests (initialization/tool calls) and GET requests (SSE streams)
   - Claude Desktop's example shows `/sse`, but that's just a generic example; your server uses `/mcp`
   - Replace the URL with your actual tunnel URL
   - Include `/mcp` at the end of the URL
   - Use `https://` (not `http://`) for the tunnel URL
5. Restart Claude Desktop:
   - Quit Claude Desktop completely
   - Reopen Claude Desktop
   - The MCP server should now be connected

6. Verify the connection:
   - In Claude Desktop, ask: "What tools do you have available?"
   - Claude should list your MCP tools (e.g., `change_model_color`, `change_model_size`, etc.)
   - If tools aren't showing, check that:
     - The tunnel is still running
     - The configuration file has the correct URL with the `/mcp` suffix
     - Claude Desktop was fully restarted

7. Start your 3D app (optional but recommended):

   npm run dev

   This allows you to see changes in real-time when Claude calls the MCP tools.

8. Connect to the 3D app:
   - Ask Claude Desktop: "How do I connect to the 3D app?" or "Get browser URL"
   - Claude will provide a connection URL with your session ID
   - Copy and paste the URL into your browser
   - The browser will connect to your Claude Desktop session
Troubleshooting:
- Tools not appearing:
  - Verify the tunnel is running (`ngrok http 3000` or `lt --port 3000`)
  - Check that the configuration file's JSON syntax is valid
  - Ensure the URL includes `/mcp` at the end
  - Restart Claude Desktop completely

- Connection errors:
  - Verify the MCP server is running (`node server.js`)
  - Check that the tunnel URL is accessible in a browser: `https://your-tunnel-url/mcp`
  - Ensure the tunnel hasn't expired (ngrok free tier URLs change on restart)

- Changes not visible:
  - Make sure your 3D app is running (`npm run dev`)
  - Verify the browser is connected to the correct session
  - Check the browser console for WebSocket connection errors
Note: If using ngrok free tier, your tunnel URL will change each time you restart ngrok. You'll need to update the configuration file and restart Claude Desktop when this happens. For a more stable URL, consider using localtunnel with a custom subdomain or ngrok's paid plan with a custom domain.
Recommendation: For local development, use Subprocess Mode (Option 1) - it's simpler and doesn't require tunneling.
ChatGPT requires a publicly accessible server and a paid Plus subscription with developer mode enabled. See ChatGPT Setup for detailed instructions.
- Add the MCP tool in your MCP client (ChatGPT, Claude Desktop, etc.)
- Ask your AI assistant: "How do I connect to the 3D app?" or "Get browser URL"
- Your AI assistant will provide a connection URL with your session ID embedded
- Copy and paste the URL into your browser
- The browser will automatically connect to your session
Connection URL formats:
- Claude Desktop (STDIO mode): `http://localhost:5173?sessionId=<unique-uuid>`
- ChatGPT/HTTP mode: `https://your-app.netlify.app?sessionId=<your-session-id>`
Each MCP client session gets its own unique session ID. When you ask for the connection URL, the AI assistant provides a URL specific to your session. Multiple users can connect simultaneously, each with their own isolated browser instance.
Note: In STDIO mode (Claude Desktop subprocess), each process instance generates a unique UUID session ID at startup, ensuring that different Claude Desktop users get isolated sessions.
Once connected, ask your AI assistant to manipulate the model using natural language:
- Change color: "Change the model to red" or "Make it blue"
- Change size: "Make the model bigger" or "Set size to 2.5"
- Scale: "Stretch horizontally" or "Make it tall and thin"
- Background: "Change background to black"
- Combined: "Make a red model that's tall and thin"
The AI will automatically call the appropriate MCP tools, and changes appear in real-time in your browser.
Returns the URL to open in your browser to connect the 3D visualization app. This tool is automatically called when users ask how to connect or how to open the 3D app.
Parameters:
- None
Example Usage:
- User asks: "How do I connect to the 3D app?"
- ChatGPT calls this tool and returns: "To connect your browser to the 3D visualization app, open this URL: https://your-app.netlify.app?sessionId=abc-123..."
Note: The URL includes the current session ID, ensuring each ChatGPT session connects to its own browser instance.
Changes the color of the 3D model.
Parameters:
- `color` (string): Hex color code (e.g., `#ff0000` for red)
Example:
{
"name": "change_model_color",
"arguments": { "color": "#ff0000" }
}

Changes the uniform size of the model by scaling.
Parameters:
- `size` (number): New size value (must be positive; a scale factor)

Note: This scales the model uniformly, preserving its shape and position.
Example:
{
"name": "change_model_size",
"arguments": { "size": 2.0 }
}

Scales the model independently in each dimension (x, y, z axes).
Parameters:
- `x` (number): Scale factor for the X axis (must be positive)
- `y` (number): Scale factor for the Y axis (must be positive)
- `z` (number): Scale factor for the Z axis (must be positive)
Example:
{
"name": "scale_model",
"arguments": { "x": 1.5, "y": 1.0, "z": 2.0 }
}

Changes the background color of the 3D scene.
Parameters:
- `color` (string): Hex color code (e.g., `#000000` for black)
Example:
{
"name": "change_background_color",
"arguments": { "color": "#000000" }
}

Important: ChatGPT requires a publicly accessible server (not just localhost).
The following configurations have been tested and confirmed working with ChatGPT:
Setup:
- ✅ MCP server running locally on port 3000
- ✅ MCP server exposed via tunnel (ngrok/localtunnel) for ChatGPT access
- ✅ WebSocket server running locally on port 3001 (no tunnel needed)
- ✅ App running locally (`npm run dev`)
Steps:
1. Start the MCP server: `npm run mcp:server`
2. Create a tunnel for the MCP HTTP endpoint (port 3000): `ngrok http 3000` or `lt --port 3000 --subdomain hello3dllm-mcpserver`
3. Start the local app: `npm run dev`
4. Configure ChatGPT with the tunneled MCP URL: `https://your-tunnel-url/mcp`
5. The app connects to the local WebSocket: `ws://localhost:3001`
Pros:
- ✅ Simple setup (only one tunnel needed)
- ✅ Fast local WebSocket connection
- ✅ Works with ChatGPT
Cons:
- ❌ App must run locally (not accessible to others)
Setup:
- ✅ MCP server running locally on port 3000
- ✅ MCP server exposed via tunnel for ChatGPT access
- ✅ WebSocket server running locally on port 3001
- ✅ WebSocket exposed via tunnel for Netlify app
- ✅ App deployed to Netlify
Steps:
1. Start the MCP server with your Netlify URL:

   node server.js --browser-url https://your-app.netlify.app

   Or using the short form:

   node server.js -u https://your-app.netlify.app

   Important: The `-u` parameter (or `--browser-url`) is required so that the MCP server generates correct connection URLs pointing to your Netlify deployment.

2. Create a tunnel for the MCP HTTP endpoint (port 3000):

   ngrok http 3000
   # or
   lt --port 3000 --subdomain hello3dllm-mcpserver

3. Create a tunnel for the WebSocket (port 3001):

   ngrok http 3001
   # or
   lt --port 3001 --subdomain hello3dllm-websocket

4. Configure Netlify:
   - Set the `VITE_WS_URL` environment variable to the tunneled WebSocket URL (e.g., `wss://your-websocket-tunnel-url`)
   - Deploy the app

5. Configure ChatGPT with the tunneled MCP URL:

   https://your-mcp-tunnel-url/mcp
Pros:
- ✅ App accessible to anyone via Netlify
- ✅ Works with ChatGPT
- ✅ No backend hosting costs
Cons:
- ❌ Requires two tunnels (both must stay active)
- ❌ Local machine must be running 24/7
1. Install ngrok:
   - Download from https://ngrok.com or `brew install ngrok`

2. Start ngrok in a new terminal (keep the MCP server running):

   ngrok http 3000

   For a custom domain (requires a free ngrok account):

   ngrok http 3000 --domain=your-name.ngrok-free.app

3. Copy the HTTPS URL from ngrok (e.g., `https://abc123.ngrok-free.app`)

4. Configure ChatGPT:
   - Open ChatGPT → Settings → Personalization → Model Context Protocol
   - Add a server:
     - Name: `3d-model-server`
     - URL: `https://your-ngrok-url.ngrok-free.app/mcp` (⚠️ include `/mcp` at the end!)
     - Transport: HTTP or Streamable HTTP

5. Start the web app (optional but recommended):

   npm run dev

   Note: If running the app locally, you don't need to tunnel the WebSocket (port 3001). The app will connect to `ws://localhost:3001` directly. Only the MCP HTTP endpoint (port 3000) needs to be tunneled for ChatGPT access.
1. Install localtunnel:

   npm install -g localtunnel

2. Start localtunnel in a new terminal (keep the MCP server running):

   lt --port 3000 --subdomain hello3dllm-mcpserver

   Creates the URL `https://hello3dllm-mcpserver.loca.lt`

3. Configure ChatGPT:
   - Open ChatGPT → Settings → Personalization → Model Context Protocol
   - Add a server:
     - Name: `3d-model-server`
     - URL: `https://hello3dllm-mcpserver.loca.lt/mcp` (⚠️ include `/mcp` at the end!)
     - Transport: HTTP or Streamable HTTP

4. Start the web app (optional but recommended):

   npm run dev

   Note: If running the app locally, you don't need to tunnel the WebSocket (port 3001). The app will connect to `ws://localhost:3001` directly. Only the MCP HTTP endpoint (port 3000) needs to be tunneled for ChatGPT access.
Benefits of localtunnel:
- ✅ Custom subdomains that remain consistent (as long as the subdomain is available)
- ✅ No account required for basic usage
- ✅ Simple command-line interface
Deploy to Railway, Render, or Fly.io. Ensure:
- The server runs on the service's assigned port (or use the `PORT` env var)
- The endpoint is accessible at `https://your-app.railway.app/mcp`
- 404 Not Found: Make sure the URL includes `/mcp` at the end
- Connection refused: Verify the MCP server is running (`npm run mcp:server`)
- Tools not available: Refresh the ChatGPT page after adding the server
- Changes not visible: Ensure the web app (`npm run dev`) is running
The server currently allows all origins (origin: '*'). For production, restrict CORS:
cors({
  origin: ['https://chat.openai.com', 'https://chatgpt.com']
})

✅ This configuration has been tested and works! You can host the front-end on Netlify while running the MCP server locally, using tunnels for external access.
Architecture:
- MCP server runs locally (ports 3000 and 3001)
- MCP HTTP endpoint exposed via tunnel (for ChatGPT access)
- WebSocket exposed via tunnel (for Netlify app access)
- Front-end deployed to Netlify
Why tunnels are needed:
1. Netlify app → Local WebSocket: The app served by Netlify runs in the browser and needs to connect to your local WebSocket server. Since it can't access `localhost:3001` directly, you need a tunnel.
2. ChatGPT → Local MCP server: ChatGPT runs in the cloud and cannot access `localhost:3000` on your desktop. It needs a publicly accessible URL via a tunnel.
Solution: Expose both ports using tunneling services (ngrok or localtunnel).
Setup Steps:
1. Start your local MCP server:

   # Important: use the -u parameter with your Netlify URL
   node server.js --browser-url https://your-app.netlify.app

   Replace `https://your-app.netlify.app` with your actual Netlify URL.

   Alternative syntax:

   node server.js -u https://your-app.netlify.app

   Why this is needed: The `-u` (or `--browser-url`) parameter tells the MCP server what URL to use when generating connection links. Without it, the server defaults to `http://localhost:5173`, which won't work for Netlify deployments.

   The server runs on http://localhost:3000/mcp (MCP) and ws://localhost:3001 (WebSocket).

2. Create tunnels using ngrok or localtunnel:

   Option A: Using ngrok

   Terminal 1, MCP HTTP tunnel (for ChatGPT):

   ngrok http 3000

   Copy the HTTPS URL (e.g., `https://abc123.ngrok-free.app`)

   Terminal 2, WebSocket tunnel (for the front-end):

   ngrok http 3001

   Copy the HTTPS URL (e.g., `https://xyz789.ngrok-free.app`)

   ⚠️ Important: ngrok tunnels HTTP/HTTPS, and WebSocket connections work over these tunnels. Use the HTTPS URL with the `wss://` protocol.

   Option B: Using localtunnel (alternative)

   Install localtunnel:

   npm install -g localtunnel

   Terminal 1, MCP HTTP tunnel (for ChatGPT):

   lt --port 3000 --subdomain hello3dllm-mcpserver

   Creates the URL `https://hello3dllm-mcpserver.loca.lt`

   Terminal 2, WebSocket tunnel (for the front-end):

   lt --port 3001 --subdomain hello3dllm-websocket

   Creates the URL `https://hello3dllm-websocket.loca.lt`

   ⚠️ Important: Use the `wss://` protocol for WebSocket connections (e.g., `wss://hello3dllm-websocket.loca.lt`)

   Benefits of localtunnel:
   - ✅ Custom subdomains that remain consistent (as long as the subdomain is available)
   - ✅ No account required for basic usage
   - ✅ Simple command-line interface

3. Configure ChatGPT:
   - Open ChatGPT → Settings → Personalization → Model Context Protocol
   - Add a server:
     - Name: `3d-model-server`
     - URL:
       - If using ngrok: `https://your-mcp-ngrok-url.ngrok-free.app/mcp`
       - If using localtunnel: `https://hello3dllm-mcpserver.loca.lt/mcp`
       - ⚠️ Include `/mcp` at the end!
     - Transport: HTTP or Streamable HTTP

4. Deploy the front-end to Netlify:
   - Build command: `npm run build`
   - Publish directory: `dist`
   - Environment variable: set `VITE_WS_URL` to your WebSocket tunnel URL:
     - If using ngrok: `VITE_WS_URL=wss://your-websocket-ngrok-url.ngrok-free.app`
     - If using localtunnel: `VITE_WS_URL=wss://hello3dllm-websocket.loca.lt`
     - ⚠️ Use `wss://` (not `ws://`) and the HTTPS tunnel URL

5. Keep everything running while using the application:
   - ✅ Your local MCP server must stay running
   - ✅ Both tunnels (ngrok or localtunnel) must stay active
   - ✅ Your local machine must be connected to the internet
Important Notes:
- Tunnel URL stability:
  - ngrok: URLs change each time you restart ngrok on the free tier (unless you use a paid plan with a custom domain). You'll need to update the Netlify env vars and ChatGPT config each time.
  - localtunnel: Custom subdomains remain consistent as long as the subdomain is available and the tunnel stays active, which makes it easier to maintain stable URLs.

- Redeploy Netlify after URL changes: When your WebSocket tunnel URL changes, you must update `VITE_WS_URL` in Netlify and trigger a new deployment for the change to take effect.

- Keep tunnels active: Both tunnels must remain running. If either tunnel stops, the corresponding connection will fail.
Pros:
- ✅ Free hosting for front-end (Netlify)
- ✅ No backend hosting costs
- ✅ Full control over MCP server
- ✅ Easy to test and iterate locally
Cons:
- ❌ Requires your local machine to be running 24/7 for production use
- ❌ Two tunnels needed (ngrok free tier has limits; localtunnel is free but subdomains may be taken)
- ❌ ngrok free tier URLs change on restart (requires updating Netlify env vars and ChatGPT config each time)
- ❌ Network-dependent (your internet connection must be stable)
- ❌ Must manually update and redeploy Netlify whenever tunnel WebSocket URL changes (less frequent with localtunnel's stable subdomains)
- ❌ Must manually update ChatGPT MCP server URL whenever tunnel MCP URL changes (less frequent with localtunnel's stable subdomains)
Alternative: Single tunnel. If you want to use a single tunnel (ngrok or localtunnel), you could modify the server to serve both MCP and WebSocket on the same port, but this requires code changes and may complicate the setup.
- These platforms use serverless functions that don't support persistent WebSocket connections
- The MCP server requires long-running SSE (Server-Sent Events) streams
- Serverless functions are stateless and can't maintain in-memory session state
- Custom ports (3000, 3001) are not supported
✅ The front-end can be hosted on Netlify/Vercel, but requires a separate backend server for the MCP server and WebSocket.
Front-End on Netlify/Vercel + Backend on Railway/Render/Fly.io
Front-End Deployment (Netlify):
1. Build configuration:
   - Build command: `npm run build`
   - Publish directory: `dist`

2. Environment variables:
   - `VITE_WS_URL`: Your backend WebSocket URL (e.g., `wss://your-backend.railway.app` or `wss://your-backend.railway.app:3001`)
   - ⚠️ Important: Use `wss://` (secure WebSocket) for HTTPS sites

3. Deploy:
   - Connect your repository to Netlify
   - Configure the build settings
   - Set the `VITE_WS_URL` environment variable
   - Deploy
Backend Deployment (Railway/Render/Fly.io):
1. Environment variables:

   MCP_PORT=3000
   WS_PORT=3001
   NODE_ENV=production

   - Note: Some platforms use `PORT` instead; check your platform's documentation
   - If your platform assigns a single port, you may need to use the same port for both MCP and WebSocket

2. Build & start commands:
   - Build command: `npm install && npm run build` (builds the front-end for unified serving)
   - Start command: `npm start` (runs `server.js`)

3. Deploy:
   - Connect your repository
   - Set the environment variables
   - Deploy

4. Update the front-end WebSocket URL:
   - After the backend deployment, update `VITE_WS_URL` in Netlify to match your backend URL
   - Format: `wss://your-backend-domain.com` (or with a port if needed)
Full Stack on Railway/Render/Fly.io
Deploy both front-end and backend together on a single platform:
1. Environment variables:

   MCP_PORT=3000
   WS_PORT=3001
   NODE_ENV=production
   VITE_WS_URL=wss://your-app-domain.com  # Or use relative URL detection

2. Build & start commands:
   - Build command: `npm install && npm run build`
   - Start command: `npm start`

3. How it works:
   - The server automatically serves static files from the `dist/` folder
   - The front-end connects to the WebSocket on the same domain
   - Single deployment, simpler configuration

4. WebSocket URL configuration:
   - For same-domain deployment, you can use relative WebSocket URLs or detect the current hostname
   - Update `WebSocketClient.js` if needed to auto-detect the WebSocket URL from the current page location
Railway:
- ✅ Supports persistent processes and WebSocket
- ✅ Free tier available (with usage limits)
- ✅ Automatic HTTPS
- Set the `MCP_PORT` and `WS_PORT` environment variables

Render:
- ✅ Supports persistent processes and WebSocket
- ✅ Free tier available (spins down after inactivity)
- ✅ Automatic HTTPS
- May need to configure health checks

Fly.io:
- ✅ Supports persistent processes and WebSocket
- ✅ Free tier available
- ✅ Automatic HTTPS
- Requires a `fly.toml` configuration file

Netlify:
- ✅ Excellent for static sites
- ✅ Free tier with generous limits
- ✅ Automatic HTTPS
- ❌ Cannot host the MCP server or WebSocket server
- Must set `VITE_WS_URL` to an external backend

Vercel:
- ✅ Excellent for static sites
- ✅ Free tier available
- ✅ Automatic HTTPS
- ❌ Cannot host the MCP server or WebSocket server
- Must set `VITE_WS_URL` to an external backend
1. CORS configuration:

   // In server.js, replace:
   cors({ origin: '*' })
   // With:
   cors({
     origin: [
       'https://your-frontend-domain.netlify.app',
       'https://chat.openai.com',
       'https://chatgpt.com'
     ]
   })

2. WebSocket security:
   - Always use `wss://` (secure WebSocket) in production
   - Ensure your hosting platform provides SSL/TLS certificates

3. Environment variables:
   - Never commit `.env` files to version control
   - Use platform-specific secret management
   - Rotate secrets regularly

4. Rate limiting:
   - Consider adding rate limiting for MCP endpoints
   - Protect against abuse and DDoS attacks

5. Monitoring:
   - Set up error tracking (e.g., Sentry)
   - Monitor WebSocket connection health
   - Track MCP session usage
1. Test the MCP endpoint:

   curl https://your-backend-domain.com/mcp

2. Test the WebSocket connection:
   - Open the browser console on your deployed front-end
   - Check for WebSocket connection logs
   - Verify the connection uses the `wss://` protocol

3. Test end-to-end:
   - Connect an MCP client (ChatGPT, Cursor, etc.) to your deployed MCP endpoint
   - Make a tool call (e.g., "change model color to red")
   - Verify the changes appear in the browser
WebSocket Connection Fails:
- Verify
VITE_WS_URLis set correctly in front-end deployment - Check that backend WebSocket server is running
- Ensure firewall/security groups allow WebSocket connections
- Verify SSL certificate is valid (for
wss://)
**MCP Client Can't Connect:**
- Verify the MCP endpoint is accessible: `https://your-backend.com/mcp`
- Check that CORS settings allow your MCP client's origin
- Review server logs for connection errors
**Front-End Can't Connect to WebSocket:**
- Check the browser console for connection errors
- Verify the `VITE_WS_URL` environment variable is set
- Ensure the WebSocket URL uses the correct protocol (`ws://` vs `wss://`)
- Check that the backend WebSocket server is accessible from the front-end domain
**Static Files Not Serving:**
- Verify `npm run build` completed successfully
- Check that the `dist/` folder exists in the deployment
- Review server logs for static file serving messages
The server supports two transport modes:
```
┌─────────────────┐         ┌──────────────┐         ┌─────────────┐
│  Claude Desktop │──stdin▶ │  MCP Server  │────────▶│  WebSocket  │
│  (Subprocess)   │◀─stdout │  (server.js) │         │   Server    │
└─────────────────┘         └──────────────┘         └─────────────┘
                                   │                        │
                                   │               ┌────────▼────────┐
                                   │               │   Browser App   │
                                   │               │  (Application)  │
                                   │               └────────┬────────┘
                                   │                        │
                                   │               ┌────────▼────────┐
                                   └──────────────▶│  SceneManager   │
                                                   │ (Model Control) │
                                                   └─────────────────┘
```
```
┌─────────────────┐          ┌──────────────┐         ┌─────────────┐
│   MCP Client    │──HTTP──▶ │  MCP Server  │────────▶│  WebSocket  │
│ (AI Assistant)  │──SSE───▶ │  (server.js) │         │   Server    │
└─────────────────┘          └──────────────┘         └─────────────┘
                                   │                        │
                                   │               ┌────────▼────────┐
                                   │               │   Browser App   │
                                   │               │  (Application)  │
                                   │               └────────┬────────┘
                                   │                        │
                                   │               ┌────────▼────────┐
                                   └──────────────▶│  SceneManager   │
                                                   │ (Model Control) │
                                                   └─────────────────┘
```
**How it works:**
1. The MCP client sends tool call requests to the MCP server (via STDIO or HTTP/SSE)
2. The MCP server auto-detects the transport mode and processes the request accordingly
3. The MCP server broadcasts commands via WebSocket to connected browser clients
4. The browser app receives the WebSocket message and updates the model
5. Changes are immediately visible in the 3D scene
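The WebSocket broadcast step above can be sketched in plain JavaScript. The function name and stand-in client objects below are illustrative only, assuming the `ws` library convention that an open socket reports `readyState === 1`:

```javascript
// Illustrative sketch of the broadcast step: serialize a command once and
// send it to every connected browser client whose socket is still open.
const OPEN = 1; // WebSocket readyState value for an open connection

function broadcastToClients(clients, command) {
  const message = JSON.stringify(command);
  let delivered = 0;
  for (const client of clients) {
    if (client.readyState === OPEN) {
      client.send(message);
      delivered += 1;
    }
  }
  return delivered; // number of clients that received the command
}

// Example with stand-in client objects (a real server would iterate the
// WebSocketServer's connected clients instead):
const clients = [
  { readyState: 1, send: (msg) => console.log('sent:', msg) },
  { readyState: 3, send: () => {} } // closed socket is skipped
];
broadcastToClients(clients, { type: 'setColor', color: 'red' });
```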
**Transport Detection:**
- **STDIO Mode:** automatically detected when `stdin` is not a TTY (launched as a subprocess)
- **HTTP Mode:** automatically detected when `stdin` is a TTY (manual execution)
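That detection can be sketched with Node's `process.stdin.isTTY` flag (illustrative only; the actual check in `server.js` may differ):

```javascript
// Sketch of transport auto-detection: when launched as a subprocess
// (e.g. by Claude Desktop), stdin is a pipe rather than a TTY.
function detectTransportMode(stdinIsTTY = process.stdin.isTTY) {
  return stdinIsTTY ? 'http' : 'stdio';
}

console.log(detectTransportMode(true));  // 'http'  (run manually in a terminal)
console.log(detectTransportMode(false)); // 'stdio' (spawned as a subprocess)
```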
```
Hello3DLLM/
├── server.js                  # MCP server with WebSocket bridge
├── src/
│   ├── Application.js         # Main app with WebSocket integration
│   ├── SceneManager.js        # Scene management with model manipulation methods
│   ├── WebSocketClient.js     # WebSocket client for browser
│   ├── Model.js               # Model class definition
│   ├── CameraController.js    # Camera controls
│   ├── RotationController.js  # Rotation handling
│   └── main.js                # Entry point
├── package.json               # Dependencies and scripts
└── README.md                  # This file
```
**Server Configuration:**

```bash
# Basic usage (defaults to localhost:5173 for browser URL)
MCP_PORT=3000 WS_PORT=3001 node server.js

# With browser URL environment variable
BROWSER_URL=https://your-app.netlify.app node server.js

# With command-line argument (overrides environment variable)
node server.js --browser-url https://your-app.netlify.app
```

**Browser URL Configuration Priority:**
1. Command-line argument (`--browser-url` or `-u`) - highest priority
2. Environment variable (`BROWSER_URL`)
3. Default (`http://localhost:5173`) - lowest priority
**Command-Line Options:**
- `--browser-url <url>` or `-u <url>`: Set the browser URL for connection links
- `--help` or `-h`: Show usage help
**Example:**

```bash
# Use Netlify URL for connection links
node server.js --browser-url https://my-app.netlify.app

# Or using short form
node server.js -u https://my-app.netlify.app

# Use localhost (default)
node server.js
```

**Front-End Configuration:**
The `.env` file in the project root contains the default browser URL configuration:

```bash
# Browser URL for the 3D app
# Default: localhost (for local development)
# Change to your Netlify URL if using a Netlify-hosted app
BROWSER_URL=http://localhost:5173
```

**WebSocket URL (optional):**
For Netlify deployments, you may need to set `VITE_WS_URL` in Netlify's environment variables. For local development, the front-end automatically falls back to `ws://localhost:3001` when `VITE_WS_URL` is not set, making local development seamless.
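That fallback logic can be expressed as a tiny helper (a hypothetical sketch; in the actual Vite front-end the env object would be `import.meta.env`, and `src/WebSocketClient.js` may structure this differently):

```javascript
// Sketch of the WebSocket URL fallback: prefer the build-time VITE_WS_URL,
// otherwise use the local development default.
function resolveWsUrl(env) {
  return env.VITE_WS_URL || 'ws://localhost:3001';
}

console.log(resolveWsUrl({ VITE_WS_URL: 'wss://backend.example.com' })); // wss://backend.example.com
console.log(resolveWsUrl({})); // ws://localhost:3001
```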
```bash
npm run build
npm run preview  # Preview production build
```

When adding a new MCP tool:
1. **Register the tool in `server.js`** using `mcpServer.registerTool()`:

   ```js
   mcpServer.registerTool(
     'your_tool_name',
     {
       title: 'Your Tool Title',
       description: 'Description of what the tool does',
       inputSchema: {
         param1: z.string().describe('Parameter description')
       }
     },
     async ({ param1 }) => {
       broadcastToClients({ type: 'yourCommandType', param1: param1 });
       return { content: [{ type: 'text', text: 'Success' }] };
     }
   );
   ```
2. **Add a handler in `src/Application.js`:**

   ```js
   case 'yourCommandType':
     this.sceneManager.yourMethod(command.param1);
     break;
   ```
3. **Implement the method in `src/SceneManager.js`:**

   ```js
   yourMethod(param1) {
     // Your implementation
   }
   ```
4. **Update the README** with tool documentation
5. **Restart the MCP server** and refresh your MCP client
See the `change_background_color` tool implementation in the codebase for a complete example.
- Ensure the MCP server is running (`npm run mcp:server` or via Claude Desktop)
- Check that port 3001 is not in use: `lsof -i :3001`
- Check the browser console for connection errors
- Verify the browser is connected with the correct session ID
**For HTTP Mode (ChatGPT, Manual):**
- Verify the MCP server is running on port 3000: `lsof -i :3000`
- Check the endpoint URL: `http://localhost:3000/mcp`
- Ensure no firewall is blocking the connection
**For STDIO Mode (Claude Desktop Subprocess):**
- Check the Claude Desktop logs: `~/Library/Logs/Claude/mcp-server-3d-model-server.log` (macOS)
- Verify the server path in the configuration is correct and absolute
- Ensure no other server instance is running (ports 3000/3001 must be free)
- Restart Claude Desktop completely
If you see "port already in use" errors:

```bash
# Check what's using the ports
lsof -i :3000 -i :3001

# Kill processes on those ports
lsof -ti :3000 -ti :3001 | xargs kill -9

# Or kill any running server.js processes
pkill -f "node.*server.js"
```

**Important:** When using Claude Desktop in subprocess mode, don't run `node server.js` manually - let Claude Desktop manage it.
- Check the browser console for WebSocket errors
- Verify the browser app is running (`npm run dev`)
- Ensure the browser is connected with the correct session ID
- Verify the WebSocket server is running on port 3001
- In STDIO mode, check that the browser URL includes `?sessionId=<unique-uuid>` (each process gets a unique session ID)
MIT
Contributions welcome! Please feel free to submit a Pull Request.
