aidenlab/Hello3DLLM


Hello3DLLM - 3D Model Visualization with MCP Server

An interactive 3D model visualization built with Three.js, enhanced with a Model Context Protocol (MCP) server that lets AI assistants and other MCP clients manipulate the 3D model in real time.

Features

  • Interactive 3D Model: Rotate with mouse/touch, zoom with mouse wheel/pinch
  • MCP Server Integration: Control the model remotely via MCP tools
  • Real-time Updates: Changes made through MCP tools are instantly visible in the browser
  • WebSocket Communication: Bidirectional communication between MCP server and browser app
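
The real-time update path in the features above can be pictured as a small browser-side dispatcher: the MCP server relays each tool call over the WebSocket, and the app applies it to the scene. A minimal sketch, assuming a `{ tool, args }` message shape and plain state fields (the app's actual message protocol and Three.js wiring may differ):

```javascript
// Hypothetical scene state; the real app mutates Three.js objects
// directly, but plain fields keep the dispatch logic easy to follow.
function applyToolCall(state, msg) {
  switch (msg.tool) {
    case "change_model_color":
      state.modelColor = msg.args.color;
      break;
    case "change_background_color":
      state.backgroundColor = msg.args.color;
      break;
    case "change_model_size":
      // Uniform scale: one factor applied to all three axes.
      state.scale = { x: msg.args.size, y: msg.args.size, z: msg.args.size };
      break;
    case "scale_model":
      // Independent per-axis scale factors.
      state.scale = { x: msg.args.x, y: msg.args.y, z: msg.args.z };
      break;
    default:
      break; // ignore tools this sketch doesn't model
  }
  return state;
}

// In the real app, something like this would run inside the
// WebSocket client's message handler:
//   ws.onmessage = (e) => applyToolCall(sceneState, JSON.parse(e.data));
```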

Quick Start

Prerequisites

  • Node.js (v18 or higher)
  • npm

Installation & Running

  1. Install dependencies:

    npm install
  2. Start the MCP server (in one terminal):

    Default (localhost development):

    node server.js

    Server starts on http://localhost:3000/mcp (MCP) and ws://localhost:3001 (WebSocket)

    The server defaults to http://localhost:5173 when generating browser connection URLs. This default can be changed in the .env file or overridden via command-line arguments.

    For Netlify deployment (optional/advanced):

    node server.js --browser-url https://your-app.netlify.app

    ⚠️ Note: When hosting the frontend on Netlify, you can use the -u (or --browser-url) parameter with your Netlify URL. This ensures that connection links generated by the MCP server point to your Netlify deployment. See Netlify Setup for more details.

    Alternative syntax:

    node server.js -u https://your-app.netlify.app

    See help:

    node server.js --help
  3. Start the web application (in another terminal):

    npm run dev

    Open http://localhost:5173 in your browser

  4. Connect an MCP client (see Connecting MCP Clients below)
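
The browser-URL precedence described in step 2 (command-line flag, then .env, then the localhost default) can be sketched as a small resolver. This illustrates the documented behavior; it is not the actual code in server.js:

```javascript
// Resolve the browser URL: --browser-url / -u flag wins, then the
// BROWSER_URL environment variable (typically loaded from .env),
// then the localhost default.
function resolveBrowserUrl(argv, env) {
  for (let i = 0; i < argv.length; i++) {
    if (argv[i] === "--browser-url" || argv[i] === "-u") {
      return argv[i + 1];
    }
  }
  if (env.BROWSER_URL) return env.BROWSER_URL;
  return "http://localhost:5173";
}
```

For example, `resolveBrowserUrl(process.argv.slice(2), process.env)` would return the Netlify URL when the server is started with `-u https://your-app.netlify.app`.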

Connecting MCP Clients

Quick Comparison: MCP Client Options

| Client | Cost | Works with Localhost | Requires Public URL | Best For |
| --- | --- | --- | --- | --- |
| MCP Inspector | Free | ✅ Yes | ❌ No | Testing & debugging tools |
| Cursor | Free | ✅ Yes | ❌ No | Full IDE with AI assistant |
| VS Code + MCP | Free | ✅ Yes | ❌ No | VS Code users |
| Claude Code | Free | ✅ Yes | ❌ No | CLI-based testing |
| Continue.dev | Free | ✅ Yes | ❌ No | VS Code extension users |
| Claude Desktop | Free | ✅ Yes (subprocess mode) | ✅ Yes (HTTP mode + tunnel) | Desktop app with Claude |
| ChatGPT | Paid (Plus) | ❌ No | ✅ Yes (tunnel needed) | OpenAI integration |

Recommendation for Testing: Start with MCP Inspector (free, no setup needed) or Cursor (free IDE with built-in MCP support).

Local Clients (Cursor, VS Code, Claude Code)

These clients work with localhost, so no additional setup is needed.

Cursor

Option 1: Deeplink (macOS)

open 'cursor://anysphere.cursor-deeplink/mcp/install?name=3d-model-server&config=eyJ1cmwiOiJodHRwOi8vbG9jYWxob3N0OjMwMDAvbWNwIn0='

Option 2: Manual Configuration

  1. Open Cursor Settings → Features → Model Context Protocol
  2. Add server configuration:
    {
      "mcpServers": {
        "3d-model-server": {
          "url": "http://localhost:3000/mcp"
        }
      }
    }

VS Code

Add to your MCP configuration:

{
  "name": "3d-model-server",
  "type": "http",
  "url": "http://localhost:3000/mcp"
}

Claude Code

claude mcp add --transport http 3d-model-server http://localhost:3000/mcp

Continue.dev

Continue.dev is a free VS Code extension that provides AI coding assistance with MCP support.

  1. Install Continue.dev extension in VS Code
  2. Configure MCP server in Continue.dev settings:
    {
      "mcpServers": {
        "3d-model-server": {
          "url": "http://localhost:3000/mcp"
        }
      }
    }

MCP Inspector

The MCP Inspector is a developer tool for testing and debugging MCP servers. It provides an interactive web interface to explore server capabilities, test tools, and view resources.

MCP Inspector Interface

Example: MCP Inspector connected to the server, showing the change_background_color tool being tested with the color "tin".

To use with your MCP server:

  1. Make sure your MCP server is running:

    node server.js

    Server should be running on http://localhost:3000/mcp

  2. Start the MCP Inspector:

    npx @modelcontextprotocol/inspector http://localhost:3000/mcp
  3. Open the Inspector UI:

    • The inspector will start a web interface (usually on http://localhost:5173)
    • Open your browser and navigate to the URL shown in the terminal
  4. Configure the connection:

    • Transport Type: Select "Streamable HTTP" (this matches your server's transport)
    • URL: Enter http://localhost:3000/mcp (note: port 3000, not 3001)
    • Connection Type: Select "Direct"
    • Click the "Connect" button
    • You should see a green dot and "Connected" status when successful
  5. Browse and test tools:

    • Click on the "Tools" tab in the top navigation bar
    • You'll see a list of all available tools in the middle pane:
      • change_model_color - Change the color of the 3D model
      • change_model_size - Change the uniform size of the model
      • scale_model - Scale the model independently in each dimension
      • change_background_color - Change the background color of the scene
      • set_key_light_intensity - Set the intensity of the key light
      • set_key_light_position - Set the position of the key light
      • And more...
  6. Call a tool:

    • Click on any tool name in the tools list to select it
    • The right pane will show the tool's description and parameters
    • Enter the required parameter value(s) in the input field(s):
      • For change_background_color: Enter a color name (e.g., "tin") or hex code (e.g., "#878687")
      • For change_model_size: Enter a number (e.g., 2.5)
      • For scale_model: Enter values for x, y, z axes
    • Click the "Run Tool" button (paper airplane icon)
    • The result will appear below, showing "Success" and the response message
    • If your 3D app is running and connected, you'll see the changes reflected immediately
  7. View history:

    • The bottom-left "History" pane shows all your previous tool calls
    • Click on any history entry to see its details
    • Use "Clear" to remove history entries

Example: To change the background color to tin:

  1. Select change_background_color from the tools list
  2. Enter tin in the "color" parameter field
  3. Click "Run Tool"
  4. You'll see: "Background color changed to tin (#878687)"
  5. The background in your 3D app will update to the new color

Note: The Inspector connects directly to your MCP HTTP endpoint. Make sure your server is running before starting the Inspector. If you're using a tunneled server (for remote access), you can also connect to the tunneled URL:

npx @modelcontextprotocol/inspector https://your-tunnel-url.ngrok-free.app/mcp

Remote Clients (Require Public Access)

These clients require a publicly accessible server (use ngrok or localtunnel). See ChatGPT Setup for tunneling instructions.

Claude Desktop

Claude Desktop is Anthropic's free desktop application that supports MCP servers.

Claude Desktop supports two connection modes:

  1. Subprocess Mode (Recommended for localhost) - Claude Desktop manages the server process automatically
  2. HTTP/SSE Mode (For remote/tunneled servers) - Connect to an already-running server

Prerequisites:


Option 1: Subprocess Mode (Recommended for Localhost)

This is the simplest setup - Claude Desktop will start and manage your server automatically.

Step-by-Step Setup:

  1. Make sure your server is NOT already running:

    • If you have node server.js running in a terminal, stop it (Ctrl+C)
    • Claude Desktop will start the server automatically
    • Having both running will cause port conflicts
  2. Locate Claude Desktop configuration file:

    macOS:

    ~/Library/Application Support/Claude/claude_desktop_config.json
    

    Windows:

    %APPDATA%\Claude\claude_desktop_config.json
    

    Linux:

    ~/.config/Claude/claude_desktop_config.json
    
  3. Edit the configuration file:

    If the file doesn't exist, create it. Add the mcpServers section with subprocess configuration:

    {
      "mcpServers": {
        "3d-model-server": {
          "command": "node",
          "args": ["/Users/turner/MCPDevelopment/Hello3DLLM/server.js"]
        }
      }
    }

    ⚠️ Important:

    • Replace /Users/turner/MCPDevelopment/Hello3DLLM/server.js with the absolute path to your server.js file
    • Use node command (or full path like /Users/turner/.nvm/versions/node/v22.14.0/bin/node if using nvm)
    • Make sure no other instance of the server is running
  4. Restart Claude Desktop:

    • Quit Claude Desktop completely
    • Reopen Claude Desktop
    • Claude Desktop will automatically start your MCP server
  5. Verify the connection:

    • In Claude Desktop, ask: "What tools do you have available?"
    • Claude should list your MCP tools (e.g., change_model_color, change_model_size, etc.)
    • Check Claude Desktop logs if there are issues: ~/Library/Logs/Claude/mcp-server-3d-model-server.log (macOS)
  6. Start your 3D app (optional but recommended):

    npm run dev

    This allows you to see changes in real-time when Claude calls the MCP tools.

  7. Connect to the 3D app:

    • Ask Claude Desktop: "How do I connect to the 3D app?" or "Get browser URL"
    • Claude will provide a connection URL with your unique session ID (e.g., http://localhost:5173?sessionId=<unique-uuid>)
    • Copy and paste the URL into your browser
    • The browser will connect to your Claude Desktop session

Using Netlify-Hosted App with Claude Desktop (Optional/Advanced):

⚠️ Note: The default setup uses localhost. Netlify setup requires a WebSocket tunnel and is more complex. See Netlify Setup for details.

If you want to use your Netlify-hosted app instead of running locally:

  1. Update .env file (or set BROWSER_URL environment variable):

    Edit the .env file in your project root:

    BROWSER_URL=https://your-app.netlify.app

    Or set environment variable: macOS/Linux - Add to your shell profile (~/.zshrc or ~/.bashrc):

    export BROWSER_URL=https://your-app.netlify.app

    Then restart your terminal or run: source ~/.zshrc

    Windows - Set system environment variable or use PowerShell:

    $env:BROWSER_URL="https://your-app.netlify.app"
  2. Create WebSocket tunnel (so Netlify can connect to your local WebSocket):

    # Using ngrok
    ngrok http 3001
    
    # Or using localtunnel
    lt --port 3001 --subdomain hello3dllm-websocket

    Copy the tunnel URL (e.g., wss://hello3dllm-websocket.loca.lt)

  3. Configure Netlify:

    • Go to your Netlify site settings
    • Add environment variable: VITE_WS_URL=wss://your-websocket-tunnel-url
    • Redeploy your site
  4. Restart Claude Desktop (to pick up the BROWSER_URL from .env file or environment variable)

  5. Connect:

    • Ask Claude Desktop: "How do I connect to the 3D app?"
    • It will provide a Netlify URL with your unique session ID (e.g., https://your-app.netlify.app?sessionId=<unique-uuid>)
    • Open that URL in your browser

Note: Keep the WebSocket tunnel running while using the app. The tunnel URL may change if you restart it.

Troubleshooting Subprocess Mode:

  • Port already in use error:

    • Make sure you've stopped any manually running server instances
    • Check if port 3000 or 3001 is in use: lsof -i :3000 or lsof -i :3001 (macOS/Linux)
    • Kill the process if needed: kill -9 <PID>
    • Or use: lsof -ti :3000 -ti :3001 | xargs kill -9
  • Server not starting:

    • Verify the path to server.js is correct and absolute
    • Check that node command is in your PATH, or use full path to node executable
    • Check Claude Desktop logs: ~/Library/Logs/Claude/mcp-server-3d-model-server.log (macOS)
  • Tools not appearing:

    • Check Claude Desktop logs for errors
    • Verify the server started successfully
    • Restart Claude Desktop completely
  • Netlify app not connecting:

    • Verify WebSocket tunnel is running
    • Check VITE_WS_URL is set correctly in Netlify (use wss:// protocol)
    • Ensure tunnel URL matches what's configured in Netlify
    • Check browser console for WebSocket connection errors

Option 2: HTTP/SSE Mode (For Remote/Tunneled Servers)

Use this mode if you want to run the server manually or connect via a tunnel.

Step-by-Step Setup:

  1. Start your MCP server manually:

    node server.js

    Server should be running on http://localhost:3000/mcp

  2. Create a tunnel for your MCP server (if connecting remotely):

    Option A: Using ngrok

    ngrok http 3000

    Copy the HTTPS URL shown (e.g., https://abc123.ngrok-free.app)

    Option B: Using localtunnel

    lt --port 3000 --subdomain hello3dllm-mcpserver

    Creates URL: https://hello3dllm-mcpserver.loca.lt

    ⚠️ Important: Keep this tunnel running while using Claude Desktop!

  3. Locate Claude Desktop configuration file:

    macOS:

    ~/Library/Application Support/Claude/claude_desktop_config.json
    

    Windows:

    %APPDATA%\Claude\claude_desktop_config.json
    

    Linux:

    ~/.config/Claude/claude_desktop_config.json
    
  4. Edit the configuration file:

    If the file doesn't exist, create it. Add or update the mcpServers section:

    For ngrok:

    {
      "mcpServers": {
        "3d-model-server": {
          "url": "https://abc123.ngrok-free.app/mcp",
          "transport": "sse"
        }
      }
    }

    For localtunnel:

    {
      "mcpServers": {
        "3d-model-server": {
          "url": "https://hello3dllm-mcpserver.loca.lt/mcp",
          "transport": "sse"
        }
      }
    }

    ⚠️ Important Notes:

    • Endpoint: Use /mcp (NOT /sse) - your server handles SSE streams on /mcp
    • Transport: Use "transport": "sse" (Server-Sent Events) - this matches your server's StreamableHTTPServerTransport
    • The endpoint /mcp handles both POST requests (initialization/tool calls) and GET requests (SSE streams)
    • Claude Desktop's example shows /sse, but that's just a generic example - your server uses /mcp

    ⚠️ Important:

    • Replace the URL with your actual tunnel URL
    • Include /mcp at the end of the URL
    • Use https:// (not http://) for the tunnel URL
  5. Restart Claude Desktop:

    • Quit Claude Desktop completely
    • Reopen Claude Desktop
    • The MCP server should now be connected
  6. Verify the connection:

    • In Claude Desktop, ask: "What tools do you have available?"
    • Claude should list your MCP tools (e.g., change_model_color, change_model_size, etc.)
    • If tools aren't showing, check:
      • Tunnel is still running
      • Configuration file has correct URL with /mcp suffix
      • Claude Desktop was fully restarted
  7. Start your 3D app (optional but recommended):

    npm run dev

    This allows you to see changes in real-time when Claude calls the MCP tools.

  8. Connect to the 3D app:

    • Ask Claude Desktop: "How do I connect to the 3D app?" or "Get browser URL"
    • Claude will provide a connection URL with your session ID
    • Copy and paste the URL into your browser
    • The browser will connect to your Claude Desktop session

Troubleshooting:

  • Tools not appearing:

    • Verify tunnel is running (ngrok http 3000 or lt --port 3000)
    • Check configuration file JSON syntax is valid
    • Ensure URL includes /mcp at the end
    • Restart Claude Desktop completely
  • Connection errors:

    • Verify MCP server is running (node server.js)
    • Check tunnel URL is accessible in browser: https://your-tunnel-url/mcp
    • Ensure tunnel hasn't expired (ngrok free tier URLs change on restart)
  • Changes not visible:

    • Make sure your 3D app is running (npm run dev)
    • Verify browser is connected to the correct session
    • Check browser console for WebSocket connection errors

Note: If using ngrok free tier, your tunnel URL will change each time you restart ngrok. You'll need to update the configuration file and restart Claude Desktop when this happens. For a more stable URL, consider using localtunnel with a custom subdomain or ngrok's paid plan with a custom domain.

Recommendation: For local development, use Subprocess Mode (Option 1) - it's simpler and doesn't require tunneling.

ChatGPT (Requires Paid Tier)

ChatGPT requires a publicly accessible server and a paid Plus subscription with developer mode enabled. See ChatGPT Setup for detailed instructions.

Connecting to the 3D App

For Single-User Sessions

  1. Add the MCP tool in your MCP client (ChatGPT, Claude Desktop, etc.)
  2. Ask your AI assistant: "How do I connect to the 3D app?" or "Get browser URL"
  3. Your AI assistant will provide a connection URL with your session ID embedded
  4. Copy and paste the URL into your browser
  5. The browser will automatically connect to your session

Connection URL formats:

  • Claude Desktop (STDIO mode): http://localhost:5173?sessionId=<unique-uuid>
  • ChatGPT/HTTP mode: https://your-app.netlify.app?sessionId=<your-session-id>

For Multi-User Sessions

Each MCP client session gets its own unique session ID. When you ask for the connection URL, the AI assistant provides a URL specific to your session. Multiple users can connect simultaneously, each with their own isolated browser instance.

Note: In STDIO mode (Claude Desktop subprocess), each process instance generates a unique UUID session ID at startup, ensuring that different Claude Desktop users get isolated sessions.
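
Appending the session ID as a query parameter, as in the URL formats above, can be sketched with a small helper (illustrative; `buildConnectionUrl` is not a function from server.js):

```javascript
// Build a per-session connection URL from a base browser URL and a
// session ID. The query-parameter name sessionId matches the URL
// formats shown above.
function buildConnectionUrl(baseUrl, sessionId) {
  const url = new URL(baseUrl);
  url.searchParams.set("sessionId", sessionId);
  return url.toString();
}
```

In STDIO mode this would be paired with a UUID generated at process startup, e.g. `buildConnectionUrl("http://localhost:5173", crypto.randomUUID())`.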

Using the MCP Tools

Once connected, ask your AI assistant to manipulate the model using natural language:

  • Change color: "Change the model to red" or "Make it blue"
  • Change size: "Make the model bigger" or "Set size to 2.5"
  • Scale: "Stretch horizontally" or "Make it tall and thin"
  • Background: "Change background to black"
  • Combined: "Make a red model that's tall and thin"

The AI will automatically call the appropriate MCP tools, and changes appear in real-time in your browser.

Available MCP Tools

get_browser_connection_url

Returns the URL to open in your browser to connect the 3D visualization app. This tool is automatically called when users ask how to connect or how to open the 3D app.

Parameters:

  • None

Example Usage:

Note: The URL includes the current session ID, ensuring each MCP client session connects to its own browser instance.

change_model_color

Changes the color of the 3D model.

Parameters:

  • color (string): Hex color code (e.g., #ff0000 for red)

Example:

{
  "name": "change_model_color",
  "arguments": { "color": "#ff0000" }
}

change_model_size

Changes the uniform size of the model by scaling.

Parameters:

  • size (number): New size value (must be positive, scale factor)

Note: This scales the model uniformly, preserving its shape and position.

Example:

{
  "name": "change_model_size",
  "arguments": { "size": 2.0 }
}

scale_model

Scales the model independently in each dimension (x, y, z axes).

Parameters:

  • x (number): Scale factor for X axis (must be positive)
  • y (number): Scale factor for Y axis (must be positive)
  • z (number): Scale factor for Z axis (must be positive)

Example:

{
  "name": "scale_model",
  "arguments": { "x": 1.5, "y": 1.0, "z": 2.0 }
}

change_background_color

Changes the background color of the 3D scene.

Parameters:

  • color (string): Hex color code (e.g., #000000 for black)

Example:

{
  "name": "change_background_color",
  "arguments": { "color": "#000000" }
}
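
A handler for the tools above would presumably validate its inputs before applying them. A sketch of such checks, assuming six-digit hex codes and positive scale factors as described (server.js's actual validation, including any support for named colors like "tin", is not shown here):

```javascript
// Illustrative validation for the color parameters: accept a
// six-digit hex code such as "#ff0000".
function isHexColor(value) {
  return typeof value === "string" && /^#[0-9a-fA-F]{6}$/.test(value);
}

// Positive, finite scale factors, as required by change_model_size
// and scale_model.
function isPositiveNumber(value) {
  return typeof value === "number" && Number.isFinite(value) && value > 0;
}
```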

ChatGPT Setup

Important: ChatGPT requires a publicly accessible server (not just localhost).

✅ Tested Working Configurations

The following configurations have been tested and confirmed working with ChatGPT:

Configuration 1: MCP Server Tunnelled, WebSocket Local, App Local

Setup:

  • ✅ MCP server running locally on port 3000
  • ✅ MCP server exposed via tunnel (ngrok/localtunnel) for ChatGPT access
  • ✅ WebSocket server running locally on port 3001 (no tunnel needed)
  • ✅ App running locally (npm run dev)

Steps:

  1. Start MCP server: npm run mcp:server
  2. Create tunnel for MCP HTTP endpoint (port 3000):
    ngrok http 3000
    # or
    lt --port 3000 --subdomain hello3dllm-mcpserver
  3. Start local app: npm run dev
  4. Configure ChatGPT with tunneled MCP URL: https://your-tunnel-url/mcp
  5. App connects to local WebSocket: ws://localhost:3001

Pros:

  • ✅ Simple setup (only one tunnel needed)
  • ✅ Fast local WebSocket connection
  • ✅ Works with ChatGPT

Cons:

  • ❌ App must run locally (not accessible to others)

Configuration 2: MCP Server Tunnelled, WebSocket Tunnelled, App on Netlify

Setup:

  • ✅ MCP server running locally on port 3000
  • ✅ MCP server exposed via tunnel for ChatGPT access
  • ✅ WebSocket server running locally on port 3001
  • ✅ WebSocket exposed via tunnel for Netlify app
  • ✅ App deployed to Netlify

Steps:

  1. Start MCP server with Netlify URL:

    node server.js --browser-url https://your-app.netlify.app

    Or using the short form:

    node server.js -u https://your-app.netlify.app

    Important: The -u parameter (or --browser-url) is required so that the MCP server generates correct connection URLs pointing to your Netlify deployment.

  2. Create tunnel for MCP HTTP endpoint (port 3000):

    ngrok http 3000
    # or
    lt --port 3000 --subdomain hello3dllm-mcpserver
  3. Create tunnel for WebSocket (port 3001):

    ngrok http 3001
    # or
    lt --port 3001 --subdomain hello3dllm-websocket
  4. Configure Netlify:

    • Set VITE_WS_URL environment variable to tunneled WebSocket URL (e.g., wss://your-websocket-tunnel-url)
    • Deploy app
  5. Configure ChatGPT with tunneled MCP URL: https://your-mcp-tunnel-url/mcp

Pros:

  • ✅ App accessible to anyone via Netlify
  • ✅ Works with ChatGPT
  • ✅ No backend hosting costs

Cons:

  • ❌ Requires two tunnels (both must stay active)
  • ❌ Local machine must be running 24/7

Option A: Using ngrok (Recommended for Testing)

  1. Install ngrok:

  2. Start ngrok in a new terminal (keep MCP server running):

    ngrok http 3000

    For a custom domain (requires free ngrok account):

    ngrok http 3000 --domain=your-name.ngrok-free.app
  3. Copy the HTTPS URL from ngrok (e.g., https://abc123.ngrok-free.app)

  4. Configure ChatGPT:

    • Open ChatGPT → Settings → Personalization → Model Context Protocol
    • Add server:
      • Name: 3d-model-server
      • URL: https://your-ngrok-url.ngrok-free.app/mcp ⚠️ Include /mcp at the end!
      • Transport: HTTP or Streamable HTTP
  5. Start the web app (optional but recommended):

    npm run dev

    Note: If running the app locally, you don't need to tunnel the WebSocket (port 3001). The app will connect to ws://localhost:3001 directly. Only the MCP HTTP endpoint (port 3000) needs to be tunneled for ChatGPT access.

Option B: Using localtunnel (Alternative to ngrok)

  1. Install localtunnel:

    npm install -g localtunnel
  2. Start localtunnel in a new terminal (keep MCP server running):

    lt --port 3000 --subdomain hello3dllm-mcpserver

    Creates URL: https://hello3dllm-mcpserver.loca.lt

  3. Configure ChatGPT:

    • Open ChatGPT → Settings → Personalization → Model Context Protocol
    • Add server:
      • Name: 3d-model-server
      • URL: https://hello3dllm-mcpserver.loca.lt/mcp ⚠️ Include /mcp at the end!
      • Transport: HTTP or Streamable HTTP
  4. Start the web app (optional but recommended):

    npm run dev

    Note: If running the app locally, you don't need to tunnel the WebSocket (port 3001). The app will connect to ws://localhost:3001 directly. Only the MCP HTTP endpoint (port 3000) needs to be tunneled for ChatGPT access.

Benefits of localtunnel:

  • ✅ Custom subdomains that remain consistent (as long as the subdomain is available)
  • ✅ No account required for basic usage
  • ✅ Simple command-line interface

Option C: Deploy to Public Service

Deploy to Railway, Render, or Fly.io. Ensure:

  • Server runs on the service's assigned port (or use PORT env var)
  • Endpoint accessible at https://your-app.railway.app/mcp

Troubleshooting

  • 404 Not Found: Make sure URL includes /mcp at the end
  • Connection refused: Verify MCP server is running (npm run mcp:server)
  • Tools not available: Refresh ChatGPT page after adding server
  • Changes not visible: Ensure web app (npm run dev) is running

Security Note

The server currently allows all origins (origin: '*'). For production, restrict CORS:

// In server.js, pass an explicit allowlist instead of '*':
app.use(cors({
  origin: ['https://chat.openai.com', 'https://chatgpt.com']
}));

Production Deployment

Netlify Setup (Optional)

⚠️ Default Setup: The default configuration uses localhost for local development. Netlify setup is optional and requires additional configuration (WebSocket tunneling). See the main Quick Start section for the default localhost setup.

Hybrid Deployment: Front-End on Netlify + Local MCP Server

✅ This configuration has been tested and works! You can host the front-end on Netlify while running the MCP server locally, using tunnels for external access.

Architecture:

  • MCP server runs locally (ports 3000 and 3001)
  • MCP HTTP endpoint exposed via tunnel (for ChatGPT access)
  • WebSocket exposed via tunnel (for Netlify app access)
  • Front-end deployed to Netlify

Why tunnels are needed:

  1. Netlify app → Local WebSocket: The app at Netlify runs in the browser and needs to connect to your local WebSocket server. Since it can't access localhost:3001 directly, you need a tunnel.

  2. ChatGPT → Local MCP server: ChatGPT runs in the cloud and cannot access localhost:3000 on your desktop. It needs a publicly accessible URL via tunnel.

Solution: Expose both ports using tunneling services (ngrok or localtunnel).

Setup Steps:

  1. Start your local MCP server:

    # Important: Use -u parameter with your Netlify URL
    node server.js --browser-url https://your-app.netlify.app

    Replace https://your-app.netlify.app with your actual Netlify URL.

    Alternative syntax:

    node server.js -u https://your-app.netlify.app

    Why this is needed: The -u (or --browser-url) parameter tells the MCP server what URL to use when generating connection links. Without it, the server defaults to http://localhost:5173, which won't work for Netlify deployments.

    Server runs on http://localhost:3000/mcp (MCP) and ws://localhost:3001 (WebSocket)

  2. Create tunnels using ngrok or localtunnel:

    Option A: Using ngrok

    Terminal 1 - MCP HTTP tunnel (for ChatGPT):

    ngrok http 3000

    Copy the HTTPS URL (e.g., https://abc123.ngrok-free.app)

    Terminal 2 - WebSocket tunnel (for front-end):

    ngrok http 3001

    Copy the HTTPS URL (e.g., https://xyz789.ngrok-free.app)

    ⚠️ Important: ngrok tunnels HTTP/HTTPS, and WebSocket connections work over these tunnels. Use the HTTPS URL with wss:// protocol.

    Option B: Using localtunnel (Alternative)

    Install localtunnel:

    npm install -g localtunnel

    Terminal 1 - MCP HTTP tunnel (for ChatGPT):

    lt --port 3000 --subdomain hello3dllm-mcpserver

    Creates URL: https://hello3dllm-mcpserver.loca.lt

    Terminal 2 - WebSocket tunnel (for front-end):

    lt --port 3001 --subdomain hello3dllm-websocket

    Creates URL: https://hello3dllm-websocket.loca.lt

    ⚠️ Important: Use wss:// protocol for WebSocket connections (e.g., wss://hello3dllm-websocket.loca.lt)

    Benefits of localtunnel:

    • ✅ Custom subdomains that remain consistent (as long as the subdomain is available)
    • ✅ No account required for basic usage
    • ✅ Simple command-line interface
  3. Configure ChatGPT:

    • Open ChatGPT → Settings → Personalization → Model Context Protocol
    • Add server:
      • Name: 3d-model-server
      • URL:
        • If using ngrok: https://your-mcp-ngrok-url.ngrok-free.app/mcp
        • If using localtunnel: https://hello3dllm-mcpserver.loca.lt/mcp
        • ⚠️ Include /mcp at the end!
      • Transport: HTTP or Streamable HTTP
  4. Deploy front-end to Netlify:

    • Build command: npm run build
    • Publish directory: dist
    • Environment Variable: Set VITE_WS_URL to your WebSocket tunnel URL:
      • If using ngrok: VITE_WS_URL=wss://your-websocket-ngrok-url.ngrok-free.app
      • If using localtunnel: VITE_WS_URL=wss://hello3dllm-websocket.loca.lt
      • ⚠️ Use wss:// (not ws://) and the HTTPS tunnel URL
  5. Keep everything running while using the application:

    • ✅ Your local MCP server must stay running
    • ✅ Both tunnels (ngrok or localtunnel) must stay active
    • ✅ Your local machine must be connected to the internet

Important Notes:

  • Tunnel URL stability:

    • ngrok: URLs change each time you restart ngrok on the free tier (unless you use a paid plan with a custom domain). You'll need to update Netlify env vars and ChatGPT config each time.
    • localtunnel: Custom subdomains remain consistent as long as the subdomain is available and the tunnel stays active. This makes it easier to maintain stable URLs.
  • Redeploy Netlify after URL changes: When your tunnel WebSocket URL changes, you must update VITE_WS_URL in Netlify and trigger a new deployment for the change to take effect.

  • Keep tunnels active: Both tunnels must remain running. If either tunnel stops, the corresponding connection will fail.

Pros:

  • ✅ Free hosting for front-end (Netlify)
  • ✅ No backend hosting costs
  • ✅ Full control over MCP server
  • ✅ Easy to test and iterate locally

Cons:

  • ❌ Requires your local machine to be running 24/7 for production use
  • ❌ Two tunnels needed (ngrok free tier has limits; localtunnel is free but subdomains may be taken)
  • ❌ ngrok free tier URLs change on restart (requires updating Netlify env vars and ChatGPT config each time)
  • ❌ Network-dependent (your internet connection must be stable)
  • ❌ Must manually update and redeploy Netlify whenever tunnel WebSocket URL changes (less frequent with localtunnel's stable subdomains)
  • ❌ Must manually update ChatGPT MCP server URL whenever tunnel MCP URL changes (less frequent with localtunnel's stable subdomains)

Alternative: single tunnel. If you want to use a single tunnel (ngrok or localtunnel), you could modify the server to serve both MCP and WebSocket traffic on the same port, but this requires code changes and may complicate the setup.

Important: Hosting Limitations

⚠️ The MCP server cannot be hosted on Netlify or Vercel because:

  • These platforms use serverless functions that don't support persistent WebSocket connections
  • The MCP server requires long-running SSE (Server-Sent Events) streams
  • Serverless functions are stateless and can't maintain in-memory session state
  • Custom ports (3000, 3001) are not supported

✅ The front-end can be hosted on Netlify/Vercel, but requires a separate backend server for the MCP server and WebSocket.
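
The in-memory session state mentioned above is the crux: the server must hold live WebSocket connections keyed by session ID across requests. A sketch of such a registry (illustrative, not server.js's actual structure):

```javascript
// Live sessions keyed by session ID. In the real server each value
// would be an open WebSocket connection.
const sessions = new Map();

function registerSession(sessionId, connection) {
  sessions.set(sessionId, connection);
}

// Forward an MCP tool call to the browser connected for this session.
function routeToolCall(sessionId, message) {
  const conn = sessions.get(sessionId);
  if (!conn) return false; // no browser connected for this session
  conn.send(JSON.stringify(message));
  return true;
}
// A serverless function is recreated per invocation, so `sessions`
// would be empty on every call, hence the persistent-process requirement.
```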

Deployment Options

Option 1: Separate Hosting (Recommended for Flexibility)

Front-End on Netlify/Vercel + Backend on Railway/Render/Fly.io

Front-End Deployment (Netlify):

  1. Build Configuration:

    • Build command: npm run build
    • Publish directory: dist
  2. Environment Variables:

    • VITE_WS_URL: Your backend WebSocket URL (e.g., wss://your-backend.railway.app or wss://your-backend.railway.app:3001)
    • ⚠️ Important: Use wss:// (secure WebSocket) for HTTPS sites
  3. Deploy:

    • Connect your repository to Netlify
    • Configure build settings
    • Set the VITE_WS_URL environment variable
    • Deploy

Backend Deployment (Railway/Render/Fly.io):

  1. Environment Variables:

    MCP_PORT=3000
    WS_PORT=3001
    NODE_ENV=production
    • Note: Some platforms use PORT instead - check your platform's documentation
    • If your platform assigns a single port, you may need to use the same port for both MCP and WebSocket
  2. Build & Start Commands:

    • Build command: npm install && npm run build (builds front-end for unified serving)
    • Start command: npm start (runs server.js)
  3. Deploy:

    • Connect your repository
    • Set environment variables
    • Deploy
  4. Update Front-End WebSocket URL:

    • After backend deployment, update VITE_WS_URL in Netlify to match your backend URL
    • Format: wss://your-backend-domain.com (or with port if needed)

Option 2: Unified Hosting (Simpler Setup)

Full Stack on Railway/Render/Fly.io

Deploy both front-end and backend together on a single platform:

  1. Environment Variables:

    MCP_PORT=3000
    WS_PORT=3001
    NODE_ENV=production
    VITE_WS_URL=wss://your-app-domain.com  # Or use relative URL detection
  2. Build & Start Commands:

    • Build command: npm install && npm run build
    • Start command: npm start
  3. How It Works:

    • The server automatically serves static files from dist/ folder
    • Front-end connects to WebSocket on the same domain
    • Single deployment, simpler configuration
  4. WebSocket URL Configuration:

    • For same-domain deployment, you can use relative WebSocket URLs or detect the current hostname
    • Update WebSocketClient.js if needed to auto-detect the WebSocket URL from the current page location
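One way to implement the auto-detection mentioned in step 4. This is a sketch, not the project's actual code; `resolveWsUrl` and its parameters are illustrative names:

```javascript
// Sketch for WebSocketClient.js: prefer an explicit URL (e.g. VITE_WS_URL),
// otherwise derive the WebSocket URL from the page's own location so a
// same-domain deployment needs no extra configuration.
function resolveWsUrl(envUrl, location) {
  if (envUrl) return envUrl; // explicit override wins
  const scheme = location.protocol === 'https:' ? 'wss' : 'ws';
  return `${scheme}://${location.host}`; // host includes the port, if any
}

// In the browser this might be called as:
// resolveWsUrl(import.meta.env.VITE_WS_URL, window.location)
```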

Platform-Specific Notes

Railway

  • ✅ Supports persistent processes and WebSocket
  • ✅ Free tier available (with usage limits)
  • ✅ Automatic HTTPS
  • Set MCP_PORT and WS_PORT environment variables

Render

  • ✅ Supports persistent processes and WebSocket
  • ✅ Free tier available (spins down after inactivity)
  • ✅ Automatic HTTPS
  • May need to configure health checks

Fly.io

  • ✅ Supports persistent processes and WebSocket
  • ✅ Free tier available
  • ✅ Automatic HTTPS
  • Requires fly.toml configuration file

Netlify (Front-End Only)

  • ✅ Excellent for static sites
  • ✅ Free tier with generous limits
  • ✅ Automatic HTTPS
  • ❌ Cannot host MCP server or WebSocket server
  • Must set VITE_WS_URL to external backend

Vercel (Front-End Only)

  • ✅ Excellent for static sites
  • ✅ Free tier available
  • ✅ Automatic HTTPS
  • ❌ Cannot host MCP server or WebSocket server
  • Must set VITE_WS_URL to external backend

Production Security Checklist

  1. CORS Configuration:

    // In server.js, replace:
    cors({ origin: '*' })
    // With:
    cors({
      origin: [
        'https://your-frontend-domain.netlify.app',
        'https://chat.openai.com',
        'https://chatgpt.com'
      ]
    })
  2. WebSocket Security:

    • Always use wss:// (secure WebSocket) in production
    • Ensure your hosting platform provides SSL/TLS certificates
  3. Environment Variables:

    • Never commit .env files to version control
    • Use platform-specific secret management
    • Rotate secrets regularly
  4. Rate Limiting:

    • Consider adding rate limiting for MCP endpoints
    • Protect against abuse and DDoS attacks
  5. Monitoring:

    • Set up error tracking (e.g., Sentry)
    • Monitor WebSocket connection health
    • Track MCP session usage
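The rate-limiting item above (checklist item 4) can be prototyped without extra dependencies. The sliding-window sketch below is illustrative only; in production a maintained package such as express-rate-limit is the safer choice:

```javascript
// Minimal, dependency-free sliding-window rate limiter: allow at most `max`
// requests per `windowMs` per client key (e.g. IP address).
function createRateLimiter({ windowMs, max }) {
  const hits = new Map(); // key -> array of recent request timestamps
  return function allow(key, now = Date.now()) {
    const recent = (hits.get(key) || []).filter((t) => now - t < windowMs);
    if (recent.length >= max) {
      hits.set(key, recent);
      return false; // over the limit: reject (e.g. respond 429)
    }
    recent.push(now);
    hits.set(key, recent);
    return true;
  };
}
```

In an Express app this would run as middleware in front of the `/mcp` route, keyed on `req.ip`.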

Testing Production Deployment

  1. Test MCP Endpoint:

    curl -i https://your-backend-domain.com/mcp
    • Any HTTP response (even a JSON-RPC error for a plain GET) confirms the endpoint is reachable; a timeout or connection refusal points to a deployment problem
  2. Test WebSocket Connection:

    • Open browser console on your deployed front-end
    • Check for WebSocket connection logs
    • Verify connection uses wss:// protocol
  3. Test End-to-End:

    • Connect an MCP client (ChatGPT, Cursor, etc.) to your deployed MCP endpoint
    • Make a tool call (e.g., "change model color to red")
    • Verify changes appear in the browser

Troubleshooting Production Issues

WebSocket Connection Fails:

  • Verify VITE_WS_URL is set correctly in front-end deployment
  • Check that backend WebSocket server is running
  • Ensure firewall/security groups allow WebSocket connections
  • Verify SSL certificate is valid (for wss://)

MCP Client Can't Connect:

  • Verify MCP endpoint is accessible: https://your-backend.com/mcp
  • Check CORS settings allow your MCP client's origin
  • Review server logs for connection errors

Front-End Can't Connect to WebSocket:

  • Check browser console for connection errors
  • Verify VITE_WS_URL environment variable is set
  • Ensure WebSocket URL uses correct protocol (ws:// vs wss://)
  • Check that backend WebSocket server is accessible from the front-end domain

Static Files Not Serving:

  • Verify npm run build completed successfully
  • Check that dist/ folder exists in deployment
  • Review server logs for static file serving messages

Architecture

The server supports two transport modes:

STDIO Mode (Subprocess - Claude Desktop)

┌─────────────────┐           ┌───────────────┐          ┌─────────────┐
│ Claude Desktop  │──stdin───▶│  MCP Server   │─────────▶│  WebSocket  │
│  (Subprocess)   │◀──stdout──│  (server.js)  │          │   Server    │
└─────────────────┘           └───────────────┘          └─────────────┘
                                      │                         │
                                      │                ┌────────▼────────┐
                                      │                │   Browser App   │
                                      │                │  (Application)  │
                                      │                └────────┬────────┘
                                      │                         │
                                      │                ┌────────▼────────┐
                                      └───────────────▶│  SceneManager   │
                                                       │ (Model Control) │
                                                       └─────────────────┘

HTTP/SSE Mode (ChatGPT, Manual)

┌─────────────────┐           ┌───────────────┐          ┌─────────────┐
│   MCP Client    │──HTTP────▶│  MCP Server   │─────────▶│  WebSocket  │
│ (AI Assistant)  │──SSE─────▶│  (server.js)  │          │   Server    │
└─────────────────┘           └───────────────┘          └─────────────┘
                                      │                         │
                                      │                ┌────────▼────────┐
                                      │                │   Browser App   │
                                      │                │  (Application)  │
                                      │                └────────┬────────┘
                                      │                         │
                                      │                ┌────────▼────────┐
                                      └───────────────▶│  SceneManager   │
                                                       │ (Model Control) │
                                                       └─────────────────┘

How it works:

  1. MCP Client sends tool call requests to the MCP Server (via STDIO or HTTP/SSE)
  2. MCP Server auto-detects the transport mode and processes requests accordingly
  3. MCP Server broadcasts commands via WebSocket to connected browser clients
  4. Browser App receives WebSocket messages and updates the model
  5. Changes are immediately visible in the 3D scene
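Step 3, the WebSocket broadcast, can be pictured with the sketch below. The name `broadcastToClients` matches the registerTool example later in this README; the factory wrapper and return value are illustrative additions, not the project's exact implementation:

```javascript
// Sketch of the broadcast step: serialize a command object once and send it
// to every connected browser client whose socket is still open.
function makeBroadcaster(clients) {
  return function broadcastToClients(command) {
    const payload = JSON.stringify(command);
    let sent = 0;
    for (const ws of clients) {
      if (ws.readyState === 1) { // 1 === WebSocket.OPEN
        ws.send(payload);
        sent += 1;
      }
    }
    return sent; // number of clients that received the command
  };
}
```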

Transport Detection:

  • STDIO Mode: Automatically detected when stdin is not a TTY (subprocess)
  • HTTP Mode: Automatically detected when stdin is a TTY (manual execution)
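The detection rule above reduces to a one-line check. This sketch (with an illustrative function name) shows the idea; in server.js it would be driven by the real `process.stdin.isTTY` flag:

```javascript
// When server.js is spawned as a subprocess (Claude Desktop), stdin is a
// pipe rather than a TTY, so the STDIO transport is chosen; run manually
// from a terminal, stdin is a TTY and the HTTP/SSE transport is used.
function detectTransport(stdinIsTTY) {
  return stdinIsTTY ? 'http' : 'stdio';
}

// e.g. const mode = detectTransport(Boolean(process.stdin.isTTY));
```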

Project Structure

Hello3DLLM/
├── server.js                  # MCP server with WebSocket bridge
├── src/
│   ├── Application.js         # Main app with WebSocket integration
│   ├── SceneManager.js        # Scene management with model manipulation methods
│   ├── WebSocketClient.js     # WebSocket client for browser
│   ├── Model.js               # Model class definition
│   ├── CameraController.js    # Camera controls
│   ├── RotationController.js  # Rotation handling
│   └── main.js                # Entry point
├── package.json               # Dependencies and scripts
└── README.md                  # This file

Development

Environment Variables

Server Configuration:

# Basic usage (defaults to localhost:5173 for browser URL)
MCP_PORT=3000 WS_PORT=3001 node server.js

# With browser URL environment variable
BROWSER_URL=https://your-app.netlify.app node server.js

# With command-line argument (overrides environment variable)
node server.js --browser-url https://your-app.netlify.app

Browser URL Configuration Priority:

  1. Command-line argument (--browser-url or -u) - highest priority
  2. Environment variable (BROWSER_URL)
  3. Default (http://localhost:5173) - lowest priority
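The priority order above can be sketched as a small resolver. `resolveBrowserUrl` is an illustrative name, not the project's actual function:

```javascript
// CLI flag beats the BROWSER_URL environment variable, which beats the
// localhost default.
function resolveBrowserUrl(argv, env) {
  for (let i = 0; i < argv.length; i++) {
    if ((argv[i] === '--browser-url' || argv[i] === '-u') && argv[i + 1]) {
      return argv[i + 1];
    }
  }
  return env.BROWSER_URL || 'http://localhost:5173';
}

// e.g. resolveBrowserUrl(process.argv.slice(2), process.env)
```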

Command-Line Options:

  • --browser-url <url> or -u <url>: Set the browser URL for connection links
  • --help or -h: Show usage help

Example:

# Use Netlify URL for connection links
node server.js --browser-url https://my-app.netlify.app

# Or using short form
node server.js -u https://my-app.netlify.app

# Use localhost (default)
node server.js

Front-End Configuration: The .env file in the project root contains the default browser URL configuration:

# Browser URL for the 3D app
# Default: localhost (for local development)
# Change to your Netlify URL if using Netlify-hosted app
BROWSER_URL=http://localhost:5173

WebSocket URL (optional): For Netlify deployments, you may need to set VITE_WS_URL in Netlify's environment variables. For local development, the front-end automatically uses ws://localhost:3001 if VITE_WS_URL is not set, making local development seamless.

Building for Production

npm run build
npm run preview  # Preview production build

Adding New Tools

When adding a new MCP tool:

  1. Register in server.js using mcpServer.registerTool():

    mcpServer.registerTool(
      'your_tool_name',
      {
        title: 'Your Tool Title',
        description: 'Description of what the tool does',
        inputSchema: {
          param1: z.string().describe('Parameter description')
        }
      },
      async ({ param1 }) => {
        broadcastToClients({
          type: 'yourCommandType',
          param1: param1
        });
        return { content: [{ type: 'text', text: 'Success' }] };
      }
    );
  2. Add handler in src/Application.js:

    case 'yourCommandType':
      this.sceneManager.yourMethod(command.param1);
      break;
  3. Implement method in src/SceneManager.js:

    yourMethod(param1) {
      // Your implementation
    }
  4. Update README with tool documentation

  5. Restart the MCP server and refresh your MCP client

See the change_background_color tool implementation in the codebase for a complete example.
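Steps 2 and 3 together form a simple dispatch pattern. The toy sketch below shows the round trip from command object to scene method; all names here are illustrative, not the project's exact API:

```javascript
// Stand-in for SceneManager with one manipulation method.
class FakeSceneManager {
  constructor() { this.background = null; }
  setBackground(color) { this.background = color; }
}

// Stand-in for the switch in Application.js: route a command object
// (received over WebSocket) to the matching SceneManager method.
function handleCommand(sceneManager, command) {
  switch (command.type) {
    case 'setBackground':
      sceneManager.setBackground(command.color);
      return true;
    default:
      return false; // unknown command types are ignored
  }
}
```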

Troubleshooting

WebSocket Connection Issues

  • Ensure MCP server is running (npm run mcp:server or via Claude Desktop)
  • Check that port 3001 is not in use: lsof -i :3001
  • Check browser console for connection errors
  • Verify browser is connected with correct session ID

MCP Client Connection Issues

For HTTP Mode (ChatGPT, Manual):

  • Verify MCP server is running on port 3000: lsof -i :3000
  • Check endpoint URL: http://localhost:3000/mcp
  • Ensure no firewall is blocking the connection

For STDIO Mode (Claude Desktop Subprocess):

  • Check Claude Desktop logs: ~/Library/Logs/Claude/mcp-server-3d-model-server.log (macOS)
  • Verify server path in configuration is correct and absolute
  • Ensure no other server instance is running (ports 3000/3001 must be free)
  • Restart Claude Desktop completely

Port Conflicts

If you see "port already in use" errors:

# Check what's using the ports
lsof -i :3000 -i :3001

# Kill processes on those ports
lsof -ti :3000 -ti :3001 | xargs kill -9

# Or kill any running server.js processes
pkill -f "node.*server.js"

Important: When using Claude Desktop in subprocess mode, don't run node server.js manually - let Claude Desktop manage it.

Model Not Updating

  • Check browser console for WebSocket errors
  • Verify browser app is running (npm run dev)
  • Ensure browser is connected with the correct session ID
  • Verify WebSocket server is running on port 3001
  • In STDIO mode, check that browser URL includes ?sessionId=<unique-uuid> (each process gets a unique session ID)
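To verify the session ID mentioned in the last point, the browser side only needs to read the query string. A minimal sketch using the standard URLSearchParams API (the helper name is illustrative):

```javascript
// Extract the sessionId query parameter from a URL search string,
// e.g. the "?sessionId=<uuid>" suffix on the generated connection link.
function getSessionId(search) {
  return new URLSearchParams(search).get('sessionId'); // null if absent
}

// In the browser: getSessionId(window.location.search)
```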

License

MIT

Contributing

Contributions welcome! Please feel free to submit a Pull Request.

About

Exploration of LLM Enabled Interactive Visual Apps

Resources

Stars

Watchers

Forks

Packages

No packages published

Languages