NullFlix - Real-World Tracing with Sentry

A Netflix-inspired movie streaming demo application showcasing how to use Sentry's tracing and trace explorer to identify and solve real performance problems in modern web applications.

🎯 Purpose

This application demonstrates real-world scenarios where tracing helps you:

  • Identify performance bottlenecks in search operations
  • Detect unnecessary API calls from poor debouncing
  • Track request cancellation patterns indicating UX issues
  • Monitor error rates and failure patterns
  • Measure the impact of optimizations with data

πŸƒ Quick Start Guide

# 1. Clone and install
git clone <repository-url>
cd NullFlix
npm run install:all

# 2. Install Playwright browsers
npx playwright install chromium

# 3. Sentry DSNs are pre-configured for demo purposes
# You can update them in frontend/.env and backend/.env

# 4. Start the application
npm run dev

# 5. Generate trace data automatically
# In a new terminal, run one of these:
npm run generate:light   # 20 user sessions (~2 mins)
npm run generate:medium  # 50 user sessions (~5 mins)
npm run generate:heavy   # 100 user sessions (~10 mins)

# 6. View in Sentry
# Go to Performance > Traces
# Filter: last 15 minutes
# Look for patterns in the trace waterfall

🤖 Automated Load Testing

The project includes Playwright-based load tests that simulate realistic user behavior:

User Personas:

  • Power User: Fast typing, knows what they want
  • Casual Browser: Moderate speed, browsing for movies
  • Hunt and Peck: Slow typing, makes mistakes
  • Explorer: Tries single characters and partial searches
  • Mobile User: Simulates mobile typing patterns

Load Test Commands:

# Run with different intensities
npm run generate:light   # 20 sessions
npm run generate:medium  # 50 sessions  
npm run generate:heavy   # 100 sessions

# Run with UI to watch the tests
npm run test:load:ui

# Run in headed mode to see the browser
npm run test:load:headed

# Custom session count
TOTAL_REQUESTS=200 npm run test:load

What the tests generate:

  • Realistic typing speeds (20-500ms between keystrokes)
  • Typing mistakes and corrections
  • Search abandonment patterns
  • Error and slow query scenarios
  • Variable session lengths (1-4 searches per user)
  • Keyboard navigation usage

🎯 Key Things to Look For in Traces

| Pattern | What It Means | Action |
|---------|---------------|--------|
| Many short red spans | High cancellation rate | Increase debounce |
| Long duration spans | Slow searches | Optimize search algorithm |
| Repeated identical searches | No optimization | Consider implementing caching |
| Spans with query.length:1 | Single char searches | Consider a minimum query length |
| Clustered requests | User frustration | Improve search relevance |

Features

  • 🎬 Beautiful Netflix-style UI with smooth animations and transitions
  • 🔍 Debounced search to minimize API calls
  • ❌ Request cancellation when user types new queries
  • ⚡ Variable latency simulation for realistic performance patterns
  • 📊 Sentry span instrumentation with single-span patterns
  • ⌨️ Keyboard navigation support (arrow keys, escape)
  • 📱 Responsive design that works on all devices
  • 🚀 Single-command startup with concurrently

Tech Stack

  • Frontend: React with TypeScript (Vite), Native Fetch API, Sentry
  • Backend: Node.js with TypeScript, Express, Sentry
  • Environment: Native Node.js .env file support (no dotenv dependency)
  • Instrumentation: Sentry JavaScript SDK with proper span patterns
  • Type Safety: Full TypeScript implementation with strict mode
  • Realistic Scenarios: Variable latency, simulated failures, error patterns

Quick Start

Prerequisites

  • Node.js 20.6+ installed (uses native .env file support)
  • Sentry account (optional, but recommended for full experience)

Installation

  1. Clone the repository

    git clone <repository-url>
    cd NullFlix
  2. Install all dependencies (frontend and backend)

    npm run install:all

    Or install them separately:

    npm install              # Install root dependencies
    npm run install:backend  # Install backend dependencies
    npm run install:frontend # Install frontend dependencies

Configuration

  1. Backend configuration (backend/.env)

    # Server Configuration
    PORT=3001
    NODE_ENV=development
    
    # Sentry Configuration
    SENTRY_DSN=your_backend_sentry_dsn_here
    
    # Search Configuration
    SEARCH_DELAY=300       # Artificial delay for search simulation (ms)
    MAX_RESULTS=10         # Maximum number of results to return
    
    # Failure Simulation (for realistic testing)
    FAILURE_RATE=0.05      # 5% failure rate
    SLOW_QUERY_RATE=0.15   # 15% slow query rate
  2. Frontend configuration (frontend/.env)

    # Sentry Configuration
    VITE_SENTRY_DSN=your_frontend_sentry_dsn_here
    
    # API Configuration
    VITE_API_URL=http://localhost:3001
    
    # Search Configuration
    VITE_DEBOUNCE_MS=150   # Debounce delay in milliseconds

Running the Application

Option 1: Start both servers with one command (Recommended)

npm run dev

This will start both the backend (port 3001) and frontend (port 5173) concurrently.

Option 2: Start servers separately

  1. Start the backend server

    cd backend
    npm start

    The server will run on http://localhost:3001

  2. Start the frontend development server (in a new terminal)

    cd frontend
    npm run dev

    The app will open on http://localhost:5173

Available Scripts

From the root directory, you can run:

  • npm run dev - Start both frontend and backend in development mode with hot reload
  • npm run start - Start both frontend and backend
  • npm run build - Build both frontend and backend for production
  • npm run type-check - Run TypeScript type checking on both projects
  • npm run install:all - Install all dependencies (root, backend, and frontend)
  • npm run install:backend - Install only backend dependencies
  • npm run install:frontend - Install only frontend dependencies
  • npm run backend:dev - Start only the backend server with hot reload
  • npm run frontend:dev - Start only the frontend dev server
  • npm run frontend:build - Build the frontend for production

Sentry Instrumentation

Frontend Spans

The frontend creates a single span per debounced search request using the callback pattern:

Location: frontend/src/App.tsx (lines 77-86)

await Sentry.startSpan({
  op: 'http.client',
  name: 'Search autocomplete',
  attributes: {
    'query.length': searchQuery.length,
    'ui.debounce_ms': DEBOUNCE_MS,
  }
}, async (span) => {
  // Search logic with automatic span management
});

Attributes:

  • query.length (int): Length of the search query
  • ui.debounce_ms (int): Debounce delay configuration
  • ui.aborted (bool): Set if request was cancelled
  • results.count (int): Number of results returned
  • results.has_results (bool): Quick check for empty results
  • http.response_size (int): Size of response in bytes
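
A condensed sketch of how these attributes might be set inside the callback above. The names API_URL, abortControllerRef, and setResults are hypothetical, and the real App.tsx wiring may differ:

async (span) => {
  try {
    const res = await fetch(`${API_URL}/api/search?q=${encodeURIComponent(searchQuery)}`, {
      signal: abortControllerRef.current?.signal,   // hypothetical ref holding the current AbortController
    });
    const body = await res.text();
    const data = JSON.parse(body) as { results: unknown[] };
    span.setAttribute('http.response_size', body.length);
    span.setAttribute('results.count', data.results.length);
    span.setAttribute('results.has_results', data.results.length > 0);
    setResults(data.results);                       // hypothetical state setter
  } catch (err) {
    if (err instanceof DOMException && err.name === 'AbortError') {
      span.setAttribute('ui.aborted', true);        // cancelled by a newer keystroke
    } else {
      throw err;                                    // let Sentry record real failures
    }
  }
}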

Backend Spans

The backend creates a single span per search request with enhanced attributes:

Location: backend/src/server.ts (lines 98-102)

await Sentry.startSpan({
  name: 'Search',
  op: 'search',
}, async (span) => {
  // Search logic with failure simulation
});

Attributes:

  • search.engine (enum): Always "mock" in this demo
  • search.mode (enum): "prefix" if query length < 3, else "fuzzy"
  • results.count (int): Number of results returned
  • query.length (int): Length of the search query
  • performance.slow (bool): Set when query takes >2x base delay
  • search.duration_ms (int): Actual search time for slow queries
  • error.type (string): Error message when search fails
  • request.aborted (bool): Set when client cancels the request
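
A condensed sketch of how the route handler might populate these attributes. The runMockSearch helper and surrounding wiring are hypothetical; the actual server.ts may differ:

app.get('/api/search', async (req, res) => {
  const query = String(req.query.q ?? '');

  await Sentry.startSpan({ name: 'Search', op: 'search' }, async (span) => {
    span.setAttribute('query.length', query.length);
    span.setAttribute('search.engine', 'mock');
    span.setAttribute('search.mode', query.length < 3 ? 'prefix' : 'fuzzy');

    const started = Date.now();
    const results = await runMockSearch(query);        // hypothetical mock search helper
    const durationMs = Date.now() - started;

    if (durationMs > SEARCH_DELAY * 2) {               // flag slow queries per the attribute list above
      span.setAttribute('performance.slow', true);
      span.setAttribute('search.duration_ms', durationMs);
    }
    span.setAttribute('results.count', results.length);
    res.json({ results });
  });
});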

Configuration Options

Adjusting Debounce Time

Modify VITE_DEBOUNCE_MS in frontend/.env:

VITE_DEBOUNCE_MS=500  # Increase for slower typing users

Adjusting Dataset Size

The backend includes 35 popular movies. To modify:

  1. Edit backend/src/data/movies.ts
  2. Add/remove movies from the MOVIES array

Adjusting Search Delay

Modify SEARCH_DELAY in backend/.env:

SEARCH_DELAY=1000  # 1 second delay to simulate slower searches

πŸ” Real-World Tracing Scenarios

Scenario 1: Identifying Slow Search Performance

Problem: Users complain that search feels sluggish.

How to investigate with Sentry Trace Explorer:

  1. Generate the issue:

    # Increase backend delay to simulate slow database
    # Edit backend/.env: SEARCH_DELAY=2000
    npm run dev
    • Search for "the" multiple times
    • Search for single letters vs full words
  2. In Sentry Trace Explorer:

    • Filter: op:search AND search.mode:fuzzy
    • Look at P95 duration distribution
    • Group by query.length to see if longer queries are slower
    • Compare cache.hit:true vs cache.hit:false performance
  3. What you'll discover:

    • Cache misses take 2+ seconds
    • Short queries (search.mode:prefix) might be unnecessarily slow
    • Cache is significantly improving performance
  4. Solution: Implement different search strategies for prefix vs fuzzy matching
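
A hedged sketch of what the fix in step 4 could look like (hypothetical helper, not the demo's actual implementation):

function search(query: string, movies: { title: string }[]): { title: string }[] {
  const q = query.toLowerCase();
  if (q.length < 3) {
    // Prefix mode: cheap startsWith scan with a small result cap
    return movies.filter((m) => m.title.toLowerCase().startsWith(q)).slice(0, 5);
  }
  // Fuzzy mode: a substring match stands in here for a real fuzzy matcher
  return movies.filter((m) => m.title.toLowerCase().includes(q)).slice(0, 10);
}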

Scenario 2: Detecting Excessive API Calls

Problem: Backend is receiving too many requests, increasing costs.

How to investigate:

  1. Generate the issue:

    # Reduce debounce time to simulate the problem
    # Edit frontend/.env: VITE_DEBOUNCE_MS=50
    npm run dev
    • Type "avengers" quickly
    • Type and delete repeatedly
  2. In Sentry Trace Explorer:

    • Filter: op:http.client
    • Look for ui.aborted:true rate
    • Group by ui.debounce_ms
    • Check span count over time
  3. What you'll discover:

    • High cancellation rate (>60%) indicates debounce too low
    • Many spans starting but not completing
    • Wasted backend resources on cancelled requests
  4. Metric queries to run:

    # Cancellation rate by debounce setting
    op:http.client ui.aborted:true
    group by: ui.debounce_ms
    aggregate: count() / total_count()
    

Scenario 3: User Experience Optimization

Problem: Understanding real user search patterns to optimize UX.

How to investigate:

  1. Generate various user behaviors:

    • Fast typers vs slow typers
    • Single character searches vs full titles
    • Users who delete and retype
  2. In Sentry Trace Explorer:

    # Query length distribution
    op:http.client
    group by: query.length
    aggregate: count()
    
    # Success rate by query length
    op:http.client ui.aborted:false
    group by: query.length
    aggregate: count() / total_count()
    
  3. Insights you'll gain:

    • Most users search with 3-5 characters
    • Single character searches have high cancellation
    • Optimal debounce varies by query length

Scenario 4: Performance Regression Detection

Problem: Need to catch performance degradations before users complain.

Set up alerts in Sentry:

  1. Slow Search Alert:

    Alert when: p95(span.duration) > 1000ms
    Filter: op:search
    Window: 5 minutes
    
  2. High Cancellation Alert:

    Alert when: count(ui.aborted:true) / count() > 0.5
    Filter: op:http.client
    Window: 10 minutes
    
  3. Error Rate Alert:

    Alert when: count(error.type:*) / count() > 0.1
    Filter: op:search
    Window: 15 minutes
    

📊 Key Metrics to Track

Frontend Metrics

  • Search Initiation Rate: How often users search
  • Cancellation Rate: ui.aborted:true percentage
  • Query Length Distribution: Optimize for common lengths
  • Debounce Effectiveness: Cancellation rate by debounce setting

Backend Metrics

  • Search Latency by Mode: Compare prefix vs fuzzy performance
  • Error Rate: Percentage of failed searches
  • Slow Query Rate: Queries taking >2x expected time
  • Results Count Distribution: Understand result set sizes

End-to-End Metrics

  • Time to First Result: Full journey from keystroke to display
  • Search Success Rate: Completed searches with results
  • User Session Patterns: How users refine their searches

🚀 Getting Started with Tracing

Step 1: Generate Baseline Data

# Start with default settings
npm run dev

# Generate various search patterns:
# - Search for single letters: "a", "t", "s"
# - Search for common words: "the", "star", "dark"
# - Search for full titles: "avengers", "inception"
# - Type and delete to create cancellations
# - Repeat searches to test cache

Step 2: Explore in Sentry

  1. Navigate to Performance > Traces
  2. Filter by time range (last 30 minutes)
  3. Add filters:
    • transaction:*search* to see all search-related spans
    • has:cache.hit to see backend spans
    • has:ui.aborted to see cancelled requests

Step 3: Create Custom Queries

Navigate to Performance > Queries and try these:

-- Find your slowest searches
SELECT 
  query.length,
  search.mode,
  p95(span.duration) as p95_duration,
  count() as volume
FROM spans
WHERE op = 'search'
GROUP BY query.length, search.mode
ORDER BY p95_duration DESC

-- Analyze cancellation patterns
SELECT 
  ui.debounce_ms,
  sum(CASE WHEN ui.aborted = true THEN 1 ELSE 0 END) as cancelled,
  count() as total,
  (cancelled / total) * 100 as cancellation_rate
FROM spans  
WHERE op = 'http.client'
GROUP BY ui.debounce_ms

-- Error rate analysis
SELECT 
  timestamp.truncate(1h) as hour,
  sum(CASE WHEN error.type IS NOT NULL THEN 1 ELSE 0 END) as errors,
  count() as total,
  (errors / total) * 100 as error_rate,
  avg(span.duration) as avg_duration
FROM spans
WHERE op = 'search'
GROUP BY hour

Production Deployment

Frontend

  1. Build the production bundle:
    cd frontend
    npm run build
  2. Deploy the dist folder to your static hosting service
  3. Set tracesSampleRate to a lower value (e.g., 0.1) in production
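
For the sampling change in step 3, an environment-dependent init might look like this (assuming the Sentry React SDK v8-style API; the demo's actual init file is not shown in this README):

import * as Sentry from '@sentry/react';

Sentry.init({
  dsn: import.meta.env.VITE_SENTRY_DSN,
  integrations: [Sentry.browserTracingIntegration()],
  // Trace everything in development, sample 10% of traces in production
  tracesSampleRate: import.meta.env.PROD ? 0.1 : 1.0,
});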

Backend

  1. Set NODE_ENV=production in environment
  2. Deploy to your Node.js hosting service
  3. Configure proper CORS origins for production
  4. Set appropriate sampling rates for production traffic

🔬 Common Issues This Demo Reveals

Issue 1: "Search feels slow"

What tracing shows:

  • P95 latency spikes for complex queries
  • Single character searches take 3x longer
  • Opportunity: Implement query optimization or pagination

Issue 2: "Too many API calls"

What tracing shows:

  • 70% cancellation rate with 50ms debounce
  • 30% cancellation with 150ms debounce
  • 10% cancellation with 500ms debounce
  • Opportunity: Dynamic debounce based on typing speed

Issue 3: "Inconsistent performance"

What tracing shows:

  • Normal queries: ~300ms response time
  • Slow queries: 600-1500ms response time
  • 2-5x performance variance
  • Opportunity: Query optimization, connection pooling

Issue 4: "Users abandoning search"

What tracing shows:

  • High cancellation on single characters
  • Users typing, deleting, retyping
  • Opportunity: Better placeholder text, search suggestions

💡 Optimization Experiments to Try

  1. Variable Debouncing

    // Modify frontend/src/App.tsx
    const DEBOUNCE_MS = query.length < 3 ? 500 : 200;

    Track: Does this reduce cancellations without hurting UX?

  2. Query Optimization

    // Modify backend/src/server.ts
    // Add early exit for single characters
    if (query.length === 1) {
      return results.slice(0, 5); // Return fewer results
    }

    Track: Improvement in P95 latency for short queries

  3. Smart Error Handling

    // Implement exponential backoff for retries

    Track: Success rate improvement during high load
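
One possible shape for that retry logic (hypothetical helper, retrying only the transient 429/503 failures this demo simulates):

async function fetchWithBackoff(url: string, signal: AbortSignal, maxRetries = 3): Promise<Response> {
  let res = await fetch(url, { signal });
  for (let attempt = 0; attempt < maxRetries && (res.status === 429 || res.status === 503); attempt++) {
    // Exponential backoff: 200ms, 400ms, 800ms, ...
    await new Promise((resolve) => setTimeout(resolve, 200 * 2 ** attempt));
    res = await fetch(url, { signal });
  }
  return res;
}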

Architecture

┌─────────────────────┐
│      React App      │
│                     │
│  - Debounced Search │
│  - AbortController  │
│  - Sentry Span      │
└──────────┬──────────┘
           │ HTTP GET /api/search?q=...
           │
┌──────────▼──────────┐
│     Express API     │
│                     │
│  - Cache Check      │
│  - Mock Search      │
│  - Sentry Span      │
└─────────────────────┘

Data Flow

  1. User types in search input
  2. After debounce delay, frontend starts "Search autocomplete" span
  3. Frontend sends GET request with AbortController signal
  4. If user types again, previous request is aborted
  5. Backend performs search with variable latency based on query
  6. Backend may simulate failures (5% rate) or slow queries (15% rate)
  7. Backend returns results and sets span attributes
  8. Frontend displays results with animations or error states
  9. Both spans are sent to Sentry with relevant attributes
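
Steps 2-4, condensed into a sketch of the debounce-plus-cancellation pattern (hypothetical hook body; query, DEBOUNCE_MS, and runSearch are assumed names, and the real App.tsx may differ):

useEffect(() => {
  if (!query) return;
  const controller = new AbortController();
  const timer = setTimeout(() => {
    // After the debounce delay, start the traced search request
    runSearch(query, controller.signal);   // hypothetical helper wrapping startSpan + fetch
  }, DEBOUNCE_MS);

  return () => {
    // A new keystroke re-runs this effect: drop the pending timer and abort any in-flight request
    clearTimeout(timer);
    controller.abort();
  };
}, [query]);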

🌟 Realistic Production Scenarios

This application simulates real-world conditions:

Variable Latency

  • Single character searches: 3x slower (simulates full table scan)
  • Common words ("the", "a"): 2x slower (more results to process)
  • Random slow queries: 15% chance of 2-5x delay (cold start, lock contention)
  • Jitter: Β±30% variance on all queries
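
A hypothetical delay profile that mirrors those multipliers (the demo's actual implementation may differ):

function searchDelay(query: string, baseMs = 300): number {
  let delay = baseMs;
  if (query.length === 1) delay *= 3;                              // simulated full table scan
  else if (['the', 'a'].includes(query.toLowerCase())) delay *= 2; // common words return more results
  if (Math.random() < 0.15) delay *= 2 + Math.random() * 3;        // 15% chance of a 2-5x slow query
  const jitter = 1 + (Math.random() * 0.6 - 0.3);                  // ±30% variance
  return Math.round(delay * jitter);
}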

Failure Simulation (5% rate)

  • Database timeouts: 5-second delay before error
  • Rate limiting: 429 status with appropriate message
  • Service unavailable: 503 status for backend issues
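
A sketch of how the failure branch might look inside the Express handler (status codes follow the list above; the messages, the timeout status code, and the split between cases are illustrative):

if (Math.random() < FAILURE_RATE) {
  const roll = Math.random();
  if (roll < 1 / 3) {
    await new Promise((resolve) => setTimeout(resolve, 5000));  // database timeout: 5s, then error
    res.status(500).json({ error: 'Database timeout' });
  } else if (roll < 2 / 3) {
    res.status(429).json({ error: 'Rate limited, please slow down' });
  } else {
    res.status(503).json({ error: 'Search service unavailable' });
  }
  return;
}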

Performance Patterns

  • Base latency: 300ms (configurable)
  • Slow queries: 600-1500ms depending on complexity
  • Timeout errors: 5000ms before failure

User Experience Indicators

  • Live statistics: Shows searches, cancellations, and failures
  • Error states: Clear messaging for different failure types
  • Loading states: Visual feedback during search
  • Cancellation tracking: Real-time cancellation rate

🎓 Why This is a Great Tracing Example

This application demonstrates key tracing concepts:

1. Single Span Pattern

  • Clean, focused spans without nesting complexity
  • Each span represents one logical operation
  • Easy to understand in trace waterfall view

2. Meaningful Attributes

  • query.length - Correlate performance with input size
  • ui.aborted - Track wasted work from cancellations
  • search.mode - Different algorithms for different query types
  • results.count - Business metric for search effectiveness
  • performance.slow - Identify queries needing optimization

3. Real-World Problems

  • Debouncing: Balance between responsiveness and efficiency
  • Request Cancellation: Handle interrupted operations gracefully
  • Variable Latency: Different code paths have different costs
  • Error Simulation: Realistic failure scenarios with proper handling

4. Clear Cause and Effect

  • Change debounce β†’ see cancellation rate change
  • Increase search delay β†’ watch P95 latency grow
  • Adjust failure rate β†’ observe error handling patterns
  • Type quickly β†’ generate realistic user patterns

5. Actionable Insights

Every trace tells a story:

  • "User typed 3 characters, cancelled 2 requests, got results in 150ms"
  • "Single character search took 3x longer than normal queries"
  • "150ms debounce causes 30% cancellation rate"
  • "5% of searches fail with timeout errors"

Troubleshooting

Backend not starting

  • Check if port 3001 is already in use
  • Verify Node.js version is 20.6+ (required for native .env file support)
  • Check .env file exists in backend folder

Frontend not connecting to backend

  • Ensure backend is running on port 3001
  • Check CORS is enabled in backend
  • Verify VITE_API_URL in frontend .env

Sentry spans not appearing

  • Verify DSN is correctly set in .env files
  • Check browser console for Sentry initialization errors
  • Ensure tracesSampleRate is set to 1.0 in development
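
If spans still don't show up, turning on the SDK's debug flag (a standard option in the Sentry JavaScript SDKs) prints initialization and transport problems to the console:

Sentry.init({
  dsn: import.meta.env.VITE_SENTRY_DSN,
  tracesSampleRate: 1.0,
  debug: true,   // log SDK activity while troubleshooting; remove for production
});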

License

MIT

Contributing

Feel free to submit issues and enhancement requests!
