A Netflix-inspired movie streaming demo application showcasing how to use Sentry's tracing and trace explorer to identify and solve real performance problems in modern web applications.
This application demonstrates real-world scenarios where tracing helps you:
- Identify performance bottlenecks in search operations
- Detect unnecessary API calls from poor debouncing
- Track request cancellation patterns indicating UX issues
- Monitor error rates and failure patterns
- Measure the impact of optimizations with data
# 1. Clone and install
git clone <repository-url>
cd NullFlix
npm run install:all
# 2. Install Playwright browsers
npx playwright install chromium
# 3. Sentry DSNs are pre-configured for demo purposes
# You can update them in frontend/.env and backend/.env
# 4. Start the application
npm run dev
# 5. Generate trace data automatically
# In a new terminal, run one of these:
npm run generate:light # 20 user sessions (~2 mins)
npm run generate:medium # 50 user sessions (~5 mins)
npm run generate:heavy # 100 user sessions (~10 mins)
# 6. View in Sentry
# Go to Performance > Traces
# Filter: last 15 minutes
# Look for patterns in the trace waterfall

The project includes Playwright-based load tests that simulate realistic user behavior:
User Personas:
- Power User: Fast typing, knows what they want
- Casual Browser: Moderate speed, browsing for movies
- Hunt and Peck: Slow typing, makes mistakes
- Explorer: Tries single characters and partial searches
- Mobile User: Simulates mobile typing patterns
Load Test Commands:
# Run with different intensities
npm run generate:light # 20 sessions
npm run generate:medium # 50 sessions
npm run generate:heavy # 100 sessions
# Run with UI to watch the tests
npm run test:load:ui
# Run in headed mode to see the browser
npm run test:load:headed
# Custom session count
TOTAL_REQUESTS=200 npm run test:load

What the tests generate:
- Realistic typing speeds (20-500ms between keystrokes)
- Typing mistakes and corrections
- Search abandonment patterns
- Error and slow query scenarios
- Variable session lengths (1-4 searches per user)
- Keyboard navigation usage
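For a sense of how these behaviors can be scripted, here is a minimal Playwright sketch of a single "Hunt and Peck"-style session; the placeholder selector, base URL, and timing values are illustrative assumptions rather than the project's actual test code:

```typescript
import { test } from '@playwright/test';

// Hypothetical persona: slow, uneven typing with a mistake and a correction.
// The placeholder text and base URL are assumptions for illustration only.
test('hunt-and-peck user searches with a typo', async ({ page }) => {
  await page.goto('http://localhost:5173');

  const search = page.getByPlaceholder(/search/i);

  // Slow keystrokes (~350ms apart) let several debounced requests fire and cancel.
  await search.pressSequentially('avnegers', { delay: 350 });

  // Notice the typo, delete back to "av", then finish the title correctly.
  for (let i = 0; i < 6; i++) await search.press('Backspace');
  await search.pressSequentially('engers', { delay: 250 });

  // Give the final debounced request time to complete so its span is recorded.
  await page.waitForTimeout(1000);
});
```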
| Pattern | What It Means | Action |
|---|---|---|
| Many short red spans | High cancellation rate | Increase debounce |
| Long duration spans | Slow searches | Optimize search algorithm |
| Repeated identical searches | No optimization | Consider implementing caching |
| Spans with `query.length:1` | Single char searches | Consider min query length |
| Clustered requests | User frustration | Improve search relevance |
- Beautiful Netflix-style UI with smooth animations and transitions
- Debounced search to minimize API calls
- Request cancellation when the user types new queries
- Variable latency simulation for realistic performance patterns
- Sentry span instrumentation with single-span patterns
- Keyboard navigation support (arrow keys, escape)
- Responsive design that works on all devices
- Single-command startup with concurrently
- Frontend: React with TypeScript (Vite), Native Fetch API, Sentry
- Backend: Node.js with TypeScript, Express, Sentry
- Environment: Native Node.js .env file support (no dotenv dependency)
- Instrumentation: Sentry JavaScript SDK with proper span patterns
- Type Safety: Full TypeScript implementation with strict mode
- Realistic Scenarios: Variable latency, simulated failures, error patterns
- Node.js 20.6+ installed (uses native .env file support)
- Sentry account (optional, but recommended for full experience)
- Clone the repository

  git clone <repository-url>
  cd NullFlix

- Install all dependencies (frontend and backend)

  npm run install:all

  Or install them separately:

  npm install              # Install root dependencies
  npm run install:backend  # Install backend dependencies
  npm run install:frontend # Install frontend dependencies
- Backend configuration (backend/.env)

  # Server Configuration
  PORT=3001
  NODE_ENV=development

  # Sentry Configuration
  SENTRY_DSN=your_backend_sentry_dsn_here

  # Search Configuration
  SEARCH_DELAY=300     # Artificial delay for search simulation (ms)
  MAX_RESULTS=10       # Maximum number of results to return

  # Failure Simulation (for realistic testing)
  FAILURE_RATE=0.05    # 5% failure rate
  SLOW_QUERY_RATE=0.15 # 15% slow query rate

- Frontend configuration (frontend/.env)

  # Sentry Configuration
  VITE_SENTRY_DSN=your_frontend_sentry_dsn_here

  # API Configuration
  VITE_API_URL=http://localhost:3001

  # Search Configuration
  VITE_DEBOUNCE_MS=150 # Debounce delay in milliseconds
npm run dev

This will start both the backend (port 3001) and frontend (port 5173) concurrently.
- Start the backend server

  cd backend
  npm start

  The server will run on http://localhost:3001

- Start the frontend development server (in a new terminal)

  cd frontend
  npm run dev

  The app will open on http://localhost:5173
From the root directory, you can run:
- `npm run dev` - Start both frontend and backend in development mode with hot reload
- `npm run start` - Start both frontend and backend
- `npm run build` - Build both frontend and backend for production
- `npm run type-check` - Run TypeScript type checking on both projects
- `npm run install:all` - Install all dependencies (root, backend, and frontend)
- `npm run install:backend` - Install only backend dependencies
- `npm run install:frontend` - Install only frontend dependencies
- `npm run backend:dev` - Start only the backend server with hot reload
- `npm run frontend:dev` - Start only the frontend dev server
- `npm run frontend:build` - Build the frontend for production
The frontend creates a single span per debounced search request using the callback pattern:
Location: frontend/src/App.tsx (lines 77-86)
await Sentry.startSpan({
op: 'http.client',
name: 'Search autocomplete',
attributes: {
'query.length': searchQuery.length,
'ui.debounce_ms': DEBOUNCE_MS,
}
}, async (span) => {
// Search logic with automatic span management
});

Attributes:

- `query.length` (int): Length of the search query
- `ui.debounce_ms` (int): Debounce delay configuration
- `ui.aborted` (bool): Set if the request was cancelled
- `results.count` (int): Number of results returned
- `results.has_results` (bool): Quick check for empty results
- `http.response_size` (int): Size of the response in bytes
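A rough sketch of how those attributes might be populated inside the span callback; the fetch URL, response shape, and AbortController wiring are assumptions for illustration, not the exact code in App.tsx:

```typescript
import * as Sentry from '@sentry/react';

// Hypothetical search call showing where each attribute would be set.
async function searchMovies(searchQuery: string, controller: AbortController) {
  const DEBOUNCE_MS = 150; // assumed to come from VITE_DEBOUNCE_MS

  return Sentry.startSpan(
    {
      op: 'http.client',
      name: 'Search autocomplete',
      attributes: {
        'query.length': searchQuery.length,
        'ui.debounce_ms': DEBOUNCE_MS,
      },
    },
    async (span) => {
      try {
        const response = await fetch(
          `/api/search?q=${encodeURIComponent(searchQuery)}`,
          { signal: controller.signal }
        );
        const body = await response.json();

        span.setAttribute('results.count', body.results?.length ?? 0);
        span.setAttribute('results.has_results', (body.results?.length ?? 0) > 0);
        return body.results ?? [];
      } catch (err) {
        // Requests aborted by newer keystrokes are expected, not errors.
        if (err instanceof DOMException && err.name === 'AbortError') {
          span.setAttribute('ui.aborted', true);
          return [];
        }
        throw err;
      }
    }
  );
}
```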
The backend creates a single span per search request with enhanced attributes:
Location: backend/src/server.ts (lines 98-102)
await Sentry.startSpan({
name: 'Search',
op: 'search',
}, async (span) => {
// Search logic with failure simulation
});

Attributes:

- `search.engine` (enum): Always "mock" in this demo
- `search.mode` (enum): "prefix" if query length < 3, else "fuzzy"
- `results.count` (int): Number of results returned
- `query.length` (int): Length of the search query
- `performance.slow` (bool): Set when a query takes >2x the base delay
- `search.duration_ms` (int): Actual search time for slow queries
- `error.type` (string): Error message when the search fails
- `request.aborted` (bool): Set when the client cancels the request
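A minimal sketch of what that span's body could look like in the Express handler; the mock catalog helper, env defaults, and variable names are assumptions used only to show where the attributes are set:

```typescript
import * as Sentry from '@sentry/node';
import type { Request, Response } from 'express';

const SEARCH_DELAY = Number(process.env.SEARCH_DELAY ?? 300);

// Stand-in for the demo's real mock search in server.ts.
async function searchMockCatalog(query: string): Promise<string[]> {
  return []; // illustrative stub
}

async function handleSearch(req: Request, res: Response) {
  const query = String(req.query.q ?? '');

  const results = await Sentry.startSpan(
    { name: 'Search', op: 'search' },
    async (span) => {
      span.setAttribute('search.engine', 'mock');
      span.setAttribute('search.mode', query.length < 3 ? 'prefix' : 'fuzzy');
      span.setAttribute('query.length', query.length);

      const started = Date.now();
      const found = await searchMockCatalog(query);
      const durationMs = Date.now() - started;

      span.setAttribute('results.count', found.length);
      if (durationMs > 2 * SEARCH_DELAY) {
        span.setAttribute('performance.slow', true);
        span.setAttribute('search.duration_ms', durationMs);
      }
      return found;
    }
  );

  res.json({ results });
}
```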
Modify VITE_DEBOUNCE_MS in frontend/.env:
VITE_DEBOUNCE_MS=500  # Increase for slower typing users

The backend includes 35 popular movies. To modify:
- Edit backend/src/data/movies.ts
- Add/remove movies from the `MOVIES` array
Modify SEARCH_DELAY in backend/.env:
SEARCH_DELAY=1000  # 1 second delay to simulate slower searches

Problem: Users complain that search feels sluggish.
How to investigate with Sentry Trace Explorer:
- Generate the issue:

  # Increase backend delay to simulate slow database
  # Edit backend/.env: SEARCH_DELAY=2000
  npm run dev

  - Search for "the" multiple times
  - Search for single letters vs full words
- In Sentry Trace Explorer:

  - Filter: `op:search AND search.mode:fuzzy`
  - Look at the P95 duration distribution
  - Group by `query.length` to see if longer queries are slower
  - Compare `cache.hit:true` vs `cache.hit:false` performance
- What you'll discover:

  - Cache misses take 2+ seconds
  - Short queries (`search.mode:prefix`) might be unnecessarily slow
  - Cache is significantly improving performance
- Solution: Implement different search strategies for prefix vs fuzzy matching (see the sketch below)
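As an illustration of that solution, a hedged sketch of how the backend could branch between a cheap prefix scan and a heavier fuzzy match; the Movie shape and ranking logic are assumptions, not the demo's actual implementation:

```typescript
interface Movie {
  title: string;
}

// Hypothetical strategy split: 1-2 character queries take a fast prefix path,
// longer queries pay for fuzzier substring matching with simple ranking.
function searchWithStrategy(movies: Movie[], query: string, maxResults = 10): Movie[] {
  const q = query.toLowerCase();

  if (q.length < 3) {
    // Prefix mode: cheap scan for very short queries.
    return movies
      .filter((m) => m.title.toLowerCase().startsWith(q))
      .slice(0, maxResults);
  }

  // Fuzzy mode: match anywhere in the title, rank earlier matches higher.
  return movies
    .map((m) => ({ movie: m, index: m.title.toLowerCase().indexOf(q) }))
    .filter((r) => r.index >= 0)
    .sort((a, b) => a.index - b.index)
    .slice(0, maxResults)
    .map((r) => r.movie);
}
```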
Problem: Backend is receiving too many requests, increasing costs.
How to investigate:
- Generate the issue:

  # Reduce debounce time to simulate the problem
  # Edit frontend/.env: VITE_DEBOUNCE_MS=50
  npm run dev

  - Type "avengers" quickly
  - Type and delete repeatedly
- In Sentry Trace Explorer:

  - Filter: `op:http.client`
  - Look for the `ui.aborted:true` rate
  - Group by `ui.debounce_ms`
  - Check span count over time
- What you'll discover:

  - High cancellation rate (>60%) indicates debounce too low
  - Many spans starting but not completing
  - Wasted backend resources on cancelled requests
- Metric queries to run:

  # Cancellation rate by debounce setting
  op:http.client ui.aborted:true
  group by: ui.debounce_ms
  aggregate: count() / total_count()
Problem: Understanding real user search patterns to optimize UX.
How to investigate:
- Generate various user behaviors:

  - Fast typers vs slow typers
  - Single character searches vs full titles
  - Users who delete and retype

- In Sentry Trace Explorer:

  # Query length distribution
  op:http.client
  group by: query.length
  aggregate: count()

  # Success rate by query length
  op:http.client ui.aborted:false
  group by: query.length
  aggregate: count() / total_count()
- Insights you'll gain:

  - Most users search with 3-5 characters
  - Single character searches have high cancellation
  - Optimal debounce varies by query length
Problem: Need to catch performance degradations before users complain.
Set up alerts in Sentry:
- Slow Search Alert:

  Alert when: p95(span.duration) > 1000ms
  Filter: op:search
  Window: 5 minutes

- High Cancellation Alert:

  Alert when: count(ui.aborted:true) / count() > 0.5
  Filter: op:http.client
  Window: 10 minutes

- Error Rate Alert:

  Alert when: count(error.type:*) / count() > 0.1
  Filter: op:search
  Window: 15 minutes
- Search Initiation Rate: How often users search
- Cancellation Rate: `ui.aborted:true` percentage
- Debounce Effectiveness: Cancellation rate by debounce setting
- Search Latency by Mode: Compare prefix vs fuzzy performance
- Error Rate: Percentage of failed searches
- Slow Query Rate: Queries taking >2x expected time
- Results Count Distribution: Understand result set sizes
- Time to First Result: Full journey from keystroke to display
- Search Success Rate: Completed searches with results
- User Session Patterns: How users refine their searches
# Start with default settings
npm run dev
# Generate various search patterns:
- Search for single letters: "a", "t", "s"
- Search for common words: "the", "star", "dark"
- Search for full titles: "avengers", "inception"
- Type and delete to create cancellations
- Repeat searches to test cache

- Navigate to Performance > Traces
- Filter by time range (last 30 minutes)
- Add filters:
  - `transaction:*search*` to see all search-related spans
  - `has:cache.hit` to see backend spans
  - `has:ui.aborted` to see cancelled requests
Navigate to Performance > Queries and try these:
-- Find your slowest searches
SELECT
query.length,
search.mode,
p95(span.duration) as p95_duration,
count() as volume
FROM spans
WHERE op = 'search'
GROUP BY query.length, search.mode
ORDER BY p95_duration DESC
-- Analyze cancellation patterns
SELECT
ui.debounce_ms,
sum(CASE WHEN ui.aborted = true THEN 1 ELSE 0 END) as cancelled,
count() as total,
(cancelled / total) * 100 as cancellation_rate
FROM spans
WHERE op = 'http.client'
GROUP BY ui.debounce_ms
-- Error rate analysis
SELECT
timestamp.truncate(1h) as hour,
sum(CASE WHEN error.type IS NOT NULL THEN 1 ELSE 0 END) as errors,
count() as total,
(errors / total) * 100 as error_rate,
avg(span.duration) as avg_duration
FROM spans
WHERE op = 'search'
GROUP BY hour

- Build the production bundle:
  cd frontend
  npm run build

- Deploy the `dist` folder to your static hosting service
- Set `tracesSampleRate` to a lower value (e.g., 0.1) in production

- Set `NODE_ENV=production` in the environment
- Deploy to your Node.js hosting service
- Configure proper CORS origins for production
- Set appropriate sampling rates for production traffic (see the sketch below)
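For the sampling point above, a minimal sketch of environment-aware configuration using the Sentry Node SDK; the 0.1 rate and the use of NODE_ENV for the switch are assumptions, not settings mandated by this demo:

```typescript
import * as Sentry from '@sentry/node';

// Trace everything locally, but only a fraction of production traffic.
Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV ?? 'development',
  tracesSampleRate: process.env.NODE_ENV === 'production' ? 0.1 : 1.0,
});
```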
What tracing shows:
- P95 latency spikes for complex queries
- Single character searches take 3x longer
- Opportunity: Implement query optimization or pagination
What tracing shows:
- 70% cancellation rate with 50ms debounce
- 30% cancellation with 150ms debounce
- 10% cancellation with 500ms debounce
- Opportunity: Dynamic debounce based on typing speed
What tracing shows:
- Normal queries: ~300ms response time
- Slow queries: 600-1500ms response time
- 2-5x performance variance
- Opportunity: Query optimization, connection pooling
What tracing shows:
- High cancellation on single characters
- Users typing, deleting, retyping
- Opportunity: Better placeholder text, search suggestions
- Variable Debouncing

  // Modify frontend/src/App.tsx
  const DEBOUNCE_MS = query.length < 3 ? 500 : 200;

  Track: Does this reduce cancellations without hurting UX?

- Query Optimization

  // Modify backend/src/server.ts
  // Add early exit for single characters
  if (query.length === 1) {
    return results.slice(0, 5); // Return fewer results
  }

  Track: Improvement in P95 latency for short queries

- Smart Error Handling (a retry sketch follows this list)

  // Implement exponential backoff for retries

  Track: Success rate improvement during high load
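A minimal sketch of what that exponential backoff might look like on the client; the retry count, base delay, and retry conditions are illustrative assumptions:

```typescript
// Hypothetical retry helper: wait 200ms, 400ms, 800ms... between attempts.
async function fetchWithBackoff(
  url: string,
  init: RequestInit = {},
  maxRetries = 3,
  baseDelayMs = 200
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const response = await fetch(url, init);

    // Only retry transient failures (rate limiting or server errors).
    if (response.ok || (response.status !== 429 && response.status < 500)) {
      return response;
    }
    if (attempt >= maxRetries) return response;

    await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
  }
}
```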
┌─────────────────┐
│    React App    │
│                 │
│  - Debounced    │
│    Search       │
│  - AbortController
│  - Sentry Span  │
└────────┬────────┘
         │ HTTP GET /api/search?q=...
         ▼
┌─────────────────┐
│   Express API   │
│                 │
│  - Cache Check  │
│  - Mock Search  │
│  - Sentry Span  │
└─────────────────┘
- User types in search input
- After debounce delay, frontend starts "Search autocomplete" span
- Frontend sends GET request with AbortController signal
- If user types again, previous request is aborted
- Backend performs search with variable latency based on query
- Backend may simulate failures (5% rate) or slow queries (15% rate)
- Backend returns results and sets span attributes
- Frontend displays results with animations or error states
- Both spans are sent to Sentry with relevant attributes
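The debounce-and-cancel part of that flow could be sketched roughly as the React hook below; the hook name, response shape, and timings are illustrative assumptions, not the exact code in App.tsx:

```typescript
import { useEffect, useRef, useState } from 'react';

// Hypothetical hook: waits debounceMs after the last keystroke, then fetches,
// aborting any in-flight request when a newer query arrives.
function useDebouncedSearch(query: string, debounceMs = 150) {
  const [results, setResults] = useState<string[]>([]);
  const controllerRef = useRef<AbortController | null>(null);

  useEffect(() => {
    if (!query) return;

    const timer = setTimeout(async () => {
      controllerRef.current?.abort();          // cancel the previous request
      const controller = new AbortController();
      controllerRef.current = controller;

      try {
        const res = await fetch(
          `/api/search?q=${encodeURIComponent(query)}`,
          { signal: controller.signal }
        );
        setResults((await res.json()).results ?? []);
      } catch (err) {
        // AbortError is expected when the user keeps typing; ignore it.
        if (!(err instanceof DOMException && err.name === 'AbortError')) throw err;
      }
    }, debounceMs);

    return () => clearTimeout(timer);          // reset the debounce on every keystroke
  }, [query, debounceMs]);

  return results;
}
```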
This application simulates real-world conditions:
- Single character searches: 3x slower (simulates full table scan)
- Common words ("the", "a"): 2x slower (more results to process)
- Random slow queries: 15% chance of 2-5x delay (cold start, lock contention)
- Jitter: Β±30% variance on all queries
- Database timeouts: 5-second delay before error
- Rate limiting: 429 status with appropriate message
- Service unavailable: 503 status for backend issues
- Base latency: 300ms (configurable)
- Slow queries: 600-1500ms depending on complexity
- Timeout errors: 5000ms before failure
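A rough sketch of how that kind of variable latency and failure simulation could be wired up; the thresholds mirror the numbers above, but the helper itself is an assumption, not the demo's exact server.ts code:

```typescript
// Hypothetical latency/failure simulator matching the behavior described above.
const BASE_DELAY_MS = 300;
const FAILURE_RATE = 0.05;
const SLOW_QUERY_RATE = 0.15;

async function simulateSearchConditions(query: string): Promise<void> {
  // 5% of requests fail outright after a long "database timeout".
  if (Math.random() < FAILURE_RATE) {
    await sleep(5000);
    throw new Error('Database timeout');
  }

  let delay = BASE_DELAY_MS;
  const q = query.toLowerCase();
  if (q.length === 1) delay *= 3;                          // simulated full table scan
  else if (['the', 'a'].includes(q)) delay *= 2;           // more results to process
  if (Math.random() < SLOW_QUERY_RATE) delay *= 2 + Math.random() * 3; // 2-5x slowdown

  delay *= 0.7 + Math.random() * 0.6;                      // ±30% jitter
  await sleep(delay);
}

function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}
```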
- Live statistics: Shows searches, cancellations, and failures
- Error states: Clear messaging for different failure types
- Loading states: Visual feedback during search
- Cancellation tracking: Real-time cancellation rate
This application demonstrates key tracing concepts:
- Clean, focused spans without nesting complexity
- Each span represents one logical operation
- Easy to understand in trace waterfall view
- `query.length` - Correlate performance with input size
- `ui.aborted` - Track wasted work from cancellations
- `search.mode` - Different algorithms for different query types
- `results.count` - Business metric for search effectiveness
- `performance.slow` - Identify queries needing optimization
- Debouncing: Balance between responsiveness and efficiency
- Request Cancellation: Handle interrupted operations gracefully
- Variable Latency: Different code paths have different costs
- Error Simulation: Realistic failure scenarios with proper handling
- Change debounce → see cancellation rate change
- Increase search delay → watch P95 latency grow
- Adjust failure rate → observe error handling patterns
- Type quickly → generate realistic user patterns
Every trace tells a story:
- "User typed 3 characters, cancelled 2 requests, got results in 150ms"
- "Single character search took 3x longer than normal queries"
- "150ms debounce causes 30% cancellation rate"
- "5% of searches fail with timeout errors"
- Check if port 3001 is already in use
- Verify Node.js version is 20.6+ (required for native .env file support)
- Check that the `.env` file exists in the backend folder
- Ensure backend is running on port 3001
- Check CORS is enabled in backend
- Verify `VITE_API_URL` in the frontend `.env`
- Verify the DSN is correctly set in the `.env` files
- Check the browser console for Sentry initialization errors
- Ensure `tracesSampleRate` is set to 1.0 in development
MIT
Feel free to submit issues and enhancement requests!