Welcome! This assessment evaluates your ability to work with a modern React/Three.js stack similar to our production codebase. You'll be building a collaborative image canvas where users can upload, position, and manage image layers.
Time budget: 2-3 hours
Focus: Quality over quantity. We'd rather see fewer features implemented well than everything done poorly.
This project uses technologies you'll work with daily:
- Next.js 14 (App Router)
- React Three Fiber (Three.js in React)
- MobX (state management)
- Socket.io (real-time updates)
- CSS Modules with BEM naming convention
- TypeScript (strict mode)
Copy the code from this repo into your own private repo and complete the task there.
```bash
# Install dependencies
pnpm install

# Start the mock WebSocket server (terminal 1)
pnpm run server

# Start the dev server (terminal 2)
pnpm run dev

# Open http://localhost:3000
```

Complete the MobX store that manages canvas layers. You'll need state for tracking layers and which one is selected, plus actions for adding, removing, updating positions, selecting, and reordering layers.
Think about how to efficiently derive sorted layers and the currently selected layer.
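A minimal sketch of one possible store shape, to give a sense of the expected direction — the class name, field names, and the `Layer` fields here are assumptions; use the types provided in the layer entity module instead:

```ts
import { makeAutoObservable } from "mobx";

// Assumed Layer shape for illustration only — prefer the repo's own types.
interface Layer {
  id: string;
  name: string;
  url: string;
  position: { x: number; y: number };
  zIndex: number;
  visible: boolean;
  opacity: number;
}

class LayersStore {
  layers: Layer[] = [];
  selectedLayerId: string | null = null;

  constructor() {
    // Getters become computeds, methods become actions.
    makeAutoObservable(this);
  }

  get sortedLayers(): Layer[] {
    return [...this.layers].sort((a, b) => a.zIndex - b.zIndex);
  }

  get selectedLayer(): Layer | undefined {
    return this.layers.find((l) => l.id === this.selectedLayerId);
  }

  addLayer(layer: Layer) {
    this.layers.push(layer);
  }

  removeLayer(id: string) {
    this.layers = this.layers.filter((l) => l.id !== id);
    if (this.selectedLayerId === id) this.selectedLayerId = null;
  }

  updatePosition(id: string, x: number, y: number) {
    const layer = this.layers.find((l) => l.id === id);
    if (layer) layer.position = { x, y };
  }

  selectLayer(id: string | null) {
    this.selectedLayerId = id;
  }

  moveLayer(id: string, toIndex: number) {
    // Reorder by reassigning zIndex based on the new ordering.
    const moved = this.layers.find((l) => l.id === id);
    if (!moved) return;
    const ordered = this.sortedLayers.filter((l) => l.id !== id);
    ordered.splice(toIndex, 0, moved);
    ordered.forEach((l, i) => (l.zIndex = i));
  }
}

export const layersStore = new LayersStore();
```

Exposing derived data as computed getters keeps components re-rendering only when the values they actually read change.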
The upload button should allow users to add images to the canvas. Think about file validation, generating unique IDs, and how the new layer integrates with your store.
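One possible shape for the upload handler, reusing the assumed `LayersStore` from the sketch above — the accepted types, size limit, and use of object URLs are illustrative, not requirements:

```ts
const ACCEPTED_TYPES = ["image/png", "image/jpeg", "image/webp"];
const MAX_SIZE_BYTES = 10 * 1024 * 1024; // 10 MB — pick whatever limit makes sense

function handleFiles(files: FileList, store: LayersStore) {
  for (const file of Array.from(files)) {
    // Basic validation: MIME type and size.
    if (!ACCEPTED_TYPES.includes(file.type) || file.size > MAX_SIZE_BYTES) {
      continue; // or surface a validation message to the user
    }
    store.addLayer({
      id: crypto.randomUUID(),          // unique ID
      name: file.name,
      url: URL.createObjectURL(file),   // remember to revoke this when the layer is removed
      position: { x: 0, y: 0 },
      zIndex: store.layers.length,
      visible: true,
      opacity: 1,
    });
  }
}
```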
Create a React Three Fiber component that renders an image layer on the canvas:
- Load and display the image texture
- Position it according to the layer data
- Make it draggable and update the store when moved
- Handle visibility and opacity
- Indicate when the layer is selected
Consider what cleanup is needed when the component unmounts.
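A rough sketch of such a component, again assuming the `Layer`/`LayersStore` shapes above. The drag handling and texture disposal are deliberately simplified (note that `useLoader` caches by URL, so you may prefer different cleanup):

```tsx
import { useEffect } from "react";
import { useLoader, type ThreeEvent } from "@react-three/fiber";
import { TextureLoader } from "three";
import { observer } from "mobx-react-lite";

// Illustrative component — prop names and drag behaviour are assumptions.
const ImageLayer = observer(({ layer, store }: { layer: Layer; store: LayersStore }) => {
  const texture = useLoader(TextureLoader, layer.url);

  // Free GPU memory when the layer unmounts (or clear the loader cache instead).
  useEffect(() => () => texture.dispose(), [texture]);

  if (!layer.visible) return null;

  const handlePointerMove = (e: ThreeEvent<PointerEvent>) => {
    // Naive drag: move the layer while the primary button is held.
    if (e.nativeEvent.buttons !== 1) return;
    store.updatePosition(layer.id, e.point.x, e.point.y);
  };

  return (
    <mesh
      position={[layer.position.x, layer.position.y, layer.zIndex * 0.01]}
      onClick={() => store.selectLayer(layer.id)}
      onPointerMove={handlePointerMove}
    >
      <planeGeometry args={[1, 1]} />
      <meshBasicMaterial
        map={texture}
        transparent
        opacity={layer.opacity}
        color={store.selectedLayerId === layer.id ? "#ddddff" : "#ffffff"} // crude selection tint
      />
    </mesh>
  );
});
```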
Build out LayersPanel.tsx and LayerItem.tsx to display and manage layers:
- Show all layers with thumbnails
- Support selection, deletion, and reordering
- Handle the empty state
Use the provided CSS Module with BEM conventions for styling.
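As a rough starting point for `LayerItem.tsx` — the class names below only illustrate the block__element--modifier pattern; use the provided CSS Module and the BEM helper in `src/shared/lib` rather than hand-rolled strings:

```tsx
import { observer } from "mobx-react-lite";
import styles from "./LayerItem.module.css";

// Illustrative only; assumes the Layer/LayersStore sketches above.
const LayerItem = observer(({ layer, store }: { layer: Layer; store: LayersStore }) => {
  const isSelected = store.selectedLayerId === layer.id;
  const className = [styles["layer-item"], isSelected && styles["layer-item--selected"]]
    .filter(Boolean)
    .join(" ");

  return (
    <li className={className} onClick={() => store.selectLayer(layer.id)}>
      <img className={styles["layer-item__thumb"]} src={layer.url} alt={layer.name} />
      <span className={styles["layer-item__name"]}>{layer.name}</span>
      <button
        className={styles["layer-item__delete"]}
        onClick={(e) => {
          e.stopPropagation(); // don't also select the layer
          store.removeLayer(layer.id);
        }}
      >
        ✕
      </button>
    </li>
  );
});
```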
Connect the socket events (src/shared/services/ws.ts) to your layers store so that remote layer additions and movements are reflected in the UI. The mock server simulates another user making changes.
Note: Pay attention to how the server sends position data.
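The wiring might look roughly like this — the event names, payload shapes, and server URL are assumptions; check `src/shared/services/ws.ts` and the mock server for the real contract (in particular how positions are encoded), and prefer the provided service over creating a new socket:

```ts
import { io, type Socket } from "socket.io-client";

// Hypothetical event names and payloads, for illustration only.
const socket: Socket = io("http://localhost:4000");

export function bindSocketToStore(store: LayersStore) {
  socket.on("layer:add", (layer: Layer) => {
    store.addLayer(layer); // store methods are MobX actions, so mutating here is safe
  });

  socket.on("layer:move", (payload: { id: string; x: number; y: number }) => {
    store.updatePosition(payload.id, payload.x, payload.y);
  });
}
```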
When a layer is selected, display an "Apply Effect" button above the image on the canvas. When clicked:
- Send a request to the server via WebSocket to process the image
- Show a loading/processing state while waiting
- Handle the server response - on success, a new layer with the processed image should appear
- Handle errors gracefully
The server supports grayscale and blur effects. See src/entities/image/model/types.ts for the relevant type definitions.
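A sketch of the overlay and request flow using drei's `Html` helper. The event name, payload, and the use of a socket.io acknowledgement callback are assumptions — check the mock server and the types in `src/entities/image/model/types.ts` for the actual contract; `socket` here is the client from the previous sketch:

```tsx
import { useState } from "react";
import { Html } from "@react-three/drei";

function ApplyEffectButton({ layer, store }: { layer: Layer; store: LayersStore }) {
  const [processing, setProcessing] = useState(false);
  const [error, setError] = useState<string | null>(null);

  const apply = () => {
    setProcessing(true);
    setError(null);
    // Hypothetical request/response via a socket.io acknowledgement.
    socket.emit(
      "image:applyEffect",
      { layerId: layer.id, effect: "grayscale" },
      (res: { ok: boolean; layer?: Layer; error?: string }) => {
        setProcessing(false);
        if (res.ok && res.layer) store.addLayer(res.layer); // processed image arrives as a new layer
        else setError(res.error ?? "Processing failed");
      }
    );
  };

  return (
    // Html anchors DOM content to a point in the 3D scene.
    <Html position={[layer.position.x, layer.position.y + 0.6, 0]} center>
      <button onClick={apply} disabled={processing}>
        {processing ? "Processing…" : "Apply Effect"}
      </button>
      {error && <span role="alert">{error}</span>}
    </Html>
  );
}
```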
This tests your ability to:
- Render HTML UI elements positioned relative to 3D objects
- Handle async request/response flows over WebSockets
- Manage loading and error states
- Integrate new layers into your existing store
If you have time, optional extras include:
- Undo/Redo for layer operations
- Layer opacity controls
- Visual selection outline on the canvas
- Keyboard shortcuts (delete, arrow keys for nudging)
- Performance optimizations
- Effect type selector (dropdown to choose between grayscale/blur)
Project structure:

```
src/
├── app/              # Next.js App Router
├── entities/         # Entity modules
│   ├── image/        # Image effects
│   └── layer/        # Layers module
├── features/         # Feature modules
│   └── upload/       # File upload feature
├── widgets/          # Widget modules
│   ├── canvas/       # Three.js canvas
│   ├── toolbar/      # Top toolbar
│   └── layersPanel/  # Layer management panel
└── shared/           # Shared utilities and components
    ├── lib/          # Helpers (BEM, etc.)
    └── ui/           # Reusable UI components
```
We'll be looking for:
- Working functionality
- Clean, well-organized code
- Proper TypeScript usage
- Appropriate MobX patterns
- Correct React patterns and cleanup
- Resource management in Three.js
- Consistent styling approach
- Handling of async operations and error states
Please complete the provided NOTES.md file with your thoughts on:
- What was most challenging
- What you'd do with more time
After submission, we'll review your work. If you progress, we'll schedule a 90-minute technical interview, part of which will involve walking us through your implementation, explaining your decisions, and discussing potential modifications.
Please ensure you understand all code you submit.
You may use AI assistants as you would in normal work. Please disclose what you used in NOTES.md. We're evaluating your problem-solving ability and understanding, not whether you avoided AI.
- Ensure both `pnpm run dev` and `pnpm run server` work
- Complete the `NOTES.md` file
- Document anything you didn't finish
- Push all of your work and make sure your private repo is up-to-date.
- Invite will-gendo and ethan-gendo to your private repo (and make sure they have at least read access).
Make reasonable assumptions and document them. We're evaluating your problem-solving approach as much as the final code.
Good luck!