A browser-based Random Forest implementation in JavaScript.
This project provides an interactive, no-dependency Random Forest library and demo web app for classification tasks—designed for education, technical exploration, or lightweight prototyping.
Javascript-RandomForest implements a configurable Random Forest classifier that runs entirely in your web browser.
It lets you set forest and tree hyperparameters, load or generate datasets, and visualize classification performance.
All model operations are performed locally with zero server-side dependencies.
- Forest configuration (a usage sketch follows this list):
  - Set the number of trees, maximum depth, and minimum samples per split
  - Choose the splitting criterion (`gini` or `entropy`)
  - Configure bootstrap sampling, max features, and more
- Training options:
  - Upload or paste a CSV dataset
  - Generate synthetic multi-class data
  - Split data into train/validation/test sets
- Monitoring:
  - Show accuracy, per-class metrics, and out-of-bag (OOB) error (if bootstrap is enabled)
  - Visualize per-tree predictions and overall forest voting
  - Display feature importances after fitting
- Model persistence:
  - Save and load trained forest models as JSON
- Inference:
  - Manual input for single-sample prediction
  - Review individual and aggregate predictions
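The configuration, training, persistence, and inference features above can also be pictured as code. The sketch below is a minimal illustration of how such a forest might be driven programmatically; the `RandomForestClassifier` name, option keys, and method names are assumptions made for illustration, since the shipped workflow is driven through the `index.html` UI and the library's actual API may differ.

```js
// Hypothetical usage sketch: the class name, option keys, and methods below
// are illustrative assumptions, not the library's documented API.
const forest = new RandomForestClassifier({
  nTrees: 10,          // ensemble size
  maxDepth: 5,         // cap on individual tree growth
  minSamplesSplit: 2,  // minimum samples required to split a node
  criterion: "gini",   // or "entropy"
  maxFeatures: "sqrt", // features considered per split: "all", "sqrt", "log2"
  bootstrap: true,     // sample with replacement per tree (enables OOB error)
});

// Tiny toy dataset: feature rows and matching class labels.
const X = [[1, 2], [2, 1], [8, 9], [9, 8]];
const y = ["a", "a", "b", "b"];

forest.train(X, y);                       // fit every tree on (bootstrapped) data
const label = forest.predict([1.5, 1.5]); // majority vote across the trees

// Persist the trained forest as JSON and restore it later.
const saved = JSON.stringify(forest.toJSON());
const restored = RandomForestClassifier.fromJSON(JSON.parse(saved));
```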
The only requirement is a modern web browser (Chrome, Firefox, Edge, Safari, etc.).
- Clone the repository:

  ```
  git clone https://github.com/matthewJamesAbbott/Javascript-RandomForest.git
  ```

- Open `index.html` in your browser.

No server installation is required.
To run a demo with synthetic data:

- Open `index.html`
- Adjust forest parameters (e.g., Trees: `10`, Max Depth: `5`, Bootstrap: `Enabled`)
- Generate a synthetic dataset (e.g., 3 classes, 100 samples per class)
- Click Train Forest
- View the model statistics and output predictions
To use your own data, upload a CSV or paste tabular data, then train and evaluate as above.
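As a rough guide to the expected shape of such data, the snippet below parses a small header-less CSV string into numeric feature rows plus labels and performs a shuffled train/test split. The assumption that the last column holds the class label is made for this sketch only; check the app's expected format for your own files.

```js
// Parse CSV text into feature rows and labels.
// Assumption for this sketch: no header row, last column is the class label,
// all other columns are numeric.
function parseCsv(text) {
  const rows = text.trim().split("\n").map((line) => line.split(","));
  const X = rows.map((r) => r.slice(0, -1).map(Number));
  const y = rows.map((r) => r[r.length - 1]);
  return { X, y };
}

// Shuffle indices (Fisher-Yates) and split into train/test portions.
function trainTestSplit(X, y, testRatio = 0.2) {
  const idx = X.map((_, i) => i);
  for (let i = idx.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [idx[i], idx[j]] = [idx[j], idx[i]];
  }
  const cut = Math.floor(idx.length * (1 - testRatio));
  const pick = (ids) => ({ X: ids.map((i) => X[i]), y: ids.map((i) => y[i]) });
  return { train: pick(idx.slice(0, cut)), test: pick(idx.slice(cut)) };
}

const { X, y } = parseCsv("5.1,3.5,1.4,0.2,setosa\n6.2,3.4,5.4,2.3,virginica");
const { train, test } = trainTestSplit(X, y, 0.2);
```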
- Number of Trees: Control ensemble size
- Max Depth: Set limits on individual tree growth
- Min Samples/Split: Change minimum sample count per split
- Splitting Criterion: Choose between Gini and Entropy for classification splits (see the impurity sketch after this list)
- Max Features: Limit the number of features evaluated per split (`all`, `sqrt`, `log2`)
- Bootstrap Samples: Toggle bootstrapped sampling per tree
- OOB Score: Optionally report out-of-bag accuracy estimate when using bootstrap
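The Gini and Entropy criteria differ only in how node impurity is scored; a split is chosen to maximize the reduction in impurity (information gain, in the entropy case). The functions below compute both measures from an array of class labels using the standard definitions; they are shown for reference and are not taken from this repository's source.

```js
// Count how often each class label occurs, then convert to probabilities.
function classProbabilities(labels) {
  const counts = new Map();
  for (const l of labels) counts.set(l, (counts.get(l) || 0) + 1);
  return [...counts.values()].map((c) => c / labels.length);
}

// Gini impurity: 1 - sum(p_k^2). 0 means a perfectly pure node.
function gini(labels) {
  return 1 - classProbabilities(labels).reduce((s, p) => s + p * p, 0);
}

// Entropy: -sum(p_k * log2(p_k)). Also 0 for a pure node.
function entropy(labels) {
  return -classProbabilities(labels).reduce((s, p) => s + p * Math.log2(p), 0);
}

console.log(gini(["a", "a", "b", "b"]));    // 0.5
console.log(entropy(["a", "a", "b", "b"])); // 1
```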
- For responsive in-browser performance on medium-sized datasets, forests of 5–50 trees with depths of 3–8 are recommended.
- All computation and data handling are performed client-side—your data remains on your device.
- The provided synthetic data generator is intended for demonstration rather than benchmarking (a generic example of this kind of generator is sketched below).
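For context, the sketch below generates Gaussian blobs (one 2-D cluster per class), a common way to produce easily separable demo data. It is a generic illustration of this kind of generator, not the one shipped with the project.

```js
// Standard normal sample via the Box-Muller transform.
function randn() {
  const u = 1 - Math.random(); // avoid log(0)
  const v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// Generate `samplesPerClass` 2-D points per class, each class centered
// on its own randomly chosen mean ("Gaussian blob").
function makeBlobs(numClasses = 3, samplesPerClass = 100, spread = 1.0) {
  const X = [];
  const y = [];
  for (let c = 0; c < numClasses; c++) {
    const center = [Math.random() * 10, Math.random() * 10];
    for (let i = 0; i < samplesPerClass; i++) {
      X.push([center[0] + spread * randn(), center[1] + spread * randn()]);
      y.push(c);
    }
  }
  return { X, y };
}

const { X, y } = makeBlobs(3, 100);
```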
MIT License.
You are free to use, modify, and distribute this project for any purpose.