This repository collects all the scripts and generates the figures used in the Ethereum Consensus Clients' hardware resource analysis. The analysis was performed between February 2024 and May 2024 and consisted of running the six main consensus clients (Prysm, Lighthouse, Teku, Nimbus, Lodestar and Grandine).
Inside this repository you will find several files with which to launch the nodes you intend to analyze. The setup includes an execution node (always Nethermind), a consensus node (Lighthouse, Prysm, Teku, Nimbus, Lodestar or Grandine) and some monitoring tools (such as Prometheus, Node Exporter or cAdvisor).
It is possible to run a metrics server, which absorbs all the generated metrics into a single Prometheus instance. This is very useful when the experiment involves several clients and machines: analyzing the data becomes faster and easier, as everything is contained in a single instance.
Caddy is a reverse proxy that receives all the incoming traffic and redirects it to the metrics server.
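For orientation, a minimal Caddyfile for this kind of setup might look as follows. This is a sketch, not the repository's configuration: the listen address, route prefixes, upstream names and ports are assumptions (9090 and 8428 are the Prometheus and VictoriaMetrics defaults); `Caddyfile.sample` holds the real one.

```
:80 {
	# Protect the write endpoints with credentials
	# (hash generated with `caddy hash-password`)
	basicauth {
		user <bcrypt-hash>
	}

	# Forward remote-write traffic to the metrics backends
	handle_path /promhttp/* {
		reverse_proxy prometheus:9090
	}
	handle_path /victoria/* {
		reverse_proxy victoriametrics:8428
	}
}
```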
- Copy `Caddyfile.sample` into `Caddyfile`. You may configure some credentials if needed.
- Copy `prometheus-template.yml` into `prometheus.yml` (prometheus folder). You may comment out all the scrape jobs, as this Prometheus instance will be a server receiving the data.
- Run `docker-compose up -d prometheus victoriametrics caddy`.
- You may now use `http://user:password@yourIP/promhttp/api/v1/write` and `http://user:password@yourIP/victoria/api/v1/write` as remote write endpoints.
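On a client machine, the corresponding remote write section of `prometheus.yml` might then look like the sketch below (standard Prometheus `remote_write` syntax; the URL, username and password are placeholders from the endpoints above, and `prometheus-template.yml` contains the actual template):

```yaml
remote_write:
  - url: "http://yourIP/promhttp/api/v1/write"
    basic_auth:
      username: user
      password: password
```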
- Decide which client to run.
- Copy `.env.sample` into `.env` and fill in the variables. Tags refer to the client version (Docker image).
- Copy `prometheus-template.yml` into `prometheus.yml` (prometheus folder). Configure the remote write if you use the metrics server (see the server options above); comment it out otherwise. You may also configure which clients to scrape.
- Run `docker-compose up -d nethermind <cl-node> prometheus cadvisor node-exporter`.
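A filled-in `.env` might look roughly like this. Every variable name below is an illustrative assumption, not the repository's actual schema; `.env.sample` lists the real variables.

```
# Hypothetical example only -- see .env.sample for the actual variables
NETWORK=mainnet
NETHERMIND_TAG=<nethermind docker image tag>
CL_NODE_TAG=<consensus client docker image tag>
```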
Data has been collected using a Jupyter Notebook and a Prometheus instance. Please refer to the `report.ipynb` file, which contains a page with the data collection details. After the execution, you will find that several CSV files have been downloaded (under `{network}/csv/`).
Data has been plotted using the same Jupyter Notebook, `report.ipynb`. The last page contains details about the plots, which are built from the CSV files previously downloaded with the same document.
First of all, change into the `analysis` folder. Then install the requirements into a virtual environment (venv).
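These two steps might look like the following (a sketch that assumes the dependency list is named `requirements.txt` inside the `analysis` folder):

```shell
# Enter the analysis folder (when run from the repository root)
cd analysis 2>/dev/null || true

# Create and activate an isolated virtual environment
python3 -m venv venv
. venv/bin/activate

# Install the analysis dependencies, if the file is present
if [ -f requirements.txt ]; then
    pip install -r requirements.txt
fi
```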
Most of the figures can be generated by running `download_csvs.py` and then `plot.py`. In those files, edit `main()` to select which CSVs to download and which plots to generate.
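The selection pattern described above might be sketched as follows. The plot names and helper functions here are illustrative assumptions, not the repository's actual API; the point is only that entries are commented in or out inside the script.

```python
# Hypothetical sketch of the selection pattern in a plotting script.
def plot_cpu():
    return "cpu figure"

def plot_memory():
    return "memory figure"

def plot_disk():
    return "disk figure"

# Comment entries in or out to choose which figures to generate
PLOTS = [
    plot_cpu,
    plot_memory,
    # plot_disk,
]

def main():
    # Generate only the selected figures
    return [plot() for plot in PLOTS]

if __name__ == "__main__":
    print(main())
```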
Before anything else, note that `base.py` contains a section called `run1` and another called `run2`. These define the time ranges at which to download and plot the data, and which files to use. By commenting and uncommenting these sections, you can select which phase to download and plot. By default, `run1` is commented out.
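The `run1`/`run2` switch might look roughly like this. The variable names and dates below are assumptions (chosen only to fall within the February–May 2024 study window); check `base.py` for the real values.

```python
# Illustrative sketch of the run1/run2 switch -- not the actual base.py
from datetime import datetime

# run1 -- first measurement phase (commented out by default)
# START = datetime(2024, 2, 1)
# END = datetime(2024, 3, 1)

# run2 -- second measurement phase (active)
START = datetime(2024, 4, 1)
END = datetime(2024, 5, 1)
```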
Maintained by MigaLabs