A network-agnostic DHT crawler and monitor. The crawler connects to DHT bootstrappers and then recursively follows all entries in their k-buckets until all peers have been visited. The crawler supports the following networks:
- IPFS - Amino DHT
- Ethereum - Consensus Layer
- Ethereum - Execution Layer
- Filecoin
- Polkadot
- Kusama
- Rococo
- Westend
- Avail
- Celestia - Mainnet
- Celestia - Arabica
- Celestia - Mocha
- Pactus
The crawler was:
- 🏆 awarded a prize in the DI2F Workshop hackathon. 🏆
- 🎓 used for the ACM SIGCOMM '22 paper Design and Evaluation of IPFS: A Storage Layer for the Decentralized Web 🎓
Nebula powers:
- 📊 the weekly reports for the IPFS Amino DHT here! 📊
- 🌐 many graphs on probelab.io for most of the supported networks above 🌐
You can find a demo on YouTube: Nebula: A Network Agnostic DHT Crawler 📺
Grafana Dashboard is not part of this repository
- Table of Contents
- Project Status
- Usage
- Install
- How does it work?
- Development
- Report
- Related Efforts
- Demo
- Maintainers
- Contributing
- Support
- Other Projects
- License
The crawler is powering critical IPFS Amino DHT KPIs and is used for the Weekly IPFS Reports as well as for many metrics on probelab.io. The main branch contains the latest changes and should not be considered stable. The latest stable, production-ready release is version 2.2.0.
Head over to the release section and download binaries from the latest stable release.
git clone https://github.com/dennis-tra/nebula
cd nebula
make build
Now you should find the nebula executable in the dist subfolder.
Nebula is a command line tool and provides the crawl sub-command.
To simply crawl the IPFS Amino DHT network run:
nebula --dry-run crawl
The crawler can store its results as JSON documents or in a postgres database; the --dry-run flag prevents it from doing either. Instead, Nebula will just print a summary of the crawl at the end. A crawl takes ~5-10 min depending on your internet connection. You can also specify the network you want to crawl by appending, e.g., --network FILECOIN, and limit the number of peers to crawl by providing the --limit flag with a value of, e.g., 1000. Example:
nebula --dry-run crawl --network FILECOIN --limit 1000
To find out which other network values are supported, you can run:
nebula networks
To store crawl results as JSON files, provide the --json-out command line flag like so:
nebula --json-out ./results/ crawl
After the crawl has finished, you will find the JSON files in the ./results/ subdirectory.
When providing only the --json-out command line flag, you will see that the *_neighbors.json document is empty. This document would contain the full routing table of each peer in the network, which is quite a bit of data (~250MB for the Amino DHT as of April '23), and is therefore disabled by default. To populate the document, you'll need to pass the --neighbors flag to the crawl subcommand.
nebula --json-out ./results/ crawl --neighbors
The routing table information forms a graph, and graph visualization tools often operate on adjacency lists. To convert the *_neighbors.json document to an adjacency list, you can use jq and the following command:
jq -r '.NeighborIDs[] as $neighbor | [.PeerID, $neighbor] | @csv' ./results/2023-04-16T14:32_neighbors.json > ./results/2023-04-16T14:32_neighbors.csv
If you want to store the information in a proper database, you could run make database or make databased (for running it in the background) to start a local postgres instance, and then run Nebula like:
nebula --db-user nebula_test --db-name nebula_test crawl --neighbors
At this point, you can also start Nebula's monitoring process, which would periodically probe the discovered peers to track their uptime. Run in another terminal:
nebula --db-user nebula_test --db-name nebula_test monitor
When Nebula is configured to store its results in a postgres database, it also tracks session information of remote peers. A session is one continuous streak of uptime (see below).
However, this is not implemented for all supported networks. The ProbeLab team is using the monitoring feature for the IPFS, Celestia, Filecoin, and Avail networks. Most notably, the Ethereum discv4/discv5 monitoring implementation still needs some work.
There are a few more command line flags that are documented when you run nebula --help and nebula crawl --help.
The crawl sub-command starts by connecting to a set of bootstrap nodes and constructing the routing tables (Kademlia k-buckets) of these peers based on their PeerIDs. Then nebula builds random PeerIDs with common prefix lengths (CPLs) that fall into each of the remote peer's buckets and asks the peer if it knows any peers that are closer (in XOR distance) to the ones nebula just constructed. This effectively yields a list of all PeerIDs that a peer has in its routing table. The process repeats for all newly found peers until nebula does not find any new PeerIDs.
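The following Go snippet sketches that strategy. It is a simplified illustration, not Nebula's actual implementation: PeerID, maxCPL, randomPeerIDWithCPL, and findNode are stand-ins for the real libp2p types and network calls.
package main

import "fmt"

// PeerID stands in for a real libp2p peer ID in this sketch.
type PeerID string

// maxCPL is the hypothetical number of buckets probed per peer.
const maxCPL = 15

// randomPeerIDWithCPL would generate a random ID that shares a common
// prefix of length cpl with p; findNode would send the actual DHT
// FIND_NODE request. Both are placeholders here.
func randomPeerIDWithCPL(p PeerID, cpl int) PeerID { return p }
func findNode(p PeerID, target PeerID) []PeerID    { return nil }

// crawl visits every reachable peer, starting from the bootstrap set,
// until no new PeerIDs are discovered.
func crawl(bootstrap []PeerID) map[PeerID]bool {
	visited := map[PeerID]bool{}
	queue := append([]PeerID{}, bootstrap...)
	for len(queue) > 0 {
		p := queue[0]
		queue = queue[1:]
		if visited[p] {
			continue
		}
		visited[p] = true
		// For every common prefix length, build a random target ID that
		// falls into the corresponding bucket and ask the peer for the
		// closest peers it knows to that target.
		for cpl := 0; cpl <= maxCPL; cpl++ {
			target := randomPeerIDWithCPL(p, cpl)
			for _, neighbor := range findNode(p, target) {
				if !visited[neighbor] {
					queue = append(queue, neighbor)
				}
			}
		}
	}
	return visited
}

func main() {
	peers := crawl([]PeerID{"bootstrap-1", "bootstrap-2"})
	fmt.Println("peers visited:", len(peers))
}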
If Nebula is configured to store its results in a database, every peer that was visited is written to it. The visit information includes latency measurements (dial/connect/crawl durations), the current set of multi addresses, the current agent version, and the current set of supported protocols. If the peer was dialable, nebula will also create a session instance that contains the following information:
CREATE TABLE sessions (
-- A unique id that identifies this particular session
id INT GENERATED ALWAYS AS IDENTITY,
-- Reference to the remote peer ID. (database internal ID)
peer_id INT NOT NULL,
-- Timestamp of the first time we were able to visit that peer.
first_successful_visit TIMESTAMPTZ NOT NULL,
-- Timestamp of the last time we were able to visit that peer.
last_successful_visit TIMESTAMPTZ NOT NULL,
-- Timestamp when we should start visiting this peer again.
next_visit_due_at TIMESTAMPTZ,
-- When did we notice that this peer is not reachable.
first_failed_visit TIMESTAMPTZ,
-- When did we first notice that this peer is not reachable anymore.
last_failed_visit TIMESTAMPTZ,
-- When did we last visit this peer. For indexing purposes.
last_visited_at TIMESTAMPTZ NOT NULL,
-- When was this session instance updated the last time
updated_at TIMESTAMPTZ NOT NULL,
-- When was this session instance created
created_at TIMESTAMPTZ NOT NULL,
-- Number of successful visits in this session.
successful_visits_count INTEGER NOT NULL,
-- The number of times this session went from pending to open again.
recovered_count INTEGER NOT NULL,
-- The state this session is in (open, pending, closed)
-- open: currently considered online
-- pending: peer missed a dial and is pending to be closed
-- closed: peer is considered to be offline and session is complete
state session_state NOT NULL,
-- Number of failed visits before closing this session.
failed_visits_count SMALLINT NOT NULL,
-- What's the first error before we close this session.
finish_reason net_error,
-- The uptime time range for this session measured from first- to last_successful_visit
uptime TSTZRANGE NOT NULL,
-- The peer ID should always point to an existing peer in the DB
CONSTRAINT fk_sessions_peer_id FOREIGN KEY (peer_id) REFERENCES peers (id) ON DELETE CASCADE,
PRIMARY KEY (id, state, last_visited_at)
) PARTITION BY LIST (state);
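Because uptime is stored as a TSTZRANGE, session durations can be computed directly in SQL. For example, the following illustrative query (assuming the schema above) returns the average observed uptime of all closed sessions:
-- Average observed uptime across all closed sessions (illustrative query)
SELECT AVG(upper(uptime) - lower(uptime)) AS avg_uptime
FROM sessions
WHERE state = 'closed';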
At the end of each crawl, nebula persists general statistics about the crawl like the total duration, dialable peers, encountered errors, agent versions, etc.
Tip: You can use the crawl sub-command with the global --dry-run option, which skips any database operations.
Command line help page:
NAME:
nebula crawl - Crawls the entire network starting with a set of bootstrap nodes.
USAGE:
nebula crawl [command options] [arguments...]
OPTIONS:
--addr-dial-type value Which type of addresses should Nebula try to dial (private, public, any) (default: "public") [$NEBULA_CRAWL_ADDR_DIAL_TYPE]
--addr-track-type value Which type addresses should be stored to the database (private, public, any) (default: "public") [$NEBULA_CRAWL_ADDR_TRACK_TYPE]
--bootstrap-peers value [ --bootstrap-peers value ] Comma separated list of multi addresses of bootstrap peers (default: default IPFS) [$NEBULA_CRAWL_BOOTSTRAP_PEERS, $NEBULA_BOOTSTRAP_PEERS]
--limit value Only crawl the specified amount of peers (0 for unlimited) (default: 0) [$NEBULA_CRAWL_PEER_LIMIT]
--neighbors Whether to persist all k-bucket entries of a particular peer at the end of a crawl. (default: false) [$NEBULA_CRAWL_NEIGHBORS]
--network nebula networks Which network should be crawled. Presets default bootstrap peers and protocol. Run: nebula networks for more information. (default: "IPFS") [$NEBULA_CRAWL_NETWORK]
--protocols value [ --protocols value ] Comma separated list of protocols that this crawler should look for [$NEBULA_CRAWL_PROTOCOLS, $NEBULA_PROTOCOLS]
--workers value How many concurrent workers should dial and crawl peers. (default: 1000) [$NEBULA_CRAWL_WORKER_COUNT]
Network Specific Configuration:
--check-exposed Whether to check if the Kubo API is exposed. Checking also includes crawling the API. (default: false) [$NEBULA_CRAWL_CHECK_EXPOSED]
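For example, to additionally check for exposed Kubo APIs during an IPFS crawl and store the results as JSON (flags as documented above):
nebula --json-out ./results/ crawl --check-exposed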
The monitor sub-command polls the database every 10 seconds for all sessions (see above) that are due to be dialed in the next 10 seconds (based on the next_visit_due_at timestamp). It attempts to dial all peers using their previously saved multi-addresses and updates their session instances accordingly, depending on whether they're dialable or not. The next_visit_due_at timestamp is calculated based on the uptime that nebula has observed for that given peer. If the peer has been up for a long time, nebula assumes that it stays up and thus decreases the dial frequency, i.e., sets the next_visit_due_at timestamp to a time further in the future.
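The exact scheduling logic lives in the code base, but the idea can be sketched in Go as follows; the divisor and bounds are made up for illustration and are not Nebula's actual values.
package main

import (
	"fmt"
	"time"
)

// nextVisitDueAt sketches the monitor's scheduling idea: the longer a
// peer has been observed up, the further into the future the next dial
// is scheduled. The divisor and bounds are illustrative only.
func nextVisitDueAt(firstSuccessful, lastSuccessful time.Time) time.Time {
	uptime := lastSuccessful.Sub(firstSuccessful)
	interval := uptime / 10
	if interval < 30*time.Second {
		interval = 30 * time.Second
	}
	if interval > 15*time.Minute {
		interval = 15 * time.Minute
	}
	return lastSuccessful.Add(interval)
}

func main() {
	now := time.Now()
	fmt.Println("next visit due at:", nextVisitDueAt(now.Add(-2*time.Hour), now))
}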
Command line help page:
NAME:
nebula monitor - Monitors the network by periodically dialing previously crawled peers.
USAGE:
nebula monitor [command options] [arguments...]
OPTIONS:
--workers value How many concurrent workers should dial peers. (default: 1000) [$NEBULA_MONITOR_WORKER_COUNT]
--help, -h show help
The resolve sub-command goes through all multi addresses that are present in the database and resolves them to their respective IP addresses. A single multi address can map to multiple IP addresses due to, e.g., the dnsaddr protocol. Further, it queries Maxmind's GeoLite2 database to extract country information about the IP addresses and UdgerDB to detect datacenters. The command saves all information alongside the resolved addresses.
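A typical invocation against the local development database could look like this (the UdgerDB path is a placeholder for wherever your copy of the Udger database lives):
nebula --db-user nebula_test --db-name nebula_test resolve --udger-db ./udgerdb_v3.dat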
Command line help page:
NAME:
nebula resolve - Resolves all multi addresses to their IP addresses and geo location information
USAGE:
nebula resolve [command options] [arguments...]
OPTIONS:
--udger-db value Location of the Udger database v3 [$NEBULA_RESOLVE_UDGER_DB]
--batch-size value How many database entries should be fetched at each iteration (default: 100) [$NEBULA_RESOLVE_BATCH_SIZE]
--help, -h show help (default: false)
To develop this project, you need Go 1.23 and the following tools:
- golang-migrate/migrate (v4.15.2) to manage the SQL migrations
- volatiletech/sqlboiler (v4.14.1) to generate the Go ORM
- docker to run a local postgres instance
To install the necessary tools you can run make tools. This will use the go install command to download and install the tools into your $GOPATH/bin directory, so make sure it is in your $PATH environment variable.
You need a running postgres instance to persist and/or read the crawl results. Run make database or use the following command to start a local instance of postgres:
docker run --rm -p 5432:5432 -e POSTGRES_PASSWORD=password -e POSTGRES_USER=nebula_test -e POSTGRES_DB=nebula_test --name nebula_test_db postgres:14
Tip: You can use the crawl sub-command with the global --dry-run option, which skips any database operations, or store the results as JSON files with the --json-out flag.
The default database settings for local development are:
Name = "nebula_test"
Password = "password"
User = "nebula_test"
Host = "localhost"
Port = 5432
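With these defaults, you can inspect the crawl results directly, e.g., with the standard psql client (assuming it is installed):
psql "postgresql://nebula_test:password@localhost:5432/nebula_test"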
Migrations are applied automatically when nebula starts and successfully establishes a database connection. To run them manually, you can use:
# Up migrations
make migrate-up
# Down migrations
make migrate-down
# Generate the ORM with SQLBoiler
make models # runs: sqlboiler
# This will update all files in the `pkg/models` directory.
# Create new migration
migrate create -ext sql -dir pkg/db/migrations -seq some_migration_name
To run the tests you need a running test database instance:
make database # or make databased (note the d suffix for "daemon") to start the DB in the background
make test
- Merge everything into main
- Create a new tag with the new version
- Push the tag to GitHub
This will trigger the goreleaser.yml workflow, which creates a new draft release on GitHub.
- wiberlin/ipfs-crawler - A crawler for the IPFS network, code for their paper (arXiv).
- adlrocha/go-libp2p-crawler - Simple tool to crawl libp2p network resources.
- libp2p/go-libp2p-kad-dht - Basic crawler for the Kademlia DHT implementation on go-libp2p.
- migalabs/armiarma - A libp2p open-network crawler with a current focus on Ethereum's CL network.
- migalabs/eth-light-crawler - Ethereum light crawler by @cortze.
The following presentation shows ways to use Nebula by showcasing crawls of the Amino, Celestia, and Ethereum DHTs:
Note
This section is work-in-progress and doesn't include information about all networks yet.
The following sections document our experience with crawling the different networks.
Under the hood, Nebula uses packages from go-ethereum to facilitate peer communication. Mostly, Nebula relies on the discover package. However, we made quite a few changes to the implementation that can be found in our fork of go-ethereum, in the nebula branch.
Most notably, the custom changes include:
- export of internal constants, functions, methods and types to customize their behaviour or call them directly
- changes to the response matcher logic: UDP packets won't be forwarded to all matchers. This was required so that concurrent requests to the same peer don't lead to unhandled packets.
Deployment recommendations:
- CPUs: 4 (better 8)
- Memory > 4 GB
- UDP read buffer size >1 MiB (better 4 MiB) via the --udp-buffer-size=4194304 command line flag or the corresponding environment variable NEBULA_UDP_BUFFER_SIZE. You might need to adjust the maximum buffer size on Linux so that the flag takes effect:
sysctl -w net.core.rmem_max=8388608 # 8MiB
- UDP response timeout of 3s (default)
- Workers: 3000
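Putting these recommendations together, an Ethereum execution layer crawl could be started like this (illustrative; confirm the exact network name with nebula networks and adjust the values to your machine):
nebula --udp-buffer-size=4194304 crawl --network ETHEREUM_EXECUTION --workers 3000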
Feel free to dive in! Open an issue or submit PRs.
It would really make my day if you supported this project through Buy Me A Coffee.
You may be interested in one of my other projects:
- pcp - Command line peer-to-peer data transfer tool based on libp2p.
- image-stego - A novel approach to image manipulation detection. Steganography-based image integrity: Merkle tree nodes are embedded into image chunks so that each chunk's integrity can be verified on its own.
Apache License Version 2.0 © Dennis Trautwein