The data we collect here includes DNS and web server data of public bug bounty programs.
Our aim with this project is to:
- Monitor over 800 companies for new assets
- Help bug bounty hunters get up and running on new programs as quickly as possible
- Give security teams better visibility into their assets
- Reduce the load and noise that some programs face from automated tools (we run the tools on a schedule and share the results with everyone)
The setup consists of two workflows:
- Inventory 3.0 - Targets
- Inventory 3.0
This workflow streamlines the consolidation of bug bounty program data from various sources, ensuring a comprehensive and organized view. Let's break it down:
- Data collection: The workflow fetches data from two sources:
  - Bounty Targets Data: a repository containing a wealth of bug bounty program information.
  - Chaos Public Bug Bounty Programs: a second dataset that provides additional bug bounty program data.
- Data transformation: The collected data is transformed with Python scripts into a single, consistent format, which makes it easy to analyze. You can find the exact data format in the targets.json file.
- Program merging: To avoid duplication, the workflow merges programs that share the same URL. This consolidation eliminates redundancies and presents a unified view of bug bounty programs.
- Community program inclusion: The workflow incorporates an additional set of programs from the community.json file. These programs are merged into the existing dataset, broadening its coverage.
- Final output: The workflow generates a final consolidated JSON file, targets.json, which contains all of the merged bug bounty program data and serves as a centralized, comprehensive resource for bug bounty researchers (a simplified sketch of this pipeline appears below).
Note: The screenshot above provides a visual representation of the workflow.
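
To make these steps concrete, below is a minimal Python sketch of the consolidation pipeline. It is illustrative only: the Chaos raw-file URL, the name/url/domains record layout, and the community.json shape are assumptions here, and the real workflow also normalizes the Bounty Targets Data sources into the same format.

```python
import json
import os
import urllib.request

# Assumed location of the Chaos public bug bounty list (the workflow
# also pulls from the Bounty Targets Data repository).
CHAOS_URL = (
    "https://raw.githubusercontent.com/projectdiscovery/"
    "public-bugbounty-programs/main/chaos-bugbounty-list.json"
)


def fetch_json(url):
    """Download and parse a JSON document."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


def normalize_chaos(raw):
    """Map Chaos entries into an assumed common name/url/domains layout."""
    return [
        {"name": p["name"], "url": p.get("url", ""), "domains": p.get("domains", [])}
        for p in raw.get("programs", [])
    ]


def merge_by_url(programs):
    """Merge programs that share a URL, unioning their domain lists."""
    merged = {}
    for p in programs:
        key = p["url"] or p["name"]  # fall back to the name if there is no URL
        if key in merged:
            merged[key]["domains"] = sorted(set(merged[key]["domains"]) | set(p["domains"]))
        else:
            merged[key] = {**p, "domains": sorted(set(p["domains"]))}
    return list(merged.values())


programs = normalize_chaos(fetch_json(CHAOS_URL))

# Fold in community-submitted programs, assumed to use the same layout.
if os.path.exists("community.json"):
    with open("community.json") as f:
        programs += json.load(f)

# Final consolidated output.
with open("targets.json", "w") as f:
    json.dump(merge_by_url(programs), f, indent=2)
```

Keying the merge on the program URL mirrors the deduplication step described above, since the same program can appear in more than one source.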
- Gathering the targets: Get the list of domains from targets.json and extract the program names.
- Making the workflow run in parallel: The extracted program names are connected to the file-splitter node, which distributes the rest of the workflow per program (a rough local equivalent is sketched after this list).
- Passive Enumeration
- Active Enumeration
  - Use the passive enumeration data to create a new bruteforce wordlist (see the wordlist sketch after this list)
  - Use dsieve to get environments per subdomain level
  - Generate new potential subdomains with mksub, combining the custom wordlist with the additional level2.txt wordlist
  - Resolve the generated candidates again with puredns
- Permutations
- Collecting previous results
  - Use a Python script that gathers all of the previous hostnames.txt files per program
  - Use anew to extract the newly found hostnames
  - Zip the active, passive, and permutation results per program to be pushed to the repository (see the diff-and-zip sketch after this list)
- Reporting
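
As a rough local equivalent of the targets-gathering and fan-out steps, the sketch below writes one domain list per program. The name/domains fields of targets.json are assumptions here, and on the Trickest platform the per-program distribution is done by the file-splitter node rather than by local files.

```python
import json
import os

# Read the consolidated targets file produced by the Targets workflow.
with open("targets.json") as f:
    programs = json.load(f)

os.makedirs("programs", exist_ok=True)

for program in programs:
    # Slugify the program name so it is safe to use as a file name.
    slug = "".join(c if c.isalnum() else "-" for c in program["name"].lower()).strip("-")
    # One domain list per program: this is the unit of work that gets
    # distributed across parallel machines.
    with open(os.path.join("programs", f"{slug}.txt"), "w") as out:
        out.write("\n".join(program.get("domains", [])) + "\n")
```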
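The custom-wordlist and subdomain-generation steps of the active enumeration can be approximated in pure Python. This is only a stand-in for what mksub automates (prepending wordlist entries to apex domains), and the input/output file names are assumptions; in the real workflow the generated candidates are then resolved with puredns.

```python
def build_wordlist(hostnames):
    """Collect every subdomain label seen in the passive results."""
    words = set()
    for host in hostnames:
        # "dev.api.example.com" contributes "dev" and "api"; the last two
        # labels are naively treated as the apex domain.
        for label in host.strip().lower().split(".")[:-2]:
            if label:
                words.add(label)
    return sorted(words)


def generate_candidates(words, apex_domains):
    """Prepend each word to each apex domain, mksub-style."""
    return [f"{word}.{domain}" for domain in apex_domains for word in words]


with open("passive_hostnames.txt") as f:
    passive = f.read().splitlines()

with open("apex_domains.txt") as f:
    apexes = [d for d in f.read().splitlines() if d]

wordlist = build_wordlist(passive)
with open("custom_wordlist.txt", "w") as f:
    f.write("\n".join(wordlist) + "\n")

# These candidates are only guesses until a resolver (e.g. puredns)
# confirms which of them actually exist.
with open("candidates.txt", "w") as f:
    f.write("\n".join(generate_candidates(wordlist, apexes)) + "\n")
```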
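Finally, the collecting step pairs an anew-style diff with zipping the per-program results. The sketch below shows the idea in Python; the directory layout and file names are assumptions for illustration.

```python
import zipfile
from pathlib import Path


def diff_new(previous_file, current_file, out_file):
    """anew-style diff: keep only lines of current_file that are not
    already present in previous_file."""
    seen = set()
    if previous_file.exists():
        seen = set(previous_file.read_text().splitlines())
    new_lines = [line for line in current_file.read_text().splitlines()
                 if line and line not in seen]
    out_file.write_text("\n".join(new_lines) + "\n")
    return new_lines


# Assumed layout: programs/<name>/ holds this run's and last run's results.
for program_dir in Path("programs").iterdir():
    if not program_dir.is_dir():
        continue
    diff_new(program_dir / "hostnames.txt",          # previous run
             program_dir / "hostnames_current.txt",  # this run
             program_dir / "new_hostnames.txt")
    # Zip the passive, active, and permutation results for the push.
    with zipfile.ZipFile(program_dir / "results.zip", "w",
                         zipfile.ZIP_DEFLATED) as zf:
        for name in ("passive.txt", "active.txt", "permutations.txt"):
            path = program_dir / name
            if path.exists():
                zf.write(path, arcname=name)
```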
Note: As described, almost everything in this repository is generated automatically. We carefully designed the workflows (and continue to develop them) to ensure the results are as accurate as possible.
All contributions/ideas/suggestions are welcome! If you want to add/edit a target/workflow, feel free to send us a PR with new targets through community.json, tweet at us @trick3st, or join the conversation on Discord.
We believe in the value of tinkering. Sign up for a demo on trickest.com to customize this workflow to your use case, get access to many more workflows, or build your own from scratch!