An awesome internet discovery button, for developers, tech and science lovers.
A browser extension that takes you to a random site from one of the awesome curated lists. Like good ol' StumbleUpon (which is now dead).
⚡️ Install Chrome Extension ⚡️ Install Firefox Add-on
There are 45,787 unique sites from 554 awesome lists on GitHub, from kind contributors. There are some hidden gems waiting in there 💎.
To stumble: Simply click on the ⚡️ extension button → go to a new awesome site!
(or use Alt + Shift + S)
We have all been down internet rabbit holes.

One minute you're casually reading the news, the next you've read so much about a random topic that you might as well give a TED talk.

What just happened? The rabbit hole pulled you in and you lost track of time, but you also might have discovered something awesome.

So why not embrace it with a fancy button, obviously.
Stay stumblin' on the same topic, or exit back to random mode.
- Clone or fork this repository
- Open Chrome/Brave or another Chromium-based browser
- Open the extensions page at `chrome://extensions`
- Enable developer mode
- Click "Load unpacked" and select the `/extension` folder.
Here are some of the things I'd like to build out for this extension. However, the main goal right now is simply to curate the links as well as I can, add more data sources, and make sure the pages are a good mix of interesting, useful, fun, and exciting.
- Feedback mechanism for good/bad links
- Favourite 'gems' to bookmark folder
- Basic stats
- Categories
  - awesome curated lists
  - tech, science, software, startups, etc.
- Rabbit hole feature (stay on the same topic).
- Firefox support
- Safari support
This extension requires the `<all_urls>` permission in order to show the overlay UI on every stumble page that you visit. It does not access data on these sites. There is no tracking or analytics of any kind, and state is stored only locally.
This extension is made possible by awesome people curating the internet.
The dataset is completely local: you can find it under `/extension/data`. It's generated with `awesome_scraper.py`.
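For the curious, here's a minimal sketch of what that scraping step can look like. It's illustrative only, not the actual `awesome_scraper.py`; the function name and URL are assumptions:

```python
# Illustrative sketch only -- not the actual awesome_scraper.py.
# Fetches the raw README of an awesome list and extracts its outbound links.
import re
import requests

# Matches markdown links like [title](https://example.com).
MARKDOWN_LINK = re.compile(r"\[[^\]]*\]\((https?://[^)\s]+)\)")

def scrape_awesome_list(raw_readme_url: str) -> list[str]:
    """Return the unique http(s) links found in one awesome list's README."""
    readme = requests.get(raw_readme_url, timeout=30).text
    # dict.fromkeys de-duplicates while preserving order.
    return list(dict.fromkeys(MARKDOWN_LINK.findall(readme)))

if __name__ == "__main__":
    links = scrape_awesome_list(
        "https://raw.githubusercontent.com/sindresorhus/awesome/main/readme.md"
    )
    print(f"Found {len(links)} unique links")
```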
To make sure that every link works and is relevant, the dataset is cleaned. Any dead or broken links are removed, as well as links to CI pipelines, recursive links, donation links, etc. This is done with the cleanup functions in `utils.py`. Running a full scrape can take a few hours on a slow connection.
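As a rough sketch, the cleanup boils down to two checks per link: a pattern filter and a liveness probe. The function names and skip patterns below are hypothetical, not the actual `utils.py` API:

```python
# Illustrative sketch of the kind of cleanup described above -- hypothetical names.
import requests

# Fragments that mark a link as noise rather than a destination worth stumbling to.
SKIP_PATTERNS = ("travis-ci.org", "circleci.com", "paypal.me", "patreon.com")

def is_relevant(url: str) -> bool:
    """Filter out CI pipelines, donation links, and similar noise."""
    return not any(pattern in url for pattern in SKIP_PATTERNS)

def is_alive(url: str) -> bool:
    """A link counts as dead if it 404s, fails SSL, or returns a server error."""
    try:
        response = requests.head(url, timeout=10, allow_redirects=True)
        return response.status_code < 400
    except requests.RequestException:  # SSL errors, timeouts, DNS failures, ...
        return False

def clean(links: list[str]) -> tuple[list[str], list[str]]:
    """Split links into (kept, dead) -- probing every URL is why a run takes hours."""
    kept, dead = [], []
    for url in filter(is_relevant, links):
        (kept if is_alive(url) else dead).append(url)
    return kept, dead
```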
After every scrape, the dead or broken links (those with 404, SSL, or other server errors) that were removed from the dataset are recorded in these text files.
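The reporting step could then look something like this sketch (the `dead_links/` directory and file naming are illustrative):

```python
# Illustrative sketch: one dead-link report per awesome list, using the
# hypothetical clean() from the sketch above.
from pathlib import Path

def save_dead_link_report(list_name: str, dead: list[str]) -> None:
    """Persist dead links so list maintainers can fix or remove them."""
    report = Path("dead_links") / f"{list_name}.txt"
    report.parent.mkdir(exist_ok=True)
    report.write_text("\n".join(dead))
```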
❗️If you are one of the awesome list maintainers, find the text file for your awesome list and check it for dead links; remove them from your list, or update them with a valid URL. If the file is empty, all good!
☝️Submit an issue 🤘Submit a PR
✨ Stay curious!