This project is a Cloudflare Worker designed to scrape web pages and extract useful information, including a markdown-formatted version of the content. It's built to handle requests to scrape a given URL and return structured data about the page.
- Fetch and scrape content from any given URL.
- Extract metadata such as title, byline, excerpt, and more.
- Convert HTML content to clean markdown format.
- Handle requests with optional markdown formatting.
- Remove everything but the content (Reader Mode).
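The metadata fields the worker extracts (title, byline, excerpt, and the fields shown in the example response below) match the output of Mozilla's Readability library, so one plausible shape for the handler is a Readability pass followed by HTML-to-markdown conversion. The sketch below is only an illustrative guess at such a pipeline, not this project's actual code; it assumes `@mozilla/readability`, `linkedom`, and `turndown` as dependencies.

```ts
// Illustrative sketch only: assumes @mozilla/readability, linkedom, and turndown
// are bundled with the Worker. This is not necessarily how this project is built.
import { Readability } from "@mozilla/readability";
import { parseHTML } from "linkedom";
import TurndownService from "turndown";

export default {
  async fetch(request: Request): Promise<Response> {
    const { searchParams } = new URL(request.url);
    const target = searchParams.get("url");
    if (!target) {
      return new Response(JSON.stringify({ error: "Missing url parameter" }), { status: 400 });
    }
    // markdown defaults to true; only an explicit "false" disables it.
    const useMarkdown = searchParams.get("markdown") !== "false";

    // Fetch the target page and run a Reader Mode extraction over it.
    const html = await (await fetch(target)).text();
    const { document } = parseHTML(html);
    const article = new Readability(document).parse();
    if (!article) {
      return new Response(JSON.stringify({ error: "Could not extract content" }), { status: 422 });
    }

    // article.content is the stripped HTML; textContent carries the markdown
    // (or plain text) rendition, matching the documented response shape.
    const textContent = useMarkdown
      ? new TurndownService().turndown(article.content)
      : article.textContent;

    return new Response(JSON.stringify({ page: { ...article, textContent } }), {
      headers: { "Content-Type": "application/json" },
    });
  },
};
```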
To use this worker, send a GET request to the worker's endpoint with the `url` query parameter specifying the page to be scraped. Optionally, you can include the `markdown` query parameter to specify whether the content should be returned in markdown format (default: `true`).
Example request:

```
GET https://<worker-name>.workers.dev/?url=https://example.com&markdown=true
```
Example response:

```json
{
  "page": {
    "byline": "Author Name",
    "content": "... stripped html content ...",
    "dir": null,
    "excerpt": "...",
    "lang": null,
    "length": 191,
    "siteName": null,
    "textContent": "... markdown content ...",
    "title": "Example Domain"
  }
}
```
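If you are calling the worker from code, a small typed client is enough. The sketch below is a TypeScript usage example; the `ScrapedPage` field types are inferred from the example response above, and the `scrape` helper is hypothetical, not part of this project.

```ts
// Usage sketch: a small typed client for the worker. Field types are inferred
// from the example response above; adjust to match your deployment.
interface ScrapedPage {
  byline: string | null;
  content: string;
  dir: string | null;
  excerpt: string | null;
  lang: string | null;
  length: number;
  siteName: string | null;
  textContent: string;
  title: string;
}

const WORKER_URL = "https://<worker-name>.workers.dev"; // replace with your worker URL

async function scrape(url: string, markdown = true): Promise<ScrapedPage> {
  const endpoint = `${WORKER_URL}/?url=${encodeURIComponent(url)}&markdown=${markdown}`;
  const res = await fetch(endpoint);
  if (!res.ok) {
    throw new Error(`Scrape failed with status ${res.status}`);
  }
  const body = (await res.json()) as { page: ScrapedPage };
  return body.page;
}

// Example: scrape example.com and print the markdown rendition.
scrape("https://example.com").then((page) => {
  console.log(page.title);
  console.log(page.textContent);
});
```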
To deploy this Cloudflare Worker, you have two options:

- Use the Wrangler CLI:

  ```
  npx wrangler deploy
  ```

- Click the "Deploy to Cloudflare Workers" button at the top of this README.
You can also run the worker in a Docker container by first building the image and then running the container. Run the commands below from the project root:

```
docker compose -f docker-compose-dev.yaml build
docker compose -f docker-compose-dev.yaml up -d
```
Modifications to your running container can be made in the `docker-compose-dev.yaml` file. The container listens on port 8787, so requests work the same way as against the deployed worker:

```
GET http://<IP ADDR>:8787/?url=https://example.com&markdown=true
```