Webcrawler

This Node.js web crawler fetches and analyzes links from a specified website. Starting from the main page, it recursively navigates through linked pages and records how often each link occurs. The project is built with Node.js and includes a comprehensive test suite to help ensure its reliability.

🔗 Links

portfolio linkedin twitter

Run Locally

Clone the project

  git clone https://github.com/bupd/webcrawler.git

Go to the project directory

  cd webcrawler

Install dependencies

  npm install

Start the crawler

  npm run start <Website Link Here>

Demo


Features

  • Link Fetching: The application starts by fetching links from the main page of the provided website.
  • Recursive Crawling: It recursively follows every link found on crawled pages, staying on the main domain and excluding external links and subdomains.
  • Link Occurrences: The application counts and displays how many times a link is mentioned and referenced across the website.
  • Command-Line Execution: Run the application by executing npm run start <Website Link> to initiate the crawling process.
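The features above can be sketched as a breadth-first traversal. This is an illustrative outline only (the `crawl` and `fetchLinks` names are assumptions, not the repository's actual code); the page-fetching step is injected so the traversal logic stands on its own:

```javascript
// Illustrative breadth-first crawl sketch -- not the repository's actual code.
// `fetchLinks(pageURL)` is an injected async function returning the links
// found on a page, so no real network access is required here.
async function crawl(baseURL, fetchLinks) {
  const counts = {};           // link -> number of times it was seen
  const visited = new Set();   // pages already fetched
  const queue = [baseURL];
  const baseHost = new URL(baseURL).hostname;

  while (queue.length > 0) {
    const page = queue.shift();
    if (visited.has(page)) continue;
    visited.add(page);

    for (const link of await fetchLinks(page)) {
      // Skip external links and subdomains; only the main domain is crawled.
      if (new URL(link).hostname !== baseHost) continue;
      counts[link] = (counts[link] || 0) + 1;
      if (!visited.has(link)) queue.push(link);
    }
  }
  return counts;
}
```

The `visited` set prevents re-fetching a page that is linked from several places, while `counts` still records every mention of that link.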

Running Tests

To run the test suite, execute the following command

  npm run test

Contributing

Contributions are welcome! If you find issues or have suggestions for improvements, please open an issue or submit a pull request.

  1. Fork the repository.
  2. Create your feature branch: git checkout -b feature/new-feature.
  3. Commit your changes: git commit -m 'Add new feature'.
  4. Push to the branch: git push origin feature/new-feature.
  5. Open a pull request.

Support

For support, email bupdprasanth@gmail.com or join our Discord channel.

Acknowledgements

  • This application is built on Node.js. Happy crawling!