- Intro
- Contributing
- Installation
- Add products
- Scrape products
- Delete data
- User settings
- Clean up data
- View the latest datapoint of product(s)
- View all products
- Visualize data
With this program you can easily scrape and track prices of products on multiple websites.
This program can also visualize the price over time of the products being tracked. That can be helpful if you want to buy a product in the future and want to know if a discount might be around the corner.
Requires Python 3.10+
Feel free to fork the project and create a pull request with new features or refactoring of the code. Also feel free to open issues with problems or suggestions for new features.
In version v1.1, I have changed how data is stored in `records.json`: `dates` under each product have been changed to `datapoints`, which is now a list containing dictionaries with `date` and `price` keys.
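Concretely, the change might look like this (a sketch with made-up example values; the exact layout of the old format may differ from this):

```python
# Before v1.1 (sketch): prices were stored under a "dates" key.
old_product = {
    "dates": {
        "2021-01-01": {"price": 999.0},
    }
}

# From v1.1 (sketch): "datapoints" is a list of dicts with "date" and "price" keys.
new_product = {
    "datapoints": [
        {"date": "2021-01-01", "price": 999.0},
    ]
}
```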
If you want to update your data to be compatible with version v1.1, then open an interactive Python session where this repository is located and run the following commands:

```python
>>> from scraper.format_to_new import Format
>>> Format.format_old_records_to_new()
```
In version v2.3.0, I have added the column `short_url` to `products.csv`. If you have added products before v2.3.0, then run the following commands in an interactive Python session to add the new column:

```python
>>> from scraper.format_to_new import Format
>>> Format.add_short_urls_to_products_csv()
```
In version v3.0.0, I have changed where data is stored from a JSON file to a SQLite database. If you have data from before v3.0.0, then run the following commands in an interactive Python session to add the data from `records.json` to the database (OBS: Pandas is required):

```python
>>> from scraper.format_to_new import Format
>>> Format.from_json_to_db()
```

NOTE: This will replace the content in the database with what is in `records.json`. That means if you have products and/or datapoints in the database but not in `records.json`, they will be deleted.
OBS: If you don't have Pandas installed, run this command:

```shell
pip3 install pandas
```
Requires Python 3.10+
Clone this repository and move into it:

```shell
git clone https://github.com/Crinibus/scraper.git
cd scraper
```

Then make sure you have the required modules by running this in the terminal:

```shell
pip3 install -r requirements.txt
```
To add a single product, use the following command, where you replace `<category>` and `<url>` with your category and url:

```shell
python3 main.py -a -c <category> -u <url>
```

E.g.

```shell
python3 main.py -a -c vr -u https://www.komplett.dk/product/1168594/gaming/spiludstyr/vr/vr-briller/oculus-quest-2-vr-briller
```
This adds the category (if new) and the product to the `records.json` file, and adds a line at the end of the `products.csv` file so the script can scrape the price of the new product.
To add multiple products at once, just specify another category and url with `-c <category>` and `-u <url>`. E.g. with the following command I add two products:

```shell
python3 main.py -a -c <category> -u <url> -c <category2> -u <url2>
```

This is equivalent to the above:

```shell
python3 main.py -a -c <category> <category2> -u <url> <url2>
```
OBS: The url must have a schema like `https://` or `http://`.

OBS: If an error occurs when adding a product, it might be because the url has a `&` in it. When this happens, just put quotation marks around the url. This should solve the problem. If this doesn't solve the problem, then submit an issue.
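The schema requirement can be sketched with the standard library (this is an illustration of the rule, not the project's actual validation code):

```python
from urllib.parse import urlparse

def has_valid_schema(url: str) -> bool:
    """Return True if the url starts with an http:// or https:// schema."""
    return urlparse(url).scheme in ("http", "https")
```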
This scraper can (so far) scrape prices on products from:
- Amazon*
- eBay.com
- Komplett.dk
- Proshop.dk
- Computersalg.dk
- Elgiganten.dk & Elgiganten.se
- AvXperten.dk
- Av-Cables.dk
- Power.dk
- Expert.dk
- MM-Vision.dk
- Coolshop.dk
- Sharkgaming.dk
- Newegg.com & Newegg.ca
- HifiKlubben.dk
- Shein.com
*OBS: these Amazon domains should work: .com, .ca, .es, .fr, .de and .it
The listed Amazon domains are from my quick testing with one or two products from each domain.
If you find that some other Amazon domains work, or some of the listed ones don't, please create an issue.
To scrape prices of products, run this in the terminal:

```shell
python3 main.py -s
```

To scrape with threads, run the same command but with the `--threads` argument:

```shell
python3 main.py -s --threads
```
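The idea behind threaded scraping can be sketched as below; `fetch_price` is a hypothetical placeholder, and the project's actual implementation may differ:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_price(url: str) -> float:
    # Placeholder for a real scrape of one product page.
    return 0.0

def scrape_all(urls: list[str]) -> list[float]:
    # Scrape urls concurrently so slow websites don't block the rest;
    # results come back in the same order as the input urls.
    with ThreadPoolExecutor() as executor:
        return list(executor.map(fetch_price, urls))
```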
When you add a new product, the product is activated to be scraped. If you don't wish to scrape a product anymore, you can deactivate it with the following command:

```shell
python3 main.py --deactivate --id <id>
```

You can activate a product again with the following command:

```shell
python3 main.py --activate --id <id>
```
If you want to start from scratch with no data in the `records.json` and `products.csv` files, then just run the following command:

```shell
python3 main.py --delete --all
```

You can also just delete some products or some categories:

```shell
python3 main.py --delete --id <id>
python3 main.py --delete --name <name>
python3 main.py --delete --category <category>
```

Then just add products as described in the Add products section.
If you just want to delete all datapoints for every product, then run this command:

```shell
python3 main.py --reset --all
```

You can also just delete datapoints for some products:

```shell
python3 main.py --reset --id <id>
python3 main.py --reset --name <name>
python3 main.py --reset --category <category>
```
User settings can be added and changed in the file `settings.ini`.

Under the section `ChangeName` you can change how the script changes product names, so similar products will be placed under the same product in the `records.json` file.

When adding a new setting under the section `ChangeName` in `settings.ini`, there must be a line with `key<n>` and a line with `value<n>`, where `<n>` is the "link" between keywords and value. E.g. `value3` is the value to `key3`.

In `key<n>` you set the keywords (separated by commas) that the product name must contain for it to be changed to what `value<n>` is equal to. For example, if the user settings are the following:
```ini
[ChangeName]
key1 = asus,3080,rog,strix,oc
value1 = asus geforce rtx 3080 rog strix oc
```
The script checks if a product name has all of the words in `key1`; if it does, the name gets changed to what `value1` is.
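The matching described above can be sketched like this (a simplified illustration, not the project's actual code):

```python
def apply_change_name(name: str, keywords: list[str], value: str) -> str:
    """If the name contains every keyword (case-insensitive), replace it with value."""
    lowered = name.lower()
    if all(keyword in lowered for keyword in keywords):
        return value
    return name
```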
You can change the delay between each time a url is requested by changing the field `request_delay` in the file `scraper/settings.ini` under the `Scraping` section.

The default is 0 seconds, but to avoid the websites you scrape products from thinking you are DDoS attacking them, or you being temporarily restricted from scraping their websites, set `request_delay` in `settings.ini` to a higher number of seconds, e.g. 5 seconds.
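The effect of the setting can be sketched as below; `visit_urls` and `fetch` are hypothetical names used only for illustration:

```python
import time

def visit_urls(urls, request_delay=5, fetch=print):
    # Sleep request_delay seconds between consecutive requests,
    # but not before the first one.
    for index, url in enumerate(urls):
        if index > 0:
            time.sleep(request_delay)
        fetch(url)
```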
If you want to clean up your data, meaning you want to remove unnecessary datapoints (datapoints that have the same price as the datapoint before and after it), then run the following command:

```shell
python3 main.py --clean-data
```
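The clean-up rule, a datapoint is redundant when its price equals both its neighbours' prices, can be sketched like this (a simplified illustration, not the project's actual code):

```python
def clean_datapoints(datapoints: list[dict]) -> list[dict]:
    """Drop datapoints whose price equals both the previous and the next one."""
    cleaned = []
    for i, point in enumerate(datapoints):
        is_middle = 0 < i < len(datapoints) - 1
        if is_middle and datapoints[i - 1]["price"] == point["price"] == datapoints[i + 1]["price"]:
            continue  # redundant: price unchanged on both sides
        cleaned.append(point)
    return cleaned
```

The first and last datapoints of a price run are always kept, so the graph shape is unchanged.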
You can search for product names and categories you have in your `records.json` by using the argument `--search [<word> ...]`. The search is like a keyword search, so e.g. if you enter `--search logitech`, all product names and categories that contain the word "logitech" are found.

You can search with multiple keywords, just separate them with a space: `--search logitech corsair`. Here all the product names and categories that contain the word "logitech" or "corsair" are found.
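The keyword matching can be sketched like this (an illustration of the described behaviour, not the project's actual code):

```python
def search(names: list[str], keywords: list[str]) -> list[str]:
    """Return names containing at least one of the keywords (case-insensitive)."""
    return [
        name for name in names
        if any(keyword.lower() in name.lower() for keyword in keywords)
    ]
```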
If you want to view the latest datapoint of a product, you can use the argument `--latest-datapoint` with `--id` and/or `--name`.

Example:

```shell
python3 main.py --name "logitech z533" --latest-datapoint
```

The above command will show the latest datapoint for all the websites the specified product, in this case "logitech z533", has been scraped from, and will show something like this:
```
LOGITECH Z533
> Komplett - 849816
   - DKK 999.0
   - 2022-09-12
> Proshop - 2511000
   - DKK 669.0
   - 2022-09-12
> Avxperten - 25630
   - DKK 699.0
   - 2022-09-12
```
To view all the products you have scraped, you can use the argument `--list-products`.

Example:

```shell
python3 main.py --list-products
```

This will list all the products in the following format:

```
CATEGORY
> PRODUCT NAME
   - WEBSITE NAME - PRODUCT ID
   - ✓ WEBSITE NAME - PRODUCT ID
```
The check mark (✓) shows that the product is activated.
To visualize your data, just run `main.py` with the `-v` or `--visualize` argument and then specify which products you want to be visualized. These are your options for how you want to visualize your products:

- `--all` to visualize all your products
- `-c [<category> [<category> ...]]` or `--category [<category> [<category> ...]]` to visualize all products in one or more categories
- `--id [<id> [<id> ...]]` to visualize one or more products with the specified id(s)
- `-n [<name> [<name> ...]]` or `--name [<name> ...]` to visualize one or more products with the specified name(s)
- `--compare` to compare two or more products with the specified id(s), name(s) and/or category(s), or all products, on one graph. Use with `--id`, `--name`, `--category` and/or `--all`
### Show graphs for all products

To show graphs for all products, run the following command:

```shell
python3 main.py -v --all
```
### Show graph(s) for specific products

To show a graph for only one product, run the following command, where `<id>` is the id of the product you want a graph of:

```shell
python3 main.py -v --id <id>
```

For multiple products, just add another id, like so:

```shell
python3 main.py -v --id <id> <id>
```
### Show graphs for products in one or more categories

To show graphs for all products in one category, run the following command, where `<category>` is the category you want graphs from:

```shell
python3 main.py -v -c <category>
```

For multiple categories, just add another category, like so:

```shell
python3 main.py -v -c <category> <category>
```
### Show graphs for products with a specific name

To show graphs for product(s) with a specific name, run the following command, where `<name>` is the name of the product(s) you want graphs of:

```shell
python3 main.py -v --name <name>
```

For multiple products with different names, just add another name, like so:

```shell
python3 main.py -v --name <name> <name2>
```

If the name of a product has multiple words in it, then just add quotation marks around the name.
### Only show graphs for products that are up to date

To only show graphs for the products that are up to date, use the flag `--up-to-date` or `-utd`, like so:

```shell
python3 main.py -v --all -utd
```

The use of the flag `-utd` is only implemented when visualizing all products, like the example above, or when visualizing all products in a category:

```shell
python3 main.py -v -c <category> -utd
```
### Compare two products

To compare two products on one graph, use the flag `--compare` with the flag `--id`, `--name`, `--category` and/or `--all`, like so:

```shell
python3 main.py -v --compare --id <id>
python3 main.py -v --compare --name <name>
python3 main.py -v --compare --category <category>
python3 main.py -v --compare --id <id> --name <name> --category <category>
python3 main.py -v --compare --all
```

OBS: when using `--name` or `--category`, multiple products can be visualized.