A Python script to automate the cleaning of the Downloads folder

This project is a Python script that automates cleaning of the Downloads folder, or any other folder you want. In my daily life I download a lot of files, and my Downloads folder was always a mess, so I decided to start this project.
This is not just a simple script; it has a number of features and settings that you can customize:
- Move files to folders based on their extensions
- Create folders based on the extensions found in settings
- Indicate how many days the files will be kept in the sorted folder before being deleted
- Decide if you want to delete the files or send them to the trash
- Set the maximum size of the files that will be moved
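As a rough sketch of the core idea (this is not the project's actual code; the folder names and extension mapping below are made up for illustration, and the real mapping lives in the settings file), sorting files into folders by extension can look like this:

```python
from pathlib import Path
import shutil

# Illustrative mapping of folder names to the extensions they collect.
EXTENSIONS = {
    "Images": [".png", ".jpg"],
    "Documents": [".pdf", ".docx"],
}

def sort_folder(source: Path) -> None:
    """Move each file in `source` into a subfolder based on its extension."""
    # Snapshot the directory listing first, since we create subfolders
    # and move files while iterating.
    for entry in list(source.iterdir()):
        if not entry.is_file():
            continue
        for folder, extensions in EXTENSIONS.items():
            if entry.suffix.lower() in extensions:
                destination = source / folder
                destination.mkdir(exist_ok=True)  # create the folder on demand
                shutil.move(str(entry), str(destination / entry.name))
                break  # each file goes to at most one folder
```

Files whose extension is not listed are simply left in place.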
Requirements:

- Python 3.x
- pip
Installation:

1. Clone the repository:

   ```shell
   git clone https://github.com/LimbersMay/AutomateDownloadsFolder.git
   ```

2. Create a virtual environment (recommended):

   ```shell
   python -m venv venv
   ```

3. Activate the virtual environment:

   ```shell
   # Linux
   source venv/bin/activate

   # Windows
   venv\Scripts\activate
   ```

4. Install the requirements:

   ```shell
   pip install -r requirements.txt
   ```

5. Rename the `settings.example.json` file to `settings.json` (located in the `data` folder).
There are several ways to use this script, for example:
- Run the script manually
- Create a cron job
- Start the script when the computer starts
Here, I will show you how to start the script when the computer starts.
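If you prefer the cron route instead, a single crontab entry is enough. The schedule and paths below are examples, not values from this project; adjust them to wherever you cloned the repository (and point at the virtual environment's Python if you created one):

```
# Run the cleaner every hour, on the hour (edit with: crontab -e)
0 * * * * /path/to/venv/bin/python /path/to/AutomateDownloadsFolder/main.py
```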
1. Create a new systemd user service called `automate_downloads_folder.service` in the `/lib/systemd/user` directory:

   ```shell
   touch /lib/systemd/user/automate_downloads_folder.service
   ```

2. Open the file with your favorite text editor and paste the following code:

   ```ini
   [Unit]
   Description=My Script

   [Service]
   Type=simple
   ExecStart=/usr/bin/python /home/limbers/Documents/PersonalProjects/automateDownloadsFolde>
   WorkingDirectory=/home/limbers/Documents/PersonalProjects/automateDownloadsFolder

   [Install]
   WantedBy=default.target
   ```

3. Reload the systemd daemon:

   ```shell
   systemctl --user daemon-reload
   ```

4. Enable the service:

   ```shell
   systemctl --user enable automate_downloads_folder.service
   ```

Remember to change the `ExecStart` field to the path where the script is located.

Note: if you are using a virtual environment, you must use the path to the Python executable inside the virtual environment instead of `/usr/bin/python`. That path is usually `/path/to/venv/bin/python`.
For Windows, the easiest way to start the script when the computer starts is to create an exe of the script and put a shortcut to the exe in the startup folder. To create the exe, follow the steps below:
1. Install pyinstaller:

   ```shell
   pip install pyinstaller
   ```

2. Create the exe:

   ```shell
   pyinstaller --noconfirm --onefile --windowed --icon "./assets/work.ico" --hidden-import "plyer.platforms.win.notification" "./main.py"
   ```

3. Move the exe from the `dist` folder to the root folder (where the `main.py` file is located).
4. Feel free to delete the `build` and `dist` folders and any other files created by pyinstaller.
5. Create a shortcut to the exe and copy it.
6. Press `Win + R` and type `shell:startup` to open the startup folder.
7. Paste the shortcut into the startup folder.
8. Restart the computer.
The settings are in the `settings.json` file, located at `data/settings.json`. These settings are:

- `extensions`: List of extensions that will be used to create the folders and move the files.
- `daysToKeep`: Number of days that the files will be kept in the sorted folder before being deleted.
- `sendToTrash`: If `true`, the files will be sent to the trash. If `false`, the files will be deleted from the system.
- `maxSizeInMb`: Maximum size of the files that will be moved to the sorted folder.
- `paths`: List of paths that will be used to search for files to be moved.
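A `settings.json` along these lines illustrates the shape of the file (the folder names, extensions, and values are examples rather than the shipped defaults, and the exact structure of the `extensions` entry may differ in the real project):

```json
{
  "extensions": {
    "Images": [".png", ".jpg"],
    "Documents": [".pdf", ".docx"]
  },
  "daysToKeep": 30,
  "sendToTrash": true,
  "maxSizeInMb": 500,
  "paths": ["/home/user/Downloads"]
}
```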
Feel free to change the names of the extensions; the program will create the folders with the names you put in the settings.
- Make it possible to have multiple source folders where the files will be searched.
- Make it possible to have multiple destination folders where the files will be moved.
- Use sqlite3 to store all the data instead of a JSON file. (The change from JSON to sqlite3 wouldn't be hard to do, because I used the repository pattern to separate the data layer from the business layer, so I just need to create a new repository that uses sqlite3 instead of JSON.)