-
That'd be incredible. Even for me, on an infinitely smaller, personal scale, it would make a massive difference on my i9-12900K when comparing thousands of pictures (sometimes hundreds of thousands). Even better would be letting us choose how many threads it can use, so we could scale it to whatever we're willing to give while doing other things. That way it wouldn't slow down the computer (or not too much) while still being much faster!
-
I wish this were so. It doesn't need to be all elbows; it could be as "nice" as possible on I/O and CPU scheduling and still run much faster than it does now.
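For what it's worth, a worker process can opt into that politeness itself on Linux. Here's a rough sketch, not anything dupeGuru does today: it drops its CPU priority with nice() and its I/O priority to the idle class via the ioprio_set syscall. The numeric constants are the raw kernel ABI values, spelled out because glibc doesn't wrap this call:

```c
/* Hypothetical sketch (Linux-only): make the current process "nice" on
 * both CPU and I/O scheduling before doing heavy scanning/hashing work. */
#include <errno.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Raw kernel ABI values; glibc provides no ioprio_set() wrapper. */
#define IOPRIO_WHO_PROCESS 1
#define IOPRIO_CLASS_IDLE  3
#define IOPRIO_CLASS_SHIFT 13

int main(void)
{
    /* Lowest CPU priority: run only when nothing else wants the CPU much.
     * nice() can legitimately return -1, so check errno to detect failure. */
    errno = 0;
    if (nice(19) == -1 && errno != 0)
        perror("nice");

    /* Idle I/O class: disk reads proceed only when the disk is otherwise
     * idle (the 0 argument means "this process"). */
    if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
                IOPRIO_CLASS_IDLE << IOPRIO_CLASS_SHIFT) == -1)
        perror("ioprio_set");

    /* ... the heavy scanning/hashing would go here ... */
    puts("running at minimal CPU and I/O priority");
    return 0;
}
```

The same effect is available from outside the program with `nice -n 19 ionice -c 3 <command>`, assuming the util-linux ionice tool is installed.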
-
I would love to see a version of dupeGuru that can take advantage of high core counts. For example, I'm running several servers, each with 64 cores/128 threads, 512 GB of ECC RAM, and a 528 TB ZFS array. One of the servers has over 3.3 billion inodes in use.
I'm using dupeGuru to attempt to identify duplicate files, and it's been running for a week with very little progress.
If dupeGuru could be configured to hash files in parallel, with one hash worker per thread, that would reduce the time needed to complete duplicate identification by orders of magnitude.
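Purely to illustrate the shape of that (not dupeGuru's actual internals, which are Python), here is a minimal C/pthreads sketch where a fixed pool of worker threads pulls paths off a shared list and hashes them. The file list, the thread count, and the FNV-1a hash are all stand-ins:

```c
/* Hypothetical sketch: hash a list of files in parallel with a fixed pool
 * of worker threads. FNV-1a stands in for whatever hash dupeGuru uses.
 * Compile with: cc -pthread sketch.c */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define NTHREADS 8  /* would be user-configurable, per the first comment */

static const char *files[] = { "a.bin", "b.bin", "c.bin", "d.bin" };
static const size_t nfiles = sizeof files / sizeof files[0];
static size_t next_file = 0;  /* shared work-queue index */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* 64-bit FNV-1a over a file's contents; 0 signals "couldn't open". */
static uint64_t hash_file(const char *path)
{
    uint64_t h = 1469598103934665603ULL;
    unsigned char buf[65536];
    size_t n;
    FILE *f = fopen(path, "rb");
    if (!f) return 0;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        for (size_t i = 0; i < n; i++)
            h = (h ^ buf[i]) * 1099511628211ULL;
    fclose(f);
    return h;
}

/* Each worker claims the next unhashed file until none remain. */
static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        if (next_file >= nfiles) { pthread_mutex_unlock(&lock); break; }
        const char *path = files[next_file++];
        pthread_mutex_unlock(&lock);
        printf("%s -> %016llx\n", path, (unsigned long long)hash_file(path));
    }
    return NULL;
}

int main(void)
{
    pthread_t tids[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tids[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tids[i], NULL);
    return 0;
}
```

In a real version the pool size would be the user-selectable thread count asked for above, since past some point the disks rather than the cores become the bottleneck.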
If this were an old-school C program, I'd use the fork() system call, with semaphores to communicate the completion status of each forked process.
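A matching sketch of that old-school approach, again with placeholder file names and the same stand-in hash: the parent forks one child per file, each child posts a process-shared POSIX semaphore when its hash is done, and the parent waits on the semaphore once per file before reaping the children:

```c
/* Hypothetical sketch of the fork-and-semaphore approach described above.
 * One child process per file; a process-shared POSIX semaphore signals
 * each completion back to the parent. Linux/POSIX; link with -pthread. */
#include <semaphore.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

/* Stand-in hash (64-bit FNV-1a), as in the previous sketch. */
static uint64_t hash_file(const char *path)
{
    uint64_t h = 1469598103934665603ULL;
    unsigned char buf[65536];
    size_t n;
    FILE *f = fopen(path, "rb");
    if (!f) return 0;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        for (size_t i = 0; i < n; i++)
            h = (h ^ buf[i]) * 1099511628211ULL;
    fclose(f);
    return h;
}

int main(void)
{
    const char *files[] = { "a.bin", "b.bin", "c.bin", "d.bin" };
    const size_t nfiles = sizeof files / sizeof files[0];

    /* The semaphore lives in shared memory so every child can post to it. */
    sem_t *done = mmap(NULL, sizeof *done, PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (done == MAP_FAILED) { perror("mmap"); return 1; }
    sem_init(done, 1 /* process-shared */, 0);

    for (size_t i = 0; i < nfiles; i++) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }
        if (pid == 0) {  /* child: hash one file, signal completion, exit */
            printf("%s -> %016llx\n", files[i],
                   (unsigned long long)hash_file(files[i]));
            sem_post(done);
            _exit(0);
        }
    }

    /* Parent: one sem_wait per file, then reap all the children. */
    for (size_t i = 0; i < nfiles; i++)
        sem_wait(done);
    while (wait(NULL) > 0)
        ;
    sem_destroy(done);
    return 0;
}
```

Forking one child per file obviously wouldn't fly with billions of inodes; the classic refinement is a counting semaphore initialized to N to cap how many children run at once.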