Speedup Process methods #799
Comments
+1 for this enhancement request. It will be awesome for the Glances project.
I started working on this in a separate branch (master...oneshot#files_bucket).
Any heads-up on this enhancement?
I completed the Linux implementation but I still have to benchmark it.
Linux benchmark: with this I get a 2x speedup (twice as fast) if I invoke all the "one shot" methods, meaning I am emulating the best possible scenario.
Output:
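The benchmark output itself did not survive extraction, but the measurement has this general shape (a hedged sketch; `bench` and the commented-out worker calls are illustrative names, not psutil's actual benchmark script):

```python
import time

def bench(fn, n=1000):
    """Time n calls to fn(), returning elapsed seconds."""
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return time.perf_counter() - t0

# Illustrative comparison (assumes a psutil build with oneshot support):
#   plain  = bench(lambda: (p.name(), p.cpu_times(), p.create_time()))
#   cached = bench(call_methods_within_oneshot)   # hypothetical helper
#   print("speedup: %.2fx" % (plain / cached))
```

The speedup figures quoted throughout this thread (2x, 2.6x, etc.) are ratios of elapsed time of this kind: the same set of method calls, with and without the one-shot caching.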
The FreeBSD impact deriving from getting multiple (14) pieces of info when only 1 is needed is negligible (0.46 secs vs. 0.42), so even when NOT using …
Linux speedup went from 1.9x to 2.6x after f851be9.
BSD platforms implementation is completed. On FreeBSD I get a +2.18x speedup.
Nice! Do you also think that the new process method will speed things up on Windows?
Yes, this is intended for all OSes, even though Windows is probably going to be the most difficult platform because it has fewer C APIs which can be used to directly retrieve multiple info in one shot (line 234 in 50015c4).
The only Windows C call I can think of that is used basically all the time on Windows is OpenProcess. We use a wrapper around it (psutil/psutil/arch/windows/process_info.c, line 21 in 50015c4), which is extensively used in the main C extension module.
What we can do is get the handle once, store it in Python (as an …
Solaris implementation: 630b40d, +1.37x speedup.
…Handle in order to keep the handle reference at the Python level and allow caching.
It turns out storing …
Good news @giampaolo!
OSX: going from 1.8x to 1.9x speedup with 1e8cef9.
It turns out the apparent slowdown occurring on Windows as per my previous message #799 (comment) was due to the benchmark script not being stable enough, so we're good also on Windows.
The interesting thing about Windows is that because some Process methods use a dual implementation (see #304) we can get a way bigger speedup for PIDs owned by other users, for which the first "fast" implementation raises AccessDenied.
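The dual-implementation pattern described here can be sketched as follows (a toy model, not psutil's actual Windows code; `proc_info_fast` and `proc_info_slow` are made-up stand-ins). Without caching, every method call on another user's process pays for a failed fast attempt plus the slow fallback; with one-shot caching, that double cost is paid only once:

```python
class AccessDenied(Exception):
    pass

CALLS = {"fast": 0, "slow": 0}

def proc_info_fast(pid, owned_by_us):
    """Fast path: only works for processes we own (toy stand-in)."""
    CALLS["fast"] += 1
    if not owned_by_us:
        raise AccessDenied
    return {"name": "x.exe", "cpu_times": (1.0, 0.5)}

def proc_info_slow(pid):
    """Slow path: works for any PID (toy stand-in)."""
    CALLS["slow"] += 1
    return {"name": "x.exe", "cpu_times": (1.0, 0.5)}

def proc_info(pid, owned_by_us):
    """Dual implementation: try the fast path, fall back on AccessDenied."""
    try:
        return proc_info_fast(pid, owned_by_us)
    except AccessDenied:
        return proc_info_slow(pid)

# Without caching, 3 method calls on another user's process pay
# the failed fast attempt + the slow fallback 3 times:
for _ in range(3):
    proc_info(1234, owned_by_us=False)
assert CALLS == {"fast": 3, "slow": 3}

# With oneshot-style caching the double cost is paid once, and the
# other two "method calls" are served from the cached dict:
cache = proc_info(1234, owned_by_us=False)
name, cpu = cache["name"], cache["cpu_times"]
assert CALLS == {"fast": 4, "slow": 4}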
OK, this is now merged into master as of de41bcc.
Great job @giampaolo! Many thanks.
Great job!
* master: (375 commits) [full auto-generated merge-commit list omitted; it includes the #799 oneshot work (oneshot() context manager, per-platform speedups, memleak script refactoring) alongside unrelated fixes]
This is something I've been thinking about for a while. The problem with the current `Process` class implementation is that if you want to fetch multiple process info the underlying (C / Python) implementation may unnecessarily do the same thing more than once. For instance, on Linux we read the `/proc/pid/stat` file to get `terminal`, `cpu_times`, and `create_time`, and each time we invoke one of those methods we `open` the file and `read` from it. We get the one info we're interested in and discard the rest.
A similar thing happens on basically every OS. For instance, on BSD we use the `kinfo_proc` syscall to get basically 80% of all process info (`uids`, `gids`, `create_time`, `ppid`, `io_counters`, `status`, etc.). Again, all this info is retrieved at once (in C) and then re-requested every time we call a `Process` method.
Since we typically get more than one info about the process (e.g. think about a top-like app) it appears clear that this could (and should) be done in a single operation. A possible solution would be to provide a context manager which temporarily puts the `Process` instance in a state such that internally the requested metrics are determined in a single shot and then "cached" / "stored" somewhere.
Note: the `Process.as_dict()` method would use this method implicitly.
=== EDITS AFTER COMMENTS BELOW ===
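The caching scheme proposed above can be sketched with a minimal self-contained toy model (this is not psutil's implementation; `_read_stat` merely stands in for opening and parsing `/proc/<pid>/stat`):

```python
import contextlib

class Process:
    """Toy model of the proposed one-shot caching context manager."""

    def __init__(self):
        self._cache_active = False
        self._cache = {}
        self.stat_reads = 0  # counts the simulated /proc/<pid>/stat reads

    def _read_stat(self):
        # Stand-in for opening and parsing /proc/<pid>/stat.
        self.stat_reads += 1
        return {"terminal": "/dev/pts/0",
                "cpu_times": (1.0, 0.5),
                "create_time": 1234567890.0}

    def _stat(self):
        # Inside oneshot(), read the file once and serve from the cache.
        if self._cache_active:
            if "stat" not in self._cache:
                self._cache["stat"] = self._read_stat()
            return self._cache["stat"]
        return self._read_stat()

    @contextlib.contextmanager
    def oneshot(self):
        self._cache_active = True
        try:
            yield self
        finally:
            self._cache_active = False
            self._cache.clear()

    def terminal(self):
        return self._stat()["terminal"]

    def cpu_times(self):
        return self._stat()["cpu_times"]

    def create_time(self):
        return self._stat()["create_time"]

p = Process()
p.terminal(); p.cpu_times(); p.create_time()
assert p.stat_reads == 3  # three separate reads without the context manager

with p.oneshot():
    p.terminal(); p.cpu_times(); p.create_time()
assert p.stat_reads == 4  # inside oneshot() the read happened only once
```

The cache is cleared on exit so that values do not go stale; this matches the "temporarily puts the Process instance in a state" wording above.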
Branch
master...oneshot#files_bucket
Benchmark scripts
Linux (+2.56x speedup)
Windows (+1.9x or +6.5x speedup)
user's process:
other user's process:
FreeBSD (+2.18x speedup)
OSX (+1.92x speedup)
SunOS (+1.37x speedup)