This repository has been archived by the owner on Nov 6, 2020. It is now read-only.

High CPU usage on windows #1314

Closed
arkpar opened this issue Jun 17, 2016 · 14 comments
Labels
F3-annoyance 💩 The client behaves within expectations, however this “expected behaviour” itself is at issue. Z0-unconfirmed 🤔 Issue might be valid, but it’s not yet known.

Comments

@arkpar
Collaborator

arkpar commented Jun 17, 2016

Parity uses 50%-100% CPU when idling on Windows.

@arkpar arkpar added the F2-bug 🐞 The client fails to follow expected behavior. label Jun 17, 2016
@arkpar arkpar assigned arkpar and NikVolf and unassigned arkpar Jun 17, 2016
@gavofyork gavofyork added F3-annoyance 💩 The client behaves within expectations, however this “expected behaviour” itself is at issue. and removed F2-bug 🐞 The client fails to follow expected behavior. labels Jun 18, 2016
@NikVolf NikVolf added the Z0-unconfirmed 🤔 Issue might be valid, but it’s not yet known. label Jun 19, 2016
@NikVolf
Contributor

NikVolf commented Jun 19, 2016

can no longer reproduce

@arkpar
Collaborator Author

arkpar commented Jun 19, 2016

was probably caused by transaction spamming

@gavofyork
Contributor

wouldn't it have manifested on other platforms, too?

@remyroy

remyroy commented Jun 22, 2016

I just did a quick profiling capture using Very Sleepy with v1.2.0-unstable-1bead4a-20160622. Parity seems to be spending a lot of time in rocksdb allocating new blocks and in RtlIpv6AddressToStringW. That was just for the startup profiling.

[Screenshots: capture1, capture2]

Raw data: https://www.dropbox.com/s/syavln88izn8k2x/capture-data.zip?dl=1

@remyroy

remyroy commented Jun 22, 2016

Once the startup phase is complete, my parity instance is fairly light on CPU usage. It mostly sits at ~0-2%, with occasional spikes to ~30-57% CPU, which I'm guessing correspond to new blocks.

@arkpar
Collaborator Author

arkpar commented Jun 22, 2016

RtlIpv6AddressToStringW looks really strange. Did you start parity with logging turned on?

@remyroy

remyroy commented Jun 22, 2016

I started parity without any command line options. I'm not sure if logging is enabled by default.

@remyroy

remyroy commented Jun 22, 2016

I just did another quick profiling capture using Very Sleepy with v1.2.0-unstable-1bead4a-20160622 during normal blockchain synchronization. There are a few significant things to note. Most of the time is spent waiting on condition variables, creating threads, or waiting for I/O completion. There might be some CPU cost to creating these threads that could be reduced.

[Screenshot: parity-capture]

One thing I noticed is that during normal blockchain synchronization, the CPU spikes felt more intense than when I was using geth; the slowdowns in my other applications are more noticeable.

Raw data: https://www.dropbox.com/s/x29143bbgj1dwk7/parity-idle-capture.zip?dl=1
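
To illustrate that last point about thread-creation cost: a generic sketch in plain Python (an illustration of the general technique only, not parity's actual internals) comparing spawning a fresh thread per task with reusing a small worker pool.

# Generic illustration only: spawning a new OS thread for every small task pays
# creation/teardown cost each time, while a reusable worker pool pays it once
# and dispatches work through a queue.
import threading
import time
from concurrent.futures import ThreadPoolExecutor

def small_task(_):
    # Stand-in for a short unit of work; deliberately trivial so that thread
    # management overhead dominates the measurement.
    return sum(range(100))

N = 2000

# Variant 1: one fresh thread per task.
start = time.perf_counter()
for i in range(N):
    t = threading.Thread(target=small_task, args=(i,))
    t.start()
    t.join()
print("thread per task:", time.perf_counter() - start)

# Variant 2: a fixed pool of workers created once, tasks dispatched to it.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(small_task, range(N)))
print("worker pool:    ", time.perf_counter() - start)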

@gavofyork
Contributor

gavofyork commented Jun 24, 2016

this is likely caused by parity being aggressively parallelised; apparently more so than geth.

if you'd prefer slower syncing/importing and less strain on the system, you can use a process manager to set parity's CPU core affinity and prevent it from using all cores.

see here for more information.
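
As an aside, the affinity change can also be scripted rather than set by hand each run. A minimal sketch, assuming Python with the third-party psutil package is available; the process name "parity" and the chosen core list are placeholder assumptions, and setting affinity may require administrator rights.

# Minimal sketch: restrict parity to a subset of logical cores so it cannot
# saturate the whole machine. Requires the third-party psutil package; the
# process name "parity" and the cores below are placeholders.
import psutil

ALLOWED_CORES = [0, 1]  # let parity use only the first two logical cores

for proc in psutil.process_iter(attrs=["pid", "name"]):
    name = proc.info["name"] or ""
    if name.lower().startswith("parity"):
        proc.cpu_affinity(ALLOWED_CORES)  # supported on Windows and Linux
        print("pinned PID", proc.info["pid"], "to cores", ALLOWED_CORES)

The same effect can be applied one-off from Task Manager's "Set affinity" option, which is presumably what "process manager" above refers to.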

@dev-dan

dev-dan commented Aug 29, 2017

This is currently occurring for me on Windows with Parity 1.7. If I open the signer extension, CPU usage for parity.exe spikes to 100% until the signer is disabled.

@5chdn
Contributor

5chdn commented Aug 30, 2017

@dev-dan please see #6387 (via #6300). We are working on it.

@CryptoSiD

CryptoSiD commented Feb 19, 2018

Same issue: I'm not using the wallet at all and it's consuming 30-80% of my i5.

@jjzazuet

jjzazuet commented Jun 2, 2018

Hi, same issue here if it helps. I've got three different Xeon nodes running Parity 1.10 on Debian. All three of them oscillate between 12% and 70% CPU usage on the Kovan test net. I'm running out of ideas as to why this would happen on the test net. top snapshots attached below. Thanks!

Node 1

top - 01:05:41 up 23:06,  1 user,  load average: 0.18, 0.33, 0.35
Tasks: 134 total,   1 running, 133 sleeping,   0 stopped,   0 zombie
%Cpu(s):  4.7 us,  0.6 sy,  0.0 ni, 94.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 32764032 total, 18889240 free,  3361680 used, 10513112 buff/cache
KiB Swap: 33362940 total, 33362940 free,        0 used. 28904312 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
12533 parity    20   0 1779300 441680  23848 S  38.9  1.3 625:27.87 parity
12625 lol+  20   0 1695108 768124  46608 S   2.0  2.3  36:21.27 lol
  230 root      20   0       0      0      0 S   0.3  0.0   0:17.49 lol
  420 root      20   0  250116   3580   2500 S   0.3  0.0   0:05.67 rsyslogd
12382 nobody    20   0   16512   3868   2680 S   0.3  0.0   3:09.06 lol
13582 www-data  20   0 13.727g 1.496g  17192 S   0.3  4.8   5:02.43 lol
    1 root      20   0  204556   6824   5236 S   0.0  0.0   0:02.50 systemd
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kthreadd

Node 2

top - 01:06:12 up 23:08,  1 user,  load average: 0.48, 0.42, 0.41
Tasks: 131 total,   1 running, 130 sleeping,   0 stopped,   0 zombie
%Cpu(s):  5.0 us,  0.5 sy,  0.0 ni, 94.4 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 32636760 total, 18160612 free,  3034836 used, 11441312 buff/cache
KiB Swap: 33233916 total, 33233916 free,        0 used. 29096664 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
11532 parity    20   0 1703524 427580  24040 S  42.5  1.3 675:22.51 parity
11624 lol+  20   0 1525868 509568  46652 S   2.0  1.6  33:24.17 lol
11460 lol   20   0 1931528 634108 145308 S   0.3  1.9   4:55.39 lol
16989 root      20   0   41052   3252   2656 R   0.3  0.0   0:00.89 top
    1 root      20   0  204564   6936   5344 S   0.0  0.0   0:01.19 systemd
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kthreadd
    3 root      20   0       0      0      0 S   0.0  0.0   0:01.60 ksoftirqd/0
    5 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/0

Node 3

top - 01:08:02 up 23:10,  1 user,  load average: 0.25, 0.37, 0.42
Tasks: 131 total,   1 running, 130 sleeping,   0 stopped,   0 zombie
%Cpu(s):  7.0 us,  0.9 sy,  0.0 ni, 92.0 id,  0.1 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 32631304 total, 19734736 free,  3349320 used,  9547248 buff/cache
KiB Swap: 33228796 total, 33228796 free,        0 used. 28773508 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
11687 parity    20   0 1781348 439376  24072 S  57.5  1.3 696:07.64 parity
11798 lol+  20   0 1579368 665548  46580 S   3.0  2.0  35:35.49 lol
11615 lol   20   0 1930476 633964 143212 S   0.7  1.9   4:57.95 lol
 2661 root      20   0       0      0      0 S   0.3  0.0   0:26.39 kworker/5:0
11541 nobody    20   0   16512   3812   2624 S   0.3  0.0   3:07.02 lol
19453 root      20   0   41052   3220   2652 R   0.3  0.0   0:01.14 top
    1 root      20   0  204580   6788   5172 S   0.0  0.0   0:01.43 systemd
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kthreadd

@Tbaut
Contributor

Tbaut commented Jun 4, 2018

@jjzazuet please follow #8696
