Before replacing an array of 8 carbon-cache daemons, I was wondering if you believe go-carbon is ready. Do you (or anyone else) use go-carbon in a production setup? How many datapoints/sec does it handle? My $dayjob Graphite cluster currently handles 45k points/sec per node, for some 2.6M metrics in 1.6 TB on flash.
Thank you so much.
Before go-carbon, the same hardware ran carbon-cache on PyPy across 80 instances, at 4x less traffic than go-carbon handles now.
We run up to 16 million metrics in a cluster with replication factor 2. Throughput is about 35 million updates per minute (update operations × points per update) across 5 nodes, each running a single go-carbon instance, with an update time of ~0.4 ms; that works out to roughly 117k points/sec per node. Each node generates ~50k IOPS on SSD drives, and go-carbon consumes ~600-700% CPU. The go-carbon cache holds ~1.5 million metrics per node. We lean heavily on dirty pages, buffers, and the VFS cache.
Whisper file sizes range from 1.5 MB to 2.5 MB, with retention precision from 1s to 30s. Total space used is ~20 TB of whisper data.
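For context, those file sizes follow directly from the retention schema: whisper stores roughly 12 bytes per retained point plus small fixed headers. A minimal sketch in Go, assuming a hypothetical schema of `1s:1d,60s:30d` (not the actual storage-schemas.conf from either cluster):

```go
package main

import "fmt"

// Estimate a whisper file's size from its retention schema. Each archive
// stores one 12-byte point (4-byte timestamp + 8-byte float64 value) per
// retention slot, plus a 16-byte file header and a 12-byte header per
// archive. Retentions are {secondsPerPoint, totalSeconds} pairs.
func whisperSize(retentions [][2]int) int {
	const headerSize = 16      // whisper metadata header
	const archiveInfoSize = 12 // per-archive header
	const pointSize = 12       // 4-byte timestamp + 8-byte value
	size := headerSize + archiveInfoSize*len(retentions)
	for _, r := range retentions {
		secondsPerPoint, totalSeconds := r[0], r[1]
		size += (totalSeconds / secondsPerPoint) * pointSize
	}
	return size
}

func main() {
	// Hypothetical schema "1s:1d,60s:30d" -- not the poster's real config.
	rets := [][2]int{{1, 86400}, {60, 30 * 86400}}
	fmt.Printf("~%.2f MB per metric\n", float64(whisperSize(rets))/1e6)
	// Prints ~1.56 MB, in line with the 1.5-2.5 MB file sizes above.
}
```

Multiplying a per-file size in that range by the ~16 million metric files in the cluster lands on the same order as the ~20 TB total quoted above.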
Go-carbon looks very clean and promising.