
Improvements to the hadoop collector. #665

Merged
merged 1 commit into from
Jul 23, 2014

Conversation

liquidgecka
Contributor

This change implements two major changes necessary to use the hadoop
collector in a production environment. The first change divides the time
by 1000, because the time reported in the metrics files is milliseconds
since epoch. Without this change, graphite throws the following error:
struct.error: 'L' format requires 0 <= number <= 4294967295
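A minimal sketch of why the division is needed: graphite packs the timestamp as an unsigned 32-bit long (`struct` format `'L'`), and a millisecond epoch value overflows that range. The helper name `to_epoch_seconds` below is hypothetical, not from the collector itself.

```python
import struct

def to_epoch_seconds(raw_timestamp_ms):
    """Convert a millisecond epoch timestamp (as read from a hadoop
    metrics file) to whole seconds, which fits in an unsigned 32-bit long."""
    return int(raw_timestamp_ms) // 1000

ms = 1406073600123                    # a millisecond timestamp from mid-2014
secs = to_epoch_seconds(ms)           # 1406073600, within 0..4294967295
struct.pack('!L', secs)               # packs fine; packing `ms` would raise
                                      # struct.error: 'L' format requires
                                      # 0 <= number <= 4294967295
```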

The second change included here adds a truncation step to the file read.
Without a truncation step the metrics file grows forever, and each
run re-reads the whole metrics file, sending all the old metrics, as well as
the new ones, to the graphite server. By truncating the file after a
successful read we not only stop the disks from filling up, but also
ensure that we never overload the graphite server by re-sending
massive numbers of metrics. This change is optional, and defaults to
false in order to maintain current semantics.

kormoc added a commit that referenced this pull request Jul 23, 2014
Improvements to the hadoop collector.
@kormoc kormoc merged commit 5044e72 into BrightcoveOS:master Jul 23, 2014
@kormoc
Contributor

kormoc commented Jul 23, 2014

Thanks! And sorry it took so long
