ElasticSearch Support #53
@kadaan Related to Logstash: I'm not familiar with it, but if I'm reading the docs correctly, it should be possible for Logstash to grab the JSON metrics from the HTTP endpoint exposed by the metrics library. I'm not sure if that is how the Logstash-to-ES flow is supposed to work, but if it were possible it would simplify the integration a lot.
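A minimal sketch of exposing that HTTP endpoint with Metrics.NET so an external collector such as Logstash could poll the JSON; the endpoint URL and the sample meter are illustrative assumptions, not the project's recommended setup:

```csharp
using System;
using Metrics;

class HttpEndpointSketch
{
    static void Main()
    {
        // Serves the current metric values (including a JSON view) over HTTP.
        Metric.Config.WithHttpEndpoint("http://localhost:1234/metrics/");

        var requests = Metric.Meter("Requests", Unit.Requests);
        requests.Mark();

        Console.ReadLine(); // keep the process alive so the endpoint stays up
    }
}
```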
As this functionality does not impact the stable metrics functionality, it is also included in the latest NuGet package for easier testing.
Yup, that is how it works. The main thing that would be good is to ensure the current JSON form is "ideal" and to create an explicit ES mapping to describe the data.
This is great functionality; however, if the ES server is not running it will cause an exception. What are your thoughts about putting a try-catch around the WebClient code? Is it considered safer to swallow the exception, or to force the end user to deal with it?
ES bits are still in very early stages, so exception handling was not a concern yet. It would normally be delegated to MetricsErrorHandler, where the end user has the possibility to handle the error, but it's not required.
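A minimal sketch (not the library's actual code) of that delegation: catch the transport failure around the WebClient call and hand it to MetricsErrorHandler instead of letting it escape; the end user can then opt in via `Metric.Config.WithErrorHandler(ex => ...)`. The URI and payload parameters are illustrative.

```csharp
using System;
using System.Net;
using Metrics;

static class SafeEsPost
{
    public static void Post(Uri elasticSearchUri, string payload)
    {
        try
        {
            using (var client = new WebClient())
            {
                client.UploadString(elasticSearchUri, "POST", payload);
            }
        }
        catch (Exception x)
        {
            // Swallowed here, but surfaced to any handler the user registered.
            MetricsErrorHandler.Handle(x, "Error reporting metrics to ElasticSearch");
        }
    }
}
```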
Looking for functionality to index documents daily so we can drop indices rather than delete documents. For example, when we use "metrics" as an index, the reporter would post to index "metrics-2015.06.10" if the timestamp of the samples was on June 10th, 2015. This allows Elasticsearch's Curator to drop indices after a configured period and saves us from massive indices. Does this sound feasible?
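A minimal sketch of that daily-index scheme: derive the index name from the sample timestamp so whole indices can be dropped by Curator. The base name "metrics" and the date format follow the example above.

```csharp
using System;
using System.Globalization;

static class DailyIndexNaming
{
    public static string For(string baseName, DateTime timestamp)
    {
        // One index per calendar day, e.g. "metrics-2015.06.10".
        return string.Format(CultureInfo.InvariantCulture,
            "{0}-{1:yyyy.MM.dd}", baseName, timestamp);
    }
}

// DailyIndexNaming.For("metrics", new DateTime(2015, 6, 10)) == "metrics-2015.06.10"
```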
It does sound feasible. I'm not using ES at the moment, so this won't be my immediate focus. I know ES supports TTL for documents, and they get automatically deleted when expired; maybe that would be a better approach. In the near future I plan to re-iterate over the ES support, but not until the HDR histogram bits are finalized.
TTL on documents is not recommended.
I did not know that. Any references?
I think @kadaan refers to giving a TTL to each distinct document; you can set up a TTL for the whole index (https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-indices.html#indices-ttl). I'd like to be able to set a TTL per distinct type of metric: in a real situation there are metrics I don't care to keep around for long, but some important metrics can last forever. You should also add a default mapping instead of letting Elasticsearch guess everything; as an example, Name, Type, and Unit for gauges should be not_analyzed. I think that for metrics, all string fields should be set to not_analyzed by default.
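A sketch of that default mapping, assuming a pre-2.x Elasticsearch: a dynamic template that maps every string field as not_analyzed, applied when the index is created. The index name and URL are illustrative.

```csharp
using System.Net;

static class NotAnalyzedMapping
{
    // Dynamic template: every dynamically-mapped string field is not_analyzed.
    const string IndexSettings = @"{
      ""mappings"": {
        ""_default_"": {
          ""dynamic_templates"": [{
            ""strings_not_analyzed"": {
              ""match_mapping_type"": ""string"",
              ""mapping"": { ""type"": ""string"", ""index"": ""not_analyzed"" }
            }
          }]
        }
      }
    }";

    public static void CreateIndex()
    {
        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "application/json";
            client.UploadString("http://localhost:9200/metrics", "PUT", IndexSettings);
        }
    }
}
```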
Why not just write the events out as JSON to a file and use Logstash to ingest them?
@alkampfergit the TTL you referred to does not sound like it deals with deletion of documents; it sounds more like the poll interval at which the index actually purges documents whose TTL has expired. AFAIK the only three ways to delete documents in Elasticsearch are to set a TTL on the document, to delete an index, or to run a delete-by-query.
According to the documentation (https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-ttl-field.html) you can provide a per-index/type default _ttl value in the mapping (see the sketch below). Using a JSON file and Logstash is probably the preferred way to send logs to Elasticsearch, but it should remain an option, because it introduces a little extra setup complexity. There is also a need to do some maintenance of the logs: if I have a gauge configured to report to ES every 2 seconds, I'd probably like full resolution for the last 24 hours, then consolidate the last X days to one value per minute (averaging all the samples in each minute), then probably one value per hour for the last year. This way a gauge can have different resolutions depending on how old the data is.
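A sketch of that per-type default _ttl, based on the linked mapping docs: with _ttl enabled and a default of one day, documents of this type expire automatically. The type name "gauge", the index, and the URL are illustrative assumptions (and note @kadaan's caveat above about document TTL).

```csharp
using System.Net;

static class TtlMapping
{
    // _ttl with a default: documents of this type expire after 1 day.
    const string Mapping = @"{
      ""gauge"": {
        ""_ttl"": { ""enabled"": true, ""default"": ""1d"" }
      }
    }";

    public static void Apply()
    {
        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "application/json";
            client.UploadString("http://localhost:9200/metrics/_mapping/gauge", "PUT", Mapping);
        }
    }
}
```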
Also, can you add the server name to the metric? We have a cluster of machines doing the same thing, and it would be nice to aggregate by server (name or IP).
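One possible workaround, sketched minimally: nest everything under a Metrics.NET context named after the machine. Whether the ES reporter emits the context name as its own field is an assumption to verify against the reporter's JSON output.

```csharp
using System;
using Metrics;

static class PerServerMetrics
{
    // Context named after the host, so metrics can be grouped per server.
    static readonly MetricsContext Host = Metric.Context(Environment.MachineName);

    public static readonly Counter Requests = Host.Counter("Requests", Unit.Requests);
}
```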
I am currently implementing Metrics.NET in a proof-of-concept project. I am sending the metrics to ElasticSearch with the following config: In my .csv report directory I see data for all metrics used: Counter, Meter, Histogram and Timer. I am using the Metrics.NET NuGet package version 0.3.3-pre (Prerelease). Can you help me figure out why I am not seeing Histogram and Timer data in ElasticSearch? EDIT: Found the problem:
Hello, I'm trying to use Metrics.NET with Elasticsearch 2.2.0 and Grafana, and I get the error [Field name [Percentile 99.9%] cannot contain '.']. Is it possible to change the Metrics.NET column name? Thank you
This issue is for tracking status and ideas related to the ElasticSearch ecosystem.
The first bits of ElasticSearch integration are available on the dev & master branches and are also pushed to the latest NuGet package, v0.2.13.
Most of the code is in ElasticSearchReport.cs. It turns out reporting metrics to ES is very straightforward.
At the moment only gauges, counters and meters are reported, but histograms & timers should be trivial to add (hopefully today).
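To illustrate why it is straightforward, a minimal sketch (not the actual ElasticSearchReport.cs): each metric value becomes a small JSON document POSTed to the index. The document shape, type name, and URL are assumptions.

```csharp
using System;
using System.Globalization;
using System.Net;

static class EsReportSketch
{
    public static void ReportGauge(string name, double value, DateTime timestamp)
    {
        // Invariant culture so the decimal separator is always '.'.
        var doc = string.Format(CultureInfo.InvariantCulture,
            @"{{ ""Name"": ""{0}"", ""Value"": {1}, ""Timestamp"": ""{2:o}"" }}",
            name, value, timestamp);

        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "application/json";
            client.UploadString("http://localhost:9200/metrics/gauge", "POST", doc);
        }
    }
}
```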