can't archive/escalate events after upgrade to ES5 #48
Comments
2 thoughts...
You could try "--elasticsearch-keyword raw" to do ES2/Logstash2-style queries temporarily, to see if that's really the issue. I've also uploaded a new version where "-v" will log a portion of the Elasticsearch response to an archive request, to see why it's getting nil.
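A minimal sketch of that, assuming the server is otherwise started with your usual connection options:

```sh
# Sketch: start the EveBox server with ES2-style keyword handling and verbose logging.
# Any other options you normally pass (Elasticsearch URL, port, etc.) are omitted here.
evebox --elasticsearch-keyword raw -v
```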
New version -v output when pressing 'archive':

2017-04-30 22:55:35 (elasticsearch.go:110) -- Decoding response (truncated at 1024 bytes): {"error":{"root_cause":[{"type":"script_exception","reason":"runtime error","script_stack":["if (!ctx._source.tags.contains(tag)) {\n\t\t\t "," ^---- HERE"],"script":"\n\t\t\t for (tag in params.tags) {\n\t\t\t if (!ctx._source.tags.contains(tag)) {\n\t\t\t ctx._source.tags.add(tag);\n\t\t\t }\n\t\t\t }\n\t\t\t","lang":"painless"}],"type":"script_exception","reason":"runtime error","script_stack":["if (!ctx._source.tags.contains(tag)) {\n\t\t\t "," ^---- HERE"],"script":"\n\t\t\t for (tag in params.tags) {\n\t\t\t if (!ctx._source.tags.contains(tag)) {\n\t\t\t ctx._source.tags.add(tag);\n\t\t\t }\n\t\t\t }\n\t\t\t","lang":"painless","caused_by":{"type":"null_pointer_exception","reason":null}},"status":500}
2017-04-30 22:55:35 (eventservice.go:363) -- Updated events, failures = false
"--elasticsearch-keyword raw" doesn't show anything in the inbox. I my upgrade I simply deleted /var/lib/elasticsearch and /var/lib/logstash, so I think I effectively started with a clean slate. In case it's relevant, I'm seeing this: 2017-04-30 22:57:17 (elasticsearch.go:181) -- Found template version 50001 |
Thanks. The update_by_query I use with ES5 doesn't handle the case where there is no existing tags object. With Logstash/ES2 it looks like I could assume it was always there; not so with version 5 of the stack. A fix has been pushed to master, but it will take a bit to show up for download.
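For the curious, a rough sketch of the kind of guard the update_by_query needs; the index pattern, event id, and tag name below are placeholders, not EveBox's exact internals:

```sh
# Sketch (ES 5.x): create the tags array when it is missing before adding tags to it.
# ES 5.x uses "inline" for the script body; later versions renamed it to "source".
curl -s -XPOST 'http://localhost:9200/logstash-*/_update_by_query' -d '
{
  "query": { "ids": { "values": [ "SOME_EVENT_ID" ] } },
  "script": {
    "lang": "painless",
    "inline": "if (ctx._source.tags == null) { ctx._source.tags = new ArrayList(); } for (tag in params.tags) { if (!ctx._source.tags.contains(tag)) { ctx._source.tags.add(tag); } }",
    "params": { "tags": [ "archived" ] }
  }
}'
```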
Looks like that did it:

2017-05-01 08:40:27 (evebox.go:112) -- No command provided, defaulting to server.
2017-05-01 08:40:27 (server.go:156) -- This is EveBox Server version 0.7.1dev (rev: 99169db)
2017-05-01 08:40:27 (geoip-service.go:44) -- Failed to initialize geoip database: no database files found
2017-05-01 08:40:27 (configdb.go:52) -- Using in-memory configuration DB.
2017-05-01 08:40:27 (sqlmigrator.go:79) -- Updating database to version 0.
2017-05-01 08:40:27 (sqlmigrator.go:79) -- Updating database to version 1.
2017-05-01 08:40:27 (server.go:271) -- Configuring ElasticSearch datastore
2017-05-01 08:40:27 (server.go:273) -- Using ElasticSearch URL http://localhost:9200
2017-05-01 08:40:27 (server.go:275) -- Using ElasticSearch Index logstash.
2017-05-01 08:40:27 (elasticsearch.go:100) -- Event base index: logstash
2017-05-01 08:40:27 (elasticsearch.go:101) -- Event search index: logstash-*
2017-05-01 08:40:27 (elasticsearch.go:227) -- Elastic Search keyword initialized to "keyword"
2017-05-01 08:40:27 (server.go:294) -- Connected to Elastic Search (version: 5.3.2)
2017-05-01 08:40:27 (server.go:133) -- Session reaper started
2017-05-01 08:40:27 (server.go:167) -- Authentication disabled.
2017-05-01 08:40:27 (server.go:278) -- Listening on 0.0.0.0:5636
2017-05-01 08:40:39 (anonymous.go:64) -- Logging in anonymous user from 192.168.1.6:37314
2017-05-01 08:40:48 (eventservice.go:366) -- Updated 1 events, failures = false
2017-05-01 08:41:21 (eventservice.go:366) -- Updated 8 events, failures = false
2017-05-01 08:41:21 (eventservice.go:366) -- Updated 11 events, failures = false
2017-05-01 08:41:22 (eventservice.go:366) -- Updated 8 events, failures = false
2017-05-01 08:41:27 (eventservice.go:366) -- Updated 6 events, failures = false
2017-05-01 08:41:27 (eventservice.go:366) -- Updated 7 events, failures = false
2017-05-01 08:41:27 (eventservice.go:366) -- Updated 5 events, failures = false
2017-05-01 08:41:28 (eventservice.go:366) -- Updated 4 events, failures = false
2017-05-01 08:41:28 (eventservice.go:366) -- Updated 8 events, failures = false
2017-05-01 08:41:28 (eventservice.go:366) -- Updated 5 events, failures = false
2017-05-01 08:41:29 (eventservice.go:366) -- Updated 1606 events, failures = false

Thanks Jason!
It seems I still can't archive some older events. When I click archive on an event dated 30/4/2017 I get:

2017-05-02 11:39:53 (elasticsearch.go:110) -- Decoding response (truncated at 1024 bytes): {"took":2248,"timed_out":false,"total":0,"updated":0,"deleted":0,"batches":0,"version_conflicts":0,"noops":0,"retries":{"bulk":0,"search":0},"throttled_millis":0,"requests_per_second":-1.0,"throttled_until_millis":0,"failures":[]}
2017-05-02 11:39:53 (eventservice.go:366) -- Updated 0 events, failures = false
2017-05-02 11:40:06 (server.go:129) -- Reaping sessions.

These may be events from right after the move to ES5, so perhaps they are somehow different.
I wonder if there was a time window when events were being added but there was no template installed (Logstash does that), or when it had the non-ES5 template installed. Debugging is a bit of a pain:

curl http://10.16.1.10:9200/logstash-2017.05.03/_mapping/log | jq .

Change the date to the index of the event that won't archive (visible in the EveBox JSON). I guess that would be more out of curiosity; if the mapping is wrong, I'm not sure how to actually fix it. I'd just delete the offending index.
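If deleting a single daily index is the route taken, that is just (index name is an example):

```sh
# Sketch: drop one daily Logstash index outright. Everything in it is lost.
curl -XDELETE http://10.16.1.10:9200/logstash-2017.05.03
```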
I deleted the old index (using curator, just deleting everything older than 2 days), but even with more recent events this sometimes happens. Not for all events though, so I'm not sure what the issue could be.

curl http://127.0.0.1:9200/logstash-2017.05.03/_mapping/log
{}
That's a real problem if the mapping is just {}. Try

curl http://127.0.0.1:9200/logstash-2017.05.03/_mapping

to not limit it to a type.
OK, one last request, if you can: your Logstash version and config? And can the events never be archived at all, or do they fail the first time but get archived on a subsequent attempt?
Ok, I saw this as well. I had a group of 2 events that would just not archive. However, when opened individually they could be archived. What was interesting is that the @timestamp and timestamp fields in the event did not match, even after taking into account UTC vs. localtime.
Small update: I just upgraded to ES 5.4 & LS 5.4 and I'm still seeing this behavior where a small subset of events reappear after archiving them.
For an event that isn't archiving, look at the JSON. Are the "@timestamp" and "timestamp" fields equivalent? I'm also running ES 5.4 and Logstash 5.4 now. No forwarder though. Will keep an eye on it.
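A quick way to compare the two fields on a stuck event (the index, the "log" type from earlier in the thread, and the id are placeholders):

```sh
# Sketch: fetch one event document and print both timestamp fields side by side.
curl -s http://127.0.0.1:9200/logstash-2017.05.03/log/SOME_EVENT_ID \
  | jq '._source | { "timestamp": .timestamp, "@timestamp": ."@timestamp" }'
```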
So I think this was all related to @timestamp and timestamp being out of sync. EveBox wasn't consistent about which field it used, which could cause issues when the two diverged. When using Logstash, I believe it's best to have something like the following:
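A sketch of such a filter, assuming the standard Suricata EVE "timestamp" field (the original snippet may have differed in detail):

```
# Hypothetical Logstash filter: parse the event's own timestamp into @timestamp.
filter {
  date {
    match => [ "timestamp", "ISO8601" ]
  }
}
```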
which will make Logstash use the event's existing timestamp instead of the time it read the event, and should keep the two fields in sync. But I've also gone through the EveBox code to consistently use the "@timestamp" field for queries as well as for updating events, which should let you archive these events.
Looks like 0.7.1dev (Rev: eb91f5d) fixed this, thanks! |
Great. |
Just upgraded to ES5 from ES2. I can view and search events, but when I archive/escalate them it doesn't work. On archive they disappear from the view initially, but reappear after a refresh.
Console output looks a bit strange with the nil events: