[Kibana][Metrics] Dashboard that shows only latest results of each scan for each month #111
Comments
Hi, I was just about to start thinking through this problem. Have you started looking at it?
Hi @cybergoof,
I am not great with ELK aggregations, but I think maybe that is the way to go. However, that means really changing the dashboard. Almost like two dashboards: one that shows the current state of your environment, and another that shows historical information? With the aggregation, could we create a search that brings back just the latest scan of every host, and then build the visualizations based on that?
The two dashboards wouldn't be needed; you could just deploy the environment from scratch, and as all the data is pulled from the scanners, it would be downloaded and structured with the new format into ELK. I am not great with ELK aggregations either, but I believe it should be able to do that. It is not directly related, but Splunk has a way to get the latest submitted values from logs, so I believe ELK should also have that option. The difficult part, I guess, would be getting the latest for each month; some research is needed on that.
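The "latest for each month" idea above could be expressed as an Elasticsearch aggregation: a monthly `date_histogram`, a `terms` bucket per host inside it, and a `top_hits` sub-aggregation keeping only the newest document per bucket. A minimal sketch of the request body as a Python dict, assuming hypothetical field names (`host`, `@timestamp`) that may not match the actual VulnWhisperer mapping:

```python
# Hedged sketch: latest scan per host per month.
# "calendar_interval" is the ES 7.x+ parameter name ("interval" on older versions).
latest_per_month = {
    "size": 0,
    "aggs": {
        "per_month": {
            "date_histogram": {"field": "@timestamp", "calendar_interval": "month"},
            "aggs": {
                "per_host": {
                    # "host" is an assumed field name, not confirmed by the thread
                    "terms": {"field": "host", "size": 1000},
                    "aggs": {
                        "latest": {
                            "top_hits": {
                                "size": 1,
                                "sort": [{"@timestamp": {"order": "desc"}}],
                            }
                        }
                    },
                }
            },
        }
    },
}
```

Each monthly bucket would then contain exactly one (the most recent) document per host, which is what the metrics should be built on.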
It would be possible to do this with the fingerprint filter in Logstash to create a unique ID per scanning target: from one field or a combination of fields, an MD5 or SHA1 hash is created. Then, in the end, two different writes to Elasticsearch need to happen.
With this you can then create a current dashboard that is maybe looking at the last 30 days.
This also means that you now have the possibility of writing the current state to a different Elasticsearch server. In companies with a dedicated small security team and a much larger ops team, a large number of ops users querying the historical data, when they only want the most recent scans most of the time, could cause performance issues. The current index set will always be much smaller than the full historical dataset. Another benefit of this approach is that you could restrict access to the historical data while allowing a larger number of people to access the current data.
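The fingerprint idea above can be sketched in a few lines of Python. This is not Logstash itself, just a hedged illustration of what its fingerprint filter does: hash a concatenation of chosen fields so that the same target always produces the same document ID. The field names below are hypothetical examples:

```python
import hashlib

def fingerprint(doc, fields, sep="|"):
    """Mimic Logstash's fingerprint filter: SHA1 over concatenated field values."""
    raw = sep.join(str(doc.get(f, "")) for f in fields)
    return hashlib.sha1(raw.encode("utf-8")).hexdigest()

# Hypothetical scan document; real VulnWhisperer field names may differ.
scan_doc = {"host": "web01.example.com", "plugin_id": 19506}
doc_id = fingerprint(scan_doc, ["host", "plugin_id"])
# Writing to the "current" index with this ID overwrites the previous
# document for the same host/plugin pair; writing to the historical index
# with an auto-generated ID preserves every scan.
```

The two writes elvarb describes would then differ only in whether this deterministic ID is supplied or Elasticsearch is left to generate one.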
Oh, I didn't know about that approach. Okay, now I am going to work on doing that. You can assign me this ticket if you want. I can create the index as an optional config file?
On second thought... since each scan contains multiple documents, you would somehow have to make sure that repeat findings are fingerprinted the same. If you only use the hostname, each host would end up as a single document, overwritten again and again. Figuring out which fields to use is the tricky part.
Thanks for offering @cybergoof ^^ I was hoping for a way of filtering that in Kibana/Elasticsearch directly without the need to create more entries, but I am no ELK expert at all, so if this is the best way, we should go for it. @elvarb, how should we define that? Could we make it part of ECS (#97), as we would like to eventually use that, or should we have a custom field? Mentioning @austin-taylor to keep him in the loop and make sure we are all aligned on it :)
Looking at the data, timestamp + hostname can identify a unique scan of a host, with multiple documents for each vuln found. How do we group those so that only the latest group is displayed as valid?
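One way to select only the latest group per host, without rewriting documents, is a `terms` aggregation on the hostname with a `top_hits` sub-aggregation sorted by timestamp descending. A minimal sketch of the request body, with assumed field names (`host`, `@timestamp`) that would need to match the real mapping:

```python
# Hedged sketch: one bucket per host, keeping only its most recent document.
latest_per_host = {
    "size": 0,
    "aggs": {
        "per_host": {
            "terms": {"field": "host", "size": 1000},
            "aggs": {
                "latest_scan": {
                    "top_hits": {
                        "size": 1,
                        "sort": [{"@timestamp": {"order": "desc"}}],
                    }
                }
            },
        }
    },
}
```

The caveat raised in the thread still applies: `top_hits` with `size: 1` returns one document per host, so grouping a whole multi-document scan (host + timestamp) needs either a larger `size` or a composite key.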
Okay, I think I have a better one: ASSET + "_" + PLUGIN_ID as the document ID? This would make sure that every finding is only recorded once. There are some problems with this: it will overwrite the original time the scan detected the finding; it won't distinguish whether a vuln is gone; and it can't distinguish the case where a vuln is detected, then not detected (remediated), then detected again.
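The ASSET + "_" + PLUGIN_ID scheme and its main limitation can be demonstrated with a small simulation, where a plain dict stands in for an Elasticsearch index keyed by `_id` (field names are illustrative, not the project's actual schema):

```python
# Hedged sketch of the proposed document-ID scheme.
# The dict plays the role of an Elasticsearch index: same _id => overwrite.
index = {}

def upsert_finding(asset, plugin_id, scan_time):
    doc_id = f"{asset}_{plugin_id}"
    index[doc_id] = {"asset": asset, "plugin_id": plugin_id, "last_seen": scan_time}
    return doc_id

upsert_finding("web01", 19506, "2019-01-01")
upsert_finding("web01", 19506, "2019-02-01")
# Only one document survives, and the January detection time is lost --
# exactly the "overwrites the original time" problem noted above.
```

Keeping a separate `first_seen` field that is only written on document creation would mitigate the lost-timestamp issue, but the remediated-then-redetected case would still be invisible.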
Hey @cybergoof, since these are part of the structure design, and I am not working on the Kibana part myself right now, I would be more comfortable if you agreed on the changes with @austin-taylor. On my side, I don't think I will be able to work on the Kibana part of my plans for some time, as I want to redesign the VulnWhisperer standard and make all modules follow it, so these changes will be tracked aside from the master branch until everything is changed and properly tested. Apologies for not being helpful with the Kibana part at the moment. Thanks for your help!
First, I am really, really bad at the Elastic query language; it just doesn't make sense to me. However, I think that this aggregation demonstrates what I am talking about: running this query will return the earliest time that an asset had a plugin fire, and changing the sort from asc to desc gives the latest. Since script aggregations can't be used in Kibana visualizations, I think this would have to be a fingerprint. I can test it out, but would like someone else to evaluate it.
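The query itself did not survive in the thread, so the following is only a hedged reconstruction of the kind of aggregation being described: per asset and plugin, return the earliest firing time via a `min` metric (a `max` metric, or flipping a sort from asc to desc, would give the latest). All field names are assumptions:

```python
# Hedged reconstruction -- the original query was lost from the thread.
earliest_fire = {
    "size": 0,
    "aggs": {
        "per_asset": {
            "terms": {"field": "asset", "size": 1000},
            "aggs": {
                "per_plugin": {
                    "terms": {"field": "plugin_id", "size": 1000},
                    "aggs": {
                        # swap "min" for "max" to get the latest detection instead
                        "first_seen": {"min": {"field": "@timestamp"}}
                    },
                }
            },
        }
    },
}
```

Unlike a scripted aggregation, plain `min`/`max` metrics on nested `terms` buckets are available to Kibana visualizations, which may make the fingerprint route unnecessary for this particular view.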
We are having an issue when reporting vulnerabilities with the Kibana dashboards: if you run a scan once a week, the results per month will obviously appear four times, since all scans are fed in. When trying to get overall time metrics, it would be good if only the results of one scan per month were shown.