This repository has been archived by the owner on Jan 19, 2022. It is now read-only.
The normal flow of sperf is:

- individual file parsers are told to look for a number of matching lines via regex
- the regex also specifies which values to capture
- the captured values are then aggregated and the report is generated
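The flow above can be sketched roughly as follows. This is a minimal illustration, not sperf's actual code: the log format, regex, and report fields are hypothetical stand-ins for what a real parser would use.

```python
import re

# Hypothetical pattern in the spirit of a sperf file parser: match GC pause
# lines in a Cassandra system.log and capture the pause duration in ms.
PAUSE_RE = re.compile(r"GCInspector.*Pause.*?(?P<pause_ms>\d+)ms")

def parse_lines(lines):
    """Yield the captured value for every line the regex matches."""
    for line in lines:
        match = PAUSE_RE.search(line)
        if match:
            yield int(match.group("pause_ms"))

def report(lines):
    """Aggregate the captured values into a simple summary report."""
    pauses = list(parse_lines(lines))
    if not pauses:
        return {"count": 0}
    return {"count": len(pauses), "max_ms": max(pauses),
            "avg_ms": sum(pauses) / len(pauses)}

log = [
    "INFO GCInspector Pause of 210ms",
    "INFO unrelated line",
    "INFO GCInspector Pause of 450ms",
]
print(report(log))  # {'count': 2, 'max_ms': 450, 'avg_ms': 330.0}
```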
The parsing is generally the same across all tools for a given file (though some tools don't parse all files). This was a design decision we made to speed development; it does mean slower parsing for some individual tools, but it is useful for this issue. It would be very easy to save these parsed rows to disk. But why?
- eventually we could reparse the saved data so that reports run faster after the first one (different reports would also benefit)
- other tooling (graphing, spreadsheets, etc.) could use the saved data as a baseline for generating custom reports on the fly without having to implement their own parsing logic or rules, some of which were surprisingly complex to get right
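One plausible on-disk format for the parsed rows is JSON Lines: one JSON object per row, which downstream tools (graphing, spreadsheets) can consume without reimplementing any parsing. This is a sketch of the idea only; the row schema shown is hypothetical, and sperf does not currently do this.

```python
import io
import json

def save_rows(rows, fp):
    """Write each parsed row as one JSON object per line (JSON Lines)."""
    for row in rows:
        fp.write(json.dumps(row) + "\n")

def load_rows(fp):
    """Rereading is now a cheap JSON decode per line instead of a full reparse."""
    return [json.loads(line) for line in fp if line.strip()]

# Hypothetical rows a parser might have captured from system.log.
rows = [
    {"file": "system.log", "event": "pause", "ms": 210},
    {"file": "system.log", "event": "pause", "ms": 450},
]
buf = io.StringIO()  # stands in for a file on disk
save_rows(rows, buf)
buf.seek(0)
print(load_rows(buf) == rows)  # True
```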
Problems with this
- not all tools read the same files. Do we now parse all files? Do we have a tool that runs all the parsers?
- duplicate data is handled differently by different tools in sperf. We would need to figure out how to handle this without breaking other tools or surprising third parties consuming the data.
- where do we store it? What if there isn't enough space? Is it easy for people to find?
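For the duplicate-data problem, one possible (purely illustrative) approach is to give each parsed row a stable fingerprint, so tools that want deduplicated data can filter while others keep the raw rows. Nothing here is sperf's actual behavior; it is a sketch of one option.

```python
import hashlib
import json

def row_key(row):
    """Stable fingerprint of a parsed row: hash of its canonical JSON form.
    sort_keys=True makes the key independent of dict insertion order."""
    canonical = json.dumps(row, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def dedupe(rows):
    """Keep only the first occurrence of each distinct row."""
    seen = set()
    out = []
    for row in rows:
        key = row_key(row)
        if key not in seen:
            seen.add(key)
            out.append(row)
    return out

rows = [
    {"node": "10.0.0.1", "ms": 210},
    {"node": "10.0.0.1", "ms": 210},  # duplicate, e.g. the same log shipped twice
    {"node": "10.0.0.2", "ms": 450},
]
print(len(dedupe(rows)))  # 2
```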