Of course this can be done with filters beforehand, but it would be easier to have an option that, for example, ignores changing numbers or timestamp-like data.
I assume simply applying a regexp substitution to the string before hashing it would allow that. It would only take finding good regular expressions for some common filters.
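A minimal Rust sketch of that idea (the timestamp pattern and the `XXX` replacement token are just illustrative assumptions, not anything anewer actually implements): substitute timestamp-like substrings before the line is inserted into the seen-set, so lines that differ only in their timestamp collapse into one entry.

```rust
use regex::Regex;
use std::collections::HashSet;
use std::io::{self, BufRead};

fn main() {
    // Hypothetical built-in filter: mask ISO-8601-like timestamps before deduplication.
    let ts = Regex::new(r"\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}(\.\d+)?").unwrap();
    let mut seen: HashSet<String> = HashSet::new();

    for line in io::stdin().lock().lines() {
        let line = line.unwrap();
        // Replace timestamp-like substrings with a fixed token and dedup on the result.
        let key = ts.replace_all(&line, "XXX").into_owned();
        if seen.insert(key) {
            // Print the original, unmodified line the first time its normalized form is seen.
            println!("{line}");
        }
    }
}
```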
You mean something like an integrated cut? Do you have an example use case? I'm not sure I understand it correctly and don't want to agree or rule it out before that.
I am mostly interested in log file analysis, where I sometimes have timestamps at the front or random IDs that are irrelevant for the bigger picture. Usually I use `sed 's/.../XXX/' | sort -u`, but that's not optimal.
Would it be helpful if anewer had a parameter that skips the first x chars of each line? I think this could be done without losing much performance.
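A rough sketch of that variant, again just an assumption about how a hypothetical skip parameter might behave: lines are compared only from a fixed byte offset onward, so a fixed-width timestamp prefix is ignored without any regex work.

```rust
use std::collections::HashSet;
use std::io::{self, BufRead};

/// Deduplicate stdin, ignoring the first `skip` bytes of each line
/// (e.g. a fixed-width timestamp prefix). Hypothetical behaviour, for illustration only.
fn dedup_skipping(skip: usize) {
    let mut seen: HashSet<String> = HashSet::new();
    for line in io::stdin().lock().lines() {
        let line = line.unwrap();
        // The dedup key is the line with its first `skip` bytes dropped;
        // lines shorter than `skip` fall back to an empty key.
        let key = line.get(skip..).unwrap_or("").to_string();
        if seen.insert(key) {
            println!("{line}");
        }
    }
}

fn main() {
    // e.g. skip a "2024-01-01 12:00:00 " style prefix of 20 bytes
    dedup_skipping(20);
}
```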