
feat: pass path to censor function #16

Merged
merged 14 commits into davidmarkclements:master on Oct 2, 2020

Conversation

ethanresnick
Contributor

@ethanresnick ethanresnick commented Mar 18, 2019

This PR calls the user's censor function with the path being censored as the second argument. I'm trying to support two different use cases here:

  1. Sometimes different keys need to be redacted in different ways. For example, on an object representing a bank account, you might censor the account number by xxxx-ing out most of the numbers (but still leave a few for debugging), while censoring the account owner's name by removing it completely:

    censor: (v, path) => path[path.length - 1] === 'accountNumber' 
      ? `xxxxxx${v.substr(-4)}` 
      : '[Redacted]'
  2. Because fast-redact serializes the redacted object rather than returning a deep clone (for well-explained reasons), it's difficult to compose redaction logic. For example, suppose I have a few different business objects, like bank accounts, clients, etc., and that I want to define a separate redaction function for each object (in the same place where I define the object's schema etc). For example, I might define redactAccount as:

    // censorAccount is the same censor function shown above
    const accountSecretPaths = ["accountNumber", "otherSecretPath"];
    const redactAccount = fastRedact({
      paths: accountSecretPaths,
      censor: censorAccount
    });

    Now, suppose I need to log a redacted version of an object like:

    { clients: [...], accounts: [...] }

    If my redactAccount et al functions returned censored objects, rather than strings, this would be as simple as:

    log({ clients: [...].map(redactClient), accounts: [...].map(redactAccount) })

    So, my thought is that, by passing the path to censor functions, it becomes possible to support cases like this. I.e., I can now define a composedRedact function like so:

    const opts = {
      paths: [
        ...accountSecretPaths.map(it => `accounts.*.${it}`),
        ...clientSecretPaths.map(it => `clients.*.${it}`)
      ],
      censor: (v, path) => path[0] === 'clients'
        ? censorClient(v, path.slice(2))
        : censorAccount(v, path.slice(2))
    };
    
    const composedRedact = fastRedact(opts);

    Fwiw, I explored approaches to composition that leveraged `serialize: false` and `restore()`, but ran into the limitation that the state stored internally for each redaction function returned by fastRedact only covers the last object redacted, so it wasn't possible to redact a bunch of objects without serialization, collect and serialize the results, and then restore all of them.

    Given that limitation, I couldn't think of an approach for supporting composition that, at least absent a major refactor, wouldn't, under the hood, do more or less what the above opts.censor and opts.paths are doing explicitly. (Although I certainly can imagine some more convenient APIs that could be added to abstract that basic logic.)
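To make the dispatch idea above concrete, here is a minimal, self-contained sketch. The `censorAccount` and `censorClient` bodies are illustrative stand-ins (not code from this PR), and running it does not require fast-redact itself — it only exercises the censor functions with the path arrays fast-redact would pass as the second argument:

```javascript
// Hypothetical per-object censors (stand-ins for the functions described above).
const censorAccount = (v, path) =>
  path[path.length - 1] === 'accountNumber'
    ? `xxxxxx${String(v).slice(-4)}`
    : '[Redacted]'
const censorClient = () => '[Redacted client field]'

// Composed censor: dispatch on the top-level key, then strip the
// `accounts.*` / `clients.*` prefix before delegating.
const composedCensor = (v, path) =>
  path[0] === 'clients'
    ? censorClient(v, path.slice(2))
    : censorAccount(v, path.slice(2))

composedCensor('1234-5678-9012-3456', ['accounts', '0', 'accountNumber'])
// → 'xxxxxx3456'
composedCensor('Ada', ['clients', '3', 'name'])
// → '[Redacted client field]'
```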

@ethanresnick
Contributor Author

Also, I'm certainly no V8 performance expert, but I tried to be relatively performance-minded in my implementation here. Still, if its performance is an issue, I imagine a relatively simple way to take down the overhead further would be to test the `.length` of the censor function at the beginning and not bother constructing the path array if the arity is only 1.
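As a sketch of that arity check (hypothetical helper and names, not the actual implementation in this PR), relying on `Function.prototype.length` reporting the declared parameter count:

```javascript
// Only construct the path array when the censor declares a second parameter.
// Function.prototype.length counts declared parameters (before any default
// or rest parameter), so a one-argument censor skips path construction.
function callCensor (censor, value, buildPath) {
  if (censor.length > 1) {
    return censor(value, buildPath()) // path built lazily, only when needed
  }
  return censor(value)
}

const simple = (v) => '[Redacted]'                       // arity 1 → no path built
const withPath = (v, path) => `[Redacted ${path.join('.')}]` // arity 2 → path passed

callCensor(simple, 'secret', () => ['a', 'b'])   // → '[Redacted]'
callCensor(withPath, 'secret', () => ['a', 'b']) // → '[Redacted a.b]'
```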

Collaborator

@mcollina mcollina left a comment


Good work!!!

Have you checked the benchmarks? Is this change impacting them at all?

@ethanresnick
Contributor Author

@mcollina Thanks for the review. I'll try to work on this more tonight.

@ethanresnick
Contributor Author

ethanresnick commented Mar 18, 2019

As far as the benchmarks go, here's what I got:

Before

benchNoirV2*500: 85.971ms
benchFastRedact*500: 41.433ms
benchFastRedactRestore*500: 14.223ms
benchNoirV2Wild*500: 67.439ms
benchFastRedactWild*500: 29.585ms
benchFastRedactWildRestore*500: 37.385ms
benchFastRedactIntermediateWild*500: 133.649ms
benchFastRedactIntermediateWildRestore*500: 93.075ms
benchJSONStringify*500: 236.068ms
benchNoirV2Serialize*500: 319.681ms
benchFastRedactSerialize*500: 240.579ms
benchNoirV2WildSerialize*500: 284.403ms
benchFastRedactWildSerialize*500: 276.943ms
benchFastRedactIntermediateWildSerialize*500: 328.257ms
benchFastRedactIntermediateWildMatchWildOutcomeSerialize*500: 477.666ms
benchFastRedactStaticMatchWildOutcomeSerialize*500: 262.410ms
benchNoirV2CensorFunction*500: 41.339ms
benchFastRedactCensorFunction*500: 67.306ms
benchNoirV2*500: 52.799ms
benchFastRedact*500: 2.785ms
benchFastRedactRestore*500: 15.687ms
benchNoirV2Wild*500: 93.922ms
benchFastRedactWild*500: 21.801ms
benchFastRedactWildRestore*500: 30.377ms
benchFastRedactIntermediateWild*500: 152.801ms
benchFastRedactIntermediateWildRestore*500: 94.812ms
benchJSONStringify*500: 234.192ms
benchNoirV2Serialize*500: 305.718ms
benchFastRedactSerialize*500: 235.601ms
benchNoirV2WildSerialize*500: 273.017ms
benchFastRedactWildSerialize*500: 269.712ms
benchFastRedactIntermediateWildSerialize*500: 308.397ms
benchFastRedactIntermediateWildMatchWildOutcomeSerialize*500: 470.652ms
benchFastRedactStaticMatchWildOutcomeSerialize*500: 265.717ms
benchNoirV2CensorFunction*500: 50.077ms
benchFastRedactCensorFunction*500: 45.616ms

After

benchNoirV2*500: 85.883ms
benchFastRedact*500: 46.895ms
benchFastRedactRestore*500: 9.792ms
benchNoirV2Wild*500: 57.063ms
benchFastRedactWild*500: 29.695ms
benchFastRedactWildRestore*500: 36.148ms
benchFastRedactIntermediateWild*500: 122.937ms
benchFastRedactIntermediateWildRestore*500: 91.967ms
benchJSONStringify*500: 235.932ms
benchNoirV2Serialize*500: 322.515ms
benchFastRedactSerialize*500: 244.163ms
benchNoirV2WildSerialize*500: 292.856ms
benchFastRedactWildSerialize*500: 287.887ms
benchFastRedactIntermediateWildSerialize*500: 339.095ms
benchFastRedactIntermediateWildMatchWildOutcomeSerialize*500: 480.987ms
benchFastRedactStaticMatchWildOutcomeSerialize*500: 261.544ms
benchNoirV2CensorFunction*500: 76.100ms
benchFastRedactCensorFunction*500: 81.954ms
benchNoirV2*500: 87.294ms
benchFastRedact*500: 2.751ms
benchFastRedactRestore*500: 16.665ms
benchNoirV2Wild*500: 35.325ms
benchFastRedactWild*500: 20.978ms
benchFastRedactWildRestore*500: 28.181ms
benchFastRedactIntermediateWild*500: 152.879ms
benchFastRedactIntermediateWildRestore*500: 93.906ms
benchJSONStringify*500: 230.684ms
benchNoirV2Serialize*500: 307.351ms
benchFastRedactSerialize*500: 236.411ms
benchNoirV2WildSerialize*500: 272.602ms
benchFastRedactWildSerialize*500: 274.934ms
benchFastRedactIntermediateWildSerialize*500: 344.361ms
benchFastRedactIntermediateWildMatchWildOutcomeSerialize*500: 472.360ms
benchFastRedactStaticMatchWildOutcomeSerialize*500: 257.025ms
benchNoirV2CensorFunction*500: 70.302ms
benchFastRedactCensorFunction*500: 61.608ms

I honestly am not sure how to tell what's noise and what's meaningful. The above is just from one run of `npm run bench` against each codebase.

Collaborator

@mcollina mcollina left a comment


LGTM

@ethanresnick
Contributor Author

@mcollina @davidmarkclements What's the next step for getting this merged?

Owner

@davidmarkclements davidmarkclements left a comment


Could you please create separate tests for path rather than adapting current ones?

Could you also add a benchmark with the censor function and a benchmark that covers all the code added to specialSet, and post the results?

Growing the array later seems to be faster than allocating this string upfront and overwriting it later.
This gives back some of the performance cost of constructing the path,
for people who pass in censor functions that only take one argument.
`isCensorFct` prop is never passed in as part of `o` (see index.js),
and the builder already had an object with a `secret` key, making the
subsequent builder.push redundant.
ethanresnick added a commit to ethanresnick/fast-redact that referenced this pull request Oct 9, 2019
This creates a new object before each test, which should make the
benchmarks more reliable (since state/hidden class transitions on the
object from prior tests could affect the speed of the redaction). This
doesn’t go as far as to create a new object right before each redaction
call (i.e., in the `for` loop) as that seems like it would add more
noise/difficulty interpreting the final results, since a bigger chunk
of each benchmark’s work would be this object creation.

This also means that the `serialize: false` tests (and the pino-noir
tests, which are effectively serialize false) don’t leave the object in
a different state for the subsequent tests, which is required for
adding tests for davidmarkclements#16

Most of the benchmarks remained similar before and after this change,
with some of the fast-redact benchmarks getting 10-30% faster (though I
don’t know how much of that is noise). A few benchmarks seemed to show
consistent differences of greater than 30%:

- `benchNoirV2` (the first case): about 50% faster [consistently], for
reasons I don’t quite understand.

- `benchFastRedactRestore`: about 50% faster

- `benchFastRedactIntermediateWild`: about 50% slower

- `benchNoirV2Wild`, `benchNoirV2CensorFunction`: 2-4x as fast

- `benchFastRedactCensorFunction`: 50-100% faster

Here are the full results:

# Before [ran twice, and the results were pretty similar]

benchNoirV2*500: 84.279ms
benchFastRedact*500: 14.280ms
benchFastRedactRestore*500: 22.773ms
benchNoirV2Wild*500: 105.580ms
benchFastRedactWild*500: 40.533ms
benchFastRedactWildRestore*500: 41.080ms
benchFastRedactIntermediateWild*500: 107.193ms
benchFastRedactIntermediateWildRestore*500: 92.896ms
benchJSONStringify*500: 323.536ms
benchNoirV2Serialize*500: 407.667ms
benchFastRedactSerialize*500: 337.723ms
benchNoirV2WildSerialize*500: 380.418ms
benchFastRedactWildSerialize*500: 372.057ms
benchFastRedactIntermediateWildSerialize*500: 417.458ms
benchFastRedactIntermediateWildMatchWildOutcomeSerialize*500: 572.464ms
benchFastRedactStaticMatchWildOutcomeSerialize*500: 348.632ms
benchNoirV2CensorFunction*500: 70.804ms
benchFastRedactCensorFunction*500: 59.476ms
benchNoirV2*500: 48.808ms
benchFastRedact*500: 11.550ms
benchFastRedactRestore*500: 13.436ms
benchNoirV2Wild*500: 61.383ms
benchFastRedactWild*500: 31.472ms
benchFastRedactWildRestore*500: 34.325ms
benchFastRedactIntermediateWild*500: 128.071ms
benchFastRedactIntermediateWildRestore*500: 128.602ms
benchJSONStringify*500: 317.474ms
benchNoirV2Serialize*500: 395.179ms
benchFastRedactSerialize*500: 323.449ms
benchNoirV2WildSerialize*500: 369.051ms
benchFastRedactWildSerialize*500: 356.204ms
benchFastRedactIntermediateWildSerialize*500: 426.994ms
benchFastRedactIntermediateWildMatchWildOutcomeSerialize*500: 531.789ms
benchFastRedactStaticMatchWildOutcomeSerialize*500: 340.540ms
benchNoirV2CensorFunction*500: 38.953ms
benchFastRedactCensorFunction*500: 53.361ms

# After [ran twice, and the results were pretty similar]

benchNoirV2*500: 54.057ms
benchFastRedact*500: 15.949ms
benchFastRedactRestore*500: 13.749ms
benchNoirV2Wild*500: 34.787ms
benchFastRedactWild*500: 44.642ms
benchFastRedactWildRestore*500: 42.949ms
benchFastRedactIntermediateWild*500: 162.311ms
benchFastRedactIntermediateWildRestore*500: 106.921ms
benchJSONStringify*500: 306.766ms
benchNoirV2Serialize*500: 403.440ms
benchFastRedactSerialize*500: 321.294ms
benchNoirV2WildSerialize*500: 366.650ms
benchFastRedactWildSerialize*500: 342.946ms
benchFastRedactIntermediateWildSerialize*500: 437.797ms
benchFastRedactIntermediateWildMatchWildOutcomeSerialize*500: 573.556ms
benchFastRedactStaticMatchWildOutcomeSerialize*500: 336.521ms
benchNoirV2CensorFunction*500: 27.110ms
benchFastRedactCensorFunction*500: 47.454ms
benchNoirV2*500: 38.437ms
benchFastRedact*500: 10.272ms
benchFastRedactRestore*500: 9.693ms
benchNoirV2Wild*500: 18.504ms
benchFastRedactWild*500: 30.266ms
benchFastRedactWildRestore*500: 35.108ms
benchFastRedactIntermediateWild*500: 131.794ms
benchFastRedactIntermediateWildRestore*500: 110.691ms
benchJSONStringify*500: 299.861ms
benchNoirV2Serialize*500: 384.236ms
benchFastRedactSerialize*500: 314.049ms
benchNoirV2WildSerialize*500: 365.485ms
benchFastRedactWildSerialize*500: 344.158ms
benchFastRedactIntermediateWildSerialize*500: 426.421ms
benchFastRedactIntermediateWildMatchWildOutcomeSerialize*500: 537.079ms
benchFastRedactStaticMatchWildOutcomeSerialize*500: 340.104ms
benchNoirV2CensorFunction*500: 16.021ms
benchFastRedactCensorFunction*500: 31.100ms
@ethanresnick ethanresnick mentioned this pull request Oct 9, 2019

Add benchmarks for path arg to censor fn
This could affect the performance
@ethanresnick
Contributor Author

ethanresnick commented Oct 9, 2019

Could you also add a benchmark with the censor function and a benchmark that covers all the code added to specialSet, and post the results?

@davidmarkclements I've pushed a commit that adds these benchmarks. There was already one benchmark using a censor function, but I've added a couple of others that test the combination of a censor function (with and without the second path argument) and different redaction path wildcard scenarios. For the specialSet changes, the only new branch in the logic is the case of calling the censor function with a path, which is covered by the new benchmark. (All the rest of that function is covered by the existing benchmarks, I believe.)

Here are the results....

Before

These results are with none of the changes from this PR, but with the rework to the benchmarks from #22. These are taken straight from the OP in #22.

benchNoirV2*500: 54.057ms
benchFastRedact*500: 15.949ms
benchFastRedactRestore*500: 13.749ms
benchNoirV2Wild*500: 34.787ms
benchFastRedactWild*500: 44.642ms
benchFastRedactWildRestore*500: 42.949ms
benchFastRedactIntermediateWild*500: 162.311ms
benchFastRedactIntermediateWildRestore*500: 106.921ms
benchJSONStringify*500: 306.766ms
benchNoirV2Serialize*500: 403.440ms
benchFastRedactSerialize*500: 321.294ms
benchNoirV2WildSerialize*500: 366.650ms
benchFastRedactWildSerialize*500: 342.946ms
benchFastRedactIntermediateWildSerialize*500: 437.797ms
benchFastRedactIntermediateWildMatchWildOutcomeSerialize*500: 573.556ms
benchFastRedactStaticMatchWildOutcomeSerialize*500: 336.521ms
benchNoirV2CensorFunction*500: 27.110ms
benchFastRedactCensorFunction*500: 47.454ms
benchNoirV2*500: 38.437ms
benchFastRedact*500: 10.272ms
benchFastRedactRestore*500: 9.693ms
benchNoirV2Wild*500: 18.504ms
benchFastRedactWild*500: 30.266ms
benchFastRedactWildRestore*500: 35.108ms
benchFastRedactIntermediateWild*500: 131.794ms
benchFastRedactIntermediateWildRestore*500: 110.691ms
benchJSONStringify*500: 299.861ms
benchNoirV2Serialize*500: 384.236ms
benchFastRedactSerialize*500: 314.049ms
benchNoirV2WildSerialize*500: 365.485ms
benchFastRedactWildSerialize*500: 344.158ms
benchFastRedactIntermediateWildSerialize*500: 426.421ms
benchFastRedactIntermediateWildMatchWildOutcomeSerialize*500: 537.079ms
benchFastRedactStaticMatchWildOutcomeSerialize*500: 340.104ms
benchNoirV2CensorFunction*500: 16.021ms
benchFastRedactCensorFunction*500: 31.100ms

After

Same benchmarks, but with the code changes from this PR.

benchNoirV2*500: 57.173ms
benchFastRedact*500: 15.292ms
benchFastRedactRestore*500: 17.874ms
benchNoirV2Wild*500: 36.849ms
benchFastRedactWild*500: 42.018ms
benchFastRedactWildRestore*500: 43.830ms
benchFastRedactIntermediateWild*500: 170.819ms
benchFastRedactIntermediateWildRestore*500: 104.677ms
benchJSONStringify*500: 295.113ms
benchNoirV2Serialize*500: 400.466ms
benchFastRedactSerialize*500: 318.930ms
benchNoirV2WildSerialize*500: 358.291ms
benchFastRedactWildSerialize*500: 339.127ms
benchFastRedactIntermediateWildSerialize*500: 430.686ms
benchFastRedactIntermediateWildMatchWildOutcomeSerialize*500: 578.805ms
benchFastRedactStaticMatchWildOutcomeSerialize*500: 330.956ms
benchNoirV2CensorFunction*500: 26.536ms
benchFastRedactCensorFunction*500: 42.226ms
benchNoirV2*500: 52.377ms
benchFastRedact*500: 8.433ms
benchFastRedactRestore*500: 10.723ms
benchNoirV2Wild*500: 21.983ms
benchFastRedactWild*500: 35.885ms
benchFastRedactWildRestore*500: 40.448ms
benchFastRedactIntermediateWild*500: 186.271ms
benchFastRedactIntermediateWildRestore*500: 115.239ms
benchJSONStringify*500: 289.707ms
benchNoirV2Serialize*500: 395.613ms
benchFastRedactSerialize*500: 318.428ms
benchNoirV2WildSerialize*500: 356.309ms
benchFastRedactWildSerialize*500: 332.504ms
benchFastRedactIntermediateWildSerialize*500: 410.139ms
benchFastRedactIntermediateWildMatchWildOutcomeSerialize*500: 546.796ms
benchFastRedactStaticMatchWildOutcomeSerialize*500: 362.284ms
benchNoirV2CensorFunction*500: 14.827ms
benchFastRedactCensorFunction*500: 37.329ms

The results look within noise to me, which I assume is because this PR is now pretty aggressive about not constructing/passing the path: it skips that work if censor isn't a function and (a new check) if its `.length` isn't greater than 1.

For the wholly-new benchmarks, here are the results I got:

benchFastRedactCensorFunctionIntermediateWild*500: 157.490ms
benchFastRedactCensorFunctionWithPath*500: 28.350ms
benchFastRedactWildCensorFunctionWithPath*500: 93.050ms
benchFastRedactIntermediateWildCensorFunctionWithPath*500: 128.482ms
...
...
benchFastRedactCensorFunctionIntermediateWild*500: 147.375ms
benchFastRedactCensorFunctionWithPath*500: 12.012ms
benchFastRedactWildCensorFunctionWithPath*500: 88.490ms
benchFastRedactIntermediateWildCensorFunctionWithPath*500: 127.904ms

Could you please create separate tests for path rather than adapting current ones?

Done

@ethanresnick
Contributor Author

@davidmarkclements Can you take another look at this? I'd love to get it merged.

@ethanresnick
Contributor Author

@davidmarkclements Ping :)

@davidmarkclements
Owner

sorry this took so long to get through @ethanresnick - thanks for the PR!

@davidmarkclements davidmarkclements merged commit e702039 into davidmarkclements:master Oct 2, 2020
@davidmarkclements
Owner

released as v3.0.0 (I know it's likely a minor, but we need to ensure we don't break anything in pino, so we'll do a manual upgrade over there)
