Support for `snapshot(fix=True)` #109
Comments
Thank you for your feedback and your sponsoring ❤️ Enabling inline-snapshot by default means (currently, because of #57) disabling pytest's assert rewriting.
You can enable it by default, but this currently gives you no control over which snapshots will be updated, which brings the risk that you change snapshots you maybe did not want to change (if you run your whole test suite). But it might be useful if you only run specific tests in your UI. Another way is to add some extra pytest args in the VS Code config:

```json
{
    "python.testing.pytestArgs": [
        "tests", "--inline-snapshot=fix,create"
    ]
}
```

It would be nice if we could configure a third option besides "Run Test" and "Debug Test", but I think this is currently not possible. The problem with this approach is that you approve the current behavior every time you run your tests. A bug will not cause your tests to fail, but will change them and make sure that the bug stays in the code 😄 💥. This is the reason why I'm not promoting this approach.
I don't like the idea of creating extra syntax for this. It feels like a workaround for "I want to click on the snapshot and rerun the test to fix/create the value". Do you really have the use case that you only want to fix one of multiple snapshots in the same test? Another workflow might be: you run your tests, inline-snapshot records what it can fix, and an LSP client then allows you to fix the snapshots in your source code after the test failed.
Thanks for the quick feedback!
Yeah, my main workflow for inline-snapshot is, for example, in FastAPI tests when testing the OpenAPI schema: it's a large dict/JSON with values generated from the standard. For example, here's a short one: https://github.com/fastapi/fastapi/blob/master/tests/test_tutorial/test_first_steps/test_tutorial001.py#L22-L42 But then, there are some that are really long. When those don't pass, it's not easy to find what's breaking, so being able to update the snapshot for each specific one of those and see the differences between the old and new OpenAPI with git can help a lot.
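For context, a simplified sketch in the spirit of the linked FastAPI test (the app and route below are made up for illustration): the whole OpenAPI document is compared against one inline snapshot, so a schema change shows up as a git diff of that snapshot literal.

```python
from fastapi import FastAPI
from fastapi.testclient import TestClient
from inline_snapshot import snapshot

# Minimal stand-in app; the real FastAPI tests use the tutorial apps.
app = FastAPI()


@app.get("/")
def read_root():
    return {"message": "Hello World"}


client = TestClient(app)


def test_openapi_schema():
    response = client.get("/openapi.json")
    assert response.status_code == 200, response.text
    # A run with `--inline-snapshot=create` fills this in; on a real app the
    # resulting literal is a large dict, and updating it per test while
    # reviewing the git diff is the workflow described above.
    assert response.json() == snapshot()
```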
That sounds very cool! I'm not even sure what it means and how it would work, but that would probably be much better than what I'm thinking. 😅 I think for now I can set and unset the config before each run, that's working. 🤓 The downside is that if I forget to change the config back after a generating run, it would re-generate the snapshot; there's no way to "enable only once". But that will work for now. Feel free to close this one now! ☕
Thank you for inline-snapshot! It's great. 🙇 🍰
Request
I came to `inline-snapshot` from Samuel Colvin's `insert_assert`. One thing I miss that I would like to have is not needing to run the CLI to generate the snapshots. I would like to be able to click the icon in my editor (VS Code) to run the test and get the snapshot with that.
This is a request somewhat similar to #62 and #57
Example
I would like to be able to write a snapshot call with the flag set, and after that single run (equivalent to running `pytest` without inline-snapshot parameters) get the recorded value written into the call. Then, setting the flag again on a snapshot whose value has become outdated and running once more would result in the value being fixed in place. A rough sketch of what this could look like follows below.
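A minimal sketch, assuming the `fix=True` flag proposed in this issue (it is not part of inline-snapshot's current API), of what those two runs might look like:

```python
# Sketch only: `fix=True` is the flag proposed in this issue, not a real
# inline-snapshot keyword argument today.
from inline_snapshot import snapshot


def test_example():
    # First run with the flag: a plain `pytest` invocation (no
    # --inline-snapshot=... option) would write the observed value into the
    # snapshot and drop the flag, leaving:
    #     assert 1 + 2 == snapshot(3)
    assert 1 + 2 == snapshot(fix=True)


def test_example_outdated():
    # Later, if the asserted value changes, setting the flag again and
    # running once would rewrite the stored value in place, leaving:
    #     assert 1 + 3 == snapshot(4)
    assert 1 + 3 == snapshot(3, fix=True)
```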
Alternatives
I can think of a couple of alternatives. It could be that `snapshot(fix=True)` would cover both fix and create. It could also be `snapshot(mode="create")` and `snapshot(mode="fix")` to avoid invalid states like `snapshot(fix=True, create=True)`.
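For illustration, the same idea with the `mode=` spelling (equally hypothetical):

```python
# Sketch of the alternative `mode=` spelling from the paragraph above; also
# hypothetical, not current inline-snapshot API.
from inline_snapshot import snapshot


def test_user_payload():
    # `mode="create"` would fill an empty snapshot on the next plain run and
    # `mode="fix"` would overwrite an outdated one; a single keyword makes a
    # contradictory fix-and-create combination impossible to express.
    assert {"id": 1, "name": "Alice"} == snapshot(mode="create")
```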
Tests Always Passing
The result of one of these code-generating runs could be a "failure" so that regenerating the test data doesn't automatically "pass". And as the changed code removes the flag, it would not regenerate the data again the next time.
So, I would have to set the function parameter manually when I want to update a snapshot, and as it would be removed right after, no committed code would have those flags, preventing it from always "fixing"/passing the tests automatically.
In Short
In short, I want to be able to run it through the UI, which means it runs `pytest` without parameters underneath, instead of having to call it through the CLI, figure out the file I want to test, find its path among many other files, or run the tests for everything, just for a single test.