Hiya,
I'm looking to use capturemock to emulate an LLM in some testing that will be part of a CI/CD pipeline. The functions that need testing are defined asynchronously, and this seems to be causing problems with capturemock. If the function under test is defined synchronously, everything works fine. Unfortunately, when it is defined asynchronously (which is required), capturemock doesn't seem to recognise the library it's supposed to be mocking. It raises no error, but it won't create a .mock file in record mode, nor will it use one (created with a synchronous definition) in replay mode.
I have created a MWE that doesn't involve large LLM files but still demonstrates the problem.
Python version: Python 3.7.17
requirements.txt:
test_requests.py:
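(The actual requirements.txt and test_requests.py aren't reproduced above; as a rough stand-in, a minimal test along the lines described might look like the sketch below. The @capturemock decorator usage is my reading of capturemock's Python API, and the function names and URL are illustrative, not the original MWE.)

```python
# Hypothetical stand-in for the MWE described above (not the original attachment).
# Assumes capturemock's @capturemock("<module>") decorator intercepts calls into
# the named module while the test runs.
import asyncio

import requests
from capturemock import capturemock


async def get_status(url):
    # Async definition: with this, no .mock file is written in record mode.
    return requests.get(url).status_code

# def get_status(url):
#     # Synchronous definition: with this, recording works as expected.
#     return requests.get(url).status_code


@capturemock("requests")
def test_get_status():
    assert asyncio.run(get_status("https://example.com")) == 200
    # assert get_status("https://example.com") == 200  # synchronous variant
```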
Then I run pytest, but it won't create a .mock file. If you want to run it synchronously, you can comment out the 3 relevant lines (11, 15, 19) and comment in the lines directly below them. This lets you confirm that it's only the async aspect causing the problem, and it creates a .mock file which you can then try in replay mode with the asynchronous definition. It may be that capturemock has to be used in a different way with asynchronous functions, but I couldn't see anything in the documentation about how to do this. Any help you could provide on this would be appreciated.
I think capturemock's Python recording ability predates the existence of async functions in Python unfortunately :) Or at least it was never something I considered when writing it.
I agree this would be a useful feature. Unfortunately I cannot promise to be able to work on this any time soon. Am very happy to consider pull requests though :)
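For anyone who does pick this up: the core difficulty is that calling an async def only returns a coroutine object, so a recording wrapper never sees the eventual result unless it awaits it. A generic sketch of the shape such a change might take (this is not capturemock's actual internals, just an illustration using inspect.iscoroutinefunction):

```python
# Generic illustration of an async-aware recording wrapper; the names are made up
# and this does not reflect capturemock's real implementation.
import asyncio
import functools
import inspect


def recording_wrapper(func, record):
    """Wrap func so each call's arguments and result are passed to record()."""
    if inspect.iscoroutinefunction(func):
        @functools.wraps(func)
        async def async_wrapper(*args, **kwargs):
            result = await func(*args, **kwargs)  # the real value exists only after awaiting
            record(func.__name__, args, kwargs, result)
            return result
        return async_wrapper

    @functools.wraps(func)
    def sync_wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        record(func.__name__, args, kwargs, result)
        return result
    return sync_wrapper


async def greet(name):
    return "hello " + name


recorded = []
wrapped = recording_wrapper(greet, lambda *entry: recorded.append(entry))
print(asyncio.run(wrapped("world")))  # "hello world"; the call is now captured in `recorded`
```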
Hi @gjb1002,
Unfortunately, I don't know this area of Python or asynchronous functions well enough to have a chance of making it work. I understand that you don't have time to work on it now but appreciate the quick response to let me know that it is at least expected behaviour.
If you do decide to tackle this, please do let me know.