Writing these tests is time-consuming. Is there a way to automate this systematic testing?
Wouldn't it be terribly slow too? Shall we use some kind of snapshot testing against successful responses (dwyl/aws-sdk-mock), something like the sketch below?
Additionally, publishing a table that states feature parity for each supported service would be a nice add-on to the README.
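As a rough sketch of the snapshot idea, assuming a Jest/Vitest-style runner and aws-sdk-mock's usual `mock`/`restore` API; the DynamoDB command and the canned response are illustrative, not the actual Itty-AWS surface:

```ts
import AWSMock from "aws-sdk-mock";
import AWS from "aws-sdk";

describe("DynamoDB.getItem snapshot", () => {
  beforeAll(() => {
    AWSMock.setSDKInstance(AWS);
    // Return a canned "successful" response for every getItem call.
    AWSMock.mock("DynamoDB", "getItem", (params, callback) => {
      callback(null, { Item: { id: { S: "42" }, name: { S: "itty" } } });
    });
  });

  afterAll(() => AWSMock.restore("DynamoDB"));

  it("matches the recorded successful response", async () => {
    const response = await new AWS.DynamoDB()
      .getItem({ TableName: "Test", Key: { id: { S: "42" } } })
      .promise();

    // The first run records the successful response; later runs
    // (e.g. the same command issued through Itty-AWS) are diffed against it.
    expect(response).toMatchSnapshot();
  });
});
```

Each snapshot would only need to be recorded once per command, which would keep the suite from being terribly slow, and the parity table mentioned above could even be derived from which snapshots exist and pass.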
We're going to have to write some tests for coverage as we build the framework anyway; I wonder if we can kill two birds with one stone here. Could we transform the unit tests into performance tests, so we at least cover ground as we implement instead of having to maintain multiple test suites? We could build the unit tests in a way that swaps out the implementation and is auto-instrumented for benchmarking.
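One possible shape for that, as a minimal sketch assuming a Jest/Vitest-style `describe.each`; the shared `DynamoClient` interface and the client factories are hypothetical placeholders, not existing code:

```ts
// Hypothetical minimal interface that both implementations would satisfy.
interface DynamoClient {
  getItem(input: { TableName: string; Key: Record<string, unknown> }): Promise<unknown>;
}

// Placeholder factories; in the real suite these would construct the actual clients.
const makeAwsSdkClient = (): DynamoClient => ({ getItem: async () => ({ Item: {} }) });
const makeIttyAwsClient = (): DynamoClient => ({ getItem: async () => ({ Item: {} }) });

const implementations: Array<[string, () => DynamoClient]> = [
  ["aws-sdk", makeAwsSdkClient],
  ["itty-aws", makeIttyAwsClient],
];

// The same functional spec runs against every implementation,
// and records a crude timing as a side effect ("auto-instrumented").
describe.each(implementations)("DynamoDB.getItem (%s)", (name, makeClient) => {
  it("returns the stored item", async () => {
    const client = makeClient();
    const start = performance.now();
    const result = await client.getItem({ TableName: "Test", Key: { id: { S: "42" } } });
    const elapsed = performance.now() - start;

    expect(result).toBeDefined();
    console.log(`[bench] ${name}: ${elapsed.toFixed(1)} ms`);
  });
});
```

Single-call timings are too noisy for real benchmarking, but something like this would at least flag gross regressions while the functional coverage grows.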
We can build the same cloud-based platform to test the Itty-AWS API, both for feature parity with the AWS SDK and for performance.
But the performance test suite should be separate from the feature-parity test suite. To me, testing the package's performance means invoking the same command many times, whereas feature parity means invoking each command once. Measuring the performance of every single service does not seem relevant and could get expensive.
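Concretely, the split could look like this; a minimal sketch assuming a Jest/Vitest-style `it.each`, with `client` as a hypothetical stand-in for however the real client is wired in:

```ts
// Hypothetical client exposing the commands under test; provided by the suite's setup.
declare const client: {
  getItem(input: object): Promise<unknown>;
  putItem(input: object): Promise<unknown>;
};

// Feature parity: every supported command is invoked exactly once.
const parityCases: Array<[string, () => Promise<unknown>]> = [
  ["DynamoDB.getItem", () => client.getItem({ TableName: "Test", Key: { id: { S: "1" } } })],
  ["DynamoDB.putItem", () => client.putItem({ TableName: "Test", Item: { id: { S: "1" } } })],
];

it.each(parityCases)("%s succeeds", async (_name, call) => {
  await expect(call()).resolves.toBeDefined();
});

// Performance: one representative command, invoked many times.
it("getItem sustains repeated calls", async () => {
  const runs = 100;
  const start = performance.now();
  for (let i = 0; i < runs; i++) {
    await client.getItem({ TableName: "Test", Key: { id: { S: "1" } } });
  }
  console.log(`mean getItem latency: ${((performance.now() - start) / runs).toFixed(1)} ms`);
});
```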
Finally, we must not forget the more classic tests of the framework code itself, which can run completely offline and therefore much faster.
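Those offline tests could intercept the HTTP layer locally, for example with nock (or an equivalent such as msw if the framework calls fetch). A rough sketch, assuming nock's standard interceptor API, with `makeIttyAwsClient` as a hypothetical placeholder for however the framework client is actually instantiated:

```ts
import nock from "nock";

// Placeholder for the real client construction.
declare function makeIttyAwsClient(options: { region: string }): {
  getItem(input: object): Promise<unknown>;
};

it("handles a DynamoDB response without touching the network", async () => {
  // Intercept the service endpoint locally; no AWS account or network required.
  const scope = nock("https://dynamodb.us-east-1.amazonaws.com")
    .post("/")
    .reply(200, { Item: { id: { S: "1" } } });

  const client = makeIttyAwsClient({ region: "us-east-1" });
  const response = await client.getItem({ TableName: "Test", Key: { id: { S: "1" } } });

  expect(response).toMatchObject({ Item: { id: { S: "1" } } });
  scope.done(); // fails the test if the interceptor was never hit
});
```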