Coverage collection requires instrumented code in same location until collection is not completed causing service downtime #225
First, let me congratulate you on the bravery of running live for testing. The issue as stated in the title, "Coverage collection requires instrumented code in same location until collection is not completed causing service downtime" (presumably a stray "not", or meant as "while ... not"), can be circumvented using the "Instrument now, test later, collect coverage after that" style of operation.
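By way of illustration, a minimal sketch of that cycle; the directories here are placeholders, not taken from this issue, and the flag spellings follow AltCover's documented prepare/runner options:

```bat
:: 1. Instrument now, in place; originals are kept aside in a __Saved folder,
::    and --save makes the recorder write raw visit data (.acv) for later collection.
altcover --inPlace --save --inputDirectory=C:\svc\bin --xmlReport=C:\cover\coverage.xml
:: 2. Test later: start the service from C:\svc\bin, exercise it, then stop it.
:: 3. Collect coverage after that, from the saved visit data:
altcover runner --collect --recorderDirectory=C:\svc\bin
```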
Now, as to the length of time taken to process the coverage data being hundreds or thousands of times longer than observed in the largest test cases I have to hand (AltCover self-testing while unit tests are run): this is essentially the same as issue #175. I trust that you have ensured that the coverage instrumentation has excluded, by filters, any third-party assemblies and such which bulk up the output to no good effect. It may also help to use the … The question is then how much data are being processed, across what size of report. To at least assist me with understanding the scaling here, could you provide the summary data for a run? As an example, this is the summary data for the AltCover self-test in the most recent (at time of writing) GitHub Actions build: …
Anonymising the file names is entirely fine; the key items are the numbers of visits, the times taken, and the number of entities.
Hi Steve,

Below is the requested summary data:

Summary 1: this takes 30 minutes to process 50,192,781 bytes of recorded coverage data.

14:37:27 [exec] ... C:\dotnet_coverage\coverage\OAService\coverage.xml.0.acv (50,192,781b)

Summary 2: this takes 1 minute to process 25,036,175 bytes of recorded coverage data.

11:33:56 [exec] ... C:\dotnet_coverage\coverage\OTService\coverage.xml.0.acv (2,50,36,175b)

I have tried excluding third-party assemblies from the instrumentation, but it excluded everything when using …
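Hypothetical spellings, not our real command line, to show the kind of thing we tried; AltCover filter values are regular expressions matched against assembly names, so a catch-all pattern excludes everything:

```bat
:: A catch-all pattern like this excludes every assembly from instrumentation:
altcover --assemblyFilter=".*" --inputDirectory=C:\svc\bin --xmlReport=C:\cover\coverage.xml
:: Naming the specific third-party assemblies excludes only those:
altcover --assemblyFilter="Newtonsoft|Moq" --inputDirectory=C:\svc\bin --xmlReport=C:\cover\coverage.xml
```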
So, all the time is being taken reading (decompressing and indexing) the recorded visit data, with a code base not much larger than AltCover's even in the worst case. In the slower case, the deflate compression has been extremely efficient, reducing each ~40-byte visit record to just over 5 bits, an indication that a lot of repeat visits are happening; this may account for the amount of processing time needed to unpack it all.

There are two approaches that can be taken to reduce the amount of processing done in this stage. If you're only interested in whether code has been visited, rather than how often, the `--single` option records each point at most once, which greatly reduces the volume of raw visit data (see the sketch at the end of this comment).

Alternatively, we can take advantage of the fact that you are not running tests under …

On the use of filters, which may not be relevant, and which certainly don't look to have a significant effect in this case: if there are any unwanted assemblies (modules) in the coverage report, then they can be skipped by name; a usual one for me is …

Alternatively, if your code of interest all has a common name part like …
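To make the first suggestion concrete, a sketch only, with placeholder paths; `--single` and `--save` are AltCover prepare options:

```bat
:: --single: record each point at most once, so repeat visits in the live
:: service add no further raw data to decompress and index at collect time.
altcover --single --inPlace --save --inputDirectory=C:\svc\bin --xmlReport=C:\cover\coverage.xml
```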
For the moment I shall close this as a duplicate of issue #175.
Hi Steve,
For collecting coverage we need the dotnet service folder and the folder with the .acv files, in the state they were in when we stopped the service. After that we can run the command to collect coverage, which takes 40-50 minutes in our service scenario; this results in downtime for the service.
Can you suggest a way, if possible, to move this instrumented dotnet service (i.e., the __Saved folder) and the folder of .acv files somewhere else and restart the service for normal use?