Undefined method each for Nil in result_merger #406
Comments
If this is indeed the issue, the correct way to do it is to write to a temp file and then do an atomic move. I'll have a poke around to see if this is what is happening.
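The write-to-temp-then-atomic-move pattern mentioned above might look roughly like this (a sketch, not SimpleCov's actual code; the helper name is made up). `rename(2)` is atomic on POSIX within one filesystem, so concurrent readers see either the old file or the complete new one, never a partial write:

```ruby
# Illustrative sketch of an atomic write: write the full payload to a
# sibling temp file, then rename it over the target in one step.
def atomic_write(path, contents)
  tmp_path = "#{path}.tmp.#{Process.pid}"
  File.open(tmp_path, "w") do |f|
    f.write(contents)
    f.fsync # make sure the bytes hit the disk before the rename
  end
  File.rename(tmp_path, path) # atomic within a single filesystem
end
```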
I found this unsafe write, but it's possible this doesn't run concurrently anywhere:

```ruby
module SimpleCov
  module LastRun
    class << self
      def write(json)
        File.open(last_run_path, "w+") do |f|
          f.puts JSON.pretty_generate(json)
        end
      end
    end
  end
end
```

The write that would be causing your issue (result merging) appears to be protected by a write lock.
How can you make this call return `nil`? Given you only see this sometimes, it must be some kind of race condition. I just don't know what :/
@xaviershay that's quite easy; when using the OJ gem, for example, you get the following:
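The output the comment refers to was lost in the scrape, but the difference in question is presumably this (an assumption on my part): the stdlib parser raises on empty input, while OJ in mimic mode returns `nil`.

```ruby
require "json"

# With the stdlib JSON parser, parsing an empty string raises:
begin
  JSON.parse("")
rescue JSON::ParserError => e
  puts "stdlib JSON raises #{e.class}"
end

# With the oj gem in mimic mode, the same call returns nil instead of
# raising, which is how `.resultset` can end up nil:
#
#   require "oj"
#   Oj.mimic_JSON
#   JSON.parse("")  # => nil
```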
We do use a parallel_test setup, so it's quite possible that there are race conditions. Thanks for looking into it!
Ah, I didn't know about OJ. Thanks, that helps!
Anyway, can we move on with this? Our CI is still failing daily because sometimes we parse an empty resultset file.
I still can't repro. If your PR fixes your builds for you, I'd recommend running off that branch until we get a fix into master; that should at least unblock you. (You can set this up in your Gemfile.) I'm not prepared to merge a speculative fix without a repro or test coverage, though. Looking into OJ, I still can't repro:
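Running off a branch via the Gemfile might look like this (the fork owner and branch name here are placeholders, not a real recommendation):

```ruby
# Gemfile: point simplecov at a fork/branch until the fix is merged.
gem "simplecov", github: "your-fork/simplecov", branch: "fix-resultset-race", require: false
```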
What version of oj and ruby are you using? How are you setting it up? Are you changing compat modes? Can you post code showing a repro?
Ensure we cache and synchronize appropriately so we don't merge more than once or do unsafe reads of the resultset. This has the added benefit of reducing the runtime of at_exit hooks by over 66%, since we now do less than 1/3 of the work. See https://gist.github.com/jenseng/62465f674f8c02de09ef776f23d4dca4 for a simple repro script.

Basically there are 3 main problems when using merging:

1. `SimpleCov::ResultMerger.stored_data` doesn't synchronize its read, so it can see an empty file if another process is writing it.
2. `SimpleCov::ResultMerger.resultset` calls `.stored_data` twice and never caches the result.
3. `SimpleCov.result` doesn't cache the `@result`, so the default `at_exit` behavior causes it to store and merge 3 times.

Due to 1. and 2., this is extra bad: the formatter can miss out on coverage data and/or get the wrong values for covered percentages. Furthermore, if you use OJ, `JSON.parse("") -> nil`, which means `.resultset` can be nil, so this race condition causes exceptions as seen in simplecov-ruby#406.

In addition to fixing the race condition, also add `|| {}` to make `.stored_data` a bit more robust and protect against an empty .resultset.json. Also pin rubocop even more precisely, as 0.48.1+ fails on existing code.
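The caching-and-locking approach the commit message describes could be sketched like this. This is illustrative and simplified, not SimpleCov's shipped patch: the method names mirror `ResultMerger`, but the lock-file mechanics and the `length < 2` emptiness check are assumptions for the sketch.

```ruby
require "json"

module ResultMergerSketch
  module_function

  def resultset_path
    ".resultset.json"
  end

  # Guard reads with the same exclusive lock a writer would hold, so a
  # concurrent store_result can't hand us a half-written file.
  def synchronize_resultset
    File.open("#{resultset_path}.lock", File::RDWR | File::CREAT) do |f|
      f.flock(File::LOCK_EX)
      yield
    end
  end

  def stored_data
    synchronize_resultset do
      return unless File.exist?(resultset_path)
      data = File.read(resultset_path)
      return if data.nil? || data.length < 2 # treat empty/partial as absent
      data
    end
  end

  # Read and parse exactly once, cache the result, and fall back to {}
  # so an empty file can never surface as nil (the crash in #406).
  def resultset
    @resultset ||= begin
      data = stored_data
      data ? (JSON.parse(data) || {}) : {}
    end
  end
end
```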
Hello,
Using latest ruby/simplecov/jenkins.
On our Jenkins build system we sometimes see the following error when all specs have passed and simplecov starts building its report:
Looking at the code, I see that `resultset` can return `nil` when `stored_data` returns `nil`. For some reason the data is not yet written to the file, so simplecov fails. I find this failure a bit ugly. Would it be preferable to always return a Hash, even when reading the JSON data fails? In result_merger.rb:
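The snippet that originally followed was lost in the scrape; the guard being suggested might look roughly like this (a hedged sketch with hypothetical names, not the actual result_merger.rb code; `stored_data` is stubbed to simulate the race):

```ruby
require "json"

module ResultsetGuard
  module_function

  # Simulates the race: the resultset file hasn't been written yet.
  def stored_data
    nil
  end

  # Always hand formatters a Hash, so a missing or empty resultset yields
  # an empty (but valid) report instead of a crash on nil.
  def resultset
    data = stored_data
    return {} if data.nil? || data.strip.empty?
    JSON.parse(data) || {} # `|| {}` also guards OJ, where JSON.parse("") is nil
  end
end
```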
This would prevent simplecov from raising an error, but it won't generate any reporting. I'm willing to make a PR to fix this issue. I believe a race condition might exist: for large projects, the system is not completely done writing to the file, but simplecov already starts reading it, resulting in an empty file. I don't know how to solve that issue.
Regards