GenomicsDB Import/Select fails on record containing a spanning deletion allele #4716
Comments
@kgururaj Could you please make this ticket your top priority?
@cwhelan @ldgauthier @kgururaj informs me that he's already fixed this in GenomicsDB and will do a release soon. Once it's out, there will be a PR later this week to update GATK to the latest version.
droazen pushed a commit that referenced this issue on Jul 6, 2018:
… sites-only query support, and bug fixes (#4645)

This PR addresses the changes required to use the latest version of GenomicsDB, which exposes new functionality:

* Multi-interval import and query support:
  * Multiple arrays (directories) are created in a single workspace, one per interval. So, if you import the intervals ("chr1", [1, 100M]) and ("chr2", [1, 100M]), you end up with two directories/arrays in the workspace named chr1$1$100M and chr2$1$100M. The array names depend on the partition bounds.
  * During the read phase, the user supplies only the workspace. The array names are obtained by scanning the entries in the workspace, and only the right arrays are read. For example, a query for ("chr2", [50, 50M]) touches only the second array. In the previous version of the tool, the array name was a constant: genomicsdb_array. The new version is backward compatible with respect to reads: if a directory named genomicsdb_array is found in the workspace directory, it is passed as the array for the GenomicsDBFeatureReader; otherwise the array names are generated from the directory entry names.
* Parallel import based on chromosome intervals:
  * The number of threads to use can be specified as an integer argument to the executeImport call. If no argument is specified, the number of threads is determined by Java's ForkJoinPool (typically equal to the number of cores in the system).
  * The maximum number of intervals to import in parallel can be controlled with the command-line argument --max-num-intervals-to-import-in-parallel (default 1). Note that increasing parallelism increases the number of FeatureReaders opened to feed data to the importer: with N threads and a batch size of B, N*B feature readers will be open.
* Protobuf-based API for import and read (#3688, #2687)
* Option to produce the GT field
* Option to produce GT for spanning deletions based on the minimum PL value
* Doesn't support #4541 or #3689 yet (planned for the next version)
* Bug fixes:
  * Fix for #4716
  * More error messages
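The array naming and selection scheme described above can be sketched as follows. This is a hypothetical reconstruction for illustration only, not GenomicsDB's actual code; the class and method names (`ArrayNaming`, `arrayName`, `arraysForQuery`) are invented, and numeric bounds are written out in full where the text abbreviates them as "100M".

```java
import java.util.ArrayList;
import java.util.List;

public class ArrayNaming {
    // Build the per-partition array name: <contig>$<begin>$<end>,
    // as in the chr1$1$100M example above.
    static String arrayName(String contig, long begin, long end) {
        return contig + "$" + begin + "$" + end;
    }

    // Read phase: given the array names recovered by scanning the
    // workspace directory, return those whose partition bounds
    // overlap the query interval.
    static List<String> arraysForQuery(List<String> workspaceArrays,
                                       String contig, long qBegin, long qEnd) {
        List<String> hits = new ArrayList<>();
        for (String name : workspaceArrays) {
            String[] parts = name.split("\\$");
            if (parts.length != 3 || !parts[0].equals(contig)) continue;
            long begin = Long.parseLong(parts[1]);
            long end = Long.parseLong(parts[2]);
            // Standard closed-interval overlap test.
            if (qBegin <= end && qEnd >= begin) hits.add(name);
        }
        return hits;
    }

    public static void main(String[] args) {
        List<String> workspace = List.of(
                arrayName("chr1", 1, 100_000_000L),
                arrayName("chr2", 1, 100_000_000L));
        // A query for ("chr2", [50, 50M]) selects only the second array.
        System.out.println(arraysForQuery(workspace, "chr2", 50, 50_000_000L));
    }
}
```

Under this scheme, the workspace itself carries enough information to route a read, which is why the read phase no longer needs a fixed array name.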
Fixed in #4645
Reading from GenomicsDB fails when some records containing spanning deletion alleles are imported into a workspace. Not all such records cause the failure; I haven't been able to determine which specific properties of a record trigger the error. Here are the contents (minus header) of a VCF file that causes the error:
Steps to reproduce:
Error:
This issue was discovered while trying to add spanning-deletion genotyping support to HaplotypeCaller for #2960; resolving it appears to be necessary to support the fix for that issue.