
GATK CNV on WGS. #2858

Closed · 1 task · droazen opened this issue Jun 5, 2017 · 43 comments · Fixed by #3913

@droazen (Contributor) commented Jun 5, 2017

@LeeTL1220 commented on Mon May 23 2016

  • File issues for evaluations (and make sure each evaluation is described, including ground truth). Make sure that there is a milestone and assignee for each.

Need to show some measure of performance for the WGS evaluation (GATK CNV only, not ACNV).

@samuelklee (Contributor) commented Jul 9, 2017

Reviving this. This will essentially be a major refactor/rewrite of CreatePanelOfNormals to make it scalable enough to handle WGS.

  • CombineReadCounts is too cumbersome for large matrices. Change CreatePanelOfNormals to take in multiple -I instead.
  • Rename NormalizeSomaticReadCounts to DenoiseReadCounts and require integer read counts as input. These will still be backed by a ReadCountCollection until @asmirnov239's changes are in.
  • Remove optional outputs (factor-normalized and beta-hats) from DenoiseReadCounts. For now, TN and PTN output will remain in the same format (log2) to maintain compatibility with downstream tools.
  • Maximum number of eigensamples K to retain in the PoN is specified; the smaller of this or the number of samples remaining after filtering is used. The number actually used to denoise can be specified in DenoiseReadCounts. If we are going to spend energy computing K eigensamples, there is no reason we shouldn't expose all of them in the PoN, even if we don't want to use all of them for denoising. (Also, the current SVD utility methods do not allow for specification of K < N when performing SVD on an MxN matrix, even though the backend implementations that are called do allow for this; this is terrible. In any case, randomized SVD should be much faster than the currently available implementations, even when K = N).
  • Rename CreatePanelOfNormals to CreateReadCountPanelOfNormals
  • Refer to "targets" as intervals. See #3246 (Eliminate Targets from new/legacy CNV tools).
  • Remove QC.
  • Refer to proportional coverage as fractional coverage.
  • Perform optional GC-bias correction internally if annotated intervals are passed as input.
  • Make the standardization process for panel and case samples identical. Currently, a sample mean is taken at one point in the PoN standardization process, while a sample median is taken in the case standardization process. (A rough sketch of the intended standardization/denoising flow follows this list.)
  • HDF5 PoN will store version number, all integer read counts, all/panel intervals, all/panel sample paths/names, all annotated intervals (if GC-bias correction was performed), fractional-coverage medians for all intervals, relevant SVD results (eigenvalues and left-singular vectors) for the specified number of eigensamples, and command line.
  • In a future iteration, we could allow an input PoN to be the source of read counts. This would allow iteration on filter parameters without needing output from CombineReadCounts. The code should easily allow for this.
  • ReadCountCollection is too memory intensive; minimize use in DenoiseReadCounts when writing results.
  • Optimize and clean up HDF5 writing (e.g., transpose skinny matrices, make writing of intervals as a string matrix faster).
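To make the intended flow concrete, here is a rough numpy sketch of the standardization/denoising scheme described above. Names are placeholders, all filtering/imputation/truncation/GC steps are omitted, and the real tools operate on ReadCountCollection/HDF5 data rather than bare arrays:

```python
import numpy as np

def standardize(counts):
    """Raw integer counts -> standardized log2 fractional coverage.

    counts: (num_samples, num_intervals) array of integer read counts.
    Filtering, imputation, truncation, and GC correction are omitted here.
    """
    fractional = counts / counts.sum(axis=1, keepdims=True)        # fractional coverage
    fractional /= np.median(fractional, axis=1, keepdims=True)     # per-sample median centering
    return np.log2(fractional)

def create_panel(panel_counts, num_eigensamples):
    """SVD of the standardized panel; keep the top-K singular vectors as 'eigensamples'."""
    standardized = standardize(panel_counts)
    _, _, vt = np.linalg.svd(standardized, full_matrices=False)
    return vt[:num_eigensamples]                                   # (K, num_intervals)

def denoise(case_counts, eigensamples):
    """Subtract the projection of the standardized case sample(s) onto the eigensamples."""
    standardized = standardize(case_counts)                        # 'PTN'-like output
    return standardized - (standardized @ eigensamples.T) @ eigensamples   # 'TN'-like output
```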

samuelklee changed the title from "WGS workflow evaluation" to "GATK CNV on WGS." on Jul 10, 2017
@samuelklee (Contributor)

@vruano may also have some insight on the PCOV vs RAW issue.

@samuelklee (Contributor) commented Jul 11, 2017

FYI @sooheelee I pointed you at the docs recently, but realized they're slightly out of date. CreatePanelOfNormals currently expects proportional coverage, upon which it initially performs a series of filtering steps. The docs state that these steps are performed on integer counts, which is incorrect. The fact that filtering yields different post-filter coverage for the two types of input ultimately results in slightly different denoising. Not a big deal, but we should decide what the actual method should be doing and clarify in the code/docs.

@sooheelee (Contributor)

I'll keep an eye out for the "integer counts" passage in the docs and change this to reflect our recommendations.

@samuelklee (Contributor) commented Jul 19, 2017

Can someone explain this pseudoinverse business to me? We standardize the counts to give a matrix C, calculate the SVD to get the left-singular matrix U and the pseudoinverse of C, but then perform another SVD on U to get the pseudoinverse of U. Why do we need these pseudoinverses? @LeeTL1220 @davidbenjamin?

@samuelklee (Contributor)

The memory footprint used by ReadCountCollectionUtils.parse is extremely large compared to the size of the file. I suspect there may be memory leaks in TableReader.

Speed is also an issue; pandas is 4x faster (taking ~10s on my desktop) on a 300bp-bin WGS read-count file with ~11.5 million rows if all columns are read, and 10x faster if the Target name field (which is pretty much useless) is ignored.

(Also, for some reason, the speed difference is much greater when running within my Ubuntu VM on my Broad laptop; what takes pandas ~15s to load takes ReadCountCollectionUtils.parse so long that I just end up killing it...not sure why this is?)
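For reference, the pandas comparison above was along these lines (a sketch; the column names are placeholders, not the exact read-count TSV schema):

```python
import pandas as pd

# Reading all columns of a ~11.5M-row read-count TSV.
counts = pd.read_csv("sample.readCounts.tsv", sep="\t", comment="#")

# Roughly 10x faster than ReadCountCollectionUtils.parse when the (mostly useless)
# target-name column is skipped and dtypes are fixed up front.
counts = pd.read_csv(
    "sample.readCounts.tsv",
    sep="\t",
    comment="#",
    usecols=["contig", "start", "stop", "count"],                 # placeholder column names
    dtype={"contig": str, "start": int, "stop": int, "count": int},
)
```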

@vruano (Contributor) commented Jul 19, 2017

TableReader is a dumb-dumb, one-record-at-a-time reader, so it shouldn't suffer from memory leaks.

In contrast, the parse() method uses an incremental "buffer" that accumulates the counts until the end, when the actual returned table is created. The reason for this is to keep the ReadCountCollection class immutable.

So at some point you need at least twice the memory compared to a solution that simply used the returned object as the accumulator.

@samuelklee (Contributor)

You might be right---and I think it's worse, in that the ReadCountCollection argument validation (for uniqueness of targets, etc.) also adds to the overhead.

@vruano (Contributor) commented Jul 19, 2017

By default we aim for correctness over efficiency... Given what you're saying about the target names being useless, perhaps you should consider generalizing ReadCountCollection so that in some sub-implementations the targets are implicit, based on their index in the collection; tools that don't care about target names would then go much faster.

So RCC could be an interface rather than a concrete class: one child class would implement the current RCC behavior, whereas a new child class would implement a more efficient solution for whole-genome, fixed-size interval collections.

@vruano (Contributor) commented Jul 19, 2017

As for TableReader... we could make it a bit more efficient by reusing DataLine instances. Currently it creates one per input line, but the same instance could be reused by loading each new line's data onto it before calling createRecord.

We are delegating to an external library to parse the lines into String[] arrays (one element per cell). We could save on that by implementing it ourselves more efficiently, but of course that would take up some of someone's precious development time...

In any case, I don't know what the gain would be, considering that these operations happen close to I/O, which should typically dominate the time cost.

The DataLine reuse may save some memory churning and wouldn't take long to code.

@samuelklee (Contributor) commented Jul 21, 2017

First cut at a rewrite seems to be working fine and is much, much leaner.

Building a small PoN with 4 simulated normals with 5M bins each, CombineReadCounts took ~1 min, CreatePanelOfNormals (with no QC) took ~4.5 minutes (although ~1 minute of this is writing target weights, which I haven't added to the new version yet) and generated a 2.7GB PoN, and NormalizeSomaticReadCounts took ~8 minutes (~7.5 minutes of which was spent composing/writing results, thanks to overhead from ReadCountCollection).

In comparison, the new CreateReadCountPanelOfNormals took ~1 minute (which includes combining read-count files, which takes ~30s of I/O) and generated a 520MB PoN, and DenoiseReadCounts took ~30s (~10s of which was composing/writing results, as we are still forced to generate two ReadCountCollections).

Resulting PTN and TN copy ratios were identical down to 1E-16 levels. Differences are only due to removing the unnecessary pseudoinverse computation. Results after filtering and before SVD are identical, despite the code being rewritten from scratch to be more memory efficient (e.g., filtering is performed in place)---phew!

If we stored read counts as HDF5 instead of as plain text, this would make things much faster. Perhaps it would be best to make TSV an optional output of the new coverage collection tool. As a bonus, it would then only take a little bit more code to allow previous PoNs to be provided via -I as an additional source of read counts.

Remaining TODO's:

  • Allow DenoiseReadCounts to be run without a PoN. This will just perform standardization and optional GC correction. This gives us the ability to run the case sample pipeline without a PoN, which will give users more options and might be good enough if the data is not too noisy.
  • Actually, I'm not sure why we perform SVD on the intervals x samples matrix and take the left-singular vectors. Typically, SVD is performed on the samples x intervals matrix and the right-singular vectors are taken, which saves the extra computation needed to calculate the left-singular vectors. (EDIT: Actually, it looks like Spark's SVD is faster on tall and skinny matrices, which might be due to the fact that the underlying implementation calls Fortran code. I still think that representing samples as row vectors has some benefits, so I've changed things to reflect this; I now just take a transpose before performing the SVD, so that we still operate on the same intervals x samples matrix.) This will also save us some transposing, which we do anyway to make HDF5 writes faster.
  • Change HDF5 matrix writing to allow matrices with NxM > MAX_INT, which can be done naively by chunking and writing to multiple HDF5 subdirectories. This will allow for smaller bin sizes. (EDIT: I implemented this in a way that allows one to set the maximum number of values allowed per chunk, so that heap usage can be controlled, but the downside is that this translates into a corresponding limit on the number of columns (i.e., intervals). On the other hand, you could theoretically crank this number up to Integer.MAX_VALUE, as long as you set -Xmx high enough... In practice, it's very unlikely that we'll need to go to bins smaller than a read length.)
  • Check that CreatePanelOfNormals works correctly on a Spark cluster. Implement randomized SVD, which should give better performance on large matrices; see https://arxiv.org/pdf/1007.5510.pdf and https://research.fb.com/fast-randomized-svd/. For now, I'll require that the coverage matrix fits in RAM, but more sophisticated versions of the algorithm could be implemented in the future. (A rough numpy sketch of randomized SVD follows this list.)
  • Update methods doc. Note that some of the CNV section is out of date and incorrect. In particular, we have been taking in PCOV as input to CreatePanelOfNormals for some time now, but the doc states that we take integer read counts. This already yields different results by the first filtering step (on intervals by interval median). @LeeTL1220 @davidbenjamin what is the "official ReCapSeg" behavior, and do we want to keep the current behavior? In general, I think all of the standardization (i.e., filtering/imputation/truncation/transformation) steps could stand some revisiting.
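As a rough numpy sketch of the randomized SVD mentioned in that list (following the Halko et al. reference above; an illustration of the algorithm only, not the implementation that would land in GATK):

```python
import numpy as np

def randomized_svd(matrix, num_components, num_oversamples=10, num_power_iterations=2, seed=0):
    """Approximate truncated SVD of an (m x n) matrix via random projection.

    Returns U (m x k), s (k,), Vt (k x n) for k = num_components.
    """
    rng = np.random.default_rng(seed)
    m, n = matrix.shape
    k = num_components + num_oversamples

    # Sample the range of the matrix with a Gaussian test matrix.
    omega = rng.standard_normal((n, k))
    sample = matrix @ omega

    # A few power iterations sharpen the approximation when singular values decay slowly.
    for _ in range(num_power_iterations):
        sample = matrix @ (matrix.T @ sample)

    q, _ = np.linalg.qr(sample)              # orthonormal basis for the sampled range
    b = q.T @ matrix                         # small (k x n) matrix
    u_small, s, vt = np.linalg.svd(b, full_matrices=False)
    u = q @ u_small
    return u[:, :num_components], s[:num_components], vt[:num_components]
```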

Evaluation:

  • Revisit standardization procedure by checking with simulated data. We should make sure that the centering of the data does not rescale the true copy ratio.
  • Investigate the effect of keeping duplicates. I am still not sure why we do this, and it may have a more drastic impact on WGS data. Turns out we don't keep duplicates for WGS; see #3367 (Clear up keep duplicates confusion in CNV tools).
  • Check that GC-bias-correction+PCA and PCA-only perform comparably, even at small bin sizes (~300bp). From what I've seen, this is true for larger bin sizes (~3kbp), so explicit GC-bias correction may not be necessary. (That is, even at these (purportedly) large bin sizes, the effect of the read-based GC-bias correction is obvious for those samples where it is important. However, the end result is not very different from PCA-only denoising with no GC-bias correction performed.) A simplified sketch of the explicit correction follows this list.
  • Check that changing the CBS alpha parameter sufficiently reduces hypersegmentation. It looks like the hybrid p-value calculation in DNAcopy is not accurate enough to handle WGS-size data. (Also, it's relatively slow, taking ~30 minutes on ~10M intervals.) Even if I set alpha to 0, I still get a ridiculous number of segments! So I think it's finally time to scrap CBS. I'll look into other R segmentation packages that might give us a quick drop-in solution, but we may want to roll our own simple algorithm (which we will scrap anyway once the coverage model is in for somatic). I've implemented a fast kernel-segmentation method that seems very promising; see below.
  • Investigate performance vs. CGA ReCapSeg pipeline on THCA samples.
  • Investigate concordance with Genome STRiP.
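For context on the GC bullet above, the explicit GC-bias correction being compared against PCA-only denoising is conceptually a binned-median normalization along these lines (a simplification with placeholder parameters, not the exact GATK implementation):

```python
import numpy as np

def correct_gc_bias(coverage, gc_content, num_bins=101):
    """Divide each interval's coverage by the median coverage of intervals with
    similar GC content (a simple binned-median correction).

    coverage:   (num_intervals,) fractional coverage for one sample
    gc_content: (num_intervals,) GC fraction in [0, 1] from annotated intervals
    """
    coverage = np.asarray(coverage, dtype=float)
    gc_content = np.asarray(gc_content, dtype=float)
    bin_edges = np.linspace(0.0, 1.0, num_bins + 1)
    bin_index = np.clip(np.digitize(gc_content, bin_edges) - 1, 0, num_bins - 1)
    corrected = coverage.copy()
    for b in range(num_bins):
        mask = bin_index == b
        if mask.any():
            bin_median = np.median(coverage[mask])
            if bin_median > 0:
                corrected[mask] = coverage[mask] / bin_median
    return corrected
```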

@samuelklee (Contributor) commented Jul 25, 2017

I created a panel of normals from 90 WGS TCGA samples with 250bp (~11.5M) bins, which took ~57 minutes total and produced an 11GB PoN (this file includes all of the input read counts---which take up 20GB as a combined TSV file and a whopping 63GB as individual TSV files---as well as the eigenvectors, filtering results, etc.). The run can be found in /dsde/working/slee/wgs-pon-test/tieout/no-gc. It completed successfully with -Xmx32G (in comparison, CreatePanelOfNormals crashed after 40 minutes with -Xmx128G). The runtime breakdown was as follows:

  • ~45 minutes simply from reading the 90 TSV read-count files in serial. Hopefully #3349 (Have SparkGenomeReadCounts produce an HDF5 for coverage in addition to the TSV and make it readable during PoN creation) should greatly speed this up. (In comparison, CombineReadCounts reading 10 files in parallel at a time took ~100 minutes to create the aforementioned 20GB combined TSV file, creating 25+GB of temp files along the way.)

  • ~5 minutes from the preprocessing and filtering steps. We could probably further optimize some of this code in terms of speed and heap usage. (I had to throw in a call to System.gc() to avoid an OOM with -Xmx32G, which I encountered in my first attempt at the run...)

  • ~5 minutes from performing the SVD of the post-filtering 8643028 x 86 matrix, maintaining 30 eigensamples. I could write a quick implementation of randomized SVD, which I think could bring this down a bit (the scikit-learn implementation takes <2 minutes on a 10M x 100 matrix), but this can probably wait.

Clearly making I/O faster and more space efficient is the highest priority. Luckily it's also low hanging fruit. The 8643028 x 30 matrix of eigenvectors takes <2 minutes to read from HDF5 when the WGS PoN is used in DenoiseReadCounts, which gives us a rough idea of how long it should take to read in the original ~11.5M x 90 counts from HDF5. So once #3349 is in, then I think that a ~15 minute single-core WGS PoN could easily be viable.
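To make the I/O gap concrete, reading a counts matrix from HDF5 is essentially a single bulk read rather than millions of parsed lines. An h5py illustration (the dataset layout here is made up, not the actual GATK format):

```python
import h5py
import numpy as np

num_samples, num_intervals = 10, 1_000_000
counts = np.random.poisson(30.0, size=(num_samples, num_intervals)).astype(np.int32)

# Write one chunked, compressed dataset (hypothetical dataset path).
with h5py.File("read_counts.hdf5", "w") as f:
    f.create_dataset("read_counts/values", data=counts,
                     chunks=(1, 100_000), compression="gzip")

# Read it back: a single bulk read, with none of the per-line parsing and
# validation overhead of the TSV path.
with h5py.File("read_counts.hdf5", "r") as f:
    counts = f["read_counts/values"][()]
```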

I believe that a PoN on the order of this size will be all that is required for WGS denoising, if it is not already overkill. To go bigger by more than an order of magnitude, we'll have to go out of core, which will require more substantial changes to the code. But since the real culprit responsible for hypersegmentation is CBS, rather than insufficient denoising, I'd rather focus on finding a viable segmentation alternative.

@samuelklee (Contributor)

With HDF5 read-count file input enabled by #3349, a WGS PoN with 39 samples x ~1M bins (i.e., 3kb) took <1 minute to build. Thanks @LeeTL1220!

@LeeTL1220 (Contributor) commented Jul 27, 2017 via email

@LeeTL1220 (Contributor)

Sorry, I did not see the previous comment.

@samuelklee (Contributor) commented Jul 27, 2017

Note that the number of samples and the bin size are different from those in that comment.

@LeeTL1220 (Contributor)

@samuelklee I certainly agree about focusing on CBS alternative.

@samuelklee (Contributor) commented Aug 9, 2017

I've implemented the Gaussian-kernel binary-segmentation algorithm from this paper: https://hal.inria.fr/hal-01413230/document. The method uses a low-rank approximation to the kernel to obtain an approximate segmentation with linear complexity in time and space. In practice, performance is quite impressive!

The implementation is relatively straightforward, clocking in at ~100 lines of python. Time complexity is O(log(maximum number of segments) * number of data points) and space complexity is O(number of data points * dimension of the kernel approximation), which makes use for WGS feasible.

Segmentation of 10^6 simulated points with 100 segments takes about a minute and tends to recover segments accurately. Compare this with CBS, where segmentation of a WGS sample with ~700k points takes ~10 minutes---and note that these ~700k points are split up amongst ~20 chromosomes to start!
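For concreteness, the core of the approach looks something like the sketch below. It substitutes random Fourier features for the paper's low-rank kernel approximation and omits the bookkeeping needed for strictly linear-time splitting, so treat it as an illustration rather than my actual implementation:

```python
import numpy as np

def gaussian_random_features(data, num_features=64, bandwidth=1.0, seed=0):
    """Random Fourier features approximating a Gaussian kernel (a stand-in for
    the paper's low-rank approximation)."""
    rng = np.random.default_rng(seed)
    data = np.atleast_2d(np.asarray(data, dtype=float).T).T      # (num_points, num_dimensions)
    weights = rng.standard_normal((data.shape[1], num_features)) / bandwidth
    offsets = rng.uniform(0.0, 2.0 * np.pi, num_features)
    return np.sqrt(2.0 / num_features) * np.cos(data @ weights + offsets)

def kernel_binary_segmentation(data, max_num_changepoints, **feature_kwargs):
    """Greedy binary segmentation using the kernel least-squares cost."""
    features = gaussian_random_features(data, **feature_kwargs)
    cum = np.vstack([np.zeros(features.shape[1]), np.cumsum(features, axis=0)])
    cum_sq = np.concatenate([[0.0], np.cumsum(np.sum(features**2, axis=1))])

    def cost(start, end):   # within-segment scatter of [start, end) in feature space
        if end <= start:
            return 0.0
        seg_sum = cum[end] - cum[start]
        return (cum_sq[end] - cum_sq[start]) - seg_sum @ seg_sum / (end - start)

    changepoints, segments = [], [(0, len(features))]
    for _ in range(max_num_changepoints):
        best = None
        for start, end in segments:
            base = cost(start, end)
            for split in range(start + 1, end):
                gain = base - cost(start, split) - cost(split, end)
                if best is None or gain > best[0]:
                    best = (gain, start, split, end)
        if best is None or best[0] <= 0.0:
            break
        _, start, split, end = best
        segments.remove((start, end))
        segments += [(start, split), (split, end)]
        changepoints.append(split)
    return sorted(changepoints)
```

Joint segmentation by adding kernels corresponds here to simply concatenating the feature maps of the individual data types.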

There are a small number of parameters that can affect the segmentation, but we can probably find good defaults in practice. What's also nice is that this method can find changepoints in moments of the distribution other than the mean, which means that it can straightforwardly be used for alternate-allele fraction segmentation. For example, all segments were recovered in the following simulated multimodal data, even though all of the segments have zero mean:

[Figure: "baf" (simulated zero-mean multimodal data with all segments recovered)]

Replacing the SNP segmentation in ACNV (which performs expensive maximum-likelihood estimation of the allele-fraction model) with this method would give a significant speedup there.

Joint segmentation is straightforward and is simply given by addition of the kernels. However, complete data is still required.

Given such a fast heuristic, I'm more amenable to augmenting this method with additional heuristics to clean up or improve the segmentation if necessary. We could also use it to initialize our more sophisticated HMM models.

@LeeTL1220 @mbabadi @davidbenjamin I'd be interested to hear your thoughts, if you have any.

@LeeTL1220 (Contributor)

@samuelklee This looks really good and the paper is a great find. I am unclear on a couple of things.

  • How accurate was CBS on the 700k points? 100 ground-truth segments became how many in CBS?
  • I'm a little unclear on how we would derive a segment mean for this approach. If I am missing something obvious, then I'd ask: how do the "segment means" from this approach compare to those from CBS?
  • Would it be easier to make a version that only addressed total copy ratio segments (GATK CNV) and then implement joint segmentation later?
  • What is the name of this approach? "KernSeg"?

@mbabadi (Contributor) commented Aug 9, 2017

@samuelklee this looks awesome!

  • Which variant of the algorithm did you implement? The paper lists several.
  • I haven't read the paper in detail yet, but is it possible to choose a conservatively large number of possible breakpoints and then filter out bad breakpoints, possibly based on the rapid decline of the changepoint probability? That is, does the algorithm naturally produce changepoint probabilities?
  • Is it possible to throw in additional change points incrementally, without doing extra work, until a certain criterion is met? (see above)

@samuelklee (Contributor) commented Aug 9, 2017

> How accurate was CBS on the 700k points? 100 ground-truth segments became how many in CBS?

The 700k came from a real WGS sample (as opposed to the simulated data with 100 segments and 1M points), so there is no ground truth there. I was just trying to get an idea of real-world runtime for CBS.

When I run CBS on the simulated data, the results are pretty much the same (both miss a changepoint where the adjacent segment means are close by chance; the runtime is also ~1.2 minutes in both cases):

CBS: [Figure: "1m-cbs"]

Kernel segmentation: [Figure: "1m"]

However, compare how CBS does on the simulated zero-mean multimodal data (terrible, as expected!):
[Figure: "baf-cbs"]

> I'm a little unclear on how we would derive a segment mean for this approach. If I am missing something obvious, then I'd ask: how do the "segment means" from this approach compare to those from CBS?

I'm pretty sure that the segment means in CBS are literally just that: the means of the points in each segment. So if the breakpoints are good, then the segment means are the same.

> Would it be easier to make a version that only addressed total copy ratio segments (GATK CNV) and then implement joint segmentation later?

Sure, but joint segmentation is much cheaper and easier with this method than our previous probabilistic approaches. Even SNP segmentation will be much cheaper.

> What is the name of this approach? "KernSeg"?

Not sure... I couldn't find an R package, although an R/C implementation is mentioned in the paper. But the Python implementation is straightforward, and a pure Java implementation should not be too bad. My Python implementation uses some Cythonized NumPy methods, but I think equivalent implementations of these methods should be relatively fast in pure Java as well.

> Which variant of the algorithm did you implement? The paper lists several.

I implemented what they call ApproxKSeg. It's an approximate version that combines binary segmentation with the low-rank approximation to the Gaussian kernel.

> I haven't read the paper in detail yet, but is it possible to choose a conservatively large number of possible breakpoints and then filter out bad breakpoints, possibly based on the rapid decline of the changepoint probability? That is, does the algorithm naturally produce changepoint probabilities?

Yes, you can oversegment and then choose which breakpoints to retain. However, there are no proper changepoint probabilities, only changepoint costs. Adding a penalty term based on the number of changepoints seems to perform relatively well in simple tests, but one could certainly devise other ways to filter changepoints (some of which could yield probabilities, if you are willing to assume a probabilistic model). I think we should just think of this as a fast, heuristic, non-parametric method for finding breakpoints in multidimensional data.
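Concretely, the simple penalized selection mentioned above is along these lines (the penalty form and value are placeholders, not the paper's):

```python
import numpy as np

def select_num_changepoints(costs, penalty_per_changepoint):
    """costs[k] = total segmentation cost using the k best changepoints (k = 0..K).

    Pick the k that minimizes cost plus a per-changepoint penalty; the penalty
    form here is a placeholder, not the one from the paper.
    """
    penalized = np.asarray(costs) + penalty_per_changepoint * np.arange(len(costs))
    return int(np.argmin(penalized))
```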

> Is it possible to throw in additional change points incrementally, without doing extra work, until a certain criterion is met? (see above)

The version I implemented adds changepoints via binary segmentation. The time complexity required to split a segment is linear in the number of points contained in the segment, although some care must be taken in the implementation to ensure this.

@samuelklee (Contributor)

I naively cythonized part of my python implementation to use memoryviews, resulting in a ~60x speedup!

The simulated data with 100 segments and 1M points ran in 1.3 seconds!!

THIS IS BANANAS!!!

@sooheelee (Contributor) commented Oct 2, 2017

Yes, I'd like to test out the new tools in the workflow. All of these changes seem major and I'm excited to see how the workflow performs. I'll stop by to chat with you.

@samuelklee (Contributor) commented Oct 7, 2017

The HCC1143 WES sample from the tutorial data runs in < 1.5 minutes and looks pretty good!

Copy-ratio only (using a PoN with 40 samples):
[Figure: "hcc1143_t_clean no-ac modeled"]

Matched-normal mode, allele-fraction only (note that there are not very many hets in this data, since the common sites list we used was relatively small):
[Figure: "hcc1143_t_clean matched-normal no-cr modeled"]

Matched-normal mode (copy ratio + allele fraction):
[Figure: "hcc1143_t_clean matched-normal modeled"]

@sooheelee (Contributor)

Thanks @samuelklee!

@samuelklee (Contributor) commented Oct 7, 2017

TCGA WGS at 250bp bins (~8.5M copy ratios after filtering during the denoising step, from ~11.5M read counts; ~840k hets after filtering on total counts <30 and homozygosity, from ~5.5M allelic counts) took ~27 minutes to run in matched-normal mode. Results look very reasonable. You can see the result at /dsde/working/slee/CNV-1.5_TCGA_WGS/TCGA-05-4389-01A-01D-1931-08.matched-normal.modeled.png

@sooheelee (Contributor)

Here's a first-pass draft of the figure for the ASHG poster. For the good/bad PoN illustration at the bottom, I used figures I had originally generated with what is now the older workflow.


[Figure: "somatic_cnv" (ASHG poster draft)]

@samuelklee (Contributor)

Just noticed an updated version of the kernel segmentation paper: https://hal.inria.fr/hal-01413230v2/document. The original version referenced in this issue can be found at https://hal.inria.fr/hal-01413230v1/document.

A couple of noteworthy changes include the form of the penalty function and the fact that they use an estimate of the variance to normalize the data and set the kernel variance to unity (they also symmetrize the BAF, which I guess is required for the latter). I don't think either change will make a huge difference to the final segmentation.
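As a rough illustration of those two changes (my own guesses at the details, not the paper's exact recipe):

```python
import numpy as np

def normalize_for_unit_variance_kernel(values):
    """Scale the data by a robust estimate of its standard deviation so that a
    unit-variance Gaussian kernel can be used (a guess at the paper's scheme)."""
    values = np.asarray(values, dtype=float)
    mad = np.median(np.abs(values - np.median(values)))   # robust scale estimate
    return values / (1.4826 * mad)

def symmetrize_baf(baf):
    """Fold B-allele fractions around 0.5 (one reading of 'symmetrize the BAF')."""
    return 0.5 + np.abs(np.asarray(baf, dtype=float) - 0.5)
```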
