Merging similar UMIs? #5

Open

tomsing1 opened this issue Feb 28, 2016 · 2 comments

@tomsing1

I was wondering if you had considered merging UMIs that might be erroneous copies, e.g. as outlined in this blog post.

The use of UMIs [...] would work perfectly if it were not for base-calling errors, which erroneously create sequencing reads with the same genomic coordinates and UMIs that are identical except for the base at which the error occurred.

If I understand the current code correctly, it considers two barcodes as separate UMIs even if they differ by only one base. Would it be useful to merge reads into the same UMI if they are 'nearly' identical, e.g. based on Hamming distance?
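
For concreteness, here is a minimal sketch of the comparison I have in mind (a hypothetical helper, not code from this repo):

```python
def hamming(a: str, b: str) -> int:
    """Number of mismatching positions between two equal-length UMIs."""
    return sum(x != y for x, y in zip(a, b))

# Under a "within one mismatch" rule, the first pair would be merged:
hamming("ATGC", "ATGG")  # 1 -> treat as the same UMI
hamming("ATGC", "TTGG")  # 2 -> keep as separate UMIs
```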

@vals
Owner

vals commented Feb 29, 2016

I have thought about that a bit. The annoying part is how to do this in practice.

Say we allow a Hamming distance of 1 as error. One way of implementing this would be, on observing a (UMI, tag) pair, to check the already observed pairs for one whose UMI differs by only 1. Now, instead of the exact hash-table lookup you do when counting uniques, you have to go through all the observed data every time, which would need a more thought-out data structure than is currently there.

Additionally, this approach is order-dependent, which makes it not so good. Imagine you have UMIs A, B, and C, and say d(A, B) = 1, d(B, C) = 1, but d(A, C) = 2. If we observe A first, C will be counted as a new UMI, but B will not. If B is observed first, neither A nor C will be counted as new UMIs.
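
To make the order-dependence concrete, here is a toy version of that greedy scheme (just a sketch, not what the package does):

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def greedy_unique(umis, max_dist=1):
    """Assign each UMI to the first previously seen UMI within max_dist,
    otherwise count it as new. The result depends on observation order."""
    seen = []
    for umi in umis:
        if not any(hamming(umi, s) <= max_dist for s in seen):
            seen.append(umi)
    return len(seen)

# d(A, B) = 1, d(B, C) = 1, d(A, C) = 2:
A, B, C = "AAAA", "AAAT", "AATT"
print(greedy_unique([A, B, C]))  # 2 -- A absorbs B, C counted as new
print(greedy_unique([B, A, C]))  # 1 -- B absorbs both A and C
```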

The more correct way to do this is probably to cluster the UMIs in the collapsing stage at the end. For every transcript, you cluster the UMIs and merge the ones that fall within a "ball" in Hamming distance. Then, instead of counting unique UMIs per transcript, you count the number of distinct UMI-balls. But what if balls overlap? How computationally expensive does this become?
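
One way to pin down the "balls overlap" question is to let overlapping balls merge, i.e. count connected components of the graph that links UMIs within the distance threshold. A sketch (again hypothetical, and O(n²) in the number of distinct UMIs per transcript, which is exactly the cost I worry about):

```python
from itertools import combinations

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def count_umi_balls(umis, max_dist=1):
    """Count connected components: UMIs linked (directly or transitively)
    by a Hamming distance <= max_dist collapse into a single ball."""
    umis = list(set(umis))
    parent = {u: u for u in umis}

    def find(u):  # union-find with path halving
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    for a, b in combinations(umis, 2):
        if hamming(a, b) <= max_dist:
            parent[find(a)] = find(b)

    return len({find(u) for u in umis})

# AAAA-AAAT-AATT chain into one ball; GGGG stays on its own:
print(count_umi_balls(["AAAA", "AAAT", "AATT", "GGGG"]))  # 2
```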

These are the questions that made me not want to put too much effort into it.

Testing the different strategies (without the simulation that the blog post does) would be pretty straightforward using a dataset with spike-ins.

That said, I'm getting very good performance with the "unique" method based on spike-ins, which also doesn't motivate me much to implement the more robust counting outlined in the blog post.

You can output the entire "evidence table" from the tallying procedure if you want to try some different UMI-merging approaches (use the --output_evidence_table option). If you find one that is computationally reasonable and improves the result, we should definitely make it part of the merging at the end! If it is not computationally reasonable we could maybe still add it as an option.
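
As a starting point, something like the following could drive an experiment off the evidence table, reusing the `count_umi_balls` sketch from above. Note that reading it as a CSV and the `gene`/`umi` column names are assumptions here, so adjust them to whatever the table actually contains:

```python
import pandas as pd

# Hypothetical layout: one row per observed (gene, umi) evidence entry.
evidence = pd.read_csv("evidence_table.csv")

# Per-gene counts using the connected-components sketch from above.
ball_counts = evidence.groupby("gene")["umi"].apply(
    lambda umis: count_umi_balls(umis, max_dist=1)
)
print(ball_counts.head())
```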

Sorry for the long response; let me know what your thoughts are on this reasoning.

@tomsing1
Author

Thanks a lot for your detailed response. Yes, I agree, this is not a trivial problem. It seems that the authors of the blog post (or some of their colleagues at the same institute) have already published code that implements the different error-correction options they describe in the post:
https://github.com/CGATOxford/UMI-tools

They use some extra code for logging / IO from another of their packages, but other than that the code is pretty much standalone. Perhaps there is an opportunity to combine their work with yours? Just a thought...
