
Binned quality scores and their effect on (non-decreasing) trans rates #1307

Open
JacobRPrice opened this issue Mar 24, 2021 · 73 comments

@JacobRPrice

JacobRPrice commented Mar 24, 2021

Overview

We are encountering some issues in obtaining appropriate estimates of error rates; I believe these problems stem from our newer sequencing facility sending us fastq files containing binned quality scores. Searching the issues, this does not seem to be an uncommon situation, but I'm not sure that a single answer or solution has been offered yet.

My observations

Example of forward read quality profiles:
[figure: ReadQuality_post-cutadapt_F]

And the corresponding reverse read quality profiles:
[figure: ReadQuality_post-cutadapt_R]

Our initial attempts followed a typical pipeline. Instead of pre-learning error rates, we wanted to use the full dataset to avoid bias from the choice of sample(s). This particular dataset is fairly large and (what I'd consider) very deeply sequenced (60,000-300,000 reads per sample). In the fitted error rates, we saw the characteristic dips that are often caused by binned quality scores.

# First attempt approach
dadaFs <- dada(
  derep = filtFs,
  err = NULL,
  selfConsist = TRUE,
  pool = FALSE,
  multithread = parallel::detectCores() - 1, 
  verbose = TRUE
)

A2G is especially nasty!
[figure: err1]
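(For reference, plots like the one above come from dada2's plotErrors; a minimal sketch, assuming dadaFs is the list returned by the dada() call above:)

# Observed vs. fitted error rates for the first sample's dada object;
# nominalQ = TRUE adds the rates expected from the nominal Q scores.
plotErrors(dadaFs[[1]], nominalQ = TRUE)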

In issue #938, @hjruscheweyh laid out a near-identical problem to the one I'm having, with his colleague @GuillemSalazar graciously offering up the substitute function they used to address/correct it [link to comment containing function]. Their approach attempts to enforce monotonicity by changing the arguments of the loess call: span equal to 2, and weights equal to the log-transformed totals.

I then tested their approach on our data, this time following the tutorial's suggestion to pre-learn the error rates. For both learnErrors and dada I passed their loessErrfun_mod to the errorEstimationFunction parameter.

errF <- learnErrors(
  filtFs, 
  multithread = TRUE, 
  errorEstimationFunction = loessErrfun_mod,
  verbose = TRUE
)

dadaFs <- dada(
  derep = filtFs,
  err = errF,
  pool = FALSE,
  errorEstimationFunction = loessErrfun_mod,
  multithread = TRUE, 
  verbose = TRUE
)

The results from learnErrors indicated that this might have done the trick!
[figure: learnErrors_errF_firsttrynoselfconsistondada]

But when I pulled the final error model from dadaFs, I was disappointed to see that all of the fitted parameters seemed to have "flattened" (indicating a weaker response of error frequency to quality score); furthermore, the error frequencies for A2G actually increased with increasing quality scores, which should never happen in reality.
[figure: dada_errF_firsttrynoselfconsistondada]
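(To inspect the numbers behind plots like these, the fitted matrices can be pulled directly with getErrors; a minimal sketch, assuming errF and dadaFs from the calls above:)

err_pre  <- getErrors(errF)         # 16 x (number of Q scores) transition-rate matrix from learnErrors()
err_post <- getErrors(dadaFs[[1]])  # rates re-estimated within dada()
# e.g. compare the A2G row across quality scores:
rbind(pre = err_pre["A2G", ], post = err_post["A2G", ])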

It's possible that these results came from pre-learning the error rates (and any bias that may result from it) or, perhaps more likely, from not allowing dada to selfConsist. To check this, I reran the same data a second time, passing TRUE to selfConsist.

errF <- learnErrors(
  filtFs, 
  multithread = TRUE, 
  errorEstimationFunction = loessErrfun_mod,
  verbose = TRUE
)
dadaFs <- dada(
  derep = filtFs,
  err = errF,
  selfConsist = TRUE,
  pool = FALSE,
  errorEstimationFunction = loessErrfun_mod,
  multithread = TRUE, 
  verbose = TRUE
)

While the self-consistency loop for dada() terminated before we saw convergence, the fluctuations were small enough that it is probably OK to proceed with the results (or at least check them out).

 [1] 1.432041e+01 9.763790e-01 2.606707e-02 6.451500e-03 9.615610e-04
 [6] 1.049815e-03 5.119809e-04 6.031202e-04 5.129712e-04 6.957217e-04

The error model obtained from learnErrors was identical to the previous trial (same arguments/samples were being passed, so that is expected).
[figure: learnErrors_errF]

Much to my dismay, the error model, while not identical, was very similar, with the same increasing error frequency for A2G.
[figure: dada_errF]

Illumina Customer Service on Binned Quality Scores

I spent some time on the phone with Illumina, and both of the techs I spoke with indicated that binned quality scores are here to stay and that the quality score values that comprise each bin may vary according to Illumina platform and software version. Neither was able to find me any information about whether binning can be turned off by the sequencing facility.

That said, I'm not sure how prevalent the use of binned quality scores is across all sequencing facilities; this particular dataset is the first time I've encountered them. I agree with @wangjiawen2013's comment in #971 that this has the potential to become more and more of an issue.

Questions (Finally, Jake, stop jabbering!)

How severe of a problem are models with increasing error frequencies?

It isn't intuitive that this should be possible (or at least likely). I haven't come across anyone in the issues plotting the final error models (coming out of dada), so I don't know how unique our particular dataset is.

"Official" recommendations for handling binned quality data

I'd like to suggest, or more properly, encourage, that recommendations for how to handle binned quality data be established. @benjjneb has commented that they're waiting on the appropriate data needed to do this development.

Updating learnErrors() (and hence, dada())

Are there plans in place to add functionality to learnErrors enabling users to enforce monotonicity or otherwise alter its behavior if binned quality scores are present?

Communication with other developers/users

Lastly, the dada2 plugin for QIIME2 does not offer much in the way of intermediate output or the ability to validate the results. If sequencing data with binned quality scores is truly an issue, unwary users may be getting bad results. This last item is more for investigators/labs who treat QIIME, dada2, and other pipelines as black boxes and are primarily concerned with the output. I understand the Callahan Lab isn't responsible for the development decisions of the QIIME team, but it may be worthwhile to communicate these challenges so that they can be addressed.

Other Issues regarding this question:

Included for others who may be interested as well.
#791 #938 #964 #1083 #1135 #1228

@benjjneb
Owner

Hi Jake,
I just wanted to say that your contribution here has not gone unnoticed. In fact it is highly appreciated, especially how you have not only contributed new data on this issue, but collated several existing issues as well.

I consider this the issue for binned quality scores, and will direct future related issues this way.
More to come.

@JacobRPrice
Author

JacobRPrice commented Apr 16, 2021

A follow up to my original post:

I ended up trying a couple of additional approaches/variants of a modified loessErrfun to see what the outcomes would look like. Perhaps I have access to more computing power than I do good sense, but I'm a visual learner, and it helped me understand what effects such changes may have.

Two generalizable solutions have been offered so far.

  1. Altering the arguments passed to the loess
  2. Enforcing monotonicity in the error rates

1 ) altering loess
This solution, as discussed above, was offered up by @hjruscheweyh and @GuillemSalazar in Issue #938. As they commented:

We tried to enforce monotonicity by changing the parameters (span=2 and the log-transformed totals as weights) of the loess function used by loessErrfun():

They used:

# Guillem's solution
mod.lo <- loess(rlogp ~ q, df, weights = log10(tot), span = 2)

while the original/standard function uses total counts for weights and the default value (0.75) for the span parameter.

# original
mod.lo <- loess(rlogp ~ q, df, weights=tot)

I was curious to see the impact of varying just the weights argument, as opposed to changing the span value at the same time (two things moving at once make it hard to isolate the effect of either).

2 ) Enforcing monotonicity in the error rates
This has been suggested in a couple of locations/issues by @benjjneb and @mikemc, but @hhollandmoritz put together a great issue exploring the effects of using NovaSeq data in Issue #791. In a comment in that thread they described how they enforced some degree of monotonic behavior by assigning any (fitted) value lower than the Q40 value to the Q40 value. In the same thread, @cjfields confirmed that they've observed the same behavior and found the approach to be useful.

# hhollandmoritz
NSnew_errR_out <- getErrors(NSerrR_mon) %>%
  data.frame() %>%   # note: the Q40 column "40" becomes "X40" here
  mutate_all(funs(case_when(. < X40 ~ X40,    # raise anything below the Q40 rate
                            . >= X40 ~ .))) %>% as.matrix()
rownames(NSnew_errR_out) <- rownames(getErrors(NSerrR_mon))
colnames(NSnew_errR_out) <- colnames(getErrors(NSerrR_mon))

It sounds like it worked for them, so maybe it would work for us too!
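(As an aside, the funs()/case_when() idiom above is deprecated in recent dplyr; the same clamp can be written with pmax. A minimal sketch, assuming, as above, a matrix whose Q40 column data.frame() renames to X40:)

library(dplyr)
# Clamp every rate to be at least the Q40 rate in its row, i.e. rates
# may not drop below the value at the highest-quality bin.
NSnew_errR_out <- getErrors(NSerrR_mon) %>%
  data.frame() %>%
  mutate(across(everything(), ~ pmax(.x, X40))) %>%
  as.matrix()
rownames(NSnew_errR_out) <- rownames(getErrors(NSerrR_mon))
colnames(NSnew_errR_out) <- colnames(getErrors(NSerrR_mon))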

Trials

Summarizing the trials I wanted to test (for clarity):

  1. alter loess arguments (weights and span) & enforce monotonicity
  2. enforce monotonicity
  3. alter loess arguments (weights only) & enforce monotonicity

alter loess arguments (weights and span) & enforce monotonicity

Why not try both of the suggested solutions at the same time, anything worth doing is worth overdoing, amiright?

library(magrittr)
library(dplyr)

loessErrfun_mod <- function(trans) {
  qq <- as.numeric(colnames(trans))
  est <- matrix(0, nrow=0, ncol=length(qq))
  for(nti in c("A","C","G","T")) {
    for(ntj in c("A","C","G","T")) {
      if(nti != ntj) {
        errs <- trans[paste0(nti,"2",ntj),]
        tot <- colSums(trans[paste0(nti,"2",c("A","C","G","T")),])
        rlogp <- log10((errs+1)/tot)  # 1 pseudocount for each err, but if tot=0 will give NA
        rlogp[is.infinite(rlogp)] <- NA
        df <- data.frame(q=qq, errs=errs, tot=tot, rlogp=rlogp)

        # original
        # ###! mod.lo <- loess(rlogp ~ q, df, weights=errs) ###!
        # mod.lo <- loess(rlogp ~ q, df, weights=tot) ###!
        # #        mod.lo <- loess(rlogp ~ q, df)

        # Guillem Salazar's solution
        # https://github.com/benjjneb/dada2/issues/938
        mod.lo <- loess(rlogp ~ q, df, weights = log10(tot), span = 2)

        pred <- predict(mod.lo, qq)
        maxrli <- max(which(!is.na(pred)))
        minrli <- min(which(!is.na(pred)))
        pred[seq_along(pred)>maxrli] <- pred[[maxrli]]
        pred[seq_along(pred)<minrli] <- pred[[minrli]]
        est <- rbind(est, 10^pred)
      } # if(nti != ntj)
    } # for(ntj in c("A","C","G","T"))
  } # for(nti in c("A","C","G","T"))

  # HACKY
  MAX_ERROR_RATE <- 0.25
  MIN_ERROR_RATE <- 1e-7
  est[est>MAX_ERROR_RATE] <- MAX_ERROR_RATE
  est[est<MIN_ERROR_RATE] <- MIN_ERROR_RATE

  # enforce monotonicity
  # https://github.com/benjjneb/dada2/issues/791
  estorig <- est
  est <- est %>%
    data.frame() %>%
    mutate_all(funs(case_when(. < X40 ~ X40,
                              . >= X40 ~ .))) %>% as.matrix()
  rownames(est) <- rownames(estorig)
  colnames(est) <- colnames(estorig)

  # Expand the err matrix with the self-transition probs
  err <- rbind(1-colSums(est[1:3,]), est[1:3,],
               est[4,], 1-colSums(est[4:6,]), est[5:6,],
               est[7:8,], 1-colSums(est[7:9,]), est[9,],
               est[10:12,], 1-colSums(est[10:12,]))
  rownames(err) <- paste0(rep(c("A","C","G","T"), each=4), "2", c("A","C","G","T"))
  colnames(err) <- colnames(trans)
  # Return
  return(err)
}

# check what this looks like
errF <- learnErrors(
  filtFs,
  multithread = TRUE,
  errorEstimationFunction = loessErrfun_mod,
  verbose = TRUE
)

learnErrors_errF_span2_enforceMono

Taking this approach seemed to work really well. The curves are smooth, without the hooks/dips we see in the default approach. It also does not show the increasing A2G pattern we saw with @GuillemSalazar's approach (which only changed the loess call).

enforce monotonicity

But how much of that improvement is due to just enforcing monotonicity? Let's revert to the original loess call and enforce monotonicity by adapting @hhollandmoritz's process.

library(magrittr)
library(dplyr)

loessErrfun_mod <- function(trans) {
  qq <- as.numeric(colnames(trans))
  est <- matrix(0, nrow=0, ncol=length(qq))
  for(nti in c("A","C","G","T")) {
    for(ntj in c("A","C","G","T")) {
      if(nti != ntj) {
        errs <- trans[paste0(nti,"2",ntj),]
        tot <- colSums(trans[paste0(nti,"2",c("A","C","G","T")),])
        rlogp <- log10((errs+1)/tot)  # 1 pseudocount for each err, but if tot=0 will give NA
        rlogp[is.infinite(rlogp)] <- NA
        df <- data.frame(q=qq, errs=errs, tot=tot, rlogp=rlogp)

        # original
        # ###! mod.lo <- loess(rlogp ~ q, df, weights=errs) ###!
        mod.lo <- loess(rlogp ~ q, df, weights=tot) ###!
        # #        mod.lo <- loess(rlogp ~ q, df)

        # Guillem Salazar's solution
        # https://github.com/benjjneb/dada2/issues/938
        # mod.lo <- loess(rlogp ~ q, df, weights = log10(tot),span = 2)

        pred <- predict(mod.lo, qq)
        maxrli <- max(which(!is.na(pred)))
        minrli <- min(which(!is.na(pred)))
        pred[seq_along(pred)>maxrli] <- pred[[maxrli]]
        pred[seq_along(pred)<minrli] <- pred[[minrli]]
        est <- rbind(est, 10^pred)
      } # if(nti != ntj)
    } # for(ntj in c("A","C","G","T"))
  } # for(nti in c("A","C","G","T"))

  # HACKY
  MAX_ERROR_RATE <- 0.25
  MIN_ERROR_RATE <- 1e-7
  est[est>MAX_ERROR_RATE] <- MAX_ERROR_RATE
  est[est<MIN_ERROR_RATE] <- MIN_ERROR_RATE

  # enforce monotonicity
  # https://github.com/benjjneb/dada2/issues/791
  estorig <- est
  est <- est %>%
    data.frame() %>%
    mutate_all(funs(case_when(. < X40 ~ X40,
                              . >= X40 ~ .))) %>% as.matrix()
  rownames(est) <- rownames(estorig)
  colnames(est) <- colnames(estorig)

  # Expand the err matrix with the self-transition probs
  err <- rbind(1-colSums(est[1:3,]), est[1:3,],
               est[4,], 1-colSums(est[4:6,]), est[5:6,],
               est[7:8,], 1-colSums(est[7:9,]), est[9,],
               est[10:12,], 1-colSums(est[10:12,]))
  rownames(err) <- paste0(rep(c("A","C","G","T"), each=4), "2", c("A","C","G","T"))
  colnames(err) <- colnames(trans)
  # Return
  return(err)
}


# check what this looks like
errF <- learnErrors(
  filtFs,
  multithread = TRUE,
  errorEstimationFunction = loessErrfun_mod,
  verbose = TRUE
)

[figure: learnErrors_errF_enforceMono]

Without altering the loess arguments we still see some sharp peaks and dips, but they're much smaller in magnitude, so some degree of smoothing is offered by this approach. In a couple of cases (A2G, T2C) there doesn't appear to be much of a response to quality score; or, at least for my data, the model is not doing a good job of estimating those parameters.

alter loess arguments (weights only) & enforce monotonicity

Can we improve upon the previous trial by modifying the loess weights argument (in addition to enforcing monotonicity)?

library(magrittr)
library(dplyr)

loessErrfun_mod <- function(trans) {
  qq <- as.numeric(colnames(trans))
  est <- matrix(0, nrow=0, ncol=length(qq))
  for(nti in c("A","C","G","T")) {
    for(ntj in c("A","C","G","T")) {
      if(nti != ntj) {
        errs <- trans[paste0(nti,"2",ntj),]
        tot <- colSums(trans[paste0(nti,"2",c("A","C","G","T")),])
        rlogp <- log10((errs+1)/tot)  # 1 pseudocount for each err, but if tot=0 will give NA
        rlogp[is.infinite(rlogp)] <- NA
        df <- data.frame(q=qq, errs=errs, tot=tot, rlogp=rlogp)

        # original
        # ###! mod.lo <- loess(rlogp ~ q, df, weights=errs) ###!
        # mod.lo <- loess(rlogp ~ q, df, weights=tot) ###!
        # #        mod.lo <- loess(rlogp ~ q, df)

        # Guillem Salazar's solution
        # https://github.com/benjjneb/dada2/issues/938
        # mod.lo <- loess(rlogp ~ q, df, weights = log10(tot), span = 2)

        # only change the weights
        mod.lo <- loess(rlogp ~ q, df, weights = log10(tot))

        pred <- predict(mod.lo, qq)
        maxrli <- max(which(!is.na(pred)))
        minrli <- min(which(!is.na(pred)))
        pred[seq_along(pred)>maxrli] <- pred[[maxrli]]
        pred[seq_along(pred)<minrli] <- pred[[minrli]]
        est <- rbind(est, 10^pred)
      } # if(nti != ntj)
    } # for(ntj in c("A","C","G","T"))
  } # for(nti in c("A","C","G","T"))

  # HACKY
  MAX_ERROR_RATE <- 0.25
  MIN_ERROR_RATE <- 1e-7
  est[est>MAX_ERROR_RATE] <- MAX_ERROR_RATE
  est[est<MIN_ERROR_RATE] <- MIN_ERROR_RATE

  # enforce monotonicity
  # https://github.com/benjjneb/dada2/issues/791
  estorig <- est
  est <- est %>%
    data.frame() %>%
    mutate_all(funs(case_when(. < X40 ~ X40,
                              . >= X40 ~ .))) %>% as.matrix()
  rownames(est) <- rownames(estorig)
  colnames(est) <- colnames(estorig)

  # Expand the err matrix with the self-transition probs
  err <- rbind(1-colSums(est[1:3,]), est[1:3,],
               est[4,], 1-colSums(est[4:6,]), est[5:6,],
               est[7:8,], 1-colSums(est[7:9,]), est[9,],
               est[10:12,], 1-colSums(est[10:12,]))
  rownames(err) <- paste0(rep(c("A","C","G","T"), each=4), "2", c("A","C","G","T"))
  colnames(err) <- colnames(trans)
  # Return
  return(err)
}

# check what this looks like
errF <- learnErrors(
  filtFs,
  multithread = TRUE,
  errorEstimationFunction = loessErrfun_mod,
  verbose = TRUE
)

[figure: learnErrors_errF_weights_enforceMono]

Without adjusting the span argument we still have dips and peaks, although they are much less severe; this is most likely the effect of altering the weights argument. Furthermore, enforcing monotonically decreasing error rates did its job as expected.

Decisions, Decisions (for now)

I think that for this particular dataset (I can't emphasize that enough), the combined approach (Trial 1 above) works best. I am currently running the dada2 pipeline using that version, and I'm looking forward to seeing what the results look like; I will make sure to post an update once it has completed.

Request for feedback

If anyone has feedback on these preliminary trials, or on my rationale, I'd greatly appreciate it. Analysis paralysis can be quite real!

Jake

@cjfields

@JacobRPrice really impressive work! It would be great to get @benjjneb's thoughts, though this argues that one could at least try different approaches with some follow-up QA to see what works best.

@benjjneb
Owner

benjjneb commented May 3, 2021

Hi Jake,
We are digging into this in more detail in collaboration with a student here at NC State, @Y-Q-Si (email address ysi4@ncsu.edu), who is interested in the error model for binned quality scores. Would you mind sharing the particular dataset that generated the plots above, either on this thread or to her email address, so we can start doing some testing and evaluation of our own? Thank you in advance.

@JacobRPrice
Author

Hi @benjjneb and @Y-Q-Si,

I sent an email briefly describing the data and shared a Google Drive link to the raw data as a tarball. Please let me know if you have any questions or if I can provide clarification.


Outline of QC protocol:

cutadapt was used to remove primers

FWD <- "GTGYCAGCMGCCGCGGTAA"
REV <- "GGACTACNVGGGTWTCTAAT"

After removing the primers, we carried out QC using the following filterAndTrim function call:

out.filtN_cut_filt <- filterAndTrim(
  fwd=cutFs,
  filt=filtFs,
  rev=cutRs,
  filt.rev=filtRs,
  truncLen=c(200,200),
  maxN=0,
  maxEE=2,
  truncQ=2,
  compress=TRUE,
  verbose=TRUE,
  multithread = parallel::detectCores() - 1
)

I can send you my exact scripts if you need more detailed information.

Jake

@jonalim
Copy link

jonalim commented May 21, 2021

@JacobRPrice @benjjneb I'm a bioinformatics analyst from the University of Maryland. I've experimented with error modeling on a NovaSeq dataset using the following loess function, which weights by totals and uses a degree of 1. I figured that a degree of 1 would force the underlying fits to be linear, thus "discouraging" the final model from changing direction. (The resulting models still changed direction, but oh well.) I didn't enforce monotonicity, but I think it would help to do so, to remove the dips in the models during learnErrors().

mod.lo <- loess(rlogp ~ q, df, weights=tot, degree=1, span = 0.95)
[figures: errF (weights=tot, degree=1, span=0.95), pre-dada (from learnErrors(nbases=1e8, ...)) and post-dada]

My understanding is that the "totals" here are highest for the three quality scores that actually appear in my NovaSeq reads: 11, 23, and 37. Allowing those three datapoints to dominate the loess fit seems to maintain an overall downward slope in the post-DADA fits, which seems desirable to me.
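(One quick way to confirm which Q scores are actually present in a run; a sketch using Bioconductor's ShortRead, with a hypothetical filename and equal-length reads assumed:)

library(ShortRead)
fq <- readFastq("sample_R1.fastq.gz")
# integer matrix of per-base Q scores; table() reveals the occupied bins
table(as(quality(fq), "matrix"))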

I've also tried weighting by log(tot). This yielded a tighter fit, which makes sense because the weights have less variance. In some cases, though (G2A, T2C, C2T post-DADA), the fit had error frequency increasing with the consensus quality score.

mod.lo <- loess(rlogp ~ q, df, weights=log(tot), degree=1, span = 0.95)
[figures: errF (weights=log(tot), degree=1, span=0.95), pre-dada (from learnErrors(nbases=1e8, ...)) and post-dada]

Unfortunately, I'm not at liberty to share my own dataset, but like yourself, I am curious to find out if these parameters are generalizable to other datasets.

Here are the results when I used @JacobRPrice's "Trial 1" model function on my dataset.

[figures: errF (weights=log10(tot), span=2, monotonicity enforced), pre-dada (from learnErrors(nbases=1e8, ...)) and post-dada]

@benjjneb
Owner

Thanks @jonalim !

Pinging @Y-Q-Si

@Y-Q-Si

Y-Q-Si commented May 22, 2021

Hi @jonalim, thanks for your comments. I am running some tests with Jake's data. I am wondering if you would mind sharing the "trans" matrix with me? It would be very helpful if I could run the same tests on both matrices. Thank you in advance.

@Andreas-Bio

Andreas-Bio commented May 25, 2021

I have the same issue. I do not have binned quality scores.

#1343

The solution from @JacobRPrice's "Trial 1" does not work for me.


errF <- learnErrors(inputfiles, multithread = T, randomize = T, nbases = 1e20, errorEstimationFunction = loessErrfun_mod)
404252934 total bases in 7873078 reads from 180 samples will be used for learning the error rates.
Error rates could not be estimated (this is usually because of very few reads).
Error in getErrors(err, enforce = TRUE) : Error matrix is NULL.

@jonalim

jonalim commented Jun 8, 2021

@Y-Q-Si Certainly. Here's a link to an RDS and the script used to create it. The RDS contains a list of three lists, one for each error model function. Each internal list contains:

  • pre_dada: the value of getErrors() called on the output of learnErrors()
  • post_dada: the value of getErrors() called on the output of dada()

https://drive.google.com/drive/folders/11GThjrUKdcL_n64EbXdxf1Ww_gXqPE87?usp=sharing

@JacobRPrice
Author

@andzandz11
The error you're seeing is most likely due to a single (or perhaps a few) unique sequence(s) within your dataset. In your issue (#1343), your quality profile plots show that you have a tiny fraction of very long reads (in comparison with the others in the dataset). You may want to look further into your filtering/trimming parameters and make sure that the read-length distribution is more reasonable.

@jeffkimbrel

jeffkimbrel commented Aug 3, 2021

@andzandz11 - you are also trying to use 1e20 bp, which seems an unreasonably large number.

Overall, are there any solid recommendations yet? We've received word from our sequencing center that they are about to run a few hundred of our samples on a NovaSeq (instead of the usual MiSeq), so we are about to be in the same situation of fitting error models against binned quality scores.

Also, I first ran up against this issue with Illumina RTA3 not with amplicon data, but with variant calling and the need there to distinguish between low-frequency variants and sequencing error. I wonder if the folks developing those tools, such as LoFreq (@CSB5) or similar, have some clever ways to use the bins.

@JacobRPrice
Author

JacobRPrice commented Sep 9, 2021

I'll second Jeff's request. I'd love to hear any recommendations official or otherwise. We have several NovaSeq datasets that are currently tabled.

@hhollandmoritz

Hi All!

Just rejoining this conversation after a long hiatus. First, @JacobRPrice, thank you for all the work you've been doing on this; it's extremely helpful. I will go ahead and try your three methods on a recent NovaSeq soils dataset and report back.

Second, I am now working with a new sequencing center that due to pandemic-related issues will be sequencing all amplicon data only on NovaSeq, so this challenge has recently become much more pressing on my end.

The good news is that in my group we may actually have the resources to create a dataset with the same samples sequenced on HiSeq and resequenced on NovaSeq. @benjjneb are you still looking for a comparison dataset for this? If so, I can see if we can push that project forward. If not, is the comparison dataset you're currently using public?

@jcmcnch

jcmcnch commented Nov 2, 2021

Hi all, we're encountering some issues in two recent NovaSeq runs, so we'll be jumping on this wagon too! Just FYI @hhollandmoritz @JacobRPrice @benjjneb @Y-Q-Si: we have some 16S and 18S seawater microbial community mocks of known composition that are not denoising well from two recent NovaSeq runs, and they've been sequenced many times before (HiSeq, MiSeq, you name it). In these recent NovaSeq runs we're getting lots of 1-mismatches to the mocks that represent a good fraction of the total reads (up to something like 8-10% of the mock, which is a bit disturbing and not at all usual), and it seems likely that it's related to the issues discussed here.

@JacobRPrice I also agree this is a pretty big issue for qiime2 users. We normally use qiime2 and have had good luck with default DADA2 parameters before, but with our recent issues there's really no way to troubleshoot natively in qiime2. I am working with someone in the lab who is much more savvy with R than I am who will help implement the code, and we'll report back after trying the "combined" solution discussed above.

BTW, if there's interest in others getting the mock data to play with, I could ask Jed if he'd mind us sharing... In any case, thanks everyone for all the really useful info and suggestions!

@hhollandmoritz

So, time to report back. Like @JacobRPrice, I ran the three options, but I also added a 4th trial based on @jonalim's method. The samples are from arctic soils, so it's a pretty high-diversity environment. I'm not quite sure I understand the differences between the errors generated from the pre-learning step and from after dada2, but I'd be happy to provide both if anyone thinks they'd be helpful.
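(For concreteness, the two snapshots being compared are, in jonalim's terms above, the following; a minimal sketch using the errF_1 object defined in Option 1 below:)

err_pre  <- getErrors(errF_1)    # "pre_dada": model fitted by learnErrors()
dd <- dada(filtFs, err = errF_1,
           errorEstimationFunction = loessErrfun_mod1, multithread = TRUE)
err_post <- getErrors(dd[[1]])   # "post_dada": rates re-estimated within dada()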

Option 1: Alter the weights and span in loess, and also enforce monotonicity

loessErrfun_mod1 <- function(trans) {
  qq <- as.numeric(colnames(trans))
  est <- matrix(0, nrow=0, ncol=length(qq))
  for(nti in c("A","C","G","T")) {
    for(ntj in c("A","C","G","T")) {
      if(nti != ntj) {
        errs <- trans[paste0(nti,"2",ntj),]
        tot <- colSums(trans[paste0(nti,"2",c("A","C","G","T")),])
        rlogp <- log10((errs+1)/tot)  # 1 pseudocount for each err, but if tot=0 will give NA
        rlogp[is.infinite(rlogp)] <- NA
        df <- data.frame(q=qq, errs=errs, tot=tot, rlogp=rlogp)
        
        # original
        # ###! mod.lo <- loess(rlogp ~ q, df, weights=errs) ###!
        # mod.lo <- loess(rlogp ~ q, df, weights=tot) ###!
        # #        mod.lo <- loess(rlogp ~ q, df)
        
        # Guillem Salazar's solution
        # https://github.com/benjjneb/dada2/issues/938
        mod.lo <- loess(rlogp ~ q, df, weights = log10(tot), span = 2)
        
        pred <- predict(mod.lo, qq)
        maxrli <- max(which(!is.na(pred)))
        minrli <- min(which(!is.na(pred)))
        pred[seq_along(pred)>maxrli] <- pred[[maxrli]]
        pred[seq_along(pred)<minrli] <- pred[[minrli]]
        est <- rbind(est, 10^pred)
      } # if(nti != ntj)
    } # for(ntj in c("A","C","G","T"))
  } # for(nti in c("A","C","G","T"))
  
  # HACKY
  MAX_ERROR_RATE <- 0.25
  MIN_ERROR_RATE <- 1e-7
  est[est>MAX_ERROR_RATE] <- MAX_ERROR_RATE
  est[est<MIN_ERROR_RATE] <- MIN_ERROR_RATE
  
  # enforce monotonicity
  # https://github.com/benjjneb/dada2/issues/791
  estorig <- est
  est <- est %>%
    data.frame() %>%
    mutate_all(funs(case_when(. < X40 ~ X40,
                              . >= X40 ~ .))) %>% as.matrix()
  rownames(est) <- rownames(estorig)
  colnames(est) <- colnames(estorig)
  
  # Expand the err matrix with the self-transition probs
  err <- rbind(1-colSums(est[1:3,]), est[1:3,],
               est[4,], 1-colSums(est[4:6,]), est[5:6,],
               est[7:8,], 1-colSums(est[7:9,]), est[9,],
               est[10:12,], 1-colSums(est[10:12,]))
  rownames(err) <- paste0(rep(c("A","C","G","T"), each=4), "2", c("A","C","G","T"))
  colnames(err) <- colnames(trans)
  # Return
  return(err)
}

# check what this looks like
errF_1 <- learnErrors(
  filtFs,
  multithread = TRUE,
  nbases = 1e10,
  errorEstimationFunction = loessErrfun_mod1,
  verbose = TRUE
)
## 287310600 total bases in 1276936 reads from 7 samples will be used for learning the error rates.
## Initializing error rates to maximum possible estimate.
## selfConsist step 1 .......
##    selfConsist step 2
##    selfConsist step 3
##    selfConsist step 4
##    selfConsist step 5
##    selfConsist step 6
##    selfConsist step 7
##    selfConsist step 8
##    selfConsist step 9
##    selfConsist step 10
errR_1 <- learnErrors(
  filtRs,
  multithread = TRUE,
  nbases = 1e10,
  errorEstimationFunction = loessErrfun_mod1,
  verbose = TRUE
)
## 280925920 total bases in 1276936 reads from 7 samples will be used for learning the error rates.
## Initializing error rates to maximum possible estimate.
## selfConsist step 1 .......
##    selfConsist step 2
##    selfConsist step 3
##    selfConsist step 4
##    selfConsist step 5
##    selfConsist step 6
##    selfConsist step 7
##    selfConsist step 8
##    selfConsist step 9
##    selfConsist step 10

Option 2: Only enforce monotonicity

loessErrfun_mod2 <- function(trans) {
  qq <- as.numeric(colnames(trans))
  est <- matrix(0, nrow=0, ncol=length(qq))
  for(nti in c("A","C","G","T")) {
    for(ntj in c("A","C","G","T")) {
      if(nti != ntj) {
        errs <- trans[paste0(nti,"2",ntj),]
        tot <- colSums(trans[paste0(nti,"2",c("A","C","G","T")),])
        rlogp <- log10((errs+1)/tot)  # 1 pseudocount for each err, but if tot=0 will give NA
        rlogp[is.infinite(rlogp)] <- NA
        df <- data.frame(q=qq, errs=errs, tot=tot, rlogp=rlogp)
        
        # original
        # ###! mod.lo <- loess(rlogp ~ q, df, weights=errs) ###!
        mod.lo <- loess(rlogp ~ q, df, weights=tot) ###!
        # #        mod.lo <- loess(rlogp ~ q, df)
        
        # Guillem Salazar's solution
        # https://github.com/benjjneb/dada2/issues/938
        # mod.lo <- loess(rlogp ~ q, df, weights = log10(tot),span = 2)
        
        pred <- predict(mod.lo, qq)
        maxrli <- max(which(!is.na(pred)))
        minrli <- min(which(!is.na(pred)))
        pred[seq_along(pred)>maxrli] <- pred[[maxrli]]
        pred[seq_along(pred)<minrli] <- pred[[minrli]]
        est <- rbind(est, 10^pred)
      } # if(nti != ntj)
    } # for(ntj in c("A","C","G","T"))
  } # for(nti in c("A","C","G","T"))
  
  # HACKY
  MAX_ERROR_RATE <- 0.25
  MIN_ERROR_RATE <- 1e-7
  est[est>MAX_ERROR_RATE] <- MAX_ERROR_RATE
  est[est<MIN_ERROR_RATE] <- MIN_ERROR_RATE
  
  # enforce monotonicity
  # https://github.com/benjjneb/dada2/issues/791
  estorig <- est
  est <- est %>%
    data.frame() %>%
    mutate_all(funs(case_when(. < X40 ~ X40,
                              . >= X40 ~ .))) %>% as.matrix()
  rownames(est) <- rownames(estorig)
  colnames(est) <- colnames(estorig)
  
  # Expand the err matrix with the self-transition probs
  err <- rbind(1-colSums(est[1:3,]), est[1:3,],
               est[4,], 1-colSums(est[4:6,]), est[5:6,],
               est[7:8,], 1-colSums(est[7:9,]), est[9,],
               est[10:12,], 1-colSums(est[10:12,]))
  rownames(err) <- paste0(rep(c("A","C","G","T"), each=4), "2", c("A","C","G","T"))
  colnames(err) <- colnames(trans)
  # Return
  return(err)
}


# check what this looks like
errF_2 <- learnErrors(
  filtFs,
  multithread = TRUE,
  nbases = 1e10,
  errorEstimationFunction = loessErrfun_mod2,
  verbose = TRUE
)
## 287310600 total bases in 1276936 reads from 7 samples will be used for learning the error rates.
## Initializing error rates to maximum possible estimate.
## selfConsist step 1 .......
##    selfConsist step 2
##    selfConsist step 3
##    selfConsist step 4
##    selfConsist step 5
##    selfConsist step 6
##    selfConsist step 7
##    selfConsist step 8
## Convergence after  8  rounds.

errR_2 <- learnErrors(
  filtRs,
  multithread = TRUE,
  nbases = 1e10,
  errorEstimationFunction = loessErrfun_mod2,
  verbose = TRUE
)
## 280925920 total bases in 1276936 reads from 7 samples will be used for learning the error rates.
## Initializing error rates to maximum possible estimate.
## selfConsist step 1 .......
##    selfConsist step 2
##    selfConsist step 3
##    selfConsist step 4
##    selfConsist step 5
##    selfConsist step 6
##    selfConsist step 7
## Convergence after  7  rounds.

Option 3: Only alter loess weights and also enforce monotonicity

loessErrfun_mod3 <- function(trans) {
  qq <- as.numeric(colnames(trans))
  est <- matrix(0, nrow=0, ncol=length(qq))
  for(nti in c("A","C","G","T")) {
    for(ntj in c("A","C","G","T")) {
      if(nti != ntj) {
        errs <- trans[paste0(nti,"2",ntj),]
        tot <- colSums(trans[paste0(nti,"2",c("A","C","G","T")),])
        rlogp <- log10((errs+1)/tot)  # 1 pseudocount for each err, but if tot=0 will give NA
        rlogp[is.infinite(rlogp)] <- NA
        df <- data.frame(q=qq, errs=errs, tot=tot, rlogp=rlogp)
        
        # original
        # ###! mod.lo <- loess(rlogp ~ q, df, weights=errs) ###!
        # mod.lo <- loess(rlogp ~ q, df, weights=tot) ###!
        # #        mod.lo <- loess(rlogp ~ q, df)
        
        # Guillem Salazar's solution
        # https://github.com/benjjneb/dada2/issues/938
        # mod.lo <- loess(rlogp ~ q, df, weights = log10(tot), span = 2)
        
        # only change the weights
        mod.lo <- loess(rlogp ~ q, df, weights = log10(tot))
        
        pred <- predict(mod.lo, qq)
        maxrli <- max(which(!is.na(pred)))
        minrli <- min(which(!is.na(pred)))
        pred[seq_along(pred)>maxrli] <- pred[[maxrli]]
        pred[seq_along(pred)<minrli] <- pred[[minrli]]
        est <- rbind(est, 10^pred)
      } # if(nti != ntj)
    } # for(ntj in c("A","C","G","T"))
  } # for(nti in c("A","C","G","T"))
  
  # HACKY
  MAX_ERROR_RATE <- 0.25
  MIN_ERROR_RATE <- 1e-7
  est[est>MAX_ERROR_RATE] <- MAX_ERROR_RATE
  est[est<MIN_ERROR_RATE] <- MIN_ERROR_RATE
  
  # enforce monotonicity
  # https://github.com/benjjneb/dada2/issues/791
  estorig <- est
  est <- est %>%
    data.frame() %>%
    mutate_all(funs(case_when(. < X40 ~ X40,
                              . >= X40 ~ .))) %>% as.matrix()
  rownames(est) <- rownames(estorig)
  colnames(est) <- colnames(estorig)
  
  # Expand the err matrix with the self-transition probs
  err <- rbind(1-colSums(est[1:3,]), est[1:3,],
               est[4,], 1-colSums(est[4:6,]), est[5:6,],
               est[7:8,], 1-colSums(est[7:9,]), est[9,],
               est[10:12,], 1-colSums(est[10:12,]))
  rownames(err) <- paste0(rep(c("A","C","G","T"), each=4), "2", c("A","C","G","T"))
  colnames(err) <- colnames(trans)
  # Return
  return(err)
}

# check what this looks like
errF_3 <- learnErrors(
  filtFs,
  multithread = TRUE,
  nbases = 1e10,
  errorEstimationFunction = loessErrfun_mod3,
  verbose = TRUE
)
## 287310600 total bases in 1276936 reads from 7 samples will be used for learning the error rates.
## Initializing error rates to maximum possible estimate.
## selfConsist step 1 .......
##    selfConsist step 2
##    selfConsist step 3
##    selfConsist step 4
##    selfConsist step 5
##    selfConsist step 6
##    selfConsist step 7
##    selfConsist step 8
##    selfConsist step 9
## Convergence after  9  rounds.


# check what this looks like
errR_3 <- learnErrors(
  filtRs,
  multithread = TRUE,
  nbases = 1e10,
  errorEstimationFunction = loessErrfun_mod3,
  verbose = TRUE
)
## 280925920 total bases in 1276936 reads from 7 samples will be used for learning the error rates.
## Initializing error rates to maximum possible estimate.
## selfConsist step 1 .......
##    selfConsist step 2
##    selfConsist step 3
##    selfConsist step 4
##    selfConsist step 5
##    selfConsist step 6
##    selfConsist step 7
##    selfConsist step 8
##    selfConsist step 9
##    selfConsist step 10

Option 4: Alter loess function arguments (weights, span, and degree), and also enforce monotonicity.

loessErrfun_mod4 <- function(trans) {
  qq <- as.numeric(colnames(trans))
  est <- matrix(0, nrow=0, ncol=length(qq))
  for(nti in c("A","C","G","T")) {
    for(ntj in c("A","C","G","T")) {
      if(nti != ntj) {
        errs <- trans[paste0(nti,"2",ntj),]
        tot <- colSums(trans[paste0(nti,"2",c("A","C","G","T")),])
        rlogp <- log10((errs+1)/tot)  # 1 pseudocount for each err, but if tot=0 will give NA
        rlogp[is.infinite(rlogp)] <- NA
        df <- data.frame(q=qq, errs=errs, tot=tot, rlogp=rlogp)
        
        # original
        # ###! mod.lo <- loess(rlogp ~ q, df, weights=errs) ###!
        # mod.lo <- loess(rlogp ~ q, df, weights=tot) ###!
        # #        mod.lo <- loess(rlogp ~ q, df)
        
        # jonalim's solution
        # https://github.com/benjjneb/dada2/issues/1307
        mod.lo <- loess(rlogp ~ q, df, weights = log10(tot), degree = 1, span = 0.95)
        
        pred <- predict(mod.lo, qq)
        maxrli <- max(which(!is.na(pred)))
        minrli <- min(which(!is.na(pred)))
        pred[seq_along(pred)>maxrli] <- pred[[maxrli]]
        pred[seq_along(pred)<minrli] <- pred[[minrli]]
        est <- rbind(est, 10^pred)
      } # if(nti != ntj)
    } # for(ntj in c("A","C","G","T"))
  } # for(nti in c("A","C","G","T"))
  
  # HACKY
  MAX_ERROR_RATE <- 0.25
  MIN_ERROR_RATE <- 1e-7
  est[est>MAX_ERROR_RATE] <- MAX_ERROR_RATE
  est[est<MIN_ERROR_RATE] <- MIN_ERROR_RATE
  
  # enforce monotonicity
  # https://github.com/benjjneb/dada2/issues/791
  estorig <- est
  est <- est %>%
    data.frame() %>%
    mutate_all(funs(case_when(. < X40 ~ X40,
                              . >= X40 ~ .))) %>% as.matrix()
  rownames(est) <- rownames(estorig)
  colnames(est) <- colnames(estorig)
  
  # Expand the err matrix with the self-transition probs
  err <- rbind(1-colSums(est[1:3,]), est[1:3,],
               est[4,], 1-colSums(est[4:6,]), est[5:6,],
               est[7:8,], 1-colSums(est[7:9,]), est[9,],
               est[10:12,], 1-colSums(est[10:12,]))
  rownames(err) <- paste0(rep(c("A","C","G","T"), each=4), "2", c("A","C","G","T"))
  colnames(err) <- colnames(trans)
  # Return
  return(err)
}

# check what this looks like
errF_4 <- learnErrors(
  filtFs,
  multithread = TRUE,
  nbases = 1e10,
  errorEstimationFunction = loessErrfun_mod4,
  verbose = TRUE
)
## 287310600 total bases in 1276936 reads from 7 samples will be used for learning the error rates.
## Initializing error rates to maximum possible estimate.
## selfConsist step 1 .......
##    selfConsist step 2
##    selfConsist step 3
##    selfConsist step 4
##    selfConsist step 5
##    selfConsist step 6
##    selfConsist step 7
##    selfConsist step 8
##    selfConsist step 9
##    selfConsist step 10
errR_4 <- learnErrors(
  filtRs,
  multithread = TRUE,
  nbases = 1e10,
  errorEstimationFunction = loessErrfun_mod4,
  verbose = TRUE
)
## 280925920 total bases in 1276936 reads from 7 samples will be used for learning the error rates.
## Initializing error rates to maximum possible estimate.
## selfConsist step 1 .......
##    selfConsist step 2
##    selfConsist step 3
##    selfConsist step 4
##    selfConsist step 5
##    selfConsist step 6
##    selfConsist step 7
##    selfConsist step 8
## Convergence after  8  rounds.

Now for the side-by-side plots:

Forward Reads:

Option 1: Alter the weights and span in loess, and also enforce monotonicity
[figure: errF_plot1]

Option 2: Monotonicity only
[figure: errF_plot2]

Option 3: Only alter loess weights and also enforce monotonicity
[figure: errF_plot3]

Option 4: Alter loess function arguments (weights, span, and degree), and also enforce monotonicity.
[figure: errF_plot4]

Reverse Reads:

Option 1: Alter the weights and span in loess, and also enforce monotonicity
[figure: errR_plot1]
Option 2: Monotonicity only
[figure: errR_plot2]
Option 3: Only alter loess weights and also enforce monotonicity
[figure: errR_plot3]
Option 4: Alter loess function arguments (weights, span, and degree), and also enforce monotonicity.
[figure: errR_plot4]

Conclusions:

So far, @jonalim's solution seems the best for this data. All methods are still not doing great with the A2G and T2C transitions, but there also just doesn't seem to be a lot of data for those transitions. I'd welcome anyone else's input on whether they think the Option 4 plot is acceptable or still not trustworthy enough to draw conclusions from.

We're trying to develop a lab pipeline, and since it looks like we'll be dealing with NovaSeq data going forward, my current recommendation for users of NovaSeq data is to try out all four methods of error learning and choose the plots that look best for the data.

@benjjneb
Owner

benjjneb commented Nov 2, 2021

BTW if there's interest in others getting the mock data to play with I could ask Jed if he'd mind us sharing

Yes, if you have mock community data from NovaSeq, I'd love to be able to look at it. @jcmcnch

@cjfields

cjfields commented Nov 2, 2021

We're trying to develop a lab pipeline, and since it looks like we'll be dealing with NovaSeq data going forward, my current recommendation for users of NovaSeq data is to try out all four methods of error learning and choose the plots that look best for the data.

That's amazing work, @hhollandmoritz! Yeah, we've set up a 'NovaSeq' fix in our Nextflow implementation that essentially does Option 1, but I agree it's worth making this more flexible based on the above options.

@jcmcnch

jcmcnch commented Nov 9, 2021

Hi everybody,

An update from me and Liv (@livycy), who has done some testing of the improved error model from @JacobRPrice on our mock communities (16S and 18S; clone sequences of nearly full-length rRNA), for which we know the true result and have been able to reliably get good denoising from q2-dada2 in previous runs (HiSeq, MiSeq, etc.).

TL;DR: while the error model looks better, we still see little change in the abundance of some artifactual sequences (defined here as 1-mismatches to the true sequence), in particular A=>G and T=>C transitions, which had very "flat" error model profiles.

Some more detail:

For our mocks, we normally get < 0.5% of total sequences falling outside the exact expected mock sequences after denoising with DADA2 (consistent between q2-dada2 and the native R version), which we've been very happy with. In our past few NovaSeq runs, we're getting up to 10% of reads falling outside the true mocks, which is alarming and something we'd really like to resolve. For the sequences that differ from the reference, we know from BLASTing them that a significant fraction are one base away from the true sequences and therefore likely denoising artifacts.

We see the same type of error dips and issues so Liv ran the new error model from @JacobRPrice , with similar results:

Here's the original error model:

[figure: original error model]

And now the improved one:

[figure: improved error model]

So everything looks nicer. No more dips etc.

However, it doesn't appreciably change the percentage of total ASV counts that differ from the known sequences (3.7% originally vs 3% with the new error model).

I then looked more carefully at the 1-mismatch artifacts from the most abundant member of our 16S mock and did some quick counting by eye (looking at alignments in a viewer; I'm sure there's a smarter way using biopython or something). The locations of the variants are scattered across the ASVs, though they are definitely more abundant in the reverse read.
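(A rough programmatic alternative to counting by eye, using base R's adist; asv_seqs and mock_seqs are hypothetical character vectors of ASV and mock reference sequences:)

# Levenshtein distance of every ASV to every mock reference sequence
d <- adist(asv_seqs, mock_seqs)
# ASVs exactly one edit away from their closest reference = likely artifacts
sum(apply(d, 1, min) == 1)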

From this I can conclude a couple of things (so far this is specific to this particular member of the mock community which happens to be a SAR11):

  • The improved error model seems to eliminate A=>T and A=>C transversions
  • The improved error model did not eliminate A=>G and T=>C transitions, which in this case comprise 83% (15/18) of the remaining artifactual ASVs
  • Overall, performance with NovaSeq data is still poor compared to what we've seen previously, which is concerning for us. BTW, deblur performed far better, which is consistent with what we've seen in the past. That said, I still much prefer DADA2, because we know deblur will remove some true sequence microdiversity and skew abundance profiles as a result.

When looking at the error profiles I do see why DADA2 might not be doing well with A=>G and T=>C transitions (the error profile is pretty flat), but the rest is quite mysterious to me... @benjjneb @hhollandmoritz @jonalim @Y-Q-Si - any suggestions about the next steps we could take to try to resolve the poor performance on our mocks?

Thanks,
Jesse

@jcmcnch

jcmcnch commented Dec 10, 2021

Hi all,

I just wanted to send a brief update on this. We dug into some of our previous datasets and found that while our two most recent NovaSeq runs had issues with DADA2 denoising (i.e., many 1-mismatches to the mock, suggesting denoising issues) that were not resolved by improving the error model as described above, a previous NovaSeq run had excellent performance with DADA2 and worked nearly as well as a HiSeq rapid run (which does not have binned quality scores). Why our two most recent runs have performed so comparatively poorly remains mysterious, but it could be some property of the sequencing pools we've prepared and/or the ratio of amplicon to metagenome reads in those pools. We have seen similar situations in the past where DADA2 failed for inexplicable reasons, and we were not able to figure out why, even after consulting with @benjjneb.

So I would conclude from this that NovaSeq binned quality scores are not inherently a problem for DADA2, at least for our simple mock communities. FYI @cjfields. If anyone would like access to these mock community datasets for testing, I would be happy to provide them. @benjjneb, would your group be interested in taking a closer look? This would include a number of mocks derived from the same exact sequencing libraries run on HiSeq and then on NovaSeq (both of which denoised well), plus another NovaSeq run (in addition to the one we already shared) that had quite serious issues with the same mocks.

-Jesse

@nejcstopno

Thanks all for this resourceful thread! I, too, am working on a large dataset produced with NovaSeq, and I am about to test out these models. I am considering doing the testing on a randomly subsetted dataset (maybe 1/10), but I don't know if this is sufficient to decide which model to use for the full dataset. Any comments?
I have 970 samples with 200k reads each, and running the error-learning steps for all models on the full dataset would take a couple of weeks (changing nbases=1e10 alone took nearly 3 days to complete for the forward reads only).

@JacobRPrice
Author

@nejcstopno , IMHO testing on a 10% subset should give you a pretty good idea of what the outcome would be for the full dataset.

As for your choice of nbases = 1e10: the default is 1e8, which is generally sufficient for most purposes, and 1e10 may be larger than the number of bases offered by the (970*0.1 =) 97 samples you're considering training on. Assuming you have 200k reads per sample and each read is 200 bp long post-trimming, it would take 250 samples to reach 1e10:

> 1e10 / (2e5*200)
[1] 250
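(A minimal sketch of the 10% subset idea, with filtFs as the full vector of filtered-file paths and loessErrfun_mod1 standing in for whichever error function is being tested:)

set.seed(100)  # make the subset reproducible
filtFs_sub <- sample(filtFs, length(filtFs) %/% 10)
errF_sub <- learnErrors(
  filtFs_sub,
  multithread = TRUE,
  errorEstimationFunction = loessErrfun_mod1,
  verbose = TRUE
)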

@bjmarzul

bjmarzul commented May 5, 2022

I'm not sure how I got here, so I'm hesitant to comment, but I'm pretty sure setting span = 2 in loess isn't standard. My understanding is that span is the fraction of the data used for each local fit, often called alpha, and 0.75 (i.e. 75%) by default. Reducing the span (adjusting it closer to 0) makes the line "more wiggly" and less straight. There's a tutorial somewhere on optimizing the smoothing parameter of loess, which could be a good place to gather information, as well as the geom_smooth() documentation for ggplot2, which is probably where I first heard of loess regression.
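(For intuition, a toy comparison of span values; note that R's loess does accept span > 1, in which case all points are used with additional smoothing:)

set.seed(1)
x <- 1:40
y <- -0.1 * x + rnorm(40, sd = 0.3)
lo_wiggly <- loess(y ~ x, span = 0.3)  # local fits, wiggly curve
lo_smooth <- loess(y ~ x, span = 2)    # effectively global, nearly straight
plot(x, y)
lines(x, predict(lo_wiggly), col = "red")
lines(x, predict(lo_smooth), col = "blue")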

@cmgautier

Hello! I have previously analysed metagenomic data with dada2, but this time I have the same issue as you, probably binned quality scores, though with an Illumina MiSeq. I read the whole discussion; thank you for the information provided.
I have some questions for a better understanding:

  • Is it necessary to modify the initial script, e.g. to alter the loess arguments (weights and span) and enforce monotonicity as suggested by @JacobRPrice?
  • Do binned quality scores have an impact on taxonomy assignment or anything else if nothing is modified in the initial dada2 script?

Thanks for your help!

@nuorenarra

Hi all, I read through this thread as I am also having similar issues with the dada2 error model on NovaSeq data. Thanks @hhollandmoritz so much for such a thorough exploration of this!
Wondering if there have been any updates on the "official" fix/recommendations for dada2, @benjjneb?

Also, I tried the "Option 4" function (loessErrfun_mod4) with the error model, as suggested by @hhollandmoritz, but I got a similar error message to @pdhrati02 and @cmgautier.
Specifically, after I ran

errF <- learnErrors(
  filtFs,
  multithread = TRUE,
  nbases = 1e10,
  errorEstimationFunction = loessErrfun_mod4,
  verbose = TRUE
)

I got this error message:

Error rates could not be estimated (this is usually because of very few reads).
Error in getErrors(err, enforce = TRUE) : Error matrix is NULL.

Has anyone figured out what this means yet?

@hhollandmoritz

@nuorenarra I didn't really do anything other than package the hard work of @JacobRPrice, @jonalim and others into a lab pipeline. :) But glad you find it useful.

As for the getErrors "matrix is NULL" issue: @cmgautier and @pdhrati02 found that it is caused by not loading dplyr directly. Specifically, I ran both of their datasets without issue through our lab pipeline and shared the results with them. @cmgautier then used my code and found the following:

When I tried my script with my packages (only dada2 and ggplot), it does not work. So I added one by one your packages and for learning error rates with NovaSeq data, we need "library(dplyr); packageVersion("dplyr")" in addition to dada2 package.

She found that it worked after loading the dplyr package directly and I believe that @pdhrati02's code also worked after that fix.

I haven't looked deeply into the model 4 function to figure out which line is causing the error but since @benjjneb has hinted that a more official solution is on its way, I haven't prioritized fixing it.
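(In short, the workaround reported above amounts to loading dplyr explicitly alongside dada2 before calling any of the modified error functions:)

library(dada2)
library(dplyr)  # required by the %>% / mutate_all steps inside loessErrfun_mod*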

@cjfields

She found that it worked after loading the dplyr package directly and I believe that @pdhrati02's code also worked after that fix.

I haven't looked deeply into the model 4 function to figure out which line is causing the error but since @benjjneb has hinted that a more official solution is on its way, I haven't prioritized fixing it.

Same for us. We have an in-house fix using the above that works for our workflow when we have NovaSeq data, and it should also work for the NovaSeq X (fingers crossed).

@slambrechts

We are also experiencing this problem (on a NextSeq).

In the meantime, is there a more official solution?

When I read the posts above, it seems like many people are choosing option / model 4?

@cjfields

cjfields commented Mar 6, 2024

...When I read the posts above, it seems like many people are choosing option / model 4?

That does seem like the preferred model, though I don't think it's been noted as the 'official' one. My impression is that the results could use some level of evaluation against a truth set (mock community). I should also note that it currently requires loading dplyr; it's mentioned above somewhere, but this ticket is long!

@peterthorpe5

@benjjneb Dear Ben, thank you for the wonderful software; we use it a lot and it's awesome. I was wondering if there was an update on this issue/thread (Illumina with binned quality scores), as we also have the issue.

regards, Pete

@hhollandmoritz

hhollandmoritz commented Aug 8, 2024 via email

@cjfields

cjfields commented Aug 8, 2024

Hi Andreas, You're correct, the loessErrFun_mod4 has not been optimized for performance. In our lab, it's very typical for it to take hours on large datasets.

Yes, the same here. Also note we're using the same function for binned-quality PacBio Revio+Kinnex runs (#1892), which also takes a long time.

EDIT: also note the comment there from @jonalim regarding whether that error model is the best: #1892 (comment)

@cjfields

Just a note that the newer MiSeq i100 will also use binned quality scores, so the era of full-range qualities with Illumina data will soon be coming to an end.

@jeffkimbrel

Hi all, it looks like the newer NovaSeq X machines use RTA4 base calling and quality scoring (rather than RTA3). It will still have three bins, so perhaps the workarounds for RTA3 will also work for RTA4. But the bins themselves may differ, even among different versions of RTA4 (12, 20, 37 or 12, 24, 40) [1], whereas RTA3's bins were 2, 12, 23 and 37 [2].

Footnotes

  [1] https://knowledge.illumina.com/instrumentation/novaseq-x-x-plus/instrumentation-novaseq-x-x-plus-reference_material-list/000008320

  [2] https://www.illumina.com/content/dam/illumina-marketing/documents/products/appnotes/novaseq-hiseq-q30-app-note-770-2017-010.pdf

@cjfields

@jeffkimbrel we have a NovaSeq X+, but many of the researchers locally are still using the older NovaSeq 6000 since the NovaSeq X doesn't currently support 2x250nt reads.

@luigallucci

Is there any news regarding the error step and an "official" solution? Has anyone tested it with a NextSeq 2000?

@Andreas-Bio

Andreas-Bio commented Nov 18, 2024

I used a NextSeq 2000 and I am not satisfied with the results: the new chemistry is so good that the error rates become very low, and the model fit was poor even when I used more sequences for training. Almost every SNP in my data became an ASV, although I expect a lot of them to be PCR errors. The FastQC Phred score plot was almost always at 100%.

@luigallucci

I used a NextSeq 2000 and I am not satisfied with the results: the new chemistry is so good that the error rates become very low, and the model fit was poor even when I used more sequences for training. Almost every SNP in my data became an ASV, although I expect a lot of them to be PCR errors. The FastQC Phred score plot was almost always at 100%.

I used error models 1 and 4. In my case, they generate an acceptable (or at least better) error plot compared to the classic function. Only A>>G stays flat as a line.

Error model 1:
Fwd: error_fwd.pdf
Rev: error_rev.pdf

Error model 4:
Fwd: error_fwd4.pdf
Rev: error_rev4.pdf

If you have any feedback or opinions on that, I would appreciate it.

@sghignone

Hello Luigi, just a little feedback from my experience: I solved my issues by running cutadapt before dada2 and raising --p-n-reads-learn to 2000000 (in the frame of a qiime2 workflow):

qiime cutadapt trim-paired \
  --i-demultiplexed-sequences paired-end-demux.qza \
  --p-cores 160 \
  --p-front-f ^GTGYCAGCMGCCGCGGTAA \
  --p-front-r ^GGACTACNVGGGTWTCTAAT \
  --p-discard-untrimmed \
  --p-match-read-wildcards \
  --o-trimmed-sequences paired-end-demux-trimmed.qza \
  --verbose

and

qiime dada2 denoise-paired \
  --i-demultiplexed-seqs paired-end-demux-trimmed.qza \
  --o-table table.qza \
  --o-representative-sequences rep-seqs.qza \
  --o-denoising-stats denoising-stats.qza \
  --p-chimera-method consensus \
  --p-n-threads 0 \
  --p-trunc-len-f 272 \
  --p-trunc-len-r 272 \
  --p-n-reads-learn 2000000 \
  --verbose

Hope it helps as a hint.

@cjfields

...and raising --p-n-reads-learn 2000000 (in the frame of a qiime2 workflow):

@sghignone this also mirrors what we have seen and what has been reported by others above: the number of reads needs to be increased for binned quality scores.

@luigallucci

luigallucci commented Nov 18, 2024

Hello Luigi, just a little feedback from my experience: I solved my issues by running cutadapt before dada2 and raising --p-n-reads-learn to 2000000 (in the frame of a qiime2 workflow): [...]

This is what I'm actually doing :) In standalone dada2, the equivalent is nbases = 1e10, just to give a reference for people reading from here. Regarding the cutadapt step, I'm just keeping the primer check inside R, with possibly an additional polishing step to remove leftovers from the primers.

Actually, in my case model 4 is probably giving the best results.
My original question was to ask whether there is a consensus in the community or not.

@sghignone

sghignone commented Nov 18, 2024

Also make sure to remove reverse-complement ("revcon") sequences (--p-discard-untrimmed), which are always there and which I think interfere with the error estimation.

@benjjneb
Owner

benjjneb commented Dec 6, 2024

A few updates from our end. In the devel branch of dada2 (which is the master branch here: https://github.com/benjjneb/dada2) we have added some new functionality to better deal with binned quality scores. The first viz change is a fix to plotQualityProfile so that binned quality scores show up correctly as a single-Q-score band rather than a smeared-out band. Example:

[figure: plotQP_new]

The second viz change is to plotErrors, which now scales the observed error rates per average Q score by the number of reads supporting them. Example:

[figure: plotErrors_new]

Finally, we have added a new function called makeBinnedQualErrfun: https://github.com/benjjneb/dada2/blob/master/R/errorModels.R#L97

This is a function that makes a function: it takes as its argument the set of binned quality scores, and returns a function (which we might name binnedQualErrfun) that fits an error model given those binned quality scores and that can be passed to learnErrors. The method used to fit the error model is to anchor the model at the binned quality scores, and simply to do linear interpolation of the fitted error rates between adjacent binned quality scores. Example usage is below:

binnedQs <- c(2, 11, 25, 37)
binnedQualErrfun <- makeBinnedQualErrfun(binnedQs)
learnErrors(filts, errorEstimationFunction=binnedQualErrfun, multi=TRUE)
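(A toy sketch of the anchored-interpolation idea described above, not the package code, with made-up rates:)

binnedQs <- c(2, 11, 25, 37)
rate_at_bins <- c(2e-2, 5e-3, 1e-3, 2e-4)  # hypothetical fitted rates at the bins
# linear interpolation in log10 space between adjacent bins;
# rule = 2 holds the end values constant outside the binned range
est <- 10^approx(binnedQs, log10(rate_at_bins), xout = 0:41, rule = 2)$y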

We're happy to hear any feedback on these changes/additions. We also intend to add some simple function calls that will return the set of binned quality scores from a given fastq file, and perhaps some warnings that binned quality scores are likely present when learnErrors is used with the default loessErrfun.

ps: The plotErrors viz above is from an error model generated using the new makeBinnedQualErrfun approach in the package.
