I'm running this code on a YARN cluster. It's trying to filter a BAM file down to just those alignments that are either on chr22 or have a mate on chr22.
override def run(args: Arguments, sc: SparkContext): Unit = {
  val filterContig = args.filterContig
  val alignments = sc.loadAlignments(args.reads)
  val matchingAlignments = alignments.filter(matchesContig(_, filterContig))
  matchingAlignments.persist()
  println("Found " + matchingAlignments.count() + " alignments with one pair in " + filterContig)
  matchingAlignments.coalesce(10).adamSAMSave(args.outputPath, asSam = false)
}
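For context, `matchesContig` isn't shown above; a minimal sketch of what it does, with accessor names that are assumptions (depending on the ADAM / bdg-formats version they may be `getContigName`/`getMateContigName` or nested `Contig` objects), would be:

```scala
import org.bdgenomics.formats.avro.AlignmentRecord

// Hypothetical sketch of the predicate used above: keep a read if either the
// read itself or its mate is aligned to the contig being filtered on.
// Field accessors are assumptions and may differ across ADAM versions.
def matchesContig(read: AlignmentRecord, contig: String): Boolean = {
  contig == read.getContigName || contig == read.getMateContigName
}
```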
I'm consistently getting this error:
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 5 in stage 5.0 failed 4 times, most recent failure: Lost task 5.3 in stage 5.0 (TID 706, demeter-csmaz08-10.demeter.hpc.mssm.edu): java.lang.AssertionError: assertion failed: Cannot return header if not attached.
@arahuja I believe that issue was specifically when you used .coalesce(1). I ran out of memory when I tried that, so I'm using .coalesce(10) and running into this issue.
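For what it's worth, one variant I could try (a sketch only, reusing the names from the snippet above and untested against this dataset) is swapping `coalesce` for `repartition`, which does a full shuffle into fresh partitions rather than merging existing ones:

```scala
// repartition(10) shuffles all records into 10 new partitions, unlike
// coalesce(10), which only merges existing partitions; whether that
// sidesteps the "Cannot return header if not attached" assertion is untested.
matchingAlignments.repartition(10).adamSAMSave(args.outputPath, asSam = false)
```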
My command line is this:
(the input is from the DREAM challenge)
Would this be expected to work? cc @ryan-williams