RESTART SEQUENTIAL MODE #853
Hi! You need to add
Also, you cannot use more than one node for the same sample. You should instead run several samples in parallel, each on one node. The
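The "one sample per node" advice above can be sketched as a small job generator: one single-node PBS script per sample, each launching its own SqueezeMeta run. This is an illustrative sketch, not the project's documented workflow; the sample names, paths, resource line, and the exact `SqueezeMeta.pl` invocation are assumptions you would adapt to your cluster and samples file.

```python
# Sketch: write one single-node PBS job per sample so each sample runs
# in its own SqueezeMeta instance in parallel (all names/paths below
# are hypothetical; check your cluster's PBS resource syntax).
from pathlib import Path

SAMPLES = ["sample1", "sample2", "sample3"]  # hypothetical sample names

PBS_TEMPLATE = """#!/bin/bash
#PBS -N sqm_{sample}
#PBS -l nodes=1:ppn=24,mem=110gb
cd $PBS_O_WORKDIR
# Assumed invocation: one project and one single-sample samples file per job
SqueezeMeta.pl -m sequential -p {sample}_proj -s {sample}.samples -f fastq_dir
"""

def write_jobs(outdir="jobs"):
    """Write run_<sample>.pbs scripts into outdir and return their paths."""
    out = Path(outdir)
    out.mkdir(exist_ok=True)
    paths = []
    for s in SAMPLES:
        p = out / f"run_{s}.pbs"
        p.write_text(PBS_TEMPLATE.format(sample=s))
        paths.append(p)
    return paths

if __name__ == "__main__":
    for p in write_jobs():
        print(p)  # submit each with: qsub jobs/run_<sample>.pbs
```

Each job then sees only one node's RAM, which matters for DIAMOND's block size (see below in the thread).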
Hello
Oh yeah, thanks, I managed to deploy the restart mode.
Hi, thanks a lot! I managed to get hold of a big-memory node, which kind of took care of the problem.
Hi there again, I managed to run all my samples using the sequential mode after getting the
In particular, I am ONLY interested in the cyanobacteria present in the three environmental samples. My proposed method is: after I am done with the assembly and all the steps, I mine all the contigs classified as cyanobacteria and reanalyze their functional and taxonomic profiles independently. Do you think this is the right way, or can you advise a better one? Thank you in advance.
That's ok. The subset functions in SQMtools are designed to help you with that.
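SQMtools' subset functions (in R) are the supported way to do this. Purely to illustrate the "mine the cyanobacterial contigs" step, here is a minimal Python sketch that filters a tab-separated contig-taxonomy table; the file name and column layout are hypothetical, and SqueezeMeta's real per-contig tables may differ.

```python
# Sketch: collect contig IDs whose taxonomy string mentions a taxon of
# interest from a hypothetical two-column TSV (contig_id <TAB> taxonomy).
import csv

def contigs_for_taxon(tsv_path, taxon="Cyanobacteria"):
    """Return contig IDs whose taxonomy column contains `taxon`."""
    keep = []
    with open(tsv_path) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            # Skip malformed/short rows; substring match keeps any rank
            # (phylum, class, ...) that names the taxon.
            if len(row) >= 2 and taxon in row[1]:
                keep.append(row[0])
    return keep
```

The returned IDs could then be used to pull the matching rows from the functional tables before re-profiling, which is essentially what the SQMtools subset functions automate for you.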
Perfect, thanks!
Hi there once again,
Project directory structure:
When trying to run
But no pavian file is produced. It is worth noting that
Best,
Hello
Hi @fpusan @jtamames @ggnatalia
I am using SqueezeMeta to analyse my samples individually using the sequential mode. The thing is, I am using a cluster with a PBS job scheduler. I request a lot of nodes for each sample and they are assigned to me. Despite all that, almost every time the job is killed during a DIAMOND step, no matter which
-b
values I try. Each node has about 120 GB of RAM and I request 6 nodes, which makes 720 GB of RAM available per sample run. Dividing by 8 gives a -b value of about 90. What I discovered is that, for some reason, only one node is utilized during the DIAMOND step and all the others are left idle. So I tried the automatic choice of block size by not specifying it; DIAMOND calculated that the node it runs on has about 109 GB of RAM and assigned about
-b 11
and this makes DIAMOND run until step 6, which involves the LCA algorithm, and then it dies again. At this point I don't know what the problem is. All attempts to restart the run with the
--restart
option have proven futile, as I keep getting the error despite the fact that the file is in the project directory with read permissions.

File availability:
My original code is:
Here is the syslog output:
Progress file output:
My nohup error output file:
My nohup standard output file (basically a local syslog):
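The memory arithmetic in the report above can be sketched directly. The key point is that DIAMOND runs on a single node, so only that node's RAM counts: requesting 6 nodes does not give DIAMOND 720 GB. The divide-by-8 rule of thumb is the one used in the report; treat the helper below as an illustration, not DIAMOND's actual auto-sizing logic (which chose a more conservative -b 11 on the 109 GB node).

```python
# Sketch of the block-size arithmetic from the report.
# Rule of thumb used above: -b ~ usable RAM in GB / 8.
def diamond_block_size(node_ram_gb, divisor=8):
    """Approximate DIAMOND -b value for the RAM of ONE node."""
    return node_ram_gb // divisor

# Mistaken assumption (summing all 6 nodes): 720 / 8 = 90 -> OOM kills,
# because DIAMOND only ever sees one node's 120 GB.
print(diamond_block_size(720))  # 90

# Correct basis (one node's RAM): 120 / 8 = 15; DIAMOND's own automatic
# choice on a 109 GB node was even lower (-b 11), so extra headroom
# matters for the later LCA step too.
print(diamond_block_size(120))  # 15
```

In short, sizing -b from the single node's RAM (with headroom) rather than from the aggregate allocation avoids the repeated kills.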