<LMON BE API> (ERROR): read_lmonp return a neg value #46
Can you attach the .top file in your stat_results directory? |
I got the following, which is obviously an incomplete list of hosts:
|
as a workaround, you can set the topology to "depth" with a value of "1". This will connect the STAT FE directly to all the STAT BEs. The topology file just lists the MRNet communication processes that STAT uses to make its communication more scalable. The number of hosts you see in the topology file is probably the square root of the number of nodes in your job. That aside, I will have to do some debugging to see why it doesn't like the topology file. It might not like that your hostnames have "-" characters in them. Anyway, let me know if the depth 1 workaround suffices until I can fix this. |
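A minimal sketch of that workaround, assuming the `-d` flag sets the topology depth (as in the later comments in this thread) and with `<srun_pid>` as a placeholder for the job launcher's PID:

```sh
# Flat topology: the STAT FE connects directly to every STAT BE (no MRNet tree)
stat-cl -d 1 <srun_pid>
```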
I'm actually thinking that MRNet does not like the fact that there is the ".bullx" in your hostname. It may be the case that when it sees ".something" it thinks "something" is your domain name and then tries to resolve the network address using that. I'm guessing that in your case, ".bullx" is not the domain, but rather just part of the node name. Is this accurate? |
I think so. We discussed this a while back in #33 (comment) |
The depth 1 option works (no crash), but stat-cl stays forever in "Sampling traces..." when I try to sample an 880 nodes x 8 MPI/node job. |
how long did you wait? It could take a few minutes when running with this flat topology rather than using a scalable tree. Regarding progress sampling, there is no easy way to poll the daemons for progress, so unfortunately your suggestion cannot be implemented. Ultimately, I'm hoping that we can get a fix in MRNet so you can use a scalable tree, and then in theory the operation should be fairly quick. |
Up to 30 minutes or so. |
can you try running the same complex app with the deep stack at a smaller scale? One potential issue is contention on the file system, since all the STATDs will be trying to parse your executable's debug information. |
I have run it with 300 nodes, but still it takes a long time. I think that you are right about STATD being stuck waiting for the debug info from the file system. I looked on one node with the bpf profile and off-cpu tools, and most of the time STATD is off-CPU while stack traces are being collected. Also, a large number of Lustre RPCs are being issued. Can this be improved? I need to think and ask around. |
I have run stat-cl with -l BE -l SW -l CP on the case that hangs (880 nodes x 8 MPI/node). I wonder if a hang appears in STAT itself. Can one run STAT on STAT? :) One more note: |
Yes, you can run STAT on STAT, but the latter instance is likely to hang for the same reason. I think the problem is that without the MRNet tree, the STAT FE is trying to connect to 880 BEs and send data. This is clearly not scalable and will either take too long or hang. I'm hoping we can get the fix in MRNet to tolerate the "." in your system's node hostnames so we can let STAT/MRNet deploy a more scalable tree. |
Perhaps this helps: internally in our cluster the short node names are enough for comms. So if you can get them with something like |
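The command this comment trails off into was not preserved; as a simple example, the short name can usually be obtained with the standard hostname utility:

```sh
hostname -s   # prints the node name without any domain/suffix such as ".bullx"
```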
I looked at the STAT source code and found that we already have a STAT_FE_HOSTNAME environment variable. @antonl321 in your case from above, can you set this variable to "ac1-1001" or whatever the short hostname is from where you invoke STAT? If you look at the .top file that it produces, the value that you specify should be on the first line, rather than the hostname with ".bullx". You might want to test at a smaller scale, say 4 nodes and then when you run stat-cl, also add "-d 2" to the stat-cl args. |
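A sketch of that test from the shell, with the illustrative short hostname from above and a placeholder for the job launcher's PID:

```sh
# Use the short hostname of the node from which stat-cl is invoked
export STAT_FE_HOSTNAME=ac1-1001
# Attach with a depth-2 tree at small scale, as suggested above
stat-cl -d 2 <srun_pid>
```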
I'm a bit confused, what exactly should be in STAT_FE_HOSTNAME? The name of the head node, or hostname pattern, or ...? |
It should be the short hostname of the node from where you run stat-cl or stat-gui. Does that make sense? |
Yes, it does. I'll test this when I get some free nodes on the system. Hopefully this evening or tomorrow morning. |
With STAT_PE_HOSTNAME defined and without the -d option, stat-cl fails quickly with this sort of message (see below). <Jul 27 20:45:09> <STAT_BackEnd.C:655> registering signal handlers |
Perhaps it's just a typo in your message, but you said you set "STAT_PE_HOSTNAME" when it should be "STAT_FE_HOSTNAME", that is, it looks like you put a "P" where it was supposed to be an "F". Regarding the "-d" option, that has a different meaning for the STATD process than it would with stat-cl, and "-d 1" is the correct option for STATD. That aside, can you send the .top file in the stat_results directory and also the debug log files that STAT generated? |
I have checked: in my script I have the right variable name. I mistyped the variable name in my message. |
Do you happen to have the STAT FE log file too? This would be enabled with -l FE. It looks like this is a different error in the STATD daemons when they are trying to receive their connection info from the STAT FE, and it would help to see where the FE is in the process. That aside, it looks like there is a potential "fix" in MRNet. Can you modify your var/spack/repos/builtin/packages/mrnet/package.py in Spack and add:
After that you will have to |
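The rest of that sentence was cut off; presumably it refers to rebuilding STAT against the patched MRNet. A hedged Spack sketch (the mrnet@hntest spec name follows the later comments; your exact install line may differ):

```sh
# Remove MRNet and anything built against it, then rebuild STAT on the hntest MRNet
spack uninstall --dependents mrnet
spack install stat ^mrnet@hntest
```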
I should add that with the "hntest" mrnet, please add XPLAT_USE_FQDN=1 when you run and please do not specify STAT_FE_HOSTNAME. |
And when you test this, try at small scale again, perhaps 4 nodes, and add the "-d 2" option to stat-cl. |
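A rough sketch of that small-scale test (placeholder PID; flags as discussed above):

```sh
export XPLAT_USE_FQDN=1    # required when running with the hntest MRNet
unset STAT_FE_HOSTNAME     # per the note above, do not set this for this test
stat-cl -d 2 <srun_pid>    # 4-node job, depth-2 MRNet tree
```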
stat logs with -l FE attached. I have rebuilt mrnet and stat. |
In both cases, I still see this in the STAT FE log:
Just to confirm, did you set XPLAT_USE_FQDN=1 in your environment? Also, can you cd to the MRNet installation prefix and grep for that environment variable to make sure you picked up the changes:
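The grep command itself was not preserved here; a hedged equivalent, assuming a Spack-installed MRNet:

```sh
# Locate the installed MRNet and confirm the hntest sources
# (which consult XPLAT_USE_FQDN) were actually picked up
cd "$(spack location -i mrnet@hntest)"
grep -r XPLAT_USE_FQDN . 2>/dev/null
```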
|
Also did you get any error output to the terminal when you ran? |
I noticed in the topology file:
That ad-2051 appears twice with the :0 rank. This is because the string comparison thinks the .bullx is a different host than the one without .bullx, and this may cause a conflict in MRNet. I pushed a change to the STAT develop branch that may fix this. Can you uninstall STAT with spack and rebuild |
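A hedged sketch of that rebuild (the develop version of the STAT package and the mrnet@hntest spec are taken from this thread; adjust to your usual install line):

```sh
spack uninstall stat
spack install stat@develop ^mrnet@hntest   # develop carries the hostname-comparison change
```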
I tested my mrnet installation; it matches the variable as in your example. I launch stat-* from a script which has this line |
STAT doesn't dump the values of env variables, but that is a good idea. Please keep using mrnet@hntest from here on out. This is what I think will be needed moving forward, and if/when we get it to work, it will hopefully be merged into the MRNet master and then a release. |
I have installed the new version. I'm afraid that we are back to the negative value error when using -d 2
logs attached |
Can you add --procs=4 to your stat-cl command and let me know if that helps? |
Yes, that fixes the error. The small test is sampled, but the large one hangs as before. |
can you please send me the debug log files for the large-scale hang case? |
This run generated a lot more data in the STATD files. In total I have 7.8 GB of logs, but it hangs nevertheless. I put only one STATD file, together with the other logs, in the attached tar ball. |
you can reduce the size of the BE logs if you remove the "-l SW" flag. We don't really want to debug Stackwalker anyway. That aside, the one STATD log file you did send looks like it gathered samples OK and sent them to the FE. It may not be easy, but can you check whether each of the STATD log files ends with "return value 0"?
If you find one that doesn't end with that, can you please send the file? |
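One way to find such files might be a loop like the following (the log naming pattern is a guess; adjust the glob to your stat_results layout):

```sh
# Print STATD logs whose last line is not the expected "return value 0"
for f in stat_results/*/*STATD*.log; do
  tail -n 1 "$f" | grep -q "return value 0" || echo "$f"
done
```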
In fact there are about 162 files that have "return value 0" on the last line and about 760 with something else on the last line. I attach one of them. |
Even in that file I see:
This is the 5th-to-last line. So this STATD actually gathered samples OK and sent them to the FE. However, it looks like the FE never receives the merged traces. I'm still wondering if there is a lingering STATD. Can I trouble you to run again without "-l SW"? The log files should be a lot smaller (and easier for me to parse); then hopefully it'll be small enough that you can send them all to me and I can poke around instead of asking you to look. |
Surprise: without -l SW it collected the first 2 traces, but then it hung. |
But it hung at the first "Sampling traces..." in a second run without any debug flags. Hmm! |
All of the 880 STATD processes were killed with signal 15 (SIGTERM):
Is it possible that the application exited while STAT was attached and gathering samples? As far as I can see, there is no error in STAT itself, but if the application exits, then SLURM may be killing off the STATD daemons. Or is it possible that you are running at the edge of the memory limit and getting OOM killed? The STATD can consume quite a bit of memory when gathering stack traces with function and line information. You could try running STAT with "-t 1". This will cause STAT to only collect one stack trace with just function frames (no line number info). |
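For example, something along these lines (placeholder PID):

```sh
# Per the comment above: a single trace, function frames only, lower memory use
stat-cl -t 1 <srun_pid>
```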
In looking at the *STAT.log file, it appears that STAT is taking a while to process the third trace. The good news is that you should have 2 good .dot files in the stat_results directory that you can look at. I'm not sure if it is legitimately hanging on the third one or if it is just taking a long time. Did you set a time limit on your job, or, again, did the job perhaps exit in the meantime? If you can tar up the run-specific stat_results directory, that may help too. |
I think that the job was terminated because I killed the app to launch a new experiment. Unfortunately later tries weren't successful. I was wondering, does the gdb alternative for stack trace collection work? I saw it mentioned in the documentation as a build option. |
Using GDB instead of Stackwalker for the daemons won't help in this case. Stackwalker got the traces OK and the daemons successfully sent them to the STAT frontend; it's just that the STAT frontend is churning on the graphs. I'm pretty sure that if you wait long enough the STAT frontend operation would complete, but I couldn't tell you how long it might take. You could attach gdb (or even use STAT's serial attach) to the STAT frontend python process when it is in this operation to make sure it is progressing. For the log files you sent, it looks like you killed it off after 6 minutes. Would you be willing to let it run for a half hour or an hour to see if it completes? The duration of this operation is dependent on the number of nodes/edges in the traces. Actually, another thing that may help speed things up is if you run stat-cl with the "-U" flag. Instead of keeping track of every MPI rank, the edges will just include the total count of ranks that visited that edge and record the lowest-ranked process that took that call path. Sorry for yet another test idea, but I think we are making progress. I will also point out that STAT does have the ability to merge certain core files via the core-stack-merge script that is included in the source. It likely won't work out of the box for something like gstack, but it currently works for lightweight core files (this was done back in the days of IBM BG/L, BG/P, BG/Q) and it can also use gdb to parse full core files. |
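A hedged sketch of those two suggestions (the PIDs are placeholders; the frontend is the python process driving stat-cl/stat-gui):

```sh
# Attach gdb to the STAT frontend to confirm it is still making progress
gdb -p <stat_frontend_pid>

# Re-run with -U: edges carry a visit count plus the lowest rank, not full rank lists
stat-cl -U <srun_pid>
```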
I have tried to sample the large scale run with stat-gui. The first try came back with a graph in a few minutes, but the second one got stuck in "Sample Stack Traces" for more than 15 minutes. It could be that the sampling of some processes in some peculiar states gets stuck. Would it be possible and wise to implement a timeout for the stack sampling? Assuming that a small number of processes are stuck, I think that the partial information (with a warning attached) could be useful to the user. |
I was actually hoping to see the STAT frontend process (which is python), not the STATD process. From the debug logs, the STATD daemons are waiting on the STAT frontend. |
Also, we had previously explored the idea of timeouts, but while a good idea and conceptually simple, it was deemed too difficult to implement. Thanks for the suggestion/request, though. |
Well, I think I have fixed it! Or at least I made some good progress. |
@antonl321 I think there were a few issues raised here. Have they all been resolved? Were you able to run OK at scale? Please let me know if this is still an open issue or if I may close it. |
Hi, Unfortunately the freezing of STAT at large scale has come back. |
have you built STAT with fgfs? The fgfs variant should be enabled by default in Spack unless you specify ~fgfs in the install command. Since it is stuck on sampling, I'm wondering if it is contention on the file system, which fgfs should alleviate. |
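One way to double-check the installed variant with Spack (a sketch; spec output formatting may differ between Spack versions):

```sh
spack find -v stat | grep fgfs   # +fgfs means enabled, ~fgfs means disabled
```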
fgfs is enabled. |
Can you gather STAT logs for me? In the "Advanced" tab of the attach dialog in the STAT GUI, you can select "Log Frontend", "Log Backend", "Log CP". Please do not log SW or SWERR at this time because they will generate large log files and at the scale you are running, this is probably not a good idea. |
Sure, but it might take a bit of time. It's a bit more difficult to get large runs on my system. Hopefully early next week. |
one smaller scale test you can run is to launch a job on 4 nodes. Then in the stat-gui attach dialog, go to the "Topology" tab and set "Specify Topology Type" to "depth". Then click on "Modify Topology" and change it to "2". After that go back to the "Attach" tab and see if you can attach to a job as usual. This will test to make sure the MRNet tree is functioning properly on your system. If you can gather me the logs from this small scale test, I can start to debug and see if anything stands out that may cause issues at scale. |
I did a run on 16 nodes and the depth=2 worked fine. I attach the logs for the same run (default params for attach). |
thanks for the logs. It looks like fgfs is working and should scalably handle reading binaries. I will await logs from the larger run if you are able to produce them. |
Hi, I got some logs for a run with 440 nodes. The node on which STAT ran is ad6-175. |
Hi,
I did some tests with the latest stat-cl.
It works fine on runs with 45 nodes x 16 MPI,
but it fails with the error below on a run with 880 nodes x 8 MPI.
In stat-cl.err I see the following: