If an end-user has a cluster with, say, 3 master, 3 data, and 1 ML node, and they cannot open an ML job because the cluster has run out of memory, they get a pretty awful error message. Each node responds saying it cannot open the job because it is not an ML node, and the ML node responds saying it is full. We also get an additional message saying the datafeed could not start. See below.

Can we trap this and display a top-level error? e.g. "Insufficient available memory. Add more nodes or memory, or close unused jobs." I would expect the full error to still be useful in troubleshooting, so it could be kept but hidden behind an "expand to view full details" control.
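As an illustration only, here is a minimal sketch of the kind of roll-up I mean (hypothetical names, not the actual Elasticsearch API): if every ML-capable node reports it is full, surface a single memory message instead of the per-node noise, and keep the raw per-node detail for the expandable section.

```java
// Hypothetical sketch: collapse per-node "cannot open job" reasons into one
// top-level, user-facing message, keeping the raw detail for troubleshooting.
import java.util.List;
import java.util.stream.Collectors;

record NodeFailure(String nodeId, String reason, boolean isMlNode) {}

class JobOpenError {
    static String summarize(List<NodeFailure> failures) {
        // If every ML-capable node reported it was full, the real cause is
        // memory, not node type, so surface that instead of per-node messages.
        boolean hasMlNode = failures.stream().anyMatch(NodeFailure::isMlNode);
        boolean mlNodesFull = failures.stream()
                .filter(NodeFailure::isMlNode)
                .allMatch(f -> f.reason().contains("full"));
        if (hasMlNode && mlNodesFull) {
            return "Insufficient available memory to open the job. "
                 + "Add more nodes or memory, or close unused jobs.";
        }
        return "Could not open the job on any node.";
    }

    // Full per-node detail, suitable for an "expand to view full details" section.
    static String details(List<NodeFailure> failures) {
        return failures.stream()
                .map(f -> f.nodeId() + ": " + f.reason())
                .collect(Collectors.joining("\n"));
    }
}
```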
Relates to elastic/elasticsearch#29950, and possibly #17961
cc @skearns64