
Java error=12, cannot allocate memory (also chmod) #1403

Closed
planetf1 opened this issue Nov 21, 2016 · 13 comments

Comments

@planetf1

Please use the following bug reporting template to help produce actionable and reproducible issues. Please try to ensure that the reproduction is minimal so that the team can go through more bugs!

  • A brief description

Java, chmod and other tools fail with "Cannot allocate memory" when trying to build an open-source project (Apache Ranger). The same code builds fine on non-MS Unix systems with less memory (e.g. 4 GB; this system has 16 GB).

  • Expected results

Apache Ranger builds ok

  • Actual results (with terminal output if applicable)

jonesn@DESKTOP-2V27KS0:~/src/ranger$ cat ../ranger100.log | grep allocate
java.io.IOException: Cannot run program "chmod": error=12, Cannot allocate memory
Caused by: java.io.IOException: error=12, Cannot allocate memory
Caused by: java.io.IOException: Cannot run program "chmod": error=12, Cannot allocate memory
Caused by: java.io.IOException: error=12, Cannot allocate memory
java.io.IOException: Cannot run program "chmod": error=12, Cannot allocate memory
Caused by: java.io.IOException: error=12, Cannot allocate memory
Caused by: java.io.IOException: Cannot run program "chmod": error=12, Cannot allocate memory
Caused by: java.io.IOException: error=12, Cannot allocate memory
Caused by: java.io.IOException: Cannot run program "chmod": error=12, Cannot allocate memory
Caused by: java.io.IOException: error=12, Cannot allocate memory

(will attach full log)
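The failures above all come from the JVM launching an external chmod process. A minimal sketch of that call pattern (the file name is hypothetical, chosen so chmod has something safe to operate on) looks like this; under WSL's strict commit accounting, the start() call is what fails with error=12 even when free shows plenty of memory:

```java
import java.io.File;
import java.io.IOException;

public class ForkRepro {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Create a scratch file so chmod has something safe to operate on.
        File f = File.createTempFile("forkrepro", ".tmp");
        f.deleteOnExit();

        // ProcessBuilder.start() is where UNIXProcess.forkAndExec runs;
        // this is the call that dies with error=12 when fork() briefly
        // needs commit charge for the whole (large) JVM address space.
        Process p = new ProcessBuilder("chmod", "600", f.getAbsolutePath()).start();
        int rc = p.waitFor();
        System.out.println("chmod exit code: " + rc);
    }
}
```

On a healthy system this prints exit code 0; on an affected WSL build, start() throws the IOException shown in the log instead.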

  • Your Windows build number
    14971

  • Steps / All commands required to reproduce the error from a brand new installation
    Build apache ranger as per http://ranger.apache.org/quick_start_guide.html
    (Note - open jdk 1.8 is installed, and as of 21/11/16 a 1.8 patch is needed as per https://reviews.apache.org/r/53924/ - business as usual. ...)

  • Strace of the failing command
    not obtained

  • Required packages and commands to install
    See apache page above

See our contributing instructions for assistance.

@planetf1
Author

See http://pastebin.com/26fA3XHX for the ranger build logs. Search for "allocate"

Also:
jonesn@DESKTOP-2V27KS0:~/src$ free -m
                   total    used    free  shared  buffers  cached
Mem:               15946   10281    5664      17       33     184
-/+ buffers/cache:         10064    5882
Swap:               6873     473    6399

@planetf1
Author

And here's the fork failure from the log:

Caused by: java.io.IOException: error=12, Cannot allocate memory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:247)
at java.lang.ProcessImpl.start(ProcessImpl.java:134)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
... 40 more

And some version info:

jonesn@DESKTOP-2V27KS0:/mnt/c/tmp$ dpkg -l | grep openjdk
ii openjdk-7-jdk:amd64 7u121-2.6.8-1ubuntu0.14.04.1 amd64 OpenJDK Development Kit (JDK)
ii openjdk-7-jre:amd64 7u121-2.6.8-1ubuntu0.14.04.1 amd64 OpenJDK Java runtime, using Hotspot JIT
ii openjdk-7-jre-headless:amd64 7u121-2.6.8-1ubuntu0.14.04.1 amd64 OpenJDK Java runtime, using Hotspot JIT (headless)
ii openjdk-8-jdk:amd64 8u111-b14-3~14.04.1 amd64 OpenJDK Development Kit (JDK)
ii openjdk-8-jdk-headless:amd64 8u111-b14-3~14.04.1 amd64 OpenJDK Development Kit (JDK) (headless)
ii openjdk-8-jre:amd64 8u111-b14-3~14.04.1 amd64 OpenJDK Java runtime, using Hotspot JIT
ii openjdk-8-jre-headless:amd64 8u111-b14-3~14.04.1 amd64 OpenJDK Java runtime, using Hotspot JIT (headless)
jonesn@DESKTOP-2V27KS0:/mnt/c/tmp$ java -version
openjdk version "1.8.0_111"
OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14)
OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode)

@aseering
Contributor

Hi @planetf1 -- thanks for reporting this! I suspect that it might have the same underlying cause as #1019 ; specifically, some operations in WSL currently don't work properly if your system's page file is small as compared to your available RAM. Could you try the instructions in the last comment on that ticket to see if they help your situation?

@therealkenc
Collaborator

#1286 is also a dup.

@planetf1
Author

Thanks. My virtual memory settings currently are
c: System Managed (11205 MB)
d: None

I have 16GB ram

Unfortunately I'm also rather constrained on storage space -- it's a mere 120 GB SSD. I'll try setting it to a fixed size, say 16 GB, and report back.

@planetf1
Author

planetf1 commented Nov 22, 2016

Setting the page file to a fixed 16 GB allowed the compile to progress past this step. However, it failed later with a different error: one of the test cases seemed to exit prematurely with no obvious cause. I can't describe anything precise enough here to debug the MS bash system in particular, as the open-source project is a new one I'm still getting up to speed with. I can only say the build is fine in regular Ubuntu under Hyper-V, on a remote system, or on CentOS, so I'm going to revert to those environments for now. It would still be good to understand the initial memory issue, though -- I would have thought things should work fine with a system-managed page file? Thanks.

@aseering
Contributor

I assume (though I don't know for sure) that the memory limitation is due to the difference in the behavior of the NT kernel vs the Linux kernel:

Windows assumes that, if an application maps ("asks the kernel to let it use in the future") a block of virtual-memory pages, it does actually intend to use them eventually. It therefore requires that enough memory currently be available, either as real physical memory or as swap, to allow the entirety of any given mapping to be allocated and to have data assigned to it. If it gets some unmanageably-large allocations (I'm sure there are specific rules but I don't know the rules offhand), it doesn't necessarily grow the page file; it may instead choose to return an error code to indicate to the application that it probably doesn't want to be allocating that much memory on this machine. This approach can be a little clunky/limiting, but it has some nice correctness guarantees and it tends to encourage good application behavior.

Linux pretty much always lets applications map whatever they want, whether or not it's reasonable on the current machine. Therefore, for simplicity, some applications' runtimes (notably including Java; I'm not personally aware of any other major applications that do this to such a large extent) tend to map giant chunks of memory up front, presumably so that they don't have to think about mapping more memory later. They then allocate and use only as much memory as they actually need. If they allocate more memory than is available on the system (and the swap file runs out of space, etc.), which they can do because the kernel has already promised the mapping, at that point it's too late for the kernel to return an error code; the only correct thing it can do is to force-kill the application.
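The map-versus-use distinction described above is visible from inside the JVM itself: maxMemory() reflects the heap ceiling the JVM mapped up front (the -Xmx value), while totalMemory() is what it has actually committed so far. A small sketch:

```java
public class HeapReserve {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // maxMemory(): the heap ceiling the JVM reserved address space
        // for up front (-Xmx); totalMemory(): what it has actually
        // committed so far; freeMemory(): unused space within that.
        System.out.println("max heap (mapped up front): " + rt.maxMemory() / mb + " MB");
        System.out.println("committed so far:           " + rt.totalMemory() / mb + " MB");
        System.out.println("free within committed:      " + rt.freeMemory() / mb + " MB");
    }
}
```

On Linux the gap between "mapped up front" and "committed so far" costs nothing; under strict commit accounting the whole mapping has to be backed by RAM plus page file, which is why a small page file makes fork() fail here.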

Supporting the Linux behavior on Windows would therefore require either allowing the Windows pagefile to grow much larger than the amount of memory that Java is likely to actually use, just in case it does use it (which would be problematic on machines like yours with limited disk space), or implementing the Linux out-of-memory killer in Windows, which I would expect to require a change to the core NT kernel (not WSL-specific code).

(Disclaimer: I'm not actually a WSL dev, just a Windows and Linux user; this is just my best guess at the cause based on the behavior of the two kernels' public APIs, it's certainly possible that there's a different limitation going on under the hood.)

@therealkenc
Collaborator

The chakracore guys allocate 32GB of virtual memory out of the gate, lol. I made a 40GB page file to make it start. #708.

@benhillis
Member

benhillis commented Nov 22, 2016

Linux also has some interesting semantics around noreserve that we aren't currently honoring 100%. We have a work item to improve this scenario in a few ways:

  1. Count swap size as the maximum page-file size, not the current page-file size
  2. Potentially allow overcommit (toggleable via a procfs file, just like real Linux)
  3. Honor noreserve semantics

This work is in progress.
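On real Linux, the overcommit mode mentioned in item 2 is exposed through procfs as vm.overcommit_memory (0 = heuristic overcommit, the default; 1 = always allow; 2 = strict accounting). A small sketch that reads the knob, falling back gracefully where procfs doesn't expose it:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class OvercommitCheck {
    public static void main(String[] args) throws Exception {
        // /proc/sys/vm/overcommit_memory controls whether mmap requests
        // are granted beyond available RAM + swap:
        //   0 = heuristic overcommit, 1 = always overcommit, 2 = strict.
        Path knob = Paths.get("/proc/sys/vm/overcommit_memory");
        if (Files.exists(knob)) {
            String mode = new String(Files.readAllBytes(knob)).trim();
            System.out.println("vm.overcommit_memory = " + mode);
        } else {
            System.out.println("overcommit knob not exposed on this system");
        }
    }
}
```

Writing 1 to that file (or `sysctl vm.overcommit_memory=1` as root) is how real Linux is switched to always-overcommit; the work item above would give WSL an equivalent toggle.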

@planetf1
Author

Thanks for the update. I'm familiar with how memory allocation strategies differ between OSs, so it's good to hear this is an area that's evolving. I'll keep prodding at it with other projects and see how things go, especially as I'm tracking the Insider builds anyway.

@benhillis
Member

This should be resolved in recent Windows Insider builds. Please reopen if you continue to have issues.

@afwn90cj93201nixr2e1re

@sunilmut
Member

sunilmut commented Mar 1, 2017

@shelru - It seems like you are running build 14393, which is the Anniversary Update build. This issue is fixed in the insider builds, the updates for which are not yet available to Anniversary Update build. You can either wait for the Creators Update build (which will be an update to the Anniversary Update build) or try out the Insider builds.


7 participants