
Eating 100% CPU when setting com.sun.enterprise.server.logging.GFFileHandler.logtoFile=false #3506

Closed
poikilotherm opened this issue Dec 10, 2018 · 4 comments · Fixed by #3508
Labels
Type: Bug Label issue as a bug defect

Comments

@poikilotherm
Contributor

poikilotherm commented Dec 10, 2018

Description

Summary: I see a 100% CPU load on one CPU core when starting my application, due to a "RUNNABLE" thread.

When using the https://github.com/payara/docker-payaraserver-full Docker image (Payara 5.183 with OpenJDK 1.8.0u171), logging to file is disabled via:

asadmin set-log-attributes com.sun.enterprise.server.logging.GFFileHandler.logtoFile=false

(See the Dockerfile.)

The application logs to the console, which is fine, but it keeps burning a full CPU core indefinitely.

I don't know whether #2117 is related; it seems to be the only other issue I could find.

Expected Outcome

The server should not be using a complete CPU core for logging...?

Current Outcome

Output of top from inside the container, see PID 100:
[screenshot: top -H output showing the thread with PID 100 at 100% CPU]

Related thread dump, with nid=0x64 == PID 100:

"GFFileHandler log pump" #27 daemon prio=5 os_prio=0 tid=0x00007fbedc175800 nid=0x64 runnable [0x00007fbef0ffd000]
   java.lang.Thread.State: RUNNABLE
    at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:297)
    at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
    - locked <0x00000003dfc96488> (a java.io.OutputStreamWriter)
    at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
    at java.util.logging.StreamHandler.flush(StreamHandler.java:259)
    - locked <0x00000003dfc963c8> (a com.sun.enterprise.server.logging.GFFileHandler)
    at com.sun.enterprise.server.logging.GFFileHandler.log(GFFileHandler.java:960)
    at com.sun.enterprise.server.logging.GFFileHandler$3.run(GFFileHandler.java:610)
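The stack above shows the pump thread flushing in a tight loop. The spin is easy to reproduce in isolation; the following is a minimal standalone sketch (class and method names are mine, not Payara's) showing how skipping the blocking take() when logToFile is false turns the pump into a busy loop:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class PumpSpinDemo {
    // Counts how often the pump body runs in `millis` ms for a given logToFile setting.
    static int spinCount(boolean logToFile, long millis) throws InterruptedException {
        BlockingQueue<String> pendingRecords = new ArrayBlockingQueue<>(5000);
        AtomicInteger iterations = new AtomicInteger();
        Thread pump = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                if (logToFile) {
                    try {
                        pendingRecords.take();   // blocks while the queue is empty
                    } catch (InterruptedException e) {
                        return;                  // normal shutdown path
                    }
                }
                iterations.incrementAndGet();    // stand-in for flush() + rotation check
            }
        });
        pump.setDaemon(true);
        pump.start();
        Thread.sleep(millis);
        pump.interrupt();
        pump.join();
        return iterations.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("logToFile=false iterations: " + spinCount(false, 100));
        System.out.println("logToFile=true  iterations: " + spinCount(true, 100));
    }
}
```

With logToFile=true the thread parks in take() and does nothing; with logToFile=false it spins through millions of flush cycles per second, which matches the RUNNABLE thread in the dump.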

Steps to reproduce (Only for bug reports)

While the steps below remain a valid way to reproduce, @pioneer2k stated in payara/docker-payaraserver-full#72 that this is a problem even when just starting the domain, without any deployment at all.


You should be able to reproduce this by using my feature branch.
Docker, Maven and Git are assumed to be installed:

git clone https://github.com/poikilotherm/dataverse -b 5292-small-container
mvn -Pcontainer clean package docker:build docker:run -DskipTests

Then enter the container via:

docker exec -it dataverse-1 /bin/bash
top -H

And watch it burn the CPU.

Environment

  • Payara Version: 5.183
  • Edition: Full
  • JDK Version: OpenJDK 1.8.0u171
  • Operating System: Linux/Docker
@OndroMih
Contributor

I can confirm the CPU spike when running the Maven build; there is no CPU spike when starting a plain Payara Server 5.183 in Docker without any deployments.
I can, however, see the CPU spike with Payara Server 5.184 in Docker even without any deployments.

@OndroMih OndroMih added 0:Triaged Type: Bug Label issue as a bug defect labels Dec 10, 2018
@svendiedrichsen
Contributor

svendiedrichsen commented Dec 10, 2018

The property logtoFile on a GFFileHandler doesn't make sense to me. The handler's only purpose is to log to a file, isn't it?
I can see in the code that when logtoFile is false it doesn't publish the log records at all. It only flushes the writer and tries to rotate the log file. The thread never waits, sleeps or blocks, which leads to constant flushing and consumes an entire processor core.

    public void log() {

        if (logToFile) {
            LogRecord record;

            // take is blocking so we take one record off the queue
            try {
                record = pendingRecords.take();
                super.publish(record);
            } catch (InterruptedException e) {
                return;
            }

            // now try to read more.  we end up blocking on the above take call if nothing is in the queue
            List<LogRecord> v = new ArrayList<>();
            int msgs = pendingRecords.drainTo(v, flushFrequency);
            for (int j = 0; j < msgs; j++) {
                super.publish(v.get(j));
            }
        }
        flush();
        if ((rotationRequested.get())
                || ((limitForFileRotation > 0)
                && (meter.written >= limitForFileRotation))) {
            // If we have written more than the limit set for the
            // file, or rotation requested from the Timer Task or LogMBean
            // start fresh with a new file after renaming the old file.
            synchronized (rotationLock) {
                rotate();
                rotationRequested.set(false);
            }
        }
    }

Besides that, the pendingRecords BlockingQueue never gets drained, so once it fills up to its limit (hardwired to 5000 entries) it will block every thread that tries to write a log record. That's where this gets dangerous: over time it may freeze all server threads and bring the whole server to a halt.
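The queue-fill danger described above can be demonstrated with a bounded queue of the same hardwired capacity (the class name is mine, for illustration only):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueFillDemo {
    public static void main(String[] args) throws InterruptedException {
        // Same hardwired capacity as GFFileHandler's pendingRecords queue.
        BlockingQueue<String> pendingRecords = new ArrayBlockingQueue<>(5000);

        // Fill the queue: nothing drains it when the pump skips publishing.
        for (int i = 0; i < 5000; i++) {
            pendingRecords.put("record " + i);
        }

        // A further put() would now block the logging thread indefinitely;
        // offer() demonstrates the full queue without blocking.
        boolean accepted = pendingRecords.offer("one more record");
        System.out.println("queue full, further record accepted: " + accepted);
    }
}
```

In the real server the publishing side uses a blocking call, so record 5001 would hang the application thread that tried to log, which is the freeze scenario described above.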

IMHO the property should go. If you don't want the logging to write to a file don't configure a file handler.

@poikilotherm
Contributor Author

There is a property logToConsole, but it does not seem to be used anywhere, neither in this class nor elsewhere.

There might be reasons why somebody wants the logs piped both to STDOUT/STDERR and to the logfile, so maybe the property should stay, but the thread should not be started in the first place if it is set to false?
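The direction suggested here can be sketched as follows. This is my own illustration, not the actual Payara patch: the pump thread is only started when logToFile is true, records are only queued when logToFile is true, and the queue is drained on shutdown (the `written` list stands in for the real file writer):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class FileHandlerSketch {
    final List<String> written = Collections.synchronizedList(new ArrayList<>());
    private final BlockingQueue<String> pendingRecords = new ArrayBlockingQueue<>(5000);
    private volatile boolean logToFile;
    Thread pump;

    FileHandlerSketch(boolean logToFile) {
        this.logToFile = logToFile;
        if (logToFile) {                          // start the pump only when it has work
            pump = new Thread(this::run, "log pump");
            pump.setDaemon(true);
            pump.start();
        }
    }

    void publish(String record) {
        if (logToFile) {                          // enqueue only when logging to file
            pendingRecords.offer(record);
        }
    }

    void disableFileLogging() {
        logToFile = false;                        // also exits the pump loop
        if (pump != null) {
            pump.interrupt();                     // wake the pump out of take()
        }
    }

    private void run() {
        while (logToFile) {
            try {
                written.add(pendingRecords.take());  // blocks instead of spinning
                drainPending();
            } catch (InterruptedException e) {
                break;
            }
        }
        drainPending();                           // flush leftover records on disable
    }

    private void drainPending() {
        List<String> batch = new ArrayList<>(pendingRecords.size());
        pendingRecords.drainTo(batch);
        written.addAll(batch);
    }

    public static void main(String[] args) throws InterruptedException {
        FileHandlerSketch handler = new FileHandlerSketch(true);
        handler.publish("hello");
        Thread.sleep(100);
        System.out.println("written: " + handler.written);
    }
}
```

With logToFile=false no thread is created at all, so there is nothing left to spin.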

svendiedrichsen added a commit to svendiedrichsen/Payara that referenced this issue Dec 10, 2018
…start pump thread if logging to file. Store into queue only if logging to file.
@svendiedrichsen
Contributor

@poikilotherm There is a mechanism in JUL to log to several different handlers in parallel.
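For reference, that JUL mechanism looks like this: a logger can carry both a ConsoleHandler and a FileHandler at the same time, so console plus file output does not need a logtoFile flag on the file handler. The file name app.log and the class are illustrative:

```java
import java.io.IOException;
import java.util.logging.ConsoleHandler;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class DualHandlerDemo {
    public static void main(String[] args) throws IOException {
        Logger log = Logger.getLogger("app");
        log.setUseParentHandlers(false);          // avoid duplicate console output

        ConsoleHandler console = new ConsoleHandler();
        FileHandler file = new FileHandler("app.log", true);  // append mode
        file.setFormatter(new SimpleFormatter()); // default formatter is XML

        log.addHandler(console);
        log.addHandler(file);

        log.info("this record reaches both the console and app.log");
        file.close();                             // flush and release the file lock
    }
}
```

Removing the FileHandler from the logger (rather than flipping a flag on it) is then the natural way to stop logging to a file.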

@arjantijms arjantijms pinned this issue Jan 2, 2019
@MattGill98 MattGill98 unpinned this issue Jan 14, 2019
MattGill98 pushed a commit that referenced this issue Feb 1, 2019
* #3506 prevent processor wastage when not logging to file. Only start pump thread if logging to file. Store into queue only if logging to file.

* add logToFile to the pumps loop condition to stop looping if logToFile is set to false.

* draining pendingRecords if the pump gets disabled. reusing the draining code for the log method.

* initialize collection to drain to with the correct size

* Adding call to flush() after draining pending records.