
chore(jmc-core): Upstream updates #201

Merged · 9 commits · Apr 14, 2023

Conversation

@Josh-Matsuoka (Contributor) commented Mar 21, 2023

Hi,

This PR updates the embedded JMC sources to bring them in line with upstream. There is ongoing work upstream to refactor jmc-core and bring in most of the classes we're using, but it's not going to be resolved in time, so updating the embedded sources makes the most sense for the moment.

Addresses #95

TODO, for a separate PR or perhaps for this one: InterruptibleReportsGenerator relies on old rules code that was updated between 7.1.1 and 8.2.0 and doesn't build against 8.2.0, so bumping the Maven dependency from jmc-7.1.1 will require a rework of InterruptibleReportsGenerator.

@Josh-Matsuoka Josh-Matsuoka changed the title Upstream updates chore(jmc-core): Upstream updates Mar 21, 2023
@Josh-Matsuoka Josh-Matsuoka marked this pull request as ready for review March 21, 2023 15:52
@andrewazores andrewazores added the chore Refactor, rename, cleanup, etc. label Mar 21, 2023
@ebaron (Member) left a comment

Hey Josh, thanks for doing this! A couple of general comments:

  • Were there any packages/files moved to core by 8.2 that can now be removed from this embedded source, or are they all still needed?
  • It seems like GitHub is complaining about no newline at the end of several files. I'm not sure if it's like this in the JMC source too.

@ebaron (Member) commented Apr 4, 2023

@Josh-Matsuoka Also, do you know how difficult it will be to update the InterruptibleReportsGenerator code for 8.2? We'd really like to have jmc-core updated to 8.2 as well for Cryostat 2.3.

@Josh-Matsuoka (Contributor, Author)

@ebaron I'll take a look in a bit more detail and see if it's realistic to do in time for code freeze. Otherwise, the code we're currently embedding is still in the process of being moved upstream, so it wouldn't be in 8.2. I'll take a look at fixing the whitespace issue as well.

@Josh-Matsuoka (Contributor, Author)

Updated the PR to bump jmc-core up to 8.2

I've updated InterruptibleReportsGenerator to account for the Rules 2.0 changes, and added a few extra sanity checks to the rule evaluation code: rules are now checked for their dependencies (some rules have dependencies in Rules 2.0 — e.g. the GC rules rely on the GC Information Rule having run) and for the presence of the events they require.

This should fix the newline issue as well.
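The dependency/required-event checks described above can be sketched roughly like this. This is a minimal standalone sketch, not the actual InterruptibleReportsGenerator code: `RuleGate`, `canEvaluate`, and the String-keyed maps are hypothetical stand-ins for the real JMC rule types.

```java
import java.util.Map;
import java.util.Set;

public class RuleGate {
    // Decide whether a rule can be evaluated yet: any rule it depends on must
    // already have produced a result, and the recording must contain all of
    // the event types the rule requires.
    static boolean canEvaluate(Set<String> dependencies,
                               Set<String> requiredEvents,
                               Map<String, Object> resultsSoFar,
                               Set<String> availableEvents) {
        if (!resultsSoFar.keySet().containsAll(dependencies)) {
            return false; // e.g. GC rules wait on the GC Information Rule
        }
        return availableEvents.containsAll(requiredEvents);
    }

    public static void main(String[] args) {
        Set<String> deps = Set.of("GC Information Rule");
        Set<String> events = Set.of("jdk.GarbageCollection");
        // Dependency not yet evaluated: the rule must wait.
        System.out.println(canEvaluate(deps, events, Map.of(), events));
        // Dependency satisfied and required events present: the rule may run.
        System.out.println(canEvaluate(deps, events,
                Map.of("GC Information Rule", new Object()), events));
    }
}
```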

@Josh-Matsuoka Josh-Matsuoka requested a review from ebaron April 13, 2023 18:39
@andrewazores (Member)

Simple-looking integration test failures when running this update in a Cryostat image:

[ERROR]   ReportIT.testGetReportShouldSendFile:206 
Expected: is a map with size <7>
     but: map size was <8>
[ERROR]   TargetReportIT.testGetReportShouldSendFile:185 
Expected: is a map with size <7>
     but: map size was <8>

Quite a promising result! I'm running it in a smoketest now to see how things look. The fact that the rest of the itest suite passed should mean the basic functionality is all still working, however.

@Josh-Matsuoka (Contributor, Author)

I'll make a PR in cryostat main to fix those itests

@andrewazores (Member)

It looks like the "explanation" field is missing some string interpolations:

[image]

I'm not sure how JMC Rules 2.0 is meant to handle this, though.

@andrewazores (Member)

The explanation field is also optional, so many of these evaluations in the report no longer have any text description. I think the RuleEvaluation should be populated, for now, with a simple concatenation (newline-separated, then strip()ped) of the result's getSummary(), getExplanation(), getSolution(), and some construction of suggestRecordingSettings(). Later on, in the next development cycle, we can split those out into independent fields and also change the score back to a Severity.
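The concatenation idea could look something like the following. This is a sketch only: the real Rules 2.0 accessors return typed/formatted text rather than plain Strings, so `describe` and its String parameters are hypothetical stand-ins.

```java
public class RuleDescription {
    // Join summary, explanation, and solution into one newline-separated
    // blob, skipping any part that is missing, then strip surrounding
    // whitespace so an all-empty result collapses to "".
    static String describe(String summary, String explanation, String solution) {
        StringBuilder sb = new StringBuilder();
        for (String part : new String[] {summary, explanation, solution}) {
            if (part != null && !part.isBlank()) {
                sb.append(part).append('\n');
            }
        }
        return sb.toString().strip();
    }

    public static void main(String[] args) {
        // The null explanation is simply skipped rather than rendered.
        System.out.println(describe("High GC pressure", null, "Increase heap size"));
    }
}
```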

@andrewazores (Member)

[image]

After my last commit ^

@andrewazores (Member)

It seems like Severity.getLimit() really does just report the "limit" of the evaluation severity, i.e. if the result was Critical then the score is simply 75, since that is the lower bound of the Critical range. Does JMC 8+ Rules 2.0 still have the decimal score values the old implementation had, or does it just bucket results into OK/Warning/Critical?

@Josh-Matsuoka (Contributor, Author)

It buckets them, by the looks of it:

https://github.com/openjdk/jmc/blob/8c0d12982a921d6c7248ecc7cbf5672117715d81/core/org.openjdk.jmc.flightrecorder.rules/src/main/java/org/openjdk/jmc/flightrecorder/rules/Severity.java#L75

Rules call out to Severity.get() when setting it, and that method clamps the score to the nearest Severity threshold.
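A toy model of that clamping behaviour: the 0/25/75 limits match the thresholds discussed above, but the enum here is a standalone sketch, not the actual JMC Severity class (which has more members and formatting logic).

```java
public class SeverityDemo {
    // Standalone model of Severity bucketing: a computed [0, 100] score is
    // clamped down to the lower bound of the bucket it falls into.
    enum Severity {
        OK(0), WARNING(25), CRITICAL(75);

        private final int limit;

        Severity(int limit) { this.limit = limit; }

        int getLimit() { return limit; }

        static Severity get(double score) {
            if (score >= CRITICAL.limit) return CRITICAL;
            if (score >= WARNING.limit) return WARNING;
            return OK;
        }
    }

    public static void main(String[] args) {
        // A fine-grained 80.5 collapses to CRITICAL, whose limit is just 75.
        System.out.println(Severity.get(80.5) + " " + Severity.get(80.5).getLimit());
    }
}
```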

@andrewazores (Member)

Yeah, I guess it does. The Severity constructor is private, and the only score/limit values are those 0/25/75 thresholds.

But, individual rule evaluations do still have a [0.0, 100.0] computed score, ex.:

https://github.com/openjdk/jmc/blob/8c0d12982a921d6c7248ecc7cbf5672117715d81/core/org.openjdk.jmc.flightrecorder.rules.jdk/src/main/java/org/openjdk/jmc/flightrecorder/rules/jdk/io/FileReadRule.java#L116

And this looks like a common pattern:

		double score = RulesToolkit.mapExp100(longestDuration.doubleValueIn(UnitLookup.SECOND),
				infoLimit.doubleValueIn(UnitLookup.SECOND), warningLimit.doubleValueIn(UnitLookup.SECOND));

...

			return ResultBuilder.createFor(this, vp).setSeverity(severity)
					.setSummary(Messages.getString(Messages.FileReadRuleFactory_TEXT_WARN))
					.setExplanation(Messages.getString(Messages.FileReadRuleFactory_TEXT_WARN_LONG))
					.addResult(TypedResult.SCORE, UnitLookup.NUMBER_UNITY.quantity(score))
					.addResult(LONGEST_READ_AMOUNT, amountRead).addResult(LONGEST_READ_TIME, longestDuration)
					.addResult(AVERAGE_FILE_READ, avgDuration).addResult(TOTAL_FILE_READ, totalDuration)
					.addResult(LONGEST_TOTAL_READ, totalLongestIOPath).addResult(LONGEST_READ_PATH, fileName).build();

I think we can extract it back with result.getTypedResult(TypedResult.SCORE)...
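Extracting it back out could look roughly like this. A standalone sketch only: `Result` and its String-keyed typed map stand in for JMC's result/typed-result types, which aren't reproduced here, and the fallback to the severity limit is my assumption for rules that never record a score.

```java
import java.util.HashMap;
import java.util.Map;

public class ScoreExtraction {
    // Minimal stand-in for a Rules 2.0 result holding typed values by key.
    static class Result {
        private final Map<String, Object> typedResults = new HashMap<>();
        void addResult(String key, Object value) { typedResults.put(key, value); }
        Object getTypedResult(String key) { return typedResults.get(key); }
    }

    // Prefer the fine-grained computed score when the rule recorded one;
    // otherwise fall back to the severity bucket's lower bound.
    static double score(Result result, int severityLimit) {
        Object raw = result.getTypedResult("SCORE");
        return raw instanceof Double d ? d : severityLimit;
    }

    public static void main(String[] args) {
        Result withScore = new Result();
        withScore.addResult("SCORE", 62.5); // rule stored its [0, 100] score
        System.out.println(score(withScore, 25));    // the recorded score
        System.out.println(score(new Result(), 75)); // the bucket limit
    }
}
```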

@andrewazores (Member)

Nailed it:

[image]

@andrewazores (Member)

I'm having some strange flaky test issues with cryostat-reports built against this updated version. It seems pretty nondeterministic, so I wonder if there is a thread execution order problem.

@andrewazores (Member)

I think the problem was some kind of caching or threading issue with mvnd; when I use normal mvn it seems perfectly reliable.

@ebaron any other feedback on this PR? I think it's looking good.

@andrewazores (Member)

I spoke too soon:

[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO] Running io.cryostat.ReportResourceTest
2023-04-14 10:58:46,207 INFO  [io.cry.rep.ReportResource] (main) CPUs: 8 singlethread: false maxMemory: 7952M memoryFactor: 10 timeout: 29000ms
2023-04-14 10:58:46,277 INFO  [io.quarkus] (main) cryostat-reports 2.3.0-SNAPSHOT on JVM (powered by Quarkus 2.13.7.Final) started in 1.786s. Listening on: http://localhost:8081
2023-04-14 10:58:46,278 INFO  [io.quarkus] (main) Profile test activated. 
2023-04-14 10:58:46,278 INFO  [io.quarkus] (main) Installed features: [cdi, resteasy-reactive, resteasy-reactive-jackson, smallrye-context-propagation, vertx]
2023-04-14 10:58:47,475 INFO  [io.cry.rep.ReportResource] (executor-thread-0) Received request for profiling_sample.jfr (2228122 bytes)
2023-04-14 10:58:47,890 INFO  [io.cry.rep.ReportResource] (executor-thread-0) Deleted /tmp/uploads/resteasy-reactive5845175127117923596upload
2023-04-14 10:58:47,891 INFO  [io.cry.rep.ReportResource] (executor-thread-0) Completed request for profiling_sample.jfr after 470ms
2023-04-14 10:58:47,904 INFO  [io.qua.htt.access-log] (executor-thread-0) 127.0.0.1 - - 14/Apr/2023:10:58:47 -0400 "POST /report HTTP/1.1" 200 15191
2023-04-14 10:58:48,095 INFO  [io.cry.rep.ReportResource] (executor-thread-1) Received request for profiling_sample.jfr.gz (700047 bytes)
2023-04-14 10:58:48,110 INFO  [io.cry.rep.ReportResource] (executor-thread-1) profiling_sample.jfr.gz was compressed. Decompressed size: 2228122 bytes. Decompression took 15ms
2023-04-14 10:58:48,257 INFO  [io.cry.rep.ReportResource] (executor-thread-1) Deleted /tmp/13938576611463892980.tmp
2023-04-14 10:58:48,258 INFO  [io.cry.rep.ReportResource] (executor-thread-1) Completed request for profiling_sample.jfr.gz after 162ms
2023-04-14 10:58:48,258 INFO  [io.qua.htt.access-log] (executor-thread-1) 127.0.0.1 - - 14/Apr/2023:10:58:48 -0400 "POST /report HTTP/1.1" 200 15185
2023-04-14 10:58:48,279 INFO  [io.qua.htt.access-log] (executor-thread-0) 127.0.0.1 - - 14/Apr/2023:10:58:48 -0400 "GET /health HTTP/1.1" 204 -
2023-04-14 10:58:48,328 INFO  [io.cry.rep.ReportResource] (executor-thread-0) Received request for profiling_sample.jfr (2228122 bytes)
2023-04-14 10:58:48,437 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-6) Rule missing required events: Metaspace Out of Memory 
2023-04-14 10:58:48,437 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-5) Rule missing required events: Class Loading Pressure 
2023-04-14 10:58:48,437 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-4) Rule missing required events: Biased Locking Revocation 
2023-04-14 10:58:48,437 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-3) Rule missing required events: Heap Dump 
2023-04-14 10:58:48,446 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-6) Rule missing required events: Fatal Errors 
2023-04-14 10:58:48,446 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-5) Rule missing required events: GC Freed Ratio 
2023-04-14 10:58:48,452 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-4) Rule missing required events: Code Cache 
2023-04-14 10:58:48,455 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-5) Rule missing required events: Threads Allocating 
2023-04-14 10:58:48,461 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-5) Rule missing required events: Heap Content 
2023-04-14 10:58:48,470 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-5) Rule missing required events: String Deduplication 
2023-04-14 10:58:48,473 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-7) Rule missing required events: Allocated Classes 
2023-04-14 10:58:48,477 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-7) Rule missing required events: Primitive To Object Conversion 
2023-04-14 10:58:48,478 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-6) Rule missing required events: TLAB Allocation Ratio 
2023-04-14 10:58:48,484 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-1) Rule missing required events: Class Leak 
2023-04-14 10:58:48,490 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-4) Rule missing required events: Exceptional Dump Reason 
2023-04-14 10:58:48,504 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-4) Rule missing required events: Process Started 
2023-04-14 10:58:48,507 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-6) Rule missing required events: DMS Incidents 
2023-04-14 10:58:48,669 WARN  [io.cry.cor.log.Logger] (pool-9-thread-1) Exception thrown: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.NullPointerException: Cannot invoke "org.openjdk.jmc.flightrecorder.rules.jdk.memory.GarbageCollectionsInfo.getObjectCountGCs()" because "aggregate" is null
	at java.base/java.util.concurrent.ForkJoinTask.reportExecutionException(ForkJoinTask.java:605)
	at java.base/java.util.concurrent.ForkJoinTask.get(ForkJoinTask.java:981)
	at io.cryostat.core.reports.InterruptibleReportGenerator.generateResultHelper(InterruptibleReportGenerator.java:269)
	at io.cryostat.core.reports.InterruptibleReportGenerator.lambda$generateReportInterruptibly$2(InterruptibleReportGenerator.java:139)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.NullPointerException: Cannot invoke "org.openjdk.jmc.flightrecorder.rules.jdk.memory.GarbageCollectionsInfo.getObjectCountGCs()" because "aggregate" is null
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77)
	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499)
	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:480)
	at java.base/java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:562)
	at java.base/java.util.concurrent.ForkJoinTask.reportExecutionException(ForkJoinTask.java:604)
	... 7 more
Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.NullPointerException: Cannot invoke "org.openjdk.jmc.flightrecorder.rules.jdk.memory.GarbageCollectionsInfo.getObjectCountGCs()" because "aggregate" is null
	at java.base/java.util.concurrent.ForkJoinTask$AdaptedCallable.exec(ForkJoinTask.java:1433)
	at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
	at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
	at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
	at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
	at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
Caused by: java.util.concurrent.ExecutionException: java.lang.NullPointerException: Cannot invoke "org.openjdk.jmc.flightrecorder.rules.jdk.memory.GarbageCollectionsInfo.getObjectCountGCs()" because "aggregate" is null
	at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
	at io.cryostat.core.reports.InterruptibleReportGenerator.lambda$evaluate$8(InterruptibleReportGenerator.java:423)
	at java.base/java.util.concurrent.ForkJoinTask$AdaptedCallable.exec(ForkJoinTask.java:1428)
	... 5 more
Caused by: java.lang.NullPointerException: Cannot invoke "org.openjdk.jmc.flightrecorder.rules.jdk.memory.GarbageCollectionsInfo.getObjectCountGCs()" because "aggregate" is null
	at org.openjdk.jmc.flightrecorder.rules.jdk.memory.HeapInspectionRule.getHeapInspectionResult(HeapInspectionRule.java:102)
	at org.openjdk.jmc.flightrecorder.rules.jdk.memory.HeapInspectionRule.access$000(HeapInspectionRule.java:66)
	at org.openjdk.jmc.flightrecorder.rules.jdk.memory.HeapInspectionRule$1.call(HeapInspectionRule.java:92)
	at org.openjdk.jmc.flightrecorder.rules.jdk.memory.HeapInspectionRule$1.call(HeapInspectionRule.java:89)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at io.cryostat.core.reports.InterruptibleReportGenerator.lambda$evaluate$8(InterruptibleReportGenerator.java:422)
	... 6 more

2023-04-14 10:58:48,671 INFO  [io.cry.rep.ReportResource] (executor-thread-0) Deleted /tmp/uploads/resteasy-reactive9522206778977305222upload
2023-04-14 10:58:48,672 INFO  [io.cry.rep.ReportResource] (executor-thread-0) Completed request for profiling_sample.jfr after 343ms
2023-04-14 10:58:48,672 INFO  [io.qua.htt.access-log] (executor-thread-0) 127.0.0.1 - - 14/Apr/2023:10:58:48 -0400 "POST /report HTTP/1.1" 200 320
2023-04-14 10:58:48,708 INFO  [io.cry.rep.ReportResource] (executor-thread-1) Received request for profiling_sample.jfr.gz (700047 bytes)
2023-04-14 10:58:48,727 INFO  [io.cry.rep.ReportResource] (executor-thread-1) profiling_sample.jfr.gz was compressed. Decompressed size: 2228122 bytes. Decompression took 19ms
2023-04-14 10:58:48,836 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-3) Rule missing required events: Heap Dump 
2023-04-14 10:58:48,837 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-4) Rule missing required events: Class Loading Pressure 
2023-04-14 10:58:48,840 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-3) Rule missing required events: Fatal Errors 
2023-04-14 10:58:48,840 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-6) Rule missing required events: GC Freed Ratio 
2023-04-14 10:58:48,842 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-4) Rule missing required events: Code Cache 
2023-04-14 10:58:48,845 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-2) Rule missing required events: Biased Locking Revocation 
2023-04-14 10:58:48,846 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-5) Rule missing required events: Heap Content 
2023-04-14 10:58:48,847 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-4) Rule missing required events: String Deduplication 
2023-04-14 10:58:48,847 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-2) Rule missing required events: Allocated Classes 
2023-04-14 10:58:48,847 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-1) Rule missing required events: Metaspace Out of Memory 
2023-04-14 10:58:48,849 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-4) Rule missing required events: Primitive To Object Conversion 
2023-04-14 10:58:48,849 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-2) Rule missing required events: TLAB Allocation Ratio 
2023-04-14 10:58:48,851 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-4) Rule missing required events: Class Leak 
2023-04-14 10:58:48,853 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-2) Rule missing required events: Exceptional Dump Reason 
2023-04-14 10:58:48,855 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-1) Rule missing required events: DMS Incidents 
2023-04-14 10:58:48,857 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-2) Rule missing required events: Process Started 
2023-04-14 10:58:48,859 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-3) Rule missing required events: Threads Allocating 
2023-04-14 10:58:49,062 INFO  [io.cry.rep.ReportResource] (executor-thread-1) Deleted /tmp/16103924128966749444.tmp
2023-04-14 10:58:49,062 INFO  [io.cry.rep.ReportResource] (executor-thread-1) Completed request for profiling_sample.jfr.gz after 354ms
2023-04-14 10:58:49,068 INFO  [io.qua.htt.access-log] (executor-thread-1) 127.0.0.1 - - 14/Apr/2023:10:58:49 -0400 "POST /report HTTP/1.1" 200 117006
2023-04-14 10:58:49,148 INFO  [io.cry.rep.ReportResource] (executor-thread-0) Received request for profiling_sample.jfr (2228122 bytes)
2023-04-14 10:58:49,236 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-6) Rule missing required events: Heap Dump 
2023-04-14 10:58:49,236 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-1) Rule missing required events: Biased Locking Revocation 
2023-04-14 10:58:49,236 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-7) Rule missing required events: Metaspace Out of Memory 
2023-04-14 10:58:49,237 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-5) Rule missing required events: Class Loading Pressure 
2023-04-14 10:58:49,240 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-7) Rule missing required events: GC Freed Ratio 
2023-04-14 10:58:49,240 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-6) Rule missing required events: Code Cache 
2023-04-14 10:58:49,243 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-5) Rule missing required events: Threads Allocating 
2023-04-14 10:58:49,244 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-1) Rule missing required events: Heap Content 
2023-04-14 10:58:49,245 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-6) Rule missing required events: String Deduplication 
2023-04-14 10:58:49,240 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-3) Rule missing required events: Fatal Errors 
2023-04-14 10:58:49,245 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-5) Rule missing required events: Allocated Classes 
2023-04-14 10:58:49,247 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-5) Rule missing required events: TLAB Allocation Ratio 
2023-04-14 10:58:49,248 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-3) Rule missing required events: Primitive To Object Conversion 
2023-04-14 10:58:49,249 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-2) Rule missing required events: Class Leak 
2023-04-14 10:58:49,251 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-5) Rule missing required events: Exceptional Dump Reason 
2023-04-14 10:58:49,253 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-5) Rule missing required events: DMS Incidents 
2023-04-14 10:58:49,254 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-2) Rule missing required events: Process Started 
2023-04-14 10:58:49,359 INFO  [io.cry.rep.ReportResource] (executor-thread-0) Deleted /tmp/uploads/resteasy-reactive18226242869208799007upload
2023-04-14 10:58:49,359 INFO  [io.cry.rep.ReportResource] (executor-thread-0) Completed request for profiling_sample.jfr after 211ms
2023-04-14 10:58:49,363 INFO  [io.qua.htt.access-log] (executor-thread-0) 127.0.0.1 - - 14/Apr/2023:10:58:49 -0400 "POST /report HTTP/1.1" 200 14626
2023-04-14 10:58:49,429 INFO  [io.cry.rep.ReportResource] (executor-thread-1) Received request for profiling_sample.jfr.gz (700047 bytes)
2023-04-14 10:58:49,442 INFO  [io.cry.rep.ReportResource] (executor-thread-1) profiling_sample.jfr.gz was compressed. Decompressed size: 2228122 bytes. Decompression took 12ms
2023-04-14 10:58:49,538 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-7) Rule missing required events: Metaspace Out of Memory 
2023-04-14 10:58:49,538 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-4) Rule missing required events: Heap Dump 
2023-04-14 10:58:49,539 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-6) Rule missing required events: Class Loading Pressure 
2023-04-14 10:58:49,540 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-6) Rule missing required events: Fatal Errors 
2023-04-14 10:58:49,540 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-3) Rule missing required events: Biased Locking Revocation 
2023-04-14 10:58:49,540 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-4) Rule missing required events: GC Freed Ratio 
2023-04-14 10:58:49,542 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-6) Rule missing required events: Code Cache 
2023-04-14 10:58:49,543 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-3) Rule missing required events: Threads Allocating 
2023-04-14 10:58:49,544 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-4) Rule missing required events: Heap Content 
2023-04-14 10:58:49,545 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-6) Rule missing required events: String Deduplication 
2023-04-14 10:58:49,545 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-7) Rule missing required events: Allocated Classes 
2023-04-14 10:58:49,546 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-4) Rule missing required events: Primitive To Object Conversion 
2023-04-14 10:58:49,546 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-6) Rule missing required events: TLAB Allocation Ratio 
2023-04-14 10:58:49,548 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-4) Rule missing required events: Class Leak 
2023-04-14 10:58:49,549 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-4) Rule missing required events: Exceptional Dump Reason 
2023-04-14 10:58:49,552 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-4) Rule missing required events: DMS Incidents 
2023-04-14 10:58:49,553 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-4) Rule missing required events: Process Started 
2023-04-14 10:58:49,623 INFO  [io.cry.rep.ReportResource] (executor-thread-1) Deleted /tmp/3299751993829939484.tmp
2023-04-14 10:58:49,623 INFO  [io.cry.rep.ReportResource] (executor-thread-1) Completed request for profiling_sample.jfr.gz after 193ms
2023-04-14 10:58:49,624 INFO  [io.qua.htt.access-log] (executor-thread-1) 127.0.0.1 - - 14/Apr/2023:10:58:49 -0400 "POST /report HTTP/1.1" 200 14626
2023-04-14 10:58:49,669 INFO  [io.cry.rep.ReportResource] (executor-thread-0) Received request for profiling_sample.jfr (2228122 bytes)
2023-04-14 10:58:49,758 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-2) Rule missing required events: Heap Content 
2023-04-14 10:58:49,759 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-5) Rule missing required events: String Deduplication 
2023-04-14 10:58:49,758 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-1) Rule missing required events: GC Freed Ratio 
2023-04-14 10:58:49,759 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-4) Rule missing required events: Primitive To Object Conversion 
2023-04-14 10:58:49,759 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-7) Rule missing required events: Allocated Classes 
2023-04-14 10:58:49,760 WARN  [io.cry.cor.log.Logger] (pool-9-thread-1) Rule missing required events: Heap Dump 
2023-04-14 10:58:49,769 INFO  [io.cry.rep.ReportResource] (executor-thread-0) Deleted /tmp/uploads/resteasy-reactive7260260379757832277upload
2023-04-14 10:58:49,769 INFO  [io.cry.rep.ReportResource] (executor-thread-0) Completed request for profiling_sample.jfr after 100ms
2023-04-14 10:58:49,770 INFO  [io.qua.htt.access-log] (executor-thread-0) 127.0.0.1 - - 14/Apr/2023:10:58:49 -0400 "POST /report HTTP/1.1" 200 29228
2023-04-14 10:58:49,798 INFO  [io.cry.rep.ReportResource] (executor-thread-1) Received request for profiling_sample.jfr.gz (700047 bytes)
2023-04-14 10:58:49,812 INFO  [io.cry.rep.ReportResource] (executor-thread-1) profiling_sample.jfr.gz was compressed. Decompressed size: 2228122 bytes. Decompression took 13ms
2023-04-14 10:58:49,912 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-3) Rule missing required events: Heap Dump 
2023-04-14 10:58:49,913 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-5) Rule missing required events: Heap Content 
2023-04-14 10:58:49,913 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-7) Rule missing required events: String Deduplication 
2023-04-14 10:58:49,912 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-1) Rule missing required events: GC Freed Ratio 
2023-04-14 10:58:49,914 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-3) Rule missing required events: Primitive To Object Conversion 
2023-04-14 10:58:49,914 WARN  [io.cry.cor.log.Logger] (ForkJoinPool.commonPool-worker-2) Rule missing required events: Allocated Classes 
2023-04-14 10:58:49,923 INFO  [io.cry.rep.ReportResource] (executor-thread-1) Deleted /tmp/8787880238765287505.tmp
2023-04-14 10:58:49,923 INFO  [io.cry.rep.ReportResource] (executor-thread-1) Completed request for profiling_sample.jfr.gz after 124ms
2023-04-14 10:58:49,924 INFO  [io.qua.htt.access-log] (executor-thread-1) 127.0.0.1 - - 14/Apr/2023:10:58:49 -0400 "POST /report HTTP/1.1" 200 29300
[ERROR] Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.259 s <<< FAILURE! - in io.cryostat.ReportResourceTest
[ERROR] io.cryostat.ReportResourceTest.testReportEndpoint(String)[1]  Time elapsed: 0.393 s  <<< FAILURE!
java.lang.AssertionError: 
Expected one <title>
Expected: <1>
     but: was <0>
	at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
	at io.cryostat.ReportResourceTest.testReportEndpoint(ReportResourceTest.java:100)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:568)
	at io.quarkus.test.junit.QuarkusTestExtension.runExtensionMethod(QuarkusTestExtension.java:1106)
	at io.quarkus.test.junit.QuarkusTestExtension.runExtensionMethod(QuarkusTestExtension.java:1000)
	at io.quarkus.test.junit.QuarkusTestExtension.interceptTestTemplateMethod(QuarkusTestExtension.java:960)
	at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
	at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
	at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:94)
	at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
	at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
	at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
	at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:217)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:213)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:138)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:68)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:226)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:204)
	at org.junit.jupiter.engine.descriptor.TestTemplateTestDescriptor.execute(TestTemplateTestDescriptor.java:142)
	at org.junit.jupiter.engine.descriptor.TestTemplateTestDescriptor.lambda$execute$2(TestTemplateTestDescriptor.java:110)
	at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
	at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
	at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
	at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
	at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
	at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
	at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
	at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
	at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
	at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
	at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:992)
	at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
	at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
	at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
	at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
	at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
	at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:276)
	at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
	at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
	at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
	at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625)
	at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
	at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
	at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
	at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
	at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
	at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:276)
	at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625)
	at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
	at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
	at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
	at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
	at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
	at org.junit.jupiter.engine.descriptor.TestTemplateTestDescriptor.execute(TestTemplateTestDescriptor.java:110)
	at org.junit.jupiter.engine.descriptor.TestTemplateTestDescriptor.execute(TestTemplateTestDescriptor.java:44)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:90)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:55)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:102)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:54)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
	at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
	at org.apache.maven.surefire.junitplatform.LazyLauncher.execute(LazyLauncher.java:50)
	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.execute(JUnitPlatformProvider.java:184)
	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:148)
	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:122)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:385)
	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
	at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:507)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:495)

2023-04-14 10:58:49,971 INFO  [io.quarkus] (main) cryostat-reports stopped in 0.033s
[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Failures: 
[ERROR]   ReportResourceTest.testReportEndpoint:100 Expected one <title>
Expected: <1>
     but: was <0>
[INFO] 
[ERROR] Tests run: 9, Failures: 1, Errors: 0, Skipped: 0
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  10.729 s
[INFO] Finished at: 2023-04-14T10:58:50-04:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:3.0.0:test (default-test) on project cryostat-reports: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/andrew/workspace/cryostat-reports/target/surefire-reports for the individual test results.
[ERROR] Please refer to dump files (if any exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException

This is a pretty typical example. The stack trace, where some aggregate rule field is null, seems pretty strange.

@andrewazores
Copy link
Member

andrewazores commented Apr 14, 2023

Ah, maybe it has to do with that new inter-rule dependency stuff you were talking about, @Josh-Matsuoka . Could it be that this Rule A is failing because it depends on a result from Rule B, but A is calculated on Thread 1 and B is on Thread 2? I see you already have some logic to try to ensure this kind of thing doesn't happen, but perhaps that's not thread-safe?

@ebaron
Copy link
Member

ebaron commented Apr 14, 2023

I think the problem was some kind of caching or threading issue with mvnd; when I use normal mvn it seems perfectly reliable.

@ebaron any other feedback on this PR? I think it's looking good.

The embedded JMC sources look fine to me. Once we figure out the report generation test issue, this is good to go for me.

@Josh-Matsuoka
Copy link
Contributor Author

Looking at the heap inspection rule, it depends on the GarbageCollectionInfo rule being run first and its results being added to the ResultsProvider:

https://github.com/openjdk/jmc/blob/master/core/org.openjdk.jmc.flightrecorder.rules.jdk/src/main/java/org/openjdk/jmc/flightrecorder/rules/jdk/memory/HeapInspectionRule.java

So if the GarbageCollectionInfo rule is never run, the aggregate would be null, causing the error.

This kind of thing should be caught by the dependency check. It looks like there might be a race here: the evaluatedRules map was updated, causing the dependency check to pass, but the dependent rule was run before the ResultProvider was updated.
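To illustrate, here is a minimal, purely hypothetical reconstruction of that kind of race (the class and field names are stand-ins, not the actual JMC or cryostat-reports types): the rule is marked evaluated before its result is published, so a dependent rule can pass the dependency check in that window and still read a null result.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: "evaluatedRules" and "resultProvider" are
// hypothetical stand-ins for the structures discussed above.
public class RaceSketch {
    static final Set<String> evaluatedRules = ConcurrentHashMap.newKeySet();
    static final Map<String, String> resultProvider = new ConcurrentHashMap<>();

    static void evaluate(String rule, String result) throws InterruptedException {
        evaluatedRules.add(rule);          // (1) dependency check now passes...
        Thread.sleep(5);                   // ...but the result is not stored yet
        resultProvider.put(rule, result);  // (2) race window is between (1) and (2)
    }

    public static void main(String[] args) throws Exception {
        Thread producer = new Thread(() -> {
            try {
                evaluate("GarbageCollectionInfo", "aggregate");
            } catch (InterruptedException ignored) {
            }
        });
        producer.start();
        // Dependent rule: wait until the check passes, then read the result.
        while (!evaluatedRules.contains("GarbageCollectionInfo")) {
            Thread.onSpinWait();
        }
        // The check passed, yet the aggregate may still be null here.
        System.out.println("aggregate = " + resultProvider.get("GarbageCollectionInfo"));
        producer.join();
    }
}
```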

@andrewazores
Copy link
Member

andrewazores commented Apr 14, 2023

I was just looking at that and figured it's probably something like that. Also, the simple shouldEvaluate() check means that a rule with dependencies may never be run at all if it happened to be in the queue before the rules it depends upon, since this is done with just a single pass over the rule set, filtering out the ones whose dependencies haven't already been evaluated. e.g. if the first rule checked has any dependencies, that rule will never be evaluated.

I think a recursive search and flattening of the dependency tree probably makes sense here? Iterate over the filtered set of rules to be evaluated. For each rule, if there are no dependencies, add it to the flattened list. If there are dependencies, recurse to that rule and check its dependencies. Recurse until a base case (a rule with 0 deps), add that to the flattened list, then go back up a level and add that rule, etc. Then actually start processing the rules. This can either be done on one dedicated thread (bad throughput), on a new thread per request (OK throughput), or each independent dependency chain/subgraph can be done on a worker thread (best performance) if the chains are maintained in separate structures (sublists?) rather than a single flattened list.
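The recursive flattening described above is essentially a depth-first topological sort. A minimal sketch, assuming rules and their dependencies are modeled as plain name-to-names maps (the real JMC rule types would substitute here):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of the depth-first flattening: each rule is appended only after
// all of its dependencies have been appended.
public class RuleFlattener {
    static List<String> flatten(Map<String, List<String>> deps) {
        List<String> ordered = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        for (String rule : deps.keySet()) {
            visit(rule, deps, visited, ordered);
        }
        return ordered;
    }

    static void visit(String rule, Map<String, List<String>> deps,
            Set<String> visited, List<String> ordered) {
        if (!visited.add(rule)) {
            return; // already placed in the flattened list
        }
        for (String dep : deps.getOrDefault(rule, List.of())) {
            visit(dep, deps, visited, ordered); // recurse to the 0-dep base case
        }
        ordered.add(rule); // all dependencies now precede this rule
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = new LinkedHashMap<>();
        deps.put("HeapInspectionRule", List.of("GarbageCollectionInfoRule"));
        deps.put("GarbageCollectionInfoRule", List.of());
        System.out.println(flatten(deps));
    }
}
```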

~~Also this whole thing can be cached, because the dependency relations between rules are static, so this can all be done on startup by checking the RuleRegistry and computing these dependency subgraphs for all of the rules, and then just making a map keyed on a rule to be evaluated and mapping to its set of direct dependencies. When we service a request and need to evaluate a rule we can reference this cache to determine which rules need to be evaluated first and prepend them to the work queue.~~

After a cursory glance at the JMC sources for RuleRegistry... it already does exactly this! RuleRegistry.getRules() should be returning the rule collection in an order where they are already topologically sorted, so all rules are evaluated after their dependencies. So long as we iterate over that collection in the given order (and on a single thread), we should be OK. This doesn't let us do anything fancy like evaluating subgraphs on worker threads, though. Maybe the best compromise for now is to use a thread pool for servicing the generation requests, but have each individual report generation handled by a single thread for all rule evaluations? Or maybe we can just kick off the 0-dependency evaluations onto worker threads and use a dedicated thread for any rules with dependencies?

@andrewazores
Copy link
Member

That looks like it's basically what I was describing. Let's port that over but replace the nThreads with an ExecutorService.
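A rough sketch of what that ExecutorService-based port could look like. This is only an outline under assumed types: `Rule` here is a hypothetical stand-in for the JMC rule interface, and the fan-out/fan-in split mirrors the compromise floated above (0-dependency rules on pool workers, dependent rules afterwards on a single thread in topologically sorted order):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

public class ParallelEvaluator {
    // Hypothetical stand-in for the real rule type.
    interface Rule {
        String name();
        List<String> dependencies();
        String evaluate(Map<String, String> priorResults);
    }

    static Map<String, String> evaluateAll(List<Rule> topoSorted, ExecutorService pool)
            throws InterruptedException, ExecutionException {
        Map<String, String> results = new ConcurrentHashMap<>();
        List<Future<?>> independent = new ArrayList<>();
        List<Rule> dependent = new ArrayList<>();
        for (Rule r : topoSorted) {
            if (r.dependencies().isEmpty()) {
                // 0-dependency rules fan out to pool workers
                independent.add(pool.submit(() -> results.put(r.name(), r.evaluate(results))));
            } else {
                dependent.add(r);
            }
        }
        for (Future<?> f : independent) {
            f.get(); // all independent results are published before dependents run
        }
        for (Rule r : dependent) {
            // single thread, already topologically sorted, so deps are present
            results.put(r.name(), r.evaluate(results));
        }
        return results;
    }
}
```

With this shape, each report generation request is still handled by one call into evaluateAll, so a request-servicing pool can sit in front of it without two requests interleaving rule evaluations.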

@andrewazores
Copy link
Member

@Josh-Matsuoka are you able to take on porting over that implementation or should I give it a stab?

@Josh-Matsuoka
Copy link
Contributor Author

I'll take a look at it

@Josh-Matsuoka
Copy link
Contributor Author

Updated to bring over evaluateParallel

@andrewazores
Copy link
Member

This is all looking good now; repeated testing seems stable, and the code looks like it's doing what I was trying to explain. Great work @Josh-Matsuoka , thanks so much!

andrewazores
andrewazores previously approved these changes Apr 14, 2023
@andrewazores
Copy link
Member

@Josh-Matsuoka oh, sign your commits please. git rebase -f -S main and git push -f should do it.

ebaron
ebaron previously approved these changes Apr 14, 2023
@Josh-Matsuoka
Copy link
Contributor Author

hmm that brought in quite the commit history

@andrewazores
Copy link
Member

Fixed it :-)
