
2. Reports


Overview

JmeterPerf::Report::Summary and JmeterPerf::Report::Comparator are simple yet powerful tools for generating reports from a JMeter run and comparing them against other runs.

An instance of JmeterPerf::Report::Summary is returned from a JMeter run initiated with jmeter_perf via the #run method call. It provides insights into key performance metrics such as average response time, error percentage, and response time percentiles.

Attributes

  • name (String): The name of the summary report.
  • avg (Float): Average response time.
  • error_percentage (Float): The percentage of errors in total requests.
  • max (Integer): Maximum response time.
  • min (Integer): Minimum response time.
  • p10 (Float): 10th percentile response time.
  • p50 (Float, alias median): 50th percentile (median) response time.
  • p95 (Float): 95th percentile response time.
  • requests_per_minute (Float, alias rpm): Number of requests per minute.
  • response_codes (Hash): A hash of response codes and their occurrence counts.
  • standard_deviation (Float, alias std): Standard deviation of response times.
  • total_bytes (Integer): Total bytes received during the test.
  • total_elapsed_time (Integer): Total time elapsed during the test.
  • total_errors (Integer): Total number of errors.
  • total_latency (Integer): Total latency during the test.
  • total_requests (Integer): Total number of requests made.
  • total_sent_bytes (Integer): Total bytes sent during the test.

Example Usage

require 'jmeter_perf'

summary = JmeterPerf.test do
  threads count: 2, duration: 10 do
    get name: 'Test Request', url: 'http://127.0.0.1:8080/test'
  end
end.run(
  name: 'Test Plan Summary',
  out_jtl: 'tmp/test_plan.jtl'
)

# Print some summary metrics
puts "Total Requests: #{summary.total_requests}"
puts "Average Response Time: #{summary.avg} ms"
puts "Error Percentage: #{summary.error_percentage}%"

Comparator

With Comparator you can compare two performance summaries to evaluate the impact of changes to your system. It computes two statistical metrics, Cohen's D and the T-statistic, to determine the significance of the differences between the two test runs.
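
Cohen's D is the difference between the two means divided by their pooled standard deviation. A minimal sketch of the math in plain Ruby (illustrative only, not the gem's internal implementation):

# Illustrative Cohen's D calculation over two arrays of response times (ms)
def cohens_d(base_times, test_times)
  mean = ->(xs) { xs.sum.to_f / xs.size }
  variance = lambda do |xs|
    m = mean.call(xs)
    xs.sum { |x| (x - m)**2 } / (xs.size - 1)
  end

  pooled_sd = Math.sqrt(
    ((base_times.size - 1) * variance.call(base_times) +
     (test_times.size - 1) * variance.call(test_times)) /
    (base_times.size + test_times.size - 2)
  )
  (mean.call(test_times) - mean.call(base_times)) / pooled_sd
end

cohens_d([200, 210, 190], [240, 250, 260]) # => 5.0, "Huge" on Sawilowsky's scale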

Attributes

  • cohens_d (Float): Effect size calculation (Cohen's D). Measures the standardized difference between the two test summaries.
  • t_statistic (Float): T-Statistic calculation. Provides insight into the statistical significance of the difference in means between the base and test summaries.
  • human_rating (String): Human-readable rating of the magnitude of movement according to Sawilowsky's rule of thumb, ranging from Very small to Huge (e.g., "Very small increase"); see the sketch after this list.
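
Sawilowsky's rule of thumb maps the magnitude of Cohen's D to qualitative labels. The thresholds below are Sawilowsky's published values; the hash and helper are an illustrative sketch, not the gem's internals:

# Sawilowsky (2009) effect size thresholds -- illustrative mapping only
SAWILOWSKY = {
  vsmall: 0.01,
  small: 0.2,
  medium: 0.5,
  large: 0.8,
  vlarge: 1.2,
  huge: 2.0
}.freeze

# Largest label whose threshold |d| meets or exceeds (nil below 0.01)
def effect_size_label(d)
  SAWILOWSKY.select { |_, threshold| d.abs >= threshold }.keys.last
end

effect_size_label(0.6) # => :medium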

Methods

  • pass?(cohens_d_limit: nil, effect_size: :vsmall, direction: :both): Determines whether the performance change is acceptable based on a cohens_d_limit. Instead of an explicit limit, you can specify the desired effect size (:vsmall, :small, :medium, :large, etc.) based on Sawilowsky's rule of thumb. The direction (:positive, :negative, :both) determines whether you are checking for positive movement, negative movement, or both; see the usage sketch after this list.
  • generate_reports(output_dir: '.', output_format: :all): Generates comparison reports in HTML or CSV format, providing a comprehensive analysis of the two summaries.
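
For example, to gate on an explicit Cohen's D limit instead of a named effect size (a usage sketch; the exact pass/fail semantics of the limit are assumed here):

# Assumed semantics: pass if the measured effect stays within the limit
comparator.pass?(cohens_d_limit: 0.3)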

Example Usage

The Comparator class takes two Summary objects and compares their performance metrics to determine if there is a significant change in performance.

# Assume we have two summaries, one from a baseline run and one from a new test
base_summary = JmeterPerf.test do
  threads count: 2, duration: 10 do
    get name: 'Baseline Request', url: 'http://127.0.0.1:8080/test'
  end
end.run(name: 'Baseline Summary', out_jtl: 'tmp/baseline.jtl')

test_summary = JmeterPerf.test do
  threads count: 2, duration: 10 do
    get name: 'New Request', url: 'http://127.0.0.1:8080/test'
  end
end.run(name: 'Test Summary', out_jtl: 'tmp/test.jtl')

# Compare the summaries
comparator = JmeterPerf::Report::Comparator.new(base_summary, test_summary, 'Baseline vs Test')
puts "Cohen's D: #{comparator.cohens_d}"
puts "T-Statistic: #{comparator.t_statistic}"
puts "Human Rating: #{comparator.human_rating}"

# Determine if the performance difference is significant
if comparator.pass?(effect_size: :medium, direction: :both)
  puts 'Performance change is acceptable.'
else
  puts 'Performance change is not acceptable.'
end

Comparison Reports

The Comparator class can generate HTML and CSV reports that summarize the comparison between two test runs.

Example Report Generation

comparator.generate_reports(output_dir: 'reports', output_format: :all)

This will generate both HTML and CSV reports in the reports directory, providing a detailed view of the differences between the two test runs.

CSV Report

Label,Requests,Errors,Error %,Min,Median,Avg,Max,Std,P10,P50,P95
Base Metric,12926,122,0.94,0,2,0.00,200,5.92,1.00,2.00,3.00
Test Metric,9,0,0.00,1028,2018,2236.00,3022,663.43,2007.00,2018.00,3022.00
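
The CSV can be consumed with Ruby's standard csv library. The comparison.csv filename below is hypothetical, so substitute whichever file generate_reports wrote to your output_dir:

require 'csv'

# 'reports/comparison.csv' is a hypothetical path -- use the actual
# file generated in your output_dir
CSV.foreach('reports/comparison.csv', headers: true) do |row|
  puts "#{row['Label']}: avg=#{row['Avg']} ms, p95=#{row['P95']} ms"
end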
HTML Report

(Screenshot of the generated HTML report, dated 2024-10-25.)