
feat(performance): Expands search for serviceowners, improved tracing and logging #1439

Merged 6 commits into main on Nov 12, 2024

Conversation

@dagfinno (Collaborator) commented Nov 12, 2024

Implements more GET test calls to the serviceowner APIs and improves tracing and logging.

Description

Related Issue(s)

Verification

  • Your code builds clean without any errors or warnings
  • Manual testing done (required)
  • Relevant automated test added (if you find this hard, leave it and we'll help out)

Documentation

  • Documentation is updated (either in the docs directory, Altinnpedia, or a separate linked PR in altinn-studio-docs, if applicable)

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced a sentinelPerformanceValue for enhanced performance tracking.
    • Added flexibility to performance tests by implementing isSingleUserMode for better user scenario handling.
  • Improvements

    • Enhanced logging and traceability in dialog creation and purging processes.
    • Expanded performance thresholds for more detailed monitoring.
    • Updated functions to accept dynamic GET request functions, improving modularity.
  • Bug Fixes

    • Improved error handling and conditions for single vs. multi-user modes in various performance test scripts.
  • Documentation

    • Updated comments for clarity regarding options and parameters in several scripts.

@dagfinno dagfinno requested a review from a team as a code owner November 12, 2024 10:18
@coderabbitai bot (Contributor) commented Nov 12, 2024


📥 Commits

Reviewing files that changed from the base of the PR and between 06bce86 and c4c48bc.

📝 Walkthrough

The pull request introduces several changes across multiple test scripts in the K6 framework. Key modifications include the addition of a new constant sentinelPerformanceValue in the configuration file and the introduction of a variable isSingleUserMode in various performance test scripts to streamline user mode checks. Additionally, functions have been updated to enhance their flexibility and traceability, including the use of unique trace identifiers in HTTP headers. Overall, the changes aim to improve the structure and maintainability of the test scripts without altering their fundamental logic.
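The user-mode check described above can be sketched as a small pure helper. This standalone version is for illustration only: the option names (vus, iterations, duration) mirror k6's options object, but the repository's actual scripts inline this expression rather than using a helper.

```javascript
// Illustrative sketch of the isSingleUserMode check from the walkthrough.
// Nullish coalescing supplies k6's defaults when an option is omitted,
// so an empty options object counts as single-user mode.
function isSingleUserMode(options) {
  return (options.vus ?? 1) === 1 &&
         (options.iterations ?? 1) === 1 &&
         (options.duration ?? 0) === 0;
}

console.log(isSingleUserMode({ vus: 1, iterations: 1 }));    // true
console.log(isSingleUserMode({ vus: 10, iterations: 100 })); // false
```

Note that any non-zero duration (including a string like "30s") fails the strict comparison against 0, which is what pushes a timed run into multi-user mode.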

Changes

  • tests/k6/common/config.js — Added constant export const sentinelPerformanceValue = "dialogporten-e2e-sentinel-performance".
  • tests/k6/tests/enduser/performance/enduser-search.js — Added import for randomItem; introduced an isSingleUserMode variable to consolidate user-mode checks; calls enduserSearch based on user mode.
  • tests/k6/tests/graphql/performance/graphql-search.js — Added isSingleUserMode for user-mode checks; calls graphqlSearch based on user mode.
  • tests/k6/tests/performancetest_common/createDialog.js — Introduced a traceparent variable for tracing in the createDialog and createAndRemoveDialog functions; updated paramsWithToken to include traceparent and enduser properties.
  • tests/k6/tests/performancetest_common/simpleSearch.js — Added a getFunction parameter to several functions for dynamic function selection; updated graphqlSearch and serviceownerSearch to include traceparent handling.
  • tests/k6/tests/performancetest_data/01-create-dialog.js — Added import for sentinelPerformanceValue; updated the cleanUp function to include sentinelPerformanceValue in searchTags.
  • tests/k6/tests/serviceowner/performance/create-dialog.js — Introduced isSingleUserMode for user-mode checks; calls createDialog based on user mode.
  • tests/k6/tests/serviceowner/performance/create-remove-dialog.js — Added an isSingleUserMode constant for user-mode checks; calls createAndRemoveDialog based on user mode.
  • tests/k6/tests/serviceowner/performance/purge-dialogs.js — Added import for uuidv4; introduced a traceparent variable for logging; updated the paramsWithToken headers to send traceparent alongside Authorization.
  • tests/k6/tests/serviceowner/performance/serviceowner-search.js — Expanded the thresholds array in the options object to include additional dialog and transmission activity thresholds.

Possibly related PRs

Suggested reviewers

  • oskogstad
  • arealmaas


@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 3

🧹 Outside diff range and nitpick comments (16)
tests/k6/tests/enduser/performance/enduser-search.js (2)

25-25: LGTM! Consider adding a comment explaining the single user mode conditions.

The logic for determining single user mode is well-structured and handles undefined values properly using nullish coalescing.

+    // Single user mode is active when running with exactly one VU, one iteration, and no duration
     const isSingleUserMode = (options.vus ?? 1) === 1 && (options.iterations ?? 1) === 1 && (options.duration ?? 0) === 0;

Line range hint 26-30: LGTM! Consider caching the random selection for better performance.

The logic for user selection is clear and appropriate. For single user mode, it uses the first user, and for multi-user mode, it randomly selects a user.

Consider caching the random selection if multiple iterations are performed within the same VU context:

 export default function() {
+    // Cache the selected user within the VU context
+    let vu = __VU - 1;
+    if (!this.selectedUsers) {
+        this.selectedUsers = new Array(options.vus);
+    }
     if (!endUsersWithTokens || endUsersWithTokens.length === 0) {
         throw new Error('No end users loaded for testing');
     }
     
     const isSingleUserMode = (options.vus ?? 1) === 1 && (options.iterations ?? 1) === 1 && (options.duration ?? 0) === 0;
     if (isSingleUserMode) {
         enduserSearch(endUsersWithTokens[0]);
     }
     else {
-        enduserSearch(randomItem(endUsersWithTokens));
+        if (!this.selectedUsers[vu]) {
+            this.selectedUsers[vu] = randomItem(endUsersWithTokens);
+        }
+        enduserSearch(this.selectedUsers[vu]);
     }
 }
tests/k6/tests/serviceowner/performance/create-dialog.js (2)

Line range hint 22-27: LGTM! Consider extracting default values as constants.

The introduction of isSingleUserMode improves code clarity. However, the default values could be extracted as named constants for better maintainability.

Consider this refactor:

+const DEFAULT_VUS = 1;
+const DEFAULT_ITERATIONS = 1;
+const DEFAULT_DURATION = 0;

-const isSingleUserMode = (options.vus ?? 1) === 1 && (options.iterations ?? 1) === 1 && (options.duration ?? 0) === 0;
+const isSingleUserMode = (options.vus ?? DEFAULT_VUS) === DEFAULT_VUS && 
+                        (options.iterations ?? DEFAULT_ITERATIONS) === DEFAULT_ITERATIONS && 
+                        (options.duration ?? DEFAULT_DURATION) === DEFAULT_DURATION;

Line range hint 1-4: Add multi-user mode example in the documentation.

The comment only shows how to run in single-user mode. Adding an example for multi-user mode would be helpful for other developers.

Consider updating the comment:

 /**
  * Performance test for creating a dialog
- * Run: k6 run tests/k6/tests/serviceowner/performance/create-dialog.js --vus 1 --iterations 1
+ * Single-user mode: k6 run tests/k6/tests/serviceowner/performance/create-dialog.js --vus 1 --iterations 1
+ * Multi-user mode: k6 run tests/k6/tests/serviceowner/performance/create-dialog.js --vus 10 --iterations 100
  */
tests/k6/tests/serviceowner/performance/create-remove-dialog.js (2)

25-26: Consider extracting the test mode detection to a shared utility.

The introduction of isSingleUserMode improves code readability. However, since this condition might be useful across different test files, consider moving it to a shared utility function.

Consider creating a utility function in a shared location:

+ // In performancetest_common/testModeUtils.js
+ export const isSingleUserTestMode = (opts) => {
+   return (opts.vus ?? 1) === 1 && 
+          (opts.iterations ?? 1) === 1 && 
+          (opts.duration ?? 0) === 0;
+ };

  // In current file
- const isSingleUserMode = (options.vus ?? 1) === 1 && (options.iterations ?? 1) === 1 && (options.duration ?? 0) === 0;
+ const isSingleUserMode = isSingleUserTestMode(options);

Line range hint 27-32: Add tracing information for test mode and data selection.

The control flow logic is clear and correct. However, given the PR's focus on improved tracing and logging, consider adding trace information about the test mode and selected data.

Consider adding logging:

  if (isSingleUserMode) {
+   console.log(`Running in single user mode with fixed service owner ${serviceOwners[0].id} and end user ${endUsers[0].id}`);
    createAndRemoveDialog(serviceOwners[0], endUsers[0]);
  }
  else {
+   const selectedServiceOwner = randomItem(serviceOwners);
+   const selectedEndUser = randomItem(endUsers);
+   console.log(`Running in multi-user mode with random service owner ${selectedServiceOwner.id} and end user ${selectedEndUser.id}`);
-   createAndRemoveDialog(randomItem(serviceOwners), randomItem(endUsers));
+   createAndRemoveDialog(selectedServiceOwner, selectedEndUser);
  }
tests/k6/tests/graphql/performance/graphql-search.js (3)

Line range hint 14-21: Consider adding tracing-specific configurations

Given the PR's focus on improved tracing, consider adding tracing-specific thresholds or metrics to the options configuration. This would help monitor and analyze the tracing overhead and performance impact.

 export let options = {
     summaryTrendStats: ['avg', 'min', 'med', 'max', 'p(95)', 'p(99)', 'p(99.5)', 'p(99.9)', 'count'],
-    thresholds: getDefaultThresholds(['http_req_duration', 'http_reqs'],['graphql search'])
+    thresholds: getDefaultThresholds(
+        ['http_req_duration', 'http_reqs', 'http_req_sending'],
+        ['graphql search', 'tracing']
+    )
 };

Line range hint 26-29: Enhance error handling with more context

The current error handling could be more informative for debugging purposes, especially in performance testing scenarios.

-    if (!endUsers || endUsers.length === 0) {
-        throw new Error('No end users loaded for testing');
+    if (!endUsers) {
+        throw new Error('End users array is undefined - check test data loading');
+    }
+    if (endUsers.length === 0) {
+        throw new Error('End users array is empty - verify test data configuration');
     }

Line range hint 30-36: Consider extracting mode detection logic

The single-user mode detection logic could be reused across other performance tests. Consider extracting it into a shared utility function.

+// In a shared utility file (e.g., performancetest_common/testMode.js)
+export const isSingleUserMode = (opts) => {
+    return (opts.vus ?? 1) === 1 && 
+           (opts.iterations ?? 1) === 1 && 
+           (opts.duration ?? 0) === 0;
+};

// In this file
-    const isSingleUserMode = (options.vus ?? 1) === 1 && (options.iterations ?? 1) === 1 && (options.duration ?? 0) === 0;
+    const testMode = isSingleUserMode(options);
-    if (isSingleUserMode) {
+    if (testMode) {

This refactoring would:

  1. Improve code reusability across test files
  2. Make the mode detection logic easier to maintain and update
  3. Keep the test file focused on its primary responsibility
tests/k6/tests/performancetest_data/01-create-dialog.js (1)

Line range hint 39-52: Consider adding performance tracing logs

Given the PR's objective of improving tracing, consider adding debug logs when the sentinel tag is added. This would help track performance test execution flow.

Example addition:

 export default function (endUser, resource) {
     if (!endUser?.match(/^\d{11}$/)) {
         throw new Error('endUser must be a 11-digit number');
     }
     if (!resource?.trim()) {
         throw new Error('resource is required');
     }
     let payload = createDialogPayload();
     payload.serviceResource = "urn:altinn:resource:" +resource;
     payload.party = "urn:altinn:person:identifier-no:" + endUser;
+    console.debug(`Creating performance test dialog for user ${endUser} with resource ${resource}`);
     return cleanUp(payload);
 }
tests/k6/tests/performancetest_common/createDialog.js (2)

17-17: Consider following W3C trace context specification for traceparent.

While adding tracing is a good improvement, consider following the W3C trace context specification format for the traceparent header: 00-<trace-id>-<parent-id>-<trace-flags>.

Here's how you could modify it:

-    var traceparent = uuidv4();
+    var traceId = uuidv4().replace(/-/g, '');
+    var parentId = uuidv4().replace(/-/g, '').substr(0, 16);
+    var traceparent = `00-${traceId}-${parentId}-01`;

Also applies to: 21-21, 23-23


58-61: Consider adding error handling for dialog removal.

The code regenerates traceparent and updates headers/tags correctly, but consider adding error handling and logging for cases where dialog removal fails.

     describe('remove dialog', () => {
       traceparent = uuidv4();
       paramsWithToken.tags.name = 'remove dialog';
       paramsWithToken.tags.traceparent = traceparent;
       paramsWithToken.headers.traceparent = traceparent
       if (dialogId) {
           let r = purgeSO('dialogs/' + dialogId, paramsWithToken);   
-          expect(r.status, 'response status').to.equal(204);
+          try {
+              expect(r.status, 'response status').to.equal(204);
+          } catch (error) {
+              console.error(`Failed to remove dialog ${dialogId}: ${error.message}`);
+              throw error;
+          }
       }
   });
tests/k6/tests/serviceowner/performance/purge-dialogs.js (2)

25-30: Consider following W3C trace context specification for better interoperability.

While adding tracing is great, the current implementation might not integrate well with other tracing systems. The W3C trace context specification defines a specific format for the traceparent header:
traceparent: 00-<trace-id>-<parent-id>-01

Consider this implementation:

-    var traceparent = uuidv4();
-    console.log("Searching for dialogs to purge, tracevalue: " + traceparent);
+    var traceId = uuidv4().replace(/-/g, '');
+    var parentId = '0000000000000000';
+    var traceparent = `00-${traceId}-${parentId}-01`;
+    console.log("Searching for dialogs to purge, traceId: " + traceId);

Line range hint 107-124: Consider batch processing for better performance.

The current implementation deletes dialogs one by one, which might not be optimal for large cleanups. Consider implementing batch processing to reduce the number of API calls.

Consider this approach:

  1. Process dialogs in batches (e.g., 50 at a time)
  2. Use Promise.all for parallel processing within each batch
  3. Add delay between batches to prevent overwhelming the system

Would you like me to provide a code example for this implementation?
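The batching idea in steps 1–3 can be sketched generically. This is not k6-specific code (k6's http API is synchronous, so the details would differ); chunk is an illustrative helper and the per-batch body is a placeholder for the actual deletions.

```javascript
// Generic sketch of batch processing: split the work into fixed-size
// chunks and handle one chunk at a time, optionally pausing between
// chunks to avoid overwhelming the system. Illustration only.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

const dialogIds = ['a', 'b', 'c', 'd', 'e'];
for (const batch of chunk(dialogIds, 2)) {
  // A real cleanup would issue the batch's delete requests here,
  // then sleep briefly before the next batch.
  console.log(batch.length); // batch sizes: 2, 2, 1
}
```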

tests/k6/tests/performancetest_common/simpleSearch.js (2)

19-32: LGTM! Consider adding JSDoc for the new parameter.

The addition of the getFunction parameter improves modularity by allowing different GET functions to be used. The EU-specific conditional for labellog is correctly implemented.

Add parameter documentation:

 /**
  * Retrieves the content for a dialog.
  * Get dialog, dialog activities, seenlogs, labellog, and transmissions.
  * @param {Object} response - The response object.
  * @param {Object} paramsWithToken - The parameters with token.
+ * @param {Function} [getFunction=getEU] - The function to use for GET requests.
  * @returns {void}
  */

119-126: Consider adding User-Agent header to serviceownerSearch for consistency.

The tracing implementation with traceparent is well done. However, the User-Agent header is only present in graphqlSearch but missing in serviceownerSearch.

Add the User-Agent header to serviceownerSearch:

     let paramsWithToken = {
         headers: {
             Authorization: "Bearer " + serviceowner.token,
-            traceparent: traceparent
+            traceparent: traceparent,
+            'User-Agent': 'dialogporten-k6-serviceowner-search'
         },

Also applies to: 141-157

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 0972243 and e7ad00e.

⛔ Files ignored due to path filters (1)
  • tests/k6/tests/performancetest_data/serviceowners-staging.csv is excluded by !**/*.csv
📒 Files selected for processing (10)
  • tests/k6/common/config.js (1 hunks)
  • tests/k6/tests/enduser/performance/enduser-search.js (2 hunks)
  • tests/k6/tests/graphql/performance/graphql-search.js (2 hunks)
  • tests/k6/tests/performancetest_common/createDialog.js (3 hunks)
  • tests/k6/tests/performancetest_common/simpleSearch.js (7 hunks)
  • tests/k6/tests/performancetest_data/01-create-dialog.js (2 hunks)
  • tests/k6/tests/serviceowner/performance/create-dialog.js (1 hunks)
  • tests/k6/tests/serviceowner/performance/create-remove-dialog.js (1 hunks)
  • tests/k6/tests/serviceowner/performance/purge-dialogs.js (2 hunks)
  • tests/k6/tests/serviceowner/performance/serviceowner-search.js (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • tests/k6/common/config.js
🧰 Additional context used
📓 Learnings (1)
tests/k6/tests/serviceowner/performance/serviceowner-search.js (1)
Learnt from: dagfinno
PR: digdir/dialogporten#1373
File: tests/k6/tests/enduser/performance/simple-search.js:11-20
Timestamp: 2024-11-12T05:32:45.311Z
Learning: In `tests/k6/tests/performancetest_common/common.js`, the function `getDefaultThresholds` may intentionally create empty threshold arrays to make labels visible in k6 reports.
🔇 Additional comments (7)
tests/k6/tests/serviceowner/performance/serviceowner-search.js (1)

10-17: LGTM! Comprehensive threshold coverage for new API endpoints.

The expanded thresholds align well with the PR's objective of enhancing serviceowner APIs. The additions cover all relevant endpoints including dialogs, activities, seenlogs, and transmissions.

Let's verify that all API endpoints used in the test are covered by thresholds:

✅ Verification successful

Let me gather more information about the API endpoints and thresholds by checking the actual test file content.


Let me check the content of simpleSearch.js to verify the actual API endpoints being called and compare them with the thresholds.


Thresholds correctly cover all API endpoints used in the test

The thresholds defined in serviceowner-search.js perfectly match all the API endpoints called in the implementation:

  • get dialog covers the main dialog fetch
  • get dialog activities and get dialog activity cover the activities chain
  • get seenlogs and get seenlog cover the seenlog chain
  • get transmissions and get transmission cover the transmissions chain

All API calls made in the retrieveDialogContent function (when called with getSO) have corresponding threshold tags. The implementation is complete and accurate.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check if all API endpoints in the test have corresponding thresholds

# Search for HTTP requests in the test files
rg -A 2 "http\.(get|post|put|delete)" tests/k6/tests/performancetest_common/simpleSearch.js

# Compare with threshold definitions
echo "Thresholds defined:"
rg "getDefaultThresholds.*\[.*\]" tests/k6/tests/serviceowner/performance/serviceowner-search.js

Length of output: 325


Script:

#!/bin/bash
# Check the content of the test file
cat tests/k6/tests/serviceowner/performance/serviceowner-search.js

# Also check for any related test files that might be imported
rg -l "serviceowner.*search" tests/k6/tests/

Length of output: 1906


Script:

#!/bin/bash
# Check the content of the simpleSearch.js file to see the actual API calls
cat tests/k6/tests/performancetest_common/simpleSearch.js

Length of output: 6005
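Per the learning note above — getDefaultThresholds may intentionally create empty threshold arrays so labels stay visible in k6 reports — the helper's shape can be sketched as follows. This is a hypothetical reconstruction, not the repository's actual implementation in performancetest_common/common.js; only the k6 threshold-key syntax (metric name plus a tag filter) is standard.

```javascript
// Hypothetical reconstruction of a getDefaultThresholds-style helper:
// every metric/label pair gets an entry, even with an empty criteria
// array, so the label still appears in the k6 summary report.
function getDefaultThresholds(metrics, labels) {
  const thresholds = {};
  for (const metric of metrics) {
    for (const label of labels) {
      // k6 tag-filtered threshold key, e.g. "http_req_duration{name:get dialog}"
      thresholds[`${metric}{name:${label}}`] = [];
    }
  }
  return thresholds;
}

const t = getDefaultThresholds(['http_req_duration'], ['get dialog', 'get seenlogs']);
console.log(Object.keys(t));
// [ 'http_req_duration{name:get dialog}', 'http_req_duration{name:get seenlogs}' ]
```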

tests/k6/tests/performancetest_data/01-create-dialog.js (2)

1-2: LGTM: Clean import of performance sentinel value

The addition of the sentinel value import is well-structured and follows good practices for test data identification.


11-11: Verify search implementation handles sentinel tags

Let's verify that the search implementation properly handles the sentinel tags and doesn't expose them in unintended ways.

✅ Verification successful

Search implementation safely handles sentinel tags

The search implementation is safe with sentinel tags for the following reasons:

  • Sentinel tags are only used in test data generation and performance testing
  • The setSearchTags function in tests/k6/common/dialog.js properly validates and structures the tags
  • The sentinel values are confined to the test environment and are not part of the actual search logic
  • The sentinel tags are used consistently across test files for cleanup purposes only

The addition of the performance sentinel tag follows the same pattern as the existing test sentinel tag and doesn't impact the search functionality.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if search implementation filters or handles sentinel tags specially
ast-grep --pattern 'searchTags.includes($_) || searchTags.filter($_) || searchTags.find($_)'

# Look for any special handling of performance test data in search
rg -l "dialogporten-e2e-sentinel-performance" --type js

Length of output: 176


Script:

#!/bin/bash
# Check the content of config.js to understand how sentinel value is used
cat tests/k6/common/config.js

# Look for any search-related code that processes tags
rg -A 5 "searchTags" --type js

# Look for any search filtering or tag processing logic
ast-grep --pattern 'function $_(searchTags) { $$$ }'

Length of output: 8023

tests/k6/tests/serviceowner/performance/purge-dialogs.js (2)

10-16: LGTM! Good practice using centralized configuration.

The addition of UUID generation capability and centralized configuration value improves traceability and maintainability.


25-26: Verify consistent tracing implementation across the codebase.

Let's ensure that the tracing implementation is consistent with other parts of the system.

✅ Verification successful

Tracing implementation is consistent across test files

The implementation of traceparent in purge-dialogs.js follows the same pattern used across other test files:

  • Uses uuidv4() for generating trace IDs
  • Consistently includes traceparent in request headers
  • Properly includes trace values in tags where applicable
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for consistent tracing implementation across test files

# Search for traceparent usage in other test files
echo "Checking traceparent implementation in other files..."
rg -l "traceparent.*=.*" "tests/k6"

# Search for other tracing-related patterns
echo "Checking other tracing patterns..."
rg -l "trace.*id|span.*id" "tests/k6"

Length of output: 599


Script:

#!/bin/bash
# Check the implementation details of traceparent in the identified files
echo "Checking traceparent implementation in createDialog.js..."
rg "traceparent" -C 2 tests/k6/tests/performancetest_common/createDialog.js

echo -e "\nChecking traceparent implementation in simpleSearch.js..."
rg "traceparent" -C 2 tests/k6/tests/performancetest_common/simpleSearch.js

echo -e "\nChecking traceparent implementation in purge-dialogs.js..."
rg "traceparent" -C 2 tests/k6/tests/serviceowner/performance/purge-dialogs.js

Length of output: 2872

tests/k6/tests/performancetest_common/simpleSearch.js (2)

66-71: LGTM! Well-structured function chain with consistent parameter passing.

The implementation of getFunction parameter across getContent, getContentChain, and getUrl functions is clean and maintains good separation of concerns. The changes improve the flexibility of the test framework while keeping the functions focused.

Also applies to: 83-96, 105-106


Line range hint 1-157: Verify test coverage for the enhanced functionality.

The changes improve tracing and flexibility of the test framework. Let's verify the test coverage for these enhancements.

✅ Verification successful

Let me gather more information about the test coverage by checking how these functions are used in the test files we found.


Let me check the actual usage of the enhanced functionality in the test files we found to verify if they cover both getEU and getSO scenarios.


Based on the comprehensive results, I can now generate the final response:

Test coverage is comprehensive for both getEU and getSO functions

The enhanced functionality is well-tested across multiple scenarios:

  • Both getEU and getSO functions are thoroughly tested in dedicated test files
  • Test coverage includes:
    • Basic dialog retrieval
    • Search and filtering scenarios
    • Pagination and ordering
    • Error cases (404, invalid inputs)
    • Different parameter combinations
    • Chain of requests through retrieveDialogContent
  • The conditional labellog retrieval (specific to getEU) is properly handled in the implementation
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for test coverage of the enhanced functionality

# Look for test files that exercise these functions
echo "Searching for test files using these functions..."
rg -l "retrieveDialogContent|graphqlSearch|serviceownerSearch" tests/

# Check for existing test scenarios with different GET functions
echo "Checking for test scenarios with different GET functions..."
rg "getFunction" tests/

Length of output: 2445


Script:

#!/bin/bash
# Check how these functions are used in the test files

echo "Checking serviceowner-search.js usage..."
rg -A 5 "retrieveDialogContent|serviceownerSearch" tests/k6/tests/serviceowner/performance/serviceowner-search.js

echo -e "\nChecking graphql-search.js usage..."
rg -A 5 "graphqlSearch" tests/k6/tests/graphql/performance/graphql-search.js

# Check if there are any other test files that might be testing these scenarios
echo -e "\nChecking for additional test scenarios..."
fd -e js -e ts . tests/ --exec rg -l "getEU|getSO"

Length of output: 1911


Script:

#!/bin/bash
# Check the implementation details in test files

echo "Checking dialogSearch.js files for both enduser and serviceowner..."
rg -B 2 -A 5 "getEU|getSO" tests/k6/tests/enduser/dialogSearch.js tests/k6/tests/serviceowner/dialogSearch.js

echo -e "\nChecking dialogDetails.js files..."
rg -B 2 -A 5 "getEU|getSO" tests/k6/tests/enduser/dialogDetails.js tests/k6/tests/serviceowner/dialogDetails.js

echo -e "\nChecking request.js implementation..."
rg -B 2 -A 5 "export function getEU|export function getSO" tests/k6/common/request.js

Length of output: 28732

@dagfinno dagfinno merged commit b1d6eaf into main Nov 12, 2024
3 checks passed
@dagfinno dagfinno deleted the performance/expand-serviceowner-search branch November 12, 2024 11:54
oskogstad pushed a commit that referenced this pull request Nov 12, 2024
🤖 I have created a release *beep* *boop*
---


## [1.32.0](v1.31.0...v1.32.0) (2024-11-12)


### Features

* **graphql:** Set max execution depth to 10
([#1431](#1431))
([8845e49](8845e49)),
closes [#1430](#1430)
* **performance:** create a k6 purge script to run after creating
dialogs ([#1435](#1435))
([9555d78](9555d78))
* **performance:** Expands search for serviceowners, improved tracing
and logging
([#1439](#1439))
([b1d6eaf](b1d6eaf))

---
This PR was generated with [Release
Please](https://github.com/googleapis/release-please). See
[documentation](https://github.com/googleapis/release-please#release-please).
@coderabbitai coderabbitai bot mentioned this pull request Dec 12, 2024
4 tasks
@coderabbitai coderabbitai bot mentioned this pull request Dec 27, 2024
4 tasks
@coderabbitai coderabbitai bot mentioned this pull request Jan 8, 2025
4 tasks
2 participants