feat(performance): Expands search for serviceowners, improved tracing and logging #1439
Conversation
📝 Walkthrough

The pull request introduces several changes across multiple test scripts in the K6 framework. Key modifications include the addition of a new constant, `sentinelPerformanceValue`.

Changes
Possibly related PRs
Suggested reviewers
Actionable comments posted: 3
🧹 Outside diff range and nitpick comments (16)
tests/k6/tests/enduser/performance/enduser-search.js (2)
Line 25: LGTM! Consider adding a comment explaining the single user mode conditions.

The logic for determining single user mode is well-structured and handles undefined values properly using nullish coalescing.
```diff
+ // Single user mode is active when running with exactly one VU, one iteration, and no duration
  const isSingleUserMode = (options.vus ?? 1) === 1 && (options.iterations ?? 1) === 1 && (options.duration ?? 0) === 0;
```
Line range hint 26-30: LGTM! Consider caching the random selection for better performance.

The logic for user selection is clear and appropriate. For single user mode, it uses the first user, and for multi-user mode, it randomly selects a user.
Consider caching the random selection if multiple iterations are performed within the same VU context:
```diff
 export default function() {
+    // Cache the selected user within the VU context
+    let vu = __VU - 1;
+    if (!this.selectedUsers) {
+        this.selectedUsers = new Array(options.vus);
+    }
     if (!endUsersWithTokens || endUsersWithTokens.length === 0) {
         throw new Error('No end users loaded for testing');
     }
     const isSingleUserMode = (options.vus ?? 1) === 1 && (options.iterations ?? 1) === 1 && (options.duration ?? 0) === 0;
     if (isSingleUserMode) {
         enduserSearch(endUsersWithTokens[0]);
     } else {
-        enduserSearch(randomItem(endUsersWithTokens));
+        if (!this.selectedUsers[vu]) {
+            this.selectedUsers[vu] = randomItem(endUsersWithTokens);
+        }
+        enduserSearch(this.selectedUsers[vu]);
     }
 }
```

tests/k6/tests/serviceowner/performance/create-dialog.js (2)
Line range hint 22-27: LGTM! Consider extracting default values as constants.

The introduction of `isSingleUserMode` improves code clarity. However, the default values could be extracted as named constants for better maintainability. Consider this refactor:
```diff
+const DEFAULT_VUS = 1;
+const DEFAULT_ITERATIONS = 1;
+const DEFAULT_DURATION = 0;
-const isSingleUserMode = (options.vus ?? 1) === 1 && (options.iterations ?? 1) === 1 && (options.duration ?? 0) === 0;
+const isSingleUserMode = (options.vus ?? DEFAULT_VUS) === DEFAULT_VUS &&
+    (options.iterations ?? DEFAULT_ITERATIONS) === DEFAULT_ITERATIONS &&
+    (options.duration ?? DEFAULT_DURATION) === DEFAULT_DURATION;
```
Line range hint 1-4: Add a multi-user mode example in the documentation.

The comment only shows how to run in single-user mode. Adding an example for multi-user mode would be helpful for other developers.
Consider updating the comment:
```diff
 /**
  * Performance test for creating a dialog
- * Run: k6 run tests/k6/tests/serviceowner/performance/create-dialog.js --vus 1 --iterations 1
+ * Single-user mode: k6 run tests/k6/tests/serviceowner/performance/create-dialog.js --vus 1 --iterations 1
+ * Multi-user mode: k6 run tests/k6/tests/serviceowner/performance/create-dialog.js --vus 10 --iterations 100
  */
```

tests/k6/tests/serviceowner/performance/create-remove-dialog.js (2)
Lines 25-26: Consider extracting the test mode detection to a shared utility.

The introduction of `isSingleUserMode` improves code readability. However, since this condition might be useful across different test files, consider moving it to a shared utility function. Consider creating a utility function in a shared location:
```diff
+// In performancetest_common/testModeUtils.js
+export const isSingleUserTestMode = (opts) => {
+    return (opts.vus ?? 1) === 1 &&
+        (opts.iterations ?? 1) === 1 &&
+        (opts.duration ?? 0) === 0;
+};

 // In current file
-const isSingleUserMode = (options.vus ?? 1) === 1 && (options.iterations ?? 1) === 1 && (options.duration ?? 0) === 0;
+const isSingleUserMode = isSingleUserTestMode(options);
```
Line range hint 27-32: Add tracing information for test mode and data selection.

The control flow logic is clear and correct. However, given the PR's focus on improved tracing and logging, consider adding trace information about the test mode and selected data. Consider adding logging:
```diff
 if (isSingleUserMode) {
+    console.log(`Running in single user mode with fixed service owner ${serviceOwners[0].id} and end user ${endUsers[0].id}`);
     createAndRemoveDialog(serviceOwners[0], endUsers[0]);
 } else {
+    const selectedServiceOwner = randomItem(serviceOwners);
+    const selectedEndUser = randomItem(endUsers);
+    console.log(`Running in multi-user mode with random service owner ${selectedServiceOwner.id} and end user ${selectedEndUser.id}`);
-    createAndRemoveDialog(randomItem(serviceOwners), randomItem(endUsers));
+    createAndRemoveDialog(selectedServiceOwner, selectedEndUser);
 }
```

tests/k6/tests/graphql/performance/graphql-search.js (3)
Line range hint 14-21: Consider adding tracing-specific configurations

Given the PR's focus on improved tracing, consider adding tracing-specific thresholds or metrics to the options configuration. This would help monitor and analyze the tracing overhead and performance impact.
```diff
 export let options = {
     summaryTrendStats: ['avg', 'min', 'med', 'max', 'p(95)', 'p(99)', 'p(99.5)', 'p(99.9)', 'count'],
-    thresholds: getDefaultThresholds(['http_req_duration', 'http_reqs'],['graphql search'])
+    thresholds: getDefaultThresholds(
+        ['http_req_duration', 'http_reqs', 'http_req_sending'],
+        ['graphql search', 'tracing']
+    )
 };
```
Line range hint 26-29: Enhance error handling with more context

The current error handling could be more informative for debugging purposes, especially in performance testing scenarios.
```diff
-if (!endUsers || endUsers.length === 0) {
-    throw new Error('No end users loaded for testing');
+if (!endUsers) {
+    throw new Error('End users array is undefined - check test data loading');
+}
+if (endUsers.length === 0) {
+    throw new Error('End users array is empty - verify test data configuration');
 }
```
Line range hint 30-36: Consider extracting mode detection logic

The single-user mode detection logic could be reused across other performance tests. Consider extracting it into a shared utility function.
```diff
+// In a shared utility file (e.g., performancetest_common/testMode.js)
+export const isSingleUserMode = (opts) => {
+    return (opts.vus ?? 1) === 1 &&
+        (opts.iterations ?? 1) === 1 &&
+        (opts.duration ?? 0) === 0;
+};

 // In this file
-const isSingleUserMode = (options.vus ?? 1) === 1 && (options.iterations ?? 1) === 1 && (options.duration ?? 0) === 0;
+const testMode = isSingleUserMode(options);
-if (isSingleUserMode) {
+if (testMode) {
```

This refactoring would:
- Improve code reusability across test files
- Make the mode detection logic easier to maintain and update
- Keep the test file focused on its primary responsibility
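The suggested helper can be exercised in isolation. A minimal runnable sketch (plain JavaScript, helper name taken from the suggestion above; the option objects are illustrative):

```javascript
// Hypothetical shared helper (suggested location: performancetest_common/testMode.js).
// "Single-user mode" means exactly one VU, one iteration, and no fixed duration;
// missing options fall back to defaults via nullish coalescing.
const isSingleUserMode = (opts) =>
    (opts.vus ?? 1) === 1 &&
    (opts.iterations ?? 1) === 1 &&
    (opts.duration ?? 0) === 0;

// Example k6-style option objects and the mode they resolve to:
console.log(isSingleUserMode({}));                           // true: all defaults
console.log(isSingleUserMode({ vus: 1, iterations: 1 }));    // true
console.log(isSingleUserMode({ vus: 10, iterations: 100 })); // false
console.log(isSingleUserMode({ duration: '30s' }));          // false: duration set
```

Because the defaults live in one place, a test file only passes its real `options` object and cannot drift from the shared definition of "single-user".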
tests/k6/tests/performancetest_data/01-create-dialog.js (1)
Line range hint 39-52: Consider adding performance tracing logs

Given the PR's objective of improving tracing, consider adding debug logs when the sentinel tag is added. This would help track performance test execution flow.
Example addition:
```diff
 export default function (endUser, resource) {
     if (!endUser?.match(/^\d{11}$/)) {
         throw new Error('endUser must be a 11-digit number');
     }
     if (!resource?.trim()) {
         throw new Error('resource is required');
     }
     let payload = createDialogPayload();
     payload.serviceResource = "urn:altinn:resource:" + resource;
     payload.party = "urn:altinn:person:identifier-no:" + endUser;
+    console.debug(`Creating performance test dialog for user ${endUser} with resource ${resource}`);
     return cleanUp(payload);
 }
```
tests/k6/tests/performancetest_common/createDialog.js (2)
Line 17: Consider following the W3C trace context specification for traceparent.

While adding tracing is a good improvement, consider following the W3C trace context specification format for the traceparent header: `00-<trace-id>-<parent-id>-<trace-flags>`. Here's how you could modify it:
```diff
-var traceparent = uuidv4();
+var traceId = uuidv4().replace(/-/g, '');
+var parentId = uuidv4().replace(/-/g, '').substr(0, 16);
+var traceparent = `00-${traceId}-${parentId}-01`;
```

Also applies to: 21-21, 23-23
Lines 58-61: Consider adding error handling for dialog removal.

The code regenerates traceparent and updates headers/tags correctly, but consider adding error handling and logging for cases where dialog removal fails.
```diff
 describe('remove dialog', () => {
     traceparent = uuidv4();
     paramsWithToken.tags.name = 'remove dialog';
     paramsWithToken.tags.traceparent = traceparent;
     paramsWithToken.headers.traceparent = traceparent;
     if (dialogId) {
         let r = purgeSO('dialogs/' + dialogId, paramsWithToken);
-        expect(r.status, 'response status').to.equal(204);
+        try {
+            expect(r.status, 'response status').to.equal(204);
+        } catch (error) {
+            console.error(`Failed to remove dialog ${dialogId}: ${error.message}`);
+            throw error;
+        }
     }
 });
```

tests/k6/tests/serviceowner/performance/purge-dialogs.js (2)
Lines 25-30: Consider following the W3C trace context specification for better interoperability.

While adding tracing is great, the current implementation might not integrate well with other tracing systems. The W3C trace context specification defines a specific format for the `traceparent` header: `00-<trace-id>-<parent-id>-01`. Consider this implementation:
```diff
-var traceparent = uuidv4();
-console.log("Searching for dialogs to purge, tracevalue: " + traceparent);
+var traceId = uuidv4().replace(/-/g, '');
+var parentId = '0000000000000000';
+var traceparent = `00-${traceId}-${parentId}-01`;
+console.log("Searching for dialogs to purge, traceId: " + traceId);
```
Line range hint 107-124: Consider batch processing for better performance.

The current implementation deletes dialogs one by one, which might not be optimal for large cleanups. Consider implementing batch processing to reduce the number of API calls.
Consider this approach:
- Process dialogs in batches (e.g., 50 at a time)
- Use Promise.all for parallel processing within each batch
- Add delay between batches to prevent overwhelming the system
Would you like me to provide a code example for this implementation?
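The batching shape sketched in the bullets above could look like the following (generic JavaScript; `deleteDialog`, the batch size, and the delay are illustrative assumptions, not existing test code):

```javascript
// Hypothetical batch-deletion sketch following the suggested approach:
// process dialogs in fixed-size batches, in parallel within each batch,
// with a short pause between batches.
function chunk(items, size) {
    const batches = [];
    for (let i = 0; i < items.length; i += size) {
        batches.push(items.slice(i, i + size));
    }
    return batches;
}

async function purgeInBatches(dialogIds, deleteDialog, batchSize = 50, delayMs = 100) {
    for (const batch of chunk(dialogIds, batchSize)) {
        // Parallel within a batch...
        await Promise.all(batch.map((id) => deleteDialog(id)));
        // ...then a brief pause between batches to avoid overwhelming the system.
        await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
}
```

One caveat for k6 specifically: its HTTP client is synchronous per VU, so in-script `Promise.all` parallelism does not apply there; in k6 the equivalent is spreading the deletions across more VUs, while the chunk-and-pause structure stays the same.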
tests/k6/tests/performancetest_common/simpleSearch.js (2)
Lines 19-32: LGTM! Consider adding JSDoc for the new parameter.

The addition of the `getFunction` parameter improves modularity by allowing different GET functions to be used. The EU-specific conditional for labellog is correctly implemented. Add parameter documentation:
```diff
 /**
  * Retrieves the content for a dialog.
  * Get dialog, dialog activities, seenlogs, labellog, and transmissions.
  * @param {Object} response - The response object.
  * @param {Object} paramsWithToken - The parameters with token.
+ * @param {Function} [getFunction=getEU] - The function to use for GET requests.
  * @returns {void}
  */
```
Lines 119-126: Consider adding a User-Agent header to serviceownerSearch for consistency.

The tracing implementation with `traceparent` is well done. However, the `User-Agent` header is only present in `graphqlSearch` but missing in `serviceownerSearch`. Add the User-Agent header to `serviceownerSearch`:
```diff
 let paramsWithToken = {
     headers: {
         Authorization: "Bearer " + serviceowner.token,
-        traceparent: traceparent
+        traceparent: traceparent,
+        'User-Agent': 'dialogporten-k6-serviceowner-search'
     },
```

Also applies to: 141-157
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (1)
`tests/k6/tests/performancetest_data/serviceowners-staging.csv` is excluded by `!**/*.csv`
📒 Files selected for processing (10)
- tests/k6/common/config.js (1 hunks)
- tests/k6/tests/enduser/performance/enduser-search.js (2 hunks)
- tests/k6/tests/graphql/performance/graphql-search.js (2 hunks)
- tests/k6/tests/performancetest_common/createDialog.js (3 hunks)
- tests/k6/tests/performancetest_common/simpleSearch.js (7 hunks)
- tests/k6/tests/performancetest_data/01-create-dialog.js (2 hunks)
- tests/k6/tests/serviceowner/performance/create-dialog.js (1 hunks)
- tests/k6/tests/serviceowner/performance/create-remove-dialog.js (1 hunks)
- tests/k6/tests/serviceowner/performance/purge-dialogs.js (2 hunks)
- tests/k6/tests/serviceowner/performance/serviceowner-search.js (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- tests/k6/common/config.js
🧰 Additional context used
📓 Learnings (1)
tests/k6/tests/serviceowner/performance/serviceowner-search.js (1)
Learnt from: dagfinno
PR: digdir/dialogporten#1373
File: tests/k6/tests/enduser/performance/simple-search.js:11-20
Timestamp: 2024-11-12T05:32:45.311Z
Learning: In `tests/k6/tests/performancetest_common/common.js`, the function `getDefaultThresholds` may intentionally create empty threshold arrays to make labels visible in k6 reports.
🔇 Additional comments (7)
tests/k6/tests/serviceowner/performance/serviceowner-search.js (1)
Lines 10-17: LGTM! Comprehensive threshold coverage for new API endpoints.
The expanded thresholds align well with the PR's objective of enhancing serviceowner APIs. The additions cover all relevant endpoints including dialogs, activities, seenlogs, and transmissions.
Let's verify that all API endpoints used in the test are covered by thresholds:
✅ Verification successful
Let me gather more information about the API endpoints and thresholds by checking the actual test file content.
Let me check the content of simpleSearch.js to verify the actual API endpoints being called and compare them with the thresholds.
Thresholds correctly cover all API endpoints used in the test
The thresholds defined in `serviceowner-search.js` perfectly match all the API endpoints called in the implementation:

- `get dialog` covers the main dialog fetch
- `get dialog activities` and `get dialog activity` cover the activities chain
- `get seenlogs` and `get seenlog` cover the seenlog chain
- `get transmissions` and `get transmission` cover the transmissions chain
All API calls made in the `retrieveDialogContent` function (when called with `getSO`) have corresponding threshold tags. The implementation is complete and accurate.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Check if all API endpoints in the test have corresponding thresholds
# Search for HTTP requests in the test files
rg -A 2 "http\.(get|post|put|delete)" tests/k6/tests/performancetest_common/simpleSearch.js
# Compare with threshold definitions
echo "Thresholds defined:"
rg "getDefaultThresholds.*\[.*\]" tests/k6/tests/serviceowner/performance/serviceowner-search.js
```
Length of output: 325
Script:
```bash
#!/bin/bash
# Check the content of the test file
cat tests/k6/tests/serviceowner/performance/serviceowner-search.js
# Also check for any related test files that might be imported
rg -l "serviceowner.*search" tests/k6/tests/
```
Length of output: 1906
Script:
```bash
#!/bin/bash
# Check the content of the simpleSearch.js file to see the actual API calls
cat tests/k6/tests/performancetest_common/simpleSearch.js
```
Length of output: 6005
tests/k6/tests/performancetest_data/01-create-dialog.js (2)
Lines 1-2: LGTM: Clean import of performance sentinel value
The addition of the sentinel value import is well-structured and follows good practices for test data identification.
Line 11: Verify search implementation handles sentinel tags
Let's verify that the search implementation properly handles the sentinel tags and doesn't expose them in unintended ways.
✅ Verification successful
Search implementation safely handles sentinel tags
The search implementation is safe with sentinel tags for the following reasons:
- Sentinel tags are only used in test data generation and performance testing
- The `setSearchTags` function in `tests/k6/common/dialog.js` properly validates and structures the tags
- The sentinel values are confined to the test environment and are not part of the actual search logic
- The sentinel tags are used consistently across test files for cleanup purposes only
The addition of the performance sentinel tag follows the same pattern as the existing test sentinel tag and doesn't impact the search functionality.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check if search implementation filters or handles sentinel tags specially
ast-grep --pattern 'searchTags.includes($_) || searchTags.filter($_) || searchTags.find($_)'
# Look for any special handling of performance test data in search
rg -l "dialogporten-e2e-sentinel-performance" --type js
```
Length of output: 176
Script:
```bash
#!/bin/bash
# Check the content of config.js to understand how sentinel value is used
cat tests/k6/common/config.js
# Look for any search-related code that processes tags
rg -A 5 "searchTags" --type js
# Look for any search filtering or tag processing logic
ast-grep --pattern 'function $_(searchTags) { $$$ }'
```
Length of output: 8023
tests/k6/tests/serviceowner/performance/purge-dialogs.js (2)
Lines 10-16: LGTM! Good practice using centralized configuration.
The addition of UUID generation capability and centralized configuration value improves traceability and maintainability.
Lines 25-26: Verify consistent tracing implementation across the codebase.
Let's ensure that the tracing implementation is consistent with other parts of the system.
✅ Verification successful
Tracing implementation is consistent across test files
The implementation of `traceparent` in `purge-dialogs.js` follows the same pattern used across other test files:

- Uses `uuidv4()` for generating trace IDs
- Consistently includes traceparent in request headers
- Properly includes trace values in tags where applicable
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Check for consistent tracing implementation across test files
# Search for traceparent usage in other test files
echo "Checking traceparent implementation in other files..."
rg -l "traceparent.*=.*" "tests/k6"
# Search for other tracing-related patterns
echo "Checking other tracing patterns..."
rg -l "trace.*id|span.*id" "tests/k6"
```
Length of output: 599
Script:
```bash
#!/bin/bash
# Check the implementation details of traceparent in the identified files
echo "Checking traceparent implementation in createDialog.js..."
rg "traceparent" -C 2 tests/k6/tests/performancetest_common/createDialog.js
echo -e "\nChecking traceparent implementation in simpleSearch.js..."
rg "traceparent" -C 2 tests/k6/tests/performancetest_common/simpleSearch.js
echo -e "\nChecking traceparent implementation in purge-dialogs.js..."
rg "traceparent" -C 2 tests/k6/tests/serviceowner/performance/purge-dialogs.js
```
Length of output: 2872
tests/k6/tests/performancetest_common/simpleSearch.js (2)
Lines 66-71: LGTM! Well-structured function chain with consistent parameter passing.
The implementation of the `getFunction` parameter across `getContent`, `getContentChain`, and `getUrl` is clean and maintains good separation of concerns. The changes improve the flexibility of the test framework while keeping the functions focused.
Also applies to: 83-96, 105-106
Line range hint 1-157: Verify test coverage for the enhanced functionality.
The changes improve tracing and flexibility of the test framework. Let's verify the test coverage for these enhancements.
✅ Verification successful
Let me gather more information about the test coverage by checking how these functions are used in the test files we found.
Let me check the actual usage of the enhanced functionality in the test files we found to verify if they cover both getEU and getSO scenarios.
Based on the comprehensive results, I can now generate the final response:
Test coverage is comprehensive for both `getEU` and `getSO` functions

The enhanced functionality is well-tested across multiple scenarios:

- Both `getEU` and `getSO` are thoroughly tested in dedicated test files
- Test coverage includes:
  - Basic dialog retrieval
  - Search and filtering scenarios
  - Pagination and ordering
  - Error cases (404, invalid inputs)
  - Different parameter combinations
  - Chain of requests through `retrieveDialogContent`
- The conditional labellog retrieval (specific to `getEU`) is properly handled in the implementation
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Check for test coverage of the enhanced functionality
# Look for test files that exercise these functions
echo "Searching for test files using these functions..."
rg -l "retrieveDialogContent|graphqlSearch|serviceownerSearch" tests/
# Check for existing test scenarios with different GET functions
echo "Checking for test scenarios with different GET functions..."
rg "getFunction" tests/
```
Length of output: 2445
Script:
```bash
#!/bin/bash
# Check how these functions are used in the test files
echo "Checking serviceowner-search.js usage..."
rg -A 5 "retrieveDialogContent|serviceownerSearch" tests/k6/tests/serviceowner/performance/serviceowner-search.js
echo -e "\nChecking graphql-search.js usage..."
rg -A 5 "graphqlSearch" tests/k6/tests/graphql/performance/graphql-search.js
# Check if there are any other test files that might be testing these scenarios
echo -e "\nChecking for additional test scenarios..."
fd -e js -e ts . tests/ --exec rg -l "getEU|getSO"
```
Length of output: 1911
Script:
```bash
#!/bin/bash
# Check the implementation details in test files
echo "Checking dialogSearch.js files for both enduser and serviceowner..."
rg -B 2 -A 5 "getEU|getSO" tests/k6/tests/enduser/dialogSearch.js tests/k6/tests/serviceowner/dialogSearch.js
echo -e "\nChecking dialogDetails.js files..."
rg -B 2 -A 5 "getEU|getSO" tests/k6/tests/enduser/dialogDetails.js tests/k6/tests/serviceowner/dialogDetails.js
echo -e "\nChecking request.js implementation..."
rg -B 2 -A 5 "export function getEU|export function getSO" tests/k6/common/request.js
```
Length of output: 28732
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
…igdir/dialogporten into performance/expand-serviceowner-search
Quality Gate passed
🤖 I have created a release *beep* *boop*

## [1.32.0](v1.31.0...v1.32.0) (2024-11-12)

### Features

* **graphql:** Set max execution depth to 10 ([#1431](#1431)) ([8845e49](8845e49)), closes [#1430](#1430)
* **performance:** create a k6 purge script to run after creating dialogs ([#1435](#1435)) ([9555d78](9555d78))
* **performance:** Expands search for serviceowners, improved tracing and logging ([#1439](#1439)) ([b1d6eaf](b1d6eaf))

This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).
Implements more GET test calls to serviceowner APIs and improves tracing and logging.
Description
Related Issue(s)
Verification
Documentation (`docs`-directory, Altinnpedia or a separate linked PR in altinn-studio-docs, if applicable)

Summary by CodeRabbit
Release Notes
New Features

- `sentinelPerformanceValue` for enhanced performance tracking.
- `isSingleUserMode` for better user scenario handling.

Improvements
Bug Fixes
Documentation