
chore(performance): Add readmes #1501

Merged
merged 19 commits into main from performance/readme on Nov 21, 2024

Conversation


@dagfinno dagfinno commented Nov 20, 2024

Add README.md for serviceowner/performance, enduser/performance and graphql/performance

Description

Related Issue(s)

Verification

  • Your code builds clean without any errors or warnings
  • Manual testing done (required)
  • Relevant automated test added (if you find this hard, leave it and we'll help out)

Documentation

  • Documentation is updated (either in the docs-directory, Altinnpedia, or a separate linked PR in altinn-studio-docs, if applicable)

Summary by CodeRabbit

  • New Features

    • Introduced detailed README files for performance testing across various API endpoints, including end-user dialogs, service owner APIs, and GraphQL functionalities.
  • Documentation

    • Added comprehensive instructions for executing performance tests via command line and GitHub Actions, including prerequisites, test execution steps, and cleanup procedures.

@dagfinno dagfinno requested a review from a team as a code owner November 20, 2024 12:17

coderabbitai bot commented Nov 20, 2024

📝 Walkthrough

This pull request introduces new README documentation for performance testing across multiple API endpoints, covering the api/v1/enduser/dialogs endpoints, the serviceowner API, and the GraphQL search functionality. Each README provides detailed instructions on prerequisites, test execution via command line and GitHub Actions, and includes information on managing test artifacts. The documentation aims to standardize the performance testing process and ensure clarity for users running these tests.
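
As a rough illustration of what the k6 test files referenced by these READMEs look like, here is a minimal sketch of a performance test script that takes its environment and load parameters from the command line. The host name and endpoint path are placeholders for illustration, not the project's actual configuration.

```javascript
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  // --vus and --duration on the k6 command line override these defaults
  vus: 1,
  duration: '30s',
  thresholds: {
    http_req_failed: ['rate<0.01'], // mark the run as failed if more than 1% of requests fail
  },
};

export default function () {
  // API_VERSION and API_ENVIRONMENT are passed with -e, as described in the READMEs
  const version = __ENV.API_VERSION || 'v1';
  const env = __ENV.API_ENVIRONMENT || 'test'; // test | staging | yt01
  const baseUrl = `https://dialogporten.${env}.example.invalid/api/${version}`; // hypothetical host

  const res = http.get(`${baseUrl}/enduser/dialogs`, {
    headers: { Authorization: `Bearer ${__ENV.TOKEN || ''}` }, // pre-generated token
  });
  check(res, { 'status is 200': (r) => r.status === 200 });
}
```

A script like this would be run with, for example, `k6 run enduser-search.js -e API_VERSION=v1 -e API_ENVIRONMENT=test --vus=10 --duration=30s`; the GitHub Actions workflow asks for similar parameters (vus, duration, test suite path and token type).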

Changes

  • tests/k6/tests/enduser/performance/README.md: Added detailed README for performance testing of GET endpoints in api/v1/enduser/dialogs.
  • tests/k6/tests/serviceowner/performance/README.md: Updated README for performance testing of serviceowner API, covering GET and POST endpoints.
  • tests/k6/tests/graphql/performance/README.md: Introduced README for performance testing of GraphQL endpoints focusing on POST requests.

Possibly related PRs

Suggested reviewers

  • arealmaas: Suggested reviewer for the changes made in the performance testing documentation.

Warning

Rate limit exceeded

@dagfinno has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 10 minutes and 38 seconds before requesting another review.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Outside diff range and nitpick comments (11)
tests/k6/tests/graphql/README.md (3)

1-4: Update title to match content description

The current title "Graphql get dialogs" doesn't accurately reflect the content. Consider updating it to "GraphQL Search Performance Tests" or similar, as this README specifically covers GraphQL search functionality testing.


8-11: Enhance test file section with more details

Consider adding:

  • The full path to the test file
  • A brief description of the test scenarios or parameters it supports
  • Expected outputs or success criteria

60-61: Improve Test Results section formatting and clarity

The section has several issues:

  1. Remove duplicate "Test Results" heading
  2. Add proper punctuation and capitalization
  3. Clarify the AppInsights integration
-## Test Results
-Test results can be found in github action run log and in appinsights. We are prepared for exporting results to grafana, but so far results are exported to a private grafana instance only, as can be seen from the `.secrets`listed earlier 
+## Test Results
+Test results can be found in:
+- GitHub Actions run logs
+- Azure Application Insights
+- Private Grafana instance (currently limited access)
+
+Note: We are prepared for public Grafana exports in the future.
🧰 Tools
🪛 LanguageTool

[grammar] ~60-~60: This phrase is duplicated. You should probably use “Test Results” only once.
Context: ...h.js \ --input tokens=personal ``` ## Test Results Test results can be found in github action run log a...

(PHRASE_REPETITION)


[typographical] ~61-~61: It seems that a comma is missing.
Context: ...d for exporting results to grafana, but so far results are exported to a private grafa...

(SO_COMMA)

tests/k6/tests/enduser/performance/README.md (5)

1-3: Consider enhancing the title for better clarity.

The current title "Enduser get dialogs" could be more descriptive. Consider something like "Performance Tests for Enduser Dialog API Endpoints" to better reflect the document's content.

-# Enduser get dialogs
+# Performance Tests for Enduser Dialog API Endpoints

6-6: Consider using an absolute link for prerequisites.

The relative link to prerequisites could break if files are reorganized. Consider using an absolute GitHub URL.


9-10: Add more context about the test file's purpose.

Consider adding a brief description of what enduser-search.js does and its key features.

 ## Test file
-The test file associated with this performance test is 
-- `enduser-search.js`
+The test file associated with this performance test is:
+- `enduser-search.js` - Implements sequential API endpoint testing for enduser dialogs, measuring response times and performance metrics.

15-24: Enhance endpoint documentation with parameter descriptions.

Consider adding descriptions for the placeholders (<ssn>, <dialogId>, etc.) and their expected formats.


74-76: Improve clarity of test results section.

The section needs grammatical improvements and better structure.

 ## Test Results
-Test results can be found in github action run log and in appinsights. We are prepared for exporting results to grafana, but so far results are exported to a private grafana instance only, as can be seen from the `.secrets`listed earlier 
+Test results can be found in:
+- GitHub Actions run logs
+- Application Insights
+
+While we're prepared to export results to Grafana, currently results are only exported to a private Grafana instance (configured via the `.secrets` file mentioned above).
🧰 Tools
🪛 LanguageTool

[grammar] ~74-~74: This phrase is duplicated. You should probably use “Test Results” only once.
Context: ...h.js \ --input tokens=personal ``` ## Test Results Test results can be found in github action run log a...

(PHRASE_REPETITION)


[typographical] ~75-~75: It seems that a comma is missing.
Context: ...d for exporting results to grafana, but so far results are exported to a private grafa...

(SO_COMMA)

tests/k6/tests/serviceowner/performance/README.md (3)

10-14: Fix grammatical error and enhance test file descriptions

The sentence has a subject-verb agreement error, and the test file descriptions could be more informative. (A hedged lifecycle sketch follows the suggested diff below.)

Apply this diff:

-The test files associated with this performance test is 
-  - `create-dialog.js`
-  - `create-remove-dialog.js`
-  - `serviceowner-search.js`
-  - `purge-dialogs.js` (used for cleanup after test)
+The test files associated with this performance test are:
+  - `create-dialog.js` - Tests the performance of dialog creation endpoints
+  - `create-remove-dialog.js` - Evaluates the complete lifecycle of dialog creation and removal
+  - `serviceowner-search.js` - Measures the performance of dialog search functionality
+  - `purge-dialogs.js` - Utility script for cleaning up test data after performance tests
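
For context, the create-then-remove lifecycle that `create-remove-dialog.js` is described as covering could be sketched in k6 roughly as below. The base URL, request payload and response fields are assumptions made for illustration, not the actual implementation.

```javascript
import http from 'k6/http';
import { check } from 'k6';

const BASE = __ENV.BASE_URL || 'https://localhost/dialogporten/api/v1'; // hypothetical base URL
const PARAMS = {
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${__ENV.SERVICEOWNER_TOKEN || ''}`, // pre-generated service owner token
  },
};

export default function () {
  // Create a dialog; an empty object stands in for a real dialog payload here
  const createRes = http.post(`${BASE}/serviceowner/dialogs`, JSON.stringify({}), PARAMS);
  check(createRes, { 'dialog created': (r) => r.status === 201 });

  // Remove it again so the iteration leaves no test data behind
  const dialogId = createRes.json('id'); // assumes the new dialog id is returned in the body
  const deleteRes = http.del(`${BASE}/serviceowner/dialogs/${dialogId}`, null, PARAMS);
  check(deleteRes, { 'dialog removed': (r) => r.status === 204 });
}
```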

31-35: Add example command with realistic values

To make the instructions more user-friendly, include a concrete example of the test command.

Add an example after the command:

k6 run <test-file> -e API_VERSION=v1 \
-e API_ENVIRONMENT=<(test|staging|yt01)> \
--vus=<vus> --duration=<duration>

+Example:
+k6 run create-dialog.js -e API_VERSION=v1 \
+-e API_ENVIRONMENT=test \
+--vus=10 --duration=30s

76-95: Enhance cleanup documentation with best practices

Add information about when to run cleanup and error handling.

Add the following information:

### Clean up
To clean up after the performance test, you can use the `purge-dialogs.js` test file. This file is specifically designed for cleanup purposes. It ensures that any resources created during the test, such as dialogs, are removed from the system.

+Best practices for cleanup:
+- Run cleanup immediately after completing your test session
+- Verify cleanup success in the logs
+- If cleanup fails, retry with increased timeout values
+- For debugging purposes, you may want to preserve test data temporarily
+
To run the cleanup script, follow these steps:
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 130d6bb and 4d24c0e.

📒 Files selected for processing (3)
  • tests/k6/tests/enduser/performance/README.md (1 hunks)
  • tests/k6/tests/graphql/README.md (1 hunks)
  • tests/k6/tests/serviceowner/performance/README.md (1 hunks)

🔇 Additional comments (4)
tests/k6/tests/graphql/README.md (1)

63-64: Expand TODO section with more context

The current TODO item "Fix reporting" lacks context. Consider:

  1. Describing the specific reporting issues
  2. Adding acceptance criteria
  3. Linking to related issues or documentation
tests/k6/tests/enduser/performance/README.md (2)

57-64: Enhance security-related documentation.

The .secrets example should clearly indicate that the ** are placeholders. Consider adding a note about secure handling of credentials and tokens.

-TOKEN_GENERATOR_USERNAME:**
-TOKEN_GENERATOR_PASSWORD:**
+TOKEN_GENERATOR_USERNAME=<your-username-here>
+TOKEN_GENERATOR_PASSWORD=<your-password-here>
 # Additional secrets...

+> Note: Replace placeholder values with actual credentials. Never commit real credentials to version control.


77-79: Expand the TODO section with specific requirements.

The current TODO item "Fix reporting" is too vague. Consider adding more details about what needs to be fixed and any specific requirements or acceptance criteria.

tests/k6/tests/serviceowner/performance/README.md (1)

96-100: Expand reporting and results documentation

The test results and TODO sections need more detailed information:
- How to access the private Grafana instance
- What metrics are available and how to interpret them
- Specific details about what needs to be fixed in reporting


Would you like me to help create a more detailed results and reporting section?

@dagfinno dagfinno changed the title from feat(performance): Add readmes to chore(performance): Add readmes on Nov 20, 2024

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (13)
tests/k6/tests/graphql/README.md (4)

1-3: Enhance document structure with proper formatting.

Consider improving the heading to better reflect the content and adding proper formatting for code elements. (A hedged k6 sketch of such a request follows the suggested change.)

-# Graphql get dialogs
+# GraphQL Performance Tests
 
-This directory holds a performance test with POST for `graphql`. The test file `graphql-search.js` is responsible for executing the performance test. It includes a list of end users (ssn) with pre-generated tokens and performs a POST request to the GraphQL endpoint with the payload `input: { party: ["urn:altinn:person:identifier-no:${identifier}"]}`. This test is designed to measure the performance of the GraphQL search functionality. 
+This directory holds a performance test with POST for GraphQL. The test file `graphql-search.js` is responsible for executing the performance test. It includes a list of end users (SSN) with pre-generated tokens and performs a POST request to the GraphQL endpoint with the following payload:
+
+```graphql
+input: { 
+  party: ["urn:altinn:person:identifier-no:${identifier}"]
+}
+```
+
+This test is designed to measure the performance of the GraphQL search functionality.
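
To make the payload above concrete, a single k6 iteration performing such a POST could look roughly like the sketch below. The endpoint URL, query name and selected fields are assumptions for illustration; only the `party` input shape is taken from the description above.

```javascript
import http from 'k6/http';
import { check } from 'k6';

const GRAPHQL_URL = __ENV.GRAPHQL_URL || 'https://localhost/dialogporten/graphql'; // hypothetical endpoint

export default function () {
  const identifier = __ENV.SSN || '01015012345'; // stand-in end-user identifier
  const body = JSON.stringify({
    // Query name, input type and selected fields are illustrative; the real schema may differ
    query: `query ($input: SearchDialogInput!) {
      searchDialogs(input: $input) { items { id } }
    }`,
    variables: { input: { party: [`urn:altinn:person:identifier-no:${identifier}`] } },
  });

  const res = http.post(GRAPHQL_URL, body, {
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${__ENV.TOKEN || ''}`, // pre-generated personal token
    },
  });
  check(res, { 'status is 200': (r) => r.status === 200 });
}
```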

18-29: Enhance security and clarity in command examples.

  1. Add a warning about handling credentials securely.
  2. Use more descriptive placeholder values.
  3. Add example values as comments.
-TOKEN_GENERATOR_USERNAME=<username> \
-TOKEN_GENERATOR_PASSWORD=<passwd> API_ENVIRONMENT=<(test|staging|yt01)> \
+# Ensure these credentials are kept secure and never committed to version control
+TOKEN_GENERATOR_USERNAME=<your-username> \  # Example: "john.doe"
+TOKEN_GENERATOR_PASSWORD=<your-password> \  # Example: Keep this secret!
+API_ENVIRONMENT=<environment> \             # Example: "test"

31-36: Add more context to GitHub Actions instructions.

Consider adding:

  1. Required permissions for running the workflow
  2. Expected duration of the test
  3. Where to find the results after completion

60-64: Improve results section clarity and TODO details.

  1. Fix grammar in the results section
  2. Expand the TODO with more context about the reporting issue
-Test results can be found in GitHub action run log and in App Insights. We are prepared for exporting results to grafana, but so far results are exported to a private grafana instance only, as can be seen from the .secrets listed earlier 
+Test results can be found in the GitHub Actions run log and in App Insights. While we are prepared for exporting results to Grafana, currently results are only exported to a private Grafana instance, as configured in the `.secrets` file.

-## TODO
-Fix reporting
+## TODO
+- Fix reporting issues:
+  - [ ] Configure public Grafana instance
+  - [ ] Set up proper metrics export
+  - [ ] Implement comprehensive test result visualization
🧰 Tools
🪛 LanguageTool

[grammar] ~60-~60: This phrase is duplicated. You should probably use “Test Results” only once.
Context: ...h.js \ --input tokens=personal ``` ## Test Results Test results can be found in GitHub action run log a...

(PHRASE_REPETITION)


[uncategorized] ~61-~61: You might be missing the article “the” here.
Context: ...st Results Test results can be found in GitHub action run log and in App Insights. We ...

(AI_EN_LECTOR_MISSING_DETERMINER_THE)


[typographical] ~61-~61: It seems that a comma is missing.
Context: ...d for exporting results to grafana, but so far results are exported to a private grafa...

(SO_COMMA)

tests/k6/tests/enduser/performance/README.md (5)

8-11: Consider enhancing the test file section with more details.

Add a brief description of what enduser-search.js does and its key features to help users understand its purpose without having to open the file.

 ## Test file
-The test file associated with this performance test is 
-- `enduser-search.js`
+The test file associated with this performance test is:
+- `enduser-search.js` - Implements sequential endpoint testing for end users, measuring response times and performance metrics.

12-24: Consider adding performance expectations or success criteria.

The test description clearly outlines the endpoints being tested, but it would be helpful to include the following (a hedged thresholds sketch follows this list):

  • Expected response time thresholds
  • Success/failure criteria
  • Performance benchmarks
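
One way a README could express such expectations is through k6 thresholds in the script itself. The sketch below assumes a hypothetical endpoint, and the numeric targets are illustrative rather than the project's actual criteria.

```javascript
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  thresholds: {
    http_req_duration: ['p(95)<500', 'p(99)<1000'], // latency targets in milliseconds
    http_req_failed: ['rate<0.01'],                 // less than 1% failed requests
    checks: ['rate>0.99'],                          // functional checks should pass
  },
};

export default function () {
  // Hypothetical endpoint standing in for the enduser dialog endpoints listed in the README
  const res = http.get(`${__ENV.BASE_URL || 'https://localhost'}/api/v1/enduser/dialogs`, {
    headers: { Authorization: `Bearer ${__ENV.TOKEN || ''}` },
  });
  check(res, { 'status is 200': (r) => r.status === 200 });
}
```

If any threshold is crossed, k6 marks the run as failed, which gives both the command line and the GitHub Actions workflow a clear pass/fail signal.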

32-42: Consider adding example commands with realistic values.

While the instructions are clear, providing complete example commands with realistic values would make it easier for users to understand the expected format:

 TOKEN_GENERATOR_USERNAME=<username> \
 TOKEN_GENERATOR_PASSWORD=<passwd> API_ENVIRONMENT=<test|staging|yt01> \
 ../../scripts/generate_tokens.sh ../../performancetest_data personal
+
+Example:
+TOKEN_GENERATOR_USERNAME=testuser \
+TOKEN_GENERATOR_PASSWORD=secretpass API_ENVIRONMENT=test \
+../../scripts/generate_tokens.sh ../../performancetest_data personal

45-50: Add details about required parameters and their values.

The GitHub Actions section would benefit from:

  • List of required parameters
  • Valid value ranges or examples
  • Expected outcomes

74-78: Improve clarity of results section and TODO details.

  1. Fix grammar in the results section:
-## Test Results
-Test results can be found in GitHub action run log and in App Insights. We are prepared for exporting results to grafana, but so far results are exported to a private grafana instance only, as can be seen from the `.secrets`listed earlier 
+## Test Results
+Test results can be found in the GitHub Actions run log and in App Insights. While we're prepared for exporting results to Grafana, currently results are only exported to a private Grafana instance, as configured in the `.secrets` file.
  2. The TODO item "Fix reporting" is too vague. Consider expanding it with specific tasks or requirements.
🧰 Tools
🪛 LanguageTool

[grammar] ~74-~74: This phrase is duplicated. You should probably use “Test Results” only once.
Context: ...h.js \ --input tokens=personal ``` ## Test Results Test results can be found in GitHub action run log a...

(PHRASE_REPETITION)


[typographical] ~75-~75: It seems that a comma is missing.
Context: ...d for exporting results to grafana, but so far results are exported to a private grafa...

(SO_COMMA)

tests/k6/tests/serviceowner/performance/README.md (4)

9-14: Enhance test file descriptions

Consider adding brief descriptions for each test file to help users understand their specific purposes. For example:

 The test files associated with this performance test are 
-create-dialog.js
-create-remove-dialog.js
-serviceowner-search.js
-purge-dialogs.js (used for cleanup after test)
+- `create-dialog.js` - Tests the performance of dialog creation endpoint
+- `create-remove-dialog.js` - Evaluates the complete lifecycle of dialog creation and removal
+- `serviceowner-search.js` - Measures search functionality performance
+- `purge-dialogs.js` - Utility script for cleaning up test data after performance tests

16-16: Add missing article in heading

-### Run Test
+### Run the Test
🧰 Tools
🪛 LanguageTool

[uncategorized] ~16-~16: Possible missing article found.
Context: ... (used for cleanup after test) ### Run Test To run the performance test, follow the...

(AI_HYDRA_LEO_MISSING_THE)


40-42: Document GitHub Actions parameters

Consider adding descriptions for the workflow parameters to help users make informed choices. For example:

 2. Select "Run workflow" and fill in the required parameters:
+   - `vus`: Number of virtual users (concurrent load)
+   - `duration`: Test duration (e.g., "5m" for 5 minutes)
+   - `testSuitePath`: Path to the test script to execute
+   - `tokens`: Authentication token type to use ("personal", "enterprise", or "both")
 3. Tag the performance test with a descriptive name.

96-100: Enhance results and TODO sections

The results and TODO sections could benefit from more detail:

  1. For results: Consider specifying what key metrics users should look for in App Insights and Grafana
  2. For TODO: Provide more context about what aspects of reporting need fixing
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 4d24c0e and 73a184a.

📒 Files selected for processing (3)
  • tests/k6/tests/enduser/performance/README.md (1 hunks)
  • tests/k6/tests/graphql/README.md (1 hunks)
  • tests/k6/tests/serviceowner/performance/README.md (1 hunks)

🔇 Additional comments (3)
tests/k6/tests/graphql/README.md (2)

15-17: Fix incorrect directory path


42-50: Enhance security-related documentation

tests/k6/tests/enduser/performance/README.md (1)

1-3: LGTM! Clear and informative introduction.

The introduction effectively communicates the purpose, scope, and implementation details of the performance tests.

oskogstad previously approved these changes Nov 20, 2024

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Outside diff range and nitpick comments (12)
tests/k6/tests/graphql/performance/README.md (2)

1-2: Improve title clarity and formatting

The current title "Graphql get dialogs" doesn't accurately reflect the content, as this README covers GraphQL search performance testing.

-# Graphql get dialogs
+# GraphQL Search Performance Testing

53-53: Fix heading hierarchy and formatting

  1. Change the heading level from h5 to h4 for consistency
  2. Remove the trailing period from the heading
  3. Use fenced code blocks consistently throughout the document
🧰 Tools
🪛 Markdownlint

53-53: Expected: h4; Actual: h5
Heading levels should only increment by one level at a time

(MD001, heading-increment)


53-53: Punctuation: '.'
Trailing punctuation in heading

(MD026, no-trailing-punctuation)

tests/k6/tests/enduser/performance/README.md (6)

8-11: Consider enhancing the test file section with more details.

Add a brief description of what enduser-search.js does and its key features to help developers understand its purpose without having to open the file.


37-42: Add example values for better clarity.

Consider adding example values for the parameters to help users understand the expected format:

 k6 run enduser-search.js -e API_VERSION=v1 \
-e API_ENVIRONMENT=<test|staging|yt01> \
--vus=<vus> --duration=<duration>
+e API_ENVIRONMENT=test \
+--vus=10 --duration=30s

46-50: Document the required parameters for GitHub Actions.

Add a list of the required parameters with descriptions and example values to help users fill in the workflow form correctly.


56-64: Improve clarity of secrets file example.

The secrets file example should clarify which values are required vs optional:

-K6_CLOUD_PROJECT_ID=**
-K6_CLOUD_TOKEN=**
+K6_CLOUD_PROJECT_ID=<required_for_cloud_reporting>
+K6_CLOUD_TOKEN=<required_for_cloud_reporting>

77-79: Enhance the test results section with more details.

Consider adding:

  • Expected format of the results
  • How to interpret the results
  • Links to relevant dashboards or monitoring tools
🧰 Tools
🪛 LanguageTool

[grammar] ~77-~77: This phrase is duplicated. You should probably use “Test Results” only once.
Context: ...h.js \ --input tokens=personal ``` ## Test Results Test results can be found in GitHub action run log a...

(PHRASE_REPETITION)


[typographical] ~78-~78: It seems that a comma is missing.
Context: ...d for exporting results to grafana, but so far results are exported to a private grafa...

(SO_COMMA)


67-67: Fix markdown formatting issues.

  1. Change the heading level from h5 to h4 by using one less #
  2. Remove the trailing period from the heading
  3. Use fenced code blocks consistently throughout the document
🧰 Tools
🪛 Markdownlint

67-67: Expected: h4; Actual: h5
Heading levels should only increment by one level at a time

(MD001, heading-increment)


67-67: Punctuation: '.'
Trailing punctuation in heading

(MD026, no-trailing-punctuation)

tests/k6/tests/serviceowner/performance/README.md (4)

1-3: Consider adding target audience information.

The introduction effectively explains the purpose, but could benefit from specifying the intended audience (e.g., developers, QA engineers, DevOps) and any required expertise level.


5-15: Enhance prerequisites section with specific requirements.

Consider adding:

  • Minimum K6 version requirement
  • Required permissions/roles for token generation
  • Estimated disk space needed for test artifacts
  • Any environment-specific requirements

24-35: Add example values and validation steps.

The CLI instructions would be more user-friendly with:

  1. Example values for <vus> and <duration> with recommended ranges
  2. Steps to validate successful token generation
  3. Expected output examples

99-103: Expand the test results section.

Consider adding:

  1. Key metrics to look for in the results
  2. Troubleshooting guide for common issues
  3. Links to example reports or dashboards
  4. Success criteria or baseline metrics

Also, the TODO item about fixing reporting should be tracked in an issue.

Would you like me to create a GitHub issue to track the reporting fixes?

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 73a184a and 695ceeb.

📒 Files selected for processing (3)
  • tests/k6/tests/enduser/performance/README.md (1 hunks)
  • tests/k6/tests/graphql/performance/README.md (1 hunks)
  • tests/k6/tests/serviceowner/performance/README.md (1 hunks)
🧰 Additional context used
🪛 LanguageTool
tests/k6/tests/enduser/performance/README.md

[grammar] ~77-~77: This phrase is duplicated. You should probably use “Test Results” only once.
Context: ...h.js \ --input tokens=personal ``` ## Test Results Test results can be found in GitHub action run log a...

(PHRASE_REPETITION)


[typographical] ~78-~78: It seems that a comma is missing.
Context: ...d for exporting results to grafana, but so far results are exported to a private grafa...

(SO_COMMA)

tests/k6/tests/graphql/performance/README.md

[grammar] ~63-~63: This phrase is duplicated. You should probably use “Test Results” only once.
Context: ...h.js \ --input tokens=personal ``` ## Test Results Test results can be found in GitHub action run log a...

(PHRASE_REPETITION)


[typographical] ~64-~64: It seems that a comma is missing.
Context: ...d for exporting results to grafana, but so far results are exported to a private grafa...

(SO_COMMA)

🪛 Markdownlint
tests/k6/tests/enduser/performance/README.md

67-67: Expected: h4; Actual: h5
Heading levels should only increment by one level at a time

(MD001, heading-increment)


67-67: Punctuation: '.'
Trailing punctuation in heading

(MD026, no-trailing-punctuation)


65-65: Expected: fenced; Actual: indented
Code block style

(MD046, code-block-style)

tests/k6/tests/graphql/performance/README.md

53-53: Expected: h4; Actual: h5
Heading levels should only increment by one level at a time

(MD001, heading-increment)


53-53: Punctuation: '.'
Trailing punctuation in heading

(MD026, no-trailing-punctuation)


51-51: Expected: fenced; Actual: indented
Code block style

(MD046, code-block-style)

tests/k6/tests/serviceowner/performance/README.md

60-60: Punctuation: '.'
Trailing punctuation in heading

(MD026, no-trailing-punctuation)


58-58: Expected: fenced; Actual: indented
Code block style

(MD046, code-block-style)

🔇 Additional comments (4)
tests/k6/tests/graphql/performance/README.md (2)

56-61: Add explanation for architecture-specific settings

Explain why --container-architecture linux/amd64 is needed and when users might need to modify this setting.


66-67: Expand TODO section with specific tasks

The current TODO is too vague. Consider listing specific items that need attention:

  1. What aspects of reporting need fixing?
  2. Are there specific metrics missing?
  3. What's the target state for reporting?
tests/k6/tests/enduser/performance/README.md (1)

80-81: Expand the TODO section with more context.

The "Fix reporting" TODO is too vague. Please provide:

  • What specific reporting issues need to be fixed
  • Any known limitations or problems
  • Planned improvements or solutions
tests/k6/tests/serviceowner/performance/README.md (1)

79-98: Add verification steps for cleanup process.

The cleanup section should include the following (a hedged verification sketch follows this list):

  1. How to verify successful cleanup
  2. Error handling steps if cleanup fails
  3. Warning about potential impact on shared environments
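
As a hedged sketch of the first point, a verification step after running `purge-dialogs.js` could query for the test data and assert that nothing is left. The endpoint, query parameter and response shape used here are assumptions for illustration.

```javascript
import http from 'k6/http';
import { check } from 'k6';

export default function () {
  const base = __ENV.BASE_URL || 'https://localhost/dialogporten/api/v1'; // hypothetical base URL
  const params = { headers: { Authorization: `Bearer ${__ENV.SERVICEOWNER_TOKEN || ''}` } };

  // Search for dialogs created by the performance test; a clean system should return no items
  const res = http.get(`${base}/serviceowner/dialogs?search=performance-test`, params);
  check(res, {
    'search endpoint reachable': (r) => r.status === 200,
    'no leftover test dialogs': (r) => (r.json('items') || []).length === 0, // assumes an items array
  });
}
```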

@dagfinno dagfinno merged commit 9266d1d into main Nov 21, 2024
6 checks passed
@dagfinno dagfinno deleted the performance/readme branch November 21, 2024 06:42