
[Feature] : +[Doc] Maintain annotations from all retries #35215

Closed
Az8th opened this issue Mar 16, 2025 · 6 comments · Fixed by #35292


Az8th commented Mar 16, 2025

🚀 Feature Request

Hello community & maintainers !

Firstly, I would like to say that some aspects of annotations are confusing for people, which is a shame given the overall quality of the Playwright documentation.

As per #33954, this seems to have already been discussed internally, but it was closed as not planned without further explanation. I am not sure whether this is related, but since that issue had the 1.50 tag and 1.51 brought the possibility to add attachments to test.step, I suppose this is the expected way to add dynamic values. If so, it should definitely be mentioned in the annotations documentation.

Secondly, as already noted in #30188 and #32411, users tend to assume that annotation values are kept for each run, or that their values are updated, but it is trickier than that: only the ones from the latest run are kept.

It almost sounds like a bug, as runtime annotations added only on previous runs will not appear at all. That is a loss of information important enough that a developer added it manually. Please put this in a warning block in the docs.

Yet, this could be fixed by keeping every annotation between runs and simply overriding a description when it is updated.

Some would argue that we could lose traceability of which run an annotation came from, but we could simply add testInfo.retry to its type.
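The merge I have in mind could be sketched like this (illustrative only: `Annotation` mirrors the shape of `test.info().annotations` entries, and `mergeAnnotations` is a hypothetical helper of mine, not part of the Playwright API):

```typescript
// Illustrative sketch, not Playwright API: keep annotations from earlier
// retries, let the latest retry override the description of an annotation
// with the same type, and keep annotations that only existed in one retry.
type Annotation = { type: string; description?: string };

function mergeAnnotations(previous: Annotation[], latest: Annotation[]): Annotation[] {
  const merged = previous.map(a => ({ ...a })); // copy so inputs stay untouched
  for (const annotation of latest) {
    const existing = merged.find(a => a.type === annotation.type);
    if (existing)
      existing.description = annotation.description; // updated value overrides
    else
      merged.push({ ...annotation }); // new in this retry: keep it
  }
  return merged;
}
```

With a retry-tagged type like `Server node @${testInfo.retry}`, entries from different runs never collide, so both are kept.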

You may think this is a duplicate of the aforementioned issues. In fact, I would much prefer the first one to be reopened, as it would be better to treat each annotation block as a separate object for each retry (or at least only the runtime ones, keeping the static ones on top), but I understand it may be difficult to implement and that attachments can fulfill those needs. However, I think the solution I propose would be less complex to implement, while removing a counter-intuitive behaviour.

Example

test('I can login with 2FA', {
  // static annotation that can't change
  annotation: { type: 'Issue', description: 'https://github.com/microsoft/playwright/issues/35215' },
}, async ({ loginPage, smsAPI, browser }, testInfo) => {

  // runtime annotation that will change between runs, with a fixed type
  test.info().annotations.push({ type: 'Executed at', description: utils.getFullDateTime() });

  // runtime annotation that will not change between runs
  test.info().annotations.push({ type: 'Browser version', description: browser.version() });

  await loginPage.goto(loginPage.URL);

  // runtime annotation that may change between runs, with a dynamic type
  test.info().annotations.push({ type: `Server node @${testInfo.retry}`, description: loginPage.getServerID() });

  await loginPage.logWithCredentials({ user: env.DEFAULT_USR, password: env.DEFAULT_PWD });
  await loginPage.enter2FA(await smsAPI.getLatest({ number: "12345", pattern: "Here is your code :", grep: /\d+/ }));

  /* I have an errorPage auto fixture that checks on every navigation whether its URL is reached.
   * In that case it would fail the test and execute the following code:
   */
  // runtime annotation that may not appear on all runs
  test.info().annotations.push({
    type: `Sentry event @${testInfo.retry}`,
    description: `https://example.sentry.com/[...]/events/${errorPage.getSentryErrorID()}`
  });
});

Let's say we have 2 runs, as an error occurred on the first one because one of the server instances has an invalid SMS sender API token.

Actual behaviour

Annotations:
  Issue: https://github.com/microsoft/playwright/issues/35215
  Executed at: Saturday 15th March 2025, 23:58:43 UTC
  Browser version: 134.0.6998.89 
  Server node @2: Preprod#1

No mention of the error event here

With the feature:

Annotations:
  Issue: https://github.com/microsoft/playwright/issues/35215
  Executed at: Saturday 15th March 2025, 23:58:43 UTC
  Browser version: 134.0.6998.89
  Server node @1: Preprod#3
  Server node @2: Preprod#1
  Sentry event @1: https://example.sentry.com/[...]/events/0

Here, the error event and all visited nodes are kept.

Motivation

As you may see above, I put several key pieces of information in the annotations, as this is the first element in the report that allows sharing clickable URLs. Unfortunately, some of them change between runs and I can't track them all this way.

Putting them in test steps or attachments is not ideal, as it forces readers to scroll to the bottom of the report.

For example, when an error event shows up, I link the ID of the issue generated by Sentry, to reduce the number of clicks devs need to reach those precious debugging logs in critical situations.

I also use them to track the ID of the tested server instance (as we use a load balancer) and when the test started. I don't find it convenient to put these in attachments, but I could accommodate that if attachments were natively displayed in the report for each run/retry, the same way they are on the main page of the HTML report.


Az8th commented Mar 16, 2025

This would also make it possible to append data to existing descriptions.

In case this seems like an edge case to you, here is an explicit example. Given that I want to track the total time spent awaiting navigations across runs, and that I use --repeat-each:
I could keep track of individual times for each run in attachments, but extracting them one by one would be very tedious. Instead, I could set up annotations for a max and a min that I can access at runtime to compare with previous runs, plus a total time. I could then use these to extract usable data, compare with previous executions of the test suite, and monitor performance evolution.
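As a sketch of what I mean (purely illustrative: `upsertTiming` is a hypothetical helper of mine, not a Playwright API, and it assumes annotations persist across retries/repeats as proposed above):

```typescript
// Illustrative sketch: track min/max/total navigation time across repeats by
// updating annotations in place. Annotation mirrors the shape of entries in
// test.info().annotations; the annotation type names are made up.
type Annotation = { type: string; description?: string };

function upsertTiming(annotations: Annotation[], elapsedMs: number): void {
  // Read the numeric value of an annotation, or a fallback if it is absent.
  const read = (type: string, fallback: number): number => {
    const existing = annotations.find(a => a.type === type);
    return existing?.description !== undefined ? Number(existing.description) : fallback;
  };
  // Create the annotation, or override its description if it already exists.
  const upsert = (type: string, value: number): void => {
    const existing = annotations.find(a => a.type === type);
    if (existing)
      existing.description = String(value);
    else
      annotations.push({ type, description: String(value) });
  };
  upsert('Min nav time (ms)', Math.min(read('Min nav time (ms)', Infinity), elapsedMs));
  upsert('Max nav time (ms)', Math.max(read('Max nav time (ms)', -Infinity), elapsedMs));
  upsert('Total nav time (ms)', read('Total nav time (ms)', 0) + elapsedMs);
}
```

This only works if the annotations written on one run are still visible on the next, which is exactly the behaviour requested here.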


agg23 commented Mar 17, 2025

It's unclear what you'd like to see here. Can you clearly enumerate your desired changes in a small list?


Az8th commented Mar 18, 2025

Not sure where it lacks clarity, as the title summarises it :/

Let me rephrase it then. Currently, test annotations are run scoped. I would like:

  • an option to make them test scoped, in order to preserve context across retries (the example I provided shows the current behaviour and what I would expect with such an option)
  • a warning added to the docs about the current behaviour


agg23 commented Mar 18, 2025

I don't understand how you would just make annotations test scoped. From your example, I would expect to see:

Run 1

  Issue: https://github.com/microsoft/playwright/issues/35215
  Executed at: Saturday 15th March 2025, 23:58:43 UTC
  Browser version: 134.0.6998.89 
  Server node @1: Preprod#3
  Sentry event @1: https://example.sentry.com/[...]/events/0

Run 2

  Issue: https://github.com/microsoft/playwright/issues/35215
  Executed at: Saturday 15th March 2025, 23:59:43 UTC
  Browser version: 134.0.6998.89
  Server node @2: Preprod#1

Combined all runs

  Issue: https://github.com/microsoft/playwright/issues/35215
  Executed at: Saturday 15th March 2025, 23:58:43 UTC
  Browser version: 134.0.6998.89 
  Server node @1: Preprod#3
  Sentry event @1: https://example.sentry.com/[...]/events/0
  Issue: https://github.com/microsoft/playwright/issues/35215
  Executed at: Saturday 15th March 2025, 23:59:43 UTC
  Browser version: 134.0.6998.89
  Server node @2: Preprod#1

Even if you deduped them, you would get:

  Issue: https://github.com/microsoft/playwright/issues/35215
  Executed at: Saturday 15th March 2025, 23:58:43 UTC
  Browser version: 134.0.6998.89 
  Server node @1: Preprod#3
  Sentry event @1: https://example.sentry.com/[...]/events/0
  Executed at: Saturday 15th March 2025, 23:59:43 UTC
  Server node @2: Preprod#1

Which is definitely not what you want.


As discussed in #30188, which you linked, annotations are intended to be static. You are not expected to modify them during the actual execution of the test (only conditionally at the beginning, based on your platform or similar). You should be using attachments for this use case. If that's not sufficient, you can also write a custom reporter.

agg23 closed this as not planned on Mar 18, 2025
dgozman reopened this on Mar 19, 2025

dgozman commented Mar 19, 2025

Discussed with the team, we would like to make annotations available on TestResult, in addition to static annotations on TestCase, and update reporters accordingly.


Az8th commented Mar 19, 2025

Nice to hear! This will definitely help :)
