Replies: 16 comments 1 reply
-
[The pain of having to repeat the same values for multiple tests might be] dealt with using a cucumber 'Background'. [Edit: "Background" might be a good feature to add in general, but the internal feedback is that we want to be able to run randomized tests anyway.]
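For reference, a cucumber Background runs its steps before every Scenario in a feature file, which is what would let shared setup values be written once. A hypothetical sketch (the step wording and file names are illustrative, not actual ALKiln steps):

```gherkin
Feature: Child support interview

  Background:
    # Shared setup that would otherwise be repeated in every Scenario
    Given I start the interview at "child_support.yml"
    And I set the variable "users[0].name.first" to "Ada"

  Scenario: One child
    # Only the scenario-specific steps remain here
```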
-
Notes from a standup discussion (08/06/21):
-
Just FYI, some folks have big questions about the use of randomized tests and whether they're really an appropriate tool.
-
The main non-technical challenge I see here is giving enough useful output to the user. Will need a lot of feedback on that. Current MVP ideas (not much research so far):
Getting around tricky situations:
Questions:
As @rpigneri-vol named them, these random tests are our "spellcheck" option - they aren't a proper testing suite and shouldn't be treated as such, but they may be better than nothing.
-
New home of "faker": https://www.npmjs.com/package/community-faker Question was raised:
I haven't ever looked into how to do that. Is that possible? Do we need it? [Maybe an answer: I don't think we do need this. From what I'm understanding, this refers to the interview, not to the test, and the interview is deterministic. At least, all interviews that I've ever worked with have been deterministic.]
-
Output thoughts: In the report, only list the "name" of the test (maybe just sequential numbers) and the order of the screens (page id and/or title?). Each test then has a file with:
Other files in the folder:
Maybe the name of the folder would also contain "failed" if it failed. Or maybe there'd be a "failed" folder for all failed tests? I don't love nested folders, though.
-
So, first brainstorm for what random input output might look like in the downloaded artifacts folder, including folders (folder 1 is open):
I'm using indentation to denote the contents of each folder or file.

Not sure what to call an "infinite loop" question. I don't think anyone else uses the name "infinite loop" to describe questions where you press continue and just keep getting the same question over and over again.

Infinite loops: we may only catch single-page infinite loops, and probably not all of those either. Will try to write a whole comment about that later.

Edit:
...needs to be replaced with interactive Steps.
-
Can we add to the report a list of the possibly hidden fields? Or just the fields that were on the screen and the values that they had or did not have?
-
Use small values for integers (0-10) so you don't test with 99 children. Maybe a similar approach to answer "no" after a couple of screens where you are asked "is there another". In general: if I've seen this screen 5 times, pick a different button this time.
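The heuristics above could be sketched roughly like this (a minimal sketch; the function names and the repeat thresholds are assumptions, not ALKiln's actual API):

```javascript
// Track how many times each screen has been seen so repeat visits can
// bias the random choices (illustrative names, not ALKiln internals).
const seenCounts = new Map();

function timesSeen(screenId) {
  const seen = (seenCounts.get(screenId) || 0) + 1;
  seenCounts.set(screenId, seen);
  return seen;
}

function pickInteger() {
  // Small integers only, so we don't end up testing with 99 children.
  return Math.floor(Math.random() * 11); // 0-10
}

function pickYesNo(screenId) {
  // After a couple of visits to an "is there another?" screen, answer "no".
  return timesSeen(screenId) > 2 ? 'no' : (Math.random() < 0.5 ? 'yes' : 'no');
}

function pickButton(screenId, buttons) {
  // If we've seen this screen 5 or more times, avoid the default button.
  if (timesSeen(screenId) >= 5 && buttons.length > 1) {
    return buttons[1 + Math.floor(Math.random() * (buttons.length - 1))];
  }
  return buttons[0];
}
```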
-
Deep dive discussion: creating the feature file as a separate file might be useful! A failed random test is a good candidate for a test you always run the same way.
-
Where to put error screenshots for easy access? And can we give more useful info in filenames now? From the #429 discussion about artifacts structure.
Everything would be in one artifact folder.
-
A challenge we'll have with creating the story table output for users: every field name representing a variable needs to be base64 decoded. Currently that means we have multiple guesses for what a field name might be, which would add a lot of nonsense rows to the table that are duplicates of each other.

Proposals to reduce this problem: detect invalid variable-name characters (often present when some text has been decoded one time too many) and remove those guesses from the table. We can probably also remove them from field name guesses as a bonus.

[Also, we need a Line where decoding names starts: Decoding objects (with Examples of encoded objects:
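The invalid-character filter might look something like this (a sketch under the assumption that each guess is a candidate docassemble variable expression, e.g. an identifier with dots and index/key brackets; the regex is illustrative):

```javascript
// Accept identifiers optionally followed by attribute access (.name),
// numeric indexing ([0]), or quoted-key indexing (['key']).
const validVarName =
  /^[A-Za-z_][A-Za-z0-9_]*(\.[A-Za-z_][A-Za-z0-9_]*|\[[0-9]+\]|\['[^']*'\])*$/;

function filterGuesses(guesses) {
  // Drop guesses containing characters that never appear in a real
  // variable name -- those usually come from over-decoding.
  return guesses.filter((guess) => validVarName.test(guess));
}
```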
-
Constraining random values
A future goal. Allow devs to constrain the random values: provide arguments to constrain the values that can be given for a variable. Some ideas:
This could also be used to give actual hard-coded values by just giving one value without
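One hypothetical shape for those dev-supplied constraints (the object format and function names are assumptions for illustration, not a proposed ALKiln API; note how a single-value list acts as a hard-coded answer):

```javascript
// Dev-supplied constraints, keyed by variable name (illustrative format).
const constraints = {
  children_count: { type: 'integer', min: 0, max: 3 },
  user_state: { oneOf: ['MA', 'NY', 'VT'] },
  user_name: { oneOf: ['Ada Lovelace'] }, // one option = effectively hard-coded
};

function randomValueFor(varName) {
  const c = constraints[varName];
  if (!c) return null; // no constraint: fall back to default random generation
  if (c.oneOf) return c.oneOf[Math.floor(Math.random() * c.oneOf.length)];
  if (c.type === 'integer') {
    return c.min + Math.floor(Math.random() * (c.max - c.min + 1));
  }
  return null;
}
```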
-
#552 starts on folder/directory structure. [Done June 2022]
-
The Halting Problem
One of them, at least. How do we know when an interview has reached its end? For example, if an ending screen has just a "restart" button, the test runner will think it's a continue button and keep submitting it, creating an infinite loop. Ideas for solutions:
Ideas for non-ideal solutions:
Rejected ideas:
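One way the single-page loop case could be caught is by comparing a signature of consecutive pages (a sketch; the signature contents and the repeat threshold are assumptions):

```javascript
// If the same page signature repeats too many times in a row, assume the
// test is stuck (e.g. resubmitting a "restart" button on an ending screen).
const MAX_REPEATS = 5;
let lastSignature = null;
let repeats = 0;

function checkForLoop(pageId, questionText) {
  const signature = `${pageId}::${questionText}`;
  if (signature === lastSignature) {
    repeats += 1;
  } else {
    lastSignature = signature;
    repeats = 0;
  }
  return repeats >= MAX_REPEATS; // true -> stop the test and report a loop
}
```

This would only catch single-page loops; a multi-page cycle would need a history of signatures rather than just the previous one.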
-
Final info for dev
Replicability
Ideally, we'd output a story table test. Unfortunately, we currently can't properly decode variable names to do that, so that's not really possible yet. Another temporary option is to have a different kind of file created - one that saves the selectors on the pages along with the answer given for each field. There would be a special Step for replicating a random input test. It wouldn't be human-readable, but ALKiln would be able to understand what to do. Human-readable results ideas:
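The selector-plus-answer file could be as simple as an ordered JSON log (a sketch; the record shape and function names are assumptions, not ALKiln's actual output format):

```javascript
// One entry per field answered, keyed by the selector seen on the page
// rather than the (undecodable) variable name. Order matters for replay.
const replayLog = [];

function recordAnswer(pageIndex, selector, value) {
  replayLog.push({ page: pageIndex, selector, value });
}

function toReplayFile() {
  // A replay Step could read this back and re-submit the same answers.
  return JSON.stringify(replayLog, null, 2);
}
```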
-
This is an alternative to having the developer write out every scenario to cover all their code. It's not ideal, but since you can't abstract in cucumber, writing every single scenario can be a huge task. It is possible, in cucumber, to allow the user to pass in data structures like lists. We would just have to handle randomly selecting from them.
Note: This is not a fault in cucumber - it's not meant to be used the way we're using it.
Also need to think whether the developer will need to copy/paste this 'scenario' for however many times they want the random tests to be run, or if we can run them repeatedly somehow. This might be better in its own issue. [Edit: This is probably doable now that we have the knowledge of setting, and resetting, our own custom timeouts.]
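Random selection from a cucumber data table could be a small helper in the step definition (a sketch; cucumber's `DataTable.raw()` does return the table rows as arrays of strings, but the helper name and step wiring here are illustrative):

```javascript
// Given rows from a cucumber data table (e.g. table.raw() inside a step
// definition), pick one option at random for the variable being answered.
function pickFromTable(rows) {
  const options = rows.map((row) => row[0]);
  return options[Math.floor(Math.random() * options.length)];
}
```

A step definition might then call `pickFromTable(table.raw())` and submit the chosen value, leaving the developer to write one scenario with a table of candidates instead of one scenario per candidate.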