Implement a first basic scenario with Cypress for the dApp #1584
Comments
Start with the basic main scenarios.
I guess we need to discuss what the basic scenarios are and how to organise the tests.
@taleldayekh you assigned me as well. What tasks do you expect me to start working on?
@weilbith I assigned you because I guess we will be having some discussions on how to best implement this stuff. :) But if you want, you could research the CI integration.
Sounds good. Ping me when you have something ready I could try to make work with CI. |
I was thinking about implementing the following scenario, but we would need to discuss how and if we break it down. The scenario has been built under the assumption that these are high-level integration tests and not pure UI tests.
The big question: should we run this end-to-end or split the tests? I am not quite sure. Reading the Cypress best practice docs:
Watching the video: they also do not recommend doing E2E tests with Cypress, but they also say: Some further reading: https://github.com/NoriSte/ui-testing-best-practices
@christianbrb Nice, thanks for the good job and effort in thinking that through.
In the case of the light client, it is harder to have pure UI tests since you would have to mock the SDK. If the integration image is used as a base, I don't think it would be a problem having E2E tests in Cypress, since everything happens in a controlled environment.
The |
By old blockchain data do you mean that the block number is not zero and there is some data there? If that is the case, then that is how things are supposed to be. The block number should not be zero. When the image is built, the Raiden contracts are deployed and a channel is opened between two Python Raiden nodes. Geth's state is then stored in the new image. Each new container, when started, should have the contracts deployed and two Python Raiden nodes with an open, fully funded channel between them. This happens during the image creation and is part of the image. The reason behind this is that we wanted to have a faster execution time when running the tests. Deploying the contracts and opening the channels takes time and is not what we want to test. That is one of the reasons the image is more complicated and takes time to build; however, it is way faster at runtime. Does this help?
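The behavior described above can be sanity-checked by querying the Geth node's block number. A minimal sketch (an assumption for illustration, not the project's actual script; `blockNumberFromRpc` is a hypothetical helper) of decoding the hex result of an `eth_blockNumber` JSON-RPC call:

```javascript
// Sketch only: a fresh container based on the pre-built integration image
// should already be past genesis, because the contracts were deployed and
// the channel opened while the image was built.
// eth_blockNumber returns the block number as a hex string, e.g. "0x1a4".
function blockNumberFromRpc(resultHex) {
  return parseInt(resultHex, 16);
}

// A freshly started container should report a non-zero block number:
console.log(blockNumberFromRpc('0x1a4') > 0); // true
```

If this check reported block 0 on a fresh container, the baked-in state would have been lost somewhere.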
No, unfortunately not. That is what we know. I must admit that I technically don't understand this issue. There must be something persistent, but I don't know how this is possible due to the cgroup of the container.
Do you properly delete the previous container? I remember always stopping, then deleting the container and then creating a new one. |
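The stop/delete/create cycle described above could look like this (a sketch; the container and image names are assumptions for illustration, and the commands are guarded so they do not fail when nothing is running):

```shell
# Hypothetical cleanup cycle; container and image names are assumptions.
CONTAINER=raiden-integration

# Stop and remove any previous container so that no state survives ...
docker stop "$CONTAINER" 2>/dev/null || true
docker rm "$CONTAINER" 2>/dev/null || true

# ... and start a fresh one from the pre-built integration image.
docker run -d --name "$CONTAINER" raiden-network/integration 2>/dev/null || true
```

Reusing a stopped container (or only restarting it) would carry its writable layer, and therefore any persisted state, into the next test run.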
I am probably asking stupid things; let me check the previous code and changes a bit, maybe I can get an idea.
@kelsos Don't waste your time. I can do this in the time I'm getting paid for. I just thought asking would be helpful.
@weilbith ok no worries. By checking I can see that you use the |
I'm 100% sure the container is removed.
Ok, but in general giving a quick look at the changes, there is no reason why the state would persist when the image is removed. It didn't before and I didn't see any change in the Dockerfile that would change that. |
Wait, what? Why should I remove the image? Only the container based on the image should get removed; then a new container gets started based on this image. Anyway, I could not reproduce the issue with the following test:
-> They start again from the block after the image build was done and all contracts deployed. This result is reproducible. The image is fine (happily, else I would have been super confused). So the issue must occur somewhere later in the setup. The debugging in the Cypress Electron app was kind of clunky, so I don't trust these results anymore. Maybe it is just that Cypress cleans localStorage but not the IndexedDB, and then it looks like that.
Sorry, I meant container.
Thanks for your help. Was a nice feeling to have you on board for some minutes. 🤗 |
Just for the record: what confused me was that Talel stated that if he rebuilt the image from scratch (apparently some layer caching is not working), it was working fine again. So I first thought the issue couldn't be in Cypress then...
But possibly the rebuild just causes the new persistent layer of the dApp to create a new DB. Talel's Electron app was pretty cluttered already. Since the script also works fine (I replaced the run of Cypress with querying the block number via RPC twice), it must be something with Cypress and how the dApp parses the old state in contrast to what the Eth node says.
Quick update: Talel told me that actually applying the manual delete of the database in the support index script does the job. Here is the Cypress issue discussing the problem and providing the suggestion.
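The workaround mentioned above could look roughly like this: Cypress clears localStorage between tests but not IndexedDB, so the database is deleted manually in the support file. This is only a sketch; the database name `raiden`, the helper name, and the file layout are assumptions.

```javascript
// Sketch of the workaround: delete the dApp's IndexedDB database manually,
// since Cypress does not clear IndexedDB between tests and the persisted
// state would otherwise leak across runs.
function deleteIndexedDb(name) {
  return new Promise((resolve) => {
    // Guard for environments without IndexedDB (e.g. plain Node).
    if (typeof indexedDB === 'undefined') return resolve();
    const request = indexedDB.deleteDatabase(name);
    request.onsuccess = resolve;
    request.onerror = resolve;
    request.onblocked = resolve;
  });
}

// In cypress/support/index.js this could be wired up before each test, e.g.:
// beforeEach(() => cy.wrap(deleteIndexedDb('raiden'))); // name is an assumption
```

Resolving on `onerror` and `onblocked` as well keeps a single stuck delete from hanging the whole suite.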
Good luck with that, I've been fighting it all day ) |
The Cypress tests have been implemented and @taleldayekh has shown the progress on his local machine. Closing this issue so we can get better organized and talk about it in more detail within the planning.
Description
This issue is a followup to #1404.
In #1580 the basics are done and Cypress works with the integration image.
The next step for us is to cover the basic dApp functionality with tests.
Acceptance criteria
Tasks
Additional information