review of practices #30

Open · wants to merge 13 commits into `main`
58 changes: 32 additions & 26 deletions docs/content/practices/emulator_setup.md
# Emulator setup

Basically, we have two choices:

* Manage devices automatically via `avd`
* Manage Docker containers with emulators via `docker`

Using a Docker image is the easiest way; however, it's important to understand how Docker creates emulators for you.

## Creating an emulator

Before starting to read this, make sure you've read
[the official documentation](https://developer.android.com/studio/run/emulator-commandline)

First, you need to create an `ini` configuration file for the emulator:

```ini
PlayStore.enabled=false
# ... (collapsed in the diff)
skin.name=320x480
disk.dataPartition.size=8G
```

Pay attention to what we have disabled:

* Accelerometer
* Audio input/output
* Play Store
* Sensors: Humidity, Pressure, Light
* Gyroscope

We don't really need them for our test runs. Disabling them may also improve test performance, because there are no
background operations related to those tasks.
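
As an illustration, the switches for those features might look like this in the `ini` file (a sketch assuming standard AVD `config.ini` keys — verify the exact names against your emulator version):

```ini
PlayStore.enabled=false
hw.accelerometer=no
hw.audioInput=no
hw.audioOutput=no
hw.gyroscope=no
hw.sensors.humidity=no
hw.sensors.pressure=no
hw.sensors.light=no
```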

After that, you can create and run your emulator via `avdmanager`, which is part of the Android SDK. After creating the emulator, you need to replace the default generated `ini` file with the custom one we defined previously.
> **Member Author:** I rephrased this. I believe this is what is intended after reading the code in the script.

You can achieve that with a script like this one:

```bash
function define_android_sdk_environment_if_needed() {
  # ... (collapsed in the diff)
}

# ... (collapsed in the diff)

define_path_environment_if_needed
create_and_patch_emulator
```

> **Member Author:** I strongly recommend having a "ready" bash file for this that we upload in this article (maybe an extra subsection here to make it more prominent) and indicating how to execute it -> `./bashFile.sh -args`. The command can actually differ on Windows, though.
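
Taking up that suggestion, a minimal ready-to-run sketch could look like this (the AVD name, system image and paths are assumptions for illustration, not the article's actual script):

```bash
#!/usr/bin/env bash
set -euo pipefail

AVD_NAME="ci_emulator"              # hypothetical AVD name
CUSTOM_INI="./emulator_config.ini"  # assumed path to the custom ini defined above

# Create the AVD non-interactively ("no" declines the custom hardware profile prompt).
echo "no" | avdmanager create avd \
  --name "$AVD_NAME" \
  --package "system-images;android-29;default;x86_64"

# avdmanager generates its own config.ini; replace it with the custom one.
cp "$CUSTOM_INI" "$HOME/.android/avd/${AVD_NAME}.avd/config.ini"
```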

Keep in mind that the emulator must fully boot before running any test. Otherwise the tests will fail because there is no device ready on which they can run.

> **Member Author:** Actually, for snapshot testing I was using swarmer from Juno, which is deprecated but works pretty fine for starting several emulators and waiting for all of them to be ready before running the tests. Another new thing is Gradle Virtual Managed Devices... Maybe worth mentioning later, after releasing the MVP?
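
A common way to wait for a full boot, assuming `adb` is on the `PATH`, is to poll the `sys.boot_completed` property:

```bash
adb wait-for-device
# The device is "online" before Android has finished booting, so poll further.
until [[ "$(adb shell getprop sys.boot_completed | tr -d '\r')" == "1" ]]; do
  sleep 2
done
```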

### Summary

1. Create an `ini` configuration file for the emulator.
2. Create and run your emulator via `avdmanager`.
3. Replace the `ini` file generated in step 2 with the one we created in step 1.

## How to run an emulator in Docker?

Running an emulator in Docker is way easier than managing it manually, because the image encapsulates all this logic. If you don't have
experience with Docker, check
[this guide](https://www.youtube.com/watch?v=zJ6WbK9zFpI) to get familiar with the basics.

There are some popular Docker images already built for you:

* [Official Google emulator](https://github.com/google/android-emulator-container-scripts)
* [Agoda emulator](https://github.com/agoda-com/docker-emulator-android)
* [Avito emulator](https://hub.docker.com/r/avitotech/android-emulator-29)

Talking about the [Avito emulator](https://hub.docker.com/r/avitotech/android-emulator-29), it also patches your
emulator with adb commands to prevent test flakiness and to speed tests up.
> **Member Author:** It would be good to quickly mention how it does that. From the description here, I take for granted that the Avito emulator is the best out of the 3 here? If yes, should we stress that point here? What are the differences among them? As a learner, I want to use the best for my needs.

##### Run Avito emulator

```bash
# ... (collapsed in the diff)
docker rm $(docker ps -a -q)
```
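
For illustration, starting such an image and attaching `adb` to it might look roughly like this (a sketch — image tag, flags and port are assumptions; consult the image documentation for the real invocation):

```bash
# Requires KVM on the host; 5555 is the conventional adb TCP port.
docker run -d --device /dev/kvm -p 5555:5555 avitotech/android-emulator-29
adb connect localhost:5555
```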
## Conclusion

* Use Docker emulators </br>
  _You'll also have the opportunity to run them with `Kubernetes`, making them scalable in the future_

* Start fresh emulators on each test batch and kill them after all of your tests have finished</br>
  _Emulators tend to freeze and may not work properly after idling for some time_

* Use the same emulator locally as on your CI </br>
  _All devices are different. It can save you a lot of time with debugging and understanding why your test works locally
  and fails on CI. It won't be possible to run Docker emulators on macOS or Windows, because
  of [haxm#51](https://github.com/intel/haxm/issues/51#issuecomment-389731675). Use AVD to launch them on such
  machines (the script above may help you)_

!!! warning

To run an emulator on CI with Docker, make sure that nested virtualisation is supported and KVM is installed.
You can check more details [here](https://developer.android.com/studio/run/emulator-acceleration#vm-linux)
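
A quick sanity check on a Linux build agent might look like this (a sketch; command availability varies by distribution):

```bash
# A non-zero count means the CPU advertises virtualization extensions (Intel VT-x / AMD-V).
grep -Ec '(vmx|svm)' /proc/cpuinfo

# If /dev/kvm exists and is accessible, KVM can be used by the emulator.
ls -l /dev/kvm
```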
38 changes: 19 additions & 19 deletions docs/content/practices/emulator_vs_real_device.md
# Emulator vs Real device

Instrumented tests can run either on emulators or real devices. Which one should we use?
This question is a trade-off and there is no right or wrong answer. We'll review the pros and cons of both approaches.

## Real device

Here are the pros and cons:

➕ Real environment </br>
➕ Doesn't consume significant CPU, RAM or storage on the CI

➖ Breaks often: batteries swell, USB ports fail, the OS crashes... all this happens because real devices are not designed for continuous, intensive use </br>
➖ Requires placing the devices in a room with special environmental conditions </br>

Although it seems that a real device is the better alternative because it helps you catch bugs in a full-fledged Android environment, it comes with its own issues. Talking about scalability: if you
have a lot of devices, you need to place them in a special room with no direct sunlight and with climate control.

But that doesn't prevent disk and battery deterioration, because the devices are always charging and performing I/O
operations. Therefore, if your tests fail, it could be because of an issue with
a device rather than a real bug in the app under test.

## Emulator

Here are the pros and cons:

➕ Easily configurable </br>
➕ Can work faster than a real device</br>
_Keep in mind that this is only achievable if you apply a special configuration and have powerful build agents_</br>
➕ Not demanding in hardware maintenance (e.g. battery, disk, USB ports, display...) </br>

➖ Not the real environment on which the app will end up running</br>
➖ Consumes significant CI resources like RAM, CPU and storage</br>
➖ Emulators might freeze if they stay idle for a long time and need to be restarted</br>

The main benefit that we get is a fresh emulator instance on each test run. Also, it's possible to create a special
configuration and disable features you don't need in tests, like sensors or audio input/output, which affect device stability. Nevertheless, you need
a powerful machine (and definitely not just one, if you want to run your instrumented tests on every pull request).
60 changes: 28 additions & 32 deletions docs/content/practices/flakiness.md

![alt text](../images/practices/header_flakiness.svg "Sad")

Flakiness means lack of reliability of a particular test. If you execute such a test N times, it won't pass `N/N`. Or, it
might pass locally but often (or always) fail on the CI.

Understanding the causes of flakiness is the most frustrating problem in instrumented testing; fighting it requires a lot of engineering time.

## Reason

* Test code </br>
`Example: testing toasts/snack-bars`
* A real device or Emulator </br>
  `Example: Disk/Battery/Processor/Memory issues, or a notification showed up on the device`
* Infrastructure </br>
`Example: Processor/Disk/Memory issues`

{==

It's not possible to completely eradicate flakiness if your codebase changes every day: every code change can potentially add new sources of flakiness.

However, it's possible to reduce it and achieve a good percentage of flakiness-free runs.

==}

In general, the key to reducing flakiness is to pick the right tools: **test framework**, **test runner** and **emulator**

## Flakiness protection

#### 1. Wait for the content to appear </br>

: When an HTTP request or any other asynchronous operation is running, it's not possible to predict how long it will take to return a response and fill our screen with data.<br>If the expected content is not on the screen at the moment the Espresso assertions run, the tests will fail.
</br>
In order to solve this problem, Google provided [Idling Resources](https://developer.android.com/training/testing/espresso/idling-resource) to watch
asynchronous operations.
</br> However, Idling Resources require putting testing code into production code. This goes against common testing best practices and also requires additional effort from engineers.</br>
The approach recommended by the community is to use smart-waiting (aka the flaky-safety algorithm), like this
:
```kotlin
fun <T> invokeFlakySafely(
    /* ... parameters and body collapsed in the diff ... */
}
```

: This algorithm is the foundation of
the [Kaspresso library](https://github.com/KasperskyLab/Kaspresso/blob/c8c32004494071e6851d814598199e13c495bf00/kaspresso/src/main/kotlin/com/kaspersky/kaspresso/flakysafety/algorithm/FlakySafetyAlgorithm.kt)
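
: A hypothetical call site could look like this (the parameter names are assumptions for illustration, not Kaspresso's actual API; `R.id.title` is a placeholder id from the app under test):

```kotlin
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId

// Keep retrying the flaky check until it passes or the timeout expires.
invokeFlakySafely(timeoutMs = 10_000, intervalMs = 500) {
    onView(withId(R.id.title)).check(matches(isDisplayed()))
}
```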

: Official documentation says that it's not a good way to handle this, because of the additional CPU consumption. However, it's a pragmatic trade-off which speeds up the writing of UI tests and relieves engineers from thinking
about this problem at all.

: Moreover, some frameworks have implemented a system based on exception interceptors: whenever an assertion fails and throws an exception, the framework executes an action (e.g. scrolling down) and retries the failing assertion.

: * [Avito UI test framework](https://github.com/avito-tech/avito-android/tree/develop/subprojects/android-test/ui-testing-core/src/main/kotlin/com/avito/android)
* [Kaspresso](https://github.com/KasperskyLab/Kaspresso)

: Consider using them to avoid issues with asynchronous operations.

#### 2. Use an isolated environment for each test </br>

: Clearing the package before each test deletes all your application data and kills its process. This removes the
likelihood of old data affecting your current test. Marathon and the Avito test runner provide the easiest way to clear the
state.
</br>

You can find more details here: [State Clearing](https://android-ui-testing.github.io/Cookbook/practices/state_clearing/)
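
: Under the hood, clearing the package state usually boils down to a single adb command (`com.example.app` is a placeholder package name):

```bash
# Wipes the app's data and kills its running process on the device.
adb shell pm clear com.example.app
```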

#### 3. Test vanishing content in another way (Toasts, Snackbars, etc) </br>

: Testing content which is going to be hidden after a certain time (usually milliseconds) is also challenging. A toast might be shown
properly while your test framework is checking other content on the screen at that particular moment. When that check is
done and it is time to assert the toast, it might have already disappeared. Therefore, your test will fail.

: One way to solve this is not to test it at all. Alternatively, you can have some proxy object which remembers that the
Toast/SnackBar has been shown. This solution has already been implemented by the company Avito; you can check the
details [here](https://avito-tech.github.io/avito-android/test/Toast/)

: If you have your own custom component which also disappears after some time, you can disable its auto-dismissal for tests
and close it manually.
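
: A minimal sketch of such a proxy object (all names here are hypothetical; the app would call `record` wherever it shows a toast):

```kotlin
// Hypothetical recorder the app calls whenever it shows a toast;
// tests then assert against the recorded fact instead of racing the UI.
object ToastRecorder {
    private val shown = mutableListOf<String>()

    fun record(message: String) {   // call this where the app shows a toast
        shown += message
    }

    fun wasShown(message: String): Boolean = message in shown

    fun clear() = shown.clear()     // reset between tests
}
```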

#### 4. Use a special configuration for your device </br>

: In most cases you don't need the Accelerometer, Audio input/output, Play Store, Sensors or Gyroscope in
your tests.
</br>
You can see how to disable them
here: [Emulator setup](https://android-ui-testing.github.io/Cookbook/practices/emulator_setup/)

: For more reliability, it's also recommended to disable animations, the screen-off timeout and the long-press timeout on the device. The script
below will patch all devices connected to `adb`
```bash
devices=$(adb devices -l | sed '1d' | sed '$d' | awk '{print $1}')
# ... (loop body collapsed in the diff)
done
```
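
The collapsed body presumably iterates over `$devices`; a sketch of what such patching might contain (the setting names are standard Android settings, but treat the exact values as assumptions):

```bash
for device in $devices; do
  # Disable all animations.
  adb -s "$device" shell settings put global window_animation_scale 0.0
  adb -s "$device" shell settings put global transition_animation_scale 0.0
  adb -s "$device" shell settings put global animator_duration_scale 0.0
  # Keep the screen on effectively forever.
  adb -s "$device" shell settings put system screen_off_timeout 2147483647
  # Make accidental long-press triggers less likely.
  adb -s "$device" shell settings put secure long_press_timeout 1500
done
```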

#### 5. Use a fresh emulator instance for each test batch </br>

: Your tests may affect how your emulator works, for example by saving data to external storage, which can be one more source of
flakiness. Running a new emulator for each test is not pragmatic in terms of speed, but you can do it for each batch of tests.
Just kill all the emulators once all of your tests have finished.
<br>
You can see how to set that up here: [Emulator setup](https://android-ui-testing.github.io/Cookbook/practices/emulator_setup/)
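
For example, a one-liner to kill every running emulator after the batch (assuming `adb` is available):

```bash
# Ask each connected emulator to shut itself down.
adb devices | grep emulator | cut -f1 | while read -r name; do
  adb -s "$name" emu kill
done
```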