Vaadin TestBench: How to stabilize tests in slow environments

By Anna Koskinen · Jan 11, 2023

Vaadin TestBench is an awesome tool for creating integration tests for your application, but it's not immune to the problems caused by general slowness in your test environment. You might be running low on memory or disk space, or you might be temporarily trying to do more things on your test server than it can easily support. Fixing the environment would be the ideal solution, but this blog post focuses on making your tests stable enough to serve their function despite the slowness.

If the problems highlighted by your TestBench tests are also present in manual testing, you should look into fixing your application first. The techniques described here are meant solely for getting rid of test failures when nothing is actually wrong with the application but your test environment runs slowly enough to cause issues with the test scripts. Some of these techniques can also be applied preemptively.

A common indicator of test failures caused by slowness in the test environment is that they occur intermittently, and more often when server demand is high. When you set a breakpoint and step through the test slowly, the problems may not appear at all.

Some examples of problems caused by slowness

  • Layout/scrolling/animation takes longer than expected
    • Wrong items visible
    • Wrong sizes/positions on elements
    • Hover actions for wrong component
    • Wrong click target
    • Wrong drag source or target or position
  • Rendering takes longer than expected
    • Element is present but contains no text
  • Overlay not getting added/positioned fast enough
    • Exception from element not found
    • Clicking an element underneath instead of the overlay
  • Overlay not getting removed fast enough
    • Clicking the overlay instead of an element underneath
    • Testing too early that the removal succeeded
  • Custom interactions perform too fast
    • Direct JavaScript call updates an element before built-in communication arrives
  • Chained actions choke the test browser
    • Actions follow each other too quickly with inconsistent results

What to do about these things?

  • Break down action chains
  • Add explicit delays
  • Wait until a condition is met
  • Try again

Break down action chains

Give the browser extra time to respond to test engine commands. Directly chained Selenium operations are often too fast for a slow browser, and since many helper methods build on such chains, they can also fail to operate reliably in slow environments.

Example: dragAndDrop

Consider the following command:

new Actions(driver).dragAndDrop(draggable, dropTarget).perform();

Under the hood that performs the same tasks as this:

new Actions(driver).moveToElement(draggable).clickAndHold()
        .moveToElement(dropTarget).release().perform();

There is no place to add delays, or even to observe the effects of the individual steps by setting a breakpoint and stepping through them; the entire command chain is sent out at once with the single perform call. If your attempt at dragging and dropping starts to fail randomly, try something like this instead:

new Actions(driver).moveToElement(draggable).perform();
new Actions(driver).clickAndHold().perform();
new Actions(driver).moveToElement(dropTarget).perform();
new Actions(driver).release().perform();

Example: sendKeys

In some cases, the action chains are hidden behind TestBench helper methods. For example, simulating keyboard navigation from a TextField to a Button above it might look like this:

$(TextFieldElement.class).first()
        .sendKeys(Keys.chord(Keys.SHIFT, Keys.TAB));

That TestBenchElement.sendKeys method makes sure that the element is in view and that Vaadin's own processing is done before WebElement.sendKeys is triggered for the wrapped element, which makes the method call more stable. However, sometimes the handling of that chord itself is the problem. In this particular case, you can't just split the method call, because WebElement.sendKeys releases the pressed keys at the end of the call. You could do it like this instead:

TextFieldElement textField = $(TextFieldElement.class).first();
new Actions(driver).keyDown(Keys.SHIFT).perform();
textField.sendKeys(Keys.TAB);
new Actions(driver).keyUp(Keys.SHIFT).perform();

Note that sendKeys also focuses the element if it isn't focused already, so if you'd like to move back two tabulator steps, you need to change your approach. One option is to find the newly focused element and send the next tabulator call to that. A simpler solution is to send the keys through Actions instead, but then you need to make sure the initial focus is on the correct element:

TextFieldElement textField = $(TextFieldElement.class).first();
textField.focus();
new Actions(driver).keyDown(Keys.SHIFT).perform();
new Actions(driver).sendKeys(Keys.TAB).perform();
new Actions(driver).keyUp(Keys.SHIFT).perform();

Also, keep in mind that this version doesn't do the implicit waiting for Vaadin to finish processing server messages. If that is a problem, try adding a getCommandExecutor().waitForVaadin() call after the focus() call.
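
In code, that could look something like this:

TextFieldElement textField = $(TextFieldElement.class).first();
textField.focus();
// Let Vaadin finish processing server messages before sending keys.
getCommandExecutor().waitForVaadin();
new Actions(driver).keyDown(Keys.SHIFT).perform();
new Actions(driver).sendKeys(Keys.TAB).perform();
new Actions(driver).keyUp(Keys.SHIFT).perform();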

Another thing to note about Actions.sendKeys is that it doesn't release any modifier keys implicitly (unlike other keys sent through it). You must always remember to do the keyUp call for modifier keys separately, or clear all pressed keys with a sendKeys(Keys.NULL) call.
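
For example, the following clears any pressed modifiers in one go:

new Actions(driver).sendKeys(Keys.NULL).perform();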

Add explicit delays

Sometimes the browser just needs a little more time to process things, and putting the test thread to sleep may help. However, you must deal with the potential InterruptedException, and if your wait is too long, your connection can time out. Consider the following helper method, with the timeout adjusted to fit your test environment:

private static final int BROWSER_TIMEOUT_IN_MS = 30 * 1000;
/**
 * Sleeps for the given number of ms but ensures that the browser
 * connection does not time out.
 *
 * @param timeoutMillis
 *            Number of ms to wait
 */
protected void sleep(int timeoutMillis) {
    while (timeoutMillis > 0) {
        int delay = Math.min(BROWSER_TIMEOUT_IN_MS, timeoutMillis);
        try {
            Thread.sleep(delay);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        timeoutMillis -= delay;

        // Do something to keep the connection alive
        getDriver().getTitle();
    }
}

Now you can just add sleep(100) or some other suitable delay to the sections of your test that are struggling to keep up.
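
For example, combining the helper with the split-up drag-and-drop chain from earlier could look like this:

new Actions(driver).moveToElement(draggable).perform();
sleep(100);
new Actions(driver).clickAndHold().perform();
sleep(100);
new Actions(driver).moveToElement(dropTarget).perform();
sleep(100);
new Actions(driver).release().perform();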

There is also LockSupport.parkNanos, which does basically the same thing as Thread.sleep but doesn't declare any checked exceptions. You might still want a helper method around it, though, since it takes nanoseconds rather than milliseconds:

private static final long BROWSER_TIMEOUT_IN_NS = 30L * 1000 * 1000 * 1000;
private void parkMillis(long timeoutMillis) {
    long timeoutNanos = timeoutMillis * 1000 * 1000;
    while (timeoutNanos > 0) {
        long delay = Math.min(BROWSER_TIMEOUT_IN_NS, timeoutNanos);
        LockSupport.parkNanos(delay);
        timeoutNanos -= delay;

        // Do something to keep the connection alive
        getDriver().getTitle();
    }
}

With only three lines of difference in method length, the lower recognizability of the concept of 'parking', and the risk of accidental overflow if you don't remember to force all calculations to use longs (forget that L in this example and you end up with a delay of -64771072 nanoseconds and an eternal loop), Thread.sleep is likely the preferable approach.

Note that a fixed delay isn't a very robust solution. If your environment gets even slower, you may need to update the delays to match. If your environment gets faster, you might be wasting time for no good reason.

Wait until a condition is met

Whenever you can, it's better to avoid fixed delays and instead check periodically until your condition is met. It's not always possible for one reason or another, but it's a good default.

If you only have one element that you are waiting to see, there is a built-in helper method for that:

$(NotificationElement.class).waitForFirst();

However, if you want something other than the presence of the first element, you need to dig deeper. Selenium provides a number of built-in helper methods for various waiting needs, and you can pass the resulting condition as a parameter to the waitUntil method in TestBenchTestCase:

waitUntil(ExpectedConditions.elementToBeClickable(
        By.tagName("vaadin-notification")));

Sometimes you also need to wait for more than one thing, e.g. if your environment is slow enough that it takes a while before the contents are rendered:

NotificationElement notification = 
        $(NotificationElement.class).waitForFirst();
waitUntil(ExpectedConditions.textToBePresentInElement(notification,
        "Hello world!"));

If the built-in options aren't enough, you can create your own custom implementation:

waitUntil(driver -> notification.getText().startsWith("Hello"));

ExpectedCondition error messages

Sadly, many of the solutions above provide rather ugly and sometimes uninformative error messages, such as:

org.openqa.selenium.TimeoutException: Expected condition failed: waiting for com.vaadin.testbench.ElementQuery$$Lambda$674/0x00000008003ef270@834e986 (tried for 10 second(s) with 500 milliseconds interval)

You can customize the default error message in your custom implementations by overriding toString():

String expected = "Hello world!";
waitUntil(new ExpectedCondition<Boolean>() {
    String actual;
    @Override
    public Boolean apply(WebDriver arg0) {
        actual = notification.getText();
        return expected.equals(actual);
    }

    @Override
    public String toString() {
        // waiting for ...
        return "notification text to match '" + expected
                + "' (was: '" + actual + "')";
    }
});

org.openqa.selenium.TimeoutException: Expected condition failed: waiting for notification text to match 'Hello world!' (was: 'Hello world') (tried for 10 second(s) with 500 milliseconds interval)

Another approach is to use try-catch:

import static org.junit.Assert.fail;
...
try {
    waitUntil(driver -> notification.getText().startsWith("Hello"));
} catch (TimeoutException e) {
    fail("Notification text didn't match '" + expected + "' (was: '"
           + notification.getText() + "')");
}

As a bonus, if the check fails, it now registers as a failure rather than an error. However, the customization options for the error message are more limited, as you can only access local variables that are final or effectively final from within the actual check.

You can also combine these approaches:

try {
    waitUntil(new ExpectedCondition<Boolean>() {
        ...
    });
} catch (TimeoutException e) {
    fail(e.getMessage());
}

ExpectedCondition return values

Unless the waiting fails, waitUntil returns the result of ExpectedCondition.apply. Thus you can use it for more complicated queries than just finding the first element.

For example, finding a displayed element with a specific class name within a Dialog overlay:

WebElement buttonElement = waitUntil(
        ExpectedConditions.visibilityOfElementLocated(
                By.cssSelector("vaadin-dialog-overlay .myButton")));
ButtonElement button = wrap(ButtonElement.class, buttonElement);

You can do similar things in custom implementations. The failure values of ExpectedCondition.apply are null and false; returning anything else marks the check as passed.

If your ComboBox popup contents are slow to load, and you are only interested in some of the options, you could try something like this:

ComboBoxElement cb = $(ComboBoxElement.class).first();
cb.openPopup();
List<String> percentages = waitUntil(driver -> {
    List<String> filteredOptions = cb.getOptions().stream()
            .filter(item -> item.contains("%")
                    && !item.equals("100%"))
            .collect(Collectors.toList());
    if (filteredOptions.isEmpty()) {
        return null;
    }
    return filteredOptions;
});

The ComboBox in this example contains various size options. When the contents of the popup have loaded, only the options that are in percentage format and don't cover the entire 100% are returned for further use in your test method.

So when doesn't all this waiting work?

For example, an overlay that is only present for a limited time might disappear if you spend too long checking its properties before performing the operation you were planning, or you might need to fit your action into a specific state of an animation cycle. Things like that are flaky by nature, and there isn't much you can do to truly stabilize them.

Try again

If neither fixed delays nor waiting until a condition is met works, sometimes it's a good enough solution to just try again.

For example, sometimes a double click on an element simply doesn't register as a double click, or some mechanism is involved that can't be measured and has no quantifiable side effects. You might have an overlay that is meant to close when you click it or click outside of it, but the initial click doesn't work for some reason.

If these things work without issues in manual testing, it's not necessarily worth the effort to figure out exactly why the action fails. Just wait to see whether the results of your action appear as expected, and if not, catch the exception and try again. If the second attempt doesn't yield the expected results either, that's when you need to dig deeper.

If the effects of your action are complex and succeed partially, you may need to revert them before you try again. Alternatively, you might need to catch different exceptions on the second attempt, such as a StaleElementReferenceException if your overlay decides to disappear on you after all.
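
As a minimal sketch of the pattern (the grid, the cell coordinates, and the dialog that the double click is expected to open are all hypothetical here):

GridElement grid = $(GridElement.class).first();
grid.getCell(0, 0).doubleClick();
try {
    // Wait to see whether the double click had the expected effect.
    $(DialogElement.class).waitForFirst();
} catch (TimeoutException e) {
    // The first attempt didn't register; try once more before giving up.
    grid.getCell(0, 0).doubleClick();
    $(DialogElement.class).waitForFirst();
}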

Closing words

Test runs aren't always perfect, but the main goal is to test the functionality of your application, not the functionality of your tests. If there happens to be a bug in some version of ChromeDriver, or some version of Firefox is fiddly about registering programmatic clicks, or your test cluster is low on processing power – chances are that none of that affects the end users. If you can fix those problems, great, but sometimes we just need to live with the imperfections.

As the saying goes, nothing is as permanent as a temporary fix, but when it comes to your tests, the bar of what is good enough is much lower than when it comes to your production build. The tests exist to help you, and as long as they work well enough that they are more of a help than a hindrance, they are doing their job. The techniques discussed in this blog post should help your tests to help you even when the conditions are less than ideal.