Advanced Testing Concepts

The following testing concepts aren’t typically needed in everyday tests. The cases in which you need to disable automatic waiting or automatic scrolling in a view are rare; if you find yourself doing so, you’re probably dealing with a bug.

Waiting for Vaadin

Traditional web applications load a page that’s immediately rendered by the browser. In such applications, you can test the page elements as soon as the page has loaded. In Vaadin and other Single-Page Applications (SPAs), rendering is done asynchronously by JavaScript code. Therefore, you need to wait until the server has responded to an AJAX request and the JavaScript code has finished rendering the UI.

A major advantage of using TestBench compared to other testing solutions is that TestBench knows when something is still being rendered on the page. It waits for rendering to finish before moving on with the test. Usually, this isn’t something you need to consider since waiting is automatically enabled. However, it might be necessary to disable it sometimes. You can do this by calling disableWaitForVaadin() in the TestBenchCommands interface.

You can call it in a test case as follows:

testBench(driver).disableWaitForVaadin();

When automatic waiting is disabled, you can explicitly wait for rendering to finish by calling waitForVaadin().

testBench(driver).waitForVaadin();

You can re-enable the waiting with enableWaitForVaadin() in the same interface.
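
Putting these calls together, a minimal sketch might look like the following; the ButtonElement interaction is just an arbitrary example of something that triggers a server request.

testBench(driver).disableWaitForVaadin();

// Interact without TestBench blocking on pending Vaadin requests.
$(ButtonElement.class).first().click();

// Explicitly wait for the server response and client-side rendering.
testBench(driver).waitForVaadin();

// Restore the default automatic waiting for the rest of the test.
testBench(driver).enableWaitForVaadin();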

Waiting Until a Condition Is Met

In addition to waiting for Vaadin, it’s also possible to wait until a condition is met. For example, you might want to wait until an element is visible on the web page.

waitUntil(ExpectedConditions.presenceOfElementLocated(By.id("first")));

This call waits until the specified element is present, or times out after waiting for 10 seconds by default.

The waitUntil(condition, timeout) variant lets you control the timeout, given in seconds.
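
For example, to wait for up to 30 seconds for the same element:

waitUntil(ExpectedConditions.presenceOfElementLocated(By.id("first")), 30);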

Scrolling

To be able to interact with an element, it needs to be visible on the screen. This limitation exists so that tests run through a WebDriver simulate a normal user as closely as possible. TestBench handles this automatically by ensuring that an element is in view before an interaction is triggered.

Sometimes, you might want to disable this behavior. You can do this with TestBenchCommands.setAutoScrollIntoView(false).
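
For example, at the beginning of a test:

testBench(driver).setAutoScrollIntoView(false);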

Profiling Test Execution Time

You might be interested not only in whether your application works, but also in how long it takes. Profiling test execution times consistently isn’t trivial, as a test environment can have different kinds of latency and interference.

For example, in a distributed setup, timings taken on the test server would include the latencies between the test server, the grid hub, a grid node running the browser, and the web server running the application. In such a setup, you could also expect interference between multiple test nodes, which all might make requests to a shared application server and possibly also share virtual machine resources.

Furthermore, in Vaadin applications there are two sides which need to be profiled: the server side, on which the application logic is executed; and the client side, where it’s rendered in the browser. Vaadin TestBench includes methods for measuring execution time both on the server side and the client side.

The TestBenchCommands interface offers the following methods for profiling test execution time:

totalTimeSpentServicingRequests()

Returns the total time in milliseconds spent servicing requests in the application on the server side. The timer starts when you first navigate to the application and hence start a new session. The time passes only when servicing requests for the particular session.

If you’re also interested in the client-side performance for the last request, you must call timeSpentRenderingLastRequest() before calling this method. It’s necessary because this method makes an extra server request, which causes an empty response to be rendered.

timeSpentServicingLastRequest()

Returns the time in milliseconds spent servicing the last request in the application on the server side. Not all user interaction through the WebDriver causes server requests.

As with the total, if you’re also interested in the client-side performance for the last request, you must call timeSpentRenderingLastRequest() before calling this method.

totalTimeSpentRendering()

Returns the total time in milliseconds spent rendering the user interface of the application on the client side, that is, in the browser. This time passes only when the browser is rendering the UI after it has been interacted with through the WebDriver.

timeSpentRenderingLastRequest()

Returns the time in milliseconds spent rendering the user interface of the application after the last server request. Not all user interaction through the WebDriver causes server requests.

If you also call timeSpentServicingLastRequest() or totalTimeSpentServicingRequests(), you should do so before calling this method. The methods cause a server request, which zeros the rendering time measured by this method.

The following example is given in the VerifyExecutionTimeITCase.java file in the TestBench demo.

@Test
public void verifyServerExecutionTime() throws Exception {
    // Get start time on the server-side
    long currentSessionTime = testBench(getDriver())
            .totalTimeSpentServicingRequests();

    // Interact with the application
    calculateOnePlusTwo();

    // Calculate the passed processing time on the server side
    long timeSpentByServerForSimpleCalculation =
            testBench().totalTimeSpentServicingRequests() -
            currentSessionTime;

    // Report the timing
    System.out.println("Calculating 1+2 took about "
            + timeSpentByServerForSimpleCalculation
            + "ms in servlets service method.");

    // Fail if the processing time was critically long
    if (timeSpentByServerForSimpleCalculation > 30) {
        fail("Simple calculation shouldn't take " +
             timeSpentByServerForSimpleCalculation + "ms!");
    }

    // Do the same with rendering time
    long totalTimeSpentRendering =
            testBench().totalTimeSpentRendering();
    System.out.println("Rendering UI took "
            + totalTimeSpentRendering + "ms");
    if (totalTimeSpentRendering > 400) {
        fail("Rendering UI shouldn't take "
               + totalTimeSpentRendering + "ms!");
    }

    // A normal assertion on the UI state
    assertEquals("3.0",
        $(TextFieldElement.class).first()
        .getValue());
}

Running Tests in Parallel

TestBench supports parallel test execution using its own test runner (JUnit 4) or native JUnit 5 parallel execution.

Up to 50 test methods are executed simultaneously by default. The limit can be set using the com.vaadin.testbench.Parameters.testsInParallel system property.
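
For example, to limit execution to five simultaneous test methods (the value here is arbitrary), pass the property to the JVM that runs the tests:

-Dcom.vaadin.testbench.Parameters.testsInParallel=5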

When running tests in parallel, you need to ensure that the tests are independent and don’t affect each other in any way.

Extending ParallelTest (JUnit 4)

Usually, you want to configure something for all your tests, so it makes sense to create a common superclass. For example, you might use public abstract class AbstractIT extends ParallelTest.
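
A minimal sketch of such a superclass might look like this; the application URL is an assumption used only for illustration.

import org.junit.Before;

import com.vaadin.testbench.parallel.ParallelTest;

public abstract class AbstractIT extends ParallelTest {

    @Before
    public void openApplication() {
        // ParallelTest's own @Before has already created the driver.
        // The URL is an example; point it at your deployed application.
        getDriver().get("http://localhost:8080/");
    }
}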

If your tests don’t work in parallel, set the com.vaadin.testbench.Parameters.testsInParallel parameter to 1.

Using Native JUnit 5 Parallel Execution

To run tests in parallel, extend the TestBench utility class BrowserTestBase or manually annotate test classes with @Execution(ExecutionMode.CONCURRENT).

To disable parallel execution, annotate the test class with @Execution(ExecutionMode.SAME_THREAD).
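
A minimal sketch of these options follows; the class names are illustrative.

import com.vaadin.testbench.BrowserTestBase;
import org.junit.jupiter.api.parallel.Execution;
import org.junit.jupiter.api.parallel.ExecutionMode;

// Inherits concurrent execution from the TestBench base class.
class InheritedParallelIT extends BrowserTestBase {
    // test methods
}

// Requests concurrent execution explicitly on the class.
@Execution(ExecutionMode.CONCURRENT)
class AnnotatedParallelIT {
    // test methods
}

// Opts a single class out of parallel execution.
@Execution(ExecutionMode.SAME_THREAD)
class SequentialIT extends BrowserTestBase {
    // test methods that must not run concurrently
}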
