You are viewing documentation for Vaadin 23.

Advanced Testing Methods

There are a few advanced testing methods you may want to consider: waiting for Vaadin; waiting until a particular condition is met; scrolling; profiling test execution time; and running tests in parallel.

These testing methods aren’t typically needed. For example, situations in which you might need to disable automatic waiting or scrolling in a view are rare. In such cases, you’ve probably encountered a bug in the software. Nevertheless, these testing methods are explained here for when they are needed.

Waiting for Vaadin

Traditional web pages are loaded and rendered by the browser as soon as they're received. In such applications, you can test the page elements immediately after the page has loaded. In Vaadin and other Single-Page Applications (SPAs), rendering is done asynchronously by JavaScript code. Therefore, you need to wait until the server has responded to an AJAX request and the JavaScript code has finished rendering the UI.

A major advantage of using TestBench compared to other testing solutions is that TestBench knows when something is still being rendered on the page. It waits for rendering to finish before moving on with the test. Usually, this isn’t something you need to consider since waiting is automatically enabled. However, it might be necessary to disable it sometimes. You can do this by calling disableWaitForVaadin() in the TestBenchCommands interface.

You can call it in a test case as follows:

testBench(getDriver()).disableWaitForVaadin();
When waiting for rendering to finish has been disabled, you can explicitly wait for it by calling waitForVaadin():

testBench(getDriver()).waitForVaadin();
You can re-enable waiting in the same interface with enableWaitForVaadin().

Waiting Until a Condition is Met

In addition to waiting for Vaadin, it's also possible to wait until a condition is met. For example, you might want to wait until an element is present on the web page. That might be done like so:

waitUntil(ExpectedConditions.presenceOfElementLocated(By.id("first")));
This call waits until the specified element is present, or times out after waiting for 10 seconds, by default.

waitUntil(condition, timeout) allows the timeout duration to be controlled. For example, to wait for up to 30 seconds:

waitUntil(ExpectedConditions.presenceOfElementLocated(By.id("first")), 30);
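Conceptually, an explicit wait like this polls the condition until it returns a value or the timeout expires. The following self-contained sketch (plain Java, not the TestBench API) illustrates the mechanism:

```java
import java.util.function.Supplier;

public class PollingWait {
    // Polls the condition until it returns a non-null value or the
    // timeout (in seconds) expires, mimicking waitUntil(condition, timeout).
    public static <T> T pollUntil(Supplier<T> condition, long timeoutSeconds)
            throws InterruptedException {
        long deadline = System.nanoTime() + timeoutSeconds * 1_000_000_000L;
        while (System.nanoTime() < deadline) {
            T result = condition.get();
            if (result != null) {
                return result;
            }
            Thread.sleep(50); // poll interval
        }
        throw new IllegalStateException(
                "Condition not met within " + timeoutSeconds + " s");
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // The "element" appears after roughly 200 ms
        String element = pollUntil(
                () -> System.currentTimeMillis() - start >= 200 ? "first" : null,
                10);
        System.out.println("Found element: " + element);
    }
}
```

TestBench's own waitUntil works the same way, except the condition receives the WebDriver instance, as in Selenium's explicit waits.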
To be able to interact with an element, it needs to be visible on the screen. This limitation is set so that tests which are run using a WebDriver simulate a normal user as much as possible. TestBench handles this automatically by ensuring that an element is in view before an interaction is triggered.

Sometimes, you might want to disable this behavior. You can do this by calling setAutoScrollIntoView(false) on the TestBenchCommands interface.

Profiling Test Execution Time

You might be interested not only in whether an application works, but also in how long it takes. Profiling test execution times consistently isn't trivial: a test environment can have different kinds of latency and interference.

For example, in a distributed setup, timing results taken on the test server would include the latencies between the test server, the grid hub, a grid node running the browser, and the web server running the application. In such a setup, you could also expect interference between multiple test nodes, which all might make requests to a shared application server and possibly also shared virtual machine resources.

Furthermore, in Vaadin applications there are two sides which need to be profiled: the server side, on which the application logic is executed; and the client side, where it’s rendered in the browser. Vaadin TestBench includes methods for measuring execution time both on the server side and the client side.

The TestBenchCommands interface offers the following methods for profiling test execution time:


totalTimeSpentServicingRequests()

This returns the total time, in milliseconds, spent servicing requests in the application on the server side. The timer starts when you first navigate to the application and thereby start a new session. Time accumulates only while requests for that particular session are being serviced.

If you’re also interested in the client-side performance for the last request, you must call timeSpentRenderingLastRequest() before calling this method. It’s necessary because this method makes an extra server request, which causes an empty response to be rendered.


timeSpentServicingLastRequest()

This returns the time in milliseconds spent servicing the last request in the application on the server side. Not all user interaction through the WebDriver causes server requests.

As with the total, if you’re also interested in the client-side performance for the last request, you must call timeSpentRenderingLastRequest() before calling this method.


totalTimeSpentRendering()

This returns the total time in milliseconds spent rendering the user interface of the application on the client side, that is, in the browser. This time accumulates only while the browser is still rendering after interacting with it through the WebDriver.


timeSpentRenderingLastRequest()

This returns the time in milliseconds spent rendering the user interface of the application after the last server request. Not all user interaction through the WebDriver causes server requests.

If you also call timeSpentServicingLastRequest() or totalTimeSpentServicingRequests(), you should do so before calling this method. These methods cause a server request, which zeros the rendering time measured by this method.

The following example from the TestBench demo measures both server-side and client-side execution time:

public void verifyServerExecutionTime() throws Exception {
    // Get the start time on the server side
    long currentSessionTime = testBench(getDriver())
            .totalTimeSpentServicingRequests();

    // Interact with the application (calculateOnePlusTwo() is a demo
    // helper that clicks through a simple 1 + 2 calculation)
    calculateOnePlusTwo();

    // Calculate the elapsed processing time on the server side
    long timeSpentByServerForSimpleCalculation =
            testBench().totalTimeSpentServicingRequests() -
            currentSessionTime;

    // Report the timing
    System.out.println("Calculating 1+2 took about "
            + timeSpentByServerForSimpleCalculation
            + "ms in the servlet's service method.");

    // Fail if the processing time was critically long
    if (timeSpentByServerForSimpleCalculation > 30) {
        fail("Simple calculation shouldn't take " +
             timeSpentByServerForSimpleCalculation + "ms!");
    }

    // Do the same with rendering time
    long totalTimeSpentRendering =
            testBench().totalTimeSpentRendering();
    System.out.println("Rendering UI took "
            + totalTimeSpentRendering + "ms");
    if (totalTimeSpentRendering > 400) {
        fail("Rendering UI shouldn't take "
               + totalTimeSpentRendering + "ms!");
    }

    // A normal assertion on the UI state would follow here
}

Running Tests in Parallel

TestBench supports parallel test execution using either its own test runner (JUnit 4) or native JUnit 5 parallel execution.

Up to fifty test methods are executed simultaneously by default. The limit can be set using the com.vaadin.testbench.Parameters.testsInParallel system property.
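The limit behaves like a fixed-size worker pool: at most that many test methods run at once, and the rest queue up. The following self-contained sketch (plain Java, not TestBench) illustrates the effect of such a limit with a pool of five workers:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedParallelism {
    // Runs `tasks` dummy "tests" on a pool of `limit` threads and
    // returns the highest number observed running at the same time.
    public static int maxConcurrent(int limit, int tasks)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(limit);
        AtomicInteger running = new AtomicInteger();
        AtomicInteger maxObserved = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> {
                int now = running.incrementAndGet();
                maxObserved.accumulateAndGet(now, Math::max);
                try {
                    Thread.sleep(20); // simulated test work
                } catch (InterruptedException ignored) {
                }
                running.decrementAndGet();
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return maxObserved.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // With a limit of 5, no more than 5 "tests" ever run at once
        System.out.println("max concurrent: " + maxConcurrent(5, 20));
    }
}
```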

When running tests in parallel, you need to ensure that the tests are independent and don’t affect each other in any way.

Extending ParallelTest (JUnit 4)

You'll usually want to configure something for all of your tests, so it makes sense to create a common superclass, for example public abstract class AbstractIT extends ParallelTest.

If your tests don’t work in parallel, set the com.vaadin.testbench.Parameters.testsInParallel system property to 1.

Using Native JUnit 5 Parallel Execution

To run tests in parallel, extend the TestBench utility class BrowserTestBase or manually annotate test classes with @Execution(ExecutionMode.CONCURRENT).

To disable parallel execution, annotate the test class with @Execution(ExecutionMode.SAME_THREAD).
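For example, a test class opting in to concurrent execution might be annotated like this (a sketch assuming JUnit 5 and TestBench on the classpath; the class and method names are placeholders):

```java
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.parallel.Execution;
import org.junit.jupiter.api.parallel.ExecutionMode;

// All test methods in this class may run concurrently
@Execution(ExecutionMode.CONCURRENT)
public class MyViewIT {

    @Test
    public void firstTest() {
        // ...
    }

    @Test
    public void secondTest() {
        // ...
    }
}
```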

Accessing WebDriver and Additional Test Information

Using JUnit 5, you can access additional test information in a method annotated with @Test, @BeforeEach, @AfterEach, @BeforeAll, or @AfterAll by adding a BrowserTestInfo method parameter. Here is an example:

@BeforeEach
public void setWebDriverAndCapabilities(BrowserTestInfo browserTestInfo) {
    // Customize the driver if needed
    setDriver(browserTestInfo.driver());
    // Access the browser capabilities
    this.capabilities = browserTestInfo.capabilities();
}
BrowserTestInfo contains information about the following:

  • WebDriver and browser capabilities used for the current test execution;

  • Hostname of the hub for remote execution; and

  • Browser name and version used for local execution.