This page is about Vaadin 8.

Testing web apps with JUnit 5 and TestBench

Everyone knows that testing graphical user interfaces tends to be a little inconvenient, regardless of the technology in use. For web applications, Selenium has established itself as the de-facto standard. With Vaadin, however, one can fall back on TestBench, a tool that offers a bit more comfort. We will see how to use it together with JUnit 5.

What are we dealing with here?

Using a sample application, we will see how different implementations of one and the same business process can have an effect on production. Which structure is adequate must certainly be decided individually for each project, always with a view to the expansion stages that are expected. This document can only point out the different possibilities; there is no generic, always-correct answer. The decision points are mostly not technical but economical in nature, and it is quite probable that they will have a direct impact on, for instance, the quality of the test coverage.

The example

The project presented here is a sample application implemented with Vaadin 8, in which a set of virtual persons can be entered and organized in a list. The entries can be edited and deleted. To enter a new record, pressing a button opens an entry screen that lists the necessary attributes.

A search can be performed over the set of records; it has been implemented here in a simple way with the help of an input field. The aim of the application is not a sophisticated search, but only to make it possible to display a varying set of records based on an input.

_images/JavaMagazin-TestingVaadin-002a.png _images/JavaMagazin-TestingVaadin-002b.png _images/JavaMagazin-TestingVaadin-002c.png

The animation below shows the application in use.

_images/JavaMagazin-TestingVaadin-001.gif

This sample application has been realized in different versions. From the user's perspective, it is always the same application with the same UI and functionality; the differences lie at the level of the source code, which is structured and implemented quite differently in each version.

The source code is available on GitHub at https://github.com/vaadin-developer/testbench-jumpstart

At present, there are three versions of this application. The first version is very compact and reminiscent of a Hello World; all elements are put together in one class. The implementation can be found in the module modules/demos/demo01.

The second version is structured more like an industrial project, but has some quirks that I unfortunately see again and again in real projects; these will only have an impact once tests are written. This implementation can be found in the module modules/demos/demo02. The individual elements have already been broken down into different modules. There is a module named shared, containing generic elements that can or must be used in almost all other modules. The module srv contains the backend implementation of the services that are available to the individual graphical components. The module ui-components contains a component as a representative of a project's own UI blocks, which may represent a sub-application or a functional block, or be the home of custom components such as a specialized table.

Version three is certainly not the optimum that can be achieved, but it has some strengths that I want to highlight here without turning it into a purely academic project. This implementation is also present in the repository, in the module modules/demos/demo03.

It contains the same modules as version two. The differences lie in the way the individual components are connected with one another, which, for instance, has an effect on how the elements can be tested.

No tests are included in the modules with the respective demo implementations. The tests have been moved into separate modules in order to demonstrate different approaches to implementing them.

There are two main groups, differing primarily in the use of JUnit 4 or JUnit 5. We will focus here on the implementations based on JUnit 5; the necessary JUnit 5 knowledge is covered in a separate section.

We now come to the first implementation, which is the starting point for the subsequent transformations.

The implementation

As already noted, I will present only the first version completely and then concentrate on the differences when presenting the further versions. As with every Vaadin application, we first of all need a servlet. This is the simplest version:

  @WebServlet(urlPatterns = "/*", name = "MyUIServlet", asyncSupported = true)
  @VaadinServletConfiguration(ui = MyUI.class, productionMode = false)
  public static class MyUIServlet extends VaadinServlet {
  }

By means of the annotation @VaadinServletConfiguration and its attribute ui = MyUI.class, the connection to the initial graphical component is established. I will not describe the initialization cycle further here, but this is the entry point if one wants to combine Vaadin with technologies such as dependency injection.

We now come to the implementation of the UI. To build the UI shown at the beginning in the class MyUI extends UI, we need

  • a grid for showing the data in the form of a table
  • a text field for entering the search term
  • a component for entering new persons, here with the name CustomerForm
  • a service, which enables the CRUD functions on the database.

With the exception of the component CustomerForm, the graphical elements can be taken from the standard set of Vaadin components. All elements are defined here as class attributes. In addition to the graphical elements, there is the CRUD service CustomerService for the transient demo data.

  private final Grid<Customer> grid = new Grid<>();
  private final TextField filterText = new TextField();
  private final CustomerForm form = new CustomerForm(this);
  private CustomerService service = CustomerService.getInstance();

In the method init(VaadinRequest vaadinRequest), all elements are then wired together. At the start, the input screen (CustomerForm) is set invisible, although an instance has already been created.

    form.setVisible(false);

Thereafter, the input field for the search requests is initialized. The important part is that updateList() is executed whenever the field's value changes. This method takes the text from the input field and uses it to start a rudimentary search; the resulting records are then handed to the grid (explained shortly) for display.

    filterText.setPlaceholder("filter by name...");
    filterText.addValueChangeListener(e -> updateList());
    filterText.setValueChangeMode(ValueChangeMode.LAZY);

  public void updateList() {
    grid.setItems(service.findAll(filterText.getValue()));
  }

One should also be able to clear the filter; this is realized with a button.

    Button clearFilterTextBtn = new Button();
    clearFilterTextBtn.setDescription("Clear the current filter");
    clearFilterTextBtn.addClickListener(e -> filterText.clear());

The text field and the button are put together in a layout component so that they are always placed next to each other on the screen.

    CssLayout filtering = new CssLayout(filterText, clearFilterTextBtn);
    filtering.setStyleName(ValoTheme.LAYOUT_COMPONENT_GROUP);

We now come to the button with which a new record can be created. Pressing it makes the instance of the class CustomerForm visible and provides a new instance of the class Customer to accept the input data. Any selection of a record in the table is cleared in the process.

    Button addCustomerBtn = new Button("Add new customer");
    addCustomerBtn.addClickListener(e -> {
      grid.asSingleSelect().clear();
      form.setCustomer(new Customer());
    });

Last but not least, a table is needed to provide an overview of the records. The columns are defined such that each column is connected to an attribute of the class Customer by means of a method reference. Each time a record is selected in the table, the corresponding instance of the class Customer is handed to the instance of the class CustomerForm, and the input screen is made visible. As soon as a row is deselected, the Customer instance is removed from the input screen again and the screen is hidden. Finally, the table is filled with its initial data.

    grid.addColumn(Customer::getFirstName).setCaption("First Name");
    grid.addColumn(Customer::getLastName).setCaption("Last Name");
    grid.addColumn(Customer::getEmail).setCaption("Email");
    grid.asSingleSelect()
        .addValueChangeListener(event -> {
          if (event.getValue() == null) {
            form.setVisible(false);
            form.setCustomer(null);
          } else {
            form.setCustomer(event.getValue());
          }
        });
    grid.setSizeFull();
    updateList();

All the elements are now placed in a final layout, and the application is finished.

    HorizontalLayout main = new HorizontalLayout(grid, form);
    main.setSizeFull();
    main.setExpandRatio(grid, 1);

    setContent(new VerticalLayout(new HorizontalLayout(filtering, addCustomerBtn), main));

CustomerForm

The CustomerForm is a fragment of the UI that is shown only when its functions are needed. It displays the details of the record selected (highlighted) in the table. The component also makes it possible to edit the data or, as a blank variant, to create a new record. The following attributes are defined in the class for this purpose.

  private final TextField                    firstName = new TextField("First name");
  private final TextField                    lastName  = new TextField("Last name");
  private final TextField                    email     = new TextField("Email");
  private final NativeSelect<CustomerStatus> status    = new NativeSelect<>("Status");
  private final DateField                    birthdate = new DateField("Birthday");

In the initial implementation of this component, the connection to the enclosing component is established by passing a reference as a constructor parameter. As a result, the two components are coupled quite closely to each other.

  private final MyUI myUI;

  public CustomerForm(MyUI myUI) {
    this.myUI = myUI;
    //SNIPP
  }

There is also interaction in the other direction, towards the enclosing component myUI. The record currently shown in this component can be deleted or saved, and this information must reach the instance of the class MyUI. In this implementation, that is done by directly calling methods provided by the MyUI instance.


  private CustomerService service = CustomerService.getInstance();
  
  private void delete() {
    service.delete(customer);
    myUI.updateList();
    setVisible(false);
  }

  private void save() {
    service.save(customer);
    myUI.updateList();
    setVisible(false);
  }

This leads to several things that are not to be recommended. Firstly, the component CustomerForm must have access to the service CustomerService. Secondly, methods are invoked directly on the component myUI. We will see how this can be made more elegant and robust by means of registrations and events. The aim must be that this component can exist on its own, independent of the class MyUI. But more about that later.
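To preview the event-based variant hinted at above, here is a minimal, framework-free sketch of how a form could publish its save events via listener registrations instead of calling MyUI directly. All names in this sketch (DecoupledCustomerForm, Registration) are illustrative and not part of the demo code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative sketch: the form publishes events; subscribers register
// listeners and receive a Registration handle to unsubscribe again.
class DecoupledCustomerForm {

  interface Registration {
    void remove();
  }

  private final List<Consumer<String>> saveListeners = new ArrayList<>();

  Registration addSaveListener(Consumer<String> listener) {
    saveListeners.add(listener);
    return () -> saveListeners.remove(listener);
  }

  // Would be triggered by the save button; the form no longer needs
  // a reference to MyUI or direct access to CustomerService.
  void save(String customerName) {
    for (Consumer<String> listener : new ArrayList<>(saveListeners)) {
      listener.accept(customerName);
    }
  }
}
```

The enclosing component would then subscribe with addSaveListener(name -> updateList()) and release the Registration when it is detached.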

Altogether, this is about 50 lines of source code (plus the classes CustomerService, Customer and CustomerStatus), and it forms the reference for the subsequent observations.

If we now take this as the basis and want to write tests for the UI, some technical preparations are required.

Test infrastructure

Local browser / web-driver combination

A few things are needed to run the application and the tests on one's own computer. First, we will write tests in which a local browser is remote-controlled by the JUnit test. For this we need the binary files of the web drivers; I am referring here to Google Chrome and Mozilla Firefox. Drivers also exist for Opera, Safari and MS Edge, but with some caveats. Naturally, the respective browser must also be installed on the system. One could look up each driver's download location on the Internet and gather everything by hand, but since this is a tedious and time-consuming task, there are open-source solutions that simplify the work.

One of these solutions is the Maven plug-in webdriverextensions-maven-plugin. One links it into the build section of the pom.xml and then only needs to specify a path where the files are to be saved and the browsers for which the files are to be fetched.

In this example, the system-specific files for the browsers Google Chrome, Opera and Mozilla Firefox are fetched and saved in the folder _data/webdrivers. A call of the Maven target webdriverextensions:install-drivers downloads the files and stores them at the defined location.

      <plugin>
        <groupId>com.github.webdriverextensions</groupId>
        <artifactId>webdriverextensions-maven-plugin</artifactId>
        <executions>
          <execution>
            <goals>
              <goal>install-drivers</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <keepDownloadedWebdrivers>true</keepDownloadedWebdrivers>
          <installationDirectory>_data/webdrivers</installationDirectory>
          <drivers>
            <driver>
              <name>chromedriver</name>
            </driver>
            <driver>
              <name>operadriver</name>
            </driver>
            <driver>
              <name>geckodriver</name>
            </driver>
          </drivers>
        </configuration>
      </plugin>

In order to use TestBench, the corresponding dependency is also needed in the pom.xml.

      <dependency>
        <groupId>com.vaadin</groupId>
        <artifactId>vaadin-testbench</artifactId>
        <version>${vaadin-testbench-api.version}</version>
        <scope>test</scope>
      </dependency>

Since TestBench is a commercial add-on, a license is expected when the tests are executed. A trial license is available at https://vaadin.com/pro/licenses. If you want to use TestBench for the development of open-source software, please register your open-source project with Vaadin, because free licenses are granted for open-source Vaadin projects.

Now everything is available and we can start writing the first test. This example uses JUnit 5; at this point the process is exactly the same as with JUnit 4, only the annotations differ.

public abstract class BaseVaadinTestClass extends TestBenchTestCase {

  protected String url;

  @BeforeEach
  public void setUp() {
    //SNIPP some inits to start the servlet container... see code

    System.setProperty("webdriver.chrome.driver", "_data/webdrivers/chromedriver-mac-64bit");
    System.setProperty("webdriver.gecko.driver", "_data/webdrivers/geckodriver-mac-64bit");
//    setDriver(new ChromeDriver());
//    setDriver(new SafariDriver());
    setDriver(new FirefoxDriver());
    getDriver().manage().window().setSize(new Dimension(1920, 1080));
  }

The initializations needed to start the servlet container have been left out here; we will discuss them later. Essentially, two things happen. Firstly, the paths pointing to the web driver binaries are set; this requirement comes from the web drivers, or rather from Selenium itself. Secondly, the desired instance of the web driver is created and set with the method setDriver(..). Note that the class BaseVaadinTestClass is derived directly from the class TestBenchTestCase, which is the base class for creating TestBench tests. It is important that the web driver is closed again after the test; if this is not done, instances of the requested browser continue to exist. If closing fails, the failure is simply ignored here so that the remaining tests are not interrupted. Logging has been omitted in this example for the sake of clarity.

  @AfterEach
  public void tearDown()
      throws Exception {
    ((CheckedExecutor) () -> getDriver().quit()).execute();
    //SNIPP shutdown Servletcontainer....
  }
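The CheckedExecutor used in tearDown wraps the call so that an exception from quit() is swallowed. A minimal sketch of what such a helper interface can look like (an assumed shape; the actual implementation in the project sources may differ):

```java
// Sketch: a functional interface whose execute() runs the wrapped action
// and deliberately ignores any exception, so a failing quit() does not
// interrupt the remaining tests.
@FunctionalInterface
interface CheckedExecutor {

  void doIt() throws Exception;

  default void execute() {
    try {
      doIt();
    } catch (Exception e) {
      // intentionally ignored; real code should at least log this
    }
  }
}
```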

In addition to the source code examples given in this article, I will also use the sources from the open-source project Functional-Reactive (http://www.functional-reactive.org/). The sources are available at https://github.com/functional-reactive/functional-reactive-lib

So far, we can remote-control a browser by means of Selenium on the computer on which the test runs. But this comes with some restrictions. If different versions of a browser are to be tested, it becomes necessary to switch between the installed and the active version, which can lead to quite complex, error-prone constructions at the operating-system level. In addition, not all necessary browsers may be available on the respective development machine; IE and Safari can be named as examples. A browser used in a test also blocks some system resources and, if one is unlucky, the focus of the input devices, so it is not possible to keep working on the machine while long-running tests execute. All of this together quickly makes one think about alternatives.

One possibility is to use headless browsers. Some time ago, there was the project PhantomJS for this purpose; unfortunately, its main developer decided to stop working on the project, so one has to think of alternatives, and not only for new projects. Google Chrome is also available in a headless version, is actively developed and is driven forward by Google itself. For the other browsers, such as Firefox, Safari, Opera and Edge, I am currently not aware of any stable procedure for working locally with headless versions. In all these cases, the problem of the platform dependency of the development machine remains.
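For headless Chrome, only the driver configuration in setUp changes. A configuration sketch using Selenium's ChromeOptions (it requires a local Chrome installation, and the flag names may vary slightly between Chrome versions):

```java
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

// Configuration sketch: start Chrome without a visible window so that the
// tests no longer occupy the screen or the input focus.
ChromeOptions options = new ChromeOptions();
options.addArguments("--headless");
options.addArguments("--window-size=1920,1080");
setDriver(new ChromeDriver(options));
```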

Virtual machines

To bypass the scenario just described, one can fall back on virtualization technology. The target operating system is then installed, started, configured and used in a fully virtual machine. The consumption of system resources is quite high, and the automation requires some knowledge, for instance of Ansible and similar tools. To reproduce one special case during development, this can be a worthwhile approach: a specific combination of operating system and browser version is set up, and the developer can test and debug there, mostly manually, to fix an individual problem. For extensive tests, however, this process is not recommended; more light-weight approaches are used instead.

Remote browser / web-driver combination

We now come to the approach in which the browsers used in a test are made available on other computers. There is an open-source project that has dealt with this topic for a very long time: Selenium, which can be found at http://www.seleniumhq.org/. The Selenium project is supported by several companies, has been actively developed for more than 10 years and has become a de-facto standard.

A Selenium grid is an environment in which a group of virtual and physical computers makes browser instances available, with which a web app can be tested. The set of computers can also consist of a single node. This means that one can install a single node directly on one's own computer, start it and use it for running the tests.

The different combinations and versions are stored in this cluster. A remote driver, fitted with specific attributes, is then used to request the required combination of operating system, browser type and browser version. None of these components needs to be installed on the computer on which the test is running, and web driver binaries for the different browsers are no longer needed either; the complete management of these elements is dropped.
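In the test code, only the driver creation changes: instead of a local driver, a RemoteWebDriver is pointed at the hub. A configuration sketch assuming a grid running on localhost and the Selenium 3 API:

```java
import java.net.URL;

import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

// Configuration sketch: request a Chrome instance from the Selenium grid
// instead of starting a browser on the local machine.
DesiredCapabilities capabilities = DesiredCapabilities.chrome();
// Operating system and browser version could be narrowed down here as well.
setDriver(new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), capabilities));
```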

However, the question arises as to who provides this cluster, and where. There are different options, for instance a local installation, the use of one or more cloud services, or Docker.

The whole process is simplest with Docker. Docker is nowadays available for all common platforms, including Linux (e.g., Debian or Ubuntu), Windows and macOS. An overview of all supported platforms is given at https://docs.docker.com/engine/installation/#supported-platforms

The cloud offerings, however, can only be used when testing the application is not subject to security-critical or legal restrictions. Docker, on the other hand, can be used on one's own developer machine as well as in an environment provided inside a closed corporate network.

Selenium - the original

We will start by creating a working environment in which we can build a Selenium grid. After Docker has been installed, open a terminal on the computer on which Docker is running and execute the following commands:

docker run -d -p 4444:4444 --name selenium-hub selenium/hub:latest
docker run -d --name selenium-node-chrome --link selenium-hub:hub selenium/node-chrome:latest

These commands fetch all the parts we need for a test with Google Chrome: first the hub controlling the Selenium grid, followed by a node that provides Google Chrome. The operating system used inside the containers is Linux. To verify that everything is working so far, one can, assuming the Docker service is running on one's own machine, open the URL http://localhost:4444/grid/console.

One can see there which version of Selenium is in use, under which operating system the node is running (here Linux), its IP and port, and which browser is available in which version.

_images/JaxEnter-Vaadin-009_001.png

If one now switches to the configuration tab, one gets more information, which can be used later for configuring the web driver.

_images/JaxEnter-Vaadin-009_002.png

In order to view the log files of the Selenium grid, one can enter the following docker command in a terminal:

docker logs -f selenium-hub

For the sake of completeness, here are the commands to stop these instances and to start them again.

docker stop selenium-node-chrome
docker stop selenium-hub
 
docker start selenium-hub
docker start selenium-node-chrome

The downloaded images can be deleted with the command docker image rm.

docker image rm --force selenium/node-chrome
docker image rm --force selenium/hub

We now have everything needed to bring up a running Selenium grid in Docker, use it, and delete it again if necessary. Since this process is so simple, I can only recommend preferring it over a classical local installation.

Zalenium - the enhancement of Zalando

We now come to the enhancement Zalenium. This project is also open source and lives on GitHub: https://github.com/zalando/zalenium. Its special feature is that functions such as recording videos of test cases are already built in. The dynamic integration of various cloud services is also possible.

The basic prerequisites for operation are Docker version >= 1.11.1 and the Docker images elgalu/selenium and dosel/zalenium:

docker pull elgalu/selenium
docker pull dosel/zalenium

The start is then done with the following command.

docker run --rm -ti --name zalenium -p 4444:4444 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /tmp/videos:/home/seluser/videos \
    --privileged dosel/zalenium start

This command maps the folder /tmp/videos into the container; the videos of the sessions are saved there. One should naturally choose a folder that matches one's own requirements and is also cleaned up at regular intervals.

In order to stop the Zalenium container, one can give the command docker stop zalenium.

If the Zalenium container is started on the locally running Docker host, the dashboard can be reached at http://localhost:4444/dashboard. It provides functions such as downloading the recorded videos and displaying the related log files.

_images/Zalenium_Dashboard.gif

For a live view, one can call the URL http://localhost:4444/grid/admin/live, again assuming that the Docker container has been started locally.

_images/Zalenium_LivePreview.gif

Selenoid - the challenger

We now come to the challenger, which manages entirely without the original Selenium implementation. This project is a re-implementation in the programming language Go. Selenoid is, however, compatible at the protocol level, which means that it behaves outwardly like a classical Selenium.

The major difference is that the implementation is considerably more efficient: the binary is approx. 7 MB in size, and the memory consumption is roughly a tenth of that of the original Selenium implementation.

To start using Selenoid, one can use the following Docker command. Here, too, we assume a local Docker installation.

docker run --rm                                     \
    -v /var/run/docker.sock:/var/run/docker.sock    \
    -v ${HOME}:/root                                \
    -e OVERRIDE_HOME=${HOME}                        \
    aerokube/cm:latest-release selenoid start       \
    --vnc --tmpfs 128

As soon as the container is started, one can view the status in the browser by simply entering the URL http://localhost:4444/status

The output reports the current status and is formatted as JSON, which simplifies processing with other tools.

{
 "total":5,
 "used":0,
 "queued":0,
 "pending":0,
 "browsers":{
   "chrome":{"60.0":{},
             "61.0":{}},
   "firefox":{"54.0":{},
              "55.0":{}},
   "opera":{"46.0":{},
            "47.0":{}}
  }
}

Selenoid UI

Selenoid provides its own small UI that shows some information about the current state. Here, too, installation and start are a single Docker command; it is assumed that the Selenoid node is running on the same Docker host.

 docker run -d --name selenoid-ui  \
    --link selenoid                 \
    -p 8080:8080                    \
    aerokube/selenoid-ui --selenoid-uri=http://selenoid:4444

The UI is accessible at the URL http://localhost:8080/ and shows some information about the current state and, if configured, a VNC view of the tests currently running. In this way one can watch the active tests in the browser window.

Vaadin Development Environment

The possibilities presented so far are always associated with a certain effort for building the infrastructure. The open-source project Vaadin Development Environment helps to build a development environment that can manage almost entirely without Internet access.

Types of tests

In the literature, one can find very different names for the types of tests. Here is a small overview of the forms and terms used in this document.

I am aware that there is no generally applicable, worldwide-accepted definition or classification. Nor is this a complete listing of all types.

When looking at web applications, one can make out different groups of tests, each with a different focus and with different effects on the main workflow of the development.

Unit tests

Classical unit tests consist of a large number of small units, each carrying out an isolated logical test. The individual tests must be independent of one another, and each test must be executable on its own, which follows indirectly from that independence. The execution must complete in a short time and must be able to run locally, which enables the developer to inspect what is happening with simple tooling. It makes a perceptible difference whether the overhead of running a test is 0.1 s or 1 s: at first glance this difference seems negligibly small, but it becomes quite big if a large number of these tests is executed many times a day. The time that adds up is a cost factor within the development that should not be underestimated. The aim of these tests is the validation of a single, small logical sub-function.

Integration tests

We now come to the integration tests. Here, several sub-components of the complete system are used together to carry out a logical test. There is already much debate about the point at which a test may be called an integration test.

Let us use an example. We are developing a core Java program that must access a database via JDBC. One can proceed in two ways. Firstly, one can replace the database with a pure in-memory solution for testing. Then no additional infrastructure needs to be started; but one is also no longer testing what is used later in production.

Next, consider a web application that must run in a servlet container. In addition to the JVM, a structural component, the servlet container, is started here. If one now wants to test a single component, for instance a specialized table, the servlet container must be started as well.

To keep things simple, I personally take the view that integration testing begins as soon as the system is extended to a point where different systems cooperate. Excepted from this, however, are system components that are technically essential for operating the component under test. Thus, in this example, the first test with the DB would be an integration test for me, since an RDBMS must be started before the test can run. The second example, in my opinion, is not an integration test, because the servlet container is part of the elementary runtime environment: a servlet cannot run without a servlet container.

I am aware that opinions will differ here. I do not claim complete correctness either, but simply use this working definition for this document.

We can thus summarize that integration tests always involve the combination of different technologies to provide a composition for the respective test. This usually includes the classical end-to-end tests.

Compatibility tests

If one focuses on the development of web applications, the field of compatibility testing can be divided into at least two areas.

First, it must be ensured that the logical function works in all supported combinations of operating system, browser type and browser version. The full permutation of all components yields an n-dimensional matrix that must be worked through. Since the individual combinations are atomic, the effort can be distributed over n nodes, which makes it possible to minimize the total running time within the available budget. However, the aim here is only to test logical equivalence.
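To make the size of that matrix tangible, here is a minimal, dependency-free sketch of enumerating the full permutation of OS, browser type and browser version. The class and method names are illustrative, not part of any test framework.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: enumerating the full OS x browser x version test matrix.
public class CompatMatrix {

  public static List<String> combinations(List<String> oses,
                                          List<String> browsers,
                                          List<String> versions) {
    List<String> result = new ArrayList<>();
    for (String os : oses)
      for (String browser : browsers)
        for (String version : versions)
          result.add(os + "/" + browser + "/" + version);
    return result;
  }

  public static void main(String[] args) {
    List<String> combos = combinations(
        List.of("linux", "windows"),
        List.of("chrome", "firefox"),
        List.of("57", "63"));
    // 2 * 2 * 2 = 8 atomic combinations, distributable over n nodes
    System.out.println(combos.size()); // 8
  }
}
```

Each entry in the resulting list is one atomic combination that can be dispatched to its own node.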

On the other hand, visual equivalence must also be tested. The goal here is that, ideally, the graphical interface looks the same in all combinations. And now come the exceptions: sometimes it is desirable to retain graphical style elements of the respective platform, in order to give the user a seamless integration into the environment being used.

Moreover, the displays are not exactly the same when using different types of browsers on the same platform.

Where exactly the permitted tolerance lies, is certainly subject to many other project-related factors.

Technically, one can work here with reference images that have been generated on a validated platform. These reference images are then used for comparison.

To summarize: both types of compatibility exist, and they should be tested separately, since they concern different levels of malfunction.

Performance tests

A very important area is performance. The question arises as to when one should start with it. Experience shows that it is never too early to start with these tests, but often too late. Performance influences the technologies and the architecture being used; a correction at the end of the project can have fatal consequences.

As with other test types, it is very important here that these tests run automatically. A useful middle path has established itself: performance tests run once per night against the current development state, and the generated reports are then part of the morning meetings. Only in this way can it be ensured that problems are measured as early as possible. I use the term measured deliberately, because with enough experience problems can often be detected even earlier.

The aim of a performance test is to put the system under load and eventually push it to destruction, in order to determine the absolute limit values. A subsequent reduction of the load also helps to determine whether the system returns to a proper operating state.

I would also like to point out that the reporting mechanisms to be used later can, and should, also be tested at this point.

Performance tests are normally synthetic test scenarios.

Long-term tests

Contrary to the performance tests, the aim of long-term tests is to hold the system, over a long period of time, exactly at the limit of the load it can withstand. These tests are meant to ensure that no further problems occur, such as memory leaks. If the right test scenario has been selected, all types of resource management are exercised here. To achieve this, it is recommended to focus on real business processes. Recordings from the currently active production system can serve as the basis; the aim is a realistic load profile.

Behavior-driven development

I have left out at this point how, and with what means, the acceptance-relevant tests are defined together with the business department. These tests affect the economic aspects of the project, and passing them usually serves as a gate to the next project phase.

How a business department defines these tests, i.e. which input values lead to which responses or output values of the system, is the subject of methods and tools such as jBehave. However, this is a meta-level that builds on the structure of the unit and integration tests. I refer here to the generally available literature, for instance: Wikipedia Behavior-driven development

Mutation testing

What comes after line coverage?

Almost everyone knows the situation in which management asks for reliable numbers with which the quality of a piece of software can be expressed. A popular measure here is test coverage at the line level. However, this abstract number can only be evaluated when one knows which portions of the source code are covered. Everyone has seen test coverage being improved by writing meaningless tests, whether by testing getters/setters or other trivial source code fragments.

This is where the term mutation testing comes in. It was first mentioned in the literature in the 1970s. The principle is quite simple: mutations are generated from a piece of source code, and these modified versions are run against the existing test suite.

For each mutation, at least one test from the suite must fail for the run to be considered successful; one then says that the mutation has not survived. However, no statement is made as to whether the failure was actually triggered by the mutation; it is merely assumed to be so.

If, however, not a single test fails, the mutation is said to have survived. As far as the tests are concerned, it is then irrelevant which version is shipped, which was certainly not the aim of the existing tests. Moreover, this introduces a vagueness into the system, which in combination with other vague spots can lead to errors that cannot be pinned down. The aim must therefore be not only a high test coverage, but a test coverage that is as robust as possible. But how exactly can a machine support us in doing this?
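The principle can be made concrete with a tiny, self-contained sketch. All names here are hypothetical; the "mutant" simulates what a mutation-testing tool would generate (the boundary operator `>=` changed to `>`), and the two "suites" show why only a robust suite kills it.

```java
import java.util.function.IntPredicate;

// Illustrative sketch of the mutation-testing principle (all names hypothetical).
public class MutationDemo {

  // original implementation
  static boolean isAdult(int age) { return age >= 18; }

  // mutant: '>=' was changed to '>'
  static boolean isAdultMutant(int age) { return age > 18; }

  // a weak suite checks only values far away from the boundary
  static boolean weakSuitePasses(IntPredicate impl) {
    return !impl.test(10) && impl.test(30);
  }

  // a robust suite also checks the boundary value itself
  static boolean robustSuitePasses(IntPredicate impl) {
    return !impl.test(10) && impl.test(18) && impl.test(30);
  }

  public static void main(String[] args) {
    // the mutant "survives" the weak suite ...
    System.out.println(weakSuitePasses(MutationDemo::isAdultMutant));   // true
    // ... but is "killed" by the robust suite: at least one test fails
    System.out.println(robustSuitePasses(MutationDemo::isAdultMutant)); // false
  }
}
```

A surviving mutant therefore points at exactly the kind of coverage that looks good as a line-coverage number but asserts too little.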


Monkey testing

Monkey testing means using the application in an absolutely meaningless way, to test whether it is possible to cause damage at all, and whether, and how, the system deals with such input. At first glance this may appear a little strange, but at second glance it is a very effective technique for discovering weak points in the UI. Even then, it should not be possible to cause serious damage.

At this point, I will not go further into this topic and refer to further literature: https://en.wikipedia.org/wiki/Monkey_testing

Vaadin Add-ons

We now come to the practical use of all components, without further examining questions regarding the provisioning of the infrastructure needed for this.

jUnit5

In the examples below, we will take a closer look, with the help of jUnit5, at what a test can look like as a whole. It is important at this point to understand when and what happens during a run.

Only the essential features used by this implementation are described here in their basic function. The original jUnit5 documentation, available as the User Guide JUnit5, is recommended.

The lifecycle

As in jUnit4, there is a lifecycle in jUnit5 that describes the individual phases in the execution of a test. In principle, there are again the two pairs of elements: before/after all tests and before/after each individual test. The related annotations are:

  • @BeforeAll
  • @BeforeEach
  • @AfterEach
  • @AfterAll

One can already implement quite a few things with this, but not everything that is needed. In order to extend the lifecycle programmatically, jUnit5 offers the concept of extension call-backs: for each lifecycle stage a call-back is invoked, to which a response can be given. The structure here is considerably more elaborate.

The image from the original documentation gives an overview of this.

http://junit.org/junit5/docs/current/user-guide/images/extensions_lifecycle.png

We will shortly see in detail what this can look like in source code.

Test - HelloWorld

In the descriptions below, I will refer to the following minimal example. The focus here is on how the test is initialized and at which points configuration can take place.

The example consists of a UI that contains only one button, which is to be found and used in the respective test. A function buttonID() is used here, which we will examine later in detail. For now it is enough to know that this function generates an ID for the element.

public class BasicTestUI extends UI {

  public static final String BUTTON_ID = buttonID().apply(BasicTestUI.class, "buttonID");

  @Override
  protected void init(VaadinRequest request) {
    final Button button = new Button();
    button.setId(BasicTestUI.BUTTON_ID);
    button.setCaption(BUTTON_ID);
    button.addClickListener(e -> out.println("e = " + e)); // 'out' via: import static java.lang.System.out;
    setContent(button);
  }
}

@VaadinUnitTest && @VaadinCompatTest

The Vaadin add-ons for Testbench assume that all tests can be classified into two main groups.

@VaadinUnitTest

These are, first, the unit tests as described above, which are written with the aim of testing one logical fact. Such a test must run quickly and is mostly developed in the phase in which the related aspect of the application is under active development. In practice this means that the test and the corresponding source code are written more or less at the same time, and an iterative process causes mutual changes until the desired result is achieved.

The test itself must therefore execute quickly; debugging is an essential part, and the result should be visible to the developer. This means that the browser used for the test is installed on the developer's system and the progress of the test can be watched in it directly. To mark a test as such, the annotation @VaadinUnitTest is used at the class level.

To write a test that uses the button in the test UI, the following source code is required.

@VaadinUnitTest
public class BasicUnitTest {

  @Test
  void test001(BasicTestPageObject pageObject) {
    pageObject.loadPage();
    pageObject.button.get().click();
  }
}

There are some implicit things here that have not yet been explained, for instance the PageObject used here with the name pageObject. We will come to this shortly; let us first look at the other main group of tests.

@VaadinCompatTest

The other major group is that of the long-running tests. Here it is necessary to carry out a set of tests with the same logic; more details were given above under compatibility and long-term tests. These tests not only take longer, the developer is also less interested in watching each individual run: a final OK is completely sufficient. Accordingly, these tests should run in a different environment, in order to keep the developer's computer free for other tasks.

Tests of this type are marked at the class level with the annotation @VaadinCompatTest.

@VaadinCompatTest
public class BasicCompatTest {

  @TestTemplate
  void testTemplate(BasicTestPageObject pageObject) {
    pageObject.loadPage();
    pageObject.button.get().click();
  }
}

Configuration

If one compares the two test implementations, the only differences are the annotation at the class level and the annotation on the respective test method; the latter is due to the implementation of jUnit5. Looking at how the two tests are executed, the following picture emerges: when the annotation @VaadinUnitTest is used, the test is called exactly once, and the PageObject is initialized with the web driver defined for the unit tests. (An explanation follows later.)

If the annotation @VaadinCompatTest is used, the test is called n times. On each run, the next combination of browser, OS and version is used. The test can therefore be written in a way that the developer need not know how often it is started; it is only guaranteed that, within one run, the instance of the supplied PageObject matches exactly the combination being tested. To map this with jUnit5 tools alone, the choice falls on test templates. The description can be found at http://junit.org/junit5/docs/current/user-guide/#writing-tests-test-templates

To define where and how the respective web driver is to be used, the configuration files must be adapted to one's own requirements.

Config files

To configure the add-ons, one can create a folder named .testbenchextensions in the project folder. There it is defined which browser is used for the unit tests and which for the compat tests. The structure of the properties file is quite simple.

There are two main areas. The first is unittesting, which specifies the browser to be used for the unit tests. In this example, the locally installed Chrome browser is used. However, a Selenoid cluster can also be set as the target; in that case, unittesting.target is simply the IP/DNS name of the Selenoid server.

The second main area is the definition of the individual targets for the compat tests. The following naming structure is used.

compattesting.grid. is the main component. It is followed by a logical name; here, for instance, selenoid is used. The configuration path for this target therefore starts with compattesting.grid.selenoid. The sub-keys are defined with

  • target: target address of the cluster
  • os: the OS to be expected there, on which the browser runs, here Linux in Docker
  • browser: which browser types are used
  • browser.<typename>.version: specification of the versions to be used
    • here, for instance, Chrome and Firefox are defined as targets
    • the Firefox version in this example is 57
unittesting.browser=chrome
unittesting.target=locale

compattesting.grid.selenoid.target=locale
compattesting.grid.selenoid.os=linux
compattesting.grid.selenoid.browser=chrome,firefox
compattesting.grid.selenoid.browser.firefox.version=57
compattesting.grid.selenoid.browser.chrome.version=63
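The file above is a standard Java properties file, so its structure can be inspected with java.util.Properties. How the add-on actually loads its configuration is internal to the project; the following is only a sketch of parsing keys of the shape shown above.

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Properties;

// Sketch: parsing configuration keys of the shape shown above with
// java.util.Properties. The add-on's real loading mechanism may differ.
public class ConfigDemo {

  public static Properties parse(String content) {
    Properties props = new Properties();
    try {
      props.load(new StringReader(content));
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
    return props;
  }

  public static void main(String[] args) {
    Properties props = parse(
        "unittesting.browser=chrome\n"
      + "compattesting.grid.selenoid.browser.firefox.version=57\n");
    // the unit-test browser and the pinned Firefox version for the grid target
    System.out.println(props.getProperty("unittesting.browser"));                                  // chrome
    System.out.println(props.getProperty("compattesting.grid.selenoid.browser.firefox.version")); // 57
  }
}
```

The dotted key prefixes (unittesting., compattesting.grid.<name>.) act as a simple namespace within one flat properties file.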

There are further settings, which should be looked up in the original documentation of the project. The project itself is present at https://github.com/vaadin-developer/vaadin-addons

The PageObject Pattern

We now come to the mapping of the logical tests. The PageObject pattern is used here. The pattern itself is quite old, but still worth a look. The aim is to make it easy to access the elements of the graphical user interface. The PageObject pattern is not restricted to web applications; it can be used for all kinds of graphical user interfaces.

The structure can be described as follows. Each graphical user interface has its own way of accessing its elements, such as a button. The instance, or a corresponding representative, is needed in order to reproduce a user input in a test at a desired point in time; in the case of a button, a click on it. Let us assume that the button can be described with the logical name OK; then the test needs a method with the name buttonOK(). To use the button, one can imagine the following example.

pageObject.buttonOK().click()
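Stripped of all Selenium and Vaadin specifics, the pattern can be sketched in a few lines of plain Java. Everything here is a stand-in: FakeButton plays the role of a WebElement, and LoginPageObject is a hypothetical page object that hides how the element is located behind a logical name.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Framework-free sketch of the PageObject pattern: the test talks to
// logically named elements, never to raw selectors.
public class PageObjectSketch {

  // stand-in for a UI element such as a Selenium WebElement
  static class FakeButton {
    final AtomicInteger clicks = new AtomicInteger();
    void click() { clicks.incrementAndGet(); }
  }

  // the PageObject hides HOW the element is located behind a logical name
  static class LoginPageObject {
    private final FakeButton okButton = new FakeButton();
    FakeButton buttonOK() { return okButton; }
  }

  public static void main(String[] args) {
    LoginPageObject pageObject = new LoginPageObject();
    pageObject.buttonOK().click();
    System.out.println(pageObject.buttonOK().clicks.get()); // 1
  }
}
```

If the way the OK button is located ever changes, only the body of buttonOK() changes; every test keeps calling the same logical name.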

The basics

We now come to the implementation for interacting with a Vaadin application. The communication with the web application is abstracted in a generic way by the Selenium project: a part of the communication is mapped, and the user gets a representative of the addressed graphical component. In the implementation itself, the Selenium-specific portion can then be found in the PageObject method buttonOK().

public WebElement buttonOK(){
    return /* query via Selenium */
}

If one takes a closer look at this method signature, one notices that the return value is of type WebElement. And exactly here the trouble starts: from this point onward, the developer must know what he asked for there. That is still quite manageable now, but what about in a week? And, starting from this point, how well are we protected against refactoring errors? The answer is rather disillusioning. This is where Testbench comes into play: it enables one to continue working with the Vaadin types, if one wants to. It is, of course, up to the developer to keep working at the level of web elements instead.

In our real case, we get an instance of the type ButtonElement.

public ButtonElement buttonOK(){
    return /* query via Testbench/Selenium */
}

How can one now search for the button that creates a new record? At this point I assume that the page has already been loaded; how this is handled, I will explain at a later stage.

The class TestBenchTestCase contains a method with the signature $(Class<T> clazz). The method name is quite short, even if it takes some getting used to. This method expects as parameter a class that represents a graphical element, in our case the class ButtonElement.

_images/JavaMagazin-TestingVaadin-003.png

The implementation of our method then looks as follows.

public ButtonElement buttonOK(){
    return $(ButtonElement.class);
}

However, several buttons are present in our UI, so we still need to define which button exactly is meant. There are different ways of addressing it, which we will examine one by one.

One of these is to specify the caption shown on the button. With this, we finally get the implementation.

public ButtonElement buttonOK(){
    return $(ButtonElement.class)
              .caption("Add new customer")
              .first();
}

Similarly, one can now work with the Grid, which is present in the UI.

  public GridElement dataGrid() {
    return $(GridElement.class).first();
  }

We now come to another case: the button that clears the input entered into the text field for searching the records. This button has no caption we could use. Here, the objective can be reached with relative addressing.

In this program, the button is always located within a CssLayout. This enables us to specify this relationship.

  public ButtonElement clearFilterBTN() {
    return $(CssLayoutElement.class).$(ButtonElement.class).first();
  }

In the same way, we obtain the text field.

  public TextFieldElement filterTextField() {
    return $(CssLayoutElement.class).$(TextFieldElement.class).first();
  }

The approach should have become clear by now, so we can move on to the first test.

The aim is now to enter a term into the text field to start a search: filterTextField().setValue("Lara"). From the generated test data we know that the term Lara yields exactly one hit, which means that the table may contain only a single entry: Assert.assertEquals("Lara", getFirstNameAtIndex(0));

After this has been checked, the button for clearing the search input is pressed (clearFilterBTN().click();), followed by a check whether the table again contains further records.

The complete test can look as follows:

public class AddressBook02Test extends AddressBook {
  @Test
  public void test001() {
    getDriver().get(url);
    filterTextField().setValue("Lara");
    assertEquals("Lara" , getFirstNameAtIndex(0));
    assertEquals(1L , dataGrid().getRowCount());
    clearFilterBTN().click();
    assertTrue(dataGrid().getRowCount() > 1);
  }
}

This test contains the methods getFirstNameAtIndex and getDriver().get(url), which have not yet been described in detail. The latter is given the URL of the web application, which causes the page to be loaded. The method getFirstNameAtIndex addresses the column FirstName in the table.

  public String getFirstNameAtIndex(int index) {
    return $(GridElement.class).first()
                               .getCell(index, 0)
                               .getText();
  }

We have now successfully completed the first step. If one thinks a little about what the next questions will be, one of them is: how does one work with more than one PageObject? We have so far drafted only the first page with its main components and have ignored that the component CustomerForm is activated as well.

More than one PageObject

If various components are used within a test, one can create the PageObjects ad hoc and use them. It is important that the same web driver is used for all of them. Since we can always get hold of the web driver instance currently in use, the rest is trivial.

  public CustomerFormPageObject createNewEntry() {
    newCustomerButton().click();
    final CustomerFormPageObject result = new CustomerFormPageObject();
    result.setDriver(getDriver());
    return result;
  }

In the example just shown, the button for creating a new customer instance is clicked, after which the input screen should appear. To access it, we create the next PageObject, in this case an instance of the class CustomerFormPageObject. The web driver is taken from the existing instance and set on the new one. With that we are done: comprehensive tests can now be formulated across both instances.

Refactoring, the enemy of jUnit tests?

We now come to the challenge of having to carry out a refactoring that changes the structure of the page. Let us assume that the elements we want to address are still available. What will happen, with high probability, is a failure of all tests whose selectors directly or indirectly use parts of the structure as identification attributes. This means constructs such as $(CssLayoutElement.class).$(TextFieldElement.class). For the same reason, it is a bad idea to use captions of elements. So what can one do at this point?

Plain old IDs

A very simple and also very effective means are classical IDs. During the development of components, all elements involved in direct interaction are given IDs. These IDs must be easily accessible from outside; this can be done either through constants, or the ID can be calculated. Which of these ways is preferred will certainly differ from case to case. Here, I will take the path of static IDs.

MyUI - Refactoring

In this example, a large portion of the UI is already mapped in the class MyUI. The question naturally arises whether this is right, and whether we should start to factor this out. In real projects, this question is clearly to be answered with yes. In this example, however, we will omit that and start cleaning up within the class.

All elements that must be addressed directly are given an ID. To build up the name space, one can start by defining a base name for the project. This is done here in the interface Constants.

public interface Constants {
  String REGISTRY_BASE_KEY = "org.rapidpm.vaadin.demo";
}

In the class MyUI, the name range is then defined for the class based on this.

  public static final String COMPONENT_BASE_KEY = REGISTRY_BASE_KEY + "." + MyUI.class.getSimpleName();

The definition of the individual components follows the same principle. Let us look at the definitions for the data table. First, the ID is defined by which the component can be addressed; this ID is also the root node for all sub-IDs of the component. The sub-keys here, however, are not IDs but properties for the captions, which can be used in a property service.

  public static final String DATA_GRID_ID                     = COMPONENT_BASE_KEY + "." + "dataGrid";
  public static final String DATA_GRID_COL                    = DATA_GRID_ID + "." + "col";
  public static final String DATA_GRID_COL_CAPTION_FIRST_NAME = DATA_GRID_COL + "." + "firstName";
  public static final String DATA_GRID_COL_CAPTION_LAST_NAME  = DATA_GRID_COL + "." + "lastName";
  public static final String DATA_GRID_COL_CAPTION_EMAIL      = DATA_GRID_COL + "." + "email";

The property service is defined quickly and is then provided with an InMemory solution.

public interface PropertyService {
  String resolve(String key);
  boolean hasKey(String key);
}
public class PropertyServiceInMemory implements PropertyService {
  private final Map<String, String> storage = new HashMap<>();

  @Override
  public String resolve(final String key) {
    return storage.get(key);
  }

  @Override
  public boolean hasKey(final String key) {
    return storage.containsKey(key);
  }

  @PostConstruct
  public void init() {
    //SNIPP
    storage.put("org.rapidpm.vaadin.demo.MyUI.dataGrid.col.firstName", "First Name");
    storage.put("org.rapidpm.vaadin.demo.MyUI.dataGrid.col.lastName", "Last Name");
    storage.put("org.rapidpm.vaadin.demo.MyUI.dataGrid.col.email", "Email");
  }
}
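Outside a DI container, the @PostConstruct method would not be invoked automatically, so a usage sketch has to call init() by hand. The following self-contained variant repeats the interface and the in-memory implementation from above (without the @PostConstruct annotation) purely so that it compiles without any container.

```java
import java.util.HashMap;
import java.util.Map;

// Usage sketch for the PropertyService above. Without a DI container,
// init() is called by hand instead of via @PostConstruct.
public class PropertyServiceDemo {

  interface PropertyService {
    String resolve(String key);
    boolean hasKey(String key);
  }

  static class PropertyServiceInMemory implements PropertyService {
    private final Map<String, String> storage = new HashMap<>();

    public String resolve(String key) { return storage.get(key); }
    public boolean hasKey(String key) { return storage.containsKey(key); }

    public void init() {
      storage.put("org.rapidpm.vaadin.demo.MyUI.dataGrid.col.firstName", "First Name");
    }
  }

  public static void main(String[] args) {
    PropertyServiceInMemory service = new PropertyServiceInMemory();
    service.init(); // what the container would do via @PostConstruct
    System.out.println(service.resolve("org.rapidpm.vaadin.demo.MyUI.dataGrid.col.firstName"));
  }
}
```

The key string is exactly the constant DATA_GRID_COL_CAPTION_FIRST_NAME built above, which is what ties the ID namespace and the caption properties together.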

The access to the property service can be enabled, for instance, by means of dependency injection and similarly a method can be defined, which makes the access more comfortable.

  @Inject private PropertyService propertyService;

  private String resolve(String key){
    return propertyService.resolve(key);
  }

While initializing the components, one can then work completely with constants.

    grid.setId(DATA_GRID_ID);
    grid.addColumn(Customer::getFirstName)
        .setCaption(resolve(DATA_GRID_COL_CAPTION_FIRST_NAME))
        .setId(DATA_GRID_COL_CAPTION_FIRST_NAME);
    grid.addColumn(Customer::getLastName)
        .setCaption(resolve(DATA_GRID_COL_CAPTION_LAST_NAME))
        .setId(DATA_GRID_COL_CAPTION_LAST_NAME);
    grid.addColumn(Customer::getEmail)
        .setCaption(resolve(DATA_GRID_COL_CAPTION_EMAIL))
        .setId(DATA_GRID_COL_CAPTION_EMAIL);

This construction now also survives a refactoring. For instance, if a class is renamed, the IDE can detect that the elements used in the properties must be renamed as well. The name space thus always matches the name of the class.

Synthesis of test cases

Since the components can now be addressed by ID, the implementation of the PageObjects becomes a little simpler.

The source codes shown here are part of the open-source project vaadin-add-ons, which is present on github. https://github.com/vaadin-developer/vaadin-addons

In order now to get the data grid, the implementation gets reduced to the following.

  public GridElement dataGrid() {
    return $(GridElement.class).id(DATA_GRID_ID);
  }

Now we come to a case that unfortunately does not look so good. Let us assume we have to select a specific cell from the table. For this there is the method getCell(x, y), which expects the coordinates of the cell as parameters. This does not mean screen coordinates, but the relative addresses within the table itself. If these are known exactly, everything is fine. The PageObject should now contain a helper method that returns the last name of a specific record.

The record in question can be, for instance, the one in the first row; that gives us the first coordinate. The column number is still missing. In the implementation below, it is assumed that the last name is found in the second column.

  public String getLastNameAtIndex(int index) {
    return dataGrid()
        .getCell(index, 1) //TODO reorder problem
        .getText();
  }

This is correct in our case, but it is only defined implicitly. In order to define it explicitly, the following must also happen when initializing the UI.

    grid.setColumnOrder(
        DATA_GRID_COL_CAPTION_FIRST_NAME,
        DATA_GRID_COL_CAPTION_LAST_NAME,
        DATA_GRID_COL_CAPTION_EMAIL
    );

The element can now be accessed correctly. But there is still the assumption that, up to the time this method is used, no user interaction has rearranged the columns, and that the initialization does not produce a different sequence. If the source code just shown is rewritten as the version below, then hopefully some tests will fail.

    grid.setColumnOrder(
        DATA_GRID_COL_CAPTION_LAST_NAME,
        DATA_GRID_COL_CAPTION_FIRST_NAME,
        DATA_GRID_COL_CAPTION_EMAIL
    );
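One way to reduce the fragility of the hard-coded index "1" would be to derive the column index from the explicit column order, so that a reorder breaks lookups in exactly one place. This is not part of the add-on; it is only a plain-Java sketch of the idea, with hypothetical names.

```java
import java.util.List;

// Sketch: deriving a cell's column index from the explicit column order
// instead of hard-coding it, so a reordered grid fails fast in one place.
public class ColumnIndexDemo {

  static int columnIndex(List<String> columnOrder, String columnId) {
    int index = columnOrder.indexOf(columnId);
    if (index < 0) throw new IllegalArgumentException("unknown column: " + columnId);
    return index;
  }

  public static void main(String[] args) {
    // mirrors the order passed to grid.setColumnOrder(...) above
    List<String> order = List.of("firstName", "lastName", "email");
    System.out.println(columnIndex(order, "lastName")); // 1
  }
}
```

A getLastNameAtIndex variant could then call getCell(index, columnIndex(order, "lastName")) instead of getCell(index, 1).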

Vaadin Add-on - ComponentIDGenerator

To address a component in a test, it is simplest when one has IDs. The question is how an ID can be generated in a simple way; we have seen earlier that the manual setup of such IDs can be quite laborious. How can this be made more elegant? The first idea might be to use random values here, for instance UUIDs. On second thoughts, however, it is better for development if defined name spaces are used, because this way the ID stays meaningful for the developer.

The interface ComponentIDGenerator offers a number of methods that allow the generation of IDs belonging to a specific component type.

Let us first have a look at the generic version.

  static TriFunction<Class, Class, String, String> genericID() {
    return (uiClass, componentClass, label)
        -> (uiClass.getSimpleName()
            + "-" + componentClass.getSimpleName()
            + "-" + label.replace(" ", "-"))
        .toLowerCase(Locale.US);
  }

This function is the generic variant for generating an ID based on the class that describes the type, for instance a Button, the holding class in which this component is used, for instance a CustomerForm, and a logical name for this identity. From these components a name space is built, separated by hyphens. Since part of the ID is made up of the type of the element, one can start building convenience functions. As an example, take the function that can be used to generate an ID for a button.

  static BiFunction<Class, String, String> buttonID() {
    return (uiClass, label) -> genericID().apply(uiClass, Button.class, label);
  }

Here a TriFunction has been converted into a BiFunction, which firstly simplifies the usage and secondly makes the usage in the source code clearer.

public static final String BUTTON_ID = buttonID().apply(BasicTestUI.class, "buttonID");

In this example, an ID has been generated that refers to the button and is used in the class BasicTestUI. If the class is renamed later in a refactoring, the ID is also adapted to the new name. We thus get very stable IDs that do not lose their semantic reference.
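The generator logic above can be exercised in isolation. The following self-contained variant declares a local TriFunction and stand-in classes for Button and BasicTestUI, purely so that it compiles without the add-on or Vaadin on the classpath; the string-building logic itself is the same as shown above.

```java
import java.util.Locale;
import java.util.function.BiFunction;

// Self-contained re-implementation of the ID generator logic above.
// TriFunction, Button and BasicTestUI are local stand-ins for the
// add-on/Vaadin types, so this compiles without any dependencies.
public class ComponentIdDemo {

  interface TriFunction<A, B, C, R> { R apply(A a, B b, C c); }

  static class Button { }      // stand-in for com.vaadin.ui.Button
  static class BasicTestUI { } // stand-in for the UI class

  static TriFunction<Class<?>, Class<?>, String, String> genericID() {
    return (uiClass, componentClass, label)
        -> (uiClass.getSimpleName()
            + "-" + componentClass.getSimpleName()
            + "-" + label.replace(" ", "-"))
        .toLowerCase(Locale.US);
  }

  static BiFunction<Class<?>, String, String> buttonID() {
    return (uiClass, label) -> genericID().apply(uiClass, Button.class, label);
  }

  public static void main(String[] args) {
    System.out.println(buttonID().apply(BasicTestUI.class, "buttonID"));
    // basictestui-button-buttonid
  }
}
```

The resulting ID encodes holding class, component type and logical name, which is exactly what makes it readable in a failing test report.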

Component testing

We now come to the topic of component tests. A component should be an independent fragment of the UI, because it is reused at different points. From our sample application, let us take the component CustomerForm. This component could be used for display only, as well as for modification or for new entry. It is therefore probable that this fragment of the UI will be used in different places, and it makes sense to implement the component in such a way that it is independent. In the extreme case, this even means that the component is developed in a separate module. When that should happen must be decided case by case; here it has been done as an example.

This also results in some requirements: one must now be able to test the component in isolation. For this, a UI is needed that lets us write the test effectively. In the end, it is a web application for displaying this single component. In this case, a servlet is needed and a UI class, which is used as the container of the component.

@WebServlet(urlPatterns = "/*", name = "MyUIServlet", asyncSupported = true)
@VaadinServletConfiguration(ui = TestUI.class, productionMode = false)
public class TestServlet extends VaadinServlet {
}
public class TestUI extends UI {
//SNIPP
}

The implementation of the class TestUI now has the following tasks. Firstly, it is the container for the component to be tested. But there are further requirements: if one looks at how the component exchanges information with other components, there must be a way for this information to reach
the component, and it must likewise be possible to receive information from the component.

Events are one example: they trigger a response in the component. One should also be able to record the events that are sent by the component,
so that this behavior can be verified in the test.
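The "record events from the component" idea can be sketched as follows. The CustomerForm stub, its setSaveListener method, and the field names are hypothetical stand-ins for the real Vaadin component, kept self-contained so the pattern is visible without the framework.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch: the test UI registers a listener on the component under test
// and records every event, so tests can verify the component's behavior.
public class EventRecordingSketch {

  static class Customer {
    final String firstName;
    Customer(String firstName) { this.firstName = firstName; }
  }

  // Stand-in for CustomerForm: it notifies a listener when "Save" is pressed.
  static class CustomerForm {
    private Consumer<Customer> saveListener = c -> {};
    void setSaveListener(Consumer<Customer> listener) { this.saveListener = listener; }
    void clickSave(Customer c) { saveListener.accept(c); }
  }

  final List<Customer> recorded = new ArrayList<>();
  Customer fromLastEvent;

  void wire(CustomerForm form) {
    form.setSaveListener(customer -> {
      fromLastEvent = customer;   // keep the last payload for assertions
      recorded.add(customer);     // keep the full history as well
    });
  }
}
```

In the real TestUI, the recorded payload would additionally be mirrored into the plain input fields, so a browser-driven test can inspect it.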

In the case of CustomerForm, an instance of the class Customer is expected, which is then displayed. It is advisable at this point to expose its attributes as input fields in the test UI, so that they can be manipulated easily and used as a data reference in the tests.

In our example, these are the attributes listed below.

  private final       TextField                    firstName          = new TextField("First name");
  private final       TextField                    lastName           = new TextField("Last name");
  private final       TextField                    email              = new TextField("Email");
  private final       NativeSelect<CustomerStatus> status             = new NativeSelect<>("Status");
  private final       DateField                    birthday           = new DateField("Birthday");
  private final       TextField                    id                 = new TextField("Customer ID");

The related IDs are defined at the class level.

  public static final String TEST_SWITCH_BUTTON = buttonID().apply(TestUI.class, "testSwitchButton");
  public static final String REGISTER_BUTTON    = buttonID().apply(TestUI.class, "testRegisterButton");
  public static final String FIRST_NAME         = textfieldID().apply(TestUI.class, "firstName");
  public static final String LAST_NAME          = textfieldID().apply(TestUI.class, "lastName");
  public static final String EMAIL              = textfieldID().apply(TestUI.class, "email");
  public static final String BIRTHDAY           = dateFieldID().apply(TestUI.class, "birthday");
  public static final String CUSTOMER_FORM      = genericID().apply(CustomerForm.class, TestUI.class, "customerForm");
  public static final String ID                 = textfieldID().apply(TestUI.class, "customerID");

Similarly, there is the instance of the component to be tested, the required data container (Customer), and a Binder instance for linking the attributes to the data container.

  private final Binder<Customer>             beanBinder   = new Binder<>(Customer.class);
  private final CustomerForm                 customerForm = new CustomerForm();

  private Customer fromLastEvent;

Everything together can be arranged easily in a VerticalLayout; the important point here is not the ergonomic concept, but only that the individual components can be addressed in the tests.

    final VerticalLayout testAttributes = new VerticalLayout();
    testAttributes.addComponents(id,
                                 firstName,
                                 lastName,
                                 email,
                                 birthday,
                                 status
    );

    final Button aSwitch = new Button("switch");
    aSwitch.setId(TEST_SWITCH_BUTTON);
    aSwitch.addClickListener((Button.ClickListener) event -> customerForm.setCustomer(fromLastEvent));

    // The register button was missing above; its click handler is omitted here.
    final Button register = new Button("register");
    register.setId(REGISTER_BUTTON);

    final VerticalLayout layout = new VerticalLayout(customerForm, aSwitch, register, testAttributes);

    setContent(layout);

This web application can now be used to write component tests. There will certainly be specific customizations in individual cases, but the main process remains essentially the same. The tests use the same mechanisms and structures that are also used in the unit and integration tests. Since only an application with exactly one component is started, test execution should always be faster than it would be against the complete application.
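A component test against this web application can be sketched with the PageObject pattern mentioned below. The Driver interface and the FakeDriver are stand-ins for Selenium's WebDriver, so the sketch stays self-contained; in the real test, Testbench element queries would play that role. The concrete ID strings are hypothetical, mirroring the generated IDs above.

```java
import java.util.HashMap;
import java.util.Map;

public class ComponentTestSketch {

  // Stand-in for WebDriver: just enough surface for the page object.
  interface Driver {
    void type(String id, String text);
    void click(String id);
    String read(String id);
  }

  // Hypothetical IDs, mirroring those generated in TestUI.
  static final String FIRST_NAME = "TestUI-TextField-firstName";
  static final String SWITCH     = "TestUI-Button-testSwitchButton";

  // Page object: tests address the test UI only through this API,
  // never through raw IDs scattered across test methods.
  static class TestPage {
    private final Driver driver;
    TestPage(Driver driver) { this.driver = driver; }
    void enterFirstName(String name) { driver.type(FIRST_NAME, name); }
    void pushIntoForm() { driver.click(SWITCH); }
  }

  // In-memory driver so the sketch can be exercised without a browser.
  static class FakeDriver implements Driver {
    final Map<String, String> fields = new HashMap<>();
    String lastClicked;
    public void type(String id, String text) { fields.put(id, text); }
    public void click(String id) { lastClicked = id; }
    public String read(String id) { return fields.get(id); }
  }
}
```

Because the page object encapsulates the IDs, a renamed component only requires a change in one place, which is exactly the stability the generated IDs aim for.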

Summary

We have dealt with a few things here that one faces when writing automated UI tests based on jUnit. Special focus was placed firstly on the management of the web driver for controlling the browser. Secondly, we have seen that even old design patterns can play to their strengths; in this case, the PageObject pattern in combination with the component IDs.

In these examples, the Testbench of Vaadin was used. If you want to know more about this,
then I would recommend the open-source project Vaadin Add-ons on GitHub, in particular the repositories mentioned there.

These repositories and the Vaadin documentation contain some more detailed information about this topic.

If you have questions or comments, simply contact me at sven@vaadin.com or via Twitter @SvenRuppert.

Happy coding!