Implementing sign-in with Google’s OAuth 2 services

Traditionally, public web apps have had their own authentication mechanisms, typically implemented with application-specific username-password pairs. The approach is straightforward for web developers to set up, but also infamously difficult to do right (think missing or too-weak password hashing). The largest security issue with password-based authentication is not the implementation in the application, though, but the end user, who tends to pick bad passwords and reuse them across web apps.


OpenID was supposed to be the salvation for the password issue with web applications, but in the end the OAuth and OAuth 2 standards have become the standard way to delegate authentication to separate services. These kinds of solutions are good both for the application developer (security-wise) and especially for the end users, who don’t need to remember lots of different passwords. Now that Google has just closed its OpenID service in favour of the OAuth 2 standard, it is probably a good time to publish a short tutorial on how to implement Google Sign-In in a server-side Java application. I implemented “Google Sign-In” in my recent example app for creating invoices, so let’s have a look at how to do that.

Scribe - the de-facto Java library for OAuth 2

The OAuth 2 specification is not simple. Do yourself a favour and don’t try to use it at the HTTP level. Google also suggests using, for example, their own OAuth library to simplify the usage. Scribe is another excellent Java library that helps with generic OAuth 2 usage. From the Vaadin Directory you can actually even find an add-on built around Scribe, but it doesn’t support Google as a provider by default, and the Scribe library can pretty easily be used on its own with Vaadin as well.

In the invoice example application, there are no features at all that we want to expose to non-authenticated users. We don’t actually implement any fine-grained access control within the application; we just want to show user-specific data. So we need an identifier for each user, an email address, that Google will verify for us. Using the email address as a key gives us the freedom to change to some other authentication mechanism in the future. Google has lots of different services and APIs one could use, but in our case we only need the basic account information, including the email address.

To begin, you need to sign in to the Google Developer Console and register your web app for the service. You’ll also need to register the allowed callback URLs for your application. I suggest registering “localhost versions” of your return URLs to make development and debugging easier. See Google’s OAuth 2 docs for more details.

Display the login screen

For users entering the application, we naturally want to display a login screen. In case the session scoped UserSession bean says the user has not yet logged in, we simply hide the main content of the application in UI init method and show a modal LoginWindow instance for the end user:

@Inject
Instance<LoginWindow> loginWindow;

@Override
protected void init(VaadinRequest request) {
    if (!userSession.isLoggedIn()) {
        // hide the main content and show a modal login window instead
        addWindow(loginWindow.get());
    }
}
Alternatively we could swap the main content into a view with the login functionality.

OAuthService - the gateway to user specific data

In LoginWindow we need to create a Scribe object called OAuthService, configured for your provider. Google2Api is a reusable helper class that defines some details of Google’s OAuth 2 service. For most service providers Scribe contains these implementations out of the box, but for the newer Google OAuth 2 service I relied on an implementation available on GitHub. In addition to that, you need to provide the Google API key and the secret key that you got from the Developer Console. If you wish to use some other (OAuth 2 protected) Google services, you should switch the “email” scope to something else.

The callback URL is where the user is redirected after access has been granted, with the verifier key as a parameter. We’ll use the current location of the user’s browser (roughly, your effective application URL), but strip away a possible hash part.

private OAuthService createService() {
    String callBackUrl = Page.getCurrent().getLocation().toString();
    if (callBackUrl.contains("#")) {
        callBackUrl = callBackUrl.substring(0, callBackUrl.indexOf("#"));
    }
    // gpluskey and gplussecret hold the API key and secret
    // from the Google Developer Console
    return new ServiceBuilder().provider(Google2Api.class)
            .apiKey(gpluskey).apiSecret(gplussecret)
            .scope("email").callback(callBackUrl)
            .build();
}
Via the fully configured service object, we get the URL where the user can log in and grant the application access to their data. To forward the user there, I’m using a separate link, so that users don’t feel they ended up there by accident.

service = createService();
String url = service.getAuthorizationUrl(null);

gplusLoginButton = new Link("Login with Google", new ExternalResource(url));

The OAuth 2 magic

When the user grants (or rejects) access to their email address, they return to our application with special request parameters. To read those, we register a session-scoped request handler in our Vaadin application. Using the code passed as a request parameter, we create a Verifier instance that is our “passport” to the OAuth-protected services Google provides.

Using the Verifier, we can now call the Google+ service to get generic information about the user, specifically the email address we are interested in here. The payload from the Google+ API is in JSON format, so I wrote very simple wrapper classes and parsed the response using Gson. The next step is to pass the email address to UserSession using a login method.
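For reference, such wrapper classes only need to declare fields for the values we actually use. Here is a minimal sketch (the class and field names mirror those used in the handler code; the exact set of fields is up to you, since Gson simply fills fields whose names match the JSON keys):

```java
// Minimal wrapper classes for the Google+ "people/me" JSON payload.
// Gson maps JSON keys to same-named fields; unused keys are ignored.
class GooglePlusAnswer {
    public String displayName;
    public Email[] emails;

    public static class Email {
        public String value; // the email address itself
        public String type;  // e.g. "account"
    }
}
```

Parsing then becomes a one-liner with `new Gson().fromJson(json, GooglePlusAnswer.class)`.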

Finally, we just need to do some small cleanup. Close the window, remove the OAuth return URL handler and, as a final touch, redirect the user to a clean address without the OAuth request parameters. The full request handler method in the example looks like this:

public boolean handleRequest(VaadinSession session, VaadinRequest request,
        VaadinResponse response) throws IOException {
    if (request.getParameter("code") != null) {
        String code = request.getParameter("code");
        Verifier v = new Verifier(code);
        Token t = service.getAccessToken(null, v);

        OAuthRequest r = new OAuthRequest(Verb.GET,
                "https://www.googleapis.com/plus/v1/people/me");
        service.signRequest(t, r);
        Response resp = r.send();

        GooglePlusAnswer answer = new Gson().fromJson(resp.getBody(),
                GooglePlusAnswer.class);

        userSession.login(answer.emails[0].value, answer.displayName);

        // cleanup: close the login window, deregister this handler and
        // redirect to a clean URL without the OAuth request parameters
        close();
        session.removeRequestHandler(this);
        ((VaadinServletResponse) response).getHttpServletResponse()
                .sendRedirect(redirectUrl); // the clean application URL
        return true;
    }
    return false;
}
Hooray! We can now identify the user and provide the correct user-specific data, without nasty registration forms or low-quality passwords from the end user. Although OAuth 2 is not necessarily the simplest way to tackle authentication, the procedure is in the end pretty straightforward. The end user can skip the registration and password hassles, and you don’t need to worry about the security of your password hashing algorithms.

If you want to try OAuth 2 sign-in with my example application, you just need to create an application in the Google Developer Console and create a “Client ID” in the “APIs & Auth -> Credentials” section. Also, remember to register a proper “Redirect URL” for your development server (for example http://localhost:8080/invoicer/). Once you have placed the “Client ID” and its “Client secret” into the file (or into pom.xml), you can just run the app locally and start playing around with it.

Check out the full example app

JCache, why and how?

As the small Java Specification Request (JSR) number, 107, suggests, JCache was a very long-standing standardization effort that finally finished last year, over ten years after it got started. What does it mean to you? Why, where and how could you use it?

Caching libraries in the Java ecosystem are almost as common as mosquitoes in Northern Europe. Ehcache, Hazelcast, Infinispan, GridGain, Apache Ignite, JCS… Caching is needed in many different kinds of solutions to optimize the application in various ways. The simplest libraries are just in-memory object maps with simple eviction rules, while the most advanced caching libraries have efficient cross-JVM features and configurable options to write the cache state to disk, making them cluster-ready persistence options in their own right, and a good basis for memory-heavy computation and big-data-style programming.

Most caching solutions are based on map-like data structures, and the JCache API tries to standardize the most common use cases. If you have advanced needs, you’ll probably have to use some implementation-specific features, as with JPA, but the standard will definitely make it easier to swap between caching libraries in the future. It also makes it easier for developers to move from one project to another, even when the projects use different caching libraries.

“Simple” usage

Although the JCache API doesn’t let you adjust all implementation-specific features, the standard API is pretty versatile and should adapt to the most common use cases. The following code snippet gives an overview of creating and configuring a cache and using it to cache an expensive service method call.

private void listEntriesJavaSEStyle(String filter) {
    final String name = "myCache";

    Cache<String, List> cache = Caching.getCache(name, String.class, List.class);
    if (cache == null) {
        Notification.show("Creating cache");
        final CachingProvider cachingProvider = Caching.getCachingProvider();
        final CacheManager mgr = cachingProvider.getCacheManager();
        MutableConfiguration<String, List> config = new MutableConfiguration<>();
        config.setTypes(String.class, List.class);
        config.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(
                Duration.ONE_MINUTE));
        cache = mgr.createCache(name, config);
    }

    // first look up from the cache; if not found, go to the service and cache the value
    List cached = cache.get(filter);
    if (cached != null) {
        Notification.show("Cache hit!");
    } else {
        Notification.show("Cache missed :-(");
        List<PhoneBookEntry> entries = service.getEntries(filter);
        cache.put(filter, entries);
    }
}
The code snippet uses the filter string as the cache key and caches the result list from the backend if it is not already there. In serious usage, you’d naturally move the cache configuration away from your application logic and save the cache reference in a local field.
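As a sketch of that refactoring, here is the same look-up-then-fallback pattern with the cache held in a field. To keep the sketch dependency-free, a plain ConcurrentHashMap stands in for the JCache Cache; with JCache you would store the Cache<String, List> reference in the field instead (EntryService and expensiveBackendCall are hypothetical names):

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// The cache lives in a field; configuration would happen once
// (e.g. in a constructor or @PostConstruct method), not inside
// the business method.
class EntryService {
    private final Map<String, List<String>> cache = new ConcurrentHashMap<>();

    List<String> getEntries(String filter) {
        // look up from the cache, fall back to the backend on a miss
        return cache.computeIfAbsent(filter, this::expensiveBackendCall);
    }

    private List<String> expensiveBackendCall(String filter) {
        // placeholder for the real, expensive backend query
        return Collections.singletonList(filter + "-entry");
    }
}
```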

The cache configuration is the most probable place where you’ll still need implementation-specific code or other configuration options. The standard API has, for example, quite limited eviction-rule configuration possibilities, supporting only simple expiration-time-based rules.

The actual Cache object API is straightforward for developers with experience of java.util.Map usage or proprietary caching APIs. I’d expect it to match most caching use cases as such, and the need to escape to implementation specific APIs during actual usage will be rare.

The JCache API also supports cache mutation events and entry processors. When using those, note however that JCache doesn’t specify that they run in the same JVM process as your own code. So using them for things that don’t relate directly to the cache may be trickier than you think, especially in a clustered environment. For example, modifying your Vaadin UI(s) from a CacheEntryListener might be unstable on certain implementations.

Managed bean goodies

The javax.cache.annotation package contains a lot of handy looking annotations. With them you can implement a common caching logic for managed beans (EJB, CDI, Spring…) just by adding a single annotation to a method. For example, the above example to cache the possibly expensive backend call can be implemented as such:

@CacheResult
public List<PhoneBookEntry> getEntries(String filter) {
    // the very same business logic as in the version without caching
    return doExpensiveLookup(filter); // hypothetical backend call
}
The CacheResult annotation is all that is needed; the container then handles everything, including setting up the cache, checking the cache for existing values and storing the value if it was not found in the cache. The actual cache name can be explicitly defined in the annotation as well, or at class level using the CacheDefaults annotation. I’d say that is really simple caching, and that’s why I used quotation marks in the previous sub-title.

Super handy, but the downside is that the annotations are not that widely supported yet. Those who are using Spring 4.1 or newer are lucky enough to enjoy these goodies today. Also, the latest Payara (a commercially supported build of GlassFish) has built-in support for JCache, including the annotations, using Hazelcast as the implementation. The good folks at Tomitribe have also put together a nice CDI extension library that allows you to use these annotations in any CDI managed bean today.


If you are already using a specific caching library in your project, I don’t see that much value in eagerly switching to the JCache API today. Also, if your cache library is used by another library, for example as a second-level cache for your JPA implementation, JCache doesn’t bring you any advantage. But you should definitely start using it in new projects with basic caching requirements, and especially in projects where you plan to try different libraries. All major caching libraries already support JSR 107.

All in all, I don’t think it has really been a problem that the JCache API took ages to standardize. Portability of applications has been one of the main goals, but it hasn’t been that big an issue in this area. The usage of various caching libraries has always been rather similar (roughly a java.util.Map-like API), and it has never been a huge task to swap between caching libraries. The largest differences have been in how caches are configured. Setting up a cache can now be defined with a common API as well, but I’d guess this is an area where many users will still have to fall back to implementation-specific features. In cache configuration, with feature-packed implementations, a common standard just cannot cover everything.

In the future, actual cache usage will mostly go through the exact same API, independently of the library in use. This will increase healthy competition among caching solutions, but the largest benefit of JSR 107 will be the added productivity of Java developers, who can use the very same caching API in each and every project they work on.

The declarative annotation-based “caching hints” for managed beans are a feature I expect to become really popular. You can find example usages of both the declarative approach and the raw Java API in my example project, which uses a CDI-based service class with a Vaadin UI. As the implementation I used Hazelcast, the first library to publish support for the final JCache API, together with the handy JCache-CDI library by Tomitribe.

Check out the JCache with Vaadin example

How we improved the startup time in 7.5

When you are building web applications for non-intranet usage (or for mobile users via VPN), you cannot emphasize the importance of compression too much. Google’s developer tools will blame you for missing compression, and we have written several articles about this previously.

Still, we have seen in practice that Vaadin developers don’t always take care of this. They have chosen Vaadin because they want to delegate web app complexity to the framework, so why should they have to think about compression? It is also expensive to put engineers on fine-tuning the hosting setup. They shouldn’t have to, so we decided to do something about it.

What is new in 7.5?

In Vaadin applications, resource usage is weighted towards the first load of the application. Most of it goes to the “client-side thin client”, a.k.a. the widgetset, which a Vaadin application needs to load before it can start communicating. This thin client is often far more expensive (in terms of transferred data) than the actual UI state changes that it then exchanges with the server. Thin client resources are also static, the same for each and every user, which makes them a sweet spot for optimization.

The largest part of the widgetset is the JavaScript that the GWT compiler spits out. GWT has a really handy post-processor that gzips all generated artifacts in a matter of milliseconds, but it is not enabled by default. In Vaadin widgetsets, it now is. If you look at the generated resources, you’ll see that the output contains a “.gz” version of each generated JS file, which is only a fraction of the size of the non-compressed version.

Some application servers (like Jetty and Tomcat 8, with configuration) support this kind of pre-compressed resources out of the box. But the problem is that many Vaadin projects serve these files through VaadinServlet, not through the servlet engine’s default servlet. In Vaadin 7.5 we now include a similar feature in VaadinServlet, and for your convenience, it is on by default. So, in practice, if “VAADIN/foo/bar.js” is requested from the servlet, it checks whether a “VAADIN/foo/bar.js.gz” file is available. If there is, this pre-compressed content is streamed to the browser with the proper content-encoding headers.
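In rough terms, the lookup boils down to something like this (a simplified, hypothetical helper for illustration only; VaadinServlet’s real implementation also inspects the request’s Accept-Encoding header and sets the Content-Encoding response header when serving the “.gz” variant):

```java
import java.util.Set;

// Sketch of the pre-compressed static resource lookup.
class PrecompressedLookup {
    /**
     * Returns the path to serve: the ".gz" variant when the client
     * accepts gzip and a pre-compressed file exists, else the original.
     */
    static String resolve(String path, Set<String> availableFiles,
            boolean clientAcceptsGzip) {
        String gz = path + ".gz";
        if (clientAcceptsGzip && availableFiles.contains(gz)) {
            return gz; // must be served with "Content-Encoding: gzip"
        }
        return path;
    }
}
```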

This solution has four things going for it: it is a good default, it is completely transparent to users and developers, it consumes less bandwidth and it consumes fewer server resources (as less data needs to be sent over the wire). The only downside we have found is that some really simple compression filters don’t work properly with the feature, but there is an easy fix for that.

What about themes?

The theme files (mostly CSS in modern themes) are the second largest resources in typical Vaadin applications, and they also need to be loaded before the application can start to render. The VaadinServlet improvement treats theme files in exactly the same manner as JS files, but our tooling doesn’t yet create a “.gz” version of CSS files by default. This is, however, really easy to set up in your own build, e.g. with the yui-compressor Maven plugin:
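A configuration along the following lines should do it (a sketch assuming the net.alchim31.maven plugin coordinates; check the plugin’s documentation for the current version and options):

```xml
<plugin>
    <groupId>net.alchim31.maven</groupId>
    <artifactId>yuicompressor-maven-plugin</artifactId>
    <version>1.5.1</version>
    <executions>
        <execution>
            <goals>
                <goal>compress</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <!-- also write a pre-compressed .gz version of each file -->
        <gzip>true</gzip>
    </configuration>
</plugin>
```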


The above configuration also uses yui-compressor to strip obsolete comments and whitespace from the CSS file, making it even smaller. In my test, a basic theme based on Valo compressed from the original 304 kB down to just 29 kB (minified + gzipped).

In future versions, we plan to add similar CSS compression features to our own tooling as well.

What does this mean?

If you have previously had an “advanced hosting setup” with proper compression and even a CDN for static resources, this change doesn’t help you much. If you were using a filter or Tomcat’s connector-based features to compress resources on the fly, you might save some CPU cycles, as the resources no longer need to be run through the gzip algorithm on every request.

But for new and existing Vaadin apps with a bare-bones hosting setup, the change is noticeable. If you or your users have a slow network connection, the improvement can be huge. In a typical “office setup”, you’ll just notice a snappier application startup on Monday mornings. In our tests simulating slow network users, the initial startup time of a Vaadin app (nothing cached in the client) dropped to a quarter of that of the bare-bones setup, just by compressing the widgetset and the theme!

Although the default setup is now much faster out of the box, you still shouldn’t forget hosting fine-tuning, especially when you strive for the best possible performance. We still don’t compress the actual “state communication”, which may be considerable in some data-heavy applications as well. It wouldn’t be that difficult to compress that part dynamically in the VaadinServlet, but in some cases you’d want to do that in e.g. a front proxy instead. CDN distribution of your static files has additional benefits as well.

If you’re uncertain about your Vaadin apps’ hosting setup, I’m sure some of our consultants are more than happy to help you, as well. They’re based around the world and are happy to fly over to your office as well. Contact our sales for more info.

Get Vaadin 7.5 now