Practical Introduction to Big Data and MapReduce Webinar

Join Christoph Engelbert (Hazelcast) and Matti Tahvonen (Vaadin) for an introduction to Big Data and MapReduce. Learn how to query and process data that is too large to be handled by a single server. In practice we’ll use Hazelcast and Vaadin to showcase how to do it, but the concepts you learn are generic and apply to handling Big Data in general.

The webinar takes place today, Thursday, February 5th 2015 @ 3 PM CET

embedyoutube=3t9A0sbfz-A

Post your questions and comments below. Thank you!

Hi,

The examples we went through, and some more, are available via GitHub.

Somebody asked whether it is hard to move from Vaadin JPAContainer usage to “hazelcast” style programming. No, it is not. Instead of using JPAContainer, you just list simple beans, e.g. using BeanItemContainer, like many do with JPA services as well. Naturally, if your data store is BIG, you should not list everything in your UI but do smart queries, e.g. with MapReduce style programming, and show only the relevant information to the end user. You could also do some lazy binding, for example by using the new helpers I recently introduced in Viritin, in case your entities come from the backend in a deterministic order. It should be “dead simple”. The example app actually contains a simple CRUD sample to edit the salaries used in some MapReduce tutorials.
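To give a rough idea of the MapReduce style queries mentioned above, here is a minimal sketch (not the exact code from the webinar repo), assuming the Hazelcast 3.x MapReduce API and a made-up Employee bean with department and salary fields. It sums salaries per department across the cluster, so only a small result map ever reaches the UI layer:

```java
import java.util.Map;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.mapreduce.Context;
import com.hazelcast.mapreduce.Job;
import com.hazelcast.mapreduce.JobTracker;
import com.hazelcast.mapreduce.KeyValueSource;
import com.hazelcast.mapreduce.Mapper;
import com.hazelcast.mapreduce.Reducer;
import com.hazelcast.mapreduce.ReducerFactory;

public class SalaryPerDepartmentJob {

    // Made-up bean for illustration; the webinar repo uses its own salary entities.
    public static class Employee implements java.io.Serializable {
        private final String department;
        private final int salary;

        public Employee(String department, int salary) {
            this.department = department;
            this.salary = salary;
        }

        public String getDepartment() { return department; }
        public int getSalary() { return salary; }
    }

    // Map phase: emit (department, salary) for every entry, on the member that owns it.
    public static class SalaryMapper implements Mapper<String, Employee, String, Integer> {
        @Override
        public void map(String key, Employee employee, Context<String, Integer> context) {
            context.emit(employee.getDepartment(), employee.getSalary());
        }
    }

    // Reduce phase: sum the emitted salaries per department.
    public static class SalarySumReducerFactory implements ReducerFactory<String, Integer, Integer> {
        @Override
        public Reducer<Integer, Integer> newReducer(String department) {
            return new Reducer<Integer, Integer>() {
                private int sum;

                @Override
                public void reduce(Integer salary) {
                    sum += salary;
                }

                @Override
                public Integer finalizeReduce() {
                    return sum;
                }
            };
        }
    }

    public static void main(String[] args) throws Exception {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Employee> employees = hz.getMap("employees");
        employees.put("1", new Employee("engineering", 4000));
        employees.put("2", new Employee("engineering", 4500));
        employees.put("3", new Employee("sales", 3800));

        JobTracker tracker = hz.getJobTracker("default");
        Job<String, Employee> job = tracker.newJob(KeyValueSource.fromMap(employees));

        // The job runs on all members; only the small aggregated map comes back.
        Map<String, Integer> salaryPerDepartment = job
                .mapper(new SalaryMapper())
                .reducer(new SalarySumReducerFactory())
                .submit()
                .get();

        System.out.println(salaryPerDepartment); // e.g. {engineering=8500, sales=3800}

        Hazelcast.shutdownAll();
    }
}
```

The resulting map is small, so on the Vaadin side you can simply wrap results like these in beans and list them through a BeanItemContainer, just like any other in-memory data.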

We decided to make a simple JCache API with Vaadin tutorial (with Hazelcast naturally as the implementation). When we have that ready I can share some examples.
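As a little teaser while the tutorial is in the works, here is a minimal sketch using only the standard JCache (JSR-107) API, assuming the Hazelcast JCache provider is on the classpath; the cache name and values are made up for illustration:

```java
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.spi.CachingProvider;

public class JCacheTeaser {

    public static void main(String[] args) {
        // Resolves the JCache provider found on the classpath, e.g. Hazelcast's.
        CachingProvider provider = Caching.getCachingProvider();
        CacheManager cacheManager = provider.getCacheManager();

        // Typed key/value configuration for a simple String-to-String cache.
        MutableConfiguration<String, String> config =
                new MutableConfiguration<String, String>()
                        .setTypes(String.class, String.class);

        Cache<String, String> cache = cacheManager.createCache("webinar-demo", config);
        cache.put("greeting", "Hello from JCache backed by Hazelcast");
        System.out.println(cache.get("greeting"));

        cacheManager.close();
    }
}
```

The nice part is that the code only touches the javax.cache API, so the Vaadin side stays provider-agnostic.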

cheers,
matti

I’ve committed the latest changes, mostly to make the source more readable, and uploaded them to the GitHub repository :slight_smile: Thanks to Vaadin and Matti for this fun webinar!