Mysterious Cache Problem

Hi all,

this might not be a Vaadin-related issue, but I have run out of ideas and wanted to check whether anyone in the forum has seen anything similar. My application appears to be caching some of its pages.

The app is a set of search and edit use cases. My problem is that when I edit an entity, save the changes, and then open the same instance again, the changes do not show up. This happens very sporadically, which makes it extremely difficult to trace possible causes.

I am using Vaadin 7.5.3 with POJO model entities (Hibernate-annotated) and a Hibernate 4.3.1 (EntityManager API) persistence layer over a MySQL 5.1 database in the backend. I use RESOURCE_LOCAL as the transaction type in persistence.xml and InnoDB as the storage engine. The application server is Tomcat 7.0 (Servlet 3.0, so web.xml is skipped).

What I have already checked is:

  • that Hibernate is not caching: I have set the shared cache mode to none in persistence.xml and enabled statistics output to the console via hibernate.generate_statistics=true. The statistics are printed, and the second-level cache hit count stays at 0, which should mean the cache is not being used at all. This didn't solve it.

  • in MySQL I have set query_cache_size to 0, and innodb_buffer_pool_size to 0 as well. I restarted the server, and that hasn't worked either, besides making everything run tremendously slowly (I had to set innodb_buffer_pool_size back).

  • in the browser I have tried deleting all history data.

After these three changes, I checked whether the HTML headers the application sends contain any cache-related settings, and apparently they do not, so Vaadin shouldn't be caching either.

From what I have researched, nothing should be caching anything, but something still is. It doesn't happen all the time. One curious behavior: restarting the browser (and often clearing the browser cache) fixes the issue, but not 100% of the time.

I don't know where else to look and can't find a reliable solution. Can anybody give me a hint, please?

Here are the main config files (persistence.xml, my.ini):

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
  <persistence-unit name="SofiaPU" transaction-type="RESOURCE_LOCAL">
    <properties>
      <property name="javax.persistence.jdbc.url" value="jdbc:mysql://*****:3306/*****"/>
      <property name="javax.persistence.jdbc.password" value="*****"/>
      <property name="javax.persistence.jdbc.driver" value="com.mysql.jdbc.Driver"/>
      <property name="javax.persistence.jdbc.user" value="****"/>
      <property name="hibernate.generate_statistics" value="true"/>
      <property name="hibernate.dialect" value="org.hibernate.dialect.MySQLDialect"/>
      <property name="hibernate.connection.release_mode" value="on_close"/>
      <property name="hibernate.show_sql" value="false"/>
    </properties>
  </persistence-unit>
</persistence>
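The cache-disabling settings I referred to above are the standard JPA/Hibernate ones; my exact lines were not preserved, but they were of this form (shared-cache-mode goes inside the persistence-unit, the properties next to the others):

```xml
<!-- JPA 2.0: disable the shared (second-level) cache for this unit -->
<shared-cache-mode>NONE</shared-cache-mode>

<!-- Hibernate equivalents, as persistence-unit properties -->
<property name="hibernate.cache.use_second_level_cache" value="false"/>
<property name="hibernate.cache.use_query_cache" value="false"/>
<!-- statistics printout used to verify the cache hit count stays at 0 -->
<property name="hibernate.generate_statistics" value="true"/>
```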


# MySQL Server Instance Configuration File
# Generated by the MySQL Server Instance Configuration Wizard.
# (The wizard's standard explanatory comments are omitted here;
#  only the settings below survived in the post.)

[mysqld]

# Path to installation directory. All paths are usually resolved relative to this.
basedir="C:/Program Files (x86)/MySQL/MySQL Server 5.1/"

# Path to the database root
datadir="C:/ProgramData/MySQL/MySQL Server 5.1/Data/"

max_allowed_packet=500M

I think this info should be enough, but if you need any other info, please say so and I will upload it.

Many thanks in advance,

Carlos Conti.

Sorry, I forgot to say that when I check the changes in the database, they are there. It is a refresh problem.

  1. Are you sure the tables are created as InnoDB tables?

  2. Setting the InnoDB buffer pool size to 0 is something you should never do.

  3. I'm not sure what the default is, so you should probably set the transaction isolation in my.cnf:

transaction-isolation = READ-COMMITTED

REPEATABLE READ will not show new data within a single transaction.

  4. The JPA entity manager does do caching (Hibernate is a JPA implementation). You can explicitly flush and clear your entity manager; you can even ask the entity manager to refresh a single entity:

eManager.refresh(command); // the autogenerated key value is now populated


Hi Petrus,

many thanks for your suggestions. InnoDB applies REPEATABLE READ by default, but that is exactly the isolation level I want in my app; otherwise I might read data that is not yet truly written to the database, and all our business logic ultimately relies on the database, so reading transient data is not an option. Besides, bear in mind that the dynamic edit-search-edit flow is implemented with each step on a separate transaction. So the isolation level shouldn't be an issue, and REPEATABLE READ is an appropriate isolation level for our purposes.

Tables are InnoDB, no doubt. I don't see the point in flushing since, as I said, the changes are written to the database; they just don't refresh on screen, despite the fact that each screen runs in a separate transaction.

Extending the post: I have tried to disable Tomcat caching by adding the attribute allowCaching="false" in the app's context.xml, but that hasn't worked either. Finally, I have prepended a Cache-Control header to the servlet response to disable browser caching.
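Something along these lines; this is a minimal, self-contained sketch of the usual anti-caching header trio, with the `applyNoCache` helper and its wiring being illustrative (in a real servlet filter you would pass `response::setHeader`):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

public class NoCacheHeaders {

    // Apply the standard anti-caching headers via any header setter,
    // e.g. HttpServletResponse::setHeader inside a servlet filter.
    static void applyNoCache(BiConsumer<String, String> setHeader) {
        setHeader.accept("Cache-Control", "no-cache, no-store, must-revalidate"); // HTTP/1.1
        setHeader.accept("Pragma", "no-cache");  // HTTP/1.0 proxies
        setHeader.accept("Expires", "0");        // an already-expired date
    }

    public static void main(String[] args) {
        // Demonstrate against a plain map instead of a real response object.
        Map<String, String> headers = new HashMap<>();
        applyNoCache(headers::put);
        System.out.println(headers.get("Cache-Control"));
    }
}
```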

I am truly desperate about this. The problem is not as bad as it was some weeks ago, but I need to find a reliable solution asap.

If not a solution, I would at least like to be able to monitor (as I can in Hibernate) whether a Tomcat response or a MySQL result comes from a previously cached result or not. That would help. Any ideas?

Many thanks!


BTW, READ COMMITTED gives truly committed data. REPEATABLE READ will also give you committed data, but it will NOT give you new data created after your transaction started; so if you start a transaction and keep it open, even one day later it will give you exactly the same data as when you started the transaction.

You are using transaction-type RESOURCE_LOCAL:

<persistence-unit name="SofiaPU" transaction-type="RESOURCE_LOCAL">

It might be the case that your transactions stay active for long periods (in that case you will definitely see the behaviour you describe). You should probably use transaction-type="JTA" and configure the data source in your application server's JDBC resources.

  1. Enable logging in Hibernate (or whatever JPA implementation you use); it should have a log setting to print the actual SQL it executes. From that you can infer whether your action resulted in a cached reply or not, and whether a cached result comes from the database or the entity manager.
  2. If you don't use Hibernate-specific code, I suggest swapping the JPA implementation for EclipseLink or even Apache OpenJPA; then you can see whether this is a Hibernate configuration issue or not. I've had less hassle with EclipseLink than with the other two.

Hi Petrus,

you are right on that, sorry, my mistake: my explanation applied to READ_UNCOMMITTED, which is not the case here.

I could try that and set the isolation level to READ_COMMITTED; however, that shouldn't be the root of the problem, because my transactions last only a few minutes, and with the big hassle this is causing, I can assure you we are double-checking every single time. As I said, the usage pattern is search-edit-search-edit most of the time, and for every action (search or edit) we create and close a transaction via the following methods in our persistence util class:

(These belong to an abstraction class called SessionLayer, which is instantiated by a UI-level static instance called SessionFactory, which holds the reference to the EntityManagerFactory instance responsible for creating the sessions. So basically we have one transaction per session.)

If I finish all transactions properly, should RESOURCE_LOCAL still play a role? Transactions shouldn't remain active; if they did, it would totally make sense, but that is not the case.

Many thanks for the active discussion.



public synchronized void initTransaction() throws Exception {
    try {
        if (!entityManager.getTransaction().isActive()) {
            entityManager.getTransaction().begin();
        } else {
            // throw new Exception("Transaction in progress. Finish that transaction first.");
        }
    } catch (Exception exc) {
        throw new Exception(exc.getMessage());
    }
}

public synchronized void finishAndCommitTransaction() throws Exception {
    if (entityManager.getTransaction().isActive()) {
        try {
            entityManager.getTransaction().commit();
        } catch (Exception ex) {
            throw new Exception(ex.getMessage());
        }
    } else {
        throw new Exception("There is no active transaction");
    }
}

public synchronized void finishAndRollbackTransaction() throws Exception {
    if (entityManager.getTransaction().isActive()) {
        entityManager.getTransaction().rollback();
    }
}

You should have one (or more) transaction(s) per request rather than one transaction per session, BTW.

If you finish all your transactions properly (even in case of unexpected runtime exceptions), this should not play a role. JTA just simplifies things (mostly).
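"Finish properly even on unexpected runtime exceptions" usually means a finally-guarded shape. A minimal, self-contained sketch (FakeTx is a stand-in written for this example, not the real EntityTransaction):

```java
public class TxPattern {

    // Minimal stand-in for an EntityTransaction, just to exercise the pattern.
    static class FakeTx {
        boolean active, committed, rolledBack;
        void begin()       { active = true; }
        void commit()      { active = false; committed = true; }
        void rollback()    { active = false; rolledBack = true; }
        boolean isActive() { return active; }
    }

    // Commit on success; the finally-guard rolls back if the work (or the
    // commit itself) threw, so the transaction can never be left ACTIVE.
    static void runInTx(FakeTx tx, Runnable work) {
        tx.begin();
        try {
            work.run();
            tx.commit();
        } finally {
            if (tx.isActive()) {   // reached only if work or commit threw
                tx.rollback();
            }
        }
    }

    public static void main(String[] args) {
        FakeTx ok = new FakeTx();
        runInTx(ok, () -> {});
        System.out.println("ok: committed=" + ok.committed + " active=" + ok.isActive());

        FakeTx bad = new FakeTx();
        try {
            runInTx(bad, () -> { throw new RuntimeException("boom"); });
        } catch (RuntimeException expected) {
            // the exception still propagates; the transaction is already closed
        }
        System.out.println("bad: rolledBack=" + bad.rolledBack + " active=" + bad.isActive());
    }
}
```

With the real EntityManager the same shape applies; only the guard in finally changes to `entityManager.getTransaction()`.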

Then I suggest you enable SQL logging in Hibernate (if possible) and/or in JDBC (the profileSQL=true connection property, if I'm not mistaken) to confirm whether the query is actually executed on MySQL or not. If you see the JDBC queries, the issue is probably somewhere in the transactions/isolation level; if not, it is an issue with Hibernate caching (but as noted before, I don't use Hibernate much).
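Concretely, that logging could be switched on with persistence-unit properties along these lines (the property names are standard Hibernate and MySQL Connector/J ones; the host and database in the URL are placeholders):

```xml
<!-- persistence.xml: make Hibernate print the SQL it actually runs -->
<property name="hibernate.show_sql" value="true"/>
<property name="hibernate.format_sql" value="true"/>

<!-- Connector/J: driver-level statement log, enabled via the JDBC URL -->
<property name="javax.persistence.jdbc.url"
          value="jdbc:mysql://localhost:3306/mydb?profileSQL=true"/>
```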

Petrus, you might have hit it. After your last post you gave me the idea to check whether any transactions were still active after each transaction finished, and voila: after executing 'show innodb status' in MySQL I can see that, after some activity, many transactions remain ACTIVE. That is very likely the root cause, and it makes sense together with the REPEATABLE READ isolation level.

I will focus on properly shutting down transactions that are committed but still active; that might be it.

Will post the results. Many thanks for your help.


Hi Petrus,

the problem is solved, but honestly I don't know exactly why. The solution was to introduce c3p0 as the connection pool in my application, and everything was magically fixed. So an insufficient connection-management setup might have caused the error, somehow not allowing transactions to finish and thus not allowing other transactions to read fresh data.
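For anyone landing here later: Hibernate's c3p0 integration is enabled via persistence-unit properties of this form (the values shown are illustrative, and the hibernate-c3p0 jar plus c3p0 itself must be on the classpath):

```xml
<property name="hibernate.c3p0.min_size" value="5"/>
<property name="hibernate.c3p0.max_size" value="20"/>
<property name="hibernate.c3p0.timeout" value="300"/>
<property name="hibernate.c3p0.max_statements" value="50"/>
<property name="hibernate.c3p0.idle_test_period" value="3000"/>
```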

I hope never to see this problem again. It was exhausting.