Typically we don’t run into such trouble. But occasionally, during maintenance in our data center, it can happen that the Tomcat server with our WebApp starts before the server with the DB is up.
Is there anything we can do within our WebApp so that it retries the connection on its own? Right now the login URL is monitored; it returns a 404 if the app couldn’t start, so at some point somebody notices and restarts Tomcat…
Right now, Tomcat doesn’t know about the database - only the WebApp does, via application.properties. So the WebApp doesn’t know that the DB is back and stays stuck at 404.
All that because it started as a “project only”, resulting in a demonstrator. But now…
Yes, we should consider the Tomcat DB Connection Pool! I should have thought of that before asking here.
Even if it means a bit more effort for the local developer, it’s far more practical for the production environment.
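For reference, a Tomcat-managed JNDI DataSource can be declared in the webapp’s META-INF/context.xml - a sketch only; the resource name, driver, URL, and credentials below are placeholders for illustration, not your actual setup:

```xml
<!-- META-INF/context.xml - sketch; name, driver, URL, and credentials are placeholders -->
<Context>
  <Resource name="jdbc/AppDB"
            auth="Container"
            type="javax.sql.DataSource"
            driverClassName="org.postgresql.Driver"
            url="jdbc:postgresql://dbhost:5432/appdb"
            username="app"
            password="secret"
            maxTotal="20"
            maxIdle="5"
            testOnBorrow="true"
            validationQuery="SELECT 1"/>
</Context>
```

A Spring Boot WAR can then look the pool up instead of managing its own, via `spring.datasource.jndi-name=java:comp/env/jdbc/AppDB` in application.properties. The upside for production is that the pool (and its validation/retry behavior) lives with Tomcat rather than inside the app.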
Wait a minute… an “application.properties”? That sounds more like a Spring Boot application… which you are deploying on an external Tomcat? That sounds stressful. Spring Boot normally handles reconnecting to databases flawlessly (except for the tricky startup case…)
Yes, we are creating a .war file and deploying it to Tomcat via a Jenkins build pipeline.
And yes, it’s the (rare) not-starting case I want to solve. Interruptions in between don’t give us a headache, correct.
I have read some setup tips concerning pooling so far, and I really think it is something we should aim for - but the more I’ve read, the less sure I am that it will solve the startup issue…
Interesting to see how old the original post already is
Yes - due to Hibernate it doesn’t come up at all at the moment. I was now looking at it from the opposite end… letting Tomcat try to redeploy in case the deployment wasn’t successful. It’s possible (with scripts checking health and “touching” the .war), but well… it would be a workaround, nothing more.
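For what it’s worth, such a “touch the war” watchdog can be a few lines of shell run from cron - a sketch only; the health URL and war path are assumptions, and it presumes Tomcat’s host has `autoDeploy="true"`:

```shell
#!/bin/sh
# Watchdog sketch: redeploy the webapp when its health URL answers 404.
# HEALTH_URL and WAR are placeholder values for illustration.
HEALTH_URL="${HEALTH_URL:-http://localhost:8080/myapp/login}"
WAR="${WAR:-/opt/tomcat/webapps/myapp.war}"

# -w '%{http_code}' prints the HTTP status code (000 if the server is unreachable).
STATUS=$(curl -s -o /dev/null --max-time 10 -w '%{http_code}' "$HEALTH_URL")

if [ "$STATUS" = "404" ] && [ -f "$WAR" ]; then
  # Updating the .war's mtime makes Tomcat (autoDeploy="true") redeploy it.
  touch "$WAR"
  echo "redeploy triggered (status $STATUS)"
else
  echo "no action (status $STATUS)"
fi
```

Scheduled every few minutes from cron, this would automate the “notice and restart” step - still a workaround, as you say, since it restarts the deployment rather than making the app reconnect.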
The WebApp fails only after a timeout of one minute. Starting the DB within that minute lets the application start up fine!
Sooo - that could be another workaround for us (well, it’s not really a workaround but a kind of solution). When I expect a DB downtime of, e.g., less than 30 minutes, I set the timeout accordingly and all is fine.
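If the app runs on Spring Boot 2+ with the default HikariCP pool (an assumption on my part), that startup timeout is configurable in application.properties: `initialization-fail-timeout` controls how long the pool keeps trying to obtain its first connection before application startup fails. A sketch with the 30-minute window you mention:

```properties
# Sketch, assuming Spring Boot with HikariCP; values are in milliseconds.
# Keep trying to obtain the first connection for up to 30 minutes at startup:
spring.datasource.hikari.initialization-fail-timeout=1800000
# How long a single connection attempt may take:
spring.datasource.hikari.connection-timeout=30000
```

A negative `initialization-fail-timeout` would skip the initial connection attempt entirely, though with Hibernate in the picture the context may still fail for other reasons when the DB is down.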
Perhaps not the very best solution, but at least something.
PS: a “kind of” solution because the application is mostly used for processing and exchanging data between partners. Users can open it for some monitoring, but in general it is not a heavily used user-facing app. If it were, some kind of “maintenance” view should be delivered, but right now that’s not necessary.