If you’ve ever hit Spring Boot’s tiny 1 MB multipart request limit, you know how frustrating it can be—especially for new Vaadin users. Just when you think you’ve fixed it, another trap awaits: reverse proxies. For example, nginx ships with the same 1 MB maximum request size by default. Fixing that can be a battle… and in some setups, practically impossible.
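When you do control the configuration, both limits are adjustable. A sketch of the usual fixes, using the standard Spring Boot multipart properties and nginx’s `client_max_body_size` directive (the 100 MB values are just examples):

```properties
# application.properties — Spring Boot multipart limits (default is 1 MB)
spring.servlet.multipart.max-file-size=100MB
spring.servlet.multipart.max-request-size=100MB
```

```nginx
# nginx — raise the 1 MB default request body limit
client_max_body_size 100m;
```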
I recently ran into this myself while deploying on a Dokku-based PaaS that I use for demos. After wrestling with nginx configs for longer than I’d like to admit, I decided to build a workaround—so that I (and other Vaadin devs) wouldn’t have to fight this fight again.
Modern browsers have a neat trick up their sleeve: the File API. It lets the upload component read selected files directly—just like in a “real” programming language. This means we can split large files into smaller chunks and send them to the server in multiple requests.
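The server never sees the File API directly, but the arithmetic behind the splitting is simple. Here is a minimal, self-contained Java sketch of the same idea—slicing a byte array into fixed-size chunks the way the browser slices a `File`. The names here are illustrative only, not Viritin API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ChunkDemo {

    /** Splits data into pieces of at most chunkSize bytes, like File.slice() in the browser. */
    static List<byte[]> split(byte[] data, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int start = 0; start < data.length; start += chunkSize) {
            int end = Math.min(start + chunkSize, data.length);
            chunks.add(Arrays.copyOfRange(data, start, end));
        }
        return chunks;
    }

    public static void main(String[] args) {
        byte[] file = new byte[2_500_000]; // pretend this is a 2.5 MB file
        List<byte[]> chunks = split(file, 1_000_000); // 1 MB chunks
        System.out.println(chunks.size()); // 3 requests instead of one oversized one
    }
}
```

Each chunk then travels in its own request, so no single request ever exceeds the proxy’s limit.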
Starting with Viritin 2.19.1, the UploadFileHandler supports chunking.
You can enable it unconditionally with your preferred chunk size.
Or let the component handle it automatically: if the server responds with a 413 Payload Too Large, chunked transfer kicks in instantly.
Either way, from your app’s perspective there’s no difference—the API is the same whether the file arrived in one piece or many.
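That “no difference” claim is easy to picture with plain JDK streams: a handler that simply drains an `InputStream` cannot tell whether the bytes arrived in one request or were stitched back together from several chunks. A small self-contained sketch (nothing below is Viritin API, just standard JDK classes):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.SequenceInputStream;
import java.util.Arrays;

public class StitchedStreamDemo {

    /** A typical upload handler: drains the stream, oblivious to how the bytes arrived. */
    static byte[] handleUpload(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        in.transferTo(out);
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] whole = "one big file".getBytes();

        // The same bytes delivered as two chunks, stitched back together server-side
        InputStream chunked = new SequenceInputStream(
                new ByteArrayInputStream("one big ".getBytes()),
                new ByteArrayInputStream("file".getBytes()));

        System.out.println(Arrays.equals(
                handleUpload(new ByteArrayInputStream(whole)),
                handleUpload(chunked))); // true
    }
}
```

The handler code is identical in both cases, which is exactly why your application logic doesn’t need to know whether chunking happened.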
Try it out in the online demo:
Open DevTools → Network tab, then pick a directory containing files over 1 MB (the demo supports directory upload as well), or drag individual files in via drag-and-drop. Even with nginx’s ridiculously small 1 MB default still in place, you’ll see those big files sail through as “chunks”.
If only something similar were possible for downloads as well. I’ve tried your Viritin component for downloading and it’s much more intuitive than the Anchor class (with its awkward upside-down logic of read/write streams). Plus your component lets you pick the save file immediately, instead of Anchor’s requirement to complete and close the stream first before asking for the filename. (What’s the use of a stream anyway if it needs closing before continuing? Not to mention how much more convenient it is for the user to start the download and be able to do other things while it goes on.)
Alas, with large files (or in fact anything that takes more than a minute or two to download), your component hits a (browser?) timeout after that minute or two and the download hangs, without saving anything. So you can’t download very large files with it.
I’m speculating (though I could be wrong) that the reason for this is that the browser isn’t notified ahead of the download about the expected file size (probably because the stream is still being written), and so after a while it just gives up listening for data.
Is there a way to work around that? Perhaps supplying an optional filesize parameter before DL actually starts?
Hi Ted, sorry for answering a bit late; hassle with travel, vacations, etc., and I’m now finally starting to purge my forum queue…
Is there an easy way how I can reproduce the issue? Anything special in your nginx proxy?
You can set the Content-Length header (i.e. the file size), but I wonder if it is some timeout in nginx that cuts the connection. Most likely the same issue exists both with the DynamicFileDownloader and if you switch to Anchor + DownloadHandler in the latest Vaadin version.
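If it is nginx cutting the connection, the usual suspect is `proxy_read_timeout`, which defaults to 60 seconds—suspiciously close to the “minute or two” described above. A sketch of the proxy settings to try; the `location` block and upstream address are placeholders for your actual config, and the one-hour values are examples, not a recommendation:

```nginx
location / {
    proxy_pass http://localhost:8080;
    # Allow slow, long-running downloads; proxy_read_timeout defaults to 60s
    proxy_read_timeout 3600s;
    proxy_send_timeout 3600s;
}
```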
No fuss, I’m a bit late answering this as well.
Nothing special. If you prepare download records on the fly and add a wait (say half a second) between them, the download works fine as long as there aren’t too many records.
Once the total time exceeds the minute or two mentioned in my previous reply, the whole download fails.