How to ensure field validation is performed on client

In a Form component, is the property validation translated into ajax/javascript and performed on the client rather than the server?

For performance and security reasons, I need the validation of the fields in a Form to be done completely in client-side JavaScript. How do I specify/ensure that? I do not wish to do the dirty work of coding the JavaScript myself, but need to ensure my Java validation code is translated to client-side JavaScript in the spirit/design intentions of GWT.

As you can see, as I type in the tag below in this forum, the client-server validation is ridiculously slow and misses keystrokes.

I wouldn’t do validation client side only; in fact, I don’t even bother with it unless I’m doing it server side first. A malicious person can send packets directly to your site, bypassing the client validation. Client validation is good to aid the user, but it’s not secure. This is also good news for you, because it means you don’t need to write any JavaScript validation yourself.

P.S. Someone gave me
THIS LINK
when I had slowness issues with GWT. Although I was lazy and migrated to Vaadin instead :grin: for its table-streaming goodie, which solved my problem (among other things), there are patterns there that would probably help you overcome the trouble you are having.

I agree that, for security reasons if nothing else, one should not rely on client-side validation.

However, if you do want to do some client-side validation of text fields (in addition to any server verifications), you can take a look at
CSValidation
.

An immediate text field sends its value to the server when it loses focus. A non-immediate text field sends its value when there is some immediate event on the UI, such as clicking on a button. If you want keystroke-by-keystroke communication with the server, check the
SuperImmediateTextField
.

I don’t understand how server-side validation could be more secure than client-side validation.

On the client side, I just wish to check that a field is an integer, has at least 10 characters, is grammatically acceptable, etc. Everyone seems to tell me to perform all this trivial validation on the client, not on the server, for SECURITY’s, PRIVACY’s, and performance’s sake.
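For what it’s worth, the trivial checks listed above are easy to express in plain Java. The sketch below (class and method names are mine, not any Vaadin API) shows the integer and minimum-length checks; in stock Vaadin these would run on the server unless compiled to JavaScript via a GWT widget.

```java
// Minimal sketch of the "trivial" field checks mentioned above.
// Names here are illustrative, not part of Vaadin or GWT.
public class FieldChecks {

    // True if the raw field value parses as an integer.
    public static boolean isInteger(String value) {
        if (value == null) {
            return false;
        }
        try {
            Integer.parseInt(value.trim());
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    // True if the value has at least minLen characters.
    public static boolean hasMinLength(String value, int minLen) {
        return value != null && value.length() >= minLen;
    }
}
```

Whether these run in the browser or on the server is exactly the question of this thread; the checks themselves are the same either way.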

Imagine sending this info back and forth from client to server because a user has trouble filling in the form - doesn’t that increase the chance of interception?

On the server side, do you think that the measures I take to reject spurious attempts are insufficient for preventing hijacking of a session?

You’ve been misled, badly, or you’ve misunderstood. Client-side-only validation is a basic security flaw in modern RIA designs. Let me give you an example. Let’s say I have a form with an integer field that sends its data to http://someserver/ajax/?intparam=42, where 42 is the value the user entered in the field. With client-side validation the user’s input value is checked; if it is accepted, the above URL is created and called. Now, if you only have client-side validation, the server will accept intparam as is and do whatever it needs to do with it.

As client-side validation is based on JavaScript, which is executed in the browser, there’s nothing that stops a user from inspecting your JavaScript code and looking for flaws (side note: please do not expect obfuscated JavaScript to keep you safe, it won’t). The user can bypass your validation or even modify it at run time. In my example case, the attack on the server is even easier. You don’t need any tool other than Firebug with its network-traffic tab enabled. There you can see what calls have been made to the server. This means that when your Ajax script has done its validation and is ready to send its data to the URL described above, the URL becomes visible in Firebug. As an attacker, all I would have to do is copy that URL string, modify the parameter to whatever I want, and execute the URL in my browser, thus sending unvalidated data to the server.

Now, to correct what you’ve been told: client-side validation is good for performance, but that’s all it’s good for. NEVER rely on that validation alone; you HAVE TO ALWAYS validate everything again on the server side. It is true that excessive calls to the server will increase the possibility of man-in-the-middle attacks, but you have to put these attacks in perspective. A session-hijacking attack is much less likely than someone bypassing your client-side validation, so I would be more concerned about the bypass attack. If you are worried about the constant Ajax calls made to the server, then run your application through SSL.
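To make the point concrete, here is a minimal sketch of the server-side guard for the intparam example above (plain Java; the class and method names are illustrative, not a real Vaadin or servlet API). The server re-checks the raw parameter no matter what the client claims to have validated.

```java
// Server-side re-validation sketch for the intparam example.
// Whatever the client-side check did (or was bypassed), the server
// verifies the raw parameter before using it.
public class IntParamGuard {

    // Returns the validated value, or throws if the raw parameter
    // is not a well-formed integer within the allowed range.
    public static int requireInt(String raw, int min, int max) {
        final int value;
        try {
            value = Integer.parseInt(raw);
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException(
                    "intparam is not an integer: " + raw);
        }
        if (value < min || value > max) {
            throw new IllegalArgumentException(
                    "intparam out of range: " + value);
        }
        return value;
    }
}
```

An attacker who calls ?intparam=drop%20tables directly simply gets a rejected request, regardless of what the browser-side script would have allowed.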

Kim is right. You should never trust client-side-only validation. It might be good to validate input data at the browser for reducing network traffic, but submitted data should always be validated at the server-side too.

If you are interested in reading more about this, Joonas has good material available here:
http://vaadin.com/web/joonas/wiki/-/wiki/Main/RIA+Security

On the server side, I have measures in place to prevent spurious requests from hijacking the session.

So the original question was not actually server side validation vs client side validation.

The original question is - I need to force responsive validation onto the client side, despite having server-side security measures on any incoming request.

Let me explain what I mean by Responsive. Responsive is a word used in English mainly within a congregational context. A teacher (or pastor or rabbi) would ask two selected groups to read a series of paragraphs responsively. That means group 1 would read paragraph 1, then group 2 reads paragraph 2, group 1 returns to read paragraph 3, then group 2 reads paragraph 4, and so on.

A responsive field validation is when a user enters information into a field but the entry does not satisfy the syntax conditions of the field. The validator responds by asking for a renewed entry; then the user enters a renewed entry.

I believe it is wrong to perform client-server responsive field validation. I am unable to convince myself that it is the right thing to do.

I don’t think anybody should bring themselves to say that this question implies the server side should not have any security measures against maliciously spurious requests. No, no, this question is not about whether there ought to be any checks on the server side. This question is about my decision not to trust client-server responsive validation, because of my religiously hard-nosed belief that it is wrong to perform that kind of validation. And the question goes on to ask how I could realize that techno-religious belief in Vaadin Field objects.

And Henri answered the question rather completely, by recommending CSValidation. Though I wish it were part of Vaadin.

The original question is - I am concerned about users’ privacy being violated during client-server responsive validation. Not about use of https or not.

My model is - if both server and client have their respective validation, all responsive validation should be performed only on the client.

Then, when the request is sent to the server (whether over SSL or not), if the parameters of the request fail the server-side criteria, it should fail completely, without any responsive reply asking the client to correct the parameters. It is wrong to increase risks to users’ privacy through client-server responsive validation, even if SSL is in place.

Let me clarify with apology. Perhaps, “wrong” is too strong a word. Rather, it is “unacceptable” in most cases.

Chiming in…

I think there was a bit of misunderstanding here regarding different types of security vs. privacy, and also whether or not you were actually suggesting to have no server-side validation.

My thoughts (for everyone to ponder):

I think you have a valid point; if one assumes the client-server communication can be eavesdropped, it is a
little
bit safer to communicate as little as possible. I say “a little” because it’s a kind of “security by obscurity”: it’s not real security, but it still makes it slightly less likely that information gets into the wrong hands accidentally.

An example: You have a server on a campus network where virtually anyone can sniff the network traffic. Unfortunately there is no way to get https working. By transmitting as little client-server traffic as possible, you minimize the chance that some eavesdropper “accidentally” sees something sensitive - but if someone actually decides to try to hack your servers, or steal some user’s data, this scheme offers no protection: it will only take the eavesdropper a few minutes longer to wait for the one single HTTP request to come through the pipes.

So my point is: by all means, do as much client-side validation as possible (in addition to server side validation) - there are several benefits (less traffic, more responsive - especially if you want to validate after each keystroke). Just be aware of the limitations, and don’t put too much trust in the ‘security’ this provides, because ultimately it’s not to be trusted.

Best Regards,
Marc

I’m not sure what the security issue is. If the data is sensitive and you are not encrypting it via HTTPS, you are doing a great disservice to your users.

Clearly, though, a person could be entering a password, for example, which is checked by the client/browser such that only a hash is sent over the wire. Some think this gives protection because the password is not sent in the clear, but since the hash (not the password) is what does the authentication, it’s no more secure for that particular transaction than sending the password itself. It is slightly better in that many users do reuse their passwords, so a password captured in cleartext could give an attacker the potential to attack related accounts - but that is a very rare and targeted/personal type of attack requiring network access.
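To illustrate why the hash is no safer for that particular transaction, here is a small self-contained sketch (plain Java; SHA-256 and all names are chosen by me for illustration, not from any real login API). Since the server compares transmitted hashes against stored hashes, an eavesdropper who captures the hash can simply replay it without ever learning the password.

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Client-hashed login sketch: the hash, not the password, is the
// credential that travels over the wire, so capturing the hash is
// as good as capturing the password for this server.
public class HashLogin {

    // The server's stored hash for an example password.
    static final String STORED = sha256("hunter2");

    // Hex-encoded SHA-256, as the hypothetical client would compute it.
    static String sha256(String s) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-256")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            return String.format("%064x", new BigInteger(1, d));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // What the server checks: the transmitted hash, not the password.
    static boolean authenticate(String transmittedHash) {
        return STORED.equals(transmittedHash);
    }
}
```

The replay in the test below is the whole point: the attacker never needs the cleartext password to authenticate against this server.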

All that said, the question for me - and I’ve not even researched it, just been listening to this thread - is: does Vaadin have a way to write pluggable code that can be delivered to the browser for validation checking in the browser, without having to communicate with the server? I mean, this is very common and efficient in traditional browser-server AJAX.

For us Java programmers, it would be nice if it could be GWT code that is translated to JavaScript for us, but it could also be the use of regular JS with ways to link our Vaadin components to use that JS to do it for us (irrespective of the setImmediate() flags). I’ve seen several such validators for textfields and such, but do not know if they work client-side or server side as I’m still a newbie.

Thanks for any insights.

Partial compilation of GWT (java) snippets on the fly is not possible/feasible - right now anyway. You must compile the whole client side before deploying. But if you’re doing that already, you can certainly code your client-side validation in java. We could probably ease this by providing some examples and/or some ready-made widgets to extend…

The CSValidation example in the incubator is a good starting point.

As for JavaScript-based validation, that would certainly be possible to do dynamically. CSValidation is probably a good starting point for this as well; just make it run a JS snippet instead of a regexp.
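As a sketch of what the regexp-driven approach looks like, here is a data-driven rule in plain Java (the class name and API are mine for illustration; CSValidation’s actual API may differ). The idea is that the same pattern string the server configures could be shipped to the browser and evaluated there on each keystroke.

```java
import java.util.regex.Pattern;

// Data-driven field rule: a regexp plus an error message, configured
// once and evaluated on every input value. Illustrative only; not
// the CSValidation API.
public class RegexpFieldRule {

    private final Pattern pattern;
    private final String errorMessage;

    public RegexpFieldRule(String regexp, String errorMessage) {
        this.pattern = Pattern.compile(regexp);
        this.errorMessage = errorMessage;
    }

    // Returns null when the value is acceptable,
    // or the error message otherwise.
    public String check(String value) {
        if (value == null || !pattern.matcher(value).matches()) {
            return errorMessage;
        }
        return null;
    }
}
```

Because the rule is just a string pattern, the browser-side evaluator needs no custom-compiled logic per field, which is what makes the approach attractive for keystroke-by-keystroke feedback.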

Also: I think a) future versions of Vaadin will have some built-in client-side validation, and b) the upcoming add-on “Directory” will probably see a bunch of add-ons addressing these issues.

Best Regards,
Marc

Repetitive use of the same encryption key increases the chances of decryption. Especially when there are well-known communist countries attempting to steal dissidents’ information.

For a particular ssl session, is the same encryption key reused for every response? If so, I would reduce the number of responsive cycles per key.

Good grief, you have some sensitive information with high value targets…seems like the web is not the place for such communications.

I have no idea what the rest of the world is using, but most US SSL uses 128-bit or 256-bit AES these days, and I’ve not heard of any exploits yet, though as you suggest, if there were a nasty party who did break it, they wouldn’t tell us. SSL will use the same key during the entire session, but of course you’ll get a new one the next time you visit the site.

But still, if your users’ lives depend on you doing client-side validations to avoid additional network traffic that will enable the crack, it seems like they either need to stay on your site a very short period of time or they’d be toast just by using it for any length of time where regular network traffic would take place (outside of validation).

But it sure is amazing that you have to be that concerned about security, whereas financial institutions and commerce sites seem unconcerned and just go ahead with SSL (turning off weak ciphers in most cases - anything below 128-bit is typically blocked). Seems like attackers would sooner crack your server than try to defeat your users’ SSL sessions.

Good luck, and don’t give me your URL since I don’t want to visit a site under such cracking scrutiny :wink:

Remember that if you are validating a form, the validation is usually done when someone clicks on the submit-button. This means that if the validation fails, we’ve only made one extra round-trip to the server. That one extra round-trip probably won’t make much of a difference if someone is trying to crack your encryption. Especially when you consider the architecture of Vaadin where basically every action makes a round-trip to the server.

And secondly, I can bet that your application and/or server configuration has a weak point far more likely to be exploited than your encryption is to be cracked.

I am not an encryption expert, and most of the time I become cautious after hearing some hearsay or heresy. So let me dispense my heresy, and you be the psychiatric therapist who calms my nerves.

Let’s say we have an encryption with entropy of log2(per-bit-possibilities) = 256. Let me express that as akin to putting one hydrogen atom in a confined space in a total vacuum, where it has only 2^256 possibilities for its orientation and location, etc. I am not a particle physicist either, so I am again operating on hypothetical hearsay/heresy; don’t fault me for my imprecision in molecular physics or chemistry. As I inject more hydrogen atoms into that space, the positional/orientational entropy of each atom is reduced.

So I am looking at information entropy with a similar eye. Given the same key, would you not say that, even though a user keys in a different attempted password each time, the user has injected more stuff into the space without an overall increase of entropy, and has significantly reduced the expected entropy of each bit?

Let’s say a user attempts to set up a password using 8 16-bit chars. That would be 128 bits per try. So, after 10 tries, the user’s proposal for a password is finally accepted by my password criteria. What about the request’s repeated HTTPS and HTML form overhead, like the submit button’s name and value, and the returned error message about an unacceptable password format? 1280 bits plus all of that; let’s say a total of 4 kbytes of unique information was transmitted. And because the hacker has also used the application and sniffed all these repeated bytes, encountering the same error message, JSON terminators, and button names/values, that reduces the expected entropy further, because they are like unchanging microscopic walls constraining the movement, position, and orientation of each hydrogen atom.
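Just to pin down the arithmetic in that scenario (the 4-kbyte overhead figure is the poster’s own estimate, not a measurement, and the constants below simply restate the scenario):

```java
// Restating the password-retry arithmetic from the scenario above:
// 8 characters of 16 bits each per attempt, 10 attempts.
public class EntropyArithmetic {

    static final int BITS_PER_CHAR = 16;
    static final int CHARS_PER_TRY = 8;
    static final int TRIES = 10;

    // Bits of password material sent in one attempt.
    static int bitsPerTry() {
        return CHARS_PER_TRY * BITS_PER_CHAR; // 128 bits
    }

    // Total password bits across all attempts.
    static int passwordBitsTotal() {
        return bitsPerTry() * TRIES; // 1280 bits
    }

    // The same total expressed in bytes.
    static int passwordBytesTotal() {
        return passwordBitsTotal() / 8; // 160 bytes
    }
}
```

So the unique password material is only 160 bytes; the bulk of the assumed 4 kbytes is the repeated form and protocol overhead the poster is worried about.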

Now say I am on a mission to cloudify the engineering, research, financial, and manufacturing data analysis of a company. Not that I am. There are thousands of fields that get verified, though a user would hardly use all of them within the same session. Now I am thinking further about the reduction in encryption entropy.

See, I now seem to be so threatened that I need to have a set of data accessed once and only once and placed in the local session, so that all field verification can be done locally without traverse (or travesty) of the network. Do you think it is safe for me to let the user verify those fields over the network? It’s not a banking operation of a paltry few pages, you know. Meanwhile, the corresponding set of data sits on the server for the benefit of graph plotting and other manipulation that the user chooses.

That is to say: whatever needs to be done on the server, let it stay on the server, and whatever needs to be done by the local session, let it stay in the local session, rather than transmitting whole amounts of data to the Window’s urihandler or parameterhandler and then having the window transmit it back just because the application needs to find out the URI and parameters. Why should those handlers sit on the Window widget rather than on the Application servlet?

Then again, perhaps: who wants to hack into a stupid manufacturing concern’s engineering data? Why would anybody want to do that?

What about the case of the insider telling a collaborator

… between 10 and 10.05 am I will be accessing such pages, with multiple erroneous field entries. Then the tacit message is - for the rest of the session, the collaborator can then sniff my transmissions and find out that critical parameter set that defines the secret of our new product or you can judge for yourself whether to buy or dump the company’s stocks …

all done within plausible deniability of the insider. Is this situation one that needs to be mitigated or am I being paranoid?

I am trying to start a business, so maybe I should spend more time (and money) on security issues. But in the meantime, I am doing my best, at the lowest cost to myself, to weed out possibilities. It is not that I know so much, but that I know so little about security, that I will take efforts to reduce, if not eliminate, every possibility. Part of those efforts is to reduce the transmission of information and to reduce the reduction of encryption entropy, and I am evaluating whether Vaadin is suitable for such a goal and how costly the mitigation would be if there are some corners to be repaired. Perhaps I should just stick with vanilla GWT, where I know and understand the data transfer. But Vaadin is so seductive because of all the widgets it has.

If you feel I am paranoid, you have to calculate out the entropy numbers to show me that I am, rather than saying: well, the banks have no worries, why should you?

There are much easier ways to get into your network and computers than to apply cryptanalysis on network traffic encrypted with current standard mechanisms. While you have a point in theory, mounting such attacks in real life systems requires so much expertise and is so costly that only organizations like the NSA are capable and ready to do so - and only if they have very good cause to do so.

If there is an insider, it is so much easier to put a keylogger between your keyboard and computer, or install a trojan horse on the computer, or exploit a vulnerability in some software you did not upgrade to the latest version in time, or …

If you focus so much energy on minimizing encrypted network traffic, you are also likely to forget much more significant security issues. Furthermore, the more you code yourself, the more bugs and vulnerabilities you are going to create - this is just statistics. For the network traffic, if paranoid, just increase the required SSL/TLS key length.

Disclaimer: I used to develop network security software professionally for several years.