RFC: Moving towards validators in fields

Some thoughts on this. When I look at this example, it seems like the order in which you add constraints and validators affects their validation sequence. While this is likely true for validators, it’s probably not the intended behavior for constraints. Aren’t they supposed to be validated before custom validators?

I believe this confusion happens because we are trying to put two concepts on the same line: validators and constraints. Validators are a general concept: you can have multiple validator instances (each with its own error message) that are executed in a specific order. Constraints, on the other hand, are a higher-level abstraction built on top of validators to save you the bother of dealing with validation in straightforward cases. Constraint validators always run in a pre-defined order, and each constraint validator can only have a single instance. Additionally, constraints may have an impact on the UI.

In my view, if we stick with Binder handling validation, it should be capable of handling everything related to validation: validators, their order, and error messages. Constraint methods might then only be used for adding visual aspects when using Binder. I’d like to mention that Flow already provides plenty of validators like StringLengthValidator and so on, which can be used to achieve constraint validation. The only thing it doesn’t cover is bad (unparsable) input.

Bad input stands out from other validators because it should be validated before “required” but, at the same time, after any cross-field validators, and ideally, it should be enabled by default or required. It’s a problem that both approaches struggle with at the moment. Even if you introduce addValidator API to components, it still won’t let you put a cross-field validator before the bad input check unless additional API is introduced. A similar issue applies to Binder: it doesn’t offer any API for reacting to bad input, so you can’t adjust the position of its validator or customize the error message. The error message can now be customized with i18n, but this goes against the principle that Binder should manage everything related to validation.
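To make the ordering discussed here concrete, below is a minimal plain-Java sketch (all names are made up for illustration; this is not Vaadin API) of a validation pipeline where cross-field validators run before the bad-input check, the bad-input check runs before “required”, and all remaining validators run last:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch: a fixed pipeline that runs validator groups in the
// order discussed above. Not Vaadin API; names are invented.
class ValidationChain {
    record Result(boolean ok, String message) {
        static Result pass() { return new Result(true, null); }
        static Result fail(String m) { return new Result(false, m); }
    }

    private final List<Function<String, Result>> crossField = new ArrayList<>();
    private Function<String, Result> badInput = raw -> Result.pass();
    private Function<String, Result> required = raw -> Result.pass();
    private final List<Function<String, Result>> others = new ArrayList<>();

    ValidationChain addCrossField(Function<String, Result> v) { crossField.add(v); return this; }
    ValidationChain setBadInput(Function<String, Result> v) { badInput = v; return this; }
    ValidationChain setRequired(Function<String, Result> v) { required = v; return this; }
    ValidationChain addValidator(Function<String, Result> v) { others.add(v); return this; }

    // Cross-field validators run before the bad-input check, bad input
    // before "required", and everything else last; first failure wins.
    Result validate(String raw) {
        for (var v : crossField) { var r = v.apply(raw); if (!r.ok()) return r; }
        var r = badInput.apply(raw);
        if (!r.ok()) return r;
        r = required.apply(raw);
        if (!r.ok()) return r;
        for (var v : others) { r = v.apply(raw); if (!r.ok()) return r; }
        return Result.pass();
    }
}
```

With something like this, the framework could ship a default bad-input validator in that fixed slot while still letting you replace it or put cross-field checks ahead of it.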

As a developer, I like when things are as explicit and flexible as possible. So the convenient way for me would be that as soon as I bind a Binder to a field, constraints are no longer validated. To validate constraints, I should explicitly add the corresponding Flow validators, including something for bad input (a validator?). If I don’t add the bad input validator, then a default one is used before the required validator. Or, even more aggressively, the framework would not allow me to run the app until I add a bad input validator explicitly for fields that require it. In other words, I would appreciate Binder guiding me on which validators I should add to achieve good UX (I’m not considering the breaking changes this could result in, it’s just how I’m used to working with validation :slight_smile: )


Good point, the current field classes don’t have an additional generic type to specify its converted (bean property) type for setConvertedValue and getConvertedValue.

Of course, knowing the bean’s property type is a problem only if the field also becomes responsible for conversion of its value to/from it (which is why I mentioned it parenthetically—I didn’t really like where that thought was going), but the field would need that type in order to keep the full validation/conversion chain happy if there are any conversions involved. My initial thought for these two methods would be for the Binder to call them internally so it could convert to/from the bean/field value when it is doing its copying. Either that, or the Binder needs access to the field’s validation/conversion chain. I am not sold on this approach.

I don’t see how with this approach you can completely separate validators from converters though, as there might be multiple of each for a given field value/bean property where validators may “need” (or at least want) to operate at each step of a multi-step conversion.

Alternatively, if we limit validation to just the two ends of the conversion, then we will have two types of validators:

  1. field value validators (values that make the field happy) and

  2. bean property validators (values that make the bean happy).

(In addition to the Binder-level validators.)
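To sketch what “validate at both ends of the conversion” could look like in plain Java (hypothetical names, not Vaadin API): field-value validators see the presentation value before conversion, bean-property validators see the converted model value after it:

```java
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

// Hypothetical sketch of the two validator types: field-value validators
// run on the field's own value, bean-property validators on the converted
// value. Not Vaadin API.
class TwoEndedBinding<FIELD, BEAN> {
    private final List<Predicate<FIELD>> fieldValidators;
    private final Function<FIELD, BEAN> converter;
    private final List<Predicate<BEAN>> beanValidators;

    TwoEndedBinding(List<Predicate<FIELD>> fieldValidators,
                    Function<FIELD, BEAN> converter,
                    List<Predicate<BEAN>> beanValidators) {
        this.fieldValidators = fieldValidators;
        this.converter = converter;
        this.beanValidators = beanValidators;
    }

    // Returns the converted value if both ends are happy, else null.
    BEAN writeToBean(FIELD fieldValue) {
        for (var v : fieldValidators) {
            if (!v.test(fieldValue)) return null; // field rejects its own value
        }
        BEAN converted = converter.apply(fieldValue);
        for (var v : beanValidators) {
            if (!v.test(converted)) return null; // bean property rejects it
        }
        return converted;
    }
}
```

A multi-step conversion would need one such validator list per step, which is exactly the complication mentioned above.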

When the Binder populates the field, even though the Binder/bean may be happy with the property value, the field might have to “reject” the value because it is invalid from its perspective. Similarly, although the field may be happy with a user-entered value, the Binder/bean may reject it based on its own validators (including JSR 380’s), which the field then needs to report to the user.

When the Binder populates the field, even though the Binder/bean may be happy with the property value, the field might have to “reject” the value because it is invalid from its perspective.

This sounds weird to me. Wouldn’t the data model have the strictest limitations on what is acceptable? For example, why would an email property accept a value that the field decides is an invalid email string?

Do developers in general make a distinction between validators and constraints, or is it all just “preventing bad input”?

One observation from the example that I happened to pick without giving any thought to ordering is that order does not matter because the restrictions form a strict hierarchy of value-space subsets: it makes no sense to do an arithmetic comparison if there is no value, and the value can only be 13 if it also satisfies the >= 1 condition. While this hierarchy is certainly not universal, I still wonder if it’s universal enough specifically for the types of conditions where built-in constraints would apply. At least in the case of numeric fields, there’s a clear hierarchy between empty, parsing and the min/max limits (except for the useless case with overlapping min and max limits).
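That hierarchy for a numeric field can be written down directly; here is a small illustrative sketch (not Vaadin API) where each check only makes sense once the previous one has passed, so a single fixed order covers all the built-in constraints:

```java
import java.util.Optional;

// Sketch of the "strict hierarchy" for a numeric field: empty ->
// unparsable -> min/max. Each check presupposes the previous one,
// so the built-in constraints have one natural order.
class NumericChecks {
    static Optional<String> check(String raw, double min, double max) {
        if (raw == null || raw.isBlank()) return Optional.of("Value is required");
        double value;
        try {
            value = Double.parseDouble(raw);
        } catch (NumberFormatException e) {
            return Optional.of("Invalid number");
        }
        if (value < min) return Optional.of("Must be >= " + min);
        if (value > max) return Optional.of("Must be <= " + max);
        return Optional.empty();
    }
}
```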

I’m thus wondering whether ordering matters in practice other than for generic validators where the order of addition can already be used as a natural signal?

At the same time, I could also imagine constraints being ordered in relation to other validators. In that way, a constraint would just be a validator that also affects the component’s internal state management. This could use a concept where some field components have special support for specific validator subclasses. In that way, numberField.addValidator(new MinValueValidator(1)) and numberField.addValidator(value -> value >= 1) would both reject the same values, but one would additionally also configure the field to conditionally disable the - step button.

This would give the possibility of adding a generic validator “before” a constraint validator so that a field configured to reject odd numbers first and only then negative numbers would show the odd number error instead of the negative number error when the value is e.g. -1 even though the logic to disable the - step button would still be in effect. This would also mean that the component would only be “allowed” to immediately show the result of some client-side validation rule if there are no earlier rules that delegate to the server. This mechanism could then also be abused to disable instant client-side validation by making sure that the first validator is a dummy server-side validator that accepts any value.
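A rough plain-Java sketch of that idea (all names invented for illustration; not Vaadin API): the field recognizes a known constraint subclass to adjust its own state, while the error shown is still decided purely by validator order:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

// Hypothetical sketch: a constraint is just a validator that the field
// additionally inspects by type to configure its own UI state.
class SketchNumberField {
    interface Validator extends Function<Integer, Optional<String>> {}

    // A "constraint" validator the field has special support for.
    static class MinValueValidator implements Validator {
        final int min;
        MinValueValidator(int min) { this.min = min; }
        public Optional<String> apply(Integer value) {
            return value >= min ? Optional.empty()
                                : Optional.of("Must be at least " + min);
        }
    }

    private final List<Validator> validators = new ArrayList<>();
    boolean minusButtonConditionallyDisabled = false;

    void addValidator(Validator v) {
        validators.add(v);
        // Special support for a known constraint subclass:
        if (v instanceof MinValueValidator) {
            minusButtonConditionallyDisabled = true;
        }
    }

    // Validators run in the order they were added; first failure wins.
    Optional<String> validate(int value) {
        for (var v : validators) {
            var error = v.apply(value);
            if (error.isPresent()) return error;
        }
        return Optional.empty();
    }
}
```

Adding an odd-number validator before MinValueValidator would then make -1 show the odd-number error, while the - step button logic from the constraint stays in effect.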

I want to make sure I understand the requirement here: do you want to have access to the text entered by the user so that you can parse it with server-side logic to determine a date that should be set as the field value? The alternative would be to only get access to the text as input when creating a validation error message without any direct influence over the field value.

If that’s the case, then I would say that it doesn’t really have anything to do with the interaction between the field and the binder but instead only about how that specific field type behaves in general. (But that doesn’t reduce the importance of that feature request.)

Yes! Like it was possible with V8


The need for server-side date parsing for DatePicker is the only use case I’ve seen for that particular functionality, so unless someone has another decent example, it might be better to treat that one as a component-specific edge case and not the definitive building block of form data binding architecture?

Agree! I think this should not bloat the Binder’s API - just a “simple” server-side hook that can be configured on the field and that either sets the value of the field (which would of course trigger the Binder) or returns an error to the client.


My particular beef with the Binder is the writeback of converted and formatted values.
If I have 3.14159265359 in my data model, and show it truncated as 3.14 in a field, I don’t want Binder to immediately write back 3.14.

Or, I have 1KG in my data model and show it as 2.20 pounds to my users, I don’t want Binder to write back 0.997903KG.
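The round-trip loss is easy to reproduce with a plain converter pair (standalone demo, no Vaadin involved; 1 kg ≈ 2.20462 lb):

```java
// Demonstrates how a format-then-parse round trip changes the model value
// even though the user never edited anything.
class RoundTripDemo {
    static final double LB_PER_KG = 2.20462;

    // Model -> presentation: convert and round to two decimals for display.
    static String toPresentation(double kg) {
        return String.format(java.util.Locale.ROOT, "%.2f", kg * LB_PER_KG);
    }

    // Presentation -> model: parse the displayed text back to kilograms.
    static double toModel(String pounds) {
        return Double.parseDouble(pounds) / LB_PER_KG;
    }

    public static void main(String[] args) {
        double kg = 1.0;
        String shown = toPresentation(kg);   // displayed as "2.20"
        double writtenBack = toModel(shown); // no longer exactly 1.0
        System.out.println(shown + " lb -> " + writtenBack + " kg");
    }
}
```

If the Binder immediately writes the parsed presentation value back, the model silently drifts from 1.0 to roughly 0.9979 without any user edit.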

Fighting with Binder to make it stop has not been fun. It is not possible to override Binder.setBean and just tweak it a bit, since it uses private members. I ended up using reflection to get references to the private fields and then call .setAccessible(true) on them. Made me feel dirty.

It is not possible to replace the entire Binder either. It is the top-level class, and anywhere you give a binder to Vaadin, it must be a subclass of this.

I have considered dropping the binder and just using the fields directly, but in editable grids you have to set a Binder.

I suspect the root issue is that you try to do too much. It makes for wonderful demos, but just gets in the way in more substantial applications.

I guess I’d like to keep the current functionality, but split it up so that it is easier to get the functionality I want. Maybe start with a top-level interface that only has setBean/isValid?

Does this not work in your case?

  binder.forField(field)
    .bind(GET, SET)
    .setConvertBackToPresentation(false);

Edit: nvm - wrong way :)

No, not always, and the example you gave is not how I envision this being used. Continuing with your email example, let’s say there is a Contact model class with an email member. It would make sense from a model standpoint to verify that the character length limit of the field is not exceeded and the email address is otherwise properly formatted according to RFC 5322. However, we don’t know if we are going to need any additional validation by a field, such as when the Contact class is used by the Employee class vs. the Customer class. For Employee, we may need to have an additional restriction that the email address is an internal one (i.e., ends with the corporate domain) whereas Customer email addresses are restricted from being internal ones. So we have the model with common restrictions plus each field with its own particular context restrictions.

Yes, that is exactly what is defined in the ticket linked in this comment: RFC: Moving towards validators in fields - #15 by Tatu2

But if the user touches the field, the rounding error will end up there anyway… So what is the point? However, it is an intriguing use case, which is hard to handle “perfectly”.

What do you mean by “touches”? Are you saying that Vaadin might sync the value back to the server in cases other than a valueChanged? I haven’t noticed that.

If the user actually enters a value like “2.20” then I’m happy to receive 0.997903KG back to the data model. Then that is the correct value.

In my mind, displaying data should never change the data. It would be interesting to see some examples where that behavior is desired. At the very least I’d claim it is an edge case and should not have been the default, though I guess that ship has sailed.

Did you notice the writeChangedBindingsToBean(BEAN bean) and writeBean(BEAN bean, Collection<Binding<BEAN, ?>> bindingsToWrite) methods that were introduced in Vaadin 24.4?


I had weird issues updating to 24.4, so I’m still on 24.3.12, so no, but from the names and a quick look in github, they don’t seem to help me.

The formatted value is changed immediately when I call setBean. It calls

getBindings().forEach(b -> b.initFieldValue(bean, true));

That “true” is the writeBackChangedValues parameter that I’m so strongly against.

The mentioned new methods would be a solution for you if you used readBean instead of setBean. Based on your problem description, it looks to me like buffered mode is actually what you need instead of non-buffered setBean usage.
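The gist of buffered mode is roughly this (plain-Java sketch with invented names; in Vaadin the corresponding calls are readBean/writeBean): reading copies the bean value into the field, edits accumulate in a buffer, and the bean is only touched on an explicit save:

```java
import java.util.function.Consumer;
import java.util.function.Function;

// Hypothetical sketch of buffered binding: reading copies the bean value
// into the field, but nothing is written back to the bean until an
// explicit save, so formatting the field never mutates the model.
class BufferedBinding<BEAN, V> {
    private final Function<BEAN, V> getter;
    private final Consumer<V> fieldSetter; // pushes the value into the UI field
    private V buffered;                    // edits accumulate here

    BufferedBinding(Function<BEAN, V> getter, Consumer<V> fieldSetter) {
        this.getter = getter;
        this.fieldSetter = fieldSetter;
    }

    void readBean(BEAN bean) {          // bean -> field; bean untouched
        buffered = getter.apply(bean);
        fieldSetter.accept(buffered);
    }

    void onFieldEdit(V newValue) {      // user edit; bean still untouched
        buffered = newValue;
    }

    void writeBean(Consumer<V> beanSetter) { // explicit save only
        beanSetter.accept(buffered);
    }
}
```

With setters that have side effects, this keeps the side effects from firing until you decide to write, which is exactly the writeback complaint above.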

No, not always, and the example you gave is not how I envision this being used. Continuing with your email example, let’s say there is a Contact model class with an email member. It would make sense from a model standpoint to verify that the character length limit of the field is not exceeded and the email address is otherwise properly formatted according to RFC 5322. However, we don’t know if we are going to need any additional validation by a field, such as when the Contact class is used by the Employee class vs. the Customer class. For Employee , we may need to have an additional restriction that the email address is an internal one (i.e., ends with the corporate domain) whereas Customer email addresses are restricted from being internal ones. So we have the model with common restrictions plus each field with its own particular context restrictions.

Here the assumption is, if I’m reading you right, that the Field (e.g. EmailField customerEmailField) has the limitation, and not the Property (e.g. the email member of Customer customer = ...). Is that right? Because I would think the Customer should make sure it can’t be in an invalid state.

No, buffered won’t work for us. I have all logic encoded in my bean, and none in the fields. The bean setter methods often have side effects. They might change other values, or state that is then reflected back to the UI. So it is the bean that decides whether it is valid or not, not the UI.

Just to be clear: our “bean” is no JPA bean. It is our own thing. We control when changes are saved to the db. I’ve never understood why buffered was a thing, but I’ve assumed it had to do with JPA beans and not having control over when stuff gets saved?

Could be so. The mentioned writeChangedBindingsToBean method, however, was introduced to be more friendly to cases where setters do have side effects, i.e. you do not want to trigger the side effect when the value has not actually changed.

Furthermore, there is this improvement coming: feat: Improve Binder change detection by onuridrisoglu · Pull Request #19488 · vaadin/flow · GitHub

I.e. if the user edits the value and then edits it back to the original value, the field will not be deemed changed and, when used in combination with the previous method, won’t be written to the bean.