model: Connecting to DBs that don't contain tables that are in the model should not be an error. #235
We added that verification because the error handling later on was more complicated. But now that the library is more mature we can relax this. For missing tables we can remove the table from the model and warn; all API calls that use the removed table would then return an error. Should we also handle other types of discrepancies more robustly, e.g. an extra/missing column or a different column type?
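A minimal sketch of the "remove and warn" idea, using plain maps and hypothetical helper names rather than libovsdb's actual types:

```go
// pruneMissingTables is a sketch only: given the table names the client's
// model declares and the table names present in the server schema, drop the
// missing ones and warn instead of failing verification outright.
package main

import (
	"fmt"
	"log"
)

// pruneMissingTables returns the subset of modelTables that the server schema
// actually contains, logging a warning for each table it drops.
func pruneMissingTables(modelTables []string, schemaTables map[string]bool) []string {
	kept := make([]string, 0, len(modelTables))
	for _, t := range modelTables {
		if !schemaTables[t] {
			log.Printf("warning: table %q is in the model but not in the server schema; ignoring it", t)
			continue
		}
		kept = append(kept, t)
	}
	return kept
}

func main() {
	schema := map[string]bool{"Logical_Switch": true, "Load_Balancer": true}
	model := []string{"Logical_Switch", "Load_Balancer", "Load_Balancer_Group"}
	fmt.Println(pruneMissingTables(model, schema)) // [Logical_Switch Load_Balancer]
}
```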
That's precisely the sort of scenario we wrote this library to avoid - why use …
I agree with Adrian here, but we need to do this thoughtfully. The intended way for supporting two different versions of a schema was to let the library user handle it.
If we're relaxing verification, we'll need to be clear on what the compatibility guarantees are and for how long they are valid.
Missing tables should be a simple case. I'd say we should return a specific error type so clients can handle this case more easily.
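A hedged sketch of that "specific error type" idea (the names are hypothetical, not part of libovsdb today): API calls touching a table that was pruned from the model return a dedicated error the client can match on with errors.As.

```go
package main

import (
	"errors"
	"fmt"
)

// ErrTableNotSupported reports that an operation referenced a table the
// connected server's schema does not contain.
type ErrTableNotSupported struct {
	Table string
}

func (e *ErrTableNotSupported) Error() string {
	return fmt.Sprintf("table %q is not present in the remote schema", e.Table)
}

// listLogicalRouters stands in for any API call against a pruned table.
func listLogicalRouters(supported map[string]bool) error {
	if !supported["Logical_Router"] {
		return &ErrTableNotSupported{Table: "Logical_Router"}
	}
	return nil
}

func main() {
	err := listLogicalRouters(map[string]bool{})
	var tnf *ErrTableNotSupported
	if errors.As(err, &tnf) {
		fmt.Println("falling back, server lacks table:", tnf.Table)
	}
}
```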
This is going to be a much harder case to solve for. I think a recent example was … Likewise, a change of column type would be another issue that you could plausibly handle at runtime, e.g. from a scalar type to a set of scalar types (or vice versa), but one of the nice things about updating your bindings using … Given the pitfalls outlined above (and the significant level of effort that would be involved to do this properly), I do wonder whether using …
I wouldn't try to handle changes at runtime. Just marking non-existent/changed fields and tables as "not supported" and ignoring them, or erroring out on transactions that use them, would be much easier. However, the challenge I see is how to unambiguously let the client know what to expect from the library if only a subset of the fields/tables is supported. I think this is the same "weirdness" you express above.
Do you mean having the client decide which model to use at runtime, or having the library support multiple (versioned) models?
Yes, that's my point. The only way to prevent this would be to indirect struct access through getter/setter functions which could return an error if the field wasn't supported. This is much less convenient than using the struct directly. EDIT: in some ways the model declares the minimum required feature set for a connection... and I quite like how that is enforced at present through validation.
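For illustration, a purely hypothetical version of that getter indirection (nothing like this is generated by libovsdb today): field access goes through a method that can report "not supported" instead of silently returning a zero value.

```go
package main

import (
	"errors"
	"fmt"
)

var ErrColumnNotSupported = errors.New("column not present in remote schema")

type LogicalSwitch struct {
	name string

	// hasOtherConfig records whether the "other_config" column existed in
	// the schema this struct was decoded against.
	hasOtherConfig bool
	otherConfig    map[string]string
}

// OtherConfig returns the column value, or an error if the connected server's
// schema never had the column, so an empty map stays unambiguous.
func (ls *LogicalSwitch) OtherConfig() (map[string]string, error) {
	if !ls.hasOtherConfig {
		return nil, ErrColumnNotSupported
	}
	return ls.otherConfig, nil
}

func main() {
	ls := &LogicalSwitch{name: "sw0"}
	if _, err := ls.OtherConfig(); errors.Is(err, ErrColumnNotSupported) {
		fmt.Println("other_config not available on this server")
	}
}
```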
Having the client pick at runtime …
Sorry, my example wasn't good, you're right, no need to use ovn-nbctl. We can use libovsdb to check if a table is in the schema supported by the server, e.g., as suggested by @amorenoz.
Why not put the burden on the client (as is currently done with the C and Python OVSDB IDLs) and make it the responsibility of the client to determine what tables/columns to use? All we'd need is a way to determine if a table/column exists in the remote schema. The client can then set up its own mechanism to decide which tables/columns to read from or write to. EDIT: Of course, schema validation shouldn't throw an error for a missing table/column. :)
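A minimal sketch of that "let the client ask" approach. It assumes the client can obtain the remote schema as libovsdb's ovsdb.DatabaseSchema (a Tables map keyed by table name, each table with a Columns map); how the schema is fetched is left out, and the exact field names here are an assumption.

```go
package main

import (
	"fmt"

	"github.com/ovn-org/libovsdb/ovsdb"
)

// hasTable reports whether the remote schema contains the given table.
func hasTable(schema ovsdb.DatabaseSchema, table string) bool {
	_, ok := schema.Tables[table]
	return ok
}

// hasColumn reports whether the remote schema contains the given column.
func hasColumn(schema ovsdb.DatabaseSchema, table, column string) bool {
	t, ok := schema.Tables[table]
	if !ok {
		return false
	}
	_, ok = t.Columns[column]
	return ok
}

func main() {
	// schema would be obtained from the server (e.g. via the get_schema RPC).
	var schema ovsdb.DatabaseSchema
	fmt.Println(hasTable(schema, "Load_Balancer_Group"))
	fmt.Println(hasColumn(schema, "Logical_Switch", "load_balancer_group"))
}
```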
I think the original rationale behind strict validation was to ensure no [un]marshalling errors occur that are difficult to report (e.g. they occur on the update-handling thread) and difficult to deal with (it's hard to determine whether the error is on the client, the library, or the server side).
The missing table issue is relatively easy to fix, we could:
IMHO, the weirdness comes when dealing with column type changes or missing columns. When a client sees an empty field while reading a Model from the cache, how would it know if the field is actually empty in the DB or if it was missing from the schema in the first place?
That's when "put the burden on the client" comes into play IMO: the client will have to figure out which columns are safe to read by using its own mechanism. An oversimplified example: it could have a dedicated table in the database that lists supported features. If feature F, which requires values from column X, is not listed in the supported-features table, then the client must not use feature F. Back to the ovn-kubernetes feature that triggered this issue, this becomes even simpler: if the Load_Balancer_Group table is not available in the schema, then fall back to "legacy" configuration of Load_Balancer and direct referencing from Logical_Switches/Logical_Routers (i.e., never read/write from/to Load_Balancer_Group).
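A rough sketch of that fallback logic (function and flag names are hypothetical, not taken from ovn-kubernetes): gate the new code path on the table being present in the remote schema instead of failing model verification.

```go
package main

import "fmt"

// configureLoadBalancers picks between the Load_Balancer_Group path and the
// legacy per-switch/per-router Load_Balancer references, based on what the
// remote schema supports.
func configureLoadBalancers(schemaHasLBGroup bool) {
	if schemaHasLBGroup {
		fmt.Println("using Load_Balancer_Group references")
		// ... create/patch Load_Balancer_Group rows ...
		return
	}
	fmt.Println("falling back to direct Load_Balancer references on Logical_Switch/Logical_Router")
	// ... legacy configuration path, never touching Load_Balancer_Group ...
}

func main() {
	configureLoadBalancers(false) // e.g. an older OVN schema without the table
}
```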
The problem is that the client cannot really control the columns that are read. Using your example, today the only way for libovsdb not to read column "FeatureF" is to not have it in the DatabaseModel. We could perhaps avoid the update decoding using the recently added Conditional Monitoring, but I haven't looked at that feature so I don't know if it would work. Maybe @dave-tucker knows. So, it seems to me that in order to really support this we would have to add Table and Column selectors to the DatabaseModel and carry them to the …
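For context, the OVSDB "monitor" request (RFC 7047, section 4.1.5) already allows per-table column selection on the wire, which is what table/column selectors in the DatabaseModel would ultimately have to map onto. The struct below is a local illustration of that JSON shape only, not a libovsdb type.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// monitorRequest mirrors the per-table <monitor-request> object from
// RFC 7047: only the listed columns are replicated to the client.
type monitorRequest struct {
	Columns []string `json:"columns,omitempty"`
}

func main() {
	// Monitor only the columns this client actually knows how to handle.
	requests := map[string]monitorRequest{
		"Logical_Switch": {Columns: []string{"name", "ports", "load_balancer"}},
	}
	out, _ := json.Marshal(requests)
	fmt.Println(string(out)) // {"Logical_Switch":{"columns":["name","ports","load_balancer"]}}
}
```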
libovsdb/model/model.go, line 78 (commit e29d8bd)
This should probably be logged as a warning (at most) and the table should optionally be removed from the model. Otherwise it breaks all upgrade scenarios in which clients are upgraded before the database. Even worse, the client might have an external way of checking whether a table actually exists on the remote server (e.g., running an
ovn-nbctl list <table>
command) and adapting its execution based on that information, completely avoiding the nonexistent table. An example of such a scenario can be found here: the client fails schema validation every time it connects to a remote server and crash-loops, even though it is perfectly able to function when the table is missing on the server side.
CC: @amorenoz