Very cool project! We've been running into similar memory and performance issues using CoreData + protobufs that it seems dflat tries to solve. This might be a naive question, and I have some assumptions about how this works, but I was curious: how does the FlatBuffers schema map to the generated SQLite schema?
Short answer: we don't do the schema mapping. It is error-prone and provides limited value. Instead, like many other schemaless solutions, we store the primary keys alongside the FlatBuffers blob. That way, when reading old data, we never need ALTER TABLE, because that is a write operation even in cases where it only mutates metadata.
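For illustration only, a minimal sketch of what that storage shape could look like; the file name, table name, and column names here are my assumptions, not dflat's actual generated schema:

```swift
import SQLite3

// Sketch of the schemaless layout: one row per object, keyed by the
// extracted primary key, with the serialized FlatBuffers payload stored
// as an opaque blob. Evolving the object schema never requires
// ALTER TABLE, because old blobs remain readable as-is.
let createMainTable = """
CREATE TABLE IF NOT EXISTS mydoc (
  rowid INTEGER PRIMARY KEY AUTOINCREMENT,
  pk BLOB UNIQUE NOT NULL, -- primary key extracted from the object
  p BLOB NOT NULL          -- the FlatBuffers payload itself
);
"""

var db: OpaquePointer?
if sqlite3_open("objects.sqlite", &db) == SQLITE_OK {
  sqlite3_exec(db, createMainTable, nil, nil, nil)
}
sqlite3_close(db)
```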
For SELECT, we build separate tables and join against them at SELECT time when the queried property is indexed. This imposes a constraint, since you can only join about 50 tables, but that has been fine so far. The separate table can be built asynchronously upon upgrade; before it is ready, we simply fall back to a full table scan.
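Roughly, the side-table idea could look like this (again a sketch under my own naming assumptions): each indexed field gets its own table keyed by the same rowid as the main table, and a query on that field joins the side table back to fetch the blobs:

```swift
// Sketch of a side table for one indexed field. It can be (re)built
// asynchronously from the main table's blobs after an upgrade.
let createIndexTable = """
CREATE TABLE IF NOT EXISTS mydoc__title (
  rowid INTEGER PRIMARY KEY, -- same rowid as the main table
  title TEXT                 -- the extracted, indexed value
);
CREATE INDEX IF NOT EXISTS mydoc__title_idx ON mydoc__title (title);
"""

// A SELECT on the indexed field joins the side table back to the main
// table and returns only the matching blobs for deserialization.
let query = """
SELECT mydoc.p FROM mydoc
JOIN mydoc__title ON mydoc__title.rowid = mydoc.rowid
WHERE mydoc__title.title = ?;
"""
```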
If no property is indexed, we do a full table scan and use FlatBuffers to extract the properties and compare them.
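And a minimal sketch of that fallback scan, assuming a caller-supplied predicate that deserializes each FlatBuffers blob (or reads a single field out of the buffer) and compares it against the query; the table and column names are hypothetical, as above:

```swift
import Foundation
import SQLite3

// Sketch of the full-scan fallback: walk every row and hand the raw blob
// to a predicate that inspects the FlatBuffers-encoded fields.
func fullScan(db: OpaquePointer?, matches: (Data) -> Bool) -> [Data] {
  var stmt: OpaquePointer?
  var results: [Data] = []
  guard sqlite3_prepare_v2(db, "SELECT p FROM mydoc;", -1, &stmt, nil) == SQLITE_OK else {
    return results
  }
  while sqlite3_step(stmt) == SQLITE_ROW {
    if let blob = sqlite3_column_blob(stmt, 0) {
      let size = Int(sqlite3_column_bytes(stmt, 0))
      let data = Data(bytes: blob, count: size)
      if matches(data) { results.append(data) }
    }
  }
  sqlite3_finalize(stmt)
  return results
}
```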