Query practices with FDB

Hello, I’ve been reading a lot about FoundationDB lately and getting to understand it. I understand that it’s a drastically different database design from traditional RDBMSes: FDB primarily provides ordered key/value storage, and anything else you want is built on top of it using layers, such as the Record Layer in Java.

In reading through things and trying to relate it to how I use, e.g., DynamoDB at my job for our data and APIs, I had some questions and musings on how to query data (efficiently).

Easy FDB storage and indexes

I think that retrieving data when you know the identifier is simple enough. The primary key is encoded within the FDB key (along with perhaps a directory and/or subspace prefix). You can then either serialize the different data parts within the value (such as the Protobuf serialization the Record Layer uses) or encode the data parts as individual key/value pairs with subspace suffixes after the primary key.
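For concreteness, here’s a minimal sketch using the Python binding’s tuple/subspace helpers, showing both shapes. The subspace names and the JSON serialization are placeholders I made up for illustration (the Record Layer would use Protobuf and its own metadata):

```python
import json
import fdb

fdb.api_version(710)
db = fdb.open()

# Hypothetical subspaces for illustration; a real app might use the directory layer instead.
serialized = fdb.Subspace(('app', 'users_blob'))    # (prefix, pk) => whole record in the value
exploded = fdb.Subspace(('app', 'users_fields'))    # (prefix, pk, field) => one field per key

@fdb.transactional
def save_user_blob(tr, pk, user):
    # One key per object; the whole record lives in the value.
    tr[serialized.pack((pk,))] = json.dumps(user).encode()

@fdb.transactional
def save_user_exploded(tr, pk, user):
    # One key per field; the field name is a suffix after the primary key.
    for field, value in user.items():
        tr[exploded.pack((pk, field))] = str(value).encode()

save_user_blob(db, 'pk1', {'name': 'alice', 'email': 'a@a.com'})
save_user_exploded(db, 'pk1', {'name': 'alice', 'email': 'a@a.com'})
```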

The next step is retrieving values by something other than the primary key, by using an index. Suppose I want to retrieve all objects where a field is equal to a certain value. The naive method would be a get range operation over the subspace for objects of that type. If the objects were Protobuf encoded, you would deserialize each value, compare the field, and, on a match, pull the primary key out of the FDB key. If the objects were exploded, you would look at the keys within each object’s subspace whose suffix matches the field you’re comparing and read the value from there. The get range is wasteful because you’re effectively scanning all of the data, like a table scan in SQL.

An index stores the value you’re looking for in the FDB key before the primary key, e.g. (prefix, value, pk) => nil. You can then do a get range operation over (prefix, value) and parse the primary keys out of the resulting keys. Every key/value pair returned by that get range is usable and nothing is wasted. You can then do individual gets on the primary keys to fetch the whole objects, if needed. The index takes up some more key space in your db, but that’s unavoidable.
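Continuing the sketch above, maintaining such an index is just an extra write in the same transaction, and reading it is one narrow range read (again, the index subspace name is made up):

```python
# Hypothetical index subspace: (index prefix, field value, pk) => ''
name_index = fdb.Subspace(('app', 'idx', 'users_by_name'))

@fdb.transactional
def save_user(tr, pk, user):
    tr[serialized.pack((pk,))] = json.dumps(user).encode()
    # Write the index entry in the same transaction so it can never go stale.
    tr[name_index.pack((user['name'], pk))] = b''

@fdb.transactional
def find_pks_by_name(tr, name):
    # Every key under (index prefix, name) is a hit; just unpack the trailing pk.
    return [name_index.unpack(k)[-1] for k, _ in tr[name_index.range((name,))]]

save_user(db, 'pk1', {'name': 'alice', 'email': 'a@a.com'})
print(find_pks_by_name(db, 'alice'))   # ['pk1']
```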

Up to this point, I think I understand everything well enough, from a high level.

Queries

The next part is how you perform more complicated queries.

#1: Prefix match. If I wanted to find objects whose string field starts with a certain value, this would actually be very easy with the regular index described above, because of how FDB’s get range operation works: simply do a get range over (index prefix, value prefix) instead of the whole value.
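One wrinkle worth showing in a sketch: the tuple layer terminates strings with a 0x00 byte, so the packed partial value isn’t automatically a byte prefix of the packed full value. A common workaround (assuming the prefix contains no NUL bytes) is to strip that terminator and do a startswith range read:

```python
@fdb.transactional
def find_pks_by_name_prefix(tr, prefix):
    # Strip the trailing 0x00 string terminator from the packed tuple so the
    # resulting byte string matches any name that starts with `prefix`.
    begin = name_index.pack((prefix,))[:-1]
    return [name_index.unpack(k)[-1] for k, _ in tr.get_range_startswith(begin)]

print(find_pks_by_name_prefix(db, 'ali'))   # ['pk1']
```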

#2 Contains match. If I wanted to find objects whose string field contains a value, there’s no perfectly efficient way to handle this. If you could enumerate what “contains value” terms are being queried, you could make a custom index for them, storing an index entry for each “word” (for example) in the value. But user-generated searches would be impossible to predict.
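If the searchable terms really can be enumerated at write time, that custom index is just an inverted index: one entry per word per object. A rough sketch with naive whitespace tokenization and whole-word matches only:

```python
word_index = fdb.Subspace(('app', 'idx', 'users_by_name_word'))   # hypothetical inverted index

@fdb.transactional
def index_name_words(tr, pk, name):
    # One index entry per word: (word, pk) => ''. Only answers "contains <word>"
    # queries you anticipated when writing, not arbitrary substrings.
    for word in set(name.lower().split()):
        tr[word_index.pack((word, pk))] = b''

@fdb.transactional
def find_pks_with_word(tr, word):
    return [word_index.unpack(k)[-1] for k, _ in tr[word_index.range((word.lower(),))]]
```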

A naive way to handle this would be to scan over all objects. If the objects are stored with their values exploded into suffixes, not only would you have to scan over objects that don’t match the query, but you’d also have to scan over key/value pairs that don’t even contain the field you’re querying against.

For example, suppose the data is (subspace, object type, pk1, name) => alice and (subspace, object type, pk1, email) => a@a.com. If I want to query for objects whose name value contains “lic”, I’d do a get range over (subspace, object type), discard results whose suffix doesn’t match name, and then compare the data.

If we don’t want to wastefully iterate over keys that hold fields other than the one we’re querying on, we could scan over an index on that field instead of scanning the live data. A get range over the (index subspace, name value, pk) entries means every key/value pair you get back is actionable for the query. It’s an index scan instead of a table scan.
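Reusing the name index from the earlier sketch, that kind of index scan with a client-side filter might look like this — still proportional to the number of indexed names, but every key read carries exactly the field being tested:

```python
@fdb.transactional
def find_pks_by_name_contains(tr, needle):
    # Scan the whole (name, pk) index and apply the substring test client-side.
    hits = []
    for k, _ in tr[name_index.range()]:
        name, pk = name_index.unpack(k)
        if needle in name:
            hits.append(pk)
    return hits

print(find_pks_by_name_contains(db, 'lic'))   # ['pk1']
```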

The next concern is how much data there might be. Queries are read-only but may still run into FDB’s 5 second transaction limit. I understand this less well for read-only transactions; there may be a way to query without the time limit, or you can run a series of transactions that carry on from one another (use the last key from one transaction’s results as the starting key for the next).
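A sketch of that resume-where-you-left-off pattern (note that splitting the scan across transactions means the results are no longer one consistent snapshot):

```python
def scan_index_in_batches(db, index, batch_size=1000):
    """Walk an entire index in multiple transactions to stay under the 5-second limit."""
    r = index.range()
    begin, end = r.start, r.stop
    while True:
        @fdb.transactional
        def read_batch(tr):
            return list(tr.get_range(begin, end, limit=batch_size))

        batch = read_batch(db)
        for kv in batch:
            yield index.unpack(kv.key), kv.value
        if len(batch) < batch_size:
            return
        # Resume just after the last key seen in the previous transaction.
        begin = fdb.KeySelector.first_greater_than(batch[-1].key)

for key_tuple, value in scan_index_in_batches(db, name_index):
    pass   # e.g. apply a "contains" filter here
```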

#3 Ends with, case-insensitive matching, etc. I think most other simple queries against a single field run into the same woes described above in #2.

#4 Complex queries. Next would be combining multiple fields and filters, such as name eq "alice" and (email endswith "@gmail.com" or email contains "alice"). This would be the job of a query planner to figure out dynamically, choosing the best set of get ranges and gets to perform to minimize response time. This is obviously a much deeper undertaking that depends on how the data is stored, which indexes are available, etc., and it’s something RDBMSes spend a lot of time getting right and tuning. I think the FDB Record Layer has some notion of this built in. Regardless, it’s probably a large undertaking in its own right to nail well.
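As a toy illustration of the kind of decision a planner makes, here is one hand-rolled “plan” for a query like the above, assuming a second email index maintained the same way as the name index from the earlier sketches: drive from the cheap equality index and intersect with a filtered scan of the other. A real planner would pick the driving index from selectivity estimates and push residual filters onto fetched records instead.

```python
email_index = fdb.Subspace(('app', 'idx', 'users_by_email'))   # hypothetical, maintained like name_index

@fdb.transactional
def find_by_name_and_email_suffix(tr, name, suffix):
    # Equality predicate -> cheap, narrow index scan.
    by_name = {name_index.unpack(k)[-1] for k, _ in tr[name_index.range((name,))]}
    # "ends with" predicate -> full index scan with a client-side filter.
    by_suffix = set()
    for k, _ in tr[email_index.range()]:
        email, pk = email_index.unpack(k)
        if email.endswith(suffix):
            by_suffix.add(pk)
    # AND of the two predicates = intersection of the candidate pk sets.
    return by_name & by_suffix
```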

Other considerations

#1 Tenancy vs global. Consider designing a system that stores tenant data separately. My understanding is that this is how Apple iCloud stores data for users: each user effectively gets their own database, so all of your data is “closer together” in the overall FoundationDB key space due to how keys are ordered. Retrieving a user’s Notes and Reminders would all happen under e.g. the (icloud, user id) key prefix, instead of the (icloud, notes, user id) and (icloud, reminders, user id) prefixes, which may be far apart from each other.
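In key terms the difference is just where the user id sits in the tuple (the names here are illustrative only):

```python
# Tenant-first layout: everything for one user is one contiguous key range.
user_space = fdb.Subspace(('icloud', 'user-123'))
note_key = user_space.pack(('notes', 'note-1'))
reminder_key = user_space.pack(('reminders', 'rem-1'))

# Type-first layout: the same user's notes and reminders live in two distant key ranges.
notes_space = fdb.Subspace(('icloud', 'notes'))
reminders_space = fdb.Subspace(('icloud', 'reminders'))
note_key_global = notes_space.pack(('user-123', 'note-1'))
reminder_key_global = reminders_space.pack(('user-123', 'rem-1'))
```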

In this way, “table scans” or “index scans” may be cheaper. You’re not iterating over millions of users’ data to find the value you want, because of how the data is partitioned or grouped; there are fewer discarded results in the scan if you’re only querying your own data. This is as opposed to global data, like all iCloud users’ notes being stored in one giant SQL table with the user id as part of the primary key.

But with tenant-oriented data, if you want to perform queries globally you hit other problems, such as first iterating over every user id and then scanning each user’s Notes subspace, as a rough example. (A better example may be searching all Tweets, since iCloud Notes obviously can’t be queried like that.) Or, if this type of data is specifically designed to be queried, you could keep per-user indexes within each user’s tenancy subspace plus a second, global index for global queries.
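Keeping both kinds of index in step is straightforward because they can be written in the same transaction. A sketch with made-up subspaces, using a Tweet-like record tagged with searchable terms:

```python
tenant_space = fdb.Subspace(('app', 'tenants'))                    # per-tenant data and indexes
global_tag_index = fdb.Subspace(('app', 'idx', 'tweets_by_tag'))   # hypothetical global index

@fdb.transactional
def save_tweet(tr, tenant_id, pk, text, tags):
    user_space = tenant_space[tenant_id]
    tr[user_space.pack(('tweets', pk))] = text.encode()
    for tag in tags:
        # Per-tenant index: cheap scans scoped to one user's own data.
        tr[user_space.pack(('idx', 'by_tag', tag, pk))] = b''
        # Global index: cross-tenant queries without walking every tenant's subspace.
        tr[global_tag_index.pack((tag, tenant_id, pk))] = b''
```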

#2 External query service. I think a natural conclusion is that FoundationDB, as a key/value store, is somewhat limited when it comes to open-ended, complex queries. It can be done, but there’d be a fair amount of engineering to take into account. FDB seems to excel at direct retrieval of data and “field equals value” indexed queries. But if it gets more complex, you may have to bite the bullet on slightly slower scans or write a complex query planner. And it depends entirely on your data model, e.g. iCloud’s user-guarded tenant data versus a global, open social media platform.

From this, perhaps offloading queries to a separate platform may be desirable, such as Elasticsearch. Writes to FoundationDB are replicated to ES so that simple API requests for direct data can be served from FDB while complex and open-ended searches are served from ES. I believe this may be what Okta does, for instance, since they even mention in their documentation that new or updated User objects may take up to 1-2 seconds before they’re available from their search APIs. Obviously FDB can’t perform every function perfectly, other storage formats may suit certain scenarios better, and bringing other technologies into your system to achieve your goals is acceptable.
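The shape of that setup is roughly the following, assuming the official Elasticsearch Python client (8.x); in practice the ES write would go through a queue or change feed rather than happening inline, which is exactly why search results can lag by a second or two:

```python
from elasticsearch import Elasticsearch   # assumes elasticsearch-py 8.x

es = Elasticsearch('http://localhost:9200')

def save_user_and_index(db, pk, user):
    # FoundationDB is the source of truth and serves direct lookups...
    save_user(db, pk, user)
    # ...while Elasticsearch serves the open-ended searches, indexed best-effort afterwards.
    es.index(index='users', id=pk, document=user)
```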

Closing thoughts

I think I needed to write this all out to get it organized in my head a little better. There’s a lot to consider in your data modeling, and many of the FoundationDB articles I’ve read recently don’t go into querying beyond the kind of simple index described above, or into the considerations you have to take into account when adopting FDB over an RDBMS.

I wanted to think about this ahead of time so the following doesn’t happen: begin creating a new service with FDB as its store, get it all working with simple operations, start implementing different searches/queries, and get stumped on how to actually implement them well.

I think everything I’ve said above answers my own questions well, but would love to hear what others’ experiences are.