We plan to use FDB with a large number of users, and security is a key concern for the entire ecosystem. This includes authentication, authorization, and restricting certain range operations (e.g., clear) to prevent both intentional and unintentional corruption of the data.
We are looking into different ways to implement this. One way is to natively enhance the cluster to build an auth layer - for example, the write proxy could intercept all write ops and validate the client and subspace/key range before executing the txn. But this seems to go against the general philosophy of FDB.
The second option is to have a layer that interfaces with the application clients and applies the security checks before sending queries to the server. This sounds more like the right thing to do, but writing a layer has its own challenges:
- What will be the performance impact of introducing another layer between the client and the cluster?
- The protocol between the clients and the auth layer should be carefully designed so that the proxy layer stays truly stateless and the client stays as close to the cluster as possible. (What I really mean is that I don't want to invent another complex protocol between the client and the layer, and then have the layer do protocol conversion, making it computationally heavy and the codebase complex to maintain.)
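To make the range-restriction idea concrete, here is a minimal sketch of the authorization check such a stateless layer might run before forwarding an op to the cluster. Everything here is an assumption for illustration: the tenant names, the prefix map, and the op vocabulary are hypothetical, not an existing FDB API - a real layer would derive the allowed prefix from an authenticated identity (e.g., a token).

```python
# Hypothetical per-tenant key-range authorization for a stateless proxy layer.
# TENANT_PREFIXES stands in for whatever the layer derives from authentication.
TENANT_PREFIXES = {
    "tenant-a": b"\x15A/",
    "tenant-b": b"\x15B/",
}

def next_prefix(prefix: bytes) -> bytes:
    """First key strictly after every key that starts with `prefix`."""
    p = bytearray(prefix)
    while p and p[-1] == 0xFF:
        p.pop()
    if not p:
        raise ValueError("prefix has no upper bound")
    p[-1] += 1
    return bytes(p)

def authorize(tenant: str, op: str, begin: bytes, end: bytes = b"") -> bool:
    """Allow an op only if its key range sits inside the tenant's subspace."""
    prefix = TENANT_PREFIXES.get(tenant)
    if prefix is None:
        return False
    if op == "clear_range":
        # Range clears are the most dangerous op: both bounds must stay
        # within [prefix, next_prefix(prefix)).
        return begin.startswith(prefix) and prefix <= end <= next_prefix(prefix)
    # Point reads/writes: the single key must live under the prefix.
    return begin.startswith(prefix)
```

The check is pure bytes comparison, so the layer keeps no per-client state and the hot path stays cheap:

```python
assert authorize("tenant-a", "set", b"\x15A/users/1")
assert not authorize("tenant-a", "set", b"\x15B/users/1")
assert not authorize("tenant-a", "clear_range", b"\x15A/x", b"\x15B/")
```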
Some notes about the actual application/clients:
The existing system we are trying to replace is a blob store, so we intend to use FDB as a native binary store. The client relies heavily on indexes, and the server process has no need to understand the blob.
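Since FDB caps value sizes (100 KB per value), a blob store on FDB typically splits each blob into ordered chunk keys and reassembles it with a range read. A simplified sketch, assuming a flat byte prefix plus a big-endian chunk index in place of the tuple layer (the prefix and chunk size here are illustrative):

```python
import struct

# Real layers chunk well below the 100 KB value limit; 10 KB is an assumption.
CHUNK_SIZE = 10_000

def blob_to_kv(prefix: bytes, blob: bytes):
    """Split a blob into ordered (key, value) pairs under `prefix`."""
    for i in range(0, max(len(blob), 1), CHUNK_SIZE):
        index = i // CHUNK_SIZE
        # Big-endian index keeps chunk keys sorted in read order.
        yield prefix + struct.pack(">I", index), blob[i:i + CHUNK_SIZE]

def kv_to_blob(pairs):
    """Reassemble a blob from the key-value pairs of a range read."""
    return b"".join(value for _key, value in sorted(pairs))
```

Because the chunks are opaque bytes, this fits the point above: the server never needs to understand the blob, and the indexes can live under sibling key prefixes in the same transaction.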
Tens of thousands of clients are common for a given cluster (we are aware of the current limit on the number of concurrent clients per cluster).
Our use case seems to be a standard one, and it's likely a lot of other folks have solved this problem. If there is already a solution, that would be ideal; if not, any ideas/feedback are welcome.