How do I inject the client binaries into an existing application Dockerfile?

I’m trying to create a Dockerfile for an application that uses the FoundationDB client library, to connect to an external cluster (hosted elsewhere).

In my case, the Dockerfile uses the Linux ASP.NET Core 3.1 Docker images provided by Microsoft, copies the application binaries, and runs them. Except of course it fails at runtime when attempting to load the fdb C library!

I’m looking at installing only the client binaries in this Docker image, not the server components (to keep the footprint small). Also, I have to start from an existing base image (in my case FROM …), so I cannot reuse any “official” foundationdb Docker image.

I looked at how the Kubernetes operator builds an image: it simply downloads and untars the .tar.gz files from the official download site (which include both client and server), then copies/links the binaries into place in the file system.

I also looked at the Document Layer Dockerfile, but it seems to inherit from the foundationdb-build image, which contains too much stuff for me.

I’ve also seen various Dockerfiles on this forum that download the Debian packages and use dpkg to install both client and server (though most of the conversation focuses on how to start/run the fdbserver process).
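For reference, a minimal sketch of that Debian-package approach might look like the fragment below; the version number and download URL are illustrative assumptions on my part, so the real ones should be taken from the official downloads page:

```dockerfile
# Illustrative only — verify the package version and URL on the official downloads page
RUN curl -fsSLO https://www.foundationdb.org/downloads/6.2.28/ubuntu/installers/foundationdb-clients_6.2.28-1_amd64.deb && \
    dpkg -i foundationdb-clients_6.2.28-1_amd64.deb && \
    rm foundationdb-clients_6.2.28-1_amd64.deb
```

Note that the `foundationdb-clients` package alone pulls in the client library without the server, which is closer to what I want than installing both.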

Finally, there is a post that seems to indicate that I can “use” the image to copy binaries from it, but how do I do that?

Has anyone created a Dockerfile for a custom application server that only needs to connect to a cluster?

I’m guessing this would look something like:

FROM foobar/super-awesome-runtime:1.2.3

# install fdb client binaries
RUN curl && \
    tar -xvz ...... && \
    mv xxxx yyyy && \
    ??? && \
    Profit !!!!

COPY my-awesome-application-binaries ....

#TODO: inject some path or uri to download the cluster file

ENTRYPOINT ["/app/my-awesome-entrypoint"]

To inject the cluster file when the image starts, I was thinking of using an environment variable containing either a path or a URI, and having the application download it during startup. Has anyone done this or used a different approach?

If you’re only interested in installing the client libraries, you can download those directly. There’s an example of that in the sample python docker image:

That example is copying its primary client library from the main foundationdb image (COPY --from=fdb /usr/lib/ /usr/lib), and it’s using wget to pull the multiversion libraries. I think you could mix and match these based on your needs. The simplest thing will probably be to use curl or wget to download the libraries from the website.
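Concretely, mixing and matching those two techniques for the ASP.NET Core case above might look like the multi-stage sketch below; the image tags, versions, and library paths are illustrative assumptions, not exact values:

```dockerfile
# Stage that exists only so we can copy the client library out of it
FROM foundationdb/foundationdb:6.2.28 AS fdb

FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
# Copy the primary client library from the official FoundationDB image
COPY --from=fdb /usr/lib/libfdb_c.so /usr/lib/

# Optionally pull additional (multiversion) client libraries with wget;
# the URL pattern is illustrative — verify it on the downloads page
# RUN wget -O /usr/lib/fdb/libfdb_c_6.1.13.so \
#     https://www.foundationdb.org/downloads/6.1.13/linux/libfdb_c_6.1.13.so

COPY ./publish /app
ENTRYPOINT ["dotnet", "/app/MyAwesomeApp.dll"]
```

The `COPY --from` approach keeps the final image small because only the single `.so` file crosses over from the FoundationDB image, not the server binaries.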

Having the cluster file downloaded at start time is a reasonable approach, especially if your clients are decoupled from the databases they connect to, like if they run in different environments or are managed by different teams. If you have a programmable DNS service, then storing the cluster file in a DNS TXT record could be easier than running a dedicated web service to distribute them, but it’s all going to depend on the details of your infrastructure. If your clients can be slightly stateful, you may want a fallback where it reuses the old cluster file if it can’t download a new one. In most cases it will be safe to re-use the old cluster file, since the connected clients will update it live when you change coordinators. I know that giving clients state to manage is often challenging in a containerized environment, but it can help protect you against your cluster file delivery service being unavailable.
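To illustrate the fallback idea, here is a hypothetical entrypoint helper, written under the assumptions that the cluster file URI arrives in an environment variable and that the previous cluster file (if any) lives at a well-known path; all names and paths are mine, not an official convention:

```shell
#!/bin/sh
# Hypothetical helper: try to refresh the cluster file from a URI,
# but fall back to the existing copy if the download fails.
fetch_cluster_file() {
  uri="$1"   # e.g. "$FDB_CLUSTER_FILE_URI"
  dest="$2"  # e.g. /etc/foundationdb/fdb.cluster
  tmp="${dest}.tmp"
  if curl -fsSL "$uri" -o "$tmp" 2>/dev/null; then
    # Download succeeded: atomically replace the old cluster file
    mv "$tmp" "$dest"
    echo "cluster file refreshed from $uri"
  elif [ -f "$dest" ]; then
    # Download failed but an old cluster file exists: reuse it.
    # This is usually safe, since connected clients rewrite it live
    # when coordinators change.
    rm -f "$tmp"
    echo "download failed, reusing existing cluster file at $dest" >&2
  else
    echo "download failed and no existing cluster file at $dest" >&2
    return 1
  fi
}
```

The entrypoint would then call something like `fetch_cluster_file "$FDB_CLUSTER_FILE_URI" /etc/foundationdb/fdb.cluster && exec /app/my-awesome-entrypoint`, so the container only refuses to start when there is neither a reachable delivery service nor a cached cluster file.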
