Simple Dockerfile


EDIT: My up-to-date Dockerfile and Docker usage are here

I’m trying to create a simple Docker image with the following Dockerfile:

FROM ubuntu:16.04
RUN apt-get update && apt-get install -y curl python
RUN curl -sO
RUN curl -sO
RUN dpkg -i foundationdb-clients_5.1.5-1_amd64.deb foundationdb-server_5.1.5-1_amd64.deb

$ docker build . -t fdb

I can start the server after running the image with bash

$ docker run --rm -ti fdb bash
root@d674e8f5297e:~# /usr/lib/foundationdb/fdbmonitor
Time="1524732521.244386" Severity="10" LogGroup="default" Process="fdbmonitor": Started FoundationDB Process Monitor 5.1 (v5.1.5)
Time="1524732521.250752" Severity="10" LogGroup="default" Process="fdbmonitor": Watching conf file /etc/foundationdb/foundationdb.conf
Time="1524732521.250884" Severity="10" LogGroup="default" Process="fdbmonitor": Watching conf dir /etc/foundationdb (2)
Time="1524732521.250993" Severity="10" LogGroup="default" Process="fdbmonitor": Loading configuration /etc/foundationdb/foundationdb.conf
Time="1524732521.252791" Severity="10" LogGroup="default" Process="fdbmonitor": Starting backup_agent.1
Time="1524732521.253656" Severity="10" LogGroup="default" Process="fdbmonitor": Starting fdbserver.4500
Time="1524732521.255500" Severity="10" LogGroup="default" Process="fdbserver.4500": Launching /usr/sbin/fdbserver (12) for fdbserver.4500
Time="1524732521.258697" Severity="10" LogGroup="default" Process="backup_agent.1": Launching /usr/lib/foundationdb/backup_agent/backup_agent (11) for backup_agent.1
Time="1524732521.456013" Severity="10" LogGroup="default" Process="fdbserver.4500": FDBD joined cluster.

but it fails when I run fdbmonitor directly:

$ docker run --rm -t fdb /usr/lib/foundationdb/fdbmonitor    
Time="1524733204.457982" Severity="10" LogGroup="default" Process="fdbmonitor": Started FoundationDB Process Monitor 5.1 (v5.1.5)
Time="1524733204.467214" Severity="10" LogGroup="default" Process="fdbmonitor": Watching conf file /etc/foundationdb/foundationdb.conf
Time="1524733204.469169" Severity="10" LogGroup="default" Process="fdbmonitor": Watching conf dir /etc/foundationdb (2)
Time="1524733204.475943" Severity="10" LogGroup="default" Process="fdbmonitor": Loading configuration /etc/foundationdb/foundationdb.conf
Time="1524733204.482494" Severity="10" LogGroup="default" Process="fdbmonitor": Starting backup_agent.1
Time="1524733204.484942" Severity="10" LogGroup="default" Process="fdbmonitor": Starting fdbserver.4500
Time="1524733204.488199" Severity="20" LogGroup="default" Process="backup_agent.1": Process 5 exited 0, restarting in 0 seconds
Time="1524733204.491412" Severity="20" LogGroup="default" Process="fdbserver.4500": Process 6 exited 0, restarting in 0 seconds
Time="1524733204.493386" Severity="20" LogGroup="default" Process="backup_agent.1": Process 7 exited 0, restarting in 63 seconds
Time="1524733204.494910" Severity="20" LogGroup="default" Process="fdbserver.4500": Process 8 exited 0, restarting in 66 seconds

Any ideas?

(Alex Miller) #2

I love questions with reproducible steps!

You need to run docker run --init --rm -t fdb /usr/lib/foundationdb/fdbmonitor

I think what’s going on here is that without --init, PID 1 is fdbmonitor, which means all orphaned processes get reparented to it, and every zombie sends a SIGCHLD to fdbmonitor. Without digging into the strace log too much, it looks like the unexpected SIGCHLDs are causing fdbmonitor to think fdbserver is dead, so it loops on trying to restart it.
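A quick way to confirm who ends up as PID 1 (a sketch; it assumes a running Docker daemon and the fdb image tag from above):

```shell
# Without --init, the command you pass becomes PID 1 inside the container
docker run --rm fdb sh -c 'cat /proc/1/comm'
# With --init, Docker injects a minimal init as PID 1 that reaps zombies
# and forwards signals, so fdbmonitor runs as an ordinary child
docker run --init --rm fdb sh -c 'cat /proc/1/comm'
```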


Ah, the --init option did the trick! Thanks!


I’ve been struggling to access a FoundationDB server container from other containers. I’ve written up my efforts, and I hope it helps someone struggling with the same kind of problem; comments are welcome.

(Alex Miller) #5

We’ve been having some discussions about this thread and your other one showing struggles with using a docker image built with the Debian packages. Again, I really appreciate the reproducible examples, and thanks for struggling through all this.

For this particular case, what’s happening is that the Debian package installation immediately starts FoundationDB, which then creates and initializes a data directory. This was a seemingly reasonable thing to do, as then we can run configure new single memory, so that a database can be actually created and initialized. However, for installing in docker, this is particularly unhelpful, as it both creates files you don’t want (an empty database), and files that actively harm you (the process id file).

I think that if you add

RUN rm -r /var/lib/foundationdb/data/*

to the end of your Dockerfile (it needs to be its own RUN line), you’ll see fewer problems. It’ll become an install of FoundationDB that’s a lot more like what you expect, and one that won’t actively resist having multiple instances of it started as a docker image.

This then raises the question of what we should do about the server Debian (and rpm) package going forward, since anyone else trying to install FDB in a container in a reasonable way is probably also going to hit these issues. I don’t have immediate thoughts as to what the right balance here is, but it’s probably something we should take a look at trying to resolve. What you’ve done is the clearly obvious way of writing a Dockerfile that wraps FoundationDB, and it seems rather sad to me that it doesn’t work in non-obvious ways.

(Justin Lowery) #6

Thank you @hiroshi and @alexmiller. I got my install working quickly today on Ubuntu 16.04 in a Docker container using both of your tips.


I’ll add RUN rm -r /var/lib/foundationdb/data/* to my Dockerfile.

As for debian package and Dockerfile,
I also read

Still, I’m new to FoundationDB as you can see, and I don’t know yet how the Debian package or Docker image should behave.
However, I’ll ask here for help with reproducible steps :slight_smile: if another problem arises, and share what works.

One thing I noticed during my struggle: there is no error log to be seen when two fdbserver processes use copies of the same data directory. It would be useful if the stdout of fdbmonitor said something like “Another process already uses the same processid file”, so that the message can be googled…

Anyway thanks for your help.

(Justin Lowery) #8

I read your issue comment and thought that I would share my perspective of running this in Docker. Hopefully someone else can add to this with a perspective of using docker-compose or other orchestration tools.

My thoughts below assume that future releases will not auto-start fdbmonitor when the packages are installed onto the host system, since its current behavior makes a hack necessary for container support.

Is it more sensible to run fdbmonitor, or to run fdbserver directly and allow the container to die if fdbserver crashes or dies?

I think that fdbmonitor should be the binary that is started by the ENTRYPOINT instruction in a Dockerfile.

The main reason is that it reduces variability among supported setups. The binary only uses a few hundred KB of resident memory, yet it sets up user and group privilege separation, auto-reloads the config file (though this is only fully supported on Linux), and emits useful log messages related to configuration.

How should shared cluster files be handled and propagated?

When obtaining the default cluster file from a package installed on the host, the file is owned by the foundationdb user and group; however, users and groups inside a Docker container often do not match those on the host system, and only the numeric UIDs and GIDs are shared.

For example, using the above Dockerfile on an Ubuntu 16.04 host, by default they happened to map to the _apt user and input group. This means that the file’s default location of /etc/foundationdb is not writable by the container, which it needs to be. This suggests that users should be instructed to copy this file to where the container can access it.
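The mismatch is easy to see with plain stat: only the numeric IDs travel with a bind mount, while names are resolved independently on each side. A minimal self-contained sketch (the file is a throwaway, not a real cluster file):

```shell
# Sketch: only the numeric UID:GID travels with a bind mount; the names are
# looked up again inside the container against its own /etc/passwd and
# /etc/group (which is how "foundationdb" became "_apt" and "input" for me)
f=$(mktemp)
stat -c '%u:%g' "$f"   # numeric IDs: these are what the container sees
stat -c '%U:%G' "$f"   # names: re-resolved on each side of the mount
rm -f "$f"
```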

In my opinion, a docker directory should be placed somewhere in the repo, along with a script that does the equivalent of the Debian postinst script as far as creating the initial fdb.cluster file, possibly combined with the script, except without it auto-restarting the fdbmonitor process without checking whether it’s running.

Having both the default foundationdb.conf and a script to create fdb.cluster and configure one or more coordinator addresses in it would make this a bit more intuitive. A custom foundationdb.conf can be overlaid using bind mounts or another mutable type of storage.

I mainly suggest adding the script because the documentation warns of potential data loss from malformed cluster files. I just took note of the syntax and edited mine manually.

There are many ways that containers can have persistent read and write access to files, and while I use bind mounts, this is not a solution for everyone.

Users of orchestration systems, like Kubernetes, will want to use something that is both mutable and available over a network, like GlusterFS, as some setups may not have access to local storage.

My point here is that thoughtful documentation about this should be fairly generic and possibly mention both the more common bind mounts method, and something more scalable, like GlusterFS.

Along with that, some users, myself included, use docker-proxy for network routing between containers that share the same private network, without binding certain container services to a port on an egress interface.

This is where suggesting the use of a fixed IP makes sense for setting up the fdb.cluster file, though here the opinion of someone who uses more automation tools would be helpful (similar to the GlusterFS point above), as this probably won’t be helpful to them.

With that said, the script currently restarts the server without checking if the process is running. In my opinion, it should not trigger a restart unless the fdbmonitor process is already running. It does already have an argument for a path to a custom fdb.cluster file, which could be mentioned along with copying the file to an accessible location.
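What I have in mind is a guard as small as this (a sketch; service foundationdb restart is a stand-in for whatever the script actually calls):

```shell
# Only bounce fdbmonitor if one is already running; otherwise leave startup
# to the operator (pidof comes from the standard Ubuntu tooling)
if pidof fdbmonitor >/dev/null 2>&1; then
  service foundationdb restart
fi
```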

In the future, perhaps somehow allowing for hostname resolution in the fdb.cluster file would be helpful.

I do not have an opinion on the best way that this would be implemented, though I suggest it because that is the way I regularly reference my container services over their private networks that are controlled by docker-proxy. This issue posted two days ago refers to this functionality.

That was a lot, so putting it all together along with the notes that were already here:

Here is an example Dockerfile, nearly identical to the one posted by @hiroshi, except with variables that can be configured using build arguments and that are made available within the container. Hopefully the RUN rm ... instruction can be removed in a later version:

FROM ubuntu:16.04
RUN apt-get update && apt-get install -y curl python

ARG FDB_VERSION=5.1.7
ARG DEB_REVISION=1
RUN curl -sO${FDB_VERSION}/ubuntu/installers/foundationdb-clients_${FDB_VERSION}-${DEB_REVISION}_amd64.deb
RUN curl -sO${FDB_VERSION}/ubuntu/installers/foundationdb-server_${FDB_VERSION}-${DEB_REVISION}_amd64.deb
RUN dpkg -i foundationdb-clients_${FDB_VERSION}-${DEB_REVISION}_amd64.deb foundationdb-server_${FDB_VERSION}-${DEB_REVISION}_amd64.deb

RUN rm -r /var/lib/foundationdb/data/*

ENTRYPOINT /usr/lib/foundationdb/fdbmonitor

The image can be built with this command:

docker build -t foundationdb:5.1.7 ./foundationdb-docker/
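Since the Dockerfile exposes build arguments, another version can be selected at build time without editing the file (a sketch using the version strings from this thread):

```shell
docker build \
  --build-arg FDB_VERSION=5.1.7 \
  --build-arg DEB_REVISION=1 \
  -t foundationdb:5.1.7 ./foundationdb-docker/
```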

This is the step where fdb.cluster should be configured with one or more coordinator IPs, possibly using a script, and where foundationdb.conf would be customized if needed.

I don’t use docker-compose or Swarm, and hopefully more users will help out here eventually. The container can be created and started using this command:

docker run --init -td \
  -v ${DOCKER_VOL}/foundationdb/etc:/etc/foundationdb \
  -v ${DOCKER_VOL}/foundationdb/lib:/var/lib/foundationdb \
  -v ${DOCKER_VOL}/foundationdb/log:/var/log/foundationdb \
  -v ${DOCKER_VOL}/foundationdb/plugins:/usr/lib/foundationdb/plugins \
  -p 4500:4500 \
  --net foundationdb \
  --ip \
  --restart=unless-stopped \
  --name foundationdb \
  foundationdb:5.1.7
From the host or a container, as long as the foundationdb-clients package is installed, the fdb.cluster file is accessible, and the IP is reachable, the database must now be configured as noted by @alexmiller:

fdbcli -C ${DOCKER_VOL}/foundationdb/etc/fdb.cluster --exec configure new single memory

According to my opinions listed above, the foundationdb-docker/ directory or equivalent would contain something similar to this layout:

  • (or a combination of both scripts)
  • foundationdb.conf
  • Dockerfile

Hopefully this will kick off a larger discussion of what can be improved from here and how to tackle the other important problems.

(Chr1st0ph) #9

@umpc I share your thoughts, and created some topics today:

My “thoughts” on the topic:

  • (contains a script for creating the initial coordinator setup)
  • (usage of the Dockerfile)

(Justin Lowery) #10

Continuing the discussion from the GitHub PR, @bjrnio:

I like that you have made a basic example for docker-compose that others can build upon. I know it was just an off-hand example, so I’ll mention, in case someone builds something with it, that they will want to use a version string instead of latest, so that the version never changes accidentally.

We agree in that we could fix this issue with workarounds like stopping the service, this or temporarily changing the runlevel, …

The Dockerfile in my post is a working example as of 5.1.7 meant for people searching until there is a consensus on all of the changes. I didn’t mean to imply its workaround as a choice for a permanent solution.

The best solution so far would be changing the docs, with supporting text, to instruct Ubuntu users installing through apt to run sudo systemctl enable foundationdb, sudo systemctl start foundationdb, and fdbcli --exec "configure new single memory". In the same future PR, the foundationdb-server postinst script would be changed to only write an fdb.cluster file. Whether to keep enabling auto-start when the package is installed matters less, though my opinion is the same as yours: don’t assume, and leave it disabled until the user chooses to enable it.
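Spelled out, that doc change would have Ubuntu apt users run:

```shell
sudo systemctl enable foundationdb
sudo systemctl start foundationdb
fdbcli --exec "configure new single memory"
```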

Regarding cluster files (and foundationdb.conf files, too), my way would be: leave defaults at build and set permissions properly at startup so that users can mount a volume.

After some reading I am inclined to agree, and I think that instead of including scripts in a docker directory, a doc page and a readme could be written suggesting the use of docker cp to access the initial files from /etc/foundationdb in the image, with a reference to, which can be obtained from the repo in case someone is concerned about a syntax issue potentially causing data corruption.

One of the older open PRs questions whether is even needed. I did not need it and perhaps anyone who is going to plan and manage a database should already know to read the docs and heed any warnings. I don’t know any facts related to that, so I don’t have an opinion.

The name resolution for fdb.cluster files you mention is something I was pretty disappointed to find missing from FDB, and I expect it to be implemented in the future.

Early yesterday, @wwilson made a good point here, essentially saying that in some setups, hostname resolution could become an unexpected single point of failure.

I would still like to see the feature implemented and think that a warning in the docs should be sufficient, though I would understand if it’s later decided against.

Also, I’m discussing Ubuntu here only. I think we should first settle on what we want from a Docker image, regardless of the underlying OS, and then port it to Debian, CentOS, and if possible, Alpine.

I don’t have an opinion here as having a Docker image solves compatibility on Linux in general.

While I am typing, looking at your PR:

  • Would you be open to adding version and Debian package revision strings as ARG variables, like in my example?

  • Do you have a reason for using latest, or would you consider switching to use a specific version string?

  • The current sed usage depends on the example config file’s listen_address directive to be placed on line 25. If it is moved in a later commit, the new contents of line 25 will be replaced with the custom directive.

    • How about something like sed -i "s/^listen_address.*/listen_address = ${LISTEN_ADDR}/" /etc/foundationdb/foundationdb.conf, so that changes elsewhere in the file don’t break it? Also, consider using ARG LISTEN_ADDR= rather than ARG LISTEN_ADDR "" with ${LISTEN_ADDR:-}, which is completely correct but appears to the uninitiated (me, a few minutes ago) to prepend a dash.

Thanks for your work and constructive discussion so far!

(Ricard Bejarano) #11

Would you be open to adding version and Debian package revision strings as ARG variables, like in my example?

Yes, sure! I’ll commit those changes, but let me know: is there a specific reason to use 2 variables? What are the advantages of “5.1.7” + “1” over “5.1.7-1”? I don’t have an opinion, so I’ll do whatever you think is best, but if I had to choose I’d go with one.

Do you have a reason for using latest, or would you consider switching to use a specific version string?

No, but I’d leave the choice to the developer. If you want a specific version of FDB, go ahead; otherwise just leave it as latest and Docker will pull the latest stable image (today, that would be 5.1.7).
Once you jump into production, the best practice is to use specific version numbers, so you either stay with what you chose or check what version latest was and use that.
We can guide users through this process in the docs; it leaves developers the freedom to choose, which I think is better.

The current sed usage depends on the example config file’s listen_address directive to be placed on line 25. If it is moved in a later commit, the new contents of line 25 will be replaced with the custom directive.

Yes, I’m checking it myself and it doesn’t feel right, and the ${LISTEN_ADDR:-} part works but it’s over-engineered; I don’t know why I did it that way. :smile:

I’ll wait for your answer regarding the version ARGs, and as soon as you reply I’ll commit a revised Dockerfile and a template for a docker-compose.yaml file.

Edit: Forget why I asked whether 2 is better than 1; I didn’t have the context of the Dockerfile, and it turns out you need them to be separate. I’ll commit in minutes.
Edit 2: commit id: 5f3a82c

(Justin Lowery) #12

I wanted to update this thread with a few changes I have submitted to the branch in the latest PR.

Edit: (copied from GitHub)

Just finished making a few changes.

Once they have been merged, and docker-compose is confirmed to work, this initial PR is ready for a review and to be merged, unless the core maintainers would first like to see a new PR merged including the removal of auto-starting fdbmonitor and auto-configuring the initial cluster, which allows for removing the hack that deletes the auto-configured database.

(Ricard Bejarano) #13

Let me ask, what is the --init flag there for?

(Justin Lowery) #14

I saw that the original poster was having trouble with fdbmonitor running with a PID of 1, and that --init fixed the problem, so I thought fdbmonitor was the PID 1 process, which is what happens when you specify ENTRYPOINT instead of CMD in the Dockerfile.

As it turns out, the shell started by Docker (CMD), which eventually starts fdbmonitor, never exits, making the shell the PID 1 process instead.
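For anyone following along, the two ENTRYPOINT forms behave differently here (a sketch using the fdbmonitor path from the Dockerfile above):

```dockerfile
# shell form: Docker wraps the command in /bin/sh -c, so the shell is PID 1
# and fdbmonitor runs as its child
ENTRYPOINT /usr/lib/foundationdb/fdbmonitor

# exec form: fdbmonitor itself becomes PID 1 (the case where --init matters)
ENTRYPOINT ["/usr/lib/foundationdb/fdbmonitor"]
```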

You can leave out --init since that is the case; I’d just forgotten to check whether it was actually needed here.

Edit: While it isn’t needed here, signals must still be passed through to fdbmonitor, which cannot run with a PID of 1.

(Ricard Bejarano) #15

I see, I’ll leave it out for simplicity, I’m also elaborating a little bit on the so people can use it as a reference when official docs are written.

(Justin Lowery) #16

A lot of progress has been made so far. I am replying to link anyone watching this thread to the latest updates.

Any feedback is welcome. If someone uses Windows on bare metal, that is something that still needs to be tested.

(Pavan) #17

Thanks Justin. I have tried out the instructions in the link provided and it works perfectly.

Are there instructions somewhere on using the tar.gz installation? I was trying that in my Docker build without success, but maybe this approach is better.