A minimal Ubuntu base image modified for Docker-friendliness


Baseimage-docker only consumes 8.3 MB RAM and is much more powerful than Busybox or Alpine. See why below.

Baseimage-docker is a special Docker image that is configured for correct use within Docker containers. It is Ubuntu, plus:

  • Modifications for Docker-friendliness.
  • Administration tools that are especially useful in the context of Docker.
  • Mechanisms for easily running multiple processes, without violating the Docker philosophy.

You can use it as a base for your own Docker images.

Baseimage-docker is available for pulling from the Docker registry!

What are the problems with the stock Ubuntu base image?

Ubuntu is not designed to be run inside Docker. Its init system, Upstart, assumes that it's running on either real hardware or virtualized hardware, but not inside a Docker container. But inside a container you don't want a full system; you want a minimal system. Configuring that minimal system for use within a container has many strange corner cases that are hard to get right if you are not intimately familiar with the Unix system model. This can cause a lot of strange problems.

Baseimage-docker gets everything right. The "Contents" section describes all the things that it modifies.

Why use baseimage-docker?

You can configure the stock Ubuntu image yourself from your Dockerfile, so why bother using baseimage-docker?
  • Configuring the base system for Docker-friendliness is no easy task. As stated before, there are many corner cases. By the time that you've gotten all that right, you've reinvented baseimage-docker. Using baseimage-docker will save you from this effort.
  • It reduces the time needed to write a correct Dockerfile. You won't have to worry about the base system and you can focus on the stack and the app.
  • It reduces the time needed to run docker build, allowing you to iterate your Dockerfile more quickly.
  • It reduces download time during redeploys. Docker only needs to download the base image once, during the first deploy. On every subsequent deploy, only the changes you make on top of the base image are downloaded.

Related resources: Website | Github | Docker registry | Discussion forum | Twitter | Blog

What's inside the image?


Looking for a more complete base image, one that is ideal for Ruby, Python, Node.js and Meteor web apps? Take a look at passenger-docker.

  • Ubuntu 20.04 LTS: The base system.

  • A correct init process: Main article: Docker and the PID 1 zombie reaping problem.

    According to the Unix process model, the init process (PID 1) inherits all orphaned child processes and must reap them. Most Docker containers do not have an init process that does this correctly, so over time their containers fill up with zombie processes.

    Furthermore, docker stop sends SIGTERM to the init process, which is then supposed to stop all services. Unfortunately, most init systems don't do this correctly within Docker, since they're built for hardware shutdowns instead. This causes processes to be hard killed with SIGKILL, which doesn't give them a chance to correctly deinitialize things and can cause file corruption.

    Baseimage-docker comes with an init process, /sbin/my_init, that performs both of these tasks correctly.

  • Fixes APT incompatibilities with Docker.

  • syslog-ng: A syslog daemon is necessary so that many services, including the kernel itself, can correctly log to /var/log/syslog. If no syslog daemon is running, a lot of important messages are silently swallowed. It only listens locally, and all syslog messages are forwarded to "docker logs".

    Why syslog-ng and not rsyslog? I've had bad experiences with rsyslog: I regularly run into bugs, and once in a while it takes my log host down by entering a 100% CPU loop in which it can't do anything. Syslog-ng seems to be much more stable.

  • logrotate: Rotates and compresses logs on a regular basis.

  • SSH server: Allows you to easily login to your container to inspect or administer things. SSH is disabled by default and is only one of the methods provided by baseimage-docker for this purpose; the other method is docker exec. SSH is provided as an alternative because docker exec comes with several caveats. Password and challenge-response authentication are disabled by default; only key authentication is allowed.

  • cron: The cron daemon must be running for cron jobs to work.

  • runit: Replaces Ubuntu's Upstart. Used for service supervision and management. Much easier to use and more lightweight than SysV init or Upstart, and it supports restarting daemons when they crash.

  • setuser: A tool for running a command as another user. Easier to use than su, has a smaller attack vector than sudo, and unlike chpst it sets $HOME correctly. Available as /sbin/setuser.

  • install_clean: A tool for installing APT packages that automatically cleans up after itself. All arguments are passed to apt-get -y install --no-install-recommends, and after installation the APT caches are cleared. To include recommended packages, add --install-recommends.
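The two init duties described above, reaping children and forwarding the termination signal, can be sketched in plain shell. This is an illustrative sketch under stated assumptions, not my_init's actual implementation; when a shell like this runs as PID 1 inside a container, `wait` also reaps adopted orphans, not just direct children.

```shell
#!/bin/sh
# Sketch of an init's core loop (illustrative; not my_init's code).
reap_all() {
    sleep 0.1 & sleep 0.2 &       # stand-ins for supervised services
    trap 'kill -TERM 0' TERM      # forward SIGTERM to the process group
    while ! wait; do :; done      # loop until every child is reaped
    echo "all children reaped"
}
```

Calling reap_all blocks until both stand-in services have exited and been reaped.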

Baseimage-docker is very lightweight: it only consumes 8.3 MB of memory.

Wait, I thought Docker is about running a single process in a container?

The Docker developers advocate the philosophy of running a single logical service per container. A logical service can consist of multiple OS processes.

Baseimage-docker merely advocates running multiple OS processes inside a single container. We believe this makes sense because, at the very least, it solves the PID 1 problem and the "syslog blackhole" problem. By running multiple processes, we solve very real Unix OS-level problems with minimal overhead and without turning the container into multiple logical services.

Splitting your logical service into multiple OS processes also makes sense from a security standpoint. By running processes as different users, you can limit the impact of vulnerabilities. Baseimage-docker provides tools to encourage running processes as different users, e.g. the setuser tool.


Do we advocate running multiple logical services in a single container? Not necessarily, but we do not prohibit it either. While the Docker developers are very opinionated and have very rigid philosophies about how containers should be built, Baseimage-docker is completely unopinionated. We believe in freedom: sometimes it makes sense to run multiple services in a single container, and sometimes it doesn't. It is up to you to decide what makes sense, not the Docker developers.

Does Baseimage-docker advocate "fat containers" or "treating containers as VMs"?

There are people who think that Baseimage-docker advocates treating containers as VMs because Baseimage-docker advocates the use of multiple processes. Therefore, they also think that Baseimage-docker does not follow the Docker philosophy. Neither of these impressions is true.

The Docker developers advocate running a single logical service inside a single container. But we are not disputing that. Baseimage-docker advocates running multiple OS processes inside a single container, and a single logical service can consist of multiple OS processes.

It follows that Baseimage-docker also does not deny the Docker philosophy. In fact, many of the modifications we introduce are explicitly in line with the Docker philosophy. For example, using environment variables to pass parameters to containers is very much the "Docker way", and baseimage-docker provides a mechanism to easily work with environment variables in the presence of multiple processes that may run as different users.

Inspecting baseimage-docker

To look around in the image, run:

docker run --rm -t -i phusion/baseimage:<VERSION> /sbin/my_init -- bash -l

where <VERSION> is one of the baseimage-docker version numbers.

You don't have to download anything manually. The above command will automatically pull the baseimage-docker image from the Docker registry.

Using baseimage-docker as base image

Getting started

The image is called phusion/baseimage, and is available on the Docker registry.

# Use phusion/baseimage as base image. To make your builds reproducible, make
# sure you lock down to a specific version, not to `latest`!
# See the list of version numbers.
FROM phusion/baseimage:<VERSION>

# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]

# ...put your own build instructions here...

# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

Adding additional daemons

A daemon is a program which runs in the background of its system, such as a web server.

You can add additional daemons (for example, your own app) to the image by creating runit service directories. You only have to write a small shell script which runs your daemon; runit will start your script and, by default, restart it upon exit after waiting one second.

The shell script must be called run, must be executable, and is to be placed in a directory under /etc/service. runit will switch to that directory and invoke run after your container starts.

Be certain that you do not start your container in interactive mode with another command, as /sbin/my_init must be the first process to run. If you do, your runit service directories won't be started. For instance, docker run -it YOUR_IMAGE bash will bring you to bash in your container, but you'll lose all your daemons.

Here's an example showing how a runit service directory can be made for memcached. Create a run script (memcached.sh is used below as a placeholder name; make sure this file is chmod +x):

```bash
#!/bin/sh
# `/sbin/setuser memcache` runs the given command as the user `memcache`.
# If you omit that part, the command will be run as root.
exec /sbin/setuser memcache /usr/bin/memcached >>/var/log/memcached.log 2>&1
```

In an accompanying Dockerfile:

RUN mkdir /etc/service/memcached
COPY memcached.sh /etc/service/memcached/run
RUN chmod +x /etc/service/memcached/run

A given shell script must run without daemonizing or forking itself, because runit will start and restart your script on its own. Usually, daemons provide a command line flag or a config file option to prevent such behavior; essentially, you want your script to run the daemon in the foreground, not the background.

Running scripts during container startup

The baseimage-docker init system, /sbin/my_init, runs the following scripts during startup, in the following order:
  • All executable scripts in /etc/my_init.d, if this directory exists. The scripts are run in lexicographic order.
  • The script /etc/rc.local, if this file exists.

All scripts must exit correctly, e.g. with exit code 0. If any script exits with a non-zero exit code, the booting will fail.
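The boot behavior just described can be sketched as a small shell function. This is an illustrative sketch, not my_init's actual implementation; the function name is made up for the example:

```shell
#!/bin/sh
# Run every executable script in a directory in lexicographic order
# (shell globs sort their results), aborting on the first non-zero exit.
run_startup_scripts() {
    dir="$1"
    [ -d "$dir" ] || return 0          # the directory is optional
    for script in "$dir"/*; do
        [ -x "$script" ] || continue
        "$script" || {
            echo "*** $script failed; aborting boot" >&2
            return 1
        }
    done
}
```

For example, run_startup_scripts /etc/my_init.d would run 01_foo.sh before 02_bar.sh, and return non-zero as soon as any script fails.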

Important note: If you are executing the container in interactive mode (i.e. when you run it with docker run -it), rather than daemon mode, stdout is sent directly to your terminal. If you are not invoking /sbin/my_init in your run declaration, my_init will not be executed, and therefore your scripts will not be called during container startup.

The following example shows how you can add a startup script. This script simply logs the time of boot to the file /tmp/boottime.txt. The script itself (logtime.sh is used as a placeholder name):

date > /tmp/boottime.txt

In the Dockerfile:

RUN mkdir -p /etc/my_init.d
COPY logtime.sh /etc/my_init.d/logtime.sh
RUN chmod +x /etc/my_init.d/logtime.sh

Shutting down your process

my_init handles termination of child processes at shutdown. When it receives SIGTERM, it passes the signal on to its child processes so they can shut down correctly. If your process is started from a shell script, make sure you exec the actual process; otherwise the shell will receive the signal instead of your process.

my_init terminates processes after a timeout (see the note below about defaults). This can be adjusted by setting environment variables in your Dockerfile:

# Give child processes 5 minutes to time out
ENV KILL_PROCESS_TIMEOUT=300
# Give all other processes (such as those which have been forked) 5 minutes to time out
ENV KILL_ALL_PROCESSES_TIMEOUT=300

Note: Prior to version 0.11.1, the default values for KILL_PROCESS_TIMEOUT and KILL_ALL_PROCESSES_TIMEOUT were 5 seconds. In version 0.11.1+, the default process timeout was raised to 30 seconds to allow containers more time to terminate gracefully. Your container runtime's own timeout may supersede this setting; for example, Docker currently applies a 10-second timeout by default before sending SIGKILL, upon docker stop or receiving SIGTERM.
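The term-then-kill behavior described above can be sketched as follows. This is an illustrative sketch under stated assumptions, not my_init's actual logic; the function name and the one-second polling interval are made up for the example:

```shell
#!/bin/sh
# Send SIGTERM, wait up to $2 seconds, then SIGKILL if still running.
terminate_with_timeout() {
    pid="$1"; timeout="$2"
    kill -TERM "$pid" 2>/dev/null
    i=0
    while kill -0 "$pid" 2>/dev/null; do
        if [ "$i" -ge "$timeout" ]; then
            kill -KILL "$pid" 2>/dev/null   # grace period expired
            break
        fi
        sleep 1
        i=$((i + 1))
    done
}
```

A process that handles SIGTERM promptly never sees the SIGKILL; only processes that ignore the grace period are hard killed.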

Environment variables

If you use /sbin/my_init as the main container command, then any environment variables set with docker run --env or with the ENV instruction in the Dockerfile will be picked up by my_init. These variables will also be passed to all child processes, including /etc/my_init.d startup scripts, runit and runit-managed services. There are, however, a few caveats you should be aware of:
  • Environment variables on Unix are inherited on a per-process basis. This means that it is generally not possible for a child process to change the environment variables of other processes.
  • Because of the aforementioned point, there is no good central place for defining environment variables for all applications and services. Debian has the /etc/environment file, but it only works in some situations.
  • Some services change environment variables for child processes. Nginx is one such example: it removes all environment variables unless you explicitly instruct it to retain them through the env configuration option. If you host any applications on Nginx (e.g. using the passenger-docker image, or using Phusion Passenger in your own image), then they will not see the environment variables that were originally passed by Docker.
  • We ignore HOME, SHELL, USER and a bunch of other environment variables on purpose, because not ignoring them would break multi-user containers. A workaround for setting the HOME environment variable looks like this: RUN echo /root > /etc/container_environment/HOME

Baseimage-docker provides a solution for all these caveats.
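The first caveat is easy to demonstrate in plain shell: a child process (here, a subshell) cannot modify its parent's environment.

```shell
#!/bin/sh
# A subshell is a child process; its variable assignments vanish when
# it exits, so the parent's value is unchanged.
demo_inheritance() {
    MESSAGE="set by parent"
    ( MESSAGE="set by child"; : "$MESSAGE" )   # change is local to the child
    echo "$MESSAGE"
}
```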

Centrally defining your own environment variables

During startup, before running any startup scripts, my_init imports environment variables from the directory /etc/container_environment. This directory contains files named after the environment variables; each file's contents are the variable's value. This directory is therefore a good place to centrally define your own environment variables, which will be inherited by all startup scripts and runit services.

For example, here's how you can define an environment variable from your Dockerfile:

RUN echo Apachai Hopachai > /etc/container_environment/MY_NAME

You can verify that it works, as follows:

$ docker run -t -i YOUR_IMAGE /sbin/my_init -- bash -l
*** Running bash -l...
# echo $MY_NAME
Apachai Hopachai

Handling newlines

If you've looked carefully, you'll notice that the echo command actually prints a newline. Why does $MY_NAME not contain a newline, then? It's because my_init strips the trailing newline. If you intended the value to have a trailing newline, you should add another one, like this:

RUN echo -e "Apachai Hopachai\n" > /etc/container_environment/MY_NAME

Environment variable dumps

While the previously mentioned mechanism is good for centrally defining environment variables, it does not by itself prevent services (e.g. Nginx) from changing and resetting environment variables for their child processes. However, the environment variable dump mechanism does make it easy for you to query what the original environment variables were.

During startup, right after importing environment variables from /etc/container_environment, my_init dumps all its environment variables (that is, all variables imported from /etc/container_environment, as well as all variables it picked up from docker run --env) to the following locations, in the following formats:
  • /etc/container_environment
  • /etc/container_environment.sh - a dump of the environment variables in Bash format. You can source the file directly from a Bash shell script.
  • /etc/container_environment.json - a dump of the environment variables in JSON format.

The multiple formats make it easy for you to query the original environment variables no matter which language your scripts/apps are written in.
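To illustrate, here is how the Bash-format dump could be generated from the directory. This is an illustrative sketch; the real implementation also escapes special characters, which this version skips for brevity, so values containing single quotes would break it:

```shell
#!/bin/sh
# Write a sourceable shell file from a directory of NAME -> value files.
dump_env_sh() {
    dir="$1"; out="$2"
    : > "$out"                                # truncate the output file
    for f in "$dir"/*; do
        [ -f "$f" ] || continue
        printf "export %s='%s'\n" "$(basename "$f")" "$(cat "$f")" >> "$out"
    done
}
```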

Here is an example shell session showing how the dumps look:

$ docker run -t -i \
  --env FOO=bar --env HELLO='my beautiful world' \
  phusion/baseimage:<VERSION> /sbin/my_init -- \
  bash -l
*** Running bash -l...
# ls /etc/container_environment
# cat /etc/container_environment/HELLO; echo
my beautiful world
# cat /etc/container_environment.json; echo
{"TERM": "xterm", "container": "lxc", "HOSTNAME": "f45449f06950", "HOME": "/root", "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "FOO": "bar", "HELLO": "my beautiful world"}
# source /etc/container_environment.sh
# echo $HELLO
my beautiful world

Modifying environment variables

It is even possible to modify the environment variables in my_init (and therefore the environment variables in all child processes spawned after that point in time) by altering the files in /etc/container_environment. Each time my_init runs a startup script, it resets its own environment variables to the state in /etc/container_environment, and re-dumps the new environment variables to container_environment.sh and container_environment.json.

But note that:

  • Modifying container_environment.sh and container_environment.json has no effect.
  • Runit services cannot modify the environment like that. my_init only activates changes in /etc/container_environment when running startup scripts.


Because environment variables can potentially contain sensitive information, /etc/container_environment and its Bash and JSON dumps are by default owned by root and accessible only to the docker_env group (so that any user added to this group will have these variables automatically loaded).

If you are sure that your environment variables don't contain sensitive data, then you can also relax the permissions on that directory and those files by making them world-readable:

RUN chmod 755 /etc/container_environment
RUN chmod 644 /etc/container_environment.sh /etc/container_environment.json

System logging

Baseimage-docker uses syslog-ng to provide a syslog facility to the container. Syslog-ng is not managed as a runit service (see below). Syslog messages are forwarded to the console.

Log startup/shutdown sequence

In order to ensure that all application log messages are captured by syslog-ng, syslog-ng is started separately before the runit supervisor process, and shut down after runit exits. This uses the startup script facility provided by this image. It avoids a race condition that would exist if syslog-ng were managed as a runit service: runit would kill syslog-ng in parallel with the container's other services, causing log messages to be dropped during a graceful shutdown if syslog-ng exits while other services are still producing logs.

Upgrading the operating system inside the container

Baseimage-docker images contain an Ubuntu operating system (see OS version at Overview). You may want to update this OS from time to time, for example to pull in the latest security updates. OpenSSL is a notorious example. Vulnerabilities are discovered in OpenSSL on a regular basis, so you should keep OpenSSL up-to-date as much as you can.

While we release Baseimage-docker images with the latest OS updates from time to time, you do not have to rely on us. You can update the OS inside Baseimage-docker images yourself, and it is recommended that you do this instead of waiting for us.

To upgrade the OS in the image, run this in your Dockerfile:

RUN apt-get update && apt-get upgrade -y -o Dpkg::Options::="--force-confold"

Container administration

One of the ideas behind Docker is that containers should be stateless, easily restartable, and behave like a black box. However, you may occasionally encounter situations where you want to login to a container, or to run a command inside a container, for development, inspection and debugging purposes. This section describes how you can administer the container for those purposes.

Running a one-shot command in a new container

Note: This section describes how to run a command inside a new container. To run a command inside an existing running container, see Running a command in an existing, running container.

Normally, when you want to create a new container in order to run a single command inside it, and immediately exit after the command exits, you invoke Docker like this:

docker run YOUR_IMAGE COMMAND ARGUMENTS...

However, the downside of this approach is that the init system is not started. That is, while invoking COMMAND, important daemons such as cron and syslog are not running. Also, orphaned child processes are not properly reaped, because COMMAND is PID 1.

Baseimage-docker provides a facility to run a single one-shot command, while solving all of the aforementioned problems. Run a single command in the following manner:

docker run YOUR_IMAGE /sbin/my_init -- COMMAND ARGUMENTS ...

This will perform the following:

  • Runs all system startup files, such as /etc/my_init.d/* and /etc/rc.local.
  • Starts all runit services.
  • Runs the specified command.
  • When the specified command exits, stops all runit services.

For example:

$ docker run phusion/baseimage:<VERSION> /sbin/my_init -- ls
*** Running /etc/rc.local...
*** Booting runit daemon...
*** Runit started as PID 80
*** Running ls...
bin  boot  dev  etc  home  image  lib  lib64  media  mnt  opt  proc  root  run  sbin  selinux  srv  sys  tmp  usr  var
*** ls exited with exit code 0.
*** Shutting down runit daemon (PID 80)...
*** Killing all processes...
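The one-shot sequence shown in the transcript can be condensed into a sketch. This is illustrative only; the function name and messages are made up, and real service management is elided:

```shell
#!/bin/sh
# Run a command between service startup and shutdown, propagating the
# command's exit code like my_init's one-shot mode does.
oneshot() {
    echo "*** Booting services..."
    "$@"
    status=$?                     # remember the command's exit code
    echo "*** Shutting down services..."
    return "$status"
}
```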

You may find that the default invocation is too noisy, or perhaps you don't want to run the startup files. You can customize all of this by passing arguments to /sbin/my_init. Invoke docker run YOUR_IMAGE /sbin/my_init --help for more information.

The following example runs ls without running the startup files and with fewer messages, while still running all runit services:

$ docker run phusion/baseimage:<VERSION> /sbin/my_init --skip-startup-files --quiet -- ls
bin  boot  dev  etc  home  image  lib  lib64  media  mnt  opt  proc  root  run  sbin  selinux  srv  sys  tmp  usr  var

Running a command in an existing, running container

There are two ways to run a command inside an existing, running container. Both have their own pros and cons, which you can learn about in their respective subsections.

Login to the container, or running a command inside it, via docker exec

You can use the docker exec tool on the Docker host OS to login to any container that is based on baseimage-docker. You can also use it to run a command inside a running container. docker exec works by using Linux kernel system calls.

Here's how it compares to using SSH to login to the container or to run a command inside it:

  • Pros
    • Does not require running an SSH daemon inside the container.
    • Does not require setting up SSH keys.
    • Works on any container, even containers not based on baseimage-docker.
  • Cons
    • If the docker exec process on the host is terminated by a signal (e.g. with the kill command or even with Ctrl-C), then the command executed by docker exec is not killed and cleaned up. You will either have to do that manually, or you have to run docker exec with -t -i.
    • Requires privileges on the Docker host to be able to access the Docker daemon. Note that anybody who can access the Docker daemon effectively has root access.
    • Not possible to allow users to login to the container without also letting them login to the Docker host.


Start a container:

docker run YOUR_IMAGE

Find out the ID of the container that you just ran:

docker ps

Now that you have the ID, you can use docker exec to run arbitrary commands in the container. For example, to run echo hello world:

docker exec YOUR-CONTAINER-ID echo hello world

To open a bash session inside the container, you must pass -t -i so that a terminal is available:

docker exec -t -i YOUR-CONTAINER-ID bash -l

Login to the container, or running a command inside it, via SSH

You can use SSH to login to any container that is based on baseimage-docker. You can also use it to run a command inside a running container.

Here's how it compares to using docker exec to login to the container or to run a command inside it:

  • Pros
    • Does not require root privileges on the Docker host.
    • Allows you to let users login to the container, without letting them login to the Docker host. However, this is not enabled by default because baseimage-docker does not expose the SSH server to the public Internet by default.
  • Cons
    • Requires setting up SSH keys. However, baseimage-docker makes this easy for many cases through a pregenerated, insecure key. Read on to learn more.

Enabling SSH

Baseimage-docker disables the SSH server by default. Add the following to your Dockerfile to enable it:

RUN rm -f /etc/service/sshd/down

Also regenerate the SSH host keys; baseimage-docker does not contain any, so you have to do that yourself. You may also comment out this instruction; the init system will auto-generate them during boot:

RUN /etc/my_init.d/

Alternatively, to enable sshd only for a single instance of your container, create a folder containing a startup script, to be mounted as /etc/my_init.d. The script (make sure this file is chmod +x) should contain:

rm -f /etc/service/sshd/down
ssh-keygen -P "" -t dsa -f /etc/ssh/ssh_host_dsa_key

Then, you can start your container with

docker run -d -v `pwd`/myfolder:/etc/my_init.d my/dockerimage

This will initialize sshd on container boot. You can then access it with the insecure key as described below, or using the methods to add a secure key. Furthermore, you can publish the port to your machine with -p 2222:22, allowing you to SSH to localhost port 2222 instead of looking up the IP address of the container.

About SSH keys

First, you must ensure that you have the right SSH keys installed inside the container. By default, no keys are installed, so nobody can login. For convenience reasons, we provide a pregenerated, insecure key (PuTTY format) that you can easily enable. However, please be aware that using this key is for convenience only. It does not provide any security because this key (both the public and the private side) is publicly available. In production environments, you should use your own keys.

Using the insecure key for one container only

You can temporarily enable the insecure key for one container only. This means that the insecure key is installed at container boot. If you docker stop and docker start the container, the insecure key will still be there, but if you use docker run to start a new container, then that container will not contain the insecure key.

Start a container with:

docker run YOUR_IMAGE /sbin/my_init --enable-insecure-key

Find out the ID of the container that you just ran:

docker ps

Once you have the ID, look for its IP address with:

docker inspect -f "{{ .NetworkSettings.IPAddress }}" YOUR-CONTAINER-ID

Now that you have the IP address, you can use SSH to login to the container, or to execute a command inside it:

# Download the insecure private key
curl -o insecure_key -fSL
chmod 600 insecure_key

# Login to the container
ssh -i insecure_key root@YOUR-CONTAINER-IP

# Run a command inside the container
ssh -i insecure_key root@YOUR-CONTAINER-IP echo hello world

Enabling the insecure key permanently

It is also possible to enable the insecure key in the image permanently. This is not generally recommended, but is suitable for e.g. temporary development or demo environments where security does not matter.

Edit your Dockerfile to install the insecure key permanently:

RUN /usr/sbin/enable_insecure_key

Instructions for logging into the container are the same as in the section Using the insecure key for one container only.

Using your own key

Edit your Dockerfile to install an SSH public key (your_key.pub is a placeholder for a key of your choice):

## Install an SSH public key of your choice.
COPY your_key.pub /tmp/your_key.pub
RUN cat /tmp/your_key.pub >> /root/.ssh/authorized_keys && rm -f /tmp/your_key.pub

Then rebuild your image. Once you have that, start a container based on that image:

docker run your-image-name

Find out the ID of the container that you just ran:

docker ps

Once you have the ID, look for its IP address with:

docker inspect -f "{{ .NetworkSettings.IPAddress }}" YOUR-CONTAINER-ID

Now that you have the IP address, you can use SSH to login to the container, or to execute a command inside it:

# Login to the container
ssh -i /path-to/your_key root@YOUR-CONTAINER-IP

# Run a command inside the container
ssh -i /path-to/your_key root@YOUR-CONTAINER-IP echo hello world


Looking up the IP of a container and running an SSH command quickly becomes tedious. Luckily, we provide the docker-ssh tool, which automates this process. This tool is to be run on the Docker host, not inside a Docker container.

First, install the tool on the Docker host:

curl --fail -L -O && \
tar xzf master.tar.gz && \
sudo ./baseimage-docker-master/

Then run the tool as follows to login to a container using SSH:

docker-ssh YOUR-CONTAINER-ID

You can look up YOUR-CONTAINER-ID by running docker ps.

By default, docker-ssh will open a Bash session. You can also tell it to run a command, and then exit:

docker-ssh YOUR-CONTAINER-ID echo hello world

Building the image yourself

If for whatever reason you want to build the image yourself instead of downloading it from the Docker registry, follow these instructions.

Clone this repository:

git clone
cd baseimage-docker

Start a virtual machine with Docker in it. You can use the Vagrantfile that we've already provided.

First, install the vagrant-disksize plugin:

vagrant plugin install vagrant-disksize

Then, start the virtual machine:

vagrant up
vagrant ssh
cd /vagrant

Build the image:

make build

If you want to call the resulting image something else, pass the NAME variable, like this:

make build NAME=joe/baseimage

You can also change the Ubuntu base image to a Debian one, as these distributions are quite similar:

make build BASE_IMAGE=debian:stretch

Use the NAME variable in combination with the BASE_IMAGE one to control the name of the resulting image:

make build BASE_IMAGE=debian:stretch NAME=joe/stretch

To verify that the various services are started when the image is run as a container, add test to the end of your make invocation, e.g.:
make build BASE_IMAGE=debian:stretch NAME=joe/stretch test

Removing optional services

The default baseimage-docker build installs a number of optional services, such as the SSH server and syslog-ng, during the build process.

In case you don't need one or more of these services in your image, you can disable their installation through variables in the buildconfig file that is sourced during the build. Do this at build time by passing a build argument, as in docker build --build-arg DISABLE_SYSLOG=1 image/, or set the variable in the Dockerfile with an ENV instruction above the RUN directive.

(These represent build-time configuration, so setting them in the shell environment at build time will not have any effect. Setting them in child images' Dockerfiles will also not have any effect.)

You can also set them directly in the buildconfig file, as shown in the following example. To prevent the SSH server from being installed into your image, set the DISABLE_SSH variable to 1:

### In ./image/buildconfig
# ...
# Default services
# Set 1 to the service you want to disable
export DISABLE_SSH=1

Then you can proceed with make build as usual.


  • Using baseimage-docker? Tweet about us or follow us on Twitter.
  • Having problems? Want to participate in development? Please post a message at the discussion forum.
  • Looking for a more complete base image, one that is ideal for Ruby, Python, Node.js and Meteor web apps? Take a look at passenger-docker.
  • Need a helping hand? Phusion also offers consulting on a wide range of topics, including Web Development, UI/UX Research & Design, Technology Migration and Auditing.

Please enjoy baseimage-docker, a product by Phusion. :-)
