Primary source of truth for the Docker "Official Images" program

Docker Official Images

Table of Contents

  1. Docker Official Images
    1. Table of Contents
    2. What are "Official Images"?
    3. Architectures other than amd64?
    4. More FAQs?
    5. Contributing to the standard library
      1. Review Guidelines
        1. Maintainership
        2. Repeatability
        3. Consistency
        4. Clarity
        5. init
        6. Cacheability
        7. Security
        8. Multiple Architectures
      2. Commitment
    6. Library definition files
      1. Filenames
      2. Tags and aliases
      3. Instruction format
      4. Creating a new repository
      5. Adding a new tag in an existing repository (that you're the maintainer of)
      6. Change to a tag in an existing repository (that you're the maintainer of)
    7. Bashbrew

What are "Official Images"?

The Docker Official Images are curated images hosted on Docker Hub. See Docker's documentation for a good high-level overview of the program and its main tenets.

In essence we strive to heed upstream's recommendations on how they intend for their software to be consumed. Many images are maintained in collaboration with the relevant upstream project if not maintained directly by them. Additionally we aim to exemplify the best practices for Dockerfiles to serve as a reference when making or deriving your own images from them.

(If you are a representative of an upstream for which there exists an image and you would like to get involved, please see the Maintainership section below!)

Architectures other than amd64?

Some images have been ported for other architectures, and many of these are officially supported (to various degrees).

  • Architectures officially supported by Docker, Inc. for running Docker:
    • ARMv6 32-bit (arm32v6)
    • ARMv7 32-bit (arm32v7)
    • ARMv8 64-bit (arm64v8)
    • Linux x86-64 (amd64)
    • Windows x86-64 (windows-amd64)
  • Other architectures built by official images (but not officially supported by Docker, Inc.):
    • ARMv5 32-bit (arm32v5)
    • IBM POWER8 (ppc64le)
    • IBM z Systems (s390x)
    • MIPS64 LE (mips64le)
    • RISC-V 64-bit (riscv64)
    • x86/i686 (i386)

As of 2017-09-12, these other architectures are included under the non-prefixed images via "manifest lists" (also known as "indexes" in the OCI image specification), such that, for example,

docker run hello-world
should run as-is on all supported platforms.

If you're curious about how these are built, the build scaffolding lives in a separate repository.

See the multi-arch section below for recommendations on adding more architectures to an official image.

More FAQs?

Yes! We have a dedicated FAQ repository where we try to collect other common questions (both about the program and about our practices).

Contributing to the standard library

Thank you for your interest in the Docker official images project! We strive to make these instructions as simple and straightforward as possible, but if you find yourself lost, don't hesitate to seek us out on Libera.Chat IRC or by creating a GitHub issue here.

Be sure to familiarize yourself with Official Repositories on Docker Hub and the Best practices for writing Dockerfiles in the Docker documentation. These will be the foundation of the review process performed by the official images maintainers. If you'd like the review process to go more smoothly, please ensure that your Dockerfiles adhere to all the points mentioned there, as well as below, before submitting a pull request.

Also, the Hub descriptions for these images are currently stored separately in the docs repository, whose README explains more about how it's structured and how to contribute to it. Please be prepared to submit a PR there as well, pending acceptance of your image here.

Review Guidelines

Because the official images are intended to be learning tools for those new to Docker as well as the base images for advanced users to build their production releases, we review each proposed Dockerfile to ensure that it meets a minimum standard for quality and maintainability. While some of that standard is hard to define (due to subjectivity), as much as possible is defined here, while also adhering to the "Best Practices" where appropriate.

A checklist which may be used by the maintainers during review can be found in the repository.

Maintainership

Version bumps and security fixes should be attended to in a timely manner.

If you do not represent upstream and upstream becomes interested in maintaining the image, steps should be taken to ensure a smooth transition of image maintainership over to upstream.

For upstreams interested in taking over maintainership of an existing repository, the first step is to get involved in the existing repository. Making comments on issues, proposing changes, and making yourself known within the "image community" (even if that "community" is just the current maintainer) are all important places to start to ensure that the transition is unsurprising to existing contributors and users.

When taking over an existing repository, please ensure that the entire Git history of the original repository is kept in the new upstream-maintained repository to make sure the review process isn't stalled during the transition. This is most easily accomplished by forking the new from the existing repository, but can also be accomplished by fetching the commits directly from the original and pushing them into the new repo; for example (with a placeholder URL for the original repository):

git fetch https://example.com/original/repo.git master
git rebase FETCH_HEAD
git push -f

On GitHub, an alternative is to move ownership of the git repository. This can be accomplished without giving either group admin access to the other owner's repository:
  • create temporary intermediary organization
  • give old and new owners admin access to intermediary organization
  • old owner transfers repo ownership to intermediary organization
  • new owner transfers repo ownership to its new home
    • recommend that old owner does not fork new repo back into the old organization to ensure that GitHub redirects will just work


Repeatability

Rebuilding the same Dockerfile should result in the same version of the image being packaged, even if the second build happens several versions later, or the build should fail outright, such that an inadvertent rebuild of a Dockerfile tagged with one version doesn't end up containing a newer one. For example, if using apt to install the main program for the image, be sure to pin it to a specific version (ex:
... apt-get install -y my-package=0.1.0 ...
). For dependent packages installed by apt there is not usually a need to pin them to a version.
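As a sketch of that pinning pattern (the package name, version, and ENV variable are hypothetical, not from a real image):

```dockerfile
# pin the main package to an exact version; transitive dependencies stay unpinned
ENV MY_PACKAGE_VERSION 0.1.0
RUN set -eux; \
    apt-get update; \
    apt-get install -y my-package="$MY_PACKAGE_VERSION"; \
    rm -rf /var/lib/apt/lists/*
```

Keeping the version in an ENV line makes version bumps a one-line diff and doubles as a natural cache bust for the install layer.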

No official images can be derived from, or depend on, non-official images with the following notable exceptions:


Consistency

All official images should provide a consistent interface. A beginning user should be able to
docker run official-image bash
(or sh) without needing to learn about --entrypoint. It is also nice for advanced users to take advantage of entrypoint, so that they can
docker run official-image --arg1 --arg2
without having to specify the binary to execute.
  1. If the startup process does not need arguments, just use a CMD:

    CMD ["irb"]
  2. If there is initialization that needs to be done on start, like creating the initial database, use an ENTRYPOINT script along with CMD:

    ENTRYPOINT ["/docker-entrypoint.sh"]
    CMD ["postgres"]
    1. Ensure that

      docker run official-image bash
      (or sh) works too. The easiest way is to check for the expected command and if it is something else, just
      exec "$@"
      (run whatever was passed, properly keeping the arguments escaped).

      #!/bin/sh
      set -e

      # this if will check if the first argument is a flag
      # but only works if all arguments require a hyphenated flag
      # -v; -SL; -f arg; etc will work, but not arg1 arg2
      if [ "$#" -eq 0 ] || [ "${1#-}" != "$1" ]; then
          set -- mongod "$@"
      fi

      # check for the expected command
      if [ "$1" = 'mongod' ]; then
          # init db stuff....
          # use gosu (or su-exec) to drop to a non-root user
          exec gosu mongod "$@"
      fi

      # else default to run whatever the user wanted like "bash" or "sh"
      exec "$@"

  3. If the image only contains the main executable and its linked libraries (ie no shell) then it is fine to use the executable as the ENTRYPOINT, since that is the only thing that can run:

    ENTRYPOINT ["fully-static-binary"]
    CMD ["--help"]

    The most common indicator of whether this is appropriate is that the image Dockerfile starts with
    FROM scratch


Clarity

Try to make the Dockerfile easy to understand/read. It may be tempting, for the sake of brevity, to put complicated initialization details into a standalone script and merely add a RUN command in the Dockerfile. However, this causes the resulting Dockerfile to be overly opaque, and such Dockerfiles are unlikely to pass review. Instead, it is recommended to put all the commands for initialization into the Dockerfile as appropriate RUN or ENV command combinations. To find good examples, look at the current official images.

Some examples at the time of writing:


init

Following the Docker guidelines it is highly recommended that the resulting image be just one concern per container; predominantly this means just one process per container, so there is no need for a full init system. There are two situations where an init-like process would be helpful for the container. The first is signal handling: if the launched process does not handle SIGTERM by exiting, it will not be killed, since it is PID 1 in the container (see the "NOTE" at the end of the Foreground section in the Docker docs). The second is zombie reaping: if the process spawns child processes and does not properly reap them, it will lead to a full process table, which can prevent the whole system from spawning any new processes. For both of these concerns we recommend tini. It is incredibly small, has minimal external dependencies, fills each of these roles, and does only the necessary parts of reaping and signal forwarding.

Be sure to use tini in your ENTRYPOINT or CMD as appropriate.
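Wiring tini in as PID 1 can be sketched like this (the entrypoint script and command names are hypothetical):

```dockerfile
# tini runs as PID 1, forwards signals to the real command, and reaps zombies
ENTRYPOINT ["tini", "--", "/docker-entrypoint.sh"]
CMD ["mongod"]
```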

It is best to install tini from a distribution-provided package (ex.

apt-get install tini
). If tini is not available in your distribution or is too old, here is a snippet of a Dockerfile to add in tini (assuming TINI_VERSION and TINI_SIGN_KEY are set appropriately):
# Install tini for signal processing and zombie killing
RUN set -eux; \
  wget -O /usr/local/bin/tini "https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini"; \
  wget -O /usr/local/bin/tini.asc "https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini.asc"; \
  export GNUPGHOME="$(mktemp -d)"; \
  gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$TINI_SIGN_KEY"; \
  gpg --batch --verify /usr/local/bin/tini.asc /usr/local/bin/tini; \
  command -v gpgconf && gpgconf --kill all || :; \
  rm -r "$GNUPGHOME" /usr/local/bin/tini.asc; \
  chmod +x /usr/local/bin/tini; \
  tini --version


Cacheability

This is one place that experience ends up trumping documentation for the path to enlightenment, but the following tips might help:

  • Avoid COPY whenever possible, but when necessary, be as specific as possible (ie,
    COPY some/file /somewhere/
    instead of
    COPY . /somewhere

    The reason for this is that the cache for COPY instructions considers file mtime changes to be a cache bust, which can make the cache behavior of docker build unpredictable sometimes, especially when .git is part of what needs to be COPYed (for example).
  • Ensure that lines which are less likely to change come before lines that are more likely to change (with the caveat that each line should generate an image that still runs successfully without assumptions of later lines).

    For example, the line that contains the software version number (such as an ENV MYSOFTWARE_VERSION line) should come after a line that sets up the APT repository source list file (
    RUN echo 'deb https://example.com/mysoftware some-suite main' > /etc/apt/sources.list.d/mysoftware.list
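The ordering advice above can be sketched as a layered Dockerfile (package names, version, and URL are hypothetical):

```dockerfile
# rarely changes: base tooling, cached across most rebuilds
RUN set -eux; apt-get update; apt-get install -y ca-certificates curl; rm -rf /var/lib/apt/lists/*
# changes occasionally: repository setup
RUN echo 'deb https://example.com/mysoftware some-suite main' > /etc/apt/sources.list.d/mysoftware.list
# changes every release: version pin kept last, so the layers above stay cached
ENV MYSOFTWARE_VERSION 4.2.1
RUN set -eux; apt-get update; apt-get install -y mysoftware="$MYSOFTWARE_VERSION"; rm -rf /var/lib/apt/lists/*
```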


Security

Image Build

Dockerfiles should be written to help mitigate interception attacks during build. Our requirements focus on three main objectives: verifying the source, verifying the author, and verifying the content; these are respectively accomplished by the following: using https where possible; importing PGP keys with the full fingerprint in the Dockerfile to check signatures; embedding checksums directly in the Dockerfile. All three should be used when possible. Just https and an embedded checksum can be used when no signature is published. As a last resort, just an embedded checksum is acceptable if the site doesn't have https available and no signature is published.

The purpose in recommending the use of https for downloading needed artifacts is that it ensures that the download is from a trusted source which also happens to make interception much more difficult.

The purpose in recommending PGP signature verification is to ensure that only an authorized user published the given artifact. When importing PGP keys, please use the keys.openpgp.org service when possible (preferring keyserver.ubuntu.com otherwise). See also the FAQ section on keys and verification.

The purpose in recommending checksum verification is to verify that the artifact is as expected. This ensures that when remote content changes, the Dockerfile will also change and provide a natural
docker build
cache bust. As a bonus, this also prevents accidentally downloading newer-than-expected artifacts on poorly versioned files.
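The checksum mechanism itself is plain sha256sum/sha512sum behavior and can be sketched outside of a build (the artifact here is a locally created stand-in, not a real download):

```shell
# create a stand-in "downloaded" artifact
printf 'example payload\n' > artifact.tar.gz
# in a real Dockerfile this value is hard-coded (e.g. via ENV), not computed at build time
ARTIFACT_SHA256="$(sha256sum artifact.tar.gz | cut -d' ' -f1)"
# verify: prints "artifact.tar.gz: OK" and exits non-zero on mismatch
echo "$ARTIFACT_SHA256 *artifact.tar.gz" | sha256sum --strict --check
```

Because the expected checksum is baked into the Dockerfile, any change to the remote file changes the Dockerfile too, which is exactly what busts the build cache.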

Below are some examples:

  • Preferred: download over https, PGP key full fingerprint import and .asc verification, embedded checksum verified.

    ENV PYTHON_DOWNLOAD_SHA512 (sha512-value-here)
    RUN set -eux; \
        curl -fL "https://www.python.org/ftp/python/$PYTHON_VERSION/Python-$PYTHON_VERSION.tar.xz" -o python.tar.xz; \
        curl -fL "https://www.python.org/ftp/python/$PYTHON_VERSION/Python-$PYTHON_VERSION.tar.xz.asc" -o python.tar.xz.asc; \
        export GNUPGHOME="$(mktemp -d)"; \
    # gpg: key F73C700D: public key "Larry Hastings" imported
        gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys 97FC712E4C024BBEA48A61ED3A5CA953F73C700D; \
        gpg --batch --verify python.tar.xz.asc python.tar.xz; \
        rm -r "$GNUPGHOME" python.tar.xz.asc; \
        echo "$PYTHON_DOWNLOAD_SHA512 *python.tar.xz" | sha512sum --strict --check; \
        # install
        # install
  • Alternate: full key fingerprint imported to apt which will check signatures and checksums when packages are downloaded and installed.

    RUN set -eux; \
        key='A4A9406876FCBD3C456770C88C718D3B5072E1F5'; \
        export GNUPGHOME="$(mktemp -d)"; \
        gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key"; \
        gpg --batch --armor --export "$key" > /etc/apt/trusted.gpg.d/mysql.gpg.asc; \
        gpgconf --kill all; \
        rm -rf "$GNUPGHOME"; \
        apt-key list > /dev/null

    RUN set -eux; \
        echo "deb http://repo.mysql.com/apt/debian/ stretch mysql-${MYSQL_MAJOR}" > /etc/apt/sources.list.d/mysql.list; \
        apt-get update; \
        apt-get install -y mysql-community-client="${MYSQL_VERSION}" mysql-community-server-core="${MYSQL_VERSION}"; \
        rm -rf /var/lib/apt/lists/*
        # ...

    (As a side note,
    rm -rf /var/lib/apt/lists/*
    is roughly the opposite of
    apt-get update
    -- it ensures that the layer doesn't include the extra ~8MB of APT package list data, and enforces appropriate
    apt-get update
    usage.)
  • Less Secure Alternate: embed the checksum into the Dockerfile.

    ENV RUBY_DOWNLOAD_SHA256 (sha256-value-here)
    RUN set -eux; \
        curl -fL -o ruby.tar.gz "https://cache.ruby-lang.org/pub/ruby/$RUBY_MAJOR/ruby-$RUBY_VERSION.tar.gz"; \
        echo "$RUBY_DOWNLOAD_SHA256 *ruby.tar.gz" | sha256sum --strict --check; \
        # install
  • Unacceptable: download the file over http(s) with no verification.

    RUN curl -fL "https://julialang-s3.julialang.org/bin/linux/x64/${JULIA_VERSION%[.-]*}/julia-${JULIA_VERSION}-linux-x86_64.tar.gz" | tar ... \
        # install
Runtime Configuration

By default, Docker containers are executed with reduced privileges: whitelisted Linux capabilities, control groups, and a default seccomp profile (1.10+ w/ host support). Software running in a container may require additional privileges in order to function correctly, and there are a number of command line options to customize container execution. See the
docker run
reference and Seccomp for Docker for reference.

Official Repositories that require additional privileges should specify the minimal set of command line options for the software to function, and may still be rejected if this introduces significant portability or security issues. In general, --privileged is not allowed, but a combination of --cap-add and --device options may be acceptable. Additionally, --volume can be tricky as there are many host filesystem locations that introduce portability/security issues (e.g. X11 socket).
Security Releases

For image updates which constitute a security fix, there are a few things we recommend to help ensure your update is merged, built, and released as quickly as possible:

  1. Contact us a few days in advance to give us a heads up and a timing estimate (so we can schedule time for the incoming update appropriately).
  2. Include [security] in the title of your pull request (for example,
    [security] Update FooBar to 1.2.5, 1.3.7, 2.0.1
  3. Keep the pull request free of changes that are unrelated to the security fix -- we'll still be doing review of the update, but it will be expedited so this will help us help you.
  4. Be active and responsive to comments on the pull request after it's opened (as usual, but even more so if the timing of the release is of importance).

Multiple Architectures

Each repo can specify multiple architectures for any and all tags. If no architecture is specified, images are built in Linux on amd64 (aka x86-64). To specify more or different architectures, use the Architectures field (comma-delimited list, whitespace is trimmed). Valid architectures are:
  • amd64
  • arm32v6
  • arm32v7
  • arm64v8
  • i386
  • mips64le
  • ppc64le
  • riscv64
  • s390x
  • windows-amd64


The Architectures of any given tag must be a strict subset of the Architectures of the tag it is FROM.
Images must have a single Dockerfile per entry in the library file that can be used for multiple architectures. This means that each supported architecture will have the same FROM line (e.g.
FROM debian:buster
). See the library files of existing multi-architecture images for examples using one Dockerfile per entry, and see their respective git repos for example Dockerfiles.
If different parts of the Dockerfile only happen in one architecture or another, use control flow (e.g. if/case) along with
dpkg --print-architecture
or
apk --print-arch
to detect the userspace architecture. Only use uname for architecture detection when more accurate tools cannot be installed. See golang for an example where some architectures require building binaries from the upstream source packages and some merely download the binary release.
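A minimal sketch of that detection logic (the download URL and suffixes are hypothetical; a real image would cover every architecture it declares):

```shell
# prefer dpkg on Debian-based images; fall back to uname only when it is unavailable
arch="$(dpkg --print-architecture 2>/dev/null || uname -m)"
case "$arch" in
    amd64 | x86_64)  suffix='linux-x86_64' ;;
    arm64 | aarch64) suffix='linux-aarch64' ;;
    *) echo >&2 "error: unsupported architecture: $arch"; exit 1 ;;
esac
echo "https://example.com/download/tool-$suffix.tar.gz"
```

Failing loudly on an unknown architecture is deliberate: a build that silently downloads the wrong binary is worse than one that stops.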

For base images like debian it will be necessary to have a different Dockerfile and build context in order to ADD architecture-specific binaries, and this is a valid exception to the above. Since these images use the same Tags, they need to be in the same entry. Use the architecture-specific fields for GitRepo, GitFetch, GitCommit, and Directory, which are the architecture concatenated with a hyphen (-) and the field (e.g. arm32v7-GitCommit). Any architecture that does not have an architecture-specific field will use the default field (e.g. no arm32v7-Directory means Directory will be used for arm32v7). See the debian and hello-world files in the library for examples. The following is an example for hello-world:
Maintainers: Tianon Gravi (@tianon),
             Joseph Ferguson (@yosifkit)
GitCommit: 7d0ee592e4ed60e2da9d59331e16ecdcadc1ed87

Tags: latest
Architectures: amd64, arm32v5, arm32v7, arm64v8, ppc64le, s390x
# all the same commit; easy for us to generate this way since they could be different
amd64-GitCommit: 7d0ee592e4ed60e2da9d59331e16ecdcadc1ed87
amd64-Directory: amd64/hello-world
arm32v5-GitCommit: 7d0ee592e4ed60e2da9d59331e16ecdcadc1ed87
arm32v5-Directory: arm32v5/hello-world
arm32v7-GitCommit: 7d0ee592e4ed60e2da9d59331e16ecdcadc1ed87
arm32v7-Directory: arm32v7/hello-world
arm64v8-GitCommit: 7d0ee592e4ed60e2da9d59331e16ecdcadc1ed87
arm64v8-Directory: arm64v8/hello-world
ppc64le-GitCommit: 7d0ee592e4ed60e2da9d59331e16ecdcadc1ed87
ppc64le-Directory: ppc64le/hello-world
s390x-GitCommit: 7d0ee592e4ed60e2da9d59331e16ecdcadc1ed87
s390x-Directory: s390x/hello-world

Tags: nanoserver
Architectures: windows-amd64
# if there is only one architecture, you can use the unprefixed fields
Directory: amd64/hello-world/nanoserver
# or use the prefixed versions
windows-amd64-GitCommit: 7d0ee592e4ed60e2da9d59331e16ecdcadc1ed87
Constraints: nanoserver

See the instruction format section for more information on the format of the library file.


Commitment

Proposing a new official image should not be undertaken lightly. We expect and require a commitment to maintain your image (including and especially timely updates as appropriate, as noted above).

Library definition files

The library definition files are plain text files found in the library/ directory of the official-images repository. Each library file controls the current "supported" set of image tags that appear on the Docker Hub description. Tags that are removed from a library file do not get removed from the Docker Hub, so that old versions can continue to be available for use, but are not maintained by upstream or the maintainer of the official image. Tags in the library file are only built through an update to that library file or as a result of its base image being updated (ie, an image
FROM debian:buster
would be rebuilt when debian:buster is built). Only what is in the library file will be rebuilt when a base has updates.

Given this policy, it is worth clarifying a few cases: backfilled versions, release candidates, and continuous integration builds. When a new repository is proposed, it is common to include some older unsupported versions in the initial pull request with the agreement to remove them right after acceptance. Don't confuse this with a comprehensive historical archive, which is not the intention. Another common case where the term "supported" is stretched a bit is with release candidates. A release candidate is really just a naming convention for what are expected to be shorter-lived releases, so they are totally acceptable and encouraged. Unlike a release candidate, continuous integration builds which have a fully automated release cycle based on code commits or a regular schedule are not appropriate.

It is highly recommended that you browse some of the existing library file contents (and history, to get a feel for how they change over time) before creating a new one, to become familiar with the prevailing conventions and further help streamline the review process (so that we can focus on content instead of esoteric formatting or tag usage/naming).


Filenames

The filename of a definition file will determine the name of the image repository it creates on Docker Hub. For example, the debian file will create tags in the debian repository.

Tags and aliases

The tags of a repository should reflect upstream's versions or variations. For example, Ubuntu 14.04 is also known as Ubuntu Trusty Tahr, but often as simply Ubuntu Trusty (especially in usage), so 14.04 (version number) and trusty (version name) are appropriate aliases for the same image contents. In Docker, the latest tag is a special case, but it's a bit of a misnomer; latest really is the "default" tag. When one does
docker run xyz
, Docker interprets that to mean
docker run xyz:latest
. Given that background, no other tag ever contains the string latest, since it's not something users are expected or encouraged to actually type out (ie, xyz:latest should really be used as simply xyz). Put another way, an alias for the "highest 2.2-series release of XYZ" should be xyz:2.2, not xyz:2.2-latest. Similarly, if there is an Alpine variant of xyz:latest, it should be aliased as xyz:alpine, not xyz:alpine-latest or xyz:latest-alpine.

It is strongly encouraged that version number tags be given aliases which make it easy for the user to stay on the "most recent" release of a particular series. For example, given currently supported XYZ Software versions of 2.3.7 and 2.2.4, suggested aliases would be
Tags: 2.3.7, 2.3, 2, latest
and
Tags: 2.2.4, 2.2
, respectively. In this example, the user can use xyz:2.2 to easily use the most recent patch release of the 2.2 series, or xyz:2 if less granularity is needed (Python is a good example of where that's most obviously useful -- python:2 and python:3 are very different, and can be thought of as the latest tag for each of the major release tracks of Python).

As described above, latest is really "default", so the image that it is an alias for should reflect which version or variation of the software users should use if they do not know or do not care which version they use. Using Ubuntu as an example, ubuntu:latest points to the most recent LTS release, given that it is what the majority of users should be using if they know they want Ubuntu but do not know or care which version (especially considering it will be the most "stable" and well-supported release at any given time).

Instruction format

The manifest file format is officially based on RFC 2822, and as such should be familiar to folks who are already familiar with the "headers" of many popular internet protocols/formats such as HTTP or email.

The primary additions are inspired by the way Debian commonly uses 2822 -- namely, lines starting with # are ignored and "entries" are separated by a blank line.

The first entry is the "global" metadata for the image. The only required field in the global entry is Maintainers, whose value is comma-separated in the format of
Name <email> (@github)
or
Name (@github)
. Any field specified in the global entry will be the default for the rest of the entries and can be overridden in an individual entry.
# this is a comment and will be ignored
Maintainers: John Smith (@example-jsmith),
             Anne Smith (@example-asmith)
GitCommit: deadbeefdeadbeefdeadbeefdeadbeefdeadbeef

# this is also a comment, and will also be ignored
Tags: 1.2.3, 1.2, 1, latest
Directory: 1

Tags: 2.0-rc1, 2.0-rc, 2-rc, rc
GitRepo: https://github.com/example-jsmith/docker-example.git
GitFetch: refs/heads/2.0-pre-release
GitCommit: beefdeadbeefdeadbeefdeadbeefdeadbeefdead
Directory: 2

Bashbrew will fetch code out of the Git repository (GitRepo) at the commit specified (GitCommit). If the commit referenced is not available by fetching master of the associated GitRepo, it becomes necessary to supply a value for GitFetch in order to tell Bashbrew what ref to fetch in order to get the commit necessary.

The built image will be tagged as file-name:tag; for example, a library file named golang with a Tags value of
1.6, 1, latest
will create the tags golang:1.6, golang:1, and golang:latest.
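That naming rule can be sketched with ordinary shell text processing (the file name and tag list are hypothetical):

```shell
# expand a library file name plus its Tags value into full image references
repo='golang'
tags='1.6, 1, latest'
echo "$tags" | tr ',' '\n' | tr -d ' ' | sed "s/^/$repo:/"
# prints:
#   golang:1.6
#   golang:1
#   golang:latest
```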

Optionally, if Directory is present, Bashbrew will look for the Dockerfile inside the specified subdirectory instead of at the root (and Directory will be used as the "context" for the build instead of the top level of the repository).

See the multi-arch section for details on how to specify a different GitRepo, GitFetch, GitCommit, or Directory for a specific architecture.

Creating a new repository

  • Create a new file in the library/ folder. Its name will be the name of your repository on the Hub.
  • Add your tag definitions using the appropriate syntax (see above).
  • Create a pull request adding the file from your forked repository to this one. Please be sure to add details as to what your repository does.

Adding a new tag in an existing repository (that you're the maintainer of)

  • Add your tag definition using the instruction format documented above.
  • Create a pull request from your Git repository to this one. Please be sure to add details about what's new, if possible.

Change to a tag in an existing repository (that you're the maintainer of)

  • Update the relevant tag definition using the instruction format documented above.
  • Create a pull request from your Git repository to this one. Please be sure to add details about what's changed, if possible.


Bashbrew

Bashbrew (bashbrew) is a tool for cloning, building, tagging, and pushing the Docker official images. See the Bashbrew README for more information.
