

Aporeto integration with Kubernetes Network Policies


TL;DR? Jump to the Getting Started section.

Trireme-Kubernetes is a simple, straightforward implementation of the Kubernetes Network Policies specification. It is independent of the networking backend in use and works in any Kubernetes cluster - even in managed clusters such as Google Kubernetes Engine (GKE) or Azure Kubernetes Service (AKS).

One of its most powerful features is that you can deploy it to multiple Kubernetes clusters and secure network traffic between specific pods across those clusters (for example, to secure MySQL replication or a MongoDB replica set).

Trireme-Kubernetes builds upon a powerful concept of identity based on standard Kubernetes labels.
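For instance, the labels attached to a pod become the attributes of its identity. A pod declaring labels like the following (the label names and values here are purely illustrative, not taken from any shipped manifest) can then be matched by identity-based policies:

```yaml
# A pod's standard Kubernetes labels form its Trireme identity.
apiVersion: v1
kind: Pod
metadata:
  name: backend-0
  labels:
    app: mysql         # illustrative label
    role: replication  # illustrative label
spec:
  containers:
    - name: mysql
      image: mysql:5.7
```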

It is based on the Trireme Zero-Trust library.


Architecture and Components

The architecture of Trireme-Kubernetes is the following:


Trireme-Kubernetes consists of several components - not all of them are required:

  • Trireme-Kubernetes: the enforcement service which polices network connections (a.k.a. "flows" in Trireme terminology) based on the standard NetworkPolicies defined on the Kubernetes API.
  • Trireme-CSR (optional): an identity service (essentially a CA) that automatically signs certificates and generates asymmetric key pairs for each Trireme-Kubernetes instance. Note that this is deployed by default; however, you can swap it for a simple pre-shared-key (PSK) deployment if you really wish to do so.

  • Trireme-Statistics (optional): the monitoring and statistics bundle that currently implements the trireme-lib collector interface for InfluxDB. Flows and container events can be displayed in Grafana, Chronograf, or Trireme-Graph - which generates a graph specifically for Kubernetes network flows between pods. Depending on your use case, some or all of these frontend tools can be deployed.


Requirements

  • Trireme requires Kubernetes 1.8.x or later with GA NetworkPolicy support.
  • Trireme requires IPTables with access to the mangle table.
  • Trireme requires the conntrack utility to be installed.
  • Trireme requires access to the Docker event API socket (/var/run/docker.sock by default).
  • Trireme requires privileged access.
  • When deploying with the in-cluster model (default and recommended), Trireme requires access to the in-cluster Kubernetes service API token of its pod. Read-only access to the Kubernetes Namespaces/Pods/NetworkPolicies must be available. NOTE: the default deployment takes care of this.
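The node-level requirements above can be sanity-checked with a short pre-flight script. This is a sketch, not part of the project: the binary names checked (kubectl, conntrack) and the socket path are assumptions based on the requirement list, so adjust them to your environment.

```shell
#!/bin/sh
# Pre-flight check for Trireme node requirements (illustrative sketch).

check_bin() {
  # Report whether a binary is available on PATH.
  if command -v "$1" >/dev/null 2>&1; then echo "found"; else echo "MISSING"; fi
}

check_socket() {
  # Report whether a Unix socket exists at the given path.
  if [ -S "$1" ]; then echo "present"; else echo "MISSING"; fi
}

for bin in kubectl conntrack; do
  printf '%s: %s\n' "$bin" "$(check_bin "$bin")"
done
printf 'docker socket: %s\n' "$(check_socket /var/run/docker.sock)"
```

Run it on each node before deploying; any "MISSING" line points at a requirement to fix first.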

Getting Started

Trireme-Kubernetes is focused on being simple and straightforward to deploy. NOTE: for any serious deployment, follow the extensive deployment guide.

This section provides a quick and easy way to try Trireme-Kubernetes in your existing cluster.

If you are using GKE or another system on which you don't have admin access by default (for RBAC/ABAC), make sure you can configure additional RBAC/ABAC rules. Specifically, on GKE you have to ensure that you have full cluster-admin rights through RBAC. You can grant yourself these rights by running the following command (replace the email address with your account's):

kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=<your-account-email>

1) Check out the deployment files:

git clone https://github.com/aporeto-inc/trireme-kubernetes.git
cd trireme-kubernetes/deployment

2) Create the ConfigMap from this configuration file (keeping everything at its default should be fine):

kubectl create -f trireme-config-cm.yaml

3) Optionally, deploy the Trireme-Statistics bundle now (this will deploy all possible frontend options):

kubectl create -f statistics/

4) Create a dummy self-signed Certificate Authority (CA) for Trireme-CSR (the identity service) and add it as a Kubernetes secret. This step requires the tg utility - quick install:

go get -u

5) Finally, deploy Trireme-CSR and Trireme-Kubernetes:

kubectl create -f trireme/

At this point, the whole framework is up and running and you can access the Services in order to display your NetworkPolicy metrics:

$ kubectl --namespace=kube-system get services

NAME                TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
chronograf          ClusterIP                                 8888/TCP       20h
chronograf-public   LoadBalancer                              80:32153/TCP   20h
grafana             ClusterIP                                 3000/TCP       20h
grafana-public      LoadBalancer                              80:30716/TCP   20h
graph               ClusterIP                                 8080/TCP       20h
graph-public        LoadBalancer                              80:31709/TCP   20h
influxdb            ClusterIP                                 8086/TCP       20h

Getting started with policy enforcement:

You can test your setup with NetworkPolicies by using an example two-tier application such as apobeer:

git clone https://github.com/aporeto-inc/apobeer.git
cd apobeer/kubernetes
kubectl create -f .

The deployed NetworkPolicy allows traffic from the frontend pods to the backend, but not from the external pod.

Kubernetes cluster with Trireme
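Such a policy could look like the following. This is a hedged sketch, not the exact manifest shipped with apobeer: the policy name and the label selectors are assumptions; only the `beer` namespace appears in the commands below.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend   # assumed name
  namespace: beer
spec:
  podSelector:
    matchLabels:
      app: backend               # assumed label on the backend pods
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend      # assumed label; the external pod does not match
```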

As a result, streaming your logs on any frontend pod should give you a stream of Beers:

$ kubectl logs frontend-mffv7 -n beer
The beer of the day is:  "Cantillon Blåbær Lambik"
The beer of the day is:  "Rochefort Trappistes 10"

And as defined by the policy, only the frontend is able to connect. The external pod's logs show that it was unable to connect to the backend:

$ kubectl logs external-bww23 -n beer

Kubernetes and Trireme

Kubernetes does not natively enforce NetworkPolicies and requires a third-party solution/controller to do so.

Unlike most traditional solutions, Trireme is not tied to a complex networking solution. It therefore gives you the freedom to use one networking implementation and, if needed, a different NetworkPolicy provider. It acts as the controller that enforces the defined Kubernetes network policies.

Trireme-Kubernetes does not rely on any distributed control plane or setup, and has no shared datastore to plug into. Enforcement is performed directly on every node without any shared-state propagation (more info at Trireme).

Advanced deployment and installation options

Trireme-Kubernetes can be deployed as:

  • Fully managed by Kubernetes as a DaemonSet. (recommended deployment)
  • A standalone daemon process on each node.
  • A Docker container managed outside Kubernetes on each node.
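The recommended per-node deployment can be sketched as a DaemonSet. This is a minimal illustration, not the manifest shipped in the deployment/ directory: the image name, namespace, and labels are assumptions; the privileged security context and Docker-socket mount reflect the requirements stated above.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: trireme-kubernetes
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: trireme-kubernetes
  template:
    metadata:
      labels:
        app: trireme-kubernetes
    spec:
      hostNetwork: true        # enforcement acts on the node's network stack
      containers:
        - name: trireme
          image: aporeto/trireme-kubernetes:latest  # assumed image name
          securityContext:
            privileged: true   # Trireme requires privileged access
          volumeMounts:
            - name: docker-socket
              mountPath: /var/run/docker.sock  # Docker event API access
      volumes:
        - name: docker-socket
          hostPath:
            path: /var/run/docker.sock
```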

External materials
