netassert: network security testing for DevSecOps workflows

NOTE: this framework is in beta state as we move towards our first 1.0 release. Please file any issues you find and note the version used.

This is a security testing framework for fast, safe iteration on firewall, routing, and NACL rules for Kubernetes (Network Policies, services) and non-containerised hosts (cloud provider instances, VMs, bare metal). It aggressively parallelises `nmap` to test outbound network connections and ports from any accessible host, container, or Kubernetes pod by joining the same network namespace as the instance under test.


The alternative is to `docker exec` into a container and `curl`, or spin up new pods with the same selectors and `curl` from there. This has lots of problems (extra tools in the container image, tool installation despite immutable root filesystems, or egress prevention). netassert aims to fix this:

- does not rely on a dedicated tool speaking the correct target protocol (e.g. doesn't need `curl`, a gRPC client, etc.)
- does not bloat the pod under test or increase the pod's attack surface with non-production tooling
- works with `FROM scratch` containers
- is parallelised to run in near-constant time for large or small test suites
- does not appear to the Kubernetes API server to be changing the system under test
- uses TCP/IP (layers 3 and 4) so does not show up in HTTP logs (e.g. nginx access logs)
- produces TAP output for humans and build servers
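Because TAP is plain text, build servers can post-process it trivially. As a sketch (the test names below are invented examples of TAP-style lines, and this summariser is not part of netassert), a CI step could count passes and failures like this:

```bash
#!/usr/bin/env bash
# Summarise a TAP stream into pass/fail counts (illustrative helper).
summarise_tap() {
  awk '/^ok /{p++} /^not ok /{f++} END { printf "passed=%d failed=%d\n", p, f }'
}

# Invented example lines in TAP format:
printf '%s\n' \
  '1..3' \
  'ok 1 - localhost -> 8.8.8.8 UDP:53' \
  'ok 2 - test-frontend -> test-microservice 80' \
  'not ok 3 - test-frontend -> test-database 80' | summarise_tap
```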

More information and background in this presentation from Configuration Management Camp 2018.


Usage: netassert [options] [filename]

Options:
  --image          Name of test image
  --no-pull        Don't pull test container on target nodes
  --timeout        Integer time to wait before giving up on tests (default 120)
  --ssh-user       SSH user for kubelet host
  --ssh-options    Optional options to pass to the 'gcloud compute ssh' command
  --known-hosts    A known_hosts file (default: ${HOME}/.ssh/known_hosts)
  --debug          More debug
  -h --help        Display this message


Prerequisites on host machine

  • jq
  • yj
    (checked in to root of this repo, direct download)
  • parallel
  • timeout

These will be moved into a container runner in the future
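A quick way to see which of these are already on your PATH (this loop is illustrative, not part of netassert; note that `yj` may live in the repo root rather than on PATH):

```bash
#!/usr/bin/env bash
# Report which host prerequisites are installed.
check_prereqs() {
  for tool in jq yj parallel timeout; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "ok - $tool"
    else
      echo "missing - $tool"
    fi
  done
}

check_prereqs
```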

Prerequisites on target

  • docker

Deploy fake mini-microservices

$ kubectl apply -f resource/deployment/demo.yaml
service/test-database created
deployment.apps/test-database created
service/test-frontend created
deployment.apps/test-frontend created
service/test-microservice created
deployment.apps/test-microservice created

Run netassert (this should fail)

As we haven't applied network policies, this should FAIL.

./netassert test/test-k8s.yaml

Ensure your user has SSH access to the node names listed by `kubectl get nodes`. To change the SSH user, set `--ssh-user MY_USER`. To configure your SSH keys, use DNS-resolvable names (or `/etc/hosts` entries) for the nodes, and/or add login directives to `~/.ssh/config`:

```bash
Host node-1
  # HostName should be the node's IP or resolvable address
  HostName node-1.example.com
  User sublimino
  IdentityFile ~/.ssh/node-1-key.pem
```

Apply network policies

kubectl apply -f resource/net-pol/web-deny-all.yaml
kubectl apply -f resource/net-pol/test-services-allow.yaml

Run netassert (this should pass)

Now that we've applied the policies that these tests reflect, this should pass:

./netassert test/test-k8s.yaml

For manual verification of the test results we can `kubectl exec` into the pods under test (see above for reasons that this is a bad idea).

Manually test the pods

kubectl exec -it test-frontend-$YOUR_POD_ID -- wget -qO- --timeout=2 http://test-microservice
kubectl exec -it test-microservice-$YOUR_POD_ID -- wget -qO- --timeout=2 http://test-database
kubectl exec -it test-database-$YOUR_POD_ID -- wget -qO- --timeout=2 http://test-frontend

These should all pass as they have equivalent network policies.

The network policies do not allow the `test-frontend` pods to communicate with the `test-database` pods.

Let's verify that manually - this should FAIL:

kubectl exec -it test-frontend-$YOUR_POD_ID -- wget -qO- --timeout=2 http://test-database


netassert takes a single YAML file as input. This file lists the hosts to test from, and describes the hosts and ports that each should be able to reach.

It can test from any reachable host, and from inside Kubernetes pods.

A simple example:

```yaml
host: # child keys must be ssh-accessible hosts
  localhost: # host to run test from, must be accessible via SSH
    8.8.8.8: UDP:53 # host and ports to test for access
```

A full example (the target hostnames below are placeholders):

```yaml
host: # child keys must be ssh-accessible hosts
  localhost: # host to run test from, can be a remote host
    8.8.8.8: UDP:53 # host and ports to test from localhost
    example.com: 443 # if no protocol is specified then TCP is implied
    example.org: 80, 81, 443, 22 # ports can be comma or space delimited
    example.net: # this can be anything SSH can access
      - 443 # ports can be provided as a list
      - 80
    localhost: # this tests ports on the local machine
      - 22
      - -999       # ports can be negated with `-`, this checks that 999 TCP is not open
      - -TCP:30731 # TCP is implied, but can be specified
      - -UDP:1234  # UDP must be explicitly stated, otherwise TCP assumed
      - -UDP:555
  example.com: # this must be accessible via ssh (perhaps via ssh-agent), or localhost
    8.8.8.8: UDP:53 # this tests 8.8.8.8 is accessible from example.com
    8.8.4.4: UDP:53 # this tests 8.8.4.4 is accessible from example.com
    example.org: 443 # this tests example.org is accessible from example.com
```
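For intuition, a positive entry such as `- 22` asserts that a TCP connect succeeds, and a negated entry such as `- -999` asserts that it fails. A hand-rolled equivalent of a single check, using bash's `/dev/tcp` (purely illustrative - netassert itself runs its checks from a test container, not like this):

```bash
#!/usr/bin/env bash
# Check one TCP port the way a netassert entry would assert it:
# prints "ok" if the port accepts connections, "not ok" otherwise.
check_tcp() {
  local host=$1 port=$2
  if timeout 2 bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "ok - ${host}:${port} open"
  else
    echo "not ok - ${host}:${port} closed"
  fi
}

check_tcp 127.0.0.1 9   # discard port, almost certainly closed locally
```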

```yaml
k8s: # child keys must be Kubernetes entities
  deployment: # only deployments currently supported
    test-frontend: # pod name, defaults to `default` namespace
      test-microservice: 80 # test-microservice is the DNS name of the target service
      test-database: -80 # test-frontend should not be able to access test-database port 80

    new-namespace:test-microservice: # `new-namespace` is the namespace name
      test-frontend.default: 80 # longer DNS names can be used for other namespaces
      test-frontend.default.svc.cluster.local: 80 # full DNS names can be used
      test-microservice.default.svc.cluster.local: -80
```

Test outbound connections from localhost

To test that `localhost` can reach `8.8.8.8` and `8.8.4.4` on port 53 UDP:

```yaml
host:
  localhost:
    8.8.8.8: UDP:53
    8.8.4.4: UDP:53
```

What this test does:

  1. Starts on the test runner host
  2. Pulls the test container
  3. Checks port 53 UDP is open on the target hosts
  4. Shows TAP results

Test outbound connections from a remote server

Test that a remote host can reach another server on ports 22 and 443 (hostnames below are placeholders):

```yaml
host:
  remote-host.example.com:
    target.example.com:
      - 22
      - 443
```

What this test does:

  1. Starts on the test runner host
  2. SSHes to the remote host
  3. Pulls the test container
  4. Checks the ports are open
  5. Returns TAP results to the test runner host

Test localhost can reach a remote server, and that the remote server can reach another host

With placeholder hostnames:

```yaml
host:
  localhost:
    remote-host.example.com:
      - 22
  remote-host.example.com:
    target.example.com:
      - 22
```

Test a Kubernetes pod

Test that a pod can reach `8.8.8.8` on port 53 UDP:

```yaml
k8s:
  deployment:
    some-namespace:my-pod:
      8.8.8.8: UDP:53
```

Test Kubernetes pods' intercommunication

Test that `my-pod` in namespace `default` can reach `other-pod` in namespace `other-namespace`, and that `other-pod` cannot reach `my-pod`:

```yaml
k8s:
  deployment:
    default:my-pod:
      other-namespace:other-pod: 80

    other-namespace:other-pod:
      default:my-pod: -80
```

Example flow for K8S pods

  1. from test host:
    netassert test/test-k8s.yaml
  2. look up deployments, pods, and namespaces to test in Kube API
  3. for each pod, SSH to a worker node running an instance
  4. connect a test container to the container's network namespace
  5. run that pod's test suite from inside the network namespace
  6. report results via TAP
  7. test host gathers TAP results and reports
  8. the same process applies to non-Kubernetes instances accessible via ssh
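Step 4 relies on Docker's ability to attach one container to another container's network namespace via `--net=container:<id>`. A sketch of how such a command could be assembled (the helper function, image name, and `nmap` arguments here are hypothetical illustrations, not netassert's actual internals):

```bash
#!/usr/bin/env bash
# Build (but do not run) a docker command that would execute a port scan
# from inside the target container's network namespace.
# container_id, image, and ports are caller-supplied placeholders.
build_test_cmd() {
  local container_id=$1 image=$2 ports=$3
  echo "docker run --rm --net=container:${container_id} ${image} nmap -p ${ports} localhost"
}

build_test_cmd abc123def netassert-test:latest 80,443
```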
