Getting Started

Learn how to quickly install and start using Tetragon.

1 - Quick Kubernetes Install

Discover and experiment with Tetragon in a Kubernetes environment

Create a cluster

If you don’t have a Kubernetes cluster yet, you can use the instructions below to create one locally or with a managed Kubernetes service:

The following commands create a single node Kubernetes cluster using Google Kubernetes Engine. See Installing Google Cloud SDK for instructions on how to install gcloud and prepare your account.

export NAME="$(whoami)-$RANDOM"
export ZONE="us-west2-a"
gcloud container clusters create "${NAME}" --zone ${ZONE} --num-nodes=1
gcloud container clusters get-credentials "${NAME}" --zone ${ZONE}

The following commands create a single node Kubernetes cluster using Azure Kubernetes Service. See Azure Cloud CLI for instructions on how to install az and prepare your account.

export NAME="$(whoami)-$RANDOM"
export AZURE_RESOURCE_GROUP="${NAME}-group"
az group create --name "${AZURE_RESOURCE_GROUP}" -l westus2
az aks create --resource-group "${AZURE_RESOURCE_GROUP}" --name "${NAME}"
az aks get-credentials --resource-group "${AZURE_RESOURCE_GROUP}" --name "${NAME}"

The following commands create a single node Kubernetes cluster with eksctl using Amazon Elastic Kubernetes Service. See eksctl installation for instructions on how to install eksctl and prepare your account.

export NAME="$(whoami)-$RANDOM"
eksctl create cluster --name "${NAME}"

Tetragon’s correct operation depends on access to the host /proc filesystem. The following commands create a single node Kubernetes cluster using kind (on a Linux host) and configure both kind and Tetragon accordingly.

cat <<EOF > kind-config.yaml
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: /proc
        containerPath: /procHost
EOF
kind create cluster --config kind-config.yaml
EXTRA_HELM_FLAGS=(--set tetragon.hostProcPath=/procHost) # flags for helm install

The commands in this Getting Started guide assume you use a single-node Kubernetes cluster. If you use a cluster with multiple nodes, be aware that some of the commands shown need to be modified. We call out these changes where they are necessary.
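
Whichever environment you chose, you can confirm the cluster is reachable and has a single node before moving on (a quick sanity check, not required by the rest of the guide):

kubectl get nodes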

Deploy Tetragon

To install and deploy Tetragon, run the following commands:

helm repo add cilium https://helm.cilium.io
helm repo update
helm install tetragon ${EXTRA_HELM_FLAGS[@]} cilium/tetragon -n kube-system
kubectl rollout status -n kube-system ds/tetragon -w
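
Once the rollout finishes, you can verify that the Tetragon Pod is up. The label used below is the same one this guide later relies on to find Tetragon Pods; adjust it if you customized the release:

kubectl get pods -n kube-system -l app.kubernetes.io/name=tetragon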

By default, Tetragon filters kube-system events to reduce noise in the event logs. See the Concepts and Advanced Configuration sections to configure these parameters.

Deploy demo application

To explore Tetragon it is helpful to have a sample workload. Here we use Cilium’s demo application, but any workload would work equally well:

kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.15.3/examples/minikube/http-sw-app.yaml

Before going forward, verify that all pods are up and running - it might take several seconds for some pods to satisfy all the dependencies:

kubectl get pods

The output should be similar to this:

NAME                         READY   STATUS    RESTARTS   AGE
deathstar-6c94dcc57b-7pr8c   1/1     Running   0          10s
deathstar-6c94dcc57b-px2vw   1/1     Running   0          10s
tiefighter                   1/1     Running   0          10s
xwing                        1/1     Running   0          10s
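
If you prefer to block until the demo Pods are ready rather than polling, kubectl wait can do that (a convenience only; the timeout value below is arbitrary):

kubectl wait --for=condition=Ready pods --all --timeout=120s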

What’s Next

Check for execution events.

2 - Quick Local Docker Install

Discover and experiment with Tetragon on your local Linux host

Start Tetragon

The easiest way to start experimenting with Tetragon is to run it via Docker using the released container images.

docker run -d --name tetragon --rm --pull always \
    --pid=host --cgroupns=host --privileged             \
    -v /sys/kernel/btf/vmlinux:/var/lib/tetragon/btf    \
    quay.io/cilium/tetragon:latest

This will start Tetragon in a privileged container running in the background. Running Tetragon as a privileged container is required to load and attach BPF programs. See the Installation and Configuration section for more details.
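
To confirm the agent came up cleanly, you can check the container logs with standard Docker tooling:

docker logs tetragon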

Run demo application

To explore Tetragon it is helpful to have a sample workload. You can use Cilium’s demo application with this Docker Compose file, but any workload would work equally well:

curl -LO https://github.com/cilium/tetragon/raw/main/examples/quickstart/docker-compose.yaml
docker compose -f docker-compose.yaml up -d

You can use docker container ls to verify that the containers are up and running. It might take a short amount of time for the container images to download.

CONTAINER ID   IMAGE                             COMMAND                  CREATED          STATUS          PORTS                                   NAMES
cace79752d94   quay.io/cilium/json-mock:v1.3.8   "bash /run.sh"           34 seconds ago   Up 33 seconds                                           starwars-xwing-1
4f8422b43b5b   quay.io/cilium/json-mock:v1.3.8   "bash /run.sh"           34 seconds ago   Up 33 seconds                                           starwars-tiefighter-1
7b60618ca8bd   quay.io/cilium/starwars:v2.1      "/starwars-docker --…"   34 seconds ago   Up 33 seconds   0.0.0.0:8080->80/tcp, :::8080->80/tcp   starwars-deathstar-1

What’s next

Check for execution events.

3 - Execution Monitoring

Execution traces with Tetragon

At the core of Tetragon is the tracking of all executions in a Kubernetes cluster, virtual machines, and bare metal systems. This creates the foundation that allows Tetragon to attribute all system behavior back to a specific binary and its associated metadata (container, Pod, Node, and cluster).

Observe Tetragon execution events

Tetragon exposes execution events as JSON logs and over a gRPC stream, so you can observe all executions in the system.
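
With the default Helm values, the Tetragon DaemonSet also runs a stdout exporter that writes the raw JSON events, so you can inspect them directly with kubectl logs (a sketch that assumes the default export-stdout container name from the Helm chart):

kubectl logs -n kube-system ds/tetragon -c export-stdout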

Use the following instructions to observe execution events. These instructions assume you have deployed the Cilium demo application in your environment.

For a single node Kubernetes cluster, you can target the Tetragon DaemonSet with a kubectl exec command:

kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact --pods xwing

This command runs tetra getevents -o compact --pods xwing in the single Pod that is a member of the Tetragon DaemonSet. Because there is only a single node in the cluster, it is guaranteed that the “xwing” Pod will also be running on the same node and that Tetragon will be able to capture and report execution events.

In a cluster with multiple nodes, you will need to ensure that the Tetragon Pod you use is located on the same node as the “xwing” Pod, so that it can capture the execution events.

You can use this command to get the name of the Tetragon Pod that is on the same Kubernetes node as the “xwing” Pod:

POD=$(kubectl -n kube-system get pods -l 'app.kubernetes.io/name=tetragon' -o name --field-selector spec.nodeName=$(kubectl get pod xwing -o jsonpath='{.spec.nodeName}'))

Once you have identified the matching Pod, target it with kubectl exec to run the tetra getevents command.

kubectl exec -ti -n kube-system $POD -c tetragon -- tetra getevents -o compact --pods xwing

Because the Tetragon Pod where you are running tetra getevents is on the same node as the “xwing” Pod, the command will return the execution events captured by Tetragon.

If you are running Tetragon in Docker, use docker exec to run the same command against the Tetragon container:

docker exec tetragon tetra getevents -o compact

The tetra getevents -o compact command returns a compact form of the execution events. To trigger an execution event, run a curl command inside the “xwing” Pod/container.

kubectl exec -ti xwing -- bash -c 'curl https://ebpf.io/applications/#tetragon'
docker exec -ti starwars-xwing-1 curl https://ebpf.io/applications/#tetragon

The CLI will print a compact form of the event to the terminal similar to the following output. The example output below is from Kubernetes; the Docker output is very similar.

🚀 process default/xwing /bin/bash -c "curl https://ebpf.io/applications/#tetragon"
🚀 process default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon
💥 exit    default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon 60

The compact execution event contains the event type, the pod name, the binary, and the args. The exit event includes the return code; in the case of the curl command above, the return code was 60 (for curl, exit code 60 indicates that the remote TLS certificate could not be verified).

For the complete execution event in JSON format, remove the -o compact option from the tetra getevents command.

kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents --pods xwing
kubectl exec -ti -n kube-system $POD -c tetragon -- tetra getevents --pods xwing
docker exec -ti tetragon tetra getevents

The complete execution event includes many more details about the binary and the event. See below for a full example of the execution event generated by the curl command used above. In a Kubernetes environment this includes Kubernetes metadata such as the Pod, Container, Namespace, and Labels, among other useful fields.

Full output of a process execution event

{
  "process_exec": {
    "process": {
      "exec_id": "Z2tlLWpvaG4tNjMyLWRlZmF1bHQtcG9vbC03MDQxY2FjMC05czk1OjEzNTQ4Njc0MzIxMzczOjUyNjk5",
      "pid": 52699,
      "uid": 0,
      "cwd": "/",
      "binary": "/usr/bin/curl",
      "arguments": "https://ebpf.io/applications/#tetragon",
      "flags": "execve rootcwd",
      "start_time": "2023-10-06T22:03:57.700327580Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
        "name": "xwing",
        "container": {
          "id": "containerd://551e161c47d8ff0eb665438a7bcd5b4e3ef5a297282b40a92b7c77d6bd168eb3",
          "name": "spaceship",
          "image": {
            "id": "docker.io/tgraf/netperf@sha256:8e86f744bfea165fd4ce68caa05abc96500f40130b857773186401926af7e9e6",
            "name": "docker.io/tgraf/netperf:latest"
          },
          "start_time": "2023-10-06T21:52:41Z",
          "pid": 49
        },
        "pod_labels": {
          "app.kubernetes.io/name": "xwing",
          "class": "xwing",
          "org": "alliance"
        },
        "workload": "xwing"
      },
      "docker": "551e161c47d8ff0eb665438a7bcd5b4",
      "parent_exec_id": "Z2tlLWpvaG4tNjMyLWRlZmF1bHQtcG9vbC03MDQxY2FjMC05czk1OjEzNTQ4NjcwODgzMjk5OjUyNjk5",
      "tid": 52699
    },
    "parent": {
      "exec_id": "Z2tlLWpvaG4tNjMyLWRlZmF1bHQtcG9vbC03MDQxY2FjMC05czk1OjEzNTQ4NjcwODgzMjk5OjUyNjk5",
      "pid": 52699,
      "uid": 0,
      "cwd": "/",
      "binary": "/bin/bash",
      "arguments": "-c \"curl https://ebpf.io/applications/#tetragon\"",
      "flags": "execve rootcwd clone",
      "start_time": "2023-10-06T22:03:57.696889812Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
        "name": "xwing",
        "container": {
          "id": "containerd://551e161c47d8ff0eb665438a7bcd5b4e3ef5a297282b40a92b7c77d6bd168eb3",
          "name": "spaceship",
          "image": {
            "id": "docker.io/tgraf/netperf@sha256:8e86f744bfea165fd4ce68caa05abc96500f40130b857773186401926af7e9e6",
            "name": "docker.io/tgraf/netperf:latest"
          },
          "start_time": "2023-10-06T21:52:41Z",
          "pid": 49
        },
        "pod_labels": {
          "app.kubernetes.io/name": "xwing",
          "class": "xwing",
          "org": "alliance"
        },
        "workload": "xwing"
      },
      "docker": "551e161c47d8ff0eb665438a7bcd5b4",
      "parent_exec_id": "Z2tlLWpvaG4tNjMyLWRlZmF1bHQtcG9vbC03MDQxY2FjMC05czk1OjEzNTQ4NjQ1MjQ1ODM5OjUyNjg5",
      "tid": 52699
    }
  },
  "node_name": "gke-john-632-default-pool-7041cac0-9s95",
  "time": "2023-10-06T22:03:57.700326678Z"
}
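
Because the output is JSON, it is straightforward to post-process. For example, to pull just the executed binary out of each exec event you could pipe the stream through jq (assuming jq is installed on your workstation):

kubectl exec -n kube-system ds/tetragon -c tetragon -- tetra getevents --pods xwing | jq 'select(.process_exec != null) | .process_exec.process.binary'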

What’s next

Execution events are the most basic events Tetragon can produce. To see how to use tracing policies to enable file monitoring, see the File Access Monitoring section of the Getting Started guide. To see a network policy, check the Network Monitoring section.

4 - File Access Monitoring

File access traces with Tetragon

Tracing policies can be added to Tetragon through YAML configuration files that extend Tetragon’s base execution tracing capabilities. These policies perform filtering in the kernel so that only interesting events are published to user space by the BPF programs, which keeps overhead low even on busy systems.

The instructions below extend the example from Execution Monitoring with a policy to monitor sensitive files in Linux. The policy used is file_monitoring.yaml, which you can review and extend as needed. Files monitored here serve as a good base set of files.
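
To give a sense of what such a policy looks like, below is an abbreviated, illustrative sketch in the TracingPolicy format. It is not the full quickstart policy; see file_monitoring.yaml itself for the complete hook configuration and path list.

apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: file-monitoring-example
spec:
  kprobes:
  - call: "security_file_permission"   # kernel hook invoked on file reads and writes
    syscall: false
    args:
    - index: 0
      type: "file"                     # the file being accessed
    - index: 1
      type: "int"                      # the requested access mask
    selectors:
    - matchArgs:
      - index: 0
        operator: "Prefix"             # match file paths by prefix
        values:                        # illustrative paths; the real policy lists more
        - "/etc/shadow"
        - "/etc/passwd"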

Apply the tracing policy

To apply the policy in Kubernetes, use kubectl. In Kubernetes, the policy references a Custom Resource Definition (CRD) installed by Tetragon. Docker uses the same YAML configuration file as Kubernetes, but this file is loaded from disk when the Docker container is launched.

Note that these instructions assume you’ve installed the demo application, as outlined in either the Quick Kubernetes Install or the Quick Docker Install section.

kubectl apply -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/file_monitoring.yaml
wget https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/file_monitoring.yaml
docker stop tetragon
docker run -d --name tetragon --rm --pull always \
  --pid=host --cgroupns=host --privileged \
  -v ${PWD}/file_monitoring.yaml:/etc/tetragon/tetragon.tp.d/file_monitoring.yaml \
  -v /sys/kernel/btf/vmlinux:/var/lib/tetragon/btf \
  quay.io/cilium/tetragon:latest
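
You can check that the policy actually loaded before generating events. In Kubernetes the policy is a custom resource, so kubectl can list it; for Docker, recent tetra versions include a tracingpolicy subcommand (both commands are assumptions about your installed versions):

kubectl get tracingpolicies
docker exec tetragon tetra tracingpolicy list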

Observe Tetragon file access events

With the tracing policy applied you can attach tetra to observe events again:

kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact --pods xwing
POD=$(kubectl -n kube-system get pods -l 'app.kubernetes.io/name=tetragon' -o name --field-selector spec.nodeName=$(kubectl get pod xwing -o jsonpath='{.spec.nodeName}'))
kubectl exec -ti -n kube-system $POD -c tetragon -- tetra getevents -o compact --pods xwing
docker exec -ti tetragon tetra getevents -o compact

To generate an event, try to read a sensitive file referenced in the policy.

kubectl exec -ti xwing -- bash -c 'cat /etc/shadow'
cat /etc/shadow

This will generate a read event (Docker events will omit Kubernetes metadata shown below) that looks something like this:

🚀 process default/xwing /bin/bash -c "cat /etc/shadow"
🚀 process default/xwing /bin/cat /etc/shadow
📚 read    default/xwing /bin/cat /etc/shadow
💥 exit    default/xwing /bin/cat /etc/shadow 0

Per the tracing policy, Tetragon generates write events in response to attempts to write to sensitive directories (for example, attempting to write in the /etc directory).

kubectl exec -ti xwing -- bash -c 'echo foo >> /etc/bar'
echo foo >> /etc/bar

In response, you will see output similar to the following (Docker events do not include the Kubernetes metadata shown here).

🚀 process default/xwing /bin/bash -c "echo foo >>  /etc/bar"
📝 write   default/xwing /bin/bash /etc/bar
📝 write   default/xwing /bin/bash /etc/bar
💥 exit    default/xwing /bin/bash -c "echo foo >>  /etc/bar" 0

What’s next

To explore tracing policies for networking, see the Network Monitoring section of the Getting Started guide. To dive into the details of policies and events, see the Concepts section of the documentation.

5 - Network Monitoring

Network access traces with Tetragon

In addition to file access monitoring, Tetragon’s tracing policies also support monitoring network access. In this section, you will see how to monitor network traffic to “external” destinations (destinations that are outside the Kubernetes cluster or external to the Docker host where Tetragon is running). These instructions assume you already have Tetragon running in either Kubernetes or Docker, and that you have deployed the Cilium demo application.

Monitoring Kubernetes network access

First, you’ll need to find the pod CIDR and service CIDR in use. In many cases the pod CIDR is relatively easy to find.

export PODCIDR=`kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'`

You can fetch the service CIDR from the cluster in some environments. When working with a managed Kubernetes offering (GKE, EKS, or AKS) you will need the environment variables used when you created the cluster. Pick the command below that matches your environment (GKE, kind, EKS, and AKS, in that order):

export SERVICECIDR=$(gcloud container clusters describe ${NAME} --zone ${ZONE} | awk '/servicesIpv4CidrBlock/ { print $2; }')
export SERVICECIDR=$(kubectl describe pod -n kube-system kube-apiserver-kind-control-plane | awk -F= '/--service-cluster-ip-range/ {print $2; }')
export SERVICECIDR=$(aws eks describe-cluster --name ${NAME} | jq -r '.cluster.kubernetesNetworkConfig.serviceIpv4Cidr')
export SERVICECIDR=$(az aks show --name ${NAME} --resource-group ${AZURE_RESOURCE_GROUP} | jq -r '.networkProfile.serviceCidr')
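
Before substituting these values into the policy, it is worth echoing both variables to make sure they are non-empty, since envsubst will silently substitute an empty string otherwise:

echo "PODCIDR=${PODCIDR} SERVICECIDR=${SERVICECIDR}"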

Once you have this information, you can customize a policy to exclude network traffic to the networks stored in the PODCIDR and SERVICECIDR environment variables. Use envsubst to do this, and then apply the policy to your Kubernetes cluster with kubectl apply:

wget https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/network_egress_cluster.yaml
envsubst < network_egress_cluster.yaml | kubectl apply -f -

Once the tracing policy is applied, you can attach tetra to observe events again:

kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact --pods xwing --processes curl
POD=$(kubectl -n kube-system get pods -l 'app.kubernetes.io/name=tetragon' -o name --field-selector spec.nodeName=$(kubectl get pod xwing -o jsonpath='{.spec.nodeName}'))
kubectl exec -ti -n kube-system $POD -c tetragon -- tetra getevents -o compact --pods xwing --processes curl

Then execute a curl command in the “xwing” Pod to access one of our favorite sites.

kubectl exec -ti xwing -- bash -c 'curl https://ebpf.io/applications/#tetragon'

You will observe a connect event being reported in the output of the tetra getevents command:

🚀 process default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon
🔌 connect default/xwing /usr/bin/curl tcp 10.32.0.19:33978 -> 104.198.14.52:443
💥 exit    default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon 60

You can confirm in-kernel BPF filters are not producing events for in-cluster traffic by issuing a curl to one of our services and noting there is no connect event.

kubectl exec -ti xwing -- bash -c 'curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing'

The output should be similar to:

Ship landed

And as expected no new events:

🚀 process default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon
🔌 connect default/xwing /usr/bin/curl tcp 10.32.0.19:33978 -> 104.198.14.52:443
💥 exit    default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon 60

Monitoring Docker or bare metal network access

This example also works easily for local Docker users. However, since Docker does not have pod CIDR or service CIDR constructs, you will construct a tracing policy that filters 127.0.0.1 from the Tetragon event log.

First, set the necessary environment variables to the loopback IP address.

export PODCIDR="127.0.0.1/32"
export SERVICECIDR="127.0.0.1/32"

Next, customize the policy using envsubst.

wget https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/network_egress_cluster.yaml
envsubst < network_egress_cluster.yaml > network_egress_cluster_subst.yaml

Finally, start Tetragon with the new policy.

docker stop tetragon
docker run -d --name tetragon --rm --pull always \
  --pid=host --cgroupns=host --privileged               \
  -v ${PWD}/network_egress_cluster_subst.yaml:/etc/tetragon/tetragon.tp.d/network_egress_cluster_subst.yaml \
  -v /sys/kernel/btf/vmlinux:/var/lib/tetragon/btf      \
  quay.io/cilium/tetragon:latest

Once Tetragon is running, use docker exec to run the tetra getevents command and log the output to your terminal.

docker exec -ti tetragon tetra getevents -o compact

Now remote TCP connections will be logged, but connections to the localhost address are filtered out by Tetragon. You can see this by executing a curl command to generate a remote TCP connect.

curl https://ebpf.io/applications/#tetragon

This produces the following output:

🚀 process  /usr/bin/curl https://ebpf.io/applications/#tetragon
🔌 connect  /usr/bin/curl tcp 192.168.1.190:36124 -> 104.198.14.52:443
💥 exit     /usr/bin/curl https://ebpf.io/applications/#tetragon 0
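
Conversely, traffic to the loopback address is filtered out by the policy you loaded. You can confirm this by calling the demo application published on localhost port 8080 (assuming the demo Docker Compose setup from earlier is still running); no connect event should appear for it:

curl -s -XPOST http://localhost:8080/v1/request-landing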

What’s next

So far you have installed Tetragon and used a couple of policies to monitor sensitive files and provide network auditing for connections outside your own cluster and node. Both of these cases highlight the value of in-kernel filtering. Another benefit of in-kernel filtering is that you can add enforcement to the policies to not only alert via a log entry, but also block the operation in kernel and/or kill the application attempting the operation.

To learn more about policies and events Tetragon can implement review the Concepts section.

6 - Policy Enforcement

Enforcing restrictions with Tetragon

Tetragon’s tracing policies support monitoring kernel functions to report events, such as file access events or network connection events, as well as enforcing restrictions on those same kernel functions. In-kernel filtering gives Tetragon a key performance improvement by limiting the events sent from kernel to user space. It also enables Tetragon to enforce policy restrictions at the kernel level. For example, when a policy violation is detected, Tetragon can issue a SIGKILL to the offending process so that it does not continue to run. If the policy enforcement is triggered through a syscall, the application will not return from the syscall and will be terminated.

In this section, you will add network and file policy enforcement on top of the Tetragon functionality (execution, file tracing, and network tracing policy) you’ve already deployed in this Getting Started guide. Specifically, you will:

  • Apply a policy that restricts network traffic egressing a Kubernetes cluster
  • Apply a policy that blocks write and read operations on sensitive files

For specific implementation details refer to the Enforcement concept section.

Restricting network traffic on Kubernetes

In this use case you will use a Tetragon tracing policy to block TCP connections outside the Kubernetes cluster where Tetragon is running. The Tetragon policy is namespaced, limiting the scope of the enforcement policy to just the “default” namespace where you installed the demo application in the Quick Kubernetes Install section.

The policy you will use is very similar to the policy you used in the Network Monitoring section, but with enforcement enabled. Although this policy does not use them, Tetragon tracing policies support including Kubernetes filters, such as namespaces and labels, so you can limit a policy to targeted namespaces and Pods. This is critical for effective policy segmentation.

First, ensure you have the proper Pod CIDR captured for use later:

export PODCIDR=`kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'`

You will also need to capture the service CIDR for use in customizing the policy. When working with a managed Kubernetes offering (GKE, EKS, or AKS) you will need the environment variables used when you created the cluster. Pick the command below that matches your environment (GKE, kind, EKS, and AKS, in that order):

export SERVICECIDR=$(gcloud container clusters describe ${NAME} --zone ${ZONE} | awk '/servicesIpv4CidrBlock/ { print $2; }')
export SERVICECIDR=$(kubectl describe pod -n kube-system kube-apiserver-kind-control-plane | awk -F= '/--service-cluster-ip-range/ {print $2; }')
export SERVICECIDR=$(aws eks describe-cluster --name ${NAME} | jq -r '.cluster.kubernetesNetworkConfig.serviceIpv4Cidr')
export SERVICECIDR=$(az aks show --name ${NAME} --resource-group ${AZURE_RESOURCE_GROUP} | jq -r '.networkProfile.serviceCidr')

When you have captured the Pod CIDR and Service CIDR, then you can customize and apply the enforcement policy. (If you installed the demo application in a different namespace than the default namespace, adjust the kubectl apply command accordingly.)

wget https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/network_egress_cluster_enforce.yaml
envsubst < network_egress_cluster_enforce.yaml | kubectl apply -n default -f -

With the enforcement policy applied, run the tetra getevents command to observe events.

kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact --pods xwing
POD=$(kubectl -n kube-system get pods -l 'app.kubernetes.io/name=tetragon' -o name --field-selector spec.nodeName=$(kubectl get pod xwing -o jsonpath='{.spec.nodeName}'))
kubectl exec -ti -n kube-system $POD -c tetragon -- tetra getevents -o compact --pods xwing

To generate an event that Tetragon will report, use curl to connect to a site outside the Kubernetes cluster:

kubectl exec -ti xwing -- bash -c 'curl https://ebpf.io/applications/#tetragon'

The command returns an error code because the egress TCP connection is blocked. The tetra CLI will print the curl events and annotate that the process was issued a SIGKILL. The exit code 137 reported by kubectl indicates the process was terminated by SIGKILL (128 + 9).

command terminated with exit code 137

Making network connections to destinations inside the cluster will work as expected:

kubectl exec -ti xwing -- bash -c 'curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing'

The successful internal connection is filtered and will not be shown. The tetra getevents output from the two curl commands should look something like this:

🚀 process default/xwing /bin/bash -c "curl https://ebpf.io/applications/#tetragon"
🚀 process default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon
🔌 connect default/xwing /usr/bin/curl tcp 10.32.0.28:45200 -> 104.198.14.52:443
💥 exit    default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon SIGKILL
🚀 process default/xwing /bin/bash -c "curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing"
🚀 process default/xwing /usr/bin/curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing

Enforce file access restrictions

The following extends the example from File Access Monitoring with enforcement to ensure sensitive files are not read. The policy used is file_monitoring_enforce.yaml, which you can review and extend as needed. The only difference between the observation policy and the enforcement policy is the addition of an action block that SIGKILLs the application and returns an error on the operation.
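
As an illustration of that difference, the enforcement variant adds a matchActions block to the policy’s selectors, roughly along the lines of the sketch below (abbreviated and illustrative; the actual file_monitoring_enforce.yaml defines the full selectors and may also override the return value of the hooked call):

    selectors:
    - matchArgs:
      - index: 0
        operator: "Prefix"
        values:
        - "/etc/shadow"
      matchActions:
      - action: Sigkill              # kill the process that triggered the match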

To apply the policy:

kubectl delete -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/file_monitoring.yaml
kubectl apply -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/file_monitoring_enforce.yaml
wget https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/file_monitoring_enforce.yaml
docker stop tetragon
docker run -d --name tetragon --rm --pull always \
  --pid=host --cgroupns=host --privileged               \
  -v ${PWD}/file_monitoring_enforce.yaml:/etc/tetragon/tetragon.tp.d/file_monitoring_enforce.yaml \
  -v /sys/kernel/btf/vmlinux:/var/lib/tetragon/btf      \
  quay.io/cilium/tetragon:latest

With the policy applied, you can run tetra getevents to have Tetragon start outputting events to the terminal.

kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact --pods xwing
POD=$(kubectl -n kube-system get pods -l 'app.kubernetes.io/name=tetragon' -o name --field-selector spec.nodeName=$(kubectl get pod xwing -o jsonpath='{.spec.nodeName}'))
kubectl exec -ti -n kube-system $POD -c tetragon -- tetra getevents -o compact --pods xwing
docker exec -ti tetragon tetra getevents -o compact

Next, attempt to read a sensitive file (one of the files included in the defined policy):

kubectl exec -ti xwing -- bash -c 'cat /etc/shadow'
cat /etc/shadow

Because the file is included in the policy, the command will fail with an error code.

kubectl exec -ti xwing -- bash -c 'cat /etc/shadow'

The output should be similar to:

command terminated with exit code 137

This will generate a read event (Docker events will not contain the Kubernetes metadata shown here).

🚀 process default/xwing /bin/bash -c "cat /etc/shadow"
🚀 process default/xwing /bin/cat /etc/shadow
📚 read    default/xwing /bin/cat /etc/shadow
📚 read    default/xwing /bin/cat /etc/shadow
📚 read    default/xwing /bin/cat /etc/shadow
💥 exit    default/xwing /bin/cat /etc/shadow SIGKILL

Attempts to read or write to files that are not part of the enforced file policy are not impacted.

🚀 process default/xwing /bin/bash -c "echo foo >> bar; cat bar"
🚀 process default/xwing /bin/cat bar
💥 exit    default/xwing /bin/cat bar 0
💥 exit    default/xwing /bin/bash -c "echo foo >> bar; cat bar" 0

What’s next

This completes the Getting Started guide. At this point you should be able to observe execution traces in a Kubernetes cluster and extend the base deployment of Tetragon with policies to observe and enforce different aspects of a Kubernetes system.

The rest of the docs provide further detail on installation and on writing and using policies. Some useful links:

  • To explore the details of writing and implementing policies, the Concepts section is a good jumping-off point.
  • For installation into production environments, we recommend reviewing Advanced Installations.
  • Finally, the Use Cases section covers different uses and deployment concerns related to Tetragon.