Welcome to Tetragon documentation

1 - Overview

Discover Cilium Tetragon and its capabilities

The Cilium Tetragon component enables powerful real-time, eBPF-based Security Observability and Runtime Enforcement.

Tetragon detects and is able to react to security-significant events, such as:

  • Process execution events
  • System call activity
  • I/O activity including network & file access

When used in a Kubernetes environment, Tetragon is Kubernetes-aware - that is, it understands Kubernetes identities such as namespaces, pods, and so on - so that security event detection can be configured in relation to individual workloads.

Tetragon Overview Diagram: Tetragon's capabilities and how it interacts with Kubernetes, the kernel, and metrics, logging, tracing, or events systems.

Functionality Overview

eBPF Real-Time

Tetragon is a runtime security enforcement and observability tool. What this means is that Tetragon applies policy and filtering directly in eBPF in the kernel: it performs the filtering, blocking, and reacting to events in the kernel instead of sending every event to a user-space agent.

For an observability use case, applying filters directly in the kernel drastically reduces observation overhead. By avoiding expensive context switching and wake-ups, especially for high frequency events, such as send, read, or write operations, eBPF reduces required resources. Instead, Tetragon provides rich filters (file, socket, binary names, namespace/capabilities, etc.) in eBPF, which allows users to specify the important and relevant events in their specific context, and pass only those to the user-space agent.

eBPF Flexibility

Tetragon can hook into any function in the Linux kernel and filter on its arguments, return value, and associated metadata that Tetragon collects about processes (e.g., executable names), files, and other properties. By writing tracing policies users can solve various security and observability use cases. We provide a number of examples for these in the repository and highlight some below in the Getting Started guide, but users are encouraged to create new policies that match their use cases. The examples are just that: jumping-off points that users can then use to create new and specific policy deployments, even potentially tracing kernel functions we did not consider. None of the specifics about which functions are traced and what filters are applied are hard-coded in the engine itself.

Critically, Tetragon allows hooking deep in the kernel, where data structures cannot be manipulated by user space applications, avoiding common issues with syscall tracing where data is incorrectly read, maliciously altered by attackers, or missing due to page faults and other user/kernel boundary errors.

Many of the Tetragon developers are also kernel developers. By leveraging this knowledge base Tetragon has created a set of tracing policies that can solve many common observability and security use cases.

eBPF Kernel Aware

Tetragon, through eBPF, has access to the Linux kernel state. Tetragon can then join this kernel state with Kubernetes awareness or user policy to create rules enforced by the kernel in real time. This allows annotating and enforcing on process namespaces and capabilities, mapping sockets to processes, tying process file descriptors to filenames, and so on. For example, when an application changes its privileges, we can create a policy to trigger an alert or even kill the process before it has a chance to complete the syscall and potentially run additional syscalls.
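As a purely illustrative sketch of that last idea (not a tested policy; the hook name and argument details here are assumptions, and Tracing Policies are covered in depth later in this documentation), such a rule could take a shape like:

apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "privilege-change-sketch"
spec:
  kprobes:
  - call: "sys_setuid"          # illustrative hook for a privilege-changing syscall
    syscall: true
    args:
    - index: 0
      type: "int"               # the requested uid
    selectors:
    - matchArgs:
      - index: 0
        operator: "Equal"
        values:
        - "0"                   # the process is trying to become root
      matchActions:
      - action: Sigkill         # kill it before the syscall completes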

What’s next?

2 - Getting Started

How to quickly get started with Tetragon and learn how to install, deploy and configure it

2.1 - Quick Kubernetes Install

Discover and experiment with Tetragon in a Kubernetes environment

Create a cluster

If you don’t have a Kubernetes Cluster yet, you can use the instructions below to create a Kubernetes cluster locally or using a managed Kubernetes service:

The following commands create a single node Kubernetes cluster using Google Kubernetes Engine. See Installing Google Cloud SDK for instructions on how to install gcloud and prepare your account.

export NAME="$(whoami)-$RANDOM"
export ZONE="us-west2-a"
gcloud container clusters create "${NAME}" --zone ${ZONE} --num-nodes=1
gcloud container clusters get-credentials "${NAME}" --zone ${ZONE}

The following commands create a single node Kubernetes cluster using Azure Kubernetes Service. See Azure Cloud CLI for instructions on how to install az and prepare your account.

export NAME="$(whoami)-$RANDOM"
export AZURE_RESOURCE_GROUP="${NAME}-group"
az group create --name "${AZURE_RESOURCE_GROUP}" -l westus2
az aks create --resource-group "${AZURE_RESOURCE_GROUP}" --name "${NAME}"
az aks get-credentials --resource-group "${AZURE_RESOURCE_GROUP}" --name "${NAME}"

The following commands create a Kubernetes cluster with eksctl using Amazon Elastic Kubernetes Service. See eksctl installation for instructions on how to install eksctl and prepare your account.

export NAME="$(whoami)-$RANDOM"
eksctl create cluster --name "${NAME}"

Tetragon’s correct operation depends on access to the host /proc filesystem. The following steps configure kind and Tetragon accordingly when using a Linux system.

cat <<EOF > kind-config.yaml
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: /proc
        containerPath: /procHost
EOF
kind create cluster --config kind-config.yaml
EXTRA_HELM_FLAGS=(--set tetragon.hostProcPath=/procHost) # flags for helm install

Deploy Tetragon

To install and deploy Tetragon, run the following commands:

helm repo add cilium https://helm.cilium.io
helm repo update
helm install tetragon ${EXTRA_HELM_FLAGS[@]} cilium/tetragon -n kube-system
kubectl rollout status -n kube-system ds/tetragon -w

By default, Tetragon will filter kube-system events to reduce noise in the event logs. See concepts and advanced configuration to configure these parameters.

Deploy demo application

To explore Tetragon it is helpful to have a sample workload. Here we use Cilium’s demo application, but any workload would work equally well:

kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.15.3/examples/minikube/http-sw-app.yaml

Before going forward, verify that all pods are up and running - it might take several seconds for some pods to satisfy all the dependencies:

kubectl get pods

The output should be similar to this:

NAME                         READY   STATUS    RESTARTS   AGE
deathstar-6c94dcc57b-7pr8c   1/1     Running   0          10s
deathstar-6c94dcc57b-px2vw   1/1     Running   0          10s
tiefighter                   1/1     Running   0          10s
xwing                        1/1     Running   0          10s

What’s Next

Check for execution events.

2.2 - Quick Local Docker Install

Discover and experiment with Tetragon on your local Linux host

Start Tetragon

The easiest way to start experimenting with Tetragon is to run it via Docker using the released container images.

docker run --name tetragon-container --rm --pull always \
    --pid=host --cgroupns=host --privileged             \
    -v /sys/kernel/btf/vmlinux:/var/lib/tetragon/btf    \
    quay.io/cilium/tetragon-ci:latest

This will start Tetragon in a privileged container. Privileges are required to load and attach BPF programs. See the Installation section for more details.

What’s Next

Check for execution events.

2.3 - Execution Monitoring

Execution traces with Tetragon

At the core of Tetragon is the tracking of all executions in a Kubernetes cluster, virtual machines, and bare metal systems. This creates the foundation that allows Tetragon to attribute all system behavior back to a specific binary and its associated metadata (container, Pod, Node, and cluster).

Observe Tetragon Execution Events

Tetragon exposes the execution trace over JSON logs and a gRPC stream. The user can then observe all executions in the system.

The following command can be used to observe exec events.

kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact --pods xwing
docker exec tetragon-container tetra getevents -o compact

This will print a compact form of the exec logs. As an example, we run the following with the demo application.

kubectl exec -ti xwing -- bash -c 'curl https://ebpf.io/applications/#tetragon'
curl https://ebpf.io/applications/#tetragon

The CLI will print a compact form of the event to the terminal similar to the following output.

🚀 process default/xwing /bin/bash -c "curl https://ebpf.io/applications/#tetragon"
🚀 process default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon
💥 exit    default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon 60

The compact exec event contains the event type, the pod name, the binary, and the args. The exit event will include the return code; in the case of curl above, 60.

For the complete exec event in JSON format remove the compact option.

kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents --pods xwing
docker exec tetragon-container tetra getevents

This will include many more details related to the binary and event. A full example of the above curl is shown here. In a Kubernetes environment this will include the Kubernetes metadata, including the Pod, Container, Namespaces, and Labels, among other useful metadata.

Process execution event

{
  "process_exec": {
    "process": {
      "exec_id": "Z2tlLWpvaG4tNjMyLWRlZmF1bHQtcG9vbC03MDQxY2FjMC05czk1OjEzNTQ4Njc0MzIxMzczOjUyNjk5",
      "pid": 52699,
      "uid": 0,
      "cwd": "/",
      "binary": "/usr/bin/curl",
      "arguments": "https://ebpf.io/applications/#tetragon",
      "flags": "execve rootcwd",
      "start_time": "2023-10-06T22:03:57.700327580Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
        "name": "xwing",
        "container": {
          "id": "containerd://551e161c47d8ff0eb665438a7bcd5b4e3ef5a297282b40a92b7c77d6bd168eb3",
          "name": "spaceship",
          "image": {
            "id": "docker.io/tgraf/netperf@sha256:8e86f744bfea165fd4ce68caa05abc96500f40130b857773186401926af7e9e6",
            "name": "docker.io/tgraf/netperf:latest"
          },
          "start_time": "2023-10-06T21:52:41Z",
          "pid": 49
        },
        "pod_labels": {
          "app.kubernetes.io/name": "xwing",
          "class": "xwing",
          "org": "alliance"
        },
        "workload": "xwing"
      },
      "docker": "551e161c47d8ff0eb665438a7bcd5b4",
      "parent_exec_id": "Z2tlLWpvaG4tNjMyLWRlZmF1bHQtcG9vbC03MDQxY2FjMC05czk1OjEzNTQ4NjcwODgzMjk5OjUyNjk5",
      "tid": 52699
    },
    "parent": {
      "exec_id": "Z2tlLWpvaG4tNjMyLWRlZmF1bHQtcG9vbC03MDQxY2FjMC05czk1OjEzNTQ4NjcwODgzMjk5OjUyNjk5",
      "pid": 52699,
      "uid": 0,
      "cwd": "/",
      "binary": "/bin/bash",
      "arguments": "-c \"curl https://ebpf.io/applications/#tetragon\"",
      "flags": "execve rootcwd clone",
      "start_time": "2023-10-06T22:03:57.696889812Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
        "name": "xwing",
        "container": {
          "id": "containerd://551e161c47d8ff0eb665438a7bcd5b4e3ef5a297282b40a92b7c77d6bd168eb3",
          "name": "spaceship",
          "image": {
            "id": "docker.io/tgraf/netperf@sha256:8e86f744bfea165fd4ce68caa05abc96500f40130b857773186401926af7e9e6",
            "name": "docker.io/tgraf/netperf:latest"
          },
          "start_time": "2023-10-06T21:52:41Z",
          "pid": 49
        },
        "pod_labels": {
          "app.kubernetes.io/name": "xwing",
          "class": "xwing",
          "org": "alliance"
        },
        "workload": "xwing"
      },
      "docker": "551e161c47d8ff0eb665438a7bcd5b4",
      "parent_exec_id": "Z2tlLWpvaG4tNjMyLWRlZmF1bHQtcG9vbC03MDQxY2FjMC05czk1OjEzNTQ4NjQ1MjQ1ODM5OjUyNjg5",
      "tid": 52699
    }
  },
  "node_name": "gke-john-632-default-pool-7041cac0-9s95",
  "time": "2023-10-06T22:03:57.700326678Z"
}

What’s next

Execution events are the most basic event Tetragon can produce. To see how to use tracing policies to enable file monitoring see the File Access Monitoring quickstart. To see a network policy check the Networking Monitoring quickstart.

2.4 - File Access Monitoring

File access traces with Tetragon

Tracing Policies can be added to Tetragon through YAML configuration files that extend Tetragon's base execution tracing capabilities. These policies perform filtering in the kernel to ensure only interesting events are published to userspace from the BPF programs running in the kernel. This ensures overhead remains low even on busy systems.

The following extends the example from Execution Tracing with a policy to monitor sensitive files in Linux. The policy used is file_monitoring.yaml; it can be reviewed and extended as needed. The files monitored here serve as a good base set.
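To give a sense of the shape of such a policy before applying the full one, here is a minimal, abridged sketch in the same spirit (the hook, argument types, and path shown are illustrative; refer to file_monitoring.yaml for the actual, tested policy):

apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "file-monitoring-sketch"
spec:
  kprobes:
  - call: "security_file_permission"  # kernel hook invoked on file access (illustrative)
    syscall: false
    args:
    - index: 0
      type: "file"                    # the file being accessed
    - index: 1
      type: "int"                     # the access mask (read or write)
    selectors:
    - matchArgs:
      - index: 0
        operator: "Prefix"
        values:
        - "/etc/"                     # only report access to files under /etc/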

In Kubernetes the policy is a custom resource (CRD) that can be applied with kubectl. With Docker, Tetragon uses the same YAML configuration, but loads it from a file on disk.

kubectl apply -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/file_monitoring.yaml
wget https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/file_monitoring.yaml
docker stop tetragon-container
docker run --name tetragon-container --rm --pull always \
  --pid=host --cgroupns=host --privileged               \
  -v ${PWD}/file_monitoring.yaml:/etc/tetragon/tetragon.tp.d/file_monitoring.yaml \
  -v /sys/kernel/btf/vmlinux:/var/lib/tetragon/btf      \
  quay.io/cilium/tetragon-ci:latest

With the policy applied we can attach tetra to observe events again:

kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact --pods xwing
docker exec tetragon-container tetra getevents -o compact

Then reading a sensitive file:

kubectl exec -ti xwing -- bash -c 'cat /etc/shadow'
cat /etc/shadow

This will generate a read event (Docker events will omit Kubernetes metadata),

🚀 process default/xwing /bin/bash -c "cat /etc/shadow"
🚀 process default/xwing /bin/cat /etc/shadow
📚 read    default/xwing /bin/cat /etc/shadow
💥 exit    default/xwing /bin/cat /etc/shadow 0

Attempts to write in sensitive directories will similarly create write events. For example, attempting to write in /etc.

kubectl exec -ti xwing -- bash -c 'echo foo >> /etc/bar'
echo foo >> /etc/bar

This will result in the following output in the tetra CLI.

🚀 process default/xwing /bin/bash -c "echo foo >>  /etc/bar"
📝 write   default/xwing /bin/bash /etc/bar
📝 write   default/xwing /bin/bash /etc/bar
💥 exit    default/xwing /bin/bash -c "echo foo >>  /etc/bar"

What’s next

To explore tracing policies for networking try the Networking Monitoring quickstart. To dive into the details of policies and events please see Concepts section.

2.5 - Network Monitoring

Network access traces with Tetragon

This adds a network policy on top of execution and file tracing already deployed in the quick start. In this case we monitor all network traffic outside the Kubernetes pod CIDR and service CIDR.

Kubernetes Cluster Network Access Monitoring

First we must find the pod CIDR and service CIDR in use. The pod IP CIDR can be found relatively easily in many cases.

export PODCIDR=`kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'`

The services CIDR can then be fetched depending on the environment. We require the environment variables ZONE and NAME from the install steps.

export SERVICECIDR=$(gcloud container clusters describe ${NAME} --zone ${ZONE} | awk '/servicesIpv4CidrBlock/ { print $2; }')
export SERVICECIDR=$(kubectl describe pod -n kube-system kube-apiserver-kind-control-plane | awk -F= '/--service-cluster-ip-range/ {print $2; }')

First we apply a policy that uses the podCIDR and serviceCIDR lists as filters so that cluster-local traffic is filtered out of the events. To apply the policy:

wget https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/network_egress_cluster.yaml
envsubst < network_egress_cluster.yaml | kubectl apply -f -

With the file applied we can attach tetra to observe events again:

 kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact --pods xwing --processes curl

Then execute a curl command in the xwing pod to curl one of our favorite sites.

 kubectl exec -ti xwing -- bash -c 'curl https://ebpf.io/applications/#tetragon'

A connect will be observed in the tetra shell:

🚀 process default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon
🔌 connect default/xwing /usr/bin/curl tcp 10.32.0.19:33978 -> 104.198.14.52:443
💥 exit    default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon 60

We can confirm that the in-kernel BPF filters are not producing events for in-cluster traffic by issuing a curl to one of our services and noting that there is no connect event.

kubectl exec -ti xwing -- bash -c 'curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing'

The output should be similar to:

Ship landed

And as expected no new events:

🚀 process default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon
🔌 connect default/xwing /usr/bin/curl tcp 10.32.0.19:33978 -> 104.198.14.52:443
💥 exit    default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon 60

Docker/Baremetal Network Access Monitoring

This example also works for local Docker users, except that it is less obvious which IP address CIDRs will be useful. By default the policy filters all local IPs (127.0.0.1) from the event log, so we can demo that here.

Set env variables to local loopback IP.

export PODCIDR="127.0.0.1/32"
export SERVICECIDR="127.0.0.1/32"

To create the policy,

wget https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/network_egress_cluster.yaml
envsubst < network_egress_cluster.yaml > network_egress_cluster_subst.yaml

Start Tetragon with the new policy:

docker stop tetragon-container
docker run --name tetragon-container --rm --pull always \
  --pid=host --cgroupns=host --privileged               \
  -v ${PWD}/network_egress_cluster_subst.yaml:/etc/tetragon/tetragon.tp.d/network_egress_cluster_subst.yaml \
  -v /sys/kernel/btf/vmlinux:/var/lib/tetragon/btf      \
  quay.io/cilium/tetragon-ci:latest

Attach to the tetragon output

docker exec tetragon-container tetra getevents -o compact

Now remote TCP connections will be logged and local IPs filtered. Execute a curl to generate a remote TCP connect.

curl https://ebpf.io/applications/#tetragon

Produces the following output:

🚀 process  /usr/bin/curl https://ebpf.io/applications/#tetragon
🔌 connect  /usr/bin/curl tcp 192.168.1.190:36124 -> 104.198.14.52:443
💥 exit     /usr/bin/curl https://ebpf.io/applications/#tetragon 0

What's Next

So far we have installed Tetragon and shown a couple of policies to monitor sensitive files and provide network auditing for connections outside our own cluster and node. Both of these cases highlight the value of in-kernel filtering. Another benefit of in-kernel filtering is that we can add enforcement to the policies to not only alert, but block the operation in the kernel and/or kill the application attempting the operation.

To learn more about policies and events Tetragon can implement review the Concepts section.

2.6 - Policy Enforcement

Policy Enforcement

This adds network and file policy enforcement on top of the execution, file tracing and networking policies already deployed in the quick start. In this use case we use a namespace filter to limit the scope of the enforcement policy to just the default namespace, where we installed the demo application from the Quick Kubernetes Install.

This highlights two important concepts of Tetragon. First, in-kernel filtering provides a key performance improvement by limiting events sent from the kernel to user space. But it also allows for enforcing policies in the kernel. By issuing a SIGKILL to the process at this point, the application will be stopped from continuing to run. If the operation is triggered through a syscall this means the application will not return from the syscall and will be terminated.

Second, by including Kubernetes filters, such as namespace and labels, we can segment a policy to apply to targeted namespaces and pods. This is critical for effective policy segmentation.

For implementation details see the Enforcement concept section.

Kubernetes Enforcement

The following section is laid out as follows:

  • A guide to promote the network observation policy that observes all network traffic egressing the cluster into an enforced policy.
  • A guide to promote the file access monitoring policy to block write and read operations to sensitive files.

Block TCP Connect outside Cluster

First we will deploy the Network Monitoring policy with enforcement on. For this case the policy is written to only apply against the default namespace. This limits the scope of the policy for the getting started guide.

Ensure we have the proper Pod CIDRs

export PODCIDR=`kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'`

and Service CIDRs configured.

export SERVICECIDR=$(gcloud container clusters describe ${NAME} --zone ${ZONE} | awk '/servicesIpv4CidrBlock/ { print $2; }')
export SERVICECIDR=$(kubectl describe pod -n kube-system kube-apiserver-kind-control-plane | awk -F= '/--service-cluster-ip-range/ {print $2; }')

Then we can apply the egress cluster enforcement policy

wget https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/network_egress_cluster_enforce.yaml
envsubst < network_egress_cluster_enforce.yaml | kubectl apply -n default -f -

With the enforcement policy applied we can attach tetra to observe events again:

kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact --pods xwing

And once again execute a curl command in the xwing:

kubectl exec -ti xwing -- bash -c 'curl https://ebpf.io/applications/#tetragon'

The command returns an error code because the egress TCP connects are blocked, as shown here.

command terminated with exit code 137

Connections inside the cluster will work as expected:

kubectl exec -ti xwing -- bash -c 'curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing'

The tetra CLI will print the curl and annotate the process that was issued a SIGKILL. The successful internal connect is filtered and will not be shown.

🚀 process default/xwing /bin/bash -c "curl https://ebpf.io/applications/#tetragon"
🚀 process default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon
🔌 connect default/xwing /usr/bin/curl tcp 10.32.0.28:45200 -> 104.198.14.52:443
💥 exit    default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon SIGKILL
🚀 process default/xwing /bin/bash -c "curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing"
🚀 process default/xwing /usr/bin/curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing

Enforce File Access Monitoring

The following extends the example from File Access Monitoring with enforcement to ensure sensitive files are not read. The policy used is file_monitoring_enforce.yaml; it can be reviewed and extended as needed. The only difference between the observation policy and the enforcement policy is the addition of an action block to SIGKILL the application and return an error on the operation.
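Conceptually, the enforcement variant attaches actions to the matching selector. A rough, abridged sketch of what that block can look like is shown below (the Override action and its argError field are assumptions based on the description above; consult file_monitoring_enforce.yaml for the actual policy):

    selectors:
    - matchArgs:
      - index: 0
        operator: "Prefix"
        values:
        - "/etc/"                 # sensitive path prefix (illustrative)
      matchActions:
      - action: Override          # fail the file operation with an error
        argError: -1
      - action: Sigkill           # and kill the offending process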

To apply the policy:

kubectl delete -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/file_monitoring.yaml
kubectl apply -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/file_monitoring_enforce.yaml
wget https://raw.githubusercontent.com/cilium/tetragon/main/examples/quickstart/file_monitoring_enforce.yaml
docker stop tetragon-container
docker run --name tetragon-container --rm --pull always \
  --pid=host --cgroupns=host --privileged               \
  -v ${PWD}/file_monitoring_enforce.yaml:/etc/tetragon/tetragon.tp.d/file_monitoring_enforce.yaml \
  -v /sys/kernel/btf/vmlinux:/var/lib/tetragon/btf      \
  quay.io/cilium/tetragon-ci:latest

With the policy applied we can attach tetra to observe events again,

kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact --pods xwing
docker exec tetragon-container tetra getevents -o compact

Then reading a sensitive file,

kubectl exec -ti xwing -- bash -c 'cat /etc/shadow'
cat /etc/shadow

The command will fail with an error code because this is one of our sensitive files,

kubectl exec -ti xwing -- bash -c 'cat /etc/shadow'

The output should be similar to:

command terminated with exit code 137

This will generate a read event (Docker events will omit Kubernetes metadata),

🚀 process default/xwing /bin/bash -c "cat /etc/shadow"
🚀 process default/xwing /bin/cat /etc/shadow
📚 read    default/xwing /bin/cat /etc/shadow
📚 read    default/xwing /bin/cat /etc/shadow
📚 read    default/xwing /bin/cat /etc/shadow
💥 exit    default/xwing /bin/cat /etc/shadow SIGKILL

Writes and reads to files not part of the enforced file policy will not be impacted.

🚀 process default/xwing /bin/bash -c "echo foo >> bar; cat bar"
🚀 process default/xwing /bin/cat bar
💥 exit    default/xwing /bin/cat bar 0
💥 exit    default/xwing /bin/bash -c "echo foo >> bar; cat bar" 0

What’s next

This completes the quick start guides. At this point we should be able to observe execution traces in a Kubernetes cluster and extend the base deployment of Tetragon with policies to observe and enforce different aspects of a Kubernetes system.

The rest of the docs provide further documentation about installation and using policies. Some useful links:

To explore the details of writing and implementing policies, the Concepts section is a good jumping-off point. For installation into production environments we recommend reviewing Advanced Installations. Finally, the Use Cases section covers different uses and deployment concerns related to Tetragon.

3 - Installation

Tetragon installation and deployment configuration options

3.1 - Deploy on Kubernetes

Deploy and manage Tetragon on Kubernetes

The recommended way to deploy Tetragon on a Kubernetes cluster is to use the Helm chart with Helm 3. Tetragon uses the helm.cilium.io repository to release the helm chart.

Install

To install the latest release of the Tetragon helm chart, use the following command.

helm repo add cilium https://helm.cilium.io
helm repo update
helm install tetragon cilium/tetragon -n kube-system

To wait until Tetragon deployment is ready, use the following kubectl command:

kubectl rollout status -n kube-system ds/tetragon -w

Configuration

You can then make modifications to the Tetragon configuration using helm upgrade, see the following example.

helm upgrade tetragon cilium/tetragon -n kube-system --set tetragon.grpc.address=localhost:1337

You can also edit the tetragon-config ConfigMap directly and restart the Tetragon daemonset with:

kubectl edit cm tetragon-config -n kube-system
kubectl rollout restart ds/tetragon -n kube-system

Upgrade

Upgrade Tetragon using a new specific version of the helm chart.

helm upgrade tetragon cilium/tetragon -n kube-system --version 0.9.0

Uninstall

Uninstall Tetragon using the following command.

helm uninstall tetragon -n kube-system

3.2 - Deploy as a container

Install and manage Tetragon as a container without a Kubernetes cluster

Install

Stable versions

To run a stable version, please check the Tetragon quay repository and select the version you want. For example, to run the latest version, which is currently v1.1.0:

docker run --name tetragon --rm -d                   \
    --pid=host --cgroupns=host --privileged          \
    -v /sys/kernel/btf/vmlinux:/var/lib/tetragon/btf \
    quay.io/cilium/tetragon:v1.1.0

Unstable-development versions

To run unstable development versions of Tetragon, use the latest tag from Tetragon-CI quay repository. This will run the image that was built from the latest commit available on the Tetragon main branch.

docker run --name tetragon --rm -d                  \
   --pid=host --cgroupns=host --privileged          \
   -v /sys/kernel/btf/vmlinux:/var/lib/tetragon/btf \
   quay.io/cilium/tetragon-ci:latest

Configuration

There are multiple ways to set configuration options:

  1. Append Tetragon controlling settings at the end of the command

    As an example, set the file where JSON events are exported with the --export-filename argument:

    docker run --name tetragon --rm -d \
        --pid=host --cgroupns=host --privileged \
        -v /sys/kernel:/sys/kernel \
        quay.io/cilium/tetragon:v1.1.0 \
        /usr/bin/tetragon --export-filename /var/log/tetragon/tetragon.log
    

    For a complete list of CLI arguments, please check Tetragon daemon configuration.

  2. Environment variables

    docker run --name tetragon --rm -d \
        --pid=host --cgroupns=host --privileged \
        --env "TETRAGON_EXPORT_FILENAME=/var/log/tetragon/tetragon.log" \
        -v /sys/kernel:/sys/kernel \
        quay.io/cilium/tetragon:v1.1.0
    

    Every controlling setting can be set using environment variables. Prefix the setting name with the keyword TETRAGON_, then upper-case it. As an example, to set where to export JSON events, --export-filename becomes TETRAGON_EXPORT_FILENAME.

    For a complete list of all controlling settings, please check tetragon daemon configuration.

  3. Configuration files mounted as volumes

    On the host machine, create the configuration drop-ins inside the /etc/tetragon/tetragon.conf.d/ directory according to the configuration examples, then mount it as a volume:

    docker run --name tetragon --rm -d \
        --pid=host --cgroupns=host --privileged \
        -v /sys/kernel:/sys/kernel \
        -v /etc/tetragon/tetragon.conf.d/:/etc/tetragon/tetragon.conf.d/ \
        quay.io/cilium/tetragon:v1.1.0
    

    This will map the /etc/tetragon/tetragon.conf.d/ drop-in directory from the host into the container.

See Tetragon daemon configuration reference for further details.

3.3 - Deploy with a package

Install and manage Tetragon via released packages.

Install

Tetragon will be managed as a systemd service. Tarballs are built and distributed along with the assets in the releases.

  1. First download the latest binary tarball, for example using curl to download the amd64 release:

    curl -LO https://github.com/cilium/tetragon/releases/download/v1.1.0/tetragon-v1.1.0-amd64.tar.gz
    
  2. Extract the downloaded archive, and start the install script to install Tetragon. Feel free to inspect the script before starting it.

    tar -xvf tetragon-v1.1.0-amd64.tar.gz
    cd tetragon-v1.1.0-amd64/
    sudo ./install.sh
    

    If Tetragon was successfully installed, the final output should be similar to:

    Tetragon installed successfully!
    
  3. Finally, you can check the Tetragon systemd service.

    sudo systemctl status tetragon
    

    The output should be similar to:

    ● tetragon.service - Tetragon eBPF-based Security Observability and Runtime Enforcement
     Loaded: loaded (/lib/systemd/system/tetragon.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2023-01-23 20:08:16 CET; 5s ago
       Docs: https://github.com/cilium/tetragon/
    Main PID: 138819 (tetragon)
      Tasks: 17 (limit: 18985)
     Memory: 151.7M
        CPU: 913ms
     CGroup: /system.slice/tetragon.service
             └─138819 /usr/local/bin/tetragon
    

Configuration

The default Tetragon configuration shipped with the Tetragon package will be installed in /usr/local/lib/tetragon/tetragon.conf.d/. Local administrators can change the configuration by adding drop-ins inside /etc/tetragon/tetragon.conf.d/ to override the default settings or use the command line flags. To restore default settings, remove any added configuration inside /etc/tetragon/tetragon.conf.d/.
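For example, to override the metrics address, add a drop-in named after the setting (following the convention used elsewhere in this documentation) and restart the service:

sudo mkdir -p /etc/tetragon/tetragon.conf.d/
echo "localhost:2112" | sudo tee /etc/tetragon/tetragon.conf.d/metrics-server
sudo systemctl restart tetragon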

See Tetragon daemon configuration for further details.

Upgrade

To upgrade Tetragon:

  1. Download the new tarball.

    curl -LO https://github.com/cilium/tetragon/releases/download/v1.1.0/tetragon-v1.1.0-amd64.tar.gz
    
  2. Stop the Tetragon service.

    sudo systemctl stop tetragon
    
  3. Remove the old Tetragon version.

    sudo rm -fr /usr/lib/systemd/system/tetragon.service
    sudo rm -fr /usr/local/bin/tetragon
    sudo rm -fr /usr/local/lib/tetragon/
    
  4. Install the upgraded Tetragon version.

    tar -xvf tetragon-v1.1.0-amd64.tar.gz
    cd tetragon-v1.1.0-amd64/
    sudo ./install.sh
    

Uninstall

To completely remove Tetragon run the uninstall.sh script that is provided inside the tarball.

sudo ./uninstall.sh

Or remove it manually.

sudo systemctl stop tetragon
sudo systemctl disable tetragon
sudo rm -fr /usr/lib/systemd/system/tetragon.service
sudo systemctl daemon-reload
sudo rm -fr /usr/local/bin/tetragon
sudo rm -fr /usr/local/bin/tetra
sudo rm -fr /usr/local/lib/tetragon/

To just purge custom settings:

sudo rm -fr /etc/tetragon/

Operating

gRPC API access

To access the gRPC API with tetra client, set --server-address to point to the corresponding address:

sudo tetra --server-address unix:///var/run/tetragon/tetragon.sock getevents

See restrict gRPC API access for further details.

Tetragon Events

By default JSON events are logged to /var/log/tetragon/tetragon.log unless this location is changed. Logs are always rotated into the same directory.

To read real-time JSON events, tailing the log file is enough.

sudo tail -f /var/log/tetragon/tetragon.log

Tetragon also ships a gRPC client that can be used to receive events.

  1. To print events in json format using tetra gRPC client:

    sudo tetra --server-address "unix:///var/run/tetragon/tetragon.sock" getevents
    
  2. To print events in human compact format:

    sudo tetra --server-address "unix:///var/run/tetragon/tetragon.sock" getevents -o compact
    

What’s next

See Explore security observability events to learn more about how to see the Tetragon events.

3.4 - Install tetra CLI

To interact with Tetragon, install the Tetragon client CLI tetra

This guide presents various methods to install tetra in your environment.

Install the latest release

Autodetect your environment

This shell script autodetects the OS and architecture, downloads the archive of the binary and its SHA 256 digest, compares the actual digest with the expected one, installs the binary, and removes the download artifacts.

GOOS=$(go env GOOS)
GOARCH=$(go env GOARCH)
curl -L --remote-name-all https://github.com/cilium/tetragon/releases/latest/download/tetra-${GOOS}-${GOARCH}.tar.gz{,.sha256sum}
sha256sum --check tetra-${GOOS}-${GOARCH}.tar.gz.sha256sum
sudo tar -C /usr/local/bin -xzvf tetra-${GOOS}-${GOARCH}.tar.gz
rm tetra-${GOOS}-${GOARCH}.tar.gz{,.sha256sum}

Quick install for each environment

This installation method retrieves the archive adapted to your environment, extracts it, and installs the binary in the /usr/local/bin directory.

curl -L https://github.com/cilium/tetragon/releases/latest/download/tetra-linux-amd64.tar.gz | tar -xz
sudo mv tetra /usr/local/bin
curl -L https://github.com/cilium/tetragon/releases/latest/download/tetra-linux-arm64.tar.gz | tar -xz
sudo mv tetra /usr/local/bin
curl -L https://github.com/cilium/tetragon/releases/latest/download/tetra-darwin-amd64.tar.gz | tar -xz
sudo mv tetra /usr/local/bin
curl -L https://github.com/cilium/tetragon/releases/latest/download/tetra-darwin-arm64.tar.gz | tar -xz
sudo mv tetra /usr/local/bin
curl -L https://github.com/cilium/tetragon/releases/latest/download/tetra-windows-amd64.tar.gz
tar -xzf tetra-windows-amd64.tar.gz
# move the binary in a directory in your PATH
curl -L https://github.com/cilium/tetragon/releases/latest/download/tetra-windows-arm64.tar.gz
tar -xzf tetra-windows-arm64.tar.gz
# move the binary in a directory in your PATH

Install using homebrew

Homebrew is a package manager for macOS and Linux. A formula is available to fetch precompiled binaries. You can also use it to build from source (using the --build-from-source flag) with a Go dependency.

brew install tetra

Install a specific release

You can retrieve the release of tetra along the release of Tetragon on GitHub at the following URL: https://github.com/cilium/tetragon/releases.

To download a specific release you can use the following script, replacing the OS, ARCH and TAG values with your desired options.

OS=linux
ARCH=amd64
TAG=v0.9.0
curl -L https://github.com/cilium/tetragon/releases/download/${TAG}/tetra-${OS}-${ARCH}.tar.gz | tar -xz

3.5 - Configure Tetragon

Depending on your deployment mode, Tetragon configuration can be changed by:

kubectl edit cm -n kube-system tetragon-config
# Change your configuration setting, save and exit
# Restart Tetragon daemonset
kubectl rollout restart -n kube-system ds/tetragon
# Change configuration inside /etc/tetragon/ then restart container.
# Example:
#   1. As a privileged user, write to the file /etc/tetragon/tetragon.conf.d/export-file
#      the path where to export events, example "/var/log/tetragon/tetragon.log"
#   2. Bind mount host /etc/tetragon into container /etc/tetragon
# Tetragon events will be exported to /var/log/tetragon/tetragon.log
echo "/var/log/tetragon/tetragon.log" > /etc/tetragon/tetragon.conf.d/export-file
docker run --name tetragon --rm -d \
  --pid=host --cgroupns=host --privileged \
  -v /etc/tetragon:/etc/tetragon \
  -v /sys/kernel:/sys/kernel \
  -v /var/log/tetragon:/var/log/tetragon \
  quay.io/cilium/tetragon:v1.1.0 \
  /usr/bin/tetragon
# Change configuration inside /etc/tetragon/ then restart systemd service.
# Example:
#   1. As a privileged user, write to the file /etc/tetragon/tetragon.conf.d/export-file
#      the path where to export events, example "/var/log/tetragon/tetragon.log"
#   2. Bind mount host /etc/tetragon into container /etc/tetragon
# Tetragon events will be exported to /var/log/tetragon/tetragon.log
echo "/var/log/tetragon/tetragon.log" > /etc/tetragon/tetragon.conf.d/export-file
systemctl restart tetragon

To read more about Tetragon configuration, please check our reference pages.

Enable Process Credentials

On Linux each process has various associated user and group IDs and capabilities, known as process credentials. To enable visibility into process_credentials, run Tetragon with the enable-process-cred setting set.

kubectl edit cm -n kube-system tetragon-config
# Change "enable-process-cred" from "false" to "true", then save and exit
# Restart Tetragon daemonset
kubectl rollout restart -n kube-system ds/tetragon
echo "true" > /etc/tetragon/tetragon.conf.d/enable-process-cred
docker run --name tetragon --rm -d \
  --pid=host --cgroupns=host --privileged \
  -v /etc/tetragon:/etc/tetragon \
  -v /sys/kernel:/sys/kernel \
  -v /var/log/tetragon:/var/log/tetragon \
  quay.io/cilium/tetragon:v1.1.0 \
  /usr/bin/tetragon
# Write to the drop-in file /etc/tetragon/tetragon.conf.d/enable-process-cred  true
# Run the following as a privileged user then restart tetragon service
echo "true" > /etc/tetragon/tetragon.conf.d/enable-process-cred
systemctl restart tetragon

3.6 - Verify installation

Verify Tetragon image and software bill of materials signatures

Verify Tetragon image signature

Learn how to verify Tetragon container images signatures.

Prerequisites

You will need to install cosign.

Verify Signed Container Images

Since version 0.8.4, all Tetragon container images are signed using cosign.

Let’s verify a Tetragon image’s signature using the cosign verify command:

cosign verify --certificate-github-workflow-repository cilium/tetragon --certificate-oidc-issuer https://token.actions.githubusercontent.com <Image URL> | jq
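For example, to verify the v1.1.0 image referenced elsewhere in this documentation:

cosign verify --certificate-github-workflow-repository cilium/tetragon --certificate-oidc-issuer https://token.actions.githubusercontent.com quay.io/cilium/tetragon:v1.1.0 | jq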

Verify the SBOM signature

Download and verify the signature of the software bill of materials

A Software Bill of Materials (SBOM) is a complete, formally structured list of components that are required to build a given piece of software. An SBOM provides insight into the software supply chain and any potential concerns related to license compliance and security that might exist.

Starting with version 0.8.4, all Tetragon images include an SBOM. The SBOM is generated in SPDX format using the bom tool. If you are new to the concept of SBOM, see what an SBOM can do for you.

Download SBOM

The SBOM can be downloaded from the supplied Tetragon image using the cosign download sbom command.

cosign download sbom --output-file sbom.spdx <Image URL>

Verify SBOM Image Signature

To ensure the SBOM is tamper-proof, its signature can be verified using the cosign verify command.

COSIGN_EXPERIMENTAL=1 cosign verify --certificate-github-workflow-repository cilium/tetragon --certificate-oidc-issuer https://token.actions.githubusercontent.com --attachment sbom <Image URL> | jq

It can be validated that the SBOM image was signed using GitHub Actions in the Cilium repository from the Issuer and Subject fields of the output.

4 - Concepts

The concepts section helps you understand various Tetragon abstractions and mechanisms.

4.1 - Events

Documentation for Tetragon Events

Tetragon’s events are exposed to the system through either the gRPC endpoint or JSON logs. Commands in this section assume the Getting Started guide was used, but they are general (other than the namespaces chosen) and should work in most environments.

JSON

The first way is to observe the raw JSON output from the stdout container log:

kubectl logs -n kube-system -l app.kubernetes.io/name=tetragon -c export-stdout -f

The raw JSON events provide Kubernetes API, identity metadata, and OS level process visibility about the executed binary, its parent and the execution time. A base Tetragon installation will produce process_exec and process_exit events encoded in JSON as shown here,

Process execution event

{
  "process_exec": {
    "process": {
      "exec_id": "Z2tlLWpvaG4tNjMyLWRlZmF1bHQtcG9vbC03MDQxY2FjMC05czk1OjEzNTQ4Njc0MzIxMzczOjUyNjk5",
      "pid": 52699,
      "uid": 0,
      "cwd": "/",
      "binary": "/usr/bin/curl",
      "arguments": "https://ebpf.io/applications/#tetragon",
      "flags": "execve rootcwd",
      "start_time": "2023-10-06T22:03:57.700327580Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
        "name": "xwing",
        "container": {
          "id": "containerd://551e161c47d8ff0eb665438a7bcd5b4e3ef5a297282b40a92b7c77d6bd168eb3",
          "name": "spaceship",
          "image": {
            "id": "docker.io/tgraf/netperf@sha256:8e86f744bfea165fd4ce68caa05abc96500f40130b857773186401926af7e9e6",
            "name": "docker.io/tgraf/netperf:latest"
          },
          "start_time": "2023-10-06T21:52:41Z",
          "pid": 49
        },
        "pod_labels": {
          "app.kubernetes.io/name": "xwing",
          "class": "xwing",
          "org": "alliance"
        },
        "workload": "xwing"
      },
      "docker": "551e161c47d8ff0eb665438a7bcd5b4",
      "parent_exec_id": "Z2tlLWpvaG4tNjMyLWRlZmF1bHQtcG9vbC03MDQxY2FjMC05czk1OjEzNTQ4NjcwODgzMjk5OjUyNjk5",
      "tid": 52699
    },
    "parent": {
      "exec_id": "Z2tlLWpvaG4tNjMyLWRlZmF1bHQtcG9vbC03MDQxY2FjMC05czk1OjEzNTQ4NjcwODgzMjk5OjUyNjk5",
      "pid": 52699,
      "uid": 0,
      "cwd": "/",
      "binary": "/bin/bash",
      "arguments": "-c \"curl https://ebpf.io/applications/#tetragon\"",
      "flags": "execve rootcwd clone",
      "start_time": "2023-10-06T22:03:57.696889812Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
        "name": "xwing",
        "container": {
          "id": "containerd://551e161c47d8ff0eb665438a7bcd5b4e3ef5a297282b40a92b7c77d6bd168eb3",
          "name": "spaceship",
          "image": {
            "id": "docker.io/tgraf/netperf@sha256:8e86f744bfea165fd4ce68caa05abc96500f40130b857773186401926af7e9e6",
            "name": "docker.io/tgraf/netperf:latest"
          },
          "start_time": "2023-10-06T21:52:41Z",
          "pid": 49
        },
        "pod_labels": {
          "app.kubernetes.io/name": "xwing",
          "class": "xwing",
          "org": "alliance"
        },
        "workload": "xwing"
      },
      "docker": "551e161c47d8ff0eb665438a7bcd5b4",
      "parent_exec_id": "Z2tlLWpvaG4tNjMyLWRlZmF1bHQtcG9vbC03MDQxY2FjMC05czk1OjEzNTQ4NjQ1MjQ1ODM5OjUyNjg5",
      "tid": 52699
    }
  },
  "node_name": "gke-john-632-default-pool-7041cac0-9s95",
  "time": "2023-10-06T22:03:57.700326678Z"
}

We will only highlight a few important fields here. For a full specification of events see the Reference section. All events in Tetragon contain a process_exec block to identify the process generating the event. For execution events this is the primary block. For Tracing Policy events, the hook that generated the event will attach further data to this. The process_exec event provides a cluster-wide unique id for this process in process_exec.exec_id, along with the metadata expected in a Kubernetes cluster in process_exec.process.pod. The binary and args being executed are part of the event in process_exec.process.binary and process_exec.process.args. Finally, node_name and time provide the location and time for the event and will be present in all event types.
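The raw JSON can also be post-processed with standard tooling. For instance, an illustrative jq one-liner (field paths taken from the event above) that prints the time, pod namespace, binary, and arguments of each process_exec event:

kubectl logs -n kube-system -l app.kubernetes.io/name=tetragon -c export-stdout -f | \
  jq -r 'select(.process_exec != null) | [.time, .process_exec.process.pod.namespace, .process_exec.process.binary, .process_exec.process.arguments] | @tsv'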

A default deployment writes the JSON log to /var/run/cilium/tetragon/tetragon.log where it can be exported through normal log collection tooling, e.g. fluentd, logstash, etc. The file will be rotated and compressed by default. See the Helm options for details on how to customize this location.

Export Filtering

Export filters restrict the JSON event output to a subset of desirable events. These export filters are configured as a line-separated list of JSON objects, where each object can contain one or more filter expressions. Filters are combined by taking the logical OR of each line-separated filter object and the logical AND of sibling expressions within a filter object. As a concrete example, suppose we had the following filter configuration:

{"event_set": ["PROCESS_EXEC", "PROCESS_EXIT"], "namespace": "foo"}
{"event_set": ["PROCESS_KPROBE"]}

The above filter configuration would result in a match if:

  • The event type is PROCESS_EXEC or PROCESS_EXIT AND the pod namespace is “foo”; OR
  • The event type is PROCESS_KPROBE

Tetragon supports two groups of export filters: an allowlist and a denylist. If neither is configured, all events are exported. If only an allowlist is configured, event exports are considered default-deny, meaning only the events in the allowlist are exported. The denylist takes precedence over the allowlist in cases where two filter configurations match on the same event.

You can configure export filters using the provided helm options, command line flags, or environment variables.
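As a sketch of how these filters can be supplied (the Helm value names exportAllowList/exportDenyList and the --export-allowlist daemon flag are assumptions; check the Helm chart and daemon configuration references for the exact names), the filter objects above could be set through a Helm values file:

# values.yaml fragment, reusing the example filters from above
tetragon:
  exportAllowList: |-
    {"event_set": ["PROCESS_EXEC", "PROCESS_EXIT"], "namespace": "foo"}
    {"event_set": ["PROCESS_KPROBE"]}

helm upgrade tetragon cilium/tetragon -n kube-system -f values.yaml

For a non-Kubernetes install, the same line-separated JSON would be passed via the corresponding daemon flag, e.g. --export-allowlist.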

List of Process Event Filters

Filter             Description
event_set          Filter process events by event types. Supported types include: PROCESS_EXEC, PROCESS_EXIT, PROCESS_KPROBE, PROCESS_UPROBE, PROCESS_TRACEPOINT, PROCESS_LOADER.
binary_regex       Filter process events by a list of regular expressions of process binary names (e.g. "^/home/kubernetes/bin/kubelet$"). You can find the full syntax here.
health_check       Filter process events if their binary names match Kubernetes liveness / readiness probe commands of their corresponding pods.
namespace          Filter by Kubernetes pod namespaces. An empty string ("") filters processes that do not belong to any pod namespace.
pid                Filter by process PID.
pid_set            Like pid, but also includes processes that are descendants of the listed PIDs.
pod_regex          Filter by pod name using a list of regular expressions. You can find the full syntax here.
arguments_regex    Filter by process arguments using a list of regular expressions. You can find the full syntax here.
labels             Filter events by pod labels using Kubernetes label selector syntax. Note that this filter never matches events without the pod field (i.e. host process events).
policy_names       Filter events by tracing policy names.
capabilities       Filter events by Linux process capability.

Field Filtering

In some cases, it is not desirable to include all of the fields exported in Tetragon events by default. In these cases, you can use field filters to restrict the set of exported fields for a given event type. Field filters are configured similarly to export filters, as line-separated lists of JSON objects.

Field filters select fields using the protobuf field mask syntax under the "fields" key. You can define a path of fields using field names separated by period (.) characters. To define multiple paths in a single field filter, separate them with comma (,) characters. For example, "fields":"process.binary,parent.binary,pod.name" would select only the process.binary, parent.binary, and pod.name fields.

By default, a field filter applies to all process events, although you can control this behaviour with the "event_set" key. For example, you can apply a field filter to PROCESS_CONNECT and PROCESS_CLOSE events by specifying "event_set":["PROCESS_CONNECT","PROCESS_CLOSE"] in the filter definition.

Each field filter has an "action" that determines what the filter should do with the selected field. The supported action types are "INCLUDE" and "EXCLUDE". A value of "INCLUDE" will cause the field to appear in an event, while a value of "EXCLUDE" will hide the field. In the absence of any field filter for a given event type, the export will include all fields by default. Defining one or more "INCLUDE" filters for a given event type changes that behaviour to exclude all other fields by default.

As a simple example of the above, consider the case where we want to include only exec_id and parent_exec_id in all event types except for PROCESS_EXEC:

{"fields":"process.exec_id,process.parent_exec_id", "event_set": ["PROCESS_EXEC"], "invert_event_set": true, "action": "INCLUDE"}

Redacting Sensitive Information

Since Tetragon traces the entire system, event exports might sometimes contain sensitive information (for example, a secret passed via a command line argument to a process). To prevent this information from being exfiltrated via the Tetragon JSON export, Tetragon provides a mechanism called Redaction Filters, which can be used to define string patterns to redact from exported process arguments. These filters are written in JSON and passed to the Tetragon agent via the --redaction-filters command line flag or the redactionFilters Helm value.

To perform redactions, redaction filters define RE2 regular expressions in the redact field. Any capture groups in these RE2 regular expressions are redacted and replaced with "*****".

For more control, you can select which binary or binaries should have their arguments redacted with the binary_regex field.

As a concrete example, the following will redact all passwords passed to processes with the "--password" argument:

{"redact": ["--password(?:\\s+|=)(\\S*)"]}

Now, an event that contains the string "--password=foo" would have that string replaced with "--password=*****".

Suppose we also see some passwords passed via the -p shorthand for a specific binary, foo. We can also redact these as follows:

{"binary_regex": ["(?:^|/)foo$"], "redact": ["-p(?:\\s+|=)(\\S*)"]}

With both of the above redaction filters in place, we are now redacting all password arguments.

tetra CLI

A second way is to use the tetra CLI. This has the advantage that it can also be used to filter and pretty print the output. The tool allows filtering by process, pod, and other fields. To install tetra, see the Tetra Installation Guide.

To start printing events run:

kubectl logs -n kube-system -l app.kubernetes.io/name=tetragon -c export-stdout -f | tetra getevents -o compact

The tetra CLI is also available inside tetragon container.

kubectl exec -it -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact

This was used in the quick start and generates a pretty printing of the events. To further filter by a specific binary and/or pod, do the following:

kubectl logs -n kube-system -l app.kubernetes.io/name=tetragon -c export-stdout -f | tetra getevents -o compact --processes curl --pod xwing

This will filter and report just the relevant events:

🚀 process default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon
💥 exit    default/xwing /usr/bin/curl https://ebpf.io/applications/#tetragon 60

gRPC

In addition, Tetragon can expose a gRPC endpoint that listeners may attach to. The gRPC endpoint is exposed on localhost:54321 by the default helm install, but the address can be configured with the --server-address option. This can be set from helm with the tetragon.grpc.address flag, or disabled completely if needed with tetragon.grpc.enabled.

helm install tetragon cilium/tetragon -n kube-system --set tetragon.grpc.enabled=true --set tetragon.grpc.address=localhost:54321

An example consumer of the gRPC endpoint is the tetra CLI when it is not piped JSON output directly:

 kubectl exec -ti -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact

4.2 - Metrics

Documentation for Tetragon metrics

Tetragon’s metrics are exposed to the system through an HTTP endpoint. These are used to expose event summaries and information about the state of the Tetragon agent.

Kubernetes

Tetragon pods expose a metrics endpoint by default. The chart also creates a service named tetragon that exposes metrics on the specified port.

Getting metrics port

Check if the tetragon service exists:

kubectl get services tetragon -n kube-system

The output should be similar to:

NAME       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
tetragon   ClusterIP   10.96.54.218   <none>        2112/TCP    3m

Port Forwarding

To forward the metrics port locally, use kubectl port-forward:

kubectl -n kube-system port-forward service/tetragon 2112:2112

Local Package Install

By default, metrics are disabled when using release packages to install locally. Metrics can be enabled using the --metrics-server flag to specify the address.

Alternatively, the examples/configuration/tetragon.yaml file contains example entries showing the defaults for the address of metrics-server. Local overrides can be created by editing and copying this file into /etc/tetragon/tetragon.yaml, or by editing and copying “drop-ins” from the examples/configuration/tetragon.conf.d directory into the /etc/tetragon/tetragon.conf.d/ subdirectory. The latter is generally recommended.

Set Metrics Address

Run sudo tetragon --metrics-server localhost:2112 to set metrics address to localhost:2112 and export metrics.

sudo tetragon --metrics-server localhost:2112

The output should be similar to this:

time="2023-09-21T13:17:08+05:30" level=info msg="Starting tetragon"
version=v0.11.0
time="2023-09-21T13:17:08+05:30" level=info msg="config settings"
config="mapeased
time="2023-09-22T23:16:24+05:30" level=info msg="Starting metrics server"
addr="localhost:2112"
[...]
time="2023-09-21T13:17:08+05:30" level=info msg="Listening for events..."

Alternatively, a drop-in file named metrics-server can be created in /etc/tetragon/tetragon.conf.d/ with content specifying an address like localhost:2112, or any port of your choice as mentioned above.

Fetch the Metrics

After the metrics are exposed, either by port forwarding in the case of a Kubernetes installation or by setting the metrics address in the case of a package installation, the metrics can be fetched using curl on localhost:2112/metrics:

curl localhost:2112/metrics

The output should be similar to this:

# HELP promhttp_metric_handler_errors_total Total number of internal errors encountered by the promhttp metric handler.
# TYPE promhttp_metric_handler_errors_total counter
promhttp_metric_handler_errors_total{cause="encoding"} 0
promhttp_metric_handler_errors_total{cause="gathering"} 0
# HELP tetragon_errors_total The total number of Tetragon errors. For internal use only.
# TYPE tetragon_errors_total counter
[...]

4.3 - Tracing Policy

Documentation for the TracingPolicy custom resource

Tetragon’s TracingPolicy is a user-configurable Kubernetes custom resource (CR) that allows users to trace arbitrary events in the kernel and optionally define actions to take on a match. Policies consist of a hook point (kprobes, tracepoints, and uprobes are supported) and selectors for in-kernel filtering and specifying actions. For more details, see the hook points page and the selectors page.

For the complete custom resource definition (CRD) refer to the YAML file cilium.io_tracingpolicies.yaml. One practical way to explore the CRD is to use kubectl explain against a Kubernetes API server on which it is installed, for example kubectl explain tracingpolicy.spec.kprobes provides field-specific documentation and details on kprobe spec.

Tracing Policies can be loaded and unloaded at runtime in Tetragon, or on startup using flags.

  • With Kubernetes, you can use kubectl to add and remove a TracingPolicy.
  • You can use tetra gRPC CLI to add and remove a TracingPolicy.
  • You can use the --tracing-policy and --tracing-policy-dir flags to statically add policies at startup time, see more in the daemon configuration page.

Hence, even though Tracing Policies are structured as a Kubernetes CR, they can also be used in non-Kubernetes environments using the last two loading methods.

4.3.1 - Example

Learn the basics of Tracing Policy via an example

To discover TracingPolicy, let’s understand via an example that will be explained, part by part, in this document:

apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "fd-install"
spec:
  kprobes:
  - call: "fd_install"
    syscall: false
    args:
    - index: 0
      type: "int"
    - index: 1
      type: "file"
    selectors:
    - matchArgs:
      - index: 1
        operator: "Equal"
        values:
        - "/tmp/tetragon"
      matchActions:
      - action: Sigkill

The policy checks for file descriptors being created, and sends a SIGKILL signal to any process that creates a file descriptor to a file named /tmp/tetragon. We discuss the policy in more detail next.

Required fields

apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "fd-install"

The first part follows a common pattern among all Cilium policies, or more generally Kubernetes objects. It first declares the Kubernetes API used, then the kind of Kubernetes object it is within this API, and an arbitrary name for the object that has to comply with Kubernetes naming conventions.

Hook point

spec:
  kprobes:
  - call: "fd_install"
    syscall: false
    args:
    - index: 0
      type: "int"
    - index: 1
      type: "file"

The beginning of the specification describes the hook point to use. Here we are using a kprobe, hooking on the kernel function fd_install. That’s the kernel function that gets called when a new file descriptor is created. We indicate that it’s not a syscall, but a regular kernel function. We then specify the function arguments, so that Tetragon’s BPF code will extract and optionally perform filtering on them.

See the hook points page for further information on the various hook points available and arguments.

Selectors

    selectors:
    - matchArgs:
      - index: 1
        operator: "Equal"
        values:
        - "/tmp/tetragon"
      matchActions:
      - action: Sigkill

Selectors allow you to filter on the events to extract only a subset of the events based on different properties and optionally take an action.

In the example, we filter on the argument at index 1, the file struct passed to the function. Tetragon knows how to apply the Equal operator over a Linux kernel file struct and match on the path of the file.

Then we add the Sigkill action, meaning that any match of the selector will send a SIGKILL signal to the process that initiated the event.

Learn more about the various selectors in the dedicated selectors page.

Message

The message field is an optional short message that will be included in the generated event to inform users what is happening.

spec:
  kprobes:
  - call: "fd_install"
    message: "Installing a file descriptor"

Tags

Tags are optional fields of a Tracing Policy that are used to categorize generated events. Further reference here: Tags documentation.
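
For example, building on the snippet above, a hedged sketch of adding tags to the kprobe could look like this (the tag names are illustrative, not a fixed taxonomy):

spec:
  kprobes:
  - call: "fd_install"
    message: "Installing a file descriptor"
    tags: [ "observability.filesystem", "observability.process" ]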

Policy effect

First, let’s create the /tmp/tetragon file with some content:

echo eBPF! > /tmp/tetragon

You can save the policy in an example.yaml file, compile Tetragon locally, and start Tetragon:

sudo ./tetragon --bpf-lib bpf/objs --tracing-policy example.yaml

(See Quick Kubernetes Install and Quick Local Docker Install for other ways to start Tetragon.)

Once Tetragon starts, you can monitor events using tetra, the Tetragon CLI:

./tetra getevents -o compact

Reading the /tmp/tetragon file with cat:

cat /tmp/tetragon

Results in the following events:

🚀 process  /usr/bin/cat /tmp/tetragon
📬 open     /usr/bin/cat /tmp/tetragon
💥 exit     /usr/bin/cat /tmp/tetragon SIGKILL

And the shell where the cat command was performed will return:

Killed

See more

For more examples of tracing policies, take a look at the examples/tracingpolicy folder in the Tetragon repository. Also read the following sections on hook points and selectors.

4.3.2 - Hook points

Hook points for Tracing Policies and arguments description

Tetragon can hook into the kernel using kprobes and tracepoints, as well as into user-space programs using uprobes. Users can configure these hook points using the corresponding sections of the TracingPolicy specification (.spec). These hook points include arguments and return values that can be specified using the args and returnArg fields, as detailed in the following sections.

Kprobes

Kprobes enables you to dynamically hook into any kernel function and execute BPF code. Because kernel functions might change across versions, kprobes are highly tied to your kernel version and, thus, might not be portable across different kernels.

Conveniently, you can list all kernel symbols by reading the /proc/kallsyms file. For example, to search for the write syscall kernel function, you can execute sudo grep sys_write /proc/kallsyms; the output should be similar to this, modulo the architecture-specific prefixes.

ffffdeb14ea712e0 T __arm64_sys_writev
ffffdeb14ea73010 T ksys_write
ffffdeb14ea73140 T __arm64_sys_write
ffffdeb14eb5a460 t proc_sys_write
ffffdeb15092a700 d _eil_addr___arm64_sys_writev
ffffdeb15092a740 d _eil_addr___arm64_sys_write

You can see that the exact name of the symbol for the write syscall on our kernel version is __arm64_sys_write. Note that on x86_64, the prefix would be __x64_ instead of __arm64_.

In our example, we will explore a kprobe hooking into the fd_install kernel function. The fd_install kernel function is called each time a file descriptor is installed into the file descriptor table of a process, typically referenced within system calls like open or openat. Hooking fd_install has its benefits and limitations, which are out of the scope of this guide.

spec:
  kprobes:
  - call: "fd_install"
    syscall: false

Kprobe calls can be defined independently in different policies, or together in the same policy. For example, we can trace multiple kprobes under the same tracing policy:

spec:
  kprobes:
  - call: "sys_read"
    syscall: true
    # [...]
  - call: "sys_write"
    syscall: true
    # [...]

Tracepoints

Tracepoints are statically defined in the kernel and have the advantage of being stable across kernel versions and thus more portable than kprobes.

To see the tracepoints available on your kernel, you can list them using sudo ls /sys/kernel/debug/tracing/events; the output should be similar to this:

alarmtimer    ext4            iommu           page_pool     sock
avc           fib             ipi             pagemap       spi
block         fib6            irq             percpu        swiotlb
bpf_test_run  filelock        jbd2            power         sync_trace
bpf_trace     filemap         kmem            printk        syscalls
bridge        fs_dax          kvm             pwm           task
btrfs         ftrace          libata          qdisc         tcp
cfg80211      gpio            lock            ras           tegra_apb_dma
cgroup        hda             mctp            raw_syscalls  thermal
clk           hda_controller  mdio            rcu           thermal_power_allocator
cma           hda_intel       migrate         regmap        thermal_pressure
compaction    header_event    mmap            regulator     thp
cpuhp         header_page     mmap_lock       rpm           timer
cros_ec       huge_memory     mmc             rpmh          tlb
dev           hwmon           module          rseq          tls
devfreq       i2c             mptcp           rtc           udp
devlink       i2c_slave       napi            sched         vmscan
dma_fence     initcall        neigh           scmi          wbt
drm           interconnect    net             scsi          workqueue
emulation     io_uring        netlink         signal        writeback
enable        iocost          oom             skb           xdp
error_report  iomap           page_isolation  smbus         xhci-hcd

You can then choose the subsystem that you want to trace, and look up the tracepoint you want to use and its format. For example, if we choose the netif_receive_skb tracepoint from the net subsystem, we can read its format with sudo cat /sys/kernel/debug/tracing/events/net/netif_receive_skb/format; the output should be similar to the following.

name: netif_receive_skb
ID: 1398
format:
        field:unsigned short common_type;       offset:0;       size:2; signed:0;
        field:unsigned char common_flags;       offset:2;       size:1; signed:0;
        field:unsigned char common_preempt_count;       offset:3;       size:1; signed:0;
        field:int common_pid;   offset:4;       size:4; signed:1;

        field:void * skbaddr;   offset:8;       size:8; signed:0;
        field:unsigned int len; offset:16;      size:4; signed:0;
        field:__data_loc char[] name;   offset:20;      size:4; signed:0;

print fmt: "dev=%s skbaddr=%px len=%u", __get_str(name), REC->skbaddr, REC->len

Similarly to kprobes, tracepoints can also hook into system calls. For more details, see the raw_syscalls and syscalls subsystems.

An example of a tracepoint TracingPolicy could be the following, observing all syscalls and getting the syscall ID from the argument at index 4:

apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "raw-syscalls"
spec:
  tracepoints:
  - subsystem: "raw_syscalls"
    event: "sys_enter"
    args:
    - index: 4
      type: "int64"

Uprobes

Uprobes are similar to kprobes, but they allow you to dynamically hook into any user-space function and execute BPF code. Uprobes are also tied to the binary version of the user-space program, so they may not be portable across different versions or architectures.

To use uprobes, you need to specify the path to the executable or library file, and the symbol of the function you want to probe. You can use tools like objdump, nm, or readelf to find the symbol of a function in a binary file. For example, to find the readline symbol in /bin/bash using nm, you can run:

nm -D /bin/bash | grep readline

The output should look similar to this, with a few lines redacted:

[...]
000000000009f2b0 T pcomp_set_readline_variables
0000000000097e40 T posix_readline_initialize
00000000000d5690 T readline
00000000000d52f0 T readline_internal_char
00000000000d42d0 T readline_internal_setup
[...]

You can see in the nm output first the symbol value, then the symbol type, which is T for the readline symbol, meaning that this symbol is in the text (code) section of the binary, and finally the symbol name. This confirms that the readline symbol is present in the /bin/bash binary and might be a function name that we can hook with a uprobe.

You can define multiple uprobes in the same policy, or in different policies. You can also combine uprobes with kprobes and tracepoints to get a comprehensive view of the system behavior.
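
As a minimal sketch of such a combination, reusing the hook points shown earlier on this page, a single policy can carry both kprobes and uprobes sections (the policy name is arbitrary):

apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "example-combined"
spec:
  kprobes:
  - call: "fd_install"
    syscall: false
  uprobes:
  - path: "/bin/bash"
    symbols:
    - "readline"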

Here is an example of a policy that defines an uprobe for the readline function in the bash executable, and applies it to all processes that use the bash binary:

apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "example-uprobe"
spec:
  uprobes:
  - path: "/bin/bash"
    symbols:
    - "readline"

This example shows how to use uprobes to hook into the readline function running in all the bash shells.

Arguments

Kprobes, uprobes, and tracepoints all share a required field called args. It is a list of arguments to include in the trace output. Tetragon’s BPF code requires information about the types of the arguments to properly read, print, and filter on them. This information needs to be provided by the user under the args section. For the available types, check the TracingPolicy CRD.

Following our example, here is the part that defines the arguments:

args:
- index: 0
  type: "int"
- index: 1
  type: "file"

Each argument can optionally include a ’label’ parameter, which will be included in the output. This can be used to annotate the arguments to help with understanding and processing the output. As an example, here is the same definition, with an appropriate label on the int argument:

args:
- index: 0
  type: "int"
  label: "FD"
- index: 1
  type: "file"

To properly read and hook onto the fd_install(unsigned int fd, struct file *file) function, the YAML snippet above tells the BPF code that the first argument is an int and the second argument is a file, which is the struct file of the kernel. In this way, the BPF code and its printer can properly collect and print the arguments.

The arguments are ordered by the index field, where you specify their position. Indexing starts at 0, so index: 0 means the first argument of the function, index: 1 the second argument, and so on.

Note that for some args types, char_buf and char_iovec, there are additional fields named returnCopy and sizeArgIndex available:

  • returnCopy indicates that the corresponding argument should be read later (when the kretprobe for the symbol is triggered) because it might not be populated when the kprobe is triggered at the entrance of the function. For example, a buffer supplied to read(2) won’t have content until kretprobe is triggered.
  • sizeArgIndex indicates the (1-based, see warning below) index of the argument that represents the size of the char_buf or iovec. For example, for write(2), the third argument, size_t count, is the number of char elements that we can read from the const void *buf pointer passed as the second argument. Similarly, if we would like to capture the __x64_sys_writev(long, iovec *, vlen) syscall, then the iovec has a size of vlen, which is going to be the third argument.

These flags can be combined, see the example below.

- call: "sys_write"
  syscall: true
  args:
  - index: 0
    type: "int"
  - index: 1
    type: "char_buf"
    returnCopy: true
    sizeArgIndex: 3
  - index: 2
    type: "size_t"

Note that you can specify which arguments you would like to print for a specific syscall. For example, if you don’t care about the file descriptor, which is the first int argument with index: 0, and just want the char_buf (what is written), then you can leave that argument out and just define:

args:
- index: 1
  type: "char_buf"
  returnCopy: true
  sizeArgIndex: 3
- index: 2
  type: "size_t"

This tells the printer to skip printing the int arg because it’s not useful.

For the char_buf type, up to 4096 bytes are stored. Bigger data is cut and returned as truncated bytes.

You can specify maxData flag for char_buf type to read maximum possible data (currently 327360 bytes), like:

args:
- index: 1
  type: "char_buf"
  maxData: true
  sizeArgIndex: 3
- index: 2
  type: "size_t"

This field is only used for char_buf data. When this value is false (the default), the BPF program will fetch at most 4096 bytes. On later kernels (>=5.4) Tetragon supports fetching up to 327360 bytes if this flag is turned on.

The maxData flag does not work with the returnCopy flag at the moment, so it’s usable only for syscalls/functions that do not require a return probe to read the data.

Return values

A TracingPolicy spec can specify that the return value should be reported in the tracing output. To do this, the return parameter of the call needs to be set to true, and the returnArg parameter needs to be set to specify the type of the return argument. For example:

- call: "sk_alloc"
  syscall: false
  return: true
  args:
  - index: 1
    type: int
    label: "family"
  returnArg:
    type: sock

In this case, the sk_alloc hook is specified to return a value of type sock (a pointer to a struct sock). Whenever the sk_alloc hook is hit, not only will it report the family parameter in index 1, it will also report the socket that was created.

Return values for socket tracking

A unique feature of a sock being returned from a hook such as sk_alloc is that the socket it refers to can be tracked. Most networking hooks in the network stack run in a context that is not that of the process that owns the socket to which the actions relate; this is because networking happens asynchronously and not entirely in line with the process. The sk_alloc hook does, however, occur in the context of the process, such that the task, the PID, and the TGID are those of the process that requested that the socket be created.

Specifying socket tracking tells Tetragon to store a mapping between the socket and the process’ PID and TGID; and to use that mapping when it sees the socket in a sock argument in another hook to replace the PID and TGID of the context with the process that actually owns the socket. This can be done by adding a returnArgAction to the call. Available actions are TrackSock and UntrackSock. See TrackSock and UntrackSock.

- call: "sk_alloc"
  syscall: false
  return: true
  args:
  - index: 1
    type: int
    label: "family"
  returnArg:
    type: sock
  returnArgAction: TrackSock

Socket tracking is only available on kernels >=5.3.

Lists

It’s possible to define a list of functions and use it in the kprobe’s call field.

The following example traces all sys_dup[23] syscalls.

spec:
  lists:
  - name: "dups"
    type: "syscalls"
    values:
    - "sys_dup"
    - "sys_dup2"
    - "sys_dup3"
  kprobes:
  - call: "list:dups"

It is basically a shortcut for the following policy:

spec:
  kprobes:
  - call: "sys_dup"
    syscall: true
  - call: "sys_dup2"
    syscall: true
  - call: "sys_dup3"
    syscall: true

As shown in subsequent examples, its main benefit is allowing a single definition for calls that have the same filters.

The list is defined under the lists field, with arbitrary values for the name and values fields.

spec:
  lists:
  - name: "dups"
    type: "syscalls"
    values:
    - "sys_dup"
    - "sys_dup2"
    - "sys_dup3"
    ...

It’s possible to define multiple lists.

spec:
  lists:
  - name: "dups"
    type: "syscalls"
    values:
    - "sys_dup"
    - "sys_dup2"
    - "sys_dup3"
    name: "another"
    - "sys_open"
    - "sys_close"

Syscalls specified with sys_ prefix are translated to their 64 bit equivalent function names.

It’s possible to specify 32 bit syscall by using its full function name that includes specific architecture native prefix (like __ia32_ for x86):

spec:
  lists:
  - name: "dups"
    type: "syscalls"
    values:
    - "sys_dup"
    - "__ia32_sys_dup"
    name: "another"
    - "sys_open"
    - "sys_close"

A specific list can be referenced in the kprobe’s call field with a "list:NAME" value.

spec:
  lists:
  - name: "dups"
  ...

  kprobes:
  - call: "list:dups"

The kprobe definition creates a kprobe for each item in the list; the rest of the kprobe configuration is shared by all of them.

A list can also specify a type field that implies extra checks on the values (as for the syscalls type) or denotes that the list is generated automatically (see below). Users must specify the syscalls type for lists of syscall functions. Also, syscall functions can’t be mixed with regular functions in the same list.

The additional selector configuration is shared with all functions in the list. In the following example, we create 3 kprobes that share the same PID filter.

spec:
  lists:
  - name: "dups"
    type: "syscalls"
    values:
    - "sys_dup"
    - "sys_dup2"
    - "sys_dup3"
  kprobes:
  - call: "list:dups"
    selectors:
    - matchPIDs:
      - operator: In
        followForks: true
        isNamespacePID: false
        values:
        - 12345

It’s possible to use an argument filter together with the list.

It’s important to understand that the argument will be retrieved by using the specified argument type for all the functions in the list.

The following example adds an argument filter on the first argument of all functions in the dups list, matching the value 9999.

spec:
  lists:
  - name: "dups"
    type: "syscalls"
    values:
    - "sys_dup"
    - "sys_dup2"
    - "sys_dup3"
  kprobes:
  - call: "list:dups"
    args:
    - index: 0
      type: int
    selectors:
    - matchArgs:
      - index: 0
        operator: "Equal"
        values:
        - 9999

There are two additional special types of generated lists.

The generated_syscalls list type generates a list with all possible syscalls on the system.

The following example traces all syscalls for the /usr/bin/kill binary.

spec:
  lists:
  - name: "all-syscalls"
    type: "generated_syscalls"
  kprobes:
  - call: "list:all-syscalls"
    selectors:
    - matchBinaries:
      - operator: "In"
        values:
        - "/usr/bin/kill"

The generated_ftrace list type generates functions from the ftrace available_filter_functions file with the specified filter. The filter is specified with the pattern field and expects a regular expression.

The following example traces all kernel ksys_* functions for the /usr/bin/kill binary.

spec:
  lists:
  - name: "ksys"
    type: "generated_ftrace"
    pattern: "^ksys_*"
  kprobes:
  - call: "list:ksys"
    selectors:
    - matchBinaries:
      - operator: "In"
        values:
        - "/usr/bin/kill"

Note that if a syscalls list is used in a selector with the InMap operator, the argument type needs to be syscall64, for example:

spec:
  lists:
  - name: "dups"
    type: "syscalls"
    values:
    - "sys_dup"
    - "__ia32_sys_dup"
  tracepoints:
  - subsystem: "raw_syscalls"
    event: "sys_enter"
    args:
    - index: 4
      type: "syscall64"
    selectors:
    - matchArgs:
      - index: 0
        operator: "InMap"
        values:
        - "list:dups"

4.3.3 - Options

Pass options to hook

It’s possible to pass options through the spec file as an array of name and value pairs:

spec:
  options:
    - name: "option-1"
      value: "True"
    - name: "option-2"
      value: "10"

The options array is passed to and processed by each hook used in the spec file that supports options. At the moment it’s available for kprobe and uprobe hooks.

Kprobe options

disable-kprobe-multi

This option disables the kprobe multi link interface for all the kprobes defined in the spec file. If this option is enabled, all the defined kprobes will be attached through the standard kprobe interface. Kprobe multi stays enabled for other spec files that do not set this option.

It takes a boolean as value; by default it’s false.

Example:

  options:
    - name: "disable-kprobe-multi"
      value: "1"

Uprobe options

disable-uprobe-multi

This option disables the uprobe multi link interface for all the uprobes defined in the spec file. If this option is enabled, all the defined uprobes will be attached through the standard uprobe interface. Uprobe multi stays enabled for other spec files that do not set this option.

It takes a boolean as value; by default it’s false.

Example:

  options:
    - name: "disable-uprobe-multi"
      value: "1"

4.3.4 - Selectors

Perform in-kernel BPF filtering and actions on events

Selectors are a way to perform in-kernel BPF filtering on the events to export, or on the events on which to apply an action.

A TracingPolicy can contain from 0 to 5 selectors. A selector is composed of 1 or more filters. The available filters are the following:

Arguments filter

Arguments filters can be specified under the matchArgs field and provide filtering based on the value of the function’s argument.

In the next example, a selector is defined with a matchArgs filter that tells the BPF code to process only the function call for which the second argument, index equal to 1, concerns the file under the path /etc/passwd or /etc/shadow. It’s using the operator Equal to match against the value of the argument.

Note that conveniently, we can match against a path directly when the argument is of type file.

selectors:
- matchArgs:
  - index: 1
    operator: "Equal"
    values:
    - "/etc/passwd"
    - "/etc/shadow"

The available operators for matchArgs are:

  • Equal
  • NotEqual
  • Prefix
  • Postfix
  • Mask

Further examples

In the previous example, we used the operator Equal, but we can also use the Prefix operator and match against all files under /etc with:

selectors:
- matchArgs:
  - index: 1
    operator: "Prefix"
    values:
    - "/etc"

In this situation, an event will be created every time a process tries to access a file under /etc.

Although it makes less sense, you can also match on the first argument, to only detect events that use file descriptor 3, which is usually the first file descriptor that comes after stdin, stdout, and stderr in a process, and combine that with the previous example.

- matchArgs:
  - index: 0
    operator: "Equal"
    values:
    - "3"
  - index: 1
    operator: "Prefix"
    values:
    - "/etc"

Return args filter

Return argument filters can be specified under the matchReturnArgs field and provide filtering based on the value returned by the function. This allows you to filter on the return value, and thus on the success, error, or value returned by a kernel call.

matchReturnArgs:
- operator: "NotEqual"
  values:
  - 0

The available operators for matchReturnArgs are:

  • Equal
  • NotEqual
  • Prefix
  • Postfix

A use case for this would be to detect failed access to certain files, like /etc/shadow. Running cat /etc/shadow will use an openat syscall that returns -1 for a failed attempt by an unprivileged user.

PIDs filter

PID filters can be specified under the matchPIDs field and provide filtering based on the host PID of the process. For example, the following matchPIDs filter tells the BPF code to observe only hooks for which the host PID is equal to pid1, pid2, or pid3:

- matchPIDs:
  - operator: "In"
    followForks: true
    values:
    - "pid1"
    - "pid2"
    - "pid3"

The available operators for matchPIDs are:

  • In
  • NotIn

Further examples

Another example can be to collect all processes not associated with a container’s init PID, which is equal to 1. In this way, we are able to detect if there was a kubectl exec performed inside a container because processes created by kubectl exec are not children of PID 1.

- matchPIDs:
  - operator: NotIn
    followForks: false
    isNamespacePID: true
    values:
    - 1

Binaries filter

Binary filters can be specified under the matchBinaries field and provide filtering based on the name of the binary performing the call. For example, the following matchBinaries selector tells the BPF code to process only system calls and kernel functions that are coming from the cat or tail binaries.

- matchBinaries:
  - operator: "In"
    values:
    - "/usr/bin/cat"
    - "/usr/bin/tail"

Currently, only the In operator type is supported and the values field has to be a list of strings. The default behaviour is followForks: true, so all the child processes are followed. The current limitation is 4 values.

Further examples

One example is to monitor all the sys_write system calls coming from the /usr/sbin/sshd binary and its child processes and writing to stdin/stdout/stderr.

This is how we can monitor what was written to the console by different users during different ssh sessions. The matchBinaries selector in this case is the following:

- matchBinaries:
  - operator: "In"
    values:
    - "/usr/sbin/sshd"

while the whole kprobe call is the following:

- call: "sys_write"
  syscall: true
  args:
  - index: 0
    type: "int"
  - index: 1
    type: "char_buf"
    sizeArgIndex: 3
  - index: 2
    type: "size_t"
  selectors:
  # match to /sbin/sshd
  - matchBinaries:
    - operator: "In"
      values:
      - "/usr/sbin/sshd"
  # match to stdin/stdout/stderr
    matchArgs:
    - index: 0
      operator: "Equal"
      values:
      - "1"
      - "2"
      - "3"

Namespaces filter

Namespaces filters can be specified under the matchNamespaces field and provide filtering of calls based on Linux namespace. You can specify the namespace inode or use the special host_ns keyword, see the example and description for more information.

An example syntax is:

- matchNamespaces:
  - namespace: Pid
    operator: In
    values:
    - "4026531836"
    - "4026531835"

This will match if: [Pid namespace is 4026531836] OR [Pid namespace is 4026531835]

  • namespace can be: Uts, Ipc, Mnt, Pid, PidForChildren, Net, Cgroup, or User. Time and TimeForChildren are also available in Linux >= 5.6.
  • operator can be In or NotIn
  • values can be raw numeric values (i.e. obtained from lsns) or "host_ns" which will automatically be translated to the appropriate value.

Limitations

  1. We can have up to 4 values. These can be both numeric and host_ns inside a single namespace.
  2. We can have up to 4 namespace values under matchNamespaces in Linux kernel < 5.3. In Linux >= 5.3 we can have up to 10 values (i.e. the maximum number of namespaces that modern kernels provide).

Further examples

We can have multiple namespace filters:

selectors:
- matchNamespaces:
  - namespace: Pid
    operator: In
    values:
    - "4026531836"
    - "4026531835"
  - namespace: Mnt
    operator: In
    values:
    - "4026531833"
    - "4026531834"

This will match if: ([Pid namespace is 4026531836] OR [Pid namespace is 4026531835]) AND ([Mnt namespace is 4026531833] OR [Mnt namespace is 4026531834])

Use cases examples

Generate a kprobe event if /etc/shadow was opened by /bin/cat which either had host Net or Mnt namespace access

apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "example_ns_1"
spec:
  kprobes:
    - call: "fd_install"
      syscall: false
      args:
        - index: 0
          type: int
        - index: 1
          type: "file"
      selectors:
        - matchBinaries:
          - operator: "In"
            values:
            - "/bin/cat"
          matchArgs:
          - index: 1
            operator: "Equal"
            values:
            - "/etc/shadow"
          matchNamespaces:
          - namespace: Mnt
            operator: In
            values:
            - "host_ns"
        - matchBinaries:
          - operator: "In"
            values:
            - "/bin/cat"
          matchArgs:
          - index: 1
            operator: "Equal"
            values:
            - "/etc/shadow"
          matchNamespaces:
          - namespace: Net
            operator: In
            values:
            - "host_ns"

This example has 2 selectors. Note that each selector starts with -.

Selector 1:

        - matchBinaries:
          - operator: "In"
            values:
            - "/bin/cat"
          matchArgs:
          - index: 1
            operator: "Equal"
            values:
            - "/etc/shadow"
          matchNamespaces:
          - namespace: Mnt
            operator: In
            values:
            - "host_ns"

Selector 2:

        - matchBinaries:
          - operator: "In"
            values:
            - "/bin/cat"
          matchArgs:
          - index: 1
            operator: "Equal"
            values:
            - "/etc/shadow"
          matchNamespaces:
          - namespace: Net
            operator: In
            values:
            - "host_ns"

We have [Selector1 OR Selector2]. Inside each selector we have filters. Both selectors have 3 filters (i.e. matchBinaries, matchArgs, and matchNamespaces) with different arguments. Adding a - in the beginning of a filter will result in a new selector.

So the previous CRD will match if:

[binary == /bin/cat AND arg1 == /etc/shadow AND MntNs == host] OR [binary == /bin/cat AND arg1 == /etc/shadow AND NetNs is host]

We can modify the previous example as follows:

Generate a kprobe event if /etc/shadow was opened by /bin/cat which has host Net and Mnt namespace access

apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "example_ns_2"
spec:
  kprobes:
    - call: "fd_install"
      syscall: false
      args:
        - index: 0
          type: int
        - index: 1
          type: "file"
      selectors:
        - matchBinaries:
          - operator: "In"
            values:
            - "/bin/cat"
          matchArgs:
          - index: 1
            operator: "Equal"
            values:
            - "/etc/shadow"
          matchNamespaces:
          - namespace: Mnt
            operator: In
            values:
            - "host_ns"
          - namespace: Net
            operator: In
            values:
            - "host_ns"

Here we have a single selector. This CRD will match if:

[binary == /bin/cat AND arg1 == /etc/shadow AND (MntNs == host AND NetNs == host) ]

Capabilities filter

Capabilities filters can be specified under the matchCapabilities field and provide filtering of calls based on Linux capabilities in the specific sets.

An example syntax is:

- matchCapabilities:
  - type: Effective
    operator: In
    values:
    - "CAP_CHOWN"
    - "CAP_NET_RAW"

This will match if: [Effective capabilities contain CAP_CHOWN] OR [Effective capabilities contain CAP_NET_RAW]

  • type can be: Effective, Inheritable, or Permitted.
  • operator can be In or NotIn
  • values can be any supported capability. A list of all supported capabilities can be found in /usr/include/linux/capability.h.
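
For example, using the NotIn operator instead, a minimal sketch (the capability shown is just an illustration) would only match calls whose effective set does not contain CAP_SYS_ADMIN:

- matchCapabilities:
  - type: Effective
    operator: NotIn
    values:
    - "CAP_SYS_ADMIN"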

Limitations

  1. There is no limit on the number of capabilities listed under values.
  2. Only one type field can be specified under matchCapabilities.

Namespace changes filter

Namespace changes filters can be specified under the matchNamespaceChanges field and provide filtering based on calls that are changing Linux namespaces. This filter can be useful to track execution of code in a new namespace or even container escapes that change their namespaces.

For instance, if an unprivileged process creates a new user namespace, it gains full privileges within that namespace. This grants the process the ability to perform some privileged operations within the context of this new namespace that would otherwise only be available to the privileged root user. As a result, such a filter is useful to track namespace creation, which can be abused by untrusted processes.

To keep track of the changes, when a process_exec happens, the namespaces of the process are recorded and these are compared with the current namespaces on the event with a matchNamespaceChanges filter.

matchNamespaceChanges:
- operator: In
  values:
  - "Mnt"

The unshare command, or executing in the host namespace using nsenter can be used to test this feature. See a demonstration example of this feature.

Capability changes filter

Capability changes filters can be specified under the matchCapabilityChanges field and provide filtering based on calls that are changing Linux capabilities.

To keep track of the changes, when a process_exec happens, the capabilities of the process are recorded and these are compared with the current capabilities on the event with a matchCapabilityChanges filter.

matchCapabilityChanges:
- type: Effective
  operator: In
  isNamespaceCapability: false
  values:
  - "CAP_SETUID"

See a demonstration example of this feature.

Actions filter

Actions filters are a list of actions that execute when an appropriate selector matches. They are defined under matchActions and currently, the following action types are supported:

Sigkill action

The Sigkill action synchronously terminates, from the kernel, the process that made the call matching the appropriate selectors. In the example below, every sys_write system call with a PID not equal to 1 or 0 attempting to write to /etc/passwd will be terminated. Indeed, when using kubectl exec, a new process is spawned in the container PID namespace and is not a child of PID 1.

- call: "sys_write"
  syscall: true
  args:
  - index: 0
    type: "fd"
  - index: 1
    type: "char_buf"
    sizeArgIndex: 3
  - index: 2
    type: "size_t"
  selectors:
  - matchPIDs:
    - operator: NotIn
      followForks: true
      isNamespacePID: true
      values:
      - 0
      - 1
    matchArgs:
    - index: 0
      operator: "Prefix"
      values:
      - "/etc/passwd"
    matchActions:
    - action: Sigkill

Signal action

The Signal action sends the specified signal to the current process. The signal number is specified with the argSig value.

The following example is equivalent to the Sigkill action example above. The difference is that it uses the Signal action with the SIGKILL(9) signal.

- call: "sys_write"
  syscall: true
  args:
  - index: 0
    type: "fd"
  - index: 1
    type: "char_buf"
    sizeArgIndex: 3
  - index: 2
    type: "size_t"
  selectors:
  - matchPIDs:
    - operator: NotIn
      followForks: true
      isNamespacePID: true
      values:
      - 0
      - 1
    matchArgs:
    - index: 0
      operator: "Prefix"
      values:
      - "/etc/passwd"
    matchActions:
    - action: Signal
      argSig: 9

Override action

The Override action allows you to modify the return value of a call. While Sigkill will terminate the entire process responsible for making the call, Override will run in place of the original kprobed function and return the value specified in the argError field. It’s then up to the code path or the user-space process handling the returned value to decide whether to stop or proceed with execution.

For example, you can create a TracingPolicy that intercepts sys_symlinkat and will make it return -1 every time the first argument is equal to the string /etc/passwd:

kprobes:
- call: "sys_symlinkat"
  syscall: true
  args:
  - index: 0
    type: "string"
  - index: 1
    type: "int"
  - index: 2
    type: "string"
  selectors:
  - matchArgs:
    - index: 0
      operator: "Equal"
      values:
      - "/etc/passwd\0"
    matchActions:
    - action: Override
      argError: -1

FollowFD action

The FollowFD action allows you to create a mapping, using a BPF map, between file descriptors and filenames. After its creation, the mapping can be maintained through the UnfollowFD and CopyFD actions. Note that proper maintenance of the mapping is up to the tracing policy writer.

FollowFD is typically used at hook points where a file descriptor and its associated filename appear together. The kernel function fd_install is a good example.

The fd_install kernel function is called each time a file descriptor must be installed into the file descriptor table of a process, typically referenced within system calls like open or openat. It is a good place for tracking file descriptor and filename matching.

Let’s take a look at the following example:

- call: "fd_install"
  syscall: false
  args:
  - index: 0
    type: int
  - index: 1
    type: "file"
  selectors:
  - matchPIDs:
      # [...]
    matchArgs:
      # [...]
    matchActions:
    - action: FollowFD
      argFd: 0
      argName: 1

This action uses the dedicated argFd and argName fields to get respectively the index of the file descriptor argument and the index of the name argument in the call.

While the mapping between the file descriptor and filename remains in place (that is, between FollowFD and UnfollowFD for the same file descriptor) tracing policies may refer to filenames instead of file descriptors. This offers greater convenience and allows more functionality to reside inside the kernel, thereby reducing overhead.

For instance, assume that you want to prevent writes into file /etc/passwd. The system call sys_write only receives a file descriptor, not a filename, as argument. Yet with a bracketing pair of FollowFD and UnfollowFD actions in place the tracing policy that hooks into sys_write can nevertheless refer to the filename /etc/passwd, if it also marks the relevant argument as of type fd.

The following example combines actions FollowFD and UnfollowFD as well as an argument of type fd to such effect:

kprobes:
- call: "fd_install"
  syscall: false
  args:
  - index: 0
    type: int
  - index: 1
    type: "file"
  selectors:
  - matchArgs:
    - index: 1
      operator: "Equal"
      values:
      - "/tmp/passwd"
    matchActions:
    - action: FollowFD
      argFd: 0
      argName: 1
- call: "sys_write"
  syscall: true
  args:
  - index: 0
    type: "fd"
  - index: 1
    type: "char_buf"
    sizeArgIndex: 3
  - index: 2
    type: "size_t"
  selectors:
  - matchArgs:
    - index: 0
      operator: "Equal"
      values:
      - "/tmp/passwd"
    matchActions:
    - action: Sigkill
- call: "sys_close"
  syscall: true
  args:
  - index: 0
     type: "int"
  selectors:
  - matchActions:
    - action: UnfollowFD
      argFd: 0
      argName: 0

UnfollowFD action

The UnfollowFD action takes a file descriptor from a system call and deletes the corresponding entry from the BPF map, where it was put under the FollowFD action. It is typically used at hook points where the scope of association between a file descriptor and a filename ends. The system call sys_close is a good example.

Let’s take a look at the following example:

- call: "sys_close"
  syscall: true
  args:
  - index: 0
    type: "int"
  selectors:
  - matchPIDs:
    - operator: NotIn
      followForks: true
      isNamespacePID: true
      values:
      - 0
      - 1
    matchActions:
    - action: UnfollowFD
      argFd: 0

Similar to the FollowFD action, the index of the file descriptor is described under argFd:

matchActions:
- action: UnfollowFD
  argFd: 0

In this example, argFd is 0. So, the argument from the sys_close system call at index: 0 will be deleted from the BPF map whenever a sys_close is executed.

- index: 0
  type: "int"

CopyFD action

The CopyFD action is specific to file descriptor duplication use cases. Similarly to FollowFD, it takes argFd and argName arguments. It can typically be used to track the dup, dup2, or dup3 syscalls.

See the following example for illustration:

- call: "sys_dup2"
  syscall: true
  args:
  - index: 0
    type: "fd"
  - index: 1
    type: "int"
  selectors:
  - matchPIDs:
    # [...]
    matchActions:
    - action: CopyFD
      argFd: 0
      argName: 1
- call: "sys_dup3"
  syscall: true
  args:
  - index: 0
    type: "fd"
  - index: 1
    type: "int"
  - index: 2
    type: "int"
  selectors:
  - matchPIDs:
    # [...]
    matchActions:
    - action: CopyFD
      argFd: 0
      argName: 1

GetUrl action

The GetUrl action can be used to perform a remote interaction such as triggering Thinkst canaries or any system that can be triggered via a URL request. It uses the argUrl field to specify the URL to request using the GET method.

matchActions:
- action: GetUrl
  argUrl: http://ebpf.io

DnsLookup action

The DnsLookup action can be used to perform a remote interaction such as triggering Thinkst canaries or any system that can be triggered via a DNS lookup. It uses the argFqdn field to specify the domain to look up.

matchActions:
- action: DnsLookup
  argFqdn: ebpf.io

Post action

The Post action allows an event to be transmitted to the agent, from kernel space to user space. By default, every TracingPolicy hook creates an event with the Post action, except in these situations:

  • a NoPost action was specified in a matchActions;
  • a rate-limiting parameter is in place, see details below.

This action allows you to specify parameters for the Post action.

Rate limiting

Post takes the rateLimit parameter with a time value. This value defaults to seconds, but suffixing ‘m’ or ‘h’ will cause the value to be interpreted in minutes or hours. When this parameter is specified for an action, that action will check whether the same action has fired, for the same thread, within the time window, with the same inspected arguments. (Only the first 40 bytes of each inspected argument are used in the matching. Only supported on kernels v5.3 onwards.)

For example, you can specify a selector to only generate an event every 5 minutes by adding the following action and its parameter:

matchActions:
- action: Post
  rateLimit: 5m

By default, the rate limiting is applied per thread, meaning that only repeated actions by the same thread will be rate limited. This can be expanded to all threads for a process by specifying a rateLimitScope with value “process”; or can be expanded to all processes by specifying the same with the value “global”.
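
For instance, a sketch of widening the rate limit to the whole process could look like the following (the 5m window is reused from the example above):

matchActions:
- action: Post
  rateLimit: 5m
  rateLimitScope: process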

Stack traces

Post takes the kernelStackTrace parameter; when set to true (the default is false), it enables dumping the kernel stack trace up to the hook point in kprobe events. To dump the user-space stack trace, set the userStackTrace parameter to true. For example, the following kprobe hook can be used to retrieve the kernel stack up to kfree_skb_reason, the function called in the kernel to drop kernel socket buffers.

kprobes:
  - call: kfree_skb_reason
    selectors:
    - matchActions:
      - action: Post
        kernelStackTrace: true
        userStackTrace: true

Once loaded, events created from this policy will contain a new kernel_stack_trace field on the process_kprobe event with an output similar to:

{
  "address": "18446744072119856613",
  "offset": "5",
  "symbol": "kfree_skb_reason"
},
{
  "address": "18446744072119769755",
  "offset": "107",
  "symbol": "__sys_connect_file"
},
{
  "address": "18446744072119769989",
  "offset": "181",
  "symbol": "__sys_connect"
},
[...]

The “address” is the kernel function address, “offset” is the instruction offset within the function, and “symbol” is the function symbol name.

User mode stack trace is contained in user_stack_trace field on the process_kprobe event and looks like:

{
  "address": "140498967885099",
  "offset": "1209643",
  "symbol": "__connect",
  "module": "/usr/lib/x86_64-linux-gnu/libc.so.6"
},
{
  "address": "140498968021470",
  "offset": "1346014",
  "symbol": "inet_pton",
  "module": "/usr/lib/x86_64-linux-gnu/libc.so.6"
},
{
  "address": "140498971185511",
  "offset": "106855",
  "module": "/usr/lib/x86_64-linux-gnu/libcurl.so.4.7.0"
},

The “address” is the function address, “offset” is the function offset from the beginning of the binary module. “module” is the absolute path of the binary file to which address belongs. “symbol” is the function symbol name. “symbol” may be missing if the binary file is stripped.

This output can be rendered in a more human-friendly format using the tetra getevents -o compact command. Indeed, by default, it will print the stack trace along with the compact output of the event, similarly to this:

❓ syscall /usr/bin/curl kfree_skb_reason
Kernel:
   0xffffffffa13f2de5: kfree_skb_reason+0x5
   0xffffffffa13dda9b: __sys_connect_file+0x6b
   0xffffffffa13ddb85: __sys_connect+0xb5
   0xffffffffa13ddbd8: __x64_sys_connect+0x18
   0xffffffffa1714bd8: do_syscall_64+0x58
   0xffffffffa18000e6: entry_SYSCALL_64_after_hwframe+0x6e
User space:
   0x7f878cf2752b: __connect (/usr/lib/x86_64-linux-gnu/libc.so.6+0x12752b)
   0x7f878cf489de: inet_pton (/usr/lib/x86_64-linux-gnu/libc.so.6+0x1489de)
   0x7f878d1b6167:  (/usr/lib/x86_64-linux-gnu/libcurl.so.4.7.0+0x1a167)

The printing format for kernel stack trace is "0x%x: %s+0x%x", address, symbol, offset. The printing format for user stack trace is "0x%x: %s (%s+0x%x)", address, symbol, module, offset.

NoPost action

The NoPost action can be used to suppress generation of the event while still performing all the other defined actions.

It’s useful when you are not interested in the event itself, just in the action being performed.

The following example overrides the openat syscall for the “/etc/passwd” file but does not generate any event about it.

- call: "sys_openat"
  return: true
  syscall: true
  args:
  - index: 0
    type: int
  - index: 1
    type: "string"
  - index: 2
    type: "int"
  returnArg:
    type: "int"
  selectors:
  - matchPIDs:
    matchArgs:
    - index: 1
      operator: "Equal"
      values:
      - "/etc/passwd"
    matchActions:
    - action: Override
      argError: -2
    - action: NoPost

TrackSock action

The TrackSock action allows you to create a mapping, using a BPF map, between sockets and processes. The state however needs to be maintained correctly; see the related UntrackSock action. TrackSock works similarly to FollowFD, specifying the argument with the sock type using argSock instead of specifying the FD argument with argFd.
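
As a minimal sketch, assuming the hooked function has a sock-typed argument at index 0, the action could look like:

matchActions:
- action: TrackSock
  argSock: 0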

It is however more likely that socket tracking will be performed on the return value of sk_alloc as described above.

Socket tracking is only available on kernel >=5.3.

UntrackSock action

The UntrackSock action takes a struct sock pointer from a function call and deletes the corresponding entry from the BPF map, where it was put under the TrackSock action.

Let’s take a look at the following example:

- call: "__sk_free"
  syscall: false
  args:
    - index: 0
      type: sock
  selectors:
    - matchActions:
      - action: UntrackSock
        argSock: 0

Similar to the TrackSock action, the index of the sock is described under argSock:

- matchActions:
  - action: UntrackSock
    argSock: 0

In this example, argSock is 0. So, the argument from the __sk_free function call at index: 0 will be deleted from the BPF map whenever a __sk_free is executed.

- index: 0
  type: "sock"

Socket tracking is only available on kernel >=5.3.

Notify Killer action

The NotifyKiller action notifies the killer program to kill or override a syscall.

It’s meant to be used on systems with a kernel that lacks the multi-kprobe feature, which allows attaching many kprobes quickly. To work around that, the killer sensor uses the raw syscall tracepoint and attaches a simple program to the syscalls that we need to kill or override.

The spec needs to have a killer program definition that instructs Tetragon to load the killer program and attach it to the specified syscalls.

spec:
  killers:
  - calls:
    - "list:dups"

The calls field expects a list of syscalls or a list:XXX pointer to a list.

Note that currently only a single killer definition is allowed.

The NotifyKiller action takes 2 arguments.

matchActions:
- action: "NotifyKiller"
  argError: -1
  argSig: 9

If specified, the argError will be passed to the bpf_override_return helper to override the syscall return value. If specified, the argSig will be passed to the bpf_send_signal helper to send the signal to the current process.

The following is a spec for killing the /usr/bin/bash program whenever it calls the sys_dup or sys_dup2 syscalls.

spec:
  lists:
  - name: "dups"
    type: "syscalls"
    values:
    - "sys_dup"
    - "sys_dup2"
  killers:
  - calls:
    - "list:dups"
  tracepoints:
  - subsystem: "raw_syscalls"
    event: "sys_enter"
    args:
    - index: 4
      type: "syscall64"
    selectors:
    - matchArgs:
      - index: 0
        operator: "InMap"
        values:
        - "list:dups"
      matchBinaries:
      - operator: "In"
        values:
        - "/usr/bin/bash"
      matchActions:
      - action: "NotifyKiller"
        argSig: 9

Note that, as mentioned above, NotifyKiller with a killer program is meant to be used only on kernel versions with no support for fast attachment of multiple kprobes (the kprobe_multi link).

With kprobe_multi link support the above example can be easily replaced with:

spec:
  lists:
  - name: "syscalls"
    type: "syscalls"
    values:
    - "sys_dup"
    - "sys_dup2"
  kprobes:
  - call: "list:syscalls"
    selectors:
    - matchBinaries:
      - operator: "In"
        values:
        - "/usr/bin/bash"
      matchActions:
      - action: "Sigkill"

Selector Semantics

The selector semantics of the TracingPolicy follow the standard Kubernetes semantics and the principles that are used by Cilium to create a unified policy definition.

To explain deeper the structure and the logic behind it, let’s consider first the following example:

selectors:
 - matchPIDs:
   - operator: In
     followForks: true
     values:
     - pid1
     - pid2
     - pid3
  matchArgs:
  - index: 0
    operator: "Equal"
    values:
    - fdString1

In the YAML above, matchPIDs and matchArgs are logically ANDd together, giving the expression:

(pid in {pid1, pid2, pid3} AND arg0=fdstring1)

Multiple values

When multiple values are given, we apply the OR operation between them. In case of having multiple values under the matchPIDs selector, if any value matches with the given pid from pid1, pid2 or pid3 then we accept the event:

pid==pid1 OR pid==pid2 OR pid==pid3

As an example, we can filter for sys_read() syscalls that were not part of the container initialization and the main pod process and tried to read from the /etc/passwd file by using:

selectors:
 - matchPIDs:
   - operator: NotIn
     followForks: true
     values:
     - 0
     - 1
  matchArgs:
  - index: 0
    operator: "Equal"
    values:
    - "/etc/passwd"

Similarly, we can use multiple values under the matchArgs selector:

(pid in {pid1, pid2, pid3} AND arg0={fdstring1, fdstring2})

If any value matches with fdstring1 or fdstring2, specifically (string==fdstring1 OR string==fdstring2) then we accept the event.

For example, we can monitor sys_read() syscalls accessing both the /etc/passwd or the /etc/shadow files:

selectors:
 - matchPIDs:
   - operator: NotIn
     followForks: true
     values:
     - 0
     - 1
  matchArgs:
  - index: 0
    operator: "Equal"
    values:
    - "/etc/passwd"
    - "/etc/shadow"

Multiple operators

When multiple operators are specified under matchPIDs or matchArgs, they are logically ANDd together. For example, if we have multiple operators under matchPIDs:

selectors:
  - matchPIDs:
    - operator: In
      followForks: true
      values:
      - pid1
    - operator: NotIn
      followForks: true
      values:
      - pid2

then we would build the following expression on the BPF side:

(pid == pid1 [following forks]) && (pid != pid2 [following forks])

In case of having multiple matchArgs:

selectors:
 - matchPIDs:
   - operator: In
     followForks: true
     values:
     - pid1
     - pid2
     - pid3
  matchArgs:
  - index: 0
    operator: "Equal"
    values:
    - 1
  - index: 2
    operator: "lt"
    values:
    - 500

Then we would build the following expression on the BPF side

(pid in {pid1, pid2, pid3} AND arg0=1 AND arg2 < 500)

Operator types

There are different types supported for each operator. In case of matchArgs:

  • Equal
  • NotEqual
  • Prefix
  • Postfix
  • Mask
  • GreaterThan (aka GT)
  • LessThan (aka LT)
  • SPort - Source Port
  • NotSPort - Not Source Port
  • SPortPriv - Source Port is Privileged (0-1023)
  • NotSPortPriv - Source Port is Not Privileged (Not 0-1023)
  • DPort - Destination Port
  • NotDPort - Not Destination Port
  • DPortPriv - Destination Port is Privileged (0-1023)
  • NotDPortPriv - Destination Port is Not Privileged (Not 0-1023)
  • SAddr - Source Address, can be IPv4/6 address or IPv4/6 CIDR (for ex 1.2.3.4/24 or 2a1:56::1/128)
  • NotSAddr - Not Source Address
  • DAddr - Destination Address
  • NotDAddr - Not Destination Address
  • Protocol
  • Family
  • State

The operator types Equal and NotEqual are used to test whether a certain argument of a system call is equal to the defined value in the CR.

For example, the following YAML snippet matches if the argument at index 0 is equal to /etc/passwd:

      matchArgs:
      - index: 0
        operator: "Equal"
        values:
        - "/etc/passwd"

Both Equal and NotEqual are set operations. This means if multiple values are specified, they are ORd together in case of Equal, and ANDd together in case of NotEqual.

For example, in case of Equal the following YAML snippet matches if the argument at index 0 is in the set of {arg0, arg1, arg2}.

matchArgs:
- index: 0
  operator: "Equal"
  values:
  - "arg0"
  - "arg1"
  - "arg2"

The above would be executed in the kernel as

arg == arg0 OR arg == arg1 OR arg == arg2

In case of NotEqual the following YAML snippet matches if the argument at index 0 is not in the set of {arg0, arg1}.

matchArgs:
- index: 0
  operator: "NotEqual"
  values:
  - "arg0"
  - "arg1"

The above would be executed in the kernel as

arg != arg0 AND arg != arg1

The operator type Mask performs a bitwise AND operation on the argument value and the defined values. The argument type needs to be one of the value types.

For example, in the following YAML snippet we match the second argument against the bit values 1 and 0x200 (bit 9). We could use the single value 0x201 as well.

matchArgs:
- index: 2
  operator: "Mask"
  values:
  - 1
  - 0x200

The above would be executed in the kernel as

arg & 1 OR arg & 0x200

The value can be specified as hexadecimal (with 0x prefix) octal (with 0 prefix) or decimal value (no prefix).

The Prefix operator checks whether the argument starts with the defined value, while the Postfix operator checks whether the argument ends with the defined value.
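
For illustration, a minimal matchArgs sketch using Postfix (the index and the trailing value are hypothetical) could match any file path ending in .bash_history:

matchArgs:
- index: 1
  operator: "Postfix"
  values:
  - ".bash_history"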

The operators relating to ports, addresses and protocol are used with sock or skb types. Port operators can accept a range of ports specified as min:max as well as lists of individual ports. Address operators can accept IPv4/6 CIDR ranges as well as lists of individual addresses.

The Protocol operator can accept integer values to match against, or the equivalent IPPROTO_ enumeration. For example, UDP can be specified as either IPPROTO_UDP or 17; TCP can be specified as either IPPROTO_TCP or 6.

The Family operator can accept integer values to match against or the equivalent AF_ enumeration. For example, IPv4 can be specified as either AF_INET or 2; IPv6 can be specified as either AF_INET6 or 10.

The State operator can accept integer values to match against or the equivalent TCP_ enumeration. For example, an established socket can be matched with TCP_ESTABLISHED or 1; a closed socket with TCP_CLOSE or 7.
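
As an illustration, the following matchArgs sketch combines several of these operators, assuming a kprobe such as tcp_connect with a sock argument at index 0; the ports and CIDR are example values only:

matchArgs:
- index: 0
  operator: "DPort"
  values:
  - "443"
  - "8000:8080"
- index: 0
  operator: "NotDAddr"
  values:
  - "10.0.0.0/8"
- index: 0
  operator: "Protocol"
  values:
  - "IPPROTO_TCP"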

In case of matchPIDs:

  • In
  • NotIn

The operator types In and NotIn are used to test whether the pid of a system call is found in the provided values list in the CR. Both In and NotIn are set operations, which means if multiple values are specified they are ORd together in case of In and ANDd together in case of NotIn.

For example, in case of In the following YAML snippet matches if the pid of a certain system call is part of the set {0, 1}:

- matchPIDs:
  - operator: In
    followForks: true
    isNamespacePID: true
    values:
    - 0
    - 1

The above would be executed in the kernel as

pid == 0 OR pid == 1

In case of NotIn the following YAML snippet matches if the pid of a certain system call is not part of the set {0, 1}:

- matchPIDs:
  - operator: NotIn
    followForks: true
    isNamespacePID: true
    values:
    - 0
    - 1

The above would be executed in the kernel as

pid != 0 AND pid != 1

In case of matchBinaries:

  • In

The In operator type is used to test whether the binary name of a system call is found in the provided values list. For example, the following YAML snippet matches if the binary name of a certain system call is part of the set {binary0, binary1, binary2}:

- matchBinaries:
  - operator: "In"
    values:
    - "binary0"
    - "binary1"
    - "binary2"

Multiple selectors

When multiple selectors are configured they are logically ORd together.

selectors:
 - matchPIDs:
   - operator: In
     followForks: true
     values:
     - pid1
     - pid2
     - pid3
  matchArgs:
  - index: 0
    operator: "Equal"
    values:
    - 1
  - index: 2
    operator: "lt"
    values:
    -  500
 - matchPIDs:
   - operator: In
     followForks: true
     values:
     - pid1
     - pid2
     - pid3
  matchArgs:
  - index: 0
    operator: "Equal"
    values:
    - 2

The above would be executed in kernel as:

(pid in {pid1, pid2, pid3} AND arg0=1 AND arg2 < 500) OR
(pid in {pid1, pid2, pid3} AND arg0=2)

Limitations

These limitations might be outdated; see issue #709.

Because BPF programs must be bounded, we have to place limits on how many selectors can exist.

  • Max Selectors: 8
  • Max PID values per selector: 4
  • Max MatchArgs per selector: 5 (one per index)
  • Max MatchArg values per MatchArgs: 1 (a limit of the initial implementation; this can be bumped to 16 or so)

Return Actions filter

Return actions filters are a list of actions that execute when a return selector matches. They are defined under matchReturnActions and currently support all of the Actions filter action types.
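
For example, a selector sketch like the following (assuming a kprobe that declares an integer returnArg, referenced here as return argument index 0) posts an event only when the hooked function returns a non-zero value:

selectors:
- matchReturnArgs:
  - index: 0
    operator: "NotEqual"
    values:
    - 0
  matchReturnActions:
  - action: Post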

4.3.5 - Tags

Use Tags to categorize events

Tags are optional fields of a Tracing Policy that are used to categorize generated events.

Introduction

Tags are specified in Tracing policies and will be part of the generated event.

apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "file-monitoring-filtered"
spec:
  kprobes:
  - call: "security_file_permission"
    message: "Sensitive file system write operation"
    syscall: false
    args:
    - index: 0
      type: "file" # (struct file *) used for getting the path
    - index: 1
      type: "int" # 0x04 is MAY_READ, 0x02 is MAY_WRITE
    selectors:
    - matchArgs:
      - index: 0
        operator: "Prefix"
        values:
        - "/etc"              # Writes to sensitive directories
        - "/boot"
        - "/lib"
        - "/lib64"
        - "/bin"
        - "/usr/lib"
        - "/usr/local/lib"
        - "/usr/local/sbin"
        - "/usr/local/bin"
        - "/usr/bin"
        - "/usr/sbin"
        - "/var/log"          # Writes to logs
        - "/dev/log"
        - "/root/.ssh"        # Writes to sensitive files add here.
      - index: 1
        operator: "Equal"
        values:
        - "2" # MAY_WRITE
    tags: [ "observability.filesystem", "observability.process" ]

Every kprobe call can have at most 16 tags.

Namespaces

Observability namespace

Events in this namespace relate to collecting and exporting data about the internal system state.

  • “observability.filesystem”: the event is about file system operations.
  • “observability.privilege_escalation”: the event is about raising permissions of a user or a process.
  • “observability.process”: the event is about an instance of a Linux program being executed.

User defined Tags

Users can define their own tags inside Tracing Policies. The officially supported tags are documented in the Namespaces section.
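
For instance, a policy can mix an official tag with a custom one; the second tag below is a purely illustrative, user-defined name:

tags: [ "observability.filesystem", "myorg.sensitive-files" ]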

4.3.6 - K8s Policy Filtering

Tetragon in-kernel filtering based on Kubernetes namespaces, pod labels, and container fields

Motivation

Tetragon is configured via TracingPolicies. Broadly speaking, TracingPolicies define what situations Tetragon should react to and how. The what can be, for example, specific system calls with specific argument values. The how defines what action the Tetragon agent should perform when the specified situation occurs. The most common action is generating an event, but there are others (e.g., returning an error without executing the function, or killing the corresponding process).

Here we discuss how to apply tracing policies only on a subset of the pods running on the system, via the following mechanisms:

  • namespaced policies
  • pod-label filters
  • container field filters

Tetragon implements these mechanisms in-kernel via eBPF. This is important for both observability and enforcement use-cases. For observability, copying only the relevant events from kernel- to user-space reduces overhead. For enforcement, performing the enforcement action in the kernel avoids the race-condition of doing it in user-space. For example, let us consider the case where we want to block an application from performing a system call. Performing the filtering in-kernel means that the application will never finish executing the system call, which is not possible if enforcement happens in user-space (after the fact).

To ensure that namespaced tracing policies are always correctly applied, Tetragon needs to perform actions before containers start executing. Tetragon supports this via OCI runtime hooks. If such hooks are not added, Tetragon will apply policies in a best-effort manner using information from the k8s API server.

Namespace filtering

For namespace filtering we use TracingPolicyNamespaced which has the same contents as a TracingPolicy, but it is defined in a specific namespace and it is only applied to pods of that namespace.

Pod label filters

For pod label filters, we use the podSelector field of tracing policies to select the pods that the policy is applied to.

Container field filters

For container field filters, we use the containerSelector field of tracing policies to select the containers that the policy is applied to. At the moment, the only supported field is name.

Demo

Setup

For this demo, we use containerd and configure appropriate run-time hooks using minikube.

First, let us start minikube, build and load images, and install Tetragon and OCI hooks:

minikube start --container-runtime=containerd
./contrib/rthooks/minikube-containerd-install-hook.sh
make image image-operator
minikube image load --daemon=true cilium/tetragon:latest cilium/tetragon-operator:latest
minikube ssh -- sudo mount bpffs -t bpf /sys/fs/bpf
helm install --namespace kube-system \
	--set tetragonOperator.image.override=cilium/tetragon-operator:latest \
	--set tetragon.image.override=cilium/tetragon:latest  \
	--set tetragon.grpc.address="unix:///var/run/cilium/tetragon/tetragon.sock" \
	tetragon ./install/kubernetes/tetragon

Once the tetragon pod is up and running, we can get its name.

tetragon_pod=$(kubectl -n kube-system get pods -l app.kubernetes.io/name=tetragon -o custom-columns=NAME:.metadata.name --no-headers)

Next, we check the tetragon-operator logs and tetragon agent logs to ensure that everything is in order.

First, we check if the operator installed the TracingPolicyNamespaced CRD.

kubectl -n kube-system logs -c tetragon-operator $tetragon_pod

The expected output is:

level=info msg="Tetragon Operator: " subsys=tetragon-operator
level=info msg="CRD (CustomResourceDefinition) is installed and up-to-date" name=TracingPolicy/v1alpha1 subsys=k8s
level=info msg="Creating CRD (CustomResourceDefinition)..." name=TracingPolicyNamespaced/v1alpha1 subsys=k8s
level=info msg="CRD (CustomResourceDefinition) is installed and up-to-date" name=TracingPolicyNamespaced/v1alpha1 subsys=k8s
level=info msg="Initialization complete" subsys=tetragon-operator

Next, we check that policyfilter (the low level mechanism that implements the desired functionality) is indeed enabled.

kubectl -n kube-system logs -c tetragon $tetragon_pod

The output should include:

level=info msg="Enabling policy filtering"

Namespaced policies

For illustration purposes, we will use the lseek system call with an invalid argument: specifically, a file descriptor (the first argument) of -1. Normally, this operation returns a "Bad file descriptor" error.

Let us start a pod in the default namespace:

kubectl -n default run test --image=python -it --rm --restart=Never  -- python

The above command will result in the following Python shell:

If you don't see a command prompt, try pressing enter.
>>>

No policy is installed yet, so attempting the lseek operation will simply return an error. Using the Python shell, we can execute an lseek and see the returned error.

>>> import os
>>> os.lseek(-1,0,0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OSError: [Errno 9] Bad file descriptor
>>>

In another terminal, we install a policy in the default namespace:

cat << EOF | kubectl apply -n default -f -
apiVersion: cilium.io/v1alpha1
kind: TracingPolicyNamespaced
metadata:
  name: "lseek-namespaced"
spec:
  kprobes:
  - call: "sys_lseek"
    syscall: true
    args:
    - index: 0
      type: "int"
    selectors:
    - matchArgs:
      - index: 0
        operator: "Equal"
        values:
        - "-1"
      matchActions:
      - action: Sigkill
EOF

The above tracing policy will kill the process that performs an lseek system call with a file descriptor of -1. Note that we use a Sigkill action only for illustration purposes because its effects are easy to observe.

Then, attempting the lseek operation on the previous terminal, will result in the process getting killed:

>>> os.lseek(-1, 0, 0)
pod "test" deleted
pod default/test terminated (Error)

The same is true for a newly started container:

kubectl -n default run test --image=python -it --rm --restart=Never  -- python
If you don't see a command prompt, try pressing enter.
>>> import os
>>> os.lseek(-1, 0, 0)
pod "test" deleted
pod default/test terminated (Error)

Doing the same in another namespace:

kubectl create namespace test
kubectl -n test run test --image=python -it --rm --restart=Never  -- python

This will not kill the process and will instead result in an error:

If you don't see a command prompt, try pressing enter.
>>> import os
>>> os.lseek(-1, 0, 0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OSError: [Errno 9] Bad file descriptor

Pod label filters

Let’s install a tracing policy with a pod label filter.

cat << EOF | kubectl apply -f -
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "lseek-podfilter"
spec:
  podSelector:
    matchLabels:
      app: "lseek-test"
  kprobes:
  - call: "sys_lseek"
    syscall: true
    args:
    - index: 0
      type: "int"
    selectors:
    - matchArgs:
      - index: 0
        operator: "Equal"
        values:
        - "-1"
      matchActions:
      - action: Sigkill
EOF

Pods without the label will not be affected:

kubectl run test  --image=python -it --rm --restart=Never  -- python
If you don't see a command prompt, try pressing enter.
>>> import os
>>> os.lseek(-1, 0, 0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OSError: [Errno 9] Bad file descriptor
>>>

But pods with the label will:

kubectl run test --labels "app=lseek-test" --image=python -it --rm --restart=Never  -- python
If you don't see a command prompt, try pressing enter.
>>> import os
>>> os.lseek(-1, 0, 0)
pod "test" deleted
pod default/test terminated (Error)

Container field filters

Let’s install a tracing policy with a container field filter.

cat << EOF | kubectl apply -f -
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "lseek-podfilter"
spec:
  containerSelector:
    matchExpressions:
      - key: name
        operator: In
        values:
        - main
  kprobes:
  - call: "sys_lseek"
    syscall: true
    args:
    - index: 0
      type: "int"
    selectors:
    - matchArgs:
      - index: 0
        operator: "Equal"
        values:
        - "-1"
      matchActions:
      - action: Sigkill
EOF

Let’s create a pod with 2 containers:

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: lseek-pod
spec:
  containers:
  - name: main
    image: python
    command: ['sh', '-c', 'sleep infinity']
  - name: sidecar
    image: python
    command: ['sh', '-c', 'sleep infinity']
EOF

Containers that don’t match the name main will not be affected:

kubectl exec -it lseek-pod -c sidecar -- python3
>>> import os
>>> os.lseek(-1, 0, 0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OSError: [Errno 9] Bad file descriptor
>>>

But containers matching the name main will:

kubectl exec -it lseek-pod -c main -- python3
>>> import os
>>> os.lseek(-1, 0, 0)
command terminated with exit code 137

4.4 - Enforcement

Documentation for Tetragon enforcement system

Tetragon allows enforcing events in the kernel inline with the operation itself. This document describes the types of enforcement provided by Tetragon and the concerns policy implementors must be aware of.

There are two ways that Tetragon performs enforcement: overriding the return value of a function and sending a signal (e.g., SIGKILL) to the process.

Override return value

Overriding the return value of a call means that the function will never be executed and, instead, a value (typically an error) will be returned to the caller. Generally speaking, only system calls and security check functions allow their return value to be changed in this manner. Details about how users can configure tracing policies to override the return value can be found in the Override action documentation.
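
As a rough sketch, an Override action is attached to a selector via matchActions; the filter and error value below are illustrative only, and the hooked function must support error injection (a system call or a security check function) as described above:

selectors:
- matchArgs:
  - index: 0
    operator: "Equal"
    values:
    - "/etc/shadow"
  matchActions:
  - action: Override
    argError: -1 # return -EPERM to the caller (illustrative)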

Signals

Another type of enforcement is signals. For example, users can write a TracingPolicy (details can be found in the Signal action documentation) that sends a SIGKILL to a process matching certain criteria and thus terminate it.

In contrast with overriding the return value, sending a SIGKILL signal does not always stop the operation being performed by the process that triggered it. For example, a SIGKILL sent during a write() system call does not guarantee that the data will not be written to the file. However, it does ensure that the process is terminated synchronously (and any threads will be stopped). In some cases it may be sufficient to ensure the process is stopped and does not handle the return of the call. To ensure the operation itself is not completed, though, the Signal action should be combined with the Override action.
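
A hedged sketch of combining the two actions in a single selector (the signal number and error value are illustrative) could look like:

matchActions:
- action: Signal
  argSig: 9   # SIGKILL
- action: Override
  argError: -1 # return -EPERM so the operation itself fails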

5 - Policy Library

Library of Tetragon Policies

5.1 - Tetragon Observability Policies

Library of policies that implement Tetragon observability and runtime enforcement mechanisms.

Index

  • Security Sensitive Events
  • System Activity
  • Networking

Observability Policies

Binary Execution in /tmp

Description

Monitor execution of a binary in the /tmp directory.

Use Case

Preventing execution of executables in /tmp is a common best-practice, as several canned exploits rely on writing and then executing malicious binaries in the /tmp directory. A common way to enforce this is to mount the /tmp filesystem with the noexec flag. This observability policy is used to monitor for violations of this best practice.

Policy

No policy needs to be loaded, standard process execution observability is sufficient.

Example jq Filter

jq 'select(.process_exec != null) | select(.process_exec.process.binary | contains("/tmp/")) | "\(.time) \(.process_exec.process.pod.namespace) \(.process_exec.process.pod.name) \(.process_exec.process.binary) \(.process_exec.process.arguments)"'

Example Output

"2023-10-31T18:44:22.777962637Z default/xwing /tmp/nc ebpf.io 1234"

sudo Invocation Monitoring

Description

Monitor sudo invocations

Use Case

sudo is used to run executables with particular privileges. Creating an audit log of sudo invocations is a common best-practice.

Policy

No policy needs to be loaded, standard process execution observability is sufficient.

Example jq Filter

jq 'select(.process_exec != null) | select(.process_exec.process.binary | contains("sudo")) | "\(.time) \(.process_exec.process.pod.namespace) \(.process_exec.process.binary) \(.process_exec.process.arguments)"'

Example Output

"2023-10-31T19:03:35.273111185Z null /usr/bin/sudo -i"

Privileges Escalation via SUID Binary Execution

Description

Monitor execution of SUID “Set User ID” binaries.

Use Case

The “Set User Identity” and “Set Group Identity” bits are permission flags. When set on a binary file, the binary will execute with the permissions of the owner or group associated with the executable file, rather than those of the user executing it. Usually they are used to run programs with elevated privileges to perform specific tasks.

Detecting the execution of setuid and setgid binaries is a common best-practice as attackers may abuse such binaries, or even create them during an exploit for subsequent execution.

Requirement

Tetragon must run with the Process Credentials visibility enabled, please refer to Enable Process Credentials documentation.

Policy

No policy needs to be loaded, standard process execution observability is sufficient.

Example jq Filter

jq 'select(.process_exec != null) | select(.process_exec.process.binary_properties != null) | select(.process_exec.process.binary_properties.setuid != null or .process_exec.process.binary_properties.setgid != null) | "\(.time) \(.process_exec.process.pod.namespace) \(.process_exec.process.pod.name) \(.process_exec.process.binary) \(.process_exec.process.arguments) uid=\(.process_exec.process.process_credentials.uid) euid=\(.process_exec.process.process_credentials.euid)  gid=\(.process_exec.process.process_credentials.gid) egid=\(.process_exec.process.process_credentials.egid) binary_properties=\(.process_exec.process.binary_properties)"'

Example Output

"2024-02-05T20:20:50.828208246Z null null /usr/bin/sudo id uid=1000 euid=0  gid=1000 egid=1000 binary_properties={\"setuid\":0,\"privileges_changed\":[\"PRIVILEGES_RAISED_EXEC_FILE_SETUID\"]}"
"2024-02-05T20:20:57.008655978Z null null /usr/bin/wall hello uid=1000 euid=1000  gid=1000 egid=5 binary_properties={\"setgid\":5}"
"2024-02-05T20:21:00.116297664Z null null /usr/bin/su --help uid=1000 euid=0  gid=1000 egid=1000 binary_properties={\"setuid\":0,\"privileges_changed\":[\"PRIVILEGES_RAISED_EXEC_FILE_SETUID\"]}"

Privileges Escalation via File Capabilities Execution

Description

Monitor execution of binaries with file capabilities.

Use Case

File capabilities allow a binary, when executed, to acquire additional privileges to perform specific tasks. They are stored in the extended attributes of the binary on the file system. They can be set using the setcap tool.

For further reference, please check the capabilities man page, section “File capabilities”.

Detecting the execution of binaries with file capabilities is a common best-practice, since they cross privilege boundaries, which makes them a suitable target for attackers. Such binaries are also used by attackers to hide their privileges after a successful exploit.

Requirement

Tetragon must run with the Process Credentials visibility enabled, please refer to Enable Process Credentials documentation.

Policy

No policy needs to be loaded, standard process execution observability is sufficient.

Example jq Filter

jq 'select(.process_exec != null) | select(.process_exec.process.binary_properties != null) | select(.process_exec.process.binary_properties.privileges_changed != null) | "\(.time) \(.process_exec.process.pod.namespace) \(.process_exec.process.pod.name) \(.process_exec.process.binary) \(.process_exec.process.arguments) uid=\(.process_exec.process.process_credentials.uid) euid=\(.process_exec.process.process_credentials.euid)  gid=\(.process_exec.process.process_credentials.gid) egid=\(.process_exec.process.process_credentials.egid) caps=\(.process_exec.process.cap) binary_properties=\(.process_exec.process.binary_properties)"'

Example Output

"2024-02-05T20:49:39.551528684Z null null /usr/bin/ping ebpf.io uid=1000 euid=1000  gid=1000 egid=1000 caps={\"permitted\":[\"CAP_NET_RAW\"],\"effective\":[\"CAP_NET_RAW\"]} binary_properties={\"privileges_changed\":[\"PRIVILEGES_RAISED_EXEC_FILE_CAP\"]}"

Privileges Escalation via Setuid System Calls

Description

Monitor execution of the setuid() system calls family.

Use Case

The setuid() and setgid() system call families allow changing the effective user ID and group ID of the calling process.

Detecting setuid() and setgid() calls that set the user ID or group ID to root is a common best-practice to identify when privileges are raised.

Policy

The privileges-raise.yaml monitors the various interfaces of setuid() and setgid() to root.

Example jq Filter

jq 'select(.process_kprobe != null) | select(.process_kprobe.policy_name | test("privileges-raise")) | select(.process_kprobe.function_name | test("__sys_")) | "\(.time) \(.process_kprobe.process.pod.namespace) \(.process_kprobe.process.pod.name) \(.process_kprobe.process.binary) \(.process_kprobe.process.arguments) \(.process_kprobe.function_name) \(.process_kprobe.args)"'

Example Output

"2024-02-05T15:23:24.734543507Z null null /usr/local/sbin/runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/024daa4cc70eb683355f6f67beda3012c65d64f479d958e421cd209738a75392/log.json --log-format json --systemd-cgroup exec --process /tmp/runc-process2191655094 --detach --pid-file /run/containerd/io.containerd.runtime.v2.task/k8s.io/024daa4cc70eb683355f6f67beda3012c65d64f479d958e421cd209738a75392/2d56fcb136b07310685c9e188be7a49d32dc0e45a10d3fe14bc550e6ce2aa5cb.pid 024daa4cc70eb683355f6f67beda3012c65d64f479d958e421cd209738a75392 __sys_setuid [{\"int_arg\":0}]"
"2024-02-05T15:23:24.734550826Z null null /usr/local/sbin/runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/024daa4cc70eb683355f6f67beda3012c65d64f479d958e421cd209738a75392/log.json --log-format json --systemd-cgroup exec --process /tmp/runc-process2191655094 --detach --pid-file /run/containerd/io.containerd.runtime.v2.task/k8s.io/024daa4cc70eb683355f6f67beda3012c65d64f479d958e421cd209738a75392/2d56fcb136b07310685c9e188be7a49d32dc0e45a10d3fe14bc550e6ce2aa5cb.pid 024daa4cc70eb683355f6f67beda3012c65d64f479d958e421cd209738a75392 __sys_setgid [{\"int_arg\":0}]"
"2024-02-05T15:23:28.731719921Z null null /usr/bin/sudo id __sys_setresgid [{\"int_arg\":-1},{\"int_arg\":0},{\"int_arg\":-1}]"
"2024-02-05T15:23:28.731752014Z null null /usr/bin/sudo id __sys_setresuid [{\"int_arg\":-1},{\"int_arg\":0},{\"int_arg\":-1}]"
"2024-02-05T15:23:30.803946368Z null null /usr/bin/sudo id __sys_setgid [{\"int_arg\":0}]"
"2024-02-05T15:23:30.805118893Z null null /usr/bin/sudo id __sys_setresuid [{\"int_arg\":0},{\"int_arg\":0},{\"int_arg\":0}]"

Privileges Escalation via Unprivileged User Namespaces

Description

Monitor creation of user namespaces by unprivileged processes.

Use Case

User namespaces isolate security-related identifiers like user IDs, group IDs, credentials and capabilities. A process can have a normal unprivileged user ID outside a user namespace while at the same time having a privileged user ID of 0 (root) inside its own user namespace.

When an unprivileged process creates a new user namespace, besides gaining a privileged user ID it also receives the full set of capabilities. User namespaces are a feature intended to replace setuid and setgid binaries and to allow applications to create sandboxes. However, they expose a lot of kernel interfaces that are normally restricted to privileged (root) processes. Such interfaces may increase the attack surface and be abused by attackers to perform privilege escalation exploits.

Unfortunately, a report from Google shows that 44% of the exploits required unprivileged user namespaces to perform chained privilege escalation. Therefore, detecting the creation of user namespaces by unprivileged processes is a common best-practice to identify such cases.

Policy

The privileges-raise.yaml monitors the creation of user namespaces by unprivileged processes.

Example jq Filter

jq 'select(.process_kprobe != null) | select(.process_kprobe.policy_name | test("privileges-raise")) | select(.process_kprobe.function_name | test("create_user_ns")) | "\(.time) \(.process_kprobe.process.pod.namespace) \(.process_kprobe.process.pod.name) \(.process_kprobe.process.binary) \(.process_kprobe.process.arguments) \(.process_kprobe.function_name) \(.process_kprobe.process.process_credentials)"'

Example Output

"2024-02-05T22:08:15.033035972Z null null /usr/bin/unshare -rUfp create_user_ns {\"uid\":1000,\"gid\":1000,\"euid\":1000,\"egid\":1000,\"suid\":1000,\"sgid\":1000,\"fsuid\":1000,\"fsgid\":1000}"

Privileges Change via Capset System Call

Description

Monitor execution of the capset() system call.

Use Case

The capset() system call allows changing the process capabilities.

Detecting capset() calls that set the effective, inheritable and permitted capabilities to non-zero values is a common best-practice to identify processes that could raise their privileges.

Policy

The privileges-raise.yaml monitors capset() calls that do not drop capabilities.

Example jq Filter

jq 'select(.process_kprobe != null) | select(.process_kprobe.policy_name | test("privileges-raise")) | select(.process_kprobe.function_name | test("capset")) | "\(.time) \(.process_kprobe.process.pod.namespace) \(.process_kprobe.process.pod.name) \(.process_kprobe.process.binary) \(.process_kprobe.process.arguments) \(.process_kprobe.function_name) \(.process_kprobe.args[3]) \(.process_kprobe.args[1])"'

Example Output

"2024-02-05T21:12:03.579600653Z null null /usr/bin/sudo id security_capset {\"cap_permitted_arg\":\"000001ffffffffff\"} {\"cap_effective_arg\":\"000001ffffffffff\"}"
"2024-02-05T21:12:04.754115578Z null null /usr/local/sbin/runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/024daa4cc70eb683355f6f67beda3012c65d64f479d958e421cd209738a75392/log.json --log-format json --systemd-cgroup exec --process /tmp/runc-process2431693392 --detach --pid-file /run/containerd/io.containerd.runtime.v2.task/k8s.io/024daa4cc70eb683355f6f67beda3012c65d64f479d958e421cd209738a75392/9403a57a3061274de26cad41915bad5416d4d484c9e142b22193b74e19a252c5.pid 024daa4cc70eb683355f6f67beda3012c65d64f479d958e421cd209738a75392 security_capset {\"cap_permitted_arg\":\"000001ffffffffff\"} {\"cap_effective_arg\":\"000001ffffffffff\"}"
"2024-02-05T21:12:12.836813445Z null null /usr/bin/runc --root /var/run/docker/runtime-runc/moby --log /run/containerd/io.containerd.runtime.v2.task/moby/c7bc6bf80f07bf6475e507f735866186650137bca2be796d6a39e22b747b97e9/log.json --log-format json --systemd-cgroup create --bundle /run/containerd/io.containerd.runtime.v2.task/moby/c7bc6bf80f07bf6475e507f735866186650137bca2be796d6a39e22b747b97e9 --pid-file /run/containerd/io.containerd.runtime.v2.task/moby/c7bc6bf80f07bf6475e507f735866186650137bca2be796d6a39e22b747b97e9/init.pid c7bc6bf80f07bf6475e507f735866186650137bca2be796d6a39e22b747b97e9 security_capset {\"cap_permitted_arg\":\"00000000a80425fb\"} {\"cap_effective_arg\":\"00000000a80425fb\"}"
"2024-02-05T21:12:14.774175889Z null null /usr/local/sbin/runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/024daa4cc70eb683355f6f67beda3012c65d64f479d958e421cd209738a75392/log.json --log-format json --systemd-cgroup exec --process /tmp/runc-process2888400204 --detach --pid-file /run/containerd/io.containerd.runtime.v2.task/k8s.io/024daa4cc70eb683355f6f67beda3012c65d64f479d958e421cd209738a75392/d8b8598320fe3d874b901c70863f36233760b3e63650a2474f707cc51b4340f9.pid 024daa4cc70eb683355f6f67beda3012c65d64f479d958e421cd209738a75392 security_capset {\"cap_permitted_arg\":\"000001ffffffffff\"} {\"cap_effective_arg\":\"000001ffffffffff\"}"

Fileless Execution

Description

Monitor the execution of binaries that exist exclusively as a computer memory-based artifact.

Use Case

Often attackers execute fileless binaries that reside only in memory rather than on the file system in order to cover their traces. On Linux this is possible with the help of memfd_create() and shared memory anonymous files. Therefore, detecting execution of such binaries, which live only in RAM or are backed by volatile storage, is a common best-practice.

Requirement

Tetragon must run with the Process Credentials visibility enabled, please refer to Enable Process Credentials documentation.

Policy

No policy needs to be loaded, standard process execution observability is sufficient.

Demo to reproduce

You can use the exec-memfd.py python script as an example; it copies the binary /bin/true into anonymous memory and then executes it. The binary will not be linked on the file system.

Example jq Filter

jq 'select(.process_exec != null) | select(.process_exec.process.binary_properties != null) | select(.process_exec.process.binary_properties.file) | "\(.time) \(.process_exec.process.pod.namespace) \(.process_exec.process.pod.name) \(.process_exec.process.binary) \(.process_exec.process.arguments) uid=\(.process_exec.process.process_credentials.uid) euid=\(.process_exec.process.process_credentials.euid) binary_properties=\(.process_exec.process.binary_properties)"'

Example Output

"2024-02-14T15:17:48.758997159Z null null /proc/self/fd/3 null uid=1000 euid=1000 binary_properties={\"file\":{\"inode\":{\"number\":\"45021\",\"links\":0}}}"

The output shows that the executed binary refers to the file descriptor /proc/self/fd/3, which is not linked on the file system. The binary_properties field includes an inode with zero links on the file system.

Execution of Deleted Binaries

Description

Monitor the execution of deleted binaries.

Use Case

Malicious actors may open a binary, delete it from the file system to hide their traces, and then execute it. Detecting such executions is a good practice.

Requirement

Tetragon must run with the Process Credentials visibility enabled, please refer to Enable Process Credentials documentation.

Policy

No policy needs to be loaded, standard process execution observability is sufficient.

Example jq Filter

jq 'select(.process_exec != null) | select(.process_exec.process.binary_properties != null) | select(.process_exec.process.binary_properties.file != null) | "\(.time) \(.process_exec.process.pod.namespace) \(.process_exec.process.pod.name) \(.process_exec.process.binary) \(.process_exec.process.arguments) uid=\(.process_exec.process.process_credentials.uid) euid=\(.process_exec.process.process_credentials.euid) binary_properties=\(.process_exec.process.binary_properties)"'

Example Output

"2024-02-14T16:07:54.265540484Z null null /proc/self/fd/14 null uid=1000 euid=1000 binary_properties={\"file\":{\"inode\":{\"number\":\"4991635\",\"links\":0}}}"

The output shows that the executed binary refers to the file descriptor /proc/self/fd/14, which is not linked on the file system. The binary_properties field includes an inode with zero links on the file system.

eBPF Subsystem Interactions

Description

Audit BPF program loads and BPFFS interactions

Use Case

Understanding BPF programs loaded in a cluster and interactions between applications and programs can identify bugs and malicious or unexpected BPF activity.

Policy

bpf.yaml

Example jq Filter

jq 'select(.process_kprobe != null) | select(.process_kprobe.function_name | test("bpf_check")) | "\(.time) \(.process_kprobe.process.binary) \(.process_kprobe.process.arguments) programType:\(.process_kprobe.args[0].bpf_attr_arg.ProgType) programInsn:\(.process_kprobe.args[0].bpf_attr_arg.InsnCnt)"

Example Output

"2023-11-01T02:56:54.926403604Z /usr/bin/bpftool prog list programType:BPF_PROG_TYPE_SOCKET_FILTER programInsn:2"

Kernel Module Audit Trail

Description

Audit loading of kernel modules

Use Case

Understanding exactly what kernel modules are running in the cluster is crucial to understand attack surface and any malicious actors loading unexpected modules.

Policy

modules.yaml

Example jq Filter

 jq 'select(.process_kprobe != null) | select(.process_kprobe.function_name | test("security_kernel_module_request"))  | "\(.time) \(.process_kprobe.process.binary) \(.process_kprobe.process.arguments) module:\(.process_kprobe.args[0].string_arg)"'

Example Output

"2023-11-01T04:11:38.390880528Z /sbin/iptables -A OUTPUT -m cgroup --cgroup 1 -j LOG module:ipt_LOG"

Shared Library Loading

Description

Monitor loading of libraries

Use Case

Understanding the exact versions of shared libraries that binaries load and use is crucial to understand use of vulnerable or deprecated library versions or attacks such as shared library hijacking.

Policy

library.yaml

Example jq Filter

jq 'select(.process_loader != null) | "\(.time) \(.process_loader.process.pod.namespace) \(.process_loader.process.binary) \(.process_loader.process.arguments) \(.process_loader.path)"'

Example Output

"2023-10-31T19:42:33.065233159Z default/xwing /usr/bin/curl https://ebpf.io /usr/lib/x86_64-linux-gnu/libssl.so.3"

SSHd connection monitoring

Description

Monitor sessions to SSHd

Use Case

It is best practice to audit remote connections into a shell server.

Policy

sshd.yaml

Example jq Filter

 jq 'select(.process_kprobe != null) | select(.process_kprobe.function_name | test("tcp_close")) |  "\(.time) \(.process_kprobe.process.binary) \(.process_kprobe.process.arguments) \(.process_kprobe.args[0].sock_arg.family) \(.process_kprobe.args[0].sock_arg.type)  \(.process_kprobe.args[0].sock_arg.protocol) \(.process_kprobe.args[0].sock_arg.saddr):\(.process_kprobe.args[0].sock_arg.sport)"'

Example Output

"2023-11-01T04:51:20.109146920Z /usr/sbin/sshd default/xwing AF_INET SOCK_STREAM IPPROTO_TCP 127.0.0.1:22"

Outbound connections

Description

Monitor all cluster egress connections

Use Case

Connections made outside a Kubernetes cluster can be audited to provide insights into any unexpected or malicious reverse shells.

Environment Variables

PODCIDR=`kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'`
SERVICECIDR=$(gcloud container clusters describe ${NAME} --zone ${ZONE} | awk '/servicesIpv4CidrBlock/ { print $2; }')
SERVICECIDR=$(kubectl describe pod -n kube-system kube-apiserver-kind-control-plane | awk -F= '/--service-cluster-ip-range/ {print $2; }')

Policy

egress.yaml

Example jq Filter

 jq 'select(.process_kprobe != null) | select(.process_kprobe.function_name | test("tcp_connect")) | "\(.time) \(.process_kprobe.process.binary) \(.process_kprobe.process.arguments) \(.process_kprobe.args[0].sock_arg.saddr):\(.process_kprobe.args[0].sock_arg.sport) -> \(.process_kprobe.args[0].sock_arg.daddr):\(.process_kprobe.args[0].sock_arg.dport)"'

Example Output

"2023-11-01T05:25:14.837745007Z /usr/bin/curl http://ebpf.io 10.168.0.45:48272 -> 104.198.14.52:80"

6 - Use Cases

This section presents various use cases on process, files, network and security monitoring and enforcement.

By default, Tetragon monitors the process lifecycle; learn more about that in the dedicated use cases.

For more advanced use cases, Tetragon can observe tracepoints and arbitrary kernel calls via kprobes. For that, Tetragon must be extended and configured with custom resource objects named TracingPolicy. It can then generate process_tracepoint and process_kprobe events.

6.1 - Process lifecycle

Tetragon observes by default the process lifecycle via exec and exit

Tetragon observes process creation and termination with default configuration and generates process_exec and process_exit events:

  • The process_exec events include useful information about the execution of binaries and related process information. This includes the binary image that was executed, command-line arguments, the UID context the process was executed with, the process parent information, the capabilities that a process had while executed, the process start time, the Kubernetes Pod, labels and more.
  • The process_exit events complement the process_exec events: while the latter show how and when a process started, the former indicate how and when a process is removed. The information in the event includes the binary image that was executed, command-line arguments, the UID context the process was executed with, process parent information, process start time, and the status codes and signals on process exit. Understanding why a process exited and with what status code helps understand the specifics of that exit.

Both these events include Linux-level metadata (UID, parents, capabilities, start time, etc.) as well as Kubernetes-level metadata (Kubernetes namespace, labels, name, etc.). This data makes the connection between node-level concepts, the processes, and Kubernetes or container environments.

These events enable a full lifecycle view into a process that can aid an incident investigation, for example, we can determine if a suspicious process is still running in a particular environment. For concrete examples of such events, see the next use case on process execution.

6.1.1 - Process execution

Monitor process lifecycle with process_exec and process_exit

This first use case is monitoring process execution, which can be observed with the Tetragon process_exec and process_exit JSON events. These events contain the full lifecycle of processes, from fork/exec to exit, including metadata such as:

  • Binary name: Defines the name of an executable file
  • Parent process: Helps to identify process execution anomalies (e.g., if a nodejs app forks a shell, this is suspicious)
  • Command-line argument: Defines the program runtime behavior
  • Current working directory: Helps to identify hidden malware execution from a temporary folder, which is a common pattern used in malware
  • Kubernetes metadata: Contains pods, labels, and Kubernetes namespaces, which are critical to identify service owners, particularly in multi-tenant environments
  • exec_id: A unique process identifier that correlates all recorded activity of a process (a jq sketch using it follows this list)
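
For example, once you know a process's exec_id from a process_exec event, a jq filter over the raw JSON export (a sketch; replace <exec_id> with the actual value) collects all exec and exit events recorded for that process:

kubectl logs -n kube-system -l app.kubernetes.io/name=tetragon -c export-stdout -f | jq 'select(.process_exec.process.exec_id=="<exec_id>" or .process_exit.process.exec_id=="<exec_id>")'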

As a first step, let’s start monitoring the events from the xwing pod:

kubectl logs -n kube-system -l app.kubernetes.io/name=tetragon -c export-stdout -f | tetra getevents -o compact --namespace default --pod xwing

Then in another terminal, let’s kubectl exec into the xwing pod and execute some example commands:

kubectl exec -it xwing -- /bin/bash
whoami

The output in the first terminal should be:

🚀 process default/xwing /bin/bash
🚀 process default/xwing /usr/bin/whoami
💥 exit    default/xwing /usr/bin/whoami 0

Here you can see the binary names along with its arguments, the pod info, and return codes in a compact one-line view of the events.

For more details, use the raw JSON events. You can stop the Tetragon CLI with Ctrl-C and parse the exported events by executing:

kubectl logs -n kube-system -l app.kubernetes.io/name=tetragon -c export-stdout -f | jq 'select(.process_exec.process.pod.name=="xwing" or .process_exit.process.pod.name=="xwing")'

Example process_exec and process_exit events can be:

Process Exec Event

{
  "process_exec": {
    "process": {
      "exec_id": "a2luZC1jb250cm9sLXBsYW5lOjExNDI4NjE1NjM2OTAxOjUxNTgz",
      "pid": 51583,
      "uid": 0,
      "cwd": "/",
      "binary": "/usr/bin/whoami",
      "arguments": "--version",
      "flags": "execve rootcwd clone",
      "start_time": "2022-05-11T12:54:45.615Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
        "name": "xwing",
        "container": {
          "id": "containerd://1fb931d2f6e5e4cfdbaf30fdb8e2fdd81320bdb3047ded50120a4f82838209ce",
          "name": "spaceship",
          "image": {
            "id": "docker.io/tgraf/netperf@sha256:8e86f744bfea165fd4ce68caa05abc96500f40130b857773186401926af7e9e6",
            "name": "docker.io/tgraf/netperf:latest"
          },
          "start_time": "2022-05-11T10:07:33Z",
          "pid": 50
        }
      },
      "docker": "1fb931d2f6e5e4cfdbaf30fdb8e2fdd",
      "parent_exec_id": "a2luZC1jb250cm9sLXBsYW5lOjkwNzkyMjU2MjMyNjk6NDM4NzI=",
      "refcnt": 1
    },
    "parent": {
      "exec_id": "a2luZC1jb250cm9sLXBsYW5lOjkwNzkyMjU2MjMyNjk6NDM4NzI=",
      "pid": 43872,
      "uid": 0,
      "cwd": "/",
      "binary": "/bin/bash",
      "flags": "execve rootcwd clone",
      "start_time": "2022-05-11T12:15:36.225Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
        "name": "xwing",
        "container": {
          "id": "containerd://1fb931d2f6e5e4cfdbaf30fdb8e2fdd81320bdb3047ded50120a4f82838209ce",
          "name": "spaceship",
          "image": {
            "id": "docker.io/tgraf/netperf@sha256:8e86f744bfea165fd4ce68caa05abc96500f40130b857773186401926af7e9e6",
            "name": "docker.io/tgraf/netperf:latest"
          },
          "start_time": "2022-05-11T10:07:33Z",
          "pid": 43
        }
      },
      "docker": "1fb931d2f6e5e4cfdbaf30fdb8e2fdd",
      "parent_exec_id": "a2luZC1jb250cm9sLXBsYW5lOjkwNzkxODU5NTMzOTk6NDM4NjE=",
      "refcnt": 1
    }
  },
  "node_name": "kind-control-plane",
  "time": "2022-05-11T12:54:45.615Z"
}

Process Exit Event

{
  "process_exit": {
    "process": {
      "exec_id": "a2luZC1jb250cm9sLXBsYW5lOjExNDI4NjE1NjM2OTAxOjUxNTgz",
      "pid": 51583,
      "uid": 0,
      "cwd": "/",
      "binary": "/usr/bin/whoami",
      "arguments": "--version",
      "flags": "execve rootcwd clone",
      "start_time": "2022-05-11T12:54:45.615Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
        "name": "xwing",
        "container": {
          "id": "containerd://1fb931d2f6e5e4cfdbaf30fdb8e2fdd81320bdb3047ded50120a4f82838209ce",
          "name": "spaceship",
          "image": {
            "id": "docker.io/tgraf/netperf@sha256:8e86f744bfea165fd4ce68caa05abc96500f40130b857773186401926af7e9e6",
            "name": "docker.io/tgraf/netperf:latest"
          },
          "start_time": "2022-05-11T10:07:33Z",
          "pid": 50
        }
      },
      "docker": "1fb931d2f6e5e4cfdbaf30fdb8e2fdd",
      "parent_exec_id": "a2luZC1jb250cm9sLXBsYW5lOjkwNzkyMjU2MjMyNjk6NDM4NzI="
    },
    "parent": {
      "exec_id": "a2luZC1jb250cm9sLXBsYW5lOjkwNzkyMjU2MjMyNjk6NDM4NzI=",
      "pid": 43872,
      "uid": 0,
      "cwd": "/",
      "binary": "/bin/bash",
      "flags": "execve rootcwd clone",
      "start_time": "2022-05-11T12:15:36.225Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
        "name": "xwing",
        "container": {
          "id": "containerd://1fb931d2f6e5e4cfdbaf30fdb8e2fdd81320bdb3047ded50120a4f82838209ce",
          "name": "spaceship",
          "image": {
            "id": "docker.io/tgraf/netperf@sha256:8e86f744bfea165fd4ce68caa05abc96500f40130b857773186401926af7e9e6",
            "name": "docker.io/tgraf/netperf:latest"
          },
          "start_time": "2022-05-11T10:07:33Z",
          "pid": 43
        }
      },
      "docker": "1fb931d2f6e5e4cfdbaf30fdb8e2fdd",
      "parent_exec_id": "a2luZC1jb250cm9sLXBsYW5lOjkwNzkxODU5NTMzOTk6NDM4NjE="
    }
  },
  "node_name": "kind-control-plane",
  "time": "2022-05-11T12:54:45.616Z"
}

6.1.2 - Advanced Process execution

Advanced Process Execution monitoring using Tracing Policies

Monitor ELF or Flat binaries execution

Advanced process execution monitoring can be performed by using Tracing Policies to monitor the execve system call path.

If we want to monitor execution of Executable and Linkable Format (ELF) or flat binaries before they are actually executed, then the process-exec-elf-begin tracing policy is a good first choice.

Before going forward, deploy our Demo Application to explore the Security Observability Events, and verify that all pods are up and running:

kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.15.3/examples/minikube/http-sw-app.yaml

It might take several seconds for some pods to satisfy all their dependencies:

kubectl get pods -A

The output should be similar to:

NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
default              deathstar-54bb8475cc-6c6lc                   1/1     Running   0          2m54s
default              deathstar-54bb8475cc-zmfkr                   1/1     Running   0          2m54s
default              tiefighter                                   1/1     Running   0          2m54s
default              xwing                                        1/1     Running   0          2m54s
kube-system          tetragon-sdwv6                               2/2     Running   0          27m

Let’s apply the process-exec-elf-begin Tracing Policy.

kubectl apply -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/tracingpolicy/process-exec/process-exec-elf-begin.yaml

Then start monitoring events with the tetra CLI:

kubectl exec -it -n kube-system ds/tetragon -c tetragon -- tetra getevents

In another terminal, kubectl exec into the xwing Pod:

kubectl exec -it xwing -- /bin/bash

And execute some commands:

id

The tetra CLI will generate the following ProcessKprobe events:

{
  "process_kprobe": {
    "process": {
      "exec_id": "a2luZC1jb250cm9sLXBsYW5lOjE2NjY0MDI4MTA4MzcxOjM2NDk5",
      "pid": 36499,
      "uid": 0,
      "cwd": "/",
      "binary": "/bin/bash",
      "flags": "execve",
      "start_time": "2023-08-02T11:58:53.618461573Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
        "name": "xwing",
        "container": {
          "id": "containerd://775beeb1a25a95e10dc149d6eb166bf45dd5e6039e8af3b64e8fb4d29669f349",
          "name": "spaceship",
          "image": {
            "id": "docker.io/tgraf/netperf@sha256:8e86f744bfea165fd4ce68caa05abc96500f40130b857773186401926af7e9e6",
            "name": "docker.io/tgraf/netperf:latest"
          },
          "start_time": "2023-08-02T07:24:54Z",
          "pid": 13
        },
        "pod_labels": {
          "app.kubernetes.io/name": "xwing",
          "class": "xwing",
          "org": "alliance"
        }
      },
      "docker": "775beeb1a25a95e10dc149d6eb166bf",
      "parent_exec_id": "a2luZC1jb250cm9sLXBsYW5lOjE2NjYyNzg3ODI1MTQ4OjM2NDkz",
      "refcnt": 1,
      "tid": 36499
    },
      "parent": {
      "exec_id": "a2luZC1jb250cm9sLXBsYW5lOjE2NjYyNzg3ODI1MTQ4OjM2NDkz",
      "pid": 36493,
      "uid": 0,
      "cwd": "/",
      "binary": "/bin/bash",
      "flags": "execve rootcwd clone",
      "start_time": "2023-08-02T11:58:52.378178852Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
        "name": "xwing",
        "container": {
          "id": "containerd://775beeb1a25a95e10dc149d6eb166bf45dd5e6039e8af3b64e8fb4d29669f349",
          "name": "spaceship",
          "image": {
            "id": "docker.io/tgraf/netperf@sha256:8e86f744bfea165fd4ce68caa05abc96500f40130b857773186401926af7e9e6",
            "name": "docker.io/tgraf/netperf:latest"
          },
          "start_time": "2023-08-02T07:24:54Z",
          "pid": 13
        },
        "pod_labels": {
          "app.kubernetes.io/name": "xwing",
          "class": "xwing",
          "org": "alliance"
        }
      },
      "docker": "775beeb1a25a95e10dc149d6eb166bf",
      "parent_exec_id": "a2luZC1jb250cm9sLXBsYW5lOjE2NjYyNzE2OTU0MjgzOjM2NDg0",
      "tid": 36493
    },
    "function_name": "security_bprm_creds_from_file",
    "args": [
      {
        "file_arg": {
          "path": "/bin/busybox"
        }
      }
    ],
    "action": "KPROBE_ACTION_POST"
  },
  "node_name": "kind-control-plane",
  "time": "2023-08-02T11:58:53.624096751Z"
}

In addition to the Kubernetes Identity and process metadata, ProcessKprobe events contain the binary being executed. In the above case they are:

  • function_name: where we are hooking into the kernel to read the binary that is being executed.
  • file_arg: includes the path being executed; here it is /bin/busybox, which is the real binary being executed, since the container in the xwing pod runs busybox and /usr/bin/id is a symbolic link to /bin/busybox. (A jq sketch for extracting these fields follows this list.)
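
To reduce these raw events to one compact line per execution, a jq sketch over the JSON export (field names taken from the event above) could be:

jq 'select(.process_kprobe != null) | select(.process_kprobe.function_name == "security_bprm_creds_from_file") | "\(.time) \(.process_kprobe.process.pod.namespace) \(.process_kprobe.process.binary) -> \(.process_kprobe.args[0].file_arg.path)"'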

To disable the process-exec-elf-begin Tracing Policy run:

kubectl delete -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/tracingpolicy/process-exec/process-exec-elf-begin.yaml

6.1.3 - Privileged execution

Monitor process capabilities and kernel namespace access

Tetragon also provides the ability to check process capabilities and kernel namespace access.

This information would help us determine which process or Kubernetes pod has started or gained access to privileges or host namespaces that it should not have. This would help us answer questions like:

Which Kubernetes pods are running with CAP_SYS_ADMIN in my cluster?

Which Kubernetes pods have host network or pid namespace access in my cluster?

Step 1: Enabling Process Credential and Namespace Monitoring

  • Edit the Tetragon configmap:

    kubectl edit cm -n kube-system tetragon-config
    
  • Set the following flags from “false” to “true”:

    enable-process-cred: "true"
    enable-process-ns: "true"
    
  • Save your changes and exit.

  • Restart the Tetragon daemonset:

    kubectl rollout restart -n kube-system ds/tetragon
    

Step 2: Deploying a Privileged Nginx Pod

  • Create a YAML file (e.g., privileged-nginx.yaml) with the following PodSpec:

    apiVersion: v1
    kind: Pod
    metadata:
      name: privileged-the-pod
    spec:
      hostPID: true
      hostNetwork: true
      containers:
      - name: privileged-the-pod
        image: nginx:latest
        ports:
        - containerPort: 80
        securityContext:
          privileged: true
    
  • Apply the configuration:

    kubectl apply -f privileged-nginx.yaml
    

Step 3: Monitoring with Tetragon

  • Start monitoring events from the privileged Nginx pod:

    kubectl logs -n kube-system -l app.kubernetes.io/name=tetragon -c export-stdout -f | tetra getevents -o compact --namespace default --pod privileged-the-pod
    
  • You should observe Tetragon generating events similar to these, indicating the privileged container start:

    🚀 process default/privileged-the-pod /nginx -g daemon off;  🛑 CAP_SYS_ADMIN
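
To answer the questions at the beginning of this use case from the raw JSON export, a hedged jq sketch (relying on the cap field layout shown in the policy library examples, and on the credential visibility enabled in Step 1) can list executions whose effective capability set includes CAP_SYS_ADMIN:

kubectl logs -n kube-system -l app.kubernetes.io/name=tetragon -c export-stdout -f | jq 'select(.process_exec != null) | select(.process_exec.process.cap.effective != null) | select(.process_exec.process.cap.effective | index("CAP_SYS_ADMIN")) | "\(.time) \(.process_exec.process.pod.namespace) \(.process_exec.process.pod.name) \(.process_exec.process.binary)"'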
    

6.2 - Filename access

Monitor filename access using kprobe hooks

This page shows how you can create a tracing policy to monitor filename access. For general information about tracing policies, see the tracing policy page.

There are two aspects of the tracing policy: (i) what hooks you can use to monitor specific types of access, and (ii) how you can filter at the kernel level for only specific events.

Hooks

There are different ways applications can access and modify files; for this tracing policy we focus on three different types.

The first is read and write accesses, which are the most common way that files are accessed by applications. Applications can perform this type of access with a variety of different system calls: read and write, optimized system calls such as copy_file_range and sendfile, as well as asynchronous I/O system call families such as the ones provided by aio and io_uring. Instead of monitoring every system call, we opt to hook into the security_file_permission hook, which is a common execution point for all the above system calls.

Applications can also access files by mapping them directly into their virtual address space. Since it is difficult to catch the accesses themselves in this case, our policy will instead monitor the point when the files are mapped into the application’s virtual memory. To do so, we use the security_mmap_file hook.

Lastly, there is a family of system calls (e.g., truncate) that allow indirectly modifying the contents of a file by changing its size. To catch these types of accesses we hook into security_path_truncate.
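
Putting the three hooks together, a skeleton policy could look like the sketch below. The policy name and argument definitions are indicative only; the filename_monitoring.yaml example applied later in this page is the authoritative version.

apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "file-monitoring-sketch" # illustrative name
spec:
  kprobes:
  - call: "security_file_permission"
    syscall: false
    args:
    - index: 0
      type: "file" # (struct file *) used for getting the path
    - index: 1
      type: "int"  # the access mask (MAY_READ/MAY_WRITE)
  - call: "security_mmap_file"
    syscall: false
    args:
    - index: 0
      type: "file" # (struct file *) being mapped
  - call: "security_path_truncate"
    syscall: false
    args:
    - index: 0
      type: "path" # (struct path *) of the file being truncated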

Filtering

Using the hooks above, you can monitor all accesses in the system. This will create a large number of events, however, and it is frequently the case that you are only interested in a specific subset of those events. It is possible to filter the events after their generation, but this induces unnecessary overhead. Tetragon, using BPF, allows filtering these events directly in the kernel.

For example, the following snippet shows how you can limit the events from the security_file_permission hook to only the /etc/passwd file. For this, you need to specify the arguments of the function that you are hooking into, as well as their types.

  - call: "security_file_permission"
    syscall: false
    args:
    - index: 0
      type: "file" # (struct file *) used for getting the path
    - index: 1
      type: "int" # 0x04 is MAY_READ, 0x02 is MAY_WRITE
    selectors:
    - matchArgs:      
      - index: 0
        operator: "Equal"
        values:
        - "/etc/passwd" # filter by filename (/etc/passwd)
      - index: 1
        operator: "Equal"
        values:
        - "2" # filter by type of access (MAY_WRITE)

The previous example uses the Equal operator. Similarly, you can use the Prefix operator to filter events based on the prefix of a filename.

Examples

In this example, we monitor whether a process inside a Kubernetes workload performs a read or write in the /etc/ directory. The policy may be extended with additional directories or specific files if needed.

As a first step, we apply the following policy that uses the three hooks mentioned previously as well as appropriate filtering:

kubectl apply -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/tracingpolicy/filename_monitoring.yaml

Next, we deploy a file-access Pod with an interactive bash session:

kubectl run --rm -it file-access -n default --image=busybox --restart=Never

In another terminal, you can start monitoring the events from the file-access Pod:

kubectl exec -it -n kube-system ds/tetragon -c tetragon -- tetra getevents -o compact --namespace default --pod file-access

In the interactive bash session, edit the /etc/passwd file:

vi /etc/passwd

The output in the second terminal should be:

🚀 process default/file-access /bin/sh
🚀 process default/file-access /bin/vi /etc/passwd
📚 read    default/file-access /bin/vi /etc/passwd
📚 read    default/file-access /bin/vi /etc/passwd
📚 read    default/file-access /bin/vi /etc/passwd
📝 write   default/file-access /bin/vi /etc/passwd
📝 truncate default/file-access /bin/vi /etc/passwd
💥 exit    default/file-access /bin/vi /etc/passwd 0

Note that read and write events are only generated for /etc/ files, based on the BPF in-kernel filtering specified in the policy. The default CRD additionally filters events associated with the pod init process to remove init noise from pod start.

Similarly to the previous example, reviewing the JSON events provides additional data. An example process_kprobe event observing a write can be:

{
  "process_kprobe": {
    "process": {
      "exec_id": "dGV0cmFnb24tZGV2LWNvbnRyb2wtcGxhbmU6MTY4MTc3MDUwMTI1NDI6NjQ3NDY=",
      "pid": 64746,
      "uid": 0,
      "cwd": "/",
      "binary": "/bin/vi",
      "arguments": "/etc/passwd",
      "flags": "execve rootcwd clone",
      "start_time": "2024-04-14T02:18:02.240856427Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
        "name": "file-access",
        "container": {
          "id": "containerd://6b742e38ee3a212239e6d48b2954435a407af44b9a354bdf540db22f460ab40e",
          "name": "file-access",
          "image": {
            "id": "docker.io/library/busybox@sha256:c3839dd800b9eb7603340509769c43e146a74c63dca3045a8e7dc8ee07e53966",
            "name": "docker.io/library/busybox:latest"
          },
          "start_time": "2024-04-14T02:17:46Z",
          "pid": 12
        },
        "pod_labels": {
          "run": "file-access"
        },
        "workload": "file-access",
        "workload_kind": "Pod"
      },
      "docker": "6b742e38ee3a212239e6d48b2954435",
      "parent_exec_id": "dGV0cmFnb24tZGV2LWNvbnRyb2wtcGxhbmU6MTY4MDE3MDQ3OTQyOTg6NjQ2MTU=",
      "refcnt": 1,
      "tid": 64746
    },
    "parent": {
      "exec_id": "dGV0cmFnb24tZGV2LWNvbnRyb2wtcGxhbmU6MTY4MDE3MDQ3OTQyOTg6NjQ2MTU=",
      "pid": 64615,
      "uid": 0,
      "cwd": "/",
      "binary": "/bin/sh",
      "flags": "execve rootcwd clone",
      "start_time": "2024-04-14T02:17:46.240638141Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
        "name": "file-access",
        "container": {
          "id": "containerd://6b742e38ee3a212239e6d48b2954435a407af44b9a354bdf540db22f460ab40e",
          "name": "file-access",
          "image": {
            "id": "docker.io/library/busybox@sha256:c3839dd800b9eb7603340509769c43e146a74c63dca3045a8e7dc8ee07e53966",
            "name": "docker.io/library/busybox:latest"
          },
          "start_time": "2024-04-14T02:17:46Z",
          "pid": 1
        },
        "pod_labels": {
          "run": "file-access"
        },
        "workload": "file-access",
        "workload_kind": "Pod"
      },
      "docker": "6b742e38ee3a212239e6d48b2954435",
      "parent_exec_id": "dGV0cmFnb24tZGV2LWNvbnRyb2wtcGxhbmU6MTY3OTgyOTA2MDc3NTc6NjQ1NjQ=",
      "tid": 64615
    },
    "function_name": "security_file_permission",
    "args": [
      {
        "file_arg": {
          "path": "/etc/passwd",
          "permission": "-rw-r--r--"
        }
      },
      {
        "int_arg": 2
      }
    ],
    "return": {
      "int_arg": 0
    },
    "action": "KPROBE_ACTION_POST",
    "policy_name": "file-monitoring",
    "return_action": "KPROBE_ACTION_POST"
  },
  "node_name": "tetragon-dev-control-plane",
  "time": "2024-04-14T02:18:14.376304204Z"
}

In addition to the Kubernetes Identity and process metadata from exec events, process_kprobe events contain the arguments of the observed call. In the above case they are:

  • file_arg.path: the observed file path
  • int_arg: the type of the operation (2 for a write and 4 for a read)
  • return.int_arg: 0 if the operation is allowed

To disable the TracingPolicy run:

kubectl delete -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/tracingpolicy/filename_monitoring.yaml

To delete the file-access Pod from the interactive bash session, type:

exit

Another example of a similar policy can be found in our examples folder.

Limitations

Note that this policy has certain limitations because it matches on the filename that the application uses for access. If an application accesses the same file via a hard link or a different bind mount, no event will be generated.

6.3 - Network observability

Monitor TCP connect using kprobe hooks

To view TCP connect events, apply the example TCP connect TracingPolicy:

kubectl apply -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/tracingpolicy/tcp-connect.yaml

To start monitoring events in the xwing pod run the Tetragon CLI:

kubectl logs -n kube-system -l app.kubernetes.io/name=tetragon -c export-stdout -f | tetra getevents -o compact --namespace default --pod xwing

In another terminal, generate a TCP connection. Here we use curl.

kubectl exec -it xwing -- curl http://cilium.io

The output in the first terminal will capture the new connection and the data being sent:

🚀 process default/xwing /usr/bin/curl http://cilium.io
🔌 connect default/xwing /usr/bin/curl tcp 10.244.0.6:34965 -> 104.198.14.52:80
📤 sendmsg default/xwing /usr/bin/curl tcp 10.244.0.6:34965 -> 104.198.14.52:80 bytes 73
🧹 close   default/xwing /usr/bin/curl tcp 10.244.0.6:34965 -> 104.198.14.52:80
💥 exit    default/xwing /usr/bin/curl http://cilium.io 0
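
For reference, the kind of kprobe definition that produces such connect events looks roughly like the following. This is a minimal sketch: the policy name, the set of hooked functions and the argument types are assumptions and may differ from the upstream tcp-connect.yaml, which also covers the send and close paths shown above.

apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "connect"
spec:
  kprobes:
  - call: "tcp_connect"
    syscall: false
    args:
    - index: 0
      type: "sock" # (struct sock *) carries the source and destination addresses and ports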

To disable the TracingPolicy run:

kubectl delete -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/tracingpolicy/tcp-connect.yaml

6.4 - Linux process credentials

Monitor Linux process credentials

On Linux, each process has various associated attributes (user and group IDs, capabilities, secure management flags, keyring, LSM security) that are used as part of the security checks performed when acting on other objects. These are called the task privileges or process credentials.

Changing the process credentials is a standard operation for performing privileged actions or executing commands as another user. The obvious example is sudo, which allows a user to gain high privileges and run commands as root or another user. Another example is services or containers that gain high privileges during execution to perform restricted operations.

Composition of Linux process credentials

Traditional UNIX credentials

  • Real User ID
  • Real Group ID
  • Effective, Saved and FS User ID
  • Effective, Saved and FS Group ID
  • Supplementary groups

Linux Capabilities

  • Set of permitted capabilities: a limiting superset for the effective capabilities.
  • Set of inheritable capabilities: the set that may get passed across execve(2).
  • Set of effective capabilities: the set of capabilities a task is actually allowed to make use of itself.
  • Set of bounding capabilities: limits the capabilities that may be inherited across execve(2), especially when a binary is executed that will execute as UID 0.

Secure management flags (securebits).

These govern the way the UIDs/GIDs and capabilities are manipulated and inherited over certain operations such as execve(2).

Linux Security Module (LSM)

The LSM framework provides a mechanism for various security checks to be hooked by new kernel extensions. Tasks can carry extra LSM controls on which operations they are allowed to perform.

Tetragon Process Credentials monitoring

Monitoring Linux process credentials is a good practice to identify programs running with high privileges. Tetragon allows retrieving Linux process credentials as a process_credentials object.

Changes to credentials can be monitored either in system calls or in internal kernel functions.

Generally, it is better to monitor internal kernel functions. For further details, please read the Advantages and disadvantages of kernel layer monitoring compared to the system call layer section.

6.4.1 - Monitor Process Credentials changes at the System Call layer

Monitor system calls that change Process Credentials

Tetragon can hook the system calls that directly manipulate credentials. This allows us to determine which process is trying to change its credentials and which new credentials the kernel could apply.

This answers the questions:

Which process or container is trying to change its UIDs/GIDs in my cluster?

Which process or container is trying to change its capabilities in my cluster?

Before going forward, verify that all pods are up and running, and deploy our Demo Application to explore the Security Observability Events:

kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.15.3/examples/minikube/http-sw-app.yaml

It might take several seconds for some pods to satisfy all of their dependencies:

kubectl get pods -A
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
default              deathstar-54bb8475cc-6c6lc                   1/1     Running   0          2m54s
default              deathstar-54bb8475cc-zmfkr                   1/1     Running   0          2m54s
default              tiefighter                                   1/1     Running   0          2m54s
default              xwing                                        1/1     Running   0          2m54s
kube-system          tetragon-sdwv6                               2/2     Running   0          27m

Monitor UIDs/GIDs credential changes

We use the process.credentials.changes.at.syscalls Tracing Policy, which hooks the setuid system call family (a minimal sketch of one such hook follows the list):

  • setuid
  • setgid
  • setfsuid
  • setfsgid
  • setreuid
  • setregid
  • setresuid
  • setresgid
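
A minimal sketch of one kprobe entry such a policy might contain, assuming the syscalls are hooked with syscall: true and their first integer argument is captured (the exact contents of the upstream policy may differ):

  - call: "sys_setuid"
    syscall: true
    args:
    - index: 0
      type: "int" # the requested UID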

Let’s apply the process.credentials.changes.at.syscalls Tracing Policy.

kubectl apply -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/tracingpolicy/process-credentials/process.credentials.changes.at.syscalls.yaml

Then start monitoring events with the tetra CLI:

kubectl exec -it -n kube-system ds/tetragon -c tetragon -- tetra getevents

In another terminal, kubectl exec into the xwing Pod:

kubectl exec -it xwing -- /bin/bash

Then execute su, as this will call the related setuid system calls:

su root

The tetra CLI will generate the following ProcessKprobe events:

{
  "process_kprobe": {
    "process": {
      "exec_id": "a2luZC1jb250cm9sLXBsYW5lOjQwNzk4ODc2MDI2NTk4OjEyNTc5OA==",
      "pid": 125798,
      "uid": 0,
      "cwd": "/",
      "binary": "/bin/su",
      "arguments": "root",
      "flags": "execve rootcwd clone",
      "start_time": "2023-07-05T19:14:30.918693157Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
        "name": "xwing",
        "container": {
          "id": "containerd://55936e548de63f77ceb595d64966dd8e267b391ff0ef63b26c17eb8c2f6510be",
          "name": "spaceship",
          "image": {
            "id": "docker.io/tgraf/netperf@sha256:8e86f744bfea165fd4ce68caa05abc96500f40130b857773186401926af7e9e6",
            "name": "docker.io/tgraf/netperf:latest"
          },
          "start_time": "2023-07-05T18:45:16Z",
          "pid": 19
        },
        "pod_labels": {
          "app.kubernetes.io/name": "xwing",
          "class": "xwing",
          "org": "alliance"
        }
      },
      "docker": "55936e548de63f77ceb595d64966dd8",
      "parent_exec_id": "a2luZC1jb250cm9sLXBsYW5lOjQwNzk1NjYyMDM3MzMyOjEyNTc5Mg==",
      "refcnt": 1,
      "tid": 125798
    },
    "parent": {
      "exec_id": "a2luZC1jb250cm9sLXBsYW5lOjQwNzk1NjYyMDM3MzMyOjEyNTc5Mg==",
      "pid": 125792,
      "uid": 0,
      "cwd": "/",
      "binary": "/bin/bash",
      "flags": "execve rootcwd clone",
      "start_time": "2023-07-05T19:14:27.704703805Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
        "name": "xwing",
        "container": {
          "id": "containerd://55936e548de63f77ceb595d64966dd8e267b391ff0ef63b26c17eb8c2f6510be",
          "name": "spaceship",
          "image": {
            "id": "docker.io/tgraf/netperf@sha256:8e86f744bfea165fd4ce68caa05abc96500f40130b857773186401926af7e9e6",
            "name": "docker.io/tgraf/netperf:latest"
          },
          "start_time": "2023-07-05T18:45:16Z",
          "pid": 13
        },
        "pod_labels": {
          "app.kubernetes.io/name": "xwing",
          "class": "xwing",
          "org": "alliance"
        }
      },
      "docker": "55936e548de63f77ceb595d64966dd8",
      "parent_exec_id": "a2luZC1jb250cm9sLXBsYW5lOjQwNzk1NjE2MTU0NzA2OjEyNTc4Mw==",
      "refcnt": 2,
      "tid": 125792
    },
    "function_name": "__x64_sys_setgid",
    "args": [
      {
        "int_arg": 0
      }
    ],
    "action": "KPROBE_ACTION_POST"
  },
  "node_name": "kind-control-plane",
  "time": "2023-07-05T19:14:30.918977160Z"
}
{
  "process_kprobe": {
    "process": {
      "exec_id": "a2luZC1jb250cm9sLXBsYW5lOjQwNzk4ODc2MDI2NTk4OjEyNTc5OA==",
      "pid": 125798,
      "uid": 0,
      "cwd": "/",
      "binary": "/bin/su",
      "arguments": "root",
      "flags": "execve rootcwd clone",
      "start_time": "2023-07-05T19:14:30.918693157Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
        "name": "xwing",
        "container": {
          "id": "containerd://55936e548de63f77ceb595d64966dd8e267b391ff0ef63b26c17eb8c2f6510be",
          "name": "spaceship",
          "image": {
            "id": "docker.io/tgraf/netperf@sha256:8e86f744bfea165fd4ce68caa05abc96500f40130b857773186401926af7e9e6",
            "name": "docker.io/tgraf/netperf:latest"
          },
          "start_time": "2023-07-05T18:45:16Z",
          "pid": 19
        },
        "pod_labels": {
          "app.kubernetes.io/name": "xwing",
          "class": "xwing",
          "org": "alliance"
        }
      },
      "docker": "55936e548de63f77ceb595d64966dd8",
      "parent_exec_id": "a2luZC1jb250cm9sLXBsYW5lOjQwNzk1NjYyMDM3MzMyOjEyNTc5Mg==",
      "refcnt": 1,
      "tid": 125798
    },
    "parent": {
      "exec_id": "a2luZC1jb250cm9sLXBsYW5lOjQwNzk1NjYyMDM3MzMyOjEyNTc5Mg==",
      "pid": 125792,
      "uid": 0,
      "cwd": "/",
      "binary": "/bin/bash",
      "flags": "execve rootcwd clone",
      "start_time": "2023-07-05T19:14:27.704703805Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
        "name": "xwing",
        "container": {
          "id": "containerd://55936e548de63f77ceb595d64966dd8e267b391ff0ef63b26c17eb8c2f6510be",
          "name": "spaceship",
          "image": {
            "id": "docker.io/tgraf/netperf@sha256:8e86f744bfea165fd4ce68caa05abc96500f40130b857773186401926af7e9e6",
            "name": "docker.io/tgraf/netperf:latest"
          },
          "start_time": "2023-07-05T18:45:16Z",
          "pid": 13
        },
        "pod_labels": {
          "app.kubernetes.io/name": "xwing",
          "class": "xwing",
          "org": "alliance"
        }
      },
      "docker": "55936e548de63f77ceb595d64966dd8",
      "parent_exec_id": "a2luZC1jb250cm9sLXBsYW5lOjQwNzk1NjE2MTU0NzA2OjEyNTc4Mw==",
      "refcnt": 2,
      "tid": 125792
    },
    "function_name": "__x64_sys_setuid",
    "args": [
      {
        "int_arg": 0
      }
    ],
    "action": "KPROBE_ACTION_POST"
  },
  "node_name": "kind-control-plane",
  "time": "2023-07-05T19:14:30.918990583Z"
}

In addition to the Kubernetes Identity and process metadata from exec events, ProcessKprobe events contain the arguments of the observed system call. In the above case they are:

  • function_name: the system call, __x64_sys_setuid or __x64_sys_setgid
  • int_arg: the uid or gid to use, in our case it’s 0 which corresponds to the root user.

To disable the process.credentials.changes.at.syscalls Tracing Policy run:

kubectl delete -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/tracingpolicy/process-credentials/process.credentials.changes.at.syscalls.yaml

6.4.2 - Monitor Process Credentials changes at the Kernel layer

Monitor Process Credentials changes at the kernel layer

Monitoring Process Credentials changes at the kernel layer is also possible. This allows capturing the new process_credentials that should be applied.

This process-creds-installed tracing policy can be used to answer the following questions:

Which process or container is trying to change its own UIDs/GIDs in the cluster?

Which process or container is trying to change its own capabilities in the cluster?

In which user namespace the credentials are being changed?

How to monitor process_credentials changes?

Advantages and disadvantages of kernel layer monitoring compared to the system call layer

The main advantages of monitoring at the kernel layer compared to the system call layer are:

  • Not vulnerable to user space arguments tampering.

  • Ability to display the full new credentials to be applied.

  • It is more reliable since it has full context on where and how the new credentials should be applied including the user namespace.

  • A catch-all layer for all system calls and every normal kernel path that manipulates credentials.

  • One potential disadvantage is that this approach may generate a lot of events, so appropriate filtering must be applied to reduce the noise (see the selector sketch after this list).
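
As an illustration of such filtering, a selector can restrict a hook to a small set of binaries. The matchBinaries selector below is a sketch with illustrative values, not an excerpt from a shipped policy:

    selectors:
    - matchBinaries:
      - operator: "In"
        values:
        - "/usr/bin/sudo"
        - "/bin/su"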

Kubernetes Environments

First, verify that your k8s environment is set up, that all pods are up and running, and deploy the Demo Application:

kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.15.3/examples/minikube/http-sw-app.yaml

It might take several seconds until all pods are Running:

kubectl get pods -A
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
default              deathstar-54bb8475cc-6c6lc                   1/1     Running   0          2m54s
default              deathstar-54bb8475cc-zmfkr                   1/1     Running   0          2m54s
default              tiefighter                                   1/1     Running   0          2m54s
default              xwing                                        1/1     Running   0          2m54s
kube-system          tetragon-sdwv6                               2/2     Running   0          27m

Monitor Process Credentials installation

We use the process-creds-installed Tracing Policy that hooks the kernel layer when credentials are being installed.
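
Conceptually, the policy hooks the kernel function that installs new credentials. The sketch below is an assumption of what such a kprobe entry looks like (in particular the "cred" argument type); the upstream process-creds-installed.yaml may differ:

  - call: "commit_creds"
    syscall: false
    args:
    - index: 0
      type: "cred" # (struct cred *) the new credentials about to be installed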

So let’s apply the process-creds-installed Tracing Policy.

kubectl apply -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/tracingpolicy/process-credentials/process-creds-installed.yaml

Then we start monitoring for events with tetra cli:

kubectl exec -it -n kube-system ds/tetragon -c tetragon -- tetra getevents

In another terminal, inside a pod and as a non-root user, we will execute a setuid binary (suid):

/tmp/su -

The tetra cli will generate the following ProcessKprobe events:

{
 "process_kprobe": {
    "process": {
      "exec_id": "a2luZC1jb250cm9sLXBsYW5lOjE0MDczMDQyODk3MTc6MjIzODY=",
      "pid": 22386,
      "uid": 11,
      "cwd": "/",
      "binary": "/tmp/su",
      "arguments": "-",
      "flags": "execve rootcwd clone",
      "start_time": "2023-07-25T12:04:59.359333454Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
        "name": "xwing",
        "container": {
          "id": "containerd://2e58c8357465961fd96f758e87d0269dfb5f97c536847485de9d7ec62be34a64",
          "name": "spaceship",
          "image": {
            "id": "docker.io/tgraf/netperf@sha256:8e86f744bfea165fd4ce68caa05abc96500f40130b857773186401926af7e9e6",
            "name": "docker.io/tgraf/netperf:latest"
          },
          "start_time": "2023-07-25T11:44:48Z",
          "pid": 43
        },
        "pod_labels": {
          "app.kubernetes.io/name": "xwing",
          "class": "xwing",
          "org": "alliance"
        }
      },
      "docker": "2e58c8357465961fd96f758e87d0269",
      "parent_exec_id": "a2luZC1jb250cm9sLXBsYW5lOjEzOTU0MDY5OTA1ODc6MjIzNTI=",
      "refcnt": 1,
      "tid": 22386
    },
    "parent": {
      "exec_id": "a2luZC1jb250cm9sLXBsYW5lOjEzOTU0MDY5OTA1ODc6MjIzNTI=",
      "pid": 22352,
      "uid": 11,
      "cwd": "/",
      "binary": "/bin/sh",
      "flags": "execve rootcwd",
      "start_time": "2023-07-25T12:04:47.462035587Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
                "name": "xwing",
        "container": {
          "id": "containerd://2e58c8357465961fd96f758e87d0269dfb5f97c536847485de9d7ec62be34a64",
          "name": "spaceship",
          "image": {
            "id": "docker.io/tgraf/netperf@sha256:8e86f744bfea165fd4ce68caa05abc96500f40130b857773186401926af7e9e6",
            "name": "docker.io/tgraf/netperf:latest"
          },
          "start_time": "2023-07-25T11:44:48Z",
          "pid": 41
        },
        "pod_labels": {
          "app.kubernetes.io/name": "xwing",
          "class": "xwing",
          "org": "alliance"
        }
      },
      "docker": "2e58c8357465961fd96f758e87d0269",
      "parent_exec_id": "a2luZC1jb250cm9sLXBsYW5lOjEzOTU0MDQ3NzY5NzI6MjIzNTI=",
      "refcnt": 2,
      "tid": 22352
    },
    "function_name": "commit_creds",
    "args": [
      {
        "process_credentials_arg": {
          "uid": 0,
          "gid": 0,
          "euid": 0,
          "egid": 0,
          "suid": 0,
          "sgid": 0,
          "fsuid": 0,
          "fsgid": 0,
          "caps": {
            "permitted": [
              "CAP_CHOWN",
              "DAC_OVERRIDE",
              "CAP_FOWNER",
              "CAP_FSETID",
              "CAP_KILL",
              "CAP_SETGID",
              "CAP_SETUID",
              "CAP_SETPCAP",
              "CAP_NET_BIND_SERVICE",
              "CAP_NET_RAW",
              "CAP_SYS_CHROOT",
              "CAP_MKNOD",
              "CAP_AUDIT_WRITE",
              "CAP_SETFCAP"
            ],
            "effective": [
              "CAP_CHOWN",
              "DAC_OVERRIDE",
              "CAP_FOWNER",
              "CAP_FSETID",
              "CAP_KILL",
              "CAP_SETGID",
              "CAP_SETUID",
              "CAP_SETPCAP",
              "CAP_NET_BIND_SERVICE",
              "CAP_NET_RAW",
              "CAP_SYS_CHROOT",
              "CAP_MKNOD",
              "CAP_AUDIT_WRITE",
              "CAP_SETFCAP"
            ]
          },
          "user_ns": {
            "level": 0,
            "uid": 0,
            "gid": 0,
            "ns": {
              "inum": 4026531837,
              "is_host": true
            }
          }
        }
      }
    ],
    "action": "KPROBE_ACTION_POST"
  },
  "node_name": "kind-control-plane",
  "time": "2023-07-25T12:05:01.410834172Z"
}

In addition to the Kubernetes Identity and process metadata from exec events, ProcessKprobe events contain the arguments of the observed kernel call. In the above case they are:

  • function_name: the kernel commit_creds() function to install new credentials.
  • process_credentials_arg: the new process_credentials to be installed on the current process. It includes the UIDs/GIDs, the capabilities and the target user namespace.

Here we can clearly see that the setuid binary is being executed by user ID 11 in order to elevate its privileges to user ID 0, including capabilities.

To disable the process-creds-installed Tracing Policy run:

kubectl delete -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/tracingpolicy/process-credentials/process-creds-installed.yaml

6.5 - Host System Changes

Monitor Host System changes

Some pods need to change the host system or kernel parameters in order to perform administrative tasks. Obvious examples are pods loading a kernel module to extend the operating system functionality, or pods managing the network.

However, there are also other cases where a compromised container may want to load a kernel module to hide its behaviour.

Monitoring such host system changes therefore helps to identify pods and containers that affect the host system.

Monitor Linux kernel modules

A kernel module is code that can be loaded into the kernel image at runtime, without rebooting. These modules, which can be loaded by pods and containers, can modify the host system. The Monitor Linux kernel modules guide will assist you in observing such events.

6.5.1 - Monitor Linux Kernel Modules

Monitor Linux Kernel Modules operations

Monitoring kernel modules helps to identify processes that load kernel modules to add features to the operating system, to alter host system functionality, or even to hide their behaviour. This can be used to answer the following questions:

Which process or container is changing the kernel?

Which process or container is loading or unloading kernel modules in the cluster?

Which process or container requested a feature that triggered the kernel to automatically load a module?

Are the loaded kernel modules signed?

Monitor Loading kernel modules

Kubernetes Environments

After deploying Tetragon, use the monitor-kernel-modules tracing policy which generates ProcessKprobe events to trace kernel module operations.
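
Conceptually, the policy hooks the kernel functions involved in loading modules. The sketch below is an assumption of what two such kprobe entries look like (in particular the "module" argument type); the upstream monitor-kernel-modules.yaml may differ:

  - call: "security_kernel_read_file"
    syscall: false
    args:
    - index: 0
      type: "file" # the module file being read from disk
    - index: 1
      type: "int"
  - call: "do_init_module"
    syscall: false
    args:
    - index: 0
      type: "module" # the module being initialized (name, taint flags)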

Apply the monitor-kernel-modules tracing policy:

kubectl apply -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/tracingpolicy/host-changes/monitor-kernel-modules.yaml

Then start monitoring for events with tetra CLI:

kubectl exec -it -n kube-system ds/tetragon -c tetragon -- tetra getevents

When loading an out-of-tree module named kernel_module_hello.ko with the insmod command, the tetra CLI will generate the following ProcessKprobe events:

1. Reading the kernel module from the file system

{
  "process_kprobe": {
    "process": {
      "exec_id": "OjEzMTg4MTQwNDUwODkwOjgyMDIz",
      "pid": 82023,
      "uid": 0,
      "cwd": "/home/tixxdz/tetragon",
      "binary": "/usr/sbin/insmod",
      "arguments": "contrib/tester-progs/kernel_module_hello.ko",
      "flags": "execve clone",
      "start_time": "2023-08-30T11:01:22.846516679Z",
      "auid": 1000,
      "parent_exec_id": "OjEzMTg4MTM4MjY2ODQyOjgyMDIy",
      "refcnt": 1,
      "tid": 82023
    },
    "parent": {
      "exec_id": "OjEzMTg4MTM4MjY2ODQyOjgyMDIy",
      "pid": 82022,
      "uid": 1000,
      "cwd": "/home/tixxdz/tetragon",
      "binary": "/usr/bin/sudo",
      "arguments": "insmod contrib/tester-progs/kernel_module_hello.ko",
      "flags": "execve",
      "start_time": "2023-08-30T11:01:22.844332959Z",
      "auid": 1000,
      "parent_exec_id": "OjEzMTg1NTE3MTgzNDM0OjgyMDIx",
      "refcnt": 1,
      "tid": 0
    },
    "function_name": "security_kernel_read_file",
    "args": [
      {
        "file_arg": {
          "path": "/home/tixxdz/tetragon/contrib/tester-progs/kernel_module_hello.ko"
        }
      },
      {
        "int_arg": 2
      }
    ],
    "return": {
      "int_arg": 0
    },
    "action": "KPROBE_ACTION_POST"
  },
  "time": "2023-08-30T11:01:22.847554295Z"
}

In addition to the process metadata from exec events, ProcessKprobe events contain the arguments of the observed call. In the above case they are:

  • security_kernel_read_file: the kernel security hook invoked when the kernel loads a file specified by user space.
  • file_arg: the full path of the kernel module on the file system.

2. Finalize loading of kernel modules

{
  "process_kprobe": {
    "process": {
      "exec_id": "OjEzMTg4MTQwNDUwODkwOjgyMDIz",
      "pid": 82023,
      "uid": 0,
      "cwd": "/home/tixxdz/tetragon",
      "binary": "/usr/sbin/insmod",
      "arguments": "contrib/tester-progs/kernel_module_hello.ko",
      "flags": "execve clone",
      "start_time": "2023-08-30T11:01:22.846516679Z",
      "auid": 1000,
      "parent_exec_id": "OjEzMTg4MTM4MjY2ODQyOjgyMDIy",
      "refcnt": 1,
      "tid": 82023
    },
    "parent": {
      "exec_id": "OjEzMTg4MTM4MjY2ODQyOjgyMDIy",
      "pid": 82022,
      "uid": 1000,
      "cwd": "/home/tixxdz/tetragon",
      "binary": "/usr/bin/sudo",
      "arguments": "insmod contrib/tester-progs/kernel_module_hello.ko",
      "flags": "execve",
      "start_time": "2023-08-30T11:01:22.844332959Z",
      "auid": 1000,
      "parent_exec_id": "OjEzMTg1NTE3MTgzNDM0OjgyMDIx",
      "refcnt": 1,
      "tid": 0
    },
    "function_name": "do_init_module",
    "args": [
      {
        "module_arg": {
          "name": "kernel_module_hello",
          "tainted": [
            "TAINT_OUT_OF_TREE_MODULE",
            "TAINT_UNSIGNED_MODULE"
          ]
        }
      }
    ],
    "action": "KPROBE_ACTION_POST"
  },
  "time": "2023-08-30T11:01:22.847638990Z"
}

This ProcessKprobe event contains:

  • do_init_module: the function call where the module is finally loaded.
  • module_arg: the kernel module information; it contains:
    • name: the name of the kernel module as a string.
    • tainted: the module taint flags that will be applied to the kernel. In the example above, they indicate we are loading an out-of-tree, unsigned module, which may compromise the integrity of our system.

Monitor Kernel Modules Signature

Kernels compiled with the CONFIG_MODULE_SIG option will check whether the modules being loaded are cryptographically signed. This allows asserting that:

  • If the module being loaded is signed, the kernel has its key and the signature verification succeeded.

  • The integrity of the system or the kernel was not compromised.

Kubernetes Environments

After deploying Tetragon, use the monitor-signed-kernel-modules tracing policy, which generates ProcessKprobe events to identify whether kernel modules are signed.

Apply the monitor-signed-kernel-modules tracing policy:

kubectl apply -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/tracingpolicy/host-changes/monitor-signed-kernel-modules.yaml

Before going forward, deploy the test-pod into the demo-app namespace; its security context is set to privileged. This allows running the demo by mounting an xfs file system inside the test-pod, which requires privileges, and will also trigger an automatic xfs module loading operation.

kubectl create namespace demo-app
kubectl apply -n demo-app -f https://raw.githubusercontent.com/cilium/tetragon/main/testdata/specs/testpod.yaml

Start monitoring for events with tetra CLI:

kubectl exec -it -n kube-system ds/tetragon -c tetragon -- tetra getevents

In another terminal, kubectl exec into the test-pod and run the following commands to create an xfs filesystem:

kubectl exec -it -n demo-app test-pod -- /bin/sh
apk update
dd if=/dev/zero of=loop.xfs bs=1 count=0 seek=32M
ls -lha loop.xfs
apk add xfsprogs
mkfs.xfs -q loop.xfs
mkdir /mnt/xfs.volume
mount -o loop -t xfs loop.xfs /mnt/xfs.volume/
losetup -a | grep xfs

Now the xfs filesystem should be mounted at /mnt/xfs.volume. To unmount it and release the loop device run:

umount /mnt/xfs.volume/

The tetra CLI will generate the following events:

1. Automatic loading of kernel modules

First the mount command will trigger an automatic operation to load the xfs kernel module.

{
  "process_kprobe": {
    "process": {
      "exec_id": "a2luZC1jb250cm9sLXBsYW5lOjQxMjc1NTA0OTk5NTcyOjEzMDg3Ng==",
      "pid": 130876,
      "uid": 0,
      "cwd": "/",
      "binary": "/bin/mount",
      "arguments": "-o loop -t xfs loop.xfs /mnt/xfs.volume/",
      "flags": "execve rootcwd clone",
      "start_time": "2023-09-09T23:27:42.732039059Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "demo-app",
        "name": "test-pod",
        "container": {
          "id": "containerd://1e910d5cc8d8d68c894934170b162ef93aea5652867ed6bd7c620c7e3f9a10f1",
          "name": "test-pod",
          "image": {
            "id": "docker.io/cilium/starwars@sha256:f92c8cd25372bac56f55111469fe9862bf682385a4227645f5af155eee7f58d9",
            "name": "docker.io/cilium/starwars:latest"
          },
          "start_time": "2023-09-09T22:46:09Z",
          "pid": 45672
        },
        "workload": "test-pod"
      },
      "docker": "1e910d5cc8d8d68c894934170b162ef",
      "parent_exec_id": "a2luZC1jb250cm9sLXBsYW5lOjQxMjYyOTc1MjI1MDkzOjEzMDgwOQ==",
      "refcnt": 1,
      "tid": 130876
    },
    "parent": {
      "exec_id": "a2luZC1jb250cm9sLXBsYW5lOjQxMjYyOTc1MjI1MDkzOjEzMDgwOQ==",
      "pid": 130809,
      "uid": 0,
      "cwd": "/",
      "binary": "/bin/sh",
      "flags": "execve rootcwd clone",
      "start_time": "2023-09-09T23:27:30.202263472Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "demo-app",
        "name": "test-pod",
        "container": {
          "id": "containerd://1e910d5cc8d8d68c894934170b162ef93aea5652867ed6bd7c620c7e3f9a10f1",
          "name": "test-pod",
          "image": {
            "id": "docker.io/cilium/starwars@sha256:f92c8cd25372bac56f55111469fe9862bf682385a4227645f5af155eee7f58d9",
            "name": "docker.io/cilium/starwars:latest"
          },
          "start_time": "2023-09-09T22:46:09Z",
          "pid": 45612
        },
        "workload": "test-pod"
      },
      "docker": "1e910d5cc8d8d68c894934170b162ef",
      "parent_exec_id": "a2luZC1jb250cm9sLXBsYW5lOjQxMjYyOTEwMjM3OTQ2OjEzMDgwMA==",
      "tid": 130809
    },
    "function_name": "security_kernel_module_request",
    "args": [
      {
        "string_arg": "fs-xfs"
      }
    ],
    "return": {
      "int_arg": 0
    },
    "action": "KPROBE_ACTION_POST"
  },
  "node_name": "kind-control-plane",
  "time": "2023-09-09T23:27:42.751151233Z"
}

In addition to the process metadata from exec events, the ProcessKprobe event contains the arguments of the observed call. In the above case they are:

  • security_kernel_module_request: the kernel security hook where modules are loaded on-demand.
  • string_arg: the name of the kernel module. When modules are automatically loaded, for security reasons, the kernel prefixes the module with the name of the subsystem that requested it. In our case, it’s requested by the file system subsystem, hence the name is fs-xfs.

2. Kernel calls modprobe to load the kernel module

The kernel will then call user space modprobe to load the kernel module.

{
  "process_exec": {
    "process": {
      "exec_id": "a2luZC1jb250cm9sLXBsYW5lOjQxMjc1NTI0MjYzMjIxOjEzMDg3Nw==",
      "pid": 130877,
      "uid": 0,
      "cwd": "/",
      "binary": "/sbin/modprobe",
      "arguments": "-q -- fs-xfs",
      "flags": "execve rootcwd clone",
      "start_time": "2023-09-09T23:27:42.751301124Z",
      "auid": 4294967295,
      "parent_exec_id": "a2luZC1jb250cm9sLXBsYW5lOjE6MA==",
      "tid": 130877
    },
    "parent": {
      "exec_id": "a2luZC1jb250cm9sLXBsYW5lOjE6MA==",
      "pid": 0,
      "uid": 0,
      "binary": "<kernel>",
      "flags": "procFS",
      "start_time": "2023-09-09T11:59:47.227037763Z",
      "auid": 0,
      "parent_exec_id": "a2luZC1jb250cm9sLXBsYW5lOjE6MA==",
      "tid": 0
    }
  },
  "node_name": "kind-control-plane",
  "time": "2023-09-09T23:27:42.751300984Z"
}

This is the ProcessExec event where modprobe tries to load the xfs module.

3. Reading the kernel module from the file system

modprobe will read the passed xfs kernel module from the host file system.

{
  "process_kprobe": {
    "process": {
      "exec_id": "a2luZC1jb250cm9sLXBsYW5lOjQxMjc1NTI0MjYzMjIxOjEzMDg3Nw==",
      "pid": 130877,
      "uid": 0,
      "cwd": "/",
      "binary": "/sbin/modprobe",
      "arguments": "-q -- fs-xfs",
      "flags": "execve rootcwd clone",
      "start_time": "2023-09-09T23:27:42.751301124Z",
      "auid": 4294967295,
      "parent_exec_id": "a2luZC1jb250cm9sLXBsYW5lOjE6MA==",
      "refcnt": 1,
      "tid": 130877
    },
    "parent": {
      "exec_id": "a2luZC1jb250cm9sLXBsYW5lOjE6MA==",
      "pid": 0,
      "uid": 0,
      "binary": "<kernel>",
      "flags": "procFS",
      "start_time": "2023-09-09T11:59:47.227037763Z",
      "auid": 0,
      "parent_exec_id": "a2luZC1jb250cm9sLXBsYW5lOjE6MA==",
      "tid": 0
    },
    "function_name": "security_kernel_read_file",
    "args": [
      {
        "file_arg": {
          "path": "/usr/lib/modules/6.2.0-32-generic/kernel/fs/xfs/xfs.ko"
        }
      },
      {
        "int_arg": 2
      }
    ],
    "return": {
      "int_arg": 0
    },
    "action": "KPROBE_ACTION_POST"
  },
  "node_name": "kind-control-plane",
  "time": "2023-09-09T23:27:42.752425825Z"
}

This ProcessKprobe event contains:

  • security_kernel_read_file: the kernel security hook invoked when the kernel loads a file specified by user space.
  • file_arg: the full path of the kernel module on the host file system.

4. Kernel module signature and sections are parsed

The final event is generated when the kernel parses the module sections. If all checks succeed, the module will be loaded.

{
  "process_kprobe": {
    "process": {
      "exec_id": "a2luZC1jb250cm9sLXBsYW5lOjQxMjc1NTI0MjYzMjIxOjEzMDg3Nw==",
      "pid": 130877,
      "uid": 0,
      "cwd": "/",
      "binary": "/sbin/modprobe",
      "arguments": "-q -- fs-xfs",
      "flags": "execve rootcwd clone",
      "start_time": "2023-09-09T23:27:42.751301124Z",
      "auid": 4294967295,
      "parent_exec_id": "a2luZC1jb250cm9sLXBsYW5lOjE6MA==",
      "refcnt": 1,
      "tid": 130877
    },
    "parent": {
      "exec_id": "a2luZC1jb250cm9sLXBsYW5lOjE6MA==",
      "pid": 0,
      "uid": 0,
      "binary": "<kernel>",
      "flags": "procFS",
      "start_time": "2023-09-09T11:59:47.227037763Z",
      "auid": 0,
      "parent_exec_id": "a2luZC1jb250cm9sLXBsYW5lOjE6MA==",
      "tid": 0
    },
    "function_name": "find_module_sections",
    "args": [
      {
        "module_arg": {
          "name": "xfs",
          "signature_ok": true
        }
      }
    ],
    "action": "KPROBE_ACTION_POST"
  },
  "node_name": "kind-control-plane",
  "time": "2023-09-09T23:27:42.760880332Z"
}

This ProcessKprobe event contains the module argument.

  • find_module_sections: the function call where the kernel parses the module sections.
  • module_arg: the kernel module information; it contains:
    • name: the name of the kernel module as a string.
    • signature_ok: a boolean value; if set to true, the module signature was successfully verified by the kernel. If it is false or missing, the signature verification was not performed or failed, which means the integrity of the system may have been compromised. This check depends on the kernel being compiled with the CONFIG_MODULE_SIG option.

Monitor Unloading of kernel modules

The same monitor-kernel-modules tracing policy also allows monitoring the unloading of kernel modules.

The following ProcessKprobe event will be generated:

Removing kernel modules event

{
  "process_kprobe": {
    "process": {
      "exec_id": "OjMzNzQ4NzY1MDAyNDk5OjI0OTE3NQ==",
      "pid": 249175,
      "uid": 0,
      "cwd": "/home/tixxdz/tetragon",
      "binary": "/usr/sbin/rmmod",
      "arguments": "kernel_module_hello",
      "flags": "execve clone",
      "start_time": "2023-08-30T16:44:03.471068355Z",
      "auid": 1000,
      "parent_exec_id": "OjMzNzQ4NzY0MjQ4MTY5OjI0OTE3NA==",
      "refcnt": 1,
      "tid": 249175
    },
    "parent": {
      "exec_id": "OjMzNzQ4NzY0MjQ4MTY5OjI0OTE3NA==",
      "pid": 249174,
      "uid": 1000,
      "cwd": "/home/tixxdz/tetragon",
      "binary": "/usr/bin/sudo",
      "arguments": "rmmod kernel_module_hello",
      "flags": "execve",
      "start_time": "2023-08-30T16:44:03.470314558Z",
      "auid": 1000,
      "parent_exec_id": "OjMzNzQ2MjA5OTUxODI4OjI0OTE3Mw==",
      "refcnt": 1,
      "tid": 0
    },
    "function_name": "free_module",
    "args": [
      {
        "module_arg": {
          "name": "kernel_module_hello",
          "tainted": [
            "TAINT_OUT_OF_TREE_MODULE",
            "TAINT_UNSIGNED_MODULE"
          ]
        }
      }
    ],
    "action": "KPROBE_ACTION_POST"
  },
  "time": "2023-08-30T16:44:03.471984676Z"
}

To disable the monitor-kernel-modules policy run:

kubectl delete -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/tracingpolicy/host-changes/monitor-kernel-modules.yaml

6.6 - Security Profiles

Observe and record security events

Tetragon is able to observe various security events and even enforce security policies.

The Record Linux Capabilities Usage guide shows how to monitor and record Capabilities checks conducted by the kernel on behalf of applications during privileged operations. This can be used to inspect and produce security profiles for pods and containers.

6.6.1 - Record Linux Capabilities Usage

Record a capability profile of pods and containers

When the kernel needs to perform a privileged operation on behalf of a process, it checks the Capabilities of the process and issues a verdict to allow or deny the operation.

Tetragon is able to record these checks performed by the kernel. This can be used to answer the following questions:

What is the capabilities profile of pods or containers running in the cluster?

Which capabilities should be added or removed when configuring a security context for a pod or container?

Kubernetes Environments

First, verify that your k8s environment is set up and that all pods are up and running, and deploy the demo application:

kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.15.3/examples/minikube/http-sw-app.yaml

It might take several seconds until all pods are Running:

kubectl get pods -A

The output should be similar to:

NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
default              deathstar-54bb8475cc-6c6lc                   1/1     Running   0          2m54s
default              deathstar-54bb8475cc-zmfkr                   1/1     Running   0          2m54s
default              tiefighter                                   1/1     Running   0          2m54s
default              xwing                                        1/1     Running   0          2m54s
kube-system          tetragon-sdwv6                               2/2     Running   0          27m

Monitor Capability Checks

We use the creds-capability-usage tracing policy which generates ProcessKprobe events.
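
Conceptually, the policy hooks the kernel capability check. The sketch below is an assumption of what such a kprobe entry looks like (the argument indices and the "user_namespace" and "capability" type names are assumptions); the upstream creds-capability-usage.yaml may differ, and it also captures the function's return value to report the verdict:

  - call: "cap_capable"
    syscall: false
    args:
    - index: 1
      type: "user_namespace" # the user namespace where the capability is required
    - index: 2
      type: "capability" # the capability being checked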

Apply the creds-capability-usage policy:

kubectl apply -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/tracingpolicy/process-credentials/creds-capability-usage.yaml

Start monitoring for events with the tetra CLI, but match only events from the xwing pod:

kubectl exec -it -n kube-system ds/tetragon -c tetragon -- tetra getevents --namespaces default --pods xwing

In another terminal, kubectl exec into the xwing pod:

kubectl exec -it xwing -- /bin/bash

As an example execute dmesg to print the kernel ring buffer. This requires the special capability CAP_SYSLOG:

dmesg

The output should be similar to:

dmesg: klogctl: Operation not permitted

The tetra cli will generate the following ProcessKprobe events:

{
  "process_kprobe": {
    "process": {
      "exec_id": "a2luZC1jb250cm9sLXBsYW5lOjEyODQyNzgzMzUwNjg0OjczODYw",
      "pid": 73860,
      "uid": 0,
      "cwd": "/",
      "binary": "/bin/dmesg",
      "flags": "execve rootcwd clone",
      "start_time": "2023-07-06T10:13:33.834390020Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
        "name": "xwing",
        "container": {
          "id": "containerd://cfb961400ff25811d22d139a10f6a62efef53c2ecc11af47bc911a7f9a2ac1f7",
          "name": "spaceship",
          "image": {
            "id": "docker.io/tgraf/netperf@sha256:8e86f744bfea165fd4ce68caa05abc96500f40130b857773186401926af7e9e6",
            "name": "docker.io/tgraf/netperf:latest"
          },
          "start_time": "2023-07-06T08:07:30Z",
          "pid": 171
        },
        "pod_labels": {
          "app.kubernetes.io/name": "xwing",
          "class": "xwing",
          "org": "alliance"
        }
      },
      "docker": "cfb961400ff25811d22d139a10f6a62",
      "parent_exec_id": "a2luZC1jb250cm9sLXBsYW5lOjEyODQyMTI3MTIwOTcyOjczODUw",
      "refcnt": 1,
      "ns": {
        "uts": {
          "inum": 4026534655
        },
        "ipc": {
          "inum": 4026534656
        },
        "mnt": {
          "inum": 4026534731
        },
        "pid": {
          "inum": 4026534732
        },
        "pid_for_children": {
          "inum": 4026534732
        },
        "net": {
          "inum": 4026534512
        },
        "time": {
          "inum": 4026531834,
          "is_host": true
        },
        "time_for_children": {
          "inum": 4026531834,
          "is_host": true
        },
        "cgroup": {
          "inum": 4026534733
        },
        "user": {
          "inum": 4026531837,
          "is_host": true
        }
      },
      "tid": 73860
    },
    "parent": {
      "exec_id": "a2luZC1jb250cm9sLXBsYW5lOjEyODQyMTI3MTIwOTcyOjczODUw",
      "pid": 73850,
      "uid": 0,
      "cwd": "/",
      "binary": "/bin/bash",
      "flags": "execve rootcwd clone",
      "start_time": "2023-07-06T10:13:33.178160018Z",
      "auid": 4294967295,
      "pod": {
        "namespace": "default",
        "name": "xwing",
        "container": {
          "id": "containerd://cfb961400ff25811d22d139a10f6a62efef53c2ecc11af47bc911a7f9a2ac1f7",
          "name": "spaceship",
          "image": {
            "id": "docker.io/tgraf/netperf@sha256:8e86f744bfea165fd4ce68caa05abc96500f40130b857773186401926af7e9e6",
            "name": "docker.io/tgraf/netperf:latest"
          },
          "start_time": "2023-07-06T08:07:30Z",
          "pid": 165
        },
        "pod_labels": {
          "app.kubernetes.io/name": "xwing",
          "class": "xwing",
          "org": "alliance"
        }
      },
      "docker": "cfb961400ff25811d22d139a10f6a62",
      "parent_exec_id": "a2luZC1jb250cm9sLXBsYW5lOjEyODQyMDgxNTA3MzUzOjczODQx",
      "refcnt": 2,
      "tid": 73850
    },
    "function_name": "cap_capable",
    "args": [
      {
        "user_ns_arg": {
          "level": 0,
          "uid": 0,
          "gid": 0,
          "ns": {
            "inum": 4026531837,
            "is_host": true
          }
        }
      },
      {
        "capability_arg": {
          "value": 34,
          "name": "CAP_SYSLOG"
        }
      }
    ],
    "return": {
      "int_arg": -1
    },
    "action": "KPROBE_ACTION_POST"
  },
  "node_name": "kind-control-plane",
  "time": "2023-07-06T10:13:33.834882128Z"
}

In addition to the Kubernetes Identity and process metadata from exec events, ProcessKprobe events contain the arguments of the observed kernel call. In the above case they are:

  • function_name: the cap_capable kernel function.

  • user_ns_arg: is the user namespace where the capability is required.

    • level: is the nested level of the user namespace. Here it is zero which indicates the initial user namespace.
    • uid: is the user ID of the owner of the user namespace.
    • gid: is the group ID of the owner of the user namespace.
    • ns: details the information about the namespace. is_host indicates that the target user namespace where the capability is required is the host namespace.
  • capability_arg: is the capability required to perform the operation. In this example reading the kernel ring buffer.

    • value: is the integer number of the required capability.
    • name: is the name of the required capability. Here it is the CAP_SYSLOG.
  • return: indicates via the int_arg if the capability check succeeded or failed. 0 means it succeeded and the access was granted while -1 means it failed and the operation was denied.

To disable the creds-capability-usage policy run:

kubectl delete -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/tracingpolicy/process-credentials/creds-capability-usage.yaml

7 - Contribution Guide

How to contribute to the project

Welcome to Tetragon :) !

We’re happy you’re interested in contributing to the Tetragon project.

All contributions are welcome

While this document focuses on the technical details of how to submit patches to the Tetragon project, we value all kinds of contributions.

For example, actions that can greatly improve Tetragon and contribute to its success could be:

  • Write a blog post about Tetragon or one of its use cases; we will be happy to add a reference to it in resources.
  • Talk about Tetragon at conferences or meetups; similarly to a blog post, video recordings can be added to resources.
  • Share your usage of Tetragon on social platforms, and add yourself to the user list of the Cilium project as a Tetragon user.
  • Raise an issue on the repository about a bug, enhancement, or something else. See open a new issue.
  • Review a patch on the repository; this might look intimidating, but some simple pull requests would benefit from a fresh pair of eyes. See open pull requests.
  • Submit a patch to the Tetragon project, for code and documentation contribution. See the next section for a how-to guide.

Guide for code and docs contribution

This section of the Tetragon documentation will help you make sure you have an environment capable of testing changes to the Tetragon source code, and that you understand the workflow of getting these changes reviewed and merged upstream.

  1. Make sure you have a GitHub account.

  2. Fork the Tetragon repository to your GitHub user or organization. The repository is available under github.com/cilium/tetragon.

  3. (Optional) Turn off GitHub actions for your fork. This is recommended to avoid unnecessary CI notification failures on the fork.

  4. Clone your fork and set up the base repository as upstream remote:

    git clone https://github.com/${YOUR_GITHUB_USERNAME_OR_ORG}/tetragon.git
    cd tetragon
    git remote add upstream https://github.com/cilium/tetragon.git
    
  5. Prepare your development setup.

  6. Check out GitHub good first issues to find something to work on. If this is your first Tetragon issue, try to start with something small that you think you can do without too much external help. Also avoid assigning too many issues to yourself (see Don’t Lick the Cookie!).

  7. Follow the steps in making changes to start contributing.

  8. Learn how to run the tests or how to preview and contribute to the docs.

  9. Learn how to submit a pull request to the project.

  10. Please accept our gratitude for taking the time to improve Tetragon! :)

7.1 - Development setup

This will help you get started with your development setup to build Tetragon

Building and running Tetragon

For local development, you will likely want to build and run bare-metal Tetragon.

Requirements

  • A Go toolchain with the version specified in the main go.mod;
  • GNU make;
  • A running Docker service (you can use Podman as well);
  • For building tests, libcap and libelf (in Debian systems, e.g., install libelf-dev and libcap-dev).

Build everything

You can build most Tetragon targets as follows (this can take time as it builds all the targets needed for testing, see minimal build):

make

If you want to use podman instead of docker, you can do the following (assuming you need to use sudo with podman):

CONTAINER_ENGINE='sudo podman' make

You can ignore /bin/sh: docker: command not found in the output.

To build using the local clang, you can use:

CONTAINER_ENGINE='sudo podman' LOCAL_CLANG=1 LOCAL_CLANG_FORMAT=1 make

See Dockerfile.clang for the minimal required version of clang.

Minimal build

To build the tetragon binary, the BPF programs and the tetra CLI binary you can use:

make tetragon tetragon-bpf tetra

Run Tetragon

You should now have a ./tetragon binary, which can be run as follows:

sudo ./tetragon --bpf-lib bpf/objs

Notes:

  1. The --bpf-lib flag tells Tetragon where to look for its compiled BPF programs (which were built in the make step above).

  2. If Tetragon fails with an error "BTF discovery: candidate btf file does not exist", then make sure that your kernel supports BTF; otherwise, place a BTF file where Tetragon can read it and specify its path with the --btf flag. See more about that in the FAQ.

Running code generation

Tetragon uses code generation based on protoc to generate large amounts of boilerplate code based on our protobuf API. We similarly use automatic generation to maintain our k8s CRDs. Whenever you make changes to these files, you will be required to re-run code generation before your PR can be accepted.

To run codegen from protoc, run the following command from the root of the repository:

make protogen

And to run k8s CRD generation, run the following command from the root of the repository:

make crds

Finally, should you wish to modify any of the resulting codegen files (ending in .pb.go), do not modify them directly. Instead, you can edit the files in cmd/protoc-gen-go-tetragon and then re-run make protogen.

Running vendor

Tetragon uses multiple Go modules to separate the main module from api and pkg/k8s. Depending on your changes, you might need to vendor them; to do so, you can use:

make vendor

Note that the make protogen and make crds commands already vendor changes automatically.

Building and running a Docker image

The base kernel should support BTF, or a BTF file should be bind mounted on top of /var/lib/tetragon/btf inside the container.

To build Tetragon image:

make image

To run the image:

docker run --name tetragon \
   --rm -it -d --pid=host \
   --cgroupns=host --privileged \
   -v /sys/kernel/btf/vmlinux:/var/lib/tetragon/btf \
   cilium/tetragon:latest

Run the tetra binary to get Tetragon events:

docker exec -it tetragon \
   bash -c "/usr/bin/tetra getevents -o compact"

Building and running as a systemd service

To build Tetragon tarball:

make tarball

Running Tetragon in kind

This command will set up a kind cluster and install Tetragon in it. Ensure docker, kind, kubectl, and helm are installed.

# Setup tetragon on kind
make kind-setup

Verify that Tetragon is installed by running:

kubectl get pods -n kube-system

Local Development in Vagrant Box

If you are on an Intel Mac, use Vagrant to create a dev VM:

vagrant up
vagrant ssh
make

If you are getting an error, you can try to run sudo launchctl load /Library/LaunchDaemons/org.virtualbox.startup.plist (from a Stackoverflow answer).

What’s next

7.2 - Making changes

Learn how to make your first changes to the project
  1. Make sure the main branch of your fork is up-to-date:

    git fetch upstream
    git checkout main
    git merge upstream/main
    

    For further reference read GitHub syncing a fork documentation.

  2. Create a PR branch with a descriptive name, branching from main:

    git switch -c pr/${GITHUB_USERNAME_OR_ORG}/changes-to-something main
    
  3. Make the changes you want.

  4. Separate the changes into logical commits.

    • Describe the changes in the commit messages. Focus on answering the question why the change is required and document anything that might be unexpected.
    • If any description is required to understand your code changes, then those instructions should be code comments instead of statements in the commit description.
    • For submitting PRs, all commits need to be signed off (git commit -s). See the section Developer’s Certificate of Origin
  5. Make sure your changes meet the following criteria:

    • New code is covered by Integration Testing.
    • End to end integration / runtime tests have been extended or added. If not required, mention in the commit message what existing test covers the new code.
    • Follow-up commits are squashed together nicely. Commits should separate logical chunks of code and not represent a chronological list of changes.
  6. Run git diff --check to catch obvious white space violations

  7. Build Tetragon with your changes included.

What’s next

7.3 - Running tests

Learn how to run the tests of the project

Tetragon has several types of tests:

  • Go tests, composed of unit tests for the userspace Go code as well as tests that exercise both Go and BPF code.
  • BPF unit tests, testing specific BPF functions.
  • E2E tests: end-to-end tests that install Tetragon in Kubernetes clusters and check for specific features.

Those tests are running in the Tetragon CI on various kernels1 and various architectures (amd64 and arm64).

Go tests

To run the Go tests locally, you can use:

make test

Use EXTRA_TESTFLAGS to add flags to the go test command.

Test specific kernels

To run the Go tests on various kernel versions, we use vmtests with cilium/little-vm-helper in the CI; you can also use it locally for testing specific kernels. See the documentation at github.com/cilium/tetragon/tests/vmtests.

BPF unit tests

To run BPF unit tests, you can use:

make bpf-test

Those tests can be found under github.com/cilium/tetragon/bpf/tests. The framework uses Go tests with cilium/ebpf to run them; you can use BPFGOTESTFLAGS to add go test flags, for example make BPFGOTESTFLAGS="-v" bpf-test.

E2E tests

To run E2E tests, you can use:

make e2e-test

This will build the Tetragon image and use the e2e framework to create a kind cluster, install Tetragon, and run the tests. To avoid rebuilding the image before running the tests, use E2E_BUILD_IMAGES=0. You can use EXTRA_TESTFLAGS to add flags to the go test command.

What’s next


  1. For the detailed list, search for jobs.test.strategy.matrix.kernel in github.com/cilium/tetragon/.github/workflows/vmtests.yml ↩︎

7.4 - Documentation

Learn how to contribute to the documentation

Thank you for taking the time to improve Tetragon’s documentation.

Find the content

All the Tetragon documentation content can be found under github.com/cilium/tetragon/docs/content/en/docs.

Style to follow

We generally follow the Kubernetes docs style guide k8s.io/docs/contribute/style/style-guide.

Preview locally

To preview the documentation locally, use one of the methods below. Then browse to localhost:1313/docs; 1313 is the default port on which Hugo listens.

Using Docker

With a Docker service available, from the root of the repository, use:

make docs

You can also use make from the Makefile at the /docs folder level.

To clean up the container image built in the process, you can use:

make -C docs clean

Local Hugo installation

The documentation is a Hugo static website using the Docsy theme.

Please refer to dedicated guides on how to install Hugo+extended and how to tweak Docsy, but generally, to preview your work, from the /docs folder:

hugo server

7.5 - Submitting a pull request

Learn how to submit a pull request to the project

Submitting a pull request

Contributions must be submitted in the form of pull requests against the upstream GitHub repository at https://github.com/cilium/tetragon.

  1. Fork the Tetragon repository.

  2. Push your changes to the topic branch in your fork of the repository.

  3. Submit a pull request on https://github.com/cilium/tetragon.

Before hitting the submit button, please make sure that the following requirements have been met:

  1. Each commit compiles and is functional on its own to allow for bisecting of commits.

  2. All code is covered by unit and/or runtime tests where feasible.

  3. All changes have been tested and checked for regressions by running the existing testsuite against your changes.

  4. All commits contain a well-written commit description including a title, a description, and a Fixes: #XXX line if the commit addresses a particular GitHub issue identified by its number. Note that the GitHub issue will be automatically closed when the commit is merged.

    doc: add contribution guideline and how to submit pull requests
    
    Tetragon Open Source project was just released and it does not include
    default contributing guidelines.
    
    This patch fixes this by adding:
    
    1. CONTRIBUTING.md file in the root directory as suggested by github documentation: https://docs.github.com/en/communities/setting-up-your-project-for-healthy-contributions/setting-guidelines-for-repository-contributors
    
    2. Development guide under docs directory with a section on how to submit pull requests.
    
    3. Moves the DEVELOP.md file from root directory to the `docs/contributing/development/` one.
    
    Fixes: #33
    
    Signed-off-by: Djalal Harouni <djalal@cilium.io>
    

    Note: Make sure to include a blank line in between commit title and commit description.

  5. All commits are signed off. See the section Developer’s Certificate of Origin.

  6. All important steps in Making changes have been followed.

7.6 - Developer's certificate of origin

Learn about the “sign-off” procedure

To improve tracking of who did what, we’ve introduced a “sign-off” procedure. Make sure to read and apply the Developer’s Certificate of Origin.
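
In practice, the sign-off line is added by committing with git's -s/--signoff flag; for example (the commit message is illustrative):

git commit -s -m "docs: fix typo in daemon configuration"
# add a missing sign-off to the most recent commit
git commit --amend --signoff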

8 - Reference

Low level reference documentation for Tetragon

8.1 - Daemon Configuration

Explore Tetragon options and configuration mechanisms.

Tetragon's default controlling settings are set at compile time, so configuration is only needed when it is necessary to deviate from those defaults. This document lists those controlling settings and how they can be set as CLI arguments or as configuration options in YAML files.

Options

The following table lists all available Tetragon daemon options and is automatically generated using the --generate-docs flag of the tetragon binary. The same information can also be retrieved using --help.

| Flag | Usage | Default Value |
|------|-------|---------------|
| --bpf-lib | Location of Tetragon libs (btf and bpf files) | /var/lib/tetragon/ |
| --btf | Location of btf | |
| --cgroup-rate | Base sensor events cgroup rate <events,interval>, disabled by default ('1000/1s' means rate 1000 events per second) | |
| --config-dir | Configuration directory that contains a file for each option | |
| --cpuprofile | Store CPU profile into provided file | |
| --data-cache-size | Size of the data events cache | 1024 |
| --debug | Enable debug messages. Equivalent to '--log-level=debug' | false |
| --disable-kprobe-multi | Allow to disable kprobe multi interface | false |
| --enable-export-aggregation | Enable JSON export aggregation | false |
| --enable-k8s-api | Access Kubernetes API to associate Tetragon events with Kubernetes pods | false |
| --enable-msg-handling-latency | Enable metrics for message handling latency | false |
| --enable-pid-set-filter | Enable pidSet export filters. Not recommended for production use | false |
| --enable-pod-info | Enable PodInfo custom resource | false |
| --enable-policy-filter | Enable policy filter code (beta) | false |
| --enable-policy-filter-debug | Enable policy filter debug messages | false |
| --enable-process-ancestors | Include ancestors in process exec events | true |
| --enable-process-cred | Enable process_cred events | false |
| --enable-process-ns | Enable namespace information in process_exec and process_kprobe events | false |
| --enable-tracing-policy-crd | Enable TracingPolicy and TracingPolicyNamespaced custom resources | true |
| --event-queue-size | Set the size of the internal event queue | 10000 |
| --export-aggregation-buffer-size | Aggregator channel buffer size | 10000 |
| --export-aggregation-window-size | JSON export aggregation time window | 15s |
| --export-allowlist | JSON export allowlist | |
| --export-denylist | JSON export denylist | |
| --export-file-compress | Compress rotated JSON export files | false |
| --export-file-max-backups | Number of rotated JSON export files to retain | 5 |
| --export-file-max-size-mb | Size in MB for rotating JSON export files | 10 |
| --export-file-perm | Access permissions on JSON export files | 600 |
| --export-file-rotation-interval | Interval at which to rotate JSON export files in addition to rotating them by size | 0s |
| --export-filename | Filename for JSON export. Disabled by default | |
| --export-rate-limit | Rate limit (per minute) for event export. Set to -1 to disable | -1 |
| --expose-kernel-addresses | Expose real kernel addresses in events stack traces | false |
| --expose-stack-addresses | Expose real linear addresses in events stack traces | false |
| --field-filters | Field filters for event exports | |
| --force-large-progs | Force loading large programs, even in kernels with < 5.3 versions | false |
| --force-small-progs | Force loading small programs, even in kernels with >= 5.3 versions | false |
| --generate-docs | Generate documentation in YAML format to stdout | false |
| --gops-address | gops server address (e.g. 'localhost:8118'). Disabled by default | |
| --help | help for tetragon | false |
| --k8s-kubeconfig-path | Absolute path of the kubernetes kubeconfig file | |
| --kernel | Kernel version | |
| --kmods | List of kernel modules to load symbols from | [] |
| --log-format | Set log format | text |
| --log-level | Set log level | info |
| --memprofile | Store MEM profile into provided file | |
| --metrics-label-filter | Comma-separated list of enabled metrics labels. Unknown labels will be ignored. | namespace,workload,pod,binary |
| --metrics-server | Metrics server address (e.g. ':2112'). Disabled by default | |
| --netns-dir | Network namespace dir | /var/run/docker/netns/ |
| --pprof-addr | Profile via pprof http | |
| --process-cache-size | Size of the process cache | 65536 |
| --procfs | Location of procfs to consume existing PIDs | /proc/ |
| --rb-queue-size | Set size of channel between ring buffer and sensor go routines (default 65k, allows K/M/G suffix) | 65535 |
| --rb-size | Set perf ring buffer size for single cpu (default 65k, allows K/M/G suffix) | 0 |
| --rb-size-total | Set perf ring buffer size in total for all cpus (default 65k per cpu, allows K/M/G suffix) | 0 |
| --redaction-filters | Redaction filters for events | |
| --release-pinned-bpf | Release all pinned BPF programs and maps in Tetragon BPF directory. Enabled by default. Set to false to disable | true |
| --server-address | gRPC server address (e.g. 'localhost:54321' or 'unix:///var/run/tetragon/tetragon.sock') | localhost:54321 |
| --tracing-policy | Tracing policy file to load at startup | |
| --tracing-policy-dir | Directory from where to load Tracing Policies | /etc/tetragon/tetragon.tp.d |
| --verbose | Set verbosity level for eBPF verifier dumps. Pass 0 for silent, 1 for truncated logs, 2 for a full dump | 0 |

Configuration precedence

Tetragon controlling settings can also be loaded from YAML configuration files according to this order:

  1. From the drop-in configuration snippets inside the following directories where each filename maps to one controlling setting and the content of the file to its corresponding value:

    • /usr/lib/tetragon/tetragon.conf.d/*
    • /usr/local/lib/tetragon/tetragon.conf.d/*
  2. From the configuration file /etc/tetragon/tetragon.yaml if available, overriding previous settings.

  3. From the drop-in configuration snippets inside /etc/tetragon/tetragon.conf.d/*, similarly overriding previous settings.

  4. If the config-dir setting is set, Tetragon loads its settings from the files inside the directory pointed to by this option, overriding previous controlling settings. The config-dir setting is also used when settings are provided through the Kubernetes ConfigMap.

When reading configuration from directories, each filename maps to one controlling setting. If the same controlling setting is set multiple times, the value read last overrides the previous ones.

To summarize the configuration precedence:

  1. Drop-in directory pointed by --config-dir.

  2. Drop-in directory /etc/tetragon/tetragon.conf.d/*.

  3. Configuration file /etc/tetragon/tetragon.yaml.

  4. Drop-in directories:

    • /usr/local/lib/tetragon/tetragon.conf.d/*
    • /usr/lib/tetragon/tetragon.conf.d/*

Configuration examples

The examples/configuration/tetragon.yaml file contains example entries showing the defaults as a guide to the administrator. Local overrides can be created by editing and copying this file into /etc/tetragon/tetragon.yaml, or by editing and copying “drop-ins” from the examples/configuration/tetragon.conf.d directory into the /etc/tetragon/tetragon.conf.d/ subdirectory. The latter is generally recommended.

Each filename maps to one controlling setting and the content of the file to its corresponding value. This is the recommended way.

Example of changing the configuration (a shell sketch for creating these drop-ins follows the list):

  • /etc/tetragon/tetragon.conf.d/bpf-lib with a corresponding value of:

    /var/lib/tetragon/
    
  • /etc/tetragon/tetragon.conf.d/log-format with a corresponding value of:

    text
    
  • /etc/tetragon/tetragon.conf.d/export-filename with a corresponding value of:

    /var/log/tetragon/tetragon.log
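
As a sketch, these drop-in files can be created from a shell (writing under /etc/tetragon requires root):

sudo mkdir -p /etc/tetragon/tetragon.conf.d
echo '/var/lib/tetragon/' | sudo tee /etc/tetragon/tetragon.conf.d/bpf-lib
echo 'text' | sudo tee /etc/tetragon/tetragon.conf.d/log-format
echo '/var/log/tetragon/tetragon.log' | sudo tee /etc/tetragon/tetragon.conf.d/export-filename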
    

Restrict gRPC API access

The gRPC API supports unix sockets, which can be set using one of the following methods:

  • Use the --server-address flag:

    --server-address unix:///var/run/tetragon/tetragon.sock
    
  • Or use the drop-in configuration file /etc/tetragon/tetragon.conf.d/server-address containing:

    unix:///var/run/tetragon/tetragon.sock
    

Then, to access the gRPC API with the tetra client, set --server-address to point to the corresponding address:

sudo tetra --server-address unix:///var/run/tetragon/tetragon.sock getevents

Configure Tracing Policies location

The Tetragon daemon automatically loads tracing policies from the default /etc/tetragon/tetragon.tp.d/ directory. Tracing policies can be organized in subdirectories such as /etc/tetragon/tetragon.tp.d/file-access, /etc/tetragon/tetragon.tp.d/network-access, etc.

The --tracing-policy-dir controlling setting can be used to change the default directory from where Tracing policies are loaded.

The --tracing-policy controlling setting can be used to specify the path of one tracing policy to load.
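
For example, both settings can also be passed on the command line (the single policy path below is illustrative):

tetragon --tracing-policy-dir /etc/tetragon/tetragon.tp.d/
tetragon --tracing-policy /etc/tetragon/policies/file-access.yaml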

8.2 - Helm chart

This reference is generated from the Tetragon Helm chart values.

The Tetragon Helm chart source is available under github.com/cilium/tetragon/install/kubernetes/tetragon and is distributed from the Cilium helm charts repository helm.cilium.io.

To deploy Tetragon using this Helm chart you can run the following commands:

helm repo add cilium https://helm.cilium.io
helm repo update
helm install tetragon cilium/tetragon -n kube-system

To use the available values with helm install or helm upgrade, use --set key=value.
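
For example, to enable process credential and namespace events on an existing release (keys taken from the values table below):

helm upgrade tetragon cilium/tetragon -n kube-system --set tetragon.enableProcessCred=true --set tetragon.enableProcessNs=true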

Values

KeyTypeDefaultDescription
affinityobject{}
crds.installMethodstring"operator"Method for installing CRDs. Supported values are: “operator”, “helm” and “none”. The “operator” method allows for fine-grained control over which CRDs are installed and by default doesn’t perform CRD downgrades. These can be configured in tetragonOperator section. The “helm” method always installs all CRDs for the chart version.
daemonSetAnnotationsobject{}
daemonSetLabelsOverrideobject{}
dnsPolicystring"Default"
enabledbooltrueGlobal settings
exportobject{"filenames":["tetragon.log"],"mode":"stdout","resources":{},"securityContext":{},"stdout":{"argsOverride":[],"commandOverride":[],"enabledArgs":true,"enabledCommand":true,"extraEnv":[],"extraVolumeMounts":[],"image":{"override":null,"repository":"quay.io/cilium/hubble-export-stdout","tag":"v1.0.4"}}}Tetragon event settings
exportDirectorystring"/var/run/cilium/tetragon"
exportFileCreationIntervalstring"120s"
extraConfigmapMountslist[]
extraHostPathMountslist[]
extraVolumeslist[]
hostNetworkbooltrue
imagePullPolicystring"IfNotPresent"
imagePullSecretslist[]
nodeSelectorobject{}
podAnnotationsobject{}
podLabelsobject{}
podLabelsOverrideobject{}
podSecurityContextobject{}
priorityClassNamestring""Tetragon agent settings
selectorLabelsOverrideobject{}
serviceAccount.annotationsobject{}
serviceAccount.createbooltrue
serviceAccount.namestring""
serviceLabelsOverrideobject{}
tetragon.argsOverridelist[]
tetragon.btfstring""
tetragon.commandOverridelist[]
tetragon.enableK8sAPIbooltrue
tetragon.enableMsgHandlingLatencyboolfalseEnable latency monitoring in message handling
tetragon.enablePolicyFilterbooltrueEnable policy filter. This is required for K8s namespace and pod-label filtering.
tetragon.enablePolicyFilterDebugboolfalseEnable policy filter debug messages.
tetragon.enableProcessCredboolfalse
tetragon.enableProcessNsboolfalse
tetragon.enabledbooltrue
tetragon.exportAllowListstring"{\"event_set\":[\"PROCESS_EXEC\", \"PROCESS_EXIT\", \"PROCESS_KPROBE\", \"PROCESS_UPROBE\", \"PROCESS_TRACEPOINT\"]}"
tetragon.exportDenyListstring"{\"health_check\":true}\n{\"namespace\":[\"\", \"cilium\", \"kube-system\"]}"
tetragon.exportFileCompressboolfalse
tetragon.exportFileMaxBackupsint5
tetragon.exportFileMaxSizeMBint10
tetragon.exportFilePermstring"600"
tetragon.exportFilenamestring"tetragon.log"
tetragon.exportRateLimitint-1
tetragon.extraArgsobject{}
tetragon.extraEnvlist[]
tetragon.extraVolumeMountslist[]
tetragon.fieldFiltersstring""
tetragon.gops.addressstring"localhost"The address at which to expose gops.
tetragon.gops.portint8118The port at which to expose gops.
tetragon.grpc.addressstring"localhost:54321"The address at which to expose gRPC. Examples: localhost:54321, unix:///var/run/tetragon/tetragon.sock
tetragon.grpc.enabledbooltrueWhether to enable exposing Tetragon gRPC.
tetragon.hostProcPathstring"/proc"Location of the host proc filesystem in the runtime environment. If the runtime runs in the host, the path is /proc. Exceptions to this are environments like kind, where the runtime itself does not run on the host.
tetragon.image.overridestringnil
tetragon.image.repositorystring"quay.io/cilium/tetragon"
tetragon.image.tagstring"v1.1.0"
tetragon.ociHookSetupobject{"enabled":false,"extraVolumeMounts":[],"installDir":"/opt/tetragon","interface":"oci-hooks","resources":{},"securityContext":{"privileged":true}}Configure tetragon’s init container for setting up tetragon-oci-hook on the host
tetragon.ociHookSetup.enabledboolfalseenable init container to setup tetragon-oci-hook
tetragon.ociHookSetup.extraVolumeMountslist[]Extra volume mounts to add to the oci-hook-setup init container
tetragon.ociHookSetup.interfacestring"oci-hooks"interface specifies how the hook is configured. There is only one available value for now: “oci-hooks” (https://github.com/containers/common/blob/main/pkg/hooks/docs/oci-hooks.5.md).
tetragon.ociHookSetup.resourcesobject{}resources for the oci-hook-setup init container
tetragon.ociHookSetup.securityContextobject{"privileged":true}Security context for oci-hook-setup init container
tetragon.processCacheSizeint65536
tetragon.prometheus.addressstring""The address at which to expose metrics. Set it to "" to expose on all available interfaces.
tetragon.prometheus.enabledbooltrueWhether to enable exposing Tetragon metrics.
tetragon.prometheus.metricsLabelFilterstring"namespace,workload,pod,binary"Comma-separated list of enabled metrics labels. The configurable labels are: namespace, workload, pod, binary. Unknown labels will be ignored. Removing some labels from the list might help reduce the metrics cardinality if needed.
tetragon.prometheus.portint2112The port at which to expose metrics.
tetragon.prometheus.serviceMonitor.enabledboolfalseWhether to create a ‘ServiceMonitor’ resource targeting the tetragon pods.
tetragon.prometheus.serviceMonitor.labelsOverrideobject{}The set of labels to place on the ‘ServiceMonitor’ resource.
tetragon.prometheus.serviceMonitor.scrapeIntervalstring"10s"Interval at which metrics should be scraped. If not specified, Prometheus’ global scrape interval is used.
tetragon.redactionFiltersstring""
tetragon.resourcesobject{}
tetragon.securityContext.privilegedbooltrue
tetragonOperatorobject{"affinity":{},"annotations":{},"enabled":true,"extraLabels":{},"extraPodLabels":{},"extraVolumeMounts":[],"extraVolumes":[],"forceUpdateCRDs":false,"image":{"override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/tetragon-operator","tag":"v1.1.0"},"nodeSelector":{},"podAnnotations":{},"podInfo":{"enabled":false},"podSecurityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}},"priorityClassName":"","prometheus":{"address":"","enabled":true,"port":2113,"serviceMonitor":{"enabled":false,"labelsOverride":{},"scrapeInterval":"10s"}},"resources":{"limits":{"cpu":"500m","memory":"128Mi"},"requests":{"cpu":"10m","memory":"64Mi"}},"securityContext":{},"serviceAccount":{"annotations":{},"create":true,"name":""},"skipCRDCreation":false,"strategy":{},"tolerations":[{"operator":"Exists"}],"tracingPolicy":{"enabled":true}}Tetragon Operator settings
tetragonOperator.annotationsobject{}Annotations for the Tetragon Operator Deployment.
tetragonOperator.enabledbooltrueEnables the Tetragon Operator.
tetragonOperator.extraLabelsobject{}Extra labels to be added on the Tetragon Operator Deployment.
tetragonOperator.extraPodLabelsobject{}Extra labels to be added on the Tetragon Operator Deployment Pods.
tetragonOperator.extraVolumeslist[]Extra volumes for the Tetragon Operator Deployment.
tetragonOperator.imageobject{"override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/tetragon-operator","tag":"v1.1.0"}tetragon-operator image.
tetragonOperator.nodeSelectorobject{}Steer the Tetragon Operator Deployment Pod placement via nodeSelector, tolerations and affinity rules.
tetragonOperator.podAnnotationsobject{}Annotations for the Tetragon Operator Deployment Pods.
tetragonOperator.podInfo.enabledboolfalseEnables the PodInfo CRD and the controller that reconciles PodInfo custom resources.
tetragonOperator.podSecurityContextobject{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}}securityContext for the Tetragon Operator Deployment Pod container.
tetragonOperator.priorityClassNamestring""priorityClassName for the Tetragon Operator Deployment Pods.
tetragonOperator.prometheusobject{"address":"","enabled":true,"port":2113,"serviceMonitor":{"enabled":false,"labelsOverride":{},"scrapeInterval":"10s"}}Enables the Tetragon Operator metrics.
tetragonOperator.prometheus.addressstring""The address at which to expose Tetragon Operator metrics. Set it to "" to expose on all available interfaces.
tetragonOperator.prometheus.portint2113The port at which to expose metrics.
tetragonOperator.prometheus.serviceMonitorobject{"enabled":false,"labelsOverride":{},"scrapeInterval":"10s"}The labels to include with supporting metrics.
tetragonOperator.prometheus.serviceMonitor.enabledboolfalseWhether to create a ‘ServiceMonitor’ resource targeting the tetragonOperator pods.
tetragonOperator.prometheus.serviceMonitor.labelsOverrideobject{}The set of labels to place on the ‘ServiceMonitor’ resource.
tetragonOperator.prometheus.serviceMonitor.scrapeIntervalstring"10s"Interval at which metrics should be scraped. If not specified, Prometheus’ global scrape interval is used.
tetragonOperator.resourcesobject{"limits":{"cpu":"500m","memory":"128Mi"},"requests":{"cpu":"10m","memory":"64Mi"}}resources for the Tetragon Operator Deployment Pod container.
tetragonOperator.securityContextobject{}securityContext for the Tetragon Operator Deployment Pods.
tetragonOperator.serviceAccountobject{"annotations":{},"create":true,"name":""}tetragon-operator service account.
tetragonOperator.skipCRDCreationboolfalseDEPRECATED. This value will be removed in Tetragon v1.2 release. Use crds.installMethod instead. Skip CRD creation.
tetragonOperator.strategyobject{}resources for the Tetragon Operator Deployment update strategy
tetragonOperator.tracingPolicy.enabledbooltrueEnables the TracingPolicy and TracingPolicyNamespaced CRD creation.
tolerations[0].operatorstring"Exists"
updateStrategyobject{}

8.3 - gRPC API

This reference is generated from the protocol buffer specification and documents the gRPC API of Tetragon.

The Tetragon API is an independent Go module that can be found in the Tetragon repository under api. Version 1 of this API is defined in github.com/cilium/tetragon/api/v1/tetragon.

tetragon/capabilities.proto

CapabilitiesType

NameNumberDescription
CAP_CHOWN0In a system with the [_POSIX_CHOWN_RESTRICTED] option defined, this overrides the restriction of changing file ownership and group ownership.
DAC_OVERRIDE1Override all DAC access, including ACL execute access if [_POSIX_ACL] is defined. Excluding DAC access covered by CAP_LINUX_IMMUTABLE.
CAP_DAC_READ_SEARCH2Overrides all DAC restrictions regarding read and search on files and directories, including ACL restrictions if [_POSIX_ACL] is defined. Excluding DAC access covered by CAP_LINUX_IMMUTABLE.
CAP_FOWNER3Overrides all restrictions about allowed operations on files, where file owner ID must be equal to the user ID, except where CAP_FSETID is applicable. It doesn't override MAC and DAC restrictions.
CAP_FSETID4Overrides the following restrictions that the effective user ID shall match the file owner ID when setting the S_ISUID and S_ISGID bits on that file; that the effective group ID (or one of the supplementary group IDs) shall match the file owner ID when setting the S_ISGID bit on that file; that the S_ISUID and S_ISGID bits are cleared on successful return from chown(2) (not implemented).
CAP_KILL5Overrides the restriction that the real or effective user ID of a process sending a signal must match the real or effective user ID of the process receiving the signal.
CAP_SETGID6Allows forged gids on socket credentials passing.
CAP_SETUID7Allows forged pids on socket credentials passing.
CAP_SETPCAP8Without VFS support for capabilities: Transfer any capability in your permitted set to any pid, remove any capability in your permitted set from any pid With VFS support for capabilities (neither of above, but) Add any capability from current's capability bounding set to the current process' inheritable set Allow taking bits out of capability bounding set Allow modification of the securebits for a process
CAP_LINUX_IMMUTABLE9Allow modification of S_IMMUTABLE and S_APPEND file attributes
CAP_NET_BIND_SERVICE10Allows binding to ATM VCIs below 32
CAP_NET_BROADCAST11Allow broadcasting, listen to multicast
CAP_NET_ADMIN12Allow activation of ATM control sockets
CAP_NET_RAW13Allow binding to any address for transparent proxying (also via NET_ADMIN)
CAP_IPC_LOCK14Allow mlock and mlockall (which doesn't really have anything to do with IPC)
CAP_IPC_OWNER15Override IPC ownership checks
CAP_SYS_MODULE16Insert and remove kernel modules - modify kernel without limit
CAP_SYS_RAWIO17Allow sending USB messages to any device via /dev/bus/usb
CAP_SYS_CHROOT18Allow use of chroot()
CAP_SYS_PTRACE19Allow ptrace() of any process
CAP_SYS_PACCT20Allow configuration of process accounting
CAP_SYS_ADMIN21Allow everything under CAP_BPF and CAP_PERFMON for backward compatibility
CAP_SYS_BOOT22Allow use of reboot()
CAP_SYS_NICE23Allow setting cpu affinity on other processes
CAP_SYS_RESOURCE24Control memory reclaim behavior
CAP_SYS_TIME25Allow setting the real-time clock
CAP_SYS_TTY_CONFIG26Allow vhangup() of tty
CAP_MKNOD27Allow the privileged aspects of mknod()
CAP_LEASE28Allow taking of leases on files
CAP_AUDIT_WRITE29Allow writing the audit log via unicast netlink socket
CAP_AUDIT_CONTROL30Allow configuration of audit via unicast netlink socket
CAP_SETFCAP31Set or remove capabilities on files
CAP_MAC_OVERRIDE32Override MAC access. The base kernel enforces no MAC policy. An LSM may enforce a MAC policy, and if it does and it chooses to implement capability based overrides of that policy, this is the capability it should use to do so.
CAP_MAC_ADMIN33Allow MAC configuration or state changes. The base kernel requires no MAC configuration. An LSM may enforce a MAC policy, and if it does and it chooses to implement capability based checks on modifications to that policy or the data required to maintain it, this is the capability it should use to do so.
CAP_SYSLOG34Allow configuring the kernel's syslog (printk behaviour)
CAP_WAKE_ALARM35Allow triggering something that will wake the system
CAP_BLOCK_SUSPEND36Allow preventing system suspends
CAP_AUDIT_READ37Allow reading the audit log via multicast netlink socket
CAP_PERFMON38Allow system performance and observability privileged operations using perf_events, i915_perf and other kernel subsystems
CAP_BPF39CAP_BPF allows the following BPF operations: - Creating all types of BPF maps - Advanced verifier features - Indirect variable access - Bounded loops - BPF to BPF function calls - Scalar precision tracking - Larger complexity limits - Dead code elimination - And potentially other features - Loading BPF Type Format (BTF) data - Retrieve xlated and JITed code of BPF programs - Use bpf_spin_lock() helper CAP_PERFMON relaxes the verifier checks further: - BPF progs can use of pointer-to-integer conversions - speculation attack hardening measures are bypassed - bpf_probe_read to read arbitrary kernel memory is allowed - bpf_trace_printk to print kernel memory is allowed CAP_SYS_ADMIN is required to use bpf_probe_write_user. CAP_SYS_ADMIN is required to iterate system wide loaded programs, maps, links, BTFs and convert their IDs to file descriptors. CAP_PERFMON and CAP_BPF are required to load tracing programs. CAP_NET_ADMIN and CAP_BPF are required to load networking programs.
CAP_CHECKPOINT_RESTORE40Allow writing to ns_last_pid

ProcessPrivilegesChanged

Reasons of why the process privileges changed.

NameNumberDescription
PRIVILEGES_CHANGED_UNSET0
PRIVILEGES_RAISED_EXEC_FILE_CAP1A privilege elevation happened due to the execution of a binary with file capability sets. The kernel supports associating capability sets with an executable file using setcap command. The file capability sets are stored in an extended attribute (see https://man7.org/linux/man-pages/man7/xattr.7.html) named security.capability. The file capability sets, in conjunction with the capability sets of the process, determine the process capabilities and privileges after the execve system call. For further reference, please check sections File capability extended attribute versioning and Namespaced file capabilities of the capabilities man pages: https://man7.org/linux/man-pages/man7/capabilities.7.html. The new granted capabilities can be listed inside the process object.
PRIVILEGES_RAISED_EXEC_FILE_SETUID2A privilege elevation happened due to the execution of a binary with set-user-ID to root. When a process with nonzero UIDs executes a binary with a set-user-ID to root also known as suid-root executable, then the kernel switches the effective user ID to 0 (root) which is a privilege elevation operation since it grants access to resources owned by the root user. The effective user ID is listed inside the process_credentials part of the process object. For further reading, section Capabilities and execution of programs by root of https://man7.org/linux/man-pages/man7/capabilities.7.html. Afterward the kernel recalculates the capability sets of the process and grants all capabilities in the permitted and effective capability sets, except those masked out by the capability bounding set. If the binary also have file capability sets then these bits are honored and the process gains just the capabilities granted by the file capability sets (i.e., not all capabilities, as it would occur when executing a set-user-ID to root binary that does not have any associated file capabilities). This is described in section Set-user-ID-root programs that have file capabilities of https://man7.org/linux/man-pages/man7/capabilities.7.html. The new granted capabilities can be listed inside the process object. There is one exception for the special treatments of set-user-ID to root execution receiving all capabilities, if the SecBitNoRoot bit of the Secure bits is set, then the kernel does not grant any capability. Please check section: The securebits flags: establishing a capabilities-only environment of the capabilities man pages: https://man7.org/linux/man-pages/man7/capabilities.7.html
PRIVILEGES_RAISED_EXEC_FILE_SETGID3A privilege elevation happened due to the execution of a binary with set-group-ID to root. When a process with nonzero GIDs executes a binary with a set-group-ID to root, the kernel switches the effective group ID to 0 (root) which is a privilege elevation operation since it grants access to resources owned by the root group. The effective group ID is listed inside the process_credentials part of the process object.

SecureBitsType

NameNumberDescription
SecBitNotSet0
SecBitNoRoot1When set UID 0 has no special privileges. When unset, inheritance of root-permissions and suid-root executable under compatibility mode is supported. If the effective uid of the new process is 0 then the effective and inheritable bitmasks of the executable file is raised. If the real uid is 0, the effective (legacy) bit of the executable file is raised.
SecBitNoRootLocked2Make bit-0 SecBitNoRoot immutable
SecBitNoSetUidFixup4When set, setuid to/from uid 0 does not trigger capability-"fixup". When unset, to provide compatibility with old programs relying on set*uid to gain/lose privilege, transitions to/from uid 0 cause capabilities to be gained/lost.
SecBitNoSetUidFixupLocked8Make bit-2 SecBitNoSetUidFixup immutable
SecBitKeepCaps16When set, a process can retain its capabilities even after transitioning to a non-root user (the set-uid fixup suppressed by bit 2). Bit-4 is cleared when a process calls exec(); setting both bit 4 and 5 will create a barrier through exec that no exec()'d child can use this feature again.
SecBitKeepCapsLocked32Make bit-4 SecBitKeepCaps immutable
SecBitNoCapAmbientRaise64When set, a process cannot add new capabilities to its ambient set.
SecBitNoCapAmbientRaiseLocked128Make bit-6 SecBitNoCapAmbientRaise immutable

tetragon/tetragon.proto

BinaryProperties

FieldTypeLabelDescription
setuidgoogle.protobuf.UInt32ValueIf set then this is the set user ID used for execution
setgidgoogle.protobuf.UInt32ValueIf set then this is the set group ID used for execution
privileges_changedProcessPrivilegesChangedrepeatedThe reasons why this binary execution changed privileges. Usually this happens when the process executes a binary with the set-user-ID to root or file capability sets. The final granted privileges can be listed inside the process_credentials or capabilities fields part of the process object.
fileFilePropertiesFile properties in case the executed binary is: 1. An anonymous shared memory file https://man7.org/linux/man-pages/man7/shm_overview.7.html. 2. An anonymous file obtained with memfd API https://man7.org/linux/man-pages/man2/memfd_create.2.html. 3. Or it was deleted from the file system.

Capabilities

FieldTypeLabelDescription
permittedCapabilitiesTyperepeatedPermitted set indicates what capabilities the process can use. This is a limiting superset for the effective capabilities that the thread may assume. It is also a limiting superset for the capabilities that may be added to the inheritable set by a thread without the CAP_SETPCAP in its effective set.
effectiveCapabilitiesTyperepeatedEffective set indicates what capabilities are active in a process. This is the set used by the kernel to perform permission checks for the thread.
inheritableCapabilitiesTyperepeatedInheritable set indicates which capabilities will be inherited by the current process when running as a root user.

Container

FieldTypeLabelDescription
idstringIdentifier of the container.
namestringName of the container.
imageImageImage of the container.
start_timegoogle.protobuf.TimestampStart time of the container.
pidgoogle.protobuf.UInt32ValueProcess identifier in the container namespace.
maybe_exec_probeboolIf this is set true, it means that the process might have been originated from a Kubernetes exec probe. For this field to be true, the following must be true: 1. The binary field matches the first element of the exec command list for either liveness or readiness probe excluding the basename. For example, "/bin/ls" and "ls" are considered a match. 2. The arguments field exactly matches the rest of the exec command list.

CreateContainer

CreateContainer informs the agent that a container was created. This is intended to be used by OCI hooks (but not limited to them) and corresponds to the CreateContainer hook: https://github.com/opencontainers/runtime-spec/blob/main/config.md#createcontainer-hooks.

FieldTypeLabelDescription
cgroupsPathstringcgroupsPath is the cgroups path for the container. The path is expected to be relative to the cgroups mountpoint. See: https://github.com/opencontainers/runtime-spec/blob/58ec43f9fc39e0db229b653ae98295bfde74aeab/specs-go/config.go#L174
rootDirstringrootDir is the absolute path of the root directory of the container. See: https://github.com/opencontainers/runtime-spec/blob/main/specs-go/config.go#L174
annotationsCreateContainer.AnnotationsEntryrepeatedannotations are the run-time annotations for the container see https://github.com/opencontainers/runtime-spec/blob/main/config.md#annotations
containerNamestringcontainerName is the name of the container

CreateContainer.AnnotationsEntry

FieldTypeLabelDescription
keystring
valuestring

FileProperties

FieldTypeLabelDescription
inodeInodePropertiesInode of the file
pathstringPath of the file

GetHealthStatusRequest

FieldTypeLabelDescription
event_setHealthStatusTyperepeated

GetHealthStatusResponse

FieldTypeLabelDescription
health_statusHealthStatusrepeated

HealthStatus

FieldTypeLabelDescription
eventHealthStatusType
statusHealthStatusResult
detailsstring

Image

FieldTypeLabelDescription
idstringIdentifier of the container image composed of the registry path and the sha256.
namestringName of the container image composed of the registry path and the tag.

InodeProperties

FieldTypeLabelDescription
numberuint64The inode number
linksgoogle.protobuf.UInt32ValueThe inode links on the file system. If zero, it means the file is only in memory

KernelModule

FieldTypeLabelDescription
namestringKernel module name
signature_okgoogle.protobuf.BoolValueIf true the module signature was verified successfully. Depends on kernels compiled with CONFIG_MODULE_SIG option, for details please read: https://www.kernel.org/doc/Documentation/admin-guide/module-signing.rst
taintedTaintedBitsTyperepeatedThe module tainted flags that will be applied on the kernel. For further details please read: https://docs.kernel.org/admin-guide/tainted-kernels.html

KprobeArgument

FieldTypeLabelDescription
string_argstring
int_argint32
skb_argKprobeSkb
size_arguint64
bytes_argbytes
path_argKprobePath
file_argKprobeFile
truncated_bytes_argKprobeTruncatedBytes
sock_argKprobeSock
cred_argKprobeCred
long_argint64
bpf_attr_argKprobeBpfAttr
perf_event_argKprobePerfEvent
bpf_map_argKprobeBpfMap
uint_arguint32
user_namespace_argKprobeUserNamespaceDeprecated.
capability_argKprobeCapability
process_credentials_argProcessCredentials
user_ns_argUserNamespace
module_argKernelModule
kernel_cap_t_argstringCapabilities in hexadecimal format.
cap_inheritable_argstringCapabilities inherited by a forked process in hexadecimal format.
cap_permitted_argstringCapabilities that are currently permitted in hexadecimal format.
cap_effective_argstringCapabilities that are actually used in hexadecimal format.
linux_binprm_argKprobeLinuxBinprm
net_dev_argKprobeNetDev
labelstring

KprobeBpfAttr

FieldTypeLabelDescription
ProgTypestring
InsnCntuint32
ProgNamestring

KprobeBpfMap

FieldTypeLabelDescription
MapTypestring
KeySizeuint32
ValueSizeuint32
MaxEntriesuint32
MapNamestring

KprobeCapability

FieldTypeLabelDescription
valuegoogle.protobuf.Int32Value
namestring

KprobeCred

FieldTypeLabelDescription
permittedCapabilitiesTyperepeated
effectiveCapabilitiesTyperepeated
inheritableCapabilitiesTyperepeated

KprobeFile

FieldTypeLabelDescription
mountstring
pathstring
flagsstring
permissionstring

KprobeLinuxBinprm

FieldTypeLabelDescription
pathstring
flagsstring
permissionstring

KprobeNetDev

FieldTypeLabelDescription
namestring

KprobePath

FieldTypeLabelDescription
mountstring
pathstring
flagsstring
permissionstring

KprobePerfEvent

FieldTypeLabelDescription
KprobeFuncstring
Typestring
Configuint64
ProbeOffsetuint64

KprobeSkb

FieldTypeLabelDescription
hashuint32
lenuint32
priorityuint32
markuint32
saddrstring
daddrstring
sportuint32
dportuint32
protouint32
sec_path_lenuint32
sec_path_olenuint32
protocolstring
familystring

KprobeSock

FieldTypeLabelDescription
familystring
typestring
protocolstring
markuint32
priorityuint32
saddrstring
daddrstring
sportuint32
dportuint32
cookieuint64
statestring

KprobeTruncatedBytes

FieldTypeLabelDescription
bytes_argbytes
orig_sizeuint64

KprobeUserNamespace

FieldTypeLabelDescription
levelgoogle.protobuf.Int32Value
ownergoogle.protobuf.UInt32Value
groupgoogle.protobuf.UInt32Value
nsNamespace

Namespace

FieldTypeLabelDescription
inumuint32Inode number of the namespace.
is_hostboolIndicates if namespace belongs to host.

Namespaces

FieldTypeLabelDescription
utsNamespaceHostname and NIS domain name.
ipcNamespaceSystem V IPC, POSIX message queues.
mntNamespaceMount points.
pidNamespaceProcess IDs.
pid_for_childrenNamespaceProcess IDs for children processes.
netNamespaceNetwork devices, stacks, ports, etc.
timeNamespaceBoot and monotonic clocks.
time_for_childrenNamespaceBoot and monotonic clocks for children processes.
cgroupNamespaceCgroup root directory.
userNamespaceUser and group IDs.

Pod

FieldTypeLabelDescription
namespacestringKubernetes namespace of the Pod.
namestringName of the Pod.
containerContainerContainer of the Pod from which the process that triggered the event originates.
pod_labelsPod.PodLabelsEntryrepeatedContains all the labels of the pod.
workloadstringKubernetes workload of the Pod.
workload_kindstringKubernetes workload kind (e.g. "Deployment", "DaemonSet") of the Pod.

Pod.PodLabelsEntry

FieldTypeLabelDescription
keystring
valuestring

Process

FieldTypeLabelDescription
exec_idstringExec ID uniquely identifies the process over time across all the nodes in the cluster.
pidgoogle.protobuf.UInt32ValueProcess identifier from host PID namespace.
uidgoogle.protobuf.UInt32ValueUser identifier associated with the process.
cwdstringCurrent working directory of the process.
binarystringAbsolute path of the executed binary.
argumentsstringArguments passed to the binary at execution.
flagsstringFlags are for debugging purposes only and should not be considered a reliable source of information. They hold various information about which syscalls generated events, use of internal Tetragon buffers, errors and more. - execve This event is generated by an execve syscall for a new process. See procFs for the other option. A correctly formatted event should either set execve or procFS (described next). - procFS This event is generated from a proc interface. This happens at Tetragon init when existing processes are being loaded into Tetragon event buffer. All events should have either execve or procFS set. - truncFilename Indicates a truncated processes filename because the buffer size is too small to contain the process filename. Consider increasing buffer size to avoid this. - truncArgs Indicates truncated the processes arguments because the buffer size was too small to contain all exec args. Consider increasing buffer size to avoid this. - taskWalk Primarily useful for debugging. Indicates a walked process hierarchy to find a parent process in the Tetragon buffer. This may happen when we did not receive an exec event for the immediate parent of a process. Typically means we are looking at a fork that in turn did another fork we don't currently track fork events exactly and instead push an event with the original parent exec data. This flag can provide this insight into the event if needed. - miss An error flag indicating we could not find parent info in the Tetragon event buffer. If this is set it should be reported to Tetragon developers for debugging. Tetragon will do its best to recover information about the process from available kernel data structures instead of using cached info in this case. However, args will not be available. - needsAUID An internal flag for Tetragon to indicate the audit has not yet been resolved. The BPF hooks look at this flag to determine if probing the audit system is necessary. - errorFilename An error flag indicating an error happened while reading the filename. If this is set it should be reported to Tetragon developers for debugging. - errorArgs An error flag indicating an error happened while reading the process args. If this is set it should be reported to Tetragon developers for debugging - needsCWD An internal flag for Tetragon to indicate the current working directory has not yet been resolved. The Tetragon hooks look at this flag to determine if probing the CWD is necessary. - noCWDSupport Indicates that CWD is removed from the event because the buffer size is too small. Consider increasing buffer size to avoid this. - rootCWD Indicates that CWD is the root directory. This is necessary to inform readers the CWD is not in the event buffer and is '/' instead. - errorCWD An error flag indicating an error occurred while reading the CWD of a process. If this is set it should be reported to Tetragon developers for debugging. - clone Indicates the process issued a clone before exec*. This is the general flow to exec* a new process, however its possible to replace the current process with a new process by doing an exec* without a clone. In this case the flag will be omitted and the same PID will be used by the kernel for both the old process and the newly exec'd process.
start_timegoogle.protobuf.TimestampStart time of the execution.
auidgoogle.protobuf.UInt32ValueAudit user ID, this ID is assigned to a user upon login and is inherited by every process even when the user's identity changes. For example, by switching user accounts with su - john.
podPodInformation about the Kubernetes Pod where the event originated.
dockerstringThe 15 first digits of the container ID.
parent_exec_idstringExec ID of the parent process.
refcntuint32Reference counter from the Tetragon process cache.
capCapabilitiesSet of capabilities that define the permissions the process can execute with.
nsNamespacesLinux namespaces of the process, disabled by default, can be enabled by the --enable-process-ns flag.
tidgoogle.protobuf.UInt32ValueThread ID, note that for the thread group leader, tid is equal to pid.
process_credentialsProcessCredentialsProcess credentials
binary_propertiesBinaryPropertiesExecuted binary properties. This field is only available on ProcessExec events.

ProcessCredentials

FieldTypeLabelDescription
uidgoogle.protobuf.UInt32ValueThe real user ID
gidgoogle.protobuf.UInt32ValueThe real group ID
euidgoogle.protobuf.UInt32ValueThe effective user ID
egidgoogle.protobuf.UInt32ValueThe effective group ID
suidgoogle.protobuf.UInt32ValueThe saved user ID
sgidgoogle.protobuf.UInt32ValueThe saved group ID
fsuidgoogle.protobuf.UInt32Valuethe filesystem user ID
fsgidgoogle.protobuf.UInt32ValueThe filesystem group ID
securebitsSecureBitsTyperepeatedSecure management flags
capsCapabilitiesSet of capabilities that define the permissions the process can execute with.
user_nsUserNamespaceUser namespace where the UIDs, GIDs and capabilities are relative to.

ProcessExec

FieldTypeLabelDescription
processProcessProcess that triggered the exec.
parentProcessImmediate parent of the process.
ancestorsProcessrepeatedAncestors of the process beyond the immediate parent.

ProcessExit

FieldTypeLabelDescription
processProcessProcess that triggered the exit.
parentProcessImmediate parent of the process.
signalstringSignal that the process received when it exited, for example SIGKILL or SIGTERM (list all signal names with kill -l). If there is no signal handler implemented for a specific process, we report the exit status code that can be found in the status field.
statusuint32Status code on process exit. For example, the status code can indicate if an error was encountered or the program exited successfully.
timegoogle.protobuf.TimestampDate and time of the event.

ProcessKprobe

FieldTypeLabelDescription
processProcessProcess that triggered the kprobe.
parentProcessImmediate parent of the process.
function_namestringSymbol on which the kprobe was attached.
argsKprobeArgumentrepeatedArguments definition of the observed kprobe.
returnKprobeArgumentReturn value definition of the observed kprobe.
actionKprobeActionAction performed when the kprobe matched.
kernel_stack_traceStackTraceEntryrepeatedKernel stack trace to the call.
policy_namestringName of the Tracing Policy that created that kprobe.
return_actionKprobeActionAction performed when the return kprobe executed.
messagestringShort message of the Tracing Policy to inform users what is going on.
tagsstringrepeatedTags of the Tracing Policy to categorize the event.
user_stack_traceStackTraceEntryrepeatedUser-mode stack trace to the call.

ProcessLoader

loader sensor event triggered for loaded binary/library

FieldTypeLabelDescription
processProcess
pathstring
buildidbytes

ProcessTracepoint

FieldTypeLabelDescription
processProcessProcess that triggered the tracepoint.
parentProcessImmediate parent of the process.
subsysstringSubsystem of the tracepoint.
eventstringEvent of the subsystem.
argsKprobeArgumentrepeatedArguments definition of the observed tracepoint. TODO: once we implement all we want, rename KprobeArgument to GenericArgument
policy_namestringName of the policy that created that tracepoint.
actionKprobeActionAction performed when the tracepoint matched.
messagestringShort message of the Tracing Policy to inform users what is going on.
tagsstringrepeatedTags of the Tracing Policy to categorize the event.

ProcessUprobe

FieldTypeLabelDescription
processProcess
parentProcess
pathstring
symbolstring
policy_namestringName of the policy that created that uprobe.
messagestringShort message of the Tracing Policy to inform users what is going on.
argsKprobeArgumentrepeatedArguments definition of the observed uprobe.
tagsstringrepeatedTags of the Tracing Policy to categorize the event.

RuntimeHookRequest

RuntimeHookRequest synchronously propagates information to the agent about run-time state.

FieldTypeLabelDescription
createContainerCreateContainer

RuntimeHookResponse

StackTraceEntry

FieldTypeLabelDescription
addressuint64linear address of the function in kernel or user space.
offsetuint64offset is the offset into the native instructions for the function.
symbolstringsymbol is the symbol name of the function.
modulestringmodule path for user space addresses.

Test

FieldTypeLabelDescription
arg0uint64
arg1uint64
arg2uint64
arg3uint64

UserNamespace

FieldTypeLabelDescription
levelgoogle.protobuf.Int32ValueNested level of the user namespace. Init or host user namespace is at level 0.
uidgoogle.protobuf.UInt32ValueThe owner user ID of the namespace
gidgoogle.protobuf.UInt32ValueThe owner group ID of the namespace.
nsNamespaceThe user namespace details that include the inode number of the namespace.

HealthStatusResult

NameNumberDescription
HEALTH_STATUS_UNDEF0
HEALTH_STATUS_RUNNING1
HEALTH_STATUS_STOPPED2
HEALTH_STATUS_ERROR3

HealthStatusType

NameNumberDescription
HEALTH_STATUS_TYPE_UNDEF0
HEALTH_STATUS_TYPE_STATUS1

KprobeAction

NameNumberDescription
KPROBE_ACTION_UNKNOWN0Unknown action
KPROBE_ACTION_POST1Post action creates an event (default action).
KPROBE_ACTION_FOLLOWFD2Post action creates a mapping between file descriptors and file names.
KPROBE_ACTION_SIGKILL3Sigkill action synchronously terminates the process.
KPROBE_ACTION_UNFOLLOWFD4Post action removes a mapping between file descriptors and file names.
KPROBE_ACTION_OVERRIDE5Override action modifies the return value of the call.
KPROBE_ACTION_COPYFD6Post action duplicates a mapping between file descriptors and file names.
KPROBE_ACTION_GETURL7GetURL action issues an HTTP GET request against a URL from userspace.
KPROBE_ACTION_DNSLOOKUP8DnsLookup action issues a DNS lookup against a URL from userspace.
KPROBE_ACTION_NOPOST9NoPost action suppresses the transmission of the event to userspace.
KPROBE_ACTION_SIGNAL10Signal action sends specified signal to the process.
KPROBE_ACTION_TRACKSOCK11TrackSock action tracks socket.
KPROBE_ACTION_UNTRACKSOCK12UntrackSock action un-tracks socket.
KPROBE_ACTION_NOTIFYENFORCER13NotifyEnforcer action notifies killer sensor.

TaintedBitsType

Tainted bits to indicate if the kernel was tainted. For further details: https://docs.kernel.org/admin-guide/tainted-kernels.html

NameNumberDescription
TAINT_UNSET0
TAINT_PROPRIETARY_MODULE1A proprietary module was loaded.
TAINT_FORCED_MODULE2A module was force loaded.
TAINT_FORCED_UNLOAD_MODULE4A module was force unloaded.
TAINT_STAGED_MODULE1024A staging driver was loaded.
TAINT_OUT_OF_TREE_MODULE4096An out of tree module was loaded.
TAINT_UNSIGNED_MODULE8192An unsigned module was loaded. Supported only on kernels built with CONFIG_MODULE_SIG option.
TAINT_KERNEL_LIVE_PATCH_MODULE32768The kernel has been live patched.
TAINT_TEST_MODULE262144Loading a test module.

tetragon/events.proto

AggregationInfo

AggregationInfo contains information about aggregation results.

FieldTypeLabelDescription
countuint64Total count of events in this aggregation time window.

AggregationOptions

AggregationOptions defines configuration options for aggregating events.

FieldTypeLabelDescription
window_sizegoogle.protobuf.DurationAggregation window size. Defaults to 15 seconds if this field is not set.
channel_buffer_sizeuint64Size of the buffer for the aggregator to receive incoming events. If the buffer becomes full, the aggregator will log a warning and start dropping incoming events.
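
These options correspond to the daemon's JSON export aggregation flags listed in the Daemon Configuration reference; as a sketch, aggregation could be enabled with:

tetragon --enable-export-aggregation --export-aggregation-window-size 15s --export-aggregation-buffer-size 10000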

CapFilter

Filter over a set of Linux process capabilities. See message Capabilities for more info. WARNING: Multiple sets are ANDed. For example, if the permitted filter matches, but the effective filter does not, the filter will NOT match.

FieldTypeLabelDescription
permittedCapFilterSetFilter over the set of permitted capabilities.
effectiveCapFilterSetFilter over the set of effective capabilities.
inheritableCapFilterSetFilter over the set of inheritable capabilities.

CapFilterSet

Capability set to filter over. NOTE: you may specify only ONE set here.

FieldTypeLabelDescription
anyCapabilitiesTyperepeatedMatch if the capability set contains any of the capabilities defined in this filter.
allCapabilitiesTyperepeatedMatch if the capability set contains all of the capabilities defined in this filter.
exactlyCapabilitiesTyperepeatedMatch if the capability set exactly matches all of the capabilities defined in this filter.
noneCapabilitiesTyperepeatedMatch if the capability set contains none of the capabilities defined in this filter.
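
As an illustrative sketch only (assuming the JSON encoding of these filters follows the proto field names shown above), an export allowlist entry matching kprobe events from processes whose effective capability set contains CAP_SYS_ADMIN might look like:

tetragon --export-allowlist '{"event_set":["PROCESS_KPROBE"], "capabilities":{"effective":{"any":["CAP_SYS_ADMIN"]}}}'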

FieldFilter

FieldTypeLabelDescription
event_setEventTyperepeatedEvent types to filter or undefined to filter over all event types.
fieldsgoogle.protobuf.FieldMaskFields to include or exclude.
actionFieldFilterActionWhether to include or exclude fields.
invert_event_setgoogle.protobuf.BoolValueWhether or not the event set filter should be inverted.
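
As a sketch (assuming the daemon's --field-filters flag accepts a JSON encoding of this message using the field names above), a filter keeping only a few fields of exec events might look like:

tetragon --field-filters '{"event_set":["PROCESS_EXEC"], "fields":"process.binary,process.arguments,process.pod.name", "action":"INCLUDE"}'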

Filter

FieldTypeLabelDescription
binary_regexstringrepeated
namespacestringrepeated
health_checkgoogle.protobuf.BoolValue
piduint32repeated
pid_setuint32repeatedFilter by the PID of a process and any of its descendants. Note that this filter is intended for testing and development purposes only and should not be used in production. In particular, PID cycling in the OS over longer periods of time may cause unexpected events to pass this filter.
event_setEventTyperepeated
pod_regexstringrepeatedFilter by process.pod.name field using RE2 regular expression syntax: https://github.com/google/re2/wiki/Syntax
arguments_regexstringrepeatedFilter by process.arguments field using RE2 regular expression syntax: https://github.com/google/re2/wiki/Syntax
labelsstringrepeatedFilter events by pod labels using Kubernetes label selector syntax: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors Note that this filter never matches events without the pod field (i.e. host process events).
policy_namesstringrepeatedFilter events by tracing policy names
capabilitiesCapFilterFilter events by Linux process capability
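
These filters correspond to the daemon's --export-allowlist and --export-denylist options. For example, the export allowlist and denylist defaults shown in the Helm reference above translate to entries such as:

tetragon --export-allowlist '{"event_set":["PROCESS_EXEC", "PROCESS_EXIT", "PROCESS_KPROBE", "PROCESS_UPROBE", "PROCESS_TRACEPOINT"]}'
tetragon --export-denylist '{"namespace":["", "cilium", "kube-system"]}'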

GetEventsRequest

FieldTypeLabelDescription
allow_listFilterrepeatedallow_list specifies a list of filters to apply to only return certain events. If multiple filters are specified, at least one of them has to match for an event to be included in the results.
deny_listFilterrepeateddeny_list specifies a list of filters to apply to exclude certain events from the results. If multiple filters are specified, at least one of them has to match for an event to be excluded. If both allow_list and deny_list are specified, the results contain the set difference allow_list - deny_list.
aggregation_optionsAggregationOptionsaggregation_options configures aggregation options for this request. If this field is not set, responses will not be aggregated. Note that currently only process_accept and process_connect events are aggregated. Other events remain unaggregated.
field_filtersFieldFilterrepeatedFields to include or exclude for events in the GetEventsResponse. Omitting this field implies that all fields will be included. Exclusion always takes precedence over inclusion in the case of conflicts.

GetEventsResponse

FieldTypeLabelDescription
process_execProcessExecProcessExec event includes information about the execution of binaries and other related process metadata.
process_exitProcessExitProcessExit event indicates how and when a process terminates.
process_kprobeProcessKprobeProcessKprobe event contains information about the pre-defined functions and the process that invoked them.
process_tracepointProcessTracepointProcessTracepoint contains information about the pre-defined tracepoint and the process that invoked them.
process_loaderProcessLoader
process_uprobeProcessUprobe
process_throttleProcessThrottle
testTest
rate_limit_infoRateLimitInfo
node_namestringName of the node where this event was observed.
timegoogle.protobuf.TimestampTimestamp at which this event was observed. For an aggregated response, this field is set to the timestamp at which the event was observed for the first time in a given aggregation time window.
aggregation_infoAggregationInfoaggregation_info contains information about aggregation results. This field is set only for aggregated responses.

ProcessThrottle

FieldTypeLabelDescription
typeThrottleTypeThrottle type
cgroupstringCgroup name

RateLimitInfo

FieldTypeLabelDescription
number_of_dropped_process_eventsuint64

RedactionFilter

FieldTypeLabelDescription
matchFilterrepeatedDeprecated. Deprecated, do not use.
redactstringrepeatedRE2 regular expressions to use for redaction. Strings inside capture groups are redacted.
binary_regexstringrepeatedRE2 regular expression to match binary name. If supplied, redactions will only be applied to matching processes.
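
As an illustrative sketch (assuming --redaction-filters accepts a JSON encoding using the field names above; the binary name and regular expressions are made up for the example), a filter redacting a password argument could look like:

tetragon --redaction-filters '{"binary_regex":["(?:^|/)mydb-client$"], "redact":["--password(?:\\s+|=)(\\S+)"]}'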

EventType

Represents the type of a Tetragon event.

NOTE: EventType constants must be in sync with the numbers used in the GetEventsResponse event oneof.

NameNumberDescription
UNDEF0
PROCESS_EXEC1
PROCESS_EXIT5
PROCESS_KPROBE9
PROCESS_TRACEPOINT10
PROCESS_LOADER11
PROCESS_UPROBE12
PROCESS_THROTTLE27
TEST40000
RATE_LIMIT_INFO40001

FieldFilterAction

Determines the behavior of a field filter

NameNumberDescription
INCLUDE0
EXCLUDE1

ThrottleType

NameNumberDescription
THROTTLE_UNKNOWN0
THROTTLE_START1
THROTTLE_STOP2

tetragon/stack.proto

StackAddress

FieldTypeLabelDescription
addressuint64
symbolstring

StackTrace

FieldTypeLabelDescription
addressesStackAddressrepeated

StackTraceLabel

FieldTypeLabelDescription
keystring
countuint64

StackTraceNode

FieldTypeLabelDescription
addressStackAddress
countuint64
labelsStackTraceLabelrepeated
childrenStackTraceNoderepeated

tetragon/sensors.proto

AddTracingPolicyRequest

FieldTypeLabelDescription
yamlstring

AddTracingPolicyResponse

DeleteTracingPolicyRequest

FieldTypeLabelDescription
namestring

DeleteTracingPolicyResponse

DisableSensorRequest

FieldTypeLabelDescription
namestring

DisableSensorResponse

DisableTracingPolicyRequest

FieldTypeLabelDescription
namestring

DisableTracingPolicyResponse

EnableSensorRequest

FieldTypeLabelDescription
namestring

EnableSensorResponse

EnableTracingPolicyRequest

FieldTypeLabelDescription
namestring

EnableTracingPolicyResponse

GetStackTraceTreeRequest

FieldTypeLabelDescription
namestring

GetStackTraceTreeResponse

FieldTypeLabelDescription
rootStackTraceNode

GetVersionRequest

GetVersionResponse

Field | Type | Label | Description
version | string | |

ListSensorsRequest

ListSensorsResponse

Field | Type | Label | Description
sensors | SensorStatus | repeated |

ListTracingPoliciesRequest

ListTracingPoliciesResponse

Field | Type | Label | Description
policies | TracingPolicyStatus | repeated |

RemoveSensorRequest

Field | Type | Label | Description
name | string | |

RemoveSensorResponse

SensorStatus

Field | Type | Label | Description
name | string | | name is the name of the sensor
enabled | bool | | enabled marks whether the sensor is enabled
collection | string | | collection is the collection the sensor belongs to (typically a tracing policy)

TracingPolicyStatus

Field | Type | Label | Description
id | uint64 | | id is the id of the policy
name | string | | name is the name of the policy
namespace | string | | namespace is the namespace of the policy (or empty if the policy is global)
info | string | | info is additional information about the policy
sensors | string | repeated | sensors loaded in the scope of this policy
enabled | bool | | Deprecated. Indicates whether the policy is enabled; use 'state' instead.
filter_id | uint64 | | filter ID of the policy used for k8s filtering
error | string | | potential error of the policy
state | TracingPolicyState | | current state of the tracing policy

TracingPolicyState

Name | Number | Description
TP_STATE_UNKNOWN | 0 | unknown state
TP_STATE_ENABLED | 1 | loaded and enabled
TP_STATE_DISABLED | 2 | loaded but disabled
TP_STATE_LOAD_ERROR | 3 | failed to load
TP_STATE_ERROR | 4 | failed during lifetime

FineGuidanceSensors

Method Name | Request Type | Response Type | Description
GetEvents | GetEventsRequest | GetEventsResponse stream |
GetHealth | GetHealthStatusRequest | GetHealthStatusResponse |
AddTracingPolicy | AddTracingPolicyRequest | AddTracingPolicyResponse |
DeleteTracingPolicy | DeleteTracingPolicyRequest | DeleteTracingPolicyResponse |
RemoveSensor | RemoveSensorRequest | RemoveSensorResponse |
ListTracingPolicies | ListTracingPoliciesRequest | ListTracingPoliciesResponse |
EnableTracingPolicy | EnableTracingPolicyRequest | EnableTracingPolicyResponse |
DisableTracingPolicy | DisableTracingPolicyRequest | DisableTracingPolicyResponse |
ListSensors | ListSensorsRequest | ListSensorsResponse |
EnableSensor | EnableSensorRequest | EnableSensorResponse |
DisableSensor | DisableSensorRequest | DisableSensorResponse |
GetStackTraceTree | GetStackTraceTreeRequest | GetStackTraceTreeResponse |
GetVersion | GetVersionRequest | GetVersionResponse |
RuntimeHook | RuntimeHookRequest | RuntimeHookResponse |

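For quick, ad-hoc access to this API, the tetra CLI wraps the GetEvents stream. A minimal sketch, assuming the agent’s gRPC server is listening on its default localhost:54321 address (pass --server-address otherwise; the host below is a placeholder):

# Stream events from a local agent in a condensed, human-readable form
tetra getevents -o compact

# Stream raw JSON events from a remote agent
tetra --server-address <agent-host>:54321 getevents -o json
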
Scalar Value Types

.proto Type | Notes | C++ | Java | Python | Go | C# | PHP | Ruby
double | | double | double | float | float64 | double | float | Float
float | | float | float | float | float32 | float | float | Float
int32 | Uses variable-length encoding. Inefficient for encoding negative numbers – if your field is likely to have negative values, use sint32 instead. | int32 | int | int | int32 | int | integer | Bignum or Fixnum (as required)
int64 | Uses variable-length encoding. Inefficient for encoding negative numbers – if your field is likely to have negative values, use sint64 instead. | int64 | long | int/long | int64 | long | integer/string | Bignum
uint32 | Uses variable-length encoding. | uint32 | int | int/long | uint32 | uint | integer | Bignum or Fixnum (as required)
uint64 | Uses variable-length encoding. | uint64 | long | int/long | uint64 | ulong | integer/string | Bignum or Fixnum (as required)
sint32 | Uses variable-length encoding. Signed int value. These more efficiently encode negative numbers than regular int32s. | int32 | int | int | int32 | int | integer | Bignum or Fixnum (as required)
sint64 | Uses variable-length encoding. Signed int value. These more efficiently encode negative numbers than regular int64s. | int64 | long | int/long | int64 | long | integer/string | Bignum
fixed32 | Always four bytes. More efficient than uint32 if values are often greater than 2^28. | uint32 | int | int | uint32 | uint | integer | Bignum or Fixnum (as required)
fixed64 | Always eight bytes. More efficient than uint64 if values are often greater than 2^56. | uint64 | long | int/long | uint64 | ulong | integer/string | Bignum
sfixed32 | Always four bytes. | int32 | int | int | int32 | int | integer | Bignum or Fixnum (as required)
sfixed64 | Always eight bytes. | int64 | long | int/long | int64 | long | integer/string | Bignum
bool | | bool | boolean | boolean | bool | bool | boolean | TrueClass/FalseClass
string | A string must always contain UTF-8 encoded or 7-bit ASCII text. | string | String | str/unicode | string | string | string | String (UTF-8)
bytes | May contain any arbitrary sequence of bytes. | string | ByteString | str | []byte | ByteString | string | String (ASCII-8BIT)

8.4 - Metrics

This reference is autogenerated from the Tetragon Prometheus metrics registry.

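To inspect the raw metrics on a running node, you can scrape the agent’s Prometheus endpoint directly. A minimal sketch for a Kubernetes install, assuming metrics are enabled on port 2112 (the Helm chart default) and Tetragon runs in the kube-system namespace; adjust both to your deployment:

TETRAGON_POD=$(kubectl -n kube-system get pods -l app.kubernetes.io/name=tetragon -o name | head -n 1)
kubectl -n kube-system port-forward "$TETRAGON_POD" 2112:2112 &
curl -s http://localhost:2112/metrics | grep '^tetragon_'
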
Tetragon Health Metrics

tetragon_build_info

Build information about tetragon

label | values
commit | 931b70f2c9878ba985ba6b589827bea17da6ec33
go_version | go1.22.0
modified | false
time | 2022-05-13T15:54:45Z

tetragon_data_event_size

The size of received data events.

label | values
op | bad, ok

tetragon_data_events_total

The number of data events by type. For internal use only.

label | values
event | Added, Appended, Bad, Matched, NotMatched, Received

tetragon_errors_total

The total number of Tetragon errors. For internal use only.

label | values
type | event_finalize_process_info_failed, event_missing_process_info, handler_error, process_cache_evicted, process_cache_miss_on_get, process_cache_miss_on_remove, process_pid_tid_mismatch

tetragon_event_cache_accesses_total

The total number of Tetragon event cache accesses. For internal use only.

tetragon_event_cache_entries

The number of entries in the event cache.

tetragon_event_cache_errors_total

The total of errors encountered while fetching process exec information from the cache.

label | values
error | nil_process_pid
event_type | PROCESS_EXEC, PROCESS_EXIT, PROCESS_KPROBE, PROCESS_LOADER, PROCESS_THROTTLE, PROCESS_TRACEPOINT, PROCESS_UPROBE, RATE_LIMIT_INFO

tetragon_event_cache_parent_info_errors_total

The total of times we failed to fetch cached parent info for a given event type.

label | values
event_type | PROCESS_EXEC, PROCESS_EXIT, PROCESS_KPROBE, PROCESS_LOADER, PROCESS_THROTTLE, PROCESS_TRACEPOINT, PROCESS_UPROBE, RATE_LIMIT_INFO

tetragon_event_cache_pod_info_errors_total

The total of times we failed to fetch cached pod info for a given event type.

label | values
event_type | PROCESS_EXEC, PROCESS_EXIT, PROCESS_KPROBE, PROCESS_LOADER, PROCESS_THROTTLE, PROCESS_TRACEPOINT, PROCESS_UPROBE, RATE_LIMIT_INFO

tetragon_event_cache_process_info_errors_total

The total of times we failed to fetch cached process info for a given event type.

label | values
event_type | PROCESS_EXEC, PROCESS_EXIT, PROCESS_KPROBE, PROCESS_LOADER, PROCESS_THROTTLE, PROCESS_TRACEPOINT, PROCESS_UPROBE, RATE_LIMIT_INFO

tetragon_event_cache_retries_total

The total number of retries for event caching per entry type.

label | values
entry_type | parent_info, pod_info, process_info

tetragon_events_exported_bytes_total

Number of bytes exported for events

tetragon_events_exported_total

Total number of events exported

tetragon_events_last_exported_timestamp

Timestamp of the most recent event to be exported

tetragon_flags_total

The total number of Tetragon flags. For internal use only.

label | values
type | auid, clone, errorArgs, errorCWD, errorCgroupID, errorCgroupKn, errorCgroupName, errorCgroupSubsys, errorCgroupSubsysCgrp, errorCgroups, errorFilename, errorPathResolutionCwd, execve, execveat, miss, nocwd, procFS, rootcwd, taskWalk, truncArgs, truncFilename

tetragon_generic_kprobe_merge_errors_total

The total number of failed attempts to merge a kprobe and kretprobe event.

label | values
curr_fn | example_kprobe
curr_type | enter, exit
prev_fn | example_kprobe
prev_type | enter, exit

tetragon_generic_kprobe_merge_ok_total

The total number of successful attempts to merge a kprobe and kretprobe event.

tetragon_generic_kprobe_merge_pushed_total

The total number of pushed events for later merge.

tetragon_handler_errors_total

The total number of event handler errors. For internal use only.

label | values
error_type | event_handler_failed, unknown_opcode
opcode | 0, 11, 13, 14, 15, 23, 24, 25, 26, 5, 7

tetragon_handling_latency

The latency of handling messages in us.

label | values
op | 11, 13, 14, 15, 23, 24, 25, 26, 5, 7

tetragon_map_capacity

Capacity of a BPF map. Expected to be constant.

label | values
map | execve_map, tg_execve_joined_info_map

tetragon_map_entries

The total number of in-use entries per map.

label | values
map | execve_map, tg_execve_joined_info_map

tetragon_map_errors_total

The number of errors per map.

label | values
map | execve_map, tg_execve_joined_info_map

tetragon_missed_events_total

The total number of Tetragon events per type that failed to be sent from the kernel.

label | values
msg_op | 11, 13, 14, 15, 23, 24, 25, 26, 5, 7

tetragon_msg_op_total

The total number of times we encounter a given message opcode. For internal use only.

label | values
msg_op | 11, 13, 14, 15, 23, 24, 25, 26, 5, 7

tetragon_notify_overflowed_events_total

The total number of events dropped because listener buffer was full

tetragon_policyfilter_hook_container_name_missing_total

The total number of operations when the container name was missing in the OCI hook

tetragon_policyfilter_metrics_total

Policy filter metrics. For internal use only.

label | values
error | generic-error, pod-namespace-conflict
op | add, add-container, delete, update
subsys | pod-handlers, rthooks

tetragon_process_cache_capacity

The capacity of the process cache. Expected to be constant.

tetragon_process_cache_size

The size of the process cache

tetragon_process_loader_stats

Process Loader event statistics. For internal use only.

label | values
count | LoaderReceived, LoaderResolvedImm, LoaderResolvedRetry

tetragon_ratelimit_dropped_total

The total number of rate limit Tetragon drops

tetragon_ringbuf_perf_event_errors_total

The total number of errors when reading the Tetragon ringbuf.

tetragon_ringbuf_perf_event_lost_total

The total number of Tetragon ringbuf perf events lost.

tetragon_ringbuf_perf_event_received_total

The total number of Tetragon ringbuf perf events received.

tetragon_ringbuf_queue_lost_total

The total number of Tetragon events ring buffer queue lost.

tetragon_ringbuf_queue_received_total

The total number of Tetragon events ring buffer queue received.

tetragon_tracingpolicy_loaded

The number of loaded tracing policy by state.

label | values
state | disabled, enabled, error, load_error

tetragon_watcher_errors_total

The total number of errors for a given watcher type.

label | values
error | failed_to_get_pod
watcher | k8s

tetragon_watcher_events_total

The total number of events for a given watcher type.

label | values
watcher | k8s

Tetragon Resources Metrics

go_gc_duration_seconds

A summary of the pause duration of garbage collection cycles.

go_goroutines

Number of goroutines that currently exist.

go_info

Information about the Go environment.

label | values
version | go1.22.0

go_memstats_alloc_bytes

Number of bytes allocated and still in use.

go_memstats_alloc_bytes_total

Total number of bytes allocated, even if freed.

go_memstats_buck_hash_sys_bytes

Number of bytes used by the profiling bucket hash table.

go_memstats_frees_total

Total number of frees.

go_memstats_gc_sys_bytes

Number of bytes used for garbage collection system metadata.

go_memstats_heap_alloc_bytes

Number of heap bytes allocated and still in use.

go_memstats_heap_idle_bytes

Number of heap bytes waiting to be used.

go_memstats_heap_inuse_bytes

Number of heap bytes that are in use.

go_memstats_heap_objects

Number of allocated objects.

go_memstats_heap_released_bytes

Number of heap bytes released to OS.

go_memstats_heap_sys_bytes

Number of heap bytes obtained from system.

go_memstats_last_gc_time_seconds

Number of seconds since 1970 of last garbage collection.

go_memstats_lookups_total

Total number of pointer lookups.

go_memstats_mallocs_total

Total number of mallocs.

go_memstats_mcache_inuse_bytes

Number of bytes in use by mcache structures.

go_memstats_mcache_sys_bytes

Number of bytes used for mcache structures obtained from system.

go_memstats_mspan_inuse_bytes

Number of bytes in use by mspan structures.

go_memstats_mspan_sys_bytes

Number of bytes used for mspan structures obtained from system.

go_memstats_next_gc_bytes

Number of heap bytes when next garbage collection will take place.

go_memstats_other_sys_bytes

Number of bytes used for other system allocations.

go_memstats_stack_inuse_bytes

Number of bytes in use by the stack allocator.

go_memstats_stack_sys_bytes

Number of bytes obtained from system for stack allocator.

go_memstats_sys_bytes

Number of bytes obtained from system.

go_threads

Number of OS threads created.

process_cpu_seconds_total

Total user and system CPU time spent in seconds.

process_max_fds

Maximum number of open file descriptors.

process_open_fds

Number of open file descriptors.

process_resident_memory_bytes

Resident memory size in bytes.

process_start_time_seconds

Start time of the process since unix epoch in seconds.

process_virtual_memory_bytes

Virtual memory size in bytes.

process_virtual_memory_max_bytes

Maximum amount of virtual memory available in bytes.

Tetragon Events Metrics

tetragon_events_total

The total number of Tetragon events

label | values
binary | example-binary
namespace | example-namespace
pod | example-pod
type | PROCESS_EXEC, PROCESS_EXIT, PROCESS_KPROBE, PROCESS_LOADER, PROCESS_THROTTLE, PROCESS_TRACEPOINT, PROCESS_UPROBE, RATE_LIMIT_INFO
workload | example-workload
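
Once scraped by Prometheus, these metrics can be queried like any other counters. A hypothetical example that ranks the noisiest binaries by event rate over the last five minutes, using the standard Prometheus HTTP API (the Prometheus host is a placeholder):

curl -sG http://<prometheus-host>:9090/api/v1/query \
  --data-urlencode 'query=topk(10, sum by (namespace, binary) (rate(tetragon_events_total[5m])))'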

tetragon_policy_events_total

Policy events calls observed.

label | values
binary | example-binary
hook | example_kprobe
namespace | example-namespace
pod | example-pod
policy | example-tracingpolicy
workload | example-workload

tetragon_syscalls_total

System calls observed.

label | values
binary | example-binary
namespace | example-namespace
pod | example-pod
syscall | example_syscall
workload | example-workload

9 - FAQ

List of frequently asked questions

What is the minimum Linux kernel version to run Tetragon?

Tetragon needs Linux kernel version 4.19 or greater.

We currently run tests on stable long-term support kernels 4.19, 5.4, 5.10, 5.15 and bpf-next; see this test workflow for up-to-date information. Not all Tetragon features work with older kernel versions. BPF evolves rapidly, and we recommend using the most recent stable kernel possible to get the most out of Tetragon’s features.

Note that Tetragon needs BTF support which might take some work on older kernels.

What are the Linux kernel configuration options needed to run Tetragon?

This is the list of required kernel configuration options; note that it may evolve quickly as new Tetragon features are added:

# CORE BPF
CONFIG_BPF
CONFIG_BPF_JIT
CONFIG_BPF_JIT_DEFAULT_ON
CONFIG_BPF_EVENTS
CONFIG_BPF_SYSCALL
CONFIG_HAVE_BPF_JIT
CONFIG_HAVE_EBPF_JIT
CONFIG_FTRACE_SYSCALLS

# BTF
CONFIG_DEBUG_INFO_BTF
CONFIG_DEBUG_INFO_BTF_MODULES

# Enforcement
CONFIG_BPF_KPROBE_OVERRIDE

# CGROUP and Process tracking
CONFIG_CGROUPS=y        Control Group support
CONFIG_MEMCG=y          Memory Control group
CONFIG_BLK_CGROUP=y     Generic block IO controller
CONFIG_CGROUP_SCHED=y
CONFIG_CGROUP_PIDS=y    Process Control group
CONFIG_CGROUP_FREEZER=y Freeze and unfreeze tasks controller
CONFIG_CPUSETS=y        Manage CPUSETs
CONFIG_PROC_PID_CPUSET=y
CONFIG_CGROUP_DEVICE=y  Devices Control group
CONFIG_CGROUP_CPUACCT=y CPU accounting controller
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_BPF=y     Attach eBPF programs to a cgroup
CONFIG_CGROUP_FAVOR_DYNMODS=y  (optional)  >= 6.0
  Reduces the latencies of dynamic cgroup modifications at the
  cost of making hot path operations such as forks and exits
  more expensive.
  Platforms with frequent cgroup migrations could enable this
  option as a potential alleviation for pod and containers
  association issues.

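As a quick static check before installing Tetragon, you can also grep your kernel configuration for these options. A minimal sketch, assuming your distribution exposes the config in one of the usual locations:

# Config exposed by the running kernel (requires CONFIG_IKCONFIG_PROC)
zcat /proc/config.gz | grep -E 'CONFIG_BPF=|CONFIG_BPF_SYSCALL=|CONFIG_DEBUG_INFO_BTF='

# Config shipped by the distribution for the installed kernel
grep -E 'CONFIG_BPF=|CONFIG_BPF_SYSCALL=|CONFIG_DEBUG_INFO_BTF=' /boot/config-$(uname -r)
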
At runtime, to check whether your kernel has the required features enabled, run tetra with root privileges using the probe command:

sudo tetra probe

You can also run this command directly from the tetragon container image on a Kubernetes cluster node. For example:

kubectl run bpf-probe --image=quay.io/cilium/tetragon-ci:latest --privileged --restart=Never -it --rm --command -- tetra probe

The output should be similar to this (with boolean values depending on your actual configuration):

override_return: true
buildid: true
kprobe_multi: false
fmodret: true
fmodret_syscall: true
signal: true
large: true

Tetragon failed to start complaining about a missing BTF file

You might have encountered the following issues:

level=info msg="BTF discovery: default kernel btf file does not exist" btf-file=/sys/kernel/btf/vmlinux
level=info msg="BTF discovery: candidate btf file does not exist" btf-file=/var/lib/tetragon/metadata/vmlinux-5.15.49-linuxkit
level=info msg="BTF discovery: candidate btf file does not exist" btf-file=/var/lib/tetragon/btf
[...]
level=fatal msg="Failed to start tetragon" error="tetragon, aborting kernel autodiscovery failed: Kernel version \"5.15.49-linuxkit\" BTF search failed kernel is not included in supported list. Please check Tetragon requirements documentation, then use --btf option to specify BTF path and/or '--kernel' to specify kernel version"

Tetragon needs BTF (BPF Type Format) support to load its BPF programs using CO-RE (Compile Once - Run Everywhere). In brief, CO-RE makes it possible to load BPF programs that were compiled against a different kernel version than the target kernel. Kernel structures change between versions, and BPF programs need to access fields in them, so CO-RE uses the BTF file of the kernel on which the BPF program is loaded to learn how the structures differ and to patch the field offsets it accesses accordingly. CO-RE makes BPF programs portable, but it requires a kernel with BTF enabled.

Most common Linux distributions now ship with BTF enabled (kernel option CONFIG_DEBUG_INFO_BTF=y) and require no extra work. To check whether BTF is enabled on your Linux system, look for your kernel’s BTF data file at its standard location, /sys/kernel/btf/vmlinux. By default, Tetragon will look for this file (this is the first line in the log output above).

If your kernel does not support BTF you can:

  • Retrieve the BTF file for your kernel version from an external source.
  • Build the BTF file from your kernel debug symbols. You will need pahole to add BTF metadata to the debug symbols and LLVM to minimize the metadata size; see the sketch after this list.
  • Rebuild your kernel with CONFIG_DEBUG_INFO_BTF set to y.
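
A minimal sketch of that second option, assuming you have a vmlinux image with DWARF debug symbols for your exact kernel and a recent pahole and LLVM toolchain installed (paths and tool versions will vary):

# Encode BTF from the DWARF debug info into a .BTF section of vmlinux
pahole -J vmlinux

# Extract the .BTF section into a standalone file that can be passed via --btf
llvm-objcopy --only-section=.BTF --set-section-flags .BTF=alloc,readonly --strip-all vmlinux vmlinux.btf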

Tetragon will also look in /var/lib/tetragon/btf for the vmlinux file (this is the third line in the log output above). Alternatively, you can use the --btf flag to tell Tetragon where to locate the file.

If you encounter this issue while using Docker Desktop on macOS, please refer to can I run Tetragon on Mac computers.

Can I install and use Tetragon in standalone mode (outside of k8s)?

Yes! Tetragon can also run in standalone mode outside of Kubernetes, for example as a container or as a systemd service directly on the host.

Otherwise you can build Tetragon from source by running make to generate standalone binaries. Make sure to take a look at the Development Setup guide for the build requirements. Then use sudo ./tetragon --bpf-lib bpf/objs to run Tetragon.

CI is complaining about Go module vendoring, what do I do?

You can run make vendor then add and commit your changes.

CI is complaining about a missing “signed-off-by” line. What do I do?

You need to add a signed-off-by line to your commit messages. The easiest way to do this is with git fetch origin main && git rebase --signoff origin/main. Then push your changes.

Can I run Tetragon on Mac computers?

Yes! You can run Tetragon locally by running a Linux virtual machine on your Mac.

On macOS running on amd64 (also known as Intel Mac) and arm64 (also known as Apple Silicon Mac), open source and commercial solutions exist for running Linux virtual machines.

You can use these solutions to run a recent Linux distribution that ships with BTF debug information support.

Please note that you need a recent Docker Desktop version on macOS (for example 24.0.6 with kernel 6.4.16-linuxkit), because the Linux virtual machine provided by older Docker Desktop versions lacks the BTF debug information that CO-RE needs to load Tetragon’s sensors. Run the following commands to see whether Tetragon can be used on your Docker Desktop version:

# The Kernel needs to be compiled with CONFIG_DEBUG_INFO_BTF and
# CONFIG_DEBUG_INFO_BTF_MODULES support:
$ docker run -it --rm --privileged --pid=host ubuntu \
    nsenter -t 1 -m -u -n -i sh -c \
    'cat /proc/config.gz | gunzip | grep CONFIG_DEBUG_INFO_BTF'
CONFIG_DEBUG_INFO_BTF=y
CONFIG_DEBUG_INFO_BTF_MODULES=y

# "/sys/kernel/btf/vmlinux" should be present:
$ docker run -it --rm --privileged --pid=host ubuntu \
    nsenter -t 1 -m -u -n -i sh -c 'ls -la /sys/kernel/btf/vmlinux'
-r--r--r--    1 root     root       4988627 Nov 21 20:33 /sys/kernel/btf/vmlinux

10 - Troubleshooting

Learn how to troubleshoot Tetragon

Automatic log and state collection

Before you report a problem, make sure to retrieve the necessary information from your cluster.

Tetragon’s bugtool captures potentially useful information about your environment for debugging. The tool is meant to be used for debugging a single Tetragon agent node but can be run automatically in a cluster. Note that in the context of Kubernetes, the command needs to be run from inside the Tetragon Pod’s container.

Key information collected by bugtool:

  • Tetragon configuration
  • Network configuration
  • Kernel configuration
  • eBPF maps
  • Process traces (if tracing is enabled)

Automatic Kubernetes cluster sysdump

You can collect information in a Kubernetes cluster using the Cilium CLI:

cilium-cli sysdump

More details can be found in the Cilium docs. The Cilium CLI sysdump command will automatically run tetra bugtool on each node where Tetragon is running.

Manual single node sysdump

It’s also possible to run the bug collection tool manually, scoped to a single node, using tetra bugtool.

Kubernetes installation

  1. Identify the Tetragon Pod (<tetragon-namespace> is likely to be kube-system with the default install):

    kubectl get pods -n <tetragon-namespace> -l app.kubernetes.io/name=tetragon
    
  2. Execute tetra bugtool within the Pod:

    kubectl exec -n <tetragon-namespace> <tetragon-pod-name> -c tetragon -- tetra bugtool
    
  3. Retrieve the created archive from the Pod’s filesystem:

    kubectl cp -c tetragon <tetragon-namespace>/<tetragon-pod-name>:tetragon-bugtool.tar.gz tetragon-bugtool.tar.gz
    

Container installation

  1. Enter the Tetragon Container:

    docker exec -it <tetragon-container-id> tetra bugtool
    
  2. Retrieve the archive using docker cp:

    docker cp <tetragon-container-id>:/tetragon-bugtool.tar.gz tetragon-bugtool.tar.gz
    

Systemd host installation

  1. Execute tetra bugtool with Elevated Permissions:

    sudo tetra bugtool
    

Enable debug log level

When debugging, it can be useful to change the log level. The default log level is controlled by the log-level option at startup, as shown in the example below:

  • Enable debug level with --log-level=debug
  • Enable trace level with --log-level=trace
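
For example, when running the standalone binary from the FAQ, the flag can be appended directly (a sketch; adjust the paths to your setup):

sudo ./tetragon --bpf-lib bpf/objs --log-level=debug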

Change log level dynamically

It is possible to change the log level dynamically by sending the corresponding signal to the tetragon process.

  • Change the log level to debug by sending the SIGRTMIN+20 signal to the tetragon pid:

    sudo kill -s SIGRTMIN+20 $(pidof tetragon)
    
  • Change the log level to trace by sending the SIGRTMIN+21 signal to the tetragon pid:

    sudo kill -s SIGRTMIN+21 $(pidof tetragon)
    
  • To restore the original log level, send the SIGRTMIN+22 signal to the tetragon pid:

    sudo kill -s SIGRTMIN+22 $(pidof tetragon)
    

11 - Resources

Additional resources to learn about Tetragon

Conference talks

Title | Authors | Conference | Date
Past, Present, Future of Tetragon - First Production Use Cases, Lessons Learnt, Where Are We Heading? | John Fastabend & Natália Réka Ivánkó | KubeCon EU | 2023
eBPF and Kubernetes — Better Together! Observability and Security with Tetragon | Anna Kapuścińska & James Laverack | Kubernetes Community Days UK | 2023
The Next Log4jshell?! Preparing for CVEs with eBPF! | John Fastabend & Natália Réka Ivánkó | KubeCon EU | 2023
Tutorial: Getting Familiar with Security Observability Using eBPF & Cilium Tetragon | Duffie Cooley & Raphaël Pinson | KubeCon EU | 2023
Securing the Superpowers: Who Loaded That eBPF Program? | John Fastabend & Natália Réka Ivánkó | CloudNative SecurityCon NA | 2023
Container Security and Runtime Enforcement with Tetragon | Djalal Harouni | eBPF Summit | 2022
You and Your Security Profiles; Generating Security Policies with the Help of eBPF | John Fastabend & Natália Réka Ivánkó | eBPF Day North America | 2022
Keeping your cluster safe from attacks with eBPF | Jed Salazar & Natália Réka Ivánkó | eBPF Summit | 2021
Uncovering a Sophisticated Kubernetes Attack in Real Time Part II. | Jed Salazar & Natália Réka Ivánkó | O’Reilly Superstream Series, Infrastructure & Ops | 2021
Uncovering a Sophisticated Kubernetes Attack in Real-Time | Jed Salazar & Natália Réka Ivánkó | KubeCon EU | 2020

Book

Security Observability with eBPF - Jed Salazar & Natália Réka Ivánkó, O’Reilly, 2022

Blog posts

Tetragon 1.0: Kubernetes Security Observability & Runtime Enforcement with eBPF - Thomas Graf, 2023

Tutorial: Setting Up a Cybersecurity Honeypot with Tetragon to Trigger Canary Tokens - Dean Lewis, 2023

Can I use Tetragon without Cilium? - Dean Lewis, 2023

Detecting a Container Escape with Cilium and eBPF - Natália Réka Ivánkó, 2021

Detecting and Blocking log4shell with Isovalent Cilium Enterprise - Jed Salazar, 2021

Hands-on lab

Security Observability with eBPF and Tetragon - Natália Réka Ivánkó, Roland Wolters, Raphaël Pinson