Introducing a framework for regression testing against Linux kernels

Federico Di Pierro, Aldo Lacuku
Sep 21, 2023

Tags: Falco Libs, Kmod, eBPF

There are a few foundational technologies that empower the Cloud Native ecosystem. Containers are one of them, and one of the foundations of containerization is the Linux kernel itself. With Falco, we are developing a runtime security tool that hooks directly into the kernel to collect information about the system and notify about malicious behavior.

We found that we needed to validate our drivers against various versions of the Linux kernel, to ensure that with each iteration of our drivers, supported kernels remain unaffected. To elaborate, we lacked a means to guarantee that a new driver release could:

* Successfully compile on multiple kernel versions.
* Pass the eBPF verifier when executed on various kernel versions.
* Operate as expected, such as retrieving kernel events, across multiple kernel versions.

To address this, we started a major effort. Initially, a proposal was discussed and incorporated into the libs repository. Since this was a fairly novel area, there were no pre-existing tools available to tackle it. Consequently, we embarked on the development of a completely new framework. Allow us to introduce you to the kernel testing framework.

Components of a kernel testing framework

Given the inherent characteristics of the challenge, we need to set up a complete virtual machine for each distinct kernel version. These tests should be executed automatically each time new code is integrated into our drivers, serving as a means to promptly identify any issue or flaw on the tested kernel versions. With these objectives in mind, our approach should fulfill the following requirements:

* Rapid and cost-effective VM creation: the process of creating these virtual machines should be efficient and budget-friendly.
* Effortless distribution of VM images: we should ensure easy sharing and deployment of the virtual machine images.
* Parallel execution of tests on multiple VMs: tests should run concurrently on each virtual machine to expedite the process.
* Reproducibility in local environments for debugging purposes: developers should be able to replicate the test environment locally to investigate and troubleshoot issues.
* Straightforward and user-friendly presentation of the test results: results should be presented in a simple and intuitive manner so that failures are immediately visible.

Ignite a Firecracker microVM

Weave Ignite is used to provision the Firecracker microVMs. Weave Ignite is an open-source tool designed for lightweight and fast virtual machine management. It enables users to effortlessly create and manage virtual machines (VMs) for various purposes, such as development, testing, and experimentation. One of the main reasons why we chose this tool is its capability to create Firecracker microVMs from kernels and rootfs packed as OCI images. Currently, we are using a patched version located in a forked repository. These patches were essential to enable booting kernels that require an initrd (initial ramdisk).
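To give a feel for the workflow Ignite enables, here is a minimal sketch of booting, entering, and removing a Firecracker microVM from OCI-packaged images. The image names and kernel version are upstream Weave Ignite examples rather than the Falco-built images, and in the framework these steps are driven by the patched fork through Ansible rather than typed by hand.

```bash
# Minimal sketch, assuming upstream Weave Ignite example images
# (not the Falco kernel/rootfs OCI images described below).
ignite run weaveworks/ignite-ubuntu \
  --kernel-image weaveworks/ignite-kernel:5.10.51 \
  --cpus 2 \
  --memory 1GB \
  --ssh \
  --name kernel-test-vm

# Get a shell inside the microVM once it has booted.
ignite ssh kernel-test-vm

# Tear the microVM down once testing is done.
ignite rm -f kernel-test-vm
```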
Kernel & Rootfs OCI images

Virtual machines consist of two essential layers: the kernel and the rootfs. These layers are packaged and distributed as OCI (Open Container Initiative) images. The kernel image contains the kernel that the virtual machine relies on, while the rootfs image serves as the fundamental building block of the virtual machine, providing the filesystem necessary to boot it. Typically, these rootfs images contain a Linux distribution. For more information on how we build them, please check the available images documentation.

Ansible Playbooks

Automation is accomplished through Ansible. A collection of playbooks is responsible for:

* Orchestrating the provisioning of microVMs.
* Configuring the machines.
* Retrieving the code to be tested.
* Removing the microVMs once the testing process is completed.

Presenting test results

We wanted the test data to be publicly and easily accessible to anyone, so we had to find a way to represent the test output. Since there are 3 possible ways of instrumenting the kernel, namely the kernel module or one of the two available eBPF probes, the playbooks perform up to 3 tests. Because the modern eBPF probe is built into the Falco libraries, only 2 drivers need to be compiled. Each test has 3 possible results:

* success, when the test passes
* error, when the test fails
* skipped, when the test is not runnable on that kernel (for example, modern eBPF tests are skipped where the kernel does not support it)

The natural way of dealing with all of this was to develop a small tool that, given the output root folder as input, generates a markdown matrix with the results. While reviewing the first version of the markdown matrix, we realized it would be even better if errors were also attached to the markdown, allowing for a more streamlined visualization of the results. This is the format we settled on; it can be found on the libs GitHub pages: [matrix]
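To give an idea of what such a tool does, here is a minimal shell sketch that walks a results folder and prints a markdown matrix. The directory layout, file names, and result encoding are assumptions made for illustration; they are not the actual format or implementation used by the kernel-testing project.

```bash
#!/usr/bin/env bash
# Illustrative sketch only: the layout below is an assumption, not the real
# kernel-testing output format.
# Assumed layout: <root>/<kernel-version>/<driver>.result, where each file
# contains one of: success | error | skipped.
set -euo pipefail

ROOT="${1:?usage: $0 <output-root-folder>}"

echo "| Kernel | kmod | bpf | modern-bpf |"
echo "|--------|------|-----|------------|"

for kernel_dir in "$ROOT"/*/; do
  kernel="$(basename "$kernel_dir")"
  row="| $kernel |"
  for driver in kmod bpf modern-bpf; do
    result_file="$kernel_dir$driver.result"
    if [[ -f "$result_file" ]]; then
      row+=" $(<"$result_file") |"
    else
      row+=" skipped |"
    fi
  done
  echo "$row"
done
```

Run against a populated results folder, a sketch like this prints one markdown row per kernel with the per-driver outcome, which is the kind of matrix published on the libs GitHub pages.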
How we use it

We implemented a new GitHub Actions workflow in the libs repository that triggers on pushes to master, using an x86_64 and an aarch64 node with virtualization capabilities, provided by the CNCF.

The workflow itself is very simple, since it runs the testing framework on the self-hosted nodes just like you would run it locally:

```yaml
jobs:
  test-kernels:
    strategy:
      fail-fast: false
      matrix:
        architecture: [X64, ARM64] # We use a matrix to run our job on both supported archs
    # Since github actions do not support arm64 runners and they do not offer virtualization capabilities,
    # we need to use self hosted nodes.
    runs-on: [ "self-hosted", "linux", "${{matrix.architecture}}" ]
    steps:
      # We clone the kernel-testing repo, generate vars.yaml (ie: input options for the kernel-testing run)
      # and run needed ansible playbooks one by one, directly on each node.
      - name: Checkout
        uses: actions/checkout@v3
        with:
          repository: falcosecurity/kernel-testing
          ref: v0.2.3

      - name: Generate vars yaml
        working-directory: ./ansible-playbooks
        run: |
          LIBS_V=${{ github.event.inputs.libsversion }}
          LIBS_VERSION=${LIBS_V:-${{ github.ref_name }}}
          cat > vars.yml <
```
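Because the workflow simply runs the same playbooks a developer would run by hand, reproducing a run locally amounts to checking out the kernel-testing repository and invoking Ansible. The sketch below uses a placeholder playbook name, since the actual entry points are documented in the repository itself.

```bash
# Sketch only: <playbook> is a placeholder, not necessarily the actual entry
# point of falcosecurity/kernel-testing; check the repository README.
git clone https://github.com/falcosecurity/kernel-testing.git
cd kernel-testing/ansible-playbooks

# vars.yml carries the run options (e.g. which libs version to test),
# mirroring what the GitHub Actions workflow generates above.
ansible-playbook <playbook>.yml --extra-vars "@vars.yml"
```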