
= Linux Kernel Module Cheat
:cirosantilli-media-base:
:description: The perfect emulation setup to study and develop the <> v5.9.2, kernel modules, <>, <> and x86_64, ARMv7 and ARMv8 <> and <> assembly, <>, <> and <>. <> and <> just work. Powered by <> and <>. Highly automated. Thoroughly documented. Automated <>. "Tested" in an Ubuntu 20.04 host.
:idprefix:
:idseparator: -
:nofooter:
:sectanchors:
:sectlinks:
:sectnumlevels: 6
:sectnums:
:toc-title:
:toc: macro
:toclevels: 6


TL;DR: xref:qemu-buildroot-setup-getting-started[xrefstyle=full]

The source code for this page is located at: []. Due to [a GitHub limitation], this README is too long and is not fully rendered on GitHub, so either use: or <>.





The most important functionality of this repository is the `--china` option, sample usage:

....
./setup
./run --china > index.html
firefox index.html
....

see also:

The secondary systems programming functionality is described in the sections below, starting from <>.


== Getting started

Each child section describes a possible different setup for this repo.

If you don't know which one to go for, start with <>.

Design goals of this project are documented at: xref:design-goals[xrefstyle=full].

=== Should you waste your life with systems programming?

Being the hardcore person who fully understands an important complex system such as a computer does have a nice ring to it, doesn't it?

But before you dedicate your life to this nonsense, do consider the following points:

* almost all contributions to the kernel are done by large companies, and if you are not an employee in one of them, you are likely not going to be able to do much.
+
This can be inferred from the fact that the `drivers/` directory is by far the largest in the kernel.
+
The kernel is of course just an interface to hardware, and the hardware developers start developing their kernel stuff even before specs are publicly released, both to help with hardware development and to have things working when the announcement is made.
+
Furthermore, I believe that there are in-tree devices which have never been properly publicly documented. Linus is of course fine with this, since code == documentation for him, but it is not as easy for mere mortals.
+
There are some less hardware bound higher level layers in the kernel which might not require being in a hardware company, and a few people must be living off them.
+
But of course, those are heavily motivated by the underlying hardware characteristics, and it is very likely that most of the people working there were previously at a hardware company.
+
In that sense, therefore, the kernel is not as open as one might want to believe.
+
Of course, if there is some [super useful and undocumented hardware that is just waiting there to be reverse engineered], then that's a much juicier target :-)
* it is impossible to become rich with this knowledge.
+
This is partly implied by the fact that you need to be in a big company to make useful low level things, and therefore you will only be a tiny cog in the engine.
+
The key problem is that the entry cost of hardware design is just too insanely high for startups in general.
* is learning this the most useful thing that you think you can do for society? Or are you just learning it for job security and for having a nice sounding title?
+
I'm not a huge fan of the person, but I think Jobs said it right: first determine the useful goal, and then backtrack down to the most efficient thing you can do to reach it.
* there are two things that sadden me compared to physics-based engineering:
+
--
** you will never become eternally famous. All tech disappears sooner or later, while laws of nature, at least as useful approximations, stay unchanged.
** every problem that you face is caused by imperfections introduced by other humans.
+
It is much easier to accept limitations of physics, and even natural selection in biology, which are not produced by a sentient being (?).
--
+
Physics-based engineering, just like low level hardware, is of course completely closed source however, since wrestling against the laws of physics is about the most expensive thing humans can do, so there's also a downside to it.

Are you fine with those points, and ready to continue wasting your life with this crap?

Good. In that case, read on, and let's have some fun together ;-)

Related: <>.

=== QEMU Buildroot setup

==== QEMU Buildroot setup getting started

This setup has been mostly tested on Ubuntu. For other host operating systems see: xref:supported-hosts[xrefstyle=full]. For greater stability, consider using the <> instead of master:

Reserve 12 GB of disk and run:

....
git clone
cd linux-kernel-module-cheat
./setup
./build --download-dependencies qemu-buildroot
./run
....

You don't need to clone recursively even though we have Git submodules: `--download-dependencies` fetches just the submodules that you need for this build, to save time.

If something goes wrong, see: xref:common-build-issues[xrefstyle=full] and use our issue tracker:

The initial build will take a while (30 minutes to 2 hours) to clone and build, see <> for more details.

If you don't want to wait, you could also try the following faster but much more limited methods:

  • <>
  • <>

but you will soon find that they are simply not enough if you are anywhere near serious about systems programming.


Once the build finishes, QEMU opens up, leaving you in the <>, and you can start playing with the kernel modules inside the simulated system:

....
insmod hello.ko
insmod hello2.ko
rmmod hello
rmmod hello2
....

This should print to the screen:

....
hello init
hello2 init
hello cleanup
hello2 cleanup
....

which are `printk` messages from the `init` and `cleanup` methods of those modules.

Sources:
  • link:kernel_modules/hello.c[]
  • link:kernel_modules/hello2.c[]
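For reference, such modules follow the usual `module_init`/`module_exit` pattern; a minimal sketch along these lines (not necessarily the exact contents of link:kernel_modules/hello.c[]) is:

```c
/* Minimal "hello" kernel module sketch: prints on insmod and rmmod.
 * Built against the kernel tree with the usual obj-m kbuild machinery. */
#include <linux/module.h>

static int myinit(void)
{
	pr_info("hello init\n");
	return 0;
}

static void myexit(void)
{
	pr_info("hello cleanup\n");
}

module_init(myinit);
module_exit(myexit);
MODULE_LICENSE("GPL");
```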

Quit QEMU with:

....
Ctrl-A X
....

See also: xref:quit-qemu-from-text-mode[xrefstyle=full].

All available modules can be found in the link:kernel_modules[] directory.

It is super easy to build for different <>, just use the `--arch` option:
....
./setup
./build --arch aarch64 --download-dependencies qemu-buildroot
./run --arch aarch64
....

To avoid typing `--arch aarch64` many times, you can set the default arch as explained at: xref:default-command-line-arguments[xrefstyle=full]

I now urge you to read the following sections which contain widely applicable information:

* <>
* <>
* <>
* Linux kernel
** <>
** <>

Once you use <> and <>, your terminal will look a bit like this:

.... [ 1.451857] input: AT Translated Set 2 keyboard as /devices/platform/i8042/s1│loading @0xffffffffc0000000: ../kernelmodules-1.0//timer.ko [ 1.454310] ledtrig-cpu: registered to indicate activity on CPUs │(gdb) b lkmctimercallback [ 1.455621] usbcore: registered new interface driver usbhid │Breakpoint 1 at 0xffffffffc0000000: file /home/ciro/bak/git/linux-kernel-module [ 1.455811] usbhid: USB HID core driver │-cheat/out/x8664/buildroot/build/kernelmodules-1.0/./timer.c, line 28. [ 1.462044] NET: Registered protocol family 10 │(gdb) c [ 1.467911] Segment Routing with IPv6 │Continuing. [ 1.468407] sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver │ [ 1.470859] NET: Registered protocol family 17 │Breakpoint 1, lkmctimercallback (data=0xffffffffc0002000 ) [ 1.472017] 9pnet: Installing 9P2000 support │ at /linux-kernel-module-cheat//out/x8664/buildroot/build/ [ 1.475461] schedclock: Marking stable (1473574872, 0)->(1554017593, -80442)│kernelmodules-1.0/./timer.c:28 [ 1.479419] ALSA device list: │28 { [ 1.479567] No soundcards found. │(gdb) c [ 1.619187] ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 │Continuing. 
[ 1.622954] ata2.00: configured for MWDMA2 │ [ 1.644048] scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ P5│Breakpoint 1, lkmctimercallback (data=0xffffffffc0002000 ) [ 1.741966] tsc: Refined TSC clocksource calibration: 2904.010 MHz │ at /linux-kernel-module-cheat//out/x8664/buildroot/build/ [ 1.742796] clocksource: tsc: mask: 0xffffffffffffffff maxcycles: 0x29dc0f4s│kernelmodules-1.0/./timer.c:28 [ 1.743648] clocksource: Switched to clocksource tsc │28 { [ 2.072945] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8043│(gdb) bt [ 2.078641] EXT4-fs (vda): couldn't mount as ext3 due to feature incompatibis│#0 lkmctimercallback (data=0xffffffffc0002000 ) [ 2.080350] EXT4-fs (vda): mounting ext2 file system using the ext4 subsystem│ at /linux-kernel-module-cheat//out/x8664/buildroot/build/ [ 2.088978] EXT4-fs (vda): mounted filesystem without journal. Opts: (null) │kernelmodules-1.0/./timer.c:28 [ 2.089872] VFS: Mounted root (ext2 filesystem) readonly on device 254:0. │#1 0xffffffff810ab494 in calltimerfn (timer=0xffffffffc0002000 , [ 2.097168] devtmpfs: mounted │ fn=0xffffffffc0000000 <lkmctimercallback>) at kernel/time/timer.c:1326 [ 2.126472] Freeing unused kernel memory: 1264K │#2 0xffffffff810ab71f in expiretimers (head=, [ 2.126706] Write protecting the kernel read-only data: 16384k │ base=) at kernel/time/timer.c:1363 [ 2.129388] Freeing unused kernel memory: 2024K │#3 _runtimers (base=) at kernel/time/timer.c:1666 [ 2.139370] Freeing unused kernel memory: 1284K │#4 runtimersoftirq (h=) at kernel/time/timer.c:1692 [ 2.246231] EXT4-fs (vda): warning: mounting unchecked fs, running e2fsck isd│#5 0xffffffff81a000cc in _dosoftirq () at kernel/softirq.c:285 [ 2.259574] EXT4-fs (vda): re-mounted. 
Opts: blockvalidity,barrier,userxatr│#6 0xffffffff810577cc in invokesoftirq () at kernel/softirq.c:365 hello S98 │#7 irqexit () at kernel/softirq.c:405 │#8 0xffffffff818021ba in exitingirq () at ./arch/x86/include/asm/apic.h:541 Apr 15 23:59:23 login[49]: root login on 'console' │#9 smpapictimerinterrupt (regs=) hello /root/.profile │ at arch/x86/kernel/apic/apic.c:1052

insmod /timer.ko │#10 0xffffffff8180190f in apictimerinterrupt ()

[ 6.791945] timer: loading out-of-tree module taints kernel. │ at arch/x86/entry/entry_64.S:857

[ 7.821621] 4294894248 │#11 0xffffffff82003df8 in initthreadunion ()

[ 8.851385] 4294894504 │#12 0x0000000000000000 in ?? () │(gdb) ....

==== How to hack stuff

Besides a seamless <>, this project also aims to make it effortless to modify and rebuild several major components of the system, to serve as an awesome development setup.

===== Your first Linux kernel hack

Let's hack up the <>, which is an easy place to start.

Open the file:

....
vim submodules/linux/init/main.c
....

and find the `start_kernel` function, then add there a:

....
pr_info("I'VE HACKED THE LINUX KERNEL!!!");
....

Then rebuild the Linux kernel, quit QEMU and reboot the modified kernel:

....
./build-linux
./run
....

and, surely enough, your message has appeared at the beginning of the boot:

....
<6>[    0.000000] I'VE HACKED THE LINUX KERNEL!!!
....

So you are now officially a Linux kernel hacker, way to go!

We could have used just link:build[] to rebuild the kernel as in the <> instead of link:build-linux[], but building just the required individual components is preferred during development:

  • saves a few seconds from parsing Make scripts and reading timestamps
  • makes it easier to understand what is being done in more detail
  • allows passing more specific options to customize the build

The link:build[] script is just a lightweight wrapper that calls the smaller build scripts, and you can see what `./build` does with:

....
./build --dry-run
....

see also: <>.

When you reach difficulties, QEMU makes it possible to easily GDB step debug the Linux kernel source code, see: xref:gdb[xrefstyle=full].

===== Your first kernel module hack

Edit link:kernel_modules/hello.c[] to contain:

....
pr_info("hello init hacked\n");
....

and rebuild with:

....
./build-modules
....

Now there are two ways to test it out: the fast way, and the safe way.

The fast way is, without quitting or rebooting QEMU, just directly re-insert the module with:

....
insmod /mnt/9p/out_rootfs_overlay/lkmc/hello.ko
....

and the new `hello init hacked` message should now show on the terminal.

This works because we have a <<9p>> mount there setup by default, which mounts the host directory that contains the build outputs on the guest:

....
ls "$(./getvar out_rootfs_overlay_dir)"
....

The fast method is slightly risky because your previously insmodded buggy kernel module attempt might have corrupted the kernel memory, which could affect future runs.

Such failures are however unlikely, and you should be fine if you don't see anything weird happening.

The safe way is to first <>, rebuild the modules, put them in the root filesystem, and then reboot:

....
./build-modules
./build-buildroot
./run --eval-after 'insmod hello.ko'
....

`./build-buildroot` is required after `./build-modules` because it re-generates the root filesystem with the modules that we compiled.

You can see that `./build` does that as well, by running:

....
./build --dry-run
....

See also: <>.

`--eval-after` is optional: you could just type `insmod hello.ko` in the terminal, but this makes it run automatically at the end of boot, and then drops you into a shell.

If the guest and host are the same arch, typically x86_64, you can speed up boot further with <>:

....
./run --kvm
....

All of this put together makes the safe procedure acceptably fast for regular development as well.

It is also easy to GDB step debug kernel modules with our setup, see: xref:gdb-step-debug-kernel-module[xrefstyle=full].

===== Your first glibc hack

We use <>, and it is tracked as an unmodified submodule at link:submodules/glibc[], at the exact same version that Buildroot has it, which can be found at:[package/glibc/]. Buildroot 2018.05 applies no patches.

Let's hack up the `puts` function:
....
./build-buildroot -- glibc-reconfigure
....

with the patch:

....
diff --git a/libio/ioputs.c b/libio/ioputs.c
index 706b20b492..23185948f3 100644
--- a/libio/ioputs.c
+++ b/libio/ioputs.c
@@ -38,8 +38,9 @@ _IO_puts (const char *str)
   if ((_IO_vtable_offset (_IO_stdout) != 0
        || _IO_fwide (_IO_stdout, -1) == -1)
       && _IO_sputn (_IO_stdout, str, len) == len
+      && _IO_sputn (_IO_stdout, " hacked", 7) == 7
       && _IO_putc_unlocked ('\n', _IO_stdout) != EOF)
-    result = MIN (INT_MAX, len + 1);
+    result = MIN (INT_MAX, len + 1 + 7);

   _IO_release_lock (_IO_stdout);
   return result;
....

And then:

....
./run --eval-after './c/hello.out'
....

outputs:

....
hello hacked
....


We can also test our hacked glibc on <> with:

.... ./run --userland userland/c/hello.c ....

I just noticed that this is actually a good way to develop glibc for other archs.

In this example, we got away without recompiling the userland program because we made a change that did not affect the glibc ABI, see this answer for an introduction to ABI stability:

Note that for arch agnostic features that don't rely on bleeding edge kernel changes that your host doesn't yet have, you can develop glibc natively as explained at:

  • more focus on symbol versioning, but no one knows how to do it, so I answered

Tested on a30ed0f047523ff2368d421ee2cce0800682c44e + 1.

===== Your first Binutils hack

Have you ever felt that a single `inc` instruction was not enough? Really? Me too!

So let's hack the <>, which is part of [GNU Binutils], to add a new shiny version of `inc` called `myinc`!
GCC uses GNU GAS as its backend, so we will test our new mnemonic with an <> test program: link:userland/arch/x86_64/binutils_hack.c[], which is just a copy of link:userland/arch/x86_64/binutils_nohack.c[] but with `myinc` instead of `inc`.

The inline assembly is disabled by default, so first modify the source to enable it.

Then, try to build userland:

....
./build-userland
....

and watch it fail with:

....
binutils_hack.c:8: Error: no such instruction: `myinc %rax'
....

Now, edit the file

....
vim submodules/binutils-gdb/opcodes/i386-tbl.h
....

and add a copy of the `inc` instruction just next to it, but with the new name `myinc`:

....
diff --git a/opcodes/i386-tbl.h b/opcodes/i386-tbl.h
index af583ce578..3cc341f303 100644
--- a/opcodes/i386-tbl.h
+++ b/opcodes/i386-tbl.h
@@ -1502,6 +1502,19 @@ const insn_template i386_optab[] =
     { { { 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
           0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0,
           1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 } } } },
+  { "myinc", 1, 0xfe, 0x0, 1,
+    { { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 } },
+    { 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+      0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
+      0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+      0, 0, 0, 0, 0, 0 },
+    { { { 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+          0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0,
+          1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 } } } },
   { "sub", 2, 0x28, None, 1,
     { { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
         0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
....

Finally, rebuild Binutils and userland, and test our program with <>:

....
./build-buildroot -- host-binutils-rebuild
./build-userland --static
./run --static --userland userland/arch/x86_64/binutils_hack.c
....

and we see that `myinc` worked, since the assert did not fail!

Tested on b60784d59bee993bf0de5cde6c6380dd69420dda + 1.

===== Your first GCC hack

OK, now time to hack GCC.

For convenience, let's use the <>.

If we run the program link:userland/c/gcc_hack.c[]:

....
./build-userland --static
./run --static --userland userland/c/gcc_hack.c
....

it produces the normal boring output:

....
i = 2
j = 0
....

So how about we swap `++` and `--` to make things more fun?

Open the file:

....
vim submodules/gcc/gcc/c/c-parser.c
....

and find the function `c_parser_postfix_expression_after_primary`.

In that function, swap the `CPP_PLUS_PLUS` and `CPP_MINUS_MINUS` cases:
....
diff --git a/gcc/c/c-parser.c b/gcc/c/c-parser.c
index 101afb8e35f..89535d1759a 100644
--- a/gcc/c/c-parser.c
+++ b/gcc/c/c-parser.c
@@ -8529,7 +8529,7 @@ c_parser_postfix_expression_after_primary (c_parser *parser,
 	      expr.original_type = DECL_BIT_FIELD_TYPE (field);
 	  }
 	  break;
-	case CPP_PLUS_PLUS:
+	case CPP_MINUS_MINUS:
 	  /* Postincrement.  */
 	  start = expr.get_start ();
 	  finish = c_parser_peek_token (parser)->get_finish ();
@@ -8548,7 +8548,7 @@ c_parser_postfix_expression_after_primary (c_parser *parser,
 	  expr.original_code = ERROR_MARK;
 	  expr.original_type = NULL;
 	  break;
-	case CPP_MINUS_MINUS:
+	case CPP_PLUS_PLUS:
 	  /* Postdecrement.  */
 	  start = expr.get_start ();
 	  finish = c_parser_peek_token (parser)->get_finish ();
....

Now rebuild GCC and the program, and re-run it:

....
./build-buildroot -- host-gcc-final-rebuild
./build-userland --static
./run --static --userland userland/c/gcc_hack.c
....

and the new output is now:

....
i = 0
j = 2
....

We need to use the ugly `host-gcc-final-rebuild` target because GCC has two packages in Buildroot. No one is able to explain precisely with a minimal example why this is required:
==== About the QEMU Buildroot setup

What QEMU and Buildroot are:

  • <>
  • <>

This is our reference setup, and the best supported one, use it unless you have good reason not to.

It was historically the first one we did, and all sections have been tested with this setup unless explicitly noted.

Read the following sections for further introductory material:

  • <>
  • <>

[[dry-run]]
=== Dry run to get commands for your project

One of the major features of this repository is that we try to support the `--dry-run` option really well for all scripts.

This option, as the name suggests, outputs the external commands that would be run (or more precisely: equivalent commands), without actually running them.

This allows you to just clone this repository and get full working commands to integrate into your project, without having to build or use this setup further!

For example, we can obtain a QEMU run for the file link:userland/c/hello.c[] in <> by adding `--dry-run` to the normal command:

....
./run --dry-run --userland userland/c/hello.c
....

which as of LKMC a18f28e263c91362519ef550150b5c9d75fa3679 + 1 outputs:

....
+ /path/to/linux-kernel-module-cheat/out/qemu/default/opt/x86_64-linux-user/qemu-x86_64 \
  -L /path/to/linux-kernel-module-cheat/out/buildroot/build/default/x86_64/target \
  -r 5.2.1 \
  -seed 0 \
  -trace enable=load_file,file=/path/to/linux-kernel-module-cheat/out/run/qemu/x86_64/0/trace.bin \
  -cpu max \
  /path/to/linux-kernel-module-cheat/out/userland/default/x86_64/c/hello.out \
;
....

So observe that the command contains:

* `+`: sign to differentiate it from program stdout, much like Bash `-x` output. This is not a valid part of the generated Bash command however.
* the actual command, nicely indented and with arguments broken one per line, but with continuing backslashes so you can just copy paste it into a terminal.
+
For setups that don't support the newlines, e.g. <>, you can turn them off with
* `;`: both a valid part of the Bash command, and a visual mark of the end of the command

For the specific case of running emulators such as QEMU, the last command is also automatically placed in a file for your convenience and later inspection:

....
cat "$(./getvar run_dir)/"
....

Since we need this so often, the last run command is also stored for convenience at:

....
cat out/
....

although this won't of course work well for <>.


`--dry-run` also automatically specifies, in valid Bash shell syntax:

* environment variables used to run the command, with syntax `ENV_VAR_1=abc ENV_VAR_2=def ./some/command`
* change in working directory, with `cd /some/new/path && ./some/command`

=== gem5 Buildroot setup

==== About the gem5 Buildroot setup

This setup is like the <>, but it uses[gem5] instead of QEMU as a system simulator.

QEMU tries to run as fast as possible and give correct results at the end, but it does not tell us how many CPU cycles it takes to do something, just the number of instructions it ran. This kind of simulation is known as functional simulation.

The number of instructions executed is a very poor estimator of performance because in modern computers, a lot of time is spent waiting for memory requests rather than the instructions themselves.

gem5 on the other hand, can simulate the system in more detail than QEMU, including:

  • simplified CPU pipeline
  • caches
  • DRAM timing

and can therefore be used to estimate system performance, see: xref:gem5-run-benchmark[xrefstyle=full] for an example.

The downside of gem5 is that it is much slower than QEMU because of the greater simulation detail.

See <> for a more thorough comparison.

==== gem5 Buildroot setup getting started

For the most part, if you just add the `--emulator gem5` option or suffix to all commands, everything should magically work.

If you haven't built Buildroot yet for <>, you can build from the beginning with:

....
./setup
./build --download-dependencies gem5-buildroot
./run --emulator gem5
....

If you have already built previously, don't be afraid: gem5 and QEMU use almost the same root filesystem and kernel, so the new build will be fast.

Remember that the gem5 boot is <> than QEMU since the simulation is more detailed.

If you have a relatively new GCC version and the gem5 build fails on your machine, see: <>.

To get a terminal, open a new shell and run:

....
./gem5-shell
....

You can quit the shell without killing gem5 by typing tilde followed by a period:

....
~.
....

If you are inside <>, which I highly recommend, you can both run gem5 stdout and open the guest terminal on a split window with:

....
./run --emulator gem5 --tmux
....

See also: xref:tmux-gem5[xrefstyle=full].

At the end of boot, it might not be very clear that you have the shell since some <> messages may appear in front of the prompt like this:

....
<6>[    1.215329] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x1cd486fa865, max_idle_ns: 440795259574 ns
<6>[    1.215351] clocksource: Switched to clocksource tsc
....

but if you look closely, the `#` prompt marker is there already: just hit enter and a clear prompt line will appear.

If you forgot to open the shell and gem5 exited, you can inspect the terminal output post-mortem at:

....
less "$(./getvar --emulator gem5 m5out_dir)/system.pc.com1.device"
....

More gem5 information is present at: xref:gem5[xrefstyle=full]

Good next steps are:

  • <>: how to run a benchmark in gem5 full system, including how to boot Linux, checkpoint and restore to skip the boot on a fast CPU
  • <>: understand the output files that gem5 produces, which contain information about your run
  • <>: magic guest instructions used to control gem5
  • <>: how to add your own files to the image if you have a benchmark that we don't already support out of the box (also send a pull request!)

[[docker]]
=== Docker host setup

This repository has been tested inside clean[Docker] containers.

This is a good option if you are on a Linux host, but the native setup failed due to your weird host distribution, and you have better things to do with your life than to debug it. See also: xref:supported-hosts[xrefstyle=full].

For example, to do a <> inside Docker, run:

....
sudo apt-get install docker
./run-docker create && \
  ./run-docker sh -- ./build --download-dependencies qemu-buildroot
./run-docker
....

You are now left inside a shell in the Docker container! From there, just run as usual:

....
./run
....

The host git top level directory is mounted inside the guest with a[Docker volume], which means for example that you can use your host's GUI text editor directly on the files. Just don't forget that if you nuke that directory on the guest, then it gets nuked on the host as well!

Command breakdown:

* `./run-docker create`: create the image and container.
+
Needed only the very first time you use Docker, or if you run `./run-docker DESTROY` to restart from scratch, or to save some disk space.
+
The image and container name is `lkmc`. The container shows under:
+
....
docker ps -a
....
+
and the image shows under:
+
....
docker images
....
* `./run-docker`: open a shell on the container.
+
If it has not been started previously, start it. This can also be done explicitly with:
+
....
./run-docker start
....
+
Quit the shell as usual with `Ctrl-D`.
+
This can be called multiple times from different host terminals to open multiple shells.
* `./run-docker stop`: stop the container.
+
This might save a bit of CPU and RAM once you stop working on this project, but it should not be a lot.
* `./run-docker DESTROY`: delete the container and image.
+
This doesn't really clean the build, since we mount the guest's working directory on the host git top-level, so you basically just got rid of the `apt-get` installs.
+
To actually delete the Docker build, run on host:
+
....
# sudo rm -rf out.docker
....

To use <> from inside Docker, you need a second shell inside the container. You can either do that from another shell with:

....
./run-docker
....

or even better, by starting a <> session inside the container. We install `tmux` by default in the container.

You can also start a second shell and run a command in it at the same time with:

....
./run-docker sh -- ./run-gdb start_kernel
....

To use <> from Docker, run:

....
./run --graphic --vnc
....

and then on host:

....
sudo apt-get install vinagre
./vnc
....

TODO: make files created inside Docker be owned by the current user in host instead of `root`.

[[prebuilt]]
=== Prebuilt setup

==== About the prebuilt setup

This setup uses prebuilt binaries that we upload to GitHub from time to time.

We don't currently provide a full prebuilt because it would be too big to host freely, notably because of the cross toolchain.

Our prebuilts currently include:

* <> binaries
** Linux kernel
** root filesystem
* <> binaries for QEMU

For more details, see our <>.

Advantage of this setup: it saves time and disk space on the initial install, which is expensive largely due to building the toolchain.

The limitations are severe however:

* can't <>, since the source and cross toolchain with GDB are not available. Buildroot cannot easily use a host toolchain: xref:prebuilt-toolchain[xrefstyle=full].
+
Maybe we could work around this by just downloading the kernel source somehow, and using a host prebuilt GDB, but we felt that it would be too messy and unreliable.
* you won't get the latest version of this repository. Our <> attempt to automate builds failed, and storing a release for every commit would likely make GitHub mad at us anyway.
* <> is not currently supported. The major blocking point is how to avoid distributing the kernel images twice: once for gem5, which uses `vmlinux`, and once for QEMU, which uses compressed images, see also: <>.

This setup might be good enough for those developing simulators, as that requires less image modification. But once again, if you are serious about this, why not just let your computer build the <> while you take a coffee or a nap? :-)

==== Prebuilt setup getting started

Check out the latest tag and use the Ubuntu packaged QEMU to boot Linux:

....
sudo apt-get install qemu-system-x86
git clone
cd linux-kernel-module-cheat
git checkout "$(git rev-list --tags --max-count=1)"
./release-download-latest
unzip lkmc-*.zip
./run --qemu-which host
....

You have to check out the latest tag to ensure that the scripts match the release format:

This is known not to work for aarch64 on an Ubuntu 16.04 host with QEMU 2.5.0, presumably because QEMU is too old: the terminal does not show any output. I haven't investigated why.

Or to run a baremetal example instead:

....
./run \
  --arch aarch64 \
  --baremetal userland/c/hello.c \
  --qemu-which host \
;
....

Be saner and use our custom built QEMU instead:

....
./setup
./build --download-dependencies qemu
./run
....

To build the kernel modules as in <> do:

....
git submodule update --depth 1 --init --recursive "$(./getvar linux_source_dir)"
./build-linux --no-modules-install -- modules_prepare
./build-modules --gcc-which host
./run
....

TODO: for now the only way to test those modules out without <> is with 9p, since we currently rely on Buildroot to manipulate the root filesystem.

Command explanation:

* `modules_prepare` does the minimal build procedure required on the kernel for us to be able to compile the kernel modules, and is way faster than doing a full kernel build. A full kernel build would also work however.
* `--gcc-which host` selects your host Ubuntu packaged GCC, since you don't have the Buildroot toolchain
* `--no-modules-install` is required, since otherwise the `make modules_install` target we run by default fails, as the kernel wasn't built

To modify the Linux kernel, build and use it as usual:

....
git submodule update --depth 1 --init --recursive "$(./getvar linux_source_dir)"
./build-linux
./run
....

////
For gem5, do:

....
git submodule update --init --depth 1 "$(./getvar linux_source_dir)"
sudo apt-get install qemu-utils
./build-gem5
./run --emulator gem5 --qemu-which host
....

`qemu-utils` is required because we currently distribute files which <>, so we need `qemu-img` to extract them first.

The Linux kernel is required to convert the compressed kernel image which QEMU understands into the raw vmlinux that gem5 understands:
////

////
[[ubuntu]]
=== Ubuntu guest setup

==== About the Ubuntu guest setup

This setup is similar to <>, but instead of using Buildroot for the root filesystem, it downloads an Ubuntu image with Docker, and uses that as the root filesystem.

The rationale for choice of Ubuntu as a second distribution in addition to Buildroot can be found at: xref:linux-distro-choice[xrefstyle=full]

Advantages over Buildroot:

  • saves build time
  • you get to play with a huge selection of Debian packages out of the box
  • more representative of most non-embedded production systems than BusyBox


  • less visibility: the fact that the question has no answer makes me cringe
  • less compatibility, e.g. no one knows what the officially supported cross compilers are:

Docker is used here just as an image download provider since it has a wide variety of images. Why we don't just download the regular Ubuntu disk image:

  • that image is not ready to boot, but rather goes into an interactive installer:
  • the default Ubuntu image ships a large collection of software and is therefore a large download. The Docker version is much more minimal.

One alternative would be to use[Ubuntu base] which can be downloaded from: That provides a root filesystem

and comes very close to what we obtain with Docker, but without the need for Docker.

==== Ubuntu guest setup getting started


....
sudo ./build-docker
./run --docker
....

`sudo` is required for Docker operations:
////

[[host]] === Host kernel module setup


This method runs the kernel modules directly on your host computer without a VM, and saves you the compilation time and disk usage of the virtual machine method.

It has however severe limitations:

  • can't control which kernel version and build options to use. So some of the modules will likely not compile because of kernel API changes, since[the Linux kernel does not have a stable kernel module API].
  • bugs can easily break your system. E.g.:
    ** segfaults can trivially lead to a kernel crash, and require a reboot
    ** your disk could get erased. Yes, this can also happen with `sudo` from userland. But you should not use `sudo` when developing newbie programs. And for the kernel you don't have the choice not to use `sudo`.
    ** even more subtle system corruption such as[not being able to rmmod]
  • can't control which hardware is used, notably the CPU architecture
  • can't step debug it with <> easily. The alternatives are[JTAG] or <>, but those are less reliable, and require extra hardware.

Still interested?

.... ./build-modules --host ....

Compilation will likely fail for some modules because of kernel or toolchain differences that we can't control on the host.

The best workaround is to compile just your modules with:

.... ./build-modules --host -- hello hello2 ....

which is equivalent to:

....
./build-modules \
  --gcc-which host \
  --host \
  -- \
  kernel_modules/hello.c \
  kernel_modules/hello2.c \
;
....

Or just remove the `.c` extension from the failing files and try again:

....
cd "$(./getvar kernel_modules_source_dir)"
mv broken.c broken.c~
....

Once you manage to compile, and have come to terms with the fact that this may blow up your host, try it out with:

....
cd "$(./getvar kernel_modules_build_host_subdir)"
sudo insmod hello.ko

# Our module is there.
sudo lsmod | grep hello

# Last message should be: hello init
dmesg -T

sudo rmmod hello

# Last message should be: hello exit
dmesg -T

# Not present anymore.
sudo lsmod | grep hello
....

==== Hello host

Minimal host build system example:

....
cd hello_host_kernel_module
make
sudo insmod hello.ko
dmesg
sudo rmmod hello.ko
dmesg
....
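Such host build systems generally follow the canonical out-of-tree kbuild pattern: a two-phase Makefile that re-invokes `make` inside the kernel build tree of the running kernel. The following is only a hypothetical minimal sketch of that pattern, assuming kernel headers for the running kernel are installed at the usual `/lib/modules/$(uname -r)/build` location; the actual Makefile in the repository may differ:

....
# Hypothetical minimal kbuild Makefile for an out-of-tree hello.ko.
# obj-m asks kbuild to compile hello.c into the module hello.ko.
obj-m += hello.o

# Recurse into the kernel build tree of the running kernel,
# pointing M= back at this directory.
all:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(CURDIR) modules

clean:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(CURDIR) clean
....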

=== Userland setup

==== About the userland setup

In order to test the kernel and emulators, userland content in the form of executables and scripts is of course required, and we store it mostly under:

  • link:userland/[]
  • <>
  • <>

When we started this repository, it only contained content that interacted very closely with the kernel, or that required performance analysis.

However, we soon started to notice that this had an increasing overlap with other userland test repositories: we were duplicating build and test infrastructure and even some examples.

Therefore, we decided to consolidate other userland tutorials that we had scattered around into this repository.

Notable userland content that has moved or is moving into this repository includes:

  • <>
  • <>
  • <>
  • <>
  • <>

==== Userland setup getting started

There are several ways to run our <>, notably:

  • natively on the host, as shown at: xref:userland-setup-getting-started-natively[xrefstyle=full]. This can only run examples compatible with your host CPU architecture and OS, but has the fastest setup and runtimes.
  • from user mode simulation, with either:
    ** the host prebuilt toolchain: xref:userland-setup-getting-started-with-prebuilt-toolchain-and-qemu-user-mode[xrefstyle=full]
    ** the Buildroot toolchain you built yourself: xref:qemu-user-mode-getting-started[xrefstyle=full]
    This setup:
    ** can run most examples, including those for other CPU architectures, with the notable exception of examples that rely on kernel modules
    ** can run reproducible approximate performance experiments with gem5, see e.g. <>
  • from full system simulation, as shown at: xref:qemu-buildroot-setup-getting-started[xrefstyle=full]. This is the most reproducible and controlled environment, and all examples work there. But it is also the slowest one to set up.

===== Userland setup getting started natively

With this setup, we will use the host toolchain and execute executables directly on the host.

No toolchain build is required, so you can just install your distro's toolchain and jump straight into it.

Build an example, run it, and clean it up in-tree with:

....
sudo apt-get install gcc
cd userland
./build c/hello
./c/hello.out
./build --clean
....

Source: link:userland/c/hello.c[].

Build an entire directory and test it:

....
cd userland
./build c
./test c
....

Build the current directory and test it:

....
cd userland/c
./build
./test
....

As mentioned at <>, tests under link:userland/libs[] require certain optional libraries to be installed, and are not built or tested by default.

You can install those libraries with:

....
cd linux-kernel-module-cheat
./setup
./build --download-dependencies userland-host
....

and then build the examples and test with:

....
./build --package-all
./test --package-all
....

Pass custom compiler options:

.... ./build --ccflags='-foptimize-sibling-calls -foptimize-strlen' --force-rebuild ....

Here we used `--force-rebuild` to force a rebuild, since the sources hadn't been modified since the last build.

Some CLI options have more specialized flags, e.g. `--optimization-level` for the <>:

.... ./build --optimization-level 3 --force-rebuild ....

See also <>.



The `build` scripts inside link:userland/[] are just symlinks to link:build-userland-in-tree[], which you can also use from the toplevel as:

.... ./build-userland-in-tree ./build-userland-in-tree userland/c ./build-userland-in-tree userland/c/hello.c ....

link:build-userland-in-tree[] is in turn just a thin wrapper around link:build-userland[]:

.... ./build-userland --gcc-which host --in-tree userland/c ....

So you can freely use any option supported by the `build-userland` script with them.

The situation is analogous for link:userland/test[], link:test-executables-in-tree[] and link:test-executables[], which are further documented at: xref:user-mode-tests[xrefstyle=full].

Do a cleaner out-of-tree build instead, and run the program:

....
./build-userland --gcc-which host --userland-build-id host
./run --emulator native --userland userland/c/hello.c --userland-build-id host
....

Here we:

  • put the host executables in a separate <> to avoid conflict with Buildroot builds.
  • ran with the `--emulator native` option to run the program natively

In this case you can debug the program with:

.... ./run --debug-vm --emulator native --userland userland/c/hello.c --userland-build-id host ....

as shown at: xref:debug-the-emulator[xrefstyle=full], although direct GDB host usage works as well of course.

===== Userland setup getting started with prebuilt toolchain and QEMU user mode

If you are too lazy to build the Buildroot toolchain and QEMU, but want to run e.g. ARM <> in <>, you can get away on Ubuntu 18.04 with just:

....
sudo apt-get install gcc-aarch64-linux-gnu qemu-system-aarch64
./build-userland \
  --arch aarch64 \
  --gcc-which host \
  --userland-build-id host \
;
./run \
  --arch aarch64 \
  --qemu-which host \
  --userland-build-id host \
  --userland userland/c/command_line_arguments.c \
  --cli-args 'asdf "qw er"' \
;
....


Command explanation:

  • `--gcc-which host`: use the host toolchain. We must pass this to `./run` as well, because QEMU must know which dynamic libraries to use. See also: xref:user-mode-static-executables[xrefstyle=full].
  • `--userland-build-id host`: put the host-built executables into a separate <>

This presents the usual trade-offs of using prebuilts, as mentioned at: xref:prebuilt[xrefstyle=full].

Other functionality is analogous, e.g. testing:

....
./test-executables \
  --arch aarch64 \
  --gcc-which host \
  --qemu-which host \
  --userland-build-id host \
;
....

and <>:

....
./run \
  --arch aarch64 \
  --gdb \
  --gcc-which host \
  --qemu-which host \
  --userland-build-id host \
  --userland userland/c/command_line_arguments.c \
  --cli-args 'asdf "qw er"' \
;
....

===== Userland setup getting started full system

First ensure that <> is working.

After doing that setup, you can already execute your userland programs from inside QEMU: the only missing step is how to rebuild executables and run them.

And the answer is exactly analogous to what is shown at: xref:your-first-kernel-module-hack[xrefstyle=full]

For example, if we modify link:userland/c/hello.c[] to print out something different, we can just rebuild it with:

.... ./build-userland ....

Source: link:build-userland[].

The initial full build already calls that script automatically for us.

Now, either run the program without rebooting by using the <<9p>> mount:

....
/mnt/9p/out_rootfs_overlay/c/hello.out
....

or shutdown QEMU, add the executable to the root filesystem:

.... ./build-buildroot ....

reboot and use the root filesystem as usual:

.... ./hello.out ....

=== Baremetal setup

==== About the baremetal setup

This setup does not use the Linux kernel nor Buildroot at all: it just runs your very own minimal OS.

x86 is not currently supported, only ARM: I had made some x86 bare metal examples at: but I'm too lazy to port them here now. Pull requests are welcome.

The main reason this setup is included in this project, despite the word "Linux" being in the project name, is that a lot of the emulator boilerplate can be reused for both use cases.

This setup allows you to make a tiny OS that runs just a few instructions, use it to fully control the CPU to better understand the simulators, or develop your own OS if you are into that.

You can also use C and a subset of the C standard library because we enable[Newlib] by default. See also:


Our C bare-metal compiler is built with[crosstool-NG]. If you have already built <> previously, you will end up with two GCCs installed. Unfortunately I don't see a solution for this, since we need separate toolchains for Newlib on baremetal and glibc on Linux:

==== Baremetal setup getting started


Every C file and assembly file inside link:baremetal/[] generates a separate baremetal image.

For example, to run link:baremetal/arch/aarch64/dump_regs.c[] in QEMU do:

....
./setup
./build --arch aarch64 --download-dependencies qemu-baremetal
./run --arch aarch64 --baremetal baremetal/arch/aarch64/dump_regs.c
....

And the terminal prints the values of certain system registers. This example prints registers that are only accessible from <> or higher, and thus could not be run in userland.

In addition to the examples under link:baremetal/[], several of the <> can also be run in baremetal! This is largely due to the <>.

The examples that work include most <> that don't rely on complicated syscalls such as threads, and almost all the <> examples.

The exact list of userland programs that work in baremetal is specified in <>, but you can also easily find it out with a <>:

.... ./test-executables --arch aarch64 --dry-run --mode baremetal ....

For example, we can run the C hello world link:userland/c/hello.c[] simply as:

.... ./run --arch aarch64 --baremetal userland/c/hello.c ....

and that outputs to the serial port the string:

.... hello ....

which QEMU shows on the host terminal.

To modify a baremetal program, simply edit the file, e.g.

.... vim userland/c/hello.c ....

and rebuild:

....
./build-baremetal --arch aarch64
./run --arch aarch64 --baremetal userland/c/hello.c
....

The `./build qemu-baremetal` that we ran previously is only needed for the initial build. That script calls link:build-baremetal[] for us, in addition to building prerequisites such as QEMU and crosstool-NG.

`build-baremetal` uses crosstool-NG, and so it must be preceded by link:build-crosstool-ng[], which `./build qemu-baremetal` also calls.

Now let's run link:userland/arch/aarch64/add.S[]:

.... ./run --arch aarch64 --baremetal userland/arch/aarch64/add.S ....

This time, the terminal does not print anything, which indicates success: if you look into the source, you will see that we just have an assertion there.

You can see a sample assertion fail in link:userland/c/assert_fail.c[]:

.... ./run --arch aarch64 --baremetal userland/c/assert_fail.c ....

and the terminal contains:

....
lkmc_exit_status_134
error: simulation error detected by parsing logs
....

and the exit status of our script is 1:

.... echo $? ....
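As an aside, the `134` in that log message is the standard shell encoding of death by signal: 128 plus the signal number, and a failed `assert` raises SIGABRT, which is signal 6, so 128 + 6 = 134. You can reproduce the encoding on any host shell, independently of this repository:

....
# A process killed by SIGABRT exits with status 128 + 6 = 134
# as seen by the parent shell, just like an assert() failure does.
sh -c 'kill -ABRT $$'
echo $?
# prints: 134
....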

You can run all the baremetal examples in one go and check that all assertions passed with:

.... ./test-executables --arch aarch64 --mode baremetal ....

To use gem5 instead of QEMU do:

....
./setup
./build --download-dependencies gem5-baremetal
./run --arch aarch64 --baremetal userland/c/hello.c --emulator gem5 --gdb-wait
....

and then, on another shell, open the gem5 terminal with:

.... ./gem5-shell ....

Or as usual, <> users can do both in one go with:

.... ./run --arch aarch64 --baremetal userland/c/hello.c --emulator gem5 --tmux ....

TODO: the carriage returns are a bit different than in QEMU, see: xref:gem5-baremetal-carriage-return[xrefstyle=full].

Note that `./build-baremetal` requires the `--emulator gem5` option, and generates separate executable images for QEMU and gem5, as can be seen from:

....
echo "$(./getvar --arch aarch64 --baremetal userland/c/hello.c --emulator qemu image)"
echo "$(./getvar --arch aarch64 --baremetal userland/c/hello.c --emulator gem5 image)"
....

This is unlike the Linux kernel that has a single image for both QEMU and gem5:

....
echo "$(./getvar --arch aarch64 --emulator qemu image)"
echo "$(./getvar --arch aarch64 --emulator gem5 image)"
....

The reason for that is that on baremetal we don't parse the <> from memory like the Linux kernel does, which tells the kernel for example the UART address, and many other system parameters.

gem5 also supports the `RealViewPBX` machine, which represents older hardware compared to the default `VExpress_GEM5_V1`:

....
./build-baremetal --arch aarch64 --emulator gem5 --machine RealViewPBX
./run --arch aarch64 --baremetal userland/c/hello.c --emulator gem5 --machine RealViewPBX
....

see also: xref:gem5-arm-platforms[xrefstyle=full].

This generates yet new separate images with new magic constants:

....
echo "$(./getvar --arch aarch64 --baremetal userland/c/hello.c --emulator gem5 --machine VExpress_GEM5_V1 image)"
echo "$(./getvar --arch aarch64 --baremetal userland/c/hello.c --emulator gem5 --machine RealViewPBX image)"
....

But just stick to the newer and better `VExpress_GEM5_V1` unless you have a good reason to use `RealViewPBX`.

When doing baremetal programming, it is likely that you will want to learn userland assembly first, see: xref:userland-assembly[xrefstyle=full].

For more information on baremetal, see the section: xref:baremetal[xrefstyle=full].

The following subjects are particularly important:

  • <>
  • <>

=== Build the documentation

You don't need to depend on GitHub.

For a quick and dirty build, install[Asciidoctor] however you like and build:

....
asciidoctor README.adoc
xdg-open README.html
....

For development, you will want to do a more controlled build with extra error checking as follows.

For the initial build do:

....
./setup
./build --download-dependencies docs
....

which also downloads build dependencies.

Then for the following builds, just do the faster:

.... ./build-doc ....

Source: link:build-doc[]

The HTML output is located at:

.... xdg-open out/README.html ....

More information about our documentation internals can be found at: xref:documentation[xrefstyle=full]

[[gdb]] == GDB step debug

=== GDB step debug kernel boot

`--gdb-wait` makes QEMU and gem5 wait for a GDB connection, otherwise we could accidentally go past the point we want to break at:

.... ./run --gdb-wait ....

Say you want to break at `start_kernel`. So on another shell:

.... ./run-gdb start_kernel ....

or at a given line:

.... ./run-gdb init/main.c:1088 ....

Now QEMU will stop there, and you can use the normal GDB commands:

....
list
next
continue
....

See also:


==== GDB step debug kernel boot other archs

Just don't forget to pass `--arch`, e.g.:

.... ./run --arch aarch64 --gdb-wait ....


.... ./run-gdb --arch aarch64 start_kernel ....

[[kernel-o0]] ==== Disable kernel compiler optimizations

`-O0` is an impossible dream, `-O2` being the default.

So get ready for some weird jumps, and `<value optimized out>` fun. Why, Linux, why.


The optimization level of some other userland content can be controlled as explained at: <>.

=== GDB step debug kernel post-boot

Let's observe the kernel `write` system call as it reacts to some userland actions.

Start QEMU with just:

.... ./run ....

and after boot inside a shell run:

....
./count.sh
....

which counts to infinity to stdout. Source: link:rootfs_overlay/lkmc/count.sh[].

Then in another shell, run:

.... ./run-gdb ....

and then hit:

....
Ctrl-C
break __x64_sys_write
continue
continue
continue
....

And you now control the counting on the first shell from GDB!

Before v4.17, the symbol name was just `sys_write`; the change happened at[d5a00528b58cdb2c71206e18bd021e34c4eab878]. As of Linux v4.19, the function is called `__x64_sys_write` on x86_64. One good way to find it if the name changes again is to try:

.... rbreak .*sys_write ....

or just have a quick look at the sources!

When you hit `Ctrl-C`, if we happen to be inside kernel code at that point, which is very likely if there are no heavy background tasks waiting, and we are just waiting on the blocking system call of the command prompt, we can already see the source for the random place inside the kernel where we stopped.

=== tmux

tmux just makes things even more fun by allowing us to see both the terminal for:

  • emulator stdout
  • <>

at once without dragging windows around!

First start tmux with:

....
tmux
....

Now that you are inside a shell inside tmux, you can start GDB simply with:

.... ./run --gdb ....

which is just a convenient shortcut for:

.... ./run --gdb-wait --tmux --tmux-args start_kernel ....

This splits the terminal into two panes:

  • left: usual QEMU with terminal
  • right: GDB

and focuses on the GDB pane.

Now you can navigate with the usual tmux shortcuts:

  • switch between the two panes with: `Ctrl-B O`
  • close either pane by killing its terminal with `Ctrl-D` as usual

See the tmux manual for further details:

.... man tmux ....

To start again, switch back to the QEMU pane with `Ctrl-B O`, kill the emulator, and re-run:

.... ./run --gdb ....

This automatically clears the GDB pane, and starts a new one.

The `--tmux-args` option determines which options will be passed to the program running on the second tmux pane.

The shortcut above is equivalent to:

.... ./run --gdb-wait ./run-gdb start_kernel ....

Due to Python's CLI parsing quirks, if the link:run-gdb[] arguments start with a dash `-`, you have to use the `=` sign, e.g. to <>:

.... ./run --gdb --tmux-args=--no-continue ....


==== tmux gem5

If you are using gem5 instead of QEMU, `--tmux` has a different effect by default: it opens the gem5 terminal instead of the debugger:

.... ./run --emulator gem5 --tmux ....

To open a new pane with GDB instead of the terminal, use:

....
./run --emulator gem5 --gdb
....

which is equivalent to:

.... ./run --emulator gem5 --gdb-wait --tmux --tmux-args start_kernel --tmux-program gdb ....

`--tmux-program gdb` implies `--tmux`, so we can just write:

.... ./run --emulator gem5 --gdb-wait --tmux-program gdb ....

If you also want to see both GDB and the terminal with gem5, then you will need to open a separate shell manually as usual with `./gem5-shell`.

From inside tmux, you can create new terminals on a new window with `Ctrl-B C`, split a pane yet again vertically with `Ctrl-B %`, or horizontally with `Ctrl-B "`.

=== GDB step debug kernel module

Loadable kernel modules are a bit trickier since the kernel can place them at different memory locations depending on load order.

So we cannot set the breakpoints before `insmod`.

However, the Linux kernel GDB scripts offer the `lx-symbols` command, which takes care of that beautifully for us.

Shell 1:

.... ./run ....

Wait for the boot to end and run:

.... insmod timer.ko ....

Source: link:kernel_modules/timer.c[].

This prints a message to dmesg every second.

Shell 2:

.... ./run-gdb ....

In GDB, hit `Ctrl-C`, and note how it says:

....
scanning for modules in /root/linux-kernel-module-cheat/out/kernel_modules/x86_64/kernel_modules
loading @0xffffffffc0000000: /root/linux-kernel-module-cheat/out/kernel_modules/x86_64/kernel_modules/timer.ko
....


That's `lx-symbols` working! Now simply:

....
break lkmc_timer_callback
continue
continue
continue
....

and we now control the callback from GDB!

Just don't forget to remove your breakpoints after `rmmod`, or they will point to stale memory locations.

TODO: why does `break work_func` after `insmod kthread.ko` not work very well? Sometimes it breaks but not others.

[[gdb-step-debug-kernel-module-arm]] ==== GDB step debug kernel module insmodded by init on ARM


As of LKMC 51e31cdc2933a774c2a0dc62664ad8acec1d2dbe it does not always work, and `lx-symbols` fails with the message:

....
loading vmlinux
Traceback (most recent call last):
  File "/linux-kernel-module-cheat//out/arm/buildroot/build/linux-custom/scripts/gdb/linux/symbols.py", line 163, in invoke
    self.load_all_symbols()
  File "/linux-kernel-module-cheat//out/arm/buildroot/build/linux-custom/scripts/gdb/linux/symbols.py", line 150, in load_all_symbols
    [self.load_module_symbols(module) for module in module_list]
  File "/linux-kernel-module-cheat//out/arm/buildroot/build/linux-custom/scripts/gdb/linux/symbols.py", line 110, in load_module_symbols
    module_name = module['name'].string()
gdb.MemoryError: Cannot access memory at address 0xbf0000cc
Error occurred in Python command: Cannot access memory at address 0xbf0000cc
....

Can't reproduce on the other archs, which are fine.

It is kind of random: if you just `insmod` manually and then immediately `./run-gdb --arch arm`, then it usually works.

But this fails most of the time: shell 1:

.... ./run --arch arm --eval-after 'insmod hello.ko' ....

shell 2:

.... ./run-gdb --arch arm ....

then hit `Ctrl-C` on shell 2, and voilà.


.... cat /proc/modules ....

says that the load address is:

.... 0xbf000000 ....

so it is close to the failing `0xbf0000cc`.

However:

....
./run-toolchain readelf -- -s "$(./getvar kernel_modules_build_subdir)/hello.ko"
....

does not give any interesting hits at that offset: no symbol was placed that far.

[[gdb-module-init]] ==== GDB module_init

TODO find a more convenient method. We have working methods, but they are not ideal.

This is not very easy, since by the time the module finishes loading, and `lx-symbols` can work properly, `module_init` has already finished running!

Possibly asked at:


[[gdb-module-init-step-into-it]] ===== GDB module_init step into it

This is the best method we've found so far.

The kernel calls `module_init` synchronously, therefore it is not hard to step into that call.

As of 4.16, the call happens in `do_one_initcall`, so we can do in shell 1:

.... ./run ....

shell 2 after boot finishes (because there are other calls to `do_one_initcall` at boot, presumably for the built-in modules):

....
./run-gdb do_one_initcall
....

then step until the line:

.... 833 ret = fn(); ....

which does the actual call, and then step into it.

For the next time, you can also put a breakpoint there directly:

.... ./run-gdb init/main.c:833 ....

How we found this out: first we got <> working, and then we did a `bt`. AKA cheating :-)

[[gdb-module-init-calculate-entry-address]] ===== GDB module_init calculate entry address

This works, but is a bit annoying.

The key observation is that the load address of kernel modules is deterministic: there is a pre-allocated memory region, the "module mapping space", which is filled from the bottom up.

So once we find the address the first time, we can just reuse it afterwards, as long as we don't modify the module.

Do a fresh boot and get the module:

....
./run --eval-after 'insmod fops.ko;./linux/poweroff.out'
....

The boot must be fresh, because the load address changes every time we insert, even after removing previous modules.

The base address shows on terminal:

.... 0xffffffffc0000000 .text ....

Now let's find the offset of `myinit`:


....
./run-toolchain readelf -- \
  -s "$(./getvar kernel_modules_build_subdir)/fops.ko" | \
  grep myinit
....

which gives:

.... 30: 0000000000000240 43 FUNC LOCAL DEFAULT 2 myinit ....

so the offset address is `0x240`, and we deduce that the function will be placed at:

.... 0xffffffffc0000000 + 0x240 = 0xffffffffc0000240 ....
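You can let the shell do this arithmetic for you, e.g. in Bash:

....
# Module load base + symbol offset within the module = runtime address.
printf '0x%x\n' $((0xffffffffc0000000 + 0x240))
# prints: 0xffffffffc0000240
....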

Now we can just do a fresh boot on shell 1:

.... ./run --eval 'insmod fops.ko;./linux/poweroff.out' --gdb-wait ....

and on shell 2:

.... ./run-gdb '*0xffffffffc0000240' ....

GDB then breaks, and we are at `myinit`.


[[gdb-module-init-break-at-the-end-of-sys-init-module]] ===== GDB moduleinit break at the end of sysinit_module

TODO not working. This could be potentially very convenient.

The idea here is to break at a point late enough inside `sys_init_module`, at which point `lx-symbols` can be called and do its magic.

Beware that there are both `init_module` and `finit_module` syscalls, and `finit_module` is used by default.

Both call `do_init_module` however, which is what `lx-symbols` hooks to.

If we try:

....
b sys_finit_module
....

then hitting:

.... n ....

does not break, and insertion happens, likely because of optimizations? <>

Then we try:

....
b do_init_module
....

A naive:

.... fin ....

also fails to break!

Finally, in despair we notice that <> prints the kernel load address as explained at <>.

So, if we set a breakpoint just after that message is printed, by searching where that happens in the Linux source code, we must be able to get the correct load address before `module_init` runs.


[[gdb-module-init-add-trap-instruction]] ===== GDB module_init add trap instruction

This is another possibility: we could modify the module source by adding a trap instruction of some kind.

This appears to be described at:

But it refers to a script which is not in the tree anymore, and beyond the reach of my `git log` archaeology.

And just adding:

....
asm("int $3");
....

directly gives an <> as I'd expect.

==== Bypass lx-symbols

Useless, but a good way to show how hardcore you are. Disable `lx-symbols` with:

....
./run-gdb --no-lxsymbols
....

From inside guest:

....
insmod timer.ko
cat /proc/modules
....

as mentioned at:


This will give a line of form:

.... fops 2327 0 - Live 0xfffffffa00000000 ....
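The load address is the sixth whitespace-separated field of that line, so a quick shell sketch can extract it (shown here on a hard-coded sample line for illustration; on a live system you would read `/proc/modules` instead):

....
# Sample /proc/modules line; live equivalent:
#   line="$(grep '^fops ' /proc/modules)"
line='fops 2327 0 - Live 0xfffffffa00000000'
echo "$line" | awk '{print $6}'
# prints: 0xfffffffa00000000
....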

And then tell GDB where the module was loaded with:

....
Ctrl-C
add-symbol-file ../../../rootfs_overlay/x86_64/timer.ko 0xffffffffc0000000
....

Alternatively, if the module panics before you can read `/proc/modules`, there is a <> which shows the load address:

....
echo 8 > /proc/sys/kernel/printk
echo 'file kernel/module.c +p' > /sys/kernel/debug/dynamic_debug/control
./linux/myinsmod.out hello.ko
....

And then search for a line of type:

.... [ 84.877482] 0xfffffffa00000000 .text ....

Tested on 4f4749148273c282e80b58c59db1b47049e190bf + 1.

=== GDB step debug early boot

TODO: successfully debug the very first instruction that the Linux kernel runs, before `start_kernel`.


Break at the very first instruction executed by QEMU:

.... ./run-gdb --no-continue ....

Note however that early boot parts appear to be relocated in memory somehow, and therefore:

  • you won't see the source location in GDB, only assembly
  • you won't be able to break by symbol in those early locations

Further discussion at: <>.

In the specific case of gem5 aarch64 at least:

  • gem5 relocates the kernel in memory to a fixed location, see e.g.
  • --param 'system.workload.early_kernel_symbols=True'
    should in theory duplicate the symbols to the correct physical location, but it was broken at one point:
  • gem5 executes directly from vmlinux, so there is no decompression code involved, so you actually immediately start running the "true" first instruction from the ELF entry point, as described at:
  • once the MMU gets turned on at the kernel symbol `__primary_switched`, the virtual addresses match the ELF symbols, and you start seeing correct symbols without the need for `early_kernel_symbols`. This can be observed clearly with `function_trace = True`, which produces:
+
....
0: _kernel_flags_le_lo32 (12500)
12500: __crc_tcp_add_backlog (1000)
13500: __crc_crypto_alg_tested (6500)
20000: __crc_tcp_add_backlog (10000)
30000: __crc_crypto_alg_tested (500)
30500: __crc_scsi_is_host_device (5000)
35500: __crc_crypto_alg_tested (1500)
37000: __crc_scsi_is_host_device (4000)
41000: __crc_crypto_alg_tested (3000)
44000: __crc_tcp_add_backlog (263500)
307500: __crc_crypto_alg_tested (975500)
1283000: __crc_tcp_add_backlog (77191500)
78474500: __crc_crypto_alg_tested (1000)
78475500: __crc_scsi_is_host_device (19500)
78495000: __crc_crypto_alg_tested (500)
78495500: __crc_scsi_is_host_device (13500)
78509000: __primary_switched (14000)
78523000: memset (21118000)
99641000: __primary_switched (2500)
99643500: start_kernel (11000)
....
+
so we see that `__primary_switched` is the first non-trash symbol: the `_kernel_flags_*` and `__crc_*` entries are just informative symbols, not actual executable code.

==== Linux kernel entry point


As mentioned at: <>, the very first kernel instructions executed appear to be placed into memory at a different location than that of the kernel ELF section.

As a result, we are unable to break on early symbols such as:

....
./run-gdb extract_kernel
./run-gdb main
....

<> however does show the right symbols! This could be because <>, of which QEMU uses the compressed version, and, as mentioned on the Stack Overflow answer, the entry point is actually a tiny decompressor routine.

I also tried to hack the GDB startup script with:

....
@@ -81,7 +81,7 @@ else
 ${gdb} \
 -q \
 -ex 'add-auto-load-safe-path $(pwd)' \
--ex 'file vmlinux' \
+-ex 'file arch/arm/boot/compressed/vmlinux' \
 -ex 'target remote localhost:${port}' \
 ${brk} \
 -ex 'continue' \
....

and now I do have the symbols from `arch/arm/boot/compressed/vmlinux`, but the breaks still don't work.

v4.19 also added a `KERNEL_UNCOMPRESSED` option for having the kernel uncompressed, which could make following the startup easier, but it is only available on s390. aarch64 however is already uncompressed by default, so it might be the easiest one. See also: xref:vmlinux-vs-bzimage-vs-zimage-vs-image[xrefstyle=full].

You then need the associated Kconfig option to enable it, if available:

....
config KERNEL_UNCOMPRESSED
	bool "None"
	depends on HAVE_KERNEL_UNCOMPRESSED
....

===== arm64 secondary CPU entry point

In gem5 aarch64 Linux v4.18, experimentally the entry point of secondary CPUs seems to be `secondary_holding_pen`, as shown at:

What happens is that:

  • the bootloader parks the secondary CPUs in WFE
  • the kernel, running on CPU0, writes the entry point for the secondary CPUs (the address of `secondary_holding_pen`) to the address given to the kernel in the `cpu-release-addr` property of the DTB
  • the kernel wakes up the bootloader with a SEV, and the bootloader jumps to the address the kernel told it

The CPU0 action happens at:[]:

Here's the code that writes the address and does SEV:

....
static int smp_spin_table_cpu_prepare(unsigned int cpu)
{
	__le64 __iomem *release_addr;

	if (!cpu_release_addr[cpu])
		return -ENODEV;

	/*
	 * The cpu-release-addr may or may not be inside the linear mapping.
	 * As ioremap_cache will either give us a new mapping or reuse the
	 * existing linear mapping, we can use it to cover both cases. In
	 * either case the memory will be MT_NORMAL.
	 */
	release_addr = ioremap_cache(cpu_release_addr[cpu],
				     sizeof(*release_addr));
	if (!release_addr)
		return -ENOMEM;

	/*
	 * We write the release address as LE regardless of the native
	 * endianess of the kernel. Therefore, any boot-loaders that
	 * read this address need to convert this address to the
	 * boot-loader's endianess before jumping. This is mandated by
	 * the boot protocol.
	 */
	writeq_relaxed(__pa_symbol(secondary_holding_pen), release_addr);
	__flush_dcache_area((__force void *)release_addr,
			    sizeof(*release_addr));

	/*
	 * Send an event to wake up the secondary CPU.
	 */
	sev();
....


and here's the code that reads the value from the DTB:

....
static int smp_spin_table_cpu_init(unsigned int cpu)
{
	struct device_node *dn;
	int ret;

	dn = of_get_cpu_node(cpu, NULL);
	if (!dn)
		return -ENODEV;

	/*
	 * Determine the address from which the CPU is polling.
	 */
	ret = of_property_read_u64(dn, "cpu-release-addr",
				   &cpu_release_addr[cpu]);
....


==== Linux kernel arch-agnostic entry point

is basically the first C function to be executed:

For the earlier arch-specific entry point, see: <>.

==== Linux kernel early boot messages

When booting Linux on a slow emulator like <>, what you observe is that:

  • first nothing shows for a while
  • then a bunch of message lines all show at once, followed on aarch64 Linux 5.4.3 by: + .... [ 0.081311] printk: console [ttyAMA0] enabled ....

This means of course that all the previous messages had been generated earlier and stored, but were only printed to the terminal once the terminal itself was enabled.

Notably for example the very first message:

.... [ 0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd070] ....

happens very early in the boot process.

If you get a failure before that, it will be hard to see the print messages.

One possible solution is to parse the dmesg buffer, gem5 actually implements that: <>.

=== GDB step debug userland processes


GDB breakpoints are set on virtual addresses, so you can in theory debug userland processes as well.

You will generally want to use <> for this as it is more reliable, but this method can overcome the following limitations of


  • the emulator does not support host to guest networking. This seems to be the case for gem5 as explained at: xref:gem5-host-to-guest-networking[xrefstyle=full]
  • cannot see the start of the
    process easily
  • gdbserver
    alters the working of the kernel, and makes your run less representative

Known limitations of direct userland debugging:

  • the kernel might switch context to another process or to the kernel itself e.g. on a system call, and then TODO confirm the PC would go to weird places and source code would be missing. + Solutions to this are being researched at: xref:lx-ps[xrefstyle=full].
  • TODO step into shared libraries. If I attempt to load them explicitly: + .... (gdb) sharedlibrary ../../staging/lib/ No loaded shared libraries match the pattern `../../staging/lib/'. .... + since GDB does not know that libc is loaded.

==== GDB step debug userland custom init

This is the userland debug setup most likely to work, since at init time there is only one userland executable running.

For executables from the link:userland/[] directory such as link:userland/posix/count.c[]:

  • Shell 1: + .... ./run --gdb-wait --kernel-cli 'init=/lkmc/posix/count.out' ....
  • Shell 2: + .... ./run-gdb --userland userland/posix/count.c main .... + Alternatively, we could also pass the full path to the executable: + .... ./run-gdb --userland "$(./getvar userland_build_dir)/posix/count.out" main .... + Path resolution is analogous to <>.

Then, as soon as boot ends, we are left inside a debug session that looks just like what

would produce.

==== GDB step debug userland BusyBox init

BusyBox custom init process:

  • Shell 1: + .... ./run --gdb-wait --kernel-cli 'init=/bin/ls' ....
  • Shell 2: + .... ./run-gdb --userland "$(./getvar buildroot_build_build_dir)"/busybox-*/busybox ls_main ....

This follows BusyBox' convention of calling the main for each executable as

since the
executable has many "mains".

BusyBox default init process:

  • Shell 1: + .... ./run --gdb-wait ....
  • Shell 2: + .... ./run-gdb --userland "$(./getvar buildroot_build_build_dir)"/busybox-*/busybox init_main ....

cannot be debugged with <> without modifying the source, or else
exits early with:

.... "must be run as PID 1" ....

==== GDB step debug userland non-init

Non-init process:

  • Shell 1: + .... ./run --gdb-wait ....
  • Shell 2: + .... ./run-gdb --userland userland/linux/rand_check.c main ....
  • Shell 1 after the boot finishes: + .... ./linux/rand_check.out ....

This is the least reliable setup as there might be other processes that use the given virtual address.

[[gdb-step-debug-userland-non-init-without-gdb-wait]] ===== GDB step debug userland non-init without --gdb-wait

TODO: if I try <> without

and the
break main
that we do inside

.... Cannot access memory at address 0x10604 ....

and then GDB never breaks. Tested at ac8663a44a450c3eadafe14031186813f90c21e4 + 1.

The exact behaviour seems to depend on the architecture:

  • arm
    : happens always
  • x86_64
    : appears to happen only if you try to connect GDB as fast as possible, before init has been reached.
  • aarch64
    : could not observe the problem

We have also double checked the address with:

....
./run-toolchain --arch arm readelf -- \
  -s "$(./getvar --arch arm userland_build_dir)/linux/myinsmod.out" | \
  grep main
....

and from GDB:

.... info line main ....

and both give:

.... 000105fc ....

which is just 8 bytes before


also says

However, if we do a

in GDB, and then a direct:

.... b *0x000105fc ....

it works. Why?!

On GEM5, x86 can also give the

Cannot access memory at address
, so maybe it is also unreliable on QEMU, and works just by coincidence.

=== GDB call

GDB can call functions as explained at:

However this is failing for us:

  • some symbols are not visible to
    even though
    sees them
  • for those that are,
    fails with an E14 error

E.g.: if we break on



....
(gdb) call printk(0, "asdf")
Could not fetch register "orig_rax"; remote failure reply 'E14'
(gdb) b printk
Breakpoint 2 at 0xffffffff81091bca: file kernel/printk/printk.c, line 1824.
(gdb) call fdget_pos(fd)
No symbol "fdget_pos" in current context.
(gdb) b fdget_pos
Breakpoint 3 at 0xffffffff811615e3: fdget_pos. (9 locations)
....


even though

is the first thing

....
581 SYSCALL_DEFINE3(write, unsigned int, fd, const char __user *, buf,
582 		size_t, count)
583 {
584 	struct fd f = fdget_pos(fd);
....

I also noticed that I get the same error:

.... Could not fetch register "orig_rax"; remote failure reply 'E14' ....

when trying to use:

.... fin ....

on many (all?) functions.

See also:

=== GDB view ARM system registers

info all-registers
shows some of them.

The implementation is described at:

=== GDB step debug multicore userland

For a more minimal baremetal multicore setup, see: xref:arm-baremetal-multicore[xrefstyle=full].

We can set and get which cores the Linux kernel allows a program to run on with


.... ./run --cpus 2 --eval-after './linux/sched_getaffinity.out' ....

Source: link:userland/linux/sched_getaffinity.c[]

Sample output:

....
sched_getaffinity = 1 1
sched_getcpu = 1
sched_getaffinity = 1 0
sched_getcpu = 0
....

Which shows us that:

  • initially: ** all 2 cores were enabled as shown by
    sched_getaffinity = 1 1
    ** the process was randomly assigned to run on core 1 (the second one) as shown by
    sched_getcpu = 1
    . If we run this several times, it will also run on core 0 sometimes.
  • then we restrict the affinity to just core 0, and we see that the program was actually moved to core 0

The number of cores is modified as explained at: xref:number-of-cores[xrefstyle=full]

from the util-linux package sets the initial core affinity of a program:

....
./build-buildroot \
  --config 'BR2_PACKAGE_UTIL_LINUX=y' \
  --config 'BR2_PACKAGE_UTIL_LINUX_SCHEDUTILS=y' \
;
./run --eval-after 'taskset -c 1,1 ./linux/sched_getaffinity.out'
....


....
sched_getaffinity = 0 1
sched_getcpu = 1
sched_getaffinity = 1 0
sched_getcpu = 0
....

so we see that the affinity was restricted to the second core from the start.

Let's do a QEMU observation to justify this example being in the repository with <>.

We will run our

infinitely many times, on core 0 and core 1 alternatively:

....
./run \
  --cpus 2 \
  --eval-after 'i=0; while true; do taskset -c $i,$i ./linux/sched_getaffinity.out; i=$((! $i)); done' \
  --gdb-wait \
;
....

on another shell:

.... ./run-gdb --userland "$(./getvar userland_build_dir)/linux/sched_getaffinity.out" main ....

Then, inside GDB:

....
(gdb) info threads
  Id   Target Id                            Frame
* 1    Thread 1 (CPU#0 [running]) main () at sched_getaffinity.c:30
  2    Thread 2 (CPU#1 [halted ]) native_safe_halt () at ./arch/x86/include/asm/irqflags.h:55
(gdb) c
(gdb) info threads
  Id   Target Id                            Frame
  1    Thread 1 (CPU#0 [halted ]) native_safe_halt () at ./arch/x86/include/asm/irqflags.h:55
* 2    Thread 2 (CPU#1 [running]) main () at sched_getaffinity.c:30
(gdb) c
....

and we observe that

info threads
shows the actual correct core on which the process was restricted to run by

We should also try it out with kernel modules:

TODO we then tried:

.... ./run --cpus 2 --eval-after './linux/sched_getaffinity_threads.out' ....


.... ./run-gdb --userland "$(./getvar userland_build_dir)/linux/sched_getaffinity_threads.out" ....

to switch between two simultaneous live threads with different affinities, it just didn't break on our threads:

.... b main_thread_0 ....

Note that secondary cores in gem5 are kind of broken however: <>.



=== Linux kernel GDB scripts

We source the Linux kernel GDB scripts by default for

, but they also contains some other goodies worth looking into.

Those scripts basically parse some in-kernel data structures to offer greater visibility with GDB.

All defined commands are prefixed by

, so to get a full list just try to tab complete that.

There aren't as many as I'd like, and the ones that do exist are pretty self explanatory, but let's give a few examples.

Show dmesg:

.... lx-dmesg ....

Show the <>:

.... lx-cmdline ....

Dump the device tree to a

file in the current directory:

.... lx-fdtdump pwd ....

List inserted kernel modules:

.... lx-lsmod ....

Sample output:

.... Address Module Size Used by 0xffffff80006d0000 hello 16384 0 ....



==== lx-ps

List all processes:

.... lx-ps ....

Sample output:

.... 0xffff88000ed08000 1 init 0xffff88000ed08ac0 2 kthreadd ....

The second and third fields are obviously PID and process name.

The first one is more interesting, and contains the address of the

in memory.

This can be confirmed with:

.... p *(struct task_struct *)0xffff88000ed08000 ....

which contains the correct PID for all threads I've tried:

.... pid = 1, ....

TODO get the PC of the kthreads: Then we would be able to see where the threads are stopped in the code!

On ARM, I tried:

.... task_pt_regs((struct thread_info *)((struct task_struct *)0xffffffc00e8f8000))->uregs[ARM_pc] ....


is a
and GDB cannot see defines without
: which are apparently not set?


  • presentation:

[[config-pid-in-contextidr]] ===== CONFIG_PID_IN_CONTEXTIDR

On ARM, the kernel can store an indication of the PID in the CONTEXTIDR_EL1 register, making the PID much easier to observe from simulators.

In particular, gem5 prints that number out by default on


Let's test it out with <> + <>:

....
./build-linux \
  --arch aarch64 \
  --linux-build-id CONFIG_PID_IN_CONTEXTIDR \
  --config 'CONFIG_PID_IN_CONTEXTIDR=y' \
;

# Checkpoint run.
./run --arch aarch64 --emulator gem5 --linux-build-id CONFIG_PID_IN_CONTEXTIDR --eval './'

# Trace run.
./run \
  --arch aarch64 \
  --emulator gem5 \
  --gem5-readfile 'posix/getpid.out; posix/getpid.out' \
  --gem5-restore 1 \
  --linux-build-id CONFIG_PID_IN_CONTEXTIDR \
  --trace FmtFlag,ExecAll,-ExecSymbol \
;
....

The terminal runs both programs which output their PID to stdout:

.... pid=44 pid=45 ....

By quickly inspecting the

file, we immediately notice that the
system.cpu: A
part of the logs, which used to always be
system.cpu: A0
, now has a few different values! Nice!

We can briefly summarize those values by removing repetitions:

.... cut -d' ' -f4 "$(./getvar --arch aarch64 --emulator gem5 trace_txt_file)" | uniq -c ....


....
  97227 A39
 147476 A38
 222052 A40
      1 terminal
1117724 A40
  27529 A31
  43868 A40
  27487 A31
 138349 A40
  13781 A38
 231246 A40
  25536 A38
  28337 A40
 214799 A38
 963561 A41
  92603 A38
  27511 A31
 224384 A38
 564949 A42
 182360 A38
 729009 A43
   8398 A23
  20200 A10
 636848 A43
 187995 A44
  27529 A31
  70071 A44
  16981 A0
 623806 A44
  16981 A0
 139319 A44
  24487 A0
 174986 A44
  25420 A0
  89611 A44
  16981 A0
 183184 A44
  24728 A0
  89608 A44
  17226 A0
 899075 A44
  24974 A0
 250608 A44
 137700 A43
1497997 A45
 227485 A43
 138147 A38
 482646 A46
....

I'm not smart enough to be able to deduce all of those IDs, but we can at least see that:

  • A44 and A45 are there as expected from stdout!
  • A39 must be the end of the execution of
    m5 checkpoint
  • so we guess that A38 is the shell as it comes next
  • the weird "terminal" line is
    336969745500: system.terminal: attach terminal 0
  • which is the shell PID? I should have printed that as well :-)
  • why are there so many other PIDs? This was supposed to be a silent system without daemons!
  • A0 is presumably the kernel. However we see process switches without going into A0, so I'm not sure how, it appears to count kernel instructions as part of processes
  • A46 has to be the
    m5 exit

Or if you want to have some real fun, try: link:baremetal/arch/aarch64/contextidr_el1.c[]:

.... ./run --arch aarch64 --emulator gem5 --baremetal baremetal/arch/aarch64/contextidr_el1.c --trace-insts-stdout ....

in which we directly set the register ourselves! Output excerpt:

....
31500: system.cpu: A0 T0 : @main+12 : ldr x0, [sp, #12] : MemRead : D=0x0000000000000001 A=0x82fffffc flags=(IsInteger|IsMemRef|IsLoad)
32000: system.cpu: A1 T0 : @main+16 : msr contextidr_el1, x0 : IntAlu : D=0x0000000000000001 flags=(IsInteger|IsSerializeAfter|IsNonSpeculative)
32500: system.cpu: A1 T0 : @main+20 : ldr x0, [sp, #12] : MemRead : D=0x0000000000000001 A=0x82fffffc flags=(IsInteger|IsMemRef|IsLoad)
33000: system.cpu: A1 T0 : @main+24 : add w0, w0, #1 : IntAlu : D=0x0000000000000002 flags=(IsInteger)
33500: system.cpu: A1 T0 : @main+28 : str x0, [sp, #12] : MemWrite : D=0x0000000000000002 A=0x82fffffc flags=(IsInteger|IsMemRef|IsStore)
34000: system.cpu: A1 T0 : @main+32 : ldr x0, [sp, #12] : MemRead : D=0x0000000000000002 A=0x82fffffc flags=(IsInteger|IsMemRef|IsLoad)
34500: system.cpu: A1 T0 : @main+36 : subs w0, #9 : IntAlu : D=0x0000000000000000 flags=(IsInteger)
35000: system.cpu: A1 T0 : @main+40 : b.le : IntAlu : flags=(IsControl|IsDirectControl|IsCondControl)
35500: system.cpu: A1 T0 : @main+12 : ldr x0, [sp, #12] : MemRead : D=0x0000000000000002 A=0x82fffffc flags=(IsInteger|IsMemRef|IsLoad)
36000: system.cpu: A2 T0 : @main+16 : msr contextidr_el1, x0 : IntAlu : D=0x0000000000000002 flags=(IsInteger|IsSerializeAfter|IsNonSpeculative)
36500: system.cpu: A2 T0 : @main+20 : ldr x0, [sp, #12] : MemRead : D=0x0000000000000002 A=0x82fffffc flags=(IsInteger|IsMemRef|IsLoad)
37000: system.cpu: A2 T0 : @main+24 : add w0, w0, #1 : IntAlu : D=0x0000000000000003 flags=(IsInteger)
37500: system.cpu: A2 T0 : @main+28 : str x0, [sp, #12] : MemWrite : D=0x0000000000000003 A=0x82fffffc flags=(IsInteger|IsMemRef|IsStore)
38000: system.cpu: A2 T0 : @main+32 : ldr x0, [sp, #12] : MemRead : D=0x0000000000000003 A=0x82fffffc flags=(IsInteger|IsMemRef|IsLoad)
38500: system.cpu: A2 T0 : @main+36 : subs w0, #9 : IntAlu : D=0x0000000000000000 flags=(IsInteger)
39000: system.cpu: A2 T0 : @main+40 : b.le : IntAlu : flags=(IsControl|IsDirectControl|IsCondControl)
39500: system.cpu: A2 T0 : @main+12 : ldr x0, [sp, #12] : MemRead : D=0x0000000000000003 A=0x82fffffc flags=(IsInteger|IsMemRef|IsLoad)
40000: system.cpu: A3 T0 : @main+16 : msr contextidr_el1, x0 : IntAlu : D=0x0000000000000003 flags=(IsInteger|IsSerializeAfter|IsNonSpeculative)
....

<> D13.2.27 "CONTEXTIDR_EL1, Context ID Register (EL1)" documents `CONTEXTIDR_EL1` as:

Identifies the current Process Identifier.

The value of the whole of this register is called the Context ID and is used by:

  • The debug logic, for Linked and Unlinked Context ID matching.
  • The trace logic, to identify the current process.

The significance of this register is for debug and trace use only.

Tested on 145769fc387dc5ee63ec82e55e6b131d9c968538 + 1.

=== Debug the GDB remote protocol

For when it breaks again, or you want to add a new feature!

....
./run --debug
./run-gdb --before '-ex "set remotetimeout 99999" -ex "set debug remote 1"' start_kernel
....

See also:

[[remote-g-packet]] ==== Remote 'g' packet reply is too long

This error means that the GDB server, e.g. in QEMU, sent more registers than the GDB client expected.

This can happen for the following reasons:

  • you set the architecture of the client wrong, often 32 vs 64 bit as mentioned at:
  • there is a bug in the GDB server and the XML description does not match the number of registers actually sent
  • the GDB server does not send XML target descriptions and your GDB expects a different number of registers by default. E.g., gem5 d4b3e064adeeace3c3e7d106801f95c14637c12f does not send the XML files

The XML target description format is described a bit further at:


=== KGDB

KGDB is kernel dark magic that allows you to GDB the kernel on real hardware without any extra hardware support.

It is useless with QEMU since we already have full system visibility with

. So the goal of this setup is just to prepare you for what to expect when you are in the trenches of real hardware.

KGDB is cheaper than JTAG (free) and easier to setup (all you need is serial), but with less visibility as it depends on the kernel working, so e.g.: dies on panic, does not see boot sequence.

First run the kernel with:

.... ./run --kgdb ....

this passes the following options on the kernel CLI:

.... kgdbwait kgdboc=ttyS1,115200 ....

tells the kernel to wait for KGDB to connect.

So the kernel sets things up enough for KGDB to start working, and then boot pauses waiting for connection:

.... <6>[ 4.866050] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled <6>[ 4.893205] 00:05: ttyS0 at I/O 0x3f8 (irq = 4, basebaud = 115200) is a 16550A <6>[ 4.916271] 00:06: ttyS1 at I/O 0x2f8 (irq = 3, basebaud = 115200) is a 16550A <6>[ 4.987771] KGDB: Registered I/O driver kgdboc <2>[ 4.996053] KGDB: Waiting for connection from remote gdb...

Entering kdb (current=0x(_ptrval_), pid 1) on processor 0 due to Keyboard Entry [0]kdb> ....

KGDB expects the connection at

, our second serial port after
which contains the terminal.

The last line is the KDB prompt, and is covered at: xref:kdb[xrefstyle=full]. Typing now shows nothing because that prompt is expecting input from


Instead, we connect to the serial port

with GDB:

.... ./run-gdb --kgdb --no-continue ....

Once GDB connects, it is left inside the function


So now we can set breakpoints and continue as usual.

For example, in GDB:

.... continue ....

Then in QEMU:

.... ./ & ./ ....

link:rootfs_overlay/lkmc/[] pauses the kernel for KGDB, and gives control back to GDB.

And now in GDB we do the usual:

....
break __x64_sys_write
continue
continue
continue
continue
....

And now you can count from KGDB!

If you do:

break __x64_sys_write
immediately after
./run-gdb --kgdb
, it fails with
KGDB: BP remove failed: 
. I think this is because it would break too early on the boot sequence, and KGDB is not yet ready.

See also:



TODO: we would need a second serial for KGDB to work, but it is not currently supported on

-M virt
that we use:

One possible workaround for this would be to use <>.

Main more generic question:

=== KGDB kernel modules

Just works as you would expect:

....
insmod timer.ko
./
....


....
break lkmc_timer_callback
continue
continue
continue
....

and you now control the count.

=== KDB

KDB is a way to debug the kernel directly from your main console, without GDB.

Advantage over KGDB: you can do everything in one serial. This can actually be important if you only have one serial for both shell and .

Disadvantage: not as much functionality as GDB, especially when you use Python scripts. Notably, TODO confirm you can't see the kernel source code and line step as from GDB, since the kernel source is not available on the guest (ah, if only debugging information supported full source, or if the kernel had a crazy mechanism to embed it).

Run QEMU as:

.... ./run --kdb ....

This passes

to the Linux CLI, therefore using our main console. Then QEMU:

.... [0]kdb> go ....

And now the

prompt is responsive because it is listening to the main console.

After boot finishes, run the usual:

.... ./ & ./ ....

And you are back in KDB. Now you can count with:

....
[0]kdb> bp __x64_sys_write
[0]kdb> go
[0]kdb> go
[0]kdb> go
[0]kdb> go
....

And you will break whenever

is hit.

You can get see further commands with:

.... [0]kdb> help ....

The other KDB commands allow you to step instructions, view memory, registers and some higher level kernel runtime data similar to the superior GDB Python scripts.

==== KDB graphic

You can also use KDB directly from the <> window with:

.... ./run --graphic --kdb ....

This setup could be used to debug the kernel on machines without serial, such as modern desktops.

This works because

(which stands for
!) to

==== KDB ARM

TODO neither

are working as of 1cd1e58b023791606498ca509256cc48e95e4f5b + 1.

seems to place and hit the breakpoint correctly, but no matter how many
commands I do, the
stdout simply does not show.

seems to place the breakpoint correctly, but after the first
the kernel oopses with warning:

.... WARNING: CPU: 0 PID: 46 at /root/linux-kernel-module-cheat/submodules/linux/kernel/smp.c:416 smp_call_function_many+0xdc/0x358 ....

and stack trace:

....
smp_call_function_many+0xdc/0x358
kick_all_cpus_sync+0x30/0x38
kgdb_flush_swbreak_addr+0x3c/0x48
dbg_deactivate_sw_breakpoints+0x7c/0xb8
kgdb_cpu_enter+0x284/0x6a8
kgdb_handle_exception+0x138/0x240
kgdb_brk_fn+0x2c/0x40
brk_handler+0x7c/0xc8
do_debug_exception+0xa4/0x1c0
el1_dbg+0x18/0x78
__arm64_sys_write+0x0/0x30
el0_svc_handler+0x74/0x90
el0_svc+0x8/0xc
....

My theory is that every serious ARM developer has JTAG, and no one ever tests this, and the kernel code is just broken.

== gdbserver

Step debug userland processes to understand how they are talking to the kernel.

First build

into the root filesystem:

.... ./build-buildroot --config 'BR2_PACKAGE_GDB=y' ....

Then on guest, to debug link:userland/c/command_line_arguments.c[]:

.... ./ ./c/command_line_arguments.out asdf qwer ....

Source: link:rootfs_overlay/lkmc/[].

And on host:

.... ./run-gdb --gdbserver --userland userland/c/command_line_arguments.c main ....

or alternatively with the path to the executable itself:

.... ./run-gdb --gdbserver --userland "$(./getvar userland_build_dir)/c/command_line_arguments.out" ....


=== gdbserver BusyBox

Analogous to <>:

.... ./ ls ....

on host you need:

.... ./run-gdb --gdbserver --userland "$(./getvar buildroot_build_build_dir)"/busybox-*/busybox ls_main ....

=== gdbserver libc

Our setup gives you the rare opportunity to step debug libc and other system libraries.

For example in the guest:

.... ./ ./posix/count.out ....

Then on host:

.... ./run-gdb --gdbserver --userland userland/posix/count.c main ....

and inside GDB:

....
break sleep
continue
....

And you are now left inside the

function of our default libc implementation, uClibc:

You can also step into the


.... step ....

This is made possible by the GDB command that we use by default:

.... set sysroot ${common_buildroot_build_dir}/staging ....

which automatically finds unstripped shared libraries on the host for us.

See also:

=== gdbserver dynamic loader

TODO: try to step debug the dynamic loader. Would be even easier if

is available:


== CPU architecture

The portability of the kernel and toolchains is amazing: change an option and most things magically work on completely different hardware.

To use

instead of x86 for example:

.... ./build-buildroot --arch arm ./run --arch arm ....


....
./run --arch arm --gdb-wait

# On another terminal.
./run-gdb --arch arm
....

We also have one letter shorthand names for the architectures:

....
./run -a A
./run -a a
./run -a x
....

Known quirks of the supported architectures are documented in this section.

[[x86-64]] === x86_64

==== ring0

This example illustrates how reading from the x86 control registers with

mov crX, rax
can only be done from kernel land on ring0.

From kernel land:

.... insmod ring0.ko ....

works and outputs the registers, for example:

.... cr0 = 0xFFFF880080050033 cr2 = 0xFFFFFFFF006A0008 cr3 = 0xFFFFF0DCDC000 ....

However if we try to do it from userland:

.... ./ring0.out ....

stdout gives:

.... Segmentation fault ....

and dmesg outputs:

.... traps: ring0.out[55] general protection ip:40054c sp:7fffffffec20 error:0 in ring0.out[400000+1000] ....


  • link:kernel_modules/ring0.c[]
  • link:lkmc/ring0.h[]
  • link:userland/arch/x86_64/ring0.c[]

In both cases, we attempt to run the exact same code which is shared on the

header file.



=== arm

==== Run arm executable in aarch64

TODO Can you run arm executables in the aarch64 guest?

I've tried:

....
./run-toolchain --arch aarch64 gcc -- -static ~/test/helloworld.c -o "$(./getvar p9_dir)/a.out"
./run --arch aarch64 --eval-after '/mnt/9p/data/a.out'
....

but it fails with:

.... a.out: line 1: syntax error: unexpected word (expecting ")") ....

=== MIPS

We used to "support" it until f8c0502bb2680f2dbe7c1f3d7958f60265347005 (it booted), but dropped it since no one was testing it often.

If you want to revive and maintain it, send a pull request.

=== Other architectures

It should not be too hard to port this repository to any architecture that Buildroot supports. Pull requests are welcome.

== init

When the Linux kernel finishes booting, it runs an executable as the first and only userland process. This executable is called the


The init process is then responsible for setting up the entire userland (or destroying everything when you want to have fun).

This typically means reading some configuration files (e.g.

) and forking a bunch of userland executables based on those files, including the very interactive shell that we end up on.

systemd provides a "popular" init implementation for desktop distros as of 2017.

BusyBox provides its own minimalistic init implementation which Buildroot, and therefore this repo, uses by default.


program can be either an executable shell text file, or a compiled ELF file. It becomes easy to accept this once you see that the
system call handles both cases equally:


executable is searched for in a list of paths in the root filesystem, including
and a few others. For more details see: xref:path-to-init[xrefstyle=full]

=== Replace init

To have more control over the system, you can replace BusyBox's init with your own.

The most direct way to replace

with our own is to just use the
<> directly:

.... ./run --kernel-cli 'init=/lkmc/' ....

This just counts every second forever and does not give you a shell.

This method is not very flexible however, as it is hard to reliably pass multiple commands and command line arguments to the init with it, as explained at: xref:init-environment[xrefstyle=full].

For this reason, we have created a more robust helper method with the


.... ./run --eval 'echo "asdf qwer";insmod hello.ko;./linux/poweroff.out' ....

It is basically a shortcut for:

.... ./run --kernel-cli 'init=/lkmc/ - lkmc_eval="insmod hello.ko;./linux/poweroff.out"' ....

Source: link:rootfs_overlay/lkmc/[].

This allows quoting and newlines by base64 encoding on host, and decoding on guest, see: xref:kernel-command-line-parameters-escaping[xrefstyle=full].

It also automatically chooses between

for you, see: xref:path-to-init[xrefstyle=full]

replaces BusyBox' init completely, which makes things more minimal, but also has the following consequences:
  • /etc/fstab
    mounts are not done, notably
    , test it out with: + .... ./run --eval 'echo asdf;ls /proc;ls /sys;echo qwer' ....
  • no shell is launched at the end of boot for you to interact with the system. You could explicitly add a
    at the end of your commands however: + .... ./run --eval 'echo hello;sh' ....

The best way to overcome those limitations is to use: xref:init-busybox[xrefstyle=full]

If the script is large, you can add it to a gitignored file and pass that to

as in:

....
echo '
cd /lkmc
insmod hello.ko
./linux/poweroff.out
' > data/
./run --eval "$(cat data/)"
....

or add it to a file to the root filesystem guest and rebuild:

....
echo '#!/bin/sh
cd /lkmc
insmod hello.ko
./linux/poweroff.out
' > rootfs_overlay/lkmc/
chmod +x rootfs_overlay/lkmc/
./build-buildroot
./run --kernel-cli 'init=/lkmc/'
....

Remember that if your init returns, the kernel will panic, there are just two non-panic possibilities:

  • run forever in a loop or long sleep
  • poweroff
    the machine

==== poweroff.out

Just using BusyBox'

at the end of the
does not work and the kernel panics:

.... ./run --eval poweroff ....

because BusyBox'

tries to do some fancy stuff like killing init, likely to allow userland to shutdown nicely.

But this fails when we are



works more brutally and effectively if you add

.... ./run --eval 'poweroff -f' ....

but why not just use our minimal

and be done with it?

.... ./run --eval './linux/poweroff.out' ....

Source: link:userland/linux/poweroff.c[]

This also illustrates how to shutdown the computer from C:

[[sleep-forever-out]] ==== sleep_forever.out

I dare you to guess what this does:

.... ./run --eval './posix/sleep_forever.out' ....

Source: link:userland/posix/sleep_forever.c[]

This executable is a convenient simple init that does not panic and sleeps instead.

[[time-boot-out]] ==== time_boot.out

Get a reasonable answer to "how long does boot take in guest time?":

.... ./run --eval-after './linux/time_boot.c' ....

Source: link:userland/linux/time_boot.c[]

That executable writes to

directly through
a message of type:

.... [ 2.188242] /path/to/linux-kernel-module-cheat/userland/linux/time_boot.c ....

which tells us that boot took

seconds based on the dmesg timestamp.


[[init-busybox]] === Run command at the end of BusyBox init

Use the

option if you rely on something that BusyBox' init sets up for you, like

.... ./run --eval-after 'echo asdf;ls /proc;ls /sys;echo qwer' ....

After the commands run, you are left on an interactive shell.

The above command is basically equivalent to:

.... ./run --kernel-cli-after-dash 'lkmc_eval="insmod hello.ko;./linux/poweroff.out;"' ....

where the

option gets evaled by our default link:rootfs_overlay/etc/init.d/S98[] startup script.

Except that

is smarter and uses

Alternatively, you can also add the commands to run to a new

entry to run at the end of the BusyBox init:

....
cp rootfs_overlay/etc/init.d/S98 rootfs_overlay/etc/init.d/S99.gitignore
vim rootfs_overlay/etc/init.d/S99.gitignore
./build-buildroot
./run
....

and they will be run automatically before the login prompt.

Scripts under

are run by
, which gets called by the line
in link:rootfs_overlay/etc/inittab[

=== Path to init

The init is selected at:

  • initrd or initramfs system:
    , a custom one can be set with the
  • otherwise: default is
    , followed by some other paths, a custom one can be set with

More details:

The final init that actually got selected is shown on Linux v5.9.2 in a line of the form:

<6>[    0.309984] Run /sbin/init as init process

at the very end of the boot logs.

=== Init environment

Documented at:

The kernel parses parameters from the kernel command line up to "-"; if it doesn't recognize a parameter and it doesn't contain a '.', the parameter gets passed to init: parameters with '=' go into init's environment, others are passed as command line arguments to init. Everything after "-" is passed as an argument to init.

And you can try it out with:

.... ./run --kernel-cli 'init=/lkmc/linux/init_env_poweroff.out' --kernel-cli-after-dash 'asdf=qwer zxcv' ....

From the <>, we see that the kernel CLI at LKMC 69f5745d3df11d5c741551009df86ea6c61a09cf now contains:

.... init=/lkmc/linux/init_env_poweroff.out console=ttyS0 - lkmc_home=/lkmc asdf=qwer zxcv ....

and the init program outputs:

....
args:

env:
HOME=/
TERM=linux
lkmc_home=/lkmc
asdf=qwer
....

Source: link:userland/linux/init_env_poweroff.c[].

As of the Linux kernel v5.7 (possibly earlier, I've skipped a few releases), boot also shows the init arguments and environment very clearly, which is a great addition:

....
<6>[    0.309984] Run /sbin/init as init process
<7>[    0.309991]   with arguments:
<7>[    0.309997]     /sbin/init
<7>[    0.310004]     nokaslr
<7>[    0.310010]     -
<7>[    0.310016]   with environment:
<7>[    0.310022]     HOME=/
<7>[    0.310028]     TERM=linux
<7>[    0.310035]     earlyprintk=pl011,0x1c090000
<7>[    0.310041]     lkmc_home=/lkmc
....

==== init arguments

The annoying dash

gets passed as a parameter to
, which makes it impossible to use this method for most non-custom executables.

Arguments with dots that come after

are still treated specially (of the form
) and disappear from args, e.g.:

....
./run --kernel-cli 'init=/lkmc/linux/init_env_poweroff.out' --kernel-cli-after-dash '/lkmc/linux/poweroff.out'
....


.... args


ab ....

See how

is gone.

The simple workaround is to just create a shell script that does it, e.g. as we've done at: link:rootfs_overlay/lkmc/[].

==== init environment env

Wait, where do

come from? (greps the kernel). Ah, OK, the kernel sets those by default:

....
const char *envp_init[MAX_INIT_ENVS+2] = {
    "HOME=/",
    "TERM=linux",
    NULL,
};
....

==== BusyBox shell init environment

On top of the Linux kernel, the BusyBox

shell will also define other variables.

We can explore the shenanigans that the shell adds on top of the Linux kernel with:

.... ./run --kernel-cli 'init=/bin/sh' ....

From there we observe that:

.... env ....


....
SHLVL=1
HOME=/
TERM=linux
PWD=/
....

therefore adding

to the default kernel exported variables.

Furthermore, to increase confusion, if you list all non-exported shell variables with:

.... set ....

then it shows more variables, notably:

.... PATH='/sbin:/usr/sbin:/bin:/usr/bin' ....

===== BusyBox shell initrc files

Login shells source some default files, notably:

....
/etc/profile
$HOME/.profile
....

In our case,

is set to
presumably by

We provide

from link:rootfs_overlay/.profile[], and use the default BusyBox

The shell knows that it is a login shell if the first character of

, see also:

When we use just

, the Linux kernel sets
, which does not start with

However, if you use

in inittab described at <>, BusyBox's init sets
, and so does
. This can be observed with:

.... cat /proc/$$/cmdline ....


is the PID of the shell itself:


== initrd

The kernel can boot from a CPIO file, which is a directory serialization format much like tar:

The bootloader, which for us is provided by QEMU itself, is then configured to put that CPIO into memory, and tell the kernel that it is there.

This is very similar to the kernel image itself, which already gets put into memory by the QEMU


With this setup, you don't even need to give a root filesystem to the kernel: it just does everything in memory in a ramfs.

To enable initrd instead of the default ext2 disk image, do:

....
./build-buildroot --initrd
./run --initrd
....

By looking at the QEMU run command generated, you can see that we didn't give the

option at all:

.... cat "$(./getvar run_dir)/" ....

Instead, we used the QEMU

option to point to the
filesystem that Buildroot generated for us.

Try removing that

option to watch the kernel panic without rootfs at the end of boot.

When using

, there can be no <> across boots, since all file operations happen in memory in a tmpfs:

....
date >f
poweroff
cat f
....

fails with:

....
can't open 'f': No such file or directory
....
which can be good for automated tests, as it ensures that you are using a pristine unmodified system image every time.

Note however that we already disable disk persistency by default on ext2 filesystems even without

: xref:disk-persistency[xrefstyle=full].

One downside of this method is that it has to put the entire filesystem into memory, and could lead to a panic:

.... end Kernel panic - not syncing: Out of memory and no killable processes... ....

This can be solved by increasing the memory as explained at <>:

.... ./run --initrd --memory 256M ....

The main ingredients to get initrd working are:

    : make Buildroot generate
    in addition to the other images. + It is also possible to compress that image with other options.
  • qemu -initrd
    : make QEMU put the image into memory and tell the kernel about it.
    : Compile the kernel with initrd support, see also: + Buildroot forces that option when
    is given

TODO: how does the bootloader inform the kernel where to find initrd?

=== initrd in desktop distros

Most modern desktop distributions have an initrd in their root disk to do early setup.

The rationale for this is described at:

One obvious use case is having an encrypted root filesystem: you keep the initrd in an unencrypted partition, and then setup decryption from there.

I think GRUB then knows how to read common disk formats, and then loads that initrd to memory with a

directive of type:

.... initrd /initrd.img-4.4.0-108-generic ....


=== initramfs

initramfs is just like <>, but you also glue the image directly to the kernel image itself using the kernel's build system.

Try it out with:

....
./build-buildroot --initramfs
./build-linux --initramfs
./run --initramfs
....

Notice how we had to rebuild the Linux kernel this time around as well after Buildroot, since in that build we will be gluing the CPIO to the kernel image.

Now, once again, if we look at the QEMU run command generated, we see all that QEMU needs is the

option, no
not even
! Pretty cool:

.... cat "$(./getvar run_dir)/" ....

It is also interesting to observe how this increases the size of the kernel image if you do a:

.... ls -lh "$(./getvar linux_image)" ....

before and after using initramfs, since the

is now glued to the kernel image.

Don't forget that to stop using initramfs, you must rebuild the kernel without

to get rid of the attached CPIO image:

....
./build-linux
./run
....

Alternatively, consider using <> if you need to switch between initramfs and non initramfs often:

....
./build-buildroot --initramfs
./build-linux --initramfs --linux-build-id initramfs
./run --initramfs --linux-build-id
....

Setting up initramfs is very easy: our scripts just set

to point to the CPIO path. shows a full manual setup.

=== rootfs

This is how

shows the root filesystem:
  • hard disk:
    /dev/root on / type ext2 (rw,relatime,block_validity,barrier,user_xattr)
    . That file does not exist however.
  • initrd:
    rootfs on / type rootfs (rw)
  • initramfs:
    rootfs on / type rootfs (rw)

TODO: understand


==== /dev/root

See: xref:rootfs[xrefstyle=full]

=== gem5 initrd

TODO we were not able to get it working yet:

This would require gem5 to load the CPIO into memory, just like QEMU. Grepping

shows some ARM hits under:

.... src/arch/arm/linux/atag.hh ....

but they are commented out.

=== gem5 initramfs

This could in theory be easier to make work than initrd since the emulator does not have to do anything special.

However, it didn't: boot fails at the end because it does not see the initramfs, but rather tries to open our dummy root filesystem, which unsurprisingly is not formatted in a way that the kernel understands:

.... VFS: Cannot open root device "sda" or unknown-block(8,0): error -5 ....

We think that this might be because gem5 boots directly

, and not from the final compressed images that contain the attached rootfs such as
, which is what QEMU does, see also: xref:vmlinux-vs-bzimage-vs-zimage-vs-image[xrefstyle=full].

To do this failed test, we automatically pass a dummy disk image as of gem5 7fa4c946386e7207ad5859e8ade0bbfc14000d91 since the scripts don't handle a missing

well, much like is currently done for <>.

Interestingly, using initramfs significantly slows down the gem5 boot, even though it did not work. For example, we've observed a 4x slowdown as of 17062a2e8b6e7888a14c3506e9415989362c58bf for aarch64. This is presumably because expanding the large attached CPIO is expensive. We can clearly see from the kernel logs that the kernel just hangs at a point after the message

PCI: CLS 0 bytes, default 64
for a long time before proceeding further.

== Device tree

The device tree is a Linux kernel defined data structure that serves to inform the kernel how the hardware is setup.

Device trees serve to reduce the need for hardware vendors to patch the kernel: they just provide a device tree file instead, which is much simpler.

x86 does not use device trees, but many other archs do, notably ARM.

This is notably because ARM boards:

  • typically don't have discoverable hardware extensions like PCI, but rather just put everything on an SoC with magic register addresses
  • are made by a wide variety of vendors due to ARM's licensing business model, which increases variability

The Linux kernel itself has several device trees under

, see also:

=== DTB files

Files that contain device trees have the

extension when compiled, and
when in text form.

You can convert between those formats with:

.... "$(./getvar buildroothostdir)"/bin/dtc -I dtb -O dts -o a.dts a.dtb "$(./getvar buildroothostdir)"/bin/dtc -I dts -O dtb -o a.dtb a.dts ....

Buildroot builds the tool due to


On Ubuntu 18.04, the package is named:

.... sudo apt-get install device-tree-compiler ....

See also:

Device tree files are provided to the emulator just like the root filesystem and the Linux kernel image.

In real hardware, those components are also often provided separately. For example, on the Raspberry Pi 2, the SD card must contain two partitions:

  • the first contains all magic files, including the Linux kernel and the device tree
  • the second contains the root filesystem

See also:

=== Device tree syntax

Good format descriptions:


Minimal example

.... /dts-v1/;

/ { a; }; ....

Check correctness with:

.... dtc a.dts ....

Separate nodes are simply merged by node path, e.g.:

.... /dts-v1/;

/ { a; };

/ { b; }; ....


dtc a.dts

.... /dts-v1/;

/ { a; b; }; ....

=== Get device tree from a running kernel

This is especially interesting because QEMU and gem5 are capable of generating DTBs that match the selected machine depending on dynamic command line parameters for some types of machines.

So observing the device tree from the guest allows us to easily see what the emulator has generated.

Compile the

tool into the root filesystem:

....
./build-buildroot \
  --arch aarch64 \
  --config 'BR2_PACKAGE_DTC=y' \
  --config 'BR2_PACKAGE_DTC_PROGRAMS=y' \
;
....

-M virt
for example, which we use by default for
, boots just fine without the

.... ./run --arch aarch64 ....

Then, from inside the guest:

.... dtc -I fs -O dts /sys/firmware/devicetree/base ....


....
cpus {
        #address-cells = <0x1>;
        #size-cells = <0x0>;

        cpu@0 {
                compatible = "arm,cortex-a57";
                device_type = "cpu";
                reg = <0x0>;
....

=== Device tree emulator generation

Since emulators know everything about the hardware, they can automatically generate device trees for us, which is very convenient.

This is the case for both QEMU and gem5.

For example, if we increase the <> to 2:

.... ./run --arch aarch64 --cpus 2 ....

QEMU automatically adds a second CPU to the DTB!

....
cpu@0 {
cpu@1 {
....

The action seems to be happening at:


You can dump the DTB QEMU generated with:

.... ./run --arch aarch64 -- -machine dumpdtb=dtb.dtb ....

as mentioned at:

<> 2a9573f5942b5416fb0570cf5cb6cdecba733392 can also generate its own DTB.

gem5 can generate DTBs on ARM with

. The generated DTB is placed in the <> named as

== KVM

[KVM] is a Linux kernel interface that <> execution of virtual machines.

You can make QEMU or <> faster by enabling KVM with:

.... ./run --kvm ....

KVM works by running userland instructions natively directly on the real hardware instead of running a software simulation of those instructions.

Therefore, KVM only works if the host architecture is the same as the guest architecture. This means that this will likely only work for x86 guests since almost all development machines are x86 nowadays. Unless you are[running an ARM desktop for some weird reason] :-)

We don't enable KVM by default because:

  • it limits visibility, since more things are running natively:
** can't use <>
** can't do <>
** on gem5, you lose <> and therefore any notion of performance
  • QEMU kernel boots are already <> for most purposes without it

One important use case for KVM is to fast forward gem5 execution, often to skip boot, take a <>, and then move on to a more detailed and slow simulation.

=== KVM arm

TODO: we haven't gotten it to work yet, but it should be doable, and this is an outline of how to do it. Just don't expect this to be tested very often for now.

We can test KVM on arm by running this repository inside an Ubuntu arm QEMU VM.

This produces no speedup of course, since the VM is already slow, as it cannot use KVM on the x86 host.

First, obtain an Ubuntu arm64 virtual machine as explained at:

Then, from inside that image:

....
sudo apt-get install git
git clone
cd linux-kernel-module-cheat
./setup -y
....

and then proceed exactly as in <>.

We don't want to build the full Buildroot image inside the VM as that would be way too slow, thus the recommendation for the prebuilt setup.

TODO: do the right thing and cross compile QEMU and gem5. gem5's Python parts might be a pain. QEMU should be easy:

=== gem5 KVM

While gem5 does have KVM, as of 2019 its support has not been very good, because debugging it is harder and people haven't focused intensively on it.

X86 was broken with pending patches. It failed immediately on:

.... panic: KVM: Failed to enter virtualized mode (hw reason: 0x80000021) ....

also mentioned at:



  • ARM thread:

== User mode simulation

Both QEMU and gem5 have a user mode simulation mode in addition to the full system simulation that we consider elsewhere in this project.

In QEMU, it is called just <>, and in gem5 it is called <>.

In both, the basic idea is the same.

User mode simulation takes regular userland executables of any arch as input and executes them directly, without booting a kernel.

Instead of simulating the full system, it translates normal instructions like in full system mode, but magically forwards system calls to the host OS.

Advantages over full system simulation:

  • the simulation may <> since you don't have to simulate the Linux kernel and several device models
  • you don't need to build your own kernel or root filesystem, which saves time. You still need a toolchain however, but the pre-packaged ones may work fine.


  • lower guest to host portability:
** TODO confirm: host OS == guest OS?
** TODO confirm: the host Linux kernel should be newer than the kernel the executable was built for. + It may still work even if that is not the case, but could fail if a missing system call is reached. + The target Linux kernel of the executable is a GCC toolchain build-time configuration.
** emulator implementers have to keep up with libc changes, some of which break even a C hello world due to setup code executed before main. + See also: xref:user-mode-simulation-with-glibc[xrefstyle=full]
  • cannot be used to test the Linux kernel or any devices, and results are less representative of a real system since we are faking more

=== QEMU user mode getting started

Let's run link:userland/c/command_line_arguments.c[] built with the Buildroot toolchain on QEMU user mode:

....
./build user-mode-qemu
./run \
  --userland userland/c/command_line_arguments.c \
  --cli-args='asdf "qw er"' \
;
....


....
/path/to/linux-kernel-module-cheat/out/userland/default/x86_64/c/command_line_arguments.out
asdf
qw er
....

./run --userland
path resolution is analogous to <>.

./build user-mode-qemu
first builds Buildroot, and then runs
, which is further documented at: xref:userland-setup[xrefstyle=full]. It also builds QEMU. If you ahve already done a <> previously, this will be very fast.

If you modify the userland programs, rebuild simply with:

.... ./build-userland ....

To rebuild just QEMU userland if you hack it, use:

.... ./build-qemu --mode userland ....


.... --mode userland ....

is needed because QEMU has two separate executables:

  • qemu-x86_64
    for userland
  • qemu-system-x86_64
    for full system

==== User mode GDB

It's nice when <> just works, right?

....
./run \
  --arch aarch64 \
  --gdb-wait \
  --userland userland/c/command_line_arguments.c \
  --cli-args 'asdf "qw er"' \
;
....

and on another shell:

....
./run-gdb \
  --arch aarch64 \
  --userland userland/c/command_line_arguments.c \
  main \
;
....

Or alternatively, if you are using <>, do everything in one go with:

....
./run \
  --arch aarch64 \
  --gdb \
  --userland userland/c/command_line_arguments.c \
  --cli-args 'asdf "qw er"' \
;
....

To stop at the very first instruction of a freestanding program, just use

. A good example of this is shown at: xref:freestanding-programs[xrefstyle=full].

=== User mode tests

Automatically run all userland tests that can be run in user mode simulation, and check that they exit with status 0:

....
./build --all-archs test-executables-userland
./test-executables --all-archs --all-emulators
....

Or just for QEMU:

....
./build --all-archs test-executables-userland-qemu
./test-executables --all-archs --emulator qemu
....

Source: link:test-executables[]

This script skips a manually configured list of tests, notably:

  • tests that depend on a full running kernel and cannot be run in user mode simulation, e.g. those that rely on kernel modules
  • tests that require user interaction
  • tests that take perceptible amounts of time
  • known bugs we didn't have time to fix ;-)

Tests under link:userland/libs/[] are only run if

are given as described at <>.

The gem5 tests require building statically with build id

, see also: xref:gem5-syscall-emulation-mode[xrefstyle=full]. TODO automate this better.

See: xref:test-this-repo[xrefstyle=full] for more useful testing tips.

=== User mode Buildroot executables

If you followed <>, you can now run the executables created by Buildroot directly as:

.... ./run \ --userland "$(./getvar buildroottargetdir)/bin/echo" \ --cli-args='asdf' \ ; ....

To easily explore the userland executable environment interactively, you can do:

.... ./run \ --arch aarch64 \ --userland "$(./getvar --arch aarch64 buildroottargetdir)/bin/sh" \ --terminal \ ; ....


.... ./run \ --arch aarch64 \ --userland "$(./getvar --arch aarch64 buildroottargetdir)/bin/sh" \ --cli-args='-c "uname -a && pwd"' \ ; ....

Here is an interesting examples of this: xref:linux-test-project[xrefstyle=full]

=== User mode simulation with glibc

At 125d14805f769104f93c510bedaa685a52ec025d we <>, and caused some user mode pain, which we document here.

==== FATAL: kernel too old failure in userland simulation

glibc has a check for kernel version, likely obtained from the

syscall, and if the kernel is not new enough, it quits.

Both gem5 and QEMU however allow setting the reported

version from the command line, which we do to always match our toolchain.

QEMU by default copies the host

value, but we always override it in our scripts.

Determining the right number to use for the kernel version is of course highly non-trivial and would require an extensive userland test suite, which most emulators don't have.

.... ./run --arch aarch64 --kernel-version 4.18 --userland userland/posix/uname.c ....

Source: link:userland/posix/uname.c[].

The QEMU source that does this is at:



The ID is just hardcoded on the source:

==== stack smashing detected when using glibc

For some reason QEMU / glibc x86_64 picks up the host libc, which breaks things.

Other archs work, as their different host libc is skipped. <> also work.

We have worked around this with the workaround from the thread: by creating the file link:rootfs_overlay/etc/[], which is a symlink to a file that cannot exist:



.... rm -f "$(./getvar buildroottargetdir)/etc/" ./run --userland userland/c/hello.c ./run --userland userland/c/hello.c --qemu-which host ....


.... *** stack smashing detected ***: terminated qemu: uncaught target signal 6 (Aborted) - core dumped ....

To get things working again, restore

.... ./build-buildroot ....

I've also tested on an Ubuntu 16.04 guest and the failure is a different one:

.... qemu: uncaught target signal 4 (Illegal instruction) - core dumped ....

A non-QEMU-specific example of stack smashing is shown at:

Tested at: 2e32389ebf1bedd89c682aa7b8fe42c3c0cf96e5 + 1.

=== User mode static executables


....
./build-userland \
  --arch aarch64 \
  --static \
;
./run \
  --arch aarch64 \
  --static \
  --userland userland/c/command_line_arguments.c \
  --cli-args 'asdf "qw er"' \
;
....

Running dynamically linked executables in QEMU requires pointing it to the root filesystem with the

option so that it can find the dynamic linker and shared libraries, see also:

We pass

by default, so everything just works.

However, in case something goes wrong, you can also try statically linked executables, since this mechanism tends to be a bit more stable, for example:

  • QEMU x86_64 guest on x86_64 host was failing with <>, but we found a workaround
  • gem5 user only supported static executables in the past, as mentioned at: xref:gem5-syscall-emulation-mode[xrefstyle=full]

Running statically linked executables sometimes makes things break:

  • <>
  • TODO understand why: + .... ./run --static --userland userland/c/file_write_read.c .... + fails our assertion that the data was read back correctly: + .... Assertion `strcmp(data, output) == 0' failed ....

==== User mode static executables with dynamic libraries

One limitation of static executables is that Buildroot mostly only builds dynamic versions of libraries (the libc is an exception).

So programs that rely on those libraries might not compile as GCC can't find the

version of the library.

For example, if we try to build <> statically:

.... ./build-userland --package openblas --static -- userland/libs/openblas/hello.c ....

it fails with:

.... ld: cannot find -lopenblas ....

[[cpp-static-and-pthreads]] ===== C++ static and pthreads

and pthreads also causes issues:

As a consequence, the example link:userland/cpp/atomic/std_atomic.cpp[] just hangs as of LKMC ca0403849e03844a328029d70c08556155dc1cd0 + 1:

.... ./run --userland userland/cpp/atomic/std_atomic.cpp --static ....

And before that, it used to fail with other randomly different errors, e.g.:

.... qemu-x86_64: /path/to/linux-kernel-module-cheat/submodules/qemu/accel/tcg/cpu-exec.c:700: cpu_exec: Assertion `!have_mmap_lock()' failed. ....

And a native Ubuntu 18.04 AMD64 run with static compilation segfaults.

As of LKMC f5d4998ff51a548ed3f5153aacb0411d22022058 the aarch64 error:

.... ./run --arch aarch64 --userland userland/cpp/atomic/fail.cpp --static ....


.... terminate called after throwing an instance of 'std::system_error' what(): Unknown error 16781344 qemu: uncaught target signal 6 (Aborted) - core dumped ....

The workaround:

.... -pthread -Wl,--whole-archive -lpthread -Wl,--no-whole-archive ....

fixes some of the problems, but not all TODO which were missing?, so we are just skipping those tests for now.

=== syscall emulation mode program stdin

The following work on both QEMU and gem5 as of LKMC 99d6bc6bc19d4c7f62b172643be95d9c43c26145 + 1. Interactive input:

.... ./run --userland userland/c/getchar.c ....

Source: link:userland/c/getchar.c[]

A line of type should show:

.... enter a character: ....

and after pressing say

and Enter, we get:

.... you entered: a ....

Note however that due to <> we don't really see the initial

enter a character

Non-interactive input from a file by forwarding emulators stdin implicitly through our Python scripts:

....
printf a > f.tmp
./run --userland userland/c/getchar.c < f.tmp
....

Input from a file by explicitly requesting our scripts to use it via the Python API:

....
printf a > f.tmp
./run --emulator gem5 --userland userland/c/getchar.c --stdin-file f.tmp
....

This is especially useful when running tests that require stdin input.

=== gem5 syscall emulation mode

Less robust than QEMU's, but still usable:


There are much more unimplemented syscalls in gem5 than in QEMU. Many of those are trivial to implement however.

So let's just play with some static ones:

....
./build-userland --arch aarch64
./run \
  --arch aarch64 \
  --emulator gem5 \
  --userland userland/c/command_line_arguments.c \
  --cli-args 'asdf "qw er"' \
;
....

TODO: how to escape spaces on the command line arguments?

<> also works normally on gem5:

....
./run \
  --arch aarch64 \
  --emulator gem5 \
  --gdb-wait \
  --userland userland/c/command_line_arguments.c \
  --cli-args 'asdf "qw er"' \
;
./run-gdb \
  --arch aarch64 \
  --emulator gem5 \
  --userland userland/c/command_line_arguments.c \
  main \
;
....

==== gem5 dynamic linked executables in syscall emulation

Support for dynamic linking was added in November 2019:


Note that as shown at xref:benchmark-emulators-on-userland-executables[xrefstyle=full], the dynamic version runs 200x more instructions, which might have an impact on smaller simulations in detailed CPUs.

==== gem5 syscall emulation exit status

As of gem5 7fa4c946386e7207ad5859e8ade0bbfc14000d91, the crappy
script does not forward the exit status of syscall emulation mode; you can test it with:

.... ./run --dry-run --emulator gem5 --userland userland/c/false.c ....

Source: link:userland/c/false.c[].

Then manually run the generated gem5 CLI, and do:

.... echo $? ....

and the output is always


Instead, it just outputs a message to stdout just like for <>:

.... Simulated exit code not 0! Exit code is 1 ....

which we parse in link:run[] and then exit with the correct result ourselves...

Related thread:

==== gem5 syscall emulation mode syscall tracing

Since gem5 has to implement syscalls itself in syscall emulation mode, it can of course clearly see which syscalls are being made, and we can log them for debug purposes with <>, e.g.:

.... ./run \ --emulator gem5 \ --userland userland/arch/x86_64/freestanding/linux/hello.S \ --trace-stdout \ --trace ExecAll,SyscallBase,SyscallVerbose \ ; ....

the trace as of f2eeceb1cde13a5ff740727526bf916b356cee38 + 1 contains:

....
    0: system.cpu A0 T0 : @asm_main_after_prologue : mov rdi, 0x1
    0: system.cpu A0 T0 : @asm_main_after_prologue.0 : MOV_R_I : limm rax, 0x1 : IntAlu : D=0x0000000000000001 flags=(IsInteger|IsMicroop|IsLastMicroop|IsFirstMicroop)
 1000: system.cpu A0 T0 : @asm_main_after_prologue+7 : mov rdi, 0x1
 1000: system.cpu A0 T0 : @asm_main_after_prologue+7.0 : MOV_R_I : limm rdi, 0x1 : IntAlu : D=0x0000000000000001 flags=(IsInteger|IsMicroop|IsLastMicroop|IsFirstMicroop)
 2000: system.cpu A0 T0 : @asm_main_after_prologue+14 : lea rsi, DS:[rip + 0x19]
 2000: system.cpu A0 T0 : @asm_main_after_prologue+14.0 : LEA_R_P : rdip t7, %ctrl153, : IntAlu : D=0x000000000040008d flags=(IsInteger|IsMicroop|IsDelayedCommit|IsFirstMicroop)
 2500: system.cpu A0 T0 : @asm_main_after_prologue+14.1 : LEA_R_P : lea rsi, DS:[t7 + 0x19] : IntAlu : D=0x00000000004000a6 flags=(IsInteger|IsMicroop|IsLastMicroop)
 3500: system.cpu A0 T0 : @asm_main_after_prologue+21 : mov rdi, 0x6
 3500: system.cpu A0 T0 : @asm_main_after_prologue+21.0 : MOV_R_I : limm rdx, 0x6 : IntAlu : D=0x0000000000000006 flags=(IsInteger|IsMicroop|IsLastMicroop|IsFirstMicroop)
 4000: system.cpu: T0 : syscall write called w/arguments 1, 4194470, 6, 0, 0, 0
hello
 4000: system.cpu: T0 : syscall write returns 6
 4000: system.cpu A0 T0 : @asm_main_after_prologue+28 : syscall eax : IntAlu : flags=(IsInteger|IsSerializeAfter|IsNonSpeculative|IsSyscall)
 5000: system.cpu A0 T0 : @asm_main_after_prologue+30 : mov rdi, 0x3c
 5000: system.cpu A0 T0 : @asm_main_after_prologue+30.0 : MOV_R_I : limm rax, 0x3c : IntAlu : D=0x000000000000003c flags=(IsInteger|IsMicroop|IsLastMicroop|IsFirstMicroop)
 6000: system.cpu A0 T0 : @asm_main_after_prologue+37 : mov rdi, 0
 6000: system.cpu A0 T0 : @asm_main_after_prologue+37.0 : MOV_R_I : limm rdi, 0 : IntAlu : D=0x0000000000000000 flags=(IsInteger|IsMicroop|IsLastMicroop|IsFirstMicroop)
 6500: system.cpu: T0 : syscall exit called w/arguments 0, 4194470, 6, 0, 0, 0
 6500: system.cpu: T0 : syscall exit returns 0
 6500: system.cpu A0 T0 : @asm_main_after_prologue+44 : syscall eax : IntAlu : flags=(IsInteger|IsSerializeAfter|IsNonSpeculative|IsSyscall)
....

so we see that two syscall lines were added for each syscall, showing the syscall inputs and exit status, just like a mini


==== gem5 syscall emulation multithreading

gem5 user mode multithreading has been particularly flaky compared to <>, but work is being put into improving it.

In gem5 syscall simulation, the

syscall checks if there is a free CPU, and if there is a free one, the new thread runs on that CPU.

Otherwise, the

call, and therefore higher level interfaces to
such as
also fail and return a failure status in the guest.

For example, if we use just one CPU for link:userland/posix/pthread_self.c[] which spawns one thread besides


.... ./run --cpus 1 --emulator gem5 --userland userland/posix/pthread_self.c --cli-args 1 ....

fails with this error message coming from the guest stderr:

.... pthread_create: Resource temporarily unavailable ....

It works however if we add one extra CPU:

.... ./run --cpus 2 --emulator gem5 --userland userland/posix/pthread_self.c --cli-args 1 ....

Once threads exit, their CPU is freed and becomes available for new

calls. For example, the following run spawns a thread, joins it, and then spawns again, and 2 CPUs are enough:

.... ./run --cpus 2 --emulator gem5 --userland userland/posix/pthread_self.c --cli-args '1 2' ....

because at each point in time, only up to two threads are running.

gem5 syscall emulation does show the expected number of cores when queried, e.g.:

....
./run --cpus 1 --userland userland/cpp/thread_hardware_concurrency.cpp --emulator gem5
./run --cpus 2 --userland userland/cpp/thread_hardware_concurrency.cpp --emulator gem5
....



This can also be clearly seen by running


.... ./run \ --arch aarch64 \ --cli-args 4 \ --cpus 8 \ --emulator gem5 \ --userland userland/linux/sched_getcpu.c \ ; ....

which necessarily produces an output containing the CPU numbers from 1 to 4 and no higher:

....
1
3
4
2
....

TODO why does the

come at the end here? Would be good to do a detailed assembly run analysis.

==== gem5 syscall emulation multiple executables

gem5 syscall emulation has the nice feature of allowing you to run multiple executables "at once".

Each executable starts running on the next free core much as if it had been forked right at the start of simulation: <>.

This can be useful to quickly create deterministic multi-CPU workloads. gem5's --cmd
takes a semicolon separated list, and LKMC exposes this by taking
multiple times, as in:

....
./run \
  --arch aarch64 \
  --cpus 2 \
  --emulator gem5 \
  --userland userland/posix/getpid.c \
  --userland userland/posix/getpid.c \
;
....

We need at least one CPU per executable, just like when forking new processes.

The outcome of this is that we see two different

messages printed to stdout:

....
pid=101
pid=100
....

since from <> we can see that it sets up a different PID per executable, starting at 100:

....
workloads = options.cmd.split(';')
idx = 0
for wrkld in workloads:
    process = Process(pid = 100 + idx)
....

We can also see that these processes are running concurrently with <> by hacking:

....
--debug-flags ExecAll \
--debug-file cout \
....

which starts with:

....
  0: system.cpu1: A0 T0 : @end+274873647040 : add x0, sp, #0 : IntAlu : D=0x0000007ffffefde0 flags=(IsInteger)
  0: system.cpu0: A0 T0 : @end+274873647040 : add x0, sp, #0 : IntAlu : D=0x0000007ffffefde0 flags=(IsInteger)
500: system.cpu0: A0 T0 : @end+274873647044 : bl <end+274873649648> : IntAlu : D=0x0000004000001008 flags=(IsInteger|IsControl|IsDirectControl|IsUncondControl|IsCall)
500: system.cpu1: A0 T0 : @end+274873647044 : bl <end+274873649648> : IntAlu : D=0x0000004000001008 flags=(IsInteger|IsControl|IsDirectControl|IsUncondControl|IsCall)
....

and therefore shows one instruction running on each CPU for each process at the same time.

===== gem5 syscall emulation --smt

gem5 b1623cb2087873f64197e503ab8894b5e4d4c7b4 syscall emulation has an `--smt` option, presumably for <>, but it seems to have been neglected forever:

If we start from the manually hacked working command from <> and try to add:

.... --cpu 1 --cpu-type DerivO3CPU --caches ....

We choose <> because of the assert:

.... example/ assert(options.cpu_type == "DerivO3CPU") ....

But then that fails with:

....
gem5.opt: /path/to/linux-kernel-module-cheat/out/gem5/master3/build/ARM/cpu/o3/ FullO3CPU::FullO3CPU(DerivO3CPUParams*) [with Impl = O3CPUImpl]: Assertion `params->numPhysVecPredRegs >= numThreads * TheISA::NumVecPredRegs' failed.
Program aborted at tick 0
....

=== QEMU user mode quirks

==== QEMU user mode does not show stdout immediately

At 8d8307ac0710164701f6e14c99a69ee172ccbb70 + 1, I noticed that if you run link:userland/posix/count.c[]:

.... ./run --userland userland/posix/count.c --cli-args 3 ....

it first waits for 3 seconds, then the program exits, and then it dumps all the stdout at once, instead of counting once every second as expected.

The same can be reproduced by copying the raw QEMU command and piping it through `tee`, so I don't think it is a bug in our setup:

....
/path/to/linux-kernel-module-cheat/out/qemu/default/x86_64-linux-user/qemu-x86_64 \
-L /path/to/linux-kernel-module-cheat/out/buildroot/build/default/x86_64/target \
/path/to/linux-kernel-module-cheat/out/userland/default/x86_64/posix/count.out \
3 \
| tee
....

TODO: investigate further and then possibly post on QEMU mailing list.

===== QEMU user mode does not show errors

Similarly to <>, QEMU error messages do not show at all through pipes.

In particular, it does not say anything if you pass it a non-existing executable:

.... qemu-x86_64 asdf | cat ....

So we just check ourselves manually.

== Kernel module utilities

=== insmod[Provided by BusyBox]:

.... ./run --eval-after 'insmod hello.ko' ....

=== myinsmod

If you are feeling raw, you can insert and remove modules with our own minimal module inserter and remover!



....
./linux/myinsmod.out hello.ko
./linux/myinsmod.out hello.ko "" 1
./linux/myrmmod.out hello
....

which teaches you how it is done from C code.


  • link:userland/linux/myinsmod.c[]
  • link:userland/linux/myrmmod.c[]

The Linux kernel offers two system calls for module insertion:

  • init_module
  • finit_module


.... man init_module ....

documents that:

The finit_module() system call is like init_module(), but reads the module to be loaded from the file descriptor fd. It is useful when the authenticity of a kernel module can be determined from its location in the filesystem; in cases where that is possible, the overhead of using cryptographically signed modules to determine the authenticity of a module can be avoided. The param_values argument is as for init_module().

finit_module is newer and was added only in v3.8. More rationale:


=== modprobe

Implemented as a BusyBox applet by default:

searches for modules installed under:

.... ls /lib/modules/ ....

and specified in the


This is the default install path for

modules built with
make modules_install
in the Linux kernel tree, with root path given by
, and therefore canonical in that sense.

Currently, there are only two kinds of kernel modules that you can try out with modprobe:

  • modules built with Buildroot, see: xref:kernel-modules-buildroot-package[xrefstyle=full]
  • modules built from the kernel tree itself, see: xref:dummy-irq[xrefstyle=full]

We are not installing our custom kernel modules there, because:
  • we don't know the right way. Why is there no
    target for kernel modules? + This can of course be solved by running Buildroot in verbose mode, and copying whatever it is doing, initial exploration at:
  • we would have to think how to not have to include the kernel modules twice in the root filesystem, but still have <<9p>> working for fast development as described at: xref:your-first-kernel-module-hack[xrefstyle=full]

=== kmod

The more "reference" implementation of

, etc.:

Default implementation on desktop distros such as Ubuntu 16.04, where e.g.:

.... ls -l /bin/lsmod ....


.... lrwxrwxrwx 1 root root 4 Jul 25 15:35 /bin/lsmod -> kmod ....


.... dpkg -l | grep -Ei ....


.... ii kmod 22-1ubuntu5 amd64 tools for managing Linux kernel modules ....

BusyBox also implements its own version of those executables, see e.g. <>. Here we will only describe features in which the BusyBox implementation differs from kmod.

==== module-init-tools

Name of a predecessor set of tools.

==== kmod modprobe


kmod's modprobe can also load modules under different names to avoid conflicts, e.g.:

.... sudo modprobe vmhgfs -o vm_hgfs ....

== Filesystems

=== OverlayFS[OverlayFS] is a filesystem merged in the Linux kernel in 3.18.

As the name suggests, OverlayFS allows you to merge multiple directories into one. The following minimal runnable examples should give you an intuition on how it works:


We are very interested in this filesystem because we are looking for a way to make host cross compiled executables appear on the guest root `/` without reboot.

This would have several advantages:

  • makes it faster to test modified guest programs:
    ** not rebooting is fundamental for <>, where the reboot is very costly
    ** no need to regenerate the root filesystem at all and reboot
    ** overcomes the rpath problem as shown at: xref:rpath[xrefstyle=full]
  • we could keep the base root filesystem very small, which implies:
    ** less host disk usage, no need to copy the entire `./getvar out_rootfs_overlay_dir` to the image again
    ** no need to worry about <>

We can already make host files appear on the guest with <<9p>>, but they appear on a subdirectory instead of the root.

If they would appear on the root instead, that would be even more awesome, because you would just use the exact same paths relative to the root transparently.

For example, we wouldn't have to mess around with variables such as


The idea is to:

  • 9P mount our overlay directory
    ./getvar out_rootfs_overlay_dir
    on the guest, which we already do at
  • then create an overlay with that directory and the root, and
    into it. + I was unable to mount directly to
    avoid the
    : ** ** **

We already have a prototype of this running from

on guest at
, but it has the following shortcomings:
  • changes to underlying filesystems are not visible on the overlay unless you remount with
mount -o remount /mnt/overlay
    , as mentioned[on the kernel docs]: + .... Changes to the underlying filesystems while part of a mounted overlay filesystem are not allowed. If the underlying filesystem is changed, the behavior of the overlay is undefined, though it will not result in a crash or deadlock. .... + This makes everything very inconvenient if you are inside
    action. You would have to leave
    , remount, then come back.
  • the overlay does not contain sub-filesystems, e.g.
    . We would have to re-mount them. But should be doable with some automation.

Even more awesome than

would be to
, but I couldn't get that working either:

=== Secondary disk

A simpler and possibly less overhead alternative to <<9P>> would be to generate a secondary disk image with the benchmark you want to rebuild.

Then you can

and re-mount on guest without reboot.

To build the secondary disk image run link:build-disk2[]:

.... ./build-disk2 ....

This will put the entire <> into a squashfs filesystem.

Then, if that filesystem is present, `./run` will automatically pass it as the second disk on the command line.

For example, from inside QEMU, you can mount that disk with:

....
mkdir /mnt/vdb
mount /dev/vdb /mnt/vdb
/mnt/vdb/lkmc/c/hello.out
....

To update the secondary disk while a simulation is running to avoid rebooting, first unmount in the guest:

.... umount /mnt/vdb ....

and then on the host:


....
# Edit the file.
vim userland/c/hello.c
./build-userland
./build-disk2
....

and now you can re-run the updated version of the executable on the guest after remounting it.

gem5 support for multiple disks is discussed at:

== Graphics

Both QEMU and gem5 are capable of outputting graphics to the screen, and taking mouse and keyboard input.

=== QEMU text mode

Text mode is the default mode for QEMU.

The opposite of text mode is <>

In text mode, we just show the serial console directly on the current terminal, without opening a QEMU GUI window.

You cannot see any graphics from text mode, but text operations work fully in this mode, including:

  • scrolling up: xref:scroll-up-in-graphic-mode[xrefstyle=full]
  • copy paste to and from the terminal

making this a good default, unless you really need to use graphics.

Text mode works by sending the terminal character by character to a serial device.

This is different from a display screen, where each character is a bunch of pixels, and it would be much harder to convert that into actual terminal text.

For more details, see:

  • <>

Note that you can still see an image even in text mode with the VNC:

.... ./run --vnc ....

and on another terminal:

.... ./vnc ....

but there is no terminal on the VNC window, just the <> penguin.

==== Quit QEMU from text mode

However, our QEMU setup captures Ctrl + C and other common signals and sends them to the guest, which makes it hard to quit QEMU for the first time since there is no GUI either.

The simplest way to quit QEMU, is to do:

.... Ctrl-A X ....

Alternative methods include:

  • quit
    command on the <>
  • pkill qemu

=== QEMU graphic mode

Enable graphic mode with:

.... ./run --graphic ....

Outcome: you see a penguin due to <>.

For a more exciting GUI experience, see: xref:x11[xrefstyle=full]

Text mode is the default due to the following considerable advantages:

  • copy and paste commands and stdout output to / from host
  • get full panic traces when you start making the kernel crash :-) See also:
  • have a large scroll buffer, and be able to search it, e.g. by using tmux on host
  • one less window floating around to think about in addition to your shell :-)
  • graphics mode has only been properly tested on

Text mode has the following limitations over graphics mode:

  • you can't see graphics such as those produced by <>
  • very early kernel messages such as
    early console in extract_kernel
    only show on the GUI, since at such early stages, not even the serial has been setup.

has a VGA device enabled by default, as can be seen as:

.... ./qemu-monitor info qtree ....

and the Linux kernel picks it up through the[fbdev] graphics system as can be seen from:

.... cat /dev/urandom > /dev/fb0 ....

flooding the screen with colors. See also:

==== Scroll up in graphic mode

Scroll up in <>:

.... Shift-PgUp ....

but I never managed to increase that buffer:


The superior alternative is to use text mode and GNU screen or <>.

==== QEMU Graphic mode arm

===== QEMU graphic mode arm terminal

TODO: on arm, we see the penguin and some boot messages, but don't get a shell at the end:

.... ./run --arch aarch64 --graphic ....

I think it does not work because the graphic window is <> only, i.e.:

.... cat /dev/urandom > /dev/fb0 ....

fails with:

.... cat: write error: No space left on device ....

and has no effect, and the Linux kernel does not appear to have a built-in DRM console as it does for fbdev with <>.

There is however one out-of-tree implementation: <>.

===== QEMU graphic mode arm terminal implementation

These rely on the QEMU CLI option:

.... -device virtio-gpu-pci ....

and the kernel config options:


Unlike x86, ARM machines don't have a display device attached by default, thus the need for `virtio-gpu-pci`.

See also (recently edited and corrected by yours truly... :-)).

===== QEMU graphic mode arm VGA

TODO: how to use VGA on ARM? Tried:

.... -device VGA ....

But says:


We use virtio-gpu because the legacy VGA framebuffer is

very troublesome on aarch64, and virtio-gpu is the only

video device that doesn't implement it.


so maybe it is not possible?

=== gem5 graphic mode

gem5 does not have a "text mode", since it cannot redirect the Linux terminal to the same host terminal where the executable is running: you are always forced to connect to the terminal with


TODO could not get it working on x86_64, only ARM.


More concretely, first build the kernel with the <>, and then run:

....
./build-linux \
--arch arm \
--custom-config-file-gem5 \
--linux-build-id gem5-v4.15 \
;
./run --arch arm --emulator gem5 --linux-build-id gem5-v4.15
....

and then on another shell:

.... vinagre localhost:5900 ....

The <> penguin only appears after several seconds, together with kernel messages of type:

....
[ 0.152755] [drm] found ARM HDLCD version r0p0
[ 0.152790] hdlcd 2b000000.hdlcd: bound virt-encoder (ops 0x80935f94)
[ 0.152795] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[ 0.152799] [drm] No driver support for vblank timestamp query.
[ 0.215179] Console: switching to colour frame buffer device 240x67
[ 0.230389] hdlcd 2b000000.hdlcd: fb0: frame buffer device
[ 0.230509] [drm] Initialized hdlcd 1.0.0 20151021 for 2b000000.hdlcd on minor 0
....

The port is incremented by one if you already have something running on that port; gem5 tells us the right port on stdout as:

.... system.vncserver: Listening for connections on port 5900 ....

and when we connect it shows a message:

.... info: VNC client attached ....

Alternatively, you can also dump each new frame to an image file with `--frame-capture`:
....
./run \
--arch arm \
--emulator gem5 \
--linux-build-id gem5-v4.15 \
-- --frame-capture \
;
....

This creates one compressed PNG whenever the screen image changes, inside the <>, with filenames of type:

.... frames_system.vncserver/fb...png.gz ....

It is fun to see how we get one new frame whenever the white underscore cursor appears and reappears under the penguin!

The last frame is always available uncompressed at:


TODO <> failed on


.... kmscube[706]: unhandled level 2 translation fault (11) at 0x00000000, esr 0x92000006, in[7fbf6a6000+e000] ....

Tested on:[38fd6153d965ba20145f53dc1bb3ba34b336bde9]

==== Graphic mode gem5 aarch64


We also need to configure the kernel with link:linux_config/display[]:

....
git -C "$(./getvar linux_source_dir)" fetch gem5/v4.15:gem5/v4.15
git -C "$(./getvar linux_source_dir)" checkout gem5/v4.15
./build-linux \
--arch aarch64 \
--config-fragment linux_config/display \
--custom-config-file-gem5 \
--linux-build-id gem5-v4.15 \
;
git -C "$(./getvar linux_source_dir)" checkout -
./run --arch aarch64 --emulator gem5 --linux-build-id gem5-v4.15
....

This is because the gem5 aarch64 defconfig does not enable HDLCD like the 32-bit one does, for some reason.

==== gem5 graphic mode DP650

TODO get working. There is an unmerged patchset at:

The DP650 is a newer display hardware than HDLCD. TODO is its interface publicly documented anywhere? Since it has a gem5 model and[in-tree Linux kernel support], that information cannot be secret?

The key option to enable support in Linux is

which we enable at link:linux_config/display[].

Build the kernel exactly as for <> and then run with:

.... ./run --arch aarch64 --dp650 --emulator gem5 --linux-build-id gem5-v4.15 ....

==== gem5 graphic mode internals

We cannot use mainline Linux because the <> are required at least to provide the


gem5 emulates the[HDLCD] ARM Holdings hardware for


The kernel uses HDLCD to implement the <> interface, the required kernel config options are present at: link:linux_config/display[].

TODO: minimize out the

. If we just remove it, it does not work, with a failing dmesg:

....
[ 0.066208] [drm] found ARM HDLCD version r0p0
[ 0.066241] hdlcd 2b000000.hdlcd: bound virt-encoder (ops drm_vencoder_ops)
[ 0.066247] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[ 0.066252] [drm] No driver support for vblank timestamp query.
[ 0.066276] hdlcd 2b000000.hdlcd: Cannot do DMA to address 0x0000000000000000
[ 0.066281] swiotlb: coherent allocation failed for device 2b000000.hdlcd size=8294400
[ 0.066288] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.15.0 #1
[ 0.066293] Hardware name: V2P-AARCH64 (DT)
[ 0.066296] Call trace:
[ 0.066301]  dump_backtrace+0x0/0x1b0
[ 0.066306]  show_stack+0x24/0x30
[ 0.066311]  dump_stack+0xb8/0xf0
[ 0.066316]  swiotlb_alloc_coherent+0x17c/0x190
[ 0.066321]  __dma_alloc+0x68/0x160
[ 0.066325]  drm_gem_cma_create+0x98/0x120
[ 0.066330]  drm_fbdev_cma_create+0x74/0x2e0
[ 0.066335]  __drm_fb_helper_initial_config_and_unlock+0x1d8/0x3a0
[ 0.066341]  drm_fb_helper_initial_config+0x4c/0x58
[ 0.066347]  drm_fbdev_cma_init_with_funcs+0x98/0x148
[ 0.066352]  drm_fbdev_cma_init+0x40/0x50
[ 0.066357]  hdlcd_drm_bind+0x220/0x428
[ 0.066362]  try_to_bring_up_master+0x21c/0x2b8
[ 0.066367]  component_master_add_with_match+0xa8/0xf0
[ 0.066372]  hdlcd_probe+0x60/0x78
[ 0.066377]  platform_drv_probe+0x60/0xc8
[ 0.066382]  driver_probe_device+0x30c/0x478
[ 0.066388]  __driver_attach+0x10c/0x128
[ 0.066393]  bus_for_each_dev+0x70/0xb0
[ 0.066398]  driver_attach+0x30/0x40
[ 0.066402]  bus_add_driver+0x1d0/0x298
[ 0.066408]  driver_register+0x68/0x100
[ 0.066413]  __platform_driver_register+0x54/0x60
[ 0.066418]  hdlcd_platform_driver_init+0x20/0x28
[ 0.066424]  do_one_initcall+0x44/0x130
[ 0.066428]  kernel_init_freeable+0x13c/0x1d8
[ 0.066433]  kernel_init+0x18/0x108
[ 0.066438]  ret_from_fork+0x10/0x1c
[ 0.066444] hdlcd 2b000000.hdlcd: Failed to set initial hw configuration.
[ 0.066470] hdlcd 2b000000.hdlcd: master bind failed: -12
[ 0.066477] hdlcd: probe of 2b000000.hdlcd failed with error -12
....

So what other options are missing from

? It would be cool to minimize it out to better understand the options.

[[x11]] === X11 Buildroot

Once you've seen the

penguin as a sanity check, you can try to go for a cooler X11 Buildroot setup.

Build and run:

....
./build-buildroot --config-fragment buildroot_config/x11
./run --graphic
....

Inside QEMU:

.... startx ....

And then from the GUI you can start exciting graphical programs such as:

.... xcalc xeyes ....

Outcome: xref:image-x11[xrefstyle=full]

[[image-x11]] .X11 Buildroot graphical user interface screenshot [link=x11.png] image::x11.png[]

We don't build X11 by default because it takes a considerable amount of time (about 20%), and is not expected to be used by most users: you need to pass the

flag to enable it.

More details:

Not sure how well that graphics stack represents real systems, but if it does it would be a good way to understand how it works.

The X11 packages have an `x` prefix as in:

....
./build-buildroot --config-fragment buildroot_config/x11 -- xserver_xorg-server-reconfigure
....

the easiest way to find them out is to just list:

....
ls "$(./getvar buildroot_build_build_dir)"/x*
....

TODO as of: c2696c978d6ca88e8b8599c92b1beeda80eb62b2 I noticed that

leads to a <>:

.... [ 2.809104] WARNING: CPU: 0 PID: 51 at drivers/gpu/drm/ttm/ttm_bo_vm.c:304 ttm_bo_vm_open+0x37/0x40 ....

==== X11 Buildroot mouse not moving

TODO 9076c1d9bcc13b6efdb8ef502274f846d8d4e6a1 I'm 100% sure that it was working before, but I didn't run it forever, and it stopped working at some point. Needs bisection, on whatever commit last touched x11 stuff.


did not help, I just get to see the host cursor, but the guest cursor still does not move.


.... watch -n 1 grep i8042 /proc/interrupts ....

shows that interrupts do happen when mouse and keyboard presses are done, so I expect that something is wrong with either:

  • QEMU. Same behaviour if I try the host's QEMU 2.10.1 however.
  • X11 configuration. We do have

contains the following interesting lines:

....
27.549 LoadModule: "mouse"
27.549 Loading /usr/lib/xorg/modules/input/
27.590 : Cannot find which device to use.
27.590 : cannot open input device
27.590 PreInit returned 2 for ""
27.590 UnloadModule: "mouse"
....

The file

does not exist.

Note that our current kernel config fragment sets:





for gem5, so you might want to remove those lines to debug this.

==== X11 Buildroot ARM


hangs at a message:

.... vgaarb: this pci device is not a vga device ....

and nothing shows on the screen, and:

.... grep EE /var/log/Xorg.0.log ....


.... (EE) Failed to load module "modesetting" (module does not exist, 0) ....

A friend told me this but I haven't tried it yet:

  • xf86-video-modesetting
    is likely the missing ingredient, but it does not seem possible to activate it from Buildroot currently without patching things.
  • xf86-video-fbdev
    should work as well, but we need to make sure fbdev is enabled, and maybe add some line to the

== Networking

=== Enable networking

We disable networking by default because it starts a userland process, and we want to keep the number of userland processes to a minimum to make the system more understandable, as explained at: xref:resource-tradeoff-guidelines[xrefstyle=full]

To enable networking on Buildroot, simply run:

.... ifup -a ....

That command goes over all (`-a`) the interfaces in `/etc/network/interfaces` and brings them up.

Then test it with:

.... wget cat index.html ....

Disable networking with:

.... ifdown -a ....

To enable networking by default after boot, use the methods documented at <>.

=== ping

`ping` does not work within QEMU by default, e.g.:

.... ping ....

hangs after printing the header:

.... PING ( 56 data bytes ....

Here Ciro describes how to get it working:

Further bibliography:

=== Guest host networking

In this section we discuss how to interact between the guest and the host through networking.

First ensure that you can access the external network since that is easier to get working, see: xref:networking[xrefstyle=full].

==== Host to guest networking

===== nc host to guest


With `nc` we can create the most minimal example possible as a sanity check.

On guest run:

.... nc -l -p 45455 ....

Then on host run:

.... echo asdf | nc localhost 45455 ....

and `asdf` appears on the guest.

This uses:

  • BusyBox'
    utility, which is enabled with
  • nc
    from the
    package on an Ubuntu 18.04 host

Only this specific port works by default since we have forwarded it on the QEMU command line.

We use this exact procedure to connect to <>.

===== ssh into guest

Not enabled by default due to the build / runtime overhead. To enable, build with:

.... ./build-buildroot --config 'BR2_PACKAGE_OPENSSH=y' ....

Then inside the guest turn on sshd:

.... ./ ....

Source: link:rootfs_overlay/lkmc/[]

And finally on host:

.... ssh [email protected] -p 45456 ....


===== gem5 host to guest networking

Could not do port forwarding from host to guest, and therefore could not use


==== Guest to host networking

First <>.

Then in the host, start a server:

.... python -m SimpleHTTPServer 8000 ....

And then in the guest, find the IP we need to hit with:

.... ip route ....

which gives:

..... default via dev eth0 dev eth0 scope link src .....

so we use in the guest:

.... wget ....



=== 9P

The[9p protocol] allows the guest to mount a host directory.

Both QEMU and <> support 9P.

==== 9P vs NFS

All of 9P and NFS (and sshfs) allow sharing directories between guest and host.

Advantages of 9P:

  • requires
    on the host to mount
  • we could share a guest directory to the host, but this would require running a server on the guest, which adds <> + Furthermore, this would be inconvenient, since what we usually want to do is to share host cross built files with the guest, and to do that we would have to copy the files over after the guest starts the server.
  • QEMU implements 9P natively, which makes it very stable and convenient, and must mean it is a simpler protocol than NFS as one would expect. + This is not the case for gem5 7bfb7f3a43f382eb49853f47b140bfd6caad0fb8 unfortunately, which relies on the[diod] host daemon, although it is not unfeasible that future versions could implement it natively as well.

Advantages of NFS:

  • way more widely used and therefore stable and available, not to mention that it also works on real hardware.
  • the name does not start with a digit, which is an invalid identifier in all programming languages known to man. Who in their right mind would call a software project as such? It does not even match the natural order of Plan 9; Plan then 9: P9!

==== 9P getting started

As usual, we have already set everything up for you. On host:

....
cd "$(./getvar p9_dir)"
uname -a > host
....


....
cd /mnt/9p/data
cat host
uname -a > guest
....


.... cat guest ....

The main ingredients for this are:

  • 9P
    settings in our <>
  • 9p
    entry on our link:rootfsoverlay/etc/fstab[] + Alternatively, you could also mount your own with: + .... mkdir /mnt/my9p mount -t 9p -o trans=virtio,version=9p2000.L host0 /mnt/my9p .... + where mount tag
    is set by the emulator (`mount
    flag on QEMU CLI), and can be found in the guest with:
cat /sys/bus/virtio/drivers/9pnet_virtio/virtio0/mount_tag` as documented at:[].
  • Launch QEMU with
    as in your link:run[] script + When we tried: + .... security_model=mapped .... + writes from guest failed due to user mismatch problems:



==== gem5 9P

This is possible on aarch64 as shown at:[], and it is just a matter of exposing it to X86 for those who want it.

Enable it by passing the `--vio-9p` option on the gem5 command line:

.... ./run --arch aarch64 --emulator gem5 -- --vio-9p ....

Then on the guest:

....
mkdir -p /mnt/9p/gem5
mount -t 9p -o trans=virtio,version=9p2000.L,aname=/path/to/linux-kernel-module-cheat/out/run/gem5/aarch64/0/m5out/9p/share gem5 /mnt/9p/gem5
echo asdf > /mnt/9p/gem5/qwer
....

Yes, you have to pass the full path to the directory on the host. Yes, this is horrible.

The shared directory is:

.... out/run/gem5/aarch64/0/m5out/9p/share ....

so we can observe the file the guest wrote from the host with:

.... cat out/run/gem5/aarch64/0/m5out/9p/share/qwer ....

and vice versa:

.... echo zxvc > out/run/gem5/aarch64/0/m5out/9p/share/qwer ....

is now visible from the guest:

.... cat /mnt/9p/gem5/qwer ....

Checkpoint restore with an open mount will likely fail because gem5 uses an ugly external executable to implement diod. The protocol is not very complex, and QEMU implements it in-tree, which is what gem5 should do as well at some point.

Also, checkpointing without `--vio-9p` and restoring with `--vio-9p` did not work either: the mount fails.

However, this did work, on guest:

....
umount /mnt/9p/gem5
m5 checkpoint
....

then restore with the detailed CPU of interest, e.g.:

.... ./run --arch aarch64 --emulator gem5 -- --vio-9p --cpu-type DerivO3CPU --caches ....

Tested on gem5 b2847f43c91e27f43bd4ac08abd528efcf00f2fd, LKMC 52a5fdd7c1d6eadc5900fc76e128995d4849aada.

==== NFS

TODO: get working.

<<9p>> is better with emulation, but let's just get this working for fun.

First make sure that this works: xref:guest-to-host-networking[xrefstyle=full].

Then, build the kernel with NFS support:

.... ./build-linux --config-fragment linux_config/nfs ....

Now on host:

.... sudo apt-get install nfs-kernel-server ....

Now edit `/etc/exports` to contain:

.... /tmp *(rw,sync,no_root_squash,no_subtree_check) ....

and restart the server:

.... sudo systemctl restart nfs-kernel-server ....

Now on guest:

.... mkdir /mnt/nfs mount -t nfs /mnt/nfs ....

TODO: failing with:

.... mount: mounting on /mnt/nfs failed: No such device ....

And now the `/tmp` directory from host is not mounted on guest!

If you don't want to start the NFS server automatically after the next boot, to save resources,[do]:

.... systemctl disable nfs-kernel-server ....

== Operating systems

  • <>
  • <>
  • <>
  • <>
  • <>

== Linux kernel

=== Linux kernel configuration

==== Modify kernel config

To modify a single option on top of our <>, do:

.... ./build-linux --config 'CONFIG_FORTIFY_SOURCE=y' ....

Kernel modules depend on certain kernel configs, and therefore in general you might have to clean and rebuild the kernel modules after changing the kernel config:

....
./build-modules --clean
./build-modules
....

and then proceed as in <>.

You might often get away without rebuilding the kernel modules, however.

To use an extra kernel config fragment file on top of our defaults, do:

....
printf '
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
' > data/myconfig
./build-linux --config-fragment 'data/myconfig'
....

To use just your own exact config file instead of our default ones, use:

.... ./build-linux --custom-config-file data/myconfig ....

There is also a shortcut

to use the <>.

The following options can all be used together, sorted by decreasing config setting power precedence:

  • --config
  • --config-fragment
  • --custom-config-file

To do a clean menu config yourself and use that for the build, do:

....
./build-linux --clean
./build-linux --custom-config-target menuconfig
....

But remember that every new build re-configures the kernel by default, so to keep your configs you will need to use on further builds:

.... ./build-linux --no-configure ....

So what you likely want to do instead is to save that as a new defconfig and use it later as:

....
./build-linux --no-configure --no-modules-install savedefconfig
cp "$(./getvar linux_build_dir)/defconfig" data/myconfig
./build-linux --custom-config-file data/myconfig
....

You can also use other config generating targets such as

with the same method as shown at: xref:linux-kernel-defconfig[xrefstyle=full].

==== Find the kernel config

Get the build config in guest:

.... zcat /proc/config.gz ....

or with our shortcut:

.... ./ ....

or to conveniently grep for a specific option case insensitively:

.... ./ ikconfig ....

Source: link:rootfs_overlay/lkmc/[].

This is enabled by:


From host:

.... cat "$(./getvar linux_config)" ....

Just for fun[]:

.... ./linux/scripts/extract-ikconfig "$(./getvar vmlinux)" ....

although this can be useful when someone gives you a random image.

[[kernel-configs-about]] ==== About our Linux kernel configs

By default, link:build-linux[] generates a

that is a mixture of:
  • a base config extracted from Buildroot's minimal per machine
    , which has the minimal options needed to boot as explained at: xref:buildroot-kernel-config[xrefstyle=full].
  • small overlays put on top of that

To find out which kernel configs are being used exactly, simply run:

.... ./build-linux --dry-run ....

and look for the
call. This script from the Linux kernel tree, as the name suggests, merges multiple configuration files into one as explained at:

For each arch, the base of our configs are named as:

.... linux_config/buildroot- ....

e.g.: link:linux_config/buildroot-x86_64[].

These configs are extracted directly from a Buildroot build with link:update-buildroot-kernel-configs[].

Note that Buildroot can override some of the configurations, e.g. it forces certain options on. For this reason, those configs are not simply copy pasted from Buildroot files, but rather extracted from a Buildroot kernel build, and then minimized with `make savedefconfig`.

On top of those, we add the following by default:

  • link:linux_config/min[]: see: xref:linux-kernel-min-config[xrefstyle=full]
  • link:linux_config/default[]: other optional configs that we enable by default because they increase visibility, or expose some cool feature, and don't significantly increase build time nor add significant runtime overhead + We have since observed that the kernel size itself is very bloated compared to
    as shown at: xref:linux-kernel-defconfig[xrefstyle=full].

[[buildroot-kernel-config]] ===== About Buildroot's kernel configs

To see Buildroot's base configs, start from[`buildroot/configs/qemu_x86_64_defconfig`].

That file contains

, which points to the base config file used:[board/qemu/x86_64/linux-4.15.config].

arm, on the other hand, uses[`buildroot/configs/qemu_arm_vexpress_defconfig`], and therefore just does a `make vexpress_defconfig`, getting its config from the Linux kernel tree itself.

====== Linux kernel defconfig

To boot[defconfig] from disk on Linux and see a shell, all we need is these missing virtio options:

....
./build-linux \
--linux-build-id defconfig \
--custom-config-target defconfig \
--config CONFIG_VIRTIO_PCI=y \
--config CONFIG_VIRTIO_BLK=y \
;
./run --linux-build-id defconfig
....

Oh, and check this out:

.... du -h \ "$(./getvar vmlinux)" \ "$(./getvar --linux-build-id defconfig vmlinux)" \ ; ....


....
360M /path/to/linux-kernel-module-cheat/out/linux/default/x86_64/vmlinux
47M  /path/to/linux-kernel-module-cheat/out/linux/defconfig/x86_64/vmlinux
....

Brutal. Where did we go wrong?

The extra virtio options are not needed if we use <>:

.... ./build-linux \ --linux-build-id defconfig \ --custom-config-target defconfig \ ; ./run --initrd --linux-build-id defconfig ....

On aarch64, we can boot from initrd with:

.... ./build-linux \ --arch aarch64 \ --linux-build-id defconfig \ --custom-config-target defconfig \ ; ./run \ --arch aarch64 \ --initrd \ --linux-build-id defconfig \ --memory 2G \ ; ....

We need the 2G of memory because the CPIO is 600MiB due to a humongous amount of loadable kernel modules!

In aarch64, the size situation is inverted from x86_64, and this can be seen on the vmlinux size as well:

.... 118M /path/to/linux-kernel-module-cheat/out/linux/default/aarch64/vmlinux 240M /path/to/linux-kernel-module-cheat/out/linux/defconfig/aarch64/vmlinux ....

So it seems that the ARM devs decided, rather than creating a minimal config that boots QEMU, to try to make a single config that boots every board in existence. Terrible!


Tested on 1e2b7f1e5e9e3073863dc17e25b2455c8ebdeadd + 1.

====== Linux kernel min config

link:linux_config/min[] contains the minimal tweaks required to boot gem5, or to use our slightly different QEMU command line options than Buildroot, on all archs.

It is one of the default config fragments we use, as explained at: xref:kernel-configs-about[xrefstyle=full].

Having the same config working for both QEMU and gem5 (oh, the hours of bisection) means that you can deal with functional matters in QEMU, which runs much faster, and switch to gem5 only for performance issues.

We can build with just link:linux_config/min[] on top of the base config with:

....
./build-linux \
  --arch aarch64 \
  --config-fragment linux_config/min \
  --custom-config-file linux_config/buildroot-aarch64 \
  --linux-build-id min \
;
....

vmlinux had a very similar size to the default. It seems that link:linux_config/buildroot-aarch64[] contains or implies most link:linux_config/default[] options already? TODO: that seems odd, really?

Tested on 649d06d6758cefd080d04dc47fd6a5a26a620874 + 1.

===== Notable alternate gem5 kernel configs

Other configs which we had previously tested at 4e0d9af81fcce2ce4e777cb82a1990d7c2ca7c1e are:

  • arm
    configs present in the official ARM gem5 Linux kernel fork as described at: xref:gem5-arm-linux-kernel-patches[xrefstyle=full]. Some of the configs present there are added by the patches.
  • Jason's magic
    config, which is referenced at:[]. QEMU boots with that config after removing the line:
    # CONFIG_VIRTIO_PCI is not set

=== Kernel version

==== Find the kernel version

We try to use the latest possible kernel major release version.

In the guest, the running version can be seen with:

.... cat /proc/version ....

or in the source:

....
cd "$(./getvar linux_source_dir)"
git log | grep -E ' Linux [0-9]+\.' | head
....
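
Version strings like those can be extracted mechanically. A minimal sketch, with a hardcoded `/proc/version`-style sample line so it runs on any host (on the guest you would pipe `cat /proc/version` instead):

```shell
# Extract just the "x.y.z" release number from a /proc/version-style line.
# The sample content is made up for this sketch.
line='Linux version 5.9.2 (buildroot@buildroot) (gcc) #1 SMP Thu Jan 1 00:00:00 UTC 1970'
version=$(printf '%s\n' "$line" | sed -E 's/^Linux version ([0-9]+\.[0-9]+\.[0-9]+).*/\1/')
echo "$version"
```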

==== Update the Linux kernel

During updates, all your kernel modules may break, since the kernel API is not stable.

They are usually trivial breaks of things moving around headers or to sub-structs.

The userland, however, should simply not break, as Linus enforces strict backwards compatibility of userland interfaces.

This backwards compatibility is just awesome, it makes getting and running the latest master painless.

This also makes this repo the perfect setup to develop the Linux kernel.

In case something breaks while updating the Linux kernel, you can try to bisect it to understand the root cause, see: xref:bisection[xrefstyle=full].

===== Update the Linux kernel LKMC procedure

First, use the branching procedure described at: xref:update-a-forked-submodule[xrefstyle=full]

Because the kernel is so central to this repository, almost all tests must be re-run, so basically just follow the full testing procedure described at: xref:test-this-repo[xrefstyle=full]. The only tests that can be skipped are essentially the <> tests.

Before committing, don't forget to update:

  • the
    constant in[]
  • the tagline of this repository on:
    ** this README
    ** the GitHub project description

==== Downgrade the Linux kernel

The kernel is not forward compatible, however, so downgrading the Linux kernel requires downgrading the userland too to the latest Buildroot branch that supports it.

The default Linux kernel version is bumped in Buildroot with commit messages of type:

.... linux: bump default to version 4.9.6 ....

So you can try:

.... git log --grep 'linux: bump default to version' ....

Those commits change the Buildroot default kernel version setting.
You should then look up if there is a branch that supports that kernel. Staying on branches is a good idea as they will get backports, in particular ones that fix the build as newer host versions come out.

Finally, after downgrading Buildroot, if something does not work, you might also have to make some changes to how this repo uses Buildroot, as the Buildroot configuration options might have changed.

We don't expect those changes to be very difficult. A good way to approach the task is to:

  • do a dry run build to get the equivalent Bash commands used: + .... ./build-buildroot --dry-run ....
  • build the Buildroot documentation for the version you are going to use, and check if all Buildroot build commands make sense there

Then, if you spot an option that is wrong, some grepping in this repo should quickly point you to the code you need to modify.

It is also possible that you will need to apply some patches from newer Buildroot versions for it to build, due to incompatibilities between the host Ubuntu packages and that Buildroot version. Just read the error message, and try:

  • git log master -- package/
  • Google the error message for mailing list hits

Successful port reports:

  • v3.18:

=== Kernel command line parameters

Bootloaders can pass a string as input to the Linux kernel when it is booting to control its behaviour, much like the `execve` system call does to userland processes.

This allows us to control the behaviour of the kernel without rebuilding anything.

With QEMU, QEMU itself acts as the bootloader, and provides the `-append` option, which we expose through `./run --kernel-cli`, e.g.:

.... ./run --kernel-cli 'foo bar' ....

Then inside the guest, you can check which options were given with:

.... cat /proc/cmdline ....

They are also printed at the beginning of the boot message:

.... dmesg | grep "Command line" ....

See also:


The arguments are documented in the kernel documentation:

When dealing with real boards, extra command line options are provided on some magic bootloader configuration file, e.g.:

  • GRUB configuration files:
  • Raspberry Pi's
    `cmdline.txt` on a magic partition:

==== Kernel command line parameters escaping

Double quotes can be used to escape spaces, as in `opt="a b"`, but double quotes themselves cannot be escaped.

This even led us to use base64 encoding for some parameters.

==== Kernel command line parameters definition points

There are two methods:

  • `__setup`
    as in: + .... __setup("console=", console_setup); ....
  • `core_param`
    as in: + .... core_param(panic, panic_timeout, int, 0644); ....

The documentation of `core_param` suggests how they are different:

....
/**
 * core_param - define a historical core kernel parameter.
 *
 * core_param is just like module_param(), but cannot be modular and
 * doesn't add a prefix (such as "printk.").  This is for compatibility
 * with __setup(), and it makes sense as truly core parameters aren't
 * tied to the particular file they're in.
 */
....

==== rw

By default, the Linux kernel mounts the root filesystem as readonly. TODO rationale?

This cannot be observed in the default BusyBox init, because by default our link:rootfs_overlay/etc/inittab[] does:

.... /bin/mount -o remount,rw / ....

Analogously, Ubuntu 18.04 does in its fstab something like:

.... UUID=/dev/sda1 / ext4 errors=remount-ro 0 1 ....

which uses the default `rw` mount option.

We have however removed those init setups to keep things more minimal, and replaced them with the `rw` kernel boot parameter, which makes the root be mounted as writable.

To observe the default readonly behaviour, hack the link:run[] script to remove <>, and then run on a raw shell:

.... ./run --kernel-cli 'init=/bin/sh' ....

Now try to do:

.... touch a ....

which fails with:

.... touch: a: Read-only file system ....

We can also observe the read-onlyness with:

....
mount -t proc proc /proc
mount
....

which contains:

.... /dev/root on / type ext2 (ro,relatime,block_validity,barrier,user_xattr) ....

and so it is read-only, as shown by the `ro` flag.

==== norandmaps

Disable userland address space randomization. Test it out by running <> twice:

....
./run --eval-after './linux/rand_check.out;./linux/poweroff.out'
./run --eval-after './linux/rand_check.out;./linux/poweroff.out'
....

If we remove it from our link:run[] script by hacking it up, the addresses shown by `rand_check.out` vary across boots.

Equivalent to:

.... echo 0 > /proc/sys/kernel/randomize_va_space ....

=== printk

`printk` is the simplest and most widely used way of getting information out of the kernel, so you should familiarize yourself with its basic configuration.

We use `printk` a lot in our kernel modules, and it shows on the terminal by default, along with stdout and what you type.

Hide all kernel messages with:
.... dmesg -n 1 ....

or equivalently:

.... echo 1 > /proc/sys/kernel/printk ....

See also:

Do it with a <> to affect the boot itself:

.... ./run --kernel-cli 'loglevel=5' ....

and now only boot warning messages or worse show, which is useful to identify problems.

Our default `printk` format is:



.... <6>[ 2.979121] Freeing unused kernel memory: 2024K ....


  • `<6>`: the log level: higher means less serious
  • `[    2.979121]`: the timestamp in seconds since boot

This format is selected by the following boot options:

  • `console_msg_format=syslog`: adds the `<6>` level part. Added in v4.16.
  • `printk.time=y`: adds the timestamp part
The debug highest level is a bit more magic, see: xref:pr-debug[xrefstyle=full] for more info.

==== /proc/sys/kernel/printk

The current printk level can be obtained with:

.... cat /proc/sys/kernel/printk ....

As of

this prints:

.... 7 4 1 7 ....

which contains:

  • 7
    : current log level, modifiable by previously mentioned methods
  • 4
    : documented as: "printk's without a loglevel use this": TODO what does that mean, how to call `printk` without a log level?
  • 1
    : minimum log level that still prints something (below it, nothing gets printed)
  • 7
    : default log level
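
The four fields can be labelled mechanically. A minimal sketch with the sample values hardcoded so it runs anywhere (on the guest you would read the real `/proc/sys/kernel/printk`):

```shell
# Label the four whitespace-separated fields of /proc/sys/kernel/printk.
printk_levels='7 4 1 7'
set -- $printk_levels
echo "current=$1 default_message=$2 minimum=$3 boot_default=$4"
```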

After boot, we start at the boot time default log level, as can be seen from:

.... insmod myprintk.ko ....

which outputs something like:

....
<1>[   12.494429] pr_alert
<2>[   12.494666] pr_crit
<3>[   12.494823] pr_err
<4>[   12.494911] pr_warning
<5>[   12.495170] pr_notice
<6>[   12.495327] pr_info
....

Source: link:kernel_modules/myprintk.c[]

This proc entry is defined at:

....
#if defined CONFIG_PRINTK
	{
		.procname	= "printk",
		.data		= &console_loglevel,
		.maxlen		= 4*sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
....
which teaches us that printk can be completely disabled at compile time:

....
config PRINTK
	default y
	bool "Enable support for printk" if EXPERT
	select IRQ_WORK
	help
	  This option enables normal printk support. Removing it
	  eliminates most of the message strings from the kernel image
	  and makes the kernel more or less silent. As this makes it
	  very difficult to diagnose system problems, saying N here is
	  strongly discouraged.
....

`console_loglevel` is defined at:

....
#define console_loglevel (console_printk[0])
....



`console_printk` is an array with 4 ints:

....
int console_printk[4] = {
	CONSOLE_LOGLEVEL_DEFAULT,	/* console_loglevel */
	MESSAGE_LOGLEVEL_DEFAULT,	/* default_message_loglevel */
	CONSOLE_LOGLEVEL_MIN,		/* minimum_console_loglevel */
	CONSOLE_LOGLEVEL_DEFAULT,	/* default_console_loglevel */
};
....

and then we see that the default is configurable with `CONFIG_CONSOLE_LOGLEVEL_DEFAULT`:

.... /* * Default used to be hard-coded at 7, quiet used to be hardcoded at 4, * we're now allowing both to be set from kernel config. */




The message loglevel default is explained at:

.... /* printk's without a loglevel use this.. */



The min is just hardcoded to one as you would expect, with some amazing kernel comedy around it:

....
/* We show everything that is MORE important than this.. */
#define CONSOLE_LOGLEVEL_SILENT		 0 /* Mum's the word */
#define CONSOLE_LOGLEVEL_MIN		 1 /* Minimum loglevel we let people use */
#define CONSOLE_LOGLEVEL_DEBUG		10 /* issue debug messages */
#define CONSOLE_LOGLEVEL_MOTORMOUTH	15 /* You can't shut this one up */
....


We then also learn about the `quiet` and `debug` kernel parameters at:

....
config CONSOLE_LOGLEVEL_QUIET
	int "quiet console loglevel (1-15)"
	range 1 15
	default "4"
	help
	  loglevel to use when "quiet" is passed on the kernel commandline.

	  When "quiet" is passed on the kernel commandline this loglevel
	  will be used as the loglevel. IOW passing "quiet" will be the
	  equivalent of passing "loglevel="
....
which explains the useless reason why that number is special. This is implemented at:

....
static int __init debug_kernel(char *str)
{
	console_loglevel = CONSOLE_LOGLEVEL_DEBUG;
	return 0;
}

static int __init quiet_kernel(char *str)
{
	console_loglevel = CONSOLE_LOGLEVEL_QUIET;
	return 0;
}

early_param("debug", debug_kernel);
early_param("quiet", quiet_kernel);
....

[[ignore-loglevel]] ==== ignore_loglevel

.... ./run --kernel-cli 'ignore_loglevel' ....

enables all log levels, and is basically the same as:

.... ./run --kernel-cli 'loglevel=8' ....

except that you don't need to know what is the maximum level.

[[pr-debug]] ==== pr_debug

Debug messages are not printable by default without recompiling.

But the awesome `CONFIG_DYNAMIC_DEBUG` option, which we enable by default, allows us to do:

....
echo 8 > /proc/sys/kernel/printk
echo 'file kernel/module.c +p' > /sys/kernel/debug/dynamic_debug/control
./linux/myinsmod.out hello.ko
....

and we have a shortcut at:

.... ./ ....

Source: link:rootfs_overlay/lkmc/[].


Wildcards are also accepted, e.g. enable all messages from all files:

.... echo 'file * +p' > /sys/kernel/debug/dynamic_debug/control ....

TODO: why is this not working:

.... echo 'func sys_init_module +p' > /sys/kernel/debug/dynamic_debug/control ....

Enable messages in specific modules:

....
echo 8 > /proc/sys/kernel/printk
echo 'module myprintk +p' > /sys/kernel/debug/dynamic_debug/control
insmod myprintk.ko
....

Source: link:kernel_modules/myprintk.c[]

This outputs the


.... printk debug ....

but TODO: it also shows debug messages even without enabling them explicitly:

.... echo 8 > /proc/sys/kernel/printk insmod myprintk.ko ....

and it shows as enabled:


....
grep myprintk /sys/kernel/debug/dynamic_debug/control
....

which outputs:

....
/root/linux-kernel-module-cheat/out/kernel_modules/x86_64/kernel_modules/panic.c:12 [myprintk]myinit =p "pr_debug\012"
....
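
Each control line has the shape `file:line [module]function =flags "format"`. A minimal sketch of pulling the module and flags out of such a line; the path and values are made up for illustration:

```shell
# Extract the module name and the flags from one dynamic_debug control line.
line='/path/to/panic.c:12 [myprintk]myinit =p "pr_debug\012"'
module=$(printf '%s\n' "$line" | sed -E 's/.*\[([^]]*)\].*/\1/')
flags=$(printf '%s\n' "$line" | sed -E 's/.* =([^ ]*) .*/\1/')
echo "module=$module flags=$flags"
```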


The `dyndbg` kernel boot parameter does the same for boot messages as well, before we can reach userland and write to the control file:

.... ./run --kernel-cli 'dyndbg="file * +p" loglevel=8' ....

Get ready for the noisiest boot ever, I think it overflows the

buffer and funny things happen.

[[pr-debug-is-different-from-printk-kern-debug]] ===== pr_debug != printk(KERN_DEBUG)


When `CONFIG_DYNAMIC_DEBUG` is set, `pr_debug` is not the exact same as `printk(KERN_DEBUG)`: `printk(KERN_DEBUG)` messages are visible with:

.... ./run --kernel-cli 'initcall_debug loglevel=8' ....

which outputs lines of type:

....
<7>[    1.756680] calling  clk_disable_unused+0x0/0x130 @ 1
<7>[    1.757003] initcall clk_disable_unused+0x0/0x130 returned 0 after 111 usecs
....

which are `printk(KERN_DEBUG)` calls in v4.16.

Mentioned at:

This likely comes from the ifdef split at include/linux/printk.h:

....
/* If you are writing a driver, please use dev_dbg instead */
#if defined(CONFIG_DYNAMIC_DEBUG)
/* dynamic_pr_debug() uses pr_fmt() internally so we don't need it here */
#define pr_debug(fmt, ...) \
	dynamic_pr_debug(fmt, ##__VA_ARGS__)
#elif defined(DEBUG)
#define pr_debug(fmt, ...) \
	printk(KERN_DEBUG pr_fmt(fmt), ##__VA_ARGS__)
#else
#define pr_debug(fmt, ...) \
	no_printk(KERN_DEBUG pr_fmt(fmt), ##__VA_ARGS__)
#endif
....



=== Kernel module APIs

==== Kernel module parameters

The Linux kernel allows passing module parameters at insertion time <>.

The `insmod` tool exposes that as:
.... insmod params.ko i=3 j=4 ....

Parameters are declared in the module as:

....
static u32 i = 0;
module_param(i, int, S_IRUSR | S_IWUSR);
MODULE_PARM_DESC(i, "my favorite int");
....

Automated test:

.... ./ echo $? ....

Outcome: the test passes:

.... 0 ....


  • link:kernel_modules/params.c[]
  • link:rootfs_overlay/lkmc/[]

As shown in the example, module parameters can also be read and modified at runtime from <>.

We can obtain the help text of the parameters with:

.... modinfo params.ko ....

The output contains:

.... parm: j:my second favorite int parm: i:my favorite int ....

===== modprobe.conf

<> insertion can also set default parameters via the link:rootfs_overlay/etc/modprobe.conf[] file:

....
modprobe params
cat /sys/kernel/debug/lkmc_params
....


.... 12 34 ....

This is especially important when loading modules with <>, or else we would have no opportunity of passing those.

Note that this file doesn't actually insmod anything for us:
==== Kernel module dependencies

One module can depend on symbols of another module that are exported with `EXPORT_SYMBOL`:
.... ./ echo $? ....

Outcome: the test passes:

.... 0 ....


  • link:kernel_modules/dep.c[]
  • link:kernel_modules/dep2.c[]
  • link:rootfs_overlay/lkmc/[]

The kernel deduces dependencies based on the undefined symbols that each module uses.

Symbols exported by `dep.ko` can be seen with:

....
insmod dep.ko
grep lkmc_dep /proc/kallsyms
....

sample output:

....
ffffffffc0001030 r __ksymtab_lkmc_dep	[dep]
ffffffffc000104d r __kstrtab_lkmc_dep	[dep]
ffffffffc0002300 B lkmc_dep	[dep]
....

This requires `CONFIG_KALLSYMS_ALL=y`.

Dependency information is stored by the kernel module build system in the `.ko` files' <>, e.g.:

.... modinfo dep2.ko ....


.... depends: dep ....

We can double check with:

.... strings dep2.ko | grep -E 'depends' ....

The output contains:

.... depends=dep ....

Module dependencies are also stored at:

....
cd /lib/modules/*
grep dep modules.dep
....


....
extra/dep2.ko: extra/dep.ko
extra/dep.ko:
....

TODO: what for, and at which point does Buildroot / BusyBox generate that file?
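
The file format itself is simple: one module per line, colon-separated from its direct dependencies. A minimal sketch of querying it, using a temporary sample file with the contents shown above:

```shell
# List the direct dependencies of one module from a modules.dep-style file.
cat > modules.dep.sample << 'EOF'
extra/dep2.ko: extra/dep.ko
extra/dep.ko:
EOF
deps=$(awk -F': *' '$1 == "extra/dep2.ko" { print $2 }' modules.dep.sample)
echo "$deps"
rm modules.dep.sample
```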

===== Kernel module dependencies with modprobe


Unlike `insmod`, <> deals with kernel module dependencies for us.

First get <> working.

Then, for example:

.... modprobe buildroot_dep2 ....

outputs to dmesg:

.... 42 ....

and then:

.... lsmod ....


....
Module                  Size  Used by    Tainted: G
buildroot_dep2         16384  0
buildroot_dep          16384  1 buildroot_dep2
....


  • link:buildroot_packages/kernel_modules/buildroot_dep.c[]
  • link:buildroot_packages/kernel_modules/buildroot_dep2.c[]

Removal also removes required modules that have zero usage count:

.... modprobe -r buildroot_dep2 ....

`modprobe` uses information from the `modules.dep` file to decide the required dependencies. That file contains:

.... extra/buildroot_dep2.ko: extra/buildroot_dep.ko ....



[[module-info]] ==== MODULE_INFO

Module metadata is stored on module files at compile time. Some of the fields can be retrieved at runtime through the `struct module` object:
.... insmod module_info.ko ....

Dmesg output:

.... name = module_info version = 1.0 ....

Source: link:kernel_modules/module_info.c[]

Some of those are also present on sysfs:

.... cat /sys/module/module_info/version ....


.... 1.0 ....

And we can also observe them with the `modinfo` command line utility:

.... modinfo module_info.ko ....

sample output:

....
filename:       module_info.ko
license:        GPL
version:        1.0
srcversion:     AF3DE8A8CFCDEB6B00E35B6
depends:
vermagic:       4.17.0 SMP mod_unload modversions
....

Module information is stored in a special

section of the ELF file:

.... ./run-toolchain readelf -- -SW "$(./getvar kernel_modules_build_subdir)/module_info.ko" ....


.... [ 5] .modinfo PROGBITS 0000000000000000 0000d8 000096 00 A 0 0 8 ....


.... ./run-toolchain readelf -- -x .modinfo "$(./getvar kernel_modules_build_subdir)/module_info.ko" ....


....
0x00000000 6c696365 6e73653d 47504c00 76657273 license=GPL.vers
0x00000010 696f6e3d 312e3000 61736466 3d717765 ion=1.0.asdf=qwe
0x00000020 72000000 00000000 73726376 65727369 r.......srcversi
0x00000030 6f6e3d41 46334445 38413843 46434445 on=AF3DE8A8CFCDE
0x00000040 42364230 30453335 42360000 00000000 B6B00E35B6......
0x00000050 64657065 6e64733d 006e616d 653d6d6f depends=.name=mo
0x00000060 64756c65 5f696e66 6f007665 726d6167 dule_info.vermag
0x00000070 69633d34 2e31372e 3020534d 50206d6f ic=4.17.0 SMP mo
0x00000080 645f756e 6c6f6164 206d6f64 76657273 d_unload modvers
0x00000090 696f6e73 2000                       ions .
....
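
As the hex dump shows, the section is just a series of NUL-terminated `key=value` strings. A minimal sketch of splitting such bytes the way `modinfo` conceptually does, simulating the section contents with `printf` instead of a real `.ko`:

```shell
# Build fake .modinfo-style bytes (NUL-separated key=value entries) and parse them.
printf 'license=GPL\0version=1.0\0depends=dep\0' > modinfo.sample
# Turn NUL separators into newlines, then pick out one key.
tr '\0' '\n' < modinfo.sample
version=$(tr '\0' '\n' < modinfo.sample | sed -n 's/^version=//p')
echo "version=$version"
rm modinfo.sample
```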

I think a dedicated section is used to allow the Linux kernel and command line tools to easily parse that information from the ELF file, as we've just done with `readelf`.


==== vermagic


As of kernel v5.8, you can't use the vermagic string from modules anymore as per:[]. So instead we just showcase the `utsname` fields.
Sample insmod output as of LKMC fa8c2ee521ea83a74a2300e7a3be9f9ab86e2cb6 + 1 aarch64:

....
<6>[   25.180697] sysname    = Linux
<6>[   25.180697] nodename   = buildroot
<6>[   25.180697] release    = 5.9.2
<6>[   25.180697] version    = #1 SMP Thu Jan  1 00:00:00 UTC 1970
<6>[   25.180697] machine    = aarch64
<6>[   25.180697] domainname = (none)
....

Vermagic is a magic string present in the kernel and previously visible in <> on kernel modules. It is used to verify that the kernel module was compiled against a compatible kernel version and relevant configuration:

.... insmod vermagic.ko ....

Possible dmesg output:

.... VERMAGIC_STRING = 4.17.0 SMP mod_unload modversions ....

If we artificially create a vermagic mismatch, the insmod fails with:

.... insmod: can't insert 'vermagic_fail.ko': invalid module format ....


dmesg says the expected and the found vermagic:

.... vermagic_fail: version magic 'asdfqwer' should be '4.17.0 SMP mod_unload modversions ' ....

Source: link:kernel_modules/vermagic_fail.c[]

The kernel's vermagic is defined based on compile time configurations at[include/linux/vermagic.h]:



....
#define VERMAGIC_STRING						\
	UTS_RELEASE " "						\
	MODULE_VERMAGIC_SMP MODULE_VERMAGIC_PREEMPT		\
	MODULE_VERMAGIC_MODULE_UNLOAD MODULE_VERMAGIC_MODVERSIONS \
	MODULE_ARCH_VERMAGIC
....

The `SMP` part of the string, for example, is defined in the same file based on the value of `CONFIG_SMP`.
TODO how to get the vermagic from running kernel from userland?

<> has a flag to skip the vermagic check:

.... --force-modversion ....

This option just strips the vermagic information from the module before loading, so it is not a kernel feature.

[[init-module]] ==== init_module

`init_module` and `cleanup_module` are an older alternative to the `module_init` and `module_exit` macros:

....
insmod init_module.ko
rmmod init_module
....

Dmesg output:

....
init_module
cleanup_module
....

Source: link:kernel_modules/init_module.c[]

TODO why were


==== Floating point in kernel modules

It is generally hard / impossible to use floating point operations in the kernel. TODO understand details.

A quick (x86-only for now because lazy) example is shown at: link:kernel_modules/float.c[]


.... insmod float.ko myfloat=1 enable_fpu=1 ....

We have to call `kernel_fpu_begin()` before starting FPU operations, and `kernel_fpu_end()` when we are done. This particular example however did not blow up without it at lkmc 7f917af66b17373505f6c21d75af9331d624b3a9 + 1:

.... insmod float.ko myfloat=1 enable_fpu=0 ....

The v5.1 documentation under[arch/x86/include/asm/fpu/api.h] reads:

....
 * Use kernel_fpu_begin/end() if you intend to use FPU in kernel context. It
 * disables preemption so be careful if you intend to use it for long periods
 * of time.
....

The example sets in the link:kernel_modules/Makefile[]:

.... CFLAGS_REMOVE_float.o += -mno-sse -mno-sse2 ....

to avoid:

.... error: SSE register return with SSE disabled ....

We found those flags with `./build-modules --verbose`.



=== Kernel panic and oops

To test out kernel panics and oops in controlled circumstances, try out the modules:

.... insmod panic.ko insmod oops.ko ....


  • link:kernel_modules/panic.c[]
  • link:kernel_modules/oops.c[]

A panic can also be generated with:

.... echo c > /proc/sysrq-trigger ....

Panic vs oops:

How to generate them:


When a panic happens, <> does not work as it normally does, and it is hard to get the logs if you are on <>:


==== Kernel panic

On panic, the kernel dies, and so does our terminal.

The panic trace looks like:

....
panic: loading out-of-tree module taints kernel.
panic myinit
Kernel panic - not syncing: hello panic
CPU: 0 PID: 53 Comm: insmod Tainted: G           O     4.16.0 #6
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 04/01/2014
Call Trace:
 dump_stack+0x7d/0xba
 ? 0xffffffffc0000000
 panic+0xda/0x213
 ? printk+0x43/0x4b
 ? 0xffffffffc0000000
 myinit+0x1d/0x20 [panic]
 do_one_initcall+0x3e/0x170
 do_init_module+0x5b/0x210
 load_module+0x2035/0x29d0
 ? kernel_read_file+0x7d/0x140
 ? SyS_finit_module+0xa8/0xb0
 SyS_finit_module+0xa8/0xb0
 do_syscall_64+0x6f/0x310
 ? trace_hardirqs_off_thunk+0x1a/0x32
 entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x7ffff7b36206
RSP: 002b:00007fffffffeb78 EFLAGS: 00000206 ORIG_RAX: 0000000000000139
RAX: ffffffffffffffda RBX: 000000000000005c RCX: 00007ffff7b36206
RDX: 0000000000000000 RSI: 000000000069e010 RDI: 0000000000000003
RBP: 000000000069e010 R08: 00007ffff7ddd320 R09: 0000000000000000
R10: 00007ffff7ddd320 R11: 0000000000000206 R12: 0000000000000003
R13: 00007fffffffef4a R14: 0000000000000000 R15: 0000000000000000
Kernel Offset: disabled
---[ end Kernel panic - not syncing: hello panic
....

Notice how our panic message `hello panic` is visible at:

.... Kernel panic - not syncing: hello panic ....

===== Kernel module stack trace to source line

The log shows which module each symbol belongs to if any, e.g.:

.... myinit+0x1d/0x20 [panic] ....

says that the function `myinit` is in the module `panic`.
To find the line that panicked, do:

.... ./run-gdb ....

and then:

.... info line *(myinit+0x1d) ....

which gives us the correct line:

.... Line 7 of "/root/linux-kernel-module-cheat/out/kernel_modules/x86_64/kernel_modules/panic.c" starts at address 0xbf00001c and ends at 0xbf00002c. ....

as explained at:

The exact same thing can be done post mortem with:

....
./run-toolchain gdb -- \
  -batch \
  -ex 'info line *(myinit+0x1d)' \
  "$(./getvar kernel_modules_build_subdir)/panic.ko" \
;
....



[[bug-on]] ===== BUG_ON

Basically just calls `panic()` with a message for most archs.

===== Exit emulator on panic

For testing purposes, it is very useful to quit the emulator automatically with exit status non zero in case of kernel panic, instead of just hanging forever.

====== Exit QEMU on panic

Enabled by default with:

  • panic=-1
    command line option, which reboots the kernel immediately on panic, see: xref:reboot-on-panic[xrefstyle=full]
  • QEMU's `-no-reboot` option, which makes QEMU exit when the guest tries to reboot

Also asked at which also mentions the x86_64

-device pvpanic
, but I don't see much advantage to it.

TODO neither method exits with exit status different from 0, so for now we are just grepping the logs for panic messages, which sucks.
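
A minimal sketch of that log-grepping workaround: scan an emulator serial log for the panic banner and turn a hit into a nonzero status. The log file name and contents here are made up for the example:

```shell
# Fake a serial log containing a panic, then detect it.
cat > serial.log.sample << 'EOF'
[    1.000000] Freeing unused kernel memory
[    2.000000] Kernel panic - not syncing: hello panic
EOF
if grep -q 'Kernel panic' serial.log.sample; then
  status=1   # a panic happened: report failure
else
  status=0   # clean boot
fi
echo "status=$status"
rm serial.log.sample
```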

One possibility that gets close would be to use <> to break at the `panic` function, and then send a <> command if that happens, but I don't see a way to exit with non-zero status to indicate error.

====== Exit gem5 on panic

gem5 9048ef0ffbf21bedb803b785fb68f83e95c04db8 (January 2019) can detect panics automatically if the `panic_on_panic` option is on.

It parses kernel symbols and detects when the PC reaches the address of the `panic` function. gem5 then prints to stdout:

.... Kernel panic in simulated kernel ....

and exits with status -6.

At gem5 ff52563a214c71fcd1e21e9f00ad839612032e3b (July 2018) behaviour was different, and just exited 0:[email protected]/msg15870.html TODO find fixing commit.

We enable the `panic_on_panic` option by default, which makes gem5 exit immediately in case of panic, which is awesome!

If we don't set `panic_on_panic`, then gem5 just hangs on an infinite guest loop.

TODO: why doesn't gem5 x86 ff52563a214c71fcd1e21e9f00ad839612032e3b support `panic_on_panic` as well? Trying to set it there fails with:

.... tried to set or access non-existent object parameter: panic_on_panic ....

However, at that commit panic on x86 makes gem5 crash with:

.... panic: i8042 "System reset" command not implemented. ....

which is a good side effect of an unimplemented hardware feature, since the simulation actually stops.

The implementation of panic detection happens at:

....
kernelPanicEvent = addKernelFuncEventOrPanic<Linux::KernelPanicEvent>(
    "panic", "Kernel panic in simulated kernel", dmesg_output);
....

Here we see that the symbol `panic` is the one being tracked.

Related thread:

===== Reboot on panic

Make the kernel reboot n seconds after panic:

.... echo 1 > /proc/sys/kernel/panic ....

Can also be controlled with the `panic` kernel boot parameter: `panic=0` to disable, `panic=-1` to reboot immediately.



===== Panic traces show addresses instead of symbols

If `CONFIG_KALLSYMS` is not set, then addresses are shown on traces instead of symbol plus offset.

In v4.16 it does not seem possible to configure that at runtime. GDB step debugging with:

.... ./run --eval-after 'insmod dump_stack.ko' --gdb-wait --tmux-args dump_stack ....

shows that traces are printed by `printk_stack_address`:

....
static void printk_stack_address(unsigned long address, int reliable,
				 char *log_lvl)
{
	touch_nmi_watchdog();
	printk("%s %s%pB\n", log_lvl, reliable ? "" : "? ", (void *)address);
}
....


The `%pB` format specifier used there is documented as:

.... If KALLSYMS are disabled then the symbol address is printed instead. ....

I wasn't able to disable `CONFIG_KALLSYMS` to test this out however, it is being selected by some other option? But I then used `make menuconfig` to see which options select it, and they were all off...

[[oops]] ==== Kernel oops

On oops, the shell still lives after.

However we:

  • leave the normal control flow, and the code after the oops never gets printed: it is as if an interrupt had been serviced
  • cannot `rmmod oops` afterwards

It is possible to make oopses always lead to panics with:

....
echo 1 > /proc/sys/kernel/panic_on_oops
insmod oops.ko
....

An oops stack trace looks like:

....
BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
IP: myinit+0x18/0x30 [oops]
PGD dccf067 P4D dccf067 PUD dcc1067 PMD 0
Oops: 0002 [#1] SMP NOPTI
Modules linked in: oops(O+)
CPU: 0 PID: 53 Comm: insmod Tainted: G           O     4.16.0 #6
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 04/01/2014
RIP: 0010:myinit+0x18/0x30 [oops]
RSP: 0018:ffffc900000d3cb0 EFLAGS: 00000282
RAX: 000000000000000b RBX: ffffffffc0000000 RCX: ffffffff81e3e3a8
RDX: 0000000000000001 RSI: 0000000000000086 RDI: ffffffffc0001033
RBP: ffffc900000d3e30 R08: 69796d2073706f6f R09: 000000000000013b
R10: ffffea0000373280 R11: ffffffff822d8b2d R12: 0000000000000000
R13: ffffffffc0002050 R14: ffffffffc0002000 R15: ffff88000dc934c8
FS:  00007ffff7ff66a0(0000) GS:ffff88000fc00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 000000000dcd2000 CR4: 00000000000006f0
Call Trace:
 do_one_initcall+0x3e/0x170
 do_init_module+0x5b/0x210
 load_module+0x2035/0x29d0
 ? SyS_finit_module+0xa8/0xb0
 SyS_finit_module+0xa8/0xb0
 do_syscall_64+0x6f/0x310
 ? trace_hardirqs_off_thunk+0x1a/0x32
 entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x7ffff7b36206
RSP: 002b:00007fffffffeb78 EFLAGS: 00000206 ORIG_RAX: 0000000000000139
RAX: ffffffffffffffda RBX: 000000000000005c RCX: 00007ffff7b36206
RDX: 0000000000000000 RSI: 000000000069e010 RDI: 0000000000000003
RBP: 000000000069e010 R08: 00007ffff7ddd320 R09: 0000000000000000
R10: 00007ffff7ddd320 R11: 0000000000000206 R12: 0000000000000003
R13: 00007fffffffef4b R14: 0000000000000000 R15: 0000000000000000
Code: 04 25 00 00 00 00 00 00 00 00 e8 b2 33 09 c1 31 c0 c3 0f 1f 44
RIP: myinit+0x18/0x30 [oops] RSP: ffffc900000d3cb0
CR2: 0000000000000000
---[ end trace 3cdb4e9d9842b503 ]---
....

To find the line that oopsed, look at the RIP register:

.... RIP: 0010:myinit+0x18/0x30 [oops] ....

and then on GDB:

.... ./run-gdb ....


.... info line *(myinit+0x18) ....

which gives us the correct line:

.... Line 7 of "/root/linux-kernel-module-cheat/out/kernel_modules/x86_64/kernel_modules/panic.c" starts at address 0xbf00001c and ends at 0xbf00002c. ....

This did not work in some setups due to <>, so we need to either:
  • <>
  • <> post-mortem method

[[dump-stack]] ==== dump_stack


The `dump_stack` function produces a stack trace much like panic and oops, but causes no problems: we return to the normal control flow, and can cleanly remove the module afterwards:

.... insmod dump_stack.ko ....

Source: link:kernel_modules/dump_stack.c[]

[[warn-on]] ==== WARN_ON


The `WARN_ON` macro basically just calls <>.

One extra side effect is that we can make it also panic with:

.... echo 1 > /proc/sys/kernel/panic_on_warn
insmod warn_on.ko ....

Source: link:kernel_modules/warn_on.c[]

Panic on warn can also be activated with the `panic_on_warn` boot parameter.

[[not-syncing-vfs]] ==== not syncing: VFS: Unable to mount root fs on unknown-block(0,0)

Let's learn how to diagnose problems with the root filesystem not being found. TODO add a sample panic error message for each error type:


This is the diagnosis procedure.

First, if we remove the following options from our kernel build:

.... CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO_PCI=y ....

we get a message like this:

.... <4>[ 0.541708] VFS: Cannot open root device "vda" or unknown-block(0,0): error -6 <4>[ 0.542035] Please append a correct "root=" boot option; here are the available partitions: <0>[ 0.542562] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) ....

From the message, we notice that the kernel sees a disk of some sort (vda means a virtio disk), but it could not open it.

This means that the kernel cannot properly read any bytes from the disk.

And afterwards, it prints the useless message

here are the available partitions:
, but of course the list is empty, because the kernel cannot even read bytes from the disk, so it definitely cannot understand its filesystems.

This can indicate basically two things:

  • on real hardware, it could mean that the hardware is broken. Kind of hard on emulators ;-)
  • you didn't configure the kernel with the option that enables it to read from that kind of disk. + In our case, disks are virtio devices that QEMU exposes to the guest kernel. This is why removing the options: + .... CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO_PCI=y .... + led to this error.

Now, let's restore the previously removed virtio options, and instead remove:

.... CONFIG_EXT4_FS=y ....

This time, the kernel will be able to read bytes from the device. But it won't be able to read files from the filesystem, because our filesystem is in ext4 format.

Therefore, this time the error message looks like this:

.... <4>[ 0.585296] List of all partitions: <4>[ 0.585913] fe00 524288 vda <4>[ 0.586123] driver: virtio_blk <4>[ 0.586471] No filesystem could mount root, tried: <4>[ 0.586497] squashfs <4>[ 0.586724] <0>[ 0.587360] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(254,0) ....

In this case, we see that the kernel did manage to read from the vda disk! It even told us how: by using the `virtio_blk` driver.

However, it then went through the list of all filesystem types it knows how to read files from, in our case just squashfs, and none of those worked, because our partition is an ext4 partition.

Finally, the last possible error is that we simply passed the wrong `root=` <>. For example, if we hack our command to pass:

.... root=/dev/vda2 ....

which does not even exist, since /dev/vda is a raw non-partitioned ext4 image, then boot fails with the message:

.... <4>[ 0.608475] Please append a correct "root=" boot option; here are the available partitions: <4>[ 0.609563] fe00 524288 vda <4>[ 0.609723] driver: virtio_blk <0>[ 0.610433] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(254,2) ....

This one is easy, because the kernel tells us clearly which partitions it would have been able to understand: in our case, `vda`.


Once all those problems are solved, in the working setup, we finally see something like:

.... <6>[ 0.636129] EXT4-fs (vda): mounted filesystem with ordered data mode. Opts: (null) <6>[ 0.636700] VFS: Mounted root (ext4 filesystem) on device 254:0. ....

Tested on LKMC 863a373a30cd3c7982e3e453c4153f85133b17a9, Linux kernel 5.4.3.



=== Pseudo filesystems

Pseudo filesystems are filesystems that don't represent actual files in a hard disk, but rather allow us to do special operations on filesystem-related system calls.

What each pseudo-file does for each related system call is defined by its <>.



==== debugfs

Debugfs is the simplest pseudo filesystem to play around with:

.... ./ echo $? ....

Outcome: the test passes:

.... 0 ....


  • link:kernel_modules/debugfs.c[]
  • link:rootfs_overlay/lkmc/[]

Debugfs is made specifically to help test kernel stuff. Just mount, set <>, and we are done.

For this reason, it is the filesystem that we use whenever possible in our tests.
Our test script explicitly mounts a debugfs at a custom location, but the most common mount point is `/sys/kernel/debug`.

This mount is not done automatically by the kernel however: we, like most distros, do it from userland with our link:rootfs_overlay/etc/fstab[fstab].

Debugfs support requires the kernel to be compiled with `CONFIG_DEBUG_FS=y`.


Only the more basic file operations can be implemented in debugfs: e.g. `mmap` never gets called:


==== procfs

Procfs is just another fops entry point:

.... ./ echo $? ....

Outcome: the test passes:

.... 0 ....

Procfs is a little less convenient than <>, but is more used in serious applications.

Procfs can run all system calls, including ones that debugfs can't, e.g. <>.


  • link:kernel_modules/procfs.c[]
  • link:rootfs_overlay/lkmc/[]



[[proc-version]] ===== /proc/version

Its data is shared with `uname()`, which is a <> function and has a Linux syscall to back it up.

Where the data comes from and how to modify it:


In this repo, to avoid leaking host information and to make builds more reproducible, we are setting:

  • user and date to dummy values with `KBUILD_BUILD_USER` and `KBUILD_BUILD_TIMESTAMP`
  • hostname to the kernel git commit with `KBUILD_BUILD_HOST`

A sample result is:

.... Linux version 4.19.0-dirty ([email protected]) (gcc version 6.4.0 (Buildroot 2018.05-00002-gbc60382b8f)) #1 SMP Thu Jan 1 00:00:00 UTC 1970 ....

==== sysfs

Sysfs is more restricted than <>, as it does not take an arbitrary `file_operations`:

.... ./ echo $? ....

Outcome: the test passes:

.... 0 ....


  • link:kernel_modules/sysfs.c[]
  • link:rootfs_overlay/lkmc/[]

Vs procfs:


You basically can only do `open`, `close`, `read`, `write`, and `lseek` on sysfs files.

It is similar to a <> file operation, except that write is also implemented.

TODO: what are those `kobject` structs? Make a more complex example that shows what they can do.



==== Character devices

Character devices can have arbitrary <> associated to them:

.... ./ echo $? ....

Outcome: the test passes:

.... 0 ....


  • link:rootfs_overlay/lkmc/[]
  • link:rootfs_overlay/lkmc/[]
  • link:kernel_modules/character_device.c[]

Unlike <> entries, character device files are created with the userland `mknod` command:

.... mknod <path> c <major> <minor> ....

Intuitively, for physical devices like keyboards, the major number maps to which driver, and the minor number maps to which device it is.

A single driver can drive multiple compatible devices.

The major and minor numbers can be observed with:

.... ls -l /dev/urandom ....


.... crw-rw-rw- 1 root root 1, 9 Jun 29 05:45 /dev/urandom ....

which means:

  • `c` (first letter): this is a character device. Would be `b` for a block device.
  • `1, 9`: the major number is `1`, and the minor `9`.
To avoid device number conflicts when registering the driver we:

  • ask the kernel to allocate a free major number for us
  • find out which number was assigned by grepping `/proc/devices` for the kernel module name


===== Automatically create character device file on insmod

And also destroy it on `rmmod`:


.... ./ echo $? ....

Outcome: the test passes:

.... 0 ....


  • link:kernel_modules/character_device_create.c[]
  • link:rootfs_overlay/lkmc/[]


=== Pseudo files

==== File operations

File operations are the main method of userland driver communication.

The `struct file_operations` determines what the kernel will do on filesystem system calls of <>.

This example illustrates the most basic system calls:


.... ./ echo $? ....

Outcome: the test passes:

.... 0 ....


  • link:kernel_modules/fops.c[]
  • link:rootfs_overlay/lkmc/[]

Then give this a try:

.... sh -x ./ ....

We have put printks on each fop, so this allows you to see which system calls are being made for each command.

No, there is no official documentation:

[[seq-file]] ==== seq_file

Writing trivial read <> is repetitive and error prone. The `seq_file`

API makes the process much easier for those trivial cases:

.... ./ echo $? ....

Outcome: the test passes:

.... 0 ....


  • link:kernel_modules/seq_file.c[]
  • link:rootfs_overlay/lkmc/[]

In this example we create a debugfs file that behaves just like a file that contains:

.... 0 1 2 ....

However, we only store a single integer in memory and calculate the file on the fly in an iterator fashion.

`seq_file` does not provide a `write` implementation:



[[seq-file-single-open]] ===== seq_file single_open

If you have the entire read output upfront, `single_open`

is an even more convenient version of <>:

.... ./ echo $? ....

Outcome: the test passes:

.... 0 ....


  • link:kernel_modules/seq_file_single_open.c[]
  • link:rootfs_overlay/lkmc/[]

This example produces a debugfs file that behaves like a file that contains:

.... ab cd ....

==== poll

The poll system call allows a user process to do a non-busy wait on a kernel event.


  • link:kernel_modules/poll.c[]
  • link:rootfs_overlay/lkmc/[]


.... ./ ....


Outcome: `POLLIN` lines get printed to stdout every second from userland, e.g.:

.... poll <6>[ 4.275305] poll <6>[ 4.275580] return POLLIN revents = 1 POLLIN n=10 buf=4294893337 poll <6>[ 4.276627] poll <6>[ 4.276911] return 0 <6>[ 5.271193] wakeup <6>[ 5.272326] poll <6>[ 5.273207] return POLLIN revents = 1 POLLIN n=10 buf=4294893588 poll <6>[ 5.276367] poll <6>[ 5.276618] return 0 <6>[ 6.275178] wakeup <6>[ 6.276370] poll <6>[ 6.277269] return POLLIN revents = 1 POLLIN n=10 buf=4294893839 ....

Force the poll <> to return 0 to see what happens more clearly:

.... ./ pol0=1 ....

Sample output:

.... poll <6>[ 85.674801] poll <6>[ 85.675788] return 0 <6>[ 86.675182] wakeup <6>[ 86.676431] poll <6>[ 86.677373] return 0 <6>[ 87.679198] wakeup <6>[ 87.680515] poll <6>[ 87.681564] return 0 <6>[ 88.683198] wake_up ....

From this we see that control is not returned to userland: the kernel just keeps calling the poll file operation again and again.

Typically, we are waiting for some hardware to make some piece of data available to the kernel.

The hardware notifies the kernel that the data is ready with an interrupt.

To simplify this example, we just fake the hardware interrupts with a <> that sleeps for a second in an infinite loop.



==== ioctl


The `ioctl` system call is the best way to pass an arbitrary number of parameters to the kernel in a single go:

.... ./ echo $? ....

Outcome: the test passes:

.... 0 ....


  • link:kernel_modules/ioctl.c[]
  • link:lkmc/ioctl.h[]
  • link:userland/kernel_modules/ioctl.c[]
  • link:rootfs_overlay/lkmc/[]

`ioctl` is one of the most important methods of communication with real device drivers, which often take several fields as input.

`ioctl` takes as input:
  • an integer `request`: it usually identifies what type of operation we want to do on this call
  • an untyped pointer to memory: can be anything, but is typically a pointer to a `struct`. + The type of the `struct` often depends on the `request` input. + This `struct` is defined on a uapi-style C header that is used both to compile the kernel module and the userland executable. + The fields of this `struct` can be thought of as arbitrary input parameters.

And the output is:

  • an integer return value, which `man ioctl`
    documents: + ____ Usually, on success zero is returned. A few ioctl()
    requests use the return value as an output parameter and return a nonnegative value on success. On error, -1 is returned, and errno is set appropriately. ____
  • the input pointer data may be overwritten to contain arbitrary output



==== mmap


The `mmap` system call allows us to share memory between user and kernel space without copying:

.... ./ echo $? ....

Outcome: the test passes:

.... 0 ....


  • link:kernel_modules/mmap.c[]
  • link:userland/kernel_modules/mmap.c[]
  • link:rootfs_overlay/lkmc/[]

In this example, we make a tiny 4 byte kernel buffer available to user-space, and we then modify it on userspace, and check that the kernel can see the modification.

`mmap`, like most more complex <>, does not work with <> as of 4.9, so we use a <> file for it.

Example adapted from:



==== Anonymous inode

Anonymous inodes allow getting multiple file descriptors from a single filesystem entry, which reduces namespace pollution compared to creating multiple device files:

.... ./ echo $? ....

Outcome: the test passes:

.... 0 ....


  • link:kernel_modules/anonymous_inode.c[]
  • link:lkmc/anonymous_inode.h[]
  • link:userland/kernel_modules/anonymous_inode.c[]
  • link:rootfs_overlay/lkmc/[]

This example gets an anonymous inode via <> from a debugfs entry by using `anon_inode_getfd`.

Reads to that inode return an incrementing sequence: `1`, `2`, `3`, ...


==== netlink sockets

Netlink sockets offer a socket API for kernel / userland communication:

.... ./ echo $? ....

Outcome: the test passes:

.... 0 ....


  • link:kernel_modules/netlink.c[]
  • link:lkmc/netlink.h[]
  • link:userland/kernel_modules/netlink.c[]
  • link:rootfs_overlay/lkmc/[]

Launch multiple user requests in parallel to stress our socket:

.... insmod netlink.ko sleep=1
for i in $(seq 16); do ./netlink.out & done ....

TODO: what is the advantage over `ioctl`?




=== kthread

Kernel threads are managed exactly like userland threads: they also have a backing `task_struct`, and are scheduled with the same mechanism:

.... insmod kthread.ko ....

Source: link:kernel_modules/kthread.c[]

Outcome: dmesg counts from `0` to `9`, once every second, infinitely many times:

.... 0 1 2 ... 8 9 0 1 2 ... ....

The count stops when we `rmmod`:


.... rmmod kthread ....

The sleep is done with `usleep_range`, see: xref:sleep[xrefstyle=full].



==== kthreads

Let's launch two threads and see if they actually run in parallel:

.... insmod kthreads.ko ....

Source: link:kernel_modules/kthreads.c[]

Outcome: two threads count to dmesg from `0` to `9` in parallel.

Each line has output of the form:

.... <thread_id> <count> ....

Possible, very likely, outcome:

.... 1 0
2 0
1 1
2 1
1 2
2 2
1 3
2 3 ....

The threads almost always interleaved nicely, thus confirming that they are actually running in parallel.

==== sleep

Count to dmesg every one second from `0` up to `n - 1`:
.... insmod sleep.ko n=5 ....

Source: link:kernel_modules/sleep.c[]

The sleep is done with a call to `usleep_range` directly inside `module_init`, for simplicity.



==== Workqueues

A more convenient front-end for <>:

.... insmod workqueue_cheat.ko ....

Outcome: count from `0` infinitely many times.

Stop counting:

.... rmmod workqueue_cheat ....

Source: link:kernel_modules/workqueue_cheat.c[]

The workqueue thread is killed after the worker function returns.

We can't call the module just `workqueue`, because there is already a built-in with that name:


===== Workqueue from workqueue

Count from `0`, every second, infinitely many times, by scheduling a new work item from a work item:

.... insmod workfromwork.ko ....


.... rmmod workfromwork ....

The sleep is done indirectly through `queue_delayed_work`, which waits the specified time before scheduling the work.

Source: link:kernel_modules/work_from_work.c[]

==== schedule

Let's block the entire kernel! Yay:

..... ./run --eval-after 'dmesg -n 1;insmod schedule.ko schedule=0' .....

Outcome: the system hangs, the only way out is to kill the VM.

Source: link:kernel_modules/schedule.c[]

kthreads only allow interrupting if you call `schedule()`, and the `schedule=0` <> turns that off.

Sleep functions like `usleep_range` also end up calling schedule.

If we allow `schedule()` to be called, then the system becomes responsive:

..... ./run --eval-after 'dmesg -n 1;insmod schedule.ko schedule=1' .....

and we can observe the counting with:

.... dmesg -w ....

The system also responds if we <>:

.... ./run --cpus 2 --eval-after 'dmesg -n 1;insmod schedule.ko schedule=0' ....

==== Wait queues

Wait queues are a way to make a thread sleep until an event happens on the queue:

.... insmod wait_queue.ko ....

Dmesg output:

.... 0 0 1 0 2 0

Wait one second.

0 1 1 1 2 1

Wait one second.

0 2 1 2 2 2 ... ....

Stop the count:

.... rmmod wait_queue ....

Source: link:kernel_modules/wait_queue.c[]

This example launches three threads:

  • one thread generates events every second with `wake_up`
  • the other two threads wait for that with `wait_event`, and print a dmesg line when it happens. + The `wait_event` macro works a bit like: + .... while (!cond)
    sleep_until_event(); ....

=== Timers

Count from `0`, infinitely many times, in 1 second intervals, using timers:

.... insmod timer.ko ....

Stop counting:

.... rmmod timer ....

Source: link:kernel_modules/timer.c[]

Timers are callbacks that run when an interrupt happens, from the interrupt context itself.

Therefore they produce more accurate timing than thread scheduling, which is more complex, but you can't do too much work inside of them.



=== IRQ

==== irq.ko

Brute force monitor every shared interrupt that will accept us:

.... ./run --eval-after 'insmod irq.ko' --graphic ....

Source: link:kernel_modules/irq.c[].

Now try the following:

  • press a keyboard key and then release it after a few seconds
  • press a mouse key, and release it after a few seconds
  • move the mouse around

Outcome: dmesg shows which IRQ was fired for each action through messages of type:

.... handler irq = 1 dev = 250 ....

The `dev` number is the major number of the module's character device and never changes, as can be confirmed by:

.... grep lkmc_irq /proc/devices ....

The IRQs that we observe are:

  • `1` for keyboard press and release. + If you hold the key down for a while, it starts firing at a constant rate. So this happens at the hardware level!
  • `12` for mouse actions

This only works for IRQs whose other handlers were registered as `IRQF_SHARED`.


We can see which ones are those, either via dmesg messages of type:

.... genirq: Flags mismatch irq 0. 00000080 (myirqhandler0) vs. 00015a00 (timer)
request_irq irq = 0 ret = -16
request_irq irq = 1 ret = 0 ....

which indicate that IRQ 0 is not shared, but IRQ 1 is; or with:

.... cat /proc/interrupts ....

which shows:

.... 0: 31 IO-APIC 2-edge timer 1: 9 IO-APIC 1-edge i8042, myirqhandler0 ....

so our handler only attached to IRQ 1, but not to IRQ 0.
The <> also has some interrupt statistics for x86_64:

.... ./qemu-monitor info irq ....

TODO: properly understand how each IRQ maps to what number.

==== dummy-irq

The Linux kernel v4.16 mainline also has a `dummy-irq` module at `drivers/misc/dummy-irq.c` for monitoring a single IRQ.

We build it by default with `CONFIG_DUMMY_IRQ=m`.


And then you can do

.... ./run --graphic ....

and in guest:

.... modprobe dummy-irq irq=1 ....

Outcome: when you click a key on the keyboard, dmesg shows:

.... dummy-irq: interrupt occurred on IRQ 1 ....

However, this module is intended to fire only once as can be seen from its source:

.... static int count = 0;

if (count == 0) {
    printk(KERN_INFO "dummy-irq: interrupt occurred on IRQ %d\n",
           irq);
    count++;
} ....

and furthermore the interrupts happen immediately. TODO why, were they somehow pending?

==== /proc/interrupts

In the guest with <>:

.... watch -n 1 cat /proc/interrupts ....

Then see how clicking the mouse and keyboard affect the interrupt counts.

This confirms that:

  • 1: keyboard
  • 12: mouse click and drags

`/proc/interrupts` also shows which handlers are registered for each IRQ, as we have observed at <>

When in text mode, we can also observe interrupt line 4, with handler `ttyS0`, increase continuously as IO goes through the UART.

=== Kernel utility functions

==== kstrto

Convert a string to an integer:

.... ./ echo $? ....

Outcome: the test passes:

.... 0 ....


  • link:kernel_modules/kstrto.c[]
  • link:rootfs_overlay/lkmc/[]


[[virt-to-phys]] ==== virt_to_phys

Convert a virtual address to physical:

.... insmod virt_to_phys.ko
cat /sys/kernel/debug/lkmc_virt_to_phys ....

Source: link:kernel_modules/virt_to_phys.c[]

Sample output:

.... *kmalloc_ptr = 0x12345678
kmalloc_ptr = ffff88000e169ae8
virt_to_phys(kmalloc_ptr) = 0xe169ae8
static_var = 0x12345678
&static_var = ffffffffc0002308
virt_to_phys(&static_var) = 0x40002308 ....

We can confirm that the kmalloc translation worked with:

.... ./qemu-monitor 'xp 0xe169ae8' ....

which reads four bytes from a given physical address, and gives the expected:

.... 000000000e169ae8: 0x12345678 ....

TODO it only works for kmalloc however, for the static variable:

.... ./qemu-monitor 'xp 0x40002308' ....

it gave a wrong value.




===== Userland physical address experiments

Only tested in x86_64.

The Linux kernel exposes physical addresses to userland through:

  • /proc/<pid>/maps
  • /proc/<pid>/pagemap
  • /dev/mem

In this section we will play with them.

The following files contain examples to access that data and test it out:

  • link:lkmc/pagemap.h[]
  • link:rootfs_overlay/lkmc/[]
  • link:userland/linux/virt_to_phys_user.c[]
  • link:userland/posix/virt_to_phys_test.c[]

First get a virtual address to play with:

.... ./posix/virt_to_phys_test.out & ....

Source: link:userland/posix/virt_to_phys_test.c[]

Sample output:

.... vaddr 0x600800 pid 110 ....

The program:

  • allocates an integer variable and sets its value to `0x12345678`
  • prints the virtual address of the variable, and the program PID
  • runs a while loop until the value of the variable gets mysteriously changed somehow, e.g. by nasty tinkerers like us

Then, translate the virtual address to physical by passing the PID and the virtual address to:

.... ./linux/virt_to_phys_user.out 110 0x600800 ....

Sample output physical address:

.... 0x7c7b800 ....

Now we can verify that `virt_to_phys_user.out` gave the correct physical address in the following ways:
  • <>
  • <>



====== QEMU xp


The `xp` <> command reads memory at a given physical address.

First launch `virt_to_phys_test.out`

as described at <>.

On a second terminal, use QEMU to read the physical address:

.... ./qemu-monitor 'xp 0x7c7b800' ....


.... 0000000007c7b800: 0x12345678 ....

Yes!!! We read the correct value from the physical address.

We could not find how to write to memory from the QEMU monitor however, boring.

[[dev-mem]] ====== /dev/mem

`/dev/mem` exposes access to physical addresses, and we use it through the convenient `devmem` BusyBox utility.

First launch `virt_to_phys_test.out`

as described at <>.

Next, read from the physical address:

.... devmem 0x7c7b800 ....

Possible output:

.... Memory mapped at address 0x7ff7dbe01000. Value at address 0X7C7B800 (0x7ff7dbe01800): 0x12345678 ....

which shows that the physical memory contains the expected value `0x12345678`.

`0x7ff7dbe01000` is a new virtual address that `devmem` maps to the physical address to be able to read from it.

Modify the physical memory:

.... devmem 0x7c7b800 w 0x9abcdef0 ....

After one second, we see on the screen:

.... i 9abcdef0
[1]+ Done ./posix/virt_to_phys_test.out ....

so the value changed, and the while loop exited!

This example requires:

  • `CONFIG_STRICT_DEVMEM=n`, otherwise `devmem`
    fails with: + .... devmem: mmap: Operation not permitted ....
  • the `nopat`
    kernel parameter

which we set by default.


[[pagemap-dump-out]] ====== pagemap_dump.out

Dump the physical address of all pages mapped to a given process using `/proc/<pid>/pagemap`:

First launch `virt_to_phys_test.out` as described at <>. Suppose that the output was:

.... ./posix/virt_to_phys_test.out &
vaddr 0x601048 pid 63
./linux/virt_to_phys_user.out 63 0x601048
0x1a61048 ....

Now obtain the page map for the process:

.... ./linux/pagemap_dump.out 63 ....

Sample output excerpt:

.... vaddr pfn soft-dirty file/shared swapped present library
400000 1ede 0 1 0 1 ./posix/virt_to_phys_test.out
600000 1a6f 0 0 0 1 ./posix/virt_to_phys_test.out
601000 1a61 0 0 0 1 ./posix/virt_to_phys_test.out
602000 2208 0 0 0 1 [heap]
603000 220b 0 0 0 1 [heap]
7ffff78ec000 1fd4 0 1 0 1 /lib/ ....


  • link:userland/linux/pagemap_dump.c[]
  • link:lkmc/pagemap.h[]

Adapted from:

Meaning of the flags:

  • `vaddr`: first virtual address of a page that belongs to the process. Notably: + .... ./run-toolchain readelf -- -l "$(./getvar userland_build_dir)/posix/virt_to_phys_test.out" .... + contains: + .... Type Offset VirtAddr PhysAddr FileSiz MemSiz Flags Align
...
LOAD 0x0000000000000000 0x0000000000400000 0x0000000000400000 0x000000000000075c 0x000000000000075c R E 0x200000
LOAD 0x0000000000000e98 0x0000000000600e98 0x0000000000600e98 0x00000000000001b4 0x0000000000000218 RW 0x200000

Section to Segment mapping:
Segment Sections...
...
02 .interp .hash .dynsym .dynstr .rela.plt .init .plt .text .fini .rodata .eh_frame_hdr .eh_frame
03 .ctors .dtors .jcr .dynamic .got.plt .data .bss .... + from which we deduce that: + ** `400000` is the text segment ** `600000` is the data segment
  • `pfn`: add three zeroes to it, and you have the physical address. + Three zeroes is 12 bits, which is 4kB, the size of a page. + For example, the virtual address `601000` has pfn `1a61`, which means that its physical address is `1a61000`. + This is consistent with what `linux/virt_to_phys_user.out` told us: the virtual address `601048` has physical address `1a61048`. The `048` corresponds to the three last zeroes, and is the offset within the page. + Also, this value falls inside `601000`, which as previously analyzed is the data section, which is the normal location for global variables such as ours.
  • `soft-dirty`: TODO
  • `file/shared`: seems to indicate that the page can be shared across processes, possibly for read-only pages? E.g. the text segment has `1`, but the data has `0`.
  • `swapped`: TODO swapped to disk?
  • `present`: TODO vs swapped?
  • `library`: which executable owns that page

This program works in two steps:

  • parse the human readable lines from `/proc/<pid>/maps`. This file contains lines of the form: + .... 7ffff7b6d000-7ffff7bdd000 r-xp 00000000 fe:00 658 /lib/ .... + which tells us that: + ** `7ffff7b6d000-7ffff7bdd000` is a virtual address range that belongs to the process, possibly containing multiple pages ** `/lib/` is the name of the library that owns that memory
  • loop over each page of each address range, and ask `/proc/<pid>/pagemap`
    for more information about that page, including the physical address

=== Linux kernel tracing

Good overviews:

  • by Brendan Gregg, AKA the master of tracing. Also:

I hope to have examples of all methods some day, since I'm obsessed with visibility.

[[config-proc-events]] ==== CONFIG_PROC_EVENTS

Logs proc events such as process creation to a link:kernel_modules/netlink.c[netlink socket].

We then have a userland program that listens to the events and prints them out:


.... ./linux/proc_events.out &
set mcast listen ok
sleep 2 & sleep 1
fork: parent tid=48 pid=48 -> child tid=79 pid=79
fork: parent tid=48 pid=48 -> child tid=80 pid=80
exec: tid=80 pid=80
exec: tid=79 pid=79
exit: tid=80 pid=80 exit_code=0
exit: tid=79 pid=79 exit_code=0
echo a
a ....


Source: link:userland/linux/proc_events.c[]

TODO: why does `exit: tid=79` show after `exit: tid=80`?

Note how `echo a` is a Bash built-in, and therefore does not spawn a new process.

TODO: why does this produce no output?

.... ./linux/proc_events.out >f & ....


TODO: can you get process data such as UID and process arguments? It seems not, since `struct proc_event` contains so little data. We could try to immediately read it from `/proc/<pid>`, but there is a risk that the process finished and another one took its PID, so it wouldn't be reliable.
  • requests process name
  • requests UID

[[config-proc-events-aarch64]] ===== CONFIG_PROC_EVENTS aarch64

0111ca406bdfa6fd65a2605d353583b4c4051781 was failing with:


.... kernel_modules 1.0 Building
/usr/bin/make -j8 -C '/linux-kernel-module-cheat//out/aarch64/buildroot/build/kernel_modules-1.0/user' BR2_PACKAGE_OPENBLAS="" CC="/linux-kernel-module-cheat//out/aarch64/buildroot/host/bin/aarch64-buildroot-linux-uclibc-gcc" LD="/linux-kernel-module-cheat//out/aarch64/buildroot/host/bin/aarch64-buildroot-linux-uclibc-ld"
/linux-kernel-module-cheat//out/aarch64/buildroot/host/bin/aarch64-buildroot-linux-uclibc-gcc -ggdb3 -fopenmp -O0 -std=c99 -Wall -Werror -Wextra -o 'proc_events.out' 'proc_events.c'
In file included from /linux-kernel-module-cheat//out/aarch64/buildroot/host/aarch64-buildroot-linux-uclibc/sysroot/usr/include/signal.h:329:0,
                 from proc_events.c:12:
/linux-kernel-module-cheat//out/aarch64/buildroot/host/aarch64-buildroot-linux-uclibc/sysroot/usr/include/sys/ucontext.h:50:16: error: field ‘uc_mcontext’ has incomplete type
     mcontext_t uc_mcontext;
                ^~~~~~~~~~~ ....

so we commented it out.

Related threads:


If we try to naively update uclibc to 1.0.29, which contains the above mentioned patch, a clean test build fails with:

.... ../utils/ldd.c: In function 'elffinddynamic': ../utils/ldd.c:238:12: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast] return (void )byteswaptohost(dynp->dun.dval); ^ /tmp/user/20321/cciGScKB.o: In function

msgmerge.c:(.text+0x22): undefined reference to
escape' /tmp/user/20321/cciGScKB.o: In function
msgmerge.c:(.text+0xf6): undefined reference to
poparserinit' msgmerge.c:(.text+0x11e): undefined reference to `poparserfeedline' msgmerge.c:(.text+0x128): undefined reference to `poparserfinish' collect2: error: ld returned 1 exit status recipe for target '../utils/' failed make[2]: *
* [../utils/] Error 1 make[2]: *** Waiting for unfinished jobs.... /tmp/user/20321/ccF8V8jF.o: In function

msgfmt.c:(.text+0xbf3): undefined reference to
poparserinit' msgfmt.c:(.text+0xc1f): undefined reference to `poparserfeedline' msgfmt.c:(.text+0xc2b): undefined reference to `poparserfinish' collect2: error: ld returned 1 exit status recipe for target '../utils/' failed make[2]: *** [../utils/] Error 1 package/ recipe for target '/data/git/linux-kernel-module-cheat/out/aarch64/buildroot/build/uclibc-custom/.stampbuilt' failed make[1]: *** [/data/git/linux-kernel-module-cheat/out/aarch64/buildroot/build/uclibc-custom/.stampbuilt] Error 2 Makefile:79: recipe for target 'all' failed make: *** [all] Error 2 ....

Buildroot master has already moved to uclibc 1.0.29 at f8546e836784c17aa26970f6345db9d515411700, but it is not yet in any tag... so I'm not tempted to update it yet just for this.

==== ftrace

Trace a single function:

.... cd /sys/kernel/debug/tracing/

# Stop tracing.
echo 0 > tracing_on

# Clear previous trace.
echo > trace

# List the available tracers, and pick one.
cat available_tracers
echo function > current_tracer

# List all functions that can be traced.
cat available_filter_functions

# Choose one.
echo __kmalloc > set_ftrace_filter

# Confirm that only __kmalloc is enabled.
cat enabled_functions

echo 1 > tracing_on

# Latest events.
head trace

# Observe trace continuously, and drain seen events out.
cat trace_pipe & ....

Sample output:


.... # tracer: function
#
# entries-in-buffer/entries-written: 97/97   #P:1
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
            head-228   [000] ....   825.534637: __kmalloc ....


Trace all possible functions, and draw a call graph:

.... echo 1 > max_graph_depth
echo 1 > events/enable
echo function_graph > current_tracer ....

Sample output:



.... # CPU  DURATION            FUNCTION CALLS
# |     |   |                   |   |   |   |
0) 2.173 us    | } /* ntp_tick_length */
0)             | timekeeping_update() {
0) 4.176 us    |   ntp_get_next_leap();
0) 5.016 us    |   update_vsyscall();
0)             |   raw_notifier_call_chain() {
0) 2.241 us    |     notifier_call_chain();
0) + 19.879 us |   }
0) 3.144 us    |   update_fast_timekeeper();
0) 2.738 us    |   update_fast_timekeeper();
0) ! 117.147 us | }
0)             | _raw_spin_unlock_irqrestore() {
0) 4.045 us    |   _raw_write_unlock_irqrestore();
0) + 22.066 us | }
0) ! 265.278 us | } /* update_wall_time */ ....

TODO: what do the `+` and `!` annotations in the duration column mean?

Each `enable` file under the `events/` tree enables a certain set of functions: the higher up in the tree, the more functions are enabled.

TODO: can you get function arguments?

===== ftrace system calls

===== trace-cmd

TODO example:

.... ./build-buildroot --config 'BR2PACKAGETRACE_CMD=y' ....

==== Kprobes

kprobes is an instrumentation mechanism that injects arbitrary code at a given address by replacing the target instruction with a trap instruction, much like GDB. Oh, the good old kernel. :-)

.... ./build-linux --config 'CONFIG_KPROBES=y' ....

Then on guest:

.... insmod kprobe_example.ko
sleep 4 & sleep 4 & ....

Outcome: dmesg outputs on every fork:

.... <_do_fork> pre_handler: p->addr = 0x00000000e1360063, ip = ffffffff810531d1, flags = 0x246
<_do_fork> post_handler: p->addr = 0x00000000e1360063, flags = 0x246
<_do_fork> pre_handler: p->addr = 0x00000000e1360063, ip = ffffffff810531d1, flags = 0x246
<_do_fork> post_handler: p->addr = 0x00000000e1360063, flags = 0x246 ....

Source: link:kernel_modules/kprobe_example.c[]

TODO: it does not work if I try to immediately launch `sleep`, why?

.... insmod kprobe_example.ko
sleep 4 & sleep 4 & ....

I don't think your code can refer to the surrounding kernel code however: the only visible thing is the value of the registers.

You can then hack it up to read the stack and read argument values, but do you really want to?

There is also a kprobes + ftrace based mechanism, `kprobe_events`, which does read the memory for us based on format strings that indicate the type... Horrendous. Used by:



==== Count boot instructions

TODO: didn't port during refactor after 3b0a343647bed577586989fb702b760bd280844a. Reimplementing should not be hard.

  • qemu/docs/tracing.txt

Results (boot not excluded) are shown at: xref:table-boot-instruction-counts[xrefstyle=full]

[[table-boot-instruction-counts]]
.Boot instruction counts for various setups
[options="header"]
|===
|Commit |Arch |Simulator |Instruction count

|7228f75ac74c896417fb8c5ba3d375a14ed4d36b |arm |QEMU |680k

|7228f75ac74c896417fb8c5ba3d375a14ed4d36b |arm |gem5 AtomicSimpleCPU |160M

|7228f75ac74c896417fb8c5ba3d375a14ed4d36b |arm |gem5 HPI |155M

|7228f75ac74c896417fb8c5ba3d375a14ed4d36b |x86_64 |QEMU |3M

|7228f75ac74c896417fb8c5ba3d375a14ed4d36b |x86_64 |gem5 AtomicSimpleCPU |528M
|===



.... ./trace-boot --arch x86_64 ....

sample output:

....
instructions 1833863
entry_address 0x1000000
instructions_firmware 20708
....
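As a sanity check on those numbers, the firmware share of the total is only about 1%, so including or excluding it barely matters. Quick arithmetic with the sample values above:

```python
# Sample values from the ./trace-boot output above.
instructions = 1833863
instructions_firmware = 20708

# Firmware runs before the kernel entry point; it inflates the total
# by only about 1%, so it barely affects comparisons between setups.
firmware_fraction = instructions_firmware / instructions
print(f"firmware share: {100 * firmware_fraction:.2f}%")
```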


.... ./run --arch aarch64 --emulator gem5 --eval 'm5 exit'


./run --arch aarch64 --emulator gem5 --eval 'm5 exit' -- --cpu-type=HPI --caches

./gem5-stat --arch aarch64 sim_insts ....


  • 0x1000000
    is the address where QEMU puts the Linux kernel at with the -kernel option
    in x86. + It can be found from: + .... ./run-toolchain readelf -- -e "$(./getvar vmlinux)" | grep Entry .... + TODO confirm further. If I try to break there with: + .... ./run-gdb *0x1000000 .... + but I have no corresponding source line. Also note that this line is not actually the first line, since the kernel messages such as
    early console in extract_kernel
    have already shown on screen at that point. This does not break at all: + .... ./run-gdb extract_kernel .... + It only appears once on every log I've seen so far, checked with
    grep 0x1000000 trace.txt
    + Then when we count the instructions that run before the kernel entry point, there is only about 100k instructions, which is insignificant compared to the kernel boot itself. + TODO
    --arch arm
    --arch aarch64
    does not count firmware instructions properly because the entry point address of the ELF file (
    ) does not show up on the trace at all. Tested on[f8c0502bb2680f2dbe7c1f3d7958f60265347005].
  • We can also discount the instructions after
    runs by using
    to get the initial address of
    . One easy way to do that now is to just run: + .... ./run-gdb --userland "$(./getvar userlandbuilddir)/linux/poweroff.out" main .... + And get that from the traces, e.g. if the address is
    , then we search: + .... grep -n 4003a0 trace.txt .... + I have observed a single match for that instruction, so it must be the init, and there were only 20k instructions after it, so the impact is negligible.
  • to disable networking. Is replacing
    enough? + -- ** ** -- +
    did not significantly reduce instruction counts, so maybe replacing
    is enough.
  • gem5 simulates memory latencies. So I think that the CPU loops idle while waiting for memory, and counts will be higher.

=== Linux kernel hardening

Make it harder to get hacked and easier to notice that you were, at the cost of some (small?) runtime overhead.

[[config-fortify-source]]
==== CONFIG_FORTIFY_SOURCE

Detects buffer overflows for us:

....
./build-linux --config 'CONFIG_FORTIFY_SOURCE=y' --linux-build-id fortify
./build-modules --clean
./build-modules
./build-buildroot
./run --eval-after 'insmod strlen_overflow.ko' --linux-build-id fortify
....

Possible dmesg output:

.... strlen_overflow: loading out-of-tree module taints kernel. detected buffer overflow in strlen ------------[ cut here ]------------ ....

followed by a trace.

You may not get this error because it depends on the unterminated string overflowing at least until the next page: if a random NUL byte appears soon enough, it won't blow up as desired.
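To see why the failure is flaky, model the bytes past the buffer as uniformly random: a NUL shows up after 256 bytes on average, and sweeping a whole 4 KiB page without hitting one is vanishingly unlikely. This back-of-the-envelope arithmetic is ours, not from the kernel docs:

```python
# Each out-of-bounds byte is NUL with probability 1/256 under this model.
p_nul = 1 / 256
expected_overrun = 1 / p_nul          # mean bytes scanned before the first NUL
p_cross_page = (1 - p_nul) ** 4096    # chance of crossing a whole page NUL-free
print(expected_overrun, p_cross_page)
```

Kernel heap contents are anything but uniformly random, which is exactly why the detection sometimes triggers and sometimes does not.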

TODO not always reproducible. Find a more reproducible failure. I could not observe it on:

.... insmod memcpy_overflow.ko ....

Source: link:kernel_modules/strlen_overflow.c[]


==== Linux security modules

===== SELinux

TODO get a hello world permission control working:

....
./build-linux \
  --config-fragment linux_config/selinux \
  --linux-build-id selinux \
;
./build-buildroot --config 'BR2_PACKAGE_REFPOLICY=y'
./run --enable-kvm --linux-build-id selinux
....

Source: link:linux_config/selinux[]

This builds:

  • refpolicy, which provides the reference SELinux policy; refpolicy in turn depends on:
  • setools, which contains tools such as sesearch; setools in turn depends on:
  • libselinux, which is the backing userland library

After boot finishes, we see:

.... Starting auditd: mkdir: invalid option -- 'Z' ....

which comes from the auditd startup script, because BusyBox' `mkdir` does not have the crazy SELinux `-Z` option like Ubuntu's does. That's amazing!

The kernel logs contain:

.... SELinux: Initializing. ....

Inside the guest we now have:

.... getenforce ....

which initially says:

.... Disabled ....

TODO: if we try to enforce:

.... setenforce 1 ....

it does not work and outputs:

.... setenforce: SELinux is disabled ....

SELinux requires glibc as mentioned at: xref:libc-choice[xrefstyle=full].

=== User mode Linux

I once got[UML] running on a minimal Buildroot setup at:

But in part because it is dying, I didn't spend much effort to integrate it into this repo, although it would be a good fit in principle, since it is essentially a virtualization method.

Maybe some brave soul will send a pull request one day.

=== UIO

UIO is a kernel subsystem that allows doing certain types of driver operations from userland.

This would be awesome to improve debuggability and safety of kernel modules.

VFIO looks like a newer and better UIO replacement, but there are hardly any examples of how to use it:

TODO get something interesting working. I currently don't understand the behaviour very well.

TODO how to ACK interrupts? How to ensure that every interrupt gets handled separately?

TODO how to write to registers. Currently using


This example should handle interrupts from userland and print a message to stdout:

.... ./ ....

TODO: what is the expected behaviour? I should have documented this when I wrote this stuff, and I'm that lazy right now that I'm in the middle of a refactor :-)

UIO interface in a nutshell:

  • blocking read / poll: waits until an interrupt arrives
  • write: calls the
    irqcontrol
    callback. Default: write 0 or 1 to disable / enable interrupts.
  • mmap
    : access device memory


  • link:kernel_modules/uio_read.c[]
  • link:rootfs_overlay/lkmc/[]


  • that website has QEMU examples for everything as usual. The example has a kernel-side which creates the memory mappings and is used by the user.
  • userland driver stability questions: ** ** **

=== Linux kernel interactive stuff

[[fbcon]]
==== Linux kernel console fun

Requires <>.

You can also try those on the

of your Ubuntu host, but it is much more fun inside a VM!

Stop the cursor from blinking:

.... echo 0 > /sys/class/graphics/fbcon/cursor_blink ....

Rotate the console 90 degrees!

.... echo 1 > /sys/class/graphics/fbcon/rotate ....

Relies on:


Documented under:


TODO: font and keymap. Mentioned at: and I think can be done with BusyBox

, we just have to understand their formats, related:

==== Linux kernel magic keys

Requires <>.

Let's have some fun.

I think most are implemented under:

.... drivers/tty ....

TODO find all.

Scroll up / down the terminal:

.... Shift-PgDown Shift-PgUp ....

Or inside


.... sendkey shift-pgup sendkey shift-pgdown ....

===== Ctrl Alt Del

If you run in <>:

.... ./run --graphic ....

and then from the graphic window you enter the keys:

.... Ctrl-Alt-Del ....

then this runs the following command on the guest:

.... /sbin/reboot ....

This is enabled from our link:rootfs_overlay/etc/inittab[]:

.... ::ctrlaltdel:/sbin/reboot ....

This leads Linux to try to reboot, and QEMU then shuts down due to the

option which we set by default, see: xref:exit-emulator-on-panic[xrefstyle=full].

Here is a minimal example of Ctrl Alt Del:

.... ./run --kernel-cli 'init=/lkmc/linux/ctrlaltdel.out' --graphic ....

Source: link:userland/linux/ctrlaltdel.c[]

When you hit

in the guest, our tiny init handles a
sent by the kernel and outputs to stdout:

.... cad ....

To map between

man 2 reboot
and the uClibc
magic constants see:

.... less "$(./getvar buildroot_build_build_dir)"/uclibc-*/include/sys/reboot.h ....

The procfs mechanism is documented at:

.... less linux/Documentation/sysctl/kernel.txt ....

which says:

.... When the value in this file is 0, ctrl-alt-del is trapped and sent to the init(1) program to handle a graceful restart. When, however, the value is > 0, Linux's reaction to a Vulcan Nerve Pinch (tm) will be an immediate reboot, without even syncing its dirty buffers.

Note: when a program (like dosemu) has the keyboard in 'raw' mode, the ctrl-alt-del is intercepted by the program before it ever reaches the kernel tty layer, and it's up to the program to decide what to do with it. ....

Under the hood, the behaviour is controlled by the

.... man 2 reboot ....

system call, which can set either of these behaviours for Ctrl-Alt-Del:
  • do an immediate hard reboot, without even syncing dirty buffers. Set in uClibc C code with: + .... reboot(RB_ENABLE_CAD) .... + or from procfs with: + .... echo 1 > /proc/sys/kernel/ctrl-alt-del .... + Done by BusyBox'
    reboot -f
  • send a SIGINT to the init process. This is what BusyBox' init does, and it then execs the string set in the inittab ::ctrlaltdel entry. + Set in uClibc C code with: + .... reboot(RB_DISABLE_CAD) .... + or from procfs with: + .... echo 0 > /proc/sys/kernel/ctrl-alt-del .... + Done by BusyBox'
    reboot

When BusyBox init is hit with the signal, it prints the following lines:

.... The system is going down NOW! Sent SIGTERM to all processes Sent SIGKILL to all processes Requesting system reboot ....

On busybox-1.29.2's init at init/init.c we see how the kill signals are sent:

....
static void run_shutdown_and_kill_processes(void)
{
	/* Run everything to be run at "shutdown".  This is done _prior_
	 * to killing everything, in case people wish to use scripts to
	 * shut things down gracefully... */
	run_actions(SHUTDOWN);

	message(L_CONSOLE | L_LOG, "The system is going down NOW!");

	/* Send signals to every process except pid 1 */
	kill(-1, SIGTERM);
	message(L_CONSOLE, "Sent SIG%s to all processes", "TERM");
	sync();
	sleep(1);

	kill(-1, SIGKILL);
	message(L_CONSOLE, "Sent SIG%s to all processes", "KILL");
	sync();
	/*sleep(1); - callers take care about making a pause */
}
....


run_shutdown_and_kill_processes
is called from:

....
/* The SIGPWR/SIGUSR[12]/SIGTERM handler */
static void halt_reboot_pwoff(int sig) NORETURN;
static void halt_reboot_pwoff(int sig)
....

which also prints the final line:

.... message(L_CONSOLE, "Requesting system %s", m); ....

which is set as the signal handler via TODO.



===== SysRq

We cannot test these actual shortcuts on QEMU since the host captures them at a lower level, but from:

.... ./qemu-monitor ....

we can for example crash the system with:

.... sendkey alt-sysrq-c ....

Same but boring because no magic key:

.... echo c > /proc/sysrq-trigger ....

Implemented in:

.... drivers/tty/sysrq.c ....

On your host, on modern systems that don't have the

key you can do:

.... Alt-PrtSc-space ....

which prints a message to

of type:

.... sysrq: SysRq : HELP : loglevel(0-9) reboot(b) crash(c) terminate-all-tasks(e) memory-full-oom-kill(f) kill-all-tasks(i) thaw-filesystems(j) sak(k) show-backtrace-all-active-cpus(l) show-memory-usage(m) nice-all-RT-tasks(n) poweroff(o) show-registers(p) show-all-timers(q) unraw(r) sync(s) show-task-states(t) unmount(u) show-blocked-tasks(w) dump-ftrace-buffer(z) ....

Individual SysRq can be enabled or disabled with the bitmask:

.... /proc/sys/kernel/sysrq ....

The bitmask is documented at:

.... less linux/Documentation/admin-guide/sysrq.rst ....
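For example, to build a mask that keeps only sync and reboot/poweroff enabled, we can OR the documented bit values; the dict key names here are our own shorthands for the categories listed in sysrq.rst:

```python
# Bit values documented in Documentation/admin-guide/sysrq.rst.
SYSRQ = {
    "loglevel_control": 2,
    "keyboard_control": 4,   # SAK, unraw
    "debug_dumps": 8,        # task states, registers, ...
    "sync": 16,
    "remount_readonly": 32,
    "signal_processes": 64,  # term, kill, oom-kill
    "reboot_poweroff": 128,
    "nice_rt_tasks": 256,
}

mask = SYSRQ["sync"] | SYSRQ["reboot_poweroff"]
print(mask)  # echo this value into /proc/sys/kernel/sysrq
```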


==== TTY

In order to play with TTYs, do this:

....
printf '
tty2::respawn:/sbin/getty -n -L -l /lkmc/ tty2 0 vt100
tty3::respawn:-/bin/sh
tty4::respawn:/sbin/getty 0 tty4
tty63::respawn:-/bin/sh
::respawn:/sbin/getty -L ttyS0 0 vt100
::respawn:/sbin/getty -L ttyS1 0 vt100
::respawn:/sbin/getty -L ttyS2 0 vt100

# Leave one serial empty.
::respawn:/sbin/getty -L ttyS3 0 vt100

' >> rootfs_overlay/etc/inittab
./build-buildroot
./run --graphic -- \
  -serial telnet::1235,server,nowait \
  -serial vc:800x600 \
  -serial telnet::1236,server,nowait \
;
....

and on a second shell:

.... telnet localhost 1235 ....

We don't add more TTYs by default because it would spawn more processes, even if we use

instead of

On the GUI, switch TTYs with:

  • Alt-Left / Alt-Right
    : go to the previous / next populated
    TTY. Skips over empty TTYs.
  • Alt-Fn
    : go to the nth TTY. If it is not populated, don't go there.
  • chvt <n>
    : go to the n-th virtual TTY, even if it is empty:

You can also test this on most hosts such as Ubuntu 18.04, except that when in the GUI, you must use Ctrl-Alt-Fn to switch to another terminal.

Next, we also have the following shells running on the serial ports, hit enter to activate them:

  • /dev/ttyS0
    : first shell that was used to run QEMU, corresponds to QEMU's
    -serial mon:stdio
    . + It would also work if we used
    -serial stdio
    , but: + -- ** Ctrl-C
    would kill QEMU instead of going to the guest **
    Ctrl-A C
    wouldn't open the QEMU console there -- + see also:
  • /dev/ttyS1
    : second shell running
  • /dev/ttyS2
    : go on the GUI and enter
    , corresponds to QEMU's
    -serial vc
    . Go back to the main console with

although we cannot change between terminals from there.

Each populated TTY contains a "shell":

  • -/bin/sh
    : goes directly into an
    sh
    without a login prompt. + The leading dash
    can be used on any command. It makes the command that follows take over the TTY, which is what we typically want for interactive shells: + The
    getty
    executable however also does this operation itself and therefore dispenses with the dash
  • /sbin/getty
    asks for password, and then gives you an
    + We can overcome the password prompt with the
    -l /lkmc/
    technique explained at: but I don't see any advantage over

Identify the current TTY with the command:

.... tty ....



This outputs:

  • /dev/console
    for the initial GUI terminal. But I think it is the same as
    , because if I try to do + .... tty1::respawn:-/bin/sh .... + it makes the terminal go crazy, as if multiple processes are randomly eating up the characters.
  • /dev/ttyN
    for the other graphic TTYs. Note that there are only 63 available ones, from
    is the current one):[]. I think this is determined by: + .... #define MAX_NR_CONSOLES 63 .... + in
  • /dev/ttySN
    for the text shells. + These are Serial ports, see this to understand what those represent physically: + There are only 4 serial ports, I think this is determined by QEMU. TODO check. + See also:

Get the TTY in bulk for all processes:

.... ./ ....

Source: link:rootfs_overlay/lkmc/[].

The TTY appears under the

section, which is enabled by
-o tty
. This shows the TTY device number, e.g.:

.... 4,1 ....

and we can then confirm it with:

.... ls -l /dev/tty1 ....
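The `4,1` pair is the character device's major and minor number: major 4 is the Linux TTY driver, and the minor picks the terminal. Python's stdlib can pack and unpack these device numbers:

```python
import os

# /dev/tty1 is a character device with major 4 (TTY driver), minor 1,
# which ps prints as "4,1".
dev = os.makedev(4, 1)
print(os.major(dev), os.minor(dev))
```

On a real /dev/tty1 the same decoding applies to `os.stat(path).st_rdev`.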

Next try:

.... insmod kthread.ko ....

and switch between virtual terminals, to understand that the dmesg goes to whatever current virtual terminal you are on, but not the others, and not to the serial terminals.



===== Start a getty from outside of init


TODO: how to place an

directly on a TTY as well without

If I try the exact same command that the

is doing from a regular shell after boot:

.... /sbin/getty 0 tty1 ....

it fails with:

.... getty: setsid: Operation not permitted ....

The following however works:

.... ./run --eval 'getty 0 tty1 & getty 0 tty2 & getty 0 tty3 & sleep 99999999' --graphic ....

presumably because it is being called from



This cycles between three TTYs, the first one
being the default one that appears under the boot messages.

man 2 setsid
says that there is only one failure possibility:

EPERM The process group ID of any process equals the PID of the calling process. Thus, in particular, setsid() fails if the calling process is already a process group leader.
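That exact EPERM is easy to reproduce from userland: fork a child, promote it to process group leader with setpgid(), and setsid() then fails, just like getty does when its caller has already made it a group leader. This demo is our own, not from the repo:

```python
import os

def setsid_as_group_leader():
    """Fork a child, promote it to process group leader, then try setsid()."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:
        os.close(r)
        os.setpgid(0, 0)  # make ourselves a process group leader
        try:
            os.setsid()   # must now fail: our PGID equals our PID
            os.write(w, b"ok")
        except PermissionError:
            os.write(w, b"EPERM")
        os._exit(0)
    os.close(w)
    result = os.read(r, 16).decode()
    os.waitpid(pid, 0)
    return result

print(setsid_as_group_leader())
```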

We can get some visibility into it to try and solve the problem with:

.... ./ ....

===== console kernel boot parameter

Take the command described at <> and try adding the following:

  • -e 'console=tty7'
    : boot messages still show on
    (TODO how to change that?), but we don't get a shell at the end of boot there. + Instead, the shell appears on
  • -e 'console=tty2'
    , but
    is broken, because we have two shells there: ** one due to the
    entry which uses whatever
    points to ** another one due to the
    entry we added
  • -e 'console=ttyS0'
    much like
    , but messages show only on serial, and the terminal is broken due to having multiple shells on it
  • -e 'console=tty1 console=ttyS0'
    : boot messages show on both
    , but only
    gets a shell because it came last

[[config-logo]]
==== CONFIG_LOGO

If you run in <>, then you get a Penguin image for <> above the console!

This is due to the[

] option which we enable by default.

on the terminal then kills the poor penguins.


When CONFIG_LOGO is set, the logo can be disabled at boot with:

.... ./run --kernel-cli 'logo.nologo' ....


Looks like a recompile is needed to modify the image...


=== DRM

DRM / DRI is the new interface that supersedes fbdev:


....
./build-buildroot --config 'BR2_PACKAGE_LIBDRM=y'
./build-userland --package libdrm -- userland/libs/libdrm/modeset.c
./run --eval-after './libs/libdrm/modeset.out' --graphic
....

Source: link:userland/libs/libdrm/modeset.c[]

Outcome: for a few seconds, the screen that contains the terminal gets taken over by changing colors of the rainbow.

TODO not working for

, it takes over the screen for a few seconds and the kernel messages disappear, but the screen stays black all the time.

....
./build-buildroot --config 'BR2_PACKAGE_LIBDRM=y'
./build-userland --package libdrm
./build-buildroot
./run --eval-after './libs/libdrm/modeset.out' --graphic
....

<> however worked, which means that it must be a bug with this demo?

We set

on our default kernel configuration, and it creates one device file for each display:


....
ls -l /dev/dri
total 0
crw------- 1 root root 226, 0 May 28 09:41 card0
grep 226 /proc/devices
226 drm
ls /sys/module/drm /sys/module/drm_kms_helper/
....


Try creating new displays:

.... ./run --arch aarch64 --graphic -- -device virtio-gpu-pci ....

to see multiple cards under /dev/dri, and then use a different display with:

.... ./run --eval-after './libs/libdrm/modeset.out' --graphic ....


  • KMS

Tested on:[93e383902ebcc03d8a7ac0d65961c0e62af9612b]

==== kmscube

.... ./build-buildroot --config-fragment buildroot_config/kmscube ....

Outcome: a colored spinning cube coded in OpenGL + EGL takes over your display and spins forever:

It is a bit amusing to see OpenGL running outside of a window manager window like that:

TODO: it is very slow, about 1FPS. I tried Buildroot master ad684c20d146b220dd04a85dbf2533c69ec8ee52 with:


and the FPS was much better, I estimate something like 15FPS.

On Ubuntu 18.04 with NVIDIA proprietary drivers:

....
sudo apt-get install kmscube
kmscube
....

fails with:

.... drmModeGetResources failed: Invalid argument failed to initialize legacy DRM ....

See also:

  • and:

Tested on:[2903771275372ccfecc2b025edbb0d04c4016930]

==== kmscon

TODO get working.

Implements a console for <>.

The Linux kernel has a built-in fbdev console called <> but not for <> it seems.

The upstream project seems dead with last commit in 2014:

Build failed in Ubuntu 18.04 with: but this fork compiled but didn't run on host:

Haven't tested the fork on QEMU: too much insanity.

==== libdri2

TODO get working.

Looks like a more raw alternative to libdrm:

....
./build-buildroot --config 'BR2_PACKAGE_LIBDRI2=y'
wget \
  -O "$(./getvar userland_source_dir)/dri2_test.c" \ \
;
./build-userland
....

but then I noticed that that example requires multiple files, and I don't feel like integrating it into our build.

When I build it on Ubuntu 18.04 host, it does not generate any executable, so I'm confused.

=== Linux kernel testing


==== Linux Test Project

Tests a lot of Linux and POSIX userland visible interfaces.

Buildroot already has a package, so it is trivial to build it:

.... ./build-buildroot --config 'BR2_PACKAGE_LTP_TESTSUITE=y' ....

So now let's try and see if the

system call is working:

.... /usr/lib/ltp-testsuite/testcases/bin/exit01 ....

which gives successful output:

.... exit01 1 TPASS : exit() test PASSED ....

and has source code at:

Besides testing any kernel modifications you make, LTP can also be used to test the system call implementation of <> as shown at <>:

.... ./run --userland "$(./getvar buildroot_target_dir)/usr/lib/ltp-testsuite/testcases/bin/exit01" ....

Tested at: 287c83f3f99db8c1ff9bbc85a79576da6a78e986 + 1.

==== stress

<> userland stress. Two versions:

....
./build-buildroot \
  --config 'BR2_PACKAGE_STRESS=y' \
  --config 'BR2_PACKAGE_STRESS_NG=y' \
;
....

stress-ng is likely the best, but it requires glibc, see: xref:libc-choice[xrefstyle=full].




.... stress --help stress -c 16 & ps ....

and notice how 16 worker processes were created in addition to the parent process.

It just runs forever, so kill it when you get tired:

.... kill %1 ....

stress -c 1 -t 1
makes gem5 unresponsive for a very long time.

=== Linux kernel build system

==== vmlinux vs bzImage vs zImage vs Image

Between all archs on QEMU and gem5 we touch all of those kernel built output files.

We are trying to maintain a description of each at:

QEMU does not seem able to boot ELF files like vmlinux:



images to
is possible in theory on x86 with[
] but we didn't get any gem5 boots working from images generated like that for some reason, see:

=== Virtio

Virtio is an interface that guest machines can use to efficiently use resources from host machines.

The types of resources it supports include disks and networking hardware.

This interface is not like the real interface used by the host to read from real disks and network devices.

Rather, it is a simplified interface, that makes those operations simpler and faster since guest and host work together knowing that this is an emulation use case.

=== Kernel modules

[[dump-regs]]
==== dump_regs

The following kernel modules and <> executables dump and disassemble various registers which cannot be observed from userland (usually "system registers", "control registers"):

  • link:kernel_modules/dump_regs.c[]
  • link:userland/arch/arm/dump_regs.c[]
  • link:userland/arch/aarch64/dump_regs.c[]
  • link:baremetal/arch/arm/dump_regs.c[]
  • link:baremetal/arch/aarch64/dump_regs.c[]

Some of those programs are using:

  • link:lkmc/aarch64_dump_regs.h[]

Alternatively, you can also get their value from inside <> with:

.... info registers all ....

or the short version:

.... i r a ....

or to get just specific registers, e.g. just ARMv8's SCTLR:

.... i r SCTLR ....

but it is sometimes just more convenient to run an executable to get the registers at the point of interest.

See also:


== FreeBSD

Prebuilt on Ubuntu 20.04 worked:[]

TODO minimal build + boot on QEMU example anywhere???


=== Zephyr

Zephyr is an RTOS that has <> support. I think it works much like our <> which uses Newlib and generates individual ELF files that contain both our C program's code, and the Zephyr libraries.

TODO get a hello world working, and then consider further integration in this repo, e.g. being able to run all C userland content on it.

TODO: Cortex-A CPUs are not currently supported, there are some

boards, but can't find a QEMU Cortex-A. There is an x86_64 qemu board, but we don't currently have an <>. For this reason, we won't touch this further for now.

However, unlike Newlib, Zephyr must be setting up a simple pre-main runtime to be able to handle threads.

Failed attempt:


....
wget -O - 2>/dev/null | sudo apt-key add -
sudo apt-add-repository 'deb bionic-rc main'
sudo apt-get update
sudo apt-get install cmake
git clone
pip3 install --user -U west packaging
cd zephyr
git checkout v1.14.1
west init zephyrproject
west update
export ZEPHYR_TOOLCHAIN_VARIANT=xtools
export XTOOLS_TOOLCHAIN_PATH="$(pwd)/out/crosstool-ng/build/default/install/aarch64/bin/"
source
west build -b qemu_aarch64 samples/hello_world
....

The build system of that project is a bit excessive / wonky. You need a newer CMake than the one shipped in Ubuntu 18.04, which I don't want to install right now, and it uses the weird custom
west
build tool frontend.

=== ARM Mbed

TODO minimal setup to run it on QEMU? Possible?

== Xen

TODO: get prototype working and then properly integrate:

.... ./build-xen ....

Source: link:build-xen[]

This script attempts to build Xen for aarch64 and feed it into QEMU through link:submodules/boot-wrapper-aarch64[]

TODO: other archs not yet attempted.

The current bad behaviour is that it prints just:

.... Boot-wrapper v0.2 ....

and nothing else.

We will also need

on the Linux kernel, but first Xen should print some Xen messages before the kernel is ever reached.

If we pass to QEMU the xen image directly instead of the boot wrapper one:

.... -kernel ../xen/xen/xen ....

then Xen messages do show up! So it seems that the configuration failure lies in the boot wrapper itself rather than Xen.

Maybe it is also possible to run Xen directly like this: QEMU can already load multiple images at different memory locations with the generic loader: which looks something along:

.... -kernel file1.elf -device loader,file=file2.elf ....

so as long as we craft the correct DTB and feed it into Xen so that it can see the kernel, it should work. TODO does QEMU support patching the auto-generated DTB with pre-generated options? In the worst case we can just dump it and hack it up though with

-machine dumpdtb
, see: xref:device-tree-emulator-generation[xrefstyle=full].


  • this attempt was based on: which is the documentation for the ARM Fast Models closed source simulators.
  • this is the only QEMU aarch64 Xen page on the web. It uses the Ubuntu aarch64 image, which has EDK2. + I however see no joy on blobs. Buildroot does not seem to support EDK 2.


== U-Boot

U-Boot is a popular bootloader.

It can read disk filesystems, and Buildroot supports it, so we could in theory put it into memory, and let it find a kernel image from the root filesystem and boot that, but I didn't manage to get it working yet:

== Emulators

  • <>
  • <>
  • <>


=== Introduction to QEMU[QEMU] is a system simulator: it simulates a CPU and devices such as interrupt handlers, timers, UART, screen, keyboard, etc.

If you are familiar with[VirtualBox], then QEMU basically does the same thing: it opens a "window" inside your desktop that can run an operating system inside your operating system.

Also both can use very similar techniques: either <> or <>. VirtualBox' binary translator is / was based on QEMU's it seems:

The huge advantage of QEMU over VirtualBox is that it supports cross arch simulation, e.g. simulating an ARM guest on an x86 host.

QEMU is likely the leading cross arch system simulator as of 2018. It is even the default <> simulator that developers get with Android Studio 3 to develop apps without real hardware.

Another advantage of QEMU over VirtualBox is that it doesn't have Oracle's hands all over it: it is more like Red Hat + ARM.

Another advantage of QEMU is that it has no nice configuration GUI. Because who needs GUIs when you have 50 million semi-documented CLI options? Android Studio adds a custom GUI configuration tool on top of it.

QEMU is also supported by Buildroot in-tree, see e.g.: We however just build our own manually with link:build-qemu[], as it gives more flexibility, and building QEMU is very easy!

All of this makes QEMU the natural choice of reference system simulator for this repo.

=== Binary translation

Used by <> and <>.

=== Disk persistency

We disable disk persistency for both QEMU and gem5 by default, to prevent the emulator from putting the image in an unknown state.

For QEMU, this is done by passing the

option to
, and for gem5 it is the default behaviour.

If you hack up our link:run[] script to remove that option, then:

.... ./run --eval-after 'date >f;poweroff' ....


followed by:

.... ./run --eval-after 'cat f' ....

gives the date, because
poweroff
syncs before shutdown.

The
sync
command also saves the disk:

.... sync ....

When you do:

.... ./build-buildroot ....

the disk image gets overwritten by a fresh filesystem and you lose all changes.

Remember that if you forcibly turn QEMU off without running
poweroff
from inside the VM, e.g. by closing the QEMU window, disk changes may not be saved.

Persistency is also turned off when booting from <> with a CPIO instead of with a disk.

Disk persistency is useful to re-run shell commands from the history of a previous session with

, but we felt that the loss of determinism was not worth it.

==== gem5 disk persistency

TODO how to make gem5 disk writes persistent?

As of cadb92f2df916dbb47f428fd1ec4932a2e1f0f48 there are some

entries in the <> under cow sections, but hacking them to true did not work:

....
diff --git a/configs/common/ b/configs/common/
index 17498c42b..76b8b351d 100644
--- a/configs/common/
+++ b/configs/common/
@@ -60,7 +60,7 @@ os_types = {
     'alpha' : [ 'linux' ],
 }

 class CowIdeDisk(IdeDisk):
-    image = CowDiskImage(child=RawDiskImage(read_only=True),
+    image = CowDiskImage(child=RawDiskImage(read_only=False),
                          read_only=False)

     def childImage(self, ci):
....
The directory of interest is


=== gem5 qcow2

qcow2 does not appear supported, there are not hits in the source tree, and there is a mention on Nate's 2009 wishlist:

This would be good to allow storing smaller sparse ext2 images locally on disk.

=== Snapshot

QEMU allows us to take snapshots at any time through the monitor.

You can then restore CPU, memory and disk state back at any time.

qcow2 filesystems must be used for that to work.

To test it out, log in to the VM and run:

.... ./run --eval-after 'umount /mnt/9p/*;./' ....

On another shell, take a snapshot:

.... ./qemu-monitor savevm my_snap_id ....

The counting continues.

Restore the snapshot:

.... ./qemu-monitor loadvm my_snap_id ....

and the counting goes back to where we saved. This shows that CPU and memory states were reverted.


is needed because snapshotting conflicts with <<9p>>, which we felt is a more valuable default. If you forget to unmount, the following error appears on the QEMU monitor:

..... Migration is disabled when VirtFS export path '/linux-kernel-module-cheat/out/x86_64/buildroot/build' is mounted in the guest using mount_tag 'host_out' .....

We can also verify that the disk state is also reverted. Guest:

.... echo 0 >f ....

Monitor:

.... ./qemu-monitor savevm my_snap_id ....

Guest:

.... echo 1 >f ....

Monitor:

.... ./qemu-monitor loadvm my_snap_id ....

Guest:

.... cat f ....

And the output is 0.

Our setup does not allow for snapshotting while using <>.


==== Snapshot internals

Snapshots are stored inside the

images themselves.

They can be observed with:

.... "$(./getvar buildroot_host_dir)/bin/qemu-img" info "$(./getvar qcow2_file)" ....

which after

savevm my_snap_id
savevm asdf

contains an output of type:

....
image: out/x86_64/buildroot/images/rootfs.ext2.qcow2
file format: qcow2
virtual size: 512M (536870912 bytes)
disk size: 180M
cluster_size: 65536
Snapshot list:
ID        TAG                 VM SIZE                DATE       VM CLOCK
1         my_snap_id              47M 2018-04-27 21:17:50   00:00:15.251
2         asdf                    47M 2018-04-27 21:20:39   00:00:18.583
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
....
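If you want to script cleanup of stale snapshots, that table is easy to scrape. This parser is our own helper and assumes the exact `qemu-img info` layout shown above:

```python
def parse_snapshots(info_text):
    """Extract (id, tag) pairs from `qemu-img info` output."""
    snaps = []
    in_list = False
    for line in info_text.splitlines():
        if line.startswith("Snapshot list:"):
            in_list = True
        elif in_list:
            fields = line.split()
            if not fields or fields[0] == "ID":
                continue  # skip the header row
            if not fields[0].isdigit():
                break  # end of the table, e.g. "Format specific information:"
            snaps.append((fields[0], fields[1]))
    return snaps

sample = """Snapshot list:
ID        TAG                 VM SIZE                DATE       VM CLOCK
1         my_snap_id              47M 2018-04-27 21:17:50   00:00:15.251
2         asdf                    47M 2018-04-27 21:20:39   00:00:18.583
Format specific information:"""
print(parse_snapshots(sample))
```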

As a consequence:

  • it is possible to restore snapshots across boots, since they stay on the same image the entire time
  • it is not possible to use snapshots with <> in our setup, since we don't pass
    at all when initrd is enabled

=== Device models

This section documents:

  • how to interact with peripheral hardware device models through device drivers
  • how to write your own hardware device models for our emulators, see also:

For the more complex interfaces, we focus on simplified educational devices, either:

  • present in the QEMU upstream: ** <>

==== PCI

Only tested in x86.

[[qemu-edu]]
===== QEMU edu PCI device

Small upstream educational PCI device:

.... ./ ....

This tests a lot of features of the edu device. To understand the results, compare the inputs with the documentation of the hardware:


  • kernel module: link:kernel_modules/qemu_edu.c[]
  • QEMU device:
  • test script: link:rootfs_overlay/lkmc/[]

Works because we add to our default QEMU CLI:

.... -device edu ....

This example uses:

  • the QEMU
    educational device, which is a minimal educational in-tree PCI example
  • the
    kernel module, which exercises the
    hardware. + I've contacted the awesome original author of
    edu[Jiri Slaby], and he told me there is no official kernel module example because this was created for a kernel module university course that he gives, and he didn't want to give away answers.[I don't agree with that philosophy], so students, cheat away with this repo and go make startups instead.

TODO exercise DMA on the kernel module. The edu hardware model has that feature:

===== Manipulate PCI registers directly

In this section we will try to interact with PCI devices directly from userland without kernel modules.

First identify the PCI device with:

.... lspci ....

In our case for example, we see:

....
00:06.0 Unclassified device [00ff]: Device 1234:11e8 (rev 10)
00:07.0 Unclassified device [00ff]: Device 1234:11e9
....

which we identify as being <> by the magic vendor:device number `1234:11e8`.


Alternatively, we can also use the QEMU monitor:

.... ./qemu-monitor info qtree ....

which gives:

....
dev: edu, id ""
  addr = 06.0
  romfile = ""
  rombar = 1 (0x1)
  multifunction = false
  command_serr_enable = true
  x-pcie-lnksta-dllla = true
  x-pcie-extcap-init = true
  class Class 00ff, addr 00:06.0, pci id 1234:11e8 (sub 1af4:1100)
  bar 0: mem at 0xfea00000 [0xfeafffff]
....

See also:

Read the configuration registers as binary:

.... hexdump /sys/bus/pci/devices/0000:00:06.0/config ....
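For reference, the first bytes of the standard PCI configuration header are the vendor and device IDs, stored little-endian. A small Python sketch of decoding them; the byte string is a fabricated example matching the edu device's `1234:11e8`, not a real config space dump:

```python
# Decode vendor and device ID from the start of PCI config space.
# Offsets come from the standard PCI configuration header:
# 0x00: vendor ID (16-bit LE), 0x02: device ID (16-bit LE).
import struct

def pci_ids(config_bytes):
    vendor, device = struct.unpack_from('<HH', config_bytes, 0)
    return vendor, device

# Fabricated 64-byte header whose first 4 bytes encode 1234:11e8.
config = bytes([0x34, 0x12, 0xe8, 0x11]) + bytes(60)
vendor, device = pci_ids(config)
print('%04x:%04x' % (vendor, device))
```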

Get nice human readable names and offsets of the registers and some enums:

.... setpci --dumpregs ....

Get the value of a given config register from its human readable name, with either the bus or the device id:

....
setpci -s 0000:00:06.0 BASE_ADDRESS_0
setpci -d 1234:11e8 BASE_ADDRESS_0
....

Note however that the same address also appears when you do:

....
lspci -v
....

as:

....
Memory at feb54000
....

Then you can try messing with that address with <>:

.... devmem 0xfeb54000 w 0x12345678 ....

which writes to the first register of the edu device.

The device then fires an interrupt at irq 11, which is unhandled, which leads the kernel to say you are a bad person:

.... <3>[ 1065.567742] irq 11: nobody cared (try booting with the "irqpoll" option) ....

followed by a trace.

Next, also try using our <> IRQ monitoring module before triggering the interrupt:

.... insmod irq.ko devmem 0xfeb54000 w 0x12345678 ....

Our kernel module handles the interrupt, but does not acknowledge it like our proper edu kernel module, and so it keeps firing, which leads to infinitely many messages being printed:

.... handler irq = 11 dev = 251 ....

===== pciutils

There are two versions of `lspci`:

  • a simple one from BusyBox
  • a more complete one from[pciutils], which Buildroot has a package for, and which is the default on an Ubuntu 18.04 host. This is the one we enable by default.

===== Introduction to PCI

The PCI standard is non-free, obviously like everything in low level: but Google gives several illegal PDF hits :-)

And of course, the best documentation available is:

As with any other hardware, we could interact with PCI on x86 using only IO instructions and memory operations.

But PCI is a complex communication protocol that the Linux kernel implements beautifully for us, so let's use the kernel API.


  • edu device source and spec in QEMU tree: ** **
  • inb outb runnable example (no device)
  • LDD3 PCI chapter
  • another QEMU device + module, but using a custom QEMU device: ** **
  • course given by the creator of the edu device. In Czech, and only describes API

===== PCI BDF

`lspci -k` shows something like:

.... 00:04.0 Class 00ff: 1234:11e8 lkmc_pci ....

Meaning of the first numbers:

.... <8:bus>:<5:device>.<3:function> ....

Often abbreviated to BDF.

  • bus: groups PCI slots
  • device: maps to one slot
  • function: a single physical device can expose multiple independent logical functions

Sometimes a fourth number is also added, e.g.:

.... 0000:00:04.0 ....

TODO is that the domain?

Class: pure magic: TODO: does it have any side effects? Set in the edu device at:

....
k->class_id = PCI_CLASS_OTHERS
....

===== PCI BAR

Each PCI device has 6 BARs (base address registers) as per the PCI spec.

Each BAR corresponds to an address range that can be used to communicate with the PCI device.

Each BAR is of one of two types:

  • I/O port BARs: must be accessed with special instructions such as `in` and `out`
  • memory BARs: must be accessed with regular memory operations. This is the saner method apparently, and what the edu device uses.

The length of each region is defined by the hardware, and communicated to software via the configuration registers.

The Linux kernel automatically parses the 64 bytes of standardized configuration registers for us.

QEMU devices register those regions with:

....
memory_region_init_io(&edu->mmio, OBJECT(edu), &edu_mmio_ops, edu,
                "edu-mmio", 1 << 20);
pci_register_bar(pdev, 0, PCI_BASE_ADDRESS_SPACE_MEMORY, &edu->mmio);
....
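On the software side, a raw 32-bit BAR value can be decoded as per the standard PCI header layout: bit 0 selects I/O vs memory space, and for memory BARs bits 1-2 give the type and bit 3 the prefetchable attribute. A minimal Python sketch, using the edu BAR 0 value `0xfea00000` from the qtree output above as the example:

```python
# Decode a raw 32-bit BAR value as laid out in the PCI specification.
def decode_bar(raw):
    if raw & 0x1:
        # I/O space BAR: bits 2+ are the base.
        return {'space': 'io', 'base': raw & ~0x3}
    return {
        'space': 'mem',
        # 0b00: 32-bit, 0b10: 64-bit (this word is the low half).
        'type': (raw >> 1) & 0x3,
        'prefetchable': bool(raw & 0x8),
        'base': raw & ~0xf,
    }

# The edu device's BAR 0 from the qtree output above:
# a 32-bit, non-prefetchable memory BAR at 0xfea00000.
print(decode_bar(0xfea00000))
```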

==== GPIO

TODO: broken. Was working before we moved from `-M versatilepb` to `-M virt` around af210a76711b7fa4554dcc2abd0ddacfc810dfd4. Either make it work on `-M virt` if that is possible, or document precisely how to make it work with `-M versatilepb`, or hopefully a newer machine.

QEMU does not have a very nice mechanism to observe GPIO activity:

The best you can do is to hack our link:build[] script to add:

....
HOST_QEMU_OPTS='--extra-cflags=-DDEBUG_PL061=1'
....

where[PL061] is the dominating ARM Holdings hardware that handles GPIO.

Then compile with:

....
./build-buildroot --arch arm --config-fragment buildroot_config/gpio
./build-linux --config-fragment linux_config/gpio
....

then test it out with:

.... ./ ....

Source: link:rootfs_overlay/lkmc/[]

Buildroot's Linux tools package provides some GPIO CLI tools, TODO document them here.

==== LEDs

TODO: broken when we moved to `-M virt`, same as <>.

Hack QEMU's `hw/misc/arm_sysctl.c` with a printf:

....
static void arm_sysctl_write(void *opaque, hwaddr offset,
                             uint64_t val, unsigned size)
{
    arm_sysctl_state *s = (arm_sysctl_state *)opaque;

    switch (offset) {
    case 0x08: /* LED */
        printf("LED val = %llx\n", (unsigned long long)val);
....


and then rebuild with:

....
./build-qemu --arch arm
./build-linux --arch arm --config-fragment linux_config/leds
....

But beware that one of the LEDs has a heartbeat trigger by default (specified in the dts), so it will produce a lot of output.

And then activate it with:

....
cd /sys/class/leds/versatile:0
cat max_brightness
echo 255 >brightness
....

Relevant QEMU files:

  • hw/arm/versatilepb.c
  • hw/misc/arm_sysctl.c

Relevant kernel files:

  • arch/arm/boot/dts/versatile-pb.dts
  • drivers/leds/led-class.c
  • drivers/leds/leds-sysctl.c

==== gem5 educational hardware models

TODO get some working!

=== QEMU monitor

The QEMU monitor is a magic terminal that allows you to send text commands to the QEMU VM itself:

While QEMU is running, on another terminal, run:

.... ./qemu-monitor ....

or send one command such as `info qtree` and quit the monitor:

.... ./qemu-monitor info qtree ....

or equivalently:

.... echo 'info qtree' | ./qemu-monitor ....

Source: link:qemu-monitor[]

It uses the `-monitor` QEMU command line option, which makes the monitor listen on a socket.

Alternatively, we can also enter the QEMU monitor from inside

<> with:

.... Ctrl-A C ....

and go back to the terminal with:

.... Ctrl-A C ....


When in graphic mode, we can do it from the GUI:

....
Ctrl-Alt ?
....

where `?` is a digit `1`, `2`, etc. depending on what else is available on the GUI: serial, parallel and frame buffer.

Finally, we can also access QEMU monitor commands directly from <> with the `monitor` command:

....
./run-gdb
....

then inside that shell:

.... monitor info qtree ....

This way you can use both QEMU monitor and GDB commands to inspect the guest from inside a single shell! Pretty awesome.

In general, `./qemu-monitor` is the best option, as it:

  • works on both modes
  • allows you to use the host Bash history to re-run one-off commands
  • allows you to search the output of commands on your host shell even when in graphic mode

Getting everything to work required careful choice of QEMU command line options:


==== QEMU monitor from guest

Peter Maydell said this is likely not possible to do nicely as of August 2018:

It is also worth looking into the QEMU Guest Agent tool, which can be enabled with:

....
./build-buildroot --config 'BR2_PACKAGE_QEMU=y'
....

See also:

==== QEMU monitor from GDB

When doing <> it is possible to send QEMU monitor commands through the GDB `monitor` command, which saves you the trouble of opening yet another shell.

Try for example:

....
monitor help
monitor info qtree
....

=== Debug the emulator

When you start hacking QEMU or gem5, it is useful to see what is going on inside the emulators themselves.

This is of course trivial since they are just regular userland programs on the host, but we make it a bit easier with:

.... ./run --debug-vm ....

Or for a faster development loop you can pass GDB commands as a semicolon separated list:

....
./run --debug-vm-ex 'break qemu_add_opts;run'
....

which is equivalent to the more verbose:

....
./run --debug-vm-args '-ex "break qemu_add_opts" -ex "run"'
....

if you ever need anything besides `-ex`.

Or if things get really involved and you want a debug script:

....
printf 'break qemu_add_opts
run
' > data/vm.gdb
./run --debug-vm-file data/vm.gdb
....

Our default emulator builds are optimized with `gcc -O2 -g`. To use `-O0` instead, build and run with:

....
./build-qemu --qemu-build-type debug --verbose
./run --debug-vm
./build-gem5 --gem5-build-type debug --verbose
./run --debug-vm --emulator-gem5
....


`--verbose` is optional, but clearly shows each GCC build command so that you can confirm what the build is doing.

The build outputs are automatically stored in different directories for optimized and debug builds, which prevents debug files from overwriting the optimized ones. Therefore, no clean is required when switching between build types.

The price to pay for debuggability is high however: a Linux kernel boot was about 3x slower in QEMU and 14 times slower in gem5 debug compared to opt, see benchmarks at: xref:benchmark-linux-kernel-boot[xrefstyle=full].

Similar slowdowns can be observed at: xref:benchmark-emulators-on-userland-executables[xrefstyle=full].

When in <>, using `--debug-vm` makes Ctrl-C not get passed to the QEMU guest anymore: it is instead captured by GDB itself, to allow breaking. So e.g. you won't be able to easily quit from a guest program like:

.... sleep 10 ....

In graphic mode, make sure that you never click inside the QEMU graphic while debugging, otherwise your mouse gets captured forever, and the only solution I can find is to go to a TTY with e.g. Ctrl-Alt-F1 and kill QEMU from there.

You can still send key presses to QEMU however even without the mouse capture, just either click on the title bar, or alt tab to give it focus.

==== Reverse debug the emulator

While step debugging any complex program, you always end up feeling the need to step in reverse to reach the last call to some function that was called before the failure point, in order to trace back the problem to the actual bug source.

While GDB "has" this feature, it is just too broken to be usable, and so we expose the amazing Mozilla RR tool conveniently in this repo:

Before the first usage setup rr with:

....
echo 'kernel.perf_event_paranoid=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
....

Then use it with your content of interest, for example:

.... ./run --debug-vm-rr --userland userland/c/hello.c ....

This will:

  • first run the program once until completion or crash
  • then restart the program at the very first instruction and leave you in a GDB shell

From there, run the program until your point of interest, e.g.:

....
break qemu_add_opts
continue
....

and you can now reliably use reverse debugging commands such as `reverse-next` and `reverse-continue`.


To restart debugging again after quitting GDB, simply run on your host terminal:

.... rr replay ....

The use case of `rr` is often to go to the final crash and then walk back from there, so you often want to automate running until the end after the record, as in:

.... ./run --debug-vm-args='-ex continue' --debug-vm-rr --userland userland/c/hello.c ....

Programs often tend to blow up in very low frames that use values passed in from higher frames. In those cases, remember that just like with forward debugging, you can't just go:

....
up
up
up
reverse-next
....

but rather, you must:

....
reverse-finish
reverse-finish
reverse-finish
reverse-next
....

==== Debug gem5 Python scripts

Start pdb at the first instruction:

....
./run --emulator gem5 --gem5-exe-args='--pdb' --terminal
....

`--terminal` is needed as we must be on the foreground.

Alternatively, you can add to the point of the code where you want to break the usual:

.... import ipdb; ipdb.set_trace() ....

and then run with:

.... ./run --emulator gem5 --terminal ....

TODO test PyCharm:

=== Tracing

QEMU can log several different events.

The most interesting are events which show instructions that QEMU ran, for which we have a helper:

.... ./trace-boot --arch x86_64 ....

Under the hood, this uses QEMU's `exec_tb` trace event:


You can then inspect the address of each instruction run:

....
less "$(./getvar --arch x86_64 run_dir)/trace.txt"
....

Sample output excerpt:

....
exec_tb 0.000 pid=10692 tb=0x7fb4f8000040 pc=0xfffffff0
exec_tb 35.391 pid=10692 tb=0x7fb4f8000180 pc=0xfe05b
exec_tb 21.047 pid=10692 tb=0x7fb4f8000340 pc=0xfe066
exec_tb 12.197 pid=10692 tb=0x7fb4f8000480 pc=0xfe06a
....

Get the list of available trace events:

.... ./run --trace help ....

TODO: any way to show the actual disassembled instruction executed directly from there? Possible with <>.

Enable other specific trace events:

....
./run --trace trace1,trace2
./qemu-trace2txt -a "$arch"
less "$(./getvar -a "$arch" run_dir)/trace.txt"
....

This functionality relies on the following setup:

  • ./configure --enable-trace-backends=simple
    . This logs in a binary format to the trace file, which makes execution about 3x faster than the default trace backend, which logs human readable data to stdout. Logging with the default backend greatly slows down the CPU, and in particular leads to this boot message:
    +
....
All QSes seen, last rcu_sched kthread activity 5252 (4294901421-4294896169), jiffies_till_next_fqs=1, root ->qsmask 0x0
swapper/0       R running task        0     1      0 0x00000008
 ffff880007c03ef8 ffffffff8107aa5d ffff880007c16b40 ffffffff81a3b100
 ffff880007c03f60 ffffffff810a41d1 0000000000000000 0000000007c03f20
 fffffffffffffedc 0000000000000004 fffffffffffffedc ffffffff00000000
Call Trace:
 [] sched_show_task+0xcd/0x130
 [] rcu_check_callbacks+0x871/0x880
 [] update_process_times+0x2f/0x60
....
    +
    in which the boot appears to hang for a considerable time.
  • patch QEMU source to remove the
    in the
    file. See also:

==== QEMU -d tracing

QEMU also has a second trace mechanism in addition to the trace events; find out the available events with:

.... ./run -- -d help ....

Let's pick the one that dumps executed instructions, `in_asm`:


....
./run --eval './linux/poweroff.out' -- -D out/trace.txt -d in_asm
less out/trace.txt
....

Sample output excerpt:


....
IN:
0xfffffff0:  ea 5b e0 00 f0           ljmpw    $0xf000:$0xe05b

IN:
0x000fe05b:  2e 66 83 3e 88 61 00     cmpl     $0, %cs:0x6188
0x000fe062:  0f 85 7b f0              jne      0xd0e1
....

TODO: symbol names are meant to show as well, which is awesome, but I don't get any. I do see them however when running a bare metal example from:

TODO: what is the point of having two mechanisms? `-d` tracing is cool because it does not require a messy recompile, and it can also show symbols.

==== QEMU trace register values

TODO: is it possible to show the register values for each instruction?

This would include the memory values read into the registers.

Asked at:

Seems impossible due to optimizations that QEMU does:


PANDA can list memory addresses, so I bet it can also decode the instructions: I wonder why they don't just upstream those things to QEMU's tracing:

gem5 can do it as shown at: xref:gem5-tracing[xrefstyle=full].

==== QEMU trace memory accesses

Not possible apparently, not even with the memory trace events. Peter comments:

No. You will miss all the fast-path memory accesses, which are done with custom generated assembly in the TCG backend. In general QEMU is not designed to support this kind of monitoring of guest operations.

Related question:

==== Trace source lines

We can further use Binutils' `addr2line` to get the line that corresponds to each address:

....
./trace-boot --arch x86_64
./trace2line --arch x86_64
less "$(./getvar --arch x86_64 run_dir)/trace-lines.txt"
....

The last command takes several seconds.

The format is as follows:

....
39368 _static_cpu_has arch/x86/include/asm/cpufeature.h:148
....


  • 39368: number of consecutive times that a line ran. Makes the output much shorter and more meaningful
  • _static_cpu_has: name of the function that contains the line
  • arch/x86/include/asm/cpufeature.h:148: file and line

This could of course all be done with GDB, but it would likely be too slow to be practical.
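The consecutive-run counting above is just run-length encoding over the per-instruction stream of resolved source lines. A Python sketch; the `start_kernel` entry in the input is a hypothetical example, not taken from a real trace:

```python
# Run-length encode consecutive identical source lines, producing
# "count function file:line" records like trace-lines.txt above.
from itertools import groupby

def compress(resolved_lines):
    return [(len(list(group)), key) for key, group in groupby(resolved_lines)]

# Hypothetical addr2line output: (function, file:line) per instruction.
stream = [
    ('_static_cpu_has', 'arch/x86/include/asm/cpufeature.h:148'),
] * 3 + [
    ('start_kernel', 'init/main.c:600'),  # hypothetical entry
]
for count, (func, loc) in compress(stream):
    print(count, func, loc)
```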

TODO do even more awesome offline post-mortem analysis things, such as:

  • detect if we are in userspace or kernelspace. Should be a simple matter of reading the
  • read kernel data structures, and determine the current thread. Maybe we can reuse / extend the kernel's GDB Python scripts??

==== QEMU record and replay

QEMU runs, unlike gem5, are not deterministic by default, however it does support a record and replay mechanism that allows you to replay a previous run deterministically.

This awesome feature allows you to examine a single run as many times as you would like until you understand everything:


....
# Record a run.
./run --eval-after './linux/rand_check.out;./linux/poweroff.out;' --record

# Replay the run.
./run --eval-after './linux/rand_check.out;./linux/poweroff.out;' --replay
....

A convenient shortcut to do both at once to test the feature is:

.... ./qemu-rr --eval-after './linux/rand_check.out;./linux/poweroff.out;' ....

By comparing the terminal output of both runs, we can see that they are the exact same, including things which normally differ across runs:

  • timestamps of dmesg output
  • <> output

The record and replay feature was revived around QEMU v3.0.0. It existed earlier but had rotted completely. As of v3.0.0 it is still flaky: sometimes we get deadlocks, and only a limited number of command line arguments are supported.

Documented at:

TODO: using record and replay as above leads to a kernel warning:

.... rcu_sched detected stalls on CPUs/tasks ....

TODO: replay deadlocks intermittently at disk operations, last kernel message:

....
EXT4-fs (sda): re-mounted. Opts: block_validity,barrier,user_xattr
....

TODO replay with network gets stuck:

.... ./qemu-rr --eval-after 'ifup -a;wget -S;./linux/poweroff.out;' ....

after the message:

.... adding dns ....

There is explicit network support on the QEMU patches, but either it is buggy or we are not using the correct magic options.

Solved on unmerged c42634d8e3428cfa60672c3ba89cabefc720cde9 from


Replays only seem to work with initrd since I cannot plug a working IDE disk device? See also:

Then, when I tried with <> and no disk:

....
./build-buildroot --arch aarch64 --initrd
./qemu-rr --arch aarch64 --eval-after './linux/rand_check.out;./linux/poweroff.out;' --initrd
....

QEMU crashes with:

....
ERROR:replay/replay-time.c:49:replay_read_clock: assertion failed: (replay_file && replay_mutex_locked())
....

I had the same error previously on x86-64, but it was fixed, so maybe they forgot to fix it for aarch64.


Solved on unmerged c42634d8e3428cfa60672c3ba89cabefc720cde9 from

===== QEMU reverse debugging

TODO get working.

QEMU replays support checkpointing, and this allows for a simplistic "reverse debugging" implementation, proposed on the unmerged[]:

....
./run --eval-after './linux/rand_check.out;./linux/poweroff.out;' --record
./run --eval-after './linux/rand_check.out;./linux/poweroff.out;' --replay --gdb-wait
....

On another shell:

.... ./run-gdb start_kernel ....


....
n
n
n
n
reverse-continue
....

and we are back at `start_kernel`.


==== QEMU trace multicore

TODO: is there any way to distinguish which instruction runs on each core? Doing:

....
./run --arch x86_64 --cpus 2 --eval './linux/poweroff.out' --trace exec_tb
./qemu-trace2txt
....

just appears to output both cores intertwined without any clear differentiation.

==== QEMU get guest instruction count


==== gem5 tracing

gem5 also provides a tracing mechanism documented at:

....
./run --arch aarch64 --eval 'm5 exit' --emulator gem5 --trace ExecAll
less "$(./getvar --arch aarch64 run_dir)/trace.txt"
....

Our wrapper just forwards the options to the gem5 `--debug-flags` option.

Keep in mind however that the disassembly is very broken in several places as of 2019q2, so you can't always trust it.

Output the trace to stdout instead of a file:

....
./run \
  --arch aarch64 \
  --emulator gem5 \
  --eval 'm5 exit' \
  --trace ExecAll \
  --trace-stdout \
;
....

We also have a shortcut, `--trace-insts-stdout`, for `--trace ExecAll --trace-stdout`:

....
./run \
  --arch aarch64 \
  --emulator gem5 \
  --eval 'm5 exit' \
  --trace-insts-stdout \
;
....

Be warned, the trace is humongous, at 16Gb.

This produces a lot of output however, so you will likely not want it when tracing a full Linux kernel boot. But it can be very convenient for smaller traces such as <>.

List all available debug flags:

.... ./run --arch aarch64 --gem5-exe-args='--debug-help' --emulator gem5 ....

but to understand most of them you have to look at the source code:

....
less "$(./getvar gem5_source_dir)/src/cpu/SConscript"
less "$(./getvar gem5_source_dir)/src/cpu/"
....

The most important trace flags to know about are:

  • <>
  • Faults
    : CPU exceptions / interrupts, see an example at: <>
  • <>
  • <>

Trace internals are discussed at: <>.

As can be seen in the gem5 source, `ExecAll` is just an alias that enables a set of flags.

We can make the trace smaller by naming the trace file as `trace.txt.gz`, which enables GZIP compression, but that is not currently exposed on our scripts, since you usually just need something human readable to work on.

Enabling tracing made the runtime about 4x slower on the <>, with or without `--trace-stdout`.


Trace the source lines just like <> with:

....
./trace-boot --arch aarch64 --emulator gem5
./trace2line --arch aarch64 --emulator gem5
less "$(./getvar --arch aarch64 run_dir)/trace-lines.txt"
....

TODO: as of 7452d399290c9c1fc6366cdad129ef442f323564 this is too slow and takes hours. QEMU's processing of 170k events takes 7 seconds. gem5's processing is analogous, but there are 140M events, so it should take 7000 seconds ~ 2 hours, which seems consistent with what I observe, so maybe there is no way to speed this up... The workaround is to just use gem5's symbol output to get function granularity, and then GDB individually if line detail is needed?

===== gem5 trace internals

gem5 traces are generated from `DPRINTF` calls scattered throughout the code, except for the `ExecAll` instruction traces, which use the tracer classes.

The trace IDs are themselves encoded in `SConscript` files, e.g.:

....
DebugFlag('Event')
....



The build system then automatically adds the flag to the list accepted by the `--debug-flags` option.

For this entry, the build system then generates a header file `debug/Event.hh`, which contains declarations of type:

....
namespace Debug {
class SimpleFlag;
extern SimpleFlag ExecEnable;
}
....

and must be included from callers of `DPRINTF`.


Tested in b4879ae5b0b6644e6836b0881e4da05c64a6550d.

===== gem5 ExecAll trace format

This debug flag traces all instructions.

The output format is of type:

....
25007000: system.cpu T0 : @start_kernel    : stp
25007000: system.cpu T0 : @start_kernel.0  :   addxi_uop   ureg0, sp, #-112 : IntAlu :  D=0xffffff8008913f90
25007500: system.cpu T0 : @start_kernel.1  :   strxi_uop   x29, [ureg0] : MemWrite :  D=0x0000000000000000 A=0xffffff8008913f90
25008000: system.cpu T0 : @start_kernel.2  :   strxi_uop   x30, [ureg0, #8] : MemWrite :  D=0x0000000000000000 A=0xffffff8008913f98
25008500: system.cpu T0 : @start_kernel.3  :   addxi_uop   sp, ureg0, #0 : IntAlu :  D=0xffffff8008913f90
....

There are two types of lines:

  • full instructions, as in the first line. Only shown if the `ExecMacro` flag is given.
  • micro ops that constitute the instruction, the lines that follow. Yes, ARM also has microops:[]. Only shown if the `ExecMicro` flag is given.


  • 25007500: time count in some unit. Note how the microops execute at later timestamps.
  • system.cpu: distinguishes between CPUs when there are more than one. For example, running xref:arm-baremetal-multicore[xrefstyle=full] with two cores produces system.cpu0 and system.cpu1 entries
  • T0: thread number. TODO:[hyperthread]? How to play with it? The generated .ini has
    --param 'system.multi_thread = True' --param 'system.cpu[0].numThreads = 2'
    , but in <> the first one alone does not produce T1 entries, and with the second one the simulation blows up with:
    +
....
fatal: fatal condition interrupts.size() != numThreads occurred: CPU system.cpu has 1 interrupt controllers, but is expecting one per thread (2)
....
  • @start_kernel: we are in the start_kernel function. Awesome feature! Implemented with libelf copy pasted in-tree. To get raw addresses instead, remove the symbol flag, which is enabled by ExecAll.
  • .1 as in @start_kernel.1: index of the <> microop
  • stp: instruction disassembly. Note however that the disassembly of many instructions is very broken as of 2019q2, and you can't just trust it blindly.
  • strxi_uop x29, [ureg0]: microop disassembly.
  • MemWrite :  D=0x0000000000000000 A=0xffffff8008913f90: a memory write microop:
    ** D stands for data, and represents the value that was written to memory or to a register
    ** A stands for address, and represents the address to which the value was written. It only shows when data is being written to memory, but not to registers.
The best way to verify all of this is to write some <>.
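As a starting point for such verification scripts, the fixed fields of one micro-op line can be pulled apart with a regex. This Python sketch is based only on the sample format above; other gem5 versions may format the line differently:

```python
# Parse one gem5 ExecAll micro-op line into its fields.
import re

LINE_RE = re.compile(
    r'(?P<tick>\d+): (?P<cpu>\S+) (?P<thread>T\d+) : '
    r'@(?P<symbol>[^\s.]+)(?:\.(?P<uop_index>\d+))?\s*:\s+'
    r'(?P<rest>.*)')

def parse_execall(line):
    m = LINE_RE.match(line)
    return m.groupdict() if m else None

line = ('25007500: system.cpu T0 : @start_kernel.1 :   '
        'strxi_uop   x29, [ureg0] : MemWrite :  '
        'D=0x0000000000000000 A=0xffffff8008913f90')
d = parse_execall(line)
print(d['tick'], d['symbol'], d['uop_index'])
```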

===== gem5 Registers trace format

This flag shows more detailed register usage than <>.

For example, if we run in LKMC 0323e81bff1d55b978a4b36b9701570b59b981eb:

.... ./run --arch aarch64 --baremetal userland/arch/aarch64/add.S --emulator gem5 --trace ExecAll,Registers --trace-stdout ....

then the stdout contains:

....
31000: system.cpu A0 T0 : @main_after_prologue    :   movz   x0, #1, #0 : IntAlu :  D=0x0000000000000001  flags=(IsInteger)
31500: system.cpu.[tid:0]: Setting int reg 34 (34) to 0.
31500: system.cpu.[tid:0]: Reading int reg 0 (0) as 0x1.
31500: system.cpu.[tid:0]: Setting int reg 1 (1) to 0x3.
31500: system.cpu A0 T0 : @main_after_prologue+4    :   add   x1, x0, #2 : IntAlu :  D=0x0000000000000003  flags=(IsInteger)
32000: system.cpu.[tid:0]: Setting int reg 34 (34) to 0.
32000: system.cpu.[tid:0]: Reading int reg 1 (1) as 0x3.
32000: system.cpu.[tid:0]: Reading int reg 31 (34) as 0.
32000: system.cpu.[tid:0]: Setting int reg 0 (0) to 0x3.
....

which corresponds to the two following instructions:

....
mov x0, 1
add x1, x0, 2
....

TODO that format is either buggy or is very difficult to understand:

  • what is int reg 34? Presumably some flags register?
  • what do the numbers in parenthesis mean at 31 (34)? Presumably some internal register index?
  • why is the first instruction setting reg 1 and the second one reg 0, given that the first writes to x0 and the second to x1?

===== gem5 TARMAC traces

===== gem5 tracing internals

As of gem5 16eeee5356585441a49d05c78abc328ef09f7ace the default tracer is `ExeTracer`. It is set in src/cpu/ at:

....
default_tracer = ExeTracer()
....

which then gets used at:

....
class BaseCPU(ClockedObject):
    [...]
    tracer = Param.InstTracer(default_tracer, "Instruction tracer")
....

All tracers derive from the common `InstTracer` base class:

....
git grep ': InstTracer'
....


....
src/arch/arm/tracers/tarmacparser.hh:218:    TarmacParser(const Params *p) : InstTracer(p), startPc(p->start_pc),
src/arch/arm/tracers/    : InstTracer(p),
src/cpu/exetrace.hh:67:    ExeTracer(const Params *params) : InstTracer(params)
src/cpu/    : InstTracer(p), buf(nullptr), bufSize(0), curMsg(nullptr)
src/cpu/inteltrace.hh:63:    IntelTrace(const IntelTraceParams *p) : InstTracer(p)
....

As mentioned at <>, there appears to be no way to select those currently without hacking the config scripts.

TARMAC is described at: <>.

TODO: are the other tracers such as `IntelTrace` useful for anything or just relics?

Then there is also the `NativeTrace` class:

....
src/cpu/nativetrace.hh:68:class NativeTrace : public ExeTracer
....

which gets implemented in a few different ISAs, but not all:

....
src/arch/arm/nativetrace.hh:40:class ArmNativeTrace : public NativeTrace
src/arch/sparc/nativetrace.hh:41:class SparcNativeTrace : public NativeTrace
src/arch/x86/nativetrace.hh:41:class X86NativeTrace : public NativeTrace
....

TODO: I can't find any usages of those classes from in-tree configs.

=== QEMU GUI is unresponsive

Sometimes in Ubuntu 14.04, after the QEMU SDL GUI starts, it does not get updated after keyboard strokes, and there are artifacts like disappearing text.

We have not managed to track this problem down yet, but the following workaround always works:

....
Ctrl-Shift-U
Ctrl-C
root
....

This started happening when we switched to building QEMU through Buildroot, and has not been observed on later Ubuntu.

Using text mode is another workaround if you don't need GUI features.

== gem5

Getting started at: xref:gem5-buildroot-setup[xrefstyle=full].

gem5 has a bunch of crappiness, mostly described at: <>, but it does deserve some credit on the following points:

  • insanely configurable system topology from Python without recompiling, made possible in part due to a well defined memory packet structure that allows adding caches and buses transparently
  • each micro architectural model (<>) works with all ISAs

=== gem5 vs QEMU

  • advantages of gem5:
  ** simulates a generic, more realistic <> CPU cycle by cycle, including a realistic DRAM memory access model with latencies, caches and page table manipulations. This allows us to:
  *** do much more realistic performance benchmarking with it, which makes absolutely no sense in QEMU, which is purely functional
  *** make certain functional observations that are not possible in QEMU, e.g.:
  **** use Linux kernel APIs that flush cache memory like DMA, which are crucial for driver development. In QEMU, the driver would still work even if we forget to flush caches.
  **** spectre / meltdown:[email protected]/msg15319.html
  +
  It is not of course truly cycle accurate, as that:
  *** would require exposing proprietary information of the CPU designs:[]
  *** would make the simulation even slower TODO confirm, by how much
  +
  but the approximation is reasonable.
  +
  It is used mostly for microarchitecture research purposes: when you are making a new chip technology, you don't really need to specialize enormously to an existing microarchitecture, but rather develop something that will work with a wide range of future architectures.
  ** runs are deterministic by default, unlike QEMU which has a special <> mode that requires first playing the content once and then replaying
  ** gem5 ARM at least appears to implement more low level CPU functionality than QEMU, e.g. QEMU only added EL2 in 2018. See also: xref:arm-exception-levels[xrefstyle=full]
  ** gem5 offers more advanced logging, even for non micro architectural things which QEMU models in some way, e.g. <>, because QEMU's binary translation optimizations reduce visibility
  • disadvantages of gem5:
  ** slower than QEMU, see: xref:benchmark-linux-kernel-boot[xrefstyle=full]
  +
  This implies that the user base is much smaller, since no Android devs. Instead, we have only chip makers, who keep everything that really works closed, and researchers, who can't version track or document code properly >:-) And this implies that:
  *** the documentation is more scarce
  *** it takes longer to support new hardware features
  +
  Well, not that AOSP is that much better anyway.
  ** not sure: gem5 has BSD license while QEMU has GPL
  +
  This suits chip makers that want to distribute forks with secret IP to their customers. On the other hand, the chip makers tend to upstream less, and the project becomes more crappy on average :-)
  ** gem5 is way more complex and harder to modify and maintain
  +
  The only hairy thing in QEMU is the binary code generation. gem5 however has tended towards horrendous intensive <> in order to support all its different hardware types. gem5 also has a complex Python interface which is also largely auto-generated, which greatly increases the maintenance complexity of the project: <>.
  +
  This is done so that reconfiguring platforms can be done quickly without recompiling, and it is amazing when it works, but the maintenance costs are also very high. For example, <> of several trivial files accounted for 50% of the build time at one point: <>. All of this also makes it hard to set up an IDE for developing gem5: <>
  +
  The feelings of helplessness this brings are well summarized by the following CSDN article:
  +
  Found DPRINTF based debugging unable to meet your needs?

Found GDB based debugging unfriendly to human beings?

Want to debug gem5 source with the help of modern IDEs like Eclipse?

Failed in getting help from GEM5 community?

Come on, dude! Here is the up-to-date tutorial for you!

Just be ready for THE ENDLESS NIGHTMARE gem5 will bring!
____

=== gem5 run benchmark

OK, this is why we used gem5 in the first place, performance measurements!

Let's see how many cycles dhrystone, which Buildroot provides, takes for a few different input parameters.

We will do that for various input parameters on full system by first booting with a fast atomic CPU, taking a checkpoint after the boot finishes, and then restoring in a more detailed mode to run the benchmark:

....
./build-buildroot --config 'BR2_PACKAGE_DHRYSTONE=y'

# Boot fast, take checkpoint, and exit.
./run --arch aarch64 --emulator gem5 --eval-after './'

# Restore the checkpoint after boot, and benchmark with input 1000.
./run \
  --arch aarch64 \
  --emulator gem5 \
  --eval-after './' \
  --gem5-readfile 'm5 resetstats;dhrystone 1000;m5 dumpstats' \
  --gem5-restore 1 \
  -- \
  --cpu-type=HPI \
  --restore-with-cpu=HPI \
  --caches \
  --l2cache \
  --l1d_size=64kB \
  --l1i_size=64kB \
  --l2_size=256kB \
;

# Get the value for number of cycles.
# head because there are two lines: our dumpstats and the
# automatic dumpstats at the end which we don't care about.
./gem5-stat --arch aarch64 | head -n 1

# Now for input 10000.
./run \
  --arch aarch64 \
  --emulator gem5 \
  --eval-after './' \
  --gem5-readfile 'm5 resetstats;dhrystone 10000;m5 dumpstats' \
  --gem5-restore 1 \
  -- \
  --cpu-type=HPI \
  --restore-with-cpu=HPI \
  --caches \
  --l2cache \
  --l1d_size=64kB \
  --l1i_size=64kB \
  --l2_size=256kB \
;
./gem5-stat --arch aarch64 | head -n 1
....

If you ever need a shell to quickly inspect the system state after boot, you can just use:

....
./run \
  --arch aarch64 \
  --emulator gem5 \
  --eval-after './' \
  --gem5-readfile 'sh' \
  --gem5-restore 1 \
;
....

This procedure is further automated and DRYed up at:

....
./gem5-bench-dhrystone
cat out/gem5-bench-dhrystone.txt
....

Source: link:gem5-bench-dhrystone[]

Output at 2438410c25e200d9766c8c65773ee7469b599e4a + 1:

....
n cycles
1000 13665219
10000 20559002
100000 85977065
....

so as expected, the Dhrystone run with a larger input parameter took more cycles than the ones with smaller input parameters.
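As a rough sanity check of the scaling, we can estimate the marginal cost per Dhrystone iteration from the (n, cycles) pairs above. This is just a back-of-envelope sketch using the numbers from the output; the resulting cycles-per-iteration figure is our own estimate, not something LKMC reports:

```python
# (input parameter n) -> (cycles measured above)
results = {1000: 13665219, 10000: 20559002, 100000: 85977065}

def marginal_cycles_per_iteration(n1, n2):
    # Slope between two measurement points. The constant offset
    # (dominated by benchmark startup) cancels out.
    return (results[n2] - results[n1]) / (n2 - n1)

print(marginal_cycles_per_iteration(10000, 100000))
```

which suggests a roughly constant incremental cost per iteration, as we would hope for a linear benchmark.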


These commands output the approximate number of CPU cycles it took Dhrystone to run.

A more naive and simpler to understand approach would be a direct:

.... ./run --arch aarch64 --emulator gem5 --eval 'm5 checkpoint;m5 resetstats;dhrystone 10000;m5 exit' ....

but the problem is that this method does not allow us to easily run a different script without running the boot again. The script works around that by using <> as explained further at: xref:gem5-restore-new-script[xrefstyle=full].

Now you can play a fun little game with your friends:

* pick a computational problem
* make a program that solves the computational problem, and outputs the result to stdout
* write the code that runs the correct computation in the smallest number of cycles possible

Interesting algorithms and benchmarks for this game are being collected at:

  • <>
  • <>

To find out why your program is slow, a good first step is to have a look at the <>.

==== Skip extra benchmark instructions

A few imperfections of our <> are:

* when we do `m5 resetstats` and `m5 exit`, some time passes between the system call returning and the actual benchmark starting and ending
* the benchmark outputs to stdout, which means some extra cycles in addition to the actual computation. But TODO: how to get the output to check that it is correct without such IO cycles?

Solutions to these problems include:

  • modify benchmark code with instrumentation directly, see <> for an example.
  • monitor known addresses TODO possible? Create an example.

Discussion at:

Those problems should be insignificant if the benchmark runs for long enough however.

=== gem5 system parameters

Besides optimizing a program for a given CPU setup, chip developers can also do the inverse, and optimize the chip for a given benchmark!

The rabbit hole is likely deep, but let's scratch a bit of the surface.

==== Number of cores

.... ./run --arch arm --cpus 2 --emulator gem5 ....

This can be checked from the guest, e.g. with <> on Ubuntu 18.04:

....
cat /proc/cpuinfo
getconf _NPROCESSORS_CONF
....

Or from <>, we can use either of:

* <> with link:userland/linux/sysconf.c[]:
+
....
./run --cpus 2 --emulator gem5 --userland userland/linux/sysconf.c | grep _SC_NPROCESSORS_ONLN
....
* <>'s link:userland/cpp/thread_hardware_concurrency.cpp[]:
+
....
./run --cpus 2 --emulator gem5 --userland userland/cpp/thread_hardware_concurrency.cpp
....
* direct access to several special filesystem files that contain this information, e.g. via link:userland/c/cat.c[]:
+
....
./run --cpus 2 --emulator gem5 --userland userland/c/cat.c --cli-args /proc/cpuinfo
....
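For comparison, on a plain host the same information can be queried from Python's standard library (a sketch for cross-checking on the host, not something run inside the gem5 guest):

```python
import os

# Number of configured processors, like getconf _NPROCESSORS_CONF.
configured = os.cpu_count()

# Number of CPUs this process may actually run on, which can be
# smaller than cpu_count() under taskset or cgroup restrictions.
usable = len(os.sched_getaffinity(0))

print(configured, usable)
```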

===== QEMU user mode multithreading

In <>, QEMU v4.0.0 always shows the number of cores of the host, presumably because the thread switching uses host threads directly, which would make that harder to implement.

It does not seem possible to make the guest see a different number of cores than what the host has. Full system does have options which control this.

E.g., all of the following output the same as `nproc` on the host:

....
nproc
./run --cpus 1 --userland userland/cpp/thread_hardware_concurrency.cpp
./run --cpus 2 --userland userland/cpp/thread_hardware_concurrency.cpp
....

This random page suggests that QEMU spawns one host thread per guest thread, and thus presumably delegates context switching to the host kernel:

We can confirm that with:

....
./run --userland userland/posix/pthread_count.c --cli-args 4
ps Haux | grep qemu | wc
....

Remember <> though.

At 369a47fc6e5c2f4a7f911c1c058b6088f8824463 + 1, QEMU appears to spawn 3 host threads plus one for every new guest thread created. Remember that link:userland/posix/pthread_count.c[] spawns N + 1 total threads if you count the main thread.

===== gem5 ARM full system with more than 8 cores

With <>, tested at LKMC 224fae82e1a79d9551b941b19196c7e337663f22 gem5 3ca404da175a66e0b958165ad75eb5f54cb5e772 on vanilla kernel:

....
./run \
  --arch aarch64 \
  --emulator gem5 \
  --cpus 16 \
  -- \
  --machine-type VExpress_GEM5_V2 \
;
....

boots to a shell.

For the GICv2 extension method, build the kernel with the <>, and then run:

....
./run \
  --arch aarch64 \
  --linux-build-id gem5-v4.15 \
  --emulator gem5 \
  --cpus 16 \
  -- \
  --param 'system.realview.gic.gem5_extensions = True' \
;
....

Tested in LKMC 788087c6f409b84adf3cff7ac050fa37df6d4c46. It fails after boot with:

....
FATAL: kernel too old
....

as mentioned at: <> but everything seems to work on the gem5 side of things.

==== gem5 cache size

A quick:

....
./run --emulator gem5 -- -h
....

leads us to the options:

....
--caches
--l1d_size=1024
--l1i_size=1024
--l2cache
--l2_size=1024
--l3_size=1024
....

But keep in mind that it only affects benchmark performance of the most detailed CPU types as shown at: xref:table-gem5-cache-cpu-type[xrefstyle=full].

[[table-gem5-cache-cpu-type]] .gem5 cache support in function of CPU type [options="header"] |=== |arch |CPU type |caches used

|X86 |


|X86 |


|ARM |


|ARM |



{empty}*: couldn't test because of:


Cache sizes can in theory be checked with the methods described at:[]:

....
lscpu
cat /sys/devices/system/cpu/cpu0/cache/index2/size
....

and on Ubuntu 20.04 host <>:

.... getconf -a | grep CACHE ....

and we also have an easy to use userland executable using <> at link:userland/linux/sysconf.c[]:

.... ./run --emulator gem5 --userland userland/linux/sysconf.c ....

but for some reason the Linux kernel is not seeing the cache sizes:


Behaviour breakdown:

* arm QEMU and gem5 (both), x86 gem5: the files don't exist, and the value is empty
* x86 QEMU: the files exist, but the values are still empty

The only precise option is therefore to look at <> as done at: <>.

Or for a quick and dirty performance measurement approach instead:

....
./gem5-bench-cache -- --arch aarch64
cat "$(./getvar --arch aarch64 run_dir)/bench-cache.txt"
....

which gives:

....
cmd ./run --emulator gem5 --arch aarch64 --gem5-readfile "dhrystone 1000" --gem5-restore 1 -- --caches --l2cache --l1d_size=1024 --l1i_size=1024 --l2_size=1024 --l3_size=1024 --cpu-type=HPI --restore-with-cpu=HPI
time 23.82
exit_status 0
cycles 93284622
instructions 4393457

cmd ./run --emulator gem5 --arch aarch64 --gem5-readfile "dhrystone 1000" --gem5-restore 1 -- --caches --l2cache --l1d_size=1024kB --l1i_size=1024kB --l2_size=1024kB --l3_size=1024kB --cpu-type=HPI --restore-with-cpu=HPI
time 14.91
exit_status 0
cycles 10128985
instructions 4211458

cmd ./run --emulator gem5 --arch aarch64 --gem5-readfile "dhrystone 10000" --gem5-restore 1 -- --caches --l2cache --l1d_size=1024 --l1i_size=1024 --l2_size=1024 --l3_size=1024 --cpu-type=HPI --restore-with-cpu=HPI
time 51.87
exit_status 0
cycles 188803630
instructions 12401336

cmd ./run --emulator gem5 --arch aarch64 --gem5-readfile "dhrystone 10000" --gem5-restore 1 -- --caches --l2cache --l1d_size=1024kB --l1i_size=1024kB --l2_size=1024kB --l3_size=1024kB --cpu-type=HPI --restore-with-cpu=HPI
time 35.35
exit_status 0
cycles 20715757
instructions 12192527

cmd ./run --emulator gem5 --arch aarch64 --gem5-readfile "dhrystone 100000" --gem5-restore 1 -- --caches --l2cache --l1d_size=1024 --l1i_size=1024 --l2_size=1024 --l3_size=1024 --cpu-type=HPI --restore-with-cpu=HPI
time 339.07
exit_status 0
cycles 1176559936
instructions 94222791

cmd ./run --emulator gem5 --arch aarch64 --gem5-readfile "dhrystone 100000" --gem5-restore 1 -- --caches --l2cache --l1d_size=1024kB --l1i_size=1024kB --l2_size=1024kB --l3_size=1024kB --cpu-type=HPI --restore-with-cpu=HPI
time 240.37
exit_status 0
cycles 125666679
instructions 91738770
....

We make the following conclusions:

  • the number of instructions almost does not change: the CPU is waiting for memory all the extra time. TODO: why does it change at all?
  • the wall clock execution time is not directly proportional to the number of cycles: here we had a 10x cycle increase, but only a 2x time increase. This suggests that simulating cycles in which the CPU is just waiting for memory is faster.

==== gem5 DRAM model

Some info at: <> but highly TODO :-)

===== gem5 memory latency

TODO These look promising:

....
--list-mem-types
--mem-type=MEM_TYPE
--mem-channels=MEM_CHANNELS
--mem-ranks=MEM_RANKS
--mem-size=MEM_SIZE
....

TODO: how to verify this with the Linux kernel? Besides raw performance benchmarks.

Now for a raw simplistic benchmark on <> without caches via <>:

.... ./run --arch aarch64 --cli-args 1000000 --emulator gem5 --userland userland/gcc/busy_loop.c -- --cpu-type TimingSimpleCPU ....

LKMC eb22fd3b6e7fff7e9ef946a88b208debf5b419d5 gem5 872cb227fdc0b4d60acc7840889d567a6936b6e1 outputs:

.... Exiting @ tick 897173931000 because exiting with last active thread context ....

and now because:

  • we have no caches, each instruction is fetched from memory
  • each loop contains 11 instructions as shown at xref:c-busy-loop[xrefstyle=full]
  • and supposing that the loop dominates the executable's pre/post processing, which we know is true since, as shown in <>, an empty dynamically linked C program runs only about 100k instructions, while our loop runs 1000000 * 11 = 11M

we should have about 1000000 * 11 / 897173931000 ps ~ 12260722 ~ 12MB/s of random accesses. The default memory type used is `DDR3_1600_8x8` as per:

....
common/
parser.add_option("--mem-type", type="choice", default="DDR3_1600_8x8",
....

and according to that, it reaches 6400 MB/s, so we are only off by a factor of 50x :-) TODO. Maybe if the minimum transaction is 64 bytes, we would be on point.
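The arithmetic above can be redone explicitly. This is just a sketch of the back-of-envelope estimate; the 64-byte transaction size in the last step is only the hypothesis from the TODO, not a measured value:

```python
loops = 1000000
insns_per_loop = 11
total_ticks_ps = 897173931000  # from the "Exiting @ tick" line; gem5 ticks are ps

seconds = total_ticks_ps * 1e-12
fetches_per_second = loops * insns_per_loop / seconds
print(fetches_per_second)  # roughly 12.26M fetches per second

# If every fetch actually moved a 64-byte cache-line-sized transaction,
# the implied bandwidth would be much closer to the DRAM datasheet value.
bytes_per_second = fetches_per_second * 64
```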

Another example we could use later on is link:userland/gcc/busy_loop.c[], but then that mixes icache and dcache accesses, so the analysis is a bit more complex:

.... ./run --arch aarch64 --cli-args 0x1000000 --emulator gem5 --userland userland/gcc/busy_loop.c -- --cpu-type TimingSimpleCPU ....

===== Memory size

Can be set across emulators with:

.... ./run --memory 512M ....

We can verify this on the guest directly from the kernel with:

.... cat /proc/meminfo ....

as of LKMC 1e969e832f66cb5a72d12d57c53fb09e9721d589 this output contains:

.... MemTotal: 498472 kB ....

which we expand with:

.... printf '0x%X\n' $((498472 * 1024)) ....


.... 0x1E6CA000 ....
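The same expansion can be checked in Python. This is a sketch of the arithmetic only; the interpretation of the shortfall in the comment is a guess, matching the TODO below:

```python
mem_total_kb = 498472  # MemTotal from /proc/meminfo above

mem_total_bytes = mem_total_kb * 1024
print(hex(mem_total_bytes))  # 0x1e6ca000

# How far below the requested 512MiB this is: memory not reported
# as MemTotal, presumably reserved by the kernel/firmware.
missing = 512 * 1024 * 1024 - mem_total_bytes
print(missing)
```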

TODO: why is this value a bit smaller than 512M?

`free -b` also gives the same result:

.... free -b ....


....
             total       used       free     shared    buffers     cached
Mem:     510435328   20385792  490049536          0     503808    2760704
-/+ buffers/cache:   17121280  493314048
Swap:            0          0          0
....

which we expand with:

....
printf '0x%X\n' 510435328
....

`man free` from Ubuntu's procps 3.3.15 tells us that `free` obtains this information from `/proc/meminfo` as well.

From C, we can get this information with `sysconf`:

.... ./linux/total_memory.out ....

Source: link:userland/linux/total_memory.c[]


....
sysconf(_SC_PHYS_PAGES) * sysconf(_SC_PAGESIZE) = 0x1E6CA000
sysconf(_SC_AVPHYS_PAGES) * sysconf(_SC_PAGESIZE) = 0x1D178000
get_phys_pages() * sysconf(_SC_PAGESIZE) = 0x1E6CA000
get_avphys_pages() * sysconf(_SC_PAGESIZE) = 0x1D178000
....

This is mentioned at:

AV means available and gives the free memory:
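A quick cross-check of the same sysconf values from Python on a Linux host (a sketch; `SC_PHYS_PAGES` and `SC_AVPHYS_PAGES` are glibc extensions that `os.sysconf` also exposes on Linux):

```python
import os

page_size = os.sysconf('SC_PAGE_SIZE')
total = os.sysconf('SC_PHYS_PAGES') * page_size
available = os.sysconf('SC_AVPHYS_PAGES') * page_size

# AVPHYS counts free pages, so it is at most the total.
print(hex(total), hex(available))
```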

===== gem5 DRAM setup

This can be explored pretty well from <>. Syscall emulation just has a single DRAM with size given as <> and physical address starting at 0. Full system also has that DRAM, but can have more memory types. Notably, aarch64 has, as shown on:

....
0x00000000-0x03ffffff: (   0     -  64 MiB) Boot memory (CS0)
0x04000000-0x07ffffff: (  64 MiB - 128 MiB) Reserved
0x08000000-0x0bffffff: ( 128 MiB - 192 MiB) NOR FLASH0 (CS0 alias)
0x0c000000-0x0fffffff: ( 192 MiB - 256 MiB) NOR FLASH1 (Off-chip, CS4)
0x80000000-XxXXXXXXXX: (   2 GiB -        ) DRAM
....

We place the entry point of our baremetal executables right at the start of DRAM with our <>.

This can be seen indirectly with:

.... ./getvar --arch aarch64 --emulator gem5 entry_address ....

which gives the address (0x80000000) in decimal, or more directly with some <>:

....
./run \
  --arch aarch64 \
  --baremetal baremetal/arch/aarch64/no_bootloader/exit.S \
  --emulator gem5 \
  --trace ExecAll,-ExecSymbol \
  --trace-stdout \
;
....

and we see that the first instruction runs at 0x80000000:

.... 0: system.cpu: A0 T0 : 0x80000000 ....

TODO: what are the boot memory and NOR FLASH used for?

==== gem5 disk and network latency

TODO These look promising:

....
--ethernet-linkspeed
--ethernet-linkdelay
....

and also:


==== gem5 clock frequency

As of gem5 872cb227fdc0b4d60acc7840889d567a6936b6e1, the CPU clock defaults to 2GHz:

....
parser.add_option("--cpu-clock", action="store", type="string",
                  default='2GHz',
                  help="Clock for blocks running at CPU speed")
....

We can check that very easily by looking at the timestamps of a <> of an <> without any caches:

....
./run \
  --arch aarch64 \
  --emulator gem5 \
  --userland userland/arch/aarch64/freestanding/linux/hello.S \
  --trace-insts-stdout \
;
....

which shows:

....
   0: system.cpu: A0 T0 : @asm_main_after_prologue    : movz x0, #1, #0 : IntAlu : D=0x0000000000000001 flags=(IsInteger)
 500: system.cpu: A0 T0 : @asm_main_after_prologue+4  : adr x1, #28 : IntAlu : D=0x0000000000400098 flags=(IsInteger)
1000: system.cpu: A0 T0 : @asm_main_after_prologue+8  : ldr w2, #4194464 : MemRead : D=0x0000000000000006 A=0x4000a0 flags=(IsInteger|IsMemRef|IsLoad)
1500: system.cpu: A0 T0 : @asm_main_after_prologue+12 : movz x8, #64, #0 : IntAlu : D=0x0000000000000040 flags=(IsInteger)
2000: system.cpu: A0 T0 : @asm_main_after_prologue+16 : svc #0x0 : IntAlu : flags=(IsSerializeAfter|IsNonSpeculative|IsSyscall)
hello
2500: system.cpu: A0 T0 : @asm_main_after_prologue+20 : movz x0, #0, #0 : IntAlu : D=0x0000000000000000 flags=(IsInteger)
3000: system.cpu: A0 T0 : @asm_main_after_prologue+24 : movz x8, #93, #0 : IntAlu : D=0x000000000000005d flags=(IsInteger)
3500: system.cpu: A0 T0 : @asm_main_after_prologue+28 : svc #0x0 : IntAlu : flags=(IsSerializeAfter|IsNonSpeculative|IsSyscall)
....

so we see that it runs one instruction every 500 ps, which corresponds to 2GHz.
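The tick-to-frequency conversion is just the reciprocal of the period (a sketch of the arithmetic):

```python
# gem5 ticks are picoseconds, so the per-instruction spacing in the
# trace gives the clock period directly.
period_ps = 500
frequency_hz = 1 / (period_ps * 1e-12)
print(frequency_hz / 1e9)  # in GHz
```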

So if we change the frequency to say 1GHz and re-run it:

....
./run \
  --arch aarch64 \
  --emulator gem5 \
  --userland userland/arch/aarch64/freestanding/linux/hello.S \
  --trace-insts-stdout \
  -- \
  --cpu-clock 1GHz \
;
....

we get as expected:

....
   0: system.cpu: A0 T0 : @asm_main_after_prologue    : movz x0, #1, #0 : IntAlu : D=0x0000000000000001 flags=(IsInteger)
1000: system.cpu: A0 T0 : @asm_main_after_prologue+4  : adr x1, #28 : IntAlu : D=0x0000000000400098 flags=(IsInteger)
2000: system.cpu: A0 T0 : @asm_main_after_prologue+8  : ldr w2, #4194464 : MemRead : D=0x0000000000000006 A=0x4000a0 flags=(IsInteger|IsMemRef|IsLoad)
3000: system.cpu: A0 T0 : @asm_main_after_prologue+12 : movz x8, #64, #0 : IntAlu : D=0x0000000000000040 flags=(IsInteger)
4000: system.cpu: A0 T0 : @asm_main_after_prologue+16 : svc #0x0 : IntAlu : flags=(IsSerializeAfter|IsNonSpeculative|IsSyscall)
hello
5000: system.cpu: A0 T0 : @asm_main_after_prologue+20 : movz x0, #0, #0 : IntAlu : D=0x0000000000000000 flags=(IsInteger)
6000: system.cpu: A0 T0 : @asm_main_after_prologue+24 : movz x8, #93, #0 : IntAlu : D=0x000000000000005d flags=(IsInteger)
7000: system.cpu: A0 T0 : @asm_main_after_prologue+28 : svc #0x0 : IntAlu : flags=(IsSerializeAfter|IsNonSpeculative|IsSyscall)
....

As of gem5 872cb227fdc0b4d60acc7840889d567a6936b6e1, the clock frequency, much like <>, does not get propagated to the guest, and is not for example visible at:

.... ls /sys/devices/system/cpu/cpu0/cpufreq ....

=== gem5 kernel command line parameters

Analogous to <>:

.... ./run --arch arm --kernel-cli 'init=/lkmc/linux/poweroff.out' --emulator gem5 ....

Internals: when we pass a kernel command line to gem5, it overrides the default command line, including some mandatory options which are required to boot properly.

Our run script hardcodes the required options and appends the extra options given with `--kernel-cli`.

To find the default options in the first place, we removed the override and ran:

.... ./run --arch arm --emulator gem5 ....

and then looked at the line of the Linux kernel that starts with:

.... Kernel command line: ....

[[gem5-gdb]]
=== gem5 GDB step debug

==== gem5 GDB step debug kernel

Analogous to <>, on the first shell:

.... ./run --arch arm --emulator gem5 --gdb-wait ....

On the second shell:

.... ./run-gdb --arch arm --emulator gem5 ....

On a third shell:

.... ./gem5-shell ....

When you want to break, just hit Ctrl-C on the GDB shell, and then `continue`.

And we now see the boot messages, and then get a shell. Now try the procedure described for QEMU at: xref:gdb-step-debug-kernel-post-boot[xrefstyle=full].

==== gem5 GDB step debug userland process

We are unable to use `gdbserver` because of networking, as mentioned at: xref:gem5-host-to-guest-networking[xrefstyle=full]

The alternative is to do as in <>.

Next, follow the exact same steps explained at <>, but passing `--emulator gem5` to every command as usual.

But then TODO (I'll still go crazy one of those days): while debugging `./linux/myinsmod.out hello.ko`, after the line:

....
23 if (argc < 3) {
24     params = "";
....

when I step to the next line, it just runs the program until the end, instead of stopping on the next line of execution. The module does get inserted normally.


.... ./run-gdb --arch arm --emulator gem5 --userland gem5-1.0/gem5/util/m5/m5 main ....

breaks when `m5` is run on the guest, but does not show the source code.

==== gem5 GDB step debug secondary cores

gem5's secondary core GDB setup is a hack and spawns one gdbserver for each core in separate ports, e.g. 7000, 7001, etc.

Partly because of this, it is basically unusable/very hard to use, because you can't attach to a core that is stopped either because it hasn't been initialized, or if you are already currently debugging another core.

This affects both full system and <>, and is described in more detail at:

In LKMC 0a3ce2f41f12024930bcdc74ff646b66dfc46999, we can easily test attaching to another core by passing `--run-id`, e.g. to connect to the second core we can use `--run-id 1`:

....
./run-gdb --arch aarch64 --emulator gem5 --userland userland/gcc/busy_loop.c --run-id 1
....

=== gem5 checkpoint

Analogous to QEMU's <>, but better, since it can be started from inside the guest, so we can easily checkpoint after a specific guest event, e.g. just after boot is done.


To see it in action try:

.... ./run --arch aarch64 --emulator gem5 ....

In the guest, wait for the boot to end and run:

.... m5 checkpoint ....

where <> is a guest utility present inside the gem5 tree which we cross-compiled and installed into the guest.

To restore the checkpoint, kill the VM and run:

....
./run --arch aarch64 --emulator gem5 --gem5-restore 1
....


The `--gem5-restore 1` option restores the checkpoint that was created most recently.

Let's create a second checkpoint to see how it works, in guest:

....
date >f
m5 checkpoint
....

Kill the VM, and try it out:

....
./run --arch aarch64 --emulator gem5 --gem5-restore 1
....

Here we use `--gem5-restore 1` again, since the second snapshot we took is now the most recent one.

Now in the guest:

.... cat f ....

contains the `date`. The file wouldn't exist had we used the first checkpoint with `--gem5-restore 2`, which restores the second most recent snapshot taken.

If you automate things with <> as in:

.... ./run --arch arm --eval 'm5 checkpoint;m5 resetstats;dhrystone 1000;m5 exit' --emulator gem5 ....

Then there is no need to pass the kernel command line again to gem5 for replay:

.... ./run --arch arm --emulator gem5 --gem5-restore 1 ....

since boot has already happened, and the parameters are already in the RAM of the snapshot.

==== gem5 checkpoint userland minimal example

In order to debug checkpoint restore bugs, this minimal setup using link:userland/freestanding/gem5_checkpoint.S[] can be handy:

....
./build-userland --arch aarch64 --static
./run --arch aarch64 --emulator gem5 --static --userland userland/freestanding/gem5_checkpoint.S --trace-insts-stdout
./run --arch aarch64 --emulator gem5 --static --userland userland/freestanding/gem5_checkpoint.S --trace-insts-stdout --gem5-restore 1
./run --arch aarch64 --emulator gem5 --static --userland userland/freestanding/gem5_checkpoint.S --trace-insts-stdout --gem5-restore 1 -- --cpu-type=DerivO3CPU --restore-with-cpu=DerivO3CPU --caches
....

On the initial run, we see that all instructions are executed and the checkpoint is taken:

....
   0: system.cpu: A0 T0 : @asm_main_after_prologue    : movz x0, #0, #0 : IntAlu : D=0x0000000000000000 flags=(IsInteger)
 500: system.cpu: A0 T0 : @asm_main_after_prologue+4  : movz x1, #0, #0 : IntAlu : D=0x0000000000000000 flags=(IsInteger)
1000: system.cpu: A0 T0 : @asm_main_after_prologue+8  : m5checkpoint : IntAlu : flags=(IsInteger|IsNonSpeculative|IsUnverifiable)
Writing checkpoint
warn: Checkpoints for file descriptors currently do not work.
info: Entering event queue @ 1000.  Starting simulation...
1500: system.cpu: A0 T0 : @asm_main_after_prologue+12 : movz x0, #0, #0 : IntAlu : D=0x0000000000000000 flags=(IsInteger)
2000: system.cpu: A0 T0 : @asm_main_after_prologue+16 : m5exit : No_OpClass : flags=(IsInteger|IsNonSpeculative)
Exiting @ tick 2000 because m5_exit instruction encountered
....

Then, on the first restore run, the checkpoint is restored, and only instructions after the checkpoint are executed:

....
info: Entering event queue @ 1000.  Starting simulation...
1500: system.cpu: A0 T0 : @asm_main_after_prologue+12 : movz x0, #0, #0 : IntAlu : D=0x0000000000000000 flags=(IsInteger)
2000: system.cpu: A0 T0 : @asm_main_after_prologue+16 : m5exit : No_OpClass : flags=(IsInteger|IsNonSpeculative)
Exiting @ tick 2000 because m5_exit instruction encountered
....

and a similar thing happens for the <>:

....
info: Entering event queue @ 1000.  Starting simulation...
79000: system.cpu: A0 T0 : @asm_main_after_prologue+12 : movz x0, #0, #0 : IntAlu : D=0x0000000000000000 FetchSeq=1 CPSeq=1 flags=(IsInteger)
Exiting @ tick 84500 because m5_exit instruction encountered
....

Here we don't see the last `m5 exit` instruction on the log, but it must just be something to do with the O3 logging.

==== gem5 checkpoint internals

A quick way to get a <> or full system checkpoint to observe is:

....
./run --arch aarch64 --emulator gem5 --baremetal userland/freestanding/gem5_checkpoint.S --trace-insts-stdout
./run --arch aarch64 --emulator gem5 --userland userland/freestanding/gem5_checkpoint.S --trace-insts-stdout
....

Checkpoints are stored inside the <> at:

.... "$(./getvar --emulator gem5 m5out_dir)/cpt." ....


where the suffix is the cycle number at which the checkpoint was taken.

gem5 exposes the `-r N` flag to restore checkpoints, which picks the N-th checkpoint by cycle number.

However, that interface is bad, because if you had taken previous checkpoints, you have no idea what `N` to use, unless you memorize which checkpoint was taken at which cycle.

Therefore, just use our superior `--gem5-restore` flag, which uses directory timestamps to determine which checkpoint you created most recently.


The `-r N` integer value is just pure sugar: the backend just takes the actual checkpoint directory path as input.

The file `m5.cpt` contains almost everything in the checkpoint except memory.

It is a[Python configparser compatible file] with a section structure that matches the <> tree e.g.:

....
[system.cpu.itb.walker.power_state]
currState=0
prvEvalTick=0
....
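Since the format is configparser compatible, the checkpoint can be inspected programmatically, e.g. with Python's standard library. A sketch parsing the fragment above from a string; a real `m5.cpt` would be loaded with `read()` instead:

```python
import configparser

cpt = configparser.ConfigParser()
cpt.read_string("""\
[system.cpu.itb.walker.power_state]
currState=0
prvEvalTick=0
""")

# Section names map directly to SimObject paths in the config tree.
state = cpt['system.cpu.itb.walker.power_state']
print(state['currState'])
```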

When a checkpoint is taken, each SimObject calls its overridden serialize method to generate the checkpoint, and when loading, the corresponding unserialize method is called.

[[gem5-restore-new-script]] ==== gem5 checkpoint restore and run a different script

You want to automate running several tests from a single pristine post-boot state.

The problem is that boot takes forever, and after the checkpoint, the memory and disk states are fixed, so you can't for example:

  • hack up an existing rc script, since the disk is fixed
  • inject new kernel boot command line options, since those have already been put into memory by the bootloader

There are however a few loopholes, <> being the simplest, as it reads whatever is present on the host.

So we can do it like:


....
# Boot, checkpoint and exit.
printf 'echo "setup run";m5 exit' > "$(./getvar gem5_readfile_file)"
./run --emulator gem5 --eval 'm5 checkpoint;m5 readfile > /tmp/ && sh /tmp/'

# Restore and run the first benchmark.
printf 'echo "first benchmark";m5 exit' > "$(./getvar gem5_readfile_file)"
./run --emulator gem5 --gem5-restore 1

# Restore and run the second benchmark.
printf 'echo "second benchmark";m5 exit' > "$(./getvar gem5_readfile_file)"
./run --emulator gem5 --gem5-restore 1

# If something weird happened, create an interactive shell to examine the system.
printf 'sh' > "$(./getvar gem5_readfile_file)"
./run --emulator gem5 --gem5-restore 1
....

Since this is such a common setup, we provide the following helpers for this operation:

* `./run --gem5-readfile` is a convenient way to set the `m5 readfile` file contents from a string on the command line, e.g.:
+
....
# Boot, checkpoint and exit.
./run --emulator gem5 --eval './' --gem5-readfile 'echo "setup run"'

# Restore and run the first benchmark.
./run --emulator gem5 --gem5-restore 1 --gem5-readfile 'echo "first benchmark"'

# Restore and run the second benchmark.
./run --emulator gem5 --gem5-restore 1 --gem5-readfile 'echo "second benchmark"'
....
* link:rootfs_overlay/lkmc/[]. This script is analogous to gem5's in-tree [hack_back_ckpt.rcS], but with less noise.
+
Usage:
+
....
# Boot, checkpoint and exit.
./run --emulator gem5 --eval './' --gem5-readfile 'echo "setup run"'

# Restore and run the first benchmark.
./run --emulator gem5 --gem5-restore 1 --gem5-readfile 'echo "first benchmark"'

# Restore and run the second benchmark.
./run --emulator gem5 --gem5-restore 1 --gem5-readfile 'echo "second benchmark"'
....

Their usage is also exemplified at <>.

If you forgot to use an appropriate readfile for your boot and the simulation is already running, link:rootfs_overlay/lkmc/[] can be used directly from an interactive guest shell.

First we reset the readfile to something that runs quickly:

....
printf 'echo "first benchmark"' > "$(./getvar gem5_readfile_file)"
....

and then in the guest, take a checkpoint and exit with:

.... ./ ....

Now the guest is in a state where readfile will be executed automatically without interactive intervention:

....
./run --emulator gem5 --gem5-restore 1 --gem5-readfile 'echo "first benchmark"'
./run --emulator gem5 --gem5-restore 1 --gem5-readfile 'echo "second benchmark"'
....

Other loophole possibilities to execute different benchmarks non-interactively include:

  • <<9p>>
  • <>
  • expect
    as mentioned at: + .... #!/usr/bin/expect spawn telnet localhost 3456 expect "# $" send "pwd\r" send "ls /\r" send "m5 exit\r" expect eof .... + This is ugly however as it is not deterministic.[email protected]/msg15233.html

==== gem5 restore checkpoint with a different CPU

gem5 can switch to a different CPU model when restoring a checkpoint.

A common combo is to boot Linux with a fast CPU, make a checkpoint and then replay the benchmark of interest with a slower CPU.

This can be observed interactively in full system with:

.... ./run --arch aarch64 --emulator gem5 ....

Then in the guest terminal after boot ends:

....
sh -c 'm5 checkpoint;sh'
m5 exit
....

And then restore the checkpoint with a different slower CPU:

....
./run --arch aarch64 --emulator gem5 --gem5-restore 1 -- --caches --cpu-type=DerivO3CPU
....

And now you will notice that everything happens much slower in the guest terminal!

One even more direct and minimal way to observe this is with link:userland/freestanding/gem5_checkpoint.S[] which was mentioned at <> plus some logging:

....
./run \
  --arch aarch64 \
  --emulator gem5 \
  --static \
  --trace ExecAll,FmtFlag,O3CPU,SimpleCPU \
  --userland userland/freestanding/gem5_checkpoint.S \
;
cat "$(./getvar --arch aarch64 --emulator gem5 trace_txt_file)"
./run \
  --arch aarch64 \
  --emulator gem5 \
  --gem5-restore 1 \
  --static \
  --trace ExecAll,FmtFlag,O3CPU,SimpleCPU \
  --userland userland/freestanding/gem5_checkpoint.S \
  -- \
  --caches \
  --cpu-type DerivO3CPU \
  --restore-with-cpu DerivO3CPU \
;
cat "$(./getvar --arch aarch64 --emulator gem5 trace_txt_file)"
....

At gem5 2235168b72537535d74c645a70a85479801e0651, the first run does everything in <>:

....
...
   0: SimpleCPU: system.cpu.dcache_port: received snoop pkt for addr:0x1f92 WriteReq
   0: SimpleCPU: system.cpu.dcache_port: received snoop pkt for addr:0x1e40 WriteReq
   0: SimpleCPU: system.cpu.dcache_port: received snoop pkt for addr:0x1e30 WriteReq
   0: SimpleCPU: system.cpu: Tick
   0: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue    : movz x0, #0, #0 : IntAlu : D=0x0000000000000000 flags=(IsInteger)
 500: SimpleCPU: system.cpu: Tick
 500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+4  : movz x1, #0, #0 : IntAlu : D=0x0000000000000000 flags=(IsInteger)
1000: SimpleCPU: system.cpu: Tick
1000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+8  : m5checkpoint : IntAlu : flags=(IsInteger|IsNonSpeculative|IsUnverifiable)
1000: SimpleCPU: system.cpu: Resume
1500: SimpleCPU: system.cpu: Tick
1500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+12 : movz x0, #0, #0 : IntAlu : D=0x0000000000000000 flags=(IsInteger)
2000: SimpleCPU: system.cpu: Tick
2000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+16 : m5exit : No_OpClass : flags=(IsInteger|IsNonSpeculative)
....

and after restore we see, as expected, a single instruction executed amidst the O3 log noise:

....
FullO3CPU: Ticking main, FullO3CPU.
79000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+12 : movz x0, #0, #0 : IntAlu : D=0x0000000000000000 FetchSeq=1 CPSeq=1 flags=(IsInteger)
82500: O3CPU: system.cpu: Removing committed instruction [tid:0] PC (0x400084=>0x400088).(0=>1) [sn:1]
82500: O3CPU: system.cpu: Removing instruction, [tid:0] [sn:1] PC (0x400084=>0x400088).(0=>1)
82500: O3CPU: system.cpu: Scheduling next tick!
83000: O3CPU: system.cpu:
....

which is the `movz` that comes right after the checkpoint. The final `m5exit` does not appear due to DerivO3CPU logging insanity.



===== gem5 fast forward

Besides switching CPUs after a checkpoint restore, gem5 also has the `--fast-forward` option to automatically run the workload from the start on a less detailed CPU, and switch to a more detailed CPU at a given tick.

This is generally useless compared to checkpoint restoring because:

  • checkpoint restore allows you to run multiple different workloads after the restore, and to restore into multiple different system states, which you almost always want to do
  • we generally don't know the exact tick at which the region of interest will start, especially as the binaries change. It is much easier to just instrument the workload with a checkpoint <>

But let's give it a try anyway with link:userland/freestanding/gem5_checkpoint.S[], which was mentioned at <>:

....
./run \
  --arch aarch64 \
  --emulator gem5 \
  --static \
  --trace ExecAll,FmtFlag,O3CPU,SimpleCPU \
  --userland userland/freestanding/gem5_checkpoint.S \
  -- \
  --caches \
  --cpu-type DerivO3CPU \
  --fast-forward 1000 \
;
cat "$(./getvar --arch aarch64 --emulator gem5 trace_txt_file)"
....

At gem5 2235168b72537535d74c645a70a85479801e0651 we see something like:

....
0: O3CPU: system.switch_cpus: Creating O3CPU object.
0: O3CPU: system.switch_cpus: Workload[0] process is 0
0: SimpleCPU: system.cpu: ActivateContext 0
0: SimpleCPU: system.cpu.dcache_port: received snoop pkt for addr:0 WriteReq
0: SimpleCPU: system.cpu.dcache_port: received snoop pkt for addr:0x40 WriteReq
...
0: SimpleCPU: system.cpu.dcache_port: received snoop pkt for addr:0x1f92 WriteReq
0: SimpleCPU: system.cpu.dcache_port: received snoop pkt for addr:0x1e40 WriteReq
0: SimpleCPU: system.cpu.dcache_port: received snoop pkt for addr:0x1e30 WriteReq
0: SimpleCPU: system.cpu: Tick
0: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue    :   movz   x0, #0, #0        : IntAlu :  D=0x0000000000000000  flags=(IsInteger)
500: SimpleCPU: system.cpu: Tick
500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+4    :   movz   x1, #0, #0        : IntAlu :  D=0x0000000000000000  flags=(IsInteger)
1000: SimpleCPU: system.cpu: Tick
1000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+8 : m5checkpoint : IntAlu : flags=(IsInteger|IsNonSpeculative|IsUnverifiable)
1000: O3CPU: system.switch_cpus: [tid:0] Calling activate thread.
1000: O3CPU: system.switch_cpus: [tid:0] Adding to active threads list
1500: O3CPU: system.switch_cpus:
FullO3CPU: Ticking main, FullO3CPU.
1500: O3CPU: system.switch_cpus: Scheduling next tick!
2000: O3CPU: system.switch_cpus:
FullO3CPU: Ticking main, FullO3CPU.
2000: O3CPU: system.switch_cpus: Scheduling next tick!
2500: O3CPU: system.switch_cpus:
...
FullO3CPU: Ticking main, FullO3CPU.
44500: ExecEnable: system.switch_cpus: A0 T0 : @asm_main_after_prologue+12 : movz x0, #0, #0 : IntAlu : D=0x00000000000
48000: O3CPU: system.switch_cpus: Removing committed instruction [tid:0] PC (0x400084=>0x400088).(0=>1) [sn:1]
48000: O3CPU: system.switch_cpus: Removing instruction, [tid:0] [sn:1] PC (0x400084=>0x400088).(0=>1)
48000: O3CPU: system.switch_cpus: Scheduling next tick!
48500: O3CPU: system.switch_cpus:
...
....

We can also compare that to the same log but without `--fast-forward` and the other CPU switch options:

....
0: SimpleCPU: system.cpu.dcache_port: received snoop pkt for addr:0x1e40 WriteReq
0: SimpleCPU: system.cpu.dcache_port: received snoop pkt for addr:0x1e30 WriteReq
0: SimpleCPU: system.cpu: Tick
0: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue : movz x0, #0, #0 : IntAlu : D=0x0000000000000000 flags=(IsInteger)
500: SimpleCPU: system.cpu: Tick
500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+4 : movz x1, #0, #0 : IntAlu : D=0x0000000000000000 flags=(IsInteger)
1000: SimpleCPU: system.cpu: Tick
1000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+8 : m5checkpoint : IntAlu : flags=(IsInteger|IsNonSpeculative|IsUnverifiable)
1000: SimpleCPU: system.cpu: Resume
1500: SimpleCPU: system.cpu: Tick
1500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+12 : movz x0, #0, #0 : IntAlu : D=0x0000000000000000 flags=(IsInteger)
2000: SimpleCPU: system.cpu: Tick
2000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+16 : m5exit : No_OpClass : flags=(IsInteger|IsNonSpeculative)
....

Therefore, it is clear that what we wanted did happen:

  • up until tick 1000, `system.cpu` was ticking
  • after tick 1000, `system.switch_cpus` started ticking



==== gem5 checkpoint upgrader

The in-tree `util/cpt_upgrader.py` script is a tool to upgrade checkpoints taken with an older version of gem5 to be compatible with the newest version, so you can update gem5 without having to re-run the simulation that generated the checkpoints.

For example, whenever a <>, old checkpoints break unless upgraded.

Unfortunately, since the process is not very automated (automatable?), and requires manually patching the upgrader every time a new breaking change is made, the upgrader tends to break if you try to move across many versions of gem5, as of 2020. This is evidenced in bug reports such as this one:

The script can be used as:

....
util/cpt_upgrader.py m5out/cpt.1000/m5.cpt
....

This updates the `m5.cpt` file in-place, and a backup of the old file is generated alongside it.

The upgrader determines which upgrades are needed by checking the `version_tags` entry of the checkpoint:

....
[Globals]
version_tags=arm-ccregs arm-contextidr-el2 arm-gem5-gic-ext ...
....

Each of those tags corresponds to a Python file under `util/cpt_upgraders/` in the gem5 tree that implements the corresponding upgrade step.

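Since `m5.cpt` is INI-formatted, a quick way to see which tags a checkpoint already carries is to parse that entry. A minimal Python sketch (the helper name is made up):

```python
import configparser

def checkpoint_version_tags(cpt_text):
    # m5.cpt is INI-formatted, so configparser can read the [Globals]
    # section and its space-separated version_tags entry directly.
    parser = configparser.ConfigParser()
    parser.read_string(cpt_text)
    return set(parser.get('Globals', 'version_tags').split())

sample = """[Globals]
version_tags=arm-ccregs arm-contextidr-el2 arm-gem5-gic-ext
"""
print(sorted(checkpoint_version_tags(sample)))
```

Comparing that set against the tags the upgrader knows about tells you which upgrade steps would run.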

=== Pass extra options to gem5

Remember that in the gem5 command line, we can either pass options to the script being run as in:

....
build/X86/gem5.opt configs/examples/ --some-option
....

or to the gem5 executable itself:

....
build/X86/gem5.opt --some-option configs/examples/
....

To pass options to the script in our setup, use:

  • get help:
+
....
./run --emulator gem5 -- -h
....
  • boot with the more detailed and slower `HPI` CPU model:
+
....
./run --arch arm --emulator gem5 -- --caches --cpu-type=HPI
....

To pass options to the gem5 executable itself, we expose the `--gem5-exe-args` option:

  • get help:
+
....
./run --gem5-exe-args='-h' --emulator gem5
....

=== m5ops

m5ops are magic instructions which lead gem5 to do magic things, like quitting or dumping stats.


There are two main ways to use m5ops:

  • <>
  • <>

Using the `m5` executable is convenient if you only want to take snapshots before or after the benchmark, without altering its source code. It uses the <> as its backend.

It cannot / should not be used however:

  • in baremetal setups
  • when you want to call the instructions from inside interest points of your benchmark. Otherwise you add the syscall overhead to the benchmark, which is more intrusive and might affect results.
+
Why not just hardcode some <> as in our example instead, since you are going to modify the source of the benchmark anyway?

==== gem5 m5 executable

`m5` is a command line utility that is installed and run on the guest, and serves as a CLI front-end for the <>.

Its source is present in the gem5 tree:

It is possible to guess what most tools do from the corresponding <>, but let's at least document the less obvious ones here.

In LKMC we build `m5` with:

....
./build-m5 --arch aarch64
....


The `m5` executable can be run on <> as normal with:

....
./run --arch aarch64 --emulator gem5 --userland "$(./getvar --arch aarch64 out_rootfs_overlay_bin_dir)/m5" --cli-args dumpstats
....

This can be a good test <> since it executes very quickly.

===== m5 exit

End the simulation.

Sane Python scripts will then exit gem5 with status 0.

===== m5 dumpstats

Makes gem5 dump one more statistics entry to the <>.

===== m5 fail

End the simulation with a failure exit event:

....
m5 fail 1
....

Sane Python scripts would use that as the exit status of gem5, which would be useful for testing purposes, but the in-tree scripts at 200281b08ca21f0d2678e23063f088960d3c0819 just print an error message:

....
Simulated exit code not 0! Exit code is 1
....

and exits with status 0.

We then parse that string ourselves in link:run[] and exit with the correct status...

TODO: it used to be like that, but it actually got changed to just print the message. Why?

`m5 fail` is just a superset of `m5 exit`, which is equivalent to:

....
m5 fail 0
....

as can be seen from the source:

===== m5 writefile

Send a guest file to the host. <<9p>> is a more advanced alternative.


Guest:

....
echo mycontent > myfile_guest
m5 writefile myfile_guest myfile_host
....


Host:

....
cat "$(./getvar --arch aarch64 --emulator gem5 m5out_dir)/myfile_host"
....

Does not work for subdirectories, gem5 crashes:

....
m5 writefile myfile_guest mydir_host/myfile_host
....

===== m5 readfile

Read a host file pointed to by the `--script` option and write it to stdout.


Host:

....
date > "$(./getvar gem5_readfile_file)"
....


Guest:

....
m5 readfile
....

Outcome: date shows on guest.

===== m5 initparam

Ermm, just another <> that only takes integers and only from CLI options? Is this software so redundant?


....
./run --emulator gem5 --gem5-restore 1 -- --initparam 13
./run --emulator gem5 --gem5-restore 1 -- --initparam 42
....


....
m5 initparam
....

Outputs the given parameter.

===== m5 execfile

Trivial combination of `m5 readfile` + executing the read script.


Host:

....
printf '#!/bin/sh
echo asdf
' > "$(./getvar gem5_readfile_file)"
....


Guest:

....
touch /tmp/execfile
chmod +x /tmp/execfile
m5 execfile
....


Outcome:

....
asdf
....

==== m5ops instructions

There are a few different possible instructions that can be used to implement identical m5ops:

  • magic instructions reserved in the encoding space
  • magic addresses: <>
  • unused <> addresses space on ARM platforms

All of those those methods are exposed through the <> in-tree executable. You can select which method to use when calling the executable, e.g.:

....
# Magic instruction.
m5 exit
# Same as the above.
m5 --inst exit
# Magic address. The address is mandatory if not configured at build time.
m5 --addr 0x10010000 exit
# Semihosting.
m5 --semi exit
....

To make things simpler to understand, you can play around with our own minimized educational examples:

  • link:userland/c/m5ops.c[]
  • link:userland/cpp/m5ops.cpp[]

The instructions used by those examples are present in link:lkmc/m5ops.h[] in an easy-to-understand and reusable inline assembly form.

To use that file, first rebuild the example with the m5ops instructions enabled and install it on the root filesystem:

....
./build-userland \
  --arch aarch64 \
  --force-rebuild \
  userland/c/m5ops.c \
;
./build-buildroot --arch aarch64
....

We don't enable the m5ops instructions by default on userland executables because we try to use a single image for gem5, QEMU and <>, and those instructions would break the latter two. We enable them in the <> by default since we already have different images for QEMU and gem5 there.

Then, from inside <>, test it out with:



....
# checkpoint
./c/m5ops.out c
# dumpstats
./c/m5ops.out d
# exit
./c/m5ops.out e
# dump resetstats
./c/m5ops.out r
....

In theory, the cleanest way to add m5ops to your benchmarks would be to do exactly what the `m5` tool does:

  • include the m5ops header
  • link with the m5op assembly file for the correct arch, e.g. the aarch64 one for aarch64

However, I think it is usually not worth the trouble of hacking up the build system of the benchmark to do this, and I recommend just hardcoding in a few raw instructions here and there, and managing them with version control.



  •[email protected]/msg15418.html

===== m5ops magic addresses

These are magic addresses that when accessed lead to an <>.

The base address is given by the `system.m5ops_base` parameter, and then each m5op happens at a different address offset from that base. If that parameter is 0, then the memory m5ops are disabled.

Note that the address is physical, and therefore when running in full system on top of the Linux kernel, you must first map a virtual address to that physical address, as mentioned at: <>.

One advantage of this method is that it can work with <>, whereas the magic instructions don't, since the host cannot handle them and it is hard to hook into that.

A <> example of that can be found at: link:baremetal/arch/aarch64/no_bootloader/m5_exit_addr.S[].

As of gem5 0d5a80cb469f515b95e03f23ddaf70c9fd2ecbf2, `--baremetal` disables the memory m5ops for some reason, therefore you should run that program as:

....
./run \
  --arch aarch64 \
  --baremetal baremetal/arch/aarch64/no_bootloader/m5_exit_addr.S \
  --emulator gem5 \
  --trace-insts-stdout \
  -- \
  --param 'system.m5ops_base=0x10010000' \
;
....

TODO failing with:

....
info: Entering event queue @ 0.  Starting simulation...
fatal: Unable to find destination for [0x10012100:0x10012108] on system.iobus
....
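The fatal address above is consistent with a simple base-plus-offset layout: with base 0x10010000 and the exit m5op constant 0x21, an offset of `func << 8` lands exactly on the 0x10012100 reported in the error. A quick sketch of that arithmetic (the layout is inferred from this log, not from gem5 documentation):

```python
M5OPS_BASE = 0x10010000  # from --param 'system.m5ops_base=0x10010000'
M5OP_EXIT = 0x21         # magic constant of the exit m5op

def m5op_addr(base, func):
    # Inferred layout: each m5op gets its own 256-byte slot
    # at base + (func << 8).
    return base + (func << 8)

print(hex(m5op_addr(M5OPS_BASE, M5OP_EXIT)))  # → 0x10012100, as in the fatal message
```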

===== m5ops instructions interface

Let's study how the <> uses them:

  • the m5ops header: defines the magic constants that represent the instructions
  • the per-arch m5op assembly files: use those magic constants via C preprocessor magic
  • `m5.c`: the actual executable. Gets linked against the per-arch assembly file, which defines a function for each m5op.

We notice that there are two different implementations for each arch:

  • magic instructions, which don't exist in the corresponding arch
  • magic memory addresses on a given page: <>

Then, in aarch64 magic instructions for example, the lines:

....
.macro  m5op_func, name, func, subfunc
        .globl \name
\name:
        .long 0xff000110 | (\func << 16) | (\subfunc << 12)
        ret
....

define a simple function function for each m5op. Here we see that:

  • 0xff000110
    is a base mask for the magic non-existing instruction
  • \func
    are OR-applied on top of the base mask, and define m5op this is. + Those values will loop over the magic constants defined in
    with the deferred preprocessor idiom. + For example,
    due to: + .... #define M5OP_EXIT 0x21 ....
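As a sanity check on that encoding, we can compute the instruction word the macro produces for the exit m5op (`func` 0x21, `subfunc` 0):

```python
M5OP_BASE_MASK = 0xff000110  # base mask from the m5op_func macro above
M5OP_EXIT = 0x21             # from "#define M5OP_EXIT 0x21"

def m5op_word(func, subfunc=0):
    # Mirrors: .long 0xff000110 | (\func << 16) | (\subfunc << 12)
    return M5OP_BASE_MASK | (func << 16) | (subfunc << 12)

print(hex(m5op_word(M5OP_EXIT)))  # → 0xff210110
```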


`m5.c` then calls the defined functions as in:

.... m5_exit(ints[0]); ....

Therefore, the runtime "argument" that gets passed to the instruction, e.g. the delay in ticks until the exit for

m5 exit
, gets passed directly through the[aarch64 calling convention].

Keep in mind that for all archs, the calls are made with 64-bit integers:

....
uint64_t ints[2] = {0,0};
parse_int_args(argc, argv, ints, argc);
m5_fail(ints[1], ints[0]);
....

Therefore, for example:

  • aarch64 uses `x0` for the first argument and `x1` for the second, since each register is 64 bits long already
  • arm uses `r0` and `r1` for the first argument, and `r2` and `r3` for the second, since each register is only 32 bits long

That convention specifies that registers `x0` through `x7` contain the function arguments, so `x0` contains the first argument, and `x1` the second.
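For instance, the 32-bit split that an arm register pair would carry for one 64-bit argument is just:

```python
def split_u64(value):
    # Low and high 32-bit halves of a 64-bit argument, as a 32-bit
    # ABI would place them in a register pair.
    lo = value & 0xFFFFFFFF
    hi = (value >> 32) & 0xFFFFFFFF
    return lo, hi

print([hex(h) for h in split_u64(0x1122334455667788)])  # → ['0x55667788', '0x11223344']
```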

In our own m5ops examples, we just hardcode everything in the assembly one-liners we are producing.

We ignore the `subfunc` since it is always 0 on the ops that interest us.

===== m5op annotations

The m5ops documentation also describes some annotation instructions. TODO: what do they mean?

=== gem5 arm Linux kernel patches

ARM Holdings maintains Linux kernel forks with a few gem5-specific patches on top of a few upstream kernel releases.

Our link:build[] script automatically adds that remote for us as `gem5-arm`.

The patches are optional: the vanilla kernel does boot. But they add some interesting gem5-specific optimizations, instrumentations and device support.

The patches also <> that are known to work well with gem5.

E.g. for arm v4.9 there is:[].

In order to use those patches and their associated configs, we recommend using <> as:

....
git -C "$(./getvar linux_source_dir)" fetch gem5-arm gem5/v4.15:gem5/v4.15
git -C "$(./getvar linux_source_dir)" checkout gem5/v4.15
./build-linux \
  --arch aarch64 \
  --custom-config-file-gem5 \
  --linux-build-id gem5-v4.15 \
;
git -C "$(./getvar linux_source_dir)" checkout -
./run \
  --arch aarch64 \
  --emulator gem5 \
  --linux-build-id gem5-v4.15 \
;
....

QEMU also boots that kernel successfully:

....
./run \
  --arch aarch64 \
  --linux-build-id gem5-v4.15 \
;
....

but glibc kernel version checks make init fail with:

....
FATAL: kernel too old
....

because glibc was built to expect a newer Linux kernel as shown at: xref:fatal-kernel-too-old-failure-in-userland-simulation[xrefstyle=full]. Your choices to solve this are:

  • see if there is a more recent gem5 kernel available, or port your patch of interest to the newest kernel
  • modify this repo to use <>, which is not hard because of Buildroot
  • patch glibc to remove that check, which is easy because glibc is in a submodule of this repo

It is obviously not possible to understand what the Linux kernel fork commits actually do from their commit message, so let's explain them one by one here as we understand them:

  • drm: Add component-aware simple encoder
    allows you to see images through VNC, see: xref:gem5-graphic-mode[xrefstyle=full]
  • gem5: Add support for gem5's extended GIC mode
    adds support for more than 8 cores, see: xref:gem5-arm-full-system-with-more-than-8-cores[xrefstyle=full]

Tested on 649d06d6758cefd080d04dc47fd6a5a26a620874 + 1.

==== gem5 arm Linux kernel patches boot speedup

We have observed that with the kernel patches, boot is 2x faster, falling from 1m40s to 50s.


Comparing the boot logs of both kernels, we see that a large part of the difference is at the message:

....
clocksource: Switched to clocksource arch_sys_counter
....

which takes 4s on the patched kernel, and 30s on the unpatched one! TODO understand why, especially if it is a config difference, or if it actually comes from a patch.

=== m5out directory

When you run gem5, it generates an `m5out` directory at:

....
echo "$(./getvar --arch arm --emulator gem5 m5out_dir)"
....

The location of that directory can be set with the `gem5.opt -d` option, and defaults to `m5out`.

The files in that directory contain some very important information about the run, and you should become familiar with every one of them.

[[gem5-m5out-system-terminal-file]] ==== gem5 m5out/system.terminal file

Contains UART output, both from the Linux kernel or from the baremetal system.

Can also be seen live on <>.

[[gem5-m5out-system-dmesg-file]] ==== gem5 m5out dmesg file

This file used to be called just `system.dmesg`, but the name was changed after the workload refactorings of March 2020.

This file is capable of showing kernel messages that are printed before the serial is enabled, as described at: <>.

The file is dumped only on kernel panics which gem5 can detect by the PC address: <>.

This mechanism can be very useful to debug the Linux kernel boot if problems happen before the serial is enabled.

This magic mechanism works by activating an event when the PC reaches the address of the kernel panic function, much like gem5 <>, and then parsing printk function arguments and buffers!

The relevant source is at[


We can test this mechanism in a controlled way by hacking a `panic()` call into the kernel next to a `pr_info` that shows up before the serial is enabled, e.g. on Linux v5.4.3 we could do:

....
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index f296d89be757..3e79916322c2 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -6207,6 +6207,7 @@ void __init ftrace_init(void)

 	pr_info("ftrace: allocating %ld entries in %ld pages\n",
 		count, count / ENTRIES_PER_PAGE + 1);
+	panic("foobar");

 	last_ftrace_enabled = ftrace_enabled = 1;
....

With this, after the panic, the dmesg dump file contains, on LKMC d09a0d97b81582cc88381c4112db631da61a048d aarch64:

....
[0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd070]
[0.000000] Linux version 5.4.3-dirty ([email protected]) (gcc version 8.3.0 (Buildroot 2019.11-00002-g157ac499cf)) #1 SMP Thu Jan 1 00:00:00 UTC 1970
[0.000000] Machine model: V2P-CA15
[0.000000] Memory limited to 256MB
[0.000000] efi: Getting EFI parameters from FDT:
[0.000000] efi: UEFI not found.
[0.000000] On node 0 totalpages: 65536
[0.000000] DMA32 zone: 1024 pages used for memmap
[0.000000] DMA32 zone: 0 pages reserved
[0.000000] DMA32 zone: 65536 pages, LIFO batch:15
[0.000000] percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
[0.000000] pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
[0.000000] pcpu-alloc: [0] 0
[0.000000] Detected PIPT I-cache on CPU0
[0.000000] CPU features: detected: ARM erratum 832075
[0.000000] CPU features: detected: EL2 vector hardening
[0.000000] ARM_SMCCC_ARCH_WORKAROUND_1 missing from firmware
[0.000000] Built 1 zonelists, mobility grouping on. Total pages: 64512
[0.000000] Kernel command line: earlyprintk=pl011,0x1c090000 lpj=19988480 rw loglevel=8 mem=256MB root=/dev/sda console_msg_format=syslog nokaslr norandmaps panic=-1 printk.devkmsg=on printk.time=y rw console=ttyAMA0 - lkmc_home=/lkmc
[0.000000] Dentry cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
[0.000000] Inode-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
[0.000000] mem auto-init: stack:off, heap alloc:off, heap free:off
[0.000000] Memory: 233432K/262144K available (6652K kernel code, 792K rwdata, 2176K rodata, 896K init, 659K bss, 28712K reserved, 0K cma-reserved)
[0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
[0.000000] ftrace: allocating 22067 entries in 87 pages
....

So we see that messages up to the `ftrace: allocating` line just before our panic do show up!

[[gem5-m5out-stats-txt-file]] ==== gem5 m5out/stats.txt file

This file contains important statistics about the run:

....
cat "$(./getvar --arch aarch64 m5out_dir)/stats.txt"
....

Whenever we run `m5 dumpstats`, or when the simulation exits (TODO which other events?), a section with the following format is added to that file:

....
---------- Begin Simulation Statistics ----------
[the stats]
---------- End Simulation Statistics ----------
....

That file contains several important execution metrics, e.g. number of cycles and several types of cache misses:

....
system.cpu.numCycles
system.cpu.dtb.inst_misses
system.cpu.dtb.inst_hits
....
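Given the simple `name value # description` line format shown above, extracting scalar stats from a dump is easy to script. A minimal Python sketch (helper name made up, first dump section only):

```python
def parse_stats(text):
    # Parse scalar gem5 stats.txt lines of the form
    # "name value # description" into a name -> value dict.
    stats = {}
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()  # drop the description comment
        if not line or line.startswith('-'):
            continue  # skip blanks and the Begin/End separator lines
        parts = line.split()
        if len(parts) >= 2:
            stats[parts[0]] = parts[1]
    return stats

sample = """---------- Begin Simulation Statistics ----------
final_tick 91432000 # Number of ticks from beginning of simulation
---------- End Simulation Statistics ----------
"""
print(parse_stats(sample)['final_tick'])  # → 91432000
```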

For x86, it is interesting to try and correlate such stats with the executed instruction trace.


In LKMC f42c525d7973d70f4c836d2169cc2bd2893b4197 gem5 5af26353b532d7b5988cf0f6f3d0fbc5087dd1df, the stat file for a <> hello world:

....
./run --arch aarch64 --emulator gem5 --userland userland/c/hello.c
....

which has a single dump done at the exit, has size 59KB and stat lines of form:

....
final_tick 91432000 # Number of ticks from beginning of simulation (restored from checkpoints and never reset)
....

We can reduce the file size by adding the `?desc=false` magic suffix to the stats file name:

....
--stats-file stats.txt?desc=false
....

as explained in:

....
gem5.opt --stats-help
....

and this reduces the file size to 39KB by removing those excessive comments:

....
final_tick 91432000
....

although trailing spaces are still present.

We can further reduce this size by removing spaces from the dumps with this hack:

....
         ccprintf(stream, " |%12s %10s %10s", ValueToString(value, precision),
                  pdfstr.str(), cdfstr.str());
     } else {
-        ccprintf(stream, "%-40s %12s %10s %10s", name,
-                 ValueToString(value, precision), pdfstr.str(), cdfstr.str());
+        ccprintf(stream, "%s %s", name, ValueToString(value, precision));
+        if (pdfstr.rdbuf()->in_avail())
+            stream << " " << pdfstr.str();
+        if (cdfstr.rdbuf()->in_avail())
+            stream << " " << cdfstr.str();

         if (descriptions) {
             if (!desc.empty())
....

and after that the file size went down to 21KB.

===== gem5 HDF5 statistics

We can make gem5 dump statistics in the <> format by adding the magic `h5://` prefix to the file name as in:

....
gem5.opt --stats-file h5://stats.h5
....

as explained in:

....
gem5.opt --stats-help
....

This is not exposed in LKMC f42c525d7973d70f4c836d2169cc2bd2893b4197 however, you just have to <>.

TODO what is the advantage? The generated file for `--stats-file h5://stats.h5?desc=False` in LKMC f42c525d7973d70f4c836d2169cc2bd2893b4197 gem5 5af26353b532d7b5988cf0f6f3d0fbc5087dd1df for a single dump was 946KB, so much larger than the text version seen at <>, which was only 59KB max!

We then try to see if it is any better when you have a bunch of dump events:

....
./run --arch aarch64 --emulator gem5 --userland userland/c/m5ops.c --cli-args 'd 1000'
....

and there, yes, we see that the file size fell from 39MB for the text format to 3.2MB for HDF5, so the increase observed previously was just due to some initial size overhead (considering the patched gem5 with no spaces in the text file).

We also note however that the stat dump made such a simulation, which just loops and dumps, considerably slower, from 3s to 15s on <>. Fascinating, we are definitely not disk bound there.

We enable HDF5 on the build by default. To disable it, you can add `USE_HDF5=0` to the build as in:

....
./build-gem5 -- USE_HDF5=0
....

Library support is automatically detected, and only built if you have it installed. But there have been some compilation bugs with HDF5, which is why you might want to turn it off sometimes, e.g.:

===== gem5 only dump selected stats

To prevent the stats file from becoming humongous.

===== Meaning of each gem5 stat

Well, run minimal examples, and reverse engineer them up!

We can start with link:userland/arch/aarch64/freestanding/linux/hello.S[] on the atomic CPU, with <>:

....
./run \
  --arch aarch64 \
  --emulator gem5 \
  --userland userland/arch/aarch64/freestanding/linux/hello.S \
  --trace ExecAll \
  --trace-stdout \
;
....

which gives:

....
0: system.cpu: A0 T0 : @_start : movz x0, #1, #0 : IntAlu : D=0x0000000000000001 flags=(IsInteger)
500: system.cpu: A0 T0 : @_start+4 : adr x1, #28 : IntAlu : D=0x0000000000400098 flags=(IsInteger)
1000: system.cpu: A0 T0 : @_start+8 : ldr w2, #4194464 : MemRead : D=0x0000000000000006 A=0x4000a0 flags=(IsInteger|IsMemRef|IsLoad)
1500: system.cpu: A0 T0 : @_start+12 : movz x8, #64, #0 : IntAlu : D=0x0000000000000040 flags=(IsInteger)
2000: system.cpu: A0 T0 : @_start+16 : svc #0x0 : IntAlu : flags=(IsSerializeAfter|IsNonSpeculative|IsSyscall)
2500: system.cpu: A0 T0 : @_start+20 : movz x0, #0, #0 : IntAlu : D=0x0000000000000000 flags=(IsInteger)
3000: system.cpu: A0 T0 : @_start+24 : movz x8, #93, #0 : IntAlu : D=0x000000000000005d flags=(IsInteger)
3500: system.cpu: A0 T0 : @_start+28 : svc #0x0 : IntAlu : flags=(IsSerializeAfter|IsNonSpeculative|IsSyscall)
....

The most important stat of all is usually the tick count, which is a direct measure of performance if you modelled your system well:

....
sim_ticks 3500 # Number of ticks simulated
....


Instruction counts are often critical:

....
sim_insts 6 # Number of instructions simulated
sim_ops 6 # Number of ops (including micro ops) simulated
....

`sim_ops` is like `sim_insts`, but it also includes <>.

In <>, syscall instructions are magic, and therefore appear to not be counted, that is why we get 6 instructions instead of 8.

===== gem5 stats internals

This describes the internals of the <>.

GDB call stack to the stats dump:

....
Stats::pythonDump () at build/ARM/python/pybind11/
Stats::StatEvent::process() ()
GlobalEvent::BarrierEvent::process (this=0x555559fa6a80) at build/ARM/sim/
EventQueue::serviceOne (this=0x555558c36080) at build/ARM/sim/
doSimLoop (eventq=0x555558c36080) at build/ARM/sim/
simulate (num_cycles=) at build/ARM/sim/
....


....
void
pythonDump()
{
    py::module m = py::module::import("m5.stats");
    m.attr("dump")();
}
....

This calls the Python-side `def dump`, which does the main dumping.

That function does notably:

....
for output in outputList:
    if output.valid():
        output.begin()
        for stat in stats_list:
            stat.visit(output)
        output.end()
....

`begin` and `end` are defined in C++ and output the header and tail respectively:

....
void
Text::begin()
{
    ccprintf(*stream, "\n---------- Begin Simulation Statistics ----------\n");
}

void
Text::end()
{
    ccprintf(*stream, "\n---------- End Simulation Statistics ----------\n");
    stream->flush();
}
....

`stats_list` contains the stats, `visit` prints them, and `outputList` contains by default just the text output. I don't see any other types of output in gem5, but likely JSON / binary formats could be envisioned.

Tested in gem5 b4879ae5b0b6644e6836b0881e4da05c64a6550d.

==== gem5 config.ini

The `m5out/config.ini` file contains a very good high level description of the system:

....
less "$(./getvar --arch arm --emulator gem5 m5out_dir)/config.ini"
....

That file contains a tree representation of the system, sample excerpt:

....
[root]
type=Root
children=system
full_system=true

[system]
type=ArmSystem
children=cpu cpu_clk_domain
auto_reset_addr_64=false
semihosting=Null

[system.cpu]
type=AtomicSimpleCPU
children=dstage2_mmu dtb interrupts isa istage2_mmu itb tracer
branchPred=Null

[system.cpu_clk_domain]
type=SrcClockDomain
clock=500
....

Each node has:

  • a list of child nodes, e.g. `system.cpu` is a child of `system`, and both `system.cpu` and `system.cpu_clk_domain` are children of `system`, which is a child of `root`
  • a list of parameters, e.g. `semihosting=Null`, which means that <> was turned off
+
The `type` parameter is present on every node, and it maps to a Python object that inherits from <>. For example, `AtomicSimpleCPU` is defined at[src/cpu/simple/].
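Since config.ini is plain INI, the tree can be walked mechanically: section names are the node paths, and `children` lists the child node names. A quick sketch with Python's `configparser` (helper name made up):

```python
import configparser

def node_children(config_text, node):
    # Return the full paths of the children of a node in a gem5
    # config.ini: child names in the "children" key are relative,
    # so prefix them with the parent path (except under root).
    parser = configparser.ConfigParser()
    parser.read_string(config_text)
    children = parser.get(node, 'children', fallback='')
    prefix = '' if node == 'root' else node + '.'
    return [prefix + c for c in children.split()]

sample = """
[root]
type=Root
children=system

[system]
type=ArmSystem
children=cpu
"""
print(node_children(sample, 'system'))  # → ['system.cpu']
```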

Set custom configs with the `--param` option of `./run`, e.g. we can make gem5 wait for GDB to connect with:

....
--param 'system.cpu[0].wait_for_remote_gdb = True'
....

More complex settings involving new classes however require patching the config files, although it is easy to hack this up. See for example: link:patches/manual/gem5-semihost.patch[].

Modifying the

file manually does nothing since it gets overwritten every time.

===== gem5 config.dot

The `m5out/config.dot` file contains a graphviz dot file that provides a simplified graphical view of a subset of the <>.

This file gets automatically converted to SVG and PDF, which you can view after running gem5 with:

....
xdg-open "$(./getvar --arch arm --emulator gem5 m5out_dir)/config.dot.svg"
xdg-open "$(./getvar --arch arm --emulator gem5 m5out_dir)/config.dot.pdf"
....

An example of such file can be seen at: <>.

On Ubuntu 20.04, you can also see the dot file "directly" with xdot:

....
xdot "$(./getvar --arch arm --emulator gem5 m5out_dir)/config.dot"
....

which is kind of really cool because it allows you to view graph arrows on hover. This can be very useful because the PDF and SVG often overlap so many arrows together that you just can't know which one is coming from/going to where.

It is worth noting that if you are running a bunch of short simulations, dot/SVG/PDF generation can have a significant impact on simulation startup time, so it is something to watch out for. It can be turned off with:

....
gem5.opt --dot-config=''
....

or in LKMC:

....
./run --gem5-exe-args='--dot-config= --json-config= --dump-config='
....

The time difference can be readily observed on minimal examples by running gem5 with and without those options. By looking into gem5 872cb227fdc0b4d60acc7840889d567a6936b6e1, we can try to remove the SVG/PDF conversion to see if it dominates the runtime:

....
def do_dot(root, outdir, dotFilename):
    if not pydot:
        warn("No dot file generated. " +
             "Please install pydot to generate the dot file and pdf.")
        return
    # * use ranksep > 1.0 for for vertical separation between nodes
    # especially useful if you need to annotate edges using e.g. visio
    # which accepts svg format
    # * no need for hoizontal separation as nothing moves horizonally
    callgraph = pydot.Dot(graph_type='digraph', ranksep='1.3')
    dot_create_nodes(root, callgraph)
    dot_create_edges(root, callgraph)
    dot_filename = os.path.join(outdir, dotFilename)
    callgraph.write(dot_filename)
    try:
        # dot crashes if the figure is extremely wide.
        # So avoid terminating simulation unnecessarily
        callgraph.write_svg(dot_filename + ".svg")
        callgraph.write_pdf(dot_filename + ".pdf")
    except:
        warn("failed to generate dot output from %s", dot_filename)
....

but nope, they don't: the dot node and edge creation calls are the culprits, so the only way to gain speed is to remove dot generation altogether. It is tempting to do this by default on LKMC and add an option to enable dot generation when desired, so we can be a bit faster by default... but I'm lazy to document the option right now. When it annoys me further, maybe :-)

=== m5term

We use the `m5term` in-tree executable to connect to the terminal instead of a direct `telnet` connection.

If you use `telnet` directly, it mostly works, but certain interactive features don't, e.g.:
  • up and down arrows for history navigation
  • tab to complete paths
  • `Ctrl-C` to kill processes

TODO: understand in detail what `m5term` does differently than `telnet`.

=== gem5 Python scripts without rebuild

We have made a crazy setup that allows you to just go into the gem5 source tree and edit Python scripts directly there.

This is not normally possible with Buildroot, since normal Buildroot packages first copy files to the output directory (under `$(./getvar -a <arch> buildroot_build_build_dir)/`), and then build there. So if you modified the Python scripts with this setup, you would still need to run the build again to copy the modified files over.

For gem5 specifically however, we have hacked up the build so that we

into the
tree, and then do an[out of tree] build to

Another advantage of this method is that we factor out the

gem5 builds which are identical and large, as well as the smaller arch generic pieces.

Using Buildroot for gem5 is still convenient because we use it to:

  • to cross build
    for us
  • check timestamps and skip the gem5 build when it is not requested

The out of tree build is required, because otherwise Buildroot would copy the output build of all archs to each arch directory, resulting in

build copies, which is significant.

[[gem5-fs-biglittle]] === gem5 fs_bigLITTLE

By default, we use



--gem5-script biglittle
option enables the alternative
script instead:

.... ./run --arch aarch64 --emulator gem5 --gem5-script biglittle ....

Advantages over
  • more representative of mobile ARM SoCs, which almost always have big.LITTLE clusters
  • simpler than
    , and therefore easier to understand and modify

Disadvantages over
  • only works for ARM, not other archs
  • not as many configuration options as
    , many things are hardcoded

We set up 2 big and 2 small CPUs, but

cat /proc/cpuinfo
shows 4 identical CPUs instead of two CPUs each of two different types, likely because gem5 does not expose some informational register, much like the caches:[email protected]/msg15426.html <> does show that the two big ones are
and the small ones are

TODO: why is the

required despite
having a DTB generation capability? Without it, nothing shows on terminal, and the simulation terminates with
simulate() limit reached  @  18446744073709551615
. The magic
works however without a DTB.

Tested on:[18c1c823feda65f8b54cd38e261c282eee01ed9f]

=== gem5 in-tree tests

All those tests could in theory be added to this repo instead of to gem5, and that would actually be the superior setup, as it is cross-emulator.

But can the people from the project be convinced of that?

==== gem5 unit tests

These are just very small GTest tests that test a single class in isolation, they don't run any executables.

Build the unit tests and run them:

.... ./build-gem5 --unit-tests ....

Running individual unit tests is not yet exposed, but it is easy to do: while running the full tests, GTest prints each test command being run, e.g.:

....
/path/to/build/ARM/base/circlebuf.test.opt --gtest_output=xml:/path/to/build/ARM/unittests.opt/base/circlebuf.test.xml
[==========] Running 4 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 4 tests from CircleBufTest
[ RUN      ] CircleBufTest.BasicReadWriteNoOverflow
[       OK ] CircleBufTest.BasicReadWriteNoOverflow (0 ms)
[ RUN      ] CircleBufTest.SingleWriteOverflow
[       OK ] CircleBufTest.SingleWriteOverflow (0 ms)
[ RUN      ] CircleBufTest.MultiWriteOverflow
[       OK ] CircleBufTest.MultiWriteOverflow (0 ms)
[ RUN      ] CircleBufTest.PointerWrapAround
[       OK ] CircleBufTest.PointerWrapAround (0 ms)
[----------] 4 tests from CircleBufTest (0 ms total)

[----------] Global test environment tear-down
[==========] 4 tests from 1 test case ran. (0 ms total)
[  PASSED  ] 4 tests.
....

so you can just copy paste the command.

Building individual tests is possible with

(singular, no 's'):

.... ./build-gem5 --unit-test base/circlebuf.test ....

This does not run the test however.

Note that a command and its corresponding results don't need to show consecutively on stdout, because tests are run in parallel. You just have to match them based on the class name

to the file
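Pairing up the interleaved lines by test name rather than by position can be automated; a hypothetical helper (not part of LKMC or gem5) might look like:

```python
import re

# Match "[ RUN ]" / "[ OK ]" / "[ FAILED ]" GTest lines by test name,
# since parallel execution interleaves output from different binaries.
RUN_RE = re.compile(r'\[\s*RUN\s*\]\s+(\S+)')
END_RE = re.compile(r'\[\s*(OK|FAILED)\s*\]\s+(\S+)')

def match_gtest(lines):
    results = {}
    for line in lines:
        m = RUN_RE.search(line)
        if m:
            results.setdefault(m.group(1), None)
        m = END_RE.search(line)
        if m:
            results[m.group(2)] = m.group(1)
    return results
```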

==== gem5 regression tests

This section is about running the gem5 in-tree tests.

Running the larger 2019 regression tests is exposed for example with:

....
./build-gem5 --arch aarch64
./gem5-regression --arch aarch64 -- --length quick --length long
....

Sample run time: 87 minutes on <> Ubuntu 20.04 gem5 872cb227fdc0b4d60acc7840889d567a6936b6e1.

After the first run has downloaded the test binaries for you, you can speed up the process a little bit by skipping a useless SCons call:

.... ./gem5-regression --arch aarch64 -- --length quick --length long --skip-build ....

Note however that running without

is required at least once to download the test binaries, because the test interface is bad.

List available tests instead of running them:

.... ./gem5-regression --arch aarch64 --cmd list -- --length quick --length long ....

You can then pick one suite (has to be a suite, not an "individual test") from the list and run just it e.g. with:

.... ./gem5-regression --arch aarch64 -- --uid SuiteUID:tests/gem5/cpu_tests/ ....

=== gem5 simulate() limit reached

This error happens when the following instruction limits are reached:

....
system.cpu[0].max_insts_all_threads
system.cpu[0].max_insts_any_thread
....

If the parameter is not set, it defaults to

, which is magic and means the huge maximum value of
: 0xFFFFFFFFFFFFFFFF, which in practice would require a very long simulation if at least one CPU were live.

So this usually means all CPUs are in a sleep state, and no events are scheduled in the future, which usually indicates a bug in either gem5 or guest code, leading gem5 to blow up.
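The mechanism can be sketched as a toy event queue (this is an illustration of the behaviour, not gem5 code):

```python
import heapq

MAX_TICK = 2**64 - 1  # 0xFFFFFFFFFFFFFFFF, gem5's default simulate() limit

def simulate(events, limit=MAX_TICK):
    """Toy event loop: an exit event is scheduled at tick `limit`, so if
    all CPUs sleep and nothing else is queued, that event is the next
    one to fire and we 'exit because simulate() limit reached'."""
    heapq.heapify(events)
    heapq.heappush(events, (limit, 'simulate() limit reached'))
    while True:
        tick, cause = heapq.heappop(events)
        if cause == 'simulate() limit reached':
            return tick, cause
        # a real simulator would process the event here, possibly
        # scheduling further events

print(simulate([(1000, 'cpu tick'), (2000, 'cpu tick')]))
```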

Still, gem5 at 08c79a194d1a3430801c04f37d13216cc9ec1da3 does not exit with non-zero status due to this... and so we just parse it out, just as for <>...

A trivial and very direct way to see the message would be:

....
./run \
  --emulator gem5 \
  --userland userland/arch/x86_64/freestanding/linux/hello.S \
  --trace-insts-stdout \
  -- \
  --param 'system.cpu[0].max_insts_all_threads = 3' \
;
....

which as of lkmc 402059ed22432bb351d42eb10900e5a8e06aa623 runs only the first three instructions and quits!

....
info: Entering event queue @ 0.  Starting simulation...
      0: system.cpu A0 T0 : @asm_main_after_prologue    : mov   rax, 0x1
      0: system.cpu A0 T0 : @asm_main_after_prologue.0  :   MOV_R_I : limm   rax, 0x1 : IntAlu :  D=0x0000000000000001  flags=(IsInteger|IsMicroop|IsLastMicroop|IsFirstMicroop)
   1000: system.cpu A0 T0 : @asm_main_after_prologue+7    : mov   rdi, 0x1
   1000: system.cpu A0 T0 : @asm_main_after_prologue+7.0  :   MOV_R_I : limm   rdi, 0x1 : IntAlu :  D=0x0000000000000001  flags=(IsInteger|IsMicroop|IsLastMicroop|IsFirstMicroop)
   2000: system.cpu A0 T0 : @asm_main_after_prologue+14    : lea   rsi, DS:[rip + 0x19]
   2000: system.cpu A0 T0 : @asm_main_after_prologue+14.0  :   LEA_R_P : rdip   t7, %ctrl153,  : IntAlu :  D=0x000000000040008d  flags=(IsInteger|IsMicroop|IsDelayedCommit|IsFirstMicroop)
   2500: system.cpu A0 T0 : @asm_main_after_prologue+14.1  :   LEA_R_P : lea   rsi, DS:[t7 + 0x19] : IntAlu :  D=0x00000000004000a6  flags=(IsInteger|IsMicroop|IsLastMicroop)
Exiting @ tick 3000 because all threads reached the max instruction count
....

The exact same can be achieved with the older hardcoded

mechanism present in

....
./run \
  --emulator gem5 \
  --userland userland/arch/x86_64/freestanding/linux/hello.S \
  --trace-insts-stdout \
  -- \
  --maxinsts 3 \
;
....

Other related options are:

  • --abs-max-tick
    : set the maximum guest simulation time. The same scale as the ExecAll trace is used. E.g., for the above example with 3 instructions, the same trace would be achieved with a value of 3000.

The message also shows on <> deadlocks, for example in link:userland/posix/pthread_deadlock.c[]:

....
./run \
  --emulator gem5 \
  --userland userland/posix/pthread_deadlock.c \
  --cli-args 1 \
;
....

ends in:

.... Exiting @ tick 18446744073709551615 because simulate() limit reached ....

where 18446744073709551615 is 0xFFFFFFFFFFFFFFFF in decimal.
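The conversion is easy to double check, since the tick counter is a 64-bit unsigned integer:

```python
max_tick = 0xFFFFFFFFFFFFFFFF
# the maximum value of an unsigned 64-bit integer
assert max_tick == 2**64 - 1
print(max_tick)  # 18446744073709551615
```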

And there is a <> example at link:baremetal/arch/aarch64/no_bootloader/wfe_loop.S[] that dies on <>:

....
./run \
  --arch aarch64 \
  --baremetal baremetal/arch/aarch64/no_bootloader/wfe_loop.S \
  --emulator gem5 \
  --trace-insts-stdout \
;
....

which gives:

....
info: Entering event queue @ 0.  Starting simulation...
      0: system.cpu A0 T0 : @lkmc_start    : wfe : IntAlu :  D=0x0000000000000000  flags=(IsSerializeAfter|IsNonSpeculative|IsQuiesce|IsUnverifiable)
   1000: system.cpu A0 T0 : @lkmc_start+4    : b : IntAlu :  flags=(IsControl|IsDirectControl|IsUncondControl)
   1500: system.cpu A0 T0 : @lkmc_start    : wfe : IntAlu :  D=0x0000000000000000  flags=(IsSerializeAfter|IsNonSpeculative|IsQuiesce|IsUnverifiable)
Exiting @ tick 18446744073709551615 because simulate() limit reached
....

Other examples of the message:

  • <> with a single CPU stays stopped at a WFE sleep instruction
  • this sample bug on multithreading:

=== gem5 build options

In order to use different build options, you might also want to use <> to keep the build outputs separate from one another.

==== gem5 debug build

How to use it in LKMC: xref:debug-the-emulator[xrefstyle=full].

If you build gem5 with

scons build/ARM/gem5.debug
, then that is a

It relates to the more common

build just as explained at xref:debug-the-emulator[xrefstyle=full]: both
, but

==== gem5 fast build

.... ./build-gem5 --gem5-build-type fast ....

How it goes faster is explained at:

Disables debug symbols (no

) for some reason.

Benchmarks present at:

  • xref:benchmark-emulators-on-userland-executables[xrefstyle=full]

==== gem5 prof and perf builds

Profiling builds as of 3cea7d9ce49bda49c50e756339ff1287fd55df77 both use:

-g -O3
and disable asserts and logging like the <> and:
  • prof
    for gprof
  • perf
    for google-pprof

Profiling techniques are discussed in more detail at: <>.

For the

build, you can get the
file with:

....
./run --arch aarch64 --emulator gem5 --userland userland/c/hello.c --gem5-build-type prof
gprof "$(./getvar --arch aarch64 gem5_executable)" > tmp.gprof
....

==== gem5 clang build

TODO test properly, benchmark vs GCC.

....
sudo apt-get install clang
./build-gem5 --gem5-clang
./run --emulator gem5 --gem5-clang
....

==== gem5 sanitation build

If gem5 appears to have a C++ undefined behaviour bug, which is often very difficult to track down, you can try to build it with the following extra SCons options:

.... ./build-gem5 --gem5-build-id san --verbose -- --with-ubsan --without-tcmalloc ....

This will make GCC do a lot of extra sanitation checks at compile and run time.

As a result, the build and runtime will be way slower than normal, but that still might be the fastest way to solve undefined behaviour problems.

Ideally, we should also be able to run it with asan with

, but if we try then the build fails at gem5 16eeee5356585441a49d05c78abc328ef09f7ace (with two ubsan trivial fixes I'll push soon):


....
==9621==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 371712 byte(s) in 107 object(s) allocated from:
    #0 0x7ff039804448 in malloc (/usr/lib/x86_64-linux-gnu/
    #1 0x7ff03950d065 in dictresize ../Objects/dictobject.c:643

Direct leak of 23728 byte(s) in 26 object(s) allocated from:
    #0 0x7ff039804448 in malloc (/usr/lib/x86_64-linux-gnu/
    #1 0x7ff03945e40d in _PyObject_GC_Malloc ../Modules/gcmodule.c:1499
    #2 0x7ff03945e40d in _PyObject_GC_Malloc ../Modules/gcmodule.c:1493

Direct leak of 2928 byte(s) in 43 object(s) allocated from:
    #0 0x7ff03980487e in __interceptor_realloc (/usr/lib/x86_64-linux-gnu/
    #1 0x7ff03951d763 in list_resize ../Objects/listobject.c:62
    #2 0x7ff03951d763 in app1 ../Objects/listobject.c:277
    #3 0x7ff03951d763 in PyList_Append ../Objects/listobject.c:289

Direct leak of 2002 byte(s) in 3 object(s) allocated from:
    #0 0x7ff039804448 in malloc (/usr/lib/x86_64-linux-gnu/
    #1 0x7ff0394fd813 in PyString_FromStringAndSize ../Objects/stringobject.c:88
    #2 0x7ff0394fd813 in PyString_FromStringAndSize ../Objects/stringobject.c:

Direct leak of 40 byte(s) in 2 object(s) allocated from:
    #0 0x7ff039804448 in malloc (/usr/lib/x86_64-linux-gnu/
    #1 0x7ff03951ea4b in PyList_New ../Objects/listobject.c:152

Indirect leak of 10384 byte(s) in 11 object(s) allocated from:
    #0 0x7ff039804448 in malloc (/usr/lib/x86_64-linux-gnu/
    #1 0x7ff03945e40d in _PyObject_GC_Malloc ../Modules/gcmodule.c:
    #2 0x7ff03945e40d in _PyObject_GC_Malloc ../Modules/gcmodule.c:1493

Indirect leak of 4089 byte(s) in 6 object(s) allocated from:
    #0 0x7ff039804448 in malloc (/usr/lib/x86_64-linux-gnu/
    #1 0x7ff0394fd648 in PyString_FromString ../Objects/stringobject.c:143

Indirect leak of 2090 byte(s) in 3 object(s) allocated from:
    #0 0x7ff039804448 in malloc (/usr/lib/x86_64-linux-gnu/
    #1 0x7ff0394eb36f in type_new ../Objects/typeobject.c:
    #2 0x7ff0394eb36f in type_new ../Objects/typeobject.c:2094

Indirect leak of 1346 byte(s) in 2 object(s) allocated from:
    #0 0x7ff039804448 in malloc (/usr/lib/x86_64-linux-gnu/
    #1 0x7ff0394fd813 in PyString_FromStringAndSize ../Objects/stringobject.c:
    #2 0x7ff0394fd813 in PyString_FromStringAndSize ../Objects/stringobject.c:

SUMMARY: AddressSanitizer: 418319 byte(s) leaked in 203 allocation(s).
....

From the message, this appears to be a Python / pyenv bug however, and not in gem5 specifically. I think it worked when I tried it in the past in an older gem5 / Ubuntu.

is needed / a good idea when using
: since both do more or less similar jobs, see also <>.

==== gem5 Ruby build

gem5 has two types of memory system:

  • the classic memory system, which is used by default, its caches are covered at: <>
  • the Ruby memory system

The Ruby memory system includes the SLICC domain specific language to describe memory systems: SLICC transpiles to C++ auto-generated files under


Ruby seems to have usage outside of gem5, but the naming overload with the link:[Ruby programming language], which also has link:[domain specific languages] as a concept, makes it impossible to google anything about it!

Since it is not the default, Ruby is generally less stable than the classic memory model. However, because it allows describing a wide variety of important <>, while the classic system only describes a single protocol, Ruby is a very important feature of gem5.

Ruby support must be enabled at compile time with the

flag, which compiles support for the desired memory system type.

Note however that most ISAs already implicitly set

via the
directory, e.g.

.... PROTOCOL = 'MOESI_CMP_directory' ....

and therefore ARM already compiles

by default.

Then, with
, you can choose to use either the classic or the ruby system type selected at build time with
at runtime by passing the
  • if
    is given, use the ruby memory system that was compiled into gem5. Caches are always present when Ruby is used, since the main goal of Ruby is to specify the cache coherence protocol, and it therefore hardcodes cache hierarchies.
  • otherwise, use the classic memory system. Caches may be optional for certain CPU types and are enabled with

Note that the

option has some crazy side effects besides enabling Ruby, e.g. it[sets the default
instead of the otherwise default
]. TODO: I have been told that this is because <>.

It is not possible to build more than one Ruby system into a single build, and this is a major pain point for testing Ruby:

For example, to use a two level <> we can do:

....
./build-gem5 --arch aarch64 --gem5-build-id ruby -- PROTOCOL=MESI_Two_Level
./run --arch aarch64 --emulator gem5 --gem5-build-id ruby -- --ruby
....

and during build we see a humongous line of type:

.... [ SLICC] src/mem/protocol/MESI_Two_Level.slicc -> ARM/mem/protocol/, ARM/mem/protocol/AccessPermission.hh, ... ....

which shows that dozens of C++ files are being generated from Ruby SLICC.

The relevant Ruby source files live in the source tree under:

.... src/mem/protocol/MESI_Two_Level* ....

We already pass the

flag by default to the build, which generates an HTML summary of each memory protocol under (TODO broken:[]):

.... xdg-open "$(./getvar --arch aarch64 --gem5-build-id ruby gem5_build_build_dir)/ARM/mem/protocol/html/index.html" ....

A minimized ruby config which was not merged upstream can be found for study at:

One easy way to see that Ruby is being used without understanding it in detail is to <>:

....
./run \
  --arch aarch64 \
  --emulator gem5 \
  --gem5-worktree master \
  --userland userland/arch/aarch64/freestanding/linux/hello.S \
  --static \
  --trace ExecAll,FmtFlag,Ruby,XBar \
  -- \
  --ruby \
;
cat "$(./getvar --arch aarch64 --emulator gem5 trace_txt_file)"
....


  • when the
    flag is given, we see a gazillion Ruby related messages prefixed e.g. by
    . + We also observe from
    lines that instruction timing is not simple anymore, so the memory system must have latencies
  • without
    , we instead see
    (Coherent Crossbar) related messages such as
    , which I believe is the more precise name for the memory model that the classic memory system uses: <>.

Certain features may not work in Ruby. For example, <> creation is only possible in Ruby protocols that support flush, which is the case for

but not
:[email protected]/msg17418.html

Tested in gem5 d7d9bc240615625141cd6feddbadd392457e49eb.

[[gem5-ruby-mi-example-protocol]] ===== gem5 Ruby MI_example protocol

This is the simplest of all protocols, and therefore the first one you should study to learn how Ruby works.

To study it, we can take an approach similar to what was done at: <>.

Our full command line will be something like

....
./build-gem5 --arch aarch64 --gem5-build-id MI_example
./run \
  --arch aarch64 \
  --cli-args '2 100' \
  --cpus 3 \
  --emulator gem5 \
  --userland userland/cpp/atomic/aarch64_add.cpp \
  --gem5-build-id MI_example \
  -- \
  --ruby \
;
....

which produces a <> like the following, but with 3 CPUs instead of 2:

[[config-dot-svg-timingsimplecpu-caches-3-cpus-ruby]] .
for a system with three TimingSimpleCPU CPUs with the Ruby
protocol. image::{cirosantilli-media-base}gem5configTimingSimpleCPU3CPUsMIexample_b1623cb2087873f64197e503ab8894b5e4d4c7b4.svg?sanitize=true[height=600]

===== gem5 crossbar interconnect

Crossbar or

in the code, is the default <> that gets used by
if <> is not given.

It presumably implements a crossbar switch along the lines of:

This is the best introductory example analysis we have so far: <>. It contains more or less the most minimal example in which something interesting can be observed: multiple cores fighting over a single data memory variable.

Long story short: the interconnect contains the snoop mechanism, and it forwards packets coming from caches of a CPU to the caches of other CPUs in which the block is present.

It is therefore the heart of the <> mechanism, as it informs other caches of bus transactions they need to know about.

TODO: describe it in more detail. It appears to be a very simple mechanism.


we see that there is both a coherent and a non-coherent XBar.

It is set at:

....
if options.ruby:
    ...
else:
    MemClass = Simulation.setMemClass(options)
    system.membus = SystemXBar()
....


is defined at
with a nice comment:


....
# One of the key coherent crossbar instances is the system
# interconnect, tying together the CPU clusters, GPUs, and any I/O
# coherent masters, and DRAM controllers.
class SystemXBar(CoherentXBar):
....

Tested in gem5 12c917de54145d2d50260035ba7fa614e25317a3.

==== gem5 Python 3 build

Python 3 support was mostly added in 2019 Q3, around a347a1a68b8a6e370334be3a1d2d66675891e0f1, but remained buggy for some time afterwards.

In an Ubuntu 18.04 host where

by default, build with Python 3 instead with:

.... ./build-gem5 --gem5-build-id python3 -- PYTHON_CONFIG=python3-config ....

Python 3 is then automatically used when running if you use that build.

=== gem5 CPU types

gem5 has a few in tree CPU models for different purposes.

In and, those are selectable with the


The information needed to make highly accurate models isn't generally public for non-free CPUs, so you must either rely on vendor provided models or on experiments/reverse engineering.

There is no simple answer for "what is the best CPU": in theory you have to understand each model and decide which one is closer to your target system.

Whenever possible, stick to:

  • vendor provided ones obviously, e.g. ARM Holdings' models of ARM cores, unless there is good reason not to, as they are the most likely to be accurate
  • newer models instead of older models

Both of those can be checked with `git log` and `git blame`.

All CPU types inherit from the

class, and looking at the class hierarchy in <> gives a good overview of what we have:
  • BaseCPU
    : <> ***
    : <> **
    DerivO3CPU : public FullO3CPU
    : <>

From this we see that there are basically only 4 C++ CPU models in gem5: Atomic, Timing, Minor and O3. All others are basically parametrizations of those base types.
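Conceptually, a "parametrization" is nothing more than a Python subclass of one of the base models that overrides some parameters; a plain-Python sketch (not the gem5 API, and the values are made up for illustration):

```python
# Sketch: derived models like HPI just subclass a base model's Python
# wrapper and override parameters; no new C++ CPU implementation.
class MinorCPU:
    decodeInputWidth = 2
    executeCommitLimit = 2

class HPI(MinorCPU):
    decodeInputWidth = 3  # hypothetical override

print(HPI.decodeInputWidth, HPI.executeCommitLimit)
```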

==== List of gem5 CPU types

===== gem5


Simple abstract CPU without a pipeline.

They are therefore completely unrealistic. But they also run much faster. <> are an alternative way of fast forwarding boot when they work.


  • <>
  • <>

====== gem5


: the default one. Memory accesses happen instantaneously. The fastest simulation except for KVM, but not realistic at all.

Useful to <>.

====== gem5


: memory accesses are realistic, but the CPU has no pipeline. The simulation is faster than detailed models, but slower than

To fully understand

, see: <>.

Without caches, the CPU just stalls all the time, waiting for memory requests for every advance of the PC or memory read from an instruction!

Caches do make a difference here of course, and lead to much faster memory return times.
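A back-of-envelope average memory access time (AMAT) calculation with hypothetical latencies illustrates why this matters so much for a CPU that stalls on every access:

```python
# AMAT = hit_time + miss_rate * miss_penalty; latencies are made up,
# just to show the order-of-magnitude effect of a cache.
def amat(hit_time, miss_rate, miss_penalty):
    return hit_time + miss_rate * miss_penalty

no_cache = amat(hit_time=0, miss_rate=1.0, miss_penalty=100)   # always DRAM
with_cache = amat(hit_time=2, miss_rate=0.05, miss_penalty=100)
print(no_cache, with_cache)  # 100.0 7.0
```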

===== gem5 MinorCPU

Generic <> <> core.

It is a C++ implementation that can be parametrized to more closely match real cores.

Note that since gem5 is highly parametrizable, the parametrization could even change which instructions a CPU can execute by altering its available <>, which are used to model performance.

For example,

allows all implemented instructions, including <> instructions, but a derived class modelling, say, an[ARM Cortex A7 core], might not, since SVE is a newer feature and the A7 core does not have SVE.

The weird name "Minor" stands for "M (TODO what is M) IN ORder".

Its 4 stage pipeline is described at the "MinorCPU" section of <>.

A commented execution example can be seen at: <>.

There is also an in-tree doxygen at:[

] and rendered at:

As of 2019, in-order cores are mostly present in low power/cost contexts, for example little cores of[ARM bigLITTLE].

The following models extend the

class by parametrization to make it match existing CPUs more closely:
  • HPI
    : derived from
    . + Created by Ashkan Tousi in 2017 while working at ARM. + According to <>: + ____ The HPI CPU timing model is tuned to be representative of a modern in-order Armv8-A implementation. ____ +
  • ex5_LITTLE
    : derived from
    . Description reads: + ____ ex5 LITTLE core (based on the ARM Cortex-A7) ____ + Implemented by Pierre-Yves Péneau from LIRMM, which is a research lab in Montpellier, France, in 2017.

===== gem5


Generic <>. "O3" stands for "Out Of Order"!

Basic documentation on the old gem5 wiki:

Analogous to <>, but modelling an out of order core instead of in order.

A commented execution example can be seen at: <>.

The default <> are described at: <>. All default widths are set to 8 instructions, from the <>:

....
[system.cpu]
type=DerivO3CPU
commitWidth=8
decodeWidth=8
dispatchWidth=8
fetchWidth=8
issueWidth=8
renameWidth=8
squashWidth=8
wbWidth=8
....

This can be observed for example at: <>.
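Since `config.ini` is INI-formatted, the widths can be read back with the standard library; the snippet below uses the excerpt above, not a real run's output:

```python
import configparser

# Parse a config.ini-style excerpt and read back the O3 widths.
ini = """\
[system.cpu]
type=DerivO3CPU
commitWidth=8
fetchWidth=8
"""
cfg = configparser.ConfigParser()
cfg.read_string(ini)
print(cfg['system.cpu']['type'], cfg.getint('system.cpu', 'commitWidth'))
```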

Existing parametrizations:

  • ex5_big
    : big core corresponding to
    , by the same author at the same time. Its description reads: + ____ ex5 big core (based on the ARM Cortex-A15) ____
  • O3_ARM_v7a
    : implemented by Ronald Dreslinski from the[University of Michigan] in 2012 + Not sure why it has v7a in the name, since I believe the CPUs are just the microarchitectural implementation of any ISA, and the v8 hello world did run. + The CLI option is named slightly differently as:
    --cpu-type O3_ARM_v7a_3

====== gem5

pipeline stages
  • fetch: besides obviously fetching the instruction, this is also where branch prediction runs. Presumably because you need to branch predict before deciding what to fetch next.

  • retire: the instruction is completely and totally done with. + Mispeculated instructions never reach this stage as can be seen at: <>. + The

    happens at this time as well. And therefore
    does not happen for mispeculated instructions.

[[gem5-util-o3-pipeview-py-o3-pipeline-viewer]] ====== gem5 util/o3-pipeview.py O3 pipeline viewer

Mentioned at:

....
./run \
  --arch aarch64 \
  --emulator gem5 \
  --userland userland/arch/aarch64/freestanding/linux/hello.S \
  --trace O3PipeView \
  --trace-stdout \
  -- \
  --cpu-type DerivO3CPU \
  --caches \
;
"$(./getvar gem5_source_dir)/util/o3-pipeview.py" -c 500 -o o3pipeview.tmp.log --color "$(./getvar --arch aarch64 trace_txt_file)"
less -R o3pipeview.tmp.log
....

Or without color:

....
"$(./getvar gem5_source_dir)/util/o3-pipeview.py" -c 500 -o o3pipeview.tmp.log "$(./getvar --arch aarch64 trace_txt_file)"
less o3pipeview.tmp.log
....

A sample output for this can be seen at: <>.

====== gem5 Konata O3 pipeline viewer

Appears to be browser based, so you can zoom in and out, rather than the forced wrapping as for <>.

Uses the same data source as


<> shows how the text-based visualization can get problematic due to stalls requiring wraparounds.

==== gem5 ARM RSK

Dated 2017, it contains a good overview of gem5 CPUs.

=== gem5 ARM platforms

The gem5 platform is selectable with the

option, which is named after the analogous QEMU
option, and which sets the

Each platform represents a different system with different devices, memory and interrupt setup.

TODO: describe the main characteristics of each platform, as of gem5 5e83d703522a71ec4f3eb61a01acd8c53f6f3860:

  • VExpress_GEM5_V1
    : good sane base platform
  • VExpress_GEM5_V1_DPU
    with DP650 instead of HDLCD, selected automatically by
    ./run --dp650
    , see also: <>
  • VExpress_GEM5_V2
    : VExpress_GEM5_V1 with GICv3, uses a different bootloader
    TODO is it because of GICv3?
  • anything that does not start with:
    : old and bad, don't use them

=== gem5 upstream images

Present at:


Depending on which archive you download from there, you can find some of:

  • Ubuntu based images
  • precompiled Linux kernels, with the <> for arm
  • precompiled <> for ISAs that have them, e.g. ARM
  • precompiled DTBs if you don't want to use autogeneration for some crazy reason

Some of those images are also used on the <> continuous integration.

Could be used as an alternative to this repository. But why would you do that? :-)

E.g. to use a precompiled ARM kernel:

....
mkdir aarch-system-201901106
cd aarch-system-201901106
wget
tar xvf aarch-system-201901106.tar.bz2
cd ..
./run --arch aarch64 --emulator gem5 --linux-exec aarch-system-201901106/binaries/vmlinux.arm64
....

=== gem5 bootloaders

Certain ISAs like ARM have bootloaders that are automatically run before the main image to setup basic system state.

We cross compile those bootloaders from source automatically during


As of gem5 bcf041f257623e5c9e77d35b7531bae59edc0423, the source code of the bootloaders can be found under:

.... system/arm/ ....

and their selection can be seen under:

, e.g.:

....
def setupBootLoader(self, cur_sys, loc):
    if not cur_sys.boot_loader:
        cur_sys.boot_loader = [ loc('boot_emm.arm64'), loc('boot_emm.arm') ]
....

The bootloader basically just sets up a bit of CPU state and jumps to the kernel entry point.

In aarch64 at least, CPUs other than CPU0 are also started up briefly, run some initialization, and are made to wait on a WFE. This can be seen easily by booting a multicore Linux kernel run with <>.

=== gem5 memory system

Parent section: <>.

==== gem5 port system

The gem5 memory system is connected in a very flexible way through the port system.

This system exists to allow seamlessly connecting any combination of CPU, caches, interconnects, DRAM and peripherals.

A <> is the basic information unit that gets sent across ports.

===== gem5 functional vs atomic vs timing memory requests

gem5 memory requests can be classified in the following broad categories:

  • functional: get the value magically, do not update caches, see also: <>
  • atomic: get the value now without making a <>, but do not update caches. Cannot work in <> due to fundamental limitations, mentioned in passing at:
  • timing: get the value simulating delays and updating caches

This trichotomy can be notably seen in the definition of the[MasterPort class]:

.... class MasterPort : public Port, public AtomicRequestProtocol, public TimingRequestProtocol, public FunctionalRequestProtocol ....

and the base classes are defined under


Then, by reading the rest of the class, we see that the send methods are all boring, and just forward to some polymorphic receiver that does the actual interesting activity:

....
Tick
sendAtomicSnoop(PacketPtr pkt)
{
    return AtomicResponseProtocol::sendSnoop(_masterPort, pkt);
}

Tick
AtomicResponseProtocol::sendSnoop(AtomicRequestProtocol *peer, PacketPtr pkt)
{
    return peer->recvAtomicSnoop(pkt);
}
....

The receive methods are therefore the interesting ones, and must be overridden on derived classes if they ever expect to receive such requests:

....
Tick
recvAtomicSnoop(PacketPtr pkt) override
{
    panic("%s was not expecting an atomic snoop request\n", name());
    return 0;
}

void
recvFunctionalSnoop(PacketPtr pkt) override
{
    panic("%s was not expecting a functional snoop request\n", name());
}

void
recvTimingSnoopReq(PacketPtr pkt) override
{
    panic("%s was not expecting a timing snoop request.\n", name());
}
....

One question that comes up now is: but why do CPUs need to care about <>?

And one big answer is: to be able to implement LLSC atomicity as mentioned at: <>, since when other cores update memory, they could invalidate the lock of the current core.

Then, as you might expect, we can see that for example

does not override
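The forward-to-peer send methods plus panic-by-default receive methods form a simple pattern that can be mimicked in a few lines of plain Python (illustration only, not the gem5 API):

```python
# send* forwards to the peer's recv*; base recv* "panics" unless a
# derived port class overrides it.
class Port:
    def __init__(self):
        self.peer = None

    def send_atomic(self, pkt):
        return self.peer.recv_atomic(pkt)

    def recv_atomic(self, pkt):
        raise RuntimeError('%s was not expecting an atomic request'
                           % type(self).__name__)

class MemPort(Port):
    def recv_atomic(self, pkt):
        return 100  # pretend access latency in ticks

cpu_port, mem_port = Port(), MemPort()
cpu_port.peer = mem_port
print(cpu_port.send_atomic('read 0x1000'))  # 100
```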

Now let's see which requests are generated by ordinary <>. We run:

....
./run \
  --arch aarch64 \
  --debug-vm \
  --emulator gem5 \
  --gem5-build-type debug \
  --userland userland/arch/aarch64/freestanding/linux/hello.S \
;
....

and then break at the methods of the LDR class

: <>.

Before starting, we of course guess that:

  • AtomicSimpleCPU
    will be making atomic accesses from
  • TimingSimpleCPU
    will be making timing accesses from
    , which must generate the event which leads to

so let's confirm it.

We break on

which is what
uses, and that leads as expected to:

....
MasterPort::sendAtomic
AtomicSimpleCPU::sendPacket
AtomicSimpleCPU::readMem
SimpleExecContext::readMem
readMemAtomic<(ByteOrder)1, ExecContext, unsigned long>
readMemAtomicLE
ArmISAInst::LDRXL64_LIT::execute
AtomicSimpleCPU::tick
....


immediately translates the address, creates a packet, sends the atomic request, and gets the response back without any events.

And now if we do the same with

--cpu-type TimingSimpleCPU
and break at
, and then add another break for the next event schedule
b EventManager::schedule
(which we imagine is the memory read) we reach:

....
EventManager::schedule
DRAMCtrl::addToReadQueue
DRAMCtrl::recvTimingReq
DRAMCtrl::MemoryPort::recvTimingReq
TimingRequestProtocol::sendReq
MasterPort::sendTimingReq
CoherentXBar::recvTimingReq
CoherentXBar::CoherentXBarSlavePort::recvTimingReq
TimingRequestProtocol::sendReq
MasterPort::sendTimingReq
TimingSimpleCPU::handleReadPacket
TimingSimpleCPU::sendData
TimingSimpleCPU::finishTranslation
DataTranslation::finish
ArmISA::TLB::translateComplete
ArmISA::TLB::translateTiming
ArmISA::TLB::translateTiming
TimingSimpleCPU::initiateMemRead
SimpleExecContext::initiateMemRead
initiateMemRead
ArmISAInst::LDRXL64_LIT::initiateAcc
TimingSimpleCPU::completeIfetch
TimingSimpleCPU::IcachePort::ITickEvent::process
EventQueue::serviceOne
....

so as expected we have


Remember however that timing requests are a bit more complicated due to <>, since the page table walk can itself lead to further memory requests.

In this particular instance, the address being read with

ldr x2, =len
<> is likely placed just after the text section, and therefore the pagewalk is already in the TLB due to previous instruction fetches, which is why the translation finishes immediately, going through
, some key snippets are:

....
Fault
TLB::translateComplete(const RequestPtr &req, ThreadContext *tc,
        Translation *translation, Mode mode, TLB::ArmTranslationType tranType,
        bool callFromS2)
{
    bool delay = false;
    Fault fault;
    if (FullSystem)
        fault = translateFs(req, tc, mode, translation, delay, true, tranType);
    else
        fault = translateSe(req, tc, mode, translation, delay, true);
    if (!delay)
        translation->finish(fault, req, tc, mode);
    else
        translation->markDelayed();
....

and then translateSe does not use delay at all, so we learn that in syscall emulation, delay is always false and things progress immediately there. And then further down, TimingSimpleCPU::finishTranslation does some more fault checking:

....
void
TimingSimpleCPU::finishTranslation(WholeTranslationState *state)
{
    if (state->getFault() != NoFault) {
        translationFault(state->getFault());
    } else {
        if (!state->isSplit) {
            sendData(state->mainReq, state->data, state->res,
                     state->mode == BaseTLB::Read);
....

Tested in gem5 b1623cb2087873f64197e503ab8894b5e4d4c7b4.

====== gem5 functional requests

As seen at <>, functional requests are not used in common simulation, since the core must always go through caches.

Functional accesses are therefore only used for the more magic simulation functionalities.

One such functionality is the <> implementation of the <>, which is done at:


As seen from man futex, the Linux kernel reads a value from an address that is given as the first argument of the call.

Therefore, it makes sense for the gem5 syscall implementation, which does not actually have a real kernel running, to just make a functional request and be done with it: the impact of the cache changes done by this read would be insignificant compared to the cost of the full context switch that would happen on a real syscall.

It is generally hard to implement functional requests for <> runs, because packets are flying through the memory system in a transient state, and there is no simple way of finding exactly which ones might have the latest version of the memory. See for example:


The typical error message in that case is:

.... fatal: Ruby functional read failed for address ....

==== gem5 memory packets


===== gem5 Packet

The Packet is what goes through <>: a single packet is sent out to the memory system, gets modified when it hits valid data, and then returns with the reply.

Packets are what CPUs create and send to get memory values. E.g. on <>:

....
void
AtomicSimpleCPU::tick()
{
    ...
    Packet ifetch_pkt = Packet(ifetch_req, MemCmd::ReadReq);
    ifetch_pkt.dataStatic(&inst);
    ...
    icache_latency = sendPacket(icachePort, &ifetch_pkt);
    ...

Tick
AtomicSimpleCPU::sendPacket(MasterPort &port, const PacketPtr &pkt)
{
    return port.sendAtomic(pkt);
}
....

On <>, we note that the packet is dynamically created, unlike for the AtomicSimpleCPU, since it must exist across multiple <> which happen in separate function calls, whereas atomic memory accesses complete immediately within a single call:

....
void
TimingSimpleCPU::sendFetch(const Fault &fault, const RequestPtr &req,
                           ThreadContext *tc)
{
    if (fault == NoFault) {
        DPRINTF(SimpleCPU, "Sending fetch for addr %#x(pa: %#x)\n",
                req->getVaddr(), req->getPaddr());
        ifetch_pkt = new Packet(req, MemCmd::ReadReq);
        ifetch_pkt->dataStatic(&inst);
        DPRINTF(SimpleCPU, " -- pkt addr: %#x\n", ifetch_pkt->getAddr());
        ...
        if (!icachePort.sendTimingReq(ifetch_pkt)) {
....
It must later delete the reply packet that it gets back, e.g. for the ifetch:

....
TimingSimpleCPU::completeIfetch(PacketPtr pkt)
{
    if (pkt) {
        delete pkt;
    }
....

The most important properties of a Packet are:

  • PacketDataPtr data;
    : the data coming back in a reply packet, or being sent out via it
  • Addr addr;
    : the physical address of the data. A TODO comment says it could be virtual too, but when? + .... /// The address of the request. This address could be virtual or /// physical, depending on the system configuration. Addr addr; ....
  • Flags flags;
    : flags describing properties of the packet
  • MemCmd cmd;
    : see <>
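To make these fields concrete, here is a toy Python sketch (illustration only, not gem5 code; the class and field names just mirror the list above):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SketchPacket:
    """Toy model of the gem5 Packet fields listed above."""
    cmd: str                      # stands in for MemCmd
    addr: int                     # physical (sometimes virtual) address
    data: Optional[bytes] = None  # payload of a reply, or data being written
    flags: set = field(default_factory=set)

# A read request carries no data; the reply comes back with it filled in.
req = SketchPacket(cmd='ReadReq', addr=0x4000a0)
assert req.data is None
resp = SketchPacket(cmd='ReadResp', addr=req.addr, data=b'\x06')
assert resp.data == b'\x06'
```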

====== gem5 MemCmd

Each <> contains a MemCmd.

MemCmd::Command is basically an enumeration of the possible commands, stuff like:

....
enum Command
{
    InvalidCmd,
    ReadReq,
    ReadResp,
....

Each command has a fixed number of attributes defined in the static array:

.... static const CommandInfo commandInfo[]; ....

which gets initialized in the .cc file in the same order as the Command enum.

....
const MemCmd::CommandInfo
MemCmd::commandInfo[] =
{
    /* InvalidCmd */
    { 0, InvalidCmd, "InvalidCmd" },
    /* ReadReq - Read issued by a non-caching agent such as a CPU or
     * device, with no restrictions on alignment. */
    { SET3(IsRead, IsRequest, NeedsResponse), ReadResp, "ReadReq" },
    /* ReadResp */
    { SET3(IsRead, IsResponse, HasData), InvalidCmd, "ReadResp" },
....

From this we see for example that both ReadReq and ReadResp are marked with the IsRead attribute.

The second field of this array also specifies the corresponding response of each request: e.g. the response of a ReadReq is a ReadResp, and InvalidCmd is just a placeholder for commands that are already responses.

....
struct CommandInfo
{
    /// Set of attribute flags.
    const std::bitset<NUM_COMMAND_ATTRIBUTES> attributes;
    /// Corresponding response for requests; InvalidCmd if no
    /// response is applicable.
    const Command response;
    /// String representation (for printing)
    const std::string str;
};
....
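This table-driven design can be sketched in Python (an illustration of the idea under assumed simplifications, not the actual gem5 code):

```python
from enum import IntEnum, auto

class Command(IntEnum):
    InvalidCmd = 0
    ReadReq = auto()
    ReadResp = auto()

# (attributes, response, string), indexed by Command,
# like MemCmd::commandInfo above.
command_info = [
    (set(),                                    Command.InvalidCmd, "InvalidCmd"),
    ({'IsRead', 'IsRequest', 'NeedsResponse'}, Command.ReadResp,   "ReadReq"),
    ({'IsRead', 'IsResponse', 'HasData'},      Command.InvalidCmd, "ReadResp"),
]

def response_of(cmd):
    """The second field gives the reply command for a request."""
    return command_info[cmd][1]

assert response_of(Command.ReadReq) == Command.ReadResp
# Replies have no further response:
assert response_of(Command.ReadResp) == Command.InvalidCmd
```

The table must be kept in the same order as the enum, exactly like the `.cc` initialization of `commandInfo`.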

Some important commands include:

  • ReadReq
    : what the CPU sends out to its cache, see also: <>
  • ReadSharedReq
    : what the dcache of the CPU sends forward to the <> after a ReadReq misses, see also: <>
  • ReadResp
    : response to a ReadReq. Can come from either DRAM or another cache that has the data. On <> we see that a new packet is created.
  • WriteReq
    : what the CPU sends out to its cache, see also: <>
  • UpgradeReq
    : what the dcache of the CPU sends forward after a WriteReq hits a line that is present but not writable

===== gem5 Request


One good way to think about a Request could be "it is what the <> sees", a bit like the Packet is what the memory system sees.

The Request is passed to the constructor of the Packet, and the Packet keeps a reference to it:
....
Packet(const RequestPtr &req, MemCmd _cmd)
    : cmd(_cmd), id((PacketId)req.get()), req(req),
      data(nullptr), addr(0), isSecure(false), size(0),
      _qosValue(0), headerDelay(0), snoopDelay(0),
      payloadDelay(0), senderState(NULL)
{
    if (req->hasPaddr()) {
        addr = req->getPaddr();
        flags.set(VALID_ADDR);
        isSecure = req->isSecure();
    }
    if (req->hasSize()) {
        size = req->getSize();
        flags.set(VALID_SIZE);
    }
}
....


RequestPtr is defined as:

....
typedef std::shared_ptr<Request> RequestPtr;
....

so we see that shared pointers to requests are basically passed around.

Some key fields include:

  • _paddr
    : + .... /** The physical address of the request. Valid only if validPaddr is set. */ Addr _paddr = 0; ....
  • _vaddr
    : + .... /** The virtual address of the request. */ Addr _vaddr = MaxAddr; ....

====== gem5 AtomicSimpleCPU Request

In the <>, a single request of each type is kept for the entire CPU, e.g.:

.... RequestPtr ifetch_req; ....

and it gets created at construction time:

....
AtomicSimpleCPU::AtomicSimpleCPU(AtomicSimpleCPUParams *p)
{
    ...
    ifetch_req = std::make_shared<Request>();
....

and then it gets modified for each request:

.... setupFetchRequest(ifetch_req); ....

which does:

.... req->setVirt(fetchPC, sizeof(MachInst), Request::INST_FETCH, instMasterId(), instAddr); ....

Virtual to physical address translation done by the CPU stores the physical address:

.... fault = thread->dtb->translateAtomic(req, thread->getTC(), BaseTLB::Read); ....

which eventually calls e.g. on fs with MMU enabled:

....
Fault
TLB::translateMmuOn(ThreadContext* tc, const RequestPtr &req, Mode mode,
                    Translation *translation, bool &delay, bool timing,
                    bool functional, Addr vaddr,
                    ArmFault::TranMethod tranMethod)
{
    req->setPaddr(pa);
....
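This reuse pattern can be sketched in Python (a toy illustration: the dictionary stands in for the single shared Request object, and the fake translation offset is made up):

```python
class SketchAtomicCPU:
    """One request object is allocated at construction time and then
    mutated in place for every access, instead of allocating a new one."""
    def __init__(self):
        # Stands in for the RequestPtr created in the constructor.
        self.ifetch_req = {'vaddr': None, 'paddr': None}

    def setup_fetch_request(self, pc):
        # Like setupFetchRequest calling Request::setVirt.
        self.ifetch_req['vaddr'] = pc

    def translate_atomic(self):
        # Like the TLB calling req->setPaddr(pa); the mapping is made up.
        self.ifetch_req['paddr'] = self.ifetch_req['vaddr'] - 0x400000 + 0x80000000

cpu = SketchAtomicCPU()
first = cpu.ifetch_req
cpu.setup_fetch_request(0x400098)
cpu.translate_atomic()
cpu.setup_fetch_request(0x40009c)
cpu.translate_atomic()
# Same object reused across fetches, only its contents change:
assert cpu.ifetch_req is first
assert cpu.ifetch_req['paddr'] == 0x8000009c
```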

====== gem5 TimingSimpleCPU Request


In <>, the request gets created per memory read:

....
Fault
TimingSimpleCPU::initiateMemRead(Addr addr, unsigned size,
                                 Request::Flags flags,
                                 const std::vector<bool>& byteEnable)
{
    ...
    RequestPtr req = std::make_shared<Request>(
        addr, size, flags, dataMasterId(), pc, thread->contextId());
....

and from <> and <> we remember that initiateMemRead is actually started from the instruction definitions for timing:

....
Fault
LDRWL64_LIT::initiateAcc(ExecContext *xc, Trace::InstRecord *traceData) const
{
    ...
    fault = initiateMemRead(xc, traceData, EA, Mem, memAccessFlags);
....

From this we see that the timing memory instructions basically extract the information required for the request, notably the address EA and the flags.

==== gem5 MSHR


Mentioned at:

Each cache object owns an MSHRQueue:

....
class BaseCache : public ClockedObject
{
    /** Miss status registers */
    MSHRQueue mshrQueue;
....

BaseCache is the base class of the concrete cache classes, and MSHRQueue is a Queue:

....
class MSHRQueue : public Queue<MSHR>
....

and Queue is also a generic gem5 class, under:


The MSHR basically keeps track of all the information the cache receives, and helps it take appropriate action. I'm not sure why it is separate from the cache at all, as it is basically performing essential cache bookkeeping.

A clear example of MSHR in action can be seen at: <>. In that example what happened was:

  • CPU1 writes to an address and it completes
  • CPU2 sends read
  • CPU1 writes to the address again
  • CPU2 snoops the write, and notes it down in its MSHR
  • CPU2 receives a snoop reply for its read, also from CPU1 which has the data and the line becomes valid
  • CPU2 gets its data. But the MSHR remembers that it had also received a write snoop, so it also immediately invalidates that line

From this we understand that the MSHR is the part of the cache that tracks pending snoops and ensures that lines get invalidated when needed.
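The sequence above can be sketched as a toy Python model (illustrative only; a real MSHR tracks much more state than this):

```python
class SketchCacheLine:
    """Toy model of the behaviour described above: while a read miss is
    outstanding, a snooped write is only noted down, and it is applied
    as an invalidation right after the fill arrives."""
    def __init__(self):
        self.valid = False
        self.read_outstanding = False
        self.pending_invalidate = False

    def send_read(self):
        self.read_outstanding = True

    def snoop_write(self):
        if self.read_outstanding:
            self.pending_invalidate = True  # noted down in the MSHR
        else:
            self.valid = False

    def receive_fill(self):
        self.read_outstanding = False
        self.valid = True  # the CPU gets its data here
        if self.pending_invalidate:
            # ...but the remembered snoop immediately invalidates the line
            self.valid = False
            self.pending_invalidate = False

line = SketchCacheLine()
line.send_read()       # CPU2 sends read
line.snoop_write()     # CPU2 snoops CPU1's second write
line.receive_fill()    # the snoop reply arrives with the data
assert not line.valid  # the remembered snoop invalidated the line
```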

==== gem5 CommMonitor


You can place this <> in between two <> to get extra statistics about the packets that are going through.

It only works on <>, and it does not seem to dump any memory values, it only adds extra <>.

For example, the patch link:patches/manual/gem5-commmonitor-se.patch[] hacks a CommMonitor in between the CPU and the L1 cache on top of gem5 1c3662c9557c85f0d25490dc4fbde3f8ab0cb350:

....
patch -d "$(./getvar gem5_source_dir)" -p 1 < patches/manual/gem5-commmonitor-se.patch
....

That patch was done largely by copying what --memcheck does with a CommMonitor.
You can then run with:

.... ./run \ --arch aarch64 \ --emulator gem5 \ --userland userland/arch/aarch64/freestanding/linux/hello.S \ -- \ --caches \ --cpu-type TimingSimpleCPU \ ; ....

and now we have some new extra histogram statistics such as:

.... system.cpu.dcache_mon.readBurstLengthHist::samples 1 ....

One neat thing about this is that it is agnostic to the memory object type, so you don't have to recode those statistics for every new type of object that operates on memory packets.

==== gem5 SimpleMemory

SimpleMemory is a highly simplified memory model. It can replace a more complex DRAM model if you use it e.g. as:

.... ./run --emulator gem5 -- --mem-type SimpleMemory ....

and it also gets used in certain system-y memories present in ARM systems by default e.g. Flash memory:

.... [system.realview.flash0] type=SimpleMemory ....

As of gem5 3ca404da175a66e0b958165ad75eb5f54cb5e772 LKMC 059a7ef9d9c378a6d1d327ae97d90b78183680b2 it did not provide any speedup to the Linux kernel boot according to a quick test.

=== gem5 internals

Internals under other sections:

  • <>
  • <>
  • <>
  • <>

==== gem5 Eclipse configuration

In order to develop complex C++ software such as gem5, a good IDE setup is fundamental.

The best setup I've reached is with Eclipse. It is not perfect, and there is a learning curve, but is worth it.

Notably, it is very hard to get perfect due to: <>.

I recommend the following settings, tested in Eclipse 2019.09, Ubuntu 18.04:

  • fix all missing stdlib headers:
  • use spaces instead of tabs: Window, Preferences, Code Style, C/C++, Formatter, New, Edit, Tab Policy, Spaces Only
  • either:
      ◦ create the project in the gem5 build directory! Files are moved around there and symlinked, and this gives the best chances of success
      ◦ add to the include search path:
          ▪ ./src/ in the source tree
          ▪ the ISA specific build directory, which contains some auto-generated files, e.g.: out/gem5/default/build/ARM

To run and GDB step debug the executable, just copy the <> from your run command (Eclipse does not like newlines for the arguments), e.g.:

.... ./run --emulator gem5 --print-cmd-oneline ....

and configure it into Eclipse as usual.

One downside of this setup is that if you want to nuke your build directory to get a clean build, then the Eclipse configuration files present in it might get deleted. Maybe it is possible to store the configuration files outside of the directory, but we now mitigate that by making a backup copy of those configuration files before removing the directory, and restoring it when you do:

./build-gem5 --clean

==== gem5 Python C++ interaction

The interaction uses the Python C extension interface through the <> helper library:

The C++ executable both:

  • starts running the Python executable
  • provides Python classes written in C++ for that Python code to use

An example of this can be found at:


then the gem5 magic SimObject class adds some further crazy stuff on top of it; it is a mess. In particular, it auto-generates the params headers. TODO: why is this mess needed at all? pybind11 seems to handle constructor arguments just fine:

Let's study BadDevice for example:

....
class BadDevice(BasicPioDevice):
    type = 'BadDevice'
    cxx_header = "dev/baddev.hh"
    devicename = Param.String("Name of device to error on")
....

The object is created in Python for example from


.... fb = BadDevice(pio_addr=0x801fc0003d0, devicename='FrameBuffer') ....


BadDevice has no __init__ method, and neither do its bases, so everything just falls through up to the SimObject constructor.

This constructor will loop through the inheritance chain and give the Python parameters to the C++ BadDeviceParams class as follows.

The auto-generated params/BadDevice.hh file defines BadDeviceParams in C++:


....
#ifndef __PARAMS__BadDevice__
#define __PARAMS__BadDevice__

class BadDevice;

#include <string>
#include "params/BasicPioDevice.hh"

struct BadDeviceParams
    : public BasicPioDeviceParams
{
    BadDevice * create();
    std::string devicename;
};

#endif // __PARAMS__BadDevice__
....



The corresponding auto-generated .cc file defines the param bindings from C++ with pybind11:

....
namespace py = pybind11;

static void
module_init(py::module &m_internal)
{
    py::module m = m_internal.def_submodule("param_BadDevice");
    py::class_<BadDeviceParams, BasicPioDeviceParams,
               std::unique_ptr<BadDeviceParams, py::nodelete>>(
            m, "BadDeviceParams")
        .def(py::init<>())
        .def("create", &BadDeviceParams::create)
        .def_readwrite("devicename", &BadDeviceParams::devicename)
        ;

    py::class_<BadDevice, BasicPioDevice,
               std::unique_ptr<BadDevice, py::nodelete>>(m, "BadDevice")
        ;
}

static EmbeddedPyBind embed_obj("BadDevice", module_init, "BasicPioDevice");
....

The device class then uses the parameters in its constructor:

....
class BadDevice : public BasicPioDevice
{
  private:
    std::string devname;

  public:
    typedef BadDeviceParams Params;

  protected:
    const Params *
    params() const
    {
        return dynamic_cast<const Params *>(_params);
    }

  public:
    /**
     * Constructor for the Baddev Class.
     * @param p object parameters
     * @param a base address of the write
     */
    BadDevice(Params *p);
....

and the constructor implementation then uses the parameter:

....
BadDevice::BadDevice(Params *p)
    : BasicPioDevice(p, 0x10), devname(p->devicename)
{
}
....
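The fall-through parameter collection can be sketched in Python (a toy model of the idea, with made-up Sketch* names; this is not the real SimObject machinery):

```python
class SketchParams:
    """Stands in for the generated C++ *Params structs."""
    pass

class SketchSimObject:
    params = {}
    def __init__(self, **kwargs):
        # Walk the inheritance chain, collecting the params declared at
        # each level and the values given as constructor keyword args.
        p = SketchParams()
        for cls in type(self).__mro__:
            for name, default in getattr(cls, 'params', {}).items():
                setattr(p, name, kwargs.get(name, default))
        self.p = p

class BasicPioDeviceSketch(SketchSimObject):
    params = {'pio_addr': 0}

class BadDeviceSketch(BasicPioDeviceSketch):
    params = {'devicename': ''}

# No __init__ on the subclasses: the call falls through to the base class,
# which gathers parameters from the whole chain.
fb = BadDeviceSketch(pio_addr=0x3d0, devicename='FrameBuffer')
assert fb.p.devicename == 'FrameBuffer'
assert fb.p.pio_addr == 0x3d0
```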

It has been found that this usage of <> across hundreds of param files accounted for 50% of the gem5 build time at one point: <>.

To get a feeling of how SimObject objects are run, see: <>.



Tested on gem5 08c79a194d1a3430801c04f37d13216cc9ec1da3.

==== gem5 entry point

The main is at:

. It calls:

.... ret = initM5Python(); ....


....
int
initM5Python()
{
    EmbeddedPyBind::initAll();
    return EmbeddedPython::initAll();
}
....

This basically just initializes the importer Python object, which is used across multiple embedded modules.

Back in the main function, it then calls:


.... ret = m5Main(argc, argv); ....

which goes to:

....
result = PyRun_String(*command, Py_file_input, dict, dict);
....

with commands looping over:

.... import m5 m5.main() ....

which leads into:

.... src/python/m5/ ....

which finally calls your config file like

....
filename = sys.argv[0]
filedata = file(filename, 'r').read()
filecode = compile(filedata, filename, 'exec')
[...]
exec filecode in scope
....

TODO: the file path name appears to be passed as a command line argument to the Python script, but I didn't have the patience to fully understand the details.

The Python config files then set the entire system up in Python, and finally call m5.simulate() to run the actual simulation. This function has a C++ native implementation at:

.... src/sim/ ....

and that is where the main event loop gets called and starts kicking off the <>.

Tested at gem5 b4879ae5b0b6644e6836b0881e4da05c64a6550d.

===== gem5 m5.objects

SimObject classes seem to be automatically added to the m5.objects namespace, and this is done in a very convoluted way, so let's try to understand it a bit:

....
src/python/m5/objects/__init__.py
....


....
modules = __loader__.modules

for module in modules.keys():
    if module.startswith('m5.objects.'):
        exec("from %s import *" % module)
....

And from <> we see that this appears to loop over every object string of type m5.objects.*.

CodeImporter.load_module gets called by the Python import machinery at each of those imports:

....
class CodeImporter(object):
    def load_module(self, fullname):
        ...
        override = os.environ.get('M5_OVERRIDE_PY_SOURCE', 'false').lower()
        if override in ('true', 'yes') and os.path.exists(abspath):
            src = open(abspath, 'r').read()
            code = compile(src, abspath, 'exec')

        if os.path.basename(srcfile) == '__init__.py':
            mod.__path__ = fullname.split('.')
            mod.__package__ = fullname
        else:
            mod.__package__ = fullname.rpartition('.')[0]
        mod.__file__ = srcfile

        exec(code, mod.__dict__)

import sys
importer = CodeImporter()
add_module = importer.add_module
sys.meta_path.append(importer)
....

Here, as a bonus, we also see how <> works.


we see that SimObject is just a PySource with the module set to m5.objects:

....
class SimObject(PySource):
    def __init__(self, source, tags=None, add_tags=None):
        '''Specify the source file and any tags (automatically in
        the m5.objects package)'''
        super(SimObject, self).__init__('m5.objects', source, tags, add_tags)
....


The add_module method seems to be doing the magic, and it is called from:

....
bool
EmbeddedPython::addModule() const
{
    PyObject *code = getCode();
    PyObject *result = PyObject_CallMethod(importerModule,
                                           PyCC("add_module"),
....

which is called from:

....
int
EmbeddedPython::initAll()
{
    // Load the importer module
    PyObject *code = importer->getCode();
    importerModule = PyImport_ExecCodeModule(PyCC("importer"), code);
    if (!importerModule) {
        PyErr_Print();
        return 1;
    }

    // Load the rest of the embedded python files into the embedded
    // python importer
    list<EmbeddedPython *>::iterator i = getList().begin();
    list<EmbeddedPython *>::iterator end = getList().end();
    for (; i != end; ++i)
        if (!(*i)->addModule())
....


The importer object, in turn, comes from:

....
EmbeddedPython::EmbeddedPython(const char *filename, const char *abspath,
                               const char *modpath, const unsigned char *code,
                               int zlen, int len)
    : filename(filename), abspath(abspath), modpath(modpath), code(code),
      zlen(zlen), len(len)
{
    // if we've added the importer keep track of it because we need it
    // to bootstrap.
    if (string(modpath) == string("importer"))
        importer = this;
    else
        getList().push_back(this);
}

list<EmbeddedPython *> &
EmbeddedPython::getList()
{
    static list<EmbeddedPython *> the_list;
    return the_list;
}
....

and the constructor in turn gets called from per-module autogenerated files such as e.g.:

....
EmbeddedPython embedded_m5_objects_Ide(
    "m5/objects/",
    "/home/ciro/bak/git/linux-kernel-module-cheat/data/gem5/master4/src/dev/storage/",
    "m5.objects.Ide",
    data_m5_objects_Ide,
    947,
    2099);

} // anonymous namespace
....

which get autogenerated at


....
def embedPyFile(target, source, env):

for source in PySource.all:
    base_py_env.Command(source.cpp, [ py_marshal, source.tnode ],
                        MakeAction(embedPyFile, Transform("EMBED PY")))
....

where the PySource.all thing, as you might expect, is a static list of all the Python source files, which gets updated in the PySource constructor.
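This registration pattern, a constructor that appends each instance to a static list while special-casing the bootstrap importer, can be sketched in Python (illustration only, with made-up names):

```python
class EmbeddedModuleSketch:
    """Toy version of the EmbeddedPython self-registration pattern."""
    _list = []       # stands in for the function-local static list
    importer = None  # the special bootstrap module

    def __init__(self, modpath):
        self.modpath = modpath
        if modpath == 'importer':
            EmbeddedModuleSketch.importer = self
        else:
            EmbeddedModuleSketch._list.append(self)

    @classmethod
    def init_all(cls):
        # like EmbeddedPython::initAll: load the importer first,
        # then everything that registered itself
        return [cls.importer.modpath] + [m.modpath for m in cls._list]

EmbeddedModuleSketch('importer')
EmbeddedModuleSketch('m5.objects.Ide')
EmbeddedModuleSketch('m5.objects.BadDevice')
assert EmbeddedModuleSketch.init_all() == [
    'importer', 'm5.objects.Ide', 'm5.objects.BadDevice']
```

The point of the static list is that each autogenerated file registers itself at program startup, before main() runs, so initAll() does not need to know the full module list.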

Tested in gem5 d9cb548d83fa81858599807f54b52e5be35a6b03.

==== gem5 event queue

gem5 is an event based simulator, and as such the event queue is one of the crucial elements of the system.

Every single action that takes time (e.g. notably <>) models that time delay by scheduling an event in the future.

The gem5 event queue stores one callback event for each future point in time.

The event queue is implemented in the EventQueue class.

Not all times need to have an associated event: if a given time has no events, gem5 just skips it and jumps to the next event: the queue is basically a linked list of events.

Important examples of events include:

  • CPU ticks
  • peripherals and memory

At <> we see for example that at the beginning of an <> simulation, gem5 sets up exactly two events:

  • the first CPU cycle
  • one exit event at the end of time which triggers <>

Then, at the end of the callback of one tick event, another tick is scheduled.

And so the simulation progresses tick by tick, until an exit event happens.
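The loop described above can be sketched as a toy discrete event simulator in Python (gem5 uses a sorted linked list; a heap is used here for brevity, and all names are made up):

```python
import heapq

class SketchEventQueue:
    """Toy event queue: a priority queue of (time, callback) pairs.
    Time jumps directly from one scheduled event to the next."""
    def __init__(self):
        self.queue = []
        self.curtick = 0
        self.seq = 0  # tie-breaker for events scheduled at the same tick

    def schedule(self, when, callback):
        heapq.heappush(self.queue, (when, self.seq, callback))
        self.seq += 1

    def run(self):
        while self.queue:
            self.curtick, _, callback = heapq.heappop(self.queue)
            if callback(self.curtick) == 'exit':
                break

eq = SketchEventQueue()
log = []

def tick(now):
    log.append(now)
    # like the CPU rescheduling its own tick at the end of the callback
    eq.schedule(now + 500, tick)

def exit_event(now):
    return 'exit'

eq.schedule(0, tick)           # the first CPU cycle
eq.schedule(1800, exit_event)  # stand-in for the simulation exit event
eq.run()
assert log == [0, 500, 1000, 1500]
```

Note how no tick ever runs at 2000: once the exit event at 1800 fires, the loop stops, mirroring how the gem5 simulation ends when an exit event happens.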


The EventQueue class has one awesome function that prints a human friendly representation of the queue, and it can easily be called from GDB. TODO: example.

We can also observe what is going on in the event queue with the Event debug flag:


Event execution is done at:

....
Event *exit_event = eventq->serviceOne();
....

This calls the process method of the event.

Another important technique is to use <> and break at interesting points such as:

....
b Trace::OstreamLogger::logMessage
b EventManager::schedule
b EventFunctionWrapper::process
....

although stepping into

which does
is a bit of a pain:

Another potentially useful technique is to use:

.... --trace Event,ExecAll,FmtFlag,FmtStackTrace --trace-stdout ....

which automates the logging of stack traces for each event.

But alas, it misses which function callback is being scheduled, which is the awesome thing we actually want:


Then, once we had that, the most perfect thing ever would be to make the full event graph containing which events schedule which events!

===== gem5 event queue AtomicSimpleCPU syscall emulation freestanding example analysis

Let's now analyze every single event on a minimal <> in the <>:

.... ./run \ --arch aarch64 \ --emulator gem5 \ --userland userland/arch/aarch64/freestanding/linux/hello.S \ --trace Event,ExecAll,FmtFlag \ --trace-stdout \ ; ....

which gives:

....
      0: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 scheduled @ 0
**** REAL SIMULATION ****
      0: Event: Event_70: generic 70 scheduled @ 0
info: Entering event queue @ 0.  Starting simulation...
      0: Event: Event_70: generic 70 rescheduled @ 18446744073709551615
      0: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 executed @ 0
      0: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue : movz x0, #1, #0 : IntAlu : D=0x0000000000000001 flags=(IsInteger)
      0: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 rescheduled @ 500
    500: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 executed @ 500
    500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+4 : adr x1, #28 : IntAlu : D=0x0000000000400098 flags=(IsInteger)
    500: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 rescheduled @ 1000
   1000: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 executed @ 1000
   1000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+8 : ldr w2, #4194464 : MemRead : D=0x0000000000000006 A=0x4000a0 flags=(IsInteger|IsMemRef|IsLoad)
   1000: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 rescheduled @ 1500
   1500: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 executed @ 1500
   1500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+12 : movz x8, #64, #0 : IntAlu : D=0x0000000000000040 flags=(IsInteger)
   1500: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 rescheduled @ 2000
   2000: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 executed @ 2000
   2000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+16 : svc #0x0 : IntAlu : flags=(IsSerializeAfter|IsNonSpeculative|IsSyscall)
hello
   2000: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 rescheduled @ 2500
   2500: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 executed @ 2500
   2500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+20 : movz x0, #0, #0 : IntAlu : D=0x0000000000000000 flags=(IsInteger)
   2500: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 rescheduled @ 3000
   3000: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 executed @ 3000
   3000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+24 : movz x8, #93, #0 : IntAlu : D=0x000000000000005d flags=(IsInteger)
   3000: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 rescheduled @ 3500
   3500: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 executed @ 3500
   3500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+28 : svc #0x0 : IntAlu : flags=(IsSerializeAfter|IsNonSpeculative|IsSyscall)
   3500: Event: Event_71: generic 71 scheduled @ 3500
   3500: Event: Event_71: generic 71 executed @ 3500
....

On the event trace, we can first see:

....
0: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 scheduled @ 0
....

This schedules a tick event for time 0, and leads to the first clock tick.


....
0: Event: Event_70: generic 70 scheduled @ 0
0: Event: Event_70: generic 70 rescheduled @ 18446744073709551615
....

schedules the end of time event, initially for time 0, which is then rescheduled to the actual end of time, the maximum representable tick.


....
0: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 executed @ 0
0: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue : movz x0, #1, #0 : IntAlu : D=0x0000000000000001 flags=(IsInteger)
0: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 rescheduled @ 500
....

the tick event happens, the instruction runs, and then the tick event is rescheduled 500 time units later. This is done at the end of AtomicSimpleCPU::tick:

....
if (_status != Idle)
    reschedule(tickEvent, curTick() + latency, true);
....


....
3500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+28 : svc #0x0 : IntAlu : flags=(IsSerializeAfter|IsNonSpeculative|IsSyscall)
3500: Event: Event_71: generic 71 scheduled @ 3500
3500: Event: Event_71: generic 71 executed @ 3500
....

the exit system call is called, and it then schedules an exit event, which gets executed and ends the simulation.

We guess then that Event_71 comes from the SE implementation of the exit syscall, so let's just confirm: the backtrace contains:

....
exitSimLoop() at 0x5555594746e0
exitImpl() at 0x55555948c046
exitFunc() at 0x55555948c147
SyscallDesc::doSyscall() at 0x5555594949b6
Process::syscall() at 0x555559484717
SimpleThread::syscall() at 0x555559558059
ArmISA::SupervisorCall::invoke() at 0x5555572950d7
BaseSimpleCPU::advancePC() at 0x555559083133
AtomicSimpleCPU::tick() at 0x55555907834c
....



....
new GlobalSimLoopExitEvent(when + simQuantum, message, exit_code, repeat);
....

Tested in gem5 12c917de54145d2d50260035ba7fa614e25317a3.

====== AtomicSimpleCPU initial events

Let's have a closer look at the initial magically scheduled events of the simulation.

Most events come from other events, but at least one initial event must be scheduled somehow from elsewhere to kick things off.

The initial tick event:

....
0: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 scheduled @ 0
....

we'll study by breaking at the point that prints the trace messages:

b Trace::OstreamLogger::logMessage

to see where events are scheduled from.
