
Linux Kernel Module Cheat


The perfect emulation setup to study and develop the Linux kernel v5.4.3, kernel modules, QEMU, gem5 and x86_64, ARMv7 and ARMv8 userland and baremetal assembly, ANSI C, C++ and POSIX. GDB step debug and KGDB just work. Powered by Buildroot and crosstool-NG. Highly automated. Thoroughly documented. Automated tests. "Tested" on an Ubuntu 19.10 host.

The source code for this page is located at: https://github.com/************/linux-kernel-module-cheat. Due to a GitHub limitation, this README is too long and not fully rendered on github.com. Either use: README.adoc, https://************.com/linux-kernel-module-cheat or build the docs yourself.

Each child section describes a possible different setup for this repo.

If you don’t know which one to go for, start with QEMU Buildroot setup getting started.

Design goals of this project are documented at: [design-goals].

This setup has been mostly tested on Ubuntu. For other host operating systems see: [supported-hosts]. For greater stability, consider using the latest release instead of master: https://github.com/************/linux-kernel-module-cheat/releases

Reserve 12 GB of disk and run:

git clone https://github.com/************/linux-kernel-module-cheat
cd linux-kernel-module-cheat
./build --download-dependencies qemu-buildroot
./run

You don’t need to clone recursively even though we have .git submodules: --download-dependencies fetches just the submodules that you need for this build, to save time.

If something goes wrong, see: [common-build-issues] and use our issue tracker: https://github.com/************/linux-kernel-module-cheat/issues

The initial build will take a while (30 minutes to 2 hours) to clone and build; see [benchmark-builds] for more details.

If you don’t want to wait, you could also try the following faster but much more limited methods:

but you will soon find that they are simply not enough if you are anywhere near serious about systems programming.

After ./run, QEMU opens up leaving you in the /lkmc/ directory, and you can start playing with the kernel modules inside the simulated system:

insmod hello.ko
insmod hello2.ko
rmmod hello
rmmod hello2

This should print to the screen:

hello init
hello2 init
hello cleanup
hello2 cleanup

which are the printk messages from the init and cleanup methods of those modules.
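
For reference, here is a minimal sketch of what a module like kernel_modules/hello.c plausibly contains; the actual source in this repository may differ:

#include <linux/kernel.h>
#include <linux/module.h>

static int myinit(void)
{
    pr_info("hello init\n");
    return 0;
}

static void myexit(void)
{
    pr_info("hello cleanup\n");
}

module_init(myinit);
module_exit(myexit);
MODULE_LICENSE("GPL");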

Sources: kernel_modules/hello.c and kernel_modules/hello2.c.

Quit QEMU with:

Ctrl-A X

All available modules can be found in the kernel_modules directory.

It is super easy to build for different CPU architectures: just use the --arch option:

./build --arch aarch64 --download-dependencies qemu-buildroot
./run --arch aarch64

To avoid typing --arch aarch64 many times, you can set the default arch as explained at: [default-command-line-arguments]

I now urge you to read the following sections which contain widely applicable information:

Once you use GDB step debug and tmux, your terminal will look a bit like this:

[    1.451857] input: AT Translated Set 2 keyboard as /devices/platform/i8042/s1│loading @0xffffffffc0000000: ../kernel_modules-1.0//timer.ko
[    1.454310] ledtrig-cpu: registered to indicate activity on CPUs             │(gdb) b lkmc_timer_callback
[    1.455621] usbcore: registered new interface driver usbhid                  │Breakpoint 1 at 0xffffffffc0000000: file /home/ciro/bak/git/linux-kernel-module
[    1.455811] usbhid: USB HID core driver                                      │-cheat/out/x86_64/buildroot/build/kernel_modules-1.0/./timer.c, line 28.
[    1.462044] NET: Registered protocol family 10                               │(gdb) c
[    1.467911] Segment Routing with IPv6                                        │Continuing.
[    1.468407] sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver              │
[    1.470859] NET: Registered protocol family 17                               │Breakpoint 1, lkmc_timer_callback (data=0xffffffffc0002000 <mytimer>)
[    1.472017] 9pnet: Installing 9P2000 support                                 │    at /linux-kernel-module-cheat//out/x86_64/buildroot/build/
[    1.475461] sched_clock: Marking stable (1473574872, 0)->(1554017593, -80442)│kernel_modules-1.0/./timer.c:28
[    1.479419] ALSA device list:                                                │28      {
[    1.479567]   No soundcards found.                                           │(gdb) c
[    1.619187] ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100                 │Continuing.
[    1.622954] ata2.00: configured for MWDMA2                                   │
[    1.644048] scsi 1:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ P5│Breakpoint 1, lkmc_timer_callback (data=0xffffffffc0002000 <mytimer>)
[    1.741966] tsc: Refined TSC clocksource calibration: 2904.010 MHz           │    at /linux-kernel-module-cheat//out/x86_64/buildroot/build/
[    1.742796] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x29dc0f4s│kernel_modules-1.0/./timer.c:28
[    1.743648] clocksource: Switched to clocksource tsc                         │28      {
[    2.072945] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8043│(gdb) bt
[    2.078641] EXT4-fs (vda): couldn't mount as ext3 due to feature incompatibis│#0  lkmc_timer_callback (data=0xffffffffc0002000 <mytimer>)
[    2.080350] EXT4-fs (vda): mounting ext2 file system using the ext4 subsystem│    at /linux-kernel-module-cheat//out/x86_64/buildroot/build/
[    2.088978] EXT4-fs (vda): mounted filesystem without journal. Opts: (null)  │kernel_modules-1.0/./timer.c:28
[    2.089872] VFS: Mounted root (ext2 filesystem) readonly on device 254:0.    │#1  0xffffffff810ab494 in call_timer_fn (timer=0xffffffffc0002000 <mytimer>,
[    2.097168] devtmpfs: mounted                                                │    fn=0xffffffffc0000000 <lkmc_timer_callback>) at kernel/time/timer.c:1326
[    2.126472] Freeing unused kernel memory: 1264K                              │#2  0xffffffff810ab71f in expire_timers (head=<optimized out>,
[    2.126706] Write protecting the kernel read-only data: 16384k               │    base=<optimized out>) at kernel/time/timer.c:1363
[    2.129388] Freeing unused kernel memory: 2024K                              │#3  __run_timers (base=<optimized out>) at kernel/time/timer.c:1666
[    2.139370] Freeing unused kernel memory: 1284K                              │#4  run_timer_softirq (h=<optimized out>) at kernel/time/timer.c:1692
[    2.246231] EXT4-fs (vda): warning: mounting unchecked fs, running e2fsck isd│#5  0xffffffff81a000cc in __do_softirq () at kernel/softirq.c:285
[    2.259574] EXT4-fs (vda): re-mounted. Opts: block_validity,barrier,user_xatr│#6  0xffffffff810577cc in invoke_softirq () at kernel/softirq.c:365
hello S98                                                                       │#7  irq_exit () at kernel/softirq.c:405
                                                                                │#8  0xffffffff818021ba in exiting_irq () at ./arch/x86/include/asm/apic.h:541
Apr 15 23:59:23 login[49]: root login on 'console'                              │#9  smp_apic_timer_interrupt (regs=<optimized out>)
hello /root/.profile                                                            │    at arch/x86/kernel/apic/apic.c:1052
# insmod /timer.ko                                                              │#10 0xffffffff8180190f in apic_timer_interrupt ()
[    6.791945] timer: loading out-of-tree module taints kernel.                 │    at arch/x86/entry/entry_64.S:857
# [    7.821621] 4294894248                                                     │#11 0xffffffff82003df8 in init_thread_union ()
[    8.851385] 4294894504                                                       │#12 0x0000000000000000 in ?? ()
                                                                                │(gdb)

Besides a seamless initial build, this project also aims to make it effortless to modify and rebuild several major components of the system, to serve as an awesome development setup.

Let’s hack up the Linux kernel entry point, which is an easy place to start.

Open the file:

vim submodules/linux/init/main.c

and find the start_kernel function, then add the following to it:

pr_info("I'VE HACKED THE LINUX KERNEL!!!");

Then rebuild the Linux kernel, quit QEMU and reboot the modified kernel:

./build-linux
./run

and, sure enough, your message appears at the beginning of the boot:

<6>[    0.000000] I'VE HACKED THE LINUX KERNEL!!!

So you are now officially a Linux kernel hacker, way to go!

We could have used just ./build to rebuild the kernel, as in the initial build, instead of ./build-linux, but building just the required individual components is preferred during development:

  • saves a few seconds from parsing Make scripts and reading timestamps

  • makes it easier to understand what is being done in more detail

  • allows passing more specific options to customize the build

The build script is just a lightweight wrapper that calls the smaller build scripts, and you can see what ./build does with:

./build --dry-run

When you reach difficulties, QEMU makes it possible to easily GDB step debug the Linux kernel source code, see: Section 2, “GDB step debug”.

Edit kernel_modules/hello.c to contain:

pr_info("hello init hacked\n");

and rebuild with:

./build-modules

Now there are two ways to test it out: the fast way, and the safe way.

The fast way is, without quitting or rebooting QEMU, just directly re-insert the module with:

insmod /mnt/9p/out_rootfs_overlay/lkmc/hello.ko

and the new pr_info message should now show on the terminal at the end of the boot.

This works because we have a 9P mount there set up by default, which mounts the host directory that contains the build outputs on the guest:

ls "$(./getvar out_rootfs_overlay_dir)"

The fast method is slightly risky because your previously insmodded buggy kernel module attempt might have corrupted the kernel memory, which could affect future runs.

Such failures are however unlikely, and you should be fine if you don’t see anything weird happening.

The safe way is to first quit QEMU, rebuild the modules, put them in the root filesystem, and then reboot:

./build-modules
./build-buildroot
./run --eval-after 'insmod hello.ko'

./build-buildroot is required after ./build-modules because it re-generates the root filesystem with the modules that we compiled at ./build-modules.

You can see that ./build does that as well, by running:

./build --dry-run

--eval-after is optional: you could just type insmod hello.ko in the terminal, but this makes it run automatically at the end of boot, and then drops you into a shell.

If the guest and host are the same arch, typically x86_64, you can speed up boot further with KVM:

./run --kvm

All of this put together makes the safe procedure acceptably fast for regular development as well.

It is also easy to GDB step debug kernel modules with our setup, see: Section 2.4, “GDB step debug kernel module”.

Not satisfied with mere software? OK then, let’s hack up the QEMU x86 CPU identification:

vim submodules/qemu/target/i386/cpu.c

and modify:

.model_id = "QEMU Virtual CPU version " QEMU_HW_VERSION,

to contain:

.model_id = "QEMU Virtual CPU version HACKED " QEMU_HW_VERSION,

then as usual rebuild and re-run:

./build-qemu
./run --eval-after 'grep "model name" /proc/cpuinfo'

and once again, there is your message: QEMU communicated it to the Linux kernel, which printed it out.

You have now gone from newb to hardware hacker in a mere 15 minutes, your rate of progress is truly astounding!!!

Seriously though, if you want to be a real hardware hacker, it just can’t be done with open source tools as of 2018. The root obstacle is that the only thing you can do with open source is purely functional designs with Verilator, and you will never know whether they can actually be produced or how efficient they can be.

If you really want to develop semiconductors, your only choice is to join a university or a semiconductor company that has the EDA licenses.

While hacking QEMU, you will likely want to GDB step its source. That is trivial since QEMU is just another userland program like any other, but our setup has a shortcut to make it even more convenient, see: Section 18.7, “Debug the emulator”.

We use glibc as our default libc now, and it is tracked as an unmodified submodule at submodules/glibc, at the exact same version that Buildroot has it, which can be found at: package/glibc/glibc.mk. Buildroot 2018.05 applies no patches.

Let’s hack up the puts function:

./build-buildroot -- glibc-reconfigure

with the patch:

diff --git a/libio/ioputs.c b/libio/ioputs.c
index 706b20b492..23185948f3 100644
--- a/libio/ioputs.c
+++ b/libio/ioputs.c
@@ -38,8 +38,9 @@ _IO_puts (const char *str)
   if ((_IO_vtable_offset (_IO_stdout) != 0
        || _IO_fwide (_IO_stdout, -1) == -1)
       && _IO_sputn (_IO_stdout, str, len) == len
+      && _IO_sputn (_IO_stdout, " hacked", 7) == 7
       && _IO_putc_unlocked ('\n', _IO_stdout) != EOF)
-    result = MIN (INT_MAX, len + 1);
+    result = MIN (INT_MAX, len + 1 + 7);

   _IO_release_lock (_IO_stdout);
   return result;

And then:

./run --eval-after './c/hello.out'

outputs:

hello hacked

Lol!

We can also test our hacked glibc on User mode simulation with:

./run --userland userland/c/hello.c

I just noticed that this is actually a good way to develop glibc for other archs.

In this example, we got away without recompiling the userland program because we made a change that did not affect the glibc ABI, see this answer for an introduction to ABI stability: https://stackoverflow.com/questions/2171177/what-is-an-application-binary-interface-abi/54967743#54967743

Note that for arch-agnostic features that don’t rely on bleeding edge kernel changes that your host doesn’t yet have, you can develop glibc natively as explained at:

Tested on a30ed0f047523ff2368d421ee2cce0800682c44e + 1.

Have you ever felt that a single inc instruction was not enough? Really? Me too!

So let’s hack the [gnu-gas-assembler], which is part of GNU Binutils, to add a new shiny version of inc called… myinc!

GCC uses GNU GAS as its backend, so we will test our new mnemonic with a [gcc-inline-assembly] test program: userland/arch/x86_64/binutils_hack.c, which is just a copy of userland/arch/x86_64/binutils_nohack.c but with myinc instead of inc.

The inline assembly is disabled with an #ifdef, so first modify the source to enable that.
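
The heart of the test is inline assembly along the following lines. This is a sketch under the assumption that the file follows the pattern just described; the actual source may differ:

#include <assert.h>

int main(void) {
    unsigned long long x = 1;
    /* The "+a" constraint pins x to RAX, so this emits "myinc %rax".
     * With a stock GAS this fails to assemble, hence the #ifdef guard. */
    __asm__ ("myinc %0" : "+a" (x));
    assert(x == 2);
    return 0;
}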

Then, try to build userland:

./build-userland

and watch it fail with:

binutils_hack.c:8: Error: no such instruction: `myinc %rax'

Now, edit the file

vim submodules/binutils-gdb/opcodes/i386-tbl.h

and add a copy of the "inc" instruction just next to it, but with the new name "myinc":

diff --git a/opcodes/i386-tbl.h b/opcodes/i386-tbl.h
index af583ce578..3cc341f303 100644
--- a/opcodes/i386-tbl.h
+++ b/opcodes/i386-tbl.h
@@ -1502,6 +1502,19 @@ const insn_template i386_optab[] =
     { { { 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
 	  0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0,
 	  1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 } } } },
+  { "myinc", 1, 0xfe, 0x0, 1,
+    { { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 } },
+    { 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+      0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
+      0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+      0, 0, 0, 0, 0, 0 },
+    { { { 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+	  0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0,
+	  1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0 } } } },
   { "sub", 2, 0x28, None, 1,
     { { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
         0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,

Finally, rebuild Binutils, userland and test our program with User mode simulation:

./build-buildroot -- host-binutils-rebuild
./build-userland --static
./run --static --userland userland/arch/x86_64/binutils_hack.c

and we see that myinc worked, since the assert did not fail!

Tested on b60784d59bee993bf0de5cde6c6380dd69420dda + 1.

OK, now time to hack GCC.

For convenience, let’s use the User mode simulation.

If we run the program userland/c/gcc_hack.c:

./build-userland --static
./run --static --userland userland/c/gcc_hack.c

it produces the normal boring output:

i = 2
j = 0
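
That output is consistent with a program along these lines (a sketch: the actual userland/c/gcc_hack.c may differ):

#include <stdio.h>

int main(void) {
    int i = 1;
    int j = 1;
    i++; /* postincrement: will act as a decrement after our hack */
    j--; /* postdecrement: will act as an increment after our hack */
    printf("i = %d\n", i);
    printf("j = %d\n", j);
    return 0;
}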

So how about we swap ++ and -- to make things more fun?

Open the file:

vim submodules/gcc/gcc/c/c-parser.c

and find the function c_parser_postfix_expression_after_primary.

In that function, swap case CPP_PLUS_PLUS and case CPP_MINUS_MINUS:

diff --git a/gcc/c/c-parser.c b/gcc/c/c-parser.c
index 101afb8e35f..89535d1759a 100644
--- a/gcc/c/c-parser.c
+++ b/gcc/c/c-parser.c
@@ -8529,7 +8529,7 @@ c_parser_postfix_expression_after_primary (c_parser *parser,
 		expr.original_type = DECL_BIT_FIELD_TYPE (field);
 	    }
 	  break;
-	case CPP_PLUS_PLUS:
+	case CPP_MINUS_MINUS:
 	  /* Postincrement.  */
 	  start = expr.get_start ();
 	  finish = c_parser_peek_token (parser)->get_finish ();
@@ -8548,7 +8548,7 @@ c_parser_postfix_expression_after_primary (c_parser *parser,
 	  expr.original_code = ERROR_MARK;
 	  expr.original_type = NULL;
 	  break;
-	case CPP_MINUS_MINUS:
+	case CPP_PLUS_PLUS:
 	  /* Postdecrement.  */
 	  start = expr.get_start ();
 	  finish = c_parser_peek_token (parser)->get_finish ();

Now rebuild GCC, the program and re-run it:

./build-buildroot -- host-gcc-final-rebuild
./build-userland --static
./run --static --userland userland/c/gcc_hack.c

and the new output is now:

i = 0
j = 2

We need to use the ugly -final thing because GCC has two packages in Buildroot, -initial and -final: https://stackoverflow.com/questions/54992977/how-to-select-an-override-srcdir-source-for-gcc-when-building-buildroot No one has been able to explain precisely, with a minimal example, why this is required.

This is our reference setup, and the best supported one, use it unless you have good reason not to.

It was historically the first one we did, and all sections have been tested with this setup unless explicitly noted.

Read the following sections for further introductory material:

One of the major features of this repository is that we try to support the --dry-run option really well for all scripts.

This option, as the name suggests, outputs the external commands that would be run (or more precisely: equivalent commands), without actually running them.

This allows you to just clone this repository and get full working commands to integrate into your project, without having to build or use this setup further!

For example, we can obtain a QEMU run for the file userland/c/hello.c in User mode simulation by adding --dry-run to the normal command:

./run --dry-run --userland userland/c/hello.c

which as of LKMC a18f28e263c91362519ef550150b5c9d75fa3679 + 1 outputs:

+ /path/to/linux-kernel-module-cheat/out/qemu/default/opt/x86_64-linux-user/qemu-x86_64 \
  -L /path/to/linux-kernel-module-cheat/out/buildroot/build/default/x86_64/target \
  -r 5.2.1 \
  -seed 0 \
  -trace enable=load_file,file=/path/to/linux-kernel-module-cheat/out/run/qemu/x86_64/0/trace.bin \
  -cpu max \
  /path/to/linux-kernel-module-cheat/out/userland/default/x86_64/c/hello.out \
;

So observe that the command contains:

  • +: sign to differentiate it from program stdout, much like bash -x output. This is not a valid part of the generated Bash command however.

  • the actual command nicely, indented and with arguments broken one per line, but with continuing backslashes so you can just copy paste into a terminal

  • ;: both a valid part of the Bash command, and a visual mark for the end of the command

For the specific case of running emulators such as QEMU, the last command is also automatically placed in a file for your convenience and later inspection:

cat "$(./getvar run_dir)/run.sh"

Since we need this so often, the last run command is also stored for convenience at:

cat out/run.sh

although this won’t of course work well for [simultaneous-runs].

Furthermore, --dry-run also automatically specifies, in valid Bash shell syntax:

  • environment variables used to run the command with syntax + ENV_VAR_1=abc ENV_VAR_2=def ./some/command

  • change in working directory with + cd /some/new/path && ./some/command

This setup is like the QEMU Buildroot setup, but it uses gem5 instead of QEMU as a system simulator.

QEMU tries to run as fast as possible and give correct results at the end, but it does not tell us how many CPU cycles it takes to do something, just the number of instructions it ran. This kind of simulation is known as functional simulation.

The number of instructions executed is a very poor estimator of performance because in modern computers, a lot of time is spent waiting for memory requests rather than the instructions themselves.

gem5 on the other hand, can simulate the system in more detail than QEMU, including:

  • simplified CPU pipeline

  • caches

  • DRAM timing

and can therefore be used to estimate system performance, see: Section 19.2, “gem5 run benchmark” for an example.

The downside of gem5 is that it is much slower than QEMU, because of the greater simulation detail.

See gem5 vs QEMU for a more thorough comparison.

For the most part, if you just add the --emulator gem5 option or *-gem5 suffix to all commands, everything should magically work.

If you haven’t built Buildroot yet for QEMU Buildroot setup, you can build from the beginning with:

./build --download-dependencies gem5-buildroot
./run --emulator gem5

If you have already built previously, don’t be afraid: gem5 and QEMU use almost the same root filesystem and kernel, so ./build will be fast.

Remember that the gem5 boot is considerably slower than QEMU since the simulation is more detailed.

If you have a relatively new GCC version and the gem5 build fails on your machine, see: gem5 build broken on recent compiler version.

To get a terminal, either open a new shell and run:

./gem5-shell

You can quit the shell without killing gem5 by typing tilde followed by a period:

~.

If you are inside tmux, which I highly recommend, you can both run gem5 stdout and open the guest terminal on a split window with:

./run --emulator gem5 --tmux

At the end of boot, it might not be very clear that you have the shell since some printk messages may appear in front of the prompt like this:

# <6>[    1.215329] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x1cd486fa865, max_idle_ns: 440795259574 ns
<6>[    1.215351] clocksource: Switched to clocksource tsc

but if you look closely, the PS1 prompt marker # is there already, just hit enter and a clear prompt line will appear.

If you forgot to open the shell and gem5 exits, you can inspect the terminal output post-mortem at:

less "$(./getvar --emulator gem5 m5out_dir)/system.pc.com_1.device"

More gem5 information is present at: Section 19, “gem5”

Good next steps are:

  • gem5 run benchmark: how to run a benchmark in gem5 full system, including how to boot Linux, checkpoint and restore to skip the boot on a fast CPU

  • m5out directory: understand the output files that gem5 produces, which contain information about your run

  • m5ops: magic guest instructions used to control gem5

  • [add-new-files-to-the-buildroot-image]: how to add your own files to the image if you have a benchmark that we don’t already support out of the box (also send a pull request!)

This repository has been tested inside clean Docker containers.

This is a good option if you are on a Linux host, but the native setup failed due to your weird host distribution, and you have better things to do with your life than to debug it. See also: [supported-hosts].

For example, to do a QEMU Buildroot setup inside Docker, run:

sudo apt-get install docker
./run-docker create && \
./run-docker sh -- ./build --download-dependencies qemu-buildroot
./run-docker sh

You are now left inside a shell in the Docker! From there, just run as usual:

./run

The host git top level directory is mounted inside the guest with a Docker volume, which means for example that you can use your host’s GUI text editor directly on the files. Just don’t forget that if you nuke that directory on the guest, then it gets nuked on the host as well!

Command breakdown:

  • ./run-docker create: create the image and container.

    Needed only the very first time you use Docker, or if you run ./run-docker DESTROY to restart from scratch, or to save some disk space.

    The image and container name is lkmc. The container shows under:

    docker ps -a

    and the image shows under:

    docker images
  • ./run-docker sh: open a shell on the container.

    If it has not been started previously, start it. This can also be done explicitly with:

    ./run-docker start

    Quit the shell as usual with Ctrl-D

    This can be called multiple times from different host terminals to open multiple shells.

  • ./run-docker stop: stop the container.

    This might save a bit of CPU and RAM once you stop working on this project, but it should not be a lot.

  • ./run-docker DESTROY: delete the container and image.

    This doesn’t really clean the build, since we mount the guest’s working directory on the host git top-level, so you basically just got rid of the apt-get installs.

    To actually delete the Docker build, run on host:

    # sudo rm -rf out.docker

To use GDB step debug from inside Docker, you need a second shell inside the container. You can either do that from another shell with:

./run-docker sh

or even better, by starting a tmux session inside the container. We install tmux by default in the container.

You can also start a second shell and run a command in it at the same time with:

./run-docker sh -- ./run-gdb start_kernel

To use QEMU graphic mode from Docker, run:

./run --graphic --vnc

and then on host:

sudo apt-get install vinagre
./vnc

TODO make files created inside Docker be owned by the current user in host instead of root:

This setup uses prebuilt binaries that we upload to GitHub from time to time.

We don’t currently provide a full prebuilt because it would be too big to host freely, notably because of the cross toolchain.

Our prebuilts currently include:

For more details, see our release procedure.

Advantage of this setup: saves time and disk space on the initial install, which is expensive largely due to building the toolchain.

The limitations are severe however:

  • can’t GDB step debug the kernel, since the source and cross toolchain with GDB are not available. Buildroot cannot easily use a host toolchain: [prebuilt-toolchain].

    Maybe we could work around this by just downloading the kernel source somehow, and using a host prebuilt GDB, but we felt that it would be too messy and unreliable.

  • you won’t get the latest version of this repository. Our [travis] attempt to automate builds failed, and storing a release for every commit would likely make GitHub mad at us anyway.

  • gem5 is not currently supported. The major blocking point is how to avoid distributing the kernel images twice: once for gem5 which uses vmlinux, and once for QEMU which uses arch/* images, see also:

This setup might be good enough for those developing simulators, as that requires less image modification. But once again, if you are serious about this, why not just let your computer build the full featured setup while you take a coffee or a nap? :-)

Checkout to the latest tag and use the Ubuntu packaged QEMU to boot Linux:

sudo apt-get install qemu-system-x86
git clone https://github.com/************/linux-kernel-module-cheat
cd linux-kernel-module-cheat
git checkout "$(git rev-list --tags --max-count=1)"
./release-download-latest
unzip lkmc-*.zip
./run --qemu-which host

You have to checkout to the latest tag to ensure that the scripts match the release format: https://stackoverflow.com/questions/1404796/how-to-get-the-latest-tag-name-in-current-branch-in-git

This is known not to work for aarch64 on an Ubuntu 16.04 host with QEMU 2.5.0, presumably because QEMU is too old: the terminal does not show any output. I haven’t investigated why.

Or to run a baremetal example instead:

./run \
  --arch aarch64 \
  --baremetal userland/c/hello.c \
  --qemu-which host \
;

Be saner and use our custom built QEMU instead:

./build --download-dependencies qemu
./run

This also allows you to modify QEMU if you’re into that sort of thing.

To build the kernel modules as in Your first kernel module hack do:

git submodule update --depth 1 --init --recursive "$(./getvar linux_source_dir)"
./build-linux --no-modules-install -- modules_prepare
./build-modules --gcc-which host
./run

TODO: for now the only way to test those modules out without building Buildroot is with 9p, since we currently rely on Buildroot to manipulate the root filesystem.

Command explanation:

  • modules_prepare does the minimal build procedure required on the kernel for us to be able to compile the kernel modules, and is way faster than doing a full kernel build. A full kernel build would also work however.

  • --gcc-which host selects your host Ubuntu packaged GCC, since you don’t have the Buildroot toolchain

  • --no-modules-install is required otherwise the make modules_install target we run by default fails, since the kernel wasn’t built

To modify the Linux kernel, build and use it as usual:

git submodule update --depth 1 --init --recursive "$(./getvar linux_source_dir)"
./build-linux
./run

THIS IS DANGEROUS (AND FUN), YOU HAVE BEEN WARNED

This method runs the kernel modules directly on your host computer without a VM, and saves you the compilation time and disk usage of the virtual machine method.

It has however severe limitations:

  • can’t control which kernel version and build options to use. So some of the modules will likely not compile because of kernel API changes, since the Linux kernel does not have a stable kernel module API.

  • bugs can easily break your system. E.g.:

    • segfaults can trivially lead to a kernel crash, and require a reboot

    • your disk could get erased. Yes, this can also happen with sudo from userland. But you should not use sudo when developing newbie programs. And for the kernel, you don’t have the choice of not using sudo.

    • even more subtle system corruption such as not being able to rmmod

  • can’t control which hardware is used, notably the CPU architecture

  • can’t step debug it with GDB easily. The alternatives are JTAG or KGDB, but those are less reliable, and require extra hardware.

Still interested?

./build-modules --host

Compilation will likely fail for some modules because of kernel or toolchain differences that we can’t control on the host.

The best workaround is to compile just your modules with:

./build-modules --host -- hello hello2

which is equivalent to:

./build-modules \
  --gcc-which host \
  --host \
  -- \
  kernel_modules/hello.c \
  kernel_modules/hello2.c \
;

Or just remove the .c extension from the failing files and try again:

cd "$(./getvar kernel_modules_source_dir)"
mv broken.c broken.c~

Once you manage to compile, and have come to terms with the fact that this may blow up your host, try it out with:

cd "$(./getvar kernel_modules_build_host_subdir)"
sudo insmod hello.ko

# Our module is there.
sudo lsmod | grep hello

# Last message should be: hello init
dmesg -T

sudo rmmod hello

# Last message should be: hello exit
dmesg -T

# Not present anymore
sudo lsmod | grep hello

Minimal host build system example:

cd hello_host_kernel_module
make
sudo insmod hello.ko
dmesg
sudo rmmod hello.ko
dmesg

In order to test the kernel and emulators, userland content in the form of executables and scripts is of course required, and we store it mostly under the userland/ directory.

When we started this repository, it only contained content that interacted very closely with the kernel, or that had required performance analysis.

However, we soon started to notice that this had an increasing overlap with other userland test repositories: we were duplicating build and test infrastructure and even some examples.

Therefore, we decided to consolidate other userland tutorials that we had scattered around into this repository.

Notable userland content included / moving into this repository includes:

There are several ways to run our [userland-content], notably:

With this setup, we will use the host toolchain and execute executables directly on the host.

No toolchain build is required, so you can just download your distro toolchain and jump straight into it.

Build an example, run it, and clean it, all in-tree, with:

sudo apt-get install gcc
cd userland
./build c/hello
./c/hello.out
./build --clean

Build an entire directory and test it:

cd userland
./build c
./test c

Build the current directory and test it:

cd userland/c
./build
./test

As mentioned at [userland-libs-directory], tests under userland/libs require certain optional libraries to be installed, and are not built or tested by default.

You can install those libraries with:

cd linux-kernel-module-cheat
./build --download-dependencies userland-host

and then build the examples and test with:

./build --package-all
./test --package-all

Pass custom compiler options:

./build --ccflags='-foptimize-sibling-calls -foptimize-strlen' --force-rebuild

Here we used --force-rebuild to force a rebuild, since the sources weren’t modified since the last build.

Some CLI options have more specialized flags, e.g. -O optimization level:

./build --optimization-level 3 --force-rebuild

See also User mode static executables for --static.

The build scripts inside userland/ are just symlinks to build-userland-in-tree which you can also use from toplevel as:

./build-userland-in-tree
./build-userland-in-tree userland/c
./build-userland-in-tree userland/c/hello.c

build-userland-in-tree is in turn just a thin wrapper around build-userland:

./build-userland --gcc-which host --in-tree userland/c

So you can freely use any option supported by the build-userland script with build-userland-in-tree and build.

The situation is analogous for userland/test, test-executables-in-tree and test-executables, which are further documented at: Section 10.2, “User mode tests”.

Do a more clean out-of-tree build instead and run the program:

./build-userland --gcc-which host --userland-build-id host
./run --emulator native --userland userland/c/hello.c --userland-build-id host

Here we:

  • put the host executables in a separate build-variant to avoid conflict with Buildroot builds.

  • ran with the --emulator native option to run the program natively

In this case you can debug the program with:

./run --debug-vm --emulator native --userland userland/c/hello.c --userland-build-id host

as shown at: Section 18.7, “Debug the emulator”, although direct GDB host usage works as well of course.

If you are too lazy to build the Buildroot toolchain and QEMU, but want to run e.g. ARM [userland-assembly] in User mode simulation, you can get away on Ubuntu 18.04 with just:

sudo apt-get install gcc-aarch64-linux-gnu qemu-system-aarch64
./build-userland \
  --arch aarch64 \
  --gcc-which host \
  --userland-build-id host \
;
./run \
  --arch aarch64 \
  --qemu-which host \
  --userland-build-id host \
  --userland userland/c/command_line_arguments.c \
  --cli-args 'asdf "qw er"' \
;

where:

This presents the usual trade-offs of using prebuilts, as mentioned at: Section 1.5, “Prebuilt setup”.

Other functionality is analogous, e.g. testing:

./test-executables \
  --arch aarch64 \
  --gcc-which host \
  --qemu-which host \
  --userland-build-id host \
;
./run \
  --arch aarch64 \
  --gdb \
  --gcc-which host \
  --qemu-which host \
  --userland-build-id host \
  --userland userland/c/command_line_arguments.c \
  --cli-args 'asdf "qw er"' \
;

First ensure that QEMU Buildroot setup is working.

After doing that setup, you can already execute your userland programs from inside QEMU: the only missing step is how to rebuild executables and run them.

And the answer is exactly analogous to what is shown at: Section 1.1.2.2, “Your first kernel module hack”

For example, if we modify userland/c/hello.c to print out something different, we can just rebuild it with:

./build-userland

Source: build-userland. ./build calls that script automatically for us when doing the initial full build.

Now run the program: either, without rebooting, use the 9P mount:

/mnt/9p/out_rootfs_overlay/c/hello.out

or shut down QEMU, add the executable to the root filesystem:

./build-buildroot

reboot and use the root filesystem as usual:

./hello.out

This setup does not use the Linux kernel nor Buildroot at all: it just runs your very own minimal OS.

x86_64 is not currently supported, only arm and aarch64: I had made some x86 bare metal examples at: https://github.com/************/x86-bare-metal-examples but I’m too lazy to port them here now. Pull requests are welcome.

The main reason this setup is included in this project, despite the word "Linux" being in the project name, is that a lot of the emulator boilerplate can be reused for both use cases.

This setup allows you to make a tiny OS that runs just a few instructions, use it to fully control the CPU to better understand the simulators, or develop your own OS if you are into that.

You can also use C and a subset of the C standard library because we enable Newlib by default. See also: https://electronics.stackexchange.com/questions/223929/c-standard-libraries-on-bare-metal/400077#400077
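
To give an idea of how Newlib makes functions like printf work without an OS: the C library bottoms out in a handful of syscall stubs that the platform must provide, typically backed by a memory-mapped UART. A minimal sketch, assuming a PL011 UART at QEMU's aarch64 virt address; this repo's actual glue code may differ:

/* Newlib _write stub: printf eventually funnels into this. */
#define UART0_DR ((volatile unsigned int *)0x09000000)

int _write(int fd, char *buf, int len) {
    (void)fd;
    for (int i = 0; i < len; i++)
        *UART0_DR = buf[i]; /* push each byte to the UART data register */
    return len;
}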

Our C bare-metal compiler is built with crosstool-NG. If you have already built Buildroot previously, you will end up with two GCCs installed. Unfortunately I don’t see a solution for this, since we need separate toolchains for Newlib on baremetal and glibc on Linux: https://stackoverflow.com/questions/38956680/difference-between-arm-none-eabi-and-arm-linux-gnueabi/38989869#38989869

Every .c file inside baremetal/ and .S file inside baremetal/arch/<arch>/ generates a separate baremetal image.

For example, to run baremetal/arch/aarch64/dump_regs.c in QEMU do:

./build --arch aarch64 --download-dependencies qemu-baremetal
./run --arch aarch64 --baremetal baremetal/arch/aarch64/dump_regs.c

And the terminal prints the values of certain system registers. This example prints registers that are only accessible from EL1 or higher, and thus could not be run in userland.
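
For a flavor of what such an example does: reading an EL1-only register on aarch64 boils down to an mrs instruction, as in this illustrative sketch (not the actual baremetal/arch/aarch64/dump_regs.c):

#include <stdio.h>

int main(void) {
    unsigned long long sctlr;
    /* SCTLR_EL1 is inaccessible at EL0, so this must run baremetal or in-kernel. */
    __asm__ ("mrs %0, sctlr_el1" : "=r" (sctlr));
    printf("SCTLR_EL1 0x%llx\n", sctlr);
    return 0;
}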

In addition to the examples under baremetal/, several of the userland examples can also be run in baremetal! This is largely due to the awesomeness of Newlib.

The examples that work include most C examples that don’t rely on complicated syscalls such as threads, and almost all the [userland-assembly] examples.

The exact list of userland programs that work in baremetal is specified in [path-properties] with the baremetal property, but you can also easily find it out with a baremetal test dry run:

./test-executables --arch aarch64 --dry-run --mode baremetal

For example, we can run the C hello world userland/c/hello.c simply as:

./run --arch aarch64 --baremetal userland/c/hello.c

and that outputs to the serial port the string:

hello

which QEMU shows on the host terminal.

To modify a baremetal program, simply edit the file, e.g.

vim userland/c/hello.c

and rebuild:

./build-baremetal --arch aarch64
./run --arch aarch64 --baremetal userland/c/hello.c

The ./build qemu-baremetal that we ran previously is only needed for the initial build. That script calls build-baremetal for us, in addition to building prerequisites such as QEMU and crosstool-NG.

./build-baremetal uses crosstool-NG, and so it must be preceded by build-crosstool-ng, which ./build qemu-baremetal also calls.

Now let’s run userland/arch/aarch64/add.S:

./run --arch aarch64 --baremetal userland/arch/aarch64/add.S

This time, the terminal does not print anything, which indicates success: if you look into the source, you will see that we just have an assertion there.
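
The pattern is the same as in this C analogue of such an assertion (a sketch using inline assembly; the actual userland/arch/aarch64/add.S is pure GAS assembly):

#include <assert.h>

int main(void) {
    unsigned long long out;
    /* aarch64 add: out = 1 + 2, then assert on the result. */
    __asm__ ("add %0, %1, %2" : "=r" (out) : "r" (1ULL), "r" (2ULL));
    assert(out == 3);
    return 0;
}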

You can see a sample assertion fail in userland/c/assert_fail.c:

./run --arch aarch64 --baremetal userland/c/assert_fail.c

and the terminal contains:

lkmc_exit_status_134
error: simulation error detected by parsing logs

and the exit status of our script is 1:

echo $?

You can run all the baremetal examples in one go and check that all assertions passed with:

./test-executables --arch aarch64 --mode baremetal

To use gem5 instead of QEMU do:

./build --download-dependencies gem5-baremetal
./run --arch aarch64 --baremetal userland/c/hello.c --emulator gem5

and then as usual open a shell with:

./gem5-shell

Or as usual, tmux users can do both in one go with:

./run --arch aarch64 --baremetal userland/c/hello.c --emulator gem5 --tmux

TODO: the carriage returns are a bit different than in QEMU, see: [gem5-baremetal-carriage-return].

Note that ./build-baremetal requires the --emulator gem5 option, and generates separate executable images for each emulator, as can be seen from:

echo "$(./getvar --arch aarch64 --baremetal userland/c/hello.c --emulator qemu image)"
echo "$(./getvar --arch aarch64 --baremetal userland/c/hello.c --emulator gem5 image)"

This is unlike the Linux kernel that has a single image for both QEMU and gem5:

echo "$(./getvar --arch aarch64 --emulator qemu image)"
echo "$(./getvar --arch aarch64 --emulator gem5 image)"

The reason for that is that on baremetal we don’t parse the device tree from memory like the Linux kernel does, which is what tells the kernel for example the UART address, and many other system parameters.

gem5 also supports the RealViewPBX machine, which represents older hardware than the default VExpress_GEM5_V1:

./build-baremetal --arch aarch64 --emulator gem5 --machine RealViewPBX
./run --arch aarch64 --baremetal userland/c/hello.c --emulator gem5 --machine RealViewPBX

This generates yet new separate images with new magic constants:

echo "$(./getvar --arch aarch64 --baremetal userland/c/hello.c --emulator gem5 --machine VExpress_GEM5_V1 image)"
echo "$(./getvar --arch aarch64 --baremetal userland/c/hello.c --emulator gem5 --machine RealViewPBX      image)"

But just stick to newer and better VExpress_GEM5_V1 unless you have a good reason to use RealViewPBX.

When doing baremetal programming, it is likely that you will want to learn userland assembly first, see: [userland-assembly].

For more information on baremetal, see the section: [baremetal].

The following subjects are particularly important:

You don’t need to depend on GitHub.

For a quick and dirty build, install Asciidoctor however you like and build:

asciidoctor README.adoc
xdg-open README.html

For development, you will want to do a more controlled build with extra error checking as follows.

For the initial build do:

./build --download-dependencies docs

which also downloads build dependencies.

Then for subsequent builds, just do the faster:

./build-doc

Source: build-doc

The HTML output is located at:

xdg-open out/README.html

More information about our documentation internals can be found at: [documentation]

--gdb-wait makes QEMU and gem5 wait for a GDB connection, otherwise we could accidentally go past the point we want to break at:

./run --gdb-wait

Say you want to break at start_kernel. So on another shell:

./run-gdb start_kernel

or at a given line:

./run-gdb init/main.c:1088

Now QEMU will stop there, and you can use the normal GDB commands:

list
next
continue

See also:

Just don’t forget to pass --arch to ./run-gdb, e.g.:

./run --arch aarch64 --gdb-wait

and:

./run-gdb --arch aarch64 start_kernel

O=0 is an impossible dream, O=2 being the default.

So get ready for some weird jumps, and <value optimized out> fun. Why, Linux, why.

Let’s observe the kernel write system call as it reacts to some userland actions.

Start QEMU with just:

./run

and after boot inside a shell run:

./count.sh

which counts to infinity to stdout. Source: rootfs_overlay/lkmc/count.sh.
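
count.sh is a shell loop, but its effect is the same as this C sketch (illustrative, not the script's actual contents): every line printed goes through the write system call, which is exactly what we are about to break on:

#include <stdio.h>

int main(void) {
    /* Each printed line ends up as one write(2) system call,
     * so each GDB "continue" below lets one more number through. */
    for (unsigned long i = 0; ; i++)
        printf("%lu\n", i);
}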

Then in another shell, run:

./run-gdb

and then hit:

Ctrl-C
break __x64_sys_write
continue
continue
continue

And you now control the counting on the first shell from GDB!

Before v4.17, the symbol name was just sys_write; the change happened at d5a00528b58cdb2c71206e18bd021e34c4eab878. As of Linux v4.19, the function is called sys_write on arm, and __arm64_sys_write on aarch64. One good way to find it if the name changes again is to try:

rbreak .*sys_write

or just have a quick look at the sources!

When you hit Ctrl-C, if we happen to be inside kernel code at that point, which is very likely if there are no heavy background tasks and we are just waiting on a sleep-type system call of the command prompt, we can already see the source for the random place inside the kernel where we stopped.

tmux just makes things even more fun by allowing us to see both the QEMU terminal and the GDB terminal at once, without dragging windows around!

First start tmux with:

tmux

Now that you are inside a shell inside tmux, you can start GDB simply with:

./run --gdb

which is just a convenient shortcut for:

./run --gdb-wait --tmux --tmux-args start_kernel

This splits the terminal into two panes:

  • left: usual QEMU with terminal

  • right: GDB

and focuses on the GDB pane.

Now you can navigate with the usual tmux shortcuts:

  • switch between the two panes with: Ctrl-B O

  • close either pane by killing its terminal with Ctrl-D as usual

See the tmux manual for further details:

man tmux

To start again, switch back to the QEMU pane with Ctrl-O, kill the emulator, and re-run:

./run --gdb

This automatically clears the GDB pane, and starts a new one.

The option --tmux-args determines which options will be passed to the program running on the second tmux pane.

This is equivalent to:

./run --gdb-wait
./run-gdb start_kernel

Due to Python’s CLI parsing quirks, if the run-gdb arguments start with a dash -, you have to use the = sign, e.g. to GDB step debug early boot:

./run --gdb --tmux-args=--no-continue

If you are using gem5 instead of QEMU, --tmux has a different effect by default: it opens the gem5 terminal instead of the debugger:

./run --emulator gem5 --tmux

To open a new pane with GDB instead of the terminal, use:

./run --gdb

which is equivalent to:

./run --emulator gem5 --gdb-wait --tmux --tmux-args start_kernel --tmux-program gdb

--tmux-program implies --tmux, so we can just write:

./run --emulator gem5 --gdb-wait --tmux-program gdb

If you also want to see both GDB and the terminal with gem5, then you will need to open a separate shell manually as usual with ./gem5-shell.

From inside tmux, you can create new terminals on a new window with Ctrl-B C split a pane yet again vertically with Ctrl-B % or horizontally with Ctrl-B ".

Loadable kernel modules are a bit trickier since the kernel can place them at different memory locations depending on load order.

So we cannot set the breakpoints before insmod.

However, the Linux kernel GDB scripts offer the lx-symbols command, which takes care of that beautifully for us.

Shell 1:

./run

Wait for the boot to end and run:

insmod timer.ko

This prints a message to dmesg every second.
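
Here is a sketch of what such a timer module plausibly looks like with the post-4.15 timer API; the actual kernel_modules/timer.c may differ (e.g. the GDB session shown earlier uses an older data-pointer callback signature):

#include <linux/jiffies.h>
#include <linux/module.h>
#include <linux/timer.h>

static struct timer_list mytimer;

static void lkmc_timer_callback(struct timer_list *timer)
{
    pr_info("%lu\n", jiffies);
    mod_timer(&mytimer, jiffies + HZ); /* re-arm: fire again in one second */
}

static int myinit(void)
{
    timer_setup(&mytimer, lkmc_timer_callback, 0);
    mod_timer(&mytimer, jiffies + HZ);
    return 0;
}

static void myexit(void)
{
    del_timer_sync(&mytimer);
}

module_init(myinit);
module_exit(myexit);
MODULE_LICENSE("GPL");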

Shell 2:

./run-gdb

In GDB, hit Ctrl-C, and note how it says:

scanning for modules in /root/linux-kernel-module-cheat/out/kernel_modules/x86_64/kernel_modules
loading @0xffffffffc0000000: /root/linux-kernel-module-cheat/out/kernel_modules/x86_64/kernel_modules/timer.ko

That’s lx-symbols working! Now simply:

break lkmc_timer_callback
continue
continue
continue

and we now control the callback from GDB!

Just don’t forget to remove your breakpoints after rmmod, or they will point to stale memory locations.

TODO: why does break work_func for insmod kthread.ko not work very well? Sometimes it breaks, but other times it doesn’t.

TODO on arm 51e31cdc2933a774c2a0dc62664ad8acec1d2dbe it does not always work, and lx-symbols fails with the message:

loading vmlinux
Traceback (most recent call last):
  File "/linux-kernel-module-cheat//out/arm/buildroot/build/linux-custom/scripts/gdb/linux/symbols.py", line 163, in invoke
    self.load_all_symbols()
  File "/linux-kernel-module-cheat//out/arm/buildroot/build/linux-custom/scripts/gdb/linux/symbols.py", line 150, in load_all_symbols
    [self.load_module_symbols(module) for module in module_list]
  File "/linux-kernel-module-cheat//out/arm/buildroot/build/linux-custom/scripts/gdb/linux/symbols.py", line 110, in load_module_symbols
    module_name = module['name'].string()
gdb.MemoryError: Cannot access memory at address 0xbf0000cc
Error occurred in Python command: Cannot access memory at address 0xbf0000cc

Can’t reproduce on x86_64 or aarch64: those are fine.

It is kind of random: if you just insmod manually and then immediately ./run-gdb --arch arm, then it usually works.

But this fails most of the time: shell 1:

./run --arch arm --eval-after 'insmod hello.ko'

shell 2:

./run-gdb --arch arm

then hit Ctrl-C on shell 2, and voila.

Then:

cat /proc/modules

says that the load address is:

0xbf000000

so it is close to the failing 0xbf0000cc.

readelf:

./run-toolchain readelf -- -s "$(./getvar kernel_modules_build_subdir)/hello.ko"

does not give any interesting hits at offset 0xcc: no symbol was placed that far.

TODO find a more convenient method. We have working methods, but they are not ideal.

This is not very easy, since by the time the module finishes loading, and lx-symbols can work properly, module_init has already finished running!

Possibly asked at:

This is the best method we’ve found so far.

The kernel calls module_init synchronously, therefore it is not hard to step into that call.

As of 4.16, the call happens in do_one_initcall, so we can do in shell 1:

./run

shell 2 after boot finishes (because there are other calls to do_one_initcall at boot, presumably for the built-in modules):

./run-gdb do_one_initcall

then step until the line:

833         ret = fn();

which does the actual call, and then step into it.

For the next time, you can also put a breakpoint there directly:

./run-gdb init/main.c:833

How we found this out: first we got GDB module_init calculate entry address working, and then we did a bt. AKA cheating :-)

This works, but is a bit annoying.

The key observation is that the load address of kernel modules is deterministic: there is a pre-allocated memory region, the "module mapping space" (https://www.kernel.org/doc/Documentation/x86/x86_64/mm.txt), filled from the bottom up.

So once we find the address the first time, we can just reuse it afterwards, as long as we don’t modify the module.

Do a fresh boot and get the module:

./run --eval-after './pr_debug.sh;insmod fops.ko;./linux/poweroff.out'

The boot must be fresh, because the load address changes every time we insert, even after removing previous modules.

The base address shows on terminal:

0xffffffffc0000000 .text

Now let’s find the offset of myinit:

./run-toolchain readelf -- \
  -s "$(./getvar kernel_modules_build_subdir)/fops.ko" | \
  grep myinit

which gives:

    30: 0000000000000240    43 FUNC    LOCAL  DEFAULT    2 myinit

so the offset address is 0x240 and we deduce that the function will be placed at:

0xffffffffc0000000 + 0x240 = 0xffffffffc0000240

Now we can just do a fresh boot on shell 1:

./run --eval 'insmod fops.ko;./linux/poweroff.out' --gdb-wait

and on shell 2:

./run-gdb '*0xffffffffc0000240'

GDB then breaks, and lx-symbols works.

TODO not working. This could be potentially very convenient.

The idea here is to break at a point late enough inside sys_init_module, at which point lx-symbols can be called and do its magic.

Beware that there are both sys_init_module and sys_finit_module syscalls, and insmod uses finit_module by default.

Both call do_init_module however, which is what lx-symbols hooks into.

If we try:

b sys_finit_module

then hitting:

n

does not break, and insertion happens, likely because of optimizations? See: Disable kernel compiler optimizations.

Then we try:

b do_init_module

A naive:

fin

also fails to break!

Finally, in despair we notice that pr_debug prints the kernel load address as explained at Bypass lx-symbols.

So, if we set a breakpoint just after that message is printed by searching where that happens on the Linux source code, we must be able to get the correct load address before init_module happens.

This is another possibility: we could modify the module source by adding a trap instruction of some kind.

This appears to be described at: https://www.linuxjournal.com/article/4525

But it refers to a gdbstart script which is not in the tree anymore and beyond my git log capabilities.

And just adding:

asm( " int $3");

directly gives an oops as I’d expect.

Useless, but a good way to show how hardcore you are. Disable lx-symbols with:

./run-gdb --no-lxsymbols

From inside guest:

insmod timer.ko
cat /proc/modules

as mentioned at:

This will give a line of form:

fops 2327 0 - Live 0xfffffffa00000000

And then tell GDB where the module was loaded with:

Ctrl-C
add-symbol-file ../../../rootfs_overlay/x86_64/timer.ko 0xffffffffc0000000

Alternatively, if the module panics before you can read /proc/modules, there is a pr_debug which shows the load address:

echo 8 > /proc/sys/kernel/printk
echo 'file kernel/module.c +p' > /sys/kernel/debug/dynamic_debug/control
./linux/myinsmod.out hello.ko

And then search for a line of type:

[   84.877482]  0xfffffffa00000000 .text

Tested on 4f4749148273c282e80b58c59db1b47049e190bf + 1.

TODO successfully debug the very first instruction that the Linux kernel runs, before start_kernel!

Break at the very first instruction executed by QEMU:

./run-gdb --no-continue

TODO why can’t we break at early startup stuff such as:

./run-gdb extract_kernel
./run-gdb main

Maybe it is because they are being copied around at specific locations instead of being run directly from inside the main image, which is where the debug information points to?

gem5 tracing with --debug-flags=Exec does show the right symbols however! So in the worst case, we can just read their source. Amazing.

v4.19 also added a CONFIG_HAVE_KERNEL_UNCOMPRESSED=y option for having the kernel uncompressed which could make following the startup easier, but it is only available on s390. aarch64 however is already uncompressed by default, so might be the easiest one. See also: Section 15.21.1, “vmlinux vs bzImage vs zImage vs Image”.

One possibility is to run:

./trace-boot --arch arm

and then find the second address (the first one does not work, already too late maybe):

less "$(./getvar --arch arm trace_txt_file)"

and break there:

./run --arch arm --gdb-wait
./run-gdb --arch arm '*0x1000'

but TODO: it does not show the source assembly under arch/arm: https://stackoverflow.com/questions/11423784/qemu-arm-linux-kernel-boot-debug-no-source-code

I also tried to hack run-gdb with:

@@ -81,7 +81,7 @@ else
 ${gdb} \
 -q \\
 -ex 'add-auto-load-safe-path $(pwd)' \\
--ex 'file vmlinux' \\
+-ex 'file arch/arm/boot/compressed/vmlinux' \\
 -ex 'target remote localhost:${port}' \\
 ${brk} \
 -ex 'continue' \\

and now I do have the symbols from arch/arm/boot/compressed/vmlinux, but the breaks still don’t work.

When booting Linux on a slow emulator like gem5, what you observe is that:

  • first nothing shows for a while

  • then a bunch of message lines show at once, followed on aarch64 Linux 5.4.3 by:

    [    0.081311] printk: console [ttyAMA0] enabled

This means of course that all the previous messages had been generated earlier and stored, but were only printed to the terminal once the terminal itself was enabled.

Notably for example the very first message:

[    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd070]

happens very early in the boot process.

If you get a failure before that, it will be hard to see the print messages.

One possible solution is to parse the dmesg buffer directly from memory; gem5 actually implements that: see the gem5 m5out/system.workload.dmesg file.

QEMU’s -gdb GDB breakpoints are set on virtual addresses, so you can in theory debug userland processes as well.

You will generally want to use gdbserver for this as it is more reliable, but this method can overcome the following limitations of gdbserver:

  • the emulator does not support host to guest networking. This seems to be the case for gem5 as explained at: Section 14.3.1.3, “gem5 host to guest networking”

  • cannot see the start of the init process easily

  • gdbserver alters the working of the kernel, and makes your run less representative

Known limitations of direct userland debugging:

  • the kernel might switch context to another process or to the kernel itself e.g. on a system call, and then TODO confirm: the PC would go to weird places and source code would be missing.

    Solutions to this are being researched at: Section 2.10.1, “lx-ps”.

  • TODO step into shared libraries. If I attempt to load them explicitly:

    (gdb) sharedlibrary ../../staging/lib/libc.so.0
    No loaded shared libraries match the pattern `../../staging/lib/libc.so.0'.

    since GDB does not know that libc is loaded.

This is the userland debug setup most likely to work, since at init time there is only one userland executable running.

For executables from the userland/ directory such as userland/posix/count.c:

  • Shell 1:

    ./run --gdb-wait --kernel-cli 'init=/lkmc/posix/count.out'
  • Shell 2:

    ./run-gdb --userland userland/posix/count.c main

    Alternatively, we could also pass the full path to the executable:

    ./run-gdb --userland "$(./getvar userland_build_dir)/posix/count.out" main

    Path resolution is analogous to that of ./run --baremetal.

Then, as soon as boot ends, we are left inside a debug session that looks just like what gdbserver would produce.

BusyBox custom init process:

  • Shell 1:

    ./run --gdb-wait --kernel-cli 'init=/bin/ls'
  • Shell 2:

    ./run-gdb --userland "$(./getvar buildroot_build_build_dir)"/busybox-*/busybox ls_main

This follows BusyBox' convention of calling the main for each executable as <exec>_main since the busybox executable has many "mains".

BusyBox default init process:

  • Shell 1:

    ./run --gdb-wait
  • Shell 2:

    ./run-gdb --userland "$(./getvar buildroot_build_build_dir)"/busybox-*/busybox init_main

init cannot be debugged with gdbserver without modifying the source, or else /sbin/init exits early with:

"must be run as PID 1"

Non-init process:

  • Shell 1:

    ./run --gdb-wait
  • Shell 2:

    ./run-gdb --userland userland/linux/rand_check.c main
  • Shell 1 after the boot finishes:

    ./linux/rand_check.out

This is the least reliable setup as there might be other processes that use the given virtual address.

TODO: if I try to GDB step debug a userland non-init process without --gdb-wait, the break main that we do inside ./run-gdb says:

Cannot access memory at address 0x10604

and then GDB never breaks. Tested at ac8663a44a450c3eadafe14031186813f90c21e4 + 1.

The exact behaviour seems to depend on the architecture:

  • arm: happens always

  • x86_64: appears to happen only if you try to connect GDB as fast as possible, before init has been reached.

  • aarch64: could not observe the problem

We have also double checked the address with:

./run-toolchain --arch arm readelf -- \
  -s "$(./getvar --arch arm userland_build_dir)/linux/myinsmod.out" | \
  grep main

and from GDB:

info line main

and both give:

000105fc

which is just 8 bytes before 0x10604.

gdbserver also says 0x10604.

However, if we do a Ctrl-C in GDB, and then a direct:

b *0x000105fc

it works. Why?!

On gem5, x86 can also give the Cannot access memory at address error, so maybe it is also unreliable on QEMU, and works just by coincidence.

However, the GDB call command is failing for us:

  • some symbols are not visible to call even though b sees them

  • for those that are, call fails with an E14 error

E.g.: if we break on __x64_sys_write on count.sh:

>>> call printk(0, "asdf")
Could not fetch register "orig_rax"; remote failure reply 'E14'
>>> b printk
Breakpoint 2 at 0xffffffff81091bca: file kernel/printk/printk.c, line 1824.
>>> call fdget_pos(fd)
No symbol "fdget_pos" in current context.
>>> b fdget_pos
Breakpoint 3 at 0xffffffff811615e3: fdget_pos. (9 locations)
>>>

even though fdget_pos is the first thing __x64_sys_write does:

581 SYSCALL_DEFINE3(write, unsigned int, fd, const char __user *, buf,
582         size_t, count)
583 {
584     struct fd f = fdget_pos(fd);

I also noticed that I get the same error:

Could not fetch register "orig_rax"; remote failure reply 'E14'

when trying to use:

fin

on many (all?) functions.

See also: ************#19

For a more minimal baremetal multicore setup, see: [arm-baremetal-multicore].

We can set and get which cores the Linux kernel allows a program to run on with sched_getaffinity and sched_setaffinity:

./run --cpus 2 --eval-after './linux/sched_getaffinity.out'

Sample output:

sched_getaffinity = 1 1
sched_getcpu = 1
sched_getaffinity = 1 0
sched_getcpu = 0

Which shows us that:

  • initially:

    • all 2 cores were enabled as shown by sched_getaffinity = 1 1

    • the process was randomly assigned to run on core 1 (the second one) as shown by sched_getcpu = 1. If we run this several times, it will also run on core 0 sometimes.

  • then we restrict the affinity to just core 0, and we see that the program was actually moved to core 0
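For reference, here is a minimal sketch of what such a program can look like; the actual userland/linux/sched_getaffinity.c in this repo may differ in details:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;

    /* Print the affinity mask (one 0/1 per core) and the current core. */
    sched_getaffinity(0, sizeof(set), &set);
    printf("sched_getaffinity = %d %d\n",
        CPU_ISSET(0, &set) ? 1 : 0, CPU_ISSET(1, &set) ? 1 : 0);
    printf("sched_getcpu = %d\n", sched_getcpu());

    /* Restrict ourselves to core 0 only, then check again. */
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    sched_setaffinity(0, sizeof(set), &set);
    sched_getaffinity(0, sizeof(set), &set);
    printf("sched_getaffinity = %d %d\n",
        CPU_ISSET(0, &set) ? 1 : 0, CPU_ISSET(1, &set) ? 1 : 0);
    printf("sched_getcpu = %d\n", sched_getcpu());
    return 0;
}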

The number of cores is modified as explained at: Section 19.2.2.1, “Number of cores”

taskset from the util-linux package sets the initial core affinity of a program:

./build-buildroot \
  --config 'BR2_PACKAGE_UTIL_LINUX=y' \
  --config 'BR2_PACKAGE_UTIL_LINUX_SCHEDUTILS=y' \
;
./run --eval-after 'taskset -c 1,1 ./linux/sched_getaffinity.out'

output:

sched_getaffinity = 0 1
sched_getcpu = 1
sched_getaffinity = 1 0
sched_getcpu = 0

so we see that the affinity was restricted to the second core from the start.

Let’s do a QEMU observation that justifies this example being in the repository, using userland breakpoints.

We will run our ./linux/sched_getaffinity.out infinitely many times, alternately on core 0 and core 1:

./run \
  --cpus 2 \
  --eval-after 'i=0; while true; do taskset -c $i,$i ./linux/sched_getaffinity.out; i=$((! $i)); done' \
  --gdb-wait \
;

on another shell:

./run-gdb --userland "$(./getvar userland_build_dir)/linux/sched_getaffinity.out" main

Then, inside GDB:

(gdb) info threads
  Id   Target Id         Frame
* 1    Thread 1 (CPU#0 [running]) main () at sched_getaffinity.c:30
  2    Thread 2 (CPU#1 [halted ]) native_safe_halt () at ./arch/x86/include/asm/irqflags.h:55
(gdb) c
(gdb) info threads
  Id   Target Id         Frame
  1    Thread 1 (CPU#0 [halted ]) native_safe_halt () at ./arch/x86/include/asm/irqflags.h:55
* 2    Thread 2 (CPU#1 [running]) main () at sched_getaffinity.c:30
(gdb) c

and we observe that info threads shows the actual correct core on which the process was restricted to run by taskset!

TODO we then tried:

./run --cpus 2 --eval-after './linux/sched_getaffinity_threads.out'

and:

./run-gdb --userland "$(./getvar userland_build_dir)/linux/sched_getaffinity_threads.out"

to switch between two simultaneous live threads with different affinities, it just didn’t break on our threads:

b main_thread_0

Bibliography:

We source the Linux kernel GDB scripts by default for lx-symbols, but they also contain some other goodies worth looking into.

Those scripts basically parse some in-kernel data structures to offer greater visibility with GDB.

All defined commands are prefixed by lx-, so to get a full list just try to tab complete that.

There aren’t as many as I’d like, and the ones that do exist are pretty self-explanatory, but let’s give a few examples.

Show dmesg and the kernel command line:

lx-dmesg
lx-cmdline

Dump the device tree to a fdtdump.dtb file in the current directory:

lx-fdtdump
pwd

List inserted kernel modules:

lx-lsmod

Sample output:

Address            Module                  Size  Used by
0xffffff80006d0000 hello                  16384  0

Bibliography:

List all processes:

lx-ps

Sample output:

0xffff88000ed08000 1 init
0xffff88000ed08ac0 2 kthreadd

The second and third fields are obviously PID and process name.

The first one is more interesting, and contains the address of the task_struct in memory.

This can be confirmed with:

p *(struct task_struct *)0xffff88000ed08000

which contains the correct PID for all threads I’ve tried:

pid = 1,

TODO get the PC of the kthreads: https://stackoverflow.com/questions/26030910/find-program-counter-of-process-in-kernel Then we would be able to see where the threads are stopped in the code!

On ARM, I tried:

task_pt_regs((struct thread_info *)((struct task_struct)*0xffffffc00e8f8000))->uregs[ARM_pc]

but task_pt_regs is a #define and GDB cannot see defines without -ggdb3: https://stackoverflow.com/questions/2934006/how-do-i-print-a-defined-constant-in-gdb which is apparently not set?

Bibliography:

For when it breaks again, or you want to add a new feature!

./run --debug
./run-gdb --before '-ex "set remotetimeout 99999" -ex "set debug remote 1"' start_kernel

The Remote 'g' packet reply is too long error means that the GDB server, e.g. in QEMU, sent more registers than the GDB client expected.

This can happen for the following reasons:

KGDB is kernel dark magic that allows you to GDB the kernel on real hardware without any extra hardware support.

It is useless with QEMU since we already have full system visibility with -gdb. So the goal of this setup is just to prepare you for what to expect when you are in the trenches of real hardware.

KGDB is cheaper than JTAG (free) and easier to set up (all you need is serial), but with less visibility, since it depends on the kernel working: e.g. it dies on panic and does not see the boot sequence.

First run the kernel with:

./run --kgdb

this passes the following options on the kernel CLI:

kgdbwait kgdboc=ttyS1,115200

kgdbwait tells the kernel to wait for KGDB to connect.

So the kernel sets things up enough for KGDB to start working, and then boot pauses waiting for connection:

<6>[    4.866050] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
<6>[    4.893205] 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
<6>[    4.916271] 00:06: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
<6>[    4.987771] KGDB: Registered I/O driver kgdboc
<2>[    4.996053] KGDB: Waiting for connection from remote gdb...

Entering kdb (current=0x(____ptrval____), pid 1) on processor 0 due to Keyboard Entry
[0]kdb>

KGDB expects the connection at ttyS1, our second serial port after ttyS0 which contains the terminal.

The last line is the KDB prompt, and is covered at: Section 3.3, “KDB”. Typing now shows nothing because that prompt is expecting input from ttyS1.

Instead, we connect to the serial port ttyS1 with GDB:

./run-gdb --kgdb --no-continue

Once GDB connects, it is left inside the function kgdb_breakpoint.

So now we can set breakpoints and continue as usual.

For example, in GDB:

continue

Then in QEMU:

./count.sh &
./kgdb.sh

rootfs_overlay/lkmc/kgdb.sh pauses the kernel for KGDB, and gives control back to GDB.

And now in GDB we do the usual:

break __x64_sys_write
continue
continue
continue
continue

And now you can count from KGDB!

If you do: break __x64_sys_write immediately after ./run-gdb --kgdb, it fails with KGDB: BP remove failed: <address>. I think this is because it would break too early in the boot sequence, and KGDB is not yet ready.

See also:

TODO: we would need a second serial for KGDB to work, but it is not currently supported on arm and aarch64 with -M virt that we use: https://unix.stackexchange.com/questions/479085/can-qemu-m-virt-on-arm-aarch64-have-multiple-serial-ttys-like-such-as-pl011-t/479340#479340

One possible workaround for this would be to use KDB ARM.

Debugging kernel modules with KGDB just works as you would expect:

insmod timer.ko
./kgdb.sh

In GDB:

break lkmc_timer_callback
continue
continue
continue

and you now control the count.

KDB is a way to use KGDB directly in your main console, without GDB.

Advantage over KGDB: you can do everything in one serial. This can actually be important if you only have one serial port for both the shell and debugging.

Disadvantage: not as much functionality as GDB, especially when you use Python scripts. Notably, TODO confirm: you can’t see the kernel source code and line step as from GDB, since the kernel source is not available on the guest (ah, if only debugging information supported full source, or if the kernel had a crazy mechanism to embed it).

Run QEMU as:

./run --kdb

This passes kgdboc=ttyS0 to the Linux CLI, therefore using our main console. Then in QEMU:

[0]kdb> go

And now the kdb> prompt is responsive because it is listening to the main console.

After boot finishes, run the usual:

./count.sh &
./kgdb.sh

And you are back in KDB. Now you can count with:

[0]kdb> bp __x64_sys_write
[0]kdb> go
[0]kdb> go
[0]kdb> go
[0]kdb> go

And you will break whenever __x64_sys_write is hit.

You can see further commands with:

[0]kdb> help

The other KDB commands allow you to step instructions, view memory, registers and some higher level kernel runtime data similar to the superior GDB Python scripts.

You can also use KDB directly from the graphic window with:

./run --graphic --kdb

This setup could be used to debug the kernel on machines without serial, such as modern desktops.

This works because --graphic adds kbd (which stands for KeyBoarD!) to kgdboc.

TODO neither arm and aarch64 are working as of 1cd1e58b023791606498ca509256cc48e95e4f5b + 1.

arm seems to place and hit the breakpoint correctly, but no matter how many go commands I do, the count.sh stdout simply does not show.

aarch64 seems to place the breakpoint correctly, but after the first go the kernel oopses with warning:

WARNING: CPU: 0 PID: 46 at /root/linux-kernel-module-cheat/submodules/linux/kernel/smp.c:416 smp_call_function_many+0xdc/0x358

and stack trace:

smp_call_function_many+0xdc/0x358
kick_all_cpus_sync+0x30/0x38
kgdb_flush_swbreak_addr+0x3c/0x48
dbg_deactivate_sw_breakpoints+0x7c/0xb8
kgdb_cpu_enter+0x284/0x6a8
kgdb_handle_exception+0x138/0x240
kgdb_brk_fn+0x2c/0x40
brk_handler+0x7c/0xc8
do_debug_exception+0xa4/0x1c0
el1_dbg+0x18/0x78
__arm64_sys_write+0x0/0x30
el0_svc_handler+0x74/0x90
el0_svc+0x8/0xc

My theory is that every serious ARM developer has JTAG, and no one ever tests this, and the kernel code is just broken.

Step debug userland processes to understand how they are talking to the kernel.

First build gdbserver into the root filesystem:

./build-buildroot --config 'BR2_PACKAGE_GDB=y'

Then on the guest, to debug userland/c/command_line_arguments.c:

./gdbserver.sh ./c/print_argv.out asdf qwer

And on host:

./run-gdb --gdbserver --userland userland/c/command_line_arguments.c main

or alternatively with the path to the executable itself:

./run-gdb --gdbserver --userland "$(./getvar userland_build_dir)/c/print_argv.out" main

To debug a BusyBox applet such as ls instead, run on the guest:

./gdbserver.sh ls

and on the host you need:

./run-gdb --gdbserver --userland "$(./getvar buildroot_build_build_dir)"/busybox-*/busybox ls_main

Our setup gives you the rare opportunity to step debug libc and other system libraries.

For example in the guest:

./gdbserver.sh ./posix/count.out

Then on host:

./run-gdb --gdbserver --userland userland/posix/count.c main

and inside GDB:

break sleep
continue

And you are now left inside the sleep function of our default libc implementation uclibc libc/unistd/sleep.c!

You can also step into the sleep call:

step

This is made possible by the GDB command that we use by default:

set sysroot ${common_buildroot_build_dir}/staging

which automatically finds unstripped shared libraries on the host for us.

The portability of the kernel and toolchains is amazing: change an option and most things magically work on completely different hardware.

To use arm instead of x86 for example:

./build-buildroot --arch arm
./run --arch arm

Debug:

./run --arch arm --gdb-wait
# On another terminal.
./run-gdb --arch arm

We also have one-letter shorthand names for the architectures in the --arch option:

# aarch64
./run -a A
# arm
./run -a a
# x86_64
./run -a x

Known quirks of the supported architectures are documented in this section.

This example illustrates how reading from the x86 control registers with mov crX, rax can only be done from kernel land in ring 0.

From kernel land:

insmod ring0.ko

works and outputs the registers, for example:

cr0 = 0xFFFF880080050033
cr2 = 0xFFFFFFFF006A0008
cr3 = 0xFFFFF0DCDC000

However if we try to do it from userland:

./ring0.out

stdout gives:

Segmentation fault

and dmesg outputs:

traps: ring0.out[55] general protection ip:40054c sp:7fffffffec20 error:0 in ring0.out[400000+1000]

Sources:

In both cases, we attempt to run the exact same code which is shared on the ring0.h header file.
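For reference, this is a minimal sketch of the kind of GCC inline assembly such shared code relies on; the actual ring0.h in this repo may differ:

/* Read cr0 into a C variable. In userland this faults with a general
 * protection fault, which the kernel turns into SIGSEGV; in a kernel
 * module it works. */
static unsigned long read_cr0(void) {
    unsigned long val;
    __asm__ volatile ("mov %%cr0, %0" : "=r" (val));
    return val;
}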

Bibliography:

I’ve tried:

./run-toolchain --arch aarch64 gcc -- -static ~/test/hello_world.c -o "$(./getvar p9_dir)/a.out"
./run --arch aarch64 --eval-after '/mnt/9p/data/a.out'

but it fails with:

a.out: line 1: syntax error: unexpected word (expecting ")")

We used to "support" it until f8c0502bb2680f2dbe7c1f3d7958f60265347005 (it booted), but dropped it since no one was testing it often.

If you want to revive and maintain it, send a pull request.

It should not be too hard to port this repository to any architecture that Buildroot supports. Pull requests are welcome.

When the Linux kernel finishes booting, it runs an executable as the first and only userland process. This executable is called the init program.

The init process is then responsible for setting up the entire userland (or destroying everything when you want to have fun).

This typically means reading some configuration files (e.g. /etc/initrc) and forking a bunch of userland executables based on those files, including the very interactive shell that we end up on.

systemd provides a "popular" init implementation for desktop distros as of 2017.

BusyBox provides its own minimalistic init implementation which Buildroot, and therefore this repo, uses by default.

The init program can be either an executable shell text file, or a compiled ELF file. It becomes easy to accept this once you see that the exec system call handles both cases equally: https://unix.stackexchange.com/questions/174062/can-the-init-process-be-a-shell-script-in-linux/395375#395375

The init executable is searched for in a list of paths in the root filesystem, including /init, /sbin/init and a few others. For more details see: Section 6.3, “Path to init”

To have more control over the system, you can replace BusyBox’s init with your own.

The most direct way to replace init with our own is to just use the init= command line parameter directly:

./run --kernel-cli 'init=/lkmc/count.sh'

This just counts every second forever and does not give you a shell.

This method is not very flexible however, as it is hard to reliably pass multiple commands and command line arguments to the init with it, as explained at: Section 6.4, “Init environment”.

For this reason, we have created a more robust helper method with the --eval option:

./run --eval 'echo "asdf qwer";insmod hello.ko;./linux/poweroff.out'

It is basically a shortcut for:

./run --kernel-cli 'init=/lkmc/eval_base64.sh - lkmc_eval="insmod hello.ko;./linux/poweroff.out"'

This allows quoting and newlines by base64 encoding on host, and decoding on guest, see: Section 15.3.1, “Kernel command line parameters escaping”.

It also automatically chooses between init= and rcinit= for you, see: Section 6.3, “Path to init”

--eval replaces BusyBox' init completely, which makes things more minimal, but also has the following consequences:

  • /etc/fstab mounts are not done, notably /proc and /sys, test it out with:

    ./run --eval 'echo asdf;ls /proc;ls /sys;echo qwer'
  • no shell is launched at the end of boot for you to interact with the system. You could explicitly add a sh at the end of your commands however:

    ./run --eval 'echo hello;sh'

The best way to overcome those limitations is to use: Section 6.2, “Run command at the end of BusyBox init”

If the script is large, you can add it to a gitignored file and pass that to --eval as in:

echo '
cd /lkmc
insmod hello.ko
./linux/poweroff.out
' > data/gitignore.sh
./run --eval "$(cat data/gitignore.sh)"

or add it to a file to the root filesystem guest and rebuild:

echo '#!/bin/sh
cd /lkmc
insmod hello.ko
./linux/poweroff.out
' > rootfs_overlay/lkmc/gitignore.sh
chmod +x rootfs_overlay/lkmc/gitignore.sh
./build-buildroot
./run --kernel-cli 'init=/lkmc/gitignore.sh'

Remember that if your init returns, the kernel will panic. There are just two non-panic possibilities:

  • run forever in a loop or long sleep

  • poweroff the machine

Just using BusyBox' poweroff at the end of the init does not work and the kernel panics:

./run --eval poweroff

because BusyBox' poweroff tries to do some fancy stuff like killing init, likely to allow userland to shut down nicely.

But this fails when we are init itself!

BusyBox' poweroff works more brutally and effectively if you add -f:

./run --eval 'poweroff -f'

but why not just use our minimal ./linux/poweroff.out and be done with it?

./run --eval './linux/poweroff.out'
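For reference, such a minimal poweroff program can boil down to little more than a reboot(2) call; a sketch, which may differ from the actual linux/poweroff.c:

#include <sys/reboot.h>
#include <unistd.h>

int main(void) {
    /* Flush pending filesystem writes, then power off immediately.
     * reboot(2) requires CAP_SYS_BOOT, which init of course has. */
    sync();
    reboot(RB_POWER_OFF);
    return 0;
}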

I dare you to guess what this does:

./run --eval './posix/sleep_forever.out'

This executable is a convenient simple init that does not panic and sleeps instead.
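A sketch of such an init, which may differ from the actual posix/sleep_forever.c:

#include <unistd.h>

int main(void) {
    /* An init that returns makes the kernel panic, so never return:
     * pause() sleeps until a signal arrives, which never happens here. */
    while (1)
        pause();
}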

Get a reasonable answer to "how long does boot take in guest time?":

./run --eval-after './linux/time_boot.c'

That executable writes to dmesg directly through /dev/kmsg a message of type:

[    2.188242] /path/to/linux-kernel-module-cheat/userland/linux/time_boot.c

which tells us that boot took 2.188242 seconds based on the dmesg timestamp.
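The core of such a program is tiny; a sketch, which may differ from the actual linux/time_boot.c:

#include <stdio.h>

int main(void) {
    /* Writes to /dev/kmsg go straight into the kernel log buffer,
     * so dmesg shows them with a kernel timestamp. */
    FILE *f = fopen("/dev/kmsg", "w");
    if (!f)
        return 1;
    fprintf(f, "my boot time marker\n");
    fclose(f);
    return 0;
}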

Use the --eval-after option if you rely on something that BusyBox' init sets up for you, like /etc/fstab:

./run --eval-after 'echo asdf;ls /proc;ls /sys;echo qwer'

After the commands run, you are left on an interactive shell.

The above command is basically equivalent to:

./run --kernel-cli-after-dash 'lkmc_eval="insmod hello.ko;./linux/poweroff.out;"'

where the lkmc_eval option gets evaled by our default rootfs_overlay/etc/init.d/S98 startup script.

Except that --eval-after is smarter and uses base64 encoding.

Alternatively, you can also add the commands to run to a new init.d entry to run at the end of the BusyBox init:

cp rootfs_overlay/etc/init.d/S98 rootfs_overlay/etc/init.d/S99.gitignore
vim rootfs_overlay/etc/init.d/S99.gitignore
./build-buildroot
./run

and they will be run automatically before the login prompt.

Scripts under /etc/init.d are run by /etc/init.d/rcS, which gets called by the line ::sysinit:/etc/init.d/rcS in /etc/inittab.

The init is selected at:

  • initrd or initramfs system: /init, a custom one can be set with the rdinit= kernel command line parameter

  • otherwise: default is /sbin/init, followed by some other paths, a custom one can be set with init=

The kernel parses parameters from the kernel command line up to "-"; if it doesn’t recognize a parameter and it doesn’t contain a '.', the parameter gets passed to init: parameters with '=' go into init’s environment, others are passed as command line arguments to init. Everything after "-" is passed as an argument to init.
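The observable part of such an init is then just a matter of dumping argv and environ; a sketch (the actual linux/init_env_poweroff.c also powers off at the end):

#include <stdio.h>

int main(int argc, char **argv, char **envp) {
    /* Leftover kernel command line words arrive as arguments,
     * name=value words arrive as environment entries. */
    puts("args:");
    for (int i = 0; i < argc; i++)
        puts(argv[i]);
    puts("env:");
    for (char **e = envp; *e; e++)
        puts(*e);
    return 0;
}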

And you can try it out with:

./run --kernel-cli 'init=/lkmc/linux/init_env_poweroff.out' --kernel-cli-after-dash 'asdf=qwer zxcv'

From the generated QEMU command, we see that the kernel CLI at LKMC 69f5745d3df11d5c741551009df86ea6c61a09cf now contains:

init=/lkmc/linux/init_env_poweroff.out console=ttyS0 - lkmc_home=/lkmc asdf=qwer zxcv

and the init program outputs:

args:
/lkmc/linux/init_env_poweroff.out
-
zxcv

env:
HOME=/
TERM=linux
lkmc_home=/lkmc
asdf=qwer

The annoying dash - gets passed as a parameter to init, which makes it impossible to use this method for most non-custom executables.

Arguments with dots that come after - are still treated specially (of the form subsystem.somevalue) and disappear from args, e.g.:

./run --kernel-cli 'init=/lkmc/linux/init_env_poweroff.out' --kernel-cli-after-dash 'a.b ab'

outputs:

args
/lkmc/linux/init_env_poweroff.out
-
ab

Note how a.b is gone.

The simple workaround is to just create a shell script that does it, e.g. as we’ve done at: rootfs_overlay/lkmc/gem5_exit.sh.

Wait, where do HOME and TERM come from? (greps the kernel). Ah, OK, the kernel sets those by default: https://github.com/torvalds/linux/blob/94710cac0ef4ee177a63b5227664b38c95bbf703/init/main.c#L173

const char *envp_init[MAX_INIT_ENVS+2] = { "HOME=/", "TERM=linux", NULL, };

On top of the Linux kernel, the BusyBox /bin/sh shell will also define other variables.

We can explore the shenanigans that the shell adds on top of the Linux kernel with:

./run --kernel-cli 'init=/bin/sh'

From there we observe that:

env

gives:

SHLVL=1
HOME=/
TERM=linux
PWD=/

therefore adding SHLVL and PWD to the default kernel exported variables.

Furthermore, to increase confusion, if you list all non-exported shell variables https://askubuntu.com/questions/275965/how-to-list-all-variables-names-and-their-current-values with:

set

then it shows more variables, notably:

PATH='/sbin:/usr/sbin:/bin:/usr/bin'

Login shells source some default files, notably:

/etc/profile
$HOME/.profile

We provide /.profile from rootfs_overlay/.profile, and use the default BusyBox /etc/profile.

The shell knows that it is a login shell if the first character of argv[0] is -, see also: https://stackoverflow.com/questions/2050961/is-argv0-name-of-executable-an-accepted-standard-or-just-a-common-conventi/42291142#42291142

When we use just init=/bin/sh, the Linux kernel sets argv[0] to /bin/sh, which does not start with -.

However, if you use ::respawn:-/bin/sh on inittab as described at TTY, BusyBox' init sets argv[0][0] to -, and so does getty. This can be observed with:

cat /proc/$$/cmdline
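The check itself is trivial; a standalone sketch:

#include <stdio.h>

int main(int argc, char **argv) {
    /* Convention: a leading '-' in argv[0] marks a login shell. */
    printf("%s shell\n", argv[0][0] == '-' ? "login" : "non-login");
    return 0;
}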

The kernel can boot from a CPIO file, which is a directory serialization format much like tar: https://superuser.com/questions/343915/tar-vs-cpio-what-is-the-difference

The bootloader, which for us is provided by QEMU itself, is then configured to put that CPIO into memory, and tell the kernel that it is there.

This is very similar to the kernel image itself, which already gets put into memory by the QEMU -kernel option.

With this setup, you don’t even need to give a root filesystem to the kernel: it just does everything in memory in a ramfs.

To enable initrd instead of the default ext2 disk image, do:

./build-buildroot --initrd
./run --initrd

By looking at the QEMU run command generated, you can see that we didn’t give the -drive option at all:

cat "$(./getvar run_dir)/run.sh"

Instead, we used the QEMU -initrd option to point to the .cpio filesystem that Buildroot generated for us.

Try removing that -initrd option to watch the kernel panic without rootfs at the end of boot.

When using .cpio, there can be no filesystem persistency across boots, since all file operations happen in memory in a tmpfs:

date >f
poweroff
cat f
# can't open 'f': No such file or directory

which can be good for automated tests, as it ensures that you are using a pristine unmodified system image every time.

Note however that we already disable disk persistency by default on ext2 filesystems even without --initrd: Section 18.2, “Disk persistency”.

One downside of this method is that it has to put the entire filesystem into memory, and could lead to a panic:

end Kernel panic - not syncing: Out of memory and no killable processes...

This can be solved by increasing the memory as explained at Memory size:

./run --initrd --memory 256M

The main ingredients to get initrd working are:

TODO: how does the bootloader inform the kernel where to find initrd? https://unix.stackexchange.com/questions/89923/how-does-linux-load-the-initrd-image

Most modern desktop distributions have an initrd in their root disk to do early setup.

The rationale for this is described at: https://en.wikipedia.org/wiki/Initial_ramdisk

One obvious use case is having an encrypted root filesystem: you keep the initrd in an unencrypted partition, and then setup decryption from there.

I think GRUB then knows how to read common disk formats, and loads that initrd into memory with a /boot/grub/grub.cfg directive of the form:

initrd /initrd.img-4.4.0-108-generic

initramfs is just like initrd, but you also glue the image directly to the kernel image itself using the kernel’s build system.

Try it out with:

./build-buildroot --initramfs
./build-linux --initramfs
./run --initramfs

Notice how we had to rebuild the Linux kernel this time around as well after Buildroot, since in that build we will be gluing the CPIO to the kernel image.

Now, once again, if we look at the QEMU run command generated, we see all that QEMU needs is the -kernel option, no -drive not even -initrd! Pretty cool:

cat "$(./getvar run_dir)/run.sh"

It is also interesting to observe how this increases the size of the kernel image if you do a:

ls -lh "$(./getvar linux_image)"

before and after using initramfs, since the .cpio is now glued to the kernel image.

Don’t forget that to stop using initramfs, you must rebuild the kernel without --initramfs to get rid of the attached CPIO image:

./build-linux
./run

Alternatively, consider using [linux-kernel-build-variants] if you need to switch between initramfs and non initramfs often:

./build-buildroot --initramfs
./build-linux --initramfs --linux-build-id initramfs
./run --initramfs --linux-build-id initramfs

Setting up initramfs is very easy: our scripts just set CONFIG_INITRAMFS_SOURCE to point to the CPIO path.
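For illustration, the resulting kernel config entry looks something like this (the exact path here is hypothetical):

CONFIG_INITRAMFS_SOURCE="/path/to/buildroot/images/rootfs.cpio"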

This is how /proc/mounts shows the root filesystem:

  • hard disk: /dev/root on / type ext2 (rw,relatime,block_validity,barrier,user_xattr). That file does not exist however.

  • initrd: rootfs on / type rootfs (rw)

  • initramfs: rootfs on / type rootfs (rw)

TODO: understand /dev/root better:

This would require gem5 to load the CPIO into memory, just like QEMU. Grepping initrd shows some ARM hits under:

src/arch/arm/linux/atag.hh

but they are commented out.

This could in theory be easier to make work than initrd since the emulator does not have to do anything special.

However, it didn’t work: boot fails at the end because the kernel does not see the initramfs, but rather tries to open our dummy root filesystem, which unsurprisingly is not in a format that the kernel understands:

VFS: Cannot open root device "sda" or unknown-block(8,0): error -5

We think that this might be because gem5 boots directly vmlinux, and not from the final compressed images that contain the attached rootfs such as bzImage, which is what QEMU does, see also: Section 15.21.1, “vmlinux vs bzImage vs zImage vs Image”.

To do this failed test, we automatically pass a dummy disk image as of gem5 7fa4c946386e7207ad5859e8ade0bbfc14000d91 since the scripts don’t handle a missing --disk-image well, much like is currently done for [baremetal].

Interestingly, using initramfs significantly slows down the gem5 boot, even though it did not work. For example, we’ve observed a 4x slowdown as of 17062a2e8b6e7888a14c3506e9415989362c58bf for aarch64. This must be because expanding the large attached CPIO is expensive. We can clearly see from the kernel logs that the kernel just hangs at a point after the message PCI: CLS 0 bytes, default 64 for a long time before proceeding further.

The device tree is a Linux kernel defined data structure that serves to inform the kernel how the hardware is set up.

platform_device contains a minimal runnable example of device tree manipulation.

Device trees serve to reduce the need for hardware vendors to patch the kernel: they just provide a device tree file instead, which is much simpler.

x86 does not use device trees, but many other archs do, notably ARM.

This is notably because ARM boards:

  • typically don’t have discoverable hardware extensions like PCI, but rather just put everything on an SoC with magic register addresses

  • are made by a wide variety of vendors due to ARM’s licensing business model, which increases variability

The Linux kernel itself has several device trees under ./arch/<arch>/boot/dts, see also: https://stackoverflow.com/questions/21670967/how-to-compile-dts-linux-device-tree-source-files-to-dtb/42839737#42839737

Files that contain device trees have the .dtb extension when compiled, and .dts when in text form.

You can convert between those formats with:

"$(./getvar buildroot_host_dir)"/bin/dtc -I dtb -O dts -o a.dts a.dtb
"$(./getvar buildroot_host_dir)"/bin/dtc -I dts -O dtb -o a.dtb a.dts

Buildroot builds the tool due to BR2_PACKAGE_HOST_DTC=y.

On Ubuntu 18.04, the package is named:

sudo apt-get install device-tree-compiler

Device tree files are provided to the emulator just like the root filesystem and the Linux kernel image.

In real hardware, those components are also often provided separately. For example, on the Raspberry Pi 2, the SD card must contain two partitions:

  • the first contains all magic files, including the Linux kernel and the device tree

  • the second contains the root filesystem

Good format descriptions:

Minimal example

/dts-v1/;

/ {
    a;
};

Check correctness with:

dtc a.dts

Separate nodes are simply merged by node path, e.g.:

/dts-v1/;

/ {
    a;
};

/ {
    b;
};

then dtc a.dts gives:

/dts-v1/;

/ {
        a;
        b;
};

This is especially interesting because QEMU and gem5 are capable of generating DTBs that match the selected machine depending on dynamic command line parameters for some types of machines.

So observing the device tree from the guest allows us to easily see what the emulator has generated.

Compile the dtc tool into the root filesystem:

./build-buildroot \
  --arch aarch64 \
  --config 'BR2_PACKAGE_DTC=y' \
  --config 'BR2_PACKAGE_DTC_PROGRAMS=y' \
;

-M virt for example, which we use by default for aarch64, boots just fine without the -dtb option:

./run --arch aarch64

Then, from inside the guest:

dtc -I fs -O dts /sys/firmware/devicetree/base

contains:

        cpus {
                #address-cells = <0x1>;
                #size-cells = <0x0>;

                cpu@0 {
                        compatible = "arm,cortex-a57";
                        device_type = "cpu";
                        reg = <0x0>;
                };
        };

Since emulators know everything about the hardware, they can automatically generate device trees for us, which is very convenient.

This is the case for both QEMU and gem5.

For example, if we increase the number of cores to 2:

./run --arch aarch64 --cpus 2

QEMU automatically adds a second CPU to the DTB!

                cpu@0 {
                cpu@1 {

The action seems to be happening at: hw/arm/virt.c.

You can dump the DTB QEMU generated with:

./run --arch aarch64 -- -machine dumpdtb=dtb.dtb

gem5 fs_bigLITTLE 2a9573f5942b5416fb0570cf5cb6cdecba733392 can also generate its own DTB.

gem5 can generate DTBs on ARM with --generate-dtb. The generated DTB is placed in the m5out directory, named system.dtb.

KVM is a Linux kernel interface that greatly speeds up execution of virtual machines.

You can enable KVM in QEMU or gem5 with:

./run --kvm

KVM works by running userland instructions natively directly on the real hardware instead of running a software simulation of those instructions.

Therefore, KVM only works if the host architecture is the same as the guest architecture. This means that this will likely only work for x86 guests since almost all development machines are x86 nowadays. Unless you are running an ARM desktop for some weird reason :-)

We don’t enable KVM by default because:

  • it limits visibility, since more things are running natively:

  • QEMU kernel boots are already fast enough for most purposes without it

One important use case for KVM is to fast forward gem5 execution, often to skip boot, take a gem5 checkpoint, and then move on to a more detailed and slow simulation.

TODO: we haven’t gotten it to work yet, but it should be doable, and this is an outline of how to do it. Just don’t expect this to be tested very often for now.

We can test KVM on arm by running this repository inside an Ubuntu arm QEMU VM.

This produces no speedup of course, since the VM is already slow since it cannot use KVM on the x86 host.

Then, from inside that image:

sudo apt-get install git
git clone https://github.com/************/linux-kernel-module-cheat
cd linux-kernel-module-cheat
sudo ./setup -y

and then proceed exactly as in Prebuilt setup.

We don’t want to build the full Buildroot image inside the VM as that would be way too slow, thus the recommendation for the prebuilt setup.

TODO: do the right thing and cross compile QEMU and gem5. gem5’s Python parts might be a pain. QEMU should be easy: https://stackoverflow.com/questions/26514252/cross-compile-qemu-for-arm

While gem5 does have KVM, as of 2019 its support has not been very good, because debugging it is harder and people haven’t focused intensively on it.

X86 was broken with pending patches: https://www.mail-archive.com/[email protected]/msg15046.html It failed immediately on:

panic: KVM: Failed to enter virtualized mode (hw reason: 0x80000021)

Bibliography:

Both QEMU and gem5 have a user mode simulation mode, in addition to the full system simulation that we consider elsewhere in this project.

In QEMU, it is called just "user mode", and in gem5 it is called syscall emulation mode.

In both, the basic idea is the same.

User mode simulation takes regular userland executables of any arch as input and executes them directly, without booting a kernel.

Instead of simulating the full system, it translates normal instructions like in full system mode, but magically forwards system calls to the host OS.

Advantages over full system simulation:

  • the simulation may run faster since you don’t have to simulate the Linux kernel and several device models

  • you don’t need to build your own kernel or root filesystem, which saves time. You still need a toolchain however, but the pre-packaged ones may work fine.

Disadvantages:

  • lower guest to host portability:

    • TODO confirm: host OS == guest OS?

    • TODO confirm: the host Linux kernel should be newer than the kernel the executable was built for.

      It may still work even if that is not the case, but could fail if a missing system call is reached.

      The target Linux kernel of the executable is a GCC toolchain build-time configuration.

    • emulator implementers have to keep up with libc changes, some of which break even a C hello world due to setup code executed before main.

  • cannot be used to test the Linux kernel or any devices, and results are less representative of a real system since we are faking more

Let’s run userland/c/command_line_arguments.c built with the Buildroot toolchain on QEMU user mode:

./build user-mode-qemu
./run \
  --userland userland/c/command_line_arguments.c \
  --cli-args='asdf "qw er"' \
;

Output:

/path/to/linux-kernel-module-cheat/out/userland/default/x86_64/c/print_argv.out
asdf
qw er

./run --userland path resolution is analogous to that of ./run --baremetal.

./build user-mode-qemu first builds Buildroot, and then runs ./build-userland, which is further documented at: Section 1.7, “Userland setup”. It also builds QEMU. If you have already done a QEMU Buildroot setup previously, this will be very fast.

If you modify the userland programs, rebuild simply with:

./build-userland

It’s nice when the obvious just works, right?

./run \
  --arch aarch64 \
  --gdb-wait \
  --userland userland/c/command_line_arguments.c \
  --cli-args 'asdf "qw er"' \
;

and on another shell:

./run-gdb \
  --arch aarch64 \
  --userland userland/c/command_line_arguments.c \
  main \
;

Or alternatively, if you are using tmux, do everything in one go with:

./run \
  --arch aarch64 \
  --gdb \
  --userland userland/c/command_line_arguments.c \
  --cli-args 'asdf "qw er"' \
;

To stop at the very first instruction of a freestanding program, just use --no-continue. A good example of this is shown at: [freestanding-programs].

Automatically run all userland tests that can be run in user mode simulation, and check that they exit with status 0:

./build --all-archs test-executables-userland
./test-executables --all-archs --all-emulators

Or just for QEMU:

./build --all-archs test-executables-userland-qemu
./test-executables --all-archs --emulator qemu

This script skips a manually configured list of tests, notably:

  • tests that depend on a full running kernel and cannot be run in user mode simulation, e.g. those that rely on kernel modules

  • tests that require user interaction

  • tests that take perceptible amounts of time

  • known bugs we didn’t have time to fix ;-)

Tests under userland/libs/ are only run if --package or --package-all are given as described at [userland-libs-directory].

The gem5 tests require building statically with build id static, see also: Section 10.7, “gem5 syscall emulation mode”. TODO automate this better.

See: [test-this-repo] for more useful testing tips.

If you followed QEMU Buildroot setup, you can now run the executables created by Buildroot directly as:

./run \
  --userland "$(./getvar buildroot_target_dir)/bin/echo" \
  --cli-args='asdf' \
;

To easily explore the userland executable environment interactively, you can do:

./run \
  --arch aarch64 \
  --userland "$(./getvar --arch aarch64 buildroot_target_dir)/bin/sh" \
  --terminal \
;

or:

./run \
  --arch aarch64 \
  --userland "$(./getvar --arch aarch64 buildroot_target_dir)/bin/sh" \
  --cli-args='-c "uname -a && pwd"' \
;

Here is an interesting examples of this: Section 15.20.1, “Linux Test Project”

At 125d14805f769104f93c510bedaa685a52ec025d we moved Buildroot from uClibc to glibc, and caused some user mode pain, which we document here.

glibc has a check for kernel version, likely obtained from the uname syscall, and if the kernel is not new enough, it quits.

Both gem5 and QEMU however allow setting the reported uname version from the command line, which we do to always match our toolchain.

QEMU by default copies the host uname value, but we always override it in our scripts.

Determining the right number to use for the kernel version is of course highly non-trivial and would require an extensive userland test suite, which most emulators don’t have.

./run --arch aarch64 --kernel-version 4.18 --userland userland/posix/uname.c

Bibliography:

The ID is just hardcoded on the source:

For some reason QEMU / glibc x86_64 picks up the host libc, which breaks things.

Other archs work since their different host libc is skipped. User mode static executables also work.

We have worked around this with https://bugs.launchpad.net/qemu/+bug/1701798/comments/12 from the thread: https://bugs.launchpad.net/qemu/+bug/1701798 by creating the file rootfs_overlay/etc/ld.so.cache, which is a symlink to a file that cannot exist: /dev/null/nonexistent.

Reproduction:

rm -f "$(./getvar buildroot_target_dir)/etc/ld.so.cache"
./run --userland userland/c/hello.c
./run --userland userland/c/hello.c --qemu-which host

Outcome:

*** stack smashing detected ***: <unknown> terminated
qemu: uncaught target signal 6 (Aborted) - core dumped

To get things working again, restore ld.so.cache with:

./build-buildroot

I’ve also tested on an Ubuntu 16.04 guest and the failure is a different one:

qemu: uncaught target signal 4 (Illegal instruction) - core dumped

A non-QEMU-specific example of stack smashing is shown at: https://stackoverflow.com/questions/1345670/stack-smashing-detected/51897264#51897264

Tested at: 2e32389ebf1bedd89c682aa7b8fe42c3c0cf96e5 + 1.

Example:

./build-userland \
  --arch aarch64 \
  --static \
;
./run \
  --arch aarch64 \
  --static \
  --userland userland/c/command_line_arguments.c \
  --cli-args 'asdf "qw er"' \
;

Running dynamically linked executables in QEMU requires pointing it to the root filesystem with the -L option so that it can find the dynamic linker and shared libraries.

We pass -L by default, so everything just works.

However, in case something goes wrong, you can also try statically linked executables, since this mechanism tends to be a bit more stable, for example:

Running statically linked executables sometimes makes things break:

One limitation of static executables is that Buildroot mostly only builds dynamic versions of libraries (the libc is an exception).

So programs that rely on those libraries might not compile as GCC can’t find the .a version of the library.

For example, if we try to build [blas] statically:

./build-userland --package openblas --static -- userland/libs/openblas/hello.c

it fails with:

ld: cannot find -lopenblas

g++ and pthreads also cause issues:

As a consequence, the example userland/cpp/atomic/std_atomic.cpp just hangs as of LKMC ca0403849e03844a328029d70c08556155dc1cd0 + 1:

./run --userland userland/cpp/atomic/std_atomic.cpp --static

And before that, it used to fail with other randomly different errors, e.g.:

qemu-x86_64: /path/to/linux-kernel-module-cheat/submodules/qemu/accel/tcg/cpu-exec.c:700: cpu_exec: Assertion `!have_mmap_lock()' failed.
qemu-x86_64: /path/to/linux-kernel-module-cheat/submodules/qemu/accel/tcg/cpu-exec.c:700: cpu_exec: Assertion `!have_mmap_lock()' failed.

And a native Ubuntu 18.04 AMD64 run with static compilation segfaults.

The workaround:

-pthread -Wl,--whole-archive -lpthread -Wl,--no-whole-archive

fixes some of the problems, but not all, so we are just skipping those tests for now.

The following work on both QEMU and gem5 as of LKMC 99d6bc6bc19d4c7f62b172643be95d9c43c26145 + 1. Interactive input:

./run --userland userland/c/getchar.c

A line of the following form should show:

enter a character:

and after pressing say a and Enter, we get:

you entered: a

Note however that, because QEMU user mode does not show stdout immediately, we don’t really see the initial enter a character line.

Non-interactive input from a file by forwarding the emulator’s stdin implicitly through our Python scripts:

printf a > f.tmp
./run --userland userland/c/getchar.c < f.tmp

Input from a file by explicitly requesting our scripts to use it via the Python API:

printf a > f.tmp
./run --emulator gem5 --userland userland/c/getchar.c --stdin-file f.tmp

This is especially useful when running tests that require stdin input.

Less robust than QEMU’s, but still usable:

There are much more unimplemented syscalls in gem5 than in QEMU. Many of those are trivial to implement however.

So let’s just play with some static ones:

./build-userland --arch aarch64
./run \
  --arch aarch64 \
  --emulator gem5 \
  --userland userland/c/command_line_arguments.c \
  --cli-args 'asdf "qw er"' \
;

TODO: how to escape spaces on the command line arguments?

GDB step debug also works normally on gem5:

./run \
  --arch aarch64 \
  --emulator gem5 \
  --gdb-wait \
  --userland userland/c/command_line_arguments.c \
  --cli-args 'asdf "qw er"' \
;
./run-gdb \
  --arch aarch64 \
  --emulator gem5 \
  --userland userland/c/command_line_arguments.c \
  main \
;

As of gem5 7fa4c946386e7207ad5859e8ade0bbfc14000d91, the crappy se.py script does not forward the exit status of syscall emulation mode; you can test it with:

./run --dry-run --emulator gem5 --userland userland/c/false.c

Then manually run the generated gem5 CLI, and do:

echo $?

and the output is always 0.

Instead, it just outputs a message to stdout, like for m5 fail:

Simulated exit code not 0! Exit code is 1

which we parse in run and then exit with the correct result ourselves…​

Since gem5 has to implement syscalls itself in syscall emulation mode, it can of course clearly see which syscalls are being made, and we can log them for debug purposes with gem5 tracing, e.g.:

./run \
  --emulator gem5 \
  --userland userland/arch/x86_64/freestanding/linux/hello.S \
  --trace-stdout \
  --trace ExecAll,SyscallBase,SyscallVerbose \
;

the trace as of f2eeceb1cde13a5ff740727526bf916b356cee38 + 1 contains:

      0: system.cpu A0 T0 : @asm_main_after_prologue    : mov   rdi, 0x1
      0: system.cpu A0 T0 : @asm_main_after_prologue.0  :   MOV_R_I : limm   rax, 0x1 : IntAlu :  D=0x0000000000000001  flags=(IsInteger|IsMicroop|IsLastMicroop|IsFirstMicroop)
   1000: system.cpu A0 T0 : @asm_main_after_prologue+7    : mov rdi, 0x1
   1000: system.cpu A0 T0 : @asm_main_after_prologue+7.0  :   MOV_R_I : limm   rdi, 0x1 : IntAlu :  D=0x0000000000000001  flags=(IsInteger|IsMicroop|IsLastMicroop|IsFirstMicroop)
   2000: system.cpu A0 T0 : @asm_main_after_prologue+14    : lea        rsi, DS:[rip + 0x19]
   2000: system.cpu A0 T0 : @asm_main_after_prologue+14.0  :   LEA_R_P : rdip   t7, %ctrl153,  : IntAlu :  D=0x000000000040008d  flags=(IsInteger|IsMicroop|IsDelayedCommit|IsFirstMicroop)
   2500: system.cpu A0 T0 : @asm_main_after_prologue+14.1  :   LEA_R_P : lea   rsi, DS:[t7 + 0x19] : IntAlu :  D=0x00000000004000a6  flags=(IsInteger|IsMicroop|IsLastMicroop)
   3500: system.cpu A0 T0 : @asm_main_after_prologue+21    : mov        rdi, 0x6
   3500: system.cpu A0 T0 : @asm_main_after_prologue+21.0  :   MOV_R_I : limm   rdx, 0x6 : IntAlu :  D=0x0000000000000006  flags=(IsInteger|IsMicroop|IsLastMicroop|IsFirstMicroop)
   4000: system.cpu: T0 : syscall write called w/arguments 1, 4194470, 6, 0, 0, 0
hello
   4000: system.cpu: T0 : syscall write returns 6
   4000: system.cpu A0 T0 : @asm_main_after_prologue+28    :   syscall    eax           : IntAlu :   flags=(IsInteger|IsSerializeAfter|IsNonSpeculative|IsSyscall)
   5000: system.cpu A0 T0 : @asm_main_after_prologue+30    : mov        rdi, 0x3c
   5000: system.cpu A0 T0 : @asm_main_after_prologue+30.0  :   MOV_R_I : limm   rax, 0x3c : IntAlu :  D=0x000000000000003c  flags=(IsInteger|IsMicroop|IsLastMicroop|IsFirstMicroop)
   6000: system.cpu A0 T0 : @asm_main_after_prologue+37    : mov        rdi, 0
   6000: system.cpu A0 T0 : @asm_main_after_prologue+37.0  :   MOV_R_I : limm   rdi, 0  : IntAlu :  D=0x0000000000000000  flags=(IsInteger|IsMicroop|IsLastMicroop|IsFirstMicroop)
   6500: system.cpu: T0 : syscall exit called w/arguments 0, 4194470, 6, 0, 0, 0
   6500: system.cpu: T0 : syscall exit returns 0
   6500: system.cpu A0 T0 : @asm_main_after_prologue+44    :   syscall    eax           : IntAlu :   flags=(IsInteger|IsSerializeAfter|IsNonSpeculative|IsSyscall)

so we see that two syscall lines were added for each syscall, showing the syscall inputs and exit status, just like a mini strace!

This is not currently nicely exposed in LKMC, but gem5 syscall emulation does allow you to run multiple executables "at once".

--cmd takes a semicolon separated list, so we could do:

./run --arch aarch64 --emulator gem5 --userland userland/posix/getpid.c --cpus 2

and then hack the produced command by replacing:

  --cmd /home/ciro/bak/git/linux-kernel-module-cheat/out/userland/default/aarch64/posix/getpid.out \
  --param 'system.cpu[0].workload[:].release = "5.4.3"' \

with:

  --cmd '/path/to/linux-kernel-module-cheat/out/userland/default/aarch64/posix/getpid.out;/path/to/linux-kernel-module-cheat/out/userland/default/aarch64/posix/getpid.out' \
  --param 'system.cpu[:].workload[:].release = "5.4.3"' \

The outcome of this is that we see two different pid messages printed to stdout:

pid=101
pid=100

since from gem5 Process we can see that se.py sets up one different PID per executable starting at 100:

    workloads = options.cmd.split(';')
    idx = 0
    for wrkld in workloads:
        process = Process(pid = 100 + idx)

This basically starts one process per CPU, much as if each had been forked.

We can also see that these processes are running concurrently with gem5 tracing by hacking:

  --debug-flags ExecAll \
  --debug-file cout \

which starts with:

      0: system.cpu1: A0 T0 : @__end__+274873647040    :   add   x0, sp, #0         : IntAlu :  D=0x0000007ffffefde0  flags=(IsInteger)
      0: system.cpu0: A0 T0 : @__end__+274873647040    :   add   x0, sp, #0         : IntAlu :  D=0x0000007ffffefde0  flags=(IsInteger)
    500: system.cpu0: A0 T0 : @__end__+274873647044    :   bl   <__end__+274873649648> : IntAlu :  D=0x0000004000001008  flags=(IsInteger|IsControl|IsDirectControl|IsUncondControl|IsCall)
    500: system.cpu1: A0 T0 : @__end__+274873647044    :   bl   <__end__+274873649648> : IntAlu :  D=0x0000004000001008  flags=(IsInteger|IsControl|IsDirectControl|IsUncondControl|IsCall)

and therefore shows one instruction running on each CPU for each process at the same time.

gem5 b1623cb2087873f64197e503ab8894b5e4d4c7b4 syscall emulation has an --smt option presumably for [hardware-threads] but it has been neglected forever it seems: ************#104

If we start from the manually hacked working command from gem5 syscall emulation multiple executables and try to add:

--cpu 1 --cpu-type DerivO3CPU --caches

We choose DerivO3CPU because of the se.py assert:

example/se.py:115:        assert(options.cpu_type == "DerivO3CPU")

But then that fails with:

gem5.opt: /path/to/linux-kernel-module-cheat/out/gem5/master3/build/ARM/cpu/o3/cpu.cc:205: FullO3CPU<Impl>::FullO3CPU(DerivO3CPUParams*) [with Impl = O3CPUImpl]: Assertion `params->numPhysVecPredRegs >= numThreads * TheISA::NumVecPredRegs' failed.
Program aborted at tick 0

At 8d8307ac0710164701f6e14c99a69ee172ccbb70 + 1, I noticed that if you run userland/posix/count_to.c:

./run --userland userland/posix/count_to.c --cli-args 3

it first waits for 3 seconds, then the program exits, and then it dumps all the stdout at once, instead of counting once every second as expected.

The same can be reproduced by copying the raw QEMU command and piping it through tee, so I don’t think it is a bug in our setup:

/path/to/linux-kernel-module-cheat/out/qemu/default/x86_64-linux-user/qemu-x86_64 \
  -L /path/to/linux-kernel-module-cheat/out/buildroot/build/default/x86_64/target \
  /path/to/linux-kernel-module-cheat/out/userland/default/x86_64/posix/count.out \
  3 \
| tee

TODO: investigate further and then possibly post on QEMU mailing list.

Similarly to how QEMU user mode does not show stdout immediately, QEMU error messages do not show up at all through pipes.

In particular, it does not say anything if you pass it a non-existing executable:

qemu-x86_64 asdf | cat

So we just check ourselves manually.

./run --eval-after 'insmod hello.ko'

If you are feeling raw, you can insert and remove modules with our own minimal module inserter and remover!

# init_module
./linux/myinsmod.out hello.ko
# finit_module
./linux/myinsmod.out hello.ko "" 1
./linux/myrmmod.out hello

which teaches you how it is done from C code.

Source:

The Linux kernel offers two system calls for module insertion:

  • init_module

  • finit_module

and:

man init_module

documents that:

The finit_module() system call is like init_module(), but reads the module to be loaded from the file descriptor fd. It is useful when the authenticity of a kernel module can be determined from its location in the filesystem; in cases where that is possible, the overhead of using cryptographically signed modules to determine the authenticity of a module can be avoided. The param_values argument is as for init_module().

finit is newer and was added only in v3.8. More rationale: https://lwn.net/Articles/519010/
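For reference, a minimal finit_module-based inserter can be sketched as follows; the actual linux/myinsmod.c may differ, e.g. it also exercises init_module:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(int argc, char **argv) {
    /* Neither syscall has a glibc wrapper, so use syscall(2) directly.
     * finit_module(fd, param_values, flags) loads the module from fd. */
    int fd = open(argv[1], O_RDONLY);
    if (fd == -1) {
        perror("open");
        return 1;
    }
    if (syscall(SYS_finit_module, fd, "", 0) == -1) {
        perror("finit_module");
        return 1;
    }
    close(fd);
    return 0;
}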

modprobe searches for modules installed under:

ls /lib/modules/<kernel_version>

and specified in the modules.order file.

This is the default install path for CONFIG_SOME_MOD=m modules built with make modules_install in the Linux kernel tree, with root path given by INSTALL_MOD_PATH, and therefore canonical in that sense.

Currently, there are only two kinds of kernel modules that you can try out with modprobe:

We are not installing our custom ./build-modules modules there, because:

The more "reference" kernel.org implementation of lsmod, insmod, rmmod, etc.: https://git.kernel.org/pub/scm/utils/kernel/kmod/kmod.git

Default implementation on desktop distros such as Ubuntu 16.04, where e.g.:

ls -l /bin/lsmod

gives:

lrwxrwxrwx 1 root root 4 Jul 25 15:35 /bin/lsmod -> kmod

and:

dpkg -l | grep -Ei kmod

contains:

ii  kmod                                        22-1ubuntu5                                         amd64        tools for managing Linux kernel modules

BusyBox also implements its own version of those executables, see e.g. modprobe. Here we will only describe features that differ from kmod to the BusyBox implementation.

Name of a predecessor set of tools.

kmod’s modprobe can also load modules under different names to avoid conflicts, e.g.:

sudo modprobe vmhgfs -o vm_hgfs

OverlayFS is a filesystem merged in the Linux kernel in 3.18.

As the name suggests, OverlayFS allows you to merge multiple directories into one. The following minimal runnable examples should give you an intuition on how it works:
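
As a hedged standalone illustration (not one of the repo's examples), an overlay can also be mounted programmatically with the mount system call; the /lower, /upper, /work and /merged paths are hypothetical and must exist beforehand:

#include <stdio.h>
#include <sys/mount.h>

int main(void) {
    /* Equivalent to:
     * mount -t overlay overlay \
     *   -o lowerdir=/lower,upperdir=/upper,workdir=/work /merged
     * Writes to /merged go to /upper; /lower stays untouched. */
    if (mount("overlay", "/merged", "overlay", 0,
              "lowerdir=/lower,upperdir=/upper,workdir=/work") == -1)
        perror("mount");
    return 0;
}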

We are very interested in this filesystem because we are looking for a way to make host cross compiled executables appear on the guest root / without reboot.

This would have several advantages:

  • makes it faster to test modified guest programs

    • not rebooting is fundamental for gem5, where the reboot is very costly.

    • no need to regenerate the root filesystem at all and reboot

    • overcomes the check_bin_arch problem as shown at: [rpath]

  • we could keep the base root filesystem very small, which implies:

    • less host disk usage: no need to copy the entire "$(./getvar out_rootfs_overlay_dir)" directory to the image again

    • no need to worry about [br2-target-rootfs-ext2-size]

We can already make host files appear on the guest with 9P, but they appear on a subdirectory instead of the root.

If they appeared on the root instead, that would be even more awesome, because we could use the exact same paths relative to the root transparently.

For example, we wouldn’t have to mess around with variables such as PATH and LD_LIBRARY_PATH.

The idea is to:

We already have a prototype of this running from fstab on guest at /mnt/overlay, but it has the following shortcomings:

  • changes to underlying filesystems are not visible on the overlay unless you remount with mount -r remount /mnt/overlay, as mentioned on the kernel docs:

    Changes to the underlying filesystems while part of a mounted overlay
    filesystem are not allowed.  If the underlying filesystem is changed,
    the behavior of the overlay is undefined, though it will not result in
    a crash or deadlock.

    This makes everything very inconvenient if you are inside a chroot. You would have to leave the chroot, remount, then come back.

  • the overlay does not contain sub-filesystems, e.g. /proc. We would have to re-mount them. But should be doable with some automation.

Even more awesome than chroot would be to pivot_root, but I couldn’t get that working either:

A simpler and possibly less overhead alternative to 9P would be to generate a secondary disk image with the benchmark you want to rebuild.

Then you can umount and re-mount on guest without reboot.

We don’t support this yet, but it should not be too hard to hack it up, maybe by hooking into rootfs-post-build-script.

This was not possible from gem5 fs.py as of 60600f09c25255b3c8f72da7fb49100e2682093a: https://stackoverflow.com/questions/50862906/how-to-attach-multiple-disk-images-in-a-simulation-with-gem5-fs-py/51037661#51037661

Both QEMU and gem5 are capable of outputting graphics to the screen, and taking mouse and keyboard input.

Text mode is the default mode for QEMU.

The opposite of text mode is QEMU graphic mode.

In text mode, we just show the serial console directly on the current terminal, without opening a QEMU GUI window.

You cannot see any graphics from text mode, but text operations in this mode, including:

making this a good default, unless you really need graphics.

Text mode works by sending the terminal character by character to a serial device.

This is different from a display screen, where each character is a bunch of pixels, and it would be much harder to convert that into actual terminal text.

For more details, see:

Note that you can still see an image even in text mode via VNC:

./run --vnc

and on another terminal:

./vnc

but there is no terminal on the VNC window, just the CONFIG_LOGO penguin.

However, our QEMU setup captures Ctrl + C and other common signals and sends them to the guest, which makes it hard to quit QEMU for the first time since there is no GUI either.

The simplest way to quit QEMU, is to do:

Ctrl-A X

Alternative methods include:

Enable graphic mode with:

./run --graphic

Outcome: you see a penguin due to CONFIG_LOGO.

For a more exciting GUI experience, see: Section 13.4, “X11 Buildroot”

Text mode is the default due to the following considerable advantages:

  • copy and paste commands and stdout output to / from host

  • get full panic traces when you start making the kernel crash :-) See also: https://unix.stackexchange.com/questions/208260/how-to-scroll-up-after-a-kernel-panic

  • have a large scroll buffer, and be able to search it, e.g. by using tmux on host

  • one less window floating around to think about in addition to your shell :-)

  • graphics mode has only been properly tested on x86_64.

Text mode has the following limitations over graphics mode:

  • you can’t see graphics such as those produced by X11 Buildroot

  • very early kernel messages such as early console in extract_kernel only show on the GUI, since at such early stages, not even the serial has been setup.

x86_64 has a VGA device enabled by default, as can be seen with:

./qemu-monitor info qtree

and the Linux kernel picks it up through the fbdev graphics system as can be seen from:

cat /dev/urandom > /dev/fb0

TODO: on arm, we see the penguin and some boot messages, but don’t get a shell at the end:

./run --arch aarch64 --graphic

I think it does not work because the graphic window is DRM only, i.e.:

cat /dev/urandom > /dev/fb0

fails with:

cat: write error: No space left on device

and has no effect, and the Linux kernel does not appear to have a built-in DRM console as it does for fbdev with fbcon.

There is however one out-of-tree implementation: kmscon.

arm and aarch64 rely on the QEMU CLI option:

-device virtio-gpu-pci

and the kernel config options:

CONFIG_DRM=y
CONFIG_DRM_VIRTIO_GPU=y

Unlike x86, arm and aarch64 don’t have a display device attached by default, thus the need for virtio-gpu-pci.

See also https://wiki.qemu.org/Documentation/Platforms/ARM (recently edited and corrected by yours truly…​ :-)).

-device VGA
# We use virtio-gpu because the legacy VGA framebuffer is
# very troublesome on aarch64, and virtio-gpu is the only
# video device that doesn't implement it.

so maybe it is not possible?

gem5 does not have a "text mode", since it cannot redirect the Linux terminal to the same host terminal where the executable is running: you are always forced to connect to the terminal with gem5-shell.

TODO could not get it working on x86_64, only ARM.

More concretely, first build the kernel with the gem5 arm Linux kernel patches, and then run:

./build-linux \
  --arch arm \
  --custom-config-file-gem5 \
  --linux-build-id gem5-v4.15 \
;
./run --arch arm --emulator gem5 --linux-build-id gem5-v4.15

and then on another shell:

vinagre localhost:5900

The CONFIG_LOGO penguin only appears after several seconds, together with kernel messages of type:

[    0.152755] [drm] found ARM HDLCD version r0p0
[    0.152790] hdlcd 2b000000.hdlcd: bound virt-encoder (ops 0x80935f94)
[    0.152795] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[    0.152799] [drm] No driver support for vblank timestamp query.
[    0.215179] Console: switching to colour frame buffer device 240x67
[    0.230389] hdlcd 2b000000.hdlcd: fb0:  frame buffer device
[    0.230509] [drm] Initialized hdlcd 1.0.0 20151021 for 2b000000.hdlcd on minor 0

The port 5900 is incremented by one if you already have something running on that port; gem5 tells us the right port on stdout as:

system.vncserver: Listening for connections on port 5900

and when we connect it shows a message:

info: VNC client attached

Alternatively, you can also dump each new frame to an image file with --frame-capture:

./run \
  --arch arm \
  --emulator gem5 \
  --linux-build-id gem5-v4.15 \
  -- --frame-capture \
;

This creates one compressed PNG whenever the screen image changes, inside the m5out directory, with a filename of type:

frames_system.vncserver/fb.<frame-index>.<timestamp>.png.gz

It is fun to see how we get one new frame whenever the white underscore cursor appears and reappears under the penguin!

The last frame is always available uncompressed at: system.framebuffer.png.

TODO kmscube failed on aarch64 with:

kmscube[706]: unhandled level 2 translation fault (11) at 0x00000000, esr 0x92000006, in libgbm.so.1.0.0[7fbf6a6000+e000]

For aarch64 we also need to configure the kernel with linux_config/display:

git -C "$(./getvar linux_source_dir)" fetch https://gem5.googlesource.com/arm/linux gem5/v4.15:gem5/v4.15
git -C "$(./getvar linux_source_dir)" checkout gem5/v4.15
./build-linux \
  --arch aarch64 \
  --config-fragment linux_config/display \
  --custom-config-file-gem5 \
  --linux-build-id gem5-v4.15 \
;
git -C "$(./getvar linux_source_dir)" checkout -
./run --arch aarch64 --emulator gem5 --linux-build-id gem5-v4.15

This is because the gem5 aarch64 defconfig does not enable HDLCD like the 32-bit arm one does, for some reason.

TODO get working. There is an unmerged patchset at: https://gem5-review.googlesource.com/c/public/gem5/+/11036/1

The DP650 is a newer display hardware than HDLCD. TODO is its interface publicly documented anywhere? Since it has a gem5 model and in-tree Linux kernel support, that information cannot be secret?

The key option to enable support in Linux is DRM_MALI_DISPLAY=y which we enable at linux_config/display.

Build the kernel exactly as for Graphic mode gem5 aarch64 and then run with:

./run --arch aarch64 --dp650 --emulator gem5 --linux-build-id gem5-v4.15

We cannot use mainline Linux because the gem5 arm Linux kernel patches are required at least to provide the CONFIG_DRM_VIRT_ENCODER option.

gem5 emulates the HDLCD ARM Holdings hardware for arm and aarch64.

The kernel uses HDLCD to implement the DRM interface, the required kernel config options are present at: linux_config/display.

TODO: minimize out the --custom-config-file. If we just remove it on arm, it does not work, with a failing dmesg:

[    0.066208] [drm] found ARM HDLCD version r0p0
[    0.066241] hdlcd 2b000000.hdlcd: bound virt-encoder (ops drm_vencoder_ops)
[    0.066247] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[    0.066252] [drm] No driver support for vblank timestamp query.
[    0.066276] hdlcd 2b000000.hdlcd: Cannot do DMA to address 0x0000000000000000
[    0.066281] swiotlb: coherent allocation failed for device 2b000000.hdlcd size=8294400
[    0.066288] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.15.0 #1
[    0.066293] Hardware name: V2P-AARCH64 (DT)
[    0.066296] Call trace:
[    0.066301]  dump_backtrace+0x0/0x1b0
[    0.066306]  show_stack+0x24/0x30
[    0.066311]  dump_stack+0xb8/0xf0
[    0.066316]  swiotlb_alloc_coherent+0x17c/0x190
[    0.066321]  __dma_alloc+0x68/0x160
[    0.066325]  drm_gem_cma_create+0x98/0x120
[    0.066330]  drm_fbdev_cma_create+0x74/0x2e0
[    0.066335]  __drm_fb_helper_initial_config_and_unlock+0x1d8/0x3a0
[    0.066341]  drm_fb_helper_initial_config+0x4c/0x58
[    0.066347]  drm_fbdev_cma_init_with_funcs+0x98/0x148
[    0.066352]  drm_fbdev_cma_init+0x40/0x50
[    0.066357]  hdlcd_drm_bind+0x220/0x428
[    0.066362]  try_to_bring_up_master+0x21c/0x2b8
[    0.066367]  component_master_add_with_match+0xa8/0xf0
[    0.066372]  hdlcd_probe+0x60/0x78
[    0.066377]  platform_drv_probe+0x60/0xc8
[    0.066382]  driver_probe_device+0x30c/0x478
[    0.066388]  __driver_attach+0x10c/0x128
[    0.066393]  bus_for_each_dev+0x70/0xb0
[    0.066398]  driver_attach+0x30/0x40
[    0.066402]  bus_add_driver+0x1d0/0x298
[    0.066408]  driver_register+0x68/0x100
[    0.066413]  __platform_driver_register+0x54/0x60
[    0.066418]  hdlcd_platform_driver_init+0x20/0x28
[    0.066424]  do_one_initcall+0x44/0x130
[    0.066428]  kernel_init_freeable+0x13c/0x1d8
[    0.066433]  kernel_init+0x18/0x108
[    0.066438]  ret_from_fork+0x10/0x1c
[    0.066444] hdlcd 2b000000.hdlcd: Failed to set initial hw configuration.
[    0.066470] hdlcd 2b000000.hdlcd: master bind failed: -12
[    0.066477] hdlcd: probe of 2b000000.hdlcd failed with error -12

So what other options are missing from gem5_defconfig? It would be cool to minimize it out to better understand the options.

Once you’ve seen the CONFIG_LOGO penguin as a sanity check, you can try to go for a cooler X11 Buildroot setup.

Build and run:

./build-buildroot --config-fragment buildroot_config/x11
./run --graphic

Inside QEMU:

startx

And then from the GUI you can start exciting graphical programs such as:

xcalc
xeyes
x11
Figure 1. X11 Buildroot graphical user interface screenshot

We don’t build X11 by default because it takes a considerable amount of time (about 20%), and is not expected to be used by most users: you need to pass the -x flag to enable it.

Not sure how well that graphics stack represents real systems, but if it does it would be a good way to understand how it works.

The X11 packages have an xserver prefix, as in:

./build-buildroot --config-fragment buildroot_config/x11 -- xserver_xorg-server-reconfigure

the easiest way to find them out is to just list "$(./getvar buildroot_build_build_dir)"/x*.

TODO as of: c2696c978d6ca88e8b8599c92b1beeda80eb62b2 I noticed that startx leads to a BUG_ON:

[    2.809104] WARNING: CPU: 0 PID: 51 at drivers/gpu/drm/ttm/ttm_bo_vm.c:304 ttm_bo_vm_open+0x37/0x40

TODO 9076c1d9bcc13b6efdb8ef502274f846d8d4e6a1 I’m 100% sure that it was working before, but I didn’t run it forever, and it stopped working at some point. Needs bisection, on whatever commit last touched x11 stuff.

-show-cursor did not help, I just get to see the host cursor, but the guest cursor still does not move.

Doing:

watch -n 1 grep i8042 /proc/interrupts

shows that interrupts do happen when mouse and keyboard presses are made, so I expect that something is wrong with either:

  • QEMU. Same behaviour if I try the host’s QEMU 2.10.1 however.

  • X11 configuration. We do have BR2_PACKAGE_XDRIVER_XF86_INPUT_MOUSE=y.

/var/log/Xorg.0.log contains the following interesting lines:

[    27.549] (II) LoadModule: "mouse"
[    27.549] (II) Loading /usr/lib/xorg/modules/input/mouse_drv.so
[    27.590] (EE) <default pointer>: Cannot find which device to use.
[    27.590] (EE) <default pointer>: cannot open input device
[    27.590] (EE) PreInit returned 2 for "<default pointer>"
[    27.590] (II) UnloadModule: "mouse"

The file /dev/input/mice does not exist.

Note that our current kernel config fragment sets:

# CONFIG_INPUT_MOUSE is not set
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set

for gem5, so you might want to remove those lines to debug this.

On ARM, startx hangs at a message:

vgaarb: this pci device is not a vga device

and nothing shows on the screen, and:

grep EE /var/log/Xorg.0.log

says:

(EE) Failed to load module "modesetting" (module does not exist, 0)

A friend told me this but I haven’t tried it yet:

  • xf86-video-modesetting is likely the missing ingredient, but it does not seem possible to activate it from Buildroot currently without patching things.

  • xf86-video-fbdev should work as well, but we need to make sure fbdev is enabled, and maybe add some line to the Xorg.conf

We disable networking by default because it starts a userland process, and we want to keep the number of userland processes to a minimum to make the system more understandable, as explained at: [resource-tradeoff-guidelines]

To enable networking on Buildroot, simply run:

ifup -a

That command goes over all (-a) the interfaces in /etc/network/interfaces and brings them up.

Then test it with:

wget google.com
cat index.html

Disable networking with:

ifdown -a

To enable networking by default after boot, use the methods documented at Run command at the end of BusyBox init.

ping does not work within QEMU by default, e.g.:

ping google.com

hangs after printing the header:

PING google.com (216.58.204.46): 56 data bytes

In this section we discuss how to interact between the guest and the host through networking.

First ensure that you can access the external network since that is easier to get working, see: Section 14, “Networking”.

With nc we can create the most minimal example possible as a sanity check.

On guest run:

nc -l -p 45455

Then on host run:

echo asdf | nc localhost 45455

asdf appears on the guest.

This uses:

  • BusyBox' nc utility, which is enabled with CONFIG_NC=y

  • nc from the netcat-openbsd package on an Ubuntu 18.04 host

Only this specific port works by default since we have forwarded it on the QEMU command line.

We use this exact procedure to connect to gdbserver.
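
For illustration, a hedged C equivalent of the host side of this sanity check, i.e. roughly what echo asdf | nc localhost 45455 does, assuming the same forwarded port:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    /* The port that our QEMU command line forwards into the guest. */
    addr.sin_port = htons(45455);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == -1) {
        perror("connect");
        return 1;
    }
    write(fd, "asdf\n", 5);
    close(fd);
    return 0;
}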

Not enabled by default due to the build / runtime overhead. To enable, build with:

./build-buildroot --config 'BR2_PACKAGE_OPENSSH=y'

Then inside the guest turn on sshd:

./sshd.sh

And finally on host:

ssh root@localhost -p 45456

Could not do port forwarding from host to guest, and therefore could not use gdbserver: https://stackoverflow.com/questions/48941494/how-to-do-port-forwarding-from-guest-to-host-in-gem5

Then in the host, start a server:

python -m SimpleHTTPServer 8000

And then in the guest, find the IP we need to hit with:

ip route

which gives:

default via 10.0.2.2 dev eth0
10.0.2.0/24 dev eth0 scope link  src 10.0.2.15

so we use in the guest:

wget 10.0.2.2:8000

Bibliography:

The 9p protocol allows the guest to mount a host directory.

Both QEMU and 9P gem5 support 9P.

9P, NFS and sshfs all allow sharing directories between guest and host.

Advantages of 9P

  • does not require sudo on the host to mount, unlike NFS

  • we could share a guest directory to the host, but this would require running a server on the guest, which adds simulation overhead

    Furthermore, this would be inconvenient, since what we usually want to do is to share host cross built files with the guest, and to do that we would have to copy the files over after the guest starts the server.

  • QEMU implements 9P natively, which makes it very stable and convenient, and must mean it is a simpler protocol than NFS as one would expect.

    This is not the case for gem5 7bfb7f3a43f382eb49853f47b140bfd6caad0fb8 unfortunately, which relies on the diod host daemon, although it is not unfeasible that future versions could implement it natively as well.

Advantages of NFS:

  • way more widely used and therefore stable and available, not to mention that it also works on real hardware.

  • the name does not start with a digit, which is an invalid identifier in all programming languages known to man. Who in their right mind would call a software project as such? It does not even match the natural order of Plan 9; Plan then 9: P9!

As usual, we have already set everything up for you. On host:

cd "$(./getvar p9_dir)"
uname -a > host

Guest:

cd /mnt/9p/data
cat host
uname -a > guest

Host:

cat guest

The main ingredients for this are:

Bibliography:

TODO seems possible! Let’s do it:

From the source, there is just one exported tag named gem5, so we could try on the guest:

mkdir -p /mnt/9p/gem5
mount -t 9p -o trans=virtio,version=9p2000.L gem5 /mnt/9p/gem5

TODO: get working.

9P is better with emulation, but let’s just get this working for fun.

First make sure that this works: Section 14.3.2, “Guest to host networking”.

Then, build the kernel with NFS support:

./build-linux --config-fragment linux_config/nfs

Now on host:

sudo apt-get install nfs-kernel-server

Now edit /etc/exports to contain:

/tmp *(rw,sync,no_root_squash,no_subtree_check)

and restart the server:

sudo systemctl restart nfs-kernel-server

Now on guest:

mkdir /mnt/nfs
mount -t nfs 10.0.2.2:/tmp /mnt/nfs

TODO: failing with:

mount: mounting 10.0.2.2:/tmp on /mnt/nfs failed: No such device

And now the /tmp directory from host is not mounted on guest!

If you don’t want the NFS server to start automatically on the next boot, to save resources, do:

systemctl disable nfs-kernel-server

To modify a single option on top of our default kernel configs, do:

./build-linux --config 'CONFIG_FORTIFY_SOURCE=y'

Kernel modules depend on certain kernel configs, and therefore in general you might have to clean and rebuild the kernel modules after changing the kernel config:

./build-modules --clean
./build-modules

and then proceed as in Your first kernel module hack.

You might often get away without rebuilding the kernel modules, however.

To use an extra kernel config fragment file on top of our defaults, do:

printf '
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
' > data/myconfig
./build-linux --config-fragment 'data/myconfig'

To use just your own exact .config instead of our defaults ones, use:

./build-linux --custom-config-file data/myconfig

There is also a shortcut --custom-config-file-gem5 to use the gem5 arm Linux kernel patches.

The following options can all be used together, sorted by decreasing precedence:

  • --config

  • --config-fragment

  • --custom-config-file

To do a clean menu config yourself and use that for the build, do:

./build-linux --clean
./build-linux --custom-config-target menuconfig

But remember that every new build re-configures the kernel by default, so to keep your configs you will need to use on further builds:

./build-linux --no-configure

So what you likely want to do instead is to save that as a new defconfig and use it later as:

./build-linux --no-configure --no-modules-install savedefconfig
cp "$(./getvar linux_build_dir)/defconfig" data/myconfig
./build-linux --custom-config-file data/myconfig

You can also use other config generating targets such as defconfig with the same method as shown at: Section 15.1.3.1.1, “Linux kernel defconfig”.

Get the build config in guest:

zcat /proc/config.gz

or with our shortcut:

./conf.sh

or to conveniently grep for a specific option case insensitively:

./conf.sh ikconfig

This is enabled by:

CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y

From host:

cat "$(./getvar linux_config)"
./linux/scripts/extract-ikconfig "$(./getvar vmlinux)"

The second command is redundant here since we already have the config on the host, although it can be useful when someone gives you a random image.

By default, build-linux generates a .config that is a mixture of:

To find out which kernel configs are being used exactly, simply run:

./build-linux --dry-run

and look for the merge_config.sh call. This script from the Linux kernel tree, as the name suggests, merges multiple configuration files into one as explained at: https://unix.stackexchange.com/questions/224887/how-to-script-make-menuconfig-to-automate-linux-kernel-build-configuration/450407#450407

For each arch, the base of our configs are named as:

linux_config/buildroot-<arch>

These configs are extracted directly from a Buildroot build with update-buildroot-kernel-configs.

Note that Buildroot can override some of the configurations with sed, e.g. it forces CONFIG_BLK_DEV_INITRD=y when BR2_TARGET_ROOTFS_CPIO is on. For this reason, those configs are not simply copy pasted from Buildroot files, but rather taken from an actual Buildroot kernel build, and then minimized with make savedefconfig: https://stackoverflow.com/questions/27899104/how-to-create-a-defconfig-file-from-a-config

On top of those, we add the following by default:

To see Buildroot’s base configs, start from buildroot/configs/qemu_x86_64_defconfig.

That file contains BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE="board/qemu/x86_64/linux-4.15.config", which points to the base config file used: board/qemu/x86_64/linux-4.15.config.

arm, on the other hand, uses buildroot/configs/qemu_arm_vexpress_defconfig, which contains BR2_LINUX_KERNEL_DEFCONFIG="vexpress", and therefore just does a make vexpress_defconfig, and gets its config from the Linux kernel tree itself.

To boot defconfig from disk on Linux and see a shell, all we need are these missing virtio options:

./build-linux \
  --linux-build-id defconfig \
  --custom-config-target defconfig \
  --config CONFIG_VIRTIO_PCI=y \
  --config CONFIG_VIRTIO_BLK=y \
;
./run --linux-build-id defconfig

Oh, and check this out:

du -h \
  "$(./getvar vmlinux)" \
  "$(./getvar --linux-build-id defconfig vmlinux)" \
;

Output:

360M    /path/to/linux-kernel-module-cheat/out/linux/default/x86_64/vmlinux
47M     /path/to/linux-kernel-module-cheat/out/linux/defconfig/x86_64/vmlinux

Brutal. Where did we go wrong?

The extra virtio options are not needed if we use initrd:

./build-linux \
  --linux-build-id defconfig \
  --custom-config-target defconfig \
;
./run --initrd --linux-build-id defconfig

On aarch64, we can boot from initrd with:

./build-linux \
  --arch aarch64 \
  --linux-build-id defconfig \
  --custom-config-target defconfig \
;
./run \
  --arch aarch64 \
  --initrd \
  --linux-build-id defconfig \
  --memory 2G \
;

We need the 2G of memory because the CPIO is 600MiB due to a humongous amount of loadable kernel modules!

In aarch64, the size situation is inverted from x86_64, and this can be seen on the vmlinux size as well:

118M    /path/to/linux-kernel-module-cheat/out/linux/default/aarch64/vmlinux
240M    /path/to/linux-kernel-module-cheat/out/linux/defconfig/aarch64/vmlinux

So it seems that the ARM devs decided rather than creating a minimal config that boots QEMU, to try and make a single config that boots every board in existence. Terrible!

Tested on 1e2b7f1e5e9e3073863dc17e25b2455c8ebdeadd + 1.

linux_config/min contains the minimal tweaks required to boot gem5, or to use our slightly different QEMU command line options compared to Buildroot’s, on all archs.

It is one of the default config fragments we use, as explained at: Section 15.1.3, “About our Linux kernel configs”.

Having the same config working for both QEMU and gem5 (oh, the hours of bisection) means that you can deal with functional matters in QEMU, which runs much faster, and switch to gem5 only for performance issues.

We can build just with min on top of the base config with:

./build-linux \
  --arch aarch64 \
  --config-fragment linux_config/min \
  --custom-config-file linux_config/buildroot-aarch64 \
  --linux-build-id min \
;

vmlinux had a very similar size to the default. It seems that linux_config/buildroot-aarch64 contains or implies most linux_config/default options already? TODO: that seems odd, really?

Tested on 649d06d6758cefd080d04dc47fd6a5a26a620874 + 1.

Other configs which we had previously tested at 4e0d9af81fcce2ce4e777cb82a1990d7c2ca7c1e are:

We try to use the latest possible kernel major release version.

In QEMU:

cat /proc/version

or in the source:

cd "$(./getvar linux_source_dir)"
git log | grep -E '    Linux [0-9]+\.' | head

During an update, all your kernel modules may break since the in-kernel API is not stable.

They are usually trivial breaks of things moving around headers or to sub-structs.

The userland, however, should simply not break, as Linus enforces strict backwards compatibility of userland interfaces.

This backwards compatibility is just awesome, it makes getting and running the latest master painless.

This also makes this repo the perfect setup to develop the Linux kernel.

In case something breaks while updating the Linux kernel, you can try to bisect it to understand the root cause, see: [bisection].

First, use the branching procedure described at: [update-a-forked-submodule]

Because the kernel is so central to this repository, almost all tests must be re-run, so basically just follow the full testing procedure described at: [test-this-repo]. The only tests that can be skipped are essentially the [baremetal] tests.

Before committing, don’t forget to update:

  • the linux_kernel_version constant in common.py

  • the tagline of this repository on:

    • this README

    • the GitHub project description

The kernel is not forward compatible, however, so downgrading the Linux kernel requires downgrading the userland too to the latest Buildroot branch that supports it.

The default Linux kernel version is bumped in Buildroot with commit messages of type:

linux: bump default to version 4.9.6

So you can try:

git log --grep 'linux: bump default to version'

Those commits change BR2_LINUX_KERNEL_LATEST_VERSION in /linux/Config.in.

You should then look up if there is a branch that supports that kernel. Staying on branches is a good idea as they will get backports, in particular ones that fix the build as newer host versions come out.

Finally, after downgrading Buildroot, if something does not work, you might also have to make some changes to how this repo uses Buildroot, as the Buildroot configuration options might have changed.

We don’t expect those changes to be very difficult. A good way to approach the task is to:

  • do a dry run build to get the equivalent Bash commands used:

    ./build-buildroot --dry-run
  • build the Buildroot documentation for the version you are going to use, and check if all Buildroot build commands make sense there

Then, if you spot an option that is wrong, some grepping in this repo should quickly point you to the code you need to modify.

It is also possible that you will need to apply some patches from newer Buildroot versions for it to build, due to incompatibilities between the host Ubuntu packages and that Buildroot version. Just read the error message, and try:

  • git log master -- package/<pkg>

  • Google the error message for mailing list hits

Successful port reports:

Bootloaders can pass a string as input to the Linux kernel when it is booting to control its behaviour, much like the execve system call does to userland processes.

This allows us to control the behaviour of the kernel without rebuilding anything.

With QEMU, QEMU itself acts as the bootloader, and provides the -append option, which we expose through ./run --kernel-cli, e.g.:

./run --kernel-cli 'foo bar'

Then inside the host, you can check which options were given with:

cat /proc/cmdline

They are also printed at the beginning of the boot message:

dmesg | grep "Command line"

See also:

The arguments are documented in the kernel documentation: https://www.kernel.org/doc/html/v4.14/admin-guide/kernel-parameters.html

When dealing with real boards, extra command line options are provided on some magic bootloader configuration file, e.g.:

Double quotes can be used to escape spaces as in opt="a b", but double quotes themselves cannot be escaped, e.g. opt="a\"b"

This even led us to use base64 encoding with --eval!

There are two methods:

  • __setup as in:

    __setup("console=", console_setup);
  • core_param as in:

    core_param(panic, panic_timeout, int, 0644);

core_param suggests how they are different:

/**
 * core_param - define a historical core kernel parameter.

...

 * core_param is just like module_param(), but cannot be modular and
 * doesn't add a prefix (such as "printk.").  This is for compatibility
 * with __setup(), and it makes sense as truly core parameters aren't
 * tied to the particular file they're in.
 */
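
As a hedged sketch with a hypothetical parameter (not from the kernel tree), a __setup handler for an imaginary foo= option would look like:

#include <linux/init.h>
#include <linux/string.h>

static char foo_buf[64];

/* Called early during boot if foo=<value> is on the kernel command line. */
static int __init foo_setup(char *str)
{
    strlcpy(foo_buf, str, sizeof(foo_buf));
    return 1; /* non-zero: tell the kernel the parameter was handled */
}
__setup("foo=", foo_setup);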

By default, the Linux kernel mounts the root filesystem as readonly. TODO rationale?

This cannot be observed in the default BusyBox init, because by default our rootfs_overlay/etc/inittab does:

/bin/mount -o remount,rw /

Analogously, Ubuntu 18.04 does in its fstab something like:

UUID=/dev/sda1 / ext4 errors=remount-ro 0 1

which uses default mount rw flags.

We have however removed those init setups to keep things more minimal, and replaced them with the rw kernel boot parameter, which makes the root be mounted as writable.

To observe the default readonly behaviour, hack the run script to remove the rw parameter, and then run on a raw shell:

./run --kernel-cli 'init=/bin/sh'

Now try to do:

touch a

which fails with:

touch: a: Read-only file system

We can also observe the read-onlyness with:

mount -t proc /proc
mount

which contains:

/dev/root on / type ext2 (ro,relatime,block_validity,barrier,user_xattr)

and so it is Read Only as shown by ro.

Disable userland address space randomization. Test it out by running [rand_check-out] twice:

./run --eval-after './linux/rand_check.out;./linux/poweroff.out'
./run --eval-after './linux/rand_check.out;./linux/poweroff.out'

If we remove it from our run script by hacking it up, the addresses shown by linux/rand_check.out vary across boots.

Equivalent to:

echo 0 > /proc/sys/kernel/randomize_va_space

printk is the most simple and widely used way of getting information from the kernel, so you should familiarize yourself with its basic configuration.

We use printk a lot in our kernel modules, and it shows on the terminal by default, along with stdout and what you type.

Hide all printk messages:

dmesg -n 1

or equivalently:

echo 1 > /proc/sys/kernel/printk

Do it with a kernel command line parameter to affect the boot itself:

./run --kernel-cli 'loglevel=5'

and now only warning messages or worse show during boot, which is useful to identify problems.

Our default printk format is:

<LEVEL>[TIMESTAMP] MESSAGE

e.g.:

<6>[    2.979121] Freeing unused kernel memory: 2024K

where:

  • LEVEL: higher means less serious

  • TIMESTAMP: seconds since boot

This format is selected by the following boot options:

  • console_msg_format=syslog: add the <LEVEL> part. Added in v4.16.

  • printk.time=y: add the [TIMESTAMP] part

The highest level, debug, is a bit more magic, see: Section 15.4.3, “pr_debug” for more info.

The current printk level can be obtained with:

cat /proc/sys/kernel/printk

As of 87e846fc1f9c57840e143513ebd69c638bd37aa8 this prints:

7       4       1       7

which contains:

  • 7: current log level, modifiable by previously mentioned methods

  • 4: documented as: "printk’s without a loglevel use this": TODO what does that mean, how to call printk without a log level?

  • 1: minimum log level that still prints something (0 prints nothing)

  • 7: default log level

After boot, we are at the boot-time default log level, as can be seen from:

insmod myprintk.ko

which outputs something like:

<1>[   12.494429] pr_alert
<2>[   12.494666] pr_crit
<3>[   12.494823] pr_err
<4>[   12.494911] pr_warning
<5>[   12.495170] pr_notice
<6>[   12.495327] pr_info
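
The /proc/sys/kernel/printk sysctl we used above is backed by the following entry of the kernel’s sysctl table, from kernel/sysctl.c:
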
#if defined CONFIG_PRINTK
	{
		.procname	= "printk",
		.data		= &console_loglevel,
		.maxlen		= 4*sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},

which teaches us that printk can be completely disabled at compile time:

config PRINTK
	default y
	bool "Enable support for printk" if EXPERT
	select IRQ_WORK
	help
	  This option enables normal printk support. Removing it
	  eliminates most of the message strings from the kernel image
	  and makes the kernel more or less silent. As this makes it
	  very difficult to diagnose system problems, saying N here is
	  strongly discouraged.

console_loglevel is defined at:

#define console_loglevel (console_printk[0])

and console_printk is an array with 4 ints:

int console_printk[4] = {
	CONSOLE_LOGLEVEL_DEFAULT,	/* console_loglevel */
	MESSAGE_LOGLEVEL_DEFAULT,	/* default_message_loglevel */
	CONSOLE_LOGLEVEL_MIN,		/* minimum_console_loglevel */
	CONSOLE_LOGLEVEL_DEFAULT,	/* default_console_loglevel */
};

and then we see that the default is configurable with CONFIG_CONSOLE_LOGLEVEL_DEFAULT:

/*
 * Default used to be hard-coded at 7, quiet used to be hardcoded at 4,
 * we're now allowing both to be set from kernel config.
 */
#define CONSOLE_LOGLEVEL_DEFAULT CONFIG_CONSOLE_LOGLEVEL_DEFAULT
#define CONSOLE_LOGLEVEL_QUIET	 CONFIG_CONSOLE_LOGLEVEL_QUIET

The message loglevel default is explained at:

/* printk's without a loglevel use this.. */
#define MESSAGE_LOGLEVEL_DEFAULT CONFIG_MESSAGE_LOGLEVEL_DEFAULT

The min is just hardcoded to one as you would expect, with some amazing kernel comedy around it:

/* We show everything that is MORE important than this.. */
#define CONSOLE_LOGLEVEL_SILENT  0 /* Mum's the word */
#define CONSOLE_LOGLEVEL_MIN	 1 /* Minimum loglevel we let people use */
#define CONSOLE_LOGLEVEL_DEBUG	10 /* issue debug messages */
#define CONSOLE_LOGLEVEL_MOTORMOUTH 15	/* You can't shut this one up */

We then also learn about the useless quiet and debug kernel parameters at:

config CONSOLE_LOGLEVEL_QUIET
	int "quiet console loglevel (1-15)"
	range 1 15
	default "4"
	help
	  loglevel to use when "quiet" is passed on the kernel commandline.

	  When "quiet" is passed on the kernel commandline this loglevel
	  will be used as the loglevel. IOW passing "quiet" will be the
	  equivalent of passing "loglevel=<CONSOLE_LOGLEVEL_QUIET>"

which explains the useless reason why that number is special. This is implemented at:

static int __init debug_kernel(char *str)
{
	console_loglevel = CONSOLE_LOGLEVEL_DEBUG;
	return 0;
}

static int __init quiet_kernel(char *str)
{
	console_loglevel = CONSOLE_LOGLEVEL_QUIET;
	return 0;
}

early_param("debug", debug_kernel);
early_param("quiet", quiet_kernel);
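
The ignore_loglevel kernel boot parameter:
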
./run --kernel-cli 'ignore_loglevel'

enables all log levels, and is basically the same as:

./run --kernel-cli 'loglevel=8'

except that you don’t need to know what is the maximum level.

Debug messages are not printable by default without recompiling.

But the awesome CONFIG_DYNAMIC_DEBUG=y option which we enable by default allows us to do:

echo 8 > /proc/sys/kernel/printk
echo 'file kernel/module.c +p' > /sys/kernel/debug/dynamic_debug/control
./linux/myinsmod.out hello.ko

and we have a shortcut at:

./pr_debug.sh

Wildcards are also accepted, e.g. enable all messages from all files:

echo 'file * +p' > /sys/kernel/debug/dynamic_debug/control

TODO: why is this not working:

echo 'func sys_init_module +p' > /sys/kernel/debug/dynamic_debug/control

Enable messages in specific modules:

echo 8 > /proc/sys/kernel/printk
echo 'module myprintk +p' > /sys/kernel/debug/dynamic_debug/control
insmod myprintk.ko

This outputs the pr_debug message:

printk debug

but TODO: it also shows debug messages even without enabling them explicitly:

echo 8 > /proc/sys/kernel/printk
insmod myprintk.ko

and it shows as enabled:

# grep myprintk /sys/kernel/debug/dynamic_debug/control
/root/linux-kernel-module-cheat/out/kernel_modules/x86_64/kernel_modules/panic.c:12 [myprintk]myinit =p "pr_debug\012"

Enable pr_debug for boot messages as well, before we can reach userland and write to /proc:

./run --kernel-cli 'dyndbg="file * +p" loglevel=8'

Get ready for the noisiest boot ever, I think it overflows the printk buffer and funny things happen.

When CONFIG_DYNAMIC_DEBUG is set, printk(KERN_DEBUG is not the exact same as pr_debug( since printk(KERN_DEBUG messages are visible with:

./run --kernel-cli 'initcall_debug loglevel=8'

which outputs lines of type:

<7>[    1.756680] calling  clk_disable_unused+0x0/0x130 @ 1
<7>[    1.757003] initcall clk_disable_unused+0x0/0x130 returned 0 after 111 usecs

which are printk(KERN_DEBUG inside init/main.c in v4.16.

This likely comes from the ifdef split at include/linux/printk.h:

/* If you are writing a driver, please use dev_dbg instead */
#if defined(CONFIG_DYNAMIC_DEBUG)
#include <linux/dynamic_debug.h>

/* dynamic_pr_debug() uses pr_fmt() internally so we don't need it here */
#define pr_debug(fmt, ...) \
    dynamic_pr_debug(fmt, ##__VA_ARGS__)
#elif defined(DEBUG)
#define pr_debug(fmt, ...) \
    printk(KERN_DEBUG pr_fmt(fmt), ##__VA_ARGS__)
#else
#define pr_debug(fmt, ...) \
    no_printk(KERN_DEBUG pr_fmt(fmt), ##__VA_ARGS__)
#endif

The Linux kernel allows passing module parameters at insertion time through the init_module and finit_module system calls.

The insmod tool exposes that as:

insmod params.ko i=3 j=4

Parameters are declared in the module as:

static u32 i = 0;
module_param(i, int, S_IRUSR | S_IWUSR);
MODULE_PARM_DESC(i, "my favorite int");

Automated test:

./params.sh
echo $?

Outcome: the test passes:

0

Sources:

As shown in the example, module parameters can also be read and modified at runtime from sysfs.

We can obtain the help text of the parameters with:

modinfo params.ko

The output contains:

parm:           j:my second favorite int
parm:           i:my favorite int

modprobe insertion can also set default parameters via the /etc/modprobe.conf file:

modprobe params
cat /sys/kernel/debug/lkmc_params

Output:

12 34

This is especially important when loading modules with kernel module dependencies through modprobe, or else we would have no opportunity to pass those parameters.

One module can depend on symbols of another module that are exported with EXPORT_SYMBOL:

./dep.sh
echo $?

Outcome: the test passes:

0

Sources:

The kernel deduces dependencies based on the EXPORT_SYMBOL that each module uses.
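
As a hedged sketch of that pattern (not the repo's exact sources), the two modules look something like:

/* dep.c: exports a symbol. */
#include <linux/module.h>

int lkmc_dep;
EXPORT_SYMBOL(lkmc_dep);

MODULE_LICENSE("GPL");

/* dep2.c: uses the symbol, so it can only be inserted after dep.ko. */
#include <linux/module.h>

extern int lkmc_dep;

static int __init myinit(void)
{
    lkmc_dep = 42;
    return 0;
}

static void __exit myexit(void)
{
}

module_init(myinit);
module_exit(myexit);
MODULE_LICENSE("GPL");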

Symbols exported by EXPORT_SYMBOL can be seen with:

insmod dep.ko
grep lkmc_dep /proc/kallsyms

sample output:

ffffffffc0001030 r __ksymtab_lkmc_dep   [dep]
ffffffffc000104d r __kstrtab_lkmc_dep   [dep]
ffffffffc0002300 B lkmc_dep     [dep]

This requires CONFIG_KALLSYMS_ALL=y.

Dependency information is stored by the kernel module build system in the .ko files' MODULE_INFO, e.g.:

modinfo dep2.ko

contains:

depends:        dep

We can double check with:

strings dep2.ko | grep -E 'depends'

The output contains:

depends=dep

Module dependencies are also stored at:

cd /lib/modules/*
grep dep modules.dep

Output:

extra/dep2.ko: extra/dep.ko
extra/dep.ko:

TODO: what for, and at which point does Buildroot / BusyBox generate that file?

Unlike insmod, modprobe deals with kernel module dependencies for us.

Then, for example:

modprobe buildroot_dep2

outputs to dmesg:

42

and then:

lsmod

outputs:

Module                  Size  Used by    Tainted: G
buildroot_dep2         16384  0
buildroot_dep          16384  1 buildroot_dep2

Sources:

Removal also removes required modules that have zero usage count:

modprobe -r buildroot_dep2

modprobe uses information from the modules.dep file to decide the required dependencies. That file contains:

extra/buildroot_dep2.ko: extra/buildroot_dep.ko

Bibliography:

Module metadata is stored in module files at compile time. Some of the fields can be retrieved at runtime through the THIS_MODULE struct module pointer:

insmod module_info.ko

Dmesg output:

name = module_info
version = 1.0
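
A hedged sketch of how a module can produce that output through THIS_MODULE, using fields of struct module (not necessarily the repo's exact code):

#include <linux/module.h>

static int __init myinit(void)
{
    pr_info("name = %s\n", THIS_MODULE->name);
    pr_info("version = %s\n", THIS_MODULE->version);
    return 0;
}

static void __exit myexit(void)
{
}

module_init(myinit);
module_exit(myexit);
MODULE_LICENSE("GPL");
MODULE_VERSION("1.0");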

Some of those are also present on sysfs:

cat /sys/module/module_info/version

Output:

1.0

And we can also observe them with the modinfo command line utility:

modinfo module_info.ko

sample output:

filename:       module_info.ko
license:        GPL
version:        1.0
srcversion:     AF3DE8A8CFCDEB6B00E35B6
depends:
vermagic:       4.17.0 SMP mod_unload modversions

Module information is stored in a special .modinfo section of the ELF file:

./run-toolchain readelf -- -SW "$(./getvar kernel_modules_build_subdir)/module_info.ko"

contains:

  [ 5] .modinfo          PROGBITS        0000000000000000 0000d8 000096 00   A  0   0  8

and:

./run-toolchain readelf -- -x .modinfo "$(./getvar kernel_modules_build_subdir)/module_info.ko"

gives:

  0x00000000 6c696365 6e73653d 47504c00 76657273 license=GPL.vers
  0x00000010 696f6e3d 312e3000 61736466 3d717765 ion=1.0.asdf=qwe
  0x00000020 72000000 00000000 73726376 65727369 r.......srcversi
  0x00000030 6f6e3d41 46334445 38413843 46434445 on=AF3DE8A8CFCDE
  0x00000040 42364230 30453335 42360000 00000000 B6B00E35B6......
  0x00000050 64657065 6e64733d 006e616d 653d6d6f depends=.name=mo
  0x00000060 64756c65 5f696e66 6f007665 726d6167 dule_info.vermag
  0x00000070 69633d34 2e31372e 3020534d 50206d6f ic=4.17.0 SMP mo
  0x00000080 645f756e 6c6f6164 206d6f64 76657273 d_unload modvers
  0x00000090 696f6e73 2000                       ions .

I think a dedicated section is used to allow the Linux kernel and command line tools to easily parse that information from the ELF file as we’ve done with readelf.

Bibliography:

Vermagic is a magic string present in the kernel and on MODULE_INFO of kernel modules. It is used to verify that the kernel module was compiled against a compatible kernel version and relevant configuration:

insmod vermagic.ko

Possible dmesg output:

VERMAGIC_STRING = 4.17.0 SMP mod_unload modversions

If we artificially create a mismatch with MODULE_INFO(vermagic, ...), insmod fails with:

insmod: can't insert 'vermagic_fail.ko': invalid module format

and dmesg shows the expected and the actually found vermagic:

vermagic_fail: version magic 'asdfqwer' should be '4.17.0 SMP mod_unload modversions '

The kernel’s vermagic is defined based on compile time configurations at include/linux/vermagic.h:

#define VERMAGIC_STRING                                                 \
        UTS_RELEASE " "                                                 \
        MODULE_VERMAGIC_SMP MODULE_VERMAGIC_PREEMPT                     \
        MODULE_VERMAGIC_MODULE_UNLOAD MODULE_VERMAGIC_MODVERSIONS       \
        MODULE_ARCH_VERMAGIC                                            \
        MODULE_RANDSTRUCT_PLUGIN

The SMP part of the string for example is defined on the same file based on the value of CONFIG_SMP:

#ifdef CONFIG_SMP
#define MODULE_VERMAGIC_SMP "SMP "
#else
#define MODULE_VERMAGIC_SMP ""
#endif

TODO how to get the vermagic from running kernel from userland? https://lists.kernelnewbies.org/pipermail/kernelnewbies/2012-October/006306.html

kmod modprobe has a flag to skip the vermagic check:

--force-modversion

This option just strips modversion information from the module before loading, so it is not a kernel feature.

init_module and cleanup_module are an older alternative to the module_init and module_exit macros:

insmod init_module.ko
rmmod init_module

Dmesg output:

init_module
cleanup_module

It is generally hard / impossible to use floating point operations in the kernel. TODO understand details.

A quick (x86-only for now because lazy) example is shown at: kernel_modules/float.c

Usage:

insmod float.ko myfloat=1 enable_fpu=1

We have to call kernel_fpu_begin() before starting FPU operations, and kernel_fpu_end() when we are done. This particular example however did not blow up without them at lkmc 7f917af66b17373505f6c21d75af9331d624b3a9 + 1:

insmod float.ko myfloat=1 enable_fpu=0

The v5.1 documentation under arch/x86/include/asm/fpu/api.h reads:

 * Use kernel_fpu_begin/end() if you intend to use FPU in kernel context. It
 * disables preemption so be careful if you intend to use it for long periods
 * of time.
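
A hedged minimal sketch of that pattern on x86 (assuming the Makefile flag tweak shown below; not the repo's exact float.c):

#include <asm/fpu/api.h>
#include <linux/module.h>

static int __init myinit(void)
{
    int out;
    kernel_fpu_begin(); /* saves user FPU state, disables preemption */
    out = (int)(1.5f + 2.5f);
    kernel_fpu_end();
    pr_info("out = %d\n", out); /* printk cannot format floats */
    return 0;
}

static void __exit myexit(void)
{
}

module_init(myinit);
module_exit(myexit);
MODULE_LICENSE("GPL");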

The example sets the following in kernel_modules/Makefile:

CFLAGS_REMOVE_float.o += -mno-sse -mno-sse2

to avoid:

error: SSE register return with SSE disabled

We found those flags with ./build-modules --verbose.

Bibliography:

To test out kernel panics and oops in controlled circumstances, try out the modules:

insmod panic.ko
insmod oops.ko

Source:

A panic can also be generated with:

echo c > /proc/sysrq-trigger

How to generate them:

When a panic happens, Shift-PgUp does not work as it normally does, and it is hard to get the logs if you are in QEMU graphic mode:

On panic, the kernel dies, and so does our terminal.

The panic trace looks like:

panic: loading out-of-tree module taints kernel.
panic myinit
Kernel panic - not syncing: hello panic
CPU: 0 PID: 53 Comm: insmod Tainted: G           O     4.16.0 #6
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.11.0-0-g63451fca13-prebuilt.qemu-project.org 04/01/2014
Call Trace:
 dump_stack+0x7d/0xba
 ? 0xffffffffc0000000
 panic+0xda/0x213
 ? printk+0x43/0x4b
 ? 0xffffffffc0000000
 myinit+0x1d/0x20 [panic]
 do_one_initcall+0x3e/0x170
 do_init_module+0x5b/0x210
 load_module+0x2035/0x29d0
 ? kernel_read_file+0x7d/0x140
 ? SyS_finit_module+0xa8/0xb0
 SyS_finit_module+0xa8/0xb0
 do_syscall_64+0x6f/0x310
 ? trace_hardirqs_off_thunk+0x1a/0x32
 entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x7ffff7b36206
RSP: 002b:00007fffffffeb78 EFLAGS: 00000206 ORIG_RAX: 0000000000000139
RAX: ffffffffffffffda RBX: 000000000000005c RCX: 00007ffff7b36206
RDX: 0000000000000000 RSI: 000000000069e010 RDI: 0000000000000003
RBP: 000000000069e010 R08: 00007ffff7ddd320 R09: 0000000000000000
R10: 00007ffff7ddd320 R11: 0000000000000206 R12: 0000000000000003
R13: 00007fffffffef4a R14: 0000000000000000 R15: 0000000000000000
Kernel Offset: disabled
---[ end Kernel panic - not syncing: hello panic

Notice how our panic message hello panic is visible at:

Kernel panic - not syncing: hello panic

The log shows which module each symbol belongs to if any, e.g.:

myinit+0x1d/0x20 [panic]

says that the function myinit is in the module panic.

To find the line that panicked, do:

./run-gdb

and then:

info line *(myinit+0x1d)

which gives us the correct line:

Line 7 of "/root/linux-kernel-module-cheat/out/kernel_modules/x86_64/kernel_modules/panic.c" starts at address 0xbf00001c <myinit+28> and ends at 0xbf00002c <myexit>.

The exact same thing can be done post mortem with:

./run-toolchain gdb -- \
  -batch \
  -ex 'info line *(myinit+0x1d)' \
  "$(./getvar kernel_modules_build_subdir)/panic.ko" \
;

Related:

Basically just calls panic("BUG!") for most archs.

For testing purposes, it is very useful to quit the emulator automatically with exit status non zero in case of kernel panic, instead of just hanging forever.

Enabled by default with:

Also asked at https://unix.stackexchange.com/questions/443017/can-i-make-qemu-exit-with-failure-on-kernel-panic which also mentions the x86_64 -device pvpanic, but I don’t see much advantage to it.

TODO neither method exits with exit status different from 0, so for now we are just grepping the logs for panic messages, which sucks.

One possibility that gets close would be to use GDB step debug to break at the panic function, and then send a QEMU monitor from GDB quit command if that happens, but I don’t see a way to exit with non-zero status to indicate error.

gem5 9048ef0ffbf21bedb803b785fb68f83e95c04db8 (January 2019) can detect panics automatically if the option system.panic_on_panic is on.

It parses kernel symbols and detects when the PC reaches the address of the panic function. gem5 then prints to stdout:

Kernel panic in simulated kernel

and exits with status -6.

At gem5 ff52563a214c71fcd1e21e9f00ad839612032e3b (July 2018) behaviour was different, and just exited 0: https://www.mail-archive.com/[email protected]/msg15870.html TODO find fixing commit.

We enable the system.panic_on_panic option by default on arm and aarch64, which makes gem5 exit immediately in case of panic, which is awesome!

If we don’t set system.panic_on_panic, then gem5 just hangs on an infinite guest loop.

TODO: why doesn’t gem5 x86 ff52563a214c71fcd1e21e9f00ad839612032e3b support system.panic_on_panic as well? Trying to set system.panic_on_panic there fails with:

tried to set or access non-existentobject parameter: panic_on_panic

However, at that commit panic on x86 makes gem5 crash with:

panic: i8042 "System reset" command not implemented.

which is a good side effect of an unimplemented hardware feature, since the simulation actually stops.

        kernelPanicEvent = addKernelFuncEventOrPanic<Linux::KernelPanicEvent>(
            "panic", "Kernel panic in simulated kernel", dmesg_output);

Here we see that the symbol "panic" for the panic() function is the one being tracked.

Make the kernel reboot after n seconds after panic:

echo 1 > /proc/sys/kernel/panic

Can also be controlled with the panic= kernel boot parameter.

0 to disable, -1 to reboot immediately.

Bibliography:

If CONFIG_KALLSYMS=n, then addresses are shown on traces instead of symbol plus offset.

In v4.16 it does not seem possible to configure that at runtime. GDB step debugging with:

./run --eval-after 'insmod dump_stack.ko' --gdb-wait --tmux-args dump_stack

shows that traces are printed at arch/x86/kernel/dumpstack.c:

static void printk_stack_address(unsigned long address, int reliable,
                 char *log_lvl)
{
    touch_nmi_watchdog();
    printk("%s %s%pB\n", log_lvl, reliable ? "" : "? ", (void *)address);
}

and %pB is documented at Documentation/core-api/printk-formats.rst:

If KALLSYMS are disabled then the symbol address is printed instead.

I wasn’t able to disable CONFIG_KALLSYMS to test this out however: is it being selected by some other option? I then used make menuconfig to see which options select it, but they were all off…​

On oops, the shell still lives after.

However we:

  • leave the normal control flow, and the "oops after" message never gets printed: an interrupt is serviced

  • cannot rmmod oops afterwards

It is possible to make oops lead to panics always with:

echo 1 > /proc/sys/kernel/panic_on_oops
insmod oops.ko

An oops stack trace looks like:

BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
IP: myinit+0x18/0x30 [oops]
PGD dccf067 P4D dccf067 PUD dcc1067 PMD 0
Oops: 0002 [#1] SMP NOPTI
Modules linked in: oops(O+)
CPU: 0 PID: 53 Comm: insmod Tainted: G           O     4.16.0 #6
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.11.0-0-g63451fca13-prebuilt.qemu-project.org 04/01/2014
RIP: 0010:myinit+0x18/0x30 [oops]
RSP: 0018:ffffc900000d3cb0 EFLAGS: 00000282
RAX: 000000000000000b RBX: ffffffffc0000000 RCX: ffffffff81e3e3a8
RDX: 0000000000000001 RSI: 0000000000000086 RDI: ffffffffc0001033
RBP: ffffc900000d3e30 R08: 69796d2073706f6f R09: 000000000000013b
R10: ffffea0000373280 R11: ffffffff822d8b2d R12: 0000000000000000
R13: ffffffffc0002050 R14: ffffffffc0002000 R15: ffff88000dc934c8
FS:  00007ffff7ff66a0(0000) GS:ffff88000fc00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 000000000dcd2000 CR4: 00000000000006f0
Call Trace:
 do_one_initcall+0x3e/0x170
 do_init_module+0x5b/0x210
 load_module+0x2035/0x29d0
 ? SyS_finit_module+0xa8/0xb0
 SyS_finit_module+0xa8/0xb0
 do_syscall_64+0x6f/0x310
 ? trace_hardirqs_off_thunk+0x1a/0x32
 entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x7ffff7b36206
RSP: 002b:00007fffffffeb78 EFLAGS: 00000206 ORIG_RAX: 0000000000000139
RAX: ffffffffffffffda RBX: 000000000000005c RCX: 00007ffff7b36206
RDX: 0000000000000000 RSI: 000000000069e010 RDI: 0000000000000003
RBP: 000000000069e010 R08: 00007ffff7ddd320 R09: 0000000000000000
R10: 00007ffff7ddd320 R11: 0000000000000206 R12: 0000000000000003
R13: 00007fffffffef4b R14: 0000000000000000 R15: 0000000000000000
Code: <c7> 04 25 00 00 00 00 00 00 00 00 e8 b2 33 09 c1 31 c0 c3 0f 1f 44
RIP: myinit+0x18/0x30 [oops] RSP: ffffc900000d3cb0
CR2: 0000000000000000
---[ end trace 3cdb4e9d9842b503 ]---

To find the line that oopsed, look at the RIP register:

RIP: 0010:myinit+0x18/0x30 [oops]

and then on GDB:

./run-gdb

run

info line *(myinit+0x18)

which gives us the correct line:

Line 7 of "/root/linux-kernel-module-cheat/out/kernel_modules/x86_64/kernel_modules/panic.c" starts at address 0xbf00001c <myinit+28> and ends at 0xbf00002c <myexit>.

This did not work on arm due to GDB step debug kernel module insmodded by init on ARM, so we need to either:

The dump_stack function produces a stack trace much like panic and oops, but causes no problems and we return to the normal control flow, and can cleanly remove the module afterwards:

insmod dump_stack.ko

The WARN_ON macro basically just calls dump_stack.

One extra side effect is that we can make it also panic with:

echo 1 > /proc/sys/kernel/panic_on_warn
insmod warn_on.ko

Can also be activated with the panic_on_warn boot parameter.

Let’s learn how to diagnose problems with the root filesystem not being found. TODO add a sample panic error message for each error type:

This is the diagnosis procedure:

  • does the filesystem appear on the list of filesystems? If not, then likely you are missing either:

    • the driver for that hardware type, e.g. hard drive / SSD type. Here, Linux does not know how to communicate with a given hardware to get bytes from it at all. In simulation, the most important often-missing one is virtio, which needs:

      CONFIG_VIRTIO_PCI=y
      CONFIG_VIRTIO_BLK=y
    • the driver for that filesystem type. Here, Linux can read bytes from the hardware, but cannot interpret them as a tree of files because it does not recognize the file system format. For example, to boot from [squashfs] we would need:

      CONFIG_SQUASHFS=y
  • if your filesystem of interest does appear in the list, then you just need to set the root command line parameter to point to it, e.g. root=/dev/sda

Pseudo filesystems are filesystems that don’t represent actual files in a hard disk, but rather allow us to do special operations on filesystem-related system calls.

What each pseudo-file does for each related system call is defined by its File operations.

Bibliography:

Debugfs is the simplest pseudo filesystem to play around with:

./debugfs.sh
echo $?

Outcome: the test passes:

0

Sources:

Debugfs is made specifically to help test kernel stuff. Just mount, set File operations, and we are done.

For this reason, it is the filesystem that we use whenever possible in our tests.

debugfs.sh explicitly mounts a debugfs at a custom location, but the most common mount point is /sys/kernel/debug.

This mount is not done automatically by the kernel however: we, like most distros, do it from userland with our fstab.

Debugfs support requires the kernel to be compiled with CONFIG_DEBUG_FS=y.
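
A hedged sketch of the debugfs API usage, with hypothetical names and trivial fops (the repo's debugfs example does more):

#include <linux/debugfs.h>
#include <linux/module.h>

static struct dentry *dir;

static const struct file_operations fops = {
    .owner = THIS_MODULE,
    /* .read, .write, etc. would go here. */
};

static int __init myinit(void)
{
    /* Creates <debugfs mount>/lkmc_example/myfile. */
    dir = debugfs_create_dir("lkmc_example", NULL);
    debugfs_create_file("myfile", 0666, dir, NULL, &fops);
    return 0;
}

static void __exit myexit(void)
{
    debugfs_remove_recursive(dir);
}

module_init(myinit);
module_exit(myexit);
MODULE_LICENSE("GPL");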

Only the more basic file operations can be implemented in debugfs, e.g. mmap never gets called:

Procfs is just another fops entry point:

./procfs.sh
echo $?

Outcome: the test passes:

0

Procfs is a little less convenient than debugfs, but is more used in serious applications.

Procfs can run all system calls, including ones that debugfs can’t, e.g. mmap.
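
The registration side is just a different call, sketched here with a hypothetical entry name against kernels up to v5.4, where proc_create still takes a struct file_operations (v5.6 switched it to struct proc_ops):

#include <linux/proc_fs.h>

static struct proc_dir_entry *entry;

/* fops: the same kind of struct file_operations as in the debugfs sketch. */
static int myinit(void)
{
	entry = proc_create("lkmc_sketch", 0444, NULL, &fops);
	return entry ? 0 : -ENOMEM;
}

static void myexit(void)
{
	proc_remove(entry);
}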

Sources:

Bibliography:

Its data is shared with uname(), which is a POSIX C function and has a Linux syscall to back it up.

Where the data comes from and how to modify it:

In this repo, to avoid leaking host information and to make builds more reproducible, we are setting:

  • user and date to dummy values with KBUILD_BUILD_USER and KBUILD_BUILD_TIMESTAMP

  • hostname to the kernel git commit with KBUILD_BUILD_HOST and KBUILD_BUILD_VERSION

A sample result is:

Linux version 4.19.0-dirty (lkmc@84df9525b0c27f3ebc2ebb1864fa62a97fdedb7d) (gcc version 6.4.0 (Buildroot 2018.05-00002-gbc60382b8f)) #1 SMP Thu Jan 1 00:00:00 UTC 1970

Sysfs is more restricted than procfs, as it does not take an arbitrary file_operations:

./sysfs.sh
echo $?

Outcome: the test passes:

0

Sources:

Vs procfs:

You basically can only do open, close, read, write, and lseek on sysfs files.

It is similar to a seq_file file operation, except that write is also implemented.

TODO: what are those kobject structs? Make a more complex example that shows what they can do.
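
As a partial answer, a minimal sketch of the usual pattern, with hypothetical names: create a kobject under /sys/kernel and hang show callbacks off it:

#include <linux/kobject.h>
#include <linux/module.h>
#include <linux/sysfs.h>

static struct kobject *kobj;

static ssize_t foo_show(struct kobject *k, struct kobj_attribute *attr,
		char *buf)
{
	/* sysfs hands us a full page: just sprintf into it. */
	return sprintf(buf, "%d\n", 42);
}

static struct kobj_attribute foo_attr = __ATTR(foo, 0444, foo_show, NULL);

static int myinit(void)
{
	/* Shows up as /sys/kernel/lkmc_sketch/foo. */
	kobj = kobject_create_and_add("lkmc_sketch", kernel_kobj);
	if (!kobj)
		return -ENOMEM;
	return sysfs_create_file(kobj, &foo_attr.attr);
}

static void myexit(void)
{
	kobject_put(kobj);
}

module_init(myinit);
module_exit(myexit);
MODULE_LICENSE("GPL");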

Bibliography:

Character devices can have arbitrary File operations associated to them:

./character_device.sh
echo $?

Outcome: the test passes:

0

Sources:

Unlike procfs entries, character device files are created with the userland mknod or mknodat syscalls:

mknod </dev/path_to_dev> c <major> <minor>

Intuitively, for physical devices like keyboards, the major number maps to which driver, and the minor number maps to which device it is.

A single driver can drive multiple compatible devices.

The major and minor numbers can be observed with:

ls -l /dev/urandom

Output:

crw-rw-rw-    1 root     root        1,   9 Jun 29 05:45 /dev/urandom

which means:

  • c (first letter): this is a character device. Would be b for a block device.

  • 1, 9: the major number is 1, and the minor 9

To avoid device number conflicts when registering the driver we:

  • ask the kernel to allocate a free major number for us with: register_chrdev(0

  • find out which number was assigned by grepping /proc/devices for the kernel module name
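
That registration is roughly this sketch (hypothetical device name):

#include <linux/fs.h>

static int major;

/* fops: a struct file_operations as in the pseudo filesystem examples. */
static int myinit(void)
{
	/* First argument 0: ask the kernel to allocate a free major for us. */
	major = register_chrdev(0, "lkmc_chardev", &fops);
	return major < 0 ? major : 0;
}

static void myexit(void)
{
	unregister_chrdev(major, "lkmc_chardev");
}

and then in the guest, after grepping the assigned major out of /proc/devices, e.g.: mknod /dev/lkmc_chardev c <major> 0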

File operations are the main method of userland driver communication. struct file_operations determines what the kernel will do on filesystem system calls of Pseudo filesystems.

This example illustrates the most basic system calls: open, read, write, close and lseek:

./fops.sh
echo $?

Outcome: the test passes:

0

Sources:

Then give this a try:

sh -x ./fops.sh

We have put printks on each fop, so this allows you to see which system calls are being made for each command.

Writing trivial read File operations is repetitive and error prone. The seq_file API makes the process much easier for those trivial cases:

./seq_file.sh
echo $?

Outcome: the test passes:

0

Sources:

In this example we create a debugfs file that behaves just like a file that contains:

0
1
2

However, we only store a single integer in memory and calculate the file on the fly in an iterator fashion.
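
The iterator is just four callbacks; a minimal sketch with hypothetical names, which produces 0, 1, 2 from the position counter alone:

#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/seq_file.h>

static void *my_start(struct seq_file *s, loff_t *pos)
{
	/* Called at the start of every read: first item, or NULL at EOF. */
	return (*pos < 3) ? pos : NULL;
}

static void *my_next(struct seq_file *s, void *v, loff_t *pos)
{
	(*pos)++;
	return (*pos < 3) ? pos : NULL;
}

static void my_stop(struct seq_file *s, void *v)
{
}

static int my_show(struct seq_file *s, void *v)
{
	seq_printf(s, "%lld\n", (long long)*(loff_t *)v);
	return 0;
}

static const struct seq_operations my_seq_ops = {
	.start = my_start,
	.next  = my_next,
	.stop  = my_stop,
	.show  = my_show,
};

static int my_open(struct inode *inode, struct file *file)
{
	return seq_open(file, &my_seq_ops);
}

The file_operations then just forward .read to seq_read, .llseek to seq_lseek and .release to seq_release.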

Bibliography:

If you have the entire read output upfront, single_open is an even more convenient version of seq_file:

./seq_file.sh
echo $?

Outcome: the test passes:

0

Sources:

This example produces a debugfs file that behaves like a file that contains:

ab
cd
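
A minimal sketch with hypothetical names: single_open turns one show callback into the whole file:

#include <linux/fs.h>
#include <linux/seq_file.h>

static int my_show(struct seq_file *s, void *v)
{
	seq_puts(s, "ab\ncd\n");
	return 0;
}

static int my_open(struct inode *inode, struct file *file)
{
	return single_open(file, my_show, NULL);
}

/* In the file_operations: .open = my_open, .read = seq_read,
 * .llseek = seq_lseek, .release = single_release. */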

The poll system call allows a user process to do a non-busy wait on a kernel event:

./poll.sh

Outcome: jiffies gets printed to stdout every second from userland.

Sources:

Typically, we are waiting for some hardware to make some piece of data available to the kernel.

The hardware notifies the kernel that the data is ready with an interrupt.

To simplify this example, we just fake the hardware interrupts with a kthread that sleeps for a second in an infinite loop.
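
The kernel side of the poll fop then boils down to this sketch (hypothetical names; newer kernels spell the return type __poll_t and the flags EPOLLIN / EPOLLRDNORM):

#include <linux/fs.h>
#include <linux/poll.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(wq);
static int data_ready;

static unsigned int my_poll(struct file *filp, poll_table *wait)
{
	/* Does not block: just registers filp on the queue. */
	poll_wait(filp, &wq, wait);
	if (data_ready)
		return POLLIN | POLLRDNORM;
	return 0;
}

/* The fake "interrupt" (our kthread) then does: */
/*     data_ready = 1; */
/*     wake_up(&wq); */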

The ioctl system call is the best way to pass an arbitrary number of parameters to the kernel in a single go:

./ioctl.sh
echo $?

Outcome: the test passes:

0

Sources:

ioctl is one of the most important methods of communication with real device drivers, which often take several fields as input.

ioctl takes as input:

  • an integer request: it usually identifies what type of operation we want to do on this call

  • an untyped pointer to memory: can be anything, but is typically a pointer to a struct

    The type of the struct often depends on the request input

    This struct is defined in a uapi-style C header that is used both to compile the kernel module and the userland executable.

    The fields of this struct can be thought of as arbitrary input parameters.

And the output is:

  • an integer return value. man ioctl documents:

    Usually, on success zero is returned. A few ioctl() requests use the return value as an output parameter and return a nonnegative value on success. On error, -1 is returned, and errno is set appropriately.

  • the input pointer data may be overwritten to contain arbitrary output
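
Putting the above together, a minimal sketch with hypothetical names: a uapi-style header defines the request number and struct, and the .unlocked_ioctl fop copies the struct in and out:

#include <linux/fs.h>
#include <linux/ioctl.h>
#include <linux/uaccess.h>

/* Shared uapi-style header part. */
struct lkmc_ioctl_args {
	int input;
	int output;
};
#define LKMC_IOCTL_MAGIC 0x33
#define LKMC_IOCTL_INC _IOWR(LKMC_IOCTL_MAGIC, 0, struct lkmc_ioctl_args)

/* Kernel side: .unlocked_ioctl in the file_operations. */
static long my_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	struct lkmc_ioctl_args args;

	switch (cmd) {
	case LKMC_IOCTL_INC:
		if (copy_from_user(&args, (void __user *)arg, sizeof(args)))
			return -EFAULT;
		args.output = args.input + 1;
		if (copy_to_user((void __user *)arg, &args, sizeof(args)))
			return -EFAULT;
		return 0;
	default:
		return -ENOTTY;
	}
}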

Bibliography:

The mmap system call allows us to share memory between user and kernel space without copying:

./mmap.sh
echo $?

Outcome: the test passes:

0

Sources:

In this example, we make a tiny 4 byte kernel buffer available to user-space, and we then modify it on userspace, and check that the kernel can see the modification.

mmap, like most more complex File operations, does not work with debugfs as of 4.9, so we use a procfs file for it.
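
The mmap fop itself can be sketched as follows, assuming buffer is a page-aligned kernel buffer, e.g. from get_zeroed_page (hypothetical names):

#include <linux/fs.h>
#include <linux/io.h>
#include <linux/mm.h>

static void *buffer;

static int my_mmap(struct file *filp, struct vm_area_struct *vma)
{
	/* Map the physical page(s) backing our buffer into the calling
	 * process: no copies are ever made afterwards. */
	return remap_pfn_range(vma, vma->vm_start,
			virt_to_phys(buffer) >> PAGE_SHIFT,
			vma->vm_end - vma->vm_start,
			vma->vm_page_prot);
}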

Bibliography:

Anonymous inodes allow getting multiple file descriptors from a single filesystem entry, which reduces namespace pollution compared to creating multiple device files:

./anonymous_inode.sh
echo $?

Outcome: the test passes:

0

Sources:

This example gets an anonymous inode via ioctl from a debugfs entry by using anon_inode_getfd.

Reads to that inode return the sequence: 1, 10, 100, …​ 10000000, 1, 100, …​

Netlink sockets offer a socket API for kernel / userland communication:

./netlink.sh
echo $?

Outcome: the test passes:

0

Sources:

Launch multiple user requests in parallel to stress our socket:

insmod netlink.ko sleep=1
for i in `seq 16`; do ./netlink.out & done

Bibliography:

Kernel threads are managed exactly like userland threads; they also have a backing task_struct, and are scheduled with the same mechanism:

insmod kthread.ko

Outcome: dmesg counts from 0 to 9 once every second infinitely many times:

0
1
2
...
8
9
0
1
2
...

The count stops when we rmmod:

rmmod kthread

The sleep is done with usleep_range, see: Section 15.10.2, “sleep”.
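
The pattern behind the module is roughly this sketch (hypothetical names):

#include <linux/delay.h>
#include <linux/err.h>
#include <linux/kthread.h>

static struct task_struct *task;

static int work_func(void *data)
{
	int i = 0;

	while (!kthread_should_stop()) {
		pr_info("%d\n", i);
		i = (i + 1) % 10;
		/* usleep_range takes microseconds: sleep about one second. */
		usleep_range(1000000, 1000001);
	}
	return 0;
}

static int myinit(void)
{
	task = kthread_run(work_func, NULL, "lkmc_kthread");
	return IS_ERR(task) ? PTR_ERR(task) : 0;
}

static void myexit(void)
{
	/* Blocks until work_func observes kthread_should_stop and returns. */
	kthread_stop(task);
}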

Bibliography:

Let’s launch two threads and see if they actually run in parallel:

insmod kthreads.ko

Outcome: two threads count to dmesg from 0 to 9 in parallel.

Each line has output of form:

<thread_id> <count>

Possible very likely outcome:

1 0
2 0
1 1
2 1
1 2
2 2
1 3
2 3

The threads almost always interleave nicely, thus confirming that they are actually running in parallel.

Count to dmesg every one second from 0 up to n - 1:

insmod sleep.ko n=5

The sleep is done with a call to usleep_range directly inside module_init for simplicity.

Bibliography:

A more convenient front-end for kthread:

insmod workqueue_cheat.ko

Outcome: count from 0 to 9 infinitely many times

Stop counting:

rmmod workqueue_cheat

The workqueue thread is killed after the worker function returns.

We can’t call the module just workqueue.c because there is already a built-in with that name: https://unix.stackexchange.com/questions/364956/how-can-insmod-fail-with-kernel-module-is-already-loaded-even-is-lsmod-does-not
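
A one-shot variant of the pattern can be sketched as follows (hypothetical names): there is no task_struct to manage, just a work item handed to the shared kernel worker threads:

#include <linux/kernel.h>
#include <linux/workqueue.h>

static struct work_struct work;

static void work_func(struct work_struct *w)
{
	int i;

	for (i = 0; i < 10; i++)
		pr_info("%d\n", i);
}

static int myinit(void)
{
	INIT_WORK(&work, work_func);
	/* Queue it on the shared system workqueue. */
	schedule_work(&work);
	return 0;
}

static void myexit(void)
{
	/* Wait for a possibly still running work_func to finish. */
	flush_work(&work);
}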

Count from 0 to 9 every second infinitely many times by scheduling a new work item from a work item:

insmod work_from_work.ko

Stop:

rmmod work_from_work

The sleep is done indirectly through: queue_delayed_work, which waits the specified time before scheduling the work.

Let’s block the entire kernel! Yay:

./run --eval-after 'dmesg -n 1;insmod schedule.ko schedule=0'

Outcome: the system hangs, the only way out is to kill the VM.

kthreads only allow interrupting if you call schedule(), and the schedule=0 kernel module parameter turns it off.

Sleep functions like usleep_range also end up calling schedule.

If we allow schedule() to be called, then the system becomes responsive:

./run --eval-after 'dmesg -n 1;insmod schedule.ko schedule=1'

and we can observe the counting with:

dmesg -w

The system also responds if we add another core:

./run --cpus 2 --eval-after 'dmesg -n 1;insmod schedule.ko schedule=0'

Wait queues are a way to make a thread sleep until an event happens on the queue:

insmod wait_queue.ko

Dmesg output:

0 0
1 0
2 0
# Wait one second.
0 1
1 1
2 1
# Wait one second.
0 2
1 2
2 2
...

Stop the count:

rmmod wait_queue

This example launches three threads:

  • one thread generates events every second with wake_up (see the sketch after this list)

  • the other two threads wait for that with wait_event, and print a dmesg when it happens.

    The wait_event macro works a bit like:

    while (!cond)
        sleep_until_event
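
The two sides can be sketched like this (hypothetical names):

#include <linux/kernel.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(wq);
static int event_count;

/* Producer thread, once a second: */
static void produce(void)
{
	event_count++;
	wake_up(&wq);
}

/* Each consumer thread: */
static void consume(void)
{
	int seen = 0;

	for (;;) {
		/* Sleeps until woken up with the condition true. */
		wait_event(wq, event_count > seen);
		seen = event_count;
		pr_info("%d\n", seen);
	}
}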

Count from 0 to 9 infinitely many times in 1 second intervals using timers:

insmod timer.ko

Stop counting:

rmmod timer

Timers are callbacks that run when an interrupt happens, from the interrupt context itself.

Therefore they produce more accurate timing than thread scheduling, which is more complex, but you can’t do too much work inside of them.
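
The self-re-arming pattern is roughly this sketch, assuming a kernel from around v4.15 on, where timer_setup replaced the older setup_timer API (hypothetical names):

#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/timer.h>

static struct timer_list mytimer;

static void timer_cb(struct timer_list *t)
{
	pr_info("tick\n");
	/* Re-arm ourselves: fire again one second from now. */
	mod_timer(&mytimer, jiffies + HZ);
}

static int myinit(void)
{
	timer_setup(&mytimer, timer_cb, 0);
	mod_timer(&mytimer, jiffies + HZ);
	return 0;
}

static void myexit(void)
{
	/* Also waits for a running callback to finish on other CPUs. */
	del_timer_sync(&mytimer);
}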

Bibliography:

Brute force monitor every shared interrupt that will accept us:

./run --eval-after 'insmod irq.ko' --graphic

Now try the following:

  • press a keyboard key and then release it after a few seconds

  • press a mouse key, and release it after a few seconds

  • move the mouse around

Outcome: dmesg shows which IRQ was fired for each action through messages of type:

handler irq = 1 dev = 250

dev is the character device for the module and never changes, as can be confirmed by:

grep lkmc_irq /proc/devices

The IRQs that we observe are:

  • 1 for keyboard press and release.

    If you hold the key down for a while, it starts firing at a constant rate. So this happens at the hardware level!

  • 12 mouse actions

This only works for IRQs whose other handlers were registered as IRQF_SHARED.

We can see which ones are those, either via dmesg messages of type:

genirq: Flags mismatch irq 0. 00000080 (myirqhandler0) vs. 00015a00 (timer)
request_irq irq = 0 ret = -16
request_irq irq = 1 ret = 0

which indicate that 0 is not, but 1 is, or with:

cat /proc/interrupts

which shows:

  0:         31   IO-APIC   2-edge      timer
  1:          9   IO-APIC   1-edge      i8042, myirqhandler0

so only 1 has myirqhandler0 attached but not 0.
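
The registration loop behind those messages is roughly this sketch (hypothetical names; note that -16 above is -EBUSY, which is what request_irq returns when sharing is refused):

#include <linux/interrupt.h>

#define NUM_IRQS 32

static int dev_id_token;
static int registered[NUM_IRQS];

static irqreturn_t my_handler(int irq, void *dev_id)
{
	pr_info("handler irq = %d\n", irq);
	/* We only snoop: let the real handler do the actual work. */
	return IRQ_NONE;
}

static int myinit(void)
{
	int irq;

	for (irq = 0; irq < NUM_IRQS; irq++) {
		/* Only succeeds if the existing handlers used IRQF_SHARED too. */
		int ret = request_irq(irq, my_handler, IRQF_SHARED,
				"myirqhandler0", &dev_id_token);
		pr_info("request_irq irq = %d ret = %d\n", irq, ret);
		registered[irq] = (ret == 0);
	}
	return 0;
}

static void myexit(void)
{
	int irq;

	for (irq = 0; irq < NUM_IRQS; irq++)
		if (registered[irq])
			free_irq(irq, &dev_id_token);
}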

The QEMU monitor also has some interrupt statistics for x86_64:

./qemu-monitor info irq

TODO: properly understand how each IRQ maps to what number.

The Linux kernel v4.16 mainline also has a dummy-irq module at drivers/misc/dummy-irq.c for monitoring a single IRQ.

We build it by default with:

CONFIG_DUMMY_IRQ=m

And then you can do

./run --graphic

and in guest:

modprobe dummy-irq irq=1

Outcome: when you click a key on the keyboard, dmesg shows:

dummy-irq: interrupt occurred on IRQ 1

However, this module is intended to fire only once as can be seen from its source:

    static int count = 0;

    if (count == 0) {
        printk(KERN_INFO "dummy-irq: interrupt occurred on IRQ %d\n",
            irq);
        count++;
    }

and furthermore interrupts 1 and 12 fire immediately. TODO why, were they somehow pending?

So to see something interesting, you need to monitor an interrupt that is rarer than the keyboard, e.g. platform_device.

In the guest with QEMU graphic mode:

watch -n 1 cat /proc/interrupts

Then see how clicking the mouse and keyboard affect the interrupt counts.

This confirms that:

  • 1: keyboard

  • 12: mouse click and drags

The module also shows which handlers are registered for each IRQ, as we have observed at irq.ko

When in text mode, we can also observe interrupt line 4 with handler ttyS0 increase continuously as IO goes through the UART.

Convert a virtual address to physical:

insmod virt_to_phys.ko
cat /sys/kernel/debug/lkmc_virt_to_phys

Sample output:

*kmalloc_ptr = 0x12345678
kmalloc_ptr = ffff88000e169ae8
virt_to_phys(kmalloc_ptr) = 0xe169ae8
static_var = 0x12345678
&static_var = ffffffffc0002308
virt_to_phys(&static_var) = 0x40002308

We can confirm that the kmalloc_ptr translation worked with:

./qemu-monitor 'xp 0xe169ae8'

which reads four bytes from a given physical address, and gives the expected:

000000000e169ae8: 0x12345678

TODO it only works for kmalloc however, for the static variable:

./qemu-monitor 'xp 0x40002308'

it gave a wrong value of 00000000.

Bibliography:

Only tested in x86_64.

The Linux kernel exposes physical addresses to userland through:

  • /proc/<pid>/maps

  • /proc/<pid>/pagemap

  • /dev/mem

In this section we will play with them.

First get a virtual address to play with:

./posix/virt_to_phys_test.out &

Sample output:

vaddr 0x600800
pid 110

The program:

  • allocates a volatile variable and sets its value to 0x12345678

  • prints the virtual address of the variable, and the program PID

  • runs a while loop until the value of the variable gets mysteriously changed somehow, e.g. by nasty tinkerers like us

Then, translate the virtual address to physical using /proc/<pid>/maps and /proc/<pid>/pagemap:

./linux/virt_to_phys_user.out 110 0x600800

Sample output physical address:

0x7c7b800

Now we can verify that linux/virt_to_phys_user.out gave the correct physical address in the following ways:

Bibliography:

The xp QEMU monitor command reads memory at a given physical address.

First launch linux/virt_to_phys_user.out as described at Userland physical address experiments.

On a second terminal, use QEMU to read the physical address:

./qemu-monitor 'xp 0x7c7b800'

Output:

0000000007c7b800: 0x12345678

Yes!!! We read the correct value from the physical address.

We could not find however how to write to memory from the QEMU monitor, boring.

/dev/mem exposes access to physical addresses, and we use it through the convenient devmem BusyBox utility.

First launch linux/virt_to_phys_user.out as described at Userland physical address experiments.

Next, read from the physical address:

devmem 0x7c7b800

Possible output:

Memory mapped at address 0x7ff7dbe01000.
Value at address 0X7C7B800 (0x7ff7dbe01800): 0x12345678

which shows that the physical memory contains the expected value 0x12345678.

0x7ff7dbe01000 is a new virtual address that devmem maps to the physical address to be able to read from it.

Modify the physical memory:

devmem 0x7c7b800 w 0x9abcdef0

After one second, we see on the screen:

i 9abcdef0
[1]+  Done                       ./posix/virt_to_phys_test.out

so the value changed, and the while loop exited!

This example requires:

  • CONFIG_STRICT_DEVMEM=n, otherwise devmem fails with:

    devmem: mmap: Operation not permitted
  • nopat kernel parameter

which we set by default.

Dump the physical address of all pages mapped to a given process using /proc/<pid>/maps and /proc/<pid>/pagemap.

First launch linux/virt_to_phys_user.out as described at Userland physical address experiments. Suppose that the output was:

# ./posix/virt_to_phys_test.out &
vaddr 0x601048
pid 63
# ./linux/virt_to_phys_user.out 63 0x601048
0x1a61048

Now obtain the page map for the process:

./linux/pagemap_dump.out 63

Sample output excerpt:

vaddr pfn soft-dirty file/shared swapped present library
400000 1ede 0 1 0 1 ./posix/virt_to_phys_test.out
600000 1a6f 0 0 0 1 ./posix/virt_to_phys_test.out
601000 1a61 0 0 0 1 ./posix/virt_to_phys_test.out
602000 2208 0 0 0 1 [heap]
603000 220b 0 0 0 1 [heap]
7ffff78ec000 1fd4 0 1 0 1 /lib/libuClibc-1.0.30.so

Meaning of the flags:

  • vaddr: first virtual address of a page that belongs to the process. Notably:

    ./run-toolchain readelf -- -l "$(./getvar userland_build_dir)/posix/virt_to_phys_test.out"

    contains:

      Type           Offset             VirtAddr           PhysAddr
                     FileSiz            MemSiz              Flags  Align
    ...
      LOAD           0x0000000000000000 0x0000000000400000 0x0000000000400000
                     0x000000000000075c 0x000000000000075c  R E    0x200000
      LOAD           0x0000000000000e98 0x0000000000600e98 0x0000000000600e98
                     0x00000000000001b4 0x0000000000000218  RW     0x200000
    
     Section to Segment mapping:
      Segment Sections...
    ...
       02     .interp .hash .dynsym .dynstr .rela.plt .init .plt .text .fini .rodata .eh_frame_hdr .eh_frame
       03     .ctors .dtors .jcr .dynamic .got.plt .data .bss

    from which we deduce that:

    • 400000 is the text segment

    • 600000 is the data segment

  • pfn: add three zeroes to it, and you have the physical address.

    Three zeroes is 12 bits which is 4kB, which is the size of a page.

    For example, the virtual address 0x601000 has pfn of 0x1a61, which means that its physical address is 0x1a61000

    This is consistent with what linux/virt_to_phys_user.out told us: the virtual address 0x601048 has physical address 0x1a61048.

    048 corresponds to the three last zeroes, and is the offset within the page.

    Also, this value falls inside 0x601000, which as previously analyzed is the data section, which is the normal location for global variables such as ours.

  • soft-dirty: TODO

  • file/shared: TODO. 1 seems to indicate that the page can be shared across processes, possibly for read-only pages? E.g. the text segment has 1, but the data has 0.

  • swapped: TODO swapped to disk?

  • present: TODO vs swapped?

  • library: which executable owns that page

This program works in two steps (see the sketch after this list):

  • parse the human readable lines from /proc/<pid>/maps. This file contains lines of the form:

    7ffff7b6d000-7ffff7bdd000 r-xp 00000000 fe:00 658                        /lib/libuClibc-1.0.22.so

    which tells us that:

    • 7ffff7b6d000-7ffff7bdd000 is a virtual address range that belongs to the process, possibly containing multiple pages.

    • /lib/libuClibc-1.0.22.so is the name of the library that owns that memory

  • loop over each page of each address range, and ask /proc/<pid>/pagemap for more information about that page, including the physical address
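
The pagemap lookup for a single address can be sketched in userland C as follows (hypothetical function name; each 64-bit entry holds the PFN in bits 0-54 and the present flag in bit 63):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int virt_to_phys_user(uint64_t *paddr, pid_t pid, uint64_t vaddr)
{
	char path[64];
	uint64_t entry, page = (uint64_t)sysconf(_SC_PAGE_SIZE);
	int fd;

	snprintf(path, sizeof(path), "/proc/%jd/pagemap", (intmax_t)pid);
	fd = open(path, O_RDONLY);
	if (fd < 0)
		return -1;
	/* One 64-bit entry per virtual page, indexed by page number. */
	if (pread(fd, &entry, sizeof(entry),
			(vaddr / page) * sizeof(entry)) != (ssize_t)sizeof(entry)) {
		close(fd);
		return -1;
	}
	close(fd);
	if (!(entry & (1ULL << 63)))
		return -1; /* page not present */
	*paddr = (entry & ((1ULL << 55) - 1)) * page + vaddr % page;
	return 0;
}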

Good overviews:

I hope to have examples of all methods some day, since I’m obsessed with visibility.

Logs proc events such as process creation to a netlink socket.

We then have a userland program that listens to the events and prints them out:

# ./linux/proc_events.out &
# set mcast listen ok
# sleep 2 & sleep 1
fork: parent tid=48 pid=48 -> child tid=79 pid=79
fork: parent tid=48 pid=48 -> child tid=80 pid=80
exec: tid=80 pid=80
exec: tid=79 pid=79
# exit: tid=80 pid=80 exit_code=0
exit: tid=79 pid=79 exit_code=0
echo a
a
#

TODO: why exit: tid=79 shows after exit: tid=80?

Note how echo a is a Bash built-in, and therefore does not spawn a new process.

TODO: why does this produce no output?

./linux/proc_events.out >f &

TODO can you get process data such as UID and process arguments? It seems not since exec_proc_event contains so little data: https://github.com/torvalds/linux/blob/v4.16/include/uapi/linux/cn_proc.h#L80 We could try to immediately read it from /proc, but there is a risk that the process finished and another one took its PID, so it wouldn’t be reliable.

0111ca406bdfa6fd65a2605d353583b4c4051781 was failing with:

>>> kernel_modules 1.0 Building
/usr/bin/make -j8 -C '/linux-kernel-module-cheat//out/aarch64/buildroot/build/kernel_modules-1.0/user' BR2_PACKAGE_OPENBLAS="" CC="/linux-kernel-module-cheat//out/aarch64/buildroot/host/bin/aarch64-buildroot-linux-uclibc-gcc" LD="/linux-kernel-module-cheat//out/aarch64/buildroot/host/bin/aarch64-buildroot-linux-uclibc-ld"
/linux-kernel-module-cheat//out/aarch64/buildroot/host/bin/aarch64-buildroot-linux-uclibc-gcc  -ggdb3 -fopenmp -O0 -std=c99 -Wall -Werror -Wextra -o 'proc_events.out' 'proc_events.c'
In file included from /linux-kernel-module-cheat//out/aarch64/buildroot/host/aarch64-buildroot-linux-uclibc/sysroot/usr/include/signal.h:329:0,
                 from proc_events.c:12:
/linux-kernel-module-cheat//out/aarch64/buildroot/host/aarch64-buildroot-linux-uclibc/sysroot/usr/include/sys/ucontext.h:50:16: error: field ‘uc_mcontext’ has incomplete type
     mcontext_t uc_mcontext;
                ^~~~~~~~~~~

so we commented it out.

Related threads:

If we try to naively update uclibc to 1.0.29 with buildroot_override, which contains the above mentioned patch, a clean aarch64 test build fails with:

../utils/ldd.c: In function 'elf_find_dynamic':
../utils/ldd.c:238:12: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
     return (void *)byteswap_to_host(dynp->d_un.d_val);
            ^
/tmp/user/20321/cciGScKB.o: In function `process_line_callback':
msgmerge.c:(.text+0x22): undefined reference to `escape'
/tmp/user/20321/cciGScKB.o: In function `process':
msgmerge.c:(.text+0xf6): undefined reference to `poparser_init'
msgmerge.c:(.text+0x11e): undefined reference to `poparser_feed_line'
msgmerge.c:(.text+0x128): undefined reference to `poparser_finish'
collect2: error: ld returned 1 exit status
Makefile.in:120: recipe for target '../utils/msgmerge.host' failed
make[2]: *** [../utils/msgmerge.host] Error 1
make[2]: *** Waiting for unfinished jobs....
/tmp/user/20321/ccF8V8jF.o: In function `process':
msgfmt.c:(.text+0xbf3): undefined reference to `poparser_init'
msgfmt.c:(.text+0xc1f): undefined reference to `poparser_feed_line'
msgfmt.c:(.text+0xc2b): undefined reference to `poparser_finish'
collect2: error: ld returned 1 exit status
Makefile.in:120: recipe for target '../utils/msgfmt.host' failed
make[2]: *** [../utils/msgfmt.host] Error 1
package/pkg-generic.mk:227: recipe for target '/data/git/linux-kernel-module-cheat/out/aarch64/buildroot/build/uclibc-custom/.stamp_built' failed
make[1]: *** [/data/git/linux-kernel-module-cheat/out/aarch64/buildroot/build/uclibc-custom/.stamp_built] Error 2
Makefile:79: recipe for target '_all' failed
make: *** [_all] Error 2

Buildroot master has already moved to uclibc 1.0.29 at f8546e836784c17aa26970f6345db9d515411700, but it is not yet in any tag…​ so I’m not tempted to update it yet just for this.

Trace a single function:

cd /sys/kernel/debug/tracing/

# Stop tracing.
echo 0 > tracing_on

# Clear previous trace.
echo > trace

# List the available tracers, and pick one.
cat available_tracers
echo function > current_tracer

# List all functions that can be traced
# cat available_filter_functions
# Choose one.
echo __kmalloc > set_ftrace_filter
# Confirm that only __kmalloc is enabled.
cat enabled_functions

echo 1 > tracing_on

# Latest events.
head trace

# Observe trace continuously, and drain seen events out.
cat trace_pipe &

Sample output:

# tracer: function
#
# entries-in-buffer/entries-written: 97/97   #P:1
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
            head-228   [000] ....   825.534637: __kmalloc <-load_elf_phdrs
            head-228   [000] ....   825.534692: __kmalloc <-load_elf_binary
            head-228   [000] ....   825.534815: __kmalloc <-load_elf_phdrs
            head-228   [000] ....   825.550917: __kmalloc <-__seq_open_private
            head-228   [000] ....   825.550953: __kmalloc <-tracing_open
            head-229   [000] ....   826.756585: __kmalloc <-load_elf_phdrs
            head-229   [000] ....   826.756627: __kmalloc <-load_elf_binary
            head-229   [000] ....   826.756719: __kmalloc <-load_elf_phdrs
            head-229   [000] ....   826.773796: __kmalloc <-__seq_open_private
            head-229   [000] ....   826.773835: __kmalloc <-tracing_open
            head-230   [000] ....   827.174988: __kmalloc <-load_elf_phdrs
            head-230   [000] ....   827.175046: __kmalloc <-load_elf_binary
            head-230   [000] ....   827.175171: __kmalloc <-load_elf_phdrs

Trace all possible functions, and draw a call graph:

echo 1 > max_graph_depth
echo 1 > events/enable
echo function_graph > current_tracer

Sample output:

# CPU  DURATION                  FUNCTION CALLS
# |     |   |                     |   |   |   |
 0)   2.173 us    |                  } /* ntp_tick_length */
 0)               |                  timekeeping_update() {
 0)   4.176 us    |                    ntp_get_next_leap();
 0)   5.016 us    |                    update_vsyscall();
 0)               |                    raw_notifier_call_chain() {
 0)   2.241 us    |                      notifier_call_chain();
 0) + 19.879 us   |                    }
 0)   3.144 us    |                    update_fast_timekeeper();
 0)   2.738 us    |                    update_fast_timekeeper();
 0) ! 117.147 us  |                  }
 0)               |                  _raw_spin_unlock_irqrestore() {
 0)   4.045 us    |                    _raw_write_unlock_irqrestore();
 0) + 22.066 us   |                  }
 0) ! 265.278 us  |                } /* update_wall_time */

TODO: what do + and ! mean?

Each enable file under the events/ tree enables a certain set of functions: the higher up in the tree the enable is, the more functions it enables.

TODO example:

./build-buildroot --config 'BR2_PACKAGE_TRACE_CMD=y'

kprobes is an instrumentation mechanism that injects arbitrary code at a given address by replacing the instruction there with a trap instruction, much like GDB does for breakpoints. Oh, the good old kernel. :-)

./build-linux --config 'CONFIG_KPROBES=y'

Then on guest:

insmod kprobe_example.ko
sleep 4 & sleep 4 &

Outcome: dmesg outputs on every fork:

<_do_fork> pre_handler: p->addr = 0x00000000e1360063, ip = ffffffff810531d1, flags = 0x246
<_do_fork> post_handler: p->addr = 0x00000000e1360063, flags = 0x246
<_do_fork> pre_handler: p->addr = 0x00000000e1360063, ip = ffffffff810531d1, flags = 0x246
<_do_fork> post_handler: p->addr = 0x00000000e1360063, flags = 0x246

TODO: it does not work if I try to immediately launch sleep, why?

insmod kprobe_example.ko
sleep 4 & sleep 4 &

I don’t think your code can refer to the surrounding kernel code however: the only visible thing is the value of the registers.

You can then hack it up to read the stack and read argument values, but do you really want to?
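
The module is essentially this sketch (hypothetical names; note that _do_fork was later renamed to kernel_clone in v5.10):

#include <linux/kprobes.h>
#include <linux/module.h>

static struct kprobe kp = {
	.symbol_name = "_do_fork",
};

static int handler_pre(struct kprobe *p, struct pt_regs *regs)
{
	/* Runs just before the probed instruction. */
	pr_info("<%s> pre_handler: p->addr = %p\n", p->symbol_name, p->addr);
	return 0;
}

static int myinit(void)
{
	kp.pre_handler = handler_pre;
	return register_kprobe(&kp);
}

static void myexit(void)
{
	unregister_kprobe(&kp);
}

module_init(myinit);
module_exit(myexit);
MODULE_LICENSE("GPL");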

There is also a kprobes + ftrace based mechanism with CONFIG_KPROBE_EVENTS=y which does read the memory for us based on format strings that indicate type…​ https://github.com/torvalds/linux/blob/v4.16/Documentation/trace/kprobetrace.txt Horrendous. Used by: https://github.com/brendangregg/perf-tools/blob/98d42a2a1493d2d1c651a5c396e015d4f082eb20/execsnoop

Bibliography:

TODO: didn’t port during refactor after 3b0a343647bed577586989fb702b760bd280844a. Reimplementing should not be hard.

Results (boot not excluded) are shown at: Table 1, “Boot instruction counts for various setups”

Table 1. Boot instruction counts for various setups

Commit                                     Arch     Simulator             Instruction count
7228f75ac74c896417fb8c5ba3d375a14ed4d36b   arm      QEMU                  680k
7228f75ac74c896417fb8c5ba3d375a14ed4d36b   arm      gem5 AtomicSimpleCPU  160M
7228f75ac74c896417fb8c5ba3d375a14ed4d36b   arm      gem5 HPI              155M
7228f75ac74c896417fb8c5ba3d375a14ed4d36b   x86_64   QEMU                  3M
7228f75ac74c896417fb8c5ba3d375a14ed4d36b   x86_64   gem5 AtomicSimpleCPU  528M

QEMU:

./trace-boot --arch x86_64

sample output:

instructions 1833863
entry_address 0x1000000
instructions_firmware 20708

gem5:

./run --arch aarch64 --emulator gem5 --eval 'm5 exit'
# Or:
# ./run --arch aarch64 --emulator gem5 --eval 'm5 exit' -- --cpu-type=HPI --caches
./gem5-stat --arch aarch64 sim_insts

Notes:

  • 0x1000000 is the address where QEMU puts the Linux kernel with -kernel on x86.

    It can be found from:

    ./run-toolchain readelf -- -e "$(./getvar vmlinux)" | grep Entry

    TODO confirm further. If I try to break there with:

    ./run-gdb *0x1000000

    it breaks, but there is no corresponding source line. Also note that this line is not actually the first line, since the kernel messages such as early console in extract_kernel have already shown on screen at that point. This does not break at all:

    ./run-gdb extract_kernel

    It only appears once on every log I’ve seen so far, checked with grep 0x1000000 trace.txt

    Then when we count the instructions that run before the kernel entry point, there are only about 100k instructions, which is insignificant compared to the kernel boot itself.

    TODO --arch arm and --arch aarch64 does not count firmware instructions properly because the entry point address of the ELF file (ffffff8008080000 for aarch64) does not show up on the trace at all. Tested on f8c0502bb2680f2dbe7c1f3d7958f60265347005.

  • We can also discount the instructions after init runs by using readelf to get the initial address of init. One easy way to do that now is to just run:

    ./run-gdb --userland "$(./getvar userland_build_dir)/linux/poweroff.out" main

    And get that from the traces, e.g. if the address is 4003a0, then we search:

    grep -n 4003a0 trace.txt

    I have observed a single match for that instruction, so it must be the init, and there were only 20k instructions after it, so the impact is negligible.

  • to disable networking. Is replacing init enough?

    CONFIG_NET=n did not significantly reduce instruction counts, so maybe replacing init is enough.

  • gem5 simulates memory latencies. So I think that the CPU loops idle while waiting for memory, and counts will be higher.

Make it harder to get hacked and easier to notice that you were, at the cost of some (small?) runtime overhead.

Detects buffer overflows for us:

./build-linux --config 'CONFIG_FORTIFY_SOURCE=y' --linux-build-id fortify
./build-modules --clean
./build-modules
./build-buildroot
./run --eval-after 'insmod strlen_overflow.ko' --linux-build-id fortify

Possible dmesg output:

strlen_overflow: loading out-of-tree module taints kernel.
detected buffer overflow in strlen
------------[ cut here ]------------

followed by a trace.

You may not get this error because this depends on strlen overflowing at least until the next page: if a random \0 appears soon enough, it won’t blow up as desired.

TODO not always reproducible. Find a more reproducible failure. I could not observe it on:

insmod memcpy_overflow.ko

TODO get a hello world permission control working:

./build-linux \
  --config-fragment linux_config/selinux \
  --linux-build-id selinux \
;
./build-buildroot --config 'BR2_PACKAGE_REFPOLICY=y'
./run --enable-kvm --linux-build-id selinux

This builds:

After boot finishes, we see:

Starting auditd: mkdir: invalid option -- 'Z'

which comes from /etc/init.d/S01auditd, because BusyBox' mkdir does not have the crazy -Z option like Ubuntu. That’s amazing!

The kernel logs contain:

SELinux:  Initializing.

Inside the guest we now have:

getenforce

which initially says:

Disabled

TODO: if we try to enforce:

setenforce 1

it does not work and outputs:

setenforce: SELinux is disabled

SELinux requires glibc as mentioned at: [libc-choice].

But in part because it is dying, I didn’t spend much effort to integrate it into this repo, although it would be a good fit in principle, since it is essentially a virtualization method.

Maybe some brave soul will send a pull request one day.

UIO is a kernel subsystem that allows doing certain types of driver operations from userland.

This would be awesome to improve debuggability and safety of kernel modules.

VFIO looks like a newer and better UIO replacement, but there seem to be no examples of how to use it: https://stackoverflow.com/questions/49309162/interfacing-with-qemu-edu-device-via-userspace-i-o-uio-linux-driver

TODO get something interesting working. I currently don’t understand the behaviour very well.

TODO how to ACK interrupts? How to ensure that every interrupt gets handled separately?

TODO how to write to registers. Currently using /dev/mem and lspci.

This example should handle interrupts from userland and print a message to stdout:

./uio_read.sh

TODO: what is the expected behaviour? I should have documented this when I wrote this stuff, and I’m that lazy right now that I’m in the middle of a refactor :-)

UIO interface in a nutshell:

  • blocking read / poll: waits until interrupts

  • write: call irqcontrol callback. Default: 0 or 1 to enable / disable interrupts.

  • mmap: access device memory

Sources:

Bibliography:

Requires Graphics.

You can also try those on the Ctrl-Alt-F3 of your Ubuntu host, but it is much more fun inside a VM!

Stop the cursor from blinking:

echo 0 > /sys/class/graphics/fbcon/cursor_blink
echo 1 > /sys/class/graphics/fbcon/rotate

Relies on: CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y.

Documented under: Documentation/fb/.

TODO: font and keymap. Mentioned at: https://cmcenroe.me/2017/05/05/linux-console.html and I think can be done with BusyBox loadkmap and loadfont, we just have to understand their formats, related:

Requires Graphics.

Let’s have some fun.

I think most are implemented under:

drivers/tty

TODO find all.

Scroll up / down the terminal:

Shift-PgDown
Shift-PgUp

Or inside ./qemu-monitor:

sendkey shift-pgup
sendkey shift-pgdown

If you run in QEMU graphic mode:

./run --graphic

and then from the graphic window you enter the keys:

Ctrl-Alt-Del

then this runs the following command on the guest:

/sbin/reboot

This is enabled from our rootfs_overlay/etc/inittab:

::ctrlaltdel:/sbin/reboot

This leads Linux to try to reboot, and QEMU then shuts down due to the -no-reboot option which we set by default, see: Section 15.7.1.3, “Exit emulator on panic”.

Here is a minimal example of Ctrl Alt Del:

./run --kernel-cli 'init=/lkmc/linux/ctrl_alt_del.out' --graphic

When you hit Ctrl-Alt-Del in the guest, our tiny init handles a SIGINT sent by the kernel and outputs to stdout:

cad
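
Such a tiny init boils down to this sketch:

#include <signal.h>
#include <sys/reboot.h>
#include <unistd.h>

static void handler(int sig)
{
	(void)sig;
	write(STDOUT_FILENO, "cad\n", 4);
}

int main(void)
{
	/* Ask the kernel to signal init on Ctrl-Alt-Del instead of
	 * hard rebooting. */
	reboot(RB_DISABLE_CAD);
	signal(SIGINT, handler);
	for (;;)
		pause();
}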

To map between man 2 reboot and the uClibc RB_* magic constants see:

less "$(./getvar buildroot_build_build_dir)"/uclibc-*/include/sys/reboot.h"

The procfs mechanism is documented at:

less linux/Documentation/sysctl/kernel.txt

which says:

When the value in this file is 0, ctrl-alt-del is trapped and
sent to the init(1) program to handle a graceful restart.
When, however, the value is > 0, Linux's reaction to a Vulcan
Nerve Pinch (tm) will be an immediate reboot, without even
syncing its dirty buffers.

Note: when a program (like dosemu) has the keyboard in 'raw'
mode, the ctrl-alt-del is intercepted by the program before it
ever reaches the kernel tty layer, and it's up to the program
to decide what to do with it.

Under the hood, behaviour is controlled by the reboot syscall:

man 2 reboot

reboot system calls can set either of these behaviours for Ctrl-Alt-Del:

  • do an immediate hard reboot without syncing. Set in uClibc C code with:

    reboot(RB_ENABLE_CAD)

    or from procfs with:

    echo 1 > /proc/sys/kernel/ctrl-alt-del

    Done by BusyBox' reboot -f.

  • send a SIGINT to the init process. This is what BusyBox' init does, and it then execs the string set in inittab.

    Set in uClibc C code with:

    reboot(RB_DISABLE_CAD)

    or from procfs with:

    echo 0 > /proc/sys/kernel/ctrl-alt-del

    Done by BusyBox' reboot.

When BusyBox init is hit with the signal, it prints the following lines:

The system is going down NOW!
Sent SIGTERM to all processes
Sent SIGKILL to all processes
Requesting system reboot

On busybox-1.29.2’s init at init/init.c we see how the kill signals are sent:

static void run_shutdown_and_kill_processes(void)
{
	/* Run everything to be run at "shutdown".  This is done _prior_
	 * to killing everything, in case people wish to use scripts to
	 * shut things down gracefully... */
	run_actions(SHUTDOWN);

	message(L_CONSOLE | L_LOG, "The system is going down NOW!");

	/* Send signals to every process _except_ pid 1 */
	kill(-1, SIGTERM);
	message(L_CONSOLE, "Sent SIG%s to all processes", "TERM");
	sync();
	sleep(1);

	kill(-1, SIGKILL);
	message(L_CONSOLE, "Sent SIG%s to all processes", "KILL");
	sync();
	/*sleep(1); - callers take care about making a pause */
}

and run_shutdown_and_kill_processes is called from:

/* The SIGPWR/SIGUSR[12]/SIGTERM handler */
static void halt_reboot_pwoff(int sig) NORETURN;
static void halt_reboot_pwoff(int sig)

which also prints the final line:

	message(L_CONSOLE, "Requesting system %s", m);

which is set as the signal handler via TODO.

Bibliography:

We cannot test these actual shortcuts on QEMU since the host captures them at a lower level, but from:

./qemu-monitor

we can for example crash the system with:

sendkey alt-sysrq-c

Same but boring because no magic key:

echo c > /proc/sysrq-trigger

Implemented in:

drivers/tty/sysrq.c

On your host, on modern systems that don’t have the SysRq key you can do:

Alt-PrtSc-space

which prints a message to dmesg of type:

sysrq: SysRq : HELP : loglevel(0-9) reboot(b) crash(c) terminate-all-tasks(e) memory-full-oom-kill(f) kill-all-tasks(i) thaw-filesystems(j) sak(k) show-backtrace-all-active-cpus(l) show-memory-usage(m) nice-all-RT-tasks(n) poweroff(o) show-registers(p) show-all-timers(q) unraw(r) sync(s) show-task-states(t) unmount(u) show-blocked-tasks(w) dump-ftrace-buffer(z)

Individual SysRq can be enabled or disabled with the bitmask:

/proc/sys/kernel/sysrq

The bitmask is documented at:

less linux/Documentation/admin-guide/sysrq.rst

In order to play with TTYs, do this:

printf '
tty2::respawn:/sbin/getty -n -L -l /lkmc/loginroot.sh tty2 0 vt100
tty3::respawn:-/bin/sh
tty4::respawn:/sbin/getty 0 tty4
tty63::respawn:-/bin/sh
::respawn:/sbin/getty -L ttyS0 0 vt100
::respawn:/sbin/getty -L ttyS1 0 vt100
::respawn:/sbin/getty -L ttyS2 0 vt100
# Leave one serial empty.
#::respawn:/sbin/getty -L ttyS3 0 vt100
' >> rootfs_overlay/etc/inittab
./build-buildroot
./run --graphic -- \
  -serial telnet::1235,server,nowait \
  -serial vc:800x600 \
  -serial telnet::1236,server,nowait \
;

and on a second shell:

telnet localhost 1235

We don’t add more TTYs by default because it would spawn more processes, even if we use askfirst instead of respawn.

On the GUI, switch TTYs with:

You can also test this on most hosts such as Ubuntu 18.04, except that when in the GUI, you must use Ctrl-Alt-Fx to switch to another terminal.

Next, we also have the following shells running on the serial ports, hit enter to activate them:

although we cannot change between terminals from there.

Each populated TTY contains a "shell":

Identify the current TTY with the command:

tty

Bibliography:

This outputs:

Get the TTY in bulk for all processes:

./psa.sh

The TTY appears under the TT section, which is enabled by -o tty. This shows the TTY device number, e.g.:

4,1

and we can then confirm it with:

ls -l /dev/tty1

Next try:

insmod kthread.ko

and switch between virtual terminals, to see that the dmesg output goes to whichever virtual terminal you are currently on, but not to the others, and not to the serial terminals.

Bibliography:

TODO: how to place an sh directly on a TTY as well without getty?

If I try the exact same command that the inittab is doing from a regular shell after boot:

/sbin/getty 0 tty1

it fails with:

getty: setsid: Operation not permitted

The following however works:

./run --eval 'getty 0 tty1 & getty 0 tty2 & getty 0 tty3 & sleep 99999999' --graphic

presumably because it is being called from init directly?

Outcome: Alt-Right cycles between three TTYs, tty1 being the default one that appears under the boot messages.

man 2 setsid says that there is only one failure possibility:

EPERM The process group ID of any process equals the PID of the calling process. Thus, in particular, setsid() fails if the calling process is already a process group leader.

We can get some visibility into it to try and solve the problem with:

./psa.sh

Take the command described at TTY and try adding the following:

  • -e 'console=tty7': boot messages still show on /dev/tty1 (TODO how to change that?), but we don’t get a shell at the end of boot there.

    Instead, the shell appears on /dev/tty7.

  • -e 'console=tty2' like /dev/tty7, but /dev/tty2 is broken, because we have two shells there:

    • one due to the ::respawn:-/bin/sh entry which uses whatever console points to

    • another one due to the tty2::respawn:/sbin/getty entry we added

  • -e 'console=ttyS0' much like tty2, but messages show only on serial, and the terminal is broken due to having multiple shells on it

  • -e 'console=tty1 console=ttyS0': boot messages show on both tty1 and ttyS0, but only S0 gets a shell because it came last

This is due to the CONFIG_LOGO=y option which we enable by default.

reset on the terminal then kills the poor penguins.

When CONFIG_LOGO=y is set, the logo can be disabled at boot with:

./run --kernel-cli 'logo.nologo'

Looks like a recompile is needed to modify the image…​

DRM / DRI is the new interface that supersedes fbdev:

./build-buildroot --config 'BR2_PACKAGE_LIBDRM=y'
./build-userland --package libdrm -- userland/libs/libdrm/modeset.c
./run --eval-after './libs/libdrm/modeset.out' --graphic

Outcome: for a few seconds, the screen that contains the terminal gets taken over by changing colors of the rainbow.

TODO not working for aarch64, it takes over the screen for a few seconds and the kernel messages disappear, but the screen stays black all the time.

./build-buildroot --config 'BR2_PACKAGE_LIBDRM=y'
./build-userland --package libdrm
./build-buildroot
./run --eval-after './libs/libdrm/modeset.out' --graphic

kmscube however worked, which means that it must be a bug with this demo?

We set CONFIG_DRM=y on our default kernel configuration, and it creates one device file for each display:

# ls -l /dev/dri
total 0
crw-------    1 root     root      226,   0 May 28 09:41 card0
# grep 226 /proc/devices
226 drm
# ls /sys/module/drm /sys/module/drm_kms_helper/

Try creating new displays:

./run --arch aarch64 --graphic -- -device virtio-gpu-pci

to see multiple /dev/dri/cardN, and then use a different display with:

./run --eval-after './libs/libdrm/modeset.out' --graphic

Bibliography:

./build-buildroot --config-fragment buildroot_config/kmscube

Outcome: a colored spinning cube coded in OpenGL + EGL takes over your display and spins forever: https://www.youtube.com/watch?v=CqgJMgfxjsk

It is a bit amusing to see OpenGL running outside of a window manager window like that: https://stackoverflow.com/questions/3804065/using-opengl-without-a-window-manager-in-linux/50669152#50669152

TODO: it is very slow, about 1FPS. I tried Buildroot master ad684c20d146b220dd04a85dbf2533c69ec8ee52 with:

make qemu_x86_64_defconfig
printf "
BR2_CCACHE=y
BR2_PACKAGE_HOST_QEMU=y
BR2_PACKAGE_HOST_QEMU_LINUX_USER_MODE=n
BR2_PACKAGE_HOST_QEMU_SYSTEM_MODE=y
BR2_PACKAGE_HOST_QEMU_VDE2=y
BR2_PACKAGE_KMSCUBE=y
BR2_PACKAGE_MESA3D=y
BR2_PACKAGE_MESA3D_DRI_DRIVER_SWRAST=y
BR2_PACKAGE_MESA3D_OPENGL_EGL=y
BR2_PACKAGE_MESA3D_OPENGL_ES=y
BR2_TOOLCHAIN_BUILDROOT_CXX=y
" >> .config

and the FPS was much better, I estimate something like 15FPS.

On Ubuntu 18.04 with NVIDIA proprietary drivers:

sudo apt-get install kmscube
kmscube

fails with:

drmModeGetResources failed: Invalid argument
failed to initialize legacy DRM

TODO get working.

Implements a console for DRM.

The Linux kernel has a built-in fbdev console, described at Linux kernel console fun, but not for DRM it seems.

The upstream project seems dead with last commit in 2014: https://www.freedesktop.org/wiki/Software/kmscon/

Build failed in Ubuntu 18.04 with: dvdhrm/kmscon#131 but this fork compiled but didn’t run on host: Aetf/kmscon#2 (comment)

Haven’t tested the fork on QEMU: too much insanity.

TODO get working.

Looks like a more raw alternative to libdrm:

./build-buildroot --config 'BR2_PACKAGE_LIBDRI2=y'
wget \
  -O "$(./getvar userland_source_dir)/dri2test.c" \
  https://raw.githubusercontent.com/robclark/libdri2/master/test/dri2test.c \
;
./build-userland

but then I noticed that that example requires multiple files, and I don’t feel like integrating it into our build.

When I build it on Ubuntu 18.04 host, it does not generate any executable, so I’m confused.

Tests a lot of Linux and POSIX userland visible interfaces.

Buildroot already has a package, so it is trivial to build it:

./build-buildroot --config 'BR2_PACKAGE_LTP_TESTSUITE=y'

So now let’s try and see if the exit system call is working:

/usr/lib/ltp-testsuite/testcases/bin/exit01

which gives successful output:

exit01      1  TPASS  :  exit() test PASSED

Besides testing any kernel modifications you make, LTP can also be used to test the system call implementation of User mode simulation as shown at User mode Buildroot executables:

./run --userland "$(./getvar buildroot_target_dir)/usr/lib/ltp-testsuite/testcases/bin/exit01"

Tested at: 287c83f3f99db8c1ff9bbc85a79576da6a78e986 + 1.

[posix] userland stress. Two versions:

./build-buildroot \
  --config 'BR2_PACKAGE_STRESS=y' \
  --config 'BR2_PACKAGE_STRESS_NG=y' \
;

STRESS_NG is likely the best, but it requires glibc, see: [libc-choice].

Websites:

stress usage:

stress --help
stress -c 16 &
ps

and notice how 16 threads were created in addition to a parent worker thread.

It just runs forever, so kill it when you get tired:

kill %1

stress -c 1 -t 1 makes gem5 unresponsive for a very long time.

Between all archs on QEMU and gem5, we touch all of those kernel build output files.

Converting arch/* images to vmlinux is possible in theory on x86 with extract-vmlinux, but we didn’t get any gem5 boots working from images generated like that for some reason, see: ************#79

The following kernel modules and [baremetal] executables dump and disassemble various registers which cannot be observed from userland (usually "system registers", "control registers"):

Some of those programs are using:

Alternatively, you can also get their value from inside GDB step debug with:

info registers all

or the short version:

i r a

or to get just specific registers, e.g. just ARMv8’s SCTLR:

i r SCTLR

but it is sometimes just more convenient to run an executable to get the registers at the point of interest.

See also:

TODO: get prototype working and then properly integrate:

./build-xen

Source: build-xen

This script attempts to build Xen for aarch64 and feed it into QEMU through submodules/boot-wrapper-aarch64

TODO: other archs not yet attempted.

The current bad behaviour is that it prints just:

Boot-wrapper v0.2

and nothing else.

We will also need CONFIG_XEN=y on the Linux kernel, but first Xen should print some Xen messages before the kernel is ever reached.

If we pass to QEMU the xen image directly instead of the boot wrapper one:

-kernel ../xen/xen/xen

then Xen messages do show up! So it seems that the configuration failure lies in the boot wrapper itself rather than Xen.

Maybe it is also possible to run Xen directly like this: QEMU can already load multiple images at different memory locations with the generic loader: https://github.com/qemu/qemu/blob/master/docs/generic-loader.txt which looks something along:

-kernel file1.elf -device loader,file=file2.elf

so as long as we craft the correct DTB and feed it into Xen so that it can see the kernel, it should work. TODO does QEMU support patching the auto-generated DTB with pre-generated options? In the worst case we can just dump it and hack it up by hand with -machine dumpdtb, see: Section 8.4, “Device tree emulator generation”.

Bibliography:

U-Boot is a popular bootloader.

It can read disk filesystems, and Buildroot supports it, so we could in theory put it into memory, and let it find a kernel image from the root filesystem and boot that, but I didn’t manage to get it working yet: https://stackoverflow.com/questions/58028789/how-to-boot-linux-aarch64-with-u-boot-with-buildroot-on-qemu

QEMU is a system simulator: it simulates a CPU and devices such as interrupt handlers, timers, UART, screen, keyboard, etc.

If you are familiar with VirtualBox, then QEMU basically does the same thing: it opens a "window" inside your desktop that can run an operating system inside your operating system.

Also both can use very similar techniques: either binary translation or KVM. VirtualBox' binary translator is / was based on QEMU’s it seems: https://en.wikipedia.org/wiki/VirtualBox#Software-based_virtualization

The huge advantage of QEMU over VirtualBox is that it supports cross arch simulation, e.g. simulate an ARM guest on an x86 host.

QEMU is likely the leading cross arch system simulator as of 2018. It is even the default [android] simulator that developers get with Android Studio 3 to develop apps without real hardware.

Another advantage of QEMU over VirtualBox is that it doesn’t have Oracle’s hands all over it, more like RedHat + ARM.

Another advantage of QEMU is that it has no nice configuration GUI. Because who needs GUIs when you have 50 million semi-documented CLI options? Android Studio adds a custom GUI configuration tool on top of it.

QEMU is also supported by Buildroot in-tree, see e.g.: https://github.com/buildroot/buildroot/blob/2018.05/configs/qemu_aarch64_virt_defconfig We however just build our own manually with build-qemu, as it gives more flexibility, and building QEMU is very easy!

All of this makes QEMU the natural choice of reference system simulator for this repo.

We disable disk persistency for both QEMU and gem5 by default, to prevent the emulator from putting the image in an unknown state.

For QEMU, this is done by passing the snapshot option to -drive, and for gem5 it is the default behaviour.

If you hack up our run script to remove that option, then:

./run --eval-after 'date >f;poweroff'

followed by:

./run --eval-after 'cat f'

gives the date, because poweroff without -n syncs before shutdown.

The sync command also saves the disk:

sync

When you do:

./build-buildroot

the disk image gets overwritten by a fresh filesystem and you lose all changes.

Remember that if you forcibly turn QEMU off without sync or poweroff from inside the VM, e.g. by closing the QEMU window, disk changes may not be saved.

Persistency is also turned off when booting from initrd with a CPIO instead of with a disk.

Disk persistency is useful to re-run shell commands from the history of a previous session with Ctrl-R, but we felt that the loss of determinism was not worth it.

TODO how to make gem5 disk writes persistent?

As of cadb92f2df916dbb47f428fd1ec4932a2e1f0f48 there are some read_only entries in the gem5 config.ini under cow sections, but hacking them to true did not work:

diff --git a/configs/common/FSConfig.py b/configs/common/FSConfig.py
index 17498c42b..76b8b351d 100644
--- a/configs/common/FSConfig.py
+++ b/configs/common/FSConfig.py
@@ -60,7 +60,7 @@ os_types = { 'alpha' : [ 'linux' ],
            }

 class CowIdeDisk(IdeDisk):
-    image = CowDiskImage(child=RawDiskImage(read_only=True),
+    image = CowDiskImage(child=RawDiskImage(read_only=False),
                          read_only=False)

     def childImage(self, ci):

The directory of interest is src/dev/storage.

qcow2 does not appear supported: there are no hits in the source tree, and there is a mention on Nate’s 2009 wishlist: http://gem5.org/Nate%27s_Wish_List

This would be good to allow storing smaller sparse ext2 images locally on disk.

QEMU allows us to take snapshots at any time through the monitor.

You can then restore CPU, memory and disk state back at any time.

qcow2 filesystems must be used for that to work.

To test it out, log in to the VM and run:

./run --eval-after 'umount /mnt/9p/*;./count.sh'

On another shell, take a snapshot:

./qemu-monitor savevm my_snap_id

The counting continues.

Restore the snapshot:

./qemu-monitor loadvm my_snap_id

and the counting goes back to where we saved. This shows that CPU and memory states were reverted.

The umount is needed because snapshotting conflicts with 9P, which we felt is a more valuable default. If you forget to unmount, the following error appears on the QEMU monitor:

Migration is disabled when VirtFS export path '/linux-kernel-module-cheat/out/x86_64/buildroot/build' is mounted in the guest using mount_tag 'host_out'

We can also verify that the disk state is also reversed. Guest:

echo 0 >f

Monitor:

./qemu-monitor savevm my_snap_id

Guest:

echo 1 >f

Monitor:

./qemu-monitor loadvm my_snap_id

Guest:

cat f

And the output is 0.

Our setup does not allow for snapshotting while using initrd.

Snapshots are stored inside the .qcow2 images themselves.

They can be observed with:

"$(./getvar buildroot_host_dir)/bin/qemu-img" info "$(./getvar qcow2_file)"

which after savevm my_snap_id and savevm asdf contains an output of type:

image: out/x86_64/buildroot/images/rootfs.ext2.qcow2
file format: qcow2
virtual size: 512M (536870912 bytes)
disk size: 180M
cluster_size: 65536
Snapshot list:
ID        TAG                 VM SIZE                DATE       VM CLOCK
1         my_snap_id              47M 2018-04-27 21:17:50   00:00:15.251
2         asdf                    47M 2018-04-27 21:20:39   00:00:18.583
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

As a consequence:

  • it is possible to restore snapshots across boots, since they stay on the same image the entire time

  • it is not possible to use snapshots with initrd in our setup, since we don’t pass -drive at all when initrd is enabled

This section documents:

For the more complex interfaces, we focus on simplified educational devices, either:

Only tested in x86.

PCI driver for our minimal pci_min.c QEMU fork device:

./run -- -device lkmc_pci_min

then:

insmod pci_min.ko

Sources:

Outcome:

<4>[   10.608241] pci_min: loading out-of-tree module taints kernel.
<6>[   10.609935] probe
<6>[   10.651881] dev->irq = 11
lkmc_pci_min mmio_write addr = 0 val = 12345678 size = 4
<6>[   10.668515] irq_handler irq = 11 dev = 251
lkmc_pci_min mmio_write addr = 4 val = 0 size = 4

What happened:

  • right at probe time, we write to a register

  • our hardware model is coded such that it generates an interrupt when written to

  • the Linux kernel interrupt handler writes to another register, which tells the hardware to stop sending interrupts (see the sketch after this list)
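
That handshake can be sketched on the kernel side as follows (hypothetical names, assuming the BAR 0 layout of our model: writing register 0 raises the interrupt, writing register 4 acknowledges it):

#include <linux/interrupt.h>
#include <linux/pci.h>

static void __iomem *mmio;

static irqreturn_t irq_handler(int irq, void *dev)
{
	pr_info("irq_handler irq = %d\n", irq);
	/* Acknowledge: tell the hardware to stop interrupting. */
	iowrite32(0, mmio + 4);
	return IRQ_HANDLED;
}

static int my_probe(struct pci_dev *dev, const struct pci_device_id *id)
{
	if (pci_enable_device(dev))
		return -ENODEV;
	mmio = pci_iomap(dev, 0, 0);
	if (request_irq(dev->irq, irq_handler, IRQF_SHARED,
			"lkmc_pci_min", &mmio))
		return -EBUSY;
	pr_info("dev->irq = %d\n", dev->irq);
	/* Poke register 0: the model fires an interrupt in response. */
	iowrite32(0x12345678, mmio + 0);
	return 0;
}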

Kernel messages and printks from inside QEMU are shown all together, to see that more clearly, run in QEMU graphic mode instead.

We don’t enable the device by default because it does not work for vanilla QEMU, which we often want to test with this repository.

Probe already does an MMIO write, which generates an IRQ and tests everything.

Small upstream educational PCI device:

./qemu_edu.sh

This tests a lot of features of the edu device, to understand the results, compare the inputs with the documentation of the hardware: https://github.com/qemu/qemu/blob/v2.12.0/docs/specs/edu.txt

Sources:

Works because we add to our default QEMU CLI:

-device edu

This example uses:

  • the QEMU edu educational device, which is a minimal educational in-tree PCI example

  • the pci.ko kernel module, which exercises the edu hardware.

    I’ve contacted the awesome original author of edu, Jiri Slaby, and he told me there is no official kernel module example because this was created for a kernel module university course that he gives, and he didn’t want to give away answers. I don’t agree with that philosophy, so students, cheat away with this repo and go make startups instead.

TODO exercise DMA on the kernel module. The edu hardware model has that feature:

In this section we will try to interact with PCI devices directly from userland without kernel modules.

First identify the PCI device with:

lspci

In our case for example, we see:

00:06.0 Unclassified device [00ff]: Device 1234:11e8 (rev 10)
00:07.0 Unclassified device [00ff]: Device 1234:11e9

which we identify as being edu and pci_min respectively by the magic numbers: 1234:11e?

Alternatively, we can also use the QEMU monitor:

./qemu-monitor info qtree

which gives:

      dev: lkmc_pci_min, id ""
        addr = 07.0
        romfile = ""
        rombar = 1 (0x1)
        multifunction = false
        command_serr_enable = true
        x-pcie-lnksta-dllla = true
        x-pcie-extcap-init = true
        class Class 00ff, addr 00:07.0, pci id 1234:11e9 (sub 1af4:1100)
        bar 0: mem at 0xfeb54000 [0xfeb54007]
      dev: edu, id ""
        addr = 06.0
        romfile = ""
        rombar = 1 (0x1)
        multifunction = false
        command_serr_enable = true
        x-pcie-lnksta-dllla = true
        x-pcie-extcap-init = true
        class Class 00ff, addr 00:06.0, pci id 1234:11e8 (sub 1af4:1100)
        bar 0: mem at 0xfea00000 [0xfeafffff]

Read the configuration registers as binary:

hexdump /sys/bus/pci/devices/0000:00:06.0/config

Get nice human readable names and offsets of the registers and some enums:

setpci --dumpregs

Get the value of a given config register from its human readable name, with either the bus or the device id:

setpci -s 0000:00:06.0 BASE_ADDRESS_0
setpci -d 1234:11e9 BASE_ADDRESS_0

Note however that BASE_ADDRESS_0 also appears when you do:

lspci -v

as:

Memory at feb54000

Then you can try messing with that address with /dev/mem:

devmem 0xfeb54000 w 0x12345678

which writes to the first register of our pci_min device.
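
If you are curious, what devmem does under the hood boils down to mmapping /dev/mem, along the lines of this minimal C sketch (no error checking; the address is the BAR 0 value found above):

#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    /* Map the page that contains BAR 0 of the device. */
    volatile uint32_t *bar0 = (volatile uint32_t *)mmap(
        NULL, 0x1000, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0xfeb54000);
    /* The MMIO write: this is what triggers the interrupt. */
    bar0[0] = 0x12345678;
    munmap((void *)bar0, 0x1000);
    close(fd);
    return 0;
}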

The device then fires an interrupt at irq 11, which is unhandled, which leads the kernel to say you are a bad boy:

lkmc_pci_min mmio_write addr = 0 val = 12345678 size = 4
<5>[ 1064.042435] random: crng init done
<3>[ 1065.567742] irq 11: nobody cared (try booting with the "irqpoll" option)

followed by a trace.

Next, also try using our irq.ko IRQ monitoring module before triggering the interrupt:

insmod irq.ko
devmem 0xfeb54000 w 0x12345678

Our kernel module handles the interrupt, but does not acknowledge it like our proper pci_min kernel module, and so it keeps firing, which leads to infinitely many messages being printed:

handler irq = 11 dev = 251

There are two versions of setpci and lspci:

  • a simple one from BusyBox

  • a more complete one from pciutils, which Buildroot has a package for, and which is the default on an Ubuntu 18.04 host. This is the one we enable by default.

The PCI standard is non-free, obviously like everything in low level: https://pcisig.com/specifications but Google gives several illegal PDF hits :-)

And of course, the best documentation available is: http://wiki.osdev.org/PCI

Like every other piece of hardware, we could interact with PCI on x86 using only IO instructions and memory operations.

But PCI is a complex communication protocol that the Linux kernel implements beautifully for us, so let’s use the kernel API.
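
To give an idea of what that API looks like, here is a minimal sketch of a PCI probe callback; it is not the literal pci_min.c source, and error handling is elided:

#include <linux/pci.h>

static int lkmc_probe(struct pci_dev *dev, const struct pci_device_id *id)
{
    void __iomem *mmio;

    if (pci_enable_device(dev))
        return -EIO;
    /* Claim BAR 0, which the hardware model registered as MMIO. */
    if (pci_request_region(dev, 0, "lkmc_pci_min"))
        return -EIO;
    mmio = pci_iomap(dev, 0, pci_resource_len(dev, 0));
    /* Write to the first 32-bit register, like the probe seen above. */
    iowrite32(0x12345678, mmio);
    return 0;
}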

Bibliography:

lspci -k shows something like:

00:04.0 Class 00ff: 1234:11e8 lkmc_pci

Meaning of the first numbers:

<8:bus>:<5:device>.<3:function>

Often abbreviated to BDF.

Sometimes a fourth number is also added, e.g.:

0000:00:04.0

TODO is that the domain?
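
It appears to be the PCI domain, also known as the segment; lspci can be asked to always show it with:

lspci -D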

Class: pure magic: https://www-s.acm.illinois.edu/sigops/2007/roll_your_own/7.c.1.html TODO: does it have any side effects? Set in the edu device at:

k->class_id = PCI_CLASS_OTHERS

Each PCI device has 6 BARs (base address registers) as per the PCI spec.

Each BAR corresponds to an address range that can be used to communicate with the PCI device.

Each BAR is of one of two types:

  • IORESOURCE_IO: must be accessed with inX and outX

  • IORESOURCE_MEM: must be accessed with ioreadX and iowriteX. This is the saner method apparently, and what the edu device uses.

The length of each region is defined by the hardware, and communicated to software via the configuration registers.

The Linux kernel automatically parses the 64 bytes of standardized configuration registers for us.

QEMU devices register those regions with:

memory_region_init_io(&edu->mmio, OBJECT(edu), &edu_mmio_ops, edu,
                "edu-mmio", 1 << 20);
pci_register_bar(pdev, 0, PCI_BASE_ADDRESS_SPACE_MEMORY, &edu->mmio);

TODO: broken. Was working before we moved arm from -M versatilepb to -M virt around af210a76711b7fa4554dcc2abd0ddacfc810dfd4. Either make it work on -M virt if that is possible, or document precisely how to make it work with versatilepb, or hopefully vexpress which is newer.

The best you can do is to hack our build script to add:

HOST_QEMU_OPTS='--extra-cflags=-DDEBUG_PL061=1'

where PL061 is the dominant ARM Holdings hardware that handles GPIO.

Then compile with:

./build-buildroot --arch arm --config-fragment buildroot_config/gpio
./build-linux --config-fragment linux_config/gpio

then test it out with:

./gpio.sh

Buildroot’s Linux tools package provides some GPIO CLI tools: lsgpio, gpio-event-mon, gpio-hammer, TODO document them here.

TODO: broken when arm moved to -M virt, same as GPIO.

Hack QEMU’s hw/misc/arm_sysctl.c with a printf:

static void arm_sysctl_write(void *opaque, hwaddr offset,
                            uint64_t val, unsigned size)
{
    arm_sysctl_state *s = (arm_sysctl_state *)opaque;

    switch (offset) {
    case 0x08: /* LED */
        printf("LED val = %llx\n", (unsigned long long)val);

and then rebuild with:

./build-qemu --arch arm
./build-linux --arch arm --config-fragment linux_config/leds

But beware that one of the LEDs has a heartbeat trigger by default (specified in the dts), so it will produce a lot of output.

And then activate it with:

cd /sys/class/leds/versatile:0
cat max_brightness
echo 255 >brightness
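
The trigger itself is controlled through the same sysfs directory via the standard LED class interface, so the heartbeat noise mentioned above can be silenced with:

cat trigger
echo none >trigger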

Relevant QEMU files:

  • hw/arm/versatilepb.c

  • hw/misc/arm_sysctl.c

Relevant kernel files:

  • arch/arm/boot/dts/versatile-pb.dts

  • drivers/leds/led-class.c

  • drivers/leds/leds-sysctl.c

Minimal platform device example coded into the -M versatilepb SoC of our QEMU fork.

Using this device now requires checking out the branch:

git checkout platform-device
git submodule sync

before building; it does not work on master.

Rationale: we found out that the kernels that build for qemu -M versatilepb don’t work on gem5 because versatilepb is an old pre-v7 platform, and gem5 requires armv7. So we migrated over to -M virt to have a single kernel for both gem5 and QEMU, and broke this since the single kernel was more important. TODO port to -M virt.

Uses:

Expected outcome after insmod:

  • QEMU reports MMIO with printfs

  • IRQs are generated and handled by this module, which logs to dmesg

Without insmoding this module, try writing to the register with /dev/mem:

devmem 0x101e9000 w 0x12345678

We can also observe the interrupt with dummy-irq:

modprobe dummy-irq irq=34
insmod platform_device.ko

The IRQ number 34 was found by looking at dmesg after:

insmod platform_device.ko

The QEMU monitor is a magic terminal that allows you to send text commands to the QEMU VM itself: https://en.wikibooks.org/wiki/QEMU/Monitor

While QEMU is running, on another terminal, run:

./qemu-monitor

or send one command such as info qtree and quit the monitor:

./qemu-monitor info qtree

or equivalently:

echo 'info qtree' | ./qemu-monitor

Source: qemu-monitor

qemu-monitor uses the -monitor QEMU command line option, which makes the monitor listen on a socket.
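
To sketch what that amounts to without our wrapper, assuming a hypothetical socket path monitor.sock (the path our scripts actually use differs):

./run -- -monitor unix:monitor.sock,server,nowait
echo 'info qtree' | socat - UNIX-CONNECT:monitor.sock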

Alternatively, we can also enter the QEMU monitor from inside -nographic QEMU text mode with:

Ctrl-A C

and go back to the terminal with:

Ctrl-A C

When in graphic mode, we can do it from the GUI:

Ctrl-Alt ?

where ? is a digit: 1, 2, 3, etc., depending on what else is available on the GUI: serial, parallel and frame buffer.

Finally, we can also access QEMU monitor commands directly from GDB step debug with the monitor command:

./run-gdb

then inside that shell:

monitor info qtree

This way you can use both QEMU monitor and GDB commands to inspect the guest from inside a single shell! Pretty awesome.

In general, ./qemu-monitor is the best option, as it:

  • works in both modes

  • allows you to use the host Bash history to re-run one-off commands

  • allows you to search the output of commands on your host shell even when in graphic mode

Getting everything to work required careful choice of QEMU command line options:

It is also worth looking into the QEMU Guest Agent tool qemu-ga that can be enabled with:

./build-buildroot --config 'BR2_PACKAGE_QEMU=y'

When doing GDB step debug it is possible to send QEMU monitor commands through the GDB monitor command, which saves you the trouble of opening yet another shell.

Try for example:

monitor help
monitor info qtree

When you start hacking QEMU or gem5, it is useful to see what is going on inside the emulators themselves.

This is of course trivial since they are just regular userland programs on the host, but we make it a bit easier with:

./run --debug-vm

Or for a faster development loop:

./run --debug-vm-args '-ex "break qemu_add_opts" -ex "run"'

Or if things get really involved and you want a debug script:

printf 'break qemu_add_opts
run
' > data/vm.gdb
./run --debug-vm-file data/vm.gdb

Our default emulator builds are optimized with gcc -O2 -g. To use -O0 instead, build and run with:

./build-qemu --qemu-build-type debug --verbose
./run --debug-vm
./build-gem5 --gem5-build-type debug --verbose
./run --debug-vm --emulator-gem5

The --verbose is optional, but shows clearly each GCC build command so that you can confirm what --*-build-type is doing.

The build outputs are automatically stored in different directories for optimized and debug builds, which prevents debug files from overwriting opt ones. Therefore, --gem5-build-id is not required.

The price to pay for debuggability is high however: a Linux kernel boot was about 3x slower in QEMU and 14 times slower in gem5 debug compared to opt, see benchmarks at: [benchmark-linux-kernel-boot].

Similar slowdowns can be observed at: [benchmark-emulators-on-userland-executables].

When in QEMU text mode, using --debug-vm makes Ctrl-C not get passed to the QEMU guest anymore: it is instead captured by GDB itself, to allow breaking. So, e.g., you won’t be able to easily quit from a guest program like:

sleep 10

In graphic mode, make sure that you never click inside the QEMU graphic while debugging, otherwise your mouse gets captured forever, and the only solution I can find is to go to a TTY with Ctrl-Alt-F1 and kill QEMU.

You can still send key presses to QEMU even without the mouse capture however: just either click on the title bar, or Alt-Tab to give it focus.

While step debugging any complex program, you always end up feeling the need to step in reverse to reach the last call to some function that was called before the failure point, in order to trace back the problem to the actual bug source.

While GDB "has" this feature, it is just too broken to be usable, and so we expose the amazing Mozilla RR tool conveniently in this repo: https://stackoverflow.com/questions/1470434/how-does-reverse-debugging-work/53063242#53063242

Before the first usage, set up rr with:

echo 'kernel.perf_event_paranoid=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

Then use it with your content of interest, for example:

./run --debug-vm-rr --userland userland/c/hello.c

This will:

  • first run the program once until completion or crash

  • then restart the program at the very first instruction at _start and leave you in a GDB shell

From there, run the program until your point of interest, e.g.:

break qemu_add_opts
continue

and you can now reliably use reverse debugging commands such as reverse-continue, reverse-finish and reverse-next!

To restart debugging again after quitting rr, simply run on your host terminal:

rr replay

The use case of rr is often to go to the final crash and then walk back from there, so you often want to automate running until the end after record with --debug-vm-args as in:

./run --debug-vm-args='-ex continue' --debug-vm-rr --userland userland/c/hello.c

Programs often tend to blow up in very low frames that use values passed in from higher frames. In those cases, remember that just like with forward debugging, you can’t just go:

up
up
up
reverse-next

but rather, you must:

reverse-finish
reverse-finish
reverse-finish
reverse-next

Start pdb at the first instruction:

./run --emulator gem5 --gem5-exe-args='--pdb' --terminal

Requires --terminal as we must be in the foreground.

Alternatively, you can add the usual line at the point of the code where you want to break:

import ipdb; ipdb.set_trace()

and then run with:

./run --emulator gem5 --terminal

QEMU can log several different events.

The most interesting are events which show instructions that QEMU ran, for which we have a helper:

./trace-boot --arch x86_64

Under the hood, this uses QEMU’s -trace option.

You can then inspect the address of each instruction run:

less "$(./getvar --arch x86_64 run_dir)/trace.txt"

Sample output excerpt:

exec_tb 0.000 pid=10692 tb=0x7fb4f8000040 pc=0xfffffff0
exec_tb 35.391 pid=10692 tb=0x7fb4f8000180 pc=0xfe05b
exec_tb 21.047 pid=10692 tb=0x7fb4f8000340 pc=0xfe066
exec_tb 12.197 pid=10692 tb=0x7fb4f8000480 pc=0xfe06a

Get the list of available trace events:

./run --trace help

TODO: any way to show the actual disassembled instruction executed directly from there? Possible with QEMU -d tracing.

Enable other specific trace events:

./run --trace trace1,trace2
./qemu-trace2txt -a "$arch"
less "$(./getvar -a "$arch" run_dir)/trace.txt"

This functionality relies on the following setup:

  • ./configure --enable-trace-backends=simple. This logs in a binary format to the trace file.

    It makes execution 3x faster than the default trace backend, which logs human readable data to stdout.

    Logging with the default backend greatly slows down the CPU, and in particular leads to this boot message:

    All QSes seen, last rcu_sched kthread activity 5252 (4294901421-4294896169), jiffies_till_next_fqs=1, root ->qsmask 0x0
    swapper/0       R  running task        0     1      0 0x00000008
     ffff880007c03ef8 ffffffff8107aa5d ffff880007c16b40 ffffffff81a3b100
     ffff880007c03f60 ffffffff810a41d1 0000000000000000 0000000007c03f20
     fffffffffffffedc 0000000000000004 fffffffffffffedc ffffffff00000000
    Call Trace:
     <IRQ>  [<ffffffff8107aa5d>] sched_show_task+0xcd/0x130
     [<ffffffff810a41d1>] rcu_check_callbacks+0x871/0x880
     [<ffffffff810a799f>] update_process_times+0x2f/0x60

    in which the boot appears to hang for a considerable time.

  • patch QEMU source to remove the disable from exec_tb in the trace-events file. See also: https://rwmj.wordpress.com/2016/03/17/tracing-qemu-guest-execution/

QEMU also has a second trace mechanism in addition to -trace, find out the events with:

./run -- -d help

Let’s pick the one that dumps executed instructions, in_asm:

./run --eval './linux/poweroff.out' -- -D out/trace.txt -d in_asm
less out/trace.txt

Sample output excerpt:

----------------
IN:
0xfffffff0:  ea 5b e0 00 f0           ljmpw    $0xf000:$0xe05b

----------------
IN:
0x000fe05b:  2e 66 83 3e 88 61 00     cmpl     $0, %cs:0x6188
0x000fe062:  0f 85 7b f0              jne      0xd0e1

TODO: after IN:, symbol names are meant to show, which is awesome, but I don’t get any. I do see them however when running a bare metal example from: https://github.com/************/newlib-examples/tree/900a9725947b1f375323c7da54f69e8049158881

TODO: what is the point of having two mechanisms, -trace and -d? -d tracing is cool because it does not require a messy recompile, and it can also show symbols.

TODO: is it possible to show the register values for each instruction?

This would include the memory values read into the registers.

Seems impossible due to optimizations that QEMU does:

PANDA can list memory addresses, so I bet it can also decode the instructions: https://github.com/panda-re/panda/blob/883c85fa35f35e84a323ed3d464ff40030f06bd6/panda/docs/LINE_Censorship.md I wonder why they don’t just upstream those things to QEMU’s tracing: panda-re/panda#290

gem5 can do it as shown at: Section 18.8.8, “gem5 tracing”.

Not possible apparently, not even with the memory_region_ops_read and memory_region_ops_write trace events, as Peter comments at https://lists.gnu.org/archive/html/qemu-devel/2015-06/msg07482.html

No. You will miss all the fast-path memory accesses, which are done with custom generated assembly in the TCG backend. In general QEMU is not designed to support this kind of monitoring of guest operations.

We can further use Binutils' addr2line to get the line that corresponds to each address:

./trace-boot --arch x86_64
./trace2line --arch x86_64
less "$(./getvar --arch x86_64 run_dir)/trace-lines.txt"

The last command takes several seconds.

The format is as follows:

39368 _static_cpu_has arch/x86/include/asm/cpufeature.h:148

Where:

  • 39368: number of consecutive times that a line ran. Makes the output much shorter and more meaningful

  • _static_cpu_has: name of the function that contains the line

  • arch/x86/include/asm/cpufeature.h:148: file and line

This could of course all be done with GDB, but it would likely be too slow to be practical.
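
Under the hood, this boils down to addr2line invocations of the form (a sketch; the vmlinux path and the address are hypothetical):

addr2line -f -e out/linux/default/x86_64/vmlinux 0xffffffff81000000

which prints the function name and the file:line pair for that address.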

TODO do even more awesome offline post-mortem analysis things, such as:

  • detect if we are in userspace or kernelspace. Should be a simple matter of reading the PC, since the kernel lives at high addresses

  • read kernel data structures, and determine the current thread. Maybe we can reuse / extend the kernel’s GDB Python scripts??

QEMU runs, unlike gem5 runs, are not deterministic by default, however QEMU does support a record and replay mechanism that allows you to replay a previous run deterministically.

This awesome feature allows you to examine a single run as many times as you would like until you understand everything:

# Record a run.
./run --eval-after './linux/rand_check.out;./linux/poweroff.out;' --record
# Replay the run.
./run --eval-after './linux/rand_check.out;./linux/poweroff.out;' --replay

A convenient shortcut to do both at once to test the feature is:

./qemu-rr --eval-after './linux/rand_check.out;./linux/poweroff.out;'

By comparing the terminal output of both runs, we can see that they are the exact same, including things which normally differ across runs:

The record and replay feature was revived around QEMU v3.0.0. It existed earlier but had rotted completely. As of v3.0.0 it is still flaky: sometimes we get deadlocks, and only a limited number of command line arguments are supported.

TODO: using -r as above leads to a kernel warning:

rcu_sched detected stalls on CPUs/tasks

TODO: replay deadlocks intermittently at disk operations, last kernel message:

EXT4-fs (sda): re-mounted. Opts: block_validity,barrier,user_xattr

TODO replay with network gets stuck:

./qemu-rr --eval-after 'ifup -a;wget -S google.com;./linux/poweroff.out;'

after the message:

adding dns 10.0.2.3

There is explicit network support on the QEMU patches, but either it is buggy or we are not using the correct magic options.

Solved on unmerged c42634d8e3428cfa60672c3ba89cabefc720cde9 from https://github.com/ispras/qemu/tree/rr-180725

TODO arm and aarch64 only seem to work with initrd since I cannot plug a working IDE disk device? See also: https://lists.gnu.org/archive/html/qemu-devel/2018-02/msg05245.html

Then, when I tried with initrd and no disk:

./build-buildroot --arch aarch64 --initrd
./qemu-rr --arch aarch64 --eval-after './linux/rand_check.out;./linux/poweroff.out;' --initrd

QEMU crashes with:

ERROR:replay/replay-time.c:49:replay_read_clock: assertion failed: (replay_file && replay_mutex_locked())

I had the same error previously on x86-64, but it was fixed: https://bugs.launchpad.net/qemu/+bug/1762179 so maybe they forgot to fix it for aarch64?

Solved on unmerged c42634d8e3428cfa60672c3ba89cabefc720cde9 from https://github.com/ispras/qemu/tree/rr-180725

TODO get working.

QEMU replays support checkpointing, and this allows for a simplistic "reverse debugging" implementation proposed at https://lists.gnu.org/archive/html/qemu-devel/2018-06/msg00478.html on the unmerged https://github.com/ispras/qemu/tree/rr-180725:

./run --eval-after './linux/rand_check.out;./linux/poweroff.out;' --record
./run --eval-after './linux/rand_check.out;./linux/poweroff.out;' --replay --gdb-wait

On another shell:

./run-gdb start_kernel

In GDB:

n
n
n
n
reverse-continue

and we are back at start_kernel.

TODO: is there any way to distinguish which instruction runs on each core? Doing:

./run --arch x86_64 --cpus 2 --eval './linux/poweroff.out' --trace exec_tb
./qemu-trace2txt

just appears to output both cores interleaved without any clear differentiation.

gem5 also provides a tracing mechanism, documented at: http://www.gem5.org/Trace_Based_Debugging:

./run --arch aarch64 --eval 'm5 exit' --emulator gem5 --trace ExecAll
less "$(./getvar --arch aarch64 run_dir)/trace.txt"

Our wrapper just forwards the options to the --debug-flags gem5 option.

Keep in mind however that the disassembly is very broken in several places as of 2019q2, so you can’t always trust it.

Output the trace to stdout instead of a file:

./run \
  --arch aarch64 \
  --emulator gem5 \
  --eval 'm5 exit' \
  --trace ExecAll \
  --trace-stdout \
;

We also have a shortcut for --trace ExecAll --trace-stdout with --trace-insts-stdout:

./run \
  --arch aarch64 \
  --emulator gem5 \
  --eval 'm5 exit' \
  --trace-insts-stdout \
;

Be warned, the trace is humongous, at 16Gb.

This would produce a lot of output however, so you will likely not want that when tracing the instructions of a full Linux kernel boot. But it can be very convenient for smaller traces such as [baremetal].

List all available debug flags:

./run --arch aarch64 --gem5-exe-args='--debug-help' --emulator gem5

but to understand most of them you have to look at the source code:

less "$(./getvar gem5_source_dir)/src/cpu/SConscript"
less "$(./getvar gem5_source_dir)/src/cpu/exetrace.cc"

The most important trace flags to know about are:

Trace internals are discussed at: gem5 trace internals.

As can be seen in the SConscript, Exec is just an alias that enables a set of flags.

We can make the trace smaller by naming the trace file as trace.txt.gz, which enables GZIP compression, but that is not currently exposed on our scripts, since you usually just need something human readable to work on.

Enabling tracing made the runtime about 4x slower on the [p51], with or without .gz compression.

Trace the source lines just like for QEMU with:

./trace-boot --arch aarch64 --emulator gem5
./trace2line --arch aarch64 --emulator gem5
less "$(./getvar --arch aarch64 run_dir)/trace-lines.txt"

TODO: 7452d399290c9c1fc6366cdad129ef442f323564 ./trace2line this is too slow and takes hours. QEMU’s processing of 170k events takes 7 seconds. gem5’s processing is analogous, but there are 140M events, so it should take 7000 seconds ~ 2 hours which seems consistent with what I observe, so maybe there is no way to speed this up…​ The workaround is to just use gem5’s ExecSymbol to get function granularity, and then GDB individually if line detail is needed?

gem5 traces are generated from DPRINTF(<trace-id> calls scattered throughout the code, except for ExecAll instruction traces, which use Debug::ExecEnable directly.

The trace IDs are themselves encoded in SConscript files, e.g.:

DebugFlag('Event'

in src/cpu/SConscript.

The build system then automatically adds the options to the --debug-flags.

For this entry, the build system then generates a file build/ARM/debug/ExecEnable.hh, which contains:

namespace Debug {
class SimpleFlag;
extern SimpleFlag ExecEnable;
}

and must be included from callers of DPRINTF( as <debug/ExecEnable.hh>.
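
So adding a custom trace message of your own boils down to something like this sketch, from inside a SimObject method, since DPRINTF prefixes the object’s name() to each message:

#include "debug/ExecEnable.hh"

DPRINTF(ExecEnable, "my custom message: %d\n", 42);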

Tested in b4879ae5b0b6644e6836b0881e4da05c64a6550d.

This debug flag traces all instructions.

The output format is of type:

25007000: system.cpu T0 : @start_kernel    : stp
25007000: system.cpu T0 : @start_kernel.0  :   addxi_uop   ureg0, sp, #-112 : IntAlu :  D=0xffffff8008913f90
25007500: system.cpu T0 : @start_kernel.1  :   strxi_uop   x29, [ureg0] : MemWrite :  D=0x0000000000000000 A=0xffffff8008913f90
25008000: system.cpu T0 : @start_kernel.2  :   strxi_uop   x30, [ureg0, #8] : MemWrite :  D=0x0000000000000000 A=0xffffff8008913f98
25008500: system.cpu T0 : @start_kernel.3  :   addxi_uop   sp, ureg0, #0 : IntAlu :  D=0xffffff8008913f90

There are two types of lines:

Breakdown:

  • 25007500: time count in some unit. Note how the microops execute at later timestamps.

  • system.cpu: distinguishes between CPUs when there are more than one. For example, running [arm-baremetal-multicore] with two cores produces system.cpu0 and system.cpu1

  • T0: thread number. TODO: hyperthread? How to play with it?

    config.ini has --param 'system.multi_thread = True' --param 'system.cpu[0].numThreads = 2', but in [arm-baremetal-multicore] the first one alone does not produce T1, and with the second one simulation blows up with:

    fatal: fatal condition interrupts.size() != numThreads occurred: CPU system.cpu has 1 interrupt controllers, but is expecting one per thread (2)
  • @start_kernel: we are in the start_kernel function. Awesome feature! Implemented with libelf https://sourceforge.net/projects/elftoolchain/ copy pasted in-tree ext/libelf. To get raw addresses, remove the ExecSymbol, which is enabled by Exec. This can be done with Exec,-ExecSymbol.

  • .1 as in @start_kernel.1: index of the microop

  • stp: instruction disassembly. Note however that the disassembly of many instructions is very broken as of 2019q2, and you can’t just trust it blindly.

  • strxi_uop x29, [ureg0]: microop disassembly.

  • MemWrite : D=0x0000000000000000 A=0xffffff8008913f90: a memory write microop:

    • D stands for data, and represents the value that was written to memory or to a register

    • A stands for address, and represents the address to which the value was written. It only shows when data is being written to memory, but not to registers.

The best way to verify all of this is to write some baremetal code.

This flag shows a more detailed register usage than the gem5 ExecAll trace format.

For example, if we run in LKMC 0323e81bff1d55b978a4b36b9701570b59b981eb:

./run --arch aarch64 --baremetal userland/arch/aarch64/add.S --emulator gem5 --trace ExecAll,Registers --trace-stdout

then the stdout contains:

  31000: system.cpu A0 T0 : @main_after_prologue    :   movz   x0, #1, #0        : IntAlu :  D=0x0000000000000001  flags=(IsInteger)
  31500: system.cpu.[tid:0]: Setting int reg 34 (34) to 0.
  31500: system.cpu.[tid:0]: Reading int reg 0 (0) as 0x1.
  31500: system.cpu.[tid:0]: Setting int reg 1 (1) to 0x3.
  31500: system.cpu A0 T0 : @main_after_prologue+4    :   add   x1, x0, #2         : IntAlu :  D=0x0000000000000003  flags=(IsInteger)
  32000: system.cpu.[tid:0]: Setting int reg 34 (34) to 0.
  32000: system.cpu.[tid:0]: Reading int reg 1 (1) as 0x3.
  32000: system.cpu.[tid:0]: Reading int reg 31 (34) as 0.
  32000: system.cpu.[tid:0]: Setting int reg 0 (0) to 0x3.

which corresponds to the two following instructions:

mov x0, 1
add x1, x0, 2

TODO that format is either buggy or is very difficult to understand:

  • what is 34? Presumably some flags register?

  • what do the numbers in parenthesis mean at 31 (34)? Presumably some flags register?

  • why is the first instruction setting reg 1 and the second one reg 0, given that the first sets x0 and the second x1?

As of gem5 16eeee5356585441a49d05c78abc328ef09f7ace the default tracer is ExeTracer. It is set at:

src/cpu/BaseCPU.py:63:default_tracer = ExeTracer()

which then gets used at:

class BaseCPU(ClockedObject):
    [...]
    tracer = Param.InstTracer(default_tracer, "Instruction tracer")

All tracers derive from the common InstTracer base class:

git grep ': InstTracer'

gives:

src/arch/arm/tracers/tarmac_parser.hh:218:    TarmacParser(const Params *p) : InstTracer(p), startPc(p->start_pc),
src/arch/arm/tracers/tarmac_tracer.cc:57:  : InstTracer(p),
src/cpu/exetrace.hh:67:    ExeTracer(const Params *params) : InstTracer(params)
src/cpu/inst_pb_trace.cc:72:    : InstTracer(p), buf(nullptr), bufSize(0), curMsg(nullptr)
src/cpu/inteltrace.hh:63:    IntelTrace(const IntelTraceParams *p) : InstTracer(p)

As mentioned at gem5 TARMAC traces, there appears to be no way to select those currently without hacking the config scripts.

TARMAC is described at: gem5 TARMAC traces.

TODO: are IntelTrace and TarmacParser useful for anything or just relics?

Then there is also the NativeTrace class:

src/cpu/nativetrace.hh:68:class NativeTrace : public ExeTracer

which gets implemented in a few different ISAs, but not all:

src/arch/arm/nativetrace.hh:40:class ArmNativeTrace : public NativeTrace
src/arch/sparc/nativetrace.hh:41:class SparcNativeTrace : public NativeTrace
src/arch/x86/nativetrace.hh:41:class X86NativeTrace : public NativeTrace

TODO: I can’t find any usages of those classes from in-tree configs.

Sometimes in Ubuntu 14.04, after the QEMU SDL GUI starts, it does not get updated after keyboard strokes, and there are artifacts like disappearing text.

We have not managed to track this problem down yet, but the following workaround always works:

Ctrl-Shift-U
Ctrl-C
root

This started happening when we switched to building QEMU through Buildroot, and has not been observed on later Ubuntu.

Using text mode is another workaround if you don’t need GUI features.

gem5 has a bunch of crappiness, mostly described at: gem5 vs QEMU, but it does deserve some credit on the following points:

  • insanely configurable system topology from Python without recompiling, made possible in part due to a well defined memory packet structure that allows adding caches and buses transparently

  • each microarchitectural model (gem5 CPU type) works with all ISAs

  • advantages of gem5:

  • disadvantages of gem5:

    • slower than QEMU, see: [benchmark-linux-kernel-boot]

      This implies that the user base is much smaller, since no Android devs.

      Instead, we have only chip makers, who keep everything that really works closed, and researchers, who can’t version track or document code properly >:-) And this implies that:

      • the documentation is more scarce

      • it takes longer to support new hardware features

      Well, not that AOSP is that much better anyway.

    • not sure: gem5 has BSD license while QEMU has GPL

      This suits chip makers that want to distribute forks with secret IP to their customers.

      On the other hand, the chip makers tend to upstream less, and the project becomes more crappy on average :-)

    • gem5 is way more complex and harder to modify and maintain

      The only hairy thing in QEMU is the binary code generation.

      gem5 however has tended towards horrendous intensive code generation in order to support all its different hardware types

      gem5 also has a complex Python interface which is also largely auto-generated, which greatly increases the maintenance complexity of the project: [embedding-python-in-another-application].

      This is done so that reconfiguring platforms can be done quickly without recompiling, and it is amazing when it works, but the maintenance costs are also very high. For example, [pybind11] of several trivial param_ files accounted for 50% of the build time at one point: [pybind11-accounts-for-50-of-gem5-build-time].

OK, this is why we used gem5 in the first place, performance measurements!

Let’s see how many cycles [dhrystone], which Buildroot provides, takes for a few different input parameters.

We will do that for various input parameters in full system, by taking a checkpoint after a fast atomic CPU boot finishes, and then restoring in a more detailed mode and running the benchmark:

./build-buildroot --config 'BR2_PACKAGE_DHRYSTONE=y'
# Boot fast, take checkpoint, and exit.
./run --arch aarch64 --emulator gem5 --eval-after './gem5.sh'

# Restore the checkpoint after boot, and benchmark with input 1000.
./run \
  --arch aarch64 \
  --emulator gem5 \
  --eval-after './gem5.sh' \
  --gem5-readfile 'm5 resetstats;dhrystone 1000;m5 dumpstats' \
  --gem5-restore 1 \
  -- \
  --cpu-type=HPI \
  --restore-with-cpu=HPI \
  --caches \
  --l2cache \
  --l1d_size=64kB \
  --l1i_size=64kB \
  --l2_size=256kB \
;
# Get the value for number of cycles.
# head because there are two lines: our dumpstats and the
# automatic dumpstats at the end which we don't care about.
./gem5-stat --arch aarch64 | head -n 1

# Now for input 10000.
./run \
  --arch aarch64 \
  --emulator gem5 \
  --eval-after './gem5.sh' \
  --gem5-readfile 'm5 resetstats;dhrystone 10000;m5 dumpstats' \
  --gem5-restore 1 \
  -- \
  --cpu-type=HPI \
  --restore-with-cpu=HPI \
  --caches \
  --l2cache \
  --l1d_size=64kB \
  --l1i_size=64kB \
  --l2_size=256kB \
;
./gem5-stat --arch aarch64 | head -n 1

If you ever need a shell to quickly inspect the system state after boot, you can just use:

./run \
  --arch aarch64 \
  --emulator gem5 \
  --eval-after './gem5.sh' \
  --gem5-readfile 'sh' \
  --gem5-restore 1 \
;

This procedure is further automated and DRYed up at:

./gem5-bench-dhrystone
cat out/gem5-bench-dhrystone.txt

Output at 2438410c25e200d9766c8c65773ee7469b599e4a + 1:

n       cycles
1000    13665219
10000   20559002
100000  85977065

so as expected, the Dhrystone run with a larger input parameter 100000 took more cycles than the ones with smaller input parameters.

The ./gem5-stat commands above output the approximate number of CPU cycles it took Dhrystone to run.

A more naive and simpler to understand approach would be a direct:

./run --arch aarch64 --emulator gem5 --eval 'm5 checkpoint;m5 resetstats;dhrystone 10000;m5 exit'

but the problem is that this method does not allow you to easily run a different script without running the boot again. The ./gem5.sh script works around that by using m5 readfile as explained further at: Section 19.5.3, “gem5 checkpoint restore and run a different script”.

Now you can play a fun little game with your friends:

  • pick a computational problem

  • make a program that solves the computational problem, and writes its output to stdout

  • write the code that runs the correct computation in the smallest number of cycles possible

Interesting algorithms and benchmarks for this game are being collected at:

To find out why your program is slow, a good first step is to have a look at the gem5 m5out/stats.txt file.
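
For example, the headline numbers can be grepped out directly; a sketch, where the exact stat names vary between gem5 versions:

grep -E '^(system.cpu.numCycles|sim_insts)' "$(./getvar --arch aarch64 --emulator gem5 m5out_dir)/stats.txt"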

A few imperfections of our benchmarking method are:

  • when we do m5 resetstats and m5 exit, some time passes before the exec system call returns and the actual benchmark starts and ends

  • the benchmark outputs to stdout, which means extra cycles in addition to the actual computation. But TODO: how to get the output to check that it is correct without such IO cycles?

Solutions to these problems include:

  • modify benchmark code with instrumentation directly, see m5ops instructions for an example, and the sketch just after this list.

  • monitor known addresses TODO possible? Create an example.

Those problems should be insignificant if the benchmark runs for long enough however.
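
For the instrumentation solution mentioned in the list above, the idea reduces to wrapping the region of interest in m5ops calls, along the lines of this C sketch using gem5’s m5ops.h (the include path depends on how you link against gem5’s m5 utility library):

#include "m5ops.h"

int main(void) {
    volatile int x = 0;
    /* Zero the stats right before the region of interest. */
    m5_reset_stats(0, 0);
    for (int i = 0; i < 1000000; i++)
        x += i;
    /* Dump a stats snapshot right after it. */
    m5_dump_stats(0, 0);
    return 0;
}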

Besides optimizing a program for a given CPU setup, chip developers can also do the inverse, and optimize the chip for a given benchmark!

The rabbit hole is likely deep, but let’s scratch a bit of the surface.

./run --arch arm --cpus 2 --emulator gem5

Check with:

cat /proc/cpuinfo
getconf _NPROCESSORS_CONF
./run --cpus 2 --emulator gem5 --userland userland/linux/sysconf.c | grep _SC_NPROCESSORS_ONLN
./run --cpus 2 --emulator gem5 --userland userland/cpp/thread_hardware_concurrency.cpp

QEMU v4.0.0 user mode simulation always shows the number of cores of the host, presumably because the thread switching uses host threads directly, which would make that harder to implement.

It does not seem possible to make the guest see a different number of cores than what the host has. Full system does have the -smp option, which controls this.

E.g., all of the following output the same as nproc on the host:

nproc
./run --cpus 1 --userland userland/cpp/thread_hardware_concurrency.cpp
./run --cpus 2 --userland userland/cpp/thread_hardware_concurrency.cpp

This random page suggests that QEMU spawns one host thread per guest thread, and thus presumably delegates context switching to the host kernel: https://qemu.weilnetz.de/w64/2012/2012-12-04/qemu-tech.html#User-emulation-specific-details

We can confirm that with:

./run --userland userland/posix/pthread_count.c --cli-args 4
ps Haux | grep qemu | wc

At 369a47fc6e5c2f4a7f911c1c058b6088f8824463 + 1 QEMU appears to spawn 3 host threads plus one for every new guest thread created. Remember that userland/posix/pthread_count.c spawns N + 1 total threads if you count the main thread.

gem5 user mode multithreading has been particularly flaky compared to QEMU’s, but work is being put into improving it.

In gem5 syscall simulation, the fork syscall checks if there is a free CPU, and if there is a free one, the new thread runs on that CPU.

Otherwise the fork call fails, and therefore higher level interfaces to fork such as pthread_create also fail and return a failure status in the guest.
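
Portable guest code should therefore check the error return of pthread_create, along the lines of this sketch:

#include <pthread.h>
#include <stdio.h>
#include <string.h>

static void *work(void *arg) { return arg; }

int main(void) {
    pthread_t t;
    int err = pthread_create(&t, NULL, work, NULL);
    if (err) {
        /* On gem5 syscall emulation, this fires when no CPU is free. */
        fprintf(stderr, "pthread_create: %s\n", strerror(err));
        return 1;
    }
    pthread_join(t, NULL);
    return 0;
}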

For example, if we use just one CPU for userland/posix/pthread_self.c which spawns one thread besides main:

./run --cpus 1 --emulator gem5 --userland userland/posix/pthread_self.c --cli-args 1

fails with this error message coming from the guest stderr:

pthread_create: Resource temporarily unavailable

It works however if we add one extra CPU:

./run --cpus 2 --emulator gem5 --userland userland/posix/pthread_self.c --cli-args 1

Once threads exit, their CPU is freed and becomes available for new fork calls. For example, the following run spawns a thread, joins it, and then spawns again, and 2 CPUs are enough:

./run --cpus 2 --emulator gem5 --userland userland/posix/pthread_self.c --cli-args '1 2'

because at each point in time, only up to two threads are running.

gem5 syscall emulation does show the expected number of cores when queried, e.g.:

./run --cpus 1 --userland userland/cpp/thread_hardware_concurrency.cpp --emulator gem5
./run --cpus 2 --userland userland/cpp/thread_hardware_concurrency.cpp --emulator gem5

outputs 1 and 2 respectively.

This can also be seen clearly by running sched_getcpu:

./run \
  --arch aarch64 \
  --cli-args  4 \
  --cpus 8 \
  --emulator gem5 \
  --userland userland/linux/sched_getcpu.c \
;

which necessarily produces an output containing the CPU numbers from 1 to 4 and no higher:

1
3
4
2

TODO why does the 2 come at the end here? Would be good to do a detailed assembly run analysis.

Build the kernel with the gem5 arm Linux kernel patches, and then run:

./run \
  --arch aarch64 \
  --linux-build-id gem5-v4.15 \
  --emulator gem5 \
  --cpus 16 \
  -- \
  --param 'system.realview.gic.gem5_extensions = True' \
;

Tested in LKMC 788087c6f409b84adf3cff7ac050fa37df6d4c46. It fails after boot with FATAL: kernel too old as mentioned at: gem5 arm Linux kernel patches but everything seems to work on the gem5 side of things.

A quick ./run --emulator gem5 -- -h leads us to the options:

--caches
--l1d_size=1024
--l1i_size=1024
--l2cache
--l2_size=1024
--l3_size=1024

But keep in mind that it only affects benchmark performance of the most detailed CPU types as shown at: Table 2, “gem5 cache support in function of CPU type”.

Table 2. gem5 cache support in function of CPU type

arch  CPU type         caches used
X86   AtomicSimpleCPU  no
X86   DerivO3CPU       ?*
ARM   AtomicSimpleCPU  no
ARM   HPI              yes

*: couldn’t test because of:

Cache sizes can in theory be checked with the methods described at: https://superuser.com/questions/55776/finding-l2-cache-size-in-linux:

getconf -a | grep CACHE
lscpu
cat /sys/devices/system/cpu/cpu0/cache/index2/size

but for some reason the Linux kernel is not seeing the cache sizes:

Behaviour breakdown:

  • arm QEMU and gem5 (both AtomicSimpleCPU or HPI), x86 gem5: the /sys files don’t exist, and the getconf and lscpu values are empty

  • x86 QEMU: the /sys files exist, but the getconf and lscpu values are still empty

So we take a performance measurement approach instead:

./gem5-bench-cache -- --arch aarch64
cat "$(./getvar --arch aarch64 run_dir)/bench-cache.txt"

which gives:

cmd ./run --emulator gem5 --arch aarch64 --gem5-readfile "dhrystone 1000" --gem5-restore 1 -- --caches --l2cache --l1d_size=1024   --l1i_size=1024   --l2_size=1024   --l3_size=1024   --cpu-type=HPI --restore-with-cpu=HPI
time 23.82
exit_status 0
cycles 93284622
instructions 4393457

cmd ./run --emulator gem5 --arch aarch64 --gem5-readfile "dhrystone 1000" --gem5-restore 1 -- --caches --l2cache --l1d_size=1024kB --l1i_size=1024kB --l2_size=1024kB --l3_size=1024kB --cpu-type=HPI --restore-with-cpu=HPI
time 14.91
exit_status 0
cycles 10128985
instructions 4211458

cmd ./run --emulator gem5 --arch aarch64 --gem5-readfile "dhrystone 10000" --gem5-restore 1 -- --caches --l2cache --l1d_size=1024   --l1i_size=1024   --l2_size=1024   --l3_size=1024   --cpu-type=HPI --restore-with-cpu=HPI
time 51.87
exit_status 0
cycles 188803630
instructions 12401336

cmd ./run --emulator gem5 --arch aarch64 --gem5-readfile "dhrystone 10000" --gem5-restore 1 -- --caches --l2cache --l1d_size=1024kB --l1i_size=1024kB --l2_size=1024kB --l3_size=1024kB --cpu-type=HPI --restore-with-cpu=HPI
time 35.35
exit_status 0
cycles 20715757
instructions 12192527

cmd ./run --emulator gem5 --arch aarch64 --gem5-readfile "dhrystone 100000" --gem5-restore 1 -- --caches --l2cache --l1d_size=1024   --l1i_size=1024   --l2_size=1024   --l3_size=1024   --cpu-type=HPI --restore-with-cpu=HPI
time 339.07
exit_status 0
cycles 1176559936
instructions 94222791

cmd ./run --emulator gem5 --arch aarch64 --gem5-readfile "dhrystone 100000" --gem5-restore 1 -- --caches --l2cache --l1d_size=1024kB --l1i_size=1024kB --l2_size=1024kB --l3_size=1024kB --cpu-type=HPI --restore-with-cpu=HPI
time 240.37
exit_status 0
cycles 125666679
instructions 91738770

We make the following conclusions:

  • the number of instructions almost does not change: the CPU is waiting for memory all the extra time. TODO: why does it change at all?

  • the wall clock execution time is not directly proportional to the number of cycles: here we had a 10x cycle increase, but only 2x time increase. This suggests that the simulation of cycles in which the CPU is waiting for memory to come back is faster.

TODO These look promising:

--list-mem-types
--mem-type=MEM_TYPE
--mem-channels=MEM_CHANNELS
--mem-ranks=MEM_RANKS
--mem-size=MEM_SIZE

TODO: how to verify this with the Linux kernel? Besides raw performance benchmarks.

./run --memory 512M

We can verify this on the guest directly from the kernel with:

cat /proc/meminfo

as of LKMC 1e969e832f66cb5a72d12d57c53fb09e9721d589 this output contains:

MemTotal:         498472 kB

which we convert to hexadecimal with:

printf '0x%X\n' $((498472 * 1024))

to:

0x1E6CA000

TODO: why is this value a bit smaller than 512M? Presumably because MemTotal excludes memory reserved by the kernel itself, but this remains to be confirmed.

free also gives the same result:

free -b

contains:

             total       used       free     shared    buffers     cached
Mem:     510435328   20385792  490049536          0     503808    2760704
-/+ buffers/cache:   17121280  493314048
Swap:            0          0          0

which we convert to hexadecimal with:

printf '0x%X\n' 510435328

giving the same 0x1E6CA000 as before.

man free from Ubuntu’s procps 3.3.15 tells us that free obtains this information from /proc/meminfo as well.

From C, we can get this information with sysconf(_SC_PHYS_PAGES) or get_phys_pages():

./linux/total_memory.out

Output:

sysconf(_SC_PHYS_PAGES) * sysconf(_SC_PAGESIZE) = 0x1E6CA000
sysconf(_SC_AVPHYS_PAGES) * sysconf(_SC_PAGESIZE) = 0x1D178000
get_phys_pages() * sysconf(_SC_PAGESIZE) = 0x1E6CA000
get_avphys_pages() * sysconf(_SC_PAGESIZE) = 0x1D178000

TODO These look promising:

--ethernet-linkspeed
--ethernet-linkdelay

Clock frequency: TODO how does it affect performance in benchmarks?

./run --arch aarch64 --emulator gem5 -- --cpu-clock 10000000

Check with:

m5 resetstats
sleep 10
m5 dumpstats

and then:

./gem5-stat --arch aarch64

TODO: why doesn’t this exist:

ls /sys/devices/system/cpu/cpu0/cpufreq

Analogous to QEMU:

./run --arch arm --kernel-cli 'init=/lkmc/linux/poweroff.out' --emulator gem5

Internals: when we give --command-line= to gem5, it overrides default command lines, including some mandatory ones which are required to boot properly.

Our run script hardcodes the required options in the default --command-line and appends extra options given by -e.

To find the default options in the first place, we removed --command-line and ran:

./run --arch arm --emulator gem5

and then looked at the line of the Linux kernel that starts with:

Kernel command line:

Analogous to QEMU, on the first shell:

./run --arch arm --emulator gem5 --gdb-wait

On the second shell:

./run-gdb --arch arm --emulator gem5

On a third shell:

./gem5-shell

When you want to break, just do a Ctrl-C on GDB shell, and then continue.

And we now see the boot messages, and then get a shell. Now try the ./count.sh procedure described for QEMU at: Section 2.2, “GDB step debug kernel post-boot”.

We are unable to use gdbserver because of networking as mentioned at: Section 14.3.1.3, “gem5 host to guest networking”

The alternative is to do as in GDB step debug userland processes.

Next, follow the exact same steps explained at GDB step debug userland non-init without --gdb-wait, but passing --emulator gem5 to every command as usual.

But then TODO (I’ll still go crazy one of these days): for arm, while debugging ./linux/myinsmod.out hello.ko, after the line:

23     if (argc < 3) {
24         params = "";

when I press n, it just runs the program until the end, instead of stopping on the next line of execution. The module does get inserted normally.

TODO:

./run-gdb --arch arm --emulator gem5 --userland gem5-1.0/gem5/util/m5/m5 main

breaks when m5 is run on guest, but does not show the source code.

Analogous to QEMU’s Snapshot, but better since it can be started from inside the guest, so we can easily checkpoint after a specific guest event, e.g. just before init is done.

To see it in action try:

./run --arch aarch64 --emulator gem5

In the guest, wait for the boot to end and run:

m5 checkpoint

where the gem5 m5 executable is a guest utility present inside the gem5 tree, which we cross-compiled and installed into the guest.

To restore the checkpoint, kill the VM and run:

./run --arch aarch64 --emulator gem5 --gem5-restore 1

The --gem5-restore option restores the checkpoint that was created most recently.

Let’s create a second checkpoint to see how it works, in guest:

date >f
m5 checkpoint

Kill the VM, and try it out:

./run --arch aarch64 --emulator gem5 --gem5-restore 1

Here we use --gem5-restore 1 again, since the second snapshot we took is now the most recent one.

Now in the guest:

cat f

contains the date. The file f wouldn’t exist had we used the first checkpoint with --gem5-restore 2, which is the second most recent snapshot taken.

If you automate things with Kernel command line parameters as in:

./run --arch arm --eval 'm5 checkpoint;m5 resetstats;dhrystone 1000;m5 exit' --emulator gem5

Then there is no need to pass the kernel command line again to gem5 for replay:

./run --arch arm --emulator gem5 --gem5-restore 1

since boot has already happened, and the parameters are already in the RAM of the snapshot.

In order to debug checkpoint restore bugs, this minimal setup using userland/freestanding/gem5_checkpoint.S can be handy:

./build-userland --arch aarch64 --static
./run --arch aarch64 --emulator gem5 --static --userland userland/freestanding/gem5_checkpoint.S --trace-insts-stdout
./run --arch aarch64 --emulator gem5 --static --userland userland/freestanding/gem5_checkpoint.S --trace-insts-stdout --gem5-restore 1
./run --arch aarch64 --emulator gem5 --static --userland userland/freestanding/gem5_checkpoint.S --trace-insts-stdout --gem5-restore 1 -- --cpu-type=DerivO3CPU --restore-with-cpu=DerivO3CPU --caches

On the initial run, we see that all instructions are executed and the checkpoint is taken:

      0: system.cpu: A0 T0 : @asm_main_after_prologue    :   movz   x0, #0, #0        : IntAlu :  D=0x0000000000000000  flags=(IsInteger)
    500: system.cpu: A0 T0 : @asm_main_after_prologue+4    :   movz   x1, #0, #0        : IntAlu :  D=0x0000000000000000  flags=(IsInteger)
   1000: system.cpu: A0 T0 : @asm_main_after_prologue+8    :   m5checkpoint             : IntAlu :   flags=(IsInteger|IsNonSpeculative|IsUnverifiable)
Writing checkpoint
warn: Checkpoints for file descriptors currently do not work.
info: Entering event queue @ 1000.  Starting simulation...
   1500: system.cpu: A0 T0 : @asm_main_after_prologue+12    :   movz   x0, #0, #0        : IntAlu :  D=0x0000000000000000  flags=(IsInteger)
   2000: system.cpu: A0 T0 : @asm_main_after_prologue+16    :   m5exit                   : No_OpClass :   flags=(IsInteger|IsNonSpeculative)
Exiting @ tick 2000 because m5_exit instruction encountered

Then, on the first restore run, the checkpoint is restored, and only instructions after the checkpoint are executed:

info: Entering event queue @ 1000.  Starting simulation...
   1500: system.cpu: A0 T0 : @asm_main_after_prologue+12    :   movz   x0, #0, #0        : IntAlu :  D=0x0000000000000000  flags=(IsInteger)
   2000: system.cpu: A0 T0 : @asm_main_after_prologue+16    :   m5exit                   : No_OpClass :   flags=(IsInteger|IsNonSpeculative)
Exiting @ tick 2000 because m5_exit instruction encountered

and a similar thing happens for the restore with a different CPU type:

info: Entering event queue @ 1000.  Starting simulation...
  79000: system.cpu: A0 T0 : @asm_main_after_prologue+12    :   movz   x0, #0, #0        : IntAlu :  D=0x0000000000000000  FetchSeq=1  CPSeq=1  flags=(IsInteger)
Exiting @ tick 84500 because m5_exit instruction encountered

Here we don’t see the last m5 exit instruction on the log, but it must just be something to do with the O3 logging.

Checkpoints are stored inside the m5out directory at:

"$(./getvar --emulator gem5 m5out_dir)/cpt.<checkpoint-time>"

where <checkpoint-time> is the cycle number at which the checkpoint was taken.

fs.py exposes the -r N flag to restore checkpoints: it picks the N-th checkpoint, ordered by <checkpoint-time>: https://github.com/gem5/gem5/blob/e02ec0c24d56bce4a0d8636a340e15cd223d1930/configs/common/Simulation.py#L118

However, that interface is bad because if you had taken previous checkpoints, you have no idea what N to use, unless you memorize which checkpoint was taken at which cycle.

Therefore, just use our superior --gem5-restore flag, which uses directory timestamps to determine which checkpoint you created most recently.

The -r N integer value is just pure fs.py sugar: the backend at m5.instantiate just takes the actual checkpoint directory path as input.
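
In other words, inside a config script the restore reduces to something like this Python sketch (the checkpoint path is hypothetical):

import m5

# Build the system exactly as in the original run, then:
m5.instantiate(ckpt_dir='m5out/cpt.1000')
exit_event = m5.simulate()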

You want to automate running several tests from a single pristine post-boot state.

The problem is that boot takes forever, and after the checkpoint, the memory and disk states are fixed, so you can’t for example:

  • hack up an existing rc script, since the disk is fixed

  • inject new kernel boot command line options, since those have already been put into memory by the bootloader

There are however a few loopholes, m5 readfile being the simplest, as it reads whatever is present on the host.

So we can do it like:

# Boot, checkpoint and exit.
printf 'echo "setup run";m5 exit' > "$(./getvar gem5_readfile_file)"
./run --emulator gem5 --eval 'm5 checkpoint;m5 readfile > /tmp/gem5.sh && sh /tmp/gem5.sh'

# Restore and run the first benchmark.
printf 'echo "first benchmark";m5 exit' > "$(./getvar gem5_readfile_file)"
./run --emulator gem5 --gem5-restore 1

# Restore and run the second benchmark.
printf 'echo "second benchmark";m5 exit' > "$(./getvar gem5_readfile_file)"
./run --emulator gem5 --gem5-restore 1

# If something weird happened, create an interactive shell to examine the system.
printf 'sh' > "$(./getvar gem5_readfile_file)"
./run --emulator gem5 --gem5-restore 1

Since this is such a common setup, we provide the following helpers for this operation:

  • ./run --gem5-readfile is a convenient way to set the m5 readfile file contents from a string in the command line, e.g.:

    # Boot, checkpoint and exit.
    ./run --emulator gem5 --eval './gem5.sh' --gem5-readfile 'echo "setup run"'
    
    # Restore and run the first benchmark.
    ./run --emulator gem5 --gem5-restore 1 --gem5-readfile 'echo "first benchmark"'
    
    # Restore and run the second benchmark.
    ./run --emulator gem5 --gem5-restore 1 --gem5-readfile 'echo "second benchmark"'
  • rootfs_overlay/lkmc/gem5.sh. This script is analogous to gem5’s in-tree hack_back_ckpt.rcS, but with less noise.

    Usage:

    # Boot, checkpoint and exit.
    ./run --emulator gem5 --eval './gem5.sh' --gem5-readfile 'echo "setup run"'
    
    # Restore and run the first benchmark.
    ./run --emulator gem5 --gem5-restore 1 --gem5-readfile 'echo "first benchmark"'
    
    # Restore and run the second benchmark.
    ./run --emulator gem5 --gem5-restore 1 --gem5-readfile 'echo "second benchmark"'

Their usage is also exemplified at gem5 run benchmark.

If you forgot to use an appropriate --eval for your boot and the simulation is already running, rootfs_overlay/lkmc/gem5.sh can be used directly from an interactive guest shell.

First we reset the readfile to something that runs quickly:

printf 'echo "first benchmark"' > "$(./getvar gem5_readfile_file)"

and then in the guest, take a checkpoint and exit:

./gem5.sh

Now the guest is in a state where readfile will be executed automatically without interactive intervention:

./run --emulator gem5 --gem5-restore 1 --gem5-readfile 'echo "first benchmark"'
./run --emulator gem5 --gem5-restore 1 --gem5-readfile 'echo "second benchmark"'

Other loophole possibilities to execute different benchmarks non-interactively include:

gem5 can switch to a different CPU model when restoring a checkpoint.

A common combo is to boot Linux with a fast CPU, make a checkpoint and then replay the benchmark of interest with a slower CPU.

This can be observed interactively in full system with:

./run --arch aarch64 --emulator gem5

Then in the guest terminal after boot ends:

sh -c 'm5 checkpoint;sh'
m5 exit

And then restore the checkpoint with a different slower CPU:

./run --arch aarch64 --emulator gem5 --gem5-restore 1 -- --caches --cpu-type=DerivO3CPU

And now you will notice that everything happens much slower in the guest terminal!

One even more direct and minimal way to observe this is with userland/freestanding/gem5_checkpoint.S which was mentioned at gem5 checkpoint userland minimal example plus some logging:

./run \
  --arch aarch64 \
  --emulator gem5 \
  --static \
  --trace ExecAll,FmtFlag,O3CPU,SimpleCPU \
  --userland userland/freestanding/gem5_checkpoint.S \
;
cat "$(./getvar --arch aarch64 --emulator gem5 trace_txt_file)"
./run \
  --arch aarch64 \
  --emulator gem5 \
  --gem5-restore 1 \
  --static \
  --trace ExecAll,FmtFlag,O3CPU,SimpleCPU \
  --userland userland/freestanding/gem5_checkpoint.S \
  -- \
  --caches \
  --cpu-type DerivO3CPU \
  --restore-with-cpu DerivO3CPU \
;
cat "$(./getvar --arch aarch64 --emulator gem5 trace_txt_file)"

At gem5 2235168b72537535d74c645a70a85479801e0651, the first run does everything in AtomicSimpleCPU:

...
      0: SimpleCPU: system.cpu.dcache_port: received snoop pkt for addr:0x1f92 WriteReq
      0: SimpleCPU: system.cpu.dcache_port: received snoop pkt for addr:0x1e40 WriteReq
      0: SimpleCPU: system.cpu.dcache_port: received snoop pkt for addr:0x1e30 WriteReq
      0: SimpleCPU: system.cpu: Tick
      0: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue    :   movz   x0, #0, #0        : IntAlu :  D=0x0000000000000000  flags=(IsInteger)
    500: SimpleCPU: system.cpu: Tick
    500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+4    :   movz   x1, #0, #0        : IntAlu :  D=0x0000000000000000  flags=(IsInteger)
   1000: SimpleCPU: system.cpu: Tick
   1000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+8    :   m5checkpoint             : IntAlu :   flags=(IsInteger|IsNonSpeculative|IsUnverifiable)
   1000: SimpleCPU: system.cpu: Resume
   1500: SimpleCPU: system.cpu: Tick
   1500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+12    :   movz   x0, #0, #0        : IntAlu :  D=0x0000000000000000  flags=(IsInteger)
   2000: SimpleCPU: system.cpu: Tick
   2000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+16    :   m5exit                   : No_OpClass :   flags=(IsInteger|IsNonSpeculative)

and after restore we see as expected a single ExecEnable instruction executed amidst O3CPU noise:

FullO3CPU: Ticking main, FullO3CPU.
  79000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+12    :   movz   x0, #0, #0        : IntAlu :  D=0x0000000000000000  FetchSeq=1  CPSeq=1  flags=(IsInteger)
  82500: O3CPU: system.cpu: Removing committed instruction [tid:0] PC (0x400084=>0x400088).(0=>1) [sn:1]
  82500: O3CPU: system.cpu: Removing instruction, [tid:0] [sn:1] PC (0x400084=>0x400088).(0=>1)
  82500: O3CPU: system.cpu: Scheduling next tick!
  83000: O3CPU: system.cpu:

which is the movz after the checkpoint. The final m5exit does not appear due to DerivO3CPU logging insanity.

Bibliography:

Besides switching CPUs after a checkpoint restore, fs.py also has the --fast-forward option to automatically run the simulation from the start on a less detailed CPU, and switch to a more detailed CPU at a given tick.

This is generally useless compared to checkpoint restoring because:

  • checkpoint restore allows you to run multiple different workloads after the restore, and to restore to multiple different system states, which you almost always want to do

  • we generally don’t know the exact tick at which the region of interest will start, especially as the binaries change. It is much easier to just instrument the content with a checkpoint m5op

But let’s give it a try anyway with userland/freestanding/gem5_checkpoint.S, which was mentioned at gem5 checkpoint userland minimal example:

./run \
  --arch aarch64 \
  --emulator gem5 \
  --static \
  --trace ExecAll,FmtFlag,O3CPU,SimpleCPU \
  --userland userland/freestanding/gem5_checkpoint.S \
  -- \
  --caches \
  --cpu-type DerivO3CPU \
  --fast-forward 1000 \
;
cat "$(./getvar --arch aarch64 --emulator gem5 trace_txt_file)"

At gem5 2235168b72537535d74c645a70a85479801e0651 we see something like:

      0: O3CPU: system.switch_cpus: Creating O3CPU object.
      0: O3CPU: system.switch_cpus: Workload[0] process is 0
      0: SimpleCPU: system.cpu: ActivateContext 0
      0: SimpleCPU: system.cpu.dcache_port: received snoop pkt for addr:0 WriteReq
      0: SimpleCPU: system.cpu.dcache_port: received snoop pkt for addr:0x40 WriteReq
...

      0: SimpleCPU: system.cpu.dcache_port: received snoop pkt for addr:0x1f92 WriteReq
      0: SimpleCPU: system.cpu.dcache_port: received snoop pkt for addr:0x1e40 WriteReq
      0: SimpleCPU: system.cpu.dcache_port: received snoop pkt for addr:0x1e30 WriteReq
      0: SimpleCPU: system.cpu: Tick
      0: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue    :   movz   x0, #0, #0        : IntAlu :  D=0x0000000000000000  flags=(IsInteger)
    500: SimpleCPU: system.cpu: Tick
    500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+4    :   movz   x1, #0, #0        : IntAlu :  D=0x0000000000000000  flags=(IsInteger)
   1000: SimpleCPU: system.cpu: Tick
   1000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+8    :   m5checkpoint             : IntAlu :   flags=(IsInteger|IsNonSpeculative|IsUnverifiable)
   1000: O3CPU: system.switch_cpus: [tid:0] Calling activate thread.
   1000: O3CPU: system.switch_cpus: [tid:0] Adding to active threads list
   1500: O3CPU: system.switch_cpus:

FullO3CPU: Ticking main, FullO3CPU.
   1500: O3CPU: system.switch_cpus: Scheduling next tick!
   2000: O3CPU: system.switch_cpus:

FullO3CPU: Ticking main, FullO3CPU.
   2000: O3CPU: system.switch_cpus: Scheduling next tick!
   2500: O3CPU: system.switch_cpus:

...

FullO3CPU: Ticking main, FullO3CPU.
  44500: ExecEnable: system.switch_cpus: A0 T0 : @asm_main_after_prologue+12    :   movz   x0, #0, #0        : IntAlu :  D=0x00000000000
  48000: O3CPU: system.switch_cpus: Removing committed instruction [tid:0] PC (0x400084=>0x400088).(0=>1) [sn:1]
  48000: O3CPU: system.switch_cpus: Removing instruction, [tid:0] [sn:1] PC (0x400084=>0x400088).(0=>1)
  48000: O3CPU: system.switch_cpus: Scheduling next tick!
  48500: O3CPU: system.switch_cpus:

...

We can also compare that to the same log but without --fast-forward and other CPU switch options:

      0: SimpleCPU: system.cpu.dcache_port: received snoop pkt for addr:0x1e40 WriteReq
      0: SimpleCPU: system.cpu.dcache_port: received snoop pkt for addr:0x1e30 WriteReq
      0: SimpleCPU: system.cpu: Tick
      0: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue    :   movz   x0, #0, #0        : IntAlu :  D=0x0000000000000000  flags=(IsInteger)
    500: SimpleCPU: system.cpu: Tick
    500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+4    :   movz   x1, #0, #0        : IntAlu :  D=0x0000000000000000  flags=(IsInteger)
   1000: SimpleCPU: system.cpu: Tick
   1000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+8    :   m5checkpoint             : IntAlu :   flags=(IsInteger|IsNonSpeculative|IsUnverifiable)
   1000: SimpleCPU: system.cpu: Resume
   1500: SimpleCPU: system.cpu: Tick
   1500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+12    :   movz   x0, #0, #0        : IntAlu :  D=0x0000000000000000  flags=(IsInteger)
   2000: SimpleCPU: system.cpu: Tick
   2000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+16    :   m5exit                   : No_OpClass :   flags=(IsInteger|IsNonSpeculative)

Therefore, it is clear that what we wanted did happen:

  • up until tick 1000, SimpleCPU was ticking

  • after tick 1000, the O3CPU started ticking

Remember that in the gem5 command line, we can either pass options to the script being run as in:

build/X86/gem5.opt configs/examples/fs.py --some-option

or to the gem5 executable itself:

build/X86/gem5.opt --some-option configs/examples/fs.py

To pass options to the script in our setup, use:

  • get help:

    ./run --emulator gem5 -- -h
  • boot with the more detailed and slow HPI CPU model:

    ./run --arch arm --emulator gem5 -- --caches --cpu-type=HPI

To pass options to the gem5 executable we expose the --gem5-exe-args option:

  • get help:

    ./run --gem5-exe-args='-h' --emulator gem5

m5ops are magic instructions which lead gem5 to do magic things, like quitting or dumping stats.

Documentation: http://gem5.org/M5ops

There are two main ways to use m5ops: through the m5 guest command line utility, or by hardcoding the raw m5ops instructions into your own code.

m5 is convenient if you only want to take snapshots before or after the benchmark, without altering its source code. It uses the m5ops instructions as its backend.

m5 cannot / should not be used however:

  • in bare metal setups

  • when you want to call the instructions from inside interest points of your benchmark. Otherwise you add the syscall overhead to the benchmark, which is more intrusive and might affect results.

    Why not just hardcode some m5ops instructions as in our example instead, since you are going to modify the source of the benchmark anyway?

m5 is a command line utility that is installed and run on the guest, and that serves as a CLI front-end for the m5ops.

It is possible to guess what most m5 commands do from the corresponding m5ops, but let’s at least document the less obvious ones here.

In LKMC we build m5 with:

./build-m5 --arch aarch64

The m5 executable can be run on User mode simulation as normal with:

./run --arch aarch64 --emulator gem5 --userland "$(./getvar --arch aarch64 out_rootfs_overlay_bin_dir)/m5" --cli-args dumpstats

This can be a good way to test m5ops since it executes very quickly.

m5 exit: end the simulation.

Sane Python scripts will exit gem5 with status 0, which is what fs.py does.

m5 dumpstats: makes gem5 dump one more statistics entry to the gem5 m5out/stats.txt file.

m5 fail: end the simulation with a failure exit event:

m5 fail 1

Sane Python scripts would use that as the exit status of gem5, which would be useful for testing purposes, but fs.py at 200281b08ca21f0d2678e23063f088960d3c0819 just prints an error message:

Simulated exit code not 0! Exit code is 1

and exits with status 0.

We then parse that string ourselves in run and exit with the correct status…​

TODO: it used to be like that, but it actually got changed to just print the message. Why? https://gem5-review.googlesource.com/c/public/gem5/+/4880

m5 fail is just a superset of m5 exit, which is just:

m5 fail 0

m5 writefile: send a guest file to the host. 9P is a more advanced alternative.

Guest:

echo mycontent > myfileguest
m5 writefile myfileguest myfilehost

Host:

cat "$(./getvar --arch aarch64 --emulator gem5 m5out_dir)/myfilehost"

Does not work for subdirectories, gem5 crashes:

m5 writefile myfileguest mydirhost/myfilehost

m5 readfile: read a host file pointed to by the fs.py --script option to stdout.

Host:

date > "$(./getvar gem5_readfile_file)"

Guest:

m5 readfile

Outcome: date shows on guest.

m5 initparam: ermm, just another m5 readfile that only takes integers and only from CLI options? Is this software so redundant?

Host:

./run --emulator gem5 --gem5-restore 1 -- --initparam 13
./run --emulator gem5 --gem5-restore 1 -- --initparam 42

Guest:

m5 initparam

Outputs the given parameter.

m5 execfile: a trivial combination of m5 readfile + executing the script.

Host:

printf '#!/bin/sh
echo asdf
' > "$(./getvar gem5_readfile_file)"

Guest:

touch /tmp/execfile
chmod +x /tmp/execfile
m5 execfile

Outcome:

asdf

gem5 allocates some magic instructions on unused instruction encodings for convenient guest instrumentation.

Those instructions are exposed through the in-tree gem5 m5 executable.

To make things simpler to understand, you can play around with our own minimized educational m5 subset:

The instructions used by ./c/m5ops.out are present in lkmc/m5ops.h in a very simple to understand and reuse inline assembly form.

To use that file, first rebuild m5ops.out with the m5ops instructions enabled and install it on the root filesystem:

./build-userland \
  --arch aarch64 \
  --ccflags='-DLKMC_M5OPS_ENABLE=1' \
  --force-rebuild \
  userland/c/m5ops.c \
;
./build-buildroot --arch aarch64

We don’t enable -DLKMC_M5OPS_ENABLE=1 by default on userland executables because we try to use a single image for gem5, QEMU and native, and those instructions would break the latter two. We enable it in the Baremetal setup by default since we already have different images for QEMU and gem5 there.

Then, from inside gem5 Buildroot setup, test it out with:

# checkpoint
./c/m5ops.out c

# dumpstats
./c/m5ops.out d

# exit
./c/m5ops.out e

# resetstats
./c/m5ops.out r

In theory, the cleanest way to add m5ops to your benchmarks would be to do exactly what the m5 tool does.

However, I think it is usually not worth the trouble of hacking up the build system of the benchmark to do this, and I recommend just hardcoding in a few raw instructions here and there, and managing it with version control + sed.

Let’s study how the gem5 m5 executable uses them:

We notice that there are two different implementations for each arch:

  • magic instructions, which don’t exist in the corresponding arch

  • magic memory addresses on a given page

TODO: what is the advantage of magic memory addresses? They seem worse, since you have to do more setup work by telling the kernel never to touch the magic page. For the magic instructions, the only thing that could go wrong is if you run some crazy kind of fuzzing workload that generates random instructions.

Then, in aarch64 magic instructions for example, the lines:

.macro  m5op_func, name, func, subfunc
        .globl \name
        \name:
        .long 0xff000110 | (\func << 16) | (\subfunc << 12)
        ret

define a simple function for each m5op. Here we see that:

  • 0xff000110 is a base mask for the magic non-existing instruction

  • \func and \subfunc are OR-applied on top of the base mask, and define which m5op this is.

    Those values will loop over the magic constants defined in m5ops.h with the deferred preprocessor idiom (see the sketch after this list).

    For example, exit is 0x21 due to:

    #define M5OP_EXIT               0x21
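
As an aside, here is a minimal hedged C++ sketch of the general deferred preprocessor / X-macro idea behind those constants. The macro and constant names here are illustrative only, not gem5’s actual ones; only the 0xff000110 base mask and the 0x21 exit value come from the sources above:

#include <cstdint>

// One central list of (name, func) pairs...
#define M5OP_FOREACH_SKETCH(M) \
    M(m5op_exit_encoding, 0x21)

// ...is expanded once per entry to OR the func field into the base mask.
#define M5OP_DEFINE_ENCODING(name, func) \
    constexpr std::uint32_t name = 0xff000110 | ((func) << 16);

M5OP_FOREACH_SKETCH(M5OP_DEFINE_ENCODING)

static_assert(m5op_exit_encoding == 0xff210110, "unexpected exit encoding");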

Finally, m5.c calls the defined functions as in:

m5_exit(ints[0]);

Therefore, the runtime "argument" that gets passed to the instruction, e.g. the delay in ticks until the exit for m5 exit, gets passed directly through the aarch64 calling convention.

Keep in mind that for all archs, m5.c does the calls with 64-bit integers:

uint64_t ints[2] = {0,0};
parse_int_args(argc, argv, ints, argc);
m5_fail(ints[1], ints[0]);

Therefore, for example:

  • aarch64 uses x0 for the first argument and x1 for the second, since each register is 64 bits long already

  • arm uses r0 and r1 for the first argument, and r2 and r3 for the second, since each register is only 32 bits long

That convention specifies that x0 to x7 contain the function arguments, so x0 contains the first argument, and x1 the second.

In our m5ops example, we just hardcode everything in the assembly one-liners we are producing.

We ignore the \subfunc since it is always 0 on the ops that interest us.
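
To make that fully concrete, here is a minimal hedged C++ sketch of such a hardcoded one-liner for the aarch64 exit operation, putting together the 0xff000110 base mask, \func = 0x21 and \subfunc = 0 from above. The function name is our own illustrative choice and this is not lkmc/m5ops.h verbatim:

#include <cstdint>

// Emit the aarch64 m5 exit magic instruction:
// 0xff000110 | (0x21 << 16) == 0xff210110.
// The delay argument goes into x0 per the aarch64 calling convention.
static inline void lkmc_m5_exit_sketch(std::uint64_t delay)
{
    asm volatile(
        "mov x0, %0\n"
        ".long 0xff210110\n"
        :
        : "r"(delay)
        : "x0", "memory");
}

int main()
{
    lkmc_m5_exit_sketch(0); // under gem5, exit the simulation immediately
}

On QEMU or native hardware this undefined encoding would raise SIGILL, which is why such code is gated behind -DLKMC_M5OPS_ENABLE=1 as explained above.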

include/gem5/asm/generic/m5ops.h also describes some annotation instructions.

https://gem5.googlesource.com/arm/linux/ contains ARM Linux kernel forks created by ARM Holdings, with a few gem5 specific Linux kernel patches applied on top of a few upstream kernel releases.

Our build script automatically adds that remote for us as gem5-arm.

The patches are optional: the vanilla kernel does boot. But they add some interesting gem5-specific optimizations, instrumentations and device support.

The patches also add defconfigs that are known to work well with gem5.

In order to use those patches and their associated configs, we recommend using [linux-kernel-build-variants] as:

git -C "$(./getvar linux_source_dir)" fetch gem5-arm gem5/v4.15:gem5/v4.15
git -C "$(./getvar linux_source_dir)" checkout gem5/v4.15
./build-linux \
  --arch aarch64 \
  --custom-config-file-gem5 \
  --linux-build-id gem5-v4.15 \
;
git -C "$(./getvar linux_source_dir)" checkout -
./run \
  --arch aarch64 \
  --emulator gem5 \
  --linux-build-id gem5-v4.15 \
;

QEMU also boots that kernel successfully:

./run \
  --arch aarch64 \
  --linux-build-id gem5-v4.15 \
;

but glibc kernel version checks make init fail with:

FATAL: kernel too old

because glibc was built to expect a newer Linux kernel as shown at: Section 10.4.1, “FATAL: kernel too old failure in userland simulation”. Your choices to solve this are:

  • see if there is a more recent gem5 kernel available, or port your patch of interest to the newest kernel

  • modify this repo to use uClibc, which is not hard because of Buildroot

  • patch glibc to remove that check, which is easy because glibc is in a submodule of this repo

It is obviously not possible to understand what the Linux kernel fork commits actually do from their commit messages, so let’s explain them one by one here as we understand them:

Tested on 649d06d6758cefd080d04dc47fd6a5a26a620874 + 1.

We have observed that with the kernel patches, boot is 2x faster, falling from 1m40s to 50s.

With ts, we see that a large part of the difference is at the message:

clocksource: Switched to clocksource arch_sys_counter

which takes 4s on the patched kernel, and 30s on the unpatched one! TODO understand why, especially if it is a config difference, or if it actually comes from a patch.

When you run gem5, it generates an m5out directory at:

echo "$(./getvar --arch arm --emulator gem5 m5out_dir)"

The location of that directory can be set with ./gem5.opt -d, and defaults to ./m5out.

The files in that directory contain some very important information about the run, and you should become familiar with every one of them.

The m5out/system.terminal file contains UART output, either from the Linux kernel or from the baremetal system.

Can also be seen live on m5term.

The m5out/system.workload.dmesg file used to be called just m5out/system.dmesg, but the name was changed after the workload refactorings of March 2020.

This file is capable of showing terminal messages that are printk'd before the serial is enabled, as described at: Linux kernel early boot messages.

The file is dumped only on kernel panics which gem5 can detect by the PC address: Exit gem5 on panic.

This mechanism can be very useful to debug the Linux kernel boot if problems happen before the serial is enabled.

This magic mechanism works by activating an event when the PC reaches the printk address, much like gem5 can detect panic by PC, and then parsing the printk function arguments and buffers!

The relevant source is at src/kern/linux/printk.c.

We can test this mechanism in a controlled way by hacking a panic() into the kernel next to a printk that shows up before the serial is enabled, e.g. on Linux v5.4.3 we could do:

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index f296d89be757..3e79916322c2 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -6207,6 +6207,7 @@ void __init ftrace_init(void)

    pr_info("ftrace: allocating %ld entries in %ld pages\n",
        count, count / ENTRIES_PER_PAGE + 1);
+   panic("foobar");

    last_ftrace_enabled = ftrace_enabled = 1;

With this, after the panic, system.workload.dmesg contains on LKMC d09a0d97b81582cc88381c4112db631da61a048d aarch64:

[0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd070]
[0.000000] Linux version 5.4.3-dirty (lkmc@f7688b48ac46e9a669e279f1bc167722d5141eda) (gcc version 8.3.0 (Buildroot 2019.11-00002-g157ac499cf)) #1 SMP Thu Jan 1 00:00:00 UTC 1970
[0.000000] Machine model: V2P-CA15
[0.000000] Memory limited to 256MB
[0.000000] efi: Getting EFI parameters from FDT:
[0.000000] efi: UEFI not found.
[0.000000] On node 0 totalpages: 65536
[0.000000]   DMA32 zone: 1024 pages used for memmap
[0.000000]   DMA32 zone: 0 pages reserved
[0.000000]   DMA32 zone: 65536 pages, LIFO batch:15
[0.000000] percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
[0.000000] pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
[0.000000] pcpu-alloc: [0] 0
[0.000000] Detected PIPT I-cache on CPU0
[0.000000] CPU features: detected: ARM erratum 832075
[0.000000] CPU features: detected: EL2 vector hardening
[0.000000] ARM_SMCCC_ARCH_WORKAROUND_1 missing from firmware
[0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 64512
[0.000000] Kernel command line: earlyprintk=pl011,0x1c090000 lpj=19988480 rw loglevel=8 mem=256MB root=/dev/sda console_msg_format=syslog nokaslr norandmaps panic=-1 printk.devkmsg=on printk.time=y rw console=ttyAMA0 - lkmc_home=/lkmc
[0.000000] Dentry cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
[0.000000] Inode-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
[0.000000] mem auto-init: stack:off, heap alloc:off, heap free:off
[0.000000] Memory: 233432K/262144K available (6652K kernel code, 792K rwdata, 2176K rodata, 896K init, 659K bss, 28712K reserved, 0K cma-reserved)
[0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
[0.000000] ftrace: allocating 22067 entries in 87 pages

So we see that messages up to the ftrace one do show up!

The m5out/stats.txt file contains important statistics about the run:

cat "$(./getvar --arch aarch64 m5out_dir)/stats.txt"

Whenever we run m5 dumpstats or when fs.py and se.py are exiting (TODO other scripts?), a section with the following format is added to that file:

---------- Begin Simulation Statistics ----------
[the stats]
---------- End Simulation Statistics   ----------

That file contains several important execution metrics, e.g. number of cycles and several types of cache misses:

system.cpu.numCycles
system.cpu.dtb.inst_misses
system.cpu.dtb.inst_hits

For x86, it is interesting to try and correlate numCycles with:

In LKMC f42c525d7973d70f4c836d2169cc2bd2893b4197 gem5 5af26353b532d7b5988cf0f6f3d0fbc5087dd1df, the stat file for a [c] hello world:

./run --arch aarch64 --emulator gem5 --userland userland/c/hello.c

which has a single dump done at the exit, has size 59KB and stat lines of the form:

final_tick                                   91432000                       # Number of ticks from beginning of simulation (restored from checkpoints and never reset)

We can reduce the file size by adding the ?desc=False magic suffix to the stat file name:

--stats-file stats.txt?desc=false

as explained in:

gem5.opt --stats-help

and this reduces the file size to 39KB by removing those excessive comments:

final_tick                                   91432000

although trailing spaces are still present.

We can further reduce this size by removing spaces from the dumps with this hack:

         ccprintf(stream, " |%12s %10s %10s",
                  ValueToString(value, precision), pdfstr.str(), cdfstr.str());
     } else {
-        ccprintf(stream, "%-40s %12s %10s %10s", name,
-                 ValueToString(value, precision), pdfstr.str(), cdfstr.str());
+        ccprintf(stream, "%s %s", name, ValueToString(value, precision));
+        if (pdfstr.rdbuf()->in_avail())
+            stream << " " << pdfstr.str();
+        if (cdfstr.rdbuf()->in_avail())
+            stream << " " << cdfstr.str();

         if (descriptions) {
             if (!desc.empty())

and after that the file size went down to 21KB.

We can make gem5 dump statistics in the [hdf5] format by adding the magic h5:// prefix to the file name as in:

gem5.opt --stats-file h5://stats.h5

as explained in:

gem5.opt --stats-help

This is not exposed in LKMC f42c525d7973d70f4c836d2169cc2bd2893b4197 however, you just have to hack the gem5 CLI for now.

TODO what is the advantage? The generated file for --stats-file h5://stats.h5?desc=False in LKMC f42c525d7973d70f4c836d2169cc2bd2893b4197 gem5 5af26353b532d7b5988cf0f6f3d0fbc5087dd1df for a single dump was 946K, so much larger than the text version seen at gem5 m5out/stats.txt file which was only 59KB max!

We then try to see if it is any better when you have a bunch of dump events:

./run --arch aarch64 --emulator gem5 --userland userland/c/m5ops.c --cli-args 'd 1000'

and there yes, we see that the file size fell from 39MB on stats.txt to 3.2MB on stats.h5, so the increase observed previously was just due to some initial size overhead (considering the patched gem5 with no spaces in the text file).

We also note however that the stat dump made such a simulation, which just loops and dumps, considerably slower: from 3s to 15s on [p51]. Fascinating: we are definitely not disk bound there.

This describes the internals of the gem5 m5out/stats.txt file.

GDB call stack to dumpstats:

Stats::pythonDump () at build/ARM/python/pybind11/stats.cc:58
Stats::StatEvent::process() ()
GlobalEvent::BarrierEvent::process (this=0x555559fa6a80) at build/ARM/sim/global_event.cc:131
EventQueue::serviceOne (this=this@entry=0x555558c36080) at build/ARM/sim/eventq.cc:228
doSimLoop (eventq=0x555558c36080) at build/ARM/sim/simulate.cc:219
simulate (num_cycles=<optimized out>) at build/ARM/sim/simulate.cc:132

Stats::pythonDump does:

void
pythonDump()
{
    py::module m = py::module::import("m5.stats");
    m.attr("dump")();
}

This calls src/python/m5/stats/__init__.py, whose def dump does the main dumping.

That function does notably:

    for output in outputList:
        if output.valid():
            output.begin()
            for stat in stats_list:
                stat.visit(output)
            output.end()

begin and end are defined in C++ and output the header and tail respectively:

void
Text::begin()
{
    ccprintf(*stream, "\n---------- Begin Simulation Statistics ----------\n");
}

void
Text::end()
{
    ccprintf(*stream, "\n---------- End Simulation Statistics   ----------\n");
    stream->flush();
}

stats_list contains the stats, and stat.visit prints them, outputList contains by default just the text output. I don’t see any other types of output in gem5, but likely JSON / binary formats could be envisioned.

Tested in gem5 b4879ae5b0b6644e6836b0881e4da05c64a6550d.

The m5out/config.ini file contains a very good high level description of the system:

less "$(./getvar --arch arm --emulator gem5 m5out_dir)/config.ini"

That file contains a tree representation of the system, sample excerpt:

[root]
type=Root
children=system
full_system=true

[system]
type=ArmSystem
children=cpu cpu_clk_domain
auto_reset_addr_64=false
semihosting=Null

[system.cpu]
type=AtomicSimpleCPU
children=dstage2_mmu dtb interrupts isa istage2_mmu itb tracer
branchPred=Null

[system.cpu_clk_domain]
type=SrcClockDomain
clock=500

Each node has:

  • a list of child nodes, e.g. system is a child of root, and both cpu and cpu_clk_domain are children of system

  • a list of parameters, e.g. system.semihosting is Null, which means that [semihosting] was turned off

Set custom configs with the --param option of fs.py, e.g. we can make gem5 wait for GDB to connect with:

fs.py --param 'system.cpu[0].wait_for_remote_gdb = True'

More complex settings involving new classes however require patching the config files, although it is easy to hack this up. See for example: patches/manual/gem5-semihost.patch.

Modifying the config.ini file manually does nothing since it gets overwritten every time.

The m5out/config.dot file contains a graphviz .dot file that provides a simplified graphical view of a subset of the gem5 config.ini.

This file gets automatically converted to .svg and .pdf, which you can view after running gem5 with:

xdg-open "$(./getvar --arch arm --emulator gem5 m5out_dir)/config.dot.pdf"
xdg-open "$(./getvar --arch arm --emulator gem5 m5out_dir)/config.dot.svg"

An example of such a file can be seen at: config.dot.svg for a TimingSimpleCPU without caches.

We use the m5term in-tree executable to connect to the terminal instead of a direct telnet.

If you use telnet directly, it mostly works, but certain interactive features don’t, e.g.:

  • up and down arrows for history navigation

  • tab to complete paths

  • Ctrl-C to kill processes

TODO understand in detail what m5term does differently than telnet.

We have made a crazy setup that allows you to just cd into submodules/gem5, and edit Python scripts directly there.

This is not normally possible with Buildroot, since normal Buildroot packages first copy files to the output directory ($(./getvar -a <arch> buildroot_build_build_dir)/<pkg>), and then build there.

So if you modified the Python scripts with this setup, you would still need to ./build to copy the modified files over.

For gem5 specifically however, we have hacked up the build so that we cd into the submodules/gem5 tree, and then do an out of tree build to out/common/gem5.

Another advantage of this method is that we factor out the arm and aarch64 gem5 builds, which are identical and large, as well as the smaller arch generic pieces.

Using Buildroot for gem5 is still convenient because we use it to:

  • cross build m5 for us

  • check timestamps and skip the gem5 build when it is not requested

The out of tree build is required, because otherwise Buildroot would copy the output build of all archs to each arch directory, resulting in arch^2 build copies, which is significant.

By default, we use the configs/example/fs.py script.

The --gem5-script biglittle option enables the alternative configs/example/arm/fs_bigLITTLE.py script instead:

./run --arch aarch64 --emulator gem5 --gem5-script biglittle

Advantages over fs.py:

  • more representative of mobile ARM SoCs, which almost always have a big.LITTLE cluster

  • simpler than fs.py, and therefore easier to understand and modify

Disadvantages over fs.py:

  • only works for ARM, not other archs

  • not as many configuration options as fs.py, many things are hardcoded

We setup 2 big and 2 small CPUs, but cat /proc/cpuinfo shows 4 identical CPUs instead of 2 of two different types, likely because gem5 does not expose some informational register, much like the caches: https://www.mail-archive.com/[email protected]/msg15426.html The gem5 config.ini does show that the two big ones are DerivO3CPU and the small ones are MinorCPU.

TODO: why is the --dtb required despite fs_bigLITTLE.py having a DTB generation capability? Without it, nothing shows on terminal, and the simulation terminates with simulate() limit reached @ 18446744073709551615. The magic vmlinux.vexpress_gem5_v1.20170616 works however without a DTB.

All those tests could in theory be added to this repo instead of to gem5, and this is actually the superior setup as it is cross emulator.

But can the people from the project be convinced of that?

These are just very small GTest tests that test a single class in isolation; they don’t run any executables.

Build the unit tests and run them:

./build-gem5 --unit-tests

Running individual unit tests is not yet exposed, but it is easy to do: while running the full tests, GTest prints each test command being run, e.g.:

/path/to/build/ARM/base/circlebuf.test.opt --gtest_output=xml:/path/to/build/ARM/unittests.opt/base/circlebuf.test.xml
[==========] Running 4 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 4 tests from CircleBufTest
[ RUN      ] CircleBufTest.BasicReadWriteNoOverflow
[       OK ] CircleBufTest.BasicReadWriteNoOverflow (0 ms)
[ RUN      ] CircleBufTest.SingleWriteOverflow
[       OK ] CircleBufTest.SingleWriteOverflow (0 ms)
[ RUN      ] CircleBufTest.MultiWriteOverflow
[       OK ] CircleBufTest.MultiWriteOverflow (0 ms)
[ RUN      ] CircleBufTest.PointerWrapAround
[       OK ] CircleBufTest.PointerWrapAround (0 ms)
[----------] 4 tests from CircleBufTest (0 ms total)

[----------] Global test environment tear-down
[==========] 4 tests from 1 test case ran. (0 ms total)
[  PASSED  ] 4 tests.

so you can just copy paste the command.

Building individual tests is possible with:

./build-gem5 --unit-test base/circlebuf.test

This does not run the test however.

Note that the command and its corresponding results don’t need to show up consecutively on stdout because tests are run in parallel. You just have to match them based on the class name CircleBufTest to the file circlebuf.test.cpp.

This section is about running the gem5 in-tree tests.

Running the larger 2019 regression tests is exposed for example with:

./build-gem5 --arch aarch64
./gem5-regression --arch aarch64 -- --length quick --length long

After the first run has downloaded the test binaries for you, you can speed up the process a little bit by skipping a useless SCons call:

./gem5-regression --arch aarch64 -- --length quick --length long --skip-build

Note however that running without --skip-build is required at least once to download the test binaries, because the test interface is bad.

List available tests instead of running them:

./gem5-regression --arch aarch64 --cmd list

You can then pick one suite (has to be a suite, not an "individual test") from the list and run just it e.g. with:

./gem5-regression --arch aarch64 -- --uid SuiteUID:tests/gem5/cpu_tests/test.py:cpu_test_AtomicSimpleCPU_Bubblesort-ARM-opt

This error happens when the following instruction limits are reached:

system.cpu[0].max_insts_all_threads
system.cpu[0].max_insts_any_thread

If the parameter is not set, it defaults to 0, which is magic and means the huge maximum value of uint64_t: 0xFFFFFFFFFFFFFFFF, which in practice would require a very long simulation if at least one CPU were live.
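
A tiny hedged C++ illustration of that sentinel convention (a sketch of the idea only, not gem5’s actual code):

#include <cstdint>
#include <iostream>

int main()
{
    // 0 is the magic "not set" default: it stands for the largest
    // possible 64-bit instruction limit, which no realistic run reaches.
    std::uint64_t max_insts = 0;
    std::uint64_t limit = max_insts ? max_insts : UINT64_MAX;
    std::cout << limit << "\n"; // prints 18446744073709551615
}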

So this usually means all CPUs are in a sleep state, and no events are scheduled in the future, which usually indicates a bug in either gem5 or guest code, leading gem5 to blow up.

Still, fs.py at gem5 08c79a194d1a3430801c04f37d13216cc9ec1da3 does not exit with non-zero status due to this…​ and so we just parse it out just as for m5 fail…​

A trivial and very direct way to see the message would be:

./run \
  --emulator gem5 \
  --userland userland/arch/x86_64/freestanding/linux/hello.S \
  --trace-insts-stdout \
  -- \
  --param 'system.cpu[0].max_insts_all_threads = 3' \
;

which as of lkmc 402059ed22432bb351d42eb10900e5a8e06aa623 runs only the first three instructions and quits!

info: Entering event queue @ 0.  Starting simulation...
      0: system.cpu A0 T0 : @asm_main_after_prologue    : mov   rdi, 0x1
      0: system.cpu A0 T0 : @asm_main_after_prologue.0  :   MOV_R_I : limm   rax, 0x1 : IntAlu :  D=0x0000000000000001  flags=(IsInteger|IsMicroop|IsLastMicroop|IsFirstMicroop)
   1000: system.cpu A0 T0 : @asm_main_after_prologue+7    : mov rdi, 0x1
   1000: system.cpu A0 T0 : @asm_main_after_prologue+7.0  :   MOV_R_I : limm   rdi, 0x1 : IntAlu :  D=0x0000000000000001  flags=(IsInteger|IsMicroop|IsLastMicroop|IsFirstMicroop)
   2000: system.cpu A0 T0 : @asm_main_after_prologue+14    : lea        rsi, DS:[rip + 0x19]
   2000: system.cpu A0 T0 : @asm_main_after_prologue+14.0  :   LEA_R_P : rdip   t7, %ctrl153,  : IntAlu :  D=0x000000000040008d  flags=(IsInteger|IsMicroop|IsDelayedCommit|IsFirstMicroop)
   2500: system.cpu A0 T0 : @asm_main_after_prologue+14.1  :   LEA_R_P : lea   rsi, DS:[t7 + 0x19] : IntAlu :  D=0x00000000004000a6  flags=(IsInteger|IsMicroop|IsLastMicroop)
Exiting @ tick 3000 because all threads reached the max instruction count

The exact same can be achieved with the older hardcoded --maxinsts mechanism present in se.py and fs.py:

./run \
  --emulator gem5 \
  --userland userland/arch/x86_64/freestanding/linux/hello.S \
  --trace-insts-stdout \
  -- \
  --maxinsts 3 \
;

Other related fs.py options are:

  • --abs-max-tick: set the maximum guest simulation time. The same scale as the ExecAll trace is used. E.g., for the above example with 3 instructions, the same trace would be achieved with a value of 3000.

The message also shows up on User mode simulation deadlocks, for example in userland/posix/pthread_deadlock.c:

./run \
  --emulator gem5 \
  --userland userland/posix/pthread_deadlock.c \
  --cli-args 1 \
;

ends in:

Exiting @ tick 18446744073709551615 because simulate() limit reached

where 18446744073709551615 is 0xFFFFFFFFFFFFFFFF in decimal.

And there is a [baremetal] example at baremetal/arch/aarch64/no_bootloader/wfe_loop.S that dies on WFE:

./run \
  --arch aarch64 \
  --baremetal baremetal/arch/aarch64/no_bootloader/wfe_loop.S \
  --emulator gem5 \
  --trace-insts-stdout \
;

which gives:

info: Entering event queue @ 0.  Starting simulation...
      0: system.cpu A0 T0 : @lkmc_start    :   wfe                      : IntAlu :  D=0x0000000000000000  flags=(IsSerializeAfter|IsNonSpeculative|IsQuiesce|IsUnverifiable)
   1000: system.cpu A0 T0 : @lkmc_start+4    :   b   <lkmc_start>         : IntAlu :   flags=(IsControl|IsDirectControl|IsUncondControl)
   1500: system.cpu A0 T0 : @lkmc_start    :   wfe                      : IntAlu :  D=0x0000000000000000  flags=(IsSerializeAfter|IsNonSpeculative|IsQuiesce|IsUnverifiable)
Exiting @ tick 18446744073709551615 because simulate() limit reached

Other examples of the message:

In order to use different build options, you might also want to use [gem5-build-variants] to keep the build outputs separate from one another.

Profiling builds as of 3cea7d9ce49bda49c50e756339ff1287fd55df77 both use -g -O3 and disable asserts and logging like the gem5 fast build. In addition:

  • prof uses -pg for gprof

  • perf uses -lprofiler for google-pprof

Profiling techniques are discussed in more detail at: [profiling-userland-programs].

For the prof build, you can get the gmon.out file with:

./run --arch aarch64 --emulator gem5 --userland userland/c/hello.c --gem5-build-type prof
gprof "$(./getvar --arch aarch64 gem5_executable)" > tmp.gprof

TODO test properly, benchmark vs GCC.

sudo apt-get install clang
./build-gem5 --gem5-clang
./run --emulator gem5 --gem5-clang

If gem5 appears to have a C++ undefined behaviour bug, which is often very difficult to track down, you can try to build it with the following extra SCons options:

./build-gem5 --gem5-build-id san --verbose -- --with-ubsan --without-tcmalloc

This will make GCC do a lot of extra sanitization checks at compile and run time.

As a result, the build and runtime will be way slower than normal, but that still might be the fastest way to solve undefined behaviour problems.

Ideally, we should also be able to use ASan with --with-asan, but if we try then the build fails at gem5 16eeee5356585441a49d05c78abc328ef09f7ace (with two trivial ubsan fixes I’ll push soon):

=================================================================
==9621==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 371712 byte(s) in 107 object(s) allocated from:
    #0 0x7ff039804448 in malloc (/usr/lib/x86_64-linux-gnu/libasan.so.5+0x10c448)
    #1 0x7ff03950d065 in dictresize ../Objects/dictobject.c:643

Direct leak of 23728 byte(s) in 26 object(s) allocated from:
    #0 0x7ff039804448 in malloc (/usr/lib/x86_64-linux-gnu/libasan.so.5+0x10c448)
    #1 0x7ff03945e40d in _PyObject_GC_Malloc ../Modules/gcmodule.c:1499
    #2 0x7ff03945e40d in _PyObject_GC_Malloc ../Modules/gcmodule.c:1493

Direct leak of 2928 byte(s) in 43 object(s) allocated from:
    #0 0x7ff03980487e in __interceptor_realloc (/usr/lib/x86_64-linux-gnu/libasan.so.5+0x10c87e)
    #1 0x7ff03951d763 in list_resize ../Objects/listobject.c:62
    #2 0x7ff03951d763 in app1 ../Objects/listobject.c:277
    #3 0x7ff03951d763 in PyList_Append ../Objects/listobject.c:289

Direct leak of 2002 byte(s) in 3 object(s) allocated from:
    #0 0x7ff039804448 in malloc (/usr/lib/x86_64-linux-gnu/libasan.so.5+0x10c448)
    #1 0x7ff0394fd813 in PyString_FromStringAndSize ../Objects/stringobject.c:88
    #2 0x7ff0394fd813 in PyString_FromStringAndSize ../Objects/stringobject.c:

Direct leak of 40 byte(s) in 2 object(s) allocated from:
    #0 0x7ff039804448 in malloc (/usr/lib/x86_64-linux-gnu/libasan.so.5+0x10c448)
    #1 0x7ff03951ea4b in PyList_New ../Objects/listobject.c:152

Indirect leak of 10384 byte(s) in 11 object(s) allocated from
    #0 0x7ff039804448 in malloc (/usr/lib/x86_64-linux-gnu/libasan.so.5+0x10c448
    #1 0x7ff03945e40d in _PyObject_GC_Malloc ../Modules/gcmodule.c:
    #2 0x7ff03945e40d in _PyObject_GC_Malloc ../Modules/gcmodule.c:1493

Indirect leak of 4089 byte(s) in 6 object(s) allocated from:
    #0 0x7ff039804448 in malloc (/usr/lib/x86_64-linux-gnu/libasan.so.5+0x10c448)
    #1 0x7ff0394fd648 in PyString_FromString ../Objects/stringobject.c:143

Indirect leak of 2090 byte(s) in 3 object(s) allocated from:
    #0 0x7ff039804448 in malloc (/usr/lib/x86_64-linux-gnu/libasan.so.5+0x10c448
    #1 0x7ff0394eb36f in type_new ../Objects/typeobject.c:
    #2 0x7ff0394eb36f in type_new ../Objects/typeobject.c:2094
Indirect leak of 1346 byte(s) in 2 object(s) allocated from:
    #0 0x7ff039804448 in malloc (/usr/lib/x86_64-linux-gnu/libasan.so.5+0x10c448)
    #1 0x7ff0394fd813 in PyString_FromStringAndSize ../Objects/stringobject.c:
    #2 0x7ff0394fd813 in PyString_FromStringAndSize ../Objects/stringobject.c:

SUMMARY: AddressSanitizer: 418319 byte(s) leaked in 203 allocation(s).

From the message, this appears to be a Python / pybind11 bug and not a gem5 one specifically. I think it worked when I tried it in the past on an older gem5 / Ubuntu.

--without-tcmalloc is needed, or at least a good idea, when using --with-asan, since both do more or less similar jobs: https://stackoverflow.com/questions/42712555/address-sanitizer-fsanitize-address-works-with-tcmalloc See also [memory-leaks].

gem5 has two types of memory system:

  • the classic memory system, which is used by default

  • the Ruby memory system

The Ruby memory system includes the SLICC domain specific language to describe memory systems: http://gem5.org/Ruby SLICC transpiles to C++ auto-generated files under build/<isa>/mem/ruby/protocol/.

Ruby seems to have usage outside of gem5, but the naming overload with the Ruby programming language, which also has domain specific languages as a concept, makes it impossible to google anything about it!

Since it is not the default, Ruby is generally less stable than the classic memory model. However, because it allows describing a wide variety of important cache coherence protocols, while the classic system only describes a single protocol, Ruby is a very important feature of gem5.

Ruby support must be enabled at compile time with the scons PROTOCOL= flag, which compiles support for the desired memory system type.

Note however that most ISAs already implicitly set PROTOCOL via the build_opts/ directory, e.g. build_opts/ARM contains:

PROTOCOL = 'MOESI_CMP_directory'

and therefore ARM already compiles MOESI_CMP_directory by default.

Then, with fs.py and se.py, you can choose to use either the classic or the built-in Ruby memory system at runtime with the --ruby option:

  • if --ruby is given, use the ruby memory system that was compiled into gem5. Caches are always present when Ruby is used, since the main goal of Ruby is to specify the cache coherence protocol, and it therefore hardcodes cache hierarchies.

  • otherwise, use the classic memory system. Caches may be optional for certain CPU types and are enabled with --caches.

For example, to use a two level [mesi-cache-coherence-protocol] we can do:

./build-gem5 --arch aarch64 --gem5-build-id ruby -- PROTOCOL=MESI_Two_Level
./run --arch aarch64 --emulator gem5 --gem5-build-id ruby -- --ruby

and during build we see a humongous line of type:

[   SLICC] src/mem/protocol/MESI_Two_Level.slicc -> ARM/mem/protocol/AccessPermission.cc, ARM/mem/protocol/AccessPermission.hh, ...

which shows that dozens of C++ files are being generated from Ruby SLICC.

The relevant Ruby source files live in the source tree under:

src/mem/protocol/MESI_Two_Level*

We already pass the SLICC_HTML flag by default to the build, which generates an HTML summary of each memory protocol under (TODO broken: https://gem5.atlassian.net/browse/GEM5-357):

xdg-open "$(./getvar --arch aarch64 --gem5-build-id ruby gem5_build_build_dir)/ARM/mem/protocol/html/index.html"

A minimized ruby config which was not merged upstream can be found for study at: https://gem5-review.googlesource.com/c/public/gem5/+/13599/1

One easy way to see that Ruby is being used without understanding it in detail is to enable some logging:

./run \
  --arch aarch64 \
  --emulator gem5 \
  --gem5-worktree master \
  --userland userland/arch/aarch64/freestanding/linux/hello.S \
  --static \
  --trace ExecAll,FmtFlag,Ruby,XBar \
  -- \
  --ruby \
;
cat "$(./getvar --arch aarch64 --emulator gem5 trace_txt_file)"

Then:

  • when the --ruby flag is given, we see a gazillion Ruby related messages prefixed e.g. by RubyPort:.

    We also observe from ExecEnable lines that instruction timing is not simple anymore, so the memory system must have latencies.

  • without --ruby, we instead see XBar (Coherent Crossbar) related messages such as CoherentXBar:, which I believe is the more precise name for the memory model that the classic memory system uses: gem5 crossbar interconnect.

Certain features may not work in Ruby. For example, gem5 checkpoint creation is only possible in Ruby protocols that support flush, which is the case for PROTOCOL=MOESI_hammer but not PROTOCOL=MESI_Three_Level: https://www.mail-archive.com/[email protected]/msg17418.html

Tested in gem5 d7d9bc240615625141cd6feddbadd392457e49eb.

The crossbar, or XBar in the code, is the default CPU interconnect that gets used by fs.py if --ruby is not given.

TODO: describe it in more detail. It appears to be a very simple mechanism.

Under src/mem/ we see that there is both a coherent and a non-coherent XBar.

In se.py it is set at:

if options.ruby:
    ...
else:
    MemClass = Simulation.setMemClass(options)
    system.membus = SystemXBar()

and SystemXBar is defined at src/mem/XBar.py with a nice comment:

# One of the key coherent crossbar instances is the system
# interconnect, tying together the CPU clusters, GPUs, and any I/O
# coherent masters, and DRAM controllers.
class SystemXBar(CoherentXBar):

Tested in gem5 12c917de54145d2d50260035ba7fa614e25317a3.

Python 3 support was mostly added in 2019 Q3 at around a347a1a68b8a6e370334be3a1d2d66675891e0f1 but remained buggy for some time afterwards.

In an Ubuntu 18.04 host where python is python2 by default, build with Python 3 instead with:

./build-gem5 --gem5-build-id python3 -- PYTHON_CONFIG=python3-config

Python 3 is then automatically used when running if you use that build.

gem5 has a few in tree CPU models for different purposes.

In fs.py and se.py, those are selectable with the --cpu-type option.

The information needed to make highly accurate models isn’t generally public for non-free CPUs, so you must either rely on vendor provided models or on experiments / reverse engineering.

There is no simple answer for "what is the best CPU": in theory you have to understand each model and decide which one is closer to your target system.

Whenever possible, stick to:

  • vendor provided ones obviously, e.g. ARM Holdings models of ARM cores, unless there is good reason not to, as they are the most likely to be accurate

  • newer models instead of older models

Both of those can be checked with git log and git blame.

All CPU types inherit from the BaseCPU class, and looking at the class hierarchy in Eclipse gives a good overview of what we have:

  • BaseCPU

    • BaseKvmCPU

    • BaseSimpleCPU

      • AtomicSimpleCPU

      • TimingSimpleCPU

    • MinorCPU

    • BaseO3CPU

      • FullO3CPU

BaseSimpleCPU: simple CPUs without a pipeline.

They are therefore completely unrealistic. But they also run much faster.

Implementations: AtomicSimpleCPU and TimingSimpleCPU.

KVM CPUs are an alternative way of fast forwarding boot when they work.

MinorCPU: generic in-order core that does not model any specific CPU.

It has a C++ implementation that can be parametrized to more closely match real cores.

Note that since gem5 is highly parametrizable, the parametrization could even change which instructions a CPU can execute by altering its available functional units, which are used to model performance.

For example, MinorCPU allows all implemented instructions, including [arm-sve] instructions, but a derived class modelling, say, an ARM Cortex A7 core, might not, since SVE is a newer feature and the A7 core does not have SVE.

The weird name "Minor" stands for "M (TODO what is M) IN ORder".

Its 4 stage pipeline is described at the "MinorCPU" section of gem5 ARM RSK.

There is also an in-tree doxygen at: src/doc/inside-minor.doxygen and rendered at: http://pages.cs.wisc.edu/~swilson/gem5-docs/minor.html

As of 2019, in-order cores are mostly present in low power / cost contexts, for example little cores of ARM bigLITTLE.

The following models extend the MinorCPU class by parametrization to make it match existing CPUs more closely:

  • HPI: derived from MinorCPU.

    Created by Ashkan Tousi in 2017 while working at ARM.

    According to gem5 ARM RSK:

    The HPI CPU timing model is tuned to be representative of a modern in-order Armv8-A implementation.

  • ex5_LITTLE: derived from MinorCPU. Description reads:

    ex5 LITTLE core (based on the ARM Cortex-A7)

    Implemented by Pierre-Yves Péneau from LIRMM, which is a research lab in Montpellier, France, in 2017.

  • O3_ARM_v7a: implemented by Ronald Dreslinski from the University of Michigan in 2012

    Not sure why it has v7a in the name, since I believe the CPUs are just the microarchitectural implementation of any ISA, and the v8 hello world did run.

    The CLI option is named slightly differently as: --cpu-type O3_ARM_v7a_3.

O3CPU: generic out-of-order core. "O3" stands for "Out Of Order"!

Analogous to MinorCPU, but modelling an out of order core instead of in order.

Existing parametrizations:

  • ex5_big: big core corresponding to ex5_LITTLE, by the same author at the same time. Its description reads:

    ex5 big core (based on the ARM Cortex-A15)

The gem5 platform is selectable with the --machine option, which is named after the analogous QEMU -machine option, and which sets the --machine-type.

Each platform represents a different system with different devices, memory and interrupt setup.

TODO: describe the main characteristics of each platform, as of gem5 5e83d703522a71ec4f3eb61a01acd8c53f6f3860:

  • VExpress_GEM5_V1: good sane base platform

  • VExpress_GEM5_V1_DPU: VExpress_GEM5_V1 with DP650 instead of HDLCD, selected automatically by ./run --dp650, see also: gem5 graphic mode DP650

  • VExpress_GEM5_V2: VExpress_GEM5_V1 with GICv3, uses a different bootloader arm/aarch64_bootloader/boot_emm_v2.arm64 TODO is it because of GICv3?

  • anything that does not start with: VExpress_GEM5_: old and bad, don’t use them

Present at: http://www.gem5.org/dist/current/

Depending on which archive you download from there, you can find some of:

  • Ubuntu based images

  • precompiled Linux kernels, with the gem5 arm Linux kernel patches for arm

  • precompiled gem5 bootloaders for ISAs that have them, e.g. ARM

  • precompiled DTBs if you don’t want to use autogeneration for some crazy reason

Some of those images are also used on the gem5 unit tests continuous integration.

Could be used as an alternative to this repository. But why would you do that? :-)

E.g. to use a precompiled ARM kernel:

mkdir aarch-system-201901106
cd aarch-system-201901106
wget http://www.gem5.org/dist/current/arm/aarch-system-201901106.tar.bz2
tar xvf aarch-system-201901106.tar.bz2
cd ..
./run --arch aarch64 --emulator gem5 --linux-exec aarch-system-201901106/binaries/vmlinux.arm64

Certain ISAs like ARM have bootloaders that are automatically run before the main image to setup basic system state.

We cross compile those bootloaders from source automatically during ./build-gem5.

As of gem5 bcf041f257623e5c9e77d35b7531bae59edc0423, the source code of the bootloaderes can be found under:

system/arm/

and their selection can be seen under: src/dev/arm/RealView.py, e.g.:

    def setupBootLoader(self, cur_sys, loc):
        if not cur_sys.boot_loader:
            cur_sys.boot_loader = [ loc('boot_emm.arm64'), loc('boot_emm.arm') ]

Internals under other sections:

In order to develop complex C++ software such as gem5, a good IDE setup is fundamental.

The best setup I’ve reached is with Eclipse. It is not perfect, and there is a learning curve, but is worth it.

Notably, it is very hard to get perfect due to: Why are all C++ files symlinked into the gem5 build dir?

I recommend the following settings, tested in Eclipse 2019.09, Ubuntu 18.04:

To run and GDB step debug the executable, just copy the full command line from the output of ./run, and configure it into Eclipse.

The interaction uses the Python C extension interface (https://docs.python.org/2/extending/extending.html) through the [pybind11] helper library: https://github.com/pybind/pybind11

The C++ executable both:

  • starts running the Python executable

  • provides Python classes written in C++ for that Python code to use

An example of this can be found at:

Then the gem5 magic SimObject class adds some crazy stuff further on top of it; it is a mess. In particular, it auto generates params/ headers. TODO: why is this mess needed at all? pybind11 seems to handle constructor arguments just fine.

Let’s study BadDevice for example:

src/dev/BadDevice.py defines devicename:

class BadDevice(BasicPioDevice):
    type = 'BadDevice'
    cxx_header = "dev/baddev.hh"
    devicename = Param.String("Name of device to error on")

The object is created in Python for example from src/dev/alpha/Tsunami.py as:

    fb = BadDevice(pio_addr=0x801fc0003d0, devicename='FrameBuffer')

Since BadDevice has no __init__ method, and neither does BasicPioDevice, it all just falls through until the SimObject.__init__ constructor.

This constructor will loop through the inheritance chain and give the Python parameters to the C++ BadDeviceParams class as follows.

The auto-generated build/ARM/params/BadDevice.hh file defines BadDeviceParams in C++:

#ifndef __PARAMS__BadDevice__
#define __PARAMS__BadDevice__

class BadDevice;

#include <cstddef>
#include <string>

#include "params/BasicPioDevice.hh"

struct BadDeviceParams
    : public BasicPioDeviceParams
{
    BadDevice * create();
    std::string devicename;
};

#endif // __PARAMS__BadDevice__

and ./python/_m5/param_BadDevice.cc exposes the params from C++ to Python with pybind11:

namespace py = pybind11;

static void
module_init(py::module &m_internal)
{
    py::module m = m_internal.def_submodule("param_BadDevice");
    py::class_<BadDeviceParams, BasicPioDeviceParams, std::unique_ptr<BadDeviceParams, py::nodelete>>(m, "BadDeviceParams")
        .def(py::init<>())
        .def("create", &BadDeviceParams::create)
        .def_readwrite("devicename", &BadDeviceParams::devicename)
        ;

    py::class_<BadDevice, BasicPioDevice, std::unique_ptr<BadDevice, py::nodelete>>(m, "BadDevice")
        ;

}

static EmbeddedPyBind embed_obj("BadDevice", module_init, "BasicPioDevice");

src/dev/baddev.hh then uses the parameters on the constructor:

class BadDevice : public BasicPioDevice
{
  private:
    std::string devname;

  public:
    typedef BadDeviceParams Params;

  protected:
    const Params *
    params() const
    {
        return dynamic_cast<const Params *>(_params);
    }

  public:
     /**
      * Constructor for the Baddev Class.
      * @param p object parameters
      * @param a base address of the write
      */
    BadDevice(Params *p);

src/dev/baddev.cc then uses the parameter:

BadDevice::BadDevice(Params *p)
    : BasicPioDevice(p, 0x10), devname(p->devicename)
{
}

It has been found that this usage of [pybind11] across hundreds of SimObject files accounted for 50% of the gem5 build time at one point: [pybind11-accounts-for-50-of-gem5-build-time].

To get a feeling of how SimObject objects are run, see: gem5 event queue AtomicSimpleCPU syscall emulation freestanding example analysis.

Tested on gem5 08c79a194d1a3430801c04f37d13216cc9ec1da3.

The main is at: src/sim/main.cc. It calls:

ret = initM5Python();

src/sim/init.cc:

int
initM5Python()
{
    EmbeddedPyBind::initAll();
    return EmbeddedPython::initAll();
}

initAll basically just initializes the _m5 Python object, which is used across multiple .py files.

Back on main:

ret = m5Main(argc, argv);

which goes to:

result = PyRun_String(*command, Py_file_input, dict, dict);

with commands looping over:

import m5
m5.main()

which leads into:

src/python/m5/main.py#main

which finally calls your config file like fs.py with:

filename = sys.argv[0]
filedata = file(filename, 'r').read()
filecode = compile(filedata, filename, 'exec')
[...]
exec filecode in scope

TODO: the file path name appears to be passed as a command line argument to the Python script, but I didn’t have the patience to fully understand the details.

The Python config files then set the entire system up in Python, and finally call m5.simulate() to run the actual simulation. This function has a C++ native implementation at:

src/sim/simulate.cc

and that is where the main event loop, doSimLoop, gets called and starts kicking off the gem5 event queue.

Tested at gem5 b4879ae5b0b6644e6836b0881e4da05c64a6550d.

gem5 is an event based simulator, and as such the event queue is one of the crucial elements in the system.

The gem5 event queue stores one callback event for each future point in time.

The event queue is implemented in the class EventQueue in the file src/sim/eventq.hh.

Not all times need to have an associated event: if a given time has no events, gem5 just skips it and jumps to the next event: the queue is basically a linked list of events.

Important examples of events include:

  • CPU ticks

  • peripherals and memory

At gem5 event queue AtomicSimpleCPU syscall emulation freestanding example analysis we see for example that at the beginning of an AtomicSimpleCPU simulation, gem5 sets up exactly two events: the AtomicSimpleCPU tick event, and the generic end of time exit event.

Then, at the end of the callback of one tick event, another tick is scheduled.

And so the simulation progresses tick by tick, until an exit event happens.
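
To make the pattern concrete, here is a self-contained hedged C++ toy with our own names, not gem5’s actual classes: a time-ordered multimap stands in for the event list, each tick callback reschedules the next tick, and a separate exit event ends the loop:

#include <cstdint>
#include <functional>
#include <iostream>
#include <map>

int main()
{
    // Toy event queue: simulated time -> callback, kept sorted by time.
    std::multimap<std::uint64_t, std::function<void()>> queue;
    std::uint64_t now = 0;
    bool exited = false;
    std::function<void()> tick = [&]() {
        std::cout << "tick @ " << now << "\n";
        queue.emplace(now + 500, tick); // reschedule the next tick
    };
    queue.emplace(0, tick);
    queue.emplace(2000, [&]() { exited = true; }); // the exit event
    while (!exited) {
        auto it = queue.begin();    // serviceOne() analogue: take the
        now = it->first;            // earliest event; times without
        auto callback = it->second; // events are simply skipped over
        queue.erase(it);
        callback();                 // may schedule further events
    }
    std::cout << "Exiting @ tick " << now << "\n";
}

Running this prints ticks at 0, 500, 1000 and 1500, and then exits at tick 2000, mirroring the shape of the trace analyzed below.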

The EventQueue class has one awesome dump() function that prints a human friendly representation of the queue, and can be easily called from GDB.
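
For example, a hedged sketch of a GDB session (commands not verified here): break somewhere where an EventQueue pointer is in scope, such as the eventq parameter of the doSimLoop function mentioned above, and call dump on it:

b doSimLoop
run
call eventq->dump()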

We can also observe what is going on in the event queue with the Event debug flag.

Event execution is done at EventQueue::serviceOne():

Event *exit_event = eventq->serviceOne();

This calls the Event::process method of the event.

Another important technique is to use GDB and break at interesting points such as:

b Trace::OstreamLogger::logMessage()
b EventManager::schedule
b EventFunctionWrapper::process

although stepping into EventFunctionWrapper::process which does std::function is a bit of a pain: https://stackoverflow.com/questions/59429401/how-to-step-into-stdfunction-user-code-from-c-functional-with-gdb

Another potentially useful technique is to use:

--trace Event,ExecAll,FmtFlag,FmtStackTrace --trace-stdout

which automates the logging of Trace::OstreamLogger::logMessage() backtraces.

But alas, it misses which function callback is being scheduled, which is the awesome thing we actually want.

Then, once we had that, the most perfect thing ever would be to make the full event graph containing which events schedule which events!

Let’s now analyze every single event on a minimal gem5 syscall emulation mode in the simplest CPU that we have:

./run \
  --arch aarch64 \
  --emulator gem5 \
  --userland userland/arch/aarch64/freestanding/linux/hello.S \
  --trace Event,ExecAll,FmtFlag \
  --trace-stdout \
;

which gives:

      0: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 scheduled @ 0
**** REAL SIMULATION ****
      0: Event: Event_70: generic 70 scheduled @ 0
info: Entering event queue @ 0.  Starting simulation...
      0: Event: Event_70: generic 70 rescheduled @ 18446744073709551615
      0: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 executed @ 0
      0: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue    :   movz   x0, #1, #0        : IntAlu :  D=0x0000000000000001  flags=(IsInteger)
      0: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 rescheduled @ 500
    500: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 executed @ 500
    500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+4    :   adr   x1, #28            : IntAlu :  D=0x0000000000400098  flags=(IsInteger)
    500: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 rescheduled @ 1000
   1000: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 executed @ 1000
   1000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+8    :   ldr   w2, #4194464       : MemRead :  D=0x0000000000000006 A=0x4000a0  flags=(IsInteger|IsMemRef|IsLoad)
   1000: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 rescheduled @ 1500
   1500: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 executed @ 1500
   1500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+12    :   movz   x8, #64, #0       : IntAlu :  D=0x0000000000000040  flags=(IsInteger)
   1500: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 rescheduled @ 2000
   2000: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 executed @ 2000
   2000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+16    :   svc   #0x0               : IntAlu :   flags=(IsSerializeAfter|IsNonSpeculative|IsSyscall)
hello
   2000: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 rescheduled @ 2500
   2500: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 executed @ 2500
   2500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+20    :   movz   x0, #0, #0        : IntAlu :  D=0x0000000000000000  flags=(IsInteger)
   2500: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 rescheduled @ 3000
   3000: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 executed @ 3000
   3000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+24    :   movz   x8, #93, #0       : IntAlu :  D=0x000000000000005d  flags=(IsInteger)
   3000: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 rescheduled @ 3500
   3500: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 executed @ 3500
   3500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+28    :   svc   #0x0               : IntAlu :   flags=(IsSerializeAfter|IsNonSpeculative|IsSyscall)
   3500: Event: Event_71: generic 71 scheduled @ 3500
   3500: Event: Event_71: generic 71 executed @ 3500

On the event trace, we can first see:

0: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 scheduled @ 0

This schedules a tick event for time 0, and leads to the first clock tick.

Then:

0: Event: Event_70: generic 70 scheduled @ 0
0: Event: Event_70: generic 70 rescheduled @ 18446744073709551615

schedules the end of time event for time 0, which is later rescheduled to the actual end of time.

At:

0: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 executed @ 0
0: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue    :   movz   x0, #1, #0        : IntAlu :  D=0x0000000000000001  flags=(IsInteger)
0: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 rescheduled @ 500

the tick event happens, the instruction runs, and then the tick event is rescheduled 500 time units later. This is done at the end of AtomicSimpleCPU::tick():

if (_status != Idle)
    reschedule(tickEvent, curTick() + latency, true);

At:

3500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+28    :   svc   #0x0               : IntAlu :   flags=(IsSerializeAfter|IsNonSpeculative|IsSyscall)
3500: Event: Event_71: generic 71 scheduled @ 3500
3500: Event: Event_71: generic 71 executed @ 3500

the exit system call is called, and it then schedules an exit event, which gets executed and ends the simulation.

We guess then that Event_71 comes from the SE implementation of the exit syscall, so let’s just confirm: the backtrace contains:

exitSimLoop() at sim_events.cc:97 0x5555594746e0
exitImpl() at syscall_emul.cc:215 0x55555948c046
exitFunc() at syscall_emul.cc:225 0x55555948c147
SyscallDesc::doSyscall() at syscall_desc.cc:72 0x5555594949b6
Process::syscall() at process.cc:401 0x555559484717
SimpleThread::syscall() at 0x555559558059
ArmISA::SupervisorCall::invoke() at faults.cc:856 0x5555572950d7
BaseSimpleCPU::advancePC() at base.cc:681 0x555559083133
AtomicSimpleCPU::tick() at atomic.cc:757 0x55555907834c

and exitSimLoop() does:

new GlobalSimLoopExitEvent(when + simQuantum, message, exit_code, repeat);

Tested in gem5 12c917de54145d2d50260035ba7fa614e25317a3.

Let’s have a closer look at the initial magically scheduled events of the simulation.

Most events come from other events, but at least one initial event must be scheduled somehow from elsewhere to kick things off.

The initial tick event:

0: Event: AtomicSimpleCPU tick.wrapped_function_event: EventFunctionWrapped 39 scheduled @ 0

we’ll study by breaking at the point that prints the messages, b Trace::OstreamLogger::logMessage(), to see where events are being scheduled from:

Trace::OstreamLogger::logMessage() at trace.cc:149 0x5555593b3b1e
void Trace::Logger::dprintf_flag<char const*, char const*, unsigned long>() at 0x55555949e603
void Trace::Logger::dprintf<char const*, char const*, unsigned long>() at 0x55555949de58
Event::trace() at eventq.cc:395 0x55555946d109
EventQueue::schedule() at eventq_impl.hh:65 0x555557195441
EventManager::schedule() at eventq.hh:746 0x555557194aa2
AtomicSimpleCPU::activateContext() at atomic.cc:239 0x555559075531
SimpleThread::activate() at simple_thread.cc:177 0x555559545a63
Process::initState() at process.cc:283 0x555559484011
ArmProcess64::initState() at process.cc:126 0x55555730827a
ArmLinuxProcess64::initState() at process.cc:1,777 0x5555572d5e5e

The interesting call is at AtomicSimpleCPU::activateContext:

schedule(tickEvent, clockEdge(Cycles(0)));

which calls EventManager::schedule.

AtomicSimpleCPU is an EventManager because SimObject inherits from it.

tickEvent is an EventFunctionWrapper, which contains a std::function<void(void)> callback, and is initialized in the constructor as:

tickEvent([this]{ tick(); }, "AtomicSimpleCPU tick",
        false, Event::CPU_Tick_Pri),
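
The pattern is easy to mimic in isolation. Here is a toy version, not gem5's actual EventFunctionWrapper, just to show how a member function gets type erased into a schedulable callback:

// Toy version of the EventFunctionWrapper pattern, not gem5 code.
#include <functional>
#include <iostream>
#include <string>

struct ToyEventFunctionWrapper {
    std::function<void()> callback;
    std::string name;
    void process() { callback(); }  // what the event queue invokes
};

struct ToyCpu {
    // The lambda captures this, so process() ends up in tick().
    ToyEventFunctionWrapper tickEvent{[this] { tick(); }, "ToyCpu tick"};
    void tick() { std::cout << "tick\n"; }
};

int main() {
    ToyCpu cpu;
    cpu.tickEvent.process();  // the queue would do this when the event fires
}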

The call stack above ArmLinuxProcess64::initState is [pybind11] fuzziness, but if we grep a bit we find the Python call point:

src/python/m5/simulate.py

def instantiate(ckpt_dir=None):

    ...

    # Create the C++ sim objects and connect ports
    for obj in root.descendants(): obj.createCCObject()
    for obj in root.descendants(): obj.connectPorts()

    # Do a second pass to finish initializing the sim objects
    for obj in root.descendants(): obj.init()

    ...

    # Restore checkpoint (if any)
    if ckpt_dir:
        ...
    else:
        for obj in root.descendants(): obj.initState()

and this gets called from the toplevel Python scripts, e.g. for se.py, configs/common/Simulation.py does:

m5.instantiate(checkpoint_dir)

As we can see, initState is just one stage of generic SimObject initialization. root.descendants() goes over the entire SimObject tree calling initState().

Finally, we see that initState is part of the SimObject C++ API:

src/sim/sim_object.hh

class SimObject : public EventManager, public Serializable, public Drainable,
                  public Stats::Group
{

    ...

    /**
     * initState() is called on each SimObject when *not* restoring
     * from a checkpoint.  This provides a hook for state
     * initializations that are only required for a "cold start".
     */
    virtual void initState();

And initState is exposed to the Python API at:

build/ARM/python/_m5/param_SimObject.cc

module_init(py::module &m_internal)
{
    py::module m = m_internal.def_submodule("param_SimObject");
    py::class_<SimObjectParams, std::unique_ptr<SimObjectParams, py::nodelete>>(m, "SimObjectParams")
        .def_readwrite("name", &SimObjectParams::name)
        .def_readwrite("eventq_index", &SimObjectParams::eventq_index)
        ;

    py::class_<SimObject, Drainable, Serializable, Stats::Group, std::unique_ptr<SimObject, py::nodelete>>(m, "SimObject")
        .def("init", &SimObject::init)
        .def("initState", &SimObject::initState)
        .def("memInvalidate", &SimObject::memInvalidate)
        .def("memWriteback", &SimObject::memWriteback)
        .def("regProbePoints", &SimObject::regProbePoints)
        .def("regProbeListeners", &SimObject::regProbeListeners)
        .def("startup", &SimObject::startup)
        .def("loadState", &SimObject::loadState, py::arg("cp"))
        .def("getPort", &SimObject::getPort, pybind11::return_value_policy::reference, py::arg("if_name"), py::arg("idx"))
        ;

}

which is more magical than the other param classes, since py::class_<SimObject> has non-trivial methods; those are auto-generated by the cxx_exports code generation mechanism:

class SimObject(object):

    ...

    cxx_exports = [
        PyBindMethod("init"),
        PyBindMethod("initState"),
        PyBindMethod("memInvalidate"),
        PyBindMethod("memWriteback"),
        PyBindMethod("regProbePoints"),
        PyBindMethod("regProbeListeners"),
        PyBindMethod("startup"),
    ]

And the second magically scheduled event is the exit event:

0: Event: Event_70: generic 70 scheduled @ 0
0: Event: Event_70: generic 70 rescheduled @ 18446744073709551615

which is scheduled with backtrace:

Trace::OstreamLogger::logMessage() at trace.cc:149 0x5555593b3b1e
void Trace::Logger::dprintf_flag<char const*, char const*, unsigned long>() at 0x55555949e603
void Trace::Logger::dprintf<char const*, char const*, unsigned long>() at 0x55555949de58
Event::trace() at eventq.cc:395 0x55555946d109
EventQueue::schedule() at eventq_impl.hh:65 0x555557195441
BaseGlobalEvent::schedule() at global_event.cc:78 0x55555946d6f1
GlobalEvent::GlobalEvent() at 0x55555949d177
GlobalSimLoopExitEvent::GlobalSimLoopExitEvent() at sim_events.cc:61 0x555559474470
simulate() at simulate.cc:104 0x555559476d6f

which comes from object creation inside simulate(), through the GlobalEvent() constructor:

simulate_limit_event =
    new GlobalSimLoopExitEvent(mainEventQueue[0]->getCurTick(),
                                "simulate() limit reached", 0);

This event indicates that the simulation should finish by overriding bool isExitEvent(), which gets checked in the main event loop at EventQueue::serviceOne:

if (event->isExitEvent()) {
    assert(!event->flags.isSet(Event::Managed) ||
            !event->flags.isSet(Event::IsMainQueue)); // would be silly
    return event;

Tested in gem5 12c917de54145d2d50260035ba7fa614e25317a3.

Inside AtomicSimpleCPU::tick() we saw previously that the reschedule happens at:

    if (latency < clockPeriod())
        latency = clockPeriod();

    if (_status != Idle)
        reschedule(tickEvent, curTick() + latency, true);

so it is interesting to learn where that latency comes from.

From our logs, we see that all events happened with a 500 time unit interval between them, so that must be the value for all instructions of our simple example.

By GDBing it a bit, we see that none of our instructions incremented latency, and so it got set to clockPeriod(), which comes from ClockDomain::clockPeriod() which then likely comes from:

    parser.add_option("--cpu-clock", action="store", type="string",
                      default='2GHz',

because the time unit is picoseconds. This then shows up in config.ini as:

[system.cpu_clk_domain]
type=SrcClockDomain
clock=500
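
As a sanity check, the arithmetic is consistent: gem5 ticks are picoseconds, and 1 / 2 GHz = 0.5 ns = 500 ps, matching the 500 tick interval we observed between instructions.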

It will be interesting to see in GDB how AtomicSimpleCPU makes memory accesses, and to compare that with TimingSimpleCPU.

We assume that the memory access still goes through the CoherentXBar, but instead of generating an event to model delayed response, it must be doing the access directly.

Inside AtomicSimpleCPU::tick, we track ifetch_req and see:

        fault = thread->itb->translateAtomic(ifetch_req, thread->getTC(),
                                                BaseTLB::Execute);

We can compare that with what happens in TimingSimpleCPU:

        thread->itb->translateTiming(ifetch_req, thread->getTC(),
                &fetchTranslation, BaseTLB::Execute);

and so there it is: the ITB classes are the same, but there are separate Atomic and Timing methods!

The Timing one calls ArmISA::TLB::translateComplete.

Tested in gem5 b4879ae5b0b6644e6836b0881e4da05c64a6550d.

The translation happens on the EmulationPageTable, and seems to happen atomically, without making any extra memory requests.

TODO confirm from code, notably by seeing where the translation table is set.

But we can confirm it with logging, using:

--trace DRAM,ExecAll,FmtFlag

which gives:

      0: DRAM: system.mem_ctrls: recvAtomic: ReadReq 0x78
      0: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue    :   movz   x0, #1, #0        : IntAlu :  D=0x0000000000000001  flags=(IsInteger)
    500: DRAM: system.mem_ctrls: recvAtomic: ReadReq 0x7c
    500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+4    :   adr   x1, #28            : IntAlu :  D=0x0000000000400098  flags=(IsInteger)
   1000: DRAM: system.mem_ctrls: recvAtomic: ReadReq 0x80
   1000: DRAM: system.mem_ctrls: recvAtomic: ReadReq 0xa0
   1000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+8    :   ldr   w2, #4194464       : MemRead :  D=0x0000000000000006 A=0x4000a0  flags=(IsInteger|IsMemRef|IsLoad)
   1500: DRAM: system.mem_ctrls: recvAtomic: ReadReq 0x84
   1500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+12    :   movz   x8, #64, #0       : IntAlu :  D=0x0000000000000040  flags=(IsInteger)
   2000: DRAM: system.mem_ctrls: recvAtomic: ReadReq 0x88
   2000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+16    :   svc   #0x0               : IntAlu :   flags=(IsSerializeAfter|IsNonSpeculative|IsSyscall)
hello
   2500: DRAM: system.mem_ctrls: recvAtomic: ReadReq 0x8c
   2500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+20    :   movz   x0, #0, #0        : IntAlu :  D=0x0000000000000000  flags=(IsInteger)
   3000: DRAM: system.mem_ctrls: recvAtomic: ReadReq 0x90
   3000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+24    :   movz   x8, #93, #0       : IntAlu :  D=0x000000000000005d  flags=(IsInteger)
   3500: DRAM: system.mem_ctrls: recvAtomic: ReadReq 0x94
   3500: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+28    :   svc   #0x0               : IntAlu :   flags=(IsSerializeAfter|IsNonSpeculative|IsSyscall)
Exiting @ tick 3500 because exiting with last active thread context
   3500: DRAM: system.mem_ctrls_0: Computing stats due to a dump callback
   3500: DRAM: system.mem_ctrls_1: Computing stats due to a dump callback

So we see that before every instruction execution there was a DRAM event! Also, each read happens 4 bytes after the previous one, which is consistent with instruction fetches.

The DRAM addresses are very close to zero e.g. 0x78 for the first instruction, and therefore we guess that they are physical since the ELF entry point is much higher:

./run-toolchain --arch aarch64 readelf -- -h "$(./getvar --arch aarch64 userland_build_dir)/arch/aarch64/freestanding/linux/hello.out"

at:

  Entry point address:               0x400078
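
This is also consistent with a simple linear mapping guess: if the first page of the binary at virtual address 0x400000 gets the first physical page at address 0, then the entry point maps to 0x400078 - 0x400000 = 0x78, which is exactly the first fetch address in the trace.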

For LDR, we see that there was an extra DRAM read as well after the fetch read, as expected.

Tested in gem5 b4879ae5b0b6644e6836b0881e4da05c64a6550d.

Now, let’s move on to TimingSimpleCPU, which is just like AtomicSimpleCPU internally, but now the memory requests don’t actually finish immediately: see gem5 CPU types.

This means that simulation will be much more accurate, and the DRAM memory will be modelled.

TODO: analyze better what each of the memory events means. For now, we have just collected a bunch of data, which still needs interpreting. The CPU specifics in this section are already insightful however.

TimingSimpleCPU should be the second simplest CPU to analyze, so let’s give it a try:

./run \
  --arch aarch64 \
  --emulator gem5 \
  --userland userland/arch/aarch64/freestanding/linux/hello.S \
  --trace Event,ExecAll,FmtFlag \
  --trace-stdout \
  -- \
  --cpu-type TimingSimpleCPU \
;

As of LKMC 78ce2dabe18ef1d87dc435e5bc9369ce82e8d6d2 gem5 12c917de54145d2d50260035ba7fa614e25317a3 the log is now much more complex.

Here is an abridged version with:

  • the beginning up to the second instruction

  • the ending

because all that happens in between is exactly the same as the first two instructions and therefore boring.

We have also manually added:

  • double newlines before each event execution

  • line IDs to be able to refer to specific events more easily (#0, #1, etc.)

#0       0: Event: system.cpu.wrapped_function_event: EventFunctionWrapped 43 scheduled @ 0
**** REAL SIMULATION ****
#1       0: Event: system.mem_ctrls_0.wrapped_function_event: EventFunctionWrapped 14 scheduled @ 7786250
#2       0: Event: system.mem_ctrls_1.wrapped_function_event: EventFunctionWrapped 20 scheduled @ 7786250
#3       0: Event: Event_74: generic 74 scheduled @ 0
info: Entering event queue @ 0.  Starting simulation...
#4       0: Event: Event_74: generic 74 rescheduled @ 18446744073709551615

#5       0: Event: system.cpu.wrapped_function_event: EventFunctionWrapped 43 executed @ 0
#6       0: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 9 scheduled @ 0
#7       0: Event: system.membus.reqLayer0.wrapped_function_event: EventFunctionWrapped 60 scheduled @ 1000

#8       0: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 9 executed @ 0
#9       0: Event: system.mem_ctrls_0.wrapped_function_event: EventFunctionWrapped 12 scheduled @ 0
#10      0: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 10 scheduled @ 46250
#11      0: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 9 scheduled @ 5000

#12      0: Event: system.mem_ctrls_0.wrapped_function_event: EventFunctionWrapped 12 executed @ 0
#13      0: Event: system.mem_ctrls_0.wrapped_function_event: EventFunctionWrapped 15 scheduled @ 0

#14      0: Event: system.mem_ctrls_0.wrapped_function_event: EventFunctionWrapped 15 executed @ 0

#15   1000: Event: system.membus.reqLayer0.wrapped_function_event: EventFunctionWrapped 60 executed @ 1000

#16   5000: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 9 executed @ 5000

#17  46250: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 10 executed @ 46250
#18  46250: Event: system.mem_ctrls.port-RespPacketQueue.wrapped_function_event: EventFunctionWrapped 8 scheduled @ 74250

#19  74250: Event: system.mem_ctrls.port-RespPacketQueue.wrapped_function_event: EventFunctionWrapped 8 executed @ 74250
#20  74250: Event: system.membus.slave[1]-RespPacketQueue.wrapped_function_event: EventFunctionWrapped 64 scheduled @ 77000
#21  74250: Event: system.membus.respLayer1.wrapped_function_event: EventFunctionWrapped 65 scheduled @ 77000

#22  77000: Event: system.membus.respLayer1.wrapped_function_event: EventFunctionWrapped 65 executed @ 77000

#23  77000: Event: system.membus.slave[1]-RespPacketQueue.wrapped_function_event: EventFunctionWrapped 64 executed @ 77000
#24  77000: Event: Event_40: Timing CPU icache tick 40 scheduled @ 77000

#25  77000: Event: Event_40: Timing CPU icache tick 40 executed @ 77000
     77000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue    :   movz   x0, #1, #0        : IntAlu :  D=0x0000000000000001  flags=(IsInteger)
#26  77000: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 9 scheduled @ 77000
#27  77000: Event: system.membus.reqLayer0.wrapped_function_event: EventFunctionWrapped 60 scheduled @ 78000

#28  77000: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 9 executed @ 77000
#29  77000: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 10 scheduled @ 95750
#30  77000: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 9 scheduled @ 77000

#31  77000: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 9 executed @ 77000

#32  78000: Event: system.membus.reqLayer0.wrapped_function_event: EventFunctionWrapped 60 executed @ 78000

#33  95750: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 10 executed @ 95750
#34  95750: Event: system.mem_ctrls.port-RespPacketQueue.wrapped_function_event: EventFunctionWrapped 8 scheduled @ 123750

#35 123750: Event: system.mem_ctrls.port-RespPacketQueue.wrapped_function_event: EventFunctionWrapped 8 executed @ 123750
#36 123750: Event: system.membus.slave[1]-RespPacketQueue.wrapped_function_event: EventFunctionWrapped 64 scheduled @ 126000
#37 123750: Event: system.membus.respLayer1.wrapped_function_event: EventFunctionWrapped 65 scheduled @ 126000

#38 126000: Event: system.membus.respLayer1.wrapped_function_event: EventFunctionWrapped 65 executed @ 126000

#39 126000: Event: system.membus.slave[1]-RespPacketQueue.wrapped_function_event: EventFunctionWrapped 64 executed @ 126000
#40 126000: Event: Event_40: Timing CPU icache tick 40 scheduled @ 126000

#41 126000: Event: Event_40: Timing CPU icache tick 40 executed @ 126000
    126000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+4    :   adr   x1, #28            : IntAlu :  D=0x0000000000400098  flags=(IsInteger)
#42 126000: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 9 scheduled @ 126000
#43 126000: Event: system.membus.reqLayer0.wrapped_function_event: EventFunctionWrapped 60 scheduled @ 127000

    [...]

    469000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+28    :   svc   #0x0               : IntAlu :   flags=(IsSerializeAfter|IsNonSpeculative|IsSyscall)
    469000: Event: Event_75: generic 75 scheduled @ 469000
    469000: Event: Event_75: generic 75 executed @ 469000

Looking at the generated config.dot.svg can give a better intuition of the shape of the memory system (Figure 2, “config.dot.svg for a TimingSimpleCPU without caches.”), so it is good to keep that in mind.

Figure 2. config.dot.svg for a TimingSimpleCPU without caches.

It is also helpful to see this as a tree of events where one execute event schedules other events:

    | | | | |
    0 1 2 3 4   0 TimingSimpleCPU::fetch
    5
    |
    +---+
    |   |
    6   7       6 DRAMCtrl::processNextReqEvent (0)
    8   15      7 BaseXBar::Layer::releaseLayer
    |
+---+---+
|   |   |
9   10  11      9 DRAMCtrl::Rank::processActivateEvent
12  17  16     10 DRAMCtrl::processRespondEvent (46.25)
|   |          11 DRAMCtrl::processNextReqEvent (5)
|   |
13  18         13 DRAMCtrl::Rank::processPowerEvent
14  19         18 PacketQueue::processSendEvent (28)
    |
    +---+
    |   |
    20  21     20 PacketQueue::processSendEvent  (2.75)
    23  22     21 BaseXBar::Layer<SrcType, DstType>::releaseLayer
    |
    24         24 TimingSimpleCPU::IcachePort::ITickEvent::process (0)
    25
    |
    +---+
    |   |
    26  27     26 DRAMCtrl::processNextReqEvent
    28  32     27 BaseXBar::Layer<SrcType, DstType>::releaseLayer
    |
    +---+
    |   |
    29  30     29 DRAMCtrl::processRespondEvent
    33  31     30 DRAMCtrl::processNextReqEvent
    |
    34         34 PacketQueue::processSendEvent
    35
    |
    +---+
    |   |
    36  37     36 PacketQueue::processSendEvent
    39  38     37 BaseXBar::Layer<SrcType, DstType>::releaseLayer
    |
    40         40 TimingSimpleCPU::IcachePort::ITickEvent::process
    41
    |
    +---+
    |   |
    42  43     42 DRAMCtrl::processNextReqEvent
               43 BaseXBar::Layer<SrcType, DstType>::releaseLayer

Note that every schedule is followed by an execution, so we put them together, for example:

    |   |
    6   7    6 DRAMCtrl::processNextReqEvent (0)
    8   15   7 BaseXBar::Layer::releaseLayer (0)
    |

means:

  • 6: schedule DRAMCtrl::processNextReqEvent to run in 0 ns after the execution that scheduled it

  • 8: execute DRAMCtrl::processNextReqEvent

  • 7: schedule BaseXBar::Layer::releaseLayer to run in 0 ns after the execution that scheduled it

  • 15: execute BaseXBar::Layer::releaseLayer

With this, we can focus on going up the event tree from an event of interest until we see what originally caused it!

Notably, the above tree contains the execution of the first two instructions.

Observe how the events leading up to the second instruction are basically a copy of those of the first one: this is the basic TimingSimpleCPU event loop in action.

One-line summary of the events:

  • #5: adds the request to the DRAM queue, and schedules a DRAMCtrl::processNextReqEvent, which runs immediately and sees that request

  • #8: picks up the only request from the DRAM read queue (readQueue) and services that.

    If there were multiple requests, priority arbitration under DRAMCtrl::chooseNext could choose a different one than the first, based on packet priorities

    This puts the request on the response queue respQueue and schedules another DRAMCtrl::processNextReqEvent, but the request queue is now empty, so that one does not schedule further events

  • #17: picks up the only request from the DRAM response queue and services that by placing it in yet another queue, and scheduling the PacketQueue::processSendEvent which will later pick up that packet

  • #19: picks up the request from the previous queue, and forwards it to another queue, and schedules yet another PacketQueue::processSendEvent

    The current one is the DRAM passing the message to the XBar, and the next processSendEvent is the XBar finally sending it back to the CPU

  • #23: the XBar port is actually sending the reply back.

    It knows which CPU core to send the reply to because ports keep a map from request to source:

    const auto route_lookup = routeTo.find(pkt->req);

Schedules TimingSimpleCPU::fetch through:

EventManager::schedule
TimingSimpleCPU::activateContext
SimpleThread::activate
Process::initState
ArmProcess64::initState
ArmLinuxProcess64::initState

This schedules the initial tick, much like for AtomicSimpleCPU.

This time however, it is not a tick as in AtomicSimpleCPU, but rather a fetch event that gets scheduled for later on, since reading DRAM memory now takes time:

TimingSimpleCPU::activateContext(ThreadID thread_num)
{
    DPRINTF(SimpleCPU, "ActivateContext %d\n", thread_num);

    assert(thread_num < numThreads);

    threadInfo[thread_num]->notIdleFraction = 1;
    if (_status == BaseSimpleCPU::Idle)
        _status = BaseSimpleCPU::Running;

    // kick things off by initiating the fetch of the next instruction
    if (!fetchEvent.scheduled())
        schedule(fetchEvent, clockEdge(Cycles(0)));

By looking at the source, we see that fetchEvent runs TimingSimpleCPU::fetch.

Just like for AtomicSimpleCPU, this call comes from the initState call, which is exposed on SimObject and ultimately comes from Python.

Next come the mem_ctrls refresh events (#1 and #2 in the log), which are scheduled with backtrace:

EventManager::schedule
DRAMCtrl::Rank::startup
DRAMCtrl::startup

Snippets:

void
DRAMCtrl::startup()
{
    // remember the memory system mode of operation
    isTimingMode = system()->isTimingMode();

    if (isTimingMode) {
        // timestamp offset should be in clock cycles for DRAMPower
        timeStampOffset = divCeil(curTick(), tCK);

        // update the start tick for the precharge accounting to the
        // current tick
        for (auto r : ranks) {
            r->startup(curTick() + tREFI - tRP);
        }

        // shift the bus busy time sufficiently far ahead that we never
        // have to worry about negative values when computing the time for
        // the next request, this will add an insignificant bubble at the
        // start of simulation
        nextBurstAt = curTick() + tRP + tRCD;
    }
}

which then calls:

void
DRAMCtrl::Rank::startup(Tick ref_tick)
{
    assert(ref_tick > curTick());

    pwrStateTick = curTick();

    // kick off the refresh, and give ourselves enough time to
    // precharge
    schedule(refreshEvent, ref_tick);
}

DRAMCtrl::startup is itself a SimObject method exposed to Python and called from simulate in src/python/m5/simulate.py:

def simulate(*args, **kwargs):
    global need_startup

    if need_startup:
        root = objects.Root.getInstance()
        for obj in root.descendants(): obj.startup()

where simulate happens after m5.instantiate, and both are called directly from the toplevel scripts, e.g. for se.py in configs/common/Simulation.py:

def run(options, root, testsys, cpu_class):
    ...
            exit_event = m5.simulate()

By looking up some variable definitions in the source, we now see some memory parameters clearly:

  • ranks: std::vector<DRAMCtrl::Rank*> with 2 elements. TODO why do we have 2? What does it represent? Likely linked to config.ini at system.mem_ctrls.ranks_per_channel=2: https://en.wikipedia.org/wiki/Memory_rank

  • tCK=1250, tREFI=7800000, tRP=13750, tRCD=13750: all defined in a single code location with a comment:

         /**
         * Basic memory timing parameters initialized based on parameter
         * values.
         */

    Their values can be seen under config.ini and they are documented in src/mem/DRAMCtrl.py e.g.:

        # the base clock period of the DRAM
        tCK = Param.Latency("Clock period")
    
        # minimum time between a precharge and subsequent activate
        tRP = Param.Latency("Row precharge time")
    
        # the amount of time in nanoseconds from issuing an activate command
        # to the data being available in the row buffer for a read/write
        tRCD = Param.Latency("RAS to CAS delay")
    
        # refresh command interval, how often a "ref" command needs
        # to be sent. It is 7.8 us for a 64ms refresh requirement
        tREFI = Param.Latency("Refresh command interval")

So we realize that we are going into deep DRAM modelling, more detail than a mere mortal should ever need to know.

curTick() + tREFI - tRP = 0 + 7800000 - 13750 = 7786250, which is when that refreshEvent was scheduled. Our simulation ends way before that point however, so we will never know what it did, thank God.

This is just the startup of the second rank, see: TimingSimpleCPU analysis #1.

se.py allocates the memory controller at configs/common/MemConfig.py:

def config_mem(options, system):

    ...

    opt_mem_channels = options.mem_channels

    ...

    nbr_mem_ctrls = opt_mem_channels

    ...

    for r in system.mem_ranges:
        for i in range(nbr_mem_ctrls):
            mem_ctrl = create_mem_ctrl(cls, r, i, nbr_mem_ctrls, intlv_bits,
                                       intlv_size)

            ...

            mem_ctrls.append(mem_ctrl)

As for #3 and #4, from the timing we know what that one is: the end of time exit event, like for AtomicSimpleCPU.

Executes TimingSimpleCPU::fetch().

The log shows that event ID 43 is now executing: we had previously seen event 43 get scheduled and had analyzed it to be the initial fetch.

We can step into TimingSimpleCPU::fetch() to confirm that the expected [elf] entry point is being fetched. We can inspect the ELF with:

./run-toolchain --arch aarch64 readelf -- -h "$(./getvar --arch aarch64 userland_build_dir)/arch/aarch64/freestanding/linux/hello.out"

which contains:

  Entry point address:               0x400078

and by the time we go past:

TimingSimpleCPU::fetch()
{
    ...
    if (needToFetch) {
        ...
        setupFetchRequest(ifetch_req);
        DPRINTF(SimpleCPU, "Translating address %#x\n", ifetch_req->getVaddr());
        thread->itb->translateTiming(ifetch_req, thread->getTC(),
                &fetchTranslation, BaseTLB::Execute);

BaseSimpleCPU::setupFetchRequest sets up the fetch of the expected entry point by reading the PC:

p/x ifetch_req->getVaddr()

Still during the execution of the fetch, execution then moves into the address translation ArmISA::TLB::translateTiming, and after a call to:

TLB::translateSe

the request now contains the physical address:

_paddr = 0x78

Schedules DRAMCtrl::processNextReqEvent through:

EventManager::schedule
DRAMCtrl::addToReadQueue
DRAMCtrl::recvTimingReq
DRAMCtrl::MemoryPort::recvTimingReq
TimingRequestProtocol::sendReq
MasterPort::sendTimingReq
CoherentXBar::recvTimingReq
CoherentXBar::CoherentXBarSlavePort::recvTimingReq
TimingRequestProtocol::sendReq
MasterPort::sendTimingReq
TimingSimpleCPU::sendFetch
TimingSimpleCPU::FetchTranslation::finish
ArmISA::TLB::translateComplete
ArmISA::TLB::translateTiming
ArmISA::TLB::translateTiming
TimingSimpleCPU::fetch

The event loop has started, and the magic initialization schedulings are not happening anymore: from now on, every event is scheduled by another event.

From the trace, we see that we are already running from the event queue under TimingSimpleCPU::fetch as expected.

From the backtrace we see the tortuous path that the data request takes, going through:

  • ArmISA::TLB

  • CoherentXBar

  • DRAMCtrl

This matches the config.ini system image, since we see that the request goes through the CoherentXBar before reaching memory, like all other CPU memory accesses, see also: gem5 crossbar interconnect.

The scheduling happens at frame DRAMCtrl::addToReadQueue:

     // If we are not already scheduled to get a request out of the
     // queue, do so now
     if (!nextReqEvent.scheduled()) {
         DPRINTF(DRAM, "Request scheduled immediately\n");
         schedule(nextReqEvent, curTick());
     }

From this we deduce that the DRAM has a request queue of some sort, and that the fetch:

  • has added a read request to that queue

  • and has scheduled a future event to service that queue

The signature of the function is:

DRAMCtrl::addToReadQueue(PacketPtr pkt, unsigned int pktCount)

where PacketPtr is a pointer to the class Packet, and so clearly the packet is coming from above.

From:

p/x *pkt

we see:

addr = 0x78

which from TimingSimpleCPU analysis #5 we know is the physical address of the ELF entry point.

Communication goes through certain components via the class Port interface, e.g. at TimingSimpleCPU::sendFetch a call is made to send the packet forward:

icachePort.sendTimingReq(ifetch_pkt)

which ends up calling:

peer->recvTimingReq(pkt);

to reach the receiving side:

CoherentXBar::CoherentXBarSlavePort::recvTimingReq

Ports are also used to connect the XBar and the DRAM.
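
The pairing is easy to picture with a toy model. This is a sketch of the idea only, not gem5's real Port classes: each port holds a pointer to its peer, and a send on one side becomes a virtual recv call on the other:

// Toy sketch of gem5's port pairing, not the real Port API.
#include <iostream>

struct Packet { unsigned addr; };

struct Port {
    Port *peer = nullptr;
    virtual void recvTimingReq(Packet *pkt) = 0;
    virtual ~Port() = default;
    // Sending just becomes a virtual call on the bound peer.
    void sendTimingReq(Packet *pkt) { peer->recvTimingReq(pkt); }
};

struct CpuIcachePort : Port {
    void recvTimingReq(Packet *) override {}  // CPU side never receives requests
};

struct XBarSlavePort : Port {
    void recvTimingReq(Packet *pkt) override {
        std::cout << "xbar got request for 0x" << std::hex << pkt->addr << "\n";
        // a real XBar would forward the packet towards the DRAM port here
    }
};

int main() {
    CpuIcachePort icachePort;
    XBarSlavePort xbarPort;
    icachePort.peer = &xbarPort;  // ports are bound pairwise at config time
    xbarPort.peer = &icachePort;
    Packet ifetch_pkt{0x78};
    icachePort.sendTimingReq(&ifetch_pkt);  // like TimingSimpleCPU::sendFetch
}

The real hierarchy adds packet queues and timing on top of this bare peer call, as the RespPacketQueue frames in the backtraces above show.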

We will then see that at TimingSimpleCPU analysis #20 a reply packet will come back through the port interface down to the icache port, and only then does the decoding and execution happen.

Schedules BaseXBar::Layer::releaseLayer through:

EventManager::schedule
BaseXBar::Layer<SlavePort, MasterPort>::occupyLayer
BaseXBar::Layer<SlavePort, MasterPort>::succeededTiming
CoherentXBar::recvTimingReq
CoherentXBar::CoherentXBarSlavePort::recvTimingReq
TimingRequestProtocol::sendReq
MasterPort::sendTimingReq
TimingSimpleCPU::sendFetch
TimingSimpleCPU::FetchTranslation::finish
ArmISA::TLB::translateComplete
ArmISA::TLB::translateTiming
ArmISA::TLB::translateTiming
TimingSimpleCPU::fetch

which schedules a SimpleMemory::release.

Executes DRAMCtrl::processNextReqEvent.

Schedules DRAMCtrl::Rank::processActivateEvent through:

EventManager::schedule
DRAMCtrl::activateBank
DRAMCtrl::doDRAMAccess
DRAMCtrl::processNextReqEvent

Schedules DRAMCtrl::processRespondEvent through:

EventManager::schedule
DRAMCtrl::processNextReqEvent

Schedules DRAMCtrl::processNextReqEvent through:

EventManager::schedule
DRAMCtrl::processNextReqEvent

Executes DRAMCtrl::Rank::processActivateEvent.

Schedules DRAMCtrl::Rank::processPowerEvent through:

EventManager::schedule
DRAMCtrl::Rank::schedulePowerEvent
DRAMCtrl::Rank::processActivateEvent

Executes DRAMCtrl::Rank::processPowerEvent.

This must just be some power statistics stuff, as it does not schedule anything else.

Executes BaseXBar::Layer<SrcType, DstType>::releaseLayer.

Executes DRAMCtrl::processNextReqEvent().

Executes DRAMCtrl::processRespondEvent().

Schedules PacketQueue::processSendEvent() through:

PacketQueue::schedSendEvent
PacketQueue::schedSendTiming
QueuedSlavePort::schedTimingResp
DRAMCtrl::accessAndRespond
DRAMCtrl::processRespondEvent

Executes PacketQueue::processSendEvent().

Schedules PacketQueue::processSendEvent through:

EventManager::schedule
PacketQueue::schedSendEvent
PacketQueue::schedSendTiming
QueuedSlavePort::schedTimingResp
CoherentXBar::recvTimingResp
CoherentXBar::CoherentXBarMasterPort::recvTimingResp
TimingResponseProtocol::sendResp
SlavePort::sendTimingResp
RespPacketQueue::sendTiming
PacketQueue::sendDeferredPacket
PacketQueue::processSendEvent

From this backtrace, we see that this event is happening as the fetch reply packet finally comes back from DRAM.

Schedules BaseXBar::Layer<SrcType, DstType>::releaseLayer through:

EventManager::schedule
BaseXBar::Layer<MasterPort, SlavePort>::occupyLayer
BaseXBar::Layer<MasterPort, SlavePort>::succeededTiming
CoherentXBar::recvTimingResp
CoherentXBar::CoherentXBarMasterPort::recvTimingResp
TimingResponseProtocol::sendResp
SlavePort::sendTimingResp
RespPacketQueue::sendTiming
PacketQueue::sendDeferredPacket
PacketQueue::processSendEvent

Executes BaseXBar::Layer<SrcType, DstType>::releaseLayer.

Executes PacketQueue::processSendEvent.

Schedules TimingSimpleCPU::IcachePort::ITickEvent::process() through:

EventManager::schedule
TimingSimpleCPU::TimingCPUPort::TickEvent::schedule
TimingSimpleCPU::IcachePort::recvTimingResp
TimingResponseProtocol::sendResp
SlavePort::sendTimingResp
RespPacketQueue::sendTiming
PacketQueue::sendDeferredPacket
PacketQueue::processSendEvent

Executes TimingSimpleCPU::IcachePort::ITickEvent::process().

This custom process then calls TimingSimpleCPU::completeIfetch(PacketPtr pkt), and that finally executes the very first instruction:

77000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue    :   movz   x0, #1, #0        : IntAlu :  D=0x0000000000000001  flags=(IsInteger)

The end of this instruction must be setting things up in a way that can continue the PC walk loop, and by looking at the source and traces, it clearly comes from TimingSimpleCPU::advanceInst, which calls TimingSimpleCPU::fetch.

And TimingSimpleCPU::fetch is the very thing we did in this simulation at TimingSimpleCPU analysis #0!!! OMG, that’s the loop.

Schedules DRAMCtrl::processNextReqEvent through:

EventManager::schedule
DRAMCtrl::addToReadQueue
DRAMCtrl::recvTimingReq
DRAMCtrl::MemoryPort::recvTimingReq
TimingRequestProtocol::sendReq
MasterPort::sendTimingReq
CoherentXBar::recvTimingReq
CoherentXBar::CoherentXBarSlavePort::recvTimingReq
TimingRequestProtocol::sendReq
MasterPort::sendTimingReq
TimingSimpleCPU::sendFetch
TimingSimpleCPU::FetchTranslation::finish
ArmISA::TLB::translateComplete
ArmISA::TLB::translateTiming
ArmISA::TLB::translateTiming
TimingSimpleCPU::fetch
TimingSimpleCPU::advanceInst
TimingSimpleCPU::completeIfetch
TimingSimpleCPU::IcachePort::ITickEvent::process

Schedules BaseXBar::Layer<SrcType, DstType>::releaseLayer through:

EventManager::schedule
BaseXBar::Layer<SlavePort, MasterPort>::occupyLayer
BaseXBar::Layer<SlavePort, MasterPort>::succeededTiming
CoherentXBar::recvTimingReq
CoherentXBar::CoherentXBarSlavePort::recvTimingReq
TimingRequestProtocol::sendReq
MasterPort::sendTimingReq
TimingSimpleCPU::sendFetch
TimingSimpleCPU::FetchTranslation::finish
ArmISA::TLB::translateComplete
ArmISA::TLB::translateTiming
ArmISA::TLB::translateTiming
TimingSimpleCPU::fetch
TimingSimpleCPU::advanceInst
TimingSimpleCPU::completeIfetch
TimingSimpleCPU::IcachePort::ITickEvent::process

Executes DRAMCtrl::processNextReqEvent.

Schedules DRAMCtrl::processRespondEvent().

One important thing we want to check now is how the memory reads are going to make the processor stall in the middle of an instruction.

Since we are using a simple CPU without a pipeline, a data memory access stalls everything: there is no further progress until the memory response comes back.

For that, we can use GDB to break at the TimingSimpleCPU::completeIfetch of the first LDR done in our test program.

By doing that, we see that this time at:

if (curStaticInst && curStaticInst->isMemRef()) {
    // load or store: just send to dcache
    Fault fault = curStaticInst->initiateAcc(&t_info, traceData);

    if (_status == BaseSimpleCPU::Running) {
    }
} else if (curStaticInst) {
    // non-memory instruction: execute completely now
    Fault fault = curStaticInst->execute(&t_info, traceData);

  • curStaticInst->isMemRef() is true, and there is no instruction execute call in that part of the branch; the execute call only happens for instructions that don’t touch memory

  • _status is BaseSimpleCPU::Status::DcacheWaitResponse and advanceInst is not yet called

So, where is the execute happening? Well, I’ll satisfy myself with a quick source grep and guess:

  • curStaticInst->initiateAcc sets up some memory request events

  • which likely leads up to TimingSimpleCPU::completeDataAccess, which right off the bat ends in advanceInst.

    It also calls curStaticInst->completeAcc, which pairs up with the initiateAcc call.
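
The split is easy to picture with a toy model. This is a sketch of the idea only, not gem5's StaticInst API: a timing memory instruction is two callbacks around an asynchronous request:

// Toy sketch of the initiateAcc/completeAcc split, not gem5's real API.
#include <functional>
#include <iostream>

struct ToyLoad {
    // Phase 1: issue the memory request and register the completion.
    void initiateAcc(std::function<void(int)> &pending) {
        std::cout << "request sent, CPU waits\n";
        pending = [this](int data) { completeAcc(data); };
    }
    // Phase 2: runs when the response event fires; finishes the instruction.
    void completeAcc(int data) {
        std::cout << "writeback " << data << ", then advanceInst()\n";
    }
};

int main() {
    ToyLoad ldr;
    std::function<void(int)> pending;
    ldr.initiateAcc(pending);  // instruction starts, CPU stalls
    pending(6);                // later: the memory response event completes it
}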

The following is the region of interest of the event log:

 175000: Event: Event_40: Timing CPU icache tick 40 executed @ 175000
 175000: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 9 scheduled @ 175000
 175000: Event: system.membus.reqLayer0.wrapped_function_event: EventFunctionWrapped 60 scheduled @ 176000

 175000: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 9 executed @ 175000
 175000: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 10 scheduled @ 193750
 175000: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 9 scheduled @ 175000

 175000: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 9 executed @ 175000

 176000: Event: system.membus.reqLayer0.wrapped_function_event: EventFunctionWrapped 60 executed @ 176000

 193750: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 10 executed @ 193750
 193750: Event: system.mem_ctrls.port-RespPacketQueue.wrapped_function_event: EventFunctionWrapped 8 scheduled @ 221750

 221750: Event: system.mem_ctrls.port-RespPacketQueue.wrapped_function_event: EventFunctionWrapped 8 executed @ 221750
 221750: Event: system.membus.slave[2]-RespPacketQueue.wrapped_function_event: EventFunctionWrapped 66 scheduled @ 224000
 221750: Event: system.membus.respLayer2.wrapped_function_event: EventFunctionWrapped 67 scheduled @ 224000

 224000: Event: system.membus.respLayer2.wrapped_function_event: EventFunctionWrapped 67 executed @ 224000

 224000: Event: system.membus.slave[2]-RespPacketQueue.wrapped_function_event: EventFunctionWrapped 66 executed @ 224000
 224000: Event: Event_42: Timing CPU dcache tick 42 scheduled @ 224000

 224000: Event: Event_42: Timing CPU dcache tick 42 executed @ 224000
 175000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+8    :   ldr   w2, #4194464       : MemRead :  D=0x0000000000000006 A=0x4000a0  flags=(IsInteger|IsMemRef|IsLoad)

We first find it by looking for the ExecEnable of LDR.

Then, we go up to the previous Timing CPU icache tick event, which from the analysis of the previous instructions we know is where the instruction execution starts: the LDR instruction fetch is done by then!

Next, several events happen as the data request percolates through the memory system; it must be very similar to the instruction fetches. TODO analyze event function names.

Finally we reach:

 224000: Event: Event_42: Timing CPU dcache tick 42 executed @ 224000
 175000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+8    :   ldr   w2, #4194464       : MemRead :  D=0x0000000000000006 A=0x4000a0  flags=(IsInteger|IsMemRef|IsLoad)

from which we guess:

  • 224000: this is the time that the data request finally returned, and at which execute gets called

  • 175000: the log finally prints at the end of execution, but it does not show the actual time that things finished, but rather the time that the ifetch finished, which happened in the past

Let’s just add --caches to see if things go any faster:

      0: Event: system.cpu.wrapped_function_event: EventFunctionWrapped 43 scheduled @ 0
**** REAL SIMULATION ****
      0: Event: system.mem_ctrls_0.wrapped_function_event: EventFunctionWrapped 14 scheduled @ 7786250
      0: Event: system.mem_ctrls_1.wrapped_function_event: EventFunctionWrapped 20 scheduled @ 7786250
      0: Event: Event_84: generic 84 scheduled @ 0
info: Entering event queue @ 0.  Starting simulation...
      0: Event: Event_84: generic 84 rescheduled @ 18446744073709551615
      0: Event: system.cpu.wrapped_function_event: EventFunctionWrapped 43 executed @ 0
      0: Event: system.cpu.icache.mem_side-MemSidePort.wrapped_function_event: EventFunctionWrapped 59 scheduled @ 1000
   1000: Event: system.cpu.icache.mem_side-MemSidePort.wrapped_function_event: EventFunctionWrapped 59 executed @ 1000
   1000: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 9 scheduled @ 1000
   1000: Event: system.membus.reqLayer0.wrapped_function_event: EventFunctionWrapped 70 scheduled @ 2000
   1000: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 9 executed @ 1000
   1000: Event: system.mem_ctrls_0.wrapped_function_event: EventFunctionWrapped 12 scheduled @ 1000
   1000: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 10 scheduled @ 46250
   1000: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 9 scheduled @ 5000
   1000: Event: system.mem_ctrls_0.wrapped_function_event: EventFunctionWrapped 12 executed @ 1000
   1000: Event: system.mem_ctrls_0.wrapped_function_event: EventFunctionWrapped 15 scheduled @ 1000
   1000: Event: system.mem_ctrls_0.wrapped_function_event: EventFunctionWrapped 15 executed @ 1000
   2000: Event: system.membus.reqLayer0.wrapped_function_event: EventFunctionWrapped 70 executed @ 2000
   5000: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 9 executed @ 5000
  46250: Event: system.mem_ctrls.wrapped_function_event: EventFunctionWrapped 10 executed @ 46250
  46250: Event: system.mem_ctrls.port-RespPacketQueue.wrapped_function_event: EventFunctionWrapped 8 scheduled @ 74250
  74250: Event: system.mem_ctrls.port-RespPacketQueue.wrapped_function_event: EventFunctionWrapped 8 executed @ 74250
  74250: Event: system.membus.slave[1]-RespPacketQueue.wrapped_function_event: EventFunctionWrapped 74 scheduled @ 77000
  74250: Event: system.membus.respLayer1.wrapped_function_event: EventFunctionWrapped 75 scheduled @ 80000
  77000: Event: system.membus.slave[1]-RespPacketQueue.wrapped_function_event: EventFunctionWrapped 74 executed @ 77000
  77000: Event: system.cpu.icache.cpu_side-CpuSidePort.wrapped_function_event: EventFunctionWrapped 57 scheduled @ 78000
  78000: Event: system.cpu.icache.cpu_side-CpuSidePort.wrapped_function_event: EventFunctionWrapped 57 executed @ 78000
  78000: Event: Event_40: Timing CPU icache tick 40 scheduled @ 78000
  78000: Event: Event_40: Timing CPU icache tick 40 executed @ 78000
  78000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue    :   movz   x0, #1, #0        : IntAlu :  D=0x0000000000000001  flags=(IsInteger)
  78000: Event: system.cpu.icache.cpu_side-CpuSidePort.wrapped_function_event: EventFunctionWrapped 57 scheduled @ 83000
  80000: Event: system.membus.respLayer1.wrapped_function_event: EventFunctionWrapped 75 executed @ 80000
  83000: Event: system.cpu.icache.cpu_side-CpuSidePort.wrapped_function_event: EventFunctionWrapped 57 executed @ 83000
  83000: Event: Event_40: Timing CPU icache tick 40 scheduled @ 83000
  83000: Event: Event_40: Timing CPU icache tick 40 executed @ 83000
  83000: ExecEnable: system.cpu: A0 T0 : @asm_main_after_prologue+4    :   adr   x1, #28            : IntAlu :  D=0x0000000000400098  flags=(IsInteger)
  83000: Event: system.cpu.icache.mem_side-MemSidePort.wrapped_function_event: EventFunctionWrapped 59 scheduled @ 84000
  [...]
 191000: Event: Event_85: generic 85 scheduled @ 191000
 191000: Event: Event_85: generic 85 executed @ 191000

So yes, --caches does work here, leading to a runtime of 191000 rather than 469000 without caches!
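
That is roughly a 2.5x speedup for this tiny program: 469000 / 191000 ≈ 2.46.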

Notably, we now see that very little time passed between the first and second instructions, presumably because rather than going out all the way to the DRAM system, the event chain stops right at icache.cpu_side when a hit happens. That must have been the case for the second instruction, which is just adjacent to the first one.

It is also interesting to look at the generated config.dot.svg and compare it to the one without caches: Figure 2, “config.dot.svg for a TimingSimpleCPU without caches.”. With caches: Figure 3, “config.dot.svg for a TimingSimpleCPU with caches.”.

We can see from there that we now have icache and dcache elements inside the CPU block, and that the CPU icache and dcache ports go through the caches to the SystemXBar rather than being directly connected as before.

It is worth noting that the caches do not affect the ArmITB and ArmDTB TLBs, since those are already caches themselves.

Figure 3. config.dot.svg for a TimingSimpleCPU with caches.

TODO is this the minimal setup that allows us to see the gem5 crossbar interconnect? Can we see anything in AtomicSimpleCPU?

It would be amazing to analyze a simple example with interconnect packets possibly invalidating caches of other CPUs.

To observe it we could create one well controlled workload with instructions that flush memory, and run it on two CPUs.

If we don’t use such memory flushing instructions, we would only see the interconnect at work when the caches run out of space.

Figure 4. config.dot.svg for a system with two TimingSimpleCPU with caches.

The events for the Atomic CPU were pretty simple: basically just ticks.

But as we venture into more complex CPU models such as MinorCPU, the events get much more complex and interesting.

The memory system part must be similar to that of TimingSimpleCPU, which we previously studied at gem5 event queue TimingSimpleCPU syscall emulation freestanding example analysis: the main thing we want to see is how the CPU pipeline speeds up execution by preventing some memory stalls.

The config.dot.svg also indicates that everything is exactly as in gem5 event queue TimingSimpleCPU syscall emulation freestanding example analysis with caches, except that the CPU is a MinorCPU instead of a TimingSimpleCPU, and that --caches is now mandatory.

TODO: analyze the trace for:

./run \
  --arch aarch64 \
  --emulator gem5 \
  --userland userland/arch/aarch64/freestanding/linux/hello.S \
  --trace Event \
  --trace-stdout \
  -- \
  --cpu-type MinorCPU \
  --caches \
;

These classes get used everywhere, and they have a somewhat convoluted relation with one another, so let’s figure out this mess.

None of those objects are SimObjects, so they must all belong to some higher SimObject.

This section and all its children were tested at gem5 b1623cb2087873f64197e503ab8894b5e4d4c7b4.

As we delve into more details below, we will reach the following conclusion: a ThreadContext represents one thread of a CPU with multiple [hardware-threads].

We can therefore have multiple ThreadContext objects for each BaseCPU.

ThreadContext is what gets passed in syscalls, e.g.:

src/sim/syscall_emul.hh

template <class OS>
SyscallReturn
readFunc(SyscallDesc *desc, ThreadContext *tc,
        int tgt_fd, Addr buf_ptr, int nbytes)

The class hierarchy for ThreadContext looks like:

ThreadContext
  O3ThreadContext
  SimpleThread

where the gem5 MinorCPU also uses SimpleThread:

/** Minor will use the SimpleThread state for now */
typedef SimpleThread MinorThread;

It is a bit confusing: things would be much clearer if SimpleThread were instead called SimpleThreadContext!

readIntReg and other register access methods are some notable methods implemented in descendants, e.g. SimpleThread::readIntReg.

Essentially all methods of the base ThreadContext are pure virtual.

SimpleThread storage is defined on BaseSimpleCPU for simple CPUs like AtomicSimpleCPU:

    for (unsigned i = 0; i < numThreads; i++) {
        if (FullSystem) {
            thread = new SimpleThread(this, i, p->system,
                                      p->itb, p->dtb, p->isa[i]);
        } else {
            thread = new SimpleThread(this, i, p->system, p->workload[i],
                                      p->itb, p->dtb, p->isa[i]);
        }
        threadInfo.push_back(new SimpleExecContext(this, thread));
        ThreadContext *tc = thread->getTC();
        threadContexts.push_back(tc);
    }

and on MinorCPU for Minor:

MinorCPU::MinorCPU(MinorCPUParams *params) :
    BaseCPU(params),
    threadPolicy(params->threadPolicy)
{
    /* This is only written for one thread at the moment */
    Minor::MinorThread *thread;

    for (ThreadID i = 0; i < numThreads; i++) {
        if (FullSystem) {
            thread = new Minor::MinorThread(this, i, params->system,
                    params->itb, params->dtb, params->isa[i]);
            thread->setStatus(ThreadContext::Halted);
        } else {
            thread = new Minor::MinorThread(this, i, params->system,
                    params->workload[i], params->itb, params->dtb,
                    params->isa[i]);
        }

        threads.push_back(thread);
        ThreadContext *tc = thread->getTC();
        threadContexts.push_back(tc);
    }

Those are used from gem5 ExecContext.

From this we see that one CPU can have multiple threads, and that this is controlled from Python:

BaseCPU::BaseCPU(Params *p, bool is_checker)
    : numThreads(p->numThreads)

and since SimpleThread contains its registers, this must represent [hardware-threads].

If we analyse SimpleThread::readIntReg, we see that the actual register data is contained inside ThreadContext descendants, e.g. in SimpleThread:

    RegVal
    readIntReg(RegIndex reg_idx) const override
    {
        int flatIndex = isa->flattenIntIndex(reg_idx);
        assert(flatIndex < TheISA::NumIntRegs);
        uint64_t regVal(readIntRegFlat(flatIndex));
        DPRINTF(IntRegs, "Reading int reg %d (%d) as %#x.\n",
                reg_idx, flatIndex, regVal);
        return regVal;
    }

    RegVal readIntRegFlat(RegIndex idx) const override { return intRegs[idx]; }
    void
    setIntRegFlat(RegIndex idx, RegVal val) override
    {
        intRegs[idx] = val;
    }

    std::array<RegVal, TheISA::NumIntRegs> intRegs;

Another notable family of ThreadContext methods are those that forward to gem5 ThreadState.

O3ThreadContext instantiation happens in the FullO3CPU constructor:

FullO3CPU<Impl>::FullO3CPU(DerivO3CPUParams *params)

    for (ThreadID tid = 0; tid < this->numThreads; ++tid) {
        if (FullSystem) {
            // SMT is not supported in FS mode yet.
            assert(this->numThreads == 1);
            this->thread[tid] = new Thread(this, 0, NULL);

        // Setup the TC that will serve as the interface to the threads/CPU.
        O3ThreadContext<Impl> *o3_tc = new O3ThreadContext<Impl>;

and the SimObject DerivO3CPU is just a FullO3CPU instantiation:

class DerivO3CPU : public FullO3CPU<O3CPUImpl>

O3ThreadContext is a template class:

template <class Impl>
class O3ThreadContext : public ThreadContext

The only Impl used appears to be O3CPUImpl, and it is explicitly instantiated in the source:

template class O3ThreadContext<O3CPUImpl>;

Unlike in SimpleThread however, O3ThreadContext does not contain the register data itself, e.g. O3ThreadContext::readIntRegFlat instead forwards to cpu:

template <class Impl>
RegVal
O3ThreadContext<Impl>::readIntRegFlat(RegIndex reg_idx) const
{
    return cpu->readArchIntReg(reg_idx, thread->threadId());
}

where:

    typedef typename Impl::O3CPU O3CPU;

   /** Pointer to the CPU. */
    O3CPU *cpu;

and:

struct O3CPUImpl
{
    /** The O3CPU type to be used. */
    typedef FullO3CPU<O3CPUImpl> O3CPU;

and at long last FullO3CPU contains the register values:

template <class Impl>
RegVal
FullO3CPU<Impl>::readArchIntReg(int reg_idx, ThreadID tid)
{
    intRegfileReads++;
    PhysRegIdPtr phys_reg = commitRenameMap[tid].lookup(
            RegId(IntRegClass, reg_idx));

    return regFile.readIntReg(phys_reg);
}

So we guess that this difference from SimpleThread is due to register renaming of the out of order implementation.
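
The indirection itself is simple to picture. Here is a toy sketch, not gem5's actual RegFile or rename map classes: the architectural index goes through a per-thread rename map before hitting the physical register file:

// Toy sketch of rename map indirection, not gem5's actual classes.
#include <array>
#include <cstdint>
#include <iostream>

using RegVal = std::uint64_t;

int main() {
    std::array<RegVal, 8> physRegFile{};            // physical registers
    std::array<int, 4> commitRenameMap{5, 2, 7, 0}; // arch index -> phys index

    physRegFile[5] = 42;  // architectural reg 0 currently lives in phys reg 5
    int archIdx = 0;
    RegVal val = physRegFile[commitRenameMap[archIdx]];
    std::cout << "arch reg " << archIdx << " = " << val << "\n";  // prints 42
}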

ThreadState is owned one per ThreadContext.

Many ThreadContext methods simply forward to ThreadState implementations.

SimpleThread inherits from ThreadState, and forwards to it in several methods, e.g.:

    int cpuId() const override { return ThreadState::cpuId(); }
    uint32_t socketId() const override { return ThreadState::socketId(); }
    int threadId() const override { return ThreadState::threadId(); }
    void setThreadId(int id) override { ThreadState::setThreadId(id); }
    ContextID contextId() const override { return ThreadState::contextId(); }
    void setContextId(ContextID id) override { ThreadState::setContextId(id); }

O3ThreadContext on the other hand contains an O3ThreadState:

template <class Impl>
struct O3ThreadState : public ThreadState

at:

template <class Impl>
class O3ThreadContext : public ThreadContext
{
    O3ThreadState<Impl> *thread;

    ContextID contextId() const override { return thread->contextId(); }

    void setContextId(ContextID id) override { thread->setContextId(id); }

ExecContext gets used in instruction definitions, e.g.:

build/ARM/arch/arm/generated/exec-ns.cc.inc

    Fault Mul::execute(
        ExecContext *xc, Trace::InstRecord *traceData) const

It contains methods to allow interacting with CPU state from inside instruction execution, notably reading and writing from/to registers.
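
The core of that interface looks something like the following abridged sketch of src/cpu/exec_context.hh (the method set and exact signatures vary across gem5 versions):

class ExecContext {
  public:
    // Operands are addressed by their position within the instruction,
    // not by architectural register number.
    virtual RegVal readIntRegOperand(const StaticInst *si, int idx) = 0;
    virtual void setIntRegOperand(const StaticInst *si, int idx,
                                  RegVal val) = 0;
    virtual RegVal readCCRegOperand(const StaticInst *si, int idx) = 0;
    // Memory accesses issued from within instruction execution.
    virtual Fault readMem(Addr addr, uint8_t *data, unsigned int size,
                          Request::Flags flags) = 0;
    // ...
};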

For example, the ARM mul instruction uses ExecContext to read the input operands, multiply them, and write to the output:

    Fault Mul::execute(
        ExecContext *xc, Trace::InstRecord *traceData) const
    {
        Fault fault = NoFault;
        uint64_t resTemp = 0;
        resTemp = resTemp;
        uint32_t OptCondCodesNZ = 0;
        uint32_t OptCondCodesC = 0;
        uint32_t OptCondCodesV = 0;
        uint32_t Reg0 = 0;
        uint32_t Reg1 = 0;
        uint32_t Reg2 = 0;

        OptCondCodesNZ = xc->readCCRegOperand(this, 0);
        OptCondCodesC = xc->readCCRegOperand(this, 1);
        OptCondCodesV = xc->readCCRegOperand(this, 2);
        Reg1 =
            ((reg1 == PCReg) ? readPC(xc) : xc->readIntRegOperand(this, 3));
        Reg2 =
            ((reg2 == PCReg) ? readPC(xc) : xc->readIntRegOperand(this, 4));

        if (testPredicate(OptCondCodesNZ, OptCondCodesC, OptCondCodesV, condCode)/*auto*/)
        {
            Reg0 = resTemp = Reg1 * Reg2;;
            if (fault == NoFault) {
                {
                    uint32_t final_val = Reg0;
                    ((reg0 == PCReg) ? setNextPC(xc, Reg0) : xc->setIntRegOperand(this, 0, Reg0));
                    if (traceData) { traceData->setData(final_val); }
                };
            }
        } else {
            xc->setPredicate(false);
        }

        return fault;
    }

ExecContext however is basically just a wrapper that forwards to other classes, which actually contain the data in a microarchitecturally neutral manner. For example, in SimpleExecContext:

    /** Reads an integer register. */
    RegVal
    readIntRegOperand(const StaticInst *si, int idx) override
    {
        numIntRegReads++;
        const RegId& reg = si->srcRegIdx(idx);
        assert(reg.isIntReg());
        return thread->readIntReg(reg.index());
    }

So we see that this just does some register position bookkeeping needed for instruction execution, while the actual data comes from SimpleThread::readIntReg, i.e. from an implementation of gem5 ThreadContext.
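
Summarizing the whole read path, from operand position down to storage:

xc->readIntRegOperand(this, idx)                    // ExecContext: operand position
  -> thread->readIntReg(si->srcRegIdx(idx).index()) // ThreadContext: architectural register
    -> readIntRegFlat(isa->flattenIntIndex(...))    // flat index
      -> intRegs[flatIndex]                         // actual storage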

ExecContext is a fully abstract class: all of its methods are pure virtual. The hierarchy is:

  • ExecContext

    • SimpleExecContext

    • Minor::MinorExecContext

    • BaseDynInst

      • BaseO3DynInst

If we follow SimpleExecContext creation for example, we see:

class BaseSimpleCPU : public BaseCPU
{
    std::vector<SimpleExecContext*> threadInfo;

and:

BaseSimpleCPU::BaseSimpleCPU(BaseSimpleCPUParams *p)
    : BaseCPU(p),
      curThread(0),
      branchPred(p->branchPred),
      traceData(NULL),
      inst(),
      _status(Idle)
{
    SimpleThread *thread;

    for (unsigned i = 0; i < numThreads; i++) {
        if (FullSystem) {
            thread = new SimpleThread(this, i, p->system,
                                      p->itb, p->dtb, p->isa[i]);
        } else {
            thread = new SimpleThread(this, i, p->system, p->workload[i],
                                      p->itb, p->dtb, p->isa[i]);
        }
        threadInfo.push_back(new SimpleExecContext(this, thread));
        ThreadContext *tc = thread->getTC();
        threadContexts.push_back(tc);
    }

therefore there is one ExecContext for each ThreadContext, and each ExecContext knows about its own ThreadContext.

This makes sense, since each ThreadContext represents one CPU register set, and therefore needs a separate ExecContext which allows instruction implementations to access those registers.

The Process class is used only in gem5 syscall emulation mode. It represents a userland process, much like a Linux process, together with any further gem5-specific data needed to model it.

The first thing most syscall implementations do is to pull the Process out of the gem5 ThreadContext, e.g.:

template <class OS>
SyscallReturn
readFunc(SyscallDesc *desc, ThreadContext *tc,
        int tgt_fd, Addr buf_ptr, int nbytes)
{
    auto p = tc->getProcessPtr();

And we can readily see from its interface that Process contains several accessors for common process fields:

    inline uint64_t uid() { return _uid; }
    inline uint64_t euid() { return _euid; }
    inline uint64_t gid() { return _gid; }
    inline uint64_t egid() { return _egid; }
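
Putting the two together, one of the simplest syscall implementations just pulls the Process out and returns one such field; a sketch modeled on getpidFunc from src/sim/syscall_emul.cc (the exact signature varies across gem5 versions):

SyscallReturn
getpidFunc(SyscallDesc *desc, ThreadContext *tc)
{
    auto p = tc->getProcessPtr();
    // getpid() returns the thread group ID, which equals the PID
    // for the group leader.
    return p->tgid();
}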

Process is a SimObject, and is therefore instantiated directly from the Python configuration, e.g. in se.py.

se.py produces one Process per executable given on the command line:

    workloads = options.cmd.split(';')
    idx = 0
    for wrkld in workloads:
        process = Process(pid = 100 + idx)

and those are placed in the workload property:

for i in range(np):
    if options.smt:
        system.cpu[i].workload = multiprocesses
    elif len(multiprocesses) == 1:
        system.cpu[i].workload = multiprocesses[0]
    else:
        system.cpu[i].workload = multiprocesses[i]

and finally each thread of a CPU gets assigned to a different such workload, in the same BaseSimpleCPU constructor already shown above, through the p->workload[i] argument:

        thread = new SimpleThread(this, i, p->system, p->workload[i],
                                  p->itb, p->dtb, p->isa[i]);

gem5 uses a ton of code generation, which makes the project horrendous:

  • lots of magic happens on top of pybind11, which is already magic, to glue the C++ and Python worlds more automatically: gem5 Python C++ interaction

  • .isa code which describes most of the instructions

  • Ruby for memory systems

To find the definition of generated code, do a:

grep -I -r 'code of interest' build/

where -I makes grep skip binary files.

The code generation exists partly to support the insanely generic mapping of instructions from multiple ISAs onto a single compute model, where it might be reasonable.

But it has been widely overused to insanity. It likely also exists partly because when the project started in 2003 C++ compilers weren’t that good, so you couldn’t rely on features like templates that much.

The generated code lives at build/<ISA>/config/the_isa.hh, which e.g. for ARM contains:

#ifndef __CONFIG_THE_ISA_HH__
#define __CONFIG_THE_ISA_HH__

#define ARM_ISA 1
#define MIPS_ISA 2
#define NULL_ISA 3
#define POWER_ISA 4
#define RISCV_ISA 5
#define SPARC_ISA 6
#define X86_ISA 7

enum class Arch {
  ArmISA = ARM_ISA,
  MipsISA = MIPS_ISA,
  NullISA = NULL_ISA,
  PowerISA = POWER_ISA,
  RiscvISA = RISCV_ISA,
  SparcISA = SPARC_ISA,
  X86ISA = X86_ISA
};

#define THE_ISA ARM_ISA
#define TheISA ArmISA
#define THE_ISA_STR "arm"

#endif // __CONFIG_THE_ISA_HH__
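
ISA-generic code can then branch on the configured ISA at compile time, e.g. (a hypothetical usage sketch):

#include "config/the_isa.hh"

#if THE_ISA == ARM_ISA
// ARM-only code paths go here.
#endif

// TheISA:: qualifies symbols from the configured ISA's namespace:
static const auto pageBytes = TheISA::PageBytes;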

Generation code: src/SConscript at def makeTheISA.

Tested on gem5 b1623cb2087873f64197e503ab8894b5e4d4c7b4.

gem5 moves a bit slowly, and if your host compiler is very new, the gem5 build might be broken for it, e.g. this was the case for Ubuntu 19.10 with GCC 9 and gem5 62d75e7105fe172eb906d4f80f360ff8591d4178 from Dec 2019.

This happens mostly because GCC keeps getting more strict with warnings and gem5 uses -Werror.

The specific problem mentioned above was later fixed, but if it ever happens again, you can work around it either by disabling -Werror:

./build-gem5 -- CCFLAGS=-Wno-error

or by installing an older compiler and using it with something like:

./build-gem5 -- CC=gcc-8 CXX=g++-8

gem5 selects ISA-specific headers through generated forwarding headers. E.g. src/cpu/decode_cache.hh includes:

#include "arch/isa_traits.hh"

which in turn is meant to refer to files of form:

src/arch/<isa>/isa_traits.hh

What happens is that the build system creates a file:

build/ARM/arch/isa_traits.hh

which contains just:

#include "arch/arm/isa_traits.hh"

and puts that in the -I include path during build.

It appears to be possible to deal with it using preprocessor macros, but it is ugly: https://stackoverflow.com/questions/3178946/using-define-to-include-another-file-in-c-c/3179218#3179218
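
That macro approach would look something like this (a hypothetical sketch, not what gem5 actually does):

// Defined once per build, e.g. passed as -D by the build system:
#define ISA_TRAITS_HH "arch/arm/isa_traits.hh"

// Generic code then uses a computed include instead of a generated
// forwarding header:
#include ISA_TRAITS_HH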

In addition to the header polymorphism, gem5 also namespaces classes with TheISA::, e.g. in src/cpu/decode_cache.hh:

Value items[TheISA::PageBytes];

which is defined at:

… build/ARM/config/the_isa.hh …

as:

#define TheISA ArmISA

and forces already arm/ specific headers to define their symbols under:

namespace ArmISA
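
Concretely, the pattern expands like this (simplified; the PageBytes value is illustrative):

// build/ARM/config/the_isa.hh (generated):
#define TheISA ArmISA

// src/arch/arm/isa_traits.hh (simplified):
namespace ArmISA { const unsigned PageBytes = 4096; }

// ISA-generic code:
Value items[TheISA::PageBytes]; // compiles as Value items[ArmISA::PageBytes];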

so I don’t see the point of this pattern: why not just use PageBytes directly? It looks like a documentation mechanism to indicate that a certain symbol is ISA specific.

Tested in gem5 2a242c5f59a54bc6b8953f82486f7e6fe0aa9b3d.

Some scons madness.

The SConstruct, as of a5bc2291391b0497fdc60fdc960e07bcecebfb8f, uses symlinks in a futile attempt to make things better for editors or build systems from the past century.

It was not possible to disable the symlinks automatically for the entire project when I last asked: https://stackoverflow.com/questions/53656787/how-to-set-disable-duplicate-0-for-all-scons-build-variants-without-repeating-th

The horrendous downsides of this are:

Buildroot is a set of Make scripts that download and compile from source compatible versions of:

  • GCC

  • Linux kernel

  • C standard library: Buildroot supports several implementations, see: [libc-choice]

  • BusyBox: provides the shell and basic command line utilities

It therefore produces a pristine, blob-less, debuggable setup, where all moving parts are configured to work perfectly together.

Perhaps the awesomeness of Buildroot only sinks in once you notice that all it takes is 4 commands as explained at [buildroot-hello-world].

The downsides of Buildroot are:

  • the first build takes a while, but it is well worth it

  • the selection of software packages is relatively limited if compared to Debian.

    In theory, any software can be packaged, and the Buildroot side is easy.

    The hard part is dealing with crappy third party build systems and huge dependency chains.

  • it is written in Make and Bash rather than Python like LKMC

This repo basically wraps around Buildroot, and tries to make everything even more awesome for kernel developers by adding the capability of seamlessly running the stuff you’ve built on emulators, usually via ./run.

As this repo develops however, we’ve started taking some of the build out of Buildroot, e.g. notably the Linux kernel to have more build flexibility and faster build startup times.

Therefore, more and more, this repo wants to take over everything that Buildroot does, and one day completely replace it to achieve emulation Nirvana.

We provide the following mechanisms:

  • ./build-buildroot --config-fragment data/br2: append the Buildroot configuration file data/br2 to a single build. Must be passed every time you run ./build. The format is the same as buildroot_config/default.

  • ./build-buildroot --config 'BR2_SOME_OPTION="myval"': append a single option to a single build.
