arighi / virtme-ng
Quickly build and run kernels inside a virtualized snapshot of your live system

License: GNU General Public License v2.0


virtme-ng's Introduction

(demo video: virtme-ng-demo.mp4)

What is virtme-ng?

virtme-ng is a tool that allows you to quickly and easily recompile and test a Linux kernel, starting from its source code.

It can recompile the kernel in a few minutes (rather than hours), and the kernel is then automatically started in a virtualized environment that is an exact copy-on-write copy of your live system; any changes made to the virtualized environment do not affect the host system.

In order to do this, a minimal config is produced (with the bare minimum support needed to test the kernel inside qemu); the selected kernel is then automatically built and started inside qemu, using the filesystem of the host as a copy-on-write snapshot.

This means that you can safely destroy the entire filesystem, crash the kernel, etc. without affecting the host.

Kernels produced with virtme-ng lack many features, in order to keep the build time to a minimum while still providing a usable kernel capable of running your tests and experiments.

virtme-ng is based on virtme, written by Andy Lutomirski [email protected] (web | git).

Quick start

 $ uname -r
 5.19.0-23-generic
 $ git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
 $ cd linux
 $ vng --build --commit v6.2-rc4
 ...
 $ vng
           _      _
    __   _(_)_ __| |_ _ __ ___   ___       _ __   __ _
    \ \ / / |  __| __|  _   _ \ / _ \_____|  _ \ / _  |
     \ V /| | |  | |_| | | | | |  __/_____| | | | (_| |
      \_/ |_|_|   \__|_| |_| |_|\___|     |_| |_|\__  |
                                                 |___/
    kernel version: 6.2.0-rc4-virtme x86_64

 $ uname -r
 6.2.0-rc4-virtme
 ^
 |___ Now you have a shell inside a virtualized copy of your entire system,
      that is running the new kernel! \o/

 Then simply type "exit" to return to the real system.

Installation

  • Debian / Ubuntu

You can install the latest stable version of virtme-ng via:

 $ sudo apt install virtme-ng
  • Ubuntu ppa

If you're using Ubuntu, you can install the latest experimental version of virtme-ng from ppa:arighi/virtme-ng:

 $ sudo add-apt-repository ppa:arighi/virtme-ng
 $ sudo apt install --yes virtme-ng
  • Install from source

To install virtme-ng from source, clone this git repository and build a standalone virtme-ng by running the following commands:

 $ git clone --recurse-submodules https://github.com/arighi/virtme-ng.git
 $ BUILD_VIRTME_NG_INIT=1 pip3 install --verbose -r requirements.txt .

If you are on Debian/Ubuntu, you may need to install the following packages to build virtme-ng from source properly:

 $ sudo apt install python3-pip python3-argcomplete flake8 pylint \
   cargo rustc qemu-system-x86

With recent versions of pip3 you may need to specify --break-system-packages to install virtme-ng from source on your system.

  • Run from source

You can also run virtme-ng directly from source. Make sure you have all the requirements installed (optionally, you can build virtme-ng-init for a faster boot by running make), then from the source directory simply run any virtme-ng command, such as:

 $ ./vng --help

Requirements

  • You need Python 3.8 or higher

  • QEMU 1.6 or higher is recommended (QEMU 1.4 and 1.5 are partially supported using a rather ugly kludge)

    • You will have a much better experience if KVM is enabled. That means that you should be on bare metal with hardware virtualization (VT-x or SVM) enabled or in a VM that supports nested virtualization. On some Linux distributions, you may need to be a member of the "kvm" group. Using VirtualBox or most VPS providers will fall back to emulation.
  • Depending on the options you use, you may need a statically linked busybox binary somewhere in your path.

  • Optionally, you may need virtiofsd 1.7.0 (or higher) for better filesystem performance inside the virtme-ng guests.

Examples

  • Build a kernel from a clean local kernel source directory (if a .config is not available, virtme-ng will automatically create a minimal .config with all the required features to boot the instance):
   $ vng -b
  • Build tag v6.1-rc3 from a local kernel git repository:
   $ vng -b -c v6.1-rc3
  • Generate a minimal kernel .config in the current kernel build directory:
   $ vng --kconfig
  • Run a kernel previously compiled from a local git repository in the current working directory:
   $ vng
  • Run an interactive virtme-ng session using the same kernel as the host:
   $ vng -r
  • Test the installed 6.2.0-21-generic kernel (NOTE: /boot/vmlinuz-6.2.0-21-generic needs to be accessible):
   $ vng -r 6.2.0-21-generic
  • Run a pre-compiled vanilla v6.6 kernel fetched from the Ubuntu mainline builds repository (useful to test a specific kernel version directly and save a lot of build time):
   $ vng -r v6.6
  • Download and test kernel 6.2.0-1003-lowlatency from deb packages:
   $ mkdir test
   $ cd test
   $ apt download linux-image-6.2.0-1003-lowlatency linux-modules-6.2.0-1003-lowlatency
   $ for d in *.deb; do dpkg -x "$d" .; done
   $ vng -r ./boot/vmlinuz-6.2.0-1003-lowlatency
  • Build the tip of the latest kernel on a remote build host called "builder", running make inside a specific build chroot (managed remotely by schroot):
   $ vng --build --build-host builder \
     --build-host-exec-prefix "schroot -c chroot:kinetic-amd64 -- "
  • Run the previously compiled kernel from the current working directory and enable networking:
   $ vng --network user
  • Run the previously compiled kernel adding an additional virtio-scsi device:
   $ qemu-img create -f qcow2 /tmp/disk.img 8G
   $ vng --disk /tmp/disk.img
  • Recompile the kernel passing some env variables to enable Rust support (using specific versions of the Rust toolchain binaries):
   $ vng --build RUSTC=rustc-1.62 BINDGEN=bindgen-0.56 RUSTFMT=rustfmt-1.62
  • Build the arm64 kernel (using a separate chroot in /opt/chroot/arm64 as the main filesystem):
   $ vng --build --arch arm64 --root /opt/chroot/arm64/
  • Execute uname -r inside a kernel recompiled in the current directory and send the output to cowsay on the host:
   $ vng -- uname -r | cowsay
    __________________
   < 6.1.0-rc6-virtme >
    ------------------
           \   ^__^
            \  (oo)\_______
               (__)\       )\/\
                   ||----w |
                   ||     ||
  • Run a bunch of parallel virtme-ng instances in a pipeline, with different kernels installed in the system, passing each other their stdout/stdin and return all the generated output back to the host (also measure the total elapsed time):
   $ time true | \
   > vng -r 5.19.0-38-generic -e "cat && uname -r" | \
   > vng -r 6.2.0-19-generic  -e "cat && uname -r" | \
   > vng -r 6.2.0-20-generic  -e "cat && uname -r" | \
   > vng -r 6.3.0-2-generic   -e "cat && uname -r" | \
   > cowsay -n
    ___________________
   / 5.19.0-38-generic \
   | 6.2.0-19-generic  |
   | 6.2.0-20-generic  |
   \ 6.3.0-2-generic   /
    -------------------
           \   ^__^
            \  (oo)\_______
               (__)\       )\/\
                   ||----w |
                   ||     ||

   real    0m2.737s
   user    0m8.425s
   sys     0m8.806s
  • Run the vanilla v6.7-rc5 kernel with an Ubuntu 22.04 rootfs:
   $ vng -r v6.7-rc5 --user root --root ./rootfs/22.04 --root-release jammy -- cat /etc/lsb-release /proc/version
   ...
   DISTRIB_ID=Ubuntu
   DISTRIB_RELEASE=22.04
   DISTRIB_CODENAME=jammy
   DISTRIB_DESCRIPTION="Ubuntu 22.04.3 LTS"
   Linux version 6.7.0-060700rc5-generic (kernel@kathleen) (x86_64-linux-gnu-gcc-13 (Ubuntu 13.2.0-7ubuntu1) 13.2.0, GNU ld (GNU Binutils for Ubuntu) 2.41) #202312102332 SMP PREEMPT_DYNAMIC Sun Dec 10 23:41:31 UTC 2023
  • Run the current kernel creating a 1GB NUMA node with CPUs 0,1,3 assigned and a 3GB NUMA node with CPUs 2,4,5,6,7 assigned:
   $ vng -r -m 4G --numa 1G,cpus=0-1,cpus=3 --numa 3G,cpus=2,cpus=4-7 -- numactl -H
   available: 2 nodes (0-1)
   node 0 cpus: 0 1 3
   node 0 size: 1005 MB
   node 0 free: 914 MB
   node 1 cpus: 2 4 5 6 7
   node 1 size: 2916 MB
   node 1 free: 2797 MB
   node distances:
   node   0   1
     0:  10  20
     1:  20  10
  • Run glxgears inside a kernel recompiled in the current directory:
   $ vng -g -- glxgears

   (virtme-ng is started in graphical mode)
  • Execute an awesome window manager session with kernel 6.2.0-1003-lowlatency (installed in the system):
   $ vng -r 6.2.0-1003-lowlatency -g -- awesome

   (virtme-ng is started in graphical mode)
  • Run the steam snap (tested in Ubuntu) inside a virtme-ng instance using the 6.2.0-1003-lowlatency kernel:
   $ vng -r 6.2.0-1003-lowlatency --snaps --net user -g -- /snap/bin/steam

   (virtme-ng is started in graphical mode)
  • Generate a memory dump of a running instance and read 'jiffies' from the memory dump using the drgn debugger:
   # Start the vng instance in debug mode
   $ vng --debug

   # In a separate shell session trigger the memory dump to /tmp/vmcore.img
   $ vng --dump /tmp/vmcore.img

   # Use drgn to read 'jiffies' from the memory dump:
   $ echo "print(prog['jiffies'])" | drgn -q -s vmlinux -c /tmp/vmcore.img
   drgn 0.0.23 (using Python 3.11.6, elfutils 0.189, with libkdumpfile)
   For help, type help(drgn).
   >>> import drgn
   >>> from drgn import NULL, Object, cast, container_of, execscript, offsetof, reinterpret, sizeof
   >>> from drgn.helpers.common import *
   >>> from drgn.helpers.linux import *
   >>> (volatile unsigned long)4294675464
  • Run virtme-ng inside a docker container:
   $ docker run -it ubuntu:22.04 /bin/bash
   # apt update
   # echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
   # apt install --yes git qemu-kvm udev iproute2 busybox-static \
     coreutils python3-requests libvirt-clients kbd kmod file rsync zstd udev
   # git clone --recursive https://github.com/arighi/virtme-ng.git
   # ./virtme-ng/vng -r v6.6 -- uname -r
   6.6.0-060600-generic

See also: .github/workflows/run.yml as a practical example of how to use virtme-ng inside docker.

Implementation details

virtme-ng can automatically configure, build and run kernels via its main command-line interface, vng.

A minimal custom .config is automatically generated if not already present when --build is specified.

It is possible to specify a set of custom configs (a .config chunk) in ~/.config/virtme-ng/kernel.config; these user-specific settings will override the default settings (except for the mandatory configs that are required to boot and test the kernel inside qemu, using virtme-run).
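As a concrete sketch, such a chunk is just a plain kconfig fragment. The CONFIG_ options below are only illustrative, and a temporary directory stands in for the real ~/.config/virtme-ng path:

```shell
# Write an example user config chunk; virtme-ng merges these settings on
# top of the minimal .config it generates. A temp dir stands in for the
# real ~/.config/virtme-ng path in this sketch.
cfgdir=$(mktemp -d)
cat > "$cfgdir/kernel.config" <<'EOF'
CONFIG_KASAN=y
CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
EOF
grep -c '^CONFIG_' "$cfgdir/kernel.config"   # → 2
```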

The kernel is then compiled either locally or on an external build host (if the --build-host option is used); when an external build host is used, once the build is done only the files needed to test the kernel are copied back from the remote host.

When a remote build host is used (--build-host), the target branch is force-pushed to the remote host inside the ~/.virtme directory.

The kernel is then executed using the virtme module. This makes it possible to test the kernel using a safe copy-on-write snapshot of the entire host filesystem.

All kernels compiled with virtme-ng have a -virtme suffix in their kernel version; this makes it easy to determine whether you're inside a virtme-ng kernel or using the real host kernel (simply check uname -r).
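For example, a test script can use that suffix to tell the two apart (a minimal sketch; the sample string below stands in for the output of uname -r):

```shell
# Detect whether we are running inside a virtme-ng guest by checking the
# -virtme suffix of the kernel version string.
kver="6.2.0-rc4-virtme"      # normally: kver=$(uname -r)
case "$kver" in
    *-virtme) echo "inside a virtme-ng guest" ;;
    *)        echo "on the host kernel" ;;
esac
```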

External kernel modules

It is possible to recompile and test out-of-tree kernel modules inside the virtme-ng kernel, simply by building them against the local directory of the kernel git repository that was used to build and run the kernel.

Default options

Typically, if you always use virtme-ng with an external build server (e.g., vng --build --build-host REMOTE_SERVER --build-host-exec-prefix CMD), you don't want to specify these options every time. Instead, you can simply define them in ~/.config/virtme-ng/virtme-ng.conf under default_opts and then just run vng --build.

Example: always use an external build server called 'kathleen' and run make inside a build chroot called chroot:lunar-amd64. To do so, modify the default_opts section in ~/.config/virtme-ng/virtme-ng.conf as follows:

    "default_opts" : {
        "build_host": "kathleen",
        "build_host_exec_prefix": "schroot -c chroot:lunar-amd64 --"
    },

Now you can simply run vng --build to build your kernel from the current working directory using the external build host, prepending the exec prefix command when running make.

Troubleshooting

  • If you get permission denied when starting qemu, make sure that your username is assigned to the group kvm or libvirt:
  $ groups | grep "kvm\|libvirt"
  • When using --network bridge to create a bridged network in the guest you may get the following error:
  ...
  failed to create tun device: Operation not permitted

This is because qemu-bridge-helper requires CAP_NET_ADMIN permissions.

To fix this you need to add allow all to /etc/qemu/bridge.conf and set the CAP_NET_ADMIN capability on qemu-bridge-helper, as follows:

  $ sudo filecap /usr/lib/qemu/qemu-bridge-helper net_admin
  • If the guest fails to start because the host doesn't have enough memory available, you can specify a different amount of memory using --memory MB (this option is passed directly to qemu via -m; the default is 1G).

  • If you're testing a kernel for an architecture different from the host's, keep in mind that you also need to use --root DIR to point at a chroot with binaries compatible with the architecture that you're testing.

    If the chroot doesn't exist on your system, virtme-ng will automatically create it using the latest daily-build Ubuntu cloud image:

  $ vng --build --arch riscv64 --root ./tmproot
  • If the build on a remote build host is failing unexpectedly you may want to try cleaning up the remote git repository, running:
  $ vng --clean --build-host HOSTNAME
  • Snap support is still experimental and some things may not work as expected (keep in mind that virtme-ng will try to run snapd in a bare-minimum system environment without systemd). If some snaps are not running, try disabling apparmor by adding --append="apparmor=0" to the virtme-ng command line.

  • Running virtme-ng instances inside docker: in case of failures/issues, especially with stdin/stdout/stderr redirection, make sure that you have udev installed in your docker image and run the following command before using vng:

  $ udevadm trigger --subsystem-match --action=change

Contributing

Please see DCO-1.1.txt.

Additional resources

Credits

virtme-ng is written by Andrea Righi [email protected]

virtme-ng is based on virtme, written by Andy Lutomirski [email protected] (web | git).

virtme-ng's People

Contributors

amluto, arighi, ezequielgarcia, fzago-cray, gagallo7, haggaie, hramrach, jamesyonan, jpirko, jsitnicki, krasin, kuba-moo, lmb, marcosps, matttbe, mbgg, morbidrsa, nkapron, palmer-dabbelt, pinchartl, ribalda, soleen, spyff, tklauser, tych0, vadorovsky, vbatts, winnscode, zeil, zevweiss


virtme-ng's Issues

Clarify how to boot host's running kernel

Commit d18d080 changed the behavior of vng -r to try to boot a kernel built from the current working directory instead of the host's running kernel, but didn't update the --run flag's --help text, which still describes the old behavior. I use the old behavior heavily, and it's not obvious to me how to achieve that with the current state of the code -- it would be great if the help text (and perhaps README?) could be updated to show how to do that (ideally via some comparably simple flag).

Enforcing cfg/kernelcraft.conf to be installed into /etc

Looking at kernelcraft/run.py, it first checks whether there is a .kernelcraft.conf in the user's $HOME. In that case, wouldn't it be easier to relax the requirement of creating a file under /etc and default to the user's home in the first place?

Thanks!

running vng through timeout may leave the terminal in a bad state

When we run timeout SECONDS vng -- COMMAND, if timeout expires the terminal is left in a bad state, i.e. key strokes are not echoed back to the terminal.

Example command that demonstrates the problem:

timeout 10 vng -- sleep 20

Notes:

  1. By running stty in that bad state we can see that the following settings were disabled after the timeout expired: echo, icanon, brkint. We can also see differences in the min and time settings.
  2. It's possible to work around this issue by entering reset.
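Until this is fixed, the workaround can be scripted: snapshot the terminal settings before running the command and restore them afterwards. This is a minimal sketch; run_with_timeout is a hypothetical wrapper name, and sleep stands in for the vng invocation:

```shell
# Run a command under timeout(1), restoring the terminal settings
# (echo, icanon, brkint, ...) even if timeout kills the child.
run_with_timeout() {
    deadline=$1; shift
    saved=$(stty -g 2>/dev/null) || saved=""   # snapshot tty settings, if any
    timeout "$deadline" "$@"
    rc=$?
    [ -n "$saved" ] && stty "$saved" 2>/dev/null
    return $rc
}

run_with_timeout 5 sleep 1 && echo "terminal restored"
```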

ppc64le vs ppc64el

I found some mentions of ppc64el, which seems to be a typo. Is it on purpose, or was the intention to write ppc64le? I found it when looking at commit 708347d, and then found that ARCH_MAPPING also uses ppc64el.

allow to use double quotes in commands passed via --exec (regression in v1.11)

Commit 18680da ("virtme-ng: pass --exec command via /proc/cmdline") introduced a regression found by the mutter test case:
https://gitlab.gnome.org/GNOME/mutter/-/jobs/2917515

Passing the --exec command via /proc/cmdline directly, instead of using a temporary initramfs, is faster, but it has the downside that we don't support special characters anymore (such as double quotes for example).

To solve this regression and avoid breaking the old behavior, we can pass the command as a base64-encoded string that will be decoded in the guest by virtme-ng-init/virtme-init. The overhead of the base64 encoding/decoding is negligible (definitely better than creating a temporary initramfs) and we can use any special character in the shell command.
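The idea can be sketched in plain shell: a base64 round-trip preserves double quotes and other special characters that would otherwise be mangled on /proc/cmdline (the variable names are illustrative, not the actual virtme-ng implementation):

```shell
# Host side: encode the --exec command as a single base64 token.
cmd='echo "hello world" && uname -r'
enc=$(printf '%s' "$cmd" | base64 | tr -d '\n')
# Guest side: decode it back before handing it to the shell.
dec=$(printf '%s' "$enc" | base64 -d)
[ "$dec" = "$cmd" ] && echo "round-trip ok"   # → round-trip ok
```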

a -q to undo a -v that's in a shell wrapper?

hi Andrea,

I love this project!
Being lazy, I've got a few shell functions wrapping vng:

function vrun__ () {
    vng --verbose --name $GDESC \
        --user root --cwd ../.. \
        -a dynamic_debug.verbose=2 \
        $*
}
alias vrun_1="vrun__ -p 1"
alias vrun_="vrun__ -p 4 "
alias vrun_3="vrun_ -a dynamic_debug.verbose=3"
alias vrun_9="vrun_ --force-9p"

alias vrun_e="vrun_ -e $*"
alias vrun_t="vrun_3 -e tools/testing/selftests/dynamic_debug/dyndbg_selftest.sh"

alias vrun_d="vrun_ -a nokaslr -o=-S -o=-s "
alias vrun_d9="vrun_d --force-9p"

function vrun_a () {
    vrun__ \
        -a drm.debug=0x100 \
        -a i915.dyndbg=class,DRM_UT_CORE,+pfmlt_%class,DRM_UT_KMS,+pfml_%class,DRM_UT_ATOMIC,+pfm \
        $*
}

At the root of it is that --verbose. It would be convenient to have a -q to undo it in my wrapper scheme.

I'd be happy to try to patch it myself, but I know it's a 2-minute thing for you :-)

support snaps inside virtme-ng

Running snaps inside a virtme-ng instance is currently problematic, because virtme-ng doesn't use systemd as pid 1, but it has its own init script / process: virtme-init or virtme-ng-init when running on a native arch.

Switching to a systemd init is not ideal, because it would significantly increase the boot time and make the whole boot process extremely complex (it would require a lot of adjustments, bind mounts, etc., because we bootstrap virtme-ng instances using a CoW copy of the entire host filesystem).

Probably a better solution is to "emulate" systemd, providing a fake systemctl interface. We also need to start snapd and create a proper /run/snapd.socket in order to be able to run snap, and there is probably something else to be addressed.

See also: https://forum.snapcraft.io/t/installing-and-running-snaps-without-systemd/3949

Allow to use 'microvm' qemu virtual platform

microvm is a minimalist machine type without PCI or ACPI support, designed for short-lived guests. microvm also establishes a baseline for benchmarking and optimizing both QEMU and guest operating systems, since it is optimized for both boot time and footprint [1].

This seems ideal for testing with virtme-ng on x86, because it would allow disabling PCI and ACPI, providing both a faster build time and a faster boot time.

[1] https://qemu.readthedocs.io/en/latest/system/i386/microvm.html

Document default git reset --hard?

Shouldn't the fact that, with default options, the tool will run git reset --hard HEAD as its first step be documented?

Maybe it's just me, but I didn't expect that would happen and lost a bunch of changes I had staged and was trying to test before committing.

Improve virtme-init performance

Provide an alternative, faster implementation of virtme-init. Instead of using bash, it could be re-implemented in Rust, for example. Having a single binary perform all the system initialization can help provide an even faster boot time.

allow to start graphic apps inside a virtme-ng session

Improve support for graphic applications. It would be nice if virtme-ng could automatically start graphic applications (GUIs) inside a completely virtualized session, maybe running a minimal X server or something similar.

Only supported by very latest Ubuntu?

Hey,

I noticed while trying to install this on Ubuntu that there seems to be support only for the two latest Ubuntu versions, Lunar & Mantic. Any reason why there is no support for at least the latest LTS (which is probably what most people run, right?)?

Greetings,
Sebastian

Object is remote error when using virtiofsd (fixed with --force-9p)

Hi @arighi, have you seen this error?

$ vng --rwdir . --memory 8G

(virtme-ng ASCII banner)
kernel version: 6.7.0-rc3+ x86_64

mpdesouza@virtme-ng:~/git/linux> make
UPD include/generated/compile.h
xargs: rm: Object is remote
make[1]: *** [/home/mpdesouza/git/linux/Makefile:1202: remove-stale-files] Error 126
make: *** [Makefile:234: __sub-make] Error 2

I've been seeing this problem since last week, but I don't quite remember when it started to happen...

There are also other issues to mount other overlay dirs:

[ 0.302116] virtme-ng-init: mount devtmpfs -> /dev: EBUSY: Device or resource busy
[ 0.303059] overlay: filesystem on /opt not supported
[ 0.303087] virtme-ng-init: mount virtme_rw_overlay2 -> /opt: EINVAL: Invalid argument
[ 0.303149] overlay: filesystem on /srv not supported
[ 0.303161] virtme-ng-init: mount virtme_rw_overlay3 -> /srv: EINVAL: Invalid argument
[ 0.303465] overlay: filesystem on /var not supported
[ 0.303481] virtme-ng-init: mount virtme_rw_overlay5 -> /var: EINVAL: Invalid argument

But again, I'm not sure what the problem can be. virtiofsd, maybe? It works when I use --force-9p. Have you seen it in your setup? I never played with virtiofsd, so I'm not sure what the issue is... My virtiofsd version is 1.7.2.

Thanks again for working on virtme-ng!

audio support

EDIT: 1551298 introduced --disable-microvm so use that instead of abusing --force-9p

There's a tricky way to support audio at the moment:

vng -qr 6.4.0-5-generic -g xterm --disable-microvm \
--qemu-opts="-audiodev sdl,id=snd0 -device intel-hda -device hda-output,audiodev=snd0"

The option --disable-microvm is required because I couldn't find a way to enable an audio device with the microvm QEMU arch (maybe there's a way doing some virtio-mmio magic...).

At this point we need to start a sound server inside virtme (we need to do this manually because we don't rely on systemd). I have found a way to start pipewire (not sure if/how other sound systems can work).

From the virtme-ng shell run the following commands:

$ sudo mkdir -p /run/user/virtme
$ sudo chown $USER:$USER /run/user/virtme
$ export XDG_RUNTIME_DIR=/run/user/virtme
$ pipewire &
$ pipewire-pulse &
$ wireplumber &

At this point I am able to play sounds, for example:

$ pw-play rickroll.mp3

So, that's the PoC, now we need to find a way to integrate all of this in a better way.

return value of executed command is not returned back to host

The return value of the command executed inside the kernel is not returned back to the host.

For example:

$ vng -- false
$ echo $?
0

In the above case, I would expect vng to return 1.

Another example would be to run vng -- make on some kernel module source tree. If make fails, the vng command also returns 0.

It may be possible to work around this issue by writing a wrapper script that prints the return value of the executed command into a temporary file, but it would be much more convenient if virtme-ng could return it automatically.
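Such a wrapper could be sketched as follows. Everything here is hypothetical: sh -c 'false' stands in for the command executed inside the guest, and a local temp file stands in for a file shared between guest and host:

```shell
# Persist the exit code of the "guest" command to a file the host can
# read back, since vng itself currently returns 0.
status_file=$(mktemp)
sh -c 'false'                     # stands in for: vng -- false
echo $? > "$status_file"          # guest side: record the exit code
rc=$(cat "$status_file")          # host side: read it back
rm -f "$status_file"
echo "guest command exited with $rc"   # → guest command exited with 1
```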

/dev/stdout: Permission denied

When using kc and trying to run selftests, some of them return an error:

/home/mpdesouza/git/linux/tools/testing/selftests/kselftest/runner.sh: line 116: /dev/stdout: Permission denied

Checking what's going on...

RFI: multiple numa nodes support

Hi,

I sometimes need to emulate multiple numa memory nodes, and currently it's not easy with virtme-ng, AFAIK. I'll describe my attempts below to hopefully highlight what's needed. Let's say I want to emulate 4GB and 8 cpus total, with 2 nodes 2GB and 4 cpus per node:

vng -v -r . -p 8 -m 4G
...
> numactl -H
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 3917 MB
node 0 free: 3806 MB
node distances:
node   0 
  0:  10

so we have just 1 node. vng command itself doesn't seem to take a nodes option, so let's try with --qemu_opts

> vng -v -r . -p 8 -m 4G --qemu-opts="-numa node,mem=2G -numa node,mem=2G"
...
qemu-system-x86_64: -numa node,mem=2G: numa configuration should use either mem= or memdev=,mixing both is not allowed

Oh, --dry-run reveals that it's passing -m 2G -object memory-backend-memfd,id=mem,size=2G,share=on -numa node,memdev=mem so let's try adding another node the same way

vng -v -r . -p 8 -m 4G --qemu-opts="-object memory-backend-memfd,id=mem1,size=2G,share=on -numa node,memdev=mem1"
...
qemu-system-x86_64: total memory for NUMA nodes (0x180000000) should equal RAM size (0x100000000)

Hm, too bad, because the 4G is used both as the total memory (-m 4G) and for the first node. I have found no way to avoid that; eventually I just edited virtme/commands/run.py where it does qemuargs.extend(["-m" ... and hardcoded 4G there, so the 2G is only used for the first node.

vng -v -r . -p 8 -m 2G --qemu-opts="-object memory-backend-memfd,id=mem1,size=2G,share=on -numa node,memdev=mem1"
...
> numactl -H
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 3917 MB
node 0 free: 3808 MB
node distances:
node   0 
  0:  10 

Still one node? Oh maybe it's the microvm type?

vng -v -r . -p 8 -m 2G --disable-microvm --qemu-opts="-object memory-backend-memfd,id=mem1,size=2G,share=on -numa node,memdev=mem1"
> numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 1972 MB
node 0 free: 1863 MB
node 1 cpus:
node 1 size: 1951 MB
node 1 free: 1940 MB
node distances:
node   0   1 
  0:  10  20 
  1:  20  10 

Yep, but the cpus are all on one node, so...

vng -v -r . -p 8 -m 2G --disable-microvm --qemu-opts="-object memory-backend-memfd,id=mem1,size=2G,share=on -numa node,memdev=mem1,cpus=4-7"
> numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3
node 0 size: 2014 MB
node 0 free: 1923 MB
node 1 cpus: 4 5 6 7
node 1 size: 1909 MB
node 1 free: 1862 MB
node distances:
node   0   1 
  0:  10  20 
  1:  20  10 

Finally what I need, but impossible (AFAIK) to achieve without hacking the code.

So, if I could propose a possible solution: a vng parameter that took the number of nodes, created the mem and numa devices, distributed the total memory and cpus between them uniformly (or nearly uniformly if not perfectly divisible), and most likely also disabled microvm to avoid surprises, would be great.

Maybe there could also be a way to override the total memory without having it all assigned to the first node, to allow a more customized configuration with differently sized nodes or a different distribution of cpus to nodes, although I didn't need that at this point.
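The uniform distribution could be sketched in shell, generating the qemu arguments discovered above (NODES, CPUS and MEM_MB are hypothetical parameter names; only the -object/-numa flag syntax comes from the experiments in this report):

```shell
# Split total memory and cpus uniformly across NODES qemu NUMA nodes.
NODES=2 CPUS=8 MEM_MB=4096
per_mem=$((MEM_MB / NODES))
per_cpu=$((CPUS / NODES))
args=""
i=0
while [ "$i" -lt "$NODES" ]; do
    first=$((i * per_cpu)); last=$((first + per_cpu - 1))
    args="$args -object memory-backend-memfd,id=mem$i,size=${per_mem}M,share=on"
    args="$args -numa node,memdev=mem$i,cpus=$first-$last"
    i=$((i + 1))
done
echo "$args"
```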

Thanks for considering!

support running virtme-ng instances inside docker

Theoretically we should be able to run vng instances inside docker, but apparently there are problems running non-interactive sessions.

The problem seems to be related to the fact that virtio-serial devices (used by virtme-ng to connect stdin/stdout/stderr between host and guest) are not created properly, potentially due to some limitations of the /dev inside docker.

Investigate more on this and see if we can find a proper fix.
