89luca89 / distrobox

Mirror available at: https://gitlab.com/89luca89/distrobox

Home Page: https://distrobox.it/

License: GNU General Public License v3.0


distrobox's Introduction

Distrobox

previous logo credits j4ckr3d
current logo credits David Lapshin


Use any Linux distribution inside your terminal. Enable both backward and forward compatibility with software and freedom to use whatever distribution you’re more comfortable with. Distrobox uses podman, docker or lilipod to create containers using the Linux distribution of your choice. The created container will be tightly integrated with the host, allowing sharing of the HOME directory of the user, external storage, external USB devices and graphical apps (X11/Wayland), and audio.


Documentation - Matrix Room - Telegram Group




Warning

Documentation on GitHub strictly refers to the code in the main branch. For the official documentation, head over to https://distrobox.it


What it does

Simply put, it's a fancy wrapper around podman, docker or lilipod to create and start containers that are highly integrated with the host.

The distrobox environment is based on an OCI image. This image is used to create a container that seamlessly integrates with the rest of the operating system by providing access to the user's home directory, the Wayland and X11 sockets, networking, removable devices (like USB sticks), systemd journal, SSH agent, D-Bus, ulimits, /dev and the udev database, etc...

It implements the same concepts introduced by https://github.com/containers/toolbox but in a simplified way using POSIX sh and aiming at broader compatibility.

All the props go to them as they had the great idea to implement this stuff.

It is divided into 12 commands:

  • distrobox-assemble - creates and destroys containers based on a config file
  • distrobox-create - creates the container
  • distrobox-enter - enters the container
  • distrobox-ephemeral - creates a temporary container and destroys it when exiting the shell
  • distrobox-list - lists containers created with distrobox
  • distrobox-rm - deletes a container created with distrobox
  • distrobox-stop - stops a running container created with distrobox
  • distrobox-upgrade - upgrades one or more running containers created with distrobox at once
  • distrobox-generate-entry - creates an entry for a created container in the applications list
  • distrobox-init - the entrypoint of the container (not meant to be used manually)
  • distrobox-export - meant to be used inside the container; useful to export apps and services from the container to the host
  • distrobox-host-exec - runs commands/programs from the host while inside the container

It also includes a little wrapper to launch commands with distrobox COMMAND instead of calling the individual scripts.
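As an illustration of the wrapper idea, a dispatcher only needs to forward the subcommand and its arguments to the matching distrobox-COMMAND file. This is a sketch, not the real script:

```shell
# Sketch of the wrapper dispatch, for illustration only: the real `distrobox`
# script does more (help text, validation), but the core idea is forwarding
# "distrobox COMMAND ARGS..." to the standalone "distrobox-COMMAND" file.
dispatch() {
    cmd=$1
    shift
    # The real wrapper would exec "distrobox-$cmd" "$@";
    # here we just print the command that would run.
    echo "would run: distrobox-$cmd $*"
}

dispatch enter test -- whoami
# -> would run: distrobox-enter test -- whoami
```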

Please check the usage docs here, and see some handy tips on how to use it.

See it in action

Thanks to castrojo, you can see Distrobox in action in this explanatory video of his setup with Distrobox, Toolbx, and Fedora Silverblue for the uBlue project (check it out!)

Video

Why

  • Provide a mutable environment on an immutable OS, like Endless OS, Fedora Silverblue, OpenSUSE MicroOS, ChromeOS or SteamOS3
  • Provide a locally privileged environment for sudoless setups (e.g. company-provided laptops, security reasons, etc.)
  • Mix and match a stable base system (e.g. Debian Stable, Ubuntu LTS, RedHat) with a bleeding-edge environment for development or gaming (e.g. Arch, OpenSUSE Tumbleweed or Fedora with the latest Mesa)
  • Leverage the high abundance of curated distro images for docker/podman to manage multiple environments

Refer to the compatibility lists for an overview of supported host distros HERE and container distros HERE.

Aims

This project aims to bring any distro's userland to any other distro supporting podman, docker or lilipod. It has been written in POSIX sh to be as portable as possible and to avoid problems with dependencies and glibc version compatibility.

Refer HERE for a list of supported container managers and minimum supported versions.

It also aims to enter the container as fast as possible; every millisecond adds up if you use the container as the default environment for your terminal.

These are some sample results of distrobox-enter on the same container on my weak laptop:

~$ hyperfine --warmup 3 --runs 100 "distrobox enter bench -- whoami"
Benchmark 1: distrobox enter bench -- whoami
  Time (mean ± σ):     395.6 ms ±  10.5 ms    [User: 167.4 ms, System: 62.4 ms]
  Range (min … max):   297.3 ms … 408.9 ms    100 runs

Security implications

Isolation and sandboxing are not the main aim of the project; on the contrary, it aims to tightly integrate the container with the host. The container will have complete access to your home, pen drives and so on, so do not expect it to be highly sandboxed like a plain docker/podman container or a flatpak.

⚠️ BE CAREFUL ⚠️: if you use docker, or you use podman/lilipod with the --root/-r flag, the containers will run as root, so root inside the rootful container can modify system stuff outside the container. Also be aware that in rootful mode you'll be asked to set up the user's password; this at least ensures that the container is not a passwordless gate to root. If this is a security concern for you, use podman or lilipod, which run in rootless mode. Rootless docker is still not working as intended and will be included in the future when it is complete.
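Restated as logic, with a hypothetical helper that is not part of distrobox: docker, or podman/lilipod with --root/-r, means a rootful container; plain podman/lilipod means rootless (rootless docker is not supported yet, so docker always counts as rootful here):

```shell
# Illustrative helper, not part of distrobox: given the container manager
# and whether --root/-r was passed, report how the container will run.
is_rootful() {
    manager=$1
    root_flag=$2
    if [ "$manager" = "docker" ] || [ "$root_flag" = "1" ]; then
        echo "rootful"
    else
        echo "rootless"
    fi
}

is_rootful podman 0    # -> rootless
is_rootful podman 1    # -> rootful
is_rootful docker 0    # -> rootful
```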

That said, it is in the works to implement some sort of decoupling with the host, as discussed here: #28 Sandboxed mode


Quick Start

Create a new distrobox:

distrobox create -n test

Create a new distrobox with systemd (acts similarly to an LXC):

distrobox create --name test --init --image debian:latest --additional-packages "systemd libpam-systemd"

Enter created distrobox:

distrobox enter test

Add one with a different distribution, e.g. Ubuntu 20.04:

distrobox create -i ubuntu:20.04

Execute a command in a distrobox:

distrobox enter test -- command-to-execute

List running distroboxes:

distrobox list

Stop a running distrobox:

distrobox stop test

Remove a distrobox:

distrobox rm test

You can check HERE for more advanced usage, and find a comprehensive list of useful tips HERE.

Assemble Distrobox

Manifest files can be used to declare a set of distroboxes and use distrobox-assemble to create/destroy them in batch.

Head over to the usage docs of distrobox-assemble for a more detailed guide.
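For illustration, a minimal manifest might look like the following; the section name, file name, and key values here are examples, so check the distrobox-assemble usage docs for the authoritative syntax:

```ini
; distrobox.ini (hypothetical example)
[mybox]
image=registry.fedoraproject.org/fedora-toolbox:latest
additional_packages=git vim
```

With a file like this, `distrobox assemble create --file ./distrobox.ini` creates the box and `distrobox assemble rm --file ./distrobox.ini` destroys it.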

Configure Distrobox

Configuration files can be placed in the following paths, from the least important to the most important:

  • /usr/share/distrobox/distrobox.conf
  • /usr/etc/distrobox/distrobox.conf
  • /etc/distrobox/distrobox.conf
  • ${HOME}/.config/distrobox/distrobox.conf
  • ${HOME}/.distroboxrc

Inside them you can specify distrobox configuration options and distrobox-specific environment variables.

Example configuration file:

container_always_pull="1"
container_generate_entry=0
container_manager="docker"
container_image_default="registry.opensuse.org/opensuse/toolbox:latest"
container_name_default="test-name-1"
container_user_custom_home="$HOME/.local/share/container-home-test"
container_init_hook="~/.local/distrobox/a_custom_default_init_hook.sh"
container_pre_init_hook="~/a_custom_default_pre_init_hook.sh"
container_manager_additional_flags="--env-file /path/to/file --custom-flag"
container_additional_volumes="/example:/example1 /example2:/example3:ro"
non_interactive="1"
skip_workdir="0"
PATH="$PATH:/path/to/custom/podman"
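The precedence works like plain sh sourcing: each file is read in the order above, so a later file overrides values from an earlier one. A minimal sketch, using temporary files in place of the real paths:

```shell
# Illustrative only: later config files win because they are sourced last.
# Temporary files stand in for /etc/distrobox/distrobox.conf and ~/.distroboxrc.
conf_dir=$(mktemp -d)
echo 'container_manager="podman"' > "$conf_dir/distrobox.conf" # system-wide default
echo 'container_manager="docker"' > "$conf_dir/distroboxrc"    # per-user override

for conf_file in "$conf_dir/distrobox.conf" "$conf_dir/distroboxrc"; do
    # shellcheck disable=SC1090
    [ -r "$conf_file" ] && . "$conf_file"
done

echo "$container_manager"   # -> docker (the per-user file was sourced last)
rm -rf "$conf_dir"
```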

Alternatively, it is possible to specify preferences using environment variables:

  • DBX_CONTAINER_ALWAYS_PULL
  • DBX_CONTAINER_CUSTOM_HOME
  • DBX_CONTAINER_IMAGE
  • DBX_CONTAINER_MANAGER
  • DBX_CONTAINER_NAME
  • DBX_CONTAINER_ENTRY
  • DBX_NON_INTERACTIVE
  • DBX_SKIP_WORKDIR

Installation

Distrobox is packaged in the following distributions. If your distribution is on this list, you can refer to your repos for installation:

Packaging status

Thanks to the maintainers for their work: M0Rf30, alcir, dfaggioli, AtilaSaraiva, michel-slm

Alternative methods

Here is a list of alternative ways to install distrobox.

Curl or Wget

If you like to live your life dangerously, or you want the latest release, you can trust me and simply run this in your terminal:

curl -s https://raw.githubusercontent.com/89luca89/distrobox/main/install | sudo sh
# or using wget
wget -qO- https://raw.githubusercontent.com/89luca89/distrobox/main/install | sudo sh

or if you want to select a custom directory to install without sudo:

curl -s https://raw.githubusercontent.com/89luca89/distrobox/main/install | sh -s -- --prefix ~/.local
# or using wget
wget -qO- https://raw.githubusercontent.com/89luca89/distrobox/main/install | sh -s -- --prefix ~/.local

If you want to install the latest development version, directly from the last commit on git, you can use:

curl -s https://raw.githubusercontent.com/89luca89/distrobox/main/install | sudo sh -s -- --next
# or using wget
wget -qO- https://raw.githubusercontent.com/89luca89/distrobox/main/install | sudo sh -s -- --next

or:

curl -s https://raw.githubusercontent.com/89luca89/distrobox/main/install | sh -s -- --next --prefix ~/.local
# or using wget
wget -qO- https://raw.githubusercontent.com/89luca89/distrobox/main/install | sh -s -- --next --prefix ~/.local

Upgrading

Just run the curl or wget command again.

Warning: remember to add the bin directory of the prefix path you chose to your PATH, to make it work.
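A shell-profile snippet along these lines does that; ~/.local below is an example prefix, so adjust it to the one you chose. The `case` guard avoids adding the directory twice:

```shell
# Append <prefix>/bin to PATH only if it is not already there.
prefix="$HOME/.local"    # example prefix; use the one passed to the installer
case ":$PATH:" in
    *":$prefix/bin:"*) ;;                 # already on PATH, nothing to do
    *) PATH="$prefix/bin:$PATH" ;;
esac
export PATH
```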

Git

Alternatively you can clone the project using git clone or using the latest release HERE.

Enter the directory and run ./install. By default it will attempt to install in ~/.local, but if you run the script as root, it will default to /usr/local. You can specify a custom directory with the --prefix flag, such as ./install --prefix ~/.distrobox.

Prefix explained: the main distrobox files get installed to ${prefix}/bin, whereas the manpages get installed to ${prefix}/share/man.
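For example (the prefix below is a hypothetical value), the resulting layout works out as:

```shell
# Illustrative: file layout for a custom --prefix.
prefix="$HOME/.distrobox"        # example value passed to ./install --prefix
bin_dir="${prefix}/bin"          # main distrobox scripts go here
man_dir="${prefix}/share/man"    # manpages go here
echo "$bin_dir"
echo "$man_dir"
```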


Check the Host Distros compatibility list for distro-specific instructions.

Dependencies

Distrobox depends on a container manager to work; you can choose to install either podman, docker or lilipod.

Please look in the Compatibility Table for your distribution notes.

There are ways to install Podman without root privileges and in home, or Lilipod without root privileges and in home. This should play well with completely sudoless setups and with devices like the Steam Deck (SteamOS).


Uninstallation

If you installed distrobox using the install script in the default install directory, use this:

curl -s https://raw.githubusercontent.com/89luca89/distrobox/main/uninstall | sudo sh

or if you specified a custom path:

curl -s https://raw.githubusercontent.com/89luca89/distrobox/main/uninstall | sh -s -- --prefix ~/.local

Otherwise, if you cloned the project using git clone or used the latest archive release from HERE, enter the directory and run ./uninstall. By default it will assume the install directory was /usr/local if run as root, or ~/.local otherwise; you can specify another directory if needed with ./uninstall --prefix ~/.local


distro-box

This artwork uses the Cardboard Box model by J0Y, licensed under Creative Commons Attribution 4.0.
This artwork uses the GTK Loop Animation by the GNOME Project, licensed under Creative Commons Attribution-ShareAlike 3.0, as a pre-configured scene.

distrobox's People

Contributors

89luca89, actions-user, akinomyoga, alcir, bam80, baskrahmer, cgzones, daudix, deinferno, dfaggioli, dnkmmr69420, ennec-e, ericcurtin, eyecantcu, frauh0lle, hexared, juhp, luc14n0, michel-slm, mirkobrombin, misobarisic, nathanchance, nullpointerarray, osalbahr, pavinjosdev, phoppermann, sandorex, sfalken, thecmdrunner, vyskocilm


distrobox's Issues

Error when installing with curl https://raw.githubusercontent.com/89luca89/distrobox/main/install | sudo sh

Host os : kernel 5.13.19-2-MANJARO

I executed
curl https://raw.githubusercontent.com/89luca89/distrobox/main/install | sudo sh

It printed out :

/usr/bin/curl
/usr/bin/tar
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 122 100 122 0 0 345 0 --:--:-- --:--:-- --:--:-- 345
100 139k 0 139k 0 0 117k 0 --:--:-- 0:00:01 --:--:-- 440k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 135 100 135 0 0 320 0 --:--:-- --:--:-- --:--:-- 321
100 21507 0 21507 0 0 25096 0 --:--:-- --:--:-- --:--:-- 103k
distrobox-1.0.0/
distrobox-1.0.0/LICENSE
distrobox-1.0.0/README.md
distrobox-1.0.0/distrobox-create
distrobox-1.0.0/distrobox-enter
distrobox-1.0.0/distrobox-export
distrobox-1.0.0/distrobox-init
distrobox-1.0.0/install
chmod: cannot access '/usr/local/bin/distrobox-1.0.0/distrobox-create': No such file or directory

ls in /usr/local/bin gives:

distrobox-create docker-compose git-quick-stats

So it looks like it's a simple folder naming error (distrobox-create instead of distrobox-1.0.0 ?)

Best of luck with your project, it would be very useful to me :)

Apps working when launched from container, but not host.

Hi there,

Great tool!

I am using Fedora Silverblue and have set up an arch container for software access.

I'm seemingly unable to launch applications once I have exported them with

distrobox-export --app

The .desktop files go across fine, and they come up in the launcher ...yet they never actually launch. They run fine when launched from the container terminal.

Examples I have tested are Atom and GNOME Podcasts.

I may be missing something fundamental, as I am relatively new to container workflows.

Cheers.

Error: unable to start container "xxx": unable to find user root: no matching entries in passwd file

I have created my first podman image/container based on distrobox. It's an Ubuntu 20.04 image with a Python3 script based on PyQt, reading Garmin device data via a USB ANT stick and storing the data in a MariaDB. Works well - many thanks for the great development of Distrobox ...

I would like to save/import the container/image now, so that I can use it on my next laptop by using the "Container save and restore" commands from the README.md. But I get the following error message on the final distrobox-enter command:
"Error: unable to start container "xxx": unable to find user root: no matching entries in passwd file"

I checked this by restoring in the same environment where I built the container (Fedora 33), and then in a virtual machine with Fedora 35. I always get the same message in both environments.

Any suggestions how to fix this?

Error: host directory cannot be empty

I am trying to use distrobox on a Debian 11 Host but as soon as I try to create a new instance with e.g.:

distrobox-create --image registry.fedoraproject.org/fedora:35 --name fedora-35

I got an error like:

Error: host directory cannot be empty

distrobox login problem

(screenshot: photo_2021-12-31_20-35-24)

As described, I have problems logging into distrobox, returning the error shown in the screenshot with both Ubuntu and Arch.
Do you have any solutions?
PS: Fedora 35

thanks

[Error] duplicate mount destination on Fedora Silverblue

if [ -d "/var/home/${container_user_name}" ]; then

I'm on Fedora Silverblue.
Using distrobox-create, I get
Error: /var/home/myuser: duplicate mount destination

I think because we have

    210                 --volume ${container_user_home}:${container_user_home}:rslave

and again

    220                         --volume /var/home/${container_user_name}:/var/home/${container_user_name}:rslave"

On my system, $container_user_home (that is $HOME) is /var/home/myuser.
Not to mention that on Fedora Silverblue, /home is a symlink to var/home.
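One way to detect this case (a sketch, not the actual distrobox fix) is to canonicalize both paths before adding the second volume; on Silverblue both resolve to the same directory. Demonstrated below with a temporary directory standing in for the real filesystem:

```shell
# Illustrative: a /home -> var/home symlink makes two paths name one directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/var/home/myuser"
ln -s var/home "$tmp/home"    # mimic Silverblue's /home -> var/home symlink

canon_a=$(readlink -f "$tmp/home/myuser")
canon_b=$(readlink -f "$tmp/var/home/myuser")
if [ "$canon_a" = "$canon_b" ]; then
    echo "same directory: skip the second --volume to avoid a duplicate mount"
fi
rm -rf "$tmp"
```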

[Error] Error parsing env vars picks invalid entries

Here's what happened:

(screenshot)

Text for copy / pasting purposes:

ryleu@riley-fedora ~ [1]> podman pull docker.io/library/ubuntu:20.04
Trying to pull docker.io/library/ubuntu:20.04...
Getting image source signatures
Copying blob 7b1a6ab2e44d done  
Copying config ba6acccedd done  
Writing manifest to image destination
Storing signatures
ba6acccedd2923aee4c2acc6a23780b14ed4b8a5fa4e14e252a23b846df9b6c1
ryleu@riley-fedora ~> distrobox-create --image docker.io/library/ubuntu:20.04 --name ubuntu-20-04
4ca39d806adbdb662c895e26d5bf05ea19d64510eba997e067b4a44247eef936
ryleu@riley-fedora ~> distrobox-enter --name ubuntu-20-04 -- bash -l
Starting container
run this command to follow along:
	podman logs -f ubuntu-20-04
.....done!
Error: error parsing environment variables: name "declare -f" has white spaces, poorly formatted name
An error occurred
ryleu@riley-fedora ~ [125]>

I'm not sure why this happened. It could be because of SSH, but that seems unlikely. I'll test it without SSH once I get the chance.

System info:

  • OS: Fedora 35
  • Shell: Fish (over SSH)
  • DistroBox version: latest (installed this morning from git)

add control for dependencies at the top of each script

It could be useful to have a check at the beginning of each script to verify the presence of podman, etc.
Something similar to this:

# Check if the given command is present on the system.
# Arguments:
#   $1 - command name
# Outputs:
#   Notice that the command is not present.
# Examples:
#   check_prerequisite "kubectl"
check_prerequisite () {
    cmd=$1
    if ! command -v "$cmd" > /dev/null 2>&1; then
        printf "Please install %s first.\n" "$cmd"
        exit 1
    fi
}

[Feature] Create custom HOME for containers

This is needed for #28

We should be able to use something like --home-dir /path/i/like, so that a container can use a specific directory as its HOME instead of the host's $HOME.

This can be needed for various reasons, from isolation (as per #28) to keeping the host's HOME clean when we try software or build stuff.

The original HOME will still be reachable under /run/host/, so we will still be able to reach files and such.

[Issue] Distrobox fails to initialize the user on a Void Linux guest.

Info

Host system: Arch (EndeavourOS)
Container tech: Docker
Shell: zsh
Username on host system: simonw
Guest systems tried:

  • Void Linux Thin (Failed)
  • Void Linux Full (Failed)
  • Fedora Toolbox (Succeeded)

Steps taken

distrobox-create -i ghcr.io/void-linux/void-linux:latest-full-x86_64 -n void-box-full -v
distrobox-enter --name void-box-full

I tried starting the container with docker and it worked as expected:

docker run -it  dce757d093a4         
/ # cat etc/passwd
root:x:0:0:root:/root:/bin/sh
nobody:x:99:99:Unprivileged User:/dev/null:/bin/false
/ # exit

Log output

$ docker logs -f void-box-full
+ [ ! -f /run/.containerenv ]
+ [ ! -f /.dockerenv ]
+ [ -z 1000 ]
+ [ -z /home/simonw ]
+ [ -z simonw ]
+ [ -z 1000 ]
+ basename bash
+ shell_pkg=bash
+ command -v mount
+ command -v passwd
/usr/sbin/mount
+ command -v sudo
+ command -v apk
/usr/sbin/passwd
+ command -v apt-get
+ command -v dnf
+ command -v pacman
+ command -v slackpkg
+ command -v xbps-install
/usr/sbin/xbps-install
+ xbps-install -Sy bash procps-ng shadow sudo util-linux
[*] Updating repository `https://alpha.de.repo.voidlinux.org/current/x86_64-repodata' ...
x86_64-repodata: [1722KB 0%] 178MB/s ETA: 00m00s
x86_64-repodata: [1722KB 6%] 703KB/s ETA: 00m14s
x86_64-repodata: [1722KB 56%] 831KB/s ETA: 00m01s
x86_64-repodata: 1722KB [avg rate: 1467KB/s]
Package `procps-ng' already installed.
Package `shadow' already installed.
Package `util-linux' already installed.
2 packages will be downloaded:

2 packages will be installed:

  bash-5.1.008_1 
  sudo-1.9.8p2_1 

Size to download:             2608KB
Size required on disk:          11MB
Space available on disk:       577GB


[*] Downloading packages
bash-5.1.008_1.x86_64.xbps.sig: [512B 100%] 41MB/s ETA: 00m00s
bash-5.1.008_1.x86_64.xbps.sig: 512B [avg rate: 41MB/s]
bash-5.1.008_1.x86_64.xbps: [1540KB 0%] 95MB/s ETA: 00m00s
bash-5.1.008_1.x86_64.xbps: [1540KB 40%] 766KB/s ETA: 00m01s
bash-5.1.008_1.x86_64.xbps: 1540KB [avg rate: 1890KB/s]
bash-5.1.008_1: verifying RSA signature...
sudo-1.9.8p2_1.x86_64.xbps.sig: [512B 100%] 29MB/s ETA: 00m00s
sudo-1.9.8p2_1.x86_64.xbps.sig: 512B [avg rate: 29MB/s]
sudo-1.9.8p2_1.x86_64.xbps: [1066KB 0%] 163MB/s ETA: 00m00s
sudo-1.9.8p2_1.x86_64.xbps: [1066KB 76%] 891KB/s ETA: 00m00s
sudo-1.9.8p2_1.x86_64.xbps: 1066KB [avg rate: 1164KB/s]
Registered /usr/bin/bash into /etc/shells.
Registered /bin/bash into /etc/shells.
Setting up permissions to /etc/sudoers...
sudo-1.9.8p2_1: verifying RSA signature...

[*] Collecting package files
bash-5.1.008_1: collecting files...
sudo-1.9.8p2_1: collecting files...

[*] Unpacking packages
bash-5.1.008_1: unpacking ...
bash-5.1.008_1: registered 'sh' alternatives group
sudo-1.9.8p2_1: unpacking ...

[*] Configuring unpacked packages
bash-5.1.008_1: configuring ...
bash-5.1.008_1: installed successfully.
sudo-1.9.8p2_1: configuring ...
sudo-1.9.8p2_1: installed successfully.

2 downloaded, 2 installed, 0 updated, 2 configured, 0 removed.
+ HOST_MOUNTS_RO=/etc/machine-id /var/lib/flatpak /var/lib/systemd/coredump /var/log/journal
+ mount_bind /run/host/etc/machine-id /etc/machine-id ro
+ source_dir=/run/host/etc/machine-id
+ target_dir=/etc/machine-id
+ mount_flags=ro
+ [ -d /run/host/etc/machine-id ]
+ [ -f /run/host/etc/machine-id ]
+ [ -d /run/host/etc/machine-id ]
+ [ -f /run/host/etc/machine-id ]
+ touch /etc/machine-id
+ [ ro =  ]
+ mount --rbind -o ro /run/host/etc/machine-id /etc/machine-id
+ return 0
+ mount_bind /run/host/var/lib/flatpak /var/lib/flatpak ro
+ source_dir=/run/host/var/lib/flatpak
+ target_dir=/var/lib/flatpak
+ mount_flags=ro
+ [ -d /run/host/var/lib/flatpak ]
+ [ -f /run/host/var/lib/flatpak ]
+ return 0
+ mount_bind /run/host/var/lib/systemd/coredump /var/lib/systemd/coredump ro
+ source_dir=/run/host/var/lib/systemd/coredump
+ target_dir=/var/lib/systemd/coredump
+ mount_flags=ro
+ [ -d /run/host/var/lib/systemd/coredump ]
+ [ -d /run/host/var/lib/systemd/coredump ]
+ mkdir -p /var/lib/systemd/coredump
+ [ ro =  ]
+ mount --rbind -o ro /run/host/var/lib/systemd/coredump /var/lib/systemd/coredump
+ return 0
+ mount_bind /run/host/var/log/journal /var/log/journal ro
+ source_dir=/run/host/var/log/journal
+ target_dir=/var/log/journal
+ mount_flags=ro
+ [ -d /run/host/var/log/journal ]
+ [ -d /run/host/var/log/journal ]
+ mkdir -p /var/log/journal
+ [ ro =  ]
+ mount --rbind -o ro /run/host/var/log/journal /var/log/journal
+ return 0
+ HOST_MOUNTS=/media /run/media /run/udev/data /mnt /var/mnt /run/systemd/journal /run/libvirt /var/lib/libvirt
+ mount_bind /run/host/media /media rw
+ source_dir=/run/host/media
+ target_dir=/media
+ mount_flags=rw
+ [ -d /run/host/media ]
+ [ -f /run/host/media ]
+ return 0
+ mount_bind /run/host/run/media /run/media rw
+ source_dir=/run/host/run/media
+ target_dir=/run/media
+ mount_flags=rw
+ [ -d /run/host/run/media ]
+ [ -d /run/host/run/media ]
+ mkdir -p /run/media
+ [ rw =  ]
+ mount --rbind -o rw /run/host/run/media /run/media
+ return 0
+ mount_bind /run/host/run/udev/data /run/udev/data rw
+ source_dir=/run/host/run/udev/data
+ target_dir=/run/udev/data
+ mount_flags=rw
+ [ -d /run/host/run/udev/data ]
+ [ -d /run/host/run/udev/data ]
+ mkdir -p /run/udev/data
+ [ rw =  ]
+ mount --rbind -o rw /run/host/run/udev/data /run/udev/data
+ return 0
+ mount_bind /run/host/mnt /mnt rw
+ source_dir=/run/host/mnt
+ target_dir=/mnt
+ mount_flags=rw
+ [ -d /run/host/mnt ]
+ [ -d /run/host/mnt ]
+ mkdir -p /mnt
+ [ rw =  ]
+ mount --rbind -o rw /run/host/mnt /mnt
+ return 0
+ mount_bind /run/host/var/mnt /var/mnt rw
+ source_dir=/run/host/var/mnt
+ target_dir=/var/mnt
+ mount_flags=rw
+ [ -d /run/host/var/mnt ]
+ [ -f /run/host/var/mnt ]
+ return 0
+ mount_bind /run/host/run/systemd/journal /run/systemd/journal rw
+ source_dir=/run/host/run/systemd/journal
+ target_dir=/run/systemd/journal
+ mount_flags=rw
+ [ -d /run/host/run/systemd/journal ]
+ [ -d /run/host/run/systemd/journal ]
+ mkdir -p /run/systemd/journal
+ [ rw =  ]
+ mount --rbind -o rw /run/host/run/systemd/journal /run/systemd/journal
+ return 0
+ mount_bind /run/host/run/libvirt /run/libvirt rw
+ source_dir=/run/host/run/libvirt
+ target_dir=/run/libvirt
+ mount_flags=rw
+ [ -d /run/host/run/libvirt ]
+ [ -f /run/host/run/libvirt ]
+ return 0
+ mount_bind /run/host/var/lib/libvirt /var/lib/libvirt rw
+ source_dir=/run/host/var/lib/libvirt
+ target_dir=/var/lib/libvirt
+ mount_flags=rw
+ [ -d /run/host/var/lib/libvirt ]
+ [ -f /run/host/var/lib/libvirt ]
+ return 0
+ [ -d /usr/lib/rpm/macros.d/ ]
+ grep -q Defaults !fqdn /etc/sudoers
+ printf Defaults !fqdn\n
+ grep -q simonw ALL = (root) NOPASSWD:ALL /etc/sudoers
+ printf %s ALL = (root) NOPASSWD:ALL\n simonw
+ grep -q simonw /etc/group
+ groupadd --force --gid 1000 simonw
+ id simonw
id: 'simonw': no such user
/usr/bin/entrypoint: 295: SHELL: parameter not set
+ [ 2 -ne 0 ]
+ printf Error: An error occurred\n
Error: An error occurred

Issue

As you can see "an error occurred". Maybe it has something to do with user creation?

+ id simonw
id: 'simonw': no such user

I don't know if this is a widespread issue or one specific to my setup. Maybe someone has an idea or can reproduce it.

[Improvement] Themes and icons integration

Until version 1.0.2 there was a rudimentary integration between the host's custom themes and icons and the container, using symlinks:

ln -s /run/host/usr/share/themes/* ~/.themes

same with icons

ln -s /run/host/usr/share/icons/* ~/.icons


This has been removed in version 1.0.3, as it is not a solution I like, nor is it solid enough.
I would like to find a solution that is solid enough to not spit out confusing errors when using multiple distroboxes and mixing and matching different containers.

[Suggestion] Remove hyphens in commands

I suggest changing the commands to distrobox {create,enter,init,export} (notice the lack of hyphens) because it's consistent with other utilities like Podman, Docker or Toolbx and many more.

distrobox-export --app failure due to cp: -r not specified

Hi again!

I was just trying to export Visual Studio Code today (from the Aur bin package) and came across the following


[joseph@Arch ~]$ distrobox-export --app code
cp: -r not specified; omitting directory '/usr/share/icons/hicolor/128x128/stock/code'

Nothing is exported to .local/share/applications/

Not sure if I missed a flag somewhere, but I couldn't see any that were relevant?

[Feature] Sandboxed mode

Right now distrobox's containers are created in privileged mode and share a lot of sensitive host folders.

This is done because the aim is tight integration with the host, not sandboxing.


It would be nice to have an optional (see: disabled if not specified) --unprivileged or a --sandbox flag in distrobox-create to have a more isolated container to work with.

Ubuntu 18.04 container doesn't work

It is stated that Ubuntu 18.04 is a supported container distro, but:

[user@pc ~]$ distrobox-create --name ubuntu-18 --image docker.io/library/ubuntu:18.04
e4a127e1cf407e72033dc8f34326c1793796de62f97ecbc871ad71823824b89b
Distrobox 'ubuntu-18' successfully created.
To enter, run:
	distrobox-enter --name ubuntu-18
[user@pc ~]$    distrobox-enter --name ubuntu-18
Starting container ubuntu-18
run this command to follow along:
	podman logs -f ubuntu-18
.......
done!
Error: executable file `bash` not found in $PATH: No such file or directory: OCI runtime attempted to invoke a command that was not found

An error occurred

[Error] distrobox-export sometimes doesn't properly close quotes in the Exec line

For instance with distrobox-export --app simple-scan
The applications doesn't appear in the application list because the Exec line in the .desktop file is wrong.

It is
Exec=/var/home/myuser/.local/bin/distrobox-enter -H -n fedora-toolbox-35 -- " simple-scan
while it should be
Exec=/var/home/alessio/.local/bin/distrobox-enter -H -n fedora-toolbox-35 -- " simple-scan "

Or
Exec=/var/home/alessio/.local/bin/distrobox-enter -H -n fedora-toolbox-35 -- " env QT_QPA_PLATFORM=xcb x2goclient
should be
Exec=/var/home/alessio/.local/bin/distrobox-enter -H -n fedora-toolbox-35 -- " env QT_QPA_PLATFORM=xcb x2goclient "

While (distrobox-export --app virt-viewer)
Exec=/var/home/alessio/.local/bin/distrobox-enter -H -n fedora-toolbox-35 -- " remote-viewer " %u
is correct.
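The fix amounts to always emitting the closing quote. A hypothetical helper (not the actual distrobox-export code) illustrates building the Exec line with printf, so both quotes are always present regardless of the command:

```shell
# Hypothetical helper: printf with a fixed format string guarantees the
# closing quote is never dropped, whatever the command contains.
build_exec_line() {
    enter_path=$1
    container=$2
    cmd=$3
    printf 'Exec=%s -H -n %s -- "%s"\n' "$enter_path" "$container" "$cmd"
}

build_exec_line /usr/bin/distrobox-enter fedora-toolbox-35 simple-scan
# -> Exec=/usr/bin/distrobox-enter -H -n fedora-toolbox-35 -- "simple-scan"

build_exec_line /usr/bin/distrobox-enter fedora-toolbox-35 "env QT_QPA_PLATFORM=xcb x2goclient"
# -> Exec=/usr/bin/distrobox-enter -H -n fedora-toolbox-35 -- "env QT_QPA_PLATFORM=xcb x2goclient"
```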

Can't enter containers after rebooting the host

Hi, I can create new containers and enter them just one time.

When I try to enter a container for a second time with

distrobox-enter --name fedora-35 -- bash -l  

Error: unable to start container "fedora-35": container create failed (no logs from conmon): EOF

Cannot start container, does it exist?
Try running first:

    distrobox-create --name <name-of-container> --image <remote>/<docker>:<tag>
An error occurred

even though I have a container with that name:

podman ps -a
CONTAINER ID  IMAGE                                 COMMAND               CREATED       STATUS                   PORTS  NAMES
92bef478ff04  registry.fedoraproject.org/fedora:35  /usr/bin/entrypoi...  15 hours ago  Created                         fedora-35-2
e276c55031e7  registry.fedoraproject.org/fedora:35  /usr/bin/entrypoi...  19 hours ago  Exited (6) 19 hours ago         fedora-35

EDIT: what triggers this is the reboot of the host. I can create new containers and enter them as long as I don't reboot the host. After the reboot, none of the previously created containers can be entered, even though they are listed by podman ps.

[Error] unable to start container: error stat'ing file irqbalance sock file

I have created two containers using distrobox-create: Fedora 35 and Ubuntu 20.04.
I was able to enter and configure them without issues.

However, after system reboot, trying to enter any of them results in error message:

❯ distrobox-enter -n ubuntu-toolbox-20
Error: unable to start container "1302abf5585c1ab2ce43c26347fca10bd41eed09d7fb378390defcb6d6be57be": error stat'ing file `/run/irqbalance/irqbalance1451.sock`: No such file or directory: OCI runtime attempted to invoke a command that was not found

An error occurred

After the reboot, the irqbalance sock file was recreated by the system:

❯ ll /run/irqbalance/                   
srwxr-xr-x@ 0 root root 30 Dec 14:37  irqbalance1454.sock

while the container still expects it at the path it had at create time:

❯ podman inspect ubuntu-toolbox-20 | grep -i irqbalance
                "Source": "/run/irqbalance/irqbalance1451.sock",
                "Destination": "/run/irqbalance/irqbalance1451.sock",
                "/run/irqbalance/irqbalance1451.sock:/run/irqbalance/irqbalance1451.sock",
                "/run/irqbalance/irqbalance1451.sock:/run/irqbalance/irqbalance1451.sock:rw,rprivate,nosuid,nodev,rbind",

Is there a way to overcome this?

Using podman version 3.4.4 on a Fedora Linux 35 host.
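Until the mounts are resolved at start time rather than create time, a quick way to confirm this is the problem is to compare the mount sources recorded in the container config against what actually exists on the host. The helper below is a hedged sketch, not part of distrobox; the container name and the `podman inspect` invocation in the comment are illustrative.

```shell
#!/bin/sh
# Sketch: given bind-mount sources one per line, print those that no longer
# exist on the host. After a reboot, such stale sources are exactly what
# makes "podman start" fail.
#
# Feed it the sources recorded in the container config, e.g.:
#   podman inspect ubuntu-toolbox-20 \
#     --format '{{range .Mounts}}{{.Source}}{{"\n"}}{{end}}' | find_stale_mounts
find_stale_mounts() {
    while read -r src; do
        [ -n "${src}" ] || continue
        [ -e "${src}" ] || printf 'stale mount source: %s\n' "${src}"
    done
}
```

Any path this prints (such as the old `irqbalance1451.sock`) will block the container from starting until it is recreated or the path reappears.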

Error: cannot link container's themes to host home.

The container can't initialize. What's the rationale behind these themes/icons operations? Here is the log:

podman logs -f fedora-35
+ HOST_LINKS='/etc/host.conf /etc/hosts /etc/resolv.conf /etc/localtime /etc/timezone'
+ for link in ${HOST_LINKS}
+ rm -f /etc/host.conf
+ ln -s /run/host/etc/host.conf /etc/host.conf
+ for link in ${HOST_LINKS}
+ rm -f /etc/hosts
+ ln -s /run/host/etc/hosts /etc/hosts
+ for link in ${HOST_LINKS}
+ rm -f /etc/resolv.conf
+ ln -s /run/host/etc/resolv.conf /etc/resolv.conf
+ for link in ${HOST_LINKS}
+ rm -f /etc/localtime
+ ln -s /run/host/etc/localtime /etc/localtime
+ for link in ${HOST_LINKS}
+ rm -f /etc/timezone
+ ln -s /run/host/etc/timezone /etc/timezone
++ basename /bin/bash
+ shell_pkg=bash
+ command -v mount
+ command -v apk
+ command -v apt-get
+ command -v dnf
/usr/bin/dnf
+ dnf install -y --setopt=install_weak_deps=False sudo shadow-utils passwd procps-ng util-linux bash
Fedora 35 - x86_64                              6.6 MB/s |  61 MB     00:09    
Fedora 35 openh264 (From Cisco) - x86_64        4.1 kB/s | 2.5 kB     00:00    
Fedora Modular 35 - x86_64                      3.1 MB/s | 2.6 MB     00:00    
Fedora 35 - x86_64 - Updates                    4.7 MB/s |  15 MB     00:03    
Fedora Modular 35 - x86_64 - Updates            484 kB/s | 717 kB     00:01    
Package sudo-1.9.7p2-2.fc35.x86_64 is already installed.
Package shadow-utils-2:4.9-7.fc35.x86_64 is already installed.
Package bash-5.1.8-2.fc35.x86_64 is already installed.
Dependencies resolved.
================================================================================
 Package                Architecture  Version              Repository      Size
================================================================================
Installing:
 passwd                 x86_64        0.80-11.fc35         fedora         107 k
 procps-ng              x86_64        3.3.17-3.fc35        fedora         328 k
 util-linux             x86_64        2.37.2-1.fc35        fedora         2.2 M
Installing dependencies:
 cracklib-dicts         x86_64        2.9.6-27.fc35        fedora         3.6 M
 libfdisk               x86_64        2.37.2-1.fc35        fedora         155 k
 libuser                x86_64        0.63-7.fc35          fedora         380 k
 libutempter            x86_64        1.2.1-5.fc35         fedora          26 k
 systemd-libs           x86_64        249.7-2.fc35         updates        616 k
 util-linux-core        x86_64        2.37.2-1.fc35        fedora         434 k

Transaction Summary
================================================================================
Install  9 Packages

Total download size: 7.8 M
Installed size: 27 M
Downloading Packages:
(1/9): libfdisk-2.37.2-1.fc35.x86_64.rpm        708 kB/s | 155 kB     00:00    
(2/9): libuser-0.63-7.fc35.x86_64.rpm           1.5 MB/s | 380 kB     00:00    
(3/9): libutempter-1.2.1-5.fc35.x86_64.rpm      583 kB/s |  26 kB     00:00    
(4/9): passwd-0.80-11.fc35.x86_64.rpm           604 kB/s | 107 kB     00:00    
(5/9): procps-ng-3.3.17-3.fc35.x86_64.rpm       1.3 MB/s | 328 kB     00:00    
(6/9): util-linux-core-2.37.2-1.fc35.x86_64.rpm 888 kB/s | 434 kB     00:00    
(7/9): cracklib-dicts-2.9.6-27.fc35.x86_64.rpm  3.4 MB/s | 3.6 MB     00:01    
(8/9): util-linux-2.37.2-1.fc35.x86_64.rpm      3.2 MB/s | 2.2 MB     00:00    
(9/9): systemd-libs-249.7-2.fc35.x86_64.rpm     1.1 MB/s | 616 kB     00:00    
--------------------------------------------------------------------------------
Total                                           3.9 MB/s | 7.8 MB     00:02     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                        1/1
  Installing       : systemd-libs-249.7-2.fc35.x86_64                       1/9
  Running scriptlet: systemd-libs-249.7-2.fc35.x86_64                       1/9
  Installing       : util-linux-core-2.37.2-1.fc35.x86_64                   2/9
  Running scriptlet: util-linux-core-2.37.2-1.fc35.x86_64                   2/9
  Running scriptlet: libutempter-1.2.1-5.fc35.x86_64                        3/9
  Installing       : libutempter-1.2.1-5.fc35.x86_64                        3/9
  Installing       : libuser-0.63-7.fc35.x86_64                             4/9
  Installing       : libfdisk-2.37.2-1.fc35.x86_64                          5/9
  Installing       : util-linux-2.37.2-1.fc35.x86_64                        6/9
warning: /etc/adjtime created as /etc/adjtime.rpmnew

  Running scriptlet: util-linux-2.37.2-1.fc35.x86_64                        6/9
  Installing       : passwd-0.80-11.fc35.x86_64                             7/9
  Installing       : procps-ng-3.3.17-3.fc35.x86_64                         8/9
  Installing       : cracklib-dicts-2.9.6-27.fc35.x86_64                    9/9
  Running scriptlet: cracklib-dicts-2.9.6-27.fc35.x86_64                    9/9
  Verifying        : cracklib-dicts-2.9.6-27.fc35.x86_64                    1/9
  Verifying        : libfdisk-2.37.2-1.fc35.x86_64                          2/9
  Verifying        : libuser-0.63-7.fc35.x86_64                             3/9
  Verifying        : libutempter-1.2.1-5.fc35.x86_64                        4/9
  Verifying        : passwd-0.80-11.fc35.x86_64                             5/9
  Verifying        : procps-ng-3.3.17-3.fc35.x86_64                         6/9
  Verifying        : util-linux-2.37.2-1.fc35.x86_64                        7/9
  Verifying        : util-linux-core-2.37.2-1.fc35.x86_64                   8/9
  Verifying        : systemd-libs-249.7-2.fc35.x86_64                       9/9

Installed:
  cracklib-dicts-2.9.6-27.fc35.x86_64       libfdisk-2.37.2-1.fc35.x86_64       
  libuser-0.63-7.fc35.x86_64                libutempter-1.2.1-5.fc35.x86_64     
  passwd-0.80-11.fc35.x86_64                procps-ng-3.3.17-3.fc35.x86_64      
  systemd-libs-249.7-2.fc35.x86_64          util-linux-2.37.2-1.fc35.x86_64     
  util-linux-core-2.37.2-1.fc35.x86_64     

Complete!
+ HOST_MOUNTS_RO='/etc/machine-id /var/lib/flatpak /var/lib/systemd/coredump /var/log/journal'
+ for mount_ro in ${HOST_MOUNTS_RO}
+ mount_bind /run/host/etc/machine-id /etc/machine-id ro
+ source_dir=/run/host/etc/machine-id
+ target_dir=/etc/machine-id
+ mount_flags=ro
+ mount_o=
+ '[' -d /run/host/etc/machine-id ']'
+ '[' -f /run/host/etc/machine-id ']'
+ '[' -d /run/host/etc/machine-id ']'
+ '[' -f /run/host/etc/machine-id ']'
+ touch /etc/machine-id
+ '[' ro = '' ']'
+ mount_o='-o ro'
+ mount --rbind -o ro /run/host/etc/machine-id /etc/machine-id
+ return 0
+ for mount_ro in ${HOST_MOUNTS_RO}
+ mount_bind /run/host/var/lib/flatpak /var/lib/flatpak ro
+ source_dir=/run/host/var/lib/flatpak
+ target_dir=/var/lib/flatpak
+ mount_flags=ro
+ mount_o=
+ '[' -d /run/host/var/lib/flatpak ']'
+ '[' -d /run/host/var/lib/flatpak ']'
+ mkdir -p /var/lib/flatpak
+ '[' ro = '' ']'
+ mount_o='-o ro'
+ mount --rbind -o ro /run/host/var/lib/flatpak /var/lib/flatpak
+ return 0
+ for mount_ro in ${HOST_MOUNTS_RO}
+ mount_bind /run/host/var/lib/systemd/coredump /var/lib/systemd/coredump ro
+ source_dir=/run/host/var/lib/systemd/coredump
+ target_dir=/var/lib/systemd/coredump
+ mount_flags=ro
+ mount_o=
+ '[' -d /run/host/var/lib/systemd/coredump ']'
+ '[' -d /run/host/var/lib/systemd/coredump ']'
+ mkdir -p /var/lib/systemd/coredump
+ '[' ro = '' ']'
+ mount_o='-o ro'
+ mount --rbind -o ro /run/host/var/lib/systemd/coredump /var/lib/systemd/coredump
+ return 0
+ for mount_ro in ${HOST_MOUNTS_RO}
+ mount_bind /run/host/var/log/journal /var/log/journal ro
+ source_dir=/run/host/var/log/journal
+ target_dir=/var/log/journal
+ mount_flags=ro
+ mount_o=
+ '[' -d /run/host/var/log/journal ']'
+ '[' -d /run/host/var/log/journal ']'
+ mkdir -p /var/log/journal
+ '[' ro = '' ']'
+ mount_o='-o ro'
+ mount --rbind -o ro /run/host/var/log/journal /var/log/journal
+ return 0
+ HOST_MOUNTS='/media /mnt /run/systemd/journal /run/libvirt /var/mnt /var/lib/libvirt'
+ for mount in ${HOST_MOUNTS}
+ mount_bind /run/host/media /media rw
+ source_dir=/run/host/media
+ target_dir=/media
+ mount_flags=rw
+ mount_o=
+ '[' -d /run/host/media ']'
+ '[' -d /run/host/media ']'
+ mkdir -p /media
+ '[' rw = '' ']'
+ mount_o='-o rw'
+ mount --rbind -o rw /run/host/media /media
+ return 0
+ for mount in ${HOST_MOUNTS}
+ mount_bind /run/host/mnt /mnt rw
+ source_dir=/run/host/mnt
+ target_dir=/mnt
+ mount_flags=rw
+ mount_o=
+ '[' -d /run/host/mnt ']'
+ '[' -d /run/host/mnt ']'
+ mkdir -p /mnt
+ '[' rw = '' ']'
+ mount_o='-o rw'
+ mount --rbind -o rw /run/host/mnt /mnt
+ return 0
+ for mount in ${HOST_MOUNTS}
+ mount_bind /run/host/run/systemd/journal /run/systemd/journal rw
+ source_dir=/run/host/run/systemd/journal
+ target_dir=/run/systemd/journal
+ mount_flags=rw
+ mount_o=
+ '[' -d /run/host/run/systemd/journal ']'
+ '[' -d /run/host/run/systemd/journal ']'
+ mkdir -p /run/systemd/journal
+ '[' rw = '' ']'
+ mount_o='-o rw'
+ mount --rbind -o rw /run/host/run/systemd/journal /run/systemd/journal
+ return 0
+ for mount in ${HOST_MOUNTS}
+ mount_bind /run/host/run/libvirt /run/libvirt rw
+ source_dir=/run/host/run/libvirt
+ target_dir=/run/libvirt
+ mount_flags=rw
+ mount_o=
+ '[' -d /run/host/run/libvirt ']'
+ '[' -f /run/host/run/libvirt ']'
+ return 0
+ for mount in ${HOST_MOUNTS}
+ mount_bind /run/host/var/mnt /var/mnt rw
+ source_dir=/run/host/var/mnt
+ target_dir=/var/mnt
+ mount_flags=rw
+ mount_o=
+ '[' -d /run/host/var/mnt ']'
+ '[' -f /run/host/var/mnt ']'
+ return 0
+ for mount in ${HOST_MOUNTS}
+ mount_bind /run/host/var/lib/libvirt /var/lib/libvirt rw
+ source_dir=/run/host/var/lib/libvirt
+ target_dir=/var/lib/libvirt
+ mount_flags=rw
+ mount_o=
+ '[' -d /run/host/var/lib/libvirt ']'
+ '[' -f /run/host/var/lib/libvirt ']'
+ return 0
+ mkdir -p /home/alex/.themes/
+ ln -s /run/host/usr/share/themes/Breeze /run/host/usr/share/themes/Breeze-Dark /run/host/usr/share/themes/Default /run/host/usr/share/themes/Emacs /run/host/usr/share/themes/KvAdapta /run/host/usr/share/themes/KvAmbiance /run/host/usr/share/themes/KvAmbience /run/host/usr/share/themes/KvArc /run/host/usr/share/themes/KvArcDark /run/host/usr/share/themes/KvBeige /run/host/usr/share/themes/KvBrown /run/host/usr/share/themes/KvCurvesLight /run/host/usr/share/themes/KvCyan /run/host/usr/share/themes/KvDarkRed /run/host/usr/share/themes/KvFlatLight /run/host/usr/share/themes/KvGnome /run/host/usr/share/themes/KvGnomeAlt /run/host/usr/share/themes/KvGnomeDark /run/host/usr/share/themes/KvGnomish /run/host/usr/share/themes/KvGray /run/host/usr/share/themes/KvOxygen /run/host/usr/share/themes/KvRoughGlass /run/host/usr/share/themes/KvSimplicity /run/host/usr/share/themes/KvSimplicityDark /run/host/usr/share/themes/Kvantum /run/host/usr/share/themes/Raleigh /home/alex/.themes/
ln: failed to create symbolic link '/home/alex/.themes/Breeze': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/Breeze-Dark': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/Default': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/Emacs': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/KvAdapta': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/KvAmbiance': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/KvAmbience': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/KvArc': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/KvArcDark': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/KvBeige': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/KvBrown': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/KvCurvesLight': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/KvCyan': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/KvDarkRed': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/KvFlatLight': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/KvGnome': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/KvGnomeAlt': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/KvGnomeDark': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/KvGnomish': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/KvGray': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/KvOxygen': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/KvRoughGlass': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/KvSimplicity': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/KvSimplicityDark': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/Kvantum': Permission denied
ln: failed to create symbolic link '/home/alex/.themes/Raleigh': Permission denied
Error: cannot link container's themes to host home.
+ echo 'Error: cannot link container'\''s themes to host home.'
+ mkdir -p /home/alex/.icons/
+ ln -s /run/host/usr/share/icons/Adwaita /run/host/usr/share/icons/Breeze_Snow /run/host/usr/share/icons/breeze /run/host/usr/share/icons/breeze-dark /run/host/usr/share/icons/breeze_cursors /run/host/usr/share/icons/default /run/host/usr/share/icons/gnome /run/host/usr/share/icons/hicolor /run/host/usr/share/icons/locolor /run/host/usr/share/icons/mindforger /run/host/usr/share/icons/scalable /home/alex/.icons/
+ echo 'Defaults !fqdn'
+ echo 'alex ALL = (root) NOPASSWD:ALL'
++ list_sudo_groups
++ group=
++ grep -q sudo: /etc/group
++ grep -q wheel: /etc/group
++ group=wheel
++ echo wheel
++ return 0
+ useradd --home-dir /home/alex --no-create-home --shell /bin/bash --uid 1000 --gid 1000 --groups wheel alex
useradd: group '1000' does not exist
++ list_sudo_groups
++ group=
++ grep -q sudo: /etc/group
++ grep -q wheel: /etc/group
++ group=wheel
++ echo wheel
++ return 0
+ usermod -aG wheel alex
usermod: user 'alex' does not exist
+ '[' 6 -ne 0 ']'
+ echo An error occurred
An error occurred

[ Feature ] Add version to the script

Right now we have versioning in the GitHub releases and in git tags, but not in the scripts themselves.

I would like to add distrobox-enter --version and the equivalent flag to the other tools.

We need to find a way to inject the version into the scripts without being too invasive.
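A minimal sketch of one way to do it, assuming a `version` variable is substituted into each script at release time (the variable name, value, and `show_version` helper are all illustrative, not existing distrobox code):

```shell
#!/bin/sh
# Illustrative sketch: embed a version string near the top of each script
# and handle a --version flag in the existing argument parser.
version="1.2.13"

show_version() {
    printf "distrobox: %s\n" "${version}"
}

# inside the argument-parsing loop of each tool:
case "${1:-}" in
    -V | --version)
        show_version
        exit 0
        ;;
esac
```

A release script could then rewrite the `version=` line with `sed` at tag time, keeping the change to a single line per tool.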

Volume Mounting Issue on NixOS.

Hi, this is a super cool looking tool, and I love that it's written in shell! I went to try and port it to NixOS, but I ran into an issue where the volumes do not appear to be mounting: specifically, Podman reports that /usr/bin/entrypoint does not exist, and the container never starts because it cannot execute its entrypoint command. I don't know whether this is a NixOS-specific issue or something else. I would love to help, but I am not sure where to start on fixing it, and any help would be greatly appreciated. Thank you!

[Improvement] Move mounts from create to init

As seen in #72, there are situations where files or directories mounted at create time disappear or change name, and the container then fails to start because the mount point no longer exists.

Since we already mount the whole host file system under /run/host, we should move all of this dynamic mounting from distrobox-create to distrobox-init, to make a created pet container more robust.
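The idea above can be sketched as follows: resolve each host path under /run/host inside distrobox-init, so a path that vanished since create time is skipped instead of breaking container startup. This is a hedged illustration, not the real helper: it only prints the mount command it would run, whereas the actual init script would execute it.

```shell
#!/bin/sh
# Dry-run sketch of an init-time bind-mount helper. Paths that no longer
# exist on the host are skipped silently instead of failing the start.
mount_bind() {
    source_dir="$1"
    target_dir="$2"
    mount_flags="${3:-rw}"
    # skip paths that disappeared or were renamed since the last boot
    if [ ! -d "${source_dir}" ] && [ ! -f "${source_dir}" ]; then
        return 0
    fi
    printf 'mount --rbind -o %s %s %s\n' \
        "${mount_flags}" "${source_dir}" "${target_dir}"
}
```

Called at init time as `mount_bind /run/host/media /media rw`, the helper degrades gracefully when /run/host/media is absent, which is exactly the failure mode seen in #72.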

[ Feature ] Export arbitrary binary from container & direnv

Hello and thank you very much for creating this project!

As far as I understand, distrobox-export --app can only export applications that have a .desktop file. Would it be possible to extend this to programs in /bin or /usr/bin? Or to have an option like distrobox-export --bin which does that?

Background:

I discovered your project and dahbox around the same time in the last few days, while searching for a way to have project-specific environments with direnv (without using the nix package manager or NixOS).
Both projects have a similar scope, but yours seems more generic in terms of available container images. That's why I am opening this request.

The workflow I mean is described here, and I would imagine it could look similar to this:

# Direnv setup
echo "PATH_add $PWD/.distrobox-bin" > .envrc
direnv allow

# Export from distrobox
# I currently work a lot in R, so I will use it as an example
distrobox-export --bin R --export-to "$PWD/.distrobox-bin"

# This would create a bash script $PWD/.distrobox-bin/R which would point to R in the container which in turn would be added to $PATH via direnv when the user works in this project folder.

Would this be feasible for this project or is it out of scope?
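One way such a `--bin` option could work is by generating a thin host-side wrapper that re-enters the container and runs the requested binary. The sketch below is purely illustrative (the flag, container name, binary path, and destination directory are assumptions, not an existing distrobox API):

```shell
#!/bin/sh
# Sketch: generate a host-side wrapper for a binary that lives inside a
# container. All names here are illustrative examples.
container_name="fedora-35"
exported_bin="/usr/bin/R"
dest_dir="${PWD}/.distrobox-bin"

mkdir -p "${dest_dir}"
wrapper="${dest_dir}/$(basename "${exported_bin}")"
cat > "${wrapper}" << EOF
#!/bin/sh
exec distrobox-enter --name ${container_name} -- ${exported_bin} "\$@"
EOF
chmod +x "${wrapper}"
```

With the wrapper directory on `$PATH` via direnv's `PATH_add`, running `R` in the project folder would transparently execute it inside the container.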

[Feature] Add support for ChromeOS

I was able to install Distrobox successfully on my Chromebook in dev mode, but it can't run due to the lack of podman. Podman isn't a standard package, nor is it part of the dev_tools available on ChromeOS.

[Error] Failed to resolve names for Ubuntu archives

When I try to set up an Ubuntu distrobox, it loads forever. When I get the logs from podman, it says it failed to resolve the names archive.ubuntu.com and security.ubuntu.com. I can access both sites from the host system.

Host system: Fedora 35

Distrobox version: 6db2fc4

Guest system: both Ubuntu 20.04 and 21.10

Error & logs:


ryleu@riley-fedora ~> distrobox-create --image docker.io/library/ubuntu:20.04 --name ubuntu-20-04
f818363c6ce37a68923068ed989e20eba86466466854b01124b4d6ea275e359f
e/
ryleu@riley-fedora ~> distrobox-enter --name ubuntu-20-04 -- bash -li
Starting container
run this command to follow along:
        podman logs -f ubuntu-20-04
....................................................................................................^C⏎
ryleu@riley-fedora ~ [SIGINT]> podman logs -f ubuntu-20-04
+ HOST_LINKS=/etc/host.conf /etc/hosts /etc/resolv.conf /etc/localtime /etc/timezone
+ rm -f /etc/host.conf
+ ln -s /run/host/etc/host.conf /etc/host.conf
+ rm -f /etc/hosts
+ ln -s /run/host/etc/hosts /etc/hosts
+ rm -f /etc/resolv.conf
+ ln -s /run/host/etc/resolv.conf /etc/resolv.conf
+ rm -f /etc/localtime
+ ln -s /run/host/etc/localtime /etc/localtime
+ rm -f /etc/timezone
+ ln -s /run/host/etc/timezone /etc/timezone
+ basename /bin/bash
+ shell_pkg=bash
+ command -v mount
/usr/bin/mount
+ command -v passwd
/usr/bin/passwd
+ command -v sudo
+ command -v apk
+ command -v apt-get
/usr/bin/apt-get
+ apt-get update
Err:1 http://archive.ubuntu.com/ubuntu focal InRelease
  Temporary failure resolving 'archive.ubuntu.com'
Err:2 http://security.ubuntu.com/ubuntu focal-security InRelease
  Temporary failure resolving 'security.ubuntu.com'
Err:3 http://archive.ubuntu.com/ubuntu focal-updates InRelease
  Temporary failure resolving 'archive.ubuntu.com'
Err:4 http://archive.ubuntu.com/ubuntu focal-backports InRelease
  Temporary failure resolving 'archive.ubuntu.com'
Reading package lists...
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/focal/InRelease  Temporary failure resolving 'archive.ubuntu.com'
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/focal-updates/InRelease  Temporary failure resolving 'archive.ubuntu.com'
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/focal-backports/InRelease  Temporary failure resolving 'archive.ubuntu.com'
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/focal-security/InRelease  Temporary failure resolving 'security.ubuntu.com'
W: Some index files failed to download. They have been ignored, or old ones used instead.
+ apt-get install -y --no-install-recommends sudo apt-utils passwd procps util-linux bash
Reading package lists...
Building dependency tree...
Reading state information...
Package apt-utils is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
  apt

E: Unable to locate package sudo
E: Package 'apt-utils' has no installation candidate
+ [ 100 -ne 0 ]
An error occurred
+ echo An error occurred

[Error] distrobox-enter fails to enter container on Fedora Silverblue host

I am using a Fedora Silverblue 35 x86_64 host (specifically Fedora Kinoite), and distrobox-enter is not able to enter the container: an error code of 255 is returned when attaching to it. As an aside, this machine is able to use toolbox properly. I also tried entering the container distrobox created, and was able to do so (see the end of the log).

I can't see from the log what the issue is; is there some way to get more information about why the exec did not work? I'm not sure if this is relevant, but my default user shell is zsh (which may not exist in the container); the error doesn't seem to be about that, though.

System information:

$ uname -a
Linux rkrishna-p1 5.15.8-200.fc35.x86_64 #1 SMP Tue Dec 14 14:26:01 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
$ podman -v
podman version 3.4.4

Logs from create and enter (I can add more specific create logs if needed):

$ distrobox-create --image registry.fedoraproject.org/fedora-toolbox:35 --name fedora-toolbox-35
4335314437a42ae7deb9ecc6f39523caa62ad02b52238401f6bd2e7b79d88588
Distrobox 'fedora-toolbox-35' successfully created.
To enter, run:
        distrobox-enter --name fedora-toolbox-35

$ distrobox-enter --name fedora-toolbox-35 -v -- bash -l
+ container_manager=podman
++ id -ru
+ '[' -S /run/user/1000/podman/podman.sock ']'
+ command -v podman
+ '[' 1 -ne 0 ']'
+ container_manager='podman --log-level debug'
++ podman --log-level debug inspect --type container fedora-toolbox-35 --format '{{.State.Status}}'
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called inspect.PersistentPreRunE(podman --log-level debug inspect --type container fedora-toolbox-35 --format {{.State.Status}}) 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Merged system config "/usr/share/containers/containers.conf" 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /var/home/ramesh/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Overriding graph root "/var/home/ramesh/.local/share/containers/storage" with "/home/ramesh/.local/share/containers/storage" from database 
DEBU[0000] Overriding static dir "/var/home/ramesh/.local/share/containers/storage/libpod" with "/home/ramesh/.local/share/containers/storage/libpod" from database 
DEBU[0000] Overriding volume path "/var/home/ramesh/.local/share/containers/storage/volumes" with "/home/ramesh/.local/share/containers/storage/volumes" from database 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/ramesh/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1000/containers     
DEBU[0000] Using static dir /home/ramesh/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /home/ramesh/.local/share/containers/storage/volumes 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Cached value indicated that metacopy is not being used 
DEBU[0000] Cached value indicated that native-diff is usable 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false 
DEBU[0000] Initializing event backend journald          
DEBU[0000] configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument 
DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
DEBU[0000] Using OCI runtime "/usr/bin/crun"            
INFO[0000] Found CNI network podman (type=bridge) at /var/home/ramesh/.config/cni/net.d/87-podman.conflist 
DEBU[0000] Default CNI network name podman is unchangeable 
INFO[0000] Setting parallel job count to 37             
DEBU[0000] Called inspect.PersistentPostRunE(podman --log-level debug inspect --type container fedora-toolbox-35 --format {{.State.Status}}) 
+ container_status=configured
+ container_exists=0
+ '[' 0 -gt 0 ']'
+ '[' configured '!=' running ']'
++ date +%FT%T.%N%:z
+ log_timestamp=2021-12-22T11:06:52.400520143-05:00
+ podman --log-level debug start fedora-toolbox-35
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called start.PersistentPreRunE(podman --log-level debug start fedora-toolbox-35) 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Merged system config "/usr/share/containers/containers.conf" 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /var/home/ramesh/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Overriding graph root "/var/home/ramesh/.local/share/containers/storage" with "/home/ramesh/.local/share/containers/storage" from database 
DEBU[0000] Overriding static dir "/var/home/ramesh/.local/share/containers/storage/libpod" with "/home/ramesh/.local/share/containers/storage/libpod" from database 
DEBU[0000] Overriding volume path "/var/home/ramesh/.local/share/containers/storage/volumes" with "/home/ramesh/.local/share/containers/storage/volumes" from database 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/ramesh/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1000/containers     
DEBU[0000] Using static dir /home/ramesh/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /home/ramesh/.local/share/containers/storage/volumes 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Cached value indicated that metacopy is not being used 
DEBU[0000] Cached value indicated that native-diff is usable 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false 
DEBU[0000] Initializing event backend journald          
DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
DEBU[0000] configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument 
DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] Using OCI runtime "/usr/bin/crun"            
INFO[0000] Found CNI network podman (type=bridge) at /var/home/ramesh/.config/cni/net.d/87-podman.conflist 
DEBU[0000] Default CNI network name podman is unchangeable 
INFO[0000] Setting parallel job count to 37             
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Cached value indicated that metacopy is not being used 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false 
DEBU[0000] overlay: mount_data=,lowerdir=/home/ramesh/.local/share/containers/storage/overlay/l/IWK2RKP3WDDIE3CLHUC2RGOZD3:/home/ramesh/.local/share/containers/storage/overlay/l/QDFAUJIC76SESXQI4VF3EVKZ2U,upperdir=/home/ramesh/.local/share/containers/storage/overlay/c887962cbd5ea52ebd68ad5c5a521fbc28c7d44f64af001373aab6b8cebc2903/diff,workdir=/home/ramesh/.local/share/containers/storage/overlay/c887962cbd5ea52ebd68ad5c5a521fbc28c7d44f64af001373aab6b8cebc2903/work,userxattr 
DEBU[0000] mounted container "4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974" at "/home/ramesh/.local/share/containers/storage/overlay/c887962cbd5ea52ebd68ad5c5a521fbc28c7d44f64af001373aab6b8cebc2903/merged" 
DEBU[0000] Created root filesystem for container 4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974 at /var/home/ramesh/.local/share/containers/storage/overlay/c887962cbd5ea52ebd68ad5c5a521fbc28c7d44f64af001373aab6b8cebc2903/merged 
DEBU[0000] network configuration does not support host.containers.internal address 
DEBU[0000] Not modifying container 4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974 /etc/passwd 
DEBU[0000] Not modifying container 4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974 /etc/group 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode subscription 
INFO[0000] User mount overriding libpod mount at "/etc/resolv.conf" 
INFO[0000] User mount overriding libpod mount at "/etc/hosts" 
DEBU[0000] Setting CGroups for container 4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974 to user.slice:libpod:4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974 
DEBU[0000] set root propagation to "rslave"             
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d 
DEBU[0000] Workdir "/" resolved to host path "/var/home/ramesh/.local/share/containers/storage/overlay/c887962cbd5ea52ebd68ad5c5a521fbc28c7d44f64af001373aab6b8cebc2903/merged" 
DEBU[0000] Created OCI spec for container 4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974 at /home/ramesh/.local/share/containers/storage/overlay-containers/4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974/userdata/config.json 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c 4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974 -u 4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974 -r /usr/bin/crun -b /home/ramesh/.local/share/containers/storage/overlay-containers/4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974/userdata -p /run/user/1000/containers/overlay-containers/4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974/userdata/pidfile -n fedora-toolbox-35 --exit-dir /run/user/1000/libpod/tmp/exits --full-attach -s -l journald --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/overlay-containers/4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/ramesh/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974]"
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

DEBU[0000] Received: 62850                              
INFO[0000] Got Conmon PID as 62847                      
DEBU[0000] Created container 4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974 in OCI runtime 
DEBU[0000] Starting container 4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974 with command [/usr/bin/entrypoint -v --name ramesh --user 1000 --group 1000 --home /home/ramesh] 
DEBU[0000] Started container 4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974 
DEBU[0000] Called start.PersistentPostRunE(podman --log-level debug start fedora-toolbox-35) 
+ printf 'Starting container %s\n' fedora-toolbox-35
Starting container fedora-toolbox-35
+ printf 'run this command to follow along:\n'
run this command to follow along:
+ printf '\t%s logs -f %s\n' 'podman --log-level debug' fedora-toolbox-35
        podman --log-level debug logs -f fedora-toolbox-35
+ :
++ podman --log-level debug logs -t --since 2021-12-22T11:06:52.400520143-05:00 fedora-toolbox-35
+ container_manager_log='2021-12-22T11:06:52.467681000-05:00 /usr/bin/mount
2021-12-22T11:06:52.467744000-05:00 /usr/bin/passwd
2021-12-22T11:06:52.467804000-05:00 /usr/bin/sudo
2021-12-22T11:06:52.468089000-05:00 /usr/sbin/useradd
2021-12-22T11:06:52.468155000-05:00 /usr/sbin/usermod
2021-12-22T11:06:52.468220000-05:00 /usr/bin/bash'
+ case "${container_manager_log}" in
+ printf .
.+ sleep 1
+ :
++ podman --log-level debug logs -t --since 2021-12-22T11:06:52.400520143-05:00 fedora-toolbox-35
+ container_manager_log='2021-12-22T11:06:52.467681000-05:00 /usr/bin/mount
2021-12-22T11:06:52.467744000-05:00 /usr/bin/passwd
2021-12-22T11:06:52.467804000-05:00 /usr/bin/sudo
2021-12-22T11:06:52.468089000-05:00 /usr/sbin/useradd
2021-12-22T11:06:52.468155000-05:00 /usr/sbin/usermod
2021-12-22T11:06:52.468220000-05:00 /usr/bin/bash
2021-12-22T11:06:52.670083000-05:00 Removing password for user ramesh.
2021-12-22T11:06:52.670195000-05:00 passwd: Success
2021-12-22T11:06:52.713965000-05:00 Removing password for user root.
2021-12-22T11:06:52.714073000-05:00 passwd: Success
2021-12-22T11:06:52.720632000-05:00 container_setup_done'
+ case "${container_manager_log}" in
+ break
+ printf '\ndone!\n'

done!
+ podman --log-level debug logs -t --since 2021-12-22T11:06:52.400520143-05:00 fedora-toolbox-35
+ grep Warning
+ :
++ generate_command
++ result_command='podman --log-level debug exec'
++ result_command='podman --log-level debug exec
                --user=ramesh'
++ '[' 0 -eq 0 ']'
++ result_command='podman --log-level debug exec
                --user=ramesh
                        --interactive
                        --tty'
+++ command -v distrobox-enter
++ result_command='podman --log-level debug exec
                --user=ramesh
                        --interactive
                        --tty
                --workdir=/var/home/ramesh
                --env=DISTROBOX_ENTER_PATH=/home/ramesh/.local/bin/distrobox-enter'
++ set +o xtrace
++ result_command='podman --log-level debug exec
                --user=ramesh
                        --interactive
                        --tty
                --workdir=/var/home/ramesh
                --env=DISTROBOX_ENTER_PATH=/home/ramesh/.local/bin/distrobox-enter --env="SHELL=/usr/bin/zsh" --env="SESSION_MANAGER=local/unix:@/tmp/.ICE-unix/1448,unix/unix:/tmp/.ICE-unix/1448" --env="WINDOWID=75497479" --env="COLORTERM=truecolor" --env="XDG_CONFIG_DIRS=/home/ramesh/.config/kdedefaults:/etc/xdg:/usr/share/kde-settings/kde-profile/default/xdg" --env="XDG_SESSION_PATH=/org/freedesktop/DisplayManager/Session1" --env="HISTCONTROL=ignoredups" --env="XDG_MENU_PREFIX=kf5-" --env="HOSTNAME=rkrishna-p1" --env="HISTSIZE=1000" --env="LANGUAGE=" --env="SSH_AUTH_SOCK=/tmp/ssh-XXXXXXAVFIDP/agent.1371" --env="SHELL_SESSION_ID=c956d6cb4778449faf745a420ff0b2d6" --env="DESKTOP_SESSION=/usr/share/xsessions/plasmax11" --env="SSH_AGENT_PID=1405" --env="GTK_RC_FILES=/etc/gtk/gtkrc:/home/ramesh/.gtkrc:/home/ramesh/.config/gtkrc" --env="GDK_CORE_DEVICE_EVENTS=1" --env="XCURSOR_SIZE=24" --env="EDITOR=nano" --env="XDG_SEAT=seat0" --env="PWD=/var/home/ramesh" --env="XDG_SESSION_DESKTOP=KDE" --env="LOGNAME=ramesh" --env="XDG_SESSION_TYPE=x11" --env="SYSTEMD_EXEC_PID=1491" --env="OMF_PATH=/home/ramesh/.local/share/omf" --env="XAUTHORITY=/run/user/1000/xauth_KRtVxu" --env="GTK2_RC_FILES=/etc/gtk-2.0/gtkrc:/home/ramesh/.gtkrc-2.0:/home/ramesh/.config/gtkrc-2.0" --env="HOME=/home/ramesh" --env="LANG=en_US.UTF-8" --env="XDG_CURRENT_DESKTOP=KDE" --env="KONSOLE_DBUS_SERVICE=:1.742" --env="KONSOLE_DBUS_SESSION=/Sessions/1" --env="PROFILEHOME=" --env="XDG_SEAT_PATH=/org/freedesktop/DisplayManager/Seat0" --env="INVOCATION_ID=70cfb3ce99564a01b3c09e4a80e2c731" --env="KONSOLE_VERSION=210803" --env="XZ_OPT=-T0" --env="MANAGERPID=1342" --env="KDE_SESSION_UID=1000" --env="XDG_SESSION_CLASS=user" --env="TERM=xterm-256color" --env="USER=ramesh" --env="COLORFGBG=15;0" --env="KDE_SESSION_VERSION=5" --env="PAM_KWALLET5_LOGIN=/run/user/1000/kwallet5.socket" --env="DISPLAY=:0" --env="SHLVL=1" --env="XDG_VTNR=2" --env="XDG_SESSION_ID=2" --env="XDG_RUNTIME_DIR=/run/user/1000" --env="KDEDIRS=/usr" 
--env="QT_AUTO_SCREEN_SCALE_FACTOR=0" --env="JOURNAL_STREAM=8:35909" --env="XCURSOR_THEME=breeze_cursors" --env="XDG_DATA_DIRS=/home/ramesh/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share:/usr/share" --env="KDE_FULL_SESSION=true" --env="PATH=/usr/local/bin:/usr/bin:/bin:/home/ramesh/bin:/usr/local/sbin:/usr/sbin:/var/opt/CustomScripts:/sbin:/home/ramesh/.local/bin" --env="DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus" --env="KDE_APPLICATIONS_AS_SCOPE=1" --env="MAIL=/var/spool/mail/ramesh" --env="OMF_CONFIG=/home/ramesh/.config/omf" --env="OLDPWD=/home/ramesh" --env="KONSOLE_DBUS_WINDOW=/Windows/1" --env="_=/usr/bin/printenv" --env="PATH=/usr/local/bin:/usr/bin:/bin:/home/ramesh/bin:/usr/local/sbin:/usr/sbin:/var/opt/CustomScripts:/sbin:/home/ramesh/.local/bin" fedora-toolbox-35 bash -l'
++ printf %s 'podman --log-level debug exec
                --user=ramesh
                        --interactive
                        --tty
                --workdir=/var/home/ramesh
                --env=DISTROBOX_ENTER_PATH=/home/ramesh/.local/bin/distrobox-enter --env="SHELL=/usr/bin/zsh" --env="SESSION_MANAGER=local/unix:@/tmp/.ICE-unix/1448,unix/unix:/tmp/.ICE-unix/1448" --env="WINDOWID=75497479" --env="COLORTERM=truecolor" --env="XDG_CONFIG_DIRS=/home/ramesh/.config/kdedefaults:/etc/xdg:/usr/share/kde-settings/kde-profile/default/xdg" --env="XDG_SESSION_PATH=/org/freedesktop/DisplayManager/Session1" --env="HISTCONTROL=ignoredups" --env="XDG_MENU_PREFIX=kf5-" --env="HOSTNAME=rkrishna-p1" --env="HISTSIZE=1000" --env="LANGUAGE=" --env="SSH_AUTH_SOCK=/tmp/ssh-XXXXXXAVFIDP/agent.1371" --env="SHELL_SESSION_ID=c956d6cb4778449faf745a420ff0b2d6" --env="DESKTOP_SESSION=/usr/share/xsessions/plasmax11" --env="SSH_AGENT_PID=1405" --env="GTK_RC_FILES=/etc/gtk/gtkrc:/home/ramesh/.gtkrc:/home/ramesh/.config/gtkrc" --env="GDK_CORE_DEVICE_EVENTS=1" --env="XCURSOR_SIZE=24" --env="EDITOR=nano" --env="XDG_SEAT=seat0" --env="PWD=/var/home/ramesh" --env="XDG_SESSION_DESKTOP=KDE" --env="LOGNAME=ramesh" --env="XDG_SESSION_TYPE=x11" --env="SYSTEMD_EXEC_PID=1491" --env="OMF_PATH=/home/ramesh/.local/share/omf" --env="XAUTHORITY=/run/user/1000/xauth_KRtVxu" --env="GTK2_RC_FILES=/etc/gtk-2.0/gtkrc:/home/ramesh/.gtkrc-2.0:/home/ramesh/.config/gtkrc-2.0" --env="HOME=/home/ramesh" --env="LANG=en_US.UTF-8" --env="XDG_CURRENT_DESKTOP=KDE" --env="KONSOLE_DBUS_SERVICE=:1.742" --env="KONSOLE_DBUS_SESSION=/Sessions/1" --env="PROFILEHOME=" --env="XDG_SEAT_PATH=/org/freedesktop/DisplayManager/Seat0" --env="INVOCATION_ID=70cfb3ce99564a01b3c09e4a80e2c731" --env="KONSOLE_VERSION=210803" --env="XZ_OPT=-T0" --env="MANAGERPID=1342" --env="KDE_SESSION_UID=1000" --env="XDG_SESSION_CLASS=user" --env="TERM=xterm-256color" --env="USER=ramesh" --env="COLORFGBG=15;0" --env="KDE_SESSION_VERSION=5" --env="PAM_KWALLET5_LOGIN=/run/user/1000/kwallet5.socket" --env="DISPLAY=:0" --env="SHLVL=1" --env="XDG_VTNR=2" --env="XDG_SESSION_ID=2" --env="XDG_RUNTIME_DIR=/run/user/1000" --env="KDEDIRS=/usr" 
--env="QT_AUTO_SCREEN_SCALE_FACTOR=0" --env="JOURNAL_STREAM=8:35909" --env="XCURSOR_THEME=breeze_cursors" --env="XDG_DATA_DIRS=/home/ramesh/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share:/usr/share" --env="KDE_FULL_SESSION=true" --env="PATH=/usr/local/bin:/usr/bin:/bin:/home/ramesh/bin:/usr/local/sbin:/usr/sbin:/var/opt/CustomScripts:/sbin:/home/ramesh/.local/bin" --env="DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus" --env="KDE_APPLICATIONS_AS_SCOPE=1" --env="MAIL=/var/spool/mail/ramesh" --env="OMF_CONFIG=/home/ramesh/.config/omf" --env="OLDPWD=/home/ramesh" --env="KONSOLE_DBUS_WINDOW=/Windows/1" --env="_=/usr/bin/printenv" --env="PATH=/usr/local/bin:/usr/bin:/bin:/home/ramesh/bin:/usr/local/sbin:/usr/sbin:/var/opt/CustomScripts:/sbin:/home/ramesh/.local/bin" fedora-toolbox-35 bash -l'
+ cmd='podman --log-level debug exec
                --user=ramesh
                        --interactive
                        --tty
                --workdir=/var/home/ramesh
                --env=DISTROBOX_ENTER_PATH=/home/ramesh/.local/bin/distrobox-enter --env="SHELL=/usr/bin/zsh" --env="SESSION_MANAGER=local/unix:@/tmp/.ICE-unix/1448,unix/unix:/tmp/.ICE-unix/1448" --env="WINDOWID=75497479" --env="COLORTERM=truecolor" --env="XDG_CONFIG_DIRS=/home/ramesh/.config/kdedefaults:/etc/xdg:/usr/share/kde-settings/kde-profile/default/xdg" --env="XDG_SESSION_PATH=/org/freedesktop/DisplayManager/Session1" --env="HISTCONTROL=ignoredups" --env="XDG_MENU_PREFIX=kf5-" --env="HOSTNAME=rkrishna-p1" --env="HISTSIZE=1000" --env="LANGUAGE=" --env="SSH_AUTH_SOCK=/tmp/ssh-XXXXXXAVFIDP/agent.1371" --env="SHELL_SESSION_ID=c956d6cb4778449faf745a420ff0b2d6" --env="DESKTOP_SESSION=/usr/share/xsessions/plasmax11" --env="SSH_AGENT_PID=1405" --env="GTK_RC_FILES=/etc/gtk/gtkrc:/home/ramesh/.gtkrc:/home/ramesh/.config/gtkrc" --env="GDK_CORE_DEVICE_EVENTS=1" --env="XCURSOR_SIZE=24" --env="EDITOR=nano" --env="XDG_SEAT=seat0" --env="PWD=/var/home/ramesh" --env="XDG_SESSION_DESKTOP=KDE" --env="LOGNAME=ramesh" --env="XDG_SESSION_TYPE=x11" --env="SYSTEMD_EXEC_PID=1491" --env="OMF_PATH=/home/ramesh/.local/share/omf" --env="XAUTHORITY=/run/user/1000/xauth_KRtVxu" --env="GTK2_RC_FILES=/etc/gtk-2.0/gtkrc:/home/ramesh/.gtkrc-2.0:/home/ramesh/.config/gtkrc-2.0" --env="HOME=/home/ramesh" --env="LANG=en_US.UTF-8" --env="XDG_CURRENT_DESKTOP=KDE" --env="KONSOLE_DBUS_SERVICE=:1.742" --env="KONSOLE_DBUS_SESSION=/Sessions/1" --env="PROFILEHOME=" --env="XDG_SEAT_PATH=/org/freedesktop/DisplayManager/Seat0" --env="INVOCATION_ID=70cfb3ce99564a01b3c09e4a80e2c731" --env="KONSOLE_VERSION=210803" --env="XZ_OPT=-T0" --env="MANAGERPID=1342" --env="KDE_SESSION_UID=1000" --env="XDG_SESSION_CLASS=user" --env="TERM=xterm-256color" --env="USER=ramesh" --env="COLORFGBG=15;0" --env="KDE_SESSION_VERSION=5" --env="PAM_KWALLET5_LOGIN=/run/user/1000/kwallet5.socket" --env="DISPLAY=:0" --env="SHLVL=1" --env="XDG_VTNR=2" --env="XDG_SESSION_ID=2" --env="XDG_RUNTIME_DIR=/run/user/1000" --env="KDEDIRS=/usr" 
--env="QT_AUTO_SCREEN_SCALE_FACTOR=0" --env="JOURNAL_STREAM=8:35909" --env="XCURSOR_THEME=breeze_cursors" --env="XDG_DATA_DIRS=/home/ramesh/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share:/usr/share" --env="KDE_FULL_SESSION=true" --env="PATH=/usr/local/bin:/usr/bin:/bin:/home/ramesh/bin:/usr/local/sbin:/usr/sbin:/var/opt/CustomScripts:/sbin:/home/ramesh/.local/bin" --env="DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus" --env="KDE_APPLICATIONS_AS_SCOPE=1" --env="MAIL=/var/spool/mail/ramesh" --env="OMF_CONFIG=/home/ramesh/.config/omf" --env="OLDPWD=/home/ramesh" --env="KONSOLE_DBUS_WINDOW=/Windows/1" --env="_=/usr/bin/printenv" --env="PATH=/usr/local/bin:/usr/bin:/bin:/home/ramesh/bin:/usr/local/sbin:/usr/sbin:/var/opt/CustomScripts:/sbin:/home/ramesh/.local/bin" fedora-toolbox-35 bash -l'
+ eval podman --log-level debug exec --user=ramesh --interactive --tty --workdir=/var/home/ramesh --env=DISTROBOX_ENTER_PATH=/home/ramesh/.local/bin/distrobox-enter '--env="SHELL=/usr/bin/zsh"' '--env="SESSION_MANAGER=local/unix:@/tmp/.ICE-unix/1448,unix/unix:/tmp/.ICE-unix/1448"' '--env="WINDOWID=75497479"' '--env="COLORTERM=truecolor"' '--env="XDG_CONFIG_DIRS=/home/ramesh/.config/kdedefaults:/etc/xdg:/usr/share/kde-settings/kde-profile/default/xdg"' '--env="XDG_SESSION_PATH=/org/freedesktop/DisplayManager/Session1"' '--env="HISTCONTROL=ignoredups"' '--env="XDG_MENU_PREFIX=kf5-"' '--env="HOSTNAME=rkrishna-p1"' '--env="HISTSIZE=1000"' '--env="LANGUAGE="' '--env="SSH_AUTH_SOCK=/tmp/ssh-XXXXXXAVFIDP/agent.1371"' '--env="SHELL_SESSION_ID=c956d6cb4778449faf745a420ff0b2d6"' '--env="DESKTOP_SESSION=/usr/share/xsessions/plasmax11"' '--env="SSH_AGENT_PID=1405"' '--env="GTK_RC_FILES=/etc/gtk/gtkrc:/home/ramesh/.gtkrc:/home/ramesh/.config/gtkrc"' '--env="GDK_CORE_DEVICE_EVENTS=1"' '--env="XCURSOR_SIZE=24"' '--env="EDITOR=nano"' '--env="XDG_SEAT=seat0"' '--env="PWD=/var/home/ramesh"' '--env="XDG_SESSION_DESKTOP=KDE"' '--env="LOGNAME=ramesh"' '--env="XDG_SESSION_TYPE=x11"' '--env="SYSTEMD_EXEC_PID=1491"' '--env="OMF_PATH=/home/ramesh/.local/share/omf"' '--env="XAUTHORITY=/run/user/1000/xauth_KRtVxu"' '--env="GTK2_RC_FILES=/etc/gtk-2.0/gtkrc:/home/ramesh/.gtkrc-2.0:/home/ramesh/.config/gtkrc-2.0"' '--env="HOME=/home/ramesh"' '--env="LANG=en_US.UTF-8"' '--env="XDG_CURRENT_DESKTOP=KDE"' '--env="KONSOLE_DBUS_SERVICE=:1.742"' '--env="KONSOLE_DBUS_SESSION=/Sessions/1"' '--env="PROFILEHOME="' '--env="XDG_SEAT_PATH=/org/freedesktop/DisplayManager/Seat0"' '--env="INVOCATION_ID=70cfb3ce99564a01b3c09e4a80e2c731"' '--env="KONSOLE_VERSION=210803"' '--env="XZ_OPT=-T0"' '--env="MANAGERPID=1342"' '--env="KDE_SESSION_UID=1000"' '--env="XDG_SESSION_CLASS=user"' '--env="TERM=xterm-256color"' '--env="USER=ramesh"' '--env="COLORFGBG=15;0"' '--env="KDE_SESSION_VERSION=5"' 
'--env="PAM_KWALLET5_LOGIN=/run/user/1000/kwallet5.socket"' '--env="DISPLAY=:0"' '--env="SHLVL=1"' '--env="XDG_VTNR=2"' '--env="XDG_SESSION_ID=2"' '--env="XDG_RUNTIME_DIR=/run/user/1000"' '--env="KDEDIRS=/usr"' '--env="QT_AUTO_SCREEN_SCALE_FACTOR=0"' '--env="JOURNAL_STREAM=8:35909"' '--env="XCURSOR_THEME=breeze_cursors"' '--env="XDG_DATA_DIRS=/home/ramesh/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share:/usr/share"' '--env="KDE_FULL_SESSION=true"' '--env="PATH=/usr/local/bin:/usr/bin:/bin:/home/ramesh/bin:/usr/local/sbin:/usr/sbin:/var/opt/CustomScripts:/sbin:/home/ramesh/.local/bin"' '--env="DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus"' '--env="KDE_APPLICATIONS_AS_SCOPE=1"' '--env="MAIL=/var/spool/mail/ramesh"' '--env="OMF_CONFIG=/home/ramesh/.config/omf"' '--env="OLDPWD=/home/ramesh"' '--env="KONSOLE_DBUS_WINDOW=/Windows/1"' '--env="_=/usr/bin/printenv"' '--env="PATH=/usr/local/bin:/usr/bin:/bin:/home/ramesh/bin:/usr/local/sbin:/usr/sbin:/var/opt/CustomScripts:/sbin:/home/ramesh/.local/bin"' fedora-toolbox-35 bash -l
++ podman --log-level debug exec --user=ramesh --interactive --tty --workdir=/var/home/ramesh --env=DISTROBOX_ENTER_PATH=/home/ramesh/.local/bin/distrobox-enter --env=SHELL=/usr/bin/zsh --env=SESSION_MANAGER=local/unix:@/tmp/.ICE-unix/1448,unix/unix:/tmp/.ICE-unix/1448 --env=WINDOWID=75497479 --env=COLORTERM=truecolor --env=XDG_CONFIG_DIRS=/home/ramesh/.config/kdedefaults:/etc/xdg:/usr/share/kde-settings/kde-profile/default/xdg --env=XDG_SESSION_PATH=/org/freedesktop/DisplayManager/Session1 --env=HISTCONTROL=ignoredups --env=XDG_MENU_PREFIX=kf5- --env=HOSTNAME=rkrishna-p1 --env=HISTSIZE=1000 --env=LANGUAGE= --env=SSH_AUTH_SOCK=/tmp/ssh-XXXXXXAVFIDP/agent.1371 --env=SHELL_SESSION_ID=c956d6cb4778449faf745a420ff0b2d6 --env=DESKTOP_SESSION=/usr/share/xsessions/plasmax11 --env=SSH_AGENT_PID=1405 --env=GTK_RC_FILES=/etc/gtk/gtkrc:/home/ramesh/.gtkrc:/home/ramesh/.config/gtkrc --env=GDK_CORE_DEVICE_EVENTS=1 --env=XCURSOR_SIZE=24 --env=EDITOR=nano --env=XDG_SEAT=seat0 --env=PWD=/var/home/ramesh --env=XDG_SESSION_DESKTOP=KDE --env=LOGNAME=ramesh --env=XDG_SESSION_TYPE=x11 --env=SYSTEMD_EXEC_PID=1491 --env=OMF_PATH=/home/ramesh/.local/share/omf --env=XAUTHORITY=/run/user/1000/xauth_KRtVxu --env=GTK2_RC_FILES=/etc/gtk-2.0/gtkrc:/home/ramesh/.gtkrc-2.0:/home/ramesh/.config/gtkrc-2.0 --env=HOME=/home/ramesh --env=LANG=en_US.UTF-8 --env=XDG_CURRENT_DESKTOP=KDE --env=KONSOLE_DBUS_SERVICE=:1.742 --env=KONSOLE_DBUS_SESSION=/Sessions/1 --env=PROFILEHOME= --env=XDG_SEAT_PATH=/org/freedesktop/DisplayManager/Seat0 --env=INVOCATION_ID=70cfb3ce99564a01b3c09e4a80e2c731 --env=KONSOLE_VERSION=210803 --env=XZ_OPT=-T0 --env=MANAGERPID=1342 --env=KDE_SESSION_UID=1000 --env=XDG_SESSION_CLASS=user --env=TERM=xterm-256color --env=USER=ramesh '--env=COLORFGBG=15;0' --env=KDE_SESSION_VERSION=5 --env=PAM_KWALLET5_LOGIN=/run/user/1000/kwallet5.socket --env=DISPLAY=:0 --env=SHLVL=1 --env=XDG_VTNR=2 --env=XDG_SESSION_ID=2 --env=XDG_RUNTIME_DIR=/run/user/1000 --env=KDEDIRS=/usr 
--env=QT_AUTO_SCREEN_SCALE_FACTOR=0 --env=JOURNAL_STREAM=8:35909 --env=XCURSOR_THEME=breeze_cursors --env=XDG_DATA_DIRS=/home/ramesh/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share:/usr/share --env=KDE_FULL_SESSION=true --env=PATH=/usr/local/bin:/usr/bin:/bin:/home/ramesh/bin:/usr/local/sbin:/usr/sbin:/var/opt/CustomScripts:/sbin:/home/ramesh/.local/bin --env=DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus --env=KDE_APPLICATIONS_AS_SCOPE=1 --env=MAIL=/var/spool/mail/ramesh --env=OMF_CONFIG=/home/ramesh/.config/omf --env=OLDPWD=/home/ramesh --env=KONSOLE_DBUS_WINDOW=/Windows/1 --env=_=/usr/bin/printenv --env=PATH=/usr/local/bin:/usr/bin:/bin:/home/ramesh/bin:/usr/local/sbin:/usr/sbin:/var/opt/CustomScripts:/sbin:/home/ramesh/.local/bin fedora-toolbox-35 bash -l
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called exec.PersistentPreRunE(podman --log-level debug exec --user=ramesh --interactive --tty --workdir=/var/home/ramesh --env=DISTROBOX_ENTER_PATH=/home/ramesh/.local/bin/distrobox-enter --env=SHELL=/usr/bin/zsh --env=SESSION_MANAGER=local/unix:@/tmp/.ICE-unix/1448,unix/unix:/tmp/.ICE-unix/1448 --env=WINDOWID=75497479 --env=COLORTERM=truecolor --env=XDG_CONFIG_DIRS=/home/ramesh/.config/kdedefaults:/etc/xdg:/usr/share/kde-settings/kde-profile/default/xdg --env=XDG_SESSION_PATH=/org/freedesktop/DisplayManager/Session1 --env=HISTCONTROL=ignoredups --env=XDG_MENU_PREFIX=kf5- --env=HOSTNAME=rkrishna-p1 --env=HISTSIZE=1000 --env=LANGUAGE= --env=SSH_AUTH_SOCK=/tmp/ssh-XXXXXXAVFIDP/agent.1371 --env=SHELL_SESSION_ID=c956d6cb4778449faf745a420ff0b2d6 --env=DESKTOP_SESSION=/usr/share/xsessions/plasmax11 --env=SSH_AGENT_PID=1405 --env=GTK_RC_FILES=/etc/gtk/gtkrc:/home/ramesh/.gtkrc:/home/ramesh/.config/gtkrc --env=GDK_CORE_DEVICE_EVENTS=1 --env=XCURSOR_SIZE=24 --env=EDITOR=nano --env=XDG_SEAT=seat0 --env=PWD=/var/home/ramesh --env=XDG_SESSION_DESKTOP=KDE --env=LOGNAME=ramesh --env=XDG_SESSION_TYPE=x11 --env=SYSTEMD_EXEC_PID=1491 --env=OMF_PATH=/home/ramesh/.local/share/omf --env=XAUTHORITY=/run/user/1000/xauth_KRtVxu --env=GTK2_RC_FILES=/etc/gtk-2.0/gtkrc:/home/ramesh/.gtkrc-2.0:/home/ramesh/.config/gtkrc-2.0 --env=HOME=/home/ramesh --env=LANG=en_US.UTF-8 --env=XDG_CURRENT_DESKTOP=KDE --env=KONSOLE_DBUS_SERVICE=:1.742 --env=KONSOLE_DBUS_SESSION=/Sessions/1 --env=PROFILEHOME= --env=XDG_SEAT_PATH=/org/freedesktop/DisplayManager/Seat0 --env=INVOCATION_ID=70cfb3ce99564a01b3c09e4a80e2c731 --env=KONSOLE_VERSION=210803 --env=XZ_OPT=-T0 --env=MANAGERPID=1342 --env=KDE_SESSION_UID=1000 --env=XDG_SESSION_CLASS=user --env=TERM=xterm-256color --env=USER=ramesh --env=COLORFGBG=15;0 --env=KDE_SESSION_VERSION=5 --env=PAM_KWALLET5_LOGIN=/run/user/1000/kwallet5.socket --env=DISPLAY=:0 --env=SHLVL=1 --env=XDG_VTNR=2 --env=XDG_SESSION_ID=2 --env=XDG_RUNTIME_DIR=/run/user/1000 
--env=KDEDIRS=/usr --env=QT_AUTO_SCREEN_SCALE_FACTOR=0 --env=JOURNAL_STREAM=8:35909 --env=XCURSOR_THEME=breeze_cursors --env=XDG_DATA_DIRS=/home/ramesh/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share:/usr/share --env=KDE_FULL_SESSION=true --env=PATH=/usr/local/bin:/usr/bin:/bin:/home/ramesh/bin:/usr/local/sbin:/usr/sbin:/var/opt/CustomScripts:/sbin:/home/ramesh/.local/bin --env=DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus --env=KDE_APPLICATIONS_AS_SCOPE=1 --env=MAIL=/var/spool/mail/ramesh --env=OMF_CONFIG=/home/ramesh/.config/omf --env=OLDPWD=/home/ramesh --env=KONSOLE_DBUS_WINDOW=/Windows/1 --env=_=/usr/bin/printenv --env=PATH=/usr/local/bin:/usr/bin:/bin:/home/ramesh/bin:/usr/local/sbin:/usr/sbin:/var/opt/CustomScripts:/sbin:/home/ramesh/.local/bin fedora-toolbox-35 bash -l) 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Merged system config "/usr/share/containers/containers.conf" 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /var/home/ramesh/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Overriding graph root "/var/home/ramesh/.local/share/containers/storage" with "/home/ramesh/.local/share/containers/storage" from database 
DEBU[0000] Overriding static dir "/var/home/ramesh/.local/share/containers/storage/libpod" with "/home/ramesh/.local/share/containers/storage/libpod" from database 
DEBU[0000] Overriding volume path "/var/home/ramesh/.local/share/containers/storage/volumes" with "/home/ramesh/.local/share/containers/storage/volumes" from database 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/ramesh/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1000/containers     
DEBU[0000] Using static dir /home/ramesh/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /home/ramesh/.local/share/containers/storage/volumes 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Cached value indicated that metacopy is not being used 
DEBU[0000] Cached value indicated that native-diff is usable 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false 
DEBU[0000] Initializing event backend journald          
DEBU[0000] configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument 
DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
DEBU[0000] Using OCI runtime "/usr/bin/crun"            
INFO[0000] Found CNI network podman (type=bridge) at /var/home/ramesh/.config/cni/net.d/87-podman.conflist 
DEBU[0000] Default CNI network name podman is unchangeable 
INFO[0000] Setting parallel job count to 37             
DEBU[0000] Handling terminal attach                     
INFO[0000] Created exec session 2baaa36fde7cb1490e8c99cf21263fdfd202e83dca5ab885fb0d03a93092b847 in container 4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974 
INFO[0000] Going to start container 4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974 exec session 2baaa36fde7cb1490e8c99cf21263fdfd202e83dca5ab885fb0d03a93092b847 and attach to it 
DEBU[0000] Sending resize events to exec session 2baaa36fde7cb1490e8c99cf21263fdfd202e83dca5ab885fb0d03a93092b847 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c 4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974 -u 2baaa36fde7cb1490e8c99cf21263fdfd202e83dca5ab885fb0d03a93092b847 -r /usr/bin/crun -b /home/ramesh/.local/share/containers/storage/overlay-containers/4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974/userdata/2baaa36fde7cb1490e8c99cf21263fdfd202e83dca5ab885fb0d03a93092b847 -p /home/ramesh/.local/share/containers/storage/overlay-containers/4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974/userdata/2baaa36fde7cb1490e8c99cf21263fdfd202e83dca5ab885fb0d03a93092b847/exec_pid -n fedora-toolbox-35 --exit-dir /home/ramesh/.local/share/containers/storage/overlay-containers/4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974/userdata/2baaa36fde7cb1490e8c99cf21263fdfd202e83dca5ab885fb0d03a93092b847/exit --full-attach -s -l none --log-level debug --syslog -t -i -e --exec-attach --exec-process-spec /home/ramesh/.local/share/containers/storage/overlay-containers/4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974/userdata/2baaa36fde7cb1490e8c99cf21263fdfd202e83dca5ab885fb0d03a93092b847/exec-process-602857697 --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/ramesh/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --exec --exit-command-arg 2baaa36fde7cb1490e8c99cf21263fdfd202e83dca5ab885fb0d03a93092b847 --exit-command-arg 
4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974]"
DEBU[0000] Attaching to container 4b361b6c560541938ed55d44a4165332544b176d44be008a8e0c93d8276f9974 exec session 2baaa36fde7cb1490e8c99cf21263fdfd202e83dca5ab885fb0d03a93092b847 
DEBU[0000] Received: 0                                  
DEBU[0000] Received: -1                                 
Error: OCI runtime error: [conmon:d]: exec with attach is waiting for start message from parent
[conmon:d]: exec with attach got start message from parent
+ '[' 255 -ne 0 ']'
+ printf '\nAn error occurred\n'

An error occurred

$ podman ps
podman ps
CONTAINER ID  IMAGE                                         COMMAND               CREATED         STATUS            PORTS       NAMES
4335314437a4  registry.fedoraproject.org/fedora-toolbox:35  /usr/bin/entrypoi...  13 minutes ago  Up 7 minutes ago              fedora-toolbox-35

$ podman exec -it fedora-toolbox-35 bash
[root@fedora-toolbox-35 /]# ls
README.md  bin  boot  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

handle custom user shell

The toolbox_enter script already exports the user's environment to the container, including $SHELL.

The default command should be that shell, not a hardcoded bash.

It should also use this information in toolbox_init to install the shell and set it as the user's default.
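A minimal sketch of the idea, assuming the forwarded $SHELL value reaches the container setup script (host_shell is a hypothetical stand-in for that value, and container_user_name is illustrative):

```shell
# Sketch: derive the shell package name from the forwarded login shell,
# instead of hardcoding bash. "host_shell" stands in for the forwarded $SHELL.
host_shell="/usr/bin/zsh"
shell_pkg="$(basename "${host_shell}")"
printf 'login shell: %s (package to install: %s)\n' "${host_shell}" "${shell_pkg}"
# The init step could then install "${shell_pkg}" and set it as default, e.g.:
#   usermod --shell "${host_shell}" "${container_user_name}"
```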

Use something other than hostname to indicate container name in command line prompt

distrobox runs podman/docker with --hostname set to the name of the container, which in turn changes the contents of /etc/hostname to the container's name. But /etc/hostname is supposed to hold the host's name on the network and affects many system functions (see man hostname). Since the container shares the host's network, both the container and the host should have the same hostname.

On Debian there is a precedent for indicating the chroot in the command-line prompt (chroot being a precursor to containers). To my knowledge there isn't an equivalent on Fedora-based distros. Not knowing of an alternative, I started using osvirtalias in cnest.

I describe the situation in more detail here:
https://github.com/castedo/cnest/tree/master/prompt

So I suggest doing similarly: set osvirtalias and/or debian_chroot to the name of the container, and let the distro and container configuration determine the command-line prompt however they are supposed to. That way the hostname is not repurposed for something other than what a hostname is supposed to mean.

If you come across an alternative to osvirtalias, please let me know. I don't want to start a non-standard precedent, but I do want to start a precedent other than debian_chroot. If it catches on, Debian distros would probably want to use a name without chroot in it anyway.
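For illustration, Debian's stock ~/.bashrc already implements this convention; a minimal sketch, assuming the entrypoint writes the container name to /etc/debian_chroot instead of overriding the hostname (the container name here is hypothetical):

```shell
# Sketch of the Debian convention: the prompt is prefixed with the
# debian_chroot value when it is set, leaving the hostname untouched.
debian_chroot="fedora-toolbox-35"   # normally: $(cat /etc/debian_chroot)
# Debian's default ~/.bashrc builds the prompt like this:
PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
# The prefix the user would see:
printf '%s\n' "${debian_chroot:+($debian_chroot)}"   # prints "(fedora-toolbox-35)"
```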

[Improvement] Set a fqdn for container

Right now we're using just a simple hostname for the containers; we should set up a fully qualified one, using either

container-name.container-id

Or

container-name.host-hostname

As suggested in #62

This could be useful if someone sets up podman to expose DNS externally, so that containers can be reached by hostname.
The second option works a bit better in situations like this, as it's more predictable than container IDs.
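A sketch of the second option, assuming the container name is known at creation time (the names below are illustrative):

```shell
# Sketch: derive a fully qualified container hostname of the form
# container-name.host-hostname.
container_name="fedora-toolbox-35"
host_hostname="$(uname -n)"
fqdn="${container_name}.${host_hostname}"
printf '%s\n' "${fqdn}"
# This could then be passed to the container manager at creation time, e.g.:
#   podman create --hostname "${fqdn}" ...
```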

[Feature] NixOS support

Hello!

I'm currently using the Arch Linux container and can't use sudo; it fails with the following error:

Files % /usr/bin/sudo su
sudo: PAM account management error: Authentication service cannot retrieve authentication info
sudo: a password is required

I tried running passwd, but I don't know my user's password inside the container either. While we're at it, is there a way to have the binaries in /usr/bin/ on my PATH so I don't have to type the full path every time?
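A possible workaround sketch, assuming the container's binaries live under /usr/bin and /usr/sbin: prepend those directories to PATH in the container shell's rc file (e.g. ~/.zshrc).

```shell
# Sketch: make the container's standard binary directories resolvable
# without typing full paths.
PATH="/usr/bin:/usr/sbin:${PATH}"
export PATH
printf '%s\n' "${PATH}"
```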

As for the log, here it is:

++ '[' 0 -ne 0 ']'
+ '[' '!' -f /run/.containerenv ']'
+ '[' -z 1001 ']'
+ '[' -z /home/atila ']'
+ '[' -z atila ']'
+ '[' -z 1001 ']'
++ basename /run/current-system/sw/bin/zsh
+ shell_pkg=zsh
+ command -v mount
/usr/sbin/mount
+ command -v passwd
/usr/sbin/passwd
+ command -v sudo
/usr/sbin/sudo
+ command -v useradd
/usr/sbin/useradd
+ command -v usermod
/usr/sbin/usermod
+ command -v zsh
/usr/sbin/zsh
+ HOST_MOUNTS_RO='/etc/machine-id /var/lib/flatpak /var/lib/systemd/coredump /var/log/journal'
+ for host_mount_ro in ${HOST_MOUNTS_RO}
+ mount_bind /run/host/etc/machine-id /etc/machine-id ro
+ source_dir=/run/host/etc/machine-id
+ target_dir=/etc/machine-id
+ mount_flags=ro
+ '[' -d /run/host/etc/machine-id ']'
+ '[' -f /run/host/etc/machine-id ']'
+ '[' -d /run/host/etc/machine-id ']'
+ '[' -f /run/host/etc/machine-id ']'
+ touch /etc/machine-id
+ '[' ro = '' ']'
+ mount_flags=rslave
+ mount --rbind -o rslave /run/host/etc/machine-id /etc/machine-id
+ return 0
+ for host_mount_ro in ${HOST_MOUNTS_RO}
+ mount_bind /run/host/var/lib/flatpak /var/lib/flatpak ro
+ source_dir=/run/host/var/lib/flatpak
+ target_dir=/var/lib/flatpak
+ mount_flags=ro
+ '[' -d /run/host/var/lib/flatpak ']'
+ '[' -d /run/host/var/lib/flatpak ']'
+ mkdir -p /var/lib/flatpak
+ '[' ro = '' ']'
+ mount_flags=rslave
+ mount --rbind -o rslave /run/host/var/lib/flatpak /var/lib/flatpak
+ return 0
+ for host_mount_ro in ${HOST_MOUNTS_RO}
+ mount_bind /run/host/var/lib/systemd/coredump /var/lib/systemd/coredump ro
+ source_dir=/run/host/var/lib/systemd/coredump
+ target_dir=/var/lib/systemd/coredump
+ mount_flags=ro
+ '[' -d /run/host/var/lib/systemd/coredump ']'
+ '[' -d /run/host/var/lib/systemd/coredump ']'
+ mkdir -p /var/lib/systemd/coredump
+ '[' ro = '' ']'
+ mount_flags=rslave
+ mount --rbind -o rslave /run/host/var/lib/systemd/coredump /var/lib/systemd/coredump
+ return 0
+ for host_mount_ro in ${HOST_MOUNTS_RO}
+ mount_bind /run/host/var/log/journal /var/log/journal ro
+ source_dir=/run/host/var/log/journal
+ target_dir=/var/log/journal
+ mount_flags=ro
+ '[' -d /run/host/var/log/journal ']'
+ '[' -d /run/host/var/log/journal ']'
+ mkdir -p /var/log/journal
+ '[' ro = '' ']'
+ mount_flags=rslave
+ mount --rbind -o rslave /run/host/var/log/journal /var/log/journal
+ return 0
+ HOST_MOUNTS='/media /run/media /run/udev/data /mnt /var/mnt /run/systemd/journal /run/libvirt /var/lib/libvirt'
+ for host_mount in ${HOST_MOUNTS}
+ mount_bind /run/host/media /media rw
+ source_dir=/run/host/media
+ target_dir=/media
+ mount_flags=rw
+ '[' -d /run/host/media ']'
+ '[' -f /run/host/media ']'
+ return 0
+ for host_mount in ${HOST_MOUNTS}
+ mount_bind /run/host/run/media /run/media rw
+ source_dir=/run/host/run/media
+ target_dir=/run/media
+ mount_flags=rw
+ '[' -d /run/host/run/media ']'
+ '[' -f /run/host/run/media ']'
+ return 0
+ for host_mount in ${HOST_MOUNTS}
+ mount_bind /run/host/run/udev/data /run/udev/data rw
+ source_dir=/run/host/run/udev/data
+ target_dir=/run/udev/data
+ mount_flags=rw
+ '[' -d /run/host/run/udev/data ']'
+ '[' -d /run/host/run/udev/data ']'
+ mkdir -p /run/udev/data
+ '[' rw = '' ']'
+ mount_flags=rslave
+ mount --rbind -o rslave /run/host/run/udev/data /run/udev/data
+ return 0
+ for host_mount in ${HOST_MOUNTS}
+ mount_bind /run/host/mnt /mnt rw
+ source_dir=/run/host/mnt
+ target_dir=/mnt
+ mount_flags=rw
+ '[' -d /run/host/mnt ']'
+ '[' -f /run/host/mnt ']'
+ return 0
+ for host_mount in ${HOST_MOUNTS}
+ mount_bind /run/host/var/mnt /var/mnt rw
+ source_dir=/run/host/var/mnt
+ target_dir=/var/mnt
+ mount_flags=rw
+ '[' -d /run/host/var/mnt ']'
+ '[' -f /run/host/var/mnt ']'
+ return 0
+ for host_mount in ${HOST_MOUNTS}
+ mount_bind /run/host/run/systemd/journal /run/systemd/journal rw
+ source_dir=/run/host/run/systemd/journal
+ target_dir=/run/systemd/journal
+ mount_flags=rw
+ '[' -d /run/host/run/systemd/journal ']'
+ '[' -d /run/host/run/systemd/journal ']'
+ mkdir -p /run/systemd/journal
+ '[' rw = '' ']'
+ mount_flags=rslave
+ mount --rbind -o rslave /run/host/run/systemd/journal /run/systemd/journal
+ return 0
+ for host_mount in ${HOST_MOUNTS}
+ mount_bind /run/host/run/libvirt /run/libvirt rw
+ source_dir=/run/host/run/libvirt
+ target_dir=/run/libvirt
+ mount_flags=rw
+ '[' -d /run/host/run/libvirt ']'
+ '[' -f /run/host/run/libvirt ']'
+ return 0
+ for host_mount in ${HOST_MOUNTS}
+ mount_bind /run/host/var/lib/libvirt /var/lib/libvirt rw
+ source_dir=/run/host/var/lib/libvirt
+ target_dir=/var/lib/libvirt
+ mount_flags=rw
+ '[' -d /run/host/var/lib/libvirt ']'
+ '[' -f /run/host/var/lib/libvirt ']'
+ return 0
+ '[' -d /usr/lib/rpm/macros.d/ ']'
+ grep -q 'Defaults !fqdn' /etc/sudoers
+ grep -q 'atila ALL = (root) NOPASSWD:ALL' /etc/sudoers
+ grep -q atila /etc/group
+ id atila
uid=1001(atila) gid=1001(users) groups=1(wheel),57(networkmanager),1001(users)
+ passwd --delete atila
passwd: password expiry information changed.
+ passwd --delete root
passwd: password expiry information changed.
+ mkdir -p /home/atila/.local/share/themes
+ mkdir -p /home/atila/.local/share/icons
+ chown 1001:1001 /home/atila/.local/share/themes
+ chown 1001:1001 /home/atila/.local/share/icons
+ mount_bind /run/host/usr/share/themes /home/atila/.local/share/themes rw
+ source_dir=/run/host/usr/share/themes
+ target_dir=/home/atila/.local/share/themes
+ mount_flags=rw
+ '[' -d /run/host/usr/share/themes ']'
+ '[' -f /run/host/usr/share/themes ']'
+ return 0
+ mount_bind /run/host/usr/share/icons /home/atila/.local/share/icons rw
+ source_dir=/run/host/usr/share/icons
+ target_dir=/home/atila/.local/share/icons
+ mount_flags=rw
+ '[' -d /run/host/usr/share/icons ']'
+ '[' -f /run/host/usr/share/icons ']'
+ return 0
+ printf 'container_setup_done\n'
container_setup_done
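
The mount_bind helper that dominates the trace above can be reconstructed roughly as follows. This is a sketch inferred solely from the xtrace output, not the upstream distrobox source; note the quirk visible in the trace where the requested ro/rw flag ends up replaced by rslave before mounting.

```shell
# Reconstruction sketch of mount_bind, inferred from the xtrace above.
# Usage: mount_bind SOURCE TARGET FLAGS (needs root to actually mount).
mount_bind() {
	source_dir="$1"
	target_dir="$2"
	mount_flags="$3"

	# only act if the host path was actually forwarded under /run/host
	if [ -d "${source_dir}" ] || [ -f "${source_dir}" ]; then
		# create a mount point of the matching type (dir vs file)
		if [ -d "${source_dir}" ]; then
			mkdir -p "${target_dir}"
		elif [ -f "${source_dir}" ]; then
			touch "${target_dir}"
		fi
		# the trace never enters this branch, so the default is a guess
		if [ "${mount_flags}" = "" ]; then
			mount_flags="rw"
		fi
		# the trace then shows the requested flag being superseded by rslave
		mount_flags="rslave"
		mount --rbind -o "${mount_flags}" "${source_dir}" "${target_dir}"
	fi
	return 0
}
```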

Thx for your time and your wonderful work!

Átila.

[Improvement] Add more feedback

Commands such as distrobox-export and distrobox-create should be a little more talkative and give the user clearer feedback.

[Suggestion] Default to "y" or "n" when it comes to [y/n]

When you run distrobox-create with a container image that has not been pulled yet, you are greeted by the prompt Do you want to pull the image now? [y/n], and pressing plain Enter just prompts you for input again.
My suggestion is to default to either yes or no in certain commands: [y/n] would be replaced by [Y/n] or [y/N], so that a bare Enter picks the capitalized option.
It would only reduce the overall number of keystrokes by a small margin, nothing more.
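
A default-yes prompt in POSIX sh could look like the sketch below. The function name `ask_yes_no` is purely illustrative, not distrobox's actual helper; the point is that an empty reply falls through to the capitalized default.

```shell
# Hypothetical default-yes prompt; a bare Enter counts as "yes".
ask_yes_no() {
	printf "%s [Y/n] " "$1"
	read -r reply
	case "${reply}" in
		""|y|Y|yes|Yes|YES) return 0 ;;
		*) return 1 ;;
	esac
}
```

A [y/N] variant would simply swap the default branch so that only an explicit y/yes returns 0.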

[Error] distrobox-create takes infinite time if gvfs volume is mounted

distrobox-create at some point performs
+++ find /run -iname '*sock' '!' -path '/run/user/*'

If a remote resource (e.g. Nextcloud, Google Drive, or an SFTP share) is mounted through Nautilus (/run/user/1000/gvfs/), this step can take a very long time (possibly forever), and the create process then appears to be stuck.

Indeed, if you run
find /run -iname '*sock' '!' -path '/run/user/*'
while, for instance, an SFTP share containing 150 files is mounted, the find command never finishes (I interrupted it after 30 minutes) and prints many messages like

find: ‘/run/user/1000/gvfs/sftp:host=fedorapeople.org,user=xxxx/dev/vga_arbiter’: Input/output error
find: ‘/run/user/1000/gvfs/sftp:host=fedorapeople.org,user=xxxx/proc/tty/driver’: Permission denied
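
One possible fix, sketched below: the `'!' -path` filter only hides matches from the output, but find still descends into and stats every remote file; `-prune` stops find from entering /run/user at all, so slow FUSE mounts like gvfs are never touched. Demonstrated here on a throwaway tree instead of the real /run:

```shell
# Build a miniature /run lookalike: one reachable socket file, one
# "remote" socket under a user/gvfs-style subtree.
tmp=$(mktemp -d)
mkdir -p "${tmp}/user/1000/gvfs" "${tmp}/podman"
: > "${tmp}/podman/podman.sock"
: > "${tmp}/user/1000/gvfs/slow.sock"

# Equivalent of: find /run -path /run/user -prune -o -iname '*sock' -print
# -prune short-circuits before find ever descends into the user/ subtree.
find "${tmp}" -path "${tmp}/user" -prune -o -iname '*sock' -print
```

Only the podman.sock path is printed; the gvfs subtree is never visited, so a hung remote mount cannot stall the search.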

[Error] Custom shell not picked up correctly

Hi,
I am on Fedora 35 and use zsh as my default shell. When I try to enter a newly created container (using the default image), the following error is shown:

Error: executable file zsh not found in $PATH: No such file or directory: OCI runtime attempted to invoke a command that was not found

By manually specifying bash as the command (distrobox-enter --name test -- bash -l), I can successfully enter the container.

According to #2, zsh should be automatically installed when initialising the container, if I understand correctly. Perhaps I'm doing something wrong.
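
A defensive fallback on the enter side could look like this sketch. The variable names are illustrative, not the actual distrobox code; the idea is simply to probe for the host's $SHELL inside the container before exec'ing it.

```shell
# Hypothetical guard: if the host default shell (e.g. zsh) is not
# installed in the container image, fall back to a shell every image ships.
user_shell="${SHELL:-/bin/sh}"
if ! command -v "${user_shell}" >/dev/null 2>&1; then
	user_shell=/bin/sh
fi
printf 'entering container with: %s\n' "${user_shell}"
```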

P.S. I find this project very useful. It does everything I wish Toolbx had implemented, such as the possibility to easily export apps and to specify a different home directory.

Thanks, and happy new year.

[Feature] add support for RedHat UBI

As pointed out in #57 by @alcir

For the record, for what it's worth, Red Hat UBI images also seem to work.
distrobox-create -i registry.access.redhat.com/ubi8/ubi:latest --name ubi8
You have to install vte-profile to avoid bash: __vte_prompt_command: command not found messages
sudo dnf install vte-profile
and glibc-langpack-en to avoid Failed to set locale, defaulting to C.UTF-8
sudo dnf install glibc-langpack-en

Let's investigate this and see whether we can integrate the missing dependencies for other distribution families as well.
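
A sketch of how the container init could special-case UBI/RHEL-family images. The extra package names come from the report above; the /etc/os-release parsing is a plain illustration (assuming UBI images set ID=rhel there), not the actual distrobox logic.

```shell
# Detect a UBI/RHEL-family image via os-release and pick the extra
# packages reported as missing from the minimal UBI image.
extra_packages=""
if [ -r /etc/os-release ]; then
	. /etc/os-release
	case "${ID}" in
		rhel|ubi)
			extra_packages="vte-profile glibc-langpack-en"
			;;
	esac
fi
printf 'extra packages to install: %s\n' "${extra_packages}"
```

The init could then append ${extra_packages} to its regular dnf install line for that family.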

Wayland session from the guest OS?

Does this project aim to be able to run a full desktop environment Wayland session from a container?
I was able to run a KDE Plasma session from within Toolbox; maybe that could be automated here.

[Improvement] Void linux support

Hello, I have just installed distrobox on Void Linux and have successfully created two 'boxes' (Fedora and Debian).
Based on this testing I imagine Void Linux could be added to the list of compatible distros; however, if there are more tests you would like me to execute, just ask.

Max

[Suggestion] Create website

Reading a long README is painfully annoying, especially when the installation procedure is far down the page. The table of contents helps, but it is still hard to navigate because the README is overwhelmingly long.

I suggest creating a website (distrobox.github.io, I guess) so we have a good place for documentation without overwhelming newcomers.

[Issue] distrobox-enter command fails with rootless Podman configuration on Arch Linux

Steps to reproduce:

  1. install distrobox (1.2.4)
  2. install podman (3.4.4)
  3. configure rootless podman
  4. execute podman system migrate
  5. verify podman rootless execution
  6. launch distrobox-create command (my case: distrobox-create --image docker.io/alpine:latest --name alpine)
  7. launch distrobox-enter command (my case: distrobox-enter -v --name alpine -- bash -l)

Although steps 1-6 complete correctly and the rootless podman configuration is in place, I receive these errors:

+ container_manager=podman
++ id -ru
+ '[' -S /run/user/1000/podman/podman.sock ']'
+ command -v podman
+ '[' 1 -ne 0 ']'
+ container_manager='podman --log-level debug'
++ podman --log-level debug inspect --type container alpine --format '{{.State.Status}}'
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called inspect.PersistentPreRunE(podman --log-level debug inspect --type container alpine --format {{.State.Status}}) 
DEBU[0000] overlay: storage already configured with a mount-program 
DEBU[0000] Merged system config "/usr/share/containers/containers.conf" 
DEBU[0000] Merged system config "/etc/containers/containers.conf" 
DEBU[0000] overlay: storage already configured with a mount-program 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/test/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/test/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1000/containers     
DEBU[0000] Using static dir /home/test/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /home/test/.local/share/containers/storage/volumes 
DEBU[0000] overlay: storage already configured with a mount-program 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false 
DEBU[0000] Initializing event backend journald          
DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
DEBU[0000] Using OCI runtime "/usr/bin/crun"            
INFO[0000] Found CNI network podman (type=bridge) at /home/test/.config/cni/net.d/87-podman.conflist 
DEBU[0000] Default CNI network name podman is unchangeable 
INFO[0000] Setting parallel job count to 37             
DEBU[0000] Called inspect.PersistentPostRunE(podman --log-level debug inspect --type container alpine --format {{.State.Status}}) 
+ container_status=configured
+ container_exists=0
+ '[' 0 -gt 0 ']'
+ '[' configured '!=' running ']'
++ date +%FT%T.%N%:z
+ log_timestamp=2021-12-23T00:46:47.510303108+01:00
+ podman --log-level debug start alpine
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called start.PersistentPreRunE(podman --log-level debug start alpine) 
DEBU[0000] overlay: storage already configured with a mount-program 
DEBU[0000] Merged system config "/usr/share/containers/containers.conf" 
DEBU[0000] Merged system config "/etc/containers/containers.conf" 
DEBU[0000] overlay: storage already configured with a mount-program 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/test/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/test/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1000/containers     
DEBU[0000] Using static dir /home/test/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /home/test/.local/share/containers/storage/volumes 
DEBU[0000] overlay: storage already configured with a mount-program 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false 
DEBU[0000] Initializing event backend journald          
DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
DEBU[0000] Using OCI runtime "/usr/bin/crun"            
INFO[0000] Found CNI network podman (type=bridge) at /home/test/.config/cni/net.d/87-podman.conflist 
DEBU[0000] Default CNI network name podman is unchangeable 
INFO[0000] Setting parallel job count to 37             
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false 
DEBU[0000] overlay: mount_data=,lowerdir=/home/test/.local/share/containers/storage/overlay/l/W6EQNHMNEGVRKPMJLYQR7VGX4A,upperdir=/home/test/.local/share/containers/storage/overlay/7068819b0445a80f55e7812908eec48f7f51edfcb1528167ef3a45b43e480c96/diff,workdir=/home/test/.local/share/containers/storage/overlay/7068819b0445a80f55e7812908eec48f7f51edfcb1528167ef3a45b43e480c96/work 
DEBU[0000] mounted container "3057f1cedc61c15c94e38419e7d1e0ab4b5dc489896c95391d7335275fdae6e0" at "/home/test/.local/share/containers/storage/overlay/7068819b0445a80f55e7812908eec48f7f51edfcb1528167ef3a45b43e480c96/merged" 
DEBU[0000] Created root filesystem for container 3057f1cedc61c15c94e38419e7d1e0ab4b5dc489896c95391d7335275fdae6e0 at /home/test/.local/share/containers/storage/overlay/7068819b0445a80f55e7812908eec48f7f51edfcb1528167ef3a45b43e480c96/merged 
DEBU[0000] network configuration does not support host.containers.internal address 
DEBU[0000] Not modifying container 3057f1cedc61c15c94e38419e7d1e0ab4b5dc489896c95391d7335275fdae6e0 /etc/passwd 
DEBU[0000] Not modifying container 3057f1cedc61c15c94e38419e7d1e0ab4b5dc489896c95391d7335275fdae6e0 /etc/group 
DEBU[0000] skipping unrecognized mount in /etc/containers/mounts.conf: "# Configuration file for default mounts in containers (see man 5" 
DEBU[0000] skipping unrecognized mount in /etc/containers/mounts.conf: "# containers-mounts.conf for further information)" 
DEBU[0000] skipping unrecognized mount in /etc/containers/mounts.conf: "" 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode subscription 
INFO[0000] User mount overriding libpod mount at "/etc/resolv.conf" 
INFO[0000] User mount overriding libpod mount at "/etc/hosts" 
DEBU[0000] Setting CGroups for container 3057f1cedc61c15c94e38419e7d1e0ab4b5dc489896c95391d7335275fdae6e0 to user.slice:libpod:3057f1cedc61c15c94e38419e7d1e0ab4b5dc489896c95391d7335275fdae6e0 
DEBU[0000] set root propagation to "rslave"             
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d 
DEBU[0000] Workdir "/" resolved to host path "/home/test/.local/share/containers/storage/overlay/7068819b0445a80f55e7812908eec48f7f51edfcb1528167ef3a45b43e480c96/merged" 
DEBU[0000] Created OCI spec for container 3057f1cedc61c15c94e38419e7d1e0ab4b5dc489896c95391d7335275fdae6e0 at /home/test/.local/share/containers/storage/overlay-containers/3057f1cedc61c15c94e38419e7d1e0ab4b5dc489896c95391d7335275fdae6e0/userdata/config.json 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c 3057f1cedc61c15c94e38419e7d1e0ab4b5dc489896c95391d7335275fdae6e0 -u 3057f1cedc61c15c94e38419e7d1e0ab4b5dc489896c95391d7335275fdae6e0 -r /usr/bin/crun -b /home/test/.local/share/containers/storage/overlay-containers/3057f1cedc61c15c94e38419e7d1e0ab4b5dc489896c95391d7335275fdae6e0/userdata -p /run/user/1000/containers/overlay-containers/3057f1cedc61c15c94e38419e7d1e0ab4b5dc489896c95391d7335275fdae6e0/userdata/pidfile -n alpine --exit-dir /run/user/1000/libpod/tmp/exits --full-attach -s -l journald --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/overlay-containers/3057f1cedc61c15c94e38419e7d1e0ab4b5dc489896c95391d7335275fdae6e0/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/test/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg warning --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 3057f1cedc61c15c94e38419e7d1e0ab4b5dc489896c95391d7335275fdae6e0]"
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

DEBU[0000] Received: -1                                 
DEBU[0000] Cleaning up container 3057f1cedc61c15c94e38419e7d1e0ab4b5dc489896c95391d7335275fdae6e0 
DEBU[0000] Network is already cleaned up, skipping...   
DEBU[0000] unmounted container "3057f1cedc61c15c94e38419e7d1e0ab4b5dc489896c95391d7335275fdae6e0" 
Error: unable to start container "3057f1cedc61c15c94e38419e7d1e0ab4b5dc489896c95391d7335275fdae6e0": make `/home/test/.local/share/containers/storage/overlay/7068819b0445a80f55e7812908eec48f7f51edfcb1528167ef3a45b43e480c96/merged` private: Permission denied: OCI permission denied
+ '[' 125 -ne 0 ']'
+ printf '\nAn error occurred\n'

An error occurred

[Feature] Duplicate existing container

It would be useful to have the ability to duplicate an already running and set-up container
instead of relying on either creating one from scratch or using podman/docker manually.

Adding a --duplicate flag to distrobox-create could be useful.
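
In the meantime, a manual workaround could snapshot the configured box with `podman commit` and create a new box from the resulting image. The sketch below only prints the two commands instead of running them (they need a live container and a working podman); "mybox" and "mybox-copy" are placeholder names.

```shell
# Print the manual duplicate recipe for a given source/destination box.
duplicate_box() {
	src_name="$1"
	dst_name="$2"
	# snapshot the current container state into a local image
	printf 'podman commit %s %s-snapshot\n' "${src_name}" "${src_name}"
	# create a fresh distrobox from that snapshot image
	printf 'distrobox-create --image localhost/%s-snapshot:latest --name %s\n' "${src_name}" "${dst_name}"
}

duplicate_box mybox mybox-copy
```

Note that `podman commit` captures the container filesystem, not its volumes, so the shared HOME is unaffected.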
