
osbuild / bootc-image-builder


A container for deploying bootable container images.

Home Page: https://osbuild.org

License: Apache License 2.0


bootc-image-builder's Introduction

OSBuild

Build-Pipelines for Operating System Artifacts

OSBuild is a pipeline-based build system for operating system artifacts. It defines a universal pipeline description and a build system to execute it, producing artifacts such as operating system images, with the goal of an image build pipeline that is more comprehensible, reproducible, and extendable.

See the osbuild(1) man-page for details on how to run osbuild, the definition of the pipeline description, and more.
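
For example, a typical invocation looks like the following (the store and output paths and the qcow2 export name are only illustrative; the exported pipeline name must match one defined in the manifest):

sudo osbuild \
    --store /var/tmp/osbuild-store \
    --output-directory ./output \
    --export qcow2 \
    manifest.json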

Project

Principles

  1. OSBuild stages are never broken, only deprecated. The same manifest should always produce the same output.
  2. OSBuild stages should be explicit whenever possible instead of e.g. relying on the state of the tree.
  3. Pipelines are independent, so the tree is expected to be empty at the beginning of each.
  4. Manifests are expected to be machine-generated, so OSBuild has no convenience functions to support manually created manifests.
  5. The build environment is confined against accidental misuse, but this should not be considered a security boundary.
  6. OSBuild may only use Python language features supported by the oldest target distribution.

Contributing

Please refer to the developer guide to learn about our workflow, code style and more.

Requirements

The requirements for this project are:

  • bubblewrap >= 0.4.0
  • python >= 3.6

Additionally, the built-in stages require:

  • bash >= 5.0
  • coreutils >= 8.31
  • curl >= 7.68
  • qemu-img >= 4.2.0
  • rpm >= 4.15
  • tar >= 1.32
  • util-linux >= 235
  • skopeo

At build-time, the following software is required:

  • python-docutils >= 0.13
  • pkg-config >= 0.29

Testing requires additional software:

  • pytest

Build

OSBuild is a Python script, so it is not compiled. To verify changes made to the code, use the included Makefile rules:

  • make lint to run the linter on the code
  • make test-all to run the base set of tests
  • sudo make test-run to run the extended set of tests (takes a long time)

Installation

Installing osbuild requires not only installing the osbuild module, but also additional artifacts such as tools (e.g. osbuild-mpp), sources, stages, schemas, and SELinux policies.

For this reason, installing from source is not trivial, and the easiest way to install osbuild is to build the set of RPMs that contain all these components.

This can be done with the rpm make target:

make rpm

A set of RPMs will be created in the ./rpmbuild/RPMS/noarch/ directory and can be installed on the system using the distribution package manager, e.g.:

sudo dnf install ./rpmbuild/RPMS/noarch/*.rpm

License

  • Apache-2.0
  • See LICENSE file for details.

bootc-image-builder's People

Contributors

achilleas-k, bcl, cgwalters, dependabot[bot], javipolo, jeckersb, kingsleyzissou, lmilbaum, mkumatag, mvo5, ochosi, ondrejbudai, ralphbean, red-hat-konflux[bot], rhatdan, sabre1041, sacsant, sallyom, say-paul, schuellerf, sshnaidm, supakeen, teg, vrothberg, yoheiueda


bootc-image-builder's Issues

Support containers auth

Currently we don't support using containers that require authorization.

For building, osbuild uses skopeo to pull a container and will implicitly read the auth file from the standard locations described in containers-auth.json(5). However, bootc-image-builder won't be able to resolve the container to generate the manifest unless we explicitly set the path to the auth file with resolver.AuthFilePath = path.

We should support (and document) a process for users to mount an auth file into the BIB container and read it during the resolve process.

This limitation can also be worked around with the local container store feature once that's finished (pull the container on the host while authed and then use the local store to build).
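
A rough sketch of what the mount-based approach could look like, assuming the host's auth file is bind-mounted to a standard location inside the builder container (the in-container path, whether the resolver would pick it up automatically, and the private image name are all assumptions, not current behaviour):

podman login quay.io
sudo podman run \
    --rm -it --privileged \
    --security-opt label=type:unconfined_t \
    -v "${XDG_RUNTIME_DIR}/containers/auth.json:/run/containers/0/auth.json:ro" \
    -v $(pwd)/output:/output \
    quay.io/centos-bootc/bootc-image-builder:latest \
    --type qcow2 \
    quay.io/example/private-image:latest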

Unable to build images that are not based on Fedora 39

Hey folks!

I have attempted to build some images using this tool for the Universal Blue team and we are unable to build images that are not based on Fedora 39.
The two images I have tried:
ghcr.io/ublue-os/bluefin:gts (Based on Fedora 38)
ghcr.io/ublue-os/ucore:latest (Based on Fedora CoreOS 37)

I was talking with @achilleas-k and it might be related to using the Fedora 39 tools for building the images, which are not backwards compatible for building older versions of Fedora.

Happy to help test where I can!

Thanks,

Noel

Buildroot requires `x86_64-v3`

So ELN, which is our buildroot, is compiled for x86_64-v3 (see here). People are regularly running into issues where they're running bootc-image-builder on older Xeon hardware that does not support this level of the architecture.

It's probably a good idea to error out much earlier if the sublevel is not supported.

We can use the output of ld.so --help for this.
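
Something like the following could serve as an early check (a sketch, assuming a glibc new enough for ld.so --help to report hwcaps subdirectories, roughly glibc >= 2.33):

if ! /lib64/ld-linux-x86-64.so.2 --help | grep -q "x86-64-v3 (supported"; then
    echo "error: host CPU does not support x86_64-v3, which the buildroot requires" >&2
    exit 1
fi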

We are slowly moving to zstd:chunked format for container images but osbuild is not handling it.

ostree container image deploy --imgref=ostree-unverified-image:dir:/tmp/tmpxu_tjilg/image --stateroot=default --target-imgref=ostree-unverified-registry:quay.io/rhatdan/lamp:latest --karg=rw --karg=console=tty0 --karg=console=ttyS0 --karg=root=LABEL=root --sysroot=/run/osbuild/tree
error: Performing deployment: Importing: Unhandled layer type: application/vnd.oci.image.layer.v1.tar+zstd

This happens when trying to convert quay.io/rhatdan/lamp:latest, which was pushed using the zstd:chunked format.
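
For reference, an image ends up in this format when it is pushed with zstd:chunked compression, e.g. (exact flag support depends on the podman version):

podman push --compression-format zstd:chunked quay.io/rhatdan/lamp:latest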

Building a qcow2 from a derived image doesn't work, throwing an SELinux error in the pipeline

Building a derived image on macOS with Podman Desktop and trying to create a qcow2 fails with an SELinux error in the pipeline. Here's the Containerfile used; note it doesn't matter whether the non-dev version of the centos-bootc:stream9 container is used, it still fails.

FROM quay.io/centos-bootc/centos-bootc-dev:stream9
RUN rpm-ostree install gdm firefox gnome-kiosk-script-session plymouth-system-theme firewalld
RUN rm -rf /var/lib/gdm/.config/pulse/default.pa && rm -rf /var/lib/xkb/README.compiled
COPY custom.conf /etc/gdm/
COPY core.conf /usr/lib/sysusers.d/
COPY --chmod=0755 --chown=1042:1042 gnome-kiosk-script /usr/lib/
COPY kiosk-gdm /usr/lib/
COPY kiosk.conf /usr/lib/tmpfiles.d/
RUN mkdir -p /usr/etc-system/ && \
    echo 'AuthorizedKeysFile /usr/etc-system/%u.keys' >> /etc/ssh/sshd_config.d/30-auth-system.conf && \
    echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL7xFq1HtZKZiaD8MfkhNtn37m8GSc1W168NoSaT9RSf cardno:000F_C36A3FC0' > /usr/etc-system/root.keys && chmod 0600 /usr/etc-system/root.keys
RUN systemctl enable sshd && firewall-offline-cmd --disabled
RUN systemctl set-default graphical.target && ostree container commit

If I just use bib to create a qcow2 from centos-bootc:stream9 or centos-bootc-dev:stream9 instead, it works flawlessly.

The error I always get is:

 ⏱  Duration: 0s
org.osbuild.ostree.selinux: 033dd409cb4cd702596676d94321a1dd5f23a90a4f163557aec276603d20b5e3 {
  "deployment": {
    "osname": "default",
    "ref": "ostree/1/1/0"
  }
}
Traceback (most recent call last):
  File "/run/osbuild/bin/org.osbuild.ostree.selinux", line 95, in <module>
    r = main(stage_args["tree"],
  File "/run/osbuild/bin/org.osbuild.ostree.selinux", line 80, in main
    raise ValueError("Could not find SELinux policy")
ValueError: Could not find SELinux policy

⏱  Duration: 0s

Failed
Error: running osbuild failed: exit status 1
2024/02/01 09:48:43 error: running osbuild failed: exit status 1

Here's the full pipeline log:

podman run \
    --rm \
    -it \
    --privileged \
    --pull=newer \
    --security-opt label=type:unconfined_t \
    -v $(pwd)/config.json:/config.json \
    -v $(pwd)/output:/output \
    quay.io/centos-bootc/bootc-image-builder:latest \
    --type qcow2 \
    --config /config.json \
    quay.io/runcom/kiosk-base:latest
Generating manifest-qcow2.json ... DONE
Building manifest-qcow2.json
source/org.osbuild.skopeo (org.osbuild.skopeo): Getting image source signatures
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:2ad0a0fc42196a143f32c3f12c2c6069f4d17bfe15da6cd9a8d6f218b3070ca3
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:4a5f4a6603cd1623dc680d66502b3d96e70b36ae8dabea53b8532f0d8bfa965a
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:2e4960a02a6368e170f228b4fa106e3c710985860fb5c1801686ae96c94e2e38
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:e9696bd3499f2463d9ac659c076b49f77939d5d8880b19cdb68a7058af4c9e44
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:4b5f90120f1143456a058761e410c4e9794467002852c6c04b2ba85b37a60808
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:1be6be164bbd6705807fb07de5310646da50330bb137e6b3a5f231a33312af58
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:cc8749a253d9477f9303b71fc21c0289ae290ffe7dc718271627bf14005fc9b4
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:c0d1fdaaa1d72f146d6824297b5821a39d5387be345561705c7caa25ca47cf88
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:5071d287f7c7db296d5c28f34097e0647f7e77e57c084a24427b8b67bf9268b2
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:f7c3e4981127ccc5832c54f2f7b3704402c0d3fc0e5290f704d259351aa55c88
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:6c2e22d7b9b19b57645b401b2561e54d8ce8ef9872828a701095d2eed278428c
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:5956b8f11f80d013a46bac783e9d1b57b20226fa071d416428b43b5696375c91
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:c8bef76f2c94d4cc7a420f245a5d39a7593aa68df578a3ba11f27b7c54fb4d04
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:d71aab47db7a5596be1e320bab2c08fba924ad635f31bbaa1002344114992039
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:d3ae93dae5d97c785eba84c0a7972ad3e642e3f5214643bf61bc1f6512fdd708
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:4726293aa2a33ff85b98c4a71bd4fb21e6e3df24812cb409c412192c5a939115
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:ff42e03d47a841053f7a186bd824718d42ff1d406ae47e07c37e752cdf563c14
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:94b5821e307145939e33f2df175c317315cf88df7466933dee5b757d1f227508
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:999f9af559e4a46b19cee18c6ac132b59a1056a8974e7a799acd186d0f8a0108
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:0614e40d506f2160fbb8c4904f512953b6fe8cc7f1f2c099eb7ef04b49d370f4
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:21b080004e82d1fcf3047e8151994cead0c4f3a1532c3deff7e9bfa7aa7af663
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:83abaf6d6857c6205a75e1ef1674015f5da94c88d535421752e761457c30e9f5
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:ac8e626392a1f9dc56c2619d13fe20dbeb7d35264f0f79def88b504a2c645733
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:a3a568395ae2c83d8c26a745345ba996214cf347b502dd487afa5daa4d7a34ad
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:379fef1872ce19fffc8a6f0d54cba46618ee7232270c5669869ed6de71aca569
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:2d19a128033e0a53ea4fd9306b3d3a4d8008effbe6906bea0ede2dbada2ffb5c
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:99e0a5d400870763e44fbc09991caa64fb573071b2fce8c36a3eb2448b7bc08e
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:0270fde0d4d373bfc00391c5eab11c93114016eef4aa74bb13e798fd74963457
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:ddb083605a303814a02bff4f93286e7ee2c8436959f5c9c7a629feb500d014dc
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:27bcd423590746df7d4c62994fb8b7bd57a9ec66d590f0890c08498c0d1c145b
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:9128594d5c6ead9ad197f976698827477d618a94a96cacfbcbc915f70d1e1407
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:3ee5b3a43c74526384cf82c102646bf59c97d4526ddcac452d8fdb0ee33945fb
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:6ecf34d6acab0e18b92ba33e99787cf95db08e0bfca15856242c26de47e588a4
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:cc08b56138c7f253f58e79d63b2f70b2eeb57be4c8b6ac93cf2cc765859872a0
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:ecec55d49774543868959ac7b3dcf2cc23f4f67baa7a9053587155f9865978db
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:2310bd66d182ce8cb8d550419364f5967aa0d95f7d81c1e11b15ae08467be7c6
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:38206305e735ab0de33d2281dd1e1e22b100e7292494dd47a72ee0cfe139f74d
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:2699daace685d35d6ba3421ec41b104f4e0e131db4f661bc6fd3dbccdeb544a7
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:58f1b769e6dae40157549544aa8db989166eb59a92454abb1f46a62f4eddf4b1
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:50d7d0f10630ab9be1fb2b21d3a13269cbc2bb4f8415a45ae86aa7bff96b5785
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:112dfba5723614741cf6b0a6a631d2d3b0948f2cdc9ebf3673dd52d129ee1871
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:a35770dbfe51d19ae936641e62c4145337a92575d0167f8c9ba2320b2da04b7e
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:2ea66b289d6304e59654b9244138f8920dea809d7ba8ca1fa5bf66adc28e13b0
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:56dda937b9c81ba265f5f7e5f9a3a8e7212024b949ebdbc5cd95612d3f3554b1
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:2ad9c5ad1178de77c57534d24f21337551f54f563ad55d5f9c9c67673b03d488
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:b20f966cd8588dd42f94e13dd28e636e51859a9a93d92b7adad21bdf8c02433e
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:778484223fd878b87faa2c3647977e06521f2c61fc616be8c72d6d166d24b169
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:abbff43fcf075f0099bc106d3cf7b651207cea041ca1f1722c5a71e56d4c8fcb
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:feb2b075a06b1616e50b22489462f9f6d2dcd8b04ff818e443002ba5db29dc45
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:cbc6bb813b500f055cfa815eb8aa53010f112925caf073f0a03058f40a5ac5e5
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:2c26116509e233504c4d86af1639ff43fb3ce68a1893884efe338bdb69209c21
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:c686bafea26cb0f72db29f817c9196d5e9a91fffa8e0c1168b0ef21ac96b3470
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:724789c163736aa86fd0243c0252b5dce1eb1b7386d8fe7228b0090aea5a36ef
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:856147274a0c1c07edc2178334263f9d39f1a8d969f43e63829903e924b692ec
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:cb397995265fe0962f057b3290efb551755477bca620be9272b2dae4fb179209
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:100193d149de721aa41f27e3632aceb9d4952d818c97469ecce9de40ef6e3f64
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:0f2b29cda75b43b647ea3ee5aefc234b1372ea68703ea58e28b3293c4668fa25
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:3e50cffe254c455214d5d24590c19c3272a6e479b5621e1d2c9e475c1cc979c9
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:614c3a384bea11e3c34fbe9ce6d6700e68f42df72d170381df91eb6cc6eca4f5
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:872740fec956f67506a6e1ba33a968d24cd924a80a10122f2ae7dbb24b237ede
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:2187fdc81842dc9f1009de093df1f19923fb775820bad5203d444d2b1a0e9bfb
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:3f5910b797819c4779148171e0e06a0b66a8ac4611fedd512c1025168bec3a71
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:d9e95ef1132353c48b66725a263c8836d0bd6eed01284a146c9d672f0c810ce8
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:9625cf9eb5156f060d22ee1feb2113676a4871bd47d4e773d9f3c73278441af6
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:ad312c5c40ccd18a3c639cc139211f3c4284e568b69b2e748027cee057986fe0
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:bd9ddc54bea929a22b334e73e026d4136e5b73f5cc29942896c72e4ece69b13d
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:649a24114fc93d581934e57d2981ad9e8ad681bc3a6b73daee2238be12815d26
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:70d6b2462a1a554ade17ee34e352094efab5c73daf524368cdc9dd239b5462fa
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:ff33e8002e15a5a5b08346f043ba3cb19da48c0fe604e7aa9a6d8114e86a9523
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:8842c8e6fb7ea1933d8d5e6a27aa76c7b94e71d58aed32789b0c4bb4303a9dd8
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:68996e87b284eb8172fbd90f99fb53e542ad6dc365cda4bee03be4307aeaca51
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:50b1a1367bc58c32ddb644393ab04de007a83f790f990928b1f7a583130b6118
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:de537bbf0ab4a35a386b14270616f157ce55ac43b240f7c0c4ddbbbfc64719eb
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:27ec056548cc7659e3509902711327b3a383a91a41b53e125cbc72fab331097e
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:19d9e75518268f52583d67dc0444807d731f39f680996ade1fca6b1c8d9bc5eb
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:7152bceda1f0dbc5ebd5abd84b26e0ece08423139161df1c794e18a219973f88
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying config sha256:9fa546c58071dc1b50676f35e271285bae0eb6b62fe292d0dca48a33170b7de0
source/org.osbuild.skopeo (org.osbuild.skopeo): Writing manifest to image destination
Pipeline build: 9f406ced233e4468682ffd343fa8c601ef06537d2a2fb8407d18b7c40c6dc2a6
Build
  root: <host>
  runner: org.osbuild.fedora38 (org.osbuild.fedora38)
org.osbuild.container-deploy: 62fd57299066cdf54acb2173f4377650c3b092b542b24226df5487f1e85b4f9c {
  "exclude": [
    "/sysroot"
  ]
}
time="2024-02-01T09:47:15Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Getting image source signatures
Copying blob sha256:4b5f90120f1143456a058761e410c4e9794467002852c6c04b2ba85b37a60808
Copying blob sha256:4a5f4a6603cd1623dc680d66502b3d96e70b36ae8dabea53b8532f0d8bfa965a
Copying blob sha256:2e4960a02a6368e170f228b4fa106e3c710985860fb5c1801686ae96c94e2e38
Copying blob sha256:e9696bd3499f2463d9ac659c076b49f77939d5d8880b19cdb68a7058af4c9e44
Copying blob sha256:1be6be164bbd6705807fb07de5310646da50330bb137e6b3a5f231a33312af58
Copying blob sha256:2ad0a0fc42196a143f32c3f12c2c6069f4d17bfe15da6cd9a8d6f218b3070ca3
Copying blob sha256:cc8749a253d9477f9303b71fc21c0289ae290ffe7dc718271627bf14005fc9b4
Copying blob sha256:c0d1fdaaa1d72f146d6824297b5821a39d5387be345561705c7caa25ca47cf88
Copying blob sha256:5071d287f7c7db296d5c28f34097e0647f7e77e57c084a24427b8b67bf9268b2
Copying blob sha256:f7c3e4981127ccc5832c54f2f7b3704402c0d3fc0e5290f704d259351aa55c88
Copying blob sha256:6c2e22d7b9b19b57645b401b2561e54d8ce8ef9872828a701095d2eed278428c
Copying blob sha256:5956b8f11f80d013a46bac783e9d1b57b20226fa071d416428b43b5696375c91
Copying blob sha256:c8bef76f2c94d4cc7a420f245a5d39a7593aa68df578a3ba11f27b7c54fb4d04
Copying blob sha256:d71aab47db7a5596be1e320bab2c08fba924ad635f31bbaa1002344114992039
Copying blob sha256:d3ae93dae5d97c785eba84c0a7972ad3e642e3f5214643bf61bc1f6512fdd708
Copying blob sha256:4726293aa2a33ff85b98c4a71bd4fb21e6e3df24812cb409c412192c5a939115
Copying blob sha256:ff42e03d47a841053f7a186bd824718d42ff1d406ae47e07c37e752cdf563c14
Copying blob sha256:94b5821e307145939e33f2df175c317315cf88df7466933dee5b757d1f227508
Copying blob sha256:999f9af559e4a46b19cee18c6ac132b59a1056a8974e7a799acd186d0f8a0108
Copying blob sha256:0614e40d506f2160fbb8c4904f512953b6fe8cc7f1f2c099eb7ef04b49d370f4
Copying blob sha256:21b080004e82d1fcf3047e8151994cead0c4f3a1532c3deff7e9bfa7aa7af663
Copying blob sha256:83abaf6d6857c6205a75e1ef1674015f5da94c88d535421752e761457c30e9f5
Copying blob sha256:ac8e626392a1f9dc56c2619d13fe20dbeb7d35264f0f79def88b504a2c645733
Copying blob sha256:a3a568395ae2c83d8c26a745345ba996214cf347b502dd487afa5daa4d7a34ad
Copying blob sha256:379fef1872ce19fffc8a6f0d54cba46618ee7232270c5669869ed6de71aca569
Copying blob sha256:2d19a128033e0a53ea4fd9306b3d3a4d8008effbe6906bea0ede2dbada2ffb5c
Copying blob sha256:99e0a5d400870763e44fbc09991caa64fb573071b2fce8c36a3eb2448b7bc08e
Copying blob sha256:0270fde0d4d373bfc00391c5eab11c93114016eef4aa74bb13e798fd74963457
Copying blob sha256:ddb083605a303814a02bff4f93286e7ee2c8436959f5c9c7a629feb500d014dc
Copying blob sha256:27bcd423590746df7d4c62994fb8b7bd57a9ec66d590f0890c08498c0d1c145b
Copying blob sha256:9128594d5c6ead9ad197f976698827477d618a94a96cacfbcbc915f70d1e1407
Copying blob sha256:3ee5b3a43c74526384cf82c102646bf59c97d4526ddcac452d8fdb0ee33945fb
Copying blob sha256:6ecf34d6acab0e18b92ba33e99787cf95db08e0bfca15856242c26de47e588a4
Copying blob sha256:cc08b56138c7f253f58e79d63b2f70b2eeb57be4c8b6ac93cf2cc765859872a0
Copying blob sha256:ecec55d49774543868959ac7b3dcf2cc23f4f67baa7a9053587155f9865978db
Copying blob sha256:2310bd66d182ce8cb8d550419364f5967aa0d95f7d81c1e11b15ae08467be7c6
Copying blob sha256:38206305e735ab0de33d2281dd1e1e22b100e7292494dd47a72ee0cfe139f74d
Copying blob sha256:2699daace685d35d6ba3421ec41b104f4e0e131db4f661bc6fd3dbccdeb544a7
Copying blob sha256:58f1b769e6dae40157549544aa8db989166eb59a92454abb1f46a62f4eddf4b1
Copying blob sha256:50d7d0f10630ab9be1fb2b21d3a13269cbc2bb4f8415a45ae86aa7bff96b5785
Copying blob sha256:112dfba5723614741cf6b0a6a631d2d3b0948f2cdc9ebf3673dd52d129ee1871
Copying blob sha256:a35770dbfe51d19ae936641e62c4145337a92575d0167f8c9ba2320b2da04b7e
Copying blob sha256:2ea66b289d6304e59654b9244138f8920dea809d7ba8ca1fa5bf66adc28e13b0
Copying blob sha256:56dda937b9c81ba265f5f7e5f9a3a8e7212024b949ebdbc5cd95612d3f3554b1
Copying blob sha256:2ad9c5ad1178de77c57534d24f21337551f54f563ad55d5f9c9c67673b03d488
Copying blob sha256:b20f966cd8588dd42f94e13dd28e636e51859a9a93d92b7adad21bdf8c02433e
Copying blob sha256:778484223fd878b87faa2c3647977e06521f2c61fc616be8c72d6d166d24b169
Copying blob sha256:abbff43fcf075f0099bc106d3cf7b651207cea041ca1f1722c5a71e56d4c8fcb
Copying blob sha256:feb2b075a06b1616e50b22489462f9f6d2dcd8b04ff818e443002ba5db29dc45
Copying blob sha256:cbc6bb813b500f055cfa815eb8aa53010f112925caf073f0a03058f40a5ac5e5
Copying blob sha256:2c26116509e233504c4d86af1639ff43fb3ce68a1893884efe338bdb69209c21
Copying blob sha256:c686bafea26cb0f72db29f817c9196d5e9a91fffa8e0c1168b0ef21ac96b3470
Copying blob sha256:724789c163736aa86fd0243c0252b5dce1eb1b7386d8fe7228b0090aea5a36ef
Copying blob sha256:856147274a0c1c07edc2178334263f9d39f1a8d969f43e63829903e924b692ec
Copying blob sha256:cb397995265fe0962f057b3290efb551755477bca620be9272b2dae4fb179209
Copying blob sha256:100193d149de721aa41f27e3632aceb9d4952d818c97469ecce9de40ef6e3f64
Copying blob sha256:0f2b29cda75b43b647ea3ee5aefc234b1372ea68703ea58e28b3293c4668fa25
Copying blob sha256:3e50cffe254c455214d5d24590c19c3272a6e479b5621e1d2c9e475c1cc979c9
Copying blob sha256:614c3a384bea11e3c34fbe9ce6d6700e68f42df72d170381df91eb6cc6eca4f5
Copying blob sha256:872740fec956f67506a6e1ba33a968d24cd924a80a10122f2ae7dbb24b237ede
Copying blob sha256:2187fdc81842dc9f1009de093df1f19923fb775820bad5203d444d2b1a0e9bfb
Copying blob sha256:3f5910b797819c4779148171e0e06a0b66a8ac4611fedd512c1025168bec3a71
Copying blob sha256:d9e95ef1132353c48b66725a263c8836d0bd6eed01284a146c9d672f0c810ce8
Copying blob sha256:9625cf9eb5156f060d22ee1feb2113676a4871bd47d4e773d9f3c73278441af6
Copying blob sha256:ad312c5c40ccd18a3c639cc139211f3c4284e568b69b2e748027cee057986fe0
Copying blob sha256:bd9ddc54bea929a22b334e73e026d4136e5b73f5cc29942896c72e4ece69b13d
Copying blob sha256:649a24114fc93d581934e57d2981ad9e8ad681bc3a6b73daee2238be12815d26
Copying blob sha256:70d6b2462a1a554ade17ee34e352094efab5c73daf524368cdc9dd239b5462fa
Copying blob sha256:ff33e8002e15a5a5b08346f043ba3cb19da48c0fe604e7aa9a6d8114e86a9523
Copying blob sha256:8842c8e6fb7ea1933d8d5e6a27aa76c7b94e71d58aed32789b0c4bb4303a9dd8
Copying blob sha256:68996e87b284eb8172fbd90f99fb53e542ad6dc365cda4bee03be4307aeaca51
Copying blob sha256:50b1a1367bc58c32ddb644393ab04de007a83f790f990928b1f7a583130b6118
Copying blob sha256:de537bbf0ab4a35a386b14270616f157ce55ac43b240f7c0c4ddbbbfc64719eb
Copying blob sha256:27ec056548cc7659e3509902711327b3a383a91a41b53e125cbc72fab331097e
Copying blob sha256:19d9e75518268f52583d67dc0444807d731f39f680996ade1fca6b1c8d9bc5eb
Copying blob sha256:7152bceda1f0dbc5ebd5abd84b26e0ece08423139161df1c794e18a219973f88
Copying config sha256:9fa546c58071dc1b50676f35e271285bae0eb6b62fe292d0dca48a33170b7de0
Writing manifest to image destination
9fa546c58071dc1b50676f35e271285bae0eb6b62fe292d0dca48a33170b7de0
Untagged: docker.io/library/tmp-container-deploy-97994287862764:latest
Deleted: 9fa546c58071dc1b50676f35e271285bae0eb6b62fe292d0dca48a33170b7de0

⏱  Duration: 42s
org.osbuild.selinux: 9f406ced233e4468682ffd343fa8c601ef06537d2a2fb8407d18b7c40c6dc2a6 {
  "file_contexts": "etc/selinux/targeted/contexts/files/file_contexts",
  "labels": {
    "/usr/bin/mount": "system_u:object_r:install_exec_t:s0",
    "/usr/bin/ostree": "system_u:object_r:install_exec_t:s0",
    "/usr/bin/umount": "system_u:object_r:install_exec_t:s0"
  }
}
setfiles: Regex version mismatch, expected: 10.42 2022-12-11 actual: 10.40 2022-04-14
setfiles: Regex version mismatch, expected: 10.42 2022-12-11 actual: 10.40 2022-04-14

⏱  Duration: 10s
Pipeline ostree-deployment: 033dd409cb4cd702596676d94321a1dd5f23a90a4f163557aec276603d20b5e3
Build
  root: 9f406ced233e4468682ffd343fa8c601ef06537d2a2fb8407d18b7c40c6dc2a6
  runner: org.osbuild.linux (org.osbuild.linux)
org.osbuild.ostree.init-fs: 5d6f0bc236838d4f87bc10478b799d7aaca33ff4c549f9d95d67bf84d2d40894 {}
ostree admin init-fs --modern /run/osbuild/tree --sysroot=/run/osbuild/tree

(ostree admin init-fs:4): GLib-WARNING **: 09:48:08.180: getpwuid_r(): failed due to unknown user id (0)

(ostree admin init-fs:4): GLib-WARNING **: 09:48:08.180: Could not find home directory: $HOME is not set, and user database could not be read.

⏱  Duration: 0s
org.osbuild.ostree.os-init: 3fecef446fcdf37d9f97949fa7930581feb0e220345c2867d768aeea89ecb900 {
  "osname": "default"
}
ostree admin os-init default --sysroot=/run/osbuild/tree

(ostree admin os-init:4): GLib-WARNING **: 09:48:08.294: getpwuid_r(): failed due to unknown user id (0)

(ostree admin os-init:4): GLib-WARNING **: 09:48:08.294: Could not find home directory: $HOME is not set, and user database could not be read.

⏱  Duration: 0s
org.osbuild.mkdir: efb56b39820f0b0df98b0c7da5198f6e1c20dc9ba247eb5ec37e0fc0a17bb65d {
  "paths": [
    {
      "path": "/boot/efi",
      "mode": 448
    }
  ]
}

⏱  Duration: 0s
org.osbuild.ostree.deploy.container: 68567db3acf5318146158a93ddff010b5c66bbfe68a0217437d68d78d08a129e {
  "osname": "default",
  "kernel_opts": [
    "rw",
    "console=tty0",
    "console=ttyS0"
  ],
  "target_imgref": "ostree-unverified-registry:quay.io/runcom/kiosk-base:latest",
  "rootfs": {
    "label": "root"
  },
  "mounts": [
    "/boot",
    "/boot/efi"
  ]
}
ostree container image deploy --imgref=ostree-unverified-image:dir:/tmp/tmpnokn_2ye/image --stateroot=default --target-imgref=ostree-unverified-registry:quay.io/runcom/kiosk-base:latest --karg=rw --karg=console=tty0 --karg=console=ttyS0 --karg=root=LABEL=root --sysroot=/run/osbuild/tree

(process:12): GLib-WARNING **: 09:48:08.621: getpwuid_r(): failed due to unknown user id (0)

(process:12): GLib-WARNING **: 09:48:08.621: Could not find home directory: $HOME is not set, and user database could not be read.
Image contains non-ostree compatible file paths: run: 4

⏱  Duration: 32s
org.osbuild.ostree.config: 14eaa75d621af081e049de2e73f032f1f4f192b18fe8b98d88f7b75eafddc69b {
  "repo": "/ostree/repo",
  "config": {
    "sysroot": {
      "readonly": true,
      "bootloader": "none"
    }
  }
}
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): Deployment root at 'ostree/deploy/default/deploy/ffb26361e890932d3569539780fe76215d5c169b196468e8178804740231dcaf.0'
ostree config set sysroot.bootloader none --repo=/run/osbuild/tree/ostree/repo

(ostree config:4): GLib-WARNING **: 09:48:40.815: getpwuid_r(): failed due to unknown user id (0)

(ostree config:4): GLib-WARNING **: 09:48:40.815: Could not find home directory: $HOME is not set, and user database could not be read.
ostree config set sysroot.readonly true --repo=/run/osbuild/tree/ostree/repo

(ostree config:5): GLib-WARNING **: 09:48:40.820: getpwuid_r(): failed due to unknown user id (0)

(ostree config:5): GLib-WARNING **: 09:48:40.820: Could not find home directory: $HOME is not set, and user database could not be read.
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): mounting /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree/ostree/deploy/default/deploy/ffb26361e890932d3569539780fe76215d5c169b196468e8178804740231dcaf.0 -> /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree/sysroot unmounted
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree/var unmounted
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree/boot unmounted
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree unmounted
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): extra unmount /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree unmounted

⏱  Duration: 0s
org.osbuild.fstab: 502b50a7c548bab5e353569dbf4a01aefa55b31212e768130c77fe56f35eccaa {
  "filesystems": [
    {
      "uuid": "d04173d6-fb05-46d1-9628-d8ee4bdf225a",
      "vfs_type": "ext4",
      "path": "/",
      "options": "defaults",
      "freq": 1,
      "passno": 1
    },
    {
      "uuid": "1b5cf9f3-4683-4ae1-9bfb-ab8412eb38cf",
      "vfs_type": "ext4",
      "path": "/boot",
      "options": "defaults",
      "freq": 1,
      "passno": 2
    },
    {
      "uuid": "7B77-95E7",
      "vfs_type": "vfat",
      "path": "/boot/efi",
      "options": "umask=0077,shortname=winnt",
      "passno": 2
    }
  ]
}
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): Deployment root at 'ostree/deploy/default/deploy/ffb26361e890932d3569539780fe76215d5c169b196468e8178804740231dcaf.0'
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): mounting /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree/ostree/deploy/default/deploy/ffb26361e890932d3569539780fe76215d5c169b196468e8178804740231dcaf.0 -> /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree/sysroot unmounted
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree/var unmounted
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree/boot unmounted
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree unmounted
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): extra unmount /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree unmounted

⏱  Duration: 0s
org.osbuild.users: 059eb28c02ea7370283440a6efa4e546655d681c52121ebeb24472949b55efd3 {
  "users": {
    "runcom": {
      "groups": [
        "wheel"
      ],
      "password": "$6$Y9Khp1szKOFoWlZI$fst.T0dxD/gXKGiS/55WniouXeDY4dss1bjBbI2ryoO.ntRvqsc7Po/X5BM38jh7FraMBRM2w15RceUC.fX8n.",
      "key": "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL7xFq1HtZKZiaD8MfkhNtn37m8GSc1W168NoSaT9RSf cardno:000F_C36A3FC0"
    }
  }
}
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): Deployment root at 'ostree/deploy/default/deploy/ffb26361e890932d3569539780fe76215d5c169b196468e8178804740231dcaf.0'
[sss_cache] [confdb_init] (0x0010): Unable to open config database [/var/lib/sss/db/config.ldb]
Could not open available domains
[sss_cache] [confdb_init] (0x0010): Unable to open config database [/var/lib/sss/db/config.ldb]
Could not open available domains
Creating mailbox file: No such file or directory
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): mounting /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree/ostree/deploy/default/deploy/ffb26361e890932d3569539780fe76215d5c169b196468e8178804740231dcaf.0 -> /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree/sysroot unmounted
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree/var unmounted
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree/boot unmounted
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree unmounted
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): extra unmount /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-cdc5c39a3d93459ea9716e005c52d696/data/tree unmounted

⏱  Duration: 0s
org.osbuild.ostree.selinux: 033dd409cb4cd702596676d94321a1dd5f23a90a4f163557aec276603d20b5e3 {
  "deployment": {
    "osname": "default",
    "ref": "ostree/1/1/0"
  }
}
Traceback (most recent call last):
  File "/run/osbuild/bin/org.osbuild.ostree.selinux", line 95, in <module>
    r = main(stage_args["tree"],
  File "/run/osbuild/bin/org.osbuild.ostree.selinux", line 80, in main
    raise ValueError("Could not find SELinux policy")
ValueError: Could not find SELinux policy

⏱  Duration: 0s

Failed
Error: running osbuild failed: exit status 1
2024/02/01 09:48:43 error: running osbuild failed: exit status 1

`setfiles: Could not set context` on SELinux-enforcing systems

Tried with quay.io/centos-boot/fedora-tier-1:eln and quay.io/centos-boot/centos-tier-1:stream9.

sudo podman run --rm -it --privileged -v $(pwd)/images:/output ghcr.io/osbuild/osbuild-deploy-container -imageref quay.io/centos-boot/centos-tier-1:stream9
[...]
org.osbuild.selinux: c73ddc1b46d5d88c144b1b185cf2559477ea8bcb72f87365ce5fbc02d4625ef3 {
  "file_contexts": "etc/selinux/targeted/contexts/files/file_contexts",
  "labels": {
    "/usr/bin/cp": "system_u:object_r:install_exec_t:s0"
  }
}
/usr/lib/tmpfiles.d/journal-nocow.conf:26: Failed to resolve specifier: uninitialized /etc/ detected, skipping.
All rules containing unresolvable specifiers will be skipped.
setfiles: Could not set context for /run/osbuild/tree/usr/lib/systemd/system-generators/systemd-fstab-generator:  Invalid argument
setfiles: Could not set context for /run/osbuild/tree/usr/lib/systemd/system-generators/systemd-rc-local-generator:  Invalid argument
setfiles: Could not set context for /run/osbuild/tree/usr/lib/systemd/system-generators/systemd-sysv-generator:  Invalid argument
Traceback (most recent call last):
  File "/run/osbuild/bin/org.osbuild.selinux", line 75, in <module>
    r = main(args["tree"], args["options"])
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/run/osbuild/bin/org.osbuild.selinux", line 62, in main
    subprocess.run(["setfiles", "-F", "-r", f"{tree}", f"{file_contexts}", f"{tree}"], check=True)
  File "/usr/lib64/python3.12/subprocess.py", line 571, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['setfiles', '-F', '-r', '/run/osbuild/tree', '/run/osbuild/tree/etc/selinux/targeted/contexts/files/file_contexts', '/run/osbuild/tree']' returned non-zero exit status 255.

⏱  Duration: 4s

Failed
running osbuild failed: exit status 1
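
One mitigation that appears in later reports on this page is relaxing the SELinux label on the builder container; whether it resolves this particular setfiles failure is not confirmed here:

sudo podman run --rm -it --privileged \
    --security-opt label=type:unconfined_t \
    -v $(pwd)/images:/output \
    ghcr.io/osbuild/osbuild-deploy-container \
    -imageref quay.io/centos-boot/centos-tier-1:stream9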

Tagged releases / multiple releases

We should have tagged releases every X weeks.

We thought we hit a bug and wanted to roll back to a previous release of the image builder, but found that there are no tagged releases / GitHub releases.
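
Until tagged releases exist, one workaround is to pin the builder image by digest rather than by the latest tag (the digest below is a placeholder to be filled in from the skopeo output):

skopeo inspect docker://quay.io/centos-bootc/bootc-image-builder:latest | jq -r .Digest
podman pull quay.io/centos-bootc/bootc-image-builder@<digest-from-above>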

support running as a "controller"

A good pattern that the Kubernetes world has introduced is that of "spec/status" and "controllers/operators": https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md

I think what would make sense here is to teach bib how to operate as a "controller", particularly in IaaS environments (KubeVirt, GCP, AWS, Azure, etc.): it would query the container image and the target disk image, and rebuild the target disk image only if it is out of date with respect to the container.

This would require us to inject and manage metadata, etc.
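
A very rough sketch of such a controller loop, where all paths, the polling interval, the image name, and the exact bib invocation are illustrative assumptions rather than anything bib supports today:

while true; do
    # Compare the digest of the source container with the last digest we built from.
    current=$(skopeo inspect docker://quay.io/example/my-bootc:latest | jq -r .Digest)
    built=$(cat /var/lib/bib-controller/last-built-digest 2>/dev/null || true)
    if [ "$current" != "$built" ]; then
        sudo podman run --rm --privileged \
            -v /var/lib/bib-controller/output:/output \
            quay.io/centos-bootc/bootc-image-builder:latest \
            --type qcow2 \
            quay.io/example/my-bootc:latest
        echo "$current" > /var/lib/bib-controller/last-built-digest
        # ...then upload/replace the disk image in the target IaaS here
    fi
    sleep 300
done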

rhtap not pushing new images

$ skopeo inspect -n docker://quay.io/centos-bootc/bootc-image-builder  | jq -r .Created
2024-02-07T14:50:36.65773167Z
$

And we have a few PRs merged since then.

Looking at the pipeline I see e.g. this run which seems to have built the latest commit but then...it got stuck deploying or something?

user credentials

We had a real-time chat about what specifically we should support configuring when generating disk images (as opposed to injecting into the container image). @mvo5 mentioned that the test suite right now relies on injecting a root ssh key. Yes, but note that it's also possible to inject a key in two other ways:

That said, note that containers/bootc@e477e4f also landed, so I think it would clearly make sense to expose that too, particularly after #18 is done.

There is still a much thornier question of the general problem of secrets, to which it's hard to have one opinionated answer (this touches on containers/bootc#22 etc.)
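
For context, the root-key injection the test suite relies on goes through the config file passed to bib; roughly something like the following, though the exact field names may differ between versions and the key value is a placeholder:

cat > config.json <<'EOF'
{
  "blueprint": {
    "customizations": {
      "user": [
        {
          "name": "root",
          "key": "ssh-ed25519 AAAA...placeholder"
        }
      ]
    }
  }
}
EOF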

`AttributeError: 'Loop' object has no attribute 'fd'`

After #6 is fixed/worked around, people are running into this issue. I saw it myself, but not every time. I think it depends on whether the host system ran osbuild before you run it in a container after a reboot. My guess is that some extra initial setup of loop devices might be needed in the container; see the sketch after the log below.

This might be the same issue as Michael Hofmann hit when trying to run osbuild in a container: https://gitlab.com/cki-project/experimental/osbuild-example/-/blob/main/osbuild-composer-loopback.service?ref_type=heads

Note that this might be host-distro specific.

⏱  Duration: 0s
org.osbuild.copy: 5d96bf6806bf5ba0999439a1fd7cbb4b44328b943a8e3b8b6196849bede191e8 {
  "paths": [
    {
      "from": "input://root-tree/",
      "to": "mount://-/"
    }
  ]
}
device/- (org.osbuild.loopback): loop0 acquired (locked: False)
device/boot (org.osbuild.loopback): Exception ignored in: <function Loop.__del__ at 0x7f2b38fd63e0>
device/boot (org.osbuild.loopback): Traceback (most recent call last):
device/boot (org.osbuild.loopback):   File "/usr/lib/python3.12/site-packages/osbuild/loop.py", line 137, in __del__
device/boot (org.osbuild.loopback):     self.close()
device/boot (org.osbuild.loopback):   File "/usr/lib/python3.12/site-packages/osbuild/loop.py", line 144, in close
device/boot (org.osbuild.loopback):     fd, self.fd = self.fd, -1
device/boot (org.osbuild.loopback):                   ^^^^^^^
device/boot (org.osbuild.loopback): AttributeError: 'Loop' object has no attribute 'fd'
Traceback (most recent call last):
  File "/usr/bin/osbuild", line 33, in <module>
    sys.exit(load_entry_point('osbuild==99', 'console_scripts', 'osbuild')())
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/main_cli.py", line 169, in osbuild_cli
    r = manifest.build(
        ^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/pipeline.py", line 468, in build
    res = pl.run(store, monitor, libdir, stage_timeout)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/pipeline.py", line 372, in run
    results = self.build_stages(store,
              ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/pipeline.py", line 345, in build_stages
    r = stage.run(tree,
        ^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/pipeline.py", line 218, in run
    devices[name] = devmgr.open(dev)
                    ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/devices.py", line 92, in open
    res = client.call("open", args)
          ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/host.py", line 348, in call
    ret, _ = self.call_with_fds(method, args)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/host.py", line 384, in call_with_fds
    raise error
osbuild.host.RemoteError: FileNotFoundError: [Errno 2] No such file or directory: 'loop1'
  File "/usr/lib/python3.12/site-packages/osbuild/host.py", line 268, in serve
    reply, reply_fds = self._handle_message(msg, fds)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/host.py", line 301, in _handle_message
    ret, fds = self.dispatch(name, args, fds)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/devices.py", line 127, in dispatch
    r = self.open(args["dev"],
        ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/osbuild/devices/org.osbuild.loopback", line 104, in open
    raise error from None
  File "/usr/lib/osbuild/devices/org.osbuild.loopback", line 101, in open
    self.lo = self.make_loop(self.fd, start, size, lock)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/osbuild/devices/org.osbuild.loopback", line 81, in make_loop
    lo = self.ctl.loop_for_fd(fd, lock=lock,
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/loop.py", line 649, in loop_for_fd
    lo = Loop(self.get_unbound())
         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/loop.py", line 128, in __init__
    self.fd = os.open(self.devname, os.O_RDWR, dir_fd=dir_fd)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
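
A possible workaround to try, purely a guess based on the "extra initial setup of loop devices" theory above (the device count is arbitrary; major number 7 is the standard value for loop block devices):

sudo modprobe loop
for i in $(seq 0 7); do
    [ -e "/dev/loop$i" ] || sudo mknod "/dev/loop$i" b 7 "$i"
done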

Add instructions for reproducing issues without BIB

Seeing #183 and #168 (and this comment in particular), I think it would be a great troubleshooting first step for issues relating to image building. Not sure where we would put this, developer docs or the issue template (both?), but I can imagine we'll have a lot of these kinds of situations where it's not clear if an issue comes from the way bib is building the image or from the container itself. Instructions for taking bib out of the equation and "manually" making a simple disk image would save a lot of time.

Does not fail when building an amd64 image on an arm64 bootc-image-builder container

So I encountered this unfortunate issue when trying to build an amd64 image.

If you forget to pass in --platform when building an image (e.g. you build an arm64 image, but then pass --target-arch amd64), you end up successfully building the image, but it does not boot / work.
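
Before the failing example below, a quick way to check what architecture an image was actually pushed for (a manual troubleshooting step, not something bib does today) is:

skopeo inspect docker://quay.io/podai/test | jq -r .Architecture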

For example:

sudo podman run \
    --rm \
    -it \
    --privileged \
    --pull=newer \
    --security-opt label=type:unconfined_t \
    -v $(pwd)/config.json:/config.json \
    -v $(pwd)/output:/output \
    quay.io/centos-bootc/bootc-image-builder:latest \
     --target-arch amd64 \
    --type qcow2 \
    --config /config.json \
    quay.io/podai/test

WITHOUT passing in --platform (e.g. you build on a Mac M1 machine).

You'll be able to build it successfully, but it won't be the correct arch / a runnable OS:

Generating manifest-qcow2.json ... WARNING: target-arch is experimental and needs an installed 'qemu-user' package
DONE
Building manifest-qcow2.json
source/org.osbuild.skopeo (org.osbuild.skopeo): Getting image source signatures
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:cc1cfbc35d21a69586464ad522b3f90a8eae9e0df3fb013b000f948754b3e8c9
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:89240ed41e1aefa1bd79784dbcc5ebec485906a62d763ed9eb2a2388091851de
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:bd11baf0428ff124d556fc75de722c48e24a83e22c785cd1cb91618ac7b71635
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:1c58a26e3d089b09421b5a9a1e6f0b59e42240fddfb5fbe20b4af6468092326e
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:0f30c3b22a8ac16bb84fddf66ededc4cc7ebe6aedf8637be0c99aead946af86c
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:afb028a8634b3fbaa11a502a48cc88ceb19f5f40ccab1b8e89299f2a3a00460c
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:bdea91dcfd51f0ecbbe2329698625e53c562033b4aee4bca399bcbd9b66dd4cf
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:faffc1ec197c58213e8f1b9ff9db2b44ed0f9e8a0fcb1a1a9a031774349406af
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:f7912e763f28c9e97fcea4768344c5d95cbbe0979af4194f894d318c743e0c73
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:7683129025f0511f1b5c8317e5c5167e319ebd1f4e7286f78a871a6a62c93cdd
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:0b8ad46e310682381e3de061cc78820c9be9d719a6241226aa325f625935b153
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:05ecfa566c1a637e78bef4ebdd5a91940e48b26efb9ae5eed9533fcd66dcf84b
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:390c54a8d39c36e05e164d79dc80c0b357cf822aaf2bbc4ec98354e5b0c9f0f9
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:d2f011136d56693d05fbd5bf216cf01b3a5503d7571c123f2d69f878f42e553d
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:9f8c1e0a4d3fb2b223daa32ec2d51e18dec77e379f8dabf402098573a9d0a7a3
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:9b6680056f29f0b258d601f84b5ec4ab796db74b69e58d82fcd5aa3b377696dd
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:792d79671e79882f8371e1ccaa0b16ea42269f9dd9e7a159ed76a8dfaf67ed11
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:4be755155a52a9a1e489d3063d2669f0de432d678ed177330a7d7954143d58a7
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:6ef256db60f3e1fb23a7e9c0cfe3df08da9885ba5e10958a7ce7b6402a1fba00
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:3ae6ad00804e6a207aed104b41e57aa029e0592abb820b3b985755a43a6a3486
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:712ae22fde39bc4d7838792f210d88345f59778c9e094582b17bafd1597111a2
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:b386885e94258b445215b95606ca74b8dfa8009fdb01d0a01d1e532dc52a086c
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:ca6de86920a067ab432dd1b1144aeeb74d20714184c8dee1f39ed25b14f000f6
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:83b2b4ffa7e74d0bbed9760274079d5ee8f3d782d33398e2adc53afbe2995269
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:7f827d24de9945b53182e9ff49763b845d99959002ecc39850faa75687511e16
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:020771e53d0f803e30ec523c3c6594d12ee9f89b0e8863d75e1e133b6577b712
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:eee3d0a27a1d13fd4750ace1044d6063c0df28f2d86a0c0c9b5f5dd3d5071ee0
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:e392634275ca309075c0ee56879d41f3ab188d726756e2444c3898ab39b092e6
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:556e61be1b0a721e3799e9d6770c393ae9fcdd9522c58a43bdf532a7d98bf53d
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:eafe043baa8406cfc55b2326d80cb606ace9a0f43ac7fd030c5c48c87654e3dc
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:fd9590fa5d86ddde016e057cec36e79e1c61395448205516a81b96a40509c5e2
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:2d2335c7ac42e695d8e6fc709849fc95af5fc3a6a91dc6396932a4f3498d8618
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:e43ae6ecde822340c0f805465297049689733a6f626939dde4c1e0c4c7cde770
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:509a00807f5984db7db06f347db8e6256da2e984c0aac7830ebe9f88554236e7
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:aee21b1a67690291d756c04d9da9230081bb2f353190e4aa006a3f5b66a4a37d
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:68314966c799e4adf672bac314727a5b698dbc98be074c55316fece20af97c7d
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:c4c935299266a8392f331da0b4301f139f06bc10fc11e49206069c0bd028fd58
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:8a087118482096bfd5cb43e136af0f2d23309b4d373a335bd36c5555be0319eb
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:9325b8690ccd20932f9557fa46854c272df7a474584b148ef5bf5072ab580b4b
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:2ad373f991881d1145f5d1fc70d5d65ca1c080f4363605b1c35c505c9df1a937
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:f4fcadef541ffa0818d0f67489596c518451fc298b5b1fdad32507ef9d37ea01
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:215a838f3f49d612c1b8c5564ce0f769c1649e98767d17e5bd27549a4f2c169e
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:bf653315f19b593865cc1ca0cee08502314eedb86c568dc97091f11edcf79f0f
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:4147e14cd262c9cace1188109a5bfc69b1b42d6bc061cc31ce93a850c77540e3
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:a8c28fa64337b88c26452801326c6a00696c9de92bdfaac87e7be001231848a6
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:2b37d21d4f55d18a2ddc4a1a2a914614ae9da31ed8e40c1babeaead99872f128
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:5e926a45f69c693f454d0420909333e3e94dae39f8cfff9d1bdbc832c40c85f9
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:0bc5636a34d7ffdc34d5b9200f362584fb8a42f59e8e7cd11c8f90452733a387
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:58969a64325ac25325512ed248551ddee95c463dd88da2b4eabd6c59fd011b97
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:46984dda1441749e8325958a71ec5eb0ba09eaff5391545df3a18defaf04c234
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:c4e06a7024e56859be2e55815f3b603e68c3c6374500783ce5bc0cf30d115f95
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:851ef3033f6a779dc87754b5389c00e976e6b1453d959ad202026a84933395fa
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:ee53af9d6833fce3d1e6d2519545e94ada1f7f25f7da11252843fac82de7606a
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:e8c315c69032de5baa815e2cb7103b019f33423d1d106bdf6949778805c903b8
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:c18fe0ff553933a4e65010b6033f1418bd3b55fc839e7669f3240d340e122a7c
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:f2620d4ffb537bfc3e5798ea09db13399b081c5cb57636ce4ba8df0e9f8ccb7a
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:1811f1f84958883d069b264cfdb2dced1893ab158e773f4ea14db32f1e6eee61
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:206de99bd1b7e026d046b000d007523b4030087ecaf7bd3e7f8c16f515308f04
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:1eabd09f2cf0f60f31bf6095a4b966ed5237f50b0cd430be30758a34d7c89701
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:d994d5141ca3f19a4d0a92f4539d1f76bcb1126fc549d8e999c09167eef5a273
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:dc2aaef4dd15e0f9380b414c7106637891791217d6e45af6e5cdd4409d028fbb
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:13538192df80d6d7cda6c456cd0f70c334e15f04e864cbbd04f90cd64cfe4e28
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:c5f8420a52da2492826fa00499bc43603d25881f214f108854136270f8cc4418
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:1bfa15fe39db8e81bbba428967fa49228c11c921be5e7d59ac4133ed5f321360
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:ad312c5c40ccd18a3c639cc139211f3c4284e568b69b2e748027cee057986fe0
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:bd9ddc54bea929a22b334e73e026d4136e5b73f5cc29942896c72e4ece69b13d
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:9159f08bafdeea4f0d4cd66a150943f657fbe66bb31135884b7ceb64693d7972
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:f9381365f8ae1abc21883449ce55750df76d4a249d64eb6df533da08b5380f61
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:5ea1fffbbc9a4b630779fcda396f3d0aa80d1d44ed9c59a452d65d4c15ad5eeb
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying blob sha256:0e15f6e67fa016ae81e328d101ab5dab71e38aaec1bf9b876b0faa5e95585a2c
source/org.osbuild.skopeo (org.osbuild.skopeo): Copying config sha256:87adcc49554632fb01c8a00339c3e3c5be58e6a3cbaa105c306ff7c261e45e46
source/org.osbuild.skopeo (org.osbuild.skopeo): Writing manifest to image destination
Pipeline build: 98ce2119ab2aaeecf9383da75b636ff122e308c23da2882aa050e261e011a07e
Build
  root: <host>
  runner: org.osbuild.fedora38 (org.osbuild.fedora38)
org.osbuild.container-deploy: be99b0a8287653452c30bc180e58801cc8199abd0b601e752e0335badbd0976e {
  "exclude": [
    "/sysroot"
  ]
}
time="2024-02-15T22:16:00Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Getting image source signatures
Copying blob sha256:89240ed41e1aefa1bd79784dbcc5ebec485906a62d763ed9eb2a2388091851de
Copying blob sha256:0f30c3b22a8ac16bb84fddf66ededc4cc7ebe6aedf8637be0c99aead946af86c
Copying blob sha256:bd11baf0428ff124d556fc75de722c48e24a83e22c785cd1cb91618ac7b71635
Copying blob sha256:1c58a26e3d089b09421b5a9a1e6f0b59e42240fddfb5fbe20b4af6468092326e
Copying blob sha256:afb028a8634b3fbaa11a502a48cc88ceb19f5f40ccab1b8e89299f2a3a00460c
Copying blob sha256:cc1cfbc35d21a69586464ad522b3f90a8eae9e0df3fb013b000f948754b3e8c9
Copying blob sha256:bdea91dcfd51f0ecbbe2329698625e53c562033b4aee4bca399bcbd9b66dd4cf
Copying blob sha256:faffc1ec197c58213e8f1b9ff9db2b44ed0f9e8a0fcb1a1a9a031774349406af
Copying blob sha256:f7912e763f28c9e97fcea4768344c5d95cbbe0979af4194f894d318c743e0c73
Copying blob sha256:7683129025f0511f1b5c8317e5c5167e319ebd1f4e7286f78a871a6a62c93cdd
Copying blob sha256:0b8ad46e310682381e3de061cc78820c9be9d719a6241226aa325f625935b153
Copying blob sha256:05ecfa566c1a637e78bef4ebdd5a91940e48b26efb9ae5eed9533fcd66dcf84b
Copying blob sha256:390c54a8d39c36e05e164d79dc80c0b357cf822aaf2bbc4ec98354e5b0c9f0f9
Copying blob sha256:d2f011136d56693d05fbd5bf216cf01b3a5503d7571c123f2d69f878f42e553d
Copying blob sha256:9f8c1e0a4d3fb2b223daa32ec2d51e18dec77e379f8dabf402098573a9d0a7a3
Copying blob sha256:9b6680056f29f0b258d601f84b5ec4ab796db74b69e58d82fcd5aa3b377696dd
Copying blob sha256:792d79671e79882f8371e1ccaa0b16ea42269f9dd9e7a159ed76a8dfaf67ed11
Copying blob sha256:4be755155a52a9a1e489d3063d2669f0de432d678ed177330a7d7954143d58a7
Copying blob sha256:6ef256db60f3e1fb23a7e9c0cfe3df08da9885ba5e10958a7ce7b6402a1fba00
Copying blob sha256:3ae6ad00804e6a207aed104b41e57aa029e0592abb820b3b985755a43a6a3486
Copying blob sha256:712ae22fde39bc4d7838792f210d88345f59778c9e094582b17bafd1597111a2
Copying blob sha256:b386885e94258b445215b95606ca74b8dfa8009fdb01d0a01d1e532dc52a086c
Copying blob sha256:ca6de86920a067ab432dd1b1144aeeb74d20714184c8dee1f39ed25b14f000f6
Copying blob sha256:83b2b4ffa7e74d0bbed9760274079d5ee8f3d782d33398e2adc53afbe2995269
Copying blob sha256:7f827d24de9945b53182e9ff49763b845d99959002ecc39850faa75687511e16
Copying blob sha256:020771e53d0f803e30ec523c3c6594d12ee9f89b0e8863d75e1e133b6577b712
Copying blob sha256:eee3d0a27a1d13fd4750ace1044d6063c0df28f2d86a0c0c9b5f5dd3d5071ee0
Copying blob sha256:e392634275ca309075c0ee56879d41f3ab188d726756e2444c3898ab39b092e6
Copying blob sha256:556e61be1b0a721e3799e9d6770c393ae9fcdd9522c58a43bdf532a7d98bf53d
Copying blob sha256:eafe043baa8406cfc55b2326d80cb606ace9a0f43ac7fd030c5c48c87654e3dc
Copying blob sha256:fd9590fa5d86ddde016e057cec36e79e1c61395448205516a81b96a40509c5e2
Copying blob sha256:2d2335c7ac42e695d8e6fc709849fc95af5fc3a6a91dc6396932a4f3498d8618
Copying blob sha256:e43ae6ecde822340c0f805465297049689733a6f626939dde4c1e0c4c7cde770
Copying blob sha256:509a00807f5984db7db06f347db8e6256da2e984c0aac7830ebe9f88554236e7
Copying blob sha256:aee21b1a67690291d756c04d9da9230081bb2f353190e4aa006a3f5b66a4a37d
Copying blob sha256:68314966c799e4adf672bac314727a5b698dbc98be074c55316fece20af97c7d
Copying blob sha256:c4c935299266a8392f331da0b4301f139f06bc10fc11e49206069c0bd028fd58
Copying blob sha256:8a087118482096bfd5cb43e136af0f2d23309b4d373a335bd36c5555be0319eb
Copying blob sha256:9325b8690ccd20932f9557fa46854c272df7a474584b148ef5bf5072ab580b4b
Copying blob sha256:2ad373f991881d1145f5d1fc70d5d65ca1c080f4363605b1c35c505c9df1a937
Copying blob sha256:f4fcadef541ffa0818d0f67489596c518451fc298b5b1fdad32507ef9d37ea01
Copying blob sha256:215a838f3f49d612c1b8c5564ce0f769c1649e98767d17e5bd27549a4f2c169e
Copying blob sha256:bf653315f19b593865cc1ca0cee08502314eedb86c568dc97091f11edcf79f0f
Copying blob sha256:4147e14cd262c9cace1188109a5bfc69b1b42d6bc061cc31ce93a850c77540e3
Copying blob sha256:a8c28fa64337b88c26452801326c6a00696c9de92bdfaac87e7be001231848a6
Copying blob sha256:2b37d21d4f55d18a2ddc4a1a2a914614ae9da31ed8e40c1babeaead99872f128
Copying blob sha256:5e926a45f69c693f454d0420909333e3e94dae39f8cfff9d1bdbc832c40c85f9
Copying blob sha256:0bc5636a34d7ffdc34d5b9200f362584fb8a42f59e8e7cd11c8f90452733a387
Copying blob sha256:58969a64325ac25325512ed248551ddee95c463dd88da2b4eabd6c59fd011b97
Copying blob sha256:46984dda1441749e8325958a71ec5eb0ba09eaff5391545df3a18defaf04c234
Copying blob sha256:c4e06a7024e56859be2e55815f3b603e68c3c6374500783ce5bc0cf30d115f95
Copying blob sha256:851ef3033f6a779dc87754b5389c00e976e6b1453d959ad202026a84933395fa
Copying blob sha256:ee53af9d6833fce3d1e6d2519545e94ada1f7f25f7da11252843fac82de7606a
Copying blob sha256:e8c315c69032de5baa815e2cb7103b019f33423d1d106bdf6949778805c903b8
Copying blob sha256:c18fe0ff553933a4e65010b6033f1418bd3b55fc839e7669f3240d340e122a7c
Copying blob sha256:f2620d4ffb537bfc3e5798ea09db13399b081c5cb57636ce4ba8df0e9f8ccb7a
Copying blob sha256:1811f1f84958883d069b264cfdb2dced1893ab158e773f4ea14db32f1e6eee61
Copying blob sha256:206de99bd1b7e026d046b000d007523b4030087ecaf7bd3e7f8c16f515308f04
Copying blob sha256:1eabd09f2cf0f60f31bf6095a4b966ed5237f50b0cd430be30758a34d7c89701
Copying blob sha256:d994d5141ca3f19a4d0a92f4539d1f76bcb1126fc549d8e999c09167eef5a273
Copying blob sha256:dc2aaef4dd15e0f9380b414c7106637891791217d6e45af6e5cdd4409d028fbb
Copying blob sha256:13538192df80d6d7cda6c456cd0f70c334e15f04e864cbbd04f90cd64cfe4e28
Copying blob sha256:c5f8420a52da2492826fa00499bc43603d25881f214f108854136270f8cc4418
Copying blob sha256:1bfa15fe39db8e81bbba428967fa49228c11c921be5e7d59ac4133ed5f321360
Copying blob sha256:ad312c5c40ccd18a3c639cc139211f3c4284e568b69b2e748027cee057986fe0
Copying blob sha256:bd9ddc54bea929a22b334e73e026d4136e5b73f5cc29942896c72e4ece69b13d
Copying blob sha256:9159f08bafdeea4f0d4cd66a150943f657fbe66bb31135884b7ceb64693d7972
Copying blob sha256:f9381365f8ae1abc21883449ce55750df76d4a249d64eb6df533da08b5380f61
Copying blob sha256:5ea1fffbbc9a4b630779fcda396f3d0aa80d1d44ed9c59a452d65d4c15ad5eeb
Copying blob sha256:0e15f6e67fa016ae81e328d101ab5dab71e38aaec1bf9b876b0faa5e95585a2c
Copying config sha256:87adcc49554632fb01c8a00339c3e3c5be58e6a3cbaa105c306ff7c261e45e46
Writing manifest to image destination
87adcc49554632fb01c8a00339c3e3c5be58e6a3cbaa105c306ff7c261e45e46
time="2024-02-15T22:16:19Z" level=error msg="Unable to write image event: \"write unixgram @15608->/run/systemd/journal/socket: sendmsg: no such file or directory\""
Untagged: docker.io/library/tmp-container-deploy-05333577510572:latest
Deleted: 87adcc49554632fb01c8a00339c3e3c5be58e6a3cbaa105c306ff7c261e45e46
time="2024-02-15T22:16:23Z" level=error msg="Unable to write image event: \"write unixgram @86880->/run/systemd/journal/socket: sendmsg: no such file or directory\""
time="2024-02-15T22:16:23Z" level=error msg="Unable to write image event: \"write unixgram @86880->/run/systemd/journal/socket: sendmsg: no such file or directory\""

⏱  Duration: 23s
org.osbuild.selinux: 98ce2119ab2aaeecf9383da75b636ff122e308c23da2882aa050e261e011a07e {
  "file_contexts": "etc/selinux/targeted/contexts/files/file_contexts",
  "labels": {
    "/usr/bin/mount": "system_u:object_r:install_exec_t:s0",
    "/usr/bin/ostree": "system_u:object_r:install_exec_t:s0",
    "/usr/bin/umount": "system_u:object_r:install_exec_t:s0"
  }
}

⏱  Duration: 6s
Pipeline ostree-deployment: e30356ab398264342a503cc55fc9921805cbf4c060c9d215d08a51db0a8c7158
Build
  root: 98ce2119ab2aaeecf9383da75b636ff122e308c23da2882aa050e261e011a07e
  runner: org.osbuild.linux (org.osbuild.linux)
org.osbuild.ostree.init-fs: b9693cdd59c678c7d46f78491afb5cd5fdc934f5d45afe5ccd90dd0658b46c5d {}
ostree admin init-fs --modern /run/osbuild/tree --sysroot=/run/osbuild/tree

⏱  Duration: 0s
org.osbuild.ostree.os-init: 14ae9b64cb76af5eddd298ac78dbc764d96a2b7111c3d294d817951c0656ca2b {
  "osname": "default"
}
ostree admin os-init default --sysroot=/run/osbuild/tree

⏱  Duration: 0s
org.osbuild.mkdir: 7a6c7d048a619e4fad337a1740d5c873417943fc44246cd51a290a0977df2390 {
  "paths": [
    {
      "path": "/boot/efi",
      "mode": 448
    }
  ]
}

⏱  Duration: 0s
org.osbuild.ostree.deploy.container: ffd896ffe2c2996e4d5da711c7b861b1c7a6cb00456343db40ba26421072d61c {
  "osname": "default",
  "kernel_opts": [
    "rw",
    "console=tty0",
    "console=ttyS0"
  ],
  "target_imgref": "ostree-unverified-registry:quay.io/podai/test:latest",
  "rootfs": {
    "label": "root"
  },
  "mounts": [
    "/boot",
    "/boot/efi"
  ]
}
ostree container image deploy --imgref=ostree-unverified-image:dir:/tmp/tmptdq9hbal/image --stateroot=default --target-imgref=ostree-unverified-registry:quay.io/podai/test:latest --karg=rw --karg=console=tty0 --karg=console=ttyS0 --karg=root=LABEL=root --sysroot=/run/osbuild/tree
Image contains non-ostree compatible file paths: run: 4

⏱  Duration: 15s
org.osbuild.ostree.config: de5ef050203e9572bb9b3e3ec0f1e5e0abf8d6958a86346bfb3f8a19f57ef9e6 {
  "repo": "/ostree/repo",
  "config": {
    "sysroot": {
      "readonly": true,
      "bootloader": "none"
    }
  }
}
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): Deployment root at 'ostree/deploy/default/deploy/868f720bd363a9992e355f2dd470a2a0ac2cc6460dee4bf6d16f853a7776f670.0'
ostree config set sysroot.bootloader none --repo=/run/osbuild/tree/ostree/repo
ostree config set sysroot.readonly true --repo=/run/osbuild/tree/ostree/repo
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): mounting /store/stage/uuid-8dc32694b20a45ffa73e9d1d34385c4f/data/tree/ostree/deploy/default/deploy/868f720bd363a9992e355f2dd470a2a0ac2cc6460dee4bf6d16f853a7776f670.0 -> /store/stage/uuid-8dc32694b20a45ffa73e9d1d34385c4f/data/tree
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-8dc32694b20a45ffa73e9d1d34385c4f/data/tree/sysroot unmounted
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-8dc32694b20a45ffa73e9d1d34385c4f/data/tree/var unmounted
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-8dc32694b20a45ffa73e9d1d34385c4f/data/tree/boot unmounted
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-8dc32694b20a45ffa73e9d1d34385c4f/data/tree unmounted
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): extra unmount /store/stage/uuid-8dc32694b20a45ffa73e9d1d34385c4f/data/tree
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-8dc32694b20a45ffa73e9d1d34385c4f/data/tree unmounted

⏱  Duration: 0s
org.osbuild.fstab: 707ee57cd4761cd8ecb4b3f063b443ce7ca9dd5f98b449f6ae8c7559a0b494d4 {
  "filesystems": [
    {
      "uuid": "eb937573-051c-41f8-baad-e1e91d2c22ff",
      "vfs_type": "ext4",
      "path": "/",
      "options": "defaults",
      "freq": 1,
      "passno": 1
    },
    {
      "uuid": "5156d285-27e4-4ca0-bae9-2ea54949d436",
      "vfs_type": "ext4",
      "path": "/boot",
      "options": "ro",
      "freq": 1,
      "passno": 2
    },
    {
      "uuid": "7B77-95E7",
      "vfs_type": "vfat",
      "path": "/boot/efi",
      "options": "umask=0077,shortname=winnt",
      "passno": 2
    }
  ]
}
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): Deployment root at 'ostree/deploy/default/deploy/868f720bd363a9992e355f2dd470a2a0ac2cc6460dee4bf6d16f853a7776f670.0'
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): mounting /store/stage/uuid-8dc32694b20a45ffa73e9d1d34385c4f/data/tree/ostree/deploy/default/deploy/868f720bd363a9992e355f2dd470a2a0ac2cc6460dee4bf6d16f853a7776f670.0 -> /store/stage/uuid-8dc32694b20a45ffa73e9d1d34385c4f/data/tree
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-8dc32694b20a45ffa73e9d1d34385c4f/data/tree/sysroot unmounted
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-8dc32694b20a45ffa73e9d1d34385c4f/data/tree/var unmounted
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-8dc32694b20a45ffa73e9d1d34385c4f/data/tree/boot unmounted
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-8dc32694b20a45ffa73e9d1d34385c4f/data/tree unmounted
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): extra unmount /store/stage/uuid-8dc32694b20a45ffa73e9d1d34385c4f/data/tree
mount/ostree-ostree/1/1/0 (org.osbuild.ostree.deployment): umount: /store/stage/uuid-8dc32694b20a45ffa73e9d1d34385c4f/data/tree unmounted

⏱  Duration: 0s
org.osbuild.ostree.selinux: e30356ab398264342a503cc55fc9921805cbf4c060c9d215d08a51db0a8c7158 {
  "deployment": {
    "osname": "default",
    "ref": "ostree/1/1/0"
  }
}

⏱  Duration: 0s
Pipeline image: a78ea5e05c83c7539f116d2184bb9f27729e21524e44279a717ce6d4bda786a6
Build
  root: 98ce2119ab2aaeecf9383da75b636ff122e308c23da2882aa050e261e011a07e
  runner: org.osbuild.linux (org.osbuild.linux)
org.osbuild.truncate: 3e11d8e4cb2f2b412815539afc48b253a62b932ba51be9f92b873921d7706f11 {
  "filename": "disk.img",
  "size": "10737418240"
}

⏱  Duration: 0s
org.osbuild.sfdisk: 095498d1713d60a94c6f262228678c2df7d0c85448a642d559b25e023d30894c {
  "label": "gpt",
  "uuid": "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
  "partitions": [
    {
      "bootable": true,
      "size": 2048,
      "start": 2048,
      "type": "21686148-6449-6E6F-744E-656564454649",
      "uuid": "FAC7F1FB-3E8D-4137-A512-961DE09A5549"
    },
    {
      "size": 1026048,
      "start": 4096,
      "type": "C12A7328-F81F-11D2-BA4B-00A0C93EC93B",
      "uuid": "68B2905B-DF3E-4FB3-80FA-49D1E773AA33"
    },
    {
      "size": 2097152,
      "start": 1030144,
      "type": "0FC63DAF-8483-4772-8E79-3D69D8477DE4",
      "uuid": "CB07C243-BC44-4717-853E-28852021225B"
    },
    {
      "size": 17844191,
      "start": 3127296,
      "type": "0FC63DAF-8483-4772-8E79-3D69D8477DE4",
      "uuid": "6264D520-3FB9-423F-8AB8-7A0A8E3D3562"
    }
  ]
}
device/device (org.osbuild.loopback): loop0 acquired (locked: True)
label: gpt
label-id: D209C89E-EA5E-4FBD-B161-B461CCE297E0
start="2048", size="2048", type="21686148-6449-6E6F-744E-656564454649", uuid="FAC7F1FB-3E8D-4137-A512-961DE09A5549", bootable
start="4096", size="1026048", type="C12A7328-F81F-11D2-BA4B-00A0C93EC93B", uuid="68B2905B-DF3E-4FB3-80FA-49D1E773AA33"
start="1030144", size="2097152", type="0FC63DAF-8483-4772-8E79-3D69D8477DE4", uuid="CB07C243-BC44-4717-853E-28852021225B"
start="3127296", size="17844191", type="0FC63DAF-8483-4772-8E79-3D69D8477DE4", uuid="6264D520-3FB9-423F-8AB8-7A0A8E3D3562"
{
   "partitiontable": {
      "label": "gpt",
      "id": "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
      "device": "/dev/loop0",
      "unit": "sectors",
      "firstlba": 2048,
      "lastlba": 20971486,
      "sectorsize": 512,
      "partitions": [
         {
            "node": "/dev/loop0p1",
            "start": 2048,
            "size": 2048,
            "type": "21686148-6449-6E6F-744E-656564454649",
            "uuid": "FAC7F1FB-3E8D-4137-A512-961DE09A5549"
         },{
            "node": "/dev/loop0p2",
            "start": 4096,
            "size": 1026048,
            "type": "C12A7328-F81F-11D2-BA4B-00A0C93EC93B",
            "uuid": "68B2905B-DF3E-4FB3-80FA-49D1E773AA33"
         },{
            "node": "/dev/loop0p3",
            "start": 1030144,
            "size": 2097152,
            "type": "0FC63DAF-8483-4772-8E79-3D69D8477DE4",
            "uuid": "CB07C243-BC44-4717-853E-28852021225B"
         },{
            "node": "/dev/loop0p4",
            "start": 3127296,
            "size": 17844191,
            "type": "0FC63DAF-8483-4772-8E79-3D69D8477DE4",
            "uuid": "6264D520-3FB9-423F-8AB8-7A0A8E3D3562"
         }
      ]
   }
}

⏱  Duration: 0s
org.osbuild.mkfs.fat: 8430a91d7c09f152e8f70cab3b7e15f80fcd803402266fd6af0ee4ec28d76320 {
  "volid": "7B7795E7"
}
device/device (org.osbuild.loopback): loop0 acquired (locked: True)
mkfs.fat 4.2 (2021-01-31)

⏱  Duration: 0s
org.osbuild.mkfs.ext4: 2eedb2b100929524942f4c8da9df7caa8994d1cea9addd13a40899ccfffb17bb {
  "uuid": "5156d285-27e4-4ca0-bae9-2ea54949d436",
  "label": "boot"
}
device/device (org.osbuild.loopback): loop0 acquired (locked: True)
mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: 5156d285-27e4-4ca0-bae9-2ea54949d436
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done


⏱  Duration: 0s
org.osbuild.mkfs.ext4: 1eeb005f5e61e1fdfd355d7fccb63aa4f5246dda7fcb385a8211d087f546339d {
  "uuid": "eb937573-051c-41f8-baad-e1e91d2c22ff",
  "label": "root"
}
device/device (org.osbuild.loopback): loop0 acquired (locked: True)
mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done
Creating filesystem with 2230523 4k blocks and 558624 inodes
Filesystem UUID: eb937573-051c-41f8-baad-e1e91d2c22ff
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done


⏱  Duration: 0s
org.osbuild.copy: 602deeecf3956e17985499c532b2dc7762633a0293630541653df1a2b658a2fc {
  "paths": [
    {
      "from": "input://root-tree/",
      "to": "mount://-/"
    }
  ]
}
device/- (org.osbuild.loopback): loop0 acquired (locked: False)
device/boot (org.osbuild.loopback): loop1 acquired (locked: False)
device/boot-efi (org.osbuild.loopback): loop2 acquired (locked: False)
mount/- (org.osbuild.ext4): mounting /dev/loop0 -> /store/tmp/buildroot-tmp-mdu0hb7m/mounts/
mount/boot (org.osbuild.ext4): mounting /dev/loop1 -> /store/tmp/buildroot-tmp-mdu0hb7m/mounts/boot
mount/boot-efi (org.osbuild.fat): mounting /dev/loop2 -> /store/tmp/buildroot-tmp-mdu0hb7m/mounts/boot/efi
copying '/run/osbuild/inputs/root-tree/.' -> '/run/osbuild/mounts/.'
mount/boot-efi (org.osbuild.fat): umount: /store/tmp/buildroot-tmp-mdu0hb7m/mounts/boot/efi unmounted
mount/boot (org.osbuild.ext4): umount: /store/tmp/buildroot-tmp-mdu0hb7m/mounts/boot unmounted
mount/- (org.osbuild.ext4): umount: /store/tmp/buildroot-tmp-mdu0hb7m/mounts/ unmounted

⏱  Duration: 7s
org.osbuild.bootupd: a78ea5e05c83c7539f116d2184bb9f27729e21524e44279a717ce6d4bda786a6 {
  "deployment": {
    "osname": "default",
    "ref": "ostree/1/1/0"
  },
  "static-configs": true,
  "bios": {
    "device": "disk"
  }
}
device/- (org.osbuild.loopback): loop0 acquired (locked: False)
device/boot (org.osbuild.loopback): loop1 acquired (locked: False)
device/boot-efi (org.osbuild.loopback): loop2 acquired (locked: False)
device/disk (org.osbuild.loopback): loop3 acquired (locked: False)
mount/- (org.osbuild.ext4): mounting /dev/loop0 -> /store/tmp/buildroot-tmp-hqnud32x/mounts/
mount/boot (org.osbuild.ext4): mounting /dev/loop1 -> /store/tmp/buildroot-tmp-hqnud32x/mounts/boot
mount/boot-efi (org.osbuild.fat): mounting /dev/loop2 -> /store/tmp/buildroot-tmp-hqnud32x/mounts/boot/efi
Installed: grub.cfg
Installed: "fedora/grub.cfg"
mount/boot-efi (org.osbuild.fat): umount: /store/tmp/buildroot-tmp-hqnud32x/mounts/boot/efi unmounted
mount/boot (org.osbuild.ext4): umount: /store/tmp/buildroot-tmp-hqnud32x/mounts/boot unmounted
mount/- (org.osbuild.ext4): umount: /store/tmp/buildroot-tmp-hqnud32x/mounts/ unmounted

⏱  Duration: 0s
Pipeline qcow2: cb712e1c8526f90f1fa084d6745e87c44bf88cf6d3eb5ceefadf682ebd248b66
Build
  root: <host>
  runner: org.osbuild.fedora38 (org.osbuild.fedora38)
org.osbuild.qemu: cb712e1c8526f90f1fa084d6745e87c44bf88cf6d3eb5ceefadf682ebd248b66 {
  "filename": "disk.qcow2",
  "format": {
    "type": "qcow2",
    "compat": ""
  }
}

⏱  Duration: 38s
build:    	98ce2119ab2aaeecf9383da75b636ff122e308c23da2882aa050e261e011a07e
ostree-deployment:	e30356ab398264342a503cc55fc9921805cbf4c060c9d215d08a51db0a8c7158
image:    	a78ea5e05c83c7539f116d2184bb9f27729e21524e44279a717ce6d4bda786a6
qcow2:    	cb712e1c8526f90f1fa084d6745e87c44bf88cf6d3eb5ceefadf682ebd248b66
Build complete!
Results saved in
/output/

ISO install fails to boot

The integration tests currently fail during the test/test_build.py::test_iso_installs[quay.io/centos-bootc/fedora-bootc:eln,anaconda-iso] step. When running it locally and booting into the installed image, I see:

0a7a7c r/w with ordered data mode. Quota mode: none.

Generating "/run/initramfs/rdsosreport.txt"


Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report.


Press Enter for maintenance
(or press Control-D to continue): 

Screenshot from 2024-03-05 09-50-38

When logging in I see:

:/root# systemctl status --no-pager -l ostree-prepare-root
× ostree-prepare-root.service - OSTree Prepare OS/
     Loaded: loaded (/usr/lib/systemd/system/ostree-prepare-root.service; enabled; preset: enabled)
     Active: failed (Result: exit-code) since Tue 2024-03-05 08:48:16 UTC; 5min ago
       Docs: man:ostree(1)
    Process: 442 ExecStart=/usr/lib/ostree/ostree-prepare-root /sysroot (code=exited, status=1/FAILURE)
   Main PID: 442 (code=exited, status=1/FAILURE)
        CPU: 1ms

Mar 05 08:48:16 localhost systemd[1]: Starting ostree-prepare-root.service - OSTree Prepare OS/...
Mar 05 08:48:16 localhost ostree-prepare-root[442]: Loading usr/lib/ostree/prepare-root.conf
Mar 05 08:48:16 localhost ostree-prepare-root[442]: Resolved OSTree target to: /sysroot/ostree/deploy/default/deploy/fdb3ad563b6c2955fdb32d3380119a2f35abb19458d2dec50f38b9bfb686350d.0
Mar 05 08:48:16 localhost ostree-prepare-root[442]: sysroot.readonly configuration value: 1 (fs writable: 1)
Mar 05 08:48:16 localhost ostree-prepare-root[442]: ostree-prepare-root: composefs: failed to mount: No such file or directory
Mar 05 08:48:16 localhost systemd[1]: ostree-prepare-root.service: Main process exited, code=exited, status=1/FAILURE
Mar 05 08:48:16 localhost systemd[1]: ostree-prepare-root.service: Failed with result 'exit-code'.
Mar 05 08:48:16 localhost systemd[1]: Failed to start ostree-prepare-root.service - OSTree Prepare OS/.
Mar 05 08:48:16 localhost systemd[1]: ostree-prepare-root.service: Triggering OnFailure= dependencies.

To reproduce run:

sudo pytest -s -vv test/test_build.py::test_iso_installs[quay.io/centos-bootc/fedora-bootc:eln,anaconda-iso]

and use the generated image (e.g. found by looking at ps afx|grep qemu) to test-boot it.
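
The failing unit points at composefs, and the journal shows prepare-root.conf being loaded from the image, so a first debugging step (a sketch, not a confirmed fix) is to check whether the image enables composefs in that config:

# Print the ostree prepare-root configuration shipped inside the container image;
# its [composefs] section controls whether ostree-prepare-root expects a
# composefs image at boot.
podman run --rm quay.io/centos-bootc/fedora-bootc:eln \
    cat /usr/lib/ostree/prepare-root.conf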

using bootc install-to-filesystem

This relates to #4

  • We had some general agreement to support bootc install-to-filesystem; this will help long term with things like containers/bootc#128 (a rough sketch of the invocation follows this list)
  • bootc install-to-filesystem should also grow support for being provided the base container image externally (e.g. cached in osbuild); we know this is needed for offline ISO installs too. This ties in with the above for the lifecycle-bound app/infra containers
  • We can't drop the osbuild/ostree stages because not every case will use bootc in the near future
  • Agreement that for the ISO/installer case any customization (embedded kickstarts, but also which installer) would likely live external to the container (blueprint or equivalent)
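
A rough sketch of what the deploy step could become, assuming bootc's current install to-filesystem CLI and reusing the mount point and kernel arguments visible in the build log above (illustrative only, not the agreed implementation):

# Hedged sketch: let bootc install onto the filesystem osbuild has already
# mounted, instead of calling "ostree container image deploy" directly.
bootc install to-filesystem \
    --karg=rw --karg=console=tty0 --karg=console=ttyS0 \
    /run/osbuild/mounts

The open question from the second bullet is how to hand bootc an image that osbuild has already pulled, rather than having it fetch from the registry again.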

Wrong imageref in a built qcow2

I built and booted my own qcow2:

sudo podman run \
    --rm \
    -it \
    --privileged \
    --pull=newer \
    --security-opt label=type:unconfined_t \
    -v $(pwd)/output:/output \
    quay.io/centos-bootc/bootc-image-builder:latest \
    --type qcow2 \
    ghcr.io/ondrejbudai/fedora-bootc:39

However, bootc update doesn't work because the image ref doesn't contain the tag:

$ sudo bootc update
ERROR Upgrading: Pulling: Creating importer: Failed to invoke skopeo proxy method OpenImage: remote error: reading manifest latest in ghcr.io/ondrejbudai/fedora-bootc: manifest unknown
$ sudo bootc status
note: The format of this API is not yet stable
apiVersion: org.containers.bootc/v1alpha1
kind: BootcHost
metadata:
  name: host
spec:
  image:
    image: ghcr.io/ondrejbudai/fedora-bootc
    transport: registry
    signature: !ostreeRemote ''
status:
  staged: null
  booted:
    image:
      image:
        image: ghcr.io/ondrejbudai/fedora-bootc <-- LOOK HERE
        transport: registry
        signature: !ostreeRemote ''
      version: 39.20231220.0
      timestamp: null
      imageDigest: sha256:bbb35d247de551d80e9828967114e0bf60de787f6650ee8019b499f0a5772fbe
    incompatible: false
    pinned: false
    ostree:
      checksum: 10d041accf6827df7091684aedbe38fe0685ca6191c768b7612c58eb581c4f03
      deploySerial: 0
  rollback: null
  isContainer: false

This issue can be fixed by running sudo bootc switch --no-signature-verification ghcr.io/ondrejbudai/fedora-bootc:39:

$ sudo bootc update
No changes in ostree-unverified-registry:ghcr.io/ondrejbudai/fedora-bootc:39 => sha256:bbb35d247de551d80e9828967114e0bf60de787f6650ee8019b499f0a5772fbe
Staged update present, not changed.
$ sudo bootc status
note: The format of this API is not yet stable
apiVersion: org.containers.bootc/v1alpha1
kind: BootcHost
metadata:
  name: host
spec:
  image:
    image: ghcr.io/ondrejbudai/fedora-bootc:39
    transport: registry
    signature: insecure
status:
  booted:
    image:
      image:
        image: ghcr.io/ondrejbudai/fedora-bootc:39 <-- LOOK HERE
        transport: registry
        signature: insecure
      version: 39.20231220.0
      timestamp: null
      imageDigest: sha256:bbb35d247de551d80e9828967114e0bf60de787f6650ee8019b499f0a5772fbe
    incompatible: false
    pinned: false
    ostree:
      checksum: 3ca9bf074ad6d8580742e7ee022f4ed8977a234bc74f81d33ee4358c2931aa6c
      deploySerial: 0

I suppose this gets fixed with #18, but I wonder if we can have a short-term fix.
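
A possible short-term fix (a sketch; compare the deploy command in the build log above) is to make sure the tag survives into the --target-imgref that bib passes to the deploy stage:

# The tag must be part of the target image reference, otherwise the deployed
# host resolves the image without a tag on update. The dir: path below is a
# placeholder for wherever osbuild cached the image.
ostree container image deploy \
    --imgref=ostree-unverified-image:dir:/path/to/cached/image \
    --stateroot=default \
    --target-imgref=ostree-unverified-registry:ghcr.io/ondrejbudai/fedora-bootc:39 \
    --sysroot=/run/osbuild/tree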

Installer for Universal Blue Images

For full spec, please visit: https://hackmd.io/@ublue-os/ByEpOVeYp
Website: https://universal-blue.org/
Github Organization: https://github.com/ublue-os

Purpose

Our current ISO installer does not sufficiently support our growing user base. We are hoping to utilize bootc-image-builder as a way to build our ISO images using consistent tooling. We would also like to fully integrate it into our CI process.

MVP

Taken directly from our spec, here is what we feel we need to have for a successful MVP:

Offline ISO

A fully offline Fedora experience that does not require internet access to install.

Use cases

  • Bazzite - it targets mobile/handheld devices and that team prefers a fully offline installer for reliability reasons
  • Anyone who runs a lab - users have asked for an offline experience so that they can make custom images to deploy labs in places with limited internet. "Burn 50 usb sticks at the home base, ship to this conference"

Features

  • Build ISO using bootc-image-builder
  • Ability to directly install a signed OCI image without the user ever needing to use an unsigned image.
    • This is one of our biggest issues with the current implementation of our existing ISO installer
  • The installer experience should be as close as possible to upstream Fedora.
    • Adding the user account on first boot.
    • Adding the user account to specific groups
    • Have the same partition layout as upstream (currently ext4 is hardcoded rather than Btrfs, which is what upstream uses)
    • Configure Full Disk Encryption
    • Allow for networking configuration and choice of hostname.
  • The ability to use kickstart files in a way a linux sysadmin expects.
  • Ability to build it in a Github action
  • Ability to build via Podman or Podman Desktop
  • Ability to generate cloud images with the same tooling.
  • No deviation from standard Fedora - we don't want "the ublue way"

ability to create raw images for vfkit

I'd like to be able to create a raw image for me to use with vfkit.

Right now I have to do the following:

#!/bin/sh

set -exu

BOOTC_IMG_PATH=$1
qemu-img convert -f qcow2 -O raw ${BOOTC_IMG_PATH} bootc-overlay.img

#cp -c bootc-overlay.img snapshopt.img

./out/vfkit --cpus 2 --memory 2048 \
    --bootloader efi,variable-store=./efi-variable-store,create \
    --device virtio-blk,path=bootc-overlay.img \
    --device virtio-serial,stdio \
    --device virtio-net,nat,mac=72:20:43:d4:38:62 \
    --device virtio-rng \
    --device virtio-input,keyboard \
    --device virtio-input,pointing \
    --device virtio-gpu,width=1920,height=1080 \
    --gui

It would be nice to be able to run the raw image directly rather than having to convert the qcow2 after.
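
For reference, a sketch of the desired flow if bib exposes a raw output type (the invocation mirrors the qcow2/ami examples elsewhere on this page; "<your bootc image>" is a placeholder and the image/disk.raw output path is taken from the output-directory listing further down):

sudo podman run \
    --rm \
    -it \
    --privileged \
    --pull=newer \
    --security-opt label=type:unconfined_t \
    -v $(pwd)/output:/output \
    quay.io/centos-bootc/bootc-image-builder:latest \
    --type raw \
    <your bootc image>
# then point vfkit's virtio-blk device at output/image/disk.raw,
# skipping the qemu-img convert step entirely.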

Unable to install from generated ISO

Hey Folks!

Really appreciate all the work going into this project! I was testing the new ISO feature implemented today and I am running into issues booting from the ISO it creates. I have tried 2 different images from the https://github.com/ublue-os project.

The first one is Bluefin, which is our developer-focused build. The other one is Ucore, which adds some modifications on top of traditional CoreOS. Neither seems to boot.

Here are the commands I am using to both generate and test the ISO:

Building ISO

sudo podman run \
    --rm \
    -it \
    --privileged \
    --pull=newer \
    --security-opt label=type:unconfined_t \
    -v $(pwd)/output:/output \
    quay.io/centos-bootc/bootc-image-builder:latest \
    --type iso \
    ghcr.io/ublue-os/bluefin:latest

Testing ISO

qemu-img create -f qcow2 disk.qcow2 50G
sudo qemu-system-x86_64 \
    -enable-kvm \
    -cpu host \
    -m 8G \
    -device virtio-net-pci,netdev=n0,mac="FE:0B:6E:22:3D:13" \
    -netdev user,id=n0,net=10.0.2.0/24,hostfwd=tcp::2222-:22 \
    -bios /usr/share/edk2/ovmf/OVMF.inteltdx.fd \
    -cdrom ./output/bootiso/install.iso \
    -drive file=./disk.qcow2

Here are some screenshots of where the issue happens during installation:

(screenshots attached to the issue)

To collect Anaconda logs, I selected no at the prompt, went into the debug shell, and then used tmux to switch to tty2.
I have attached the output I'm receiving during build time along with my Anaconda logs in the logs.tar.gz file.

logs.tar.gz

Let me know if there is anything else you want me to test!

Thanks,

Noel

Requires running under rootful podman

A paper cut we hit today is that Podman Desktop defaults to rootless, and bib doesn't work with that because we need loopback devices. The core problem is that we need to write Linux filesystems, and the important ones like XFS/ext4 generally want to be written only by code in the Linux kernel.

Running the Linux kernel means either reusing the host kernel (privileged) or running a VM. But in the podman machine case we're already in a VM, which gets us into nested virt, and on Mac at least that involves full emulation, which usually mostly works but isn't considered a production scenario and definitely hits weird random bugs.

Since we're already running this container with --privileged, my inclination is to reuse, behind the scenes, the fact that podman machine uses FCOS today and that the core user has passwordless sudo enabled, and use that to re-execute ourselves with real root privileges. Yes, this would not really be "rootless", but I personally don't care about that and I don't think users would in general either.
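
In the meantime, a workaround sketch (assuming the default machine name, podman-machine-default) is to target the machine's rootful connection explicitly:

# List the connections podman machine created; the one ending in "-root" is rootful.
podman system connection list
# Run bib over the rootful connection so loopback devices work inside the VM.
# Note: the -v path resolves inside the podman machine VM, not on the Mac host.
podman --connection podman-machine-default-root run \
    --rm -it --privileged \
    --security-opt label=type:unconfined_t \
    -v $(pwd)/output:/output \
    quay.io/centos-bootc/bootc-image-builder:latest \
    --type qcow2 \
    quay.io/centos-bootc/fedora-bootc:eln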

fails to deploy when running `podman create` as part of container image build

Using the default bootc-image-builder command, I'm unable to build a disk image from a container created with a Containerfile like this:

FROM quay.io/centos-bootc/fedora-bootc:eln

RUN echo "root:root" | chpasswd
RUN podman create --name hello --restart=always -p 8080:8080 cdrage/helloworld

I'm getting the error:

ostree container image deploy --imgref=ostree-unverified-image:dir:/tmp/tmpo2itqhqp/image --stateroot=default --target-imgref=ostree-remote-registry::quay.io/podai/test --karg=rw --karg=console=tty0 --karg=console=ttyS0 --karg=root=LABEL=root --sysroot=/run/osbuild/tree
error: Performing deployment: Importing: Parsing layer blob sha256:fddea510d1419a5023604a90d8b1a832d777a836e4344590dfd0b999355233b7: error: ostree-tar: Processing deferred hardlink var/lib/containers/storage/overlay/4f2c33229347e298a539cd86ec91e881b6e445b45f0bc46448e86e4bd0103e7e/diff/usr/bin/perl: Failed to find object: No such file or directory: var: Processing tar via ostree: Failed to commit tar: ExitStatus(unix_wait_status(256))
Traceback (most recent call last):
  File "/run/osbuild/bin/org.osbuild.ostree.deploy.container", line 140, in <module>
    r = main(stage_args["tree"],
        ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/run/osbuild/bin/org.osbuild.ostree.deploy.container", line 135, in main
    ostree_container_deploy(tree, inputs, osname, target_imgref, kopts)
  File "/run/osbuild/bin/org.osbuild.ostree.deploy.container", line 109, in ostree_container_deploy
    ostree.cli("container", "image", "deploy",
  File "/run/osbuild/lib/osbuild/util/ostree.py", line 202, in cli
    return subprocess.run(["ostree"] + args,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.12/subprocess.py", line 571, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ostree', 'container', 'image', 'deploy', '--imgref=ostree-unverified-image:dir:/tmp/tmpo2itqhqp/image', '--stateroot=default', '--target-imgref=ostree-remote-registry::quay.io/podai/test', '--karg=rw', '--karg=console=tty0', '--karg=console=ttyS0', '--karg=root=LABEL=root', '--sysroot=/run/osbuild/tree']' returned non-zero exit status 1.

⏱  Duration: 23s

Even after adding RUN ostree container commit to the Containerfile, it does not make a difference.

Relationship with https://github.com/osbuild/osbuild/pull/1402

If I am understanding things correctly, https://github.com/osbuild/images is basically Go versions of mpp files?

So the work on this branch is basically redoing https://github.com/osbuild/osbuild/pull/1402/files#diff-39b52d4acfa02644f133a682a54270e15f5a8d69de4b1ff0482af589418da8b0 in Go, is that correct?

Now when I look at both of these I think an overall problem is that it's hardcoding distribution and "flavor" specific things in the pipeline code. We have the basic things like UEFI vendor hardcoding in the MPP version...that one will get fixed with the switch to bootupd.

But then the next problem is things like hardcoding ext4. I think in the general case the container payload needs to be the source of truth for this stuff. This is how bootc install does it - the container ships its own config, so e.g. one might have a base RHEL image using xfs by default, but automotive can override that to ext4 (because they need fsverity) just by changing the container.
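
For illustration, the kind of override described above is a config file the container ships itself; a hedged sketch, assuming bootc's current install-config layout (the drop-in file name is hypothetical):

# /usr/lib/bootc/install/10-automotive.toml  (hypothetical drop-in name)
[install.filesystem.root]
type = "ext4"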

That said, a problem with this model (that was also brought up a bit in rhinstaller/anaconda#5298 ) is that it requires fully pulling and "mounting" the container image before we can make the target filesystem.

One thing I think that could help is standardized metadata keys on the container image. What format that should be is obviously a huge bikeshed topic.

Building a different Arch than the host machine

One of the things that came up during the hackfest was the need to build an x86_64 disk image from an ARM (Mac) platform.

podman build --arch=x86
podman run --arch=x86 ...
podman run --privileged bootc-image-builder --arch=x86

Is what we need. Then have bootc-image-builder run the internal skopeo or podman commands with the corresponding --arch options.

If you are doing --arch, you should also do --variant.
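
Under the hood that means passing the architecture (and variant) down to the container tooling; a sketch of the skopeo side, assuming the standard override flags:

# Fetch the aarch64 variant of the image while running on an x86_64 host;
# --override-variant matters for arches such as arm (e.g. v8).
skopeo --override-arch=arm64 --override-variant=v8 copy \
    docker://quay.io/centos-bootc/fedora-bootc:eln \
    containers-storage:quay.io/centos-bootc/fedora-bootc:eln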

Output directory confusing

Example:

        qcow2: 'qcow2/disk.qcow2',
        ami: 'image/disk.raw',
        raw: 'image/disk.raw',
        iso: 'bootiso/disk.iso',

Is where all the locations are.

But it should be:

        qcow2: 'qcow2/disk.qcow2',
        ami: 'ami/disk.raw',
        raw: 'raw/disk.raw',
        iso: 'iso/disk.iso',

Which makes more sense / predictable.

Can't configure root user

Having a root user configuration breaks the build because, when an existing user is modified, osbuild runs mkhomedir_helper and the following error occurs:

[sss_cache] [confdb_init] (0x0010): Unable to open config database [/var/lib/sss/db/config.ldb]
Could not open available domains
[sss_cache] [confdb_init] (0x0010): Unable to open config database [/var/lib/sss/db/config.ldb]
Could not open available domains
Creating mailbox file: No such file or directory
[sss_cache] [confdb_init] (0x0010): Unable to open config database [/var/lib/sss/db/config.ldb]
Could not open available domains
[sss_cache] [confdb_init] (0x0010): Unable to open config database [/var/lib/sss/db/config.ldb]
Could not open available domains
Traceback (most recent call last):
  File "/run/osbuild/bin/org.osbuild.users", line 199, in <module>
    r = main(args["tree"], args["options"])
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/run/osbuild/bin/org.osbuild.users", line 182, in main
    ensure_homedir(tree, name, home)
  File "/run/osbuild/bin/org.osbuild.users", line 157, in ensure_homedir
    subprocess.run(["chroot", root, "mkhomedir_helper", name], check=True)
  File "/usr/lib64/python3.12/subprocess.py", line 571, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['chroot', '/run/osbuild/tree', 'mkhomedir_helper', 'root']' returned non-zero exit status 6.

Reproduce by running bib with the following config:

{
  "blueprint": {
    "customizations": {
      "user": [
        {
          "name": "root",
          "key": "whatever"
        }
      ]
    }
  }
}
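
Until that is fixed, one way to avoid the failing org.osbuild.users stage is to configure root in the container image itself, as the Containerfiles elsewhere on this page do, rather than via the blueprint:

FROM quay.io/centos-bootc/fedora-bootc:eln
# Configure root at container build time (password here; an SSH key works too),
# so the blueprint never has to modify the existing root user.
RUN echo "root:root" | chpasswd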

Dynamic root partition size

I'm trying to build an image bigger than 2GB with bootc-image-builder.
The fixed size is specified here: https://github.com/osbuild/bootc-image-builder/blob/aaa2f5b/bib/cmd/bootc-image-builder/partition_tables.go#L53

The combined size of an image's layers can be found like so:

crane manifest quay.io/fedora-ostree-desktops/silverblue:39 | jq '([.layers[].size] | add)'

or

skopeo inspect docker://quay.io/fedora-ostree-desktops/silverblue:39 | jq '([.LayersData[].Size] | add)'

and then converted from bytes into Gibibytes.

Using the manifest data could be an automatic way to figure out the disk size, otherwise a manual override flag could be helpful.
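
A sketch of that calculation (note that the registry reports compressed layer sizes, so a real heuristic would add headroom on top of the sum):

# Sum the compressed layer sizes and print the result in GiB.
skopeo inspect docker://quay.io/fedora-ostree-desktops/silverblue:39 \
  | jq '[.LayersData[].Size] | add' \
  | awk '{ printf "%.2f GiB\n", $1 / 1024 / 1024 / 1024 }'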

Let me know your thoughts, cheers!

No console output on applehv

Containerfile:

FROM quay.io/centos-bootc/fedora-bootc:eln
RUN echo "root:root" | chpasswd

Then build using bootc-image-builder with ami selected.

vfkit command:

vfkit --cpus 2 --memory 2048 \
    --bootloader efi,variable-store=./efi-variable-store,create \
    --device virtio-blk,path=/Users/cdrage/bootc/image/disk.raw \
    --device virtio-serial,stdio \
    --device virtio-net,nat,mac=72:20:43:d4:38:62 \
    --device virtio-rng \
    --device virtio-input,keyboard \
    --device virtio-input,pointing \
    --device virtio-gpu,width=1920,height=1080 \
    --gui

Output when trying to boot:

Screenshot 2024-01-10 at 2 38 59 PM

It'll boot, but shows the "EFI_RNG_PROTOCOL unavailable" message that you have to press Enter on.

setting default systemd target in the image isn't honored

Running an image that has RUN systemctl set-default graphical.target doesn't produce a qcow2 that actually has graphical.target as the default target - I'm not sure where the problem is (bib or bootc), so forgive me if this is the wrong repo; I can move it where appropriate.
The kiosk-demo example is a reproducer for me (https://github.com/CentOS/centos-bootc-layered/pull/36/files#diff-a61a6be8ca56bc4a3808309e0191b9e8f8bbbf1b755d4cb7837afdba44857775) and just boots into command-line mode.
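
A minimal reproducer in the Containerfile style used elsewhere on this page, with the check to run inside the booted guest:

FROM quay.io/centos-bootc/fedora-bootc:eln
RUN systemctl set-default graphical.target
# After building a qcow2 with bootc-image-builder and booting it, check:
#   systemctl get-default    # expected: graphical.target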

`cp: failed to preserve ownership for` in Podman Desktop on Mac

@teg reported that the build failed for him when running it on an M1 Mac:

⏱  Duration: 32s
cp: failed to preserve ownership for '/output/qcow2/./disk.qcow2': Operation not permitted
cp: failed to preserve ownership for '/output/qcow2/.': Operation not permitted
Traceback (most recent call last):
  File "/usr/bin/osbuild", line 33, in <module>
    sys.exit(load_entry_point('osbuild==101', 'console_scripts', 'osbuild')())
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/main_cli.py", line 179, in osbuild_cli
    export(pid, output_directory, object_store, manifest)
  File "/usr/lib/python3.12/site-packages/osbuild/main_cli.py", line 56, in export
    obj.export(dest)
  File "/usr/lib/python3.12/site-packages/osbuild/objectstore.py", line 249, in export
    subprocess.run(
  File "/usr/lib64/python3.12/subprocess.py", line 571, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['cp', '--reflink=auto', '-a', '/store/stage/uuid-c6ab67ad31ed40abae8922411cadb303/data/tree/.', '/output/qcow2']' returned non-zero exit status 1.
running osbuild failed: exit status 1
teg@MacBook-Pro osbuild-deploy-container % 

Can't build CS9 aarch64 AMI image on x86_64 machine

Build CS9 aarch64 AMI image on x86_64 machine with command
sudo podman run --rm -it --privileged --pull=newer --security-opt label=type:unconfined_t --env AWS_ACCESS_KEY_ID=zzz --env AWS_SECRET_ACCESS_KEY=zzz quay.io/centos-bootc/bootc-image-builder:latest --type ami --aws-ami-name bootc-bib-centos-stream-9-aarch64-v7vu --aws-bucket yyy --aws-region us-east-1 quay.io/rhel-edge/centos-bootc-test:v7vu

I got the following error:

ostree container image deploy --imgref=ostree-unverified-image:dir:/tmp/tmpekdnamgc/image --stateroot=default --target-imgref=ostree-unverified-registry:quay.io/rhel-edge/centos-bootc-test:v7vu --karg=rw --karg=console=tty0 --karg=console=ttyS0 --karg=root=LABEL=root --sysroot=/run/osbuild/tree
error: Performing deployment: Full sync: During fsfreeze-thaw: ioctl(FIFREEZE): Inappropriate ioctl for device
Traceback (most recent call last):
  File "/run/osbuild/bin/org.osbuild.ostree.deploy.container", line 140, in <module>
    r = main(stage_args["tree"],
  File "/run/osbuild/bin/org.osbuild.ostree.deploy.container", line 135, in main
    ostree_container_deploy(tree, inputs, osname, target_imgref, kopts)
  File "/run/osbuild/bin/org.osbuild.ostree.deploy.container", line 109, in ostree_container_deploy
    ostree.cli("container", "image", "deploy",
  File "/run/osbuild/lib/osbuild/util/ostree.py", line 204, in cli
    return subprocess.run(["ostree"] + args,
  File "/usr/lib64/python3.9/subprocess.py", line 528, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ostree', 'container', 'image', 'deploy', '--imgref=ostree-unverified-image:dir:/tmp/tmpekdnamgc/image', '--stateroot=default', '--target-imgref=ostree-unverified-registry:quay.io/rhel-edge/centos-bootc-test:v7vu', '--karg=rw', '--karg=console=tty0', '--karg=console=ttyS0', '--karg=root=LABEL=root', '--sysroot=/run/osbuild/tree']' returned non-zero exit status 1.

`Kickstart insufficient` when running with libvirt

@noelmiller reported in Discord that when the ISO is run using virt-manager, the installation fails on the Kickstart insufficient error.

(screenshot attached to the issue)

Interestingly enough, this error does not happen when he used qemu directly (the installation failed later in this case, see #114).

Note that Noel used ghcr.io/ublue-os/bluefin:latest. I'm wondering if the same issue also exists with bootc images. Also, I wonder what the difference between libvirt and qemu is here. We should definitely test whether ISOs with bootc images inside work under libvirt.

AttributeError: 'Loop' object has no attribute 'fd'

⏱  Duration: 0s
org.osbuild.sfdisk: 0e500a8f0b9764bce41572e55cb310e0fa5243a46256070442b7cfc17fd671cf {
  "label": "gpt",
  "uuid": "D209C89E-EA5E-4FBD-B161-B461CCE297E0",
  "partitions": [
    {
      "bootable": true,
      "size": 2048,
      "start": 2048,
      "type": "21686148-6449-6E6F-744E-656564454649",
      "uuid": "FAC7F1FB-3E8D-4137-A512-961DE09A5549"
    },
    {
      "size": 1026048,
      "start": 4096,
      "type": "C12A7328-F81F-11D2-BA4B-00A0C93EC93B",
      "uuid": "68B2905B-DF3E-4FB3-80FA-49D1E773AA33"
    },
    {
      "size": 2097152,
      "start": 1030144,
      "type": "0FC63DAF-8483-4772-8E79-3D69D8477DE4",
      "uuid": "CB07C243-BC44-4717-853E-28852021225B"
    },
    {
      "size": 17844191,
      "start": 3127296,
      "type": "0FC63DAF-8483-4772-8E79-3D69D8477DE4",
      "uuid": "6264D520-3FB9-423F-8AB8-7A0A8E3D3562"
    }
  ]
}
device/device (org.osbuild.loopback): Exception ignored in: <function Loop.__del__ at 0x7f9f58f163e0>
device/device (org.osbuild.loopback): Traceback (most recent call last):
device/device (org.osbuild.loopback):   File "/usr/lib/python3.12/site-packages/osbuild/loop.py", line 137, in __del__
device/device (org.osbuild.loopback):     self.close()
device/device (org.osbuild.loopback):   File "/usr/lib/python3.12/site-packages/osbuild/loop.py", line 144, in close
device/device (org.osbuild.loopback):     fd, self.fd = self.fd, -1
device/device (org.osbuild.loopback):                   ^^^^^^^
device/device (org.osbuild.loopback): AttributeError: 'Loop' object has no attribute 'fd'
Traceback (most recent call last):
  File "/usr/bin/osbuild", line 33, in <module>
    sys.exit(load_entry_point('osbuild==99', 'console_scripts', 'osbuild')())
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/main_cli.py", line 169, in osbuild_cli
    r = manifest.build(
        ^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/pipeline.py", line 468, in build
    res = pl.run(store, monitor, libdir, stage_timeout)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/pipeline.py", line 372, in run
    results = self.build_stages(store,
              ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/pipeline.py", line 345, in build_stages
    r = stage.run(tree,
        ^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/pipeline.py", line 218, in run
    devices[name] = devmgr.open(dev)
                    ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/devices.py", line 92, in open
    res = client.call("open", args)
          ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/host.py", line 348, in call
    ret, _ = self.call_with_fds(method, args)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/host.py", line 384, in call_with_fds
    raise error
osbuild.host.RemoteError: FileNotFoundError: [Errno 2] No such file or directory: 'loop0'
   File "/usr/lib/python3.12/site-packages/osbuild/host.py", line 268, in serve
    reply, reply_fds = self._handle_message(msg, fds)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/host.py", line 301, in _handle_message
    ret, fds = self.dispatch(name, args, fds)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/devices.py", line 127, in dispatch
    r = self.open(args["dev"],
        ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/osbuild/devices/org.osbuild.loopback", line 104, in open
    raise error from None
  File "/usr/lib/osbuild/devices/org.osbuild.loopback", line 101, in open
    self.lo = self.make_loop(self.fd, start, size, lock)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/osbuild/devices/org.osbuild.loopback", line 81, in make_loop
    lo = self.ctl.loop_for_fd(fd, lock=lock,
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/loop.py", line 649, in loop_for_fd
    lo = Loop(self.get_unbound())
         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/loop.py", line 128, in __init__
    self.fd = os.open(self.devname, os.O_RDWR, dir_fd=dir_fd)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

running osbuild failed: exit status 1
➜  ~ 

Doesn't work using the latest image

odc (osbuild-deploy-container) fails on Fedora 38 and CentOS Stream 9 images

Trying with the image in $title doesn't yield a working qcow2 - it just hangs in the emergency shell, where you can't even log in because the root account is locked. No user is added either; it just doesn't get to a working prompt.

Unable to use built ISO & testing it out on UTM

My Containerfile:

FROM quay.io/centos-bootc/fedora-bootc:eln
RUN echo "root:root" | chpasswd
RUN echo "hello" > hello.txt

I then proceeded to build the .iso using the standard bootc-image-builder command.

The .iso installed successfully within UTM! Afterwards, however, the OS got stuck on this screen:

Screenshot 2024-01-11 at 2 15 46 PM

Trying to diagnose now and get more logs. Opened this issue up for brevity.

root filesystem label is container_file_t when it should be root_t

Migrating this back from CentOS/centos-bootc#184

Because we want to be able to add a proper Closes in this repository.


One thing I notice here...and I'm not yet certain if it's a bib regression or not, but looking at the disk image before it's booted:

$ guestfish --ro -a disk.qcow2                                                                                                                                                                                                                   
><fs> run
list-filesystems
><fs> list-filesystems
/dev/sda1: unknown
/dev/sda2: vfat
/dev/sda3: ext4
/dev/sda4: ext4
><fs> mount /dev/sda4 /
><fs> getxattrs /
[0] = {
  attrname: security.selinux
  attrval: system_u:object_r:container_file_t:s0\x00
}
><fs> 

That's just really broken, we shouldn't end up with a physical disk image root labeled container_file_t! It looks like actually all of the labels up to the deployment root are similarly broken (they should be something like root_t or usr_t).

However once we get to the deployment things are fine:

><fs> getxattrs /ostree/deploy/default/deploy/3ef1290eacdb05e50127ed5a920e264f228dae248addb10d98224a2e04918c2c.0/etc/fstab
[0] = {
  attrname: security.selinux
  attrval: system_u:object_r:etc_t:s0\x00
}
><fs> getxattrs /ostree/deploy/default/deploy/3ef1290eacdb05e50127ed5a920e264f228dae248addb10d98224a2e04918c2c.0/etc/passwd 
[0] = {
  attrname: security.selinux
  attrval: system_u:object_r:passwd_file_t:s0\x00
}
><fs> 

And it's specifically that /ostree/deploy/default/backing is also container_file_t, and the overlayfs picks up that context and that breaks everything.
