Comments (8)
Yeah, spot on.
This context manager around the bootc call runs containers_storage_source() which puts the container storage at /run/osbuild/containers/storage in the stage's root (the build root) and yields the full container source for bootc to use.
If bootc can be configured to look at /run/osbuild/containers, or if we also mount/symlink that to /var/lib/containers, then bootc will find the whole container store there and it should work as expected.
This should do the trick. We had to name it something other than /var/lib/containers/storage inside the build root because there were some conflicts, if I recall correctly, and /run/osbuild/containers was also more consistent with how some of the other sources worked. I might be wrong about the /var/lib/containers/storage bit inside the build root, in which case we should be able to symlink it (but things might get a bit weird).
from bootc-image-builder.
@cgwalters That would be the correct place to add it. Note that I do not know if this will conflict with any of the other work related to container storage/copying, cc @kingsleyzissou.
Off the top of my head I can't think of any reason that would conflict. I vaguely remember we discussed adding /var/lib/containers here when we were having issues with the new mount API, but that was before we realised there was a change in the API.
@ondrejbudai do you have any opinions here?
Does {source,input}/org.osbuild.containers-storage not cover this requirement?
I wouldn't bind mount host resources straight into a build root without thinking about all the implications. Those bind mounts are very limited and very specific files or directories that are mostly necessary for certain binaries to work. We already have mechanisms for providing resources like containers into stages. Those resources used to be strictly network resources (file URLs, container refs, ostree commit URLs) to minimise the interaction between the host and the build process, but we started bending that rule for BIB with the use of the host's container registry.
To get an external resource to reach a stage, two things are required: a source and an input. The source is responsible for fetching a resource into the osbuild store (e.g. sources/org.osbuild.curl downloads a file from a URL, sources/org.osbuild.skopeo pulls a container into an archive). The input is responsible for providing a specific resource to a stage (e.g. inputs/org.osbuild.files hardlinks files from the store to a location accessible by the stage, inputs/org.osbuild.containers does something similar but with some container-specific differences).
For the container registry situation, we bent the rule a bit: instead of pulling something static into the store, we bind mount the host's container storage into the store using a new source called org.osbuild.containers-storage, and then provide a container from that store to a stage with a new input also called org.osbuild.containers-storage. This violates one rule, in that our builds are now tied to the availability of specific containers on the host, but it still uses existing mechanisms and checksums, so as long as the required container (by ID) is available on the host, the build is still reproducible.
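As a rough illustration, the source/input pairing described above might be wired up in a manifest something like this. This is a sketch only: the image ID, image name, and pipeline shape are placeholders, and the exact schema is defined by osbuild itself.

```python
# Sketch of a manifest fragment pairing the containers-storage source
# with a matching input for a stage. Field names approximate the
# osbuild manifest format; the digest and image name are placeholders.
IMAGE_ID = "sha256:" + "0" * 64  # placeholder, not a real image ID

manifest_fragment = {
    "sources": {
        # The source declares which container (by image ID) must be
        # present in the host's container storage for the build to run.
        "org.osbuild.containers-storage": {
            "items": {IMAGE_ID: {}}
        }
    },
    "pipelines": [
        {
            "name": "image",
            "stages": [
                {
                    "type": "org.osbuild.bootc.install-to-filesystem",
                    "inputs": {
                        # The input makes that container from the store
                        # available to the stage at build time.
                        "images": {
                            "type": "org.osbuild.containers-storage",
                            "origin": "org.osbuild.source",
                            "references": {
                                IMAGE_ID: {
                                    # hypothetical image name
                                    "name": "quay.io/example/bootc-base"
                                }
                            },
                        }
                    },
                }
            ],
        }
    ],
}
```

The key point is the pairing: the source makes the build's external dependency explicit and checksummed (by image ID), while the input controls how that resource is exposed to one specific stage.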
I've been floating a new idea for a while, and every time something like this comes along I get more convinced we should implement it: a new type of osbuild module called host-resource (or whatever, naming is hard) that is specifically meant to pass things from a host into a store to make them available to an input for a stage. The advantage is that we can separate the two mechanisms and more easily reason about what a build is doing. Host resources can have a store separate from the store/cache we use for the regular sources. A manifest with a host-resource makes it clear that it is not like other manifests and will require things not available via a URL. The existing sources/org.osbuild.containers-storage would be the first such host-resource.
So for this issue in particular, a few questions:
- By "bound containers" I understand we're talking about application containers that are part of (or bound to) a base image. These are analogous to what we would often call "embedded containers" when building a disk image with included application containers. Is this correct?
- What does bootc need exactly when running bootc install to-filesystem? As I understand it, it needs a container store to find containers to pull into the image? Will it always look at /var/lib/containers or can this store be anywhere?
- Do the specifics of the storage configuration (e.g. driver) matter for bootc?
- Does {source,input}/org.osbuild.containers-storage not cover this scenario?
Does {source,input}/org.osbuild.containers-storage not cover this requirement?
Maybe? I don't fully understand how things are wired up today, is that stage/code used as part of how bib installs the bootc base image itself today? If so and that's going to force an extra copy for these images too, that's not an exciting prospect.
See also e.g. this thread: containers/bootc#719 (comment) which touched on the UX around bib and bound images and pulling.
By "bound containers" I understand we're talking about application containers that are part of (or bound to) a base image. These are analogous to what we would often call "embedded containers" when building a disk image with included application containers. Is this correct?
There's a lot more in containers/bootc#128 but the TL;DR is that the base image only has references, as distinct from physically bound.
What does bootc need exactly when running bootc install to-filesystem? As I understand it, it needs a container store to find containers to pull into the image? Will it always look at /var/lib/containers or can this store be anywhere?
The current logic always looks in /var/lib/containers but we could certainly add support for alternative lookasides.
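If alternative lookasides were added, one existing convention from the containers stack that could serve here is the containers-storage transport's optional storage specifier (documented in containers-transports(5)), which lets a reference name a non-default graph root such as the /run/osbuild/containers/storage location osbuild uses. A purely illustrative sketch; the paths and image name below are hypothetical:

```python
# Build a containers-storage transport reference pointing at a
# non-default storage root, using the "[driver@root+runroot]" specifier
# from containers-transports(5). Paths and image name are examples only.
def storage_ref(image, driver="overlay",
                root="/run/osbuild/containers/storage",
                runroot="/run/containers/storage"):
    return f"containers-storage:[{driver}@{root}+{runroot}]{image}"

ref = storage_ref("quay.io/example/app:latest")
# Tools built on containers/image (skopeo, podman) accept references in
# this form, so a lookaside could plausibly be expressed the same way.
```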
There's a bigger-picture thing here: we should also be considering how this works with Anaconda. Today, for ISOs with embedded containers, we inject them as a dir, but I would like to change them to use an unpacked overlay store for multiple reasons, the biggest being that it would allow running the images at install time, which would be needed (well, very helpful) for Anaconda to use bootc install in that scenario. (In a PXE-style scenario, Anaconda could obviously just fetch the image itself.)
But then if we have the bootc image itself in an unpacked (overlay) store, it argues for doing so for LBIs as well.
Does {source,input}/org.osbuild.containers-storage not cover this requirement?
Maybe? I don't fully understand how things are wired up today, is that stage/code used as part of how bib installs the bootc base image itself today? If so and that's going to force an extra copy for these images too, that's not an exciting prospect.
They are used today for bib, yes. The org.osbuild.bootc.install-to-filesystem stage (which runs bootc install to-filesystem) uses it in the way I described: it reads a container from the "host" storage (which, in the case of osbuild running inside the bib container, is the bib container itself). When the user -v mounts /var/lib/containers/storage into the container, osbuild is effectively reading the real host container storage.
The extra copy you reference isn't an issue in this case. We do copy the container contents during a build, but that's for creating the build root. When bootc install to-filesystem is called, it reads straight from the container storage.
Any other stage (or the bootc install stage itself) that needs a container from the storage can read it in the same way.
See also e.g. this thread: containers/bootc#719 (comment) which touched on the UX around bib and bound images and pulling.
By "bound containers" I understand we're talking about application containers that are part of (or bound to) a base image. These are analogous to what we would often call "embedded containers" when building a disk image with included application containers. Is this correct?
There's a lot more in containers/bootc#128 but the TL;DR is that the base image only has references, as distinct from physically bound.
Right, I think I follow the distinction. As far as building the disk image is concerned, these logically bound images are a lot closer to what we already support in osbuild: we pull a container at build time and store it in the system's container storage, ready to be used when the system boots.
In the traditional case, the references to the images are part of the osbuild manifest, and are originally defined in the user's blueprint. I suppose the difference here is that bootc will take care of everything (finding the container refs in the base container, pulling them from storage, and putting them in the destination disk image's storage) and there will be no mention of these images in the manifest itself.
What does bootc need exactly when running bootc install to-filesystem? As I understand it, it needs a container store to find containers to pull into the image? Will it always look at /var/lib/containers or can this store be anywhere?
The current logic always looks in /var/lib/containers but we could certainly add support for alternative lookasides.
If I understood everything correctly, this might already work with the current implementation, perhaps with some minor changes.
This context manager around the bootc call runs containers_storage_source(), which puts the container storage at /run/osbuild/containers/storage in the stage's root (the build root) and yields the full container source for bootc to use.
If bootc can be configured to look at /run/osbuild/containers, or if we also mount/symlink that to /var/lib/containers, then bootc will find the whole container store there and it should work as expected.
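The symlink variant could look something like this inside the stage's build root. This is a minimal sketch under the assumptions above (containers_storage_source() has already placed the storage under run/osbuild/containers in the build root); the helper function itself is hypothetical, and a bind mount would work equally well:

```python
import os

def link_storage_into_default_path(buildroot):
    """Make the storage that containers_storage_source() places under
    run/osbuild/containers in the build root also visible at
    var/lib/containers, so tools that only look at the default path
    (like bootc's current logic) find it. Hypothetical sketch."""
    target = os.path.join(buildroot, "run/osbuild/containers")
    link = os.path.join(buildroot, "var/lib/containers")
    os.makedirs(os.path.dirname(link), exist_ok=True)
    if not os.path.lexists(link):
        # Relative symlink so it resolves the same way whether the
        # build root is entered via chroot or inspected from outside.
        os.symlink(os.path.relpath(target, os.path.dirname(link)), link)
```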
There's a bigger-picture thing here: we should also be considering how this works with Anaconda. Today, for ISOs with embedded containers, we inject them as a dir, but I would like to change them to use an unpacked overlay store for multiple reasons, the biggest being that it would allow running the images at install time, which would be needed (well, very helpful) for Anaconda to use bootc install in that scenario. (In a PXE-style scenario, Anaconda could obviously just fetch the image itself.) But then if we have the bootc image itself in an unpacked (overlay) store, it argues for doing so for LBIs as well.
If Anaconda can read an unpacked overlay store from the ISO to do its installation, osbuild can make one when it's building an ISO.
@kingsleyzissou please correct me if I'm wrong about the way the containers-storage source and input work.
OK, containers/bootc#737 uses that.