
osbuild / osbuild-composer


An HTTP service for building bootable OS images.

Home Page: https://www.osbuild.org

License: Apache License 2.0

Python 5.53% Go 67.58% Makefile 0.38% Shell 26.23% HCL 0.24% Dockerfile 0.04%

osbuild-composer's Introduction

OSBuild Composer

Operating System Image Composition Services

The composer project is a set of HTTP services for composing operating system images. It builds on the pipeline execution engine of osbuild and defines its own class of images that it supports building.

Multiple APIs are available to access a composer service. This includes support for the lorax-composer API, so composer can serve as a drop-in replacement for lorax-composer.

You can control a composer instance either directly via the provided APIs, or through higher-level user interfaces from external projects. These include, for instance, the Cockpit Module and the composer-cli command-line tool.

Project

Principles

  1. OSBuild Composer shall only allow users to do what generally makes sense.
  2. Blueprints are the policy layer where we decide what to expose to end users.
  3. If a blueprint can be built, it should also boot.
  4. It should be obvious why a blueprint doesn’t build.
  5. The cloud API is never broken.
  6. In the hosted service, OSBuild Composer is an orchestrator of image builds.
  7. On-premises, it should be as easy as possible to run the service and build an image.
  8. OSBuild Composer needs to run on the oldest supported target distribution.

Contributing

Please refer to the developer guide to learn about our workflow, code style and more.

About

Composer is a middleman between the workhorses from osbuild and user interfaces like cockpit-composer, composer-cli, or others. It defines a set of high-level image compositions that it supports building. Builds of these compositions can be requested via the different APIs of Composer, which then translates the requests into pipeline descriptions for osbuild. The pipeline output is then either provided back to the user, or uploaded to a user-specified target.

The following image visualizes the overall architecture of the OSBuild infrastructure and the place that Composer takes:

(architecture overview diagram)

Consult the osbuild-composer(7) man page for an introduction to composer, information on running your own composer instance, and details on the provided infrastructure and services.

Requirements

The requirements for this project are:

  • osbuild >= 26
  • systemd >= 244

At build-time, the following software is required:

  • go >= 1.20
  • python-docutils >= 0.13
  • krb5-devel for fedora/rhel or libkrb5-dev for debian/ubuntu
  • btrfs-progs-devel for fedora/rhel or libbtrfs-dev for debian/ubuntu
  • device-mapper-devel for fedora/rhel or libdevmapper-dev for debian/ubuntu
  • gpgme-devel for fedora/rhel or libgpgme-dev for debian/ubuntu
  • rpmdevtools (only for make push-check)
  • rpmlint (only for make push-check)

Build

The standard go package system is used. Consult the upstream documentation for detailed help. In most situations, the following command is sufficient to build and install from source:

make build

The man-pages require python-docutils and can be built via:

make man

Run Tests

To run our tests locally, just call:

make unit-tests

Before pushing something for a pull request, you should run this check to avoid problems with the required GitHub Actions:

make push-check

Pull request gating

Each pull request against osbuild-composer starts a series of automated tests. Tests run via GitHub Actions and Jenkins. Each push to the pull request launches these tests automatically.

Jenkins only tests pull requests from members of the osbuild organization in GitHub. A member of the osbuild organization must say ok to test in a pull request comment to approve testing. Anyone can ask for testing to run by saying the bot's favorite word, schutzbot, in a pull request comment. Testing will begin shortly after the comment is posted.

Test results in Jenkins are available by clicking the Details link on the right side of the Schutzbot check in the pull request page.

License:

  • Apache-2.0
  • See LICENSE file for details.

osbuild-composer's People

Contributors

7flying, achilleas-k, atodorov, bcl, chloenayon, croissanne, dependabot[bot], diaasami, dvdhrm, gicmo, henrywang, jikortus, jkozol, jordigilh, jrusz, juan-abia, kingsleyzissou, larskarlitski, lavocatt, major, msehnout, ochosi, ondrejbudai, runcom, schutzbot, supakeen, teg, thozza, ygalblum, yih-redhat

osbuild-composer's Issues

Ability to specify image size on creation

Currently, Image Builder generates images that are of fixed size. This is almost never the size that people need, because they want to, for example, install additional software or run software that has larger storage requirements (such as Image Builder itself). In practice, this means people always have to manually resize the generated image to the size they require.

To make this easier, let’s allow specifying the size when creating the image.

Acceptance criteria:

  1. Image size can be set via the compose API (new size key) and in the front end
  2. The image list in the front end shows the image size as well
  3. Images continue fulfilling alignment requirements (Azure wants 1MB, for example: https://docs.microsoft.com/en-us/azure/virtual-machines/linux/create-upload-generic)

Nice to have:

  1. Image size can be set from composer-cli (maybe with --size?) as well.
  2. Research suitable defaults and min / max values for all output formats.

The issue for the UI is osbuild/cockpit-composer#840
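
For illustration, a compose request carrying the proposed key could look roughly like the sketch below, posted to the existing weldr compose endpoint. The size field, its name and its unit are exactly what this issue needs to decide, so treat this as a hypothetical shape rather than the final API (8589934592 is 8 GiB in bytes):

{
  "blueprint_name": "base-image",
  "compose_type": "qcow2",
  "branch": "master",
  "size": 8589934592
}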

Use systemd to spawn osbuild for the worker process

Motivation

The osbuild-worker process currently runs as root, which is far from ideal, especially considering the amount of code we run there (think of all the AWS, Azure, etc. SDKs). What we should do instead is run osbuild-worker as a non-root user (created by systemd) and call osbuild over some IPC mechanism like Unix domain sockets (UDS).

Proposed solution

Create a thin wrapper in /cmd that would run osbuild and create a systemd socket unit in /distribution that would use Accept=yes as defined here. This option makes sure a new instance of osbuild is spawned for each call to the socket. Unix permissions should take care of access control to this socket.

The wrapper will use simple JSON communication over a UDS to run osbuild and return its state and logs. The format of this communication is up for discussion.

Optional follow-up work

This socket should be used in osbuild-worker instead of spawning a new process. Replace the current logic with an IPC call. Once this works, modify the osbuild-worker unit in /distribution so that osbuild-worker runs as a non-root user.
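
A minimal sketch of what that could look like, with hypothetical unit and binary names (with Accept=yes, systemd spawns one instance of a templated service per connection, inetd-style):

# osbuild-wrapper.socket (hypothetical)
[Socket]
ListenStream=/run/osbuild-composer/osbuild.sock
Accept=yes
SocketUser=_osbuild-worker
SocketMode=0600

[Install]
WantedBy=sockets.target

# osbuild-wrapper@.service (hypothetical)
[Service]
ExecStart=/usr/libexec/osbuild-composer/osbuild-wrapper
StandardInput=socket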

RCM API

Introduce a limited API that RCM can use to push composes to osbuild-composer.

Acceptance criteria:

  • Introduce a dedicated TCP socket for this API.
  • Implement a method to push a new compose, specifying the architectures to build for and the repositories to use. It would be ok to silently drop requests for unsupported (non-native) architectures until we have support for more. For the time being, assume the qcow2 output type and the local upload target.
  • Implement a method to get the status of a compose (waiting/running/finished/failed)
  • Make the new API optional (ship the systemd socket unit in a sub-package and gracefully handle the case when it is not installed; this possibly needs further discussion)
  • Add support for the koji upload target, and use that exclusively instead of the local one. This requires changes both to the input of the endpoint pushing a new compose and to the output of the status endpoint in the finished state.
  • Return per-job status from GET /v1/compose/

Optional:

  • Support Kerberos authentication in the same way brew etc. does.
  • Extend the status API to optionally block until the compose is no longer waiting/running.
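
For illustration, the body of such a push request might carry little more than the architectures and repositories described above; the exact field names are part of what this issue should decide, so this is only a hypothetical sketch:

{
  "architectures": ["x86_64"],
  "repositories": [
    { "baseurl": "https://download.fedoraproject.org/pub/fedora/linux/releases/31/Everything/x86_64/os/" }
  ]
}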

Local boot tests

We want to do a minimal test to verify that our images boot. For each output type, select one test case with a blueprint that allows an ssh connection (might require some coordination with other issues for this cycle). Boot the image, connect to it and use systemctl to verify that the image booted correctly.
For the images that can be booted locally, use qemu or nspawn for this (whichever makes sense for the output type). If some images cannot be booted locally, but only in the target environment, document that.

  • qcow2
  • ami
  • vhd
  • openstack
  • vmdk
  • partitioned-disk
  • ext4-filesystem
  • tar
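
As a sketch of how the locally bootable cases could be exercised with qemu (image path, user name and forwarded port are arbitrary placeholders here):

# Boot the image read-only (-snapshot) with user networking and SSH forwarded to the host.
qemu-system-x86_64 -machine accel=kvm -m 2048 -snapshot -nographic \
  -drive file=image.qcow2,format=qcow2,if=virtio \
  -netdev user,id=net0,hostfwd=tcp::2222-:22 \
  -device virtio-net-pci,netdev=net0

# From the host, once the guest is up: check that it reached a healthy state.
ssh -p 2222 admin@localhost systemctl is-system-running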

Add RHEL8.2 support

Add multi-distro support, and default to the host distro. Add support for running on and producing RHEL8.2 images. Support for using the host’s subscription (passing through the client-certificate) is a nice-to-have, but not a requirement. The aim is still parity with lorax.
This does not yet include selecting the distro to build from the UI.

Manually review our code and images for functional equivalence to Lorax

Review (again) the produced images for regressions compared to lorax, and document these. For each functional difference, improve image-info to detect the difference, if possible. The aim is not to produce identical images, just to make sure that we also provide any functionality that lorax provides. The acceptance criteria for this are a bit vague, but the aim is that you should be confident that all issues have been identified, documented and prioritized. A second goal is to be confident in image-info detecting discrepancies, or to document any discrepancies it will not detect. A nice-to-have would be to work towards fixing them, if any.

Upload support in the weldr API

We need to reach parity with lorax-composer's upload support. This proposal extends the current routes to support uploads (including authentication tokens), and passes them (with the compose) to the worker as Target structs. Implementing uploading in the worker is outside the scope of this proposal.

The proposal should be considered complete when the features used by osbuild/cockpit-composer#767 are implemented and tested.

An additional nice-to-have would be a full implementation and test of all the features that lorax-composer supports.

Suggested prioritization, if relevant:

  • aws
  • azure
  • vsphere
  • openstack
  • other supported formats (if any)

dnf-json not in PATH of the systemd unit

@jkozol discovered that after d05673a the systemd unit would not start anymore:

Nov 12 19:29:18 hanada.local osbuild-composer[75894]: panic: exec: "dnf-json": executable file not found in $PATH
Nov 12 19:29:18 hanada.local osbuild-composer[75894]: goroutine 1 [running]:
Nov 12 19:29:18 hanada.local osbuild-composer[75894]: main.main
Nov 12 19:29:18 hanada.local osbuild-composer[75894]:         /home/gicmo/Code/src/osbuild-composer/cmd/osbuild-composer/main.go:43
Nov 12 19:29:18 hanada.local systemd[1]: osbuild-composer.service: Main process exited, code=exited, status=2/INVALIDARGUMENT

We had a look and go uses LookPath(name) to find the executable, but it only looks in PATH and not in the current working directory. Since dnf-json is not in PATH but in /usr/libexec/osbuild-composer, and the env for the systemd unit does not include the latter directory, it won't find dnf-json (cf. cmd: &{Path:dnf-json Args:[dnf-json] Env:[] Dir: ... lookPathErr:0xc0000107a0 ...}).

env of the systemd unit:

Nov 12 19:32:05 hanada.local env[76758]: LANG=en_GB.UTF-8
Nov 12 19:32:05 hanada.local env[76758]: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
Nov 12 19:32:05 hanada.local env[76758]: LISTEN_PID=76758
Nov 12 19:32:05 hanada.local env[76758]: LISTEN_FDS=2
Nov 12 19:32:05 hanada.local env[76758]: LISTEN_FDNAMES=osbuild-composer.socket:osbuild-composer.socket
Nov 12 19:32:05 hanada.local env[76758]: LOGNAME=_osbuild-composer
Nov 12 19:32:05 hanada.local env[76758]: USER=_osbuild-composer
Nov 12 19:32:05 hanada.local env[76758]: INVOCATION_ID=76348a3dd8a946308ee04f56bf21e215
Nov 12 19:32:05 hanada.local env[76758]: JOURNAL_STREAM=9:1066442
Nov 12 19:32:05 hanada.local env[76758]: STATE_DIRECTORY=/var/lib/osbuild-composer

The easiest fix would be to add Environment="PATH=$PATH:/usr/libexec/osbuild-composer" to the systemd unit file.
(That, or @jkozol and I are missing something obvious.)
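
A sketch of that fix as a systemd drop-in; note that systemd does not expand $PATH inside Environment=, so the full default path has to be spelled out explicitly (the drop-in file name is arbitrary):

# /etc/systemd/system/osbuild-composer.service.d/dnf-json-path.conf
[Service]
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/libexec/osbuild-composer"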

Create socket unit for RCM API

Related to: #192

We need to provide a socket unit for the newly created RCM API. It should be possible to distinguish which socket was used to spawn composer by its name:
https://www.freedesktop.org/software/systemd/man/sd_listen_fds.html
https://www.freedesktop.org/software/systemd/man/systemd.socket.html#FileDescriptorName=
https://github.com/coreos/go-systemd/blob/master/activation/listeners.go#L42

It would also be nice to verify that it works, preferably using some integration tests. Personally, I would like to see the Packit testing farm involved, where we could install composer and start it with systemd as in a real use case.
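
A minimal sketch of the name-based dispatch using go-systemd's activation package (the socket unit name and the handler are hypothetical placeholders):

package main

import (
        "fmt"
        "log"
        "net/http"

        "github.com/coreos/go-systemd/activation"
)

func main() {
        // Listeners keyed by the FileDescriptorName= of the socket unit that passed them.
        listeners, err := activation.ListenersWithNames()
        if err != nil {
                log.Fatal(err)
        }
        if ls := listeners["osbuild-rcm.socket"]; len(ls) > 0 {
                mux := http.NewServeMux()
                mux.HandleFunc("/v1/compose", func(w http.ResponseWriter, r *http.Request) {
                        fmt.Fprintln(w, "RCM API placeholder")
                })
                log.Fatal(http.Serve(ls[0], mux))
        }
        log.Fatal("osbuild-rcm.socket was not passed by systemd")
}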

Implement missing /projects commands and tests

This API is used to list “projects” (this was meant to be a different concept, but is just rpm packages now). It is not fully implemented and tested yet.

Get as close as possible to Lorax’ implementation. At the least, listing packages in the UI and all composer-cli projects commands must work. Start by implementing the tests for lorax, so that we know what to aim for in the osbuild implementation.

API documentation

Expand integration testing for the weldr API

Continue to expand coverage of the osbuild-weldr-tests to cover the rest of the API.
First priority will be given to routes used by cockpit-composer, followed by composer-cli.

  • Implement @teg suggestion to pass the open socket descriptor instead of the path
  • Add more error cases to the blueprint tests
  • Add tests for other routes

deleting composes doesn't delete results.json

Using composer-cli compose delete <UUID> deletes all of the files in the output directory, but returns an error:

2020-01-22 18:41:10,486: 05ac067a-b212-4a83-a30c-4325d7a53a3d: unlinkat /var/lib/osbuild-composer/outputs/05ac067a-b212-4a83-a30c-4325d7a53a3d/result.json: permission denied

This leaves that file behind. It is owned by root:root.

I'm using osbuild-composer commit 481c1dd with my fedora-32 patch.

Lorax parity - generate desired pipelines

For each output type:

  1. Add the canonical pipeline to osbuild-composer that corresponds to the empty blueprint. These should be based on the kickstart and image-info of the official images. If there is no official image for this type, use lorax' kickstarts as a reference.

  2. Tweak our blueprint->pipeline translators until they produce the desired pipelines on empty inputs. Use osbuild-pipeline to add tests to verify that the output is as wanted.

  3. Add image-info tests that build the canonical pipelines using osbuild and verify the output image.

  4. Add the (equivalent of) the lorax-composer compliance tests to the osbuild-composer test-suite to verify that our images pass.

In the order of priority:

  • qcow2 (#37)
  • ami (#42)
  • vhd (#44)
  • openstack (#45)
  • ext4-filesystem
  • partitioned-disk
  • tar
  • vmdk
  • live-iso (this is a hard one and not required)

Modular dependency problems on Fedora 31

Jan 24 12:17:07 introdesk.g02.org osbuild-composer[8662]: Modular dependency problems:
Jan 24 12:17:07 introdesk.g02.org osbuild-composer[8662]:  Problem 1: conflicting requests
Jan 24 12:17:07 introdesk.g02.org osbuild-composer[8662]:   - nothing provides module(platform:f31) needed by module eclipse:latest:3120191219031631:d36a98d4-0.x86_64
Jan 24 12:17:07 introdesk.g02.org osbuild-composer[8662]:  Problem 2: conflicting requests
Jan 24 12:17:07 introdesk.g02.org osbuild-composer[8662]:   - nothing provides module(platform:f31) needed by module exa:latest:3120190813195051:22d7e2a5-0.x86_64
Jan 24 12:17:07 introdesk.g02.org osbuild-composer[8662]:  Problem 3: conflicting requests
Jan 24 12:17:07 introdesk.g02.org osbuild-composer[8662]:   - nothing provides module(platform:f31) needed by module gimp:2.10:3120191106095052:f636be4b-0.x86_64
Jan 24 12:17:07 introdesk.g02.org osbuild-composer[8662]:  Problem 4: conflicting requests
Jan 24 12:17:07 introdesk.g02.org osbuild-composer[8662]:   - nothing provides module(platform:f31) needed by module meson:latest:3120191009081836:dc56099c-0.x86_64
Jan 24 12:17:07 introdesk.g02.org osbuild-composer[8662]:  Problem 5: conflicting requests
Jan 24 12:17:07 introdesk.g02.org osbuild-composer[8662]:   - nothing provides module(platform:f31) needed by module ninja:latest:3120190304180949:f636be4b-0.x86_64
Jan 24 12:17:07 introdesk.g02.org osbuild-composer[8662]:  Problem 6: conflicting requests
Jan 24 12:17:07 introdesk.g02.org osbuild-composer[8662]:   - nothing provides module(platform:f31) needed by module nodejs:10:3120190816104510:f636be4b-0.x86_64

I am trying to build an image using the cockpit web front-end. I only get these messages and the image never builds.

Some /compose routes are not implemented

The following routes are not implemented:

  • /compose/cancel
  • /compose/delete
  • /compose/metadata
  • /compose/results
  • /compose/logs
  • /compose/log

The ones in bold are apparently used by cockpit-composer.

Add the ability to run the integration test suite locally

Follow-up to #234 (@msehnout's Vagrantfile) and #249 (@ondrejbudai's Vagrantfile and friends)

I discussed #249 with @teg and these are the outcomes:

The CI is the primary "tester" of osbuild/composer.

In an ideal world you might want to drop the local testing, because then you don't need to maintain two environments. However, this is not possible: we need to run the integration tests in a VM, so it needs to be hosted in AWS or a similar service. For that reason, secrets are involved. This makes running tests in forks impossible, which in turn makes any experimenting with our projects really hard. To sum it up, we need a way to run the integration test suite locally.

For the local environment we want to reuse the CI setup as much as possible. The goal is basically to run the same scripts on different targets. The setup of our CI is going to look more or less like this:

  1. Fetch sources and build RPMs
  2. Install the RPMs and run tests

Currently, the first step on the CI downloads the specfile, fetches sources from it and builds the RPM. On a local machine, that's roughly the same as our current setup in make rpm. However, the CI does it in a pristine container. We should do the same in the local setup to ensure a pristine environment (developers love to install weird stuff on their machines).

The second step on the CI (not yet implemented, but we have a plan) first spins up a fresh VM on AWS, installs the RPMs there and runs the tests. We should do the exact same thing in the local environment, except using qemu instead of AWS. I personally think that Vagrant is a great tool for development and quick PoCs. However, it manages its inner state in a somewhat opaque way, which makes it not very suitable for integration tests, which we would like to run in as well-defined an environment as possible.

VM/container versioning

Quoting @teg:

I think we need to first set up and pin a well-known base VM (up-to-date Fedora Cloud 31 without any extra packages installed). We should take snapshots of this to run our tests in, knowing that every time we run the test-suite the base image is bit-for-bit the same and the only thing that changed is our test RPMs. From time to time we can update the base VM, but that should then be done explicitly and the last known-good tests re-run to verify the changing image does not break them.

This is something we currently don't know how to do (both in the CI and locally). @major might help us. We should make sure we are using the same well-known base in the CI and locally.

Development environment

It would be nice to have one. We think we can reuse bits and pieces from the local integration test setup, but we are currently not sure exactly how.

blueprint versions are not validated

lorax-composer verifies that the version field of a blueprint is a valid semantic version and rejects blueprints that have an invalid version. We should do the same: if there's a version field, it should be valid.

Thanks @bcl for pointing this out.
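
A sketch of the check, assuming a semver parsing dependency such as github.com/coreos/go-semver; where exactly the validation hooks into the weldr API is left open:

package main

import (
        "fmt"

        "github.com/coreos/go-semver/semver"
)

// validateBlueprintVersion rejects version strings that are not valid semantic versions.
func validateBlueprintVersion(version string) error {
        if version == "" {
                return nil // an omitted version is fine
        }
        if _, err := semver.NewVersion(version); err != nil {
                return fmt.Errorf("invalid blueprint version %q: %v", version, err)
        }
        return nil
}

func main() {
        fmt.Println(validateBlueprintVersion("0.0.1"))  // <nil>
        fmt.Println(validateBlueprintVersion("banana")) // error
}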

Unify naming, improve readability and code correctness

Inconsistent naming

The current implementation refers to the same variable with 3 different names:

  1. composeType
    func (s *Store) PushCompose(composeID uuid.UUID, bp *blueprint.Blueprint, checksums map[string]string, arch, composeType string, uploadTarget *target.Target) error {
  2. OutputType
    OutputType: composeType,
  3. outputFormat
    Pipeline(b *blueprint.Blueprint, additionalRepos []rpmmd.RepoConfig, checksums map[string]string, outputArchitecture, outputFormat string) (*pipeline.Pipeline, error)

Undefined terminology

We should start using "type" or "format" consistently across the codebase and also define our terminology, preferably in some guide for contributors:

  • type vs format
  • output vs compose
  • job vs build
  • probably some more

Weak typing

Also, I would prefer to use an enumeration for architectures and compose types instead of a string, which is error-prone. For example:

// Sketch, not necessarily the final shape
type Architecture int

const (
        X86_64 Architecture = iota
        AARCH64
        PPC64
)

// helper function used for input validation
func ArchitectureFromString(input string) Architecture {
        // ...
}

Test multiple combinations of customizations

Add to our test cases non-empty blueprints.

The blueprints should be chosen as follows:

Firstly, for each output type, create one blueprint that applies every customization in some minimal way. The aim is not to test every combination of every customization, just one blueprint per output type for now. Make sure to test options of each customization, so set a group and an ssh key for the user, etc. For customizations that allow more than one item, only add one (one user with one group).

Secondly, do the same again, but for every customization that allows more items set two instead of one (two users with two groups, etc).

Once all the test-cases pass, generate lorax composer images from the same blueprints, and verify that the image info makes sense.
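
For the first case, a single blueprint touching every customization once could look roughly like this; the values are arbitrary examples, and the exact fields follow the lorax-composer blueprint documentation:

name = "all-customizations"
description = "One of everything, minimally"
version = "0.0.1"

[[packages]]
name = "tmux"
version = "*"

[customizations]
hostname = "custom-host"

[customizations.kernel]
append = "nosmt=force"

[[customizations.sshkey]]
user = "root"
key = "ssh-rsa AAAA... root-key"

[[customizations.user]]
name = "tester"
key = "ssh-rsa AAAA... tester-key"
groups = ["widget", "wheel"]

[[customizations.group]]
name = "widget"

[customizations.timezone]
timezone = "Europe/Prague"

[customizations.locale]
languages = ["en_US.UTF-8"]

[customizations.firewall]
ports = ["22:tcp"]

[customizations.services]
enabled = ["sshd"]
disabled = ["telnetd"]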

Lorax parity - test and flesh out the weldr API

Add detailed tests for the full weldr API and allow them to be run in two modes: the standard internal one used by our CI, and one where they are run against the system instance. The latter is meant to be used to run our tests against lorax-composer, so we can verify that the tests themselves are correct. I imagine this only being run locally while developing tests, rather than as part of CI. How to run them has to be well-documented somewhere.

At the same time, work on fleshing out the missing bits of our implementation of the weldr API so the tests actually pass.

Also, temporarily allow some tests to only be enabled in the external/system mode (so they are not run on the CI). This way we can work quickly on testing the full weldr API and verifying it against lorax-composer, even if the implementation in osbuild-composer lags behind. This allows the work to be done more asynchronously.

Installing RPM build deps fails on RHEL 8.1

Installing the RPM dependencies with dnf builddep fails on RHEL 8.1 with the following error:

No matching package to install: 'compiler(go-compiler)'

However, the go-toolset package seems to provide what we need.

Lorax parity - blueprint customizations

Extend the translation logic to support all the customizations available in blueprints and generate the desired pipelines. Save some samples of each type / config and verify with osbuild-pipeline that we produce the right ones.

We may want to leave the git customization for last, and consider it again for the next cycle, as it seems unclear whether that is how we want it and what the scope should be. It would be worthwhile to figure out which version of lorax-composer first supported this, and how big a deal in terms of compatibility it would be to drop it.

Lorax documentation: https://weldr.io/lorax/lorax-composer.html#blueprints

Parse and represent customizations:

  • groups/packages/modules
  • hostname
  • kernel
  • ssh key
  • user
  • group
  • timezone
  • locale
  • firewall
  • services

Hook up with output types and add basic tests:

  • groups/packages/modules
  • hostname
  • kernel
  • ssh key
  • user
  • group
  • timezone
  • locale
  • firewall
  • services

Produce Vagrant boxes

Goals

Produce Fedora and RHEL “boxes” that can be used with Vagrant [1], more specifically with the vagrant-libvirt provider [2]. This means they won't be usable with any other provider, namely Docker, Hyper-V, VMware, or VirtualBox. In the future we might want to support at least VirtualBox to enable people running macOS to take advantage of this feature.

In the following text, whenever I refer to a “vagrant box” I mean a libvirt specific one.

Scope of the work

A Vagrant box is a tarball containing [3]:

  • qcow2 image
  • image.json - a file containing a small amount of metadata (example here)
  • default Vagrantfile - default configuration of the box (example here)

Both files can be static and thus it is not necessary to store them directly in the pipeline.

I think the work can be divided into two parts:

  • Create the qcow2 image with required configuration [4]
  • Create the tarball.

Creation of the image

According to the documentation [4], the base image must contain a running SSH server. Additionally, it must contain a specific user with a specific SSH key, and it should use a well-known root password. All of this is possible with the current osbuild and osbuild-composer implementations.

It is also required to have a specific sudo configuration and sshd configuration. This would require changing a configuration file, therefore it is related to the work from Will about how to handle cases like this. [5][6]

Creation of the tarball

This part is more complicated and cannot be achieved with the current osbuild implementation. It basically requires an “assembler pipeline” where we produce a qcow2 image and then take the produced image and create a tarball containing it. The metadata files will be mostly or completely predefined (I think completely, but I'm not 100% sure).

We can leave this part out and use a shell script to generate a tarball out of a qcow2 image.
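
Such a script could be as small as the following sketch; the file names follow the vagrant-libvirt example box and the metadata values are illustrative:

# Wrap an existing qcow2 into a vagrant-libvirt box by hand.
cp image.qcow2 box.img

cat > metadata.json <<'EOF'
{ "provider": "libvirt", "format": "qcow2", "virtual_size": 10 }
EOF

cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.provider :libvirt do |libvirt|
    libvirt.driver = "kvm"
  end
end
EOF

tar czf my-image.box metadata.json Vagrantfile box.img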

Proposed steps

  • Create a pipeline (json file) and metadata by hand (json and Vagrantfile), verify that it can work. This pipeline will contain script stages to configure sudo and sshd.
  • Implement SSH configuration stage
  • Implement sudo configuration stage
  • Create “vagrant-libvirt” output type here (osbuild-composer) containing all the customizations required
  • Implement vagrant-libvirt assembler in osbuild (this can copy & paste our current qemu assembler and add a tar call at the end)
  • Finish the work in osbuild-composer by using the new osbuild feature

Final note (for Open Source Contest)

It is not necessary to implement all the steps. In fact, it is way too much for a single contribution, so feel free to implement only parts of it (e.g. only create the pipeline, or only implement the stages).

Sources

[1] https://www.vagrantup.com/
[2] https://github.com/vagrant-libvirt/vagrant-libvirt
[3] https://github.com/vagrant-libvirt/vagrant-libvirt/tree/master/example_box
[4] https://www.vagrantup.com/docs/boxes/base.html#creating-a-base-box-1
[5] osbuild/osbuild#181
[6] osbuild/osbuild#191

Remote Workers

Allow workers to run remotely, i.e., on separate hosts from composer itself.

Acceptance criteria:

  • In composer, add support for the jobqueue socket to be a TCP socket rather than UDS
  • In the worker, add support for configuring the composer hostname to connect to rather than UDS
  • Add support for workers to authenticate themselves to composer using SSL Client Certificates, make this mandatory for the TCP socket

Optional:

  • Split the worker into an rpm subpackage of composer so they can be installed independently.

provide documentation for osbuild-composer in README.md

The contents of README.md are currently lacking.

Acceptance criteria:

  1. A high-level entry point for people accessing the project should be provided.
  2. We continue to reference the API documentation that exists for lorax-composer, since it is sufficient at this point for consumers wanting to use osbuild-composer.

Composes should contain frozen blueprint

Currently, we just copy the blueprint to the new compose. This means composes can contain wildcard package versions.

However, Lorax does a different thing: it takes all the extra packages and groups, depsolves them, and replaces the wildcard versions with specific ones. This process is known in lorax's codebase as blueprint freezing.

This has visible outcomes in some APIs like /compose/info or /compose/metadata. Generally it's a good feature for reproducibility of composes.
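
As a hypothetical illustration of the difference (package versions are made up):

# Blueprint as submitted by the user:
[[packages]]
name = "httpd"
version = "2.4.*"

# Frozen blueprint stored with the compose after depsolving:
[[packages]]
name = "httpd"
version = "2.4.41-1.fc31.x86_64"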

Koji Upload Target

Add a new Upload Target for Koji.

  • Add support for composer to create a target directory in a koji instance
  • Add support for a worker to upload its artefacts to a given koji upload directory
  • Add support to composer to call CGImport once a compose has completed successfully and store the relevant metadata as part of the compose
  • Integrate with the RCM API (issue #192).

Optional:

  • Add authentication (kerberos) support

Implement missing /modules commands and tests

This API is used to list “modules” (this was meant to be a different concept, but is just rpm packages now). It is not fully implemented and tested yet.

Get as close as possible to Lorax’ implementation. At the least, listing packages in the UI and all composer-cli modules commands must work. Start by implementing the tests for lorax, so that we know what to aim for in the osbuild implementation.

API documentation

A new compose isn't immediately saved to the store

When I create a compose using cockpit-composer it's not immediately shown in the compose list. This is not a critical bug but it's slightly annoying.

I think the cause of this is that we depsolve the compose before putting it into the store. Imo, we should put it in the store as soon as possible, as depsolving can take a while (especially when downloading all the dnf metadata), and a user might get confused about why the image isn't showing up in the list.

Basic cloud boot tests

Similarly to the local boot tests, make a simple tool in Go that uses the AWS SDK to upload and boot our ami image and run the same tests against it.

Azure image requires VHD cookie with value "conectix"

When uploading the vhd image to Azure, it fails to boot with this error message:

{
  "error": {
    "code": "InvalidParameter",
    "message": "The specified cookie value in VHD footer indicates that disk 'image.vhd' with blob https://golangimageuploads.blob.core.windows.net:8443/quickstart-6922813081684464493/image.vhd is not a supported VHD. Disk is expected to have cookie value 'conectix'.",
    "target": "disks"
  }
}

more info: https://support.microsoft.com/en-us/help/4053292/conectix-error-and-uploaded-vhd-is-not-supported-when-you-create-a-vm
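
The 'conectix' string is the magic cookie in the footer of a valid VHD, so the error means the uploaded blob is not a proper fixed-size VHD at all. One commonly suggested fix on the build side, sketched here under the assumption that the disk is converted with qemu-img (file names are placeholders), is to convert to the vpc format with a fixed subformat and exact size so the expected footer is written and the 1 MB alignment is preserved:

qemu-img convert -O vpc -o subformat=fixed,force_size input.img image.vhd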

Move depsolving out of the workers

Let composer be the only place depsolving takes place, and use rpm directly rather than dnf in the workers.

  • Add rpm source implementation and use that in the WIP rpm stage.
  • Generate the rpm-style pipelines in composer rather than the current dnf ones.

Clarify and automate test case generation

Problem
Our test/cases can verify the validity of a pipeline and image created from a specified blueprint, distro, and boot method. Currently, they are written manually. This is suboptimal, since each additional distro or boot method requires several additional tests, and with the move from dnf to rpm the cases will be tedious to write by hand.

Solution
A python script will be able to generate test cases for each new distro and boot method. This script can be called by a developer with certain parameters and will output json test cases. This will require clarifying which test cases are needed: testing each combination of distro, output type, boot method, and blueprint customization would create too many test cases, so the combinations that need to be tested should be narrowed down to avoid unnecessary ones. The script will need to generate pipelines and the resulting image's info.

Achievables

  • Add a flag to osbuild-pipeline so it can output the results of rpmmd instead of the pipeline manifest
  • Update distro/distro_test.go to accept both package checksums and package specs
  • Create python script that accepts a boot and a compose-request key as input and uses osbuild-pipeline and image-info to generate test cases
  • Generate tests for combinations of boot methods and image types on x86_64

Implement missing /source commands and tests

This API is used to list and add custom sources. It is not fully implemented and tested yet.

Get as close as possible to Lorax’ implementation. At the least, managing custom sources in the UI and all composer-cli sources commands must work. Start by implementing the tests for lorax, so that we know what to aim for in the osbuild implementation.

API documentation

Fedora 31 release

Write a spec file and get osbuild-composer into fedora 31. The spec file should be accepted into fedora, work with packit and work with the (possibly adjusted) make-repo.sh tool in the tools directory. Once this is finished, and a first release out (passing minimal manual testing), immediately start the process of inclusion into RHEL8.2.

Uploaded AMI images cannot be booted in AWS

When composer builds and uploads an ami image to AWS, it doesn't boot. This is true for both Fedora and RHEL.

The cause of this issue

We're currently building the ami images (both rhel and fedora) as raw.xz archives. These archives are then uploaded. However, AWS doesn't support this format. The current state is the result of c051642. The commit surely moved the ami output closer to the official images (they are indeed raw.xz) but broke the upload functionality. Why the official images are raw.xz is still an open question - my opinion is that the only reason is to save disk space and bandwidth on our mirrors, because those images are 6 GB but the majority of that is just empty space. Using compression in those cases is indeed very useful.

Proposed solutions

  • change the ami output. AWS should support raw, vmdk, vhdx and ova.
    • ova - it's just a wrapper over vmdk or vhdx, not very useful to us
    • vmdk - I couldn't get it to work (ClientError: Disk validation failed [Unsupported VMDK File Format])
    • vhdx - works, imho the best solution
    • raw - works, but it's huge (no compression)
  • when uploading to AWS in the worker, decompress the image on the fly (this would mean that raw is uploaded)
    • this is both slow (decompression) and slow (uploading a huge raw image)

This issue was initially tracked in this document but we forgot about it, therefore I decided to track it here.

dnf-json fails when deploying master to Fedora 31

Steps to reproduce:

  1. Try to insert a package into a blueprint in our "auto-deploy" instance

Result:

-- The job identifier is 9124.
Mar 04 13:46:35 composer-fedora31 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-timedated comm="systemd" exe="/usr/lib/systemd/systemd" host>
Mar 04 13:46:38 composer-fedora31 PackageKit[172230]: get-updates transaction /22_acbabaab from uid 0 finished with success after 2747ms
Mar 04 13:46:50 composer-fedora31 osbuild-composer[172059]: Traceback (most recent call last):
Mar 04 13:46:50 composer-fedora31 osbuild-composer[172059]:   File "dnf-json", line 122, in <module>
Mar 04 13:46:50 composer-fedora31 osbuild-composer[172059]:     base.install_specs(arguments["package-specs"], exclude=arguments.get("exclude-specs", []))
Mar 04 13:46:50 composer-fedora31 osbuild-composer[172059]:   File "/usr/lib/python3.7/site-packages/dnf/base.py", line 1838, in install_specs
Mar 04 13:46:50 composer-fedora31 osbuild-composer[172059]:     install_specs, exclude_specs = self._categorize_specs(install, exclude)
Mar 04 13:46:50 composer-fedora31 osbuild-composer[172059]:   File "/usr/lib/python3.7/site-packages/dnf/base.py", line 1775, in _categorize_specs
Mar 04 13:46:50 composer-fedora31 osbuild-composer[172059]:     _parse_specs(install_specs, install)
Mar 04 13:46:50 composer-fedora31 osbuild-composer[172059]:   File "/usr/lib/python3.7/site-packages/dnf/util.py", line 70, in _parse_specs
Mar 04 13:46:50 composer-fedora31 osbuild-composer[172059]:     for value in values:
Mar 04 13:46:50 composer-fedora31 osbuild-composer[172059]: TypeError: 'NoneType' object is not iterable
Mar 04 13:47:05 composer-fedora31 systemd[1]: systemd-hostnamed.service: Succeeded.
-- Subject: Unit succeeded

Create an integration test for RCM API

Even though the RCM API is not yet complete (see #192 for the status), we should introduce tests that will make sure the REST API does not break.

Acceptance criteria:

  • Introduce new executable in /cmd that will run the tests
  • Cover all endpoints and all expected use cases (create a compose, get compose status)
  • Plug-in the executable to our existing CI infrastructure

osbuild-pipeline should have --help

Since we use osbuild-pipeline as a CLI tool for development and testing of osbuild-composer I think we should implement a reasonable --help text and if possible also validation of input arguments.

WDYT?

In Fedora 31, the default fedora source points to Fedora 30

# composer-cli sources info fedora 
{"sources":{"Fedora 30":{"name":"Fedora 30","type":"yum-metalink","url":"https://mirrors.fedoraproject.org/metalink?repo=fedora-30\u0026arch=x86_64","check_gpg":true,"check_ssl":true,"system":true}},"errors":[]}

Upload support in the worker

We need to implement support for uploading images, according to the information given in the Target struct. This proposal should be considered complete when the features exposed in the UI by osbuild/cockpit-composer#767 are implemented. A nice-to-have would be to support all the features the weldr API supports.

The suggestion is to use Mantle for the initial implementation. Though in the future, or if it makes more sense, we may want to use the SDKs provided by the cloud providers directly, as Mantle is just a thin wrapper around these anyway.

Proposed prioritization:

  • aws
  • azure
  • vsphere
  • openstack
  • other supported formats (if any)

Coverage service integration

Choose a service like coveralls.io or codecov or whatever and start tracking code coverage.

The service should generate coverage graphs and optionally update PR comments (or not).

Coverage will be for informational purposes only; it will not block merging PRs.

Implement missing /blueprints commands and tests

This API is used to manage blueprints. It is not fully implemented and tested yet.

Get as close as possible to Lorax’ implementation. At the least, managing blueprints in the UI and all composer-cli blueprints commands must work. Start by implementing the tests for lorax, so that we know what to aim for in the osbuild implementation.

API documentation
