jirutka / setup-alpine

Easily use Alpine Linux on GitHub Actions, with support for QEMU user emulator

License: MIT License

Shell 100.00%
actions alpine-linux chroot ci github-actions qemu musl-libc cross-compile

setup-alpine's Introduction

GitHub Action: Setup Alpine Linux

A GitHub Action to easily set up and use a chroot-based [1] Alpine Linux environment in your workflows and emulate any supported CPU architecture (using QEMU).

runs-on: ubuntu-latest
steps:
  - uses: jirutka/setup-alpine@v1
    with:
      branch: v3.15

  - run: cat /etc/alpine-release
    shell: alpine.sh {0}

See more examples in the Usage examples section below.

Highlights

  • Easy to use and flexible

    • Add one step uses: jirutka/setup-alpine@v1 to set up the Alpine environment; for the next steps that should run in this environment, specify shell: alpine.sh {0}. See Usage examples for more.

    • You can switch between one or more Alpine environments and the host system (Ubuntu) within a single job (i.e. each step can run in a different environment). This is ideal, for example, for cross-compilation.

  • Emulation of non-x86 architectures

    • This couldn’t be easier: just specify the input parameter arch. The action sets up the QEMU user-space emulator and installs an Alpine Linux environment for the specified architecture. You can then build and run binaries for/from this architecture, just like on real hardware, only (significantly) slower (it’s software emulation, after all).

  • No hassle with Docker images

    • You don’t have to write any Dockerfiles, for example, to cross-compile Rust crates with C dependencies.[2]

    • No, you really don’t need any so-called “official” Docker image for gcc, nodejs, python or whatever else you need; just install it with apk from the Alpine repositories. It is fast, really fast!

  • Always up to date environment

    • The whole environment and all packages are always installed directly from Alpine Linux’s official repositories using apk-tools (Alpine’s package manager). There’s no intermediate layer that tends to lag behind with security fixes (such as Docker images). You might be thinking: isn’t that slow? No, it’s faster than pulling a Docker image!

    • No, you really don’t need any Docker image to get a stable build environment. Alpine Linux provides stable releases (branches); these receive only (security) fixes, no breaking changes.

  • It’s simple and lightweight

    • You don’t have to worry about that on a hosted CI service, but still… This action is written in ~220 LoC and uses only basic Unix tools (chroot, mount, wget, Bash, …) and apk-tools (Alpine’s package manager). That’s it.
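
Under the hood, the whole trick is roughly the following (a simplified sketch for illustration, not the action’s actual code; the uppercase variables stand in for the action’s inputs):

# Simplified sketch only -- see setup-alpine.sh for the real implementation.
rootfs="$HOME/rootfs/alpine"
mkdir -p "$rootfs"

# Fetch the static apk-tools binary and bootstrap a minimal Alpine rootfs.
wget -O apk.static "$APK_TOOLS_URL"
chmod +x apk.static
sudo ./apk.static -X "$MIRROR_URL/$BRANCH/main" -U --allow-untrusted \
    -p "$rootfs" --initdb add alpine-base

# Bind-mount /proc and run a command inside the chroot.
sudo mount --bind /proc "$rootfs"/proc
sudo chroot "$rootfs" /bin/sh -c 'cat /etc/alpine-release'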

Parameters

Inputs

apk-tools-url

URL of the apk-tools static binary to use. It must end with #!sha256! followed by a SHA-256 hash of the file. This should normally be left at the default value.

Default: see action.yml
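
If you do need to override it, the value looks like this (a sketch; the checksum below is a placeholder, not a real hash):

- uses: jirutka/setup-alpine@v1
  with:
    # Placeholder checksum: replace the zeros with the actual SHA-256
    # of the apk.static binary you are pinning.
    apk-tools-url: https://gitlab.alpinelinux.org/api/v4/projects/5/packages/generic/v2.14.0/x86_64/apk.static#!sha256!0000000000000000000000000000000000000000000000000000000000000000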

arch

CPU architecture to emulate using QEMU user space emulator. Allowed values are: x86_64 (native), x86 (native), aarch64, armhf [3], armv7, ppc64le, riscv64 [4], and s390x.

Default: x86_64

branch

Alpine branch (aka release) to install: vMAJOR.MINOR, latest-stable, or edge.

Example: v3.15
Default: latest-stable

extra-keys

A list of paths of additional trusted keys (for installing packages from the extra-repositories) to copy into /etc/apk/keys/. The paths should be relative to the workspace directory (the default location of your repository when using the checkout action).

Example: .keys/[email protected]
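
For instance, to install packages from a custom repository signed with your own key (the repository URL, key file, and package name below are hypothetical):

- uses: jirutka/setup-alpine@v1
  with:
    # Hypothetical custom repository and its signing key; the key path is
    # relative to the workspace (checked out by the checkout action).
    extra-repositories: https://example.org/alpine/custom
    extra-keys: .keys/custom@example.org.rsa.pub
    packages: some-custom-pkg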

extra-repositories

A list of additional Alpine repositories to add into /etc/apk/repositories (Alpine’s official main and community repositories are always added).

mirror-url

URL of an Alpine Linux mirror to fetch packages from.
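
For example (the value below is the default Alpine CDN, shown only to illustrate the expected URL format; any official mirror works the same way):

- uses: jirutka/setup-alpine@v1
  with:
    # Replace with your preferred mirror; this is the default CDN.
    mirror-url: http://dl-cdn.alpinelinux.org/alpine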

packages

A list of Alpine packages to install.

Example: build-base openssh-client
Default: no extra packages

shell-name

Name of the wrapper script for running commands inside the Alpine chroot; it will be added to GITHUB_PATH. This name should be used in jobs.<job_id>.steps[*].shell (e.g. shell: alpine.sh {0}) to run the step’s script in the chroot.

Default: alpine.sh

volumes

A list of directories on the host system to bind-mount into the chroot. You can specify the source and destination path as <src-dir>:<dest-dir>, where <src-dir> is an absolute path to an existing directory on the host system and <dest-dir> is an absolute path in the chroot (it will be created if it doesn’t exist). You can omit the latter if they’re the same.

Please note that /home/runner/work (where your workspace is located) is always mounted; don’t specify it here.

Example: ${{ steps.alpine-aarch64.outputs.root-path }}:/mnt/alpine-aarch64
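
Both forms can be combined in one list, for example (hypothetical paths):

- uses: jirutka/setup-alpine@v1
  with:
    # Hypothetical paths: the first entry is mounted at the same path
    # inside the chroot, the second one at a different destination.
    volumes: |
      /opt/cache
      /srv/data:/mnt/data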

Outputs

root-path

Path to the created Alpine root directory.
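
For example, you can inspect the chroot’s filesystem from the host system (a minimal sketch; the step id alpine is an arbitrary name):

- uses: jirutka/setup-alpine@v1
  id: alpine

- name: List the chroot’s root directory from the host
  run: sudo ls -la ${{ steps.alpine.outputs.root-path }}
  shell: bash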

Usage examples

Basic usage

runs-on: ubuntu-latest
steps:
  - uses: actions/checkout@v2

  - name: Setup latest Alpine Linux
    uses: jirutka/setup-alpine@v1

  - name: Run script inside Alpine chroot as root
    run: |
      cat /etc/alpine-release
      apk add nodejs npm
    shell: alpine.sh --root {0}

  - name: Run script inside Alpine chroot as the default user (unprivileged)
    run: |
      ls -la  # as you would expect, you're in your workspace directory
      npm build
    shell: alpine.sh {0}

  - name: Run script on the host system (Ubuntu)
    run: |
      cat /etc/os-release
    shell: bash

Set up Alpine with specified packages

- uses: jirutka/setup-alpine@v1
  with:
    branch: v3.15
    packages: >
      build-base
      libgit2-dev
      meson

Set up and use Alpine for a different CPU architecture

runs-on: ubuntu-latest
steps:
  - name: Setup Alpine Linux v3.15 for aarch64
    uses: jirutka/setup-alpine@v1
    with:
      arch: aarch64
      branch: v3.15

  - name: Run script inside Alpine chroot with aarch64 emulation
    run: uname -m
    shell: alpine.sh {0}

Set up Alpine with packages from the testing repository

- uses: jirutka/setup-alpine@v1
  with:
    extra-repositories: |
      http://dl-cdn.alpinelinux.org/alpine/edge/testing
    packages: some-pkg-from-testing

Set up and use multiple Alpine environments in a single job

runs-on: ubuntu-latest
steps:
  - name: Setup latest Alpine Linux for x86_64
    uses: jirutka/setup-alpine@v1
    with:
      shell-name: alpine-x86_64.sh

  - name: Setup latest Alpine Linux for aarch64
    uses: jirutka/setup-alpine@v1
    with:
      arch: aarch64
      shell-name: alpine-aarch64.sh

  - name: Run script inside Alpine chroot
    run: uname -m
    shell: alpine-x86_64.sh {0}

  - name: Run script inside Alpine chroot with aarch64 emulation
    run: uname -m
    shell: alpine-aarch64.sh {0}

  - name: Run script on the host system (Ubuntu)
    run: cat /etc/os-release
    shell: bash

Cross-compile Rust application with system libraries

runs-on: ubuntu-latest
strategy:
  matrix:
    include:
      - rust-target: aarch64-unknown-linux-musl
        os-arch: aarch64
env:
  CROSS_SYSROOT: /mnt/alpine-${{ matrix.os-arch }}
steps:
  - uses: actions/checkout@v1

  - name: Set up Alpine Linux for ${{ matrix.os-arch }} (target arch)
    id: alpine-target
    uses: jirutka/setup-alpine@v1
    with:
      arch: ${{ matrix.os-arch }}
      branch: edge
      packages: >
        dbus-dev
        dbus-static
      shell-name: alpine-target.sh

  - name: Set up Alpine Linux for x86_64 (build arch)
    uses: jirutka/setup-alpine@v1
    with:
      arch: x86_64
      packages: >
        build-base
        pkgconf
        lld
        rustup
      volumes: ${{ steps.alpine-target.outputs.root-path }}:${{ env.CROSS_SYSROOT }}
      shell-name: alpine.sh

  - name: Install Rust stable toolchain via rustup
    run: rustup-init --target ${{ matrix.rust-target }} --default-toolchain stable --profile minimal -y
    shell: alpine.sh {0}

  - name: Build statically linked binary
    env:
      CARGO_BUILD_TARGET: ${{ matrix.rust-target }}
      CARGO_PROFILE_RELEASE_STRIP: symbols
      PKG_CONFIG_ALL_STATIC: '1'
      PKG_CONFIG_LIBDIR: ${{ env.CROSS_SYSROOT }}/usr/lib/pkgconfig
      RUSTFLAGS: -C linker=/usr/bin/ld.lld
      SYSROOT: /dummy  # workaround for https://github.com/rust-lang/pkg-config-rs/issues/102
    run: |
      # Workaround for https://github.com/rust-lang/pkg-config-rs/issues/102.
      echo -e '#!/bin/sh\nPKG_CONFIG_SYSROOT_DIR=${{ env.CROSS_SYSROOT }} exec pkgconf "$@"' \
          | install -m755 /dev/stdin pkg-config
      export PKG_CONFIG="$(pwd)/pkg-config"
      cargo build --release --locked --verbose
    shell: alpine.sh {0}

  - name: Try to run the binary
    run: ./myapp --version
    working-directory: target/${{ matrix.rust-target }}/release
    shell: alpine-target.sh {0}

History

This action is an evolution of the alpine-chroot-install script I originally wrote for Travis CI in 2016. The implementation is principally the same, but tailored to GitHub Actions. It’s so simple and fast thanks to how awesome apk-tools is!

License

This project is licensed under MIT License. For the full text of the license, see the LICENSE file.


1. If you don’t know what chroot is, think of it as a very simple container. It’s one of the cornerstones of containers and the only one that is actually needed for this use case.
2. The popular cross tool used by the actions-rs/cargo action doesn’t allow you to easily install additional packages or whatever else is needed to build your crate without creating, building, and maintaining custom Docker images (cross-rs/cross#281). This just imposes unnecessary complexity and boilerplate.
3. armhf is armv6 with hard-float.
4. riscv64 is available only for branch edge for now.

setup-alpine's People

Contributors: jirutka, panekj

setup-alpine's Issues

Post Run exits with failure - unmount "target is busy"

Thanks for the action; this is very useful for us to run some tests in Alpine rather than Ubuntu.

The action works well for us, but we get this failure during the Post Run

[screenshot of the failed unmount in the Post Run step]

Unfortunately this marks the job as failed, so pull requests can't be merged etc.

Is there something we can do here? We're using jirutka/setup-alpine@v1 and branch v3.16

Detect native arch dynamically

Currently, from what I read in the setup script, it seems anything that is not x86 will set up QEMU. However, in the case of a self-hosted runner whose host is natively aarch64, it should not set up QEMU.
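
Something along these lines could cover it (a rough sketch, not the action’s actual code; setup_qemu is a hypothetical helper):

# Rough sketch: skip the QEMU setup when the requested architecture
# already matches the host's native one. Note that some input values
# (e.g. armhf) would need mapping to the corresponding uname -m names.
host_arch=$(uname -m)
if [ "$ARCH" != "$host_arch" ]; then
    setup_qemu "$ARCH"  # hypothetical helper
fi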

./setup-alpine.sh: line 208: update-binfmts: command not found

name: Build U-Boot on Tag

on:
  push:
    tags:
      - 'v*' # Trigger only for tags starting with 'v'

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Setup latest Alpine Linux
        uses: jirutka/setup-alpine@v1
        with:
          arch: aarch64
          packages: >
            build-base
    
      - name: Run script inside Alpine chroot with aarch64 emulation
        run: uname -m
        shell: alpine.sh {0}
$ act
[Build U-Boot on Tag/build] 🚀  Start image=catthehacker/ubuntu:act-latest
INFO[0000] Parallel tasks (0) below minimum, setting to 1 
[Build U-Boot on Tag/build]   🐳  docker pull image=catthehacker/ubuntu:act-latest platform= username= forcePull=true
INFO[0003] Parallel tasks (0) below minimum, setting to 1 
[Build U-Boot on Tag/build]   🐳  docker create image=catthehacker/ubuntu:act-latest platform= entrypoint=["tail" "-f" "/dev/null"] cmd=[] network="host"
[Build U-Boot on Tag/build]   🐳  docker run image=catthehacker/ubuntu:act-latest platform= entrypoint=["tail" "-f" "/dev/null"] cmd=[] network="host"
[Build U-Boot on Tag/build]   ☁  git clone 'https://github.com/jirutka/setup-alpine' # ref=v1
[Build U-Boot on Tag/build] ⭐ Run Pre Setup latest Alpine Linux
[Build U-Boot on Tag/build]   ☁  git clone 'https://github.com/webiny/action-post-run' # ref=3.1.0
[Build U-Boot on Tag/build]   ✅  Success - Pre Setup latest Alpine Linux
[Build U-Boot on Tag/build] ⭐ Run Main actions/checkout@v2
[Build U-Boot on Tag/build]   🐳  docker cp src=/home/raptor/src/u-boot/. dst=/home/raptor/src/u-boot
[Build U-Boot on Tag/build]   ✅  Success - Main actions/checkout@v2
[Build U-Boot on Tag/build] ⭐ Run Main Setup latest Alpine Linux
[Build U-Boot on Tag/build]   🐳  docker cp src=/home/raptor/.cache/act/jirutka-setup-alpine@v1/ dst=/var/run/act/actions/jirutka-setup-alpine@v1/
[Build U-Boot on Tag/build] 'runs-on' key not defined in Build U-Boot on Tag/build
[Build U-Boot on Tag/build] ⭐ Run Main sudo -E ./setup-alpine.sh
[Build U-Boot on Tag/build]   🐳  docker exec cmd=[bash --noprofile --norc -e -o pipefail /var/run/act/workflow/1-composite-setup.sh] user= workdir=/var/run/act/actions/jirutka-setup-alpine@v1
[Build U-Boot on Tag/build]   ❓  ::group::Prepare rootfs directory
| ▷ Alpine will be installed into: /home/root/rootfs/alpine-latest-aarch64
[Build U-Boot on Tag/build]   ❓  ::endgroup::
[Build U-Boot on Tag/build]   ❓  ::group::Download static apk-tools
| ▷ Downloading https://gitlab.alpinelinux.org/api/v4/projects/5/packages/generic/v2.14.0/x86_64/apk.static
| apk: OK
[Build U-Boot on Tag/build]   ❓  ::endgroup::
[Build U-Boot on Tag/build]   ❓  ::group::Install qemu-aarch64 emulator
| ▷ Fetching qemu-aarch64 from the latest-stable Alpine repository
| fetch http://dl-cdn.alpinelinux.org/alpine/latest-stable/community/x86_64/APKINDEX.tar.gz
| Downloading qemu-aarch64-8.1.5-r0
| ▷ Unpacking qemu-aarch64 and installing on the host system
| ▷ Registering binfmt for aarch64
| ./setup-alpine.sh: line 208: update-binfmts: command not found
| 
| Error occurred at line 208:
|   205 |               rm ./$qemu_cmd-*.apk
|   206 | 
|   207 |               info "Registering binfmt for $qemu_arch"
| > 208 |               update-binfmts --import "$SCRIPT_DIR"/binfmts/$qemu_cmd
|   209 |       fi
|   210 | fi
|   211 | 
[Build U-Boot on Tag/build]   ❗  ::error title=setup-alpine: Install qemu-aarch64 emulator::Error occurred at line 208:                update-binfmts --import "$SCRIPT_DIR"/binfmts/$qemu_cmd (see the job log for more information)
[Build U-Boot on Tag/build]   ❌  Failure - Main sudo -E ./setup-alpine.sh
[Build U-Boot on Tag/build] exitcode '1': failure
[Build U-Boot on Tag/build] 'runs-on' key not defined in Build U-Boot on Tag/build
[Build U-Boot on Tag/build]   ⚙  ::set-output:: root-path=
[Build U-Boot on Tag/build]   ❌  Failure - Main Setup latest Alpine Linux
[Build U-Boot on Tag/build] exitcode '1': failure
[Build U-Boot on Tag/build] ⭐ Run Post Setup latest Alpine Linux
[Build U-Boot on Tag/build]   🐳  docker cp src=/home/raptor/.cache/act/jirutka-setup-alpine@v1/ dst=/var/run/act/actions/jirutka-setup-alpine@v1/
[Build U-Boot on Tag/build]   ✅  Success - Post Setup latest Alpine Linux
[Build U-Boot on Tag/build] 🏁  Job failed
Error: Job 'build' failed

Issue with aarch64 emulation

Thank you for making this action it's much more elegant than using docker.

I tried this out last night and ran into an issue with aarch64 emulation.

When I ran the native x86_64 version, things worked as expected; however, when I switched the arch to aarch64, I got a failed build.

You can see the example here.

https://github.com/upmaru-stage/locomo/actions/runs/6598676983/job/17927100353

After it failed, the job kept running, so I had to cancel it manually.

Here is the native version that succeeded

https://github.com/upmaru-stage/locomo/actions/runs/6598606683/job/17926843672

Thank you!

Man, just thank you for this awesome work! ❤️

Readme error?

In the Set up and use multiple Alpine environments in a single job section, shell-name: alpine-aarch64.sh is specified for both of the first two steps. If I understand the purpose of shell-name and the rest of the example correctly, the first one should be shell-name: alpine-x86_64.sh.
