sel4 / ci-actions

CI GitHub actions for the seL4 repositories

Home Page: https://sel4.systems

Languages: Python 48.28%, Shell 27.65%, Perl 12.84%, Dockerfile 4.82%, JavaScript 3.24%, Makefile 3.17%
Topics: ci, continuous-integration, sel4, ci-actions

ci-actions's Introduction

CI

CI actions and Workflows for seL4 repositories

This repository collects definitions for continuous integration (CI) tasks/actions for the repositories of the seL4 foundation. While some of these might be useful more generally, most of them will be specific to the seL4 setup.

The idea is to concentrate most of the GitHub workflow definitions here in a single repository to avoid duplication, share code between actions, and to make it easier to replicate a similar CI setup on other platforms.

Shared JavaScript is in js/, and shared shell scripts are in scripts/.

This repository also defines a number of GitHub action workflows that can be called from other repositories. These are all files in .github/workflows that define an on: workflow_call trigger.

Available actions

The following GitHub actions are available:

Contributing

Contributions are welcome!

See open issues for things that need work; there is also a list of good first issues if you are new to all this and want to get involved.

See the file CONTRIBUTING.md for more information.

License

See the directory LICENSES/ for a list of the licenses used in this repository, and the SPDX tag in file headers for the license of each file.

ci-actions's People

Contributors

axel-h, chrisguikema, corlewis, dependabot[bot], indanz, ivan-velickovic, lsf37, mbrcknl, wom-bat, xurtis

ci-actions's Issues

binary verification

Full binary verification run.

These are expensive and long. Should be triggered manually and/or weekly only.

Will likely need mechanisms from #80 first.

add a CI action for sel4test build + simulation

#38 was a first attempt at this. I think it's possible to use YAML directly for storing all platform and configuration info, potentially also build/run groups, and then to use those groups in a build matrix for the workflow action. The workflow should invoke one script that knows how to read those configuration files and translate them into the correct build invocation.
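
A minimal sketch of what such a script could look like, assuming a hypothetical platforms.yml with "platforms" and "groups" sections; the file name, schema, and build flags are assumptions, not an existing ci-actions format:

```python
#!/usr/bin/env python3
# Hypothetical sketch only: reads platform/group info from YAML and turns a
# selected group into concrete build invocations.
import sys
import yaml  # pip install pyyaml

def build_commands(config_file: str, group: str):
    """Translate one build group from a YAML platform file into build commands."""
    with open(config_file) as f:
        config = yaml.safe_load(f)
    for platform in config["groups"][group]:
        settings = config["platforms"][platform]
        flags = " ".join(f"-D{k}={v}" for k, v in settings.get("cmake", {}).items())
        yield f"../init-build.sh -DPLATFORM={platform} {flags} && ninja"

if __name__ == "__main__":
    for cmd in build_commands(sys.argv[1], sys.argv[2]):
        print(cmd)
```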

If this action runs on all PRs, we probably don't need #73

docker build action

Build a matrix of docker images for the seL4-CAmkES-L4v-dockerfiles repo.

The action can probably go into that repo directly.

List of images to build.

  • CakeMLToolchain
  • Camkes
  • L4v
  • RISCV (no longer a separate image)
  • seL4
  • Sysinit (currently unmaintained)
  • Other:
    • sel4-rust
    • camkes-vis (currently unmaintained)
    • l4v-cakeml (currently unmaintained)
    • rust-sysinit (no longer a separate image)

report benchmarking regressions

see also Bamboo/sel4bench/regressions

Will need #75 before this makes sense, and might make sense to fuse into that action as an option.

Make style check report only on lines changed in PR

On PRs, it'd be cool to have the style checker only demand style corrections for the lines that were changed, and warn (but not demand a fix) for the rest of the file.

We basically need an intersection of diff lines: take all diff lines from the style checker diff that also occur in the PR diff.
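
A rough sketch of that intersection, assuming both diffs are available as unified-diff text (illustration only, not existing ci-actions code):

```python
# Collect the (file, line) pairs a unified diff touches, then intersect the
# style-checker diff with the PR diff so only lines changed by the PR are
# reported.
import re

def touched_lines(diff_text: str, side: str = "+") -> set:
    """(file, line) pairs touched by a unified diff, numbered on the chosen
    side: '+' = new file, '-' = old file."""
    touched, current_file, lineno = set(), None, 0
    for line in diff_text.splitlines():
        if line.startswith("+++ ") and side == "+":
            current_file = line[4:].removeprefix("b/")
        elif line.startswith("--- ") and side == "-":
            current_file = line[4:].removeprefix("a/")
        elif line.startswith("@@"):
            old_start, new_start = re.findall(r"[-+](\d+)", line)[:2]
            lineno = int(old_start if side == "-" else new_start)
        elif line.startswith(side) and not line.startswith(side * 3):
            touched.add((current_file, lineno))
            lineno += 1
        elif line.startswith(" "):
            lineno += 1
    return touched

def style_lines_to_report(style_diff: str, pr_diff: str) -> set:
    # The style diff is computed against the PR head, so its '-' side and the
    # PR diff's '+' side are both numbered in PR-head coordinates.
    return touched_lines(style_diff, side="-") & touched_lines(pr_diff, side="+")
```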

rump-hello action

Can possibly be implemented in the rumprun repo.

Needs a machine queue, i.e. probably after #74

Show manifest status

Show:

  • manifest URL and hash
  • each repo, path and hash
  • mark any updated repos, with hash

This is to make sure that the logs show exactly what was tested.
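
A minimal sketch of the per-repo part of this, assuming a repo-tool XML manifest; the manifest path, the "(updated)" heuristic, and the output format are assumptions:

```python
# Sketch only: print each project's name, path, and checked-out hash, marking
# repos whose hash differs from the pinned manifest revision.
import subprocess
import xml.etree.ElementTree as ET

def show_manifest_status(manifest_file: str = ".repo/manifests/default.xml") -> None:
    for project in ET.parse(manifest_file).iter("project"):
        name = project.get("name")
        path = project.get("path", name)
        head = subprocess.run(["git", "-C", path, "rev-parse", "HEAD"],
                              capture_output=True, text=True).stdout.strip()
        pinned = project.get("revision", "")
        # 'revision' may be a branch rather than a hash; refine as needed
        mark = "  (updated)" if pinned and pinned != head else ""
        print(f"{name:30} {path:30} {head}{mark}")
```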

Implement seL4-compile action

An action for the seL4 repo that can run on pull requests and pushes to master that compiles seL4 and the manual on all supported platforms.

add statistics to proof test

Add the following statistics to the aws-proofs action:

  • sloc count
  • a generalised targeted sorry count (so we don't have to add a separate one for each project); maybe configurable as input

Make sure test ref exists in clone

There is a race condition in PRs: the PR head (GITHUB_REF) might be set for an action, and shortly afterwards, but before the repo is cloned inside the action, this ref disappears, e.g. because of a force-push, or because the PR has been merged with "rebase + merge" or "squash + merge". This hits especially the very long-running actions that are optional (like l4v on the MCS branch).

I think seL4/util_libs#67 is an instance of that.

This issue is to investigate if there is a way to make sure that the test ref is fetched explicitly, even if it is not connected any more and would otherwise not be included in a git clone.

Hopefully we've factored out things enough that this doesn't need to be done in much more than one location.
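
A sketch of the explicit fetch, as a hypothetical helper rather than existing ci-actions code: fetch the ref under test up front and fail with a clear message if it has already disappeared.

```python
import os
import subprocess
import sys

def ensure_test_ref(ref: str = "") -> None:
    """Explicitly fetch the ref under test; fail early if it is gone."""
    ref = ref or os.environ.get("GITHUB_REF", "")
    result = subprocess.run(["git", "fetch", "origin", ref],
                            capture_output=True, text=True)
    if result.returncode != 0:
        sys.exit(f"Test ref {ref!r} is no longer available on origin "
                 f"(force-push or merged PR?):\n{result.stderr}")
```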

seL4: add markdown run to refman test

Bamboo tests the markdown generation separately. It is probably best to do this here as well to make sure we haven't missed anything, but we will fuse it into the PDF generation.

use code annotations for style etc

Mostly putting this here so someone can pick it up if they feel like implementing it:

It could be nice to have the style checker comment directly on the code instead of outputting a diff (see also the theory-linter, which does that).

Main blocker for that is that code annotations for GitHub actions only work for PRs from within the repo, not for forks, which is the common case. You can implement comments/code annotations from a GitHub app, though. This would have to be written and hosted somewhere. That is all solvable, but needs someone with a bit of time and web app knowledge.

monitor proofs for performance regressions

It'd be nice to know automatically whether (and which) proof sessions get slower or faster over time, as well as the total time for test runs.

The weekly clean tests should produce reasonably reliable timing information modulo some noise. We could record that timing info over time, and run a script similar to the seL4 performance regression script that plots performance over time as well as raises an alarm on significant jumps.
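
A sketch of the alarm part only, assuming the timing info is recorded as CSV rows of (date, session, seconds); the format and the 15% threshold are assumptions:

```python
# Flag proof sessions whose latest run is significantly slower than the
# median of their recent runs.
import csv
import statistics
from collections import defaultdict

def regressions(csv_file: str, threshold: float = 0.15) -> list:
    history = defaultdict(list)
    with open(csv_file) as f:
        for date, session, seconds in csv.reader(f):
            history[session].append(float(seconds))

    alarms = []
    for session, times in history.items():
        if len(times) < 5:
            continue  # not enough history to be meaningful
        baseline = statistics.median(times[-6:-1])  # recent runs, excluding latest
        if times[-1] > baseline * (1 + threshold):
            alarms.append(f"{session}: {times[-1]:.0f}s vs median {baseline:.0f}s")
    return alarms
```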

run-proofs action should work for l4v

Currently, the run-proofs action only works for pull requests to the seL4 repo.

A small amount of refactoring should enable the same action to work on l4v.

link checker needs file excludes

The -x option on the link checker currently excludes URL patterns, not local file patterns.

We'll need a local file pattern so that we can ignore 3rd-party code where we don't want to fix the links.
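
A sketch of what a local file exclude could look like; the glob patterns and the set of checked files are assumptions:

```python
# Filter out files matching local path patterns (e.g. vendored third-party
# code) before they are handed to the link checker.
import fnmatch
from pathlib import Path

def files_to_check(root: str, exclude_globs: list) -> list:
    return [
        p for p in Path(root).rglob("*.md")
        if not any(fnmatch.fnmatch(str(p), g) for g in exclude_globs)
    ]

# e.g. files_to_check(".", ["*/3rdparty/*", "*/tools/vendor/*"])
```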

test if a PR produces the same binaries

It's sometimes useful to know whether a change still produces the same binaries or not, e.g. when source code is just re-arranged or config options are changed/renamed.

Since seL4 builds are reproducible, this is testable, and could be an optional GH action triggered by a label.
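
A minimal sketch of the comparison step, assuming the workflow has already built the kernel twice (from the PR base and from the PR head) into two directories; the kernel.elf file name and layout are assumptions:

```python
# Compare SHA-256 digests of the kernel images produced by two builds.
import hashlib
import sys
from pathlib import Path

def digests(build_dir: str) -> dict:
    root = Path(build_dir)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.glob("**/kernel.elf"))
    }

if __name__ == "__main__":
    base, head = digests(sys.argv[1]), digests(sys.argv[2])
    if base == head:
        print("PR produces identical binaries")
    else:
        sys.exit(f"binaries differ:\nbase: {base}\nhead: {head}")
```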

investigate if proof cache should store more state

Currently, aws-proofs always rebuilds the following:

  • Scala component
  • standalone C parser
  • Haskell kernel

This is because the cache doesn't store build outputs for these. Investigate what would be necessary to include them.

It might be fine to just scan and add them to a second tar file. A potential problem is that when the directory structure changes, the cache could pollute the source tree with stale files. Then again, the cache should reset automatically after the regular clean build, so any such problems would be limited in time.
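
A sketch of the second-tar idea; the listed paths are placeholders for the three build outputs above, not their real locations:

```python
# Archive (and restore) build outputs that the current cache does not keep.
import tarfile
from pathlib import Path

EXTRA_OUTPUTS = ["path/to/scala-component",      # placeholder paths
                 "path/to/standalone-cparser",
                 "path/to/haskell-kernel"]

def save_extra_cache(archive: str = "cache-extra.tar.gz") -> None:
    with tarfile.open(archive, "w:gz") as tar:
        for path in EXTRA_OUTPUTS:
            if Path(path).exists():
                tar.add(path)

def restore_extra_cache(archive: str = "cache-extra.tar.gz") -> None:
    if Path(archive).exists():
        with tarfile.open(archive) as tar:
            tar.extractall()
```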

sel4test compile action

Not entirely clear if we want this.

On Bamboo we had this for all PRs. I would actually prefer to go one step further on PRs and automatically run all of the simulation tests (but trigger hardware tests manually).

docker deploy action

For the seL4-CAmkES-L4v-dockerfiles repo, after build (#84), on push to the master branch, deploy a set of images.

The action can probably go into that repo directly.

List of images to deploy.

  • CakeMLToolchain
  • Camkes
  • L4v
  • RISCV (no longer a separate image)
  • seL4
  • Sysinit (unmaintained)
  • Other:
    • sel4-rust
    • camkes-vis (unmaintained)
    • l4v-cakeml (unmaintained)
    • rust-sysinit (no longer a separate image)

sel4 hardware test action

This action should build sel4test from the sel4test manifest for a matrix of configurations, and then run these configurations on a set of machine queues.

We will probably want the actual platform, build, and run definitions in a separate config file, and let the GitHub Actions matrix select from it.
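
A sketch of how the matrix could select from such a config file: a small script emits the chosen group as JSON, which the workflow feeds into fromJSON() for its matrix. The file name and schema are assumptions (compare the sketch under the sel4test build issue above).

```python
#!/usr/bin/env python3
# Read a (hypothetical) hw-platforms.yml and emit the selected group as a
# JSON matrix via the standard GITHUB_OUTPUT mechanism.
import json
import os
import sys
import yaml  # pip install pyyaml

def emit_matrix(config_file: str, group: str) -> None:
    with open(config_file) as f:
        config = yaml.safe_load(f)
    matrix = {"include": config["groups"][group]}  # list of build/run settings
    with open(os.environ["GITHUB_OUTPUT"], "a") as out:
        out.write(f"matrix={json.dumps(matrix)}\n")

if __name__ == "__main__":
    emit_matrix(sys.argv[1], sys.argv[2])
```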

@wom-bat is currently working on a more defined interface to the machine queue that can be implemented by multiple organisations. The main interface points are

  • the image to be run
  • which machine/platform to run it on
  • maybe priority/urgency level
  • where and how to report the results (could be an email or a script that then sets web page + GitHub status)

The idea is that this is asynchronous, i.e. the action does the image build and the HW test kick-off, and potentially sets a corresponding GitHub status to pending. The payload for the machine queue describes how to interpret the results, and the queue sets the corresponding status to success/failure when the job is finished.

It'd be nice if the machine queue could indicate on GitHub that a job has started running.

It'd also be nice if we could get partial logs while a job is running. Unclear if that is feasible.
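
A sketch of the asynchronous hand-off described above. The machine-queue payload schema and queue endpoint are hypothetical; the commit-status call is the standard GitHub REST endpoint (POST /repos/{owner}/{repo}/statuses/{sha}).

```python
import os
import requests  # pip install requests

def set_status(repo: str, sha: str, context: str, state: str, descr: str) -> None:
    """Set a commit status via the GitHub REST API."""
    requests.post(
        f"https://api.github.com/repos/{repo}/statuses/{sha}",
        headers={"Authorization": f"token {os.environ['GITHUB_TOKEN']}"},
        json={"state": state, "context": context, "description": descr},
    ).raise_for_status()

def submit_to_machine_queue(payload: dict) -> None:
    # hypothetical hand-off; the queue endpoint is an assumption
    requests.post(os.environ["MACHINE_QUEUE_URL"], json=payload).raise_for_status()

def kick_off_hw_test(image: str, platform: str, repo: str, sha: str) -> None:
    context = f"hw-test/{platform}"
    # 1. mark the check as pending; the queue flips it to success/failure later
    set_status(repo, sha, context, "pending", "waiting for machine queue")
    # 2. hand the job to the machine queue
    submit_to_machine_queue({
        "image": image,                      # the image to be run
        "platform": platform,                # which machine/platform to run it on
        "priority": "normal",                # priority/urgency level
        "report": {"kind": "github-status",  # where and how to report results
                   "repo": repo, "sha": sha, "context": context},
    })
```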

CAmkES unit tests

Can possibly run in the camkes-tool repo directly, i.e. it might not need a separate action.

seL4: add C Parser action

A CI action that runs the C parser on the supported platforms. The preprocess test does some of that already, but for fewer combinations.

See sel4_test family C Parser in Bamboo.

Portable shell check is slow

Installing dependencies seems to take forever compared to what the action does.

A docker action would probably be faster.

run full l4v on AWS

Trigger a full l4v regression test build for a matrix of architectures on AWS, and report results via GitHub status.

Since this costs money, it'll need some access control: either org members only for PRs, or triggered by a label.
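
A sketch of that gate, reading the standard pull request event payload; the trigger label name is an assumption:

```python
# Exit non-zero unless the PR carries the trigger label or was opened by an
# org member/owner, so the expensive AWS job can be skipped.
import json
import os
import sys

def may_run_aws(trigger_label: str = "run-proofs-aws") -> bool:
    with open(os.environ["GITHUB_EVENT_PATH"]) as f:
        event = json.load(f)
    pr = event.get("pull_request", {})
    labels = {label["name"] for label in pr.get("labels", [])}
    return (trigger_label in labels
            or pr.get("author_association") in {"MEMBER", "OWNER"})

if __name__ == "__main__":
    sys.exit(0 if may_run_aws() else 1)
```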

scope out Bamboo replacement

  • turn the Bamboo task sets into issues
  • make plan + issues for AWS trigger for proofs
  • make plan + issues for hardware test trigger + reporting

Implement seL4-test action

An action that runs on pull requests and pushes to the seL4 repo, and runs the seL4 tests on simulators for all platforms that support this.

sel4test-sim can fail on GH runners

On seL4/seL4#449, which does not change the binary (checked by comparing said binaries), we saw the PC99_debug_clang_64 simulation test fail. It succeeded after re-running.

The specific test task that failed was Running test SCHED0021 (Test for pre-emption during running of many threads with equal prio), which depends on timer input.

It looks like that can be too unreliable on GitHub.

Just putting this information here in case we're seeing this more often. If we do, we might either have to remove this test from simulation, or tweak parameters to make its occurrence rare enough that it is not a nuisance.

sel4bench action

Run the seL4 benchmarks.

For anything outside a simulator this needs a working machine queue setup first, i.e. it should come after #74.

implement a filter for bashism check

The main seL4 repo wants all shell scripts to be portable, but for some of the other repos, it is fine to have bash scripts, at least when they are explicitly bash.

This issue is for either implementing a filter, like the style filter, or adding per-repo bashism config so that scripts that explicitly invoke bash are not checked.

Currently slightly favouring the filter option, because that would by default lead to checks for new files, and if a script can easily be made portable, it should be.
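
A sketch of the filter variant, assuming the check starts from a list of candidate shell scripts:

```python
# Skip scripts whose shebang explicitly asks for bash; only run the bashism
# check on the (intended to be portable) rest.
from pathlib import Path

def wants_bash(script: Path) -> bool:
    """True if the script's shebang explicitly invokes bash."""
    first = script.open(errors="replace").readline()
    return first.startswith("#!") and "bash" in first

def scripts_to_check(candidates: list) -> list:
    return [p for p in candidates if not wants_bash(p)]
```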

action for seL4 releases

This includes camkes, l4v, etc.

Currently this is triggered by a nightly run on Bamboo.

This is probably not quite appropriate any more in the GitHub setup (on push to master for specific repos? manually triggered?), and needs some additional thinking on how to avoid race conditions like the one in the 12.1.0 release, where a repo received new commits before the test finished, so the release script didn't trigger correctly.

(but otherwise reuse the existing release scripts)

camkes test action

Like sel4test, but for CAmkES.

Will likely want separate actions for simulation (run on PR?, when manually triggered?) and HW builds.

Should reuse the mechanisms from #74 and #60

schedule tests for l4v

  • regular clean regression (weekly?), measure time
  • regular full test run with vanilla Isabelle (monthly?)
