sel4 / ci-actions
CI GitHub actions for the seL4 repositories
Home Page: https://sel4.systems
Can possibly be implemented in the rumprun repo.
Needs a machine queue, i.e. probably after #74
Plan CAMKESVMARM; see ArmVMMCamkes in Bamboo.
It's sometimes useful to know whether a change still produces the same binaries, e.g. when source code is just re-arranged, or config options are changed or renamed.
Since seL4 builds are reproducible, this is testable, and could be an optional GH action triggered by a label.
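Since builds are reproducible, the comparison itself could be as simple as hashing the two kernel images; a minimal sketch (file names are hypothetical):

```python
# Sketch: compare two build outputs by hash to decide whether a change
# is binary-identical. File names here are hypothetical.
import hashlib
import pathlib
import tempfile

def digest(path):
    """SHA-256 of a file's contents."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

with tempfile.TemporaryDirectory() as tmp:
    before = pathlib.Path(tmp, "kernel-before.elf")
    after = pathlib.Path(tmp, "kernel-after.elf")
    before.write_bytes(b"\x7fELF demo")
    after.write_bytes(b"\x7fELF demo")
    identical = digest(before) == digest(after)
print(identical)
```

The action triggered by the label would then just build twice (base and head) and compare the digests.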
See bamboo/camkes/CLI
For the seL4-CAmkES-L4v-dockerfiles repo, after build (#84), on push to the master branch, deploy a set of images.
The action can probably go into that repo directly.
List of images to deploy.
Full binary verification run.
These runs are expensive and long; they should be triggered manually and/or weekly only.
Will likely need mechanisms from #80 first.
Can possibly be implemented in the corresponding repo directly.
On PRs, it'd be cool to have the style checker demand style corrections only for the lines that were changed, and warn (but not demand a fix) for the rest of the file.
We basically need an intersection of diff lines: take all diff lines from the style checker diff that also occur in the PR diff.
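The intersection could be computed from the unified-diff hunk headers of the two diffs; a minimal sketch (not the style checker's actual interface):

```python
# Sketch: collect the new-file line spans covered by a unified diff's hunk
# headers (context lines included, which is good enough for an
# intersection), then intersect the style-checker diff with the PR diff.
import re

HUNK = re.compile(r"^@@ -\d+(?:,\d+)? \+(\d+)(?:,(\d+))? @@", re.M)

def changed_lines(diff):
    """Set of new-file line numbers covered by a unified diff's hunks."""
    lines = set()
    for match in HUNK.finditer(diff):
        start = int(match.group(1))
        count = int(match.group(2) or 1)
        lines.update(range(start, start + count))
    return lines

pr_diff = "@@ -10,3 +10,4 @@"       # PR touches lines 10-13
style_diff = "@@ -12,2 +12,2 @@"    # style checker wants to fix 12-13
overlap = changed_lines(pr_diff) & changed_lines(style_diff)
print(sorted(overlap))
```

Only style fixes whose lines fall inside `overlap` would then be demanded; the rest become warnings.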
See Bamboo/CAmkES/VisualCamkes
When the C parser updates in l4v, we ideally want the Docker version to update automatically. This probably needs to be triggered explicitly from the l4v repo.
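One way to trigger this explicitly from the l4v repo is a `repository_dispatch` call; a hypothetical workflow fragment (the secret name and event type are invented for illustration):

```yaml
# Hypothetical fragment of an l4v workflow: on push to master, notify the
# seL4-CAmkES-L4v-dockerfiles repo that the C parser may have changed.
on:
  push:
    branches: [master]
jobs:
  notify-dockerfiles:
    runs-on: ubuntu-latest
    steps:
      - run: |
          curl -sf -X POST \
            -H "Authorization: token ${{ secrets.DISPATCH_TOKEN }}" \
            -H "Accept: application/vnd.github+json" \
            https://api.github.com/repos/seL4/seL4-CAmkES-L4v-dockerfiles/dispatches \
            -d '{"event_type": "c-parser-update"}'
```

The dockerfiles repo would then listen for that event type in an `on: repository_dispatch` workflow.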
An action for the seL4 repo that can run on pull requests and pushes to master that compiles seL4 and the manual on all supported platforms.
There is a race condition in PRs where the PR head (GITHUB_REF) might be set for an action, and shortly after, but before the repo is cloned inside the action, this ref disappears, e.g. because of a force-push, or because the PR has been merged with "rebase + merge" or "squash + merge". This especially affects very long-running actions that are optional (like in l4v on the MCS branch).
I think seL4/util_libs#67 is an instance of that.
This issue is to investigate whether there is a way to make sure that the test ref is fetched explicitly, even if it is not connected any more and would otherwise not be included in a git clone.
Hopefully we've factored out things enough that this doesn't need to be done in much more than one location.
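One possible mitigation (a sketch, not an agreed design) is to check out the PR head by commit SHA rather than by symbolic ref, since the commit object typically remains fetchable for a while even after a force-push or squash-merge moves the ref:

```yaml
# Hypothetical workflow step: pin the checkout to the PR head SHA so the
# long-running job is immune to the ref disappearing mid-run.
- uses: actions/checkout@v4
  with:
    ref: ${{ github.event.pull_request.head.sha }}
    fetch-depth: 0
```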
Currently these trigger Bamboo. When #80 is worked out, we can probably use the same mechanism for the test board directly.
Add the following statistics to the aws-proofs action:
Replacing Bamboo's manifest bump for verification-manifest/default.xml after a successful proof run on l4v@master.
This used to be unsupported, but now exists: composite run steps actions.
Wherever possible we should convert the actions here into such composite actions instead of shell script steps.
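For reference, a composite action replacing inline shell steps looks roughly like this (the action name and steps are invented for illustration):

```yaml
# Hypothetical action.yml for a composite run-steps action.
name: style-check
description: Run the style checker as a reusable composite action
runs:
  using: "composite"
  steps:
    - name: Install checker
      run: pip install --quiet flake8
      shell: bash
    - name: Run checker
      run: flake8 .
      shell: bash
```

Callers then just `uses:` the action instead of duplicating the shell steps in each workflow.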
Bamboo tests the markdown generation separately. Probably best to do this as well to make sure we haven't missed anything, but will fuse this into the PDF generation.
The -x option on the link checker currently excludes URL patterns, not local file patterns. We'll need a local file pattern so that we can ignore 3rd-party code where we don't want to fix the links.
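A file-pattern exclude could be as simple as a glob filter applied before checking; a hypothetical helper, not the link checker's real interface:

```python
# Sketch: drop files matching any exclude glob before link checking.
from fnmatch import fnmatch

def keep(paths, exclude_globs):
    """Return the paths that match none of the exclude globs."""
    return [p for p in paths
            if not any(fnmatch(p, g) for g in exclude_globs)]

files = ["docs/index.md", "tools/3rdparty/lib/README.md"]
kept = keep(files, ["tools/3rdparty/*"])
print(kept)
```

Note that `fnmatch`'s `*` matches across `/`, so one glob covers an entire 3rd-party subtree.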
Show:
To make sure that the logs show what exactly was tested.
Installing dependencies seems to take forever compared to what the action actually does. A Docker action would probably be faster.
Mostly putting this here so someone can pick it up if they feel like implementing it:
It could be nice to have the style checker directly comment on the code instead of outputting a diff (see also theory-linter, which does that)
Main blocker for that is that code annotations for GitHub actions only work for PRs from within the repo, not for forks, which is the common case. You can implement comments/code annotations from a GitHub app, though. This would have to be written and hosted somewhere. That is all solvable, but needs someone with a bit of time and web app knowledge.
Trigger a full l4v regression test build for a matrix of architectures on AWS, and report results through GitHub status.
Since this costs money, it'll need some access control, either org members only for PR, or triggered by label.
Currently repo and path are hard-coded. Should be derived from manifest instead.
Run the seL4 benchmarks.
For things outside a simulator, this needs a working machine queue setup first, i.e. should come after #74.
This includes camkes, l4v, etc.
Currently this is triggered by a nightly run on Bamboo.
This is probably not quite appropriate any more on the GitHub setup (on push to master for specific repos? manually triggered?), and needs some additional thinking on how to avoid race conditions as in the 12.1.0 release, where some repos had new commits before the test finished, so the release script didn't trigger correctly.
(but otherwise reuse the existing release scripts)
See job xml_lint_job in the Bamboo seL4 plan.
See Bamboo/CAmkES/CamkesVM
Currently, aws-proofs always rebuilds the following:
This is because the cache doesn't store build outputs for these. Investigate what would be necessary to include them.
It might be fine to just scan and add them to a second tar file. There are potential problems when the directory structure changes and the cache then pollutes the source tree with random stuff. Then again, it should reset automatically after the regular clean build, so problems would be limited in time.
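Scanning and packing those outputs into a second tar file could look like this (paths and globs below are hypothetical, not the actual cache layout):

```python
# Sketch: add selected build outputs to a second cache tarball.
import pathlib
import tarfile
import tempfile

def pack_outputs(root, patterns, archive):
    """Add files under root matching any glob pattern to archive."""
    root = pathlib.Path(root)
    with tarfile.open(archive, "w:gz") as tar:
        for pattern in patterns:
            for path in sorted(root.glob(pattern)):
                tar.add(path, arcname=str(path.relative_to(root)))

with tempfile.TemporaryDirectory() as tmp:
    build = pathlib.Path(tmp, "build")
    build.mkdir()
    (build / "kernel.elf").write_bytes(b"demo")
    archive = pathlib.Path(tmp, "cache-extra.tar.gz")
    pack_outputs(tmp, ["build/*.elf"], archive)
    with tarfile.open(archive) as tar:
        names = tar.getnames()
print(names)
```

Using relative arcnames keeps the restore step independent of where the runner's workspace lives.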
This action should build sel4test from the sel4test manifest for a matrix of configurations, and then run these configurations on a set of machine queues.
We will probably want the actual platform, build, and run definitions in a separate config file, and let the GitHub action matrix select out of that.
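The separation could look like this: a config file holds the platform definitions, and the workflow matrix just names a group. The schema and platform data below are illustrative only:

```python
# Sketch: select platform configurations for a workflow matrix from a
# separate config file (shown inline here; real data would live in the
# repo as YAML or JSON).
import json

CONFIG = json.loads("""
{"platforms": [
  {"name": "pc99",  "arch": "x86_64",  "groups": ["sim", "hw"]},
  {"name": "sabre", "arch": "arm",     "groups": ["hw"]},
  {"name": "spike", "arch": "riscv64", "groups": ["sim"]}
]}
""")

def select(group):
    """Names of platforms that belong to a build/run group."""
    return [p["name"] for p in CONFIG["platforms"] if group in p["groups"]]

print(select("sim"))
```

The workflow's matrix step would call such a script and feed the result to the build invocations.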
@wom-bat is currently working on a more defined interface to the machine queue that can be implemented by multiple organisations. The main interface points are
The idea is that this is asynchronous, i.e. the action does image build and HW test kick-off, and potentially sets a corresponding GitHub status to pending. The payload for the machine queue contains how to interpret results and would set the corresponding status to fail/succeed when the job is finished.
It'd be nice if the machine queue could indicate on GitHub that a job has started running.
It'd also be nice if we could get partial logs while a job is running. Unclear if that is feasible.
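The asynchronous hand-off could carry a payload like the following; all field names are invented to illustrate the idea, since the real interface is still being defined:

```python
# Sketch: what a hardware-test kick-off could hand to the machine queue,
# including how to interpret results and which GitHub status to update
# when the job finishes.
import json

payload = {
    "image": "images/sel4test-sabre.img",   # hypothetical image path
    "platform": "sabre",
    "report": {
        "pass_pattern": "All is well in the universe",
        "status_context": "hw-test/sabre",  # GitHub status to set
        "sha": "0123abc",                   # commit the status applies to
    },
}
print(json.dumps(payload, indent=2))
```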
Currently, the run-proofs action only works for pull requests to the seL4 repo. A small amount of refactoring should enable the same action to work on l4v.
Can possibly run in the camkes-tool repo directly, i.e. might not need a separate action.
Build a matrix of docker images for the seL4-CAmkES-L4v-dockerfiles repo.
The action can probably go into that repo directly.
List of images to build.
see also Bamboo/sel4bench/regressions
Will need #75 before this makes sense, and might make sense to fuse into that action as an option.
Currently the seL4-compile action goes through the vanilla verified configurations (without MCS), for {py2,py3} x {gcc,llvm}.
We should probably add MCS to the matrix where it is supported.
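Adding MCS would double the matrix from four to eight configurations; whether every combination supports MCS is an assumption here:

```python
# Sketch: the current {py2,py3} x {gcc,llvm} matrix, extended with an
# MCS dimension (assuming all combinations support it).
from itertools import product

current = list(product(["py2", "py3"], ["gcc", "llvm"]))
extended = [(py, cc, mcs)
            for (py, cc), mcs in product(current, ["baseline", "MCS"])]
print(len(current), len(extended))
```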
#38 was a first attempt at this. I think it's possible to use yaml directly for storing all platform and configuration info, potentially also build/run groups, and then using those groups in a build matrix for the workflow action, which should invoke one script that knows how to read those configuration files and translate them to the correct build invocation.
If this action runs on all PRs, we probably don't need #73.
It'd be nice to know (automatically) if and which proof sessions get slower or faster over time. Also the total time for test runs.
The weekly clean tests should produce reasonably reliable timing information modulo some noise. We could record that timing info over time, and run a script similar to the seL4 performance regression script that plots performance over time as well as raises an alarm on significant jumps.
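A minimal version of the alarm (the threshold value and timing data below are made up):

```python
# Sketch: flag proof sessions whose latest weekly runtime changed by more
# than a threshold fraction relative to the previous run.
def timing_jumps(history, threshold=0.2):
    """history maps session name -> list of runtimes in seconds."""
    flagged = {}
    for session, times in history.items():
        if len(times) >= 2 and abs(times[-1] - times[-2]) > threshold * times[-2]:
            flagged[session] = (times[-2], times[-1])
    return flagged

history = {"CKernel": [410, 415, 520], "ASpec": [60, 61, 62]}  # made-up data
print(timing_jumps(history))
```

The same records would feed the plotting script, like the seL4 performance regression plots.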
Might be small enough to work on GitHub runners, might not.
@mbrcknl do you know how much memory a decompile-only run needs?
Might need docker setup for doxygen and friends.
A CI action that runs the C parser on the supported platforms. The preprocess test does some of that already, but for fewer combinations.
See the sel4_test family, C Parser, in Bamboo.
On seL4/seL4#449 which does not change the binary (checked that by comparing said binaries), we saw the PC99_debug_clang_64 simulation test failing. It succeeded after re-running.
The specific test task that failed was Running test SCHED0021 (Test for pre-emption during running of many threads with equal prio), which depends on timer input.
It looks like that can be too unreliable on GitHub.
Just putting this information here in case we're seeing this more often. If we do we might either have to remove this test from simulation, or tweak parameters to make occurrence rare enough that it is not a nuisance.
Minus the one the trigger is from.
Could add option for triggering a single repo, which would also solve #93
Build and test the seL4 tutorials for a matrix of configurations.
Needs machine queue, e.g. #74
E.g. sel4test and seL4, or seL4 and l4v.
The PRs could refer to each other explicitly in the description or title. Could also match on branch name.
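Matching on branch name could be a simple lookup across repos; the repo and PR data below are invented:

```python
# Sketch: find companion PRs across repos by identical branch name
# (assumption: companion PRs use the same branch name in each repo).
def companions(prs_by_repo, branch):
    """Map repo -> PR number for every repo with a PR on this branch."""
    return {repo: prs[branch]
            for repo, prs in prs_by_repo.items() if branch in prs}

prs = {"seL4": {"fix-timer": 449}, "l4v": {"fix-timer": 101, "other": 7}}
print(companions(prs, "fix-timer"))
```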
The main seL4 repo wants all shell scripts to be portable, but for some of the other repos, it is fine to have bash scripts, at least when they are explicitly bash.
This issue is for either implementing a filter, like the style filter, or adding a per-repo bashism config so that scripts that explicitly invoke bash are not checked.
Currently slightly favouring the filter option, because that would by default lead to checks for new files, and if a script can easily be made portable, it should be.
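The filter could key off the shebang line; a sketch with invented file contents:

```python
# Sketch: only run portability (bashism) checks on scripts that do not
# explicitly invoke bash in their shebang line.
def check_portability(first_line):
    """True if the script claims plain sh and should be checked."""
    return first_line.startswith("#!") and "bash" not in first_line

print(check_portability("#!/bin/sh"))            # portable sh: check it
print(check_portability("#!/usr/bin/env bash"))  # explicit bash: skip
```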
Not entirely clear if we want this.
On Bamboo we had this for all PRs. I would actually prefer to go one step further on PRs and automatically run all of the simulation tests (but trigger hardware tests manually).
An action that runs on pull requests and pushes to the seL4 repo, and runs seL4 tests on simulators for all platforms that support this.