iver-wharf / iver-wharf.github.io
Documentation of Wharf
Home Page: https://iver-wharf.github.io
License: MIT License
Based on iver-wharf/rfcs#8
Currently each program's version is only set when building via the docker Makefile target or via Wharf.
It would be nice if, for example, go install github.com/iver-wharf/wharf-api already contained the version, instead of our current "local dev" version.
Suggest to:

- Get version from version.yaml (compared to now, where it's an empty string in a freshly cloned repo)
- Get Git commit from BuildInfo.Settings["vcs.revision"] (debug.ReadBuildInfo, package runtime/debug)
- Use Git commit date from BuildInfo.Settings["vcs.time"] (it's not the build date, but the closest available. Maybe add that as another field to app.Version, as Version.CommitDate)
- Leave Version.BuildRef unset, and only populate it from Wharf builds, as is done already.
This needs to first be implemented in wharf-core, then into all of the dependent repos.
We have some Go coding style rules that we decided on in the Wharf team a while ago, but this was before we went open source.
These notes need to be moved to iver-wharf.github.io, cleaned up, and then possibly linked in each repo's CONTRIBUTING.md.
Ask me, @jilleJr, where to find these private notes if you wish to take on this task.
Add documentation for iver-wharf/wharf-provider-gitlab#15
GitHub supports template repositories. Once we get the providers a bit more stable and shaped how we want them, we can create a template repository named iver-wharf/wharf-provider-template-go, and then we can add more providers.
Top priority for new providers:
Add documentation for iver-wharf/wharf-provider-azuredevops#14
Based on RFC-0011: quay.io
Docker images are currently stored in our internal docker repo. We want these to be publicly available.
We shall host them over at https://quay.io
Both are free for open source repos. Quay.io includes Red Hat's Clair, which provides industry-leading security scanning.
iver-wharf (same as on GitHub)
https://quay.io/repository/iver-wharf/wharf-api/status

Certificates! Oh the joy!
We need a better way to import our internal self-signed CAs than our current procedure of embedding them into the images. But that deserves a separate GitHub issue and is nothing we should have to worry about here; it can be looked into in #44.
This is an old ticket moved from our internal ticketing system. The idea was to use OPA, but if we could find a different way to solve this without introducing yet another dependency, that would be swell.
This deserves an RFC once POC is working
Currently the certificate added by the Wharf build into the images when building with kaniko is loaded via the following lines:
We want to add the certificates into the containers inside Kubernetes instead, leaving the images free of any self-signed CAs.
Suggestion is to use OPA (Open Policy Agent) in our Kubernetes cluster to dynamically add in the mounting of our CA certs, for example via a configmap.
Steps:

- Add a label wharf-inject-certs: false or an annotation wharf.iver.com/inject-certs: false (to be able to turn it off)
- Mount the /etc/ssl/certs folder in a temp volume, using the configmap volume of our certs
- Mount /etc/ssl/certs on all containers.

After some research (mostly by looking in the update-ca-certificates script), certs need to go into:
# certs are imported from here
/usr/local/share/ca-certificates/**/*.pem
# certs are stored here
/usr/share/ca-certificates/**/*.pem
# conf file tracking all added and ignored certs
/etc/ca-certificates.conf
# all certs in one file
/etc/ssl/certs/ca-certificates.crt
# all certs one by one, name hashed via `openssl rehash .`
/etc/ssl/certs/{HASH-OF-CERT}
That's why it's probably best to do it with an init-container, or maybe an operator. Needs further R&D!
This issue is requesting a tracer bullet: just something that works well enough for regular GNU/Linux containers. Disregard Windows containers for now.
Regards the "container" step
This is no longer relevant as we're transitioning away from Jenkins. However, the feature itself is still heavily requested.
Once the cmd repo has been transferred (see #32) we can add a new issue there with the equivalent feature request, as well as closing this issue.
Add an artifacts property to the container step:

container:
  image: mcr.microsoft.com/dotnet/core/sdk:3.1
  cmds:
    - dotnet test /repo/src/Iver.Spark.sln --results-directory /test-results
  artifacts:
    - /test-results
To consider: curl and wget will not exist on every image that's used. It's a really bad habit to assume the image in use is Linux, has certain tools preinstalled, or even has permission to do certain things.
A good option is to mount a second container, for example as done in the example in their README.md:
https://github.com/jenkinsci/kubernetes-plugin#container-group-support
Trimmed down example of just that:
podTemplate(containers: [
    containerTemplate(name: 'main', image: '$CONTAINER_IMAGE', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'artifacts', image: 'ubuntu:20.04', ttyEnabled: true, command: 'cat')
]) {
  node(POD_LABEL) {
    stage('Run container commands') {
      git 'https://github.com/jenkinsci/kubernetes-plugin.git'
      container('main') {
        sh CONTAINER_CMDS
      }
    }
    if (CONTAINER_ARTIFACTS) {
      stage('Upload artifacts') {
        container('artifacts') {
          sh """
            for file in $CONTAINER_ARTIFACTS/*; do
              curl $WHARF_API/something/artifacts
            done
          """
        }
      }
    }
  }
}
This is a multi-step process as concluded in a previous meeting. Meeting notes: https://iver-wharf.github.io/wharf-notes/2021-11-08-biweekly
The meeting notes state that this should be done after the Jenkins move, but I (@jilleJr) consider it such low-hanging fruit to get at least something up and running that it shouldn't be dismissed.
To begin with, we need a way to build and deploy any branch from the Wharf projects and then deploy them to our internal test environment.
We are depending on lots of libraries, none of which we give credit for. This is a violation of their open source licenses and actually needs to be addressed ASAP, for legal reasons.
We may need to remove previously released versions that do not have this, or backport these changes to them.
Needs to be investigated:
How to extract licenses automatically on build, for example via https://github.com/google/go-licenses
What requirements do the licenses have? Can we just add the licenses next to the binaries in the Docker images, or do we need a CLI argument to extract the licenses, or new endpoints to return them? For the frontend I think just a "third party licenses" page would suffice, but that needs to be investigated as well.
Are we using any dependencies with licenses incompatible with our MIT license?
Can we get an actual open source lawyer to look over the upcoming RFC for this, to see if our solution is compliant with the different licenses?
This needs an RFC of the proposed solution for how we solve this for all of our code repos.
The step types docs are very basic at the moment, using solely a code block to describe everything.
This is fine for us internally, but does not demonstrate a deeper grasp of how these step types work.
Suggested convention:
<!-- panels:start -->
<!-- div:left-panel -->
# Container step
Short description of what it does.
Longer description of how it works.
<!-- div:right-panel -->
## Minimal example
```yaml
myStage:
myStep:
container:
image: alpine:latest
cmds:
- echo 'hello world'
```
<!-- panels:end -->
<!-- panels:start -->
<!-- div:left-panel -->
## Parameters
### `image`
Type: `string`
Short description of what it does
```yaml
image: alpine:latest
```
### `cmds`
Type: `string array`
Short description of what it does
```yaml
cmds:
- echo 'hello world'
```
<!-- div:right-panel -->
## Full example
```yaml
myStage:
myStep:
container:
image: alpine:latest
cmds:
- echo 'hello world'
os: linux/windows
shell: /bin/sh
secretName: mysecret
serviceAccount: default
certificatesMountPath: /usr/local/share/ca-certificates
```
<!-- panels:end -->
Preview:
Issue #72 regards adding vulnerability scans, whereas this issue only regards unit testing and formatting checks.
For all Go repos:

- goimports diffing, making sure everything is formatted (via git diff --exit-code)

For Angular repo:
This needs to be added as GitHub Actions to the applicable repos to be run on new or updated pull requests.
Codacy already deals with linting and some static code checks
The future plan is to have Wharf run and validate pull requests, but as Wharf lacks that functionality we have to settle for GitHub Actions, as some is better than none.
An improvement of usability would be to be able to tell the Jenkins build which git committish to build.
Currently it will build the latest commit on the target branch; you cannot build or deploy an older version. That should remain the default behaviour, but the freedom of being able to declare which committish to build could improve usability a lot.
What needs to be done:
This will probably be irrelevant once we get the cmd project up and running, but the same applies there, so we're keeping this issue for that sake.
When importing from either provider, you need an access key.
The access this key needs has to be documented, as it's an obvious question that arises.
Todo:
We should make it possible to update the credentials for a project in the future when we need it, such as when we need to set commit statuses and then need permission for that. But let's follow the "principle of least privilege".
Remaining repos:

Rules for moving these:

- Move the .git folder into the new repo

Once these have been moved, we can update the "About Wharf" page in this repo so it no longer says "goal is to be open sourced by end of 2021".
Currently providers can talk to the API, but not the other way around. We want to change that.
Most providers support checks/statuses that can reject PRs and such (see iver-wharf/wharf-provider-azuredevops#5)
The previous attempt was to solve this with a message queue (see https://github.com/iver-wharf/messagebus-go) just to get around the circular-dependency issue. This has since been concluded as overkill.
What we can do instead is use dependency inversion: we create a standardized API for the providers that they have to fulfill, and the main API talks to them through this standardized interface. By doing this, we can also let the main API do all the redirection of endpoints, so the end user does not talk to the provider/import APIs directly, and we can populate the data that the providers get as well, such as the project name when the user only provided the project ID.
This needs an RFC, especially regarding how this standardized API should look. Whether we stick to REST or transition over to an asynchronous protocol like gRPC or websockets is up for discussion. Personally (from @jilleJr's perspective) I find gRPC the most enticing.
Our unit tests are not that coherent in their names. We use a mix of different naming conventions, as they are written by different developers.
The simple answer is just to name them:
func Test{NameOfTypeOrFunction}(t *testing.T)
// Example
func Add(a, b int) int
func TestAdd(t *testing.T)
But it gets diffuse when we start having multiple tests targeting the same function/type/method but with slight variations.
An example format of this (though not that great IMO):
func Test{NameOfTypeOrFunction}_{StateOfType}_{ExpectedResult}(t *testing.T)
func TestAdd_GivenNegativeValues_ReturnsNegative(t *testing.T)
Suggest to write an RFC of this.
Describe where the cloned repository lands by default, so it's clear to the users.
Add documentation for iver-wharf/wharf-provider-github#15
When importing from either provider, you need an access key.
The access this key needs has to be documented, as it's an obvious question that arises.
Todo:
We should make it possible to update the credentials for a project in the future when we need it, such as when we need to set commit statuses and then need permission for that. But let's follow the "principle of least privilege".
Some shell syntaxes allow escalation of privileges, such as the ; & | combo.
We need to investigate these security holes and whether they can damage us. It's not unthinkable that a customer sells Wharf as a service and allows arbitrary commands to be executed. This is an attack vector that needs to be investigated, to see if it can be abused to hurt us or our customers.
This was originally mentioned by Niklas E. We could perhaps consult him for more information about this and ways to solve it.
Investigate to figure out what needs to be done and how long it will take.
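One mitigation worth including in that investigation, on the Go side: never pass user input through a shell, but exec the binary directly with an argv slice, so metacharacters like ;, &, and | stay literal arguments. A minimal sketch (runArgv is a hypothetical helper, not current Wharf code):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runArgv executes name with args directly, without involving a
// shell, so metacharacters like ; & | are never interpreted and
// cannot chain extra commands.
func runArgv(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// The ";" is passed to echo verbatim; no second command runs.
	out, err := runArgv("echo", "hello; rm -rf /tmp/x")
	fmt.Println(out, err)
}
```

This does not solve the whole problem (Wharf build steps deliberately run shell commands), but it narrows the attack surface wherever a full shell is not actually needed.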
When importing from either provider, you need an access key.
The access this key needs has to be documented, as it's an obvious question that arises.
Todo:
We should make it possible to update the credentials for a project in the future when we need it, such as when we need to set commit statuses and then need permission for that. But let's follow the "principle of least privilege".
We're introducing problem types, according to IETF RFC-7807, and need problem-describing pages for the following:
See api!45 and api!46 MRs for more info (sorry, not open sourced yet)
This concerns the implementation of OpenID Connect as a mechanism for authentication, and eventually possibly also an implementation of authorisation.
TODOs:

- Read the scopes from the token.
- Document the process of configuring an OpenID Connect ID provider for Wharf.
Steps:
This is about fetching deployment status from kubernetes clusters and displaying this on Wharf frontend.
The task is to investigate how to get that information from the cluster.
There are multiple scenarios for how to do that:
What is the most convenient for the user?
What is the easiest to implement by us?
Should we monitor per build?
Should we monitor per environment for a project?
After the investigation we need to hold a meeting with Bjorn and agree on a particular solution for this issue.
Best case:
The issues should contain a description of how to build and scan the Docker image yourself locally, such as:
$ docker build . -t wharf-web
$ docker save wharf-web -o image.tar
$ trivy image --input image.tar
There are some alternatives available, just searching the internet for "trivy github action" yields lots of good alternatives.
Suggest adding this to one repo, and once that is reviewed and merged, only then start applying it to the rest of the repos.
Repos that need this:
In the future we can translate this to a Wharf build, but as Wharf lacks this kind of integration right now we should start the work using GitHub Actions.
After #33 is deployed and tested, we then need to consider persistence.
Add persistence of messages that are going to be sent as email notifications. Messages should first be saved to persistent storage, and after that the module should attempt to send them. Messages should be deleted from persistence only after they are successfully sent.
We do not need to keep them in a database, yaml/json files should be enough.
The number of retries should be configurable (via the configmap from the initial task), with exponential delay between retries.
We more or less only want to use ShouldBindJSON, because BindJSON also sets the status code to 400 (Bad Request) and Content-Type: text/plain; charset=utf-8, and gin will spam warnings in the logs if you set those multiple times.
Docs: https://github.com/gin-gonic/gin#model-binding-and-validation
Based on RFC: iver-wharf/rfcs#26
Published: https://iver-wharf.github.io/rfcs/published/0026-v2-go-modules-release
Need to update the module names in the following repos to include the major version as suffix:
We need to stay compatible with Go modules to be able to use this library from other projects. Go modules almost require their own doctrine, but this PR is based on some statements from here: https://github.com/golang/go/wiki/Modules#releasing-modules-v2-or-higher
Example errors that appear, as we haven't been Go modules compatible up until now:
$ go get github.com/iver-wharf/wharf-api/pkg/[email protected]
go get github.com/iver-wharf/wharf-api/pkg/[email protected]:
github.com/iver-wharf/[email protected]:
invalid version:
module contains a go.mod file, so major version must be compatible:
should be v0 or v1, not v4
(I've wrapped the console output just to make it more easily readable.)
Ex: https://quay.io/repository/iver-wharf/wharf-provider-azuredevops/create-notification, but for all the repos
Suggest sending them to a Teams channel.
Depends on iver-wharf/wharf-helm#11
Enforcing GPG signed Git commits is a great security feature for anything that deals with Git repos.
Suggested implementation:

When enabled, reject builds on commits that are unsigned or unverified.

Users provide Wharf with a list of public keys, kept in its own storage, which the cmd project then uses with a basic git verify-commit HEAD call, preferably enabled via a simple --verify-commit flag or something similar on the cmd command.
Wharf API shall provide all GPG keys for a given project on demand, in a single endpoint, for the builder to use when verifying commits. For example: GET /projects/{projectid}/gpg/keyring to get all GPG keys together in PEM format (one directly after the other), so they can all be imported into GPG at once by doing:
$ curl "https://wharf.local/api/projects/${PROJECT_ID}/gpg/keyring" > keyring.pub
$ gpg --import keyring.pub
We can later add a GET /projects/{projectid}/gpg/keys endpoint to receive each key with some metadata in JSON format instead, similar to how GitLab does it. Would be nice to add just for the flexibility.
Public GPG keys will be stored in the DB. (Columns need to be named public_key, PublicKey, or something like that.)
This was originally a comment from Niklas Engvall, and I [kalle] presume it comes from his usage of Argo CD, which has this feature.
Basic API (REST) that can send notifications based on some configuration.
To begin with, you configure the component with SMTP credentials, and then the API can tell this component to send an email based on its config from the database.
This component only needs a basic HTTP endpoint that will tell it to send an email. The API implementation comes later in iver-wharf/wharf-api#8
Having persistence and remembering notifications that failed to send is out of scope for this issue. That comes later. Make this just a dumb endpoint for sending emails to begin with.
Our future planned workflow for running integration/system tests ends with tearing the test environment down again (e.g. via kubectl delete namespace).
What would aid in this particular use case is tighter integration with Kubernetes. We already have Kubernetes integration planned (see "Investigate deployment status"), but what this ultimately needs is a way to configure Wharf to wait until all the deployed resources are fully deployed and running (ex: all deployments have 100% of their pods in Ready state) before proceeding with running the tests. Though this can be scripted and is not directly dependent on any Wharf functionality; Wharf can only add QoL features to this.