suse / catapult

SCF and KubeCF CI implementation

License: Apache License 2.0

Languages: Makefile 8.75%, Shell 76.21%, Dockerfile 1.94%, Go 4.51%, HCL 3.96%, HTML 3.04%, TeX 0.98%, Smarty 0.61%
Topics: kubecf, scf

catapult's Introduction

Catapult

$> git clone https://github.com/SUSE/catapult.git && cd catapult
$> make all

This will start a local Kind cluster and deploy kubecf on top of it. Remove everything with make clean.

Next, check the First steps wiki page or do:

$> make help
$> make help-all

Description

Catapult is a CI implementation for KubeCF, SCF & Stratos, designed from the ground up to work locally. This allows iterating on it and using it for manual tests and development of the products, in addition to running it in your favourite CI scheduler (Concourse, Gitlab…).

Catapult supports several k8s backends: it can create CaaSP4, GKE, and EKS clusters on its own, and you can bring your own cluster with the "imported" backend.

It is implemented as a little lifecycle manager (a finite state machine), written with Makefiles and Bash scripts.

The deployments achieved with Catapult are not production ready; don't expect them to be in the future either. They are for developing and testing.

It also contains some goodies to aid in development and testing deployments (see make module-extra-*).

To use it in a CI, like Travis, see for example:

Documentation

For now, all documentation is in the project wiki.

Contributing

Please run catapult's linting, unit tests, integration tests, etc. for a full TDD experience, as PRs are gated through them (see the "build status" label):

 $> make catapult-tests

Debug catapult with DEBUG_MODE=true.

You can get a local development environment for SCF or KubeCF, with all needed catapult deps, with:

$> docker run -v /var/run/docker.sock:/var/run/docker.sock -ti --rm splatform/catapult:latest dind

Check out the Run in Docker page on the wiki for more options.

catapult's People

Contributors

andreas-kupries, dirkmueller, dragonchaser, fargozhu, flaviodsr, greygoo, harts, jandubois, jimmykarily, mook-as, mudler, prabalsharma, satadruroy, stefannica, svollath, thardeck, viccuad, viovanov


catapult's Issues

Move EKCP_PROXY usage to EKCP backend

Right now, EKCP_PROXY is set through all scripts when needed, so that kubectl works against the https_proxy for ekcp.
I would prefer to set EKCP_PROXY and the http*_proxy once, when BACKEND=EKCP. This would add EKCP compatibility to all scripts, current and future, and yield a cleaner implementation.

One option could be to generate a kubectl wrapper in buildir/bin when BACKEND=EKCP, which sets the https_proxy prior to calling kubectl.
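A minimal sketch of such a wrapper, assuming it gets generated into buildir/bin (the PATH self-removal and variable handling are illustrative, not the actual implementation):

#!/usr/bin/env bash
# Hypothetical buildir/bin/kubectl wrapper, generated when BACKEND=EKCP:
# exports the proxy, then delegates to the real kubectl elsewhere in PATH.
set -euo pipefail

export https_proxy="${EKCP_PROXY:?EKCP_PROXY must be set for the EKCP backend}"

# Drop this wrapper's own directory from PATH so we don't recurse into ourselves.
self_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PATH="$(printf '%s' "$PATH" | tr ':' '\n' | grep -vx "$self_dir" | paste -sd: -)"

exec kubectl "$@"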

Add make target for downloading kubectl, helm, cfcli per build

Different k8s backends need different versions of helm, kubectl, etc.: AWS expects a specific version of kubectl depending on the k8s version you deploy, CaaSP should be pinned too, etc.

Add default HELM_VER, KUBECTL_VER, CFCLI_VER per backend, and save the version used for the whole life of the deployment in something like #30.

As part of the make buildir or something similar, populate build/bin/ with them.

By default, prefer the downloaded binaries. Add option to disable downloaded binaries or prefer local binaries by changing the concat order when creating PATH in build/.envrc.
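A sketch of what populating build/bin could look like (the URLs are the upstream release locations; the pinned versions here just mirror ones already used elsewhere in catapult):

# Hypothetical: populate build/bin with pinned, per-backend tool versions.
set -euo pipefail

BIN_DIR="build/bin"
mkdir -p "${BIN_DIR}"

# Per-backend defaults; a backend's defaults.sh would override these.
KUBECTL_VER="${KUBECTL_VER:-v1.17.0}"
HELM_VER="${HELM_VER:-v3.1.1}"

curl -fsSL -o "${BIN_DIR}/kubectl" \
  "https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VER}/bin/linux/amd64/kubectl"
chmod +x "${BIN_DIR}/kubectl"

curl -fsSL "https://get.helm.sh/helm-${HELM_VER}-linux-amd64.tar.gz" \
  | tar -xzO linux-amd64/helm > "${BIN_DIR}/helm"
chmod +x "${BIN_DIR}/helm"

# Prefer the downloaded binaries: put build/bin first when build/.envrc builds PATH.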

Make minikube deployments work e2e

Currently make all-minikube will fail, because:

  • The default storageclass from minikube is called "standard", not "persistent". For now one needs to run BACKEND=minikube make k8s, then make scf-chart scf-gen-config, change scf-config-values.yaml to point at the "standard" storageclass (see the sketch after this list), and proceed with make private modules/scf install
  • SCF will deploy correctly, but the SCF API endpoint is not exposed to the outside network of the VM, neither by ingress nor by a nodeport. You can push a pod with catapult to the cluster and get a terminal inside, so you can do the usual stuff, with make module-extra-terminal
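A sketch of the storage class tweak from the first bullet (GNU sed shown; macOS sed needs -i ''):

# Point the generated config at minikube's "standard" storage class.
sed -i 's/storage_class: persistent/storage_class: standard/' scf-config-values.yaml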

Documentation in code base

Right now, if someone is new to catapult and wants to extend the tool, there is too little information to start with. We do have a wiki, but that is more inclined towards usage and some design.

In my opinion, what we need is documentation in the code base itself. Each sub-folder should have a readme which tells us about each file in it.
For example, if I go into modules/extra I should know what fissile.sh is, instead of going inside the file to read the code.
I think this will help us understand the full capabilities of catapult at present, and in turn will result in feature enhancements.

Some kubecf resources not cleaned up by `make scf-clean`

Testing kubecf deployments via catapult: when I clean a kubecf cluster with make scf-clean and attempt to redeploy with make scf, I encounter this error:

Installing CFO from: https://s3.amazonaws.com/cf-operators/release/helm-charts/cf-operator-3.2.1%2B0.ga32a3f79.tgz
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: namespace: , name: cf-operator-quarks-job, existing_kind: rbac.authorization.k8s.io/v1, Kind=ClusterRole, new_kind: rbac.authorization.k8s.io/v1, Kind=ClusterRole
Makefile:17: recipe for target 'install' failed

I found I have to delete the CR and CRBs manually with the following commands before I can deploy again:

kubectl delete clusterrolebinding cf-operator cf-operator-quarks-job
kubectl delete clusterrole cf-operator-quarks-job cf-operator

And in a previous reinstall, I also had to run:

kubectl delete psp susecf-scf-default

(though I didn't see this psp lingering after the last clean)
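Until make scf-clean handles these itself, a defensive cleanup could look like this (resource names taken from the errors above):

# Leftover cluster-scoped resources that block a reinstall; --ignore-not-found
# keeps this safe to run even when a previous clean already removed them.
kubectl delete clusterrolebinding cf-operator cf-operator-quarks-job --ignore-not-found
kubectl delete clusterrole cf-operator cf-operator-quarks-job --ignore-not-found
kubectl delete psp susecf-scf-default --ignore-not-found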

I don't see any .envrc in buildkind folder

backend/check.sh
~/gop/src/github.com/SUSE/catapult/buildkind ~/gop/src/github.com/SUSE/catapult
ℹ  🚀  kind ☸ kind  🎂  backend/check.sh ➀  Loading
ℹ  🚀  kind ☸ kind  🎂  backend/check.sh ➀  Testing imported k8s cluster
Testing kube
Verified: nodes should have kernel version 3.19 or higher
Verified: authenticate with kubernetes cluster
Configuration problem detected: all kube-dns pods should be running (show N/N ready)
Configuration problem detected: all tiller pods should be running (N/N ready)
Verified: A storage class should exist in K8s
ℹ  🚀  kind ☸ kind  🎂  backend/check.sh ➀  Adding cap-values configmap if missing
ℹ  🚀  kind ☸ kind  🎂  backend/check.sh ➀  Initializing helm client
"stable" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈ 
✅  🚀  kind ☸ kind 🎂  backend/check.sh ➀  k8s cluster imported successfully
prabal@razor:~/gop/src/github.com/SUSE/catapult/buildkind> ll
total 24
drwxr-xr-x 2 prabal users  129 Mar 12 12:03 bin
-rw-r--r-- 1 prabal users 6759 Mar 12 11:59 defaults.sh
-rw-r--r-- 1 prabal users  212 Mar 12 12:00 kind-config.yaml
-rw------- 1 prabal users 5354 Mar 12 12:02 kubeconfig
-rw-r--r-- 1 prabal users 2613 Mar 12 12:02 storageclass.yaml
prabal@razor:~/gop/src/github.com/SUSE/catapult/buildkind> 

Add make scf-cache to cache all needed container images

We could add an scf-cache target that would pre-cache the images from the chart, and make pipeline runs more deterministic in time.

Maybe have a catapult/.cache-images or so, and clean entries older than ~2 days on make scf-clean or make clean.
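A sketch of what an scf-cache target could do, assuming the chart ships an image list like the stratos chart's imagelist.txt (file and directory names hypothetical):

# Hypothetical scf-cache body: pull every image the chart references and
# stamp them under .cache-images so clean targets can expire old entries.
set -euo pipefail

CACHE_DIR=".cache-images"
mkdir -p "${CACHE_DIR}"

while read -r image; do
  docker pull "${image}"
  touch "${CACHE_DIR}/$(echo "${image}" | tr '/:' '__')"
done < imagelist.txt

# On make scf-clean / make clean: drop cache stamps older than ~2 days.
find "${CACHE_DIR}" -type f -mtime +2 -delete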

README should be more clear on the project purpose

README should be more explicit about the nature of this project.

It is a development project - no support is intended - use it at your own risk.

Catapult aims just to quickly deploy an environment with SCF which can be re-used later for development, testing or even demos - there isn't any production-ready setup, and there never will be.

`scf` target should have a wider scope

The scf target currently depends on the previous targets to deploy an scf installation. It would be better to have a public scf target which also takes care of the setup, so that make scf always returns a deployed cluster (regardless of the backend).

e.g. we can rename the current scf target and make it private (e.g. make deploy-scf) so it cares just about the helm bits, and have a smarter one-liner make scf (which becomes the public one, consumed by end-users).

Group common functions into helpers

There are common usages and patterns that can be identified in the current scripts (a lot of duplicated code to do kube calls!).

We could group them into functions, split into helpers (even by category) that can be included when necessary. During refactoring, they should come with unit tests whenever possible.

Relates to #35

duplicate of #128

Accidental duplicate


Document state steps, and how to add new states

It would be nice to have a pipeline-style diagram of the states (k8s backend, scf, test), the make targets needed to move through the states, and what is expected at the end of each state.

It would also be nice to document how to add a new state: which common variables are needed, provide a state template, etc.

Architecture overview

The README or the wiki should come with pictures or diagrams about the architecture, and a general overview of what catapult actually does (and where it fits!).

Share default overrides between modules and backends

Currently there is no way to override, from backends, e.g. defaults that come from modules - this has the downside of needing hacks in modules for specific backends.

dbb9c9f is an example of how it could be done - read a generated defaults.sh which is the result of concatenating all the defaults, allowing them to refer to each other freely (or something like that)

Refer to #99

refactor: remove scf from master

It was discussed in a recent QA and catapult meeting that everything relevant to scf should live in a branch, and master should only contain kubecf-related code. This will simplify catapult to some extent and help us with feature enhancements.

Automate stratos metrics deployment

Add a new make target to be able to deploy Stratos Metrics as explained here:
https://documentation.suse.com/suse-cap/1.5/single-html/cap-guides/#sec-cap-stratos-metrics

In case of deploying from a staging repo, supply the registry overrides for the staging registry correctly; see:

https://github.com/SUSE/stratos-metrics/blob/master/README.md#deploying-metrics-from-a-private-image-repository

As an example, here's the helm chart values for a staging registry configuration for metrics:

kube:
  registry:
    hostname: staging.registry.X
    username: <USER>
    password: <PASSWORD>
    email: default

prometheus:
  imagePullSecrets:
  - name: regsecret

Simplify caasp4os implementation

Problem

To deploy caasp4os we start a docker image locally. The image:

  • Is a SLE15 with the caasp4 product installed, so it contains the skuba CLI for
    caasp4 and the terraform from the caasp4 product. Installing the skuba package
    gives you the terraform files that caasp4 ships, plus support for Openstack
  • Needs an ssh key present for terraform.
  • Uses skuba and terraform to get the cluster deployed, and creates files to monitor and destroy that cluster.

This is achieved by having a "punctured" docker image:
https://github.com/SUSE/catapult/blob/master/backend/caasp4os/lib/skuba.sh#L44-L51
For running terraform we need access to the host ssh key, so we pass the ssh-agent
unix socket; we need to write outside of the docker image, so we pass the outside
path as a volume, plus /etc/passwd and the user uid. We also pass the env vars
needed for ssh-agent to work, and things like that.

All of this is hacky, and it also makes it flimsy outside of Linux.

Possible solutions

  • Use an image supported by the caasp team

  • Move the current implementation to a VM instead of a docker image; this would make it work outside of Linux

  • Have a terraform module to get a host converted into a caasp4 client. That client could live on a local VM or be another machine on the cloud

Make the make targets' scopes more transparent and clear

Problem: there is no real differentiation between private and public targets, and backend-private targets are currently used by end-users.

We should split make targets between internal and public, so as to provide a stable and solid API beyond internal usage.

This also means that we should document the public ones, providing also a description of the arguments that can tweak the targets' scopes.

E.g:

Internal:

  • make minikube
  • make kind
  • make kind-clean
  • make minikube-clean

Public:

  • make cluster / make kubernetes
  • make scf
  • make clean
  • make scf-login
  • make all

Public (Extra-core functions, Plugins):

  • make ingress and make ingress-forward
  • make terminal
    ...

This way we can define a configuration, aka "Context", which our make targets run against. A Context can be a JSON/YAML file (#30) or an ENV configuration (e.g. CLUSTER_NAME=gke-test1, BACKEND=gke).

Relates to: #40 and #34

Catapult-web should allow also cluster creation

In the same fashion as we spawn a terminal, catapult-web should also have a way to let the user post a JSON config and start a deployment in a web tty.

Then the same cluster would show up later on automatically in the index list.

Introduce backend variable

As the codebase started to grow with new backends, we broke the workflow by adding highly backend-dependent targets. This comes with a trade-off: it allows us to integrate features more quickly, but makes the usage more complex as we go.

What about introducing a general BACKEND variable and sharing all the make targets between backends? (At least, from a future public, well-defined API perspective.)

So nobody has to look at the docs anymore when changing the backend, but only has to care about the states.

Relates to #35

EKS cleanup

EKS cleanup using terraform sometimes results in a ton of leftovers, which is a mess if you have to clean up manually.

Suggestion: update the clean-eks target, adding terraform show as the last command, to see what has been left behind and make cleaning it up easier.
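The suggestion as a sketch, assuming clean-eks already runs from the backend's terraform directory:

# After destroy, show whatever terraform still tracks, so leftovers are
# visible immediately instead of being discovered later in the AWS console.
terraform destroy -auto-approve
terraform show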

Define variables needed by each module

We could come up with a strategy to define in one place the variables needed per module, which then could be re-used to generate docs automatically, e.g. an env.sh for each module which contains the defaults (see the sketch below).

Could be done at the same time when considering: #72
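A sketch of what a per-module env.sh could look like (variable names are illustrative, following the existing defaults.sh convention, not the module's actual variables):

# Hypothetical modules/scf/env.sh: one place declaring the module's variables.
# A doc generator could parse these lines into a per-module reference.
export SCF_CHART_URL="${SCF_CHART_URL:-}"       # chart to deploy; empty = latest release
export SCF_OPERATOR="${SCF_OPERATOR:-true}"     # deploy via cf-operator
export STORAGE_CLASS="${STORAGE_CLASS:-persistent}"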

Support for Json configuration files

It would be much easier to use a JSON or YAML structure than environment variables, as it is today.

This shouldn't replace the env usage, but should instead add support for both ways of running catapult targets.
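One possible shape for supporting both, sketched with jq (the file name and precedence rule are hypothetical):

# Hypothetical: load a JSON config into the environment before running targets,
# without replacing plain env-var usage -- explicitly set env vars still win.
CONFIG_FILE="${CONFIG_FILE:-catapult.json}"   # e.g. {"BACKEND":"gke","CLUSTER_NAME":"gke-test1"}

if [ -f "${CONFIG_FILE}" ]; then
  while IFS='=' read -r key value; do
    # Only set variables that are not already defined in the environment.
    if [ -z "${!key:-}" ]; then
      export "${key}=${value}"
    fi
  done < <(jq -r 'to_entries[] | "\(.key)=\(.value)"' "${CONFIG_FILE}")
fi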

remove chmod 777

There are several files containing chmod 777:

backend/caasp4os/terraform-os/storage-instance.tf:      "sudo chmod 777 /srv/nfs/kubedata",
modules/experimental/eirinifs.sh:sudo chmod 777 image/eirinifs.tar &&  kubectl cp image/eirinifs.tar scf/bits-0:/var/vcap/store/bits-service/assets/eirinifs.tar
modules/extra/fissile.sh:            /bin/bash -c "cd /bosh-release && bosh create-release --force --tarball=${BOSH_REL} --name=${FISSILE_OPT_RELEASE_NAME} --version=${FISSILE_OPT_RELEASE_VERSION} && chmod 777 ${BOSH_REL}"

Identify the OS Host Type

Description

When downloading the kind binary, the script always selects the Linux file, independently of the OS host being a mac (darwin), and that breaks the make flow.

Solution

  1. Add an if-then clause inside common.sh that stores the OS host type for later usage:

     if [[ "$OSTYPE" == "darwin"* ]]; then
       export KIND_OS_TYPE="${KIND_OS_TYPE:-kind-darwin-amd64}"
     else
       export KIND_OS_TYPE="${KIND_OS_TYPE:-kind-linux-amd64}"
     fi

  2. Change tools.sh to support the new env var:

     wget https://github.com/kubernetes-sigs/kind/releases/download/${KIND_VERSION}/${KIND_OS_TYPE}

Diego deployments defaults with btrfs

$> kubectl logs -f diego-cell-0 -n scf -c diego-cell
[...]
+ chmod 600 /var/vcap/data/grootfs/store/privileged/.backing-store
+ truncate -s 169034383360 /var/vcap/data/grootfs/store/privileged/.backing-store
+ mkfs.btrfs -f /var/vcap/data/grootfs/store/privileged/.backing-store
btrfs-progs v4.4.1
See http://btrfs.wiki.kernel.org for more information.

Label:              (null)
UUID:               af1eb860-6077-4736-9602-a463a29a20a2
Node size:          16384
Sector size:        4096
Filesystem size:    157.42GiB
+ local logfile=/var/log/create_and_mount_filesystem.log
+ mount -t btrfs -o loop,user_subvol_rm_allowed /var/vcap/data/grootfs/store/privileged/.backing-store /var/vcap/data/grootfs/store/privileged
Block group profiles:
  Data:             single            8.00MiB
  Metadata:         DUP               1.01GiB
  System:           DUP              12.00MiB
SSD detected:       no
Incompat features:  extref, skinny-metadata
Number of devices:  1
Devices:
   ID        SIZE  PATH
    1   157.42GiB  /var/vcap/data/grootfs/store/privileged/.backing-store

+ set +x
mount: /var/vcap/data/grootfs/store/privileged/.backing-store: failed to setup loop device: No such file or directory
Waiting for the loop kernel module to be loaded... done; bailing out for container restart.

https://github.com/SUSE/catapult/blob/master/modules/scf/defaults.sh#L14 invalidates https://github.com/SUSE/catapult/blob/master/modules/scf/gen_config.sh#L14, which in the end results in a btrfs setup on diego that fails.

Feature Request: support for testing BOSH releases.

After switching over to kubecf, we feel there is a need to build a tool for developers which would make changing, building, deploying and testing changes in BOSH releases consumed by kubecf easier.

The anatomy of the current process is as follows:

  1. Clone the target BOSH release and make changes locally or push it to feature branch.
  2. Create a BOSH release using bosh create-release...
  3. Build its release image using fissile build release-images...
  4. Load that image into target system e.g: minikube or kind.
  5. Test the changes.
  6. If there are any issues, repeat steps 1 through 5.

We have created a script to automate the aforementioned steps - the attached rebuild-brains.sh.txt - however it's not generic or easily configurable. It would be great to see if this use-case can be accommodated in catapult.
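A sketch of the inner loop a generic target could automate, following steps 2-4 above (paths and the image name are placeholders; exact bosh/fissile flags vary per release):

# Hypothetical inner loop for a generic BOSH-release development target.
set -euo pipefail

RELEASE_DIR="${RELEASE_DIR:?path to the cloned BOSH release}"
cd "${RELEASE_DIR}"

# Step 2: build a dev release from the local working tree.
bosh create-release --force --tarball=dev-release.tgz

# Step 3: turn the release into a container image.
fissile build release-images

# Step 4: load the resulting image into the target kind cluster.
kind load docker-image "${RELEASE_IMAGE:?image produced by fissile}"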

Move kubecf deployments to its own module

I would like to separate kubecf deployments into a modules/kubecf or the like, by copy-pasting the current scf module. Right now it seems scf could be frozen, which would actually help in having a QA tool with fewer regressions. Also, kubecf is starting to do things differently and accruing changes.

This would mean having to publicize make kubecf instead of make scf, though.

awk illegal statements

Issue

The wait_ns script is throwing some exceptions related to the awk command:

awk: syntax error at source line 1
 context is
	{ if ((match($2, >>>  /^([0-9]+)\/([0-9]+)$/, <<<
awk: illegal statement at source line 1

Environment

System Version: macOS 10.15 (19A546d)
Kernel Version: Darwin 19.0.0
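The three-argument form of match() is a GNU awk extension; the BSD awk shipped with macOS only supports match(string, regexp), hence the syntax error. A portable rewrite of that kind of "N/N ready" check can use split() instead (a sketch, since the full wait_ns line isn't shown here):

# Portable across GNU and BSD awk: split "ready/total" on "/" instead of
# capturing groups with gawk's 3-argument match().
kubectl get pods | awk 'split($2, p, "/") == 2 && p[1] == p[2] { print $1, "ready" }'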

make sub-folders names and file names more intuitive

Right now many of us are on-boarding to catapult. In my opinion, it would be helpful to rename a few sub-folders and filenames to make them more intuitive, especially since we don't have any documentation in the code base.
For example:

  • backend should be renamed to something that tells us it is about the k8s platform, and not about the backend of the catapult tool itself.

  • catapult/tests should be renamed to catapult/catapult-tests, or catapult/modules/tests should be renamed to catapult/modules/kubecf-tests

  • modules/extra should be renamed to something which tells us what's inside it. If it holds dev-tools then it should be named as such

  • modules/experimental should not be in master if it indeed is experimental; if it is usable, then it should be named as such

  • metrics should be renamed to stratos-metrics, as it can be confused with some metrics collection tool

  • What's in catapult/kube? If it's charts used by modules/extra/, then it should be moved in there.

I think this might help us with extending the tool. We also need documentation in the code base, but I will create a separate issue for that.

make stratos fails with cf-operator/kubecf

Description:

After deploying kubecf, running make stratos fails with a bunch of subsequent errors.

First error:

chuller@d10:/data/src/suse/catapult(master)> make stratos
make -s -C modules/stratos
/data/src/suse/catapult/buildkind /data/src/suse/catapult/modules/stratos
ℹ  🚀  kind ☸ kind  🎂  ./clean.sh ➀  Loading
configmap/cap-values patched
✅  🚀  kind ☸ kind 🎂  ./clean.sh ➀  Stratos removed
/data/src/suse/catapult/buildkind /data/src/suse/catapult/modules/stratos
ℹ  🚀  kind ☸ kind  🎂  ./chart.sh ➀  Loading
⚠  🚀  kind ☸ kind 🎂  ./chart.sh ➀  No stratos chart url given - using latest public release from kubernetes-charts.suse.com
$HELM_HOME has been configured at /data/src/suse/catapult/buildkind/.helm.
Not installing Tiller due to 'client-only' flag having been set
"suse" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "suse" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.
console/Chart.yaml
console/values.yaml
console/templates/__helpers.tpl
console/templates/deployment.yaml
console/templates/ingress.yaml
console/templates/pre-install.yaml
console/templates/secrets.yaml
console/templates/service.yaml
console/templates/volume-migration.yaml
console/.helmignore
console/imagelist.txt
configmap/cap-values patched
✅  🚀  kind ☸ kind 🎂  ./chart.sh ➀  Stratos chart uncompressed
/data/src/suse/catapult/buildkind /data/src/suse/catapult/modules/stratos
ℹ  🚀  kind ☸ kind  🎂  ./gen-config.sh ➀  Loading
ℹ  🚀  kind ☸ kind  🎂  ./gen-config.sh ➀  Generating stratos config values from scf values
2020/02/19 16:47:47 error applying patch: yamlpatch replace operation does not apply: doc is missing path: /kube/registry/hostname
make[1]: *** [Makefile:13: gen-config] Error 1

The culprit here seems to be in modules/scf/gen_config.sh: the last block added in that file only runs when ${SCF_OPERATOR} is false. Otherwise that block is never added, which leaves you with an scf-config.yaml that is missing hostname, password and other entries.

Just moving the last fi in the file up a bit is not a quick fix; that then triggers other issues with kubecf/cf-operator deployments themselves.

When manually working around the issue, the resulting scf-config-values.yaml looks as follows:

console:
  service:
    ingress:
      enabled: true
features:
  eirini:
    enabled: true
kube:
  organization: cap
  pod_cluster_ip_range: 0.0.0.0/0
  registry:
    hostname: registry.suse.com
    password: ""
    username: ""
  service_cluster_ip_range: 0.0.0.0/0
  storage_class: persistent
services:
  router:
    externalIPs:
    - 172.17.0.2
    - 172.17.0.2
    type: LoadBalancer
  ssh-proxy:
    externalIPs:
    - 172.17.0.2
    - 172.17.0.2
    type: LoadBalancer
  tcp-router:
    externalIPs:
    - 172.17.0.2
    - 172.17.0.2
    port_range:
      end: 20008
      start: 20000
    type: LoadBalancer
system_domain: 172.17.0.2.nip.io
Error: render error in "console/templates/pre-install.yaml": template: console/templates/pre-install.yaml:15:21: executing "console/templates/pre-install.yaml" at <.Values.kube.storage_class.persistent>: can't evaluate field persistent in type interface {}

This is triggered by the storageclass entry not being correct. It needs to be:

  storage_class:
    persistent: persistent

When working around this, the remaining issue that appears is

ℹ  🚀  kind ☸ kind  🎂  ./install.sh ➀  Deploying stratos
Error: render error in "console/templates/ingress.yaml": template: console/templates/__helpers.tpl:151:3: executing "ingress.host" at <required "Host name is required" $host>: error calling required: Host name is required

That error is triggered because the scf-config-values.yaml generated for kubecf no longer contains an /env/DOMAIN entry, which stratos needs. As catapult automatically creates the config for stratos from the scf config file, this needs to be added additionally.
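Until gen_config.sh handles this, a workaround is to append the missing block to the generated file (the domain value mirrors system_domain from the config above):

# Append the env.DOMAIN entry that stratos' ingress template requires;
# kubecf's generated scf-config-values.yaml no longer carries it.
cat >> scf-config-values.yaml <<'EOF'
env:
  DOMAIN: 172.17.0.2.nip.io
EOF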

Running `make clean` after building scf from source results in lots of permission denied errors

When running make clean after building scf from scratch on kind, it is impossible for the script to remove all the contents of scf in the buildkind directory (lots of permission denied errors).

Suggested fix

Spin up a docker container as part of the make clean process and delete the files of buildkind/scf/ there, or run chmod on this folder and set permissions to 0777 to be able to delete the files on the host system.
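The first suggestion as a sketch (the image choice is arbitrary; anything with rm works):

# Delete root-owned build artifacts from inside a container, so the
# host-side make clean no longer hits permission errors.
docker run --rm -v "$(pwd)/buildkind:/build" alpine rm -rf /build/scf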

Add make target for importing existing clusters

Currently, if one wants to run catapult make targets against a pre-existing cluster, one needs to do
CLUSTER_NAME=foo make buildir; cp kubeconfig buildir/, and make sure that the cap-values configmap exists (see https://github.com/SUSE/catapult/wiki/Import-your-k8s-cluster).

We could streamline this by providing a make target. It may be useful to detect whether the target cluster is prepared for cap by running the kube-check script from github.com/SUSE/scf, or to interactively prompt for values to create the cap-values configmap.
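A sketch of what such an import target could run, mirroring the manual steps above (the build${CLUSTER_NAME} directory layout and the configmap handling are assumptions):

# Hypothetical "make import" body: prepare a build dir for an existing cluster.
set -euo pipefail

export CLUSTER_NAME="${CLUSTER_NAME:?name for the build dir}"
KUBECONFIG_SRC="${KUBECONFIG_SRC:?path to the kubeconfig of the existing cluster}"

make buildir
cp "${KUBECONFIG_SRC}" "build${CLUSTER_NAME}/kubeconfig"

# Ensure the cap-values configmap exists; ideally values would be prompted for.
export KUBECONFIG="build${CLUSTER_NAME}/kubeconfig"
kubectl get configmap cap-values >/dev/null 2>&1 \
  || kubectl create configmap cap-values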

caasp4 deployment not working

~/gop/src/github.com/SUSE/catapult/backend/caasp4os ~/gop/src/github.com/SUSE/catapult/buildprabal-catapult ~/gop/src/github.com/SUSE/catapult/backend/caasp4os
+ . defaults.sh
++ CAASP_VER=update
++ KUBECTL_VERSION=v1.17.0
++ HELM_VERSION=v3.1.1
+ set -Eeuo pipefail
++ docker images -q skuba/update
+ [[ '' == '' ]]
+ info 'Creating skuba/update container image…'
+ set +x
ℹ  🚀  caasp4os ☸ prabal-catapult  🎂  ./docker_skuba.sh ➀  Creating skuba/update container image…
+ make -C docker/skuba/ update
make[2]: Entering directory '/home/macsuse/gop/src/github.com/SUSE/catapult/backend/caasp4os/docker/skuba'
./build.sh update /SUSE/Updates/SUSE-CAASP/4.0/x86_64/update/
>>> ERROR: no skuba version found
make[2]: *** [Makefile:15: update] Error 1
make[2]: Leaving directory '/home/macsuse/gop/src/github.com/SUSE/catapult/backend/caasp4os/docker/skuba'
make[1]: *** [Makefile:17: deps-caasp4os] Error 2
make[1]: Leaving directory '/home/macsuse/gop/src/github.com/SUSE/catapult/backend/caasp4os'
make: *** [Makefile:31: k8s] Error 2

Steps to reproduce:

  1. Pull latest from the catapult repo
  2. Make sure you don't have the skuba docker image, nor the sle15 image
  3. make BACKEND=caasp4os k8s

Simplify layout of scripts/*

Currently, the implementation consists of everything flat in scripts/*. Add folders per k8s backend, state, etc. (kind/*, caasp4os/*, gke/*, scf/*, tests/*). It would be nice to document idiosyncrasies, or what each state consists of, in the wiki, in readme files in the folders, etc.

Add Mascot/Logo!

At this point we really need a mascot/logo for our front page 😃

Identify common code parts

Breaking things into functions would ease code isolation and give us more control and flexibility.

It would help with (and imho blocks) issues like #89, where some code parts are shared.
