
DSL and utility functions in groovy for running Jenkins OSIO Pipeline


osio-pipeline's Introduction

OpenShift Pipeline Library

Overview

This repository provides a set of pipeline functions (a pipeline library) used in a Jenkinsfile to do continuous integration / continuous delivery for OpenShift.io applications. The library can be used with any OpenShift cluster that meets the following prerequisites.

Diagram

OSIO-Pipeline flow diagram (PlantUML source available in the repository).

Prerequisites

  • The OpenShift command-line interface (the oc binary) must be available on the Jenkins master and/or slave nodes.
  • Jenkins must have the required set of pipeline plugins installed.
  • Familiarity with writing a Jenkinsfile, basic Groovy syntax, and Jenkins pipelines.

User Guide

Examples

The following example Jenkinsfiles illustrate how to use this pipeline library.

Deploy stand-alone application

The following example builds a Node.js booster, deploys it to the stage environment, and then, on approval, promotes it to the run environment.

@Library('github.com/fabric8io/osio-pipeline@master') _

osio {

  config runtime: 'node'

  ci {
    // runs oc process
    def resources = processTemplate()

    // performs an s2i build
    build resources: resources
  }

  cd {
    // override the RELEASE_VERSION template parameter
    def resources = processTemplate(params: [
        RELEASE_VERSION: "1.0.${env.BUILD_NUMBER}"
    ])

    build resources: resources

    deploy resources: resources, env: 'stage'

    // wait for user to approve the promotion to "run" environment
    deploy resources: resources, env: 'run', approval: 'manual'
  }
}

Deploy stand-alone application with external configuration

The following example builds and deploys a Node.js application like the previous one. It also loads an external resource (a ConfigMap) and deploys it to the stage and run environments.

@Library('github.com/fabric8io/osio-pipeline@master') _

osio {
    config runtime: 'node'

    ci {
        def app = processTemplate()

        build resources: app
    }

    cd {
      def resources = processTemplate(params: [
        RELEASE_VERSION: "1.0.${env.BUILD_NUMBER}"
      ])
      def cm = loadResources(file: "configmap.yaml")

      build resources: resources

      // deploy API takes multiple resources in array form
      deploy resources: [resources,  cm], env: 'stage'

      // wait for user to approve the promotion to "run" environment
      deploy resources: [resources,  cm], env: 'run', approval: 'manual'
    }
}

where configmap.yaml is:

apiVersion: v1
kind: ConfigMap
metadata:
    ...
    ...

The loadResources API also supports the List kind, as in the following configurations.yaml:

apiVersion: v1
kind: List
items:
  - kind: ConfigMap
    ...
  - kind: Secret
    ...
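
A file like this can be loaded and passed to deploy together with the template resources, just as in the earlier examples (a sketch; the file name matches the snippet above):

    def configs = loadResources(file: "configurations.yaml")

    deploy resources: [resources, configs], env: 'stage'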

How to use this library

To use the functions in this library, just add the following to the top of your Jenkinsfile:

@Library('github.com/fabric8io/osio-pipeline@master') _

That will use the master branch (bleeding edge) of this repository.
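
The ref after @ can be any branch, tag, or commit SHA, so once a versioned release exists you can pin the library instead of tracking master (the tag below is illustrative):

    @Library('github.com/fabric8io/osio-pipeline@v1.0.0') _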

API

The following APIs are declared in this repository. A short description and an example are provided for each.

osio

This is the entry point of the Jenkinsfile: everything the pipeline does is written inside this block. The block registers all the plugins, emits a pipeline-start event, executes whatever you have specified, and then emits a pipeline-end event.

    osio {
        // your continuous integration/delivery flow.
    }

config

This API provides configuration such as the runtime, acting like global variables. These values are used by default when spinning up pods to execute your commands or your flow.

    config runtime: 'node',
           version: '8'

If the above block is configured in your pipeline, every pod that is spun up will have a container named node providing a Node.js 8 environment. By default, pods are spun up with basic utilities such as oc and git.

Variables Configured in Pipeline

| Name    | Required | Default Value | Description                                     |
|---------|----------|---------------|-------------------------------------------------|
| runtime | false    | none          | runtime of the application, e.g. java, node, go |
| version | false    | none          | version of the runtime in use                   |

plugins

This API provides configuration related to plugins. Pass the configuration as a map of key-value pairs assigned to a plugin name:

    plugins analytics: ["disabled" : true]

    plugins foobar: ["foo" : "bar"]

or

    plugins analytics: ["disabled" : true], foobar: ["foo" : "bar"]

Note: this needs to be specified outside the osio block in the Jenkinsfile.

Right now the library uses the analytics plugin, which is enabled by default but can be disabled by setting disabled: true.
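
Putting it together, a Jenkinsfile that disables analytics might look like this sketch:

    @Library('github.com/fabric8io/osio-pipeline@master') _

    plugins analytics: ["disabled": true]

    osio {
        config runtime: 'node'
        // ci and cd blocks as usual
    }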

ci

This block is executed for the continuous integration flow. By default, all branches whose names start with PR- go through this flow. You can override this by providing a branch name as an argument.

    ci {
       // Your continuous integration flow like run tests etc.
    }

To override the default branch for this flow:

    ci (branch: 'test'){
       // Your continuous integration flow like run tests etc.
    }

Parameters

| Name   | Required | Default Value         | Description                            |
|--------|----------|-----------------------|----------------------------------------|
| branch | false    | all starting with PR- | branch on which to perform the CI flow |

cd

This block is executed for the continuous delivery flow. By default, it runs for the master branch. You can override this by providing a branch name as an argument.

    cd {
       // Your continuous delivery flow like run tests, generate release tag etc.
    }

To override the default branch for this flow:

    cd (branch: 'production'){
       // Your continuous delivery flow like run tests, generate release tag etc.
    }

Parameters

| Name   | Required | Default Value | Description                            |
|--------|----------|---------------|----------------------------------------|
| branch | false    | master        | branch on which to perform the CD flow |

processTemplate

processTemplate runs oc process on the OpenShift template pointed to by the file parameter and returns a representation of the resources (the internal format may change in the future). All mandatory parameters required to process the template must be passed in via params.

    def resources = processTemplate(
      file: 'template.yaml',
      params: [ RELEASE_VERSION: "1.0.${env.BUILD_NUMBER}" ]
    )

Parameters

| Name   | Required | Default Value                 | Description                                   |
|--------|----------|-------------------------------|-----------------------------------------------|
| file   | false    | .openshiftio/application.yaml | file to process as an OpenShift template      |
| params | false    | null                          | map of key-value pairs of template parameters |

The following template parameters must be present in the template and are set to the values below by default. You can override them by passing key-value pairs in params.

Default Template Parameter

| Name                  | Default Value                          |
|-----------------------|----------------------------------------|
| SUFFIX_NAME           | branch name                            |
| SOURCE_REPOSITORY_URL | output of git config remote.origin.url |
| SOURCE_REPOSITORY_REF | output of git rev-parse --short HEAD   |
| RELEASE_VERSION       | output of git rev-list --count HEAD    |

NOTE: the processTemplate API expects a RELEASE_VERSION parameter in the OpenShift template. This parameter is used to tag the image in the build API and then to refer to the same image in the deploy API while building and deploying an application.
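
For example, these defaults can be overridden from the Jenkinsfile like any other template parameter (the values here are illustrative):

    def resources = processTemplate(params: [
        SUFFIX_NAME: "-demo",
        RELEASE_VERSION: "2.1.${env.BUILD_NUMBER}"
    ])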

loadResources

loadResources returns a list of OpenShift resources read from a YAML file. The result is an array in which every element is a key-value pair mapping a resource kind to the array of resources of that kind. The API can read multiple resources separated by --- from the same YAML file.

    def resource = loadResources(file: ".openshiftio/app.yaml")

Parameters

| Name     | Required | Default Value | Description                                |
|----------|----------|---------------|--------------------------------------------|
| file     | true     | none          | relative path of the resource YAML file    |
| validate | false    | true          | whether to validate the resource YAML file |
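
For example, validation can be turned off when loading a file containing resource kinds the validator does not know about (a sketch; the file name is illustrative):

    def extras = loadResources(file: ".openshiftio/extras.yaml", validate: false)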

build

This API is responsible for performing the s2i build, generating the image, and creating the ImageStream (if it does not exist).

    build resources: resources, namespace: "test", commands: """
          npm install
          npm test
    """

or like

    build resources: [
                [ BuildConfig: resources.BuildConfig],
                [ImageStream: resources.ImageStream],
        ], namespace: "test", commands: """
            npm install
            npm test
        """

All commands and the s2i process are executed in a container matching the environment specified via the config API, or the default one otherwise.

Parameters

| Name      | Required | Default Value  | Description                                                    |
|-----------|----------|----------------|----------------------------------------------------------------|
| resources | true     | null           | OpenShift resources; at least a BuildConfig and an ImageStream |
| namespace | false    | user namespace | namespace in which to perform the s2i build                    |
| commands  | false    | null           | commands to execute before the s2i build                       |

deploy

This API is responsible for deploying your application to OpenShift.

    deploy resources: resources, env: 'stage'

or like

    deploy resources: resources, env: 'run', approval: 'manual', timeout: '15'

Parameters

| Name      | Required | Default Value | Description                                                                           |
|-----------|----------|---------------|---------------------------------------------------------------------------------------|
| resources | true     | null          | OpenShift resources; at least DeploymentConfig, Service, Route, tag, and ImageStream  |
| env       | true     | null          | environment to deploy to: run or stage                                                |
| approval  | false    | null          | if set to manual, the user is asked whether to deploy                                 |
| timeout   | false    | 30            | time (in minutes) to wait for user input when approval is manual                      |

The route generated by this step is added as an annotation on the pipeline.

spawn

This API spawns a pod on demand and executes commands in it.

    spawn (image: 'oc') {
          sh """
              oc version
          """
    }

or like

    spawn image: 'java',
          commands: "java -version",
          envVars: ["a":"b", "MAVEN_OPTS": '-Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn']

Either commands or a closure (the body) must be specified.

Parameters

| Name         | Required | Default Value | Description                                                  |
|--------------|----------|---------------|--------------------------------------------------------------|
| image        | true     | null          | environment you want in the pod, e.g. maven, node            |
| version      | false    | latest        | version of the environment you want                          |
| checkout_scm | false    | true          | whether to check out the git source before running commands  |
| commands     | false    | null          | commands you want to execute                                 |
| envVars      | false    | []            | environment variables to set in the pod                      |
| stage        | false    | null          | stage name to show in the UI / Jenkins stage view            |
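
A sketch combining several of these parameters (the stage name and commands are illustrative):

    spawn(image: 'node', version: '8', checkout_scm: true, stage: 'unit tests') {
        sh """
            npm install
            npm test
        """
    }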

NOTE: for the oc image, as an optimization, no new pod is started; the commands and body are executed on the master itself.

Contribution Guide

We love contributors. We appreciate contributions in all forms: reporting issues, feedback, documentation, code changes, tests, etc.

Dev Setup

  1. Install Maven (v3.0+)
  2. Clone this repository:
    git clone git@github.com:fabric8io/osio-pipeline.git

  3. cd osio-pipeline
  4. Import it as a Maven project in your favorite IDE. We recommend IntelliJ IDEA.
  5. Make code changes according to the following conventions:
    • vars -> provides the end-user pipeline APIs
    • src -> contains the code used inside the pipeline APIs
    • test -> contains unit tests for all source code

Test

To run the unit tests, execute

mvn test

osio-pipeline's People

Contributors

chmouel, hrishin, khrm, kishansagathiya, openshiftio-launchpad, piyush-garg, pradeepitm12, rupalibehera, sthaha, yzainee-zz


osio-pipeline's Issues

Support building and running operators

How can we make osio-pipeline a first-class build environment for operators, able to deploy them to some environments?

There is probably Go support and permission requirements (i.e. CRD availability) to sort out first.

[osio-pipeline] Setup CI to run PR tests

Need to set up a CI server to run PR builds and tests:

Environment Dependency:

  • oc CLI - version > oc v3.6.0+c4dd4cf

Test commands:

  • run mvn test

Preferable CI:

  • CICO
  • Travis

Approve flow is not consistent

The pipeline flow prompts for user input to approve application deployment to other environments even though the Jenkinsfile has only the stage environment, not run.

Apps without route defined fail to indicate successful deployment

Feature Request:
Currently, deployment of a project is indicated using annotations, but the annotations are added only if there is a route. So for applications (e.g. penfold for MM) that do not have a route, the pipeline fails to indicate a successful deployment.

Exception handle for missing field in template

If a keyword such as "kind" is missing from the template, Jenkins shows a null pointer exception.
Instead, this should be handled with a proper message that stops the build process and asks for a valid template.

Cleanup BC between PR

BuildConfigs are created for each PR, but when a PR is closed they are not cleaned up.

Tests for deploy API

Implement unit tests for deploy API

given:

  • deployable resources (excluding ImageStream, BuildConfig)

test:

  • process all resources except build
  • creates resources in a given namespace
  • deployment refers to the right image stream tag
  • accepts the manual approval

[osio-pipeline] Setup CD and release job

Need to set up a CD/release job so we can release semantic versions of the pipeline library.
That way, a Jenkinsfile can refer to a specific version of the pipeline library, and changes to the master branch will not break pipelines.

Pass the current branch we are running to openshift build

When building from a PR, OpenShift creates a new branch, which works well for us; but when we start the new-build we need a way to pass the branch reference to the template.

When #3 is implemented, we can auto-detect (via an env variable, I guess?) which branch has been checked out and set it automatically.

Tests for loadResources API

Write unit tests for the loadResources API

which can assert

  • for a given resources file, it loads the resources
  • for a given resources file containing many items (kind: List), it loads the resources
  • for a given resource file, it validates and loads the resource

Integrate other pieces of pipeline

What if the user wants to specify a new thing in the pipeline, for example a security scanner?
How do we want to make this as generic as possible?

build error on utils.mergeMaps

Utils.mergeMaps fails with the following error

org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods flatten java.util.List
	at org.jenkinsci.plugins.scriptsecurity.sandbox.whitelists.StaticWhitelist.rejectStaticMethod(StaticWhitelist.java:189)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:114)
	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:155)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:159)
	at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17)
	at io.openshift.Utils.mergeMaps(..../builds/6/libs/github.com/fabric8io/osio-pipeline/src/io/openshift/Utils.groovy:109)

Improve resource creation and deletion of deployment workflow

Sub tasks:

  • fix arbitrary deletion of resources from the stage and run spaces
    Due to deployment constraints, the current CD flow deletes all resources, which deletes the existing running service.
  • fix the template processing flow
    Due to an OpenShift plugin restriction, the deployment workflow applies the DC upfront, which triggers a new deployment of the existing running application.
  • each build pipeline creates two deployments
    As the workflow deploys an application, it rolls out two application deployments instead of one.

Planner: https://openshift.io/openshiftio/Openshift_io/plan/detail/341

Support Docker builds

Docker builds are important when it comes to the import flow. However, with the current limitations of OpenShift Online, S2I can't execute the Docker build strategy with a given Dockerfile.

In order to support Docker builds, evaluate alternative tools that build an image from a Dockerfile, like Buildah, and expose this capability in the build API.

deploy command is too restrictive

The deploy command is too restrictive for multi-container scenarios. deploy requires certain resources to exist in the resource YAML, but in the case of multi-container applications there can be multiple files that are applied. For instance, if an application requires two DeploymentConfigs, one for a database and one for the application to be built, it is common (and perhaps mandatory) to separate the deployment configs into separate files and deploy them individually. An application can easily have no Service or Route.

Allow configure template variables

When we have variables in the template, we need to find a way for the user (or UI) to pass those parameters. Something like a hashmap in the user's Jenkinsfile:

def buildParameters = [option:'value', Ref:'feature-branch', Build:'grunt']

which then gets passed to the template.

We don't define a contract; it's up to the user to know those values from the booster documentation or elsewhere.

This goes with #2

Fix RELEASE_VERSION parameter dependency

    def applyDefaults(provided=[:], templateParams) {
      def params = [:]
      def setParam = { key, compute ->
        if (key in templateParams) {
          params[key] = provided[key] ?: compute()
        }
      }
      setParam('SUFFIX_NAME') { "-${env.BRANCH_NAME}".toLowerCase() }
      setParam('SOURCE_REPOSITORY_REF') { Utils.shWithOutput(this, "git rev-parse --short HEAD") }
      setParam('SOURCE_REPOSITORY_URL') { Utils.shWithOutput(this, "git config remote.origin.url") }
      setParam('RELEASE_VERSION') { Utils.shWithOutput(this, "git rev-list --count HEAD") }
      return params
    }

This method makes it mandatory for the OpenShift template to have the RELEASE_VERSION parameter; otherwise there is a failure because meta is unavailable:

    def required = ['ImageStream', 'DeploymentConfig', 'meta']
    def res = Utils.mergeResources(args.resources)
    def found = res.keySet()
    def missing = required - found
    if (missing) {
      error "Missing mandatory build resources params: $missing; found: $found"
    }
    def tag = res.meta.tag
    if (!tag) {
      error "Missing mandatory metadata: tag"
    }

Is this intended, or should we have something like:

    // returns a map of parameter name: value by choosing from provided if the
    // key is present in templateParams or uses default value
    def applyDefaults(provided=[:], templateParams) {
      def params = [:]
      def setParam = { key, compute ->
        if (key in templateParams) {
          params[key] = provided[key] ?: compute()
        } else {
          params[key] = compute()
        }
      }

      setParam('SUFFIX_NAME') { "-${env.BRANCH_NAME}".toLowerCase() }
      setParam('SOURCE_REPOSITORY_REF') { Utils.shWithOutput(this, "git rev-parse --short HEAD") }
      setParam('SOURCE_REPOSITORY_URL') { Utils.shWithOutput(this, "git config remote.origin.url") }
      setParam('RELEASE_VERSION') { Utils.shWithOutput(this, "git rev-list --count HEAD") }
      return params
    }

Add timestamps in `osio`

Problem:

At the moment there is little visibility into overall build progress and where the build spends its time. This makes it hard to understand which step is slow.

It would be better to log timestamps for important activities like osio and scm.

OSIO Pipeline: imagestream tagging for backing services fails.

I was testing a Node + DB CRUD booster
https://github.com/sbose78/nodejs-rest-http-crud

The DB was defined in .openshiftio/service.yaml:

apiVersion: v1
items:
- apiVersion: v1
  kind: ImageStream
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    creationTimestamp: null
    labels:
      app: my-database
    name: my-database
  spec:
    tags:
    - annotations:
        openshift.io/imported-from: openshift/postgresql-92-centos7
      from:
        kind: DockerImage
        name: openshift/postgresql-92-centos7
      generation: null
      importPolicy: {}
      name: latest
      referencePolicy:
        type: ""
  status:
    dockerImageRepository: ""

While running the cd step, which looked like this:

cd {

    // override the RELEASE_VERSION template parameter
    def resources = processTemplate(params: [
        RELEASE_VERSION: "1.0.${env.BUILD_NUMBER}"
    ])
    def cm = loadResources(file: ".openshiftio/service.yaml")

    // performs an s2i build
    build resources: resources
    deploy resources: [resources,  cm], env: 'stage'
    deploy resources: [resources,  cm], env: 'run', approval: 'manual'
  }

...at the deploy step, the pipeline library tries to do a

oc tag -n usernamespace --alias=true usernamespace/my-database:1.0.1  my-database:1.0.1

which fails because the image stream doesn't exist in the usernamespace; only my Node app's image stream exists there. The .openshiftio/service.yaml wasn't processed as part of build, which is why the image stream doesn't exist.

Do you think allowing build to do something like build resources: [resources, cm] is a good idea?
https://github.com/sbose78/osio-pipeline/blob/master/vars/build.groovy#L4

Run a custom postCommit hook

I was trying to think about how we want to do CI integrations; it may be good to allow users to specify their own custom postCommit hook:

https://docs.openshift.com/container-platform/3.4/dev_guide/builds/build_hooks.html#configuring-post-commit-build-hooks

For example, on a BC I have added:

  postCommit:
    script: npm install --only=dev && npm test

to run unit tests after a build.

We can allow a different setting for PR builds so we can implement CI and CD like this.

We could have local hooks from the Jenkinsfile and global ones from a ConfigMap.

E2E Testing

Let's get E2E testing for this project ASAP.

How do we want to do this?

Support Helm Charts

Implement a new function, similar to processTemplate, that processes Helm charts and returns the resources to be deployed.

Opt-out of the analytics call

For https://github.com/sbose78/nodejs-health-check/blob/master/Jenkinsfile ,

How do I opt out of analytics? To make this library reusable, that would be a nice feature to have.

Exiting "Trigger OpenShift Build" successfully; build "nodejs-health-check-s2i-master-2" has completed with status:  [Complete].
[Pipeline] echo
invoking bayesian analytics
[Pipeline] retry
[Pipeline] {
[Pipeline] bayesianAnalysis
No direct or transitive dependencies found.
Running Bayesian stack analysis...
Bayesian API URL is https://recommender.api.openshift.io/api/v1

Tests for build API

Implement unit tests for build API

given:

  • build resources (ImageStream, BuildConfig)

test:

  • creates build resources
  • tags an image in ImageStream

Define CI and CD

We currently don't make any distinction between a CI and a CD pipeline (it's all CD).

How are we going to do those?

Secrets

How do we want to do secrets?

Should we let the user do the secrets injection on OpenShift and reference it from their Jenkinsfile if needed?

update build events arguments

The build.* event listeners may need some information about the build, like the source git URL, commit id, build name, etc.
Update the build.end event to publish this information as event arguments.

Enhancement: quay build target

Here is an idea that we may want to investigate.

For Docker builds, or for other reasons, we could perhaps offer to build on Quay directly and pull the output image for CD.

We currently use s2i builds on OpenShift, but what if we go driver-based and let the user specify that they want to build on Quay instead, since Quay seems to support this: https://docs.quay.io/guides/building.html

I would imagine that building on quay.io means we avoid the start-build and only get metadata from .openshift/application.yaml, like the Docker build context, and would then be able to use the built image for -stage or -run.

Investigation tasks:

  • do we have access to the build feature on Quay
  • how do we specify additional secrets to osio-pipeline
  • how do we monitor the build process
