
ces-build-lib

Jenkins Pipeline Shared library, that contains additional features for Git, Maven, etc. in an object-oriented manner as well as some additional pipeline steps.

Usage

@Library('github.com/cloudogu/ces-build-lib@1.67.0')
import com.cloudogu.ces.cesbuildlib.*
  • Best practice: Use a defined version (e.g. a git commit hash or a git tag, such as 6cd41e0 or 1.67.0 in the example above) and not a branch such as develop. Otherwise, your build might change when there is a new commit on the branch. Using branches is like using snapshots!
  • When build executors are docker containers and you intend to use their Docker host in the Pipeline: Please see #8.

Syntax completion

You can get syntax completion in your Jenkinsfile when using the ces-build-lib by adding it as a dependency to your project.

You can get the source.jar from JitPack.

With Maven this can be done like so:

  • Define the JitPack repository:
    <repositories>
        <repository>
            <id>jitpack.io</id>
            <url>https://jitpack.io</url>
        </repository>
    </repositories>
  • And the ces-build-lib dependency:
    <dependency>
        <!-- Shared Library used in Jenkins. Including this in maven provides code completion in Jenkinsfile. -->
        <groupId>com.github.cloudogu</groupId>
        <artifactId>ces-build-lib</artifactId>
        <!-- Keep this version in sync with the one used in Jenkinsfile -->
        <version>888733b</version>
        <!-- Don't ship this dependency with the app -->
        <optional>true</optional>
        <!-- Don't inherit this dependency! -->
        <scope>provided</scope>
    </dependency>

Or you can download the file (and sources) manually and add them to your IDE. For example:

  • https://jitpack.io/com/github/cloudogu/ces-build-lib/9fa7ac4/ces-build-lib-9fa7ac4.jar
  • https://jitpack.io/com/github/cloudogu/ces-build-lib/9fa7ac4/ces-build-lib-9fa7ac4-sources.jar

For further details and options refer to the JitPack website.

This is confirmed to work with IntelliJ IDEA.

Maven

Maven from local Jenkins tool

Run maven from a local tool installation on Jenkins.

See MavenLocal

def mvnHome = tool 'M3'
def javaHome = tool 'OpenJDK-8'
Maven mvn = new MavenLocal(this, mvnHome, javaHome)

stage('Build') {
    mvn 'clean install'
}

Maven Wrapper

Run maven using a Maven Wrapper from the local repository.

With local JDK tool

Similar to MavenLocal you can use the Maven Wrapper with a JDK from a local tool installation on Jenkins:

def javaHome = tool 'OpenJDK-8'
Maven mvn = new MavenWrapper(this, javaHome)

stage('Build') {
    mvn 'clean install'
}

With the JDK provided by the build agent

It is also possible to not specify a JDK tool and use the Java Runtime on the Build agent's PATH. However, experience tells us that this is absolutely non-deterministic and will result in unpredictable behavior.
So: Better set an explicit Java tool to be used or use MavenWrapperInDocker.

Maven mvn = new MavenWrapper(this)

stage('Build') {
    mvn 'clean install'
}

Maven in Docker

Run maven in a docker container. This can be helpful, when

  • constant ports are bound during the build, causing port conflicts in concurrent builds (for example integration tests or unit tests that use infrastructure binding to fixed ports), or
  • one maven repo per build is required, for example when concurrent builds of a multi-module project install the same snapshot versions.

Plain Maven In Docker

The builds are run inside the official maven images from Docker Hub.

See MavenInDocker

Maven mvn = new MavenInDocker(this, "3.5.0-jdk-8")

stage('Build') {
    mvn 'clean install'
}

Maven Wrapper In Docker

It's also possible to use the MavenWrapper in a Docker Container. Here, the Docker container is responsible for providing the JDK.

See MavenWrapperInDocker

Maven mvn = new MavenWrapperInDocker(this, 'adoptopenjdk/openjdk11:jdk-11.0.10_9-alpine')

stage('Build') {
    mvn 'clean install'
}

Since Oracle's announcement of shorter free JDK support, plenty of JDK images have appeared on public container image registries, where adoptopenjdk is just one option. The choice is yours.

Advanced Maven in Docker features

The following features apply to plain Maven as well as Maven Wrapper in Docker.

Maven starts new containers

If you run Docker from your maven build, for example because you use the docker-maven-plugin, you can pass the docker host through to maven like so:

stage('Unit Test') {
    // The UI module build runs inside a docker container, so pass the docker host to the maven container
    mvn.enableDockerHost = true

    mvn 'docker:start'

    // Don't expose docker host more than necessary
    mvn.enableDockerHost = false
}

There are some security-related concerns about this. See Docker.

Local repo

Maven in Docker

If you would like to use Jenkins' local maven repo (or, more accurately, the one of the build executor, typically at /home/jenkins/.m2) instead of a maven repo per job (within each workspace), you can use the following option:

Maven mvn = new MavenInDocker(this, "3.5.0-jdk-8")
mvn.useLocalRepoFromJenkins = true

This speeds up the first build and uses less memory. However, concurrent builds of multi-module projects building the same version (e.g. a SNAPSHOT) might overwrite their dependencies, causing non-deterministic build failures.

Maven without Docker

The default is the standard maven behavior: /home/jenkins/.m2 is used. If you want to use a separate maven repo per workspace (e.g. in order to avoid concurrent builds of multi-module projects building the same version, such as a SNAPSHOT, overwriting each other's dependencies), the following will work:

mvn.additionalArgs += " -Dmaven.repo.local=${env.WORKSPACE}/.m2"

Lazy evaluation / execute more steps inside container

If you need to execute more steps inside the maven container you can pass a closure to your maven instance that is lazily evaluated within the container. The String value returned by the closure is used as the maven arguments.

Maven mvn = new MavenInDocker(this, "3.5.0-jdk-8")
echo "Outside Maven Container! ${new Docker(this).findIp()}"
mvn {
    echo "Inside Maven Container! ${new Docker(this).findIp()}"
    'clean package -DskipTests'
}

Mirrors

You can define maven mirrors as follows:

Maven.useMirrors([name: 'maven-proxy', mirrorOf: 'central', url: 'https://maven.example.org'],
                 [name: 'google-maven', mirrorOf: 'central', url: 'https://maven-central.storage.googleapis.com/maven2/'],
)

Repository Credentials

If you specified one or more <repository> in your pom.xml that requires authentication, you can pass these credentials to your ces-build-lib Maven instance like so:

mvn.useRepositoryCredentials([id: 'ces', credentialsId: 'nexusSystemUserCredential'],
                             [id: 'another', credentialsId: 'nexusSystemUserCredential'])

Note that the id must match the one specified in your pom.xml and the credentials ID must belong to a username and password credential defined in Jenkins.

Deploying to Nexus repository

Deploying artifacts

ces-build-lib makes deploying to nexus repositories easy, even when it includes signing of the artifacts and usage of the nexus staging plugin (as necessary for Maven Central or other Nexus repository pro instances).

Simple deployment

The most simple use case is to deploy to a nexus repo (not Maven Central):

  • Just set the repository using Maven.useRepositoryCredentials() by passing a nexus username and password/access token as jenkins username and password credential and
    • either a repository ID that matches a <distributionManagement><repository> (or <snapshotRepository>, examples below) defined in your pom.xml (then, no url or type parameters are needed)
      (distributionManagement > snapshotRepository or repository (depending on the version) > id)
    • or a repository ID (you can choose) and the URL.
      In this case you can also specify a type: 'Nexus2' (defaults to Nexus3) - as the base-URLs differ. This approach is deprecated and might be removed from ces-build-lib in the future.
  • Call Maven.deployToNexusRepository(). And that is it.

Simple Example:

// <distributionManagement> in pom.xml (preferred)
mvn.useRepositoryCredentials([id: 'ces', credentialsId: 'nexusSystemUserCredential'])
// Alternative: Distribution management via Jenkins (deprecated)
mvn.useRepositoryCredentials([id: 'ces', url: 'https://ecosystem.cloudogu.com/nexus', credentialsId: 'nexusSystemUserCredential', type: 'Nexus2'])

// Deploy
mvn.deployToNexusRepository()

Note that if the pom.xml's version contains -SNAPSHOT, the artifacts are automatically deployed to the snapshot repository (e.g. on oss.sonatype.org). Otherwise, the artifacts are deployed to the release repository (e.g. on oss.sonatype.org).

Signing artifacts (e.g. Maven Central)

If you want to sign the artifacts, just set the credentials for signing before deploying, using Maven.setSignatureCredentials(), passing the secret key as ASC file (as jenkins secret file credential) and the passphrase (as jenkins secret text credential). An ASC file can be exported via gpg --export-secret-keys -a ABCDEFGH > secretkey.asc. See Working with PGP Signatures
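
For example (a minimal sketch; the credential IDs are examples taken from the Maven Central example below and must exist as Jenkins credentials):

mvn.setSignatureCredentials('mavenCentral-secretKey-asc-file', 'mavenCentral-secretKey-Passphrase')
mvn.deployToNexusRepository()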

Deploying with staging (e.g. Maven Central)

Another option is to use the nexus-staging-maven-plugin instead of the default maven-deploy-plugin. This is useful if you deploy to a Nexus repository pro, such as Maven Central.

Just use the Maven.deployToNexusRepositoryWithStaging() instead of Maven.deployToNexusRepository().

When deploying to Maven Central, make sure that your pom.xml adheres to the requirements by Maven Central, as stated here.

Note that as of nexus-staging-maven-plugin version 1.6.8, it seems to read the distribution repositories from pom.xml only.

That is, you need to specify them in your pom.xml; they cannot be passed by ces-build-lib. For example, for Maven Central you need to add the following:

<distributionManagement>
    <snapshotRepository>
        <id>ossrh</id>
        <url>https://oss.sonatype.org/content/repositories/snapshots</url>
    </snapshotRepository>
    <repository>
        <id>ossrh</id>
        <url>https://oss.sonatype.org/service/local/staging/deploy/maven2/</url>
    </repository>
</distributionManagement>

In addition, you either have to pass a URL to useRepositoryCredentials() or specify the nexus-staging-maven-plugin in your pom.xml:

  <plugin>
    <groupId>org.sonatype.plugins</groupId>
    <artifactId>nexus-staging-maven-plugin</artifactId>
    <!-- ... -->
    <configuration>
      <serverId>ossrh</serverId>
      <nexusUrl>https://oss.sonatype.org/</nexusUrl>
    </configuration>
  </plugin>

Either way, the repository ID (here: ossrh) and the base nexus URL (here: https://oss.sonatype.org) in distributionManagement and nexus-staging-maven plugin must conform to each other.

Summing up, here is an example for deploying to Maven Central:

// url is optional, if described in nexus-staging-maven-plugin in pom.xml 
mvn.useRepositoryCredentials([id: 'ossrh', url: 'https://oss.sonatype.org', credentialsId: 'mavenCentral-UsernameAndAccessTokenCredential', type: 'Nexus2'])
mvn.setSignatureCredentials('mavenCentral-secretKey-asc-file','mavenCentral-secretKey-Passphrase')
mvn.deployToNexusRepositoryWithStaging()            

Note that the staging of releases might well take 10 minutes. After that, the artifacts are in the release repository, which is later (feels like nightly) synced to Maven Central.

For an example see cloudogu/command-bus.

Deploying sites

Similar to deploying artifacts as described above, we can also easily deploy a Maven site to a "raw" maven repository.

Note that the site plugin does not provide options to specify the target repository via the command line. That is, it has to be configured in the pom.xml like so:

<distributionManagement>
    <site>
        <id>ces</id>
        <name>site repository cloudogu ecosystem</name>
        <url>dav:https://your.domain/nexus/repository/Site-repo/${project.groupId}/${project.artifactId}/${project.version}/</url>
    </site>
</distributionManagement>

Where Site-repo is the name of the raw repository, which must exist in Nexus for the deployment to succeed.

Then, you can deploy the site as follows:

mvn.useRepositoryCredentials([id: 'ces', credentialsId: 'nexusSystemUserCredential'])
mvn.deploySiteToNexus()

Where

  • the id parameter must match the one specified in the pom.xml (ces in the example above),
  • the nexus username and password/access token are passed as jenkins username and password credential (nexusSystemUserCredential).
  • there is no difference between Nexus 2 and Nexus 3 regarding site deployments.

For an example see cloudogu/continuous-delivery-slides-example

Passing additional arguments

Another option for deployToNexusRepositoryWithStaging() and deployToNexusRepository() is to pass additional maven arguments to the deployment like so: mvn.deployToNexusRepositoryWithStaging('-X') (enables debug output).

Maven Utilities

Available from both local Maven and Maven in Docker.

  • mvn.getVersion()
  • mvn.getArtifactId()
  • mvn.getGroupId()
  • mvn.getMavenProperty('project.build.sourceEncoding')

See Maven
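
A minimal sketch combining these utilities in a pipeline stage (assuming mvn was created as shown above):

stage('Info') {
    echo "Building ${mvn.getGroupId()}:${mvn.getArtifactId()}:${mvn.getVersion()}"
    echo "Source encoding: ${mvn.getMavenProperty('project.build.sourceEncoding')}"
}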

Gradle

Gradle Wrapper in Docker

It's also possible to use a GradleWrapper in a Docker Container. Here, the Docker container is responsible for providing the JDK.

See GradleWrapperInDocker

Example:

String gradleDockerImage = 'openjdk:11.0.10-jdk'
Gradle gradlew = new GradleWrapperInDocker(this, gradleDockerImage)

stage('Build') {
    gradlew "clean build"
}

Since Oracle's announcement of shorter free JDK support, plenty of JDK images have appeared on public container image registries, where adoptopenjdk is just one option. The choice is yours.

Git

An extension to the git step, that provides an API for some commonly used git commands and utilities. Mostly, this is a convenient wrapper around using the sh 'git ...' calls.

Example:

Git git = new Git(this)

stage('Checkout') {
  git 'https://your.repo'
  /* Don't remove folders starting in "." like .m2 (maven), .npm, .cache, .local (bower), etc. */
  git.clean('".*/"')
}

Credentials

You can optionally pass usernamePassword (i.e. a String containing the ID that refers to the Jenkins credentials) to Git during construction. These are then used for cloning and pushing.

Note that the username and password are processed by a shell. Special characters in username or password might cause errors like Unterminated quoted string. So it's best to use a long password that only contains letters and numbers for now.

Git anonymousGit = new Git(this)
Git gitWithCreds = new Git(this, 'ourCredentials')


anonymousGit 'https://your.repo'
gitWithCreds 'https://your.repo' // Implicitly passed credentials

Git Utilities

Read Only

  • git.clean() - Removes all untracked and unstaged files.
  • git.clean('".*/"') - Removes all untracked and unstaged files, except folders starting in "." like .m2 (maven), .npm, .cache, .local (bower), etc.
  • git.branchName - e.g. feature/xyz/abc
  • git.simpleBranchName - e.g. abc
  • git.commitAuthorComplete - e.g. User Name <user.name@example.com>
  • git.commitAuthorEmail - e.g. user.name@example.com
  • git.commitAuthorName - e.g. User Name
  • git.commitMessage - Last commit message e.g. Implements new functionality...
  • git.commitHash - e.g. fb1c8820df462272011bca5fddbe6933e91d69ed
  • git.commitHashShort - e.g. fb1c882
  • git.areChangesStagedForCommit() - true if changes are staged for commit. If false, git.commit() will fail.
  • git.repositoryUrl - e.g. https://github.com/orga/repo.git
  • git.gitHubRepositoryName - e.g. orga/repo
  • Tags - Note that the git plugin might not fetch tags for all builds. Run sh "git fetch --tags" to make sure.
    • git.tag - e.g. 1.0.0 or empty if not set
    • git.isTag() - is there a tag on the current commit?
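
A minimal sketch combining some of these read-only utilities (assuming git was constructed as shown above):

stage('Info') {
    sh "git fetch --tags" // make sure tags are available, see note above
    echo "Building branch ${git.simpleBranchName} at commit ${git.commitHashShort}"
    if (git.isTag()) {
        echo "Current commit is tagged as ${git.tag}"
    }
}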

Changes to local repository

Note that most changing operations offer parameters to specify an author. These parameters are optional. If not set, the author of the last commit is used as author and committer. You can specify a different committer by setting the committerName and committerEmail fields on the Git instance.
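
For example (the values are placeholders, mirroring the GitFlow example further below):

git.committerName = 'jenkins'
git.committerEmail = 'jenkins@example.com'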

It is recommended to set a different committer, so it's obvious those commits were done by Jenkins in the name of the author. This behaviour is implemented by GitHub for example when committing via the Web UI.

  • git.checkout('branchname')
  • git.checkoutOrCreate('branchname') - Creates new Branch if it does not exist
  • git.add('.')
  • git.commit('message', 'Author', 'author@example.com')
  • git.commit('message') - uses default author/committer (see above).
  • git.setTag('tag', 'message', 'Author', 'author@example.com')
  • git.setTag('tag', 'message') - uses default author/committer (see above).
  • git.fetch()
  • git.pull() - pulls, and in case of merge, uses default author/committer (see above).
  • git.pull('refspec') - pulls specific refspec (e.g. origin master), and in case of merge, uses the name and email of the last committer as author and committer.
  • git.pull('refspec', 'Author', 'author@example.com')
  • git.merge('develop', 'Author', 'author@example.com')
  • git.merge('develop') - uses default author/committer (see above).
  • git.mergeFastForwardOnly('master')
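
A minimal sketch of an automated commit using the operations listed above (branch name and message are examples):

git.checkoutOrCreate('feature/automated-update')
git.add('.')
if (git.areChangesStagedForCommit()) {
    git.commit('Automated update by Jenkins')
    git.push('origin feature/automated-update') // see "Changes to remote repository" below
}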

Changes to remote repository

  • git.push('origin master') - pushes to origin. Note: This always prepends origin if not present, for historical reasons (see #44). That is, right now it is impossible to push to other remotes.
    This will change in the next major version of ces-build-lib.
    This limitation does not apply to other remote-related operations such as pull(), fetch() and pushAndPullOnFailure(). So it's recommended to explicitly mention the origin and not just the refspec:
    • Do: git.push('origin master')
    • Don't: git.push('master') because this will no longer work in the next major version.
  • git.pushAndPullOnFailure('refspec') - pushes and pulls if push failed e.g. because local and remote have diverged, then tries pushing again

Docker

The Docker class provides the default methods of the global docker variable provided by the docker plugin:

Docker methods provided by the docker plugin

  • withRegistry(url, credentialsId = null, Closure body): Specifies a registry URL such as https://docker.mycorp.com/, plus an optional credentials ID to connect to it.
    Example:
    def dockerImage = docker.build("image/name:1.0", "folderOfDockfile")
    docker.withRegistry("https://your.registry", 'credentialsId') {
      dockerImage.push()
    }
  • withServer(uri, credentialsId = null, Closure body): Specifies a server URI such as tcp://swarm.mycorp.com:2376, plus an optional credentials ID to connect to it.
  • withTool(toolName, Closure body): Specifies the name of a Docker installation to use, if any are defined in Jenkins global configuration. If unspecified, docker is assumed to be in the $PATH of the Jenkins agent.
  • image(id): Creates an Image object with a specified name or ID.
    Example:
     docker.image('google/cloud-sdk:164.0.0').inside("-e HOME=${pwd()}") {
          sh "echo something"
       }
    The image returned by the Docker class has additional features, see below.
  • build(image, args): Runs docker build to create and tag the specified image from a Dockerfile in the current directory. Additional args may be added, such as '-f Dockerfile.other --pull --build-arg http_proxy=http://192.168.1.1:3128 .'. Like docker build, args must end with the build context.
    Example:
    def dockerContainer = docker.build("image/name:1.0", "folderOfDockfile").run("-e HOME=${pwd()}")

Additional features provided by the Docker class

The Docker class provides additional convenience features:

  • String findIp(container) returns the IP address for a docker container instance
  • String findIp() returns the IP address in the current context: the docker host ip (when outside of a container) or the ip of the container this is running in
  • String findDockerHostIp() returns the IP address of the docker host. Should work both when running inside and outside of a container
  • String findEnv(container) returns the environment variables set within the docker container as string
  • boolean isRunningInsideOfContainer() returns true if this step is executed inside a container, otherwise false
  • boolean isRunning(container) returns true if the container is in state running, otherwise false

Example from Jenkinsfile:

 Docker docker = new Docker(this)
 def dockerContainer = docker.build("image/name:1.0").run()
 waitUntil {
     sleep(time: 10, unit: 'SECONDS')
     return docker.isRunning(dockerContainer)
 }
 echo docker.findIp(dockerContainer)
 echo docker.findEnv(dockerContainer)

Docker.Image methods provided by the docker plugin

  • id: The image name with optional tag (mycorp/myapp, mycorp/myapp:latest) or ID (hexadecimal hash).
  • inside(String args = '', Closure body) : Like withRun this starts a container for the duration of the body, but all external commands (sh) launched by the body run inside the container rather than on the host. These commands run in the same working directory (normally a Jenkins agent workspace), which means that the Docker server must be on localhost.
  • pull: Runs docker pull. Not necessary before run, withRun, or inside.
  • run(String args = '', String command = ""): Uses docker run to run the image, and returns a Container which you could stop later. Additional args may be added, such as '-p 8080:8080 --memory-swap=-1'. Optional command is equivalent to Docker command specified after the image(). Records a run fingerprint in the build.
  • withRun(String args = '', String command = "", Closure body): Like run but stops the container as soon as its body exits, so you do not need a try-finally block.
  • tag(String tagName = image().parsedId.tag, boolean force = true): Runs docker tag to record a tag of this image (defaulting to the tag it already has). Will rewrite an existing tag if one exists.
  • push(String tagName = image().parsedId.tag, boolean force = true): Pushes an image to the registry after tagging it as with the tag method. For example, you can use image().push 'latest' to publish it as the latest version in its repository.

Additional features provided by the Docker.Image class

  • repoDigests(): Returns the repo digests, a content addressable unique digest of an image that was pushed to or pulled from repositories.
    If the image was built locally and not pushed, returns an empty list.
    If the image was pulled from or pushed to a repo, returns a list containing one item.
    If the image was pulled from or pushed to multiple repos, might also contain more than one digest.

  • mountJenkinsUser(): Setting this to true provides the user that executes the build within the docker container's /etc/passwd. This is necessary for some commands such as npm, ansible, git, id, etc. Those might exit with errors without a user present.

    Why?
    Note that Jenkins starts Docker containers in the pipeline with the -u parameter (e.g. -u 1042:1043). That is, the container does not run as root (which is a good thing from a security point of view). However, the userID/UID (e.g. 1042) and the groupID/GID (e.g. 1043) will most likely not be present within the container which causes errors in some executables.

    How?
    Setting this will cause the creation of a passwd file that is mounted into a container started from this image() (triggered by run(), withRun() and inside() methods). This passwd file contains the username, UID, GID of the user that executes the build and also sets the current workspace as HOME within the docker container.

  • mountDockerSocket(): Setting this to true mounts the docker socket into the container.
    This allows the container to start other containers "next to" itself, that is "sibling" containers. Note that this is similar but not the same as "Docker In Docker".

    Note that this will make the docker host socket accessible from within the container. Use this wisely. Some people say you should not do this at all. On the other hand, the alternative would be to run a real docker host inside a docker container, aka "docker in docker" or "dind" (which is possible). On this, however, other people also say you should not do this at all. So let's stick to mounting the socket, which seems to cause fewer problems.

    This is also used by MavenInDocker

  • installDockerClient(String version): Installs the docker client with the specified version inside the container. If no version parameter is passed, the lib tries to query the server version by calling docker version.
    This can be called in addition to mountDockerSocket(), when the "docker" CLI is required on the PATH.

    For available versions see here.

Examples:

Docker Container that uses its own docker client:

new Docker(this).image('docker') // contains the docker client binary
    .mountJenkinsUser()
    .mountDockerSocket()
    .inside() {
        sh 'whoami' // Would fail without mountJenkinsUser = true
        sh 'id' // Would fail without mountJenkinsUser = true
        
        // Start a "sibling" container and wait for it to return
        sh 'docker run hello-world' // Would fail without mountDockerSocket = true 
        
    }

Docker container that does not have its own docker client

new Docker(this).image('kkarczmarczyk/node-yarn:8.0-wheezy')
    .mountJenkinsUser()
    .mountDockerSocket()
    .installDockerClient('17.12.1')
    .inside() {
        // Start a "sibling" container and wait for it to return
        sh 'docker run hello-world' // Would fail without mountDockerSocket = true & installDockerClient()
    }
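
The repoDigests() method described above can be used like this (a minimal sketch; the image name is just an example):

def image = new Docker(this).image('alpine:3.17.2')
image.pull() // digests are only available for images that were pulled from or pushed to a registry
echo "Repo digests: ${image.repoDigests()}"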

Dockerfile

The Dockerfile class provides functions to lint Dockerfiles. For example:

stage('Lint') {
    Dockerfile dockerfile = new Dockerfile(this)
    dockerfile.lint() // Lint with default configuration
    dockerfile.lintWithConfig() // Use your own hadolint configuration with a .hadolint.yaml configuration file
}

The tool hadolint is used for linting. It has a lot of configuration parameters which can be set by creating a .hadolint.yaml file in your working directory. See https://github.com/hadolint/hadolint#configure

SonarQube

When analyzing code with SonarQube there are a couple of challenges that are solved using ces-build-lib's SonarQube class:

  • Setting the branch name (note that this only works in Jenkins multi-branch pipeline builds, regular pipelines don't have information about branches - see #11)
  • Analysis for Pull Requests
  • Commenting on Pull Requests
  • Updating commit status in GitHub for Pull Requests
  • Using the SonarQube branch plugin (SonarQube 6.x, developer edition and sonarcloud.io)

Constructors

In general, you can analyse with or without the SonarQube Plugin for Jenkins:

  • new SonarQube(this, [sonarQubeEnv: 'sonarQubeServerSetupInJenkins']) requires the SonarQube plugin and the SonarQube server sonarQubeServerSetupInJenkins set up in your Jenkins instance. You can do this here: https://yourJenkinsInstance/configure.
  • new SonarQube(this, [token: 'secretTextCred', sonarHostUrl: 'http://ces/sonar']) does not require the plugin and uses an access token, stored as secret text credential secretTextCred in your Jenkins instance.
  • new SonarQube(this, [usernamePassword: 'usrPwCred', sonarHostUrl: 'http://ces/sonar']) does not require the plugin and uses a SonarQube user account, stored as username with password credential usrPwCred in your Jenkins instance.

With the SonarQube instance you can now analyze your code. When using the plugin (i.e. sonarQubeEnv) you can also wait for the quality gate status, which is computed by SonarQube asynchronously. Note that this does not work for token and usernamePassword.

A complete example

stage('Static Code Analysis') {
  def sonarQube = new SonarQube(this, [sonarQubeEnv: 'sonarQubeServerSetupInJenkins'])

  sonarQube.analyzeWith(new MavenInDocker(this, "3.5.0-jdk-8"))
  sonarQube.timeoutInMinutes = 4

  if (!sonarQube.waitForQualityGateWebhookToBeCalled()) {
    unstable("Pipeline unstable due to SonarQube quality gate failure")
  }
}

Note that

  • Calling waitForQualityGateWebhookToBeCalled() requires a WebHook to be setup in your SonarQube server (globally or per project), that notifies Jenkins (url: https://yourJenkinsInstance/sonarqube-webhook/).
    See SonarQube Scanner for Jenkins.
  • Jenkins will wait for the webhook with a default timeout of 2 minutes. For big projects this might be too short; the timeout can be configured with the timeoutInMinutes property.
  • Calling waitForQualityGateWebhookToBeCalled() will only work when an analysis has been performed in the current job, i.e. analyzeWith() has been called, and only in conjunction with sonarQubeEnv.
  • When used in conjunction with SonarQubeCommunity/sonar-build-breaker, waitForQualityGateWebhookToBeCalled() will fail your build, if quality gate is not passed.
  • For now, SonarQube can only analyze using Maven. Extending this to use the plain SonarQube Runner in future, should be easy, however.

Branches

By default, the SonarQube legacy logic of creating one project per branch is used in a Jenkins Multibranch Pipeline project.

A more convenient alternative is the paid-version-only Branch Plugin or the sonarqube-community-branch-plugin, which has similar features but is difficult to install, not supported officially and does not allow for migration to the official branch plugin later on.

You can enable either branch plugin like so:

sonarQube.isUsingBranchPlugin = true
sonarQube.analyzeWith(mvn)

The branch plugin uses master as integration branch. If you want to use a different branch than master, you have to use the integrationBranch parameter, e.g.:

def sonarQube = new SonarQube(this, [sonarQubeEnv: 'sonarQubeServerSetupInJenkins', integrationBranch: 'develop'])
sonarQube.isUsingBranchPlugin = true
sonarQube.analyzeWith(mvn)

Note that using the branch plugin requires a first analysis without branches.

You can do this on Jenkins or locally.

On Jenkins, you can achieve this by setting the following for the first run:

sonarQube.isIgnoringBranches = true
sonarQube.analyzeWith(mvn)

Recommendation: Use Jenkins' replay feature for this. Then commit the Jenkinsfile with isUsingBranchPlugin.

An alternative is running the first analysis locally, e.g. with maven mvn clean install sonar:sonar -Dsonar.host.url=https://sonarcloud.io -Dsonar.organization=YOUR-ORG -Dsonar.login=YOUR-TOKEN

SonarCloud

SonarCloud is a public SonarQube instance that has some extra features, such as PullRequest decoration for GitHub, BitBucket, etc. ces-build-lib encapsulates the setup in SonarCloud class. It works just like SonarQube, i.e. you can create it using sonarQubeEnv, token, etc. and it provides the analyzeWith() and waitForQualityGateWebhookToBeCalled() methods.
The only difference: You either have to pass your organization ID using the sonarOrganization: 'YOUR_ID' parameter during construction, or configure it under https://yourJenkinsInstance/configure as "Additional analysis properties" (hit the "Advanced..." button to get there): sonar.organization=YOUR_ID.

Example using SonarCloud:

  def sonarQube = new SonarCloud(this, [sonarQubeEnv: 'sonarcloud.io', sonarOrganization: 'YOUR_ID'])

  sonarQube.analyzeWith(new MavenInDocker(this, "3.5.0-jdk-8"))

  if (!sonarQube.waitForQualityGateWebhookToBeCalled()) {
    unstable("Pipeline unstable due to SonarCloud quality gate failure")
  }

Just like for ordinary SonarQube, you have to setup a webhook in SonarCloud for waitForQualityGateWebhookToBeCalled() to work (see above).

If you want SonarCloud to decorate your Pull Requests, you will have to set up SonarCloud's integration for your SCM platform (e.g. the SonarCloud Application for GitHub, see Pull Requests in SonarQube below).

See also Pull Request analysis.

Note that SonarCloud uses the Branch Plugin, so the first analysis has to be done differently, as described in Branches.

Pull Requests in SonarQube

As described above, SonarCloud can annotate PullRequests using the SonarCloud Application for GitHub. It is no longer possible to do this from a regular community edition SonarQube, as the GitHub Plugin for SonarQube is deprecated.

So a PR build is treated just like any other. That is,

  • without branch plugin: A new project using the BRANCH_NAME from env is created.
  • with Branch Plugin: A new branch is analysed using the BRANCH_NAME from env.

The Jenkins GitHub Plugin sets BRANCH_NAME to the PR Name, e.g. PR-42.

Changelog

Provides the functionality to read changes of a specific version in a changelog that is based on the changelog format on https://keepachangelog.com/.

Note: The changelog will automatically be formatted. Characters like ", ', \ will be removed. A \n will be replaced with \\n. This is done to make it possible to pass this string to a json struct as a value.

Example:

Changelog changelog = new Changelog(this)

stage('Changelog') {
  String changes = changelog.getChangesForVersion('v1.0.0')
  // ...
}

changelogFileName

You can optionally pass the path to the changelog file if it is located somewhere else than in the root path or if the file name is not CHANGELOG.md.

Example:

Changelog changelog = new Changelog(this, 'myNewChangelog.md')

stage('Changelog') {
  String changes = changelog.getChangesForVersion('v1.0.0')
  // ...
}

GitHub

Provides the functionality to make changes to a GitHub repository, such as creating a new release.

Example:

Git git = new Git(this)
GitHub github = new GitHub(this, git)

stage('Github') {
  github.createRelease('v1.1.1', 'Changes for version v1.1.1')
}
  • github.createRelease(releaseVersion, changes [, productionBranch]) - Creates a release on github. Returns the GitHub Release-ID.
    • Use the releaseVersion (String) as name and tag.
    • Use the changes (String) as body of the release.
    • Optionally, use productionBranch (String) as the name of the production release branch. This defaults to master.
  • github.createReleaseWithChangelog(releaseVersion, changelog [, productionBranch]) - Creates a release on github. Returns the GitHub Release-ID.
    • Use the releaseVersion (String) as name and tag.
    • Use the changelog (Changelog) to extract the changes out of a changelog and add them to the body of the release.
    • Optionally, use productionBranch (String) as the name of the production release branch. This defaults to master.
  • github.addReleaseAsset(releaseId, filePath)
    • The releaseId (String) is the unique identifier of a release in the github API. Can be obtained as return value of createReleaseWithChangelog or createRelease.
    • The filePath specifies the path to the file which should be uploaded.
  • pushPagesBranch('folderToPush', 'commit Message') - Commits and pushes a folder to the gh-pages branch of the current repo. Can be used to conveniently deliver websites. See https://pages.github.com.
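
A minimal sketch combining these methods (release version and asset path are examples; the path must point to a previously built artifact):

String releaseId = github.createReleaseWithChangelog('v1.1.1', new Changelog(this))
github.addReleaseAsset(releaseId, 'target/my-app.jar')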

GitFlow

A wrapper class around the Git class to simplify the use of the git flow branching model.

Example:

Git git = new Git(this)
git.committerName = 'jenkins'
git.committerEmail = 'jenkins@example.com'
GitFlow gitflow = new GitFlow(this, git)

stage('Gitflow') {
  if (gitflow.isReleaseBranch()){
    gitflow.finishRelease(git.getSimpleBranchName())
  }
}
  • gitflow.isReleaseBranch() - Checks if the currently checked out branch is a gitflow release branch.
  • gitflow.finishRelease(releaseVersion [, productionBranch]) - Finishes a git release by merging into develop and production release branch (default: "master").
    • Use the releaseVersion (String) as the name of the new git release.
    • Optionally, use productionBranch (String) as the name of the production release branch. This defaults to master.

SCM-Manager

Provides the functionality to handle pull requests on an SCM-Manager repository.

You need to pass usernamePassword (i.e. a String containing the ID that refers to the Jenkins credentials) to SCMManager during construction. These are then used for handling the pull requests.

SCMManager scmm = new SCMManager(this, 'ourCredentials')

Set the SCM-Manager base URL during construction like so:

SCMManager scmm = new SCMManager(this, 'https://hostname/scm', 'ourCredentials')

Pull Requests

Each method requires a repository parameter, a String containing namespace and name, e.g. cloudogu/ces-build-lib.

  • scmm.searchPullRequestIdByTitle(repository, title) - Returns a pull request ID by title, or empty, if not present.
    • Use the repository (String) as the GitOps repository
    • Use the title (String) as the title of the pull request in question.
    • This methods requires the readJSON() step from the Pipeline Utility Steps plugin.
  • scmm.createPullRequest(repository, source, target, title, description) - Creates a pull request and returns its ID.
    • Use the repository (String) as the GitOps repository
    • Use the source (String) as the source branch of the pull request.
    • Use the target (String) as the target branch of the pull request.
    • Use the title (String) as the title of the pull request.
    • Use the description (String) as the description of the pull request.
  • scmm.updatePullRequest(repository, pullRequestId, title, description) - Updates the pull request.
    • Use the repository (String) as the GitOps repository
    • Use the pullRequestId (String) as the ID of the pull request.
    • Use the title (String) as the title of the pull request.
    • Use the description (String) as the description of the pull request.
  • scmm.createOrUpdatePullRequest(repository, source, target, title, description) - Creates a pull request if no PR is found or updates the existing one.
    • Use the repository (String) as the GitOps repository
    • Use the source (String) as the source branch of the pull request.
    • Use the target (String) as the target branch of the pull request.
    • Use the title (String) as the title of the pull request.
    • Use the description (String) as the description of the pull request.
  • scmm.addComment(repository, pullRequestId, comment) - Adds a comment to a pull request.
    • Use the repository (String) as the GitOps repository
    • Use the pullRequestId (String) as the ID of the pull request.
    • Use the comment (String) as the comment to add to the pull request.

Example:

def scmm = new SCMManager(this, 'https://your.ecosystem.com/scm', scmManagerCredentials)

def pullRequestId = scmm.createPullRequest('cloudogu/ces-build-lib', 'feature/abc', 'develop', 'My title', 'My description')
pullRequestId = scmm.searchPullRequestIdByTitle('cloudogu/ces-build-lib', 'My title')
scmm.updatePullRequest('cloudogu/ces-build-lib', pullRequestId, 'My new title', 'My new description')
scmm.addComment('cloudogu/ces-build-lib', pullRequestId, 'A comment')

HttpClient

HttpClient provides a simple curl frontend for groovy.

  • Not surprisingly, it requires curl on the jenkins agents.
  • If you need to authenticate, you can create a HttpClient with optional credentials ID (usernamePassword credentials)
  • HttpClient provides get(), put() and post() methods
  • All methods have the same signature, e.g.
    http.get(url, contentType = '', data = '')
    • url (String)
    • optional contentType (String) - set as acceptHeader in the request
    • optional data (Object) - sent in the body of the request
  • If successful, all methods return the same data structure, a map of:
    • httpCode - a string containing the http status code
    • headers - a map containing the response headers, e.g. [ location: 'http://url' ]
    • body - an optional string containing the body of the response
  • In case of an error (Connection refused, Could not resolve host, etc.) an exception is thrown which fails the build right away. If you don't want the build to fail, wrap the call in a try/catch block.

Example:

HttpClient http = new HttpClient(this, 'myCredentialID')

// Simplest example
echo http.get('http://url')

// POSTing data
def dataJson = JsonOutput.toJson([
    comment: comment
])
def response = http.post('http://url/comments', 'application/json', dataJson)

if (response.httpCode == '201' && response.headers['content-type'] == 'application/json') {
    def json = readJSON text: response.body
    echo json.count
}

K3d

K3d provides functions to set up and administer a local k3s cluster in Docker.

Example:

K3d k3d = new K3d(this, env.WORKSPACE, env.PATH)

try {
    stage('Set up k3d cluster') {
        k3d.startK3d()
    }

    stage('Do something with your cluster') {
        k3d.kubectl("get nodes")
    }
    stage('Apply your Helm chart') {
        k3d.helm("install path/to/your/chart")
    }

    stage('build and push development artefact') {
        String myCurrentArtefactVersion = "yourTag-1.2.3-dev"
        imageName = k3d.buildAndPushToLocalRegistry("your/image", myCurrentArtefactVersion)
        // your image name may look like this: k3d-citest-123456/your/image:yourTag-1.2.3-dev
        // the image name can be applied to your cluster as usual, f. i. with k3d.kubectl() with a customized K8s resource 
    }
    
    stage('execute k8s-ces-setup') {
        k3d.setup('0.20.0')
    }

    stage('install resources and wait for them') {
        imageName = "registry.cloudogu.com/official/my-dogu-name:1.0.0"
        k3d.installDogu("my-dogu-name", imageName, myDoguResourceYamlFile)

        k3d.waitForDeploymentRollout("my-dogu-name", 300, 5)
    }

    stage('install a dependent dogu by applying a dogu resource') {
        k3d.applyDoguResource("my-dependency", "myNamespace", "10.0.0-1")
        k3d.waitForDeploymentRollout("my-dependency", 300, 5)
    }

} catch (Exception e) {
    // in case of a failed build collect dogus, resources and pod logs and archive them as log file on the build.
    k3d.collectAndArchiveLogs()
    throw e
} finally {
    stage('Remove k3d cluster') {
        k3d.deleteK3d()
    }
}

DoguRegistry

DoguRegistry provides functions to easily push dogus and k8s components to a configured registry.

Example:

DoguRegistry registry = new DoguRegistry(this)

// push dogu
registry.pushDogu()

// push k8s component
registry.pushK8sYaml("pathToMyK8sYaml.yaml", "k8s-dogu-operator", "mynamespace", "0.9.0")

Bats

Bats provides functions to easily execute existing bats tests for a project.

Example:

Docker docker = new Docker(this)

stage('Bats Tests') {
    Bats bats = new Bats(this, docker)
    bats.checkAndExecuteTests()
}

Makefile

Makefile provides functions regarding the Makefile in the current directory.

Example:

    Makefile makefile = new Makefile(this)
    String currentVersion = makefile.getVersion()

Markdown

Markdown provides functions regarding the Markdown files in the project's docs directory.

    Markdown markdown = new Markdown(this)
    markdown.check()

markdown.check() runs a container with the latest https://github.com/tcort/markdown-link-check image and verifies that the links in the project's docs directory are alive.

Additionally, the markdown link checker can be used with a specific version (default: stable).

    Markdown markdown = new Markdown(this, "3.11.0")
    markdown.check()

DockerLint (Deprecated)

Use Dockerfile.lint() instead of lintDockerfile()! See Dockerfile

lintDockerfile() // uses Dockerfile as default; optional parameter

See lintDockerFile

ShellCheck

shellCheck() // search for all .sh files in folder and runs shellcheck
shellCheck(fileList) // fileList="a.sh b.sh" execute shellcheck on a custom list

See shellCheck

Steps

mailIfStatusChanged

Provides the functionality of the Jenkins Post-build Action "E-mail Notification" known from freestyle projects.

catchError {
 // Stages and steps
}
mailIfStatusChanged('user1@example.com,user2@example.com')

See mailIfStatusChanged

isPullRequest

Returns true if the current build is a pull request (i.e. when the CHANGE_ID environment variable is set). Tested with GitHub.

stage('SomethingToSkipWhenInPR') {
    if (!isPullRequest()) {
      // ...
    }
    
}

findEmailRecipients

Determines the email recipients: For branches that are considered unstable (all except for 'master' and 'develop') only the Git author is returned (if present). Otherwise, the default recipients (passed as parameter) and git author are returned.

catchError {
 // Stages and steps
}
mailIfStatusChanged(findEmailRecipients('user1@example.com,user2@example.com'))

The example sends state-change emails to 'user1@example.com,user2@example.com' plus the git author for stable branches, and only to the git author for unstable branches.

findHostName

Returns the hostname of the current Jenkins instance. For example, if running on http(s)://server:port/jenkins, server is returned.

isBuildSuccessful

Returns true if the build is successful, i.e. not failed or unstable (yet).
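
A minimal sketch using both steps:

echo "Jenkins is running on ${findHostName()}"

if (isBuildSuccessful()) {
    echo 'No stage has failed or been marked unstable so far'
}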

findVulnerabilitiesWithTrivy

Returns a list of vulnerabilities or an empty list if there are no vulnerabilities for the given severity.

findVulnerabilitiesWithTrivy(trivyConfig as Map)

trivyConfig = [ 
    imageName: 'alpine:3.17.2', 
    severity: [ 'HIGH', 'CRITICAL' ],
    trivyVersion: '0.41.0',
    additionalFlags: '--ignore-unfixed'
]

Here the only mandatory field is imageName. If no imageName was passed the function returns an empty list.

  • imageName (string): The name of the image to be scanned
  • severity (list of strings): If left blank, all severities will be shown. If one or more are specified, only these will be shown, i.e. if 'HIGH' is passed, then only vulnerabilities with 'HIGH' severity are shown
  • trivyVersion (string): The version of the trivy image
  • additionalFlags (string): Additional flags for trivy, e.g. --ignore-unfixed

Simple examples

node {
    stage('Scan Vulns') {
        def vulns = findVulnerabilitiesWithTrivy(imageName: 'alpine:3.17.2')
        if (vulns.size() > 0) {
            archiveArtifacts artifacts: '.trivy/trivyOutput.json'
            unstable "Found  ${vulns.size()} vulnerabilities in image. See vulns.json"
        }
    }
}

Ignore / allowlist

If you want to ignore / allow certain vulnerabilities, please use a .trivyignore file. Provide the file in your repo / the directory where you run your job, e.g.:

.gitignore
Jenkinsfile
.trivyignore

Official documentation

# Accept the risk
CVE-2018-14618

# Accept the risk until 2023-01-01
CVE-2019-14697 exp:2023-01-01

# No impact in our settings
CVE-2019-1543

# Ignore misconfigurations
AVD-DS-0002

# Ignore secrets
generic-unwanted-rule
aws-account-id

If there are vulnerabilities the output looks as follows.

{
  "SchemaVersion": 2,
  "ArtifactName": "alpine:3.17.2",
  "ArtifactType": "container_image",
  "Metadata": {
    "OS": {
      "Family": "alpine",
      "Name": "3.17.2"
    },
    "ImageID": "sha256:b2aa39c304c27b96c1fef0c06bee651ac9241d49c4fe34381cab8453f9a89c7d",
    "DiffIDs": [
      "sha256:7cd52847ad775a5ddc4b58326cf884beee34544296402c6292ed76474c686d39"
    ],
    "RepoTags": [
      "alpine:3.17.2"
    ],
    "RepoDigests": [
      "alpine@sha256:ff6bdca1701f3a8a67e328815ff2346b0e4067d32ec36b7992c1fdc001dc8517"
    ],
    "ImageConfig": {
      "architecture": "amd64",
      "container": "4ad3f57821a165b2174de22a9710123f0d35e5884dca772295c6ebe85f74fe57",
      "created": "2023-02-11T04:46:42.558343068Z",
      "docker_version": "20.10.12",
      "history": [
        {
          "created": "2023-02-11T04:46:42.449083344Z",
          "created_by": "/bin/sh -c #(nop) ADD file:40887ab7c06977737e63c215c9bd297c0c74de8d12d16ebdf1c3d40ac392f62d in / "
        },
        {
          "created": "2023-02-11T04:46:42.558343068Z",
          "created_by": "/bin/sh -c #(nop)  CMD [\"/bin/sh\"]",
          "empty_layer": true
        }
      ],
      "os": "linux",
      "rootfs": {
        "type": "layers",
        "diff_ids": [
          "sha256:7cd52847ad775a5ddc4b58326cf884beee34544296402c6292ed76474c686d39"
        ]
      },
      "config": {
        "Cmd": [
          "/bin/sh"
        ],
        "Env": [
          "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
        ],
        "Image": "sha256:ba2beca50019d79fb31b12c08f3786c5a0621017a3e95a72f2f8b832f894a427"
      }
    }
  },
  "Results": [
    {
      "Target": "alpine:3.17.2 (alpine 3.17.2)",
      "Class": "os-pkgs",
      "Type": "alpine",
      "Vulnerabilities": [
        {
          "VulnerabilityID": "CVE-2023-0464",
          "PkgID": "[email protected]",
          "PkgName": "libcrypto3",
          "InstalledVersion": "3.0.8-r0",
          "FixedVersion": "3.0.8-r1",
          "Layer": {
            "DiffID": "sha256:7cd52847ad775a5ddc4b58326cf884beee34544296402c6292ed76474c686d39"
          },
          "SeveritySource": "nvd",
          "PrimaryURL": "https://avd.aquasec.com/nvd/cve-2023-0464",
          "DataSource": {
            "ID": "alpine",
            "Name": "Alpine Secdb",
            "URL": "https://secdb.alpinelinux.org/"
          },
          "Title": "Denial of service by excessive resource usage in verifying X509 policy constraints",
          "Description": "A security vulnerability has been identified in all supported versions of OpenSSL related to the verification of X.509 certificate chains that include policy constraints. Attackers may be able to exploit this vulnerability by creating a malicious certificate chain that triggers exponential use of computational resources, leading to a denial-of-service (DoS) attack on affected systems. Policy processing is disabled by default but can be enabled by passing the `-policy' argument to the command line utilities or by calling the `X509_VERIFY_PARAM_set1_policies()' function.",
          "Severity": "HIGH",
          "CweIDs": [
            "CWE-295"
          ],
          "CVSS": {
            "nvd": {
              "V3Vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
              "V3Score": 7.5
            },
            "redhat": {
              "V3Vector": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H",
              "V3Score": 5.9
            }
          },
          "References": [
            "https://access.redhat.com/security/cve/CVE-2023-0464",
            "https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-0464",
            "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=2017771e2db3e2b96f89bbe8766c3209f6a99545",
            "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=2dcd4f1e3115f38cefa43e3efbe9b801c27e642e",
            "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=879f7080d7e141f415c79eaa3a8ac4a3dad0348b",
            "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=959c59c7a0164117e7f8366466a32bb1f8d77ff1",
            "https://nvd.nist.gov/vuln/detail/CVE-2023-0464",
            "https://ubuntu.com/security/notices/USN-6039-1",
            "https://www.cve.org/CVERecord?id=CVE-2023-0464",
            "https://www.openssl.org/news/secadv/20230322.txt"
          ],
          "PublishedDate": "2023-03-22T17:15:00Z",
          "LastModifiedDate": "2023-03-29T19:37:00Z"
        }
      ]
    }
  ]
}

Examples


ces-build-lib's Issues

Add github functionality to add release assets

There is a new feature which will automatically create GitHub Releases for Cloudogu's Go tools. These changes include new features for Jenkinsfiles which should be included here.
The features will be:

  • Uploading Assets on GitHub Releases
  • Signing Builds with GPG

Create first release

So it can be properly referenced when included in Jenkinsfiles.

Workaround: Use the commit hash. Don't use the branch, as the API might change, breaking the build!

@Library('github.com/cloudogu/ces-build-lib@feature/a5955fb')

Provide `GitInDocker` class for more deterministic Git Commands

Especially when implementing extensive Git Operations in Jenkins (e.g. GitFlow or GitOps) it might be important to rely on specific git features (or avoid bugs fixed in specific versions).

However, when using the Git class as is, the calls are executed by the git client provided by the Jenkins agent. This makes the outcome of calls to the methods of Git strongly dependent on the agent.

This fact can be changed by executing all Git Commands in a Container that contains a deterministic version of git.
We could implement this similar to MavenInDocker, by deriving from Git and overriding a method that executes all script.sh calls for Git.
In this method we could wrap the original method to be executed in a container, e.g. like so:

  new Docker(this).image("alpine/git:v${gitVersion}")
          .mountJenkinsUser()
          .inside('--entrypoint=""') {
               super.method()
          }

SonarQube: Adapt to SonarCloud's changes

Change in SonarCloud for PullRequests in GitHub

if you were relying on the GitHub Plugin, its properties are no longer required and they must be removed from your configuration: sonar.analysis.mode, sonar.github.repository, sonar.github.pullRequest,
sonar.github.oauth

The new properties are described here

sonar.pullrequest.base=master
sonar.pullrequest.branch=feature/my-new-feature
sonar.pullrequest.key=5
sonar.pullrequest.provider=GitHub
sonar.pullrequest.github.repository=my-company/my-repo

Does that mean we need to differentiate between SonarQube and SonarCloud now?

Change here
For an example see schnatterer/sonarcloudTest

Also: waitForQualityGateWebhookToBeCalled() should no longer exclude PRs. As they are fully analysed on SonarCloud now (and the PRs are commented via the GitHub/BitBucket integrations of SonarCloud), we can query the QualityGate status, as for normal builds. See
scm-manager.

Docker.Image.mountJenkinsUser fails when Build executor runs in a container

Problem: When using MavenInDocker and running maven like mvn 'install', the following error occurred:

[Pipeline] withDockerContainer
Jenkins seems to be running inside container 6a11e2323ec0b313a455baedc5b28a35dffce026656884298aba4f58149a3b14
$ docker run -t -d -u 1000:1000 -v /var/jenkins_home/workspace/ature_3_continuous_delivery-6HRXLXYING3RPQ2T3ATY6DOVMNEXY7ZFTGPCHQBMB7KHPI2V5JBQ/.jenkins/etc/passwd:/etc/passwd:ro -w /var/jenkins_home/workspace/ature_3_continuous_delivery-6HRXLXYING3RPQ2T3ATY6DOVMNEXY7ZFTGPCHQBMB7KHPI2V5JBQ --volumes-from 6a11e2323ec0b313a455baedc5b28a35dffce026656884298aba4f58149a3b14 -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** --entrypoint cat maven:3.5.0-jdk-8
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
java.io.IOException: Failed to run image 'maven:3.5.0-jdk-8'. 
Error: docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:359: container init caused \"rootfs_linux.go:54:
mounting 
/var/jenkins_home/workspace/ature_3_continuous_delivery-6HRXLXYING3RPQ2T3ATY6DOVMNEXY7ZFTGPCHQBMB7KHPI2V5JBQ/.jenkins/etc/passwd 
to rootfs 
/var/lib/docker/aufs/mnt/7b8aadc86829e9cbc74d8e03555a06bdd67d00865f671ac777638cd94bfe2fe9 
at 
/var/lib/docker/aufs/mnt/7b8aadc86829e9cbc74d8e03555a06bdd67d00865f671ac777638cd94bfe2fe9/etc/passwd 
caused not a directory: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.

Mercurial Support

Right now, the implementation is Git-only.

The Cloudogu EcoSystem also supports Mercurial (and is developed on Mercurial itself), so we should also support Mercurial (aka hg).

A quick search reveals at least these two Git-specific implementations that must be generalized (e.g. using a SCM class, that can detect the remote somehow):

  • findEmailRecipients
  • SonarQube - sets the target branch to master (on hg, the default branch is called default)
  • SonarCloud - uses Git class for BitBucket, which could be hg

Note:

  • getRepositoryUrl: hg paths default
  • getCommitAuthorComplete: hg log --branch . --limit 1 --template '{author}'

Maven: Use mirrors

Via settings.xml we can set up mirrors for Maven repos, e.g. central.

<mirrors>
  <mirror>
    <id>${url}</id>
    <name>${url} Central Mirror</name>
    <url>${url}/nexus/content/groups/public</url>
    <mirrorOf>central</mirrorOf>
  </mirror>
</mirrors>

ces-build-lib could help us do that.

We already write a settings.xml for deployment, so we have to create a generic mechanism for settings.xml, e.g. using a builder (see the sketch below). We also have to assert that the settings.xml is written at least once before mvn / call() is invoked and that Maven actually uses it (-s parameter).
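
A minimal sketch of what such a builder could look like; MavenSettings, mirror() and useSettings() are hypothetical names, and mirrorBaseUrl is an example variable:

// Hypothetical builder API, only to illustrate the idea
def settings = new MavenSettings()
        .mirror(id: 'central-mirror',
                name: 'Central Mirror',
                url: "${mirrorBaseUrl}/nexus/content/groups/public",
                mirrorOf: 'central')

mvn.useSettings(settings) // would have to write settings.xml before the first mvn call and pass -s
mvn 'clean install'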

SCMManager: createPullRequest() fails behind proxy.

Behind nginx, the httpResponse returned by SCM-Manager looks like this, for example:

httpResponse header [server:nginx/1.17.10, date:Wed, 24 Mar 2021 20:03:24 GMT, cache-control:no-cache, location:https://my.host/scm/api/v2/pull-requests/fluxv1/gitops/4]

Leading to

java.lang.NullPointerException: Cannot invoke method split() on null object

Because we look up the header like so:

return httpResponse.headers.Location.split("/")[-1]
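
A minimal sketch of a more robust lookup (not the current implementation), assuming httpResponse.headers behaves like a Map and using a hypothetical helper name:

String pullRequestIdFrom(def httpResponse) {
    // Look up the Location header case-insensitively; proxies like nginx may change the casing
    def location = httpResponse.headers.find { it.key.equalsIgnoreCase('location') }?.value
    if (!location) {
        script.error 'Could not determine pull request ID: no Location header in response'
    }
    return location.split('/')[-1]
}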

Git.push only works with remote "origin"

When implementing Git.pull() we wondered why Git.push() is implemented as git push origin ${refSpec}. With this, it is impossible to push to remotes other than origin.

It would be a good idea to remove the hard-coded origin, but that would break backwards compatibility.

So this behaviour should change with version 2.x of ces-build-lib.
For now, we implement pull() without origin, so it behaves more correctly, but differently from push() :-/
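
A possible backwards-compatible signature for 2.x could default the remote to origin; the method signature and the internal git() helper are assumed:

// Keeps 'git push origin <refSpec>' as the default, but allows other remotes
void push(String refSpec, String remote = 'origin') {
    git "push ${remote} ${refSpec}"
}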

Shell: Set pipefail by default

Using the sh step is error-prone when using pipes.
It's good practice to set set -o pipefail in all bash scripts, because otherwise:

cat missingFile | sed s/foo/bar/g

would not return a non-zero exit code (the pipe's exit status is that of the last command, here sed), which is unexpected and hard to debug.

So: why doesn't ces-build-lib make life easier for us and prepend pipefail before every sh call?
E.g. like so: script.sh(returnStdout: true, script: "set -e && ${args}")
Note: set -o pipefail fails (haha) with
...script.sh: 1: set: Illegal option -o pipefail.
presumably because the sh step runs the script with plain /bin/sh, which does not know pipefail.
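
One conceivable workaround (not part of ces-build-lib) is to start the script with a bash shebang, which the sh step honors; a minimal sketch with a hypothetical helper name:

// Prepend a bash shebang plus pipefail, so failures in the middle of a pipe are not swallowed
String shWithPipefail(def script, String args) {
    return script.sh(returnStdout: true, script: "#!/bin/bash\nset -eo pipefail\n${args}")
}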

SonarQube: Breaking builds from 7.9 due to `sonar.branch`

org.sonar.api.utils.MessageException: The 'sonar.branch' parameter is no longer supported. You should stop using it. Branch analysis is available in Developer Edition and above. See https://redirect.sonarsource.com/editions/developer.html for more information.
[ERROR] 

SonarCloud error

java.lang.IllegalAccessError: class com.cloudogu.ces.cesbuildlib.SonarCloud tried to access private field com.cloudogu.ces.cesbuildlib.SonarQube.isUsingBranchPlugin (com.cloudogu.ces.cesbuildlib.SonarCloud and com.cloudogu.ces.cesbuildlib.SonarQube are in unnamed module of loader org.jenkinsci.plugins.workflow.cps.CpsGroovyShell$CleanGroovyClassLoader @7dfc8298)
	at com.cloudogu.ces.cesbuildlib.SonarCloud.<init>(SonarCloud.groovy:14)
	at com.cloudogu.ces.cesbuildlib.SonarCloud.<init>(SonarCloud.groovy:12)
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
	at org.codehaus.groovy.reflection.CachedConstructor.invoke(CachedConstructor.java:83)
	at org.codehaus.groovy.runtime.callsite.ConstructorSite$ConstructorSiteNoUnwrapNoCoerce.callConstructor(ConstructorSite.java:105)
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallConstructor(CallSiteArray.java:59)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor(AbstractCallSite.java:238)
	at org.kohsuke.groovy.sandbox.impl.Checker$3.call(Checker.java:230)
	at org.kohsuke.groovy.sandbox.GroovyInterceptor.onNewInstance(GroovyInterceptor.java:42)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onNewInstance(SandboxInterceptor.java:196)
	at org.kohsuke.groovy.sandbox.impl.Checker$3.call(Checker.java:227)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedConstructor(Checker.java:232)
	at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.constructorCall(SandboxInvoker.java:21)
	at WorkflowScript.run(WorkflowScript:42)
	at ___cps.transform___(Native Method)
	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:100)
	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:85)
	at jdk.internal.reflect.GeneratedMethodAccessor76.invoke(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
	at com.cloudbees.groovy.cps.impl.CollectionLiteralBlock$ContinuationImpl.dispatch(CollectionLiteralBlock.java:55)
	at com.cloudbees.groovy.cps.impl.CollectionLiteralBlock$ContinuationImpl.item(CollectionLiteralBlock.java:45)
	at jdk.internal.reflect.GeneratedMethodAccessor78.invoke(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
	at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
	at com.cloudbees.groovy.cps.Next.step(Next.java:83)
	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:152)
	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:146)
	at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:136)
	at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:275)
	at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:146)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:51)
	at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:187)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:420)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:330)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:294)
	at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:30)
	at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:70)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)

SonarQube Preview Mode deprecated

Leads to failing builds for PRs.

The analysis mode parameter is deprecated because it has been replaced by the branch analysis functionality provided in Developer Edition ($).
Yes, Developer Edition is commercial/paid.

https://groups.google.com/forum/#!topic/sonarqube/4bzwxkqJGAc

So, which options does that leave for ces-build-lib, which is supposed to work with the free edition?

First idea: analyze PRs into new projects, similar to the behavior for branches.
If the Branch Plugin is present, we could analyse the PR into a branch.

Support Nexus 3

Nexus 3's URLs for snapshot and release repos have changed, resulting in the following error when deploying to Nexus 3 using ces-build-lib:

Caused by: org.apache.maven.plugin.MojoExecutionException: Failed to deploy artifacts: Could not transfer artifact ...
Failed to transfer file: https://URL/content/repositories/snapshots/org/springframework/samples/spring-petclinic/1.5.2-SNAPSHOT/spring-petclinic-1.5.2-20180724.001942-1.jar. Return code is: 405, ReasonPhrase: Method Not Allowed.
          Snapshot                            Release
Nexus 2   /content/repositories/snapshots     /content/repositories/releases
Nexus 3   /repository/maven-snapshots         /repository/maven-releases

SonarQube: Provide option to Analyze without SQ Plugin

It's also possible to do SonarQube analyses without having the Jenkins SonarQube plugin installed.

See here for an example.

In ces-build-lib, the SonarQube class just needs to set the following parameters (see the sketch after this list):

  • sonar.host.url
  • Credentials. Either
    • sonar.login as secret text credential or
    • sonar.login & sonar.password as username and password credential
  • SONAR_MAVEN_GOAL: we could use sonar:sonar
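
A minimal sketch of what the resulting call could look like; the host URL and credentials ID are examples, and withCredentials requires the Credentials Binding plugin:

withCredentials([string(credentialsId: 'sonar-token', variable: 'SONAR_AUTH_TOKEN')]) {
    // sonar:sonar as the Maven goal, host and login passed as plain parameters
    mvn "sonar:sonar " +
        "-Dsonar.host.url=https://sonarqube.example.com " +
        "-Dsonar.login=${env.SONAR_AUTH_TOKEN}"
}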

Support "main" production branch

The release mechanism should work with production branches called either master or main. Until now it only supports master branches.
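
A minimal sketch of the check this boils down to (method name assumed):

// Accept both common production branch names
boolean isProductionBranch(String branchName) {
    return branchName in ['master', 'main']
}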

SonarQube/Git: Branch Name works only in multi-branch pipelines

env.BRANCH_NAME is only present in multi-branch pipeline builds; regular pipelines don't have information about branches.

Affected methods:

  • Git.getBranchName()
  • SonarQube.initMavenForRegularAnalysis()

We could use git rev-parse --abbrev-ref HEAD in the Git class instead. SonarQube could then use the Git class. That keeps everything DRY.
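
A minimal sketch of such a fallback in the Git class (assuming the existing getBranchName() method):

String getBranchName() {
    // BRANCH_NAME is only set in multi-branch pipelines; otherwise ask git itself
    script.env.BRANCH_NAME ?: script.sh(returnStdout: true, script: 'git rev-parse --abbrev-ref HEAD').trim()
}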

When fixed, update README.

Git: Clone without parameters leads to RejectedAccessException

Initializing the Git object with credentials and later trying to clone a repository by providing only the URL runs into an error:
executing the containsKey method on a String does not result in a MissingMethodException on Jenkins, but in a RejectedAccessException.

Source

def git = script.cesBuildLib.Git.new(script, helmConfig.credentialsId)
// do some other stuff

// finally, providing only a URL for cloning runs into the issue below
git helmConfig.repositoryUrl
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use method groovy.lang.GroovyObject invokeMethod java.lang.String java.lang.Object (org.codehaus.groovy.runtime.GStringImpl containsKey java.lang.String)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.whitelists.StaticWhitelist.rejectMethod(StaticWhitelist.java:270)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:159)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:142)
	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:161)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:165)
	at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17)
	at com.cloudogu.ces.cesbuildlib.Git.git(Git.groovy:52)
	at com.cloudogu.ces.cesbuildlib.Git.call(Git.groovy:40)
	at com.cloudogu.gitopsbuildlib.deployment.helm.repotype.GitRepo.getHelmChartFromGitRepo(GitRepo.groovy:35)
	at ___cps.transform___(Native Method)
	at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:86)
	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:113)
	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:83)
	at sun.reflect.GeneratedMethodAccessor92.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
	at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
	at com.cloudbees.groovy.cps.Next.step(Next.java:83)
	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174)
	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163)
	at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:129)
	at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:268)
	at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:51)
	at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:185)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:400)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:96)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:312)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:276)
	at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:136)
	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
	at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
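
A minimal sketch of a guard that would avoid calling containsKey on a String; the surrounding structure of Git.git() is assumed, not quoted:

def git(def args) {
    // Only treat args as a Map when it actually is one; a plain URL arrives as a (G)String
    if (args instanceof Map) {
        // ... handle named parameters (url, credentialsId, ...)
    } else {
        // ... handle the simple "clone by URL" case
    }
}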

findIp() without container object

Make findIp() also work with a container ID String instead of a container object.

E.g. overloaded (if possible):

  • String findIp(String containerId)
  • String findIp(container)

Or duck-typed: if a String is passed, use it as the ID, otherwise fall back to container.id (see the sketch below).
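
A minimal sketch of the duck-typed variant, assuming findIp() uses docker inspect under the hood:

String findIp(def container) {
    // Accept either a container ID String or a container object with an id property
    String containerId = container instanceof String ? container : container.id
    script.sh(returnStdout: true, script: "docker inspect -f '{{.NetworkSettings.IPAddress}}' ${containerId}").trim()
}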

MavenInDocker: Check if image is already built and reuse it

Less log clutter on Jenkins. Faster builds.

Note: add a version to the image tag (like 1, 2, 3, 4) so a rebuild can be forced when the Dockerfile is changed.

Example to illustrate the basic concept:

private static final int DOCKER_FILE_VERSION = 1

// ...
def mvn(String args) {
    // ...
    // dockerImagePresent() is a hypothetical helper that checks whether the image already exists locally
    if (!dockerImagePresent("ces-build-lib/maven/$dockerImageVersion-$DOCKER_FILE_VERSION")) {
        script.docker.build("ces-build-lib/maven/$dockerImageVersion-$DOCKER_FILE_VERSION", createDockerfilePath())
    }
}

When writeDockerFile() is changed, DOCKER_FILE_VERSION is increased by one, forcing a rebuild of the image.
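
dockerImagePresent() would have to be implemented as well; one possible (hypothetical) variant:

boolean dockerImagePresent(String imageName) {
    // 'docker images -q' prints the image ID if the image exists locally, nothing otherwise
    return script.sh(returnStdout: true, script: "docker images -q ${imageName}").trim() != ''
}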

MavenInDocker: Don't create a new image, just mount a temporary /etc/passwd

For example

writeFile file: 'passwd', text: "jenkins:x:1002:1003::${pwd()}:/bin/bash"
// ...
docker.image('someImage').inside("-v ${pwd()}/passwd:/etc/passwd:ro") {
    // run the build steps as the mapped user
}

  • Could this be realized as default in Docker.image()?
  • The UID/GID (1002/1003) might be different in other environments. Parse from /etc/passwd as in MavenInDocker
    Implemented in Docker
  • /home/jenkins might be different in other environments
  • The user jenkins might be different in other environments (see #6)
  • We also have to mount /etc/group
  • Reuse the Docker class in MavenInDocker
  • Update docs

This is likely to make #4 obsolete.

Collect events with describe from Kubernetes resources.

Currently, the log-and-archive method collects the pod logs, YAML resources (kubectl get) and Dogu descriptions (kubectl describe).
It would be useful if the method also collected the descriptions of all other resources (see the sketch after the list below).

These are the resources to be collected:

  • persistentvolumeclaim
  • statefulset
  • replicaset
  • deployment
  • service
  • secret
  • pod
  • configmap
  • persistentvolume
  • ingress
  • ingressclass
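
A minimal sketch of the collection loop; the resource list is taken from above, file names and error handling are assumptions:

def resources = ['persistentvolumeclaim', 'statefulset', 'replicaset', 'deployment', 'service',
                 'secret', 'pod', 'configmap', 'persistentvolume', 'ingress', 'ingressclass']
for (resource in resources) {
    // '|| true' keeps the collection going if a resource type yields nothing
    sh "kubectl describe ${resource} > ${resource}-describe.txt || true"
}
archiveArtifacts artifacts: '*-describe.txt', allowEmptyArchive: true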

More Docker Utils

// Copies a path from an image's filesystem to the host without starting a container process
void extractFromImageFileSystem(def imageName, String imagePath, String targetPath) {
    sh "tempContainer=\$(docker create '${imageName}') && " +
       "docker cp \${tempContainer}:/${imagePath} '${targetPath}' && " +
       "docker rm \${tempContainer}"
}
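
Usage could look like this (image and paths are just examples):

extractFromImageFileSystem('nginx:alpine', 'etc/nginx/nginx.conf', 'extracted-nginx.conf')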

MavenInDocker: Does not work when pom is in subfolder

I had a problem building a Maven project that uses the Maven SCM plugin. I was using the MavenInDocker class to build the project.

The Maven project is inside a subfolder of the main project, so I was using the dir block in my Jenkinsfile. The problem is that in this case only the current dir is mounted into the Maven Docker container, which causes the build to fail because the .git folder in the parent folder is required when using the SCM plugin.

To bypass this, I executed the Maven build in the base directory (to make sure everything is mounted inside Docker) and passed the path to the pom.xml into the additionalArgs of the Maven object.
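
The workaround roughly looked like this; the Maven image version and the pom path are examples, additionalArgs is the property mentioned above:

// Run Maven from the repository root so the .git folder is mounted as well
Maven mvn = new MavenInDocker(this, '3.5.0-jdk-8')
mvn.additionalArgs = '-f mySubfolder/pom.xml'

stage('Build') {
    mvn 'clean install'
}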

Can you please make sure that the whole project is mounted into the Maven-in-Docker container?
Maybe there is also another way, but I don't have an idea.
