sap / jenkins-library

Jenkins shared library for Continuous Delivery pipelines.

Home Page: https://www.project-piper.io

License: Apache License 2.0

Groovy 24.16% Shell 0.03% Dockerfile 0.01% HTML 0.02% CSS 0.01% Python 0.01% Scala 0.02% Go 75.73%
ci-cd open-source jenkins golang cli

jenkins-library's Introduction


Project Piper Repository

The Project "Piper" offers default pipelines to easily implement CI/CD processes integrating SAP systems. The corresponding "Shared Library" provides a set of "steps" to build your own scenarios beyond defaults.

User Documentation

The user documentation of Project Piper is available on the Piper pages at https://sap.github.io/jenkins-library/.

Known Issues

A list of known issues is available on the GitHub issues page of this project.

How to obtain support

Feel free to open new issues for feature requests, bugs or general feedback on the GitHub issues page of this project.

Register for our Google group to receive updates or to ask questions.

Contributing

Read and understand our contribution guidelines before opening a pull request.

jenkins-library's People

Contributors

alejandraferreirovidal, anilkeshav27, c0d1ngm0nk3y, ccfenner, ceperapl, daniel-kurzynski, danielmieg, daskuznetsova, dominiklendle, ffeldmann, fwilhe, kevinhudemann, kstiehl, marcusholl, nevskrem, o-liver, olivernocon, pbusko, peterpersiel, radsoulbeard, redehnrov, renovate[bot], rodibrin, sarahlendle, srinikitha09, stippi2, sumeetpatil, tiloko, v0lkc, vstarostin


jenkins-library's Issues

name conflict when name of a groovy script file matches name of a class defined inside the groovy script file

Just got this when executing a pipeline:

[Pipeline] End of Pipeline
hudson.remoting.ProxyException: org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
file:/var/jenkins_home/jobs/test-pipeline/builds/37/libs/piper-library-os/src/com/sap/piper/FileUtils.groovy: 10: Invalid duplicate class definition of class com.sap.piper.FileUtils : The source file:/var/jenkins_home/jobs/test-pipeline/builds/37/libs/piper-library-os/src/com/sap/piper/FileUtils.groovy contains at least two definitions of the class com.sap.piper.FileUtils.
One of the classes is an explicit generated class using the class statement, the other is a class generated from the script body based on the file name. Solutions are to change the file name or to change the class name.
 @ line 10, column 1.
   class FileUtils implements Serializable {

-> We have to ensure that the name of a class inside a Groovy script file does not conflict with the name of the Groovy script file itself.
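A minimal sketch of the collision (file and package names are taken from the error above; the loose statement is illustrative):

```groovy
// File: src/com/sap/piper/FileUtils.groovy
package com.sap.piper

// Any loose statement makes Groovy generate an implicit class named after
// the script file, i.e. com.sap.piper.FileUtils ...
println 'initializing'

// ... which then collides with this explicit class of the same name:
class FileUtils implements Serializable {
}

// Fix: remove the loose statements, rename the file, or rename the class.
```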

Break caused by removing getters and setters

All our jobs are failing because our Jenkinsfile uses commonPipelineEnvironment.setConfigProperties, which appears to have been deleted in the last change. Please provide an updated Jenkinsfile or revert the change.

Reduce TravisCI jobs

We currently have two active Travis jobs on the repo, both of which are mandatory for PRs. We should restrict the push job so it runs only on the master branch and not on all branches, or at least disable it on branches that are already built by the PR job.

(Screenshot from 2018-07-12, 23:35)

neoDeploy issue

Something is wrong with some recent commits in neoDeploy. Our pipeline is failing with the following error:

[Info] Starting deployment of deployProps['propertiesFile']
[Pipeline] echo
--- BEGIN LIBRARY STEP: neoDeploy.groovy ---
[Pipeline] echo
--- BEGIN LIBRARY STEP: prepareDefaultValues.groovy ---
[Pipeline] libraryResource
[Pipeline] readYaml
[Pipeline] echo
--- END LIBRARY STEP: prepareDefaultValues.groovy ---
[Pipeline] fileExists
[Pipeline] fileExists
[Pipeline] echo
[neoDeploy] Neo executable "$JENKINS_HOME/userContent/neo-java-web-sdk-3.x/tools/neo.sh" retrieved from environment.
[Pipeline] withCredentials
[Pipeline] {
[Pipeline] echo
--- BEGIN LIBRARY STEP: dockerExecute.groovy ---
[Pipeline] echo
[INFO][dockerExecute] Running on local environment.
[Pipeline] sh
[workspace] Running shell script
+ which neo.sh
[Pipeline] echo
--- BEGIN LIBRARY STEP: toolValidate.groovy ---
[Pipeline] echo
----------------------------------------------------------
--- ERROR OCCURED IN LIBRARY STEP: toolValidate
----------------------------------------------------------

FOLLOWING PARAMETERS WERE AVAILABLE TO THIS STEP:
***
[tool:neo, home:$JENKINS_HOME/userContent/neo-java-web-sdk-3.x]
***

ERROR WAS:
***
hudson.AbortException: '/$JENKINS_HOME/userContent/neo-java-web-sdk-3.x' does not exist.
***

FURTHER INFORMATION:
* Documentation of library step toolValidate: https://sap.github.io/jenkins-library/steps/toolValidate/
* Source code of library step toolValidate: https://github.com/SAP/jenkins-library/blob/master/vars/toolValidate.groovy
* Library documentation: https://sap.github.io/jenkins-library/
* Library repository: https://github.com/SAP/jenkins-library

neoDeploy step fails

Hello,

Our Jenkins build started failing after the last commits in jenkins-library. The error that we received in our log:

/jenkinsdata/CLMServices_oqdiary_dev-RBASQOZST654VCJNH5CULGPKPPURYM7DLAOQW6VNHSU7RLS6PQNQ/workspace@tmp/durable-6dd99a2b/script.sh: line 2: neo.sh: command not found
[Pipeline] [Selenium] echo
[Selenium] ----------------------------------------------------------
[Selenium] --- ERROR OCCURED IN LIBRARY STEP: dockerExecute
[Selenium] ----------------------------------------------------------
[Selenium] 
[Selenium] FOLLOWING PARAMETERS WERE AVAILABLE TO THIS STEP:
[Selenium] ***
[Selenium] [dockerImage:s4sdk/docker-neo-cli, dockerEnvVars:null, dockerOptions:null]
[Selenium] ***
[Selenium] 
[Selenium] ERROR WAS:
[Selenium] ***
[Selenium] hudson.AbortException: script returned exit code 127

We use the neoDeploy step like this:

neoDeploy(
  script: this, 
  deployMode: 'warParams', 
  warAction: 'deploy', 
  host: hostName, 
  account: acceptanceAccountName, 
  neoCredentialsId: credentialsId, 
  archivePath: 'target/oqdiary.war', 
  applicationName: globalPipelineEnvironment.getConfigProperty('neoAppName'), 
  runtime: globalPipelineEnvironment.getConfigProperty('runtime'), 
  runtimeVersion: globalPipelineEnvironment.getConfigProperty('runtime-version'), 
  vmSize: globalPipelineEnvironment.getConfigProperty('vmSize')
)

It seems neo client cannot be found in the docker image. The last successful build was at revision 339226f of jenkins-library.

neoDeploy: put logs written by neoDeploy into the big log

neoDeploy writes log files, but their content is not written in parallel to stdout/stderr, so we do not see that log output. In case of a failure we only get a hint:

If you need help, provide the output of the command and attach the log directory [/var/log/neo]

When running in a Docker environment, the volume might not be available at a later point in time for troubleshooting.

In order to avoid losing the log, we should cat the corresponding log files into the job log, especially (maybe: only) in case of a deployment failure. By default the log is put into the folder where neo is installed. The logging location can be controlled by the environment variable neo_logging_location. It would be possible to set that property to a temporary directory; after the neo call that folder contains the log files written by that neo deploy run, and we can cat the files we find there into the job log.
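A sketch of that idea, assuming neo_logging_location is honored as described; the step structure and variable names are illustrative:

```groovy
// Redirect neo's log files to a temporary directory and dump them into the
// job log if the deployment fails.
def logDir = "${pwd(tmp: true)}/neo-logs"
withEnv(["neo_logging_location=${logDir}"]) {
    try {
        sh "${neoExecutable} deploy-mta ..."   // the actual neo call
    } catch (err) {
        sh "cat ${logDir}/* || true"           // surface neo's logs in the job log
        throw err
    }
}
```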

Proposal: Disable download progress in Maven log

Hello,

I have discovered the option to disable log lines like this in maven:

[INFO] Downloading from some-mirror: http://some-mirror:8081/repository/mvn-proxy/org/apache/httpcomponents/httpcomponents-core/4.0.1/httpcomponents-core-4.0.1.pom

By setting the responsible logger to warn as described here.

The flags I used are --batch-mode -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn, which work as intended.

I did this in our pipeline, and the resulting log file is half the size (was 1.4 MB before, is now about 700 KB). I think the benefit of a smaller, easier-to-read log is worth the potential information loss (which I don't really see as of now).

Since we use the artifactSetVersion step from this library, we still have the messages coming from this step.

My question is whether anyone thinks this should not be the default behaviour of mavenExecute. Since the log level is warn, errors will still be in the log, but all the clutter saying "I downloaded this jar file" would be gone.

What do you think?
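For illustration, the flags could be passed through like this (assuming mavenExecute forwards a flags parameter to the mvn command line):

```groovy
mavenExecute(
    script: this,
    goals: 'clean install',
    // suppress per-artifact download progress; warnings and errors still surface
    flags: '--batch-mode -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn'
)
```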

@CCFenner @OliverNocon @alejandraferreirovidal @marcusholl @o-liver @benhei @kurzy @rkamath3

Deprecate old configuration Framework

We have introduced a new configuration framework that will eventually replace the old one.

  • Log whenever the old configuration framework is used.
  • Introduce the new config framework to the library steps that are still using the old framework.

We also need to cover this with unit tests.

Stashing strategy

This is the follow-up of the discussion in #173 about the stashing topic and tries to provide a summary for further discussion.

Currently we see the following scenarios:

  • stashing is done in steps (current behavior with pipelineStashFiles, snykExecute, ...)
  • stashing is done in stages which are available from a library (current behavior as in SAP S/4HANA Cloud SDK)
  • stashing is done in the pipeline script itself with standard Jenkins Pipeline means

For the way forward one could think of the following scenarios, excluding the standard Jenkins Pipeline means, which will always be possible:

  1. Stashing in steps as well as in stages
  2. Stashing only in stages
  3. Stashing only in steps

1. Stashing in steps as well as in stages

Pros

  • This is handy and more convenient
  • Stash patterns are defined explicitly in the code (-> stashing patterns) and not implicit knowledge which needs to be documented
  • Stashing can also be provided from outside -> no hard assumptions
  • Steps can be used independently, i.e. directly running on a "filled" workspace
  • Stages can run steps, possibly with own assumptions

Cons

  • Steps less "lightweight"
  • Catching stashing exception is possible but Blue Ocean UI shows red mark nevertheless (without breaking the stage) which may confuse users
  • "Soft" dependencies between steps (-> pipelineStashFiles, snykExecute, ...)

2. Stashing only in stages

Pros

  • Steps more lightweight (-> no stashing logic)
  • Blue Ocean UI cannot show confusing red mark

Cons

  • Incompatible change (-> breaks behavior which is available and proven in the current library)
  • current library does not contain stages
  • using steps individually requires knowledge about stash details

3. Stashing only in steps (-> likely not relevant)

Pros

  • ???

Cons

  • Incompatible change (-> breaks behavior which is available and proven in SAP S/4HANA Cloud SDK)

MTA Build image

The MTA build JAR could be wrapped by a shell script and put into a Docker image. This would normalize the MTA build step.

align config mixin

correct order of config mixin: general, step, stage

  • ./vars/transportRequestCreate.groovy #250
  • ./vars/checkChangeInDevelopment.groovy #250
  • ./vars/transportRequestUploadFile.groovy #250
  • ./vars/snykExecute.groovy #250
  • ./vars/transportRequestRelease.groovy #250
  • ./vars/pipelineStashFilesAfterBuild.groovy #250
  • ./vars/pipelineStashFilesBeforeBuild.groovy #250
  • ./vars/newmanExecute.groovy #243
  • ./src/com/sap/piper/ConfigurationHelper.groovy
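The intended precedence can be sketched with plain map merging (variable names are illustrative); each later mixin overrides keys set by the earlier layers:

```groovy
// general < step < stage: later putAll calls win on conflicting keys
Map config = [:]
config.putAll(generalConfiguration ?: [:])
config.putAll(stepConfiguration ?: [:])
config.putAll(stageConfiguration ?: [:])
// explicit step parameters would be mixed in last, on top of all of the above
```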

guidelines: where to put properties in the new configuration framework

Currently the new config framework expects the property mtaJarLocation in the general section. IMO that property is scoped to the MTA build, and I do not expect any usage of that property in contexts outside mtaBuild.

Shouldn't this property be located in steps/mtaBuild rather than in general?

What reasons do we have for providing this in the general section?

Do we have some guidelines where to put properties?

For the mtaJarLocation see: line 18f in vars/mtaBuild.groovy @ commit ac6cc2a
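As a sketch, scoping the property to the step instead of the general section would look like this in a .pipeline/config.yml (the path value is illustrative):

```yaml
# instead of:
# general:
#   mtaJarLocation: '/opt/sap/mta.jar'
steps:
  mtaBuild:
    mtaJarLocation: '/opt/sap/mta.jar'
```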

Step: Standard Configuration Setup

Is there a standard configuration setup for steps which we could wrap by a method?

Most older steps use ConfigurationMerger.merge(), like mavenExecute. Newer steps use the ConfigurationHelper, but differently; e.g. newmanExecute doesn't use mixinGeneralConfig while others do.

Does it make sense to have something like

ConfigurationHelper.loadStep(script,generalConfigurationKeys,stepConfigurationKeys,parameterKeys)

Introduce API Package

In order to clarify what parts of the code are considered to be API for being re-used by pipelines we should introduce an API package (com.sap.piper.api). Classes/groovy-scripts contained in that package are considered to be stable as long as the major version of the piper project does not change.

The question now is: which classes/groovy-scripts should we declare as API?

Below all the currently existing classes/groovy-scripts are listed. These classes can be subsumed in 4 different categories.

1.) ConfigurationFramework
2.) Utils
3.) ProjectVersioning
4.) The Version class.

What should we consider to be API?

Ad 1.) Having the configuration framework as an API, it would be possible to use configuration values directly inside a pipeline. Is this a realistic/desired use case?
Ad 2.) Are there some parts of the util classes which need to be used in the pipeline directly?
Ad 3.) I guess this is clearly no API.
Ad 4.) Used internally inside the validation at the moment, hence for now no API, I suppose.

The currently known classes/groovy-scripts:

com.sap.piper.ConfigurationHelper.groovy (1)
com.sap.piper.ConfigurationLoader.groovy (1)
com.sap.piper.ConfigurationMerger.groovy (1)
com.sap.piper.ConfigurationType.groovy (1)
com.sap.piper.DefaultValueCache.groovy (1)
com.sap.piper.FileUtils.groovy (2)
com.sap.piper.GitUtils.groovy (2)
com.sap.piper.JsonUtils.groovy (2)
com.sap.piper.MapUtils.groovy (2)
com.sap.piper.Utils.groovy (2)
com.sap.piper.Version.groovy (4)


src/com/sap/piper/versioning:
com.sap.piper.versioning.ArtifactVersioning.groovy (3)
com.sap.piper.versioning.DockerArtifactVersioning.groovy (3)
com.sap.piper.versioning.MavenArtifactVersioning.groovy (3)

Running the Pipeline Without Proxy

When I'm running in a non-proxy environment I fail at the deploy stage with the following error:

SAP Cloud Platform Console Client

Executing 'deploy-mta' with the following parameters
   source     : /var/jenkins_home/workspace/x/src/app.mtar
   synchronous: true
   subaccount : p...trial
   host       : https://hanatrial.ondemand.com
   SDK version: 3.39.10
   user       : ****

(!) ERROR; The proxy port could not be determined, ensure that the proxy port property is a number, current value is 'http:'.
If you need help, provide the output of the command and attach the log directory [/opt/neo/tools/log]

Interestingly, the value is 'http:'; I have no proxy settings in either Jenkins or the hosting environment.

My proxy settings in the .pipeline/config.properties file:

# HTTP proxy to be used in the build used for http calls (optional)
#proxy=http://proxy:8080

# HTTPS proxy to be used in the build used for https calls (optional)
#httpsProxy=http://proxy:8080

Any idea what could be causing this?

mtaBuild needs mta.yaml not mta.yml

The mta.jar fails the build if you provide an mta.yml instead of an mta.yaml.
However, multiple lines in the mtaBuild step use the phrasing yml, as does the template resource template_mta.yml.
We discovered this when using template_mta.yml manually.

How to apply structuring to configuration

We have use cases where several properties denote one entity, e.g. for interacting with a backend.

In this case we have the following options for modelling these properties.

  • use flat properties containing a namespace in the key.
  myBackendEndpointURL: 'https://example.org/myBackend'
  myBackendCredentialsID: 'theCreds'
  myBackendOpts: '-Djavax.net.ssl.trustStore:/my/trust/store'
  • use nesting for structuring the properties
  myBackend:
    endpointUrl: 'https://example.org/myBackend'
    credentialsId: 'theCreds'
    opts: '-Djavax.net.ssl.trustStore:/my/trust/store'    
  • <space for more options>

We should collect some pros and cons in order to make a decision which approach to follow.

Avoid cloning the full repo with all its history

For big Git repos we have issues on the repository side. For executing a build it is not necessary to clone the full repo; one branch and a bunch of commits is sufficient in almost all cases.

--> We should implement something so that the full repo is not cloned.
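In a plain Jenkinsfile this can already be approximated with the Git plugin's CloneOption extension (URL, branch, and depth here are examples):

```groovy
checkout([
    $class: 'GitSCM',
    branches: [[name: '*/master']],
    // shallow clone: fetch only the last 50 commits of one branch, no tags
    extensions: [[$class: 'CloneOption', shallow: true, depth: 50, noTags: true]],
    userRemoteConfigs: [[url: 'https://github.com/SAP/jenkins-library.git']]
])
```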

Again: piper-lib API

This is basically a resurrection of #115.

As the result of that issue we decided that the only API of the piper-lib are the steps. Hence everything inside the folder src is not an API, and all content in the src folder can be changed without prior notice and without deprecating something before changing it. If I change something which is not a step, it is - without having APIs - sufficient to grep for the usage of e.g. the method I would like to change, find a way to adapt the callers, and that's it.

Now we know that the configuration framework is used from outside. See here.

We now have the situation that we have code we must not change in order not to break our colleagues' use cases. Of course we can say: 'Sorry, you used something which is not an API, so it was your fault'. But in fact that is not a fully suitable approach, since it actually makes sense to re-use such common parts like configuration.

IMO it is not a good approach to keep the knowledge about what is re-used and what is not only in our minds. This does not scale when new colleagues enter the project, and it is error-prone anyhow, since it depends purely on the human factor.

Hence we should find a way to make transparent what is - with good reasons - re-used. In other words, we should think about how we declare an API.

There are - as we all know - several ways. E.g.:

  • We keep the current strategy: only the steps from var are API, for everything else we do not care.
  • Write down a comment for each class/method which is intended to be re-used, saying that this method is an API. (Not that elegant IMO, but technically possible)
  • Use an @API annotation for the class/method
  • Define an api package, e.g. com.sap.piper.api
  • [...]

Since we have in fact reasonable re-use, we should think about how to deal with it in a transparent way beyond sharing the API only in our minds.

In case we agree on defining an API, we should also think about a communication channel for announcing API changes. As the world keeps on moving there will be the need to deprecate some parts of the API over time; here we need something in order to be able to communicate that. But that is another issue.

I would like to point out that the reason for opening this issue is not to be somehow proud of having an API. Having APIs is not an end in itself. The reason why I re-trigger the API discussion is: make life easier for the involved developers.

newmanExecute doesn't work, executeNewmanTests does

Hello Colleagues,

I'm trying to use newmanExecute with the following call:

newmanExecute script: this,
newmanCollection: 'src/test/resources/BasicTest.postman_collection.json'

and I'm getting the following error:

[Pipeline] echo
--- BEGIN LIBRARY STEP: newmanExecute.groovy ---
[Pipeline] echo
--- BEGIN LIBRARY STEP: prepareDefaultValues.groovy ---
[Pipeline] echo
--- END LIBRARY STEP: prepareDefaultValues.groovy ---
[Pipeline] sh
[workspace] Running shell script
[Pipeline] echo
Unstash content: tests
[Pipeline] unstash
[Pipeline] echo
Unstash failed: tests (No such saved stash ‘tests’)
[Pipeline] findFiles
[Pipeline] echo
[newmanExecute] Found files [src/test/resources/BasicTest.postman_collection.json]
[Pipeline] echo
--- BEGIN LIBRARY STEP: dockerExecute.groovy ---
[Pipeline] echo
--- BEGIN LIBRARY STEP: prepareDefaultValues.groovy ---
[Pipeline] echo
--- END LIBRARY STEP: prepareDefaultValues.groovy ---
[Pipeline] sh
[workspace] Running shell script
+ which docker
[Pipeline] sh
[workspace] Running shell script
+ docker ps -q
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[Pipeline] echo
[WARNING][dockerExecute] Cannot connect to docker daemon (command 'docker ps' did not return with '0'). Configured docker image 'node:8-stretch' will not be used.
[Pipeline] echo
[INFO][dockerExecute] Running on local environment.
[Pipeline] sh
[workspace] Running shell script
+ npm install newman --global --quiet
/jenkinsdata/IntegrationTestsPOC/workspace@tmp/durable-292d3aaa/script.sh: 2: /jenkinsdata/IntegrationTestsPOC/workspace@tmp/durable-292d3aaa/script.sh: npm: not found
[Pipeline] echo
----------------------------------------------------------
--- ERROR OCCURED IN LIBRARY STEP: dockerExecute
----------------------------------------------------------

FOLLOWING PARAMETERS WERE AVAILABLE TO THIS STEP:
***
[dockerImage:node:8-stretch, stashContent:[]]
***

ERROR WAS:
***
hudson.AbortException: script returned exit code 127
***

FURTHER INFORMATION:
* Documentation of library step dockerExecute: https://sap.github.io/jenkins-library/steps/dockerExecute/
* Source code of library step dockerExecute: https://github.com/SAP/jenkins-library/blob/master/vars/dockerExecute.groovy
* Library documentation: https://sap.github.io/jenkins-library/
* Library repository: https://github.com/SAP/jenkins-library

----------------------------------------------------------
[Pipeline] echo
--- END LIBRARY STEP: dockerExecute.groovy ---
[Pipeline] echo
----------------------------------------------------------
--- ERROR OCCURED IN LIBRARY STEP: newmanExecute
----------------------------------------------------------

FOLLOWING PARAMETERS WERE AVAILABLE TO THIS STEP:
***
[script:WorkflowScript@4dfb5398, newmanCollection:src/test/resources/BasicTest.postman_collection.json]
***

ERROR WAS:
***
hudson.AbortException: script returned exit code 127
***

FURTHER INFORMATION:
* Documentation of library step newmanExecute: https://sap.github.io/jenkins-library/steps/newmanExecute/
* Source code of library step newmanExecute: https://github.com/SAP/jenkins-library/blob/master/vars/newmanExecute.groovy
* Library documentation: https://sap.github.io/jenkins-library/
* Library repository: https://github.com/SAP/jenkins-library

----------------------------------------------------------
[Pipeline] echo
--- END LIBRARY STEP: newmanExecute.groovy ---
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE

Could you please let me know the right syntax?

Btw the executeNewmanTests call is working fine...

Regards,
Milko

Cannot use neoDeploy with warPropertiesFile

Dear Colleagues,

After version 0.2, neoDeploy doesn't work with 'warPropertiesFile' as the 'deployMode' parameter.
The error is a bit strange:

----------------------------------------------------------
[Deployment: z9.canary.master.properties] --- ERROR OCCURED IN LIBRARY STEP: neoDeploy
[Deployment: z9.canary.master.properties] ----------------------------------------------------------
[Deployment: z9.canary.master.properties] 
[Deployment: z9.canary.master.properties] FOLLOWING PARAMETERS WERE AVAILABLE TO THIS STEP:
[Deployment: z9.canary.master.properties] ***
[Deployment: z9.canary.master.properties] [deployMode:warPropertiesFile, propertiesFile:/jenkinsdata/in.credit.adapter.z9_master-3MCTMHKL6AUAT3H4KLYPDOWEV4TLGJUFGLR3FVZZSCXYZGFTPOGQ/workspace/z9.canary.master.properties, warAction:rolling-update, archivePath:/jenkinsdata/in.credit.adapter.z9_master-3MCTMHKL6AUAT3H4KLYPDOWEV4TLGJUFGLR3FVZZSCXYZGFTPOGQ/workspace/target/z9.war, neoCredentialsId:P1942084552]
[Deployment: z9.canary.master.properties] ***
[Deployment: z9.canary.master.properties] 
[Deployment: z9.canary.master.properties] ERROR WAS:
[Deployment: z9.canary.master.properties] ***
[Deployment: z9.canary.master.properties] java.lang.Exception: [neoDeploy] Invalid deployMode = 'warPropertiesFile'. Valid 'deployMode' values are: [mta, warParams, warPropertiesFile].
[Deployment: z9.canary.master.properties] ***
[Deployment: z9.canary.master.properties] 

I replaced line 107, if (! (deployMode in deployModes)) {, with if (deployMode != 'mta' && deployMode != 'warParams' && deployMode != 'warPropertiesFile') {, which is semantically the same. Then we get a step further, but still an error:

[Pipeline] [Deployment: z9.canary.master.properties] sh
[Deployment: z9.canary.master.properties] [workspace] Running shell script
[Deployment: z9.canary.master.properties] 
[Deployment: z9.canary.master.properties] SAP Cloud Platform Console Client
[Deployment: z9.canary.master.properties] 
[Deployment: z9.canary.master.properties] 
[Deployment: z9.canary.master.properties] Expected a command, got null
[Deployment: z9.canary.master.properties] 
[Deployment: z9.canary.master.properties] 
[Deployment: z9.canary.master.properties] Command [null] not found
[Deployment: z9.canary.master.properties] 
[Deployment: z9.canary.master.properties] Did you mean one of these?
[Deployment: z9.canary.master.properties] 	help
[Deployment: z9.canary.master.properties] 	system:help
[Deployment: z9.canary.master.properties] 
[Pipeline] [Deployment: z9.canary.master.properties] echo
[Deployment: z9.canary.master.properties] ----------------------------------------------------------
[Deployment: z9.canary.master.properties] --- ERROR OCCURED IN LIBRARY STEP: dockerExecute
[Deployment: z9.canary.master.properties] ----------------------------------------------------------
[Deployment: z9.canary.master.properties] 
[Deployment: z9.canary.master.properties] FOLLOWING PARAMETERS WERE AVAILABLE TO THIS STEP:
[Deployment: z9.canary.master.properties] ***
[Deployment: z9.canary.master.properties] [dockerImage:s4sdk/docker-neo-cli, dockerEnvVars:null, dockerOptions:null]
[Deployment: z9.canary.master.properties] ***
[Deployment: z9.canary.master.properties] 
[Deployment: z9.canary.master.properties] ERROR WAS:
[Deployment: z9.canary.master.properties] ***
[Deployment: z9.canary.master.properties] hudson.AbortException: script returned exit code 2
[Deployment: z9.canary.master.properties] ***
[Deployment: z9.canary.master.properties] 
[Deployment: z9.canary.master.properties] FURTHER INFORMATION:
[Deployment: z9.canary.master.properties] * Documentation of library step dockerExecute: https://sap.github.io/jenkins-library/steps/dockerExecute/
[Deployment: z9.canary.master.properties] * Source code of library step dockerExecute: https://github.com/SAP/jenkins-library/blob/master/vars/dockerExecute.groovy
[Deployment: z9.canary.master.properties] * Library documentation: https://sap.github.io/jenkins-library/
[Deployment: z9.canary.master.properties] * Library repository: https://github.com/SAP/jenkins-library

https://jaas.wdf.sap.corp:30676/view/Java%20Apps/job/com.sap.bsuite.fin.credit.adapter.z9/job/master/112/console

I would appreciate some help.

Regards,
Milko

Build site as part of the PR build

In order to avoid 404 results on our site it would be good to build (just build, not deploy) the site as part of the PR build.
mkdocs issues warnings when pointing to a non-existing page; unfortunately, mkdocs still returns 0 in this case.

This requires:

  • mkdocs available during the travis build.
  • something like mkdocs build 2>&1 > /dev/null |grep WARN && <failTheBuild>

Take executable path into account when performing tool validation

Currently toolValidate works upon various HOME variables, e.g. NEO_HOME, JAVA_HOME. With the tools simply on the path, we cannot check the version.

Validation should also be able to handle cases where the tool is on the path.

We might also have cases where we have a HOME variable and, at the same time, the corresponding tool on the path pointing to somewhere else. What should happen in this case depends on the later invocation of the tool itself: if the tool is invoked like ${HOME}/bin/tool.sh, validation should happen against HOME; if the tool is invoked simply as tool.sh via the path, that is what should be validated.

... not that easy, this story ...

quote username and password in neoDeploy

Inside neoDeploy we have the following shell call:

            sh """#!/bin/bash
                    ${neoExecutable} deploy-mta \
                      --user ${username} \
                      --host ${deployHost} \
                      --source "${archivePath.getAbsolutePath()}" \
                      --account ${deployAccount} \
                      --password ${password} \
                      --synchronous
               """

At least user and password need to be put into single quotes. In order to be able to handle single quotes inside e.g. the password, we additionally have to escape the single quote, I think. There is an escape util for that currently in PR #18.
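A sketch of the quoting, using a hypothetical helper (the escape util from PR #18 may look different):

```groovy
// Wrap a value in single quotes for the shell; an embedded single quote is
// escaped as '\'' (close quote, escaped quote, reopen quote).
def shellQuote(String value) {
    return "'" + value.replace("'", "'\\''") + "'"
}

sh """#!/bin/bash
        ${neoExecutable} deploy-mta \
          --user ${shellQuote(username)} \
          --host ${deployHost} \
          --source "${archivePath.getAbsolutePath()}" \
          --account ${deployAccount} \
          --password ${shellQuote(password)} \
          --synchronous
   """
```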

Required Neo SDK Version

I'm using the latest Java Web Tomcat 7 SDK, version 2.88.17 from here which should be ok, but toolValidate fails, because it requires 3.39. Since with SAPUI5 we only utilise the neo.sh deployment functionality, the flavour of the SDK should not matter. Should toolValidate be adjusted? I would still keep the check, because deploy-mta is not available in earlier versions.

----------------------------------------------------------
--- ERROR OCCURED IN LIBRARY STEP: toolValidate
----------------------------------------------------------

FOLLOWING PARAMETERS WERE AVAILABLE TO THIS STEP:
***
[tool:neo, home:/opt/neo/]
***

ERROR WAS:
***
hudson.AbortException: The installed version of SAP Cloud Platform Console Client is 2.88.17. Please install version 3.39.10 or a compatible version.
***

Class cast issue when retrieving step related default config

We get a class cast issue when attempting to get the step default configuration; e.g. a call to mavenExecute results in:

--- END LIBRARY STEP: mavenExecute.groovy ---
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
hudson.remoting.ProxyException: org.codehaus.groovy.runtime.typehandling.GroovyCastException: Cannot cast object 'com.sap.piper.DefaultValueCache@2be8b3e8' with class 'com.sap.piper.DefaultValueCache' to class 'java.util.Map'
	at org.codehaus.groovy.runtime.typehandling.DefaultTypeTransformation.continueCastOnSAM(DefaultTypeTransformation.java:405)
	at org.codehaus.groovy.runtime.typehandling.DefaultTypeTransformation.continueCastOnNumber(DefaultTypeTransformation.java:319)
	at org.codehaus.groovy.runtime.typehandling.DefaultTypeTransformation.castToType(DefaultTypeTransformation.java:232)
	at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.castToType(ScriptBytecodeAdapter.java:603)
	at Unknown.Unknown(Unknown)
	at handlePipelineStepErrors.call(/var/jenkins_home/jobs/maven/builds/1/libs/piper-library-os/vars/handlePipelineStepErrors.groovy:16)
	at mavenExecute.call(/var/jenkins_home/jobs/maven/builds/1/libs/piper-library-os/vars/mavenExecute.groovy:6)
	at WorkflowScript.run(WorkflowScript:6)

The issue is related to

final Map stepDefaults = ConfigurationLoader.defaultStepConfiguration(script, 'mavenExecute')

where we expect a Map to be returned from defaultStepConfiguration(./.) but in fact get an instance of DefaultValueCache, which cannot be cast to a Map.
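A possible shape of the fix, assuming DefaultValueCache keeps its values in an internal map (the accessor getDefaultValues() is an assumption about the class, not its confirmed API):

```groovy
// Sketch: return the cache's backing Map instead of the cache object itself,
// so that callers like `final Map stepDefaults = ...` no longer trigger a
// GroovyCastException.
static Map defaultStepConfiguration(def script, String stepName) {
    Map defaults = DefaultValueCache.getInstance().getDefaultValues() // accessor name assumed
    return defaults?.steps?.get(stepName) ?: [:]
}
```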

Steps to reproduce the issue:

1.) Build docker image https://github.com/SAP/cloud-s4-sdk-pipeline-docker.git, branch master, path s4sdk-jenkins-master at commit 3166b6a2afa9f5476bc9249abcfeb197fcc81461 (or use corresponding image from a registry)
2.) Launch the docker image docker run -p 8089:8080 <id>
3.) Create a new pipeline job with that pipeline code

@Library('piper-library-os') _


node() {
    stage('1') {
        mavenExecute script: this
    }
}

4.) Execute the job

  • Expected result: it fails on mvn level, since there is no maven project
  • Actual result: it fails with the GroovyCastException as described above.

Credential ID naming convention

I propose to define a convention for how to name

  • the key of the internal configuration and
  • the value / ID of the Jenkins credentials.

Since different configuration maps are merged and multiple credentials could be used, the name of the key should reflect the realm of the credentials:
<realmId>CredentialsId
like neoCredentialsId or snykCredentialsId. We should not use unspecific names like credentialsId as seen in some steps.

For harmonization of the IDs, I'd prefer to apply a similar naming schema to the Jenkins credentials, like
<REALM>_CREDENTIALS_ID
as already seen, or just
<REALM>
in capital letters, like NEO_CREDENTIALS_ID or NEO. The ID should reflect the realm of the credentials only. Ambiguous names like CI_CREDENTIALS_ID should be avoided.
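To illustrate the proposed convention in a step (hypothetical code; withCredentials and usernamePassword are the standard Jenkins credentials-binding steps):

```groovy
// Configuration key and Jenkins credentials ID both name the realm "neo".
withCredentials([usernamePassword(
        credentialsId: configuration.neoCredentialsId ?: 'NEO_CREDENTIALS_ID',
        usernameVariable: 'NEO_USER',
        passwordVariable: 'NEO_PASSWORD')]) {
    // ... invoke neo.sh with NEO_USER / NEO_PASSWORD ...
}
```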

Issue during library checkout in Jenkins

When loading the library in Jenkins, I get this message from time to time and then have to wait for a long time.

12:40:32 GitHub API Usage: Current quota has 40 remaining (9 over budget). Next quota of 60 in 53 min. Sleeping for 17 min.
12:43:32 GitHub API Usage: Still sleeping, now only 14 min remaining.
12:46:32 GitHub API Usage: Still sleeping, now only 11 min remaining.

Any idea what the reason is and how to avoid it?

Configuration Documentation

We currently describe configuration in each step's documentation. As our configuration gets more complex (various sources, step-specific settings, etc.), I think it would be very beneficial to describe it in a separate documentation chapter.

Making mta jar file name configurable shows non-backward-compatible behaviour

Having a project with an old configuration for the mta jar

steps:
  mtaBuild:
        mtaJarLocation: /opt/sap/mta

fails now that the mta.jar file name is configurable (9d7e801).

Should we stay backward compatible here? Builds of customers who configured the mta jar in the old way now fail.

What about checking if the configured value is a folder containing an mta.jar file and acting accordingly until our next incompatible version is released?
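Such a compatibility shim could look like this (a sketch; how and where the check is hooked into mtaBuild is left open):

```groovy
// Accept both the old style (mtaJarLocation points to a folder containing
// mta.jar) and the new style (it points to the jar file itself).
String resolveMtaJar(String mtaJarLocation) {
    if (fileExists("${mtaJarLocation}/mta.jar")) {
        // old configuration style: treat the value as a folder
        return "${mtaJarLocation}/mta.jar"
    }
    return mtaJarLocation
}
```

Using fileExists (a standard pipeline step) instead of java.io.File keeps the check on the node where the build actually runs.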

Escape Characters in Bash Command Parameters

As described in #21 certain parameters in bash commands, e.g. paths or passwords, can contain spaces or other special characters. These have to be escaped.

There are two solutions discussed in #18:

  1. A BashEscapeUtil adds surrounding single quotes and escapes embedded single quotes.

String escape(String str) {
    return "'${str.replace("'", "'\\''")}'"
}

  2. A BashEscapeUtil escapes all special characters and spaces.

Should we introduce a command line builder

Inside piper-lib we frequently build command lines. Would it make sense to unify the creation of command lines with some kind of command line builder?

Issues addressed with a command line builder:

  • quoting of strings (none, single, double)
  • making keys/values etc. transparent (having dedicated objects outlining key/value pairs instead of plain strings)

This correlates with (and partly conflicts with) an approach using groovy.text.SimpleTemplateEngine.
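A minimal sketch of what such a builder could look like (class and method names are illustrative, not an existing piper-lib API):

```groovy
// Illustrative command line builder; quoting is applied per value so that
// spaces and special characters survive the shell.
class CommandLineBuilder {
    private List<String> parts = []

    CommandLineBuilder tool(String name)  { parts << name; return this }
    CommandLineBuilder flag(String f)     { parts << f; return this }
    CommandLineBuilder option(String key, String value) {
        parts << key << "'${value.replace("'", "'\\''")}'"
        return this
    }
    String build() { return parts.join(' ') }
}

def cmd = new CommandLineBuilder()
    .tool('neo.sh')
    .flag('deploy')
    .option('--host', 'hana ondemand com')
    .build()
// cmd == "neo.sh deploy --host 'hana ondemand com'"
```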

[neo] deploy war file

Currently neoDeploy is able to deploy only mtar files; it would be good to also support war file upload.

Explicitly type returns of `ConfigurationLoader` and `ConfigurationMerger`?

I think it would be an improvement if the methods in ConfigurationLoader and ConfigurationMerger explicitly declared that they return a Map. This would improve the hints the IDE can give; see the message in the attached screenshot.

(screenshot not shown)

Is there any reason why this should not be done? I know def is a nice feature of Groovy, but I think in this case it does more harm than good.
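Illustratively, the change would look like this (method and class names from the issue; the parameter list is an assumption):

```groovy
// Before: `def` hides the return type from the IDE
// def defaultStepConfiguration(script, stepName) { ... }

// After: the explicit Map return type enables completion and type hints
static Map defaultStepConfiguration(def script, String stepName) {
    // ...
}
```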

protect master branch

I would like to have the master branch protected (also for admins) to avoid unwanted edits to the files on master.

Application is not started after deployment

Hello colleagues,

it would be great if you could start the application after a successful deployment. When using the deployment functionality in a CI/CD pipeline, we currently need to start the application manually afterwards with 'neo start'.

Thanks and regards,
Irina

Strategy: how to deal with environment validation

IMO it is in general very valuable to validate the environment right at the start of a build. That way we can avoid running a build, e.g. for half an hour, only to fail due to some missing constraint that could have been checked earlier.

But we can also see that the current approach causes trouble: in case some steps are executed later inside a docker image, it does not make sense to perform the check in an environment other than that docker image. The same applies when working with different Jenkins nodes which get spread across different Jenkins slaves. In this case the validation must be performed against the node/slave that is used later when a certain tool is invoked.

We should discuss how to deal with environment validation in the future. The point is: in general, early environment validation is a benefit (fail fast); on the other hand, it is hard to do the validation right, since we have to deal with docker images and with parts of the pipeline being executed on different nodes.
