sap / jenkins-library
Jenkins shared library for Continuous Delivery pipelines.
Home Page: https://www.project-piper.io
License: Apache License 2.0
In order to clarify which parts of the code are considered API for re-use by pipelines, we should introduce an API package (com.sap.piper.api). Classes/groovy-scripts contained in that package are considered stable as long as the major version of the Piper project does not change.
The question now is: which classes/groovy-scripts should we declare as API?
Below, all currently existing classes/groovy-scripts are listed. These classes can be grouped into four different categories.
1.) ConfigurationFramework
2.) Utils
3.) ProjectVersioning
4.) The Version class.
What should we consider to be API?
Ad 1.) With the configuration framework as an API, it would be possible to use configuration values directly inside a pipeline. Is this a realistic/desired use case?
Ad 2.) Are there parts of the util classes which need to be used in the pipeline directly?
Ad 3.) I guess this is clearly no API.
Ad 4.) Used internally inside the validation at the moment, hence for now no API, I suppose.
The currently known classes/groovy-scripts:
com.sap.piper.ConfigurationHelper.groovy (1)
com.sap.piper.ConfigurationLoader.groovy (1)
com.sap.piper.ConfigurationMerger.groovy (1)
com.sap.piper.ConfigurationType.groovy (1)
com.sap.piper.DefaultValueCache.groovy (1)
com.sap.piper.FileUtils.groovy (2)
com.sap.piper.GitUtils.groovy (2)
com.sap.piper.JsonUtils.groovy (2)
com.sap.piper.MapUtils.groovy (2)
com.sap.piper.Utils.groovy (2)
com.sap.piper.Version.groovy (4)
src/com/sap/piper/versioning:
com.sap.piper.versioning.ArtifactVersioning.groovy (3)
com.sap.piper.versioning.DockerArtifactVersioning.groovy (3)
com.sap.piper.versioning.MavenArtifactVersioning.groovy (3)
correct order of config mixin: general, step, stage
I would like to have the master branch protected (also for admins) to avoid unwanted edits to files on master.
Inside neoDeploy we have the following shell call:

```groovy
sh """#!/bin/bash
${neoExecutable} deploy-mta \
--user ${username} \
--host ${deployHost} \
--source "${archivePath.getAbsolutePath()}" \
--account ${deployAccount} \
--password ${password} \
--synchronous
"""
```
At least user and password need to be put into single quotes. To handle single quotes inside e.g. the password, we additionally have to escape them, I think. There is an escape util for that currently in PR #18.
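A minimal sketch of such quoting, assuming we wrap the value in single quotes and escape embedded single quotes with the usual bash `'\''` idiom (the helper name is made up; the actual util lives in PR #18):

```groovy
// Hypothetical helper: wrap a value in single quotes for bash and
// escape embedded single quotes via the '\'' idiom.
String quoteForBash(String value) {
    return "'" + value.replace("'", "'\\''") + "'"
}

assert quoteForBash("secret") == "'secret'"
assert quoteForBash("pa'ss") == "'pa'\\''ss'"
```

The quoted values could then be interpolated into the sh call above instead of the raw ${username} and ${password}.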
This is basically a resurrection of #115.
As the result of that issue we decided that the only API of the piper-lib are the steps. Hence everything inside the src folder is not API, and all content in the src folder can be changed without prior notice and without deprecating anything first. If I change something which is not a step, it is, without declared APIs, sufficient to grep for the usage of e.g. the method I would like to change, find a way to adapt the callers, and that's it.
Now we know that the configuration framework is used from outside. See here.
We now have the situation that there is code we must not change in order not to break colleagues' use cases. Of course we can say: 'Sorry, you used something which is not an API, so it was your fault.' But that is not a fully suitable approach, since it does in fact make sense to re-use common parts like the configuration.
IMO it is not a good approach to keep the knowledge about what is re-used and what is not only in our minds. This does not scale when new colleagues enter the project, and it is error-prone anyway since it depends purely on the human factor.
Hence we should find a way to make transparent what is - with good reasons - re-used. In other words, we should think about how we declare an API.
There are - as we all know - several ways. E.g.:
1.) Only the steps in vars are API; for everything else we do not care.
2.) An @API annotation for the class/method.
3.) A dedicated com.sap.piper.api package.
Since we have in fact reasonable re-use we should think about how to deal with that in a transparent way beyond sharing the API only in our minds.
In case we agree on defining an API, we should also think about a communication channel for announcing API changes. As the world keeps on moving, there will be the need to deprecate some parts of the API over time. Here we need something in order to be able to communicate. But that is another issue.
I would like to point out that the reason for opening this issue is not to be somehow proud of having an API. Having APIs is not an end in itself. The reason why I re-trigger the API discussion is to make life easier for the involved developers.
Currently https://sap.github.io/jenkins-library/configuration/ mixes two topics:
We should clean this up in order to make it easier to understand.
In order to avoid 404 results on our site it would be good to build (just build, not deploy) the site as part of the PR build.
mkdocs issues warnings when a link points to a non-existing page; unfortunately, mkdocs still returns 0 in this case.
This requires something like:

```shell
mkdocs build 2>&1 >/dev/null | grep WARN && <failTheBuild>
```
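A sketch of that check, with a stand-in function instead of a real mkdocs build so the shell logic is visible (mkdocs logs warnings to stderr, hence the `2>&1 >/dev/null` routing); recent mkdocs versions may also offer a `--strict` flag that turns warnings into a non-zero exit code, which could simplify this:

```shell
# Stand-in for 'mkdocs build': emits a WARNING on stderr and exits 0,
# mimicking the behaviour described above.
fake_mkdocs_build() {
    echo "WARNING - page 'missing.md' not found" >&2
}

# Route stderr into grep, discard regular output, and flag the build
# as failed when a warning was seen.
fail=0
if fake_mkdocs_build 2>&1 >/dev/null | grep -q WARN; then
    echo "documentation build produced warnings"
    fail=1
fi
```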
I think it would be an improvement if the methods in ConfigurationLoader and ConfigurationMerger explicitly declared that they return a Map. This would improve the hints the IDE can give, see the message in the attached screenshot.
Is there any reason why this should not be done? I know def is a nice feature of Groovy, but I think in this case it does more harm than good.
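A minimal illustration of the proposed change (the method body is made up; only the class names come from the text above):

```groovy
// With 'def' the IDE cannot know the return type:
//     def stepConfiguration(...) { ... }
// With an explicit type it can offer Map-specific hints:
Map stepConfiguration() {
    // illustrative only; the real methods merge several config layers
    return [dockerImage: 'maven:3.5-jdk-8']
}

Map config = stepConfiguration()
assert config.dockerImage == 'maven:3.5-jdk-8'
```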
All our jobs are failing, as our Jenkinsfile uses commonPipelineEnvironment.setConfigProperties, which appears to have been deleted in the last change. Please provide a Jenkinsfile update or revert the change.
Hello,
Our Jenkins build started failing after the last commits in jenkins-library. The error that we received in our log:
/jenkinsdata/CLMServices_oqdiary_dev-RBASQOZST654VCJNH5CULGPKPPURYM7DLAOQW6VNHSU7RLS6PQNQ/workspace@tmp/durable-6dd99a2b/script.sh: line 2: neo.sh: command not found
[Pipeline] [Selenium] echo
[Selenium] ----------------------------------------------------------
[Selenium] --- ERROR OCCURED IN LIBRARY STEP: dockerExecute
[Selenium] ----------------------------------------------------------
[Selenium]
[Selenium] FOLLOWING PARAMETERS WERE AVAILABLE TO THIS STEP:
[Selenium] ***
[Selenium] [dockerImage:s4sdk/docker-neo-cli, dockerEnvVars:null, dockerOptions:null]
[Selenium] ***
[Selenium]
[Selenium] ERROR WAS:
[Selenium] ***
[Selenium] hudson.AbortException: script returned exit code 127
The way we use the neoDeploy step is:

```groovy
neoDeploy(
    script: this,
    deployMode: 'warParams',
    warAction: 'deploy',
    host: hostName,
    account: acceptanceAccountName,
    neoCredentialsId: credentialsId,
    archivePath: 'target/oqdiary.war',
    applicationName: globalPipelineEnvironment.getConfigProperty('neoAppName'),
    runtime: globalPipelineEnvironment.getConfigProperty('runtime'),
    runtimeVersion: globalPipelineEnvironment.getConfigProperty('runtime-version'),
    vmSize: globalPipelineEnvironment.getConfigProperty('vmSize')
)
```
It seems the neo client cannot be found in the Docker image. The last successful build was at revision 339226f of jenkins-library.
Inside piper-lib we frequently create command lines. Would it make sense to unify the creation of command lines with some kind of command line builder?
Issues addressed with a command line builder:
This correlates/conflicts with an approach using the groovy.text.SimpleTemplateEngine.
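A rough sketch of what such a builder could look like (all names are hypothetical; the quoting uses the escaping idea from the issues above):

```groovy
// Hypothetical command line builder: collects executable and options,
// single-quotes values so spaces and special characters survive bash.
class CommandLineBuilder {
    private final List<String> parts = []

    CommandLineBuilder(String executable) {
        parts << executable
    }

    CommandLineBuilder option(String name, String value) {
        // escape embedded single quotes with the '\'' idiom
        parts << name << ("'" + value.replace("'", "'\\''") + "'")
        return this
    }

    String build() {
        return parts.join(' ')
    }
}

def cmd = new CommandLineBuilder('neo.sh')
    .option('--user', 'john doe')
    .build()
assert cmd == "neo.sh --user 'john doe'"
```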
As I can provide my mtaJarLocation, I would also like to provide my mta jar file name, instead of it being hard-coded like this:
https://github.com/SAP/jenkins-library/blob/master/vars/mtaBuild.groovy#L73
The mta.jar fails the build if you provide an mta.yml instead of an mta.yaml.
However, multiple lines in the mtaBuild step use 'yml' in their phrasing, and the template resource is named template_mta.yml. We discovered this when using template_mta.yml manually.
Hello,
I have discovered the option to disable log lines like this in maven:
[INFO] Downloading from some-mirror: http://some-mirror:8081/repository/mvn-proxy/org/apache/httpcomponents/httpcomponents-core/4.0.1/httpcomponents-core-4.0.1.pom
By setting the responsible logger to warn, as described here.
The flags I used are --batch-mode -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn, which works as intended.
I did this in our pipeline, and the resulting log file is half the size (was 1.4 MB before, is now about 700 KB). I think the benefit of a smaller, easier to read log is worth the potential information loss (which I don't really see as of now).
Since we use the artifactSetVersion step from this library, we still have the messages coming from that step.
My question is whether anyone thinks this should not be the default behaviour of mavenExecute. Since the log level is warn, errors will still be in the log, but all the clutter saying "I downloaded this jar file" would be gone.
What do you think?
@CCFenner @OliverNocon @alejandraferreirovidal @marcusholl @o-liver @benhei @kurzy @rkamath3
In my understanding, steps should provide some kind of feature, like executing a maven build or performing a neo deploy. Compared to those steps, fetchURL looks to me more like a util than like a step.
Should the possibility to get the content of a URL really be a step?
See also https://jenkins.io/doc/pipeline/steps/http_request/#http-request-plugin
There is a syntax error in assigning the user.name value for git config.
https://github.com/SAP/jenkins-library/blob/master/vars/artifactSetVersion.groovy#L91
Output:

```
git -c [email protected] -c user.name test commit -m update version 1.0.4-2018-03-26T144401UTC
14:44:21 error: missing value for 'user.name'
14:44:21 fatal: unable to parse 'user.name' from command-line config
```
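The root cause is presumably a missing `=`: git -c expects a single key=value argument, while the output above shows user.name and its value as two separate words. A throwaway-repo demonstration of the corrected form (paths and values are illustrative):

```shell
# 'git -c user.name test' passes the value as a separate word, which
# git cannot parse; key and value must be joined with '='.
repo=$(mktemp -d)
cd "$repo"
git init -q
echo hi > file
git add file
git -c user.name=test -c user.email=test@example.org \
    commit -q -m "update version 1.0.4"
git log -1 --format=%an   # the commit carries author name 'test'
```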
Dear Colleagues,
after version 0.2, neoDeploy doesn't work with 'warPropertiesFile' as the 'deployMode' parameter.
The error is a bit strange:
----------------------------------------------------------
[Deployment: z9.canary.master.properties] --- ERROR OCCURED IN LIBRARY STEP: neoDeploy
[Deployment: z9.canary.master.properties] ----------------------------------------------------------
[Deployment: z9.canary.master.properties]
[Deployment: z9.canary.master.properties] FOLLOWING PARAMETERS WERE AVAILABLE TO THIS STEP:
[Deployment: z9.canary.master.properties] ***
[Deployment: z9.canary.master.properties] [deployMode:warPropertiesFile, propertiesFile:/jenkinsdata/in.credit.adapter.z9_master-3MCTMHKL6AUAT3H4KLYPDOWEV4TLGJUFGLR3FVZZSCXYZGFTPOGQ/workspace/z9.canary.master.properties, warAction:rolling-update, archivePath:/jenkinsdata/in.credit.adapter.z9_master-3MCTMHKL6AUAT3H4KLYPDOWEV4TLGJUFGLR3FVZZSCXYZGFTPOGQ/workspace/target/z9.war, neoCredentialsId:P1942084552]
[Deployment: z9.canary.master.properties] ***
[Deployment: z9.canary.master.properties]
[Deployment: z9.canary.master.properties] ERROR WAS:
[Deployment: z9.canary.master.properties] ***
[Deployment: z9.canary.master.properties] java.lang.Exception: [neoDeploy] Invalid deployMode = 'warPropertiesFile'. Valid 'deployMode' values are: [mta, warParams, warPropertiesFile].
[Deployment: z9.canary.master.properties] ***
[Deployment: z9.canary.master.properties]
I replaced line 107, "if (! (deployMode in deployModes)) {", with "if (deployMode != 'mta' && deployMode != 'warParams' && deployMode != 'warPropertiesFile') {", which is logically the same. That gets us a step further, but there is still an error:
[Pipeline] [Deployment: z9.canary.master.properties] sh
[Deployment: z9.canary.master.properties] [workspace] Running shell script
[Deployment: z9.canary.master.properties]
[Deployment: z9.canary.master.properties] SAP Cloud Platform Console Client
[Deployment: z9.canary.master.properties]
[Deployment: z9.canary.master.properties]
[Deployment: z9.canary.master.properties] Expected a command, got null
[Deployment: z9.canary.master.properties]
[Deployment: z9.canary.master.properties]
[Deployment: z9.canary.master.properties] Command [null] not found
[Deployment: z9.canary.master.properties]
[Deployment: z9.canary.master.properties] Did you mean one of these?
[Deployment: z9.canary.master.properties] help
[Deployment: z9.canary.master.properties] system:help
[Deployment: z9.canary.master.properties]
[Pipeline] [Deployment: z9.canary.master.properties] echo
[Deployment: z9.canary.master.properties] ----------------------------------------------------------
[Deployment: z9.canary.master.properties] --- ERROR OCCURED IN LIBRARY STEP: dockerExecute
[Deployment: z9.canary.master.properties] ----------------------------------------------------------
[Deployment: z9.canary.master.properties]
[Deployment: z9.canary.master.properties] FOLLOWING PARAMETERS WERE AVAILABLE TO THIS STEP:
[Deployment: z9.canary.master.properties] ***
[Deployment: z9.canary.master.properties] [dockerImage:s4sdk/docker-neo-cli, dockerEnvVars:null, dockerOptions:null]
[Deployment: z9.canary.master.properties] ***
[Deployment: z9.canary.master.properties]
[Deployment: z9.canary.master.properties] ERROR WAS:
[Deployment: z9.canary.master.properties] ***
[Deployment: z9.canary.master.properties] hudson.AbortException: script returned exit code 2
[Deployment: z9.canary.master.properties] ***
[Deployment: z9.canary.master.properties]
[Deployment: z9.canary.master.properties] FURTHER INFORMATION:
[Deployment: z9.canary.master.properties] * Documentation of library step dockerExecute: https://sap.github.io/jenkins-library/steps/dockerExecute/
[Deployment: z9.canary.master.properties] * Source code of library step dockerExecute: https://github.com/SAP/jenkins-library/blob/master/vars/dockerExecute.groovy
[Deployment: z9.canary.master.properties] * Library documentation: https://sap.github.io/jenkins-library/
[Deployment: z9.canary.master.properties] * Library repository: https://github.com/SAP/jenkins-library
I would appreciate some help.
Regards,
Milko
As described in #21, certain parameters in bash commands, e.g. paths or passwords, can contain spaces or other special characters. These have to be escaped.
There are two discussed solutions from #18:

```groovy
String escape(String str) {
    return "\"${str.replace("\"", "\\\"")}\""
}
```
neoDeploy writes log files. The content of that log is not written in parallel to stdout/stderr, so we do not see that log output. In case of a failure we only get a hint:
If you need help, provide the output of the command and attach the log directory [/var/log/neo]
When running in a Docker environment, the volume might not be available at a later point in time for troubleshooting.
In order to avoid losing the log, we should cat the corresponding log files into the job log, especially (maybe: only) in case we have a deployment failure. By default the log is put into the folder where neo is installed. The logging location can be controlled by the environment variable neo_logging_location. It would be possible to set that property to a temporary directory. After the neo call, that folder contains the log files written by that neo deploy run, and we can cat the files we find there into the job log.
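A shell sketch of the idea, with stand-ins for the neo call and its log file (only the neo_logging_location variable name comes from the text above):

```shell
# Let neo write its logs into a temporary directory ...
log_dir=$(mktemp -d)
export neo_logging_location="$log_dir"

# ... stand-ins for a neo deploy run that writes a log and fails:
echo "deploy-mta failed: connection refused" > "$log_dir/neo.log"
deploy() { return 1; }

# ... and cat the log files into the job log on failure.
if ! deploy; then
    echo "--- neo deploy failed, log output follows ---"
    cat "$log_dir"/*
fi
```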
SAP Cloud Platform Console Client
Warning: Your SDK version "3.39.10" is no longer supported. The latest version is "3.51.14" and upgrade is highly recommended.
Executing 'deploy-mta' with the following parameters
source : /var/jenkins_home/fiori-app/fiori-reference-app.mtar
synchronous: true
subaccount : ****
host : https://hanatrial.ondemand.com
SDK version: 3.39.10
user : ****
I'm using the latest Java Web Tomcat 7 SDK, version 2.88.17, from here, which should be OK, but toolValidate fails because it requires 3.39. Since with SAPUI5 we only utilise the neo.sh deployment functionality, the flavour of the SDK should not matter. Should toolValidate be adjusted? I would still keep the check, because deploy-mta is not available in earlier versions.
----------------------------------------------------------
--- ERROR OCCURED IN LIBRARY STEP: toolValidate
----------------------------------------------------------
FOLLOWING PARAMETERS WERE AVAILABLE TO THIS STEP:
***
[tool:neo, home:/opt/neo/]
***
ERROR WAS:
***
hudson.AbortException: The installed version of SAP Cloud Platform Console Client is 2.88.17. Please install version 3.39.10 or a compatible version.
***
The MTA build jar could be wrapped by a shell script and put into a docker image. This would normalize the MTA build step.
Currently toolValidate works with various HOME variables, e.g. NEO_HOME, JAVA_HOME. If the tools are simply on the PATH, we cannot check the version.
Validation should also be able to handle cases where the tool is on the PATH.
We might also have cases where we have a HOME variable and, at the same time, the corresponding tool on the PATH pointing somewhere else. What should happen in this case depends on the later invocation of the tool itself: if the tool is invoked like ${HOME}/bin/tool.sh, the validation should happen against the HOME; if the tool is invoked simply as tool.sh via the PATH, that is what should be validated.
... not that easy, this story ...
For big git repos we have issues on the repo side. For executing a build it is not necessary to clone the full repo; one branch and a bunch of commits is sufficient in almost all cases.
--> we should implement something so that the full repo is not cloned.
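The standard git flags for this are --depth and --single-branch; a self-contained demonstration against a throwaway local repo (values are illustrative):

```shell
# Build a tiny repo with two commits ...
src=$(mktemp -d)
git -C "$src" init -q
git -C "$src" -c user.name=t -c user.email=t@example.org \
    commit -q --allow-empty -m "first"
git -C "$src" -c user.name=t -c user.email=t@example.org \
    commit -q --allow-empty -m "second"

# ... and clone only the latest commit of one branch.
dst=$(mktemp -d)/clone
git clone -q --depth 1 --single-branch "file://$src" "$dst"
git -C "$dst" rev-list --count HEAD   # only 1 commit was fetched
```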
Currently the new config framework expects the property mtaJarLocation in the general section. IMO that property is scoped to the mta build, and I do not expect any usage of it in contexts outside mtaBuild.
Shouldn't this property be located in steps/mtaBuild rather than in general?
What reasons do we have for providing it in the general section?
Do we have some guidelines where to put properties?
For the mtaJarLocation see: line 18f in vars/mtaBuild.groovy @ commit ac6cc2a
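The proposed move, sketched in the configuration format used elsewhere in this project (the value is illustrative):

```yaml
# today: configured in the general section
general:
  mtaJarLocation: '/opt/sap/mta/mta.jar'

# proposal: scoped to the step that uses it
steps:
  mtaBuild:
    mtaJarLocation: '/opt/sap/mta/mta.jar'
```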
Hello Colleagues,
I'm trying to use newmanExecute with the following call:

```groovy
newmanExecute script: this,
    newmanCollection: 'src/test/resources/BasicTest.postman_collection.json'
```
and I'm getting the following error:
[Pipeline] echo
--- BEGIN LIBRARY STEP: newmanExecute.groovy ---
[Pipeline] echo
--- BEGIN LIBRARY STEP: prepareDefaultValues.groovy ---
[Pipeline] echo
--- END LIBRARY STEP: prepareDefaultValues.groovy ---
[Pipeline] sh
[workspace] Running shell script
[Pipeline] echo
Unstash content: tests
[Pipeline] unstash
[Pipeline] echo
Unstash failed: tests (No such saved stash ‘tests’)
[Pipeline] findFiles
[Pipeline] echo
[newmanExecute] Found files [src/test/resources/BasicTest.postman_collection.json]
[Pipeline] echo
--- BEGIN LIBRARY STEP: dockerExecute.groovy ---
[Pipeline] echo
--- BEGIN LIBRARY STEP: prepareDefaultValues.groovy ---
[Pipeline] echo
--- END LIBRARY STEP: prepareDefaultValues.groovy ---
[Pipeline] sh
[workspace] Running shell script
+ which docker
[Pipeline] sh
[workspace] Running shell script
+ docker ps -q
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[Pipeline] echo
[WARNING][dockerExecute] Cannot connect to docker daemon (command 'docker ps' did not return with '0'). Configured docker image 'node:8-stretch' will not be used.
[Pipeline] echo
[INFO][dockerExecute] Running on local environment.
[Pipeline] sh
[workspace] Running shell script
+ npm install newman --global --quiet
/jenkinsdata/IntegrationTestsPOC/workspace@tmp/durable-292d3aaa/script.sh: 2: /jenkinsdata/IntegrationTestsPOC/workspace@tmp/durable-292d3aaa/script.sh: npm: not found
[Pipeline] echo
----------------------------------------------------------
--- ERROR OCCURED IN LIBRARY STEP: dockerExecute
----------------------------------------------------------
FOLLOWING PARAMETERS WERE AVAILABLE TO THIS STEP:
***
[dockerImage:node:8-stretch, stashContent:[]]
***
ERROR WAS:
***
hudson.AbortException: script returned exit code 127
***
FURTHER INFORMATION:
* Documentation of library step dockerExecute: https://sap.github.io/jenkins-library/steps/dockerExecute/
* Source code of library step dockerExecute: https://github.com/SAP/jenkins-library/blob/master/vars/dockerExecute.groovy
* Library documentation: https://sap.github.io/jenkins-library/
* Library repository: https://github.com/SAP/jenkins-library
----------------------------------------------------------
[Pipeline] echo
--- END LIBRARY STEP: dockerExecute.groovy ---
[Pipeline] echo
----------------------------------------------------------
--- ERROR OCCURED IN LIBRARY STEP: newmanExecute
----------------------------------------------------------
FOLLOWING PARAMETERS WERE AVAILABLE TO THIS STEP:
***
[script:WorkflowScript@4dfb5398, newmanCollection:src/test/resources/BasicTest.postman_collection.json]
***
ERROR WAS:
***
hudson.AbortException: script returned exit code 127
***
FURTHER INFORMATION:
* Documentation of library step newmanExecute: https://sap.github.io/jenkins-library/steps/newmanExecute/
* Source code of library step newmanExecute: https://github.com/SAP/jenkins-library/blob/master/vars/newmanExecute.groovy
* Library documentation: https://sap.github.io/jenkins-library/
* Library repository: https://github.com/SAP/jenkins-library
----------------------------------------------------------
[Pipeline] echo
--- END LIBRARY STEP: newmanExecute.groovy ---
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
Could you please let me know the right syntax?
Btw the executeNewmanTests call is working fine...
Regards,
Milko
Is there a standard configuration setup for steps which we could wrap by a method?
Most older steps use ConfigurationMerger.merge(), like mavenExecute:
Newer steps use the ConfigurationHelper, but differently. E.g. newmanExecute doesn't use mixinGeneralConfig while others do.
Does it make sense to have something like

```groovy
ConfigurationHelper.loadStep(script, generalConfigurationKeys, stepConfigurationKeys, parameterKeys)
```
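A sketch of what such a helper might do internally; only the name loadStep and its parameter list come from the text above, and the merge semantics are an assumption (parameters over step config over general config):

```groovy
// Hypothetical merge order: general config is the weakest layer,
// explicit step parameters the strongest.
Map loadStep(Map generalConfig, Map stepConfig, Map parameters) {
    Map merged = [:]
    merged.putAll(generalConfig)
    merged.putAll(stepConfig)
    merged.putAll(parameters)
    return merged
}

def config = loadStep(
    [dockerImage: 'maven:3.5', verbose: false],
    [dockerImage: 'maven:3.5-jdk-8'],
    [verbose: true])
assert config == [dockerImage: 'maven:3.5-jdk-8', verbose: true]
```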
Currently, we delete the old version after a successful blue-green deployment:
https://github.com/SAP/jenkins-library/blob/master/vars/cloudFoundryDeploy.groovy#L139
We received the request that it would be nice to be able to restart the old version in case there is an issue which was not identified by the smoke tests.
Deleting the old version makes it harder to roll back. What about stopping the old version instead of deleting it?
We get a class cast issue when attempting to get the step default configuration. E.g. a call to mavenExecute results in:
--- END LIBRARY STEP: mavenExecute.groovy ---
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
hudson.remoting.ProxyException: org.codehaus.groovy.runtime.typehandling.GroovyCastException: Cannot cast object 'com.sap.piper.DefaultValueCache@2be8b3e8' with class 'com.sap.piper.DefaultValueCache' to class 'java.util.Map'
at org.codehaus.groovy.runtime.typehandling.DefaultTypeTransformation.continueCastOnSAM(DefaultTypeTransformation.java:405)
at org.codehaus.groovy.runtime.typehandling.DefaultTypeTransformation.continueCastOnNumber(DefaultTypeTransformation.java:319)
at org.codehaus.groovy.runtime.typehandling.DefaultTypeTransformation.castToType(DefaultTypeTransformation.java:232)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.castToType(ScriptBytecodeAdapter.java:603)
at Unknown.Unknown(Unknown)
at handlePipelineStepErrors.call(/var/jenkins_home/jobs/maven/builds/1/libs/piper-library-os/vars/handlePipelineStepErrors.groovy:16)
at mavenExecute.call(/var/jenkins_home/jobs/maven/builds/1/libs/piper-library-os/vars/mavenExecute.groovy:6)
at WorkflowScript.run(WorkflowScript:6)
The issue is related to

```groovy
final Map stepDefaults = ConfigurationLoader.defaultStepConfiguration(script, 'mavenExecute')
```

where we expect defaultStepConfiguration(./.) to return a Map, but in fact get an instance of DefaultValueCache, which cannot be cast to a Map.
Steps to reproduce the issue:
1.) Build the docker image https://github.com/SAP/cloud-s4-sdk-pipeline-docker.git, branch master, path s4sdk-jenkins-master, at commit 3166b6a2afa9f5476bc9249abcfeb197fcc81461 (or use the corresponding image from a registry)
2.) Launch the docker image: docker run -p 8089:8080 <id>
3.) Create a new pipeline job with this pipeline code:
```groovy
@Library('piper-library-os') _
node() {
    stage('1') {
        mavenExecute script: this
    }
}
```
4.) Execute the job
this is the follow-up of the discussion in #173 about the stashing topic and tries to provide a summary for further discussion.
Currently we see the following scenarios (in steps like pipelineStashFiles, snykExecute, ...). For the way forward one could think of the following scenarios, excluding the standard Jenkins Pipeline means, which will always be possible.
Useful functionality to add to Piper in some easy-to-consume way:
https://github.com/SAP/cloud-s4-sdk-pipeline-lib/blob/master/vars/abortOldBuilds.groovy
Having a project with an old configuration for the mta jar,

```yaml
steps:
  mtaBuild:
    mtaJarLocation: /opt/sap/mta
```

fails since the mta.jar file was made configurable (9d7e801).
Should we act backward compatible here? In case customers have configured the mta jar the old way, it now fails.
What about checking whether the configured value is a folder containing an mta.jar file and acting accordingly, until our next incompatible version is released?
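A sketch of such a compatibility check (the helper is hypothetical; only mtaJarLocation and the mta.jar file name come from the text):

```groovy
// If the configured value is a folder containing an mta.jar (old style),
// fall back to that jar; otherwise treat the value as the jar path itself.
String resolveMtaJar(String configured) {
    File candidate = new File(configured, 'mta.jar')
    if (new File(configured).isDirectory() && candidate.exists()) {
        return candidate.path
    }
    return configured
}

// old-style folder config resolves to the contained jar
File dir = File.createTempDir()
new File(dir, 'mta.jar').text = ''
assert resolveMtaJar(dir.path) == new File(dir, 'mta.jar').path
// new-style config is passed through unchanged
assert resolveMtaJar('/opt/sap/mta/mta.jar') == '/opt/sap/mta/mta.jar'
```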
There is something wrong with some recent commits in neoDeploy.
Our pipeline is failing with the following error:
[Info] Starting deployment of deployProps['propertiesFile']
[Pipeline] echo
--- BEGIN LIBRARY STEP: neoDeploy.groovy ---
[Pipeline] echo
--- BEGIN LIBRARY STEP: prepareDefaultValues.groovy ---
[Pipeline] libraryResource
[Pipeline] readYaml
[Pipeline] echo
--- END LIBRARY STEP: prepareDefaultValues.groovy ---
[Pipeline] fileExists
[Pipeline] fileExists
[Pipeline] echo
[neoDeploy] Neo executable "$JENKINS_HOME/userContent/neo-java-web-sdk-3.x/tools/neo.sh" retrieved from environment.
[Pipeline] withCredentials
[Pipeline] {
[Pipeline] echo
--- BEGIN LIBRARY STEP: dockerExecute.groovy ---
[Pipeline] echo
[INFO][dockerExecute] Running on local environment.
[Pipeline] sh
[workspace] Running shell script
+ which neo.sh
[Pipeline] echo
--- BEGIN LIBRARY STEP: toolValidate.groovy ---
[Pipeline] echo
----------------------------------------------------------
--- ERROR OCCURED IN LIBRARY STEP: toolValidate
----------------------------------------------------------
FOLLOWING PARAMETERS WERE AVAILABLE TO THIS STEP:
***
[tool:neo, home:$JENKINS_HOME/userContent/neo-java-web-sdk-3.x]
***
ERROR WAS:
***
hudson.AbortException: '/$JENKINS_HOME/userContent/neo-java-web-sdk-3.x' does not exist.
***
FURTHER INFORMATION:
* Documentation of library step toolValidate: https://sap.github.io/jenkins-library/steps/toolValidate/
* Source code of library step toolValidate: https://github.com/SAP/jenkins-library/blob/master/vars/toolValidate.groovy
* Library documentation: https://sap.github.io/jenkins-library/
* Library repository: https://github.com/SAP/jenkins-library
Similar to Travis CI, the maven step should check whether the project uses a Maven wrapper and use that instead of the Maven version provided by the image. This makes it easier to have a consistent Maven version for local builds and CI, compared to changing the Docker image used.
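The detection itself can be a small shell check (./mvnw is the conventional wrapper script name; mvn stands for the image-provided binary):

```shell
# Run from the project root: prefer the checked-in Maven wrapper,
# fall back to the Maven installed in the image.
workdir=$(mktemp -d)   # stand-in for a project without a wrapper
cd "$workdir"
if [ -x ./mvnw ]; then
    MVN=./mvnw
else
    MVN=mvn
fi
echo "using: $MVN"
```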
We describe configuration in each step. As our configuration gets more complex (various sources, step-specific settings, etc.), I think it would be very beneficial to describe it in a separate documentation chapter.
IMO it is in general very valuable to validate the environment right at the start of a build. With that we can avoid running a build e.g. for half an hour only to fail due to some missing constraints which could have been checked earlier.
But we can also see that the current approach causes trouble: in case some steps are executed later on inside a docker image, it does not make sense to perform the check in an environment other than that docker image. The same applies when working with builds spread over different Jenkins nodes/slaves. In this case the validation must be performed against the node/slave which is used later on when a certain tool is invoked.
We should discuss how to deal with environment validation in the future. The point is: in general it is a benefit to have an early environment validation (fail fast); on the other hand it is hard to do the validation right, since we have to deal with docker images and with parts of the pipeline executed on different nodes.
When I'm running in a non-proxy environment I fail at the deploy stage with the following error:
SAP Cloud Platform Console Client
Executing 'deploy-mta' with the following parameters
source : /var/jenkins_home/workspace/x/src/app.mtar
synchronous: true
subaccount : p...trial
host : https://hanatrial.ondemand.com
SDK version: 3.39.10
user : ****
(!) ERROR; The proxy port could not be determined, ensure that the proxy port property is a number, current value is 'http:'.
If you need help, provide the output of the command and attach the log directory [/opt/neo/tools/log]
Interestingly, the value is 'http:', although I have no proxy settings, neither in Jenkins nor in the hosting environment.
My proxy settings in the .pipeline/config.properties file:

```
# HTTP proxy to be used in the build used for http calls (optional)
#proxy=http://proxy:8080
# HTTPS proxy to be used in the build used for https calls (optional)
#httpsProxy=http://proxy:8080
```
Any idea what could be causing this?
We could add/move the following step: https://github.com/SAP/cloud-s4-sdk-pipeline-lib/blob/master/vars/deployToCloudPlatform.groovy
When loading the library in Jenkins, I get this message from time to time and then have to wait for a long time:
12:40:32 GitHub API Usage: Current quota has 40 remaining (9 over budget). Next quota of 60 in 53 min. Sleeping for 17 min.
12:43:32 GitHub API Usage: Still sleeping, now only 14 min remaining.
12:46:32 GitHub API Usage: Still sleeping, now only 11 min remaining.
Any idea what the reason is and how to avoid it?
I propose to define a convention for naming credentials parameters: <realm>CredentialsId, e.g. neoCredentialsId or snykCredentialsId. We should not use an unspecific name like credentialsId, as seen in some steps.
For harmonization of the IDs, I'd prefer to apply a similar naming schema to the Jenkins credentials, like <REALM>_CREDENTIALS_ID as already seen, or just <REALM> in capital letters, e.g. NEO_CREDENTIALS_ID or NEO. The ID should reflect the realm of the credentials only. Ambiguous names like CI_CREDENTIALS_ID should be avoided.
Just got this when executing a pipeline:
[Pipeline] End of Pipeline
hudson.remoting.ProxyException: org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
file:/var/jenkins_home/jobs/test-pipeline/builds/37/libs/piper-library-os/src/com/sap/piper/FileUtils.groovy: 10: Invalid duplicate class definition of class com.sap.piper.FileUtils : The source file:/var/jenkins_home/jobs/test-pipeline/builds/37/libs/piper-library-os/src/com/sap/piper/FileUtils.groovy contains at least two definitions of the class com.sap.piper.FileUtils.
One of the classes is an explicit generated class using the class statement, the other is a class generated from the script body based on the file name. Solutions are to change the file name or to change the class name.
@ line 10, column 1.
class FileUtils implements Serializable {
-> we have to ensure that the name of a class inside a groovy script file does not conflict with the name of the groovy script.
The links pointing to step documentation, pipeline documentation and GitHub repository in handlePipelineStepErrors are only placeholders.
Currently neoDeploy is only able to deploy mtar files; it would be good to also support war upload.
We have use cases where several properties denote one entity, e.g. for interacting with a backend.
In this case we have the following options for modelling these properties.

Flat:

```yaml
myBackendEndpointURL: 'https://example.org/myBackend'
myBackendCredentialsID: 'theCreds'
myBackendOpts: '-Djavax.net.ssl.trustStore:/my/trust/store'
```

Nested:

```yaml
myBackend:
  endpointUrl: 'https://example.org/myBackend'
  credentialsId: 'theCreds'
  opts: '-Djavax.net.ssl.trustStore:/my/trust/store'
```

We should collect some pros and cons in order to make a decision on which approach to follow.
We have introduced a new configuration framework that will eventually replace the old framework.
We also need to test this at the unit test level.
Hello colleagues,
it would be great if you could start the application after a successful deployment. When using the deployment functionality in a CI/CD pipeline, we currently need to start the application manually afterwards with 'neo start'.
Thanks and regards,
Irina