adoptium / ci-jenkins-pipelines
jenkins pipeline build scripts
License: Apache License 2.0
Description of the issue
We have a nightly_build_and_test_stats.groovy file that reports a success rating.
It has been shown recently, during releases where we turn off Nightly Tests, that the success rating figure drops immediately to 50%, because the value is currently (BuildSuccessRating+TestSuccessRating)/2.
We need to make it more useful in these scenarios. We should improve the message that is sent to the build Slack channel (see https://github.com/AdoptOpenJDK/ci-jenkins-pipelines/blob/master/tools/nightly_build_and_test_stats.groovy#L182).
This is the current style of message sent to the build Slack channel:
Suggestions:
"Adoptium Pipeline report: 98% (derived from 160 build jobs at 98%, 0 test jobs at n/a %)"
Alternatively
"Adoptium Pipeline report: 82% (derived from 158 build jobs at 97%, 201 test jobs at 68%)"
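One way to make the figure robust is to weight each side by its job count, so a release run with zero test jobs reports only the build rating. Below is a hypothetical Java sketch (the real script is Groovy; the class and method names are invented, and the exact percentages depend on which weighting is chosen):

```java
public class PipelineReport {
    /**
     * Combine build and test success ratings, weighting each side by its
     * job count so that a run with zero test jobs does not drag the
     * overall figure down to 50%. All names here are illustrative.
     */
    static String summarise(int buildJobs, double buildRating,
                            int testJobs, double testRating) {
        int total = buildJobs + testJobs;
        double overall = total == 0 ? 0.0
            : (buildJobs * buildRating + testJobs * testRating) / total;
        String testPart = testJobs == 0
            ? testJobs + " test jobs at n/a %"
            : testJobs + " test jobs at " + Math.round(testRating) + "%";
        return String.format(
            "Adoptium Pipeline report: %d%% (derived from %d build jobs at %d%%, %s)",
            Math.round(overall), buildJobs, Math.round(buildRating), testPart);
    }
}
```

With this weighting, 160 build jobs at 98% and no test jobs yields a 98% report rather than 49%.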
Step by Step
To solve this issue and contribute a fix you should check the following step-by-step list. A more detailed documentation of the workflow can be found here.
Questions
If you have any questions, just ask us directly in this issue by adding a comment. You can join our community chat at Slack. In addition, you can find a general manual about open-source contributions here.
Spawned from conversation in adoptium/temurin-build#2132 (comment)
The existing USE_ADOPT_SHELL_SCRIPTS parameter forces the pipelines to always use Adopt's bash scripts. However, this could be expanded to include the Groovy scripts and parameters too.
JDK8 HotSpot builds are failing with the following error; OpenJ9 and JDK11 are unaffected.
Compiling /Users/jenkins/workspace/build-scripts/jobs/jdk8u/jdk8u-mac-x64-hotspot/workspace/build/src/hotspot/src/share/vm/opto/loopTransform.cpp
cc1plus: warnings being treated as errors
/Users/jenkins/workspace/build-scripts/jobs/jdk8u/jdk8u-mac-x64-hotspot/workspace/build/src/hotspot/src/share/vm/oops/klass.cpp: In static member function 'static jint Klass::array_layout_helper(BasicType)':
/Users/jenkins/workspace/build-scripts/jobs/jdk8u/jdk8u-mac-x64-hotspot/workspace/build/src/hotspot/src/share/vm/oops/klass.cpp:218: warning: negative integer implicitly converted to unsigned type
Document what existing "build info" we have:
OpenJDK source being built:
Build Platform:
Build Parameters:
Build scripts:
OpenJDK Build Info:
Tests run:
What are you trying to do?
Test new changes with a reliable and easy-to-understand test suite that checks my code for syntax and compile issues
Expected behaviour:
Context stubs are easy to update, and issues flagged by the tests are easy to spot and fix
Observed behaviour:
https://github.com/AdoptOpenJDK/openjdk-build/blob/master/pipelines/src/test/groovy/TestCompilation.groovy
ContextStub getInputStream() {}
ContextStub getText() {}
JsonSlurper parseText(def f) {}
produces compile errors...
TestCompilation > compile_pr_test_pipelineTest() FAILED
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
prTester/pr_test_pipeline.groovy: 33: [Static type checking] - Cannot find matching method java.lang.Object#getInputStream(). Please check if the declared type is correct and if the method exists.
@ line 33, column 93.
seText(getAdopt.getInputStream().getText
^
prTester/pr_test_pipeline.groovy: 33: [Static type checking] - Cannot find matching method java.lang.Object#getText(). Please check if the declared type is correct and if the method exists.
@ line 33, column 70.
= new JsonSlurper().parseText(getAdopt.g
^
prTester/pr_test_pipeline.groovy: 33: [Static type checking] - Cannot find matching method groovy.json.JsonSlurper#parseText(java.lang.Object). Please check if the declared type is correct and if the method exists.
@ line 33, column 42.
ing, ?> ADOPT_DEFAULTS_JSON = new JsonSl
with the test not giving good feedback about what code should be placed inside the ContextStub classes
Any other comments:
When using the release pipeline build jobs with the optional "Adopt build number" field, which you increment for re-release builds, the package filenames do not include the ".1". This could be confusing for clients who download the package without realising it is a new release.
Pipeline statuses at https://ci.adoptopenjdk.net/job/build-scripts/job/openjdk15-pipeline/ are currently showing green despite all but one of the jobs under them failing.
(I've got a fix in for the current issue as to why they're failing, but we really ought to propagate the status back up properly - possibly similar to adoptium/temurin-build#1371)
We will need to add a second library to Jenkins ("Global Pipeline Libraries" at https://ci.adoptopenjdk.net/configure) that does what we need. The functions we use are JobHelper.jobIsRunnable and NodeHelper.nodeIsOnline.
First Timers Only
This issue is reserved for people who never contributed to Open Source before. We know that the process of creating a pull request is the biggest barrier for new contributors. This issue is for you!
Description of the issue
For the AdoptOpenJDK build we need better configuration options for variables. Internally, Groovy is used to execute parts of the AdoptOpenJDK CI build. These parts of the build should be configurable, but today some variables are still hardcoded. This issue should refactor two of these parts:
This line of Groovy code contains 2 problems:
"${additionalNodeLabels}&&${platformConfig.os}&&${platformConfig.arch}"
The label is hardcoded and could be parameterized via our config files (see adoptium/temurin-build#2100 for an example of this being done for Docker node labels). Specifically, the && parts of the string prevent users from making their own node strings without &&. We should have an additional configuration value (CUSTOM_NODE_LABEL?) that specifies a user-provided node string.
Step by Step
To solve this issue and contribute a fix you should check the following step-by-step list. A more detailed documentation of the workflow can be found here.
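The CUSTOM_NODE_LABEL override described above could take roughly this shape (a hypothetical Java sketch; the real code is Groovy, and the config keys here are illustrative):

```java
import java.util.Map;

public class NodeLabels {
    /**
     * Build the Jenkins node label for a platform. When the proposed
     * CUSTOM_NODE_LABEL value is set, use it verbatim, so users can
     * supply labels that do not contain "&&". Otherwise fall back to the
     * current hardcoded "labels&&os&&arch" pattern.
     */
    static String nodeLabel(Map<String, String> platformConfig, String additionalNodeLabels) {
        String custom = platformConfig.get("CUSTOM_NODE_LABEL");
        if (custom != null && !custom.trim().isEmpty()) {
            return custom;
        }
        // Current behaviour: "${additionalNodeLabels}&&${os}&&${arch}"
        return additionalNodeLabels + "&&" + platformConfig.get("os")
                + "&&" + platformConfig.get("arch");
    }
}
```

For example, a config carrying os=linux and arch=x64 would still produce build&&linux&&x64, while a config carrying CUSTOM_NODE_LABEL would bypass the && pattern entirely.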
Contribute to Hacktoberfest
Solve this issue as part of the Hacktoberfest event and get a chance to receive cool goodies like a T-shirt.
Questions
If you have any questions, just ask us directly in this issue by adding a comment. You can join our community chat at Slack. In addition, you can find a general manual about open-source contributions here.
This might soon be a requirement for Adoptium as well. We are building binaries where we have changed PRODUCT_NAME and PRODUCT_SUFFIX (and LAUNCHER_NAME) in the OpenJDK code (make/autoconf/version-numbers). As a result, our JSON build artifacts have version:null in them, because the version parser looks for OpenJDK Runtime Environment:
https://github.com/AdoptOpenJDK/ci-jenkins-pipelines/blob/f757ffa251aedcd71cecce9e0c89888a2d53b720/pipelines/build/common/openjdk_build_pipeline.groovy#L310
This fails silently on most platforms, but errors out on Windows and Mac because those platforms later try to use that info.
Specifically here
https://github.com/AdoptOpenJDK/ci-jenkins-pipelines/blob/f757ffa251aedcd71cecce9e0c89888a2d53b720/pipelines/build/common/openjdk_build_pipeline.groovy#L331
Which throws
01:00:09.031 Execution error: java.lang.NullPointerException: Cannot get property 'major' on null object
[Pipeline] echo
01:00:09.034 java.lang.NullPointerException: Cannot get property 'major' on null object
01:00:09.034 at org.codehaus.groovy.runtime.NullObject.getProperty(NullObject.java:60)
01:00:09.034 at org.codehaus.groovy.runtime.InvokerHelper.getProperty(InvokerHelper.java:174)
01:00:09.034 at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.getProperty(ScriptBytecodeAdapter.java:456)
01:00:09.034 at org.kohsuke.groovy.sandbox.impl.Checker$7.call(Checker.java:355)
01:00:09.034 at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:357)
01:00:09.034 at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.getProperty(SandboxInvoker.java:29)
01:00:09.034 at com.cloudbees.groovy.cps.impl.PropertyAccessBlock.rawGet(PropertyAccessBlock.java:20)
01:00:09.034 at Build.sign(Script2.groovy:355)
I am requesting that we either make the regex more generic, add a second case here, or make it configurable.
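A minimal sketch of the configurable option (hypothetical Java; the real parser is Groovy, and the method name is invented):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VersionLine {
    /**
     * Locate the runtime-environment line in `java -version` output.
     * The product prefix is configurable, so forks that change
     * PRODUCT_NAME (e.g. "MyVendor Runtime Environment") still parse a
     * version instead of silently yielding null.
     */
    static String findVersionLine(String versionOutput, String productName) {
        Pattern p = Pattern.compile(Pattern.quote(productName) + " Runtime Environment.*");
        Matcher m = p.matcher(versionOutput);
        return m.find() ? m.group() : null;  // null reproduces today's silent failure
    }
}
```

A caller would pass "OpenJDK" by default and a configured product name otherwise; returning null (rather than matching) is what currently leads to the NullPointerException on the 'major' property downstream.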
Many times, our Docker-based builds fail to pull the images because of rate limiting. If we had an option in the build config to do a docker login first, we could avoid this issue. Possibly another parameter called DOCKER_CREDS?
Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Remove https://ci.adoptopenjdk.net/view/all/job/openjdk_release_tool/ and rename https://ci.adoptopenjdk.net/job/build-scripts/job/release/job/refactor_openjdk_release_tool/
We will need to update the name in the build scripts though
As we have enabled testing for jdk16 and jdk17 and added testing for some new platforms (see adoptium/aqa-tests#2304), we should revisit and adjust the nightly and weekly test lists to suit the number of test jobs running on the set of test machines available.
Fix the case where sanity.external is being launched against some platforms it should not be on (remove it from here), and instead of subtracting it from corretto (in this file), add it for hotspot/openj9.
The current process is to disable nightly testing, but not builds, during a release cycle. For at least some pipelines, only one artifact is being kept. This makes it highly likely that, if any test jobs referencing the upstream job need to be re-run, the artifacts will already have been removed from the individual jobs, making it more complex to run the tests. We should evaluate the space impact of increasing "Max # of builds to keep with artifacts" from 1 to 2 everywhere.
This:
5b0dfc8#diff-cd68afc4fbcb6c7457170bc83490eb803243e4de6cfc3841de93d569cd475a74
Doesn't work, and produces a job with a label targeting build&&linux&&[openj9:x64], because crossCompile can't handle a map. It needs an adjustment of https://github.com/AdoptOpenJDK/ci-jenkins-pipelines/blob/6db2a82792a4878886da6726f42b2c745f27dec2/pipelines/build/common/build_base_file.groovy#L296 similar to https://github.com/AdoptOpenJDK/ci-jenkins-pipelines/blob/6db2a82792a4878886da6726f42b2c745f27dec2/pipelines/build/common/build_base_file.groovy#L312
What are you trying to do?
Build & Test with a custom openjdk repo/branch.
Expected behaviour:
Pass a custom branch via jdkXX_pipeline_config buildArgs --disable-adopt-branch-safety -r <repo> -b <branch>. Compile clones and checks out the correct branch, and the correct repo & branch are passed to the test jobs.
Observed behaviour:
Default branch (openj9) is passed to test instead of custom branch value.
Any other comments:
In order to accommodate more testing while conserving machine resources, I would like to introduce a weekly releaseType so that we can trigger a Weekly test list, different from the regular nightly list of tests to be executed once a week.
For releaseType=release, we run both Nightly + Weekly test lists
We can check for releaseType=weekly, and switch which test list we are using, much like we do for releaseType=release (in build_base_file.groovy).
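The releaseType switch could look roughly like this (a hypothetical Java sketch; the actual logic would live in Groovy in build_base_file.groovy, and the test target names are placeholders):

```java
import java.util.ArrayList;
import java.util.List;

public class TestLists {
    // Placeholder target names, not the real nightly/weekly lists.
    static final List<String> NIGHTLY = List.of("sanity.openjdk", "sanity.system");
    static final List<String> WEEKLY  = List.of("extended.openjdk", "special.functional");

    /**
     * Pick the test list by releaseType: weekly runs the Weekly list,
     * release runs both Nightly and Weekly, anything else runs Nightly.
     */
    static List<String> testsFor(String releaseType) {
        switch (releaseType) {
            case "weekly":
                return WEEKLY;
            case "release":
                List<String> both = new ArrayList<>(NIGHTLY);
                both.addAll(WEEKLY);
                return both;
            default:
                return NIGHTLY;
        }
    }
}
```

The weekly trigger would then pass releaseType=weekly into this switch, leaving the existing nightly and release behaviour unchanged.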
Currently we run Nightly build pipelines on a trigger defined at the bottom of the job configuration files, for example jdk8u.groovy (L60).
The thing to figure out is how best to accommodate 2 different triggerSchedules for nightly versus weekly (perhaps just a 2nd job config in Jenkins that passes in releaseType weekly for a Saturday run and change the nightly triggerSchedule to no longer run on Saturday).
Currently, we run our job regenerators from separate files, with each file corresponding to one job. This is rather inefficient when making changes, as most of the files are identical except for the javaVersion variable.
This could be optimised so that all 4 currently active jobs are executed from one file via the parallel Jenkins function.
The same thing could be achieved with the openjdkxx_pipeline.groovy files for the upstream pipelines.
With the merge of adoptium/temurin-build#2132, users have a lot more flexibility over what, where, and how they can build Java binaries. However, the limiting factor at the moment is the hardcoded filenames inside directories within the scripts. These force users who are looking to benefit from adoptium/temurin-build#2132 to use our filenames without being able to pick their own. This should not be the case: the scripts should be smart enough to tell whether they are looking at a directory, an Adopt file, or a user file. Examples below:
This is a placeholder issue for the initial incorporation of the adoptium-packages-linux work, based on what @aahlenst has in https://github.com/aahlenst/adoptium-packages-linux. We would now like to be able to trigger build and test of these Linux packages from the Jenkins build pipelines.
As to where this gets triggered, see the diagram found in AdoptOpenJDK/TSC#158 (comment), upon the creation of the JDK binaries, the installer pipeline can be triggered.
The initial objective is to get building and testing of these packages running. https://github.com/aahlenst/adoptium-packages-linux/blob/main/NOTES.txt has some notes. This issue can (and should) encompass some remodelling effort to get rid of the large number of jobs. It should also be standalone/modular enough to restart if failed (without a complete rerun of build pipelines, like test pipelines). We can assume that this work will go through several iterations before it's refined, but let's get some basic pipeline in place and then work on refinements.
Some further steps (which can be captured under a separate issue) can be to see whether we can publish into https://cloudsmith.com/ before adding more versions, more packages.
A common problem is that nightly/release builds get stalled because a vital node (e.g. the installer node) goes offline, and everything stalls until someone notices. Similarly, sometimes the required selection of build nodes are all offline, or the agent has failed. Having a task that detects this and alerts on Slack would be useful.
Prior to #36, the openjdk_pipeline.groovy was split out between the version numbers. adoptium/temurin-build#2132 did not account for this new scenario and pulls files such as this by folder instead of by file.
Now that these version files have been squashed down (the same applies to the regeneration_pipeline files), we can adapt parameters on the generation of jobs to account for this and save time searching for files that no longer exist.
Now we have tests in https://github.com/AdoptOpenJDK/openjdk-build/tree/master/test/functional/buildAndPackage, we should incorporate them into the Jenkins build pipelines.
They should ideally be run prior to runTests and should block the running of other tests (there is currently an outstanding failure in the smoke tests that should get fixed, as we will block all testing if failures occur in the SmokeTestsFoBuildAndPackaging job; related: adoptium/temurin-build#1387). Additional context is described here: adoptium/aqa-tests#2024.
This will require use of some extra test job parameters:
First Timers Only
This issue is reserved for people who never contributed to Open Source before. We know that the process of creating a pull request is the biggest barrier for new contributors. This issue is for you!
Description of the issue
For the AdoptOpenJDK build we need better configuration options for variables. Internally, Groovy is used to execute parts of the AdoptOpenJDK CI build. This part of the build should be configurable, but today some variables are still hardcoded. This issue should refactor one of these parts:
This line of Groovy code contains 2 problems:
The fpm label is hardcoded and could be parameterized. TARGET_OS will map directly to a label (linux, for example); I'm not sure what the best way to parameterize that would be, and I'm not actually sure why the labels are passed. Since these feed into 3 separate freestyle jobs, the jobs themselves could be configured to "Restrict where this project can be run".
Step by Step
To solve this issue and contribute a fix you should check the following step-by-step list. A more detailed documentation of the workflow can be found here.
Contribute to Hacktoberfest
Solve this issue as part of the Hacktoberfest event and get a chance to receive cool goodies like a T-shirt.
Questions
If you have any questions, just ask us directly in this issue by adding a comment. You can join our community chat at Slack. In addition, you can find a general manual about open-source contributions here.
The test is very rigid in its flexibility and functionality; this shouldn't be the case for a file that changes a lot.
As a third party user, I will tend to have a different test list than what is in build_base_file.groovy. I will also not necessarily want to follow changes made to ENABLE_TESTS such as #92. I am proposing we move these values into a configuration file so that users can point to their own config file.
A lot of fixes have been going into the job regenerators recently, and I keep having to update them one by one, which gets a little time-consuming and annoying after a while (there's no way I know of to "import" configs from other jobs). Similarly to the build-pipeline-generator and pipeline_jobs_generator_jdkxx jobs, it would be nice to have a job in utils/ that refreshes these jobs on a commit push to master.
Despite the following from create_job_from_template.groovy:
create_job_from_template.groovy: numToKeep(10)
create_job_from_template.groovy: artifactNumToKeep(1)
We appear to be keeping far more builds than that in the pipelines:
root@Ubuntu-1604-xenial-64-minimal /home/jenkins/.jenkins/jobs/build-scripts/jobs # for A in open*; do echo ==== $A; ls -d $A/builds/[0-9]* | wc; done
==== openjdk10-pipeline
10 10 300
==== openjdk11-pipeline
32 32 960
==== openjdk12-pipeline
23 23 690
==== openjdk13-pipeline
14 14 406
==== openjdk8-pipeline
34 34 986
==== openjdk9-pipeline
10 10 290
This needs to be resolved, as these have chewed up an additional 50GB of space on the Jenkins master node over the last week.
While trying to create a dockerBuild node, I discovered that, having created a jenkins user with uid=1001, dockerBuild fails with:
10:33:18 > git checkout -f ac64818 # timeout=10
10:33:19 sh: /home/jenkins/workspace/build-scripts/jobs/jdk11u/jdk11u-linux-x64-openj9@tmp/durable-0ab7d979/jenkins-log.txt: Permission denied
10:33:19 sh: /home/jenkins/workspace/build-scripts/jobs/jdk11u/jdk11u-linux-x64-openj9@tmp/durable-0ab7d979/jenkins-result.txt.tmp: Permission denied
10:33:19 mv: accessing `/home/jenkins/workspace/build-scripts/jobs/jdk11u/jdk11u-linux-x64-openj9@tmp/durable-0ab7d979/jenkins-result.txt': Permission denied
The reason is that the adoptopenjdk Docker images have the jenkins user set to uid=1000, which happens to match the adoptopenjdk Azure cloud uid of 1000.
It would be good to match the image to the host environment, or make the built image handle this dynamically?
The recent repo change and updates to the PR tester mean that the documentation at https://github.com/AdoptOpenJDK/ci-jenkins-pipelines/blob/master/pipelines/build/prTester/README.md is now out of date. @gdams, can you look into updating it so it no longer contradicts your changes?
Currently, the builds will initially clone the repository and update it on future builds. Though this is more time efficient, it can cause issues ( ref: adoptium/infrastructure#1169 , adoptium/temurin-build#1236 ).
Removing the repos after a build has finished (if it succeeds) is also more space-efficient for the machines and would reduce the likelihood of our build machines running out of disk space.
Description of the issue
We currently have a GitHub checks job that will place a comment in any new PR introduced into this repository (link). The content of that comment is rather limited and basic right now, and could use expanding to include useful contributor information. As an example, we could include a link to the Contributing guide and, for more specialized cases, a link to our Custom Script guide that contains details on how to add new features to the scripts on the Jenkins side of things. The possibilities are endless, so be sure to have a good rummage through the repo for linking opportunities to add to the comment bot.
What are you trying to do?
Launch multiple test jobs at the end of a build.
Expected behaviour:
These jobs should be launched in a threadsafe manner, enabling them to load the libraries they need, and for the resultant jobs to find a suitable checks publisher.
Observed behaviour:
Once we enter the multi-threaded section of openjdk_build_pipeline.groovy, we become unable to load new libraries (such as openjdk-jenkins-helper), and the test jobs it generates sometimes show this non-fatal error:
[Checks API] No suitable checks publisher found.
It's possible that both of these issues are caused by the groovy pipeline scripts being unable to generate new jobs in a threadsafe way. This issue is to cover centralized discussion over whether this is causing the problem, and ways in which it can be resolved.
There is a build parameter NODE_LABEL which I believe is no longer used, as it seems to have been replaced by the equivalent one in the JSON BUILD_CONFIGURATION. From my testing, it appears we can just clean this up, as it isn't used. I'm happy to do the PR if there are no concerns, but I don't have the rights to reconfigure the jobs on the Adopt CI.
This would make the node timeout logic available across the whole build and even across downstream jobs that instantiate the jenkins helper (could be useful for adoptium/temurin-build#1773).
See https://github.com/AdoptOpenJDK/openjdk-jenkins-helper/pull/39/files for a previous addition to the JobHelper lib.
In openjdk_build_pipeline.groovy/runTests, we look for pre-created test job names, run the tests if we find them, and print a warning if not. This check could now be changed so that, if a job is not found, it is generated before proceeding to execute the test job.
Some options for approach are:
The expected pattern is that, at the time a new JDK version begins to be built, a bunch of test jobs get created on the first attempt to run testing. If we take option 1, it does not serve to refresh the test jobs; we do not regenerate them each time we run them (which is perfectly fine, as we can trigger Test_Job_Auto_Gen directly for that purpose).
We had previously kept this auto-generation of test jobs a manual process, partly to understand what new test jobs were being requested. Hopefully, adding this auto-generation will not further facilitate a lack of communication and awareness between the build/test components; still, now is a good time to introduce it.
This issue is related to other enhancements:
Related/Similar #50
The function for signing builds has some hardcoded values that could be parameterized to make the scripts more portable:
https://github.com/AdoptOpenJDK/openjdk-build/blob/282058709909733381464dcb67b7f0d13dca4e13/pipelines/build/common/openjdk_build_pipeline.groovy#L237
def nodeFilter = "${buildConfig.TARGET_OS}"
https://github.com/AdoptOpenJDK/openjdk-build/blob/282058709909733381464dcb67b7f0d13dca4e13/pipelines/build/common/openjdk_build_pipeline.groovy#L242
nodeFilter = "${nodeFilter}&&build&&win2012"
https://github.com/AdoptOpenJDK/openjdk-build/blob/282058709909733381464dcb67b7f0d13dca4e13/pipelines/build/common/openjdk_build_pipeline.groovy#L248
nodeFilter = "${nodeFilter}&&macos10.14"
https://github.com/AdoptOpenJDK/openjdk-build/blob/282058709909733381464dcb67b7f0d13dca4e13/pipelines/build/common/openjdk_build_pipeline.groovy#L241
certificate = "C:\\openjdk\\windows.p12"
https://github.com/AdoptOpenJDK/openjdk-build/blob/282058709909733381464dcb67b7f0d13dca4e13/pipelines/build/common/openjdk_build_pipeline.groovy#L246
certificate = "\"Developer ID Application: London Jamocha Community CIC\""
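A sketch of what parameterizing those values could look like (hypothetical Java; the real function is Groovy, and all override keys here are invented for illustration):

```java
import java.util.Map;

public class SignConfig {
    /**
     * Resolve the sign stage's node filter and certificate from an
     * overridable configuration map, falling back to the values that are
     * hardcoded today. Returns { nodeFilter, certificate }.
     */
    static String[] signSettings(String targetOs, Map<String, String> overrides) {
        String nodeFilter = targetOs;
        String certificate = null;
        if (targetOs.equals("windows")) {
            nodeFilter += "&&" + overrides.getOrDefault("windowsSignNode", "build&&win2012");
            certificate = overrides.getOrDefault("windowsCert", "C:\\openjdk\\windows.p12");
        } else if (targetOs.equals("mac")) {
            nodeFilter += "&&" + overrides.getOrDefault("macSignNode", "macos10.14");
            certificate = overrides.getOrDefault("macCert",
                    "Developer ID Application: London Jamocha Community CIC");
        }
        return new String[] { nodeFilter, certificate };
    }
}
```

With an empty override map this reproduces today's behaviour; third parties could then point the sign stage at their own node labels and certificates without patching the script.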
There is currently no way to tell a pipeline which node to run tests on.
There are tests which fail only on certain machines. Having this capability would provide the means to check whether a particular machine is actually capable of running any test which might land on it (e.g. following the addition of a new machine or reconfiguration of an existing one), or to find exactly which tests are affected by an incorrect configuration.
It might also be useful to be able to specify which machine to run the build part of the pipeline on.
Given we have some large test targets that take a long time to complete (in particular extended.openjdk), AND we now have a good number of test machines for most platforms, it is time to shift to running tests with PARALLEL=Dynamic.
Switching to this approach will help break down the very large serially-run test targets into smaller sets, which will mean fewer test jobs that get aborted at the default timeout of 10 hours (which is happening frequently for extended.openjdk jobs on some platforms).
We already use this parallel mechanism on other Jenkins servers and for running in Azure DevOps, so it's well-exercised. A caveat for use on our Jenkins server may be that we need to watch that the number of test artifacts being sent back to the Jenkins master can be handled appropriately (though each artifact itself will be smaller). We do not anticipate issues, but it is something to be aware of.
Currently, Docker builds are triggered via a schedule separate from the main build pipelines. This issue is to discuss whether we should initiate those builds from the main build pipelines.
Related: adoptium/aqa-tests#773 - we currently trigger Docker image testing off of the main build pipelines, which is incorrect, as there may not be a new Docker image to test, if the Docker build was not run.
#1
All of our machines should be configured the same, so the mingw-cygwin label locks it onto the ibmcloud systems unnecessarily (in theory). We should verify whether the openj9 builds will run elsewhere (I think @Haroon-Khel has probably already verified the alibaba Windows boxes).
The current layout was suited to openjdk-build and is confusing even for me to wrap my head around. This class needs adapting to its new home.
Identified in #56
Looks like the issue is similar to #57 (comment)
Started by upstream project "build-scripts-pr-tester/build-test/openjdk8-pipeline" build number 752
originally caused by:
Started by upstream project "build-scripts-pr-tester/openjdk-build-pr-tester" build number 1456
originally caused by:
GitHub pull request #56 of commit 689a157eeda72f49506f866333e9ca39fe677209, no merge conflicts.
hudson.plugins.git.GitException: Command "git fetch --tags --force --progress --prune -- origin +refs/heads/689a157eeda72f49506f866333e9ca39fe677209:refs/remotes/origin/689a157eeda72f49506f866333e9ca39fe677209" returned status code 128:
stdout:
stderr: fatal: couldn't find remote ref refs/heads/689a157eeda72f49506f866333e9ca39fe677209
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2450)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:2051)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$500(CliGitAPIImpl.java:84)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:573)
at jenkins.plugins.git.GitSCMFileSystem$BuilderImpl.build(GitSCMFileSystem.java:365)
at jenkins.scm.api.SCMFileSystem.of(SCMFileSystem.java:197)
at jenkins.scm.api.SCMFileSystem.of(SCMFileSystem.java:173)
at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:115)
at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:69)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:309)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Finished: FAILURE
FYI @AdamBrousseau
The Build > Execute shell code blocks of all the code-tools jobs in Jenkins should be moved to a git repo, just like openjdk-build. This puts the config in a version-controlled system and also opens it up to end users to build in their native environments.
This includes the below jobs under the Dependencies tab:
Prerequisites:
Splitting this out from the Slack thread on the subject.
I've been doing some analysis of the performance of the overnight builds, and there are some things causing notable slowdowns that we should consider removing from the daily runs.
One of the biggest issues is with platforms that do not have a JIT. Now that we have enabled the JIT on the AdoptOpenJDK arm32 JDK8 builds, the only ones built without an active JIT are OpenJ9 RISC-V (still in development, not a full release platform yet) and JDK8 on Linux/s390x for HotSpot. There are some others that are problematic too; here is a list of ones we should consider modifying or removing:
I have run some new JDK8/s390x pipelines today: the OpenJ9 XL variant completed in about 2h31, and non-XL OpenJ9 took 4h30, but the difference from XL is likely just how it got scheduled around HotSpot, as it has completed in about 2h31 in the past. I expect HotSpot will take about 10 hours including the build time and the sanity.perf run. (Note that sanity.system takes over 4 hours for HotSpot, so that will potentially be the limiting factor if we remove sanity.perf.)
Features that impact the whole project (e.g. adding a new OpenJDK distribution) are made over at the TSC. Otherwise, please describe what enhancement you would like to see in the build scripts:
We use triggers {} for our pipeline template, which is now deprecated:
Processing DSL script pipelines/jobs/pipeline_job_template.groovy
Warning: (pipeline_job_template.groovy, line 43) triggers is deprecated
We need to replace triggers with an alternative triggering system (pipelineTriggers, maybe?).