centos / container-pipeline-service

Code, infrastructure, and deployment backend for the CentOS Container Pipeline, the build system behind registry.centos.org

Home Page: https://registry.centos.org

License: GNU General Public License v3.0

Python 93.54% Shell 6.03% Dockerfile 0.36% Jinja 0.07%

container-pipeline-service's Introduction

CentOS Community Container Pipeline


CentOS Community Container Pipeline (CCCP) is an open-source platform to containerize applications based on CentOS.

CCCP: Builds the application (from a git repository) → Packages it with the appropriate runtime → Scans the image → Pushes the image to a public registry


Use Case

I have a certain stack I develop with (be it Django, Golang, NodeJS, Redis, RabbitMQ, etc.) using CentOS as my base platform.

How do I package that application into a container image that is rebuilt automatically every time I push changes? And how do I automate security scanning and updates along the way?

That's where CCCP comes in.

CCCP will:

  • Scan the image for updates, fixes, and capabilities, and push it to a public registry (by default, https://registry.centos.org)
  • Automatically rebuild the image when a change is detected, such as an update to the base image (FROM in the Dockerfile) or a git push to the project's repository
  • Send notifications / alerts about build status and scan results (by e-mail)

How do I host my application?

Similar to projects such as Homebrew, it's as easy as opening a pull request.

A developer wishing to host their container image will open up a pull request to the CentOS Container Index.

Once the pull request is merged, CCCP:

  1. Lints the Dockerfile
  2. Builds the image
  3. Scans / analyzes it
  4. Pushes to registry.centos.org
  5. Notifies the developer (email)

Once a project is in the CentOS Container Index, the CentOS Container Pipeline Service automatically tracks the project's Git repository and branch, and rebuilds the image every time there is a change.

How everything works

  1. Project onboarding / the main "index"

    First, the pipeline points to an index. For the CentOS community (and in our examples), this is the CentOS Container Index.

  2. Jenkins and OpenShift tracking

    Jenkins is used to track each application's Git repository and branch for changes, and it triggers a new build on OpenShift when a change is pushed.

    A change to the application's repository, an update to the base image, or an update to any RPM that is part of the image will trigger a new build.

  3. Building the image

    The container image is built by OpenShift.

  4. Scan and analyze the image

    The image is scanned by running scripts inside it that check for (see the sketch after this list):

    • yum updates
    • updates for packages installed via pip, npm and gem
    • capabilities of the container created from the resulting image, determined by analyzing the RUN label in the Dockerfile
    • verification of the installed RPMs
  5. Push to the public registry (https://registry.centos.org)

    Finally, the image is pushed to https://registry.centos.org

  6. Notification

    An email is sent to the developer with the status of the build and scan process, as well as a link to the detailed logs.
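A minimal sketch of what the scan step (item 4 above) amounts to, assuming the image is already available locally and plain docker is usable from the scan host. The real pipeline drives its checks through the atomic scanners; the image name and commands below are illustrative only.

import json
import subprocess


def run_in_image(image, command):
    # Run a command inside a throwaway container and capture its output.
    proc = subprocess.run(["docker", "run", "--rm", image] + command,
                          capture_output=True, text=True)
    return proc.returncode, proc.stdout


def scan_image(image):
    results = {}
    # `yum check-update` exits 100 when updates are available, 0 when none.
    rc, _ = run_in_image(image, ["yum", "-q", "check-update"])
    results["yum_updates_available"] = (rc == 100)
    # `rpm -Va` prints a line per modified file; empty output means clean.
    _, out = run_in_image(image, ["rpm", "-Va"])
    results["rpm_verify_clean"] = (out.strip() == "")
    return results


if __name__ == "__main__":
    print(json.dumps(scan_image("registry.centos.org/centos/centos:7"), indent=2))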

Architecture Diagram

Coming soon!

Deploy your own pipeline

The service recently underwent an architecture change and is now deployed entirely on top of OpenShift. We have documented the steps in the docs directory. We recommend following the docs and opening issues if something doesn't work.

Contribute to the CentOS Community Container Pipeline Service

We're always looking for ideas and improvements for the service! If you're interested in contributing to this repository, follow these simple steps:

  • open an issue on GitHub describing the feature/bug
  • fork the repository
  • work on a branch that fixes the issue
  • raise a pull request

Before a PR is merged, it must:

  • pass the CI done on CentOS CI
  • be code reviewed by the maintainers
  • have maintainers' LGTM (Looks Good To Me)

Community

Chat (Mattermost): Our preferred way to reach the main developers is through Mattermost at chat.openshift.io.

IRC: If you prefer IRC, we can be reached at #centos-devel on Libera.chat.

Email: You can always e-mail us at [email protected]

container-pipeline-service's People

Contributors

arrfab, bamachrn, bstinsonmhk, cdrage, dharmit, kbsingh, mohammedzee1000, mscherer, navidshaikh, rtnpro


container-pipeline-service's Issues

Have a trigger for Dockerfile linter

Currently, we have code in place for the Dockerfile linter. However, a systemd service needs to be created from it, and a trigger needs to be set up for it as well.

node.kubeconfig gets corrupted during simultaneous access by the build and delivery workers

To log into OpenShift from a worker, we use ca.crt and node.kubeconfig from the local file system. Whenever the build and delivery workers access node.kubeconfig simultaneously, these files get corrupted.
The reason is that the oc client reads and writes node.kubeconfig on each login to OpenShift.
To solve this, we need to isolate the workers from each other by giving each its own environment; see the sketch below.
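A minimal sketch of the proposed isolation, assuming hypothetical per-worker directories; the shared kubeconfig path and directory layout below are illustrative, not the service's actual file layout.

import os
import shutil

SHARED_KUBECONFIG = "/etc/origin/node/node.kubeconfig"  # shared source, treated as read-only


def private_kubeconfig(worker_name):
    # Copy the shared kubeconfig into a per-worker directory and return its path,
    # so the build and delivery workers never write to the same file.
    worker_dir = os.path.join("/var/lib/cccp", worker_name)
    os.makedirs(worker_dir, exist_ok=True)
    private_copy = os.path.join(worker_dir, "node.kubeconfig")
    shutil.copy2(SHARED_KUBECONFIG, private_copy)
    return private_copy


# Each worker would then point oc at its own copy, e.g. by exporting
# KUBECONFIG=private_kubeconfig("build-worker") in its environment.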

Jenkins replaces jobs with same appid and jobid

When there are entries for multiple tags with the same appid and jobid, Jenkins Job Builder replaces them all with its last entry. As a result, no job gets triggered for the earlier entries. We need to make the job names unique by combining appid, jobid, and the entry id or tag.

Consolidate / validate the atomic scanner results

As part of the pipeline, one of the steps is to scan the built container image using the atomic scanner(s) we have. The results generated by the atomic scanners need to be consolidated and presented.

If an atomic scanner provides a scan score, then based on that score / validation we may need to notify the author about issues.

The aim of this issue is to answer the following questions:

  • What do we do with the results of the atomic scanners?
  • Decide whether some scanner results should block the pipeline's progress for the image under process.
  • How do we present the results of atomic scanners?

Provisioning stalls and errors out while getting the cccp service code onto the OpenShift node

Issue

Provisioning stalls and errors out (after a long time) when pushing the source code to the OpenShift node.

fatal: [dev-32-116.lon1.centos.org]: FAILED! => {"failed": true, "msg": "the file_name '/root/container-pipeline-service/provisions/roles/openshift/../../../../.ansible/cp/ansible-ssh-dev-32-116.lon1.centos.org-22-root' does not exist, or is not readable"}

Line no: 18 in file -> provisions/roles/openshift/tasks/setup.yml

Name of scanner container images and their namespace

I am creating this issue to discuss what the naming convention for the atomic scanners should be and which namespace should be used to group them.

With the scanning and introspection efforts to learn more about containers, we are going to have a set of atomic scanner container images, so let's discuss and devise a naming convention for the atomic scanners and their namespace.

Proposal 1:
For scanner container images, proposing scanner-<scanner-name>.

Example:
For a scanner doing rpm verify test in given container image, the name of the container image would be
scanner-rpm-verify.

For the namespace that buckets all the individual scanners, proposing atomic-scanners as the registry namespace hosting all the atomic scanners.

Proposal 2:
Namespace: atomic-scanners
name of the scanner container image: <scanner-image>

Example:
atomic-scanners/rpm-verify

Deciding on a namespace and naming convention for the atomic scanners means the scanner images will have coherent names across different public / private registries.

Example:
registry.centos.org/atomic-scanners/rpm-verify
docker.io/atomic-scanners/rpm-verify

Disable scanner for CI environment

If we're running in a CI environment, the scanner worker should not perform the entire scan of the image under test. Instead, it should relay the data it receives to master_tube without any modification; a sketch follows.
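A minimal sketch of the proposed short-circuit, assuming a hypothetical CCCP_CI environment variable as the CI switch; the tube name master_tube comes from the issue text, the rest of the wiring is illustrative.

import json
import os

import beanstalkc


def run_scanners(data):
    # Placeholder for the normal scan path; not the focus of this sketch.
    pass


def handle_job(bs, job):
    data = json.loads(job.body)
    if os.environ.get("CCCP_CI") == "1":
        # In CI, skip the actual scan and relay the payload unchanged.
        bs.use("master_tube")
        bs.put(json.dumps(data))
    else:
        run_scanners(data)
    job.delete()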

Builds that get stuck stay in that state for an extended time, blocking other jobs

In my experience with the rabbitmq containers, some builds that get stuck (for a number of reasons) remain in that state for a long time (2 days for the rabbitmq container). We should have a timeout followed by a restart of the job a certain number of times to ensure this does not happen, and mark the job as failed once the retries run out; see the sketch below.
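A minimal sketch of the proposed timeout-and-retry behaviour; the two-hour timeout and three retries are example values, not the service's settings.

import subprocess

BUILD_TIMEOUT = 2 * 60 * 60   # seconds; example value
MAX_RETRIES = 3               # example value


def run_build_with_retries(command):
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            subprocess.run(command, timeout=BUILD_TIMEOUT, check=True)
            return "success"
        except subprocess.TimeoutExpired:
            print("attempt %d timed out, restarting job" % attempt)
        except subprocess.CalledProcessError:
            print("attempt %d failed, restarting job" % attempt)
    # Retries ran out: mark the job as failed instead of letting it linger.
    return "failure"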

Don't make multiple docker images for worker

Issue

Currently, we build a separate docker image for each worker script. This adds latency to the build/provisioning process.

Solution

We can create a single image containing all the worker scripts and then run that image with worker-specific commands.

Logs UI

Get a public logs setup on a hosted machine (more details to be added).

Image not tagged correctly when there is more than one value in the depends-on list

I just noticed that images with more than one item in their depends-on list are not getting tagged correctly.

For example, consider https://github.com/CentOS/container-index/blob/master/index.d/openshift.yml#L140, which is for the openshift/origin-deployer image. Its depends-on contains two items, openshift/origin-base and openshift/origin. After the build, the image gets tagged as openshift/origin-deployer:openshift/origin.

It's not only this image; all such images seem to use the last value in the list as the tag instead of desired-tag.

Non DRY workers

Currently, there's a lot of code repetition in the worker scripts; they are the opposite of DRY (don't repeat yourself). This makes it difficult to make changes without duplicating work. It also seems that the project uses Python merely as a scripting language, a bash replacement, rather than as a programming language. This needs to be fixed.

Just my 2 cents.

Reserve duffy nodes for debugging on CI job failure

Issue

Currently, we clean up the duffy nodes on ci.centos.org on CI job success/failure. That's fine when the jobs are successful, but on job failure, we should be able to log into the duffy nodes and debug things.

Solution

On job failure, hit duffy's fail API endpoint, and print the hostnames of the nodes in the console output. This will automatically reserve the duffy nodes for 6 hours for debugging.

Rename worker_start_test to worker_start_scan

The worker for the scanner service (under the beanstalk_worker directory) is currently named worker_start_test.py, which is misleading given that we're going to have a separate worker that actually performs tests. We should rename it to worker_start_scan.py.

Also, references to worker_start_test.py need to be changed to worker_start_scan.py.

Error while placing build, test, delivery_scripts in proper place

  • In some cases, the user's Dockerfile changes its user to a non-superuser. Parsing of cccp.yml then raises an error, as PyYAML cannot be installed in the container. As a result, the expected build_script is not placed properly and shows up as not available.
  • Also, we are changing the Dockerfile to add cccp.yml and cccp_reader in the proper place for parsing.

One email notification per build

Issue to track the feature of one notification email per build

This is about the user experience of the notifications received by the user / author of a build at registry.centos.org.

We currently send multiple emails to the user per build:

  • Linter results
  • Build status (SUCCESS / FAILURE)
  • Scanning results
  • Weekly scan results

Linter results are sent to the user as soon as they are generated, i.e. before the project starts building. Once the build is complete, a build status email is sent, and later, after the scan is complete, we send the scan results.

Instead, we can send the user a single email with:

  • Build status
  • Linter results
  • Scanning results

... the email being sent out after the linter, build, and scan processes complete. The email will consist of:

  • Build status
  • Summary of and link to the hosted linter results
  • Per-scanner summary and link to the hosted scanning results

The weekly scan email for build will be triggered as scheduled (once per week).

--
Changes:
The logs directory will have (additional) files set as:

  • a plain text file exported as build_status (it will contain SUCCESS or FAILURE)
  • Per scanner JSON file
  • Linter results
  • Build logs

vagrant allinone setup needs IP address tweaks

While trying to set up the dev environment using the Vagrantfile with the virtualbox provider, it fails with an unreachable 192.168.100.100 error.

vagrant version: 1.8.5
VirtualBox version: 4.3.38

Installed vagrant plugins:

vagrant plugin list
vagrant-libvirt (0.0.35)
vagrant-service-manager (1.3.3)
vagrant-share (1.1.5, system)
vagrant-sshfs (1.2.0)

vagrant up Logs :

➜  container-pipeline-service git:(fix-48) vagrant status
Current machine states:

master                    not created (virtualbox)

The environment has not yet been created. Run `vagrant up` to
create the environment. If a machine is not created, only the
default provider will be shown. So if a provider is not listed,
then the machine is not created for that environment.
➜  container-pipeline-service git:(fix-48) export ALLINONE=1
➜  container-pipeline-service git:(fix-48) vagrant up    
Bringing machine 'master' up with 'virtualbox' provider...
==> master: Importing base box 'centos/7'...
==> master: Matching MAC address for NAT networking...
==> master: Checking if box 'centos/7' is up to date...
==> master: Setting the name of the VM: container-pipeline-service_master_1473167839517_65950
==> master: Clearing any previously set network interfaces...
==> master: Preparing network interfaces based on configuration...
    master: Adapter 1: nat
    master: Adapter 2: hostonly
==> master: Forwarding ports...
    master: 8443 (guest) => 8443 (host) (adapter 1)
    master: 22 (guest) => 2222 (host) (adapter 1)
==> master: Running 'pre-boot' VM customizations...
==> master: Booting VM...
==> master: Waiting for machine to boot. This may take a few minutes...
    master: SSH address: 127.0.0.1:2222
    master: SSH username: vagrant
    master: SSH auth method: private key
    master: Warning: Remote connection disconnect. Retrying...
    master: Warning: Remote connection disconnect. Retrying...
    master: Warning: Remote connection disconnect. Retrying...
==> master: Machine booted and ready!
==> master: Checking for guest additions in VM...
    master: No guest additions were detected on the base box for this VM! Guest
    master: additions are required for forwarded ports, shared folders, host only
    master: networking, and more. If SSH fails on this machine, please install
    master: the guest additions and repackage the box to continue.
    master: 
    master: This is not an error message; everything may continue to work properly,
    master: in which case you may ignore this message.
==> master: Setting hostname...
==> master: Configuring and enabling network interfaces...
==> master: Rsyncing folder: /home/nshaikh/work/src/container-pipeline-service/ => /vagrant
==> master: Rsyncing folder: /home/nshaikh/work/src/container-pipeline-service/ => /opt/cccp-service
==> master: Rsyncing folder: /home/nshaikh/work/src/container-pipeline-service/ => /home/vagrant/cccp-service
==> master: Running provisioner: shell...
    master: Running: inline script
==> master: Running provisioner: ansible...
    master: Running ansible-playbook...
[DEPRECATION WARNING]: Instead of sudo/sudo_user, use become/become_user and 
make sure become_method is 'sudo' (default).
This feature will be removed in a 
future release. Deprecation warnings can be disabled by setting 
deprecation_warnings=False in ansible.cfg.

PLAY [Disable requiretty in sudoers] *******************************************

TASK [setup] *******************************************************************
fatal: [192.168.100.100]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh.", "unreachable": true}
    to retry, use: --limit @provisions/vagrant.retry

PLAY RECAP *********************************************************************
192.168.100.100            : ok=0    changed=0    unreachable=1    failed=0   

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

Fields of job information sent over tube

We parse the YAML file and prepare the data structure to be sent over the beanstalk tube.
Here: https://github.com/CentOS/container-pipeline-service/blob/master/client/send_build_request.py#L9-L12
Excerpts:

build_details['name'] = sys.argv[1]
build_details['tag']  = sys.argv[2]
build_details['depends_on'] = sys.argv[3]
build_details['action'] = "start_build"

Then later, depending on the worker / task to be performed on the job, we add / update fields of information (fields of build_details).

This way, the job information is altered / updated at different stages.

IMO, we should populate the job information with all the info provided via the YAML file at once and then send it across the tube (see the sketch after this list).
This will make sure:

  • Every worker will have a single source for referencing and consuming the information
  • Other parts of the code / workers need not add details to the job information that are only needed for the next phase
  • If there is additional information that needs to be added in phase N of the flow and is needed in phase N+1, then N and N+1 can agree on that before implementation
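A minimal sketch of populating the full job payload from the YAML entry in one place and sending it once; the field names beyond the four shown in the excerpt above (and the tube name) are assumptions for illustration.

import json

import beanstalkc
import yaml


def build_job_details(yaml_path):
    # Read the project's YAML entry once and build the complete payload.
    with open(yaml_path) as f:
        entry = yaml.safe_load(f)
    return {
        "name": entry["app-id"],
        "tag": entry["job-id"],
        "depends_on": entry.get("depends-on"),
        "desired_tag": entry.get("desired-tag", "latest"),
        "notify_email": entry.get("notify-email"),
        "action": "start_build",
    }


def send_build_request(yaml_path, host="localhost"):
    bs = beanstalkc.Connection(host=host)
    bs.use("master_tube")
    bs.put(json.dumps(build_job_details(yaml_path)))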

Build worker not processing jobs

The build worker stops processing jobs after it processes one job. It hangs while reserving a job if there are no jobs in the tube; see the sketch below.
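A minimal sketch of a worker loop that does not block forever: beanstalkc's reserve() accepts a timeout and returns None when no job arrives in that window, so the worker can wake up and try again. The tube name and handler below are assumptions.

import beanstalkc


def process(body):
    # Placeholder for the actual build handling; not the focus of this sketch.
    print("processing job:", body)


bs = beanstalkc.Connection(host="localhost")
bs.watch("master_tube")

while True:
    job = bs.reserve(timeout=30)   # returns None after 30s instead of hanging
    if job is None:
        continue                   # nothing to do yet; reserve again
    try:
        process(job.body)
        job.delete()
    except Exception:
        job.bury()                 # keep the failed job around for inspection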

Test build on OpenShift fails in a non-prod environment

When we override TARGET_REGISTRY (the registry to push built containers to) in client/template.json, as in http://paste.fedoraproject.org/423836/14733348/, it still tries to push the built image to registry.centos.org. This can be seen from the logs for a test build.

==> Pulling tested image (172.30.69.93:5000/bamachrn-python/python:test)
2   Trying to pull repository 172.30.69.93:5000/bamachrn-python/python ... 
3   test: Pulling from 172.30.69.93:5000/bamachrn-python/python
4   3d8673bd162a: Already exists
5   97b032b57f03: Already exists
6   e2c5f4274b80: Already exists
7   885c218d9d80: Already exists
8   440fa7d2411b: Already exists
9   a7ed6fc727df: Already exists
10  cf497160153e: Already exists
11  6f5c02f96fa9: Already exists
12  d161327122ee: Already exists
13  9846ac8dc07d: Already exists
14  30fedacb3f53: Already exists
15  b875eb329f00: Already exists
16  f485dc67b8aa: Already exists
17  Digest: sha256:09927b8298f721688a65f3edf0b48d5f9d57f5ac994f367b087f4478eabfe9f4
18  Status: Downloaded newer image for 172.30.69.93:5000/bamachrn-python/python:test
19  ==> Checking if test script is getting success
20  exec: "--entrypoint": executable file not found in $PATH
21  docker: Error response from daemon: Container command not found or does not exist..
22  ==> Re-tagging tested image (172.30.69.93:5000/bamachrn-python/python:test -> registry.centos.org:5000/bamachrn/python:20160908065609)
23  ==> Pushing the image to registry (registry.centos.org:5000/bamachrn/python:20160908065609)
24  The push refers to a repository [registry.centos.org:5000/bamachrn/python]
25  unable to ping registry endpoint https://registry.centos.org:5000/v0/
26  v2 ping attempt failed with error: Get https://registry.centos.org:5000/v2/: http: server gave HTTP response to HTTPS client
27   v1 ping attempt failed with error: Get https://registry.centos.org:5000/v1/_ping: http: server gave HTTP response to HTTPS client
28  ==> Sending mail of failed status to [email protected]
29  ERROR:root:Failed to load PyYAML, will not parse YAML
30  Getting image details from test phase
31  Pushing notification details to master_tube
32  Traceback (most recent call last):
33    File "/tube_request/send_failed_notify_request.py", line 19, in <module>
34      bs = beanstalkc.Connection(host=beanstalk_host)
35    File "/tube_request/beanstalkc.py", line 62, in __init__
36      self.connect()
37    File "/tube_request/beanstalkc.py", line 74, in connect
38      SocketError.wrap(self._socket.connect, (self.host, self.port))
39    File "/tube_request/beanstalkc.py", line 46, in wrap
40      raise SocketError(err)
41  beanstalkc.SocketError: [Errno -2] Name or service not known

This issue can be reproduced in the allinone vagrant setup.

Build script does not work

Bug:

+ cd /opt/cccp-service/client/
+ ./build-project.sh bamachrn python https://github.com/bamachrn/cccp-python master / Dockerfile.test [email protected] None
==> login to Openshift server
Login successful.

Using project "default".
==>creating new project or using existing project with same name
error: No configuration file found, please login or point to an existing file:

  1. Via the command-line flag --config
  2. Via the KUBECONFIG environment variable
  3. In your home directory as ~/.kube/config

To view or setup config directly use the 'config' command.
error: No configuration file found, please login or point to an existing file:

  1. Via the command-line flag --config
  2. Via the KUBECONFIG environment variable
  3. In your home directory as ~/.kube/config

To view or setup config directly use the 'config' command.
See 'oc project -h' for help and examples.
==> Uploading template to OpenShift
error: No configuration file found, please login or point to an existing file:

  1. Via the command-line flag --config
  2. Via the KUBECONFIG environment variable
  3. In your home directory as ~/.kube/config

To view or setup config directly use the 'config' command.
error: you must provide one or more resources by argument or filename (.json|.yaml|.yml|stdin)
error: No configuration file found, please login or point to an existing file:

  1. Via the command-line flag --config
  2. Via the KUBECONFIG environment variable
  3. In your home directory as ~/.kube/config

To view or setup config directly use the 'config' command.
error: you must provide one or more resources by argument or filename (.json|.yaml|.yml|stdin)
error: No configuration file found, please login or point to an existing file:

  1. Via the command-line flag --config
  2. Via the KUBECONFIG environment variable
  3. In your home directory as ~/.kube/config

To view or setup config directly use the 'config' command.
error: you must provide one or more resources by argument or filename (.json|.yaml|.yml|stdin)
unable to connect to a server to handle "templates": No configuration file found, please login or point to an existing file:

  1. Via the command-line flag --config
  2. Via the KUBECONFIG environment variable
  3. In your home directory as ~/.kube/config

To view or setup config directly use the 'config' command.
unable to connect to a server to handle "templates": No configuration file found, please login or point to an existing file:

  1. Via the command-line flag --config
  2. Via the KUBECONFIG environment variable
  3. In your home directory as ~/.kube/config

To view or setup config directly use the 'config' command.
error: No configuration file found, please login or point to an existing file:

  1. Via the command-line flag --config
  2. Via the KUBECONFIG environment variable
  3. In your home directory as ~/.kube/config

To view or setup config directly use the 'config' command.
error: no objects passed to create
==> Send build configs to build tube
Getting build details from jenkins
Pushing bild details in the tube
build details is pushed to master tube
==> Restoring the default template
Finished: SUCCESS

Proposed fix

Explicitly specify the custom oc config (node.kubeconfig) in every oc command.

Flake8 cleanup and typo fixes for workers

There are a bunch of linting errors and typos in the workers. These need to be fixed ASAP, before making further changes. As an open-source project, we need to agree on a coding style guide and best practices so that everyone writes clean, readable code.

build_project.sh fails to run project template in Openshift

Issue

client/build_project.sh uses the oc process command with comma-separated key-value pairs for --value, and AFAIK this is no longer supported in oc from v1.4 onwards.

warning: --value no longer accepts comma-separated lists of values. "SOURCE_REPOSITORY_URL=https://github.com/bamachrn/cccp-python,REPO_BRANCH=master,APPID=bamachrn,JOBID=python,REPO_BUILD_PATH=/demo,TARGET_FILE=Dockerfile.demo,NOTIFY_EMA
[email protected],TEST_TAG=20170214002433,DESIRED_TAG=release" will be treated as a single key-value pair.

Fix

Update client/build_project.sh to specify each key-value pair separately with the -v option.

Putting data to beanstalk tube fails with JOB_TOO_BIG error

When the scanner tries to put data onto the beanstalkd tube, it fails with a JOB_TOO_BIG error (see the sketch after the log below):

Jan 02 01:48:31 cccp-atomic-scan.c2bg.rdu2.centos.org worker_start_scan.py[16901]: 2017-01-02 01:48:31,244 - container-pipeline - INFO - Result file: /var/lib/atomic/pipeline-scanner/2017-01-02-01-48-18-568129/_sha256:e265750f2f5ff3ee61b929cddb37a2a74ba5127a6732e9138b87bf3536ed41c2/image_scan_results.json
Jan 02 01:48:31 cccp-atomic-scan.c2bg.rdu2.centos.org worker_start_scan.py[16901]: 2017-01-02 01:48:31,246 - container-pipeline - INFO - Unmounting image's rootfs from /sha256:e265750f2f5ff3ee61b929cddb37a2a74ba5127a6732e9138b87bf3536ed41c2
Jan 02 01:48:33 cccp-atomic-scan.c2bg.rdu2.centos.org worker_start_scan.py[16901]: 2017-01-02 01:48:33,005 - container-pipeline - INFO - Finished executing scanner: pipeline-scanner
Jan 02 01:48:33 cccp-atomic-scan.c2bg.rdu2.centos.org worker_start_scan.py[16901]: 2017-01-02 01:48:33,006 - container-pipeline - INFO - Finished running registry.centos.org/pipeline-images/pipeline-scanner scanner.
Jan 02 01:48:33 cccp-atomic-scan.c2bg.rdu2.centos.org worker_start_scan.py[16901]: 2017-01-02 01:48:33,007 - container-pipeline - INFO - Removing the image: c2bf.rdu2.centos.org:5000/sgk/mattermost-gitlab-integration:20161230071406
Jan 02 01:48:33 cccp-atomic-scan.c2bg.rdu2.centos.org worker_start_scan.py[16901]: 2017-01-02 01:48:33,950 - container-pipeline - INFO - Finished executing all scanners.
Jan 02 01:48:33 cccp-atomic-scan.c2bg.rdu2.centos.org worker_start_scan.py[16901]: 2017-01-02 01:48:33,970 - container-pipeline - CRITICAL - ('put', 'JOB_TOO_BIG', [])
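beanstalkd rejects bodies larger than its max-job-size (65535 bytes by default, raised with the -z option). A minimal sketch of guarding put(): oversized scan results are written to disk and only a reference is queued. The results directory is an assumption.

import json
import os
import uuid

MAX_JOB_SIZE = 65535                        # beanstalkd's default max-job-size
RESULTS_DIR = "/var/lib/cccp/scan-results"  # illustrative location


def put_safely(bs, payload):
    body = json.dumps(payload)
    if len(body.encode("utf-8")) <= MAX_JOB_SIZE:
        bs.put(body)
        return
    # Too big for the tube: persist the payload and queue a pointer to it.
    os.makedirs(RESULTS_DIR, exist_ok=True)
    path = os.path.join(RESULTS_DIR, "%s.json" % uuid.uuid4())
    with open(path, "w") as f:
        f.write(body)
    bs.put(json.dumps({"action": payload.get("action"), "results_file": path}))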

Running /opt/cccp-service/client/build-project.sh does not update env vars for an already existing project and its builds

Steps to reproduce

  • Run /opt/cccp-service/client/build-project.sh for a project
  • Update env vars, say, TARGET_REPOSITORY in /opt/cccp-service/client/template.json
  • Run /opt/cccp-service/client/build-project.sh for the project again
  • Check the env variables for that project's builds to see that they have not been updated

Workaround

Run the build script after deleting the project on OpenShift, and you'll see the env vars updated.

Docker API version mismatch when building containers

Issue

The Docker API version of the docker client in the build container does not match that of the Docker version installed on the OpenShift node.

Reason

The ansible provisioning scripts only ensured that the docker packages were installed on the nodes, without trying to update them. However, when building the build, delivery, etc. containers, the latest docker package was always being pulled.

Fix

Change the state param in the yum ansible module from installed to latest when installing docker.

Jenkins jobs for projects not able to trigger build in Openshift in CI

The build script for bamachrn-python project generated in CI looks like:

cd /opt/cccp-service/client
./build-project.sh bamachrn python https://github.com/bamachrn/cccp-python master / Dockerfile.test [email protected] None

The output of the above command is:

[root@n16 client]# ./build-project.sh bamachrn python https://github.com/bamachrn/cccp-python master / Dockerfile.test [email protected] None
build-project.sh NAME REPO_URL
   NAME      Name of the project/namespace
   TAG       Name of the resulting image (image will be named NAME/TAG:latest)
   REPO_URL  URL of project repository containing Dockerfile
   REPO_BRANCH Branch of the repo to be built
   REPO_BUILD_PATH  Relative path to the Dockerfile in the repository
   TARGET_FILE  Name for the dockerfile to be built
   NOTIFY_EMAIL  Email ID to be notified after successful build
   DEPENDS_ON  Dependency list for the current image
   DESIRED_TAG  Tag for the final output image like latest

It seems that too few arguments are supplied to the build script. On closer inspection, I found that the build script currently expects 9 args, whereas in
https://github.com/CentOS/container-pipeline-service/blob/master/jenkinsbuilder/project-defaults.yml#L16
only 8 args are generated.

Updating job.yml does not reflect changes in ci jobs

I recently updated the index CI job in job.yml, based on the requirements for dependency validation.

https://github.com/CentOS/container-pipeline-service/pull/118/files#diff-091765ca1f913cb1fc03c4831eb431bbR169

In spite of this update, the actual job in CI has not been updated; it still uses the old configuration while already using the updated script.

This causes the index CI to fail, as the required installation of python-networkx has not been done.

Error while overriding --entrypoint when running the build, test, and delivery scripts

  • For a few containers, it is not possible to override --entrypoint when running build_script. Whenever the container is started, docker starts some process (other than the entrypoint) from the container itself. As a result, the build goes on hold and fails to proceed to the test stage.
  • The same thing happens with test_script and delivery_script for these containers.

Invalid image name for Pipeline Scanner

The Pipeline Scanner configuration file points to an invalid image name, which causes atomic scan to fail with:

$ sudo atomic scan --rootfs /mnt docker.io/centos:7
docker run -it --rm -v /etc/localtime:/etc/localtime -v /run/atomic/2016-07-27-11-28-43-532637:/scanin -v /var/lib/atomic/pipeline-scanner/2016-07-27-11-28-43-532637:/scanout:rw,Z --security-opt label:disable -v /var/run/docker.sock:/var/run/docker.sock pipeline-scanner python scanner.py release
Unable to find image 'pipeline-scanner:latest' locally
Trying to pull repository docker.io/library/pipeline-scanner ... 
Pulling repository docker.io/library/pipeline-scanner
Error: image library/pipeline-scanner not found
docker: Error: image library/pipeline-scanner not found.
See 'docker run --help'.

Loggers are not verbose

Issue

Currently, the loggers are not verbose, which makes them nearly useless. It's very difficult to get meaningful debug info and tracebacks from the worker logs.

Fix

  • Update loggers to log with exc_info=True so that tracebacks show up for CRITICAL/FATAL logs (see the example below)
  • Log extra data (e.g., local variable values) via the extra keyword argument when there is an error
  • Format logs properly
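An example of what the proposed style looks like with the standard logging module: the traceback is attached via exc_info=True and contextual values are passed through extra so they show up in the formatted record. The names here are illustrative.

import logging

logging.basicConfig(
    format="%(asctime)s - %(name)s - %(levelname)s - job=%(job_id)s - %(message)s",
    level=logging.DEBUG,
)
logger = logging.getLogger("container-pipeline")


def push_image(image, job_id):
    try:
        raise RuntimeError("registry unreachable")   # stand-in for a real failure
    except Exception:
        # exc_info=True records the full traceback; extra fills job=%(job_id)s above.
        logger.critical("Failed to push %s", image, exc_info=True,
                        extra={"job_id": job_id})


push_image("registry.centos.org/foo/bar:latest", "42")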

Install PyYaml alongside beanstalkc

Issue

Currently, we need to parse the data from the beanstalk tube by hand. It seems that beanstalkc can load the values as a dict if it has access to the yaml module. This would ease parsing data from beanstalk (see the sketch below).

Fix

Install PyYaml in worker containers.
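With PyYAML available, beanstalkc parses YAML responses such as tube and job stats into plain dicts instead of returning raw strings (its Connection takes a parse_yaml flag). A small sketch of the difference:

import beanstalkc

bs = beanstalkc.Connection(host="localhost", parse_yaml=True)
stats = bs.stats_tube("master_tube")          # a dict when PyYAML is importable
print(stats["current-jobs-ready"], "jobs ready")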
