vmware-archive / pcf-pipelines

PCF Pipelines

License: Apache License 2.0


Introduction

End of Availability

PCF Platform Automation with Concourse (PCF Pipelines) has reached its End of Availability (“EoA”) and is no longer available.

Platform Automation for PCF has replaced PCF Pipelines. Platform Automation for PCF is available for download on Pivotal Network.

You can find documentation for this new product in the Pivotal Documentation.

Description

This is a collection of Concourse pipelines for installing and upgrading Pivotal Cloud Foundry.

Other pipelines which may be of interest are listed at the end of this README.

Concourse Pipeline

Downloading PCF Pipelines

Please use the Pivotal Network release of pcf-pipelines for stability. The download link returns a 404 if you do not have access; to get access, contact your Pivotal support or sales team. Tracking master is considered unstable and is likely to break the pipelines that consume it.

If you do not have access to the Pivotal Network release and are using the GitHub release instead, pin a release tag in your pipeline.yml to ensure stability:

resources:
- name: pcf-pipelines
  type: git
  source:
    uri: https://github.com/pivotal-cf/pcf-pipelines
    branch: master
    username: ((github_username))
    password: ((github_token))
    tag_filter: v0.23.0

Install-PCF pipelines

Deploys PCF for whichever IaaS you choose. For public cloud installs, such as AWS, Azure, and GCP, the pipeline deploys the necessary infrastructure in the public cloud, such as networks, load balancers, and databases, and uses these resources to deploy PCF (Ops Manager and Elastic Runtime). On-premise private datacenter install pipelines, such as those for vSphere and OpenStack, do not provision any infrastructure resources; they only deploy PCF, using resources specified in the pipeline's parameters.

The desired output of these install pipelines is a PCF deployment that matches the Pivotal reference architecture, usually using three availability zones and opting for high-availability components whenever possible. If you want to deploy a different architecture, you may have to modify these pipelines to get your desired architecture.

These pipelines are found in the install-pcf directory, sorted by IaaS.

Compatibility Matrix

| IaaS | pipelines release | OM version | ERT version |
| --- | --- | --- | --- |
| vSphere | v23.12 | 2.4.x | 2.4.x |
| Azure | v23.12 | 2.4.x | 2.4.x |
| AWS | v23.12 | 2.4.x | 2.4.x |
| GCP | v23.12 | 2.4.x | 2.4.x |
| OpenStack | v23 | 2.0.x | 2.0.x |

| IaaS | pipelines release | OM version | ERT version |
| --- | --- | --- | --- |
| vSphere | v23.11 | 2.3.x | 2.3.x |
| Azure | v23.11 | 2.3.x | 2.3.x |
| AWS | v23.11 | 2.3.x | 2.3.x |
| GCP | v23.11 | 2.3.x | 2.3.x |
| OpenStack | v23 | 2.0.x | 2.0.x |

| IaaS | pipelines release | OM version | ERT version |
| --- | --- | --- | --- |
| vSphere | v23.8 | 2.3.x | 2.3.x |
| Azure | v23.8 | 2.3.x | 2.3.x |
| AWS | v23.8 | 2.3.x | 2.3.x |
| GCP | v23.8 | 2.3.x | 2.3.x |
| OpenStack | v23 | 2.0.x | 2.0.x |

| IaaS | pipelines release | OM version | ERT version |
| --- | --- | --- | --- |
| vSphere | v23.6 | 2.2.x | 2.2.x |
| Azure | v23.6 | 2.2.x | 2.2.x |
| AWS | v23.6 | 2.2.x | 2.2.x |
| GCP | v23.6 | 2.2.x | 2.2.x |
| OpenStack | v23 | 2.0.x | 2.0.x |

| IaaS | pipelines release | OM version | ERT version |
| --- | --- | --- | --- |
| vSphere | v23.3 | 2.1.x | 2.1.x |
| Azure | v23.3 | 2.1.x | 2.1.x |
| AWS | v23.3 | 2.1.x | 2.1.x |
| GCP | v23.3 | 2.1.x | 2.1.x |
| OpenStack | v23 | 2.0.x | 2.0.x |

| IaaS | pipelines release | OM version | ERT version |
| --- | --- | --- | --- |
| vSphere | v23.1 | 2.0.x | 2.0.x |
| Azure | v23.1 | 2.0.x | 2.0.x |
| AWS | v23.1 | 2.0.x | 2.0.x |
| GCP | v23.1 | 2.0.x | 2.0.x |
| OpenStack | v23 | 2.0.x | 2.0.x |

| IaaS | pipelines release | OM version | ERT version |
| --- | --- | --- | --- |
| vSphere | v22 | 1.12.x | 1.12.x |
| Azure | v22 | 1.12.x | 1.12.x |
| AWS | v22 | 1.12.x | 1.12.x |
| GCP | v22 | 1.12.x | 1.12.x |
| OpenStack | v22 | 1.12.x | 1.12.x |

| IaaS | pipelines release | OM version | ERT version |
| --- | --- | --- | --- |
| vSphere | v0.17.0 | 1.11.12 | 1.11.8 |
| Azure | v0.17.0 | 1.11.12 | 1.11.8 |
| AWS | v0.17.0 | 1.11.12 | 1.11.8 |
| GCP | v0.17.0 | 1.11.12 | 1.11.8 |
  • Note: ERT v1.11.14 is not compatible with pcf-pipelines

Upgrade pipelines

These pipelines keep your PCF foundation up to date with the latest patch versions of PCF software from Pivotal Network. They can upgrade Ops Manager, Elastic Runtime, other tiles, and buildpacks. To keep every tile up to date, you will need one pipeline per tile in your foundation, plus one pipeline to keep Ops Manager up to date.

These upgrade pipelines are intended to keep running for as long as the foundation exists. They periodically check Pivotal Network for new software versions and apply those updates to the foundation. Currently, these pipelines are intended only for patch upgrades of PCF software (new --.--.n+1 versions) and are not generally recommended for minor/major upgrades (--.n+1.-- or n+1.--.-- versions). This is because major/minor upgrades generally require careful reading of the release notes to understand what changes they introduce before you commit to them, as well as additional configuration of the tiles/Ops Manager (these upgrade pipelines have no configure steps by default).
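The patch-only policy above amounts to a simple version comparison: apply a candidate release automatically only when its major and minor components match the installed version. A minimal illustration (this helper is hypothetical, not part of pcf-pipelines):

```python
# Hypothetical helper (not part of pcf-pipelines): decide whether a
# candidate release is a patch upgrade of the installed version,
# i.e. same major.minor with a higher patch number.
def is_patch_upgrade(installed: str, candidate: str) -> bool:
    i_major, i_minor, i_patch = (int(x) for x in installed.split("."))
    c_major, c_minor, c_patch = (int(x) for x in candidate.split("."))
    return (c_major, c_minor) == (i_major, i_minor) and c_patch > i_patch

print(is_patch_upgrade("2.4.1", "2.4.3"))  # True: patch bump, safe to auto-apply
print(is_patch_upgrade("2.4.1", "2.5.0"))  # False: minor bump, review release notes first
```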

These pipelines are found in any of the directories with the upgrade- prefix.

The upgrade-tile pipeline is compatible with the latest version of pcf-pipelines. However, as discussed, this pipeline is only used for patch upgrades.

Prerequisites

Usage

  1. Log in to Pivotal Network and download the latest version of PCF Platform Automation with Concourse (PCF Pipelines).

  2. Each pipeline has an associated params.yml file. Edit the params.yml with details related to your infrastructure.

  3. Log in and target your Concourse:

    fly -t yourtarget login --concourse-url https://yourtarget.example.com
    
  4. Set your pipeline with the params.yml file you created in step two above. For example:

    fly -t yourtarget set-pipeline \
      --pipeline upgrade-opsman \
      --config upgrade-ops-manager/aws/pipeline.yml \
      --load-vars-from upgrade-ops-manager/aws/params.yml
    
  5. Navigate to the pipeline url, and unpause the pipeline.

  6. Depending on the pipeline, the first job will either trigger on its own or require manual intervention. Some pipelines may also require manual steps during the run to complete.

Customizing the pipelines

It's possible to customize pcf-pipelines to suit your particular needs using yaml-patch.

This tool supports all operations from RFC 6902 (applied to YAML documents instead of JSON), which can be run against source YAML files such as the pcf-pipelines pipeline definition files. It provides a repeatable, automated mechanism to apply the same local customizations to one or more pipeline YAML files for each new release of pcf-pipelines.

Example of yaml-patch usage

For a source YAML file containing just one element in its jobs array, here is how to add a second job element to it.

  1. Source YAML: source.yml

    ---
    jobs:
    - name: first-job
      plan:
      - get: my-resource

  2. Operations file: add-job.yml

    - op: add
      path: /jobs/-
      value:
        name: second-job
        plan:
        - get: my-second-resource

  3. Execute the yaml-patch command:

    cat source.yml | yaml-patch -o add-job.yml > result.yml

  4. Resulting patched file: result.yml

    ---
    jobs:
    - name: first-job
      plan:
      - get: my-resource
    - name: second-job
      plan:
      - get: my-second-resource

Additional samples of yaml-patch usage patterns

See file yaml-patch-examples.md for additional yaml-patch samples and usage patterns.

Deploying and Managing Multiple Pipelines

There is an experimental tool which you may find helpful for deploying and managing multiple customized pipelines all at once, called PCF Pipelines Maestro. It uses a single pipeline to generate multiple pipelines for as many PCF foundations as you need.

Pipeline Compatibility Across PCF Versions

Our goal is to support at least the latest version of PCF with these pipelines. Currently there is no assurance of backwards compatibility; however, we keep past releases of the pipelines to ensure there is at least one version that works with an older version of PCF.

Compatibility is generally only an issue whenever Pivotal releases a new version of PCF software that requires additional configuration in Ops Manager. These new required fields then need to be either manually configured outside the pipeline, or supplied via a new configuration added to the pipeline itself.

Pipelines for Airgapped Environments

By default, the pipelines require outbound Internet access to pull resources such as releases from Pivotal Network and images from DockerHub. Various aspects of the pipelines need to be modified for them to work in an airgapped environment.

To help with modifying and adapting the pipelines for such environments, two sample transformation/bootstrapping pipelines are provided in this repository:

  • create-offline-pinned-pipelines: adapts pcf-pipelines to run in airgapped environments and downloads the required resources into an S3 repository.
  • unpack-pcf-pipelines-combined: bootstraps an airgapped S3 repository with the offline pipelines and resources produced by create-offline-pinned-pipelines.

For more details on these pipelines, along with usage instructions, refer to the Offline Pipelines for Airgapped Environments documentation page.

Contributing

Pipelines and Tasks

For practical details, please see our Contributing page.

The pipelines and tasks in this repo follow a simple pattern which must be adhered to:

.
├── some-pipeline
|   ├── params.yml
|   └── pipeline.yml
└── tasks
    ├── a-task
    │   ├── task.sh
    │   └── task.yml
    └── another-task
        ├── task.sh
        └── task.yml

Each pipeline has a pipeline.yml, which contains the YAML for a single Concourse pipeline. Pipelines typically require parameters, either for resource names or for credentials, which are supplied externally via ((placeholders)).

A pipeline may have a params.yml file which is a template for the parameters that the pipeline requires. This template should have placeholder values, typically CHANGEME, or defaults where appropriate. This file should be filled out and stored elsewhere, such as in LastPass, and then supplied to fly via the -l flag. See the fly documentation for more.
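For illustration, a filled-out params.yml template might look like the following; the field names here are hypothetical, so consult the actual template shipped with each pipeline:

```yaml
# Hypothetical params.yml template: replace CHANGEME values, store the
# filled-out copy securely (e.g. LastPass), and pass it to fly with -l.
pivnet_token: CHANGEME
opsman_admin_username: CHANGEME
opsman_admin_password: CHANGEME
om_vm_network: "VM Network"   # quote values that contain spaces
```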

Pipelines

Pipelines should define jobs that encapsulate conceptual chunks of work, which are then split into tasks within the job. Jobs should use aggregate where possible to speed up actions such as getting the resources the job requires.

Pipelines should not declare task YAML inline; they should all exist within a directory under tasks/.

Tasks

Each task has a task.yml and a task.sh file. The task YAML has an internal reference to its task.sh.

Tasks declare what their inputs and outputs are. These inputs and outputs should be declared in a generic way so they can be used by multiple pipelines.

Tasks should not use wget or curl to retrieve resources; doing so means the resource cannot be cached, cannot be pinned to a particular version, and cannot be supplied by alternative means for airgapped environments.
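For example, instead of curl-ing a binary inside a task script, the pipeline can declare it as a Concourse resource, so it is cached, pinned to a version, and replaceable for airgapped installs. A sketch, with an illustrative resource and version:

```yaml
# Sketch: declare a tool as a pinned resource instead of fetching it
# with curl/wget inside a task. The repository and tag are illustrative.
resources:
- name: om-cli
  type: github-release
  source:
    owner: pivotal-cf
    repository: om
    tag_filter: "0.42.0"   # pin a version rather than fetching "latest"
```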

Running tests

There is a series of tests that should be run before creating a PR or pushing new code. Fetch the dependencies and run the tests with ginkgo:

go get github.com/onsi/ginkgo/ginkgo
go get github.com/onsi/gomega
go get github.com/concourse/atc

ginkgo -r -p

Other notable examples of pipelines for PCF

PCFS Sample Pipelines - includes pipelines for

  • integrating Artifactory, Azure blobstores, GCP storage, or Docker registries
  • blue-green deployment of apps to PCF
  • backing up PCF
  • deploying Concourse itself with BOSH
  • and more...

pcf-pipelines's People

Contributors

abbyachau, anexper, barkerd427, bblincoe, bhudlemeyer, bijukunjummen, calebwashburn, christianang, datianshi, dmfrey, dtimm, eamonryan, fredwangwang, haydonryan, kamoljan, kcboyle, krishicks, matthewfischer, nbconklin, oozie, pakiheart, pvsone, rahulkj, romrider, ryanpei, siennathesane, sneal, xchapter7x, z4ce, zmb3


pcf-pipelines's Issues

tcp_routing value in install pipeline cannot be set to "disable"

When tcp_routing value is set to disable, the tcp_routing_ports field is still required and configuration of the ER tile fails. See screenshot. Ports should not be required when tcp routing is disabled.

## TCP routing and routing services
tcp_routing: disable                   # Enable/disable TCP routing (enable|disable)
tcp_routing_ports:                   # A comma-separated list of ports and hyphen-separated port ranges, e.g. 52135,34000-35000,23478

(screenshot: er-tile-failure)

Install pipeline failing on missing input

Install pipeline failing with following message.

missing input concourse-vsphere

Looking through the repo this folder path is called out several times but doesn't appear to be present in the repo.

Reconsider approach with cliaas

Please take this as a suggestion / constructive feedback after using these for a few months on AWS, Azure, and vSphere.

The easiest IaaS to debug and customize, for both pcf-upgrades and installs, is vSphere, because of its use of govc. The other clouds' Ops Man pipelines have not once done what I'd expect, and have required many rounds of debugging to tweak my setup to the expectations of cliaas.

While golang as a CLI is great, using golang effectively as a scripting language (as it is here, since there are so many variations in the task) has major maintainability problems in the wild.

These pipelines need to be customized for specific customer environments, and unless cliaas is going to handle almost every permutation of variation, it's a black box to system administrators who are never going to learn golang. People will wind up writing their own scripted CLI, IaaS template (Azure ARM template, AWS CF, etc.), or terraform-based version of the task. I'd much rather have something easily tinker-able like this.

That said, if the goal is to have cliaas "just handle it all", kudos; it's just not there yet.

Examples include

  • use of private vs public IPs, #51
  • customized disk settings (it uses the VMDK/VHD/AMI defaults which are usually far too low)
  • reserved IOPS for disks
  • multi-homed NICs for Ops Man
  • common post-processing for Ops Man including: TLS certs, LDAP PAM setup

Is vSphere Upgrade Ops Manager Ready?

Currently, looking at the parameters available in vsphere/upgrade-ops-manager-params.yml, the list seems much smaller than what the OpsManager UI exposes. I'd expect it to be much larger (to recreate the original configuration) or much smaller (configuration imported from previously exported installation).

Is the Upgrade Ops Manager pipeline ready for use on vSphere installations?

Mismatch in pcf-pipeline-tarball filename in pipelines

I downloaded the Concourse Automation Pipeline from PivNet to update an instance of PCF. I came across a minor error: the pcf-pipelines tarball downloaded from PivNet did not match the filename present in the run command of the tarball-unpack task within the upgrade-tile job. The upgrade-buildpack pipeline has the same issue: for each buildpack, the tarball-unpack task's filename does not match the product_version defined in the resources section.

- name: pcf-pipelines-tarball
  type: pivnet
  source:
    api_token: {{pivnet_token}}
    product_slug: pcf-automation
    product_version: v0.1.0

- task: tarball-unpack
  config:
    inputs: [ { name: pcf-pipelines-tarball } ]
    outputs: [ { name: pcf-pipelines } ]
    platform: linux
    image_resource: { type: docker-image, source: { repository: cloudfoundry/cflinuxfs2 } }
    run: { path: tar, args: [ "-xvzf", "pcf-pipelines-tarball/pcf-pipelines.tgz" ] }

The last line should be:

run: { path: tar, args: [ "-xvzf", "pcf-pipelines-tarball/pcf-pipelines-v0.1.0.tgz" ] }

To make it resilient, it would be better if changing the product_version of pcf-pipelines-tarball did not require changing the run command in the tarball-unpack task.
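One hedged way to decouple the task from the exact version string is to run tar through a shell so the versioned filename can be globbed, rather than passing a fixed filename to tar directly. This is a sketch, not the shipped task:

```yaml
# Sketch: glob the tarball name via sh -c so bumping product_version
# does not require editing the run command.
- task: tarball-unpack
  config:
    platform: linux
    image_resource: { type: docker-image, source: { repository: cloudfoundry/cflinuxfs2 } }
    inputs: [ { name: pcf-pipelines-tarball } ]
    outputs: [ { name: pcf-pipelines } ]
    run: { path: sh, args: [ "-c", "tar -xvzf pcf-pipelines-tarball/pcf-pipelines-*.tgz" ] }
```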

changes to import opsman appear to be breaking opsman import

Getting below during opsman install running install-pcf/vsphere/pipeline

./pivnet-opsman-product/pcf-vsphere-1.10.3.ova
jq: error: ip0/0 is not defined at , line 6:
(.PropertyMapping[] | select(.Key == ip0)).Value = $ip0 |
jq: error: netmask0/0 is not defined at , line 7:
(.PropertyMapping[] | select(.Key == netmask0)).Value = $netmask0 |
jq: error: gateway/0 is not defined at , line 8:
(.PropertyMapping[] | select(.Key == gateway)).Value = $gateway |
jq: error: DNS/0 is not defined at , line 9:
(.PropertyMapping[] | select(.Key == DNS)).Value = $dns |
jq: error: ntp_servers/0 is not defined at , line 10:
(.PropertyMapping[] | select(.Key == ntp_servers)).Value = $ntpServers |
jq: error: admin_password/0 is not defined at , line 11:
(.PropertyMapping[] | select(.Key == admin_password)).Value = $adminPassword |
jq: error: custom_hostname/0 is not defined at , line 12:
(.PropertyMapping[] | select(.Key == custom_hostname)).Value = $customHostname
jq: 7 compile errors

It looks like the pcf-pipelines/tasks/import-opsman/task.sh file was refactored for this current v0.7.0 tar release.

Specifying HAProxy IPs low in the range causes IP conflicts in Ops Mgr

I'm unsure if this is specific to the pipeline, om, or Ops Mgr, but I set the HAProxy static IPs and got the message "errors":["IP '192.168.144.11' is already taken, cannot be taken for 'cf-f7fc90f1f948b5ffffb9_ha_proxy-f7a7297e6fc566c528ea-partition-21a67ea37bbec67c3833'"]} during apply.

deployment_network_name: "DEPLOYMENT"
deployment_vsphere_network: "VM Network"   # vCenter Deployment network name
deployment_nw_cidr: 192.168.128.0/17          # Deployment network CIDR, ex: 10.0.0.0/22
deployment_excluded_range: "192.168.128.1-192.168.144.9, 192.168.159.250-192.168.255.255"    # Deployment network exclusion range
deployment_nw_dns: 192.168.3.14           # Deployment network DNS
deployment_nw_gateway: 192.168.128.1       # Deployment network Gateway
deployment_nw_azs: PCF Cluster           # Comma-separated list of AZs to be associated with this network

ha_proxy_ips: 192.168.144.111, 192.168.144.112          # Comma-separated list of static IPs
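A likely factor here is that 192.168.144.11 sits just outside the excluded range (which ends at 192.168.144.9), so Ops Manager was free to assign it to BOSH-deployed VMs. A quick sanity check for a params file like the one above is to verify that any manually placed IPs actually fall inside the exclusion range; an illustrative helper, not part of pcf-pipelines:

```python
import ipaddress

# Illustrative helper (not part of pcf-pipelines): check whether an IP
# falls inside a "start-end" exclusion range, i.e. whether BOSH is
# prevented from assigning it automatically.
def in_exclusion_range(ip: str, exclusion: str) -> bool:
    start, end = (ipaddress.ip_address(x.strip()) for x in exclusion.split("-"))
    return start <= ipaddress.ip_address(ip) <= end

excluded = "192.168.128.1-192.168.144.9"
print(in_exclusion_range("192.168.144.11", excluded))  # False: BOSH may hand out .11
print(in_exclusion_range("192.168.144.5", excluded))   # True: reserved from BOSH
```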

install-pcf pipeline for vsphere fails on tcp ports

When disabling tcp routing, left tcp ports blank. Job failed at config-elastic-runtime-tile stage config-er-tile with error:

staging cf 1.11.0-build.49
finished staging
Status: 200 OK
Cache-Control: max-age=0, private, must-revalidate
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Tue, 09 May 2017 20:10:08 GMT
Etag: W/"072de635532b82c376c8c599b299e176"
Server: nginx/1.4.6 (Ubuntu)
Strict-Transport-Security: max-age=31536000
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: ea690d92-d7f4-483a-ab86-e478f443ad82
X-Runtime: 0.193366
X-Xss-Protection: 1; mode=block
Using self signed certificates generated using Ops Manager...
configuring product...
setting properties
could not execute "configure-product": failed to configure product: request failed: unexpected response:
HTTP/1.1 422 Unprocessable Entity
Transfer-Encoding: chunked
Cache-Control: no-cache
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Tue, 09 May 2017 20:10:10 GMT
Server: nginx/1.4.6 (Ubuntu)
Strict-Transport-Security: max-age=31536000
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: 39a7affc-fb61-4955-8c67-bbaec5abbb0a
X-Runtime: 0.493735
X-Xss-Protection: 1; mode=block

57
{"errors":{".properties.tcp_routing.enable.reservable_ports":["Value can't be blank"]}}
0

[vsphere] install-pcf pipeline deployment issue - tcp_routing_ports needs to have a value even if disabled.

my params.yml

## TCP routing and routing services
tcp_routing: disable                    # Enable/disable TCP routing (enable|disable)
tcp_routing_ports:                  # A comma-separated list of ports and hyphen-separated port ranges, e.g. 52135,34000-35000,23478
route_services: disable                 # Enable/disable route services (enable|disable)

Error during configure-elastic-runtime/config-er-tile:

 staging cf 1.10.7-build.1
finished staging
SSL Configuration:...
Setting CF resources and properties...
configuring product...
setting properties
could not execute "configure-product": failed to configure product: request failed: unexpected response:
HTTP/1.1 422 Unprocessable Entity
Transfer-Encoding: chunked
Cache-Control: no-cache
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Tue, 02 May 2017 01:49:08 GMT
Server: nginx/1.4.6 (Ubuntu)
Strict-Transport-Security: max-age=31536000
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: f46b568e-4b9c-49b6-ae05-2bc98f493b65
X-Runtime: 0.716472
X-Xss-Protection: 1; mode=block

132
{"errors":{".properties.tcp_routing.enable.reservable_ports":["Value can't be blank"],".properties.security_acknowledgement":["Value can't be blank"],".properties.mysql_backups":["Value 'enable' does not match the select_value of a known option"],".mysql_monitor.recipient_email":["Value can't be blank"]}}
0 

om_network_name param does not allow spaces in the value

The network name in the params.yml file for property om_vm_network does not support space characters in the value (e.g. VLAN 26). The resulting network name is just VLAN.

The root cause appears to be this line in tasks/import-opsman/task.sh:

  setNetworkMapping $OM_VM_NETWORK

The $OM_VM_NETWORK variable should be quoted.
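The word-splitting behavior behind this bug is easy to reproduce; a minimal sketch, where countArgs is a stand-in for setNetworkMapping:

```shell
#!/bin/sh
# Demonstrates the word-splitting bug: unquoted expansion splits a value
# containing spaces into multiple arguments.
countArgs() { echo "$#"; }

OM_VM_NETWORK="VLAN 26"

countArgs $OM_VM_NETWORK     # prints 2 -- split into "VLAN" and "26"
countArgs "$OM_VM_NETWORK"   # prints 1 -- passed as a single argument
```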

Install pipeline failing on upload-elastic-runtime-tile

Getting following error on upload-elastic-runtime-tile

chmod: cannot access 'tool-om/om-linux': No such file or directory

Believe this is due to the folder being changed to om-cli instead of tool-om.

Hijacking the build container shows these directories:
root@25c4a38e-76d9-4945-658e-8916f27ca8cf:/tmp/build/8ac8d33d# ll
total 0
drwxr-xr-x 1 root root 86 Apr 20 02:22 ./
drwxr-xr-x 1 root root 16 Apr 20 02:08 ../
drwxr-xr-x 1 root root 44 Apr 18 18:27 om-cli/
drwxr-xr-x 1 root root 218 Apr 19 23:58 pcf-pipelines/
drwxr-xr-x 1 root root 20 Apr 20 01:25 pivnet-cli/
drwxr-xr-x 1 root root 116 Apr 20 01:22 pivnet-product/

Searching the repo, it appears several tasks still reference the old name.

Terraformed firewall rule allow-ert-all overly restrictive

From @cholick, originally in gcp-concourse:

GCP installation instructions don't restrict traffic flow to tags, see:
http://docs.pivotal.io/pivotalcf/1-9/customizing/gcp-prepare-env.html#firewall_rules

But the terraformed firewall rule for internal traffic does:
https://github.com/pivotal-cf/gcp-concourse/blob/master/terraform/firewalls.tf#L76-#L77

This creates traffic issues on bosh-deployed / odb vms, which don't get tagged in the same way.

cc @JaredGordon @miyer-pivotal

replace-opsman-vm task fails in GCP upgrade-opsmgr pipeline

Using 0.14.1 release of pcf-pipelines. Running on GCP, attempting to upgrade from Ops Manager 1.10.3 to 1.10.8

The replace-opsman-vm task executes the following command:
cliaas-linux replace-vm --config cliaas-config/config.yml --identifier "ops-manager-1-10-3"

After a couple of minutes, the Ops Manager VM shuts down (see screenshot).

However, cliaas-linux remains unresponsive until the end of the 10-minute timeout period, and then errors out with:

error: waitforstatus after stopvm failed: polling for status timed out

Install pipeline.yml refers to non-existent pivnet-cli resource

Wrong glob reference being used in install-pcf/vsphere/pipeline.yml.

- name: upload-elastic-runtime-tile
  plan:
...
    - get: pivnet-cli
      params: {globs: ["*linux_amd64*"]}

Should be:

- name: upload-elastic-runtime-tile
  plan:
...
    - get: pivnet-cli
      params: {globs: ["*linux-amd64*"]}

Note the dash usage instead of underscore in globs.

config-opsmgr job still passes if network is not correctly set.

This should fail; however, the job shows green.

Configuring networks...
Status: 422 Unprocessable Entity
Cache-Control: no-cache
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Thu, 27 Apr 2017 16:07:57 GMT
Server: nginx/1.4.6 (Ubuntu)
Strict-Transport-Security: max-age=31536000
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: 043b0400-32af-478a-91cf-569714e40c44
X-Runtime: 0.175094
X-Xss-Protection: 1; mode=block
{
  "errors": [
    "Networks There is an error on a network.",
    "Networks DEPLOYMENT: Subnets : [0] Availability zone references can't be blank",
    "Availability zones This infrastructure requires an availability zone",
    "Availability zones cannot find availability zone with name PCF"
  ]
}

install-pcf pipeline references rjain material

Looks like the pipeline is downloading an image from Rahul Jain's personal Docker Hub repository:

Status: Downloaded newer image for rjain/buildbox@sha256:

This should probably only reference materials from C[0] or R&D.

upload OpsMan Image fails on GCP

I tried out this project since you deprecated the older gcp-concourse project. However, when running the create-infrastructure part of the pipeline, it fails because it cannot find the terraform command.

pcf-pipelines/tasks/install-pcf-gcp/upload-opsman.sh: line 30: terraform: command not found

install-pcf/aws readme is inaccurate

With commit 3da8645 the upstream aws-concourse repository was deprecated. The majority of the tasks were brought in but the README references pcfaws_terraform_pipeline.yml which does not exist.

Either the README.md is wrong or the file was missed.

Feature request - Upgrade ERT pipeline

Can we please make the upload-tile task's "waiting for opsmgr" message not scroll forever while waiting for a response? It takes ten minutes to time out, so the log gets quite long. Thanks.

Product configuration

Hey, thanks for sharing the pipeline setup! We are currently trying to automate a lot of the infrastructure around CF with Concourse. Do you plan to add a step for om configure-product for the CF ERT? If not, what objections do you see?

Azure opsman pipeline is adding curly brackets to cliaas-config/config.yml

When running the azure opsman upgrade (v0.8.0) pipeline we get the following error:

line 2: cannot unmarshal !!map into string
  line 3: cannot unmarshal !!map into string
  line 4: cannot unmarshal !!map into string
  line 5: cannot unmarshal !!map into string
  line 6: cannot unmarshal !!map into string
  line 7: cannot unmarshal !!map into string
  line 8: cannot unmarshal !!map into string
  line 9: cannot unmarshal !!map into string
  line 10: cannot unmarshal !!map into string

When hijacking into the replace-vm container we can see the config.yml seems to contain extraneous curly brackets like this:

azure:
  vhd_image_url: {{}}
  subscription_id: {{longid}}
  ...

deploy director fails with two issues

When using the deploy-director task on GCP it fails with the following:

####################################################################
GETTING JSON FOR: DIRECTOR -> networks <- file ...
####################################################################
jq: error: syntax error, unexpected '$' (Unix shell quoting issues?) at <top-level>, line 1:
.${key}
jq: error: try .["field"] instead of .field for unusually named fields at <top-level>, line 1:
.${key}
jq: 2 compile errors

(the same pair of compile errors repeats for every key that is looked up, including .${key}[])
.${key} 
jq: error: try .["field"] instead of .field for unusually named fields at <top-level>, line 1:
.${key}
jq: 2 compile errors
jq: error: syntax error, unexpected '$' (Unix shell quoting issues?) at <top-level>, line 1:
.${key} 
jq: error: try .["field"] instead of .field for unusually named fields at <top-level>, line 1:
.${key}
jq: 2 compile errors
jq: error: syntax error, unexpected '$' (Unix shell quoting issues?) at <top-level>, line 1:
.${key} 
jq: error: try .["field"] instead of .field for unusually named fields at <top-level>, line 1:
.${key}
jq: 2 compile errors
jq: error: syntax error, unexpected '$' (Unix shell quoting issues?) at <top-level>, line 1:
.${key} 
jq: error: try .["field"] instead of .field for unusually named fields at <top-level>, line 1:
.${key}
jq: 2 compile errors
jq: error: syntax error, unexpected '$' (Unix shell quoting issues?) at <top-level>, line 1:
.${key} 
jq: error: try .["field"] instead of .field for unusually named fields at <top-level>, line 1:
.${key}
jq: 2 compile errors
jq: error: syntax error, unexpected '$' (Unix shell quoting issues?) at <top-level>, line 1:
.${key} 
jq: error: try .["field"] instead of .field for unusually named fields at <top-level>, line 1:
.${key}
jq: 2 compile errors
jq: error: syntax error, unexpected '$' (Unix shell quoting issues?) at <top-level>, line 1:
.${key}[] 
jq: error: try .["field"] instead of .field for unusually named fields at <top-level>, line 1:
.${key}[]
jq: 2 compile errors
jq: error: syntax error, unexpected '$' (Unix shell quoting issues?) at <top-level>, line 1:
.${key} 
jq: error: try .["field"] instead of .field for unusually named fields at <top-level>, line 1:
.${key}
jq: 2 compile errors
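The errors above are the classic jq-inside-single-quotes problem: the shell never expands `${key}`, so jq receives the literal program `.${key}`. A sketch of the usual fix (the key and JSON values below are illustrative, not from the task) is to pass the key to jq as a variable instead of splicing it into the program text:

```shell
#!/bin/sh
set -eu
# Broken (for illustration): single quotes stop the shell expanding ${key},
# so jq is handed the literal text .${key} and reports a syntax error.
key="vm_disk_type"                    # example key
json='{"vm_disk_type":"thin"}'        # example document

# Fixed: hand the key to jq as a variable and index with .[$k]
value=$(echo "$json" | jq -r --arg k "$key" '.[$k]')
echo "$value"
```

This also sidesteps the "try .[\"field\"]" complaint, since `.[$k]` handles unusually named fields.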

As a result, the Networking and Availability Zones tabs are not filled in correctly. It also fails with the following:

Applying changes on Ops Manager @ gcp.domain.com
could not execute "apply-changes": could not check for any already running installation: could not make api request to installations endpoint: token could not be retrieved from target url: Post https://gcp.domain.com/uaa/oauth/token: dial tcp: lookup gcp.domain.com on 8.8.8.8:53: no such host

The Ops Manager URL should be opsman.gcp.domain.com, not gcp.domain.com.

Ops Manager upgrade risks losing Ops Manager settings.

Since the Ops Manager settings are not persisted or backed up as part of the pipeline, if the Concourse worker container goes away during the deploy for any reason, there is a risk that you will not be able to restore the settings in Ops Manager.

Could the pipeline back up the settings to S3, or offer an option to do something like that?
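One option is to export the installation settings with the om CLI before apply-changes and hand the archive to an S3 put step. A sketch, assuming the om CLI is on the worker and that the `OPSMAN_*` parameter names exist in the pipeline:

```shell
#!/bin/sh
set -eu
# Sketch: export the Ops Manager installation settings to a zip that a
# Concourse s3 put step could then upload. Parameter names are assumptions.
OPSMAN_URI="${OPSMAN_URI:-https://opsman.example.com}"
OUTPUT_FILE="${OUTPUT_FILE:-installation.zip}"

if command -v om >/dev/null 2>&1; then
  om --target "$OPSMAN_URI" --skip-ssl-validation \
     --username "${OPSMAN_USERNAME:?}" --password "${OPSMAN_PASSWORD:?}" \
     export-installation --output-file "$OUTPUT_FILE"
else
  echo "om CLI not found; skipping export"
fi
```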

Pipeline tasks refer to virtmerlin/c0-worker; would a team Dockerhub be better?

See ref: https://github.com/pivotal-cf/aws-concourse/issues/12 (that repository is now deprecated).

This is just a copy of the issue.

@bijukunjummen
I noticed that some of the pipeline steps refer to the Docker image virtmerlin/c0-worker. Wouldn't hosting and using Docker images from a team Dockerhub be a better idea?

@krishicks
We have chores to replace any non-cflinuxfs2 images with either cflinuxfs2 or one based on
cflinuxfs2 that we maintain.

They're here:

https://www.pivotaltracker.com/story/show/144251129
https://www.pivotaltracker.com/story/show/144251109
https://www.pivotaltracker.com/story/show/144251095

OPSMAN_URI parameter likely redundant

See ref: https://github.com/pivotal-cf/aws-concourse/issues/13

This is just a copy of that issue.

@bijukunjummen
The OPSMAN_URI parameter is likely redundant: in some of the pipeline steps the URL is derived as https://opsman.$ERT_DOMAIN, and if a different OPSMAN_URI is provided, the import-stemcell step fails because the URLs don't match. We should either use OPSMAN_URI in all places, or remove OPSMAN_URI and derive the URL.

@ryanpei
@bijukunjummen so we would replace OPSMAN_URI with ERT_DOMAIN in all places where it is currently used?

@bijukunjummen
Yes, I think so @ryanpei. Or make use of OPSMAN_URI everywhere an Ops Manager URL is required, instead of deriving it as https://opsman.$ERT_DOMAIN. The former may be a good start, though.
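The "derive it" option from the thread can be sketched as a one-line default, so a task keeps working whether or not OPSMAN_URI is explicitly supplied (variable names follow the issue; treat the values as examples):

```shell
#!/bin/sh
set -eu
# Sketch: fall back to deriving the Ops Manager URL from ERT_DOMAIN when
# OPSMAN_URI is not explicitly provided.
ERT_DOMAIN="${ERT_DOMAIN:-gcp.domain.com}"               # example domain
OPSMAN_URI="${OPSMAN_URI:-https://opsman.${ERT_DOMAIN}}"
echo "$OPSMAN_URI"
```

With this pattern a mismatched OPSMAN_URI can no longer disagree with the derived URL unless someone deliberately overrides it.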

upgrade ops man task failing on govc import.ova

govc import.ova fails with the message below:

Running govc import.ova -options=opsman_settings.json -name=OpsManager-1.9.3-201702240222 -k=true -u=VCENTER -ds=DS -dc=DC -pool=POOL -folder=FOLDER /tmp/build/39dbd1bf/pivnet-opsmgr/pcf-vsphere-1.9.3.ova
./govc/govc_linux_amd64: no file specified

I think some of the flags conflict: when I remove all but the options below, the deploy works correctly.

Running govc import.ova -options=opsman_settings.json -name=OpsManager-1.9.3-201702240222 -k=true  -folder=FOLDER /tmp/build/39dbd1bf/pivnet-opsmgr/pcf-vsphere-1.9.3.ova

[24-02-17 02:56:48] Uploading pivotal-ops-manager-disk1.vmdk... (28%, 16.5MiB/s)
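One way to avoid the flag collision above is to move the connection settings into govc's `GOVC_*` environment variables, which the CLI reads in place of flags, leaving only the import-specific options on the command line. A sketch; all values are placeholders:

```shell
#!/bin/sh
set -eu
# Sketch: govc reads connection settings from GOVC_* environment variables,
# so import.ova can be invoked with only its own flags.
export GOVC_URL="vcenter.example.com"
export GOVC_USERNAME="administrator@vsphere.local"
export GOVC_PASSWORD="secret"
export GOVC_INSECURE=1                 # equivalent to -k=true
export GOVC_DATACENTER="DC"
export GOVC_DATASTORE="DS"
export GOVC_RESOURCE_POOL="POOL"

if command -v govc >/dev/null 2>&1; then
  govc import.ova -options=opsman_settings.json \
    -name=OpsManager-1.9.3-201702240222 -folder=FOLDER pcf-vsphere-1.9.3.ova
else
  echo "govc not installed; skipping import"
fi
```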

Track updates metrics

Looking at manual tasks, one could track and quantify the time and effort it takes to upgrade services, application containers (Tomcat, for instance), and the OS. Do these pipelines capture any details that could be used to quantify tile, platform, and stemcell upgrades? Possibly output them in each report as a stepping stone to storing them externally.

Valid yaml syntax for trusted_certs does not translate well to json

In params.yml file's trusted_certs property (and ssl_cert property), multi-line values are allowed by YAML. Example:

trusted_certs: |
  -----BEGIN CERTIFICATE-----
  ...
  -----END CERTIFICATE-----

This doesn't translate correctly into JSON during the configure-ops-director/config-opsdir task, because JSON does not allow raw multiline string values. Using a multiline YAML value for a certificate causes a 400 Bad Request when configuring security.

The workaround is to make the certificate a single-line value in the YAML file; we used \r\n in place of hard return characters. However, it would be a nice fix to be able to specify a certificate as a multiline value, as YAML syntax allows.
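One way to keep the multiline YAML value and still produce valid JSON is to let jq do the escaping rather than hand-editing the certificate. A sketch; the property name is taken from the issue and the certificate body is a placeholder:

```shell
#!/bin/sh
set -eu
# Sketch: jq -Rs reads raw stdin as one string, and embedding it in an
# object emits a JSON string with newlines escaped as \n.
cert='-----BEGIN CERTIFICATE-----
MIIB...example...
-----END CERTIFICATE-----'

json=$(printf '%s' "$cert" | jq -Rs '{trusted_certs: .}')
echo "$json"
```

The resulting JSON carries the line breaks as `\n` escapes, which the Ops Manager API accepts where a raw newline would be rejected.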

Method to check Ops Manager running tasks may fail when old pending tasks exist in the DB

This problem happened in two distinct PCF 1.9.2 environments of a customer that deployed pcf-pipelines (tested with v0.8, v0.11, and v0.13.2):
the Apply-Changes and Wait-for-Opsmgr tasks of the Upgrade-Tile pipeline both report that a task is already running in Ops Manager, even though none has been started in the Ops Manager UI. That return code prevents the pipeline from proceeding to the Apply-Changes phase of the upgrade, requiring the customer to use the Ops Manager UI to continue.

The root cause:
We found that the Ops Manager "installations" API output contained a task a couple of months old that was still in the "running" state with no finished_at date (see the example below).
That entry caused the tasks mentioned above to incorrectly report that a task is already running (even the apply-changes command of the om tool v0.23 fails because of it), because their checks simply look for the existence of an entry in the "running" state.
According to the customer, the situation seems to have been caused by rebooting the Ops Manager VM after the corresponding running action got stuck. Apparently Ops Manager left that entry unchanged in its installs table after the reboot and never updated it to a failed state.

{
  "user_name": "admin",
  "finished_at": null,
  "started_at": "2017-02-01T17:38:42.941Z",
  "status": "running",
  "additions": [],
  "deletions": [],
  "updates": [
    {
      "identifier": "p-rabbitmq",
      ...
    },
    ...
  ],
  "id": 17
},

Potential solutions:
A) Update both the wait-for-opsmgr task.sh and the om tool to parse the recent Ops Manager events JSON instead of just searching for an entry with "running" status; OR

B) Provide a Known-Issues readme in the pcf-pipelines package describing the issue above and the workaround below to fix those event entries in the OpsMgr DB:
1) Make a backup copy of OpsMgr settings (add link to docs)
2) SSH to the OpsMgr VM and become root (sudo su -)
3) Switch to postgres user (sudo su postgres)
4) Execute command psql (no password required)
5) Connect to the DB:
> \connect tempest_production
6) Find the id of the task in "running" state:
SELECT id FROM installs WHERE status='running';
7) Change the status of the corresponding entry:
UPDATE installs SET status='failed', finished_at='2017-05-04T13:58:39.620Z', finished=true WHERE id='<ID-NUMBER-FROM-PREVIOUS-STEP>';
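Option (A) can be sketched as a stricter jq check that ignores stale entries: require both status "running" and a recent started_at (ISO-8601 timestamps compare correctly as plain strings). The JSON below is trimmed from the example above; the cutoff policy is an assumption:

```shell
#!/bin/sh
set -eu
# Sketch: count only "running" installations that started after a cutoff,
# so a months-old stuck entry no longer blocks the pipeline.
installations='{"installations":[
  {"id":17,"status":"running","started_at":"2017-02-01T17:38:42.941Z","finished_at":null},
  {"id":18,"status":"succeeded","started_at":"2017-05-01T10:00:00.000Z","finished_at":"2017-05-01T10:30:00.000Z"}
]}'

cutoff="2017-05-01T00:00:00.000Z"   # e.g. "now minus a few hours"
running=$(echo "$installations" | jq --arg cutoff "$cutoff" \
  '[.installations[] | select(.status == "running" and .started_at > $cutoff)] | length')
echo "$running"
```

With the stale entry from the issue, this reports 0 running installations and the pipeline could proceed.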

vSphere upgrade-opsmgr fails while downloading stemcells

Last night while trying to upgrade to Ops Manager 1.10.8, my pipelines (0.14.1, da072c7) failed in the download-stemcells task with the following error:

Logged-in successfully
pcf-pipelines/tasks/download-pivnet-stemcells/task.sh: line 55: pivnet: unbound variable
pcf-pipelines/tasks/download-pivnet-stemcells/task.sh: line 60: pivnet: unbound variable
Could not find stemcell 3363.24 for vsphere. Did you specify a supported IaaS type for this stemcell version?
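The "unbound variable" messages point at `set -u` tripping over an unset variable named `pivnet` in task.sh. A defensive sketch; the variable name mirrors the error message, and the default value is an assumption:

```shell
#!/bin/sh
set -eu
# Sketch: under `set -u`, expanding an unset variable aborts the script with
# "unbound variable". Give it a default (or a clear error) before first use.
pivnet="${pivnet:-pivnet-cli/pivnet-linux-amd64}"
echo "using pivnet CLI at: $pivnet"
```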

vSphere deploy fails in upload elastic runtime step

This is due to the pivnet CLI not being present in the fetch directory.

upload-er-tile:
chmod: missing operand after '+x'
Try 'chmod --help' for more information.

I'm currently investigating this too as it's a blocker for me.
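"chmod: missing operand" usually means the glob for the CLI binary expanded to nothing because the fetch step produced no file. A guard sketch; the `pivnet-cli/` path is an assumption:

```shell
#!/bin/sh
set -eu
# Sketch: only chmod files that actually exist, and surface a clear
# diagnostic when the fetch directory is empty.
found=false
for f in pivnet-cli/pivnet-*; do
  [ -e "$f" ] || continue
  chmod +x "$f"
  found=true
done
if [ "$found" = false ]; then
  echo "pivnet CLI not found under pivnet-cli/; check the fetch step" >&2
fi
```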

[vsphere] Enabling MySQL backup does not work.

SSL Configuration:...
Setting CF resources and properties...
configuring product...
setting properties
could not execute "configure-product": failed to configure product: request failed: unexpected response:
HTTP/1.1 422 Unprocessable Entity
Transfer-Encoding: chunked
Cache-Control: no-cache
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Tue, 02 May 2017 01:58:38 GMT
Server: nginx/1.4.6 (Ubuntu)
Strict-Transport-Security: max-age=31536000
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: 5cc38bd6-0372-4057-9854-62725b462763
X-Runtime: 0.574899
X-Xss-Protection: 1; mode=block

{"errors":{".properties.tcp_routing":["You can only change properties that are under the selected option"],".properties.mysql_backups":["Value 'enable' does not match the select_value of a known option"]}}
