foremast / foremast

280 stars · 37 watchers · 50 forks · 3.71 MB

Spinnaker Pipeline/Infrastructure Configuration and Templating Tool - Pipelines as Code.

Home Page: https://foremast.readthedocs.io/

License: Apache License 2.0

Languages: Makefile 1.13%, Python 90.97%, Shell 0.09%, Jinja 7.81%
Topics: spinnaker, devops, aws, python, pipelines-as-code, gcp, hacktoberfest

foremast's Issues

Add chaos support

Add support so that Chaos Monkey can be configured in pipeline.json or app.json. In Spinnaker, Chaos Monkey is configured during application creation, and the JSON looks like:

    "chaosMonkey": {
      "enabled": true,
      "meanTimeBetweenKillsInWorkDays": 1,
      "minTimeBetweenKillsInWorkDays": 1,
      "grouping": "cluster",
      "regionsAreIndependent": true,
      "exceptions": [
        {
          "account": "prod",
          "stack": "*",
          "region": "*",
          "detail": "*"
        },
        {
          "account": "prods",
          "region": "*",
          "stack": "*",
          "detail": "*"
        },
        {
          "account": "prodp",
          "region": "*",
          "stack": "*",
          "detail": "*"
        }
      ]
    },

This will need to be part of the payload on create-app stage.
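A minimal sketch of how the create-app payload could pick this up, assuming a hypothetical chaos_monkey key in pipeline.json (the key name and defaults below are illustrative, not Foremast's actual schema):

```python
import json

# Illustrative defaults; overlay whatever the user sets in pipeline.json.
DEFAULT_CHAOS = {
    "enabled": False,
    "meanTimeBetweenKillsInWorkDays": 1,
    "minTimeBetweenKillsInWorkDays": 1,
    "grouping": "cluster",
    "regionsAreIndependent": True,
    "exceptions": [],
}

def build_chaos_block(pipeline_config):
    """Return a chaosMonkey block, overlaying user settings on defaults."""
    block = dict(DEFAULT_CHAOS)
    block.update(pipeline_config.get("chaos_monkey", {}))
    return block

def add_chaos_to_payload(create_app_payload, pipeline_config):
    """Attach the chaosMonkey block to the create-app stage payload."""
    create_app_payload["chaosMonkey"] = build_chaos_block(pipeline_config)
    return create_app_payload

payload = add_chaos_to_payload({"name": "myapp"}, {"chaos_monkey": {"enabled": True}})
print(json.dumps(payload, indent=2))
```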

Pipeline JSON Validator

We should have an intermediary step to check the JSON before posting to Spinnaker. On top of that, this enables people to develop plugins to modify pipeline JSON based on business logic (keeping foremast clean). For example, if environment is dev, update max ASG size from 3 to 1.

Raw code logic is something like:

config_check = True
config_checker = <something>

if config_check:
    json_load = check_json(config_checker, json_load)
post_spinnaker(json_load)

Cleans up things like #123
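As a sketch of the checker contract, assuming check_json(checker, payload) works as in the pseudocode above (function and key names here are hypothetical), with the dev ASG example from the issue:

```python
# A checker is any callable that receives the pipeline JSON (as a dict)
# and returns a possibly modified dict, or raises on invalid input.
def check_json(config_checker, payload):
    if not isinstance(payload, dict):
        raise ValueError("pipeline JSON must be an object")
    return config_checker(payload)

def shrink_dev_asg(payload):
    # Example business-logic plugin: cap the dev ASG max size at 1.
    if payload.get("env") == "dev":
        payload["asg"]["max_inst"] = min(payload["asg"].get("max_inst", 1), 1)
    return payload

payload = {"env": "dev", "asg": {"max_inst": 3}}
print(check_json(shrink_dev_asg, payload))  # max_inst becomes 1
```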

Clean up removed Lambda triggers

If you remove a Lambda trigger from your pipeline configs, it should be removed from Lambda. Currently, if a trigger is removed or renamed, nothing is cleaned up, which can leave duplicate or abandoned triggers.
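A sketch of the reconciliation step: given the trigger ARNs currently attached to a function and the ARNs declared in the pipeline config, compute which should be deleted (for stream-style triggers the deletion itself would go through boto3's delete_event_source_mapping; S3/SNS triggers are configured differently). Names and ARNs here are illustrative:

```python
def find_stale_triggers(attached_arns, desired_arns):
    """ARNs attached to the Lambda but no longer declared in config."""
    return sorted(set(attached_arns) - set(desired_arns))

attached = [
    "arn:aws:sns:us-east-1:111:old-topic",
    "arn:aws:sns:us-east-1:111:new-topic",
]
desired = ["arn:aws:sns:us-east-1:111:new-topic"]
print(find_stale_triggers(attached, desired))
# -> ['arn:aws:sns:us-east-1:111:old-topic']
```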

Plugin System

I've been really interested in setting up a plugin system which I believe will help the project tremendously.

The experimentation has been occurring in gogoair/foremast-plugin-experiment-1.

My suggestion is to reorganize the project so it's something like this:

└── src
    ├── foremast
    │   ├── common
    │   │   └── base.py
    │   ├── dns
    │   │   ├── aws.py
    │   │   ├── base.py
    │   │   └── gcp.py
    │   └── loadbalancer
    │       ├── aws.py
    │       ├── base.py
    │       └── gcp.py

The general idea is that we should be able to create resources for a specific 'provider', as such:

my_plugin = plugin_source.load_plugin('aws')
dns = my_plugin.Dns()
dns.create(**kwargs)

We'd still have to define specific plugin resource 'structure' but as a first pass, I'm suggesting:

  • create()
  • update()
  • destroy()

How does this sound?
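As a sketch of that resource structure (illustrative only, not the experiment's actual API): every provider module exposes resource classes that implement create()/update()/destroy().

```python
from abc import ABC, abstractmethod

class BaseResource(ABC):
    """Common interface every provider resource plugin implements."""

    @abstractmethod
    def create(self, **kwargs): ...

    @abstractmethod
    def update(self, **kwargs): ...

    @abstractmethod
    def destroy(self, **kwargs): ...

class Dns(BaseResource):
    """Stand-in AWS implementation; real logic would call Route 53."""

    def create(self, **kwargs):
        return f"created record {kwargs.get('name')}"

    def update(self, **kwargs):
        return f"updated record {kwargs.get('name')}"

    def destroy(self, **kwargs):
        return f"destroyed record {kwargs.get('name')}"

dns = Dns()
print(dns.create(name="app.example.com"))
```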

Multiplying a string

If someone has the following asg policy:

  "asg": {
    "subnet_purpose": "internal",
    "min_inst": 1,
    "max_inst": 5,
    "scaling_policy": {
      "metric": "NetworkIn",
      "threshold": "20000",
      "period_minutes": 1,
      "statistic": "Minimum"
    }
  },

You will see an error, like so:

packages/foremast/autoscaling_policy/create_policy.py", line 79, in prepare_policy_template
    'asg']['scaling_policy']['threshold'] * 0.5
TypeError: can't multiply sequence by non-int of type 'float'

The reason is that threshold is a string. If threshold is an int, this works.

We should ensure that thresholds are typecast to int(), as a possible fix for this issue.
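A minimal sketch of that fix, coercing the user-supplied threshold to a number before scaling it (the 0.5 factor mirrors the traceback; field names mirror the example config):

```python
def scaled_threshold(scaling_policy, factor=0.5):
    """Coerce a possibly-string threshold before multiplying."""
    threshold = int(scaling_policy["threshold"])  # "20000" -> 20000
    return threshold * factor

policy = {"metric": "NetworkIn", "threshold": "20000", "statistic": "Minimum"}
print(scaled_threshold(policy))  # -> 10000.0
```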

Retain Running Pipeline Configuration Post Update

Example flow:

DEV (/health) --> STAGE (/health) --> PROD (/health)

Found an issue where a developer may have a running pipeline deployed to STAGE. If I then update the pipeline configuration to change, say, the health endpoint to /status, it can break existing running pipelines whose code only listens at /health.

First pipeline:
DEV (/health) --> STAGE (/health) --> PROD (/health)

Second pipeline:
DEV (/health) --> STAGE (/health)

Third pipeline:
DEV (/status)

Second pipeline (promote to PROD):
DEV (/health) --> STAGE (/health) --> PROD (/status)  !!!!!

stage-judgement-* template files are copy-pasta except for one field

These templates:

https://github.com/gogoair/foremast/blob/master/src/foremast/templates/pipeline/stage-judgement-nonprod.json.j2
https://github.com/gogoair/foremast/blob/master/src/foremast/templates/pipeline/stage-judgement-promote-s3-nonprod.json.j2
https://github.com/gogoair/foremast/blob/master/src/foremast/templates/pipeline/stage-judgement-promote-s3-prod.json.j2

Are entirely the same except for the instructions field (and the instructions are Gogo-specific). It looks like this needs to be customizable per environment and per pipeline type (to override it for s3-to-nonprod and for s3-to-prod).

Disable / Rework environment specific rules

In the code below we force a few specific settings for the 'dev' environment. This applies to Gogo but may not suit other users. Other Foremast users probably have an account named 'dev' and are likely wondering why their ASG min/max settings aren't honored.

The code in question is:
https://github.com/gogoair/foremast/blob/master/src/foremast/pipeline/construct_pipeline_block.py#L135-L143

We need to figure out a better way to allow this functionality or remove this stuff completely.

Query executionEngine to support Orca 2.x

from Gitter Chat:

if you check one of your existing pipelines (that was built manually) you should see the executionEngine:v2, if not then yeah your Spinnaker version is a bit old. In any case we made the change in the src/foremast/templates/pipeline/pipeline_wrapper.json.j2 template.

Might be possible to query for that execution engine so that it does not need to be configured

   {% endif %}
   "application": "{{ data.app.appname }}",
+  "executionEngine": "v2",
   "triggers": [
     {% include "pipeline/trigger-jenkins.json.j2" %}
   ],
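A sketch of that query, assuming Gate's /applications/{app}/pipelineConfigs endpoint (the URL shape is an assumption) and defaulting to v1 when a pipeline has no executionEngine field:

```python
import json
from urllib.request import urlopen

def choose_engine(pipeline_configs):
    """Pick v2 if any existing pipeline already uses it, else default to v1."""
    engines = {p.get("executionEngine", "v1") for p in pipeline_configs}
    return "v2" if "v2" in engines else "v1"

def detect_execution_engine(gate_url, app_name):
    # Query Gate for the app's existing pipeline configs and inspect them.
    with urlopen(f"{gate_url}/applications/{app_name}/pipelineConfigs") as resp:
        return choose_engine(json.load(resp))
```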

Problems with Spinnaker LDAP authentication

Thanks for a great tool. I'm having some issues running it against my Spinnaker instance because it requires a login via LDAP.

When I run "foremast validate all", I get the following output:

2017-11-14 20:34:25,295 [INFO] foremast.consts:find_config:131 - Loading static configuration file.
2017-11-14 20:34:25,295 [WARNING] foremast.consts:validate_key_values:67 - [base] missing key "git_url", using None.
2017-11-14 20:34:25,295 [WARNING] foremast.consts:validate_key_values:67 - [base] missing key "types", using 'ec2,lambda,s3,datapipeline'.
2017-11-14 20:34:25,296 [WARNING] foremast.consts:validate_key_values:67 - [base] missing key "ami_json_url", using None.
2017-11-14 20:34:25,296 [WARNING] foremast.consts:validate_key_values:67 - [base] missing key "default_securitygroup_rules", using ''.
2017-11-14 20:34:25,296 [WARNING] foremast.consts:validate_key_values:67 - [base] missing key "default_ec2_securitygroups", using ''.
2017-11-14 20:34:25,297 [WARNING] foremast.consts:validate_key_values:67 - [base] missing key "default_elb_securitygroups", using ''.
2017-11-14 20:34:25,297 [INFO] foremast.consts:validate_key_values:58 - Section missing from configurations: [credentials]
2017-11-14 20:34:25,297 [WARNING] foremast.consts:validate_key_values:67 - [credentials] missing key "gitlab_token", using None.
2017-11-14 20:34:25,298 [WARNING] foremast.consts:validate_key_values:67 - [credentials] missing key "slack_token", using None.
2017-11-14 20:34:25,298 [INFO] foremast.consts:validate_key_values:58 - Section missing from configurations: [task_timeouts]
2017-11-14 20:34:25,298 [WARNING] foremast.consts:validate_key_values:67 - [task_timeouts] missing key "default", using 120.
2017-11-14 20:34:25,298 [WARNING] foremast.consts:validate_key_values:67 - [task_timeouts] missing key "envs", using '{}'.
2017-11-14 20:34:25,298 [INFO] foremast.consts:validate_key_values:58 - Section missing from configurations: [whitelists]
2017-11-14 20:34:25,299 [WARNING] foremast.consts:validate_key_values:67 - [whitelists] missing key "asg_whitelist", using ''.
2017-11-14 20:34:25,299 [WARNING] foremast.consts:validate_key_values:67 - [base] missing key "gate_client_cert", using ''.
2017-11-14 20:34:25,299 [WARNING] foremast.consts:validate_key_values:67 - [base] missing key "gate_ca_bundle", using ''.
2017-11-14 20:34:25,299 [INFO] foremast.consts:validate_key_values:58 - Section missing from configurations: [links]
2017-11-14 20:34:25,300 [WARNING] foremast.consts:validate_key_values:67 - [links] missing key "default", using '{}'.
2017-11-14 20:34:25,558 [INFO] foremast.validate:validate_all:24 - Running all validate steps.
InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
Traceback (most recent call last):
  File "/usr/local/bin/foremast", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.6/site-packages/foremast/__main__.py", line 116, in main
    args.parsed.func(args)
  File "/usr/local/lib/python3.6/site-packages/foremast/validate.py", line 25, in validate_all
    validate_gate()
  File "/usr/local/lib/python3.6/site-packages/foremast/validate.py", line 13, in validate_gate
    credentials = get_env_credential()
  File "/usr/local/lib/python3.6/site-packages/foremast/utils/credentials.py", line 81, in get_env_credential
    credential = credential_response.json()
  File "/usr/local/lib/python3.6/site-packages/requests/models.py", line 892, in json
    return complexjson.loads(self.text, **kwargs)
  File "/usr/local/lib/python3.6/json/__init__.py", line 354, in loads
    return _default_decoder.decode(s)
  File "/usr/local/lib/python3.6/json/decoder.py", line 339, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/local/lib/python3.6/json/decoder.py", line 357, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Upon closer inspection, this appears to be because the Gate URL is returning the login form as HTML, instead of the expected JSON:

(Pdb) l
352
353  	        """
354  	        try:
355  	            obj, end = self.scan_once(s, idx)
356  	        except StopIteration as err:
357  ->	            raise JSONDecodeError("Expecting value", s, err.value) from None
358  	        return obj, end
[EOF]
(Pdb) p s
'<html><head><title>Login Page</title></head><body onload=\'document.f.username.focus();\'>\n<h3>Login with Username and Password</h3><form name=\'f\' action=\'/login\' method=\'POST\'>\n<table>\n\t<tr><td>User:</td><td><input type=\'text\' name=\'username\' value=\'\'></td></tr>\n\t<tr><td>Password:</td><td><input type=\'password\' name=\'password\'/></td></tr>\n\t<tr><td colspan=\'2\'><input name="submit" type="submit" value="Login"/></td></tr>\n</table>\n</form></body></html>'

Any suggestions on how to make this work?
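One avenue worth trying, sketched below, is to establish an authenticated Gate session first: post to the form-based /login endpoint (the field names come from the HTML above), keep the session cookie, and reuse it for API calls such as /credentials. This is a hedged workaround, not a supported Foremast feature, and whether it works depends on your Gate's Spring Security/LDAP configuration.

```python
import json
from http.cookiejar import CookieJar
from urllib.parse import urlencode
from urllib.request import HTTPCookieProcessor, build_opener

def login_form_body(username, password):
    """Encode the username/password pair the login form expects."""
    return urlencode({"username": username, "password": password}).encode()

def gate_session(gate_url, username, password):
    """Return a URL opener holding an authenticated Gate session cookie."""
    opener = build_opener(HTTPCookieProcessor(CookieJar()))
    opener.open(f"{gate_url}/login", data=login_form_body(username, password))
    return opener

def get_credentials(opener, gate_url):
    """Call a Gate endpoint with the authenticated session."""
    with opener.open(f"{gate_url}/credentials") as response:
        return json.load(response)
```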

Add IAM requirements to documentation

The documentation should include the IAM requirements for foremast to run successfully.

Foremast touches Route 53, Lambda, S3, security groups, IAM, and other AWS services using boto3 outside of Spinnaker's API.

create-scaling-policy should use module entry point

create-scaling-policy currently executes via the runner entry point. It should probably use the module (__main__.py) entry point instead, which would align it with the other create-* commands.

Enable DynamoDB Triggers to AWS Lambda Pipeline

Currently the Foremast Lambda pipeline supports these triggers: api-gateway, s3, sns, cloudwatch-event, and cloudwatch-log.

Enable DynamoDB triggers, e.g. a Lambda is triggered when a record is inserted or updated. Today this is achieved manually: devops need to add the trigger after deploying via the pipeline.
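A DynamoDB trigger maps to a Lambda event source mapping on the table's stream. A minimal sketch of the parameters Foremast could build (the config keys stream_arn/batch_size are hypothetical, and the actual AWS call would be boto3's create_event_source_mapping):

```python
def dynamodb_mapping_params(function_name, trigger):
    """Build event-source-mapping parameters from a pipeline trigger block."""
    return {
        "FunctionName": function_name,
        "EventSourceArn": trigger["stream_arn"],
        "StartingPosition": trigger.get("starting_position", "LATEST"),
        "BatchSize": trigger.get("batch_size", 100),
        "Enabled": True,
    }

params = dynamodb_mapping_params(
    "myapp-dev",
    {"stream_arn": "arn:aws:dynamodb:us-east-1:111:table/orders/stream/2018"},
)
print(params["StartingPosition"])  # -> LATEST
```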

feat: Add support for setting Public IP

Add an option to enable or disable Public IP allocation. Will need to figure out which values "associatePublicIpAddress" takes in the stage-deploy.json.j2 template.

{
  "associatePublicIpAddress": null
}

Documentation around MultiRegion Failover

We rolled out some basic functionality for creating multi-region DNS with simple failover. It would be good to get some basic documentation around this before we get too far ahead of ourselves!

fix: Check `pyapi-gitlab` returns

Problem

Need to check pyapi-gitlab returns before trying to use the returned object. For request errors, False is returned instead of a more useful object.

There has been some work in pyapi-gitlab/pyapi-gitlab#238 to raise instead of returning None. Switching to handling exceptions would make this a little easier.

Example:

Should check the return of .getbranch() before trying to access it like a dict:

# src/foremast/configs/prepare_configs.py
    branch = file_lookup.server.getbranch(file_lookup.project_id, 'master')
    if branch is False:  # pyapi-gitlab returns False on request errors
        raise RuntimeError('getbranch("master") failed')
    config_commit = branch['commit']['id']

Specify application name

It would be extremely helpful if there was a way to specify the application name that gets created.

Currently it takes the name by combining the repository name and the group name from git. That means that the repo name and group name combined must be shorter than 24 characters and must only contain alphanumeric characters. This leads to repos with names that are ambiguous and hard to read.

It seems like a pretty bad flow to have your pipelining tool dictate your repository names. If overwriting others' apps is a concern, the group name could be added to the configured name so that only a group can overwrite its own apps.

feat: Manual deployment type

Provide the ability to have Foremast manage a predefined pipeline using its raw JSON. This will allow one to copy the JSON from the pipeline, save it in a repository, and have Foremast reconfigure the defined pipeline on every run.

Enable toggling of Jenkins trigger

There are some instances where users don't want pipelines to automatically trigger on Jenkins job completion and they prefer to run it manually.

Lambda creation/code update should create ALIAS with ENV name

When creating or updating a Lambda function, we should alias $LATEST with the ENV name so developers can easily fetch the environment name and apply the proper configs. That way they do not need to know account numbers or any other info.

Having Lambda configs for all environments locally shortens execution time (no lookups of external configs/services) and lets us ship a single artifact across ENVs for Lambdas that need multiple different configs (for example, ones that query Eureka for services).

This in turn means triggers must be applied to the ENV alias rather than to $LATEST as before, which will leave existing triggers defined on $LATEST for already-deployed functions.
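A sketch of the aliasing step. In practice this maps to boto3's create_alias/update_alias calls; the names and ARNs here are illustrative.

```python
def alias_arn(function_arn, env):
    """Qualified ARN a trigger should target once the env alias exists."""
    return f"{function_arn}:{env}"

def alias_params(function_name, env):
    """Parameters for create_alias/update_alias pointing the env at $LATEST."""
    return {"FunctionName": function_name, "Name": env, "FunctionVersion": "$LATEST"}

print(alias_arn("arn:aws:lambda:us-east-1:111:function:myapp", "dev"))
# -> arn:aws:lambda:us-east-1:111:function:myapp:dev
```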

I can submit a PR for this if everyone agrees we need this functionality.

Add config validation before running anything

Right now the errors are pretty ambiguous if one of the runway config files is missing or has a JSON error. We could run a config validation stage before doing anything else: make sure all the needed application-master-*.json files exist, pipeline.json exists, and the JSON is valid.
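A sketch of such a pre-flight check, assuming a flat runway directory layout (file names follow the issue text; the directory structure is an assumption):

```python
import json
from pathlib import Path

def validate_runway(runway_dir):
    """Return a list of human-readable errors; empty means valid."""
    errors = []
    runway = Path(runway_dir)
    configs = sorted(runway.glob("application-master-*.json"))
    if not configs:
        errors.append("no application-master-*.json files found")
    pipeline = runway / "pipeline.json"
    if not pipeline.is_file():
        errors.append("pipeline.json is missing")
    # Parse every config that exists so all JSON errors surface at once.
    for path in configs + ([pipeline] if pipeline.is_file() else []):
        try:
            json.loads(path.read_text())
        except json.JSONDecodeError as error:
            errors.append(f"{path.name}: invalid JSON ({error})")
    return errors
```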

datapipeline: Interpolate Subnets similar to EC2 Pipelines

Today, EC2 pipeline users only need to say internal or external to deploy to specific subnets, resolved from metadata tags. We should have the same functionality in the datapipeline feature: users specify internal or external and we figure out the subnets for them.

Today they need to maintain a static list of subnets, which is annoying and less than ideal (especially since that information is not always shared with non-admins).
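A sketch of the interpolation step: given subnets as described by EC2 (e.g. from boto3's describe_subnets), pick the ones whose metadata tag marks them internal or external. The tag key "purpose" is an assumption, not necessarily the tag the EC2 pipelines actually use.

```python
def subnets_for_purpose(subnets, purpose):
    """Return subnet IDs whose tags match the requested purpose."""
    matched = []
    for subnet in subnets:
        tags = {t["Key"]: t["Value"] for t in subnet.get("Tags", [])}
        if tags.get("purpose") == purpose:
            matched.append(subnet["SubnetId"])
    return matched

subnets = [
    {"SubnetId": "subnet-aaa", "Tags": [{"Key": "purpose", "Value": "internal"}]},
    {"SubnetId": "subnet-bbb", "Tags": [{"Key": "purpose", "Value": "external"}]},
]
print(subnets_for_purpose(subnets, "internal"))  # -> ['subnet-aaa']
```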

foremast-infrastructure is not creating the s3 bucket

foremast-infrastructure looks for an existing S3 bucket instead of creating the one defined in foremast.cfg under [formats]:

s3_bucket = bucket-{project}-{env}

Trace:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/foremast/s3/create_archaius.py", line 51, in init_properties
    s3client.Object(archaius['bucket'], archaius_file).get()
  File "/usr/local/lib/python3.6/site-packages/boto3/resources/factory.py", line 520, in do_action
    response = action(self, *args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/boto3/resources/action.py", line 83, in __call__
    response = getattr(parent.meta.client, operation_name)(**params)
  File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 310, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 599, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.NoSuchBucket: An error occurred (NoSuchBucket) when calling the GetObject operation: The specified bucket does not exist

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/foremast-infrastructure", line 11, in <module>
    load_entry_point('foremast==3.19.4.dev1+ge1725fe', 'console_scripts', 'foremast-infrastructure')()
  File "/usr/local/lib/python3.6/site-packages/foremast/runner.py", line 265, in prepare_infrastructure
    runner.create_archaius()
  File "/usr/local/lib/python3.6/site-packages/foremast/runner.py", line 132, in create_archaius
    s3.init_properties(env=self.env, app=self.app)
  File "/usr/local/lib/python3.6/site-packages/foremast/s3/create_archaius.py", line 56, in init_properties
    s3client.Object(archaius['bucket'], archaius_file).put()
  File "/usr/local/lib/python3.6/site-packages/boto3/resources/factory.py", line 520, in do_action
    response = action(self, *args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/boto3/resources/action.py", line 83, in __call__
    response = getattr(parent.meta.client, operation_name)(**params)
  File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 310, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 599, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.NoSuchBucket: An error occurred (NoSuchBucket) when calling the PutObject operation: The specified bucket does not exist
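A possible fix sketch: render the bucket name from the [formats] template and create the bucket (e.g. via boto3's create_bucket) before the PutObject, rather than assuming it exists. The helper below only shows the name rendering; the AWS calls and error handling are omitted.

```python
def archaius_bucket_name(template, project, env):
    """Render the [formats] s3_bucket template, e.g. bucket-{project}-{env}."""
    return template.format(project=project, env=env)

print(archaius_bucket_name("bucket-{project}-{env}", "myapp", "dev"))
# -> bucket-myapp-dev
```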
