awslabs / aws-config-rdk

The AWS Config Rules Development Kit helps developers set up, author and test custom Config rules. It contains scripts to enable AWS Config, create a Config rule and test it with sample ConfigurationItems.

Home Page: https://aws-config-rdk.readthedocs.io

License: Apache License 2.0

amazon-web-services aws aws-config aws-config-rules rdk

aws-config-rdk's Introduction

AWS RDK


AWS Config Rules Development Kit

We greatly appreciate feedback and bug reports at [email protected]! You may also create an issue on this repo.

The RDK is designed to support a "Compliance-as-Code" workflow that is intuitive and productive. It abstracts away much of the undifferentiated heavy lifting associated with deploying AWS Config rules backed by custom lambda functions, and provides a streamlined develop-deploy-monitor iterative process.

For complete documentation, including command reference, check out the ReadTheDocs documentation.

Getting Started

The RDK uses Python 3.7+ and is installed via pip. It requires an AWS account and sufficient permissions to manage the Config service and to create S3 Buckets, Roles, and Lambda Functions. An AWS IAM Policy Document that describes the minimum necessary permissions can be found at policy/rdk-minimum-permissions.json.

Under the hood, rdk uses boto3 to make API calls to AWS, so you can set your credentials any way that boto3 recognizes (options 3 through 8 here) or pass them in with the command-line parameters --profile, --region, --access-key-id, or --secret-access-key.

If you just want to use the RDK, go ahead and install it using pip.

pip install rdk

Alternatively, if you want to see the code and/or contribute, you can clone the git repo and then, from the repo directory, use pip to install the package. Use the -e flag to generate symlinks so that any edits you make will be reflected when you run the installed package.

If you are going to author your Lambda functions using Java you will need to have Java 8 and gradle installed. If you are going to author your Lambda functions in C# you will need to have the dotnet CLI and the .NET Core Runtime 1.08 installed.

pip install -e .

To make sure the RDK is installed correctly, run the package from the command line without any arguments; it should display help information.

rdk
usage: rdk [-h] [-p PROFILE] [-k ACCESS_KEY_ID] [-s SECRET_ACCESS_KEY]
           [-r REGION] [-f REGION_FILE] [--region-set REGION_SET]
           [-v] <command> ...
rdk: error: the following arguments are required: <command>, <command arguments>

Usage

Configure your env

To use the RDK, it's recommended to create a directory that will be your working directory. This should be committed to a source code repo, and ideally created as a python virtualenv. In that directory, run the init command to set up your AWS Config environment.

rdk init
Running init!
Creating Config bucket config-bucket-780784666283
Creating IAM role config-role
Waiting for IAM role to propagate
Config Service is ON
Config setup complete.
Creating Code bucket config-rule-code-bucket-780784666283ap-southeast-1

Running init subsequent times will validate your AWS Config setup and re-create any S3 buckets or IAM resources that are needed.

  • If you have a Config delivery bucket already present in another AWS account, use --config-bucket-exists-in-another-account as an argument.
rdk init --config-bucket-exists-in-another-account
  • If you have an AWS Organizations/Control Tower setup in your AWS environment, additionally use --control-tower as an argument.
rdk init --control-tower --config-bucket-exists-in-another-account
  • If the bucket for custom Lambda code is already present in the current account, use the --skip-code-bucket-creation argument.
rdk init --skip-code-bucket-creation
  • If you want rdk to create/update and upload the rdklib layer for you, use the --generate-lambda-layer argument. In supported regions, rdk will deploy the layer using the Serverless Application Repository; otherwise it will build a local Lambda layer archive and upload it for use.
rdk init --generate-lambda-layer
  • If you want rdk to give a custom name to the Lambda layer, use the --custom-layer-name argument. The Serverless Application Repository currently cannot be used for custom Lambda layers.
rdk init --generate-lambda-layer --custom-layer-name <LAYER_NAME>

Create Rules

In your working directory, use the create command to start creating a new custom rule. You must specify the runtime for the lambda function that will back the Rule, and you can also specify a resource type (or comma-separated list of types) that the Rule will evaluate or a maximum frequency for a periodic rule. This will add a new directory for the rule and populate it with several files, including a skeleton of your Lambda code.

rdk create MyRule --runtime python3.11 --resource-types AWS::EC2::Instance --input-parameters '{"desiredInstanceType":"t2.micro"}'
Running create!
Local Rule files created.

On Windows it is necessary to escape the double-quotes when specifying input parameters, so the --input-parameters argument would instead look something like this:

'{\"desiredInstanceType\":\"t2.micro\"}'

As of RDK v0.17.0, you can also specify --resource-types ALL to include all resource types.

Note that you can create rules that use EITHER resource-types OR maximum-frequency, but not both. We have found that rules that try to be both event-triggered as well as periodic wind up being very complicated and so we do not recommend it as a best practice.

Edit Rules Locally

Once you have created the rule, edit the python file in your rule directory (in the above example it would be MyRule/MyRule.py, but may be deeper into the rule directory tree depending on your chosen Lambda runtime) to add whatever logic your Rule requires in the evaluate_compliance function. You will have access to the CI that was sent by Config, as well as any parameters configured for the Config Rule. Your function should return either a simple compliance status (one of COMPLIANT, NON_COMPLIANT, or NOT_APPLICABLE), or if you're using the python or node runtimes you can return a JSON object with multiple evaluation responses that the RDK will send back to AWS Config.

An example would look like:

for sg in response['SecurityGroups']:
    evaluations.append(
        {
            'ComplianceResourceType': 'AWS::EC2::SecurityGroup',
            'ComplianceResourceId': sg['GroupId'],
            'ComplianceType': 'COMPLIANT',
            'Annotation': 'This is an important note.',
            'OrderingTimestamp': str(datetime.datetime.now())
        }
    )
return evaluations

This is necessary for periodic rules that are not triggered by any CI change (which means the CI that is passed in will be null), and also for attaching annotations to your evaluation results.
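
To make that concrete, here is a minimal, hedged sketch of an evaluate_compliance function that handles both an event-triggered evaluation (a configuration item is passed in) and a periodic evaluation (the configuration item is None). The exact signature, helper names, and parameter handling depend on the skeleton that RDK generated for your runtime, so treat this as illustrative only:

import datetime

import boto3

def evaluate_compliance(configuration_item, rule_parameters):
    # Periodic trigger: no CI is passed in, so gather data yourself and
    # return a list of evaluations (one dict per resource), as in the
    # security-group example above.
    if configuration_item is None:
        ec2 = boto3.client('ec2')
        evaluations = []
        for sg in ec2.describe_security_groups()['SecurityGroups']:
            evaluations.append({
                'ComplianceResourceType': 'AWS::EC2::SecurityGroup',
                'ComplianceResourceId': sg['GroupId'],
                'ComplianceType': 'COMPLIANT',
                'Annotation': 'Evaluated by the periodic branch.',
                'OrderingTimestamp': str(datetime.datetime.now())
            })
        return evaluations

    # Configuration-change trigger: evaluate the single resource described
    # by the CI and return a simple status string.
    desired = rule_parameters.get('desiredInstanceType')
    if configuration_item['configuration'].get('instanceType') == desired:
        return 'COMPLIANT'
    return 'NON_COMPLIANT'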

If you want to see what the JSON structure of a CI looks like for creating your logic, you can use

rdk sample-ci <Resource Type>

to output a formatted JSON document.

Write and Run Unit Tests

If you are writing Config Rules using either of the Python runtimes, there will be a <rule name>_test.py file deployed along with your Lambda function skeleton. This can be used to write unit tests according to the standard Python unittest framework, which can be run using the test-local rdk command:

rdk test-local MyTestRule
Running local test!
Testing MyTestRule
Looking for tests in /Users/mborch/Code/rdk-dev/MyTestRule

---------------------------------------------------------------------

Ran 0 tests in 0.000s

OK
<unittest.runner.TextTestResult run=0 errors=0 failures=0>

The test file includes setup for the MagicMock library that can be used to stub boto3 API calls if your rule logic will involve making API calls to gather additional information about your AWS environment. For some tips on how to do this, check out this blog post: Mock Is Magic
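
As an illustration, here is a minimal, hedged sketch of the kind of test case you might add to that file. It assumes a rule named MyRule whose module exposes an evaluate_compliance(configuration_item, rule_parameters) helper like the sketch shown earlier; adjust the import, signature, and assertions to match the skeleton RDK generated for your runtime:

import unittest
from unittest.mock import MagicMock, patch

import MyRule as rule  # hypothetical module name; matches your rule directory

class MyRuleTest(unittest.TestCase):
    def test_noncompliant_instance_type(self):
        # A trimmed-down ConfigurationItem; run `rdk sample-ci AWS::EC2::Instance`
        # to see the full structure.
        ci = {
            'resourceType': 'AWS::EC2::Instance',
            'resourceId': 'i-0123456789abcdef0',
            'configuration': {'instanceType': 'm5.large'}
        }
        # Stub boto3.client so no real AWS calls are made if the rule creates a client.
        with patch('boto3.client', return_value=MagicMock()):
            result = rule.evaluate_compliance(ci, {'desiredInstanceType': 't2.micro'})
        self.assertEqual(result, 'NON_COMPLIANT')

if __name__ == '__main__':
    unittest.main()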

Modify Rule

If you need to change the parameters of a Config rule in your working directory, you can use the modify command. Any parameters you specify will overwrite existing values; any that you do not specify will not be changed.

rdk modify MyRule --runtime python3.11 --maximum-frequency TwentyFour_Hours --input-parameters '{"desiredInstanceType":"t2.micro"}'
Running modify!
Modified Rule 'MyRule'.  Use the `deploy` command to push your changes to AWS.

Again, on Windows the input parameters would look like:

'{\"desiredInstanceType\":\"t2.micro\"}'

It is worth noting that until you actually call the deploy command, your rule exists only in your working directory; none of the Rule commands discussed thus far makes changes to your account.

Deploy Rule

Once you have completed your compliance validation code and set your Rule's configuration, you can deploy the Rule to your account using the deploy command. This will zip up your code (and any other associated code files) into a deployable package (or run a gradle build if you have selected the java8 runtime, or run the Lambda packaging step from the dotnet CLI if you have selected the dotnetcore1.0 runtime), copy that zip file to S3, and then launch or update a CloudFormation stack that defines your Config Rule, Lambda function, and the necessary permissions and IAM Roles for it to function. Since CloudFormation does not deeply inspect Lambda code objects in S3 when constructing its changeset, the deploy command also directly updates the Lambda function on subsequent deployments to make sure code changes are propagated correctly.

rdk deploy MyRule
Running deploy!
Zipping MyRule
Uploading MyRule
Creating CloudFormation Stack for MyRule
Waiting for CloudFormation stack operation to complete...
...
Waiting for CloudFormation stack operation to complete...
Config deploy complete.

The exact output will vary depending on Lambda runtime. You can use the --all flag to deploy all of the rules in your working directory. If you used the --generate-lambda-layer flag in rdk init, use the --generated-lambda-layer flag for rdk deploy.

Deploy Organization Rule

You can also deploy the Rule to your AWS Organization using the deploy-organization command. For successful evaluation of custom rules in child accounts, please make sure you do one of the following:

  1. Set ASSUME_ROLE_MODE in the Lambda code to True so that the Lambda function assumes the role attached to the Config service, and confirm that the role trusts the master account where the Lambda function is going to be deployed.
  2. Set ASSUME_ROLE_MODE in the Lambda code to True so that the Lambda function assumes a custom role, define an optional rule parameter with the key ExecutionRoleName set to your custom role name, and confirm that the role trusts the master account of the organization where the Lambda function will be deployed.
rdk deploy-organization MyRule
Running deploy!
Zipping MyRule
Uploading MyRule
Creating CloudFormation Stack for MyRule
Waiting for CloudFormation stack operation to complete...
...
Waiting for CloudFormation stack operation to complete...
Config deploy complete.

The exact output will vary depending on Lambda runtime. You can use the --all flag to deploy all of the rules in your working directory. This command uses PutOrganizationConfigRule API for the rule deployment. If a new account joins an organization, the rule is deployed to that account. When an account leaves an organization, the rule is removed. Deployment of existing organizational AWS Config Rules will only be retried for 7 hours after an account is added to your organization if a recorder is not available. You are expected to create a recorder if one doesn't exist within 7 hours of adding an account to your organization.
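
For reference, the ASSUME_ROLE_MODE setting referred to above typically lives near the top of the generated Lambda code as a module-level constant; a minimal sketch, assuming the default Python skeleton (the exact name and location can vary by RDK version and runtime):

# Near the top of MyRule/MyRule.py in the generated skeleton:
# When True, the Lambda function assumes the Config role in the target
# account (or the custom role named by the ExecutionRoleName rule parameter)
# before making API calls, which is required for organization deployments.
ASSUME_ROLE_MODE = True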

View Logs For Deployed Rule

Once the Rule has been deployed to AWS you can get the CloudWatch logs associated with your Lambda function using the logs command.

rdk logs MyRule -n 5
2017-11-15 22:59:33 - START RequestId: 96e7639a-ca15-11e7-95a2-b1521890638d Version: $LATEST
2017-11-15 23:41:13 - REPORT RequestId: 68e0304f-ca1b-11e7-b735-81ebae95acda    Duration: 0.50 ms    Billed Duration: 100 ms     Memory Size: 256 MB     Max Memory Used: 36 MB
2017-11-15 23:41:13 - END RequestId: 68e0304f-ca1b-11e7-b735-81ebae95acda
2017-11-15 23:41:13 - Default RDK utility class does not yet support Scheduled Notifications.
2017-11-15 23:41:13 - START RequestId: 68e0304f-ca1b-11e7-b735-81ebae95acda Version: $LATEST

You can use the -n and -f command-line flags just like the UNIX tail command to view a larger number of log events and to continuously poll for new events. The latter option can be useful in conjunction with manually initiating Config Evaluations for your deployed Config Rule to make sure it is behaving as expected.

Running the tests

The testing directory contains scripts and buildspec files that I use to run basic functionality tests across a variety of CLI environments (currently Ubuntu Linux running Python 3.7/3.8/3.9/3.10, and Windows Server running Python 3.10). If there is interest I can release a CloudFormation template that could be used to build the test environment, let me know if this is something you want!

Advanced Features

Cross-Account Deployments

Features have been added to the RDK to facilitate the cross-account deployment pattern that enterprise customers have standardized for custom Config Rules. A cross-account architecture is one in which the Lambda functions are deployed to a single central "Compliance" account (which may be the same as a central "Security" account), and the Config Rules are deployed to any number of "Satellite" accounts that are used by other teams or departments. This gives the compliance team confidence that their rule logic cannot be tampered with and makes it much easier for them to modify rule logic without having to go through a complex deployment process to potentially hundreds of AWS accounts. The cross-account pattern uses two advanced RDK features:

  • --functions-only (-f) deployment
  • create-rule-template command

Functions-Only Deployment

By using the -f or --functions-only flag on the deploy command, the RDK will deploy only the necessary Lambda Functions, Lambda Execution Role, and Lambda Permissions to the account specified by the execution credentials. It accomplishes this by batching up all of the Lambda function CloudFormation snippets for the selected Rule(s) into a single dynamically generated template and deploying that CloudFormation template. One consequence of this is that subsequent deployments that specify a different set of rules for the same stack name will update that CloudFormation stack, and any Rules that were included in the first deployment but not in the second will be removed. You can use the --stack-name parameter to override the default CloudFormation stack name if you need to manage different subsets of your Lambda Functions independently. The intended usage is to deploy the functions for all of the Config rules in the Security/Compliance account, which can be done simply by using rdk deploy -f --all from your working directory.

create-rule-template command

This command generates a CloudFormation template that defines the AWS Config rules themselves, along with the Config Role, Config data bucket, Configuration Recorder, and Delivery channel necessary for the Config rules to work in a satellite account. You must specify the file name for the generated template using the --output-file or -o command line flags. The generated template takes a single parameter of the AccountID of the central compliance account that contains the Lambda functions that will back your custom Config Rules. The generated template can be deployed in the desired satellite accounts through any of the means that you can deploy any other CloudFormation template, including the console, the CLI, as a CodePipeline task, or using StackSets. The create-rule-template command takes all of the standard arguments for selecting Rules to include in the generated template, including lists of individual Rule names, an --all flag, or using the RuleSets feature described below.

rdk create-rule-template -o remote-rule-template.json --all
Generating CloudFormation template!
CloudFormation template written to remote-rule-template.json

Disable the supported resource types check

It is now possible to define a resource type that is not yet supported by rdk. To disable the supported resource check, use the optional flag --skip-supported-resource-check with the create command.

rdk create MyRule --runtime python3.11 --resource-types AWS::New::ResourceType --skip-supported-resource-check
'AWS::New::ResourceType' not found in list of accepted resource types.
Skip-Supported-Resource-Check Flag set (--skip-supported-resource-check), ignoring missing resource type error.
Running create!
Local Rule files created.

Custom Lambda Function Name

As of version 0.7.14, instead of defaulting the Lambda function name to RDK-Rule-Function-<RULE_NAME>, it is possible to customize the Lambda function name to any string of up to 64 characters (per Lambda's naming standards) using the optional --custom-lambda-name flag while performing rdk create. This enables:

  1. Longer Config rule names.
  2. Custom Lambda function naming per personal or enterprise standards.
rdk create MyLongerRuleName --runtime python3.11 --resource-types AWS::EC2::Instance --custom-lambda-name custom-prefix-for-MyLongerRuleName
Running create!
Local Rule files created.

The above example creates a Config rule named MyLongerRuleName backed by a Lambda function named custom-prefix-for-MyLongerRuleName instead of RDK-Rule-Function-MyLongerRuleName.

RuleSets

New as of version 0.3.11, it is possible to add RuleSet tags to rules that can be used to deploy and test groups of rules together. Rules can belong to multiple RuleSets, and RuleSet membership is stored only in the parameters.json metadata. The deploy, create-rule-template, and test-local commands are RuleSet-aware such that a RuleSet can be passed in as the target instead of --all or a specific named Rule.

A comma-delimited list of RuleSets can be added to a Rule when you create it (using the --rulesets flag), as part of a modify command, or using new ruleset subcommands to add or remove individual rules from a RuleSet.

Running rdk rulesets list will display a list of the RuleSets currently defined across all of the Rules in the working directory.

rdk rulesets list
RuleSets:  AnotherRuleSet MyNewSet

Naming a specific RuleSet will list all of the Rules that are part of that RuleSet.

rdk rulesets list AnotherRuleSet
Rules in AnotherRuleSet :  RSTest

Rules can be added to or removed from RuleSets using the add and remove subcommands:

rdk rulesets add MyNewSet RSTest
RSTest added to RuleSet MyNewSet

rdk rulesets remove AnotherRuleSet RSTest
RSTest removed from RuleSet AnotherRuleSet

RuleSets are a convenient way to maintain a single repository of Config Rules that may need to have subsets of them deployed to different environments. For example your development environment may contain some of the Rules that you run in Production but not all of them; RuleSets gives you a way to identify and selectively deploy the appropriate Rules to each environment.

Managed Rules

The RDK is able to deploy AWS Managed Rules.

To do so, create a rule using rdk create and provide a valid SourceIdentifier via the --source-identifier CLI option. The list of Managed Rules can be found in the AWS Config documentation; note that the Identifier can be obtained by replacing the dashes with underscores and using all capitals (for example, the "guardduty-enabled-centralized" rule has the SourceIdentifier "GUARDDUTY_ENABLED_CENTRALIZED"). Just like custom Rules, you will need to specify source events and/or a maximum evaluation frequency, and also pass in any Rule parameters. The resulting Rule directory will contain only the parameters.json file, but rdk deploy or rdk create-rule-template can be used to deploy the Managed Rule like any other Custom Rule.
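
For example, a managed rule could be created with a command along these lines (the rule name is arbitrary, and the flags are the same ones used for custom rules elsewhere in this document; the required trigger settings depend on the specific managed rule):

rdk create GuardDutyEnabledCentralized --source-identifier GUARDDUTY_ENABLED_CENTRALIZED --maximum-frequency TwentyFour_Hours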

Deploying Rules Across Multiple Regions

The RDK is able to run init/deploy/undeploy across multiple regions with rdk -f <region file> -t <region set>.

If no region group is specified, rdk will deploy to the default region set.

To create a sample starter region group, run rdk create-region-set; to specify the output file name, add -o <region set output file name>. This will create a region set with the following groups and regions: "default":["us-east-1","us-west-1","eu-north-1","ap-east-1"] and "aws-cn-region-set":["cn-north-1","cn-northwest-1"].

Using RDK to Generate a Lambda Layer in a region (Python3)

By default, rdk init --generate-lambda-layer will generate an rdklib Lambda layer while running init in whatever region it is run. To force re-generation of the layer, run rdk init --generate-lambda-layer again in that region.

To use this generated lambda layer, add the flag --generated-lambda-layer when running rdk deploy. For example: rdk -f regions.yaml deploy LP3_TestRule_P39_lib --generated-lambda-layer

If you created the layer with a custom name (by running rdk init --custom-lambda-layer), add a similar custom-lambda-layer flag when running deploy.

Support & Feedback

This project is maintained by AWS Solution Architects and Consultants. It is not part of an AWS service and support is provided best-effort by the maintainers. To post feedback, submit feature ideas, or report bugs, please use the Issues section of this repo.

Contributing

Email us at [email protected] if you have any questions. We are happy to help and discuss.

Contacts

  • Benjamin Morris - bmorrissirromb - current maintainer
  • Julio Delgado Jr - tekdj7 - current maintainer

Past Contributors

  • Michael Borchert - Original Python version
  • Jonathan Rault - Original Design, testing, feedback
  • Greg Kim and Chris Gutierrez - Initial work and CI definitions
  • Henry Huang - Original CFN templates and other code
  • Santosh Kumar - maintainer
  • Jose Obando - maintainer
  • Jarrett Andrulis - jarrettandrulis - maintainer
  • Sandeep Batchu - batchus - maintainer
  • Mark Beacom - mbeacom - maintainer
  • Ricky Chau - rickychau2780 - maintainer

License

This project is licensed under the Apache 2.0 License

Acknowledgments

  • the boto3 team makes all of this magic possible.

aws-config-rdk's Issues

Using the function-only without RDK init

RDK init is great for dev, but not for a consistent deployment in a Account Vending Machine context.

The Function-only is searching to use the s3 bucket: config-rule-code-bucket-XXXXXXXXXXXXXXXXXXXXX

Without RDK init, it fails (see below). Can we either create the "rdk init" bucket in function-only or specify a particular bucket to create/use : rdk deploy -f --all --s3-bucket-name blabla

[Container] 2018/07/18 06:54:04 Running command rdk deploy -f --all
Running deploy!
Generating CloudFormation template for Lambda Functions!
Traceback (most recent call last):
  File "/usr/local/bin/rdk", line 32, in <module>
    return_val = my_rdk.process_command()
  File "/usr/local/lib/python3.6/site-packages/rdk/rdk.py", line 58, in process_command
    exit_code = method_to_call()
  File "/usr/local/lib/python3.6/site-packages/rdk/rdk.py", line 373, in deploy
    Key=self.args.stack_name + ".json"
  File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 314, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 612, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.NoSuchBucket: An error occurred (NoSuchBucket) when calling the PutObject operation: The specified bucket does not exist

Deploy failed - if the name of the rule is too long

rdk deploy VPC_OPEN_SECURITY_GROUP_ONLY_TO_AUTHORIZED_PORTS
Running deploy!
Zipping VPC_OPEN_SECURITY_GROUP_ONLY_TO_AUTHORIZED_PORTS
Uploading VPC_OPEN_SECURITY_GROUP_ONLY_TO_AUTHORIZED_PORTS
Upload complete.
Creating CloudFormation Stack for VPC_OPEN_SECURITY_GROUP_ONLY_TO_AUTHORIZED_PORTS
Waiting for CloudFormation stack operation to complete...
CloudFormation stack operation Rolled Back for VPCOPENSECURITYGROUPONLYTOAUTHORIZEDPORTS.
Config deploy complete.

logs:

1 validation error detected: Value 'RDK-Rule-Function-VPC_OPEN_SECURITY_GROUP_ONLY_TO_AUTHORIZED_PORTS' at 'functionName' failed to satisfy constraint: Member must have length less than or equal to 64 (Service: AWSLambda; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: b786245f-b615-11e8-839d-8db7ce88a00e)

rdk init fails after upgrade

The message that appears is as follows:
Traceback (most recent call last):
File "/usr/local/bin/rdk", line 12, in
from rdk import rdk
File "/Library/Python/2.7/site-packages/rdk/rdk.py", line 31, in
import mock
ImportError: No module named mock

Add support for central Config bucket

Add a flag to allow init to ignore missing Config bucket, for the use case where a central bucket in a separate account is being used to track Configuration history.

Subsequent `rdk deploy` fails if the lambda function does not exists

If the Lambda function created by the first rdk deploy no longer exists, a subsequent rdk deploy throws the exception below:

(ven-rdk-adding-ci) 186590d07c09:ven-rdk-adding-ci rafih$ rdk deploy lambda-delete/
Removing trailing '/'
Running deploy!
Zipping lambda-delete
Uploading lambda-delete
Upload complete.
Updating CloudFormation Stack for lambda-delete
No changes to Config Rule.
Publishing Lambda code...
Creating CloudFormation Stack for lambda-delete
Traceback (most recent call last):
File "/Users/rafih/git-stuff/ven-rdk-adding-ci/bin/rdk", line 7, in
exec(compile(f.read(), file, 'exec'))
File "/Users/rafih/git-stuff/ven-rdk-adding-ci/aws-config-rdk/bin/rdk", line 32, in
return_val = my_rdk.process_command()
File "/Users/rafih/git-stuff/ven-rdk-adding-ci/aws-config-rdk/rdk/rdk.py", line 67, in process_command
exit_code = method_to_call()
File "/Users/rafih/git-stuff/ven-rdk-adding-ci/aws-config-rdk/rdk/rdk.py", line 556, in deploy
'CAPABILITY_IAM',
File "/Users/rafih/git-stuff/ven-rdk-adding-ci/lib/python2.7/site-packages/botocore/client.py", line 314, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/rafih/git-stuff/ven-rdk-adding-ci/lib/python2.7/site-packages/botocore/client.py", line 612, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.AlreadyExistsException: An error occurred (AlreadyExistsException) when calling the CreateStack operation: Stack [lambdadelete] already exists

Steps to reproduce.
++ Create rule
++ Deploy the rule
++ delete the lambda function created by the rdk deploy
++ run rdk deploy

[Bug] Crash when "ResourceDeleted"

After building a ConfigurationChange-triggered rule, the wrapper crashes when the resource is deleted (no error in the log). It doesn't even get to the handler code (printing "event()").

Attached a code of a rule which crashes on IAM User.
iam-mfa-enabled.txt

OptionalParameters do not deploy

If I put the same string as InputParameters into OptionalParameters, the parameters are not deployed with the rule.

{
  "Version": "1.0",
  "Parameters": {
    "RuleName": "VPC_SG_OPEN_ONLY_TO_AUTHORIZED_PORTS",
    "SourceRuntime": "python3.6",
    "CodeKey": "VPC_SG_OPEN_ONLY_TO_AUTHORIZED_PORTS.zip",
    "InputParameters": "{\"authorizedTCPPorts\": \"443\", \"authorizedUDPPorts\": \"80-443\", \"exceptionList\": \"sg-101,sg-102,sg-103\"}",
    "OptionalParameters": "{}",
    "SourceEvents": "AWS::EC2::SecurityGroup"
  }
}

RDK deploying empty rule

RDK deploys even when given a wrong rule name.
I gave MFARule instead of MFArule and it deployed successfully, but the later evaluation did not work.
Since it deploys anyway, you never know whether you provided the wrong rule name.

exception when using undeploy in python2.7

rdk undeploy throws below exception when using python2.7

$ rdk undeploy undeploy-test
Delete specified Rules and Lamdba Functions from your AWS Account? (y/N): y
Traceback (most recent call last):
File "/usr/local/bin/rdk", line 32, in
return_val = my_rdk.process_command()
File "/Library/Python/2.7/site-packages/rdk/rdk.py", line 67, in process_command
exit_code = method_to_call()
File "/Library/Python/2.7/site-packages/rdk/rdk.py", line 468, in undeploy
my_input = input("Delete specified Rules and Lamdba Functions from your AWS Account? (y/N): ")
File "", line 1, in
NameError: name 'y' is not defined

GovCloud support

change partition to arn:aws-us-gov: if region == us-gov-west-1

Support rdk natively in Windows

Currently the RDK needs to be called from the python repo. Please package it as an exe to make it available in any directory.

Rules can't be created with trigger and periodic invocation

I may be missing something but

In the config rule console, I can create a rule that is triggered AND periodic (ie. the options are not mutually exclusive). However, when I deploy a rule using the rdk, it doesn't seem to set both options even though my parameters file contains both periodic and source events.

{
  "Parameters": {
    "CodeKey": "access-key-rotation.zip", 
    "SourceRuntime": "python2.7", 
    "SourcePeriodic": "TwentyFour_Hours", 
    "RuleName": "access-key-rotation", 
    "SourceEvents": "AWS::IAM::User", 
    "InputParameters": "{\"maxKeyDays\": \"90\"}"
  }
}

I was hoping this would give me a rule triggered on users AND every 24 hours periodic. Right now, I have to deploy using the rdk and then go into the UI and check off periodic to get both.

Am I missing something though?

incorrect sample ci

The sample CI generated by rdk for S3 resources does not match the actual CI event.

The actual CI preserves nested JSON strings for some fields, such as:
"supplementaryConfiguration": { "AccessControlList": "{\"grantSet\":null,\"grantList\":[{\"grantee\":{\"id\":\"d609ca1050da1465c902203c3f5b8129ab754942ab2415b1cdf6de6e82c7d219\",\"displayName\":null},\"permission\":\"FullControl\"}],\"owner\":{\"displayName\":null,\"id\":\"d609ca1050da1465c902203c3f5b8129ab754942ab2415b1cdf6de6e82c7d219\"},\"isRequesterCharged\":false}", "BucketAccelerateConfiguration": { "status": null }, "BucketLoggingConfiguration": { "destinationBucketName": null, "logFilePrefix": null }, "BucketNotificationConfiguration": { "configurations": {} }, "BucketPolicy": { "policyText": "{\"Version\":\"2012-10-17\",\"Id\":\"Policy1478390053757\",\"Statement\":[{\"Sid\":\"Stmt1478389920384\",\"Effect\":\"Deny\",\"Principal\":\"*\",\"Action\":\"s3:*\",\"Resource\":\"arn:aws:s3:::testbucket2\",\"Condition\":{\"Bool\":{\"aws:SecureTransport\":\"false\"}}},{\"Sid\":\"Stmt1478389920384\",\"Effect\":\"Allow\",\"Principal\":\"*\",\"Action\":\"s3:Get*\",\"Resource\":\"arn:aws:s3:::testbucket2\",\"Condition\":{\"StringEquals\":{\"aws:sourceVpce\":\"vpce-mock123\"}}}]}" }

but the sample CI contains a fully transformed dictionary:
{ "configurationItemCaptureTime": "2016-11-06T06:21:42.759Z", "resourceCreationTime": "2016-11-05T23:59:32.000Z", "availabilityZone": "Regional", "awsRegion": "ap-southeast-2", "tags": {}, "resourceType": "AWS::S3::Bucket", "resourceId": "testbucket2", "configurationStateId": "1478413302759", "relatedEvents": [ "e4a8244d-c94f-47f1-b441-424d31b0833a" ], "relationships": [], "arn": "arn:aws:s3:::testbucket2", "version": "1.2", "configurationItemMD5Hash": "2da1efbd2e4eee634d8be076f3e2eda7", "supplementaryConfiguration": { "BucketReplicationConfiguration": { "rules": { "testbucket2": { "status": "Enabled", "prefix": "", "destinationConfig": { "bucketARN": "arn:aws:s3:::testbucket2-us-west-2", "storageClass": null } } }, "roleARN": "arn:aws:iam::264683526309:role/testbucket2-testbucket2-us-west-2-s3-repl-role" }, "BucketAccelerateConfiguration": { "status": null }, "AccessControlList": { "owner": { "displayName": "aarkho", "id": "28f61982c7ea8ba301f8d90b4fe979a567383f85e9706603da913d27f5522c59" }, "grantSet": null, "isRequesterCharged": false, "grantList": [ { "grantee": { "displayName": "aarkho", "id": "28f61982c7ea8ba301f8d90b4fe979a567383f85e9706603da913d27f5522c59" }, "permission": "FullControl" } ] }, "BucketLoggingConfiguration": { "destinationBucketName": null, "logFilePrefix": null }, "IsRequesterPaysEnabled": "false", "BucketNotificationConfiguration": { "configurations": {} }, "BucketVersioningConfiguration": { "status": "Enabled", "isMfaDeleteEnabled": null }, "BucketPolicy": { "policyText": { "Version": "2012-10-17", "Id": "Policy1478390053757", "Statement": [ { "Resource": "arn:aws:s3:::testbucket2", "Effect": "Deny", "Sid": "Stmt1478389920384", "Action": "s3:*", "Condition": { "Bool": { "aws:SecureTransport": "false" } }, "Principal": "*" }, { "Resource": "arn:aws:s3:::testbucket2", "Effect": "Allow", "Sid": "Stmt1478389920384", "Action": "s3:*", "Condition": { "StringEquals": { "aws:sourceVpce": "vpce-unknown" } }, "Principal": "*" } ] } } }, "resourceName": "testbucket2", "configuration": { "owner": { "displayName": "aarkho", "id": "28f61982c7ea8ba301f8d90b4fe979a567383f85e9706603da913d27f5522c59" }, "creationDate": "2016-11-05T23:59:32.000Z", "name": "testbucket2" }, "configurationItemStatus": "OK", "accountId": "264683526309" }

Better Error handling when parameters.json is invalid

Because I changed the JSON manually, one of my parameters.json files was invalid due to a missing ",".
The thrown error is not handled; it just displays a JSONDecode failure.
Please add an indication of the file name, for easier debugging.

Windows: rdk logs

On windows, got the following error.

C:\Users\myname\AppData\Local\Programs\Python\Python36\Scripts>python rdk logs myrule
'stty' is not recognized as an internal or external command,
operable program or batch file.
Traceback (most recent call last):
File "rdk", line 32, in
return_val = my_rdk.process_command()
File "C:\Users\jrault\AppData\Local\Programs\Python\Python36\lib\site-packages\rdk\rdk.py", line 50, in process_command
exit_code = method_to_call()
File "C:\Users\jrault\AppData\Local\Programs\Python\Python36\lib\site-packages\rdk\rdk.py", line 564, in logs
self.__print_log_event(event)
File "C:\Users\jrault\AppData\Local\Programs\Python\Python36\lib\site-packages\rdk\rdk.py", line 629, in __print_log_event
rows, columns = os.popen('stty size', 'r').read().split()
ValueError: not enough values to unpack (expected 2, got 0)

`deploy` command doesn't display usage instructions when run with no args

Need to improve arg parsing to display usage instructions if command is called without rule name(s), --rulesets flag, or --all flag.

Current error:

(rdk-test) [mborch] rdk-test $ rdk deploy
Running deploy!
Traceback (most recent call last):
File "/Users/mborch/Code/rdk-test/bin/rdk", line 6, in
exec(compile(open(file).read(), file, 'exec'))
File "/Users/mborch/Code/aws-config-rdk/bin/rdk", line 32, in
return_val = my_rdk.process_command()
File "/Users/mborch/Code/aws-config-rdk/rdk/rdk.py", line 58, in process_command
exit_code = method_to_call()
File "/Users/mborch/Code/aws-config-rdk/rdk/rdk.py", line 448, in deploy
rule_names = self.__get_rule_list_for_command()
File "/Users/mborch/Code/aws-config-rdk/rdk/rdk.py", line 1116, in __get_rule_list_for_command
cleaned_rule_name = self.__clean_rule_name(self.args.rulename[0])
IndexError: list index out of range

Returning a list:'evaluations' with compliance and annotation appended as a dictionary results in an error.

Code Snippet:

if mfa_device_details['MFADevices'] == []:
    print('User: ' + userName + ' is NOT-COMPLIANT!')
    eval["ComplianceType"] = "NON_COMPLIANT"
    eval["Annotation"] = "No MFA Device detected for user"
    evaluation.append(eval)
    return evaluation

ERROR:
Parameter validation failed:
Invalid type for parameter Evaluations[0].ComplianceType, value: [{'ComplianceType': 'COMPLIANT', 'Annotation': ' MFA detected'}], type: <class 'list'>, valid types: <class 'str'>: ParamValidationError

We narrowed down the cause of this error to these lines in rule_util.py:

        configurationItem = get_configuration_item(invokingEvent)

        if configurationItem is None:
            compliance = lambda_handler(event, context)

            if isinstance(compliance, list):
                for evaluation in compliance:
                    missing_fields = False

The value returned from lambda_handler is checked for being a list only if configurationItem is None.
Version-runtime : rdk-0.3.7-py3.6

Add a description field in the parameters.json

The users are requesting an explicit description on the rules, since some (managed) rules do not have annotations.

By default there is a description field in the rules; can we add it to parameters.json and include it in the create-rule-template function?

Make the Ruleset list more pretty

Currently, all the rules are displayed in-line, separated by spaces. Please add a carriage return between rules to improve readability.

> rdk rulesets list rulecriticity:medium
Rules in rulecriticity:medium :  IAM_USER_USED_LAST_90_DAYS KMS_KEY_ROTATION_ENABLED S3_BUCKET_PUBLIC_WRITE_PROHIBITED VPC_DEFAULT_SECURITY_GROUP_BLOCKED

Windows - RDK Modify input-parameters doesn't work

C:\Users\jrault\AppData\Local\Programs\Python\Python36\Scripts>python rdk modify iam-mfa-enabled --input-parameters '{"WhitelistedUserList":"ARIAWERTYUFDSSS"}'
Running modify!
Error parsing input parameter JSON. Make sure your JSON keys and values are enclosed in double quotes and your input-parameters string is enclosed in single quotes.
Traceback (most recent call last):
File "rdk", line 32, in
return_val = my_rdk.process_command()
File "C:\Users\jrault\AppData\Local\Programs\Python\Python36\lib\site-packages\rdk\rdk.py", line 56, in process_command
exit_code = method_to_call()
File "C:\Users\jrault\AppData\Local\Programs\Python\Python36\lib\site-packages\rdk\rdk.py", line 288, in modify
self.__write_params_file()
File "C:\Users\jrault\AppData\Local\Programs\Python\Python36\lib\site-packages\rdk\rdk.py", line 774, in __write_params_file
raise e
File "C:\Users\jrault\AppData\Local\Programs\Python\Python36\lib\site-packages\rdk\rdk.py", line 771, in _write_params_file
my_input_params = json.loads(self.args.input_parameters, strict=False)
File "C:\Users\jrault\AppData\Local\Programs\Python\Python36\lib\json_init
.py", line 367, in loads
return cls(**kw).decode(s)
File "C:\Users\jrault\AppData\Local\Programs\Python\Python36\lib\json\decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Users\jrault\AppData\Local\Programs\Python\Python36\lib\json\decoder.py", line 357, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Better error handling for `clean` when account is already clean

Currently throws a stack trace when trying to clean up the missing role:

(rdk-test) [mborch] rdk-test $ rdk clean
Delete all Rules and remove Config setup?! (y/N): y
Running clean!
Traceback (most recent call last):
File "/Users/mborch/Code/aws-config-rdk/rdk/rdk.py", line 279, in clean
response = iam_client.get_role(RoleName=config_role_name)
File "/Users/mborch/Code/rdk-test/lib/python3.6/site-packages/botocore/client.py", line 314, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/mborch/Code/rdk-test/lib/python3.6/site-packages/botocore/client.py", line 612, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.NoSuchEntityException: An error occurred (NoSuchEntity) when calling the GetRole operation: The user with name config-role cannot be found.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/Users/mborch/Code/rdk-test/bin/rdk", line 6, in
exec(compile(open(file).read(), file, 'exec'))
File "/Users/mborch/Code/aws-config-rdk/bin/rdk", line 33, in
return_val = my_rdk.process_command()
File "/Users/mborch/Code/aws-config-rdk/rdk/rdk.py", line 68, in process_command
exit_code = method_to_call()
File "/Users/mborch/Code/aws-config-rdk/rdk/rdk.py", line 302, in clean
print("Error encountered finding Config Role to remove: " + str(e))
UnboundLocalError: local variable 'e' referenced before assignment

Create error handling for invalid parameters.json

My rdk deploy -f --all command failed, along with the entire pipeline, due to one invalid parameters.json.

The error is cryptic. Ideally I would add an explicit warning about the rule not being deployed (rule name, file, line/column, error message).

Current error FYI

[Container] 2018/08/28 00:03:47 Running command rdk deploy -f --all > ../result.txt
Traceback (most recent call last):
  File "/usr/local/bin/rdk", line 33, in <module>
    return_val = my_rdk.process_command()
  File "/usr/local/lib/python3.6/site-packages/rdk/rdk.py", line 68, in process_command
    exit_code = method_to_call()
  File "/usr/local/lib/python3.6/site-packages/rdk/rdk.py", line 538, in deploy
    function_template = self.__create_function_cloudformation_template()
  File "/usr/local/lib/python3.6/site-packages/rdk/rdk.py", line 1825, in __create_function_cloudformation_template
    params = self.__get_rule_parameters(rule_name)
  File "/usr/local/lib/python3.6/site-packages/rdk/rdk.py", line 1414, in __get_rule_parameters
    my_json = json.load(parameters_file)
  File "/usr/local/lib/python3.6/json/__init__.py", line 299, in load
    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
  File "/usr/local/lib/python3.6/json/__init__.py", line 354, in loads
    return _default_decoder.decode(s)
  File "/usr/local/lib/python3.6/json/decoder.py", line 339, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/local/lib/python3.6/json/decoder.py", line 355, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting ',' delimiter: line 10 column 5 (char 326)

File uploaded on S3 : strange file name

I am on Windows. Latest Python 3.6

From init: S3 bucket created: config-rule-code-bucket-XXXXXXXXX6564eu-west-1 (account id hidden) [note: why not put a hyphen between the account id and the region?]

I used the rdk deploy. The file uploaded directly in the bucket is named:
"emr-cmk-encrypted\emr-cmk-encrypted.zip"

I guess it is supposed to create a directory, but it is not created.
The CFn then failed with a "Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist."

Complete CFn logs:

07:53:35 UTC+0800 ROLLBACK_COMPLETE AWS::CloudFormation::Stack emr-cmk-encrypted  
  07:53:34 UTC+0800 DELETE_COMPLETE AWS::IAM::Role rdkLambdaRole
  07:53:33 UTC+0800 DELETE_IN_PROGRESS AWS::IAM::Role rdkLambdaRole
  07:53:32 UTC+0800 DELETE_COMPLETE AWS::Lambda::Function rdkRuleCodeLambda
  07:53:10 UTC+0800 ROLLBACK_IN_PROGRESS AWS::CloudFormation::Stack emr-cmk-encrypted
  07:53:10 UTC+0800 CREATE_FAILED AWS::Lambda::Function rdkRuleCodeLambda
  07:53:01 UTC+0800 CREATE_IN_PROGRESS AWS::Lambda::Function rdkRuleCodeLambda
  07:52:59 UTC+0800 CREATE_COMPLETE AWS::IAM::Role rdkLambdaRole
  07:52:42 UTC+0800 CREATE_IN_PROGRESS AWS::IAM::Role rdkLambdaRole
  07:52:42 UTC+0800 CREATE_IN_PROGRESS AWS::IAM::Role rdkLambdaRole
  07:52:38 UTC+0800 CREATE_IN_PROGRESS AWS::CloudFormation::Stack emr-cmk-encrypted

Windows - Test-local doesn't work when the rule name has hyphen

rdk test-local cloudtrail-lfi-enabled
Running local test!
Testing cloudtrail-lfi-enabled
cloudtrail-lfi-enabled_test.py
Traceback (most recent call last):
File "rdk", line 32, in
return_val = my_rdk.process_command()
File "C:\Users\jrault\AppData\Local\Programs\Python\Python36\lib\site-packages\rdk\rdk.py", line 56, in process_command
exit_code = method_to_call()
File "C:\Users\jrault\AppData\Local\Programs\Python\Python36\lib\site-packages\rdk\rdk.py", line 463, in test_local
results = unittest.TextTestRunner(buffer=True, verbosity=2).run(self.__create_test_suite(test_dir))
File "C:\Users\jrault\AppData\Local\Programs\Python\Python36\lib\site-packages\rdk\rdk.py", line 608, in __create_test_suite
suites = [unittest.defaultTestLoader.loadTestsFromName(test) for test in tests]
File "C:\Users\jrault\AppData\Local\Programs\Python\Python36\lib\site-packages\rdk\rdk.py", line 608, in
suites = [unittest.defaultTestLoader.loadTestsFromName(test) for test in tests]
File "C:\Users\jrault\AppData\Local\Programs\Python\Python36\lib\unittest\loader.py", line 153, in loadTestsFromName
module = import(module_name)
File "C:\Users\jrault\AppData\Local\Programs\Python\Python36\Scripts\cloudtrail-lfi-enabled\cloudtrail-lfi-enabled_test.py", line 22
import cloudtrail-lfi-enabled as rule
^
SyntaxError: invalid syntax

no log output if tests are ok

test-local does have a --verbose option, but there is no log output if the tests are OK. How can I get output when the tests pass?
