bridgecrewio / checkov

Prevent cloud misconfigurations and find vulnerabilities during build-time in infrastructure as code, container images and open source packages with Checkov by Bridgecrew.

Home Page: https://www.checkov.io/

License: Apache License 2.0

Python 75.60% HCL 22.01% Shell 0.37% Dockerfile 0.24% Batchfile 0.01% Jinja 0.04% Bicep 0.53% Smarty 0.02% TypeScript 1.13% JavaScript 0.01% Perl 0.04% Java 0.01%
terraform static-analysis aws gcp azure aws-security cloudformation scans compliance kubernetes

checkov's Introduction

checkov

Maintained by Prisma Cloud

Checkov is a static code analysis tool for infrastructure as code (IaC) and also a software composition analysis (SCA) tool for images and open source packages.

It scans cloud infrastructure provisioned using Terraform, Terraform plan, CloudFormation, AWS SAM, Kubernetes, Helm charts, Kustomize, Dockerfile, Serverless, Bicep, OpenAPI or ARM Templates and detects security and compliance misconfigurations using graph-based scanning.

It also performs Software Composition Analysis (SCA) scanning, which scans open source packages and container images for Common Vulnerabilities and Exposures (CVEs).

Checkov also powers Prisma Cloud Application Security, the developer-first platform that codifies and streamlines cloud security throughout the development lifecycle. Prisma Cloud identifies, fixes, and prevents misconfigurations in cloud resources and infrastructure-as-code files.


Features

  • Over 1000 built-in policies cover security and compliance best practices for AWS, Azure and Google Cloud.
  • Scans Terraform, Terraform Plan, Terraform JSON, CloudFormation, AWS SAM, Kubernetes, Helm, Kustomize, Dockerfile, Serverless framework, Ansible, Bicep and ARM template files.
  • Scans Argo Workflows, Azure Pipelines, BitBucket Pipelines, Circle CI Pipelines, GitHub Actions and GitLab CI workflow files.
  • Supports context-aware policies based on in-memory graph-based scanning.
  • Supports Python format for attribute policies and YAML format for both attribute and composite policies (see the sketch after this list).
  • Detects AWS credentials in EC2 Userdata, Lambda environment variables and Terraform providers.
  • Identifies secrets using regular expressions, keywords, and entropy based detection.
  • Evaluates Terraform Provider settings to regulate the creation, management, and updates of IaaS, PaaS or SaaS managed through Terraform.
  • Policies support evaluation of variables to their optional default value.
  • Supports in-line suppression of accepted risks or false positives to reduce recurring scan failures. Global skips are also supported via the CLI.
  • Output is currently available as CLI, CycloneDX, JSON, JUnit XML, CSV, SARIF and GitHub Markdown, with links to remediation guides.
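
A minimal sketch of what a YAML attribute policy can look like (the policy name, ID, and attribute below are illustrative, not a built-in check):

metadata:
  name: "Ensure RDS storage is encrypted"
  id: "CKV2_CUSTOM_1"
  category: "ENCRYPTION"
definition:
  cond_type: attribute
  resource_types:
    - aws_db_instance
  attribute: storage_encrypted
  operator: equals
  value: "true"

Saving such a file in a directory and passing that directory via --external-checks-dir should load it alongside the built-in policies.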

Screenshots

Scan results in CLI

scan-screenshot

Scheduled scan result in Jenkins

jenkins-screenshot

Getting started

Requirements

  • Python >= 3.7 (Data classes are available for Python 3.7+)
  • Terraform >= 0.12

Installation

To install pip, follow the official docs.

pip3 install checkov

or with Homebrew (macOS or Linux)

brew install checkov

Enabling bash autocomplete

source <(register-python-argcomplete checkov)

Upgrade

If you installed Checkov with pip3:

pip3 install -U checkov

or with Homebrew

brew upgrade checkov

Configure an input folder or file

checkov --directory /user/path/to/iac/code

Or a specific file or files

checkov --file /user/tf/example.tf

Or

checkov -f /user/cloudformation/example1.yml -f /user/cloudformation/example2.yml

Or a Terraform plan file in JSON format:

terraform init
terraform plan -out tf.plan
terraform show -json tf.plan  > tf.json
checkov -f tf.json

Note: the terraform show output file tf.json will be a single line. For that reason, Checkov will report all findings as line number 0:

Check: CKV_AWS_21: "Ensure all data stored in the S3 bucket have versioning enabled"
	FAILED for resource: aws_s3_bucket.customer
	File: /tf/tf.json:0-0
	Guide: https://docs.prismacloud.io/en/enterprise-edition/policy-reference/aws-policies/s3-policies/s3-16-enable-versioning

If you have jq installed, you can convert the JSON file into multiple lines with the following command:

terraform show -json tf.plan | jq '.' > tf.json

The scan result will then be much more user friendly:

checkov -f tf.json
Check: CKV_AWS_21: "Ensure all data stored in the S3 bucket have versioning enabled"
	FAILED for resource: aws_s3_bucket.customer
	File: /tf/tf1.json:224-268
	Guide: https://docs.prismacloud.io/en/enterprise-edition/policy-reference/aws-policies/s3-policies/s3-16-enable-versioning

		225 |               "values": {
		226 |                 "acceleration_status": "",
		227 |                 "acl": "private",
		228 |                 "arn": "arn:aws:s3:::mybucket",

Alternatively, specify the repo root of the HCL files used to generate the plan file, using the --repo-root-for-plan-enrichment flag, to enrich the output with the appropriate file path, line numbers, and code block of the resource(s). An added benefit is that check suppressions will be handled accordingly.

checkov -f tf.json --repo-root-for-plan-enrichment /user/path/to/iac/code

Scan result sample (CLI)

Passed Checks: 1, Failed Checks: 1, Suppressed Checks: 0
Check: "Ensure all data stored in the S3 bucket is securely encrypted at rest"
/main.tf:
	 Passed for resource: aws_s3_bucket.template_bucket
Check: "Ensure all data stored in the S3 bucket is securely encrypted at rest"
/../regionStack/main.tf:
	 Failed for resource: aws_s3_bucket.sls_deployment_bucket_name

Start using Checkov by reading the Getting Started page.

Using Docker

docker pull bridgecrew/checkov
docker run --tty --rm --volume /user/tf:/tf --workdir /tf bridgecrew/checkov --directory /tf

Note: if you are using Python 3.6 (the default version in Ubuntu 18.04), Checkov will not work and will fail with a ModuleNotFoundError: No module named 'dataclasses' error message. In this case, use the Docker version instead.

Note that there are certain cases where redirecting docker run --tty output to a file - for example, if you want to save the Checkov JUnit output to a file - will cause extra control characters to be printed. This can break file parsing. If you encounter this, remove the --tty flag.

The --workdir /tf flag is optional to change the working directory to the mounted volume. If you are using the SARIF output -o sarif this will output the results.sarif file to the mounted volume (/user/tf in the example above). If you do not include that flag, the working directory will be "/".
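
For example, a run that writes a SARIF report to the mounted volume might look like this (the host path is illustrative); the --tty flag is omitted so the output can safely be redirected or parsed:

docker run --rm --volume /user/tf:/tf --workdir /tf bridgecrew/checkov --directory /tf -o sarif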

Running or skipping checks

By using command line flags, you can specify to run only named checks (allow list) or run all checks except those listed (deny list). If you are using the platform integration via API key, you can also specify a severity threshold to skip and/or include. Moreover, since JSON files can't contain comments, you can pass a regex pattern to skip secret scanning in JSON files.

See the docs for more detailed information about how these flags work together.

Examples

Allow only the two specified checks to run:

checkov --directory . --check CKV_AWS_20,CKV_AWS_57

Run all checks except the one specified:

checkov -d . --skip-check CKV_AWS_20

Run all checks except checks with specified patterns:

checkov -d . --skip-check CKV_AWS*

Run all checks that are MEDIUM severity or higher (requires API key):

checkov -d . --check MEDIUM --bc-api-key ...

Run all checks that are MEDIUM severity or higher, as well as check CKV_123 (assume this is a LOW severity check):

checkov -d . --check MEDIUM,CKV_123 --bc-api-key ...

Skip all checks that are MEDIUM severity or lower:

checkov -d . --skip-check MEDIUM --bc-api-key ...

Skip all checks that are MEDIUM severity or lower, as well as check CKV_789 (assume this is a high severity check):

checkov -d . --skip-check MEDIUM,CKV_789 --bc-api-key ...

Run all checks that are MEDIUM severity or higher, but skip check CKV_123 (assume this is a medium or higher severity check):

checkov -d . --check MEDIUM --skip-check CKV_123 --bc-api-key ...

Run check CKV_789, but skip it if it is a medium severity (the --check logic is always applied before --skip-check):

checkov -d . --skip-check MEDIUM --check CKV_789 --bc-api-key ...

For Kubernetes workloads, you can also use allow/deny namespaces. For example, do not report any results for the kube-system namespace:

checkov -d . --skip-check kube-system

Run a scan of a container image. First pull or build the image, then refer to it by the hash, ID, or name:tag:

checkov --framework sca_image --docker-image sha256:1234example --dockerfile-path /Users/path/to/Dockerfile --bc-api-key ...

checkov --docker-image <image-name>:tag --dockerfile-path /User/path/to/Dockerfile --bc-api-key ...

You can also use the shorter --image flag instead of --docker-image to scan a container image:

checkov --image <image-name>:tag --dockerfile-path /User/path/to/Dockerfile --bc-api-key ...

Run an SCA scan of packages in a repo:

checkov -d . --framework sca_package --bc-api-key ... --repo-id <repo_id(arbitrary)>

Run a scan of a directory with environment variables that disable output buffering and add debug-level logs:

PYTHONUNBUFFERED=1 LOG_LEVEL=DEBUG checkov -d .

Or export the environment variables for multiple runs:

export PYTHONUNBUFFERED=1 LOG_LEVEL=DEBUG
checkov -d .

Run secrets scanning on all files in MyDirectory. Skip the CKV_SECRET_6 check on JSON files whose names end with DontScan:

checkov -d /MyDirectory --framework secrets --bc-api-key ... --skip-check CKV_SECRET_6:.*DontScan.json$

Run secrets scanning on all files in MyDirectory. Skip the CKV_SECRET_6 check on JSON files that contain "skip_test" in their path:

checkov -d /MyDirectory --framework secrets --bc-api-key ... --skip-check CKV_SECRET_6:.*skip_test.*json$

You can mask values in scan results by supplying a configuration file (using the --config-file flag) with a mask entry. The masking can apply to a resource and value (or multiple values, separated with a comma). Example:

mask:
- aws_instance:user_data
- azurerm_key_vault_secret:admin_password,user_passwords

In the example above, the following values will be masked:

  • user_data for aws_instance resource
  • both admin_password and user_passwords for the azurerm_key_vault_secret resource
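
To apply the mask, point Checkov at the configuration file containing the mask entry (the file name below is illustrative):

checkov -d . --config-file checkov-mask.yaml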

Suppressing/Ignoring a check

Like any static analysis tool, Checkov is limited by its analysis scope. For example, if a resource is managed manually, or via subsequent configuration management tooling, a suppression can be inserted as a simple code annotation.

Suppression comment format

To skip a check on a given Terraform definition block or CloudFormation resource, apply the following comment pattern inside its scope:

checkov:skip=<check_id>:<suppression_comment>

  • <check_id> is one of the available check scanners (see docs/5.Policy Index/all.md)
  • <suppression_comment> is an optional suppression reason to be included in the output

Example

The following comment skips the CKV_AWS_20 check on the resource identified by foo-bucket, where the scan checks whether an AWS S3 bucket is private. In this example, the bucket is configured with public read access; adding the suppression comment skips the check instead of letting it fail.

resource "aws_s3_bucket" "foo-bucket" {
  region        = var.region
    #checkov:skip=CKV_AWS_20:The bucket is a public static content host
  bucket        = local.bucket_name
  force_destroy = true
  acl           = "public-read"
}

The output would now contain a SKIPPED check result entry:

...
...
Check: "S3 Bucket has an ACL defined which allows public access."
	SKIPPED for resource: aws_s3_bucket.foo-bucket
	Suppress comment: The bucket is a public static content host
	File: /example_skip_acl.tf:1-25

...

To skip multiple checks, add each as a new line.

  #checkov:skip=CKV2_AWS_6
  #checkov:skip=CKV_AWS_20:The bucket is a public static content host
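
The same comment pattern can be used inside a CloudFormation resource. A minimal sketch (the resource and suppression reason are illustrative):

Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    # checkov:skip=CKV_AWS_21:Versioning is handled outside of this template
    Properties:
      BucketName: my-bucket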

To suppress checks in Kubernetes manifests, annotations are used with the following format: checkov.io/skip#: <check_id>=<suppression_comment>

For example:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
  annotations:
    checkov.io/skip1: CKV_K8S_20=I don't care about Privilege Escalation :-O
    checkov.io/skip2: CKV_K8S_14
    checkov.io/skip3: CKV_K8S_11=I have not set CPU limits as I want BestEffort QoS
spec:
  containers:
...

Logging

For detailed logging to stdout, set the environment variable LOG_LEVEL to DEBUG.

Default is LOG_LEVEL=WARNING.
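
For example, to enable debug logging for a single run:

LOG_LEVEL=DEBUG checkov -d .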

Skipping directories

To skip files or directories, use the argument --skip-path, which can be specified multiple times. This argument accepts regular expressions for paths relative to the current working directory. You can use it to skip entire directories and / or specific files.

By default, all directories named node_modules, .terraform, and .serverless will be skipped, in addition to any files or directories beginning with .. To disable skipping directories beginning with ., set the CKV_IGNORE_HIDDEN_DIRECTORIES environment variable: export CKV_IGNORE_HIDDEN_DIRECTORIES=false

You can override the default set of directories to skip by setting the environment variable CKV_IGNORED_DIRECTORIES. Note that if you want to preserve this list and add to it, you must include these values. For example, CKV_IGNORED_DIRECTORIES=mynewdir will skip only that directory, but not the others mentioned above. This variable is legacy functionality; we recommend using the --skip-path flag.
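
A minimal sketch of skipping a directory and a file pattern via --skip-path (the names are illustrative):

checkov -d . --skip-path test --skip-path '.*\.tfvars$'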

Console Output

The console output is in colour by default. To switch to monochrome output, set the environment variable ANSI_COLORS_DISABLED.
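
For example (setting the variable to any value should be enough):

ANSI_COLORS_DISABLED=true checkov -d .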

VS Code Extension

If you want to use Checkov within VS Code, try the Checkov extension available in the VS Code marketplace.

Configuration using a config file

Checkov can be configured using a YAML configuration file. By default, checkov looks for a .checkov.yaml or .checkov.yml file in the following places in order of precedence:

  • Directory against which checkov is run. (--directory)
  • Current working directory where checkov is called.
  • User's home directory.

Attention: as a best practice, the Checkov configuration file should be loaded from a trusted source maintained by a verified identity, so that the scanned files, check IDs and loaded custom checks are as intended.

Users can also pass in the path to a config file via the command line. In this case, the other config files will be ignored. For example:

checkov --config-file path/to/config.yaml

Users can also create a config file using the --create-config command, which takes the current command line args and writes them out to a given path. For example:

checkov --compact --directory test-dir --docker-image sample-image --dockerfile-path Dockerfile --download-external-modules True --external-checks-dir sample-dir --quiet --repo-id prisma-cloud/sample-repo --skip-check CKV_DOCKER_3,CKV_DOCKER_2 --skip-framework dockerfile secrets --soft-fail --branch develop --check CKV_DOCKER_1 --create-config /Users/sample/config.yml

This will create a config.yaml file which looks like this:

branch: develop
check:
  - CKV_DOCKER_1
compact: true
directory:
  - test-dir
docker-image: sample-image
dockerfile-path: Dockerfile
download-external-modules: true
evaluate-variables: true
external-checks-dir:
  - sample-dir
external-modules-download-path: .external_modules
framework:
  - all 
output: cli 
quiet: true 
repo-id: prisma-cloud/sample-repo 
skip-check: 
  - CKV_DOCKER_3 
  - CKV_DOCKER_2 
skip-framework:
  - dockerfile
  - secrets
soft-fail: true

Users can also use the --show-config flag to view all the args and settings and where they came from, i.e. command line, config file, environment variable or default. For example:

checkov --show-config

This will display:

Command Line Args:   --show-config
Environment Variables:
  BC_API_KEY:        your-api-key
Config File (/Users/sample/.checkov.yml):
  soft-fail:         False
  branch:            master
  skip-check:        ['CKV_DOCKER_3', 'CKV_DOCKER_2']
Defaults:
  --output:          cli
  --framework:       ['all']
  --download-external-modules:False
  --external-modules-download-path:.external_modules
  --evaluate-variables:True

Contributing

Contributions are welcome!

Start by reviewing the contribution guidelines. After that, take a look at a good first issue.

You can even start this with one-click dev in your browser through Gitpod at the following link:

Open in Gitpod

Looking to contribute new checks? Learn how to write a new check (AKA policy) here.

Disclaimer

Checkov does not save, publish or share any identifiable customer information with anyone.
No identifiable customer information is used to query Prisma Cloud's publicly accessible guides. Checkov uses Prisma Cloud's API to enrich the results with links to remediation guides. To skip this API call, use the flag --skip-download.
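
For example, to run a scan without the remediation-guide lookup:

checkov -d . --skip-download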

Support

Prisma Cloud builds and maintains Checkov to make policy-as-code simple and accessible.

Start with our Documentation for quick tutorials and examples.

Python Version Support

We follow the official support cycle of Python, and we use automated tests for all supported versions of Python. This means we currently support Python 3.7 - 3.11, inclusive. Note that Python 3.7 reaches EOL in June 2023. After that, we will have a short grace period and continue 3.7 support until September 2023; after that it will no longer be considered supported for Checkov. If you run into any issues with any non-EOL Python version, please open an Issue.

checkov's People

Contributors

achiar99, actions-user, arielkru, ayajbara, bo156, chanochshayner, dependabot[bot], drfaust92, eliran-turgeman, gruebel, itai1357, jameswoolfenden, kartikp10, lirshindalman, marynakk, maxamel, metahertz, mikeurbanski1, nimrodkor, njgibbon, noaazoulay, omrymen, pelegli, robeden, rotemavni, saarett, schosterbarak, tronxd, tsmithv11, yaaraverner


checkov's Issues

Add cli switch to inspect a file

Is your feature request related to a problem? Please describe.
We would like to use checkov in conjunction with pre-commit so that infra engineers can be notified of misconfigurations locally (and before checkov is executed in CI).

However, it seems checkov can only be run on directories. That means we cannot adopt this workflow without running static analysis on every terraform file in the repo every time we make a commit with any terraform changes -- or, at best, every terraform file that shares a path with any terraform file being committed.

Describe the solution you'd like
A command-line switch such as -f or --file that accepts a path to an individual file to be analyzed.

Describe alternatives you've considered
N/A

Additional context
N/A

Four checks, three empty?

Description
I've just attached Checkov to our pre-commit config, and when I run pre-commit --all-files it seems that Checkov is run 4 times - the first three are blank, with just the ASCII logo and "Passed checks: 0, Failed checks: 0, Skipped checks: 0", and the 4th time is when the tests are done.

To Reproduce
Steps to reproduce the behavior:

  1. Add Checkov to .pre-commit-config.yaml
  2. Run pre-commit --all-files
  3. See error

Expected behavior
I was pretty sure it would just run once, including all .tf files found in all directories, recursively, and print the results once.

Desktop

  • OS: Linux Debian 10, Python 3.7.3
  • Checkov rev: 1.0.170

Additional context
It doesn't seem a really large issue - but, as it prints quite a lot of lines, it does punch the rest of the pre-commit run results pretty far away.

Add --version flag

Is your feature request related to a problem? Please describe.
It's a bit unusual that one needs to run the tool with no arguments to get the version, took me a bit to get it.

Describe the solution you'd like
--version flag printing the current version of the app

Additional context
If you ever get dependent modules, etc. in the future it may help with support tickets as you can print not only the core package version, but the version of the OS, dependencies, etc.

EDIT: I accidentally a word

Please add a CONTRIBUTING.md file

Is your feature request related to a problem? Please describe.
I would like to contribute, but between the lack of code documentation and the lack of a "how to start contributing" guide, it's pretty hard to start. I know it's not the first thing you think about when you publish a project, but it would be very nice to see that kind of documentation 😃

Describe the solution you'd like
It could mention :

  • coding and commit style
  • how to get started : pipenv, dependencies, etc...
  • Should I commit pipfile and pipfile.lock
  • etc...

Thanks in advance !

Checkov Docker Image

Is your feature request related to a problem? Please describe.
We are using Gitlab CI. It would be easier/faster to integrate checkov if we had a docker image of it.

Describe the solution you'd like
Create a docker image and publish it to docker hub.

Describe alternatives you've considered
We are just getting started with checkov and just install it manually at build time. This works ok. We could build a docker image ourselves, but it'd be nice if it was official.

Additional context
Forgive me if this does not mesh with your vision for the tool or an image already exists. I couldn't find one on Docker Hub.

Integrate with tf-parliament

Is your feature request related to a problem? Please describe.
Terraform lets us create and manage IAM policies. To keep them orderly and updated is a daunting task, which requires a lot of knowledge around IAM. https://github.com/rdkls/tf-parliament can help us manage that - we should think about integrating with it.

Describe the solution you'd like
Have a "check" which runs this python script on the policies in the dir.

Additional context
https://github.com/rdkls/tf-parliament

Implement user-defined loadable checks

Is your feature request related to a problem? Please describe.
Having integrated checks is cool. But it would be even better if (as users) we could load our own checks. For example, I have a policy saying that my RDS must be highly available when deployed in our "production" environment. Such a policy shouldn't belong to the core checks.

Describe the solution you'd like
Some parameter to provide a directory where checkov can find Python files to load.

I already implemented it in my own fork. Thus, would you consider a PR?

Check fails if you have different objects with the same name

Describe the bug
The checkov CLI throws an exception if two objects share the same name.
For some unknown reason we have a data source and a resource with the same name - one checking if an object exists, the other to make it.

data "aws_sns_topic" "this" {
  count = "${(1 - var.create_sns_topic) * var.create}"
  name  = var.sns_topic_name
}

resource "aws_sns_topic" "this" {
  count = var.create_sns_topic * var.create
  name  = var.sns_topic_name
}

checkov -d .
Traceback (most recent call last):
  File "C:\Python37\Lib\site-packages\checkov\main.py", line 55, in <module>
    run()
  File "C:\Python37\Lib\site-packages\checkov\main.py", line 43, in run
    report = Runner().run(root_folder, external_checks_dir=args.external_checks_dir, files=file)
  File "C:\Python37\lib\site-packages\checkov\terraform\runner.py", line 32, in run
    self.check_tf_definition(report, root_folder, tf_definitions)
  File "C:\Python37\lib\site-packages\checkov\terraform\runner.py", line 60, in check_tf_definition
    block_type)
  File "C:\Python37\lib\site-packages\checkov\terraform\runner.py", line 68, in run_block
    entity_context = dpath.get(definition_context[full_file_path], f'*/{entity_type}/{entity_name}')
  File "C:\Python37\lib\site-packages\dpath\util.py", line 124, in get
    raise ValueError("dpath.util.get() globs must match only one leaf : %s" % glob)
ValueError: dpath.util.get() globs must match only one leaf : */aws_sns_topic/this
To Reproduce
Steps to reproduce the behavior:
0. add code like above.

  1. Go to 'CLI'
  2. Run cli command 'checkov -d .'
  3. See error

Expected behavior
Checkov should be able to tell whether they are a data source or a resource and not throw an exception.

Screenshots
If applicable, add screenshots to help explain your problem.

Desktop (please complete the following information):

  • OS: Win10
  • Checkov Version 1.0.173

Additional context
I'm just going to rename the data source but it is a bug, if minor.

security_groups in aws_security_group rule not supported

Describe the bug
referencing a security_group instead of cidr_block in a security group rule causes an exception

To Reproduce
Steps to reproduce the behavior:

  1. try to run checkov on the following resource:
resource "aws_security_group" "bar-sg" {
  name        = "sg-bar"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    security_groups = [aws_security_group.foo-sg.id]
    description = "foo"
  }

  egress {
    from_port       = 0
    to_port         = 0
    protocol        = "-1"
    cidr_blocks     = ["0.0.0.0/0"]
  }

}

result:

Traceback (most recent call last):
  File "/path/tf-checks/bin/checkov", line 34, in <module>
    report = Runner().run(root_folder, external_checks_dir=args.external_checks_dir)
  File "/path/tf-checks/lib/python3.7/site-packages/checkov/terraform/runner.py", line 38, in run
    results = resource_registry.scan(resource, scanned_file, skipped_checks)
  File "/pathtf-checks/lib/python3.7/site-packages/checkov/terraform/checks/resource/registry.py", line 38, in scan
    resource_name=resource_name, resource_type=resource, skip_info=skip_info)
  File "/path/tf-checks/lib/python3.7/site-packages/checkov/terraform/checks/resource/base_check.py", line 31, in run
    check_result['result'] = self.scan_resource_conf(resource_configuration)
  File "/path/tf-checks/lib/python3.7/site-packages/checkov/terraform/checks/resource/aws/SecurityGroupUnrestrictedIngress22.py", line 25, in scan_resource_conf
    if rule['from_port'] == [PORT] and rule['to_port'] == [PORT] and rule['cidr_blocks'] == [[
KeyError: 'cidr_blocks'

Expected behavior
Such a resource definition is perfectly valid.

Desktop (please complete the following information):

  • OS: Ubuntu 19.10
  • Python: 3.7.5
  • Checkov Version 1.0.99

Error in password complexity check

Describe the bug
When running checkov against the following resource:

resource "aws_iam_account_password_policy" "password-policy" {
  minimum_password_length        = 15
  require_lowercase_characters   = true
  require_numbers                = true
  require_uppercase_characters   = true
  require_symbols                = true
  allow_users_to_change_password = true
}

I get the following error:

Traceback (most recent call last):
  File "/Users/arkadiyt/.pyenv/versions/3.6.0/bin/checkov", line 24, in <module>
    report = Runner().run(root_folder)
  File "/Users/arkadiyt/.pyenv/versions/3.6.0/lib/python3.6/site-packages/checkov/terraform/runner.py", line 34, in run
    results = resource_registry.scan(resource, scanned_file, skipped_checks)
  File "/Users/arkadiyt/.pyenv/versions/3.6.0/lib/python3.6/site-packages/checkov/terraform/checks/resource/registry.py", line 34, in scan
    resource_name=resource_name, resource_type=resource, skip_info=skip_info)
  File "/Users/arkadiyt/.pyenv/versions/3.6.0/lib/python3.6/site-packages/checkov/terraform/checks/resource/base_check.py", line 31, in run
    check_result['result'] = self.scan_resource_conf(resource_configuration)
  File "/Users/arkadiyt/.pyenv/versions/3.6.0/lib/python3.6/site-packages/checkov/terraform/checks/resource/aws/PasswordPolicyLength.py", line 22, in scan_resource_conf
    if conf[key] >= 14:
TypeError: '>=' not supported between instances of 'list' and 'int'

Add new check: Check if all Elasticache Replication Group encryption parameters are enabled and set

It would be nice to validate if all encryption features for AWS Elasticache Replication Group are enabled.

From https://www.terraform.io/docs/providers/aws/r/elasticache_replication_group.html:

- at_rest_encryption_enabled - (Optional) Whether to enable encryption at rest.
- transit_encryption_enabled - (Optional) Whether to enable encryption in transit.
- auth_token - (Optional) The password used to access a password protected server. Can be specified only if transit_encryption_enabled = true.
- kms_key_id - (Optional) The ARN of the key that you wish to use if encrypting at rest. If not supplied, uses service managed encryption. Can be specified only if at_rest_encryption_enabled = true. 

checkov directory switch not repeatable

Describe the bug
When running checkov -h, the directory switch claims it is repeatable but when you repeat it, only the last directory listed is scanned

  -d DIRECTORY, --directory DIRECTORY
                        Terraform root directory (can not be used together with --file). Can be repeated

To Reproduce
Steps to reproduce the behavior:

  1. Create two directories (directory1, directory2) containing valid terraform files
  2. Run cli command checkov -d ./directory1 -d ./directory2
  3. Only directory2 is scanned

Expected behavior
Both directory1 and directory2 are scanned

Desktop (please complete the following information):

  • Checkov Version: 1.0.180

Additional context
I would actually prefer a recursive switch, using the -d directive multiple times is a workaround

Option to display parsing errors

When checkov reports parsing errors there's no way to see the error or even the file which caused it. I'm testing this in a project which passes terraform validate.

$ checkov -f main.tf 

       _               _              
   ___| |__   ___  ___| | _______   __
  / __| '_ \ / _ \/ __| |/ / _ \ \ / /
 | (__| | | |  __/ (__|   < (_) \ V / 
  \___|_| |_|\___|\___|_|\_\___/ \_/  
                                      
version: 1.0.181 

Passed checks: 0, Failed checks: 0, Skipped checks: 0, Parsing errors: 1

Add CKV_AWS_## number in CLI output

Is your feature request related to a problem? Please describe.
I would like to be able to more easily add suppressions based on CLI output

Describe the solution you'd like
Add the check number in the output, Something like:

Check: CKV_AWS_23: "Ensure every security groups rule has a description"
        FAILED for resource: aws_security_group_rule.my_rule
        File: /ec2_security_rules.tf:211-218

                211 | resource "aws_security_group_rule" "my_rule" {
                212 |   type              = "ingress"
                ...
                218 | }

Describe alternatives you've considered
Look at the docs and compare the text to the provided table

Allow checks to run globally across all resources

Is your feature request related to a problem? Please describe.
I would like to define global checks, e.g. a Naming Convention check, that can apply globally to all resources, without me having to write out all the supported resources

Describe the solution you'd like
I would like to write something like:

from checkov.terraform.checks.resource.base_check import BaseResourceCheck

class NamingCheck(BaseResourceCheck):
    def __init__(self):
        name = "Ensure naming convention is correct"
        id = "MY_GLOBAL_001"
        supported_resources = ['*']
        # ...

The registry's get_checks could also look at the * resource, de-duplicating as necessary (maybe it could be a set?)

Describe alternatives you've considered
Get a list of all terraform resources and put that in the list

Additional context
It could also be neat to do, for example, supported_resources = ["aws_*"], or more advanced matching, but that is not necessary for my purposes right now.

"Resource names must start with a letter or underscore, and may contain only letters, digits, underscores, and dashes" (source) so anything else should be fair game for matching

Check rationale

Thanks for sharing Checkov with the community. I've tried it out briefly on a few of our Terraform projects and found it easy enough to work with. I found some of the checks a little hard to understand the reasoning behind however.

Would it be reasonable to add a rationale to each check, explaining what kind of issues are avoided by enforcing the check and why it is considered a best practice to follow a particular configuration?

GoogleComputeFirewallUnrestrictedIngress - KeyError on parsing

Describe the bug
Checkov traces back on checking some of my GCP manifests.

To Reproduce
Managed to get a reproducible scenario:

temikus λ checkov
...
version: 1.0.131
...

λ mkdir test && cd test                                                                                                                                       
λ wget https://gist.githubusercontent.com/Temikus/6be1f3e408d84f609a739718a42e3cf5/raw/971c5834234d32b3ddf4614defd6a641249d935c/checkov_fail.tf

λ checkov -d .
ERROR:checkov.terraform.checks.resource.gcp.GoogleComputeFirewallUnrestrictedIngress3389:Failed to run check Ensure Google compute firewall ingress does not allow unrestricted rdp access for configuration {'description': ['allow Google health checks and network load balancers access'], 'name': ['my-firewall'], 'network': ['default'], 'allow': [{'protocol': ['icmp']}, {'protocol': ['tcp'], 'ports': [['8080', '443']]}], 'source_ranges': [['130.211.0.0/22', '35.191.0.0/16']], 'target_tags': [['my-tag']]}
Traceback (most recent call last):
  File "/Users/temikus/.homebrew/bin/checkov", line 5, in <module>
    run()
  File "/Users/temikus/.homebrew/lib/python3.7/site-packages/checkov/main.py", line 37, in run
    report = Runner().run(root_folder, external_checks_dir=args.external_checks_dir, files=file)
  File "/Users/temikus/.homebrew/lib/python3.7/site-packages/checkov/terraform/runner.py", line 29, in run
    self.check_tf_definition(report, root_folder, tf_definitions)
  File "/Users/temikus/.homebrew/lib/python3.7/site-packages/checkov/terraform/runner.py", line 52, in check_tf_definition
    block_type)
  File "/Users/temikus/.homebrew/lib/python3.7/site-packages/checkov/terraform/runner.py", line 66, in run_block
    results = registry.scan(entity, scanned_file, skipped_checks)
  File "/Users/temikus/.homebrew/lib/python3.7/site-packages/checkov/terraform/checks/utilities/base_registry.py", line 39, in scan
    entity_name=entity_name, entity_type=entity, skip_info=skip_info)
  File "/Users/temikus/.homebrew/lib/python3.7/site-packages/checkov/terraform/checks/utilities/base_check.py", line 44, in run
    raise e
  File "/Users/temikus/.homebrew/lib/python3.7/site-packages/checkov/terraform/checks/utilities/base_check.py", line 33, in run
    check_result['result'] = self.scan_entity_conf(entity_configuration)
  File "/Users/temikus/.homebrew/lib/python3.7/site-packages/checkov/terraform/checks/resource/base_check.py", line 20, in scan_entity_conf
    return self.scan_resource_conf(conf)
  File "/Users/temikus/.homebrew/lib/python3.7/site-packages/checkov/terraform/checks/resource/gcp/GoogleComputeFirewallUnrestrictedIngress3389.py", line 22, in scan_resource_conf
    if PORT in conf['allow'][0]['ports'][0]:
KeyError: 'ports'

Expected behavior
Not errorring out.

Screenshots
N/A

Desktop (please complete the following information):

  • OS: OSX Mojave 10.14.6
  • Checkov Version: 1.0.131

Docker command in README.md is wrong

Describe the bug
The docker run command in the readme is incorrect and does not work. It should be:
docker run -v /user/tf:/tf bridgecrew/checkov -d /tf

Allow Global Suppression

It would be ideal to globally suppress a check rather than needing to suppress each resource. For instance, if all security groups don't have a description, it would be preferable to raise a global suppression until the issue can be addressed.

Ability to run Checkov with 'quiet mode'

Is your feature request related to a problem? Please describe.
The default cli output is very noisy on a clean, successful run. For example, I have 11 checks that run on my current project, and if it is successful, with no problems at all, I get:

  • 8 lines of output displaying Checkov in fancy branding and the version
  • 2 lines of output displaying how many checks passed, failed or were skipped
  • 4+ lines of output for each successful check, showing the line, the name of the check, the fact that it passed, and the evaluation of every variable evaluated in an expression that check, which adds an extra line each (which could be a lot if it's checking a complex object)
  • 5+ lines of output for each skipped check, show the same as successful check, but with also the skipped comment

This means running the tool, on a repo with no issues, scrolls by about 2 pages of noise, which I have to scroll back up to check on the actual status, which is printed at the top of the output.

Describe the solution you'd like
I would like the tool to behave like a standard CLI utility, with respect to default output. If there are no errors, don't print out anything and return code 0. If there are problems, just show the problems, and not a full report of everything possible, and return a code greater than 0. I still believe the default cli output has value, for when you need to validate that rule is running or you are investigating a possible problem, but that giant report is unnecessary 90% of the time when you're just iterating through changes, making sure you didn't introduce something foolish. IMO, the default cli output should be what I've just described, and if you want a full report on all checks, version, summary, etc, you should explicitly supply --output report.

Describe alternatives you've considered
One could pipe the output from Checkov into a few chained together CLI utilities, to filter down the unnecessary noise, but that definitely feels like hacking into place something that should be more or less the default.

Additional context
None

Checkov should not exit success (0) on parsing errors

Describe the bug
Checkov exits 0 on success and 1 on any failed checks.

It also returns 0 on parsing errors. It should probably return something else, to cause CI pipelines to fail instead of quietly succeeding.

To Reproduce
Steps to reproduce the behavior:

  1. echo "hello" > ./bad.tf
  2. checkov -d .
  3. See that there is a parsing error, but checkov exits 0.
$ checkov -d .
       _               _              
   ___| |__   ___  ___| | _______   __
  / __| '_ \ / _ \/ __| |/ / _ \ \ / /
 | (__| | | |  __/ (__|   < (_) \ V / 
  \___|_| |_|\___|\___|_|\_\___/ \_/  
                                      
version: 1.0.98 

Passed checks: 0, Failed checks: 0, Skipped checks: 0, Parsing errors: 1

$ echo $?
0

Expected behavior
Checkov should return something non-zero (maybe 2?)

$ checkov -d .
       _               _              
   ___| |__   ___  ___| | _______   __
  / __| '_ \ / _ \/ __| |/ / _ \ \ / /
 | (__| | | |  __/ (__|   < (_) \ V / 
  \___|_| |_|\___|\___|_|\_\___/ \_/  
                                      
version: 1.0.98 

Passed checks: 0, Failed checks: 0, Skipped checks: 0, Parsing errors: 1

$ echo $?
2

Desktop (please complete the following information):

  • OS: Linux
  • Checkov Version: 1.0.98

The releases of checkov since 1.0.188 are broken

Describe the bug
When you run checkov at the cli (Windows or bash) in either Python 3.6 or 3.7, it throws an exception in checkov/cloudformation/runner.py

09:17 $ checkov -d .
Traceback (most recent call last):
  File "/home/jim/.local/bin/checkov", line 5, in <module>
    run()
  File "/home/jim/.local/lib/python3.6/site-packages/checkov/main.py", line 42, in run
    scan_reports = runner_registry.run(root_folder, external_checks_dir=args.external_checks_dir, files=file)
  File "/home/jim/.local/lib/python3.6/site-packages/checkov/common/runners/runner_registry.py", line 24, in run
    scan_report = runner().run(root_folder, external_checks_dir=external_checks_dir, files=files)
  File "/home/jim/.local/lib/python3.6/site-packages/checkov/cloudformation/runner.py", line 71, in run
    for resource_name, resource in definitions[cf_file]['Resources'].items():
KeyError: 'Resources'
To Reproduce
Install the latest checkov (greater than 1.0.188).

Steps to reproduce the behavior:

  1. Go to a folder with terraform init
  2. Run cli command checkov -d .
  3. See error

Expected behavior
Checkov scan results.

Screenshots
If applicable, add screenshots to help explain your problem.

Desktop (please complete the following information):

  • OS: Windows and Ubuntu
  • Checkov Version: greater than 1.0.188

Additional context
Add any other context about the problem here (e.g. code snippets).

Rule check suppression comments do not work

Describe the bug
When attempting to use a comment to suppress a rule check, it does not work.

To Reproduce
Steps to reproduce the behavior:

  1. Create a terraform file containing this minimal, stripped down example:
  resource "aws_eks_cluster" "main" {
    #checkov.skip=CKV_AWS_38:Testing suppression
    #checkov.skip=CKV_AWS_39:Testing suppression
    vpc_config {
      subnet_ids = [for value in aws_subnet.public : value.id]
    }

    enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
  }
  1. Run checkov via checkov --directory . -o cli
  2. Observe that the output does not have 2 skipped checks, and instead lists failures:

       _               _
   ___| |__   ___  ___| | _______   __
  / __| '_ \ / _ \/ __| |/ / _ \ \ / /
 | (__| | | |  __/ (__|   < (_) \ V /
  \___|_| |_|\___|\___|_|\_\___/ \_/

version: 1.0.167

Passed checks: 1, Failed checks: 2, Skipped checks: 0

Check: CKV_AWS_37: "Ensure Amazon EKS control plane logging enabled for all log types"
        PASSED for resource: aws_eks_cluster.main
        File: /eks:1-9


Check: CKV_AWS_38: "Ensure Amazon EKS public endpoint not accessible to 0.0.0.0/0"
        FAILED for resource: aws_eks_cluster.main
        File: /eks:1-9

                1 | resource "aws_eks_cluster" "main" {
                2 |   #checkov.skip=CKV_AWS_38:Testing suppression
                3 |   #checkov.skip=CKV_AWS_39:Testing suppression
                4 |   vpc_config {
                5 |     subnet_ids = [for value in aws_subnet.public : value.id]
                6 |   }
                7 | 
                8 |   enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
                9 | }


Check: CKV_AWS_39: "Ensure Amazon EKS public endpoint disabled"
        FAILED for resource: aws_eks_cluster.main
        File: /eks:1-9

                1 | resource "aws_eks_cluster" "main" {
                2 |   #checkov.skip=CKV_AWS_38:Testing suppression
                3 |   #checkov.skip=CKV_AWS_39:Testing suppression
                4 |   vpc_config {
                5 |     subnet_ids = [for value in aws_subnet.public : value.id]
                6 |   }
                7 | 
                8 |   enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
                9 | }

Expected behavior
I expected the output to list 2 skipped checks, and list both CKV_AWS_38 and CKV_AWS_39 as skipped and not failed.

Screenshots
The examples above should describe things sufficiently.

Desktop (please complete the following information):

  • OS: Ubuntu 18.04.4 LTS
  • Checkov Version: 1.0.167

Additional context
The example above is the most stripped down example I could make from my original Terraform code that produced the error. It's not intended to be a complete, sensible Terraform file, it's intended to illustrate the bug.

GCP cloudSQL check failing even though should pass - rule bug???

Describe the bug
Check fails even though configuration correct for GCP cloudSQL
Check: CKV_GCP_6: "Ensure all Cloud SQL database instance requires all incoming connections to use SSL"
FAILED for resource: google_sql_database_instance.db-instance
File: /:29-41

	29 | resource "google_sql_database_instance" "db-instance" {
	30 |   name             = var.instance_name
	31 |   database_version = var.database_version
	32 |
	33 |   settings {
	34 |     # Second-generation instance tiers are based on the machine
	35 |     # type. See argument reference below.
	36 |     tier            = var.instance_tier
	37 |     ip_configuration {
	38 |       require_ssl = true
	39 |     }
	40 |   }
	41 | }

To Reproduce
Create resource as in the output

Expected behavior
Check PASS

Screenshots
If applicable, add screenshots to help explain your problem.

Desktop (please complete the following information):

  • OS: MacOSX
  • Checkov Version 1.0.173

Additional context
Add any other context about the problem here (e.g. code snippets).

Add new check: Ensure CloudTrail logs are encrypted at rest using KMS CMKs

Is your feature request related to a problem? Please describe.
Enable checking cloudtrail encryption by verifying that in a cloudtrail definition block kms_key_id =foo

Describe the solution you'd like
Create a new policies that checks for kms encryption for cloudtrail resources.
The following resource block should have a CheckResult.Passed:

resource "aws_cloudtrail" "good_cloudtrail" {
  enable_logging                = true
  s3_bucket_name                = foo
  enable_log_file_validation    = true
  is_multi_region_trail         = true
  include_global_service_events = true
 kms_key_id =foo
}

The following resource block should have a CheckResult.Failed:

resource "aws_cloudtrail" "bad_cloudtrail" {
  enable_logging                = true
  s3_bucket_name                = foo
  enable_log_file_validation    = true
  is_multi_region_trail         = true
  include_global_service_events = true
}

Additional context
Terraform cloudtrail resource page: https://www.terraform.io/docs/providers/aws/r/cloudtrail.html
How to write a new checkov policy: https://bridgecrewio.github.io/checkov/1.Introduction/Policies.html
CIS check description: https://d1.awsstatic.com/whitepapers/compliance/AWS_CIS_Foundations_Benchmark.pdf#page=78

Checkov doesn't seem to support variables for aws_ebs_volume / encrypted

Describe the bug
Checkov doesn't seem to support variables for aws_ebs_volume / encrypted.

Perhaps this was by design, but please explain if otherwise.

i.e.

resource "aws_ebs_volume" "default" {
  availability_zone = var.volume_az
  size                     = var.volume_size
  type                    = var.volume_type
  encrypted          = var.volume_encryption
}

variable "volume_encryption" {
  type        = bool
  description = "Volume encryption enabled or disabled."
  default     = true
}

To Reproduce
Steps to reproduce the behavior:

  1. Run checkov -d ./terraform-code.tf
  2. You get:
Check: CKV_AWS_3: "Ensure all data stored in the EBS is securely encrypted "
	FAILED for resource: aws_ebs_volume.default

Expected behavior
It should interpret the variable and pass. As it's set to true by default and nothing is overriding that.

Desktop (please complete the following information):

  • OS: MacOS
  • Checkov Version checkov==1.0.102

Add new check: Ensure CloudTrail log file validation is enabled

Is your feature request related to a problem? Please describe.
Enable checking cloudtrail log file integrity by verifying that in a cloudtrail definition block enable_log_file_validation=true

Describe the solution you'd like
Create a new policies that checks for log validation for cloudtrail resources.
The following resource block should have a CheckResult.Passed:

resource "aws_cloudtrail" "good_cloudtrail" {
  enable_logging                = true
  s3_bucket_name                = foo
 enable_log_file_validation    = true
  is_multi_region_trail         = true
  include_global_service_events = true
}

The following resource block should have a CheckResult.Failed:

resource "aws_cloudtrail" "bad_cloudtrail" {
  enable_logging                = true
  s3_bucket_name                = foo
  is_multi_region_trail         = true
  include_global_service_events = true
}

Additional context
Terraform cloudtrail resource page: https://www.terraform.io/docs/providers/aws/r/cloudtrail.html
How to write a new checkov policy: https://bridgecrewio.github.io/checkov/1.Introduction/Policies.html
CIS check description: https://d1.awsstatic.com/whitepapers/compliance/AWS_CIS_Foundations_Benchmark.pdf#page=64

Support multi files scanner

Today Checkov supports scanning one file using the command:
checkov -f /user/tf/example.tf

I'd love Checkov to support scanning multiple files, instead of running Checkov multiple times with different file paths.

For example:
checkov -f [/user/tf/example.tf, /user/tf/example/main.tf]

Ability to disable some rules globally

Is your feature request related to a problem? Please describe.
Some default policies might be troublesome in some environments like production. We want to be able to disable globally some default checks e.g. CKV_GCP_10: "Ensure 'Automatic node upgrade' is enabled for Kubernetes Clusters" which might not be an ideal for production environments.

Describe the solution you'd like
Have a CLI option where you can specify a list of excluded default checks

Describe alternatives you've considered
Forcing people to use skip checks works against the main motivation of the tool, which is to define what the standard and best practice are

Additional context
Add any other context or screenshots about the feature request here.

Add new check: Check if CloudFront distributions are set to HTTPS

Is your feature request related to a problem? Please describe.
Enable checking cloudfront distribution ViewerProtocolPolicy and verify that viewer_protocol_policy=https-only or viewer_protocol_policy=redirect-to-https

Describe the solution you'd like
Create a new policies that checks for viewer_protocol_policy for cloudfront resources.
The following resource block should have a CheckResult.Passed:

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = "${aws_s3_bucket.b.bucket_regional_domain_name}"
    origin_id   = "${local.s3_origin_id}"

    s3_origin_config {
      origin_access_identity = "origin-access-identity/cloudfront/ABCDEFG1234567"
    }
  }

  enabled             = true
  is_ipv6_enabled     = true
  comment             = "Some comment"
  default_root_object = "index.html"

  logging_config {
    include_cookies = false
    bucket          = "mylogs.s3.amazonaws.com"
    prefix          = "myprefix"
  }

  aliases = ["mysite.example.com", "yoursite.example.com"]

  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = "${local.s3_origin_id}"

    forwarded_values {
      query_string = false
      headers      = ["Origin"]

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  # Cache behavior with precedence 1
  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "${local.s3_origin_id}"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_200"

  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["US", "CA", "GB", "DE"]
    }
  }

  tags = {
    Environment = "production"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

Additional context
Terraform resource:
https://www.terraform.io/docs/providers/aws/r/cloudfront_distribution.html#viewer_protocol_policy

Versioned Rulesets

Is your feature request related to a problem? Please describe.
Currently it is impossible to update to a new version of checkov to get bug-fixes without also getting potentially new rules. This would result in a number of quick changes to get a repo to lint clean again. While the new rules should be fixed and deficiencies should be corrected, it shouldn't be in the same change to update software.

Describe the solution you'd like
Each rule should be part of a rule set and these sets should be enabled or disabled independently of the version of checkov running. New rulesets should default to disabled.

Describe alternatives you've considered
I've considered pinning checkov to a specific version, but this means I won't get bug-fixes anymore.

Document reasoning behind rule checks

Is your feature request related to a problem? Please describe.
When running checkov, you get pointed to rule violations with a one line explanation of what is wrong, ex. Ensure Amazon EKS public endpoint disabled. However, there's no deeper explanation on why that's something to be concerned about, only that it's suggested that you fix it. Also, fixing it with respect to making checkov happy is one thing, but supporting, external steps to make it work may be bigger than the one line configuration change. Tooling like this is a great opportunity to educate users on good practices, common problems, and solutions.

Describe the solution you'd like
I would like to see documentation that explains why each rule is there (ie. what exactly is the problem?) and suggestions on how to fix it (ie. what configuration to change/add/update, as well as external changes to support it). Since each rule is supposed to be based on 'best practices', there should be a valid explanation of what problem the rule identifies actually is. Likewise, if these are 'best practices', then well known solution patterns should be available easily and conveniently. Similarly, examples, when applicable, of when this violation might safely be ignored/disabled should be discussed.

Describe alternatives you've considered
The alternative is to internet search phrases around the one line suggestion in the output to try to find help on why it's a problem, and what to do about it.

Additional context
TFLint does a decent job of this: https://github.com/terraform-linters/tflint/blob/v0.14.0/docs/rules/aws_db_instance_default_parameter_group.md, and Rubocop at least links out to an external explanation: https://docs.rubocop.org/en/stable/cops_lint/

Dynamic blocks handling is partial

Describe the bug
An S3 bucket with a dynamic logging block is considered a violation, even if a value was set for the variable externally.

To Reproduce
Steps to reproduce the behavior:
S3 configuration:

resource "aws_s3_bucket" "bridgecrew_cws_bucket" {
  count = var.existing_bucket_name == null ? 1 : 0

  bucket        = local.bucket_name
  acl               = "private"

  versioning {
    enabled = true
  }

  lifecycle_rule {
    id      = "Delete old log files"
    enabled = true

    noncurrent_version_expiration {
      days = var.log_file_expiration
    }

    expiration {
      days = var.log_file_expiration
    }
  }

  dynamic "logging" {
    for_each = var.logs_bucket_id != null ? [var.logs_bucket_id] : []

    content {
      target_bucket = logging.value
      target_prefix = "/${local.bucket_name}"
    }
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = local.kms_key
        sse_algorithm     = "aws:kms"
      }
    }
  }

  tags = {
    Name = "BridgecrewCWSBucket"
  }
}

Expected behavior
The check should not fail

Desktop (please complete the following information):

  • OS: mac OSX Catalina
  • Checkov Version 1.0.167

Checkov fails to start in Windows environments

Describe the bug
After you install Checkov on Windows, running Checkov does nothing.

To Reproduce
Steps to reproduce the behavior:

  1. Open Powershell/cmd
  2. Run cli command 'checkov'
  3. Does nothing

Expected behavior
The tool running. Magic.

Screenshots
I'm not sure showing nothing would help.

Desktop (please complete the following information):

  • OS: Windows 10
  • Checkov Version 1.0.173

Additional context
I know Windows! Like who cares and tbh ive got WSL2 and it works a dream but customers, customers and their awful locked down... anyway.
I'm using Python 3.7, where I've installed it.
If you look in your c:/Python37/scripts folder there is a "checkov" bash script. This is the nub of it: this doesn't run! However if you add a batch file "checkov-scan.bat" [or call it whatever] with this content:

C:\Python37\python C:\Python37\Lib\site-packages\checkov\main.py %1 %2

Then when you run "checkov-scan" at your shell, it works! So is there anyway you could package up something similar in a release? please?
Also I made a python based pre-commit for checkov called checkov-scan - here https://github.com/JamesWoolfenden/pre-commit

Python error IndexError: list index out of range

Describe the bug
I get this error for any of my terraform modules. Could it be due to tfvars files? Is there some way to specify a var file to use?

$ checkov -d /path/to/module
Traceback (most recent call last):
  File "/usr/local/bin/checkov", line 24, in <module>
    report = Runner().run(root_folder)
  File "/usr/local/lib/python3.8/site-packages/checkov/terraform/runner.py", line 22, in run
    scanned_file = definition[0].split(root_folder)[1]
IndexError: list index out of range

My terraform module is formatted as follows:

├── apis.tf
├── dataflow.tf
├── env
│   ├── backend
│   │   ├── development.tfvars
│   │   ├── production.tfvars
│   │   └── staging.tfvars
│   ├── development.tfvars
│   ├── production.tfvars
│   └── staging.tfvars
├── main.tf
├── variables.tf
└── versions.tf

To Reproduce
Steps to reproduce the behavior:

  1. run checkov -d /path/to/module
  2. See error

Expected behavior
Output

Desktop (please complete the following information):

  • OS: python:alpine docker image
  • Checkov Version [e.g. 22]: 1.0.78
