aquasecurity / cloudsploit

Cloud Security Posture Management (CSPM)

Home Page: https://cloud.aquasec.com/signup

License: GNU General Public License v3.0

JavaScript 99.99% Dockerfile 0.01%
aws security security-audit cloud azure cspm aqua gcp oci oracle

cloudsploit's Introduction


CloudSploit by Aqua - Cloud Security Scans

Quick Start

Generic

$ git clone https://github.com/aquasecurity/cloudsploit.git
$ cd cloudsploit
$ npm install
$ ./index.js -h

Docker

$ git clone https://github.com/aquasecurity/cloudsploit.git
$ cd cloudsploit
$ docker build . -t cloudsploit:0.0.1
$ docker run cloudsploit:0.0.1 -h
$ docker run -e AWS_ACCESS_KEY_ID=XX -e AWS_SECRET_ACCESS_KEY=YY cloudsploit:0.0.1 --compliance=pci

Documentation

Background

CloudSploit by Aqua is an open-source project designed to detect security risks in cloud infrastructure accounts, including Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Oracle Cloud Infrastructure (OCI), and GitHub. These scripts return a series of potential misconfigurations and security risks.

Deployment Options

CloudSploit is available in two deployment options:

Self-Hosted

Follow the instructions below to deploy the open-source version of CloudSploit on your machine in just a few simple steps.

Hosted at Aqua Wave

A commercial version of CloudSploit hosted at Aqua Wave. Try Aqua Wave today!

Installation

Ensure that Node.js is installed. If not, install it from https://nodejs.org.

$ git clone https://github.com/aquasecurity/cloudsploit.git
$ cd cloudsploit
$ npm install

Configuration

CloudSploit requires read-only permission to your cloud account. Follow the guides below to provision this access:

For AWS, you can run CloudSploit directly and it will detect credentials using the default AWS credential chain.

CloudSploit Config File

The CloudSploit config file allows you to pass cloud provider credentials via:

  1. A JSON file on your file system
  2. Environment variables
  3. Hard-coding (not recommended)

Start by copying the example config file:

$ cp config_example.js config.js

Edit the config file by uncommenting the relevant sections for the cloud provider you are testing. Each cloud has both a credential_file option, as well as inline options. For example:

azure: {
    // OPTION 1: If using a credential JSON file, enter the path below
    // credential_file: '/path/to/file.json',
    // OPTION 2: If using hard-coded credentials, enter them below
    // application_id: process.env.AZURE_APPLICATION_ID || '',
    // key_value: process.env.AZURE_KEY_VALUE || '',
    // directory_id: process.env.AZURE_DIRECTORY_ID || '',
    // subscription_id: process.env.AZURE_SUBSCRIPTION_ID || ''
}

Credential Files

If you use the credential_file option, point to a file in your file system that follows the correct format for the cloud you are using.

AWS

{
  "accessKeyId": "YOURACCESSKEY",
  "secretAccessKey": "YOURSECRETKEY"
}

Azure

{
  "ApplicationID": "YOURAZUREAPPLICATIONID",
  "KeyValue": "YOURAZUREKEYVALUE",
  "DirectoryID": "YOURAZUREDIRECTORYID",
  "SubscriptionID": "YOURAZURESUBSCRIPTIONID"
}

GCP

Note: For GCP, you generate a JSON file directly from the GCP console, which you should not edit.

{
    "type": "service_account",
    "project": "GCPPROJECTNAME",
    "client_email": "GCPCLIENTEMAIL",
    "private_key": "GCPPRIVATEKEY"
}

Oracle OCI

{
  "tenancyId": "YOURORACLETENANCYID",
  "compartmentId": "YOURORACLECOMPARTMENTID",
  "userId": "YOURORACLEUSERID",
  "keyFingerprint": "YOURORACLEKEYFINGERPRINT",
  "keyValue": "YOURORACLEKEYVALUE"
}

Environment Variables

CloudSploit supports passing environment variables, but you must first uncomment the section of your config.js file relevant to the cloud provider being scanned.

You can then pass the variables listed in each section. For example, for AWS:

{
  access_key: process.env.AWS_ACCESS_KEY_ID || '',
  secret_access_key: process.env.AWS_SECRET_ACCESS_KEY || '',
  session_token: process.env.AWS_SESSION_TOKEN || '',
}

Running

To run a standard scan, showing all outputs and results, simply run:

$ ./index.js

CLI Options

CloudSploit supports many options to customize the run time. Some popular options include:

  • AWS GovCloud support: --govcloud
  • AWS China support: --china
  • Save the raw cloud provider response data: --collection=file.json
  • Ignore passing (OK) results: --ignore-ok
  • Exit with a non-zero code if non-passing results are found: --exit-code
    • This is a good option for CI/CD systems
  • Change the output from a table to raw text: --console=text

See Output Formats below for more output options.

Click for a full list of options
$ ./index.js -h

  _____ _                 _  _____       _       _ _
  / ____| |               | |/ ____|     | |     (_) |
| |    | | ___  _   _  __| | (___  _ __ | | ___  _| |_
| |    | |/ _ \| | | |/ _` |\___ \| '_ \| |/ _ \| | __|
| |____| | (_) | |_| | (_| |____) | |_) | | (_) | | |_
  \_____|_|\___/ \__,_|\__,_|_____/| .__/|_|\___/|_|\__|
                                  | |
                                  |_|

  CloudSploit by Aqua Security, Ltd.
  Cloud security auditing for AWS, Azure, GCP, Oracle, and GitHub

usage: index.js [-h] --config CONFIG [--compliance {hipaa,cis,cis1,cis2,pci}] [--plugin PLUGIN] [--govcloud] [--china] [--csv CSV] [--json JSON] [--junit JUNIT]
                [--table] [--console {none,text,table}] [--collection COLLECTION] [--ignore-ok] [--exit-code] [--skip-paginate] [--suppress SUPPRESS]

optional arguments:
  -h, --help            show this help message and exit
  --config CONFIG
                        The path to a cloud provider credentials file.
  --compliance {hipaa,cis,cis1,cis2,pci}
                        Compliance mode. Only return results applicable to the selected program.
  --plugin PLUGIN       A specific plugin to run. If none provided, all plugins will be run. Obtain from the exports.js file. E.g. acmValidation
  --govcloud            AWS only. Enables GovCloud mode.
  --china               AWS only. Enables AWS China mode.
  --csv CSV             Output: CSV file
  --json JSON           Output: JSON file
  --junit JUNIT         Output: Junit file
  --table               Output: table
  --console {none,text,table}
                        Console output format. Default: table
  --collection COLLECTION
                        Output: full collection JSON as file
  --ignore-ok           Ignore passing (OK) results
  --exit-code           Exits with a non-zero status code if non-passing results are found
  --skip-paginate       AWS only. Skips pagination (for debugging).
  --suppress SUPPRESS   Suppress results matching the provided Regex. Format: pluginId:region:resourceId

Compliance

CloudSploit supports mapping of its plugins to particular compliance policies. To run the compliance scan, use the --compliance flag. For example:

$ ./index.js --compliance=hipaa
$ ./index.js --compliance=pci

Multiple compliance modes can be run at the same time:

$ ./index.js --compliance=cis1 --compliance=cis2

CloudSploit currently supports the following compliance mappings:

HIPAA

$ ./index.js --compliance=hipaa

HIPAA scans map CloudSploit plugins to the Health Insurance Portability and Accountability Act of 1996.

PCI

$ ./index.js --compliance=pci

PCI scans map CloudSploit plugins to the Payment Card Industry Data Security Standard.

CIS Benchmarks

$ ./index.js --compliance=cis
$ ./index.js --compliance=cis1
$ ./index.js --compliance=cis2

CIS Benchmarks are supported, both for Level 1 and Level 2 controls. Passing --compliance=cis will run both level 1 and level 2 controls.

Output Formats

CloudSploit supports output in several formats for consumption by other tools. If you do not specify otherwise, CloudSploit writes output to standard output (the console) as a table.

Note: You can pass multiple output formats and combine options for further customization. For example:

# Print a table to the console and save a CSV file
$ ./index.js --csv=file.csv --console=table

# Print text to the console and save a JSON and JUnit file while ignoring passing results
$ ./index.js --json=file.json --junit=file.xml --console=text --ignore-ok

Console Output

By default, CloudSploit results are printed to the console in a table format (with colors). You can override this and use plain text instead, by running:

$ ./index.js --console=text

Alternatively, you can suppress the console output entirely by running:

$ ./index.js --console=none

Ignoring Passing Results

You can ignore results that return an OK status by passing the --ignore-ok command-line argument.

CSV

$ ./index.js --csv=file.csv

JSON

$ ./index.js --json=file.json

JUnit XML

$ ./index.js --junit=file.xml

Collection Output

CloudSploit saves the data queried from the cloud provider APIs in JSON format, which can be saved alongside other files for debugging or historical purposes.

$ ./index.js --collection=file.json

Suppressions

Results can be suppressed by passing the --suppress flag (multiple options are supported) with the following format:

--suppress pluginId:region:resourceId

For example:

# Suppress all results for the acmValidation plugin
$ ./index.js --suppress acmValidation:*:*

# Suppress all us-east-1 region results
$ ./index.js --suppress *:us-east-1:*

# Suppress all results matching the regex "certificate/*" in all regions for all plugins
$ ./index.js --suppress *:*:certificate/*

Running a Single Plugin

The --plugin flag can be used if you only wish to run one plugin.

$ ./index.js --plugin acmValidation

Architecture

CloudSploit works in two phases. First, it queries the cloud infrastructure APIs for various metadata about your account, namely the "collection" phase. Once all the necessary data is collected, the result is passed to the "scanning" phase. The scan uses the collected data to search for potential misconfigurations, risks, and other security issues, which are the resulting output.
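The two phases above can be sketched as follows (a minimal illustration only; the real plugin interface uses callbacks and more fields than shown here):

```javascript
// Hypothetical sketch of the two-phase model: a plugin declares which API
// calls it needs (collection phase) and a run() function that analyzes the
// collected data (scanning phase) without calling the cloud API itself.
const versioningPlugin = {
    title: 'S3 Bucket Versioning',
    apis: ['S3:listBuckets', 'S3:getBucketVersioning'],

    // Scanning phase: operates only on already-collected data
    run: function(collection) {
        const results = [];
        for (const bucket of collection.s3.listBuckets) {
            const versioning = collection.s3.getBucketVersioning[bucket.Name];
            results.push({
                resource: bucket.Name,
                status: versioning && versioning.Status === 'Enabled' ? 'OK' : 'FAIL'
            });
        }
        return results;
    }
};

// Collection phase output: a plain object keyed by service and API call
const collection = {
    s3: {
        listBuckets: [{ Name: 'logs' }, { Name: 'assets' }],
        getBucketVersioning: {
            logs: { Status: 'Enabled' },
            assets: { Status: 'Suspended' }
        }
    }
};

const results = versioningPlugin.run(collection);
```

Keeping the scanning phase free of API calls is what lets a saved collection file (see the --collection flag) be replayed later for debugging.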

Writing a Plugin

Please see our contribution guidelines and complete guide to writing CloudSploit plugins.

Writing a remediation

The --remediate flag can be used if you want to run remediation for the plugins mentioned as part of this argument. This takes a list of plugin names. Please see our developing remediation guide for more details.

Other Notes

For other details about the Aqua Wave SaaS product, AWS security policies, and more, click here.

cloudsploit's People

Contributors

abdullahaslam306, akhtaramir, ali-imran7, alphadev4, chrisfowles, chrisoverzero, dependabot[bot], dipsubha06, fatima99s, gabrielboucher, garretfick, giorod3, giorodaqua, jabhishek87, jchrisfarris, jcornell123, lborguetti, m-akhtar, m-nouman-suleman, maclennann, matthewdfuller, mav55, mehakseedat63, muzzamilinovaqo, pasanchamikara, reppard, sadeed12345, tpounds, umairajmal, umermehmood


cloudsploit's Issues

RDS Logging Enabled Plugin cannot work for Aurora Postgres

It seems that Aurora Postgres does not support publishing logs to CloudWatch Logs. Postgres RDS does, and Aurora MySQL does. I do not see anything in the Aurora Postgres docs indicating that this is specifically a missing feature, but I am unable to select it, and I see some people asking about this as a feature (with no reply) on a couple of web forums (https://www.reddit.com/r/aws/comments/9nbcev/aurora_postgres_error_logs_and_cloudwatch/, https://dba.stackexchange.com/questions/237564/how-can-we-publish-aurora-postgres-compatible-logs-to-cloudwatch), so I'm pretty sure it is not possible.

As such, the RDS Logging Enabled CloudSploit plugin reports this as an error, and there is no way to fix it other than suppressing the alert. It would be more graceful for the plugin to ignore Aurora Postgres databases until AWS supports this feature.
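A sketch of what such a skip might look like (field names follow the RDS DescribeDBInstances response; the SKIP status and the function name are illustrative, not the plugin's actual code):

```javascript
// Sketch: skip Aurora PostgreSQL instances in the RDS Logging Enabled check,
// since they cannot publish logs to CloudWatch Logs.
function rdsLoggingResult(instance) {
    if (instance.Engine === 'aurora-postgresql') {
        return { status: 'SKIP', message: 'Aurora PostgreSQL does not support CloudWatch log exports' };
    }
    const logExports = instance.EnabledCloudwatchLogsExports || [];
    return logExports.length
        ? { status: 'OK', message: 'Logging is enabled' }
        : { status: 'FAIL', message: 'Logging is not enabled' };
}
```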

Extend S3 Bucket Plugin to Use IAM Policies

Currently, the "S3 Bucket All Users Policy" plugin relies on S3 ACLs. However, AWS also uses IAM policies for buckets. Both should be used to determine the final result.
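A sketch of combining both signals (input shapes loosely mirror the S3 GetBucketAcl and GetBucketPolicy responses; the function is illustrative, not the plugin's actual code):

```javascript
// Sketch: a bucket is open to all users if EITHER its ACL grants access to
// the AllUsers group OR its bucket policy allows Principal "*".
const ALL_USERS_URI = 'http://acs.amazonaws.com/groups/global/AllUsers';

function openToAllUsers(acl, policy) {
    const aclOpen = (acl.Grants || []).some(function(g) {
        return g.Grantee && g.Grantee.URI === ALL_USERS_URI;
    });
    const policyOpen = ((policy && policy.Statement) || []).some(function(s) {
        return s.Effect === 'Allow' &&
            (s.Principal === '*' || (s.Principal && s.Principal.AWS === '*'));
    });
    return aclOpen || policyOpen;
}
```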

KMS Key Policy Scan double-counts duplicates

My account has a KMS key that the KMS Key Policy plugin reports as having 13 users/roles allowed to use it. The key in question seems to have 3 policies: one with one user, another with 6 users, and a third with the same 6 users but different actions/conditions. I'd add that up to 7 instead of 13.

The code does not seem to deduplicate for each policy.
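A sketch of the deduplication described above (the statement shape mirrors a key policy document; the helper is illustrative):

```javascript
// Sketch: collect principals from every statement into a Set so that a user
// granted by multiple statements is counted only once.
function countDistinctPrincipals(statements) {
    const principals = new Set();
    for (const s of statements) {
        let aws = (s.Principal && s.Principal.AWS) || [];
        if (!Array.isArray(aws)) aws = [aws];
        aws.forEach(function(p) { principals.add(p); });
    }
    return principals.size;
}
```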

CloudFront HTTPS-Only Should Not Be a WARN Result

In CloudFront scans, a scan is configured to WARN when a CloudFront distribution is available only over HTTPS:

else if (Distribution.DefaultCacheBehavior.ViewerProtocolPolicy == 'https-only'){
    helpers.addResult(results, 1, 'The CloudFront ' + 
        'distribution is set to use HTTPS but not to' + 
        'redirect users accessing the endpoint over HTTP', 'global',
        Distribution.ARN)
}

This contradicts the "More Info" of the scan:

For maximum security, CloudFront distributions can be configured to only accept HTTPS connections or to redirect HTTP connections to HTTPS.

These are presented as equal options, and HTTPS-only is even mentioned first. Additionally, this contradicts the spirit of the compliance section:

HIPAA requires all data to be transmitted over secure channels. CloudFront HTTPS redirection should be used to ensure site visitors are always connecting over a secure channel.

HTTPS-only also ensures site visitors are always connecting over a secure channel.

Also, there's a small typo in the scan that results in the WARN description rendering as "but not toredirect users".

@octocat/app is required but missing.

A new dependency was added but not included in package.json. This breaks clients who follow the standard workflow since they will not have the dependency.

See this change for a fix and small refactor to reduce the likelihood in the future #186

Lambda support

Are there any plans to scan through Lambda function meta data?

Plugin for AWS Shield Advanced

We would like some plugins related to Shield Advanced.

  • Subscription Enabled - CLI Command: aws shield get-subscription-state
  • Emergency Contacts Are Set - CLI Command: aws shield describe-emergency-contact-settings
  • Protections Established - CLI Command: aws shield list-protections

Support for private clouds like OpenStack

Hi, this is not an issue but a feature request. Do you plan to extend CloudSploit's capabilities to open-source clouds like OpenStack? This could be a very interesting feature...

Support Google Cloud Platform (GCP)

Based on user feedback, CloudSploit is prioritizing support for Google Cloud Platform as its next cloud (following the beta rollout of Azure, Oracle, and GitHub support).

We've begun the initial investigation required for authentication and collection interfaces, but are interested in community recommendations for an initial set of security checks and controls.

Please comment or vote on security checks for GCP that you would like to see.

Add ability to produce a CSV export

I implemented this in a fork of this repo. Not sure if this is the best way but it's working for me.

var async = require('async');
var plugins = require('./exports.js');
var collector = require('./collect.js');
var commandLineArgs = require('command-line-args');
var csvWriter = require('csv-write-stream');
var fs = require('fs');

//define the command-line args we will support
const optionDefinitions = [
  { name: 'export', type: String }
]
//parse the command-line args - available in options
const options = commandLineArgs(optionDefinitions)

var exportCSV = false;
if (options.export) {
    if (options.export.toLowerCase() === 'csv') {
        exportCSV = true;
    }
}

var AWSConfig;

// OPTION 1: Configure AWS credentials through hard-coded key and secret
// AWSConfig = {
//     accessKeyId: '',
//     secretAccessKey: '',
//     sessionToken: '',
//     region: 'us-east-1'
// };

// OPTION 2: Load AWS credentials from a local JSON file
AWSConfig = require(__dirname + '/credentials.json');

// OPTION 3: ENV configuration with AWS_ env vars
if(process.env.AWS_ACCESS_KEY_ID && process.env.AWS_SECRET_ACCESS_KEY){
    AWSConfig = {
        accessKeyId: process.env.AWS_ACCESS_KEY_ID,
        secretAccessKey:  process.env.AWS_SECRET_ACCESS_KEY,
        sessionToken: process.env.AWS_SESSION_TOKEN,
        region: process.env.AWS_DEFAULT_REGION || 'us-east-1'
    };
}

if (!AWSConfig || !AWSConfig.accessKeyId) {
    return console.log('ERROR: Invalid AWSConfig');
}

var skipRegions = [];   // Add any regions you wish to skip here. Ex: 'us-east-2'

// Custom settings - place plugin-specific settings here
var settings = {};

// STEP 1 - Obtain API calls to make
console.log('INFO: Determining API calls to make...');

var apiCalls = [];

for (p in plugins) {
    for (a in plugins[p].apis) {
        if (apiCalls.indexOf(plugins[p].apis[a]) === -1) {
            apiCalls.push(plugins[p].apis[a]);
        }
    }
}

console.log('INFO: API calls determined.');
console.log('INFO: Collecting AWS metadata. This may take several minutes...');

// STEP 2 - Collect API Metadata from AWS
collector(AWSConfig, {api_calls: apiCalls, skip_regions: skipRegions}, function(err, collection){
    if (err || !collection) return console.log('ERROR: Unable to obtain API metadata');

    console.log('INFO: Metadata collection complete. Analyzing...');
    console.log('INFO: Analysis complete. Scan report to follow...\n');

    if (exportCSV) {
        var writer = csvWriter({headers: ["category", "title", "resource", "region", "statusWord", "message"]});
        writer.pipe(fs.createWriteStream('results.csv'));
    }

    async.forEachOfLimit(plugins, 10, function(plugin, key, callback){
        plugin.run(collection, settings, function(err, results){
            for (r in results) {
                var statusWord;
                if (results[r].status === 0) {
                    statusWord = 'OK';
                } else if (results[r].status === 1) {
                    statusWord = 'WARN';
                } else if (results[r].status === 2) {
                    statusWord = 'FAIL';
                } else {
                    statusWord = 'UNKNOWN';
                }

                console.log(plugin.category + '\t' + plugin.title + '\t' +
                            (results[r].resource || 'N/A') + '\t' +
                            (results[r].region || 'Global') + '\t\t' +
                            statusWord + '\t' + results[r].message);


                // if the user asks us to export to csv, let's do that
                if (exportCSV) {
                    writer.write([plugin.category, plugin.title, (results[r].resource || 'N/A'), (results[r].region || 'Global'), statusWord, results[r].message]);
                }
            }

            callback(err);
        });
    }, function(err){
        if (err) return console.log(err);

        // End the CSV stream only after every plugin has finished writing;
        // ending it earlier would close the file before the async scans complete.
        if (exportCSV) {
            writer.end();
            console.log('INFO: Results available at results.csv.');
        }
    });
});

Suppressing UX

Improve suppression UX and add Filtering.

(via ticket) Suppressing is a bit wonky. When you suppress, you get a notification up top, but there is nothing on the actual item to indicate it has now been suppressed. It would be nice to change the status from FAIL to SUPPRESS or SUPPRESSED so you know the item has been suppressed. In addition, that would be a nice way to enable filtering.

API Gateway AWS WAF Integration Check

Check that AWS WAF is enabled for published APIs in API Gateway to protect against attacks.

High Level process

  • Get APIs in a region
  • Get stages for each API
  • Check to see if there is a WAF Web ACL ARN assigned
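The high-level process above, sketched against already-collected data (the input shape here is an assumption for illustration; a WAF-protected stage carries a webAclArn in the API Gateway GetStages response):

```javascript
// Sketch: for each API and each of its stages, flag stages with no
// WAF Web ACL ARN assigned.
function stagesWithoutWaf(apis) {
    const findings = [];
    for (const api of apis) {
        for (const stage of api.stages) {
            if (!stage.webAclArn) findings.push(api.name + '/' + stage.stageName);
        }
    }
    return findings;
}
```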

Is it possible to audit all sub-accounts from master account?

Hi!
Thanks for the tool, we found it really useful for our cloud env.

I wonder if it's possible to audit all sub-accounts from the master (payer) account by assuming a role in them, instead of having to create an IAM user in every single sub-account and assigning the security audit policy to it.
This approach would be much easier to manage and would also cover new sub-accounts every time they are created.

Thank you!
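A sketch of the assume-role approach described above (the role and session names are hypothetical):

```javascript
// Sketch: build STS AssumeRole parameters for a shared audit role in each
// sub-account, rather than creating an IAM user per account.
function assumeRoleParams(accountIds, roleName) {
    return accountIds.map(function(id) {
        return {
            RoleArn: 'arn:aws:iam::' + id + ':role/' + roleName,
            RoleSessionName: 'cloudsploit-scan'
        };
    });
}
```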

SSM reporting only on the first 10 parameters per region

When running cloudsploit we are only getting the first 10 SSM parameters per region.

aws --version
aws-cli/1.11.133 Python/2.7.5 Linux/3.10.0-693.11.6.el7.x86_64 botocore/1.6.0

pip freeze
Arpeggio==1.9.0
attrs==18.2.0
awscli==1.11.133
AWSScout2==3.2.1
Babel==0.9.6
backports.ssl-match-hostname==3.5.0.1
boto==2.45.0
boto3==1.4.6
botocore==1.6.0
certifi==2018.8.24
chardet==2.2.1
cloud-init==0.7.9
colorama==0.3.2
configobj==4.7.2
cov-core==1.15.0
coverage==3.6b3
decorator==3.4.0
docutils==0.11
enum34==1.1.6
ethtool==0.8
futures==3.0.5
iampoliciesgonewild==1.0.6.2
iniparse==0.4
invoke==1.2.0
ipaddress==1.0.16
IPy==0.75
Jinja2==2.7.2
jmespath==0.9.0
jsonpatch==1.2
jsonpointer==1.9
kitchen==1.1.1
lockfile==0.9.1
lxml==3.2.1
M2Crypto==0.21.1
Magic-file-extensions==0.2
MarkupSafe==0.11
netaddr==0.7.5
nose==1.3.7
nose2==0.6.5
opinel==3.3.4
parver==0.1.1
pciutils==1.7.3
perf==0.1
Pillow==2.0.0
pipenv==2018.7.1
policycoreutils-default-encoding==0.1
prettytable==0.7.2
pyasn1==0.1.9
pycurl==7.19.0
pygobject==3.22.0
pygpgme==0.3
pyinotify==0.9.4
pyjq==2.2.0
pyliblzma==0.5.3
pyOpenSSL==0.13.1
pyserial==2.6
pystache==0.5.3
python-daemon==1.6
python-dateutil==1.5
python-dmidecode==3.10.13
python-linux-procfs==0.4.9
pyudev==0.15
pyxattr==0.5.1
PyYAML==4.1
requests==2.6.0
rhnlib==2.5.65
rsa==3.4.1
s3transfer==0.1.10
schedutils==0.4
seobject==0.1
sepolicy==1.1
setuptools-scm==3.1.0
simplejson==3.10.0
six==1.11.0
subscription-manager==1.20.11
typing==3.6.6
uritools==2.2.0
urlgrabber==3.10
urllib3==1.10.2
virtualenv==16.1.0.dev0
virtualenv-clone==0.3.0
yum-metadata-parser==1.1.4

Define code style to make accepting contributions easier

As a contributor, I'd like it to be as easy as possible to have my contributions reviewed and approved. To that end, I'd like to propose adding code style checks as part of the build.

I'm happy to adopt the style that already exists in the repository, so all I'm asking here is whether such a change would be of interest if I do all of the work.

CloudTrail Enabled plugin false positive

Issue

We're currently getting a false positive from the CloudTrail Enabled plugin regarding not having global services enabled.

Context

We have multiple active trails in our AWS account that get funneled to different downstream services. In order to avoid getting repeated global services events from every trail, only 1 trail in our account has IncludeGlobalServiceEvents enabled. The plugin check currently breaks out of the for-loop after finding the first trail that is enabled (isLogging = true). Because of this it is not able to tell that another trail in describeTrails.data does indeed have global services enabled and misreports the configuration error.
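A sketch of a fix (DescribeTrails and GetTrailStatus fields are merged onto one object here for brevity): check all trails rather than stopping at the first one that is logging.

```javascript
// Sketch: pass if ANY logging trail has IncludeGlobalServiceEvents enabled,
// instead of breaking out of the loop at the first logging trail.
function globalEventsCovered(trails) {
    return trails.some(function(t) {
        return t.isLogging && t.IncludeGlobalServiceEvents;
    });
}
```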

Report tests and their relationship to CIS compliance

As a user viewing the output from a CloudSploit scan, I would like to be able to see whether rules violate particular compliance standards, such as CIS, so that I can report compliance levels, focus on the rules I care most about, and ignore the rules I care less about.

I'm creating this issue because I'm willing to implement annotating rules with compliance information, if that would be desirable to the maintainers here. (If it isn't of interest, then I'll avoid doing the work.)

I see two ways to achieve this:
a. add new items in the "compliance" member
b. add IDs to rules (plugins) and externalize the compliance information

My proposal is (b) because it would give anyone a way to add compliance information, including industry/domain-specific rules, without modifying this repo. Then, assuming (b):

  1. For each rule, add a new unique ID attribute; for example, the elbHttpsOnly rule ID would be "elb-https-only" (kebab case)
  2. Create a new "compliance" set that maps rule names to a data structure that describes how the rule maps to the compliance rule.

Error while running scan

I am getting the below error while running a scan against my AWS account

/Users/abhinav/cplodsploit/scans/node_modules/aws-sdk/lib/request.js:31
            throw err;
            ^

RangeError: Maximum call stack size exceeded
    at /Users/abhinav/cplodsploit/scans/index.js:85:16
    at replenish (/Users/abhinav/cplodsploit/scans/node_modules/async/dist/async.js:836:21)
    at /Users/abhinav/cplodsploit/scans/node_modules/async/dist/async.js:842:29
    at /Users/abhinav/cplodsploit/scans/node_modules/async/dist/async.js:804:16
    at /Users/abhinav/cplodsploit/scans/index.js:112:13
    at /Users/abhinav/cplodsploit/scans/plugins/sns/topicPolicies.js:115:4
    at /Users/abhinav/cplodsploit/scans/node_modules/async/dist/async.js:365:16
    at replenish (/Users/abhinav/cplodsploit/scans/node_modules/async/dist/async.js:831:29)
    at /Users/abhinav/cplodsploit/scans/node_modules/async/dist/async.js:842:29
    at /Users/abhinav/cplodsploit/scans/node_modules/async/dist/async.js:804:16

How to pass parameters in helpers.cache service command

I am writing a new plugin, and in it I have to call describeAlarmsForMetric() and need to pass parameters:

var par = {MetricName: p.metricTransformations[0].metricName, Namespace: p.metricTransformations[0].metricNamespace };

So I am writing: helpers.cache(cache, cloudwatchlog, 'describeAlarmsForMetric', function(err, data) {

But I cannot pass the parameters into it. Can you please help me with this? How can I pass parameters into this method?

Thanks in advance for help

The IAM Policy got an issue in the README

It should be:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
            "cloudtrail:DescribeTrails",
            "cloudfront:ListDistributions",
            "s3:GetBucketVersioning",
            "s3:ListAllMyBuckets",
            "s3:GetBucketAcl",
            "ec2:DescribeAccountAttributes",
            "ec2:DescribeAddresses",
            "ec2:DescribeInstances",
            "ec2:DescribeSecurityGroups",
            "iam:ListServerCertificates",
            "iam:GenerateCredentialReport",
            "iam:GetCredentialReport",
            "iam:GetAccountPasswordPolicy",
            "iam:GetAccountSummary",
            "iam:GetAccessKeyLastUsed",
            "iam:GetGroup",
            "iam:ListMFADevices",
            "iam:ListUsers",
            "iam:ListGroups",
            "iam:ListAccessKeys",
            "iam:ListVirtualMFADevices"
            "iam:ListSSHPublicKeys",
            "elasticloadbalancing:DescribeLoadBalancerPolicies",
            "elasticloadbalancing:DescribeLoadBalancers",
            "route53domains:ListDomains",
            "rds:DescribeDBInstances",
            "kms:ListKeys"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

Error in here:

            "iam:ListAccessKeys",
            "iam:ListVirtualMFADevices"
            "iam:ListSSHPublicKeys",

Elasticsearch Logging Scan Produces False Negatives

The test for Elasticsearch logging provides a different shape of data than the API actually provides. Here is a sample from one of the tests:

{
  DomainStatus: {
    DomainName: 'mydomain',
    ARN: 'arn:1234',
    LogPublishingOptions: {
      Enabled: true,
      CloudWatchLogsLogGroupArn: 'arn:1234'
    }
  }
}

But the log publishing options are distinguished by kind of log, like this (anonymized from "Live Run" Raw Response):

"LogPublishingOptions": {
  "ES_APPLICATION_LOGS": {
    "CloudWatchLogsLogGroupArn": "arn:aws:logs:<REGION>:<ACCOUNT>:log-group:<LOG_GROUP>:*",
    "Enabled": true
  },
  "INDEX_SLOW_LOGS": {
    "CloudWatchLogsLogGroupArn": "arn:aws:logs:<REGION>:<ACCOUNT>:log-group:<LOG_GROUP>:*",
    "Enabled": true
  },
  "SEARCH_SLOW_LOGS": {
    "CloudWatchLogsLogGroupArn": "arn:aws:logs:<REGION>:<ACCOUNT>:log-group:<LOG_GROUP>:*",
    "Enabled": true
  }
}

I expect this is causing every logging scan to fail for every user of CloudSploit with an ES resource.
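A check matching the real response shape might look like this (illustrative only, not the plugin's actual code):

```javascript
// Sketch: iterate over the per-log-type map in LogPublishingOptions instead
// of reading a single top-level Enabled flag.
function enabledLogTypes(logPublishingOptions) {
    return Object.keys(logPublishingOptions || {}).filter(function(type) {
        return logPublishingOptions[type].Enabled;
    });
}
```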

Travis is not running all of the unit tests

The repo currently has 32 unit tests, but travis is only running 12 of them. This means that work to ensure that things cannot easily break is not really being used. Fix the travis build so that the tests actually run there.

enhancement idea - oauth + detect-aws-credentials.py + webhook + email report?

Tossing more ideas together... Sorry this is a messy post and is really just note-taking form and not truly organized clearly

1 - Use GitHub OAuth with repo,read:repo_hook, write:repo_hook,user:email scopes
2 - Borrow Yelp's pre-commit-hooks-detect_aws_credentials.py logic
3 - Glue that logic into new webhooks which apply the logic from detect_aws_credentials and email the user that they've pushed AWS credentials. Here's an out-of-the-box Node.js webhook handler app...

GitHub notes also:

Note: If you are building a new integration, you should build it as webhook. We suggest creating an OAuth application to automatically install and manage your users’ webhooks. We will no longer be accepting new services to the github-services repository.

Other blah note links https://developer.github.com/v3/git/commits/#get-a-commit

Error while running

I tried to install this using npm install

npm install
npm http GET https://registry.npmjs.org/async
npm http GET https://registry.npmjs.org/aws-sdk
npm http 304 https://registry.npmjs.org/async
npm http 304 https://registry.npmjs.org/aws-sdk
npm http GET https://registry.npmjs.org/sax/1.1.5
npm http GET https://registry.npmjs.org/xmlbuilder/2.6.2
npm http GET https://registry.npmjs.org/xml2js/0.4.15
npm http 304 https://registry.npmjs.org/sax/1.1.5
npm http 304 https://registry.npmjs.org/xml2js/0.4.15
npm http 304 https://registry.npmjs.org/xmlbuilder/2.6.2
npm http GET https://registry.npmjs.org/lodash
npm http 304 https://registry.npmjs.org/lodash
[email protected] node_modules/async

[email protected] node_modules/aws-sdk
├── [email protected]
├── [email protected]
└── [email protected] ([email protected])

node index.js

module.js:340
throw err;
^
Error: Cannot find module '/home/aaaaa/scans/../../cloudsploit-secure/scan-test-credentials.json'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object. (/home/keerthi/scans/index.js:10:17)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Function.Module.runMain (module.js:497:10)
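The error above means index.js could not find the credentials file it requires at ../../cloudsploit-secure/scan-test-credentials.json (relative to the scans directory). A minimal sketch of such a file, assuming the standard AWS SDK credential field names (the exact keys index.js expects may differ in your version):

```json
{
    "accessKeyId": "YOUR_ACCESS_KEY_ID",
    "secretAccessKey": "YOUR_SECRET_ACCESS_KEY"
}
```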

AWS Configuration Recorder Status False Positive - "Pending"

https://github.com/cloudsploit/scans/blob/master/plugins/aws/configservice/configServiceEnabled.js#L66

This scan, which checks that AWS Config is recording and delivering properly, returns the warning message "Config Service is configured, and recording, but not delivering properly" when the lastStatus field returns "Pending". This seems to be a timing issue where the scan catches the recorder while delivery is still pending.

According to AWS, the ConfigurationRecordersStatus will return "Pending" when the recorder has just started, before it returns "Success" or "Failed".

We are getting notified when this scan returns "WARN" because it returned a "Pending" status.

Can logic be added to this scan to account for this status, in case the scan happens to run while the recorder's delivery is still pending?
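One possible shape for that logic, as a hedged sketch rather than the plugin's actual code (status codes follow the project's 0=OK, 1=WARN, 2=FAIL convention; the status strings are compared case-insensitively since the API may report "Pending"/"Success"/"Failure"):

```javascript
// Hypothetical sketch: treat a "Pending" delivery status as OK with an
// informational message, instead of warning, since the recorder has
// started but has not yet reported Success or Failure.
function deliveryResult(lastStatus) {
  var status = String(lastStatus).toUpperCase();
  if (status === 'SUCCESS') {
    return { status: 0, message: 'Config Service is configured, recording, and delivering properly' };
  }
  if (status === 'PENDING') {
    return { status: 0, message: 'Config Service delivery status is pending' };
  }
  return { status: 1, message: 'Config Service is configured, and recording, but not delivering properly' };
}
```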

sqs.getQueueAttributes fails when queue does not have SSE enabled

https://github.com/cloudsploit/scans/blob/924444b979b3f6298462c7eb13dc962cc28546dd/collectors/sqs/getQueueAttributes.js#L15

We are observing an issue where the SQS plugin returns UNKNOWN for queues that don't have server-side encryption enabled, even when performing the "Cross Account Access" check.

I suspect this is because requesting the KmsMasterKeyId attribute in the getQueueAttributes API call is not allowed on queues without SSE. Using the AWS CLI, for example, I get the following error:

An error occurred (InvalidAttributeName) when calling the GetQueueAttributes operation: Unknown Attribute KmsMasterKeyId
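One possible workaround, sketched here as an assumption about how the collector could be changed (not the project's actual code): request QueueAttributeNames ['All'], which the GetQueueAttributes API accepts, so a queue without SSE simply omits KmsMasterKeyId from the response instead of the call failing.

```javascript
// Hypothetical sketch: request all attributes rather than naming
// KmsMasterKeyId explicitly, which errors on non-SSE queues.
function buildGetQueueAttributesParams(queueUrl) {
  return {
    QueueUrl: queueUrl,
    AttributeNames: ['All']
  };
}

function hasSse(attributes) {
  // On a queue without SSE the attribute is simply absent.
  return Boolean(attributes && attributes.KmsMasterKeyId);
}
```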

which license?

In package.json the license is 'ISC'. The LICENSE file is GPLv3. Which one is being used for the project?

Terminated Instances Not Excluded from Scans

Terminated instances are, by design, no longer in a VPC. As a result they produce false positives for the "Detect EC2 Classic Instances" scan. detectClassic.js should filter out instances in the "terminated" state.
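The filter could look something like the sketch below. This is an illustrative guess at the check, assuming the usual describeInstances shape where EC2-Classic instances lack a VpcId:

```javascript
// Hypothetical sketch for detectClassic.js: skip terminated instances,
// which are no longer attached to a VPC and would otherwise look Classic.
function isClassicCandidate(instance) {
  if (instance.State && instance.State.Name === 'terminated') return false;
  // EC2-Classic instances have no VpcId in describeInstances output.
  return !instance.VpcId;
}
```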

RDS Read Replicas Cannot Use Automatic Backups

According to the AWS docs:

Yes, you can create a snapshot of a PostgreSQL Read Replica, but you cannot enable automatic backups

In the rdsAutomatedBackups plugin, we should skip such instances.

This can be done by checking for the presence of ReadReplicaSourceDBInstanceIdentifier in the RDS describeDBInstances call.
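That check is a one-liner; a minimal sketch, assuming the standard describeDBInstances field name:

```javascript
// Hypothetical sketch for rdsAutomatedBackups: read replicas carry a
// ReadReplicaSourceDBInstanceIdentifier and cannot use automated backups,
// so they should be skipped rather than flagged.
function shouldCheckBackups(dbInstance) {
  return !dbInstance.ReadReplicaSourceDBInstanceIdentifier;
}
```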

Callback errors after update

I had been using the older version (last Dec) without issue for a while. I did a pull to update, and am now getting the following errors:

cloudsploit/scans/plugins/cloudtrail/cloudtrailBucketAccessLogging.js:66
callback(null, results, source);
^

TypeError: callback is not a function
at /Users/philcox/src/cloudsploit/scans/plugins/cloudtrail/cloudtrailBucketAccessLogging.js:66:4
at /Users/philcox/src/cloudsploit/scans/node_modules/async/dist/async.js:365:16
at replenish (/Users/philcox/src/cloudsploit/scans/node_modules/async/dist/async.js:831:29)
at /Users/philcox/src/cloudsploit/scans/node_modules/async/dist/async.js:842:29
at /Users/philcox/src/cloudsploit/scans/node_modules/async/dist/async.js:804:16
at /Users/philcox/src/cloudsploit/scans/plugins/cloudtrail/cloudtrailBucketAccessLogging.js:22:32
at /Users/philcox/src/cloudsploit/scans/node_modules/async/dist/async.js:3339:20
at replenish (/Users/philcox/src/cloudsploit/scans/node_modules/async/dist/async.js:836:21)
at /Users/philcox/src/cloudsploit/scans/node_modules/async/dist/async.js:842:29
at /Users/philcox/src/cloudsploit/scans/node_modules/async/dist/async.js:804:16

Any ideas?

For Ref:
24e0236 refs/remotes/origin/ap-south-1
525a34e refs/remotes/origin/master
1e0d307 refs/remotes/origin/plugins-to-tests

Consider updating IAM Role Example to include actual Managed Policies

"AuditRole": {
    "Type": "AWS::IAM::Role",
    "Properties": {
        "Path": "/",
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Principal": { "AWS": "arn:aws:iam::057012691312:root" },
                    "Action": "sts:AssumeRole"
                }
            ]
        },
        "ManagedPolicyArns": ["arn:aws:iam::aws:policy/SecurityAudit"]
    }
}

Incomplete documentation about lambda function

I am trying to run this application as a Lambda function. My Lambda function's API endpoint gives the following output, and I am not sure what to do with it. Documentation on this would be helpful.

{
    "code": 0,
    "data": {
        "plugins": [
            {
                "title": "CloudTrail Bucket Delete Policy",
                "query": "cloudtrailBucketDelete",
                "description": "Ensures CloudTrail logging bucket has a policy to prevent deletion of logs without an MFA token"
            },
            {
                "title": "CloudTrail Enabled",
                "query": "cloudtrailEnabled",
                "description": "Ensures CloudTrail is enabled for all regions within an account"
            },
            {
                "title": "Account Limits",
                "query": "accountLimits",
                "description": "Determine if the number of resources is close to the AWS per-account limit"
            },
            {
                "title": "Security Groups",
                "query": "securityGroups",
                "description": "Determine if sensitive ports are open to all source addresses"
            },
            {
                "title": "Certificate Expiry",
                "query": "certificateExpiry",
                "description": "Detect upcoming expiration of certificates used with ELBs"
            },
            {
                "title": "Insecure Ciphers",
                "query": "insecureCiphers",
                "description": "Detect use of insecure ciphers on ELBs"
            },
            {
                "title": "Password Policy",
                "query": "passwordPolicy",
                "description": "Ensures a strong password policy is setup for the account"
            },
            {
                "title": "Root Account Security",
                "query": "rootAccountSecurity",
                "description": "Ensures a multi-factor authentication device is enabled for the root account and that no access keys are present"
            },
            {
                "title": "Users MFA Enabled",
                "query": "usersMfaEnabled",
                "description": "Ensures a multi-factor authentication device is enabled for all users within the account"
            },
            {
                "title": "Access Keys",
                "query": "accessKeys",
                "description": "Ensures access keys are properly rotated and audited"
            },
            {
                "title": "Group Security",
                "query": "groupSecurity",
                "description": "Ensures groups contain users and policies"
            },
            {
                "title": "Detect EC2 Classic",
                "query": "detectClassic",
                "description": "Ensures AWS VPC is being used instead of EC2 Classic"
            },
            {
                "title": "S3 Buckets",
                "query": "s3Buckets",
                "description": "Ensures S3 buckets use proper policies and access controls"
            },
            {
                "title": "Domain Security",
                "query": "domainSecurity",
                "description": "Ensures domains are properly configured in Route53"
            },
            {
                "title": "Database Security",
                "query": "databaseSecurity",
                "description": "Ensures databases are properly configured in RDS"
            }
        ]
    }
}

plugins/iam/passwordExpiration.js fails even when passwords are set to expire

L34-58 treat data.PasswordPolicy.ExpirePasswords as an integer when it is actually a boolean.

Here's what I see in my account using the AWS CLI:

[cchalfant:~]$ aws iam get-account-password-policy
{
    "PasswordPolicy": {
        "AllowUsersToChangePassword": true,
        "RequireLowercaseCharacters": true,
        "RequireUppercaseCharacters": true,
        "MinimumPasswordLength": 10,
        "RequireNumbers": true,
        "PasswordReusePrevention": 24,
        "HardExpiry": false,
        "RequireSymbols": true,
        "MaxPasswordAge": 90,
        "ExpirePasswords": true
    }
}

But 1.11 fails in our account with the following output:

IAM Password Expiration N/A global      FAIL    Password expiration of: true days is less than 90

Relevant code snippet:

if (!data.PasswordPolicy.ExpirePasswords) {
                results.push({
                    status: 2,
                    message: 'Password expiration policy is not set to expire passwords',
                    region: 'global'
                });
            } else if (data.PasswordPolicy.ExpirePasswords < 90) {
                results.push({
                    status: 2,
                    message: 'Password expiration of: ' + data.PasswordPolicy.ExpirePasswords + ' days is less than 90',
                    region: 'global'
                });
            } else if (data.PasswordPolicy.ExpirePasswords < 24) {
                results.push({
                    status: 1,
                    message: 'Password expiration of: ' + data.PasswordPolicy.ExpirePasswords + ' days is less than 180',
                    region: 'global'
                });
            } else {
                results.push({
                    status: 0,
                    message: 'Password expiration of: ' + data.PasswordPolicy.ExpirePasswords + ' passwords is suitable',
                    region: 'global'
                });
            }
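A corrected sketch would branch on the boolean flag and then use MaxPasswordAge for the day count. This is illustrative only; the 90-day threshold here is an assumption, not the plugin's actual policy, and the real fix would keep the project's intended thresholds:

```javascript
// Hypothetical corrected sketch: ExpirePasswords is a boolean, so the
// numeric comparisons must use MaxPasswordAge (an integer day count).
function passwordExpirationResult(policy) {
  if (!policy.ExpirePasswords) {
    return { status: 2, message: 'Password expiration policy is not set to expire passwords', region: 'global' };
  }
  var days = policy.MaxPasswordAge;
  if (days > 90) {
    return { status: 1, message: 'Password expiration of: ' + days + ' days is greater than 90', region: 'global' };
  }
  return { status: 0, message: 'Password expiration of: ' + days + ' days is suitable', region: 'global' };
}
```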

Add support for http_proxy

Suggested addition to collect.js:

// ADDED PROXY CONFIGURATION
if (process.env.http_proxy) {
    console.log('INFO: Setting proxy to [' + process.env.http_proxy + '] from the environment variable http_proxy...');
    var proxy = require('proxy-agent');
    AWS.config.update({
        httpOptions: { agent: proxy(process.env.http_proxy) }
    });
}

monitoringMetrics does not match return values from AWS CLI

monitoringMetrics.js looks for specific patterns in order to match metrics. However, a number of those patterns do not match how the AWS CLI actually returns data, which causes false positives for failed best practices. Some of the patterns are clearly wrong: they contain an extra quote or curly brace that could never appear in valid output.

Support AWS_CONTAINER_CREDENTIALS_RELATIVE_URI for running in ECS?

The IAM role system for running in ECS makes use of an environment variable, AWS_CONTAINER_CREDENTIALS_RELATIVE_URI.

From http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html, performing:

curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI

will produce:

{
    "AccessKeyId": "ACCESS_KEY_ID",
    "Expiration": "EXPIRATION_DATE",
    "RoleArn": "TASK_ROLE_ARN",
    "SecretAccessKey": "SECRET_ACCESS_KEY",
    "Token": "SECURITY_TOKEN_STRING"
}

This could then be directly assigned to the AWSConfig variable.

Could this method of obtaining creds be added as a default in index.js?

Thanks!
