
riotgames / cloud-inquisitor

Enforce ownership and data security within AWS

Home Page: https://cloud-inquisitor.readthedocs.io/

License: Apache License 2.0

Makefile 0.78% Python 61.52% Batchfile 0.08% Mako 0.04% HTML 15.23% Shell 0.13% JavaScript 20.96% CSS 1.26%

cloud-inquisitor's People

Contributors

aretusa, bunjiboys, comerford, gwcmk, http500, huit-tvachon, markofu, mrsecure, riot-gabe, riot-jetaylor, rnikoopour, them0ng00se, tomvachon


cloud-inquisitor's Issues

Documentation Issues

This is a list of the known documentation issues discovered in the debugging session:

  1. SQS Queue documentation isn't present beyond the IAM setup. FIFO queues are required but not documented anywhere.

  2. Plugin usage/installation procedures are unclear

  3. IAM policies are missing; see #42

Troubleshooting Docs:

  1. Debug logging requires restart after being enabled
  2. Documentation of manage.py routes (e.g. scheduler) would be useful for testing

Customization of Dashboard

We don't allow public IPs (by design, not by policy). As such, valuable real estate is taken up by a graph I don't care to see. Dashboard customization would help here.

Google Analytics Tags are Present

Hi All,

It seems that you are wired for Google Analytics tracking. As an end user I would be very concerned by this. Can you either clarify its use or remove it?

Support auto-registration of missing default ConfigOptions on startup

Proposal 3 - DB Auto-Register configuration defaults on startup.

Some of the code already supports this. The ConfigOption setup logic already avoids setting defaults when the option already exists in config_items table.

This would have the benefits of:

  1. Removing a manual step when initially deploying the project (python3 manage.py setup --headless -f localhost)
    • note: the -f/--fqdn value is not currently used anywhere in the project, as far as I can tell, so we don't have any must-specify db config options.
  2. Removing a manual step that is not strictly required, but /should/ be run to pick up ConfigOption defaults for new entries after an upgrade (python3 manage.py setup --headless -f localhost)
  3. Future: Automatically registering new configuration defaults if somebody adds a custom plugin using the plugin architecture.
  4. Allowing us to simplify our code that has duplicate settings. We could then confidently assume all defaults have been set according to their ConfigOption() sections, and get rid of the duplicated settings in the many places where we call app.dbconfig.get(option, namespace, default)
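A minimal sketch of what the auto-registration could look like on startup. All names here are illustrative (a plain dict keyed by (namespace, name) stands in for the config_items table), not the project's actual API:

```python
from collections import namedtuple

# Illustrative stand-in mirroring the ConfigOption name used above
ConfigOption = namedtuple('ConfigOption', 'namespace name default')

# Defaults declared by core code and by any installed plugins
DEFAULT_OPTIONS = [
    ConfigOption('log', 'log_level', 'INFO'),
    ConfigOption('api', 'port', 5000),
]

def auto_register_defaults(config_items, options=DEFAULT_OPTIONS):
    """Insert any missing defaults on startup; existing values always win,
    matching the existing ConfigOption setup behavior described above."""
    added = []
    for opt in options:
        key = (opt.namespace, opt.name)
        if key not in config_items:
            config_items[key] = opt.default
            added.append(key)
    return added
```

Called once during app startup, something like this would replace the manual `python3 manage.py setup --headless` step for both fresh installs and upgrades.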

Full docker support for separate containers

We would like to investigate having Docker support available so that people can run the individual parts in docker containers if they want to. This means that the API, scheduler and frontend would all be in their separate containers.

Flask db.session is not thread-safe (scheduled tasks)

After wondering about SQLAlchemy sessions and transaction scopes, I wondered about the possibility of multiple transactions on db.Model objects being run at the same time.

Documentation suggests that Flask handles this by only processing one web request at a time (e.g., submitting config changes), which is safe.

But our additional scheduled tasks of collectors and auditors should not share a db session. A session should be limited to the scope it serves, so that concurrent tasks cannot step on each other's transactions.

Effectively, I think concurrent db.session.add() calls and subsequent commits and rollbacks might end up behaving unexpectedly, committing more than expected or rolling back more than expected.

From: http://docs.sqlalchemy.org/en/latest/orm/session_basics.html
"The Session is very much intended to be used in a non-concurrent fashion, which usually means in only one thread at a time."

Unable to locate the SQSScheduler scheduler plugin

I built a Packer AMI using commit: d7f56e3
AMI: ami-996372fd (eu-west-2)

The AMI builds fine and everything appears to be working except for the SQSScheduler:

Output from: /var/log/supervisor/cinq-worker_0-stdout---supervisor-X

Traceback (most recent call last):
  File "/opt/cinq-backend/manage.py", line 32, in <module>
    manager.run()
  File "/opt/pyenv/lib/python3.5/site-packages/flask_script/__init__.py", line 417, in run
    result = self.handle(argv[0], argv[1:])
  File "/opt/pyenv/lib/python3.5/site-packages/flask_script/__init__.py", line 386, in handle
    res = handle(*args, **config)
  File "/opt/pyenv/lib/python3.5/site-packages/flask_script/commands.py", line 216, in __call__
    return self.run(*args, **kwargs)
  File "/opt/cinq-backend/cloud_inquisitor/plugins/commands/scheduler.py", line 97, in run
    scheduler = self.scheduler_plugins[self.active_scheduler]()
KeyError: 'SQSScheduler'

Let me know what else you need here.

Undefined names APP_DEBUG and APP_USE_USER_DATA

Where are these two variables defined? Are they environment variables that need to be accessed like: os.getenv('APP_DEBUG', False)?

flake8 testing of https://github.com/RiotGames/cloud-inquisitor on Python 3.6.3

$ flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics

./packer/files/backend-settings.py:10:9: F821 undefined name 'APP_DEBUG'
DEBUG = APP_DEBUG
        ^

./packer/files/backend-settings.py:13:17: F821 undefined name 'APP_USE_USER_DATA'
USE_USER_DATA = APP_USE_USER_DATA
                ^
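If these are indeed meant to come from the environment (an assumption; the Packer template may instead substitute them at image-build time), the settings file could read them defensively, with a small helper so flake8 sees defined names:

```python
import os

def env_flag(name, default=False):
    """Read a boolean flag from the environment ('true'/'1' mean True;
    anything else, including unset, falls back to default)."""
    return os.getenv(name, str(default)).lower() in ('true', '1')

# Assumption: APP_DEBUG / APP_USE_USER_DATA arrive as environment
# variables rather than being templated in at build time.
DEBUG = env_flag('APP_DEBUG')
USE_USER_DATA = env_flag('APP_USE_USER_DATA')
```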

Refactor notification plugins to a common API

In order to be able to split out the notification plugins to separate repos / standalone packages we need to refactor the existing notifiers (slack and email) so there is a unified API for sending a notification, regardless of the type of plugin being used.
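As a sketch of what the unified API could look like (class and method names here are hypothetical, not the project's actual interface):

```python
from abc import ABC, abstractmethod

class BaseNotifier(ABC):
    """Every notification plugin exposes the same entry point, so callers
    never branch on the plugin type."""
    name = None  # e.g. 'slack', 'email'

    @abstractmethod
    def notify(self, recipients, subject, body):
        """Deliver one notification; transport details are plugin-specific."""

class SlackNotifier(BaseNotifier):
    name = 'slack'

    def notify(self, recipients, subject, body):
        # A real implementation would call the Slack API; for illustration
        # we just return what would have been sent.
        return [(self.name, r, subject) for r in recipients]
```

With this shape, the Slack and email notifiers can move to standalone packages that each register one BaseNotifier subclass.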

Email Template Customization

It would be good to be able to customize the email templates. For instance, I have the Required Tag auditor on, but it's in collect-only mode. The message at the top of the email says resources will be killed within one month if not tagged correctly.

Two problems with this use case:

  1. I don't have it in destructive mode
  2. I didn't set that timeframe and it should be derived from a setting.

Domain Hijack Auditor - Straggling NS entries

Sometimes when people remove sub-domains they do not purge the NS records from the parent zone, which lets someone take over the DNS record if they can set up the zone on one of the referenced NS servers.

STS Token Support

Could we get STS token support so permanent AWS API keys can be avoided?

Cheers

Improve RBAC feature

  • Improve documentation
  • Improve workflow (e.g. auto-fill for Required Role within Accounts in the UI)

Can't join Slack

I think you need to set up one of those apps that creates user accounts for people, e.g. https://streamalert.herokuapp.com/. I don't know how to join your Slack otherwise, as it just prompts me with a username and password and no ability to create a user for your Slack.

Resending Mail Fails with SMTP

Traceback (most recent call last):
  File "/opt/cinq-backend/cloud_inquisitor/plugins/notifiers/email.py", line 95, in send_email
    __send_smtp_email(sender, recipients, subject, html_body, text_body)
  File "/opt/cinq-backend/cloud_inquisitor/plugins/notifiers/email.py", line 193, in __send_smtp_email
    text_part = MIMEText(text_body, 'plain')
  File "/usr/lib/python3.5/email/mime/text.py", line 34, in __init__
    _text.encode('us-ascii')
AttributeError: 'NoneType' object has no attribute 'encode'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/cinq-backend/cloud_inquisitor/plugins/views/emails.py", line 100, in put
    email.message_text
  File "/opt/cinq-backend/cloud_inquisitor/plugins/notifiers/email.py", line 100, in send_email
    raise EmailSendError(ex)
cloud_inquisitor.exceptions.EmailSendError: 'NoneType' object has no attribute 'encode'
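The root cause is `MIMEText(None, 'plain')`: the stored email has no plain-text body. A defensive sketch (the helper name is hypothetical) that substitutes an empty part instead of crashing:

```python
from email.mime.text import MIMEText

def build_text_part(text_body):
    """Avoid the AttributeError above when a resent email has no
    plain-text body; fall back to an empty text/plain part."""
    return MIMEText(text_body if text_body is not None else '', 'plain')
```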

New Auditor: Reports on EIPs and Public IPs

Ability to see a report (filterable by account) displaying EIPs and Public IPs. Eventually this could morph into an 'exposed services' report that could include ELBs, EBs, S3 buckets, etc.

Move templates to database storage

Instead of having the templates in files on disk, we should investigate moving these to a database backed system to allow users to edit the templates themselves.

We would need to find a good way of implementing updates to the templates, ie. the system could still keep a copy on disk and use those as the master templates, allowing users to always easily be able to revert to the original template / import updated templates.

This would resolve #72

AWS Regions List include AWS Edge Locations

CINQ may exhibit errors during collection such as:

botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "https://ec2.eu-north-1.amazonaws.com/"

This is a result of how regions are populated:

https://github.com/RiotGames/cloud-inquisitor/blob/master/backend/cloud_inquisitor/__init__.py#L274

We should consider using only regions marked with "service": "EC2", or have a way of tagging/filtering edge locations so that collectors/auditors aren't operating against them.
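A sketch of that filtering, assuming the parsed region data exposes per-region service lists (the data shape below is illustrative, not the project's actual structure):

```python
# Illustrative shape of parsed region metadata; eu-north-1 appears in the
# source data but, at the time of the issue, offered no EC2 endpoint.
REGIONS = [
    {'region': 'us-east-1', 'services': ['EC2', 'S3']},
    {'region': 'eu-west-2', 'services': ['EC2']},
    {'region': 'eu-north-1', 'services': []},
]

def collectable_regions(regions, service='EC2'):
    """Keep only regions that actually offer the given service, so
    collectors/auditors never hit a nonexistent endpoint."""
    return [r['region'] for r in regions if service in r['services']]
```

Alternatively, boto3 ships endpoint metadata, so something like `boto3.session.Session().get_available_regions('ec2')` could replace the homegrown list entirely.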

UI/UX issues

  1. When the SQS scheduler is disabled (as it is by default), the logs spew that it's missing. It's set as the default scheduler in the config; that should be changed, or the error should be updated.

  2. (Known) Plugins enabled after install require python manage.py setup to be rerun

  3. An install should be minimally functional at start (e.g. use local scheduler, local auth, etc.) as a base set of plugins, to cut down on the SNR for people kicking the tires

  4. Changing parameters of the SQS Scheduler required a full restart of the supervisord-managed processes, or it gave back cryptic "WSDL" errors

  5. IAM permission errors on non-core installs (e.g. ec2:DescribeInstances) are silently ignored in the UI, unlike s3:ListBuckets which fails loudly

Refactor Flask App Creation

Unit testing is currently much heavier than it needs to be, due to needing to work around the top-level Flask application initialization logic that runs on a simple import awsaudits (or any module below it!)

In the current situation, we cannot import any module from the project without making sure we have an entirely valid AWS_AUDITS_SETTINGS file and a database available.

This is mostly due to the run-on-import top-level init logic in awsaudits/__init__.py

I've seen some other large flask projects that will put a lot of this startup logic inside a def create_app() function that gets called with different parameters by manage.py or test classes for integration testing.

Many even go so far as to extract all of the Flask logic out of __init__.py and into an application.py (see, for example, the PyPI portal Flask app).

I'd like us to move much of this app and DB startup logic behind a similar guard method so that we may more easily test the project and improve reconfigurability of logging.

At the same time, we might also reconcile the manage.py run_api_server method of starting the app with manage.py runserver.
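A rough shape of the factory, with a minimal stand-in App class so the sketch runs without Flask installed; in the project this would be flask.Flask, and the function would live in application.py:

```python
class App:
    """Minimal stand-in for flask.Flask, purely for illustration."""
    def __init__(self, name):
        self.name = name
        self.config = {}

def create_app(settings=None, db_uri=None):
    """All of the current run-on-import logic (settings file, DB engine,
    blueprint registration) moves behind this call, so a bare
    `import awsaudits` has no side effects and tests can pass in their
    own settings instead of needing a valid AWS_AUDITS_SETTINGS file."""
    app = App('awsaudits')
    app.config.update(settings or {})
    if db_uri is not None:
        app.config['SQLALCHEMY_DATABASE_URI'] = db_uri
    # blueprints / db init / logging config would be registered here
    return app
```

manage.py and the test suite would then each call create_app() with their own parameters.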

Publish frontend code as a prebuilt zip file

In order to make it easier to install and run Cloud Inquisitor, we'd want to have the frontend code published as a dist zip file for easier installation without having to install development tools and requirements

SQS Scheduler is Getting Duplicate Key Errors

This might be related to why I can't get any data back from the workers...

mysql> select * from scheduler_batches
    -> ;
+--------------------------------------+--------+---------------------+---------------------+
| batch_id                             | status | started             | completed           |
+--------------------------------------+--------+---------------------+---------------------+
| 94ff94b1-c11f-49b9-a60e-2d73a78564c2 |      2 | 2017-11-28 19:22:07 | 2017-11-28 19:22:41 |
| b7d0055f-81c6-442c-b709-25d6966284ae |      2 | 2017-11-28 19:11:51 | 2017-11-28 19:11:56 |
| bb259db6-965d-4803-bed2-0b6ce081f9cb |      2 | 2017-11-28 19:11:52 | 2017-11-28 19:12:24 |
| ce4d0b95-5d04-49bb-9fb4-7a21ed003558 |      2 | 2017-11-28 19:16:30 | 2017-11-28 19:17:17 |
| de990d23-a9fd-4b15-bc96-bac1f2379904 |      2 | 2017-11-28 19:30:17 | 2017-11-28 19:30:41 |
| e860064d-e4e5-40c8-bebf-e879661f873c |      2 | 2017-11-28 19:36:31 | 2017-11-28 19:37:05 |
| ecd43c41-6e86-4000-a094-240dff07154d |      2 | 2017-11-28 19:26:54 | 2017-11-28 19:27:11 |
| f3e4871f-9619-433d-a602-d07157256357 |      2 | 2017-11-28 19:17:12 | 2017-11-28 19:17:47 |
+--------------------------------------+--------+---------------------+---------------------+
8 rows in set (0.00 sec)

mysql> select * from scheduler_jobs;
+--------------------------------------+--------------------------------------+--------+----------------------------------------+
| job_id                               | batch_id                             | status | data                                   |
+--------------------------------------+--------------------------------------+--------+----------------------------------------+
| 6ab13a9d-a8a1-40f4-947c-563309da48cf | e860064d-e4e5-40c8-bebf-e879661f873c |      2 | {"account": "myaccount"} |
| fd55243e-25b8-46e2-a10e-d159b8f96390 | e860064d-e4e5-40c8-bebf-e879661f873c |      2 | {}                                     |
+--------------------------------------+--------------------------------------+--------+----------------------------------------+
[19:41:32] cinq_scheduler_sqs [ERROR] Error when processing worker task
Traceback (most recent call last):
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
    context)
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/engine/default.py", line 470, in do_execute
    cursor.execute(statement, parameters)
  File "/opt/pyenv/lib/python3.5/site-packages/MySQLdb/cursors.py", line 250, in execute
    self.errorhandler(self, exc, value)
  File "/opt/pyenv/lib/python3.5/site-packages/MySQLdb/connections.py", line 50, in defaulterrorhandler
    raise errorvalue
  File "/opt/pyenv/lib/python3.5/site-packages/MySQLdb/cursors.py", line 247, in execute
    res = self._query(query)
  File "/opt/pyenv/lib/python3.5/site-packages/MySQLdb/cursors.py", line 411, in _query
    rowcount = self._do_query(q)
  File "/opt/pyenv/lib/python3.5/site-packages/MySQLdb/cursors.py", line 374, in _do_query
    db.query(q)
  File "/opt/pyenv/lib/python3.5/site-packages/MySQLdb/connections.py", line 277, in query
    _mysql.connection.query(self, query)
_mysql_exceptions.IntegrityError: (1062, "Duplicate entry '6ab13a9d-a8a1-40f4-947c-563309da48cf' for key 'PRIMARY'")

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/pyenv/lib/python3.5/site-packages/cinq_scheduler_sqs/__init__.py", line 317, in send_worker_queue_message
    db.session.commit()
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/orm/scoping.py", line 157, in do
    return getattr(self.registry(), name)(*args, **kwargs)
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 921, in commit
    self.transaction.commit()
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 461, in commit
    self._prepare_impl()
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 441, in _prepare_impl
    self.session.flush()
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 2192, in flush
    self._flush(objects)
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 2312, in _flush
    transaction.rollback(_capture_exception=True)
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/util/compat.py", line 187, in reraise
    raise value
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 2276, in _flush
    flush_context.execute()
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/orm/unitofwork.py", line 389, in execute
    rec.execute(self)
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/orm/unitofwork.py", line 548, in execute
    uow
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/orm/persistence.py", line 181, in save_obj
    mapper, table, insert)
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/orm/persistence.py", line 799, in _emit_insert_statements
    execute(statement, multiparams)
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 945, in execute
    return meth(self, multiparams, params)
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/sql/elements.py", line 263, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1053, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1189, in _execute_context
    context)
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1402, in _handle_dbapi_exception
    exc_info
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/util/compat.py", line 186, in reraise
    raise value.with_traceback(tb)
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
    context)
  File "/opt/pyenv/lib/python3.5/site-packages/sqlalchemy/engine/default.py", line 470, in do_execute
    cursor.execute(statement, parameters)
  File "/opt/pyenv/lib/python3.5/site-packages/MySQLdb/cursors.py", line 250, in execute
    self.errorhandler(self, exc, value)
  File "/opt/pyenv/lib/python3.5/site-packages/MySQLdb/connections.py", line 50, in defaulterrorhandler
    raise errorvalue
  File "/opt/pyenv/lib/python3.5/site-packages/MySQLdb/cursors.py", line 247, in execute
    res = self._query(query)
  File "/opt/pyenv/lib/python3.5/site-packages/MySQLdb/cursors.py", line 411, in _query
    rowcount = self._do_query(q)
  File "/opt/pyenv/lib/python3.5/site-packages/MySQLdb/cursors.py", line 374, in _do_query
    db.query(q)
  File "/opt/pyenv/lib/python3.5/site-packages/MySQLdb/connections.py", line 277, in query
    _mysql.connection.query(self, query)
sqlalchemy.exc.IntegrityError: (_mysql_exceptions.IntegrityError) (1062, "Duplicate entry '6ab13a9d-a8a1-40f4-947c-563309da48cf' for key 'PRIMARY'") [SQL: 'INSERT INTO scheduler_jobs (job_id, batch_id, status, data) VALUES (%s, %s, %s, %s)'] [parameters: ('6ab13a9d-a8a1-40f4-947c-563309da48cf', 'e860064d-e4e5-40c8-bebf-e879661f873c', 0, '{"account": "myaccount"}')]

Workers timing out under load enumerating snapshots

We have ~30 or so accounts in the system. We are seeing workers time out pretty regularly on the AWS region collector. I am seeing it in regions across the globe (including the region the instance lives in). Some of these accounts have a non-trivial number of snapshots, and it seems that's the issue.

Traceback (most recent call last):
  File "/opt/pyenv/lib/python3.5/site-packages/cinq_scheduler_sqs/__init__.py", line 356, in execute_worker
    worker.run()
  File "/opt/pyenv/lib/python3.5/site-packages/cinq_collector_aws/region.py", line 38, in run
    self.update_snapshots()
  File "/opt/cinq-backend/cloud_inquisitor/wrappers.py", line 54, in __call__
    return self.func(*args, **kwargs)
  File "/opt/pyenv/lib/python3.5/site-packages/cinq_collector_aws/region.py", line 277, in update_snapshots
    snapshots = {x.id: x for x in ec2.snapshots.filter(OwnerIds=[self.account.account_number])}
  File "/opt/pyenv/lib/python3.5/site-packages/cinq_collector_aws/region.py", line 277, in <dictcomp>
    snapshots = {x.id: x for x in ec2.snapshots.filter(OwnerIds=[self.account.account_number])}
  File "/opt/pyenv/lib/python3.5/site-packages/boto3/resources/collection.py", line 83, in __iter__
    for page in self.pages():
  File "/opt/pyenv/lib/python3.5/site-packages/boto3/resources/collection.py", line 166, in pages
    for page in pages:
  File "/opt/pyenv/lib/python3.5/site-packages/botocore/paginate.py", line 255, in __iter__
    response = self._make_request(current_kwargs)
  File "/opt/pyenv/lib/python3.5/site-packages/botocore/paginate.py", line 332, in _make_request
    return self._method(**current_kwargs)
  File "/opt/pyenv/lib/python3.5/site-packages/botocore/client.py", line 314, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/opt/pyenv/lib/python3.5/site-packages/botocore/client.py", line 599, in _make_api_call
    operation_model, request_dict)
  File "/opt/pyenv/lib/python3.5/site-packages/botocore/endpoint.py", line 143, in make_request
    return self._send_request(request_dict, operation_model)
  File "/opt/pyenv/lib/python3.5/site-packages/botocore/endpoint.py", line 172, in _send_request
    success_response, exception):
  File "/opt/pyenv/lib/python3.5/site-packages/botocore/endpoint.py", line 265, in _needs_retry
    caught_exception=caught_exception, request_dict=request_dict)
  File "/opt/pyenv/lib/python3.5/site-packages/botocore/hooks.py", line 227, in emit
    return self._emit(event_name, kwargs)
  File "/opt/pyenv/lib/python3.5/site-packages/botocore/hooks.py", line 210, in _emit
    response = handler(**kwargs)
  File "/opt/pyenv/lib/python3.5/site-packages/botocore/retryhandler.py", line 183, in __call__
    if self._checker(attempts, response, caught_exception):
  File "/opt/pyenv/lib/python3.5/site-packages/botocore/retryhandler.py", line 251, in __call__
    caught_exception)
  File "/opt/pyenv/lib/python3.5/site-packages/botocore/retryhandler.py", line 269, in _should_retry
    return self._checker(attempt_number, response, caught_exception)
  File "/opt/pyenv/lib/python3.5/site-packages/botocore/retryhandler.py", line 317, in __call__
    caught_exception)
  File "/opt/pyenv/lib/python3.5/site-packages/botocore/retryhandler.py", line 223, in __call__
    attempt_number, caught_exception)
  File "/opt/pyenv/lib/python3.5/site-packages/botocore/retryhandler.py", line 359, in _check_caught_exception
    raise caught_exception
  File "/opt/pyenv/lib/python3.5/site-packages/botocore/endpoint.py", line 213, in _get_response
    proxies=self.proxies, timeout=self.timeout)
  File "/opt/pyenv/lib/python3.5/site-packages/botocore/vendored/requests/sessions.py", line 605, in send
    r.content
  File "/opt/pyenv/lib/python3.5/site-packages/botocore/vendored/requests/models.py", line 750, in content
    self._content = bytes().join(self.iter_content(CONTENT_CHUNK_SIZE)) or bytes()
  File "/opt/pyenv/lib/python3.5/site-packages/botocore/vendored/requests/models.py", line 673, in generate
    for chunk in self.raw.stream(chunk_size, decode_content=True):
  File "/opt/pyenv/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/response.py", line 303, in stream
    for line in self.read_chunked(amt, decode_content=decode_content):
  File "/opt/pyenv/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/response.py", line 450, in read_chunked
    chunk = self._handle_chunk(amt)
  File "/opt/pyenv/lib/python3.5/site-packages/botocore/vendored/requests/packages/urllib3/response.py", line 420, in _handle_chunk
    returned_chunk = self._fp._safe_read(self.chunk_left)
  File "/usr/lib/python3.5/http/client.py", line 607, in _safe_read
    chunk = self.fp.read(min(amt, MAXAMOUNT))
  File "/usr/lib/python3.5/socket.py", line 575, in readinto
    return self._sock.recv_into(b)
  File "/usr/lib/python3.5/ssl.py", line 929, in recv_into
    return self.read(nbytes, buffer)
  File "/usr/lib/python3.5/ssl.py", line 791, in read
    return self._sslobj.read(len, buffer)
  File "/usr/lib/python3.5/ssl.py", line 575, in read
    v = self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out

Event framework

Implement an event broadcast / listener framework that can be used to schedule tasks based on broadcasting of an event.

Example: When the EC2 collector has completed fetching information for an account/region, we can broadcast an event that would immediately trigger AWS Region based auditors to scan the account/region for issues.
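A minimal broadcast/listener sketch (class and event names are hypothetical):

```python
from collections import defaultdict

class EventBus:
    """Listeners subscribe to named events; emitting an event fans the
    payload out to every subscriber immediately."""
    def __init__(self):
        self._listeners = defaultdict(list)

    def on(self, event, handler):
        """Register a callable to run whenever `event` is emitted."""
        self._listeners[event].append(handler)

    def emit(self, event, **payload):
        """Broadcast `event` with keyword payload to all subscribers."""
        for handler in self._listeners[event]:
            handler(**payload)
```

In the example above, the EC2 collector would emit something like 'collection_complete' with the account/region, and region-based auditors subscribed to that event would run immediately instead of waiting for the next scheduled batch.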

Deleting Accounts Fails

When deleting accounts through either the UI or manage.py:

Traceback (most recent call last):
  File "/opt/cinq-backend/cloud_inquisitor/plugins/commands/accounts.py", line 72, in run
    acct.delete()
AttributeError: 'Account' object has no attribute 'delete'

AttributeError on AWS regional collector when no accounts configured.

The Cinq scheduler will throw AttributeErrors on initial config/startup, since there are no active accounts present:

Traceback (most recent call last):
  File "/opt/pyenv/lib/python3.5/site-packages/cinq_scheduler_standalone/__init__.py", line 247, in execute_aws_region_worker
    worker = cls(**kwargs)
  File "/opt/pyenv/lib/python3.5/site-packages/cinq_collector_aws/region.py", line 32, in __init__
    self.session = get_aws_session(self.account)
  File "/opt/pyenv/lib/python3.5/site-packages/cloud_inquisitor/__init__.py", line 247, in get_aws_session
    account.account_number,
AttributeError: 'NoneType' object has no attribute 'account_number'
[19:21:22] cinq_scheduler_standalone AWS Region Worker EC2 Region Collector/riotu/ap-northeast-2: 'NoneType' object has no attribute 'account_number'
Traceback (most recent call last):
  File "/opt/pyenv/lib/python3.5/site-packages/cinq_scheduler_standalone/__init__.py", line 247, in execute_aws_region_worker
    worker = cls(**kwargs)
  File "/opt/pyenv/lib/python3.5/site-packages/cinq_collector_aws/region.py", line 32, in __init__
    self.session = get_aws_session(self.account)
  File "/opt/pyenv/lib/python3.5/site-packages/cloud_inquisitor/__init__.py", line 247, in get_aws_session
    account.account_number,
AttributeError: 'NoneType' object has no attribute 'account_number'

We should be able to account for having no accounts, or no active accounts.
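A sketch of that guard, with a hypothetical scheduler helper (the real fix would live wherever region workers are enqueued):

```python
from collections import namedtuple

# Hypothetical minimal account record for the sketch; the real Account
# model would expose at least these fields.
Account = namedtuple('Account', 'name enabled')

def schedule_region_workers(accounts, regions):
    """Skip scheduling entirely when there are no active accounts, so a
    fresh install doesn't spawn workers that crash on a None account."""
    active = [a for a in accounts if a is not None and a.enabled]
    if not active:
        return []  # nothing to collect yet
    return [(a.name, region) for a in active for region in regions]
```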

AWS Regions file is blank

On every new build, the AWS regions file is empty. It should be seeded with some regions (or even just one local region) and have a post-process job to refresh it from the JSON ranges page.

Build Frontend for VPC Flow Logs

A basic frontend should be built that offers the following features:

  • Display Account, VPC-ID, VPC-CIDR, and Flow Log Enabled Status
  • Allow searching based on any of these parameters.

Email notification tracking

We would like to have a way to track whether people have read the emails being sent by the auditors. The thought here is to use API gateways / lambda functions that will get accessed when the email is loaded in the browser.

Update auditors to support types

We should update the auditors to have the concept of supported account types, like the collectors.

Something along the lines of

class IAMAuditor(BaseAuditor):
    auditor_type = AuditorTypes.AWS_Account

with auditor_type being one of AWS_Account, AWS_Region, or Global as the currently supported types.

For example, the VPC Flow Log auditor should be an AWS_Region auditor, the IAM auditor an AWS_Account auditor, and Domain Hijacking a Global one.
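Sketched with a stdlib Enum; the enum values and any class names beyond those quoted above are assumptions:

```python
from enum import Enum

class AuditorTypes(Enum):
    AWS_Account = 'aws_account'
    AWS_Region = 'aws_region'
    Global = 'global'

class BaseAuditor:
    auditor_type = None  # each subclass declares its scope

class VPCFlowLogAuditor(BaseAuditor):
    auditor_type = AuditorTypes.AWS_Region

class DomainHijackAuditor(BaseAuditor):
    auditor_type = AuditorTypes.Global

def auditors_for(auditors, scope):
    """Let the scheduler pick only the auditors matching a given scope."""
    return [a for a in auditors if a.auditor_type is scope]
```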

Enhance view of details through Config/CloudTrail

As I find things that were not tagged correctly, I have to constantly go into AWS Config to track down the owner. Since we present the email as the user name, it would be ideal to ingest that from CloudTrail and Config to understand the root cause of a problem.

I understand you use the "Owner" tag for that, but we do not, since we have the detail in CloudTrail and the historical changes in Config.

Newly created accounts not appearing immediately

Currently, if someone adds a new account using the web UI, any existing user sessions will not see the new account until they log out and log back in, as the access checks are done at login.

We need to find a better way: either force a refresh, invalidate all existing sessions on account creation, or simply not cache the list of available accounts in the user session (we need to ensure this won't adversely impact performance).

Import/Export additions

We should have a way to import and export both the configuration settings as well as the account information, to make it easier moving between systems, without having to do a full configuration manually every time.

Ideally, we should offer to encrypt the exported messages with a KMS key, to prevent any sensitive data in the exports being accidentally leaked.

Cannot add new account disabled

When staging a new account, you cannot add it as disabled from the start. There is a validation error that says "enabled cannot be empty".
