ace-ecosystem / ace

Analysis Correlation Engine

License: Apache License 2.0

Python 63.28% CSS 1.07% JavaScript 28.51% HTML 5.40% Shell 1.30% Perl 0.07% Zeek 0.19% YARA 0.09% PowerShell 0.06% Dockerfile 0.02% Vim Script 0.01%

ace's People

Contributors

automationator, johndavisonintdef, karmapenny, krayzpipes, seanmcfeely, unixfreak0037


ace's Issues

Add whois analysis module for domain age

When looking at a hyperlink in an email, an old domain (e.g., five years) is easier to dismiss as a false positive than one registered a day ago.

For example: no-reply[at]webex[dot]com sends an email asking for a login, and the link for the login page is 'i-am-not-webex[dot]com'. The whois information may show the domain is two days old, which may keep an analyst from hitting the FP button too quickly.

Another use is spotting non-corporate email phish. For example, if proxy logs show a referrer of email.google.com and the destination domain was registered two days ago, it may be worth looking into.
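
A minimal sketch of what such a module might compute, using the third-party python-whois package (the wiring into ACE is omitted; names here are illustrative):

    import datetime
    import whois  # third-party python-whois package

    def domain_age_days(domain: str) -> int:
        """Return the domain's age in days based on its whois creation date."""
        record = whois.whois(domain)
        created = record.creation_date
        # python-whois sometimes returns a list of dates; take the earliest
        if isinstance(created, list):
            created = min(created)
        return (datetime.datetime.now() - created).days

    # e.g. flag anything registered within the last week
    if domain_age_days('i-am-not-webex.com') < 7:
        print('recently registered domain: be careful with that FP button')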

SSL issue during ACE install for devices with FQDN different from `hostname` command

During install, the certificate is populated with the following subject alternative names in ssl/installer/install_ssl_certs.sh:

  • localhost
  • $(hostname)
  • 127.0.0.1

The node location (API PREFIX) is populated in lib/saq/__init__.py with API_PREFIX = socket.getfqdn().

If the server (for example, an EC2 instance like ip-10.10.10.10.ec2.internal) has a hostname (ip-10.10.10.10) that differs from its FQDN (ip-10.10.10.10.ec2.internal), this causes errors during manual submissions.

I recommend adding DNS.3 with $(hostname -f) as the value to the ssl install script.
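
Following the existing DNS.1/IP.1 pattern in that script, the addition might look like this (untested sketch):

    echo "DNS.3 = $(hostname -f)" >> intermediate/openssl.temp.cnf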

Handle analysis modules that never return.

We do a lot of really crazy analysis. Sometimes things go wrong and a module either ends up in an infinite loop or gets stuck processing something that will take so long it's never going to return.

We need to be able to handle that from ACE. There is a timeout, but it may not be working the way it's currently designed.
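
One way to enforce a hard stop, sketched outside of ACE's actual engine structure: run the analysis in a separate process and terminate it if it blows past the timeout (function and timeout value are illustrative):

    import multiprocessing

    def run_with_timeout(target, args=(), timeout=300):
        """Run target in its own process; kill it if it never returns."""
        proc = multiprocessing.Process(target=target, args=args)
        proc.start()
        proc.join(timeout)
        if proc.is_alive():
            proc.terminate()  # hard stop for analysis that is stuck
            proc.join()
            raise TimeoutError(f'analysis exceeded {timeout} seconds')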

whois analyzer needs proxy support

I turned this module on to test it, but found that it needs proxy support so that saq.PROXIES (or its future replacement) can be passed to it when a proxy config item is set for the module.

[ERROR] analysis module WhoisAnalyzer failed on WorkTarget(obs:url(http://stupid-bad-domain.example.com/malware/cats),analyis:None,module:None,dep:None) for RootAnalysis(9d163c27-96fb-4720-969f-67dfca12f792) reason timed out
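
A rough sketch of the wiring being asked for, assuming the module's lookup goes over HTTP (the 'use_proxy' config key and the API endpoint are hypothetical, not ACE's actual configuration):

    import requests
    import saq

    def whois_lookup(domain, config_section):
        # use_proxy is a hypothetical per-module config item; saq.PROXIES is
        # the requests-style proxy map ACE builds from its configuration
        use_proxy = config_section.getboolean('use_proxy', fallback=False)
        proxies = saq.PROXIES if use_proxy else None
        return requests.get(f'https://whois-api.example.com/{domain}',
                            proxies=proxies, timeout=30)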

Running `ace add-user` results in TypeError due to issue with encrypted passwords

When adding a user via the ace command line, I receive a TypeError: parser.encrypted_passwords is being iterated over, but it is actually None.

This is after a fresh install:

ace@ace-2:/opt/ace$ ./ace add-user user1 [email protected]
Enter password for user1: 
Confirm password for user1: 
Traceback (most recent call last):
  File "./ace", line 3796, in <module>
    args.func(args)
  File "./ace", line 1458, in add_user
    with get_db_connection() as db:
  File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
    return next(self.gen)
  File "/opt/ace/lib/saq/database/__init__.py", line 327, in get_db_connection
    db = _get_db_connection(*args, **kwargs)
  File "/opt/ace/lib/saq/database/__init__.py", line 271, in _get_db_connection
    'db': _section['database'],
  File "/usr/lib/python3.6/configparser.py", line 1234, in __getitem__
    return self._parser.get(self._name, key)
  File "/usr/lib/python3.6/configparser.py", line 800, in get
    d)
  File "/opt/ace/lib/saq/configuration.py", line 38, in before_get
    if key in parser.encrypted_passwords:
TypeError: argument of type 'NoneType' is not iterable
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/apport_python_hook.py", line 109, in apport_excepthook
    pr.add_proc_info(extraenv=['PYTHONPATH', 'PYTHONHOME'])
  File "/usr/lib/python3/dist-packages/apport/report.py", line 543, in add_proc_info
    self.add_proc_environ(pid, extraenv)
  File "/usr/lib/python3/dist-packages/apport/report.py", line 610, in add_proc_environ
    env = _read_file('environ', dir_fd=proc_pid_fd).replace('\n', '\\n')
  File "/usr/lib/python3/dist-packages/apport/report.py", line 73, in _read_file
    with open(path, 'rb', opener=lambda path, mode: os.open(path, mode, dir_fd=dir_fd)) as fd:
  File "/usr/lib/python3/dist-packages/apport/report.py", line 73, in <lambda>
    with open(path, 'rb', opener=lambda path, mode: os.open(path, mode, dir_fd=dir_fd)) as fd:
TypeError: argument should be integer or None, not list

Original exception was:
Traceback (most recent call last):
  File "./ace", line 3796, in <module>
    args.func(args)
  File "./ace", line 1458, in add_user
    with get_db_connection() as db:
  File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
    return next(self.gen)
  File "/opt/ace/lib/saq/database/__init__.py", line 327, in get_db_connection
    db = _get_db_connection(*args, **kwargs)
  File "/opt/ace/lib/saq/database/__init__.py", line 271, in _get_db_connection
    'db': _section['database'],
  File "/usr/lib/python3.6/configparser.py", line 1234, in __getitem__
    return self._parser.get(self._name, key)
  File "/usr/lib/python3.6/configparser.py", line 800, in get
    d)
  File "/opt/ace/lib/saq/configuration.py", line 38, in before_get
    if key in parser.encrypted_passwords:
TypeError: argument of type 'NoneType' is not iterable
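
A minimal defensive fix at saq/configuration.py line 38 would be to treat an unloaded password store as empty (sketch only):

    # before:
    if key in parser.encrypted_passwords:
        ...
    # after: encrypted_passwords is None until the encrypted config is loaded
    if parser.encrypted_passwords is not None and key in parser.encrypted_passwords:
        ...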

Recursion depth error introduced by non-conventional message id formats

We got an email with this surprisingly valid message ID format:

Message-Id: <[email protected]>+4099DCB271A60273

Scrubbed engine logs:

[2020-08-07 01:58:01,788] [__init__.py:1999] [MainThread] [1642] [INFO] - processing ACE Mailbox Scanner Detection - c6b65fbc-ef2b-4f8f-9ba7-3bcb12fa95c7 mode email (e8b4990e-8e05-4f5e-8d8a-dc6db550b5a4)
[2020-08-07 01:58:01,789] [__init__.py:2376] [MainThread] [1642] [INFO] - starting analysis on RootAnalysis(e8b4990e-8e05-4f5e-8d8a-dc6db550b5a4) with a workload of 1
[2020-08-07 01:58:01,909] [email.py:1713] [MainThread] [1642] [INFO] - scanning email [e8b4990e-8e05-4f5e-8d8a-dc6db550b5a4] <[email protected]>+4099DCB271A60273 from "[email protected]" <[email protected]> to ['[email protected]'] subject =?GB2312?B?xM/NqLuvx+GxvdLSz6lDT0E=?=
[2020-08-07 01:58:02,135] [__init__.py:2742] [MainThread] [1642] [ERROR] - analysis module URLExtractionAnalyzer failed on WorkTarget(obs:file(email.rfc822.unknown_text_html_000),analyis:None,module:None,dep:None) for RootAnalysis(e8b4990e-8e05-4f5e-8d8a-dc6db550b5a4) reason maximum recursion depth exceeded
[2020-08-07 01:58:02,550] [__init__.py:2742] [MainThread] [1642] [ERROR] - analysis module URLExtractionAnalyzer failed on WorkTarget(obs:file(email.rfc822.unknown_text_html_000),analyis:None,module:None,dep:None) for RootAnalysis(e8b4990e-8e05-4f5e-8d8a-dc6db550b5a4) reason maximum recursion depth exceeded
[2020-08-07 01:58:02,650] [__init__.py:2392] [MainThread] [1642] [INFO] - entering final analysis for RootAnalysis(e8b4990e-8e05-4f5e-8d8a-dc6db550b5a4)
[2020-08-07 01:58:02,686] [__init__.py:2078] [MainThread] [1642] [INFO] - completed analysis RootAnalysis(e8b4990e-8e05-4f5e-8d8a-dc6db550b5a4) in 0.90 seconds

Truncated error report:

CURRENT ENGINE: Engine (ace-qa.local - ace)
CURRENT ANALYSIS TARGET: RootAnalysis(e8b4990e-8e05-4f5e-8d8a-dc6db550b5a4)
CURRENT ANALYSIS MODE: email
EXCEPTION
maximum recursion depth exceeded

STACK TRACE
Traceback (most recent call last):
  File "/opt/ace/saq/engine/__init__.py", line 2653, in execute_module_analysis
    analysis_result = analysis_module.analyze(work_item.observable, final_analysis_mode)
  File "/opt/ace/saq/modules/__init__.py", line 846, in analyze
    analysis_result = self.execute_analysis(obj)
  File "/opt/ace/saq/modules/file_analysis.py", line 3423, in execute_analysis
    extracted_urls = find_urls(fp.read(), base_url=base_url)
  File "/usr/local/lib/python3.6/dist-packages/urlfinderlib/urlfinderlib.py", line 28, in find_urls
    urls |= finders.HtmlUrlFinder(blob, base_url=base_url).find_urls()
  File "/usr/local/lib/python3.6/dist-packages/urlfinderlib/finders/html.py", line 37, in find_urls
    urls |= HtmlSoupUrlFinder(soup, base_url=self._base_url).find_urls()
  File "/usr/local/lib/python3.6/dist-packages/urlfinderlib/finders/html.py", line 51, in find_urls
    possible_urls |= {urljoin(self._base_url, u) for u in self._get_base_url_eligible_values()}
  File "/usr/local/lib/python3.6/dist-packages/urlfinderlib/finders/html.py", line 94, in _get_base_url_eligible_values
    values |= self._get_css_url_values()
  File "/usr/local/lib/python3.6/dist-packages/urlfinderlib/finders/html.py", line 104, in _get_css_url_values
    re.findall(r"url\s*\(\s*[\'\"]?(.*?)[\'\"]?\s*\)", str(self._soup), flags=re.IGNORECASE)}
  File "/usr/local/lib/python3.6/dist-packages/bs4/element.py", line 1491, in __unicode__
    return self.decode()
  File "/usr/local/lib/python3.6/dist-packages/bs4/__init__.py", line 745, in decode
    indent_level, eventual_encoding, formatter)
  File "/usr/local/lib/python3.6/dist-packages/bs4/element.py", line 1599, in decode
    indent_contents, eventual_encoding, formatter
  File "/usr/local/lib/python3.6/dist-packages/bs4/element.py", line 1693, in decode_contents
    formatter))
  File "/usr/local/lib/python3.6/dist-packages/bs4/element.py", line 1599, in decode
    indent_contents, eventual_encoding, formatter
  File "/usr/local/lib/python3.6/dist-packages/bs4/element.py", line 1693, in decode_contents
    formatter))

  { removed hundreds of lines }

  File "/usr/local/lib/python3.6/dist-packages/bs4/element.py", line 1599, in decode
    indent_contents, eventual_encoding, formatter
  File "/usr/local/lib/python3.6/dist-packages/bs4/element.py", line 1693, in decode_contents
    formatter))
  File "/usr/local/lib/python3.6/dist-packages/bs4/element.py", line 1599, in decode
    indent_contents, eventual_encoding, formatter
  File "/usr/local/lib/python3.6/dist-packages/bs4/element.py", line 1693, in decode_contents
    formatter))
  File "/usr/local/lib/python3.6/dist-packages/bs4/element.py", line 1599, in decode
    indent_contents, eventual_encoding, formatter
  File "/usr/local/lib/python3.6/dist-packages/bs4/element.py", line 1690, in decode_contents
    text = c.output_ready(formatter)
  File "/usr/local/lib/python3.6/dist-packages/bs4/element.py", line 925, in output_ready
    output = self.format_string(self, formatter)
  File "/usr/local/lib/python3.6/dist-packages/bs4/element.py", line 209, in format_string
    output = formatter.substitute(s)
  File "/usr/local/lib/python3.6/dist-packages/bs4/formatter.py", line 86, in substitute
    from .element import NavigableString
RecursionError: maximum recursion depth exceeded

If I try to manually correlate the message_id, ACE does not find anything. As a test, I removed the message_id normalization; ACE then found the email history but still failed to locate the email in the archive.

¯\_(ツ)_/¯
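
One minimal way to contain the failure in file_analysis.py, assuming it's acceptable to skip URL extraction for a pathological document (a sketch, not ACE's actual code):

    import logging
    from urlfinderlib import find_urls

    def safe_find_urls(data, base_url=None):
        try:
            return find_urls(data, base_url=base_url)
        except RecursionError:
            logging.warning('URL extraction hit the recursion limit; skipping')
            return set()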

Add more descriptive error when storing an encrypted password for an integration that is not enabled.

When trying to store a password in ecs via ace store-config-password ews_my_section.password, ACE spat out an unhelpful error that listed all the config sections, similar to this:

[ERROR] DEFAULT
[ERROR] saq
[ERROR] another_config_section
[ERROR] another_config_section_2
.
.
.
etc.

After receiving the error, I enabled EWS with ace integrations enable ews and the error went away.

A more descriptive error, like 'EWS integration is not enabled', would be helpful if possible.
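
A hypothetical shape for that check, assuming the section prefix (as in ews_my_section) identifies the integration:

    def store_config_password(config, key):
        section, _, option = key.partition('.')
        if section not in config:
            integration = section.split('_')[0]  # hypothetical mapping
            raise KeyError(f"unknown config section {section}: is the "
                           f"'{integration}' integration enabled? try: "
                           f"ace integrations enable {integration}")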

SSL Issue when submitting Manual Analysis after ACE install.

After install of ACE with the default certificates, an SSL error occurs when attempting to submit manual file for analysis.

It appears the node location (aka API_PREFIX) in the database table ace.nodes is the hostname, while the localhost certificate's alternative names are 'localhost' and '127.0.0.1'.

Some sequence of events resets the node location to 'localhost', but I'm not able to reproduce it.

Here is the error message flashed upon manual submission:

unable to submit alert: HTTPSConnectionPool(host='ace-1', port=443): Max retries exceeded with url: /api/analysis/submit (Caused by SSLError(CertificateError("hostname 'ace-1' doesn't match either of 'localhost', '127.0.0.1'",),)) 

In the source code, this error happens when ace_api.submit() is called.

I assume that adding the hostname as an alternative name in the certificate is an appropriate fix since the default cert should only be used on test/local systems.

We can add hostname as a subject alternative name here in installer/install_ssl_certs.sh:

# create the SSL certificates for localhost
( 
    cd ssl/root/ca && \
    cat intermediate/openssl.cnf > intermediate/openssl.temp.cnf && \
    echo 'DNS.1 = localhost' >> intermediate/openssl.temp.cnf && \
    echo 'IP.1 = 127.0.0.1' >> intermediate/openssl.temp.cnf && \
    openssl genrsa -out intermediate/private/localhost.key.pem 2048 && \
    chmod 400 intermediate/private/localhost.key.pem && \
    openssl req -config intermediate/openssl.temp.cnf -key intermediate/private/localhost.key.pem -new -sha256 -out intermediate/csr/localhost.csr.pem -subj '/C=US/ST=KY/L=Covington/O=Integral/OU=Security/CN=localhost/emailAddress=ace@localhost' && \
    openssl ca -passin file:.intermediate_ca.pwd -batch -config intermediate/openssl.temp.cnf -extensions server_cert -days 3649 -notext -md sha256 -in intermediate/csr/localhost.csr.pem -out intermediate/certs/localhost.cert.pem
    chmod 444 intermediate/certs/localhost.cert.pem
) || { echo "unable to create SSL certificate for localhost"; exit 1; }
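
The suggested change would follow the existing DNS.1/IP.1 lines (untested sketch):

    echo "DNS.2 = $(hostname)" >> intermediate/openssl.temp.cnf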

GUI Filters not handling all character sets

The filters do not correctly handle grouping of alerts by observable if the observable value has non-ascii characters in it.

For example, if an analyst tries to group together the five alerts that have hits on this observable:

[screenshot]

The filter displays:

[screenshot]

And the grouping fails (no alerts result from the filter).

event_time TypeError: strptime() argument 1 must be str, not None

 Traceback (most recent call last):
   File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 2292, in wsgi_app
     response = self.full_dispatch_request()
   File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1815, in full_dispatch_request
     rv = self.handle_user_exception(e)
   File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1718, in handle_user_exception
     reraise(exc_type, exc_value, tb)
   File "/usr/local/lib/python3.6/dist-packages/flask/_compat.py", line 35, in reraise
     raise value
   File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1813, in full_dispatch_request
     rv = self.dispatch_request()
   File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1799, in dispatch_request
     return self.view_functions[rule.endpoint](**req.view_args)
   File "/usr/local/lib/python3.6/dist-packages/flask_login/utils.py", line 261, in decorated_view
     return func(*args, **kwargs)
   File "/opt/ace/app/analysis/views.py", line 3273, in edit_event
     event_time = None if event_time in ['', 'None'] else datetime.datetime.strptime(event_time, '%Y-%m-%d %H:%M:%S')
 TypeError: strptime() argument 1 must be str, not None
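
A minimal sketch of the fix at app/analysis/views.py line 3273, treating a missing form value the same as an empty one:

    event_time = None if event_time in (None, '', 'None') else \
        datetime.datetime.strptime(event_time, '%Y-%m-%d %H:%M:%S')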

Take advantage of MS normalized URL field if available in email headers

In the process of modifying urlfinderlib to correctly normalize URLs (mapping bytes from the universal character set to ASCII for IRI -> URL mapping per https://tools.ietf.org/html/rfc3987#section-3.1), I noticed this field in email headers from o365:

X-MS-Exchange-Organization-Persisted-Urls-0: 
	[{"ID":1,"OU":"https:/\\link&shy;edin.&shy;com/redirect?url=REMOVED","IBT":false,"U":"https://linkedin.com/redirect?url=REMOVED","DNR":false,"IAR":false,"LI":{"TN":"a","IC":true,"BF":2,"SI":-1,"EndIndex":-1},"SRCI":1,"CannonicalizedUrl":null,"NormalizedUrl":"https://linkedin.com/redirect?url=REMOVED","DPD":{"UF":"256","OCH":"6062848974812180643","CNT":"1","SL":"1"},"PROC":[]}]

So when parsing email files, check for this header, and if its NormalizedUrl field has a value, create a URL observable from it.
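
A sketch of that parsing step, assuming the message is an email.message.Message and that the -0 suffix means the header may be split across several numbered headers:

    import json

    def extract_ms_normalized_urls(msg):
        """Collect NormalizedUrl values from o365 persisted-urls headers."""
        urls = set()
        for name, value in msg.items():
            if name.lower().startswith('x-ms-exchange-organization-persisted-urls-'):
                try:
                    entries = json.loads(value)
                except ValueError:
                    continue  # malformed header value; skip it
                for entry in entries:
                    if entry.get('NormalizedUrl'):
                        urls.add(entry['NormalizedUrl'])
        return urls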

Expand the crawlphish whitelist/blacklist from domains to URL base/parts

crawlphish has a domain whitelist/blacklist but would allow for greater flexibility if it also had a URL base/part whitelist/blacklist.

An example use case:
LinkedIn has been given common network status because it's seen abundantly. However, we're now seeing people exploit LinkedIn's redirection service with this URL base: https://www.linkedin.com/redirect?url=
We don't necessarily want an indicator alerting on every one of those, but we would like crawlphish to crawl URLs that begin with that base.

Something like this in lib/saq/crawlphish.py before the blacklist check:

    if result.parsed_url.path:
        if self.matches_path_regex(result.parsed_url.path):
            result.reason = REASON_WHITELISTED
            result.filtered = False
            return result

It's a regex check against the path, but you should be able to change it to the full URL without issue.
I would ask that you add additional unit tests to make sure it whitelists/blacklists correctly.

analysis module API and documentation

Make it easier to develop ACE analysis modules. Remove the need to know the internals of ACE, and introduce a well-documented API that is easy to use.

Problem downloading file content in alerts when content pulled from cloudphish cache

For example, if this content was pulled from the cloudphish cache and added to the alert:

[screenshot]

We can no longer download/view/access content for any files nested further down the analysis tree.

Everything only says "whitelist" or "un-whitelist" like so:

[screenshots]

Even the cloudphish content itself is not available in the GUI:
[screenshot]

All files one level above any 'Cloudphish Analysis: ALERT' node in the analysis tree can still be accessed normally.

I'm not sure when this change/break happened in the code base because we did a major update. I can confirm it goes as far back as cad3a01 and is still an issue for us at 86a1aff.

Add display name for analysts.

In cases where account naming must follow certain rules, add a "display name" for the user account that is shown in the GUI so people don't need to memorize account IDs.

Internal server error & Better description of what SCAN_RESULT_PASS means

The link that caused this was shared in the ACE Slack.

Cloudphish Analysis: PASS: None

{'query_result': {'analysis_result': 'PASS',
                  'details': None,
                  'file_name': None,
                  'http_message': None,
                  'http_result': None,
                  'location': None,
                  'result': 'OK',
                  'sha256_content': None,
                  'sha256_url': 'ee2c41bd16fcb8dd29d60ef62bf7afab8b2dd9removedaaeec176391cf5c05',
                  'status': 'ANALYZED',
                  'uuid': '5a8c57ba-ee53-440a-8561-08a1e0b24d1a'},
 'query_start': 1571298306}
Traceback:

  File "/usr/local/lib/python3.6/dist-packages/ace_api.py", line 687, in cloudphish_download
    r = _execute_api_call('cloudphish/download', params=params, stream=True, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/ace_api.py", line 157, in _execute_api_call
    r.raise_for_status()
  File "/data/home/smcfeely/.local/lib/python3.6/site-packages/requests/models.py", line 940, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: INTERNAL SERVER ERROR for url: https://ace.local/api/cloudphish/download?url=https%3A%2F%2Fremoved

get_analysis and analysis_modules with instances is confusing

If you're dealing with an analysis module that is using "instances",
then instead of this

analysis = observable.get_analysis(self.generated_analysis_type)

you have to do this

analysis = observable.get_analysis(self.generated_analysis_type, instance=self.instance)

That should be taken care of behind the scenes instead of forcing the developer to figure it out.
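
A sketch of the suggested convenience, mirroring the names above (not ACE's actual base class):

    class AnalysisModule:
        def get_analysis(self, observable):
            # pass the module's own instance automatically so developers
            # never have to remember the instance= keyword
            return observable.get_analysis(self.generated_analysis_type,
                                           instance=self.instance)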

CLI email remediation not working

$ ace remediation remove-email '<[email protected]>' [email protected]
[INFO] skipping remediator remediation_account_email_company2: company_id does not match.
[INFO] found remediator remediation_account_email_company1 specified for company_id=1
[INFO] loaded remediator account section remediation_account_email_company1
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/engine/base.py", line 1278, in _execute_context
    cursor, statement, parameters, context
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/engine/default.py", line 593, in do_execute
    cursor.execute(statement, parameters)
  File "/usr/local/lib/python3.6/dist-packages/pymysql/cursors.py", line 170, in execute
    result = self._query(query)
  File "/usr/local/lib/python3.6/dist-packages/pymysql/cursors.py", line 328, in _query
    conn.query(q)
  File "/usr/local/lib/python3.6/dist-packages/pymysql/connections.py", line 517, in query
    self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/usr/local/lib/python3.6/dist-packages/pymysql/connections.py", line 732, in _read_query_result
    result.read()
  File "/usr/local/lib/python3.6/dist-packages/pymysql/connections.py", line 1075, in read
    first_packet = self.connection._read_packet()
  File "/usr/local/lib/python3.6/dist-packages/pymysql/connections.py", line 684, in _read_packet
    packet.check_error()
  File "/usr/local/lib/python3.6/dist-packages/pymysql/protocol.py", line 220, in check_error
    err.raise_mysql_exception(self._data)
  File "/usr/local/lib/python3.6/dist-packages/pymysql/err.py", line 109, in raise_mysql_exception
    raise errorclass(errno, errval)
pymysql.err.IntegrityError: (1048, "Column 'user_id' cannot be null")

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/ace/ace", line 4148, in <module>
    args.func(args)
  File "/opt/ace/ace", line 1303, in execute_email_remediation
    execute_remediation(args)
  File "/opt/ace/ace", line 1290, in execute_remediation
    args.comment)
  File "/opt/ace/saq/remediation/__init__.py", line 410, in execute
    datetime.datetime.now()) # lock_time
  File "/opt/ace/saq/remediation/__init__.py", line 444, in request
    saq.db.commit()
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/scoping.py", line 163, in do
    return getattr(self.registry(), name)(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/session.py", line 1042, in commit
    self.transaction.commit()
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/session.py", line 504, in commit
    self._prepare_impl()
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/session.py", line 483, in _prepare_impl
    self.session.flush()
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/session.py", line 2523, in flush
    self._flush(objects)
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/session.py", line 2664, in _flush
    transaction.rollback(_capture_exception=True)
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/util/langhelpers.py", line 69, in __exit__
    exc_value, with_traceback=exc_tb,
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/util/compat.py", line 178, in raise_
    raise exception
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/session.py", line 2624, in _flush
    flush_context.execute()
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/unitofwork.py", line 422, in execute
    rec.execute(self)
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/unitofwork.py", line 589, in execute
    uow,
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/persistence.py", line 245, in save_obj
    insert,
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/persistence.py", line 1136, in _emit_insert_statements
    statement, params
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/engine/base.py", line 1014, in execute
    return meth(self, multiparams, params)
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/engine/base.py", line 1133, in _execute_clauseelement
    distilled_params,
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/engine/base.py", line 1318, in _execute_context
    e, statement, parameters, cursor, context
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/engine/base.py", line 1512, in _handle_dbapi_exception
    sqlalchemy_exception, with_traceback=exc_info[2], from_=e
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/util/compat.py", line 178, in raise_
    raise exception
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/engine/base.py", line 1278, in _execute_context
    cursor, statement, parameters, context
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/engine/default.py", line 593, in do_execute
    cursor.execute(statement, parameters)
  File "/usr/local/lib/python3.6/dist-packages/pymysql/cursors.py", line 170, in execute
    result = self._query(query)
  File "/usr/local/lib/python3.6/dist-packages/pymysql/cursors.py", line 328, in _query
    conn.query(q)
  File "/usr/local/lib/python3.6/dist-packages/pymysql/connections.py", line 517, in query
    self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/usr/local/lib/python3.6/dist-packages/pymysql/connections.py", line 732, in _read_query_result
    result.read()
  File "/usr/local/lib/python3.6/dist-packages/pymysql/connections.py", line 1075, in read
    first_packet = self.connection._read_packet()
  File "/usr/local/lib/python3.6/dist-packages/pymysql/connections.py", line 684, in _read_packet
    packet.check_error()
  File "/usr/local/lib/python3.6/dist-packages/pymysql/protocol.py", line 220, in check_error
    err.raise_mysql_exception(self._data)
  File "/usr/local/lib/python3.6/dist-packages/pymysql/err.py", line 109, in raise_mysql_exception
    raise errorclass(errno, errval)
sqlalchemy.exc.IntegrityError: (pymysql.err.IntegrityError) (1048, "Column 'user_id' cannot be null")
[SQL: INSERT INTO remediation (type, action, user_id, `key`, result, comment, successful, company_id, `lock`, lock_time, status) VALUES (%(type)s, %(action)s, %(user_id)s, %(key)s, %(result)s, %(comment)s, %(successful)s, %(company_id)s, %(lock)s, %(lock_time)s, %(status)s)]
[parameters: {'type': 'email', 'action': 'remove', 'user_id': None, 'key': '<YQXPR01Masdasdasdasdasd7BDD440@YQXPR01MB2535.CANPRD01.PROD.OUTLOOK.COM>:[email protected]', 'result': None, 'comment': 'command line manual request', 'successful': None, 'company_id': 1, 'lock': 'af9fdc67-1116-47d0-af97-8b3482630812', 'lock_time': datetime.datetime(2020, 8, 10, 12, 47, 39, 256199), 'status': 'NEW'}]
(Background on this error at: http://sqlalche.me/e/13/gkpj)

There was a phish in the queue that had no remediation target (perhaps a different issue), so I found this after trying to remediate the phish on the CLI.

It looks like it might be a simple fix, but I'm documenting it and moving on.
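
If it is that simple, the fix might just be to resolve a fallback user before the insert (the helper name here is hypothetical):

    # command-line remediations have no GUI session; attribute them to a
    # designated automation account instead of inserting NULL
    if user_id is None:
        user_id = get_user_id_by_name('automation')  # hypothetical helper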

Phishfry remediation tests broken as of commit a78c6

saq.remediation.phishfry.test tries to import a function that no longer exists after the remediation refactor:

from saq.remediation import initialize_remediation_system_manager, \
                            start_remediation_system_manager, \
                            stop_remediation_system_manager, 

replace saq.PROXIES with get_proxy()

Since saq.PROXIES is set up once at startup, if ecs isn't running at that time it will never be set correctly.

Compute it on the fly instead.
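
A minimal sketch of what get_proxy() could look like, reading the (possibly ecs-decrypted) config on every call; the section and key names are illustrative:

    import saq

    def get_proxy():
        section = saq.CONFIG['proxy']  # hypothetical section name
        if not section.get('host'):
            return {}
        url = f"http://{section['host']}:{section['port']}"
        return {'http': url, 'https': url}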

How to correctly set the timezone for query based hunters?

By default, the hunting service queries instances with timezones set to UTC. For our situation, this causes the Splunk hunter to look for events indexed in the future.

I started to modify SplunkHunt to convert search times into a configured timezone, but after looking further, it appears this may belong in QueryHunt so that search-time updates in the hunt database stay consistent?
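
For reference, the conversion I started looks roughly like this (where the timezone name comes from is exactly the open question):

    import pytz

    def localize_search_time(dt_utc, tz_name):
        """Convert a naive UTC search boundary to the configured timezone."""
        return pytz.utc.localize(dt_utc).astimezone(pytz.timezone(tz_name))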

Is a config item needed to specify the timezone query based hunters should use? If so where does it make the most sense to introduce a configured timezone?

Thanks

SSL bad handshake error after default install

After the initial install, servers that have a domain sometimes raise an SSLError due to a bad handshake when the ace_api module is used. I've observed this issue via the CLI as well as in a flashed message in the GUI. I have not observed it on my local VirtualBox VM.

For an example of a server with a domain: an AWS EC2 instance may have a hostname of ip-192-168-0-5.ec2.internal.

An example of the error:

unable to submit alert: HTTPSConnectionPool(host='ip-192-168-0-5.ec2.internal', port=443): Max retries exceeded with url: /api/analysis/submit (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),)) 

To manually fix this error, update the /opt/ace/etc/saq_apache.conf file with one of the following options:

  • Replace the default ServerName with the FQDN of the server, or
  • Add a ServerAlias option with the FQDN of the server.

This could be scripted so that the Apache config file's ServerName is auto-populated from hostname -f. In the interim, we should at least document how to fix this error by adjusting the Apache config.
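
A sketch of the scripted version (untested):

    sed -i "s/ServerName .*/ServerName $(hostname -f)/" /opt/ace/etc/saq_apache.conf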

initial installation missing some key data in the database

There are two issues after installation that should be resolved by the installation script.

  1. An initial super-user should exist. The installation should prompt for it when it starts.
  2. No engine nodes exist, which is an issue on the manual analysis screen. Either create a default node, or adjust the code that populates the drop-down box on the manual analysis screen.

Disable/Enable users

Introduce a new "active" field to the ace.users table.

This field would be set to true if the user is currently employed/active in the alert queue. Users that are no longer active would not be displayed in the GUI nor allowed to log into the platform; however, dispositions made by disabled/non-active users could still be viewed.
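
On the SQLAlchemy side, the new column might look like this (a sketch; model and column names to taste):

    from sqlalchemy import Boolean, Column

    # added to the User model backing ace.users
    active = Column(Boolean, nullable=False, default=True, server_default='1')

    # GUI user lists and login checks would then filter on it, e.g.:
    # session.query(User).filter(User.active == True)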

metrics - add standard deviation

Add a standard deviation (stdev) metric wherever an average is calculated on the metrics page. Averages were used previously to meet SLA, but standard deviation is needed to understand the process better.
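
With Python's standard library this is a one-liner next to each existing average (the values here are hypothetical per-alert cycle times in seconds):

    import statistics

    cycle_times = [1800, 3600, 5400, 7200]  # hypothetical sample
    print(f'mean={statistics.mean(cycle_times):.0f}s '
          f'stdev={statistics.stdev(cycle_times):.0f}s')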

unexpected exception thrown during HunterCollector loop & missing readme file?

from hunter logs: [__init__.py:971] [Collector] [1147] [ERROR] - unexpected exception thrown during loop for <saq.collectors.hunter.HunterCollector object at 0x7fa2e276d6d8>: 'NoneType' object has no attribute 'check_rules'

error report:

data/error_reports/2020-07-18:17:28:06.402596
EXCEPTION
'NoneType' object has no attribute 'check_rules'
 
STACK TRACE
Traceback (most recent call last):
  File "/opt/ace/saq/collectors/__init__.py", line 969, in loop
    self.execute()
  File "/opt/ace/saq/collectors/__init__.py", line 1003, in execute
    tuning_matches = self.submission_filter.get_tuning_matches(next_submission)
  File "/opt/ace/saq/submission/__init__.py", line 340, in get_tuning_matches
    self.update_rules()
  File "/opt/ace/saq/submission/__init__.py", line 334, in update_rules
    need_update = self.tracking_scanner.check_rules()
AttributeError: 'NoneType' object has no attribute 'check_rules'

Which led me here, and to notice that the README it references is missing from the project repo:

# see README.SUBMISSION_FILTERS

I noticed these tuning rules appear to be loaded from whatever is specified in collection.tuning_dir_*, but I don't have that setting. I do have a collection.tuning_temp_dir from the default config.

This doesn't appear to be causing us any issues.
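
Still, if a missing tracking scanner just means no tuning rules are configured, a guard in saq/submission/__init__.py's update_rules may be enough (sketch):

    def update_rules(self):
        # tracking_scanner is None when no tuning rules are configured
        if self.tracking_scanner is None:
            return
        need_update = self.tracking_scanner.check_rules()
        ...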

Alembic for SQL upgrades/downgrades

Just getting a conversation started around using Alembic to manage database upgrades/downgrades. Is there benefit in using Alembic instead of shell scripts?

You can define the database schema/structure, and Alembic can auto-detect changes (with access to the SQLAlchemy metadata) and create an upgrade/downgrade script (which should always be manually reviewed before being trusted).

Example from: https://alembic.sqlalchemy.org/en/latest/autogenerate.html

$ alembic revision --autogenerate -m "Added account table"
INFO [alembic.context] Detected added table 'account'
Generating /path/to/foo/alembic/versions/27c6a30d7c24.py...done

That would create the following:

"""empty message

Revision ID: 27c6a30d7c24
Revises: None
Create Date: 2011-11-08 11:40:27.089406

"""

# revision identifiers, used by Alembic.
revision = '27c6a30d7c24'
down_revision = None

from alembic import op
import sqlalchemy as sa

def upgrade():
    ### commands auto generated by Alembic - please adjust! ###
    op.create_table(
    'account',
    sa.Column('id', sa.Integer()),
    sa.Column('name', sa.String(length=50), nullable=False),
    sa.Column('description', sa.VARCHAR(200)),
    sa.Column('last_transaction_date', sa.DateTime()),
    sa.PrimaryKeyConstraint('id')
    )
    ### end Alembic commands ###

def downgrade():
    ### commands auto generated by Alembic - please adjust! ###
    op.drop_table("account")
    ### end Alembic commands ###

update script or method

We need some way to make sure all required system packages, Python libraries, and database schemas are in place after we do a git pull.

Pretty sure we've got the database part done, so we just need something that makes sure all the packages and Python modules are installed.

Then maybe one script to do all of that.

Add tool icon to alert.

Make it easier to visually distinguish the type of the alert by using an icon. Currently the icon of the "company" is used, which doesn't make sense if you're running this locally for your own company.

General Falcon sandbox analysis improvement

  • Allow users to select the environment type they want to upload a file to (win7x32, win7x64, win10, ...).
  • Don't always use the default environment for uploading samples (the default env id is 100, mapped to Windows 7 32-bit); share samples with the other environments an account has available.

Allow removal of stored password even if the section doesn't exist.

When you run ace -p delete-config-password my_section.password and you no longer have my_section in your configuration, ACE throws this error:

[ERROR] unknown config section my_section.

This error makes sense when storing a password, but it would be nice (and I'm being a diva here) not to have to re-add a section I already removed just to delete its old stored password.

GUI - Sort alerts by detection count?

On the alerts page, it would be nice and interesting to sort alerts by their detection counts.

  • from most to least
  • from least to most

Currently the detection count is only displayed in parentheses:

[screenshot]

SLA functionality missing

It appears that the SLA functionality was not implemented with the new version of the GUI alert filters. We would like to have it back, as it's good for making sure alerts are not forgotten when an analyst goes on vacation, etc.

old code:

    # we want to display alerts that are either approaching or exceeding SLA
    sla_ids = [] # list of alert IDs that need to be displayed
    if saq.GLOBAL_SLA_SETTINGS.enabled or any([s.enabled for s in saq.OTHER_SLA_SETTINGS]):
        _query = db.session.query(GUIAlert).filter(GUIAlert.disposition == None)
        for alert_type in saq.EXCLUDED_SLA_ALERT_TYPES:
            _query = _query.filter(GUIAlert.alert_type != alert_type)
        for alert in _query:
            if alert.is_over_sla or alert.is_approaching_sla:
                sla_ids.append(alert.id)

    logging.debug("{} alerts in breach of SLA".format(len(sla_ids)))

Then, if an alert was approaching SLA, the default filters changed such that alert ownership was removed and every analyst saw all open (disposition == None) alerts when they reset their filters. Also, alerts approaching SLA were highlighted in yellow and alerts over SLA were highlighted in red.

Extra Data bug in ACE alert data.json

We've seen the following bug three times now. I can supply the data.json file in a secure channel if someone wants to take this on.

Command line:

$ ./ace import-alerts ~cybersecurity/6d9de41f-949d-40ce-a77b-7a607aaae0be
+ unable to load json from /home/cybersecurity/6d9de41f-949d-40ce-a77b-7a607aaae0be/data.json: Extra data: line 1 column 832256 (char 832255)
Traceback (most recent call last):
  File "./ace", line 3796, in <module>
    args.func(args)
  File "./ace", line 1740, in import_alerts
    if not alert.load():
  File "/opt/ace/lib/saq/analysis/__init__.py", line 2916, in load
    raise e
  File "/opt/ace/lib/saq/analysis/__init__.py", line 2901, in load
    self.json = json.load(fp)
  File "/usr/lib/python3.6/json/__init__.py", line 299, in load
    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
  File "/usr/lib/python3.6/json/__init__.py", line 354, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.6/json/decoder.py", line 342, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 832256 (char 832255)

GUI when trying to view alert:
[screenshot]

I think the bug may be along these lines: https://stackoverflow.com/questions/48140858/json-decoder-jsondecodeerror-extra-data-line-2-column-1-char-190
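
The 'Extra data' error means the parser found a complete JSON value followed by more bytes. A quick standard-library way to see what those trailing bytes are:

    import json

    def inspect_extra_data(path):
        """Parse the first JSON document in a file and show trailing bytes."""
        with open(path, 'r') as fp:
            text = fp.read().lstrip()
        obj, end = json.JSONDecoder().raw_decode(text)
        trailing = text[end:]
        if trailing.strip():
            print(f'{len(trailing)} extra bytes, starting: {trailing[:80]!r}')
        return obj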

WhoIsAnalysis -- Remove F_URL observable?

Currently, the WhoIsAnalyzer works with F_URL and F_FQDN observable types.

Wondering if this would be better for the whois analysis module:

  • Remove the F_URL observable from whois module
  • Rely on analysis modules to extract the domain from an F_URL observable and submit it as an F_FQDN observable.
  • WhoIsAnalyzer would then run on the F_FQDN observable only. This should keep ACE from performing whois analysis on the URL AND the FQDN if the domain is ever stripped and submitted separately from F_URL analysis in the future.

Running whois analysis twice is not a big performance hit... so it may not be worth the time. Thoughts?

@seanmcfeely / @unixfreak0037

Internal Server Error loading missing data

We started seeing the following semi-frequently after some new indicators were introduced into our environment:

[screenshot: web server error]

I assumed the issue was that the archived email referenced by the new analysis had rolled off; however, that wasn't the case. The old email is still in the archive. I'm not sure where the breakdown is happening, so I settled for fixing the issue further downstream by catching the error in display_preview, which gets called in app/templates/analysis/alert.html:

    {% if observable_node.obj.has_directive('preview') %}
        <div class="panel panel-default">
            <div class="panel-body observable-preview">{{observable_node.obj.display_preview}}</div>
        </div>
    {% endif %}

It just returns None right now, but after pushing up a catch for FileNotFoundError, I thought the following might better inform the analyst:

    def display_preview(self):                                                                                          
        try:                                                                                                            
            with open(self.path, 'rb') as fp:                                                                           
                return fp.read(saq.CONFIG['gui'].getint('file_preview_bytes')).decode('utf8', errors='replace')         
        except FileNotFoundError:                                                                                       
            logging.error(f"file does not exist for display_preview: {self.path}")                                      
            return f"FileNotFoundError: {self.path}"

Which would result in:

[screenshot]

Thoughts?

Make and manage persistent observable comments

Sometimes you figure out something unique while working an alert, and it would be valuable for a comment to persist while it's relevant so that other analysts don't have to redo the analysis work.
For example, if an analyst discovers that a certain user uses a VPN service on his/her system, then the following comment could be added to the relevant user observable:

This user uses [example VPN solution](https://fake.vpn.example.com) on his/her system.
As discovered through these details:
        - detail
        - detail

Also, blah blah blah.

^ Make the comments expressible in markdown; this adds flexibility in presentation. These comments should be editable directly in any alert where the observable is found.

ACE MySQL DB schema to use utf8mb4 instead of utf8 by default

The following four bytes represent this unicode character: 🌹
\xf0\x9f\x8c\xb9

MySQL doesn't support four byte unicode characters by default:
https://dev.mysql.com/doc/refman/8.0/en/charset-unicode-sets.html

This causes the following error in ACE:

sqlalchemy.exc.InternalError: (pymysql.err.InternalError) (1366, "Incorrect string value: '\\xF0\\x9F\\x8C\\xB9' for column 'description' at row 1") [SQL: 'INSERT INTO alerts (company_id, uuid, location, storage_dir, tool, tool_instance, alert_type, description, priority, disposition, disposition_user_id, disposition_time, owner_id, owner_time, archived, removal_user_id, removal_time, detection_count) VALUES (%(company_id)s, %(uuid)s, %(location)s, %(storage_dir)s, %(tool)s, %(tool_instance)s, %(alert_type)s, %(description)s, %(priority)s, %(disposition)s, %(disposition_user_id)s, %(disposition_time)s, %(owner_id)s, %(owner_time)s, %(archived)s, %(removal_user_id)s, %(removal_time)s, %(detection_count)s)'] [parameters: {'company_id': 1, 'uuid': '891bd58f-cb81-4513-a358-75a901909445', 'location': 'ace-1.local', 'storage_dir': 'data/ace-1.local/891/891bd58f-cb81-4513-a358-75a901909445', 'tool': 'gui', 'tool_instance': 'ACE-1', 'alert_type': 'manual', 'description': 'Manual Correlation - 🌹', 'priority': 0, 'disposition': None, 'disposition_user_id': None, 'disposition_time': None, 'owner_id': None, 'owner_time': None, 'archived': 0, 'removal_user_id': None, 'removal_time': None, 'detection_count': 0}] (Background on this error at: http://sqlalche.me/e/2j85)

Good write up on this: https://mathiasbynens.be/notes/mysql-utf8mb4

Extra info:
https://dev.mysql.com/doc/refman/8.0/en/charset-unicode-conversion.html
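
For reference, the conversion described in the linked write-up amounts to the following (back up the database first; sketch only):

    ALTER DATABASE ace CHARACTER SET = utf8mb4 COLLATE = utf8mb4_unicode_ci;
    ALTER TABLE alerts CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
    -- repeat for each table, and use charset=utf8mb4 on client connections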
