danielpoliakov / lisa

Sandbox for automated Linux malware analysis.

License: Apache License 2.0

Dockerfile 2.87% Shell 0.90% Python 52.47% JavaScript 39.36% HTML 0.36% CSS 4.04%
malware linux malware-analysis iot internet-of-things security linux-sandbox lisa

lisa's Introduction

LiSa

Project providing automated Linux malware analysis on various CPU architectures.

Features

  • QEMU emulation.
  • Currently supporting x86_64, i386, arm, mips, aarch64.
  • Small images built w/ buildroot.
  • Radare2-based static analysis.
  • Dynamic (behavioral) analysis using SystemTap kernel modules, capturing syscalls, open files and process trees.
  • Network statistics and analysis of DNS, HTTP, Telnet and IRC communication.
  • Endpoints analysis and blacklists configuration.
  • Scaled with celery and RabbitMQ.
  • REST API | frontend.
  • Extensible through sub-analysis modules and custom images.

Get Started

Requirements

Docker and docker-compose (the build and run steps below use docker-compose).

  1. Get repository.
$ git clone https://github.com/danieluhricek/lisa
$ cd lisa
  2. Build.
# docker-compose build
  3. Run the sandbox (default location: http://localhost:4242).
# docker-compose up
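
Once the stack is up, samples can also be submitted through the same REST API the frontend uses. Below is a minimal client sketch in Python; the endpoints (/api/tasks/create/file, /api/tasks/finished) appear in the nginx/api logs quoted in the issues further down this page, but the multipart field name and the shape of the JSON responses are assumptions, so adjust them to whatever the frontend actually sends.

import requests

LISA_URL = "http://localhost:4242"  # default location from the steps above


def submit_sample(path):
    """Upload a binary for analysis. The multipart field name 'file' is an assumption."""
    with open(path, "rb") as f:
        resp = requests.post(f"{LISA_URL}/api/tasks/create/file", files={"file": f})
    resp.raise_for_status()
    return resp.json()  # expected to contain the created task's id


def list_finished():
    """List finished analyses (endpoint taken from the logs quoted below)."""
    resp = requests.get(f"{LISA_URL}/api/tasks/finished")
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    print(submit_sample("sample.bin"))
    print(list_finished())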

Configuration

MaxMind GeoLite2

Sign up to get your API key, then set it in the build args section of docker-compose.yml.

.
.
  worker:
    image: lisa-worker
    build:
      context: .
      dockerfile: ./docker/worker/Dockerfile
      args:
        maxmind_key: YOUR_KEY
    volumes:
      - "./data/storage:/home/lisa/data/storage"
      .
      .
      .
.
.
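
For context, the MaxMind key is used during the worker build to download the GeoLite2-City and GeoLite2-ASN databases into data/geolite2databases (see the Dockerfile excerpts quoted in the issues below); the geoip2 package then resolves endpoint IPs against them. A minimal lookup sketch, assuming those database paths, could look like this:

import geoip2.database

# Paths assumed from the worker Dockerfile; adjust if your build places the .mmdb files elsewhere.
CITY_DB = "data/geolite2databases/GeoLite2-City.mmdb"
ASN_DB = "data/geolite2databases/GeoLite2-ASN.mmdb"


def lookup(ip):
    """Return rough geolocation and ASN information for an endpoint IP."""
    with geoip2.database.Reader(CITY_DB) as city_reader, geoip2.database.Reader(ASN_DB) as asn_reader:
        city = city_reader.city(ip)
        asn = asn_reader.asn(ip)
        return {
            "country": city.country.iso_code,
            "city": city.city.name,
            "asn": asn.autonomous_system_number,
            "organization": asn.autonomous_system_organization,
        }


print(lookup("8.8.8.8"))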

Web hosting

Set up your server's IP and port in the nginx service in docker-compose.yml.

.
.
  nginx:
    image: lisa-nginx
    build:
      context: .
      dockerfile: ./docker/nginx/Dockerfile
      args:
        webhost: <myip|default=localhost>:<port>
    ports:
      - <port>:80
.
.

Scaling

Workers are scalable.

# docker-compose up --scale worker=10

VPN

You can route the malware's traffic through OpenVPN. To do that:

  1. Mount a volume containing the OpenVPN config (named config.ovpn).
  2. Set the environment variable VPN to the directory path of the OpenVPN config.
.
.
  worker:
    image: lisa-worker
    build:
      context: .
      dockerfile: ./docker/worker/Dockerfile
    environment:
      - VPN=/vpn
    volumes:
      - "./data/storage:/home/lisa/data/storage"
      - "./vpn:/vpn"
.
.

Blacklists

The default blacklists are (source):

  • bi_ssh_2_30d.ipset
  • firehol_level3.netset
  • firehol_webserver.netset
  • iblocklist_abuse_zeus.netset
  • normshield_all_wannacry.ipset

If you want to use any other blacklist, put the .ipset or .netset files into data/blacklists. All of these blacklists are merged into a single list during the build of the worker service.
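
During the worker build these files are merged with iprange (the Dockerfile excerpts quoted in the issues below show iprange -j data/blacklists/* > data/ipblacklist). A minimal sketch of how such a merged list can be consumed, assuming one IP or CIDR network per line, is:

import ipaddress


def load_blacklist(path="data/ipblacklist"):
    """Parse the merged blacklist; assumes one IP or CIDR per line, '#' for comments."""
    networks = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            networks.append(ipaddress.ip_network(line, strict=False))
    return networks


def is_blacklisted(ip, networks):
    """True if the endpoint IP falls into any blacklisted range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in networks)


networks = load_blacklist()
print(is_blacklisted("203.0.113.7", networks))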

Adding new sub-analysis modules

The core of the LiSa project ships with 4 basic analysis modules: static_analysis, dynamic_analysis, network_analysis and virustotal. Sub-analysis modules are plugin-based. To add a new sub-analysis module and append its output to the final JSON, do the following:

  1. Create a class which inherits from the AbstractSubAnalyzer class and implement the run_analysis() method, e.g. (a fuller sketch follows after the configuration list):
class NewSubAnalyzer(AbstractSubAnalyzer):
    def run_analysis(self):
        pass
  2. Update the list in lisa.config.py:
analyzers_config = [
    # core analyzers
    'lisa.analysis.static_analysis.StaticAnalyzer',
    'lisa.analysis.dynamic_analysis.DynamicAnalyzer',
    'lisa.analysis.network_analysis.NetworkAnalyzer',
    'lisa.analysis.virustotal.VirusTotalAnalyzer',

    # custom
    'module_of_new_analyzer.NewSubAnalyzer'
]
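
For illustration, here is a slightly fuller (hypothetical) sub-analyzer. Only the AbstractSubAnalyzer base class and the run_analysis() hook are taken from the description above; the import path and the self._file_path attribute are assumptions, so adapt them to the actual base class in the LiSa codebase.

# Module path of the base class is an assumption.
from lisa.analysis.abstract_sub_analyzer import AbstractSubAnalyzer


class StringsSubAnalyzer(AbstractSubAnalyzer):
    """Hypothetical module: collect printable ASCII strings from the sample."""

    def run_analysis(self):
        output = {'strings': []}
        # self._file_path is an assumed attribute holding the path of the analyzed sample.
        with open(self._file_path, 'rb') as f:
            data = f.read()
        current = bytearray()
        for byte in data:
            if 32 <= byte < 127:
                current.append(byte)
            else:
                if len(current) >= 4:
                    output['strings'].append(current.decode('ascii'))
                current = bytearray()
        if len(current) >= 4:
            output['strings'].append(current.decode('ascii'))
        return output  # the returned dict is appended to the final JSON report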

Running tests

# docker build -f ./docker/tests/Dockerfile -t lisa-tests .
# docker run lisa-tests

Upcoming features

  1. YARA module - match patterns in LiSa's JSON output (a rough sketch of the idea follows below).
  2. Images selection - more Linux images containing e.g. IoT firmware.
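
A rough sketch of what such a YARA module might do is shown below, using the yara-python package (not currently a LiSa dependency); the rule and the idea of matching against the serialized report are illustrative only.

import json

import yara  # yara-python; not part of the current requirements


# Hypothetical rule flagging a Mirai-style marker.
RULE = '''
rule mirai_marker {
    strings:
        $a = "/bin/busybox MIRAI" nocase
    condition:
        $a
}
'''


def match_report(report_path):
    """Run the rule against a finished analysis report (LiSa's JSON output)."""
    rules = yara.compile(source=RULE)
    with open(report_path) as f:
        report = json.load(f)
    # Match against the serialized report so string hits anywhere in the output are caught.
    return [m.rule for m in rules.match(data=json.dumps(report))]


print(match_report("report.json"))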

Contribute

Contributions | feedback | issues | pull requests are welcome.

Related work

License

LiSa is licensed under Apache License 2.0.

lisa's People

Contributors

danielpoliakov, dependabot[bot], firmianay, gallypette, jrespeto


lisa's Issues

Using --scale causes the IP of the api container to be used

Hi,
Just a small boot sequence issue.

When I run the scaled setup, the api container gets the error "Address already in use".

~/lisa$ docker-compose up --scale worker=10
Recreating lisa_worker_1  ... done
Recreating lisa_worker_2  ... done
Recreating lisa_worker_3  ... done
Recreating lisa_worker_4  ... done
Recreating lisa_worker_5  ... done
Recreating lisa_worker_6  ... done
Recreating lisa_worker_7  ... done
Recreating lisa_worker_8  ... done
Recreating lisa_worker_9  ... done
Recreating lisa_worker_10 ... done
Starting lisa_mariadb_1   ... done
Starting lisa_rabbitmq_1  ... done
Recreating lisa_api_1     ... error

ERROR: for lisa_api_1  Cannot start service api: Address already in use

ERROR: for api  Cannot start service api: Address already in use
ERROR: Encountered errors while bringing up the project.

I think this is because api has a fixed IPv4 address and depends_on worker, so a worker occupies that IP first after it starts.
So I made worker wait for api to finish booting before starting:

api:
    image: lisa-api
    depends_on:
      - rabbitmq

worker:
    image: lisa-worker
    depends_on:
      - api

Broken Pipe.

I tried running LiSa but kept getting this error. Thank you in advance.
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/celery/app/trace.py", line 450, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/celery/app/trace.py", line 731, in protected_call
return self.run(*args, **kwargs)
File "/home/lisa/lisa/web_api/tasks.py", line 80, in full_analysis
master.run()
File "/home/lisa/lisa/analysis/top_level.py", line 69, in run
sub_output = analyzer.run_analysis()
File "/home/lisa/lisa/analysis/static_analysis.py", line 31, in run_analysis
self._r2.cmd('aaa')
File "/usr/local/lib/python3.6/site-packages/r2pipe/open_base.py", line 232, in cmd
res = self._cmd(cmd, **kwargs)
File "/usr/local/lib/python3.6/site-packages/r2pipe/open_sync.py", line 109, in _cmd_process
self.process.stdin.write((cmd + "\n").encode("utf8"))
BrokenPipeError: [Errno 32] Broken pipe

build fail

Hello LiSa developers,
I cloned your project and tried to build docker, but got error:

$ git clone https://github.com/danieluhricek/lisa
Cloning into 'lisa'...
remote: Enumerating objects: 247, done.
remote: Total 247 (delta 0), reused 0 (delta 0), pack-reused 247
Receiving objects: 100% (247/247), 3.87 MiB | 0 bytes/s, done.
Resolving deltas: 100% (68/68), done.
$ docker-compose build
....
Step 10/11 : RUN pip install -r requirements.txt     && iprange -j data/blacklists/* > data/ipblacklist     && ./docker/worker/maxmind.sh $maxmind_key     && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false     git     gcc     g++     make     patch     && rm -rf /var/lib/apt/lists/*     && rm -rf /radare2/.git
 ---> Running in 5249358453cd
Collecting click==7.0
  Downloading Click-7.0-py2.py3-none-any.whl (81 kB)
Collecting disspcap==1.1.1
  Downloading disspcap-1.1.1.tar.gz (19 kB)
Collecting r2pipe==1.5.3
  Downloading r2pipe-1.5.3.tar.gz (9.1 kB)
Collecting pexpect==4.2.1
  Downloading pexpect-4.2.1-py2.py3-none-any.whl (55 kB)
Collecting geoip2==2.9.0
  Downloading geoip2-2.9.0-py2.py3-none-any.whl (18 kB)
Collecting flask==1.0.2
  Downloading Flask-1.0.2-py2.py3-none-any.whl (91 kB)
Collecting flask-cors===3.0.9
  Downloading Flask_Cors-3.0.9-py2.py3-none-any.whl (14 kB)
ERROR: Could not find a version that satisfies the requirement celery==5.2.2 (from versions: 0.1.2, 0.1.4, 0.1.6, 0.1.7, 0.1.8, 0.1.10, 0.1.11, 0.1.12, 0.1.13, 0.1.14, 0.1.15, 0.2.0, 0.3.0, 0.3.7, 0.3.20, 0.4.0, 0.4.1, 0.6.0, 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.8.4, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.0.6, 2.0.0, 2.0.1, 2.0.2, 2.0.3, 2.1.0, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.2.0, 2.2.1, 2.2.2, 2.2.3, 2.2.4, 2.2.5, 2.2.6, 2.2.7, 2.2.8, 2.2.9, 2.2.10, 2.3.0, 2.3.1, 2.3.2, 2.3.3, 2.3.4, 2.3.5, 2.4.0, 2.4.1, 2.4.2, 2.4.3, 2.4.4, 2.4.5, 2.4.6, 2.4.7, 2.5.0, 2.5.1, 2.5.2, 2.5.3, 2.5.5, 3.0.0, 3.0.1, 3.0.2, 3.0.3, 3.0.4, 3.0.5, 3.0.6, 3.0.7, 3.0.8, 3.0.9, 3.0.10, 3.0.11, 3.0.12, 3.0.13, 3.0.14, 3.0.15, 3.0.16, 3.0.17, 3.0.18, 3.0.19, 3.0.20, 3.0.21, 3.0.22, 3.0.23, 3.0.24, 3.0.25, 3.1.0, 3.1.1, 3.1.2, 3.1.3, 3.1.4, 3.1.5, 3.1.6, 3.1.7, 3.1.8, 3.1.9, 3.1.10, 3.1.11, 3.1.12, 3.1.13, 3.1.14, 3.1.15, 3.1.16, 3.1.17, 3.1.18, 3.1.19, 3.1.20, 3.1.21, 3.1.22, 3.1.23, 3.1.24, 3.1.25, 3.1.26.post1, 3.1.26.post2, 4.0.0rc3, 4.0.0rc4, 4.0.0rc5, 4.0.0rc6, 4.0.0rc7, 4.0.0, 4.0.1, 4.0.2, 4.1.0, 4.1.1, 4.2.0rc1, 4.2.0rc2, 4.2.0rc3, 4.2.0rc4, 4.2.0, 4.2.1, 4.2.2, 4.3.0rc1, 4.3.0rc2, 4.3.0rc3, 4.3.0, 4.3.1, 4.4.0rc1, 4.4.0rc2, 4.4.0rc3, 4.4.0rc4, 4.4.0rc5, 4.4.0, 4.4.1, 4.4.2, 4.4.3, 4.4.4, 4.4.5, 4.4.6, 4.4.7, 5.0.0a1, 5.0.0a2, 5.0.0b1, 5.0.0rc1, 5.0.0rc2, 5.0.0rc3, 5.0.0, 5.0.1, 5.0.2, 5.0.3, 5.0.4, 5.0.5, 5.0.6, 5.1.0b1, 5.1.0b2, 5.1.0rc1, 5.1.0, 5.1.1, 5.1.2, 5.2.0b1, 5.2.0b2, 5.2.0b3)
ERROR: No matching distribution found for celery==5.2.2
WARNING: You are using pip version 21.2.4; however, version 21.3.1 is available.
You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
ERROR: Service 'worker' failed to build: The command '/bin/sh -c pip install -r requirements.txt     && iprange -j data/blacklists/* > data/ipblacklist     && ./docker/worker/maxmind.sh $maxmind_key     && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false     git     gcc     g++     make     patch     && rm -rf /var/lib/apt/lists/*     && rm -rf /radare2/.git' returned a non-zero code: 1

By the way I use CentOS 7.9

Failed task

When I submit testbin-puts-arm I got:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/pexpect/spawnbase.py", line 150, in read_nonblocking
    s = os.read(self.child_fd, size)
OSError: [Errno 5] Input/output error

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/pexpect/expect.py", line 99, in expect_loop
    incoming = spawn.read_nonblocking(spawn.maxread, timeout)
  File "/usr/local/lib/python3.6/site-packages/pexpect/pty_spawn.py", line 465, in read_nonblocking
    return super(spawn, self).read_nonblocking(size)
  File "/usr/local/lib/python3.6/site-packages/pexpect/spawnbase.py", line 155, in read_nonblocking
    raise EOF('End Of File (EOF). Exception style platform.')
pexpect.exceptions.EOF: End Of File (EOF). Exception style platform.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/celery/app/trace.py", line 451, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/celery/app/trace.py", line 734, in __protected_call__
    return self.run(*args, **kwargs)
  File "/home/lisa/lisa/web_api/tasks.py", line 80, in full_analysis
    master.run()
  File "/home/lisa/lisa/analysis/top_level.py", line 69, in run
    sub_output = analyzer.run_analysis()
  File "/home/lisa/lisa/analysis/dynamic_analysis.py", line 36, in run_analysis
    self._vm.start_vm()
  File "/home/lisa/lisa/core/qemu_guest.py", line 100, in start_vm
    self._proc.expect('login: ')
  File "/usr/local/lib/python3.6/site-packages/pexpect/spawnbase.py", line 321, in expect
    timeout, searchwindowsize, async)
  File "/usr/local/lib/python3.6/site-packages/pexpect/spawnbase.py", line 345, in expect_list
    return exp.expect_loop(timeout)
  File "/usr/local/lib/python3.6/site-packages/pexpect/expect.py", line 105, in expect_loop
    return self.eof(e)
  File "/usr/local/lib/python3.6/site-packages/pexpect/expect.py", line 50, in eof
    raise EOF(msg)
pexpect.exceptions.EOF: End Of File (EOF). Exception style platform.
<pexpect.pty_spawn.spawn object at 0x7ff5b2497828>
command: /home/lisa/images/arm/run.sh
args: [b'/home/lisa/images/arm/run.sh', b'/home/lisa/data/storage/1527e830-e4a8-44e2-9b58-1560861ef141/rootfs']
buffer (last 100 chars): ''
before (last 100 chars): "> <new-size>'\r\n(note that this will lose data if you make the image smaller than it currently is).\r\n"
after: <class 'pexpect.exceptions.EOF'>
match: None
match_index: None
exitstatus: None
flag_eof: True
pid: 30
child_fd: 13
closed: False
timeout: 70
delimiter: <class 'pexpect.exceptions.EOF'>
logfile: <_io.TextIOWrapper name='/home/lisa/data/storage/1527e830-e4a8-44e2-9b58-1560861ef141/machine.log' mode='w' encoding='utf-8'>
logfile_read: None
logfile_send: None
maxread: 2000
ignorecase: False
searchwindowsize: None
delaybeforesend: 0.05
delayafterclose: 0.1
delayafterterminate: 0.1
searcher: searcher_re:
    0: re.compile("login: ")

requirement error

Celery version 5.2.0 runs on:

Python (3.7, 3.8, 3.9, 3.10)
PyPy3.7 (7.3.7+)
This is the version of celery which will support Python 3.7 or newer.

If you're running an older version of Python, you need to be running an older version of Celery:

Python 2.6: Celery series 3.1 or earlier.
Python 2.5: Celery series 3.0 or earlier.
Python 2.4: Celery series 2.2 or earlier.
Python 2.7: Celery 4.x series.
Python 3.6: Celery 5.1 or earlier.

Consistently getting KeyError: 'minopsz'

Hi there,

Thank you for sharing this amazing project. I set things up, and when I try to analyze any sample I always get the same KeyError: 'minopsz' error.

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/celery/app/trace.py", line 405, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/celery/app/trace.py", line 697, in __protected_call__
    return self.run(*args, **kwargs)
  File "/home/lisa/lisa/web_api/tasks.py", line 80, in full_analysis
    master.run()
  File "/home/lisa/lisa/analysis/top_level.py", line 69, in run
    sub_output = analyzer.run_analysis()
  File "/home/lisa/lisa/analysis/static_analysis.py", line 34, in run_analysis
    self._r2_info()
  File "/home/lisa/lisa/analysis/static_analysis.py", line 62, in _r2_info
    'min_opsize': info['bin']['minopsz'],
KeyError: 'minopsz'

am I missing something obvious?
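
A common defensive workaround for this class of error (not an official fix) is to read the radare2 info dictionary with .get() instead of direct indexing, so a missing field yields None rather than aborting the analysis. A minimal sketch, assuming the dictionary shape shown in the traceback above:

import json


def r2_bin_info(r2):
    """Hypothetical defensive variant of the lookup in static_analysis.py."""
    info = json.loads(r2.cmd('ij'))  # radare2 'ij' prints binary info as JSON
    bin_info = info.get('bin', {})
    return {
        'min_opsize': bin_info.get('minopsz'),  # None instead of KeyError when radare2 omits the field
        'max_opsize': bin_info.get('maxopsz'),
    }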

Could not find a version that satisfies the requirement r2pipe==1.2.0

Collecting click==7.0
Downloading Click-7.0-py2.py3-none-any.whl (81 kB)
Collecting disspcap==1.1.1
Downloading disspcap-1.1.1.tar.gz (19 kB)
Collecting r2pipe==1.2.0
Downloading r2pipe-1.2.0.tar.gz (8.9 kB)
ERROR: Command errored out with exit status 1:
command: /usr/local/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-s1258syi/r2pipe_d32ff5bfa6174b2fa15dadcc96994aeb/setup.py'"'"'; file='"'"'/tmp/pip-install-s1258syi/r2pipe_d32ff5bfa6174b2fa15dadcc96994aeb/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '" cwd: /tmp/pip-install-s1258syi/r2pipe_d32ff5bfa6174b2fa15dadcc96994aeb/
Complete output (15 lines):
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/pip-install-s1258syi/r2pipe_d32ff5bfa6174b2fa15dadcc96994aeb/setup.py", line 2, in <module>
    import r2pipe
  File "/tmp/pip-install-s1258syi/r2pipe_d32ff5bfa6174b2fa15dadcc96994aeb/r2pipe/__init__.py", line 38, in <module>
    from .open_sync import open
  File "/tmp/pip-install-s1258syi/r2pipe_d32ff5bfa6174b2fa15dadcc96994aeb/r2pipe/open_sync.py", line 14, in <module>
    from .open_base import OpenBase, get_radare_path
  File "/tmp/pip-install-s1258syi/r2pipe_d32ff5bfa6174b2fa15dadcc96994aeb/r2pipe/open_base.py", line 20, in <module>
    from .native import RCore
  File "/tmp/pip-install-s1258syi/r2pipe_d32ff5bfa6174b2fa15dadcc96994aeb/r2pipe/native.py", line 25, in <module>
    lib = CDLL(lib_name)
  File "/usr/local/lib/python3.6/ctypes/__init__.py", line 348, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libr_core.so: cannot open shared object file: No such file or directory
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/9d/a7/24bb1567aa171b0dcf76a267281fca60ff4ab60b1107ec4d5808e3d9e617/r2pipe-1.2.0.tar.gz#sha256=445a61c4a4b4fc038355f9a2df48298311150de07f2a949f868aaa142bee43e3 (from https://pypi.org/simple/r2pipe/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
ERROR: Could not find a version that satisfies the requirement r2pipe==1.2.0
ERROR: No matching distribution found for r2pipe==1.2.0
ERROR: Service 'worker' failed to build : The command '/bin/sh -c pip install -r requirements.txt && iprange -j data/blacklists/* > data/ipblacklist && ./docker/worker/maxmind.sh $maxmind_key && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false git gcc g++ make patch && rm -rf /var/lib/apt/lists/* && rm -rf /radare2/.git' returned a non-zero code: 1

Submit file doesn't work, can't run LiSa

I'm getting this error on the worker; the other containers don't throw any exceptions:

worker_1    | Traceback (most recent call last):
worker_1    |   File "/usr/local/lib/python3.6/site-packages/billiard/pool.py", line 1796, in safe_apply_callback
worker_1    |     fun(*args, **kwargs)
worker_1    |   File "/usr/local/lib/python3.6/site-packages/celery/worker/request.py", line 371, in on_failure
worker_1    |     store_result=self.store_errors,
worker_1    |   File "/usr/local/lib/python3.6/site-packages/celery/backends/base.py", line 160, in mark_as_failure
worker_1    |     traceback=traceback, request=request)
worker_1    |   File "/usr/local/lib/python3.6/site-packages/celery/backends/base.py", line 342, in store_result
worker_1    |     request=request, **kwargs)
worker_1    |   File "/usr/local/lib/python3.6/site-packages/celery/backends/database/__init__.py", line 53, in _inner
worker_1    |     return fun(*args, **kwargs)
worker_1    |   File "/usr/local/lib/python3.6/site-packages/celery/backends/database/__init__.py", line 105, in _store_result
worker_1    |     session = self.ResultSession()
worker_1    |   File "/usr/local/lib/python3.6/site-packages/celery/backends/database/__init__.py", line 99, in ResultSession
worker_1    |     **self.engine_options)
worker_1    |   File "/usr/local/lib/python3.6/site-packages/celery/backends/database/session.py", line 59, in session_factory
worker_1    |     self.prepare_models(engine)
worker_1    |   File "/usr/local/lib/python3.6/site-packages/celery/backends/database/session.py", line 54, in prepare_models
worker_1    |     ResultModelBase.metadata.create_all(engine)
worker_1    |   File "/usr/local/lib/python3.6/site-packages/sqlalchemy/sql/schema.py", line 4287, in create_all
worker_1    |     ddl.SchemaGenerator, self, checkfirst=checkfirst, tables=tables
worker_1    |   File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 2032, in _run_visitor
worker_1    |     with self._optional_conn_ctx_manager(connection) as conn:
worker_1    |   File "/usr/local/lib/python3.6/contextlib.py", line 81, in __enter__
worker_1    |     return next(self.gen)
worker_1    |   File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 2024, in _optional_conn_ctx_manager
worker_1    |     with self._contextual_connect() as conn:
worker_1    |   File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 2226, in _contextual_connect
worker_1    |     self._wrap_pool_connect(self.pool.connect, None),
worker_1    |   File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 2266, in _wrap_pool_connect
worker_1    |     e, dialect, self
worker_1    |   File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1536, in _handle_dbapi_exception_noconnection
worker_1    |     util.raise_from_cause(sqlalchemy_exception, exc_info)
worker_1    |   File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 383, in raise_from_cause
worker_1    |     reraise(type(exception), exception, tb=exc_tb, cause=cause)
worker_1    |   File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 128, in reraise
worker_1    |     raise value.with_traceback(tb)
worker_1    |   File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 2262, in _wrap_pool_connect
worker_1    |     return fn()
worker_1    |   File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 354, in connect
worker_1    |     return _ConnectionFairy._checkout(self)
worker_1    |   File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 751, in _checkout
worker_1    |     fairy = _ConnectionRecord.checkout(pool)
worker_1    |   File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 483, in checkout
worker_1    |     rec = pool._do_get()
worker_1    |   File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool/impl.py", line 237, in _do_get
worker_1    |     return self._create_connection()
worker_1    |   File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 299, in _create_connection
worker_1    |     return _ConnectionRecord(self)
worker_1    |   File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 428, in __init__
worker_1    |     self.__connect(first_connect_check=True)
worker_1    |   File "/usr/local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 630, in __connect
worker_1    |     connection = pool._invoke_creator(self)
worker_1    |   File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/strategies.py", line 114, in connect
worker_1    |     return dialect.connect(*cargs, **cparams)
worker_1    |   File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 453, in connect
worker_1    |     return self.dbapi.connect(*cargs, **cparams)
worker_1    |   File "/usr/local/lib/python3.6/site-packages/pymysql/__init__.py", line 94, in Connect
worker_1    |     return Connection(*args, **kwargs)
worker_1    |   File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 325, in __init__
worker_1    |     self.connect()
worker_1    |   File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 630, in connect
worker_1    |     raise exc
worker_1    | sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on '172.42.0.14' ([Errno 111] Connection refused)")
worker_1    | (Background on this error at: http://sqlalche.me/e/e3q8)

Can't analyse files after a fresh installation of LiSa

Hi,

I installed LiSa on a fresh Ubuntu MATE 18.04.3 LTS (Bionic) and submitted three malware samples (the same Mirai compiled for different architectures, from URLhaus: r4z0r.arm, r4z0r.mips, r4z0r.x86), and all three analyses failed.

ae2da66a4435800c63e50de2257b268e r4z0r.arm
e357a85565f26c505f20fb9c4aa9711e r4z0r.mips
4a388c6d3dfd5b54e3d74924337eae73 r4z0r.x86

References:
https://ubuntu-mate.org/download/ (to download ubuntu)
https://urlhaus.abuse.ch/browse/ (to get the download url of mirai)

When looking at the failed tab of the nginx web page, I see the following error message, which confuses me because it mentions that no image was found.

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/pexpect/spawnbase.py", line 150, in read_nonblocking
    s = os.read(self.child_fd, size)
OSError: [Errno 5] Input/output error

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/pexpect/expect.py", line 99, in expect_loop
    incoming = spawn.read_nonblocking(spawn.maxread, timeout)
  File "/usr/local/lib/python3.6/site-packages/pexpect/pty_spawn.py", line 465, in read_nonblocking
    return super(spawn, self).read_nonblocking(size)
  File "/usr/local/lib/python3.6/site-packages/pexpect/spawnbase.py", line 155, in read_nonblocking
    raise EOF('End Of File (EOF). Exception style platform.')
pexpect.exceptions.EOF: End Of File (EOF). Exception style platform.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/celery/app/trace.py", line 385, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/celery/app/trace.py", line 648, in __protected_call__
    return self.run(*args, **kwargs)
  File "/home/lisa/lisa/web_api/tasks.py", line 80, in full_analysis
    master.run()
  File "/home/lisa/lisa/analysis/top_level.py", line 69, in run
    sub_output = analyzer.run_analysis()
  File "/home/lisa/lisa/analysis/dynamic_analysis.py", line 36, in run_analysis
    self._vm.start_vm()
  File "/home/lisa/lisa/core/qemu_guest.py", line 100, in start_vm
    self._proc.expect('login: ')
  File "/usr/local/lib/python3.6/site-packages/pexpect/spawnbase.py", line 321, in expect
    timeout, searchwindowsize, async)
  File "/usr/local/lib/python3.6/site-packages/pexpect/spawnbase.py", line 345, in expect_list
    return exp.expect_loop(timeout)
  File "/usr/local/lib/python3.6/site-packages/pexpect/expect.py", line 105, in expect_loop
    return self.eof(e)
  File "/usr/local/lib/python3.6/site-packages/pexpect/expect.py", line 50, in eof
    raise EOF(msg)
pexpect.exceptions.EOF: End Of File (EOF). Exception style platform.
<pexpect.pty_spawn.spawn object at 0x7f2e80483fd0>
command: /home/lisa/images/arm/run.sh
args: [b'/home/lisa/images/arm/run.sh', b'/home/lisa/data/storage/3aad1d26-b450-4d79-adb7-68d632369bc5/rootfs']
buffer (last 100 chars): ''
before (last 100 chars): '/home/lisa/images/arm/run.sh: 10: /home/lisa/images/arm/run.sh: qemu-system-arm: not found\r\n'
after: <class 'pexpect.exceptions.EOF'>
match: None
match_index: None
exitstatus: 127
flag_eof: True
pid: 59
child_fd: 27
closed: False
timeout: 110
delimiter: <class 'pexpect.exceptions.EOF'>
logfile: <_io.TextIOWrapper name='/home/lisa/data/storage/3aad1d26-b450-4d79-adb7-68d632369bc5/machine.log' mode='w' encoding='utf-8'>
logfile_read: None
logfile_send: None
maxread: 2000
ignorecase: False
searchwindowsize: None
delaybeforesend: 0.05
delayafterclose: 0.1
delayafterterminate: 0.1
searcher: searcher_re:
    0: re.compile("login: ")

The same kind of error message appears for the other architectures. After installing the operating system, I ran the following commands to set up and run LiSa. I didn't do anything else afterward that could have interfered with the installation.

sudo apt-get install git
git clone https://github.com/danieluhricek/LiSa.git
cd LiSa
sudo apt-get install docker docker-compose
sudo docker-compose build
sudo docker-compose up

It's weird because it says that I don't have the QEMU image for mips, and the same goes for the other architectures. Reading your worker Docker image, I can see that it downloads a tar.gz with the images from https://github.com/danieluhricek/linux-images/archive/v1.0.1.tar.gz. I was able to download that file myself, so I am not sure why it is complaining about it.

I am fairly new to Docker and this framework, but I don't mind trying to find out what is going on. If you know anything about this issue, please let me know.

Thanks for your help.

How to use Lisa on remote server

Hi everyone,
I installed LiSa on a remote Ubuntu 16 server. I can access LiSa at the remote ip:4242, but when I submit some samples the submission gets stuck, and the POST goes to localhost:4242. I know some of the configs are wrong, but I don't know how to fix them. Could anyone please help me?

Got BrokenPipeError while analyzing binary file

Hi,
I just installed your LiSa sandbox and submitted a binary file (I randomly chose '/bin/git' as an example), but I got a BrokenPipeError.
The failure details are as follows:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/celery/app/trace.py", line 385, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/celery/app/trace.py", line 648, in __protected_call__
    return self.run(*args, **kwargs)
  File "/home/lisa/lisa/web_api/tasks.py", line 80, in full_analysis
    master.run()
  File "/home/lisa/lisa/analysis/top_level.py", line 69, in run
    sub_output = analyzer.run_analysis()
  File "/home/lisa/lisa/analysis/static_analysis.py", line 31, in run_analysis
    self._r2.cmd('aaa')
  File "/usr/local/lib/python3.6/site-packages/r2pipe/open_base.py", line 232, in cmd
    res = self._cmd(cmd, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/r2pipe/open_sync.py", line 109, in _cmd_process
    self.process.stdin.write((cmd + "\n").encode("utf8"))
BrokenPipeError: [Errno 32] Broken pipe

The docker-compose logs seem normal:

nginx_1     | 172.42.0.1 - - [08/May/2021:09:07:13 +0000] "GET /submit HTTP/1.1" 200 2120 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0" "-"
nginx_1     | 172.42.0.1 - - [08/May/2021:09:07:13 +0000] "GET /static/media/logo-dark.011f1691.png HTTP/1.1" 200 21974 "http://localhost:4242/submit" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0" "-"
nginx_1     | 172.42.0.1 - - [08/May/2021:09:07:13 +0000] "GET /favicon.ico HTTP/1.1" 200 1086 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0" "-"
nginx_1     | 2021/05/08 09:07:29 [warn] 22#22: *56 a client request body is buffered to a temporary file /var/cache/nginx/client_temp/0000000003, client: 172.42.0.1, server: lisa, request: "POST /api/tasks/create/file HTTP/1.1", host: "localhost:4242", referrer: "http://localhost:4242/submit"
rabbitmq_1  | 2021-05-08 09:07:29.331 [info] <0.1626.0> accepting AMQP connection <0.1626.0> (172.42.0.10:38714 -> 172.42.0.13:5672)
api_1       | 2021-05-08 09:07:29 15:connection [DEBUG] - Start from server, version: 0.9, properties: {'capabilities': {'publisher_confirms': True, 'exchange_exchange_bindings': True, 'basic.nack': True, 'consumer_cancel_notify': True, 'connection.blocked': True, 'consumer_priorities': True, 'authentication_failure_close': True, 'per_consumer_qos': True, 'direct_reply_to': True}, 'cluster_name': 'rabbit@f4304eb61574', 'copyright': 'Copyright (c) 2007-2021 VMware, Inc. or its affiliates.', 'information': 'Licensed under the MPL 2.0. Website: https://rabbitmq.com', 'platform': 'Erlang/OTP 23.3.3', 'product': 'RabbitMQ', 'version': '3.8.16'}, mechanisms: [b'AMQPLAIN', b'PLAIN'], locales: ['en_US']
rabbitmq_1  | 2021-05-08 09:07:29.333 [info] <0.1626.0> connection <0.1626.0> (172.42.0.10:38714 -> 172.42.0.13:5672): user 'lisa' authenticated and granted access to vhost '/'
api_1       | 2021-05-08 09:07:29 15:channel [DEBUG] - using channel_id: 1
api_1       | 2021-05-08 09:07:29 15:channel [DEBUG] - Channel open
api_1       | [pid: 15|app: 0|req: 5/18] 172.42.0.1 () {46 vars in 856 bytes} [Sat May  8 09:07:29 2021] POST /api/tasks/create/file => generated 51 bytes in 60 msecs (HTTP/1.1 200) 4 headers in 137 bytes (3 switches on core 0)
worker_1    | 2021-05-08 09:07:29 18:strategy [INFO] - Received task: lisa.web_api.tasks.full_analysis[4c8ff487-6156-4477-8a91-c8f29fac1be9]  
nginx_1     | 172.42.0.1 - - [08/May/2021:09:07:29 +0000] "POST /api/tasks/create/file HTTP/1.1" 200 51 "http://localhost:4242/submit" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0" "-"
nginx_1     | 172.42.0.1 - - [08/May/2021:09:07:40 +0000] "GET /api/tasks/failed HTTP/1.1" 200 14154 "http://localhost:4242/failed" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0" "-"
api_1       | [pid: 15|app: 0|req: 6/19] 172.42.0.1 () {40 vars in 581 bytes} [Sat May  8 09:07:40 2021] GET /api/tasks/failed => generated 14154 bytes in 5 msecs (HTTP/1.1 200) 3 headers in 106 bytes (1 switches on core 0)
rabbitmq_1  | 2021-05-08 09:10:29.351 [error] <0.1626.0> closing AMQP connection <0.1626.0> (172.42.0.10:38714 -> 172.42.0.13:5672):
rabbitmq_1  | missed heartbeats from client, timeout: 60s
rabbitmq_1  | 2021-05-08 09:10:29.352 [info] <0.1677.0> Closing all channels from connection '172.42.0.10:38714 -> 172.42.0.13:5672' because it has been closed

Is there anything I can do to fix it?
I'm using CentOS 8.3

Won't analyse files

I keep getting these errors, i.e. EOF, AttributeError, UnicodeDecodeError, Timeout, TypeError, etc. They all seem to be coming from files with an unknown architecture.

qemu-system-x86_64: not found

Hi,

The new version of Debian requires the qemu-system-x86 package to provide the qemu-system-x86_64 binary.

I got the following error:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/pexpect/spawnbase.py", line 150, in read_nonblocking
    s = os.read(self.child_fd, size)
OSError: [Errno 5] Input/output error

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/pexpect/expect.py", line 99, in expect_loop
    incoming = spawn.read_nonblocking(spawn.maxread, timeout)
  File "/usr/local/lib/python3.6/site-packages/pexpect/pty_spawn.py", line 465, in read_nonblocking
    return super(spawn, self).read_nonblocking(size)
  File "/usr/local/lib/python3.6/site-packages/pexpect/spawnbase.py", line 155, in read_nonblocking
    raise EOF('End Of File (EOF). Exception style platform.')
pexpect.exceptions.EOF: End Of File (EOF). Exception style platform.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/celery/app/trace.py", line 385, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/celery/app/trace.py", line 648, in __protected_call__
    return self.run(*args, **kwargs)
  File "/home/lisa/lisa/web_api/tasks.py", line 80, in full_analysis
    master.run()
  File "/home/lisa/lisa/analysis/top_level.py", line 69, in run
    sub_output = analyzer.run_analysis()
  File "/home/lisa/lisa/analysis/dynamic_analysis.py", line 36, in run_analysis
    self._vm.start_vm()
  File "/home/lisa/lisa/core/qemu_guest.py", line 100, in start_vm
    self._proc.expect('login: ')
  File "/usr/local/lib/python3.6/site-packages/pexpect/spawnbase.py", line 321, in expect
    timeout, searchwindowsize, async)
  File "/usr/local/lib/python3.6/site-packages/pexpect/spawnbase.py", line 345, in expect_list
    return exp.expect_loop(timeout)
  File "/usr/local/lib/python3.6/site-packages/pexpect/expect.py", line 105, in expect_loop
    return self.eof(e)
  File "/usr/local/lib/python3.6/site-packages/pexpect/expect.py", line 50, in eof
    raise EOF(msg)
pexpect.exceptions.EOF: End Of File (EOF). Exception style platform.
<pexpect.pty_spawn.spawn object at 0x7f28b30ade48>
command: /home/lisa/images/x86_64/run.sh
args: [b'/home/lisa/images/x86_64/run.sh', b'/home/lisa/data/storage/7cee9747-25ad-4dfa-a59b-83ef8190bd5b/rootfs']
buffer (last 100 chars): ''
before (last 100 chars): '/home/lisa/images/x86_64/run.sh: 8: /home/lisa/images/x86_64/run.sh: qemu-system-x86_64: not found\r\n'
after: <class 'pexpect.exceptions.EOF'>
match: None
match_index: None
exitstatus: 127
flag_eof: True
pid: 29
child_fd: 13
closed: False
timeout: 70
delimiter: <class 'pexpect.exceptions.EOF'>
logfile: <_io.TextIOWrapper name='/home/lisa/data/storage/7cee9747-25ad-4dfa-a59b-83ef8190bd5b/machine.log' mode='w' encoding='utf-8'>
logfile_read: None
logfile_send: None
maxread: 2000
ignorecase: False
searchwindowsize: None
delaybeforesend: 0.05
delayafterclose: 0.1
delayafterterminate: 0.1
searcher: searcher_re:
    0: re.compile("login: ")

access from other than localhost

First off, great work !!!
I have installed and have everything running fine on Ubuntu 18.04
I have modified the configuration so that the web interface works with the host's IP (not just localhost).
When accessing the web interface via the real IP from the host running LiSa, everything works.
When accessing the web interface from somewhere else on the network, the web page comes up, but the results page is empty, and submitting a file just spins and never works.

I have modified the DB to listen on both loopback and the machine's IP, but that doesn't help.

How can I use the web interface from a machine other than the one running LiSa?

Failed task

I could run LiSa, but when I submitted my file I got:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/celery/app/trace.py", line 450, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/celery/app/trace.py", line 731, in __protected_call__
    return self.run(*args, **kwargs)
  File "/home/lisa/lisa/web_api/tasks.py", line 80, in full_analysis
    master.run()
  File "/home/lisa/lisa/analysis/top_level.py", line 69, in run
    sub_output = analyzer.run_analysis()
  File "/home/lisa/lisa/analysis/static_analysis.py", line 34, in run_analysis
    self._r2_info()
  File "/home/lisa/lisa/analysis/static_analysis.py", line 62, in _r2_info
    'min_opsize': info['bin']['minopsz'],
KeyError: 'minopsz'

I have attached a screenshot of the failed task.

Can't run analysis on binaries (nothing shown)

Hello,

First of all, thank you for your great work.
I installed LiSa (using docker/docker-compose), ran it, and everything was fine (I just followed the tutorial on the wiki).
LiSa is running without a problem on Ubuntu 18.04 (I have access to the web interface).

When I upload a binary, everything is good so far and the binary ends up in the storage folder.

But when I go to the pending tasks, nothing is shown there (the same goes for the finished and failed tabs).
When I checked the logs (after a while), I saw the following error:

api_1 | [pid: 9|app: 0|req: 1/4] 172.42.0.1 () {40 vars in 589 bytes} [Fri Mar 13 12:36:51 2020] GET /api/tasks/failed => generated 3 bytes in 31 msecs (HTTP/1.1 200) 3 headers in 102 bytes (1 switches on core 0)
rabbitmq_1 | 2020-03-13 12:36:53.444 [info] <0.1155.0> accepting AMQP connection <0.1155.0> (172.42.0.10:46460 -> 172.42.0.13:5672)
api_1 | 2020-03-13 12:36:53 9:connection [DEBUG] - Start from server, version: 0.9, properties: {'capabilities': {'publisher_confirms': True, 'exchange_exchange_bindings': True, 'basic.nack': True, 'consumer_cancel_notify': True, 'connection.blocked': True, 'consumer_priorities': True, 'authentication_failure_close': True, 'per_consumer_qos': True, 'direct_reply_to': True}, 'cluster_name': 'rabbit@c066dd39d2b3', 'copyright': 'Copyright (c) 2007-2020 Pivotal Software, Inc.', 'information': 'Licensed under the MPL 1.1. Website: https://rabbitmq.com', 'platform': 'Erlang/OTP 22.2.8', 'product': 'RabbitMQ', 'version': '3.8.3'}, mechanisms: [b'PLAIN', b'AMQPLAIN'], locales: ['en_US']
api_1 | 2020-03-13 12:36:53 9:channel [DEBUG] - using channel_id: 1
rabbitmq_1 | 2020-03-13 12:36:53.449 [info] <0.1155.0> connection <0.1155.0> (172.42.0.10:46460 -> 172.42.0.13:5672): user 'lisa' authenticated and granted access to vhost '/'
api_1 | 2020-03-13 12:36:53 9:channel [DEBUG] - Channel open
rabbitmq_1 | 2020-03-13 12:36:53.472 [info] <0.1171.0> accepting AMQP connection <0.1171.0> (172.42.0.10:46462 -> 172.42.0.13:5672)
api_1 | 2020-03-13 12:36:53 9:connection [DEBUG] - Start from server, version: 0.9, properties: {'capabilities': {'publisher_confirms': True, 'exchange_exchange_bindings': True, 'basic.nack': True, 'consumer_cancel_notify': True, 'connection.blocked': True, 'consumer_priorities': True, 'authentication_failure_close': True, 'per_consumer_qos': True, 'direct_reply_to': True}, 'cluster_name': 'rabbit@c066dd39d2b3', 'copyright': 'Copyright (c) 2007-2020 Pivotal Software, Inc.', 'information': 'Licensed under the MPL 1.1. Website: https://rabbitmq.com', 'platform': 'Erlang/OTP 22.2.8', 'product': 'RabbitMQ', 'version': '3.8.3'}, mechanisms: [b'PLAIN', b'AMQPLAIN'], locales: ['en_US']
rabbitmq_1 | 2020-03-13 12:36:53.479 [info] <0.1171.0> connection <0.1171.0> (172.42.0.10:46462 -> 172.42.0.13:5672): user 'lisa' authenticated and granted access to vhost '/'
api_1 | 2020-03-13 12:36:53 9:channel [DEBUG] - using channel_id: 1
api_1 | 2020-03-13 12:36:53 9:channel [DEBUG] - Channel open
api_1 | [pid: 9|app: 0|req: 2/5] 172.42.0.1 () {40 vars in 592 bytes} [Fri Mar 13 12:36:53 2020] GET /api/tasks/pending => generated 5 bytes in 1098 msecs (HTTP/1.1 200) 3 headers in 102 bytes (1 switches on core 0)
nginx_1 | 172.42.0.1 - - [13/Mar/2020:12:36:54 +0000] "GET /api/tasks/pending HTTP/1.1" 200 5 "http://localhost:4242/pending" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
rabbitmq_1 | 2020-03-13 12:37:20.558 [error] <0.950.0> closing AMQP connection <0.950.0> (172.42.0.10:46426 -> 172.42.0.13:5672):
rabbitmq_1 | missed heartbeats from client, timeout: 60s
rabbitmq_1 | 2020-03-13 12:37:53.270 [error] <0.996.0> closing AMQP connection <0.996.0> (172.42.0.10:46434 -> 172.42.0.13:5672):
rabbitmq_1 | missed heartbeats from client, timeout: 60s
rabbitmq_1 | 2020-03-13 12:37:53.300 [error] <0.1013.0> closing AMQP connection <0.1013.0> (172.42.0.10:46436 -> 172.42.0.13:5672):
rabbitmq_1 | missed heartbeats from client, timeout: 60s
rabbitmq_1 | 2020-03-13 12:39:53.452 [error] <0.1155.0> closing AMQP connection <0.1155.0> (172.42.0.10:46460 -> 172.42.0.13:5672):
rabbitmq_1 | missed heartbeats from client, timeout: 60s
rabbitmq_1 | 2020-03-13 12:39:53.482 [error] <0.1171.0> closing AMQP connection <0.1171.0> (172.42.0.10:46462 -> 172.42.0.13:5672):
rabbitmq_1 | missed heartbeats from client, timeout: 60s
api_1 | [pid: 12|app: 0|req: 3/6] 172.42.0.1 () {40 vars in 587 bytes} [Fri Mar 13 12:40:26 2020] GET /api/tasks/finished => generated 3 bytes in 3 msecs (HTTP/1.1 200) 3 headers in 102 bytes (1 switches on core 0)
nginx_1 | 172.42.0.1 - - [13/Mar/2020:12:40:26 +0000] "GET /api/tasks/finished HTTP/1.1" 200 3 "http://localhost:4242/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
api_1 | [pid: 12|app: 0|req: 4/7] 172.42.0.1 () {40 vars in 589 bytes} [Fri Mar 13 12:41:52 2020] GET /api/tasks/failed => generated 3 bytes in 2 msecs (HTTP/1.1 200) 3 headers in 102 bytes (1 switches on core 0)
nginx_1 | 172.42.0.1 - - [13/Mar/2020:12:41:52 +0000] "GET /api/tasks/failed HTTP/1.1" 200 3 "http://localhost:4242/failed" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
api_1 | 2020-03-13 12:41:53 13:app [ERROR] - Exception on /api/tasks/pending [GET]
api_1 | Traceback (most recent call last):
api_1 | File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 2292, in wsgi_app
api_1 | response = self.full_dispatch_request()
api_1 | File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1815, in full_dispatch_request
api_1 | rv = self.handle_user_exception(e)
api_1 | File "/usr/local/lib/python3.6/site-packages/flask_cors/extension.py", line 161, in wrapped_function
api_1 | return cors_after_request(app.make_response(f(*args, **kwargs)))
api_1 | File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1718, in handle_user_exception
api_1 | reraise(exc_type, exc_value, tb)
api_1 | File "/usr/local/lib/python3.6/site-packages/flask/_compat.py", line 35, in reraise
api_1 | raise value
api_1 | File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1813, in full_dispatch_request
api_1 | rv = self.dispatch_request()
api_1 | File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1799, in dispatch_request
api_1 | return self.view_functions[rule.endpoint](**req.view_args)
api_1 | File "./lisa/web_api/routes.py", line 116, in list_pending_tasks
api_1 | pending = i.reserved()
api_1 | File "/usr/local/lib/python3.6/site-packages/celery/app/control.py", line 125, in reserved
api_1 | return self._request('reserved')
api_1 | File "/usr/local/lib/python3.6/site-packages/celery/app/control.py", line 106, in _request
api_1 | pattern=self.pattern, matcher=self.matcher,
api_1 | File "/usr/local/lib/python3.6/site-packages/celery/app/control.py", line 477, in broadcast
api_1 | limit, callback, channel=channel,
api_1 | File "/usr/local/lib/python3.6/site-packages/kombu/pidbox.py", line 346, in _broadcast
api_1 | matcher=matcher)
api_1 | File "/usr/local/lib/python3.6/site-packages/kombu/pidbox.py", line 302, in _publish
api_1 | maybe_declare(self.reply_queue(channel))
api_1 | File "/usr/local/lib/python3.6/site-packages/kombu/common.py", line 121, in maybe_declare
api_1 | return _maybe_declare(entity, channel)
api_1 | File "/usr/local/lib/python3.6/site-packages/kombu/common.py", line 161, in _maybe_declare
api_1 | entity.declare(channel=channel)
api_1 | File "/usr/local/lib/python3.6/site-packages/kombu/entity.py", line 608, in declare
api_1 | self._create_exchange(nowait=nowait, channel=channel)
api_1 | File "/usr/local/lib/python3.6/site-packages/kombu/entity.py", line 615, in _create_exchange
api_1 | self.exchange.declare(nowait=nowait, channel=channel)
api_1 | File "/usr/local/lib/python3.6/site-packages/kombu/entity.py", line 186, in declare
api_1 | nowait=nowait, passive=passive,
api_1 | File "/usr/local/lib/python3.6/site-packages/amqp/channel.py", line 614, in exchange_declare
api_1 | wait=None if nowait else spec.Exchange.DeclareOk,
api_1 | File "/usr/local/lib/python3.6/site-packages/amqp/abstract_channel.py", line 59, in send_method
api_1 | conn.frame_writer(1, self.channel_id, sig, args, content)
api_1 | File "/usr/local/lib/python3.6/site-packages/amqp/method_framing.py", line 172, in write_frame
api_1 | write(view[:offset])
api_1 | File "/usr/local/lib/python3.6/site-packages/amqp/transport.py", line 284, in write
api_1 | self._write(s)
api_1 | ConnectionResetError: [Errno 104] Connection reset by peer
api_1 | [pid: 13|app: 0|req: 2/8] 172.42.0.1 () {40 vars in 592 bytes} [Fri Mar 13 12:41:53 2020]
mariadb_1 | 2020-03-13 12:46:51 9 [Warning] Aborted connection 9 to db: 'lisadb' user: 'lisa' host: '172.42.0.10' (Got timeout reading communication packets)

And also, after a while, nothing is shown.
Can you help me fix this issue?
Thanks

No route to host

When running docker-compose up, the following error appears:

worker_1    | 2021-08-09 10:52:00 18:consumer [ERROR] - consumer: Cannot connect to amqp://lisa:**@172.42.0.13:5672//: [Errno 113] No route to host.
worker_1    | Trying again in 4.00 seconds... (2/100)

The web interface is running, but when I try to upload a file, the upload fails.

MaxMind GeoLite database links aren't accessible

Hi,

So when I try to install LiSa, the Docker build for the 'worker' service contains these wget links:

    && echo "Downloading MaxMind GeoLite databases ..." \
    && wget https://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz -q -O - | tar xz -C data/geolite2databases \
    && wget https://geolite.maxmind.com/download/geoip/database/GeoLite2-ASN.tar.gz -q -O - | tar xz -C data/geolite2databases \

which are no longer accessible and, under MaxMind's new policy, require a registered account to download. This needs a small fix and things will work as smoothly as before.

Docker build fails

When running docker-compose build and docker-compose run, an error occurs:

tar: Child returned status 1
tar: Error is not recoverable: exiting now
ERROR: Service 'worker' failed to build: The command '/bin/sh -c pip install -r requirements.txt     && iprange -j data/blacklists/* > data/ipblacklist     && echo "Downloading MaxMind GeoLite databases ..."     && wget https://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz -q -O - | tar xz -C data/geolite2databases     && wget https://geolite.maxmind.com/download/geoip/database/GeoLite2-ASN.tar.gz -q -O - | tar xz -C data/geolite2databases     && mv $(find ./data -name GeoLite2-City.mmdb) ./data/geolite2databases     && mv $(find ./data -name GeoLite2-ASN.mmdb) ./data/geolite2databases     && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false     git     gcc     g++     make     patch     && rm -rf /var/lib/apt/lists/*     && rm -rf /radare2/.git' returned a non-zero code: 2

Seems like the whole process for the GeoLite database has been changed. You need to sign up and then retrieve the files. Any chance to fix this?

sound card detect problem

After a clean install of LiSa on a VM with Ubuntu 20.04, and after fixing errors with the celery package, minopsz, maxopsz, and the owner of the data/storage/ folder in the api container, there is an EOF error. The failed report contains information about an undetected sound card and a wrong size for one of the images (60 MB -> 64 MB).

ALSA lib confmisc.c:767:(parse_card) cannot find card '0'
ALSA lib conf.c:4745:(_snd_config_evaluate) function snd_func_card_driver returned error: No such file or directory
ALSA lib confmisc.c:392:(snd_func_concat) error evaluating strings
ALSA lib conf.c:4745:(_snd_config_evaluate) function snd_func_concat returned error: No such file or directory
ALSA lib confmisc.c:1246:(snd_func_refer) error evaluating name
ALSA lib conf.c:4745:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:5233:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2660:(snd_pcm_open_noupdate) Unknown PCM default
alsa: Could not initialize DAC
alsa: Failed to open `default':
alsa: Reason: No such file or directory
audio: Failed to create voice `lm4549.out'
qemu-system-arm: Invalid SD card size: 60 MiB
SD card size has to be a power of 2, e.g. 64 MiB.
You can resize disk images with 'qemu-img resize <imagefile> <new-size>'
(note that this will lose data if you make the image smaller than it currently is).

Could you please help me with fixing this?
