secure-compliance-solutions-llc / gvm-docker

Greenbone Vulnerability Management Docker Image with OpenVAS

Home Page: https://securecompliance.gitbook.io/projects/

License: MIT License

Dockerfile 1.28% Shell 14.50% XSLT 83.22% Makefile 1.00%
openvas docker-container docker-image scanning nvts gvm vulnerability-scanners vulnerabilities vulnerability vulnerability-scanning

gvm-docker's Introduction


End of Life - Repository Deprecated

Important Notice: This repository is no longer actively maintained or supported. No further issues or pull requests will be considered or approved. The content provided here is for historical reference only.

Greenbone Community Containers

The Greenbone community has released the Greenbone Community Containers. v22.4 introduced several major changes, including the MQTT broker and the Notus scanner. That project doesn't necessarily work the same way as this one, and it doesn't support remote scanners, but we strongly recommend using the most recent version of GVM instead of this project.

Thank You

Thank you to our contributors and the Greenbone community. Your dedication, feedback, and contributions have been invaluable.

- SCS



Greenbone Vulnerability Management with OpenVAS

This setup is based on Greenbone Vulnerability Management and OpenVAS. We have made changes to improve stability and functionality.

If you want to send GVM/OpenVAS results to Elasticsearch, try our GVM Logstash project.

Documentation

Quick Start

  • All -data images are now fully pre-initialized (with the data available at build time)

Pre-initialized (-data) images have a default web UI password, adminpassword, which should be changed after deployment. Postgres also has a default password: none

GitHub Container Registry

docker pull ghcr.io/secure-compliance-solutions-llc/gvm-docker:debian-master-data-full
docker pull ghcr.io/secure-compliance-solutions-llc/gvm-docker:debian-master-data
docker pull ghcr.io/secure-compliance-solutions-llc/gvm-docker:debian-master-full
docker pull ghcr.io/secure-compliance-solutions-llc/gvm-docker:debian-master

Docker Hub

NOTE: Please do not use docker pull securecompliance/gvm:latest

docker pull securecompliance/gvm:debian-master-data-full
docker pull securecompliance/gvm:debian-master-data
docker pull securecompliance/gvm:debian-master-full
docker pull securecompliance/gvm:debian-master
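
As a quick-start sketch, a minimal docker-compose file could look like the following. This is not an official file from this repo: the host port 8080 and the /data volume follow the conventions used in the issues further down this page; adjust them for your environment.

```yaml
version: "3.5"
services:
  gvm:
    image: securecompliance/gvm:debian-master-data-full
    restart: always
    ports:
      - "8080:9392"          # GSA web UI (HTTPS) on host port 8080
    volumes:
      - ./storage/data:/data # persist feeds, tasks, and reports
```

Then log in with admin / adminpassword and change the password immediately.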

Estimated Hardware Requirements

| Hosts | CPU Cores | Memory | Disk Space |
| --- | --- | --- | --- |
| 512 active IPs | 4 @ 2 GHz | 8 GB RAM | 30 GB |
| 2,500 active IPs | 6 @ 2 GHz | 12 GB RAM | 60 GB |
| 10,000 active IPs | 8 @ 3 GHz | 16 GB RAM | 250 GB |
| 25,000 active IPs | 16 @ 3 GHz | 32 GB RAM | 1 TB |
| 100,000 active IPs | 32 @ 3 GHz | 64 GB RAM | 2 TB |

Architecture

The key points to take away from the diagram below are the way our setup establishes a connection with the remote sensor, and the available ports on the GVM-Docker container. You can still use any add-on tools you've used in the past with OpenVAS on port 9390. One of the latest upgrades allows you to connect directly to Postgres using your favorite database tool.

GVM Container Architecture

gvm-docker's People

Contributors

ajacoder, ajcoll5, austinsonger, ciscoqid, dexus, disarmm, everping, grantemsley, hanasuke, hardzen, johnjore, korzorro, masaya-a, miyoyo, netbix, nimasaed, pixelsquared, rakanskiy, steevi, tigattack


gvm-docker's Issues

Problem logging in on the web front end

Hello,
I downloaded the docker image on 31 May, and I think I installed it correctly with the command:
docker run --detach --publish 8080:9392 -e PASSWORD="My-Random-Password-Lenny" --volume gvm-data:/data --name gvm securecompliance/gvm

The log says okay, I think...
`2020-05-31 10:47:16.698 UTC [25] LOG: listening on IPv4 address "127.0.0.1", port 5432
2020-05-31 10:47:16.698 UTC [25] LOG: could not bind IPv6 address "::1": Cannot assign requested address
2020-05-31 10:47:16.698 UTC [25] HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
2020-05-31 10:47:16.704 UTC [25] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2020-05-31 10:47:16.750 UTC [26] LOG: database system was interrupted while in recovery at 2020-05-31 10:34:21 UTC
2020-05-31 10:47:16.750 UTC [26] HINT: This probably means that some data is corrupted and you will have to use the last backup for recovery.
2020-05-31 10:47:16.953 UTC [26] LOG: database system was not properly shut down; automatic recovery in progress
2020-05-31 10:47:16.957 UTC [26] LOG: redo starts at 0/32053EF0
2020-05-31 10:47:19.012 UTC [26] FATAL: could not extend file "base/16385/17842": wrote only 4096 of 8192 bytes at block 337
2020-05-31 10:47:19.012 UTC [26] HINT: Check free disk space.
2020-05-31 10:47:19.012 UTC [26] CONTEXT: WAL redo at 0/37727580 for Heap/INSERT+INIT: off 1
2020-05-31 10:47:19.016 UTC [25] LOG: startup process (PID 26) exited with exit code 1
2020-05-31 10:47:19.016 UTC [25] LOG: aborting startup due to startup process failure
2020-05-31 10:47:19.024 UTC [25] LOG: database system is shut down
pg_ctl: could not start server
Examine the log output.
9:C 31 May 10:52:32.074 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
9:C 31 May 10:52:32.074 # Redis version=4.0.9, bits=64, commit=00000000, modified=0, pid=9, just started
9:C 31 May 10:52:32.074 # Configuration loaded
Wait for redis socket to be created...
Testing redis status...
Redis ready.
Fixing Database folder...
Starting PostgreSQL...
Updating NVTs...
Updating CERT data...
Updating SCAP data...
Starting Open Scanner Protocol daemon for OpenVAS...
Starting Greenbone Vulnerability Manager...
admin
Starting Greenbone Security Assistant...
++++++++++++++++++++++++++++++++++++++++++++++
+ Your GVM 11 container is now ready to use! +
++++++++++++++++++++++++++++++++++++++++++++++
`

But the problem is that I can't log in with admin:admin at http://IP-ADR:8080
This might be the problem: I used the default username admin and password admin:

`==> /usr/local/var/log/gvm/gsad.log <==
gsad gmp:WARNING:2020-05-31 11h05.36 utc:201: Authentication failure for 'admin' from 192.168.150.21. Status was 2.

==> /usr/local/var/log/gvm/gvmd.log <==
md gmp:WARNING:2020-05-31 11h05.36 utc:687: Authentication failure for 'admin' from unix_socket
`

EDIT: Also, Chromium and Firefox in the latest Kali do not work.

EDIT: I receive: "GMP error during authentication"
Log:
`==> /usr/local/var/log/gvm/gsad.log <==
gsad gmp:WARNING:2020-05-31 11h54.06 utc:245: Authentication failure for 'admin' from 192.168.150.21. Status was -1.

==> /usr/local/var/log/gvm/gvmd.log <==
md manage:WARNING:2020-05-31 11h54.06 utc:1068: sql_open: PQconnectStart to 'gvmd' failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
md manage:WARNING:2020-05-31 11h54.06 utc:1068: init_manage_process: sql_open failed
md manage:WARNING:2020-05-31 11h54.16 utc:161: sql_exec_internal: PQexec failed: (7)
md manage:WARNING:2020-05-31 11h54.16 utc:161: sql_exec_internal: SQL: BEGIN;
md manage:WARNING:2020-05-31 11h54.16 utc:161: sqlv: sql_exec_internal failed
md manage:WARNING:2020-05-31 11h54.16 utc:161: manage_schedule: manage_update_nvti_cache error (Perhaps the db went down?)
md manage:WARNING:2020-05-31 11h54.16 utc:1071: sql_open: PQconnectStart to 'gvmd' failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
md manage:WARNING:2020-05-31 11h54.16 utc:1071: init_manage_process: sql_open failed
md manage:WARNING:2020-05-31 11h54.16 utc:1072: sql_open: PQconnectStart to 'gvmd' failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
md manage:WARNING:2020-05-31 11h54.16 utc:1072: init_manage_process: sql_open failed
md manage:WARNING:2020-05-31 11h54.16 utc:1073: sql_open: PQconnectStart to 'gvmd' failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
md manage:WARNING:2020-05-31 11h54.16 utc:1073: init_manage_process: sql_open failed
md manage:WARNING:2020-05-31 11h54.26 utc:161: sql_exec_internal: PQexec failed: (7)
md manage:WARNING:2020-05-31 11h54.26 utc:161: sql_exec_internal: SQL: BEGIN;
md manage:WARNING:2020-05-31 11h54.26 utc:161: sqlv: sql_exec_internal failed
md manage:WARNING:2020-05-31 11h54.26 utc:161: manage_schedule: manage_update_nvti_cache error (Perhaps the db went down?)
md manage:WARNING:2020-05-31 11h54.26 utc:1076: sql_open: PQconnectStart to 'gvmd' failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
md manage:WARNING:2020-05-31 11h54.26 utc:1076: init_manage_process: sql_open failed
md manage:WARNING:2020-05-31 11h54.26 utc:1075: sql_open: PQconnectStart to 'gvmd' failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
md manage:WARNING:2020-05-31 11h54.26 utc:1075: init_manage_process: sql_open failed
md manage:WARNING:2020-05-31 11h54.26 utc:1077: sql_open: PQconnectStart to 'gvmd' failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
md manage:WARNING:2020-05-31 11h54.26 utc:1077: init_manage_process: sql_open failed
md manage:WARNING:2020-05-31 11h54.36 utc:161: sql_exec_internal: PQexec failed: (7)
md manage:WARNING:2020-05-31 11h54.36 utc:161: sql_exec_internal: SQL: BEGIN;
md manage:WARNING:2020-05-31 11h54.36 utc:161: sqlv: sql_exec_internal failed
md manage:WARNING:2020-05-31 11h54.36 utc:161: manage_schedule: manage_update_nvti_cache error (Perhaps the db went down?)
md manage:WARNING:2020-05-31 11h54.36 utc:1079: sql_open: PQconnectStart to 'gvmd' failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
md manage:WARNING:2020-05-31 11h54.36 utc:1079: init_manage_process: sql_open failed
md manage:WARNING:2020-05-31 11h54.36 utc:1080: sql_open: PQconnectStart to 'gvmd' failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
md manage:WARNING:2020-05-31 11h54.36 utc:1080: init_manage_process: sql_open failed
md manage:WARNING:2020-05-31 11h54.36 utc:1081: sql_open: PQconnectStart to 'gvmd' failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
md manage:WARNING:2020-05-31 11h54.36 utc:1081: init_manage_process: sql_open failed
md manage:WARNING:2020-05-31 11h54.46 utc:161: sql_exec_internal: PQexec failed: (7)
md manage:WARNING:2020-05-31 11h54.46 utc:161: sql_exec_internal: SQL: BEGIN;
md manage:WARNING:2020-05-31 11h54.46 utc:161: sqlv: sql_exec_internal failed
md manage:WARNING:2020-05-31 11h54.46 utc:161: manage_schedule: manage_update_nvti_cache error (Perhaps the db went down?)
md manage:WARNING:2020-05-31 11h54.46 utc:1083: sql_open: PQconnectStart to 'gvmd' failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
md manage:WARNING:2020-05-31 11h54.46 utc:1083: init_manage_process: sql_open failed
md manage:WARNING:2020-05-31 11h54.46 utc:1084: sql_open: PQconnectStart to 'gvmd' failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
md manage:WARNING:2020-05-31 11h54.46 utc:1084: init_manage_process: sql_open failed
md manage:WARNING:2020-05-31 11h54.46 utc:1085: sql_open: PQconnectStart to 'gvmd' failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
md manage:WARNING:2020-05-31 11h54.46 utc:1085: init_manage_process: sql_open failed
root@0xLenny:~#
`

scanner container credentials issue

Describe the bug
The public key is not generated when creating a new scanner from securecompliance/gvm:scanner:
/data/ssh/key.pub is missing.

Results from docker logs:

gvmscan | +++++++++++++++++++++++++++++++++++++++++++++++++++++++
gvmscan | + Your OpenVAS Scanner container is now ready to use! +
gvmscan | +++++++++++++++++++++++++++++++++++++++++++++++++++++++
gvmscan |
gvmscan | -------------------------------------------------------
gvmscan | Scanner id: ---id removed ---
gvmscan | cat: /data/ssh/key.pub: No such file or directory
gvmscan | Public key:
gvmscan | Master host key (Check that it matches the public key from the master):
gvmscan | -------------------------------------------------------

To Reproduce
Steps to reproduce the behavior:

  1. Create docker-compose.yml (docker-compose.yml.txt)
  2. docker-compose up -d
  3. After it finishes, look for the public key in the logs
  4. The key.pub file is missing, as shown above

Expected behavior
Find the public key to copy to the GVM 11 master, or have a command to create the public key afterwards.


Unable to bind the application into Nginx Reverse Proxy - 502 Bad Gateway

Describe the bug

I am able to bring the application up and running on port 8080 (https://localhost:8080).
But I am using an Nginx reverse proxy docker container along with the GVM docker container. I have certificates enabled and working through the Nginx reverse proxy container.
Ports 80 & 443 are open in the Nginx reverse proxy container.

To Reproduce
Steps to reproduce the behavior:

GVM Docker-compose.yml file:

gvm:
  image: securecompliance/gvm:11.0.1-r2
  restart: always
  volumes:
    - /etc/localtime:/etc/localtime:ro
    - /data/gvm-11:/data
  ports:
    - 8080:9392
  environment:
    PASSWORD: xxxxxx
    VIRTUAL_HOST: gvm.company.com

Nginx container:

nginx-proxy:
  image: jwilder/nginx-proxy
  container_name: nginx-proxy
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - /etc/ssl:/etc/ssl:ro
    - ./nginx-certs:/etc/nginx/certs
    - ./nginx-dhparam:/etc/nginx/dhparam
    - ./nginx-conf.d:/etc/nginx/conf.d
    - ./nginx-vhost.d:/etc/nginx/vhost.d:ro

First bring up the Nginx proxy container, then bring up the GVM container.

Expected behavior
When I access https://gvm.company.com/ it should bring up the GVM application.
Instead, I receive a "502 Bad Gateway" error.
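
A possible direction, sketched from how jwilder/nginx-proxy routes traffic (not a confirmed fix for this report): by default it proxies to container port 80 over HTTP, while GSA here listens on 9392 over HTTPS, so VIRTUAL_PORT and VIRTUAL_PROTO likely need to be set on the gvm service:

```yaml
gvm:
  image: securecompliance/gvm:11.0.1-r2
  restart: always
  ports:
    - 8080:9392
  environment:
    PASSWORD: xxxxxx
    VIRTUAL_HOST: gvm.company.com
    VIRTUAL_PORT: "9392"   # container port GSA listens on
    VIRTUAL_PROTO: https   # GSA serves HTTPS on that port
```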

Scan container

Hi,

is it possible to have a separate scanner-only container which I could add as an external scanner?

Or is it better to deploy multiple containers with all the roles?

Maybe the certificate management is problematic.

Running behind nginx proxy

I am running containers behind an Nginx-Proxy container, which automagically proxies to ports 443 or 80.
How can I change the container's default port 9392 to another one?
Maybe it's possible to expose port 443 or 80 so the Nginx Proxy container can proxy connections to the container.

Add no update option (NVTs)

Hi there,

in my experimenting phase I have the problem that every start of the container results in an NVT update (which takes quite some time).

At the moment I'm even locked out:
gvm | rsync: failed to connect to feed.community.greenbone.net (45.135.106.142): Connection refused (111)
gvm | rsync: failed to connect to feed.community.greenbone.net (2a0e:6b40:20:106:20c:29ff:fe67:cbb5): Cannot assign requested address (99)
gvm | rsync error: error in socket IO (code 10) at clientserver.c(127) [Receiver=3.1.3]

So my suggestion would be that the container continues even if the update fails, and that either an env variable can be set to skip the update, or there is a feature to update only if the last update was more than x days ago (persist the last update timestamp, and maybe have an env variable to set x).
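
The suggested guard could be sketched like this (all names here, the stamp file, SKIP_NVT_UPDATE, UPDATE_MAX_AGE_DAYS, and the commented-out sync command, are hypothetical, not existing options of this image):

```shell
#!/bin/sh
# Sketch of the suggested behavior (names are hypothetical, not existing
# options of this image): only sync feeds when the recorded last update is
# older than a cutoff, and keep starting up even if the sync fails.
STAMP_FILE="${STAMP_FILE:-/tmp/last-nvt-update}"
UPDATE_MAX_AGE_DAYS="${UPDATE_MAX_AGE_DAYS:-1}"

needs_update() {
  # No stamp recorded yet: an update is needed.
  [ -f "$STAMP_FILE" ] || return 0
  # Otherwise compare the stamp's age (in whole days) against the cutoff.
  age_days=$(( ( $(date +%s) - $(cat "$STAMP_FILE") ) / 86400 ))
  [ "$age_days" -ge "$UPDATE_MAX_AGE_DAYS" ]
}

sync_feeds() {
  if [ "${SKIP_NVT_UPDATE:-false}" = "true" ]; then
    echo "NVT update disabled by SKIP_NVT_UPDATE"
  elif needs_update; then
    # greenbone-nvt-sync || echo "feed sync failed, continuing anyway"
    date +%s > "$STAMP_FILE"
    echo "NVT update performed"
  else
    echo "NVT data fresh, skipping update"
  fi
}
```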

start.sh should remove the ospd.pid file before starting

After restarting the container the following message is shown:

manage_update_nvt_cache_osp: failed to connect to /tmp/ospd.sock

This message is caused by a leftover PID file, which prevents the ospd-openvas process from starting. To fix this, /var/run/ospd.pid should be removed.

Cheers,
Marcel.
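
The proposed start.sh fix could be sketched as follows (the function name is ours; the /var/run/ospd.pid path is the one reported above, and the commented launch line is illustrative, not the image's actual start command):

```shell
#!/bin/sh
# Sketch: remove a PID file left behind by an unclean shutdown so
# ospd-openvas can start again after a container restart.
PIDFILE="${PIDFILE:-/var/run/ospd.pid}"

clean_stale_pidfile() {
  if [ -f "$PIDFILE" ]; then
    echo "Removing stale PID file $PIDFILE"
    rm -f "$PIDFILE"
  fi
}

# In start.sh this would run just before launching the scanner, e.g.:
#   clean_stale_pidfile
#   ospd-openvas --pid-file "$PIDFILE" --unix-socket /tmp/ospd.sock
```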

No redis DB available

Hi,

I've installed gvm with the commands below:

docker run --detach --publish 8080:9392 -e PASSWORD="myPass" --volume gvm-data:/data --name gvm securecompliance/gvm
docker exec -it gvm bash -c "/reportFix.sh"
(twice, for both)

But now, during a scan, when I check the logs I get tons of:
lib kb:CRITICAL:2020-06-01 09h18.13 utc:680378: No redis DB available

'secret key' used for encrypting credentials lost on recreating image

I'm running the container on Docker CE 19.03.8 via docker-compose, on an Ubuntu 18.04 server.
Having got it running and run some scans, all works fine, and I think stopping/starting/restarting the container is fine.
But if I use docker-compose down/up to recreate the container, I can't start scan tasks afterwards. They move to the 'Requested' state but never start and can't be cancelled. Looking in the logs, I see (for instance):

==> /usr/local/var/log/gvm/gvmd.log <==
event task:MESSAGE:2020-05-05 09h37.39 UTC:603: Status of task Scan domain controllers (79d3fa36-c325-4c4e-97c9-0045373da405) has changed to Requested
event task:MESSAGE:2020-05-05 09h37.39 UTC:603: Task Scan domain controllers (79d3fa36-c325-4c4e-97c9-0045373da405) has been requested to start by admin
util gpgme:WARNING:2020-05-05 09h37.39 UTC:606: error decrypting credential: No secret key
util gpgme: INFO:2020-05-05 09h37.39 UTC:606: encrypted to keyid B8174B146B24, algo=1: No secret key

If I restart the container (to stop the stalled task) and then go back through and replace the passwords in all of the stored credentials (which generates similar warnings as above and then succeeds) then the task will start.

Presumably re-creating the image has lost an encryption key used to secure the credentials, which is stored outside the /data location? Is there a way to preserve this through upgrades/recreations of the image? Or have I missed a step or a setting when creating my docker-compose file:

version: "3.5"
services:
  gvm:
    container_name: gvm
    image: securecompliance/gvm
    restart: always
    env_file:
      - ./gvm.env
    ports:
      - "8080:9392"
    volumes:
      - ./storage/data:/data

OpenVAS Scanner not available

Hello,

The OpenVAS scanner is not working.
Here the logs (from /usr/local/var/log/gvm/gvmd.log):

md manage:WARNING:2019-12-16 10h46.41 UTC:1543: Could not connect to Scanner at /tmp/ospd.sock
md manage:WARNING:2019-12-16 10h46.51 utc:1548: manage_update_nvt_cache_osp: failed to connect to /tmp/ospd.sock
md manage:WARNING:2019-12-16 10h47.02 utc:1559: manage_update_nvt_cache_osp: failed to connect to /tmp/ospd.sock
md manage:WARNING:2019-12-16 10h47.17 utc:1580: manage_update_nvt_cache_osp: failed to connect to /tmp/ospd.sock
md manage:WARNING:2019-12-16 10h47.32 utc:1593: manage_update_nvt_cache_osp: failed to connect to /tmp/ospd.sock

And here the error from ospd-openvas

root@47cbc8f24bae:/# ospd-openvas
Traceback (most recent call last):
  File "/usr/local/bin/ospd-openvas", line 11, in <module>
    load_entry_point('ospd-openvas==1.0.0', 'console_scripts', 'ospd-openvas')()
  File "/usr/local/lib/python3.6/dist-packages/ospd_openvas-1.0.0-py3.6.egg/ospd_openvas/daemon.py", line 1454, in main
  File "/usr/local/lib/python3.6/dist-packages/ospd-2.0.0-py3.6.egg/ospd/main.py", line 122, in main
  File "/usr/local/lib/python3.6/dist-packages/ospd-2.0.0-py3.6.egg/ospd/main.py", line 103, in init_logging
FileNotFoundError: [Errno 2] No such file or directory

Any advice?

Thanks,
Antonio.

scan credential issue

It looks like there is a problem with creating SSH keys for credentialed scans.
When creating an SSH key using "Configuration -> Credentials -> New -> Auto-generate", there is an "Internal error" message.

This is caused by the lack of "ssh-keygen" inside the container.

Installing openssh-client package resolves this issue.
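
Until the image includes the package, a workaround sketch is to bake it into a derived image (the base tag here is an assumption; pick the variant you actually run):

```dockerfile
# Derived image adding ssh-keygen so "Auto-generate" credentials work.
FROM securecompliance/gvm:debian-master-data-full
RUN apt-get update \
    && apt-get install -y --no-install-recommends openssh-client \
    && rm -rf /var/lib/apt/lists/*
```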

(The other originally reported issue could not be reproduced anymore)

Outdated Reports

I'm seeing the below in the summary of newly scanned reports:

Report outdated Scan Engine / Environment (local)

Question: Persistence

What folders/paths do I need to mount if I want to make this deployment persistent?

Thanks!

sql error in latest pull for cert_bund_advs table

Describe the bug
SQL error generated after restarting a freshly pulled image

To Reproduce
Run docker logs gvm; the output contains:

md manage:WARNING:2020-05-19 21h35.36 utc:409: sql_exec_internal: PQexec failed: ERROR: relation "cert_bund_advs" does not exist
LINE 1: SELECT EXISTS (SELECT * FROM cert_bund_advs WHERE creation_...
^
(7)
md manage:WARNING:2020-05-19 21h35.36 utc:409: sql_exec_internal: SQL: SELECT EXISTS (SELECT * FROM cert_bund_advs WHERE creation_time > coalesce (CAST ((SELECT value FROM meta WHERE name = 'cert_check_time') AS INTEGER), 0));
md manage:WARNING:2020-05-19 21h35.36 utc:409: sql_x_internal: sql_exec_internal failed

Expected behavior
not that :)

Screenshots

Additional context
docker.io/securecompliance/gvm latest 409e0ced656e 8 days ago 1.84 GB

Scan completed but no result displayed

Hi,

We downloaded the latest docker image with this commit and performed a scan on our host target.

While waiting for GVM SecInfo to fully load, we came across the error below, but were still able to perform a scan, so we proceeded to scan the target until it completed:

md manage:WARNING:2020-05-22 06h40.13 utc:4489: sql_open: PQerrorMessage (conn): FATAL: could not open relation mapping file "global/pg_filenode.map": Too many open files in system
md manage:WARNING:2020-05-22 06h40.13 utc:4489: init_manage_process: sql_open failed
md manage:WARNING:2020-05-22 06h40.13 utc:4492: sql_open: PQconnectPoll failed
md manage:WARNING:2020-05-22 06h40.13 utc:4492: sql_open: PQerrorMessage (conn): FATAL: could not open relation mapping file "global/pg_filenode.map": Too many open files in system
md manage:WARNING:2020-05-22 06h40.13 utc:4492: init_manage_process: sql_open failed
md manage:WARNING:2020-05-22 06h40.16 utc:4494: sql_open: PQconnectPoll failed
md manage:WARNING:2020-05-22 06h40.16 utc:4494: sql_open: PQerrorMessage (conn): FATAL: could not open relation mapping file "global/pg_filenode.map": Too many open files in system
md manage:WARNING:2020-05-22 06h40.16 utc:4494: init_manage_process: sql_open failed
md manage:WARNING:2020-05-22 06h40.19 utc:4497: sql_open: PQconnectPoll failed
md manage:WARNING:2020-05-22 06h40.19 utc:4497: sql_open: PQerrorMessage (conn): FATAL: could not open relation mapping file "global/pg_filenode.map": Too many open files in system
md manage:WARNING:2020-05-22 06h40.19 utc:4497: init_manage_process: sql_open failed
md manage:WARNING:2020-05-22 06h40.22 utc:4502: sql_open: PQconnectPoll failed
md manage:WARNING:2020-05-22 06h40.22 utc:4502: sql_open: PQerrorMessage (conn): FATAL: could not open relation mapping file "global/pg_filenode.map": Too many open files in system
md manage:WARNING:2020-05-22 06h40.22 utc:4502: init_manage_process: sql_open failed
md manage: INFO:2020-05-22 06h41.43 utc:403: Updating /usr/local/var/lib/gvm/scap-data/nvdcve-2.0-2005.xml
md manage: INFO:2020-05-22 06h44.05 utc:403: Updating /usr/local/var/lib/gvm/scap-data/nvdcve-2.0-2002.xml
md manage: INFO:2020-05-22 06h45.06 utc:403: Updating /usr/local/var/lib/gvm/scap-data/nvdcve-2.0-2004.xml
md manage: INFO:2020-05-22 06h46.01 utc:403: Updating /usr/local/var/lib/gvm/scap-data/nvdcve-2.0-2006.xml
md manage: INFO:2020-05-22 06h47.19 utc:403: Updating /usr/local/var/lib/gvm/scap-data/nvdcve-2.0-2010.xml
md manage: INFO:2020-05-22 06h48.56 utc:403: Updating OVAL data
md manage: INFO:2020-05-22 06h49.03 utc:403: Updating /usr/local/var/lib/gvm/scap-data/oval/5.10/org.mitre.oval/c/oval.xml
md manage: INFO:2020-05-22 06h49.03 utc:403: Updating /usr/local/var/lib/gvm/scap-data/oval/5.10/org.mitre.oval/m/oval.xml
md manage: INFO:2020-05-22 06h49.03 utc:403: Updating /usr/local/var/lib/gvm/scap-data/oval/5.10/org.mitre.oval/v/family/ios.xml
md manage: INFO:2020-05-22 06h49.05 utc:403: Updating /usr/local/var/lib/gvm/scap-data/oval/5.10/org.mitre.oval/v/family/pixos.xml
md manage: INFO:2020-05-22 06h49.05 utc:403: Updating /usr/local/var/lib/gvm/scap-data/oval/5.10/org.mitre.oval/p/oval.xml
==> /usr/local/var/log/gvm/gsad.log <==
gsad gmp:MESSAGE:2020-05-22 06h53.08 UTC:434: Authentication success for 'admin' from 172.17.0.1
==> /usr/local/var/log/gvm/gvmd.log <==
md manage: INFO:2020-05-22 06h55.22 utc:403: Updating /usr/local/var/lib/gvm/scap-data/oval/5.10/org.mitre.oval/i/oval.xml
md manage: INFO:2020-05-22 06h55.25 utc:403: Updating /usr/local/var/lib/gvm/scap-data/oval/5.10/org.mitre.oval/v/family/macos.xml
md manage: INFO:2020-05-22 06h55.26 utc:403: Updating /usr/local/var/lib/gvm/scap-data/oval/5.10/org.mitre.oval/v/family/unix.xml
==> /usr/local/var/log/gvm/openvas.log <==
sd main:MESSAGE:2020-05-22 06h56.01 utc:4065: Finished testing XX.XXX.X.XX. Time : 1124.59 secs
md manage: INFO:2020-05-22 06h56.16 utc:403: Updating /usr/local/var/lib/gvm/scap-data/oval/5.10/org.mitre.oval/v/family/windows.xml
md manage: INFO:2020-05-22 06h57.37 utc:403: Updating user OVAL definitions.
md manage: INFO:2020-05-22 06h57.37 utc:403: Updating CVSS scores and CVE counts for CPEs
md manage: INFO:2020-05-22 07h07.02 utc:403: Updating CVSS scores for OVAL definitions
md manage: INFO:2020-05-22 07h07.12 utc:403: Updating placeholder CPEs
md manage: INFO:2020-05-22 07h11.07 utc:403: update_scap: Updating SCAP info succeeded
md manage:WARNING:2020-05-22 07h11.17 utc:403: sql_exec_internal: PQexec failed: ERROR: relation "cert_bund_advs" does not exist
LINE 1: SELECT EXISTS (SELECT * FROM cert_bund_advs WHERE creation_...
^
(7)
md manage:WARNING:2020-05-22 07h11.17 utc:403: sql_exec_internal: SQL: SELECT EXISTS (SELECT * FROM cert_bund_advs WHERE creation_time > coalesce (CAST ((SELECT value FROM meta WHERE name = 'cert_check_time') AS INTEGER), 0));
md manage:WARNING:2020-05-22 07h11.17 utc:403: sql_x_internal: sql_exec_internal failed
gsad gmp:MESSAGE:2020-05-22 07h14.59 UTC:434: Authentication success for 'admin' from 172.17.0.1
sd main:MESSAGE:2020-05-22 07h25.20 utc:4066: Finished testing XX.XXX.X.XX. Time : 2883.88 secs
sd main:MESSAGE:2020-05-22 07h25.48 utc:4064: Finished testing XX.XXX.X.XX. Time : 2911.87 secs
sd main:MESSAGE:2020-05-22 07h25.48 utc:4007: Test complete
sd main:MESSAGE:2020-05-22 07h25.48 utc:4007: Total time to scan all hosts : 2921 seconds
==> /usr/local/var/log/gvm/ospd-openvas.log <==
2020-05-22 07:25:57,415 OSPD - openvas: INFO: (ospd.ospd) XX.XXX.X.XX, XX.XXX.X.XX, XX.XXX.X.XX: Host scan finished.
2020-05-22 07:25:58,532 OSPD - openvas: INFO: (ospd.ospd) 98661285-ef61-4a73-b894-731d2b1a43ab: Scan finished.
event task:MESSAGE:2020-05-22 07h27.01 UTC:3016: Status of task Konveodev Scan (626541f7-a2cd-40f7-bb1c-39bca9e67132) has changed to Done

Once the scan completed, when trying to access the scan result there is no information displayed. Is this caused by the error above?

We also noticed the load times for SecInfo data are significantly longer than before. Is this expected too?

IPv6

Hi,

does anyone have a good example of how to manage IPv6 scans? IPv6 masquerading is considered 'bad'...

What about the "--network host" option? That shouldn't be too bad, considering the machine isn't running anything important/critical.

Any suggestions?

Thanks a lot for your help!


Unable to get past login page

Pulled the latest version and installed it with docker run -d -p 127.0.0.1:9392:9392 --name gvm securecompliance/gvm

The logs show that login was successful, but I keep getting presented with the login page. I cannot manually navigate to any of the deeper URIs.

Here is the tail of the logs:

`Starting Open Scanner Protocol daemon for OpenVAS...
Starting Greenbone Vulnerability Manager...
Creating Greenbone Vulnerability Manager admin user
User created.
Starting Greenbone Security Assistant...
++++++++++++++++++++++++++++++++++++++++++++++
+ Your GVM 11 container is now ready to use! +
++++++++++++++++++++++++++++++++++++++++++++++

++++++++++++++++
+ Tailing logs +
++++++++++++++++

==> /usr/local/var/log/gvm/gsad.log <==
gsad main:MESSAGE:2020-02-02 15h51.48 utc:309: Starting GSAD version 9.0

==> /usr/local/var/log/gvm/gvmd.log <==
md manage:WARNING:2020-02-02 15h51.46 utc:285: sql_exec_internal: SQL: CREATE OR REPLACE VIEW vulns AS SELECT id, uuid, name, creation_time, modification_time, cast (cvss_base AS double precision) AS severity, qod, 'nvt' AS type FROM nvts WHERE uuid IN (SELECT nvt FROM results WHERE (results.severity != -3.0)) UNION SELECT id, uuid, name, creation_time, modification_time, cvss AS severity, 75 AS qod, 'cve' AS type FROM cves WHERE uuid IN (SELECT nvt FROM results WHERE (results.severity != -3.0)) UNION SELECT id, uuid, name, creation_time, modification_time, max_cvss AS severity, 75 AS qod, 'ovaldef' AS type FROM ovaldefs WHERE uuid IN (SELECT nvt FROM results WHERE (results.severity != -3.0))
md manage:WARNING:2020-02-02 15h51.46 utc:285: sqlv: sql_exec_internal failed
md manage: INFO:2020-02-02 15h51.47 utc:289: update_dfn_xml: dfn-cert-2018.xml
md manage: INFO:2020-02-02 15h51.47 utc:289: Updating /usr/local/var/lib/gvm/cert-data/dfn-cert-2018.xml
md main:MESSAGE:2020-02-02 15h51.47 utc:298: Greenbone Vulnerability Manager version 9.0.0 (DB revision 221)
md manage: INFO:2020-02-02 15h51.47 utc:298: Getting users.
md manage:WARNING:2020-02-02 15h51.48 utc:298: database must be initialised from scanner
md main:MESSAGE:2020-02-02 15h51.48 utc:303: Greenbone Vulnerability Manager version 9.0.0 (DB revision 221)
md manage: INFO:2020-02-02 15h51.48 utc:303: Creating user.
md manage:WARNING:2020-02-02 15h51.48 utc:303: database must be initialised from scanner

==> /usr/local/var/log/gvm/openvas.log <==
lib nvticache:MESSAGE:2020-02-02 15h50.47 utc:133: Updated NVT cache from version 0 to 202001311108

==> /usr/local/var/log/gvm/ospd-openvas.log <==

==> /usr/local/var/log/gvm/gvmd.log <==
md manage: INFO:2020-02-02 15h51.51 utc:289: update_dfn_xml: dfn-cert-2010.xml
md manage: INFO:2020-02-02 15h51.51 utc:289: Updating /usr/local/var/lib/gvm/cert-data/dfn-cert-2010.xml
md manage: INFO:2020-02-02 15h51.54 utc:289: update_dfn_xml: dfn-cert-2012.xml
md manage: INFO:2020-02-02 15h51.54 utc:289: Updating /usr/local/var/lib/gvm/cert-data/dfn-cert-2012.xml
md manage: INFO:2020-02-02 15h51.56 utc:288: OSP service has newer VT status (version 202001311108) than in database (version (null), 0 VTs). Starting update ...
md manage: INFO:2020-02-02 15h51.57 utc:289: update_dfn_xml: dfn-cert-2013.xml
md manage: INFO:2020-02-02 15h51.57 utc:289: Updating /usr/local/var/lib/gvm/cert-data/dfn-cert-2013.xml
md manage: INFO:2020-02-02 15h52.00 utc:289: update_dfn_xml: dfn-cert-2014.xml
md manage: INFO:2020-02-02 15h52.00 utc:289: Updating /usr/local/var/lib/gvm/cert-data/dfn-cert-2014.xml
md manage: INFO:2020-02-02 15h52.03 utc:289: update_dfn_xml: dfn-cert-2017.xml
md manage: INFO:2020-02-02 15h52.03 utc:289: Updating /usr/local/var/lib/gvm/cert-data/dfn-cert-2017.xml
md manage: INFO:2020-02-02 15h52.07 utc:289: update_dfn_xml: dfn-cert-2020.xml
md manage: INFO:2020-02-02 15h52.07 utc:289: Updating /usr/local/var/lib/gvm/cert-data/dfn-cert-2020.xml
md manage: INFO:2020-02-02 15h52.08 utc:289: update_dfn_xml: dfn-cert-2015.xml
md manage: INFO:2020-02-02 15h52.08 utc:289: Updating /usr/local/var/lib/gvm/cert-data/dfn-cert-2015.xml
md manage: INFO:2020-02-02 15h52.11 utc:289: update_dfn_xml: dfn-cert-2008.xml
md manage: INFO:2020-02-02 15h52.11 utc:289: Updating /usr/local/var/lib/gvm/cert-data/dfn-cert-2008.xml
md manage: INFO:2020-02-02 15h52.11 utc:289: update_dfn_xml: dfn-cert-2016.xml
md manage: INFO:2020-02-02 15h52.11 utc:289: Updating /usr/local/var/lib/gvm/cert-data/dfn-cert-2016.xml
md manage: INFO:2020-02-02 15h52.15 utc:289: update_dfn_xml: dfn-cert-2011.xml
md manage: INFO:2020-02-02 15h52.15 utc:289: Updating /usr/local/var/lib/gvm/cert-data/dfn-cert-2011.xml
md manage: INFO:2020-02-02 15h52.18 utc:289: update_dfn_xml: dfn-cert-2019.xml
md manage: INFO:2020-02-02 15h52.18 utc:289: Updating /usr/local/var/lib/gvm/cert-data/dfn-cert-2019.xml
md manage: INFO:2020-02-02 15h52.28 utc:289: Updating /usr/local/var/lib/gvm/cert-data/CB-K20.xml
md manage: INFO:2020-02-02 15h52.28 utc:289: Updating /usr/local/var/lib/gvm/cert-data/CB-K18.xml
md manage: INFO:2020-02-02 15h52.31 utc:289: Updating /usr/local/var/lib/gvm/cert-data/CB-K15.xml
md manage: INFO:2020-02-02 15h52.34 utc:289: Updating /usr/local/var/lib/gvm/cert-data/CB-K16.xml
md manage: INFO:2020-02-02 15h52.38 utc:289: Updating /usr/local/var/lib/gvm/cert-data/CB-K17.xml
md manage: INFO:2020-02-02 15h52.42 utc:289: Updating /usr/local/var/lib/gvm/cert-data/CB-K14.xml
md manage: INFO:2020-02-02 15h52.45 utc:289: Updating /usr/local/var/lib/gvm/cert-data/CB-K13.xml
md manage: INFO:2020-02-02 15h52.46 utc:289: Updating /usr/local/var/lib/gvm/cert-data/CB-K19.xml
md manage: INFO:2020-02-02 15h52.49 utc:289: Updating Max CVSS for DFN-CERT
md manage: INFO:2020-02-02 15h52.50 utc:289: Updating DFN-CERT CVSS max succeeded.
md manage: INFO:2020-02-02 15h52.50 utc:289: Updating Max CVSS for CERT-Bund
md manage: INFO:2020-02-02 15h52.50 utc:289: Updating CERT-Bund CVSS max succeeded.
md manage: INFO:2020-02-02 15h52.50 utc:289: sync_cert: Updating CERT info succeeded.
md manage: INFO:2020-02-02 15h58.03 utc:288: Updating VTs in database ... 57501 new VTs, 0 changed VTs
md manage: INFO:2020-02-02 15h58.14 utc:288: Updating VTs in database ... done (57501 VTs).

==> /usr/local/var/log/gvm/gsad.log <==
gsad gmp:MESSAGE:2020-02-02 16h05.58 utc:310: Authentication success for 'admin' from 172.17.0.1
gsad gmp:MESSAGE:2020-02-02 16h11.41 utc:310: Authentication success for 'admin' from 172.17.0.1

Error: Could not connect to Scanner

Help please: every task that I start fails with the same error.

Error Message ▲ | Host | Hostname | NVT | Port

Could not connect to Scanner
md manage:WARNING:2020-05-28 18h32.24 utc:9927: manage_update_nvt_cache_osp: failed to connect to /tmp/ospd.sock

md manage:WARNING:2020-05-28 18h32.30 UTC:9868: OSP start_scan 4f9e405a-9ae2-4797-9358-7af05b1f625d: Could not connect to Scanner

CVEs 0 of 0, no CVEs available

The feed shows that the CVE data is up to date and has been synced, but when I look at X.X.X.X:8080/cves, nothing is shown. What could the issue be?

Is SCAP data an essential piece of GVM, or can GVM function without it?

Web interface not starting on latest release

No data is returned on the web interface port. It was working on a previous version of the Docker image but seems to be broken now. I even wiped all my data to rule that out, but the issue persists.

I can see the process running correctly:

gsad --verbose --gnutls-priorities=SECURE128:-AES-128-CBC:-CAMELLIA-128-CBC:-VERS-SSL3.0:-VERS-TLS1.0 --no-redirect --mlisten=127.0.0.1 --mport=9390 --port=9392

The only error I see during the initial boot is this; I'm not sure if it is related:

md manage: INFO:2020-07-03 14h34.59 utc:795: Updating CPEs
md manage:WARNING:2020-07-03 14h35.00 utc:805: database must be initialised from scanner
md manage:WARNING:2020-07-03 14h35.00 utc:805: sql_exec_internal: PQexec failed: ERROR: relation "cves" does not exist
LINE 1: ... AS severity, 75 AS qod, 'cve' AS type FROM cves WHERE...
^
(7)
md manage:WARNING:2020-07-03 14h35.00 utc:805: sql_exec_internal: SQL: CREATE OR REPLACE VIEW vulns AS SELECT id, uuid, name, creation_time, modification_time, cast (cvss_base AS double precision) AS severity, qod, 'nvt' AS type FROM nvts WHERE uuid IN (SELECT nvt FROM results WHERE (results.severity != -3.0)) UNION SELECT id, uuid, name, creation_time, modification_time, cvss AS severity, 75 AS qod, 'cve' AS type FROM cves WHERE uuid IN (SELECT nvt FROM results WHERE (results.severity != -3.0)) UNION SELECT id, uuid, name, creation_time, modification_time, max_cvss AS severity, 75 AS qod, 'ovaldef' AS type FROM ovaldefs WHERE uuid IN (SELECT nvt FROM results WHERE (results.severity != -3.0))
md manage:WARNING:2020-07-03 14h35.00 utc:805: sqlv: sql_exec_internal failed
md main:MESSAGE:2020-07-03 14h35.18 utc:30: Greenbone Vulnerability Manager version 9.0.1 (DB revision 221)
md main: INFO:2020-07-03 14h35.18 utc:30: Migrating database.

It appears to finish correctly:

md manage: INFO:2020-07-03 14h48.47 utc:160: Updating user OVAL definitions.
md manage: INFO:2020-07-03 14h48.47 utc:160: Updating CVSS scores and CVE counts for CPEs
md manage: INFO:2020-07-03 14h53.20 utc:160: Updating CVSS scores for OVAL definitions
md manage: INFO:2020-07-03 14h53.31 utc:160: Updating placeholder CPEs
md manage: INFO:2020-07-03 14h55.08 utc:160: update_scap: Updating SCAP info succeeded

The last line I see after restarting the container:

md manage:WARNING:2020-07-06 00h09.44 utc:207: database must be initialised from scanner

Any thoughts?

Error: gnupg

Getting this trying to start the container:


Creating gnupg folder...


mv: cannot stat '/usr/local/var/lib/gvm/gvmd/gnupg': No such file or directory

This is as of the last release 21 hours ago.

GMP Service Down Message

Unable to log in to the container after the gnupg issue was resolved; logs:

lib  nvticache:MESSAGE:2020-05-07 20h27.54 utc:144: Updated NVT cache from version 0 to 202005061141
==> /usr/local/var/log/gvm/ospd-openvas.log <==
==> /usr/local/var/log/gvm/gsad.log <==

gsad  gmp:WARNING:2020-05-07 20h31.34 utc:210: Failed to connect to server at /usr/local/var/run/gvmd.sock: No such file or directory

gsad  gmp:WARNING:2020-05-07 20h31.34 utc:210: Authentication failure for 'admin' from 172.17.0.1

gsad  gmp:WARNING:2020-05-07 20h37.33 utc:210: Failed to connect to server at /usr/local/var/run/gvmd.sock: No such file or directory

gsad  gmp:WARNING:2020-05-07 20h37.33 utc:210: Authentication failure for 'admin' from 172.17.0.1

"start.sh" stil persistant in process tree:

root@42d01159fb9f:/# ps -ef | grep start
root         1     0  0 20:26 ?        00:00:00 /bin/sh -c '/start.sh'
root         6     1  0 20:26 ?        00:00:00 bash /start.sh
root       775   762  0 21:20 pts/1    00:00:00 grep --color=auto start
root@42d01159fb9f:/#

LDAP support

Hi, any idea how to set up LDAP authentication? I've just set up GVM and would like to integrate the UI authentication with our AD.

When I try to configure it, the LDAP configuration page shows "Support for LDAP is not available."

Thanks!

Failed to create user: Invalid characters in user name (CU-8ghxga)

Dear all,

I am running Docker on my OpenMediaVault installation.

Any ideas why it still does not work? Here are the logs where it starts to loop:

Creating Greenbone Vulnerability Manager admin user
Failed to create user: Invalid characters in user name

Here are the detailed logs:

Starting Open Scanner Protocol daemon for OpenVAS...
2020-06-21 08:34:48.348 UTC [96] LOG: autovacuum: dropping orphan temp table "gvmd.pg_temp_5.current_credentials"
2020-06-21 08:34:48.351 UTC [96] LOG: autovacuum: dropping orphan temp table "gvmd.pg_temp_4.current_credentials"
2020-06-21 08:34:48.353 UTC [96] LOG: autovacuum: dropping orphan temp table "gvmd.pg_temp_6.current_credentials"
Starting Greenbone Vulnerability Manager...
Creating Greenbone Vulnerability Manager admin user
Failed to create user: Invalid characters in user name
8:C 21 Jun 2020 08:35:32.359 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
8:C 21 Jun 2020 08:35:32.359 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=8, just started
8:C 21 Jun 2020 08:35:32.359 # Configuration loaded
Wait for redis socket to be created...
Testing redis status...
Redis ready.
Starting PostgreSQL...
pg_ctl: another server might be running; trying to start server anyway
waiting for server to start....2020-06-21 08:35:33.407 UTC [20] LOG: starting PostgreSQL 12.3 (Ubuntu 12.3-1.pgdg20.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.3.0-10ubuntu2) 9.3.0, 64-bit
2020-06-21 08:35:33.407 UTC [20] LOG: listening on IPv4 address "127.0.0.1", port 5432
2020-06-21 08:35:33.407 UTC [20] LOG: could not bind IPv6 address "::1": Cannot assign requested address
2020-06-21 08:35:33.407 UTC [20] HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
2020-06-21 08:35:33.413 UTC [20] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2020-06-21 08:35:33.431 UTC [21] LOG: database system was interrupted; last known up at 2020-06-21 08:33:48 UTC
2020-06-21 08:35:33.546 UTC [21] LOG: database system was not properly shut down; automatic recovery in progress
2020-06-21 08:35:33.551 UTC [21] LOG: redo starts at 0/1FF78E0
2020-06-21 08:35:33.620 UTC [21] LOG: invalid record length at 0/2623390: wanted 24, got 0
2020-06-21 08:35:33.620 UTC [21] LOG: redo done at 0/2623368
2020-06-21 08:35:33.667 UTC [20] LOG: database system is ready to accept connections
done
server started
Updating NVTs...
Updating CERT data...
Updating SCAP data...
Starting Open Scanner Protocol daemon for OpenVAS...
2020-06-21 08:36:33.741 UTC [102] LOG: autovacuum: dropping orphan temp table "gvmd.pg_temp_5.current_credentials"
2020-06-21 08:36:33.744 UTC [102] LOG: autovacuum: dropping orphan temp table "gvmd.pg_temp_4.current_credentials"
2020-06-21 08:36:33.745 UTC [102] LOG: autovacuum: dropping orphan temp table "gvmd.pg_temp_6.current_credentials"
Starting Greenbone Vulnerability Manager...
Creating Greenbone Vulnerability Manager admin user
Failed to create user: Invalid characters in user name
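For anyone hitting this, a quick local sanity check on the user name before retrying. The allowed character set below is an assumption based on typical gvmd validation (alphanumerics plus `-`, `_` and `.`), not confirmed from its source; spaces and other characters are what usually trigger "Invalid characters in user name".

```shell
# Assumption: gvmd accepts only alphanumerics plus '-', '_' and '.'.
# Returns 0 (valid) or 1 (probably rejected by gvmd).
is_probably_valid() {
    case "$1" in
        ""|*[!A-Za-z0-9._-]*) return 1 ;;  # empty, or has a char outside the set
        *) return 0 ;;
    esac
}

is_probably_valid "admin" && echo "admin: ok"
is_probably_valid "admin user" || echo "'admin user': rejected"
```

If the name passes this check and creation still fails, the problem likely lies elsewhere (e.g. the PASSWORD/USERNAME environment values passed to the container).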

Setting HTTPS != "true" creates an unusable container

First of all thanks for all your work that was done for providing this image.

Describe the bug
I am deploying a container based on your image to a Kubernetes environment with the environment variable HTTPS set to false. SSL termination is provided by the ingress controller of the Kubernetes environment. In this setup you are not able to log in, because gvmd is not running.

To Reproduce
Steps to reproduce the behavior:

  1. Start Container with environment variable HTTPS set to something other than true
  2. Visit the web interface
  3. Try to login with admin and the corresponding password
  4. Face the error message that gvmd is not running and stay on the login page

Expected behavior
The login succeeds.

Screenshots
./.

Additional context
Setting HTTPS to anything other than true means the certificate authority and certificate aren't created/copied over to /data, and because of that gvmd won't start. In my opinion, creating the CA and the certificate is necessary in order to get gvmd running. I will provide a PR shortly.

Best regards
Jens

Please update spelling

Hi,

The spelling is Greenbone, not camel-cased GreenBone. Could you do me a favor and update all texts accordingly?

Regards

Data sync failing

The greenbone-certdata-sync and greenbone-scapdata-sync commands are failing on container startup.

As a result, the CERT and SCAP data are not available.

xml_split not found

Trying to run as per the instructions on Docker Hub, the logs report sh: 1: xml_split: not found, and then the container exits.

docker run --detach --publish 8080:9392 -e PASSWORD="somepassword" --name gvm securecompliance/gvm

Creating Greenbone Vulnerability Manager database
CREATE ROLE
GRANT ROLE
CREATE EXTENSION
Creating gvmd folder...
Updating NVTs...
Updating CERT data...
Updating SCAP data...
Starting Open Scanner Protocol daemon for OpenVAS...
Starting Greenbone Vulnerability Manager...
Creating Greenbone Vulnerability Manager admin user
sh: 1: xml_split: not found

No https access to the Web Interface

The HTTP connection works fine, but the HTTPS connection is refused.

I followed these instructions: https://hub.docker.com/r/securecompliance/gvm

Can you please help to troubleshoot?

The installation was successfully completed:

CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS                    NAMES
47cbc8f24bae   securecompliance/gvm   "/bin/sh -c '/start.…"   26 minutes ago   Up 26 minutes   0.0.0.0:8080->9392/tcp   gvm

Last logs:

md manage: INFO:2019-12-05 15h53.39 utc:429: Updating DFN-CERT CVSS max succeeded.
md manage: INFO:2019-12-05 15h53.39 utc:429: Updating Max CVSS for CERT-Bund
md manage: INFO:2019-12-05 15h53.39 utc:429: Updating CERT-Bund CVSS max succeeded.
md manage: INFO:2019-12-05 15h53.39 utc:429: sync_cert: Updating CERT info succeeded.
md manage: INFO:2019-12-05 15h56.03 utc:428: Updating VTs in database ... 53487 new VTs, 0 changed VTs
md manage: INFO:2019-12-05 15h56.13 utc:428: Updating VTs in database ... done (53487 VTs).

The ports are listening and open

Host:
tcp6 0 0 :::8080 :::* LISTEN 3831/docker-proxy

Container:
tcp    0   0 0.0.0.0:6379     0.0.0.0:*   LISTEN   10/redis-server 0.0
tcp    0   0 127.0.0.1:5432   0.0.0.0:*   LISTEN   -
tcp6   0   0 :::9392          :::*        LISTEN   -
Client: successfully tested with telnet <IP address> 8080

Postgres not able to start (CU-8mc6ex)

Describe the bug
Having been running with an earlier release (11.0), I pulled the new image (tag :latest) to update my container. It now fails to start, giving a postgresql error relating to missing config files.

Creating network "gvm_default" with the default driver
Creating gvm ... done
Attaching to gvm
gvm | 8:C 23 Jun 2020 15:32:38.576 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
gvm | 8:C 23 Jun 2020 15:32:38.576 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=8, just started
gvm | 8:C 23 Jun 2020 15:32:38.576 # Configuration loaded
gvm | Wait for redis socket to be created...
gvm | Testing redis status...
gvm | Redis ready.
gvm | Starting PostgreSQL...
gvm | pg_ctl: another server might be running; trying to start server anyway
gvm | waiting for server to start....postgres: could not access the server configuration file "/data/database/postgresql.conf": No such file or directory
gvm | pg_ctl: could not start server
gvm | Examine the log output.
gvm | stopped waiting

To Reproduce
Steps to reproduce the behavior:

  1. Start docker container.
  2. Watch it fail.

Expected behavior
The container to start.


Additional context
Container is being run from docker-compose:

version: "3.5"
services:
gvm:
container_name: gvm
image: securecompliance/gvm
restart: always
env_file:
- ./gvm.env
ports:
- "8080:9392"
volumes:
- /docker/apps/gvm/storage/data:/data

Data volume is mounted via a bind mount, so I know where it is. Looking in there, as with #41, there are no postgresql.conf, pg_ident.conf or pg_hba.conf files, but the rest of the database files from before the image upgrade are present. Files are all owned by UID 102/GID 104 (systemd-resolve on the host computer).

Error opening in browser

Describe the bug
I can't open the GVM login screen in the browser, even though the container logs show no errors.

To Reproduce
Steps to reproduce the behavior:

  1. Run docker logs gvm: no errors are reported.
  2. Try to open the login screen in the browser: it gives an error.

Expected behavior
The login screen opens in the browser.

Cannot find gvmd.sock

I am using python-gvm for automation with the Docker image, but establishing a connection fails with "could not find /usr/local/var/run/gvmd.sock".
The file doesn't exist in this folder; I also checked other folders (/var/run, etc.) and couldn't find it anywhere.

Error decrypting credential: No secret key (again)

Describe the bug
After upgrading the container to 11.0.1, similar to #16, it is again no longer possible to access the credential store due to a missing secret key. Inside the container, the path where these files are stored (or where they were stored in 11.0) is still symlinked to a location in /data for persistence, and the files are present, so there doesn't seem to have been a reversion of the previous fix. Has something else changed with the upgrade?

To Reproduce
Steps to reproduce the behavior:

  1. Upgrade container
  2. Try to run a task using previously defined credentials.
  3. Task never starts, logs full of messages such as:

util gpgme:WARNING:2020-06-26 07h43.31 UTC:1854: error decrypting credential: No secret key
util gpgme: INFO:2020-06-26 07h43.31 UTC:1854: encrypted to keyid 770255826DC4A8AC, algo=1: No secret key
util gpgme:WARNING:2020-06-26 07h43.31 UTC:1854: error decrypting credential: No secret key
util gpgme: INFO:2020-06-26 07h43.31 UTC:1854: encrypted to keyid 770255826DC4A8AC, algo=1: No secret key
util gpgme:WARNING:2020-06-26 07h43.31 UTC:1854: error decrypting credential: No secret key
util gpgme: INFO:2020-06-26 07h43.31 UTC:1854: encrypted to keyid 770255826DC4A8AC, algo=1: No secret key

Expected behavior
Not the above.


ERROR: relation "cert_bund_advs" does not exist at character 30

Describe the bug
Greenbone Setup Error

....
md manage:   INFO:2020-06-23 11h34.27 utc:546: Updating user OVAL definitions.
md manage:   INFO:2020-06-23 11h34.27 utc:546: Updating CVSS scores and CVE counts for CPEs
md manage:   INFO:2020-06-23 11h35.28 utc:546: Updating CVSS scores for OVAL definitions
md manage:   INFO:2020-06-23 11h35.31 utc:546: Updating placeholder CPEs
2020-06-23 11:35:42.444 UTC [549] ERROR:  relation "cert_bund_advs" does not exist at character 30
2020-06-23 11:35:42.444 UTC [549] STATEMENT:  SELECT EXISTS (SELECT * FROM cert_bund_advs  WHERE creation_time        > coalesce (CAST ((SELECT value FROM meta                           WHERE name                                 = 'cert_check_time')                          AS INTEGER),                    0));
md manage:   INFO:2020-06-23 11h35.42 utc:546: update_scap: Updating SCAP info succeeded
md manage:WARNING:2020-06-23 11h35.42 utc:546: sql_exec_internal: PQexec failed: ERROR:  relation "cert_bund_advs" does not exist
LINE 1: SELECT EXISTS (SELECT * FROM cert_bund_advs  WHERE creation_...
                                     ^
 (7)
md manage:WARNING:2020-06-23 11h35.42 utc:546: sql_exec_internal: SQL: SELECT EXISTS (SELECT * FROM cert_bund_advs  WHERE creation_time        > coalesce (CAST ((SELECT value FROM meta                           WHERE name                                 = 'cert_check_time')                          AS INTEGER),                    0));
md manage:WARNING:2020-06-23 11h35.42 utc:546: sql_x_internal: sql_exec_internal failed

To Reproduce
Steps to reproduce the behavior:
Use the image securecompliance/gvm:master.

11.0.1-r1: postgres is not able to start

Describe the bug
Containers based on this image won't start because pg_ctl is missing its config files postgresql.conf, pg_ident.conf and pg_hba.conf.

To Reproduce
Steps to reproduce the behavior:

  1. Run docker run --rm securecompliance/gvm:11.0.1-r1 (for simplicity I didn't set any environment variables and assigned no volume)
  2. Check output:
➜ docker run --rm securecompliance/gvm:11.0.1-r1
8:C 14 Jun 2020 10:30:18.227 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
8:C 14 Jun 2020 10:30:18.227 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=8, just started
8:C 14 Jun 2020 10:30:18.227 # Configuration loaded
Wait for redis socket to be created...
Testing redis status...
Redis ready.
Creating Data folder...
Creating Database folder...
Starting PostgreSQL...
waiting for server to start....postgres: could not access the server configuration file "/data/database/postgresql.conf": No such file or directory
 stopped waiting
pg_ctl: could not start server
Examine the log output.

Expected behavior
A running container is created.

Screenshots
./.

Additional context
It seems that pg_ctl is missing postgresql.conf, pg_ident.conf and pg_hba.conf. I am no expert on PostgreSQL, but using these files from /etc/postgresql/12/main/ creates a running container.

[EDIT] Specify version/tag

Adding HTTPS

Hey! Can you please explain how to add nginx as a reverse proxy? I need GVM on HTTPS, not HTTP; in your container it is HTTP only. Thanks!
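For anyone in the same situation, a minimal reverse-proxy sketch: terminate TLS in nginx and proxy to the published GSA port. The hostname, certificate paths, and host port 8080 below are all placeholder assumptions, not values from this image's documentation.

```nginx
server {
    listen 443 ssl;
    server_name gvm.example.com;                  # hypothetical hostname

    ssl_certificate     /etc/nginx/ssl/gvm.crt;  # your certificate
    ssl_certificate_key /etc/nginx/ssl/gvm.key;  # and key

    location / {
        # GSA published on the host as 8080 (docker run --publish 8080:9392)
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

Note that gsad should then serve plain HTTP behind the proxy; the gsad command quoted elsewhere on this page already passes --no-redirect, which helps here.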

"Consider Alive"

Hi All,

Is anyone having issues with the consider-alive option? I am using the CE version in a VM and the "consider-alive" option seems to work and detects everything correctly.

When using this version, consider-alive doesn't appear to do anything. I have hosts that do not respond to ping but have HTTPS running, and they are not detected when using "All IANA assigned TCP and UDP 2012-02-10" (the same port list as on the VM version).

Any thoughts? Is this something unique to Docker?

Question: Unable to access GSA after container is ready for use

Describe the bug
Unable to access GSA after container is ready for use

To Reproduce
Steps to reproduce the behavior:

  1. Run docker run --detach --publish 3033:9392 --env PASSWORD=XXXX --volume /usr/local/var/jenkins/gvm-data:/data --name gvm securecompliance/gvm:11.0.1-r2
  2. Wait until the text + Your GVM 11 container is now ready to use! + is displayed
  3. Try to access GSA at http://localhost:3033/
  4. Only a blank page is displayed
  5. Wait until the NVTs finish loading (Updating VTs in database ... done (60027 VTs).), then try to access GSA again.
  6. The browser still displays a blank page

Actual behavior
The user cannot access GSA; the browser displays a blank page.

Expected behavior
User should be able to access GSA.

Screenshots
N/A

Additional context
Below error encountered:

Updating NVTs...
Updating CERT data...
Updating SCAP data...
Starting Open Scanner Protocol daemon for OpenVAS...
Starting Greenbone Vulnerability Manager...
2020-06-15 07:09:32.832 UTC [765] ERROR: relation "public.meta" does not exist at character 19
2020-06-15 07:09:32.832 UTC [765] STATEMENT: SELECT value FROM public.meta WHERE name = 'database_version';
Creating Greenbone Vulnerability Manager admin user
User created.
Starting Greenbone Security Assistant...
Oops, secure memory pool already initialized
++++++++++++++++++++++++++++++++++++++++++++++
+ Your GVM 11 container is now ready to use! +
++++++++++++++++++++++++++++++++++++++++++++++

Question: How to add scanner?

I want to add a new scanner (w3af). The GVM documentation points to adding it via the GOS admin menu, but since this is a Docker container and not the official VM, it doesn't seem to contain the GOS menu. How do I accomplish this?

How do I access this menu/functionality?


How to change the logs datetime?

How can I change the timezone used in the logs?
I changed the /etc/timezone file and restarted the container, but the log timestamps are still two hours behind.
Thanks!
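One approach that usually works for containers (an assumption about this image, not a documented option): pass the TZ environment variable at container start, e.g. docker run -e TZ=Europe/Rome ..., since libc consults TZ when formatting timestamps. The effect is easy to see with date:

```shell
# TZ controls the zone libc uses when formatting timestamps; exported into
# a container (docker run -e TZ=...), it shifts the log timestamps too.
TZ=UTC date '+%Z'          # prints UTC
TZ=Europe/Rome date '+%Z'  # CET or CEST, depending on the date
```

Editing /etc/timezone alone often isn't enough, because already-running daemons keep the zone they read at startup.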
