lukeburciu / hpviz
License: MIT License
Error when attempting to deploy docker-compose.yml remotely on viz
Configuration error - The Compose file is invalid because: Service 'broker' has neither an image nor a build context specified. At least one must be provided.
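A minimal sketch of what the `broker` service definition needs to satisfy Compose — either an `image:` or a `build:` key (the image tag below is an assumption, not the project's actual one):

```yaml
services:
  broker:
    # either an image...
    image: confluentinc/cp-kafka:5.5.1   # tag is an assumption
    # ...or a build context:
    # build: ./broker
```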
A self-signed cert needs to be created and deployed to the vector sink.
CA public certs are also required.
For some reason I am getting an error whenever pip tries to install packages; it was working yesterday. Occurring on both servers.
ImportError: No module named pkg_resources
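`pkg_resources` is shipped by setuptools, so a missing or broken setuptools install is the usual cause; reinstalling it for the same interpreter pip uses is a likely (not certain) fix:

```shell
# pkg_resources lives in setuptools; reinstall it for the python3 that pip wraps
python3 -m pip install --upgrade --force-reinstall setuptools
# sanity check that the import now resolves
python3 -c "import pkg_resources; print('ok')"
```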
Pin the ports to something other than the defaults so they are less likely to be taken by another running Vagrant environment.
zookeeper is failing to start on hpviz001.ecu-sri.net
===> Launching zookeeper ...
[2020-10-23 01:12:11,891] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-10-23 01:12:11,917] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-10-23 01:12:11,917] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-10-23 01:12:11,920] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
[2020-10-23 01:12:11,921] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
[2020-10-23 01:12:11,921] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
[2020-10-23 01:12:11,921] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2020-10-23 01:12:11,929] INFO Log4j 1.2 jmx support found and enabled. (org.apache.zookeeper.jmx.ManagedUtil)
[2020-10-23 01:12:11,950] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-10-23 01:12:11,951] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-10-23 01:12:11,951] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-10-23 01:12:11,952] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2020-10-23 01:12:11,956] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
[2020-10-23 01:12:11,958] ERROR Unable to access datadir, exiting abnormally (org.apache.zookeeper.server.ZooKeeperServerMain)
org.apache.zookeeper.server.persistence.FileTxnSnapLog$DatadirException: Unable to create data directory /var/lib/zookeeper/log/version-2
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.<init>(FileTxnSnapLog.java:127)
at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:124)
at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:106)
at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:64)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:128)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:82)
Unable to access datadir, exiting abnormally
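`Unable to create data directory /var/lib/zookeeper/log/version-2` points at the bind-mounted datadir not being writable by the container user. A host-side sketch of a fix, assuming the Confluent image's internal uid is 1000 (worth verifying with `id` inside the container):

```shell
# create the datadir tree and hand it to the uid zookeeper runs as inside
# the container (uid 1000 is an assumption -- confirm before applying)
sudo mkdir -p /var/lib/zookeeper/data /var/lib/zookeeper/log
sudo chown -R 1000:1000 /var/lib/zookeeper
```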
Is it possible to prevent the ansible-lint GitHub Action from firing whenever I update the README?
Similar issues to #77
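One way to do this (workflow filename and ignored paths are assumptions) is a `paths-ignore` filter on the workflow trigger:

```yaml
# .github/workflows/ansible-lint.yml (filename assumed)
on:
  push:
    paths-ignore:
      - 'README.md'
  pull_request:
    paths-ignore:
      - 'README.md'
```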
"Caused by: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data",
"at sun.nio.fs.UnixException.translateToIOException(UnixException.java:90) ~[?:?]",
"at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]",
"at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?]",
"at sun.nio.fs.UnixFileSystemProvider.checkAccess(UnixFileSystemProvider.java:313) ~[?:?]",
"at org.elasticsearch.bootstrap.Security.ensureDirectoryExists(Security.java:397) ~[elasticsearch-7.0.1.jar:7.0.1]",
"at org.elasticsearch.bootstrap.FilePermissionUtils.addDirectoryPath(FilePermissionUtils.java:68) ~[elasticsearch-7.0.1.jar:7.0.1]",
"at org.elasticsearch.bootstrap.Security.addFilePermissions(Security.java:298) ~[elasticsearch-7.0.1.jar:7.0.1]",
"at org.elasticsearch.bootstrap.Security.createPermissions(Security.java:253) ~[elasticsearch-7.0.1.jar:7.0.1]",
"at org.elasticsearch.bootstrap.Security.configure(Security.java:122) ~[elasticsearch-7.0.1.jar:7.0.1]",
"at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:206) ~[elasticsearch-7.0.1.jar:7.0.1]",
"at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:325) ~[elasticsearch-7.0.1.jar:7.0.1]",
"at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.0.1.jar:7.0.1]",
This enhancement is around what was discussed in issue #37 and how we handle the multiple applications on the Docker instances using NGINX as the proxy front end:
viz001.sri-ecu.net/grafana
viz001.sri-ecu.net/neo4j
viz001.sri-ecu.net/nifi
viz001.sri-ecu.net/dejavu
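A rough sketch of the NGINX side, assuming the containers are reachable by service name on the compose network (upstream names and ports are assumptions, and several of these apps also need their sub-path configured on the app side, e.g. Grafana's `root_url`):

```nginx
server {
    listen 80;
    server_name viz001.sri-ecu.net;

    location /grafana/ {
        proxy_pass http://grafana:3000/;   # service name/port assumed
        proxy_set_header Host $host;
    }
    location /nifi/ {
        proxy_pass http://nifi:8080/;      # service name/port assumed
    }
}
```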
Needs to pull all developer-related information from the main README and put it in docs/development.
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "LISTENER_INTERNAL:PLAINTEXT,LISTENER_PLAINTEXT:PLAINTEXT,LISTENER_EXT:PLAINTEXT"
KAFKA_ADVERTISED_LISTENERS: "LISTENER_INTERNAL://broker:29092,LISTENER_PLAINTEXT://viz001.ecu-sri.net:9092,LISTENER_EXT://viz001.ecu-sri.net:9093"
KAFKA_LISTENERS: "LISTENER_INTERNAL://broker:29092,LISTENER_PLAINTEXT://0.0.0.0:9092,LISTENER_EXT://0.0.0.0:9093"
These have been tested and working for both hosts.
The only exception is that 9092 should be listening on 127.0.0.1 on viz001 rather than 0.0.0.0, as I really want 9093 to be the only listener published to the outside world. We still need 9092 for testing: SSL is not yet implemented on 9093, so plaintext stays in use until then.
BTW the above config is in Ansible format, so you will need to replace the ':' with '='.
container vector-ingest
ERROR vector: Configuration error: "/etc/vector/vector.toml": unknown field `group_by`, expected one of `expire_after_ms`, `flush_period_ms`, `identifier_fields`, `merge_strategies`, `ends_when` for key `transforms.reduce_sshd_log`
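Per the error, this vector version's `reduce` transform has no `group_by`; the closest supported option is `identifier_fields`. A hedged sketch of the transform (the input name and grouping field are illustrative, not taken from the repo's config):

```toml
[transforms.reduce_sshd_log]
  type = "reduce"
  inputs = ["sshd_parsed"]          # upstream transform name assumed
  identifier_fields = ["host"]      # group on these instead of the unsupported `group_by`
  expire_after_ms = 30000           # close a group after 30s of inactivity
```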
Requires vector to be deployed to the sink
Luke has already created the config files
Damian requires access to /data on thesink
No longer supported and does not work in the current version of Grafana.
Dependency for syslog-ng kafka plugin when installing via pip is failing
failed: [viz001] (item={'name': 'syslogng_kafka'}) => {"ansible_loop_var": "item", "changed": false, "cmd": ["/usr/bin/pip3", "install", "syslogng_kafka"], "item": {"name": "syslogng_kafka"}, "msg": "stdout: Collecting syslogng_kafka\nCollecting syslog-rfc5424-formatter>=1.0.0 (from syslogng_kafka)\n Using cached https://files.pythonhosted.org/packages/a8/a2/24fb5bb0f8680b1001381a82c5b2e2d9ab50dba22127302862916b2d878c/syslog_rfc5424_formatter-1.2.2-py3-none-any.whl\nCollecting confluent-kafka==0.11.0 (from syslogng_kafka)\n Using cached https://files.pythonhosted.org/packages/96/36/516a2b7f592376968296d10de10a20f1ce411e5ff24a86b1a23d5a5f4042/confluent-kafka-0.11.0.tar.gz\nBuilding wheels for collected packages: confluent-kafka\n Running setup.py bdist_wheel for confluent-kafka: started\n Running setup.py bdist_wheel for confluent-kafka: finished with status 'error'\n Complete output from command /usr/bin/python3 -u -c \"import setuptools, tokenize;__file__='/tmp/pip-build-i0gcv7kk/confluent-kafka/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\\r\\n', '\\n');f.close();exec(compile(code, __file__, 'exec'))\" bdist_wheel -d /tmp/tmpvdcqrp7lpip-wheel- --python-tag cp36:\n running bdist_wheel\n running build\n running build_py\n creating build\n creating build/lib.linux-x86_64-3.6\n creating build/lib.linux-x86_64-3.6/confluent_kafka\n copying confluent_kafka/__init__.py -> build/lib.linux-x86_64-3.6/confluent_kafka\n creating build/lib.linux-x86_64-3.6/confluent_kafka/avro\n copying confluent_kafka/avro/cached_schema_registry_client.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro\n copying confluent_kafka/avro/load.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro\n copying confluent_kafka/avro/error.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro\n copying confluent_kafka/avro/__init__.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro\n creating build/lib.linux-x86_64-3.6/confluent_kafka/kafkatest\n copying 
confluent_kafka/kafkatest/verifiable_consumer.py -> build/lib.linux-x86_64-3.6/confluent_kafka/kafkatest\n copying confluent_kafka/kafkatest/verifiable_client.py -> build/lib.linux-x86_64-3.6/confluent_kafka/kafkatest\n copying confluent_kafka/kafkatest/verifiable_producer.py -> build/lib.linux-x86_64-3.6/confluent_kafka/kafkatest\n copying confluent_kafka/kafkatest/__init__.py -> build/lib.linux-x86_64-3.6/confluent_kafka/kafkatest\n creating build/lib.linux-x86_64-3.6/confluent_kafka/avro/serializer\n copying confluent_kafka/avro/serializer/message_serializer.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro/serializer\n copying confluent_kafka/avro/serializer/__init__.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro/serializer\n running build_ext\n building 'confluent_kafka.cimpl' extension\n creating build/temp.linux-x86_64-3.6\n creating build/temp.linux-x86_64-3.6/confluent_kafka\n creating build/temp.linux-x86_64-3.6/confluent_kafka/src\n x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.6m -c confluent_kafka/src/confluent_kafka.c -o build/temp.linux-x86_64-3.6/confluent_kafka/src/confluent_kafka.o\n In file included from confluent_kafka/src/confluent_kafka.c:17:0:\n confluent_kafka/src/confluent_kafka.h:21:10: fatal error: librdkafka/rdkafka.h: No such file or directory\n #include <librdkafka/rdkafka.h>\n ^~~~~~~~~~~~~~~~~~~~~~\n compilation terminated.\n error: command 'x86_64-linux-gnu-gcc' failed with exit status 1\n \n ----------------------------------------\n Running setup.py clean for confluent-kafka\nFailed to build confluent-kafka\nInstalling collected packages: syslog-rfc5424-formatter, confluent-kafka, syslogng-kafka\n Running setup.py install for confluent-kafka: started\n Running setup.py install for confluent-kafka: finished with status 'error'\n Complete output from command /usr/bin/python3 -u -c 
\"import setuptools, tokenize;__file__='/tmp/pip-build-i0gcv7kk/confluent-kafka/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\\r\\n', '\\n');f.close();exec(compile(code, __file__, 'exec'))\" install --record /tmp/pip-ct59m0s5-record/install-record.txt --single-version-externally-managed --compile:\n running install\n running build\n running build_py\n creating build\n creating build/lib.linux-x86_64-3.6\n creating build/lib.linux-x86_64-3.6/confluent_kafka\n copying confluent_kafka/__init__.py -> build/lib.linux-x86_64-3.6/confluent_kafka\n creating build/lib.linux-x86_64-3.6/confluent_kafka/avro\n copying confluent_kafka/avro/cached_schema_registry_client.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro\n copying confluent_kafka/avro/load.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro\n copying confluent_kafka/avro/error.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro\n copying confluent_kafka/avro/__init__.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro\n creating build/lib.linux-x86_64-3.6/confluent_kafka/kafkatest\n copying confluent_kafka/kafkatest/verifiable_consumer.py -> build/lib.linux-x86_64-3.6/confluent_kafka/kafkatest\n copying confluent_kafka/kafkatest/verifiable_client.py -> build/lib.linux-x86_64-3.6/confluent_kafka/kafkatest\n copying confluent_kafka/kafkatest/verifiable_producer.py -> build/lib.linux-x86_64-3.6/confluent_kafka/kafkatest\n copying confluent_kafka/kafkatest/__init__.py -> build/lib.linux-x86_64-3.6/confluent_kafka/kafkatest\n creating build/lib.linux-x86_64-3.6/confluent_kafka/avro/serializer\n copying confluent_kafka/avro/serializer/message_serializer.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro/serializer\n copying confluent_kafka/avro/serializer/__init__.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro/serializer\n running build_ext\n building 'confluent_kafka.cimpl' extension\n creating build/temp.linux-x86_64-3.6\n creating 
build/temp.linux-x86_64-3.6/confluent_kafka\n creating build/temp.linux-x86_64-3.6/confluent_kafka/src\n x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.6m -c confluent_kafka/src/confluent_kafka.c -o build/temp.linux-x86_64-3.6/confluent_kafka/src/confluent_kafka.o\n In file included from confluent_kafka/src/confluent_kafka.c:17:0:\n confluent_kafka/src/confluent_kafka.h:21:10: fatal error: librdkafka/rdkafka.h: No such file or directory\n #include <librdkafka/rdkafka.h>\n ^~~~~~~~~~~~~~~~~~~~~~\n compilation terminated.\n error: command 'x86_64-linux-gnu-gcc' failed with exit status 1\n \n ----------------------------------------\n\n:stderr: Failed building wheel for confluent-kafka\nCommand \"/usr/bin/python3 -u -c \"import setuptools, tokenize;__file__='/tmp/pip-build-i0gcv7kk/confluent-kafka/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\\r\\n', '\\n');f.close();exec(compile(code, __file__, 'exec'))\" install --record /tmp/pip-ct59m0s5-record/install-record.txt --single-version-externally-managed --compile\" failed with error code 1 in /tmp/pip-build-i0gcv7kk/confluent-kafka/\n"}
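The root cause is visible in the build log: `librdkafka/rdkafka.h: No such file or directory` — `confluent-kafka` 0.11.0 compiles a C extension against librdkafka. Installing the dev headers first should let the pip install proceed (package name below is the Debian/Ubuntu one):

```shell
# confluent-kafka builds a C extension; it needs the librdkafka headers
sudo apt-get update
sudo apt-get install -y librdkafka-dev
/usr/bin/pip3 install syslogng_kafka
```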
When restarting server or docker, kafka fails to restart.
docker container logs broker
error reported
The broker is trying to join the wrong cluster
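If the log shows a cluster-ID mismatch, the broker's bind-mounted log dir is holding a `meta.properties` left over from a previous cluster. A dev-only sketch (the data path is an assumption, and removing the file throws away the broker's stored identity):

```shell
# inspect why the broker exits
docker container logs broker 2>&1 | tail -n 20
# dev-only: remove stale cluster metadata so the broker re-registers
# (path is an assumption based on the proposed /var/opt/hpviz layout)
sudo rm /var/opt/hpviz/data/kafka/broker/meta.properties
docker-compose up -d broker
```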
Pipeline is getting the following error when attempting to run the playbooks and is failing due to an invalid-format private key.
"msg": "Failed to connect to the host via ssh: load pubkey \"/tmp/privateKey173999725\": invalid format\r\nHost key verification failed."
It appears as though the `key_field` setting in our current config, taken from the vector source/sink Kafka examples, is doing nothing: the `user_id` field is non-existent in the logs being ingested, resulting in an empty 'user_id' key in Elastic.
From the readme it appears we don't have to specify the field; we most likely should leave it blank in this case, and thus a random value will be created.
The log field name to use for the topic key. If unspecified, the key will be randomly generated. If the field does not exist on the log, a blank value will be used.
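So the likely change is simply dropping the option from the sink. A sketch (sink, input and topic names are assumptions; only the `key_field` behaviour comes from the docs quoted above):

```toml
[sinks.kafka_out]                      # sink name assumed
  type = "kafka"
  inputs = ["sshd_transformed"]        # input name assumed
  bootstrap_servers = "viz001.ecu-sri.net:9092"
  topic = "honeypot"                   # topic name assumed
  encoding.codec = "json"
  # key_field deliberately omitted: the topic key is then randomly
  # generated instead of being an empty value from the missing user_id field
```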
signed certs required on viz001 for vector secure TLS link for syslog ingestion
Authentication needs to be configured for:
appears to be a facts issue
Project sponsor has responded to our TLS query and we're to use LetsEncrypt, however will need a FQDN for each of the hosts.
Have requested FQDN from SRI
Related Issue #5
Not receiving query results from Elastic even though data is indexed.
To secure all external HTTP traffic we may need to look at an nginx instance with TLS to secure any HTTP ports, rather than managing TLS certs for individual connections.
The message body of syslog data needs to be transformed into relevant fields by vector.
Requires a single field with lon/lat.
Neo4j won't log in due to a websocket error for the docker container neo4j.
issue details: https://neo4j.com/developer/kb/explanation-of-error-websocket-connection-failure
Grafana plugins are not downloading in the current container, as it looks like the filesystem permissions are incorrect.
host dir: /opt/data/grafana/plugins
docker container dir: /var/lib/grafana
This is preventing the container from starting.
Error: failed to extract plugin archive: could not create "/var/lib/grafana/plugins/grafana-clock-panel/.circleci", permission denied, make sure you have write access to plugin dir
GF_PATHS_DATA='/var/lib/grafana' is not writable.
You may have issues with file permissions, more information here: http://docs.grafana.org/installation/docker/#migration-from-a-previous-version-of-the-docker-container-to-5-1-or-later
installing grafana-clock-panel @ 1.1.1
from: https://grafana.com/api/plugins/grafana-clock-panel/versions/1.1.1/download
into: /var/lib/grafana/plugins
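Grafana 5.1+ runs as uid 472 inside the container, so the bind-mounted host dirs need to be owned by that uid. A sketch against the host path above:

```shell
# grafana's container user is uid 472 from 5.1 onward
sudo chown -R 472:472 /opt/data/grafana
```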
Will need to tune memory as per documentation
Add SAN IP and localhost DNS entries to the dev certs.
Add elastic container to docker compose
external ports:
API: 9200
The firewall on viz001 appears to be allowing kafkacat on thesink to connect on port TCP:9092 (plaintext) when only TCP:9093 (SSL) should be allowed.
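Worth confirming what firewalld actually has open and dropping 9092 if present (the zone name is an assumption):

```shell
# list currently opened ports, then close 9092 permanently
sudo firewall-cmd --zone=public --list-ports
sudo firewall-cmd --zone=public --remove-port=9092/tcp --permanent
sudo firewall-cmd --reload
```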
Self signed CA required to create certs for TLS links
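A minimal openssl sketch for the CA plus one signed host cert (CNs, lifetimes and filenames are assumptions; SAN entries would additionally need an extensions file):

```shell
# 1. CA key + self-signed CA cert
openssl genrsa -out ca.key 4096
openssl req -x509 -new -key ca.key -sha256 -days 365 -subj "/CN=hpviz-ca" -out ca.crt
# 2. host key + CSR + cert signed by the CA
openssl genrsa -out viz001.key 2048
openssl req -new -key viz001.key -subj "/CN=viz001.ecu-sri.net" -out viz001.csr
openssl x509 -req -in viz001.csr -CA ca.crt -CAkey ca.key -CAcreateserial -sha256 -days 365 -out viz001.crt
```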
Add Grafana Container to docker compose
external port: 3000
gave me incorrect pub key previously
ERROR! couldn't resolve module/action 'docker_network'. This often indicates a misspelling, missing collection, or incorrect module path.
The error appears to be in '/github/workspace/pb-docker-deploy-viz.yml': line 183, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Define VIZ network (docker)
^ here
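From Ansible 2.10 onward `docker_network` lives in a collection, so the runner most likely just lacks it; installing `community.docker` (and referencing the module by its FQCN) is the usual fix:

```shell
# install the collection on the runner (or list it in requirements.yml)
ansible-galaxy collection install community.docker
# then refer to the module as community.docker.docker_network in the playbook
```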
Changes required for syslog listener to TCP Port 1514 on the sink
Technical Docs need to be finalised
Grafana as an example: configure file permissions for the docker containers.
docker run -ti --user root --volume "<your volume mapping here>" --entrypoint bash grafana/grafana:5.1.0
# in the container you just started:
chown -R root:root /etc/grafana && \
chmod -R a+r /etc/grafana && \
chown -R grafana:grafana /var/lib/grafana && \
chown -R grafana:grafana /usr/share/grafana
CICD to run only on tagged releases
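A sketch of the trigger change (workflow file and tag pattern are assumptions):

```yaml
# run the deploy workflow only for tagged releases
on:
  push:
    tags:
      - 'v*'
```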
To identify the countries originating attacks, GeoIP information is required; this will give the country the IP address is assigned to.
We need to leave the original log message field intact, as the fields we're transforming and enriching are the metadata around this field.
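Vector versions of this era shipped a `geoip` transform backed by a MaxMind database; a hedged sketch (input/field names and the DB path are assumptions, and the database itself has to be provisioned separately):

```toml
[transforms.geoip_enrich]              # transform name assumed
  type = "geoip"
  inputs = ["sshd_transformed"]        # input name assumed
  database = "/etc/vector/GeoLite2-City.mmdb"  # path assumed
  source = "src_ip"                    # field holding the IP to look up, assumed
  target = "geoip"                     # enriched fields land under this key
```

This leaves the original message field untouched and only adds metadata under the target key.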
Damian is reporting that he can't access the scada pcaps under /data.
Proposing the following dir structure for the docker services on viz
as a starting point:
# container configurations
/var/opt/hpviz/etc - top-level container config files
/var/opt/hpviz/etc/nginx - nginx config files
# docker compose files
/var/opt/hpviz/docker - docker-compose and overrides
# Data
/var/opt/hpviz/data - data top-level directory
/var/opt/hpviz/data/es - Elasticsearch persistent data directory
/var/opt/hpviz/data/nifi - nifi persistent data
/var/opt/hpviz/data/nifi-registry - nifi registry persistent data
/var/opt/hpviz/data/neo4j - neo4j persistent data
/var/opt/hpviz/data/dejavu - dejavu top level
/var/opt/hpviz/data/kafka - kafka top level
/var/opt/hpviz/data/kafka/broker - kafka broker
/var/opt/hpviz/data/kafka/ksqldb - kafka ksqldb
/var/opt/hpviz/data/nginx - nginx data
# grafana dashboards
/var/opt/hpviz/grafana/dashboards - location of dashboards for grafana
Old:
docker-compose -f docker-compose.yml -f production.yml up -d
New:
docker-compose -f docker-compose.yml -f docker-compose.override.yml up -d
Need to flatten JSON data before ingestion into elastic
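For illustration, flattening nested JSON into dot-delimited keys before indexing can be sketched like this (a standalone example, not the project's actual pipeline code):

```python
def flatten(obj, parent_key="", sep="."):
    """Recursively flatten nested dicts into single-level dot-delimited keys."""
    items = {}
    for key, value in obj.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            # descend into nested objects, carrying the prefix
            items.update(flatten(value, new_key, sep=sep))
        else:
            items[new_key] = value
    return items

record = {"syslog": {"host": "viz001", "msg": {"src_ip": "10.0.0.1"}}}
print(flatten(record))
# {'syslog.host': 'viz001', 'syslog.msg.src_ip': '10.0.0.1'}
```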
Release deployment failing on the CI/CD with the following message
ERROR! couldn't resolve module/action 'ansible.posix.firewalld'. This often indicates a misspelling, missing collection, or incorrect module path.
The error appears to be in '/github/workspace/roles/robertdebock.firewall/tasks/main.yml': line 52, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: open ports (firewalld-port)
^ here
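Same class of failure as the `docker_network` one above: the `ansible.posix` collection isn't installed on the runner.

```shell
# install the missing collection (or add it to requirements.yml for the CI image)
ansible-galaxy collection install ansible.posix
```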
Is a better representation of the project's CI/CD user.
Raw is being indexed and will contain the same messages as the processed logs.
Need a smart way of tagging, then processing based on the tag.
Possible Solution:
vectordotdev/vector#4644 (comment)