
hpviz's People

Contributors

lukeburciu, madeinoz67


hpviz's Issues

Error when deploying docker-compose remotely on viz

Error when attempting to deploy docker-compose.yml remotely on viz

Configuration error - The Compose file is invalid because:
Service broker has neither an image nor a build context specified. At least one must be provided.
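The minimal fix is to give the broker service an image (or a build context) in the compose file. A sketch, where the image tag is an assumption based on the Confluent stack used elsewhere in the project:

```yaml
# Sketch only: the broker must declare an image (or build:); tag is assumed.
services:
  broker:
    image: confluentinc/cp-kafka:5.5.1
    depends_on:
      - zookeeper
```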

error installing packages via pip

For some reason I am getting an error whenever pip tries to install packages; it was working yesterday. Occurring on both servers.

ImportError: No module named pkg_resources
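`pkg_resources` is provided by setuptools, so the usual fix is reinstalling setuptools for the same interpreter pip runs under. A sketch:

```shell
# pkg_resources lives in setuptools; reinstall it for the python3
# interpreter that pip is running under.
python3 -m pip install --upgrade setuptools
python3 -c "import pkg_resources; print('pkg_resources OK')"
```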

Pin vagrant box ssh ports

Pin them to ports other than the defaults so they are less likely to be taken by another Vagrant environment running on the same host.
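A sketch of pinning the forwarded SSH port in each box's Vagrantfile (the port number is an example; pick a distinct one per box):

```ruby
# Vagrantfile fragment: pin the SSH forward instead of the 2222 default.
# auto_correct lets Vagrant bump the port if there is still a collision.
config.vm.network "forwarded_port", guest: 22, host: 2251,
  id: "ssh", auto_correct: true
```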

zookeeper can't create data directory on prod

zookeeper is failing to start on hpviz001.ecu-sri.net

===> Launching zookeeper ... 
[2020-10-23 01:12:11,891] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-10-23 01:12:11,917] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-10-23 01:12:11,917] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-10-23 01:12:11,920] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
[2020-10-23 01:12:11,921] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
[2020-10-23 01:12:11,921] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
[2020-10-23 01:12:11,921] WARN Either no config or no quorum defined in config, running  in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2020-10-23 01:12:11,929] INFO Log4j 1.2 jmx support found and enabled. (org.apache.zookeeper.jmx.ManagedUtil)
[2020-10-23 01:12:11,950] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-10-23 01:12:11,951] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-10-23 01:12:11,951] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2020-10-23 01:12:11,952] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2020-10-23 01:12:11,956] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
[2020-10-23 01:12:11,958] ERROR Unable to access datadir, exiting abnormally (org.apache.zookeeper.server.ZooKeeperServerMain)
org.apache.zookeeper.server.persistence.FileTxnSnapLog$DatadirException: Unable to create data directory /var/lib/zookeeper/log/version-2
        at org.apache.zookeeper.server.persistence.FileTxnSnapLog.<init>(FileTxnSnapLog.java:127)
        at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:124)
        at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:106)
        at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:64)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:128)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:82)
Unable to access datadir, exiting abnormally
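The usual cause is that the bind-mounted host directory is owned by root while the container runs as an unprivileged user. A sketch of the fix, run against a scratch path here; on hpviz001 the path would be the host directory mapped to /var/lib/zookeeper, created with sudo and chowned to the container's uid (an assumption about the image):

```shell
# Pre-create the data/log dirs so the container user can write them.
# Scratch path for illustration; substitute the real bind-mount source.
ZK_DATA="${ZK_DATA:-/tmp/zk-demo}"
mkdir -p "$ZK_DATA/data" "$ZK_DATA/log"
chmod -R u+rwX "$ZK_DATA"
ls -ld "$ZK_DATA/log" && echo "zookeeper dirs ready"
```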

Elastic failing to start on prod with dir permission error

Similar issue to #77.

"Caused by: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data",
"at sun.nio.fs.UnixException.translateToIOException(UnixException.java:90) ~[?:?]",
"at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]",
"at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?]",
"at sun.nio.fs.UnixFileSystemProvider.checkAccess(UnixFileSystemProvider.java:313) ~[?:?]",
"at org.elasticsearch.bootstrap.Security.ensureDirectoryExists(Security.java:397) ~[elasticsearch-7.0.1.jar:7.0.1]",
"at org.elasticsearch.bootstrap.FilePermissionUtils.addDirectoryPath(FilePermissionUtils.java:68) ~[elasticsearch-7.0.1.jar:7.0.1]",
"at org.elasticsearch.bootstrap.Security.addFilePermissions(Security.java:298) ~[elasticsearch-7.0.1.jar:7.0.1]",
"at org.elasticsearch.bootstrap.Security.createPermissions(Security.java:253) ~[elasticsearch-7.0.1.jar:7.0.1]",
"at org.elasticsearch.bootstrap.Security.configure(Security.java:122) ~[elasticsearch-7.0.1.jar:7.0.1]",
"at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:206) ~[elasticsearch-7.0.1.jar:7.0.1]",
"at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:325) ~[elasticsearch-7.0.1.jar:7.0.1]",
"at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.0.1.jar:7.0.1]",
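Same pattern of fix: the Elasticsearch 7.x images run as uid 1000, so the host directory bound to /usr/share/elasticsearch/data must be writable by that uid. A sketch against a scratch path (on prod, chown/chmod the real bind-mount source with sudo):

```shell
# Make the data dir writable by the container user (uid 1000 in the
# official ES images). Scratch path here for illustration only.
ES_DATA="${ES_DATA:-/tmp/es-demo}"
mkdir -p "$ES_DATA"
chmod g+rwx "$ES_DATA"
echo "es data dir prepared: $ES_DATA"
```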

nginx application redirects

This enhancement is around what was discussed in issue #37 and how we handle the multiple applications on the Docker instances using NGINX as the proxy front end:

viz001.sri-ecu.net/grafana
viz001.sri-ecu.net/neo4j
viz001.sri-ecu.net/nifi
viz001.sri-ecu.net/dejavu
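A sketch of how the path-based routing could look in the NGINX config; upstream names and ports are assumptions, and each application repeats the pattern:

```nginx
# Route /grafana to the grafana container; one location block per app.
location /grafana/ {
    proxy_pass http://grafana:3000/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

Note that the application itself usually also has to be told about the sub-path (e.g. Grafana's root_url setting), or its redirects will escape the prefix.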

Kafka standard listener ports

 KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "LISTENER_INTERNAL:PLAINTEXT,LISTENER_PLAINTEXT:PLAINTEXT,LISTENER_EXT:PLAINTEXT"
          KAFKA_ADVERTISED_LISTENERS: "LISTENER_INTERNAL://broker:29092,LISTENER_PLAINTEXT://viz001.ecu-sri.net:9092,LISTENER_EXT://viz001.ecu-sri.net:9093"
          KAFKA_LISTENERS: "LISTENER_INTERNAL://broker:29092,LISTENER_PLAINTEXT://0.0.0.0:9092, LISTENER_EXT://0.0.0.0:9093"

These have been tested and are working on both hosts.

The only exception is that 9092 should be listening on 127.0.0.1 on viz001 rather than 0.0.0.0, as I really want 9093 to be the only listener published to the outside world, but 9092 is still needed for testing. SSL is not implemented yet on 9093, so it is running plaintext until then.

BTW the above config is in Ansible format, so you will need to replace the ':' with '='.
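For reference, the same three settings in KEY=value form, with 9092 bound to 127.0.0.1 as described above:

```properties
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=LISTENER_INTERNAL:PLAINTEXT,LISTENER_PLAINTEXT:PLAINTEXT,LISTENER_EXT:PLAINTEXT
KAFKA_ADVERTISED_LISTENERS=LISTENER_INTERNAL://broker:29092,LISTENER_PLAINTEXT://viz001.ecu-sri.net:9092,LISTENER_EXT://viz001.ecu-sri.net:9093
KAFKA_LISTENERS=LISTENER_INTERNAL://broker:29092,LISTENER_PLAINTEXT://127.0.0.1:9092,LISTENER_EXT://0.0.0.0:9093
```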

ingestion process restarting on prod

container vector-ingest

ERROR vector: Configuration error: "/etc/vector/vector.toml": unknown field group_by, expected one of expire_after_ms, flush_period_ms, identifier_fields, merge_strategies, ends_when for key transforms.reduce_sshd_log
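`group_by` is not a valid option for the reduce transform in this Vector version; per the error message itself, the grouping option is `identifier_fields`. A sketch of the corrected transform, where the input and grouping field names are assumptions:

```toml
[transforms.reduce_sshd_log]
  type = "reduce"
  inputs = ["parse_sshd_log"]           # assumed upstream transform name
  identifier_fields = ["host", "pid"]   # example grouping fields
  expire_after_ms = 30000
```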

Error installing syslogng_kafka via pip on viz

A dependency of the syslog-ng Kafka plugin is failing to build when installing via pip.

failed: [viz001] (item={'name': 'syslogng_kafka'}) => {"ansible_loop_var": "item", "changed": false, "cmd": ["/usr/bin/pip3", "install", "syslogng_kafka"], "item": {"name": "syslogng_kafka"}, "msg": "stdout: Collecting syslogng_kafka\nCollecting syslog-rfc5424-formatter>=1.0.0 (from syslogng_kafka)\n  Using cached https://files.pythonhosted.org/packages/a8/a2/24fb5bb0f8680b1001381a82c5b2e2d9ab50dba22127302862916b2d878c/syslog_rfc5424_formatter-1.2.2-py3-none-any.whl\nCollecting confluent-kafka==0.11.0 (from syslogng_kafka)\n  Using cached https://files.pythonhosted.org/packages/96/36/516a2b7f592376968296d10de10a20f1ce411e5ff24a86b1a23d5a5f4042/confluent-kafka-0.11.0.tar.gz\nBuilding wheels for collected packages: confluent-kafka\n  Running setup.py bdist_wheel for confluent-kafka: started\n  Running setup.py bdist_wheel for confluent-kafka: finished with status 'error'\n  Complete output from command /usr/bin/python3 -u -c \"import setuptools, tokenize;__file__='/tmp/pip-build-i0gcv7kk/confluent-kafka/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\\r\\n', '\\n');f.close();exec(compile(code, __file__, 'exec'))\" bdist_wheel -d /tmp/tmpvdcqrp7lpip-wheel- --python-tag cp36:\n  running bdist_wheel\n  running build\n  running build_py\n  creating build\n  creating build/lib.linux-x86_64-3.6\n  creating build/lib.linux-x86_64-3.6/confluent_kafka\n  copying confluent_kafka/__init__.py -> build/lib.linux-x86_64-3.6/confluent_kafka\n  creating build/lib.linux-x86_64-3.6/confluent_kafka/avro\n  copying confluent_kafka/avro/cached_schema_registry_client.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro\n  copying confluent_kafka/avro/load.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro\n  copying confluent_kafka/avro/error.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro\n  copying confluent_kafka/avro/__init__.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro\n  creating build/lib.linux-x86_64-3.6/confluent_kafka/kafkatest\n  
copying confluent_kafka/kafkatest/verifiable_consumer.py -> build/lib.linux-x86_64-3.6/confluent_kafka/kafkatest\n  copying confluent_kafka/kafkatest/verifiable_client.py -> build/lib.linux-x86_64-3.6/confluent_kafka/kafkatest\n  copying confluent_kafka/kafkatest/verifiable_producer.py -> build/lib.linux-x86_64-3.6/confluent_kafka/kafkatest\n  copying confluent_kafka/kafkatest/__init__.py -> build/lib.linux-x86_64-3.6/confluent_kafka/kafkatest\n  creating build/lib.linux-x86_64-3.6/confluent_kafka/avro/serializer\n  copying confluent_kafka/avro/serializer/message_serializer.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro/serializer\n  copying confluent_kafka/avro/serializer/__init__.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro/serializer\n  running build_ext\n  building 'confluent_kafka.cimpl' extension\n  creating build/temp.linux-x86_64-3.6\n  creating build/temp.linux-x86_64-3.6/confluent_kafka\n  creating build/temp.linux-x86_64-3.6/confluent_kafka/src\n  x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.6m -c confluent_kafka/src/confluent_kafka.c -o build/temp.linux-x86_64-3.6/confluent_kafka/src/confluent_kafka.o\n  In file included from confluent_kafka/src/confluent_kafka.c:17:0:\n  confluent_kafka/src/confluent_kafka.h:21:10: fatal error: librdkafka/rdkafka.h: No such file or directory\n   #include <librdkafka/rdkafka.h>\n            ^~~~~~~~~~~~~~~~~~~~~~\n  compilation terminated.\n  error: command 'x86_64-linux-gnu-gcc' failed with exit status 1\n  \n  ----------------------------------------\n  Running setup.py clean for confluent-kafka\nFailed to build confluent-kafka\nInstalling collected packages: syslog-rfc5424-formatter, confluent-kafka, syslogng-kafka\n  Running setup.py install for confluent-kafka: started\n    Running setup.py install for confluent-kafka: finished with status 'error'\n    Complete 
output from command /usr/bin/python3 -u -c \"import setuptools, tokenize;__file__='/tmp/pip-build-i0gcv7kk/confluent-kafka/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\\r\\n', '\\n');f.close();exec(compile(code, __file__, 'exec'))\" install --record /tmp/pip-ct59m0s5-record/install-record.txt --single-version-externally-managed --compile:\n    running install\n    running build\n    running build_py\n    creating build\n    creating build/lib.linux-x86_64-3.6\n    creating build/lib.linux-x86_64-3.6/confluent_kafka\n    copying confluent_kafka/__init__.py -> build/lib.linux-x86_64-3.6/confluent_kafka\n    creating build/lib.linux-x86_64-3.6/confluent_kafka/avro\n    copying confluent_kafka/avro/cached_schema_registry_client.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro\n    copying confluent_kafka/avro/load.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro\n    copying confluent_kafka/avro/error.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro\n    copying confluent_kafka/avro/__init__.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro\n    creating build/lib.linux-x86_64-3.6/confluent_kafka/kafkatest\n    copying confluent_kafka/kafkatest/verifiable_consumer.py -> build/lib.linux-x86_64-3.6/confluent_kafka/kafkatest\n    copying confluent_kafka/kafkatest/verifiable_client.py -> build/lib.linux-x86_64-3.6/confluent_kafka/kafkatest\n    copying confluent_kafka/kafkatest/verifiable_producer.py -> build/lib.linux-x86_64-3.6/confluent_kafka/kafkatest\n    copying confluent_kafka/kafkatest/__init__.py -> build/lib.linux-x86_64-3.6/confluent_kafka/kafkatest\n    creating build/lib.linux-x86_64-3.6/confluent_kafka/avro/serializer\n    copying confluent_kafka/avro/serializer/message_serializer.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro/serializer\n    copying confluent_kafka/avro/serializer/__init__.py -> build/lib.linux-x86_64-3.6/confluent_kafka/avro/serializer\n    running build_ext\n    building 
'confluent_kafka.cimpl' extension\n    creating build/temp.linux-x86_64-3.6\n    creating build/temp.linux-x86_64-3.6/confluent_kafka\n    creating build/temp.linux-x86_64-3.6/confluent_kafka/src\n    x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.6m -c confluent_kafka/src/confluent_kafka.c -o build/temp.linux-x86_64-3.6/confluent_kafka/src/confluent_kafka.o\n    In file included from confluent_kafka/src/confluent_kafka.c:17:0:\n    confluent_kafka/src/confluent_kafka.h:21:10: fatal error: librdkafka/rdkafka.h: No such file or directory\n     #include <librdkafka/rdkafka.h>\n              ^~~~~~~~~~~~~~~~~~~~~~\n    compilation terminated.\n    error: command 'x86_64-linux-gnu-gcc' failed with exit status 1\n    \n    ----------------------------------------\n\n:stderr:   Failed building wheel for confluent-kafka\nCommand \"/usr/bin/python3 -u -c \"import setuptools, tokenize;__file__='/tmp/pip-build-i0gcv7kk/confluent-kafka/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\\r\\n', '\\n');f.close();exec(compile(code, __file__, 'exec'))\" install --record /tmp/pip-ct59m0s5-record/install-record.txt --single-version-externally-managed --compile\" failed with error code 1 in /tmp/pip-build-i0gcv7kk/confluent-kafka/\n"}
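The build fails because the librdkafka headers are missing on the host (`fatal error: librdkafka/rdkafka.h`). A sketch of an Ansible task to install them before the pip task runs; the package name is the Debian/Ubuntu one, and the task name is an assumption:

```yaml
# Run before the pip task that installs syslogng_kafka.
- name: Install librdkafka development headers
  apt:
    name: librdkafka-dev
    state: present
```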

secrets ANSIBLE_CI_KEY format

The pipeline is getting the following error when attempting to run the playbooks; it is failing due to a private key in an invalid format.

 "msg": "Failed to connect to the host via ssh: load pubkey \"/tmp/privateKey173999725\": invalid format\r\nHost key verification failed."
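This usually means the key in the secret was mangled (missing trailing newline, wrapped lines, or a key type ssh can't parse). `ssh-keygen -y` reads a private key the same way ssh does, so it is a quick format check. A sketch using a scratch key generated on the spot; point `-f` at the real key file to validate it:

```shell
# Validate a private key's format the way ssh will before storing it
# as the ANSIBLE_CI_KEY secret. Scratch key here for illustration.
KEY=/tmp/ci-demo-key
rm -f "$KEY" "$KEY.pub"
ssh-keygen -t ed25519 -N '' -q -f "$KEY"
ssh-keygen -y -f "$KEY" > /dev/null && echo "key format OK"
```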

Vector kafka source/sink use of "key-field" setting

It appears the "key_field" setting in our current config, taken from the Vector Kafka source/sink examples, is doing nothing: the user_id field does not exist in the logs being ingested, resulting in an empty 'user_id' field in Elastic.

From the README it appears we don't have to specify the field, and we most likely should leave it unset in this case so that a random value is created:

The log field name to use for the topic key. If unspecified, the key will be randomly generated. If the field does not exist on the log, a blank value will be used.
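Per the quoted README behaviour, the change would simply be to omit the setting from the sink. An abbreviated sketch, where the sink, input, and topic names are all assumptions:

```toml
[sinks.kafka_out]
  type = "kafka"
  inputs = ["reduce_sshd_log"]   # assumed input name
  bootstrap_servers = "broker:29092"
  topic = "hpviz-logs"           # assumed topic name
  # key_field deliberately omitted: a random key is generated per event
```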

FQDN Registration required for each host

The project sponsor has responded to our TLS query and we're to use Let's Encrypt; however, we will need an FQDN for each of the hosts.

Have requested FQDNs from SRI.

Related Issue #5

Grafana container failing to start with incorrect permissions on /var/lib/grafana

Grafana plugins are not downloading in the current container, as it looks like the filesystem permissions are incorrect.

host dir: /opt/data/grafana/plugins
docker container dir: /var/lib/grafana

This is preventing the container from starting.

Error: ✗ failed to extract plugin archive: could not create "/var/lib/grafana/plugins/grafana-clock-panel/.circleci", permission denied, make sure you have write access to plugin dir
GF_PATHS_DATA='/var/lib/grafana' is not writable.
You may have issues with file permissions, more information here: http://docs.grafana.org/installation/docker/#migration-from-a-previous-version-of-the-docker-container-to-5-1-or-later
installing grafana-clock-panel @ 1.1.1
from: https://grafana.com/api/plugins/grafana-clock-panel/versions/1.1.1/download
into: /var/lib/grafana/plugins

Deployment Pipeline failed

ERROR! couldn't resolve module/action 'docker_network'. This often indicates a misspelling, missing collection, or incorrect module path.

The error appears to be in '/github/workspace/pb-docker-deploy-viz.yml': line 183, column 7, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- name: Define VIZ network (docker)
  ^ here
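`docker_network` now lives in the `community.docker` collection, which is not installed on the runner; the fix is to install the collection before the play runs (or use the fully-qualified name `community.docker.docker_network` in the task). A requirements-file sketch, where the file path is an assumption:

```yaml
# collections/requirements.yml, installed with:
#   ansible-galaxy collection install -r collections/requirements.yml
collections:
  - name: community.docker
```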

Volume permissions for docker containers

Grafana as an example: configure file permissions for the Docker containers.

docker run -ti --user root --volume "<your volume mapping here>" --entrypoint bash grafana/grafana:5.1.0
# in the container you just started:
chown -R root:root /etc/grafana && \
chmod -R a+r /etc/grafana && \
chown -R grafana:grafana /var/lib/grafana && \
chown -R grafana:grafana /usr/share/grafana

Discussion around server directory structure for viz

Proposing the following directory structure for the Docker services on viz, as a starting point:

# container configurations
/var/opt/hpviz/etc                  - top-level container config files
/var/opt/hpviz/etc/nginx            - nginx config files

# docker compose files
/var/opt/hpviz/docker               - docker-compose and overrides

# data
/var/opt/hpviz/data                 - data top-level directory
/var/opt/hpviz/data/es              - Elasticsearch persistent data
/var/opt/hpviz/data/nifi            - NiFi persistent data
/var/opt/hpviz/data/nifi-registry   - NiFi Registry persistent data
/var/opt/hpviz/data/neo4j           - Neo4j persistent data
/var/opt/hpviz/data/dejavu          - Dejavu top level
/var/opt/hpviz/data/kafka           - Kafka top level
/var/opt/hpviz/data/kafka/broker    - Kafka broker
/var/opt/hpviz/data/kafka/ksqldb    - ksqlDB
/var/opt/hpviz/data/nginx           - nginx data

# grafana dashboards
/var/opt/hpviz/grafana/dashboards   - Grafana dashboards
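The proposed tree can be created in one `mkdir -p`. A sketch using a scratch prefix; on viz the prefix would be /var/opt/hpviz, created with sudo:

```shell
# Create the proposed layout under a configurable prefix.
PREFIX="${PREFIX:-/tmp/hpviz-demo}"
mkdir -p "$PREFIX/etc/nginx" "$PREFIX/docker" "$PREFIX/grafana/dashboards" \
         "$PREFIX/data/es" "$PREFIX/data/nifi" "$PREFIX/data/nifi-registry" \
         "$PREFIX/data/neo4j" "$PREFIX/data/dejavu" "$PREFIX/data/nginx" \
         "$PREFIX/data/kafka/broker" "$PREFIX/data/kafka/ksqldb"
find "$PREFIX" -type d | sort
```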

update readme for /docker

Old:

docker-compose -f docker-compose.yml -f production.yml up -d

New:

docker-compose -f docker-compose.yml -f docker-compose.override.yml up -d

CI/CD Failing on release deployment

Release deployment is failing on CI/CD with the following message:

ERROR! couldn't resolve module/action 'ansible.posix.firewalld'. This often indicates a misspelling, missing collection, or incorrect module path.

The error appears to be in '/github/workspace/roles/robertdebock.firewall/tasks/main.yml': line 52, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:


- name: open ports (firewalld-port)
  ^ here
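As with the `docker_network` failure, `ansible.posix.firewalld` comes from the `ansible.posix` collection, which is missing on the runner; install it before the play runs. A requirements-file sketch, where the file path is an assumption:

```yaml
# collections/requirements.yml, installed with:
#   ansible-galaxy collection install -r collections/requirements.yml
collections:
  - name: ansible.posix
```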
