
ansible-zencash-nodes

Description

Horizen (formerly known as Zencash) is a fork of Zcash, a fixed-supply digital currency that uses zk-SNARKs to provide strong on-chain privacy for digital payments.

This repo provides ansible roles that configure a VPS or bare-metal machine to host LXC containers, each running an instance of the zend daemon and the secure node tracker client.

This was originally written for and tested against Ubuntu Xenial 16.04 on Contabo VPS instances, but has recently been updated to support Debian 11 "Bullseye" and tested on Proxmox.

Requirements

Note that Supernodes must be publicly reachable via both IPv4 and IPv6. Securenodes must be reachable via either IPv4 or IPv6.

All nodes must have a valid public SSL certificate. These playbooks leverage acme.sh to request free public TLS certificates. Note that rather than expose TCP/80 to the Internet for TLS issuance, they now use the DNS API; AWS Route53 is the first provider implemented.
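As a rough sketch of what DNS-API issuance looks like with acme.sh (the hostname below is a placeholder built from this README's example domain, and the AWS keys are the inventory examples, not real credentials):

```shell
# Sketch only: issue a certificate via acme.sh's Route53 DNS API.
# Assumes acme.sh is installed and the keys may create TXT records in the zone.
export AWS_ACCESS_KEY_ID="AKIA...."          # maps to aws_access_key
export AWS_SECRET_ACCESS_KEY="p/vfqt..."     # maps to aws_secret_access_key

# --dns dns_aws publishes the TXT challenge in Route53; no inbound TCP/80 needed.
# Guarded so the sketch is a no-op on machines without acme.sh installed.
if command -v acme.sh >/dev/null 2>&1; then
    acme.sh --issue --dns dns_aws -d "sec01.your-nodes-address.com"
fi
```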

Note also that any upstream firewalls / security groups will need to permit TCP/9033 to the nodes.
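If the host itself runs UFW, the equivalent rule can be sketched as follows (a hypothetical manual step; on managed nodes the playbook's firewall role opens this port):

```shell
# Allow the zend p2p port through UFW.
# Guarded so the sketch is a no-op where ufw is absent.
ZEND_PORT=9033
if command -v ufw >/dev/null 2>&1; then
    ufw allow "${ZEND_PORT}/tcp"
fi
```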

Warning

This is a non-standard installation of zend and the nodetracker client, so don't ask for help with installs from this playbook in the official #securenodes channel. This playbook is aimed at experts, or at people willing to learn to manage their deployments with ansible in exchange for a lower average cost per node.

Security

Since the crypto space is full of scammers and hackers, security on your nodes is absolutely necessary. I've tried to make this playbook as secure as possible. If you see any possible improvements, open an issue or message @techistheway in the Zencash Discord.

  1. Uses LXC to separate namespaces
  2. LXC containers are unprivileged (WIP)
  3. SSH is disabled to save RAM. Consoles are available via lxc-attach, or you can restart ssh using the utility playbooks.
  4. The ansible-hardening role applies all applicable STIGs
  5. The UFW firewall blocks everything except the ssh port and the zend port
  6. Fail2ban is installed and enabled
  7. Root login and password authentication are disabled
  8. AppArmor is enabled
  9. If you encrypt your inventory.yml with the vault as documented, all sensitive information in the playbook is encrypted, so your credentials stay secure

Configuration values

| Configuration Item | Example | Description |
| --- | --- | --- |
| global_tracker_email | [email protected] | Email to receive tracker alerts |
| global_domain | your-nodes-address.com | The top-level domain of your nodes |
| secure_nodes_prefix | sec | Prefix for your secure nodes |
| super_nodes_prefix | sup | Prefix for your super nodes |
| ipv6_subnet | 2a45:c456:4493:5534:c:c:c: | A /64 subnet from your provider, configured down to a /112. Any IP in the /112 may be assigned to your nodes except 2a45:c456:4493:5534:c:c:c:1 |
| ipv6_interface | eth0 | Public interface used for the container network bridge |
| ssh_public_key | ssh-rsa AAAAB3Nz..K7n8DX5dBeEQlEBN6fcVN your_user@ansible_controller | Public ssh key used for connecting to the nodes |
| blocks_directory | /root/chain | Directory containing a seed of the blockchain |
| ansible_become_pass | "super_secret_password" | ssh sudo password |
| ansible_user | your_user | Username used for the ssh connection |
| ansible_host | 173.1.1.1 or your-nodes-address.com | Address used to connect to the master |
| stake_address | znTyzLKM4VrWjSt8... | Transparent wallet address where the ZEN used for the nodes is staked |
| tracker_region | eu or na | Tracker server region to connect nodes to |
| swap_size_gb | 0 | Size of the swap file to add |
| public_ipv4_netmask | 24 | Subnet size |
| public_ipv4_address | 1.1.1.1 | IPv4 address used by a super node. supernode01 should use the included IP; the rest can be assigned arbitrarily |
| private_ipv4_address | 10.0.3.201 | Private IP of the container tied to the public address. Can be left alone unless your instance uses a different DHCP range |
| private_ipv4_subnet | 10.0.3.0/24 | Private IP subnet |
| announce_ipv4_address | 1.2.3.4 | Public IP to announce to the p2p network. Required for AWS and other providers that do not assign a routable public IP to your instances |
| aws_access_key | AKIA.... | AWS access key with permission to create Route53 records for the global_domain. Used by acme.sh |
| aws_secret_access_key | p/vfqt... | AWS secret access key. Do NOT check these into a public repo |
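To make the ipv6_subnet convention concrete, here is a small sketch that derives per-container addresses from the documentation prefix; `container_ipv6` is a hypothetical helper, not part of the repo, and :1 is skipped because it is reserved:

```shell
# The example /112 prefix from the table above.
ipv6_subnet="2a45:c456:4493:5534:c:c:c:"

# Append a hex index to the prefix; start at 2, since :1 is reserved.
container_ipv6() {   # usage: container_ipv6 <index>, index >= 2
    printf '%s%x\n' "$ipv6_subnet" "$1"
}

container_ipv6 2   # -> 2a45:c456:4493:5534:c:c:c:2
```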

Install instructions for Debian 11 hypervisor host

Clone the repo

git clone https://github.com/alchemydc/ansible-zencash-nodes/
cd ansible-zencash-nodes/

Install dependencies

./install_host_deps.sh

Generate an SSH key to use for authenticating to the ansible controller

ssh-keygen -t ed25519
cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys
chmod 640 ~/.ssh/authorized_keys

Edit inventory.yml

Be sure to set the stake addresses properly for each node

Fill out the values in the inventory.yml file and uncomment the nodes you're going to deploy. Make sure your DNS matches. Adjust the ansible_fqdn, container_fqdn, and secure_nodes_prefix/super_nodes_prefix variables to match your naming scheme. These are variables so you don't have to fill out each value manually.

Set swap size

Configure the swap_size_gb variable to the size of swap you want. The default is 0 GB.

For supernodes

Each supernode requires an additional publicly routable IPv4 address, which needs to be entered in the corresponding public_ipv4_address field. Adjust announce_ipv4_address for each node as required.

Run the playbook

ansible-playbook nodes.yml

After nodes are created, generate and view the z addresses to send the challenge balances

ansible-playbook get-addresses.yml

Send 3 transactions of 0.01 zen to each z address and restart nodetracker. With the 3-day challenge interval, this will last a long time.

You can use one of the zend instances to send the challenge balance to the z address without a swing wallet.

zen-cli z_sendmany from_address '[{"address": "1st_to_address" ,"amount": 0.01},{"address": "2nd_to_address" ,"amount": 0.01}]'
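Building that JSON by hand is error-prone for many nodes; a small helper can generate the recipient array (`build_amounts` is a hypothetical helper, and the z-addresses below are placeholders):

```shell
# Build the z_sendmany recipient array for any number of z-addresses,
# sending 0.01 zen to each.
build_amounts() {   # usage: build_amounts addr1 [addr2 ...]
    local json="" addr
    for addr in "$@"; do
        # Append a comma only between entries, not before the first one.
        json="${json:+$json,}{\"address\": \"$addr\", \"amount\": 0.01}"
    done
    printf '[%s]' "$json"
}

# Then, on one of the zend instances:
# zen-cli z_sendmany from_address "$(build_amounts zaddr1 zaddr2 zaddr3)"
```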

Attach to container console

dc@controller:/# lxc-attach -n sn1.example.com

After the transactions confirm, restart the tracker client to register your node and follow the logs

root@sn1:/# systemctl restart nodetracker
root@sn1:/# journalctl -f

Maintenance procedures

Adding a new host

Uncomment the appropriate section in the inventory and re-run the nodes.yml playbook. It will only make the changes needed for the new nodes.

ansible-playbook nodes.yml

Seeding the blockchain (to avoid the slow p2p sync)

Create a folder containing the blocks and chainstate folders, and set it as the blocks_directory variable in your inventory.yml.

dc@controller:/home/dc/nodes# ls /root/chain/
blocks  chainstate
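A seeding step can be sketched as follows; `seed_chain` is a hypothetical helper, the paths are illustrative, and zend should be stopped on the source node before copying:

```shell
# Stage a chain seed into blocks_directory from an existing node's data directory.
seed_chain() {   # usage: seed_chain <source_datadir> <blocks_directory>
    mkdir -p "$2"
    # Copy only the two directories the seeding step needs.
    cp -a "$1/blocks" "$1/chainstate" "$2/"
}

# seed_chain /home/zend/.zen /root/chain
```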

Encrypting vault & using password.sh

Once you've set up everything inside your inventory.yml, you should encrypt it so that it isn't stored in plaintext on the server.

ansible-vault encrypt inventory.yml

Save that password, as you'll need it to run the playbooks. To avoid entering it repeatedly, you can set it via an environment variable.

source password.sh
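The repo's password.sh isn't reproduced here; a hedged sketch of what such a helper can do, using ansible's standard ANSIBLE_VAULT_PASSWORD_FILE variable (the actual script may differ):

```shell
# Prompt once, store the vault password in a user-only file,
# and point ansible-playbook / ansible-vault at it.
setup_vault_pass() {   # usage: setup_vault_pass <password-file>
    umask 077                                       # file readable only by owner
    IFS= read -r -s -p "Vault password: " vault_pass; echo
    printf '%s\n' "$vault_pass" > "$1"
    export ANSIBLE_VAULT_PASSWORD_FILE="$1"
}

# setup_vault_pass ~/.vault_pass && ansible-playbook nodes.yml
```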

Destroying a container after a failed run

lxc-stop -n container_name ; lxc-destroy -n container_name

Stopping/Starting ssh

To save RAM and increase security slightly, ssh is disabled on startup. To start ssh on the containers

ansible-playbook start-ssh.yml

To stop ssh on the containers

ansible-playbook stop-ssh.yml

Logging

zend is configured for syslog, and the containers are configured to send all their logs through syslog to the controller host.

Inspecting logs

watching logs on the host:

tail -f /var/log/syslog | grep sn10

You can also just use less and use the "follow" feature by pressing Shift+F

less /var/log/syslog
# on keyboard press Shift+F

Getting all logs from a container

grep "sn1 " /var/log/syslog | less

Getting all nodetracker logs from a container

grep "sn1 " /var/log/syslog | grep "node\[" | less


Logs inside container

View all zend, nodetracker and system logs from this boot

journalctl -b

Follow real time logs

journalctl -f

systemd (system services)

I created systemd unit files for all services instead of using third-party management tools.

Restart zend

systemctl restart zend

Restart nodetracker

systemctl restart nodetracker

Nodes command

I added a nodes-command alias that runs a command on all the containers through lxc.

root@master:~# nodes-command "ps -ef"
Container:monitoring.zennodes.com
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 00:09 ?        00:00:00 /sbin/init
root        37     1  0 00:09 ?        00:00:00 /lib/systemd/systemd-journald
syslog      88     1  0 00:09 ?        00:00:00 /usr/sbin/rsyslogd -n
root        92     1  0 00:09 ?        00:00:00 /usr/sbin/cron -f
root       165     1  0 00:09 ?        00:00:00 /usr/sbin/sshd -D
root       167     1  0 00:09 pts/0    00:00:00 /sbin/agetty --noclear --keep-baud pts/0 115200 38400 9600 vt220
root       168     1  0 00:09 pts/3    00:00:00 /sbin/agetty --noclear --keep-baud pts/3 115200 38400 9600 vt220
root       169     1  0 00:09 pts/1    00:00:00 /sbin/agetty --noclear --keep-baud pts/1 115200 38400 9600 vt220
root       170     1  0 00:09 pts/2    00:00:00 /sbin/agetty --noclear --keep-baud pts/2 115200 38400 9600 vt220
root       171     1  0 00:09 lxc/console 00:00:00 /sbin/agetty --noclear --keep-baud console 115200 38400 9600 vt220
root       281     1  0 00:09 ?        00:00:00 /sbin/dhclient -1 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases -I -df /var/lib/dhcp/dhclient6.e
grafana   4086     1  0 00:21 ?        00:00:02 /usr/sbin/grafana-server --config=/etc/grafana/grafana.ini --pidfile=/var/run/grafana/grafana-server.pid cfg:default.pa
root      4605     0  0 01:32 pts/1    00:00:00 ps -ef
Container:sec01.zennodes.com
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 00:08 ?        00:00:01 /sbin/init
root        36     1  0 00:08 ?        00:00:02 /lib/systemd/systemd-journald
root        80     1  0 00:08 ?        00:00:00 /usr/sbin/cron -f
root       182     1  0 00:08 ?        00:00:00 /usr/sbin/sshd -D
root       185     1  0 00:08 pts/1    00:00:00 /sbin/agetty --noclear --keep-baud pts/1 115200 38400 9600 vt220
root       186     1  0 00:08 lxc/console 00:00:00 /sbin/agetty --noclear --keep-baud console 115200 38400 9600 vt220
root       187     1  0 00:08 pts/0    00:00:00 /sbin/agetty --noclear --keep-baud pts/0 115200 38400 9600 vt220
root       189     1  0 00:08 pts/3    00:00:00 /sbin/agetty --noclear --keep-baud pts/3 115200 38400 9600 vt220
root       191     1  0 00:08 pts/2    00:00:00 /sbin/agetty --noclear --keep-baud pts/2 115200 38400 9600 vt220
root       281     1  0 00:08 ?        00:00:00 /sbin/dhclient -1 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases -I -df /var/lib/dhcp/dhclient6.e
syslog    8070     1  0 00:16 ?        00:00:00 /usr/sbin/rsyslogd -n
zend      9247     1  0 00:20 ?        00:00:04 /usr/local/bin/node app.js
zend     13221     1  0 01:13 ?        00:00:37 /usr/bin/zend -printtoconsole -logtimestamps=0
root     13743     0  0 01:32 pts/1    00:00:00 ps -ef

You could also use ansible ad-hoc commands, so this is more of a convenience.

ansible controller -m shell -a "hostname"
controller01 | SUCCESS | rc=0 >>
controller01
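A rough sketch of what such an alias can look like, written here as a shell function named nodes_command (the repo's actual alias may differ):

```shell
# Run one command inside every LXC container, echoing each container
# name first, like the sample output above.
nodes_command() {   # usage: nodes_command "<command ...>"
    local name
    for name in $(lxc-ls -1); do           # lxc-ls -1: one name per line
        echo "Container:$name"
        lxc-attach -n "$name" -- sh -c "$1"
    done
}
```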

Get z addresses

Run the get-addresses.yml playbook to generate and display the z addresses to send the challenge balance.

Get z address balances

Run the get-balances.yml playbook to display the z address balances for each of the nodes.

Get private z addresses

Run the dump-keys.yml playbook to display the private z addresses so you can save them to a wallet.

Donations

If you used this and saved a bunch of money, send the original author of these tools some zen or eth!

Zen

znZ2zopm9VuAKxXxjRpygwoqSNEffQp1iYx

Ethereum

0xC720c150Bb757978Ba565912B891312190E6e9B4

ansible-zencash-nodes's People

Contributors

alchemydc, blockzilladev, rmeleromira, spiral11


ansible-zencash-nodes's Issues

lxc-net issues

Running ubuntu 16.04 as the ansible_host, when running the nodes.yml playbook I have trouble with lxc-net due to a missing dnsmasq.conf file:

Sep 06 10:56:16 blah lxc-net[18765]: dnsmasq: cannot read /etc/lxc/dnsmasq.conf: No such file or directory
Sep 06 10:56:16 blah dnsmasq[18809]: cannot read /etc/lxc/dnsmasq.conf: No such file or directory
Sep 06 10:56:16 blah dnsmasq[18809]: FAILED to start up
Sep 06 10:56:16 blah lxc-net[18765]: Failed to setup lxc-net.

lxc-create --version

2.0.11

It has to do with this task:

- name: Enable static lxc addresses
  lineinfile:
    path: /etc/default/lxc-net
    line: 'LXC_DHCP_CONFILE=/etc/lxc/dnsmasq.conf'
  register: lxc_net
  when: "groups['supernodes'] is defined"

I don't understand why it triggers; it seems I have to comment out the supernodes.

ansible with lxc-create failing

I am having trouble getting the lxc-create task to work (roles/master/tasks/lxc.yaml:49).
With ansible 6.6.0 or newer (current is 7.1.0), the lxc-create task fails with an error like "logfile is not correct storage type", because commands in the lxc_container module get interpreted wrong, producing invalid lxc-create syntax.

source: ansible-collections/community.general#5578

The only solution I found was to pin the ansible version to 6.5.0 in install_host_deps.sh.

@rmeleromira do you use this still? Can anyone else relate or confirm this?

EDIT 2: It is fixed in community.general 6.2.0, which is not yet bundled with ansible 7.1.0, so a manual upgrade is needed:

# ansible-galaxy collection install community.general --upgrade

Closing, because the collection upgrade fixes the issue with ansible >= 6.6.0.

nodes failing

Running a few nodes and they started failing. zend is unable to work and keeps restarting. Judging by the error, it seems like a corrupted wallet.dat file.

Tried:

  • restarting the container
  • deleting and creating a new container with a restored wallet
  • rsyncing blockchain data from a good node

Currently running zend -reindex, but it is slow as hell; 24 hours in and still not complete.

The error in question is:

Sep 20 07:09:56 z8 zend[13843]: zend: wallet/wallet.cpp:698: void CWallet::IncrementNoteWitnesses(const CBlockIndex*, const CBlock*, ZCIncrementalMerkleTree&): Assertion `(nd->witnessHeight == -1) || (nd->witnessHeight == pindex->nHeight - 1)' failed.
Log output:

Sep 20 07:06:29 z8 systemd[1]: Starting ZenCash Node Server...
Sep 20 07:06:29 z8 zend[13843]: Zen server starting
Sep 20 07:06:29 z8 zend[13843]: Horizen version v2.0.18 (2019-07-03 18:36:44 +0000)
Sep 20 07:06:29 z8 zend[13843]: AppInit2: parameter interaction: -externalip set -> setting -discover=0
Sep 20 07:06:29 z8 zend[13843]: Zen server starting
Sep 20 07:06:29 z8 systemd[1]: Started ZenCash Node Server.
Sep 20 07:06:29 z8 zend[13843]: Zen version v2.0.18 (2019-07-03 18:36:44 +0000)
Sep 20 07:06:29 z8 zend[13843]: Using OpenSSL version OpenSSL 1.1.1c 28 May 2019
Sep 20 07:06:29 z8 zend[13843]: Using BerkeleyDB version Berkeley DB 6.2.23: (March 28, 2016)
Sep 20 07:06:29 z8 zend[13843]: Startup time: 2019-09-20 07:06:29
Sep 20 07:06:29 z8 zend[13843]: Default data directory /home/zend/.zen
Sep 20 07:06:29 z8 zend[13843]: Using data directory /home/zend/.zen
Sep 20 07:06:29 z8 zend[13843]: Using config file /home/zend/.zen/zen.conf
Sep 20 07:06:29 z8 zend[13843]: Using at most 125 connections (1024 file descriptors available)
Sep 20 07:06:29 z8 zend[13843]: Using 6 threads for script verification
Sep 20 07:06:29 z8 zend[13843]: Loading verifying key from /home/zend/.zcash-params/sprout-verifying.key
Sep 20 07:06:29 z8 zend[13843]: scheduler thread start
Sep 20 07:06:29 z8 zend[13843]: Loaded verifying key in 0.007709s seconds.
Sep 20 07:06:29 z8 zend[13843]: Loading Sapling (Spend) parameters from /home/zend/.zcash-params/sapling-spend.params
Sep 20 07:06:29 z8 zend[13843]: Loading Sapling (Output) parameters from /home/zend/.zcash-params/sapling-output.params
Sep 20 07:06:29 z8 zend[13843]: Loading Sapling (Sprout Groth16) parameters from /home/zend/.zcash-params/sprout-groth16.params
Sep 20 07:06:33 z8 zend[13843]: Loaded Sapling parameters in 3.382360s seconds.
Sep 20 07:06:33 z8 zend[13843]: Binding RPC on address 0.0.0.0 port 18231 failed.
Sep 20 07:06:33 z8 zend[13843]: HTTP: creating work queue of depth 16
Sep 20 07:06:33 z8 zend[13843]: HTTP: starting 4 worker threads
Sep 20 07:06:33 z8 zend[13843]: Using wallet wallet.dat
Sep 20 07:06:33 z8 zend[13843]: init message: Verifying wallet...
Sep 20 07:06:33 z8 zend[13843]: CDBEnv::Open: LogDir=/home/zend/.zen/database ErrorFile=/home/zend/.zen/db.log
Sep 20 07:06:33 z8 zend[13843]: Bound to [::]:9033
Sep 20 07:06:33 z8 zend[13843]: Bound to 0.0.0.0:9033
Sep 20 07:06:33 z8 zend[13843]: AddLocal([xxxx]:9033,4)
Sep 20 07:06:33 z8 zend[13843]: Cache configuration:
Sep 20 07:06:33 z8 zend[13843]: * Using 12.5MiB for block index database
Sep 20 07:06:33 z8 zend[13843]: * Using 29.9MiB for chain state database
Sep 20 07:06:33 z8 zend[13843]: * Using 57.6MiB for in-memory UTXO set
Sep 20 07:06:33 z8 zend[13843]: init message: Loading block index...
Sep 20 07:06:33 z8 zend[13843]: Opening LevelDB in /home/zend/.zen/blocks/index
Sep 20 07:06:33 z8 zend[13843]: Opened LevelDB successfully
Sep 20 07:06:33 z8 zend[13843]: Opening LevelDB in /home/zend/.zen/chainstate
Sep 20 07:06:33 z8 zend[13843]: Opened LevelDB successfully
Sep 20 07:06:37 z8 node[8806]: 2019-09-20 07:06:37 UTC -- Checking t-address...
Sep 20 07:06:37 z8 node[8806]: 2019-09-20 07:06:37 UTC -- Zend: Waiting - Loading block index...
Sep 20 07:06:53 z8 zend[13843]: LoadBlockIndexDB: last block file = 142
Sep 20 07:06:53 z8 zend[13843]: LoadBlockIndexDB: last block file info: CBlockFileInfo(blocks=336, size=28391351, heights=588543...588897, time=2019-09-18...2019-09-19)
Sep 20 07:06:53 z8 zend[13843]: Checking all blk files are present...
Sep 20 07:06:53 z8 zend[13843]: LoadBlockIndexDB: transaction index enabled
Sep 20 07:06:54 z8 zend[13843]: LoadBlockIndexDB: hashBestChain=0000000010bb4ccac7d057ca6c0c9ae30ab56bbfb6e169af68dd8af4bcffbd67 height=588897 date=2019-09-19 14:45:53 progress=0.997965
Sep 20 07:06:54 z8 zend[13843]: init message: Verifying blocks...
Sep 20 07:06:54 z8 zend[13843]: Verifying last 288 blocks at level 3
Sep 20 07:06:57 z8 node[8806]: 2019-09-20 07:06:57 UTC -- Checking t-address...
Sep 20 07:06:57 z8 node[8806]: 2019-09-20 07:06:57 UTC -- Zend: Waiting - Verifying blocks...
Sep 20 07:07:17 z8 node[8806]: 2019-09-20 07:07:17 UTC -- Checking t-address...
Sep 20 07:07:17 z8 node[8806]: 2019-09-20 07:07:17 UTC -- Zend: Waiting - Verifying blocks...
Sep 20 07:07:37 z8 node[8806]: 2019-09-20 07:07:37 UTC -- Checking t-address...
Sep 20 07:07:37 z8 node[8806]: 2019-09-20 07:07:37 UTC -- Zend: Waiting - Verifying blocks...
Sep 20 07:07:57 z8 node[8806]: 2019-09-20 07:07:57 UTC -- Checking t-address...
Sep 20 07:07:57 z8 node[8806]: 2019-09-20 07:07:57 UTC -- Zend: Waiting - Verifying blocks...
Sep 20 07:08:17 z8 node[8806]: 2019-09-20 07:08:17 UTC -- Checking t-address...
Sep 20 07:08:17 z8 node[8806]: 2019-09-20 07:08:17 UTC -- Zend: Waiting - Verifying blocks...
Sep 20 07:08:37 z8 node[8806]: 2019-09-20 07:08:37 UTC -- Checking t-address...
Sep 20 07:08:37 z8 node[8806]: 2019-09-20 07:08:37 UTC -- Zend: Waiting - Verifying blocks...
Sep 20 07:08:57 z8 node[8806]: 2019-09-20 07:08:57 UTC -- Checking t-address...
Sep 20 07:08:57 z8 node[8806]: 2019-09-20 07:08:57 UTC -- Zend: Waiting - Verifying blocks...
Sep 20 07:09:17 z8 node[8806]: 2019-09-20 07:09:17 UTC -- Checking t-address...
Sep 20 07:09:17 z8 node[8806]: 2019-09-20 07:09:17 UTC -- Zend: Waiting - Verifying blocks...
Sep 20 07:09:37 z8 node[8806]: 2019-09-20 07:09:37 UTC -- Checking t-address...
Sep 20 07:09:37 z8 node[8806]: 2019-09-20 07:09:37 UTC -- Zend: Waiting - Verifying blocks...
Sep 20 07:09:56 z8 zend[13843]: No coin database inconsistencies in last 165 blocks (9040 transactions)
Sep 20 07:09:56 z8 zend[13843]: block index 203007ms
Sep 20 07:09:56 z8 zend[13843]: init message: Loading wallet...
Sep 20 07:09:56 z8 zend[13843]: nFileVersion = 2001850
Sep 20 07:09:56 z8 zend[13843]: Keys: 102 plaintext, 0 encrypted, 102 w/ metadata, 102 total
Sep 20 07:09:56 z8 zend[13843]: ZKeys: 2 plaintext, 0 encrypted, 2 w/metadata, 2 total
Sep 20 07:09:56 z8 zend[13843]: wallet 139ms
Sep 20 07:09:56 z8 zend[13843]: init message: Rescanning...
Sep 20 07:09:56 z8 zend[13843]: Rescanning last 342697 blocks (from block 246200)...
Sep 20 07:09:56 z8 zend[13843]: zend: wallet/wallet.cpp:698: void CWallet::IncrementNoteWitnesses(const CBlockIndex*, const CBlock*, ZCIncrementalMerkleTree&): Assertion `(nd->witnessHeight == -1) || (nd->witnessHeight == pindex->nHeight - 1)' failed.
Sep 20 07:09:56 z8 systemd[1]: zend.service: Main process exited, code=killed, status=6/ABRT
Sep 20 07:09:56 z8 zen-cli[13865]: error: couldn't connect to server: unknown (code -1)
Sep 20 07:09:56 z8 zen-cli[13865]: (make sure server is running and you are connecting to the correct RPC port)
Sep 20 07:09:56 z8 systemd[1]: zend.service: Control process exited, code=exited status=1
Sep 20 07:09:56 z8 systemd[1]: zend.service: Unit entered failed state.
Sep 20 07:09:56 z8 systemd[1]: zend.service: Failed with result 'exit-code'.
Sep 20 07:09:56 z8 systemd[1]: zend.service: Service hold-off time over, scheduling restart.
Sep 20 07:09:56 z8 systemd[1]: Stopped ZenCash Node Server.

Did someone encounter a similar issue? It never happened on the "normal" vps nodes I was running before, so this one is new to me.

"Destination directory /home/zend/.zen does not exist"

Thank you very much for creating this repository.

I have an issue with the installation. If I run ansible-playbook nodes.yml, I get the following error:

fatal: [secnode01]: FAILED! => {"changed": false, "checksum": "e70143de7c0be14a1efc5bf31adea6de07cb0427", "msg": "Destination directory /home/zend/.zen does not exist"}

If I run ansible-playbook --check nodes.yml, I get the following:

fatal: [secnode01]: FAILED! => {"changed": false, "module_stderr": "sudo: unknown user: zend\nsudo: unable to initialize policy plugin\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

The container is created, and if I log in to it, I can see that the directory

/home/zend/.zen

exists. Do you have any idea what I can do to solve this problem?

The `lxc` module is not importable. Check the requirements.

Hi, I get the following error, which I did not manage to resolve myself:

failed: [master01] (item={'changed': False, 'stat': {'exists': False}, 'invocation': {'module_args': {'path': '/var/lib/lxc/sec01.zennode.club/.configured', 'follow': False, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1', 'get_md5': None}}, 'failed': False, 'item': 'secnode01', 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "failed": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/lxc/sec01.zennode.club/.configured"}}, "item": "secnode01", "stat": {"exists": false}}, "msg": "The lxc module is not importable. Check the requirements."

ZEN 2.0.24 Released

Hello. Are there any plans to make ansible-zencash-nodes compatible with ZEN 2.0.24?

Issue running playbook

When I run "ansible-playbook nodes.yml" after following the previous instructions, it fails and spits out this error:

ERROR! ERROR! inventory.yml:3: Expected key=value host variable assignment, got: {{vault_ssh_username}}

issue with playbook

I changed inventory.yml and set the master ansible_host to localhost, but keep getting this error:

TASK [Gathering Facts] **********************************************************************************************************************************************************************************************************************
fatal: [master01]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to remote host "localhost". Make sure this host can be reached over ssh", "unreachable": true}

PLAY RECAP **********************************************************************************************************************************************************************************************************************************
master01 : ok=0 changed=0 unreachable=1 failed=0

Can I restore nodes?

Say I have a few nodes running already and want to migrate to lxc containers. Is it possible to copy the wallets and nodetracker configs in order to restore them, kind of like a normal vps migration?
