qubinode / qubinode-installer

An easy-to-set-up OpenShift development kit powered by Red Hat Ansible.

Home Page: https://qubinode-installer.readthedocs.io/en/latest/

License: GNU General Public License v3.0

Languages: Shell 94.98%, Python 4.49%, Jinja 0.53%
Topics: openshift-cluster, ansible, kvm, workloads, okd-4, ocp

qubinode-installer's Introduction

What is Qubinode Installer?

Qubinode-installer is a utility that facilitates the quick deployment of a range of Red Hat products, such as Red Hat OpenShift Container Platform, Red Hat Identity Management, and Red Hat Satellite, on a single piece of hardware by leveraging the KVM hypervisor.

The benefits of using qubinode

The Qubinode Project provides a cost-effective way to quickly stand up a lab environment on a single piece of hardware. Your largest investment is the hardware itself, which is still cheaper than paying license fees for a type 1 hypervisor such as VMware vSphere or paying for AWS EC2 instances.

Motivation

The primary focus of this project is to make it easy for you to deploy an OpenShift cluster with production-like characteristics on a single bare-metal node. Please visit The Qubinode Project landing page for a step-by-step guide on how to get started.

What is OpenShift?

Red Hat OpenShift Container Platform (OCP) is Red Hat's private platform-as-a-service product, built around a core of application containers powered by Kubernetes on the foundation of Red Hat Enterprise Linux.

Resource requirements for OpenShift cluster

Baremetal Hardware

  • At least 32 GiB of memory; 128 GiB is recommended.
  • At least 300 GiB of dedicated SSD or NVMe storage; 1 TB is recommended.

The qubinode-installer can deploy a 3-node cluster on a system with 32 GiB of memory, but 128 GiB is recommended for the best experience; that allows the default deployment of a cluster with three control plane and three compute nodes.

Software

Qubinode Release Information

Qubinode Version   Ansible Version   Tag
Release 3.0        2.10              2.8.0

Testing

Features in v3.0

  • Support for RHEL 9.1
  • Support for CentOS Stream 9
  • VyOS router support
  • kcli support for managing VM deployments
  • Ansible Automation Platform 2.1
  • Red Hat Ceph Storage 5
  • kvm-install-vm support for managing VM deployments
  • Support for Tailscale VPN

See Documentation for additional details.

Deploying an OpenShift cluster
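For quick reference, these are the installer invocations that appear throughout the issue reports below; the interpretation of each flag (-p selects the product, -m the mode, -d tears down) is inferred from those reports:

./qubinode-installer -m host -p ocp              # prepare the KVM host
./qubinode-installer -p ocp -m deploy_nodes      # deploy the cluster VMs
./qubinode-installer -p ocp                      # run the OpenShift install
./qubinode-installer -p ocp -m deploy_nodes -d   # tear the nodes down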

Workloads

Qubinode Documentation

Training

Red Hat Courses

OpenShift

Ansible

Contribute

We value community and collaboration, so any contribution back to the project and community is always welcome.

If you would like to contribute to the qubinode project, please see the documentation below.

Ways to contribute

We kindly ask you to open an issue if you find anything wrong, or anything that could be improved, while using qubinode. If it's something you're able to fix, please fork the project, apply your fix, and submit a merge request; we'll review and approve it. Thank you for using qubinode; we look forward to your contributions back to the project.

Support

If you have any direct questions, reach out to us using the guide.

Known issues

Acknowledgments

Authors

qubinode-installer's People

Contributors

amaliver, amalivert, dingyeah33, flyemsafe, hackmd-deploy, joecharles33, tosin2013, urena-01


qubinode-installer's Issues

remove copies of IdM entries in hosts file

On each IdM reinstall, the hosts file still contains entries pointing to the old server.

Example:
192.168.1.x ocp-idm01 ocp-idm01.lab.testme
192.168.1.x ocp-idm01 ocp-idm01.lab.example
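A minimal cleanup sketch, assuming the reinstall play can simply drop every line mentioning the IdM hostname before re-adding the current one (the task and match pattern are illustrative, not the project's actual fix):

# Hypothetical Ansible task: remove all stale ocp-idm01 lines from /etc/hosts.
- name: remove stale IdM entries from /etc/hosts
  lineinfile:
    path: /etc/hosts
    regexp: 'ocp-idm01'
    state: absent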

[BUG] swygue.edge_host_setup : setup bridge interface


To Reproduce
Steps to reproduce the behavior:
1. ./qubinode-installer -m host -p ocp

Expected behavior
The run completes without failures.

Screenshots
TASK [swygue.edge_host_setup : setup bridge interface] ********************************************************************************************************
Saturday 31 August 2019 11:29:13 -0400 (0:00:00.277) 0:50:23.561 *******
failed: [localhost] (item={u'bridge': u'qubibr0', u'name': u'qubinet', u'mode': u'bridge'}) => {
"changed": false,
"item": {
"bridge": "qubibr0",
"mode": "bridge",
"name": "qubinet"
}
}

MSG:

AnsibleUndefinedVariable: 'kvm_host_ipaddr' is undefined

PLAY RECAP ****************************************************************************************************************************************************
localhost : ok=45 changed=13 unreachable=0 failed=1

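The failure is the undefined kvm_host_ipaddr rather than the bridge itself. A possible fix sketch, assuming the role can fall back to the address Ansible already gathered for the default interface (ansible_default_ipv4 is a standard fact; the fallback itself is an assumption):

# Hypothetical pre-task: derive kvm_host_ipaddr from gathered facts when it
# has not been set explicitly.
- name: default kvm_host_ipaddr to the host's primary address
  set_fact:
    kvm_host_ipaddr: "{{ ansible_default_ipv4.address }}"
  when: kvm_host_ipaddr is not defined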

[BUG] remove old dns entries on vm removal

Describe the bug
OpenShift pre-installation validation is failing because machines cannot communicate with each other due to old DNS entries.

To Reproduce
Steps to reproduce the behavior:

  1. ./qubinode-installer -p ocp -m deploy_nodes -d
  2. ./qubinode-installer -p ocp -m deploy_nodes
  3. ./qubinode-installer -p ocp

Expected behavior
The DNS server should return only one entry for the VM name.

Screenshots
$ dig ocp-master01.lab.example

; <<>> DiG 9.11.4-P2-RedHat-9.11.4-9.P2.el7 <<>> ocp-master01.lab.example
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17291
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 1, ADDITIONAL: 2

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;ocp-master01.lab.example. IN A

;; ANSWER SECTION:
ocp-master01.lab.example. 86400 IN A 192.168.1.45
ocp-master01.lab.example. 86400 IN A 192.168.1.23
ocp-master01.lab.example. 86400 IN A 192.168.1.60

;; AUTHORITY SECTION:
lab.example. 86400 IN NS ocp-dns01.lab.example.

;; ADDITIONAL SECTION:
ocp-dns01.lab.example. 1200 IN A 192.168.1.121

;; Query time: 0 msec
;; SERVER: 192.168.1.121#53(192.168.1.121)
;; WHEN: Mon Aug 26 01:12:51 EDT 2019
;; MSG SIZE rcvd: 141

[admin@ocp-node01 ~]$ ping ocp-master01.lab.example -c 1
PING ocp-master01.lab.example (192.168.1.60) 56(84) bytes of data.
^C
--- ocp-master01.lab.example ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

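One way to clear the stale records by hand (a sketch, assuming the DNS server at 192.168.1.121 accepts dynamic updates and that the key path below matches the deployment; both are assumptions):

# Hypothetical cleanup with nsupdate: delete every A record for the VM name
# so the next deploy re-adds only the current address. The same
# "update delete" form works for stale PTR records in the reverse zone.
nsupdate -k /etc/rndc.key <<'EOF'
server 192.168.1.121
update delete ocp-master01.lab.example. A
send
EOF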

[BUG] PTR Records still exist after nodes are removed

Describe the bug
PTR Records still exist after nodes are removed

To Reproduce
Steps to reproduce the behavior:

  1. ./qubinode-installer -m deploy_dns -p ocp

Expected behavior
The PTR records are cleared on the DNS server.

Screenshots
see attachment


[BUG] required repos not enabled on master node

Describe the bug
Installing docker on the master node fails. Looks like the RHSM repos aren't enabled.

To Reproduce
./qubinode-installer -p ocp -m deploy_nodes


Screenshots

TASK [install all required packages] *********************************************************************
Monday 26 August 2019  00:32:32 -0400 (0:06:43.063)       0:16:11.880 ********* 
fatal: [ocp-master01]: FAILED! => {
    "changed": false, 
    "rc": 126, 
    "results": [
        "net-tools-2.0-0.25.20131004git.el7.x86_64 providing net-tools is already installed", 
        "kexec-tools-2.0.15-33.el7.x86_64 providing kexec-tools is already installed", 
        "No package matching 'docker-1.13.1' found available, installed or updated"
    ]
}

MSG:

No package matching 'docker-1.13.1' found available, installed or updated

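A likely manual workaround, assuming the node is registered and the missing piece is the repo set that ships docker and the OCP 3.11 packages (the repo ids below appear elsewhere in these logs, but that they fix this node is an assumption):

# Hypothetical fix on the master node: enable the required repos, then rerun.
sudo subscription-manager repos \
  --enable=rhel-7-server-rpms \
  --enable=rhel-7-server-extras-rpms \
  --enable=rhel-7-server-ose-3.11-rpms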

[BUG] The dns entry should be deleted when running the ./qubinode-installer -m deploy_dns -p ocp -d command


To Reproduce
Steps to reproduce the behavior:
1. ./qubinode-installer -m deploy_dns -p ocp -d

Expected behavior
The dns entry should be deleted when running the ./qubinode-installer -m deploy_dns -p ocp -d command

Screenshots
cat inventory/hosts
localhost ansible_connection=local ansible_user=root

[masters]

[nodes]

[infra]

[lbs]

[dns]
ocp-dns01 ansible_host=192.168.1.144 ansible_user=admin
ocp-dns01 ansible_host= ansible_user=admin
ocp-dns01 ansible_host=192.168.1.142 ansible_user=admin
ocp-dns01 ansible_host=192.168.1.99 ansible_user=admin
ocp-dns01 ansible_host=192.168.1.171 ansible_user=admin

[qbnodes:children]
lbs
infra
nodes
masters


resize_os_disk_results is not iterable

TASK [ansible-role-rhel7-kvm-cloud-init : Push base image onto vm operating system disk] **********************************************************************
Thursday 08 August 2019 12:07:46 -0400 (0:00:00.031) 0:00:03.531 *******
fatal: [localhost]: FAILED! => {}

MSG:

The conditional check '"Resize operation completed with no errors" in resize_os_disk_results.stdout' failed. The error was: error while evaluating conditional ("Resize operation completed with no errors" in resize_os_disk_results.stdout): Unable to look up a name or access an attribute in template string ({% if "Resize operation completed with no errors" in resize_os_disk_results.stdout %} True {% else %} False {% endif %}).
Make sure your variable name does not contain invalid characters like '-': argument of type 'StrictUndefined' is not iterable
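A defensive sketch for that conditional, assuming the registered result may legitimately be absent on some code paths (a guard, not the role's actual fix):

# Hypothetical guard: check the variable exists before inspecting stdout.
when:
  - resize_os_disk_results is defined
  - resize_os_disk_results.stdout is defined
  - '"Resize operation completed with no errors" in resize_os_disk_results.stdout'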

The task includes an option with an undefined variable. The error was: 'idm_admin_password'

TASK [ansible-idm : Run the installer] ***********************************************************************************************
Sunday 18 August 2019 18:00:17 -0400 (0:00:00.931) 0:04:44.885 *********
fatal: [ocp-dns01]: FAILED! => {
"changed": true,
"cmd": "/root/ipa-srv-install.sh",
"delta": "0:00:00.752838",
"end": "2019-08-18 18:00:19.091466",
"rc": 2,
"start": "2019-08-18 18:00:18.338628"
}

STDERR:

ipapython.install.cli: DEBUG Logging to /var/log/ipaserver-install.log
Usage: ipa-server-install [options]

ipa-server-install: error: option admin-password: Password must be at least 8 characters long
ipapython.admintool: DEBUG File "/usr/lib/python2.7/site-packages/ipapython/admintool.py", line 178, in execute
return_value = self.run()
File "/usr/lib/python2.7/site-packages/ipapython/install/cli.py", line 310, in run
self.option_parser.error("{0}: {1}".format(desc, e))
File "/usr/lib64/python2.7/optparse.py", line 1583, in error
self.exit(2, "%s: error: %s\n" % (self.get_prog_name(), msg))
File "/usr/lib64/python2.7/optparse.py", line 1573, in exit
sys.exit(status)

ipapython.admintool: DEBUG The ipa-server-install command failed, exception: SystemExit: 2
ipapython.admintool: ERROR The ipa-server-install command failed.

MSG:

non-zero return code

Transaction check error: file /usr/bin/kubectl from install of atomic-openshift-clients-3.11.135-1.git.0.b309f70.el7.x86_64 conflicts with file from package kubernetes-client-1.5.2-0.7.git269f928.el7.x86_64

Total download size: 23 M
Installed size: 119 M
Downloading packages:
atomic-openshift-clients-3.11.135-1.git.0.b309f70.el7.x86_64.rpm | 23 MB 00:00:02
Running transaction check
Running transaction test

Transaction check error:
file /usr/bin/kubectl from install of atomic-openshift-clients-3.11.135-1.git.0.b309f70.el7.x86_64 conflicts with file from package kubernetes-client-1.5.2-0.7.git269f928.el7.x86_64

$ sudo yum remove kubernetes-client-1.5.2-0.7.git269f928.el7.x86_64
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
Resolving Dependencies
--> Running transaction check
---> Package kubernetes-client.x86_64 0:1.5.2-0.7.git269f928.el7 will be erased
--> Processing Dependency: /usr/bin/kubectl for package: cockpit-kubernetes-176-2.el7.x86_64
--> Restarting Dependency Resolution with new changes.
--> Running transaction check
---> Package cockpit-kubernetes.x86_64 0:176-2.el7 will be erased
--> Finished Dependency Resolution

Dependencies Resolved

===============================================================================================================================================================
Package Arch Version Repository Size

Removing:
kubernetes-client x86_64 1.5.2-0.7.git269f928.el7 @rhel-7-server-extras-rpms 79 M
Removing for dependencies:
cockpit-kubernetes x86_64 176-2.el7 @rhel-7-server-ose-3.11-rpms 10 M

Transaction Summary

Remove 1 Package (+1 Dependent package)

Installed size: 89 M
Is this ok [y/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Erasing : cockpit-kubernetes-176-2.el7.x86_64 1/2
Erasing : kubernetes-client-1.5.2-0.7.git269f928.el7.x86_64 2/2
Verifying : kubernetes-client-1.5.2-0.7.git269f928.el7.x86_64 1/2
Verifying : cockpit-kubernetes-176-2.el7.x86_64

Create a password length check for ipa server

Thursday 15 August 2019 17:56:21 -0400 (0:00:00.812) 0:14:38.095 *******
fatal: [ocp-dns01]: FAILED! => {
"changed": true,
"cmd": "/root/ipa-srv-install.sh",
"delta": "0:00:00.608630",
"end": "2019-08-15 17:56:21.930621",
"rc": 2,
"start": "2019-08-15 17:56:21.321991"
}

STDERR:

ipapython.install.cli: DEBUG Logging to /var/log/ipaserver-install.log
Usage: ipa-server-install [options]

ipa-server-install: error: option admin-password: Password must be at least 8 characters long
ipapython.admintool: DEBUG File "/usr/lib/python2.7/site-packages/ipapython/admintool.py", line 178, in execute
return_value = self.run()
File "/usr/lib/python2.7/site-packages/ipapython/install/cli.py", line 310, in run
self.option_parser.error("{0}: {1}".format(desc, e))
File "/usr/lib64/python2.7/optparse.py", line 1583, in error
self.exit(2, "%s: error: %s\n" % (self.get_prog_name(), msg))
File "/usr/lib64/python2.7/optparse.py", line 1573, in exit
sys.exit(status)

ipapython.admintool: DEBUG The ipa-server-install command failed, exception: SystemExit: 2
ipapython.admintool: ERROR The ipa-server-install command failed.
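A minimal sketch of the requested pre-flight check, assuming the playbook keeps the password in idm_admin_password (the variable named in the earlier failure):

# Hypothetical validation task: fail early instead of letting
# ipa-server-install reject a short password mid-install.
- name: ensure the IdM admin password is at least 8 characters
  assert:
    that:
      - idm_admin_password is defined
      - idm_admin_password | length >= 8
    fail_msg: "idm_admin_password must be at least 8 characters long"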

Getting a failure message when installing cockpit

Installed: Red Hat Enterprise Linux Server release 7.6 (Maipo)
Enabled the extras repo: yum-config-manager --enable rhel-7-server-extras-rpms
Ran: ./start_deployment.sh -k setup

TASK [swygue.edge_host_setup : ensure libvirt packages are installed] *****************************************************************************************
Wednesday 07 August 2019 16:13:32 -0400 (0:00:00.042) 0:00:01.805 ******
fatal: [localhost]: FAILED! => {
"changed": false,
"changes": {
"installed": [
"virt-install",
"libvirt-daemon-config-network",
"libvirt-daemon-kvm",
"qemu-kvm",
"nfs-utils",
"libvirt-daemon",
"libvirt-client",
"virt-top",
"cockpit-bridge",
"cockpit-networkmanager",
"cockpit-packagekit",
"cockpit-shell",
"cockpit-storaged",
"cockpit-ws",
"cockpit-kubernetes",
"cockpit-pcp"
]
},
"rc": 1,

....
MSG:

Error: Package: cockpit-shell-126-1.el7.noarch (rhel-7-server-extras-rpms)
Requires: cockpit-bridge = 126-1.el7
Available: cockpit-bridge-0.53-3.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 0.53-3.el7
Available: cockpit-bridge-0.58-2.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 0.58-2.el7
Available: cockpit-bridge-0.63-1.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 0.63-1.el7
Available: cockpit-bridge-0.71-1.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 0.71-1.el7
Available: cockpit-bridge-0.77-3.1.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 0.77-3.1.el7
Available: cockpit-bridge-0.93-3.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 0.93-3.el7
Available: cockpit-bridge-0.96-2.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 0.96-2.el7
Available: cockpit-bridge-0.103-1.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 0.103-1.el7
Available: cockpit-bridge-0.108-1.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 0.108-1.el7
Available: cockpit-bridge-0.114-2.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 0.114-2.el7
Available: cockpit-bridge-118-2.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 118-2.el7
Available: cockpit-bridge-122-3.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 122-3.el7
Available: cockpit-bridge-126-1.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 126-1.el7
Available: cockpit-bridge-131-3.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 131-3.el7
Available: cockpit-bridge-135-4.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 135-4.el7
Available: cockpit-bridge-138-6.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 138-6.el7
Available: cockpit-bridge-138-9.el7.x86_64 (rhel-7-server-rpms)
cockpit-bridge = 138-9.el7
Available: cockpit-bridge-138-10.el7_4.x86_64 (rhel-7-server-rpms)
cockpit-bridge = 138-10.el7_4
Available: cockpit-bridge-154-3.el7.x86_64 (rhel-7-server-rpms)
cockpit-bridge = 154-3.el7
Available: cockpit-bridge-173-7.el7.x86_64 (rhel-7-server-rpms)
cockpit-bridge = 173-7.el7
Available: cockpit-bridge-173.1-1.el7.x86_64 (rhel-7-server-rpms)
cockpit-bridge = 173.1-1.el7
Available: cockpit-bridge-173.2-1.el7.x86_64 (rhel-7-server-rpms)
cockpit-bridge = 173.2-1.el7
Installing: cockpit-bridge-195.1-1.el7.x86_64 (rhel-7-server-rpms)
cockpit-bridge = 195.1-1.el7
Error: Package: cockpit-shell-0.63-1.el7.noarch (rhel-7-server-extras-rpms)
Requires: cockpit-bridge = 0.63-1.el7
Available: cockpit-bridge-0.53-3.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 0.53-3.el7
Available: cockpit-bridge-0.58-2.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 0.58-2.el7
Available: cockpit-bridge-0.63-1.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 0.63-1.el7
Available: cockpit-bridge-0.71-1.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 0.71-1.el7
Available: cockpit-bridge-0.77-3.1.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 0.77-3.1.el7
Available: cockpit-bridge-0.93-3.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 0.93-3.el7
Available: cockpit-bridge-0.96-2.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 0.96-2.el7
Available: cockpit-bridge-0.103-1.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 0.103-1.el7
Available: cockpit-bridge-0.108-1.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 0.108-1.el7
Available: cockpit-bridge-0.114-2.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 0.114-2.el7
Available: cockpit-bridge-118-2.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 118-2.el7
Available: cockpit-bridge-122-3.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 122-3.el7
Available: cockpit-bridge-126-1.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 126-1.el7
Available: cockpit-bridge-131-3.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 131-3.el7
Available: cockpit-bridge-135-4.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 135-4.el7
Available: cockpit-bridge-138-6.el7.x86_64 (rhel-7-server-extras-rpms)
cockpit-bridge = 138-6.el7
Available: cockpit-bridge-138-9.el7.x86_64 (rhel-7-server-rpms)
cockpit-bridge = 138-9.el7
Available: cockpit-bridge-138-10.el7_4.x86_64 (rhel-7-server-rpms)
cockpit-bridge = 138-10.el7_4
Available: cockpit-bridge-154-3.el7.x86_64 (rhel-7-server-rpms)
cockpit-bridge = 154-3.el7
Available: cockpit-bridge-173-7.el7.x86_64 (rhel-7-server-rpms)
cockpit-bridge = 173-7.el7
Available: cockpit-bridge-173.1-1.el7.x86_64 (rhel-7-server-rpms)
cockpit-bridge = 173.1-1.el7
Available: cockpit-bridge-173.2-1.el7.x86_64 (rhel-7-server-rpms)
cockpit-bridge = 173.2-1.el7
Installing: cockpit-bridge-195.1-1.el7.x86_64 (rhel-7-server-rpms)
cockpit-bridge = 195.1-1.el7

vms stuck in paused state

virsh list
Id    Name           State
249   ocp-master01   paused
253   ocp-node01     paused
257   ocp-node02     paused
261   ocp-infra01    paused
265   ocp-infra02    paused
269   ocp-lb01       running
273   ocp-dns01      paused

[root@rhel ~]# rm image.qcow2

[root@rhel ~]# tail /var/log/libvirt/qemu/ocp-master01.log
block I/O error in device 'drive-virtio-disk0': No space left on device (28)
block I/O error in device 'drive-virtio-disk0': No space left on device (28)
block I/O error in device 'drive-virtio-disk0': No space left on device (28)
block I/O error in device 'drive-virtio-disk0': No space left on device (28)
block I/O error in device 'drive-virtio-disk0': No space left on device (28)
block I/O error in device 'drive-virtio-disk0': No space left on device (28)
block I/O error in device 'drive-virtio-disk0': No space left on device (28)
block I/O error in device 'drive-virtio-disk0': No space left on device (28)
block I/O error in device 'drive-virtio-disk0': No space left on device (28)
block I/O error in device 'drive-virtio-disk0': No space left on device (28)
[root@rhel ~]#
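Removing the qcow2 above frees space, but the guests stay paused until resumed. Standard checks and recovery (pool name "default" is an assumption):

# Confirm the images filesystem is full, then resume each guest once space
# has been freed.
df -h /var/lib/libvirt/images
virsh pool-info default
virsh resume ocp-master01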

ERROR authentication unavailable: no polkit agent available to authenticate action 'org.libvirt.unix.manage'

virt-install --connect qemu:///system --import --name ocp-master01 --ram 16384 --vcpus 4 --disk /var/lib/libvirt/images/ocp-master01_vda.qcow2,format=qcow2,bus=virtio --disk /var/lib/libvirt/data/ocp-master01/cidata.iso,device=cdrom --network network=lunchbox --autostart --os-type=linux --os-variant=rhel7 --noautoconsole
ERROR authentication unavailable: no polkit agent available to authenticate action 'org.libvirt.unix.manage'
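Two common workarounds for the polkit error, assuming virt-install is being run as an unprivileged user with no polkit agent in the session (group name is the stock libvirt one on RHEL):

# Hypothetical fixes: run the command as root, or grant the user libvirt
# access so polkit stops requiring an agent. Log out and back in afterwards.
sudo usermod -aG libvirt "$USER"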

[BUG] ERROR! Invalid vars_files entry found:

Describe the bug
ERROR! Invalid vars_files entry found:

To Reproduce
Steps to reproduce the behavior:

  1. ./qubinode-installer -m deploy_dns

Expected behavior
the dns server should be configured and deployed.

Screenshots
ERROR! Invalid vars_files entry found: {u'include_role': {u'name': u'ansible-role-rhel7-kvm-cloud-init'}, u'name': u'Create KVM VM for DNS Server', u'vars': {u'vm_name': u'ocp-dns01', u'vm_memory': u'2048', u'enable': True, u'vm_root_disk_size': u'20G', u'vm_recreate': False, u'extra_storage': [], u'inventory_group': u'dns', u'vm_teardown': False, u'vm_cpu': u'2'}}
vars_files entries should be either a string type or a list of string types after template expansion
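The dumped entry is a task dict, so it appears a task was placed under vars_files. A sketch of the corrected layout, assuming the intent was to include the role as a task (the file path and the subset of vars shown are illustrative):

# Hypothetical corrected playbook layout: vars_files lists YAML files only,
# and the role inclusion moves under tasks.
vars_files:
  - vars/all.yml
tasks:
  - name: Create KVM VM for DNS Server
    include_role:
      name: ansible-role-rhel7-kvm-cloud-init
    vars:
      vm_name: ocp-dns01
      inventory_group: dns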


The conditional check 'vm_recreate or vm_teardown' failed.

TASK [ansible-role-rhel7-kvm-cloud-init : Destroy vm] *********************************************************************************************************
Thursday 08 August 2019 01:15:00 -0400 (0:00:00.437) 0:00:00.738 *******
fatal: [localhost]: FAILED! => {}

MSG:

The conditional check 'vm_recreate or vm_teardown' failed. The error was: error while evaluating conditional (vm_recreate or vm_teardown): 'dict object' has no attribute 'vm_teardown'

The error appears to be in '/home/tosin/openshift-home-lab/playbooks/roles/ansible-role-rhel7-kvm-cloud-init/tasks/teardown-vm.yml': line 2, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:


  • name: Destroy vm
    ^ here

...ignoring

TASK [ansible-role-rhel7-kvm-cloud-init : Undefine vm] ********************************************************************************************************
Thursday 08 August 2019 01:15:00 -0400 (0:00:00.028) 0:00:00.767 *******
fatal: [localhost]: FAILED! => {}

MSG:

The conditional check 'vm_recreate or vm_teardown' failed. The error was: error while evaluating conditional (vm_recreate or vm_teardown): 'dict object' has no attribute 'vm_teardown'

The error appears to be in '/home/tosin/openshift-home-lab/playbooks/roles/ansible-role-rhel7-kvm-cloud-init/tasks/teardown-vm.yml': line 9, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

  • name: Undefine vm
    ^ here

PLAY RECAP ****************************************************************************************************************************************************
localhost : ok=5 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=1
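A common guard for this failure, assuming an unset flag should simply count as false:

# Hypothetical conditional: default both flags so the check never evaluates
# an undefined variable.
when: (vm_recreate | default(false)) or (vm_teardown | default(false))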

ansible-role-rhel7-kvm-cloud-init : get VM ip FAILS

TASK [ansible-role-rhel7-kvm-cloud-init : get VM ip] ************************************************************************************
Saturday 17 August 2019 11:06:09 -0400 (0:00:00.220) 0:02:07.149 *******
FAILED - RETRYING: get VM ip (30 retries left).
FAILED - RETRYING: get VM ip (29 retries left).
FAILED - RETRYING: get VM ip (28 retries left).
FAILED - RETRYING: get VM ip (27 retries left).
FAILED - RETRYING: get VM ip (26 retries left).
FAILED - RETRYING: get VM ip (25 retries left).
FAILED - RETRYING: get VM ip (24 retries left).
FAILED - RETRYING: get VM ip (23 retries left).
FAILED - RETRYING: get VM ip (22 retries left).
FAILED - RETRYING: get VM ip (21 retries left).
FAILED - RETRYING: get VM ip (20 retries left).
FAILED - RETRYING: get VM ip (19 retries left).
FAILED - RETRYING: get VM ip (18 retries left).
FAILED - RETRYING: get VM ip (17 retries left).
FAILED - RETRYING: get VM ip (16 retries left).
FAILED - RETRYING: get VM ip (15 retries left).
FAILED - RETRYING: get VM ip (14 retries left).
FAILED - RETRYING: get VM ip (13 retries left).
FAILED - RETRYING: get VM ip (12 retries left).
FAILED - RETRYING: get VM ip (11 retries left).
FAILED - RETRYING: get VM ip (10 retries left).
FAILED - RETRYING: get VM ip (9 retries left).
FAILED - RETRYING: get VM ip (8 retries left).
FAILED - RETRYING: get VM ip (7 retries left).
FAILED - RETRYING: get VM ip (6 retries left).
FAILED - RETRYING: get VM ip (5 retries left).
FAILED - RETRYING: get VM ip (4 retries left).
FAILED - RETRYING: get VM ip (3 retries left).
FAILED - RETRYING: get VM ip (2 retries left).
FAILED - RETRYING: get VM ip (1 retries left).
fatal: [localhost]: FAILED! => {
"attempts": 30,
"changed": false,
"cmd": [
"/usr/local/bin/getvmip",
"-r",
"ocp-node01"
],
"delta": "0:00:00.020667",
"end": "2019-08-17 11:06:45.816863",
"rc": 0,
"start": "2019-08-17 11:06:45.796196"
}
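The helper exits 0 with empty output, so the retries never see an address. For comparison, libvirt can report the DHCP lease directly (standard virsh subcommand; domain name taken from the log):

# Cross-check what /usr/local/bin/getvmip should have returned.
virsh domifaddr ocp-node01 --source lease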

issue with all.yml line 63

Ansible is installed
/home/tosin/openshift-home-lab/playbooks/vars/vault.yml is encrypted

Downloading required roles

  • changing role ansible-role-rhel7-kvm-cloud-init from master to master
  • extracting ansible-role-rhel7-kvm-cloud-init to /home/tosin/openshift-home-lab/playbooks/roles/ansible-role-rhel7-kvm-cloud-init
  • ansible-role-rhel7-kvm-cloud-init (master) was installed successfully
  • changing role swygue.edge_host_setup from master to master
  • extracting swygue.edge_host_setup to /home/tosin/openshift-home-lab/playbooks/roles/swygue.edge_host_setup
  • swygue.edge_host_setup (master) was installed successfully
  • changing role ansible-idm from master to master
  • extracting ansible-idm to /home/tosin/openshift-home-lab/playbooks/roles/ansible-idm
  • ansible-idm (master) was installed successfully
  • changing role swygue-redhat-subscription from master to master
  • extracting swygue-redhat-subscription to /home/tosin/openshift-home-lab/playbooks/roles/swygue-redhat-subscription
  • swygue-redhat-subscription (master) was installed successfully
  • changing role ansible-resolv from master to master
  • extracting ansible-resolv to /home/tosin/openshift-home-lab/playbooks/roles/ansible-resolv
  • ansible-resolv (master) was installed successfully

ERROR! Syntax Error while loading YAML.
found unexpected end of stream

The error appears to be in '/home/tosin/openshift-home-lab/playbooks/vars/all.yml': line 63, column 1, but may
be elsewhere in the file depending on the exact syntax problem.

(specified line no longer in file, maybe it changed?)
Checking if password less suoders is setup for tosin.
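Quick ways to pinpoint the break before rerunning the installer (stock tools; the playbook path is taken from elsewhere in these issues):

# Validate the vars file on its own, then syntax-check a playbook that
# loads it.
python -c "import yaml; yaml.safe_load(open('playbooks/vars/all.yml'))"
ansible-playbook --syntax-check playbooks/deploy_vms.yml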

[BUG] Unable to resolve docker-registry.default.svc

Describe the bug
Cannot deploy catalog items because nodes cannot resolve docker-registry.default.svc

To Reproduce
Log into OpenShift UI and try to launch a catalog item, for example, Jenkins.

Expected behavior
The catalog item should deploy a pod and expose the service.

Screenshots


Events:
  Type     Reason          Age              From                                  Message
  ----     ------          ----             ----                                  -------
  Normal   Scheduled       3m               default-scheduler                     Successfully assigned firstdeployment/jenkins-example-2-2jvkx to ocp-node02.lunchnet.example
  Normal   SandboxChanged  3m (x3 over 3m)  kubelet, ocp-node02.lunchnet.example  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulling         2m (x3 over 3m)  kubelet, ocp-node02.lunchnet.example  pulling image "docker-registry.default.svc:5000/openshift/jenkins@sha256:a33dbdce3ffd886d3aa1d2b0c259a3e55f56dc8feaf4b0f2d96189ccb63c42ee"
  Warning  Failed          2m (x3 over 3m)  kubelet, ocp-node02.lunchnet.example  Failed to pull image "docker-registry.default.svc:5000/openshift/jenkins@sha256:a33dbdce3ffd886d3aa1d2b0c259a3e55f56dc8feaf4b0f2d96189ccb63c42ee": rpc error: code = Unknown desc = Get https://docker-registry.default.svc:5000/v2/: dial tcp: lookup docker-registry.default.svc on 172.24.24.106:53: no such host
  Warning  Failed          2m (x3 over 3m)  kubelet, ocp-node02.lunchnet.example  Error: ErrImagePull
  Normal   BackOff         2m (x6 over 3m)  kubelet, ocp-node02.lunchnet.example  Back-off pulling image "docker-registry.default.svc:5000/openshift/jenkins@sha256:a33dbdce3ffd886d3aa1d2b0c259a3e55f56dc8feaf4b0f2d96189ccb63c42ee"
  Warning  Failed     

Additional context

[admin@ocp-node02 ~]$ nslookup docker-registry.default.svc
Server:		172.24.24.106
Address:	172.24.24.106#53

** server can't find docker-registry.default.svc: NXDOMAIN

[BUG] swygue-redhat-subscription : these repos will be enabled


To Reproduce
Steps to reproduce the behavior:

  1. ./qubinode-installer -m deploy_dns

Expected behavior
The DNS VM should enable the correct repos for IdM.

Screenshots
TASK [swygue-redhat-subscription : these repos will be enabled] ***********************************************************************************************
Friday 23 August 2019 17:44:06 -0400 (0:00:00.025) 0:00:16.566 *********
fatal: [ocp-dns01]: FAILED! => {}

MSG:

The conditional check 'repos_to_enable|length > 0' failed. The error was: Unexpected templating type error occurred on ({% if repos_to_enable|length > 0 %} True {% else %} False {% endif %}): object of type 'bool' has no len()

The error appears to have been in '/home/admin/qubinode-installer/playbooks/roles/swygue-redhat-subscription/tasks/repos.yml': line 44, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

  • name: these repos will be enabled
    ^ here
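A guard sketch, assuming repos_to_enable can arrive as a bare boolean when no repos are requested (whether that is the actual root cause is an assumption):

# Hypothetical fix: only take a length once the value is list-like.
when: repos_to_enable is iterable and repos_to_enable | length > 0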


This system has no repositories available through subscriptions on the VMs

TASK [Enable rhel-7-server-rpms repository] *******************************************************************************************************************
Friday 16 August 2019 09:59:51 -0400 (0:00:07.378) 0:00:10.648 *********
fatal: [ocp-node01.lab.example]: FAILED! => {
"changed": false
}

MSG:

This system has no repositories available through subscriptions

fatal: [ocp-infra02.lab.example]: FAILED! => {
"changed": false
}

MSG:

This system has no repositories available through subscriptions

fatal: [ocp-infra01.lab.example]: FAILED! => {
"changed": false
}

MSG:

This system has no repositories available through subscriptions

fatal: [ocp-node02.lab.example]: FAILED! => {
"changed": false
}

MSG:

This system has no repositories available through subscriptions

No package matching 'cockpit-packagekit' found available, installed or updated

TASK [swygue.edge_host_setup : ensure libvirt packages are installed] *****************************************************************************************
Tuesday 06 August 2019 19:28:45 -0400 (0:00:00.043) 0:00:01.738 ********
fatal: [localhost]: FAILED! => {
"changed": false,
"rc": 126,
"results": [
"No package matching 'cockpit-packagekit' found available, installed or updated"
]
}

MSG:

No package matching 'cockpit-packagekit' found available, installed or updated

PLAY RECAP ****************************************************************************************************************************************************
localhost

The fix is to enable the extras repo: yum-config-manager --enable rhel-7-server-extras-rpms

bad address at /etc/hosts line 3

tosin@rhel:~/openshift-home-lab (enhance-deploy M) $ sudo systemctl status libvirtd
● libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2019-08-08 10:38:11 EDT; 36s ago
Docs: man:libvirtd(8)
https://libvirt.org
Main PID: 57563 (libvirtd)
Tasks: 19 (limit: 32768)
CGroup: /system.slice/libvirtd.service
├─57504 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
├─57505 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
└─57563 /usr/sbin/libvirtd

Aug 08 10:38:11 rhel.example systemd[1]: Starting Virtualization daemon...
Aug 08 10:38:11 rhel.example systemd[1]: Started Virtualization daemon.
Aug 08 10:38:11 rhel.example dnsmasq[57504]: bad address at /etc/hosts line 3
Aug 08 10:38:11 rhel.example dnsmasq[57504]: read /etc/hosts - 2 addresses
Aug 08 10:38:11 rhel.example dnsmasq[57504]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Aug 08 10:38:11 rhel.example dnsmasq-dhcp[57504]: read /var/lib/libvirt/dnsmasq/default.hostsfile
tosin@rhel:~/openshift-home-lab (enhance-deploy M) $ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
ocp-idm01 ocp-idm01.lab.example
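The third hosts line has no IP address, which is what dnsmasq rejects. A valid entry must lead with an address (the one below is a placeholder, following the 192.168.1.x convention used elsewhere in these issues):

192.168.1.x ocp-idm01 ocp-idm01.lab.example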

[BUG] vm entries not being added to the correct inventory group

Describe the bug
The task below in playbooks/roles/ansible-role-rhel7-kvm-cloud-init/tasks/post-config.yml should add the VMs to the correct inventory group, but they are all being placed under the [dns] group; a possible fix is sketched after the task.

- name: ensure host is added to inventory
  lineinfile:
    path: "{{ inventory_file }}"
    line: "{{ vm_name }}   ansible_host={{ vm_ip.stdout }} ansible_user={{ admin_user }}"
    insertafter: '{{ inventory_group }}'
    state: present
  register: update_inventory
  when: update_inventory and vm_ip.stdout is defined
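A plausible fix sketch: anchor insertafter to the literal group header so the pattern cannot match other occurrences of the group name, and drop the self-referencing when condition (it tests update_inventory, the very variable the task registers). Both changes are assumptions about the intent:

# Hypothetical corrected task.
- name: ensure host is added to inventory
  lineinfile:
    path: "{{ inventory_file }}"
    line: "{{ vm_name }}   ansible_host={{ vm_ip.stdout }} ansible_user={{ admin_user }}"
    insertafter: '^\[{{ inventory_group }}\]'
    state: present
  when: vm_ip.stdout is defined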

To Reproduce
Run ansible-playbook playbooks/deploy_vms.yml


Screenshots

[dns]
ocp-dns01   ansible_host=172.24.24.102 ansible_user=qubi
ocp-master01   ansible_host=172.24.24.106 ansible_user=qubi
ocp-node01   ansible_host=172.24.24.105 ansible_user=qubi
ocp-node02   ansible_host=172.24.24.114 ansible_user=qubi
ocp-infra01   ansible_host=172.24.24.118 ansible_user=qubi
ocp-infra02   ansible_host=172.24.24.116 ansible_user=qubi
ocp-lb01   ansible_host=172.24.24.113 ansible_user=qubi


[BUG] delete PTR Records out of dns on delete


To Reproduce
Steps to reproduce the behavior:

  1. run ./qubinode-installer -m deploy_nodes -p ocp -d

Expected behavior
PTR records are removed from the DNS server

Screenshots
PTR records still exist in the DNS server

[BUG] docker-storage-setup /dev/dev/vdb is not a valid block device

Describe the bug
Unable to set up /dev/vdb as a valid block device.

To Reproduce

./qubinode-installer -p ocp -m deploy_nodes


Screenshots

TASK [Run  docker-storage-setup] *************************************************************************
Monday 26 August 2019  01:11:03 -0400 (0:00:01.034)       0:12:07.298 ********* 
fatal: [ocp-lb01]: FAILED! => {
    "changed": true, 
    "cmd": [
        "docker-storage-setup"
    ], 
    "delta": "0:00:00.094920", 
    "end": "2019-08-26 01:11:03.749980", 
    "rc": 1, 
    "start": "2019-08-26 01:11:03.655060"
}

STDERR:

INFO: Volume group backing root filesystem could not be determined
ERROR: /dev//dev/vdb is not a valid block device.


MSG:

non-zero return code
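The doubled "/dev//dev/" suggests the device was configured with its full path and docker-storage-setup prepended /dev/ again. A hedged config sketch (the file and variable are the stock docker-storage-setup ones; that the prefix is the root cause is an assumption):

# Hypothetical /etc/sysconfig/docker-storage-setup: pass the bare device name
# so the tool's own /dev/ prefix does not produce /dev//dev/vdb.
DEVS="vdb"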


[BUG] fatal: ./qubinode-installer -m deploy fails

TASK [ansible-role-rhel7-kvm-cloud-init : get vm gateway from DHCP server] fails with the following:

fatal: [localhost]: UNREACHABLE! => {
"changed": false,
"unreachable": true
}

MSG:

Failed to connect to the host via ssh: key_load_public: invalid format
Warning: Permanently added '192.168.1.90' (ECDSA) to the list of known hosts.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).

The `libvirt` module is not importable. Check the requirements. (on first run)

TASK [ansible-role-rhel7-kvm-cloud-init : Get a list of all instances] ****************************************************************************************
Tuesday 06 August 2019 19:16:38 -0400 (0:00:00.181) 0:00:00.267 ********
fatal: [localhost]: FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false
}

MSG:

The libvirt module is not importable. Check the requirements.

PLAY RECAP ****************************************************************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
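The Ansible virt modules need the libvirt Python bindings on the controller. A likely first-run fix on RHEL 7 (the package name is the RHEL 7 one; that the bindings are the missing requirement is an assumption):

# Hypothetical fix: install the bindings, then rerun the installer.
sudo yum install -y libvirt-python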

[BUG]Could not find or access 'qubinode-installer/playbooks/networks.yml'




Screenshots

TASK [swygue.edge_host_setup : configure libvirt network] ************************************************
Sunday 25 August 2019  18:54:44 -0400 (0:00:01.579)       0:18:04.230 ********* 
fatal: [localhost]: FAILED! => {
    "reason": "Unable to retrieve file contents\nCould not find or access '/home/admin/qubinode-installer/playbooks/networks.yml' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"
}

PLAY RECAP ***********************************************************************************************
localhost                  : ok=36   changed=3    unreachable=0    failed=1   


[BUG] SSH keyname is incorrect


To Reproduce
Steps to reproduce the behavior:

  1. ./qubinode-installer -m deploy_dns


Screenshots
$ ls ~/.ssh/
id_rsa.pub id_rsa.pub.pub known_hosts

TASK [ansible-role-rhel7-kvm-cloud-init : get vm gateway from DHCP server] ************************************************************************************
Friday 23 August 2019 17:40:49 -0400 (0:00:00.390) 0:01:09.906 *********
fatal: [localhost]: UNREACHABLE! => {
"changed": false,
"unreachable": true
}

MSG:

Failed to connect to the host via ssh: key_load_public: invalid format
Warning: Permanently added '192.168.1.125' (ECDSA) to the list of known hosts.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
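The listing above shows the private key was generated under a .pub name (id_rsa.pub, with its public half at id_rsa.pub.pub). A likely repair, assuming nothing else references those paths:

# Hypothetical rename back to the conventional key names.
mv ~/.ssh/id_rsa.pub ~/.ssh/id_rsa
mv ~/.ssh/id_rsa.pub.pub ~/.ssh/id_rsa.pub
chmod 600 ~/.ssh/id_rsa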


add IdM server to all nodes' /etc/resolv.conf

TASK [Approve node certificates when bootstrapping] ***********************************************************************************************************
Saturday 17 August 2019 15:17:38 -0400 (0:00:00.064) 0:51:08.105 *******
FAILED - RETRYING: Approve node certificates when bootstrapping (30 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (29 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (28 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (27 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (26 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (25 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (24 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (23 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (22 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (21 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (20 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (19 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (18 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (17 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (16 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (15 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (14 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (13 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (12 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (11 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (10 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (9 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (8 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (7 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (6 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (5 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (4 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (3 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (2 retries left).
FAILED - RETRYING: Approve node certificates when bootstrapping (1 retries left).

[BUG] Required repos not enabled on lb


To Reproduce

./qubinode-installer -p ocp -m deploy_nodes

Expected behavior
The openshift-ansible RPM is installed.

Screenshots

TASK [install all required packages] *********************************************************************
Monday 26 August 2019  01:01:11 -0400 (0:00:44.737)       0:02:15.048 ********* 
changed: [ocp-infra02]
changed: [ocp-infra01]
changed: [ocp-node01]
changed: [ocp-node02]
changed: [ocp-lb01]
fatal: [ocp-master01]: FAILED! => {
    "changed": false, 
    "rc": 126, 
    "results": [
        "net-tools-2.0-0.25.20131004git.el7.x86_64 providing net-tools is already installed", 
        "kexec-tools-2.0.15-33.el7.x86_64 providing kexec-tools is already installed", 
        "No package matching 'openshift-ansible' found available, installed or updated"
    ]
}

MSG:

No package matching 'openshift-ansible' found available, installed or updated


[BUG] qubinode-installer prompts for openshift console

Describe the bug
When running the qubinode-installer function qubinode_deploy_openshift, you're prompted to enter the OpenShift console user password.

To Reproduce

qubinode-installer -p ocp 

Expected behavior
No prompt for password

Screenshots

***************************************
Enter pasword to be used by qubinode user to access openshift console
***************************************
New password: 

Additional context

The qubinode-installer already collects the user's password and stores it in vault.yml. The user running the qubinode-installer should be created as the OpenShift console admin user, using the password already collected, instead of prompting again.

name server is missing from /etc/resolv.conf

$ cat /etc/resolv.conf

Ansible managed

search lab.example
domain lab.example
nameserver
nameserver 8.8.8.8
options rotate timeout:1

This occurred after running the -m dns flag.

[BUG] subscription refresh can't run on an unregistered system

Describe the bug
Executing qubinode-installer -m dns results in the task failing.

To Reproduce

Execute qubinode-installer -m dns


Screenshots

TASK [refresh subscription-manager] **********************************************************************
Thursday 22 August 2019  14:59:26 -0400 (0:00:02.945)       0:00:03.026 ******* 
fatal: [ocp-dns01]: FAILED! => {
    "changed": true, 
    "cmd": [
        "subscription-manager", 
        "refresh"
    ], 
    "delta": "0:00:00.188031", 
    "end": "2019-08-22 14:59:26.795868", 
    "rc": 1, 
    "start": "2019-08-22 14:59:26.607837"
}

STDERR:

This system is not yet registered. Try 'subscription-manager register --help' for more information.


MSG:

non-zero return code
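A guard sketch, assuming the play can key off subscription-manager's own status check before refreshing (exit code 0 means the system is registered and current):

# Hypothetical pre-check: skip the refresh on unregistered systems.
- name: check registration status
  command: subscription-manager status
  register: sm_status
  failed_when: false
  changed_when: false

- name: refresh subscription-manager
  command: subscription-manager refresh
  when: sm_status.rc == 0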

