
sadsfae / ansible-elk


:bar_chart: Ansible playbook for setting up an ELK/EFK stack and clients.

Home Page: https://hobo.house/2016/04/08/automate-elk-stack-and-clients-with-ansible/

License: Apache License 2.0

Shell 4.33% Jinja 95.67%
elk ansible fluentd kibana playbook elasticsearch logstash centos rhel efk

ansible-elk's Introduction

ansible-elk

Ansible Playbook for setting up the ELK/EFK Stack and Filebeat client on remote hosts


What does it do?

  • Automated deployment of a full 6.x series ELK or EFK stack (Elasticsearch, Logstash/Fluentd, Kibana)
    • The 5.6 and 2.4 ELK versions are maintained as branches; the master branch currently tracks 6.x.
    • Uses Nginx as a reverse proxy for Kibana, or optionally Apache via apache_reverse_proxy: true
    • Generates SSL certificates for Filebeat or Logstash-forwarder
    • Adds either iptables or firewalld rules if firewall is active
    • Tunes the Elasticsearch heap size to half of your memory, up to a maximum of 32G
    • Deploys ELK clients using SSL and Filebeat for Logstash (default)
    • Deploys rsyslog if Fluentd is chosen over Logstash; picks up the same set of OpenStack-related logs in /var/log/*
    • All service ports can be modified in install/group_vars/all.yml
    • Optionally installs Curator
    • Optionally installs the Elastic X-Pack suite
    • This is also available on Ansible Galaxy

Requirements

  • RHEL7 or CentOS7 server/client with no modifications
  • RHEL7/CentOS7, Rocky or Fedora for ELK clients using Filebeat
  • ELK/EFK server with at least 8G of memory (you can try with less, but the 5.x+ series is quite demanding; try the 2.4 series if you have scarce resources).
  • You may want to modify vm.swappiness, as ELK/EFK is demanding and swapping kills responsiveness.
    • I am leaving this up to your judgement.
echo "vm.swappiness=10" >> /etc/sysctl.conf
sysctl -p

Notes

  • Current ELK version is 6.x, but you can check out the 5.6 or 2.4 branch if you want that series
  • I will update this playbook for major ELK versions going forward as time allows.
  • Sets the nginx htpasswd to admin/admin initially
  • nginx ports default to 80/8080 for Kibana and SSL cert retrieval (configurable)
  • Uses OpenJDK for Java
  • It's fairly quick; it takes around 3 minutes on a test VM
  • Fluentd can be substituted for the default Logstash
    • Set logging_backend: fluentd in group_vars/all.yml
  • Install Curator by setting install_curator_tool: true in install/group_vars/all.yml (see the example excerpt after this list)
  • Install the Elastic X-Pack suite for Elasticsearch, Logstash or Kibana via:
    • install_elasticsearch_xpack: true
    • install_kibana_xpack: true
    • install_logstash_xpack: true
    • Note: Deploying X-Pack will wrap your ES with additional authentication and security; Kibana, for example, will have its own credentials now. The default is username: elastic and password: changeme
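
For example, a group_vars/all.yml excerpt that switches on the optional pieces documented above might look like this (the variable names come from the notes above; the values are only illustrative):

# install/group_vars/all.yml (excerpt)
logging_backend: fluentd            # substitute Fluentd for the default Logstash
install_curator_tool: true          # optionally install Curator
install_elasticsearch_xpack: true   # X-Pack for Elasticsearch
install_kibana_xpack: true          # X-Pack for Kibana
install_logstash_xpack: true        # X-Pack for Logstash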

ELK/EFK Server Instructions

  • Clone the repo and set up your hosts file (an example of the resulting inventory follows these steps)
git clone https://github.com/sadsfae/ansible-elk
cd ansible-elk
sed -i 's/host-01/elkserver/' hosts
sed -i 's/host-02/elkclient/' hosts
  • If you're using a non-root user for Ansible (AWS EC2, for example, uses ec2-user), set the variable below; the default is root.
ansible_system_user: ec2-user
  • Run the playbook
ansible-playbook -i hosts install/elk.yml
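
After the two sed substitutions above, the hosts inventory should look roughly like the sketch below; the group names are assumptions based on the shipped inventory, so check the hosts file in the repo for the exact names.

[elk]
elkserver

[elk-client]
elkclient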

Create your Kibana Index Pattern

  • Next you'll login to your Kibana instance and create a Kibana index pattern.


  • Note: sample data can be useful; you can try it later if you like.


  • At this point you can set up your client(s) to start sending data via Filebeat/SSL

ELK Client Instructions

  • Run the client playbook against the generated elk_server variable
ansible-playbook -i hosts install/elk-client.yml --extra-vars 'elk_server=X.X.X.X'
  • Once this completes, return to your ELK server and you'll see log results coming in from the ELK/EFK clients via Filebeat


5.6 ELK/EFK (Deprecated)

  • The 5.6 series of ELK/EFK is also available; to use it, check out the 5.6 branch
git clone https://github.com/sadsfae/ansible-elk
cd ansible-elk
git checkout 5.6

2.4 ELK/EFK (Deprecated)

  • The 2.4 series of ELK/EFK is also available; to use it, check out the 2.4 branch
git clone https://github.com/sadsfae/ansible-elk
cd ansible-elk
git checkout 2.4
  • You can view a deployment video here:

(video: Ansible Elk)

File Hierarchy

.
├── hosts
├── install
│   ├── elk_client.yml
│   ├── elk.yml
│   ├── group_vars
│   │   └── all.yml
│   └── roles
│       ├── apache
│       │   ├── tasks
│       │   │   └── main.yml
│       │   └── templates
│       │       ├── 8080vhost.conf.j2
│       │       └── kibana.conf.j2
│       ├── curator
│       │   ├── files
│       │   │   └── curator.repo
│       │   ├── tasks
│       │   │   └── main.yml
│       │   └── templates
│       │       ├── curator-action.yml.j2
│       │       └── curator-config.yml.j2
│       ├── elasticsearch
│       │   ├── files
│       │   │   ├── elasticsearch.in.sh
│       │   │   └── elasticsearch.repo
│       │   ├── tasks
│       │   │   └── main.yml
│       │   └── templates
│       │       └── elasticsearch.yml.j2
│       ├── elk_client
│       │   ├── files
│       │   │   └── elk.repo
│       │   └── tasks
│       │       └── main.yml
│       ├── filebeat
│       │   ├── meta
│       │   │   └── main.yml
│       │   ├── tasks
│       │   │   └── main.yml
│       │   └── templates
│       │       ├── filebeat.yml.j2
│       │       └── rsyslog-openstack.conf.j2
│       ├── firewall
│       │   ├── handlers
│       │   │   └── main.yml
│       │   └── tasks
│       │       └── main.yml
│       ├── fluentd
│       │   ├── files
│       │   │   ├── filebeat-index-template.json
│       │   │   └── fluentd.repo
│       │   ├── tasks
│       │   │   └── main.yml
│       │   └── templates
│       │       ├── openssl_extras.cnf.j2
│       │       └── td-agent.conf.j2
│       ├── heartbeat
│       │   ├── meta
│       │   │   └── main.yml
│       │   ├── tasks
│       │   │   └── main.yml
│       │   └── templates
│       │       └── heartbeat.yml.j2
│       ├── instructions
│       │   └── tasks
│       │       └── main.yml
│       ├── kibana
│       │   ├── files
│       │   │   └── kibana.repo
│       │   ├── tasks
│       │   │   └── main.yml
│       │   └── templates
│       │       └── kibana.yml.j2
│       ├── logstash
│       │   ├── files
│       │   │   ├── filebeat-index-template.json
│       │   │   └── logstash.repo
│       │   ├── tasks
│       │   │   └── main.yml
│       │   └── templates
│       │       ├── 02-beats-input.conf.j2
│       │       ├── logstash.conf.j2
│       │       └── openssl_extras.cnf.j2
│       ├── metricbeat
│       │   ├── meta
│       │   │   └── main.yml
│       │   ├── tasks
│       │   │   └── main.yml
│       │   └── templates
│       │       └── metricbeat.yml.j2
│       ├── nginx
│       │   ├── tasks
│       │   │   └── main.yml
│       │   └── templates
│       │       ├── kibana.conf.j2
│       │       └── nginx.conf.j2
│       ├── packetbeat
│       │   ├── meta
│       │   │   └── main.yml
│       │   ├── tasks
│       │   │   └── main.yml
│       │   └── templates
│       │       └── packetbeat.yml.j2
│       └── xpack
│           └── tasks
│               └── main.yml
└── meta
    └── main.yml

56 directories, 52 files

ansible-elk's People

Contributors

knechtionscoding, lime-red, sadsfae


ansible-elk's Issues

ELK-server not found

FAILED! => {"failed": true, "msg": "The task includes an option with an undefined variable. The error was: 'elk_server' is undefined\n\nThe error appears to have been in '/vagrant/ansible-host/roles/elk_client/tasks/main.yml': line 30, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Install ELK server SSL client certificate\n ^ here\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: 'elk_server' is undefined"}

Is Ansible 1.9.6 supported?

Hi.

Trying to use on RHEL 7 using ansible 1.9.6.
The task

TASK: [logstash | Load filebeat JSON index template] **************************

Gives a timeout. If I run Ansible keeping the temporary files and execute the resulting Python module directly, the error is as follows:

python -tt /root/.ansible/tmp/ansible-tmp-1507310163.34-85406240056390/uri
Traceback (most recent call last):
File "/root/.ansible/tmp/ansible-tmp-1507310163.34-85406240056390/uri", line 2106, in
main()
File "/root/.ansible/tmp/ansible-tmp-1507310163.34-85406240056390/uri", line 415, in main
resp, content, dest = uri(module, url, dest, user, password, body, method, dict_headers, redirects, socket_timeout, validate_certs)
File "/root/.ansible/tmp/ansible-tmp-1507310163.34-85406240056390/uri", line 311, in uri
resp, content = h.request(url, method=method, body=body, headers=headers)
File "/usr/lib/python2.7/site-packages/httplib2/init.py", line 1621, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "/usr/lib/python2.7/site-packages/httplib2/init.py", line 1363, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "/usr/lib/python2.7/site-packages/httplib2/init.py", line 1285, in _conn_request
conn.request(method, request_uri, body, headers)
File "/usr/lib64/python2.7/httplib.py", line 1017, in request
self._send_request(method, url, body, headers)
File "/usr/lib64/python2.7/httplib.py", line 1051, in _send_request
self.endheaders(body)
File "/usr/lib64/python2.7/httplib.py", line 1013, in endheaders
self._send_output(message_body)
File "/usr/lib64/python2.7/httplib.py", line 868, in _send_output
self.send(message_body)
File "/usr/lib64/python2.7/httplib.py", line 840, in send
self.sock.sendall(data)
File "/usr/lib64/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
TypeError: must be string or buffer, not dict

Maybe the JSON file's body has an issue; can somebody help me? I need to use Ansible 1.9.6.
Thanks.

Missing dependencies

Hi

I tried running your playbook against a minimal-install RHEL 7 with Fluentd enabled, and it fails because it can't build the Elasticsearch Fluentd plugin gem.

gcc and make were missing; after installing these packages everything seems to work.

logstash-input-udp is broken on 5.6.9

I've noticed that the logstash-input-udp plugin is broken on the latest Logstash; the workaround is to update the plugin manually:

/usr/share/logstash/bin/logstash-plugin update logstash-input-udp 
Updating logstash-input-udp
Updated logstash-input-udp 3.3.1 to 3.3.2

Related:

https://discuss.elastic.co/t/unable-to-fetch-mapping-do-you-have-indices-matching-the-pattern/844/10

I expect this to fix itself when Logstash gets updated in the current Elastic RPM repositories, but I'm going to bake a fix into the playbook to do this manually, as it doesn't hurt to be running the latest.

Ansible variables not found

Running ansible 2.4.1.0

ansible-playbook elk.yml -i inventory/local -b -v

TASK [elasticsearch : Copy templated elasticsearch.yml] ****************************************************************************************************************************************************
fatal: [node5]: FAILED! => {"changed": false, "failed": true, "msg": "AnsibleUndefinedVariable: 'ansible_default_ipv4' is undefined"}
...ignoring

TASK [elasticsearch : Check if system memory is greater than 64G] ******************************************************************************************************************************************
fatal: [node5]: FAILED! => {"msg": "The conditional check 'ansible_memory_mb.real.total|int >= 65536' failed. The error was: error while evaluating conditional (ansible_memory_mb.real.total|int >= 65536): 'ansible_memory_mb' is undefined\n\nThe error appears to have been in '/vagrant/ansible-host/roles/elasticsearch/tasks/main.yml': line 38, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: \"Check if system memory is greater than 64G\"\n  ^ here\n"}

es_listen_external should be reflected in elasticsearch.yml

We currently support the option es_listen_external in install/group_vars/all.yml, which opens firewall rules for TCP/9200; however, Elasticsearch still needs network.host: 0.0.0.0

If you opt to change this after your ELK is already installed, you'll need to bounce the elasticsearch service manually.
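
For reference, the Elasticsearch side of this is just the one line below in elasticsearch.yml (quoted from the paragraph above, not a new setting). After flipping es_listen_external on an already-installed ELK, restart the service manually, e.g. systemctl restart elasticsearch.

# /etc/elasticsearch/elasticsearch.yml (excerpt)
network.host: 0.0.0.0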

Curator Setup

Issue for tracking the setup of curator. Related to PR #45

Adding in the default Curator setup didn't seem too complicated. I got it up and running on a test VM; it only required more tasks in the curator/tasks/main.yml file. The configs are just the defaults from Elastic.

Add explicit variable file declaration

I was doing some work with this playbook, modifying it, and discovered that it was erroring out when more variable files were defined in other places. As such, I put in an explicit variable file definition so that Ansible picks up the correct vars file. If you are interested, the elk.yml file now looks something like this:


- hosts: elk
  remote_user: root
  vars_files:
    - group_vars/elk.yml
  roles:
......

This seems like a quality-of-life upgrade that allows the vars to be specified individually. Not sure if you want to implement it, but it made my life easier when troubleshooting to explicitly define the vars file instead of trusting Ansible.

Clean Up the Filebeat Configuration

I have a version of the Filebeat config that has been cleaned up so that the Logstash settings live under the logstash section and the Elasticsearch settings under the elasticsearch section. Are you interested in me merging that?

ansible-galaxy compatibility

I added your role to requirements.yml and tried to install the role via galaxy but the repo is not galaxy compatible. Specifically, it's missing a meta/main.yml file. A sample is here: https://github.com/geerlingguy/ansible-role-ntp/blob/master/meta/main.yml.

# Role to setup ELK stack on a server (CentOS/RHEL only)
- src: https://github.com/sadsfae/ansible-elk
  name: ansible-elk
  version: master

I can open a PR. Just let me know if this is something you are willing to consider.
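
A minimal meta/main.yml along the lines of the linked sample could be enough for Galaxy. This is only a sketch: the author, description and license come from the repo metadata, and the remaining values are assumptions.

# meta/main.yml (sketch)
galaxy_info:
  author: sadsfae
  description: Ansible playbook for setting up an ELK/EFK stack and clients
  license: Apache-2.0
  min_ansible_version: 2.4
  platforms:
    - name: EL
      versions:
        - 7
dependencies: []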

modify nginx dir structure as upstream has changed

Upstream nginx has changed its config layout so it no longer creates `/etc/nginx/conf.d`, but we use that LSB-standard location for configs (and don't plan on changing it). Currently the nginx playbook run fails because of this.

Fail to Load filebeat JSON index template

TASK [logstash : Load filebeat JSON index template] ***********************************************************************************************************************************************************************
fatal: [192.168.133.163]: FAILED! => {"changed": false, "content": "", "failed": true, "msg": "Status code was not [200]: Request failed: <urlopen error [Errno 111] Connection refused>", "redirected": false, "status": -1, "url": "http://localhost:9200/_template/filebeat?pretty"}

Failure to check for elasticsearch index

Hello again

I already tried disabling the firewalld and iptables services before retrying a fresh installation, but this problem still comes up.

TASK [kibana : Check elasticsearch index for content] ****************************
fatal: [192.168.2.181]: FAILED! => {"changed": false, "content": "", "failed": true, "msg": "Status code was not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://localhost:9200/_cat/indices"}
to retry, use: --limit @/root/ansible-elk/install/elk.retry

Thanks

iptables_tcp5044_exists can be replaced with iptables_tcp{{logstash_syslog_port}}_exists

Please be aware that this issue is harmless and can be overlooked. I just wanted to share it since this playbook is super-nice ;)

In install/roles/logstash/tasks/main.yml, Ansible checks whether the Logstash syslog port is already allowed by iptables.

138 # iptables-services
139 - name: check firewall rules for TCP/{{logstash_syslog_port}} (iptables-services)
140   shell: grep "dport {{logstash_syslog_port}} \-j ACCEPT" /etc/sysconfig/iptables | wc -l
141   ignore_errors: true
142   register: iptables_tcp5044_exists
143   failed_when: iptables_tcp{{logstash_syslog_port}}_exists == 127

Here, although Ansible stores the value as iptables_tcp5044_exists, this registered variable is conceptually iptables_tcp{{logstash_syslog_port}}_exists.

Please consider renaming the registered variable; a sketch follows.
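
Since register names cannot themselves be templated in Ansible, one practical option is simply to drop the hard-coded 5044 from the name. A sketch of the renamed task (not the playbook's current code; the failed_when here also compares the return code explicitly):

- name: check firewall rules for TCP/{{ logstash_syslog_port }} (iptables-services)
  shell: grep "dport {{ logstash_syslog_port }} \-j ACCEPT" /etc/sysconfig/iptables | wc -l
  ignore_errors: true
  register: iptables_logstash_port_exists
  failed_when: iptables_logstash_port_exists.rc == 127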

Scaling ES instances might want to consider using min(1/2 available memory, 32g)

The recommendation for ES is to use no more than 32 GB of heap per instance (to keep compressed Java object pointers), as you already have, but it is also recommended to leave half of the available memory to the page cache. So on a 48 GB system we might want to set the heap to 24 GB.

In general this is min(0.5 * physmem GB, 32 GB).

Are we relying on the Java defaults for any system under 64 GB?
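
A sketch of how the min(half of RAM, 32 GB) rule could be expressed in the playbook; this is an illustration rather than the playbook's existing task, and ansible_memory_mb is the standard gathered fact:

- name: Compute Elasticsearch heap size in MB, capped at 32768 (32 GB)
  set_fact:
    es_heap_size_mb: "{{ [ (ansible_memory_mb.real.total / 2) | int, 32768 ] | min }}"

With this, a 48 GB system gets a 24 GB heap and systems under 64 GB get half their memory rather than falling back to the Java defaults.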

Issue with IP address in subjectAltName in /etc/pki/tls/openssl.cnf on CentOS

Hello,

On CentOS 7, when subjectAltName contains an IP address it must use the IP: prefix, otherwise openssl fails with this error:

Error Loading extension section v3_ca
140585944606624:error:2207507C:X509 V3 routines:v2i_GENERAL_NAME_ex:missing value:v3_alt.c:531:
140585944606624:error:22098080:X509 V3 routines:X509V3_EXT_nconf:error in extension:v3_conf.c:95:name=subjectAltName, value=10.0.2.15

To make it work, the value in /etc/pki/tls/openssl.cnf should look like this:

subjectAltName = "IP: 10.0.2.15"

It impacts two roles: fluentd and logstash.

openssl version
OpenSSL 1.0.2k-fips  26 Jan 2017
cat /etc/centos-release
CentOS Linux release 7.4.1708 (Core)

ElasticSearch 5.5 not updating to 5.6

There appears to be an issue with the repo that isn't updating Elasticsearch, even though it is updating the Kibana version. Currently Kibana is on 5.6 and Elasticsearch is on 5.5.2.

A potential solution is to remove the repo file and use get_url to download the RPM, installing it based on a variable elk_version.

Thoughts @sadsfae ?
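
A sketch of the get_url approach with a pinned version; the elk_version variable and the download URL pattern are assumptions here, not current playbook behaviour:

- name: Download Elasticsearch RPM pinned to elk_version
  get_url:
    url: "https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-{{ elk_version }}.rpm"
    dest: "/tmp/elasticsearch-{{ elk_version }}.rpm"

- name: Install the pinned Elasticsearch RPM
  yum:
    name: "/tmp/elasticsearch-{{ elk_version }}.rpm"
    state: present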

Order of Operations - Cert Generation

@sadsfae Is there a reason for the cert generation to take place in the kibana role rather than the logstash role? I moved it to the logstash role in my own working copy, so that each role could be logically separate, but if there was a technical reason for it I'd love to know.

Break out Firewall into Separate Role

While I still believe we need to support proper firewall mechanisms (detection, test, don't clobber existing rules, apply), this doesn't necessarily need to be put into each service role.

This will cover branching out the firewall code into its own Ansible role.

Need check for libselinux-python

I ran this playbook against a minimal CentOS 7 install and it didn't check for the libselinux-python package; it failed under the task [elasticsearch : Copy elasticsearch yum repo file]. I wrote a quick fix with a register that I will be testing to confirm it works. See below for that fix.

- name: Check for libselinux-python package
  yum:
    name: libselinux-python
    state: latest
  become: true
  register: libselinux_python_installed   # note: registered variable names cannot contain hyphens

- name: Copy elasticsearch yum repo file
  copy:
    src: elasticsearch.repo
    dest: /etc/yum.repos.d/elasticsearch.repo
    owner: root
    group: root
    mode: 0644
  when: libselinux_python_installed is succeeded
  become: true

Failed to connect to host via ssh

I cloned the project and edited hosts as follows

[elk]
127.0.0.1

I edited group_vars/all.yml to include x-pack

# install elastic x-pack
install_elasticsearch_xpack: true
# install kibana x-pack
install_kibana_xpack: true
# install logstash x-pack
install_logstash_xpack: false

If I run the script I get an error:

$ ansible-playbook -i hosts install/elk.yml

PLAY [elk] ***************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************
The authenticity of host '127.0.0.1 (127.0.0.1)' can't be established.
ECDSA key fingerprint is 9d:5a:6d:b6:96:f5:da:46:55:5c:66:22:8b:b2:a7:94.
Are you sure you want to continue connecting (yes/no)? yes
fatal: [127.0.0.1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Received message too long 1349281121\r\n", "unreachable": true}

Add fluentd option for logging backend

Add an option for a Fluentd logging backend, with the default still being the existing Logstash-based setup, something like:

    - hosts: elk
      remote_user: root
      roles:
        - { role: elasticsearch }
        - { role: fluentd, when: logging_backend == 'fluentd' }
        - { role: logstash, when: logging_backend is not defined or logging_backend == 'logstash' }
        - { role: nginx }
        - { role: kibana }

EFK stack for high availability cluster

Hi,
I deployed ansible-efk (using Fluentd) and it works perfectly, but now I want to expand this configuration to a multi-node setup for high-availability reasons.
Is there any way to do it using this ansible-elk project?

Refactor into not requiring root to be remote user

So I'm moving towards using this without allowing it to run as root. I'm working on adding the correct become statements so it works without the remote user being root@server. Opening this issue so we can track the conversation; a rough sketch of the direction is below.
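
In practice this mostly means declaring a non-root remote user and escalating privileges per play or per task, roughly like the sketch below. The ec2-user value reuses the example from the README above; become is the standard Ansible escalation mechanism, not the playbook's current layout:

- hosts: elk
  remote_user: ec2-user
  become: true
  roles:
    - { role: elasticsearch }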

Upgrade to 6.2, Maintain 5.6+ Branch

This should track the upgrade of the playbook to the latest 6.x series of Elasticsearch, Logstash and Kibana.

I am in the process of getting ELK 6.2.x+ to work but have hit a few bugs. I've managed to work around a few of them and will go back and file bug reports, but I'm stuck on one particular Logstash issue dealing with netty_tcnative and Logstash crypto:

logstash-plugins/logstash-input-beats#188
elastic/logstash-docker#77
logstash-plugins/logstash-input-beats#288

This relates to the beats plugin and use of encryption ciphers as we auto-create and deploy x509 SSL certificates for the elk_client role so people can easily ship their logs encrypted over the wire to logstash.

I have not yet gotten to upgrading Fluentd and testing that, or X-Pack, on 6.2+ because of the above Logstash issues; everything else I've run into just seems like packaging issues (missing directories from the RPM installation, etc.) and I'll file bugs for those upstream.

The conditional check '(es_curator_tool == 'install')' failed

The playbook fails with the following error since the parameter es_curator_tool is not specified in group_vars by default.

TASK [curator : Copy curator yum repo file] ************************************
fatal: [192.168.43.70]: FAILED! => {"failed": true, "msg": "The conditional check '(es_curator_tool == 'install')' failed. The error was: error while evaluating conditional ((es_curator_tool == 'install')): 'es_curator_tool' is undefined"}

Adding such a parameter in group_vars/all.yml works around the problem.

Please advise.
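
Two possible workarounds, sketched here (neither is necessarily what the playbook will adopt): define the variable explicitly, or make the conditional tolerate an undefined variable with a default filter.

# install/group_vars/all.yml
es_curator_tool: install

# or, in the curator task's conditional:
when: (es_curator_tool | default('skip')) == 'install'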

Add MetricBeat, Packetbeat, Topbeat to elk-client installation

I needed to be able to install Metricbeat, Packetbeat, Topbeat and the other Beats offered by Elastic, so I added those in a scalable way. Would you like a PR for those, or do you not want to support them?

I moved the repo installation for the client into a dependency role called elk_client, so the elasticsearch.co repo is installed no matter which Beat you specify, and then the correct config is added based on the Beat you selected. I haven't had a chance to make Beat selection variable-driven yet, so that would be the next step. A sketch of the dependency wiring is below.
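
A sketch of what the dependency wiring could look like in one of the Beat roles; the role layout follows the file hierarchy above, but the exact contents of the meta files are an assumption:

# install/roles/metricbeat/meta/main.yml (sketch)
dependencies:
  - { role: elk_client }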

Multi Node Support for ElasticSearch

Do we want to support multi-node installations of Elasticsearch? I can make the PR, including updating the FW role that was just added, but only if you want that functionality.

It would involve separating out each of the roles on the server side (Kibana, Logstash, Elasticsearch). It'd also involve separating out the firewall rules, or at least delegating them to the correct place using when conditions.
