zeta36 / chess-alpha-zero

Chess reinforcement learning by AlphaGo Zero methods.

License: MIT License

Python 23.48% Batchfile 0.01% Jupyter Notebook 76.51%
reinforcement-learning keras alphago-zero tensorflow chess

chess-alpha-zero's Introduction

Binder Demo Notebook

About

Chess reinforcement learning by AlphaGo Zero methods.

This project is based on these main resources:

  1. DeepMind's Oct 19th publication: Mastering the Game of Go without Human Knowledge.
  2. The great Reversi development of the DeepMind ideas that @mokemokechicken did in his repo: https://github.com/mokemokechicken/reversi-alpha-zero
  3. DeepMind's recently released new version of AlphaGo Zero (now named AlphaZero), in which they master chess from scratch: https://arxiv.org/pdf/1712.01815.pdf. In fact, in chess AlphaZero outperformed Stockfish after just 4 hours (300k steps). Wow!

See the wiki for more details.

Note

I'm the creator of this repo. I and some other collaborators (https://github.com/Zeta36/chess-alpha-zero/graphs/contributors) did our best, but we found that self-play is far too computationally expensive for a single machine. Supervised learning worked fine, but we never tried self-play on its own.

Anyway, I want to mention that we have moved to a new repo where a lot of people are working on a distributed version of AlphaZero for chess (MCTS in C++): https://github.com/glinscott/leela-chess

The project is almost done, and everybody will be able to participate just by running a pre-compiled Windows (or Linux) application. A great deal of work and effort has gone into that project, and I'm pretty sure we'll be able to reproduce DeepMind's results after a reasonably short period of distributed cooperation.

So, I ask everybody who wishes to see a UCI engine running a neural network beat Stockfish to head over to that repo and help out with their machine's computing power.

Environment

  • Python 3.6.3
  • tensorflow-gpu: 1.3.0
  • Keras: 2.0.8

New results (after a great number of modifications due to @Akababa)

Using supervised learning on about 10k games, I trained a model (7 residual blocks of 256 filters) to a guesstimate of 1200 Elo with 1200 sims/move. One of the strengths of MCTS is that it scales quite well with computing power.

Here you can see an example where I (black) played against the model in the repo (white):

img

Here you can see an example of a game where I (white, ~2000 elo) played against the model in this repo (black):

img

First "good" results

Using the new supervised learning step I created, I've been able to train a model to the point where it seems to be learning chess openings. It also seems the model is starting to avoid naively losing pieces.

Here you can see an example of a game played by me against this model (the AI plays black):

partida1

Here we have a game trained by @bame55 (AI plays white):

partida3

This model plays this way after only 5 epoch iterations of the 'opt' worker; the 'eval' worker replaced the best model 4 times out of 5. At this moment the loss of the 'opt' worker is 5.1 (and it still seems to be converging very well).

Modules

Supervised Learning

I've added a new supervised learning (SL) pipeline step that uses the human game files ("PGN") available on the internet as a play-data generator. An SL step was also used in the first, original version of AlphaGo, and maybe chess is a complex enough game that we have to pre-train the policy model before starting the self-play process (i.e., maybe chess is too complicated to learn by self-training alone).

Using the new SL process is as simple as running the new "sl" worker at the start instead of the "self" worker. Once the model has converged well enough on the SL play-data, we simply stop the "sl" worker and start the "self" worker, so that the model keeps improving from self-play data.

python src/chess_zero/run.py sl

If you want to use this new SL step, you will have to download large PGN files (chess databases) and place them in the data/play_data folder (FICS is a good source of data). You can also use the SCID program to filter by headers such as player Elo, game result, and more.

To avoid overfitting, I recommend using data sets of at least 3000 games and running at most 3-4 epochs.
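If you prefer to pre-filter the PGN files in Python rather than with SCID, a minimal sketch using the python-chess library is shown below; filter_games is a hypothetical helper (not part of this repo), and the 1600 Elo cutoff and file paths are placeholders.

import chess.pgn

def filter_games(in_path, out_path, min_elo=1600):
    """Keep only games where both players are rated at least min_elo."""
    kept = 0
    with open(in_path, encoding="utf-8", errors="ignore") as src, \
         open(out_path, "w", encoding="utf-8") as dst:
        while True:
            game = chess.pgn.read_game(src)
            if game is None:  # end of file
                break
            try:
                white = int(game.headers.get("WhiteElo", 0))
                black = int(game.headers.get("BlackElo", 0))
            except ValueError:  # header present but empty or non-numeric
                continue
            if white >= min_elo and black >= min_elo:
                print(game, file=dst, end="\n\n")
                kept += 1
    return kept

# Example (placeholder paths): filter_games("data/play_data/fics.pgn", "data/play_data/filtered.pgn")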

Reinforcement Learning

This AlphaGo Zero implementation consists of three workers: self, opt and eval.

  • self is Self-Play: it generates training data by self-play using BestModel.
  • opt is Trainer: it trains the model and produces next-generation models.
  • eval is Evaluator: it evaluates whether the latest next-generation model is better than BestModel and, if so, replaces BestModel with it.

Distributed Training

Now it's possible to train the model in a distributed way. The only thing needed is to use the new parameter:

  • --type distributed: use the distributed config (see src/chess_zero/configs/distributed.py)

So, in order to contribute to the distributed team you just need to run the three workers locally like this:

python src/chess_zero/run.py self --type distributed (or python src/chess_zero/run.py sl --type distributed)
python src/chess_zero/run.py opt --type distributed
python src/chess_zero/run.py eval --type distributed

GUI

  • uci launches the Universal Chess Interface, for use in a GUI.

To set up ChessZero with a GUI, point it to C0uci.bat (or rename it to .sh). For example, this is a screenshot of the random model using Arena's self-play feature: capture
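For readers unfamiliar with the protocol, the sketch below shows the bare minimum a UCI engine has to answer. It is not the repo's actual uci worker; search_best_move is a hypothetical stand-in for the MCTS search.

import sys

def uci_loop(search_best_move):
    # Minimal UCI skeleton: answer the handshake, remember the position, reply to 'go'.
    position_tokens = []
    for line in sys.stdin:
        tokens = line.split()
        if not tokens:
            continue
        command = tokens[0]
        if command == "uci":
            print("id name ChessZero-sketch")
            print("uciok")
        elif command == "isready":
            print("readyok")
        elif command == "position":
            position_tokens = tokens[1:]  # e.g. ['startpos', 'moves', 'e2e4', ...]
        elif command == "go":
            print("bestmove", search_best_move(position_tokens))
        elif command == "quit":
            break
        sys.stdout.flush()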

Data

  • data/model/model_best_*: BestModel.
  • data/model/next_generation/*: next-generation models.
  • data/play_data/play_*.json: generated training data.
  • logs/main.log: log file.

If you want to train the model from the beginning, delete the above directories.

How to use

Setup

install libraries

pip install -r requirements.txt

If you want to use GPU, follow these instructions to install with pip3.

Make sure Keras is using Tensorflow and you have Python 3.6.3+. Depending on your environment, you may have to run python3/pip3 instead of python/pip.
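A quick sanity check of the environment (assuming the TensorFlow backend is the intended one) can be run from a Python shell:

import sys
import keras.backend as K

print("Python:", sys.version)          # expect 3.6.3 or newer
print("Keras backend:", K.backend())   # expect 'tensorflow'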

Basic Usage

To train the model, execute Self-Play, Trainer, and Evaluator.

Note: Make sure you are running the scripts from the top-level directory of this repo, i.e. python src/chess_zero/run.py opt, not python run.py opt.

Self-Play

python src/chess_zero/run.py self

When executed, Self-Play will start using BestModel. If BestModel does not exist, a new random model will be created and become BestModel.

options

  • --new: create new BestModel
  • --type mini: use mini config for testing, (see src/chess_zero/configs/mini.py)

Trainer

python src/chess_zero/run.py opt

When executed, training will start. The base model is loaded from the latest saved next-generation model. If none exists, BestModel is used. The trained model is saved every epoch.

options

  • --type mini: use mini config for testing, (see src/chess_zero/configs/mini.py)
  • --total-step: specify the total number of steps (mini-batches). The total step count affects the learning rate of training.

Evaluator

python src/chess_zero/run.py eval

When executed, evaluation will start. It evaluates BestModel and the latest next-generation model by playing about 200 games. If the next-generation model wins, it becomes BestModel.

options

  • --type mini: use mini config for testing, (see src/chess_zero/configs/mini.py)
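Conceptually, the evaluation stage is a win-rate test with early stopping, roughly as in the sketch below. The 200-game budget matches the description above, but the 0.55 replacement threshold and the play_game callback are placeholders, not this repo's exact values.

def evaluate_candidate(play_game, n_games=200, replace_rate=0.55):
    """play_game() is a hypothetical callback returning 1.0 if the
    next-generation model wins, 0.5 for a draw and 0.0 for a loss."""
    score = 0.0
    for i in range(1, n_games + 1):
        score += play_game()
        # Give up early: even winning every remaining game cannot reach the threshold.
        if score + (n_games - i) < replace_rate * n_games:
            return False
        # Accept early: the threshold is already met.
        if score >= replace_rate * n_games:
            return True
    return score / n_games >= replace_rate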

Tips and Memory

GPU Memory

Usually a lack of memory causes warnings, not errors. If an error occurs, try changing vram_frac in src/configs/mini.py:

self.vram_frac = 1.0

A smaller batch_size will reduce the memory usage of opt. Try changing TrainerConfig#batch_size in MiniConfig.
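With the tensorflow-gpu 1.x and Keras versions listed above, the same fraction can also be applied by hand along these lines (a sketch, not the repo's exact session setup):

import tensorflow as tf
from keras.backend.tensorflow_backend import set_session

def limit_gpu_memory(vram_frac=0.5):
    # Tell TensorFlow 1.x to claim only a fraction of the GPU memory up front.
    config = tf.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = vram_frac
    set_session(tf.Session(config=config))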

chess-alpha-zero's People

Contributors

akababa, brianprichardson, camillechiquet, chairbender, dependabot[bot], gummygamer, kirlaw, kmader, maycuatroi, samuelstarshot, yhyu13, zeta36


chess-alpha-zero's Issues

Persistent self-learning

Hi

I want deploy a dockerized cluster with ChessAlphaZero as a service. My question is:
Is it possible to create a persistent self-learning process to achieve continuous skill improvement? Does that make sense? Is it possible to share the best model between different running instances? Is it possible to distribute a packed version of the best model? Is it possible to use interactive play during self-learning?

Thanks for all... and for your work!!!

Problem in running run.py

After running the command:
python3.6 run.py self
I get the following error:
└──╼ $python3.6 run.py self
Traceback (most recent call last):
File "run.py", line 17, in
from chess_zero import manager
ModuleNotFoundError: No module named 'chess_zero'

Does anybody know how to fix this? Thanks

Distributed version

The distributed version of this project is ready to be used, but we need to find an FTP server on the internet that lets us upload and download files larger than 30MB, so we can store the best model configuration and its weights.

I signed up for a free hosting server with FTP support, but I just realized it's limited to files up to 16MB, so we cannot use it for large model configurations (because the weight file will be larger than 30MB).

If somebody wants to help with this, just replace the FTP credential lines in

config.py

with working ones, and I will merge the change as soon as you do.

Regards.

Could not connect to D-Bus

I get the following error:

"Could not connect to D-Bus server: org.freedesktop.DBus.Error.Spawn.ExecFailed: /u/gottipav/.conda/envs/chess/bin/dbus-launch terminated abnormally with the following error: Autolaunch error: X11 initialization failed."

when I run the self-play.
But it still continues to run. Is this something to be worried about?

error

UnicodeDecodeError: 'gbk' codec can't decode byte 0x93 in position 9761: illegal multibyte sequence

ImportError: cannot import name 'multiarray'

I am getting an error. I installed the requirements without issues and I am running with Python 3.6 on Ubuntu 16.04 LTS. What is wrong?

$ python3.6 src/chess_zero/run.py sl
Traceback (most recent call last):
File "src/chess_zero/run.py", line 19, in
from chess_zero import manager
File "src/chess_zero/manager.py", line 10, in
from .config import Config
File "src/chess_zero/config.py", line 6, in
import numpy as np
File "/usr/lib/python3/dist-packages/numpy/init.py", line 180, in
from . import add_newdocs
File "/usr/lib/python3/dist-packages/numpy/add_newdocs.py", line 13, in
from numpy.lib import add_newdoc
File "/usr/lib/python3/dist-packages/numpy/lib/init.py", line 8, in
from .type_check import *
File "/usr/lib/python3/dist-packages/numpy/lib/type_check.py", line 11, in
import numpy.core.numeric as _nx
File "/usr/lib/python3/dist-packages/numpy/core/init.py", line 14, in
from . import multiarray
ImportError: cannot import name 'multiarray'

Virtual loss?

Has anyone tried experimenting with the virtual loss parameter? Currently it does n+=3 and w-=3 on entering a node, but intuitively I have no idea what it does except discourage other threads from picking the same action.

Also 3 seems a bit high for unit rewards...
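For context, virtual loss only makes a node temporarily look worse while one thread is still exploring it, so concurrent threads spread out over different actions. A minimal sketch of the idea (illustrative only, not this repo's exact code; the constant 3 mirrors the values quoted above):

VIRTUAL_LOSS = 3

class NodeStats:
    def __init__(self):
        self.n = 0    # visit count
        self.w = 0.0  # total value

def apply_virtual_loss(stats):
    # Entering the node: pretend we already lost VIRTUAL_LOSS playouts here,
    # which discourages other threads from selecting the same action.
    stats.n += VIRTUAL_LOSS
    stats.w -= VIRTUAL_LOSS

def revert_virtual_loss(stats, value):
    # Backup: undo the virtual loss and record the real result exactly once.
    stats.n += 1 - VIRTUAL_LOSS
    stats.w += value + VIRTUAL_LOSS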

Broken PIPE error when running EVAL phase

Hi,

I've run into a crash when running the EVAL phase. I initially imported the games using the PGN / SL step, after which I ran the training (OPT) for 1 day, which generated many models under next_generation (around 4.82GB).

My Config:

  • Python 3.6.3
  • Keras 2.0.8
  • Tensorflow-gpu 1.3.0 (using GPU - Tesla V100-SXM2-16GB)

Command: python src\chess_zero\run.py eval --type normal

2018-01-31 11:20:31,206@chess_zero.manager INFO # config type: normal
Using TensorFlow backend.
2018-01-31 11:20:43,486@chess_zero.agent.model_chess DEBUG # loading model from c:\chess-alpha-zero\data\model\model_best_config.json
name: Tesla V100-SXM2-16GB
major: 7 minor: 0 memoryClockRate (GHz) 1.53
pciBusID 0000:00:1e.0
Total memory: 15.87GiB
Free memory: 15.32GiB
2018-01-31 11:20:46.070625: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:976] DMA: 0
2018-01-31 11:20:46.071089: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:986] 0: Y
2018-01-31 11:20:46.071563: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:1045] Creating TensorFlow device (
/gpu:0) -> (device: 0, name: Tesla V100-SXM2-16GB, pci bus id: 0000:00:1e.0)
2018-01-31 11:21:00,767@chess_zero.agent.model_chess DEBUG # loaded model digest = 0c379712fcb4204eccea535e5ff099cde78f87037e9805c85d4738bc350adb12
2018-01-31 11:21:00,955@chess_zero.agent.model_chess DEBUG # loading model from c:\chess-alpha-zero\data\model\next_generation\model_20180131-112007.316229\model_config.json
2018-01-31 11:21:04,111@chess_zero.agent.model_chess DEBUG # loaded model digest = 401e52dfc80f31e58218c2ab7e28f8bc480c1c9909eb3220c23b7da96d14d668
2018-01-31 11:21:04,111@chess_zero.worker.evaluate DEBUG # start evaluate model c:\chess-alpha-zero\data\model\next_generation\model_20180131-112007.316229
Using TensorFlow backend.
Using TensorFlow backend.
Using TensorFlow backend.
2018-01-31 11:24:03,871@chess_zero.worker.evaluate DEBUG # game 1: ng_score=0.0 as black win_rate= 0.0% r3kb1r/5pp1/4p2p/PQ1pP3/1ppP3B/6N1/3NBPPP/5RK1
2018-01-31 11:24:12,889@chess_zero.worker.evaluate DEBUG # game 2: ng_score=0.0 as black by resign win_rate= 0.0% 2r4r/1p2kpp1/p1p1p1np/8/1P6/P1Q3P1/5PBP/3RR1K1
2018-01-31 11:24:26,526@chess_zero.worker.evaluate DEBUG # game 3: ng_score=0.0 as white by resign win_rate= 0.0% 4R3/2qn1ppk/2p2n1p/p2p1P2/N2P1b2/1P1P3P/P1p3K1/8
2018-01-31 11:24:53,597@chess_zero.worker.evaluate DEBUG # game 4: ng_score=0.0 as white win_rate= 0.0% 2kr4/p1p2p2/2p5/5bp1/2p5/2P1Qq2/PP1B1n2/R3K2r
2018-01-31 11:25:01,775@chess_zero.worker.evaluate DEBUG # game 5: ng_score=0.5 as black win_rate= 10.0% 5r2/pN4k1/1b5p/nP1B3p/8/P2P4/2P2PPP/R4RK1
2018-01-31 11:25:43,852@chess_zero.worker.evaluate DEBUG # game 6: ng_score=0.0 as white by resign win_rate= 8.3% 4r1k1/5ppp/p7/1n3P2/1P2r3/6KP/6P1/8
2018-01-31 11:25:46,835@chess_zero.worker.evaluate DEBUG # game 7: ng_score=0.0 as white win_rate= 7.1% r4rk1/6pp/p1Qb2p1/1p6/3Pp3/4P2P/PP1B3q/R3R2K
2018-01-31 11:26:16,447@chess_zero.worker.evaluate DEBUG # game 8: ng_score=0.0 as black win_rate= 6.2% 6Q1/7R/8/5p1k/8/8/3p4/6K1
2018-01-31 11:26:24,944@chess_zero.worker.evaluate DEBUG # game 9: ng_score=0.0 as black win_rate= 5.6% 5Rk1/ppp5/2n2bpQ/3p1p2/3P4/2PB1PN1/PP3P1P/R5K1
2018-01-31 11:26:55,131@chess_zero.worker.evaluate DEBUG # game 10: ng_score=0.0 as black by resign win_rate= 5.0% r4rk1/1p2np1p/p1pP1B2/3p1p2/2P5/1P3N1P/P1QN1PP1/3RR1K1
2018-01-31 11:27:04,990@chess_zero.worker.evaluate DEBUG # game 11: ng_score=0.0 as white win_rate= 4.5% 4r1k1/6pp/2p5/3p4/3P4/2R1P2b/6qK/8
2018-01-31 11:27:19,516@chess_zero.worker.evaluate DEBUG # game 12: ng_score=0.0 as white win_rate= 4.2% r3k3/1b1n1prp/p1p5/1p6/1P1bp1P1/Q7/P3Bq2/5K2
2018-01-31 11:27:50,701@chess_zero.worker.evaluate DEBUG # game 13: ng_score=0.0 as black win_rate= 3.8% r1b2k1N/p4Qbp/1ppp3q/n4P2/4P3/2NP4/PPP3PP/R4RK1
2018-01-31 11:27:51,332@chess_zero.worker.evaluate DEBUG # game 14: ng_score=1.0 as black win_rate= 10.7% 5k2/1p4Rp/1p1p1B2/pP6/8/P7/2r3PP/4r1K1
2018-01-31 11:28:41,002@chess_zero.worker.evaluate DEBUG # game 15: ng_score=0.0 as white by resign win_rate= 10.0% 2kr3r/p1p2p2/2p5/5b2/2p3p1/2PnQNq1/PP6/R1BK4
2018-01-31 11:28:54,658@chess_zero.worker.evaluate DEBUG # game 16: ng_score=0.5 as white win_rate= 12.5% 5Q2/8/5K2/8/8/8/8/1k6
2018-01-31 11:29:13,311@chess_zero.worker.evaluate DEBUG # game 17: ng_score=0.0 as black win_rate= 11.8% 5Q2/pp2Q3/8/2p4B/1nP2k2/7P/5qPK/8
2018-01-31 11:29:45,672@chess_zero.worker.evaluate DEBUG # game 18: ng_score=0.0 as white by resign win_rate= 11.1% 2r1r1k1/5ppp/p1n5/3b4/P1pP4/2B1P1b1/1P2K1q1/7R
2018-01-31 11:29:55,172@chess_zero.worker.evaluate DEBUG # game 19: ng_score=0.0 as white win_rate= 10.5% r3r1k1/1p1n2pp/p2Q4/3p1p2/3Pn3/3BPb2/PP3Pq1/R1B1RK2
2018-01-31 11:30:02,172@chess_zero.worker.evaluate DEBUG # game 20: ng_score=0.0 as black by resign win_rate= 10.0% 4r3/8/7Q/p1pk4/5P2/P5K1/6P1/2R4R
2018-01-31 11:30:33,328@chess_zero.worker.evaluate DEBUG # game 21: ng_score=0.0 as black win_rate= 9.5% 8/1pp1rB2/1b1p2Qk/pP2p3/8/P1PP4/5PPP/R4RK1
2018-01-31 11:30:33,363@chess_zero.worker.evaluate DEBUG # game 22: ng_score=0.0 as white by resign win_rate= 9.1% r1b1k2r/1p3ppp/p1n5/4p3/4n3/2P2Kb1/PP5q/R1BQRB2
2018-01-31 11:31:08,451@chess_zero.worker.evaluate DEBUG # game 23: ng_score=0.0 as black by resign win_rate= 8.7% r2q1rk1/pp1bnNb1/5n1p/2pPN1p1/2P1P3/6BP/PP2BPP1/2RQ1RK1
2018-01-31 11:31:25,138@chess_zero.worker.evaluate DEBUG # game 24: ng_score=0.0 as white win_rate= 8.3% 2r2rk1/5p1p/p2p2pP/1p2p1P1/4P3/PP1bBn2/2q5/2K4R
2018-01-31 11:31:25,938@chess_zero.worker.evaluate DEBUG # game 25: ng_score=0.0 as black win_rate= 8.0% 4k3/p3Q3/1p6/3pR3/bP6/P7/6K1/8
2018-01-31 11:31:25,948@chess_zero.worker.evaluate DEBUG # lose count reach 22 so give up challenge

Traceback (most recent call last):
File "src\chess_zero\run.py", line 20, in
manager.start()
File "src\chess_zero\manager.py", line 70, in start
Exception in thread prediction_worker:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "C:\ProgramData\Anaconda3\lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "src\chess_zero\agent\api_chess.py", line 62, in _predict_batch_worker
while pipe.poll():
File "C:\ProgramData\Anaconda3\lib\multiprocessing\connection.py", line 257, in poll
return self._poll(timeout)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\connection.py", line 328, in _poll
_winapi.PeekNamedPipe(self._handle)[0] != 0):
BrokenPipeError: [WinError 109] The pipe has been ended

return evaluate.start(config)

File "src\chess_zero\worker\evaluate.py", line 22, in start
return EvaluateWorker(config).start()
File "src\chess_zero\worker\evaluate.py", line 59, in start
self.move_model(model_dir)
File "src\chess_zero\worker\evaluate.py", line 111, in move_model
new_dir = os.path.join(rc.next_generation_model_dir, "copies", model_dir.name)
AttributeError: 'str' object has no attribute 'name'
Exception in thread prediction_worker:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "C:\ProgramData\Anaconda3\lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "src\chess_zero\agent\api_chess.py", line 62, in _predict_batch_worker
while pipe.poll():
File "C:\ProgramData\Anaconda3\lib\multiprocessing\connection.py", line 257, in poll
return self._poll(timeout)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\connection.py", line 328, in _poll
_winapi.PeekNamedPipe(self._handle)[0] != 0):
BrokenPipeError: [WinError 109] The pipe has been ended

Thanks

SyntaxError

Traceback (most recent call last):
File "src/chess_zero/run.py", line 17, in
from chess_zero import manager
File "src/chess_zero/manager.py", line 22
def setup(config: Config, args):
^
SyntaxError: invalid syntax

Next step

As you now have a powerful M60 GPU (about 2-3x my GTX 1070), I am wondering what would be the most helpful next step so that I can still contribute.

uci and ARENA

Great project!

I can run chess-zero with Arena in Windows, though very slow. 1 min+ per move.
but got problem in setting up in Ubuntu. It does not move. Have renamed it to C0uci.sh. what else need to be done? please help. Tkx.

Supervised Learning results

The SL results are promising. I've finished three epochs of optimization using various PGN files of human games (players with Elo > 1600), and the model always improved and defeated the previous best model by a large margin. The model seems to be generalizing well and the loss keeps converging at a good rate.

I'm really confident that after the SL process converges, the self-play pipeline step will start to work properly and that we could, in principle, get a good chess player.

Regards.

Where is cut off depth defined in self-play?

@Zeta36

I want to set the turn limit to 60 so that it takes at most 60 seconds to generate one game. By doing that, we might need to employ a scoring scheme that determines who the winner is. The rules of Go naturally support this; what about chess?
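One possible scheme (purely illustrative, not something implemented in this repo) is to adjudicate a game that hits the move cap by counting material with python-chess:

import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9}

def adjudicate(board, margin=2):
    """Declare the side with a clear material edge the winner, otherwise a draw."""
    balance = 0
    for piece_type, value in PIECE_VALUES.items():
        balance += value * len(board.pieces(piece_type, chess.WHITE))
        balance -= value * len(board.pieces(piece_type, chess.BLACK))
    if balance >= margin:
        return "white"
    if balance <= -margin:
        return "black"
    return "draw"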

Not bugs, just questions about designing.

Hi, Zeta36!

I am trying to construct an alpha-zero-style AI for Twelve Shogi, a simple kind of Shogi. I am wondering how you designed the board state for the CNN input.

Thanks.

Problem with installl

Hi everyone, I don't know how to install the software; I always get this:

File "src/chess_zero/run.py", line 4, in
from dotenv import load_dotenv, find_dotenv
ModuleNotFoundError: No module named 'dotenv'

I need your help!!
Thanks, Nicolás

Optimal self-play ?

Congrats on this amazing project !!
How many games or hours are advisable for a good learning experience in Self-Play?
50 games took 21 min on my laptop using a GTX 950M -- would that be an acceptable speed?

UCI and ChessX

Hi guys,
I was not familiar with the UCI protocol, but I've tried to connect chess-zero with ChessX, and to make it work I had to change the parsing of the "position" command. It seems this program sends the extra token "fen". ChessX sends the command: position fen rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq e3 0 1

Once I fixed it, it kind of works.

Another question: any reason why, when starting chess-zero in UCI mode, it still uses MCTS instead of simply returning the argmax of the CNN output? I'm asking because in my case (with my hardware), calculating a move with player_chess takes several seconds. Looking at it today, it seems player_chess evaluates about 100 positions using the CNN, and calling predict() takes 150-200ms on my computer.
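For reference, a tolerant parser for the position command that accepts both the startpos and fen forms could look like the sketch below (using python-chess; this is not the repo's actual uci code):

import chess

def parse_position(tokens):
    """tokens: everything after the 'position' keyword, e.g.
    ['startpos', 'moves', 'e2e4'] or ['fen', <six FEN fields>, 'moves', ...]."""
    if tokens and tokens[0] == "fen":
        board = chess.Board(" ".join(tokens[1:7]))  # a FEN is six space-separated fields
        rest = tokens[7:]
    else:
        board = chess.Board()  # 'startpos'
        rest = tokens[1:] if tokens else []
    if rest and rest[0] == "moves":
        for move in rest[1:]:
            board.push_uci(move)
    return board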

No weights in ftp server?

Where can we share our weights and training data? I.e. to get a head start with your "good results"

Edit: model_best_config.h5 seems to be missing from the ftp server

Guide Please

Can someone make a video or a step by step dummy guide for the people that are struggling ?

issue in running sl

Facing this error while running run.py sl
ERROR
(cpu_tensor) C:\Users\m.waleed\Documents\chess-alpha-zero>python src/chess_zero/run.py sl --type mini
2018-01-30 18:12:43,800@chess_zero.manager INFO # config type: mini
['C:\Users\m.waleed\Documents\chess-alpha-zero\data\play_data\29-1-2018.pgn']
found 34 games
done reading
concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "C:\Users\m.waleed\AppData\Local\Continuum\anaconda3\envs\cpu_tensor\lib\concurrent\futures\process.py", line 175, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "src\chess_zero\worker\sl.py", line 140, in get_buffer
white_elo, black_elo = int(game.headers["WhiteElo"]), int(game.headers["BlackElo"])
KeyError: 'WhiteElo'
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "src/chess_zero/run.py", line 20, in
manager.start()
File "src\chess_zero\manager.py", line 73, in start
return sl.start(config)
File "src\chess_zero\worker\sl.py", line 25, in start
return SupervisedLearningWorker(config).start()
File "src\chess_zero\worker\sl.py", line 58, in start
env, data = res.result()
File "C:\Users\m.waleed\AppData\Local\Continuum\anaconda3\envs\cpu_tensor\lib
concurrent\futures_base.py", line 425, in result
return self.__get_result()
File "C:\Users\m.waleed\AppData\Local\Continuum\anaconda3\envs\cpu_tensor\lib
concurrent\futures_base.py", line 384, in __get_result
raise self._exception
KeyError: 'WhiteElo'

END

Can't play chess

When I want to play, it always says:

File "src/chess_zero/play_game/game_model.py", line 37, in move_by_ai
self.last_evaluation = self.last_history.values[self.last_history.action]
AttributeError: 'NoneType' object has no attribute 'values'

What should I do?

Second "good" results

By training the model with a config similar to the one in this repo (7 residual blocks of 256 filters) on FICS games, I think it's not doing too badly (I played white, the model gets 1200 sims/move):

image

The model and weights are in my fork, please feel free to clone and try for yourself (It's not compatible with this repo)

What are everyone's thoughts on this? Should I keep training this or start to scale up to more blocks? There are a total of 7*2+1=15 convolutions now, barely enough to traverse the board and back, so this might be a problem with respect to long-range tactics.

How can I start to train with self play?

How can I start to train chess without any data?
Should I run Self-Play, Trainer, and Evaluator simultaneously (3 processes at the same time) or sequentially (1 process at a time)?
If I need to run them simultaneously, how do the processes synchronize data?

logger isn't substituting values

For example this is what I get:
2017-12-13 09:38:11,484@chess_zero.agent.model_chess DEBUG # model files does not exist at {config_path} and {weight_path}
2017-12-13 09:38:12,425@chess_zero.agent.model_chess DEBUG # save model to {config_path}
2017-12-13 09:38:13,099@chess_zero.agent.model_chess DEBUG # saved model digest {self.digest}

IDK if I'm missing something here, but I don't think logger has access to the calling scope.
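For illustration: a plain string containing {config_path} is logged verbatim, because logging never evaluates the braces; substitution needs an f-string or %-style logging arguments. A minimal sketch with placeholder paths:

import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

config_path, weight_path = "model_config.json", "model_weight.h5"  # placeholders

# Logged literally -- the braces are just characters in the message:
logger.debug("model files does not exist at {config_path} and {weight_path}")

# Substituted at call time with an f-string:
logger.debug(f"model files does not exist at {config_path} and {weight_path}")

# Or let logging do the (lazy) formatting itself:
logger.debug("model files does not exist at %s and %s", config_path, weight_path)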

Running Alpha Zero

Whenever I run AlphaZero chess for the second time after reinstalling python I get the error

File "h5py\h5f.pyx", line 78, in h5py.h5f.open
OSError: Unable to open file (File signature not found)

How do I avoid getting that error? When I run the command src\chess_zero\run.py self --distributed and want to stop execution, I type Ctrl-C. How else can I stop the command without getting the above error? Thanks.

Philip

pyperclip access denied error on Windows 10

When running: python src/chess_zero/run.py self
On Windows 10 with up-to-date anaconda-python-3.6 with all requirements installed, I get the following error with pyperclip:

Traceback (most recent call last):
File "src/chess_zero/run.py", line 20, in
manager.start()
File "src\chess_zero\manager.py", line 64, in start
return self_play.start(config)
File "src\chess_zero\worker\self_play.py", line 25, in start
return SelfPlayWorker(config).start()
File "src\chess_zero\worker\self_play.py", line 69, in start
pretty_print(env, ("current_model", "current_model"))
File "src\chess_zero\lib\data_helper.py", line 26, in pretty_print
pyperclip.copy(env.board.fen())
File "C:\Users\user\Anaconda3\lib\site-packages\pyperclip_init_.py", line 574, in lazy_load_stub_copy
return copy(text)
File "C:\Users\user\Anaconda3\lib\site-packages\pyperclip_init_.py", line 416, in copy_windows
with clipboard(hwnd):
File "C:\Users\user\Anaconda3\lib\contextlib.py", line 81, in enter
return next(self.gen)
File "C:\Users\user\Anaconda3\lib\site-packages\pyperclip_init_.py", line 400, in clipboard
raise PyperclipWindowsException("Error calling OpenClipboard")
pyperclip.PyperclipWindowsException: Error calling OpenClipboard ([WinError 5] Access is denied.)
Exception in thread prediction_worker:
Traceback (most recent call last):
File "C:\Users\user\Anaconda3\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "C:\Users\user\Anaconda3\lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "src\chess_zero\agent\api_chess.py", line 62, in _predict_batch_worker
while pipe.poll():
File "C:\Users\user\Anaconda3\lib\multiprocessing\connection.py", line 257, in poll
return self._poll(timeout)
File "C:\Users\user\Anaconda3\lib\multiprocessing\connection.py", line 328, in _poll
_winapi.PeekNamedPipe(self._handle)[0] != 0):
BrokenPipeError: [WinError 109] The pipe has been ended

When testing pyperclip functionality with the same string in a Python shell it seems to work correctly, so my assumption is that it has to do with the multiprocessing environment.

Chess position (FEN)

Hi Zeta36, great job!

Is it possible to pass a position (FEN) and start analyzing from that point?

Best regards

Working for me

Progress is slow, but it does seem to be working!

Eval games take about 6 min. (set gpu mem to 25%)
Opt epochs take 29 sec. (gpu mem 30%)
Self play games highly variable. (gpu mem 25%)
Total gpu util about 93% (mem 90+% with Firefox and Nvidia and System monitors running)
Ubuntu 16.04LTS 4GHz i920 24GB 1070GTX 8GB

Thanks for sharing.

Now, how can we distribute it (at least self play and evaluation) like LeelaZero and Fishtest?

Invalid syntax while running run.py

$ python3 src/chess_zero/run.py self
Traceback (most recent call last):
File "src/chess_zero/run.py", line 17, in <module>
from chess_zero import manager
File "src/chess_zero/manager.py", line 38
logger.info(f"config type: {config_type}")
^
SyntaxError: invalid syntax

What ELO rating was the engine able to achieve so far

In the description of the project I was not able to find the current ELO rating of the engine achieved so far (also on what hardware and under what conditions was it achieved).

Have I missed this information somewhere?

Chinese chess version

Hi, this work is great !!!
Would it be very complicated to change to a Chinese chess version on the basis of this software? Which part should be changed?

Speed of move

Hi all,
In the GUI part of the description I saw the image where uci launches the Universal Chess Interface, and as far as I can see the average time per move is ~1 sec.
screenshot_27
But when I run the uci and chess-zero plays against itself, the average time per move is 41 sec.
Does anybody know why it's so different? 41 times slower.
screenshot_28

I don't understand anything :/

Hello, firstly, I want to say I am really interested in this project, even if my knowledge is clearly poor.

So I installed Python for Windows and read for days to get a grasp of many things, OK.
Among other things, I have no idea how to run chess-zero; using UCI with Arena is for later...
To be honest, I wonder if I am totally off track.
The last thing I tried is this:
PS C:\Users\xxxx\Projets\chess-alpha-zero-master> python src/chess_zero/run.py play_gui
Traceback (most recent call last):
File "src/chess_zero/run.py", line 17, in
from chess_zero import manager
File "src\chess_zero\manager.py", line 6, in
from .config import Config
File "src\chess_zero\config.py", line 2, in
import chess
ModuleNotFoundError: No module named 'chess'

Is there something I need to download?
Thank you in advance for any help.

The loss function

There's a difference between reinforcement and supervised learning in the AGZ paper. The paper mentions that although for the reinforcement version the loss function is something like "loss = action_loss + value_loss + L2_reg", the supervised version gives the "value_loss" part a smaller weight in order to prevent over-fitting. (Page 25 of the AGZ paper: "By using a combined policy and value network architecture, and by using a low weight on the value component, it was possible to avoid overfitting to the values (a problem described in prior work 12).")

This is probably because for each game there are dozens of actions but only one value (win/loss); giving too high a weight to "value_loss" would make the network memorize the game, potentially contaminating the shared part of the neural network.

Leela Zero chose "loss = 0.99 * action_loss + 0.01 * value_loss + L2_reg" in its supervised mode. See lines 3089-3105 of https://github.com/gcp/leela-zero/blob/master/training/caffe/zero.prototxt

If you guys are not aware of this I recommend trying it in chess because I think the over-fitting problem of the value part of the network is not go-only but a general one.
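In Keras this re-weighting can be expressed at compile time with loss_weights. A minimal sketch for a two-headed policy/value model follows; the layer sizes, the 1968 move-label count and the optimizer settings are placeholders, not this repo's actual architecture.

from keras.layers import Dense, Flatten, Input
from keras.models import Model
from keras.optimizers import SGD

# Tiny stand-in network: a shared trunk with a policy head and a value head.
board_input = Input(shape=(8, 8, 12))
trunk = Dense(64, activation="relu")(Flatten()(board_input))
policy = Dense(1968, activation="softmax", name="policy")(trunk)
value = Dense(1, activation="tanh", name="value")(trunk)
model = Model(board_input, [policy, value])

model.compile(
    optimizer=SGD(lr=0.01, momentum=0.9),
    loss=["categorical_crossentropy", "mean_squared_error"],
    loss_weights=[0.99, 0.01],  # low weight on the value head, as in Leela Zero's SL mode
)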

Error in the optimizer now after changes performed by Akababa

[root@localhost chess-alpha-zero-master]# python3 src/chess_zero/run.py opt
2017-12-26 16:55:29,154@chess_zero.manager INFO # config type: mini
Using TensorFlow backend.
2017-12-26 16:55:30,136@chess_zero.worker.optimize DEBUG # loading best model
2017-12-26 16:55:30,137@chess_zero.agent.model_chess DEBUG # loading model from /run/media/root/Fer_descargas/chess-alpha-zero-master/data/model/model_best_config.json
2017-12-26 16:55:30.975444: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2017-12-26 16:55:30.975705: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.8095
pciBusID: 0000:01:00.0
totalMemory: 5.93GiB freeMemory: 5.48GiB
2017-12-26 16:55:30.975718: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
2017-12-26 16:55:31,533@chess_zero.agent.model_chess DEBUG # loaded model digest = 0c379712fcb4204eccea535e5ff099cde78f87037e9805c85d4738bc350adb12
Traceback (most recent call last):
File "src/chess_zero/run.py", line 16, in
manager.start()
File "src/chess_zero/manager.py", line 48, in start
return optimize.start(config)
File "src/chess_zero/worker/optimize.py", line 23, in start
return OptimizeWorker(config).start()
File "src/chess_zero/worker/optimize.py", line 37, in start
self.training()
File "src/chess_zero/worker/optimize.py", line 47, in training
steps = self.train_epoch(self.config.trainer.epoch_to_checkpoint)
File "src/chess_zero/worker/optimize.py", line 65, in train_epoch
callbacks=[tensorboard_cb])
File "/usr/local/lib/python3.6/site-packages/keras/engine/training.py", line 1522, in fit
batch_size=batch_size)
File "/usr/local/lib/python3.6/site-packages/keras/engine/training.py", line 1378, in _standardize_user_data
exception_prefix='input')
File "/usr/local/lib/python3.6/site-packages/keras/engine/training.py", line 132, in _standardize_input_data
str(array.shape))
ValueError: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (0, 1)

First "good" results

Using the new supervised learning step I created, I've been able to train a model to the point where it seems to be learning chess openings. It also seems it is starting to avoid naively losing pieces.

Here you can see an example of a game played by me against this model (the AI plays black):

partida1

This model plays this way after only 5 epoch iterations of the 'opt' worker; the 'eval' worker replaced the best model 4 times out of 5. At this moment the loss of the 'opt' worker is 5.1 (and it still seems to be converging very well).

As I have no GPU, I had to evaluate ('eval') using only "self.simulation_num_per_move = 10" and only 10 files of play data for the 'opt' worker. I'm pretty sure that if anybody is able to run this on a good GPU with a more powerful configuration, the results after complete convergence would be really good.

Error during supervised learning

I'm trying to train the model on a set of 4000 games in a PGN file. I'm using Python 3.6.3 and TensorFlow without GPU. Here is the error traceback:
puser@vmi148103:~/chess-alpha-zero$ python3 src/chess_zero/run.py sl
2018-03-09 19:48:42,418@chess_zero.manager INFO # config type: mini
['/home/puser/chess-alpha-zero/data/play_data/Ivanchuk.pgn']
found 4033 games
done reading
concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/lib/python3.6/concurrent/futures/process.py", line 175, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "src/chess_zero/worker/sl.py", line 140, in get_buffer
white_elo, black_elo = int(game.headers["WhiteElo"]), int(game.headers["BlackElo"])
ValueError: invalid literal for int() with base 10: ''
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "src/chess_zero/run.py", line 20, in
manager.start()
File "src/chess_zero/manager.py", line 73, in start
return sl.start(config)
File "src/chess_zero/worker/sl.py", line 25, in start
return SupervisedLearningWorker(config).start()
File "src/chess_zero/worker/sl.py", line 58, in start
env, data = res.result()
File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result
return self.__get_result()
File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
ValueError: invalid literal for int() with base 10: ''
Any ideas what can be wrong here?
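The traceback points at an empty WhiteElo/BlackElo header in the PGN. A defensive parse that tolerates missing or empty rating headers (a sketch of a possible workaround, not a committed fix) would be:

def parse_elo(headers, default=0):
    """Return (white_elo, black_elo), tolerating missing or empty rating headers."""
    def to_int(value):
        try:
            return int(value)
        except (TypeError, ValueError):
            return default
    return to_int(headers.get("WhiteElo")), to_int(headers.get("BlackElo"))

# In sl.py's get_buffer one could then write, instead of the failing line:
# white_elo, black_elo = parse_elo(game.headers)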

Lets make it practical

I'm trying to use your code to train a model, but there are several show-stopper issues:

  1. You've mentioned already that two planes are not enough, but is that a show-stopper? I think it's critical.
  2. I see you're using a two-layer CNN model, while DeepMind usually uses a very deep network. How complex does a model have to be to be capable of learning chess?
  3. Generating self-play data takes too long. I have a good GPU (GTX 970) and it takes me a minute per game. We need tens of millions of games, and generating 2K games takes 24 hours (so it would take about 30 years just to generate all the data).
  4. Even supervised learning has limitations. The biggest one is loading all the games into memory before GPU optimization. I have 8GB of RAM, which limits me to 3K games. How about loading 1K games, running about 5 epochs, and then loading new games (see the sketch after this list)? This would allow training on tens of thousands of games.
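Regarding point 4, streaming the PGN in fixed-size chunks keeps memory bounded. A rough sketch with python-chess; iter_game_chunks and build_training_arrays are hypothetical helpers, not functions in this repo.

import chess.pgn

def iter_game_chunks(pgn_path, games_per_chunk=1000):
    """Yield lists of at most games_per_chunk parsed games, one chunk at a time."""
    with open(pgn_path, encoding="utf-8", errors="ignore") as handle:
        chunk = []
        while True:
            game = chess.pgn.read_game(handle)
            if game is None:
                break
            chunk.append(game)
            if len(chunk) == games_per_chunk:
                yield chunk
                chunk = []
        if chunk:
            yield chunk

# for games in iter_game_chunks("data/play_data/fics.pgn"):
#     x, y = build_training_arrays(games)  # hypothetical conversion step
#     model.fit(x, y, epochs=5)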

Data format?

Could someone write a quick documentation of the input planes?
Here's what I think it is:
  • The last 8 board positions, each one 8x8x12
  • Current state, also 8x8x12
  • Side to move, 8x8 constant
  • Move number, 8x8 constant

I think the move number is typo-d, it's using the halfmove clock instead of 50-move rule counter (which I assume is the intention).
We also theoretically don't need the side to move because we can flip the board and invert the colors, so the side to move is always on the bottom and has king on the right. Alternatively we can augment the dataset x2 by applying this transformation, but I think the dimensionality reduction with x2 learning rate is at least equivalent (and probably better). (It doesn't work for Go because of the 7.5 komi rule)

I think we're also missing castling.

Another idea: shuffle training data to avoid overfitting to one game

How is the policy vector represented?
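As a concrete illustration of the piece planes described above, one way to build a single 8x8x12 position tensor with python-chess and NumPy is sketched below; the exact plane order, history stacking and policy-vector encoding in this repo may differ.

import chess
import numpy as np

PIECE_ORDER = [chess.PAWN, chess.KNIGHT, chess.BISHOP,
               chess.ROOK, chess.QUEEN, chess.KING]

def board_to_planes(board):
    """One 8x8x12 tensor: 6 planes for white pieces, 6 for black."""
    planes = np.zeros((8, 8, 12), dtype=np.float32)
    for channel, piece_type in enumerate(PIECE_ORDER):
        for square in board.pieces(piece_type, chess.WHITE):
            planes[chess.square_rank(square), chess.square_file(square), channel] = 1.0
        for square in board.pieces(piece_type, chess.BLACK):
            planes[chess.square_rank(square), chess.square_file(square), channel + 6] = 1.0
    return planes

# Extra scalar planes (side to move, move counters, castling rights) can be appended
# as constant 8x8 layers, e.g. np.full((8, 8, 1), float(board.turn), dtype=np.float32).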

No module named chess

@Zeta36

When I run python src/chess_zero/run.py self under root, it says:

Traceback (most recent call last):
  File "src/chess_zero/run.py", line 16, in <module>
    from chess_zero import manager
  File "src/chess_zero/manager.py", line 6, in <module>
    from .config import Config
  File "src/chess_zero/config.py", line 2, in <module>
    import chess
ModuleNotFoundError: No module named 'chess'

Is there any python module named chess?

Crash when running self_play

Hi,

I just cloned the repository (a few hours ago), and I ran into a crash while trying to do self-play using the best model that comes with the source code. It crashes after a while (a few minutes); it looks like the different processes or threads have problems communicating with each other through pipes. I will look at it tomorrow, once I start exploring the code a bit more. You will find my config and the stack trace below.

My config:

  • Mac OS (10.12.6)
  • Python (3.6.4)
  • Using tensorflow (not tensorflow-gpu)

(venv) 2015sys0736:chess-alpha-zero stephane$ python src/chess_zero/run.py self
2017-12-24 23:55:56,014@chess_zero.manager INFO # config type: {config_type}
Using TensorFlow backend.
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
return f(*args, **kwds)
2017-12-24 23:55:57,195@chess_zero.agent.model_chess DEBUG # loading model from /Users/stephane/Documents/Dev/chess/chess-alpha-zero/data/model/model_best_config.json
2017-12-24 23:55:57.237053: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2017-12-24 23:56:00,410@chess_zero.agent.model_chess DEBUG # loaded model digest = 0c379712fcb4204eccea535e5ff099cde78f87037e9805c85d4738bc350adb12
Using TensorFlow backend.
Using TensorFlow backend.
Using TensorFlow backend.
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
return f(*args, **kwds)
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
return f(*args, **kwds)
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
return f(*args, **kwds)
Exception in thread prediction_worker:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "src/chess_zero/agent/api_chess.py", line 33, in predict_batch_worker
data.append(pipe.recv())
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/connection.py", line 250, in recv
buf = self._recv_bytes()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/connection.py", line 407, in _recv_bytes
buf = self._recv(4)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/connection.py", line 383, in _recv
raise EOFError
EOFError

concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/concurrent/futures/process.py", line 175, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "src/chess_zero/worker/self_play.py", line 87, in self_play_buffer
pipes = cur.pop() # borrow
File "", line 2, in pop
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/managers.py", line 757, in _callmethod
kind, result = conn.recv()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/connection.py", line 251, in recv
return _ForkingPickler.loads(buf.getbuffer())
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/connection.py", line 951, in rebuild_connection
fd = df.detach()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/resource_sharer.py", line 57, in detach
with _resource_sharer.get_connection(self._id) as conn:
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/resource_sharer.py", line 87, in get_connection
c = Client(address, authkey=process.current_process().authkey)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/connection.py", line 487, in Client
c = SocketClient(address)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/connection.py", line 614, in SocketClient
s.connect(address)
ConnectionRefusedError: [Errno 61] Connection refused
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "src/chess_zero/run.py", line 16, in
manager.start()
File "src/chess_zero/manager.py", line 46, in start
return self_play.start(config)
File "src/chess_zero/worker/self_play.py", line 22, in start
return SelfPlayWorker(config).start()
File "src/chess_zero/worker/self_play.py", line 47, in start
env, data = futures.popleft().result()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/concurrent/futures/_base.py", line 432, in result
return self.__get_result()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
ConnectionRefusedError: [Errno 61] Connection refused

Interpret game result

@Zeta36

I've played around a bit with the mini model, and it takes quite a long time for each game (is this normal? I saw a log message saying it was loading the normal model even though I typed --type mini):

2017-11-19 23:12:48,465@chess_zero.worker.self_play DEBUG # game 1 time=150.61661911010742 sec, turn=163:1n3kb1/8/8/pp4P1/2nN4/7r/5p2/2K5 b - - 1 82 - Winner:Winner.black - by resignation?:True
2017-11-19 23:14:49,662@chess_zero.worker.self_play DEBUG # game 2 time=121.1972291469574 sec, turn=120:1n4r1/1p4k1/pq2Rp2/1P3n1P/2BP4/P1P2bP1/3P4/2B1R1K1 w - - 9 61 - Winner:Winner.draw - by resignation?:False
2017-11-19 23:15:08,604@chess_zero.worker.self_play DEBUG # game 3 time=18.94098925590515 sec, turn=21:rn1qkbnr/4pp2/pp1p3p/3p2p1/3P1Pb1/8/PPPKPBPP/RN3B1R b kq - 1 11 - Winner:Winner.black - by resignation?:True
2017-11-19 23:16:24,316@chess_zero.worker.self_play DEBUG # game 4 time=75.7124376296997 sec, turn=81:4k2r/3p1q2/2pB3p/5p1R/r2n2P1/P7/3K1P2/8 b k - 1 41 - Winner:Winner.black - by resignation?:True

And my question is: how do I interpret the game result? Thanks!

Keep up the good work! I am looking forward to seeing a visualization of the game results.

EDIT: I believe it's running the normal model even though I set it to be mini
