
luecx / koivisto

133 stars · 13 watchers · 32 forks · 40.92 MB

UCI Chess engine

License: GNU General Public License v3.0

Languages: C++ 70.64% · C 27.52% · Makefile 1.70% · CMake 0.14%
Topics: chess-engine, uci-chess-engine

koivisto's Introduction

Koivisto UCI

[Banner: Ukraine]

Koivisto is a strong chess engine written primarily by Kim Kåhre and Finn Eggers in C++. Koivisto is not a complete chess program in itself and requires a UCI-compatible graphical user interface.

Supported UCI settings:

  • Hash
  • SyzygyPath (up to 6 pieces)
  • Threads (up to 256)
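
For reference, these options are configured through standard UCI setoption commands before a search is started; the values and the tablebase path below are only placeholders:

setoption name Hash value 256
setoption name Threads value 8
setoption name SyzygyPath value /path/to/syzygy
isready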

Acknowledgements

Thanks to all the Koivisto contributors: kz04px, Eugenio Bruno, Jay Honnold, Daniel Dugovic, Aryan Parekh, Morgan Houppin, and Max Allendorf. Additionally, we have received invaluable help and advice from Andrew Grant and theo77186. We use Fathom for tablebase probing. The Chess Programming Wiki has been a very useful resource.

Additionally, we have received support from:

Compiling

Note that compiler warnings might pop up; these can be safely ignored and will most likely be fixed in a future release.

Windows / Linux

We provide binaries for Windows / Linux systems. You can download them for each release after Koivisto 3.0 here. Note that we strongly recommend building the binaries yourself for best performance. Assuming build tools have been installed, one can type:

git clone https://github.com/Luecx/Koivisto.git
cd Koivisto/

cd src_files
make pgo

Besides compiling a native version which should be best in terms of performance, one can also compile static executables using:

cd src_files
make release

MacOS

We do not provide binaries for MacOS yet.

koivisto's People

Contributors

altarchess, aryan1508, ddugovic, disservin, eugenio-bruno, jhonnold, jnlt3, joohanblunder, kierenp, kz04px, luecx, mhouppin, srimethan


koivisto's Issues

Improve Thread scaling

The thread scaling of Koivisto has been shown to be mediocre at best at TCEC level. This needs to be resolved ASAP.

Edit by Kim: Specifically, NPS scaling.

Koivisto Search

FEN: 8/1p4r1/p2bb1pk/3p1p2/3P1PNP/q2BQ3/2R3K1/8 b - - 1 42

I have tested this position with Koivisto 7.2:
42... Kh5 {Black loses}
42... Kh7 {Black loses}
42... fxg4 {draw}

Koivisto 7.2
42...Kh5 43.Nf6+ Kh6 44.Qxe6 Qxd3 45.Rc8 Qd2+ 46.Kh1 Qd1+ 47.Kh2 Bxf4+ 48.Kg2 Rh7 49.Rg8 Qd2+ 50.Kf3 Qd3+ 51.Kxf4 Qxd4+ 52.Kf3 Qd1+ 53.Kg2 Qc2+ 54.Kg3 Qc3+ 55.Kh2 Qb2+ 56.Kg1 Qc1+ 57.Kf2 Qc2+
The position is equal = 0.00 Depth: 43 00:01:55 3180 mN TBhits = 11942687

Stockfish
42...fxg4 43.f5+ Kh7 44.fxe6 Re7 45.h5 Kg7 46.Rf2 g3 47.Rc2 a5 48.Qe2 Bf4 49.Bxg6 Qe3 50.Qg4 Bg5 51.Bf7 Qf4 52.h6+ Kh7 53.Qxg3 Qe4+ 54.Kg1 Qxc2 55.Qxg5 Qd1+ 56.Kh2 Qe2+ 57.Kg3 Qd3+ 58.Kh2
White is slightly better = 0.00 Depth: 46 00:00:28 901 mN TBhits = 3989578

Draw evaluation

Some draw evaluation needs to be implemented.

Potentially this should be done using scaling techniques for the endgame.
This should cover different piece combinations with opposite-coloured bishops (OCB) and other known drawish endgames.

Most likely this should also be incorporated into the tuner.
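
A minimal sketch of what such an endgame scaling hook could look like; the names and values here are purely illustrative assumptions, not Koivisto's actual code:

// Hypothetical drawishness scaling applied to the endgame score.
// The returned scale is in [0, 128]; 128 leaves the score unchanged.
int drawishnessScale(int wBishopsLight, int wBishopsDark,
                     int bBishopsLight, int bBishopsDark,
                     int wOtherPieces, int bOtherPieces) {
    const bool ocb = (wBishopsLight + wBishopsDark == 1)
                  && (bBishopsLight + bBishopsDark == 1)
                  && (wBishopsLight != bBishopsLight)
                  && wOtherPieces == 0 && bOtherPieces == 0;
    if (ocb)
        return 64;   // pure opposite-coloured-bishop endgame: halve the score
    // further known drawish material combinations would be handled here
    return 128;      // normal scaling
}

// usage (sketch): score = score * drawishnessScale(...) / 128;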

Koivisto issues

Hi Koivisto Team

Here are pictures of current Koivisto issues.

One of them is CPU related, as there seems to be a considerable slowdown of your engine at about 62%.

The other is evaluation related.

https://www.photobox.co.uk/my/photo/full?photo_id=504597829271
https://www.photobox.co.uk/my/photo/full?photo_id=504597836893

PS: My real name is Damir and I am a regular user on talkchess forum.

Currently I have 64 cores AMD Threadripper at my disposal.

Those were the 2 issues that I noticed were affecting Koivisto's play.

not able to make native

The syntax of the command is incorrect.
make: *** [makefile:40: native] Error 1
That is the error I get; I am not able to compile even though everything seems to be set up correctly.

more support for ARM architectures

Great, strong engine. Currently, though, it doesn't seem to support building on generic aarch64 (and even less so on armv7); this may be because the build is restricted to the newer ARM NEON instruction set only.

Talkchess post content

It has been a long time since we released Koivisto 4.0, and many things have happened since. We were actively developing Koi while not wanting Koivisto to be tested by third parties.

We dislike the popularity of neural networks inside chess engines, not because we do not understand how they work, but mostly because those who use them often seem not to understand what they are actually doing.
Using neural networks does not require any understanding of chess, which you would need when writing a hand-crafted evaluation, or as we like to refer to it: real-men-evaluation (RME). Using neural networks has become more of an engineering challenge than anything else. Three components are required: a good tuner, good data, and a good engine implementation. Since a good tuner requires some understanding of how they work, most engines out there seem to be using other people's tuners. Effectively there are just a few tuners out there, but a lot more NN engines. Secondly, generating data seems to be a privilege of the big projects which gather computing resources around them. The easiest part is probably the NN implementation inside the engines themselves, although even here many people seem to ctrl+c, ctrl+v popular implementations.

Since we personally work with neural networks besides chess engine development, we decided to write our own tuner... from scratch... We already did this a few months ago, but just a few days ago we decided to give it a shot and actually tune a few networks. We generated around 1.5M self-play games with Koivisto, extracted a few positions from each and initially ended up with around 50M positions. Later we realised that the filtering mechanism we applied was bad and simply wrong. This led to a neural network which beat our master branch by just 40 Elo. There are other parties helping out with Koivisto, like @justNo4b, who generated some data with Drofa himself; he used the tuner and produced a network which was suddenly +80 Elo above master.

The result seemed slightly surprising so we rechecked the data generation and filtering and found a bug. After redoing the training process which barely took one hour, we tested a new network which showed the following result:

ELO   | 103.93 +- 5.74 (95%)
CONF  | 10.0+0.10s Threads=1 Hash=16MB
GAMES | N: 10240 W: 5019 L: 2044 D: 3177

A network trained on 100M positions of Ethereal data provided by Andrew Grant was ~150 elo above master.

Koivisto, while standing on the shoulders of giants, has implemented many of its own specialities in both search and classical eval on top of well-known concepts. We are now taking the path of the sloth and replacing our beloved RME with a silly neural network. We want to maintain our distance from other engines, so it was important that our NN development kept the same 'Koivisto touch' that we had before. All three aspects of our development have been done internally and are our own: we have written our own trainer, generated our own data, and have our own NN probing code. We strive to be as original as possible and will not veer from this path moving forward.

Generation of higher-quality data is ongoing and might lead to additional Elo being gained here. Since the NN branch in our project started as a small test to verify the integrity of our NN tuner, we have chosen a very simple, non-relative, 2-layer, 12x64-input network. A new topology is high on our list, since we consider the current topology to be very far from optimal.
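
For context, a 2-layer network with a 12x64 input corresponds to 768 binary piece-on-square features (6 piece types x 2 colours x 64 squares) feeding one hidden layer and a single output. A rough, illustrative forward pass, with hypothetical sizes and names rather than Koivisto's actual code, could look like this:

// 12x64 = 768 one-hot inputs, one hidden layer, one output.
// HIDDEN is a made-up size; weights are assumed to be loaded elsewhere.
constexpr int INPUTS = 12 * 64;
constexpr int HIDDEN = 256;

float forward(const bool  features[INPUTS],
              const float w1[INPUTS][HIDDEN], const float b1[HIDDEN],
              const float w2[HIDDEN],         const float b2) {
    float hidden[HIDDEN];
    for (int h = 0; h < HIDDEN; ++h)
        hidden[h] = b1[h];
    for (int i = 0; i < INPUTS; ++i)          // accumulate active features
        if (features[i])
            for (int h = 0; h < HIDDEN; ++h)
                hidden[h] += w1[i][h];
    float out = b2;
    for (int h = 0; h < HIDDEN; ++h)          // ReLU + output layer
        out += w2[h] * (hidden[h] > 0.0f ? hidden[h] : 0.0f);
    return out;                               // evaluation score
}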


Besides the addition of neural network code inside Koivisto, we have 83 further Elo-gaining patches since 4.0. Many RME patches were made along the way which are now effectively invalidated; more on that later. Our search has also made a lot of progress: by adding further unique ideas not found in any other engine so far, we have gained a large amount of Elo inside our search since then.

Together with the neural network code, our results against Koivisto 4.0 look like this:

ELO   | 367.3 +- 30.5 (95%)
CONF  | 10.0+0.10s Threads=1 Hash=16MB
GAMES | N: 919 W: 767 L: 46 D: 106

The new release can be found [here] <- insert link or sth. Due to the size of the network files, we will keep the networks in a separate submodule of our repository. Further information can be found on our GitHub page.


Besides the Koivisto 5.0 release, we want to make an in-between release for our last HCE version. Since many people, especially the Berserk author, have helped massively with our classical evaluation, we want to make a final release which marks the end of development for RME inside Koivisto.

The release for that can also be found [here] <- insert link or sth


We want to thank all the contributors to the project, especially the Berserk author for his massive contribution to our search and the classical evaluation, @justNo4b for helping and supporting us with various topics and with the training of neural networks, and Andrew Grant for the many discussions we had to improve parts of the code, for sharing scripts, and much more. Besides that, we thank the official OpenBench Discord with all its members (especially noobpwnftw) for answering any question we have as soon as possible and supporting us whenever possible. We also want to thank the author of Seer for offering to share training resources with us and for giving us ideas for training our classical as well as our neural network evaluation.

Clang MacOS compiles

"Both builds darwin-neon and darwin-sse2 do not work on my Mac mini Silicon M1"

as well as native compiles:

cd src_files
make native
Cloning into 'Koivisto'...
remote: Enumerating objects: 14525, done.
remote: Counting objects: 100% (305/305), done.
remote: Compressing objects: 100% (288/288), done.
Receiving objects: 100% (14525/14525), 40.20 MiB | 673.00 KiB/s, done.
remote: Total 14525 (delta 217), reused 91 (delta 15), pack-reused 14220
Resolving deltas: 100% (11372/11372), done.
git -C .. submodule update --init
Submodule 'networks' (https://github.com/Luecx/KoivistoNetworks.git) registered for path 'networks'
Cloning into '/Users/alessandromorales/Koivisto/networks'...
Submodule path 'networks': checked out '338372d58e0b2cace9c75555af97da766b1a606e'
mkdir -p ../bin/
g++ -O3 -std=c++17 -Wall -Wextra -Wshadow -DEVALFILE=\"../networks/default.net\" -DNDEBUG -flto -march=native *.cpp syzygy/tbprobe.c -DMINOR_VERSION=0 -DMAJOR_VERSION=8 -pthread -Wl,--whole-archive -lpthread -Wl,--no-whole-archive -march=native -o ../bin/Koivisto_8.0-x64-linux-native
clang: warning: treating 'c' input as 'c++' when in C++ mode, this behavior is deprecated [-Wdeprecated]
clang: error: the clang compiler does not support '-march=native'

Checkmate and 50 move rule at the same time

Checkmate overrides the 50 move rule
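
For reference, this is how a game-termination check could order the two rules so that a mating move played on the very ply that completes the 50-move count still wins; this is a sketch with invented names, not Koivisto's (or any GUI's) actual adjudication code:

// Checkmate is tested before the halfmove-clock draw, so a mate delivered
// on the move that reaches 100 plies without a capture or pawn move
// still decides the game.
enum class Result { Ongoing, WhiteWins, BlackWins, DrawFiftyMoves };

Result gameResult(bool stmInCheck, bool stmHasLegalMove,
                  int halfmoveClock, bool whiteToMove) {
    if (!stmHasLegalMove && stmInCheck)        // checkmate first
        return whiteToMove ? Result::BlackWins : Result::WhiteWins;
    if (halfmoveClock >= 100)                  // 50-move rule (100 plies)
        return Result::DrawFiftyMoves;
    return Result::Ongoing;                    // stalemate etc. omitted here
}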

https://www.chess.com/computer-chess-championship#event=stockfish-classical-bonus-ii&game=168

[Event "Stockfish Classical Bonus II (15|3)"]
[Site "?"]
[Date "2022.04.03"]
[Round "1"]
[White "Stockfish Classic"]
[Black "Koivisto"]
[Result "1-0"]
[ECO "A27"]
[GameDuration "00:40:38"]
[GameEndTime "2022-04-03T09:08:45.981 PDT"]
[GameStartTime "2022-04-03T08:28:07.927 PDT"]
[Opening "English"]
[PlyCount "225"]
[TimeControl "900+3"]
[Variation "Three knights system"]

1. c4 e5 2. Nc3 Nc6 3. Nf3 f5 4. g3 d6 5. d4 e4 6. d5 Ne5 7. Nxe5 dxe5 8. g4 g6
9. gxf5 gxf5 10. Rg1 Nf6 11. Bg5 Kf7 12. e3 Rg8 13. Be2 Qd6 14. c5 Qxc5 15. Bh5+
Rg6 16. Bxg6+ hxg6 17. Bxf6 Kxf6 18. Qa4 b5 19. Qxb5 Qxb5 20. Nxb5 Rb8 21. Nxa7
Ba6 22. O-O-O Ra8 23. Nc6 Be2 24. Rd2 Rxa2 25. Kb1 Bc4 26. Rc1 Ra4 27. b4 Bd3+
28. Kb2 Bxb4 29. Nxb4 Rxb4+ 30. Ka3 Rb5 31. Rc6+ Kg5 32. h4+ Kxh4 33. Rxg6 Ra5+
34. Kb4 Rb5+ 35. Kc3 Rc5+ 36. Kb4 Rb5+ 37. Kc3 Rc5+ 38. Kb2 Rb5+ 39. Ka1 Ra5+
40. Ra2 Rxa2+ 41. Kxa2 Bc4+ 42. Ka3 Bxd5 43. Kb4 f4 44. Kc5 Bb7 45. Kc4 Kh3 46.
Rg7 f3 47. Rg3+ Kh2 48. Rg6 Kh3 49. Kc3 Bd5 50. Rg3+ Kh2 51. Rg5 c6 52. Kd2 Kh3
53. Rg3+ Kh2 54. Rg4 Kh1 55. Rg6 Kh2 56. Rg5 Kh3 57. Rg3+ Kh2 58. Rg6 Kh3 59.
Rh6+ Kg2 60. Ke1 Kg1 61. Rf6 Kh2 62. Rf5 Bc4 63. Rxe5 Bd5 64. Re8 Kg2 65. Rd8
Be6 66. Rd1 Bd5 67. Ra1 Bb3 68. Rb1 Bd5 69. Rb2 Kh1 70. Rb7 Kg1 71. Rh7 Ba2 72.
Rg7+ Kh2 73. Rh7+ Kg1 74. Rg7+ Kh2 75. Rc7 Bd5 76. Rh7+ Kg2 77. Rh6 Kg1 78. Rh4
Kg2 79. Rg4+ Kh2 80. Kd2 Kh3 81. Rg3+ Kh2 82. Kc3 Kh1 83. Kd4 Kh2 84. Kc5 Kh1
85. Kd4 Kh2 86. Ke5 Kh1 87. Kd6 Kh2 88. Ke5 Kh1 89. Kd6 Kh2 90. Kc5 Kh1 91. Rg4
Kh2 92. Kd6 Kh3 93. Rg5 Kh2 94. Rg3 Kh1 95. Kc5 Kh2 96. Kd6 Kh1 97. Rg4 Kh2 98.
Kc5 Kh1 99. Rg3 Kh2 100. Kd4 Kh1 101. Rg4 Kh2 102. Ke5 Kh1 103. Kd4 Kh2 104. Ke5
Kh1 105. Kd6 Kh2 106. Kd7 Kh3 107. Rg1 Kh4 108. Rg3 Kh5 109. Kd6 Kh4 110. Ke5
Kh5 111. Kf6 Kh4 112. Kf5 Kh5 113. Rh3# 1-0


Koivisto 6.0 release post

Howdy fellow chess enthusiasts,

Not long ago we released Koivisto 5.0, with the goal of making a unique engine based on training data generated by its previous version, with its own tuning and inference code. Since 5.0, which marked the release of our first neural network, many things have happened. Firstly, we tweaked the feature transformer in a way that requires more than just one accumulator: by making the input to the network effectively relative to the side to move, we gained about 30 Elo. Further patches followed that tweaked the search, making it more aggressive, since the prediction of the network outperforms our previous RME.
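
As a rough illustration of the side-to-move-relative idea (names and sizes are assumptions, not Koivisto's actual network code): two perspective accumulators are maintained, and the pair fed into the rest of the network is swapped depending on who is to move.

#include <array>
#include <cstdint>
#include <utility>

constexpr int HIDDEN = 256;                    // hypothetical hidden size

struct Accumulators {
    std::array<int16_t, HIDDEN> white;         // features from White's view
    std::array<int16_t, HIDDEN> black;         // mirrored, from Black's view
};

// Select (us, them) relative to the side to move, so the evaluation is
// always computed from the mover's perspective.
inline std::pair<const int16_t*, const int16_t*>
perspective(const Accumulators& acc, bool whiteToMove) {
    return whiteToMove
         ? std::make_pair(acc.white.data(), acc.black.data())
         : std::make_pair(acc.black.data(), acc.white.data());
}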

Furthermore, we introduced a completely new time-management scheme which, as far as we know, has never been tested in any other engine. We use the internal node counts of the root-move subtrees to estimate how many good moves there are at the root and, based on that, increase or decrease the time we spend on the search.
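
A sketch of the kind of heuristic described here; the thresholds and names are invented for illustration and are not Koivisto's actual values:

#include <cstdint>

// Scale the allotted move time by how dominant the best root move is in
// terms of nodes searched in its subtree: a single dominant move suggests
// an easy decision (stop earlier), a flat distribution suggests several
// good candidates (search longer).
double timeScale(uint64_t bestRootMoveNodes, uint64_t totalRootNodes) {
    if (totalRootNodes == 0)
        return 1.0;
    const double share = double(bestRootMoveNodes) / double(totalRootNodes);
    if (share > 0.90) return 0.5;   // one clear best move
    if (share < 0.40) return 1.5;   // many plausible moves
    return 1.0;
}

// usage (sketch): allocatedTime = int(allocatedTime * timeScale(nBest, nTotal));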

Lastly, we spent over one week generating 2^24 = 16.777M games, resulting in approximately 1 billion FENs, which were scored using a depth-10 search as well as the game outcome. The results surpassed our expectations by a big margin, resulting in:

ELO   | 88.83 +- 3.46 (95%)
CONF  | 10.0+0.10s Threads=1 Hash=16MB
GAMES | N: 20000 W: 7664 L: 2659 D: 9677

We have also tracked the Elo changes relative to 5.0; the entire list can be found in our wiki.

https://github.com/Luecx/Koivisto/wiki/Regression-tests

The latest regression test puts us at about 200 Elo above 5.0. Since we are far from done with the network, yet want to release as soon as we pass 100 Elo over the latest release, we decided to release Koivisto 6.0 today.

ELO   | 199.07 +- 7.17 (95%)
CONF  | 10.0+0.10s Threads=1 Hash=16MB
GAMES | N: 6216 W: 3568 L: 351 D: 2297

The compiled executables can be found on our release page here.

Kim & Finn

avx512 compile

Would like to see Koivisto as an AVX-512 compile in the near future!

Of course, if some testing is needed, I can always try it out on my i9-7980XE, where I run AVX-512 engines.

Kind regards,
Ipman.

CMakeLists.txt ?

Normally when I encounter a CMakeLists.txt, I run cmake -DCMAKE_BUILD_TYPE=Debug to get a custom makefile, but doing so I get an error while compiling: ../networks/default.net not found. The path must be wrong then, because the net file DOES exist. But when I compile with make native inside the source folder (as your README states), all goes OK: no compile errors, and the v8.0 binary (12.2 MB) runs fine in CuteChess (v1.2.0 on Linux).

where is the NN file ?

It seems Koivisto (6.16) uses some neural network, but I see no UCI option for it. Also, the Linux binary is under 1 MB, so I guess a NN must be embedded. Did you train it uniquely for Koivisto? I find no info about this.

macOS doesn't support --whole-archive

MacBook Pro Intel 2020 - Monterey 12.2.1

I was able to compile successfully by replacing it with -all_load and -noall_load.

Though it seems that -noall_load is ignored, as the following message is generated:

ld: warning: option -noall_load is obsolete and being ignored

Here is the top of my modified makefile I used to build:

CC       = g++
SRC      = *.cpp syzygy/tbprobe.c
LIBS     = -pthread -Wl,-all_load -lpthread -Wl,-noall_load
FOLDER   = bin/
ROOT     = ../
NAME     = Koivisto
EVALFILE = $(ROOT)networks/default.net
EXE      = $(ROOT)$(FOLDER)$(NAME)_$(MAJOR).$(MINOR)
MINOR    = 6
MAJOR    = 8
ifeq ($(OS),Windows_NT)
    PREFIX := windows
    SUFFIX := .exe
else
    PREFIX := darwin
    SUFFIX := 
endif

change log and specs

hi !

Thanks for this new engine! v4(.1) seems much faster than v3(.13)!

Do you have a change log? What are the coding ideas and specifications? I see Andrew Grant is saluted, but there is little info about this engine.
And why does it lack MultiPV? Is it by design?

Koivisto 7.0 Release text

Koivisto 7.0

With the next TCEC swiss cup nearing and a great performance at the previous TCEC event, we decided to release the next version of Koivisto. Since many people noticed the "unusual", non-crashing, performance of Koivisto at the previous TCEC cup, questions have been raised, asking how and why Koivisto is performing so well.

The answer to that question is more complex than just a few sentences. A few people realized that Koivisto was not playing with the normal, publicly available network. With this release, we have retrained another network of similar performance which will be embedded in the engine. Due to a huge number of search patches, Koivisto 7.0 outperforms Koivisto 6.0 by a big margin, with a win/loss ratio of over 6 and a stunning 110 Elo in self-play. We have not added major new features besides performance and accuracy.

We hope to bring out further patches which will increase the strength even more. We plan to release major versions every 100 Elo in self-play, which is a very ambitious goal from now on.

FYI

Not an issue, just sharing some results.

See https://iandoug.com/?p=1593

PGN of the finals is attached if you want to take a look. All engines struggle with the pawn/bishop/knight pawn-blockade endgames ... humans would take a pawn or push a pawn to make progress.

Congrats on a good engine.

Cheers, Ian

finals.zip

POPCNT binary very weak

Did a POPCNT and a NATIVE binary on OS X and it turns out that the engine is super weak.

Maybe the network code only works for AVX2 machines?

Koivisto crashing when starting game with a TB position

This UCI sequence makes Koivisto 8.0 crash (I haven't tested the latest master):

setoption name SyzygyPath value H:\syzygy\3-6
isready
ucinewgame
position fen 8/8/8/5N1p/4k2N/6K1/8/8 w - - 0 1
go wtime 11000 btime 11000 winc 1000 binc 1000

Koivisto playing Chess Variants

Hi everyone,
Thank you all the team for this nice engine.

I wonder if you could make an engine that plays chess variants. Inspiration can come from Stockfish and the excellent Fabian Fichter, whose work is simply outstanding.

This project could also be a Ph.D. topic, and I can help with that. Please send me a private message and check my website musketeerchess.net

Fathom

You probably want to use an updated version of https://github.com/jdart1/Fathom - it has had some fixes since the version you are synced with. You could use a submodule to reference the source instead of copying it.
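
For example, something along these lines would work from the repository root; the destination path "fathom" is only a placeholder and should be adjusted to wherever the copied sources currently live:

git submodule add https://github.com/jdart1/Fathom.git fathom
git submodule update --init fathom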

Build process not unique and performance not the same

Hi,

Building the engine with "cmake . && make" in the root dir leads to a binary that is at least ~200 Elo weaker than one built using "make native" inside the source dir.

This can be quite misleading.

OS X compile error

To get this compiled in an OS X clang environment:

Add

#include <sstream>

in Util.h

Adjust TT replacement depth by depth and age

Idea: currently we discard TT entries with a much higher depth if the entry is from a previous search. The idea is to keep previous entries and adjust their depth by the age difference.

The age difference would be:

Diff = (ageN - ageO + 256) % 256

with ageN being the age of the new entry and ageO the age of the old entry. This diff is always non-negative and can be used to adjust the depth.

Instead of replacing when

depthN > depthO OR ageN is not ageO.

Replace when

depthN > depthO - 2 * Diff
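
A minimal sketch of the proposed rule; the entry layout and names are hypothetical, not the actual Koivisto transposition-table code:

#include <cstdint>

struct TTEntry {
    int8_t  depth;
    uint8_t age;
    // key, move, score, bound etc. would live here in a real entry
};

bool shouldReplace(const TTEntry& oldEntry, int newDepth, uint8_t newAge) {
    // Age difference, wrapped at 256 so it is always non-negative.
    const int diff = (int(newAge) - int(oldEntry.age) + 256) % 256;
    // Old rule: newDepth > oldEntry.depth OR newAge != oldEntry.age.
    // Proposed rule: older entries effectively lose 2 plies of depth per
    // age step instead of being discarded outright.
    return newDepth > oldEntry.depth - 2 * diff;
}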

Endgame mistake : could win but draws

8/8/4nN2/5K1p/5n2/4k3/8/8 b - - 3 22

[Screenshot: Koivisto v7.5 plays Nc5 instead of h4]

Black to move.
Here Koivisto v7.5 played Nc5? and thus only drew; in this position only h4! wins.
The move h4 seems obvious, and many engines find and play it.

I did the test in CuteChess GUI (on Linux) with hash 64 Mb and 1 thread.
