zulko / easyAI
Python artificial intelligence framework for games
Home Page: http://zulko.github.io/easyAI/
License: Other
It is fairly common for some of the possible moves to have an equal score at the end. I believe the current algorithm simply chooses the first best selection. But often, from a strategic point of view, some moves are better even if they result in the same level of fitness. I'd like to add another optional function to TwoPlayersGame:
def judge_best_of_equal_moves(self, equal_move_list):
    # compare the moves in equal_move_list against self.board and choose one
    return best_move
This is only called if it exists and only at the end of the negamax function.
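To illustrate how the hook might slot in after the root search (all names here are hypothetical, not the actual easyAI internals):

```python
# Sketch of the proposed tie-breaking hook (all names are hypothetical,
# not the actual easyAI API). After negamax scores the top-level moves,
# equally scored candidates are passed to an optional judge method.

def choose_best_move(game, scored_moves):
    """scored_moves: list of (move, score) tuples from the root search."""
    best_score = max(score for _, score in scored_moves)
    equal_moves = [move for move, score in scored_moves if score == best_score]
    if len(equal_moves) > 1 and hasattr(game, "judge_best_of_equal_moves"):
        return game.judge_best_of_equal_moves(equal_moves)
    return equal_moves[0]  # current behaviour: the first best move wins


class DummyGame:
    # example judge: prefers central columns when scores are tied
    def judge_best_of_equal_moves(self, equal_move_list):
        return min(equal_move_list, key=lambda m: abs(m - 3))


game = DummyGame()
print(choose_best_move(game, [(0, 5), (2, 7), (6, 7)]))  # -> 2
```

If no judge method exists, behaviour is unchanged, so the hook stays fully backwards compatible.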
If you are okay with this, I'll write the code, update the version to '0.0.6', update the documentation, and generate a pull request. I'd ask that perhaps PyPI also be updated, as it currently sits at '0.0.4'.
Thanks! John
Off topic:
In about a month, I'll probably be writing a "stubborn greedy" AI for this library (I made up the name myself). Instead of parsing a full decision tree, the algorithm only follows a single linear path for each possible top move, with the assumption that each player only considers the immediate effect of a decision. So, for example, if the immediate list of possible moves always has 4 choices, then a 20-level 'stubborn greedy' algorithm will only consider 4 x 4 x 20 possible decisions. It will almost always return bad results, but for some games it might return interesting results. I suspect it is similar to SSS* except that both MAX and MIN only select one path, so the result is a linear algorithm rather than an exponential one. There might already be a name for this, but I'm not an AI expert.
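For concreteness, a toy sketch of the idea (my own naming; for simplicity both sides maximise here, where a real version would alternate MAX and MIN):

```python
# Rough sketch of the "stubborn greedy" idea (my own naming, nothing from
# easyAI). For each top-level move, follow a single greedy line to a fixed
# depth, so the work is roughly branching * depth instead of exponential.

def stubborn_greedy(state, moves_fn, apply_fn, score_fn, depth=20):
    best_move, best_score = None, float("-inf")
    for first_move in moves_fn(state):
        s = apply_fn(state, first_move)
        # walk one greedy line: each step takes the locally best reply
        # (simplified: both sides maximise; a real version alternates)
        for _ in range(depth - 1):
            replies = moves_fn(s)
            if not replies:
                break
            s = apply_fn(s, max(replies, key=lambda m: score_fn(apply_fn(s, m))))
        final = score_fn(s)
        if final > best_score:
            best_move, best_score = first_move, final
    return best_move

# toy game: state is a number, moves add 1..3, score is the number itself
print(stubborn_greedy(0, lambda s: [1, 2, 3],
                      lambda s, m: s + m, lambda s: s, depth=3))  # -> 3
```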
I have a Python app I will be writing soon that requires the expectiminimax algorithm, because an element of the game uses a dice roll. I could either write a quick version or try to integrate it into the easyAI framework as another AI algorithm choice.
Obviously, it will not be a perfect integration, as it will require a different result for getting possible moves. Perhaps a method called 'expected_possible_moves' which returns a list of tuples that include probabilities.
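A minimal sketch of the expectiminimax recursion over such (probability, outcome) pairs (illustrative only, not easyAI code):

```python
# Minimal expectiminimax sketch (illustrative only; the hypothetical
# expected_possible_moves would supply the (probability, child) pairs
# for chance events such as a dice roll).

def expectiminimax(node, maximizing=True):
    kind, payload = node
    if kind == "leaf":
        return payload
    if kind == "chance":
        # payload: list of (probability, child) pairs summing to 1
        return sum(p * expectiminimax(child, maximizing)
                   for p, child in payload)
    # "move" node: the player to move picks the best child
    values = [expectiminimax(child, not maximizing) for child in payload]
    return max(values) if maximizing else min(values)

# a choice between two moves, each followed by a fair coin flip
tree = ("move", [
    ("chance", [(0.5, ("leaf", 10)), (0.5, ("leaf", 0))]),  # EV 5.0
    ("chance", [(0.5, ("leaf", 4)),  (0.5, ("leaf", 4))]),  # EV 4.0
])
print(expectiminimax(tree))  # -> 5.0
```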
Thoughts on the matter?
Following the documentation at:
http://zulko.github.io/easyAI/get_started.html#human-and-ai-players
Adding tt=table to Negamax, where table is an instance of DictTT, results in:
AttributeError: DictTT instance has no attribute 'lookup'
Looking at the Nim.py example, it appears that perhaps the TT class is supposed to be used instead. Using that, unfortunately, appears to create an infinite loop where a few board states are toggled back and forth when the .play() method is invoked with two AI players.
The id_solve function used by Nim.py does not appear to have the looping problem, so that might be a starting point for diagnosis. The game I'm writing is, unfortunately, not really solvable. Not with an off-the-shelf computer anyway. :)
I'm happy to help fix this, but I'd like feedback first. I'd hate to solve a problem that already has a quick work-around. :)
If curious, I'm using a genetic algorithm with easyAI to optimize the relative value of tactics and short-term strategies. My YouTube video for this: https://www.youtube.com/watch?v=Y6P-_sTYQcM
from easyAI import TwoPlayerGame, Human_Player, AI_Player, Negamax
from easyAI.games import ConnectFour
ai = Negamax(8)
game = ConnectFour([AI_Player(ai), Human_Player()])
game.play()
0 1 2 3 4 5 6
-------------
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
Move #1: player 1 plays 0 :
0 1 2 3 4 5 6
-------------
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
O . . . . . .
Player 2 what do you play ? 3
Move #2: player 2 plays 3 :
0 1 2 3 4 5 6
-------------
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
O . . X . . .
Move #3: player 1 plays 0 :
0 1 2 3 4 5 6
-------------
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
O . . . . . .
O . . X . . .
Player 2 what do you play ? 2
Move #4: player 2 plays 2 :
0 1 2 3 4 5 6
-------------
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
O . . . . . .
O . X X . . .
Move #5: player 1 plays 0 :
0 1 2 3 4 5 6
-------------
. . . . . . .
. . . . . . .
. . . . . . .
O . . . . . .
O . . . . . .
O . X X . . .
Player 2 what do you play ? 0
Move #6: player 2 plays 0 :
0 1 2 3 4 5 6
-------------
. . . . . . .
. . . . . . .
X . . . . . .
O . . . . . .
O . . . . . .
O . X X . . .
Move #7: player 1 plays 1 :
0 1 2 3 4 5 6
-------------
. . . . . . .
. . . . . . .
X . . . . . .
O . . . . . .
O . . . . . .
O O X X . . .
Player 2 what do you play ? 2
Move #8: player 2 plays 2 :
0 1 2 3 4 5 6
-------------
. . . . . . .
. . . . . . .
X . . . . . .
O . . . . . .
O . X . . . .
O O X X . . .
Move #9: player 1 plays 0 :
0 1 2 3 4 5 6
-------------
. . . . . . .
O . . . . . .
X . . . . . .
O . . . . . .
O . X . . . .
O O X X . . .
Player 2 what do you play ? 3
Move #10: player 2 plays 3 :
0 1 2 3 4 5 6
-------------
. . . . . . .
O . . . . . .
X . . . . . .
O . . . . . .
O . X X . . .
O O X X . . .
Move #11: player 1 plays 2 :
0 1 2 3 4 5 6
-------------
. . . . . . .
O . . . . . .
X . . . . . .
O . O . . . .
O . X X . . .
O O X X . . .
Player 2 what do you play ? 1
Move #12: player 2 plays 1 :
0 1 2 3 4 5 6
-------------
. . . . . . .
O . . . . . .
X . . . . . .
O . O . . . .
O X X X . . .
O O X X . . .
Move #13: player 1 plays 3 :
0 1 2 3 4 5 6
-------------
. . . . . . .
O . . . . . .
X . . . . . .
O . O O . . .
O X X X . . .
O O X X . . .
Player 2 what do you play ? 1
Move #14: player 2 plays 1 :
0 1 2 3 4 5 6
-------------
. . . . . . .
O . . . . . .
X . . . . . .
O X O O . . .
O X X X . . .
O O X X . . .
>>>
The AI is set up to look 8 moves ahead, yet it is unable to spot the opponent's one-move win and block it.
Is it possible to use a GPU to increase the AI's speed? Thank you.
@Zulko (and others)
Currently, the means of using the library is strictly through the play method of TwoPlayersGame. Using that method, one gets an interactive terminal allowing a human/AI or human/human combination to work. For diagnostic purposes, that is fantastic.
However, to integrate easyAI into another framework, such as Python Kivy, controlling and handling of turns must be external to the TwoPlayersGame class. Fortunately, examining .play gives one enough information to mimic .play in another app.
But I'd like to suggest a more formal means of handling this. I've forked the repo and modified the class to have two additional methods: get_move and play_move. Using those two methods, I'd be happy to write additional documentation on how to use them in lieu of the .play method. I'd include examples. Perhaps one example of, say, "Tic-Tac-Toe in Flask" and one of "Nim in Kivy".
Interested?
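To make the proposal concrete, this is roughly the shape of the external loop I have in mind (the game class below is a stand-in, not easyAI's TwoPlayersGame):

```python
# Sketch of how an external framework (Kivy, Flask, ...) could drive one
# turn at a time with the two proposed methods. TinyGame is a stand-in,
# not easyAI's TwoPlayersGame; the method names mirror the proposal.

class TinyGame:
    def __init__(self, players):
        self.players = players
        self.current = 0
        self.history = []

    @property
    def player(self):
        return self.players[self.current]

    def get_move(self):
        # delegate to the current player (AI search or UI input)
        return self.player.ask_move(self)

    def play_move(self, move):
        # apply exactly one move, then hand the turn over
        self.history.append(move)
        self.current = 1 - self.current


class ScriptedPlayer:
    def __init__(self, moves):
        self.moves = iter(moves)

    def ask_move(self, game):
        return next(self.moves)


# external event loop: one get_move/play_move pair per UI event
game = TinyGame([ScriptedPlayer("ab"), ScriptedPlayer("xy")])
for _ in range(4):
    game.play_move(game.get_move())
print(game.history)  # -> ['a', 'x', 'b', 'y']
```

The point is that the framework, not .play(), decides when each turn happens.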
Can easyAI deal with games where the next player stays the same after the move? Even for chess, which is very much turn-based, it might make some sense to implement a promotion as two different moves by the same player. For some games, this seems almost unavoidable: merging the moves by the same player into meta-moves can be pretty awkward.
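One common workaround is to merge the same-player steps into a single compound move so the framework still sees strict alternation; a sketch of that merging (hypothetical helper, not part of easyAI):

```python
# Bundle two same-player steps into one compound move so a strictly
# turn-based framework still sees one move per turn. Sketch only.

def expand_compound(possible_first, possible_second_fn):
    """Cross each first step with its possible follow-ups."""
    return [(a, b) for a in possible_first
                   for b in possible_second_fn(a)]

# e.g. a pawn reaching the last rank, then choosing a promotion piece
compound = expand_compound(["e7e8"], lambda a: ["Q", "R", "B", "N"])
print(compound)  # -> [('e7e8', 'Q'), ('e7e8', 'R'), ('e7e8', 'B'), ('e7e8', 'N')]
```

make_move would then apply both halves of the tuple in order, which is exactly the awkwardness described above when the follow-up move set is large.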
The game of bones solver script is:
tt = TT()
GameOfBones.ttentry = lambda game : game.pile # key for the table
r,d,m = id_solve(GameOfBones, range(2,20), win_score=100, tt=tt)
What is the equivalent for Python? Do I just change range to range(2, 42) and game.pile to game.board?
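For what it's worth, a plain name swap may not be enough: the transposition-table key has to be hashable, while a Connect Four board is a 2-D array. A sketch of the kind of ttentry that might be needed (a plain nested list stands in for the board here):

```python
# A transposition-table key must be hashable, so a 2-D board would be
# flattened into a tuple rather than just renaming game.pile to
# game.board. Sketch with a nested list standing in for the board.

def connect_four_ttentry(board):
    # flatten the 2-D board into a hashable tuple usable as a dict key
    return tuple(cell for row in board for cell in row)

board = [[0, 1, 0], [2, 0, 1]]
tt = {connect_four_ttentry(board): ("depth", "score", "best move")}
print(connect_four_ttentry(board))  # -> (0, 1, 0, 2, 0, 1)
```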
Do you have an estimate for how long it will take?
I've noticed some delay in responses in managing this project. I suspect you are quite busy, so I understand. But if you would like, I'd be happy to co-manage (collaborate on) the repo, respond to issues, and review/approve PRs.
I realize that I'm being very presumptuous to offer this. Just let me know if I can be helpful.
Thanks, John
Since this looks like such a cool project, but support for Python 2 (including security fixes) is going away at the end of the year, it should be ported to Python 3.
Currently, at least the examples fail, e.g. with
File "example.py", line 16
def show(self): print "%d bones left in the pile"%self.pile
^
SyntaxError: invalid syntax
Besides that, Python 3 is a better language, and all the major libraries support it now.
Check out the background and a nice timeline diagram at http://www.python3statement.org/
Great advice, planning process, and tips at Porting Python 2 Code to Python 3
To make your project single-source Python 2/3 compatible, the basic steps are:
pip install coverage
pip install future
pip install pylint
pip install caniusepython3
pip install tox
I would like the AI to fight a little harder even if it foresees a loss, assuming correct play from the opponent. Is there a way to access the current depth of the search in negamax? Maybe then I could adjust the scoring so that, for example, a loss in 7 moves is less bad than a loss in 3 moves.
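One generic way to get that "fight harder" effect, independent of easyAI's internals (the names below are made up, not the library's API), is to fold the distance-to-loss into the score:

```python
# Make a loss further in the future score higher than an imminent one,
# so the AI keeps resisting. Generic sketch, not easyAI's API;
# `plies_played` stands for however the search tracks its depth.

LOSS = -100

def depth_adjusted_score(raw_score, plies_played):
    if raw_score <= LOSS:
        # a loss further in the future is "less bad" by one point per ply
        return LOSS + plies_played
    return raw_score

print(depth_adjusted_score(-100, 3))  # -> -97
print(depth_adjusted_score(-100, 7))  # -> -93
print(depth_adjusted_score(10, 5))    # -> 10
```

With this adjustment a loss in 7 plies (-93) is preferred over a loss in 3 plies (-97), so the AI picks the longest resistance.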
Would it be possible to include a branch or a zip of the code from How To Make The AI Faster?
Hello, does easyAI support quiescence search?
I would like some help adjusting the possible_moves(self) and make_move(self, move) sections to work with chess. It seems that the Negamax algorithm calls them as it explores the game tree.
There has to be a way that I can keep possible_moves(self) and make_move(self, move) in sync with the way Negamax uses them in steps 1 through 4 above without the not-in-list error occurring, because at some moment Negamax is checking a different branch of the tree and has a different self.board that it is using. I imagine the solution can be easily spotted by someone more familiar with Negamax's nuances than I. I am grateful for any assistance. Here is the code and its printout.
from easyAI import TwoPlayersGame, Human_Player, AI_Player, Negamax

# PGN, Begin, EvalCaptures, Continuations and ForReply come from my own
# chess code and are not shown here.

class Chess(TwoPlayersGame):
    def __init__(self, players):
        global PieceMoved
        self.players = players
        self.board = PGN("8/8/2pk4/8/8/8/3PK3/8 w - - 0 1")
        self.nplayer = 2
        PieceMoved = []
    def possible_moves(self):
        global Continuations
        global ForReply
        global ListMoves
        Begin(self.board)
        OrigContin = Continuations[:]
        OrigForReply = ForReply[:]
        ListMoves = []
        # internal square codes 100019..100082 map onto squares a8..h1
        FT1 = list(range(100019, 100083))
        FT2 = [f + r for f in "abcdefgh" for r in "87654321"]
        for forTwoLines in OrigForReply:
            FromSquare = FT2[FT1.index(forTwoLines[0])]
            ToSquare = FT2[FT1.index(forTwoLines[1])]
            ListMoves.append(FromSquare + ToSquare)
        return ListMoves
    def make_move(self, move):
        global Continuations
        global ListMoves
        print(ListMoves)
        print(move)
        try:
            self.board = Continuations[ListMoves.index(move)]
        except ValueError:
            print(move, " is not in the move list")
    def win(self):
        Eval = EvalCaptures(self.board)
        # 100 means white has captured black's pawn
        # -100 means black has captured white's pawn
        if (Eval == 100) or (Eval == -100):
            return 1
        else:
            return 0
    def is_over(self):
        return self.win()
    def show(self):
        pass
    def scoring(self):
        if self.win():  # was game.win(), which reached outside the class
            return 100
        else:
            return 0

ai = Negamax(2)  # the AI will think 2 moves in advance
game = Chess([Human_Player(), AI_Player(ai)])
history = game.play()
['d2d3', 'd2d4', 'e2d3', 'e2e3', 'e2f3', 'e2f2', 'e2f1', 'e2e1', 'e2d1']
d2d3
['c6c5', 'd6c7', 'd6d7', 'd6e7', 'd6e6', 'd6e5', 'd6d5', 'd6c5']
c6c5
['c6c5', 'd6c7', 'd6d7', 'd6e7', 'd6e6', 'd6e5', 'd6d5', 'd6c5']
d6c7
['c6c5', 'd6c7', 'd6d7', 'd6e7', 'd6e6', 'd6e5', 'd6d5', 'd6c5']
d6d7
['c6c5', 'd6c7', 'd6d7', 'd6e7', 'd6e6', 'd6e5', 'd6d5', 'd6c5']
d6e7
['c6c5', 'd6c7', 'd6d7', 'd6e7', 'd6e6', 'd6e5', 'd6d5', 'd6c5']
d6e6
['c6c5', 'd6c7', 'd6d7', 'd6e7', 'd6e6', 'd6e5', 'd6d5', 'd6c5']
d6e5
['c6c5', 'd6c7', 'd6d7', 'd6e7', 'd6e6', 'd6e5', 'd6d5', 'd6c5']
d6d5
['c6c5', 'd6c7', 'd6d7', 'd6e7', 'd6e6', 'd6e5', 'd6d5', 'd6c5']
d6c5
['c6c5', 'd6c7', 'd6d7', 'd6e7', 'd6e6', 'd6e5', 'd6d5', 'd6c5']
d2d4
d2d4 is not in the move list
['d2d3', 'd2d4', 'e2d3', 'e2e3', 'e2f3', 'e2f2', 'e2f1', 'e2e1', 'e2d1']
d2d3
['d2d3', 'd2d4', 'e2d3', 'e2e3', 'e2f3', 'e2f2', 'e2f1', 'e2e1', 'e2d1']
e2d3
['c6c5', 'd6c7', 'd6d7', 'd6e7', 'd6e6', 'd6e5', 'd6d5', 'd6c5']
c6c5
['c6c5', 'd6c7', 'd6d7', 'd6e7', 'd6e6', 'd6e5', 'd6d5', 'd6c5']
e2e3
e2e3 is not in the move list
['d2d3', 'd2d4', 'e2d3', 'e2e3', 'e2f3', 'e2f2', 'e2f1', 'e2e1', 'e2d1']
d2d3
['d2d3', 'd2d4', 'e2d3', 'e2e3', 'e2f3', 'e2f2', 'e2f1', 'e2e1', 'e2d1']
e2f3
['c6c5', 'd6c7', 'd6d7', 'd6e7', 'd6e6', 'd6e5', 'd6d5', 'd6c5']
c6c5
['c6c5', 'd6c7', 'd6d7', 'd6e7', 'd6e6', 'd6e5', 'd6d5', 'd6c5']
e2f2
e2f2 is not in the move list
['d2d3', 'd2d4', 'e2d3', 'e2e3', 'e2f3', 'e2f2', 'e2f1', 'e2e1', 'e2d1']
d2d3
['d2d3', 'd2d4', 'e2d3', 'e2e3', 'e2f3', 'e2f2', 'e2f1', 'e2e1', 'e2d1']
e2f1
['c6c5', 'd6c7', 'd6d7', 'd6e7', 'd6e6', 'd6e5', 'd6d5', 'd6c5']
c6c5
['c6c5', 'd6c7', 'd6d7', 'd6e7', 'd6e6', 'd6e5', 'd6d5', 'd6c5']
e2e1
e2e1 is not in the move list
['d2d3', 'd2d4', 'e2d3', 'e2e3', 'e2f3', 'e2f2', 'e2f1', 'e2e1', 'e2d1']
d2d3
['d2d3', 'd2d4', 'e2d3', 'e2e3', 'e2f3', 'e2f2', 'e2f1', 'e2e1', 'e2d1']
e2d1
['c6c5', 'd6c7', 'd6d7', 'd6e7', 'd6e6', 'd6e5', 'd6d5', 'd6c5']
c6c5
['c6c5', 'd6c7', 'd6d7', 'd6e7', 'd6e6', 'd6e5', 'd6d5', 'd6c5']
d2d3
d2d3 is not in the move list
Move #1: player 2 plays d2d3 :
Player 1 what do you play ?
Would you be comfortable with me adding optional TensorFlow support to the library?
The current algorithms are deterministic in nature, so using a neural reinforcement learning algorithm is a bit out of the current scope. But easyAI's simple and easy TwoPlayerGame framework could make using TensorFlow far easier for programmers new to AI.
I'd envision writing a tf_learn function that repeatedly has the AI play itself, randomly choosing moves at first and then learning from each iteration. The result would be a checkpoint file that contains the decisions that would be used during a play of TwoPlayerGame with the TensorFlowModel AI. If an entry is found in the (optional) transposition table, it is used first; otherwise the model checkpoint file is used to make a decision.
Thoughts?
The documentation is not currently clear on how to do "scoring" from the perspective of each player for games with points. For example, let's say each player goes around a game picking up points (either scoring 1 for player A or -1 for player B, where player A wants to maximize and player B wants to minimize). Should scoring return the score from the perspective of game.nplayer? That is, if game.nplayer == 2, should I return -totalscore instead of totalscore?
I think that scoring is always from the perspective of game.nplayer, but I just wanted to confirm before updating the documentation.
(Great job on this library...very useful!)
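If the convention is confirmed, a minimal illustration of the sign flip would be (the class below is a stand-in that mimics easyAI's naming, not the library itself):

```python
# Sketch of the convention as I understand it: scoring() returns the
# value from the viewpoint of the player about to move (game.nplayer).
# PointsGame is a stand-in that mimics easyAI naming, not the library.

class PointsGame:
    def __init__(self):
        self.nplayer = 1     # whose turn it is (1 or 2)
        self.totalscore = 0  # positive favours player 1

    def scoring(self):
        # flip the sign so the current player always maximises
        return self.totalscore if self.nplayer == 1 else -self.totalscore


g = PointsGame()
g.totalscore = 5
print(g.scoring())  # player 1 to move -> 5
g.nplayer = 2
print(g.scoring())  # player 2 to move -> -5
```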
I wrote a less-recursive version yesterday (still recursive, but using less stack space) and got a 400% performance improvement. Inspired, I've started writing a fully non-recursive, stack-free version. Essentially, it uses a pre-allocated list of state dictionaries in an iterative manner.
The basic algorithm is now working; I just need to add the alpha/beta pruning next.
I'm thinking this might be of value to the repo and may put it in a PR. Which of the following do you all suggest?
1. Add a parameter to the Negamax class, such as non_recurse=True, to invoke the non-recursive function. Or,
2. Add a separate NegamaxNR class that uses the non-recursive version.
BTW, there are two caveats for the non-recursive version:
1. In addition to needing ttentry(), the game must also have a ttrecover(). The ttrecover() takes the immutable key (created by ttentry()) and recovers the game board from it. If the game is too complex for such a recovery to work, then the recursive version will need to be used.
2. The game must be functional in the sense that it must always be repeatable. If the game involves any use of random or an element of input from the outside world, then it will probably break. Use the recursive version.
Also, unmake_move is ignored. I'm not really sure how to support that in the non-recursive version.
The first version will also ignore the TT table; but it could certainly be added in. For some games, that would speed things up even further.
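To make the ttentry()/ttrecover() contract concrete (the names come from my proposal; a toy pile game stands in for a real one):

```python
# Sketch of the ttentry()/ttrecover() pair the non-recursive version
# would need (hypothetical names from the proposal, not current easyAI):
# ttentry() flattens the state into an immutable key; ttrecover()
# rebuilds the state from that key, so the iterative search can jump
# between positions without a call stack to unwind.

class PileGame:
    def __init__(self, piles):
        self.piles = list(piles)

    def ttentry(self):
        # immutable, hashable key for the transposition table
        return tuple(self.piles)

    def ttrecover(self, entry):
        # inverse of ttentry: restore the full game state
        self.piles = list(entry)


g = PileGame([3, 4, 5])
key = g.ttentry()
g.piles = [0, 0, 0]  # the search wandered elsewhere
g.ttrecover(key)     # jump straight back to the stored position
print(g.piles)  # -> [3, 4, 5]
```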
I'd do it myself, but apparently GitHub restricts the Add topics function (under the repo description) to the owner.
I recommend adding:
minimax
negamax
ai
game-development
two-player
non-recursive
sss-algorithm
dual-algorithm
This should make it easier for folks wanting to find something like easyAI on GitHub.
I suspect it's impossible to write an unmake_move function for Awele, isn't it?
Currently there's no native means for saving a DictTT to a file. Might such a one be made to exist?
I'm also curious whether there's a way to account for symmetry in game configurations, to reduce the number of required ttentries in your DictTT.
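In the meantime, pickle offers a quick workaround, assuming the table's contents are picklable (sketch of the idea, not a native API):

```python
# Quick workaround for persisting a transposition table until something
# native exists: pickle it. A plain dict stands in for DictTT here; a
# real DictTT would work the same way if its entries are picklable.

import pickle

def save_tt(tt, filename):
    with open(filename, "wb") as f:
        pickle.dump(tt, f)

def load_tt(filename):
    with open(filename, "rb") as f:
        return pickle.load(f)

table = {("board", 1): (12, "d2d3")}
save_tt(table, "tt.pickle")
print(load_tt("tt.pickle"))  # -> {('board', 1): (12, 'd2d3')}
```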
The example states:
""" The board positions are numbered as follows:
7 8 9
4 5 6
1 2 3
"""
However, entering 1 on the keyboard corresponds to the top-left spot, not the bottom-left, etc.
In the example code you have:
line 12: possible_moves shouldn't need a move argument
line 16: the string you want to print is self.pile, not d
I don't see win() in the reference manual or elsewhere besides in the example game.
When is it called? What should it return? Based on what data?
E.g. I wouldn't think win() would be called before the end of the game, but it seems to be.
How can we, e.g., just track a score for each player and show a win (for the current player? defined by what?) if the current player has a higher score?
Update: Hmm - I see now that win() isn't required. So my questions really apply to scoring().
from easyAI import TT
table = TT.fromfile('testing.dat')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: fromfile() missing 1 required positional argument: 'filename'
exit()
@Zulko and others,
I'm converting one of my Python/Kivy games to Nim/Godot. As part of that project, I'll be using the code here as a template for a Nim version of both the two-player-game class and the negamax algorithm (both recursive and non-recursive).
details of new project: https://www.youtube.com/watch?v=V9MlvPCZ-jc
original use:
https://github.com/PurpleSquirrelGames/MancalaGameApp
https://www.youtube.com/playlist?list=PL6RpFCvmb5SEW1VVM9j3e0R7G5akKGu19
(see videos 4, 5, 12 and 17)
Since it is a completely different language, making it part of "easyAI" could cause confusion IMO. But I'll absolutely give full credit to the easyAI library.
I'll probably create two packages (in Nim, they are managed with nimble, which is similar to pip):
two_player_game
negamax (with a dependency on two_player_game)
Any and all feedback is appreciated!
[x] Bug (Typo)
tranposition, however expect to see transposition.
secuirty, however expect to see security.
resursion, however expect to see recursion.
optionnally, however expect to see optionally.
occured, however expect to see occurred.
explicity, however expect to see explicitly.
dowload, however expect to see download.
dictionnary, however expect to see dictionary.
caracters, however expect to see characters.
acccording, however expect to see according.
Semi-automated issue generated by
https://github.com/timgates42/meticulous/blob/master/docs/NOTE.md
To avoid wasting CI processing resources a branch with the fix has been
prepared but a pull request has not yet been created. A pull request fixing
the issue can be prepared from the link below, feel free to create it or
request @timgates42 create the PR. Alternatively if the fix is undesired please
close the issue with a small comment about the reasoning.
https://github.com/timgates42/easyAI/pull/new/bugfix_typos
Thanks.