Pattern Based Chess Engine: Part 2

In my last story,

https://medium.com/@anuvesh.kumar/pattern-based-chess-engine-part-1-8a1265206d9e

I created a network that takes a fixed-length chess move sequence as input and outputs the probability that the player using that sequence will win the game.

This time, I’ll show you how I used it in the chess engine.

Here’s a simple example. The following are 5 random chess game sequences of length 50, picked from the training set. Each integer is the tokenizer’s id for a move, and games shorter than 50 moves are left-padded with zeros:

arr = array([[40, 70, 59, 80, 111, 2, 143, 87, 225, 16, 61, 101, 6, 121, 1, 18, 274, 80, 225, 68, 27, 310, 1, 37, 194, 138, 403, 689, 353, 7, 632, 16, 221, 471, 25, 271, 1, 77, 129, 70, 43, 68, 508, 841, 47, 845, 1063, 161, 410, 294] ,

[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 10, 2, 40, 4, 75, 89, 1, 3, 23, 36, 17, 37, 59, 70, 6, 16, 9, 45, 43, 246, 31, 183, 239, 56, 34, 1551, 62, 20, 29, 126, 377, 219],

[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 11, 8, 160, 77, 1, 147, 12, 2, 10, 7, 23, 162, 75, 449, 13, 3, 9, 46, 6, 80, 107, 348, 159, 113, 62, 361, 200, 311, 91, 44, 251, 157, 17, 45],

[0, 0, 0, 0, 0, 4, 1, 8, 17, 3, 11, 109, 166, 5, 388, 152, 10, 731, 43, 1268, 755, 2, 59, 7, 6, 80, 9, 104, 239, 113, 387, 466, 218, 227, 297, 54, 528, 214, 306, 26, 30, 271, 230, 67, 218, 81, 57, 255, 52, 321],

[82, 59, 252, 114, 123, 99, 1454, 300, 2587, 223, 794, 9, 327, 976, 1412, 579, 790, 1523, 814, 1285, 420, 300, 169, 146, 123, 57, 350, 210, 315, 124, 794, 430, 41, 146, 327, 2730, 2, 210, 350, 29, 150, 2730, 163, 430, 113, 86, 3092, 92, 82, 85]])

I load the trained weights into the model:

import pickle
import numpy as np
from keras.models import Sequential
from keras.layers import Embedding, Dense, CuDNNLSTM
from keras_preprocessing.text import Tokenizer

maxlen = 50  # fixed sequence length
weight_path = 'discriminate_weights_good1mil50_64_64_6x64.hdf5'

# Load the tokenizer fitted during training
with open('tokenized.pkl', 'rb') as f:
    tk = pickle.load(f)

vocabulary_size = len(tk.word_counts.keys()) + 1
max_words = maxlen
embedding_size = 64

model = Sequential()
model.add(Embedding(vocabulary_size, embedding_size, input_length=max_words))
model.add(CuDNNLSTM(64, return_sequences=True))
model.add(CuDNNLSTM(64, return_sequences=True))
model.add(CuDNNLSTM(64, return_sequences=True))
model.add(CuDNNLSTM(64, return_sequences=True))
model.add(CuDNNLSTM(64, return_sequences=True))
model.add(CuDNNLSTM(64, return_sequences=False))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.load_weights(weight_path)

Next, I simply predict the outputs for the above sequences using the model:

>>> model.predict(arr)
array([[0.7643637 ],
       [0.6580771 ],
       [0.5012796 ],
       [0.79403895],
       [0.8383963 ]], dtype=float32)

Can you figure out a way of using this network in a chess engine? If you can, skip the rest and get on with your tinkering. If you’re still struggling a bit, here’s a hint:

def find_best_sequence(move_list):  # when AI plays as white
    test = np.array(move_list)
    score = model.predict(test)
    best_index = np.argmax(score)
    return best_index, move_list[best_index]

def find_worst_sequence(move_list):  # when AI plays as black
    test = np.array(move_list)
    score = model.predict(test)
    best_index = np.argmin(score)
    return best_index, move_list[best_index]

The code above takes in a list of chess sequences and returns the index and contents of the best one. The picture may be becoming clearer now.
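To see the selection logic in isolation, here is a self-contained sketch; `toy_score`, the `score_fn` parameter, and the `select_*` names are hypothetical stand-ins for `model.predict` and the functions above, not the original code:

```python
def toy_score(seq):
    # Hypothetical stand-in for model.predict on a single sequence;
    # here a larger token sum simply means "more likely to win"
    return sum(seq)

def select_best(move_list, score_fn):
    # Mirrors find_best_sequence: keep the sequence with the highest
    # predicted win probability (used when the AI plays white)
    scores = [score_fn(seq) for seq in move_list]
    best_index = max(range(len(scores)), key=lambda i: scores[i])
    return best_index, move_list[best_index]

def select_worst(move_list, score_fn):
    # Mirrors find_worst_sequence: keep the lowest-scoring sequence
    # (used when the AI plays black and wants white to lose)
    scores = [score_fn(seq) for seq in move_list]
    worst_index = min(range(len(scores)), key=lambda i: scores[i])
    return worst_index, move_list[worst_index]

candidates = [[1, 2, 3], [10, 20, 30], [5, 5, 5]]
print(select_best(candidates, toy_score))   # (1, [10, 20, 30])
print(select_worst(candidates, toy_score))  # (0, [1, 2, 3])
```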

The procedure is simple. Given a board state, find all the legal sequences up to a depth d. Pass this list of possible chess sequences into the function, and it will output the best one. In contrast to other chess engines, mine doesn’t care about the board state, only the sequence of moves. The sequence provides the context of the board state under the following hypothesis:

Let there be two chess subsequences s1 and s2 from two different games G1 and G2. Let p1 and p2 be the chess sequences prior to s1 and s2 respectively. This means:

G1 = p1 + s1

G2 = p2 + s2

Then my hypothesis is: if s1 is similar to s2, then p1 is similar to p2.

In other words, a chess subsequence depends on the sequence preceding it and is therefore unique to a game.

It is intuitive if you mull over it a while.

This is important because at any point in the game I consider only the past 50 moves and clip the rest.
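The clipping is just a fixed-size sliding window over the token stream. A minimal sketch (the helper name `clip_window` is mine):

```python
maxlen = 50  # same fixed window the network expects

def clip_window(past_moves, new_token):
    # Drop the oldest token and append the newest, so the window
    # always holds exactly the last maxlen tokens (zeros = padding)
    return past_moves[1:] + [new_token]

window = [0] * maxlen  # start of game: all padding
window = clip_window(window, 17)
window = clip_window(window, 42)
print(len(window), window[-2:])  # 50 [17, 42]
```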

In my program, I search only to a depth of 1, but you are welcome to extend the game tree deeper. That way you can take both the past context and the future possibilities into account when deciding on the best move. This is possible because the network only cares about sequences of a fixed length (for a window of 50 you could use the past 20 moves and 30 future moves, though you’ll get many, many sequences).
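If you do extend the search, enumerating candidate sequences to depth d is a small recursive expansion. A sketch, with a hypothetical `legal_fn` callback standing in for python-chess:

```python
def expand(prefix, legal_fn, depth):
    # Enumerate every continuation of `prefix` up to `depth` plies.
    # `legal_fn(seq)` is a hypothetical callback returning the legal
    # next tokens after `seq`; with python-chess you would push the
    # moves on a Board and read board.legal_moves instead.
    if depth == 0:
        return [prefix]
    sequences = []
    for token in legal_fn(prefix):
        sequences.extend(expand(prefix + [token], legal_fn, depth - 1))
    return sequences

# Toy rules: the same three moves are always legal
seqs = expand([], lambda seq: [1, 2, 3], 2)
print(len(seqs))  # 9 candidate sequences at depth 2
```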

I pitted my agent against a random agent (one that doesn’t care about winning and picks any move at random). Things didn’t go as I expected.

The following code initializes the board and plays a game against the random agent. If the AI is playing as white, it chooses the best sequences (pushing the prediction toward 1); if it is playing as black, it chooses the worst sequences (pushing it toward 0).
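Move tokens carry a side prefix so the network can tell white’s moves from black’s. A minimal sketch of that encoding, with a toy vocabulary in place of the fitted tokenizer’s `word_index` (the helper name `encode_move` and the example ids are mine):

```python
def encode_move(san, is_white, word_index):
    # White moves get an '@' prefix and black moves a '~' prefix,
    # matching the tokens the Tokenizer saw during training;
    # out-of-vocabulary moves fall back to 0 (the padding id)
    token = ('@' if is_white else '~') + san.lower()
    return word_index.get(token, 0)

# Toy vocabulary standing in for the fitted Tokenizer's word_index
vocab = {'@e4': 40, '~e5': 70, '@nf3': 59}
print(encode_move('e4', True, vocab))   # 40
print(encode_move('e5', False, vocab))  # 70
print(encode_move('a3', True, vocab))   # 0 (never seen in training)
```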

import re
import random
import chess

AI, Random = 0, 0  # win counters
board = chess.Board()
game_sequence = ''
past_moves = [0] * maxlen  # maxlen, no. of moves to consider from past
AIiswb = random.choice(range(2))  # 1 -> the AI plays white
chanceAI = AIiswb  # 1 -> it is the AI's turn
while True:
    legal_moves = re.sub(',', '', str(board.legal_moves).split('(')[1][:-2]).split(' ')
    past_moves = past_moves[1:]
    if chanceAI == 1:
        possible_seq = []
        if AIiswb == 1:  # white
            for i in range(len(legal_moves)):
                a = '@' + legal_moves[i].lower()
                if a in tk.word_index.keys():
                    possible_seq.append(past_moves + [tk.word_index[a]])
            best_move_index, past_moves = find_best_sequence(possible_seq)
        else:  # black
            for i in range(len(legal_moves)):
                a = '~' + legal_moves[i].lower()
                if a in tk.word_index.keys():
                    possible_seq.append(past_moves + [tk.word_index[a]])
            best_move_index, past_moves = find_worst_sequence(possible_seq)
        best_move = legal_moves[best_move_index]
    else:
        # Random agent
        best_move = random.choice(legal_moves)
        if AIiswb == 0:  # Random is white
            a = '@' + best_move.lower()
        else:  # Random is black
            a = '~' + best_move.lower()
        # A random agent plays moves an expert never would, so the
        # network will encounter tokens it has never seen before
        if a not in tk.word_index.keys():
            past_moves.append(0)
        else:
            past_moves.append(tk.word_index[a])
    game_sequence += best_move + ' '
    board.push_san(best_move)
    if board.is_game_over():
        if chanceAI == 1:  # the side that just moved ended the game
            AI += 1
        else:
            Random += 1
        break
    chanceAI = 1 - chanceAI
print('AI: ' + str(AI) + ' Random: ' + str(Random))

A few observations.

Winning

  1. When the more accurate LSTM network (> 74%) is used to evaluate the sequences against a random agent, it wins 50–55 games out of 100 (almost random).
  2. When the less accurate LSTM network (< 67%) is used, it wins anywhere from 68–79 games out of 100.

Losing

  1. When the AI is asked to lose against a random agent, it does so successfully, losing 100 out of 100 games, though only with the less accurate LSTM network.

This shows that the agent is capable of exploiting the learned patterns to either win or lose a game on demand. The difficulty level can also be modified by choosing the 2nd, 3rd, etc. best move.
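Picking the k-th best move is a one-line sort over the score array. A sketch (`kth_best_index` is my own helper name, and the scores are illustrative):

```python
def kth_best_index(scores, k):
    # k = 0 returns the strongest move; larger k deliberately picks
    # weaker moves, which is an easy knob for difficulty levels
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return order[k]

scores = [0.76, 0.65, 0.50, 0.79, 0.83]
print(kth_best_index(scores, 0))  # 4 (score 0.83, strongest)
print(kth_best_index(scores, 1))  # 3 (score 0.79, a bit weaker)
```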

You might be thinking one, or both, of these things:

  • That’s STRANGE. Why does a less accurate model perform better against a random agent?
  • Hah! Losing against a random player. This AI is crap.

The answer to both: here’s what I think is happening.

Let’s take a step back and look at what I did. I trained the network on games between expert players, who use precise, logical sequences with the aim of winning. The network learns from this data and finds the patterns that help it win in the context of playing against an expert.

When the same network plays against a random player, it has learned a bit too much from the experts. So much so that it always expects the opponent to be a great player making great moves, and plays with this expectation in its digital mind.

A less accurate network is more flexible: it has learned the patterns just well enough to guess how to win, but no more. As a result it has much better odds against agents that don’t care about winning and move at random.

To get even better results against a random agent, the process is pretty simple: log all the games and their outcomes and train the less accurate LSTM network on that data. (Future work.)

I will even go as far as to say that the chess engine, when using the more accurate network, would give expert players a good challenge.

I still haven’t realized the full potential of this idea in and outside of chess. One other application I can think of right now is tackling the credit assignment problem.

The credit assignment problem is stated as: given a sequence of actions and a reward at the end of the sequence, determine which specific actions or subsequences contributed to the final reward, and to what extent, and assign credit accordingly.

It has the same problem formulation as the one I tackled: let [a1, a2, a3, …, aN] be the sequence of actions and R be the reward at the end.

With enough data, an LSTM network can be trained to learn the sequences that lead to a reward and distinguish them from sequences that don’t.

It is not always possible to encode all the states of the environment. In that case the previous actions, rather than the environment, provide the necessary context for the next action.
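One way such a network could be used for credit assignment, sketched with a hypothetical toy scorer in place of a trained model: score each growing prefix of the action sequence and credit each action with the change in predicted reward it causes.

```python
def assign_credit(actions, score_fn):
    # Score each growing prefix and credit every action with the
    # change in predicted reward it caused (a simple difference scheme)
    credits = []
    prev = score_fn([])
    for t in range(len(actions)):
        cur = score_fn(actions[:t + 1])
        credits.append(cur - prev)
        prev = cur
    return credits

# Toy scorer: predicted reward counts the "good" (even) actions so far
toy = lambda seq: sum(1 for a in seq if a % 2 == 0)
print(assign_credit([2, 3, 4, 5], toy))  # [1, 0, 1, 0]
```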

All the code is available on my github:

https://github.com/anuveshkumar/PatternBasedChessEngine
