
Deep Learning Machine Teaches Itself Chess in 72 Hours, Plays at International Master Level

Absolutely fascinating!

Though it's a good question whether the strongest engines of today will ever be beaten by this new, more intuitive approach with the computing power available…

This new idea might, however, also be applied to the game of Go, where the best humans still hugely outplay computers, simply because there are far more possible moves to consider.
The description is not all that great, but I think he is doing standard alpha-beta search and is mainly training his network to evaluate positions (and possibly to help in the pruning stage). A very similar approach was used to achieve superhuman performance in backgammon back in the 90s (TD-Gammon, using TD(λ) learning), and I am *certain* I have read about a similar system (TD learning, alpha-beta search, and neural-network evaluation of the position) trained through self-play to play chess at a reasonably high level.
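For those unfamiliar with it, here is a minimal sketch of the TD(λ) idea for training an evaluator from self-play, in the spirit of TD-Gammon. This is not Giraffe's actual code; the linear evaluator and feature vectors are placeholders I made up for illustration:

```python
import numpy as np

def td_lambda_update(weights, positions, final_result, alpha=0.01, lam=0.7):
    """One game's worth of TD(lambda) updates for a linear evaluator
    V(s) = weights . phi(s).

    positions    -- feature vectors phi(s_0) ... phi(s_T) from one self-play game
    final_result -- game outcome, e.g. +1 (win), 0 (draw), -1 (loss)
    """
    trace = np.zeros_like(weights)           # eligibility trace
    for t in range(len(positions)):
        phi = positions[t]
        v_t = weights @ phi                  # current value estimate
        if t + 1 < len(positions):
            v_next = weights @ positions[t + 1]
        else:
            v_next = final_result            # terminal target is the game result
        delta = v_next - v_t                 # TD error
        trace = lam * trace + phi            # decay old gradients, add the new one
        weights += alpha * delta * trace     # nudge V(s_t) toward V(s_{t+1})
    return weights
```

The point is that no human labels the positions: each position's evaluation is pulled toward the evaluation of the position that followed it, and ultimately toward the game result.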

Giraffe is no challenge to the top chess engines!

Here is the original paper, "Giraffe: Using Deep Reinforcement Learning to Play Chess" by Matthew Lai: http://arxiv.org/pdf/1509.01549v2.pdf

According to Giraffe's author Matthew Lai, "Unlike previous attempts using machine learning only to perform parameter-tuning on hand-crafted evaluation functions, Giraffe's learning system also performs automatic feature extraction and pattern recognition."
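To illustrate the distinction Lai is drawing, here is a rough sketch (mine, not his code; the feature names are invented). In the classical setup, learning only tunes the weights of terms a programmer designed; in Giraffe's style of setup, the network receives lower-level inputs and learns its own features:

```python
import numpy as np

# (a) Classical approach: machine learning only tunes the weights w of
#     hand-crafted terms the programmer chose in advance.
def handcrafted_eval(pos, w):
    terms = np.array([pos["material"], pos["mobility"], pos["king_safety"]])
    return w @ terms                       # learning adjusts w, nothing else

# (b) Giraffe-style approach: the network sees lower-level inputs
#     (piece lists, attack maps, ...) and discovers features itself.
def network_eval(raw_features, W1, b1, w2):
    hidden = np.maximum(0.0, W1 @ raw_features + b1)   # learned features
    return w2 @ hidden                                  # learned combination
```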

However, in the MIT Technology Review article mentioned by yanez, http://www.technologyreview.com/view/541276/deep-learning-machine-teaches-itself-chess-in-72-hours-plays-at-international-master/ , about six paragraphs in, the author claims: "Straight out of the box, the new machine plays at the same level as the best conventional chess engine." This is pure nonsense, because the latest version, Giraffe 20150828 64-bit, is rated a mere 2367 on the CCRL: http://www.computerchess.org.uk/ccrl/4040/cgi/engine_details.cgi?print=Details&each_game=1&eng=Giraffe%2020150828%2064-bit

Compare this rating of 2367 for Giraffe to the top chess engine on the CCRL, Stockfish, rated 3314: http://www.computerchess.org.uk/ccrl/404/ . That is a difference of 947 rating points!
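For perspective, the standard Elo expected-score formula translates a 947-point gap into game terms (my arithmetic, not from either article):

```python
# Expected score of the lower-rated player under the standard Elo model.
def expected_score(r_low, r_high):
    return 1.0 / (1.0 + 10 ** ((r_high - r_low) / 400.0))

print(expected_score(2367, 3314))   # ~0.004, i.e. about 4 points per 1000 games
```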

The claim made by the author of the MIT article is like saying a good club player (rated 1887) is just as good as Magnus Carlsen (current FIDE rating 2834), which is ridiculous! I seriously doubt that the author of the MIT article is aware of the CCRL (Computer Chess Rating Lists) cited above.

Furthermore, the above rating list does not include Komodo 9.3, which recently defeated Stockfish in TCEC 8 (the Top Chess Engine Championship) by a score of 53.5 to 46.5: http://www.chessdom.com/komodo-is-triple-champion-wins-the-top-chess-engine-championship-2015/ . This suggests that the difference between Giraffe and the top-rated Komodo 9.3 is even greater than 947 rating points!
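Although, to be fair, that match score implies only a small rating edge for Komodo over Stockfish under the same Elo model (again, my own back-of-the-envelope calculation):

```python
import math

# Rating difference implied by a match score under the Elo model.
def implied_elo_diff(points, games):
    p = points / games
    return 400.0 * math.log10(p / (1.0 - p))

print(implied_elo_diff(53.5, 100))   # ~24 Elo: Komodo only slightly above Stockfish
```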

It seems obvious to me that Giraffe is no more of a challenge to the top computer chess engines than a good club player is to Magnus Carlsen.

The real question is whether Giraffe's learning system will be able to replace or improve components of the top chess engines.
AlphaGo is a holistic machine-learning approach, though.

Giraffe is just a standard chess engine that replaces certain components with machine-learned predictions, at a huge cost in search depth. Honestly, it's a step backwards.
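To make that tradeoff concrete, here is a generic negamax alpha-beta sketch with a pluggable evaluator (hypothetical position interface, not Giraffe's implementation). If the neural evaluation is, say, a hundred times slower per node than a hand-tuned one, the same time budget buys a much shallower search:

```python
def alphabeta(pos, depth, alpha, beta, evaluate):
    """Negamax alpha-beta; `evaluate` scores a leaf from the side to move.

    `pos` is assumed to provide is_terminal(), legal_moves() and play(),
    which are placeholders here, not a real library interface.
    """
    if depth == 0 or pos.is_terminal():
        return evaluate(pos)   # cheap hand-tuned lookup vs. slow network call
    for move in pos.legal_moves():
        score = -alphabeta(pos.play(move), depth - 1, -beta, -alpha, evaluate)
        if score >= beta:
            return beta        # fail-hard beta cutoff
        alpha = max(alpha, score)
    return alpha
```

Whether a smarter but slower evaluation beats a dumber but deeper search is exactly the bet Giraffe makes.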
