University of Sussex Students' Newspaper

Two new poker programs beat the pros

By James Bowyer

Oct 9, 2017

Twenty years on from the momentous event in Artificial Intelligence when a computer program, “Deep Blue”, beat chess grandmaster Garry Kasparov over an extended series of games, two new poker programs have beaten a selection of human players with statistical significance. This follows impressive advances in AI over the last two decades on other table games such as checkers and, in late December 2016, Go, by Google DeepMind's AI, AlphaGo.

Previous generations of game-playing programs, including the “Deep Blue” that beat the chess grandmaster, have typically beaten humans by selecting the best move from millions of possible chains of moves and predicting future moves from millions of probabilities. This extraordinary volume of statistics is too large for humans to work through “manually”, but it is required to beat the “intuition” we use naturally when deciding which moves to make in table games.
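The look-ahead idea can be sketched in a few lines of Python. This is a toy minimax search over a made-up three-move game tree, not Deep Blue's actual engine; every name and value here is illustrative.

```python
# Toy minimax: "pick the best move by looking ahead through chains of
# future moves", assuming the opponent always picks the worst one for us.
# Illustrative only -- not Deep Blue's real search or evaluation.

def minimax(state, depth, maximizing, moves, score):
    # moves(state) lists the legal follow-up positions;
    # score(state) rates a position from the first player's view.
    options = moves(state)
    if depth == 0 or not options:
        return score(state)
    results = [minimax(m, depth - 1, not maximizing, moves, score)
               for m in options]
    return max(results) if maximizing else min(results)

# A tiny hand-made game tree with scores on the leaf positions.
tree = {"start": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
values = {"a1": 3, "a2": 5, "b1": -1, "b2": 9}
moves = lambda s: tree.get(s, [])
score = lambda s: values.get(s, 0)

# The opponent minimises: "a" guarantees 3, "b" risks -1, so "a" is best.
best = max(tree["start"], key=lambda m: minimax(m, 1, False, moves, score))
```

A real chess engine applies the same recursion to millions of positions per second, with a far more elaborate scoring function, which is why Deep Blue needed dedicated hardware.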

Libratus, one of the two new poker programs, largely follows in this direction, requiring a supercomputer to run and to store the immense amount of data this approach needs to beat humans. Innovating on Deep Blue's strategies, Libratus adapts during the course of each game, based on the moves and possible mistakes made by its human opponents. It switches modes as the game's possibilities narrow, improving its accuracy and making the best use of the processing power it has available.


The other program, DeepStack, takes a more interesting approach, similar to the “deep learning” approach of Google's AlphaGo. For humans, deep learning would be the equivalent of looking at a picture of the sky above a grassy horizon and picking out that it is blue, has some white objects, and has a section of green at the bottom.

DeepStack identifies these “features” and connects them to each other by looking through moves labelled with good or bad outcomes. From these features, often hundreds of them, DeepStack adjusts a network built from this input towards an output of “moves to make”. Each time DeepStack gets it wrong, it adapts that network and how it detects features, until it is finding features so effectively that its predictions can outwit humans.
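The "adapt when it gets it wrong" loop is the core of how such networks learn. Below is a toy single-neuron version of that idea in Python, learning a simple AND-style rule; it is a deliberately minimal sketch of the general principle, not DeepStack's actual architecture or training code.

```python
# Toy sketch of "adjust the network each time it gets it wrong":
# a single artificial neuron learns weights for its input features.
# Illustrative only -- not DeepStack's real network.

def predict(weights, features):
    # Weighted sum of the features, thresholded to a yes/no decision.
    total = sum(w * f for w, f in zip(weights, features))
    return 1 if total > 0 else 0

def train(examples, rounds=20, rate=0.1):
    weights = [0.0, 0.0, 0.0]  # the last weight pairs with a constant bias feature
    for _ in range(rounds):
        for features, target in examples:
            error = target - predict(weights, features)
            # Every wrong prediction nudges the weights toward the target.
            weights = [w + rate * error * f
                       for w, f in zip(weights, features)]
    return weights

# Training data for an AND-like rule; the trailing 1.0 is the bias feature.
data = [([0, 0, 1.0], 0), ([0, 1, 1.0], 0),
        ([1, 0, 1.0], 0), ([1, 1, 1.0], 1)]
w = train(data)
```

DeepStack's networks stack many layers of such units, but the repeated error-driven weight adjustment is the same idea at every scale.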

One advantage of DeepStack over Libratus is that it can run on a laptop, as it does not store reams and reams of potential configurations and probabilities. Rather, DeepStack stores only a configuration for its “deep learning” features and a configuration for a “neural network” that lists potential moves and makes a final choice based on them.

These two successes are also noteworthy because poker involves a vast amount of “hidden data”, especially when betting amounts are unlimited. Known as an “imperfect-information” game, poker represents our world far better than perfect-information games (such as chess) do.

Perhaps one day we won't be assessed by a doctor who asks about our diet, but by an Artificial Intelligence that works out, when we tell it about our diet, how much we are lying, how that correlates with our weight, and how many previous cases similar to ours have been confirmed as diabetic. It seems AI is rapidly gaining the upper hand on us humans.

IMAGE: Wikimedia Commons
