Google DeepMind’s AlphaGo artificial-intelligence program has beaten South Korean Go player Lee Se-dol with three consecutive wins in a five-game tournament.
The closely watched contest between man and machine on Saturday will likely reinforce the view that AI has come into its own, with great potential not only in gaming but also in enterprise and other applications.
AlphaGo had previously beaten Lee, one of the world's top Go players, in games in Seoul, South Korea, on Wednesday and Thursday, leaving the “speechless” player to say after Thursday's round that there was not a moment when he felt he was leading. On Saturday, pressure on Lee appeared to build early in the game, with many online spectators predicting he would lose.
Go players take turns placing black or white pieces, called “stones,” on a 19-by-19 grid of lines, aiming to capture the opponent's stones by surrounding them and to enclose more empty space as territory.
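The capture rule described above can be sketched in a few lines of code. This is a toy illustration, not any Go engine's actual implementation: a group of connected same-colored stones is captured when it has no "liberties," that is, no empty points adjacent to the group.

```python
# Toy illustration of the Go capture rule: flood-fill a stone's group
# and count its liberties (adjacent empty points). A group with zero
# liberties has been fully surrounded and is captured.

SIZE = 19  # standard 19-by-19 board

def group_and_liberties(board, r, c):
    """Return (group, liberties) for the stone at (r, c).
    `board` maps (row, col) -> 'B' or 'W'; empty points are absent."""
    colour = board[(r, c)]
    group, liberties, stack = set(), set(), [(r, c)]
    while stack:
        pr, pc = stack.pop()
        if (pr, pc) in group:
            continue
        group.add((pr, pc))
        for nr, nc in ((pr - 1, pc), (pr + 1, pc), (pr, pc - 1), (pr, pc + 1)):
            if 0 <= nr < SIZE and 0 <= nc < SIZE:
                if (nr, nc) not in board:
                    liberties.add((nr, nc))       # empty neighbour: a liberty
                elif board[(nr, nc)] == colour:
                    stack.append((nr, nc))        # same colour: part of the group
    return group, liberties

def is_captured(board, r, c):
    """True when the group containing (r, c) has no liberties left."""
    return not group_and_liberties(board, r, c)[1]
```

For example, a lone black stone in the corner at (0, 0) is captured once white occupies its only two neighbours, (0, 1) and (1, 0).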
Google DeepMind has said the remaining two games of the tournament will still be played, on Sunday and Tuesday, to arrive at a final match score. By winning the tournament, AlphaGo has earned the US$1 million prize, which Google has promised to donate to charities.
The last high-profile wins for AI programs were IBM Deep Blue's chess victory over Garry Kasparov in 1997 and the 2011 Jeopardy quiz-show win by Watson, another computer from Big Blue. IBM went on to commercialize Watson's natural-language processing and machine learning for the analysis of unstructured data, and Google is also expected to commercialize its own AI technology more aggressively after the AlphaGo win.
AlphaGo started as a research project about two years ago to test whether a neural network using deep learning can understand and play Go, according to David Silver, one of the key researchers on the AlphaGo project. Google acquired British AI company DeepMind in 2014.
On Thursday, AlphaGo showed signs of what might be described as “creativity,” making one move that game commentators said was unlikely to have been played by a professional human player. The program uses its “policy network,” a model of expert human play in different situations, to suggest probable moves, but it may choose a different move when its “value network” evaluates the candidate positions at greater depth.
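The two-network idea can be sketched as follows. This is a hedged, toy stand-in for the architecture the article describes, not DeepMind's code: a "policy" function proposes a prior over moves, a "value" function scores positions more deeply, and the chosen move blends the two, so a strong value estimate can override the prior's favourite.

```python
# Toy sketch of combining a policy network's move priors with a value
# network's deeper position evaluations. Both functions below are
# illustrative stand-ins, not real neural networks.

def policy_prior(moves):
    """Toy 'policy network': a uniform prior over the legal moves."""
    return {m: 1.0 / len(moves) for m in moves}

def value_estimate(move):
    """Toy 'value network': a score in (0, 1]; here it simply prefers
    moves nearer the centre (9, 9) of a 19x19 board."""
    r, c = move
    return 1.0 / (1.0 + abs(r - 9) + abs(c - 9))

def select_move(moves, value_weight=0.5):
    """Pick the move maximizing a blend of prior probability and the
    deeper value estimate."""
    prior = policy_prior(moves)
    return max(moves, key=lambda m: (1 - value_weight) * prior[m]
                                    + value_weight * value_estimate(m))
```

With a uniform prior, the value term decides: given candidate moves at (0, 0), (3, 3), and (9, 9), the sketch picks the centre point (9, 9).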
“AlphaGo is playing a much more complicated game than it used to,” said Michael Redmond, commentator at the Seoul event and a professional Go player, during Saturday's game. He suggested the program had grown considerably stronger and more sophisticated since its 5-0 win in October against European Go champion Fan Hui.
Google DeepMind is likely aiming for a similar 5-0 sweep for AlphaGo against Lee in Seoul.