This is a three-part posting on an amazing event in history, which is unfolding right now. These posts are date-reversed, so they show up in order as you scroll through the blog. Please read this one first.
Google has created a giant cluster of computers and AI software, specialized to play the ancient Chinese strategy game of Go. Go was long considered too complex for computers to master, with recent estimates that it would take AI more than a decade to play it well.
Google's system, called AlphaGo, recently defeated the reigning European champion, Fan Hui, 5-0. Fan Hui is a 2-dan professional player, where 9-dan is the highest professional rank. An improved AlphaGo is now winning games against the current world champion, Korean Lee Sedol, a 9-dan player. The battle is still raging, but one thing is already clear: we have entered a new era in history.
Computer AI has evolved to master the most complex game known to man, defeating the best professional human players! This is not proof that computers are, or will be, smarter than humans at every task. However, it should give us pause.
There is a long history leading up to this historic moment.
Computers Playing Games against Humans
The first level of mastery is beating the creator, typically a scientist who is an amateur player. The next levels are beating an average person, and then an average amateur. The highest levels are beating professional players, culminating in the decisive defeat of one of the top professional players in the world.
For a few simpler games, it is sometimes possible to mathematically "solve" the game. This means mathematicians can prove how every possible game can play out, and sometimes prove that a player following the right strategy can win, or at least not lose, no matter what move the opponent plays at any point. It all started with a game that Americans call Tic-Tac-Toe.
Tic-Tac-Toe was mastered in 1952, by a program that could win or draw any game. Later, Tic-Tac-Toe was solved, proving that either player (moving first or second) can force at least a draw. Relative to other strategy games with perfect information (no random events like dice), Tic-Tac-Toe is simple, with roughly $3\cdot10^4$ to $3\cdot10^5$ possible games. The program and the proof both involved looking at every possible game (hence every possible move). In fact, this idea of building a search tree of possible moves was the foundation of all successful approaches, until AlphaGo!
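To make the search-tree idea concrete, here is a minimal Python sketch (an illustration, not the 1952 program) that exhaustively explores every legal Tic-Tac-Toe continuation using the classic minimax rule; running it confirms that the empty board is a forced draw.

```python
# Exhaustive game-tree search for Tic-Tac-Toe: recursively try every legal
# move, score the finished games, and back the scores up the tree.
# A board is a tuple of 9 cells, each 'X', 'O', or ' '.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of the position for `player`, who is about to move:
    +1 = forced win, 0 = forced draw, -1 = forced loss."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1   # the previous move ended the game
    if ' ' not in board:
        return 0                          # board full, nobody won: a draw
    opponent = 'O' if player == 'X' else 'X'
    # Try every empty square; the opponent's best reply is our worst case.
    return max(-minimax(board[:i] + (player,) + board[i + 1:], opponent)
               for i, cell in enumerate(board) if cell == ' ')

if __name__ == "__main__":
    print(minimax((' ',) * 9, 'X'))   # prints 0: perfect play ends in a draw
```

Running every branch to the end like this is exactly what it means to "solve" a game of this size; the trouble, as the rest of this post shows, is that the same brute-force idea stops being feasible as the games get bigger.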
Next was American Checkers, in 1996. Again, a computer system examines a board state, computes the legal next moves, then the legal counters to all those moves, the counters to the counters, and so on. This tree grows exponentially, so computers stop searching at a certain depth, measured in "plies" (one ply is a single move by one player). The computer program Chinook won the USA national tournament in 1996, by the widest margin ever. In 2007, Checkers was (weakly) solved by exploring a search space of $5\cdot10^{20}$ positions, taking computers 18 years and about $10^{14}$ calculations. Chinook was then combined with this humongous database of solved positions. So with "perfect play", all Checkers games must end in a draw!
Chess was convincingly mastered by 2005, but the work goes back to 1956. Chess is more complex than Checkers, and more widely played too. Chess is estimated to have between $10^{43}$ and $10^{50}$ legal positions! Chess is too complex to fully solve; only simplified cases, such as endgames with a handful of pieces, have been solved. Also, the search tree grows too quickly for brute-force computation. Finally, there is a long history of initial move sequences, or "openings", which can affect play many, many moves later. For a computer to play chess well, the program must consider openings as well as limited tree searches. Since the tree search stops before completion, the computer must choose which lines to explore more deeply and which to abandon. This leads to an important concept, the "evaluation function": a formula that scores a given board position, to judge whether one position is better (more likely to win) than another.
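To show how a depth-limited search and an evaluation function fit together, here is a rough Python sketch. It uses the third-party python-chess library only for legal-move generation; the material-only piece values, the mate score, and the 3-ply depth are illustrative assumptions, nothing like what a real engine uses.

```python
# A toy depth-limited "negamax" search: look a few plies ahead, then hand the
# position to a (very crude) evaluation function instead of searching deeper.
# Requires the third-party python-chess package (pip install chess).
import chess

# Assumed classic material values; real evaluation functions are far richer.
PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board):
    """Score the position for the side to move, by raw material balance only."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def negamax(board, depth):
    """Search `depth` plies ahead; at the cutoff, guess with evaluate()."""
    if board.is_checkmate():
        return -1000                      # the side to move has been mated
    if board.is_stalemate() or board.is_insufficient_material():
        return 0
    if depth == 0:
        return evaluate(board)            # tree too deep to finish: estimate
    best = -float("inf")
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1))  # their gain is our loss
        board.pop()
    return best

def best_move(board, depth=3):
    """Pick the legal move with the highest searched score."""
    best, best_score = None, -float("inf")
    for move in board.legal_moves:
        board.push(move)
        score = -negamax(board, depth - 1)
        board.pop()
        if score > best_score:
            best, best_score = move, score
    return best

if __name__ == "__main__":
    print(best_move(chess.Board()))   # first move that maximizes this crude search
```

Real chess programs layer an opening book, far deeper selective search, and pruning tricks such as alpha-beta on top of exactly this skeleton.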
Which brings us to Go. While Chess has different types of pieces with different movements, Go looks simpler, with only one type of move: placing a stone on a grid. However, first appearances can be deceiving. Chess has a maximum game-tree complexity (not counting rotations of the board and other symmetries) of roughly $10^{123}$, with a theoretical average game length of 80 plies. Go has a maximum game-tree complexity of roughly $10^{360}$, with a theoretical average of 150 plies.
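For what it's worth, those figures come from a standard back-of-the-envelope estimate: raise the average number of legal moves per position (roughly 35 for Chess, roughly 250 for Go) to the power of the typical game length in plies:

$$\text{Chess: } 35^{80} \approx 10^{123}, \qquad \text{Go: } 250^{150} \approx 10^{360}.$$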
These numbers are too huge for our brains, so consider this instead. Chess has 20 possible first moves and then 20 replies to each of those, or 400 combinations after the first two plies, and pros complete a game in roughly 40 moves on average (which is 80 plies). Go has $19^2$ possible first moves and $19^2-1$ second moves, or about 130,000 combinations after the first two moves, and pros complete a game in over 200 moves on average. It's a completely different beast! This is why most scientists thought that computers could not beat human Go champions for at least another decade or two. As Google proved this month, those scientists were wrong, so wrong.