On Wednesday afternoon in the South Korean capital, Seoul, Lee Se-dol, the 33-year-old master of the ancient Asian board game Go, will sit down to defend humanity.
On the other side of the table will be his opponent: AlphaGo, a program built by Google subsidiary DeepMind which became, in October, the first machine to beat a professional human Go player, the European champion Fan Hui. That match proved that AlphaGo could hold its own against the best; this one will demonstrate whether “the best” have to relinquish that title entirely.
Lee, who is regularly ranked among the top three players alive, has been a Go professional for 21 years; AlphaGo won its first such match less than 21 weeks ago. Despite that, the computer has already played more games of Go than Lee could hope to fit in his life if he lived to a hundred, and it’s good. Very good.
At the press conference confirming the details of the match, Lee exuded confidence. “I don’t think it will be a very close match,” he told the assembled crowd with a sheepish grin. “I believe it will be 5–0, or maybe 4–1. So the critical point for me will be to not lose one match.”
DeepMind thinks otherwise. The company was founded by Demis Hassabis, a 39-year-old Brit who started the artificial intelligence (AI) research firm after a varied career taking in a neuroscience PhD, blockbuster video game development, and master-level chess – and he puts its chances of winning the match at around 50–50.
Clearly, one of them is wrong. Either Lee has vastly overestimated his chances against a new breed of AI, or Hassabis and company still don’t understand quite how powerful a player they are up against. But the answer to that, revealed over the course of five matches throughout the week, will have ramifications far beyond the world of Go.
The ancient Asian game of Go
On the surface, Go looks simple. Compared with chess – which has six different types of pieces, each with different movement rules, and fiddly additions such as castling and promotion – a Go board is the height of elegance.
Each player takes it in turns placing stones of their colour on a 19-by-19 board, attempting to surround and thus capture their opponent’s pieces. The player who has taken the most territory, by surrounding or occupying it with their own stones, at the end of the game is the winner.
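For readers who want to see the capture rule in something concrete, here is a minimal sketch (illustrative only, and nothing to do with DeepMind’s own code) of the mechanic just described: a group of stones is removed when it has no empty neighbouring points, known as liberties, left. The board representation and function names are invented for the example.

```python
SIZE = 19
EMPTY, BLACK, WHITE = 0, 1, 2

def neighbours(x, y):
    """Orthogonally adjacent points that lie on the board."""
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < SIZE and 0 <= ny < SIZE:
            yield nx, ny

def group_and_liberties(board, x, y):
    """Flood-fill the group containing (x, y); return its stones and liberty count."""
    colour = board[y][x]
    group, liberties, frontier = {(x, y)}, set(), [(x, y)]
    while frontier:
        cx, cy = frontier.pop()
        for nx, ny in neighbours(cx, cy):
            if board[ny][nx] == EMPTY:
                liberties.add((nx, ny))
            elif board[ny][nx] == colour and (nx, ny) not in group:
                group.add((nx, ny))
                frontier.append((nx, ny))
    return group, len(liberties)

def remove_captured(board, colour):
    """Remove every group of the given colour that has no liberties left."""
    captured, seen = 0, set()
    for y in range(SIZE):
        for x in range(SIZE):
            if board[y][x] == colour and (x, y) not in seen:
                group, libs = group_and_liberties(board, x, y)
                seen |= group
                if libs == 0:
                    for gx, gy in group:
                        board[gy][gx] = EMPTY
                    captured += len(group)
    return captured
```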
But the simplicity of the ruleset belies the astonishing complexity the game can produce. The first move of a game of chess offers 20 possibilities; the first move of a game of Go can place a stone on any of 361 points. A typical chess game lasts around 80 turns, while Go games run to around 150. That leads to a staggering number of possibilities: there are more legal board positions in Go than there are atoms in the observable universe.
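A quick back-of-the-envelope calculation shows why those numbers explode. The branching factors used below, roughly 35 candidate moves per turn in chess and 250 in Go, are the commonly cited averages rather than figures from DeepMind, but they make the scale vivid:

```python
# Rough game-tree sizes: average branching factor raised to a typical game length.
chess_tree = 35 ** 80      # ~10^123 possible chess continuations
go_tree = 250 ** 150       # ~10^359 possible Go continuations
atoms_in_universe = 10 ** 80

print(f"chess game tree ~ 10^{len(str(chess_tree)) - 1}")
print(f"go game tree    ~ 10^{len(str(go_tree)) - 1}")
print(f"go tree / atoms ~ 10^{len(str(go_tree // atoms_in_universe)) - 1}")
```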
And so both chess and Go are resistant to the tactic by which simpler games, such as noughts and crosses or draughts (tic-tac-toe and checkers, to Americans), have been “solved”: enumerating every possible move and establishing how a computer can guarantee at least a draw. Each game is just too complex.
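For a sense of what “solving” a game by enumeration looks like, here is a minimal sketch of minimax search applied to noughts and crosses. It is exactly this brute-force idea that stops working once the tree grows to chess or Go proportions:

```python
# Exhaustive minimax for noughts and crosses: walk every reachable position
# and return the best guaranteed outcome for the player to move
# (+1 win, 0 draw, -1 loss).
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best achievable score for `player` ('X' or 'O') from this position."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if " " not in board:
        return 0  # draw
    opponent = "O" if player == "X" else "X"
    scores = []
    for i, cell in enumerate(board):
        if cell == " ":
            board[i] = player
            scores.append(-minimax(board, opponent))  # opponent's best is our worst
            board[i] = " "
    return max(scores)

# From an empty board, perfect play is a draw:
print(minimax([" "] * 9, "X"))   # -> 0
```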
Chess computers can at least rely on a modified version of the same tactic. Such machines, including Deep Blue – the computer made by IBM which beat grandmaster Garry Kasparov in 1997, ushering in an age of dominance by computers in chess – rely on calculating and then judging the value of vast numbers of possible moves. Deep Blue, for instance, could evaluate 200m positions a second. Those machines play by looking into the future, to find the set of moves that will lead them to the strongest position, and then playing them out step by step.
That tactic doesn’t work for Go. Partly, that’s because of one further complication in the game: the immense difficulty of actually evaluating a move. A chess player can easily look at a board and see who is in the stronger position, often simply by counting the number of pieces on the board held by each player.
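The chess heuristic is simple enough to sketch in a few lines. The piece values below are the conventional textbook ones, not anything specific to Deep Blue, and the position is a toy example:

```python
# Static evaluation by counting material, the kind of crude-but-useful
# heuristic a chess engine can lean on.
PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

def material_balance(board_rows):
    """Positive score favours White; lowercase letters are Black's pieces."""
    score = 0
    for row in board_rows:
        for square in row:
            value = PIECE_VALUES.get(square.lower())
            if value is not None:
                score += value if square.isupper() else -value
    return score

# White has an extra queen in this toy position (dots are empty squares):
position = [
    "r...k...",
    "........",
    "........",
    "........",
    "........",
    "........",
    "........",
    "R..QK...",
]
print(material_balance(position))   # -> 9
```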
In Go, such an approach was long thought impossible. And even if that problem could be solved, the sheer scale of the game meant that exhaustively searching through every possible move left the machine far from competitive with even a weak human player. As a result, as recently as 2014, a leading developer of Go software estimated it would be a decade before a machine could beat a professional player.
In fact, it was less than a year.
‘Deep reinforcement learning’
DeepMind approached the problem by seeing whether the company could teach a neural network to play Go. The technology, which began with attempts to mimic the way the human brain interprets and processes information, is at the heart of DeepMind’s AI research, and lends itself well to what Hassabis, speaking on the eve of his trip to Seoul to oversee the competition, calls “deep reinforcement learning”.
“It’s the combination of deep learning, neural network stuff, with reinforcement learning,” he explains. “Learning by trial and error, incrementally improving, and learning from your mistakes.”
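Stripped to its essentials, “learning by trial and error” can be shown on a toy problem. The sketch below applies a REINFORCE-style policy-gradient update to a three-armed bandit; AlphaGo’s networks are vastly larger, but the principle, sample an action, observe a reward, and nudge the policy towards what worked, is the same. The reward values and learning rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
true_rewards = np.array([0.2, 0.5, 0.8])   # hidden payoff of each action
logits = np.zeros(3)                       # the "policy" being learned
learning_rate = 0.1

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(2000):
    probs = softmax(logits)
    action = rng.choice(3, p=probs)
    reward = rng.normal(true_rewards[action], 0.1)   # noisy trial
    # Policy-gradient update: raise the log-probability of the chosen
    # action in proportion to the reward it produced.
    grad = -probs
    grad[action] += 1.0
    logits += learning_rate * reward * grad

print(softmax(logits))   # probability mass shifts towards the best arm
```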
DeepMind had already used the technique successfully when it built a system capable of learning how to play old Atari video games. But thought rapidly turned to a greater challenge, and one which had for a long time represented a holy grail of AI research. Just two months after the Atari research was published, the team got its initial results on the Go project, Hassabis says. “Then we felt, when we assessed it, that if we put a serious team on to it we could make some pretty fast progress.”
The idea of applying neural networks to solve tricky problems in AI isn’t confined to DeepMind, but the technology is notoriously tricky to refine. Hassabis likens it to teaching a child, rather than programming a computer: even if the team knows what needs to be changed, they can’t simply add a line of code. Instead, they need to show the software enough examples of correct behaviour for it to draw its own inferences.
But DeepMind did hit upon a few genuine breakthroughs. “The big jump was the discovery of the value network, which was last summer,” Hassabis says. That was the realisation that a finely tuned neural network could solve one of the problems previously thought impossible, and learn to predict the winner of a game by looking at the board.
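In outline, a value network is just a trained function from a board position to an estimated probability of winning. The sketch below uses a single logistic layer over the flattened 19x19 board as a stand-in for the deep network described in the Nature paper; the class and parameter names are invented for the example.

```python
import numpy as np

BOARD_POINTS = 19 * 19

class TinyValueNet:
    def __init__(self, rng):
        self.weights = rng.normal(0, 0.01, BOARD_POINTS)
        self.bias = 0.0

    def predict(self, board):
        """board: array of 361 values (+1 own stone, -1 opponent, 0 empty)."""
        logit = board @ self.weights + self.bias
        return 1.0 / (1.0 + np.exp(-logit))   # estimated win probability

    def train_step(self, board, won, lr=0.01):
        """One gradient-descent step on log loss against the actual result."""
        p = self.predict(board)
        error = p - (1.0 if won else 0.0)
        self.weights -= lr * error * board
        self.bias -= lr * error

rng = np.random.default_rng(1)
net = TinyValueNet(rng)
board = rng.choice([-1.0, 0.0, 1.0], size=BOARD_POINTS)
print(net.predict(board))   # ~0.5 before training; sharper after many games
```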
From there, progress was rapid. The value network works alongside a second neural network, the policy network: the policy network picks a handful of promising moves (based on similar plays seen in previous matches), and the value network estimates which of the resulting board states would be strongest for the AlphaGo player.
The policy network was trained on thousands of matches played by Go professionals, with the aim of predicting where they would play next. It picked the professional’s move 57% of the time, allowing it to very quickly reach a level of competency near that of the best humans.
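That 57% figure is a straightforward measurement: show the network positions from professional games and count how often its top-rated move matches the move actually played. A rough sketch, with a simple linear stand-in for the real convolutional network:

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def top_move_accuracy(policy_fn, positions, expert_moves):
    """Fraction of expert positions where the policy's best move matches."""
    hits = sum(
        int(np.argmax(policy_fn(pos)) == move)
        for pos, move in zip(positions, expert_moves)
    )
    return hits / len(expert_moves)

# Illustrative stand-in policy: a linear map from the 361-point board to
# a probability distribution over the 361 possible moves.
rng = np.random.default_rng(2)
W = rng.normal(0, 0.01, (361, 361))
policy = lambda board: softmax(W @ board)

positions = [rng.choice([-1.0, 0.0, 1.0], size=361) for _ in range(100)]
expert_moves = rng.integers(0, 361, size=100)
print(top_move_accuracy(policy, positions, expert_moves))  # near chance here
```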
The policy network on its own is good enough, according to DeepMind, to beat every other Go program available. But it’s when the two neural networks work in concert that AlphaGo really shines. Meanwhile, a third tool, called Monte Carlo tree search, helps the system play strategically as well as tactically.
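How the pieces fit together can be sketched compactly. The search below is a simplified cousin of the one described in DeepMind’s paper, not a reproduction of it: the policy network supplies prior probabilities over candidate moves, the value network scores leaf positions, and the tree search repeatedly descends towards moves that balance estimated value against unexplored promise. The helpers policy_net, value_net and apply_move are toy stand-ins.

```python
import math

class Node:
    def __init__(self, prior):
        self.prior = prior        # P(move) supplied by the policy network
        self.visits = 0
        self.total_value = 0.0
        self.children = {}        # move -> Node

    def value(self):
        return self.total_value / self.visits if self.visits else 0.0

def select_move(node, exploration=1.0):
    """Pick the child maximising value plus a prior-weighted exploration bonus."""
    total_visits = sum(child.visits for child in node.children.values())
    def score(item):
        _, child = item
        bonus = exploration * child.prior * math.sqrt(total_visits + 1) / (1 + child.visits)
        return child.value() + bonus
    return max(node.children.items(), key=score)

def simulate(root, board, policy_net, value_net, apply_move):
    """One search pass: descend the tree, expand a leaf, back up its value.
    (Flipping the value's sign between the two players is omitted for brevity.)"""
    node, path = root, [root]
    while node.children:
        move, node = select_move(node)
        board = apply_move(board, move)
        path.append(node)
    for move, prior in policy_net(board):   # expand the leaf with policy priors
        node.children[move] = Node(prior)
    leaf_value = value_net(board)           # value network scores the position
    for visited in path:
        visited.visits += 1
        visited.total_value += leaf_value

# Toy stand-ins so the sketch runs end to end (illustrative only).
def policy_net(board):
    return [(move, 1 / 3) for move in range(3)]

def value_net(board):
    return 0.5

def apply_move(board, move):
    return board + [move]

root = Node(prior=1.0)
for _ in range(100):
    simulate(root, [], policy_net, value_net, apply_move)
print({move: child.visits for move, child in root.children.items()})
```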
Lee’s confidence, Hassabis suggests, comes from the fact that he hasn’t seen AlphaGo’s most recent progress. “He’s very confident, because he looked at the Fan Hui version” that played in October. “And clearly, if we were to play that, he would thrash it.
“I think he’s basing it off that, plus some approximation of how much it might have improved … All I can say is that our tests are leading us to believe that we have a good chance.” As for Lee’s trash talk, Hassabis counters in his own style. “I would be very disappointed if we didn’t win a game – put it that way.”
Could this be the watershed moment for artificial intelligence?
If DeepMind does win the match, it will be a watershed moment for AI with only one genuine precedent: Deep Blue’s victory over Kasparov in 1997. Hassabis’ chess days were over by then, but he followed the match as closely as he could – given that it fell weeks before his computer science finals at Cambridge (he graduated with a double first).
He recalls being surprised by Deep Blue’s success. “He [Kasparov] would say this himself, of course, but I think he was probably at that stage still slightly stronger than Deep Blue. As we know now it was just a matter of time, but at that stage it still wasn’t clear.”
That match was won with the slightest of margins, though Deep Blue’s occasionally erratic play style led to controversy, with Kasparov publicly accusing IBM of cheating in the match. It’s a conflict DeepMind is eager to avoid, and part of the reason the team published its ground-breaking Nature paper, detailing the inner workings of AlphaGo, in advance.
“If you wanted to, with enough effort you could probably recreate AlphaGo from that paper in about a year, if you put enough people on it,” Hassabis says. “Whereas IBM didn’t publish the paper for another five to ten years afterwards, and then they dismantled Deep Blue. So they did a few things that didn’t help, that fuelled the paranoia.”
What impact would a DeepMind victory have?
The chess world has had two decades to live with the fallout of Deep Blue’s victory over Kasparov. But Frederic Friedel, a computer chess pioneer and the founder of the news site ChessBase, argues that it’s possible to overstate the effect the victory had. “AlphaGo winning won’t change the world of Go. It’s like you’ve built a bicycle or a car that can go faster than Usain Bolt, and you say: ‘Look at how fast it is!’ Does this mean the world ends for athletics? No, it doesn’t.”
Friedel, who first met Hassabis as “a cocky little kid who came for a dinner with Garry [Kasparov] and myself in London, and told us about some software he was developing”, does have a warning for Go players, though. “The advent of bicycles and motorbikes did not make athletes give up in despair: they just went on racing each other without these machines. But there is a grave difference to the chess analogy: a 200-metre runner cannot secretly use the assistance of a bicycle, but a chess player can most certainly get his moves surreptitiously from a computer.
“Cheating in chess is becoming a serious problem, and it will become more acute as technology progresses. That will change the game dramatically – not the fact that computers are stronger than humans.”
Thinking about what comes after the match is one step too far for Hassabis and DeepMind, who are focusing everything they have on the matches ahead. If they win, attention will probably turn to cleaning up AlphaGo in preparation for a consumer release, and Hassabis hopes that a highly skilled Go program could be an important step in popularising the game in the west, where would-be stars are often hampered by the lack of opponents to test their mettle against.
And the company has already turned its attention to other, more practical, problems which can be tackled with the same deep reinforcement learning approach that led to AlphaGo. In the short term, that means helping parent company Google with tricky challenges like voice and image recognition, while in the next five or ten years, Hassabis says, “ultimately we want to apply these techniques in important real-world problems, from medical diagnostics to climate modelling”.
But if AlphaGo wins its match against Lee Se-dol, it will mean much more than just a stepping stone in DeepMind’s own progress. One of the last areas of mental competition in which humanity had an advantage over machines will have been vanquished. If you still think you’re better than an AI, now is the time to think again.